public inbox for linux-xfs@vger.kernel.org
From: Yann Dupont <Yann.Dupont@univ-nantes.fr>
To: Yann Dupont <Yann.Dupont@univ-nantes.fr>
Cc: stan@hardwarefreak.com, xfs@oss.sgi.com
Subject: Re: Bad performance with XFS + 2.6.38 / 2.6.39
Date: Thu, 22 Dec 2011 12:02:53 +0100	[thread overview]
Message-ID: <4EF30E5D.7060608@univ-nantes.fr> (raw)
In-Reply-To: <4EF2F702.4050902@univ-nantes.fr>

On 22/12/2011 10:23, Yann Dupont wrote:
>
>> Can you run a block trace on both kernels (for say five minutes)
>> when the load differential is showing up and provide that to us so
>> we can see how the IO patterns are differing?


here we go.

1st server: Birnie, running 2.6.26. This is normally the more heavily
loaded server (more active users).

2nd server: Penderyn, running a freshly compiled 3.1.6.


blktrace of the relevant volumes over 10 minutes. The 2 machines are
identical (PowerEdge M1610): same memory & processors, disks, fibre channel
cards, SAN disks ...

birnie:~/TRACE# uptime
  11:48:34 up 17:18,  3 users,  load average: 0.04, 0.18, 0.23

penderyn:~/TRACE# uptime
  11:48:30 up 23 min,  3 users,  load average: 4.03, 3.82, 3.21

As you can see, a very noticeable load difference. Keep in mind my
university is on holiday right now, so the load is really _very much
lower_ than usual. In normal times, with 2.6.26 kernels, birnie has a
load in the 2..6 range.

Here are the results:


birnie:~/TRACE# blktrace /dev/gromelac/gromelac /dev/POMEROL-R0-P0/gromeldi -w 600
=== dm-18 ===
   CPU  0:                26787 events,     1256 KiB data
   CPU  1:                  530 events,       25 KiB data
   CPU  2:                 1811 events,       85 KiB data
   CPU  3:                  104 events,        5 KiB data
   CPU  4:                 5824 events,      274 KiB data
   CPU  5:                  146 events,        7 KiB data
   CPU  6:                 1958 events,       92 KiB data
   CPU  7:                  176 events,        9 KiB data
   CPU  8:                 5456 events,      256 KiB data
   CPU  9:                  175 events,        9 KiB data
   CPU 10:                 1161 events,       55 KiB data
   CPU 11:                  216 events,       11 KiB data
   CPU 12:                  118 events,        6 KiB data
   CPU 13:                   25 events,        2 KiB data
   CPU 14:                  287 events,       14 KiB data
   CPU 15:                  425 events,       20 KiB data
   Total:                 45199 events (dropped 0),     2119 KiB data
=== dm-16 ===
   CPU  0:                27966 events,     1311 KiB data
   CPU  1:                  311 events,       15 KiB data
   CPU  2:                 1403 events,       66 KiB data
   CPU  3:                 1699 events,       80 KiB data
   CPU  4:                 1706 events,       80 KiB data
   CPU  5:                 1515 events,       72 KiB data
   CPU  6:                   30 events,        2 KiB data
   CPU  7:                  428 events,       21 KiB data
   CPU  8:                 6774 events,      318 KiB data
   CPU  9:                  252 events,       12 KiB data
   CPU 10:                 1299 events,       61 KiB data
   CPU 11:                 1391 events,       66 KiB data
   CPU 12:                  111 events,        6 KiB data
   CPU 13:                 2317 events,      109 KiB data
   CPU 14:                  130 events,        7 KiB data
   CPU 15:                  504 events,       24 KiB data
   Total:                 47836 events (dropped 0),     2243 KiB data


and

penderyn:~/TRACE# blktrace /dev/gromeljo/gromeljo /dev/gromelpz/gromelpz /dev/POMEROL-R1-P0/gromelpz -w 600
=== dm-14 ===
   CPU  0:                12672 events,      595 KiB data
   CPU  1:                13248 events,      621 KiB data
   CPU  2:                  545 events,       26 KiB data
   CPU  3:                  285 events,       14 KiB data
   CPU  4:                  574 events,       27 KiB data
   CPU  5:                   94 events,        5 KiB data
   CPU  6:                  569 events,       27 KiB data
   CPU  7:                  172 events,        9 KiB data
   CPU  8:                  666 events,       32 KiB data
   CPU  9:                 3231 events,      152 KiB data
   CPU 10:                  610 events,       29 KiB data
   CPU 11:                  221 events,       11 KiB data
   CPU 12:                   11 events,        1 KiB data
   CPU 13:                   20 events,        1 KiB data
   CPU 14:                    6 events,        1 KiB data
   CPU 15:                   30 events,        2 KiB data
   Total:                 32954 events (dropped 0),     1545 KiB data
=== dm-13 ===
   CPU  0:                    0 events,        0 KiB data
   CPU  1:                    0 events,        0 KiB data
   CPU  2:                    1 events,        1 KiB data
   CPU  3:                    0 events,        0 KiB data
   CPU  4:                    0 events,        0 KiB data
   CPU  5:                    0 events,        0 KiB data
   CPU  6:                    0 events,        0 KiB data
   CPU  7:                    0 events,        0 KiB data
   CPU  8:                    0 events,        0 KiB data
   CPU  9:                    0 events,        0 KiB data
   CPU 10:                    0 events,        0 KiB data
   CPU 11:                    0 events,        0 KiB data
   CPU 12:                    0 events,        0 KiB data
   CPU 13:                    0 events,        0 KiB data
   CPU 14:                    0 events,        0 KiB data
   CPU 15:                    0 events,        0 KiB data
   Total:                     1 events (dropped 0),        1 KiB data
=== dm-16 ===
   CPU  0:                17499 events,      821 KiB data
   CPU  1:                15320 events,      719 KiB data
   CPU  2:                 1037 events,       49 KiB data
   CPU  3:                  667 events,       32 KiB data
   CPU  4:                  278 events,       14 KiB data
   CPU  5:                   91 events,        5 KiB data
   CPU  6:                  888 events,       42 KiB data
   CPU  7:                   67 events,        4 KiB data
   CPU  8:                 2317 events,      109 KiB data
   CPU  9:                 3662 events,      172 KiB data
   CPU 10:                 1756 events,       83 KiB data
   CPU 11:                  801 events,       38 KiB data
   CPU 12:                   20 events,        1 KiB data
   CPU 13:                  618 events,       29 KiB data
   CPU 14:                    3 events,        1 KiB data
   CPU 15:                   18 events,        1 KiB data
   Total:                 45042 events (dropped 0),     2112 KiB data



And the blktrace files are here (available for five days):

http://filex.univ-nantes.fr/get?k=RDxGitXYOf4HKHd7Tan
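As a quick sanity check on the per-device summaries pasted above, the per-CPU event counts can be summed and compared against the reported total with a short throwaway script (just a sketch for checking the pasted text, nothing to do with blktrace itself; the sample below is a two-CPU excerpt in the same format, not the real dm-18 data):

```python
import re

# Excerpt in the same format as the blktrace per-device summary above
summary = """\
   CPU  0:                26787 events,     1256 KiB data
   CPU  1:                  530 events,       25 KiB data
   Total:                 27317 events (dropped 0),     1302 KiB data
"""

def check_totals(text):
    # Sum the per-CPU event counts ...
    cpu_events = sum(int(m.group(1))
                     for m in re.finditer(r"CPU\s+\d+:\s+(\d+) events", text))
    # ... and compare against the Total line
    total = int(re.search(r"Total:\s+(\d+) events", text).group(1))
    return cpu_events == total

print(check_totals(summary))  # True: 26787 + 530 == 27317
```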

Hope this can be helpful,
Thanks,
-- 
Yann Dupont - Service IRTS, DSI Université de Nantes
Tel : 02.53.48.49.20 - Mail/Jabber : Yann.Dupont@univ-nantes.fr

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 20+ messages
2011-12-11 12:45 Bad performance with XFS + 2.6.38 / 2.6.39 Xupeng Yun
2011-12-11 23:39 ` Dave Chinner
2011-12-12  0:40   ` Xupeng Yun
2011-12-12  1:00     ` Dave Chinner
2011-12-12  2:00       ` Xupeng Yun
2011-12-12 13:57         ` Christoph Hellwig
2011-12-21  9:08         ` Yann Dupont
2011-12-21 15:10           ` Stan Hoeppner
2011-12-21 17:56             ` Yann Dupont
2011-12-21 22:26               ` Dave Chinner
2011-12-22  9:23                 ` Yann Dupont
2011-12-22 11:02                   ` Yann Dupont [this message]
2012-01-02 10:06                     ` Yann Dupont
2012-01-02 16:08                       ` Peter Grandi
2012-01-02 18:02                         ` Peter Grandi
2012-01-04 10:54                         ` Yann Dupont
2012-01-02 20:35                       ` Dave Chinner
2012-01-03  8:20                         ` Yann Dupont
2012-01-04 12:33                           ` Christoph Hellwig
2012-01-04 13:06                             ` Yann Dupont
