linux-ext4.vger.kernel.org archive mirror
From: Kay Diederichs <kay.diederichs@uni-konstanz.de>
To: Ted Ts'o <tytso@mit.edu>
Cc: Dave Chinner <david@fromorbit.com>,
	Ext4 Developers List <linux-ext4@vger.kernel.org>,
	linux <linux-kernel@vger.kernel.org>,
	Karsten Schaefer <karsten.schaefer@uni-konstanz.de>
Subject: Re: ext4 performance regression 2.6.27-stable versus 2.6.32 and later
Date: Wed, 04 Aug 2010 10:18:03 +0200
Message-ID: <4C59223B.1080106@uni-konstanz.de>
In-Reply-To: <20100802202123.GC25653@thunk.org>


On 02.08.2010 22:21, Ted Ts'o wrote:
> On Mon, Aug 02, 2010 at 05:30:03PM +0200, Kay Diederichs wrote:
>>
>> we pared down the benchmark to the last step (called "run xds_par in nfs
>> directory (reads 600M, and writes 50M)") because this captures most of
>> the problem. Here we report kernel messages with stacktrace, and the
>> blktrace output that you requested.
>
> Thanks, I'll take a look at it.
>
> Is NFS required to reproduce the problem?  If you simply copy the 100
> files using rsync, or cp -r while logged onto the server, do you
> notice the performance regression?
>
> Thanks, regards,
>
> 						- Ted

Ted,

We've run the benchmarks directly on the file server; it turns out 
that NFS is not required to reproduce the problem.

We also took the opportunity to try 2.6.32.17, which just came out. 
2.6.32.17 behaves similarly to patched 2.6.32.16 (i.e. with 
"ext4: Avoid group preallocation for closed files" reverted); 2.6.32.17 
carries quite a few ext4 patches, so one or more of them seem to have 
the same effect as reverting "ext4: Avoid group preallocation for 
closed files".

These are the times for the second (and later) benchmark runs; the 
first run is always slower. The last step ("run xds_par") is slower than 
in the NFS case because it is heavy in CPU usage (total CPU time is more 
than 200 seconds); the NFS client is an 8-core (+HT) Nehalem-type 
machine, whereas the NFS server is just a 2-core Pentium D @ 3.40GHz.

Local machine: turn5 2.6.27.48 i686
Raid5: /dev/md5 /mnt/md5 ext4dev rw,noatime,barrier=1,stripe=512,data=writeback 0 0
  32 seconds for preparations
  19 seconds to rsync 100 frames with 597M from raid5,ext4 directory
  17 seconds to rsync 100 frames with 595M to raid5,ext4 directory
  36 seconds to untar 24353 kernel files with 323M to raid5,ext4 directory
  31 seconds to rsync 24353 kernel files with 323M from raid5,ext4 directory
267 seconds to run xds_par in raid5,ext4 directory
427 seconds to run the script

Local machine: turn5 2.6.32.16 i686  (vanilla, i.e. not patched)
Raid5: /dev/md5 /mnt/md5 ext4 rw,seclabel,noatime,barrier=0,stripe=512,data=writeback 0 0
  36 seconds for preparations
  18 seconds to rsync 100 frames with 597M from raid5,ext4 directory
  33 seconds to rsync 100 frames with 595M to raid5,ext4 directory
  68 seconds to untar 24353 kernel files with 323M to raid5,ext4 directory
  40 seconds to rsync 24353 kernel files with 323M from raid5,ext4 directory
489 seconds to run xds_par in raid5,ext4 directory
714 seconds to run the script

Local machine: turn5 2.6.32.17 i686
Raid5: /dev/md5 /mnt/md5 ext4 rw,seclabel,noatime,barrier=0,stripe=512,data=writeback 0 0
  38 seconds for preparations
  18 seconds to rsync 100 frames with 597M from raid5,ext4 directory
  33 seconds to rsync 100 frames with 595M to raid5,ext4 directory
  67 seconds to untar 24353 kernel files with 323M to raid5,ext4 directory
  41 seconds to rsync 24353 kernel files with 323M from raid5,ext4 directory
266 seconds to run xds_par in raid5,ext4 directory
492 seconds to run the script
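For reference, the per-step timings above can be produced by a simple
wall-clock wrapper around each workload. The following is a minimal
sketch of such a harness, not the actual benchmark script; the workload
here (creating and tarring small files) is a placeholder standing in for
the real rsync/untar/xds_par steps, and all paths are assumptions:

```shell
#!/bin/sh
# Minimal per-step timing harness; workloads and paths are placeholders.
set -e
WORK=$(mktemp -d)
mkdir "$WORK/data"

# step "description" command [args...] -> prints "NN seconds to description"
step() {
    desc=$1; shift
    start=$(date +%s)
    "$@" >/dev/null 2>&1
    end=$(date +%s)
    printf '%4d seconds to %s\n' $((end - start)) "$desc"
}

# Placeholder workload standing in for the real benchmark steps.
make_files() {
    for i in $(seq 1 100); do
        dd if=/dev/zero of="$WORK/data/f$i" bs=1k count=10 2>/dev/null
    done
}

step "create 100 small files" make_files
step "tar the files" tar -cf "$WORK/files.tar" -C "$WORK" data
rm -rf "$WORK"
```

Each step is timed independently, so a regression in one phase (e.g. the
untar step) shows up directly in the corresponding line of output.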

So even though the patches that went into 2.6.32.17 appear to fix the 
worst stalls, untarring and rsyncing kernel files is clearly still 
significantly slower on 2.6.32.17 than on 2.6.27.48.

HTH,

Kay



Thread overview: 15+ messages
2010-07-28 19:51 ext4 performance regression 2.6.27-stable versus 2.6.32 and later Kay Diederichs
2010-07-28 21:00 ` Greg Freemyer
2010-08-02 10:47   ` Kay Diederichs
2010-08-02 16:04     ` Henrique de Moraes Holschuh
2010-08-02 16:10       ` Henrique de Moraes Holschuh
2010-07-29 23:28 ` Dave Chinner
2010-08-02 14:52   ` Kay Diederichs
2010-08-02 16:12     ` Eric Sandeen
2010-08-02 21:08       ` Kay Diederichs
2010-08-03 13:31       ` Kay Diederichs
2010-07-30  2:20 ` Ted Ts'o
2010-07-30 21:01   ` Kay Diederichs
2010-08-01 23:02     ` Ted Ts'o
2010-08-02 15:28   ` Kay Diederichs
     [not found]   ` <4C56E47B.8080600@uni-konstanz.de>
     [not found]     ` <20100802202123.GC25653@thunk.org>
2010-08-04  8:18       ` Kay Diederichs [this message]
