From: Dave Chinner <david@fromorbit.com>
To: Matthew Wilcox <matthew@wil.cx>
Cc: Szabolcs Szakacsits <szaka@ntfs-3g.org>,
Andrew Morton <akpm@linux-foundation.org>,
linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
xfs@oss.sgi.com
Subject: Re: XFS vs Elevators (was Re: [PATCH RFC] nilfs2: continuous snapshotting file system)
Date: Fri, 22 Aug 2008 01:56:29 +1000
Message-ID: <20080821155629.GH5706@disturbed>
In-Reply-To: <20080821115310.GP8318@parisc-linux.org>
On Thu, Aug 21, 2008 at 05:53:10AM -0600, Matthew Wilcox wrote:
> On Thu, Aug 21, 2008 at 04:04:18PM +1000, Dave Chinner wrote:
> > One thing I just found out - my old *laptop* is 4-5x faster than the
> > 10krpm scsi disk behind an old cciss raid controller. I'm wondering
> > if the long delays in dispatch is caused by an interaction with CTQ
> > but I can't change it on the cciss raid controllers. Are you using
> > ctq/ncq on your machine? If so, can you reduce the depth to
> > something less than 4 and see what difference that makes?
>
> I don't think that's going to make a difference when using CFQ. I did
> some tests that showed that CFQ would never issue more than one IO at a
> time to a drive. This was using sixteen userspace threads, each doing a
> 4k direct I/O to the same location. When using noop, I would get 70k
> IOPS and when using CFQ I'd get around 40k IOPS.
That test isn't obviously the same sort of issue as the one here,
though. The traces clearly show multiple nested dispatches and
completions, so CTQ is definitely active...
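[ For reference, on hardware that exposes the knob the tag depth of a
SCSI device can usually be read and changed through sysfs, i.e.
/sys/block/<dev>/device/queue_depth, so something like
"echo 2 > /sys/block/sda/device/queue_depth" (sda being a placeholder)
should be enough to test a shallower queue before re-running the load.
cciss doesn't expose that file, which is why I can't do it here. ]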
Anyway, after an exercise roughly equivalent to pulling teeth
(finding the latest firmware for the machine in a format I could
actually apply), I upgraded the firmware throughout the machine
(disks, raid controller, system, etc.) and XFS is a *lot* faster.
In fact, it's mostly back to within a small margin of ext3.
run complete:
==========================================================================
                                     avg MB/s          user           sys
                           runs    xfs    ext3     xfs   ext3    xfs   ext3
initial create total         30   6.36    6.29    4.48   3.79   7.03   5.22
create total                  7   5.20    5.68    4.47   3.69   7.34   5.23
patch total                   6   4.53    5.87    2.26   1.96   6.27   4.86
compile total                 9  16.46    9.61    1.74   1.72   9.02   9.74
clean total                   4 478.50  553.22    0.09   0.06   0.92   0.70
read tree total               2  13.07   15.62    2.39   2.19   3.68   3.44
read compiled tree            1  53.94   60.91    2.57   2.71   7.35   7.27
delete tree total             3  15.94s   6.82s   1.38   1.06   4.10   1.49
delete compiled tree          1  24.07s   8.70s   1.58   1.18   5.56   2.30
stat tree total               5   3.30s   3.22s   1.09   1.07   0.61   0.53
stat compiled tree total      3   2.93s   3.85s   1.17   1.22   0.59   0.55

(Rows suffixed with "s" report elapsed seconds rather than MB/s.)
The blocktrace looks very regular, too. All the big bursts of
dispatch and completion are gone, as are the latencies on
log I/Os. It would appear that ext3 is not as sensitive to
concurrent I/O latency as XFS is...
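[ For anyone wanting to capture the same view, the standard pipeline
is something like "blktrace -d <dev> -o - | blkparse -i -". ]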
At this point, I'm still interested to know whether the original
results were obtained with ctq/ncq enabled and, if so, whether it
is introducing latencies or not.
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com