From: Christoph Hellwig <hch@infradead.org>
To: Dave Chinner <david@fromorbit.com>
Cc: xfs@oss.sgi.com
Subject: Re: xfs performance problem
Date: Sun, 1 May 2011 12:55:46 -0400
Message-ID: <20110501165546.GB5391@infradead.org>
In-Reply-To: <20110501085246.GF13542@dastard>
On Sun, May 01, 2011 at 06:52:46PM +1000, Dave Chinner wrote:
> > > more than likely your problem is that barriers have been enabled for
> > > MD/DM devices on the new kernel, and they aren't on the old kernel.
> > > XFS uses barriers by default, ext3 does not. Hence XFS performance
> > > will change while ext3 will not. Check dmesg output when mounting
> > > the filesystems on the different kernels.
> >
> > But didn't 2.6.38 replace barriers by explicit flushes the filesystem has to
> > wait for - mitigating most of the performance problems with barriers?
>
> IIRC, it depends on whether the hardware supports FUA or not. If it
> doesn't then device cache flushes are used to emulate FUA and so
> performance can still suck. Christoph will no doubt correct me if I
> got that wrong ;)
"Mitigating most of the barrier performance issues" is a bit of a strong
claim.  Yes, it removes useless ordering requirements, but fundamentally
you still have to flush the disk cache to the physical medium, which
is always going to be slower than just filling up a DRAM cache like
ext3's default behaviour in mainline does (interestingly both SLES
and RHEL have patched it to provide safe behaviour by default).
Both the old barrier code and the new flush code will use the FUA bit
if available, which optimizes away the post-flush for a log write out.
Note that currently libata by default always disables FUA support,
even if the disk supports it, so you'll need a SAS/FC/iSCSI/etc.
device to actually see FUA requests.  That is quite sad, as FUA
should provide a nice speedup, especially on SATA, where the cache
flush command is not queueable and thus still requires us to
drain any outstanding I/O, at least for a short duration.
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs