From: Redeeman <redeeman@metanurb.dk>
To: Eric Sandeen <sandeen@sandeen.net>
Cc: Justin Piszcz <jpiszcz@lucidpixels.com>,
	linux-raid@vger.kernel.org, xfs@oss.sgi.com,
	Alan Piszcz <ap@solarrain.com>
Subject: Re: 12x performance drop on md/linux+sw raid1 due to barriers [xfs]
Date: Sat, 06 Dec 2008 21:35:40 +0100	[thread overview]
Message-ID: <1228595740.16555.105.camel@localhost> (raw)
In-Reply-To: <493A9BE7.3090001@sandeen.net>

On Sat, 2008-12-06 at 09:36 -0600, Eric Sandeen wrote:
> Justin Piszcz wrote:
> > Someone should write a document about XFS and barrier support. If I recall
> > correctly, barriers never worked right on raid1 or raid5 devices in the past,
> > but it appears they now work on RAID1, which slows down performance ~12 times!!
> 
> What sort of document do you propose?  xfs will enable barriers on any
> block device that supports them, and after:
> 
> deeb5912db12e8b7ccf3f4b1afaad60bc29abed9
> 
> [XFS] Disable queue flag test in barrier check.
> 
> xfs is able to determine, via a test IO, that md raid1 does pass
> barriers through properly even though it doesn't set an ordered flag on
> the queue.
> 
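[As a side note, in case it helps anyone reproducing this: a quick way to
check whether XFS actually kept barriers enabled on a given mount is to grep
the kernel log for the barrier notices XFS prints at mount time; the exact
wording varies by kernel version, but something like

l1:~# dmesg | grep -i barrier

will show a "Disabling barriers" notice if the trial barrier write failed,
and nothing barrier-related from XFS if barriers are in use.]
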
> > l1:~# /usr/bin/time tar xf linux-2.6.27.7.tar 
> > 0.15user 1.54system 0:13.18elapsed 12%CPU (0avgtext+0avgdata 0maxresident)k
> > 0inputs+0outputs (0major+325minor)pagefaults 0swaps
> > l1:~#
> > 
> > l1:~# /usr/bin/time tar xf linux-2.6.27.7.tar
> > 0.14user 1.66system 2:39.68elapsed 1%CPU (0avgtext+0avgdata 0maxresident)k
> > 0inputs+0outputs (0major+324minor)pagefaults 0swaps
> > l1:~#
> > 
> > Before:
> > /dev/md2        /               xfs     defaults,noatime  0       1
> > 
> > After:
> > /dev/md2        /               xfs     defaults,noatime,nobarrier,logbufs=8,logbsize=262144 0 1
> 
> Well, if you're investigating barriers can you do a test with just the
> barrier option change; though I expect you'll still find it to have a
> substantial impact.
> 
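[To isolate the barrier cost from the logbufs/logbsize changes, the retest
Eric asks for would only need the one extra option, i.e. something like:

/dev/md2        /               xfs     defaults,noatime,nobarrier  0       1

For what it is worth, 2:39.68 is ~160 s against 13.18 s, i.e. already roughly
the 12x from the subject, which fits Eric's expectation that the barrier
option is the dominant factor.]
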
> > There is some mention of it here:
> > http://oss.sgi.com/projects/xfs/faq.html#wcache_persistent
> > 
> > But basically I believe it should be noted in the kernel logs, the FAQ, or
> > somewhere else, because just by upgrading the kernel, without changing fstab
> > or any other part of the system, performance can drop 12x simply because the
> > newer kernels implement barriers.
> 
> Perhaps:
> 
> printk(KERN_ALERT "XFS is now looking after your metadata very
> carefully; if you prefer the old, fast, dangerous way, mount with -o
> nobarrier\n");
> 
> :)
> 
> Really, this just gets xfs on md raid1 in line with how it behaves on
> most other devices.
> 
> But I agree, some documentation/education is probably in order; if you
> choose to disable write caches or you have faith in the battery backup
> of your write cache, turning off barriers would be a good idea.  Justin,
> it might be interesting to do some tests with:
> 
> barrier,   write cache enabled
> nobarrier, write cache enabled
> nobarrier, write cache disabled
> 
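[In case it is useful for those tests: on ordinary SATA/PATA disks the write
cache can usually be toggled with hdparm, run against each member disk of the
md array rather than against the md device itself, e.g.

l1:~# hdparm -W0 /dev/sda    # disable the drive write cache
l1:~# hdparm -W1 /dev/sda    # re-enable it

where /dev/sda is just a placeholder for whichever disks back /dev/md2.]
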
> a 12x hit does hurt though...  If you're really motivated, try the same
> scenarios on ext3 and ext4 to see what the barrier hit is on those as well.
I have tested with ext3/xfs, and barriers have a considerably larger impact
on XFS than on ext3. That test is about 4 months old, though, and I no longer
have any precise data.
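
If anyone wants to repeat that comparison, the tar test from the top of the
thread works per filesystem; roughly, after mkfs'ing a scratch device with
the filesystem under test (the device and paths below are only placeholders):

l1:~# mount -o barrier /dev/md2 /mnt/test      # xfs default; nobarrier for the other run
l1:~# mount -o barrier=1 /dev/md2 /mnt/test    # ext3; barrier=0 for the other run
l1:~# /usr/bin/time tar -C /mnt/test -xf linux-2.6.27.7.tar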


> 
> -Eric
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


Thread overview: 61+ messages
2008-12-06 14:28 12x performance drop on md/linux+sw raid1 due to barriers [xfs] Justin Piszcz
2008-12-06 14:28 ` Justin Piszcz
2008-12-06 15:36 ` Eric Sandeen
2008-12-06 20:35   ` Redeeman [this message]
2008-12-06 20:35     ` Redeeman
2008-12-13 12:54   ` Justin Piszcz
2008-12-13 12:54     ` Justin Piszcz
2008-12-13 17:26     ` Martin Steigerwald
2008-12-13 17:26       ` Martin Steigerwald
2008-12-13 17:40       ` Eric Sandeen
2008-12-13 17:40         ` Eric Sandeen
2008-12-14  3:31         ` Redeeman
2008-12-14  3:31           ` Redeeman
2008-12-14 14:02           ` Peter Grandi
2008-12-14 14:02             ` Peter Grandi
2008-12-14 18:12             ` Martin Steigerwald
2008-12-14 18:12               ` Martin Steigerwald
2008-12-14 22:02               ` Peter Grandi
2008-12-14 22:02                 ` Peter Grandi
2008-12-15 18:48                 ` Martin Steigerwald
2008-12-15 22:50                   ` Peter Grandi
2009-02-18 22:14                     ` Leon Woestenberg
2009-02-18 22:24                       ` Eric Sandeen
2009-02-18 23:09                       ` Ralf Liebenow
2009-02-18 23:19                         ` Eric Sandeen
2009-02-20 19:19                       ` Peter Grandi
2008-12-15 22:38                 ` Dave Chinner
2008-12-15 22:38                   ` Dave Chinner
2008-12-16  9:39                   ` Martin Steigerwald
2008-12-16  9:39                     ` Martin Steigerwald
2008-12-16 20:57                     ` Peter Grandi
2008-12-16 23:14                     ` Dave Chinner
2008-12-16 23:14                       ` Dave Chinner
2008-12-17 21:40                 ` Bill Davidsen
2008-12-17 21:40                   ` Bill Davidsen
2008-12-18  8:20                   ` Leon Woestenberg
2008-12-18 23:33                     ` Bill Davidsen
2008-12-21 19:16                     ` Peter Grandi
2008-12-22 13:19                       ` Leon Woestenberg
2008-12-22 13:19                         ` Leon Woestenberg
2008-12-18 22:26                   ` Dave Chinner
2008-12-18 22:26                     ` Dave Chinner
2008-12-20 14:06               ` Peter Grandi
2008-12-14 18:35             ` Martin Steigerwald
2008-12-14 18:35               ` Martin Steigerwald
2008-12-14 17:49           ` Martin Steigerwald
2008-12-14 17:49             ` Martin Steigerwald
2008-12-14 23:36         ` Dave Chinner
2008-12-14 23:36           ` Dave Chinner
2008-12-14 23:55           ` Eric Sandeen
2008-12-13 18:01       ` David Lethe
2008-12-13 18:01         ` David Lethe
2008-12-06 18:42 ` Peter Grandi
2008-12-11  0:20 ` Bill Davidsen
2008-12-11  0:20   ` Bill Davidsen
2008-12-11  9:18   ` Justin Piszcz
2008-12-11  9:18     ` Justin Piszcz
2008-12-11  9:24     ` Justin Piszcz
2008-12-11  9:24       ` Justin Piszcz
  -- strict thread matches above, loose matches on Subject: below --
2008-12-14 18:33 Martin Steigerwald
2008-12-14 18:33 ` Martin Steigerwald
