public inbox for linux-xfs@vger.kernel.org
From: Martin Steigerwald <Martin@lichtvoll.de>
To: linux-xfs@oss.sgi.com
Cc: linux-raid@vger.kernel.org, Eric Sandeen <sandeen@sandeen.net>,
	Alan Piszcz <ap@solarrain.com>, Redeeman <redeeman@metanurb.dk>
Subject: Re: 12x performance drop on md/linux+sw raid1 due to barriers [xfs]
Date: Sun, 14 Dec 2008 18:49:32 +0100	[thread overview]
Message-ID: <200812141849.33534.Martin@lichtvoll.de> (raw)
In-Reply-To: <1229225480.16555.152.camel@localhost>

On Sunday 14 December 2008, Redeeman wrote:
> On Sat, 2008-12-13 at 11:40 -0600, Eric Sandeen wrote:
> > Martin Steigerwald wrote:
> > > At the moment it appears to me that disabling write cache may often
> > > give more performance than using barriers. And this doesn't match
> > > my expectation of write barriers as a feature that enhances
> > > performance.
> >
> > Why do you have that expectation?  I've never seen barriers
> > advertised as enhancing performance.  :)
>
> My initial thoughts were that write barriers would enhance performance,
> in that, you could have write cache on. So its really more of an
> expectation that wc+barriers on, performs better than wc+barriers off
> :)

Exactly that. My expectation from my technical understanding of the write 
barrier feature is from most performant to least performant:

1) Write cache + no barrier, but NVRAM ;)
2) Write cache + barrier
3) No write cache, where it shouldn't matter whether barriers are enabled 
or not

With 1, write requests are unordered, so metadata changes could land in 
place before reaching the journal, for example; that is why NVRAM is a 
must. With 2, write requests are unordered except at certain markers, the 
barriers, which say: anything before the barrier completes before it, and 
anything after it completes after it. This leaves room for optimizing the 
write requests on either side of the barrier, either in-kernel by an IO 
scheduler or in firmware via NCQ, TCQ or FUA. And with 3, write requests 
would always be ordered... and when the filesystem places a marker, a 
sync in this case, any write requests still in flight at that point have 
to land on disk before the filesystem can proceed.
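The difference between cases 2 and 3 can be illustrated with a toy sketch (this is deliberately not how the kernel's elevator or a drive's firmware is implemented; the request representation and the BARRIER sentinel are invented for illustration). An optimizer may sort writes by sector to reduce seeks, but it must never move a request across a barrier, whereas in the strictly ordered case it could not reorder anything at all:

```python
BARRIER = "BARRIER"  # invented sentinel marking a write barrier in the queue

def schedule(requests):
    """Sort write requests (here: plain sector numbers) within each
    barrier-delimited segment, mimicking an elevator/NCQ-style seek
    optimization that is forbidden from crossing a barrier."""
    result, segment = [], []
    for req in requests:
        if req == BARRIER:
            result.extend(sorted(segment))  # optimize freely up to the barrier
            result.append(req)              # the barrier itself stays in place
            segment = []
        else:
            segment.append(req)
    result.extend(sorted(segment))          # optimize the tail segment too
    return result

queue = [70, 10, 40, BARRIER, 30, 90, 20]
print(schedule(queue))  # [10, 40, 70, 'BARRIER', 20, 30, 90]
```

With no barriers in the queue the scheduler may sort everything, and with a barrier after every request (the fully ordered case 3) it may sort nothing, which is the intuition behind expecting case 2 to beat case 3.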

From that understanding, which I explained in detail in my Linux-Magazin 
article[1], I always thought that write cache + barrier has to be faster 
than no write cache.

Well, I am ready to learn more. But for me, until now, that was the whole 
point of the effort with write barriers. It seems I completely 
misunderstood their purpose if that's not what they were meant for.

[1] Only in German; it had been translated to English but never published: 
http://www.linux-magazin.de/online_artikel/beschraenktes_schreiben

Ciao,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7


