From: Dave Chinner <david@fromorbit.com>
To: Christoph Hellwig <hch@infradead.org>
Cc: Theodore Tso <tytso@mit.edu>,
	Chris Mason <chris.mason@oracle.com>,
	linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: Re: ext4 writepages is making tiny bios?
Date: Thu, 3 Sep 2009 15:52:01 +1000
Message-ID: <20090903055201.GA7146@discord.disaster>
In-Reply-To: <20090901212740.GA9930@infradead.org>

On Tue, Sep 01, 2009 at 05:27:40PM -0400, Christoph Hellwig wrote:
> On Tue, Sep 01, 2009 at 04:57:44PM -0400, Theodore Tso wrote:
> > > This graph shows the difference:
> > > 
> > > http://oss.oracle.com/~mason/seekwatcher/trace-buffered.png
> > 
> > Wow, I'm surprised how seeky XFS was in these graphs compared to ext4
> > and btrfs.  I wonder what was going on.
> 
> XFS made the mistake of trusting the VM, while everyone else more
> or less overrode it.  Removing all those checks and writing out much
> larger chunks of data fixes it with a relatively small patch:
> 
> 	http://verein.lst.de/~hch/xfs/xfs-writeback-scaling

Careful:

-	tloff = min(tlast, startpage->index + 64);
+	tloff = min(tlast, startpage->index + 8192);

That will cause 64k page machines to try to write back 512MB (8192
64k pages) at a time. This re-introduces behaviour similar to sles9,
where writeback would only terminate at the end of an extent
(because the mapping end wasn't capped like it is above).
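
The underlying problem is that the cap is expressed in pages, so the
byte count scales with the page size. A minimal sketch of a
page-size-independent cap (hypothetical names and a 32MB budget
picked purely for illustration, not the actual XFS code):

	/* Cap the writeback window by bytes rather than by a fixed
	 * page count, so 4k and 64k page machines write back the
	 * same amount of data per chunk. */
	#define XFS_MAX_WRITEBACK_BYTES	(32 * 1024 * 1024)	/* assumption */

	static pgoff_t xfs_writeback_cap(pgoff_t tlast, pgoff_t start)
	{
		pgoff_t max_pages = XFS_MAX_WRITEBACK_BYTES >> PAGE_CACHE_SHIFT;

		/* 8192 pages is 32MB with 4k pages but 512MB with
		 * 64k pages; a byte budget divided by the page size
		 * keeps the chunk size constant. */
		return min(tlast, start + max_pages);
	}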

Writing back such huge chunks has two nasty side effects:

	1. horrible fsync latency when streaming writes are
	   occurring (e.g. NFS writes), which limits throughput
	2. a single large streaming write could delay the writeback
	   of thousands of small files indefinitely.

#1 is still an issue, but #2 might not be so bad compared to sles9
given the way inodes are cycled during writeback now...
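
For context on that inode cycling: the generic writeback path now
gives each inode a bounded budget before moving on to the next one.
A simplified sketch of the write_cache_pages() logic (the helper
names here are hypothetical):

	/* Each inode gets wbc->nr_to_write pages of budget; once it
	 * is exhausted, writeback requeues the inode and cycles to
	 * the next, so one streaming writer can't starve thousands
	 * of small files forever. */
	while ((page = next_dirty_page(mapping, wbc)) != NULL) {
		write_out_page(page, wbc);
		if (--wbc->nr_to_write <= 0)
			break;	/* budget used up, next inode */
	}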

> When that code was last benchmarked extensively (on SLES9) it
> worked nicely to saturate extremely large machines using buffered
> I/O; since then VM tuning basically destroyed it.

It was removed because it caused all sorts of problems, and buffered
writes on sles9 were limited by lock contention in XFS, not the VM.
On 2.6.15, pdflush and the code the above patch removes were capable
of pushing more than 6GB/s of buffered writes to a single block
device. VM writeback has gone steadily downhill since then...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

Thread overview: 8+ messages
2009-09-01 18:44 ext4 writepages is making tiny bios? Chris Mason
2009-09-01 20:57 ` Theodore Tso
2009-09-01 21:27   ` Christoph Hellwig
2009-09-02  0:17     ` Chris Mason
2009-09-03  5:52     ` Dave Chinner [this message]
2009-09-03 16:42       ` Christoph Hellwig
2009-09-04  0:15         ` Theodore Tso
2009-09-04  7:20           ` Jens Axboe
