public inbox for linux-fsdevel@vger.kernel.org
From: Sonny Rao <sonny@burdell.org>
To: Bryan Henderson <hbryan@us.ibm.com>
Cc: Anton Altaparmakov <aia21@cam.ac.uk>,
	Andrew Morton <akpm@osdl.org>,
	linux-fsdevel@vger.kernel.org, linux-xfs@oss.sgi.com,
	viro@parcelfarce.linux.theplanet.co.uk
Subject: Re: Advice sought on how to lock multiple pages in ->prepare_write and ->writepage
Date: Mon, 31 Jan 2005 19:10:53 -0500	[thread overview]
Message-ID: <20050201001053.GB11044@kevlar.burdell.org> (raw)
In-Reply-To: <OF2CCFA1A7.7AC38C6F-ON85256F9A.00802CED-88256F9A.00829475@us.ibm.com>

On Mon, Jan 31, 2005 at 03:46:15PM -0800, Bryan Henderson wrote:

> To get multi-page bios (in any natural way), you need to throw out not 
> only the generic file read/write routines, but the page cache as well.
> 
> Every time I've looked at multi-page bios, I've been unable to see any 
> reason that they would be faster than multiple single-page bios.  But I 
> haven't seen any experiments.

Well, I've certainly seen cases where certain filesystems throw
down lots of smaller bios and rely on the io schedulers to merge them,
but the io schedulers don't do a perfect job of building the largest
possible requests.  This is especially true in the case of a large,
fast raid array with fast writeback caching.

Here's example iostat output from a sequential write
(overwriting, not appending) to a 7-disk raid-0 array.

Apologies for the long lines.

Ext3 (writeback mode)

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sdc          0.00 21095.60 21.00 244.40  168.00 170723.20    84.00 85361.60 643.90    11.15   42.15   3.45  91.60

We see about 21k write merges per second going on, and an average request
size of only 643 sectors, where the device can handle up to 1 MB (2048 sectors).
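Doing the arithmetic on that line (a quick back-of-the-envelope sketch;
the 512-byte sector size is the standard iostat convention, assumed here):

```python
# Rough check of the ext3 numbers above; iostat reports sizes in
# 512-byte sectors.
SECTOR = 512

avgrq_sz = 643.90   # sectors, from the iostat line above
max_rq = 2048       # sectors, the device's maximum request size

avg_bytes = avgrq_sz * SECTOR
max_bytes = max_rq * SECTOR

print(round(avg_bytes / 1024))     # average request, KB -> 322
print(max_bytes // (1024 * 1024))  # device maximum, MB  -> 1
```

So the average request is only about a third of what the device could take.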

Here is iostat from the same test w/ JFS instead:

Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sdc          0.00 1110.58  0.00 97.80    0.00 201821.96     0.00 100910.98 2063.53   117.09 1054.11  10.21  99.84

So, in this case I think it is making a difference: JFS does roughly 1k
merges per second instead of 21k, and gets a big difference in
throughput, though there could be other issues.
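For what it's worth, here's a quick comparison of the two iostat lines
above (same back-of-the-envelope style; nothing beyond the numbers shown):

```python
# Compare the two runs: write-merge rate and write throughput.
ext3_wrqm, ext3_wkb = 21095.60, 85361.60   # ext3 iostat line above
jfs_wrqm, jfs_wkb = 1110.58, 100910.98     # JFS iostat line above

merge_ratio = ext3_wrqm / jfs_wrqm     # how much more merging ext3 needs
tput_gain = jfs_wkb / ext3_wkb - 1.0   # JFS throughput advantage

print(round(merge_ratio))        # ~19x more merges on ext3
print(round(tput_gain * 100))    # ~18 percent more throughput on JFS
```

About 19x fewer merges and roughly 18% more write throughput, with JFS
also sustaining near the device's 2048-sector maximum request size.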

Sonny


Thread overview: 12+ messages
2005-01-27 10:48 Advice sought on how to lock multiple pages in ->prepare_write and ->writepage Anton Altaparmakov
2005-01-28  0:58 ` Andrew Morton
2005-01-28  5:06   ` Nathan Scott
2005-01-28 11:08     ` Anton Altaparmakov
2005-01-28 22:53     ` Bryan Henderson
2005-01-31 22:00       ` Nathan Scott
2005-01-31 23:46         ` Bryan Henderson
2005-02-01  0:10           ` Sonny Rao [this message]
2005-02-01  1:32             ` Bryan Henderson
2005-02-01 16:49               ` Sonny Rao
2005-02-01  1:29       ` Matthew Wilcox
2005-01-28 10:43   ` Anton Altaparmakov
