linux-fsdevel.vger.kernel.org archive mirror
From: Sonny Rao <sonny@burdell.org>
To: Bryan Henderson <hbryan@us.ibm.com>
Cc: Andreas Dilger <adilger@clusterfs.com>,
	linux-fsdevel@vger.kernel.org, pbadari@us.ibm.com
Subject: Re: ext3 writepages ?
Date: Thu, 10 Feb 2005 14:02:07 -0500	[thread overview]
Message-ID: <20050210190207.GA493@kevlar.burdell.org> (raw)
In-Reply-To: <OF56EDF119.D95130DF-ON88256FA4.00605B38-88256FA4.00621196@us.ibm.com>

On Thu, Feb 10, 2005 at 09:51:42AM -0800, Bryan Henderson wrote:
> >I am inferring this using iostat which shows that average device
> >utilization fluctuates between 83 and 99 percent and the average
> >request size is around 650 sectors (going to the device) without
> >writepages. 
> >
> >With writepages, device utilization never drops below 95 percent and
> >is usually about 98 percent utilized, and the average request size to
> >the device is around 1000 sectors.
> 
> Well that blows away the only two ways I know that this effect can happen. 
>  The first has to do with certain code being more efficient than other 
> code at assembling I/Os, but the fact that the CPU utilization is the same 
> in both cases pretty much eliminates that.  

No, I don't think you can draw that conclusion from total CPU
utilization alone: in the writepages case we spend a larger fraction
of the time copying data from userspace, which by itself raises CPU
utilization.  If anything, I think this shows the writepages code path
is in fact more efficient than the regular I/O-scheduler path.
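
To put the iostat request sizes above in byte terms (a quick sketch,
assuming the usual 512-byte sectors that iostat reports):

```python
# Convert iostat's average request size (reported in 512-byte sectors) to KB.
SECTOR_BYTES = 512

def sectors_to_kb(sectors):
    return sectors * SECTOR_BYTES / 1024

no_writepages = sectors_to_kb(650)    # without writepages
writepages = sectors_to_kb(1000)      # with writepages
print(no_writepages, writepages)      # 325.0 KB vs 500.0 KB per request
```

So writepages is issuing requests roughly 50% larger on average, while
also keeping the device busier.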

Here's the oprofile output from the two runs; note that
__copy_from_user_ll is at the top of both profiles:

No writepages:

CPU: P4 / Xeon, speed 1997.8 MHz (estimated)
Counted GLOBAL_POWER_EVENTS events (time during which processor is not stopped) with a unit mask of 0x01 (mandatory) count 100000
samples  %        image name               app name                 symbol name
2225649  38.7482  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 __copy_from_user_ll
1471012  25.6101  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 poll_idle
104736    1.8234  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 __block_commit_write
92702     1.6139  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 mark_offset_cyclone
90077     1.5682  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 _spin_lock
83649     1.4563  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 __block_write_full_page
81483     1.4186  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 generic_file_buffered_write
69232     1.2053  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 ext3_writeback_commit_write


With writepages:

CPU: P4 / Xeon, speed 1997.98 MHz (estimated)
Counted GLOBAL_POWER_EVENTS events (time during which processor is not stopped) with a unit mask of 0x01 (mandatory) count 100000
samples  %        image name               app name                 symbol name
2487751  43.4411  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 __copy_from_user_ll
1518775  26.5209  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 poll_idle
124956    2.1820  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 _spin_lock
93689     1.6360  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 generic_file_buffered_write
93139     1.6264  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 mark_offset_cyclone
89683     1.5660  vmlinux-autobench-2.6.10-autokern1 vmlinux-autobench-2.6.10-autokern1 ext3_writeback_commit_write

So we see 38% vs 43% in __copy_from_user_ll, which I believe should be
directly correlated with throughput (about a 12% relative difference
here).
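
The arithmetic behind that 12% figure, using the sample percentages
from the two profiles above:

```python
# Relative difference in __copy_from_user_ll's share of samples
# between the two oprofile runs.
no_writepages = 38.7482   # % of samples, without writepages
writepages = 43.4411      # % of samples, with writepages

rel_diff = (writepages / no_writepages - 1) * 100
print(round(rel_diff, 1))   # ~12.1% more time spent copying from userspace
```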


> The other is where the 
> interactivity of the I/O generator doesn't match the buffering in the 
> device so that the device ends up 100% busy processing small I/Os that 
> were sent to it because it said all the while that it needed more work. 
> But in the small-I/O case, we don't see a 100% busy device.

That might be possible, but I'm not sure how one would account for it.

The application, VM, and I/O systems are all so intertwined it would be
difficult to isolate the application if we are trying to measure
maximum throughput, no?

 
> So why would the device be up to 17% idle, since the writepages case makes 
> it apparent that the I/O generator is capable of generating much more 
> work?  Is there some queue plugging (I/O scheduler delays sending I/O to 
> the device even though the device is idle) going on?

Again, I think the amount of work being generated is directly related
to how quickly the dirty pages are being flushed out, so
inefficiencies in the I/O system bubble up to the generator.

Sonny




Thread overview: 21+ messages
2005-02-02 15:32 ext3 writepages ? Badari Pulavarty
2005-02-02 20:19 ` Sonny Rao
2005-02-03 15:51   ` Badari Pulavarty
2005-02-03 17:00     ` Sonny Rao
2005-02-03 16:56       ` Badari Pulavarty
2005-02-03 17:24         ` Sonny Rao
2005-02-03 20:50     ` Sonny Rao
2005-02-08  1:33       ` Andreas Dilger
2005-02-08  5:38         ` Sonny Rao
2005-02-09 21:11           ` Sonny Rao
2005-02-09 22:29             ` Badari Pulavarty
2005-02-10  2:05               ` Bryan Henderson
2005-02-10  2:45                 ` Sonny Rao
2005-02-10 17:51                   ` Bryan Henderson
2005-02-10 19:02                     ` Sonny Rao [this message]
2005-02-10 16:02                 ` Badari Pulavarty
2005-02-10 18:00                   ` Bryan Henderson
2005-02-10 18:32                     ` Badari Pulavarty
2005-02-10 20:30                       ` Bryan Henderson
2005-02-10 20:25                         ` Sonny Rao
2005-02-11  0:20                           ` Bryan Henderson
