linux-fsdevel.vger.kernel.org archive mirror
From: Wu Fengguang <fengguang.wu@intel.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Dave Chinner <david@fromorbit.com>, Jan Kara <jack@suse.cz>,
	Christoph Hellwig <hch@lst.de>, Theodore Ts'o <tytso@mit.edu>,
	Chris Mason <chris.mason@oracle.com>,
	Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Mel Gorman <mel@csn.ul.ie>, Rik van Riel <riel@redhat.com>,
	KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
	linux-mm <linux-mm@kvack.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 00/13] IO-less dirty throttling
Date: Thu, 18 Nov 2010 10:50:39 +0800	[thread overview]
Message-ID: <20101118025039.GA15479@localhost> (raw)
In-Reply-To: <20101117175900.0d7878e5.akpm@linux-foundation.org>

On Thu, Nov 18, 2010 at 09:59:00AM +0800, Andrew Morton wrote:
> On Thu, 18 Nov 2010 12:40:51 +1100 Dave Chinner <david@fromorbit.com> wrote:
> 
> > 
> > There's no point
> > waking a dirtier if all they can do is write a single page before
> > they are throttled again - IO is most efficient when done in larger
> > batches...
> 
> That assumes the process was about to do another write.  That's
> reasonable on average, but a bit sad for interactive/rtprio tasks.  At
> some stage those scheduler things should be brought into the equation.

The interactive/rtprio tasks are given a 1/4 bonus in
global_dirty_limits(). So when there are lots of heavy dirtiers,
the interactive/rtprio tasks will only get soft throttled at
(6~8)*bdi_bandwidth. We can increase that to (12~16)*bdi_bandwidth
or whatever.

> >
> > ...
> >
> > Yeah, sorry, should have posted them - I didn't because I snapped
> > the numbers before the run had finished. Without series:
> > 
> > 373.19user 14940.49system 41:42.17elapsed 612%CPU (0avgtext+0avgdata 82560maxresident)k
> > 0inputs+0outputs (403major+2599763minor)pagefaults 0swaps
> > 
> > With your series:
> > 
> > 359.64user 5559.32system 40:53.23elapsed 241%CPU (0avgtext+0avgdata 82496maxresident)k
> > 0inputs+0outputs (312major+2598798minor)pagefaults 0swaps
> > 
> > So the wall time with your series is lower, and system CPU time is
> > way down (as I've already noted) for this workload on XFS.
> 
> How much of that benefit is an accounting artifact, moving work away
> from the calling process's CPU and into kernel threads?

The elapsed time can't be an accounting artifact, and it went down
from 41:42 to 40:53.

For the CPU time, I have system-wide numbers collected from iostat.
Citing from the changelog of the first patch:

- 1 dirtier case:    the same
- 10 dirtiers case:  CPU system time is reduced to 50%
- 100 dirtiers case: CPU system time is reduced to 10%, IO size and throughput increases by 10%

                        2.6.37-rc2                              2.6.37-rc1-next-20101115+
        ----------------------------------------        ----------------------------------------
        %system         wkB/s           avgrq-sz        %system         wkB/s           avgrq-sz
100dd   30.916          37843.000       748.670         3.079           41654.853       822.322
100dd   30.501          37227.521       735.754         3.744           41531.725       820.360

10dd    39.442          47745.021       900.935         20.756          47951.702       901.006
10dd    39.204          47484.616       899.330         20.550          47970.093       900.247

1dd     13.046          57357.468       910.659         13.060          57632.715       909.212
1dd     12.896          56433.152       909.861         12.467          56294.440       909.644

Those are real CPU savings :)

Thanks,
Fengguang


  reply	other threads:[~2010-11-18  2:50 UTC|newest]

Thread overview: 8+ messages
2010-11-17  3:58 [PATCH 00/13] IO-less dirty throttling Wu Fengguang
2010-11-17  7:25 ` Dave Chinner
2010-11-17 10:06   ` Wu Fengguang
2010-11-18  1:40     ` Dave Chinner
2010-11-18  1:59       ` Andrew Morton
2010-11-18  2:50         ` Wu Fengguang [this message]
2010-11-18  3:19           ` Wu Fengguang
2010-11-19  2:28         ` Dave Chinner
