From: Dave Chinner <david@fromorbit.com>
To: Jan Kara <jack@suse.cz>
Cc: Wu Fengguang <fengguang.wu@intel.com>,
Andrew Morton <akpm@linux-foundation.org>,
linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
hch@infradead.org, peterz@infradead.org
Subject: Re: [PATCH RFC] mm: Implement balance_dirty_pages() through waiting for flusher thread
Date: Wed, 23 Jun 2010 08:29:32 +1000
Message-ID: <20100622222932.GR7869@dastard>
In-Reply-To: <20100622140258.GE3338@quack.suse.cz>
On Tue, Jun 22, 2010 at 04:02:59PM +0200, Jan Kara wrote:
> On Tue 22-06-10 21:52:34, Wu Fengguang wrote:
> > > On the other hand I think we will have to come up with something
> > > more clever than what I do now because for some huge machines with
> > > nr_cpu_ids == 256, the error of the counter is 256*9*8 = 18432 so that's
> > > already unacceptable given the amounts we want to check (like 1536) -
> > > already for nr_cpu_ids == 32, the error is the same as the difference we
> > > want to check. I think we'll have to come up with some scheme whose error
> > > is not dependent on the number of cpus or if it is dependent, it's only a
> > > weak dependency (like a logarithm or so).
> > > Or we could rely on the fact that IO completions for a bdi won't happen on
> > > all CPUs and thus the error would be much more bounded. But I'm not sure
> > > how much that is true or not.
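(For reference, the error bound Jan describes above falls straight out of
the per-CPU batching: each CPU can accumulate up to the batch size locally
before folding into the shared count, so a cheap read can be off by roughly
nr_cpus * batch. A minimal userspace sketch of the idea - not the kernel's
lib/percpu_counter.c, and the sizes are just the numbers from the example
above:)

/*
 * Minimal userspace sketch of the per-CPU batching idea (not the
 * kernel's lib/percpu_counter.c; names and sizes are made up to match
 * the example above).  Each "CPU" accumulates a local delta and only
 * folds it into the shared count once it reaches BATCH, so a cheap
 * read of the shared count can drift by up to ~NR_CPUS * BATCH.
 */
#include <stdatomic.h>
#include <stdio.h>

#define NR_CPUS	256
#define BATCH	72		/* the 9*8 from the example above */

static _Atomic long global_count;	/* folded, "accurate" part */
static long cpu_delta[NR_CPUS];		/* per-CPU unfolded deltas */

/* One writer per "CPU"; the real thing disables preemption instead. */
static void counter_add(int cpu, long amount)
{
	cpu_delta[cpu] += amount;
	if (cpu_delta[cpu] >= BATCH || cpu_delta[cpu] <= -BATCH) {
		atomic_fetch_add(&global_count, cpu_delta[cpu]);
		cpu_delta[cpu] = 0;
	}
}

/* Cheap approximate read - no summing of the per-CPU deltas. */
static long counter_read_fast(void)
{
	return atomic_load(&global_count);
}

int main(void)
{
	counter_add(0, 8);	/* stays in cpu_delta[0], invisible to readers */
	printf("worst-case drift: ~%d\n", NR_CPUS * BATCH);	/* 18432 */
	printf("fast read: %ld\n", counter_read_fast());
	return 0;
}
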
> >
> > Yes, the per-CPU counter seems tricky. How about plain atomic operations?
> >
> > This test shows that atomic_dec_and_test() is about 4.5 times slower
> > than plain i-- on a 4-core CPU. Not bad.
It's not how fast an uncontended operation runs that matters - it's
what happens when it is contended by lots of CPUs. In my experience,
atomics in writeback paths scale to medium-sized machines (say
16-32p) but become a bottleneck on larger configurations due to the
increased cost of cacheline propagation.
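Something like the crude sketch below (not Fengguang's test; the thread
count and loop sizes are arbitrary) shows the effect: many threads bumping
one shared atomic ping-pong the cacheline between packages, while per-thread
padded counters stay roughly flat as you add CPUs.

/*
 * Crude illustration of contended vs. uncontended counting - not
 * Fengguang's test.  Build with: gcc -O2 -pthread contention.c
 * and compare the two phases with 'time' or perf.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NR_THREADS	32
#define LOOPS		(1 << 24)

static _Atomic long shared_count;

struct padded {
	long val;
	char pad[64 - sizeof(long)];	/* keep each counter on its own cacheline */
};
static struct padded local_count[NR_THREADS];

static void *shared_worker(void *arg)
{
	(void)arg;
	for (long i = 0; i < LOOPS; i++)
		atomic_fetch_add(&shared_count, 1);	/* cacheline ping-pongs */
	return NULL;
}

static void *local_worker(void *arg)
{
	long id = (long)arg;

	for (long i = 0; i < LOOPS; i++)
		local_count[id].val++;			/* stays local to one CPU */
	return NULL;
}

static void run(void *(*fn)(void *), const char *name)
{
	pthread_t tid[NR_THREADS];

	for (long i = 0; i < NR_THREADS; i++)
		pthread_create(&tid[i], NULL, fn, (void *)i);
	for (int i = 0; i < NR_THREADS; i++)
		pthread_join(tid[i], NULL);
	printf("%s done\n", name);
}

int main(void)
{
	run(shared_worker, "shared atomic");
	run(local_worker, "per-thread counters");
	return 0;
}
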
> > Note that
> > 1) we can avoid the atomic operations when there are no active waiters
Under heavy IO load there will always be waiters.
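(To be clear, the gating Fengguang suggests would look something like the
fragment below - the waitqueue and counter fields are hypothetical, not from
Jan's patch. The problem is that once the system is throttling at all,
waitqueue_active() is true on essentially every IO completion, so the atomic
gets taken anyway.)

/*
 * Hypothetical sketch of the "skip the atomic when nobody waits" idea;
 * the waitqueue and counter fields are made up, not from Jan's patch.
 */
#include <linux/backing-dev.h>
#include <linux/wait.h>

static inline void bdi_account_written(struct backing_dev_info *bdi,
				       long nr_pages)
{
	/* Only pay for the atomic if someone is throttled on this bdi. */
	if (waitqueue_active(&bdi->balance_waitq))		/* hypothetical field */
		atomic_long_add(nr_pages, &bdi->written_since_wait); /* hypothetical */
}
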
> > 2) most writeback will be submitted by one per-bdi flusher thread, so no
> >    worry of cache bouncing (this also means the per-CPU counter error is
> >    normally bounded by the batch size)
> Yes, writeback will be submitted by one flusher thread, but the question
> is rather where the writeback will be completed. And that depends on which
> CPU that particular irq is handled on. As far as my weak knowledge of HW
> goes, this very much depends on the system configuration (i.e., irq
> affinity and other things).
And on how many paths to the storage you are using, how threaded the
underlying driver is, whether it uses MSI to direct interrupts to
multiple CPUs instead of just one, and so on.
As we scale up we're more likely to see multiple CPUs doing IO
completion for the same BDI because the storage configs are more
complex in high end machines. Hence IMO preventing cacheline
bouncing between submission and completion is a significant
scalability concern.
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com