public inbox for linux-kernel@vger.kernel.org
From: Steve Rago <sar@nec-labs.com>
To: Jan Kara <jack@suse.cz>
Cc: Wu Fengguang <fengguang.wu@intel.com>,
	Peter Zijlstra <peterz@infradead.org>,
	"linux-nfs@vger.kernel.org" <linux-nfs@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"Trond.Myklebust@netapp.com" <Trond.Myklebust@netapp.com>,
	"jens.axboe" <jens.axboe@oracle.com>,
	Peter Staubach <staubach@redhat.com>
Subject: Re: [PATCH] improve the performance of large sequential write NFS workloads
Date: Wed, 23 Dec 2009 15:16:25 -0500	[thread overview]
Message-ID: <1261599385.13028.142.camel@serenity> (raw)
In-Reply-To: <20091223183912.GE3159@quack.suse.cz>


On Wed, 2009-12-23 at 19:39 +0100, Jan Kara wrote:
> On Tue 22-12-09 11:20:15, Steve Rago wrote:
> > 
> > On Tue, 2009-12-22 at 13:25 +0100, Jan Kara wrote:
> > > > I originally spent several months playing with the balance_dirty_pages
> > > > algorithm.  The main drawback is that it affects more than the inodes
> > > > that the caller is writing and that the control of what to do is too
> > >   Can you be more specific here please?
> > 
> > Sure;  balance_dirty_pages() will schedule writeback by the flusher
> > thread once the number of dirty pages exceeds dirty_background_ratio.
> > The flusher thread calls writeback_inodes_wb() to flush all dirty inodes
> > associated with the bdi.  Similarly, the process dirtying the pages will
> > call writeback_inodes_wbc() when its bdi threshold has been exceeded.
> > The first problem is that these functions process all dirty inodes with
> > the same backing device, which can lead to excess (duplicate) flushing
> > of the same inode.  Second, there is no distinction between pages that
> > need to be committed and pages that have commits pending in
> > NR_UNSTABLE_NFS/BDI_RECLAIMABLE (a page that has a commit pending won't
> > be cleaned any faster by sending more commits).  This tends to overstate
> > the amount of memory that can be cleaned, leading to additional commit
> > requests.  Third, these functions generate a commit for each set of
> > writes they do, which might not be appropriate.  For background writing,
> > you'd like to delay the commit as long as possible.
>   Ok, I get it. Thanks for the explanation. The problem with multiple
> writing threads also bites us on ordinary SATA drives (the IO pattern, and
> thus throughput, gets worse as more threads write). The plan is to let
> only the flusher thread do the IO, while a thread throttled in
> balance_dirty_pages just waits for the flusher thread to do the work.
> There were even patches for this floating around, but I'm not sure what
> happened to them. So that part of the problem should be easy to solve.
>   Another part is about sending commits - if we have just one thread doing
> the flushing, we have no problem with excessive commits for one inode.
> You're right that we may want to avoid sending commits for background
> writeback, but until we send the commit, pages just accumulate in the
> unstable state, don't they? So we might want to send the commit for the
> inode periodically anyway to get rid of those pages. From this point of
> view, sending a commit after each writepages call does not seem like such
> a bad idea - although it might be more appropriate to send it some time
> after the writepages call, when we are not close to the dirty limit, so
> that the server has more time to do more natural "unforced" writeback...

When to send the commit is a complex question to answer.  If you delay
it long enough, the server's flusher threads will have already done most
of the work for you, so commits can be cheap, but you don't have access
to the necessary information to figure this out.  You can't delay it too
long, though, because the unstable pages on the client will grow too
large, creating memory pressure.  I have a second patch, which I haven't
posted yet, that adds feedback piggy-backed on the NFS write response,
which allows the NFS client to free pages proactively.  This greatly
reduces the need to send commit messages, but it extends the protocol
(in a backward-compatible manner), so it could be hard to convince
people to accept it.

> 
> > > > Part of the patch does implement a heuristic write-behind.  See where
> > > > nfs_wb_eager() is called.
> > >   I believe that if we had per-bdi dirty_background_ratio and set it low
> > > for NFS's bdi, then the write-behind logic would not be needed
> > > (essentially the flusher thread should submit the writes to the server
> > > early).
> > > 
> > Maybe so, but you still need something to prevent the process that is
> > dirtying pages from continuing, because a process can always write to
> > memory faster than writing to disk/network, so the flusher won't be able
> > to keep up.
>   Yes, I agree that part is needed. But Fengguang already had patches in
> that direction if my memory serves me well.
> 
>   So to recap: If we block tasks in balance_dirty_pages until unstable
> pages are committed and make just one thread do the writing, what else is
> missing to make you happy? :)
> 								Honza

As long as the performance improves substantially, I'll be happy.

Part of the problem that isn't addressed by your summary is the
synchronous writes.  With the eager writeback patch, these are removed
[see the short-circuit in wb_priority()].  I would have expected that
change to be controversial, but I've not heard any complaints (yet).  If
the process or the bdi flusher is writing and committing regularly, then
pages should be recycled quickly and the change shouldn't matter, but
I'd need to run my systemtap scripts to make sure.  

Steve


