public inbox for linux-kernel@vger.kernel.org
From: Dave Chinner <david@fromorbit.com>
To: "J. Bruce Fields" <bfields@fieldses.org>
Cc: Neil Brown <neilb@suse.de>, Michael Shuey <shuey@purdue.edu>,
	Shehjar Tikoo <shehjart@cse.unsw.edu.au>,
	linux-kernel@vger.kernel.org, linux-nfs@vger.kernel.org,
	rees@citi.umich.edu, aglo@citi.umich.edu
Subject: Re: high latency NFS
Date: Mon, 4 Aug 2008 12:14:26 +1000
Message-ID: <20080804021426.GF6119@disturbed>
In-Reply-To: <20080804011158.GA8066@fieldses.org>

On Sun, Aug 03, 2008 at 09:11:58PM -0400, J. Bruce Fields wrote:
> On Mon, Aug 04, 2008 at 10:32:06AM +1000, Dave Chinner wrote:
> > On Fri, Aug 01, 2008 at 03:15:59PM -0400, J. Bruce Fields wrote:
> > > On Fri, Aug 01, 2008 at 05:23:20PM +1000, Dave Chinner wrote:
> > > > On Thu, Jul 31, 2008 at 05:03:05PM +1000, Neil Brown wrote:
> > > > > You might want to track the max length of the request queue too and
> > > > > start more threads if the queue is long, to allow a quick ramp-up.
> > > > 
> > > > Right, but even request queue depth is not a good indicator. You
> > > > need to keep track of how many NFSDs are actually doing useful
> > > > work. That is, if you've got an NFSD on the CPU that is hitting
> > > > the cache and not blocking, you don't need more NFSDs to handle
> > > > that load, because they can't do any more work than the one
> > > > that is currently running.
> > > > 
> > > > i.e. take the solution that Greg banks used for the CPU scheduler
> > > > overload issue (limiting the number of nfsds woken but not yet on
> > > > the CPU),
> > > 
> > > I don't remember that, or wasn't watching when it happened.... Do you
> > > have a pointer?
> > 
> > Ah, I thought that had been sent to mainline because it was
> > mentioned in his LCA talk at the start of the year. Slides
> > 65-67 here:
> > 
> > http://mirror.linux.org.au/pub/linux.conf.au/2007/video/talks/41.pdf
> 
> OK, so to summarize: when the rate of incoming rpc's is very high (and,
> I guess, when we're serving everything out of cache and don't have IO
> wait), all the nfsd threads will stay runnable all the time.  That keeps
> userspace processes from running (possibly for "minutes").  And that's a
> problem even on a server dedicated only to nfs, since it affects portmap
> and rpc.mountd.

In a nutshell.

> The solution is given just as "limit the # of nfsd's woken but not yet
> on CPU."  It'd be interesting to see more details.

Simple counters, IIRC (memory hazy so it might be a bit different).
Basically, when we queue a request we check a wakeup counter. If
the wakeup counter is less than a certain threshold (e.g. 5) we
issue a wakeup to get another NFSD running. When the NFSD first
runs and dequeues a request, it then decrements the wakeup counter,
effectively marking that NFSD as busy doing work. IIRC a small
threshold was necessary to ensure we always had enough NFSDs ready
to run if there was some I/O going on (i.e. a mixture of blocking
and non-blocking RPCs).

i.e. we need to track the wakeup-to-run latency to prevent waking too
many NFSDs and loading the run queue unnecessarily.

> Off hand, this seems like it should be at least partly the scheduler's
> job.

Partly, yes, in that the scheduler overhead shouldn't increase when we
do this. However, from an efficiency point of view, if we are blindly
waking NFSDs when it is not necessary then (IMO) we've got an NFSD
problem....

> E.g. could we tell it to schedule all the nfsd threads as a group?
> I suppose the disadvantage to that is that we'd lose information about
> how many threads are actually needed, hence lose the chance to reap
> unneeded threads?

I don't know enough about how the group scheduling works to be able
to comment in detail. In theory it sounds like it would prevent
the starvation problems, but if it prevents implementation of
dynamic NFSD pools then I don't think it's a good idea....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

Thread overview: 22+ messages
2008-07-24 17:11 high latency NFS Michael Shuey
2008-07-30 19:21 ` J. Bruce Fields
2008-07-30 21:40   ` Shehjar Tikoo
2008-07-31  2:35     ` Michael Shuey
2008-07-31  3:15       ` J. Bruce Fields
2008-07-31  7:03         ` Neil Brown
2008-08-01  7:23           ` Dave Chinner
2008-08-01 19:15             ` J. Bruce Fields
2008-08-04  0:32               ` Dave Chinner
2008-08-04  1:11                 ` J. Bruce Fields
2008-08-04  2:14                   ` Dave Chinner [this message]
2008-08-04  9:18                   ` Bernd Schubert
2008-08-04  9:25                     ` Greg Banks
2008-08-04  1:29                 ` NeilBrown
2008-08-04  6:42                   ` Greg Banks
2008-08-04 19:07                     ` J. Bruce Fields
2008-08-05 10:51                       ` Greg Banks
2008-08-01 19:23             ` J. Bruce Fields
2008-08-04  0:38               ` Dave Chinner
2008-08-04  8:04   ` Greg Banks
2008-07-31  0:07 ` Lee Revell
2008-07-31 18:06 ` Enrico Weigelt
