From: "bfields@fieldses.org" <bfields@fieldses.org>
To: Stanislav Kinsbursky <skinsbursky@parallels.com>
Cc: "linux-nfs@vger.kernel.org" <linux-nfs@vger.kernel.org>,
	"devel@openvz.org" <devel@openvz.org>
Subject: Re: NFSd threads amount policy in containers context
Date: Tue, 27 Nov 2012 09:31:31 -0500	[thread overview]
Message-ID: <20121127143131.GC27142@fieldses.org> (raw)
In-Reply-To: <50B4740A.6000302@parallels.com>

On Tue, Nov 27, 2012 at 12:04:26PM +0400, Stanislav Kinsbursky wrote:
> 27.11.2012 02:08, bfields@fieldses.org пишет:
> >On Mon, Nov 26, 2012 at 08:09:01PM +0400, Stanislav Kinsbursky wrote:
> >>Hello.
> >>I would like to discuss how to control the number of NFSd threads
> >>from a container environment (in this particular case, that means
> >>starting the NFS server in a network namespace other than init_net).
> >>
> >>So, I see three possible policies (let's assume there are two
> >>containers: one requested 3 NFSd threads and the other requested 4):
> >>1) Start as many threads as requested, i.e. 7 threads in the case
> >>above (the simplest option, but probably too many - 100 containers
> >>would start ~800 threads by default).
> >>2) Start the maximum number of threads requested, i.e. 4 threads in
> >>the case above (if the NFSd server in the container that requested 4
> >>threads is stopped, 3 threads are left working; this requires some
> >>way to track the maximum - an rb-tree or a sorted list).
> >>3) There could be some other (more flexible) policy: combine the
> >>second one with running one more thread for each second and further
> >>network namespace that has started an NFS server (a short sketch of
> >>this arithmetic follows the message), i.e.:
> >>1 net ns: 3 threads requested = 3 threads started
> >>2 net ns: 4 threads requested = 4 + 1 (per-net threads: 1 net ns) = 5 threads started
> >>3 net ns: 8 threads requested = 8 + 2 (per-net threads: 2 net ns) = 10 threads started
> >>
> >>Bruce and community, what do you think about all this?
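
A minimal sketch of the policy-3 arithmetic described above, in C; the
helper name nfsd_policy3_nrthreads() is purely illustrative and does
not correspond to any real kernel function:

  /*
   * Policy 3: serve the largest per-namespace request, plus one
   * extra thread for every network namespace beyond the first
   * that runs an NFS server.
   */
  static unsigned int nfsd_policy3_nrthreads(unsigned int max_requested,
                                             unsigned int nr_net_ns)
  {
          if (nr_net_ns == 0)
                  return 0;       /* no NFS servers running */
          return max_requested + (nr_net_ns - 1);
  }

  /* Matches the examples above: (3,1) -> 3, (4,2) -> 5, (8,3) -> 10. */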
> >
> >I agree that options 2 or 3 seem more likely to be optimal.
> >
> >However, looking at the problems with, for example, getting race-free
> >shutdown correct: I'd *strongly* prefer that we start with 1, because I
> >think it will be simplest to get right.
> >
> >I'd rather put off figuring out how to scale to hundreds of containers
> >until after we demonstrate something simple and obviously correct.
> >
> 
> Ok. Then I think we could implement an even better and simpler
> solution: make the whole nfsd_serv per network namespace.
> This solution is easy to implement, is non-racy on shutdown, and
> gives us a fairly easy way to apply scheduler policies to NFSd
> threads (this will most probably be required in the future).
> Does that sound good to you?

Yes, that sounds good, thanks!

--b.
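
For context, the standard kernel mechanism behind "make the whole
nfsd_serv per network namespace" is pernet operations with a
net_generic() area. The sketch below is illustrative only: it uses the
real register_pernet_subsys()/net_generic() API, but the structure
shown is an assumption for this sketch, not the actual nfsd patch that
came out of this discussion:

  #include <net/net_namespace.h>
  #include <net/netns/generic.h>
  #include <linux/sunrpc/svc.h>

  /* Hypothetical per-net container for the nfsd service. */
  struct nfsd_net_sketch {
          struct svc_serv *nfsd_serv;     /* one RPC service per net ns */
  };

  static unsigned int nfsd_net_id __read_mostly;

  static __net_init int nfsd_sketch_init_net(struct net *net)
  {
          struct nfsd_net_sketch *nn = net_generic(net, nfsd_net_id);

          /* The per-net area is allocated zeroed by the core; the
           * service itself is created when nfsd is started inside
           * this namespace. */
          nn->nfsd_serv = NULL;
          return 0;
  }

  static __net_exit void nfsd_sketch_exit_net(struct net *net)
  {
          /* Per-namespace teardown: nfsd must already have been
           * stopped in this namespace by the time we get here. */
  }

  static struct pernet_operations nfsd_sketch_net_ops = {
          .init = nfsd_sketch_init_net,
          .exit = nfsd_sketch_exit_net,
          .id   = &nfsd_net_id,
          .size = sizeof(struct nfsd_net_sketch),
  };

  /* Registered once at module load:
   *     register_pernet_subsys(&nfsd_sketch_net_ops);
   * after which every net namespace carries its own nfsd_serv slot,
   * making startup and shutdown naturally per-namespace.
   */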


Thread overview: 4 messages
2012-11-26 16:09 NFSd threads amount policy in containers context Stanislav Kinsbursky
2012-11-26 22:08 ` bfields
2012-11-27  8:04   ` Stanislav Kinsbursky
2012-11-27 14:31     ` bfields [this message]
