From: Jeff Layton <jlayton@redhat.com>
To: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: andros@netapp.com, linux-nfs@vger.kernel.org
Subject: Re: [[RFC] 1/1] SUNRPC: dynamic rpc_slot allocator for TCP
Date: Tue, 3 May 2011 21:33:24 -0400
Message-ID: <20110503213324.65027bef@corrin.poochiereds.net>
In-Reply-To: <1304469890.26184.16.camel@lade.trondhjem.org>
On Tue, 03 May 2011 20:44:50 -0400
Trond Myklebust <Trond.Myklebust@netapp.com> wrote:
> On Tue, 2011-05-03 at 20:20 -0400, Jeff Layton wrote:
> > On Mon, 2 May 2011 21:40:08 -0400
> > andros@netapp.com wrote:
> >
> > > From: Andy Adamson <andros@netapp.com>
> > >
> > > Hook TCP congestion feedback into rpc_slot allocation so that the RPC layer
> > > can fully utilize the negotiated TCP window.
> > >
> > > Use a slab cache for rpc_slots. At rpc_xprt allocation time, statically
> > > allocate RPC_DEF_SLOT_TABLE slots from the slab cache using GFP_KERNEL.
> > >
> > > Add a dynamic rpc slot allocator to rpc_xprt_ops which is set only for TCP.
> > > For TCP, trigger a dynamic slot allocation in response to a write_space
> > > callback which is in turn called when the TCP layer is waiting for buffer space.
> > >
> > > Dynamically add a slot at the beginning of the RPC call_transmit state. The slot
> > > allocator uses GFP_NOWAIT and will return without allocating a slot if
> > > GFP_NOWAIT allocation fails. This is OK because the write_space callback will
> > > be called again, and the dynamic slot allocator can retry.
> > >
> > > Signed-off-by: Andy Adamson <andros@netapp.com>
> > > ---
> > > include/linux/sunrpc/sched.h | 2 +
> > > include/linux/sunrpc/xprt.h | 6 +++-
> > > net/sunrpc/clnt.c | 4 ++
> > > net/sunrpc/sched.c | 39 ++++++++++++++++++++++
> > > net/sunrpc/xprt.c | 75 +++++++++++++++++++++++++++++++++++++-----
> > > net/sunrpc/xprtsock.c | 1 +
> > > 6 files changed, 117 insertions(+), 10 deletions(-)
> > >
> >
> > Nice work, comments inline below...
> >
> >
> > [...]
> >
> > > +
> > > +/*
> > > + * Static transport rpc_slot allocation called only at rpc_xprt allocation.
> > > + * No need to take the xprt->reserve_lock.
> > > + */
> > > +int
> > > +xprt_alloc_slot_entries(struct rpc_xprt *xprt, int num_req)
> > > +{
> > > + struct rpc_rqst *req;
> > > + int i;
> > > +
> > > + for (i = 0; i < num_req; i++) {
> > > + req = rpc_alloc_slot(GFP_KERNEL);
> > > + if (!req)
> > > + return -ENOMEM;
> > > + memset(req, 0, sizeof(*req));
> > > + list_add(&req->rq_list, &xprt->free);
> > > + }
> > > + dprintk("<-- %s mempool_alloc %d reqs\n", __func__,
> > > + xprt->max_reqs);
> > > + return 0;
> > > +}
> > > +
> >
> > So, I don't quite get this...
> >
> > You declare a global mempool early on, and then allocate from that
> > mempool for a list of static entries. Doesn't that sort of rob you of
> > any benefit of using a mempool here? IOW, won't the static allocations
> > potentially rob the mempool of "guaranteed" entries such that the
> > dynamic ones eventually all turn into slab allocations anyway?
>
> The other thing is that we really should _never_ be using GFP_KERNEL
> allocations in any paths that might be used for writebacks, since that
> might recurse back into the filesystem due to direct memory reclaims. In
> fact, since this code path will commonly be used by rpciod, then we
> should rather be using GFP_ATOMIC and/or GFP_NOWAIT (see rpc_malloc).
>
I think GFP_NOWAIT is probably appropriate, and that seems to be what
the patch uses for the dynamic allocations. It only uses GFP_KERNEL for
the "static" ones (created at xprt_alloc time).
> > What I think would make more sense would be to have multiple mempools
> > -- one per xprt and simply set the mempool size to the number of
> > "static" entries that you want for the mempool. Then you could get rid
> > of the free list, and just do allocations out of the mempool directly.
> > You'll be guaranteed to be able to allocate up to the number in the
> > mempool and everything above that would just become a slab allocation.
>
> I'm not sure that we need multiple mempools: all we really require is
> that memory reclaim be able to make some form of progress.
> I'd therefore rather advocate a global mempool (which matches our policy
> on the rpc buffer pool), while allowing the allocations to fail if
> we're really low on memory.
>
> The problem with multiple mempools is that if there is no activity on
> one of those filesystems, then we're basically locking up memory for no
> good reason.
>
But this patch does that too. It allocates a list of static entries at
xprt_alloc time that won't shrink when memory is tight. If that's
something we want to keep, then why bother with a mempool for the
"dynamic" entries? We'll already have a set of guaranteed entries (the
static ones).
The whole point of a mempool is to give you a pool of objects that has
a guaranteed minimum number. I think we just need to understand whether
that minimum should be a global one or per-xprt.
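
To make that tradeoff concrete, the decision really comes down to
where the pool lives and how min_nr is sized. Roughly (hypothetical
names, purely for illustration):

/* mempool_alloc() only dips into the min_nr reserved objects when
 * the underlying slab allocation fails, so min_nr is exactly the
 * guaranteed floor we're arguing about. */

/* (a) one global pool shared by all transports: */
rpc_slot_mempool = mempool_create_slab_pool(RPC_DEF_SLOT_TABLE,
					    rpc_slot_slabp);

/* (b) one pool per transport, sized at xprt_alloc() time: */
xprt->slot_mempool = mempool_create_slab_pool(xprt->max_reqs,
					      rpc_slot_slabp);

With (b) the static free list could go away entirely, but as you point
out, an idle transport then pins its whole reserve. With (a) we keep
one modest floor globally and let everything above it fail gracefully
under memory pressure.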
--
Jeff Layton <jlayton@redhat.com>