linux-nfs.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
From: Jeff Layton <jlayton@redhat.com>
To: Chuck Lever <chuck.lever@oracle.com>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>,
	Andy Adamson <andros@netapp.com>,
	Badari Pulavarty <pbadari@us.ibm.com>,
	linux-nfs@vger.kernel.org, khoa@us.ibm.com
Subject: Re: [RFC][PATCH] Vector read/write support for NFS (DIO) client
Date: Wed, 13 Apr 2011 15:04:19 -0400	[thread overview]
Message-ID: <20110413150419.7ec07418@corrin.poochiereds.net> (raw)
In-Reply-To: <D860A5B3-B6D5-4810-936B-ADC78E097997@oracle.com>

On Wed, 13 Apr 2011 14:47:05 -0400
Chuck Lever <chuck.lever@oracle.com> wrote:

> 
> On Apr 13, 2011, at 2:14 PM, Trond Myklebust wrote:
> 
> > On Wed, 2011-04-13 at 13:56 -0400, Andy Adamson wrote:
> >> On Apr 13, 2011, at 1:20 PM, Jeff Layton wrote:
> >> 
> >>> On Wed, 13 Apr 2011 10:22:13 -0400
> >>> Trond Myklebust <Trond.Myklebust@netapp.com> wrote:
> >>> 
> >>>> On Wed, 2011-04-13 at 10:02 -0400, Jeff Layton wrote:
> >>>>> We could put the rpc_rqst's into a slabcache, and give each rpc_xprt a
> >>>>> mempool with a minimum number of slots. Have them all be allocated with
> >>>>> GFP_NOWAIT. If it gets a NULL pointer back, then the task can sleep on
> >>>>> the waitqueue like it does today. Then, the clients can allocate
> >>>>> rpc_rqst's as they need as long as memory holds out for it.
> >>>>> 
> >>>>> We have the reserve_xprt stuff to handle congestion control anyway so I
> >>>>> don't really see the value in the artificial limits that the slot table
> >>>>> provides.
> >>>>> 
> >>>>> Maybe I should hack up a patchset for this...
> >>>> 
> >>>> This issue has come up several times recently. My preference would be to
> >>>> tie the availability of slots to the TCP window size, and basically say
> >>>> that if the SOCK_ASYNC_NOSPACE flag is set on the socket, then we hold
> >>>> off allocating more slots until we get a ->write_space() callback which
> >>>> clears that flag.
> >>>> 
> >>>> For the RDMA case, we can continue to use the current system of a fixed
> >>>> number of preallocated slots.
> >>>> 
> >>> 
> >>> I take it then that we'd want a similar scheme for UDP as well? I guess
> >>> I'm just not sure what the slot table is supposed to be for.
> >> 
> >> [andros] I look at the rpc_slot table as a representation of the amount of data the connection to the server
> >> can handle - basically the #slots should = double the bandwidth-delay product divided by the max(rsize/wsize).
> >> For TCP, this is the window size. (ping of max MTU packet * interface bandwidth).
> >> There is no reason to allocate more rpc_rqsts that can fit on the wire. 
> > 
> > Agreed, but as I said earlier, there is no reason to even try to use UDP
> > on high bandwidth links, so I suggest we just leave it as-is.
> 
> I think Jeff is suggesting that all the transports should use the same logic, but UDP and RDMA should simply have fixed upper limits on their slot table size.  UDP would then behave the same as before, but would share code with the others.  That might be cleaner than maintaining separate slot allocation mechanisms for each transport.
> 
> In other words, share the code, but parametrize it so that UDP and RDMA have effectively fixed slot tables as before, but TCP is allowed to expand.
>

That was my initial thought, but Trond has a point that there's no
reason to allocate info for a call that we're not able to send.

The idea of hooking up congestion feedback from the networking layer
into the slot allocation code sounds intriguing, so for now I'll stop
armchair quarterbacking and just wait to see what Andy comes up with :)

-- 
Jeff Layton <jlayton@redhat.com>

  reply	other threads:[~2011-04-13 19:04 UTC|newest]

Thread overview: 24+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2011-04-12 15:32 [RFC][PATCH] Vector read/write support for NFS (DIO) client Badari Pulavarty
2011-04-12 15:36 ` Chuck Lever
2011-04-12 16:15   ` Badari Pulavarty
2011-04-12 16:42     ` Chuck Lever
2011-04-12 17:46       ` Badari Pulavarty
2011-04-13 12:36         ` Jeff Layton
2011-04-13 13:43           ` Badari Pulavarty
2011-04-13 14:02             ` Jeff Layton
2011-04-13 14:22               ` Trond Myklebust
2011-04-13 14:27                 ` Andy Adamson
2011-04-13 17:20                 ` Jeff Layton
2011-04-13 17:35                   ` Trond Myklebust
2011-04-13 17:56                   ` Andy Adamson
2011-04-13 18:14                     ` Trond Myklebust
2011-04-13 18:47                       ` Chuck Lever
2011-04-13 19:04                         ` Jeff Layton [this message]
2011-04-14  0:21                     ` Dean
2011-04-14  0:42                       ` Trond Myklebust
2011-04-14  6:39                         ` Dean
2011-04-12 15:49 ` Trond Myklebust
     [not found]   ` <1302623369.4801.28.camel-SyLVLa/KEI9HwK5hSS5vWB2eb7JE58TQ@public.gmane.org>
2011-04-12 16:17     ` Badari Pulavarty
2011-04-12 16:26       ` Trond Myklebust
2011-04-15 17:33   ` Christoph Hellwig
2011-04-15 18:00     ` Trond Myklebust

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20110413150419.7ec07418@corrin.poochiereds.net \
    --to=jlayton@redhat.com \
    --cc=Trond.Myklebust@netapp.com \
    --cc=andros@netapp.com \
    --cc=chuck.lever@oracle.com \
    --cc=khoa@us.ibm.com \
    --cc=linux-nfs@vger.kernel.org \
    --cc=pbadari@us.ibm.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).