From: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
To: Chuck Lever <chuck.lever@oracle.com>
Cc: linux-rdma <linux-rdma@vger.kernel.org>,
	Linux NFS Mailing List <linux-nfs@vger.kernel.org>
Subject: Re: [PATCH v1 09/20] xprtrdma: Limit the number of rpcrdma_mws
Date: Tue, 7 Jun 2016 15:28:31 -0600
Message-ID: <20160607212831.GA9192@obsidianresearch.com>
In-Reply-To: <F1820E16-A2BF-4529-A5C9-59C9AD57369A@oracle.com>

On Tue, Jun 07, 2016 at 05:09:47PM -0400, Chuck Lever wrote:

> > Either the number in queue is limited by the invalidation
> > synchronization point to 2, or in the case of iwarp read, it doesn't
> > matter since everything executes serially in order and the value can
> > safely wrap.
> 
> I don't understand this. How does invalidation prevent
> the ULP from registering more R_keys?

The ULP doesn't 'register' rkeys; it only changes the active rkey
assigned to an MR. The number of active rkeys is therefore strictly
limited by the number of available MRs.

Further, changing the rkey can only be done as part of an invalidate
cycle.

Invalidating an MR/rkey can only happen after synchronization with the
remote.

So the work queue looks broadly like this:

WR setup rkey 1
WR send rkey 1 to remote
<synchronize>
WR invalidate rkey 1
WR setup rkey 2
WR send rkey 2 to remote
<synchronize>

Thus the WR queue can never hold more than two rkeys per MR at any
time, and there is no need to care about the 24/8 bit split (the 24
index bits identify the MR; only the low 8 key bits change).
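To make that concrete, here is a rough sketch of one such cycle
against the kernel verbs API (ib_update_fast_reg_key(), ib_inc_rkey(),
IB_WR_LOCAL_INV and IB_WR_REG_MR all exist in include/rdma/ib_verbs.h).
The recycle_mr() helper is hypothetical, and DMA mapping,
ib_map_mr_sg(), error handling and completion handling are all elided,
so treat it as an illustration rather than the actual xprtrdma code:

#include <rdma/ib_verbs.h>

/* Called only after synchronizing with the remote, i.e. once the
 * peer is known to be finished with the MR's current rkey. */
static void recycle_mr(struct ib_qp *qp, struct ib_mr *mr)
{
	struct ib_send_wr inv_wr = {
		.opcode			= IB_WR_LOCAL_INV,
		.ex.invalidate_rkey	= mr->rkey,	/* old rkey */
		.send_flags		= IB_SEND_SIGNALED,
	};
	struct ib_reg_wr reg_wr = {
		.wr.opcode	= IB_WR_REG_MR,
		.mr		= mr,
		.access		= IB_ACCESS_REMOTE_READ,
	};
	struct ib_send_wr *bad_wr;

	/* Bump only the low 8 "key" bits; the upper 24 bits index
	 * the MR itself, which is why at most two generations of
	 * this MR's rkey can ever sit in the queue at once. */
	ib_update_fast_reg_key(mr, ib_inc_rkey(mr->rkey));
	reg_wr.key = mr->rkey;				/* new rkey */

	/* Chain the two WRs: invalidate the old generation, then
	 * register the new one (ib_map_mr_sg() would go here). */
	inv_wr.next = &reg_wr.wr;
	ib_post_send(qp, &inv_wr, &bad_wr);
}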

> And, I'd like to limit the number of pre-allocated MRs
> to no more than each transport endpoint can use. For
> xprtrdma, a transport is one QP and PD. What is that
> number?

I'm not sure I understand this question. AFAIK there is no limit on
the number of MRs that can be used with a QP; as long as an MR is
allocated, you can use it.

Typically you'd allocate enough MRs to handle your expected
concurrency level (pipeline depth).

The concurrency level is limited by many things; for instance, the
max number of posted recvs typically places a hard upper limit on it.
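A first-order sizing rule falls straight out of that arithmetic. As a
sketch (mr_pool_size() and both parameters are hypothetical names, not
anything from xprtrdma):

/* The posted-recv count caps how many requests can be in flight,
 * and each in-flight request can hold at most max_mrs_per_request
 * registered MRs, so their product bounds concurrent MR usage on
 * the QP. */
static unsigned int mr_pool_size(unsigned int max_posted_recvs,
				 unsigned int max_mrs_per_request)
{
	return max_posted_recvs * max_mrs_per_request;
}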

Jason
