Linux NFS development
From: cel@kernel.org
To: NeilBrown <neil@brown.name>, Jeff Layton <jlayton@kernel.org>,
	Olga Kornievskaia <okorniev@redhat.com>,
	Dai Ngo <dai.ngo@oracle.com>, Tom Talpey <tom@talpey.com>
Cc: <linux-nfs@vger.kernel.org>, Chuck Lever <chuck.lever@oracle.com>
Subject: [PATCH v3 07/11] svcrdma: Adjust the number of RDMA contexts per transport
Date: Wed, 23 Apr 2025 11:21:13 -0400
Message-ID: <20250423152117.5418-8-cel@kernel.org>
In-Reply-To: <20250423152117.5418-1-cel@kernel.org>

From: Chuck Lever <chuck.lever@oracle.com>

The RDMA accept code requests enough RDMA contexts to read
and write one page per maximum-size RPC message, plus one
context that is recycled for the next RPC. Size this request
from the server's actual maximum message size, via
svc_serv_maxpages(), instead of the compile-time
RPCSVC_MAXPAGES constant.

Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 net/sunrpc/xprtrdma/svc_rdma_transport.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
index aca8bdf65d72..22687533c3e9 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
@@ -467,7 +467,7 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 	 * progress even if the client is using one rkey per page
 	 * in each Read chunk.
 	 */
-	ctxts = 3 * RPCSVC_MAXPAGES;
+	ctxts = 3 * svc_serv_maxpages(xprt->xpt_server);
 	newxprt->sc_sq_depth = rq_depth + ctxts;
 	if (newxprt->sc_sq_depth > dev->attrs.max_qp_wr)
 		newxprt->sc_sq_depth = dev->attrs.max_qp_wr;
-- 
2.49.0


Thread overview: 14+ messages
2025-04-23 15:21 [PATCH v3 00/11] Allocate payload arrays dynamically cel
2025-04-23 15:21 ` [PATCH v3 01/11] sunrpc: Remove backchannel check in svc_init_buffer() cel
2025-04-23 15:21 ` [PATCH v3 02/11] sunrpc: Add a helper to derive maxpages from sv_max_mesg cel
2025-04-23 15:21 ` [PATCH v3 03/11] sunrpc: Replace the rq_pages array with dynamically-allocated memory cel
2025-04-23 15:21 ` [PATCH v3 04/11] sunrpc: Replace the rq_vec " cel
2025-04-23 15:21 ` [PATCH v3 05/11] sunrpc: Replace the rq_bvec " cel
2025-04-23 15:21 ` [PATCH v3 06/11] sunrpc: Adjust size of socket's receive page array dynamically cel
2025-04-23 15:21 ` [PATCH v3 07/11] svcrdma: Adjust the number of RDMA contexts per transport cel [this message]
2025-04-23 15:21 ` [PATCH v3 08/11] svcrdma: Adjust the number of entries in svc_rdma_recv_ctxt::rc_pages cel
2025-04-23 15:21 ` [PATCH v3 09/11] svcrdma: Adjust the number of entries in svc_rdma_send_ctxt::sc_pages cel
2025-04-23 15:21 ` [PATCH v3 10/11] sunrpc: Remove the RPCSVC_MAXPAGES macro cel
2025-04-23 15:21 ` [PATCH v3 11/11] NFSD: Remove NFSSVC_MAXBLKSIZE from .pc_xdrressize cel
2025-04-26  4:31   ` NeilBrown
2025-04-26 15:08     ` Chuck Lever
