From: cel@kernel.org
To: NeilBrown <neil@brown.name>, Jeff Layton <jlayton@kernel.org>,
Olga Kornievskaia <okorniev@redhat.com>,
Dai Ngo <dai.ngo@oracle.com>, Tom Talpey <tom@talpey.com>
Cc: <linux-nfs@vger.kernel.org>, <linux-rdma@vger.kernel.org>,
Chuck Lever <chuck.lever@oracle.com>,
Christoph Hellwig <hch@lst.de>
Subject: [PATCH v5 03/19] sunrpc: Remove backchannel check in svc_init_buffer()
Date: Fri, 9 May 2025 15:03:37 -0400 [thread overview]
Message-ID: <20250509190354.5393-4-cel@kernel.org> (raw)
In-Reply-To: <20250509190354.5393-1-cel@kernel.org>

From: Chuck Lever <chuck.lever@oracle.com>

The server's backchannel uses struct svc_rqst, but does not use the
pages in svc_rqst::rq_pages. Its rq_arg::pages and rq_res::pages
come from the RPC client's page allocator. Currently,
svc_init_buffer() skips allocating pages in rq_pages for that
reason.

Except that svc_rqst::rq_pages is filled anyway when a backchannel
svc_rqst is passed to svc_recv() and then on to svc_alloc_arg().
This isn't really a problem at the moment, except that these pages
are allocated but then never used, as far as I can tell.
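
[Editorial aside, not part of the original patch: a minimal sketch of
the refill behavior described above, loosely modeled on the loop in
svc_alloc_arg() in net/sunrpc/svc_xprt.c. The function name and loop
shape are simplified assumptions for illustration; the real code uses
bulk page allocation and retry logic.]

#include <linux/mm.h>		/* alloc_page() */
#include <linux/sunrpc/svc.h>	/* struct svc_rqst */

/* Illustrative only: top up every empty slot in rq_pages before an
 * RPC is received. Nothing here distinguishes the backchannel, so
 * its slots get filled too, even though those pages are never used.
 */
static bool svc_fill_pages_sketch(struct svc_rqst *rqstp,
				  unsigned long pages)
{
	unsigned long i;

	for (i = 0; i < pages; i++) {
		if (rqstp->rq_pages[i])
			continue;	/* slot already populated */
		rqstp->rq_pages[i] = alloc_page(GFP_KERNEL);
		if (!rqstp->rq_pages[i])
			return false;	/* caller would retry */
	}
	return true;
}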

The problem is that later in this series, in addition to populating
the entries of rq_pages[], svc_init_buffer() will also allocate the
memory underlying the rq_pages[] array itself. If that allocation is
skipped, then svc_alloc_arg() chases a NULL pointer for ingress
backchannel requests.

This approach avoids introducing extra conditional logic in
svc_alloc_arg(), which is a hot path.
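
[Editorial aside: for context, a hedged sketch of where the series is
headed; the actual change lands in a later patch of this series. It
assumes rq_pages has already become a dynamically allocated pointer
rather than a fixed array, and the function below is hypothetical,
not the literal patch. It shows why svc_init_buffer() must now run
for every svc_rqst, backchannel included.]

#include <linux/slab.h>		/* kcalloc_node() */
#include <linux/sunrpc/svc.h>	/* struct svc_rqst */

/* Illustrative only: allocate the page-pointer array unconditionally.
 * Had the removed backchannel check stayed, rq_pages would remain
 * NULL for backchannel rqstps, and the refill loop in svc_alloc_arg()
 * would then chase that NULL pointer.
 */
static bool svc_init_buffer_sketch(struct svc_rqst *rqstp,
				   unsigned int size, int node)
{
	unsigned long pages = size / PAGE_SIZE + 1; /* request + reply */

	rqstp->rq_pages = kcalloc_node(pages + 1, sizeof(struct page *),
				       GFP_KERNEL, node);
	return rqstp->rq_pages != NULL;
}
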
Acked-by: Jeff Layton <jlayton@kernel.org>
Reviewed-by: NeilBrown <neil@brown.name>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
net/sunrpc/svc.c | 4 ----
1 file changed, 4 deletions(-)

diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index e7f9c295d13c..8ce3e6b3df6a 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -640,10 +640,6 @@ svc_init_buffer(struct svc_rqst *rqstp, unsigned int size, int node)
{
unsigned long pages, ret;
- /* bc_xprt uses fore channel allocated buffers */
- if (svc_is_backchannel(rqstp))
- return true;
-
pages = size / PAGE_SIZE + 1; /* extra page as we hold both request and reply.
* We assume one is at most one page
*/
--
2.49.0