From: Chuck Lever <cel@kernel.org>
To: NeilBrown <neilb@ownmail.net>, Jeff Layton <jlayton@kernel.org>,
Olga Kornievskaia <okorniev@redhat.com>,
Dai Ngo <dai.ngo@oracle.com>, Tom Talpey <tom@talpey.com>
Cc: <linux-nfs@vger.kernel.org>, Chuck Lever <chuck.lever@oracle.com>
Subject: [RFC PATCH 4/6] svcrdma: preserve rq_next_page in svc_rdma_save_io_pages
Date: Sun, 22 Feb 2026 11:20:00 -0500 [thread overview]
Message-ID: <20260222162002.10613-5-cel@kernel.org> (raw)
In-Reply-To: <20260222162002.10613-1-cel@kernel.org>

From: Chuck Lever <chuck.lever@oracle.com>

svc_rdma_save_io_pages() transfers response pages to the send
context and sets the corresponding rq_respages slots to NULL. It
then resets rq_next_page to equal rq_respages, hiding the NULL
region from svc_rqst_release_pages().

Now that svc_rqst_release_pages() handles NULL entries, this
reset is no longer necessary. Removing it preserves the
invariant that the range [rq_respages, rq_next_page) accurately
delimits the response pages that were consumed, enabling a
subsequent optimization in svc_alloc_arg() that refills only
the consumed range.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
net/sunrpc/xprtrdma/svc_rdma_sendto.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
index 914cd263c2f1..17c8429da9d5 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
@@ -858,7 +858,8 @@ int svc_rdma_map_reply_msg(struct svcxprt_rdma *rdma,
/* The svc_rqst and all resources it owns are released as soon as
* svc_rdma_sendto returns. Transfer pages under I/O to the ctxt
- * so they are released by the Send completion handler.
+ * so they are released only after Send completion, and not by
+ * svc_rqst_release_pages().
*/
static void svc_rdma_save_io_pages(struct svc_rqst *rqstp,
struct svc_rdma_send_ctxt *ctxt)
@@ -870,9 +871,6 @@ static void svc_rdma_save_io_pages(struct svc_rqst *rqstp,
ctxt->sc_pages[i] = rqstp->rq_respages[i];
rqstp->rq_respages[i] = NULL;
}
-
- /* Prevent svc_xprt_release from releasing pages in rq_pages */
- rqstp->rq_next_page = rqstp->rq_respages;
}
/* Prepare the portion of the RPC Reply that will be transmitted
--
2.53.0

Thread overview: 10+ messages
2026-02-22 16:19 [RFC PATCH 0/6] Optimize NFSD buffer page management Chuck Lever
2026-02-22 16:19 ` [RFC PATCH 1/6] sunrpc: Tighten bounds checking in svc_rqst_replace_page Chuck Lever
2026-02-22 16:19 ` [RFC PATCH 2/6] sunrpc: Allocate a separate Reply page array Chuck Lever
2026-02-23 0:15 ` NeilBrown
2026-02-23 14:43 ` Chuck Lever
2026-02-22 16:19 ` [RFC PATCH 3/6] sunrpc: Handle NULL entries in svc_rqst_release_pages Chuck Lever
2026-02-22 16:20 ` Chuck Lever [this message]
2026-02-22 16:20 ` [RFC PATCH 5/6] sunrpc: Track consumed rq_pages entries Chuck Lever
2026-02-23 0:19 ` NeilBrown
2026-02-22 16:20 ` [RFC PATCH 6/6] sunrpc: Optimize rq_respages allocation in svc_alloc_arg Chuck Lever