* [PATCH v2 1/5] SUNRPC: Revert cc93ce9529a6 ("svcrdma: Retain the page backing rq_res.head[0].iov_base")
2023-06-12 14:09 [PATCH v2 0/5] svcrdma: Go back to releasing pages-under-I/O Chuck Lever
@ 2023-06-12 14:10 ` Chuck Lever
2023-06-12 14:24 ` Jeff Layton
2023-06-12 14:10 ` [PATCH v2 2/5] SUNRPC: Revert 579900670ac7 ("svcrdma: Remove unused sc_pages field") Chuck Lever
` (4 subsequent siblings)
5 siblings, 1 reply; 9+ messages in thread
From: Chuck Lever @ 2023-06-12 14:10 UTC
To: linux-nfs; +Cc: Chuck Lever, linux-rdma, tom
From: Chuck Lever <chuck.lever@oracle.com>
Pre-requisite for releasing pages in the send completion handler.
Reverted by hand: patch -R would not apply cleanly.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
net/sunrpc/xprtrdma/svc_rdma_sendto.c | 5 -----
1 file changed, 5 deletions(-)
diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
index a35d1e055b1a..8e7ccef74207 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
@@ -975,11 +975,6 @@ int svc_rdma_sendto(struct svc_rqst *rqstp)
ret = svc_rdma_send_reply_msg(rdma, sctxt, rctxt, rqstp);
if (ret < 0)
goto put_ctxt;
-
- /* Prevent svc_xprt_release() from releasing the page backing
- * rq_res.head[0].iov_base. It's no longer being accessed by
- * the I/O device. */
- rqstp->rq_respages++;
return 0;
reply_chunk:
* Re: [PATCH v2 1/5] SUNRPC: Revert cc93ce9529a6 ("svcrdma: Retain the page backing rq_res.head[0].iov_base")
From: Jeff Layton @ 2023-06-12 14:24 UTC
To: Chuck Lever, linux-nfs; +Cc: Chuck Lever, linux-rdma, tom
On Mon, 2023-06-12 at 10:10 -0400, Chuck Lever wrote:
> From: Chuck Lever <chuck.lever@oracle.com>
>
> Pre-requisite for releasing pages in the send completion handler.
> Reverted by hand: patch -R would not apply cleanly.
>
I'm guessing because there were other patches to this area in the
interim that you didn't want to revert?
> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
> ---
> net/sunrpc/xprtrdma/svc_rdma_sendto.c | 5 -----
> 1 file changed, 5 deletions(-)
>
> diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
> index a35d1e055b1a..8e7ccef74207 100644
> --- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
> +++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
> @@ -975,11 +975,6 @@ int svc_rdma_sendto(struct svc_rqst *rqstp)
> ret = svc_rdma_send_reply_msg(rdma, sctxt, rctxt, rqstp);
> if (ret < 0)
> goto put_ctxt;
> -
> - /* Prevent svc_xprt_release() from releasing the page backing
> - * rq_res.head[0].iov_base. It's no longer being accessed by
> - * the I/O device. */
> - rqstp->rq_respages++;
> return 0;
>
> reply_chunk:
>
>
--
Jeff Layton <jlayton@kernel.org>
* Re: [PATCH v2 1/5] SUNRPC: Revert cc93ce9529a6 ("svcrdma: Retain the page backing rq_res.head[0].iov_base")
From: Chuck Lever III @ 2023-06-12 14:25 UTC
To: Jeff Layton; +Cc: Chuck Lever, Linux NFS Mailing List, linux-rdma, Tom Talpey
> On Jun 12, 2023, at 10:24 AM, Jeff Layton <jlayton@kernel.org> wrote:
>
> On Mon, 2023-06-12 at 10:10 -0400, Chuck Lever wrote:
>> From: Chuck Lever <chuck.lever@oracle.com>
>>
>> Pre-requisite for releasing pages in the send completion handler.
>> Reverted by hand: patch -R would not apply cleanly.
>>
>
> I'm guessing because there were other patches to this area in the
> interim that you didn't want to revert?
Correct.
>> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
>> ---
>> net/sunrpc/xprtrdma/svc_rdma_sendto.c | 5 -----
>> 1 file changed, 5 deletions(-)
>>
>> diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
>> index a35d1e055b1a..8e7ccef74207 100644
>> --- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
>> +++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
>> @@ -975,11 +975,6 @@ int svc_rdma_sendto(struct svc_rqst *rqstp)
>> ret = svc_rdma_send_reply_msg(rdma, sctxt, rctxt, rqstp);
>> if (ret < 0)
>> goto put_ctxt;
>> -
>> - /* Prevent svc_xprt_release() from releasing the page backing
>> - * rq_res.head[0].iov_base. It's no longer being accessed by
>> - * the I/O device. */
>> - rqstp->rq_respages++;
>> return 0;
>>
>> reply_chunk:
>>
>>
>
> --
> Jeff Layton <jlayton@kernel.org>
--
Chuck Lever
* [PATCH v2 2/5] SUNRPC: Revert 579900670ac7 ("svcrdma: Remove unused sc_pages field")
From: Chuck Lever @ 2023-06-12 14:10 UTC
To: linux-nfs; +Cc: Chuck Lever, linux-rdma, tom
From: Chuck Lever <chuck.lever@oracle.com>
Pre-requisite for releasing pages in the send completion handler.
Reverted by hand: patch -R would not apply cleanly.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
include/linux/sunrpc/svc_rdma.h | 3 ++-
net/sunrpc/xprtrdma/svc_rdma_sendto.c | 25 +++++++++++++++++++++++++
2 files changed, 27 insertions(+), 1 deletion(-)
diff --git a/include/linux/sunrpc/svc_rdma.h b/include/linux/sunrpc/svc_rdma.h
index a0f3ea357977..8e654da55170 100644
--- a/include/linux/sunrpc/svc_rdma.h
+++ b/include/linux/sunrpc/svc_rdma.h
@@ -158,8 +158,9 @@ struct svc_rdma_send_ctxt {
struct xdr_buf sc_hdrbuf;
struct xdr_stream sc_stream;
void *sc_xprt_buf;
+ int sc_page_count;
int sc_cur_sge_no;
-
+ struct page *sc_pages[RPCSVC_MAXPAGES];
struct ib_sge sc_sges[];
};
diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
index 8e7ccef74207..4c62bc41ea40 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
@@ -213,6 +213,7 @@ struct svc_rdma_send_ctxt *svc_rdma_send_ctxt_get(struct svcxprt_rdma *rdma)
ctxt->sc_send_wr.num_sge = 0;
ctxt->sc_cur_sge_no = 0;
+ ctxt->sc_page_count = 0;
return ctxt;
out_empty:
@@ -227,6 +228,8 @@ struct svc_rdma_send_ctxt *svc_rdma_send_ctxt_get(struct svcxprt_rdma *rdma)
* svc_rdma_send_ctxt_put - Return send_ctxt to free list
* @rdma: controlling svcxprt_rdma
* @ctxt: object to return to the free list
+ *
+ * Pages left in sc_pages are DMA unmapped and released.
*/
void svc_rdma_send_ctxt_put(struct svcxprt_rdma *rdma,
struct svc_rdma_send_ctxt *ctxt)
@@ -234,6 +237,9 @@ void svc_rdma_send_ctxt_put(struct svcxprt_rdma *rdma,
struct ib_device *device = rdma->sc_cm_id->device;
unsigned int i;
+ for (i = 0; i < ctxt->sc_page_count; ++i)
+ put_page(ctxt->sc_pages[i]);
+
/* The first SGE contains the transport header, which
* remains mapped until @ctxt is destroyed.
*/
@@ -798,6 +804,25 @@ int svc_rdma_map_reply_msg(struct svcxprt_rdma *rdma,
svc_rdma_xb_dma_map, &args);
}
+/* The svc_rqst and all resources it owns are released as soon as
+ * svc_rdma_sendto returns. Transfer pages under I/O to the ctxt
+ * so they are released by the Send completion handler.
+ */
+static inline void svc_rdma_save_io_pages(struct svc_rqst *rqstp,
+ struct svc_rdma_send_ctxt *ctxt)
+{
+ int i, pages = rqstp->rq_next_page - rqstp->rq_respages;
+
+ ctxt->sc_page_count += pages;
+ for (i = 0; i < pages; i++) {
+ ctxt->sc_pages[i] = rqstp->rq_respages[i];
+ rqstp->rq_respages[i] = NULL;
+ }
+
+ /* Prevent svc_xprt_release from releasing pages in rq_pages */
+ rqstp->rq_next_page = rqstp->rq_respages;
+}
+
/* Prepare the portion of the RPC Reply that will be transmitted
* via RDMA Send. The RPC-over-RDMA transport header is prepared
* in sc_sges[0], and the RPC xdr_buf is prepared in following sges.
* [PATCH v2 3/5] svcrdma: Revert 2a1e4f21d841 ("svcrdma: Normalize Send page handling")
From: Chuck Lever @ 2023-06-12 14:10 UTC
To: linux-nfs; +Cc: Chuck Lever, linux-rdma, tom
From: Chuck Lever <chuck.lever@oracle.com>
Get rid of the completion wait in svc_rdma_sendto(), and release
pages in the send completion handler again. A subsequent patch will
handle releasing those pages more efficiently.
Reverted by hand: patch -R would not apply cleanly.
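The handoff this revert restores can be sketched with a small userspace analogue (all names here -- send_ctxt, rdma_send, wc_send, ctxt_put -- are invented stand-ins, not the kernel API): the sending thread posts the send and returns immediately, and the completion handler alone is responsible for putting the context.

```c
#include <assert.h>
#include <stdlib.h>

/* Invented userspace analogue of the send-ctxt handoff. */
struct send_ctxt {
	int pages_released;	/* stands in for the sc_pages cleanup */
};

static int released_total;

static void ctxt_put(struct send_ctxt *ctxt)
{
	released_total += ctxt->pages_released;
	free(ctxt);
}

/* Analogue of svc_rdma_wc_send() after the revert: it puts the
 * context on both the success and the error paths. */
static void wc_send(struct send_ctxt *ctxt, int wc_ok)
{
	(void)wc_ok;
	ctxt_put(ctxt);
}

/* Analogue of the sender: posts and returns; no completion wait.
 * In this sketch the "device" completes synchronously. */
static int rdma_send(struct send_ctxt *ctxt)
{
	wc_send(ctxt, 1);
	return 0;
}
```

The point of the shape is that after rdma_send() returns, the caller no longer owns the context; it must not touch it again, which is exactly why the pages under I/O have to move into the context before posting.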
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
include/linux/sunrpc/svc_rdma.h | 1 -
net/sunrpc/xprtrdma/svc_rdma_backchannel.c | 8 +-------
net/sunrpc/xprtrdma/svc_rdma_sendto.c | 27 ++++++++++++---------------
3 files changed, 13 insertions(+), 23 deletions(-)
diff --git a/include/linux/sunrpc/svc_rdma.h b/include/linux/sunrpc/svc_rdma.h
index 8e654da55170..a5ee0af2a310 100644
--- a/include/linux/sunrpc/svc_rdma.h
+++ b/include/linux/sunrpc/svc_rdma.h
@@ -154,7 +154,6 @@ struct svc_rdma_send_ctxt {
struct ib_send_wr sc_send_wr;
struct ib_cqe sc_cqe;
- struct completion sc_done;
struct xdr_buf sc_hdrbuf;
struct xdr_stream sc_stream;
void *sc_xprt_buf;
diff --git a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
index aa2227a7e552..7420a2c990c7 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
@@ -93,13 +93,7 @@ static int svc_rdma_bc_sendto(struct svcxprt_rdma *rdma,
*/
get_page(virt_to_page(rqst->rq_buffer));
sctxt->sc_send_wr.opcode = IB_WR_SEND;
- ret = svc_rdma_send(rdma, sctxt);
- if (ret < 0)
- return ret;
-
- ret = wait_for_completion_killable(&sctxt->sc_done);
- svc_rdma_send_ctxt_put(rdma, sctxt);
- return ret;
+ return svc_rdma_send(rdma, sctxt);
}
/* Server-side transport endpoint wants a whole page for its send
diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
index 4c62bc41ea40..1ae4236d04a3 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
@@ -147,7 +147,6 @@ svc_rdma_send_ctxt_alloc(struct svcxprt_rdma *rdma)
ctxt->sc_send_wr.wr_cqe = &ctxt->sc_cqe;
ctxt->sc_send_wr.sg_list = ctxt->sc_sges;
ctxt->sc_send_wr.send_flags = IB_SEND_SIGNALED;
- init_completion(&ctxt->sc_done);
ctxt->sc_cqe.done = svc_rdma_wc_send;
ctxt->sc_xprt_buf = buffer;
xdr_buf_init(&ctxt->sc_hdrbuf, ctxt->sc_xprt_buf,
@@ -286,12 +285,12 @@ static void svc_rdma_wc_send(struct ib_cq *cq, struct ib_wc *wc)
container_of(cqe, struct svc_rdma_send_ctxt, sc_cqe);
svc_rdma_wake_send_waiters(rdma, 1);
- complete(&ctxt->sc_done);
if (unlikely(wc->status != IB_WC_SUCCESS))
goto flushed;
trace_svcrdma_wc_send(wc, &ctxt->sc_cid);
+ svc_rdma_send_ctxt_put(rdma, ctxt);
return;
flushed:
@@ -299,6 +298,7 @@ static void svc_rdma_wc_send(struct ib_cq *cq, struct ib_wc *wc)
trace_svcrdma_wc_send_err(wc, &ctxt->sc_cid);
else
trace_svcrdma_wc_send_flush(wc, &ctxt->sc_cid);
+ svc_rdma_send_ctxt_put(rdma, ctxt);
svc_xprt_deferred_close(&rdma->sc_xprt);
}
@@ -315,8 +315,6 @@ int svc_rdma_send(struct svcxprt_rdma *rdma, struct svc_rdma_send_ctxt *ctxt)
struct ib_send_wr *wr = &ctxt->sc_send_wr;
int ret;
- reinit_completion(&ctxt->sc_done);
-
/* Sync the transport header buffer */
ib_dma_sync_single_for_device(rdma->sc_pd->device,
wr->sg_list[0].addr,
@@ -808,8 +806,8 @@ int svc_rdma_map_reply_msg(struct svcxprt_rdma *rdma,
* svc_rdma_sendto returns. Transfer pages under I/O to the ctxt
* so they are released by the Send completion handler.
*/
-static inline void svc_rdma_save_io_pages(struct svc_rqst *rqstp,
- struct svc_rdma_send_ctxt *ctxt)
+static void svc_rdma_save_io_pages(struct svc_rqst *rqstp,
+ struct svc_rdma_send_ctxt *ctxt)
{
int i, pages = rqstp->rq_next_page - rqstp->rq_respages;
@@ -852,6 +850,8 @@ static int svc_rdma_send_reply_msg(struct svcxprt_rdma *rdma,
if (ret < 0)
return ret;
+ svc_rdma_save_io_pages(rqstp, sctxt);
+
if (rctxt->rc_inv_rkey) {
sctxt->sc_send_wr.opcode = IB_WR_SEND_WITH_INV;
sctxt->sc_send_wr.ex.invalidate_rkey = rctxt->rc_inv_rkey;
@@ -859,13 +859,7 @@ static int svc_rdma_send_reply_msg(struct svcxprt_rdma *rdma,
sctxt->sc_send_wr.opcode = IB_WR_SEND;
}
- ret = svc_rdma_send(rdma, sctxt);
- if (ret < 0)
- return ret;
-
- ret = wait_for_completion_killable(&sctxt->sc_done);
- svc_rdma_send_ctxt_put(rdma, sctxt);
- return ret;
+ return svc_rdma_send(rdma, sctxt);
}
/**
@@ -931,8 +925,7 @@ void svc_rdma_send_error_msg(struct svcxprt_rdma *rdma,
sctxt->sc_sges[0].length = sctxt->sc_hdrbuf.len;
if (svc_rdma_send(rdma, sctxt))
goto put_ctxt;
-
- wait_for_completion_killable(&sctxt->sc_done);
+ return;
put_ctxt:
svc_rdma_send_ctxt_put(rdma, sctxt);
@@ -1006,6 +999,10 @@ int svc_rdma_sendto(struct svc_rqst *rqstp)
if (ret != -E2BIG && ret != -EINVAL)
goto put_ctxt;
+ /* Send completion releases payload pages that were part
+ * of previously posted RDMA Writes.
+ */
+ svc_rdma_save_io_pages(rqstp, sctxt);
svc_rdma_send_error_msg(rdma, sctxt, rctxt, ret);
return 0;
* [PATCH v2 4/5] svcrdma: Prevent page release when nothing was received
From: Chuck Lever @ 2023-06-12 14:10 UTC
To: linux-nfs; +Cc: Chuck Lever, linux-rdma, tom
From: Chuck Lever <chuck.lever@oracle.com>
I noticed that svc_rqst_release_pages() was still unnecessarily
releasing a page when svc_rdma_recvfrom() returns zero.
Fixes: a53d5cb0646a ("svcrdma: Avoid releasing a page in svc_xprt_release()")
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
net/sunrpc/xprtrdma/svc_rdma_recvfrom.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
index 46a719ba4917..5bd16d19b16e 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
@@ -804,6 +804,12 @@ int svc_rdma_recvfrom(struct svc_rqst *rqstp)
clear_bit(XPT_DATA, &xprt->xpt_flags);
spin_unlock(&rdma_xprt->sc_rq_dto_lock);
+ /* Prevent svc_xprt_release() from releasing pages in rq_pages
+ * when returning 0 or an error.
+ */
+ rqstp->rq_respages = rqstp->rq_pages;
+ rqstp->rq_next_page = rqstp->rq_respages;
+
/* Unblock the transport for the next receive */
svc_xprt_received(xprt);
if (!ctxt)
@@ -815,12 +821,6 @@ int svc_rdma_recvfrom(struct svc_rqst *rqstp)
DMA_FROM_DEVICE);
svc_rdma_build_arg_xdr(rqstp, ctxt);
- /* Prevent svc_xprt_release from releasing pages in rq_pages
- * if we return 0 or an error.
- */
- rqstp->rq_respages = rqstp->rq_pages;
- rqstp->rq_next_page = rqstp->rq_respages;
-
ret = svc_rdma_xdr_decode_req(&rqstp->rq_arg, ctxt);
if (ret < 0)
goto out_err;
* [PATCH v2 5/5] SUNRPC: Optimize page release in svc_rdma_sendto()
From: Chuck Lever @ 2023-06-12 14:10 UTC
To: linux-nfs; +Cc: Chuck Lever, Tom Talpey, linux-rdma, tom
From: Chuck Lever <chuck.lever@oracle.com>
Now that we have bulk page allocation and release APIs, it's more
efficient to use those than it is for nfsd threads to wait for send
completions. Previous patches have eliminated the calls to
wait_for_completion() and complete(), in order to avoid scheduler
overhead.
Now release pages-under-I/O in the send completion handler using
the efficient bulk release API.
I've measured a 7% reduction in cumulative CPU utilization in
svc_rdma_sendto(), svc_rdma_wc_send(), and svc_xprt_release(). In
particular, using release_pages() instead of complete() cuts the
time per svc_rdma_wc_send() call by two-thirds. This helps improve
scalability because svc_rdma_wc_send() is single-threaded per
connection.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Tom Talpey <tom@talpey.com>
---
net/sunrpc/xprtrdma/svc_rdma_sendto.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
index 1ae4236d04a3..24228f3611e8 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
@@ -236,8 +236,8 @@ void svc_rdma_send_ctxt_put(struct svcxprt_rdma *rdma,
struct ib_device *device = rdma->sc_cm_id->device;
unsigned int i;
- for (i = 0; i < ctxt->sc_page_count; ++i)
- put_page(ctxt->sc_pages[i]);
+ if (ctxt->sc_page_count)
+ release_pages(ctxt->sc_pages, ctxt->sc_page_count);
/* The first SGE contains the transport header, which
* remains mapped until @ctxt is destroyed.
* Re: [PATCH v2 0/5] svcrdma: Go back to releasing pages-under-I/O
From: Jeff Layton @ 2023-06-12 14:25 UTC
To: Chuck Lever, linux-nfs; +Cc: Tom Talpey, Chuck Lever, linux-rdma
On Mon, 2023-06-12 at 10:09 -0400, Chuck Lever wrote:
> Return to the behavior of releasing reply buffer pages as part of
> sending an RPC Reply over RDMA. I measured a performance improvement
> (which is documented in 5/5). Matching the page release behavior of
> socket transports also means we should be able to share a little
> more code between transports as MSG_SPLICE_PAGES rolls out.
>
> Changes since v1:
> - Add a related fix
> - Clarify some of the patch descriptions
>
> ---
>
> Chuck Lever (5):
> SUNRPC: Revert cc93ce9529a6 ("svcrdma: Retain the page backing rq_res.head[0].iov_base")
> SUNRPC: Revert 579900670ac7 ("svcrdma: Remove unused sc_pages field")
> svcrdma: Revert 2a1e4f21d841 ("svcrdma: Normalize Send page handling")
> svcrdma: Prevent page release when nothing was received
> SUNRPC: Optimize page release in svc_rdma_sendto()
>
>
> include/linux/sunrpc/svc_rdma.h | 4 +-
> net/sunrpc/xprtrdma/svc_rdma_backchannel.c | 8 +---
> net/sunrpc/xprtrdma/svc_rdma_recvfrom.c | 12 ++---
> net/sunrpc/xprtrdma/svc_rdma_sendto.c | 53 ++++++++++++++--------
> 4 files changed, 44 insertions(+), 33 deletions(-)
>
> --
> Chuck Lever
>
Kind of cool that we're getting to the place where micro-optimizations
like this make such a big difference! This all looks fine to me:
Reviewed-by: Jeff Layton <jlayton@kernel.org>