From: cel@kernel.org
To: NeilBrown <neil@brown.name>, Jeff Layton <jlayton@kernel.org>,
Olga Kornievskaia <okorniev@redhat.com>,
Dai Ngo <dai.ngo@oracle.com>, Tom Talpey <tom@talpey.com>
Cc: <linux-nfs@vger.kernel.org>, <linux-rdma@vger.kernel.org>,
Chuck Lever <chuck.lever@oracle.com>
Subject: [PATCH v5 14/19] svcrdma: Adjust the number of entries in svc_rdma_send_ctxt::sc_pages
Date: Fri, 9 May 2025 15:03:48 -0400 [thread overview]
Message-ID: <20250509190354.5393-15-cel@kernel.org> (raw)
In-Reply-To: <20250509190354.5393-1-cel@kernel.org>
From: Chuck Lever <chuck.lever@oracle.com>

Allow allocation of more entries in the sc_pages[] array when the
maximum size of an RPC message is increased.

Reviewed-by: Jeff Layton <jlayton@kernel.org>
Reviewed-by: NeilBrown <neil@brown.name>
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
include/linux/sunrpc/svc_rdma.h | 3 ++-
net/sunrpc/xprtrdma/svc_rdma_sendto.c | 16 +++++++++++++---
2 files changed, 15 insertions(+), 4 deletions(-)
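[Note below the fold, not for the commit log: for reviewers unfamiliar with
the pattern this patch extends, here is a minimal userspace sketch of the
goto-ladder alloc/unwind flow with the page array sized at runtime. The
struct and function names (send_ctxt, send_ctxt_alloc) are illustrative
stand-ins, not the kernel API; calloc/malloc stand in for kcalloc_node and
kmalloc_node, and the DMA mapping step is elided.]

```c
#include <stdlib.h>

struct page;			/* opaque, as in the kernel */

/*
 * Simplified model of svc_rdma_send_ctxt after this patch: the page
 * pointer array is a separately allocated buffer whose length is
 * chosen at ctxt-allocation time instead of the compile-time
 * RPCSVC_MAXPAGES constant.
 */
struct send_ctxt {
	unsigned long	  sc_maxpages;
	struct page	**sc_pages;
	void		 *sc_xprt_buf;
};

/*
 * Each allocation gets its own unwind label, so a failure part-way
 * through frees exactly what was set up so far. "maxpages" plays the
 * role of svc_serv_maxpages().
 */
static struct send_ctxt *send_ctxt_alloc(unsigned long maxpages,
					 size_t req_size)
{
	struct send_ctxt *ctxt;

	ctxt = calloc(1, sizeof(*ctxt));
	if (!ctxt)
		goto fail0;
	ctxt->sc_pages = calloc(maxpages, sizeof(struct page *));
	if (!ctxt->sc_pages)
		goto fail1;
	ctxt->sc_maxpages = maxpages;
	ctxt->sc_xprt_buf = malloc(req_size);
	if (!ctxt->sc_xprt_buf)
		goto fail2;
	return ctxt;

fail2:
	free(ctxt->sc_pages);
fail1:
	free(ctxt);
fail0:
	return NULL;
}

/* Teardown mirrors the alloc path; freeing sc_pages is the new step. */
static void send_ctxt_free(struct send_ctxt *ctxt)
{
	free(ctxt->sc_xprt_buf);
	free(ctxt->sc_pages);
	free(ctxt);
}
```

The key property, mirrored in the patch, is that the new kcalloc_node()
slots into the existing ladder by renumbering the later labels, so every
exit path stays balanced.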
diff --git a/include/linux/sunrpc/svc_rdma.h b/include/linux/sunrpc/svc_rdma.h
index 1016f2feddc4..22704c2e5b9b 100644
--- a/include/linux/sunrpc/svc_rdma.h
+++ b/include/linux/sunrpc/svc_rdma.h
@@ -245,7 +245,8 @@ struct svc_rdma_send_ctxt {
 	void			*sc_xprt_buf;
 	int			sc_page_count;
 	int			sc_cur_sge_no;
-	struct page		*sc_pages[RPCSVC_MAXPAGES];
+	unsigned long		sc_maxpages;
+	struct page		**sc_pages;
 
 	struct ib_sge		sc_sges[];
 };

diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
index 96154a2367a1..914cd263c2f1 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
@@ -118,6 +118,7 @@ svc_rdma_send_ctxt_alloc(struct svcxprt_rdma *rdma)
 {
 	int node = ibdev_to_node(rdma->sc_cm_id->device);
 	struct svc_rdma_send_ctxt *ctxt;
+	unsigned long pages;
 	dma_addr_t addr;
 	void *buffer;
 	int i;
@@ -126,13 +127,19 @@ svc_rdma_send_ctxt_alloc(struct svcxprt_rdma *rdma)
 			    GFP_KERNEL, node);
 	if (!ctxt)
 		goto fail0;
+	pages = svc_serv_maxpages(rdma->sc_xprt.xpt_server);
+	ctxt->sc_pages = kcalloc_node(pages, sizeof(struct page *),
+				      GFP_KERNEL, node);
+	if (!ctxt->sc_pages)
+		goto fail1;
+	ctxt->sc_maxpages = pages;
 	buffer = kmalloc_node(rdma->sc_max_req_size, GFP_KERNEL, node);
 	if (!buffer)
-		goto fail1;
+		goto fail2;
 	addr = ib_dma_map_single(rdma->sc_pd->device, buffer,
 				 rdma->sc_max_req_size, DMA_TO_DEVICE);
 	if (ib_dma_mapping_error(rdma->sc_pd->device, addr))
-		goto fail2;
+		goto fail3;
 
 	svc_rdma_send_cid_init(rdma, &ctxt->sc_cid);
@@ -151,8 +158,10 @@ svc_rdma_send_ctxt_alloc(struct svcxprt_rdma *rdma)
 		ctxt->sc_sges[i].lkey = rdma->sc_pd->local_dma_lkey;
 	return ctxt;
 
-fail2:
+fail3:
 	kfree(buffer);
+fail2:
+	kfree(ctxt->sc_pages);
 fail1:
 	kfree(ctxt);
 fail0:
@@ -176,6 +185,7 @@ void svc_rdma_send_ctxts_destroy(struct svcxprt_rdma *rdma)
 				    rdma->sc_max_req_size,
 				    DMA_TO_DEVICE);
 		kfree(ctxt->sc_xprt_buf);
+		kfree(ctxt->sc_pages);
 		kfree(ctxt);
 	}
 }
--
2.49.0