From: cel@kernel.org
To: NeilBrown <neil@brown.name>, Jeff Layton <jlayton@kernel.org>,
Olga Kornievskaia <okorniev@redhat.com>,
Dai Ngo <dai.ngo@oracle.com>, Tom Talpey <tom@talpey.com>
Cc: <linux-nfs@vger.kernel.org>, Chuck Lever <chuck.lever@oracle.com>
Subject: [PATCH v2 04/10] sunrpc: Replace the rq_vec array with dynamically-allocated memory
Date: Sat, 19 Apr 2025 13:28:12 -0400
Message-ID: <20250419172818.6945-5-cel@kernel.org>
In-Reply-To: <20250419172818.6945-1-cel@kernel.org>
From: Chuck Lever <chuck.lever@oracle.com>

As a step towards making NFSD's maximum rsize and wsize variable at
run-time, replace the fixed-size rq_vec[] array in struct svc_rqst
with a chunk of dynamically-allocated memory.

The rq_vec array is sized assuming request processing will need at
most one kvec per page in a maximum-sized RPC message.

On a system with 8-byte pointers and 4KB pages, pahole reports that
the rq_vec[] array is 4144 bytes. Replacing it with a single
pointer reduces the size of struct svc_rqst to about 5400 bytes.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
fs/nfsd/nfs4proc.c | 1 -
fs/nfsd/vfs.c | 2 +-
include/linux/sunrpc/svc.h | 2 +-
net/sunrpc/svc.c | 8 +++++++-
4 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
index b397246dae7b..d1be58b557d1 100644
--- a/fs/nfsd/nfs4proc.c
+++ b/fs/nfsd/nfs4proc.c
@@ -1228,7 +1228,6 @@ nfsd4_write(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
write->wr_how_written = write->wr_stable_how;
nvecs = svc_fill_write_vector(rqstp, &write->wr_payload);
- WARN_ON_ONCE(nvecs > ARRAY_SIZE(rqstp->rq_vec));
status = nfsd_vfs_write(rqstp, &cstate->current_fh, nf,
write->wr_offset, rqstp->rq_vec, nvecs, &cnt,
diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
index 9abdc4b75813..4eaac3aa7e15 100644
--- a/fs/nfsd/vfs.c
+++ b/fs/nfsd/vfs.c
@@ -1094,7 +1094,7 @@ __be32 nfsd_iter_read(struct svc_rqst *rqstp, struct svc_fh *fhp,
++v;
base = 0;
}
- WARN_ON_ONCE(v > ARRAY_SIZE(rqstp->rq_vec));
+ WARN_ON_ONCE(v > rqstp->rq_maxpages);
trace_nfsd_read_vector(rqstp, fhp, offset, *count);
iov_iter_kvec(&iter, ITER_DEST, rqstp->rq_vec, v, *count);
diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
index 96ac12dbb04d..72d016772711 100644
--- a/include/linux/sunrpc/svc.h
+++ b/include/linux/sunrpc/svc.h
@@ -207,7 +207,7 @@ struct svc_rqst {
struct page * *rq_page_end; /* one past the last page */
struct folio_batch rq_fbatch;
- struct kvec rq_vec[RPCSVC_MAXPAGES]; /* generally useful.. */
+ struct kvec *rq_vec;
struct bio_vec rq_bvec[RPCSVC_MAXPAGES];
__be32 rq_xid; /* transmission id */
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index 682e11c9be36..5808d4b97547 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -675,6 +675,7 @@ static void
svc_rqst_free(struct svc_rqst *rqstp)
{
folio_batch_release(&rqstp->rq_fbatch);
+ kfree(rqstp->rq_vec);
svc_release_buffer(rqstp);
if (rqstp->rq_scratch_page)
put_page(rqstp->rq_scratch_page);
@@ -713,6 +714,11 @@ svc_prepare_thread(struct svc_serv *serv, struct svc_pool *pool, int node)
if (!svc_init_buffer(rqstp, serv, node))
goto out_enomem;
+ rqstp->rq_vec = kcalloc_node(rqstp->rq_maxpages, sizeof(struct kvec),
+ GFP_KERNEL, node);
+ if (!rqstp->rq_vec)
+ goto out_enomem;
+
rqstp->rq_err = -EAGAIN; /* No error yet */
serv->sv_nrthreads += 1;
@@ -1750,7 +1756,7 @@ unsigned int svc_fill_write_vector(struct svc_rqst *rqstp,
++pages;
}
- WARN_ON_ONCE(i > ARRAY_SIZE(rqstp->rq_vec));
+ WARN_ON_ONCE(i > rqstp->rq_maxpages);
return i;
}
EXPORT_SYMBOL_GPL(svc_fill_write_vector);
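The svc.c hunks above pair a kcalloc_node() at thread setup with a kfree() at
teardown. A minimal userspace analogue of that lifecycle is sketched below;
struct kvec is paraphrased, and svc_rqst_sketch, prepare_thread, and
free_thread are hypothetical names for illustration, not kernel symbols:

```c
#include <stddef.h>
#include <stdlib.h>

/* Userspace analogue of the pattern the patch introduces: the
 * per-thread kvec array is sized from rq_maxpages at setup and
 * released at teardown. */
struct kvec {
	void *iov_base;
	size_t iov_len;
};

struct svc_rqst_sketch {
	unsigned long rq_maxpages;	/* derived earlier from sv_max_mesg */
	struct kvec *rq_vec;		/* was: struct kvec rq_vec[RPCSVC_MAXPAGES] */
};

/* kcalloc_node() analogue: a zeroed array sized at run time */
static int prepare_thread(struct svc_rqst_sketch *rqstp, unsigned long maxpages)
{
	rqstp->rq_maxpages = maxpages;
	rqstp->rq_vec = calloc(maxpages, sizeof(struct kvec));
	return rqstp->rq_vec ? 0 : -1;	/* failure: the out_enomem path */
}

/* kfree() analogue, as in svc_rqst_free() */
static void free_thread(struct svc_rqst_sketch *rqstp)
{
	free(rqstp->rq_vec);
	rqstp->rq_vec = NULL;
}
```

As with kfree(), free() on a NULL pointer is a no-op, so a thread whose setup
failed partway can still go through the common teardown path.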
--
2.49.0