From: Chuck Lever <chuck.lever@oracle.com>
To: Jeff Layton <jlayton@kernel.org>, NeilBrown <neil@brown.name>,
Olga Kornievskaia <okorniev@redhat.com>,
Dai Ngo <dai.ngo@oracle.com>, Tom Talpey <tom@talpey.com>
Cc: linux-nfs@vger.kernel.org, Chuck Lever <chuck.lever@oracle.com>
Subject: Re: [RFC PATCH 1/2] sunrpc: Replace the rq_bvec array with dynamically-allocated memory
Date: Wed, 16 Apr 2025 14:45:00 -0400
Message-ID: <8a0fc700-c0bc-42b3-b6c2-86a5ed171534@oracle.com>
In-Reply-To: <1086d2ecc8fc0aed85fc571e8bc4c66f6ff0fb64.camel@kernel.org>

On 4/16/25 2:42 PM, Jeff Layton wrote:
> On Wed, 2025-04-16 at 11:28 -0400, cel@kernel.org wrote:
>> From: Chuck Lever <chuck.lever@oracle.com>
>>
>> As a step towards making NFSD's maximum rsize and wsize variable,
>> replace the fixed-size rq_bvec[] array in struct svc_rqst with a
>> chunk of dynamically-allocated memory.
>>
>> On a system with 8-byte pointers and 4KB pages, pahole reports that
>> the rq_bvec[] array is 4144 bytes. Replacing it with a single
>> pointer reduces the size of struct svc_rqst to about 7500 bytes.
>>
>> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
>> ---
>>  include/linux/sunrpc/svc.h | 2 +-
>>  net/sunrpc/svc.c           | 6 ++++++
>>  net/sunrpc/svcsock.c       | 7 +++----
>>  3 files changed, 10 insertions(+), 5 deletions(-)
>>
>> diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
>> index 74658cca0f38..225c385085c3 100644
>> --- a/include/linux/sunrpc/svc.h
>> +++ b/include/linux/sunrpc/svc.h
>> @@ -195,7 +195,7 @@ struct svc_rqst {
>>  
>>  	struct folio_batch	rq_fbatch;
>>  	struct kvec		rq_vec[RPCSVC_MAXPAGES]; /* generally useful.. */
>> -	struct bio_vec		rq_bvec[RPCSVC_MAXPAGES];
>> +	struct bio_vec		*rq_bvec;
>
> It's a reasonable start.
>
> What would also be good to do here is to replace the invocations of
> RPCSVC_MAXPAGES that involve this array with a helper function that
> returns the length of it.
>
> For now it could just return RPCSVC_MAXPAGES, but eventually you could
> add (e.g.) a rqstp->rq_bvec_len field and use that to indicate how many
> entries there are in rq_bvec.
rq_vec, rq_pages, and rq_bvec all have the same entry count (plus or
minus one), so only one new field is necessary. There are a few other
places that allocate arrays of size RPCSVC_MAXPAGES that will need
similar treatment.

Stay tuned for v2.
>>  	__be32			rq_xid;		/* transmission id */
>>  	u32			rq_prog;	/* program number */
>> diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
>> index e7f9c295d13c..db29819716b8 100644
>> --- a/net/sunrpc/svc.c
>> +++ b/net/sunrpc/svc.c
>> @@ -673,6 +673,7 @@ static void
>>  svc_rqst_free(struct svc_rqst *rqstp)
>>  {
>>  	folio_batch_release(&rqstp->rq_fbatch);
>> +	kfree(rqstp->rq_bvec);
>>  	svc_release_buffer(rqstp);
>>  	if (rqstp->rq_scratch_page)
>>  		put_page(rqstp->rq_scratch_page);
>> @@ -711,6 +712,11 @@ svc_prepare_thread(struct svc_serv *serv, struct svc_pool *pool, int node)
>>  	if (!svc_init_buffer(rqstp, serv->sv_max_mesg, node))
>>  		goto out_enomem;
>>  
>> +	rqstp->rq_bvec = kcalloc_node(RPCSVC_MAXPAGES, sizeof(struct bio_vec),
>> +				      GFP_KERNEL, node);
>> +	if (!rqstp->rq_bvec)
>> +		goto out_enomem;
>> +
>>  	rqstp->rq_err = -EAGAIN;	/* No error yet */
>>  
>>  	serv->sv_nrthreads += 1;
>> diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
>> index 72e5a01df3d3..671640933f18 100644
>> --- a/net/sunrpc/svcsock.c
>> +++ b/net/sunrpc/svcsock.c
>> @@ -713,8 +713,7 @@ static int svc_udp_sendto(struct svc_rqst *rqstp)
>>  	if (svc_xprt_is_dead(xprt))
>>  		goto out_notconn;
>>  
>> -	count = xdr_buf_to_bvec(rqstp->rq_bvec,
>> -				ARRAY_SIZE(rqstp->rq_bvec), xdr);
>> +	count = xdr_buf_to_bvec(rqstp->rq_bvec, RPCSVC_MAXPAGES, xdr);
>>  
>>  	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, rqstp->rq_bvec,
>>  		      count, rqstp->rq_res.len);
>> @@ -1219,8 +1218,8 @@ static int svc_tcp_sendmsg(struct svc_sock *svsk, struct svc_rqst *rqstp,
>>  	memcpy(buf, &marker, sizeof(marker));
>>  	bvec_set_virt(rqstp->rq_bvec, buf, sizeof(marker));
>>  
>> -	count = xdr_buf_to_bvec(rqstp->rq_bvec + 1,
>> -				ARRAY_SIZE(rqstp->rq_bvec) - 1, &rqstp->rq_res);
>> +	count = xdr_buf_to_bvec(rqstp->rq_bvec + 1, RPCSVC_MAXPAGES,
>> +				&rqstp->rq_res);
>>  
>>  	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, rqstp->rq_bvec,
>>  		      1 + count, sizeof(marker) + rqstp->rq_res.len);
>
--
Chuck Lever