From: Chuck Lever <cel@kernel.org>
To: Christoph Hellwig <hch@infradead.org>
Cc: NeilBrown <neil@brown.name>, Jeff Layton <jlayton@kernel.org>,
	Olga Kornievskaia <okorniev@redhat.com>,
	Dai Ngo <dai.ngo@oracle.com>, Tom Talpey <tom@talpey.com>,
	Anna Schumaker <anna@kernel.org>,
	linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org,
	Chuck Lever <chuck.lever@oracle.com>
Subject: Re: [PATCH v4 05/14] sunrpc: Replace the rq_vec array with dynamically-allocated memory
Date: Tue, 6 May 2025 12:31:37 -0400
Message-ID: <1ad45c3b-8882-4583-9cb2-afbc232e08d7@kernel.org>
In-Reply-To: <aBoOr0wZ5rqE6Erl@infradead.org>

On 5/6/25 9:29 AM, Christoph Hellwig wrote:
> On Mon, Apr 28, 2025 at 03:36:53PM -0400, cel@kernel.org wrote:
>> From: Chuck Lever <chuck.lever@oracle.com>
>>
>> As a step towards making NFSD's maximum rsize and wsize variable at
>> run-time, replace the fixed-size rq_vec[] array in struct svc_rqst
>> with a chunk of dynamically-allocated memory.
>>
>> The rq_vec array is sized assuming request processing will need at
>> most one kvec per page in a maximum-sized RPC message.
>>
>> On a system with 8-byte pointers and 4KB pages, pahole reports that
>> the rq_vec[] array is 4144 bytes. This patch replaces that array
>> with a single 8-byte pointer field.
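
[ For readers following the series, a rough sketch of the shape of
  that change; the init helper below is illustrative, not the
  literal diff: ]

/* Before: a fixed array, ~4144 bytes with 8-byte pointers and
 * 4KB pages:
 *
 *	struct kvec rq_vec[RPCSVC_MAXPAGES];
 *
 * After: a single 8-byte pointer, with the backing memory sized
 * and allocated when the svc thread is set up.
 */
struct svc_rqst {
	/* ... other fields ... */
	struct kvec *rq_vec;	/* one kvec per page of payload */
};

static int sketch_init_rq_vec(struct svc_rqst *rqstp,
			      unsigned long maxpages)
{
	rqstp->rq_vec = kcalloc(maxpages, sizeof(struct kvec),
				GFP_KERNEL);
	return rqstp->rq_vec ? 0 : -ENOMEM;
}
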
> 
> The right thing to do here is to kill this array.  There is no
> reason to use kvecs in the VFS read/write APIs these days; we can
> use bio_vecs just fine, for which we have another allocation.

Fair enough. That's a little more churn than I wanted to do in this
patch series, but maybe it's easier than I expect.
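
[ For concreteness, a minimal sketch of what that direction might
  look like, assuming the payload pages are already at hand along
  with a large-enough bio_vec array; the function name is made up
  for illustration: ]

static ssize_t sketch_bvec_read(struct file *file, struct page **pages,
				struct bio_vec *bvec, unsigned int npages,
				size_t count, loff_t *pos)
{
	struct iov_iter iter;
	size_t remaining = count;
	unsigned int i;

	/* Describe the payload pages with bio_vecs instead of
	 * building a kvec array from their mapped addresses.
	 */
	for (i = 0; i < npages; i++) {
		size_t len = min_t(size_t, remaining, PAGE_SIZE);

		bvec_set_page(&bvec[i], pages[i], len, 0);
		remaining -= len;
	}

	iov_iter_bvec(&iter, ITER_DEST, bvec, npages, count);
	return vfs_iter_read(file, &iter, pos, 0);
}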


> And given that both are only used by the server and never the client
> maybe they should both only be conditionally allocated?

Not sure I follow you here. The client certainly does make extensive use
of xdr_buf::bvec.
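
[ For instance, the client-side send path builds xdr_buf::bvec to
  cover the buffer's pages before handing them to the socket layer.
  Roughly this shape; the function name is made up for
  illustration: ]

static int sketch_send_pages(struct socket *sock, struct xdr_buf *xdr)
{
	struct msghdr msg = { .msg_flags = MSG_SPLICE_PAGES };
	int err;

	err = xdr_alloc_bvec(xdr, GFP_KERNEL);	/* fills xdr->bvec[] */
	if (err < 0)
		return err;

	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, xdr->bvec,
		      xdr_buf_pagecount(xdr), xdr->page_len);
	return sock_sendmsg(sock, &msg);
}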

-- 
Chuck Lever
