From: Dominique Martinet <asmadeus@codewreck.org>
To: Pierre Barre <pierre@barre.sh>
Cc: v9fs@lists.linux.dev, linux-fsdevel@vger.kernel.org,
ericvh@kernel.org, lucho@ionkov.net, linux_oss@crudebyte.com
Subject: Re: [RFC] net/9p: raise MAX_SOCK_BUF beyond 1 MiB for fd/tcp/unix transports?
Date: Thu, 16 Apr 2026 10:51:56 +0900 [thread overview]
Message-ID: <aeBAvOIZCbICnImG@codewreck.org> (raw)
In-Reply-To: <c2c75b03-e95d-49f4-be9a-2429fde4b5fe@app.fastmail.com>
Pierre Barre wrote on Tue, Apr 14, 2026 at 04:26:36PM +0200:
> MAX_SOCK_BUF in net/9p/trans_fd.c currently caps msize at 1 MiB for
> the fd/tcp/unix transports. The commit that introduced this ceiling
> (22bb3b79290e, "net/9p: increase tcp max msize to 1MB") noted that a
> further bump would need the allocator moved off contiguous slab
> chunks.
>
> That prerequisite appears to be met now: p9_fcall_init() in
> net/9p/client.c uses kvmalloc() when the transport sets
> supports_vmalloc = true, which fd/tcp/unix all do. So the original
> slab fragmentation argument against raising the cap no longer applies
> to these transports.
>
> Before I put together a patch, I wanted to check:
>
> 1. Are there other reasons that the 1 MiB cap should stay?
> 2. If a bump is welcome, is there a target value you'd prefer (e.g. 16
> MiB, 32 MiB)?
I personally don't consider trans_fd to be performance-sensitive, so I
don't care much here...
In theory it's possible to implement a zero-copy operation for trans_fd
as well (send/write directly from the buffers the VFS provides), but
that has never been a priority as far as I know.
Using kvmalloc() does allow bigger buffers, so raising the cap would be
possible as an orthogonal step, but I'm honestly not sure how much
performance gain you'll see here.
Since commit 60ece0833b6c ("net/9p: allocate appropriate reduced message
buffers") there shouldn't be many max-size allocations, but IIRC there
will still be some (e.g. readdir?), so a very high value will likely
still impact performance somewhat. If you do care, feel free to provide
a quick benchmark showing improvements for IO workloads without too much
degradation for metadata, and send a patch -- since it's something folks
need to opt into by setting msize anyway, I think it's fine.
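
[Editor's note: the quick A/B check asked for above might be sketched as
follows; the server address, mount point, and msize values are
placeholders, and a real 9p server must be behind the mount for the
numbers to mean anything.]

```shell
# Hedged sketch: compare an IO-heavy and a metadata-heavy workload at
# two msize values over the tcp transport. $SRV and /mnt/9p are
# placeholders.
for MSIZE in 1048576 16777216; do
    mount -t 9p -o trans=tcp,port=564,msize=$MSIZE "$SRV" /mnt/9p

    # IO workload: large sequential reads should benefit from a bigger msize.
    fio --name=seqread --directory=/mnt/9p --rw=read \
        --bs=1M --size=1G --direct=1 --numjobs=1

    # Metadata-ish workload: many small files should not regress much.
    fio --name=meta --directory=/mnt/9p --rw=randwrite \
        --bs=4k --size=64M --nrfiles=1000 --file_service_type=random

    umount /mnt/9p
done
```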
--
Dominique