public inbox for linux-fsdevel@vger.kernel.org
* [RFC] net/9p: raise MAX_SOCK_BUF beyond 1 MiB for fd/tcp/unix transports?
@ 2026-04-14 14:26 Pierre Barre
  2026-04-16  1:51 ` Dominique Martinet
  0 siblings, 1 reply; 2+ messages in thread
From: Pierre Barre @ 2026-04-14 14:26 UTC (permalink / raw)
  To: v9fs, linux-fsdevel; +Cc: ericvh, lucho, asmadeus, linux_oss

Hi all,

MAX_SOCK_BUF in net/9p/trans_fd.c currently caps msize at 1 MiB for the fd/tcp/unix transports. The commit that introduced this ceiling (22bb3b79290e, "net/9p: increase tcp max msize to 1MB") noted that a further bump would need the allocator moved off contiguous slab chunks.

That prerequisite appears to be met now: p9_fcall_init() in net/9p/client.c uses kvmalloc() when the transport sets supports_vmalloc = true, which fd/tcp/unix all do. So the original slab fragmentation argument against raising the cap no longer applies to these transports.

Before I put together a patch, I wanted to check:

1. Are there other reasons that the 1 MiB cap should stay?
2. If a bump is welcome, is there a target value you'd prefer (e.g. 16 MiB, 32 MiB)?

Thanks,
Pierre


* Re: [RFC] net/9p: raise MAX_SOCK_BUF beyond 1 MiB for fd/tcp/unix transports?
  2026-04-14 14:26 [RFC] net/9p: raise MAX_SOCK_BUF beyond 1 MiB for fd/tcp/unix transports? Pierre Barre
@ 2026-04-16  1:51 ` Dominique Martinet
  0 siblings, 0 replies; 2+ messages in thread
From: Dominique Martinet @ 2026-04-16  1:51 UTC (permalink / raw)
  To: Pierre Barre; +Cc: v9fs, linux-fsdevel, ericvh, lucho, linux_oss

Pierre Barre wrote on Tue, Apr 14, 2026 at 04:26:36PM +0200:
> MAX_SOCK_BUF in net/9p/trans_fd.c currently caps msize at 1 MiB for
> the fd/tcp/unix transports. The commit that introduced this ceiling
> (22bb3b79290e, "net/9p: increase tcp max msize to 1MB") noted that a
> further bump would need the allocator moved off contiguous slab
> chunks.
> 
> That prerequisite appears to be met now: p9_fcall_init() in
> net/9p/client.c uses kvmalloc() when the transport sets
> supports_vmalloc = true, which fd/tcp/unix all do. So the original
> slab fragmentation argument against raising the cap no longer applies
> to these transports.
> 
> Before I put together a patch, I wanted to check:
> 
> 1. Are there other reasons that the 1 MiB cap should stay?
> 2. If a bump is welcome, is there a target value you'd prefer (e.g. 16
> MiB, 32 MiB)?

I personally don't consider trans_fd to be performance sensitive, so I
don't care much here...

In theory it's possible to implement a zero-copy operation for trans_fd
as well (send/write directly from the bio provided by the vfs), but
that has never been a priority as far as I know.
Using kvmalloc() does make bigger buffers possible, so raising the cap
could be done as an orthogonal step, but I'm honestly not sure how much
performance gain you'd actually see here.

Since commit 60ece0833b6c ("net/9p: allocate appropriate reduced message
buffers") there shouldn't be many max-size allocations, but iirc some
remain (e.g. readdir?), so a very high value will likely still have some
performance impact. If you do care, feel free to run a quick benchmark
showing improvement for IO workloads without too much degradation for
metadata, and send a patch -- since it's something folks need to opt
into by setting msize anyway, I think it's fine.

-- 
Dominique

