public inbox for linux-fsdevel@vger.kernel.org
From: Mike Snitzer <snitzer@kernel.org>
To: Chuck Lever <chuck.lever@oracle.com>
Cc: Jeff Layton <jlayton@kernel.org>,
	linux-nfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-block@vger.kernel.org, Jens Axboe <axboe@kernel.dk>,
	jonathan.flynn@hammerspace.com
Subject: Re: A comparison of the new nfsd iomodes (and an experimental one)
Date: Fri, 27 Mar 2026 12:57:38 -0400	[thread overview]
Message-ID: <aca3ApIPUGAovh_7@kernel.org> (raw)
In-Reply-To: <43921656-e0c3-4b55-ad1f-4965ff40f1f4@oracle.com>

On Fri, Mar 27, 2026 at 09:19:07AM -0400, Chuck Lever wrote:
> On 3/27/26 7:32 AM, Jeff Layton wrote:
> > On Thu, 2026-03-26 at 16:48 -0400, Mike Snitzer wrote:
> >
> >> Your bandwidth for 1MB sequential IO of 793 MB/s for O_DIRECT and
> >> 4,952 MB/s for buffered and dontcache is considerably less than the 72
> >> GB/s offered in Jon's testbed.  Your testing isn't exposing the
> >> bottlenecks (contention) of the MM subsystem for buffered IO... I've
> >> not yet put my finger on _why_ that is.
> > 
> > That may very well be, but not everyone has a box as large as the one
> > you and Jon were working with.
>
> Right, and this is kind of a blocker for us. Practically speaking, Jon's
> results are not reproducible.
> 
> It would be immensely helpful if the MM-tipover behavior could be
> reproduced on smaller systems. Reduced physical memory size, lower
> network and storage speed, and so on, so that Jeff and I can study the
> issue on our own systems.

Hammerspace still has the same performance lab setup, so we can
certainly reproduce if needed (and try to scale it down, etc.), but
unfortunately it's currently tied up with other important work. Jon
Flynn's testing was "only" with 2 systems (the NFS server has 16 fast
NVMe drives; the client system connects over 400 GbE with RDMA). But
I'll see about shaking a couple of systems loose...

It might be useful for us to document the setup of the NFS server
side, give pointers for how to mount from the client system, and then
show the fio command line used.
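
Something along these lines, as a rough sketch only -- the export
path, mount options, file sizes, and queue depths below are
placeholders, not Jon's actual configuration:

```shell
# Server side (placeholder export in /etc/exports):
#   /srv/nfs  *(rw,no_root_squash)
# then: exportfs -ra

# Client: mount over RDMA (proto=rdma assumes an RDMA-capable NIC;
# 20049 is the conventional NFS/RDMA port)
mount -t nfs -o vers=4.2,proto=rdma,port=20049 server:/srv/nfs /mnt/nfs

# 1MB sequential reads with O_DIRECT; flip --direct=0 to exercise
# the buffered (page cache) path instead
fio --name=seqread --filename=/mnt/nfs/testfile --rw=read \
    --bs=1M --size=16G --ioengine=libaio --iodepth=32 --direct=1
```

Scaling the job count (--numjobs) up until the buffered path stops
tracking the O_DIRECT path would be one way to look for the MM
contention on smaller systems.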

I think Jens has a couple beefy servers with lots of fast NVMe. Maybe
he'd be open to testing NFS server scalability when he isn't touring
around the country watching his kid win motorsports events.. JENS!? ;)

Mike


Thread overview: 8+ messages
2026-03-26 15:23 A comparison of the new nfsd iomodes (and an experimental one) Jeff Layton
2026-03-26 15:30 ` Chuck Lever
2026-03-26 16:35   ` Jeff Layton
2026-03-26 20:48     ` Mike Snitzer
2026-03-27 11:32       ` Jeff Layton
2026-03-27 13:19         ` Chuck Lever
2026-03-27 16:57           ` Mike Snitzer [this message]
2026-03-28 12:37             ` Jeff Layton
