From: Mike Snitzer <snitzer@kernel.org>
To: Chuck Lever <chuck.lever@oracle.com>
Cc: linux-nfs@vger.kernel.org, Jeff Layton <jlayton@kernel.org>
Subject: Re: [PATCH v8 5/7] NFSD: issue READs using O_DIRECT even if IO is misaligned
Date: Thu, 28 Aug 2025 04:09:20 -0400 [thread overview]
Message-ID: <aLAOsGUIvONZvfX7@kernel.org> (raw)
In-Reply-To: <09eca412-b6e3-4011-b7dd-3a452eae6489@oracle.com>
On Wed, Aug 27, 2025 at 09:57:39PM -0400, Chuck Lever wrote:
> On 8/27/25 7:15 PM, Mike Snitzer wrote:
> > On Wed, Aug 27, 2025 at 04:56:08PM -0400, Chuck Lever wrote:
> >> On 8/27/25 3:41 PM, Mike Snitzer wrote:
> >>> Is your suggestion to, rather than allocate a disjoint single page,
> >>> borrow the extra page from the end of rq_pages? Just map it into the
> >>> bvec instead of my extra page?
> >>
> >> Yes, the extra page needs to come from rq_pages. But I don't see why it
> >> should come from the /end/ of rq_pages.
> >>
> >> - Extend the start of the byte range back to make it align with the
> >> file's DIO alignment constraint
> >>
> >> - Extend the end of the byte range forward to make it align with the
> >> file's DIO alignment constraint
> >
> > nfsd_analyze_read_dio() does that (start_extra and end_extra).
> >
> >> - Fill in the sink buffer's bvec using pages from rq_pages, as usual
> >>
> >> - When the I/O is complete, adjust the offset in the first bvec entry
> >> forward by setting a non-zero page offset, and adjust the returned
> >> count downward to match the requested byte count from the client
> >
> > Tried it long ago, such bvec manipulation only works when not using
> > RDMA. When the memory is remote, twiddling a local bvec isn't going
> > to ensure the correct pages have the correct data upon return to the
> > client.
> >
> > RDMA is why the pages must be used in-place, and RDMA is also why
> > the extra page needed by this patch (for use as throwaway front-pad
> > for expanded misaligned DIO READ) must either be allocated _or_
> > hopefully it can be from rq_pages (after the end of the client
> > requested READ payload).
> >
> > Or am I wrong and simply need to keep learning about NFSD's IO path?
>
> You're wrong, not to put a fine point on it.
You didn't even understand me, but you firmly believe I'm wrong?
> There's nothing I can think of in the RDMA or RPC/RDMA protocols that
> mandates that the first page offset must always be zero. Moving data
> at one address on the server to an entirely different address and
> alignment on the client is exactly what RDMA is supposed to do.
>
> It sounds like an implementation omission because the server's upper
> layers have never needed it before now. If TCP already handles it, I'm
> guessing it's going to be straightforward to fix.
I never said the first page offset must be zero. I said that I
already tried what you suggested and it didn't work with RDMA. I'm
recalling work from too many months ago now, but: the client sees the
correct READ payload _except_ that, IIRC, it is offset by whatever
front-pad was added to expand the misaligned DIO, regardless of
whether rqstp->rq_bvec is updated when the IO completes.
But I'll revisit it again.
> >>> NFSD using DIO is optional. I thought the point was to get it as an
> >>> available option so that _others_ could experiment and help categorize
> >>> the benefits/pitfalls further?
> >>
> >> Yes, that is the point. But such experiments lose value if there is no
> >> data collection plan to go with them.
> >
> > Each user runs something they care about performing well and they
> > measure the result.
>
> That assumes the user will continue to use the debug interfaces, and
> the particular implementation you've proposed, for the rest of time.
> And that's not my plan at all.
>
> If we, in the community, cannot reproduce that result, or cannot
> understand what has been measured, or the measurement misses part or
> most of the picture, of what value is that for us to decide whether and
> how to proceed with promoting the mechanism from debug feature to
> something with a long-term support lifetime and a documented ABI-stable
> user interface?
I'll work to put a finer point on how to reproduce the problem and
enumerate the things to look for (representative flamegraphs showing
the issue, which I already shared at the last Bakeathon).
But I have repeatedly noted that the pathological worst case is a
client doing sequential write IO to a file that is 3-4x larger than
the NFS server's system memory.
Think large-memory systems with 8 or more NVMe devices and fast
networks that allow for huge data-ingest rates. These are the platforms
that showcase MM's dirty-writeback limitations when large sequential IO
is initiated from the NFS client and is able to overrun the NFS server.
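A worst-case workload like the one described above could be reproduced
with something like the following fio job, run on an NFS client against
a mount of the server under test. The mountpoint, filename, and the
assumption of a 256 GiB server (hence a 1 TiB file) are all
placeholders; scale size to 3-4x your server's memory:

```ini
; pathological sequential-write workload: one file 3-4x larger than
; the NFS server's system memory, written sequentially from the client
[seq-write-overrun]
directory=/mnt/nfs        ; NFS mount of the server under test
filename=overrun.bin
rw=write
bs=1M
size=1t
ioengine=psync
direct=0                  ; client-side buffered IO; the server is what we watch
```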
In addition, DIO generally requires significantly less memory and
CPU, so platforms with more limited resources (which may have
historically struggled) could get a new lease on life by switching
NFSD from buffered to DIO mode.
> > Literally the same thing as has been done for anything in Linux since
> > it all started. Nothing unicorn or bespoke here.
>
> So let me ask this another way: What do we need users to measure to give
> us good quality information about the page cache behavior and system
> thrashing behavior you reported?
IO throughput, CPU and memory usage should be monitored over time.
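A minimal sketch of that monitoring, run on the server for the duration
of the client workload. The device name and sample cadence are
assumptions; swap in your NVMe device and lengthen the run for real
measurements:

```shell
#!/bin/sh
# Sample NFS-server throughput, CPU, and dirty/writeback memory over
# time using only /proc. DEV is an assumed device name.
DEV=${DEV:-nvme0n1}

sample_once() {
    ts=$(date +%s)
    # /proc/diskstats: field 3 is the device name, field 6 is sectors
    # read, field 10 is sectors written (512-byte sectors).
    io=$(awk -v d="$DEV" '$3 == d { print "rd_sectors=" $6, "wr_sectors=" $10 }' /proc/diskstats)
    mem=$(awk '/^(MemFree|Dirty|Writeback):/ { printf "%s%s kB ", $1, $2 }' /proc/meminfo)
    cpu=$(awk '/^cpu / { print "cpu_user=" $2, "cpu_sys=" $4, "cpu_iowait=" $6 }' /proc/stat)
    echo "$ts $io $mem$cpu"
}

# Two quick samples one second apart; raise the count and interval
# for a real run so trends (e.g. Dirty growth, iowait) are visible.
sample_once
sleep 1
sample_once
```

Plotting Dirty/Writeback growth against per-device write throughput
over the run is what makes the writeback-overrun pattern visible.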
> For example: I can enable direct I/O on NFSD, but my workload is mostly
> one or two clients doing kernel builds. The latency of NFS READs goes
> up, but since a kernel build is not I/O bound and the client page caches
> hide most of the increase, there is very little to show a measured
> change.
>
> So how should I assess and report the impact of NFSD doing direct I/O?
Your underwhelming usage isn't what this patchset is meant to help.
> See -- users are not the only ones who are involved in this experiment;
> and they will need guidance because we're not providing any
> documentation for this feature.
Users are not created equal. Major companies like Oracle and Meta
_should_ be aware of NFSD's problems with buffered IO. They have
internal and external stakeholders that are power users.
Jeff, does Meta ever see NFSD struggle to consistently use NVMe
devices? Lumpy performance? Full-blown IO stalls? Lots of NFSD
threads hung in D state?
> >> If you would rather make this drive-by, then you'll have to realize
> >> that you are requesting more than simple review from us. You'll have
> >> to be content with the pace at which us overloaded maintainers can get
> >> to the work.
> >
> > I think I just experienced the mailing-list equivalent of the Detroit
> > definition of "drive-by". Good/bad news: you're a terrible shot.
>
> The term "drive-by contribution" has a well-understood meaning in the
> kernel community. If you are unfamiliar with it, I invite you to review
> the mailing list archives. As always, no-one is shooting at you. If
> anything, the drive-by contribution is aimed at me.
It is a blatant miscategorization here. That you just doubled down
on it having relevance in this instance is flagrantly wrong.
Whatever compels you to belittle me and my contributions, just know
it is extremely hard to take. Highly unproductive and unprofessional.
Boom, done.
Thread overview: 42+ messages
2025-08-26 18:57 [PATCH v8 0/7] NFSD: add "NFSD DIRECT" and "NFSD DONTCACHE" IO modes Mike Snitzer
2025-08-26 18:57 ` [PATCH v8 1/7] NFSD: filecache: add STATX_DIOALIGN and STATX_DIO_READ_ALIGN support Mike Snitzer
2025-08-26 18:57 ` [PATCH v8 2/7] NFSD: pass nfsd_file to nfsd_iter_read() Mike Snitzer
2025-08-26 18:57 ` [PATCH v8 3/7] NFSD: add io_cache_read controls to debugfs interface Mike Snitzer
2025-09-03 14:38 ` Chuck Lever
2025-09-03 15:07 ` Mike Snitzer
2025-09-03 16:02 ` Mike Snitzer
2025-09-03 16:12 ` Chuck Lever
2025-09-03 16:50 ` Mike Snitzer
2025-08-26 18:57 ` [PATCH v8 4/7] NFSD: add io_cache_write " Mike Snitzer
2025-08-26 18:57 ` [PATCH v8 5/7] NFSD: issue READs using O_DIRECT even if IO is misaligned Mike Snitzer
2025-08-27 15:34 ` Chuck Lever
2025-08-27 19:41 ` Mike Snitzer
2025-08-27 20:56 ` Chuck Lever
2025-08-27 23:15 ` Mike Snitzer
2025-08-28 1:57 ` Chuck Lever
2025-08-28 8:09 ` Mike Snitzer [this message]
2025-08-28 14:53 ` Chuck Lever
2025-08-28 18:52 ` Mike Snitzer
2025-08-30 17:38 ` [RFC PATCH 0/2] some progress on rpcrdma bug [was: Re: [PATCH v8 5/7] NFSD: issue READs using O_DIRECT even if IO is misaligned] Mike Snitzer
2025-08-30 17:38 ` [RFC PATCH 1/2] NFSD: fix misaligned DIO READ to not use a start_extra_page, exposes rpcrdma bug? Mike Snitzer
2025-09-02 14:04 ` Chuck Lever
2025-09-02 15:56 ` Chuck Lever
2025-09-02 17:59 ` Chuck Lever
2025-09-02 21:06 ` Mike Snitzer
2025-09-02 21:16 ` Chuck Lever
2025-09-02 21:27 ` Mike Snitzer
2025-09-02 22:18 ` Mike Snitzer
2025-09-04 19:07 ` Chuck Lever
2025-09-04 21:00 ` Mike Snitzer
2025-09-04 14:42 ` Mike Snitzer
2025-09-04 15:12 ` Chuck Lever
2025-09-04 16:10 ` Chuck Lever
2025-09-04 16:33 ` Mike Snitzer
2025-09-04 17:54 ` Chuck Lever
2025-08-30 17:38 ` [RFC PATCH 2/2] NFSD: use /end/ of rq_pages for front_pad page, simpler workaround for rpcrdma bug Mike Snitzer
2025-08-30 18:53 ` [RFC PATCH 0/2] some progress on rpcrdma bug [was: Re: [PATCH v8 5/7] NFSD: issue READs using O_DIRECT even if IO is misaligned] Mike Snitzer
2025-08-28 16:36 ` [PATCH v8 5/7] NFSD: issue READs using O_DIRECT even if IO is misaligned Jeff Layton
2025-08-28 16:22 ` Jeff Layton
2025-08-28 16:27 ` Chuck Lever
2025-08-26 18:57 ` [PATCH v8 6/7] NFSD: issue WRITEs " Mike Snitzer
2025-08-26 18:57 ` [PATCH v8 7/7] NFSD: add nfsd_analyze_read_dio and nfsd_analyze_write_dio trace events Mike Snitzer