Linux NFS development
From: Chuck Lever <chuck.lever@oracle.com>
To: Mike Snitzer <snitzer@kernel.org>
Cc: linux-nfs@vger.kernel.org, Jeff Layton <jlayton@kernel.org>
Subject: Re: [RFC PATCH 1/2] NFSD: fix misaligned DIO READ to not use a start_extra_page, exposes rpcrdma bug?
Date: Thu, 4 Sep 2025 15:07:30 -0400	[thread overview]
Message-ID: <12b7f4cf-5781-4c98-92d0-3fdd03df39da@oracle.com> (raw)
In-Reply-To: <aLdtHaqIw9jaaVM2@kernel.org>

On 9/2/25 6:18 PM, Mike Snitzer wrote:
> On Tue, Sep 02, 2025 at 05:27:11PM -0400, Mike Snitzer wrote:
>> On Tue, Sep 02, 2025 at 05:16:10PM -0400, Chuck Lever wrote:
>>> On 9/2/25 5:06 PM, Mike Snitzer wrote:
>>>> On Tue, Sep 02, 2025 at 01:59:12PM -0400, Chuck Lever wrote:
>>>>> On 9/2/25 11:56 AM, Chuck Lever wrote:
>>>>>> On 8/30/25 1:38 PM, Mike Snitzer wrote:
>>>>>
>>>>>>> dt (j:1 t:1): File System Information:
>>>>>>> dt (j:1 t:1):            Mounted from device: 192.168.0.105:/hs_test
>>>>>>> dt (j:1 t:1):           Mounted on directory: /mnt/hs_test
>>>>>>> dt (j:1 t:1):                Filesystem type: nfs4
>>>>>>> dt (j:1 t:1):             Filesystem options: rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,fatal_neterrors=none,proto=tcp,nconnect=16,port=20491,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.106,local_lock=none,addr=192.168.0.105
>>>>>>
>>>>>> I haven't been able to reproduce a similar failure in my lab with
>>>>>> NFSv4.2 over RDMA with FDR InfiniBand. I've run dt 6-7 times, all
>>>>>> successful. Also, for shits and giggles, I tried the fsx-based subtests in
>>>>>> fstests, no new failures there either. The export is xfs on an NVMe
>>>>>> add-on card; server uses direct I/O for READ and page cache for WRITE.
>>>>>>
>>>>>> Notice the mount options for your test run: "proto=tcp" and
>>>>>> "nconnect=16". Even if your network fabric is RoCE, "proto=tcp" will
>>>>>> not use RDMA at all; it will use bog standard TCP/IP on your ultra
>>>>>> fast Ethernet network.
>>>>>>
>>>>>> What should I try next? I can apply 2/2 or add "nconnect" or move the
>>>>>> testing to my RoCE fabric after lunch and keep poking at it.
>>>>
>>>> Hmm, I'll have to check with the Hammerspace performance team to
>>>> understand how RDMA is used if the client mount has proto=tcp.
>>>>
>>>> Certainly surprising, thanks for noticing/reporting this aspect.
>>>>
>>>> I also cannot reproduce on a normal tcp mount and testbed.  This
>>>> frankenbeast of a fast "RDMA" network that is misconfigured to use
>>>> proto=tcp is the only testbed where I've seen this dt data mismatch.
>>>>
>>>>>> Or, I could switch to TCP. Suggestions welcome.
>>>>>
>>>>> The client is not sending any READ procedures/operations to the server.
>>>>> The following is NFSv3 for clarity, but NFSv4.x results are similar:
>>>>>
>>>>>             nfsd-1669  [003]  1466.634816: svc_process:
>>>>> addr=192.168.2.67 xid=0x7b2a6274 service=nfsd vers=3 proc=NULL
>>>>>             nfsd-1669  [003]  1466.635389: svc_process:
>>>>> addr=192.168.2.67 xid=0x7d2a6274 service=nfsd vers=3 proc=FSINFO
>>>>>             nfsd-1669  [003]  1466.635420: svc_process:
>>>>> addr=192.168.2.67 xid=0x7e2a6274 service=nfsd vers=3 proc=PATHCONF
>>>>>             nfsd-1669  [003]  1466.635451: svc_process:
>>>>> addr=192.168.2.67 xid=0x7f2a6274 service=nfsd vers=3 proc=GETATTR
>>>>>             nfsd-1669  [003]  1466.635486: svc_process:
>>>>> addr=192.168.2.67 xid=0x802a6274 service=nfsacl vers=3 proc=NULL
>>>>>             nfsd-1669  [003]  1466.635558: svc_process:
>>>>> addr=192.168.2.67 xid=0x812a6274 service=nfsd vers=3 proc=FSINFO
>>>>>             nfsd-1669  [003]  1466.635585: svc_process:
>>>>> addr=192.168.2.67 xid=0x822a6274 service=nfsd vers=3 proc=GETATTR
>>>>>             nfsd-1669  [003]  1470.029208: svc_process:
>>>>> addr=192.168.2.67 xid=0x832a6274 service=nfsd vers=3 proc=ACCESS
>>>>>             nfsd-1669  [003]  1470.029255: svc_process:
>>>>> addr=192.168.2.67 xid=0x842a6274 service=nfsd vers=3 proc=LOOKUP
>>>>>             nfsd-1669  [003]  1470.029296: svc_process:
>>>>> addr=192.168.2.67 xid=0x852a6274 service=nfsd vers=3 proc=FSSTAT
>>>>>             nfsd-1669  [003]  1470.039715: svc_process:
>>>>> addr=192.168.2.67 xid=0x862a6274 service=nfsacl vers=3 proc=GETACL
>>>>>             nfsd-1669  [003]  1470.039758: svc_process:
>>>>> addr=192.168.2.67 xid=0x872a6274 service=nfsd vers=3 proc=CREATE
>>>>>             nfsd-1669  [003]  1470.040091: svc_process:
>>>>> addr=192.168.2.67 xid=0x882a6274 service=nfsd vers=3 proc=WRITE
>>>>>             nfsd-1669  [003]  1470.040469: svc_process:
>>>>> addr=192.168.2.67 xid=0x892a6274 service=nfsd vers=3 proc=GETATTR
>>>>>             nfsd-1669  [003]  1470.040503: svc_process:
>>>>> addr=192.168.2.67 xid=0x8a2a6274 service=nfsd vers=3 proc=ACCESS
>>>>>             nfsd-1669  [003]  1470.041867: svc_process:
>>>>> addr=192.168.2.67 xid=0x8b2a6274 service=nfsd vers=3 proc=FSSTAT
>>>>>             nfsd-1669  [003]  1470.042109: svc_process:
>>>>> addr=192.168.2.67 xid=0x8c2a6274 service=nfsd vers=3 proc=REMOVE
>>>>>
>>>>> So I'm probably missing some setting on the reproducer/client.
>>>>>
>>>>> /mnt from klimt.ib.1015granger.net:/export/fast
>>>>>  Flags:	rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,
>>>>>   fatal_neterrors=none,proto=rdma,port=20049,timeo=600,retrans=2,
>>>>>   sec=sys,mountaddr=192.168.2.55,mountvers=3,mountproto=tcp,
>>>>>   local_lock=none,addr=192.168.2.55
>>>>>
>>>>> Linux morisot.1015granger.net 6.15.10-100.fc41.x86_64 #1 SMP
>>>>>  PREEMPT_DYNAMIC Fri Aug 15 14:55:12 UTC 2025 x86_64 GNU/Linux
>>>>
>>>> If you're using LOCALIO (client on server), that'd explain your not
>>>> seeing any READs coming over the wire to NFSD.
>>>>
>>>> I've made sure to disable LOCALIO on my client, with:
>>>> echo N > /sys/module/nfs/parameters/localio_enabled
>>>
>>> I am testing with a physically separate client and server, so I believe
>>> that LOCALIO is not in play. I do see WRITEs. And other workloads (in
>>> particular "fsx -Z <fname>") show READ traffic and I'm getting the
>>> new trace point to fire quite a bit, and it is showing misaligned
>>> READ requests. So it has something to do with dt.
>>
>> OK, yeah, I figured you weren't doing a loopback mount; that was the
>> only thing that came to mind for your not seeing READs as expected.
>> I haven't had any problems with dt not driving READs to NFSD...
>>
>> You'll certainly need to see READs in order for NFSD's new misaligned
>> DIO READ handling to get tested.
>>
>>> If I understand your two patches correctly, they are still pulling a
>>> page from the end of rq_pages to do the initial pad page. That, I
>>> think, is a working implementation, not the failing one.
>>
>> Patch 1 removes the use of a separate page, instead using the very
>> first page of rq_pages as the "start_extra" (or "front_pad") page for
>> the misaligned DIO READ.  And with that, my dt testing fails with a
>> data mismatch like I shared. So patch 1 is the failing implementation
>> (for me on the "RDMA" system I'm testing on).
>>
>> Patch 2 then switches to using an rq_pages page _after_ the memory
>> that would normally get used as the READ payload to service the READ.
>> So patch 2 is a working implementation.
>>
>>> EOD -- will continue tomorrow.
>>
>> Ack.
>>
> 
> The reason for proto=tcp is that I was mounting the Hammerspace
> Anvil (metadata server) via NFSv4.2 over TCP. And it is the layout that
> the metadata server hands out that directs my NFSv4.2 flexfiles client
> to then access the DS over NFSv3 using RDMA. My particular DS server in the
> broader testbed has the following in /etc/nfs.conf:
> 
> [general]
> 
> [nfsrahead]
> 
> [exports]
> 
> [exportfs]
> 
> [gssd]
> use-gss-proxy = 1
> 
> [lockd]
> 
> [exportd]
> 
> [mountd]
> 
> [nfsdcld]
> 
> [nfsdcltrack]
> 
> [nfsd]
> rdma = y
> rdma-port = 20049
> threads = 576
> vers4.0 = n
> vers4.1 = n
> 
> [statd]
> 
> [sm-notify]
> 
> And if I instead mount with:
> 
>   mount -o vers=3,proto=rdma,port=20049 192.168.0.106:/mnt/hs_nvme13 /test
> 
> And then re-run dt, I don't see any data mismatch:

I'm beginning to suspect that NFSv3 isn't the interesting case. For
NFSv3 READs, nfsd_iter_read() is always called with @base == 0.

NFSv4 READs, on the other hand, set @base to whatever is the current
end of the send buffer's .pages array. The checks in
nfsd_analyze_read_dio() might reject the use of direct I/O, or it
might be that the code is setting up the alignment of the read buffer
incorrectly.


> dt (j:1 t:1): File System Information:
> dt (j:1 t:1):            Mounted from device: 192.168.0.106:/mnt/hs_nvme13
> dt (j:1 t:1):           Mounted on directory: /test
> dt (j:1 t:1):                Filesystem type: nfs
> dt (j:1 t:1):             Filesystem options: rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,fatal_neterrors=none,proto=rdma,port=20049,timeo=600,retrans=2,sec=sys,mountaddr=192.168.0.106,mountvers=3,mountproto=tcp,local_lock=none,addr=192.168.0.106
> dt (j:1 t:1):          Filesystem block size: 1048576
> dt (j:1 t:1):          Filesystem free space: 3812019404800 (3635425.000 Mbytes, 3550.220 Gbytes, 3.467 Tbytes)
> dt (j:1 t:1):         Filesystem total space: 3838875533312 (3661037.000 Mbytes, 3575.231 Gbytes, 3.491 Tbytes)
> 
> So... I think what this means is my "patch 1" _is_ a working
> implementation.  BUT, for some reason, RDMA with pNFS flexfiles is
> "unhappy".
> 
> Would seem I get to keep both pieces and need to sort out what's up
> with pNFS flexfiles on this particular RDMA testbed.
> 
> I will post v9 of the NFSD DIRECT patchset with "patch 1" folded into
> the misaligned READ patch (5) and some other small fixes/improvements
> to the series, probably tomorrow morning.
> 
> Thanks,
> Mike


-- 
Chuck Lever


Thread overview: 42+ messages
2025-08-26 18:57 [PATCH v8 0/7] NFSD: add "NFSD DIRECT" and "NFSD DONTCACHE" IO modes Mike Snitzer
2025-08-26 18:57 ` [PATCH v8 1/7] NFSD: filecache: add STATX_DIOALIGN and STATX_DIO_READ_ALIGN support Mike Snitzer
2025-08-26 18:57 ` [PATCH v8 2/7] NFSD: pass nfsd_file to nfsd_iter_read() Mike Snitzer
2025-08-26 18:57 ` [PATCH v8 3/7] NFSD: add io_cache_read controls to debugfs interface Mike Snitzer
2025-09-03 14:38   ` Chuck Lever
2025-09-03 15:07     ` Mike Snitzer
2025-09-03 16:02       ` Mike Snitzer
2025-09-03 16:12         ` Chuck Lever
2025-09-03 16:50           ` Mike Snitzer
2025-08-26 18:57 ` [PATCH v8 4/7] NFSD: add io_cache_write " Mike Snitzer
2025-08-26 18:57 ` [PATCH v8 5/7] NFSD: issue READs using O_DIRECT even if IO is misaligned Mike Snitzer
2025-08-27 15:34   ` Chuck Lever
2025-08-27 19:41     ` Mike Snitzer
2025-08-27 20:56       ` Chuck Lever
2025-08-27 23:15         ` Mike Snitzer
2025-08-28  1:57           ` Chuck Lever
2025-08-28  8:09             ` Mike Snitzer
2025-08-28 14:53               ` Chuck Lever
2025-08-28 18:52                 ` Mike Snitzer
2025-08-30 17:38                   ` [RFC PATCH 0/2] some progress on rpcrdma bug [was: Re: [PATCH v8 5/7] NFSD: issue READs using O_DIRECT even if IO is misaligned] Mike Snitzer
2025-08-30 17:38                     ` [RFC PATCH 1/2] NFSD: fix misaligned DIO READ to not use a start_extra_page, exposes rpcrdma bug? Mike Snitzer
2025-09-02 14:04                       ` Chuck Lever
2025-09-02 15:56                       ` Chuck Lever
2025-09-02 17:59                         ` Chuck Lever
2025-09-02 21:06                           ` Mike Snitzer
2025-09-02 21:16                             ` Chuck Lever
2025-09-02 21:27                               ` Mike Snitzer
2025-09-02 22:18                                 ` Mike Snitzer
2025-09-04 19:07                                   ` Chuck Lever [this message]
2025-09-04 21:00                                     ` Mike Snitzer
2025-09-04 14:42                                 ` Mike Snitzer
2025-09-04 15:12                                   ` Chuck Lever
2025-09-04 16:10                                   ` Chuck Lever
2025-09-04 16:33                                     ` Mike Snitzer
2025-09-04 17:54                                       ` Chuck Lever
2025-08-30 17:38                     ` [RFC PATCH 2/2] NFSD: use /end/ of rq_pages for front_pad page, simpler workaround for rpcrdma bug Mike Snitzer
2025-08-30 18:53                     ` [RFC PATCH 0/2] some progress on rpcrdma bug [was: Re: [PATCH v8 5/7] NFSD: issue READs using O_DIRECT even if IO is misaligned] Mike Snitzer
2025-08-28 16:36               ` [PATCH v8 5/7] NFSD: issue READs using O_DIRECT even if IO is misaligned Jeff Layton
2025-08-28 16:22       ` Jeff Layton
2025-08-28 16:27         ` Chuck Lever
2025-08-26 18:57 ` [PATCH v8 6/7] NFSD: issue WRITEs " Mike Snitzer
2025-08-26 18:57 ` [PATCH v8 7/7] NFSD: add nfsd_analyze_read_dio and nfsd_analyze_write_dio trace events Mike Snitzer
