From: Li Lingfeng <lilingfeng3@huawei.com>
To: Trond Myklebust <trondmy@kernel.org>,
Alkis Georgopoulos <alkisg@gmail.com>
Cc: <linux-nfs@vger.kernel.org>, yangerkun <yangerkun@huawei.com>,
"chengzhihao1@huawei.com" <chengzhihao1@huawei.com>,
"zhangyi (F)" <yi.zhang@huawei.com>, Hou Tao <houtao1@huawei.com>,
<wangzhaolong1@huawei.com>
Subject: Re: [PATCH 0/6] Fix up NFS client mount option regressions
Date: Fri, 30 Jan 2026 10:41:33 +0800 [thread overview]
Message-ID: <12072fd2-b5ca-40f1-b0cb-d9bc8873caa1@huawei.com> (raw)
In-Reply-To: <f8bf92fb35e7bfd4c0b87c108ac7e8d2813899a4.camel@kernel.org>
Hi Trond,
On 2026/1/30 9:43, Trond Myklebust wrote:
> On Fri, 2026-01-30 at 09:34 +0800, Li Lingfeng wrote:
>> Hi Trond,
>>
>> On 2026/1/30 0:00, Trond Myklebust wrote:
>>> On Thu, 2026-01-29 at 15:06 +0800, Li Lingfeng wrote:
>>>> Hi Trond,
>>>>
>>>> On 2025/11/29 12:06, Trond Myklebust wrote:
>>>>> From: Trond Myklebust <trond.myklebust@hammerspace.com>
>>>>>
>>>>> The recent changes to suppress the 'ro' and 'rw' mount options when
>>>>> mounting the same NFS filesystem with different settings are causing
>>>>> confusion with users, and are an unnecessary restriction. They
>>>>> represent a functionality regression.
>>>>>
>>>>> The following patch set reverts the regressions, before applying a
>>>>> different set of fixes to address the original problem, which was
>>>>> the NFSv4 automounter code failing to propagate the correct mount
>>>>> options.
>>>>>
>>>>> Trond Myklebust (6):
>>>>>   Revert "nfs: ignore SB_RDONLY when remounting nfs"
>>>>>   Revert "nfs: clear SB_RDONLY before getting superblock"
>>>>>   Revert "nfs: ignore SB_RDONLY when mounting nfs"
>>>>>   NFS: Automounted filesystem should inherit ro,noexec,nodev,sync flags
>>>>>   NFS: Fix inheritance of the block sizes when automounting
>>>>>   NFS: Fix up the automount fs_context to use the correct cred
>>>>>
>>>>>  fs/nfs/client.c           | 21 +++++++++++++++++----
>>>>>  fs/nfs/internal.h         |  3 +--
>>>>>  fs/nfs/namespace.c        | 16 +++++++++++++++-
>>>>>  fs/nfs/nfs4client.c       | 18 ++++++++++++++----
>>>>>  fs/nfs/super.c            | 33 +++------------------------
>>>>>  include/linux/nfs_fs_sb.h |  5 +++++
>>>>>  6 files changed, 55 insertions(+), 41 deletions(-)
>>>> After this series of patches was merged, I found that the issue
>>>> described in link [1] has appeared again.
>>>>
>>>> [root@nfs-client1 ~]# mount /dev/sda /mnt2
>>>> [root@nfs-client1 ~]# echo "/mnt2 *(rw,no_root_squash,fsid=0)" > /etc/exports
>>>> [root@nfs-client1 ~]# systemctl restart nfs-server
>>>> [root@nfs-client1 ~]# mount -t nfs -o ro,vers=4 127.0.0.1:/ /mnt/sdaa
>>>> [root@nfs-client1 ~]# mount -t nfs -o rw,vers=4 127.0.0.1:/ /mnt/sdaa
>>>> [root@nfs-client1 ~]# mount -t nfs -o ro,vers=4 127.0.0.1:/ /mnt/sdaa
>>>> [root@nfs-client1 ~]# mount -t nfs -o rw,vers=4 127.0.0.1:/ /mnt/sdaa
>>>> [root@nfs-client1 ~]# mount | grep nfs4
>>>> 127.0.0.1:/ on /mnt/sdaa type nfs4 (ro,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,fatal_neterrors=none,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=127.0.0.1,local_lock=none,addr=127.0.0.1)
>>>> 127.0.0.1:/ on /mnt/sdaa type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,fatal_neterrors=none,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=127.0.0.1,local_lock=none,addr=127.0.0.1)
>>>> 127.0.0.1:/ on /mnt/sdaa type nfs4 (ro,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,fatal_neterrors=none,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=127.0.0.1,local_lock=none,addr=127.0.0.1)
>>>> 127.0.0.1:/ on /mnt/sdaa type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,fatal_neterrors=none,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=127.0.0.1,local_lock=none,addr=127.0.0.1)
>>>> [root@nfs-client1 ~]# uname -a
>>>> Linux nfs-client1 6.19.0-rc7+ #178 SMP PREEMPT_DYNAMIC Thu Jan 29
>>>> 14:06:54 CST 2026 x86_64 x86_64 x86_64 GNU/Linux
>>>> [root@nfs-client1 ~]#
>>>>
>>>> [1]
>>>> https://lore.kernel.org/all/20241114045303.1656426-1-lilingfeng3@huawei.com/
>>>>
>>>> Thanks,
>>>> Lingfeng.
>>> What does the output of "cat /proc/fs/nfsfs/volumes" show? Does it
>>> show more than 2 devices associated with that fsid?
>>>
>> Here is the result of the test:
>>
>> [root@nfs-client1 ~]# mount /dev/sda /mnt2
>> [root@nfs-client1 ~]# echo "/mnt2 *(rw,no_root_squash,fsid=0)" > /etc/exports
>> [root@nfs-client1 ~]# systemctl restart nfs-server
>> [root@nfs-client1 ~]# mount -t nfs -o ro,vers=4 127.0.0.1:/ /mnt/sdaa
>> [root@nfs-client1 ~]# cat /proc/fs/nfsfs/volumes
>> NV SERVER PORT DEV FSID FSC
>> v4 7f000001 801 0:51 0:0 no
>> [root@nfs-client1 ~]# mount -t nfs -o rw,vers=4 127.0.0.1:/ /mnt/sdaa
>> [root@nfs-client1 ~]# cat /proc/fs/nfsfs/volumes
>> NV SERVER PORT DEV FSID FSC
>> v4 7f000001 801 0:51 0:0 no
>> v4 7f000001 801 0:52 0:0 no
>> [root@nfs-client1 ~]# mount | grep nfs4
>> 127.0.0.1:/ on /mnt/sdaa type nfs4 (ro,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,fatal_neterrors=none,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=127.0.0.1)
>> 127.0.0.1:/ on /mnt/sdaa type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,fatal_neterrors=none,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=127.0.0.1)
>> [root@nfs-client1 ~]# mount -t nfs -o ro,vers=4 127.0.0.1:/ /mnt/sdaa
>> [root@nfs-client1 ~]# mount -t nfs -o rw,vers=4 127.0.0.1:/ /mnt/sdaa
>> [root@nfs-client1 ~]# cat /proc/fs/nfsfs/volumes
>> NV SERVER PORT DEV FSID FSC
>> v4 7f000001 801 0:51 0:0 no
>> v4 7f000001 801 0:52 0:0 no
>> [root@nfs-client1 ~]# mount | grep nfs4
>> 127.0.0.1:/ on /mnt/sdaa type nfs4 (ro,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,fatal_neterrors=none,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=127.0.0.1)
>> 127.0.0.1:/ on /mnt/sdaa type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,fatal_neterrors=none,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=127.0.0.1)
>> 127.0.0.1:/ on /mnt/sdaa type nfs4 (ro,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,fatal_neterrors=none,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=127.0.0.1)
>> 127.0.0.1:/ on /mnt/sdaa type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,fatal_neterrors=none,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=127.0.0.1)
>> [root@nfs-client1 ~]#
>>
>> There are only 2 devices associated with that fsid.
>>
> Then it is working as expected. It's not the kernel's job to stop
> people from stacking one mount on top of another. If it were, then the
> right place to do that would be in the VFS and not the NFS client.
>
> However, the NFS client does try to ensure that mounts of the same
> remote filesystem with the same set of mount options get mapped to the
> same super block (and hence device). The exception is if you're playing
> with the nosharecache option; in that case you're knowingly asking the
> kernel to ignore that constraint.
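To check the sharing you describe from userspace, one way (a minimal sketch, not something from this thread) is to compare the st_dev numbers of two paths; mounts that share a superblock report the same device, matching the DEV column in /proc/fs/nfsfs/volumes. The paths here are placeholders:

```shell
#!/bin/sh
# Sketch: report whether two paths are backed by the same device (and
# hence, for NFS, the same superblock). Default paths are placeholders.
a_dev=$(stat -c %d "${1:-/tmp}")
b_dev=$(stat -c %d "${2:-/tmp}")
if [ "$a_dev" = "$b_dev" ]; then
    echo "same device ($a_dev): mounts share a superblock"
else
    echo "different devices ($a_dev vs $b_dev): separate superblocks"
fi
```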
For local file systems like ext4, when I attempt to mount a file system
as read-only after it has already been mounted for read and write, I
encounter an EBUSY error (from the `get_tree` callback `ext4_get_tree`).
[root@nfs-client1 ~]# mount -o rw /dev/sdb /mnt3
[root@nfs-client1 ~]# mount -o ro /dev/sdb /mnt3
[ 2524.071442][ T1281] sdb: Can't mount, would change RO state
mount: /mnt3: /dev/sdb already mounted on /mnt3.
[root@nfs-client1 ~]# df -Th | grep sdb
/dev/sdb ext4 20G 28K 19G 1% /mnt3
[root@nfs-client1 ~]#
Do you think NFS should have a similar restriction, allowing only one
mount per mount point?
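For what it's worth, a restriction like that could also be approximated in userspace rather than the kernel. This is only an illustrative sketch (the target path is a placeholder, and this is not how mount(8) or the NFS client actually behaves): check /proc/self/mounts for an existing mount at the target before mounting.

```shell
#!/bin/sh
# Illustrative sketch: refuse to mount over a path that already appears
# as a mountpoint in /proc/self/mounts. The target path is a placeholder.
target="${1:-/no/such/mountpoint}"
# awk exits 0 iff some mount entry's mountpoint (field 2) equals $target.
if awk -v t="$target" '$2 == t { found = 1 } END { exit !found }' /proc/self/mounts; then
    echo "refusing: $target is already a mountpoint"
else
    echo "$target is not a mountpoint; safe to mount"
fi
```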
Thanks,
Lingfeng.
Thread overview: 14+ messages
2025-11-28 9:39 NFS EACCES regression since 6.15.4 Alkis Georgopoulos
2025-11-29 4:06 ` [PATCH 0/6] Fix up NFS client mount option regressions Trond Myklebust
2025-11-29 4:06 ` [PATCH 1/6] Revert "nfs: ignore SB_RDONLY when remounting nfs" Trond Myklebust
2025-11-29 4:06 ` [PATCH 2/6] Revert "nfs: clear SB_RDONLY before getting superblock" Trond Myklebust
2025-11-29 4:06 ` [PATCH 3/6] Revert "nfs: ignore SB_RDONLY when mounting nfs" Trond Myklebust
2025-11-29 4:06 ` [PATCH 4/6] NFS: Automounted filesystem should inherit ro,noexec,nodev,sync flags Trond Myklebust
2025-11-29 4:06 ` [PATCH 5/6] NFS: Fix inheritance of the block sizes when automounting Trond Myklebust
2025-11-29 4:06 ` [PATCH 6/6] NFS: Fix up the automount fs_context to use the correct cred Trond Myklebust
2026-01-29 7:06 ` [PATCH 0/6] Fix up NFS client mount option regressions Li Lingfeng
2026-01-29 16:00 ` Trond Myklebust
2026-01-30 1:34 ` Li Lingfeng
2026-01-30 1:43 ` Trond Myklebust
2026-01-30 2:41 ` Li Lingfeng [this message]
2026-01-30 3:14 ` Trond Myklebust