linux-rt-devel.lists.linux.dev archive mirror
From: Ian Kent <ikent@redhat.com>
To: Ian Kent <raven@themaw.net>,
	Christian Brauner <brauner@kernel.org>,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Mateusz Guzik <mjguzik@gmail.com>,
	Eric Chanudet <echanude@redhat.com>,
	Alexander Viro <viro@zeniv.linux.org.uk>, Jan Kara <jack@suse.cz>,
	Clark Williams <clrkwllms@kernel.org>,
	Steven Rostedt <rostedt@goodmis.org>,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-rt-devel@lists.linux.dev,
	Alexander Larsson <alexl@redhat.com>,
	Lucas Karpinski <lkarpins@redhat.com>
Subject: Re: [PATCH v4] fs/namespace: defer RCU sync for MNT_DETACH umount
Date: Fri, 11 Apr 2025 10:36:58 +0800	[thread overview]
Message-ID: <52f5a8a0-7721-45c0-92f9-38b87afb62d3@redhat.com> (raw)
In-Reply-To: <64fb3c80-96f9-4156-a085-516cbaa28376@themaw.net>

On 10/4/25 21:58, Ian Kent wrote:
>
> On 10/4/25 00:04, Christian Brauner wrote:
>> On Wed, Apr 09, 2025 at 04:25:10PM +0200, Sebastian Andrzej Siewior 
>> wrote:
>>> On 2025-04-09 16:02:29 [+0200], Mateusz Guzik wrote:
>>>> On Wed, Apr 09, 2025 at 03:14:44PM +0200, Sebastian Andrzej Siewior 
>>>> wrote:
>>>>> One question: do we need this lazy/MNT_DETACH special case? Couldn't
>>>>> we handle them all via queue_rcu_work()?
>>>>> If so, couldn't we make deferred_free_mounts global and have two
>>>>> release lists, say release_list and release_list_next_gp? The first
>>>>> one would be used if queue_rcu_work() returns true, otherwise the
>>>>> second. Then once defer_free_mounts() is done and
>>>>> release_list_next_gp is not empty, it would move
>>>>> release_list_next_gp -> release_list and invoke queue_rcu_work().
>>>>> This would avoid the kmalloc, synchronize_rcu_expedited() and the
>>>>> special-sauce.
>>>>>
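The two-list scheme sketched above can be illustrated in plain C. This is a single-threaded userspace analogue, not kernel code: the names release_list and release_list_next_gp mirror the proposal, while queue_rcu_work() is modelled by a simple "work already queued" flag and no real RCU grace period is involved.

```c
/*
 * Userspace sketch of the proposed two-list deferred free. All names
 * are stand-ins mirroring the mail; no real RCU or workqueues here.
 */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct mount_stub { struct mount_stub *next; };

static struct mount_stub *release_list;         /* drained by the worker */
static struct mount_stub *release_list_next_gp; /* filled while queued   */
static bool work_queued;

/* Stand-in for queue_rcu_work(): returns false if work is already pending. */
static bool queue_work_stub(void)
{
        if (work_queued)
                return false;
        work_queued = true;
        return true;
}

/* Unmount path: pick the list depending on whether we could queue work. */
static void defer_mount_free(struct mount_stub *m)
{
        struct mount_stub **list =
                queue_work_stub() ? &release_list : &release_list_next_gp;
        m->next = *list;
        *list = m;
}

/* Worker: drain release_list, then promote the next-GP list and requeue. */
static int defer_free_mounts(void)
{
        int freed = 0;

        while (release_list) {
                release_list = release_list->next;
                freed++;        /* real code would mntput()/free here */
        }
        work_queued = false;
        if (release_list_next_gp) {
                release_list = release_list_next_gp;
                release_list_next_gp = NULL;
                queue_work_stub();
        }
        return freed;
}
```

Entries deferred while the worker is pending land on release_list_next_gp and are only promoted after the current pass finishes, which is what serializes all frees behind a single worker.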
>>>> To my understanding it was preferred for non-lazy unmount consumers to
>>>> wait until the mntput before returning.
>>>>
>>>> That aside if I understood your approach it would de facto 
>>>> serialize all
>>>> of these?
>>>>
>>>> As in with the posted patches you can have different worker threads
>>>> progress in parallel as they all get a private list to iterate.
>>>>
>>>> With your proposal only one can do any work.
>>>>
>>>> One has to assume with sufficient mount/unmount traffic this can
>>>> eventually get into trouble.
>>> Right, it would serialize them within the same worker thread. With one
>>> worker for each put you would schedule multiple workers from the RCU
>>> callback. Given the system_wq you will schedule them all on the CPU
>>> which invokes the RCU callback. This kind of serializes it, too.
>>>
>>> The mntput() callback uses spinlock_t for locking and then it frees
>>> resources. It does not look like it waits for something nor takes ages.
>>> So it might not be needed to split each put into its own worker on a
>>> different CPU… One busy bee might be enough ;)
>> Unmounting can trigger very large number of mounts to be unmounted. If
>> you're on a container heavy system or services that all propagate to
>> each other in different mount namespaces mount propagation will generate
>> a ton of umounts. So this cannot be underestimated.
>>
>> If a mount tree is unmounted without MNT_DETACH it will pass UMOUNT_SYNC to
>> umount_tree(). That'll cause MNT_SYNC_UMOUNT to be raised on all mounts
>> during the unmount.
>>
>> If a concurrent path lookup calls legitimize_mnt() on such a mount and
>> sees that MNT_SYNC_UMOUNT is set it will back off, as it knows that the
>> concurrent unmounter holds the last reference and __legitimize_mnt()
>> can thus simply drop the reference count. The final mntput() will be
>> done by the umounter.
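The back-off described above can be sketched in plain C. This is a single-threaded userspace analogue: MNT_SYNC_UMOUNT is the real flag name, but the struct and function here are simplified stand-ins, not the kernel's lockless __legitimize_mnt().

```c
/*
 * Userspace sketch of the RCU-walk back-off on MNT_SYNC_UMOUNT.
 * Single-threaded illustration; the real code is lockless.
 */
#include <assert.h>
#include <stdbool.h>

#define MNT_SYNC_UMOUNT 0x01

struct mnt_stub {
        int flags;
        int count;      /* reference count */
};

/*
 * An RCU-walk tries to take a reference. If the mount is being
 * synchronously unmounted, the walker knows the umounter holds the
 * last reference: it drops its speculative grab and reports failure,
 * leaving the final mntput() to the umounter.
 */
static bool legitimize_stub(struct mnt_stub *m)
{
        m->count++;                     /* speculative reference */
        if (m->flags & MNT_SYNC_UMOUNT) {
                m->count--;             /* back off, take no ownership */
                return false;
        }
        return true;
}
```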
>
> In umount_tree() it looks like the unmounted mount remains hashed (ie.
> disconnect_mount() returns false) so can't it still race with an rcu-walk
> regardless of the synchronize_rcu()?
>
> Surely I'm missing something ...

And I am, please ignore this.

I misread the return of the mnt->mnt_parent->mnt.mnt_flags check in
disconnect_mount(), my bad.

>
>
> Ian
>
>>
>> The synchronize_rcu() call in namespace_unlock() takes care that the
>> last mntput() doesn't happen until path walking has dropped out of RCU
>> mode.
>>
>> Without it it's possible that a non-MNT_DETACH umounter gets a spurious
>> EBUSY error because a concurrent lazy path walk will suddenly put the
>> last reference via mntput().
>>
>> I'm unclear how that's handled in whatever it is you're proposing.
>>
>
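The spurious-EBUSY scenario in the quoted explanation can be illustrated with a minimal userspace sketch. All names here are illustrative stand-ins; the real busy check in the kernel's umount path is considerably more involved.

```c
/*
 * Sketch of the race that synchronize_rcu() prevents. A transient
 * reference held by a lazy RCU path walk makes the umounter's busy
 * check fail (spurious EBUSY) if it runs before path walking has
 * dropped out of RCU mode. Illustrative names only.
 */
#include <assert.h>

struct mnt_stub2 { int count; };

#define EBUSY_STUB 16

/* Non-detached umount: only the umounter's own reference may remain. */
static int try_umount(struct mnt_stub2 *m)
{
        if (m->count > 1)
                return -EBUSY_STUB;     /* someone else still holds it */
        m->count = 0;
        return 0;
}
```

With a walker's transient reference still outstanding, try_umount() fails; once the grace period has let the walker drop its reference, the same call succeeds. That ordering is what the synchronize_rcu() in namespace_unlock() provides.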


Thread overview: 47+ messages
2025-04-08 20:58 [PATCH v4] fs/namespace: defer RCU sync for MNT_DETACH umount Eric Chanudet
2025-04-09 10:37 ` Christian Brauner
2025-04-09 13:14   ` Sebastian Andrzej Siewior
2025-04-09 14:02     ` Mateusz Guzik
2025-04-09 14:25       ` Sebastian Andrzej Siewior
2025-04-09 16:04         ` Christian Brauner
2025-04-10  3:04           ` Ian Kent
2025-04-10  8:28           ` Sebastian Andrzej Siewior
2025-04-10 10:48             ` Christian Brauner
2025-04-10 13:58           ` Ian Kent
2025-04-11  2:36             ` Ian Kent [this message]
2025-04-09 16:08         ` Eric Chanudet
2025-04-11 15:17           ` Christian Brauner
2025-04-11 18:30             ` Eric Chanudet
2025-04-09 16:09     ` Christian Brauner
2025-04-10  1:17   ` Ian Kent
2025-04-09 13:04 ` Mateusz Guzik
2025-04-09 16:41   ` Eric Chanudet
2025-04-16 22:11 ` Mark Brown
2025-04-17  9:01   ` Christian Brauner
2025-04-17 10:17     ` Ian Kent
2025-04-17 11:31       ` Christian Brauner
2025-04-17 11:49         ` Mark Brown
2025-04-17 15:12         ` Christian Brauner
2025-04-17 15:28           ` Christian Brauner
2025-04-17 15:31             ` Sebastian Andrzej Siewior
2025-04-17 16:28               ` Christian Brauner
2025-04-17 22:33                 ` Eric Chanudet
2025-04-18  1:13                 ` Ian Kent
2025-04-18  1:20                   ` Ian Kent
2025-04-18  8:47                     ` Christian Brauner
2025-04-18 12:55                       ` Christian Brauner
2025-04-18 19:59                       ` Christian Brauner
2025-04-18 21:20                         ` Eric Chanudet
2025-04-19  1:24                       ` Ian Kent
2025-04-19 10:44                         ` Christian Brauner
2025-04-19 13:26                           ` Christian Brauner
2025-04-21  0:12                             ` Ian Kent
2025-04-21  0:44                               ` Al Viro
2025-04-18  0:31           ` Ian Kent
2025-04-18  8:59             ` Christian Brauner
2025-04-19  1:14               ` Ian Kent
2025-04-20  4:24           ` Al Viro
2025-04-20  5:54 ` Al Viro
2025-04-22 19:53   ` Eric Chanudet
2025-04-23  2:15     ` Al Viro
2025-04-23 15:04       ` Eric Chanudet
