* [PATCH net 0/2] net: avoid LOCKDEP MAX_LOCK_DEPTH splat
@ 2025-10-10 13:54 Florian Westphal
  2025-10-10 13:54 ` [PATCH net 1/2] net: core: move unregister_many inner loops to a helper Florian Westphal
  2025-10-10 13:54 ` [PATCH net 2/2] net: core: split unregister_netdevice list into smaller chunks Florian Westphal
  0 siblings, 2 replies; 5+ messages in thread
From: Florian Westphal @ 2025-10-10 13:54 UTC
  To: netdev
  Cc: Paolo Abeni, David S. Miller, Eric Dumazet, Jakub Kicinski,
	linux-kernel, sdf

unshare -n bash -c 'for i in $(seq 1 100);do ip link add foo$i type dummy;done'
Gives:

BUG: MAX_LOCK_DEPTH too low!
turning off the locking correctness validator.
depth: 48  max: 48!
48 locks held by kworker/u16:1/69:
 #0: ffff8880010b7148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x7ed/0x1350
 #1: ffffc900004a7d40 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0xcf3/0x1350
 #2: ffffffff8bc6fbd0 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xab/0x7f0
 #3: ffffffff8bc8daa8 (rtnl_mutex){+.+.}-{4:4}, at: default_device_exit_batch+0x7e/0x2e0
 #4: ffff88800b5e9cb0 (&dev_instance_lock_key#3){+.+.}-{4:4}, at: unregister_netdevice_many_notify+0x1056/0x1b00
[..]

Work around this by splitting the list into lockdep-digestible sublists.
This patchset should have no effect whatsoever on non-lockdep builds.

This is a problem for me because a recently added nftables userspace
test case creates/destroys 100 dummy net devices, so when I run the
tests on a debug kernel, lockdep coverage is lost.

Alternative suggestions welcome.

I have not yet encountered another code path that takes this many mutexes
in a row, so I see no reason to muck with task_struct->held_locks[].
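
For reference, the limit comes from the fixed-size per-task array that
lockdep uses to record currently held locks; a trimmed excerpt of the
CONFIG_LOCKDEP part of struct task_struct (include/linux/sched.h, among
other fields; exact layout varies between kernel versions):

	# define MAX_LOCK_DEPTH		48UL
	int			lockdep_depth;
	struct held_lock	held_locks[MAX_LOCK_DEPTH];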

Florian Westphal (2):
  net: core: move unregister_many inner loops to a helper
  net: core: split unregister_netdevice list into smaller chunks

 net/core/dev.c | 89 ++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 65 insertions(+), 24 deletions(-)

-- 
2.49.1


* [PATCH net 1/2] net: core: move unregister_many inner loops to a helper
  2025-10-10 13:54 [PATCH net 0/2] net: avoid LOCKDEP MAX_LOCK_DEPTH splat Florian Westphal
@ 2025-10-10 13:54 ` Florian Westphal
  2025-10-10 13:54 ` [PATCH net 2/2] net: core: split unregister_netdevice list into smaller chunks Florian Westphal
  1 sibling, 0 replies; 5+ messages in thread
From: Florian Westphal @ 2025-10-10 13:54 UTC
  To: netdev
  Cc: Paolo Abeni, David S. Miller, Eric Dumazet, Jakub Kicinski,
	linux-kernel, sdf

Will be reused in a follow-up patch; no functional change intended.

Signed-off-by: Florian Westphal <fw@strlen.de>
---
 net/core/dev.c | 57 +++++++++++++++++++++++++++++---------------------
 1 file changed, 33 insertions(+), 24 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index a64cef2c537e..9a09b48c9371 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -12176,11 +12176,42 @@ static void dev_memory_provider_uninstall(struct net_device *dev)
 	}
 }
 
+static void unregister_netdevice_close_many(struct list_head *head)
+{
+	struct net_device *dev;
+	LIST_HEAD(close_head);
+
+	/* If device is running, close it first. Start with ops locked... */
+	list_for_each_entry(dev, head, unreg_list) {
+		if (netdev_need_ops_lock(dev)) {
+			list_add_tail(&dev->close_list, &close_head);
+			netdev_lock(dev);
+		}
+	}
+	netif_close_many(&close_head, true);
+	/* ... now unlock them and go over the rest. */
+
+	list_for_each_entry(dev, head, unreg_list) {
+		if (netdev_need_ops_lock(dev))
+			netdev_unlock(dev);
+		else
+			list_add_tail(&dev->close_list, &close_head);
+	}
+	netif_close_many(&close_head, true);
+
+	list_for_each_entry(dev, head, unreg_list) {
+		/* And unlink it from device chain. */
+		unlist_netdevice(dev);
+		netdev_lock(dev);
+		WRITE_ONCE(dev->reg_state, NETREG_UNREGISTERING);
+		netdev_unlock(dev);
+	}
+}
+
 void unregister_netdevice_many_notify(struct list_head *head,
 				      u32 portid, const struct nlmsghdr *nlh)
 {
 	struct net_device *dev, *tmp;
-	LIST_HEAD(close_head);
 	int cnt = 0;
 
 	BUG_ON(dev_boot_phase);
@@ -12206,30 +12237,8 @@ void unregister_netdevice_many_notify(struct list_head *head,
 		BUG_ON(dev->reg_state != NETREG_REGISTERED);
 	}
 
-	/* If device is running, close it first. Start with ops locked... */
-	list_for_each_entry(dev, head, unreg_list) {
-		if (netdev_need_ops_lock(dev)) {
-			list_add_tail(&dev->close_list, &close_head);
-			netdev_lock(dev);
-		}
-	}
-	netif_close_many(&close_head, true);
-	/* ... now unlock them and go over the rest. */
-	list_for_each_entry(dev, head, unreg_list) {
-		if (netdev_need_ops_lock(dev))
-			netdev_unlock(dev);
-		else
-			list_add_tail(&dev->close_list, &close_head);
-	}
-	netif_close_many(&close_head, true);
+	unregister_netdevice_close_many(head);
 
-	list_for_each_entry(dev, head, unreg_list) {
-		/* And unlink it from device chain. */
-		unlist_netdevice(dev);
-		netdev_lock(dev);
-		WRITE_ONCE(dev->reg_state, NETREG_UNREGISTERING);
-		netdev_unlock(dev);
-	}
 	flush_all_backlogs();
 
 	synchronize_net();
-- 
2.49.1



* [PATCH net 2/2] net: core: split unregister_netdevice list into smaller chunks
  2025-10-10 13:54 [PATCH net 0/2] net: avoid LOCKDEP MAX_LOCK_DEPTH splat Florian Westphal
  2025-10-10 13:54 ` [PATCH net 1/2] net: core: move unregister_many inner loops to a helper Florian Westphal
@ 2025-10-10 13:54 ` Florian Westphal
  2025-10-10 22:38   ` Stanislav Fomichev
  1 sibling, 1 reply; 5+ messages in thread
From: Florian Westphal @ 2025-10-10 13:54 UTC
  To: netdev
  Cc: Paolo Abeni, David S. Miller, Eric Dumazet, Jakub Kicinski,
	linux-kernel, sdf

Since the blamed commit, unregister_netdevice_many_notify() takes the netdev
mutex if the device needs it.

This isn't a problem in itself; the problem is that the list can be
very long, so it may take a LOT of mutexes, while the lockdep engine can
only track MAX_LOCK_DEPTH held locks:

unshare -n bash -c 'for i in $(seq 1 100);do  ip link add foo$i type dummy;done'
BUG: MAX_LOCK_DEPTH too low!
turning off the locking correctness validator.
depth: 48  max: 48!
48 locks held by kworker/u16:1/69:
 #0: ffff8880010b7148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x7ed/0x1350
 #1: ffffc900004a7d40 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0xcf3/0x1350
 #2: ffffffff8bc6fbd0 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xab/0x7f0
 #3: ffffffff8bc8daa8 (rtnl_mutex){+.+.}-{4:4}, at: default_device_exit_batch+0x7e/0x2e0
 #4: ffff88800b5e9cb0 (&dev_instance_lock_key#3){+.+.}-{4:4}, at: unregister_netdevice_many_notify+0x1056/0x1b00
[..]

Work around this limitation on LOCKDEP-enabled kernels by chopping the
list into smaller chunks and processing them individually.

Fixes: 7e4d784f5810 ("net: hold netdev instance lock during rtnetlink operations")
Signed-off-by: Florian Westphal <fw@strlen.de>
---
 net/core/dev.c | 34 +++++++++++++++++++++++++++++++++-
 1 file changed, 33 insertions(+), 1 deletion(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 9a09b48c9371..7e35aa4ebc74 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -12208,6 +12208,38 @@ static void unregister_netdevice_close_many(struct list_head *head)
 	}
 }
 
+static void unregister_netdevice_close_many_lockdep(struct list_head *head)
+{
+#ifdef CONFIG_LOCKDEP
+	unsigned int lock_depth = lockdep_depth(current);
+	unsigned int lock_count = lock_depth;
+	struct net_device *dev, *tmp;
+	LIST_HEAD(done_head);
+
+	list_for_each_entry_safe(dev, tmp, head, unreg_list) {
+		if (netdev_need_ops_lock(dev))
+			lock_count++;
+
+		/* we'd exceed lockdep's held-lock limit, reduce chunk size. */
+		if (lock_count >= MAX_LOCK_DEPTH - 1) {
+			LIST_HEAD(tmp_head);
+
+			list_cut_before(&tmp_head, head, &dev->unreg_list);
+			unregister_netdevice_close_many(&tmp_head);
+			lock_count = lock_depth;
+			list_splice_tail(&tmp_head, &done_head);
+		}
+	}
+
+	unregister_netdevice_close_many(head);
+
+	list_for_each_entry_safe_reverse(dev, tmp, &done_head, unreg_list)
+		list_move(&dev->unreg_list, head);
+#else
+	unregister_netdevice_close_many(head);
+#endif
+}
+
 void unregister_netdevice_many_notify(struct list_head *head,
 				      u32 portid, const struct nlmsghdr *nlh)
 {
@@ -12237,7 +12269,7 @@ void unregister_netdevice_many_notify(struct list_head *head,
 		BUG_ON(dev->reg_state != NETREG_REGISTERED);
 	}
 
-	unregister_netdevice_close_many(head);
+	unregister_netdevice_close_many_lockdep(head);
 
 	flush_all_backlogs();
 
-- 
2.49.1



* Re: [PATCH net 2/2] net: core: split unregister_netdevice list into smaller chunks
  2025-10-10 13:54 ` [PATCH net 2/2] net: core: split unregister_netdevice list into smaller chunks Florian Westphal
@ 2025-10-10 22:38   ` Stanislav Fomichev
  2025-10-11 14:30     ` Florian Westphal
  0 siblings, 1 reply; 5+ messages in thread
From: Stanislav Fomichev @ 2025-10-10 22:38 UTC
  To: Florian Westphal
  Cc: netdev, Paolo Abeni, David S. Miller, Eric Dumazet,
	Jakub Kicinski, linux-kernel, sdf

On 10/10, Florian Westphal wrote:
> Since the blamed commit, unregister_netdevice_many_notify() takes the netdev
> mutex if the device needs it.
> 
> This isn't a problem in itself; the problem is that the list can be
> very long, so it may take a LOT of mutexes, while the lockdep engine can
> only track MAX_LOCK_DEPTH held locks:
> 
> unshare -n bash -c 'for i in $(seq 1 100);do  ip link add foo$i type dummy;done'
> BUG: MAX_LOCK_DEPTH too low!
> turning off the locking correctness validator.
> depth: 48  max: 48!
> 48 locks held by kworker/u16:1/69:
>  #0: ffff8880010b7148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x7ed/0x1350
>  #1: ffffc900004a7d40 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0xcf3/0x1350
>  #2: ffffffff8bc6fbd0 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xab/0x7f0
>  #3: ffffffff8bc8daa8 (rtnl_mutex){+.+.}-{4:4}, at: default_device_exit_batch+0x7e/0x2e0
>  #4: ffff88800b5e9cb0 (&dev_instance_lock_key#3){+.+.}-{4:4}, at: unregister_netdevice_many_notify+0x1056/0x1b00
> [..]
> 
> Work around this limitation on LOCKDEP-enabled kernels by chopping the
> list into smaller chunks and processing them individually.
> 
> Fixes: 7e4d784f5810 ("net: hold netdev instance lock during rtnetlink operations")
> Signed-off-by: Florian Westphal <fw@strlen.de>
> ---
>  net/core/dev.c | 34 +++++++++++++++++++++++++++++++++-
>  1 file changed, 33 insertions(+), 1 deletion(-)
> 
> diff --git a/net/core/dev.c b/net/core/dev.c
> index 9a09b48c9371..7e35aa4ebc74 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c
> @@ -12208,6 +12208,38 @@ static void unregister_netdevice_close_many(struct list_head *head)
>  	}
>  }
>  
> +static void unregister_netdevice_close_many_lockdep(struct list_head *head)
> +{
> +#ifdef CONFIG_LOCKDEP
> +	unsigned int lock_depth = lockdep_depth(current);
> +	unsigned int lock_count = lock_depth;
> +	struct net_device *dev, *tmp;
> +	LIST_HEAD(done_head);
> +
> +	list_for_each_entry_safe(dev, tmp, head, unreg_list) {
> +		if (netdev_need_ops_lock(dev))
> +			lock_count++;
> +
> +		/* we'd exceed lockdep's held-lock limit, reduce chunk size. */
> +		if (lock_count >= MAX_LOCK_DEPTH - 1) {
> +			LIST_HEAD(tmp_head);
> +
> +			list_cut_before(&tmp_head, head, &dev->unreg_list);
> +			unregister_netdevice_close_many(&tmp_head);
> +			lock_count = lock_depth;
> +			list_splice_tail(&tmp_head, &done_head);
> +		}
> +	}
> +
> +	unregister_netdevice_close_many(head);
> +
> +	list_for_each_entry_safe_reverse(dev, tmp, &done_head, unreg_list)
> +		list_move(&dev->unreg_list, head);
> +#else
> +	unregister_netdevice_close_many(head);
> +#endif


Any reason not to morph the original code to add this 'no more than 8 at a
time' constraint? Having a separate lockdep path with list juggling
seems a bit fragile.

1. add all ops locked devs to the list
2. for each MAX_LOCK_DEPTH (or 'infinity' in the case of non-lockdep)
  2.1 lock N devs
  2.2 netif_close_many
  2.3 unlock N devs
3. ... do the non-ops-locked ones

This way the code won't diverge too much I hope.
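
Something along these lines is what I'm picturing (rough, untested
sketch; the helper name and the CLOSE_BATCH_BUDGET fallback are made up,
and the non-ops-locked devices, step 3, would still be closed afterwards
as today):

#ifdef CONFIG_LOCKDEP
#define CLOSE_BATCH_BUDGET	(MAX_LOCK_DEPTH - 1)
#else
#define CLOSE_BATCH_BUDGET	UINT_MAX	/* 'infinity' */
#endif

static void netif_close_ops_locked_batched(struct list_head *head)
{
	unsigned int budget = CLOSE_BATCH_BUDGET - lockdep_depth(current);
	struct net_device *cur, *pos;
	LIST_HEAD(close_head);

	cur = list_first_entry(head, struct net_device, unreg_list);
	while (&cur->unreg_list != head) {
		struct net_device *batch_start = cur;
		unsigned int n = 0;

		/* 2.1: lock at most 'budget' ops-locked devices */
		list_for_each_entry_from(cur, head, unreg_list) {
			if (!netdev_need_ops_lock(cur))
				continue;
			list_add_tail(&cur->close_list, &close_head);
			netdev_lock(cur);
			if (++n >= budget)
				break;
		}

		/* 2.2: close this batch; netif_close_many drains close_head */
		netif_close_many(&close_head, true);

		/* 2.3: unlock exactly the devices locked above */
		pos = batch_start;
		list_for_each_entry_from(pos, head, unreg_list) {
			if (netdev_need_ops_lock(pos))
				netdev_unlock(pos);
			if (pos == cur)
				break;
		}

		/* advance to the start of the next batch, if any */
		if (&cur->unreg_list != head)
			cur = list_next_entry(cur, unreg_list);
	}
}

The cursor-based walk avoids the extra done_head splicing, at the cost
of re-walking each batch once to unlock it.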


* Re: [PATCH net 2/2] net: core: split unregister_netdevice list into smaller chunks
  2025-10-10 22:38   ` Stanislav Fomichev
@ 2025-10-11 14:30     ` Florian Westphal
  0 siblings, 0 replies; 5+ messages in thread
From: Florian Westphal @ 2025-10-11 14:30 UTC
  To: Stanislav Fomichev
  Cc: netdev, Paolo Abeni, David S. Miller, Eric Dumazet,
	Jakub Kicinski, linux-kernel, sdf

Stanislav Fomichev <stfomichev@gmail.com> wrote:
> On 10/10, Florian Westphal wrote:
> > +static void unregister_netdevice_close_many_lockdep(struct list_head *head)
> > +{
> > +#ifdef CONFIG_LOCKDEP
> > +	unsigned int lock_depth = lockdep_depth(current);
> > +	unsigned int lock_count = lock_depth;
> > +	struct net_device *dev, *tmp;
> > +	LIST_HEAD(done_head);
> > +
> > +	list_for_each_entry_safe(dev, tmp, head, unreg_list) {
> > +		if (netdev_need_ops_lock(dev))
> > +			lock_count++;
> > +
> > +		/* we'd exceed lockdep's held-lock limit, reduce chunk size. */
> > +		if (lock_count >= MAX_LOCK_DEPTH - 1) {
> > +			LIST_HEAD(tmp_head);
> > +
> > +			list_cut_before(&tmp_head, head, &dev->unreg_list);
> > +			unregister_netdevice_close_many(&tmp_head);
> > +			lock_count = lock_depth;
> > +			list_splice_tail(&tmp_head, &done_head);
> > +		}
> > +	}
> > +
> > +	unregister_netdevice_close_many(head);
> > +
> > +	list_for_each_entry_safe_reverse(dev, tmp, &done_head, unreg_list)
> > +		list_move(&dev->unreg_list, head);
> > +#else
> > +	unregister_netdevice_close_many(head);
> > +#endif
> 
> 
> Any reason not to morph the original code to add this 'no more than 8 at a
> time' constraint? Having a separate lockdep path with list juggling
> seems a bit fragile.
> 
> 1. add all ops locked devs to the list
> 2. for each MAX_LOCK_DEPTH (or 'infinity' in the case of non-lockdep)
>   2.1 lock N devs
>   2.2 netif_close_many
>   2.3 unlock N devs
> 3. ... do the non-ops-locked ones
> 
> This way the code won't diverge too much I hope.

I think that having extra code for LOCKDEP (which means a debug kernel
that often also enables k?san, kmemleak, etc.) is ok.

I was more concerned with making sure there are no changes to the normal
(non-lockdep) kernel.

Let me try again; I tried your approach above before going with this
extra lockdep-only juggling, but ended up making a mess.
