From: Junjie Cao <junjie.cao@intel.com>
To: pabeni@redhat.com, davem@davemloft.net, edumazet@google.com,
kuba@kernel.org,
syzbot+14afda08dc3484d5db82@syzkaller.appspotmail.com
Cc: horms@kernel.org, linux-hams@vger.kernel.org,
netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
syzkaller-bugs@googlegroups.com, stable@vger.kernel.org,
junjie.cao@intel.com
Subject: [PATCH 1/2] netrom: fix possible deadlock in nr_rt_device_down
Date: Thu, 4 Dec 2025 17:09:04 +0800
Message-ID: <20251204090905.28663-2-junjie.cao@intel.com>
In-Reply-To: <20251204090905.28663-1-junjie.cao@intel.com>

syzbot reported a circular locking dependency involving
nr_neigh_list_lock, nr_node_list_lock and nr_node->node_lock in the
NET/ROM routing code [1].
One of the problematic scenarios looks like this:

       CPU0                            CPU1
       ----                            ----
  nr_rt_device_down()             nr_rt_ioctl()
    lock(nr_neigh_list_lock);       nr_del_node()
    ...                               lock(nr_node_list_lock);
    lock(nr_node_list_lock);          nr_remove_neigh();
                                        lock(nr_neigh_list_lock);
This creates the following lock chain:

  nr_neigh_list_lock -> nr_node_list_lock -> &nr_node->node_lock

while the ioctl path may acquire the locks in the opposite order via
nr_dec_obs()/nr_del_node(), which makes lockdep complain about a
possible deadlock.
Refactor nr_rt_device_down() to avoid nested locking of
nr_neigh_list_lock and nr_node_list_lock. The function now performs
two separate passes: one that walks all nodes under nr_node_list_lock
and drops routes, and a second one that removes unused neighbours
under nr_neigh_list_lock.

Also adjust nr_rt_free() to acquire nr_node_list_lock before
nr_neigh_list_lock so that the global lock ordering remains
consistent.

[1] https://syzkaller.appspot.com/bug?extid=14afda08dc3484d5db82

Reported-by: syzbot+14afda08dc3484d5db82@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=14afda08dc3484d5db82
Tested-by: syzbot+14afda08dc3484d5db82@syzkaller.appspotmail.com
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Cc: stable@vger.kernel.org
Signed-off-by: Junjie Cao <junjie.cao@intel.com>
---
net/netrom/nr_route.c | 65 ++++++++++++++++++++++---------------------
 1 file changed, 33 insertions(+), 32 deletions(-)

diff --git a/net/netrom/nr_route.c b/net/netrom/nr_route.c
index b94cb2ffbaf8..20aacfdfccd4 100644
--- a/net/netrom/nr_route.c
+++ b/net/netrom/nr_route.c
@@ -508,40 +508,41 @@ void nr_rt_device_down(struct net_device *dev)
 {
 	struct nr_neigh *s;
 	struct hlist_node *nodet, *node2t;
-	struct nr_node  *t;
+	struct nr_node *t;
 	int i;
 
-	spin_lock_bh(&nr_neigh_list_lock);
-	nr_neigh_for_each_safe(s, nodet, &nr_neigh_list) {
-		if (s->dev == dev) {
-			spin_lock_bh(&nr_node_list_lock);
-			nr_node_for_each_safe(t, node2t, &nr_node_list) {
-				nr_node_lock(t);
-				for (i = 0; i < t->count; i++) {
-					if (t->routes[i].neighbour == s) {
-						t->count--;
-
-						switch (i) {
-						case 0:
-							t->routes[0] = t->routes[1];
-							fallthrough;
-						case 1:
-							t->routes[1] = t->routes[2];
-							break;
-						case 2:
-							break;
-						}
-					}
-				}
+	spin_lock_bh(&nr_node_list_lock);
+	nr_node_for_each_safe(t, node2t, &nr_node_list) {
+		nr_node_lock(t);
+		for (i = 0; i < t->count; i++) {
+			s = t->routes[i].neighbour;
+			if (s->dev == dev) {
+				t->count--;
 
-				if (t->count <= 0)
-					nr_remove_node_locked(t);
-				nr_node_unlock(t);
+				switch (i) {
+				case 0:
+					t->routes[0] = t->routes[1];
+					fallthrough;
+				case 1:
+					t->routes[1] = t->routes[2];
+					break;
+				case 2:
+					break;
+				}
+				i--;
 			}
-			spin_unlock_bh(&nr_node_list_lock);
-
+		}
+
+		if (t->count <= 0)
+			nr_remove_node_locked(t);
+		nr_node_unlock(t);
+	}
+	spin_unlock_bh(&nr_node_list_lock);
+
+	spin_lock_bh(&nr_neigh_list_lock);
+	nr_neigh_for_each_safe(s, nodet, &nr_neigh_list) {
+		if (s->dev == dev)
 			nr_remove_neigh_locked(s);
-		}
 	}
 	spin_unlock_bh(&nr_neigh_list_lock);
 }
@@ -962,23 +963,23 @@ const struct seq_operations nr_neigh_seqops = {
 void nr_rt_free(void)
 {
 	struct nr_neigh *s = NULL;
-	struct nr_node  *t = NULL;
+	struct nr_node *t = NULL;
 	struct hlist_node *nodet;
 
-	spin_lock_bh(&nr_neigh_list_lock);
 	spin_lock_bh(&nr_node_list_lock);
+	spin_lock_bh(&nr_neigh_list_lock);
 	nr_node_for_each_safe(t, nodet, &nr_node_list) {
 		nr_node_lock(t);
 		nr_remove_node_locked(t);
 		nr_node_unlock(t);
 	}
 	nr_neigh_for_each_safe(s, nodet, &nr_neigh_list) {
-		while(s->count) {
+		while (s->count) {
 			s->count--;
 			nr_neigh_put(s);
 		}
 		nr_remove_neigh_locked(s);
 	}
-	spin_unlock_bh(&nr_node_list_lock);
 	spin_unlock_bh(&nr_neigh_list_lock);
+	spin_unlock_bh(&nr_node_list_lock);
 }
--
2.43.0
Thread overview: 4+ messages
2025-12-04 9:09 [PATCH v2 0/2] netrom: fix deadlock and refcount leak in nr_rt_device_down Junjie Cao
2025-12-04 9:09 ` Junjie Cao [this message]
2026-01-02 20:46 ` [PATCH 1/2] netrom: fix possible deadlock " Jakub Kicinski
2025-12-04 9:09 ` [PATCH 2/2] netrom: fix reference count leak " Junjie Cao