From: Mashiro Chen <mashiro.chen@mailbox.org>
To: netdev@vger.kernel.org
Cc: linux-hams@vger.kernel.org, davem@davemloft.net,
edumazet@google.com, kuba@kernel.org, pabeni@redhat.com,
horms@kernel.org, linux-kernel@vger.kernel.org,
syzbot+6eb7834837cf6a8db75b@syzkaller.appspotmail.com,
Mashiro Chen <mashiro.chen@mailbox.org>
Subject: [PATCH net] net: netrom: fix lock order inversion in nr_add_node, nr_del_node and nr_dec_obs
Date: Mon, 6 Apr 2026 19:06:43 +0800
Message-ID: <20260406110643.82577-1-mashiro.chen@mailbox.org>
In-Reply-To: <69694d6f.050a0220.58bed.0028.GAE@google.com>
nr_del_node() and nr_dec_obs() acquire nr_node_list_lock first and then
call nr_remove_neigh(), which internally acquires nr_neigh_list_lock.
Similarly, nr_add_node() acquires node_lock first and then calls
nr_remove_neigh(), which acquires nr_neigh_list_lock.

Both orders are the reverse of the lock order used in
nr_rt_device_down() and nr_rt_free(), which acquire nr_neigh_list_lock
before nr_node_list_lock and node_lock.

The resulting lock order inversions can cause an ABBA deadlock (one
thread holding a node lock while waiting for nr_neigh_list_lock, the
other holding nr_neigh_list_lock while waiting for the node lock) when
the following execute concurrently:

- a SIOCDELRT or SIOCNRDECOBS ioctl (requires CAP_NET_ADMIN)
- bringing down a NET/ROM-attached network device

Fix this by acquiring nr_neigh_list_lock before nr_node_list_lock and
node_lock in all three functions, following the canonical lock order,
and by replacing the internally-locking nr_remove_neigh() with
nr_remove_neigh_locked(), which assumes the caller already holds
nr_neigh_list_lock.
Fixes: e03e7f20ebf7 ("netrom: fix possible dead-lock in nr_rt_ioctl()")
Reported-by: syzbot+6eb7834837cf6a8db75b@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=6eb7834837cf6a8db75b
Signed-off-by: Mashiro Chen <mashiro.chen@mailbox.org>
---
net/netrom/nr_route.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/net/netrom/nr_route.c b/net/netrom/nr_route.c
index 9cc29ae85b06f..5bc24644ed544 100644
--- a/net/netrom/nr_route.c
+++ b/net/netrom/nr_route.c
@@ -211,6 +211,7 @@ static int __must_check nr_add_node(ax25_address *nr, const char *mnemonic,
nr_neigh_put(nr_neigh);
return 0;
}
+ spin_lock_bh(&nr_neigh_list_lock);
nr_node_lock(nr_node);
if (quality != 0)
@@ -246,7 +247,7 @@ static int __must_check nr_add_node(ax25_address *nr, const char *mnemonic,
nr_neigh_put(nr_node->routes[2].neighbour);
if (nr_node->routes[2].neighbour->count == 0 && !nr_node->routes[2].neighbour->locked)
- nr_remove_neigh(nr_node->routes[2].neighbour);
+ nr_remove_neigh_locked(nr_node->routes[2].neighbour);
nr_node->routes[2].quality = quality;
nr_node->routes[2].obs_count = obs_count;
@@ -281,6 +282,7 @@ static int __must_check nr_add_node(ax25_address *nr, const char *mnemonic,
nr_neigh_put(nr_neigh);
nr_node_unlock(nr_node);
+ spin_unlock_bh(&nr_neigh_list_lock);
nr_node_put(nr_node);
return 0;
}
@@ -331,6 +333,7 @@ static int nr_del_node(ax25_address *callsign, ax25_address *neighbour, struct n
return -EINVAL;
}
+ spin_lock_bh(&nr_neigh_list_lock);
spin_lock_bh(&nr_node_list_lock);
nr_node_lock(nr_node);
for (i = 0; i < nr_node->count; i++) {
@@ -339,7 +342,7 @@ static int nr_del_node(ax25_address *callsign, ax25_address *neighbour, struct n
nr_neigh_put(nr_neigh);
if (nr_neigh->count == 0 && !nr_neigh->locked)
- nr_remove_neigh(nr_neigh);
+ nr_remove_neigh_locked(nr_neigh);
nr_neigh_put(nr_neigh);
nr_node->count--;
@@ -361,6 +364,7 @@ static int nr_del_node(ax25_address *callsign, ax25_address *neighbour, struct n
}
nr_node_unlock(nr_node);
spin_unlock_bh(&nr_node_list_lock);
+ spin_unlock_bh(&nr_neigh_list_lock);
return 0;
}
@@ -368,6 +372,7 @@ static int nr_del_node(ax25_address *callsign, ax25_address *neighbour, struct n
nr_neigh_put(nr_neigh);
nr_node_unlock(nr_node);
spin_unlock_bh(&nr_node_list_lock);
+ spin_unlock_bh(&nr_neigh_list_lock);
nr_node_put(nr_node);
return -EINVAL;
@@ -454,6 +459,7 @@ static int nr_dec_obs(void)
struct hlist_node *nodet;
int i;
+ spin_lock_bh(&nr_neigh_list_lock);
spin_lock_bh(&nr_node_list_lock);
nr_node_for_each_safe(s, nodet, &nr_node_list) {
nr_node_lock(s);
@@ -469,7 +475,7 @@ static int nr_dec_obs(void)
nr_neigh_put(nr_neigh);
if (nr_neigh->count == 0 && !nr_neigh->locked)
- nr_remove_neigh(nr_neigh);
+ nr_remove_neigh_locked(nr_neigh);
s->count--;
@@ -497,6 +503,7 @@ static int nr_dec_obs(void)
nr_node_unlock(s);
}
spin_unlock_bh(&nr_node_list_lock);
+ spin_unlock_bh(&nr_neigh_list_lock);
return 0;
}
--
2.53.0