* [syzbot] [hams?] possible deadlock in nr_del_node (2)
@ 2026-01-15 20:26 syzbot
2026-04-06 11:06 ` [PATCH net] net: netrom: fix lock order inversion in nr_add_node, nr_del_node and nr_dec_obs Mashiro Chen
0 siblings, 1 reply; 3+ messages in thread
From: syzbot @ 2026-01-15 20:26 UTC (permalink / raw)
To: davem, edumazet, horms, kuba, linux-hams, linux-kernel, netdev,
pabeni, syzkaller-bugs
Hello,
syzbot found the following issue on:
HEAD commit: 4427259cc7f7 Merge tag 'riscv-for-linus-6.18-rc6' of git:/..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=13eadc12580000
kernel config: https://syzkaller.appspot.com/x/.config?x=929790bc044e87d7
dashboard link: https://syzkaller.appspot.com/bug?extid=6eb7834837cf6a8db75b
compiler: Debian clang version 20.1.8 (++20250708063551+0c9f909b7976-1~exp1~20250708183702.136), Debian LLD 20.1.8
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=125eb0b4580000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=15f3f17c580000
Downloadable assets:
disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/d900f083ada3/non_bootable_disk-4427259c.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/5a19e3326bed/vmlinux-4427259c.xz
kernel image: https://storage.googleapis.com/syzbot-assets/582f300a9de8/bzImage-4427259c.xz
IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+6eb7834837cf6a8db75b@syzkaller.appspotmail.com
bond0: (slave rose0): Error: Device is in use and cannot be enslaved
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.0.18/5503 is trying to acquire lock:
ffffffff8f428318 (nr_neigh_list_lock){+...}-{3:3}, at: spin_lock_bh include/linux/spinlock.h:356 [inline]
ffffffff8f428318 (nr_neigh_list_lock){+...}-{3:3}, at: nr_remove_neigh net/netrom/nr_route.c:307 [inline]
ffffffff8f428318 (nr_neigh_list_lock){+...}-{3:3}, at: nr_del_node+0x517/0x8d0 net/netrom/nr_route.c:342
but task is already holding lock:
ffff88804c3b2c70 (&nr_node->node_lock){+...}-{3:3}, at: spin_lock_bh include/linux/spinlock.h:356 [inline]
ffff88804c3b2c70 (&nr_node->node_lock){+...}-{3:3}, at: nr_node_lock include/net/netrom.h:152 [inline]
ffff88804c3b2c70 (&nr_node->node_lock){+...}-{3:3}, at: nr_del_node+0x152/0x8d0 net/netrom/nr_route.c:335
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #2 (&nr_node->node_lock){+...}-{3:3}:
lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x36/0x50 kernel/locking/spinlock.c:178
spin_lock_bh include/linux/spinlock.h:356 [inline]
nr_node_lock include/net/netrom.h:152 [inline]
nr_del_node+0x152/0x8d0 net/netrom/nr_route.c:335
nr_rt_ioctl+0x989/0xd50 net/netrom/nr_route.c:678
sock_do_ioctl+0xdc/0x300 net/socket.c:1254
sock_ioctl+0x576/0x790 net/socket.c:1375
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:597 [inline]
__se_sys_ioctl+0xfc/0x170 fs/ioctl.c:583
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #1 (nr_node_list_lock){+...}-{3:3}:
lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x36/0x50 kernel/locking/spinlock.c:178
spin_lock_bh include/linux/spinlock.h:356 [inline]
nr_rt_device_down+0xa9/0x720 net/netrom/nr_route.c:517
nr_device_event+0x137/0x150 net/netrom/af_netrom.c:126
notifier_call_chain+0x1b6/0x3e0 kernel/notifier.c:85
call_netdevice_notifiers_extack net/core/dev.c:2267 [inline]
call_netdevice_notifiers net/core/dev.c:2281 [inline]
netif_close_many+0x29c/0x410 net/core/dev.c:1784
netif_close+0x158/0x210 net/core/dev.c:1797
dev_close+0x10a/0x220 net/core/dev_api.c:220
bpq_device_event+0x377/0x6a0 drivers/net/hamradio/bpqether.c:528
notifier_call_chain+0x1b6/0x3e0 kernel/notifier.c:85
call_netdevice_notifiers_extack net/core/dev.c:2267 [inline]
call_netdevice_notifiers net/core/dev.c:2281 [inline]
netif_close_many+0x29c/0x410 net/core/dev.c:1784
netif_close+0x158/0x210 net/core/dev.c:1797
dev_close+0x10a/0x220 net/core/dev_api.c:220
bond_setup_by_slave+0x5f/0x3f0 drivers/net/bonding/bond_main.c:1567
bond_enslave+0x6ca/0x3850 drivers/net/bonding/bond_main.c:1972
bond_do_ioctl+0x635/0x9b0 drivers/net/bonding/bond_main.c:4615
dev_siocbond net/core/dev_ioctl.c:516 [inline]
dev_ifsioc+0x90b/0xf00 net/core/dev_ioctl.c:666
dev_ioctl+0x7b4/0x1150 net/core/dev_ioctl.c:838
sock_do_ioctl+0x22c/0x300 net/socket.c:1268
sock_ioctl+0x576/0x790 net/socket.c:1375
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:597 [inline]
__se_sys_ioctl+0xfc/0x170 fs/ioctl.c:583
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #0 (nr_neigh_list_lock){+...}-{3:3}:
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3908
__lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5237
lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x36/0x50 kernel/locking/spinlock.c:178
spin_lock_bh include/linux/spinlock.h:356 [inline]
nr_remove_neigh net/netrom/nr_route.c:307 [inline]
nr_del_node+0x517/0x8d0 net/netrom/nr_route.c:342
nr_rt_ioctl+0x989/0xd50 net/netrom/nr_route.c:678
sock_do_ioctl+0xdc/0x300 net/socket.c:1254
sock_ioctl+0x576/0x790 net/socket.c:1375
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:597 [inline]
__se_sys_ioctl+0xfc/0x170 fs/ioctl.c:583
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
other info that might help us debug this:
Chain exists of:
nr_neigh_list_lock --> nr_node_list_lock --> &nr_node->node_lock
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(&nr_node->node_lock);
                               lock(nr_node_list_lock);
                               lock(&nr_node->node_lock);
  lock(nr_neigh_list_lock);
*** DEADLOCK ***
2 locks held by syz.0.18/5503:
#0: ffffffff8f428378 (nr_node_list_lock){+...}-{3:3}, at: spin_lock_bh include/linux/spinlock.h:356 [inline]
#0: ffffffff8f428378 (nr_node_list_lock){+...}-{3:3}, at: nr_del_node+0xfc/0x8d0 net/netrom/nr_route.c:334
#1: ffff88804c3b2c70 (&nr_node->node_lock){+...}-{3:3}, at: spin_lock_bh include/linux/spinlock.h:356 [inline]
#1: ffff88804c3b2c70 (&nr_node->node_lock){+...}-{3:3}, at: nr_node_lock include/net/netrom.h:152 [inline]
#1: ffff88804c3b2c70 (&nr_node->node_lock){+...}-{3:3}, at: nr_del_node+0x152/0x8d0 net/netrom/nr_route.c:335
stack backtrace:
CPU: 0 UID: 0 PID: 5503 Comm: syz.0.18 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
<TASK>
dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
print_circular_bug+0x2ee/0x310 kernel/locking/lockdep.c:2043
check_noncircular+0x134/0x160 kernel/locking/lockdep.c:2175
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3908
__lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5237
lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
__raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
_raw_spin_lock_bh+0x36/0x50 kernel/locking/spinlock.c:178
spin_lock_bh include/linux/spinlock.h:356 [inline]
nr_remove_neigh net/netrom/nr_route.c:307 [inline]
nr_del_node+0x517/0x8d0 net/netrom/nr_route.c:342
nr_rt_ioctl+0x989/0xd50 net/netrom/nr_route.c:678
sock_do_ioctl+0xdc/0x300 net/socket.c:1254
sock_ioctl+0x576/0x790 net/socket.c:1375
vfs_ioctl fs/ioctl.c:51 [inline]
__do_sys_ioctl fs/ioctl.c:597 [inline]
__se_sys_ioctl+0xfc/0x170 fs/ioctl.c:583
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fc3e5b8f6c9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffe09c753a8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007fc3e5de5fa0 RCX: 00007fc3e5b8f6c9
RDX: 0000200000000680 RSI: 000000000000890c RDI: 000000000000000a
RBP: 00007fc3e5c11f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fc3e5de5fa0 R14: 00007fc3e5de5fa0 R15: 0000000000000003
</TASK>
---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.
syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.
If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)
If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report
If you want to undo deduplication, reply with:
#syz undup
* [PATCH net] net: netrom: fix lock order inversion in nr_add_node, nr_del_node and nr_dec_obs
2026-01-15 20:26 [syzbot] [hams?] possible deadlock in nr_del_node (2) syzbot
@ 2026-04-06 11:06 ` Mashiro Chen
2026-04-06 11:49 ` [PATCH net v2] " Mashiro Chen
0 siblings, 1 reply; 3+ messages in thread
From: Mashiro Chen @ 2026-04-06 11:06 UTC (permalink / raw)
To: netdev
Cc: linux-hams, davem, edumazet, kuba, pabeni, horms, linux-kernel,
syzbot+6eb7834837cf6a8db75b, Mashiro Chen
nr_del_node() and nr_dec_obs() acquire nr_node_list_lock first, then
call nr_remove_neigh() which internally acquires nr_neigh_list_lock.
nr_add_node() acquires node_lock first, then calls nr_remove_neigh()
which acquires nr_neigh_list_lock.
Both are the reverse of the lock order used in nr_rt_device_down() and
nr_rt_free(), which acquire nr_neigh_list_lock before nr_node_list_lock
and node_lock.
The resulting lock order inversions can cause an ABBA deadlock when
concurrently executing:
- SIOCDELRT or SIOCNRDECOBS ioctl (requires CAP_NET_ADMIN)
- bringing down a NET/ROM-attached network device
Fix this by acquiring nr_neigh_list_lock before nr_node_list_lock and
node_lock in all three functions, following the canonical lock order.
Replace the internally-locking nr_remove_neigh() with
nr_remove_neigh_locked(), which assumes the caller already holds
nr_neigh_list_lock.
Fixes: e03e7f20ebf7 ("netrom: fix possible dead-lock in nr_rt_ioctl()")
Reported-by: syzbot+6eb7834837cf6a8db75b@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=6eb7834837cf6a8db75b
Signed-off-by: Mashiro Chen <mashiro.chen@mailbox.org>
---
net/netrom/nr_route.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/net/netrom/nr_route.c b/net/netrom/nr_route.c
index 9cc29ae85b06f..5bc24644ed544 100644
--- a/net/netrom/nr_route.c
+++ b/net/netrom/nr_route.c
@@ -211,6 +211,7 @@ static int __must_check nr_add_node(ax25_address *nr, const char *mnemonic,
nr_neigh_put(nr_neigh);
return 0;
}
+ spin_lock_bh(&nr_neigh_list_lock);
nr_node_lock(nr_node);
if (quality != 0)
@@ -246,7 +247,7 @@ static int __must_check nr_add_node(ax25_address *nr, const char *mnemonic,
nr_neigh_put(nr_node->routes[2].neighbour);
if (nr_node->routes[2].neighbour->count == 0 && !nr_node->routes[2].neighbour->locked)
- nr_remove_neigh(nr_node->routes[2].neighbour);
+ nr_remove_neigh_locked(nr_node->routes[2].neighbour);
nr_node->routes[2].quality = quality;
nr_node->routes[2].obs_count = obs_count;
@@ -281,6 +282,7 @@ static int __must_check nr_add_node(ax25_address *nr, const char *mnemonic,
nr_neigh_put(nr_neigh);
nr_node_unlock(nr_node);
+ spin_unlock_bh(&nr_neigh_list_lock);
nr_node_put(nr_node);
return 0;
}
@@ -331,6 +333,7 @@ static int nr_del_node(ax25_address *callsign, ax25_address *neighbour, struct n
return -EINVAL;
}
+ spin_lock_bh(&nr_neigh_list_lock);
spin_lock_bh(&nr_node_list_lock);
nr_node_lock(nr_node);
for (i = 0; i < nr_node->count; i++) {
@@ -339,7 +342,7 @@ static int nr_del_node(ax25_address *callsign, ax25_address *neighbour, struct n
nr_neigh_put(nr_neigh);
if (nr_neigh->count == 0 && !nr_neigh->locked)
- nr_remove_neigh(nr_neigh);
+ nr_remove_neigh_locked(nr_neigh);
nr_neigh_put(nr_neigh);
nr_node->count--;
@@ -361,6 +364,7 @@ static int nr_del_node(ax25_address *callsign, ax25_address *neighbour, struct n
}
nr_node_unlock(nr_node);
spin_unlock_bh(&nr_node_list_lock);
+ spin_unlock_bh(&nr_neigh_list_lock);
return 0;
}
@@ -368,6 +372,7 @@ static int nr_del_node(ax25_address *callsign, ax25_address *neighbour, struct n
nr_neigh_put(nr_neigh);
nr_node_unlock(nr_node);
spin_unlock_bh(&nr_node_list_lock);
+ spin_unlock_bh(&nr_neigh_list_lock);
nr_node_put(nr_node);
return -EINVAL;
@@ -454,6 +459,7 @@ static int nr_dec_obs(void)
struct hlist_node *nodet;
int i;
+ spin_lock_bh(&nr_neigh_list_lock);
spin_lock_bh(&nr_node_list_lock);
nr_node_for_each_safe(s, nodet, &nr_node_list) {
nr_node_lock(s);
@@ -469,7 +475,7 @@ static int nr_dec_obs(void)
nr_neigh_put(nr_neigh);
if (nr_neigh->count == 0 && !nr_neigh->locked)
- nr_remove_neigh(nr_neigh);
+ nr_remove_neigh_locked(nr_neigh);
s->count--;
@@ -497,6 +503,7 @@ static int nr_dec_obs(void)
nr_node_unlock(s);
}
spin_unlock_bh(&nr_node_list_lock);
+ spin_unlock_bh(&nr_neigh_list_lock);
return 0;
}
--
2.53.0
* [PATCH net v2] net: netrom: fix lock order inversion in nr_add_node, nr_del_node and nr_dec_obs
2026-04-06 11:06 ` [PATCH net] net: netrom: fix lock order inversion in nr_add_node, nr_del_node and nr_dec_obs Mashiro Chen
@ 2026-04-06 11:49 ` Mashiro Chen
0 siblings, 0 replies; 3+ messages in thread
From: Mashiro Chen @ 2026-04-06 11:49 UTC (permalink / raw)
To: netdev
Cc: linux-hams, davem, edumazet, kuba, pabeni, horms, linux-kernel,
syzbot+6eb7834837cf6a8db75b, Mashiro Chen
nr_del_node() and nr_dec_obs() acquire nr_node_list_lock first, then
call nr_remove_neigh() which internally acquires nr_neigh_list_lock.
nr_add_node() acquires node_lock first, then calls nr_remove_neigh()
which acquires nr_neigh_list_lock.
Both are the reverse of the lock order used in nr_rt_device_down() and
nr_rt_free(), which acquire nr_neigh_list_lock before nr_node_list_lock
and node_lock.
The resulting lock order inversions can cause an ABBA deadlock when
concurrently executing:
- SIOCDELRT or SIOCNRDECOBS ioctl (requires CAP_NET_ADMIN)
- bringing down a NET/ROM-attached network device
Fix this by acquiring nr_neigh_list_lock before nr_node_list_lock and
node_lock in all three functions, following the canonical lock order.
Replace the internally-locking nr_remove_neigh() with
nr_remove_neigh_locked(), which assumes the caller already holds
nr_neigh_list_lock.
Fixes: e03e7f20ebf7 ("netrom: fix possible dead-lock in nr_rt_ioctl()")
Reported-by: syzbot+6eb7834837cf6a8db75b@syzkaller.appspotmail.com
Closes: https://syzkaller.appspot.com/bug?extid=6eb7834837cf6a8db75b
Signed-off-by: Mashiro Chen <mashiro.chen@mailbox.org>
---
Changes in v2:
- Move the __nr_remove_neigh() and nr_remove_neigh_locked() definitions
before nr_add_node() to fix an implicit-function-declaration build error
net/netrom/nr_route.c | 45 ++++++++++++++++++++++++-------------------
1 file changed, 25 insertions(+), 20 deletions(-)
diff --git a/net/netrom/nr_route.c b/net/netrom/nr_route.c
index 9cc29ae85b06f..c3cceee5a2284 100644
--- a/net/netrom/nr_route.c
+++ b/net/netrom/nr_route.c
@@ -75,7 +75,21 @@ static struct nr_neigh *nr_neigh_get_dev(ax25_address *callsign,
return found;
}
-static void nr_remove_neigh(struct nr_neigh *);
+static inline void __nr_remove_neigh(struct nr_neigh *nr_neigh)
+{
+ hlist_del_init(&nr_neigh->neigh_node);
+ nr_neigh_put(nr_neigh);
+}
+
+#define nr_remove_neigh_locked(__neigh) \
+ __nr_remove_neigh(__neigh)
+
+static void nr_remove_neigh(struct nr_neigh *nr_neigh)
+{
+ spin_lock_bh(&nr_neigh_list_lock);
+ __nr_remove_neigh(nr_neigh);
+ spin_unlock_bh(&nr_neigh_list_lock);
+}
/* re-sort the routes in quality order. */
static void re_sort_routes(struct nr_node *nr_node, int x, int y)
@@ -211,6 +225,7 @@ static int __must_check nr_add_node(ax25_address *nr, const char *mnemonic,
nr_neigh_put(nr_neigh);
return 0;
}
+ spin_lock_bh(&nr_neigh_list_lock);
nr_node_lock(nr_node);
if (quality != 0)
@@ -246,7 +261,7 @@ static int __must_check nr_add_node(ax25_address *nr, const char *mnemonic,
nr_neigh_put(nr_node->routes[2].neighbour);
if (nr_node->routes[2].neighbour->count == 0 && !nr_node->routes[2].neighbour->locked)
- nr_remove_neigh(nr_node->routes[2].neighbour);
+ nr_remove_neigh_locked(nr_node->routes[2].neighbour);
nr_node->routes[2].quality = quality;
nr_node->routes[2].obs_count = obs_count;
@@ -281,6 +296,7 @@ static int __must_check nr_add_node(ax25_address *nr, const char *mnemonic,
nr_neigh_put(nr_neigh);
nr_node_unlock(nr_node);
+ spin_unlock_bh(&nr_neigh_list_lock);
nr_node_put(nr_node);
return 0;
}
@@ -293,22 +309,6 @@ static void nr_remove_node_locked(struct nr_node *nr_node)
nr_node_put(nr_node);
}
-static inline void __nr_remove_neigh(struct nr_neigh *nr_neigh)
-{
- hlist_del_init(&nr_neigh->neigh_node);
- nr_neigh_put(nr_neigh);
-}
-
-#define nr_remove_neigh_locked(__neigh) \
- __nr_remove_neigh(__neigh)
-
-static void nr_remove_neigh(struct nr_neigh *nr_neigh)
-{
- spin_lock_bh(&nr_neigh_list_lock);
- __nr_remove_neigh(nr_neigh);
- spin_unlock_bh(&nr_neigh_list_lock);
-}
-
/*
* "Delete" a node. Strictly speaking remove a route to a node. The node
* is only deleted if no routes are left to it.
@@ -331,6 +331,7 @@ static int nr_del_node(ax25_address *callsign, ax25_address *neighbour, struct n
return -EINVAL;
}
+ spin_lock_bh(&nr_neigh_list_lock);
spin_lock_bh(&nr_node_list_lock);
nr_node_lock(nr_node);
for (i = 0; i < nr_node->count; i++) {
@@ -339,7 +340,7 @@ static int nr_del_node(ax25_address *callsign, ax25_address *neighbour, struct n
nr_neigh_put(nr_neigh);
if (nr_neigh->count == 0 && !nr_neigh->locked)
- nr_remove_neigh(nr_neigh);
+ nr_remove_neigh_locked(nr_neigh);
nr_neigh_put(nr_neigh);
nr_node->count--;
@@ -361,6 +362,7 @@ static int nr_del_node(ax25_address *callsign, ax25_address *neighbour, struct n
}
nr_node_unlock(nr_node);
spin_unlock_bh(&nr_node_list_lock);
+ spin_unlock_bh(&nr_neigh_list_lock);
return 0;
}
@@ -368,6 +370,7 @@ static int nr_del_node(ax25_address *callsign, ax25_address *neighbour, struct n
nr_neigh_put(nr_neigh);
nr_node_unlock(nr_node);
spin_unlock_bh(&nr_node_list_lock);
+ spin_unlock_bh(&nr_neigh_list_lock);
nr_node_put(nr_node);
return -EINVAL;
@@ -454,6 +457,7 @@ static int nr_dec_obs(void)
struct hlist_node *nodet;
int i;
+ spin_lock_bh(&nr_neigh_list_lock);
spin_lock_bh(&nr_node_list_lock);
nr_node_for_each_safe(s, nodet, &nr_node_list) {
nr_node_lock(s);
@@ -469,7 +473,7 @@ static int nr_dec_obs(void)
nr_neigh_put(nr_neigh);
if (nr_neigh->count == 0 && !nr_neigh->locked)
- nr_remove_neigh(nr_neigh);
+ nr_remove_neigh_locked(nr_neigh);
s->count--;
@@ -497,6 +501,7 @@ static int nr_dec_obs(void)
nr_node_unlock(s);
}
spin_unlock_bh(&nr_node_list_lock);
+ spin_unlock_bh(&nr_neigh_list_lock);
return 0;
}
--
2.53.0