* deadlocks on pernet_ops_rwsem
From: Jakub Kicinski @ 2025-10-01 15:20 UTC
To: Eric Dumazet, Florian Westphal, Paolo Abeni, Kuniyuki Iwashima; +Cc: netdev
Hi!
We started hitting a deadlock on pernet_ops_rwsem, not very often;
these are the CI branches where we saw it:
2025-09-18--03-00
2025-09-18--15-00
2025-09-18--18-00
2025-09-19--21-00
2025-09-21--15-00
2025-09-21--18-00
2025-09-22--00-00
2025-09-22--09-00
2025-09-23--06-00
2025-09-24--21-00
2025-09-27--21-00
2025-09-28--09-00
2025-10-01--00-00
2025-10-01--06-00
First hit seems to be:
https://netdev-3.bots.linux.dev/vmksft-net-dbg/results/302741/65-l2tp-sh/stderr
Two most recent:
https://netdev-3.bots.linux.dev/vmksft-net-dbg/results/321640/110-ip-local-port-range-sh/stderr
https://netdev-3.bots.linux.dev/vmksft-net-dbg/results/321281/63-fdb-notify-sh/stderr
The stack traces aren't very helpful; they mostly show the victim, not
who caused the deadlock :(
Any ideas?
* Re: deadlocks on pernet_ops_rwsem
From: Paolo Abeni @ 2025-10-01 16:46 UTC
To: Jakub Kicinski, Eric Dumazet, Florian Westphal, Kuniyuki Iwashima,
Willem de Bruijn
Cc: netdev
On 10/1/25 5:20 PM, Jakub Kicinski wrote:
> We started hitting a deadlock on pernet_ops_rwsem, not very often;
> these are the CI branches where we saw it:
>
> 2025-09-18--03-00
> 2025-09-18--15-00
> 2025-09-18--18-00
> 2025-09-19--21-00
> 2025-09-21--15-00
> 2025-09-21--18-00
> 2025-09-22--00-00
> 2025-09-22--09-00
> 2025-09-23--06-00
> 2025-09-24--21-00
> 2025-09-27--21-00
> 2025-09-28--09-00
> 2025-10-01--00-00
> 2025-10-01--06-00
>
> First hit seems to be:
> https://netdev-3.bots.linux.dev/vmksft-net-dbg/results/302741/65-l2tp-sh/stderr
> Two most recent:
> https://netdev-3.bots.linux.dev/vmksft-net-dbg/results/321640/110-ip-local-port-range-sh/stderr
> https://netdev-3.bots.linux.dev/vmksft-net-dbg/results/321281/63-fdb-notify-sh/stderr
>
> The stack traces aren't very helpful; they mostly show the victim, not
> who caused the deadlock :(
>
> Any ideas?
Not many here. The above are debug builds, so we should get a lockdep
splat on a deadlock, but the logs lack one. I guess the request_module()
breaks the lockdep checks?
IIRC syzkaller uses a config with most/all options built in, so I guess
it would not be able to replicate this scenario (to possibly help create
a repro), unless there is an allmodconfig instance?
Adding Willem, just in case he knows more about possible syzkaller usage
here.
/P
* Re: deadlocks on pernet_ops_rwsem
From: Jakub Kicinski @ 2025-10-01 17:22 UTC
To: Paolo Abeni
Cc: Eric Dumazet, Florian Westphal, Kuniyuki Iwashima,
Willem de Bruijn, netdev
On Wed, 1 Oct 2025 18:46:16 +0200 Paolo Abeni wrote:
> Not many here. The above are debug builds, so we should get a lockdep
> splat on a deadlock, but the logs lack one. I guess the request_module()
> breaks the lockdep checks?
To be clear -- AFAICT lockdep misses this.
The splat is from the "stuck task" checker.
2 min wait to load a module during test init would definitely be a sign
of something going sideways.. but I think it's worse than that: these
time out completely and we kill the VM. I think the modprobe is truly
stuck here.
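For context, the pattern here should be module init (writer) vs the
netns cleanup worker (reader); roughly, simplified from memory of
net/core/net_namespace.c:

int register_pernet_subsys(struct pernet_operations *ops)
{
	int error;

	down_write(&pernet_ops_rwsem);	/* modprobe path: writer */
	error = register_pernet_operations(first_device, ops);
	up_write(&pernet_ops_rwsem);
	return error;
}

static void cleanup_net(struct work_struct *work)
{
	/*
	 * netns cleanup kworker: reader, held across every registered
	 * ->exit() / ->exit_batch() hook, so one stuck exit hook blocks
	 * all pending module loads.
	 */
	down_read(&pernet_ops_rwsem);
	/* ... run exit hooks for all dying netns ... */
	up_read(&pernet_ops_rwsem);
}

which would match what the checker reports below: modprobe as the
writer, the cleanup_net kworker as the reader.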
In one of the splats lockdep was able to say:
[ 4302.448228][ T44] INFO: task modprobe:31634 <writer> blocked on an rw-semaphore likely owned by task kworker/u16:0:12 <reader>
but most are more useless:
[ 4671.090728][ T44] INFO: task modprobe:2342 is blocked on an rw-semaphore, but the owner is not found.
(?!?)
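Though IIUC the reader side of an rwsem is only tracked heuristically --
each down_read() just overwrites sem->owner with the latest reader, and
nothing reliably clears it on up_read() -- so "likely" / "not found" may
be the best the checker can do. Roughly, from memory of
kernel/locking/rwsem.c:

static inline void __rwsem_set_reader_owned(struct rw_semaphore *sem,
					    struct task_struct *owner)
{
	/* debugging hint only: records just the most recent reader */
	unsigned long val = (unsigned long)owner | RWSEM_READER_OWNED |
		(atomic_long_read(&sem->owner) & RWSEM_NONSPINNABLE);

	atomic_long_set(&sem->owner, val);
}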
* Re: deadlocks on pernet_ops_rwsem
From: Kuniyuki Iwashima @ 2025-10-01 18:50 UTC
To: kuba; +Cc: edumazet, fw, kuniyu, netdev, pabeni, willemb
From: Jakub Kicinski <kuba@kernel.org>
Date: Wed, 1 Oct 2025 10:22:23 -0700
> On Wed, 1 Oct 2025 18:46:16 +0200 Paolo Abeni wrote:
> > Not many here. The above are debug builds, so we should get a lockdep
> > splat on a deadlock, but the logs lack one. I guess the request_module()
> > breaks the lockdep checks?
>
> To be clear -- AFAICT lockdep misses this.
>
> The splat is from the "stuck task" checker.
>
> 2 min wait to load a module during test init would definitely be a sign
> of something going sideways.. but I think it's worse than that: these
> time out completely and we kill the VM. I think the modprobe is truly
> stuck here.
>
> In one of the splats lockdep was able to say:
>
> [ 4302.448228][ T44] INFO: task modprobe:31634 <writer> blocked on an rw-semaphore likely owned by task kworker/u16:0:12 <reader>
>
> but most are more useless:
>
> [ 4671.090728][ T44] INFO: task modprobe:2342 is blocked on an rw-semaphore, but the owner is not found.
>
> (?!?)
Even when it caught the possible owner, lockdep seems confused :/
[ 4302.448228][ T44] INFO: task modprobe:31634 <writer> blocked on an rw-semaphore likely owned by task kworker/u16:0:12 <reader>
modprobe:31634 seems to be blocked by kworker/u16:0:12,
[ 4302.449035][ T44] task:kworker/u16:0 state:R running task stack:26368 pid:12 tgid:12 ppid:2 task_flags:0x4208060 flags:0x00004000
[ 4302.449872][ T44] Workqueue: netns cleanup_net
...
[ 4302.460889][ T44] Showing all locks held in the system:
[ 4302.461368][ T44] 4 locks held by kworker/u16:0/12:
but no lock shows up here for kworker/u16:0/12,
[ 4302.461597][ T44] 2 locks held by kworker/u18:0/36:
[ 4302.461926][ T44] #0: ffff8880010d9d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x7e5/0x1650
[ 4302.462429][ T44] #1: ffffc9000028fd40 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work+0xded/0x1650
[ 4302.463011][ T44] 1 lock held by khungtaskd/44:
[ 4302.463261][ T44] #0: ffffffffb7b83f80 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x36/0x260
[ 4302.463717][ T44] 1 lock held by modprobe/31634:
[ 4302.463982][ T44] #0: ffffffffb8270430 (pernet_ops_rwsem){++++}-{4:4}, at: register_pernet_subsys+0x1a/0x40
and modprobe/31634 is holding pernet_ops_rwsem ???
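Actually both oddities may be explainable. lockdep prints the held-lock
count for every task but refuses to list the locks of a task that is
currently running -- and our kworker is state R. Simplified from memory
of kernel/locking/lockdep.c:

static void lockdep_print_held_locks(struct task_struct *p)
{
	int i, depth = READ_ONCE(p->lockdep_depth);

	if (!depth)
		printk("no locks held by %s/%d.\n", p->comm, task_pid_nr(p));
	else
		printk("%d lock%s held by %s/%d:\n", depth,
		       depth > 1 ? "s" : "", p->comm, task_pid_nr(p));
	/*
	 * It's not reliable to print a task's held locks if it's not
	 * sleeping and it's not the current task.
	 */
	if (p != current && task_is_running(p))
		return;
	for (i = 0; i < depth; i++) {
		printk(" #%d: ", i);
		print_lock(p->held_locks + i);
	}
}

And modprobe showing pernet_ops_rwsem as held is expected too, IIUC:
lockdep accounts the lock from lock_acquire(), i.e. before the task
actually obtains it, so a blocked writer appears to be "holding" the
rwsem it is waiting for.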
Was there any update to the packages (especially qemu) used by
CI around 2025-09-18?
* Re: deadlocks on pernet_ops_rwsem
From: Jakub Kicinski @ 2025-10-01 19:26 UTC
To: Kuniyuki Iwashima; +Cc: edumazet, fw, netdev, pabeni, willemb
On Wed, 1 Oct 2025 18:50:22 +0000 Kuniyuki Iwashima wrote:
> From: Jakub Kicinski <kuba@kernel.org>
> Date: Wed, 1 Oct 2025 10:22:23 -0700
> > To be clear -- AFAICT lockdep misses this.
> >
> > The splat is from the "stuck task" checker.
> >
> > 2 min wait to load a module during test init would definitely be a sign
> > of something going sideways.. but I think it's worse than that: these
> > time out completely and we kill the VM. I think the modprobe is truly
> > stuck here.
> >
> > In one of the splats lockdep was able to say:
> >
> > [ 4302.448228][ T44] INFO: task modprobe:31634 <writer> blocked on an rw-semaphore likely owned by task kworker/u16:0:12 <reader>
> >
> > but most are more useless:
> >
> > [ 4671.090728][ T44] INFO: task modprobe:2342 is blocked on an rw-semaphore, but the owner is not found.
> >
> > (?!?)
>
> Even when it caught the possible owner, lockdep seems confused :/
>
>
> [ 4302.448228][ T44] INFO: task modprobe:31634 <writer> blocked on an rw-semaphore likely owned by task kworker/u16:0:12 <reader>
>
> modprobe:31634 seems to be blocked by kworker/u16:0:12,
>
>
> [ 4302.449035][ T44] task:kworker/u16:0 state:R running task stack:26368 pid:12 tgid:12 ppid:2 task_flags:0x4208060 flags:0x00004000
> [ 4302.449872][ T44] Workqueue: netns cleanup_net
> ...
> [ 4302.460889][ T44] Showing all locks held in the system:
> [ 4302.461368][ T44] 4 locks held by kworker/u16:0/12:
>
> but no lock shows up here for kworker/u16:0/12,
>
>
> [ 4302.461597][ T44] 2 locks held by kworker/u18:0/36:
> [ 4302.461926][ T44] #0: ffff8880010d9d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x7e5/0x1650
> [ 4302.462429][ T44] #1: ffffc9000028fd40 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work+0xded/0x1650
> [ 4302.463011][ T44] 1 lock held by khungtaskd/44:
> [ 4302.463261][ T44] #0: ffffffffb7b83f80 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x36/0x260
> [ 4302.463717][ T44] 1 lock held by modprobe/31634:
> [ 4302.463982][ T44] #0: ffffffffb8270430 (pernet_ops_rwsem){++++}-{4:4}, at: register_pernet_subsys+0x1a/0x40
>
> and modprobe/31634 is holding pernet_ops_rwsem ???
>
>
> Was there any update to the packages (especially qemu) used by
> CI around 2025-09-18?
No updates according to the logs. The first hit was on a Thursday, so I
thought maybe it came from Linus. But looking at the branches, we
fast-forwarded at 2025-09-18--21-00 and there were 2 hits earlier that
day (2025-09-18--03-00, 2025-09-18--15-00).
* Re: deadlocks on pernet_ops_rwsem
From: Willem de Bruijn @ 2025-10-01 19:33 UTC
To: Jakub Kicinski; +Cc: Kuniyuki Iwashima, edumazet, fw, netdev, pabeni
On Wed, Oct 1, 2025 at 3:26 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Wed, 1 Oct 2025 18:50:22 +0000 Kuniyuki Iwashima wrote:
> > From: Jakub Kicinski <kuba@kernel.org>
> > Date: Wed, 1 Oct 2025 10:22:23 -0700
> > > To be clear -- AFAICT lockdep misses this.
> > >
> > > The splat is from the "stuck task" checker.
> > >
> > > 2 min wait to load a module during test init would definitely be a sign
> > > of something going sideways.. but I think it's worse than that: these
> > > time out completely and we kill the VM. I think the modprobe is truly
> > > stuck here.
> > >
> > > In one of the splats lockdep was able to say:
> > >
> > > [ 4302.448228][ T44] INFO: task modprobe:31634 <writer> blocked on an rw-semaphore likely owned by task kworker/u16:0:12 <reader>
> > >
> > > but most are more useless:
> > >
> > > [ 4671.090728][ T44] INFO: task modprobe:2342 is blocked on an rw-semaphore, but the owner is not found.
> > >
> > > (?!?)
> >
> > Even when it caught the possible owner, lockdep seems confused :/
> >
> >
> > [ 4302.448228][ T44] INFO: task modprobe:31634 <writer> blocked on an rw-semaphore likely owned by task kworker/u16:0:12 <reader>
> >
> > modprobe:31634 seems to be blocked by kworker/u16:0:12,
> >
> >
> > [ 4302.449035][ T44] task:kworker/u16:0 state:R running task stack:26368 pid:12 tgid:12 ppid:2 task_flags:0x4208060 flags:0x00004000
> > [ 4302.449872][ T44] Workqueue: netns cleanup_net
> > ...
> > [ 4302.460889][ T44] Showing all locks held in the system:
> > [ 4302.461368][ T44] 4 locks held by kworker/u16:0/12:
> >
> > but no lock shows up here for kworker/u16:0/12,
> >
> >
> > [ 4302.461597][ T44] 2 locks held by kworker/u18:0/36:
> > [ 4302.461926][ T44] #0: ffff8880010d9d48 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x7e5/0x1650
> > [ 4302.462429][ T44] #1: ffffc9000028fd40 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work+0xded/0x1650
> > [ 4302.463011][ T44] 1 lock held by khungtaskd/44:
> > [ 4302.463261][ T44] #0: ffffffffb7b83f80 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x36/0x260
> > [ 4302.463717][ T44] 1 lock held by modprobe/31634:
> > [ 4302.463982][ T44] #0: ffffffffb8270430 (pernet_ops_rwsem){++++}-{4:4}, at: register_pernet_subsys+0x1a/0x40
> >
> > and modprobe/31634 is holding pernet_ops_rwsem ???
> >
> >
> > Was there any update to the packages (especially qemu) used by
> > CI around 2025-09-18?
>
> No updates according to the logs. First hit was on Thursday so I thought
> maybe it came from Linus. But looking at the branches we fast forwarded
> 2025-09-18--21-00 and there were 2 hits earlier that day (2025-09-18--03-00,
> 2025-09-18--15-00)
Is there a good SHA1 around the time of the earliest report?
There do not seem to be any recent changes to rwsem, let alone
specifically to pernet_ops_rwsem.
* Re: deadlocks on pernet_ops_rwsem
From: Jakub Kicinski @ 2025-10-01 20:30 UTC
To: Willem de Bruijn; +Cc: Kuniyuki Iwashima, edumazet, fw, netdev, pabeni
On Wed, 1 Oct 2025 15:33:56 -0400 Willem de Bruijn wrote:
> > > Was there any update to the packages (especially qemu) used by
> > > CI around 2025-09-18?
> >
> > No updates according to the logs. The first hit was on a Thursday, so I
> > thought maybe it came from Linus. But looking at the branches, we
> > fast-forwarded at 2025-09-18--21-00 and there were 2 hits earlier that
> > day (2025-09-18--03-00, 2025-09-18--15-00).
>
> Is there a good SHA1 around the time of the earliest report?
>
> There do not seem to be any recent changes to rwsem, let alone
> specifically to pernet_ops_rwsem.
The first time it triggered, net-next was at 152ba35c04ad and net at
09847108971a; CI branch contents:
https://netdev.bots.linux.dev/static/nipa/branch_deltas/net-next-2025-09-18--03-00.html
The previous one (so the branch before the first time it triggered)
had base commits 6b957c0a36f5 and 09847108971a.
If you click on the link above you can go back/forward through the
branch contents.