* [BUG almost bisected] Splat in dequeue_rt_stack() and build error
@ 2024-08-21 21:57 Paul E. McKenney
2024-08-22 23:01 ` Paul E. McKenney
` (2 more replies)
0 siblings, 3 replies; 67+ messages in thread
From: Paul E. McKenney @ 2024-08-21 21:57 UTC (permalink / raw)
To: peterz, vschneid; +Cc: linux-kernel, sfr, linux-next, kernel-team
Hello!
When running rcutorture scenario TREE03 on both next-20240820 and
next-20240821, I see failures like this about half a second into the run
("2024.08.21-11.24.13" on my laptop in case I overtrimmed):
------------------------------------------------------------------------
WARNING: CPU: 4 PID: 42 at kernel/sched/rt.c:1405 dequeue_rt_stack+0x246/0x290
Modules linked in:
CPU: 4 UID: 0 PID: 42 Comm: cpuhp/4 Not tainted 6.11.0-rc1-00048-gaef6987d8954 #152
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.15.0-1 04/01/2014
RIP: 0010:dequeue_rt_stack+0x246/0x290
Code: f6 66 89 73 24 83 fa 63 0f 86 ad fe ff ff 90 0f 0b 90 e9 a4 fe ff ff 44 89 ee 48 89 df f7 de e8 50 22 ff ff e9 51 ff ff ff 90 <0f> 0b 90 e9 3a fe ff ff 90 0f 0b 90 e9 ef fd ff ff 8b 14 25 94 fe
RSP: 0000:ffffbc07801dfc18 EFLAGS: 00010046
RAX: ffff9ab05f22c200 RBX: ffff9ab04182e8c0 RCX: 0000000000000000
RDX: 0000000000000000 RSI: ffff9ab05f22c200 RDI: ffff9ab04182e8c0
RBP: 000000000002c200 R08: ffffbc07801dfcf8 R09: ffff9ab04182efb4
R10: 0000000000000001 R11: 00000000ffffffff R12: ffff9ab04182e8c0
R13: 0000000000000001 R14: 000000000002c200 R15: 0000000000000008
FS: 0000000000000000(0000) GS:ffff9ab05f300000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 0000000011e2e000 CR4: 00000000000006f0
Call Trace:
<TASK>
? __warn+0x7e/0x120
? dequeue_rt_stack+0x246/0x290
? report_bug+0x18e/0x1a0
? handle_bug+0x3d/0x70
? exc_invalid_op+0x18/0x70
? asm_exc_invalid_op+0x1a/0x20
? dequeue_rt_stack+0x246/0x290
dequeue_task_rt+0x68/0x160
move_queued_task.constprop.0+0x62/0xf0
affine_move_task+0x4a3/0x4d0
? affine_move_task+0x229/0x4d0
__set_cpus_allowed_ptr+0x4e/0xa0
set_cpus_allowed_ptr+0x36/0x60
rcutree_affinity_setting+0x16a/0x1d0
? __pfx_rcutree_online_cpu+0x10/0x10
rcutree_online_cpu+0x55/0x60
cpuhp_invoke_callback+0x2cd/0x470
------------------------------------------------------------------------
My reproducer on the two-socket 40-core 80-HW-thread systems is:
tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 1m --configs "50*TREE03" --trust-make
This happens maybe half the time on a two-socket x86 system, and rather
less frequently on my 8-core (16 hardware threads) x86 laptop. (I cheat
and use kvm-remote.sh on 10 two-socket x86 systems to speed things up
a bit.) But bisection is uncharacteristically easy (once I got another
next-20240820 bug out of the way), and converges here:
2e0199df252a ("sched/fair: Prepare exit/cleanup paths for delayed_dequeue")
The preceding commit is very reliable.
Only instead of (or maybe as well as?) introducing the dequeue_rt_stack()
bug, the 2e0199df252a commit introduced a build bug:
------------------------------------------------------------------------
In file included from kernel/sched/fair.c:54:
kernel/sched/fair.c: In function ‘switched_from_fair’:
kernel/sched/sched.h:2154:58: error: ‘__SCHED_FEAT_DELAY_ZERO’ undeclared (first use in this function); did you mean ‘__SCHED_FEAT_LATENCY_WARN’?
2154 | #define sched_feat(x) !!(sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
| ^~~~~~~~~~~~~
kernel/sched/fair.c:12878:21: note: in expansion of macro ‘sched_feat’
12878 | if (sched_feat(DELAY_ZERO) && p->se.vlag > 0)
| ^~~~~~~~~~
kernel/sched/sched.h:2154:58: note: each undeclared identifier is reported only once for each function it appears in
2154 | #define sched_feat(x) !!(sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
| ^~~~~~~~~~~~~
kernel/sched/fair.c:12878:21: note: in expansion of macro ‘sched_feat’
12878 | if (sched_feat(DELAY_ZERO) && p->se.vlag > 0)
| ^~~~~~~~~~
------------------------------------------------------------------------
This build bug continues through this commit:
152e11f6df293 ("sched/fair: Implement delayed dequeue")
By which time it is also accompanied by this (admittedly trivial) warning,
which -Werror promotes to a build-stopping error:
------------------------------------------------------------------------
kernel/sched/fair.c: In function ‘requeue_delayed_entity’:
kernel/sched/fair.c:6818:24: error: unused variable ‘cfs_rq’ [-Werror=unused-variable]
6818 | struct cfs_rq *cfs_rq = cfs_rq_of(se);
| ^~~~~~
------------------------------------------------------------------------
The commit following this one is:
54a58a7877916 ("sched/fair: Implement DELAY_ZERO")
This builds cleanly, but suffers from the dequeue_rt_stack() bug whose
splat is shown above.
Thoughts?
Thanx, Paul
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-21 21:57 [BUG almost bisected] Splat in dequeue_rt_stack() and build error Paul E. McKenney
@ 2024-08-22 23:01 ` Paul E. McKenney
2024-08-23 7:47 ` Peter Zijlstra
2024-10-03 8:40 ` Peter Zijlstra
2 siblings, 0 replies; 67+ messages in thread
From: Paul E. McKenney @ 2024-08-22 23:01 UTC (permalink / raw)
To: peterz, vschneid; +Cc: linux-kernel, sfr, linux-next, kernel-team
On Wed, Aug 21, 2024 at 02:57:16PM -0700, Paul E. McKenney wrote:
> Hello!
>
> When running rcutorture scenario TREE03 on both next-20240820 and
> next-20240821, I see failures like this about half a second into the run
> ("2024.08.21-11.24.13" on my laptop in case I overtrimmed):
>
> ------------------------------------------------------------------------
>
> WARNING: CPU: 4 PID: 42 at kernel/sched/rt.c:1405 dequeue_rt_stack+0x246/0x290
> Modules linked in:
> CPU: 4 UID: 0 PID: 42 Comm: cpuhp/4 Not tainted 6.11.0-rc1-00048-gaef6987d8954 #152
> Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.15.0-1 04/01/2014
> RIP: 0010:dequeue_rt_stack+0x246/0x290
> Code: f6 66 89 73 24 83 fa 63 0f 86 ad fe ff ff 90 0f 0b 90 e9 a4 fe ff ff 44 89 ee 48 89 df f7 de e8 50 22 ff ff e9 51 ff ff ff 90 <0f> 0b 90 e9 3a fe ff ff 90 0f 0b 90 e9 ef fd ff ff 8b 14 25 94 fe
> RSP: 0000:ffffbc07801dfc18 EFLAGS: 00010046
> RAX: ffff9ab05f22c200 RBX: ffff9ab04182e8c0 RCX: 0000000000000000
> RDX: 0000000000000000 RSI: ffff9ab05f22c200 RDI: ffff9ab04182e8c0
> RBP: 000000000002c200 R08: ffffbc07801dfcf8 R09: ffff9ab04182efb4
> R10: 0000000000000001 R11: 00000000ffffffff R12: ffff9ab04182e8c0
> R13: 0000000000000001 R14: 000000000002c200 R15: 0000000000000008
> FS: 0000000000000000(0000) GS:ffff9ab05f300000(0000) knlGS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 0000000000000000 CR3: 0000000011e2e000 CR4: 00000000000006f0
> Call Trace:
> <TASK>
> ? __warn+0x7e/0x120
> ? dequeue_rt_stack+0x246/0x290
> ? report_bug+0x18e/0x1a0
> ? handle_bug+0x3d/0x70
> ? exc_invalid_op+0x18/0x70
> ? asm_exc_invalid_op+0x1a/0x20
> ? dequeue_rt_stack+0x246/0x290
> dequeue_task_rt+0x68/0x160
> move_queued_task.constprop.0+0x62/0xf0
> affine_move_task+0x4a3/0x4d0
> ? affine_move_task+0x229/0x4d0
> __set_cpus_allowed_ptr+0x4e/0xa0
> set_cpus_allowed_ptr+0x36/0x60
> rcutree_affinity_setting+0x16a/0x1d0
> ? __pfx_rcutree_online_cpu+0x10/0x10
> rcutree_online_cpu+0x55/0x60
> cpuhp_invoke_callback+0x2cd/0x470
>
> ------------------------------------------------------------------------
>
> My reproducer on the two-socket 40-core 80-HW-thread systems is:
>
> tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 1m --configs "50*TREE03" --trust-make
>
> This happens maybe half the time on a two-socket x86 system, and rather
> less frequently on my 8-core (16 hardware threads) x86 laptop. (I cheat
> and use kvm-remote.sh on 10 two-socket x86 systems to speed things up
> a bit.) But bisection is uncharacteristically easy (once I got another
> next-20240820 bug out of the way), and converges here:
>
> 2e0199df252a ("sched/fair: Prepare exit/cleanup paths for delayed_dequeue")
>
> The preceding commit is very reliable.
>
> Only instead of (or maybe as well as?) introducing the dequeue_rt_stack()
> bug, the 2e0199df252a commit introduced a build bug:
>
> ------------------------------------------------------------------------
>
> In file included from kernel/sched/fair.c:54:
> kernel/sched/fair.c: In function ‘switched_from_fair’:
> kernel/sched/sched.h:2154:58: error: ‘__SCHED_FEAT_DELAY_ZERO’ undeclared (first use in this function); did you mean ‘__SCHED_FEAT_LATENCY_WARN’?
> 2154 | #define sched_feat(x) !!(sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
> | ^~~~~~~~~~~~~
> kernel/sched/fair.c:12878:21: note: in expansion of macro ‘sched_feat’
> 12878 | if (sched_feat(DELAY_ZERO) && p->se.vlag > 0)
> | ^~~~~~~~~~
> kernel/sched/sched.h:2154:58: note: each undeclared identifier is reported only once for each function it appears in
> 2154 | #define sched_feat(x) !!(sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
> | ^~~~~~~~~~~~~
> kernel/sched/fair.c:12878:21: note: in expansion of macro ‘sched_feat’
> 12878 | if (sched_feat(DELAY_ZERO) && p->se.vlag > 0)
> | ^~~~~~~~~~
>
> ------------------------------------------------------------------------
>
> This build bug continues through this commit:
>
> 152e11f6df293 ("sched/fair: Implement delayed dequeue")
>
> By which time it is also accompanied by this (admittedly trivial) warning:
>
> ------------------------------------------------------------------------
>
> kernel/sched/fair.c: In function ‘requeue_delayed_entity’:
> kernel/sched/fair.c:6818:24: error: unused variable ‘cfs_rq’ [-Werror=unused-variable]
> 6818 | struct cfs_rq *cfs_rq = cfs_rq_of(se);
> | ^~~~~~
>
> ------------------------------------------------------------------------
>
> The commit following this one is:
>
> 54a58a7877916 ("sched/fair: Implement DELAY_ZERO")
>
> This builds cleanly, but suffers from the dequeue_rt_stack() bug whose
> splat is shown above.
>
> Thoughts?
Many of the failures seem to have little effect; that is, the system splats
and then proceeds as if nothing had happened. But sometimes things are
more serious:
------------------------------------------------------------------------
kernel BUG at kernel/sched/rt.c:1714!
Oops: invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
CPU: 0 UID: 0 PID: 0 Comm: swapper/0 Not tainted 6.11.0-rc4-next-20240822 #53511
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
RIP: 0010:pick_next_task_rt+0x1d8/0x1e0
Code: 30 0a 00 00 8b 93 98 0a 00 00 f0 48 0f b3 90 b0 00 00 00 c6 83 20 08 00 00 00 e9 f2 fe ff ff f3 48 0f bc c0 e9 60 fe ff ff 90 <0f> 0b 66 0f 1f 44 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90
RSP: 0018:ffffffff90203e38 EFLAGS: 00010002
RAX: 0000000000000064 RBX: ffff8bd55f22c240 RCX: ffff8bd55f200000
RDX: 0000000000000001 RSI: 0000000000000000 RDI: ffff8bd55f22c240
RBP: ffffffff90203ec0 R08: 00000000000000b4 R09: 000000000000002e
R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
R13: ffff8bd55f22c240 R14: 0000000000000000 R15: ffffffff9020c940
FS: 0000000000000000(0000) GS:ffff8bd55f200000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffff8bd551401000 CR3: 000000001082e000 CR4: 00000000000006f0
Call Trace:
<TASK>
? die+0x32/0x90
? do_trap+0xd8/0x100
? pick_next_task_rt+0x1d8/0x1e0
? do_error_trap+0x60/0x80
? pick_next_task_rt+0x1d8/0x1e0
? exc_invalid_op+0x53/0x70
? pick_next_task_rt+0x1d8/0x1e0
? asm_exc_invalid_op+0x1a/0x20
? pick_next_task_rt+0x1d8/0x1e0
__schedule+0x50b/0x8e0
? ct_kernel_enter.constprop.0+0x30/0xd0
? ct_idle_exit+0xd/0x20
schedule_idle+0x1b/0x30
cpu_startup_entry+0x24/0x30
rest_init+0xbc/0xc0
start_kernel+0x4f9/0x790
x86_64_start_reservations+0x18/0x30
x86_64_start_kernel+0xc6/0xe0
common_startup_64+0x12c/0x138
</TASK>
Modules linked in:
---[ end trace 0000000000000000 ]---
RIP: 0010:pick_next_task_rt+0x1d8/0x1e0
Code: 30 0a 00 00 8b 93 98 0a 00 00 f0 48 0f b3 90 b0 00 00 00 c6 83 20 08 00 00 00 e9 f2 fe ff ff f3 48 0f bc c0 e9 60 fe ff ff 90 <0f> 0b 66 0f 1f 44 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90
RSP: 0018:ffffffff90203e38 EFLAGS: 00010002
RAX: 0000000000000064 RBX: ffff8bd55f22c240 RCX: ffff8bd55f200000
RDX: 0000000000000001 RSI: 0000000000000000 RDI: ffff8bd55f22c240
RBP: ffffffff90203ec0 R08: 00000000000000b4 R09: 000000000000002e
R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
R13: ffff8bd55f22c240 R14: 0000000000000000 R15: ffffffff9020c940
FS: 0000000000000000(0000) GS:ffff8bd55f200000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffff8bd551401000 CR3: 000000001082e000 CR4: 00000000000006f0
Kernel panic - not syncing: Attempted to kill the idle task!
Shutting down cpus with NMI
------------------------------------------------------------------------
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-21 21:57 [BUG almost bisected] Splat in dequeue_rt_stack() and build error Paul E. McKenney
2024-08-22 23:01 ` Paul E. McKenney
@ 2024-08-23 7:47 ` Peter Zijlstra
2024-08-23 12:46 ` Paul E. McKenney
2024-08-26 11:44 ` Valentin Schneider
2024-10-03 8:40 ` Peter Zijlstra
2 siblings, 2 replies; 67+ messages in thread
From: Peter Zijlstra @ 2024-08-23 7:47 UTC (permalink / raw)
To: Paul E. McKenney; +Cc: vschneid, linux-kernel, sfr, linux-next, kernel-team
On Wed, Aug 21, 2024 at 02:57:16PM -0700, Paul E. McKenney wrote:
> 2e0199df252a ("sched/fair: Prepare exit/cleanup paths for delayed_dequeue")
>
> The preceding commit is very reliable.
>
> Only instead of (or maybe as well as?) introducing the dequeue_rt_stack()
> bug, the 2e0199df252a commit introduced a build bug:
>
> ------------------------------------------------------------------------
>
> In file included from kernel/sched/fair.c:54:
> kernel/sched/fair.c: In function ‘switched_from_fair’:
> kernel/sched/sched.h:2154:58: error: ‘__SCHED_FEAT_DELAY_ZERO’ undeclared (first use in this function); did you mean ‘__SCHED_FEAT_LATENCY_WARN’?
> 2154 | #define sched_feat(x) !!(sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
> | ^~~~~~~~~~~~~
> kernel/sched/fair.c:12878:21: note: in expansion of macro ‘sched_feat’
> 12878 | if (sched_feat(DELAY_ZERO) && p->se.vlag > 0)
> | ^~~~~~~~~~
> kernel/sched/sched.h:2154:58: note: each undeclared identifier is reported only once for each function it appears in
> 2154 | #define sched_feat(x) !!(sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
> | ^~~~~~~~~~~~~
> kernel/sched/fair.c:12878:21: note: in expansion of macro ‘sched_feat’
> 12878 | if (sched_feat(DELAY_ZERO) && p->se.vlag > 0)
> | ^~~~~~~~~~
>
Oh gawd, last minute back-merges :/
Does the below help any? That's more or less what it was before Valentin
asked me why it was weird like that :-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6be618110885..5757dd50b02f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -13107,7 +13107,6 @@ static void switched_from_fair(struct rq *rq, struct task_struct *p)
* and we cannot use DEQUEUE_DELAYED.
*/
if (p->se.sched_delayed) {
- dequeue_task(rq, p, DEQUEUE_NOCLOCK | DEQUEUE_SLEEP);
p->se.sched_delayed = 0;
p->se.rel_deadline = 0;
if (sched_feat(DELAY_ZERO) && p->se.vlag > 0)
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-23 7:47 ` Peter Zijlstra
@ 2024-08-23 12:46 ` Paul E. McKenney
2024-08-23 21:51 ` Paul E. McKenney
2024-08-26 11:44 ` Valentin Schneider
1 sibling, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-08-23 12:46 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: vschneid, linux-kernel, sfr, linux-next, kernel-team
On Fri, Aug 23, 2024 at 09:47:05AM +0200, Peter Zijlstra wrote:
> On Wed, Aug 21, 2024 at 02:57:16PM -0700, Paul E. McKenney wrote:
>
> > 2e0199df252a ("sched/fair: Prepare exit/cleanup paths for delayed_dequeue")
> >
> > The preceding commit is very reliable.
> >
> > Only instead of (or maybe as well as?) introducing the dequeue_rt_stack()
> > bug, the 2e0199df252a commit introduced a build bug:
> >
> > ------------------------------------------------------------------------
> >
> > In file included from kernel/sched/fair.c:54:
> > kernel/sched/fair.c: In function ‘switched_from_fair’:
> > kernel/sched/sched.h:2154:58: error: ‘__SCHED_FEAT_DELAY_ZERO’ undeclared (first use in this function); did you mean ‘__SCHED_FEAT_LATENCY_WARN’?
> > 2154 | #define sched_feat(x) !!(sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
> > | ^~~~~~~~~~~~~
> > kernel/sched/fair.c:12878:21: note: in expansion of macro ‘sched_feat’
> > 12878 | if (sched_feat(DELAY_ZERO) && p->se.vlag > 0)
> > | ^~~~~~~~~~
> > kernel/sched/sched.h:2154:58: note: each undeclared identifier is reported only once for each function it appears in
> > 2154 | #define sched_feat(x) !!(sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
> > | ^~~~~~~~~~~~~
> > kernel/sched/fair.c:12878:21: note: in expansion of macro ‘sched_feat’
> > 12878 | if (sched_feat(DELAY_ZERO) && p->se.vlag > 0)
> > | ^~~~~~~~~~
> >
>
> Oh gawd, last minute back-merges :/
I know that feeling! ;-)
> Does the below help any? That's more or less what it was before Valentin
> asked me why it was weird like that :-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 6be618110885..5757dd50b02f 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -13107,7 +13107,6 @@ static void switched_from_fair(struct rq *rq, struct task_struct *p)
> * and we cannot use DEQUEUE_DELAYED.
> */
> if (p->se.sched_delayed) {
> - dequeue_task(rq, p, DEQUEUE_NOCLOCK | DEQUEUE_SLEEP);
> p->se.sched_delayed = 0;
> p->se.rel_deadline = 0;
> if (sched_feat(DELAY_ZERO) && p->se.vlag > 0)
Removing that line from 2e0199df252a still gets me the complaint about
__SCHED_FEAT_DELAY_ZERO being undefined. To my naive eyes, it appears
that this commit:
54a58a787791 ("sched/fair: Implement DELAY_ZERO")
Needs to be placed before 2e0199df252a. Of course, when I try it, I
get conflicts. So I took just this hunk:
------------------------------------------------------------------------
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index 97fb2d4920898..6c5f5424614d4 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -28,6 +28,11 @@ SCHED_FEAT(NEXT_BUDDY, false)
*/
SCHED_FEAT(CACHE_HOT_BUDDY, true)
+/*
+ * DELAY_ZERO clips the lag on dequeue (or wakeup) to 0.
+ */
+SCHED_FEAT(DELAY_ZERO, true)
+
/*
* Allow wakeup-time preemption of the current task:
*/
------------------------------------------------------------------------
That makes the build error go away. Maybe even legitimately?
Just to pick on the easy one, I took a look at the complaint about
cfs_rq being unused and the complaint about __SCHED_FEAT_DELAY_ZERO
being undefined. This variable was added here:
781773e3b680 ("sched/fair: Implement ENQUEUE_DELAYED")
And its first use was added here:
54a58a787791 ("sched/fair: Implement DELAY_ZERO")
Which matches my experience.
So left to myself, I would run on these commits with the above hunk:
54a58a7877916 sched/fair: Implement DELAY_ZERO
152e11f6df293 sched/fair: Implement delayed dequeue
e1459a50ba318 sched: Teach dequeue_task() about special task states
a1c446611e31c sched,freezer: Mark TASK_FROZEN special
781773e3b6803 sched/fair: Implement ENQUEUE_DELAYED
f12e148892ede sched/fair: Prepare pick_next_task() for delayed dequeue
2e0199df252a5 sched/fair: Prepare exit/cleanup paths for delayed_dequeue
e28b5f8bda017 sched/fair: Assert {set_next,put_prev}_entity() are properly balanced
And where needed, remove the unused cfs_rq local variable.
Would that likely work?
In the meantime, SIGFOOD!
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-23 12:46 ` Paul E. McKenney
@ 2024-08-23 21:51 ` Paul E. McKenney
2024-08-24 6:54 ` Peter Zijlstra
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-08-23 21:51 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: vschneid, linux-kernel, sfr, linux-next, kernel-team
On Fri, Aug 23, 2024 at 05:46:59AM -0700, Paul E. McKenney wrote:
> On Fri, Aug 23, 2024 at 09:47:05AM +0200, Peter Zijlstra wrote:
> > On Wed, Aug 21, 2024 at 02:57:16PM -0700, Paul E. McKenney wrote:
> >
> > > 2e0199df252a ("sched/fair: Prepare exit/cleanup paths for delayed_dequeue")
> > >
> > > The preceding commit is very reliable.
> > >
> > > Only instead of (or maybe as well as?) introducing the dequeue_rt_stack()
> > > bug, the 2e0199df252a commit introduced a build bug:
> > >
> > > ------------------------------------------------------------------------
> > >
> > > In file included from kernel/sched/fair.c:54:
> > > kernel/sched/fair.c: In function ‘switched_from_fair’:
> > > kernel/sched/sched.h:2154:58: error: ‘__SCHED_FEAT_DELAY_ZERO’ undeclared (first use in this function); did you mean ‘__SCHED_FEAT_LATENCY_WARN’?
> > > 2154 | #define sched_feat(x) !!(sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
> > > | ^~~~~~~~~~~~~
> > > kernel/sched/fair.c:12878:21: note: in expansion of macro ‘sched_feat’
> > > 12878 | if (sched_feat(DELAY_ZERO) && p->se.vlag > 0)
> > > | ^~~~~~~~~~
> > > kernel/sched/sched.h:2154:58: note: each undeclared identifier is reported only once for each function it appears in
> > > 2154 | #define sched_feat(x) !!(sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
> > > | ^~~~~~~~~~~~~
> > > kernel/sched/fair.c:12878:21: note: in expansion of macro ‘sched_feat’
> > > 12878 | if (sched_feat(DELAY_ZERO) && p->se.vlag > 0)
> > > | ^~~~~~~~~~
> > >
> >
> > Oh gawd, last minute back-merges :/
>
> I know that feeling! ;-)
>
> > Does the below help any? That's more or less what it was before Valentin
> > asked me why it was weird like that :-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 6be618110885..5757dd50b02f 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -13107,7 +13107,6 @@ static void switched_from_fair(struct rq *rq, struct task_struct *p)
> > * and we cannot use DEQUEUE_DELAYED.
> > */
> > if (p->se.sched_delayed) {
> > - dequeue_task(rq, p, DEQUEUE_NOCLOCK | DEQUEUE_SLEEP);
> > p->se.sched_delayed = 0;
> > p->se.rel_deadline = 0;
> > if (sched_feat(DELAY_ZERO) && p->se.vlag > 0)
>
> Removing that line from 2e0199df252a still gets me the complaint about
> __SCHED_FEAT_DELAY_ZERO being undefined. To my naive eyes, it appears
> that this commit:
>
> 54a58a787791 ("sched/fair: Implement DELAY_ZERO")
>
> > Needs to be placed before 2e0199df252a. Of course, when I try it, I
> get conflicts. So I took just this hunk:
>
> ------------------------------------------------------------------------
>
> diff --git a/kernel/sched/features.h b/kernel/sched/features.h
> index 97fb2d4920898..6c5f5424614d4 100644
> --- a/kernel/sched/features.h
> +++ b/kernel/sched/features.h
> @@ -28,6 +28,11 @@ SCHED_FEAT(NEXT_BUDDY, false)
> */
> SCHED_FEAT(CACHE_HOT_BUDDY, true)
>
> +/*
> + * DELAY_ZERO clips the lag on dequeue (or wakeup) to 0.
> + */
> +SCHED_FEAT(DELAY_ZERO, true)
> +
> /*
> * Allow wakeup-time preemption of the current task:
> */
>
> ------------------------------------------------------------------------
>
> That makes the build error go away. Maybe even legitimately?
>
> Just to pick on the easy one, I took a look at the complaint about
> cfs_rq being unused and the complaint about __SCHED_FEAT_DELAY_ZERO
> being undefined. This variable was added here:
>
> 781773e3b680 ("sched/fair: Implement ENQUEUE_DELAYED")
>
> And its first use was added here:
>
> 54a58a787791 ("sched/fair: Implement DELAY_ZERO")
>
> Which matches my experience.
>
> So left to myself, I would run on these commits with the above hunk:
>
> 54a58a7877916 sched/fair: Implement DELAY_ZERO
> 152e11f6df293 sched/fair: Implement delayed dequeue
> e1459a50ba318 sched: Teach dequeue_task() about special task states
> a1c446611e31c sched,freezer: Mark TASK_FROZEN special
> 781773e3b6803 sched/fair: Implement ENQUEUE_DELAYED
> f12e148892ede sched/fair: Prepare pick_next_task() for delayed dequeue
> 2e0199df252a5 sched/fair: Prepare exit/cleanup paths for delayed_dequeue
> e28b5f8bda017 sched/fair: Assert {set_next,put_prev}_entity() are properly balanced
>
> And where needed, remove the unused cfs_rq local variable.
>
> Would that likely work?
>
> In the meantime, SIGFOOD!
Hearing no objections...
Given two patches each of which might or might not need to be applied to a
given commit, I chose to rebase as follows:
e28b5f8bda017 sched/fair: Assert {set_next,put_prev}_entity() are properly balanced
8aed87410a695 EXP sched/fair: Provide DELAY_ZERO definition
    I took this from 54a58a7877916 sched/fair: Implement DELAY_ZERO.
49575c0087bc0 sched/fair: Prepare exit/cleanup paths for delayed_dequeue
14c3207fd2456 sched/fair: Prepare pick_next_task() for delayed dequeue
be567af45dd04 sched/fair: Implement ENQUEUE_DELAYED
    I dropped the unused cfs_rq local variable from requeue_delayed_entity()
ed28f7b3ac3f4 sched,freezer: Mark TASK_FROZEN special
48d541847b4a6 sched: Teach dequeue_task() about special task states
ef3b9c5d038dc sched/fair: Implement delayed dequeue
    --- First bad commit with dequeue_rt_stack() failures.
876c99c058219 sched/fair: Implement DELAY_ZERO
    I added the cfs_rq local variable to requeue_delayed_entity()
This is on -rcu branch peterz.2024.08.23b.
I ran 50*TREE05 in a bisection, which converged on be567af45dd04, but only
one run of the 50 had a complaint, and that was in enqueue_dl_entry(),
not the dequeue_rt_stack() that I have been chasing. I ran three
additional 50*TREE05 runs on its predecessor (14c3207fd2456) with no
failures. I then ran 50*TREE03 on each of ed28f7b3ac3f4, 48d541847b4a6,
and ef3b9c5d038dc. Only this last ("ef3b9c5d038dc sched/fair: Implement
delayed dequeue") had failures, and they were all the dequeue_rt_stack()
failures I am chasing. One of the runs also hung.
I am currently running 1000*TREE03 on 48d541847b4a6 to see if I can
reproduce the enqueue_dl_entry() issue.
Thoughts?
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-23 21:51 ` Paul E. McKenney
@ 2024-08-24 6:54 ` Peter Zijlstra
2024-08-24 15:26 ` Paul E. McKenney
0 siblings, 1 reply; 67+ messages in thread
From: Peter Zijlstra @ 2024-08-24 6:54 UTC (permalink / raw)
To: Paul E. McKenney; +Cc: vschneid, linux-kernel, sfr, linux-next, kernel-team
On Fri, Aug 23, 2024 at 02:51:03PM -0700, Paul E. McKenney wrote:
> > > Does the below help any? That's more or less what it was before Valentin
> > > asked me why it was weird like that :-)
> > >
> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > index 6be618110885..5757dd50b02f 100644
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -13107,7 +13107,6 @@ static void switched_from_fair(struct rq *rq, struct task_struct *p)
> > > * and we cannot use DEQUEUE_DELAYED.
> > > */
> > > if (p->se.sched_delayed) {
> > > - dequeue_task(rq, p, DEQUEUE_NOCLOCK | DEQUEUE_SLEEP);
> > > p->se.sched_delayed = 0;
> > > p->se.rel_deadline = 0;
> > > if (sched_feat(DELAY_ZERO) && p->se.vlag > 0)
> >
> > Removing that line from 2e0199df252a still gets me the complaint about
> > __SCHED_FEAT_DELAY_ZERO being undefined. To my naive eyes, it appears
> > that this commit:
> >
> > 54a58a787791 ("sched/fair: Implement DELAY_ZERO")
> >
> > Needs to be placed before 2e0199df252a. Of course, when I try it, I
> > get conflicts. So I took just this hunk:
> >
> > ------------------------------------------------------------------------
> >
> > diff --git a/kernel/sched/features.h b/kernel/sched/features.h
> > index 97fb2d4920898..6c5f5424614d4 100644
> > --- a/kernel/sched/features.h
> > +++ b/kernel/sched/features.h
> > @@ -28,6 +28,11 @@ SCHED_FEAT(NEXT_BUDDY, false)
> > */
> > SCHED_FEAT(CACHE_HOT_BUDDY, true)
> >
> > +/*
> > + * DELAY_ZERO clips the lag on dequeue (or wakeup) to 0.
> > + */
> > +SCHED_FEAT(DELAY_ZERO, true)
> > +
> > /*
> > * Allow wakeup-time preemption of the current task:
> > */
> >
> > ------------------------------------------------------------------------
> >
> > That makes the build error go away. Maybe even legitimately?
Yep.
> > Just to pick on the easy one, I took a look at the complaint about
> > cfs_rq being unused and the complaint about __SCHED_FEAT_DELAY_ZERO
> > being undefined. This variable was added here:
> >
> > 781773e3b680 ("sched/fair: Implement ENQUEUE_DELAYED")
> >
> > And its first use was added here:
> >
> > 54a58a787791 ("sched/fair: Implement DELAY_ZERO")
> >
> > Which matches my experience.
> >
> > So left to myself, I would run on these commits with the above hunk:
> >
> > 54a58a7877916 sched/fair: Implement DELAY_ZERO
> > 152e11f6df293 sched/fair: Implement delayed dequeue
> > e1459a50ba318 sched: Teach dequeue_task() about special task states
> > a1c446611e31c sched,freezer: Mark TASK_FROZEN special
> > 781773e3b6803 sched/fair: Implement ENQUEUE_DELAYED
> > f12e148892ede sched/fair: Prepare pick_next_task() for delayed dequeue
> > 2e0199df252a5 sched/fair: Prepare exit/cleanup paths for delayed_dequeue
> > e28b5f8bda017 sched/fair: Assert {set_next,put_prev}_entity() are properly balanced
> >
> > And where needed, remove the unused cfs_rq local variable.
> >
> > Would that likely work?
Sounds about right.
> >
> > In the meantime, SIGFOOD!
>
> Hearing no objections...
Yeah, sorry, I'm on holidays with the kids and not glued to the screen
as per usual :-)
> Given two patches each of which might or might not need to be applied to a
> given commit, I chose to rebase as follows:
>
> e28b5f8bda017 sched/fair: Assert {set_next,put_prev}_entity() are properly balanced
> 8aed87410a695 EXP sched/fair: Provide DELAY_ZERO definition
> I took this from 54a58a7877916 sched/fair: Implement DELAY_ZERO.
> 49575c0087bc0 sched/fair: Prepare exit/cleanup paths for delayed_dequeue
> 14c3207fd2456 sched/fair: Prepare pick_next_task() for delayed dequeue
> be567af45dd04 sched/fair: Implement ENQUEUE_DELAYED
> I dropped the unused cfs_rq local variable from requeue_delayed_entity()
> ed28f7b3ac3f4 sched,freezer: Mark TASK_FROZEN special
> 48d541847b4a6 sched: Teach dequeue_task() about special task states
> ef3b9c5d038dc sched/fair: Implement delayed dequeue
> --- First bad commit with dequeue_rt_stack() failures.
> 876c99c058219 sched/fair: Implement DELAY_ZERO
> I added the cfs_rq local variable to requeue_delayed_entity()
>
> This is on -rcu branch peterz.2024.08.23b.
>
> I ran 50*TREE05 in a bisection, which converged on be567af45dd04, but only
> one run of the 50 had a complaint, and that was in enqueue_dl_entry(),
Hmm, I have one other report about that. Hasn't made much sense yet --
then again, as per the above mentioned reason, I'm not able to put real
time in atm.
> not the dequeue_rt_stack() that I have been chasing. I ran three
> additional 50*TREE05 runs on its predecessor (14c3207fd2456) with no
> failures. I then ran 50*TREE03 on each of ed28f7b3ac3f4, 48d541847b4a6,
> and ef3b9c5d038dc. Only this last ("ef3b9c5d038dc sched/fair: Implement
> delayed dequeue") had failure, and they were all the dequeue_rt_stack()
> failures I am chasing. One of the runs also hung.
I'm a little confused now though; this is with the dequeue removed from
switched_from_fair()?
Looking at your tree, 49575c0087bc0 still has that dequeue. Does the
dequeue_rt_stack() issue go away with that line removed?
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-24 6:54 ` Peter Zijlstra
@ 2024-08-24 15:26 ` Paul E. McKenney
2024-08-25 2:10 ` Paul E. McKenney
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-08-24 15:26 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: vschneid, linux-kernel, sfr, linux-next, kernel-team
On Sat, Aug 24, 2024 at 08:54:34AM +0200, Peter Zijlstra wrote:
> On Fri, Aug 23, 2024 at 02:51:03PM -0700, Paul E. McKenney wrote:
>
> > > > Does the below help any? That's more or less what it was before Valentin
> > > > asked me why it was weird like that :-)
> > > >
> > > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > > index 6be618110885..5757dd50b02f 100644
> > > > --- a/kernel/sched/fair.c
> > > > +++ b/kernel/sched/fair.c
> > > > @@ -13107,7 +13107,6 @@ static void switched_from_fair(struct rq *rq, struct task_struct *p)
> > > > * and we cannot use DEQUEUE_DELAYED.
> > > > */
> > > > if (p->se.sched_delayed) {
> > > > - dequeue_task(rq, p, DEQUEUE_NOCLOCK | DEQUEUE_SLEEP);
> > > > p->se.sched_delayed = 0;
> > > > p->se.rel_deadline = 0;
> > > > if (sched_feat(DELAY_ZERO) && p->se.vlag > 0)
> > >
> > > Removing that line from 2e0199df252a still gets me the complaint about
> > > __SCHED_FEAT_DELAY_ZERO being undefined. To my naive eyes, it appears
> > > that this commit:
> > >
> > > 54a58a787791 ("sched/fair: Implement DELAY_ZERO")
> > >
> > > Needs to be placed before 2e0199df252a. Of course, when I try it, I
> > > get conflicts. So I took just this hunk:
> > >
> > > ------------------------------------------------------------------------
> > >
> > > diff --git a/kernel/sched/features.h b/kernel/sched/features.h
> > > index 97fb2d4920898..6c5f5424614d4 100644
> > > --- a/kernel/sched/features.h
> > > +++ b/kernel/sched/features.h
> > > @@ -28,6 +28,11 @@ SCHED_FEAT(NEXT_BUDDY, false)
> > > */
> > > SCHED_FEAT(CACHE_HOT_BUDDY, true)
> > >
> > > +/*
> > > + * DELAY_ZERO clips the lag on dequeue (or wakeup) to 0.
> > > + */
> > > +SCHED_FEAT(DELAY_ZERO, true)
> > > +
> > > /*
> > > * Allow wakeup-time preemption of the current task:
> > > */
> > >
> > > ------------------------------------------------------------------------
> > >
> > > That makes the build error go away. Maybe even legitimately?
>
> Yep.
>
> > > Just to pick on the easy one, I took a look at the complaint about
> > > cfs_rq being unused and the complaint about __SCHED_FEAT_DELAY_ZERO
> > > being undefined. This variable was added here:
> > >
> > > 781773e3b680 ("sched/fair: Implement ENQUEUE_DELAYED")
> > >
> > > And its first use was added here:
> > >
> > > 54a58a787791 ("sched/fair: Implement DELAY_ZERO")
> > >
> > > Which matches my experience.
> > >
> > > So left to myself, I would run on these commits with the above hunk:
> > >
> > > 54a58a7877916 sched/fair: Implement DELAY_ZERO
> > > 152e11f6df293 sched/fair: Implement delayed dequeue
> > > e1459a50ba318 sched: Teach dequeue_task() about special task states
> > > a1c446611e31c sched,freezer: Mark TASK_FROZEN special
> > > 781773e3b6803 sched/fair: Implement ENQUEUE_DELAYED
> > > f12e148892ede sched/fair: Prepare pick_next_task() for delayed dequeue
> > > 2e0199df252a5 sched/fair: Prepare exit/cleanup paths for delayed_dequeue
> > > e28b5f8bda017 sched/fair: Assert {set_next,put_prev}_entity() are properly balanced
> > >
> > > And where needed, remove the unused cfs_rq local variable.
> > >
> > > Would that likely work?
>
> Sounds about right.
>
> > >
> > > In the meantime, SIGFOOD!
> >
> > Hearing no objections...
>
> Yeah, sorry, I'm on holidays with the kids and not glued to the screen
> as per usual :-)
No worries, and have a great holiday!!!
> > Given two patches each of which might or might not need to be applied to a
> > given commit, I chose to rebase as follows:
> >
> > e28b5f8bda017 sched/fair: Assert {set_next,put_prev}_entity() are properly balanced
> > 8aed87410a695 EXP sched/fair: Provide DELAY_ZERO definition
> > I took this from 54a58a7877916 sched/fair: Implement DELAY_ZERO.
> > 49575c0087bc0 sched/fair: Prepare exit/cleanup paths for delayed_dequeue
> > 14c3207fd2456 sched/fair: Prepare pick_next_task() for delayed dequeue
> > be567af45dd04 sched/fair: Implement ENQUEUE_DELAYED
> > I dropped the unused cfs_rq local variable from requeue_delayed_entity()
> > ed28f7b3ac3f4 sched,freezer: Mark TASK_FROZEN special
> > 48d541847b4a6 sched: Teach dequeue_task() about special task states
> > ef3b9c5d038dc sched/fair: Implement delayed dequeue
> > --- First bad commit with dequeue_rt_stack() failures.
> > 876c99c058219 sched/fair: Implement DELAY_ZERO
> > I added the cfs_rq local variable to requeue_delayed_entity()
> >
> > This is on -rcu branch peterz.2024.08.23b.
> >
> > I ran 50*TREE05 in a bisection, which converged on be567af45dd04, but only
> > one run of the 50 had a complaint, and that was in enqueue_dl_entry(),
>
> Hmm, I have one other report about that. Hasn't made much sense yet --
> then again, as per the above mentioned reason, I'm not able to put real
> time in atm.
I ran 1000*TREE03 on that same commit, no failures. Just started
5000*TREE03, and will let you know what happens. This will likely take
the better part of a day to complete.
> > not the dequeue_rt_stack() that I have been chasing. I ran three
> > additional 50*TREE05 runs on its predecessor (14c3207fd2456) with no
> > failures. I then ran 50*TREE03 on each of ed28f7b3ac3f4, 48d541847b4a6,
> > and ef3b9c5d038dc. Only this last ("ef3b9c5d038dc sched/fair: Implement
> > delayed dequeue") had failures, and they were all the dequeue_rt_stack()
> > failures I am chasing. One of the runs also hung.
>
> I'm a little confused now though; this is with the dequeue removed from
> switched_from_fair() ?
Ah!!! I thought that change was for the build issue, which I will
admit puzzled me a bit.
> Looking at your tree, 49575c0087bc0 still has that dequeue. Does the
> dequeue_rt_stack() issue go away with that line removed?
I will try it and let you know. Thank you for reminding me!
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-24 15:26 ` Paul E. McKenney
@ 2024-08-25 2:10 ` Paul E. McKenney
2024-08-25 19:36 ` Paul E. McKenney
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-08-25 2:10 UTC (permalink / raw)
To: Peter Zijlstra, vschneid, linux-kernel, sfr, linux-next,
kernel-team
On Sat, Aug 24, 2024 at 08:26:57AM -0700, Paul E. McKenney wrote:
> On Sat, Aug 24, 2024 at 08:54:34AM +0200, Peter Zijlstra wrote:
> > On Fri, Aug 23, 2024 at 02:51:03PM -0700, Paul E. McKenney wrote:
> >
> > > > > Does the below help any? That's more or less what it was before Valentin
> > > > > asked me why it was weird like that :-)
> > > > >
> > > > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > > > index 6be618110885..5757dd50b02f 100644
> > > > > --- a/kernel/sched/fair.c
> > > > > +++ b/kernel/sched/fair.c
> > > > > @@ -13107,7 +13107,6 @@ static void switched_from_fair(struct rq *rq, struct task_struct *p)
> > > > > * and we cannot use DEQUEUE_DELAYED.
> > > > > */
> > > > > if (p->se.sched_delayed) {
> > > > > - dequeue_task(rq, p, DEQUEUE_NOCLOCK | DEQUEUE_SLEEP);
> > > > > p->se.sched_delayed = 0;
> > > > > p->se.rel_deadline = 0;
> > > > > if (sched_feat(DELAY_ZERO) && p->se.vlag > 0)
> > > >
> > > > Removing that line from 2e0199df252a still gets me the complaint about
> > > > __SCHED_FEAT_DELAY_ZERO being undefined. To my naive eyes, it appears
> > > > that this commit:
> > > >
> > > > 54a58a787791 ("sched/fair: Implement DELAY_ZERO")
> > > >
> > > > Needs to be placed before 2e0199df252a. Of course, when I try it, I
> > > > get conflicts. So I took just this hunk:
> > > >
> > > > ------------------------------------------------------------------------
> > > >
> > > > diff --git a/kernel/sched/features.h b/kernel/sched/features.h
> > > > index 97fb2d4920898..6c5f5424614d4 100644
> > > > --- a/kernel/sched/features.h
> > > > +++ b/kernel/sched/features.h
> > > > @@ -28,6 +28,11 @@ SCHED_FEAT(NEXT_BUDDY, false)
> > > > */
> > > > SCHED_FEAT(CACHE_HOT_BUDDY, true)
> > > >
> > > > +/*
> > > > + * DELAY_ZERO clips the lag on dequeue (or wakeup) to 0.
> > > > + */
> > > > +SCHED_FEAT(DELAY_ZERO, true)
> > > > +
> > > > /*
> > > > * Allow wakeup-time preemption of the current task:
> > > > */
> > > >
> > > > ------------------------------------------------------------------------
> > > >
> > > > That makes the build error go away. Maybe even legitimately?
> >
> > Yep.
> >
> > > > Just to pick on the easy one, I took a look at the complaint about
> > > > cfs_rq being unused and the complaint about __SCHED_FEAT_DELAY_ZERO
> > > > being undefined. This variable was added here:
> > > >
> > > > 781773e3b680 ("sched/fair: Implement ENQUEUE_DELAYED")
> > > >
> > > > And its first use was added here:
> > > >
> > > > 54a58a787791 ("sched/fair: Implement DELAY_ZERO")
> > > >
> > > > Which matches my experience.
> > > >
> > > > So left to myself, I would run on these commits with the above hunk:
> > > >
> > > > 54a58a7877916 sched/fair: Implement DELAY_ZERO
> > > > 152e11f6df293 sched/fair: Implement delayed dequeue
> > > > e1459a50ba318 sched: Teach dequeue_task() about special task states
> > > > a1c446611e31c sched,freezer: Mark TASK_FROZEN special
> > > > 781773e3b6803 sched/fair: Implement ENQUEUE_DELAYED
> > > > f12e148892ede sched/fair: Prepare pick_next_task() for delayed dequeue
> > > > 2e0199df252a5 sched/fair: Prepare exit/cleanup paths for delayed_dequeue
> > > > e28b5f8bda017 sched/fair: Assert {set_next,put_prev}_entity() are properly balanced
> > > >
> > > > And where needed, remove the unused cfs_rq local variable.
> > > >
> > > > Would that likely work?
> >
> > Sounds about right.
> >
> > > >
> > > > In the meantime, SIGFOOD!
> > >
> > > Hearing no objections...
> >
> > Yeah, sorry, I'm on holidays with the kids and not glued to the screen
> > as per usual :-)
>
> No worries, and have a great holiday!!!
>
> > > Given two patches each of which might or might not need to be applied to a
> > > given commit, I chose to rebase as follows:
> > >
> > > e28b5f8bda017 sched/fair: Assert {set_next,put_prev}_entity() are properly balanced
> > > 8aed87410a695 EXP sched/fair: Provide DELAY_ZERO definition
> > > I took this from 54a58a7877916 sched/fair: Implement DELAY_ZERO.
> > > 49575c0087bc0 sched/fair: Prepare exit/cleanup paths for delayed_dequeue
> > > 14c3207fd2456 sched/fair: Prepare pick_next_task() for delayed dequeue
> > > be567af45dd04 sched/fair: Implement ENQUEUE_DELAYED
> > > I dropped the unused cfs_rq local variable from requeue_delayed_entity()
> > > ed28f7b3ac3f4 sched,freezer: Mark TASK_FROZEN special
> > > 48d541847b4a6 sched: Teach dequeue_task() about special task states
> > > ef3b9c5d038dc sched/fair: Implement delayed dequeue
> > > --- First bad commit with dequeue_rt_stack() failures.
> > > 876c99c058219 sched/fair: Implement DELAY_ZERO
> > > I added the cfs_rq local variable to requeue_delayed_entity()
> > >
> > > This is on -rcu branch peterz.2024.08.23b.
> > >
> > > I ran 50*TREE05 in a bisection, which converged on be567af45dd04, but only
> > > one run of the 50 had a complaint, and that was in enqueue_dl_entry(),
> >
> > Hmm, I have one other report about that. Hasn't made much sense yet --
> > then again, as per the above mentioned reason, I'm not able to put real
> > time in atm.
>
> I ran 1000*TREE03 on that same commit, no failures. Just started
> 5000*TREE03, and will let you know what happens. This will likely take
> the better part of a day to complete.
>
> > > not the dequeue_rt_stack() that I have been chasing. I ran three
> > > additional 50*TREE05 runs on its predecessor (14c3207fd2456) with no
> > > failures. I then ran 50*TREE03 on each of ed28f7b3ac3f4, 48d541847b4a6,
> > > and ef3b9c5d038dc. Only this last ("ef3b9c5d038dc sched/fair: Implement
> > > delayed dequeue") had failures, and they were all the dequeue_rt_stack()
> > > failures I am chasing. One of the runs also hung.
> >
> > I'm a little confused now though; this is with the dequeue removed from
> > switched_from_fair() ?
>
> Ah!!! I thought that change was for the build issue, which I will
> admit puzzled me a bit.
>
> > Looking at your tree, 49575c0087bc0 still has that dequeue. Does the
> > dequeue_rt_stack() issue go away with that line removed?
>
> I will try it and let you know. Thank you for reminding me!
Preliminary results show that removing the dequeue from that commit or
just from next-20240823 at the very least greatly reduces the probability
of the problem occurring. I am doing an overnight run with that dequeue
removed from next-20240823 and will let you know how it goes.
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-25 2:10 ` Paul E. McKenney
@ 2024-08-25 19:36 ` Paul E. McKenney
0 siblings, 0 replies; 67+ messages in thread
From: Paul E. McKenney @ 2024-08-25 19:36 UTC (permalink / raw)
To: Peter Zijlstra, vschneid, linux-kernel, sfr, linux-next,
kernel-team
On Sat, Aug 24, 2024 at 07:10:21PM -0700, Paul E. McKenney wrote:
> On Sat, Aug 24, 2024 at 08:26:57AM -0700, Paul E. McKenney wrote:
> > On Sat, Aug 24, 2024 at 08:54:34AM +0200, Peter Zijlstra wrote:
> > > On Fri, Aug 23, 2024 at 02:51:03PM -0700, Paul E. McKenney wrote:
> > >
> > > > > > Does the below help any? That's more or less what it was before Valentin
> > > > > > asked me why it was weird like that :-)
> > > > > >
> > > > > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > > > > index 6be618110885..5757dd50b02f 100644
> > > > > > --- a/kernel/sched/fair.c
> > > > > > +++ b/kernel/sched/fair.c
> > > > > > @@ -13107,7 +13107,6 @@ static void switched_from_fair(struct rq *rq, struct task_struct *p)
> > > > > > * and we cannot use DEQUEUE_DELAYED.
> > > > > > */
> > > > > > if (p->se.sched_delayed) {
> > > > > > - dequeue_task(rq, p, DEQUEUE_NOCLOCK | DEQUEUE_SLEEP);
> > > > > > p->se.sched_delayed = 0;
> > > > > > p->se.rel_deadline = 0;
> > > > > > if (sched_feat(DELAY_ZERO) && p->se.vlag > 0)
> > > > >
> > > > > Removing that line from 2e0199df252a still gets me the complaint about
> > > > > __SCHED_FEAT_DELAY_ZERO being undefined. To my naive eyes, it appears
> > > > > that this commit:
> > > > >
> > > > > 54a58a787791 ("sched/fair: Implement DELAY_ZERO")
> > > > >
> > > > > Needs to be placed before 2e0199df252a. Of course, when I try it, I
> > > > > get conflicts. So I took just this hunk:
> > > > >
> > > > > ------------------------------------------------------------------------
> > > > >
> > > > > diff --git a/kernel/sched/features.h b/kernel/sched/features.h
> > > > > index 97fb2d4920898..6c5f5424614d4 100644
> > > > > --- a/kernel/sched/features.h
> > > > > +++ b/kernel/sched/features.h
> > > > > @@ -28,6 +28,11 @@ SCHED_FEAT(NEXT_BUDDY, false)
> > > > > */
> > > > > SCHED_FEAT(CACHE_HOT_BUDDY, true)
> > > > >
> > > > > +/*
> > > > > + * DELAY_ZERO clips the lag on dequeue (or wakeup) to 0.
> > > > > + */
> > > > > +SCHED_FEAT(DELAY_ZERO, true)
> > > > > +
> > > > > /*
> > > > > * Allow wakeup-time preemption of the current task:
> > > > > */
> > > > >
> > > > > ------------------------------------------------------------------------
> > > > >
> > > > > That makes the build error go away. Maybe even legitimately?
> > >
> > > Yep.
> > >
> > > > > Just to pick on the easy one, I took a look at the complaint about
> > > > > cfs_rq being unused and the complaint about __SCHED_FEAT_DELAY_ZERO
> > > > > being undefined. This variable was added here:
> > > > >
> > > > > 781773e3b680 ("sched/fair: Implement ENQUEUE_DELAYED")
> > > > >
> > > > > And its first use was added here:
> > > > >
> > > > > 54a58a787791 ("sched/fair: Implement DELAY_ZERO")
> > > > >
> > > > > Which matches my experience.
> > > > >
> > > > > So left to myself, I would run on these commits with the above hunk:
> > > > >
> > > > > 54a58a7877916 sched/fair: Implement DELAY_ZERO
> > > > > 152e11f6df293 sched/fair: Implement delayed dequeue
> > > > > e1459a50ba318 sched: Teach dequeue_task() about special task states
> > > > > a1c446611e31c sched,freezer: Mark TASK_FROZEN special
> > > > > 781773e3b6803 sched/fair: Implement ENQUEUE_DELAYED
> > > > > f12e148892ede sched/fair: Prepare pick_next_task() for delayed dequeue
> > > > > 2e0199df252a5 sched/fair: Prepare exit/cleanup paths for delayed_dequeue
> > > > > e28b5f8bda017 sched/fair: Assert {set_next,put_prev}_entity() are properly balanced
> > > > >
> > > > > And where needed, remove the unused cfs_rq local variable.
> > > > >
> > > > > Would that likely work?
> > >
> > > Sounds about right.
> > >
> > > > >
> > > > > In the meantime, SIGFOOD!
> > > >
> > > > Hearing no objections...
> > >
> > > Yeah, sorry, I'm on holidays with the kids and not glued to the screen
> > > as per usual :-)
> >
> > No worries, and have a great holiday!!!
> >
> > > > Given two patches each of which might or might not need to be applied to a
> > > > given commit, I chose to rebase as follows:
> > > >
> > > > e28b5f8bda017 sched/fair: Assert {set_next,put_prev}_entity() are properly balanced
> > > > 8aed87410a695 EXP sched/fair: Provide DELAY_ZERO definition
> > > > I took this from 54a58a7877916 sched/fair: Implement DELAY_ZERO.
> > > > 49575c0087bc0 sched/fair: Prepare exit/cleanup paths for delayed_dequeue
> > > > 14c3207fd2456 sched/fair: Prepare pick_next_task() for delayed dequeue
> > > > be567af45dd04 sched/fair: Implement ENQUEUE_DELAYED
> > > > I dropped the unused cfs_rq local variable from requeue_delayed_entity()
> > > > ed28f7b3ac3f4 sched,freezer: Mark TASK_FROZEN special
> > > > 48d541847b4a6 sched: Teach dequeue_task() about special task states
> > > > ef3b9c5d038dc sched/fair: Implement delayed dequeue
> > > > --- First bad commit with dequeue_rt_stack() failures.
> > > > 876c99c058219 sched/fair: Implement DELAY_ZERO
> > > > I added the cfs_rq local variable to requeue_delayed_entity()
> > > >
> > > > This is on -rcu branch peterz.2024.08.23b.
> > > >
> > > > I ran 50*TREE05 in a bisection, which converged on be567af45dd04, but only
> > > > one run of the 50 had a complaint, and that was in enqueue_dl_entry(),
> > >
> > > Hmm, I have one other report about that. Hasn't made much sense yet --
> > > then again, as per the above mentioned reason, I'm not able to put real
> > > time in atm.
> >
> > I ran 1000*TREE03 on that same commit, no failures. Just started
> > 5000*TREE03, and will let you know what happens. This will likely take
> > the better part of a day to complete.
> >
> > > > not the dequeue_rt_stack() that I have been chasing. I ran three
> > > > additional 50*TREE05 runs on its predecessor (14c3207fd2456) with no
> > > > failures. I then ran 50*TREE03 on each of ed28f7b3ac3f4, 48d541847b4a6,
> > > > and ef3b9c5d038dc. Only this last ("ef3b9c5d038dc sched/fair: Implement
> > > > delayed dequeue") had failures, and they were all the dequeue_rt_stack()
> > > > failures I am chasing. One of the runs also hung.
> > >
> > > I'm a little confused now though; this is with the dequeue removed from
> > > switched_from_fair() ?
> >
> > Ah!!! I thought that change was for the build issue, which I will
> > admit puzzled me a bit.
> >
> > > Looking at your tree, 49575c0087bc0 still has that dequeue. Does the
> > > dequeue_rt_stack() issue go away with that line removed?
> >
> > I will try it and let you know. Thank you for reminding me!
>
> Preliminary results show that removing the dequeue from that commit or
> just from next-20240823 at the very least greatly reduces the probability
> of the problem occurring. I am doing an overnight run with that dequeue
> removed from next-20240823 and will let you know how it goes.
No dequeue_rt_stack() or enqueue_dl_entry() issues in 5000*TREE03 runs, so
I think we can declare the first to be fixed and the second to be rather
low probability. I also searched for "enqueue_dl_entry" in my employer's
full fleet's worth of console output from the past week, and saw no hits.
(Not too surprising, given that we don't do much RT here, but still...)
I did get what appears to me to be an unrelated one-off shown below. I am
including this not as a bug report, but just for completeness. I didn't
find anything like this from the fleet over the past week, either.
Unicorns!!! ;-)
Thanx, Paul
------------------------------------------------------------------------
[ 66.315476] smpboot: CPU 2 is now offline
[ 67.245115] rcu-torture: rcu_torture_read_exit: Start of episode
[ 69.232773] rcu-torture: Stopping rcu_torture_boost task
[ 70.290610] rcu-torture: rcu_torture_boost is stopping
[ 70.295436] BUG: kernel NULL pointer dereference, address: 0000000000000000
[ 70.295444] smpboot: CPU 3 is now offline
[ 70.296343] #PF: supervisor write access in kernel mode
[ 70.296343] #PF: error_code(0x0002) - not-present page
[ 70.296343] PGD 0 P4D 0
[ 70.296343] Oops: Oops: 0002 [#1] PREEMPT SMP NOPTI
[ 70.296343] CPU: 14 UID: 0 PID: 414 Comm: kworker/u67:1 Not tainted 6.11.0-rc4-next-20240823-dirty #53827
[ 70.296343] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
[ 70.296343] RIP: 0010:_raw_spin_lock_irq+0x13/0x30
[ 70.303668] Code: 1f 84 00 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa fa 65 ff 05 7c c9 0e 6e 31 c0 ba 01 00 00 00 <f0> 0f b1 17 75 05 c3 cc cc cc cc 89 c6 e9 1b 00 00 00 66 2e 0f 1f
[ 70.303668] RSP: 0018:ffffa13840cafec0 EFLAGS: 00010046
[ 70.322799] rcu-torture: rcu_torture_read_exit: End of episode
[ 70.323615] RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff9fb682c0ac40
[ 70.323615] RDX: 0000000000000001 RSI: ffffa13840cafe60 RDI: 0000000000000000
[ 70.323615] RBP: ffff9fb68294c300 R08: 000000000000041e R09: 0000000000000001
[ 70.323615] R10: 0000000000000003 R11: 00000000002dc6c0 R12: ffff9fb682b2ba80
[ 70.323615] R13: ffff9fb682c120c0 R14: ffff9fb682c120c0 R15: ffff9fb682c0ac40
[ 70.323615] FS: 0000000000000000(0000) GS:ffff9fb69f580000(0000) knlGS:0000000000000000
[ 70.323615] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 70.323615] CR2: 0000000000000000 CR3: 000000001ac2e000 CR4: 00000000000006f0
[ 70.323615] Call Trace:
[ 70.323615] <TASK>
[ 70.323615] ? __die+0x1f/0x70
[ 70.323615] ? page_fault_oops+0x155/0x440
[ 70.323615] ? _raw_spin_lock_irq+0x15/0x30
[ 70.323615] ? is_prefetch.constprop.0+0xed/0x1b0
[ 70.323615] ? exc_page_fault+0x69/0x150
[ 70.323615] ? asm_exc_page_fault+0x26/0x30
[ 70.323615] ? _raw_spin_lock_irq+0x13/0x30
[ 70.323615] worker_thread+0x41/0x3a0
[ 70.323615] ? __pfx_worker_thread+0x10/0x10
[ 70.323615] kthread+0xd1/0x100
[ 70.323615] ? __pfx_kthread+0x10/0x10
[ 70.323615] ret_from_fork+0x2f/0x50
[ 70.323615] ? __pfx_kthread+0x10/0x10
[ 70.323615] ret_from_fork_asm+0x1a/0x30
[ 70.323615] </TASK>
[ 70.323615] Modules linked in:
[ 70.323615] CR2: 0000000000000000
[ 70.323615] ---[ end trace 0000000000000000 ]---
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-23 7:47 ` Peter Zijlstra
2024-08-23 12:46 ` Paul E. McKenney
@ 2024-08-26 11:44 ` Valentin Schneider
2024-08-26 16:31 ` Paul E. McKenney
1 sibling, 1 reply; 67+ messages in thread
From: Valentin Schneider @ 2024-08-26 11:44 UTC (permalink / raw)
To: Peter Zijlstra, Paul E. McKenney
Cc: linux-kernel, sfr, linux-next, kernel-team
On 23/08/24 09:47, Peter Zijlstra wrote:
> On Wed, Aug 21, 2024 at 02:57:16PM -0700, Paul E. McKenney wrote:
>
>> 2e0199df252a ("sched/fair: Prepare exit/cleanup paths for delayed_dequeue")
>>
>> The preceding commit is very reliable.
>>
>> Only instead of (or maybe as well as?) introducing the dequeue_rt_stack()
>> bug, the 2e0199df252a commit introduced a build bug:
>>
>> ------------------------------------------------------------------------
>>
>> In file included from kernel/sched/fair.c:54:
>> kernel/sched/fair.c: In function ‘switched_from_fair’:
>> kernel/sched/sched.h:2154:58: error: ‘__SCHED_FEAT_DELAY_ZERO’ undeclared (first use in this function); did you mean ‘__SCHED_FEAT_LATENCY_WARN’?
>> 2154 | #define sched_feat(x) !!(sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
>> | ^~~~~~~~~~~~~
>> kernel/sched/fair.c:12878:21: note: in expansion of macro ‘sched_feat’
>> 12878 | if (sched_feat(DELAY_ZERO) && p->se.vlag > 0)
>> | ^~~~~~~~~~
>> kernel/sched/sched.h:2154:58: note: each undeclared identifier is reported only once for each function it appears in
>> 2154 | #define sched_feat(x) !!(sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
>> | ^~~~~~~~~~~~~
>> kernel/sched/fair.c:12878:21: note: in expansion of macro ‘sched_feat’
>> 12878 | if (sched_feat(DELAY_ZERO) && p->se.vlag > 0)
>> | ^~~~~~~~~~
>>
>
> Oh gawd, last minute back-merges :/
>
> Does the below help any? That's more or less what it was before Valentin
> asked me why it was weird like that :-)
>
Woops...
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-26 11:44 ` Valentin Schneider
@ 2024-08-26 16:31 ` Paul E. McKenney
2024-08-27 10:03 ` Valentin Schneider
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-08-26 16:31 UTC (permalink / raw)
To: Valentin Schneider
Cc: Peter Zijlstra, linux-kernel, sfr, linux-next, kernel-team
On Mon, Aug 26, 2024 at 01:44:35PM +0200, Valentin Schneider wrote:
> On 23/08/24 09:47, Peter Zijlstra wrote:
> > On Wed, Aug 21, 2024 at 02:57:16PM -0700, Paul E. McKenney wrote:
> >
> >> 2e0199df252a ("sched/fair: Prepare exit/cleanup paths for delayed_dequeue")
> >>
> >> The preceding commit is very reliable.
> >>
> >> Only instead of (or maybe as well as?) introducing the dequeue_rt_stack()
> >> bug, the 2e0199df252a commit introduced a build bug:
> >>
> >> ------------------------------------------------------------------------
> >>
> >> In file included from kernel/sched/fair.c:54:
> >> kernel/sched/fair.c: In function ‘switched_from_fair’:
> >> kernel/sched/sched.h:2154:58: error: ‘__SCHED_FEAT_DELAY_ZERO’ undeclared (first use in this function); did you mean ‘__SCHED_FEAT_LATENCY_WARN’?
> >> 2154 | #define sched_feat(x) !!(sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
> >> | ^~~~~~~~~~~~~
> >> kernel/sched/fair.c:12878:21: note: in expansion of macro ‘sched_feat’
> >> 12878 | if (sched_feat(DELAY_ZERO) && p->se.vlag > 0)
> >> | ^~~~~~~~~~
> >> kernel/sched/sched.h:2154:58: note: each undeclared identifier is reported only once for each function it appears in
> >> 2154 | #define sched_feat(x) !!(sysctl_sched_features & (1UL << __SCHED_FEAT_##x))
> >> | ^~~~~~~~~~~~~
> >> kernel/sched/fair.c:12878:21: note: in expansion of macro ‘sched_feat’
> >> 12878 | if (sched_feat(DELAY_ZERO) && p->se.vlag > 0)
> >> | ^~~~~~~~~~
> >>
> >
> > Oh gawd, last minute back-merges :/
> >
> > Does the below help any? That's more or less what it was before Valentin
> > asked me why it was weird like that :-)
>
> Woops...
On the other hand, removing that dequeue_task() makes next-20240823
pass light testing.
I have to ask...
Does it make sense for Valentin to rearrange those commits to fix
the two build bugs and remove that dequeue_task(), all in the name of
bisectability? Or is there something subtle here so that only Peter
can do this work, shoulder and all?
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-26 16:31 ` Paul E. McKenney
@ 2024-08-27 10:03 ` Valentin Schneider
2024-08-27 15:41 ` Valentin Schneider
0 siblings, 1 reply; 67+ messages in thread
From: Valentin Schneider @ 2024-08-27 10:03 UTC (permalink / raw)
To: paulmck; +Cc: Peter Zijlstra, linux-kernel, sfr, linux-next, kernel-team
On 26/08/24 09:31, Paul E. McKenney wrote:
> On Mon, Aug 26, 2024 at 01:44:35PM +0200, Valentin Schneider wrote:
>>
>> Woops...
>
> On the other hand, removing that dequeue_task() makes next-20240823
> pass light testing.
>
> I have to ask...
>
> Does it make sense for Valentin to rearrange those commits to fix
> the two build bugs and remove that dequeue_task(), all in the name of
> bisectability? Or is there something subtle here so that only Peter
> can do this work, shoulder and all?
>
I suppose at the very least another pair of eyes on this can't hurt, let me
get untangled from some other things first and I'll take a jab at it.
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-27 10:03 ` Valentin Schneider
@ 2024-08-27 15:41 ` Valentin Schneider
2024-08-27 17:33 ` Paul E. McKenney
0 siblings, 1 reply; 67+ messages in thread
From: Valentin Schneider @ 2024-08-27 15:41 UTC (permalink / raw)
To: paulmck; +Cc: Peter Zijlstra, linux-kernel, sfr, linux-next, kernel-team
On 27/08/24 12:03, Valentin Schneider wrote:
> On 26/08/24 09:31, Paul E. McKenney wrote:
>> On Mon, Aug 26, 2024 at 01:44:35PM +0200, Valentin Schneider wrote:
>>>
>>> Woops...
>>
>> On the other hand, removing that dequeue_task() makes next-20240823
>> pass light testing.
>>
>> I have to ask...
>>
>> Does it make sense for Valentin to rearrange those commits to fix
>> the two build bugs and remove that dequeue_task(), all in the name of
>> bisectability? Or is there something subtle here so that only Peter
>> can do this work, shoulder and all?
>>
>
> I suppose at the very least another pair of eyes on this can't hurt, let me
> get untangled from some other things first and I'll take a jab at it.
I've taken tip/sched/core and shuffled hunks around; I didn't re-order any
commit. I've also taken out the dequeue from switched_from_fair() and put
it at the very top of the branch which should hopefully help bisection.
The final delta between that branch and tip/sched/core is empty, so it
really is just shuffling in between commits.
Please find the branch at:
https://gitlab.com/vschneid/linux.git -b mainline/sched/eevdf-complete-builderr
I'll go stare at the BUG itself now.
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-27 15:41 ` Valentin Schneider
@ 2024-08-27 17:33 ` Paul E. McKenney
2024-08-27 18:35 ` Paul E. McKenney
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-08-27 17:33 UTC (permalink / raw)
To: Valentin Schneider
Cc: Peter Zijlstra, linux-kernel, sfr, linux-next, kernel-team
On Tue, Aug 27, 2024 at 05:41:52PM +0200, Valentin Schneider wrote:
> On 27/08/24 12:03, Valentin Schneider wrote:
> > On 26/08/24 09:31, Paul E. McKenney wrote:
> >> On Mon, Aug 26, 2024 at 01:44:35PM +0200, Valentin Schneider wrote:
> >>>
> >>> Woops...
> >>
> >> On the other hand, removing that dequeue_task() makes next-20240823
> >> pass light testing.
> >>
> >> I have to ask...
> >>
> >> Does it make sense for Valentin to rearrange those commits to fix
> >> the two build bugs and remove that dequeue_task(), all in the name of
> >> bisectability? Or is there something subtle here so that only Peter
> >> can do this work, shoulder and all?
> >>
> >
> > I suppose at the very least another pair of eyes on this can't hurt, let me
> > get untangled from some other things first and I'll take a jab at it.
>
> I've taken tip/sched/core and shuffled hunks around; I didn't re-order any
> commit. I've also taken out the dequeue from switched_from_fair() and put
> it at the very top of the branch which should hopefully help bisection.
>
> The final delta between that branch and tip/sched/core is empty, so it
> really is just shuffling in between commits.
>
> Please find the branch at:
>
> https://gitlab.com/vschneid/linux.git -b mainline/sched/eevdf-complete-builderr
>
> I'll go stare at the BUG itself now.
Thank you!
I have fired up tests on the "BROKEN?" commit. If that fails, I will
try its predecessor, and if that fails, I will bisect from e28b5f8bda01
("sched/fair: Assert {set_next,put_prev}_entity() are properly balanced"),
which has stood up to heavy hammering in earlier testing.
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-27 17:33 ` Paul E. McKenney
@ 2024-08-27 18:35 ` Paul E. McKenney
2024-08-27 20:30 ` Valentin Schneider
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-08-27 18:35 UTC (permalink / raw)
To: Valentin Schneider
Cc: Peter Zijlstra, linux-kernel, sfr, linux-next, kernel-team
On Tue, Aug 27, 2024 at 10:33:13AM -0700, Paul E. McKenney wrote:
> On Tue, Aug 27, 2024 at 05:41:52PM +0200, Valentin Schneider wrote:
> > On 27/08/24 12:03, Valentin Schneider wrote:
> > > On 26/08/24 09:31, Paul E. McKenney wrote:
> > >> On Mon, Aug 26, 2024 at 01:44:35PM +0200, Valentin Schneider wrote:
> > >>>
> > >>> Woops...
> > >>
> > >> On the other hand, removing that dequeue_task() makes next-20240823
> > >> pass light testing.
> > >>
> > >> I have to ask...
> > >>
> > >> Does it make sense for Valentin to rearrange those commits to fix
> > >> the two build bugs and remove that dequeue_task(), all in the name of
> > >> bisectability? Or is there something subtle here so that only Peter
> > >> can do this work, shoulder and all?
> > >>
> > >
> > > I suppose at the very least another pair of eyes on this can't hurt, let me
> > > get untangled from some other things first and I'll take a jab at it.
> >
> > I've taken tip/sched/core and shuffled hunks around; I didn't re-order any
> > commit. I've also taken out the dequeue from switched_from_fair() and put
> > it at the very top of the branch which should hopefully help bisection.
> >
> > The final delta between that branch and tip/sched/core is empty, so it
> > really is just shuffling in between commits.
> >
> > Please find the branch at:
> >
> > https://gitlab.com/vschneid/linux.git -b mainline/sched/eevdf-complete-builderr
> >
> > I'll go stare at the BUG itself now.
>
> Thank you!
>
> I have fired up tests on the "BROKEN?" commit. If that fails, I will
> try its predecessor, and if that fails, I will bisect from e28b5f8bda01
> ("sched/fair: Assert {set_next,put_prev}_entity() are properly balanced"),
> which has stood up to heavy hammering in earlier testing.
And 50 runs of TREE03 on the "BROKEN?" commit resulted in 32 failures.
Of these, 29 were the dequeue_rt_stack() failure. Two more were RCU
CPU stall warnings, and the last one was an oddball "kernel BUG at
kernel/sched/rt.c:1714" followed by an equally oddball "Oops: invalid
opcode: 0000 [#1] PREEMPT SMP PTI".
Just to be specific, this is commit:
df8fe34bfa36 ("BROKEN? sched/fair: Dequeue sched_delayed tasks when switching from fair")
This commit's predecessor is this commit:
2f888533d073 ("sched/eevdf: Propagate min_slice up the cgroup hierarchy")
This predecessor commit passes 50 runs of TREE03 with no failures.
So the addition of that dequeue_task() call to the switched_from_fair()
function is looking quite suspicious to me. ;-)
Thanx, Paul
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-27 18:35 ` Paul E. McKenney
@ 2024-08-27 20:30 ` Valentin Schneider
2024-08-27 20:36 ` Paul E. McKenney
0 siblings, 1 reply; 67+ messages in thread
From: Valentin Schneider @ 2024-08-27 20:30 UTC (permalink / raw)
To: paulmck; +Cc: Peter Zijlstra, linux-kernel, sfr, linux-next, kernel-team
On 27/08/24 11:35, Paul E. McKenney wrote:
> On Tue, Aug 27, 2024 at 10:33:13AM -0700, Paul E. McKenney wrote:
>> On Tue, Aug 27, 2024 at 05:41:52PM +0200, Valentin Schneider wrote:
>> > I've taken tip/sched/core and shuffled hunks around; I didn't re-order any
>> > commit. I've also taken out the dequeue from switched_from_fair() and put
>> > it at the very top of the branch which should hopefully help bisection.
>> >
>> > The final delta between that branch and tip/sched/core is empty, so it
>> > really is just shuffling in between commits.
>> >
>> > Please find the branch at:
>> >
>> > https://gitlab.com/vschneid/linux.git -b mainline/sched/eevdf-complete-builderr
>> >
>> > I'll go stare at the BUG itself now.
>>
>> Thank you!
>>
>> I have fired up tests on the "BROKEN?" commit. If that fails, I will
>> try its predecessor, and if that fails, I will bisect from e28b5f8bda01
>> ("sched/fair: Assert {set_next,put_prev}_entity() are properly balanced"),
>> which has stood up to heavy hammering in earlier testing.
>
> And 50 runs of TREE03 on the "BROKEN?" commit resulted in 32 failures.
> Of these, 29 were the dequeue_rt_stack() failure. Two more were RCU
> CPU stall warnings, and the last one was an oddball "kernel BUG at
> kernel/sched/rt.c:1714" followed by an equally oddball "Oops: invalid
> opcode: 0000 [#1] PREEMPT SMP PTI".
>
> Just to be specific, this is commit:
>
> df8fe34bfa36 ("BROKEN? sched/fair: Dequeue sched_delayed tasks when switching from fair")
>
> This commit's predecessor is this commit:
>
> 2f888533d073 ("sched/eevdf: Propagate min_slice up the cgroup hierarchy")
>
> This predecessor commit passes 50 runs of TREE03 with no failures.
>
> So the addition of that dequeue_task() call to the switched_from_fair()
> function is looking quite suspicious to me. ;-)
>
> Thanx, Paul
Thanks for the testing!
The WARN_ON_ONCE(!rt_se->on_list); hit in __dequeue_rt_entity() feels like
a put_prev/set_next kind of issue...
So far I'd assumed a ->sched_delayed task can't be current during
switched_from_fair(), I got confused because it's Mond^CCC Tuesday, but I
think that still holds: we can't get a balance_dl() or balance_rt() to drop
the RQ lock because prev would be fair, and we can't get a
newidle_balance() with a ->sched_delayed task because we'd have
sched_fair_runnable() := true.
I'll pick this back up tomorrow, this is a task that requires either
caffeine or booze and it's too late for either.
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-27 20:30 ` Valentin Schneider
@ 2024-08-27 20:36 ` Paul E. McKenney
2024-08-28 12:35 ` Valentin Schneider
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-08-27 20:36 UTC (permalink / raw)
To: Valentin Schneider
Cc: Peter Zijlstra, linux-kernel, sfr, linux-next, kernel-team
On Tue, Aug 27, 2024 at 10:30:24PM +0200, Valentin Schneider wrote:
> On 27/08/24 11:35, Paul E. McKenney wrote:
> > On Tue, Aug 27, 2024 at 10:33:13AM -0700, Paul E. McKenney wrote:
> >> On Tue, Aug 27, 2024 at 05:41:52PM +0200, Valentin Schneider wrote:
> >> > I've taken tip/sched/core and shuffled hunks around; I didn't re-order any
> >> > commit. I've also taken out the dequeue from switched_from_fair() and put
> >> > it at the very top of the branch which should hopefully help bisection.
> >> >
> >> > The final delta between that branch and tip/sched/core is empty, so it
> >> > really is just shuffling in between commits.
> >> >
> >> > Please find the branch at:
> >> >
> >> > https://gitlab.com/vschneid/linux.git -b mainline/sched/eevdf-complete-builderr
> >> >
> >> > I'll go stare at the BUG itself now.
> >>
> >> Thank you!
> >>
> >> I have fired up tests on the "BROKEN?" commit. If that fails, I will
> >> try its predecessor, and if that fails, I will bisect from e28b5f8bda01
> >> ("sched/fair: Assert {set_next,put_prev}_entity() are properly balanced"),
> >> which has stood up to heavy hammering in earlier testing.
> >
> > And 50 runs of TREE03 on the "BROKEN?" commit resulted in 32 failures.
> > Of these, 29 were the dequeue_rt_stack() failure. Two more were RCU
> > CPU stall warnings, and the last one was an oddball "kernel BUG at
> > kernel/sched/rt.c:1714" followed by an equally oddball "Oops: invalid
> > opcode: 0000 [#1] PREEMPT SMP PTI".
> >
> > Just to be specific, this is commit:
> >
> > df8fe34bfa36 ("BROKEN? sched/fair: Dequeue sched_delayed tasks when switching from fair")
> >
> > This commit's predecessor is this commit:
> >
> > 2f888533d073 ("sched/eevdf: Propagate min_slice up the cgroup hierarchy")
> >
> > This predecessor commit passes 50 runs of TREE03 with no failures.
> >
> > So the addition of that dequeue_task() call to the switched_from_fair()
> > function is looking quite suspicious to me. ;-)
> >
> > Thanx, Paul
>
> Thanks for the testing!
>
> The WARN_ON_ONCE(!rt_se->on_list); hit in __dequeue_rt_entity() feels like
> a put_prev/set_next kind of issue...
>
> So far I'd assumed a ->sched_delayed task can't be current during
> switched_from_fair(), I got confused because it's Mond^CCC Tuesday, but I
> think that still holds: we can't get a balance_dl() or balance_rt() to drop
> the RQ lock because prev would be fair, and we can't get a
> newidle_balance() with a ->sched_delayed task because we'd have
> sched_fair_runnable() := true.
>
> I'll pick this back up tomorrow, this is a task that requires either
> caffeine or booze and it's too late for either.
Thank you for chasing this, and get some sleep! This one is of course
annoying, but it is not (yet) an emergency. I look forward to seeing
what you come up with.
Also, I would of course be happy to apply debug patches.
Thanx, Paul
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-27 20:36 ` Paul E. McKenney
@ 2024-08-28 12:35 ` Valentin Schneider
2024-08-28 13:03 ` Paul E. McKenney
2024-08-28 13:44 ` Chen Yu
0 siblings, 2 replies; 67+ messages in thread
From: Valentin Schneider @ 2024-08-28 12:35 UTC (permalink / raw)
To: paulmck; +Cc: Peter Zijlstra, linux-kernel, sfr, linux-next, kernel-team,
Chen Yu
On 27/08/24 13:36, Paul E. McKenney wrote:
> On Tue, Aug 27, 2024 at 10:30:24PM +0200, Valentin Schneider wrote:
>> On 27/08/24 11:35, Paul E. McKenney wrote:
>> > On Tue, Aug 27, 2024 at 10:33:13AM -0700, Paul E. McKenney wrote:
>> >> On Tue, Aug 27, 2024 at 05:41:52PM +0200, Valentin Schneider wrote:
>> >> > I've taken tip/sched/core and shuffled hunks around; I didn't re-order any
>> >> > commit. I've also taken out the dequeue from switched_from_fair() and put
>> >> > it at the very top of the branch which should hopefully help bisection.
>> >> >
>> >> > The final delta between that branch and tip/sched/core is empty, so it
>> >> > really is just shuffling in between commits.
>> >> >
>> >> > Please find the branch at:
>> >> >
>> >> > https://gitlab.com/vschneid/linux.git -b mainline/sched/eevdf-complete-builderr
>> >> >
>> >> > I'll go stare at the BUG itself now.
>> >>
>> >> Thank you!
>> >>
>> >> I have fired up tests on the "BROKEN?" commit. If that fails, I will
>> >> try its predecessor, and if that fails, I will bisect from e28b5f8bda01
>> >> ("sched/fair: Assert {set_next,put_prev}_entity() are properly balanced"),
>> >> which has stood up to heavy hammering in earlier testing.
>> >
>> > And 50 runs of TREE03 on the "BROKEN?" commit resulted in 32 failures.
>> > Of these, 29 were the dequeue_rt_stack() failure. Two more were RCU
>> > CPU stall warnings, and the last one was an oddball "kernel BUG at
>> > kernel/sched/rt.c:1714" followed by an equally oddball "Oops: invalid
>> > opcode: 0000 [#1] PREEMPT SMP PTI".
>> >
>> > Just to be specific, this is commit:
>> >
>> > df8fe34bfa36 ("BROKEN? sched/fair: Dequeue sched_delayed tasks when switching from fair")
>> >
>> > This commit's predecessor is this commit:
>> >
>> > 2f888533d073 ("sched/eevdf: Propagate min_slice up the cgroup hierarchy")
>> >
>> > This predecessor commit passes 50 runs of TREE03 with no failures.
>> >
>> > So the addition of that dequeue_task() call to the switched_from_fair()
>> > function is looking quite suspicious to me. ;-)
>> >
>> > Thanx, Paul
>>
>> Thanks for the testing!
>>
>> The WARN_ON_ONCE(!rt_se->on_list); hit in __dequeue_rt_entity() feels like
>> a put_prev/set_next kind of issue...
>>
>> So far I'd assumed a ->sched_delayed task can't be current during
>> switched_from_fair(), I got confused because it's Mond^CCC Tuesday, but I
>> think that still holds: we can't get a balance_dl() or balance_rt() to drop
>> the RQ lock because prev would be fair, and we can't get a
>> newidle_balance() with a ->sched_delayed task because we'd have
>> sched_fair_runnable() := true.
>>
>> I'll pick this back up tomorrow, this is a task that requires either
>> caffeine or booze and it's too late for either.
>
> Thank you for chasing this, and get some sleep! This one is of course
> annoying, but it is not (yet) an emergency. I look forward to seeing
> what you come up with.
>
> Also, I would of course be happy to apply debug patches.
>
> Thanx, Paul
Chen Yu made me realize [1] that dequeue_task() really isn't enough; the
dequeue_task() in e.g. __sched_setscheduler() won't have DEQUEUE_DELAYED,
so stuff will just be left on the CFS tree.
Worse, what we need here is the __block_task() like we have at the end of
dequeue_entities(), otherwise p stays ->on_rq and that's borked - AFAICT
that explains the splat you're getting, because affine_move_task() ends up
doing a move_queued_task() for what really is a dequeued task.
I unfortunately couldn't reproduce the issue locally using your TREE03
invocation. I've pushed a new patch on top of my branch, would you mind
giving it a spin? It's a bit sketchy but should at least be going in the
right direction...
[1]: http://lore.kernel.org/r/Zs2d2aaC/zSyR94v@chenyu5-mobl2
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-28 12:35 ` Valentin Schneider
@ 2024-08-28 13:03 ` Paul E. McKenney
2024-08-28 13:40 ` Paul E. McKenney
2024-08-28 13:44 ` Chen Yu
1 sibling, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-08-28 13:03 UTC (permalink / raw)
To: Valentin Schneider
Cc: Peter Zijlstra, linux-kernel, sfr, linux-next, kernel-team,
Chen Yu
On Wed, Aug 28, 2024 at 02:35:45PM +0200, Valentin Schneider wrote:
> On 27/08/24 13:36, Paul E. McKenney wrote:
> > On Tue, Aug 27, 2024 at 10:30:24PM +0200, Valentin Schneider wrote:
> >> On 27/08/24 11:35, Paul E. McKenney wrote:
> >> > On Tue, Aug 27, 2024 at 10:33:13AM -0700, Paul E. McKenney wrote:
> >> >> On Tue, Aug 27, 2024 at 05:41:52PM +0200, Valentin Schneider wrote:
> >> >> > I've taken tip/sched/core and shuffled hunks around; I didn't re-order any
> >> >> > commit. I've also taken out the dequeue from switched_from_fair() and put
> >> >> > it at the very top of the branch which should hopefully help bisection.
> >> >> >
> >> >> > The final delta between that branch and tip/sched/core is empty, so it
> > >> > really is just shuffling in between commits.
> >> >> >
> >> >> > Please find the branch at:
> >> >> >
> >> >> > https://gitlab.com/vschneid/linux.git -b mainline/sched/eevdf-complete-builderr
> >> >> >
> >> >> > I'll go stare at the BUG itself now.
> >> >>
> >> >> Thank you!
> >> >>
> >> >> I have fired up tests on the "BROKEN?" commit. If that fails, I will
> >> >> try its predecessor, and if that fails, I will bisect from e28b5f8bda01
> >> >> ("sched/fair: Assert {set_next,put_prev}_entity() are properly balanced"),
> >> >> which has stood up to heavy hammering in earlier testing.
> >> >
> >> > And 50 runs of TREE03 on the "BROKEN?" commit resulted in 32 failures.
> >> > Of these, 29 were the dequeue_rt_stack() failure. Two more were RCU
> >> > CPU stall warnings, and the last one was an oddball "kernel BUG at
> >> > kernel/sched/rt.c:1714" followed by an equally oddball "Oops: invalid
> >> > opcode: 0000 [#1] PREEMPT SMP PTI".
> >> >
> >> > Just to be specific, this is commit:
> >> >
> >> > df8fe34bfa36 ("BROKEN? sched/fair: Dequeue sched_delayed tasks when switching from fair")
> >> >
> >> > This commit's predecessor is this commit:
> >> >
> >> > 2f888533d073 ("sched/eevdf: Propagate min_slice up the cgroup hierarchy")
> >> >
> >> > This predecessor commit passes 50 runs of TREE03 with no failures.
> >> >
> >> > So the addition of that dequeue_task() call to the switched_from_fair()
> >> > function is looking quite suspicious to me. ;-)
> >> >
> >> > Thanx, Paul
> >>
> >> Thanks for the testing!
> >>
> >> The WARN_ON_ONCE(!rt_se->on_list); hit in __dequeue_rt_entity() feels like
> >> a put_prev/set_next kind of issue...
> >>
> >> So far I'd assumed a ->sched_delayed task can't be current during
> >> switched_from_fair(), I got confused because it's Mond^CCC Tuesday, but I
> >> think that still holds: we can't get a balance_dl() or balance_rt() to drop
> >> the RQ lock because prev would be fair, and we can't get a
> >> newidle_balance() with a ->sched_delayed task because we'd have
> >> sched_fair_runnable() := true.
> >>
> >> I'll pick this back up tomorrow, this is a task that requires either
> >> caffeine or booze and it's too late for either.
> >
> > Thank you for chasing this, and get some sleep! This one is of course
> > annoying, but it is not (yet) an emergency. I look forward to seeing
> > what you come up with.
> >
> > Also, I would of course be happy to apply debug patches.
> >
> > Thanx, Paul
>
> Chen Yu made me realize [1] that dequeue_task() really isn't enough; the
> dequeue_task() in e.g. __sched_setscheduler() won't have DEQUEUE_DELAYED,
> so stuff will just be left on the CFS tree.
>
> Worse, what we need here is the __block_task() like we have at the end of
> dequeue_entities(), otherwise p stays ->on_rq and that's borked - AFAICT
> that explains the splat you're getting, because affine_move_task() ends up
> doing a move_queued_task() for what really is a dequeued task.
Sounds like something that *I* would do! ;-)
> I unfortunately couldn't reproduce the issue locally using your TREE03
> invocation. I've pushed a new patch on top of my branch, would you mind
> giving it a spin? It's a bit sketchy but should at least be going in the
> right direction...
>
> [1]: http://lore.kernel.org/r/Zs2d2aaC/zSyR94v@chenyu5-mobl2
Thank you!
I just now fired it up on 50*TREE03. If that passes, I will let you
know and also fire up 500*TREE03.
Thanx, Paul
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-28 13:03 ` Paul E. McKenney
@ 2024-08-28 13:40 ` Paul E. McKenney
0 siblings, 0 replies; 67+ messages in thread
From: Paul E. McKenney @ 2024-08-28 13:40 UTC (permalink / raw)
To: Valentin Schneider
Cc: Peter Zijlstra, linux-kernel, sfr, linux-next, kernel-team,
Chen Yu
On Wed, Aug 28, 2024 at 06:03:31AM -0700, Paul E. McKenney wrote:
> On Wed, Aug 28, 2024 at 02:35:45PM +0200, Valentin Schneider wrote:
> > On 27/08/24 13:36, Paul E. McKenney wrote:
> > > On Tue, Aug 27, 2024 at 10:30:24PM +0200, Valentin Schneider wrote:
> > >> On 27/08/24 11:35, Paul E. McKenney wrote:
> > >> > On Tue, Aug 27, 2024 at 10:33:13AM -0700, Paul E. McKenney wrote:
> > >> >> On Tue, Aug 27, 2024 at 05:41:52PM +0200, Valentin Schneider wrote:
> > >> >> > I've taken tip/sched/core and shuffled hunks around; I didn't re-order any
> > >> >> > commit. I've also taken out the dequeue from switched_from_fair() and put
> > >> >> > it at the very top of the branch which should hopefully help bisection.
> > >> >> >
> > >> >> > The final delta between that branch and tip/sched/core is empty, so it
> > >> >> > really is just shuffling in between commits.
> > >> >> >
> > >> >> > Please find the branch at:
> > >> >> >
> > >> >> > https://gitlab.com/vschneid/linux.git -b mainline/sched/eevdf-complete-builderr
> > >> >> >
> > >> >> > I'll go stare at the BUG itself now.
> > >> >>
> > >> >> Thank you!
> > >> >>
> > >> >> I have fired up tests on the "BROKEN?" commit. If that fails, I will
> > >> >> try its predecessor, and if that fails, I will bisect from e28b5f8bda01
> > >> >> ("sched/fair: Assert {set_next,put_prev}_entity() are properly balanced"),
> > >> >> which has stood up to heavy hammering in earlier testing.
> > >> >
> > >> > And 50 runs of TREE03 on the "BROKEN?" commit resulted in 32 failures.
> > >> > Of these, 29 were the dequeue_rt_stack() failure. Two more were RCU
> > >> > CPU stall warnings, and the last one was an oddball "kernel BUG at
> > >> > kernel/sched/rt.c:1714" followed by an equally oddball "Oops: invalid
> > >> > opcode: 0000 [#1] PREEMPT SMP PTI".
> > >> >
> > >> > Just to be specific, this is commit:
> > >> >
> > >> > df8fe34bfa36 ("BROKEN? sched/fair: Dequeue sched_delayed tasks when switching from fair")
> > >> >
> > >> > This commit's predecessor is this commit:
> > >> >
> > >> > 2f888533d073 ("sched/eevdf: Propagate min_slice up the cgroup hierarchy")
> > >> >
> > >> > This predecessor commit passes 50 runs of TREE03 with no failures.
> > >> >
> > >> > So the addition of that dequeue_task() call to the switched_from_fair()
> > >> > function is looking quite suspicious to me. ;-)
> > >> >
> > >> > Thanx, Paul
> > >>
> > >> Thanks for the testing!
> > >>
> > >> The WARN_ON_ONCE(!rt_se->on_list); hit in __dequeue_rt_entity() feels like
> > >> a put_prev/set_next kind of issue...
> > >>
> > >> So far I'd assumed a ->sched_delayed task can't be current during
> > >> switched_from_fair(), I got confused because it's Mond^CCC Tuesday, but I
> > >> think that still holds: we can't get a balance_dl() or balance_rt() to drop
> > >> the RQ lock because prev would be fair, and we can't get a
> > >> newidle_balance() with a ->sched_delayed task because we'd have
> > >> sched_fair_runnable() := true.
> > >>
> > >> I'll pick this back up tomorrow, this is a task that requires either
> > >> caffeine or booze and it's too late for either.
> > >
> > > Thank you for chasing this, and get some sleep! This one is of course
> > > annoying, but it is not (yet) an emergency. I look forward to seeing
> > > what you come up with.
> > >
> > > Also, I would of course be happy to apply debug patches.
> > >
> > > Thanx, Paul
> >
> > Chen Yu made me realize [1] that dequeue_task() really isn't enough; the
> > dequeue_task() in e.g. __sched_setscheduler() won't have DEQUEUE_DELAYED,
> > so stuff will just be left on the CFS tree.
> >
> > Worse, what we need here is the __block_task() like we have at the end of
> > dequeue_entities(), otherwise p stays ->on_rq and that's borked - AFAICT
> > that explains the splat you're getting, because affine_move_task() ends up
> > doing a move_queued_task() for what really is a dequeued task.
>
> Sounds like something that *I* would do! ;-)
>
> > I unfortunately couldn't reproduce the issue locally using your TREE03
> > invocation. I've pushed a new patch on top of my branch, would you mind
> > giving it a spin? It's a bit sketchy but should at least be going in the
> > right direction...
> >
> > [1]: http://lore.kernel.org/r/Zs2d2aaC/zSyR94v@chenyu5-mobl2
>
> Thank you!
>
> I just now fired it up on 50*TREE03. If that passes, I will let you
> know and also fire up 500*TREE03.
The good news is that there are no dequeue_rt_stack() failures.
The not-quite-so-good news is that there were 26 failures out of 50
runs, several of which were RCU CPU stall or rcutorture-writer stall
warnings, with the most frequent being splats like the one shown below.
(If you really want, I would be happy to send you the full set.)
In case it helps, this is my reproducer:
tools/testing/selftests/rcutorture/bin/kvm-remote.sh "<list of 20 system names>" --cpus 80 --duration 5m --configs "50*TREE03"
Each of the 20 system names is a 80-CPU system that can be reached with "ssh".
Though I can also reproduce on my laptop, it just takes a bit longer to
run 50 instances of TREE03. ;-)
On the other hand, this kvm-remote.sh run took only 11 minutes wall-clock
time, so testing your patches works fine for me, give or take timezone
issues.
Thanx, Paul
------------------------------------------------------------------------
[ 1.636950] BUG: kernel NULL pointer dereference, address: 0000000000000051
[ 1.637886] #PF: supervisor read access in kernel mode
[ 1.637886] #PF: error_code(0x0000) - not-present page
[ 1.637886] PGD 0 P4D 0
[ 1.637886] Oops: Oops: 0000 [#1] PREEMPT SMP PTI
[ 1.637886] CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Not tainted 6.11.0-rc1-00050-ge8593c21265a #1698
[ 1.637886] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
[ 1.637886] RIP: 0010:pick_task_fair+0x26/0xb0
[ 1.637886] Code: 90 90 90 90 f3 0f 1e fa 8b 57 50 85 d2 0f 84 93 00 00 00 55 48 8d 6f 40 53 48 89 fb 48 83 ec 08 48 89 ef eb 1c e8 6a be ff ff <80> 78 51 00 75 38 48 85 c0 74 43 48 8b b8 a8 00 00 00 48 85 ff 74
[ 1.637886] RSP: 0000:ffffa5fb00013b78 EFLAGS: 00010082
[ 1.637886] RAX: 0000000000000000 RBX: ffff95271f22c200 RCX: 0000000000000800
[ 1.637886] RDX: ed73115ce0430400 RSI: 0000000000000800 RDI: 0000db7ef943c3be
[ 1.637886] RBP: ffff95271f22c240 R08: 0000000000000000 R09: 0000000000000000
[ 1.637886] R10: fffe89e124bd80db R11: 0000000000000000 R12: ffffffffaa00e9b0
[ 1.637886] R13: ffff9527011e8000 R14: 0000000000000000 R15: ffff9527011e8000
[ 1.637886] FS: 0000000000000000(0000) GS:ffff95271f200000(0000) knlGS:0000000000000000
[ 1.637886] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1.637886] CR2: 0000000000000051 CR3: 0000000008e2e000 CR4: 00000000000006f0
[ 1.637886] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 1.637886] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 1.637886] Call Trace:
[ 1.637886] <TASK>
[ 1.637886] ? __die+0x1f/0x70
[ 1.637886] ? page_fault_oops+0x155/0x440
[ 1.637886] ? search_extable+0x26/0x30
[ 1.637886] ? pick_task_fair+0x26/0xb0
[ 1.637886] ? search_exception_tables+0x37/0x50
[ 1.637886] ? exc_page_fault+0x69/0x150
[ 1.637886] ? asm_exc_page_fault+0x26/0x30
[ 1.637886] ? pick_task_fair+0x26/0xb0
[ 1.637886] ? pick_task_fair+0x26/0xb0
[ 1.637886] pick_next_task_fair+0x40/0x2e0
[ 1.637886] __schedule+0x106/0x8b0
[ 1.637886] ? hrtimer_update_next_event+0x70/0x90
[ 1.637886] schedule+0x22/0xd0
[ 1.637886] schedule_hrtimeout_range_clock+0xa8/0x120
[ 1.637886] ? __pfx_hrtimer_wakeup+0x10/0x10
[ 1.637886] wait_task_inactive+0x1ac/0x1c0
[ 1.637886] __kthread_bind_mask+0x13/0x60
[ 1.637886] kthread_create_on_cpu+0x54/0x80
[ 1.637886] __smpboot_create_thread.part.0+0x60/0x130
[ 1.637886] smpboot_create_threads+0x5c/0x90
[ 1.637886] ? __pfx_smpboot_create_threads+0x10/0x10
[ 1.637886] cpuhp_invoke_callback+0x2cd/0x470
[ 1.637886] ? __pfx_trace_rb_cpu_prepare+0x10/0x10
[ 1.637886] __cpuhp_invoke_callback_range+0x71/0xe0
[ 1.637886] _cpu_up+0xee/0x1d0
[ 1.637886] cpu_up+0x88/0xb0
[ 1.637886] cpuhp_bringup_mask+0x47/0xb0
[ 1.637886] bringup_nonboot_cpus+0xca/0xf0
[ 1.637886] smp_init+0x25/0x80
[ 1.637886] kernel_init_freeable+0xd6/0x2d0
[ 1.637886] ? __pfx_kernel_init+0x10/0x10
[ 1.637886] kernel_init+0x15/0x1c0
[ 1.637886] ret_from_fork+0x2f/0x50
[ 1.637886] ? __pfx_kernel_init+0x10/0x10
[ 1.637886] ret_from_fork_asm+0x1a/0x30
[ 1.637886] </TASK>
[ 1.637886] Modules linked in:
[ 1.637886] CR2: 0000000000000051
[ 1.637886] ---[ end trace 0000000000000000 ]---
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-28 12:35 ` Valentin Schneider
2024-08-28 13:03 ` Paul E. McKenney
@ 2024-08-28 13:44 ` Chen Yu
2024-08-28 14:32 ` Valentin Schneider
1 sibling, 1 reply; 67+ messages in thread
From: Chen Yu @ 2024-08-28 13:44 UTC (permalink / raw)
To: Valentin Schneider
Cc: paulmck, Peter Zijlstra, linux-kernel, sfr, linux-next,
kernel-team
Hi,
On 2024-08-28 at 14:35:45 +0200, Valentin Schneider wrote:
> On 27/08/24 13:36, Paul E. McKenney wrote:
> > On Tue, Aug 27, 2024 at 10:30:24PM +0200, Valentin Schneider wrote:
> >> On 27/08/24 11:35, Paul E. McKenney wrote:
> >> > On Tue, Aug 27, 2024 at 10:33:13AM -0700, Paul E. McKenney wrote:
> >> >> On Tue, Aug 27, 2024 at 05:41:52PM +0200, Valentin Schneider wrote:
> >> >> > I've taken tip/sched/core and shuffled hunks around; I didn't re-order any
> >> >> > commit. I've also taken out the dequeue from switched_from_fair() and put
> >> >> > it at the very top of the branch which should hopefully help bisection.
> >> >> >
> >> >> > The final delta between that branch and tip/sched/core is empty, so it
> >> >> > really is just shuffling in between commits.
> >> >> >
> >> >> > Please find the branch at:
> >> >> >
> >> >> > https://gitlab.com/vschneid/linux.git -b mainline/sched/eevdf-complete-builderr
> >> >> >
> >> >> > I'll go stare at the BUG itself now.
> >> >>
> >> >> Thank you!
> >> >>
> >> >> I have fired up tests on the "BROKEN?" commit. If that fails, I will
> >> >> try its predecessor, and if that fails, I will bisect from e28b5f8bda01
> >> >> ("sched/fair: Assert {set_next,put_prev}_entity() are properly balanced"),
> >> >> which has stood up to heavy hammering in earlier testing.
> >> >
>> > And 50 runs of TREE03 on the "BROKEN?" commit resulted in 32 failures.
> >> > Of these, 29 were the dequeue_rt_stack() failure. Two more were RCU
> >> > CPU stall warnings, and the last one was an oddball "kernel BUG at
> >> > kernel/sched/rt.c:1714" followed by an equally oddball "Oops: invalid
> >> > opcode: 0000 [#1] PREEMPT SMP PTI".
> >> >
> >> > Just to be specific, this is commit:
> >> >
> >> > df8fe34bfa36 ("BROKEN? sched/fair: Dequeue sched_delayed tasks when switching from fair")
> >> >
> >> > This commit's predecessor is this commit:
> >> >
> >> > 2f888533d073 ("sched/eevdf: Propagate min_slice up the cgroup hierarchy")
> >> >
> >> > This predecessor commit passes 50 runs of TREE03 with no failures.
> >> >
>> > So the addition of that dequeue_task() call to the switched_from_fair()
> >> > function is looking quite suspicious to me. ;-)
> >> >
> >> > Thanx, Paul
> >>
> >> Thanks for the testing!
> >>
> >> The WARN_ON_ONCE(!rt_se->on_list); hit in __dequeue_rt_entity() feels like
> >> a put_prev/set_next kind of issue...
> >>
> >> So far I'd assumed a ->sched_delayed task can't be current during
> >> switched_from_fair(), I got confused because it's Mond^CCC Tuesday, but I
> >> think that still holds: we can't get a balance_dl() or balance_rt() to drop
> >> the RQ lock because prev would be fair, and we can't get a
> >> newidle_balance() with a ->sched_delayed task because we'd have
> >> sched_fair_runnable() := true.
> >>
> >> I'll pick this back up tomorrow, this is a task that requires either
> >> caffeine or booze and it's too late for either.
> >
> > Thank you for chasing this, and get some sleep! This one is of course
> > annoying, but it is not (yet) an emergency. I look forward to seeing
> > what you come up with.
> >
> > Also, I would of course be happy to apply debug patches.
> >
> > Thanx, Paul
>
> Chen Yu made me realize [1] that dequeue_task() really isn't enough; the
> dequeue_task() in e.g. __sched_setscheduler() won't have DEQUEUE_DELAYED,
> so stuff will just be left on the CFS tree.
>
One question: although there is no DEQUEUE_DELAYED flag, it is possible
that the delayed task could still be dequeued from the CFS tree, because
the dequeue in __sched_setscheduler() does not have DEQUEUE_SLEEP. And in
dequeue_entity():
bool sleep = flags & DEQUEUE_SLEEP;

if (flags & DEQUEUE_DELAYED) {
	...
} else {
	bool delay = sleep;

	if (sched_feat(DELAY_DEQUEUE) && delay &&	// false here
	    !entity_eligible(cfs_rq, se)) {
		// do not dequeue
	}
}
// dequeue the task <---- we should reach here?
thanks,
Chenyu
> Worse, what we need here is the __block_task() like we have at the end of
> dequeue_entities(), otherwise p stays ->on_rq and that's borked - AFAICT
> that explains the splat you're getting, because affine_move_task() ends up
> doing a move_queued_task() for what really is a dequeued task.
>
> I unfortunately couldn't reproduce the issue locally using your TREE03
> invocation. I've pushed a new patch on top of my branch, would you mind
> giving it a spin? It's a bit sketchy but should at least be going in the
> right direction...
>
> [1]: http://lore.kernel.org/r/Zs2d2aaC/zSyR94v@chenyu5-mobl2
>
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-28 13:44 ` Chen Yu
@ 2024-08-28 14:32 ` Valentin Schneider
2024-08-28 16:35 ` Paul E. McKenney
0 siblings, 1 reply; 67+ messages in thread
From: Valentin Schneider @ 2024-08-28 14:32 UTC (permalink / raw)
To: Chen Yu; +Cc: paulmck, Peter Zijlstra, linux-kernel, sfr, linux-next,
kernel-team
On 28/08/24 21:44, Chen Yu wrote:
> Hi,
>
> On 2024-08-28 at 14:35:45 +0200, Valentin Schneider wrote:
>> On 27/08/24 13:36, Paul E. McKenney wrote:
>> > On Tue, Aug 27, 2024 at 10:30:24PM +0200, Valentin Schneider wrote:
>> >>
>> >> Thanks for the testing!
>> >>
>> >> The WARN_ON_ONCE(!rt_se->on_list); hit in __dequeue_rt_entity() feels like
>> >> a put_prev/set_next kind of issue...
>> >>
>> >> So far I'd assumed a ->sched_delayed task can't be current during
>> >> switched_from_fair(), I got confused because it's Mond^CCC Tuesday, but I
>> >> think that still holds: we can't get a balance_dl() or balance_rt() to drop
>> >> the RQ lock because prev would be fair, and we can't get a
>> >> newidle_balance() with a ->sched_delayed task because we'd have
>> >> sched_fair_runnable() := true.
>> >>
>> >> I'll pick this back up tomorrow, this is a task that requires either
>> >> caffeine or booze and it's too late for either.
>> >
>> > Thank you for chasing this, and get some sleep! This one is of course
>> > annoying, but it is not (yet) an emergency. I look forward to seeing
>> > what you come up with.
>> >
>> > Also, I would of course be happy to apply debug patches.
>> >
>> > Thanx, Paul
>>
>> Chen Yu made me realize [1] that dequeue_task() really isn't enough; the
>> dequeue_task() in e.g. __sched_setscheduler() won't have DEQUEUE_DELAYED,
>> so stuff will just be left on the CFS tree.
>>
>
> One question, although there is no DEQUEUE_DELAYED flag, it is possible
> the delayed task could be dequeued from CFS tree. Because the dequeue in
> set_schedule() does not have DEQUEUE_SLEEP. And in dequeue_entity():
>
> bool sleep = flags & DEQUEUE_SLEEP;
>
> if (flags & DEQUEUE_DELAYED) {
>
> } else {
> bool delay = sleep;
> if (sched_feat(DELAY_DEQUEUE) && delay && //false
> !entity_eligible(cfs_rq, se) {
> //do not dequeue
> }
> }
>
> //dequeue the task <---- we should reach here?
>
You're quite right, so really here the main missing bit would be the final
__block_task() that a DEQUEUE_DELAYED dequeue_entities() would get us.
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-28 14:32 ` Valentin Schneider
@ 2024-08-28 16:35 ` Paul E. McKenney
2024-08-28 18:17 ` Valentin Schneider
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-08-28 16:35 UTC (permalink / raw)
To: Valentin Schneider
Cc: Chen Yu, Peter Zijlstra, linux-kernel, sfr, linux-next,
kernel-team
On Wed, Aug 28, 2024 at 04:32:41PM +0200, Valentin Schneider wrote:
> On 28/08/24 21:44, Chen Yu wrote:
> > Hi,
> >
> > On 2024-08-28 at 14:35:45 +0200, Valentin Schneider wrote:
> >> On 27/08/24 13:36, Paul E. McKenney wrote:
> >> > On Tue, Aug 27, 2024 at 10:30:24PM +0200, Valentin Schneider wrote:
> >> >>
> >> >> Thanks for the testing!
> >> >>
> >> >> The WARN_ON_ONCE(!rt_se->on_list); hit in __dequeue_rt_entity() feels like
> >> >> a put_prev/set_next kind of issue...
> >> >>
> >> >> So far I'd assumed a ->sched_delayed task can't be current during
> >> >> switched_from_fair(), I got confused because it's Mond^CCC Tuesday, but I
> >> >> think that still holds: we can't get a balance_dl() or balance_rt() to drop
> >> >> the RQ lock because prev would be fair, and we can't get a
> >> >> newidle_balance() with a ->sched_delayed task because we'd have
> >> >> sched_fair_runnable() := true.
> >> >>
> >> >> I'll pick this back up tomorrow, this is a task that requires either
> >> >> caffeine or booze and it's too late for either.
> >> >
> >> > Thank you for chasing this, and get some sleep! This one is of course
> >> > annoying, but it is not (yet) an emergency. I look forward to seeing
> >> > what you come up with.
> >> >
> >> > Also, I would of course be happy to apply debug patches.
> >> >
> >> > Thanx, Paul
> >>
> >> Chen Yu made me realize [1] that dequeue_task() really isn't enough; the
> >> dequeue_task() in e.g. __sched_setscheduler() won't have DEQUEUE_DELAYED,
> >> so stuff will just be left on the CFS tree.
> >>
> >
> > One question, although there is no DEQUEUE_DELAYED flag, it is possible
> > the delayed task could be dequeued from CFS tree. Because the dequeue in
> > set_schedule() does not have DEQUEUE_SLEEP. And in dequeue_entity():
> >
> > bool sleep = flags & DEQUEUE_SLEEP;
> >
> > if (flags & DEQUEUE_DELAYED) {
> >
> > } else {
> > bool delay = sleep;
> > if (sched_feat(DELAY_DEQUEUE) && delay && //false
> > !entity_eligible(cfs_rq, se) {
> > //do not dequeue
> > }
> > }
> >
> > //dequeue the task <---- we should reach here?
> >
>
> You're quite right, so really here the main missing bit would be the final
> __block_task() that a DEQUEUE_DELAYED dequeue_entities() would get us.
50*TREE03 passed, yay! Thank you both!!!
I started a 500*TREE03.
Yes, the odds of all 50 runs passing, given the baseline 52% failure rate,
are something like 10^-16, but software bugs are not necessarily
constrained by elementary statistics...
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-28 16:35 ` Paul E. McKenney
@ 2024-08-28 18:17 ` Valentin Schneider
2024-08-28 18:39 ` Paul E. McKenney
0 siblings, 1 reply; 67+ messages in thread
From: Valentin Schneider @ 2024-08-28 18:17 UTC (permalink / raw)
To: paulmck; +Cc: Chen Yu, Peter Zijlstra, linux-kernel, sfr, linux-next,
kernel-team
On 28/08/24 09:35, Paul E. McKenney wrote:
> On Wed, Aug 28, 2024 at 04:32:41PM +0200, Valentin Schneider wrote:
>> On 28/08/24 21:44, Chen Yu wrote:
>> >
>> > One question, although there is no DEQUEUE_DELAYED flag, it is possible
>> > the delayed task could be dequeued from CFS tree. Because the dequeue in
>> > set_schedule() does not have DEQUEUE_SLEEP. And in dequeue_entity():
>> >
>> > bool sleep = flags & DEQUEUE_SLEEP;
>> >
>> > if (flags & DEQUEUE_DELAYED) {
>> >
>> > } else {
>> > bool delay = sleep;
>> > if (sched_feat(DELAY_DEQUEUE) && delay && //false
>> > !entity_eligible(cfs_rq, se) {
>> > //do not dequeue
>> > }
>> > }
>> >
>> > //dequeue the task <---- we should reach here?
>> >
>>
>> You're quite right, so really here the main missing bit would be the final
>> __block_task() that a DEQUEUE_DELAYED dequeue_entities() would get us.
>
> 50*TREE03 passed, yay! Thank you both!!!
>
Fantastic, I'll hammer this into a "proper" patch then. Thanks again for
all the testing!
> I started a 500*TREE03.
>
> Yes, the odds all 50 passing given the baseline 52% failure rate is
> something like 10^-16, but software bugs are not necessarily constrained
> by elementary statistics...
>
:-)
> Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-28 18:17 ` Valentin Schneider
@ 2024-08-28 18:39 ` Paul E. McKenney
2024-08-29 10:28 ` Paul E. McKenney
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-08-28 18:39 UTC (permalink / raw)
To: Valentin Schneider
Cc: Chen Yu, Peter Zijlstra, linux-kernel, sfr, linux-next,
kernel-team
On Wed, Aug 28, 2024 at 08:17:06PM +0200, Valentin Schneider wrote:
> On 28/08/24 09:35, Paul E. McKenney wrote:
> > On Wed, Aug 28, 2024 at 04:32:41PM +0200, Valentin Schneider wrote:
> >> On 28/08/24 21:44, Chen Yu wrote:
> >> >
> >> > One question, although there is no DEQUEUE_DELAYED flag, it is possible
> >> > the delayed task could be dequeued from CFS tree. Because the dequeue in
> >> > set_schedule() does not have DEQUEUE_SLEEP. And in dequeue_entity():
> >> >
> >> > bool sleep = flags & DEQUEUE_SLEEP;
> >> >
> >> > if (flags & DEQUEUE_DELAYED) {
> >> >
> >> > } else {
> >> > bool delay = sleep;
> >> > if (sched_feat(DELAY_DEQUEUE) && delay && //false
> >> > !entity_eligible(cfs_rq, se) {
> >> > //do not dequeue
> >> > }
> >> > }
> >> >
> >> > //dequeue the task <---- we should reach here?
> >> >
> >>
> >> You're quite right, so really here the main missing bit would be the final
> >> __block_task() that a DEQUEUE_DELAYED dequeue_entities() would get us.
> >
> > 50*TREE03 passed, yay! Thank you both!!!
>
> Fantastic, I'll hammer this into a "proper" patch then. Thanks again for
> all the testing!
>
> > I started a 500*TREE03.
> >
> > Yes, the odds all 50 passing given the baseline 52% failure rate is
> > something like 10^-16, but software bugs are not necessarily constrained
> > by elementary statistics...
>
> :-)
The 500*TREE03 run had exactly one failure that was the dreaded
enqueue_dl_entity() failure, followed by RCU CPU stall warnings.
But a huge improvement over the prior state!
Plus, this failure is likely unrelated (see earlier discussions with
Peter). I just started a 5000*TREE03 run, just in case we can now
reproduce this thing.
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-28 18:39 ` Paul E. McKenney
@ 2024-08-29 10:28 ` Paul E. McKenney
2024-08-29 13:50 ` Valentin Schneider
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-08-29 10:28 UTC (permalink / raw)
To: Valentin Schneider
Cc: Chen Yu, Peter Zijlstra, linux-kernel, sfr, linux-next,
kernel-team
On Wed, Aug 28, 2024 at 11:39:19AM -0700, Paul E. McKenney wrote:
> On Wed, Aug 28, 2024 at 08:17:06PM +0200, Valentin Schneider wrote:
> > On 28/08/24 09:35, Paul E. McKenney wrote:
> > > On Wed, Aug 28, 2024 at 04:32:41PM +0200, Valentin Schneider wrote:
> > >> On 28/08/24 21:44, Chen Yu wrote:
> > >> >
> > >> > One question, although there is no DEQUEUE_DELAYED flag, it is possible
> > >> > the delayed task could be dequeued from CFS tree. Because the dequeue in
> > >> > set_schedule() does not have DEQUEUE_SLEEP. And in dequeue_entity():
> > >> >
> > >> > bool sleep = flags & DEQUEUE_SLEEP;
> > >> >
> > >> > if (flags & DEQUEUE_DELAYED) {
> > >> >
> > >> > } else {
> > >> > bool delay = sleep;
> > >> > if (sched_feat(DELAY_DEQUEUE) && delay && //false
> > >> > !entity_eligible(cfs_rq, se) {
> > >> > //do not dequeue
> > >> > }
> > >> > }
> > >> >
> > >> > //dequeue the task <---- we should reach here?
> > >> >
> > >>
> > >> You're quite right, so really here the main missing bit would be the final
> > >> __block_task() that a DEQUEUE_DELAYED dequeue_entities() would get us.
> > >
> > > 50*TREE03 passed, yay! Thank you both!!!
> >
> > Fantastic, I'll hammer this into a "proper" patch then. Thanks again for
> > all the testing!
> >
> > > I started a 500*TREE03.
> > >
> > > Yes, the odds all 50 passing given the baseline 52% failure rate is
> > > something like 10^-16, but software bugs are not necessarily constrained
> > > by elementary statistics...
> >
> > :-)
>
> The 500*TREE03 run had exactly one failure that was the dreaded
> enqueue_dl_entity() failure, followed by RCU CPU stall warnings.
>
> But a huge improvement over the prior state!
>
> Plus, this failure is likely unrelated (see earlier discussions with
> Peter). I just started a 5000*TREE03 run, just in case we can now
> reproduce this thing.
And we can now reproduce it! Again, this might be an unrelated bug that
was previously a one-off (OK, OK, a two-off!). Or this series might
have made it more probable. Who knows?
Eight of those 5000 runs got us this splat in enqueue_dl_entity():
WARN_ON_ONCE(on_dl_rq(dl_se));
Immediately followed by this splat in __enqueue_dl_entity():
WARN_ON_ONCE(!RB_EMPTY_NODE(&dl_se->rb_node));
These two splats always happened during rcutorture's testing of
RCU priority boosting. This testing involves spawning a CPU-bound
low-priority real-time kthread for each CPU, which is intended to starve
the non-realtime RCU readers, which are in turn to be rescued by RCU
priority boosting.
I do not entirely trust the following rcutorture diagnostic, but just
in case it helps...
Many of them printed something like this as well:
[ 111.279575] Boost inversion persisted: No QS from CPU 3
This message means that rcutorture has decided that RCU priority boosting
has failed, not because a low-priority preempted task was blocking
the grace period, but rather because some CPU managed to be running
the same task in-kernel the whole time without doing a context switch.
In some cases (but not this one), this was simply a side-effect of
RCU's grace-period kthread being starved of CPU time. Such starvation
is a surprise in this case because this kthread is running at higher
real-time priority than the kthreads that are intended to force RCU
priority boosting to happen.
Again, I do not entirely trust this rcutorture diagnostic, just in case
it helps.
Thanx, Paul
------------------------------------------------------------------------
[ 287.536845] rcu-torture: rcu_torture_boost is stopping
[ 287.536867] ------------[ cut here ]------------
[ 287.540661] WARNING: CPU: 4 PID: 132 at kernel/sched/deadline.c:2003 enqueue_dl_entity+0x50d/0x5c0
[ 287.542299] Modules linked in:
[ 287.542868] CPU: 4 UID: 0 PID: 132 Comm: kcompactd0 Not tainted 6.11.0-rc1-00051-gb32d207e39de #1701
[ 287.544335] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
[ 287.546337] RIP: 0010:enqueue_dl_entity+0x50d/0x5c0
[ 287.547154] Code: 8e a5 fb ff ff 48 c7 45 40 00 00 00 00 e9 98 fb ff ff 85 db 0f 84 80 fe ff ff 5b 44 89 e6 48 89 ef 5d 41 5c e9 44 d7 ff ff 90 <0f> 0b 90 e9 fe fa ff ff 48 8b bb f8 09 00 00 48 39 fe 0f 89 12 fc
[ 287.550035] RSP: 0018:ffff9a57404dfb60 EFLAGS: 00010082
[ 287.550855] RAX: 0000000000000001 RBX: ffff8d955f32caa8 RCX: 0000000000000002
[ 287.551954] RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffff8d955f32caa8
[ 287.553064] RBP: ffff8d955f32caa8 R08: 0000000000000001 R09: 000000000000020b
[ 287.554167] R10: 0000000000000000 R11: ffff8d9542360090 R12: 0000000000000001
[ 287.555256] R13: 00000000002dc6c0 R14: ffff8d955f32c200 R15: ffff8d955f32c240
[ 287.556364] FS: 0000000000000000(0000) GS:ffff8d955f300000(0000) knlGS:0000000000000000
[ 287.557650] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 287.558561] CR2: 0000000000000000 CR3: 0000000001f00000 CR4: 00000000000006f0
[ 287.559663] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 287.560777] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 287.561918] Call Trace:
[ 287.562319] <TASK>
[ 287.562659] ? __warn+0x7e/0x120
[ 287.563172] ? enqueue_dl_entity+0x50d/0x5c0
[ 287.563847] ? report_bug+0x18e/0x1a0
[ 287.564424] ? handle_bug+0x3d/0x70
[ 287.564902] rcu-torture: rcu_torture_read_exit: End of episode
[ 287.564983] ? exc_invalid_op+0x18/0x70
[ 287.566477] ? asm_exc_invalid_op+0x1a/0x20
[ 287.567130] ? enqueue_dl_entity+0x50d/0x5c0
[ 287.567791] dl_server_start+0x31/0xe0
[ 287.568375] enqueue_task_fair+0x218/0x680
[ 287.569019] activate_task+0x21/0x50
[ 287.569579] attach_task+0x30/0x50
[ 287.570110] sched_balance_rq+0x65d/0xe20
[ 287.570737] sched_balance_newidle.constprop.0+0x1a0/0x360
[ 287.571593] pick_next_task_fair+0x2a/0x2e0
[ 287.572242] __schedule+0x106/0x8b0
[ 287.572789] ? __mod_timer+0x23f/0x350
[ 287.573370] schedule+0x22/0xd0
[ 287.573864] schedule_timeout+0x8a/0x160
[ 287.574479] ? __pfx_process_timeout+0x10/0x10
[ 287.575162] kcompactd+0x336/0x3a0
[ 287.575696] ? __pfx_autoremove_wake_function+0x10/0x10
[ 287.576504] ? __pfx_kcompactd+0x10/0x10
[ 287.577109] kthread+0xd1/0x100
[ 287.577601] ? __pfx_kthread+0x10/0x10
[ 287.578192] ret_from_fork+0x2f/0x50
[ 287.578750] ? __pfx_kthread+0x10/0x10
[ 287.579328] ret_from_fork_asm+0x1a/0x30
[ 287.579942] </TASK>
[ 287.580280] ---[ end trace 0000000000000000 ]---
[ 287.581004] ------------[ cut here ]------------
[ 287.581712] WARNING: CPU: 4 PID: 132 at kernel/sched/deadline.c:1979 enqueue_dl_entity+0x54b/0x5c0
[ 287.583076] Modules linked in:
[ 287.583563] CPU: 4 UID: 0 PID: 132 Comm: kcompactd0 Tainted: G W 6.11.0-rc1-00051-gb32d207e39de #1701
[ 287.585170] Tainted: [W]=WARN
[ 287.585631] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
[ 287.587329] RIP: 0010:enqueue_dl_entity+0x54b/0x5c0
[ 287.588075] Code: 12 fc ff ff e9 16 ff ff ff 89 c1 45 84 d2 0f 84 02 fc ff ff a8 20 0f 84 fa fb ff ff 84 c0 0f 89 d0 fd ff ff e9 ed fb ff ff 90 <0f> 0b 90 e9 28 fc ff ff 84 d2 75 59 48 8d b5 50 fe ff ff 48 8d 95
[ 287.590887] RSP: 0018:ffff9a57404dfb60 EFLAGS: 00010082
[ 287.591684] RAX: 00000000ffffff00 RBX: ffff8d955f32c200 RCX: 0000000000000000
[ 287.592761] RDX: 0000000000000001 RSI: 0000000b1a2986b8 RDI: 0000000b1a28c7fc
[ 287.593891] RBP: ffff8d955f32caa8 R08: ffff8d955f32ca40 R09: 000000003b9aca00
[ 287.594969] R10: 0000000000000001 R11: 00000000000ee6b2 R12: 0000000000000001
[ 287.596048] R13: 00000000002dc6c0 R14: ffff8d955f32c200 R15: ffff8d955f32c240
[ 287.597129] FS: 0000000000000000(0000) GS:ffff8d955f300000(0000) knlGS:0000000000000000
[ 287.598375] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 287.599265] CR2: 0000000000000000 CR3: 0000000001f00000 CR4: 00000000000006f0
[ 287.600351] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 287.601439] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 287.602540] Call Trace:
[ 287.602917] <TASK>
[ 287.603245] ? __warn+0x7e/0x120
[ 287.603752] ? enqueue_dl_entity+0x54b/0x5c0
[ 287.604405] ? report_bug+0x18e/0x1a0
[ 287.604978] ? handle_bug+0x3d/0x70
[ 287.605523] ? exc_invalid_op+0x18/0x70
[ 287.606116] ? asm_exc_invalid_op+0x1a/0x20
[ 287.606765] ? enqueue_dl_entity+0x54b/0x5c0
[ 287.607420] dl_server_start+0x31/0xe0
[ 287.608013] enqueue_task_fair+0x218/0x680
[ 287.608643] activate_task+0x21/0x50
[ 287.609197] attach_task+0x30/0x50
[ 287.609736] sched_balance_rq+0x65d/0xe20
[ 287.610351] sched_balance_newidle.constprop.0+0x1a0/0x360
[ 287.611205] pick_next_task_fair+0x2a/0x2e0
[ 287.611849] __schedule+0x106/0x8b0
[ 287.612383] ? __mod_timer+0x23f/0x350
[ 287.612969] schedule+0x22/0xd0
[ 287.613450] schedule_timeout+0x8a/0x160
[ 287.614059] ? __pfx_process_timeout+0x10/0x10
[ 287.614740] kcompactd+0x336/0x3a0
[ 287.615261] ? __pfx_autoremove_wake_function+0x10/0x10
[ 287.616069] ? __pfx_kcompactd+0x10/0x10
[ 287.616676] kthread+0xd1/0x100
[ 287.617159] ? __pfx_kthread+0x10/0x10
[ 287.617739] ret_from_fork+0x2f/0x50
[ 287.618285] ? __pfx_kthread+0x10/0x10
[ 287.618869] ret_from_fork_asm+0x1a/0x30
[ 287.619472] </TASK>
[ 287.619809] ---[ end trace 0000000000000000 ]---
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-29 10:28 ` Paul E. McKenney
@ 2024-08-29 13:50 ` Valentin Schneider
2024-08-29 14:13 ` Paul E. McKenney
0 siblings, 1 reply; 67+ messages in thread
From: Valentin Schneider @ 2024-08-29 13:50 UTC (permalink / raw)
To: paulmck; +Cc: Chen Yu, Peter Zijlstra, linux-kernel, sfr, linux-next,
kernel-team
On 29/08/24 03:28, Paul E. McKenney wrote:
> On Wed, Aug 28, 2024 at 11:39:19AM -0700, Paul E. McKenney wrote:
>>
>> The 500*TREE03 run had exactly one failure that was the dreaded
>> enqueue_dl_entity() failure, followed by RCU CPU stall warnings.
>>
>> But a huge improvement over the prior state!
>>
>> Plus, this failure is likely unrelated (see earlier discussions with
>> Peter). I just started a 5000*TREE03 run, just in case we can now
>> reproduce this thing.
>
> And we can now reproduce it! Again, this might an unrelated bug that
> was previously a one-off (OK, OK, a two-off!). Or this series might
> have made it more probably. Who knows?
>
> Eight of those 5000 runs got us this splat in enqueue_dl_entity():
>
> WARN_ON_ONCE(on_dl_rq(dl_se));
>
> Immediately followed by this splat in __enqueue_dl_entity():
>
> WARN_ON_ONCE(!RB_EMPTY_NODE(&dl_se->rb_node));
>
> These two splats always happened during rcutorture's testing of
> RCU priority boosting. This testing involves spawning a CPU-bound
> low-priority real-time kthread for each CPU, which is intended to starve
> the non-realtime RCU readers, which are in turn to be rescued by RCU
> priority boosting.
>
Thanks!
> I do not entirely trust the following rcutorture diagnostic, but just
> in case it helps...
>
> Many of them also printed something like this as well:
>
> [ 111.279575] Boost inversion persisted: No QS from CPU 3
>
> This message means that rcutorture has decided that RCU priority boosting
> has failed, but not because a low-priority preempted task was blocking
> the grace period, but rather because some CPU managed to be running
> the same task in-kernel the whole time without doing a context switch.
> In some cases (but not this one), this was simply a side-effect of
> RCU's grace-period kthread being starved of CPU time. Such starvation
> is a surprise in this case because this kthread is running at higher
> real-time priority than the kthreads that are intended to force RCU
> priority boosting to happen.
>
> Again, I do not entirely trust this rcutorture diagnostic, just in case
> it helps.
>
> Thanx, Paul
>
> ------------------------------------------------------------------------
>
> [ 287.536845] rcu-torture: rcu_torture_boost is stopping
> [ 287.536867] ------------[ cut here ]------------
> [ 287.540661] WARNING: CPU: 4 PID: 132 at kernel/sched/deadline.c:2003 enqueue_dl_entity+0x50d/0x5c0
> [ 287.542299] Modules linked in:
> [ 287.542868] CPU: 4 UID: 0 PID: 132 Comm: kcompactd0 Not tainted 6.11.0-rc1-00051-gb32d207e39de #1701
> [ 287.544335] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
> [ 287.546337] RIP: 0010:enqueue_dl_entity+0x50d/0x5c0
> [ 287.603245] ? __warn+0x7e/0x120
> [ 287.603752] ? enqueue_dl_entity+0x54b/0x5c0
> [ 287.604405] ? report_bug+0x18e/0x1a0
> [ 287.604978] ? handle_bug+0x3d/0x70
> [ 287.605523] ? exc_invalid_op+0x18/0x70
> [ 287.606116] ? asm_exc_invalid_op+0x1a/0x20
> [ 287.606765] ? enqueue_dl_entity+0x54b/0x5c0
> [ 287.607420] dl_server_start+0x31/0xe0
> [ 287.608013] enqueue_task_fair+0x218/0x680
> [ 287.608643] activate_task+0x21/0x50
> [ 287.609197] attach_task+0x30/0x50
> [ 287.609736] sched_balance_rq+0x65d/0xe20
> [ 287.610351] sched_balance_newidle.constprop.0+0x1a0/0x360
> [ 287.611205] pick_next_task_fair+0x2a/0x2e0
> [ 287.611849] __schedule+0x106/0x8b0
Assuming this is still related to switched_from_fair(): since this is hit
during priority boosting, it would mean rt_mutex_setprio() gets
involved, but that uses the same set of DQ/EQ flags as
__sched_setscheduler().
I don't see any obvious path in
dequeue_task_fair()
`\
dequeue_entities()
that would prevent dl_server_stop() from happening when doing the
class-switch dequeue_task()... I don't see it in the TREE03 config, but can
you confirm CONFIG_CFS_BANDWIDTH isn't set in that scenario?
I'm going to keep digging, but I'm not entirely sure yet whether this is
related to the switched_from_fair() hackery or not. I'll send the patch I
have as-is and continue digging for a bit.
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-29 13:50 ` Valentin Schneider
@ 2024-08-29 14:13 ` Paul E. McKenney
2024-09-08 16:32 ` Paul E. McKenney
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-08-29 14:13 UTC (permalink / raw)
To: Valentin Schneider
Cc: Chen Yu, Peter Zijlstra, linux-kernel, sfr, linux-next,
kernel-team
On Thu, Aug 29, 2024 at 03:50:03PM +0200, Valentin Schneider wrote:
> On 29/08/24 03:28, Paul E. McKenney wrote:
> > On Wed, Aug 28, 2024 at 11:39:19AM -0700, Paul E. McKenney wrote:
> >>
> >> The 500*TREE03 run had exactly one failure that was the dreaded
> >> enqueue_dl_entity() failure, followed by RCU CPU stall warnings.
> >>
> >> But a huge improvement over the prior state!
> >>
> >> Plus, this failure is likely unrelated (see earlier discussions with
> >> Peter). I just started a 5000*TREE03 run, just in case we can now
> >> reproduce this thing.
> >
> > And we can now reproduce it! Again, this might an unrelated bug that
> > was previously a one-off (OK, OK, a two-off!). Or this series might
> > have made it more probably. Who knows?
> >
> > Eight of those 5000 runs got us this splat in enqueue_dl_entity():
> >
> > WARN_ON_ONCE(on_dl_rq(dl_se));
> >
> > Immediately followed by this splat in __enqueue_dl_entity():
> >
> > WARN_ON_ONCE(!RB_EMPTY_NODE(&dl_se->rb_node));
> >
> > These two splats always happened during rcutorture's testing of
> > RCU priority boosting. This testing involves spawning a CPU-bound
> > low-priority real-time kthread for each CPU, which is intended to starve
> > the non-realtime RCU readers, which are in turn to be rescued by RCU
> > priority boosting.
> >
>
> Thanks!
>
> > I do not entirely trust the following rcutorture diagnostic, but just
> > in case it helps...
> >
> > Many of them also printed something like this as well:
> >
> > [ 111.279575] Boost inversion persisted: No QS from CPU 3
> >
> > This message means that rcutorture has decided that RCU priority boosting
> > has failed, but not because a low-priority preempted task was blocking
> > the grace period, but rather because some CPU managed to be running
> > the same task in-kernel the whole time without doing a context switch.
> > In some cases (but not this one), this was simply a side-effect of
> > RCU's grace-period kthread being starved of CPU time. Such starvation
> > is a surprise in this case because this kthread is running at higher
> > real-time priority than the kthreads that are intended to force RCU
> > priority boosting to happen.
> >
> > Again, I do not entirely trust this rcutorture diagnostic, just in case
> > it helps.
> >
> > Thanx, Paul
> >
> > ------------------------------------------------------------------------
> >
> > [ 287.536845] rcu-torture: rcu_torture_boost is stopping
> > [ 287.536867] ------------[ cut here ]------------
> > [ 287.540661] WARNING: CPU: 4 PID: 132 at kernel/sched/deadline.c:2003 enqueue_dl_entity+0x50d/0x5c0
> > [ 287.542299] Modules linked in:
> > [ 287.542868] CPU: 4 UID: 0 PID: 132 Comm: kcompactd0 Not tainted 6.11.0-rc1-00051-gb32d207e39de #1701
> > [ 287.544335] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
> > [ 287.546337] RIP: 0010:enqueue_dl_entity+0x50d/0x5c0
> > [ 287.603245] ? __warn+0x7e/0x120
> > [ 287.603752] ? enqueue_dl_entity+0x54b/0x5c0
> > [ 287.604405] ? report_bug+0x18e/0x1a0
> > [ 287.604978] ? handle_bug+0x3d/0x70
> > [ 287.605523] ? exc_invalid_op+0x18/0x70
> > [ 287.606116] ? asm_exc_invalid_op+0x1a/0x20
> > [ 287.606765] ? enqueue_dl_entity+0x54b/0x5c0
> > [ 287.607420] dl_server_start+0x31/0xe0
> > [ 287.608013] enqueue_task_fair+0x218/0x680
> > [ 287.608643] activate_task+0x21/0x50
> > [ 287.609197] attach_task+0x30/0x50
> > [ 287.609736] sched_balance_rq+0x65d/0xe20
> > [ 287.610351] sched_balance_newidle.constprop.0+0x1a0/0x360
> > [ 287.611205] pick_next_task_fair+0x2a/0x2e0
> > [ 287.611849] __schedule+0x106/0x8b0
>
>
> Assuming this is still related to switched_from_fair(), since this is hit
> during priority boosting then it would mean rt_mutex_setprio() gets
> involved, but that uses the same set of DQ/EQ flags as
> __sched_setscheduler().
>
> I don't see any obvious path in
>
> dequeue_task_fair()
> `\
> dequeue_entities()
>
> that would prevent dl_server_stop() from happening when doing the
> class-switch dequeue_task()... I don't see it in the TREE03 config, but can
> you confirm CONFIG_CFS_BANDWIDTH isn't set in that scenario?
>
> I'm going to keep digging but I'm not entirely sure yet whether this is
> related to the switched_from_fair() hackery or not, I'll send the patch I
> have as-is and continue digging for a bit.
Makes sense to me, thank you, and glad that the diagnostics helped.
Looking forward to further fixes. ;-)
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-29 14:13 ` Paul E. McKenney
@ 2024-09-08 16:32 ` Paul E. McKenney
2024-09-13 14:08 ` Paul E. McKenney
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-09-08 16:32 UTC (permalink / raw)
To: Valentin Schneider
Cc: Chen Yu, Peter Zijlstra, linux-kernel, sfr, linux-next,
kernel-team
On Thu, Aug 29, 2024 at 07:13:07AM -0700, Paul E. McKenney wrote:
> On Thu, Aug 29, 2024 at 03:50:03PM +0200, Valentin Schneider wrote:
> > On 29/08/24 03:28, Paul E. McKenney wrote:
> > > On Wed, Aug 28, 2024 at 11:39:19AM -0700, Paul E. McKenney wrote:
> > >>
> > >> The 500*TREE03 run had exactly one failure that was the dreaded
> > >> enqueue_dl_entity() failure, followed by RCU CPU stall warnings.
> > >>
> > >> But a huge improvement over the prior state!
> > >>
> > >> Plus, this failure is likely unrelated (see earlier discussions with
> > >> Peter). I just started a 5000*TREE03 run, just in case we can now
> > >> reproduce this thing.
> > >
> > > And we can now reproduce it! Again, this might an unrelated bug that
> > > was previously a one-off (OK, OK, a two-off!). Or this series might
> > > have made it more probably. Who knows?
> > >
> > > Eight of those 5000 runs got us this splat in enqueue_dl_entity():
> > >
> > > WARN_ON_ONCE(on_dl_rq(dl_se));
> > >
> > > Immediately followed by this splat in __enqueue_dl_entity():
> > >
> > > WARN_ON_ONCE(!RB_EMPTY_NODE(&dl_se->rb_node));
> > >
> > > These two splats always happened during rcutorture's testing of
> > > RCU priority boosting. This testing involves spawning a CPU-bound
> > > low-priority real-time kthread for each CPU, which is intended to starve
> > > the non-realtime RCU readers, which are in turn to be rescued by RCU
> > > priority boosting.
> > >
> >
> > Thanks!
> >
> > > I do not entirely trust the following rcutorture diagnostic, but just
> > > in case it helps...
> > >
> > > Many of them also printed something like this as well:
> > >
> > > [ 111.279575] Boost inversion persisted: No QS from CPU 3
> > >
> > > This message means that rcutorture has decided that RCU priority boosting
> > > has failed, but not because a low-priority preempted task was blocking
> > > the grace period, but rather because some CPU managed to be running
> > > the same task in-kernel the whole time without doing a context switch.
> > > In some cases (but not this one), this was simply a side-effect of
> > > RCU's grace-period kthread being starved of CPU time. Such starvation
> > > is a surprise in this case because this kthread is running at higher
> > > real-time priority than the kthreads that are intended to force RCU
> > > priority boosting to happen.
> > >
> > > Again, I do not entirely trust this rcutorture diagnostic, just in case
> > > it helps.
> > >
> > > Thanx, Paul
> > >
> > > ------------------------------------------------------------------------
> > >
> > > [ 287.536845] rcu-torture: rcu_torture_boost is stopping
> > > [ 287.536867] ------------[ cut here ]------------
> > > [ 287.540661] WARNING: CPU: 4 PID: 132 at kernel/sched/deadline.c:2003 enqueue_dl_entity+0x50d/0x5c0
> > > [ 287.542299] Modules linked in:
> > > [ 287.542868] CPU: 4 UID: 0 PID: 132 Comm: kcompactd0 Not tainted 6.11.0-rc1-00051-gb32d207e39de #1701
> > > [ 287.544335] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
> > > [ 287.546337] RIP: 0010:enqueue_dl_entity+0x50d/0x5c0
> > > [ 287.603245] ? __warn+0x7e/0x120
> > > [ 287.603752] ? enqueue_dl_entity+0x54b/0x5c0
> > > [ 287.604405] ? report_bug+0x18e/0x1a0
> > > [ 287.604978] ? handle_bug+0x3d/0x70
> > > [ 287.605523] ? exc_invalid_op+0x18/0x70
> > > [ 287.606116] ? asm_exc_invalid_op+0x1a/0x20
> > > [ 287.606765] ? enqueue_dl_entity+0x54b/0x5c0
> > > [ 287.607420] dl_server_start+0x31/0xe0
> > > [ 287.608013] enqueue_task_fair+0x218/0x680
> > > [ 287.608643] activate_task+0x21/0x50
> > > [ 287.609197] attach_task+0x30/0x50
> > > [ 287.609736] sched_balance_rq+0x65d/0xe20
> > > [ 287.610351] sched_balance_newidle.constprop.0+0x1a0/0x360
> > > [ 287.611205] pick_next_task_fair+0x2a/0x2e0
> > > [ 287.611849] __schedule+0x106/0x8b0
> >
> >
> > Assuming this is still related to switched_from_fair(), since this is hit
> > during priority boosting then it would mean rt_mutex_setprio() gets
> > involved, but that uses the same set of DQ/EQ flags as
> > __sched_setscheduler().
> >
> > I don't see any obvious path in
> >
> > dequeue_task_fair()
> > `\
> > dequeue_entities()
> >
> > that would prevent dl_server_stop() from happening when doing the
> > class-switch dequeue_task()... I don't see it in the TREE03 config, but can
> > you confirm CONFIG_CFS_BANDWIDTH isn't set in that scenario?
> >
> > I'm going to keep digging but I'm not entirely sure yet whether this is
> > related to the switched_from_fair() hackery or not, I'll send the patch I
> > have as-is and continue digging for a bit.
>
> Makes sense to me, thank you, and glad that the diagnostics helped.
>
> Looking forward to further fixes. ;-)
Just following up...
For whatever it is worth, on last night's run of next-20240906, I got
nine failures out of 100 6-hour runs of rcutorture’s TREE03 scenario.
These failures were often, but not always, shortly followed by a hard hang.
The warning at line 1995 is the WARN_ON_ONCE(on_dl_rq(dl_se))
in enqueue_dl_entity() and the warning at line 1971 is the
WARN_ON_ONCE(!RB_EMPTY_NODE(&dl_se->rb_node)) in __enqueue_dl_entity().
The pair of splats is shown below, in case it helps.
Thanx, Paul
------------------------------------------------------------------------
[21122.992435] ------------[ cut here ]------------
[21122.994090] WARNING: CPU: 13 PID: 8032 at kernel/sched/deadline.c:1995 enqueue_dl_entity+0x511/0x5d0
[21122.995554] Modules linked in:
[21122.996048] CPU: 13 UID: 0 PID: 8032 Comm: kworker/13:1 Not tainted 6.11.0-rc6-next-20240906-dirty #2006
[21122.997548] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
[21122.999310] Workqueue: 0x0 (mm_percpu_wq)
[21122.999981] RIP: 0010:enqueue_dl_entity+0x511/0x5d0
[21123.000757] Code: ff 48 89 ef e8 b0 d2 ff ff 0f b6 4d 54 e9 0e fc ff ff 85 db 0f 84 d0 fe ff ff 5b 44 89 e6 48 89 ef 5d 41 5c e9 00 df ff ff 90 <0f> 0b 90 e9 fa fa ff ff 48 8b bb f8 09 00 00 48 39 fe 0f 89 de fb
[21123.003697] RSP: 0000:ffffabca84373bf0 EFLAGS: 00010086
[21123.004537] RAX: 0000000000000001 RBX: ffff98381f56cae8 RCX: 0000000000000002
[21123.005660] RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffff98381f56cae8
[21123.006781] RBP: ffff98381f56cae8 R08: 0000000000000001 R09: 0000000000000161
[21123.007902] R10: 0000000000000000 R11: ffff983802399d90 R12: 0000000000000001
[21123.009026] R13: 00000000002dc6c0 R14: ffff98381f56c240 R15: ffff98381f56c280
[21123.010213] FS: 0000000000000000(0000) GS:ffff98381f540000(0000) knlGS:0000000000000000
[21123.011584] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[21123.012498] CR2: 0000000000000000 CR3: 0000000002c16000 CR4: 00000000000006f0
[21123.013647] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[21123.014780] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[21123.015915] Call Trace:
[21123.016326] <TASK>
[21123.016672] ? __warn+0x83/0x130
[21123.017198] ? enqueue_dl_entity+0x511/0x5d0
[21123.017877] ? report_bug+0x18e/0x1a0
[21123.018469] ? handle_bug+0x54/0x90
[21123.019027] ? exc_invalid_op+0x18/0x70
[21123.019647] ? asm_exc_invalid_op+0x1a/0x20
[21123.020318] ? enqueue_dl_entity+0x511/0x5d0
[21123.020997] dl_server_start+0x31/0xe0
[21123.021603] enqueue_task_fair+0x218/0x680
[21123.022264] activate_task+0x21/0x50
[21123.022837] attach_task+0x30/0x50
[21123.023389] sched_balance_rq+0x65e/0xe00
[21123.024031] sched_balance_newidle.constprop.0+0x190/0x360
[21123.024903] pick_next_task_fair+0x2a/0x340
[21123.025576] __schedule+0x10e/0x8b0
[21123.026135] ? queue_delayed_work_on+0x53/0x60
[21123.026849] schedule+0x22/0xd0
[21123.027366] worker_thread+0x1a2/0x3a0
[21123.027963] ? __pfx_worker_thread+0x10/0x10
[21123.028651] kthread+0xd1/0x100
[21123.029153] ? __pfx_kthread+0x10/0x10
[21123.029758] ret_from_fork+0x2f/0x50
[21123.030334] ? __pfx_kthread+0x10/0x10
[21123.030933] ret_from_fork_asm+0x1a/0x30
[21123.031566] </TASK>
[21123.031920] ---[ end trace 0000000000000000 ]---
[21123.032669] ------------[ cut here ]------------
[21123.033409] WARNING: CPU: 13 PID: 8032 at kernel/sched/deadline.c:1971 enqueue_dl_entity+0x54f/0x5d0
[21123.034853] Modules linked in:
[21123.035354] CPU: 13 UID: 0 PID: 8032 Comm: kworker/13:1 Tainted: G W 6.11.0-rc6-next-20240906-dirty #2006
[21123.037081] Tainted: [W]=WARN
[21123.037562] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
[21123.039331] Workqueue: 0x0 (mm_percpu_wq)
[21123.039984] RIP: 0010:enqueue_dl_entity+0x54f/0x5d0
[21123.040767] Code: de fb ff ff e9 66 ff ff ff 89 c1 45 84 d2 0f 84 ce fb ff ff a8 20 0f 84 c6 fb ff ff 84 c0 0f 89 20 fe ff ff e9 b9 fb ff ff 90 <0f> 0b 90 e9 f4 fb ff ff 84 d2 0f 85 e3 fa ff ff 48 89 ea 48 8d b5
[21123.043716] RSP: 0000:ffffabca84373bf0 EFLAGS: 00010086
[21123.044549] RAX: 00000000ffffff00 RBX: ffff98381f56c240 RCX: 0000000000000000
[21123.045676] RDX: 0000000000000001 RSI: 0000000b1a2986b8 RDI: 0000000b1a2154a4
[21123.046806] RBP: ffff98381f56cae8 R08: ffff98381f56ca80 R09: 000000003b9aca00
[21123.047931] R10: 0000000000000001 R11: 00000000000ee6b2 R12: 0000000000000001
[21123.049061] R13: 00000000002dc6c0 R14: ffff98381f56c240 R15: ffff98381f56c280
[21123.050469] FS: 0000000000000000(0000) GS:ffff98381f540000(0000) knlGS:0000000000000000
[21123.051761] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[21123.052679] CR2: 0000000000000000 CR3: 0000000002c16000 CR4: 00000000000006f0
[21123.053817] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[21123.054952] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[21123.056086] Call Trace:
[21123.056691] <TASK>
[21123.057205] ? __warn+0x83/0x130
[21123.057984] ? enqueue_dl_entity+0x54f/0x5d0
[21123.058989] ? report_bug+0x18e/0x1a0
[21123.059865] ? handle_bug+0x54/0x90
[21123.060689] ? exc_invalid_op+0x18/0x70
[21123.061591] ? asm_exc_invalid_op+0x1a/0x20
[21123.062584] ? enqueue_dl_entity+0x54f/0x5d0
[21123.063337] dl_server_start+0x31/0xe0
[21123.063939] enqueue_task_fair+0x218/0x680
[21123.064604] activate_task+0x21/0x50
[21123.065185] attach_task+0x30/0x50
[21123.065729] sched_balance_rq+0x65e/0xe00
[21123.066377] sched_balance_newidle.constprop.0+0x190/0x360
[21123.067255] pick_next_task_fair+0x2a/0x340
[21123.067921] __schedule+0x10e/0x8b0
[21123.068524] ? queue_delayed_work_on+0x53/0x60
[21123.069238] schedule+0x22/0xd0
[21123.069742] worker_thread+0x1a2/0x3a0
[21123.070347] ? __pfx_worker_thread+0x10/0x10
[21123.071025] kthread+0xd1/0x100
[21123.071536] ? __pfx_kthread+0x10/0x10
[21123.072133] ret_from_fork+0x2f/0x50
[21123.072714] ? __pfx_kthread+0x10/0x10
[21123.073323] ret_from_fork_asm+0x1a/0x30
[21123.073949] </TASK>
[21123.074314] ---[ end trace 0000000000000000 ]---
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-09-08 16:32 ` Paul E. McKenney
@ 2024-09-13 14:08 ` Paul E. McKenney
2024-09-13 16:55 ` Valentin Schneider
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-09-13 14:08 UTC (permalink / raw)
To: Valentin Schneider, Chen Yu, Peter Zijlstra, linux-kernel, sfr,
linux-next, kernel-team
On Sun, Sep 08, 2024 at 09:32:18AM -0700, Paul E. McKenney wrote:
> On Thu, Aug 29, 2024 at 07:13:07AM -0700, Paul E. McKenney wrote:
> > On Thu, Aug 29, 2024 at 03:50:03PM +0200, Valentin Schneider wrote:
> > > On 29/08/24 03:28, Paul E. McKenney wrote:
> > > > On Wed, Aug 28, 2024 at 11:39:19AM -0700, Paul E. McKenney wrote:
> > > >>
> > > >> The 500*TREE03 run had exactly one failure that was the dreaded
> > > >> enqueue_dl_entity() failure, followed by RCU CPU stall warnings.
> > > >>
> > > >> But a huge improvement over the prior state!
> > > >>
> > > >> Plus, this failure is likely unrelated (see earlier discussions with
> > > >> Peter). I just started a 5000*TREE03 run, just in case we can now
> > > >> reproduce this thing.
> > > >
> > > > And we can now reproduce it! Again, this might be an unrelated bug that
> > > > was previously a one-off (OK, OK, a two-off!). Or this series might
> > > > have made it more probable. Who knows?
> > > >
> > > > Eight of those 5000 runs got us this splat in enqueue_dl_entity():
> > > >
> > > > WARN_ON_ONCE(on_dl_rq(dl_se));
> > > >
> > > > Immediately followed by this splat in __enqueue_dl_entity():
> > > >
> > > > WARN_ON_ONCE(!RB_EMPTY_NODE(&dl_se->rb_node));
> > > >
> > > > These two splats always happened during rcutorture's testing of
> > > > RCU priority boosting. This testing involves spawning a CPU-bound
> > > > low-priority real-time kthread for each CPU, which is intended to starve
> > > > the non-realtime RCU readers, which are in turn to be rescued by RCU
> > > > priority boosting.
> > > >
> > >
> > > Thanks!
> > >
> > > > I do not entirely trust the following rcutorture diagnostic, but just
> > > > in case it helps...
> > > >
> > > > Many of them also printed something like this as well:
> > > >
> > > > [ 111.279575] Boost inversion persisted: No QS from CPU 3
> > > >
> > > > This message means that rcutorture has decided that RCU priority boosting
> > > > has failed, but not because a low-priority preempted task was blocking
> > > > the grace period, but rather because some CPU managed to be running
> > > > the same task in-kernel the whole time without doing a context switch.
> > > > In some cases (but not this one), this was simply a side-effect of
> > > > RCU's grace-period kthread being starved of CPU time. Such starvation
> > > > is a surprise in this case because this kthread is running at higher
> > > > real-time priority than the kthreads that are intended to force RCU
> > > > priority boosting to happen.
> > > >
> > > > Again, I do not entirely trust this rcutorture diagnostic, just in case
> > > > it helps.
> > > >
> > > > Thanx, Paul
> > > >
> > > > ------------------------------------------------------------------------
> > > >
> > > > [ 287.536845] rcu-torture: rcu_torture_boost is stopping
> > > > [ 287.536867] ------------[ cut here ]------------
> > > > [ 287.540661] WARNING: CPU: 4 PID: 132 at kernel/sched/deadline.c:2003 enqueue_dl_entity+0x50d/0x5c0
> > > > [ 287.542299] Modules linked in:
> > > > [ 287.542868] CPU: 4 UID: 0 PID: 132 Comm: kcompactd0 Not tainted 6.11.0-rc1-00051-gb32d207e39de #1701
> > > > [ 287.544335] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
> > > > [ 287.546337] RIP: 0010:enqueue_dl_entity+0x50d/0x5c0
> > > > [ 287.603245] ? __warn+0x7e/0x120
> > > > [ 287.603752] ? enqueue_dl_entity+0x54b/0x5c0
> > > > [ 287.604405] ? report_bug+0x18e/0x1a0
> > > > [ 287.604978] ? handle_bug+0x3d/0x70
> > > > [ 287.605523] ? exc_invalid_op+0x18/0x70
> > > > [ 287.606116] ? asm_exc_invalid_op+0x1a/0x20
> > > > [ 287.606765] ? enqueue_dl_entity+0x54b/0x5c0
> > > > [ 287.607420] dl_server_start+0x31/0xe0
> > > > [ 287.608013] enqueue_task_fair+0x218/0x680
> > > > [ 287.608643] activate_task+0x21/0x50
> > > > [ 287.609197] attach_task+0x30/0x50
> > > > [ 287.609736] sched_balance_rq+0x65d/0xe20
> > > > [ 287.610351] sched_balance_newidle.constprop.0+0x1a0/0x360
> > > > [ 287.611205] pick_next_task_fair+0x2a/0x2e0
> > > > [ 287.611849] __schedule+0x106/0x8b0
> > >
> > >
> > > Assuming this is still related to switched_from_fair(), since this is hit
> > > during priority boosting then it would mean rt_mutex_setprio() gets
> > > involved, but that uses the same set of DQ/EQ flags as
> > > __sched_setscheduler().
> > >
> > > I don't see any obvious path in
> > >
> > > dequeue_task_fair()
> > > `\
> > > dequeue_entities()
> > >
> > > that would prevent dl_server_stop() from happening when doing the
> > > class-switch dequeue_task()... I don't see it in the TREE03 config, but can
> > > you confirm CONFIG_CFS_BANDWIDTH isn't set in that scenario?
> > >
> > > I'm going to keep digging but I'm not entirely sure yet whether this is
> > > related to the switched_from_fair() hackery or not, I'll send the patch I
> > > have as-is and continue digging for a bit.
> >
> > Makes sense to me, thank you, and glad that the diagnostics helped.
> >
> > Looking forward to further fixes. ;-)
>
> Just following up...
>
> For whatever it is worth, on last night's run of next-20240906, I got
> nine failures out of 100 6-hour runs of rcutorture’s TREE03 scenario.
> These failures were often, but not always, shortly followed by a hard hang.
>
> The warning at line 1995 is the WARN_ON_ONCE(on_dl_rq(dl_se))
> in enqueue_dl_entity() and the warning at line 1971 is the
> WARN_ON_ONCE(!RB_EMPTY_NODE(&dl_se->rb_node)) in __enqueue_dl_entity().
>
> The pair of splats is shown below, in case it helps.
Again following up...
I am still seeing this on next-20240912, with six failures out of 100
6-hour runs of rcutorture’s TREE03 scenario. Statistics suggest that
we should not read much into the change in frequency.
Please let me know if there are any diagnostic patches or options that
I should apply.
Thanx, Paul
> ------------------------------------------------------------------------
>
> [21122.992435] ------------[ cut here ]------------
> [21122.994090] WARNING: CPU: 13 PID: 8032 at kernel/sched/deadline.c:1995 enqueue_dl_entity+0x511/0x5d0
> [21122.995554] Modules linked in:
> [21122.996048] CPU: 13 UID: 0 PID: 8032 Comm: kworker/13:1 Not tainted 6.11.0-rc6-next-20240906-dirty #2006
> [21122.997548] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
> [21122.999310] Workqueue: 0x0 (mm_percpu_wq)
> [21122.999981] RIP: 0010:enqueue_dl_entity+0x511/0x5d0
> [21123.000757] Code: ff 48 89 ef e8 b0 d2 ff ff 0f b6 4d 54 e9 0e fc ff ff 85 db 0f 84 d0 fe ff ff 5b 44 89 e6 48 89 ef 5d 41 5c e9 00 df ff ff 90 <0f> 0b 90 e9 fa fa ff ff 48 8b bb f8 09 00 00 48 39 fe 0f 89 de fb
> [21123.003697] RSP: 0000:ffffabca84373bf0 EFLAGS: 00010086
> [21123.004537] RAX: 0000000000000001 RBX: ffff98381f56cae8 RCX: 0000000000000002
> [21123.005660] RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffff98381f56cae8
> [21123.006781] RBP: ffff98381f56cae8 R08: 0000000000000001 R09: 0000000000000161
> [21123.007902] R10: 0000000000000000 R11: ffff983802399d90 R12: 0000000000000001
> [21123.009026] R13: 00000000002dc6c0 R14: ffff98381f56c240 R15: ffff98381f56c280
> [21123.010213] FS: 0000000000000000(0000) GS:ffff98381f540000(0000) knlGS:0000000000000000
> [21123.011584] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [21123.012498] CR2: 0000000000000000 CR3: 0000000002c16000 CR4: 00000000000006f0
> [21123.013647] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [21123.014780] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> [21123.015915] Call Trace:
> [21123.016326] <TASK>
> [21123.016672] ? __warn+0x83/0x130
> [21123.017198] ? enqueue_dl_entity+0x511/0x5d0
> [21123.017877] ? report_bug+0x18e/0x1a0
> [21123.018469] ? handle_bug+0x54/0x90
> [21123.019027] ? exc_invalid_op+0x18/0x70
> [21123.019647] ? asm_exc_invalid_op+0x1a/0x20
> [21123.020318] ? enqueue_dl_entity+0x511/0x5d0
> [21123.020997] dl_server_start+0x31/0xe0
> [21123.021603] enqueue_task_fair+0x218/0x680
> [21123.022264] activate_task+0x21/0x50
> [21123.022837] attach_task+0x30/0x50
> [21123.023389] sched_balance_rq+0x65e/0xe00
> [21123.024031] sched_balance_newidle.constprop.0+0x190/0x360
> [21123.024903] pick_next_task_fair+0x2a/0x340
> [21123.025576] __schedule+0x10e/0x8b0
> [21123.026135] ? queue_delayed_work_on+0x53/0x60
> [21123.026849] schedule+0x22/0xd0
> [21123.027366] worker_thread+0x1a2/0x3a0
> [21123.027963] ? __pfx_worker_thread+0x10/0x10
> [21123.028651] kthread+0xd1/0x100
> [21123.029153] ? __pfx_kthread+0x10/0x10
> [21123.029758] ret_from_fork+0x2f/0x50
> [21123.030334] ? __pfx_kthread+0x10/0x10
> [21123.030933] ret_from_fork_asm+0x1a/0x30
> [21123.031566] </TASK>
> [21123.031920] ---[ end trace 0000000000000000 ]---
> [21123.032669] ------------[ cut here ]------------
> [21123.033409] WARNING: CPU: 13 PID: 8032 at kernel/sched/deadline.c:1971 enqueue_dl_entity+0x54f/0x5d0
> [21123.034853] Modules linked in:
> [21123.035354] CPU: 13 UID: 0 PID: 8032 Comm: kworker/13:1 Tainted: G W 6.11.0-rc6-next-20240906-dirty #2006
> [21123.037081] Tainted: [W]=WARN
> [21123.037562] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
> [21123.039331] Workqueue: 0x0 (mm_percpu_wq)
> [21123.039984] RIP: 0010:enqueue_dl_entity+0x54f/0x5d0
> [21123.040767] Code: de fb ff ff e9 66 ff ff ff 89 c1 45 84 d2 0f 84 ce fb ff ff a8 20 0f 84 c6 fb ff ff 84 c0 0f 89 20 fe ff ff e9 b9 fb ff ff 90 <0f> 0b 90 e9 f4 fb ff ff 84 d2 0f 85 e3 fa ff ff 48 89 ea 48 8d b5
> [21123.043716] RSP: 0000:ffffabca84373bf0 EFLAGS: 00010086
> [21123.044549] RAX: 00000000ffffff00 RBX: ffff98381f56c240 RCX: 0000000000000000
> [21123.045676] RDX: 0000000000000001 RSI: 0000000b1a2986b8 RDI: 0000000b1a2154a4
> [21123.046806] RBP: ffff98381f56cae8 R08: ffff98381f56ca80 R09: 000000003b9aca00
> [21123.047931] R10: 0000000000000001 R11: 00000000000ee6b2 R12: 0000000000000001
> [21123.049061] R13: 00000000002dc6c0 R14: ffff98381f56c240 R15: ffff98381f56c280
> [21123.050469] FS: 0000000000000000(0000) GS:ffff98381f540000(0000) knlGS:0000000000000000
> [21123.051761] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [21123.052679] CR2: 0000000000000000 CR3: 0000000002c16000 CR4: 00000000000006f0
> [21123.053817] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [21123.054952] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> [21123.056086] Call Trace:
> [21123.056691] <TASK>
> [21123.057205] ? __warn+0x83/0x130
> [21123.057984] ? enqueue_dl_entity+0x54f/0x5d0
> [21123.058989] ? report_bug+0x18e/0x1a0
> [21123.059865] ? handle_bug+0x54/0x90
> [21123.060689] ? exc_invalid_op+0x18/0x70
> [21123.061591] ? asm_exc_invalid_op+0x1a/0x20
> [21123.062584] ? enqueue_dl_entity+0x54f/0x5d0
> [21123.063337] dl_server_start+0x31/0xe0
> [21123.063939] enqueue_task_fair+0x218/0x680
> [21123.064604] activate_task+0x21/0x50
> [21123.065185] attach_task+0x30/0x50
> [21123.065729] sched_balance_rq+0x65e/0xe00
> [21123.066377] sched_balance_newidle.constprop.0+0x190/0x360
> [21123.067255] pick_next_task_fair+0x2a/0x340
> [21123.067921] __schedule+0x10e/0x8b0
> [21123.068524] ? queue_delayed_work_on+0x53/0x60
> [21123.069238] schedule+0x22/0xd0
> [21123.069742] worker_thread+0x1a2/0x3a0
> [21123.070347] ? __pfx_worker_thread+0x10/0x10
> [21123.071025] kthread+0xd1/0x100
> [21123.071536] ? __pfx_kthread+0x10/0x10
> [21123.072133] ret_from_fork+0x2f/0x50
> [21123.072714] ? __pfx_kthread+0x10/0x10
> [21123.073323] ret_from_fork_asm+0x1a/0x30
> [21123.073949] </TASK>
> [21123.074314] ---[ end trace 0000000000000000 ]---
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-09-13 14:08 ` Paul E. McKenney
@ 2024-09-13 16:55 ` Valentin Schneider
2024-09-13 18:00 ` Paul E. McKenney
0 siblings, 1 reply; 67+ messages in thread
From: Valentin Schneider @ 2024-09-13 16:55 UTC (permalink / raw)
To: paulmck, Chen Yu, Peter Zijlstra, linux-kernel, sfr, linux-next,
kernel-team
On 13/09/24 07:08, Paul E. McKenney wrote:
> On Sun, Sep 08, 2024 at 09:32:18AM -0700, Paul E. McKenney wrote:
>>
>> Just following up...
>>
>> For whatever it is worth, on last night's run of next-20240906, I got
>> nine failures out of 100 6-hour runs of rcutorture’s TREE03 scenario.
>> These failures were often, but not always, shortly followed by a hard hang.
>>
>> The warning at line 1995 is the WARN_ON_ONCE(on_dl_rq(dl_se))
>> in enqueue_dl_entity() and the warning at line 1971 is the
>> WARN_ON_ONCE(!RB_EMPTY_NODE(&dl_se->rb_node)) in __enqueue_dl_entity().
>>
>> The pair of splats is shown below, in case it helps.
>
> Again following up...
>
> I am still seeing this on next-20240912, with six failures out of 100
> 6-hour runs of rcutorture’s TREE03 scenario. Statistics suggest that
> we should not read much into the change in frequency.
>
> Please let me know if there are any diagnostic patches or options that
> I should apply.
>
Hey, sorry I haven't forgotten about this, I've just spread myself a bit
too thin and also apparently I'm supposed to prepare some slides for next
week, I'll get back to this soonish.
> Thanx, Paul
>
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-09-13 16:55 ` Valentin Schneider
@ 2024-09-13 18:00 ` Paul E. McKenney
2024-09-30 19:09 ` Paul E. McKenney
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-09-13 18:00 UTC (permalink / raw)
To: Valentin Schneider
Cc: Chen Yu, Peter Zijlstra, linux-kernel, sfr, linux-next,
kernel-team
On Fri, Sep 13, 2024 at 06:55:34PM +0200, Valentin Schneider wrote:
> On 13/09/24 07:08, Paul E. McKenney wrote:
> > On Sun, Sep 08, 2024 at 09:32:18AM -0700, Paul E. McKenney wrote:
> >>
> >> Just following up...
> >>
> >> For whatever it is worth, on last night's run of next-20240906, I got
> >> nine failures out of 100 6-hour runs of rcutorture’s TREE03 scenario.
> >> These failures were often, but not always, shortly followed by a hard hang.
> >>
> >> The warning at line 1995 is the WARN_ON_ONCE(on_dl_rq(dl_se))
> >> in enqueue_dl_entity() and the warning at line 1971 is the
> >> WARN_ON_ONCE(!RB_EMPTY_NODE(&dl_se->rb_node)) in __enqueue_dl_entity().
> >>
> >> The pair of splats is shown below, in case it helps.
> >
> > Again following up...
> >
> > I am still seeing this on next-20240912, with six failures out of 100
> > > 6-hour runs of rcutorture’s TREE03 scenario. Statistics suggest that
> > > we should not read much into the change in frequency.
> >
> > Please let me know if there are any diagnostic patches or options that
> > I should apply.
>
> Hey, sorry I haven't forgotten about this, I've just spread myself a bit
> too thin and also apparently I'm supposed to prepare some slides for next
> week, I'll get back to this soonish.
I know that feeling! Just didn't want it to get lost.
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-09-13 18:00 ` Paul E. McKenney
@ 2024-09-30 19:09 ` Paul E. McKenney
2024-09-30 20:44 ` Valentin Schneider
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-09-30 19:09 UTC (permalink / raw)
To: Valentin Schneider
Cc: Chen Yu, Peter Zijlstra, linux-kernel, sfr, linux-next,
kernel-team
On Fri, Sep 13, 2024 at 11:00:39AM -0700, Paul E. McKenney wrote:
> On Fri, Sep 13, 2024 at 06:55:34PM +0200, Valentin Schneider wrote:
> > On 13/09/24 07:08, Paul E. McKenney wrote:
> > > On Sun, Sep 08, 2024 at 09:32:18AM -0700, Paul E. McKenney wrote:
> > >>
> > >> Just following up...
> > >>
> > >> For whatever it is worth, on last night's run of next-20240906, I got
> > >> nine failures out of 100 6-hour runs of rcutorture’s TREE03 scenario.
> > >> These failures were often, but not always, shortly followed by a hard hang.
> > >>
> > >> The warning at line 1995 is the WARN_ON_ONCE(on_dl_rq(dl_se))
> > >> in enqueue_dl_entity() and the warning at line 1971 is the
> > >> WARN_ON_ONCE(!RB_EMPTY_NODE(&dl_se->rb_node)) in __enqueue_dl_entity().
> > >>
> > >> The pair of splats is shown below, in case it helps.
> > >
> > > Again following up...
> > >
> > > I am still seeing this on next-20240912, with six failures out of 100
> > > 6-hour runs of rcutorture’s TREE03 scenario. Statistics suggest that
> > > we should not read much into the change in frequency.
> > >
> > > Please let me know if there are any diagnostic patches or options that
> > > I should apply.
> >
> > Hey, sorry I haven't forgotten about this, I've just spread myself a bit
> > too thin and also apparently I'm supposed to prepare some slides for next
> > week, I'll get back to this soonish.
>
> I know that feeling! Just didn't want it to get lost.
And Peter asked that I send along a reproducer, which I am finally getting
around to doing:
tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 12h --configs "100*TREE03" --trust-make
Note that this run will consume 19,200 CPU hours, or more than two CPU
years. Therefore, this is best done across a largish number of systems.
The kvm-remote.sh script can be helpful for this sort of thing, and
you give it a quoted list of systems before the rest of the arguments
shown above.
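For concreteness, a kvm-remote.sh invocation of the kind described above might look like the following sketch; the host names are placeholders, and the remaining arguments are simply the ones from the kvm.sh command:

```shell
# Sketch only: "host1 host2 host3" is a hypothetical list of test
# systems reachable over ssh; the rest matches the kvm.sh reproducer.
tools/testing/selftests/rcutorture/bin/kvm-remote.sh "host1 host2 host3" \
	--allcpus --duration 12h --configs "100*TREE03" --trust-make
```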
Doing this on a -next from last week got me 15 failures similar to the
following:
[41212.683966] WARNING: CPU: 14 PID: 126 at kernel/sched/deadline.c:1995 enqueue_dl_entity+0x511/0x5d0
[41212.712453] WARNING: CPU: 14 PID: 126 at kernel/sched/deadline.c:1971 enqueue_dl_entity+0x54f/0x5d0
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-09-30 19:09 ` Paul E. McKenney
@ 2024-09-30 20:44 ` Valentin Schneider
2024-10-01 10:10 ` Paul E. McKenney
0 siblings, 1 reply; 67+ messages in thread
From: Valentin Schneider @ 2024-09-30 20:44 UTC (permalink / raw)
To: paulmck
Cc: Chen Yu, Peter Zijlstra, linux-kernel, sfr, linux-next,
kernel-team, Tomas Glozar
On 30/09/24 12:09, Paul E. McKenney wrote:
> On Fri, Sep 13, 2024 at 11:00:39AM -0700, Paul E. McKenney wrote:
>> On Fri, Sep 13, 2024 at 06:55:34PM +0200, Valentin Schneider wrote:
>> > On 13/09/24 07:08, Paul E. McKenney wrote:
>> > > On Sun, Sep 08, 2024 at 09:32:18AM -0700, Paul E. McKenney wrote:
>> > >>
>> > >> Just following up...
>> > >>
>> > >> For whatever it is worth, on last night's run of next-20240906, I got
>> > >> nine failures out of 100 6-hour runs of rcutorture’s TREE03 scenario.
>> > >> These failures were often, but not always, shortly followed by a hard hang.
>> > >>
>> > >> The warning at line 1995 is the WARN_ON_ONCE(on_dl_rq(dl_se))
>> > >> in enqueue_dl_entity() and the warning at line 1971 is the
>> > >> WARN_ON_ONCE(!RB_EMPTY_NODE(&dl_se->rb_node)) in __enqueue_dl_entity().
>> > >>
>> > >> The pair of splats is shown below, in case it helps.
>> > >
>> > > Again following up...
>> > >
>> > > I am still seeing this on next-20240912, with six failures out of 100
>> > > 6-hour runs of rcutorture’s TREE03 scenario. Statistics suggest that
>> > > we should not read much into the change in frequency.
>> > >
>> > > Please let me know if there are any diagnostic patches or options that
>> > > I should apply.
>> >
>> > Hey, sorry I haven't forgotten about this, I've just spread myself a bit
>> > too thin and also apparently I'm supposed to prepare some slides for next
>> > week, I'll get back to this soonish.
>>
>> I know that feeling! Just didn't want it to get lost.
>
> And Peter asked that I send along a reproducer, which I am finally getting
> around to doing:
>
> tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 12h --configs "100*TREE03" --trust-make
>
FYI Tomas (on Cc) has been working on getting pretty much this to run on
our infra; no hits so far.
How much of a pain would it be to record an ftrace trace while this runs?
I'm thinking sched_switch, sched_wakeup and function-tracing
dl_server_start() and dl_server_stop() would be a start.
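A sketch of how that could be wired up from the boot line, using standard ftrace boot parameters; that kvm.sh's --bootargs delivers them to the guest kernel unmodified is an assumption here, not something verified:

```shell
# Hedged sketch: enable the two sched tracepoints and function-trace
# the dl_server entry points from boot.  trace_event=, ftrace=, and
# ftrace_filter= are standard kernel boot parameters; the --bootargs
# plumbing is assumed to pass them through unmodified.
tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 12h \
	--configs "100*TREE03" --trust-make \
	--bootargs "trace_event=sched:sched_switch,sched:sched_wakeup ftrace=function ftrace_filter=dl_server_start,dl_server_stop"
```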
AIUI this is running under QEMU so we'd need to record the trace within
that, I'm guessing we can (ab)use --bootargs to feed it tracing arguments,
but how do we get the trace out?
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-09-30 20:44 ` Valentin Schneider
@ 2024-10-01 10:10 ` Paul E. McKenney
2024-10-01 12:52 ` Valentin Schneider
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-10-01 10:10 UTC (permalink / raw)
To: Valentin Schneider
Cc: Chen Yu, Peter Zijlstra, linux-kernel, sfr, linux-next,
kernel-team, Tomas Glozar
On Mon, Sep 30, 2024 at 10:44:24PM +0200, Valentin Schneider wrote:
> On 30/09/24 12:09, Paul E. McKenney wrote:
> > On Fri, Sep 13, 2024 at 11:00:39AM -0700, Paul E. McKenney wrote:
> >> On Fri, Sep 13, 2024 at 06:55:34PM +0200, Valentin Schneider wrote:
> >> > On 13/09/24 07:08, Paul E. McKenney wrote:
> >> > > On Sun, Sep 08, 2024 at 09:32:18AM -0700, Paul E. McKenney wrote:
> >> > >>
> >> > >> Just following up...
> >> > >>
> >> > >> For whatever it is worth, on last night's run of next-20240906, I got
> >> > >> nine failures out of 100 6-hour runs of rcutorture’s TREE03 scenario.
> >> > >> These failures were often, but not always, shortly followed by a hard hang.
> >> > >>
> >> > >> The warning at line 1995 is the WARN_ON_ONCE(on_dl_rq(dl_se))
> >> > >> in enqueue_dl_entity() and the warning at line 1971 is the
> >> > >> WARN_ON_ONCE(!RB_EMPTY_NODE(&dl_se->rb_node)) in __enqueue_dl_entity().
> >> > >>
> >> > >> The pair of splats is shown below, in case it helps.
> >> > >
> >> > > Again following up...
> >> > >
> >> > > I am still seeing this on next-20240912, with six failures out of 100
> >> > > 6-hour runs of rcutorture’s TREE03 scenario. Statistics suggest that
> >> > > we should not read much into the change in frequency.
> >> > >
> >> > > Please let me know if there are any diagnostic patches or options that
> >> > > I should apply.
> >> >
> >> > Hey, sorry I haven't forgotten about this, I've just spread myself a bit
> >> > too thin and also apparently I'm supposed to prepare some slides for next
> >> > week, I'll get back to this soonish.
> >>
> >> I know that feeling! Just didn't want it to get lost.
> >
> > And Peter asked that I send along a reproducer, which I am finally getting
> > around to doing:
> >
> > tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 12h --configs "100*TREE03" --trust-make
> >
>
> FYI Tomas (on Cc) has been working on getting pretty much this to run on
> our infra, no hit so far.
>
> How much of a pain would it be to record an ftrace trace while this runs?
> I'm thinking sched_switch, sched_wakeup and function-tracing
> dl_server_start() and dl_server_stop() would be a start.
>
> AIUI this is running under QEMU so we'd need to record the trace within
> that, I'm guessing we can (ab)use --bootargs to feed it tracing arguments,
> but how do we get the trace out?
Me, I would change those warnings to dump the trace buffer to the
console when triggered. Let me see if I can come up with something
better over breakfast. And yes, there is the concern that adding tracing
will suppress this issue.
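That suggestion corresponds roughly to the kernel-code fragment below (a sketch against the warning quoted earlier in the thread, not a tested patch). WARN_ON_ONCE() evaluates to the condition, so the dump runs only on the failing path:

```c
/* Sketch, not a tested patch: make the warning site in
 * enqueue_dl_entity() dump the ftrace ring buffer when it fires.
 * ftrace_dump() is the in-kernel API that spills the ring buffer to
 * the console; DUMP_ALL dumps every CPU's buffer. */
if (WARN_ON_ONCE(on_dl_rq(dl_se)))
	ftrace_dump(DUMP_ALL);
```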
So is there some state that I could manually dump upon triggering either
of these two warnings? That approach would minimize the probability of
suppressing the problem.
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-01 10:10 ` Paul E. McKenney
@ 2024-10-01 12:52 ` Valentin Schneider
2024-10-01 16:47 ` Paul E. McKenney
0 siblings, 1 reply; 67+ messages in thread
From: Valentin Schneider @ 2024-10-01 12:52 UTC (permalink / raw)
To: paulmck
Cc: Chen Yu, Peter Zijlstra, linux-kernel, sfr, linux-next,
kernel-team, Tomas Glozar
On 01/10/24 03:10, Paul E. McKenney wrote:
> On Mon, Sep 30, 2024 at 10:44:24PM +0200, Valentin Schneider wrote:
>> On 30/09/24 12:09, Paul E. McKenney wrote:
>> >
>> > And Peter asked that I send along a reproducer, which I am finally getting
>> > around to doing:
>> >
>> > tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 12h --configs "100*TREE03" --trust-make
>> >
>>
>> FYI Tomas (on Cc) has been working on getting pretty much this to run on
>> our infra, no hit so far.
>>
>> How much of a pain would it be to record an ftrace trace while this runs?
>> I'm thinking sched_switch, sched_wakeup and function-tracing
>> dl_server_start() and dl_server_stop() would be a start.
>>
>> AIUI this is running under QEMU so we'd need to record the trace within
>> that, I'm guessing we can (ab)use --bootargs to feed it tracing arguments,
>> but how do we get the trace out?
>
> Me, I would change those warnings to dump the trace buffer to the
> console when triggered. Let me see if I can come up with something
> better over breakfast. And yes, there is the concern that adding tracing
> will suppress this issue.
>
> So is there some state that I could manually dump upon triggering either
> of these two warnings? That approach would minimize the probability of
> suppressing the problem.
>
Usually enabling panic_on_warn and getting a kdump is ideal, but here this
is with QEMU - I know we can get a vmcore out via dump-guest-memory in the
QEMU monitor, but I don't have an immediate solution to do that on a
warn/panic.
Also I'd say here we're mostly interested in the sequence of events leading
us to the warn (dl_server_start() when the DL entity is somehow still
enqueued) rather than the state of things when the warn is hit, and for
that dumping the ftrace buffer to the console sounds good enough to me.
> Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-01 12:52 ` Valentin Schneider
@ 2024-10-01 16:47 ` Paul E. McKenney
2024-10-02 9:01 ` Tomas Glozar
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-10-01 16:47 UTC (permalink / raw)
To: Valentin Schneider
Cc: Chen Yu, Peter Zijlstra, linux-kernel, sfr, linux-next,
kernel-team, Tomas Glozar
On Tue, Oct 01, 2024 at 02:52:37PM +0200, Valentin Schneider wrote:
> On 01/10/24 03:10, Paul E. McKenney wrote:
> > On Mon, Sep 30, 2024 at 10:44:24PM +0200, Valentin Schneider wrote:
> >> On 30/09/24 12:09, Paul E. McKenney wrote:
> >> >
> >> > And Peter asked that I send along a reproducer, which I am finally getting
> >> > around to doing:
> >> >
> >> > tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 12h --configs "100*TREE03" --trust-make
> >> >
> >>
> >> FYI Tomas (on Cc) has been working on getting pretty much this to run on
> >> our infra, no hit so far.
> >>
> >> How much of a pain would it be to record an ftrace trace while this runs?
> >> I'm thinking sched_switch, sched_wakeup and function-tracing
> >> dl_server_start() and dl_server_stop() would be a start.
> >>
> >> AIUI this is running under QEMU so we'd need to record the trace within
> >> that, I'm guessing we can (ab)use --bootargs to feed it tracing arguments,
> >> but how do we get the trace out?
To answer this question directly, I am trying this:
--bootargs "trace_event=sched:sched_switch,sched:sched_wakeup ftrace_filter=dl_server_start,dl_server_stop torture.ftrace_dump_at_shutdown=1"
Huh, 50MB and growing. I need to limit the buffer size as well.
How about "trace_buf_size=2k"? The default is 1,441,792 bytes, just
over 1MB.
Except that I am not getting either dl_server_start() or dl_server_stop(),
perhaps because they are not being invoked in this short test run.
So try some function that is definitely getting invoked, such as
rcu_sched_clock_irq().
No joy there, either, so maybe add "ftrace=function"?
No: "[ 1.542360] ftrace bootup tracer 'function' not registered."
The "torture.ftrace_dump_at_shutdown" is just for this experiment. The
actual runs will do something like this:
if (on_dl_rq(dl_se)) { // Was: WARN_ON_ONCE(on_dl_rq(dl_se));
	tracing_off();
	ftrace_dump(DUMP_ALL);
	WARN_ON_ONCE(1);
}
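For concreteness, the boot-argument fragments above can be assembled into a single --bootargs string for the reproducer. This is only a sketch built from the pieces mentioned in this thread, not a verified command; the `echo` prints the resulting invocation rather than running it, and whether the tracing options take effect depends on the kernel configuration:

```shell
# Sketch: combine the tracing boot parameters discussed above into one
# --bootargs string for the rcutorture reproducer.  trace_buf_size=2k
# keeps the eventual console dump of the trace buffer small.
bootargs="trace_event=sched:sched_switch,sched:sched_wakeup"
bootargs="$bootargs ftrace_filter=dl_server_start,dl_server_stop"
bootargs="$bootargs trace_buf_size=2k torture.ftrace_dump_at_shutdown=1"
# Print (rather than run) the assembled command for inspection:
echo tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus \
    --duration 12h --configs "100*TREE03" --trust-make \
    --bootargs "$bootargs"
```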
> > Me, I would change those warnings to dump the trace buffer to the
> > console when triggered. Let me see if I can come up with something
> > better over breakfast. And yes, there is the concern that adding tracing
> > will suppress this issue.
> >
> > So is there some state that I could manually dump upon triggering either
> > of these two warnings? That approach would minimize the probability of
> > suppressing the problem.
>
> Usually enabling panic_on_warn and getting a kdump is ideal, but here this
> is with QEMU - I know we can get a vmcore out via dump-guest-memory in the
> QEMU monitor, but I don't have an immediate solution to do that on a
> warn/panic.
Especially given that I don't have a QEMU monitor for these 100 runs.
But if there is a way to do this programmatically from within the
kernel, I would be happy to give it a try.
> Also I'd say here we're mostly interested in the sequence of events leading
> us to the warn (dl_server_start() when the DL entity is somehow still
> enqueued) rather than the state of things when the warn is hit, and for
> that dumping the ftrace buffer to the console sounds good enough to me.
That I can do!!! Give or take function tracing appearing not to work
for me from the kernel command line. :-(
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-01 16:47 ` Paul E. McKenney
@ 2024-10-02 9:01 ` Tomas Glozar
2024-10-02 12:07 ` Paul E. McKenney
2024-10-10 11:24 ` Tomas Glozar
0 siblings, 2 replies; 67+ messages in thread
From: Tomas Glozar @ 2024-10-02 9:01 UTC (permalink / raw)
To: paulmck
Cc: Valentin Schneider, Chen Yu, Peter Zijlstra, linux-kernel, sfr,
linux-next, kernel-team
On Tue, Oct 1, 2024 at 6:47 PM, Paul E. McKenney <paulmck@kernel.org> wrote:
> Huh, 50MB and growing. I need to limit the buffer size as well.
> How about "trace_buf_size=2k"? The default is 1,441,792, just
> over 1m.
>
Yeah, limiting the size of the buffer is the way to go, we only need
the last n entries before the oops.
> Except that I am not getting either dl_server_start() or dl_server_stop(),
> perhaps because they are not being invoked in this short test run.
> So try some function that is definitely getting invoked, such as
> rcu_sched_clock_irq().
>
> No joy there, either, so maybe add "ftrace=function"?
>
> No: "[ 1.542360] ftrace bootup tracer 'function' not registered."
>
Did you enable CONFIG_BOOTTIME_TRACING and CONFIG_FUNCTION_TRACER?
They are not set in the default configuration for TREE03:
$ grep -E '(FUNCTION_TRACER)|(BOOTTIME_TRACING)' \
    ./tools/testing/selftests/rcutorture/res/2024.09.26-14.35.03/TREE03/.config
CONFIG_HAVE_FUNCTION_TRACER=y
# CONFIG_BOOTTIME_TRACING is not set
# CONFIG_FUNCTION_TRACER is not set
>
> Especially given that I don't have a QEMU monitor for these 100 runs.
>
> But if there is a way to do this programatically from within the
> kernel, I would be happy to give it a try.
>
> > Also I'd say here we're mostly interested in the sequence of events leading
> > us to the warn (dl_server_start() when the DL entity is somehow still
> > enqueued) rather than the state of things when the warn is hit, and for
> > that dumping the ftrace buffer to the console sounds good enough to me.
>
> That I can do!!! Give or take function tracing appearing not to work
> for me from the kernel command line. :-(
>
> Thanx, Paul
>
Thanks for trying to get details about the bug. See my comment above
about the config options to enable function tracing.
FYI I have managed to reproduce the bug on our infrastructure after 21
hours of 7*TREE03 and I will continue with trying to reproduce it with
the tracers we want.
Tomas
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-02 9:01 ` Tomas Glozar
@ 2024-10-02 12:07 ` Paul E. McKenney
2024-10-10 11:24 ` Tomas Glozar
1 sibling, 0 replies; 67+ messages in thread
From: Paul E. McKenney @ 2024-10-02 12:07 UTC (permalink / raw)
To: Tomas Glozar
Cc: Valentin Schneider, Chen Yu, Peter Zijlstra, linux-kernel, sfr,
linux-next, kernel-team
On Wed, Oct 02, 2024 at 11:01:03AM +0200, Tomas Glozar wrote:
> On Tue, Oct 1, 2024 at 6:47 PM, Paul E. McKenney <paulmck@kernel.org> wrote:
> > Huh, 50MB and growing. I need to limit the buffer size as well.
> > How about "trace_buf_size=2k"? The default is 1,441,792, just
> > over 1m.
> >
> Yeah, limiting the size of the buffer is the way to go, we only need
> the last n entries before the oops.
>
> > Except that I am not getting either dl_server_start() or dl_server_stop(),
> > perhaps because they are not being invoked in this short test run.
> > So try some function that is definitely getting invoked, such as
> > rcu_sched_clock_irq().
> >
> > No joy there, either, so maybe add "ftrace=function"?
> >
> > No: "[ 1.542360] ftrace bootup tracer 'function' not registered."
> >
> Did you enable CONFIG_BOOTTIME_TRACING and CONFIG_FUNCTION_TRACER?
> They are not set in the default configuration for TREE03:
>
> $ grep -E '(FUNCTION_TRACER)|(BOOTTIME_TRACING)'
> ./tools/testing/selftests/rcutorture/res/2024.09.26-14.35.03/TREE03/.config
> CONFIG_HAVE_FUNCTION_TRACER=y
> # CONFIG_BOOTTIME_TRACING is not set
> # CONFIG_FUNCTION_TRACER is not set
Ah, thank you! I knew I must be forgetting something. Now a short test
gets me things like this:
[ 304.572701] torture_-190 13d.h2. 302863957us : rcu_is_cpu_rrupt_from_idle <-rcu_sched_clock_irq
> > Especially given that I don't have a QEMU monitor for these 100 runs.
> >
> > But if there is a way to do this programatically from within the
> > kernel, I would be happy to give it a try.
> >
> > > Also I'd say here we're mostly interested in the sequence of events leading
> > > us to the warn (dl_server_start() when the DL entity is somehow still
> > > enqueued) rather than the state of things when the warn is hit, and for
> > > that dumping the ftrace buffer to the console sounds good enough to me.
> >
> > That I can do!!! Give or take function tracing appearing not to work
> > for me from the kernel command line. :-(
> >
> > Thanx, Paul
> >
>
> Thanks for trying to get details about the bug. See my comment above
> about the config options to enable function tracing.
I will check up on last night's run for heisenbug-evaluation purposes,
and if it did trigger, restart with this added:
--kconfigs "CONFIG_BOOTTIME_TRACING=y CONFIG_FUNCTION_TRACER=y"
> FYI I have managed to reproduce the bug on our infrastructure after 21
> hours of 7*TREE03 and I will continue with trying to reproduce it with
> the tracers we want.
Even better!!!
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-08-21 21:57 [BUG almost bisected] Splat in dequeue_rt_stack() and build error Paul E. McKenney
2024-08-22 23:01 ` Paul E. McKenney
2024-08-23 7:47 ` Peter Zijlstra
@ 2024-10-03 8:40 ` Peter Zijlstra
2024-10-03 8:47 ` Peter Zijlstra
2 siblings, 1 reply; 67+ messages in thread
From: Peter Zijlstra @ 2024-10-03 8:40 UTC (permalink / raw)
To: Paul E. McKenney; +Cc: vschneid, linux-kernel, sfr, linux-next, kernel-team
On Wed, Aug 21, 2024 at 02:57:16PM -0700, Paul E. McKenney wrote:
> My reproducer on the two-socket 40-core 80-HW-thread systems is:
>
> tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 1m --configs "50*TREE03" --trust-make
>
This gets me a very long stream of:
Results directory: /usr/src/linux-rcu/tools/testing/selftests/rcutorture/res/2024.10.03-09.30.33
tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 1m --configs 50*TREE03 --trust-make
TREE03 -------
QEMU error, output:
cat: /usr/src/linux-rcu/tools/testing/selftests/rcutorture/res/2024.10.03-09.30.33/TREE03/qemu-output: No such file or directory
TREE03.10 -------
QEMU error, output:
cat: /usr/src/linux-rcu/tools/testing/selftests/rcutorture/res/2024.10.03-09.30.33/TREE03.10/qemu-output: No such file or directory
...
Did I not do it right?
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-03 8:40 ` Peter Zijlstra
@ 2024-10-03 8:47 ` Peter Zijlstra
2024-10-03 9:27 ` Peter Zijlstra
0 siblings, 1 reply; 67+ messages in thread
From: Peter Zijlstra @ 2024-10-03 8:47 UTC (permalink / raw)
To: Paul E. McKenney; +Cc: vschneid, linux-kernel, sfr, linux-next, kernel-team
On Thu, Oct 03, 2024 at 10:40:39AM +0200, Peter Zijlstra wrote:
> On Wed, Aug 21, 2024 at 02:57:16PM -0700, Paul E. McKenney wrote:
>
> > My reproducer on the two-socket 40-core 80-HW-thread systems is:
> >
> > tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 1m --configs "50*TREE03" --trust-make
> >
>
> This gets me a very long stream of:
>
> Results directory: /usr/src/linux-rcu/tools/testing/selftests/rcutorture/res/2024.10.03-09.30.33
> tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 1m --configs 50*TREE03 --trust-make
> TREE03 -------
> QEMU error, output:
> cat: /usr/src/linux-rcu/tools/testing/selftests/rcutorture/res/2024.10.03-09.30.33/TREE03/qemu-output: No such file or directory
> TREE03.10 -------
> QEMU error, output:
> cat: /usr/src/linux-rcu/tools/testing/selftests/rcutorture/res/2024.10.03-09.30.33/TREE03.10/qemu-output: No such file or directory
> ...
>
>
> Did I not do it right?
Urgh, for some reason my machine doesn't auto load kvm_intel.ko and then
proceeds to not do anything useful.. Let me try again.
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-03 8:47 ` Peter Zijlstra
@ 2024-10-03 9:27 ` Peter Zijlstra
2024-10-03 12:28 ` Peter Zijlstra
2024-10-03 12:44 ` Paul E. McKenney
0 siblings, 2 replies; 67+ messages in thread
From: Peter Zijlstra @ 2024-10-03 9:27 UTC (permalink / raw)
To: Paul E. McKenney; +Cc: vschneid, linux-kernel, sfr, linux-next, kernel-team
On Thu, Oct 03, 2024 at 10:47:43AM +0200, Peter Zijlstra wrote:
> On Thu, Oct 03, 2024 at 10:40:39AM +0200, Peter Zijlstra wrote:
> > On Wed, Aug 21, 2024 at 02:57:16PM -0700, Paul E. McKenney wrote:
> >
> > > My reproducer on the two-socket 40-core 80-HW-thread systems is:
> > >
> > > tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 1m --configs "50*TREE03" --trust-make
> > >
> >
> > This gets me a very long stream of:
> >
> > Results directory: /usr/src/linux-rcu/tools/testing/selftests/rcutorture/res/2024.10.03-09.30.33
> > tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 1m --configs 50*TREE03 --trust-make
> > TREE03 -------
> > QEMU error, output:
> > cat: /usr/src/linux-rcu/tools/testing/selftests/rcutorture/res/2024.10.03-09.30.33/TREE03/qemu-output: No such file or directory
> > TREE03.10 -------
> > QEMU error, output:
> > cat: /usr/src/linux-rcu/tools/testing/selftests/rcutorture/res/2024.10.03-09.30.33/TREE03.10/qemu-output: No such file or directory
> > ...
> >
> >
> > Did I not do it right?
>
> Urgh, for some reason my machine doesn't auto load kvm_intel.ko and then
> proceeds to not do anything useful.. Let me try again.
Works a ton better now, obviously no splat yet :/
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-03 9:27 ` Peter Zijlstra
@ 2024-10-03 12:28 ` Peter Zijlstra
2024-10-03 12:45 ` Paul E. McKenney
2024-10-03 12:44 ` Paul E. McKenney
1 sibling, 1 reply; 67+ messages in thread
From: Peter Zijlstra @ 2024-10-03 12:28 UTC (permalink / raw)
To: Paul E. McKenney; +Cc: vschneid, linux-kernel, sfr, linux-next, kernel-team
On Thu, Oct 03, 2024 at 11:27:07AM +0200, Peter Zijlstra wrote:
> On Thu, Oct 03, 2024 at 10:47:43AM +0200, Peter Zijlstra wrote:
> > On Thu, Oct 03, 2024 at 10:40:39AM +0200, Peter Zijlstra wrote:
> > > On Wed, Aug 21, 2024 at 02:57:16PM -0700, Paul E. McKenney wrote:
> > >
> > > > My reproducer on the two-socket 40-core 80-HW-thread systems is:
> > > >
> > > > tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 1m --configs "50*TREE03" --trust-make
> > > >
> > >
> > > This gets me a very long stream of:
> > >
> > > Results directory: /usr/src/linux-rcu/tools/testing/selftests/rcutorture/res/2024.10.03-09.30.33
> > > tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 1m --configs 50*TREE03 --trust-make
> > > TREE03 -------
> > > QEMU error, output:
> > > cat: /usr/src/linux-rcu/tools/testing/selftests/rcutorture/res/2024.10.03-09.30.33/TREE03/qemu-output: No such file or directory
> > > TREE03.10 -------
> > > QEMU error, output:
> > > cat: /usr/src/linux-rcu/tools/testing/selftests/rcutorture/res/2024.10.03-09.30.33/TREE03.10/qemu-output: No such file or directory
> > > ...
> > >
> > >
> > > Did I not do it right?
> >
> > Urgh, for some reason my machine doesn't auto load kvm_intel.ko and then
> > proceeds to not do anything useful.. Let me try again.
>
> Works a ton better now, obviously no splat yet :/
I have by now run many hundreds of 1m TREE03 instances, and not yet seen
anything. Surely I'm doing it wrong?
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-03 9:27 ` Peter Zijlstra
2024-10-03 12:28 ` Peter Zijlstra
@ 2024-10-03 12:44 ` Paul E. McKenney
1 sibling, 0 replies; 67+ messages in thread
From: Paul E. McKenney @ 2024-10-03 12:44 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: vschneid, linux-kernel, sfr, linux-next, kernel-team
On Thu, Oct 03, 2024 at 11:27:07AM +0200, Peter Zijlstra wrote:
> On Thu, Oct 03, 2024 at 10:47:43AM +0200, Peter Zijlstra wrote:
> > On Thu, Oct 03, 2024 at 10:40:39AM +0200, Peter Zijlstra wrote:
> > > On Wed, Aug 21, 2024 at 02:57:16PM -0700, Paul E. McKenney wrote:
> > >
> > > > My reproducer on the two-socket 40-core 80-HW-thread systems is:
> > > >
> > > > tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 1m --configs "50*TREE03" --trust-make
> > > >
> > >
> > > This gets me a very long stream of:
> > >
> > > Results directory: /usr/src/linux-rcu/tools/testing/selftests/rcutorture/res/2024.10.03-09.30.33
> > > tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 1m --configs 50*TREE03 --trust-make
> > > TREE03 -------
> > > QEMU error, output:
> > > cat: /usr/src/linux-rcu/tools/testing/selftests/rcutorture/res/2024.10.03-09.30.33/TREE03/qemu-output: No such file or directory
> > > TREE03.10 -------
> > > QEMU error, output:
> > > cat: /usr/src/linux-rcu/tools/testing/selftests/rcutorture/res/2024.10.03-09.30.33/TREE03.10/qemu-output: No such file or directory
> > > ...
> > >
> > >
> > > Did I not do it right?
> >
> > Urgh, for some reason my machine doesn't auto load kvm_intel.ko and then
> > proceeds to not do anything useful.. Let me try again.
Been there, done that, and many variations...
> Works a ton better now, obviously no splat yet :/
Well, after enabling function tracing as advised, I got splats on 18
hours of 100*TREE03, but different ones. Heisenbugs for the win!!!
Sadly, the bug's win, not ours. :-(
The first syndrome is starvation of RCU's grace-period kthread, and
apologies for the lack of trimming, but I have no idea what is relevant
here. Though the rcu_torture_barrier_cbs() might suggest a scheduling bug
in rcutorture. Not that I am seeing one, but all developers are blind...
And it is quite possible that the stalls were initially triggered by the
ftrace_dump(). So I will rerun suppressing stalls during that time.
(I thought limiting the buffers to 2k would make that unnecessary,
but it appears not!)
The stalls are below, just in case they are useful.
Thanx, Paul
------------------------------------------------------------------------
[35205.919036] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
[35205.919834] rcu: (detected by 3, t=21003 jiffies, g=19954621, q=13582 ncpus=7)
[35205.920705] rcu: All QSes seen, last rcu_preempt kthread activity 17821 (4329872953-4329855132), jiffies_till_next_fqs=3, root ->qsmask 0x0
[35205.922197] rcu: rcu_preempt kthread starved for 17821 jiffies! g19954621 f0x2 RCU_GP_CLEANUP(7) ->state=0x0 ->cpu=5
[35205.923446] rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
[35205.924530] rcu: RCU grace-period kthread stack dump:
[35205.925140] task:rcu_preempt state:R running task stack:14752 pid:16 tgid:16 ppid:2 flags:0x00004000
[35205.926432] Call Trace:
[35205.926729] <TASK>
[35205.926992] __schedule+0x3e8/0x8f0
[35205.927428] ? __mod_timer+0x23f/0x350
[35205.927890] schedule+0x27/0xd0
[35205.928270] schedule_timeout+0x77/0xf0
[35205.928801] ? __pfx_process_timeout+0x10/0x10
[35205.929396] rcu_gp_cleanup+0x14d/0x5d0
[35205.929870] rcu_gp_kthread+0x1a0/0x240
[35205.930336] ? __pfx_rcu_gp_kthread+0x10/0x10
[35205.930867] kthread+0xd6/0x100
[35205.931251] ? __pfx_kthread+0x10/0x10
[35205.931703] ret_from_fork+0x34/0x50
[35205.932145] ? __pfx_kthread+0x10/0x10
[35205.932598] ret_from_fork_asm+0x1a/0x30
[35205.933080] </TASK>
[35205.933347] rcu: Stack dump where RCU GP kthread last ran:
[35205.934010] Sending NMI from CPU 3 to CPUs 5:
[35205.934545] NMI backtrace for cpu 5
[35205.934550] CPU: 5 UID: 0 PID: 193 Comm: rcu_torture_bar Not tainted 6.11.0-next-20240927-dirty #2148
[35205.934554] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
[35205.934556] RIP: 0010:queued_spin_lock_slowpath+0x1c/0x2b0
[35205.934564] Code: 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 0f 1f 44 00 00 41 55 41 54 55 48 89 fd 53 66 90 ba 01 00 00 00 8b 45 00 <85> c0 75 6c f0 0f b1 55 00 75 65 5b 5d 41 5c 41 5d c3 cc cc cc cc
[35205.934566] RSP: 0000:ffffaa11806bfd08 EFLAGS: 00000002
[35205.934569] RAX: 0000000000000001 RBX: ffff93d0820ec880 RCX: ffff93d09f200000
[35205.934571] RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffff93d09f22c400
[35205.934572] RBP: ffff93d09f22c400 R08: 0000000000000000 R09: ffff93d08241c008
[35205.934573] R10: ffff93d08241c008 R11: 0000000000000ccc R12: ffff93d09f22c400
[35205.934575] R13: ffff93d0820ed0b4 R14: 0000000000000087 R15: ffff93d09f200000
[35205.934578] FS: 0000000000000000(0000) GS:ffff93d09f340000(0000) knlGS:0000000000000000
[35205.934580] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[35205.934582] CR2: 0000000000000000 CR3: 0000000001154000 CR4: 00000000000006f0
[35205.934583] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[35205.934584] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[35205.934586] Call Trace:
[35205.934588] <NMI>
[35205.934590] ? nmi_cpu_backtrace+0x87/0xf0
[35205.934595] ? nmi_cpu_backtrace_handler+0x11/0x20
[35205.934601] ? nmi_handle+0x62/0x150
[35205.934606] ? default_do_nmi+0x41/0x100
[35205.934610] ? exc_nmi+0xe0/0x110
[35205.934613] ? end_repeat_nmi+0xf/0x53
[35205.934617] ? queued_spin_lock_slowpath+0x1c/0x2b0
[35205.934620] ? queued_spin_lock_slowpath+0x1c/0x2b0
[35205.934623] ? queued_spin_lock_slowpath+0x1c/0x2b0
[35205.934625] </NMI>
[35205.934626] <TASK>
[35205.934627] try_to_wake_up+0x1b6/0x5e0
[35205.934635] kick_pool+0x5e/0x130
[35205.934641] __queue_work+0x2b2/0x4f0
[35205.934644] queue_work_on+0x4e/0x60
[35205.934647] ? __pfx_rcu_torture_barrier1cb+0x10/0x10
[35205.934653] smp_call_on_cpu+0xde/0x100
[35205.934660] ? __pfx_smp_call_on_cpu_callback+0x10/0x10
[35205.934664] ? __pfx_rcu_torture_barrier1cb+0x10/0x10
[35205.934667] rcu_torture_barrier_cbs+0x96/0x1c0
[35205.934671] ? __pfx_autoremove_wake_function+0x10/0x10
[35205.934674] ? __pfx_rcu_torture_barrier_cbs+0x10/0x10
[35205.934676] kthread+0xd6/0x100
[35205.934679] ? __pfx_kthread+0x10/0x10
[35205.934681] ret_from_fork+0x34/0x50
[35205.934685] ? __pfx_kthread+0x10/0x10
[35205.934687] ret_from_fork_asm+0x1a/0x30
[35205.934691] </TASK>
------------------------------------------------------------------------
This continues long enough to trigger a second warning an additional
63 seconds in, and again 63 seconds after that. And maybe more after
that, but there are 22 others to glance at.
The second syndrome combines a more normal stall with an immediately
following stall of RCU's grace-period kthread, but apparently due to
that kthread spinning in resched_cpu():
------------------------------------------------------------------------
[32120.616146] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
[32120.617373] rcu: 10-...0: (1 ticks this GP) idle=090c/1/0x4000000000000000 softirq=0/0 fqs=2316 rcuc=21012 jiffies(starved)
[32120.619427] rcu: 12-...0: (1 GPs behind) idle=84f4/1/0x4000000000000000 softirq=0/0 fqs=2316 rcuc=24529 jiffies(starved)
[32120.621456] rcu: (detected by 7, t=21008 jiffies, g=18282245, q=22839 ncpus=9)
[32120.622814] Sending NMI from CPU 7 to CPUs 10:
[32120.622831] NMI backtrace for cpu 10
[32120.622837] CPU: 10 UID: 0 PID: 85 Comm: cpuhp/10 Not tainted 6.11.0-next-20240927-dirty #2148
[32120.622841] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
[32120.622843] RIP: 0010:queued_spin_lock_slowpath+0x8e/0x2b0
[32120.622854] Code: 74 12 0f b6 45 00 84 c0 74 0a f3 90 0f b6 45 00 84 c0 75 f6 b8 01 00 00 00 66 89 45 00 5b 5d 41 5c 41 5d c3 cc cc cc cc f3 90 <eb> 89 8b 37 b8 00 02 00 00 81 fe 00 01 00 00 74 07 eb 99 83 e8 01
[32120.622857] RSP: 0000:ffffb7b80034fc20 EFLAGS: 00000002
[32120.622860] RAX: 0000000000000001 RBX: ffffffff97dd0c40 RCX: 000000000000000c
[32120.622862] RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffff9c6f1f52c400
[32120.622863] RBP: ffff9c6f1f52c400 R08: 000000000000000b R09: ffffffff97dd0cf8
[32120.622865] R10: ffffb7b80034fd40 R11: ffffffff97309d48 R12: 000000000000000c
[32120.622867] R13: 0000000000000246 R14: 0000000000000000 R15: 000000000000000c
[32120.622870] FS: 0000000000000000(0000) GS:ffff9c6f1f480000(0000) knlGS:0000000000000000
[32120.622872] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[32120.622873] CR2: 0000000000000000 CR3: 000000001342e000 CR4: 00000000000006f0
[32120.622875] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[32120.622876] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[32120.622877] Call Trace:
[32120.622891] <NMI>
[32120.622893] ? nmi_cpu_backtrace+0x87/0xf0
[32120.622899] ? nmi_cpu_backtrace_handler+0x11/0x20
[32120.622905] ? nmi_handle+0x62/0x150
[32120.622912] ? default_do_nmi+0x41/0x100
[32120.622916] ? exc_nmi+0xe0/0x110
[32120.622918] ? end_repeat_nmi+0xf/0x53
[32120.622923] ? queued_spin_lock_slowpath+0x8e/0x2b0
[32120.622926] ? queued_spin_lock_slowpath+0x8e/0x2b0
[32120.622929] ? queued_spin_lock_slowpath+0x8e/0x2b0
[32120.622932] </NMI>
[32120.622932] <TASK>
[32120.622933] raw_spin_rq_lock_nested+0x15/0x30
[32120.622938] rq_attach_root+0x20/0xf0
[32120.622944] cpu_attach_domain+0x104/0x360
[32120.622948] partition_sched_domains_locked+0x31a/0x450
[32120.622953] rebuild_sched_domains_locked+0x119/0x8a0
[32120.622960] cpuset_update_active_cpus+0x344/0x760
[32120.622965] ? __pfx_sched_cpu_activate+0x10/0x10
[32120.622970] sched_cpu_activate+0xdd/0xf0
[32120.622973] cpuhp_invoke_callback+0x2d2/0x470
[32120.622979] ? __pfx_virtnet_cpu_online+0x10/0x10
[32120.622986] cpuhp_thread_fun+0x8f/0x150
[32120.622990] smpboot_thread_fn+0xdd/0x1d0
[32120.622994] ? __pfx_smpboot_thread_fn+0x10/0x10
[32120.622997] kthread+0xd6/0x100
[32120.623001] ? __pfx_kthread+0x10/0x10
[32120.623003] ret_from_fork+0x34/0x50
[32120.623008] ? __pfx_kthread+0x10/0x10
[32120.623010] ret_from_fork_asm+0x1a/0x30
[32120.623017] </TASK>
[32120.623823] Sending NMI from CPU 7 to CPUs 12:
[32120.623836] NMI backtrace for cpu 12
[32120.623841] CPU: 12 UID: 0 PID: 26816 Comm: kworker/12:1 Not tainted 6.11.0-next-20240927-dirty #2148
[32120.623845] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
[32120.623848] Workqueue: 0x0 (mm_percpu_wq)
[32120.623853] RIP: 0010:ktime_get+0xf/0xd0
[32120.623860] Code: 82 66 2e 0f 1f 84 00 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 0f 1f 44 00 00 8b 05 4d ad b6 01 <55> 53 85 c0 0f 85 97 00 00 00 8b 2d 91 f8 4b 02 40 f6 c5 01 75 65
[32120.623862] RSP: 0000:ffffb7b808523df0 EFLAGS: 00000002
[32120.623865] RAX: 0000000000000000 RBX: ffff9c6f1f52cca8 RCX: 0000000000000017
[32120.623867] RDX: 0000000000000000 RSI: ffff9c6f1f52cca8 RDI: ffff9c6f1f52cca8
[32120.623868] RBP: 00001d4142bdf7fd R08: 00001d36c1b190e5 R09: 0000000000000001
[32120.623870] R10: 0000000000000000 R11: 0000000000000006 R12: ffff9c6f1f52c400
[32120.623871] R13: ffff9c6f1f52c400 R14: ffffffff9702a320 R15: ffff9c6f047a2b80
[32120.623875] FS: 0000000000000000(0000) GS:ffff9c6f1f500000(0000) knlGS:0000000000000000
[32120.623876] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[32120.623878] CR2: 0000000000000000 CR3: 000000001342e000 CR4: 00000000000006f0
[32120.623880] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[32120.623881] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[32120.623882] Call Trace:
[32120.623885] <NMI>
[32120.623888] ? nmi_cpu_backtrace+0x87/0xf0
[32120.623896] ? nmi_cpu_backtrace_handler+0x11/0x20
[32120.623903] ? nmi_handle+0x62/0x150
[32120.623910] ? default_do_nmi+0x41/0x100
[32120.623914] ? exc_nmi+0xe0/0x110
[32120.623917] ? end_repeat_nmi+0xf/0x53
[32120.623923] ? ktime_get+0xf/0xd0
[32120.623925] ? ktime_get+0xf/0xd0
[32120.623927] ? ktime_get+0xf/0xd0
[32120.623930] </NMI>
[32120.623930] <TASK>
[32120.623931] start_dl_timer+0x50/0xf0
[32120.623935] update_curr_dl_se+0x85/0x1b0
[32120.623940] pick_task_dl+0x45/0xa0
[32120.623943] __schedule+0x49f/0x8f0
[32120.623947] ? queue_delayed_work_on+0x58/0x60
[32120.623951] schedule+0x27/0xd0
[32120.623953] worker_thread+0x1a7/0x3b0
[32120.623958] ? __pfx_worker_thread+0x10/0x10
[32120.623962] kthread+0xd6/0x100
[32120.623966] ? __pfx_kthread+0x10/0x10
[32120.623968] ret_from_fork+0x34/0x50
[32120.623974] ? __pfx_kthread+0x10/0x10
[32120.623976] ret_from_fork_asm+0x1a/0x30
[32120.623984] </TASK>
[32120.624832] rcu: rcu_preempt kthread starved for 10506 jiffies! g18282245 f0x0 RCU_GP_DOING_FQS(6) ->state=0x0 ->cpu=0
[32120.714332] rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
[32120.715968] rcu: RCU grace-period kthread stack dump:
[32120.716880] task:rcu_preempt state:R running task stack:14752 pid:16 tgid:16 ppid:2 flags:0x00004000
[32120.718830] Call Trace:
[32120.719276] <TASK>
[32120.719671] ? __schedule+0x3f0/0x8f0
[32120.720336] ? lock_timer_base+0x4b/0x70
[32120.721052] ? resched_cpu+0x35/0x80
[32120.721706] ? force_qs_rnp+0x237/0x2b0
[32120.722398] ? __pfx_rcu_watching_snap_recheck+0x10/0x10
[32120.723361] ? rcu_gp_fqs_loop+0x39a/0x5e0
[32120.724105] ? rcu_gp_kthread+0x18f/0x240
[32120.724838] ? __pfx_rcu_gp_kthread+0x10/0x10
[32120.725627] ? kthread+0xd6/0x100
[32120.726229] ? __pfx_kthread+0x10/0x10
[32120.726912] ? ret_from_fork+0x34/0x50
[32120.727598] ? __pfx_kthread+0x10/0x10
[32120.728271] ? ret_from_fork_asm+0x1a/0x30
[32120.729017] </TASK>
[32120.729418] rcu: Stack dump where RCU GP kthread last ran:
[32120.730409] Sending NMI from CPU 7 to CPUs 0:
[32120.731222] NMI backtrace for cpu 0
[32120.731228] CPU: 0 UID: 0 PID: 16 Comm: rcu_preempt Not tainted 6.11.0-next-20240927-dirty #2148
[32120.731233] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
[32120.731235] RIP: 0010:queued_spin_lock_slowpath+0x8e/0x2b0
[32120.731244] Code: 74 12 0f b6 45 00 84 c0 74 0a f3 90 0f b6 45 00 84 c0 75 f6 b8 01 00 00 00 66 89 45 00 5b 5d 41 5c 41 5d c3 cc cc cc cc f3 90 <eb> 89 8b 37 b8 00 02 00 00 81 fe 00 01 00 00 74 07 eb 99 83 e8 01
[32120.731248] RSP: 0000:ffffb7b80008bdd8 EFLAGS: 00000002
[32120.731251] RAX: 0000000000000001 RBX: 000000000000000c RCX: 0000000000000000
[32120.731253] RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffff9c6f1f52c400
[32120.731254] RBP: ffff9c6f1f52c400 R08: ffff9c6f1f52d298 R09: 000000000000007d
[32120.731256] R10: 0000000000000001 R11: 0000000000000000 R12: 0000000000000286
[32120.731258] R13: 0000000000000000 R14: ffffffff97341640 R15: 000000000000004c
[32120.731262] FS: 0000000000000000(0000) GS:ffff9c6f1f200000(0000) knlGS:0000000000000000
[32120.731264] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[32120.731266] CR2: ffff9c6f14201000 CR3: 0000000001154000 CR4: 00000000000006f0
[32120.731267] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[32120.731268] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[32120.731270] Call Trace:
[32120.731273] <NMI>
[32120.731275] ? nmi_cpu_backtrace+0x87/0xf0
[32120.731281] ? nmi_cpu_backtrace_handler+0x11/0x20
[32120.731287] ? nmi_handle+0x62/0x150
[32120.731293] ? default_do_nmi+0x41/0x100
[32120.731297] ? exc_nmi+0xe0/0x110
[32120.731299] ? end_repeat_nmi+0xf/0x53
[32120.731304] ? queued_spin_lock_slowpath+0x8e/0x2b0
[32120.731307] ? queued_spin_lock_slowpath+0x8e/0x2b0
[32120.731310] ? queued_spin_lock_slowpath+0x8e/0x2b0
[32120.731313] </NMI>
[32120.731314] <TASK>
[32120.731315] resched_cpu+0x35/0x80
[32120.731320] force_qs_rnp+0x237/0x2b0
[32120.731326] ? __pfx_rcu_watching_snap_recheck+0x10/0x10
[32120.731330] rcu_gp_fqs_loop+0x39a/0x5e0
[32120.731334] rcu_gp_kthread+0x18f/0x240
[32120.731338] ? __pfx_rcu_gp_kthread+0x10/0x10
[32120.731341] kthread+0xd6/0x100
[32120.731345] ? __pfx_kthread+0x10/0x10
[32120.731347] ret_from_fork+0x34/0x50
[32120.731352] ? __pfx_kthread+0x10/0x10
[32120.731354] ret_from_fork_asm+0x1a/0x30
[32120.731360] </TASK>
------------------------------------------------------------------------
This second stall also continues long enough to trigger another
stall warning an additional 63 seconds in.
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-03 12:28 ` Peter Zijlstra
@ 2024-10-03 12:45 ` Paul E. McKenney
2024-10-03 14:22 ` Peter Zijlstra
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-10-03 12:45 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: vschneid, linux-kernel, sfr, linux-next, kernel-team
On Thu, Oct 03, 2024 at 02:28:24PM +0200, Peter Zijlstra wrote:
> On Thu, Oct 03, 2024 at 11:27:07AM +0200, Peter Zijlstra wrote:
> > On Thu, Oct 03, 2024 at 10:47:43AM +0200, Peter Zijlstra wrote:
> > > On Thu, Oct 03, 2024 at 10:40:39AM +0200, Peter Zijlstra wrote:
> > > > On Wed, Aug 21, 2024 at 02:57:16PM -0700, Paul E. McKenney wrote:
> > > >
> > > > > My reproducer on the two-socket 40-core 80-HW-thread systems is:
> > > > >
> > > > > tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 1m --configs "50*TREE03" --trust-make
> > > > >
> > > >
> > > > This gets me a very long stream of:
> > > >
> > > > Results directory: /usr/src/linux-rcu/tools/testing/selftests/rcutorture/res/2024.10.03-09.30.33
> > > > tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 1m --configs 50*TREE03 --trust-make
> > > > TREE03 -------
> > > > QEMU error, output:
> > > > cat: /usr/src/linux-rcu/tools/testing/selftests/rcutorture/res/2024.10.03-09.30.33/TREE03/qemu-output: No such file or directory
> > > > TREE03.10 -------
> > > > QEMU error, output:
> > > > cat: /usr/src/linux-rcu/tools/testing/selftests/rcutorture/res/2024.10.03-09.30.33/TREE03.10/qemu-output: No such file or directory
> > > > ...
> > > >
> > > >
> > > > Did I not do it right?
> > >
> > > Urgh, for some reason my machine doesn't auto load kvm_intel.ko and then
> > > proceeds to not do anything useful.. Let me try again.
> >
> > Works a ton better now, obviously no splat yet :/
>
I have by now run many hundreds of 1m TREE03 instances, and not yet seen
anything. Surely I'm doing it wrong?
I ran 100*TREE03 for 18 hours each, and got 23 instances of *something*
happening (and I need to suppress stalls on the repeat). One of the
earlier bugs happened early, but sadly not this one.
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-03 12:45 ` Paul E. McKenney
@ 2024-10-03 14:22 ` Peter Zijlstra
2024-10-03 16:04 ` Paul E. McKenney
0 siblings, 1 reply; 67+ messages in thread
From: Peter Zijlstra @ 2024-10-03 14:22 UTC (permalink / raw)
To: Paul E. McKenney; +Cc: vschneid, linux-kernel, sfr, linux-next, kernel-team
On Thu, Oct 03, 2024 at 05:45:47AM -0700, Paul E. McKenney wrote:
> I ran 100*TREE03 for 18 hours each, and got 23 instances of *something*
> happening (and I need to suppress stalls on the repeat). One of the
> earlier bugs happened early, but sadly not this one.
Damn, I don't have the amount of CPU hours available you mention in your
later email. I'll just go up the rounds to 20 minutes and see if
something wants to go bang before I have to shut down the noise
pollution for the day...
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-03 14:22 ` Peter Zijlstra
@ 2024-10-03 16:04 ` Paul E. McKenney
2024-10-03 18:50 ` Peter Zijlstra
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-10-03 16:04 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: vschneid, linux-kernel, sfr, linux-next, kernel-team
On Thu, Oct 03, 2024 at 04:22:40PM +0200, Peter Zijlstra wrote:
> On Thu, Oct 03, 2024 at 05:45:47AM -0700, Paul E. McKenney wrote:
>
> > I ran 100*TREE03 for 18 hours each, and got 23 instances of *something*
> > happening (and I need to suppress stalls on the repeat). One of the
> > earlier bugs happened early, but sadly not this one.
>
> Damn, I don't have the amount of CPU hours available you mention in your
> later email. I'll just go up the rounds to 20 minutes and see if
> something wants to go bang before I have to shut down the noise
> pollution for the day...
Indeed, this was one reason I was soliciting debug patches. ;-)
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-03 16:04 ` Paul E. McKenney
@ 2024-10-03 18:50 ` Peter Zijlstra
2024-10-03 19:12 ` Paul E. McKenney
` (2 more replies)
0 siblings, 3 replies; 67+ messages in thread
From: Peter Zijlstra @ 2024-10-03 18:50 UTC (permalink / raw)
To: Paul E. McKenney; +Cc: vschneid, linux-kernel, sfr, linux-next, kernel-team
On Thu, Oct 03, 2024 at 09:04:30AM -0700, Paul E. McKenney wrote:
> On Thu, Oct 03, 2024 at 04:22:40PM +0200, Peter Zijlstra wrote:
> > On Thu, Oct 03, 2024 at 05:45:47AM -0700, Paul E. McKenney wrote:
> >
> > > I ran 100*TREE03 for 18 hours each, and got 23 instances of *something*
> > > happening (and I need to suppress stalls on the repeat). One of the
> > > earlier bugs happened early, but sadly not this one.
> >
> > Damn, I don't have the amount of CPU hours available you mention in your
> > later email. I'll just go up the rounds to 20 minutes and see if
> > something wants to go bang before I have to shut down the noise
> > pollution for the day...
>
> Indeed, this was one reason I was soliciting debug patches. ;-)
Sooo... I was contemplating if something like the below might perhaps
help some. It's a bit of a mess (I'll try and clean up if/when it
actually proves to work), but it compiles and survives a handful of 1m
runs.
I'll try and give it more runs tomorrow when I can power up the big
machines again -- unless you've already told me it's crap by then :-)
---
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 43e453ab7e20..1fe850788195 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7010,20 +7010,20 @@ int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flag
}
EXPORT_SYMBOL(default_wake_function);
-void __setscheduler_prio(struct task_struct *p, int prio)
+const struct sched_class *__setscheduler_class(struct task_struct *p, int prio)
{
if (dl_prio(prio))
- p->sched_class = &dl_sched_class;
- else if (rt_prio(prio))
- p->sched_class = &rt_sched_class;
+ return &dl_sched_class;
+
+ if (rt_prio(prio))
+ return &rt_sched_class;
+
#ifdef CONFIG_SCHED_CLASS_EXT
- else if (task_should_scx(p))
- p->sched_class = &ext_sched_class;
+ if (task_should_scx(p))
+ return &ext_sched_class;
#endif
- else
- p->sched_class = &fair_sched_class;
- p->prio = prio;
+ return &fair_sched_class;
}
#ifdef CONFIG_RT_MUTEXES
@@ -7069,7 +7069,7 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
{
int prio, oldprio, queued, running, queue_flag =
DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK;
- const struct sched_class *prev_class;
+ const struct sched_class *prev_class, *next_class;
struct rq_flags rf;
struct rq *rq;
@@ -7127,6 +7127,11 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
queue_flag &= ~DEQUEUE_MOVE;
prev_class = p->sched_class;
+ next_class = __setscheduler_class(p, prio);
+
+ if (prev_class != next_class && p->se.sched_delayed)
+ dequeue_task(rq, p, DEQUEUE_SLEEP | DEQUEUE_DELAYED | DEQUEUE_NOCLOCK);
+
queued = task_on_rq_queued(p);
running = task_current(rq, p);
if (queued)
@@ -7164,7 +7169,9 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
p->rt.timeout = 0;
}
- __setscheduler_prio(p, prio);
+ p->sched_class = next_class;
+ p->prio = prio;
+
check_class_changing(rq, p, prev_class);
if (queued)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ab497fafa7be..c157d4860a3b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -13177,22 +13177,6 @@ static void attach_task_cfs_rq(struct task_struct *p)
static void switched_from_fair(struct rq *rq, struct task_struct *p)
{
detach_task_cfs_rq(p);
- /*
- * Since this is called after changing class, this is a little weird
- * and we cannot use DEQUEUE_DELAYED.
- */
- if (p->se.sched_delayed) {
- /* First, dequeue it from its new class' structures */
- dequeue_task(rq, p, DEQUEUE_NOCLOCK | DEQUEUE_SLEEP);
- /*
- * Now, clean up the fair_sched_class side of things
- * related to sched_delayed being true and that wasn't done
- * due to the generic dequeue not using DEQUEUE_DELAYED.
- */
- finish_delayed_dequeue_entity(&p->se);
- p->se.rel_deadline = 0;
- __block_task(rq, p);
- }
}
static void switched_to_fair(struct rq *rq, struct task_struct *p)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index b1c3588a8f00..fba524c81c63 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3797,7 +3797,7 @@ static inline int rt_effective_prio(struct task_struct *p, int prio)
extern int __sched_setscheduler(struct task_struct *p, const struct sched_attr *attr, bool user, bool pi);
extern int __sched_setaffinity(struct task_struct *p, struct affinity_context *ctx);
-extern void __setscheduler_prio(struct task_struct *p, int prio);
+extern const struct sched_class *__setscheduler_class(struct task_struct *p, int prio);
extern void set_load_weight(struct task_struct *p, bool update_load);
extern void enqueue_task(struct rq *rq, struct task_struct *p, int flags);
extern bool dequeue_task(struct rq *rq, struct task_struct *p, int flags);
diff --git a/kernel/sched/syscalls.c b/kernel/sched/syscalls.c
index aa70beee9895..0470bcc3d204 100644
--- a/kernel/sched/syscalls.c
+++ b/kernel/sched/syscalls.c
@@ -529,7 +529,7 @@ int __sched_setscheduler(struct task_struct *p,
{
int oldpolicy = -1, policy = attr->sched_policy;
int retval, oldprio, newprio, queued, running;
- const struct sched_class *prev_class;
+ const struct sched_class *prev_class, *next_class;
struct balance_callback *head;
struct rq_flags rf;
int reset_on_fork;
@@ -706,6 +706,12 @@ int __sched_setscheduler(struct task_struct *p,
queue_flags &= ~DEQUEUE_MOVE;
}
+ prev_class = p->sched_class;
+ next_class = __setscheduler_class(p, newprio);
+
+ if (prev_class != next_class && p->se.sched_delayed)
+ dequeue_task(rq, p, DEQUEUE_SLEEP | DEQUEUE_DELAYED | DEQUEUE_NOCLOCK);
+
queued = task_on_rq_queued(p);
running = task_current(rq, p);
if (queued)
@@ -713,11 +719,10 @@ int __sched_setscheduler(struct task_struct *p,
if (running)
put_prev_task(rq, p);
- prev_class = p->sched_class;
-
if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
__setscheduler_params(p, attr);
- __setscheduler_prio(p, newprio);
+ p->sched_class = next_class;
+ p->prio = newprio;
}
__setscheduler_uclamp(p, attr);
check_class_changing(rq, p, prev_class);
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-03 18:50 ` Peter Zijlstra
@ 2024-10-03 19:12 ` Paul E. McKenney
2024-10-04 13:22 ` Paul E. McKenney
2024-10-04 13:35 ` Peter Zijlstra
2 siblings, 0 replies; 67+ messages in thread
From: Paul E. McKenney @ 2024-10-03 19:12 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: vschneid, linux-kernel, sfr, linux-next, kernel-team
On Thu, Oct 03, 2024 at 08:50:37PM +0200, Peter Zijlstra wrote:
> On Thu, Oct 03, 2024 at 09:04:30AM -0700, Paul E. McKenney wrote:
> > On Thu, Oct 03, 2024 at 04:22:40PM +0200, Peter Zijlstra wrote:
> > > On Thu, Oct 03, 2024 at 05:45:47AM -0700, Paul E. McKenney wrote:
> > >
> > > > I ran 100*TREE03 for 18 hours each, and got 23 instances of *something*
> > > > happening (and I need to suppress stalls on the repeat). One of the
> > > > earlier bugs happened early, but sadly not this one.
> > >
> > > Damn, I don't have the amount of CPU hours available you mention in your
> > > later email. I'll just go up the rounds to 20 minutes and see if
> > > something wants to go bang before I have to shut down the noise
> > > pollution for the day...
> >
> > Indeed, this was one reason I was soliciting debug patches. ;-)
>
> Sooo... I was contemplating if something like the below might perhaps
> help some. It's a bit of a mess (I'll try and clean up if/when it
> actually proves to work), but it compiles and survives a handful of 1m
> runs.
Thank you very much! I will give it a spin.
Unless you tell me otherwise, I will allow the current test to complete
(about 12 hours from now), collect any data from it, then start this one.
> I'll try and give it more runs tomorrow when I can power up the big
> machines again -- unless you've already told me it's crap by then :-)
18-hour runs here, so even if I immediately kill the old run and start the
new one, I won't know until 6AM Pacific Time on Friday at the earliest. ;-)
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-03 18:50 ` Peter Zijlstra
2024-10-03 19:12 ` Paul E. McKenney
@ 2024-10-04 13:22 ` Paul E. McKenney
2024-10-04 13:35 ` Peter Zijlstra
2 siblings, 0 replies; 67+ messages in thread
From: Paul E. McKenney @ 2024-10-04 13:22 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: vschneid, linux-kernel, sfr, linux-next, kernel-team
On Thu, Oct 03, 2024 at 08:50:37PM +0200, Peter Zijlstra wrote:
> On Thu, Oct 03, 2024 at 09:04:30AM -0700, Paul E. McKenney wrote:
> > On Thu, Oct 03, 2024 at 04:22:40PM +0200, Peter Zijlstra wrote:
> > > On Thu, Oct 03, 2024 at 05:45:47AM -0700, Paul E. McKenney wrote:
> > >
> > > > I ran 100*TREE03 for 18 hours each, and got 23 instances of *something*
> > > > happening (and I need to suppress stalls on the repeat). One of the
> > > > earlier bugs happened early, but sadly not this one.
> > >
> > > Damn, I don't have the amount of CPU hours available you mention in your
> > > later email. I'll just go up the rounds to 20 minutes and see if
> > > something wants to go bang before I have to shut down the noise
> > > pollution for the day...
> >
> > Indeed, this was one reason I was soliciting debug patches. ;-)
>
> Sooo... I was contemplating if something like the below might perhaps
> help some. It's a bit of a mess (I'll try and clean up if/when it
> actually proves to work), but it compiles and survives a handful of 1m
> runs.
And here is the ftrace dump from one of the failures in the past
18-hour run. Idiot here re-enabled RCU CPU stall warnings after doing
ftrace_dump(), forgetting the asynchronous nature of new-age printk(),
so I don't have the CPU number that the failure happened on.
Off to test your new patch...
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-03 18:50 ` Peter Zijlstra
2024-10-03 19:12 ` Paul E. McKenney
2024-10-04 13:22 ` Paul E. McKenney
@ 2024-10-04 13:35 ` Peter Zijlstra
2024-10-06 20:44 ` Paul E. McKenney
2 siblings, 1 reply; 67+ messages in thread
From: Peter Zijlstra @ 2024-10-04 13:35 UTC (permalink / raw)
To: Paul E. McKenney; +Cc: vschneid, linux-kernel, sfr, linux-next, kernel-team
On Thu, Oct 03, 2024 at 08:50:37PM +0200, Peter Zijlstra wrote:
> On Thu, Oct 03, 2024 at 09:04:30AM -0700, Paul E. McKenney wrote:
> > On Thu, Oct 03, 2024 at 04:22:40PM +0200, Peter Zijlstra wrote:
> > > On Thu, Oct 03, 2024 at 05:45:47AM -0700, Paul E. McKenney wrote:
> > >
> > > > I ran 100*TREE03 for 18 hours each, and got 23 instances of *something*
> > > > happening (and I need to suppress stalls on the repeat). One of the
> > > > earlier bugs happened early, but sadly not this one.
> > >
> > > Damn, I don't have the amount of CPU hours available you mention in your
> > > later email. I'll just go up the rounds to 20 minutes and see if
> > > something wants to go bang before I have to shut down the noise
> > > pollution for the day...
> >
> > Indeed, this was one reason I was soliciting debug patches. ;-)
>
> Sooo... I was contemplating if something like the below might perhaps
> help some. It's a bit of a mess (I'll try and clean up if/when it
> actually proves to work), but it compiles and survives a handful of 1m
> runs.
>
> I'll try and give it more runs tomorrow when I can power up the big
> machines again -- unless you've already told me it's crap by then :-)
I've given it 200*20m and the worst I got was one dl-server double
enqueue. I'll go stare at that, I suppose.
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-04 13:35 ` Peter Zijlstra
@ 2024-10-06 20:44 ` Paul E. McKenney
2024-10-07 9:34 ` Peter Zijlstra
2024-10-08 11:11 ` Peter Zijlstra
0 siblings, 2 replies; 67+ messages in thread
From: Paul E. McKenney @ 2024-10-06 20:44 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: vschneid, linux-kernel, sfr, linux-next, kernel-team
On Fri, Oct 04, 2024 at 03:35:32PM +0200, Peter Zijlstra wrote:
> On Thu, Oct 03, 2024 at 08:50:37PM +0200, Peter Zijlstra wrote:
> > On Thu, Oct 03, 2024 at 09:04:30AM -0700, Paul E. McKenney wrote:
> > > On Thu, Oct 03, 2024 at 04:22:40PM +0200, Peter Zijlstra wrote:
> > > > On Thu, Oct 03, 2024 at 05:45:47AM -0700, Paul E. McKenney wrote:
> > > >
> > > > > I ran 100*TREE03 for 18 hours each, and got 23 instances of *something*
> > > > > happening (and I need to suppress stalls on the repeat). One of the
> > > > > earlier bugs happened early, but sadly not this one.
> > > >
> > > > Damn, I don't have the amount of CPU hours available you mention in your
> > > > later email. I'll just go up the rounds to 20 minutes and see if
> > > > something wants to go bang before I have to shut down the noise
> > > > pollution for the day...
> > >
> > > Indeed, this was one reason I was soliciting debug patches. ;-)
> >
> > Sooo... I was contemplating if something like the below might perhaps
> > help some. It's a bit of a mess (I'll try and clean up if/when it
> > actually proves to work), but it compiles and survives a handful of 1m
> > runs.
> >
> > I'll try and give it more runs tomorrow when I can power up the big
> > machines again -- unless you've already told me it's crap by then :-)
>
> I've given it 200*20m and the worst I got was one dl-server double
> enqueue. I'll go stare at that I suppose.
With your patch, I got 24 failures out of 100 TREE03 runs of 18 hours
each. The failures were different, though, mostly involving boost
failures in which RCU priority boosting didn't actually result in the
low-priority readers getting boosted. An ftrace/event-trace dump of
such a situation is shown below.
There were also a number of "sched: DL replenish lagged too much"
messages, but it looks like this was a symptom of the ftrace dump.
Given that this now involves priority boosting, I am trying 400*TREE03
with each guest OS restricted to four CPUs to see if that makes things
happen more quickly, and will let you know how this goes.
Any other debug I should apply?
Thanx, Paul
------------------------------------------------------------------------
[10363.660761] Dumping ftrace buffer:
[10363.661231] ---------------------------------
[10363.661789] CPU:1 [LOST 12570412 EVENTS]
[10363.661789] rcu_tort-155 1d..2. 10347723897us : sched_switch: prev_comm=rcu_torture_rea prev_pid=155 prev_prio=97 prev_state=R+ ==> next_comm=rcub/7 next_pid=17 next_prio=97
[10363.664164] rcub/7-17 1d..2. 10347723905us : sched_switch: prev_comm=rcub/7 prev_pid=17 prev_prio=97 prev_state=S ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10363.665965] rcuc/1-25 1d..2. 10347723908us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=ksoftirqd/1 next_pid=26 next_prio=97
[10363.667819] ksoftirq-26 1d..2. 10347723909us : sched_switch: prev_comm=ksoftirqd/1 prev_pid=26 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10363.669784] rcu_tort-159 1dNh5. 10347724881us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10363.670971] rcu_tort-159 1dN.5. 10347724884us : sched_wakeup: comm=ksoftirqd/1 pid=26 prio=97 target_cpu=001
[10363.672210] rcu_tort-159 1d..2. 10347724942us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10363.674130] rcuc/1-25 1d..2. 10347724945us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=ksoftirqd/1 next_pid=26 next_prio=97
[10363.675971] ksoftirq-26 1d..2. 10347724947us : sched_switch: prev_comm=ksoftirqd/1 prev_pid=26 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10363.677935] rcu_tort-159 1DNh3. 10347726881us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10363.679125] rcu_tort-159 1d..2. 10347726884us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10363.681048] rcuc/1-25 1d..2. 10347726887us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10363.682964] rcu_tort-159 1DNh3. 10347727881us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10363.684180] rcu_tort-159 1d..2. 10347727887us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10363.686103] rcuc/1-25 1d..2. 10347727889us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10363.688026] rcu_tort-159 1dNh3. 10347728881us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10363.689219] rcu_tort-159 1d..2. 10347728884us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10363.691150] rcuc/1-25 1d..2. 10347728886us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10363.693076] rcu_tort-159 1dNh4. 10347729270us : sched_wakeup: comm=rcu_torture_wri pid=148 prio=120 target_cpu=001
[10363.694392] rcu_tort-159 1d..2. 10347729272us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_wri next_pid=148 next_prio=120
[10363.696437] rcu_tort-148 1d..2. 10347729276us : sched_switch: prev_comm=rcu_torture_wri prev_pid=148 prev_prio=120 prev_state=I ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10363.698467] rcu_tort-159 1dNh4. 10347729777us : sched_wakeup: comm=rcu_torture_wri pid=148 prio=120 target_cpu=001
[10363.699771] rcu_tort-159 1d..2. 10347729780us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_wri next_pid=148 next_prio=120
[10363.701821] rcu_tort-148 1dNh4. 10347729881us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10363.703018] rcu_tort-148 1d..2. 10347729884us : sched_switch: prev_comm=rcu_torture_wri prev_pid=148 prev_prio=120 prev_state=R+ ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10363.704940] rcuc/1-25 1d..2. 10347729886us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_wri next_pid=148 next_prio=120
[10363.706846] rcu_tort-148 1d..2. 10347730434us : sched_switch: prev_comm=rcu_torture_wri prev_pid=148 prev_prio=120 prev_state=I ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10363.708880] rcu_tort-159 1dN.4. 10347730882us : sched_wakeup: comm=ksoftirqd/1 pid=26 prio=97 target_cpu=001
[10363.710126] rcu_tort-159 1d..2. 10347730883us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=ksoftirqd/1 next_pid=26 next_prio=97
[10363.712107] ksoftirq-26 1d..2. 10347730886us : sched_switch: prev_comm=ksoftirqd/1 prev_pid=26 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10363.714095] rcu_tort-159 1d..2. 10347731423us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcu_exp_par_gp_ next_pid=18 next_prio=97
[10363.716147] rcu_exp_-18 1d..4. 10347731431us : sched_wakeup: comm=rcu_exp_gp_kthr pid=20 prio=97 target_cpu=008
[10363.717459] rcu_exp_-18 1d..2. 10347731435us : sched_switch: prev_comm=rcu_exp_par_gp_ prev_pid=18 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10363.719503] rcu_tort-159 1d..2. 10347731526us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcu_exp_par_gp_ next_pid=36 next_prio=97
[10363.721551] rcu_exp_-36 1d..4. 10347731532us : sched_wakeup: comm=rcu_exp_gp_kthr pid=20 prio=97 target_cpu=008
[10363.722853] rcu_exp_-36 1d..2. 10347731536us : sched_switch: prev_comm=rcu_exp_par_gp_ prev_pid=36 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10363.724906] rcu_tort-159 1d..2. 10347731568us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcu_exp_par_gp_ next_pid=18 next_prio=97
[10363.726968] rcu_exp_-18 1d..2. 10347731577us : sched_switch: prev_comm=rcu_exp_par_gp_ prev_pid=18 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10363.729019] rcu_tort-159 1d..2. 10347731595us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcu_exp_par_gp_ next_pid=92 next_prio=97
[10363.731308] rcu_exp_-92 1d..2. 10347731599us : sched_switch: prev_comm=rcu_exp_par_gp_ prev_pid=92 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10363.733371] rcu_tort-159 1d..2. 10347731779us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=158 next_prio=97
[10363.735456] rcu_tort-158 1d..2. 10347731797us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10363.737541] rcu_tort-159 1DNh4. 10347731882us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10363.738761] rcu_tort-159 1d..2. 10347731885us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10363.740720] rcuc/1-25 1d..2. 10347731888us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10363.742673] rcu_tort-158 1dNh3. 10347732881us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10363.743883] rcu_tort-158 1d..2. 10347732884us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10363.745848] rcuc/1-25 1d..2. 10347732887us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10363.747813] rcu_tort-159 1dN.3. 10347733882us : sched_wakeup: comm=ksoftirqd/1 pid=26 prio=97 target_cpu=001
[10363.749078] rcu_tort-159 1d..2. 10347733884us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=ksoftirqd/1 next_pid=26 next_prio=97
[10363.751101] ksoftirq-26 1d..2. 10347733888us : sched_switch: prev_comm=ksoftirqd/1 prev_pid=26 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10363.753098] rcu_tort-158 1d..2. 10347733893us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=I ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10363.755177] rcu_tort-159 1d..3. 10347733899us : dl_server_stop <-dequeue_entities
[10363.756172] rcu_tort-159 1d..3. 10347733906us : dl_server_start <-enqueue_task_fair
[10363.757172] rcu_tort-159 1d..2. 10347733908us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=I ==> next_comm=rcu_torture_rea next_pid=155 next_prio=139
[10363.759240] rcu_tort-155 1dNh4. 10347734882us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10363.760468] rcu_tort-155 1dN.4. 10347734885us : sched_wakeup: comm=ksoftirqd/1 pid=26 prio=97 target_cpu=001
[10363.761730] rcu_tort-155 1d..2. 10347734887us : sched_switch: prev_comm=rcu_torture_rea prev_pid=155 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10363.763733] rcuc/1-25 1d..2. 10347734890us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=ksoftirqd/1 next_pid=26 next_prio=97
[10363.765611] ksoftirq-26 1d..2. 10347734891us : sched_switch: prev_comm=ksoftirqd/1 prev_pid=26 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=155 next_prio=139
[10363.767612] rcu_tort-155 1DNh5. 10347735881us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10363.768825] rcu_tort-155 1d..2. 10347735885us : sched_switch: prev_comm=rcu_torture_rea prev_pid=155 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10363.770787] rcuc/1-25 1d..2. 10347735888us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=cpuhp/1 next_pid=23 next_prio=120
[10363.772636] cpuhp/1-23 1d..2. 10347735892us : sched_switch: prev_comm=cpuhp/1 prev_pid=23 prev_prio=120 prev_state=D ==> next_comm=rcu_torture_rea next_pid=155 next_prio=139
[10363.774605] rcu_tort-155 1dN.6. 10347735894us : sched_wakeup: comm=migration/1 pid=24 prio=0 target_cpu=001
[10363.775851] rcu_tort-155 1d..2. 10347735896us : sched_switch: prev_comm=rcu_torture_rea prev_pid=155 prev_prio=139 prev_state=R+ ==> next_comm=migration/1 next_pid=24 next_prio=0
[10363.777842] migratio-24 1d..4. 10347735898us : dl_server_stop <-dequeue_entities
[10363.778826] migratio-24 1d..2. 10347735904us : sched_switch: prev_comm=migration/1 prev_pid=24 prev_prio=0 prev_state=S ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10363.780728] <idle>-0 1dNh4. 10347736888us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10363.781930] <idle>-0 1dN.4. 10347736890us : sched_wakeup: comm=ksoftirqd/1 pid=26 prio=97 target_cpu=001
[10363.783190] <idle>-0 1d..2. 10347736892us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10363.785061] rcuc/1-25 1d..2. 10347736898us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=ksoftirqd/1 next_pid=26 next_prio=97
[10363.786946] ksoftirq-26 1d..2. 10347736899us : sched_switch: prev_comm=ksoftirqd/1 prev_pid=26 prev_prio=97 prev_state=S ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10363.788870] <idle>-0 1d..2. 10347736957us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=kworker/1:0 next_pid=18422 next_prio=120
[10363.790839] kworker/-18422 1d..4. 10347736965us : sched_wakeup: comm=rcu_torture_bar pid=193 prio=139 target_cpu=000
[10363.792170] kworker/-18422 1d..3. 10347736966us : dl_server_stop <-dequeue_entities
[10363.793160] kworker/-18422 1d..2. 10347736967us : sched_switch: prev_comm=kworker/1:0 prev_pid=18422 prev_prio=120 prev_state=I ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10363.795126] <idle>-0 1dNh4. 10347737886us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10363.796348] <idle>-0 1d..2. 10347737889us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10363.798218] rcuc/1-25 1d..2. 10347737891us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10363.800106] <idle>-0 1dNh4. 10347738885us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10363.801387] <idle>-0 1d..2. 10347738888us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10363.803219] rcuc/1-25 1d..2. 10347738891us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10363.805072] <idle>-0 1d.h4. 10347739243us : sched_wakeup: comm=rcu_torture_wri pid=148 prio=120 target_cpu=000
[10363.806403] <idle>-0 1dNh4. 10347739885us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10363.807610] <idle>-0 1d..2. 10347739888us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10363.809453] rcuc/1-25 1d..2. 10347739889us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10363.811456] <idle>-0 1dNh4. 10347740885us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10363.812866] <idle>-0 1d..2. 10347740888us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10363.814750] rcuc/1-25 1D..5. 10347740890us : dl_server_start <-enqueue_task_fair
[10363.815764] rcuc/1-25 1DN.4. 10347740891us : sched_wakeup: comm=cpuhp/1 pid=23 prio=120 target_cpu=001
[10363.816989] rcuc/1-25 1d..2. 10347740894us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=cpuhp/1 next_pid=23 next_prio=120
[10363.818842] cpuhp/1-23 1d..3. 10347741725us : dl_server_stop <-dequeue_entities
[10363.819814] cpuhp/1-23 1d..2. 10347741726us : sched_switch: prev_comm=cpuhp/1 prev_pid=23 prev_prio=120 prev_state=D ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10363.821679] <idle>-0 1dNh4. 10347741887us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10363.822877] <idle>-0 1d..2. 10347741890us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10363.824720] rcuc/1-25 1d..2. 10347741892us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10363.826574] <idle>-0 1d..2. 10347742453us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/1 next_pid=23 next_prio=120
[10363.828441] cpuhp/1-23 1d..3. 10347742474us : dl_server_stop <-dequeue_entities
[10363.829422] cpuhp/1-23 1d..2. 10347742475us : sched_switch: prev_comm=cpuhp/1 prev_pid=23 prev_prio=120 prev_state=D ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10363.831290] <idle>-0 1d..2. 10347742917us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/1 next_pid=23 next_prio=120
[10363.833165] cpuhp/1-23 1d..3. 10347742944us : dl_server_stop <-dequeue_entities
[10363.834139] cpuhp/1-23 1d..2. 10347742946us : sched_switch: prev_comm=cpuhp/1 prev_pid=23 prev_prio=120 prev_state=D ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10363.836021] <idle>-0 1d..2. 10347742969us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/1 next_pid=23 next_prio=120
[10363.837887] cpuhp/1-23 1dN.3. 10347743083us : sched_wakeup: comm=ksoftirqd/1 pid=26 prio=97 target_cpu=001
[10363.839158] cpuhp/1-23 1d..2. 10347743084us : sched_switch: prev_comm=cpuhp/1 prev_pid=23 prev_prio=120 prev_state=R+ ==> next_comm=ksoftirqd/1 next_pid=26 next_prio=97
[10363.841086] ksoftirq-26 1d..2. 10347743086us : sched_switch: prev_comm=ksoftirqd/1 prev_pid=26 prev_prio=97 prev_state=P ==> next_comm=cpuhp/1 next_pid=23 next_prio=120
[10363.842993] cpuhp/1-23 1dN.3. 10347743087us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10363.844212] cpuhp/1-23 1d..2. 10347743088us : sched_switch: prev_comm=cpuhp/1 prev_pid=23 prev_prio=120 prev_state=R+ ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10363.846107] rcuc/1-25 1d..2. 10347743089us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=P ==> next_comm=cpuhp/1 next_pid=23 next_prio=120
[10363.847967] cpuhp/1-23 1d..3. 10347743094us : dl_server_stop <-dequeue_entities
[10363.848968] cpuhp/1-23 1d..2. 10347743095us : sched_switch: prev_comm=cpuhp/1 prev_pid=23 prev_prio=120 prev_state=S ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10363.850869] <idle>-0 1d..2. 10347743107us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/1 next_pid=23 next_prio=120
[10363.852779] cpuhp/1-23 1d..3. 10347743111us : dl_server_stop <-dequeue_entities
[10363.853770] cpuhp/1-23 1d..2. 10347743112us : sched_switch: prev_comm=cpuhp/1 prev_pid=23 prev_prio=120 prev_state=P ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10363.855664] <idle>-0 1d..2. 10347743122us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=migration/1 next_pid=24 next_prio=0
[10363.857599] CPU:6 [LOST 11447803 EVENTS]
[10363.857599] kworker/-19976 6d..2. 10354298092us : sched_switch: prev_comm=kworker/6:2 prev_pid=19976 prev_prio=120 prev_state=D ==> next_comm=swapper/6 next_pid=0 next_prio=120
[10363.860014] <idle>-0 6dNh2. 10354298095us : sched_wakeup: comm=rcu_exp_gp_kthr pid=20 prio=97 target_cpu=006
[10363.861315] <idle>-0 6d..2. 10354298097us : sched_switch: prev_comm=swapper/6 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcu_exp_gp_kthr next_pid=20 next_prio=97
[10363.863294] rcu_exp_-20 6d..2. 10354298104us : sched_switch: prev_comm=rcu_exp_gp_kthr prev_pid=20 prev_prio=97 prev_state=D ==> next_comm=swapper/6 next_pid=0 next_prio=120
[10363.865256] <idle>-0 6dNh2. 10354298111us : sched_wakeup: comm=rcu_exp_gp_kthr pid=20 prio=97 target_cpu=006
[10363.866598] <idle>-0 6d..2. 10354298112us : sched_switch: prev_comm=swapper/6 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcu_exp_gp_kthr next_pid=20 next_prio=97
[10363.868608] rcu_exp_-20 6d..5. 10354298114us : dl_server_start <-enqueue_task_fair
[10363.869615] rcu_exp_-20 6dN.4. 10354298115us : sched_wakeup: comm=kworker/6:2 pid=19976 prio=120 target_cpu=006
[10363.870925] rcu_exp_-20 6d..2. 10354298118us : sched_switch: prev_comm=rcu_exp_gp_kthr prev_pid=20 prev_prio=97 prev_state=S ==> next_comm=kworker/6:2 next_pid=19976 next_prio=120
[10363.872948] kworker/-19976 6d..3. 10354298119us : dl_server_stop <-dequeue_entities
[10363.873940] kworker/-19976 6d..2. 10354298123us : sched_switch: prev_comm=kworker/6:2 prev_pid=19976 prev_prio=120 prev_state=I ==> next_comm=swapper/6 next_pid=0 next_prio=120
[10363.875928] <idle>-0 6d.h3. 10354298479us : dl_server_start <-enqueue_task_fair
[10363.876955] <idle>-0 6dNh2. 10354298481us : sched_wakeup: comm=rcu_torture_fak pid=151 prio=139 target_cpu=006
[10363.878287] <idle>-0 6d..2. 10354298482us : sched_switch: prev_comm=swapper/6 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcu_torture_fak next_pid=151 next_prio=139
[10363.880268] rcu_tort-151 6d..3. 10354298485us : dl_server_stop <-dequeue_entities
[10363.881262] rcu_tort-151 6d..2. 10354298489us : sched_switch: prev_comm=rcu_torture_fak prev_pid=151 prev_prio=139 prev_state=I ==> next_comm=swapper/6 next_pid=0 next_prio=120
[10363.883949] <idle>-0 6dN.4. 10354298892us : sched_wakeup: comm=ksoftirqd/6 pid=60 prio=97 target_cpu=006
[10363.885869] <idle>-0 6d..2. 10354298893us : sched_switch: prev_comm=swapper/6 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=ksoftirqd/6 next_pid=60 next_prio=97
[10363.888185] ksoftirq-60 6d..2. 10354298901us : sched_switch: prev_comm=ksoftirqd/6 prev_pid=60 prev_prio=97 prev_state=S ==> next_comm=swapper/6 next_pid=0 next_prio=120
[10363.890094] <idle>-0 6dNh4. 10354299889us : sched_wakeup: comm=rcuc/6 pid=59 prio=97 target_cpu=006
[10363.891310] <idle>-0 6d..2. 10354299893us : sched_switch: prev_comm=swapper/6 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/6 next_pid=59 next_prio=97
[10363.893182] rcuc/6-59 6d..2. 10354299899us : sched_switch: prev_comm=rcuc/6 prev_pid=59 prev_prio=97 prev_state=S ==> next_comm=swapper/6 next_pid=0 next_prio=120
[10363.895050] <idle>-0 6dNh4. 10354300890us : sched_wakeup: comm=rcuc/6 pid=59 prio=97 target_cpu=006
[10363.896255] <idle>-0 6d..2. 10354300894us : sched_switch: prev_comm=swapper/6 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/6 next_pid=59 next_prio=97
[10363.898162] rcuc/6-59 6d..2. 10354300899us : sched_switch: prev_comm=rcuc/6 prev_pid=59 prev_prio=97 prev_state=S ==> next_comm=swapper/6 next_pid=0 next_prio=120
[10363.900032] <idle>-0 6dNh4. 10354302889us : sched_wakeup: comm=rcuc/6 pid=59 prio=97 target_cpu=006
[10363.901248] <idle>-0 6d..2. 10354302893us : sched_switch: prev_comm=swapper/6 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/6 next_pid=59 next_prio=97
[10363.903132] rcuc/6-59 6d..3. 10354302906us : dl_server_start <-enqueue_task_fair
[10363.904144] rcuc/6-59 6d..2. 10354302908us : sched_switch: prev_comm=rcuc/6 prev_pid=59 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=154 next_prio=139
[10363.906107] rcu_tort-154 6d.h2. 10354302912us : sched_wakeup: comm=cpuhp/6 pid=57 prio=120 target_cpu=006
[10363.907348] rcu_tort-154 6DNh3. 10354303884us : sched_wakeup: comm=rcuc/6 pid=59 prio=97 target_cpu=006
[10363.908585] rcu_tort-154 6d..2. 10354303889us : sched_switch: prev_comm=rcu_torture_rea prev_pid=154 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/6 next_pid=59 next_prio=97
[10363.910547] rcuc/6-59 6d..2. 10354303892us : sched_switch: prev_comm=rcuc/6 prev_pid=59 prev_prio=97 prev_state=S ==> next_comm=cpuhp/6 next_pid=57 next_prio=120
[10363.912408] cpuhp/6-57 6d..2. 10354303896us : sched_switch: prev_comm=cpuhp/6 prev_pid=57 prev_prio=120 prev_state=D ==> next_comm=rcu_torture_rea next_pid=154 next_prio=139
[10363.914357] rcu_tort-154 6dN.6. 10354303899us : sched_wakeup: comm=migration/6 pid=58 prio=0 target_cpu=006
[10363.915642] rcu_tort-154 6d..2. 10354303901us : sched_switch: prev_comm=rcu_torture_rea prev_pid=154 prev_prio=139 prev_state=R+ ==> next_comm=migration/6 next_pid=58 next_prio=0
[10363.917653] migratio-58 6d..4. 10354303903us : dl_server_stop <-dequeue_entities
[10363.918641] migratio-58 6d..2. 10354303911us : sched_switch: prev_comm=migration/6 prev_pid=58 prev_prio=0 prev_state=S ==> next_comm=swapper/6 next_pid=0 next_prio=120
[10363.920567] <idle>-0 6dN.4. 10354304891us : sched_wakeup: comm=ksoftirqd/6 pid=60 prio=97 target_cpu=006
[10363.921839] <idle>-0 6d..2. 10354304893us : sched_switch: prev_comm=swapper/6 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=ksoftirqd/6 next_pid=60 next_prio=97
[10363.923765] ksoftirq-60 6d..2. 10354304898us : sched_switch: prev_comm=ksoftirqd/6 prev_pid=60 prev_prio=97 prev_state=S ==> next_comm=swapper/6 next_pid=0 next_prio=120
[10363.925701] <idle>-0 6d.h4. 10354306531us : sched_wakeup: comm=rcu_torture_wri pid=148 prio=120 target_cpu=000
[10363.927023] <idle>-0 6dNh4. 10354306888us : sched_wakeup: comm=rcuc/6 pid=59 prio=97 target_cpu=006
[10363.928233] <idle>-0 6dN.4. 10354306891us : sched_wakeup: comm=ksoftirqd/6 pid=60 prio=97 target_cpu=006
[10363.929519] <idle>-0 6d..2. 10354306892us : sched_switch: prev_comm=swapper/6 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/6 next_pid=59 next_prio=97
[10363.931382] rcuc/6-59 6d..2. 10354306898us : sched_switch: prev_comm=rcuc/6 prev_pid=59 prev_prio=97 prev_state=S ==> next_comm=ksoftirqd/6 next_pid=60 next_prio=97
[10363.933263] ksoftirq-60 6d..2. 10354306900us : sched_switch: prev_comm=ksoftirqd/6 prev_pid=60 prev_prio=97 prev_state=S ==> next_comm=swapper/6 next_pid=0 next_prio=120
[10363.935176] <idle>-0 6d.h4. 10354307223us : sched_wakeup: comm=rcu_torture_fak pid=151 prio=139 target_cpu=000
[10363.936571] <idle>-0 6dNh4. 10354307888us : sched_wakeup: comm=rcuc/6 pid=59 prio=97 target_cpu=006
[10363.937784] <idle>-0 6d..2. 10354307892us : sched_switch: prev_comm=swapper/6 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/6 next_pid=59 next_prio=97
[10363.939643] rcuc/6-59 6d..2. 10354307895us : sched_switch: prev_comm=rcuc/6 prev_pid=59 prev_prio=97 prev_state=S ==> next_comm=swapper/6 next_pid=0 next_prio=120
[10363.941506] <idle>-0 6dNh4. 10354310888us : sched_wakeup: comm=rcuc/6 pid=59 prio=97 target_cpu=006
[10363.942730] <idle>-0 6d..2. 10354310892us : sched_switch: prev_comm=swapper/6 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/6 next_pid=59 next_prio=97
[10363.945358] rcuc/6-59 6d..2. 10354310894us : sched_switch: prev_comm=rcuc/6 prev_pid=59 prev_prio=97 prev_state=S ==> next_comm=swapper/6 next_pid=0 next_prio=120
[10363.948169] <idle>-0 6d.h4. 10354311779us : sched_wakeup: comm=rcu_torture_fak pid=149 prio=139 target_cpu=000
[10363.950184] <idle>-0 6dNh4. 10354311889us : sched_wakeup: comm=rcuc/6 pid=59 prio=97 target_cpu=006
[10363.951973] <idle>-0 6d..2. 10354311893us : sched_switch: prev_comm=swapper/6 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/6 next_pid=59 next_prio=97
[10363.954758] rcuc/6-59 6d..2. 10354311896us : sched_switch: prev_comm=rcuc/6 prev_pid=59 prev_prio=97 prev_state=S ==> next_comm=swapper/6 next_pid=0 next_prio=120
[10363.957574] <idle>-0 6dNh4. 10354312889us : sched_wakeup: comm=rcuc/6 pid=59 prio=97 target_cpu=006
[10363.959410] <idle>-0 6d..2. 10354312892us : sched_switch: prev_comm=swapper/6 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/6 next_pid=59 next_prio=97
[10363.962196] rcuc/6-59 6d..2. 10354312895us : sched_switch: prev_comm=rcuc/6 prev_pid=59 prev_prio=97 prev_state=S ==> next_comm=swapper/6 next_pid=0 next_prio=120
[10363.964698] <idle>-0 6dNh4. 10354314889us : sched_wakeup: comm=rcuc/6 pid=59 prio=97 target_cpu=006
[10363.965895] <idle>-0 6d..2. 10354314893us : sched_switch: prev_comm=swapper/6 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/6 next_pid=59 next_prio=97
[10363.967749] rcuc/6-59 6d..2. 10354314895us : sched_switch: prev_comm=rcuc/6 prev_pid=59 prev_prio=97 prev_state=S ==> next_comm=swapper/6 next_pid=0 next_prio=120
[10363.969603] <idle>-0 6dNh4. 10354315888us : sched_wakeup: comm=rcuc/6 pid=59 prio=97 target_cpu=006
[10363.970803] <idle>-0 6d..2. 10354315892us : sched_switch: prev_comm=swapper/6 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/6 next_pid=59 next_prio=97
[10363.972722] rcuc/6-59 6D..5. 10354315894us : dl_server_start <-enqueue_task_fair
[10363.973722] rcuc/6-59 6DN.4. 10354315895us : sched_wakeup: comm=cpuhp/6 pid=57 prio=120 target_cpu=006
[10363.974944] rcuc/6-59 6d..2. 10354315898us : sched_switch: prev_comm=rcuc/6 prev_pid=59 prev_prio=97 prev_state=S ==> next_comm=cpuhp/6 next_pid=57 next_prio=120
[10363.976794] cpuhp/6-57 6dNh4. 10354316898us : sched_wakeup: comm=rcuc/6 pid=59 prio=97 target_cpu=006
[10363.977993] cpuhp/6-57 6d..2. 10354316906us : sched_switch: prev_comm=cpuhp/6 prev_pid=57 prev_prio=120 prev_state=R+ ==> next_comm=rcuc/6 next_pid=59 next_prio=97
[10363.979842] rcuc/6-59 6d..2. 10354316914us : sched_switch: prev_comm=rcuc/6 prev_pid=59 prev_prio=97 prev_state=S ==> next_comm=cpuhp/6 next_pid=57 next_prio=120
[10363.981713] cpuhp/6-57 6d..3. 10354316922us : dl_server_stop <-dequeue_entities
[10363.982692] cpuhp/6-57 6d..2. 10354316924us : sched_switch: prev_comm=cpuhp/6 prev_pid=57 prev_prio=120 prev_state=D ==> next_comm=swapper/6 next_pid=0 next_prio=120
[10363.984567] <idle>-0 6d..2. 10354317629us : sched_switch: prev_comm=swapper/6 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/6 next_pid=57 next_prio=120
[10363.986439] cpuhp/6-57 6d..3. 10354317656us : dl_server_stop <-dequeue_entities
[10363.987428] cpuhp/6-57 6d..2. 10354317658us : sched_switch: prev_comm=cpuhp/6 prev_pid=57 prev_prio=120 prev_state=D ==> next_comm=swapper/6 next_pid=0 next_prio=120
[10363.989289] <idle>-0 6d..2. 10354317704us : sched_switch: prev_comm=swapper/6 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/6 next_pid=57 next_prio=120
[10363.991166] cpuhp/6-57 6d..3. 10354317735us : dl_server_stop <-dequeue_entities
[10363.992147] cpuhp/6-57 6d..2. 10354317736us : sched_switch: prev_comm=cpuhp/6 prev_pid=57 prev_prio=120 prev_state=D ==> next_comm=swapper/6 next_pid=0 next_prio=120
[10363.994025] <idle>-0 6d..2. 10354317766us : sched_switch: prev_comm=swapper/6 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/6 next_pid=57 next_prio=120
[10363.995890] cpuhp/6-57 6dNh3. 10354317882us : sched_wakeup: comm=rcuc/6 pid=59 prio=97 target_cpu=006
[10363.997116] cpuhp/6-57 6d..2. 10354317887us : sched_switch: prev_comm=cpuhp/6 prev_pid=57 prev_prio=120 prev_state=R+ ==> next_comm=rcuc/6 next_pid=59 next_prio=97
[10363.998993] rcuc/6-59 6d..2. 10354317889us : sched_switch: prev_comm=rcuc/6 prev_pid=59 prev_prio=97 prev_state=S ==> next_comm=cpuhp/6 next_pid=57 next_prio=120
[10364.000864] cpuhp/6-57 6dN.3. 10354317939us : sched_wakeup: comm=ksoftirqd/6 pid=60 prio=97 target_cpu=006
[10364.002127] cpuhp/6-57 6d..2. 10354317940us : sched_switch: prev_comm=cpuhp/6 prev_pid=57 prev_prio=120 prev_state=R+ ==> next_comm=ksoftirqd/6 next_pid=60 next_prio=97
[10364.004042] ksoftirq-60 6d..2. 10354317942us : sched_switch: prev_comm=ksoftirqd/6 prev_pid=60 prev_prio=97 prev_state=P ==> next_comm=cpuhp/6 next_pid=57 next_prio=120
[10364.005956] cpuhp/6-57 6dN.3. 10354317943us : sched_wakeup: comm=rcuc/6 pid=59 prio=97 target_cpu=006
[10364.007169] cpuhp/6-57 6d..2. 10354317944us : sched_switch: prev_comm=cpuhp/6 prev_pid=57 prev_prio=120 prev_state=R+ ==> next_comm=rcuc/6 next_pid=59 next_prio=97
[10364.009038] rcuc/6-59 6d..2. 10354317946us : sched_switch: prev_comm=rcuc/6 prev_pid=59 prev_prio=97 prev_state=P ==> next_comm=cpuhp/6 next_pid=57 next_prio=120
[10364.010886] cpuhp/6-57 6d..3. 10354317952us : dl_server_stop <-dequeue_entities
[10364.011873] cpuhp/6-57 6d..2. 10354317953us : sched_switch: prev_comm=cpuhp/6 prev_pid=57 prev_prio=120 prev_state=S ==> next_comm=swapper/6 next_pid=0 next_prio=120
[10364.013769] <idle>-0 6d..2. 10354317966us : sched_switch: prev_comm=swapper/6 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/6 next_pid=57 next_prio=120
[10364.015655] cpuhp/6-57 6d..3. 10354317971us : dl_server_stop <-dequeue_entities
[10364.016632] cpuhp/6-57 6d..2. 10354317973us : sched_switch: prev_comm=cpuhp/6 prev_pid=57 prev_prio=120 prev_state=P ==> next_comm=swapper/6 next_pid=0 next_prio=120
[10364.018520] <idle>-0 6d..2. 10354317990us : sched_switch: prev_comm=swapper/6 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=migration/6 next_pid=58 next_prio=0
[10364.020427] CPU:14 [LOST 11510813 EVENTS]
[10364.020427] rcu_tort-159 14d..2. 10356469386us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=154 next_prio=139
[10364.022985] rcu_tort-154 14dN.4. 10356469885us : sched_wakeup: comm=ksoftirqd/14 pid=116 prio=97 target_cpu=014
[10364.024259] rcu_tort-154 14d..2. 10356469890us : sched_switch: prev_comm=rcu_torture_rea prev_pid=154 prev_prio=139 prev_state=R+ ==> next_comm=ksoftirqd/14 next_pid=116 next_prio=97
[10364.026288] ksoftirq-116 14d..2. 10356469892us : sched_switch: prev_comm=ksoftirqd/14 prev_pid=116 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10364.028390] rcu_tort-159 14dNs5. 10356470886us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=014
[10364.029686] rcu_tort-159 14d..2. 10356470888us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10364.031723] rcu_pree-16 14d..2. 10356470891us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_rea next_pid=154 next_prio=139
[10364.033718] rcu_tort-154 14d..2. 10356470895us : sched_switch: prev_comm=rcu_torture_rea prev_pid=154 prev_prio=139 prev_state=I ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10364.035767] rcu_tort-159 14DNh3. 10356471396us : sched_wakeup: comm=rcu_torture_rea pid=154 prio=139 target_cpu=014
[10364.037098] rcu_tort-159 14d..2. 10356471399us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=154 next_prio=139
[10364.039173] rcu_tort-154 14dN.4. 10356474885us : sched_wakeup: comm=ksoftirqd/14 pid=116 prio=97 target_cpu=014
[10364.040499] rcu_tort-154 14d..2. 10356474887us : sched_switch: prev_comm=rcu_torture_rea prev_pid=154 prev_prio=139 prev_state=R+ ==> next_comm=ksoftirqd/14 next_pid=116 next_prio=97
[10364.042547] ksoftirq-116 14d.s3. 10356474895us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=012
[10364.043921] ksoftirq-116 14d..2. 10356474900us : sched_switch: prev_comm=ksoftirqd/14 prev_pid=116 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=154 next_prio=139
[10364.045993] rcu_tort-154 14DNh4. 10356475881us : sched_wakeup: comm=rcuc/14 pid=115 prio=97 target_cpu=014
[10364.047224] rcu_tort-154 14d..2. 10356475886us : sched_switch: prev_comm=rcu_torture_rea prev_pid=154 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/14 next_pid=115 next_prio=97
[10364.049195] rcuc/14-115 14d..2. 10356475890us : sched_switch: prev_comm=rcuc/14 prev_pid=115 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=154 next_prio=139
[10364.051160] rcu_tort-154 14d..2. 10356476814us : sched_switch: prev_comm=rcu_torture_rea prev_pid=154 prev_prio=139 prev_state=R+ ==> next_comm=rcu_exp_gp_kthr next_pid=20 next_prio=97
[10364.053183] rcu_exp_-20 14d..4. 10356476825us : sched_wakeup: comm=rcu_exp_par_gp_ pid=92 prio=97 target_cpu=004
[10364.054490] rcu_exp_-20 14d..4. 10356476836us : sched_wakeup: comm=rcu_exp_par_gp_ pid=106 prio=97 target_cpu=012
[10364.055795] rcu_exp_-20 14dN.3. 10356476838us : sched_wakeup: comm=rcub/14 pid=119 prio=97 target_cpu=014
[10364.057017] rcu_exp_-20 14d..2. 10356476840us : sched_switch: prev_comm=rcu_exp_gp_kthr prev_pid=20 prev_prio=97 prev_state=R+ ==> next_comm=rcub/14 next_pid=119 next_prio=97
[10364.058978] rcub/14-119 14d..4. 10356476852us : dl_server_stop <-dequeue_entities
[10364.059959] rcub/14-119 14d..2. 10356476870us : sched_switch: prev_comm=rcub/14 prev_pid=119 prev_prio=97 prev_state=D ==> next_comm=swapper/14 next_pid=0 next_prio=120
[10364.061851] <idle>-0 14dNh2. 10356476890us : sched_wakeup: comm=rcub/14 pid=119 prio=97 target_cpu=014
[10364.063077] <idle>-0 14dN.4. 10356476895us : sched_wakeup: comm=ksoftirqd/14 pid=116 prio=97 target_cpu=014
[10364.064358] <idle>-0 14d..2. 10356476897us : sched_switch: prev_comm=swapper/14 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcub/14 next_pid=119 next_prio=97
[10364.066245] rcub/14-119 14d..2. 10356476899us : sched_switch: prev_comm=rcub/14 prev_pid=119 prev_prio=97 prev_state=S ==> next_comm=ksoftirqd/14 next_pid=116 next_prio=97
[10364.068183] ksoftirq-116 14d..2. 10356476907us : sched_switch: prev_comm=ksoftirqd/14 prev_pid=116 prev_prio=97 prev_state=S ==> next_comm=swapper/14 next_pid=0 next_prio=120
[10364.070137] <idle>-0 14d.h3. 10356477410us : dl_server_start <-enqueue_task_fair
[10364.071141] <idle>-0 14dNh2. 10356477411us : sched_wakeup: comm=rcu_torture_rea pid=159 prio=139 target_cpu=014
[10364.072488] <idle>-0 14d..2. 10356477412us : sched_switch: prev_comm=swapper/14 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10364.074485] rcu_tort-159 14dN.3. 10356477885us : sched_wakeup: comm=ksoftirqd/14 pid=116 prio=97 target_cpu=014
[10364.076345] rcu_tort-159 14d..2. 10356477886us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=ksoftirqd/14 next_pid=116 next_prio=97
[10364.079470] ksoftirq-116 14d..2. 10356477889us : sched_switch: prev_comm=ksoftirqd/14 prev_pid=116 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10364.081835] rcu_tort-159 14DNh4. 10356480882us : sched_wakeup: comm=rcuc/14 pid=115 prio=97 target_cpu=014
[10364.083062] rcu_tort-159 14d..2. 10356480887us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/14 next_pid=115 next_prio=97
[10364.085029] rcuc/14-115 14d..2. 10356480890us : sched_switch: prev_comm=rcuc/14 prev_pid=115 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10364.086997] rcu_tort-159 14dNh4. 10356481881us : sched_wakeup: comm=rcuc/14 pid=115 prio=97 target_cpu=014
[10364.088224] rcu_tort-159 14d..2. 10356481886us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/14 next_pid=115 next_prio=97
[10364.090191] rcuc/14-115 14d..2. 10356481889us : sched_switch: prev_comm=rcuc/14 prev_pid=115 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10364.092141] rcu_tort-159 14dNh4. 10356483882us : sched_wakeup: comm=rcuc/14 pid=115 prio=97 target_cpu=014
[10364.093362] rcu_tort-159 14d..2. 10356483886us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/14 next_pid=115 next_prio=97
[10364.095321] rcuc/14-115 14d..2. 10356483895us : sched_switch: prev_comm=rcuc/14 prev_pid=115 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10364.097273] rcu_tort-159 14dNh3. 10356484882us : sched_wakeup: comm=rcuc/14 pid=115 prio=97 target_cpu=014
[10364.098506] rcu_tort-159 14d..2. 10356484886us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/14 next_pid=115 next_prio=97
[10364.100469] rcuc/14-115 14d..2. 10356484889us : sched_switch: prev_comm=rcuc/14 prev_pid=115 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10364.102423] rcu_tort-159 14dNh3. 10356485882us : sched_wakeup: comm=rcuc/14 pid=115 prio=97 target_cpu=014
[10364.103645] rcu_tort-159 14d..2. 10356485886us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/14 next_pid=115 next_prio=97
[10364.105602] rcuc/14-115 14d..2. 10356485889us : sched_switch: prev_comm=rcuc/14 prev_pid=115 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10364.107578] rcu_tort-159 14DNh3. 10356486882us : sched_wakeup: comm=rcuc/14 pid=115 prio=97 target_cpu=014
[10364.108820] rcu_tort-159 14d..2. 10356486889us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/14 next_pid=115 next_prio=97
[10364.110810] rcuc/14-115 14d..2. 10356486892us : sched_switch: prev_comm=rcuc/14 prev_pid=115 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10364.112782] rcu_tort-159 14dNh4. 10356487882us : sched_wakeup: comm=rcuc/14 pid=115 prio=97 target_cpu=014
[10364.114012] rcu_tort-159 14dN.4. 10356487886us : sched_wakeup: comm=ksoftirqd/14 pid=116 prio=97 target_cpu=014
[10364.115289] rcu_tort-159 14d..2. 10356487888us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/14 next_pid=115 next_prio=97
[10364.117256] rcuc/14-115 14d..2. 10356487892us : sched_switch: prev_comm=rcuc/14 prev_pid=115 prev_prio=97 prev_state=S ==> next_comm=ksoftirqd/14 next_pid=116 next_prio=97
[10364.119204] ksoftirq-116 14d..2. 10356487894us : sched_switch: prev_comm=ksoftirqd/14 prev_pid=116 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10364.121357] rcu_tort-159 14d..3. 10356487898us : dl_server_stop <-dequeue_entities
[10364.122358] rcu_tort-159 14d..2. 10356487902us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=I ==> next_comm=swapper/14 next_pid=0 next_prio=120
[10364.124374] <idle>-0 14d.h3. 10356487918us : dl_server_start <-enqueue_task_fair
[10364.125388] <idle>-0 14dNh2. 10356487919us : sched_wakeup: comm=cpuhp/14 pid=113 prio=120 target_cpu=014
[10364.126664] <idle>-0 14d..2. 10356487920us : sched_switch: prev_comm=swapper/14 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/14 next_pid=113 next_prio=120
[10364.128613] cpuhp/14-113 14d..3. 10356487924us : dl_server_stop <-dequeue_entities
[10364.129622] cpuhp/14-113 14d..2. 10356487925us : sched_switch: prev_comm=cpuhp/14 prev_pid=113 prev_prio=120 prev_state=D ==> next_comm=swapper/14 next_pid=0 next_prio=120
[10364.131560] <idle>-0 14d.h4. 10356488409us : sched_wakeup: comm=rcu_torture_rea pid=159 prio=139 target_cpu=000
[10364.132891] <idle>-0 14dNh4. 10356488887us : sched_wakeup: comm=rcuc/14 pid=115 prio=97 target_cpu=014
[10364.134126] <idle>-0 14d..2. 10356488891us : sched_switch: prev_comm=swapper/14 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/14 next_pid=115 next_prio=97
[10364.136044] rcuc/14-115 14d..2. 10356488894us : sched_switch: prev_comm=rcuc/14 prev_pid=115 prev_prio=97 prev_state=S ==> next_comm=swapper/14 next_pid=0 next_prio=120
[10364.137962] <idle>-0 14dNh4. 10356489888us : sched_wakeup: comm=rcuc/14 pid=115 prio=97 target_cpu=014
[10364.139203] <idle>-0 14d..2. 10356489892us : sched_switch: prev_comm=swapper/14 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/14 next_pid=115 next_prio=97
[10364.141096] rcuc/14-115 14d..2. 10356489894us : sched_switch: prev_comm=rcuc/14 prev_pid=115 prev_prio=97 prev_state=S ==> next_comm=swapper/14 next_pid=0 next_prio=120
[10364.142990] <idle>-0 14dNh4. 10356490888us : sched_wakeup: comm=rcuc/14 pid=115 prio=97 target_cpu=014
[10364.144230] <idle>-0 14d..2. 10356490892us : sched_switch: prev_comm=swapper/14 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/14 next_pid=115 next_prio=97
[10364.146124] rcuc/14-115 14d..2. 10356490895us : sched_switch: prev_comm=rcuc/14 prev_pid=115 prev_prio=97 prev_state=S ==> next_comm=swapper/14 next_pid=0 next_prio=120
[10364.148030] <idle>-0 14dNh4. 10356493889us : sched_wakeup: comm=rcuc/14 pid=115 prio=97 target_cpu=014
[10364.149266] <idle>-0 14d..2. 10356493892us : sched_switch: prev_comm=swapper/14 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/14 next_pid=115 next_prio=97
[10364.151181] rcuc/14-115 14D..5. 10356493896us : dl_server_start <-enqueue_task_fair
[10364.152188] rcuc/14-115 14DN.4. 10356493897us : sched_wakeup: comm=cpuhp/14 pid=113 prio=120 target_cpu=014
[10364.153474] rcuc/14-115 14d..2. 10356493900us : sched_switch: prev_comm=rcuc/14 prev_pid=115 prev_prio=97 prev_state=S ==> next_comm=cpuhp/14 next_pid=113 next_prio=120
[10364.155363] cpuhp/14-113 14d..3. 10356494772us : dl_server_stop <-dequeue_entities
[10364.156351] cpuhp/14-113 14d..2. 10356494773us : sched_switch: prev_comm=cpuhp/14 prev_pid=113 prev_prio=120 prev_state=D ==> next_comm=swapper/14 next_pid=0 next_prio=120
[10364.158274] <idle>-0 14dNh4. 10356494889us : sched_wakeup: comm=rcuc/14 pid=115 prio=97 target_cpu=014
[10364.159527] <idle>-0 14d..2. 10356494893us : sched_switch: prev_comm=swapper/14 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/14 next_pid=115 next_prio=97
[10364.161425] rcuc/14-115 14d..2. 10356494896us : sched_switch: prev_comm=rcuc/14 prev_pid=115 prev_prio=97 prev_state=S ==> next_comm=swapper/14 next_pid=0 next_prio=120
[10364.163320] <idle>-0 14d..2. 10356495520us : sched_switch: prev_comm=swapper/14 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/14 next_pid=113 next_prio=120
[10364.165239] cpuhp/14-113 14d..3. 10356495543us : dl_server_stop <-dequeue_entities
[10364.166220] cpuhp/14-113 14d..2. 10356495544us : sched_switch: prev_comm=cpuhp/14 prev_pid=113 prev_prio=120 prev_state=D ==> next_comm=swapper/14 next_pid=0 next_prio=120
[10364.168143] <idle>-0 14d..2. 10356495588us : sched_switch: prev_comm=swapper/14 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/14 next_pid=113 next_prio=120
[10364.170090] cpuhp/14-113 14d..3. 10356495616us : dl_server_stop <-dequeue_entities
[10364.171074] cpuhp/14-113 14d..2. 10356495617us : sched_switch: prev_comm=cpuhp/14 prev_pid=113 prev_prio=120 prev_state=D ==> next_comm=swapper/14 next_pid=0 next_prio=120
[10364.172982] <idle>-0 14d..2. 10356495646us : sched_switch: prev_comm=swapper/14 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/14 next_pid=113 next_prio=120
[10364.174919] cpuhp/14-113 14dN.3. 10356495757us : sched_wakeup: comm=ksoftirqd/14 pid=116 prio=97 target_cpu=014
[10364.176206] cpuhp/14-113 14d..2. 10356495758us : sched_switch: prev_comm=cpuhp/14 prev_pid=113 prev_prio=120 prev_state=R+ ==> next_comm=ksoftirqd/14 next_pid=116 next_prio=97
[10364.178173] ksoftirq-116 14d..2. 10356495760us : sched_switch: prev_comm=ksoftirqd/14 prev_pid=116 prev_prio=97 prev_state=P ==> next_comm=cpuhp/14 next_pid=113 next_prio=120
[10364.180173] cpuhp/14-113 14dN.3. 10356495761us : sched_wakeup: comm=rcuc/14 pid=115 prio=97 target_cpu=014
[10364.181422] cpuhp/14-113 14d..2. 10356495762us : sched_switch: prev_comm=cpuhp/14 prev_pid=113 prev_prio=120 prev_state=R+ ==> next_comm=rcuc/14 next_pid=115 next_prio=97
[10364.183318] rcuc/14-115 14d..2. 10356495763us : sched_switch: prev_comm=rcuc/14 prev_pid=115 prev_prio=97 prev_state=P ==> next_comm=cpuhp/14 next_pid=113 next_prio=120
[10364.185221] cpuhp/14-113 14d..3. 10356495768us : dl_server_stop <-dequeue_entities
[10364.186203] cpuhp/14-113 14d..2. 10356495769us : sched_switch: prev_comm=cpuhp/14 prev_pid=113 prev_prio=120 prev_state=S ==> next_comm=swapper/14 next_pid=0 next_prio=120
[10364.188123] <idle>-0 14d..2. 10356495783us : sched_switch: prev_comm=swapper/14 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/14 next_pid=113 next_prio=120
[10364.190047] cpuhp/14-113 14d..3. 10356495787us : dl_server_stop <-dequeue_entities
[10364.191036] cpuhp/14-113 14d..2. 10356495788us : sched_switch: prev_comm=cpuhp/14 prev_pid=113 prev_prio=120 prev_state=P ==> next_comm=swapper/14 next_pid=0 next_prio=120
[10364.192963] <idle>-0 14d..2. 10356495837us : sched_switch: prev_comm=swapper/14 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=migration/14 next_pid=114 next_prio=0
[10364.194901] CPU:4 [LOST 11340756 EVENTS]
[10364.194901] rcu_tort-154 4d..2. 10358460910us : sched_switch: prev_comm=rcu_torture_rea prev_pid=154 prev_prio=139 prev_state=I ==> next_comm=swapper/4 next_pid=0 next_prio=120
[10364.197354] <idle>-0 4d.h3. 10358460987us : dl_server_start <-enqueue_task_fair
[10364.198353] <idle>-0 4dNh2. 10358460988us : sched_wakeup: comm=torture_onoff pid=190 prio=120 target_cpu=004
[10364.199679] <idle>-0 4d..2. 10358460990us : sched_switch: prev_comm=swapper/4 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=torture_onoff next_pid=190 next_prio=120
[10364.201647] torture_-190 4d..4. 10358461002us : sched_wakeup: comm=kworker/0:3 pid=10184 prio=120 target_cpu=000
[10364.202951] torture_-190 4d..3. 10358461005us : dl_server_stop <-dequeue_entities
[10364.203935] torture_-190 4d..2. 10358461009us : sched_switch: prev_comm=torture_onoff prev_pid=190 prev_prio=120 prev_state=D ==> next_comm=swapper/4 next_pid=0 next_prio=120
[10364.205917] <idle>-0 4d.h5. 10358461415us : dl_server_start <-enqueue_task_fair
[10364.206924] <idle>-0 4dNh4. 10358461416us : sched_wakeup: comm=rcu_torture_rea pid=154 prio=139 target_cpu=004
[10364.208248] <idle>-0 4d..2. 10358461420us : sched_switch: prev_comm=swapper/4 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcu_torture_rea next_pid=154 next_prio=139
[10364.210223] rcu_tort-154 4DNh3. 10358461884us : sched_wakeup: comm=rcuc/4 pid=45 prio=97 target_cpu=004
[10364.211455] rcu_tort-154 4d..2. 10358461894us : sched_switch: prev_comm=rcu_torture_rea prev_pid=154 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/4 next_pid=45 next_prio=97
[10364.213415] rcuc/4-45 4d..2. 10358461897us : sched_switch: prev_comm=rcuc/4 prev_pid=45 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=154 next_prio=139
[10364.215353] rcu_tort-154 4DNh3. 10358463884us : sched_wakeup: comm=rcuc/4 pid=45 prio=97 target_cpu=004
[10364.216578] rcu_tort-154 4d..2. 10358463896us : sched_switch: prev_comm=rcu_torture_rea prev_pid=154 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/4 next_pid=45 next_prio=97
[10364.218541] rcuc/4-45 4d..2. 10358463900us : sched_switch: prev_comm=rcuc/4 prev_pid=45 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=154 next_prio=139
[10364.220498] rcu_tort-154 4dNh3. 10358464884us : sched_wakeup: comm=rcuc/4 pid=45 prio=97 target_cpu=004
[10364.221711] rcu_tort-154 4d..2. 10358464891us : sched_switch: prev_comm=rcu_torture_rea prev_pid=154 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/4 next_pid=45 next_prio=97
[10364.223672] rcuc/4-45 4d..2. 10358464895us : sched_switch: prev_comm=rcuc/4 prev_pid=45 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=154 next_prio=139
[10364.225631] rcu_tort-154 4d..2. 10358466463us : sched_switch: prev_comm=rcu_torture_rea prev_pid=154 prev_prio=139 prev_state=R+ ==> next_comm=rcu_exp_par_gp_ next_pid=50 next_prio=97
[10364.227682] rcu_exp_-50 4d..2. 10358466477us : sched_switch: prev_comm=rcu_exp_par_gp_ prev_pid=50 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=154 next_prio=139
[10364.229728] rcu_tort-154 4d..2. 10358466485us : sched_switch: prev_comm=rcu_torture_rea prev_pid=154 prev_prio=139 prev_state=R+ ==> next_comm=rcub/9 next_pid=49 next_prio=97
[10364.231701] rcub/9-49 4d..2. 10358466488us : sched_switch: prev_comm=rcub/9 prev_pid=49 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=154 next_prio=139
[10364.233683] rcu_tort-154 4d..2. 10358466499us : sched_switch: prev_comm=rcu_torture_rea prev_pid=154 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=159 next_prio=97
[10364.235755] rcu_tort-159 4dN.4. 10358466512us : sched_wakeup: comm=rcub/14 pid=119 prio=97 target_cpu=015
[10364.236991] rcu_tort-159 4d..2. 10358466514us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=154 next_prio=139
[10364.239078] rcu_tort-154 4d..2. 10358466736us : sched_switch: prev_comm=rcu_torture_rea prev_pid=154 prev_prio=139 prev_state=R+ ==> next_comm=rcu_exp_par_gp_ next_pid=50 next_prio=97
[10364.242118] rcu_exp_-50 4d..2. 10358466745us : sched_switch: prev_comm=rcu_exp_par_gp_ prev_pid=50 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10364.245206] rcu_tort-159 4d..2. 10358466753us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcub/9 next_pid=49 next_prio=97
[10364.248174] rcub/9-49 4d..2. 10358466766us : sched_switch: prev_comm=rcub/9 prev_pid=49 prev_prio=97 prev_state=D ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10364.251095] rcu_tort-159 4d..2. 10358466775us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=162 next_prio=97
[10364.254239] rcu_tort-162 4d..2. 10358466788us : sched_switch: prev_comm=rcu_torture_rea prev_pid=162 prev_prio=139 prev_state=R+ ==> next_comm=rcub/9 next_pid=49 next_prio=97
[10364.257164] rcub/9-49 4d..2. 10358466798us : sched_switch: prev_comm=rcub/9 prev_pid=49 prev_prio=97 prev_state=D ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10364.260047] rcu_tort-159 4d..2. 10358466818us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcub/9 next_pid=49 next_prio=97
[10364.261994] rcub/9-49 4d..2. 10358466824us : sched_switch: prev_comm=rcub/9 prev_pid=49 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=162 next_prio=139
[10364.263935] rcu_tort-162 4dNh3. 10358466883us : sched_wakeup: comm=rcuc/4 pid=45 prio=97 target_cpu=004
[10364.265142] rcu_tort-162 4d..2. 10358466887us : sched_switch: prev_comm=rcu_torture_rea prev_pid=162 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/4 next_pid=45 next_prio=97
[10364.267106] rcuc/4-45 4d..2. 10358466891us : sched_switch: prev_comm=rcuc/4 prev_pid=45 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10364.269067] rcu_tort-159 4dNh4. 10358467884us : sched_wakeup: comm=rcuc/4 pid=45 prio=97 target_cpu=004
[10364.270267] rcu_tort-159 4dN.4. 10358467890us : sched_wakeup: comm=ksoftirqd/4 pid=46 prio=97 target_cpu=004
[10364.271535] rcu_tort-159 4d..2. 10358467893us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/4 next_pid=45 next_prio=97
[10364.273486] rcuc/4-45 4d..2. 10358467900us : sched_switch: prev_comm=rcuc/4 prev_pid=45 prev_prio=97 prev_state=S ==> next_comm=ksoftirqd/4 next_pid=46 next_prio=97
[10364.275385] ksoftirq-46 4d..2. 10358467903us : sched_switch: prev_comm=ksoftirqd/4 prev_pid=46 prev_prio=97 prev_state=S ==> next_comm=cpuhp/4 next_pid=43 next_prio=120
[10364.277285] cpuhp/4-43 4d..2. 10358467910us : sched_switch: prev_comm=cpuhp/4 prev_pid=43 prev_prio=120 prev_state=D ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10364.279270] rcu_tort-159 4dN.6. 10358467913us : sched_wakeup: comm=migration/4 pid=44 prio=0 target_cpu=004
[10364.280529] rcu_tort-159 4d..2. 10358467914us : sched_switch: prev_comm=rcu_torture_rea prev_pid=159 prev_prio=139 prev_state=R+ ==> next_comm=migration/4 next_pid=44 next_prio=0
[10364.282563] migratio-44 4d..4. 10358467918us : dl_server_stop <-dequeue_entities
[10364.283604] migratio-44 4d..4. 10358467920us : dl_server_start <-enqueue_task_fair
[10364.284623] migratio-44 4d..2. 10358467926us : sched_switch: prev_comm=migration/4 prev_pid=44 prev_prio=0 prev_state=S ==> next_comm=swapper/4 next_pid=0 next_prio=120
[10364.286537] <idle>-0 4dNh4. 10358469898us : sched_wakeup: comm=rcuc/4 pid=45 prio=97 target_cpu=004
[10364.287757] <idle>-0 4dN.4. 10358469903us : sched_wakeup: comm=ksoftirqd/4 pid=46 prio=97 target_cpu=004
[10364.289032] <idle>-0 4d..2. 10358469906us : sched_switch: prev_comm=swapper/4 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/4 next_pid=45 next_prio=97
[10364.290926] rcuc/4-45 4d..2. 10358469913us : sched_switch: prev_comm=rcuc/4 prev_pid=45 prev_prio=97 prev_state=S ==> next_comm=ksoftirqd/4 next_pid=46 next_prio=97
[10364.292832] ksoftirq-46 4d..2. 10358469915us : sched_switch: prev_comm=ksoftirqd/4 prev_pid=46 prev_prio=97 prev_state=S ==> next_comm=swapper/4 next_pid=0 next_prio=120
[10364.294838] <idle>-0 4dNh4. 10358470903us : sched_wakeup: comm=rcuc/4 pid=45 prio=97 target_cpu=004
[10364.296052] <idle>-0 4d..2. 10358470909us : sched_switch: prev_comm=swapper/4 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/4 next_pid=45 next_prio=97
[10364.297937] rcuc/4-45 4d..2. 10358470913us : sched_switch: prev_comm=rcuc/4 prev_pid=45 prev_prio=97 prev_state=S ==> next_comm=swapper/4 next_pid=0 next_prio=120
[10364.299826] <idle>-0 4dNh4. 10358471904us : sched_wakeup: comm=rcuc/4 pid=45 prio=97 target_cpu=004
[10364.301057] <idle>-0 4dN.4. 10358471909us : sched_wakeup: comm=ksoftirqd/4 pid=46 prio=97 target_cpu=004
[10364.302324] <idle>-0 4d..2. 10358471911us : sched_switch: prev_comm=swapper/4 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/4 next_pid=45 next_prio=97
[10364.304185] rcuc/4-45 4d..2. 10358471917us : sched_switch: prev_comm=rcuc/4 prev_pid=45 prev_prio=97 prev_state=S ==> next_comm=ksoftirqd/4 next_pid=46 next_prio=97
[10364.306061] ksoftirq-46 4d..2. 10358471919us : sched_switch: prev_comm=ksoftirqd/4 prev_pid=46 prev_prio=97 prev_state=S ==> next_comm=swapper/4 next_pid=0 next_prio=120
[10364.307999] <idle>-0 4dNh4. 10358472890us : sched_wakeup: comm=rcuc/4 pid=45 prio=97 target_cpu=004
[10364.309215] <idle>-0 4d..2. 10358472894us : sched_switch: prev_comm=swapper/4 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/4 next_pid=45 next_prio=97
[10364.311108] rcuc/4-45 4d..2. 10358472896us : sched_switch: prev_comm=rcuc/4 prev_pid=45 prev_prio=97 prev_state=S ==> next_comm=swapper/4 next_pid=0 next_prio=120
[10364.312984] <idle>-0 4dNh4. 10358474889us : sched_wakeup: comm=rcuc/4 pid=45 prio=97 target_cpu=004
[10364.314194] <idle>-0 4d..2. 10358474893us : sched_switch: prev_comm=swapper/4 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/4 next_pid=45 next_prio=97
[10364.316063] rcuc/4-45 4D..5. 10358474897us : dl_server_start <-enqueue_task_fair
[10364.317083] rcuc/4-45 4DN.4. 10358474898us : sched_wakeup: comm=cpuhp/4 pid=43 prio=120 target_cpu=004
[10364.318335] rcuc/4-45 4d..2. 10358474901us : sched_switch: prev_comm=rcuc/4 prev_pid=45 prev_prio=97 prev_state=S ==> next_comm=cpuhp/4 next_pid=43 next_prio=120
[10364.320194] cpuhp/4-43 4dNh4. 10358476129us : sched_wakeup: comm=rcuc/4 pid=45 prio=97 target_cpu=004
[10364.321427] cpuhp/4-43 4d..2. 10358476137us : sched_switch: prev_comm=cpuhp/4 prev_pid=43 prev_prio=120 prev_state=R+ ==> next_comm=rcuc/4 next_pid=45 next_prio=97
[10364.323276] rcuc/4-45 4d..2. 10358476141us : sched_switch: prev_comm=rcuc/4 prev_pid=45 prev_prio=97 prev_state=S ==> next_comm=cpuhp/4 next_pid=43 next_prio=120
[10364.325147] cpuhp/4-43 4d..3. 10358476148us : dl_server_stop <-dequeue_entities
[10364.326134] cpuhp/4-43 4d..2. 10358476150us : sched_switch: prev_comm=cpuhp/4 prev_pid=43 prev_prio=120 prev_state=D ==> next_comm=swapper/4 next_pid=0 next_prio=120
[10364.328012] <idle>-0 4d..2. 10358476813us : sched_switch: prev_comm=swapper/4 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/4 next_pid=43 next_prio=120
[10364.329904] cpuhp/4-43 4d..3. 10358476847us : dl_server_stop <-dequeue_entities
[10364.330895] cpuhp/4-43 4d..2. 10358476849us : sched_switch: prev_comm=cpuhp/4 prev_pid=43 prev_prio=120 prev_state=D ==> next_comm=swapper/4 next_pid=0 next_prio=120
[10364.332789] <idle>-0 4dNh4. 10358476897us : sched_wakeup: comm=rcuc/4 pid=45 prio=97 target_cpu=004
[10364.334003] <idle>-0 4d..2. 10358476901us : sched_switch: prev_comm=swapper/4 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/4 next_pid=45 next_prio=97
[10364.335877] rcuc/4-45 4d..2. 10358476905us : sched_switch: prev_comm=rcuc/4 prev_pid=45 prev_prio=97 prev_state=S ==> next_comm=swapper/4 next_pid=0 next_prio=120
[10364.337745] <idle>-0 4d..2. 10358476929us : sched_switch: prev_comm=swapper/4 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/4 next_pid=43 next_prio=120
[10364.339641] cpuhp/4-43 4d..3. 10358476963us : dl_server_stop <-dequeue_entities
[10364.340642] cpuhp/4-43 4d..2. 10358476964us : sched_switch: prev_comm=cpuhp/4 prev_pid=43 prev_prio=120 prev_state=D ==> next_comm=swapper/4 next_pid=0 next_prio=120
[10364.342554] <idle>-0 4d..2. 10358477012us : sched_switch: prev_comm=swapper/4 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/4 next_pid=43 next_prio=120
[10364.344444] cpuhp/4-43 4dN.3. 10358477164us : sched_wakeup: comm=ksoftirqd/4 pid=46 prio=97 target_cpu=004
[10364.345727] cpuhp/4-43 4d..2. 10358477166us : sched_switch: prev_comm=cpuhp/4 prev_pid=43 prev_prio=120 prev_state=R+ ==> next_comm=ksoftirqd/4 next_pid=46 next_prio=97
[10364.347641] ksoftirq-46 4d..2. 10358477167us : sched_switch: prev_comm=ksoftirqd/4 prev_pid=46 prev_prio=97 prev_state=P ==> next_comm=cpuhp/4 next_pid=43 next_prio=120
[10364.349557] cpuhp/4-43 4dN.3. 10358477169us : sched_wakeup: comm=rcuc/4 pid=45 prio=97 target_cpu=004
[10364.350765] cpuhp/4-43 4d..2. 10358477170us : sched_switch: prev_comm=cpuhp/4 prev_pid=43 prev_prio=120 prev_state=R+ ==> next_comm=rcuc/4 next_pid=45 next_prio=97
[10364.352633] rcuc/4-45 4d..2. 10358477171us : sched_switch: prev_comm=rcuc/4 prev_pid=45 prev_prio=97 prev_state=P ==> next_comm=cpuhp/4 next_pid=43 next_prio=120
[10364.354488] cpuhp/4-43 4d..3. 10358477177us : dl_server_stop <-dequeue_entities
[10364.355483] cpuhp/4-43 4d..2. 10358477178us : sched_switch: prev_comm=cpuhp/4 prev_pid=43 prev_prio=120 prev_state=S ==> next_comm=swapper/4 next_pid=0 next_prio=120
[10364.357344] <idle>-0 4d..2. 10358477189us : sched_switch: prev_comm=swapper/4 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/4 next_pid=43 next_prio=120
[10364.359232] cpuhp/4-43 4d..3. 10358477192us : dl_server_stop <-dequeue_entities
[10364.360212] cpuhp/4-43 4d..2. 10358477193us : sched_switch: prev_comm=cpuhp/4 prev_pid=43 prev_prio=120 prev_state=P ==> next_comm=swapper/4 next_pid=0 next_prio=120
[10364.362106] <idle>-0 4d..2. 10358477207us : sched_switch: prev_comm=swapper/4 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=migration/4 next_pid=44 next_prio=0
[10364.364016] CPU:8 [LOST 11393681 EVENTS]
[10364.364016] rcuc/8-73 8d..2. 10358679902us : sched_switch: prev_comm=rcuc/8 prev_pid=73 prev_prio=97 prev_state=S ==> next_comm=swapper/8 next_pid=0 next_prio=120
[10364.366356] <idle>-0 8dNh4. 10358680889us : sched_wakeup: comm=rcuc/8 pid=73 prio=97 target_cpu=008
[10364.367591] <idle>-0 8d..2. 10358680893us : sched_switch: prev_comm=swapper/8 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/8 next_pid=73 next_prio=97
[10364.369453] rcuc/8-73 8d..2. 10358680903us : sched_switch: prev_comm=rcuc/8 prev_pid=73 prev_prio=97 prev_state=S ==> next_comm=swapper/8 next_pid=0 next_prio=120
[10364.371300] <idle>-0 8dN.4. 10358681891us : sched_wakeup: comm=ksoftirqd/8 pid=74 prio=97 target_cpu=008
[10364.372587] <idle>-0 8d..2. 10358681893us : sched_switch: prev_comm=swapper/8 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=ksoftirqd/8 next_pid=74 next_prio=97
[10364.374518] ksoftirq-74 8d..2. 10358681904us : sched_switch: prev_comm=ksoftirqd/8 prev_pid=74 prev_prio=97 prev_state=S ==> next_comm=swapper/8 next_pid=0 next_prio=120
[10364.376435] <idle>-0 8dNh2. 10358682828us : sched_wakeup: comm=rcu_exp_gp_kthr pid=20 prio=97 target_cpu=008
[10364.377733] <idle>-0 8d..2. 10358682829us : sched_switch: prev_comm=swapper/8 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcu_exp_gp_kthr next_pid=20 next_prio=97
[10364.379685] rcu_exp_-20 8d..4. 10358682843us : sched_wakeup: comm=rcu_exp_par_gp_ pid=18 prio=97 target_cpu=000
[10364.380988] rcu_exp_-20 8d..4. 10358682847us : sched_wakeup: comm=rcu_exp_par_gp_ pid=36 prio=97 target_cpu=002
[10364.382284] rcu_exp_-20 8d..4. 10358682851us : sched_wakeup: comm=rcu_exp_par_gp_ pid=50 prio=97 target_cpu=005
[10364.383611] rcu_exp_-20 8d..4. 10358682857us : sched_wakeup: comm=rcu_exp_par_gp_ pid=64 prio=97 target_cpu=007
[10364.384914] rcu_exp_-20 8d..4. 10358682862us : sched_wakeup: comm=rcu_exp_par_gp_ pid=78 prio=97 target_cpu=009
[10364.386226] rcu_exp_-20 8d..4. 10358682866us : sched_wakeup: comm=rcu_exp_par_gp_ pid=92 prio=97 target_cpu=010
[10364.387546] rcu_exp_-20 8d..4. 10358682872us : sched_wakeup: comm=rcu_exp_par_gp_ pid=106 prio=97 target_cpu=012
[10364.388915] rcu_exp_-20 8dNh5. 10358682882us : sched_wakeup: comm=rcuc/8 pid=73 prio=97 target_cpu=008
[10364.390123] rcu_exp_-20 8d..2. 10358682887us : sched_switch: prev_comm=rcu_exp_gp_kthr prev_pid=20 prev_prio=97 prev_state=R+ ==> next_comm=rcuc/8 next_pid=73 next_prio=97
[10364.392062] rcuc/8-73 8d..2. 10358682890us : sched_switch: prev_comm=rcuc/8 prev_pid=73 prev_prio=97 prev_state=S ==> next_comm=rcu_exp_gp_kthr next_pid=20 next_prio=97
[10364.393996] rcu_exp_-20 8d..3. 10358682898us : sched_wakeup: comm=rcub/14 pid=119 prio=97 target_cpu=015
[10364.395230] rcu_exp_-20 8d..4. 10358682908us : sched_wakeup: comm=kworker/2:1 pid=18174 prio=120 target_cpu=002
[10364.396559] rcu_exp_-20 8d..2. 10358682917us : sched_switch: prev_comm=rcu_exp_gp_kthr prev_pid=20 prev_prio=97 prev_state=S ==> next_comm=swapper/8 next_pid=0 next_prio=120
[10364.398535] <idle>-0 8dNh2. 10358682920us : sched_wakeup: comm=rcu_exp_gp_kthr pid=20 prio=97 target_cpu=008
[10364.399841] <idle>-0 8d..2. 10358682921us : sched_switch: prev_comm=swapper/8 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcu_exp_gp_kthr next_pid=20 next_prio=97
[10364.401802] rcu_exp_-20 8d..4. 10358682927us : sched_wakeup: comm=rcu_exp_par_gp_ pid=18 prio=97 target_cpu=000
[10364.403119] rcu_exp_-20 8d..4. 10358682931us : sched_wakeup: comm=rcu_exp_par_gp_ pid=36 prio=97 target_cpu=002
[10364.404451] rcu_exp_-20 8d..4. 10358682935us : sched_wakeup: comm=rcu_exp_par_gp_ pid=50 prio=97 target_cpu=005
[10364.405750] rcu_exp_-20 8d..4. 10358682940us : sched_wakeup: comm=rcu_exp_par_gp_ pid=64 prio=97 target_cpu=007
[10364.407051] rcu_exp_-20 8d..4. 10358682946us : sched_wakeup: comm=rcu_exp_par_gp_ pid=78 prio=97 target_cpu=009
[10364.408364] rcu_exp_-20 8d..4. 10358682949us : sched_wakeup: comm=rcu_exp_par_gp_ pid=92 prio=97 target_cpu=010
[10364.409686] rcu_exp_-20 8d..2. 10358682971us : sched_switch: prev_comm=rcu_exp_gp_kthr prev_pid=20 prev_prio=97 prev_state=D ==> next_comm=swapper/8 next_pid=0 next_prio=120
[10364.411649] <idle>-0 8dNh2. 10358682973us : sched_wakeup: comm=rcu_exp_gp_kthr pid=20 prio=97 target_cpu=008
[10364.412963] <idle>-0 8d..2. 10358682974us : sched_switch: prev_comm=swapper/8 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcu_exp_gp_kthr next_pid=20 next_prio=97
[10364.414936] rcu_exp_-20 8d..2. 10358682989us : sched_switch: prev_comm=rcu_exp_gp_kthr prev_pid=20 prev_prio=97 prev_state=D ==> next_comm=swapper/8 next_pid=0 next_prio=120
[10364.416913] <idle>-0 8dNh2. 10358682991us : sched_wakeup: comm=rcu_exp_gp_kthr pid=20 prio=97 target_cpu=008
[10364.418212] <idle>-0 8d..2. 10358682992us : sched_switch: prev_comm=swapper/8 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcu_exp_gp_kthr next_pid=20 next_prio=97
[10364.420177] rcu_exp_-20 8d..2. 10358683002us : sched_switch: prev_comm=rcu_exp_gp_kthr prev_pid=20 prev_prio=97 prev_state=S ==> next_comm=swapper/8 next_pid=0 next_prio=120
[10364.422145] <idle>-0 8dNh4. 10358683890us : sched_wakeup: comm=rcuc/8 pid=73 prio=97 target_cpu=008
[10364.423354] <idle>-0 8d..2. 10358683894us : sched_switch: prev_comm=swapper/8 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/8 next_pid=73 next_prio=97
[10364.425234] rcuc/8-73 8d..2. 10358683905us : sched_switch: prev_comm=rcuc/8 prev_pid=73 prev_prio=97 prev_state=S ==> next_comm=swapper/8 next_pid=0 next_prio=120
[10364.427107] <idle>-0 8dNh4. 10358685890us : sched_wakeup: comm=rcuc/8 pid=73 prio=97 target_cpu=008
[10364.428339] <idle>-0 8d..2. 10358685894us : sched_switch: prev_comm=swapper/8 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/8 next_pid=73 next_prio=97
[10364.430218] rcuc/8-73 8d..2. 10358685902us : sched_switch: prev_comm=rcuc/8 prev_pid=73 prev_prio=97 prev_state=S ==> next_comm=swapper/8 next_pid=0 next_prio=120
[10364.432081] <idle>-0 8dNh4. 10358686889us : sched_wakeup: comm=rcuc/8 pid=73 prio=97 target_cpu=008
[10364.433301] <idle>-0 8d..2. 10358686893us : sched_switch: prev_comm=swapper/8 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/8 next_pid=73 next_prio=97
[10364.435184] rcuc/8-73 8D..4. 10358686901us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=015
[10364.436468] rcuc/8-73 8d..2. 10358686912us : sched_switch: prev_comm=rcuc/8 prev_pid=73 prev_prio=97 prev_state=S ==> next_comm=swapper/8 next_pid=0 next_prio=120
[10364.438312] <idle>-0 8dNh4. 10358687890us : sched_wakeup: comm=rcuc/8 pid=73 prio=97 target_cpu=008
[10364.439540] <idle>-0 8d..2. 10358687894us : sched_switch: prev_comm=swapper/8 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/8 next_pid=73 next_prio=97
[10364.441405] rcuc/8-73 8d..2. 10358687907us : sched_switch: prev_comm=rcuc/8 prev_pid=73 prev_prio=97 prev_state=S ==> next_comm=swapper/8 next_pid=0 next_prio=120
[10364.443263] <idle>-0 8dNh4. 10358688890us : sched_wakeup: comm=rcuc/8 pid=73 prio=97 target_cpu=008
[10364.444539] <idle>-0 8d..2. 10358688894us : sched_switch: prev_comm=swapper/8 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/8 next_pid=73 next_prio=97
[10364.446401] rcuc/8-73 8d..2. 10358688900us : sched_switch: prev_comm=rcuc/8 prev_pid=73 prev_prio=97 prev_state=S ==> next_comm=swapper/8 next_pid=0 next_prio=120
[10364.448252] <idle>-0 8d.h3. 10358688906us : dl_server_start <-enqueue_task_fair
[10364.449258] <idle>-0 8dNh2. 10358688907us : sched_wakeup: comm=cpuhp/8 pid=71 prio=120 target_cpu=008
[10364.450502] <idle>-0 8d..2. 10358688908us : sched_switch: prev_comm=swapper/8 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/8 next_pid=71 next_prio=120
[10364.452389] cpuhp/8-71 8d..3. 10358688913us : dl_server_stop <-dequeue_entities
[10364.453360] cpuhp/8-71 8d..2. 10358688917us : sched_switch: prev_comm=cpuhp/8 prev_pid=71 prev_prio=120 prev_state=D ==> next_comm=swapper/8 next_pid=0 next_prio=120
[10364.455249] <idle>-0 8dNh4. 10358689890us : sched_wakeup: comm=rcuc/8 pid=73 prio=97 target_cpu=008
[10364.456486] <idle>-0 8d..2. 10358689894us : sched_switch: prev_comm=swapper/8 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/8 next_pid=73 next_prio=97
[10364.458341] rcuc/8-73 8d..2. 10358689896us : sched_switch: prev_comm=rcuc/8 prev_pid=73 prev_prio=97 prev_state=S ==> next_comm=swapper/8 next_pid=0 next_prio=120
[10364.460232] <idle>-0 8dNh4. 10358690890us : sched_wakeup: comm=rcuc/8 pid=73 prio=97 target_cpu=008
[10364.461454] <idle>-0 8d..2. 10358690894us : sched_switch: prev_comm=swapper/8 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/8 next_pid=73 next_prio=97
[10364.463299] rcuc/8-73 8d..2. 10358690896us : sched_switch: prev_comm=rcuc/8 prev_pid=73 prev_prio=97 prev_state=S ==> next_comm=swapper/8 next_pid=0 next_prio=120
[10364.465158] <idle>-0 8dNh4. 10358693888us : sched_wakeup: comm=rcuc/8 pid=73 prio=97 target_cpu=008
[10364.466361] <idle>-0 8d..2. 10358693892us : sched_switch: prev_comm=swapper/8 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/8 next_pid=73 next_prio=97
[10364.468226] rcuc/8-73 8d..2. 10358693895us : sched_switch: prev_comm=rcuc/8 prev_pid=73 prev_prio=97 prev_state=S ==> next_comm=swapper/8 next_pid=0 next_prio=120
[10364.470080] <idle>-0 8dNh4. 10358694889us : sched_wakeup: comm=rcuc/8 pid=73 prio=97 target_cpu=008
[10364.471287] <idle>-0 8d..2. 10358694892us : sched_switch: prev_comm=swapper/8 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/8 next_pid=73 next_prio=97
[10364.473145] rcuc/8-73 8d..2. 10358694895us : sched_switch: prev_comm=rcuc/8 prev_pid=73 prev_prio=97 prev_state=S ==> next_comm=swapper/8 next_pid=0 next_prio=120
[10364.474994] <idle>-0 8dNh4. 10358695904us : sched_wakeup: comm=rcuc/8 pid=73 prio=97 target_cpu=008
[10364.476201] <idle>-0 8d..2. 10358695909us : sched_switch: prev_comm=swapper/8 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/8 next_pid=73 next_prio=97
[10364.478074] rcuc/8-73 8D..5. 10358695912us : dl_server_start <-enqueue_task_fair
[10364.479084] rcuc/8-73 8DN.4. 10358695913us : sched_wakeup: comm=cpuhp/8 pid=71 prio=120 target_cpu=008
[10364.480308] rcuc/8-73 8d..2. 10358695915us : sched_switch: prev_comm=rcuc/8 prev_pid=73 prev_prio=97 prev_state=S ==> next_comm=cpuhp/8 next_pid=71 next_prio=120
[10364.482172] cpuhp/8-71 8dNh4. 10358697094us : sched_wakeup: comm=rcuc/8 pid=73 prio=97 target_cpu=008
[10364.483395] cpuhp/8-71 8d..2. 10358697102us : sched_switch: prev_comm=cpuhp/8 prev_pid=71 prev_prio=120 prev_state=R+ ==> next_comm=rcuc/8 next_pid=73 next_prio=97
[10364.485249] rcuc/8-73 8d..2. 10358697105us : sched_switch: prev_comm=rcuc/8 prev_pid=73 prev_prio=97 prev_state=S ==> next_comm=cpuhp/8 next_pid=71 next_prio=120
[10364.487109] cpuhp/8-71 8d..3. 10358697112us : dl_server_stop <-dequeue_entities
[10364.488096] cpuhp/8-71 8d..2. 10358697113us : sched_switch: prev_comm=cpuhp/8 prev_pid=71 prev_prio=120 prev_state=D ==> next_comm=swapper/8 next_pid=0 next_prio=120
[10364.489977] <idle>-0 8d..2. 10358697856us : sched_switch: prev_comm=swapper/8 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/8 next_pid=71 next_prio=120
[10364.491867] cpuhp/8-71 8d..3. 10358697887us : dl_server_stop <-dequeue_entities
[10364.492896] cpuhp/8-71 8d..2. 10358697888us : sched_switch: prev_comm=cpuhp/8 prev_pid=71 prev_prio=120 prev_state=D ==> next_comm=swapper/8 next_pid=0 next_prio=120
[10364.494782] <idle>-0 8d..2. 10358698941us : sched_switch: prev_comm=swapper/8 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/8 next_pid=71 next_prio=120
[10364.496683] cpuhp/8-71 8d..3. 10358698975us : dl_server_stop <-dequeue_entities
[10364.497672] cpuhp/8-71 8d..2. 10358698977us : sched_switch: prev_comm=cpuhp/8 prev_pid=71 prev_prio=120 prev_state=D ==> next_comm=swapper/8 next_pid=0 next_prio=120
[10364.499548] <idle>-0 8d..2. 10358699087us : sched_switch: prev_comm=swapper/8 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/8 next_pid=71 next_prio=120
[10364.501427] cpuhp/8-71 8dN.3. 10358699198us : sched_wakeup: comm=ksoftirqd/8 pid=74 prio=97 target_cpu=008
[10364.502688] cpuhp/8-71 8d..2. 10358699200us : sched_switch: prev_comm=cpuhp/8 prev_pid=71 prev_prio=120 prev_state=R+ ==> next_comm=ksoftirqd/8 next_pid=74 next_prio=97
[10364.504598] ksoftirq-74 8d..2. 10358699202us : sched_switch: prev_comm=ksoftirqd/8 prev_pid=74 prev_prio=97 prev_state=P ==> next_comm=cpuhp/8 next_pid=71 next_prio=120
[10364.506513] cpuhp/8-71 8dN.3. 10358699203us : sched_wakeup: comm=rcuc/8 pid=73 prio=97 target_cpu=008
[10364.507722] cpuhp/8-71 8d..2. 10358699203us : sched_switch: prev_comm=cpuhp/8 prev_pid=71 prev_prio=120 prev_state=R+ ==> next_comm=rcuc/8 next_pid=73 next_prio=97
[10364.509583] rcuc/8-73 8d..2. 10358699204us : sched_switch: prev_comm=rcuc/8 prev_pid=73 prev_prio=97 prev_state=P ==> next_comm=cpuhp/8 next_pid=71 next_prio=120
[10364.511445] cpuhp/8-71 8d..3. 10358699209us : dl_server_stop <-dequeue_entities
[10364.512434] cpuhp/8-71 8d..2. 10358699210us : sched_switch: prev_comm=cpuhp/8 prev_pid=71 prev_prio=120 prev_state=S ==> next_comm=swapper/8 next_pid=0 next_prio=120
[10364.514450] <idle>-0 8d..2. 10358699899us : sched_switch: prev_comm=swapper/8 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/8 next_pid=71 next_prio=120
[10364.517264] cpuhp/8-71 8d..3. 10358699903us : dl_server_stop <-dequeue_entities
[10364.518709] cpuhp/8-71 8d..2. 10358699904us : sched_switch: prev_comm=cpuhp/8 prev_pid=71 prev_prio=120 prev_state=P ==> next_comm=swapper/8 next_pid=0 next_prio=120
[10364.520877] <idle>-0 8d..2. 10358699941us : sched_switch: prev_comm=swapper/8 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=migration/8 next_pid=72 next_prio=0
[10364.522805] CPU:10 [LOST 11861188 EVENTS]
[10364.522805] ksoftirq-88 10d..2. 10359438892us : sched_switch: prev_comm=ksoftirqd/10 prev_pid=88 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=157 next_prio=139
[10364.525286] rcu_tort-157 10d..3. 10359440891us : dl_server_stop <-dequeue_entities
[10364.526273] rcu_tort-157 10d..2. 10359440900us : sched_switch: prev_comm=rcu_torture_rea prev_pid=157 prev_prio=139 prev_state=I ==> next_comm=swapper/10 next_pid=0 next_prio=120
[10364.528270] <idle>-0 10d.h5. 10359441399us : dl_server_start <-enqueue_task_fair
[10364.529780] <idle>-0 10dNh4. 10359441400us : sched_wakeup: comm=rcu_torture_rea pid=157 prio=139 target_cpu=010
[10364.531819] <idle>-0 10d..2. 10359441402us : sched_switch: prev_comm=swapper/10 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcu_torture_rea next_pid=157 next_prio=139
[10364.534347] rcu_tort-157 10dN.4. 10359445885us : sched_wakeup: comm=ksoftirqd/10 pid=88 prio=97 target_cpu=010
[10364.535632] rcu_tort-157 10d..2. 10359445887us : sched_switch: prev_comm=rcu_torture_rea prev_pid=157 prev_prio=139 prev_state=R+ ==> next_comm=ksoftirqd/10 next_pid=88 next_prio=97
[10364.537640] ksoftirq-88 10d..2. 10359445892us : sched_switch: prev_comm=ksoftirqd/10 prev_pid=88 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=157 next_prio=139
[10364.539631] rcu_tort-157 10dN.4. 10359447885us : sched_wakeup: comm=ksoftirqd/10 pid=88 prio=97 target_cpu=010
[10364.540899] rcu_tort-157 10d..2. 10359447887us : sched_switch: prev_comm=rcu_torture_rea prev_pid=157 prev_prio=139 prev_state=R+ ==> next_comm=ksoftirqd/10 next_pid=88 next_prio=97
[10364.542914] ksoftirq-88 10d..2. 10359447892us : sched_switch: prev_comm=ksoftirqd/10 prev_pid=88 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=157 next_prio=139
[10364.544910] rcu_tort-157 10d..3. 10359451888us : dl_server_stop <-dequeue_entities
[10364.545882] rcu_tort-157 10d..3. 10359451895us : dl_server_start <-enqueue_task_fair
[10364.546886] rcu_tort-157 10d..2. 10359451897us : sched_switch: prev_comm=rcu_torture_rea prev_pid=157 prev_prio=139 prev_state=I ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10364.548947] rcu_tort-158 10d.h3. 10359452390us : sched_wakeup: comm=rcu_torture_rea pid=157 prio=139 target_cpu=010
[10364.550265] rcu_tort-158 10dN.3. 10359452884us : sched_wakeup: comm=ksoftirqd/10 pid=88 prio=97 target_cpu=010
[10364.551556] rcu_tort-158 10d..2. 10359452886us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=ksoftirqd/10 next_pid=88 next_prio=97
[10364.553559] ksoftirq-88 10d..2. 10359452892us : sched_switch: prev_comm=ksoftirqd/10 prev_pid=88 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=157 next_prio=139
[10364.555583] rcu_tort-157 10dN.3. 10359453885us : sched_wakeup: comm=ksoftirqd/10 pid=88 prio=97 target_cpu=010
[10364.556846] rcu_tort-157 10d..2. 10359453887us : sched_switch: prev_comm=rcu_torture_rea prev_pid=157 prev_prio=139 prev_state=R+ ==> next_comm=ksoftirqd/10 next_pid=88 next_prio=97
[10364.558848] ksoftirq-88 10d..2. 10359453892us : sched_switch: prev_comm=ksoftirqd/10 prev_pid=88 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10364.560842] rcu_tort-158 10d..2. 10359454888us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=157 next_prio=139
[10364.562898] rcu_tort-157 10dN.3. 10359456885us : sched_wakeup: comm=ksoftirqd/10 pid=88 prio=97 target_cpu=010
[10364.564162] rcu_tort-157 10d..2. 10359456886us : sched_switch: prev_comm=rcu_torture_rea prev_pid=157 prev_prio=139 prev_state=R+ ==> next_comm=ksoftirqd/10 next_pid=88 next_prio=97
[10364.566174] ksoftirq-88 10d..2. 10359456893us : sched_switch: prev_comm=ksoftirqd/10 prev_pid=88 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10364.568174] rcu_tort-158 10d..3. 10359458891us : dl_server_stop <-dequeue_entities
[10364.569149] rcu_tort-158 10d..2. 10359458896us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=I ==> next_comm=swapper/10 next_pid=0 next_prio=120
[10364.571131] <idle>-0 10d.h5. 10359459398us : dl_server_start <-enqueue_task_fair
[10364.572130] <idle>-0 10dNh4. 10359459399us : sched_wakeup: comm=rcu_torture_rea pid=158 prio=139 target_cpu=010
[10364.573459] <idle>-0 10d..2. 10359459401us : sched_switch: prev_comm=swapper/10 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10364.575440] rcu_tort-158 10dN.5. 10359459884us : sched_wakeup: comm=ksoftirqd/10 pid=88 prio=97 target_cpu=010
[10364.576717] rcu_tort-158 10d..2. 10359459886us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=ksoftirqd/10 next_pid=88 next_prio=97
[10364.578728] ksoftirq-88 10d..2. 10359459888us : sched_switch: prev_comm=ksoftirqd/10 prev_pid=88 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10364.580719] rcu_tort-158 10dN.3. 10359461885us : sched_wakeup: comm=ksoftirqd/10 pid=88 prio=97 target_cpu=010
[10364.581996] rcu_tort-158 10d..2. 10359461887us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=ksoftirqd/10 next_pid=88 next_prio=97
[10364.584021] ksoftirq-88 10d..2. 10359461891us : sched_switch: prev_comm=ksoftirqd/10 prev_pid=88 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10364.586023] rcu_tort-158 10dN.5. 10359464885us : sched_wakeup: comm=ksoftirqd/10 pid=88 prio=97 target_cpu=010
[10364.587282] rcu_tort-158 10d..2. 10359465079us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=ksoftirqd/10 next_pid=88 next_prio=97
[10364.589282] ksoftirq-88 10d..2. 10359465082us : sched_switch: prev_comm=ksoftirqd/10 prev_pid=88 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10364.591269] rcu_tort-158 10DNh3. 10359468882us : sched_wakeup: comm=rcuc/10 pid=87 prio=97 target_cpu=010
[10364.592485] rcu_tort-158 10d..2. 10359468887us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/10 next_pid=87 next_prio=97
[10364.594429] rcuc/10-87 10d..2. 10359468891us : sched_switch: prev_comm=rcuc/10 prev_pid=87 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10364.596349] rcu_tort-158 10dN.4. 10359469885us : sched_wakeup: comm=ksoftirqd/10 pid=88 prio=97 target_cpu=010
[10364.597660] rcu_tort-158 10d..2. 10359469887us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=ksoftirqd/10 next_pid=88 next_prio=97
[10364.599654] ksoftirq-88 10d..2. 10359469891us : sched_switch: prev_comm=ksoftirqd/10 prev_pid=88 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10364.601639] rcu_tort-158 10d..3. 10359469895us : dl_server_stop <-dequeue_entities
[10364.602612] rcu_tort-158 10d..2. 10359469900us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=I ==> next_comm=swapper/10 next_pid=0 next_prio=120
[10364.604576] <idle>-0 10d.h5. 10359470402us : dl_server_start <-enqueue_task_fair
[10364.605573] <idle>-0 10dNh4. 10359470403us : sched_wakeup: comm=rcu_torture_rea pid=158 prio=139 target_cpu=010
[10364.606884] <idle>-0 10d..2. 10359470405us : sched_switch: prev_comm=swapper/10 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10364.608852] rcu_tort-158 10dN.4. 10359472885us : sched_wakeup: comm=ksoftirqd/10 pid=88 prio=97 target_cpu=010
[10364.610117] rcu_tort-158 10d..2. 10359472887us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=ksoftirqd/10 next_pid=88 next_prio=97
[10364.612120] ksoftirq-88 10d..2. 10359472891us : sched_switch: prev_comm=ksoftirqd/10 prev_pid=88 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10364.614104] rcu_tort-158 10dN.4. 10359474886us : sched_wakeup: comm=ksoftirqd/10 pid=88 prio=97 target_cpu=010
[10364.615376] rcu_tort-158 10d..2. 10359474888us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=ksoftirqd/10 next_pid=88 next_prio=97
[10364.617346] ksoftirq-88 10d..2. 10359474891us : sched_switch: prev_comm=ksoftirqd/10 prev_pid=88 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10364.619337] rcu_tort-158 10dN.3. 10359479885us : sched_wakeup: comm=ksoftirqd/10 pid=88 prio=97 target_cpu=010
[10364.620611] rcu_tort-158 10d..2. 10359479886us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=ksoftirqd/10 next_pid=88 next_prio=97
[10364.622624] ksoftirq-88 10d..2. 10359479891us : sched_switch: prev_comm=ksoftirqd/10 prev_pid=88 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10364.624618] rcu_tort-158 10d..3. 10359480888us : dl_server_stop <-dequeue_entities
[10364.625612] rcu_tort-158 10d..3. 10359480895us : dl_server_start <-enqueue_task_fair
[10364.626622] rcu_tort-158 10d..2. 10359480897us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=I ==> next_comm=rcu_torture_rea next_pid=164 next_prio=139
[10364.629297] rcu_tort-164 10D.h3. 10359481390us : sched_wakeup: comm=rcu_torture_rea pid=158 prio=139 target_cpu=010
[10364.631340] rcu_tort-164 10dNh3. 10359481881us : sched_wakeup: comm=rcuc/10 pid=87 prio=97 target_cpu=010
[10364.633161] rcu_tort-164 10dN.3. 10359481885us : sched_wakeup: comm=ksoftirqd/10 pid=88 prio=97 target_cpu=010
[10364.635099] rcu_tort-164 10d..2. 10359481887us : sched_switch: prev_comm=rcu_torture_rea prev_pid=164 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/10 next_pid=87 next_prio=97
[10364.637988] rcuc/10-87 10d..2. 10359481890us : sched_switch: prev_comm=rcuc/10 prev_pid=87 prev_prio=97 prev_state=S ==> next_comm=ksoftirqd/10 next_pid=88 next_prio=97
[10364.640859] ksoftirq-88 10d..2. 10359481892us : sched_switch: prev_comm=ksoftirqd/10 prev_pid=88 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10364.643888] rcu_tort-158 10DNh4. 10359482981us : sched_wakeup: comm=rcuc/10 pid=87 prio=97 target_cpu=010
[10364.645775] rcu_tort-158 10d..2. 10359482990us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/10 next_pid=87 next_prio=97
[10364.649302] rcuc/10-87 10d..2. 10359482992us : sched_switch: prev_comm=rcuc/10 prev_pid=87 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=164 next_prio=139
[10364.651766] rcu_tort-164 10dNh4. 10359483882us : sched_wakeup: comm=rcuc/10 pid=87 prio=97 target_cpu=010
[10364.653287] rcu_tort-164 10d..2. 10359483887us : sched_switch: prev_comm=rcu_torture_rea prev_pid=164 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/10 next_pid=87 next_prio=97
[10364.655716] rcuc/10-87 10d..2. 10359483896us : sched_switch: prev_comm=rcuc/10 prev_pid=87 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10364.658108] rcu_tort-158 10dNh3. 10359484882us : sched_wakeup: comm=rcuc/10 pid=87 prio=97 target_cpu=010
[10364.659614] rcu_tort-158 10dN.3. 10359484886us : sched_wakeup: comm=ksoftirqd/10 pid=88 prio=97 target_cpu=010
[10364.661172] rcu_tort-158 10d..2. 10359484887us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/10 next_pid=87 next_prio=97
[10364.663589] rcuc/10-87 10d..2. 10359484891us : sched_switch: prev_comm=rcuc/10 prev_pid=87 prev_prio=97 prev_state=S ==> next_comm=ksoftirqd/10 next_pid=88 next_prio=97
[10364.665923] ksoftirq-88 10d..2. 10359484893us : sched_switch: prev_comm=ksoftirqd/10 prev_pid=88 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=164 next_prio=139
[10364.668400] rcu_tort-164 10d..3. 10359486892us : dl_server_stop <-dequeue_entities
[10364.669608] rcu_tort-164 10d..2. 10359486897us : sched_switch: prev_comm=rcu_torture_rea prev_pid=164 prev_prio=139 prev_state=I ==> next_comm=swapper/10 next_pid=0 next_prio=120
[10364.672052] <idle>-0 10d.h5. 10359487399us : dl_server_start <-enqueue_task_fair
[10364.673277] <idle>-0 10dNh4. 10359487400us : sched_wakeup: comm=rcu_torture_rea pid=164 prio=139 target_cpu=010
[10364.674928] <idle>-0 10d..2. 10359487403us : sched_switch: prev_comm=swapper/10 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcu_torture_rea next_pid=164 next_prio=139
[10364.677381] rcu_tort-164 10dNh3. 10359487882us : sched_wakeup: comm=rcuc/10 pid=87 prio=97 target_cpu=010
[10364.678886] rcu_tort-164 10dN.3. 10359487886us : sched_wakeup: comm=ksoftirqd/10 pid=88 prio=97 target_cpu=010
[10364.680454] rcu_tort-164 10d..2. 10359487888us : sched_switch: prev_comm=rcu_torture_rea prev_pid=164 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/10 next_pid=87 next_prio=97
[10364.682866] rcuc/10-87 10d..2. 10359487892us : sched_switch: prev_comm=rcuc/10 prev_pid=87 prev_prio=97 prev_state=S ==> next_comm=ksoftirqd/10 next_pid=88 next_prio=97
[10364.685203] ksoftirq-88 10d..2. 10359487894us : sched_switch: prev_comm=ksoftirqd/10 prev_pid=88 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=164 next_prio=139
[10364.687671] rcu_tort-164 10dNh3. 10359488882us : sched_wakeup: comm=rcuc/10 pid=87 prio=97 target_cpu=010
[10364.689172] rcu_tort-164 10d..2. 10359488888us : sched_switch: prev_comm=rcu_torture_rea prev_pid=164 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/10 next_pid=87 next_prio=97
[10364.691582] rcuc/10-87 10d..2. 10359488891us : sched_switch: prev_comm=rcuc/10 prev_pid=87 prev_prio=97 prev_state=S ==> next_comm=cpuhp/10 next_pid=85 next_prio=120
[10364.693876] cpuhp/10-85 10d..2. 10359488896us : sched_switch: prev_comm=cpuhp/10 prev_pid=85 prev_prio=120 prev_state=D ==> next_comm=rcu_torture_rea next_pid=164 next_prio=139
[10364.696290] rcu_tort-164 10dN.6. 10359488898us : sched_wakeup: comm=migration/10 pid=86 prio=0 target_cpu=010
[10364.697854] rcu_tort-164 10d..2. 10359488900us : sched_switch: prev_comm=rcu_torture_rea prev_pid=164 prev_prio=139 prev_state=R+ ==> next_comm=migration/10 next_pid=86 next_prio=0
[10364.700315] migratio-86 10d..4. 10359488902us : dl_server_stop <-dequeue_entities
[10364.701606] migratio-86 10d..2. 10359488911us : sched_switch: prev_comm=migration/10 prev_pid=86 prev_prio=0 prev_state=S ==> next_comm=swapper/10 next_pid=0 next_prio=120
[10364.704045] <idle>-0 10dN.4. 10359489890us : sched_wakeup: comm=ksoftirqd/10 pid=88 prio=97 target_cpu=010
[10364.705664] <idle>-0 10d..2. 10359489892us : sched_switch: prev_comm=swapper/10 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=ksoftirqd/10 next_pid=88 next_prio=97
[10364.708112] ksoftirq-88 10d..2. 10359489897us : sched_switch: prev_comm=ksoftirqd/10 prev_pid=88 prev_prio=97 prev_state=S ==> next_comm=swapper/10 next_pid=0 next_prio=120
[10364.710557] <idle>-0 10dNh4. 10359491888us : sched_wakeup: comm=rcuc/10 pid=87 prio=97 target_cpu=010
[10364.712098] <idle>-0 10d..2. 10359491892us : sched_switch: prev_comm=swapper/10 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/10 next_pid=87 next_prio=97
[10364.714481] rcuc/10-87 10d..2. 10359491895us : sched_switch: prev_comm=rcuc/10 prev_pid=87 prev_prio=97 prev_state=S ==> next_comm=swapper/10 next_pid=0 next_prio=120
[10364.716849] <idle>-0 10dNh4. 10359492888us : sched_wakeup: comm=rcuc/10 pid=87 prio=97 target_cpu=010
[10364.718395] <idle>-0 10d..2. 10359492891us : sched_switch: prev_comm=swapper/10 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/10 next_pid=87 next_prio=97
[10364.720766] rcuc/10-87 10d..2. 10359492894us : sched_switch: prev_comm=rcuc/10 prev_pid=87 prev_prio=97 prev_state=S ==> next_comm=swapper/10 next_pid=0 next_prio=120
[10364.723153] <idle>-0 10dNh4. 10359495888us : sched_wakeup: comm=rcuc/10 pid=87 prio=97 target_cpu=010
[10364.724703] <idle>-0 10d..2. 10359495892us : sched_switch: prev_comm=swapper/10 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/10 next_pid=87 next_prio=97
[10364.727083] rcuc/10-87 10d..2. 10359495895us : sched_switch: prev_comm=rcuc/10 prev_pid=87 prev_prio=97 prev_state=S ==> next_comm=swapper/10 next_pid=0 next_prio=120
[10364.729477] <idle>-0 10dNh4. 10359496889us : sched_wakeup: comm=rcuc/10 pid=87 prio=97 target_cpu=010
[10364.731016] <idle>-0 10d..2. 10359496893us : sched_switch: prev_comm=swapper/10 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/10 next_pid=87 next_prio=97
[10364.733405] rcuc/10-87 10d..2. 10359496896us : sched_switch: prev_comm=rcuc/10 prev_pid=87 prev_prio=97 prev_state=S ==> next_comm=swapper/10 next_pid=0 next_prio=120
[10364.735791] <idle>-0 10dNh4. 10359498888us : sched_wakeup: comm=rcuc/10 pid=87 prio=97 target_cpu=010
[10364.737333] <idle>-0 10d..2. 10359498892us : sched_switch: prev_comm=swapper/10 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/10 next_pid=87 next_prio=97
[10364.739724] rcuc/10-87 10D..5. 10359498895us : dl_server_start <-enqueue_task_fair
[10364.740988] rcuc/10-87 10DN.4. 10359498896us : sched_wakeup: comm=cpuhp/10 pid=85 prio=120 target_cpu=010
[10364.742561] rcuc/10-87 10d..2. 10359498899us : sched_switch: prev_comm=rcuc/10 prev_pid=87 prev_prio=97 prev_state=S ==> next_comm=cpuhp/10 next_pid=85 next_prio=120
[10364.744920] cpuhp/10-85 10dNh4. 10359500093us : sched_wakeup: comm=rcuc/10 pid=87 prio=97 target_cpu=010
[10364.746473] cpuhp/10-85 10d..2. 10359500101us : sched_switch: prev_comm=cpuhp/10 prev_pid=85 prev_prio=120 prev_state=R+ ==> next_comm=rcuc/10 next_pid=87 next_prio=97
[10364.748851] rcuc/10-87 10d..2. 10359500103us : sched_switch: prev_comm=rcuc/10 prev_pid=87 prev_prio=97 prev_state=S ==> next_comm=cpuhp/10 next_pid=85 next_prio=120
[10364.751225] cpuhp/10-85 10d..3. 10359500110us : dl_server_stop <-dequeue_entities
[10364.752474] cpuhp/10-85 10d..2. 10359500111us : sched_switch: prev_comm=cpuhp/10 prev_pid=85 prev_prio=120 prev_state=D ==> next_comm=swapper/10 next_pid=0 next_prio=120
[10364.754882] <idle>-0 10d..2. 10359501178us : sched_switch: prev_comm=swapper/10 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/10 next_pid=85 next_prio=120
[10364.757289] cpuhp/10-85 10d..3. 10359501202us : dl_server_stop <-dequeue_entities
[10364.758539] cpuhp/10-85 10d..2. 10359501203us : sched_switch: prev_comm=cpuhp/10 prev_pid=85 prev_prio=120 prev_state=D ==> next_comm=swapper/10 next_pid=0 next_prio=120
[10364.760947] <idle>-0 10d..2. 10359501247us : sched_switch: prev_comm=swapper/10 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/10 next_pid=85 next_prio=120
[10364.763353] cpuhp/10-85 10d..3. 10359501280us : dl_server_stop <-dequeue_entities
[10364.764599] cpuhp/10-85 10d..2. 10359501281us : sched_switch: prev_comm=cpuhp/10 prev_pid=85 prev_prio=120 prev_state=D ==> next_comm=swapper/10 next_pid=0 next_prio=120
[10364.767007] <idle>-0 10d..2. 10359501316us : sched_switch: prev_comm=swapper/10 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/10 next_pid=85 next_prio=120
[10364.769417] cpuhp/10-85 10dN.3. 10359501462us : sched_wakeup: comm=ksoftirqd/10 pid=88 prio=97 target_cpu=010
[10364.771027] cpuhp/10-85 10d..2. 10359501463us : sched_switch: prev_comm=cpuhp/10 prev_pid=85 prev_prio=120 prev_state=R+ ==> next_comm=ksoftirqd/10 next_pid=88 next_prio=97
[10364.773481] ksoftirq-88 10d..2. 10359501465us : sched_switch: prev_comm=ksoftirqd/10 prev_pid=88 prev_prio=97 prev_state=P ==> next_comm=cpuhp/10 next_pid=85 next_prio=120
[10364.775907] cpuhp/10-85 10dN.3. 10359501467us : sched_wakeup: comm=rcuc/10 pid=87 prio=97 target_cpu=010
[10364.777455] cpuhp/10-85 10d..2. 10359501468us : sched_switch: prev_comm=cpuhp/10 prev_pid=85 prev_prio=120 prev_state=R+ ==> next_comm=rcuc/10 next_pid=87 next_prio=97
[10364.779828] rcuc/10-87 10d..2. 10359501469us : sched_switch: prev_comm=rcuc/10 prev_pid=87 prev_prio=97 prev_state=P ==> next_comm=cpuhp/10 next_pid=85 next_prio=120
[10364.782188] cpuhp/10-85 10d..3. 10359501475us : dl_server_stop <-dequeue_entities
[10364.783432] cpuhp/10-85 10d..2. 10359501476us : sched_switch: prev_comm=cpuhp/10 prev_pid=85 prev_prio=120 prev_state=S ==> next_comm=swapper/10 next_pid=0 next_prio=120
[10364.785844] <idle>-0 10d..2. 10359501601us : sched_switch: prev_comm=swapper/10 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/10 next_pid=85 next_prio=120
[10364.788248] cpuhp/10-85 10d..3. 10359501606us : dl_server_stop <-dequeue_entities
[10364.789494] cpuhp/10-85 10d..2. 10359501608us : sched_switch: prev_comm=cpuhp/10 prev_pid=85 prev_prio=120 prev_state=P ==> next_comm=swapper/10 next_pid=0 next_prio=120
[10364.791897] <idle>-0 10d..2. 10359501641us : sched_switch: prev_comm=swapper/10 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=migration/10 next_pid=86 next_prio=0
[10364.794329] CPU:12 [LOST 11813902 EVENTS]
[10364.794329] rcu_tort-162 12d..2. 10360086891us : sched_switch: prev_comm=rcu_torture_rea prev_pid=162 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/12 next_pid=101 next_prio=97
[10364.797447] rcuc/12-101 12d..2. 10360086894us : sched_switch: prev_comm=rcuc/12 prev_pid=101 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=162 next_prio=139
[10364.799934] rcu_tort-162 12dNh3. 10360087882us : sched_wakeup: comm=rcuc/12 pid=101 prio=97 target_cpu=012
[10364.801492] rcu_tort-162 12d..2. 10360087886us : sched_switch: prev_comm=rcu_torture_rea prev_pid=162 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/12 next_pid=101 next_prio=97
[10364.803987] rcuc/12-101 12d..2. 10360087889us : sched_switch: prev_comm=rcuc/12 prev_pid=101 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=162 next_prio=139
[10364.806513] rcu_tort-162 12dN.3. 10360088885us : sched_wakeup: comm=ksoftirqd/12 pid=102 prio=97 target_cpu=012
[10364.808129] rcu_tort-162 12d..2. 10360088887us : sched_switch: prev_comm=rcu_torture_rea prev_pid=162 prev_prio=139 prev_state=R+ ==> next_comm=ksoftirqd/12 next_pid=102 next_prio=97
[10364.810696] ksoftirq-102 12d..2. 10360088892us : sched_switch: prev_comm=ksoftirqd/12 prev_pid=102 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=162 next_prio=139
[10364.813279] rcu_tort-162 12DNh3. 10360089882us : sched_wakeup: comm=rcuc/12 pid=101 prio=97 target_cpu=012
[10364.814851] rcu_tort-162 12d..2. 10360089887us : sched_switch: prev_comm=rcu_torture_rea prev_pid=162 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/12 next_pid=101 next_prio=97
[10364.817340] rcuc/12-101 12d..2. 10360089890us : sched_switch: prev_comm=rcuc/12 prev_pid=101 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=162 next_prio=139
[10364.819829] rcu_tort-162 12DNh3. 10360090881us : sched_wakeup: comm=rcuc/12 pid=101 prio=97 target_cpu=012
[10364.821390] rcu_tort-162 12d..2. 10360090888us : sched_switch: prev_comm=rcu_torture_rea prev_pid=162 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/12 next_pid=101 next_prio=97
[10364.823886] rcuc/12-101 12d..2. 10360090891us : sched_switch: prev_comm=rcuc/12 prev_pid=101 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=162 next_prio=139
[10364.826366] rcu_tort-162 12dNh3. 10360091882us : sched_wakeup: comm=rcuc/12 pid=101 prio=97 target_cpu=012
[10364.827923] rcu_tort-162 12d..2. 10360091887us : sched_switch: prev_comm=rcu_torture_rea prev_pid=162 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/12 next_pid=101 next_prio=97
[10364.830429] rcuc/12-101 12d..2. 10360091890us : sched_switch: prev_comm=rcuc/12 prev_pid=101 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=162 next_prio=139
[10364.832908] rcu_tort-162 12DNh3. 10360092882us : sched_wakeup: comm=rcuc/12 pid=101 prio=97 target_cpu=012
[10364.834470] rcu_tort-162 12d..2. 10360092890us : sched_switch: prev_comm=rcu_torture_rea prev_pid=162 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/12 next_pid=101 next_prio=97
[10364.836969] rcuc/12-101 12d..2. 10360092892us : sched_switch: prev_comm=rcuc/12 prev_pid=101 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=162 next_prio=139
[10364.839471] rcu_tort-162 12dN.3. 10360094884us : sched_wakeup: comm=ksoftirqd/12 pid=102 prio=97 target_cpu=012
[10364.841095] rcu_tort-162 12d..2. 10360094886us : sched_switch: prev_comm=rcu_torture_rea prev_pid=162 prev_prio=139 prev_state=R+ ==> next_comm=ksoftirqd/12 next_pid=102 next_prio=97
[10364.843669] ksoftirq-102 12d..2. 10360094891us : sched_switch: prev_comm=ksoftirqd/12 prev_pid=102 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=162 next_prio=139
[10364.846224] rcu_tort-162 12dNh4. 10360095883us : sched_wakeup: comm=rcuc/12 pid=101 prio=97 target_cpu=012
[10364.847795] rcu_tort-162 12d..2. 10360095887us : sched_switch: prev_comm=rcu_torture_rea prev_pid=162 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/12 next_pid=101 next_prio=97
[10364.850292] rcuc/12-101 12d..2. 10360095890us : sched_switch: prev_comm=rcuc/12 prev_pid=101 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=162 next_prio=139
[10364.852785] rcu_tort-162 12d..2. 10360096233us : sched_switch: prev_comm=rcu_torture_rea prev_pid=162 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=155 next_prio=97
[10364.855399] rcu_tort-155 12d..2. 10360096243us : sched_switch: prev_comm=rcu_torture_rea prev_pid=155 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=158 next_prio=97
[10364.858003] rcu_tort-158 12dN.4. 10360096407us : sched_wakeup: comm=rcub/13 pid=105 prio=97 target_cpu=013
[10364.859564] rcu_tort-158 12d..2. 10360096418us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=162 next_prio=97
[10364.862163] rcu_tort-162 12d..4. 10360096428us : sched_wakeup: comm=rcu_exp_gp_kthr pid=20 prio=97 target_cpu=015
[10364.863817] rcu_tort-162 12dN.4. 10360096435us : sched_wakeup: comm=rcub/13 pid=105 prio=97 target_cpu=013
[10364.865377] rcu_tort-162 12d..2. 10360096437us : sched_switch: prev_comm=rcu_torture_rea prev_pid=162 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=155 next_prio=139
[10364.867991] rcu_tort-155 12dNh3. 10360096884us : sched_wakeup: comm=rcuc/12 pid=101 prio=97 target_cpu=012
[10364.869558] rcu_tort-155 12dN.3. 10360096888us : sched_wakeup: comm=ksoftirqd/12 pid=102 prio=97 target_cpu=012
[10364.871184] rcu_tort-155 12d..2. 10360096889us : sched_switch: prev_comm=rcu_torture_rea prev_pid=155 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/12 next_pid=101 next_prio=97
[10364.873684] rcuc/12-101 12d..2. 10360096894us : sched_switch: prev_comm=rcuc/12 prev_pid=101 prev_prio=97 prev_state=S ==> next_comm=ksoftirqd/12 next_pid=102 next_prio=97
[10364.876115] ksoftirq-102 12d..2. 10360096896us : sched_switch: prev_comm=ksoftirqd/12 prev_pid=102 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=162 next_prio=139
[10364.878701] rcu_tort-162 12d..2. 10360096901us : sched_switch: prev_comm=rcu_torture_rea prev_pid=162 prev_prio=139 prev_state=I ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10364.881300] rcu_tort-158 12dNh3. 10360098883us : sched_wakeup: comm=rcuc/12 pid=101 prio=97 target_cpu=012
[10364.882859] rcu_tort-158 12dN.3. 10360098887us : sched_wakeup: comm=ksoftirqd/12 pid=102 prio=97 target_cpu=012
[10364.884486] rcu_tort-158 12d..2. 10360098889us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/12 next_pid=101 next_prio=97
[10364.886983] rcuc/12-101 12d..2. 10360098894us : sched_switch: prev_comm=rcuc/12 prev_pid=101 prev_prio=97 prev_state=S ==> next_comm=ksoftirqd/12 next_pid=102 next_prio=97
[10364.889425] ksoftirq-102 12d..2. 10360098896us : sched_switch: prev_comm=ksoftirqd/12 prev_pid=102 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=155 next_prio=139
[10364.891978] rcu_tort-155 12dNh4. 10360099882us : sched_wakeup: comm=rcuc/12 pid=101 prio=97 target_cpu=012
[10364.893556] rcu_tort-155 12d..2. 10360099887us : sched_switch: prev_comm=rcu_torture_rea prev_pid=155 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/12 next_pid=101 next_prio=97
[10364.896048] rcuc/12-101 12d..2. 10360099889us : sched_switch: prev_comm=rcuc/12 prev_pid=101 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=155 next_prio=139
[10364.898683] rcu_tort-155 12d..2. 10360100609us : sched_switch: prev_comm=rcu_torture_rea prev_pid=155 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10364.901293] rcu_tort-158 12dN.3. 10360100885us : sched_wakeup: comm=ksoftirqd/12 pid=102 prio=97 target_cpu=012
[10364.902922] rcu_tort-158 12d..2. 10360100888us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=ksoftirqd/12 next_pid=102 next_prio=97
[10364.905489] ksoftirq-102 12d..2. 10360100892us : sched_switch: prev_comm=ksoftirqd/12 prev_pid=102 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=155 next_prio=139
[10364.908063] rcu_tort-155 12dNh4. 10360101883us : sched_wakeup: comm=rcuc/12 pid=101 prio=97 target_cpu=012
[10364.909630] rcu_tort-155 12dN.4. 10360101886us : sched_wakeup: comm=ksoftirqd/12 pid=102 prio=97 target_cpu=012
[10364.911245] rcu_tort-155 12d..2. 10360101889us : sched_switch: prev_comm=rcu_torture_rea prev_pid=155 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/12 next_pid=101 next_prio=97
[10364.913743] rcuc/12-101 12d..2. 10360101893us : sched_switch: prev_comm=rcuc/12 prev_pid=101 prev_prio=97 prev_state=S ==> next_comm=ksoftirqd/12 next_pid=102 next_prio=97
[10364.916168] ksoftirq-102 12d..2. 10360101895us : sched_switch: prev_comm=ksoftirqd/12 prev_pid=102 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10364.918712] rcu_tort-158 12d..2. 10360101903us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=I ==> next_comm=rcu_torture_rea next_pid=155 next_prio=139
[10364.921312] rcu_tort-155 12d..3. 10360101905us : dl_server_stop <-dequeue_entities
[10364.922551] rcu_tort-155 12d..2. 10360101907us : sched_switch: prev_comm=rcu_torture_rea prev_pid=155 prev_prio=139 prev_state=I ==> next_comm=swapper/12 next_pid=0 next_prio=120
[10364.925058] <idle>-0 12d.h5. 10360102410us : dl_server_start <-enqueue_task_fair
[10364.926324] <idle>-0 12dNh4. 10360102411us : sched_wakeup: comm=rcu_torture_rea pid=158 prio=139 target_cpu=012
[10364.928003] <idle>-0 12d..2. 10360102417us : sched_switch: prev_comm=swapper/12 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10364.930508] rcu_tort-158 12DNh3. 10360103077us : sched_wakeup: comm=rcuc/12 pid=101 prio=97 target_cpu=012
[10364.932062] rcu_tort-158 12d..2. 10360103085us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/12 next_pid=101 next_prio=97
[10364.934566] rcuc/12-101 12d..2. 10360103087us : sched_switch: prev_comm=rcuc/12 prev_pid=101 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10364.937046] rcu_tort-158 12DNh3. 10360104882us : sched_wakeup: comm=rcuc/12 pid=101 prio=97 target_cpu=012
[10364.938604] rcu_tort-158 12d..2. 10360104889us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/12 next_pid=101 next_prio=97
[10364.941108] rcuc/12-101 12d..2. 10360104893us : sched_switch: prev_comm=rcuc/12 prev_pid=101 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10364.943596] rcu_tort-158 12dNh4. 10360105882us : sched_wakeup: comm=rcuc/12 pid=101 prio=97 target_cpu=012
[10364.945150] rcu_tort-158 12d..2. 10360105887us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/12 next_pid=101 next_prio=97
[10364.947646] rcuc/12-101 12d..2. 10360105891us : sched_switch: prev_comm=rcuc/12 prev_pid=101 prev_prio=97 prev_state=S ==> next_comm=cpuhp/12 next_pid=99 next_prio=120
[10364.950029] cpuhp/12-99 12d..2. 10360105896us : sched_switch: prev_comm=cpuhp/12 prev_pid=99 prev_prio=120 prev_state=D ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10364.952532] rcu_tort-158 12dN.6. 10360105898us : sched_wakeup: comm=migration/12 pid=100 prio=0 target_cpu=012
[10364.954137] rcu_tort-158 12d..2. 10360105900us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=migration/12 next_pid=100 next_prio=0
[10364.956693] migratio-100 12d..4. 10360105903us : dl_server_stop <-dequeue_entities
[10364.957931] migratio-100 12d..2. 10360105911us : sched_switch: prev_comm=migration/12 prev_pid=100 prev_prio=0 prev_state=S ==> next_comm=swapper/12 next_pid=0 next_prio=120
[10364.960382] <idle>-0 12dN.4. 10360106892us : sched_wakeup: comm=ksoftirqd/12 pid=102 prio=97 target_cpu=012
[10364.962008] <idle>-0 12d..2. 10360106893us : sched_switch: prev_comm=swapper/12 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=ksoftirqd/12 next_pid=102 next_prio=97
[10364.964470] ksoftirq-102 12d..2. 10360106898us : sched_switch: prev_comm=ksoftirqd/12 prev_pid=102 prev_prio=97 prev_state=S ==> next_comm=swapper/12 next_pid=0 next_prio=120
[10364.966928] <idle>-0 12dNh4. 10360107890us : sched_wakeup: comm=rcuc/12 pid=101 prio=97 target_cpu=012
[10364.968492] <idle>-0 12d..2. 10360107893us : sched_switch: prev_comm=swapper/12 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/12 next_pid=101 next_prio=97
[10364.970884] rcuc/12-101 12d..2. 10360107896us : sched_switch: prev_comm=rcuc/12 prev_pid=101 prev_prio=97 prev_state=S ==> next_comm=swapper/12 next_pid=0 next_prio=120
[10364.973273] <idle>-0 12dNh4. 10360108890us : sched_wakeup: comm=rcuc/12 pid=101 prio=97 target_cpu=012
[10364.974834] <idle>-0 12d..2. 10360108894us : sched_switch: prev_comm=swapper/12 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/12 next_pid=101 next_prio=97
[10364.977221] rcuc/12-101 12d..2. 10360108897us : sched_switch: prev_comm=rcuc/12 prev_pid=101 prev_prio=97 prev_state=S ==> next_comm=swapper/12 next_pid=0 next_prio=120
[10364.979622] <idle>-0 12dNh4. 10360110898us : sched_wakeup: comm=rcuc/12 pid=101 prio=97 target_cpu=012
[10364.981175] <idle>-0 12d..2. 10360110905us : sched_switch: prev_comm=swapper/12 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/12 next_pid=101 next_prio=97
[10364.983574] rcuc/12-101 12d..2. 10360110909us : sched_switch: prev_comm=rcuc/12 prev_pid=101 prev_prio=97 prev_state=S ==> next_comm=swapper/12 next_pid=0 next_prio=120
[10364.985967] <idle>-0 12dNh4. 10360111891us : sched_wakeup: comm=rcuc/12 pid=101 prio=97 target_cpu=012
[10364.987526] <idle>-0 12d..2. 10360111895us : sched_switch: prev_comm=swapper/12 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/12 next_pid=101 next_prio=97
[10364.989919] rcuc/12-101 12d..2. 10360111899us : sched_switch: prev_comm=rcuc/12 prev_pid=101 prev_prio=97 prev_state=S ==> next_comm=swapper/12 next_pid=0 next_prio=120
[10364.992309] <idle>-0 12dNh4. 10360113888us : sched_wakeup: comm=rcuc/12 pid=101 prio=97 target_cpu=012
[10364.993863] <idle>-0 12d..2. 10360113891us : sched_switch: prev_comm=swapper/12 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/12 next_pid=101 next_prio=97
[10364.996255] rcuc/12-101 12D..5. 10360113894us : dl_server_start <-enqueue_task_fair
[10364.997527] rcuc/12-101 12DN.4. 10360113896us : sched_wakeup: comm=cpuhp/12 pid=99 prio=120 target_cpu=012
[10364.999095] rcuc/12-101 12D..4. 10360113899us : sched_wakeup: comm=rcu_torture_bar pid=196 prio=120 target_cpu=011
[10365.000773] rcuc/12-101 12d..2. 10360113901us : sched_switch: prev_comm=rcuc/12 prev_pid=101 prev_prio=97 prev_state=S ==> next_comm=cpuhp/12 next_pid=99 next_prio=120
[10365.003153] cpuhp/12-99 12d..3. 10360114795us : dl_server_stop <-dequeue_entities
[10365.004403] cpuhp/12-99 12d..2. 10360114797us : sched_switch: prev_comm=cpuhp/12 prev_pid=99 prev_prio=120 prev_state=D ==> next_comm=swapper/12 next_pid=0 next_prio=120
[10365.006802] <idle>-0 12dNh4. 10360114887us : sched_wakeup: comm=rcuc/12 pid=101 prio=97 target_cpu=012
[10365.008355] <idle>-0 12d..2. 10360114891us : sched_switch: prev_comm=swapper/12 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/12 next_pid=101 next_prio=97
[10365.010753] rcuc/12-101 12d..2. 10360114893us : sched_switch: prev_comm=rcuc/12 prev_pid=101 prev_prio=97 prev_state=S ==> next_comm=swapper/12 next_pid=0 next_prio=120
[10365.013167] <idle>-0 12d..2. 10360115537us : sched_switch: prev_comm=swapper/12 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/12 next_pid=99 next_prio=120
[10365.015578] cpuhp/12-99 12d..3. 10360115560us : dl_server_stop <-dequeue_entities
[10365.016814] cpuhp/12-99 12d..2. 10360115562us : sched_switch: prev_comm=cpuhp/12 prev_pid=99 prev_prio=120 prev_state=D ==> next_comm=swapper/12 next_pid=0 next_prio=120
[10365.019215] <idle>-0 12d..2. 10360115627us : sched_switch: prev_comm=swapper/12 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/12 next_pid=99 next_prio=120
[10365.021620] cpuhp/12-99 12d..3. 10360115653us : dl_server_stop <-dequeue_entities
[10365.022856] cpuhp/12-99 12d..2. 10360115654us : sched_switch: prev_comm=cpuhp/12 prev_pid=99 prev_prio=120 prev_state=D ==> next_comm=swapper/12 next_pid=0 next_prio=120
[10365.025256] <idle>-0 12d..2. 10360115692us : sched_switch: prev_comm=swapper/12 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/12 next_pid=99 next_prio=120
[10365.027665] cpuhp/12-99 12dN.3. 10360115803us : sched_wakeup: comm=ksoftirqd/12 pid=102 prio=97 target_cpu=012
[10365.029281] cpuhp/12-99 12d..2. 10360115804us : sched_switch: prev_comm=cpuhp/12 prev_pid=99 prev_prio=120 prev_state=R+ ==> next_comm=ksoftirqd/12 next_pid=102 next_prio=97
[10365.031743] ksoftirq-102 12d..2. 10360115806us : sched_switch: prev_comm=ksoftirqd/12 prev_pid=102 prev_prio=97 prev_state=P ==> next_comm=cpuhp/12 next_pid=99 next_prio=120
[10365.034180] cpuhp/12-99 12dN.3. 10360115807us : sched_wakeup: comm=rcuc/12 pid=101 prio=97 target_cpu=012
[10365.035737] cpuhp/12-99 12d..2. 10360115808us : sched_switch: prev_comm=cpuhp/12 prev_pid=99 prev_prio=120 prev_state=R+ ==> next_comm=rcuc/12 next_pid=101 next_prio=97
[10365.038126] rcuc/12-101 12d..2. 10360115809us : sched_switch: prev_comm=rcuc/12 prev_pid=101 prev_prio=97 prev_state=P ==> next_comm=cpuhp/12 next_pid=99 next_prio=120
[10365.040503] cpuhp/12-99 12d..3. 10360115814us : dl_server_stop <-dequeue_entities
[10365.041739] cpuhp/12-99 12d..2. 10360115815us : sched_switch: prev_comm=cpuhp/12 prev_pid=99 prev_prio=120 prev_state=S ==> next_comm=swapper/12 next_pid=0 next_prio=120
[10365.044143] <idle>-0 12d..2. 10360115826us : sched_switch: prev_comm=swapper/12 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/12 next_pid=99 next_prio=120
[10365.046551] cpuhp/12-99 12d..3. 10360115830us : dl_server_stop <-dequeue_entities
[10365.047800] cpuhp/12-99 12d..2. 10360115832us : sched_switch: prev_comm=cpuhp/12 prev_pid=99 prev_prio=120 prev_state=P ==> next_comm=swapper/12 next_pid=0 next_prio=120
[10365.050207] <idle>-0 12d..2. 10360115877us : sched_switch: prev_comm=swapper/12 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=migration/12 next_pid=100 next_prio=0
[10365.052656] CPU:15 [LOST 11356995 EVENTS]
[10365.052656] rcu_tort-155 15d..2. 10361405925us : sched_switch: prev_comm=rcu_torture_rea prev_pid=155 prev_prio=139 prev_state=R+ ==> next_comm=migration/15 next_pid=122 next_prio=0
[10365.055826] migratio-122 15d..4. 10361405926us : dl_server_stop <-dequeue_entities
[10365.057067] migratio-122 15d..2. 10361405930us : sched_switch: prev_comm=migration/15 prev_pid=122 prev_prio=0 prev_state=S ==> next_comm=rcuc/15 next_pid=123 next_prio=97
[10365.059501] rcuc/15-123 15d..2. 10361405932us : sched_switch: prev_comm=rcuc/15 prev_pid=123 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20364 next_prio=98
[10365.062001] rcu_tort-20364 15dN.6. 10361405933us : sched_wakeup: comm=migration/15 pid=122 prio=0 target_cpu=015
[10365.063623] rcu_tort-20364 15d..2. 10361405934us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20364 prev_prio=98 prev_state=R+ ==> next_comm=migration/15 next_pid=122 next_prio=0
[10365.066188] migratio-122 15d..2. 10361405942us : sched_switch: prev_comm=migration/15 prev_pid=122 prev_prio=0 prev_state=S ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.068655] <idle>-0 15dNh4. 10361406888us : sched_wakeup: comm=rcuc/15 pid=123 prio=97 target_cpu=015
[10365.070208] <idle>-0 15d..2. 10361406892us : sched_switch: prev_comm=swapper/15 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/15 next_pid=123 next_prio=97
[10365.072602] rcuc/15-123 15D..4. 10361406900us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=009
[10365.074195] rcuc/15-123 15d..2. 10361406905us : sched_switch: prev_comm=rcuc/15 prev_pid=123 prev_prio=97 prev_state=S ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.076593] <idle>-0 15dNh4. 10361407888us : sched_wakeup: comm=rcuc/15 pid=123 prio=97 target_cpu=015
[10365.078148] <idle>-0 15d..2. 10361407892us : sched_switch: prev_comm=swapper/15 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/15 next_pid=123 next_prio=97
[10365.080543] rcuc/15-123 15d..2. 10361407894us : sched_switch: prev_comm=rcuc/15 prev_pid=123 prev_prio=97 prev_state=S ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.082923] <idle>-0 15dNh4. 10361408887us : sched_wakeup: comm=rcuc/15 pid=123 prio=97 target_cpu=015
[10365.084486] <idle>-0 15d..2. 10361408891us : sched_switch: prev_comm=swapper/15 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/15 next_pid=123 next_prio=97
[10365.086879] rcuc/15-123 15D..4. 10361408900us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=009
[10365.088490] rcuc/15-123 15d..2. 10361408904us : sched_switch: prev_comm=rcuc/15 prev_pid=123 prev_prio=97 prev_state=S ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.090884] <idle>-0 15dNh4. 10361409888us : sched_wakeup: comm=rcuc/15 pid=123 prio=97 target_cpu=015
[10365.092448] <idle>-0 15d..2. 10361409891us : sched_switch: prev_comm=swapper/15 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/15 next_pid=123 next_prio=97
[10365.094844] rcuc/15-123 15D..5. 10361409894us : dl_server_start <-enqueue_task_fair
[10365.096109] rcuc/15-123 15DN.4. 10361409896us : sched_wakeup: comm=cpuhp/15 pid=121 prio=120 target_cpu=015
[10365.097699] rcuc/15-123 15d..2. 10361409897us : sched_switch: prev_comm=rcuc/15 prev_pid=123 prev_prio=97 prev_state=R+ ==> next_comm=cpuhp/15 next_pid=121 next_prio=120
[10365.100105] cpuhp/15-121 15d..3. 10361411047us : dl_server_stop <-dequeue_entities
[10365.101343] cpuhp/15-121 15d..2. 10361411049us : sched_switch: prev_comm=cpuhp/15 prev_pid=121 prev_prio=120 prev_state=D ==> next_comm=rcuc/15 next_pid=123 next_prio=97
[10365.103733] rcuc/15-123 15d..2. 10361411058us : sched_switch: prev_comm=rcuc/15 prev_pid=123 prev_prio=97 prev_state=S ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.106124] <idle>-0 15dNh4. 10361411889us : sched_wakeup: comm=rcuc/15 pid=123 prio=97 target_cpu=015
[10365.107685] <idle>-0 15d..2. 10361411893us : sched_switch: prev_comm=swapper/15 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/15 next_pid=123 next_prio=97
[10365.110079] rcuc/15-123 15d..2. 10361411895us : sched_switch: prev_comm=rcuc/15 prev_pid=123 prev_prio=97 prev_state=S ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.112478] <idle>-0 15dNh4. 10361481889us : sched_wakeup: comm=rcuc/15 pid=123 prio=97 target_cpu=015
[10365.114040] <idle>-0 15d..2. 10361481894us : sched_switch: prev_comm=swapper/15 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/15 next_pid=123 next_prio=97
[10365.116479] rcuc/15-123 15d..2. 10361481899us : sched_switch: prev_comm=rcuc/15 prev_pid=123 prev_prio=97 prev_state=S ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.118860] <idle>-0 15dNh4. 10361482888us : sched_wakeup: comm=rcuc/15 pid=123 prio=97 target_cpu=015
[10365.120434] <idle>-0 15d..2. 10361482891us : sched_switch: prev_comm=swapper/15 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/15 next_pid=123 next_prio=97
[10365.122827] rcuc/15-123 15d..2. 10361482902us : sched_switch: prev_comm=rcuc/15 prev_pid=123 prev_prio=97 prev_state=S ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.125222] <idle>-0 15dNh4. 10361549887us : sched_wakeup: comm=rcuc/15 pid=123 prio=97 target_cpu=015
[10365.126783] <idle>-0 15d..2. 10361549892us : sched_switch: prev_comm=swapper/15 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/15 next_pid=123 next_prio=97
[10365.129185] rcuc/15-123 15d..2. 10361549915us : sched_switch: prev_comm=rcuc/15 prev_pid=123 prev_prio=97 prev_state=S ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.131589] <idle>-0 15dNh4. 10361554887us : sched_wakeup: comm=rcuc/15 pid=123 prio=97 target_cpu=015
[10365.133146] <idle>-0 15d..2. 10361554891us : sched_switch: prev_comm=swapper/15 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/15 next_pid=123 next_prio=97
[10365.135552] rcuc/15-123 15d..2. 10361554893us : sched_switch: prev_comm=rcuc/15 prev_pid=123 prev_prio=97 prev_state=S ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.137946] <idle>-0 15dNh4. 10361555887us : sched_wakeup: comm=rcuc/15 pid=123 prio=97 target_cpu=015
[10365.139510] <idle>-0 15d..2. 10361555891us : sched_switch: prev_comm=swapper/15 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/15 next_pid=123 next_prio=97
[10365.141902] rcuc/15-123 15d..2. 10361555893us : sched_switch: prev_comm=rcuc/15 prev_pid=123 prev_prio=97 prev_state=S ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.144277] <idle>-0 15dNh4. 10362058891us : sched_wakeup: comm=rcuc/15 pid=123 prio=97 target_cpu=015
[10365.145846] <idle>-0 15d..2. 10362058897us : sched_switch: prev_comm=swapper/15 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/15 next_pid=123 next_prio=97
[10365.148236] rcuc/15-123 15d..2. 10362058901us : sched_switch: prev_comm=rcuc/15 prev_pid=123 prev_prio=97 prev_state=S ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.150635] <idle>-0 15dNh4. 10362059888us : sched_wakeup: comm=rcuc/15 pid=123 prio=97 target_cpu=015
[10365.152188] <idle>-0 15d..2. 10362059892us : sched_switch: prev_comm=swapper/15 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/15 next_pid=123 next_prio=97
[10365.154589] rcuc/15-123 15d..2. 10362059894us : sched_switch: prev_comm=rcuc/15 prev_pid=123 prev_prio=97 prev_state=S ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.156977] <idle>-0 15dN.4. 10362375893us : sched_wakeup: comm=ksoftirqd/15 pid=124 prio=97 target_cpu=015
[10365.158602] <idle>-0 15d..2. 10362375896us : sched_switch: prev_comm=swapper/15 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=ksoftirqd/15 next_pid=124 next_prio=97
[10365.161054] ksoftirq-124 15d..2. 10362375903us : sched_switch: prev_comm=ksoftirqd/15 prev_pid=124 prev_prio=97 prev_state=S ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.163517] <idle>-0 15dNh4. 10362518890us : sched_wakeup: comm=rcuc/15 pid=123 prio=97 target_cpu=015
[10365.165069] <idle>-0 15d..2. 10362518896us : sched_switch: prev_comm=swapper/15 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/15 next_pid=123 next_prio=97
[10365.167919] rcuc/15-123 15d..2. 10362518901us : sched_switch: prev_comm=rcuc/15 prev_pid=123 prev_prio=97 prev_state=S ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.170759] <idle>-0 15dNh4. 10362519888us : sched_wakeup: comm=rcuc/15 pid=123 prio=97 target_cpu=015
[10365.171987] <idle>-0 15d..2. 10362519891us : sched_switch: prev_comm=swapper/15 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/15 next_pid=123 next_prio=97
[10365.173866] rcuc/15-123 15d..2. 10362519894us : sched_switch: prev_comm=rcuc/15 prev_pid=123 prev_prio=97 prev_state=S ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.175741] <idle>-0 15dNh4. 10362558889us : sched_wakeup: comm=rcuc/15 pid=123 prio=97 target_cpu=015
[10365.176967] <idle>-0 15d..2. 10362558894us : sched_switch: prev_comm=swapper/15 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/15 next_pid=123 next_prio=97
[10365.178844] rcuc/15-123 15d..2. 10362558898us : sched_switch: prev_comm=rcuc/15 prev_pid=123 prev_prio=97 prev_state=S ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.180719] <idle>-0 15dNh4. 10362559888us : sched_wakeup: comm=rcuc/15 pid=123 prio=97 target_cpu=015
[10365.181940] <idle>-0 15d..2. 10362559891us : sched_switch: prev_comm=swapper/15 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/15 next_pid=123 next_prio=97
[10365.183819] rcuc/15-123 15d..2. 10362559894us : sched_switch: prev_comm=rcuc/15 prev_pid=123 prev_prio=97 prev_state=S ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.185693] <idle>-0 15dNh4. 10362570888us : sched_wakeup: comm=rcuc/15 pid=123 prio=97 target_cpu=015
[10365.186914] <idle>-0 15d..2. 10362570892us : sched_switch: prev_comm=swapper/15 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/15 next_pid=123 next_prio=97
[10365.188810] rcuc/15-123 15d..2. 10362570894us : sched_switch: prev_comm=rcuc/15 prev_pid=123 prev_prio=97 prev_state=S ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.190677] <idle>-0 15dNh4. 10362571887us : sched_wakeup: comm=rcuc/15 pid=123 prio=97 target_cpu=015
[10365.191888] <idle>-0 15d..2. 10362571890us : sched_switch: prev_comm=swapper/15 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/15 next_pid=123 next_prio=97
[10365.193759] rcuc/15-123 15d..2. 10362571893us : sched_switch: prev_comm=rcuc/15 prev_pid=123 prev_prio=97 prev_state=S ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.195626] <idle>-0 15dNh4. 10362592888us : sched_wakeup: comm=rcuc/15 pid=123 prio=97 target_cpu=015
[10365.196848] <idle>-0 15d..2. 10362592892us : sched_switch: prev_comm=swapper/15 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/15 next_pid=123 next_prio=97
[10365.198726] rcuc/15-123 15d..2. 10362592894us : sched_switch: prev_comm=rcuc/15 prev_pid=123 prev_prio=97 prev_state=S ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.200601] <idle>-0 15dNh4. 10362593887us : sched_wakeup: comm=rcuc/15 pid=123 prio=97 target_cpu=015
[10365.201830] <idle>-0 15d..2. 10362593891us : sched_switch: prev_comm=swapper/15 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/15 next_pid=123 next_prio=97
[10365.203732] rcuc/15-123 15d..2. 10362593893us : sched_switch: prev_comm=rcuc/15 prev_pid=123 prev_prio=97 prev_state=S ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.205643] CPU:5 [LOST 11877163 EVENTS]
[10365.205643] rcu_tort-20362 5d..2. 10362867884us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/5 next_pid=54 next_prio=97
[10365.208125] ksoftirq-54 5d.s3. 10362867889us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=009
[10365.209380] ksoftirq-54 5d..2. 10362867892us : sched_switch: prev_comm=ksoftirqd/5 prev_pid=54 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.211383] CPU:7 [LOST 11917504 EVENTS]
[10365.211383] rcu_tort-20355 7d..2. 10362881894us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.213886] rcu_pree-16 7d..2. 10362881901us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10365.215876] rcu_tort-20355 7dN.4. 10362886882us : sched_wakeup: comm=ksoftirqd/7 pid=68 prio=97 target_cpu=007
[10365.217131] rcu_tort-20355 7d..2. 10362886883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/7 next_pid=68 next_prio=97
[10365.219142] ksoftirq-68 7d.s3. 10362886888us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=013
[10365.220412] ksoftirq-68 7d..2. 10362886891us : sched_switch: prev_comm=ksoftirqd/7 prev_pid=68 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10365.222440] <idle>-0 15dN.4. 10362887892us : sched_wakeup: comm=ksoftirqd/15 pid=124 prio=97 target_cpu=015
[10365.223720] <idle>-0 15d..2. 10362887894us : sched_switch: prev_comm=swapper/15 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=ksoftirqd/15 next_pid=124 next_prio=97
[10365.225666] ksoftirq-124 15d.s5. 10362887901us : dl_server_start <-enqueue_task_fair
[10365.226662] ksoftirq-124 15dNs4. 10362887903us : sched_wakeup: comm=kworker/15:2 pid=18749 prio=120 target_cpu=015
[10365.227967] ksoftirq-124 15d..2. 10362887906us : sched_switch: prev_comm=ksoftirqd/15 prev_pid=124 prev_prio=97 prev_state=S ==> next_comm=kworker/15:2 next_pid=18749 next_prio=120
[10365.229979] kworker/-18749 15d..3. 10362887914us : dl_server_stop <-dequeue_entities
[10365.230959] kworker/-18749 15d..2. 10362887918us : sched_switch: prev_comm=kworker/15:2 prev_pid=18749 prev_prio=120 prev_state=I ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.232919] rcu_tort-20355 7dN.4. 10362888886us : sched_wakeup: comm=ksoftirqd/7 pid=68 prio=97 target_cpu=007
[10365.234172] rcu_tort-20355 7d..2. 10362888888us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/7 next_pid=68 next_prio=97
[10365.236179] ksoftirq-68 7d.s5. 10362888893us : dl_server_start <-enqueue_task_fair
[10365.237175] ksoftirq-68 7dNs4. 10362888895us : sched_wakeup: comm=kworker/7:0 pid=19250 prio=120 target_cpu=007
[10365.238481] ksoftirq-68 7d..2. 10362888898us : sched_switch: prev_comm=ksoftirqd/7 prev_pid=68 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10365.240482] rcu_tort-20355 7d..2. 10362896899us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.242503] rcu_pree-16 7d..2. 10362896913us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10365.244510] rcu_tort-20355 7dN.4. 10362900887us : sched_wakeup: comm=ksoftirqd/7 pid=68 prio=97 target_cpu=007
[10365.245769] rcu_tort-20355 7d..2. 10362900889us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/7 next_pid=68 next_prio=97
[10365.247779] ksoftirq-68 7d.s3. 10362900897us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=000
[10365.249032] ksoftirq-68 7d..2. 10362900902us : sched_switch: prev_comm=ksoftirqd/7 prev_pid=68 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10365.251032] CPU:9 [LOST 11288286 EVENTS]
[10365.251032] rcu_tort-20365 9d..2. 10362909888us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/9 next_pid=82 next_prio=97
[10365.253536] ksoftirq-82 9d.s3. 10362909895us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=013
[10365.254791] ksoftirq-82 9d..2. 10362909900us : sched_switch: prev_comm=ksoftirqd/9 prev_pid=82 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10365.256805] rcu_tort-20365 9d..2. 10362933893us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.258827] rcu_pree-16 9d..2. 10362933905us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10365.260834] rcu_tort-20365 9dN.3. 10362938884us : sched_wakeup: comm=ksoftirqd/9 pid=82 prio=97 target_cpu=009
[10365.262092] rcu_tort-20365 9d..2. 10362938887us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/9 next_pid=82 next_prio=97
[10365.264108] ksoftirq-82 9d.s3. 10362938895us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=000
[10365.265359] ksoftirq-82 9d..2. 10362938899us : sched_switch: prev_comm=ksoftirqd/9 prev_pid=82 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10365.267351] rcu_tort-20362 5d..2. 10362947897us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.269383] rcu_pree-16 5d..2. 10362947910us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.271407] rcu_tort-20362 5dN.3. 10362952884us : sched_wakeup: comm=ksoftirqd/5 pid=54 prio=97 target_cpu=005
[10365.272674] rcu_tort-20362 5d..2. 10362952885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/5 next_pid=54 next_prio=97
[10365.274684] ksoftirq-54 5d.s3. 10362952892us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=011
[10365.275945] ksoftirq-54 5d..2. 10362952897us : sched_switch: prev_comm=ksoftirqd/5 prev_pid=54 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.277945] rcu_tort-20362 5d..2. 10362972896us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.279970] rcu_pree-16 5d..2. 10362972908us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.281988] rcu_tort-20362 5dN.4. 10362977883us : sched_wakeup: comm=ksoftirqd/5 pid=54 prio=97 target_cpu=005
[10365.283245] rcu_tort-20362 5d..2. 10362977885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/5 next_pid=54 next_prio=97
[10365.285265] ksoftirq-54 5d.s3. 10362977891us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=013
[10365.286547] ksoftirq-54 5d..2. 10362977895us : sched_switch: prev_comm=ksoftirqd/5 prev_pid=54 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.288558] rcu_tort-20355 7d..2. 10362981897us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.290586] rcu_pree-16 7d..2. 10362981909us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10365.292597] rcu_tort-20355 7dN.4. 10362986884us : sched_wakeup: comm=ksoftirqd/7 pid=68 prio=97 target_cpu=007
[10365.293848] rcu_tort-20355 7d..2. 10362986885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/7 next_pid=68 next_prio=97
[10365.295864] ksoftirq-68 7d.s3. 10362986892us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=002
[10365.297118] CPU:2 [LOST 11706162 EVENTS]
[10365.297118] rcu_tort-20360 2d..2. 10362986895us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.299615] ksoftirq-68 7d..2. 10362986896us : sched_switch: prev_comm=ksoftirqd/7 prev_pid=68 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10365.301619] rcu_pree-16 2d..2. 10362986906us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10365.303612] CPU:0 [LOST 26252193 EVENTS]
[10365.303612] rcu_tort-173 0d..2. 10362987887us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/0 next_pid=15 next_prio=97
[10365.306058] ksoftirq-15 0d..2. 10362987895us : sched_switch: prev_comm=ksoftirqd/0 prev_pid=15 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10365.308043] rcu_tort-20362 5dN.3. 10362988884us : sched_wakeup: comm=ksoftirqd/5 pid=54 prio=97 target_cpu=005
[10365.309307] rcu_tort-20362 5d..2. 10362988885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/5 next_pid=54 next_prio=97
[10365.311323] ksoftirq-54 5d..2. 10362988895us : sched_switch: prev_comm=ksoftirqd/5 prev_pid=54 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.313309] rcu_tort-20355 7dN.3. 10362989884us : sched_wakeup: comm=ksoftirqd/7 pid=68 prio=97 target_cpu=007
[10365.314565] rcu_tort-20355 7d..2. 10362989885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/7 next_pid=68 next_prio=97
[10365.316578] ksoftirq-68 7d..2. 10362989892us : sched_switch: prev_comm=ksoftirqd/7 prev_pid=68 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10365.318606] rcu_tort-20360 2dN.3. 10362991882us : sched_wakeup: comm=ksoftirqd/2 pid=32 prio=97 target_cpu=002
[10365.319861] rcu_tort-20360 2d..2. 10362991883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/2 next_pid=32 next_prio=97
[10365.321866] ksoftirq-32 2d.s3. 10362991887us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=013
[10365.323121] ksoftirq-32 2d..2. 10362991891us : sched_switch: prev_comm=ksoftirqd/2 prev_pid=32 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10365.325194] rcu_tort-20365 9d..2. 10362996890us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.327220] rcu_pree-16 9d..2. 10362996895us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10365.329255] rcu_tort-20365 9dN.3. 10363001885us : sched_wakeup: comm=ksoftirqd/9 pid=82 prio=97 target_cpu=009
[10365.330530] rcu_tort-20365 9d..2. 10363001886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/9 next_pid=82 next_prio=97
[10365.332545] ksoftirq-82 9d.s3. 10363001893us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=002
[10365.333807] rcu_tort-20360 2d..2. 10363001894us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.335834] ksoftirq-82 9d..2. 10363001897us : sched_switch: prev_comm=ksoftirqd/9 prev_pid=82 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10365.337856] rcu_pree-16 2d..2. 10363001904us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10365.339865] rcu_tort-20360 2dN.3. 10363005883us : sched_wakeup: comm=ksoftirqd/2 pid=32 prio=97 target_cpu=002
[10365.341125] rcu_tort-20360 2d..2. 10363005884us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/2 next_pid=32 next_prio=97
[10365.343131] ksoftirq-32 2d.s3. 10363005889us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=000
[10365.344392] ksoftirq-32 2d..2. 10363005893us : sched_switch: prev_comm=ksoftirqd/2 prev_pid=32 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10365.346402] rcu_tort-173 0d..2. 10363005894us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.348384] rcu_pree-16 0d..2. 10363005904us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10365.350378] rcu_tort-173 0dN.3. 10363009883us : sched_wakeup: comm=ksoftirqd/0 pid=15 prio=97 target_cpu=000
[10365.351644] rcu_tort-173 0d..2. 10363009884us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/0 next_pid=15 next_prio=97
[10365.353636] ksoftirq-15 0d.s3. 10363009892us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=002
[10365.354895] ksoftirq-15 0d..2. 10363009894us : sched_switch: prev_comm=ksoftirqd/0 prev_pid=15 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10365.356885] rcu_tort-20360 2d..2. 10363009895us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.358899] rcu_pree-16 2d..2. 10363009905us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10365.360885] rcu_tort-20360 2dN.3. 10363014884us : sched_wakeup: comm=ksoftirqd/2 pid=32 prio=97 target_cpu=002
[10365.362147] rcu_tort-20360 2d..2. 10363014885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/2 next_pid=32 next_prio=97
[10365.364166] ksoftirq-32 2d.s3. 10363014891us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10365.365443] ksoftirq-32 2d..2. 10363014895us : sched_switch: prev_comm=ksoftirqd/2 prev_pid=32 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10365.367432] rcu_tort-20362 5d..2. 10363018895us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.369451] rcu_pree-16 5d..2. 10363018906us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.371454] rcu_tort-20362 5dN.3. 10363023884us : sched_wakeup: comm=ksoftirqd/5 pid=54 prio=97 target_cpu=005
[10365.372717] rcu_tort-20362 5d..2. 10363023885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/5 next_pid=54 next_prio=97
[10365.374725] ksoftirq-54 5d.s3. 10363023891us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=000
[10365.375980] rcu_tort-173 0d..2. 10363023893us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.377982] ksoftirq-54 5d..2. 10363023895us : sched_switch: prev_comm=ksoftirqd/5 prev_pid=54 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.379987] rcu_pree-16 0d..2. 10363023903us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10365.381988] rcu_tort-173 0dN.4. 10363027883us : sched_wakeup: comm=ksoftirqd/0 pid=15 prio=97 target_cpu=000
[10365.383242] rcu_tort-173 0d..2. 10363027884us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/0 next_pid=15 next_prio=97
[10365.385233] ksoftirq-15 0d.s3. 10363027889us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10365.386517] ksoftirq-15 0d..2. 10363027891us : sched_switch: prev_comm=ksoftirqd/0 prev_pid=15 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10365.388514] rcu_tort-20355 7d..2. 10363031893us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.390640] rcu_pree-16 7d..2. 10363031903us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10365.392649] rcu_tort-20355 7dN.4. 10363036885us : sched_wakeup: comm=ksoftirqd/7 pid=68 prio=97 target_cpu=007
[10365.393943] rcu_tort-20355 7d..2. 10363036886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/7 next_pid=68 next_prio=97
[10365.395957] ksoftirq-68 7d.s3. 10363036892us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10365.397212] ksoftirq-68 7d..2. 10363036896us : sched_switch: prev_comm=ksoftirqd/7 prev_pid=68 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10365.399224] rcu_tort-20365 9d..2. 10363040896us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.401273] rcu_pree-16 9d..2. 10363040911us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10365.403313] rcu_tort-20365 9dN.3. 10363044885us : sched_wakeup: comm=ksoftirqd/9 pid=82 prio=97 target_cpu=009
[10365.404667] rcu_tort-20365 9d..2. 10363044886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/9 next_pid=82 next_prio=97
[10365.406693] ksoftirq-82 9d.s3. 10363044894us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10365.407949] ksoftirq-82 9d..2. 10363044898us : sched_switch: prev_comm=ksoftirqd/9 prev_pid=82 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10365.409952] rcu_tort-20355 7d..2. 10363053893us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.411977] rcu_pree-16 7d..2. 10363053902us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10365.413997] rcu_tort-20355 7dN.3. 10363057885us : sched_wakeup: comm=ksoftirqd/7 pid=68 prio=97 target_cpu=007
[10365.415261] rcu_tort-20355 7d..2. 10363057886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/7 next_pid=68 next_prio=97
[10365.417290] ksoftirq-68 7d.s3. 10363057893us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=005
[10365.418584] rcu_tort-20362 5d..2. 10363057897us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.420608] ksoftirq-68 7d..2. 10363057897us : sched_switch: prev_comm=ksoftirqd/7 prev_pid=68 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10365.422641] rcu_pree-16 5d..2. 10363057909us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.424655] rcu_tort-20362 5dN.3. 10363062883us : sched_wakeup: comm=ksoftirqd/5 pid=54 prio=97 target_cpu=005
[10365.425913] rcu_tort-20362 5d..2. 10363062884us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/5 next_pid=54 next_prio=97
[10365.427976] ksoftirq-54 5d.s3. 10363062890us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=002
[10365.429229] rcu_tort-20360 2d..2. 10363062892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.431266] ksoftirq-54 5d..2. 10363062893us : sched_switch: prev_comm=ksoftirqd/5 prev_pid=54 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.433291] rcu_pree-16 2d..2. 10363062902us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10365.435306] rcu_tort-20360 2dN.3. 10363067884us : sched_wakeup: comm=ksoftirqd/2 pid=32 prio=97 target_cpu=002
[10365.436595] rcu_tort-20360 2d..2. 10363067885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/2 next_pid=32 next_prio=97
[10365.438611] ksoftirq-32 2d.s3. 10363067891us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=005
[10365.439875] rcu_tort-20362 5d..2. 10363067892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.441892] ksoftirq-32 2d..2. 10363067895us : sched_switch: prev_comm=ksoftirqd/2 prev_pid=32 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10365.443906] rcu_pree-16 5d..2. 10363067902us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.445917] rcu_tort-20360 2dN.3. 10363071883us : sched_wakeup: comm=ksoftirqd/2 pid=32 prio=97 target_cpu=002
[10365.447176] rcu_tort-20360 2d..2. 10363071884us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/2 next_pid=32 next_prio=97
[10365.449196] rcu_tort-20362 5dN.3. 10363071890us : sched_wakeup: comm=ksoftirqd/5 pid=54 prio=97 target_cpu=005
[10365.450481] ksoftirq-32 2d..2. 10363071892us : sched_switch: prev_comm=ksoftirqd/2 prev_pid=32 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10365.452489] rcu_tort-20362 5d..2. 10363071892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/5 next_pid=54 next_prio=97
[10365.454519] ksoftirq-54 5d.s3. 10363071901us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10365.455786] ksoftirq-54 5d..2. 10363071906us : sched_switch: prev_comm=ksoftirqd/5 prev_pid=54 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.457793] rcu_tort-173 0d..2. 10363075895us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.459787] rcu_pree-16 0d..2. 10363075905us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10365.461871] rcu_tort-173 0dN.3. 10363079885us : sched_wakeup: comm=ksoftirqd/0 pid=15 prio=97 target_cpu=000
[10365.463468] rcu_tort-173 0d..2. 10363079886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/0 next_pid=15 next_prio=97
[10365.465953] ksoftirq-15 0d.s3. 10363079893us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=005
[10365.467513] ksoftirq-15 0d..2. 10363079895us : sched_switch: prev_comm=ksoftirqd/0 prev_pid=15 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10365.470325] rcu_tort-20362 5d..2. 10363079897us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.473351] rcu_pree-16 5d..2. 10363079912us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.475350] rcu_tort-20362 5dN.3. 10363083890us : sched_wakeup: comm=ksoftirqd/5 pid=54 prio=97 target_cpu=005
[10365.476629] rcu_tort-20362 5d..2. 10363083892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/5 next_pid=54 next_prio=97
[10365.478635] ksoftirq-54 5d.s3. 10363083902us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=007
[10365.479886] rcu_tort-20355 7d..2. 10363083907us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.481891] ksoftirq-54 5d..2. 10363083908us : sched_switch: prev_comm=ksoftirqd/5 prev_pid=54 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.483888] rcu_pree-16 7d..2. 10363083919us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10365.485875] rcu_tort-173 0dN.4. 10363087886us : sched_wakeup: comm=ksoftirqd/0 pid=15 prio=97 target_cpu=000
[10365.487126] rcu_tort-173 0d..2. 10363087887us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/0 next_pid=15 next_prio=97
[10365.489100] rcu_tort-20355 7dN.3. 10363087889us : sched_wakeup: comm=ksoftirqd/7 pid=68 prio=97 target_cpu=007
[10365.490348] ksoftirq-15 0d..2. 10363087891us : sched_switch: prev_comm=ksoftirqd/0 prev_pid=15 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10365.492319] rcu_tort-20355 7d..2. 10363087892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/7 next_pid=68 next_prio=97
[10365.494319] ksoftirq-68 7d.s3. 10363087901us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=009
[10365.495579] rcu_tort-20365 9d..2. 10363087904us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.497578] ksoftirq-68 7d..2. 10363087906us : sched_switch: prev_comm=ksoftirqd/7 prev_pid=68 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10365.499572] rcu_pree-16 9d..2. 10363087917us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10365.501563] rcu_tort-20365 9dN.3. 10363092884us : sched_wakeup: comm=ksoftirqd/9 pid=82 prio=97 target_cpu=009
[10365.502814] rcu_tort-20365 9d..2. 10363092886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/9 next_pid=82 next_prio=97
[10365.504811] CPU:13 [LOST 11829010 EVENTS]
[10365.504811] rcu_tort-154 13d..2. 10363092886us : sched_switch: prev_comm=rcu_torture_rea prev_pid=154 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=161 next_prio=139
[10365.507381] ksoftirq-82 9d.s3. 10363092891us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=005
[10365.508624] rcu_tort-20362 5d..2. 10363092894us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.510624] ksoftirq-82 9d..2. 10363092896us : sched_switch: prev_comm=ksoftirqd/9 prev_pid=82 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10365.512618] rcu_tort-161 13d..2. 10363092897us : sched_switch: prev_comm=rcu_torture_rea prev_pid=161 prev_prio=139 prev_state=I ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10365.514655] rcu_pree-16 5d..3. 10363092903us : sched_wakeup: comm=rcub/8 pid=35 prio=97 target_cpu=002
[10365.515853] rcu_tort-20360 2d..2. 10363092905us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=rcub/8 next_pid=35 next_prio=97
[10365.517797] rcu_pree-16 5d..2. 10363092912us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.519795] rcu_tort-20362 5d..2. 10363092922us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=164 next_prio=97
[10365.521849] rcub/8-35 2d..2. 10363092926us : sched_switch: prev_comm=rcub/8 prev_pid=35 prev_prio=97 prev_state=D ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10365.523785] rcu_tort-164 5d..4. 10363092932us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=009
[10365.525036] rcu_tort-20365 9d..2. 10363092934us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.527047] rcu_tort-164 5dN.4. 10363092939us : sched_wakeup: comm=rcub/8 pid=35 prio=97 target_cpu=002
[10365.528244] rcu_tort-164 5d..2. 10363092941us : sched_switch: prev_comm=rcu_torture_rea prev_pid=164 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.530305] rcu_tort-20360 2d..2. 10363092941us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=rcub/8 next_pid=35 next_prio=97
[10365.532306] rcub/8-35 2d..2. 10363092945us : sched_switch: prev_comm=rcub/8 prev_pid=35 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10365.534234] rcu_pree-16 9d..2. 10363092957us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10365.536216] rcu_tort-158 13DNh3. 10363093399us : sched_wakeup: comm=rcu_torture_rea pid=161 prio=139 target_cpu=013
[10365.537531] rcu_tort-158 13d..2. 10363093403us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=161 next_prio=139
[10365.539582] rcu_tort-20360 2dNh4. 10363093881us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10365.540780] rcu_tort-20362 5dNh4. 10363093881us : sched_wakeup: comm=rcuc/5 pid=53 prio=97 target_cpu=005
[10365.541977] rcu_tort-20365 9dNh3. 10363093882us : sched_wakeup: comm=rcuc/9 pid=81 prio=97 target_cpu=009
[10365.543174] rcu_tort-161 13DNh3. 10363093882us : sched_wakeup: comm=rcuc/13 pid=109 prio=97 target_cpu=013
[10365.544398] rcu_tort-173 0dNh3. 10363093883us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10365.545602] rcu_tort-20360 2d..2. 10363093885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10365.547552] rcu_tort-20362 5d..2. 10363093885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/5 next_pid=53 next_prio=97
[10365.549510] rcu_tort-20355 7dNh3. 10363093885us : sched_wakeup: comm=rcuc/7 pid=67 prio=97 target_cpu=007
[10365.550707] rcu_tort-20365 9d..2. 10363093885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/9 next_pid=81 next_prio=97
[10365.552650] rcu_tort-173 0d..2. 10363093887us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10365.554570] rcuc/9-81 9d..2. 10363093889us : sched_switch: prev_comm=rcuc/9 prev_pid=81 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10365.556499] <idle>-0 15dNh4. 10363093889us : sched_wakeup: comm=rcuc/15 pid=123 prio=97 target_cpu=015
[10365.557716] rcuc/5-53 5d..2. 10363093891us : sched_switch: prev_comm=rcuc/5 prev_pid=53 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.559650] rcu_tort-20355 7d..2. 10363093892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/7 next_pid=67 next_prio=97
[10365.561595] rcuc/2-31 2d..2. 10363093892us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10365.563555] <idle>-0 15d..2. 10363093895us : sched_switch: prev_comm=swapper/15 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/15 next_pid=123 next_prio=97
[10365.565441] rcuc/0-19 0d..2. 10363093897us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10365.567347] rcuc/15-123 15d..2. 10363093899us : sched_switch: prev_comm=rcuc/15 prev_pid=123 prev_prio=97 prev_state=S ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.569218] rcuc/7-67 7D..4. 10363093903us : sched_wakeup: comm=rcu_torture_fak pid=151 prio=139 target_cpu=007
[10365.570536] rcuc/7-67 7d..2. 10363093910us : sched_switch: prev_comm=rcuc/7 prev_pid=67 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10365.572486] rcu_tort-20360 2dNh3. 10363094881us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10365.573700] rcu_tort-20362 5dNh3. 10363094881us : sched_wakeup: comm=rcuc/5 pid=53 prio=97 target_cpu=005
[10365.574916] rcu_tort-173 0dNh3. 10363094883us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10365.576124] rcu_tort-20355 7dNh4. 10363094883us : sched_wakeup: comm=rcuc/7 pid=67 prio=97 target_cpu=007
[10365.577328] rcu_tort-20362 5d..2. 10363094884us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/5 next_pid=53 next_prio=97
[10365.579296] rcu_tort-20360 2d..2. 10363094884us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10365.581262] rcuc/5-53 5d..2. 10363094887us : sched_switch: prev_comm=rcuc/5 prev_pid=53 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.583233] rcu_tort-173 0d..2. 10363094887us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10365.585180] <idle>-0 15dNh4. 10363094887us : sched_wakeup: comm=rcuc/15 pid=123 prio=97 target_cpu=015
[10365.586422] rcu_tort-20355 7d..2. 10363094888us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/7 next_pid=67 next_prio=97
[10365.588369] rcuc/2-31 2d..2. 10363094890us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10365.590303] <idle>-0 15d..2. 10363094891us : sched_switch: prev_comm=swapper/15 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/15 next_pid=123 next_prio=97
[10365.592194] rcuc/7-67 7d..2. 10363094893us : sched_switch: prev_comm=rcuc/7 prev_pid=67 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10365.594145] rcuc/15-123 15d..2. 10363094894us : sched_switch: prev_comm=rcuc/15 prev_pid=123 prev_prio=97 prev_state=S ==> next_comm=swapper/15 next_pid=0 next_prio=120
[10365.596042] rcuc/0-19 0d..2. 10363094896us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10365.597971] rcu_tort-161 13d..2. 10363096887us : sched_switch: prev_comm=rcu_torture_rea prev_pid=161 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10365.600039] rcu_tort-158 13d..2. 10363096892us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=I ==> next_comm=rcu_torture_rea next_pid=154 next_prio=139
[10365.602093] rcu_tort-154 13d..2. 10363096895us : sched_switch: prev_comm=rcu_torture_rea prev_pid=154 prev_prio=139 prev_state=I ==> next_comm=rcu_torture_rea next_pid=161 next_prio=139
[10365.604146] rcu_tort-161 13dNh3. 10363097395us : sched_wakeup: comm=rcu_torture_rea pid=158 prio=139 target_cpu=013
[10365.606152] rcu_tort-161 13dNh3. 10363097397us : sched_wakeup: comm=rcu_torture_rea pid=154 prio=139 target_cpu=013
[10365.608175] rcu_tort-161 13d..2. 10363097400us : sched_switch: prev_comm=rcu_torture_rea prev_pid=161 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=154 next_prio=139
[10365.610619] rcu_tort-20362 5dN.4. 10363097884us : sched_wakeup: comm=ksoftirqd/5 pid=54 prio=97 target_cpu=005
[10365.614183] rcu_tort-20365 9dN.4. 10363097885us : sched_wakeup: comm=ksoftirqd/9 pid=82 prio=97 target_cpu=009
[10365.615957] rcu_tort-20362 5d..2. 10363097885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/5 next_pid=54 next_prio=97
[10365.617964] rcu_tort-20365 9d..2. 10363097886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/9 next_pid=82 next_prio=97
[10365.619963] ksoftirq-54 5d..2. 10363097891us : sched_switch: prev_comm=ksoftirqd/5 prev_pid=54 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.621960] ksoftirq-82 9d.s3. 10363097893us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=007
[10365.623207] ksoftirq-82 9d..2. 10363097894us : sched_switch: prev_comm=ksoftirqd/9 prev_pid=82 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10365.625193] rcu_tort-20355 7d..2. 10363097898us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.627218] rcu_pree-16 7d..2. 10363097914us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10365.629206] rcu_tort-154 13d..2. 10363100886us : sched_switch: prev_comm=rcu_torture_rea prev_pid=154 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10365.632564] rcu_tort-20355 7dN.4. 10363101890us : sched_wakeup: comm=ksoftirqd/7 pid=68 prio=97 target_cpu=007
[10365.634201] rcu_tort-20355 7d..2. 10363101892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/7 next_pid=68 next_prio=97
[10365.637052] ksoftirq-68 7d.s3. 10363101900us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=011
[10365.638912] ksoftirq-68 7d..2. 10363101905us : sched_switch: prev_comm=ksoftirqd/7 prev_pid=68 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10365.641827] rcu_tort-158 13d..2. 10363103930us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=161 next_prio=139
[10365.644579] rcu_tort-161 13d..2. 10363103937us : sched_switch: prev_comm=rcu_torture_rea prev_pid=161 prev_prio=139 prev_state=I ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10365.646611] rcu_tort-158 13dNh3. 10363104439us : sched_wakeup: comm=rcu_torture_rea pid=161 prio=139 target_cpu=013
[10365.647910] rcu_tort-158 13d..2. 10363104442us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=161 next_prio=139
[10365.649949] rcu_tort-20365 9d..2. 10363106893us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.651938] rcu_pree-16 9d..2. 10363106906us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10365.653925] rcu_tort-161 13d..2. 10363107885us : sched_switch: prev_comm=rcu_torture_rea prev_pid=161 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=158 next_prio=139
[10365.655964] rcu_tort-158 13d..2. 10363110046us : sched_switch: prev_comm=rcu_torture_rea prev_pid=158 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=154 next_prio=139
[10365.658042] rcu_tort-154 13d..2. 10363110053us : sched_switch: prev_comm=rcu_torture_rea prev_pid=154 prev_prio=139 prev_state=I ==> next_comm=rcu_torture_rea next_pid=161 next_prio=139
[10365.660100] rcu_tort-161 13dNh3. 10363110557us : sched_wakeup: comm=rcu_torture_rea pid=154 prio=139 target_cpu=013
[10365.661436] rcu_tort-161 13d..2. 10363110562us : sched_switch: prev_comm=rcu_torture_rea prev_pid=161 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=154 next_prio=139
[10365.663492] rcu_tort-20365 9dN.4. 10363111884us : sched_wakeup: comm=ksoftirqd/9 pid=82 prio=97 target_cpu=009
[10365.664749] rcu_tort-20365 9d..2. 10363111886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/9 next_pid=82 next_prio=97
[10365.666756] rcu_tort-154 13d..2. 10363111892us : sched_switch: prev_comm=rcu_torture_rea prev_pid=154 prev_prio=139 prev_state=R+ ==> next_comm=ksoftirqd/13 next_pid=110 next_prio=97
[10365.668773] ksoftirq-82 9d.s3. 10363111895us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=011
[10365.670027] ksoftirq-82 9d..2. 10363111900us : sched_switch: prev_comm=ksoftirqd/9 prev_pid=82 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10365.672021] ksoftirq-110 13d..2. 10363111903us : sched_switch: prev_comm=ksoftirqd/13 prev_pid=110 prev_prio=97 prev_state=S ==> next_comm=rcuc/13 next_pid=109 next_prio=97
[10365.673937] rcuc/13-109 13d..2. 10363111907us : sched_switch: prev_comm=rcuc/13 prev_pid=109 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10365.675911] rcu_tort-20366 13dNh4. 10363112883us : sched_wakeup: comm=rcuc/13 pid=109 prio=97 target_cpu=013
[10365.677125] rcu_tort-20366 13d..2. 10363112889us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/13 next_pid=109 next_prio=97
[10365.679137] rcuc/13-109 13d..2. 10363112896us : sched_switch: prev_comm=rcuc/13 prev_pid=109 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10365.681096] rcu_tort-20366 13d..2. 10363116894us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.683117] rcu_pree-16 13d..2. 10363116907us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10365.685132] rcu_tort-20366 13dN.3. 10363121882us : sched_wakeup: comm=ksoftirqd/13 pid=110 prio=97 target_cpu=013
[10365.686445] rcu_tort-20366 13d..2. 10363121883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/13 next_pid=110 next_prio=97
[10365.688481] ksoftirq-110 13d.s3. 10363121889us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=011
[10365.689743] ksoftirq-110 13d..2. 10363121893us : sched_switch: prev_comm=ksoftirqd/13 prev_pid=110 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10365.691763] CPU:11 [LOST 10977971 EVENTS]
[10365.691763] rcu_tort-20367 11d..2. 10363126886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/11 next_pid=96 next_prio=97
[10365.694261] ksoftirq-96 11d.s3. 10363126891us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=000
[10365.695548] rcu_tort-173 0d..2. 10363126893us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.697542] ksoftirq-96 11d..2. 10363126895us : sched_switch: prev_comm=ksoftirqd/11 prev_pid=96 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10365.699583] rcu_pree-16 0d..2. 10363126903us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10365.701579] rcu_tort-173 0dN.3. 10363130886us : sched_wakeup: comm=ksoftirqd/0 pid=15 prio=97 target_cpu=000
[10365.702840] rcu_tort-173 0d..2. 10363130887us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/0 next_pid=15 next_prio=97
[10365.704826] ksoftirq-15 0d.s3. 10363130893us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=007
[10365.706080] rcu_tort-20355 7d..2. 10363130895us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.708095] ksoftirq-15 0d..2. 10363130896us : sched_switch: prev_comm=ksoftirqd/0 prev_pid=15 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10365.710066] rcu_pree-16 7d..2. 10363130905us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10365.712060] rcu_tort-20355 7dN.3. 10363135882us : sched_wakeup: comm=ksoftirqd/7 pid=68 prio=97 target_cpu=007
[10365.713319] rcu_tort-20355 7d..2. 10363135883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/7 next_pid=68 next_prio=97
[10365.715326] ksoftirq-68 7d.s3. 10363135889us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=013
[10365.716604] rcu_tort-20366 13d..2. 10363135892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.718621] ksoftirq-68 7d..2. 10363135893us : sched_switch: prev_comm=ksoftirqd/7 prev_pid=68 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10365.720627] rcu_pree-16 13d..2. 10363135903us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10365.722630] rcu_tort-20366 13dN.4. 10363140882us : sched_wakeup: comm=ksoftirqd/13 pid=110 prio=97 target_cpu=013
[10365.723905] rcu_tort-20366 13d..2. 10363140883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/13 next_pid=110 next_prio=97
[10365.725929] ksoftirq-110 13d.s3. 10363140889us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=000
[10365.727184] rcu_tort-173 0d..2. 10363140892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.729173] ksoftirq-110 13d..2. 10363140892us : sched_switch: prev_comm=ksoftirqd/13 prev_pid=110 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10365.731175] rcu_pree-16 0d..2. 10363140902us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10365.733146] rcu_tort-173 0dN.3. 10363144885us : sched_wakeup: comm=ksoftirqd/0 pid=15 prio=97 target_cpu=000
[10365.734423] rcu_tort-20365 9dN.4. 10363144885us : sched_wakeup: comm=ksoftirqd/9 pid=82 prio=97 target_cpu=009
[10365.735702] CPU:3 [LOST 11500515 EVENTS]
[10365.735702] rcu_tort-20368 3d..2. 10363144886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10365.738210] rcu_tort-173 0d..2. 10363144886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/0 next_pid=15 next_prio=97
[10365.740204] rcu_tort-20365 9d..2. 10363144888us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/9 next_pid=82 next_prio=97
[10365.742300] ksoftirq-15 0d.s3. 10363144892us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=011
[10365.743603] rcu_tort-20367 11d..2. 10363144894us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.745714] ksoftirq-15 0d..2. 10363144895us : sched_switch: prev_comm=ksoftirqd/0 prev_pid=15 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10365.747716] ksoftirq-82 9d..2. 10363144897us : sched_switch: prev_comm=ksoftirqd/9 prev_pid=82 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10365.749736] ksoftirq-40 3d.s2. 10363144897us : dl_server_stop <-dequeue_entities
[10365.750718] ksoftirq-40 3d..2. 10363144902us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10365.752722] rcu_pree-16 11d..2. 10363144903us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10365.754728] rcu_tort-20367 11dN.3. 10363150918us : sched_wakeup: comm=ksoftirqd/11 pid=96 prio=97 target_cpu=011
[10365.756002] rcu_tort-20367 11d..2. 10363150919us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/11 next_pid=96 next_prio=97
[10365.758021] ksoftirq-96 11d.s3. 10363150927us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=002
[10365.759307] rcu_tort-20360 2d..2. 10363150930us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.761370] ksoftirq-96 11d..2. 10363150931us : sched_switch: prev_comm=ksoftirqd/11 prev_pid=96 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10365.763518] rcu_pree-16 2d..2. 10363150942us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10365.765622] rcu_tort-20360 2dN.3. 10363155882us : sched_wakeup: comm=ksoftirqd/2 pid=32 prio=97 target_cpu=002
[10365.766881] rcu_tort-20360 2d..2. 10363155883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/2 next_pid=32 next_prio=97
[10365.768938] ksoftirq-32 2d.s3. 10363155890us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=007
[10365.770223] rcu_tort-20355 7d..2. 10363155894us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.772262] ksoftirq-32 2d..2. 10363155894us : sched_switch: prev_comm=ksoftirqd/2 prev_pid=32 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10365.774268] rcu_pree-16 7d..2. 10363155904us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10365.776256] rcu_tort-20355 7dN.4. 10363159885us : sched_wakeup: comm=ksoftirqd/7 pid=68 prio=97 target_cpu=007
[10365.777565] rcu_tort-20355 7d..2. 10363159886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/7 next_pid=68 next_prio=97
[10365.779596] ksoftirq-68 7d.s3. 10363159892us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=000
[10365.780878] rcu_tort-173 0d..2. 10363159894us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.782867] ksoftirq-68 7d..2. 10363159896us : sched_switch: prev_comm=ksoftirqd/7 prev_pid=68 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10365.784864] rcu_pree-16 0d..2. 10363159904us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10365.786855] rcu_tort-173 0dN.3. 10363163885us : sched_wakeup: comm=ksoftirqd/0 pid=15 prio=97 target_cpu=000
[10365.788125] rcu_tort-173 0d..2. 10363163886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/0 next_pid=15 next_prio=97
[10365.790102] ksoftirq-15 0d.s3. 10363163891us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=013
[10365.791360] ksoftirq-15 0d..2. 10363163894us : sched_switch: prev_comm=ksoftirqd/0 prev_pid=15 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10365.793318] rcu_tort-20366 13d..2. 10363163894us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.795316] rcu_pree-16 13d..2. 10363163906us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10365.797315] rcu_tort-20366 13dN.3. 10363168883us : sched_wakeup: comm=ksoftirqd/13 pid=110 prio=97 target_cpu=013
[10365.798604] rcu_tort-20366 13d..2. 10363168885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/13 next_pid=110 next_prio=97
[10365.800633] ksoftirq-110 13d.s3. 10363168892us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=002
[10365.801889] rcu_tort-20360 2d..2. 10363168894us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.803890] ksoftirq-110 13d..2. 10363168896us : sched_switch: prev_comm=ksoftirqd/13 prev_pid=110 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10365.805911] rcu_pree-16 2d..2. 10363168906us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10365.807916] rcu_tort-20366 13dN.4. 10363172882us : sched_wakeup: comm=ksoftirqd/13 pid=110 prio=97 target_cpu=013
[10365.809190] rcu_tort-20366 13d..2. 10363172883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/13 next_pid=110 next_prio=97
[10365.811214] ksoftirq-110 13d..2. 10363172905us : sched_switch: prev_comm=ksoftirqd/13 prev_pid=110 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10365.813234] rcu_tort-20360 2dN.3. 10363173883us : sched_wakeup: comm=ksoftirqd/2 pid=32 prio=97 target_cpu=002
[10365.814515] rcu_tort-20360 2d..2. 10363173884us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/2 next_pid=32 next_prio=97
[10365.816511] ksoftirq-32 2d.s3. 10363173888us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=009
[10365.817765] rcu_tort-20365 9d..2. 10363173892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.819769] ksoftirq-32 2d..2. 10363173892us : sched_switch: prev_comm=ksoftirqd/2 prev_pid=32 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10365.821762] rcu_pree-16 9d..2. 10363173902us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10365.823759] rcu_tort-20365 9dN.3. 10363178883us : sched_wakeup: comm=ksoftirqd/9 pid=82 prio=97 target_cpu=009
[10365.825008] rcu_tort-20365 9d..2. 10363178885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/9 next_pid=82 next_prio=97
[10365.827015] ksoftirq-82 9d.s3. 10363178892us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=013
[10365.828259] rcu_tort-20366 13d..2. 10363178895us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.830269] ksoftirq-82 9d..2. 10363178897us : sched_switch: prev_comm=ksoftirqd/9 prev_pid=82 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10365.832270] rcu_pree-16 13d..2. 10363178903us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10365.834282] rcu_tort-20366 13dN.3. 10363183882us : sched_wakeup: comm=ksoftirqd/13 pid=110 prio=97 target_cpu=013
[10365.835575] rcu_tort-20366 13d..2. 10363183882us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/13 next_pid=110 next_prio=97
[10365.837650] ksoftirq-110 13d.s3. 10363183887us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10365.838909] rcu_tort-20368 3d..2. 10363183891us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.840918] ksoftirq-110 13d..2. 10363183891us : sched_switch: prev_comm=ksoftirqd/13 prev_pid=110 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10365.842933] rcu_pree-16 3d..2. 10363183901us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10365.844989] rcu_tort-20368 3dN.3. 10363188883us : sched_wakeup: comm=ksoftirqd/3 pid=40 prio=97 target_cpu=003
[10365.846241] rcu_tort-20368 3d..2. 10363188884us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10365.848243] ksoftirq-40 3d.s3. 10363188890us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=002
[10365.849516] rcu_tort-20360 2d..2. 10363188893us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.851517] ksoftirq-40 3d..2. 10363188894us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10365.853513] rcu_pree-16 2d..2. 10363188904us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10365.855513] rcu_tort-173 0dN.3. 10363191884us : sched_wakeup: comm=ksoftirqd/0 pid=15 prio=97 target_cpu=000
[10365.856771] rcu_tort-173 0d..2. 10363191885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/0 next_pid=15 next_prio=97
[10365.858758] ksoftirq-15 0d..2. 10363191888us : sched_switch: prev_comm=ksoftirqd/0 prev_pid=15 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10365.860846] rcu_tort-20360 2dN.3. 10363193882us : sched_wakeup: comm=ksoftirqd/2 pid=32 prio=97 target_cpu=002
[10365.862105] rcu_tort-20360 2d..2. 10363193883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/2 next_pid=32 next_prio=97
[10365.864180] ksoftirq-32 2d.s3. 10363193889us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=011
[10365.865501] rcu_tort-20367 11d..2. 10363193892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.867561] ksoftirq-32 2d..2. 10363193893us : sched_switch: prev_comm=ksoftirqd/2 prev_pid=32 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10365.869557] rcu_pree-16 11d..2. 10363193903us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10365.871560] rcu_tort-20367 11dN.3. 10363198884us : sched_wakeup: comm=ksoftirqd/11 pid=96 prio=97 target_cpu=011
[10365.872825] rcu_tort-20367 11d..2. 10363198885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/11 next_pid=96 next_prio=97
[10365.874843] ksoftirq-96 11d.s3. 10363198890us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10365.876094] rcu_tort-20368 3d..2. 10363198892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.878114] ksoftirq-96 11d..2. 10363198894us : sched_switch: prev_comm=ksoftirqd/11 prev_pid=96 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10365.880150] rcu_pree-16 3d..2. 10363198899us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10365.882156] rcu_tort-20368 3dN.3. 10363203882us : sched_wakeup: comm=ksoftirqd/3 pid=40 prio=97 target_cpu=003
[10365.883416] rcu_tort-20368 3d..2. 10363203882us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10365.885437] ksoftirq-40 3d.s3. 10363203888us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=005
[10365.886689] ksoftirq-40 3d..2. 10363203891us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10365.888683] rcu_tort-20362 5d..2. 10363203893us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.890694] rcu_pree-16 5d..2. 10363203907us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.892698] rcu_tort-20362 5dN.3. 10363208883us : sched_wakeup: comm=ksoftirqd/5 pid=54 prio=97 target_cpu=005
[10365.893964] rcu_tort-20362 5d..2. 10363208884us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/5 next_pid=54 next_prio=97
[10365.895974] ksoftirq-54 5d.s3. 10363208892us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=011
[10365.897244] rcu_tort-20367 11d..2. 10363208895us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.899328] ksoftirq-54 5d..2. 10363208896us : sched_switch: prev_comm=ksoftirqd/5 prev_pid=54 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.901385] rcu_pree-16 11d..2. 10363208905us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10365.903818] rcu_tort-20367 11dN.4. 10363213883us : sched_wakeup: comm=ksoftirqd/11 pid=96 prio=97 target_cpu=011
[10365.905744] rcu_tort-20367 11d..2. 10363213885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/11 next_pid=96 next_prio=97
[10365.908747] ksoftirq-96 11d.s3. 10363213891us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=005
[10365.910644] rcu_tort-20362 5d..2. 10363213894us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.913658] ksoftirq-96 11d..2. 10363213894us : sched_switch: prev_comm=ksoftirqd/11 prev_pid=96 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10365.916716] rcu_pree-16 5d..2. 10363213905us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.919709] rcu_tort-20362 5dN.3. 10363218883us : sched_wakeup: comm=ksoftirqd/5 pid=54 prio=97 target_cpu=005
[10365.921637] rcu_tort-20362 5d..2. 10363218884us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/5 next_pid=54 next_prio=97
[10365.924425] ksoftirq-54 5d.s3. 10363218890us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=013
[10365.925687] rcu_tort-20366 13d..2. 10363218892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.927686] ksoftirq-54 5d..2. 10363218894us : sched_switch: prev_comm=ksoftirqd/5 prev_pid=54 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.929676] rcu_pree-16 13d..2. 10363218900us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10365.931655] rcu_tort-20366 13dN.3. 10363223882us : sched_wakeup: comm=ksoftirqd/13 pid=110 prio=97 target_cpu=013
[10365.932929] rcu_tort-20366 13d..2. 10363223883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/13 next_pid=110 next_prio=97
[10365.934937] ksoftirq-110 13d.s3. 10363223888us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=005
[10365.936182] rcu_tort-20362 5d..2. 10363223891us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.938259] ksoftirq-110 13d..2. 10363223892us : sched_switch: prev_comm=ksoftirqd/13 prev_pid=110 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10365.940253] rcu_pree-16 5d..2. 10363223901us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.942235] rcu_tort-20362 5dN.3. 10363228882us : sched_wakeup: comm=ksoftirqd/5 pid=54 prio=97 target_cpu=005
[10365.943533] rcu_tort-20362 5d..2. 10363228883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/5 next_pid=54 next_prio=97
[10365.945524] ksoftirq-54 5d.s3. 10363228889us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=000
[10365.946768] ksoftirq-54 5d..2. 10363228892us : sched_switch: prev_comm=ksoftirqd/5 prev_pid=54 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10365.948800] rcu_tort-173 0d..2. 10363228893us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.950763] rcu_pree-16 0d..2. 10363228905us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10365.952718] rcu_tort-173 0dN.3. 10363232884us : sched_wakeup: comm=ksoftirqd/0 pid=15 prio=97 target_cpu=000
[10365.953961] rcu_tort-173 0d..2. 10363232885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/0 next_pid=15 next_prio=97
[10365.955931] ksoftirq-15 0d.s3. 10363232893us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=002
[10365.957179] rcu_tort-20360 2d..2. 10363232895us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.959166] ksoftirq-15 0d..2. 10363232896us : sched_switch: prev_comm=ksoftirqd/0 prev_pid=15 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10365.961125] rcu_pree-16 2d..2. 10363232906us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10365.963109] rcu_tort-20360 2dN.4. 10363237883us : sched_wakeup: comm=ksoftirqd/2 pid=32 prio=97 target_cpu=002
[10365.964352] rcu_tort-20360 2d..2. 10363237884us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/2 next_pid=32 next_prio=97
[10365.966335] ksoftirq-32 2d.s3. 10363237890us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=013
[10365.967587] rcu_tort-20366 13d..2. 10363237893us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.969608] ksoftirq-32 2d..2. 10363237894us : sched_switch: prev_comm=ksoftirqd/2 prev_pid=32 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10365.971597] rcu_pree-16 13d..2. 10363237901us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10365.973598] rcu_tort-20366 13dN.3. 10363242882us : sched_wakeup: comm=ksoftirqd/13 pid=110 prio=97 target_cpu=013
[10365.974868] rcu_tort-20366 13d..2. 10363242883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/13 next_pid=110 next_prio=97
[10365.976901] ksoftirq-110 13d.s3. 10363242888us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=007
[10365.978154] rcu_tort-20355 7d..2. 10363242891us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.980188] ksoftirq-110 13d..2. 10363242892us : sched_switch: prev_comm=ksoftirqd/13 prev_pid=110 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10365.982297] rcu_pree-16 7d..2. 10363242900us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10365.984334] rcu_tort-20355 7dN.3. 10363247882us : sched_wakeup: comm=ksoftirqd/7 pid=68 prio=97 target_cpu=007
[10365.985714] rcu_tort-20355 7d..2. 10363247883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/7 next_pid=68 next_prio=97
[10365.987736] ksoftirq-68 7d.s3. 10363247889us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=002
[10365.988995] rcu_tort-20360 2d..2. 10363247892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10365.991006] ksoftirq-68 7d..2. 10363247893us : sched_switch: prev_comm=ksoftirqd/7 prev_pid=68 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10365.993054] rcu_pree-16 2d..2. 10363247904us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10365.995069] rcu_tort-20360 2dN.3. 10363252882us : sched_wakeup: comm=ksoftirqd/2 pid=32 prio=97 target_cpu=002
[10365.996356] rcu_tort-20360 2d..2. 10363252883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/2 next_pid=32 next_prio=97
[10365.998384] ksoftirq-32 2d.s3. 10363252888us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=000
[10365.999707] rcu_tort-173 0d..2. 10363252892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.001744] ksoftirq-32 2d..2. 10363252892us : sched_switch: prev_comm=ksoftirqd/2 prev_pid=32 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10366.003800] rcu_pree-16 0d..2. 10363252903us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10366.005811] rcu_tort-173 0dN.4. 10363256883us : sched_wakeup: comm=ksoftirqd/0 pid=15 prio=97 target_cpu=000
[10366.007126] rcu_tort-173 0d..2. 10363256885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/0 next_pid=15 next_prio=97
[10366.009169] ksoftirq-15 0d.s3. 10363256890us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10366.010467] rcu_tort-20368 3d..2. 10363256892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.012491] ksoftirq-15 0d..2. 10363256893us : sched_switch: prev_comm=ksoftirqd/0 prev_pid=15 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10366.014483] rcu_pree-16 3d..2. 10363256903us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10366.016481] rcu_tort-20368 3dN.3. 10363260884us : sched_wakeup: comm=ksoftirqd/3 pid=40 prio=97 target_cpu=003
[10366.017750] rcu_tort-20368 3d..2. 10363260885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10366.019773] ksoftirq-40 3d.s3. 10363260891us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=007
[10366.021034] rcu_tort-20355 7d..2. 10363260893us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.023050] ksoftirq-40 3d..2. 10363260896us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10366.025057] rcu_pree-16 7d..2. 10363260902us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10366.027060] rcu_tort-20355 7dN.4. 10363265882us : sched_wakeup: comm=ksoftirqd/7 pid=68 prio=97 target_cpu=007
[10366.028313] rcu_tort-20355 7d..2. 10363265884us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/7 next_pid=68 next_prio=97
[10366.030313] ksoftirq-68 7d.s3. 10363265889us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10366.031588] rcu_tort-20368 3d..2. 10363265891us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.033617] ksoftirq-68 7d..2. 10363265892us : sched_switch: prev_comm=ksoftirqd/7 prev_pid=68 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10366.035618] rcu_pree-16 3d..2. 10363265901us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10366.037628] rcu_tort-20368 3dN.3. 10363269885us : sched_wakeup: comm=ksoftirqd/3 pid=40 prio=97 target_cpu=003
[10366.038973] rcu_tort-20368 3d..2. 10363269887us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10366.040994] ksoftirq-40 3d.s3. 10363269894us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=009
[10366.042248] ksoftirq-40 3d..2. 10363269899us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10366.044246] rcu_tort-20365 9d..2. 10363269899us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.046270] rcu_pree-16 9d..2. 10363269913us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10366.048280] rcu_tort-20365 9dN.3. 10363274883us : sched_wakeup: comm=ksoftirqd/9 pid=82 prio=97 target_cpu=009
[10366.049559] rcu_tort-20365 9d..2. 10363274884us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/9 next_pid=82 next_prio=97
[10366.051670] rcu_tort-173 0dN.4. 10363274885us : sched_wakeup: comm=ksoftirqd/0 pid=15 prio=97 target_cpu=000
[10366.052940] rcu_tort-173 0d..2. 10363274887us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/0 next_pid=15 next_prio=97
[10366.054928] ksoftirq-82 9d.s3. 10363274892us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=002
[10366.056196] ksoftirq-82 9d..2. 10363274894us : sched_switch: prev_comm=ksoftirqd/9 prev_pid=82 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10366.058204] rcu_tort-20360 2d..2. 10363274895us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.060223] ksoftirq-15 0d..2. 10363274896us : sched_switch: prev_comm=ksoftirqd/0 prev_pid=15 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10366.062219] rcu_pree-16 2d..2. 10363274911us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10366.064206] rcu_tort-20362 5dN.3. 10363275884us : sched_wakeup: comm=ksoftirqd/5 pid=54 prio=97 target_cpu=005
[10366.065474] rcu_tort-20362 5d..2. 10363275886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/5 next_pid=54 next_prio=97
[10366.067508] ksoftirq-54 5d..2. 10363275896us : sched_switch: prev_comm=ksoftirqd/5 prev_pid=54 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10366.069500] rcu_tort-20355 7dN.3. 10363277884us : sched_wakeup: comm=ksoftirqd/7 pid=68 prio=97 target_cpu=007
[10366.070758] rcu_tort-20355 7d..2. 10363277885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/7 next_pid=68 next_prio=97
[10366.072772] ksoftirq-68 7d..2. 10363277893us : sched_switch: prev_comm=ksoftirqd/7 prev_pid=68 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10366.074753] rcu_tort-20360 2dN.3. 10363279884us : sched_wakeup: comm=ksoftirqd/2 pid=32 prio=97 target_cpu=002
[10366.076013] rcu_tort-20360 2d..2. 10363279885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/2 next_pid=32 next_prio=97
[10366.078014] ksoftirq-32 2d.s3. 10363279893us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10366.079259] rcu_tort-20368 3d..2. 10363279896us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.081285] ksoftirq-32 2d..2. 10363279897us : sched_switch: prev_comm=ksoftirqd/2 prev_pid=32 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10366.083302] rcu_pree-16 3d..2. 10363279906us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10366.085304] rcu_tort-20368 3dN.4. 10363283884us : sched_wakeup: comm=ksoftirqd/3 pid=40 prio=97 target_cpu=003
[10366.086603] rcu_tort-20368 3d..2. 10363283886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10366.088609] ksoftirq-40 3d.s3. 10363283893us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=011
[10366.089865] rcu_tort-20367 11d..2. 10363283895us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.091878] ksoftirq-40 3d..2. 10363283897us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10366.093885] rcu_pree-16 11d..2. 10363283905us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10366.095895] rcu_tort-20367 11dN.3. 10363284884us : sched_wakeup: comm=ksoftirqd/11 pid=96 prio=97 target_cpu=011
[10366.097158] rcu_tort-20367 11d..2. 10363284885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/11 next_pid=96 next_prio=97
[10366.099181] ksoftirq-96 11d..2. 10363284893us : sched_switch: prev_comm=ksoftirqd/11 prev_pid=96 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10366.101203] rcu_tort-20367 11dN.3. 10363287883us : sched_wakeup: comm=ksoftirqd/11 pid=96 prio=97 target_cpu=011
[10366.102493] rcu_tort-20367 11d..2. 10363287884us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/11 next_pid=96 next_prio=97
[10366.104528] rcu_tort-20368 3dN.3. 10363287884us : sched_wakeup: comm=ksoftirqd/3 pid=40 prio=97 target_cpu=003
[10366.105786] rcu_tort-20368 3d..2. 10363287885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10366.107784] ksoftirq-96 11d.s3. 10363287890us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=007
[10366.109037] ksoftirq-96 11d..2. 10363287892us : sched_switch: prev_comm=ksoftirqd/11 prev_pid=96 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10366.111049] rcu_tort-20355 7d..2. 10363287892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.113065] ksoftirq-40 3d..2. 10363287894us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10366.115053] rcu_pree-16 7d..2. 10363287901us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10366.117037] rcu_tort-20355 7dN.3. 10363292882us : sched_wakeup: comm=ksoftirqd/7 pid=68 prio=97 target_cpu=007
[10366.118289] rcu_tort-20355 7d..2. 10363292883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/7 next_pid=68 next_prio=97
[10366.120275] ksoftirq-68 7d.s3. 10363292888us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=005
[10366.121535] rcu_tort-20362 5d..2. 10363292891us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.123564] ksoftirq-68 7d..2. 10363292892us : sched_switch: prev_comm=ksoftirqd/7 prev_pid=68 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10366.125590] rcu_pree-16 5d..2. 10363292902us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10366.127616] rcu_tort-173 0dN.4. 10363295884us : sched_wakeup: comm=ksoftirqd/0 pid=15 prio=97 target_cpu=000
[10366.128889] rcu_tort-173 0d..2. 10363295886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/0 next_pid=15 next_prio=97
[10366.130892] ksoftirq-15 0d..2. 10363295890us : sched_switch: prev_comm=ksoftirqd/0 prev_pid=15 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10366.132866] rcu_tort-20362 5dN.3. 10363297882us : sched_wakeup: comm=ksoftirqd/5 pid=54 prio=97 target_cpu=005
[10366.134132] rcu_tort-20362 5d..2. 10363297882us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/5 next_pid=54 next_prio=97
[10366.136153] ksoftirq-54 5d.s3. 10363297887us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=002
[10366.137412] rcu_tort-20360 2d..2. 10363297889us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.139511] ksoftirq-54 5d..2. 10363297891us : sched_switch: prev_comm=ksoftirqd/5 prev_pid=54 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10366.141515] rcu_pree-16 2d..2. 10363297898us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10366.143526] rcu_tort-20360 2dN.3. 10363302882us : sched_wakeup: comm=ksoftirqd/2 pid=32 prio=97 target_cpu=002
[10366.144786] rcu_tort-20360 2d..2. 10363302883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/2 next_pid=32 next_prio=97
[10366.146792] ksoftirq-32 2d.s3. 10363302887us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=005
[10366.148053] ksoftirq-32 2d..2. 10363302890us : sched_switch: prev_comm=ksoftirqd/2 prev_pid=32 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10366.150067] rcu_tort-20362 5d..2. 10363302907us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.152084] rcu_pree-16 5d..2. 10363302918us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10366.154087] rcu_tort-20362 5dN.4. 10363307882us : sched_wakeup: comm=ksoftirqd/5 pid=54 prio=97 target_cpu=005
[10366.155343] rcu_tort-20362 5d..2. 10363307883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/5 next_pid=54 next_prio=97
[10366.157395] ksoftirq-54 5d.s3. 10363307887us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10366.158666] rcu_tort-20368 3d..2. 10363307891us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.160675] ksoftirq-54 5d..2. 10363307891us : sched_switch: prev_comm=ksoftirqd/5 prev_pid=54 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10366.162672] rcu_pree-16 3d..2. 10363307902us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10366.164680] rcu_tort-20368 3dN.3. 10363311885us : sched_wakeup: comm=ksoftirqd/3 pid=40 prio=97 target_cpu=003
[10366.165937] rcu_tort-20368 3d..2. 10363311886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10366.167962] ksoftirq-40 3d.s3. 10363311892us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=013
[10366.169222] rcu_tort-20366 13d..2. 10363311897us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.171258] ksoftirq-40 3d..2. 10363311897us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10366.173268] rcu_pree-16 13d..2. 10363311910us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10366.175270] rcu_tort-20366 13dN.3. 10363315889us : sched_wakeup: comm=ksoftirqd/13 pid=110 prio=97 target_cpu=013
[10366.176575] rcu_tort-20366 13d..2. 10363315891us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/13 next_pid=110 next_prio=97
[10366.178616] ksoftirq-110 13d.s3. 10363315902us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=009
[10366.179891] rcu_tort-20365 9d..2. 10363315906us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.181903] ksoftirq-110 13d..2. 10363315915us : sched_switch: prev_comm=ksoftirqd/13 prev_pid=110 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10366.183923] rcu_pree-16 9d..2. 10363315918us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10366.185944] rcu_tort-20365 9dN.3. 10363319885us : sched_wakeup: comm=ksoftirqd/9 pid=82 prio=97 target_cpu=009
[10366.187205] rcu_tort-20365 9d..2. 10363319887us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/9 next_pid=82 next_prio=97
[10366.189225] ksoftirq-82 9d.s3. 10363319894us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10366.190513] rcu_tort-20368 3d..2. 10363319897us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.192536] ksoftirq-82 9d..2. 10363319899us : sched_switch: prev_comm=ksoftirqd/9 prev_pid=82 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10366.194539] rcu_pree-16 3d..2. 10363319905us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10366.196549] rcu_tort-20368 3dN.3. 10363323884us : sched_wakeup: comm=ksoftirqd/3 pid=40 prio=97 target_cpu=003
[10366.197810] rcu_tort-20368 3d..2. 10363323885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10366.199848] ksoftirq-40 3d.s3. 10363323890us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=000
[10366.201100] rcu_tort-173 0d..2. 10363323892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.203094] ksoftirq-40 3d..2. 10363323894us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10366.205087] rcu_pree-16 0d..2. 10363323903us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10366.207097] rcu_tort-173 0dN.4. 10363327884us : sched_wakeup: comm=ksoftirqd/0 pid=15 prio=97 target_cpu=000
[10366.208364] rcu_tort-173 0d..2. 10363327885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/0 next_pid=15 next_prio=97
[10366.210335] ksoftirq-15 0d.s3. 10363327891us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=005
[10366.211606] ksoftirq-15 0d..2. 10363327894us : sched_switch: prev_comm=ksoftirqd/0 prev_pid=15 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10366.213587] rcu_tort-20362 5d..2. 10363327894us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.215595] rcu_pree-16 5d..2. 10363327905us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10366.217597] rcu_tort-20362 5dN.4. 10363332883us : sched_wakeup: comm=ksoftirqd/5 pid=54 prio=97 target_cpu=005
[10366.218843] rcu_tort-20362 5d..2. 10363332884us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/5 next_pid=54 next_prio=97
[10366.220855] ksoftirq-54 5d.s3. 10363332890us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=007
[10366.222443] rcu_tort-20355 7d..2. 10363332892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.225495] ksoftirq-54 5d..2. 10363332894us : sched_switch: prev_comm=ksoftirqd/5 prev_pid=54 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10366.228559] rcu_pree-16 7d..2. 10363332903us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10366.231530] rcu_tort-20355 7dN.3. 10363337882us : sched_wakeup: comm=ksoftirqd/7 pid=68 prio=97 target_cpu=007
[10366.233459] rcu_tort-20355 7d..2. 10363337883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/7 next_pid=68 next_prio=97
[10366.236530] ksoftirq-68 7d.s3. 10363337888us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=009
[10366.238400] rcu_tort-20365 9d..2. 10363337891us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.241413] ksoftirq-68 7d..2. 10363337891us : sched_switch: prev_comm=ksoftirqd/7 prev_pid=68 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10366.243686] rcu_pree-16 9d..2. 10363337902us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10366.245676] rcu_tort-20365 9dN.4. 10363342884us : sched_wakeup: comm=ksoftirqd/9 pid=82 prio=97 target_cpu=009
[10366.246924] rcu_tort-20365 9d..2. 10363342885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/9 next_pid=82 next_prio=97
[10366.248940] ksoftirq-82 9d.s3. 10363342890us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=005
[10366.250212] rcu_tort-20362 5d..2. 10363342892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.252225] ksoftirq-82 9d..2. 10363342894us : sched_switch: prev_comm=ksoftirqd/9 prev_pid=82 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10366.254234] rcu_pree-16 5d..2. 10363342901us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10366.256237] rcu_tort-20362 5dN.3. 10363347882us : sched_wakeup: comm=ksoftirqd/5 pid=54 prio=97 target_cpu=005
[10366.257525] rcu_tort-20362 5d..2. 10363347883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/5 next_pid=54 next_prio=97
[10366.259577] ksoftirq-54 5d.s3. 10363347888us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=009
[10366.260819] rcu_tort-20365 9d..2. 10363347890us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.262829] ksoftirq-54 5d..2. 10363347892us : sched_switch: prev_comm=ksoftirqd/5 prev_pid=54 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10366.264829] rcu_pree-16 9d..2. 10363347901us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10366.266825] rcu_tort-20365 9dN.3. 10363352883us : sched_wakeup: comm=ksoftirqd/9 pid=82 prio=97 target_cpu=009
[10366.268077] rcu_tort-20365 9d..2. 10363352883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/9 next_pid=82 next_prio=97
[10366.270067] ksoftirq-82 9d.s3. 10363352889us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=007
[10366.271314] rcu_tort-20355 7d..2. 10363352892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.273303] ksoftirq-82 9d..2. 10363352893us : sched_switch: prev_comm=ksoftirqd/9 prev_pid=82 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10366.275292] rcu_pree-16 7d..2. 10363352899us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10366.277264] rcu_tort-20355 7dN.3. 10363357882us : sched_wakeup: comm=ksoftirqd/7 pid=68 prio=97 target_cpu=007
[10366.278549] rcu_tort-20355 7d..2. 10363357883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/7 next_pid=68 next_prio=97
[10366.280580] ksoftirq-68 7d.s3. 10363357888us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=011
[10366.281837] rcu_tort-20367 11d..2. 10363357890us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.283865] ksoftirq-68 7d..2. 10363357891us : sched_switch: prev_comm=ksoftirqd/7 prev_pid=68 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10366.285883] rcu_pree-16 11d..2. 10363357900us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10366.287890] rcu_tort-20360 2dN.3. 10363358883us : sched_wakeup: comm=ksoftirqd/2 pid=32 prio=97 target_cpu=002
[10366.289143] rcu_tort-20360 2d..2. 10363358885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/2 next_pid=32 next_prio=97
[10366.291139] ksoftirq-32 2d..2. 10363358894us : sched_switch: prev_comm=ksoftirqd/2 prev_pid=32 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10366.293139] rcu_tort-20367 11dN.3. 10363362885us : sched_wakeup: comm=ksoftirqd/11 pid=96 prio=97 target_cpu=011
[10366.294411] rcu_tort-20367 11d..2. 10363362887us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/11 next_pid=96 next_prio=97
[10366.296430] ksoftirq-96 11d.s3. 10363362893us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=009
[10366.297686] rcu_tort-20365 9d..2. 10363362896us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.299709] ksoftirq-96 11d..2. 10363362898us : sched_switch: prev_comm=ksoftirqd/11 prev_pid=96 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10366.301729] rcu_pree-16 9d..2. 10363362907us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10366.303732] rcu_tort-20365 9dN.4. 10363367882us : sched_wakeup: comm=ksoftirqd/9 pid=82 prio=97 target_cpu=009
[10366.304986] rcu_tort-20365 9d..2. 10363367883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/9 next_pid=82 next_prio=97
[10366.307001] ksoftirq-82 9d.s3. 10363367889us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=011
[10366.308245] rcu_tort-20367 11d..2. 10363367892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.310264] ksoftirq-82 9d..2. 10363367893us : sched_switch: prev_comm=ksoftirqd/9 prev_pid=82 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10366.312275] rcu_pree-16 11d..2. 10363367902us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10366.314274] rcu_tort-20367 11dN.3. 10363372885us : sched_wakeup: comm=ksoftirqd/11 pid=96 prio=97 target_cpu=011
[10366.315560] rcu_tort-20367 11d..2. 10363372886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/11 next_pid=96 next_prio=97
[10366.317577] ksoftirq-96 11d.s3. 10363372893us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=013
[10366.318832] ksoftirq-96 11d..2. 10363372897us : sched_switch: prev_comm=ksoftirqd/11 prev_pid=96 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10366.320830] rcu_tort-20366 13d..2. 10363372898us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.322861] rcu_pree-16 13d..2. 10363372914us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10366.324867] rcu_tort-20360 2dN.3. 10363375890us : sched_wakeup: comm=ksoftirqd/2 pid=32 prio=97 target_cpu=002
[10366.326133] rcu_tort-20360 2d..2. 10363375892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/2 next_pid=32 next_prio=97
[10366.328146] ksoftirq-32 2d..2. 10363375903us : sched_switch: prev_comm=ksoftirqd/2 prev_pid=32 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10366.330146] rcu_tort-20366 13dN.3. 10363377883us : sched_wakeup: comm=ksoftirqd/13 pid=110 prio=97 target_cpu=013
[10366.331443] rcu_tort-20366 13d..2. 10363377884us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/13 next_pid=110 next_prio=97
[10366.334483] ksoftirq-110 13d.s3. 10363377891us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=011
[10366.335767] rcu_tort-20367 11d..2. 10363377893us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.337782] ksoftirq-110 13d..2. 10363377895us : sched_switch: prev_comm=ksoftirqd/13 prev_pid=110 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10366.339909] rcu_pree-16 11d..2. 10363377903us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10366.341921] rcu_tort-20367 11dN.3. 10363381885us : sched_wakeup: comm=ksoftirqd/11 pid=96 prio=97 target_cpu=011
[10366.343184] rcu_tort-20367 11d..2. 10363381886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/11 next_pid=96 next_prio=97
[10366.345208] ksoftirq-96 11d.s3. 10363381891us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=000
[10366.346477] rcu_tort-173 0d..2. 10363381893us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.348476] ksoftirq-96 11d..2. 10363381895us : sched_switch: prev_comm=ksoftirqd/11 prev_pid=96 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10366.350506] rcu_pree-16 0d..2. 10363381902us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10366.352493] rcu_tort-173 0dN.3. 10363385884us : sched_wakeup: comm=ksoftirqd/0 pid=15 prio=97 target_cpu=000
[10366.353756] rcu_tort-173 0d..2. 10363385885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/0 next_pid=15 next_prio=97
[10366.355762] ksoftirq-15 0d.s3. 10363385891us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=007
[10366.357281] ksoftirq-15 0d..2. 10363385894us : sched_switch: prev_comm=ksoftirqd/0 prev_pid=15 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10366.359307] rcu_tort-20355 7d..2. 10363385894us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.361343] rcu_pree-16 7d..2. 10363385906us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10366.363344] rcu_tort-20355 7dN.3. 10363389885us : sched_wakeup: comm=ksoftirqd/7 pid=68 prio=97 target_cpu=007
[10366.364667] rcu_tort-20355 7d..2. 10363389886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/7 next_pid=68 next_prio=97
[10366.366764] ksoftirq-68 7d.s3. 10363389894us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=013
[10366.368052] rcu_tort-20366 13d..2. 10363389896us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.370076] ksoftirq-68 7d..2. 10363389898us : sched_switch: prev_comm=ksoftirqd/7 prev_pid=68 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10366.372097] rcu_pree-16 13d..2. 10363389906us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10366.374121] rcu_tort-20366 13dN.3. 10363393889us : sched_wakeup: comm=ksoftirqd/13 pid=110 prio=97 target_cpu=013
[10366.375494] rcu_tort-20366 13d..2. 10363393891us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/13 next_pid=110 next_prio=97
[10366.377534] ksoftirq-110 13d.s3. 10363393900us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=000
[10366.378789] rcu_tort-173 0d..2. 10363393903us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.380787] ksoftirq-110 13d..2. 10363393905us : sched_switch: prev_comm=ksoftirqd/13 prev_pid=110 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10366.383016] rcu_pree-16 0d..2. 10363393916us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10366.385006] rcu_tort-173 0dN.3. 10363397884us : sched_wakeup: comm=ksoftirqd/0 pid=15 prio=97 target_cpu=000
[10366.386270] rcu_tort-173 0d..2. 10363397886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/0 next_pid=15 next_prio=97
[10366.388267] ksoftirq-15 0d.s3. 10363397893us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=009
[10366.389557] rcu_tort-20365 9d..2. 10363397895us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.391581] ksoftirq-15 0d..2. 10363397896us : sched_switch: prev_comm=ksoftirqd/0 prev_pid=15 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10366.393580] rcu_pree-16 9d..2. 10363397906us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10366.395622] rcu_tort-173 0dN.4. 10363399885us : sched_wakeup: comm=ksoftirqd/0 pid=15 prio=97 target_cpu=000
[10366.396878] rcu_tort-173 0d..2. 10363399886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/0 next_pid=15 next_prio=97
[10366.398877] ksoftirq-15 0d..2. 10363399890us : sched_switch: prev_comm=ksoftirqd/0 prev_pid=15 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10366.400893] rcu_tort-20365 9dN.3. 10363402882us : sched_wakeup: comm=ksoftirqd/9 pid=82 prio=97 target_cpu=009
[10366.402155] rcu_tort-20365 9d..2. 10363402883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/9 next_pid=82 next_prio=97
[10366.404163] ksoftirq-82 9d.s3. 10363402888us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=013
[10366.405434] rcu_tort-20366 13d..2. 10363402891us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.407556] ksoftirq-82 9d..2. 10363402891us : sched_switch: prev_comm=ksoftirqd/9 prev_pid=82 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10366.409567] rcu_pree-16 13d..2. 10363402902us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10366.411578] rcu_tort-20366 13dN.4. 10363407883us : sched_wakeup: comm=ksoftirqd/13 pid=110 prio=97 target_cpu=013
[10366.412865] rcu_tort-20366 13d..2. 10363407884us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/13 next_pid=110 next_prio=97
[10366.414914] ksoftirq-110 13d.s3. 10363407890us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=002
[10366.416169] rcu_tort-20360 2d..2. 10363407893us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.418172] ksoftirq-110 13d..2. 10363407894us : sched_switch: prev_comm=ksoftirqd/13 prev_pid=110 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10366.420186] rcu_pree-16 2d..2. 10363407907us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10366.422180] rcu_tort-20360 2dN.4. 10363411885us : sched_wakeup: comm=ksoftirqd/2 pid=32 prio=97 target_cpu=002
[10366.423461] rcu_tort-20360 2d..2. 10363411886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/2 next_pid=32 next_prio=97
[10366.425490] ksoftirq-32 2d.s3. 10363411894us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=007
[10366.426813] rcu_tort-20355 7d..2. 10363411899us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.428819] ksoftirq-32 2d..2. 10363411899us : sched_switch: prev_comm=ksoftirqd/2 prev_pid=32 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10366.430827] rcu_pree-16 7d..2. 10363411913us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10366.432826] rcu_tort-20355 7dN.4. 10363415885us : sched_wakeup: comm=ksoftirqd/7 pid=68 prio=97 target_cpu=007
[10366.434079] rcu_tort-20355 7d..2. 10363415887us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/7 next_pid=68 next_prio=97
[10366.436077] ksoftirq-68 7d.s3. 10363415895us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=000
[10366.437338] rcu_tort-173 0d..2. 10363415898us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.439331] ksoftirq-68 7d..2. 10363415899us : sched_switch: prev_comm=ksoftirqd/7 prev_pid=68 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10366.441366] rcu_pree-16 0d..2. 10363415909us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10366.443332] rcu_tort-173 0d.h3. 10363416913us : sched_wakeup: comm=torture_stutter pid=172 prio=120 target_cpu=000
[10366.444664] rcu_tort-173 0dN.4. 10363419884us : sched_wakeup: comm=ksoftirqd/0 pid=15 prio=97 target_cpu=000
[10366.445913] rcu_tort-173 0d..2. 10363419886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/0 next_pid=15 next_prio=97
[10366.447886] ksoftirq-15 0d.s3. 10363419892us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=011
[10366.449140] rcu_tort-20367 11d..2. 10363419893us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.451145] ksoftirq-15 0d..2. 10363419894us : sched_switch: prev_comm=ksoftirqd/0 prev_pid=15 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10366.453108] rcu_pree-16 11d..2. 10363419902us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10366.455107] rcu_tort-20367 11dN.3. 10363424885us : sched_wakeup: comm=ksoftirqd/11 pid=96 prio=97 target_cpu=011
[10366.456363] rcu_tort-20367 11d..2. 10363424886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/11 next_pid=96 next_prio=97
[10366.458403] ksoftirq-96 11d.s3. 10363424893us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=002
[10366.459657] rcu_tort-20360 2d..2. 10363424896us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.461657] ksoftirq-96 11d..2. 10363424897us : sched_switch: prev_comm=ksoftirqd/11 prev_pid=96 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10366.463674] rcu_pree-16 2d..2. 10363424907us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10366.465668] rcu_tort-20360 2dN.3. 10363428884us : sched_wakeup: comm=ksoftirqd/2 pid=32 prio=97 target_cpu=002
[10366.466928] rcu_tort-20360 2d..2. 10363428885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/2 next_pid=32 next_prio=97
[10366.468980] ksoftirq-32 2d.s3. 10363428891us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=009
[10366.470234] rcu_tort-20365 9d..2. 10363428893us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.472246] ksoftirq-32 2d..2. 10363428895us : sched_switch: prev_comm=ksoftirqd/2 prev_pid=32 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10366.474241] rcu_pree-16 9d..2. 10363428904us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10366.476250] rcu_tort-20365 9dN.3. 10363431883us : sched_wakeup: comm=ksoftirqd/9 pid=82 prio=97 target_cpu=009
[10366.477535] rcu_tort-20365 9d..2. 10363431885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/9 next_pid=82 next_prio=97
[10366.479565] ksoftirq-82 9d..2. 10363431899us : sched_switch: prev_comm=ksoftirqd/9 prev_pid=82 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10366.481569] rcu_tort-20365 9dN.3. 10363433884us : sched_wakeup: comm=ksoftirqd/9 pid=82 prio=97 target_cpu=009
[10366.482830] rcu_tort-20365 9d..2. 10363433885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/9 next_pid=82 next_prio=97
[10366.484857] ksoftirq-82 9d.s3. 10363433892us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=000
[10366.486117] rcu_tort-173 0d..2. 10363433896us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.488103] ksoftirq-82 9d..2. 10363433897us : sched_switch: prev_comm=ksoftirqd/9 prev_pid=82 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10366.490168] rcu_pree-16 0d..2. 10363433909us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10366.492199] rcu_tort-173 0dN.3. 10363437886us : sched_wakeup: comm=ksoftirqd/0 pid=15 prio=97 target_cpu=000
[10366.493454] rcu_tort-173 0d..2. 10363437888us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/0 next_pid=15 next_prio=97
[10366.495445] ksoftirq-15 0d.s3. 10363437896us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=013
[10366.496703] ksoftirq-15 0d..2. 10363437899us : sched_switch: prev_comm=ksoftirqd/0 prev_pid=15 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10366.498690] rcu_tort-20366 13d..2. 10363437899us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.500709] rcu_pree-16 13d..2. 10363437909us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10366.502700] rcu_tort-20366 13dN.3. 10363442883us : sched_wakeup: comm=ksoftirqd/13 pid=110 prio=97 target_cpu=013
[10366.503987] rcu_tort-20366 13d..2. 10363442885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/13 next_pid=110 next_prio=97
[10366.506010] ksoftirq-110 13d.s3. 10363442892us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10366.507264] ksoftirq-110 13d..2. 10363442896us : sched_switch: prev_comm=ksoftirqd/13 prev_pid=110 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10366.509286] rcu_tort-20368 3d..2. 10363442897us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.511292] rcu_pree-16 3d..2. 10363442909us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10366.513292] rcu_tort-20368 3dN.4. 10363447885us : sched_wakeup: comm=ksoftirqd/3 pid=40 prio=97 target_cpu=003
[10366.514575] rcu_tort-20368 3d..2. 10363447887us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10366.516597] ksoftirq-40 3d.s3. 10363447895us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=002
[10366.517858] rcu_tort-20360 2d..2. 10363447898us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.519881] ksoftirq-40 3d..2. 10363447899us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10366.521886] rcu_pree-16 2d..2. 10363447910us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10366.523887] rcu_tort-20360 2dN.3. 10363451891us : sched_wakeup: comm=ksoftirqd/2 pid=32 prio=97 target_cpu=002
[10366.525173] rcu_tort-20360 2d..2. 10363451894us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/2 next_pid=32 next_prio=97
[10366.527188] ksoftirq-32 2d.s3. 10363451904us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=011
[10366.528510] rcu_tort-20367 11d..2. 10363451907us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.530623] ksoftirq-32 2d..2. 10363451910us : sched_switch: prev_comm=ksoftirqd/2 prev_pid=32 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10366.532699] rcu_pree-16 11d..2. 10363451919us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10366.534919] rcu_tort-20367 11dN.3. 10363455884us : sched_wakeup: comm=ksoftirqd/11 pid=96 prio=97 target_cpu=011
[10366.536311] rcu_tort-20367 11d..2. 10363455885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/11 next_pid=96 next_prio=97
[10366.538460] ksoftirq-96 11d.s3. 10363455892us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10366.539757] rcu_tort-20368 3d..2. 10363455895us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.541838] ksoftirq-96 11d..2. 10363455896us : sched_switch: prev_comm=ksoftirqd/11 prev_pid=96 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10366.543820] rcu_pree-16 3d..2. 10363455906us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10366.545795] rcu_tort-20366 13dN.3. 10363459884us : sched_wakeup: comm=ksoftirqd/13 pid=110 prio=97 target_cpu=013
[10366.547060] rcu_tort-20366 13d..2. 10363459885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/13 next_pid=110 next_prio=97
[10366.549071] rcu_tort-20368 3dN.3. 10363459890us : sched_wakeup: comm=ksoftirqd/3 pid=40 prio=97 target_cpu=003
[10366.550317] rcu_tort-20368 3d..2. 10363459893us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10366.552317] ksoftirq-110 13d..2. 10363459895us : sched_switch: prev_comm=ksoftirqd/13 prev_pid=110 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10366.554340] ksoftirq-40 3d.s3. 10363459902us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=005
[10366.555772] ksoftirq-40 3d..2. 10363459904us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10366.557921] rcu_tort-20362 5d..2. 10363459913us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.559931] rcu_pree-16 5d..2. 10363459925us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10366.561948] rcu_tort-20362 5dN.3. 10363464883us : sched_wakeup: comm=ksoftirqd/5 pid=54 prio=97 target_cpu=005
[10366.563204] rcu_tort-20362 5d..2. 10363464884us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/5 next_pid=54 next_prio=97
[10366.565218] ksoftirq-54 5d.s3. 10363464890us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=011
[10366.566526] rcu_tort-20367 11d..2. 10363464894us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.568619] ksoftirq-54 5d..2. 10363464894us : sched_switch: prev_comm=ksoftirqd/5 prev_pid=54 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10366.570860] rcu_pree-16 11d..2. 10363464904us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10366.573090] rcu_tort-20367 11dN.3. 10363469884us : sched_wakeup: comm=ksoftirqd/11 pid=96 prio=97 target_cpu=011
[10366.574357] rcu_tort-20367 11d..2. 10363469885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/11 next_pid=96 next_prio=97
[10366.576366] ksoftirq-96 11d.s3. 10363469890us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=005
[10366.577640] rcu_tort-20362 5d..2. 10363469893us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.579669] ksoftirq-96 11d..2. 10363469894us : sched_switch: prev_comm=ksoftirqd/11 prev_pid=96 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10366.581695] rcu_pree-16 5d..2. 10363469904us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10366.583696] rcu_tort-20362 5dN.3. 10363474882us : sched_wakeup: comm=ksoftirqd/5 pid=54 prio=97 target_cpu=005
[10366.584951] rcu_tort-20362 5d..2. 10363474883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/5 next_pid=54 next_prio=97
[10366.586964] ksoftirq-54 5d.s3. 10363474887us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=013
[10366.588212] rcu_tort-20366 13d..2. 10363474890us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.590240] ksoftirq-54 5d..2. 10363474891us : sched_switch: prev_comm=ksoftirqd/5 prev_pid=54 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10366.592238] rcu_pree-16 13d..2. 10363474906us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10366.594221] rcu_tort-20366 13dN.3. 10363478889us : sched_wakeup: comm=ksoftirqd/13 pid=110 prio=97 target_cpu=013
[10366.595520] rcu_tort-20366 13d..2. 10363478892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/13 next_pid=110 next_prio=97
[10366.597569] ksoftirq-110 13d.s3. 10363478900us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=005
[10366.598831] rcu_tort-20362 5d..2. 10363478901us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.600854] ksoftirq-110 13d..2. 10363478906us : sched_switch: prev_comm=ksoftirqd/13 prev_pid=110 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10366.602863] rcu_pree-16 5d..2. 10363478912us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10366.604861] rcu_tort-20362 5dN.4. 10363483882us : sched_wakeup: comm=ksoftirqd/5 pid=54 prio=97 target_cpu=005
[10366.606137] rcu_tort-20362 5d..2. 10363483882us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/5 next_pid=54 next_prio=97
[10366.608164] ksoftirq-54 5d.s3. 10363483887us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=000
[10366.609436] ksoftirq-54 5d..2. 10363483890us : sched_switch: prev_comm=ksoftirqd/5 prev_pid=54 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10366.611442] rcu_tort-173 0d..2. 10363483891us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.613420] rcu_pree-16 0d..2. 10363483903us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10366.615362] rcu_tort-173 0dN.3. 10363487885us : sched_wakeup: comm=ksoftirqd/0 pid=15 prio=97 target_cpu=000
[10366.616626] rcu_tort-173 0d..2. 10363487886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/0 next_pid=15 next_prio=97
[10366.618607] ksoftirq-15 0d.s3. 10363487893us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=002
[10366.619870] ksoftirq-15 0d..2. 10363487896us : sched_switch: prev_comm=ksoftirqd/0 prev_pid=15 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10366.621843] rcu_tort-20360 2d..2. 10363487897us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.623852] rcu_pree-16 2d..2. 10363487907us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10366.625858] rcu_tort-20360 2dN.3. 10363492883us : sched_wakeup: comm=ksoftirqd/2 pid=32 prio=97 target_cpu=002
[10366.627110] rcu_tort-20360 2d..2. 10363492883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/2 next_pid=32 next_prio=97
[10366.629138] ksoftirq-32 2d.s3. 10363492889us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=013
[10366.630437] rcu_tort-20366 13d..2. 10363492892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.632492] ksoftirq-32 2d..2. 10363492893us : sched_switch: prev_comm=ksoftirqd/2 prev_pid=32 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10366.634537] rcu_pree-16 13d..2. 10363492904us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10366.636567] rcu_tort-20366 13dN.3. 10363497884us : sched_wakeup: comm=ksoftirqd/13 pid=110 prio=97 target_cpu=013
[10366.637861] rcu_tort-20366 13d..2. 10363497885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/13 next_pid=110 next_prio=97
[10366.639907] ksoftirq-110 13d.s3. 10363497891us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=007
[10366.641183] rcu_tort-20355 7d..2. 10363497895us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.643203] ksoftirq-110 13d..2. 10363497895us : sched_switch: prev_comm=ksoftirqd/13 prev_pid=110 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20366 next_prio=98
[10366.645240] rcu_pree-16 7d..2. 10363497905us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10366.647519] rcu_tort-20355 7dN.4. 10363501885us : sched_wakeup: comm=ksoftirqd/7 pid=68 prio=97 target_cpu=007
[10366.648862] rcu_tort-20355 7d..2. 10363501886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/7 next_pid=68 next_prio=97
[10366.650925] ksoftirq-68 7d.s3. 10363501894us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=002
[10366.652182] rcu_tort-20360 2d..2. 10363501896us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.654233] ksoftirq-68 7d..2. 10363501899us : sched_switch: prev_comm=ksoftirqd/7 prev_pid=68 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10366.656309] rcu_pree-16 2d..2. 10363501908us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10366.658306] rcu_tort-173 0dN.3. 10363503884us : sched_wakeup: comm=ksoftirqd/0 pid=15 prio=97 target_cpu=000
[10366.659582] rcu_tort-173 0d..2. 10363503886us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/0 next_pid=15 next_prio=97
[10366.661564] ksoftirq-15 0d..2. 10363503889us : sched_switch: prev_comm=ksoftirqd/0 prev_pid=15 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10366.663550] rcu_tort-20360 2dN.3. 10363506882us : sched_wakeup: comm=ksoftirqd/2 pid=32 prio=97 target_cpu=002
[10366.664801] rcu_tort-20360 2d..2. 10363506883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/2 next_pid=32 next_prio=97
[10366.666804] ksoftirq-32 2d.s3. 10363506888us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=000
[10366.668109] ksoftirq-32 2d..2. 10363506892us : sched_switch: prev_comm=ksoftirqd/2 prev_pid=32 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10366.670112] rcu_tort-173 0d..2. 10363506892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.672103] rcu_tort-20367 11d..2. 10363506895us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=kworker/11:1 next_pid=19375 next_prio=120
[10366.674181] rcu_pree-16 0d..2. 10363506902us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10366.676221] kworker/-19375 11d..3. 10363506905us : dl_server_stop <-dequeue_entities
[10366.677216] kworker/-19375 11d..2. 10363506911us : sched_switch: prev_comm=kworker/11:1 prev_pid=19375 prev_prio=120 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10366.679289] rcu_tort-20367 11dN.3. 10363507883us : sched_wakeup: comm=ksoftirqd/11 pid=96 prio=97 target_cpu=011
[10366.680609] rcu_tort-20367 11d..2. 10363507885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/11 next_pid=96 next_prio=97
[10366.682639] ksoftirq-96 11d..2. 10363507890us : sched_switch: prev_comm=ksoftirqd/11 prev_pid=96 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10366.684671] rcu_tort-173 0dN.3. 10363510884us : sched_wakeup: comm=ksoftirqd/0 pid=15 prio=97 target_cpu=000
[10366.685932] rcu_tort-173 0d..2. 10363510885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/0 next_pid=15 next_prio=97
[10366.687937] ksoftirq-15 0d.s3. 10363510891us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10366.689196] ksoftirq-15 0d..2. 10363510893us : sched_switch: prev_comm=ksoftirqd/0 prev_pid=15 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10366.691179] rcu_tort-20368 3d..2. 10363510894us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.693298] rcu_pree-16 3d..2. 10363510905us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10366.695314] rcu_tort-20368 3dN.3. 10363515883us : sched_wakeup: comm=ksoftirqd/3 pid=40 prio=97 target_cpu=003
[10366.696615] rcu_tort-20368 3d..2. 10363515884us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10366.698627] ksoftirq-40 3d.s3. 10363515890us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=007
[10366.699902] rcu_tort-20355 7d..2. 10363515892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.701917] ksoftirq-40 3d..2. 10363515893us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10366.703928] rcu_pree-16 7d..2. 10363515901us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10366.706138] rcu_tort-20355 7dN.4. 10363520884us : sched_wakeup: comm=ksoftirqd/7 pid=68 prio=97 target_cpu=007
[10366.707410] rcu_tort-20355 7d..2. 10363520885us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/7 next_pid=68 next_prio=97
[10366.709435] ksoftirq-68 7d.s3. 10363520891us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10366.710703] rcu_tort-20368 3d..2. 10363520892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.712713] ksoftirq-68 7d..2. 10363520895us : sched_switch: prev_comm=ksoftirqd/7 prev_pid=68 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10366.714732] rcu_pree-16 3d..2. 10363520897us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10366.716755] rcu_tort-20368 3dN.3. 10363525883us : sched_wakeup: comm=ksoftirqd/3 pid=40 prio=97 target_cpu=003
[10366.718019] rcu_tort-20368 3d..2. 10363525884us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10366.720031] ksoftirq-40 3d.s3. 10363525889us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=009
[10366.721278] ksoftirq-40 3d..2. 10363525892us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10366.723266] rcu_tort-20365 9d..2. 10363525893us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.725270] rcu_pree-16 9d..2. 10363525904us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10366.727258] rcu_tort-20365 9dN.4. 10363530886us : sched_wakeup: comm=ksoftirqd/9 pid=82 prio=97 target_cpu=009
[10366.728523] rcu_tort-20365 9d..2. 10363530887us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/9 next_pid=82 next_prio=97
[10366.730530] ksoftirq-82 9d.s3. 10363530894us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=002
[10366.731788] rcu_tort-20360 2d..2. 10363530898us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.733804] ksoftirq-82 9d..2. 10363530898us : sched_switch: prev_comm=ksoftirqd/9 prev_pid=82 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20365 next_prio=98
[10366.735807] rcu_pree-16 2d..2. 10363530910us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10366.737799] rcu_tort-20360 2dN.3. 10363534890us : sched_wakeup: comm=ksoftirqd/2 pid=32 prio=97 target_cpu=002
[10366.739047] rcu_tort-20360 2d..2. 10363534893us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/2 next_pid=32 next_prio=97
[10366.741067] ksoftirq-32 2d.s3. 10363534901us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10366.742346] rcu_tort-20368 3d..2. 10363534904us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.744357] ksoftirq-32 2d..2. 10363534907us : sched_switch: prev_comm=ksoftirqd/2 prev_pid=32 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10366.746351] rcu_pree-16 3d..2. 10363534910us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10366.748349] rcu_tort-20368 3dN.4. 10363539882us : sched_wakeup: comm=ksoftirqd/3 pid=40 prio=97 target_cpu=003
[10366.749632] rcu_tort-20368 3d..2. 10363539883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10366.751652] ksoftirq-40 3d.s3. 10363539888us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=011
[10366.752903] rcu_tort-20367 11d..2. 10363539891us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.754905] ksoftirq-40 3d..2. 10363539892us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20368 next_prio=98
[10366.756907] rcu_pree-16 11d..2. 10363539900us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10366.758905] rcu_tort-20367 11dN.3. 10363544883us : sched_wakeup: comm=ksoftirqd/11 pid=96 prio=97 target_cpu=011
[10366.760171] rcu_tort-20367 11d..2. 10363544884us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/11 next_pid=96 next_prio=97
[10366.762194] ksoftirq-96 11d.s3. 10363544890us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=007
[10366.763490] rcu_tort-20355 7d..2. 10363544892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.765526] ksoftirq-96 11d..2. 10363544894us : sched_switch: prev_comm=ksoftirqd/11 prev_pid=96 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20367 next_prio=98
[10366.767552] rcu_pree-16 7d..2. 10363544902us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10366.769589] rcu_tort-20355 7dN.3. 10363548886us : sched_wakeup: comm=ksoftirqd/7 pid=68 prio=97 target_cpu=007
[10366.770855] rcu_tort-20355 7d..2. 10363548887us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/7 next_pid=68 next_prio=97
[10366.772893] ksoftirq-68 7d.s3. 10363548893us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=005
[10366.774170] rcu_tort-20362 5d..2. 10363548897us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.776753] ksoftirq-68 7d..2. 10363548898us : sched_switch: prev_comm=ksoftirqd/7 prev_pid=68 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20355 next_prio=98
[10366.779820] rcu_pree-16 5d..2. 10363548908us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10366.782066] rcu_tort-20362 5dN.4. 10363553882us : sched_wakeup: comm=ksoftirqd/5 pid=54 prio=97 target_cpu=005
[10366.783331] rcu_tort-20362 5d..2. 10363553883us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/5 next_pid=54 next_prio=97
[10366.785358] ksoftirq-54 5d.s3. 10363553889us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=002
[10366.786653] rcu_tort-20360 2d..2. 10363553892us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.788681] ksoftirq-54 5d..2. 10363553893us : sched_switch: prev_comm=ksoftirqd/5 prev_pid=54 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10366.790682] rcu_pree-16 2d..2. 10363553905us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10366.792698] rcu_tort-20360 2dN.3. 10363558883us : sched_wakeup: comm=ksoftirqd/2 pid=32 prio=97 target_cpu=002
[10366.793954] rcu_tort-20360 2d..2. 10363558884us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/2 next_pid=32 next_prio=97
[10366.795958] ksoftirq-32 2d.s3. 10363558891us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=005
[10366.797285] rcu_tort-20362 5d..2. 10363558893us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10366.799405] ksoftirq-32 2d..2. 10363558895us : sched_switch: prev_comm=ksoftirqd/2 prev_pid=32 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=20360 next_prio=98
[10366.801559] rcu_pree-16 5d..2. 10363558902us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20362 next_prio=98
[10366.803628] rcu_tort-173 0d..2. 10363559190us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=R+ ==> next_comm=kworker/0:0 next_pid=20144 next_prio=120
[10366.805756] kworker/-20144 0d..2. 10363559197us : sched_switch: prev_comm=kworker/0:0 prev_pid=20144 prev_prio=120 prev_state=I ==> next_comm=torture_stutter next_pid=172 next_prio=120
[10366.807885] torture_-172 0d..2. 10363559201us : sched_switch: prev_comm=torture_stutter prev_pid=172 prev_prio=120 prev_state=I ==> next_comm=rcu_torture_rea next_pid=157 next_prio=139
[10366.810012] rcu_tort-20355 7d..2. 10363559204us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20355 prev_prio=98 prev_state=I ==> next_comm=rcu_torture_fak next_pid=151 next_prio=139
[10366.812100] rcu_tort-20368 3d..2. 10363559205us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20368 prev_prio=98 prev_state=I ==> next_comm=rcu_torture_wri next_pid=148 next_prio=120
[10366.814261] rcu_tort-20362 5d..2. 10363559206us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20362 prev_prio=98 prev_state=I ==> next_comm=rcu_torture_rea next_pid=162 next_prio=139
[10366.816421] rcu_tort-20366 13d..2. 10363559207us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20366 prev_prio=98 prev_state=I ==> next_comm=rcu_torture_rea next_pid=155 next_prio=139
[10366.818491] rcu_tort-151 7d..2. 10363559210us : sched_switch: prev_comm=rcu_torture_fak prev_pid=151 prev_prio=139 prev_state=I ==> next_comm=kworker/7:0 next_pid=19250 next_prio=120
[10366.820538] rcu_tort-148 3dN.3. 10363559210us : dl_server_stop <-dequeue_entities
[10366.821524] rcu_tort-157 0d..3. 10363559211us : dl_server_stop <-dequeue_entities
[10366.822504] rcu_tort-162 5d..2. 10363559215us : sched_switch: prev_comm=rcu_torture_rea prev_pid=162 prev_prio=139 prev_state=I ==> next_comm=rcu_torture_rea next_pid=164 next_prio=139
[10366.824556] rcu_tort-157 0d..2. 10363559216us : sched_switch: prev_comm=rcu_torture_rea prev_pid=157 prev_prio=139 prev_state=I ==> next_comm=rcu_torture_boo next_pid=173 next_prio=98
[10366.826593] rcu_tort-20365 9d..3. 10363559218us : dl_server_start <-enqueue_task_fair
[10366.827590] kworker/-19250 7d..3. 10363559219us : dl_server_stop <-dequeue_entities
[10366.828572] rcu_tort-20365 9d..2. 10363559221us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20365 prev_prio=98 prev_state=I ==> next_comm=kcompactd0 next_pid=134 next_prio=120
[10366.830603] rcu_tort-20360 2d..3. 10363559222us : dl_server_start <-enqueue_task_fair
[10366.831611] rcu_tort-20367 11d..3. 10363559225us : dl_server_start <-enqueue_task_fair
[10366.832615] rcu_tort-20360 2d..2. 10363559225us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20360 prev_prio=98 prev_state=I ==> next_comm=kthreadd next_pid=2 next_prio=120
[10366.834608] rcu_tort-164 5d..2. 10363559225us : sched_switch: prev_comm=rcu_torture_rea prev_pid=164 prev_prio=139 prev_state=I ==> next_comm=rcu_torture_boo next_pid=20364 next_prio=98
[10366.836703] kcompact-134 9d..3. 10363559225us : dl_server_stop <-dequeue_entities
[10366.837691] rcu_tort-155 13d..2. 10363559227us : sched_switch: prev_comm=rcu_torture_rea prev_pid=155 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=154 next_prio=139
[10366.839748] rcu_tort-20367 11d..2. 10363559227us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20367 prev_prio=98 prev_state=I ==> next_comm=rcu_torture_rea next_pid=156 next_prio=139
[10366.841822] kworker/-19250 7d..3. 10363559232us : dl_server_start <-enqueue_task_fair
[10366.842856] rcu_tort-156 11d..3. 10363559232us : dl_server_stop <-dequeue_entities
[10366.843846] rcu_tort-20364 5d..2. 10363559233us : sched_switch: prev_comm=rcu_torture_boo prev_pid=20364 prev_prio=98 prev_state=I ==> next_comm=rcu_torture_fak next_pid=153 next_prio=139
[10366.845949] kworker/-19250 7d..2. 10363559234us : sched_switch: prev_comm=kworker/7:0 prev_pid=19250 prev_prio=120 prev_state=I ==> next_comm=rcu_torture_fak next_pid=149 next_prio=139
[10366.847982] rcu_tort-154 13d..2. 10363559236us : sched_switch: prev_comm=rcu_torture_rea prev_pid=154 prev_prio=139 prev_state=I ==> next_comm=rcu_torture_bar next_pid=196 next_prio=120
[10366.850053] rcu_tort-149 7d..3. 10363559236us : dl_server_stop <-dequeue_entities
[10366.851038] kcompact-134 9d..2. 10363559237us : sched_switch: prev_comm=kcompactd0 prev_pid=134 prev_prio=120 prev_state=S ==> next_comm=swapper/9 next_pid=0 next_prio=120
[10366.852970] rcu_tort-153 5d..2. 10363559237us : sched_switch: prev_comm=rcu_torture_fak prev_pid=153 prev_prio=139 prev_state=I ==> next_comm=rcu_torture_rea next_pid=159 next_prio=139
[10366.855030] rcu_tort-173 0d..3. 10363559240us : dl_server_start <-enqueue_task_fair
[10366.856025] rcu_tort-159 5d..3. 10363559241us : dl_server_stop <-dequeue_entities
[10366.856998] rcu_tort-173 0d..2. 10363559244us : sched_switch: prev_comm=rcu_torture_boo prev_pid=173 prev_prio=98 prev_state=I ==> next_comm=rcu_torture_rea next_pid=165 next_prio=139
[10366.859061] rcu_tort-148 3dN.3. 10363559245us : dl_server_start <-enqueue_task_fair
[10366.860069] ---------------------------------
[10366.860683] Dumping ftrace buffer:
[10366.861094] (ftrace buffer empty)
[10368.660736] Boost inversion persisted: No QS from CPU 3
[10368.660756] rcu-torture: rcu_torture_boost is stopping
[10368.667675] rcu-torture: rcu_torture_read_exit: End of episode
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-06 20:44 ` Paul E. McKenney
@ 2024-10-07 9:34 ` Peter Zijlstra
2024-10-08 11:11 ` Peter Zijlstra
1 sibling, 0 replies; 67+ messages in thread
From: Peter Zijlstra @ 2024-10-07 9:34 UTC (permalink / raw)
To: Paul E. McKenney; +Cc: vschneid, linux-kernel, sfr, linux-next, kernel-team
On Sun, Oct 06, 2024 at 01:44:53PM -0700, Paul E. McKenney wrote:
> > I've given it 200*20m and the worst I got was one dl-server double
> > enqueue. I'll go stare at that I suppose.
>
> With your patch, I got 24 failures out of 100 TREE03 runs of 18 hours
> each. The failures were different, though, mostly involving boost
> failures in which RCU priority boosting didn't actually result in the
> low-priority readers getting boosted. An ftrace/event-trace dump of
> such a situation is shown below.
Urgh, WTF and more of that. Let me go ponder things.
Thanks for testing.
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-06 20:44 ` Paul E. McKenney
2024-10-07 9:34 ` Peter Zijlstra
@ 2024-10-08 11:11 ` Peter Zijlstra
2024-10-08 16:24 ` Paul E. McKenney
1 sibling, 1 reply; 67+ messages in thread
From: Peter Zijlstra @ 2024-10-08 11:11 UTC (permalink / raw)
To: Paul E. McKenney; +Cc: vschneid, linux-kernel, sfr, linux-next, kernel-team
On Sun, Oct 06, 2024 at 01:44:53PM -0700, Paul E. McKenney wrote:
> With your patch, I got 24 failures out of 100 TREE03 runs of 18 hours
> each. The failures were different, though, mostly involving boost
> failures in which RCU priority boosting didn't actually result in the
> low-priority readers getting boosted.
Somehow I feel this is progress, albeit very minor :/
> There were also a number of "sched: DL replenish lagged too much"
> messages, but it looks like this was a symptom of the ftrace dump.
>
> Given that this now involves priority boosting, I am trying 400*TREE03
> with each guest OS restricted to four CPUs to see if that makes things
> happen more quickly, and will let you know how this goes.
>
> Any other debug I should apply?
The sched_pi_setprio tracepoint perhaps?
I've read all the RCU_BOOST and rtmutex code (once again), and I've been
running pi_stress with --sched id=low,policy=other to ensure the code
paths in question are taken. But so far so very nothing :/
(Noting that both RCU_BOOST and PI futexes use the same rt_mutex / PI API)
You know RCU_BOOST better than me.. then again, it is utterly weird this
is apparently affected. I've gotta ask, a kernel with my patch on and
additionally flipping kernel/sched/features.h:SCHED_FEAT(DELAY_DEQUEUE,
false) functions as expected?
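(As an aside for anyone reproducing this: besides editing kernel/sched/features.h and rebuilding, scheduler features can usually be flipped at run time on kernels built with CONFIG_SCHED_DEBUG; the debugfs path below is the one used by recent kernels and is an assumption about the tester's configuration, not something stated in this thread.)

```shell
# Disable DELAY_DEQUEUE without a rebuild (requires CONFIG_SCHED_DEBUG
# and a mounted debugfs; path as on recent kernels):
echo NO_DELAY_DEQUEUE > /sys/kernel/debug/sched/features

# Verify: enabled features print bare, disabled ones with a NO_ prefix.
cat /sys/kernel/debug/sched/features
```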
One very minor thing I noticed while I read the code, do with as you
think best...
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 1c7cbd145d5e..95061119653d 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -1071,10 +1071,6 @@ static int rcu_boost(struct rcu_node *rnp)
* Recheck under the lock: all tasks in need of boosting
* might exit their RCU read-side critical sections on their own.
*/
- if (rnp->exp_tasks == NULL && rnp->boost_tasks == NULL) {
- raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
- return 0;
- }
/*
* Preferentially boost tasks blocking expedited grace periods.
@@ -1082,10 +1078,13 @@ static int rcu_boost(struct rcu_node *rnp)
* expedited grace period must boost all blocked tasks, including
* those blocking the pre-existing normal grace period.
*/
- if (rnp->exp_tasks != NULL)
- tb = rnp->exp_tasks;
- else
+ tb = rnp->exp_tasks;
+ if (!tb)
tb = rnp->boost_tasks;
+ if (!tb) {
+ raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
+ return 0;
+ }
/*
* We boost task t by manufacturing an rt_mutex that appears to
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-08 11:11 ` Peter Zijlstra
@ 2024-10-08 16:24 ` Paul E. McKenney
2024-10-08 22:34 ` Paul E. McKenney
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-10-08 16:24 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: vschneid, linux-kernel, sfr, linux-next, kernel-team
On Tue, Oct 08, 2024 at 01:11:50PM +0200, Peter Zijlstra wrote:
> On Sun, Oct 06, 2024 at 01:44:53PM -0700, Paul E. McKenney wrote:
>
> > With your patch, I got 24 failures out of 100 TREE03 runs of 18 hours
> > each. The failures were different, though, mostly involving boost
> > failures in which RCU priority boosting didn't actually result in the
> > low-priority readers getting boosted.
>
> Somehow I feel this is progress, albeit very minor :/
>
> > There were also a number of "sched: DL replenish lagged too much"
> > messages, but it looks like this was a symptom of the ftrace dump.
> >
> > Given that this now involves priority boosting, I am trying 400*TREE03
> > with each guest OS restricted to four CPUs to see if that makes things
> > happen more quickly, and will let you know how this goes.
And this does seem to make things happen more quickly, but including
an RCU splat. So...
> > Any other debug I should apply?
>
> The sched_pi_setprio tracepoint perhaps?
I will give it a shot, thank you!
> I've read all the RCU_BOOST and rtmutex code (once again), and I've been
> running pi_stress with --sched id=low,policy=other to ensure the code
> paths in question are taken. But so far so very nothing :/
>
> (Noting that both RCU_BOOST and PI futexes use the same rt_mutex / PI API)
>
> You know RCU_BOOST better than me.. then again, it is utterly weird this
> is apparently affected. I've gotta ask, a kernel with my patch on and
> additionally flipping kernel/sched/features.h:SCHED_FEAT(DELAY_DEQUEUE,
> false) functions as expected?
I will try that after the sched_pi_setprio tracepoint (presumably both).
> One very minor thing I noticed while I read the code, do with as you
> think best...
>
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index 1c7cbd145d5e..95061119653d 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -1071,10 +1071,6 @@ static int rcu_boost(struct rcu_node *rnp)
> * Recheck under the lock: all tasks in need of boosting
> * might exit their RCU read-side critical sections on their own.
> */
> - if (rnp->exp_tasks == NULL && rnp->boost_tasks == NULL) {
> - raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
> - return 0;
> - }
>
> /*
> * Preferentially boost tasks blocking expedited grace periods.
> @@ -1082,10 +1078,13 @@ static int rcu_boost(struct rcu_node *rnp)
> * expedited grace period must boost all blocked tasks, including
> * those blocking the pre-existing normal grace period.
> */
> - if (rnp->exp_tasks != NULL)
> - tb = rnp->exp_tasks;
> - else
> + tb = rnp->exp_tasks;
> + if (!tb)
> tb = rnp->boost_tasks;
> + if (!tb) {
> + raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
> + return 0;
> + }
>
> /*
> * We boost task t by manufacturing an rt_mutex that appears to
Well, it is one line shorter and arguably simpler. It looks equivalent,
or am I missing something? If equivalent, I will leave it to Frederic
and the others, since they likely must live with this longer than I do.
And my next step will be attempting to make rcutorture provoke that RCU
splat more often. In the meantime, please feel free to consider this
to be my bug.
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-08 16:24 ` Paul E. McKenney
@ 2024-10-08 22:34 ` Paul E. McKenney
0 siblings, 0 replies; 67+ messages in thread
From: Paul E. McKenney @ 2024-10-08 22:34 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: vschneid, linux-kernel, sfr, linux-next, kernel-team
On Tue, Oct 08, 2024 at 09:24:11AM -0700, Paul E. McKenney wrote:
> On Tue, Oct 08, 2024 at 01:11:50PM +0200, Peter Zijlstra wrote:
> > On Sun, Oct 06, 2024 at 01:44:53PM -0700, Paul E. McKenney wrote:
> >
> > > With your patch, I got 24 failures out of 100 TREE03 runs of 18 hours
> > > each. The failures were different, though, mostly involving boost
> > > failures in which RCU priority boosting didn't actually result in the
> > > low-priority readers getting boosted.
> >
> > Somehow I feel this is progress, albeit very minor :/
> >
> > > There were also a number of "sched: DL replenish lagged too much"
> > > messages, but it looks like this was a symptom of the ftrace dump.
> > >
> > > Given that this now involves priority boosting, I am trying 400*TREE03
> > > with each guest OS restricted to four CPUs to see if that makes things
> > > happen more quickly, and will let you know how this goes.
>
> And this does seem to make things happen more quickly, but including
> an RCU splat. So...
>
> > > Any other debug I should apply?
> >
> > The sched_pi_setprio tracepoint perhaps?
>
> I will give it a shot, thank you!
And the ftrace output is shown below.
> > I've read all the RCU_BOOST and rtmutex code (once again), and I've been
> > running pi_stress with --sched id=low,policy=other to ensure the code
> > paths in question are taken. But so far so very nothing :/
> >
> > (Noting that both RCU_BOOST and PI futexes use the same rt_mutex / PI API)
> >
> > You know RCU_BOOST better than me.. then again, it is utterly weird this
> > is apparently affected. I've gotta ask, a kernel with my patch on and
> > additionally flipping kernel/sched/features.h:SCHED_FEAT(DELAY_DEQUEUE,
> > false) functions as expected?
>
> I will try that after the sched_pi_setprio tracepoint (presumably both).
And trying this now.
Thanx, Paul
------------------------------------------------------------------------
[10611.834143] Dumping ftrace buffer:
[10611.834561] ---------------------------------
[10611.835048] CPU:1 [LOST 16689746 EVENTS]
[10611.835048] rcu_tort-70 1dNh3. 10607789211us : sched_wakeup: comm=rcu_torture_fak pid=65 prio=139 target_cpu=001
[10611.836636] rcu_tort-70 1d..2. 10607789214us : sched_switch: prev_comm=rcu_torture_rea prev_pid=70 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_fak next_pid=65 next_prio=139
[10611.838442] rcu_tort-65 1d..2. 10607789216us : sched_switch: prev_comm=rcu_torture_fak prev_pid=65 prev_prio=139 prev_state=I ==> next_comm=rcu_torture_rea next_pid=70 next_prio=139
[10611.840241] rcu_tort-70 1dNh3. 10607789717us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10611.841300] rcu_tort-70 1dN.3. 10607789720us : sched_wakeup: comm=ksoftirqd/1 pid=26 prio=97 target_cpu=001
[10611.842407] rcu_tort-70 1d..2. 10607789722us : sched_switch: prev_comm=rcu_torture_rea prev_pid=70 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10611.844154] rcuc/1-25 1d..2. 10607789726us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=ksoftirqd/1 next_pid=26 next_prio=97
[10611.845811] ksoftirq-26 1d..2. 10607789728us : sched_switch: prev_comm=ksoftirqd/1 prev_pid=26 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=71 next_prio=139
[10611.847613] rcu_tort-71 1d..2. 10607790926us : sched_switch: prev_comm=rcu_torture_rea prev_pid=71 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=70 next_prio=139
[10611.849423] rcu_tort-70 1dN.5. 10607790929us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=001
[10611.850541] rcu_tort-70 1d..2. 10607790931us : sched_switch: prev_comm=rcu_torture_rea prev_pid=70 prev_prio=139 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10611.852290] rcu_pree-16 1d..2. 10607790934us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_rea next_pid=70 next_prio=139
[10611.854025] rcu_tort-70 1d..2. 10607791171us : sched_switch: prev_comm=rcu_torture_rea prev_pid=70 prev_prio=139 prev_state=R+ ==> next_comm=rcu_exp_par_gp_ next_pid=18 next_prio=97
[10611.855867] rcu_exp_-18 1d..2. 10607791180us : sched_switch: prev_comm=rcu_exp_par_gp_ prev_pid=18 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=70 next_prio=139
[10611.857681] rcu_tort-70 1d..2. 10607791187us : sched_switch: prev_comm=rcu_torture_rea prev_pid=70 prev_prio=139 prev_state=R+ ==> next_comm=rcub/1 next_pid=17 next_prio=97
[10611.859412] rcub/1-17 1d..3. 10607791189us : sched_pi_setprio: comm=rcu_torture_rea pid=71 oldprio=139 newprio=97
[10611.860594] rcub/1-17 1d..2. 10607791194us : sched_switch: prev_comm=rcub/1 prev_pid=17 prev_prio=97 prev_state=D ==> next_comm=rcu_torture_rea next_pid=71 next_prio=97
[10611.862281] rcu_tort-71 1dN.4. 10607791203us : sched_pi_setprio: comm=rcu_torture_rea pid=71 oldprio=97 newprio=139
[10611.863467] rcu_tort-71 1dN.5. 10607791206us : sched_wakeup: comm=rcub/1 pid=17 prio=97 target_cpu=001
[10611.864532] rcu_tort-71 1d..2. 10607791207us : sched_switch: prev_comm=rcu_torture_rea prev_pid=71 prev_prio=139 prev_state=R+ ==> next_comm=rcub/1 next_pid=17 next_prio=97
[10611.866247] rcub/1-17 1d..2. 10607791209us : sched_switch: prev_comm=rcub/1 prev_pid=17 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=70 next_prio=139
[10611.867939] rcu_tort-70 1dNh3. 10607791718us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10611.869003] rcu_tort-70 1d..2. 10607791722us : sched_switch: prev_comm=rcu_torture_rea prev_pid=70 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10611.870714] rcuc/1-25 1D..4. 10607791725us : sched_wakeup: comm=rcu_torture_fak pid=67 prio=139 target_cpu=001
[10611.871889] rcuc/1-25 1d..2. 10607791727us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_fak next_pid=67 next_prio=139
[10611.873607] rcu_tort-67 1d..2. 10607791730us : sched_switch: prev_comm=rcu_torture_fak prev_pid=67 prev_prio=139 prev_state=I ==> next_comm=rcu_torture_rea next_pid=70 next_prio=139
[10611.875411] rcu_tort-70 1dNh4. 10607792268us : sched_wakeup: comm=torture_onoff pid=80 prio=120 target_cpu=001
[10611.876557] rcu_tort-70 1d..2. 10607792273us : sched_switch: prev_comm=rcu_torture_rea prev_pid=70 prev_prio=139 prev_state=R+ ==> next_comm=torture_onoff next_pid=80 next_prio=120
[10611.878372] torture_-80 1d..2. 10607792285us : sched_switch: prev_comm=torture_onoff prev_pid=80 prev_prio=120 prev_state=D ==> next_comm=rcu_torture_rea next_pid=71 next_prio=139
[10611.880164] rcu_tort-71 1d..2. 10607792776us : sched_switch: prev_comm=rcu_torture_rea prev_pid=71 prev_prio=139 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10611.881932] rcu_pree-16 1d..2. 10607792780us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_rea next_pid=71 next_prio=139
[10611.883704] rcu_tort-71 1d..2. 10607793384us : sched_switch: prev_comm=rcu_torture_rea prev_pid=71 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=70 next_prio=139
[10611.885531] rcu_tort-70 1DNh3. 10607793717us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10611.886718] rcu_tort-70 1d..2. 10607793721us : sched_switch: prev_comm=rcu_torture_rea prev_pid=70 prev_prio=139 prev_state=R+ ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10611.888440] rcuc/1-25 1d..2. 10607793724us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=70 next_prio=139
[10611.890155] rcu_tort-70 1dNh3. 10607794403us : sched_wakeup: comm=rcu_torture_fak pid=66 prio=139 target_cpu=001
[10611.891349] rcu_tort-70 1d..2. 10607794406us : sched_switch: prev_comm=rcu_torture_rea prev_pid=70 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_fak next_pid=66 next_prio=139
[10611.893184] rcu_tort-66 1d..5. 10607794408us : sched_wakeup: comm=kworker/1:2 pid=17058 prio=120 target_cpu=001
[10611.894365] rcu_tort-66 1d..2. 10607794410us : sched_switch: prev_comm=rcu_torture_fak prev_pid=66 prev_prio=139 prev_state=I ==> next_comm=kworker/1:2 next_pid=17058 next_prio=120
[10611.896169] kworker/-17058 1d..2. 10607794418us : sched_switch: prev_comm=kworker/1:2 prev_pid=17058 prev_prio=120 prev_state=D ==> next_comm=rcu_torture_rea next_pid=71 next_prio=139
[10611.897961] rcu_tort-71 1d..2. 10607794445us : sched_switch: prev_comm=rcu_torture_rea prev_pid=71 prev_prio=139 prev_state=R+ ==> next_comm=rcu_exp_par_gp_ next_pid=18 next_prio=97
[10611.899770] rcu_exp_-18 1d..2. 10607794455us : sched_switch: prev_comm=rcu_exp_par_gp_ prev_pid=18 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=71 next_prio=139
[10611.901564] rcu_tort-71 1d..2. 10607794462us : sched_switch: prev_comm=rcu_torture_rea prev_pid=71 prev_prio=139 prev_state=R+ ==> next_comm=rcu_torture_rea next_pid=69 next_prio=97
[10611.903370] rcu_tort-69 1dN.4. 10607794467us : sched_pi_setprio: comm=rcu_torture_rea pid=69 oldprio=97 newprio=139
[10611.904560] rcu_tort-69 1d..2. 10607794475us : sched_switch: prev_comm=rcu_torture_rea prev_pid=69 prev_prio=139 prev_state=R+ ==> next_comm=rcub/1 next_pid=17 next_prio=97
[10611.906279] rcub/1-17 1d..3. 10607794479us : sched_pi_setprio: comm=rcu_torture_rea pid=70 oldprio=139 newprio=97
[10611.907467] rcub/1-17 1d..2. 10607794484us : sched_switch: prev_comm=rcub/1 prev_pid=17 prev_prio=97 prev_state=D ==> next_comm=rcu_torture_rea next_pid=70 next_prio=97
[10611.909164] rcu_tort-70 1dN.4. 10607794493us : sched_pi_setprio: comm=rcu_torture_rea pid=70 oldprio=97 newprio=139
[10611.910354] rcu_tort-70 1dN.5. 10607794495us : sched_wakeup: comm=rcub/1 pid=17 prio=97 target_cpu=001
[10611.911434] rcu_tort-70 1d..2. 10607794496us : sched_switch: prev_comm=rcu_torture_rea prev_pid=70 prev_prio=139 prev_state=R+ ==> next_comm=rcub/1 next_pid=17 next_prio=97
[10611.913150] rcub/1-17 1d..2. 10607794498us : sched_switch: prev_comm=rcub/1 prev_pid=17 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=71 next_prio=139
[10611.914855] rcu_tort-71 1d..2. 10607794518us : sched_switch: prev_comm=rcu_torture_rea prev_pid=71 prev_prio=139 prev_state=R+ ==> next_comm=kworker/1:2 next_pid=17058 next_prio=120
[10611.916657] kworker/-17058 1d..2. 10607794525us : sched_switch: prev_comm=kworker/1:2 prev_pid=17058 prev_prio=120 prev_state=D ==> next_comm=rcu_torture_rea next_pid=71 next_prio=139
[10611.918449] rcu_tort-71 1d..2. 10607794537us : sched_switch: prev_comm=rcu_torture_rea prev_pid=71 prev_prio=139 prev_state=R+ ==> next_comm=rcu_exp_par_gp_ next_pid=18 next_prio=97
[10611.920251] rcu_exp_-18 1d..2. 10607794545us : sched_switch: prev_comm=rcu_exp_par_gp_ prev_pid=18 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=71 next_prio=139
[10611.922041] rcu_tort-71 1d..2. 10607794550us : sched_switch: prev_comm=rcu_torture_rea prev_pid=71 prev_prio=139 prev_state=R+ ==> next_comm=rcub/1 next_pid=17 next_prio=97
[10611.923767] rcub/1-17 1d..2. 10607794552us : sched_switch: prev_comm=rcub/1 prev_pid=17 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_rea next_pid=71 next_prio=139
[10611.925478] rcu_tort-71 1d..2. 10607794560us : sched_switch: prev_comm=rcu_torture_rea prev_pid=71 prev_prio=139 prev_state=R+ ==> next_comm=kworker/1:2 next_pid=17058 next_prio=120
[10611.927276] kworker/-17058 1d..2. 10607794562us : sched_switch: prev_comm=kworker/1:2 prev_pid=17058 prev_prio=120 prev_state=I ==> next_comm=rcu_torture_rea next_pid=71 next_prio=139
[10611.929049] rcu_tort-71 1d..2. 10607794747us : sched_switch: prev_comm=rcu_torture_rea prev_pid=71 prev_prio=139 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10611.930807] rcu_pree-16 1d..2. 10607794752us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_rea next_pid=71 next_prio=139
[10611.932585] rcu_tort-71 1DNh5. 10607795806us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10611.933652] rcu_tort-71 1d..2. 10607795819us : sched_switch: prev_comm=rcu_torture_rea prev_pid=71 prev_prio=139 prev_state=R+ ==> next_comm=rcu_exp_par_gp_ next_pid=18 next_prio=97
[10611.935442] rcu_exp_-18 1d..2. 10607795826us : sched_switch: prev_comm=rcu_exp_par_gp_ prev_pid=18 prev_prio=97 prev_state=S ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10611.937143] rcuc/1-25 1d..2. 10607795829us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=cpuhp/1 next_pid=23 next_prio=120
[10611.938772] cpuhp/1-23 1d..2. 10607795835us : sched_switch: prev_comm=cpuhp/1 prev_pid=23 prev_prio=120 prev_state=D ==> next_comm=rcu_torture_rea next_pid=69 next_prio=139
[10611.940497] rcu_tort-69 1dN.6. 10607795838us : sched_wakeup: comm=migration/1 pid=24 prio=0 target_cpu=001
[10611.941606] rcu_tort-69 1d..2. 10607795839us : sched_switch: prev_comm=rcu_torture_rea prev_pid=69 prev_prio=139 prev_state=R+ ==> next_comm=migration/1 next_pid=24 next_prio=0
[10611.943359] migratio-24 1d..2. 10607795850us : sched_switch: prev_comm=migration/1 prev_pid=24 prev_prio=0 prev_state=S ==> next_comm=rcu_torture_rea next_pid=71 next_prio=139
[10611.945763] rcu_tort-71 1dN.6. 10607795851us : sched_wakeup: comm=migration/1 pid=24 prio=0 target_cpu=001
[10611.947438] rcu_tort-71 1d..2. 10607795852us : sched_switch: prev_comm=rcu_torture_rea prev_pid=71 prev_prio=139 prev_state=R+ ==> next_comm=migration/1 next_pid=24 next_prio=0
[10611.949813] migratio-24 1d..4. 10607795854us : dl_server_stop <-dequeue_entities
[10611.950703] migratio-24 1d..2. 10607795859us : sched_switch: prev_comm=migration/1 prev_pid=24 prev_prio=0 prev_state=S ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10611.952391] <idle>-0 1d.h4. 10607796329us : sched_wakeup: comm=rcu_torture_rea pid=71 prio=139 target_cpu=003
[10611.953557] <idle>-0 1dNh4. 10607797725us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10611.954631] <idle>-0 1d..2. 10607797729us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10611.956280] rcuc/1-25 1d..2. 10607797732us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10611.957915] <idle>-0 1d.h4. 10607798540us : sched_wakeup: comm=rcu_torture_fak pid=65 prio=139 target_cpu=003
[10611.959081] <idle>-0 1dNh4. 10607798723us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10611.960162] <idle>-0 1d..2. 10607798726us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10611.961800] rcuc/1-25 1d..2. 10607798728us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10611.963453] <idle>-0 1d.h4. 10607798758us : sched_wakeup: comm=rcu_torture_fak pid=67 prio=139 target_cpu=003
[10611.964626] <idle>-0 1dN.4. 10607799726us : sched_wakeup: comm=ksoftirqd/1 pid=26 prio=97 target_cpu=001
[10611.965757] <idle>-0 1d..2. 10607799727us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=ksoftirqd/1 next_pid=26 next_prio=97
[10611.967444] ksoftirq-26 1d..2. 10607799729us : sched_switch: prev_comm=ksoftirqd/1 prev_pid=26 prev_prio=97 prev_state=S ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10611.969146] <idle>-0 1d.h4. 10607808780us : sched_wakeup: comm=rcu_torture_fak pid=66 prio=139 target_cpu=003
[10611.970315] <idle>-0 1dN.4. 10608035728us : sched_wakeup: comm=ksoftirqd/1 pid=26 prio=97 target_cpu=001
[10611.971442] <idle>-0 1d..2. 10608035730us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=ksoftirqd/1 next_pid=26 next_prio=97
[10611.973138] ksoftirq-26 1d..2. 10608035734us : sched_switch: prev_comm=ksoftirqd/1 prev_pid=26 prev_prio=97 prev_state=S ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10611.974829] <idle>-0 1dNh4. 10608098728us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10611.975903] <idle>-0 1d..2. 10608098733us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10611.977556] rcuc/1-25 1d..2. 10608098739us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10611.979216] <idle>-0 1dNh4. 10608099736us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10611.980289] <idle>-0 1d..2. 10608099744us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10611.981927] rcuc/1-25 1d..2. 10608099753us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10611.983576] <idle>-0 1dNh4. 10608100771us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10611.984649] <idle>-0 1d..2. 10608100774us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10611.986360] rcuc/1-25 1D..5. 10608100779us : dl_server_start <-enqueue_task_fair
[10611.987249] rcuc/1-25 1DN.4. 10608100781us : sched_wakeup: comm=cpuhp/1 pid=23 prio=120 target_cpu=001
[10611.988339] rcuc/1-25 1d..2. 10608100784us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=cpuhp/1 next_pid=23 next_prio=120
[10611.989984] cpuhp/1-23 1dNh4. 10608101849us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10611.991130] cpuhp/1-23 1d..2. 10608101862us : sched_switch: prev_comm=cpuhp/1 prev_pid=23 prev_prio=120 prev_state=R+ ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10611.992756] rcuc/1-25 1d..2. 10608101873us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=cpuhp/1 next_pid=23 next_prio=120
[10611.994388] cpuhp/1-23 1d..3. 10608101878us : dl_server_stop <-dequeue_entities
[10611.995253] cpuhp/1-23 1d..2. 10608101882us : sched_switch: prev_comm=cpuhp/1 prev_pid=23 prev_prio=120 prev_state=D ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10611.996901] <idle>-0 1d..2. 10608102674us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/1 next_pid=23 next_prio=120
[10611.998558] cpuhp/1-23 1d..3. 10608102700us : dl_server_stop <-dequeue_entities
[10611.999419] cpuhp/1-23 1d..2. 10608102706us : sched_switch: prev_comm=cpuhp/1 prev_pid=23 prev_prio=120 prev_state=D ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10612.001066] <idle>-0 1dNh4. 10608102724us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10612.002181] <idle>-0 1d..2. 10608102728us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10612.003853] rcuc/1-25 1d..2. 10608102732us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=S ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10612.005493] <idle>-0 1d..2. 10608102776us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/1 next_pid=23 next_prio=120
[10612.007148] cpuhp/1-23 1d..3. 10608102808us : dl_server_stop <-dequeue_entities
[10612.008002] cpuhp/1-23 1d..2. 10608102809us : sched_switch: prev_comm=cpuhp/1 prev_pid=23 prev_prio=120 prev_state=D ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10612.009655] <idle>-0 1d..2. 10608102835us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/1 next_pid=23 next_prio=120
[10612.011308] cpuhp/1-23 1dN.3. 10608102923us : sched_wakeup: comm=ksoftirqd/1 pid=26 prio=97 target_cpu=001
[10612.012417] cpuhp/1-23 1d..2. 10608102925us : sched_switch: prev_comm=cpuhp/1 prev_pid=23 prev_prio=120 prev_state=R+ ==> next_comm=ksoftirqd/1 next_pid=26 next_prio=97
[10612.014095] ksoftirq-26 1d..2. 10608102927us : sched_switch: prev_comm=ksoftirqd/1 prev_pid=26 prev_prio=97 prev_state=P ==> next_comm=cpuhp/1 next_pid=23 next_prio=120
[10612.015775] cpuhp/1-23 1dN.3. 10608102928us : sched_wakeup: comm=rcuc/1 pid=25 prio=97 target_cpu=001
[10612.016842] cpuhp/1-23 1d..2. 10608102929us : sched_switch: prev_comm=cpuhp/1 prev_pid=23 prev_prio=120 prev_state=R+ ==> next_comm=rcuc/1 next_pid=25 next_prio=97
[10612.018483] rcuc/1-25 1d..2. 10608102930us : sched_switch: prev_comm=rcuc/1 prev_pid=25 prev_prio=97 prev_state=P ==> next_comm=cpuhp/1 next_pid=23 next_prio=120
[10612.020125] cpuhp/1-23 1d..3. 10608102936us : dl_server_stop <-dequeue_entities
[10612.020980] cpuhp/1-23 1d..2. 10608102937us : sched_switch: prev_comm=cpuhp/1 prev_pid=23 prev_prio=120 prev_state=S ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10612.022639] <idle>-0 1d..2. 10608102952us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=cpuhp/1 next_pid=23 next_prio=120
[10612.024297] cpuhp/1-23 1d..3. 10608102956us : dl_server_stop <-dequeue_entities
[10612.025159] cpuhp/1-23 1d..2. 10608102957us : sched_switch: prev_comm=cpuhp/1 prev_pid=23 prev_prio=120 prev_state=P ==> next_comm=swapper/1 next_pid=0 next_prio=120
[10612.026812] <idle>-0 1d..2. 10608102968us : sched_switch: prev_comm=swapper/1 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=migration/1 next_pid=24 next_prio=0
[10612.028498] CPU:0 [LOST 41556131 EVENTS]
[10612.028498] ksoftirq-15 0d..2. 10612061728us : sched_switch: prev_comm=ksoftirqd/0 prev_pid=15 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.030666] rcu_tort-75 0dNh3. 10612062717us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.031732] rcu_tort-75 0d..2. 10612062720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.033431] rcuc/0-19 0d..2. 10612062721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.035822] rcu_tort-75 0dNh3. 10612063717us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.037431] rcu_tort-75 0d..2. 10612063720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.039694] rcuc/0-19 0d..2. 10612063721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.041398] rcu_tort-75 0dNh3. 10612064717us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.042469] rcu_tort-75 0d..2. 10612064719us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.044178] rcuc/0-19 0d..2. 10612064721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.045874] rcu_tort-75 0dNh3. 10612065717us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.046943] rcu_tort-75 0d..2. 10612065720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.048671] rcuc/0-19 0d..2. 10612065721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.050376] CPU:2 [LOST 16225065 EVENTS]
[10612.050376] rcuc/2-31 2D..4. 10612066725us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10612.051903] rcuc/2-31 2d..2. 10612066729us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.053636] rcu_tort-75 0dNh3. 10612067717us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.054706] rcu_tort-19668 2dNh4. 10612067717us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.055810] rcu_tort-75 0d..2. 10612067719us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.057559] rcu_tort-19668 2d..2. 10612067720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.059356] rcuc/0-19 0d..2. 10612067721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.061055] rcuc/2-31 2d..2. 10612067723us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.062793] rcu_tort-75 0dNh3. 10612068718us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.063870] rcu_tort-19668 2dNh4. 10612068719us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.064942] rcu_tort-75 0d..2. 10612068720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.066649] rcuc/0-19 0d..2. 10612068722us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.068421] rcu_tort-19668 2d..2. 10612068722us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.070195] rcuc/2-31 2D..4. 10612068727us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10612.071306] rcuc/2-31 2d..2. 10612068730us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.073018] rcu_tort-75 0dNh3. 10612069718us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.074094] rcu_tort-19668 2dNh3. 10612069718us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.075185] rcu_tort-75 0d..2. 10612069720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.076900] rcu_tort-19668 2d..2. 10612069721us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.078706] rcuc/0-19 0d..2. 10612069722us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.080432] rcuc/2-31 2d..2. 10612069724us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.082160] rcu_tort-19668 2dNh3. 10612070717us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.083231] rcu_tort-75 0dNh3. 10612070718us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.084303] rcu_tort-75 0d..2. 10612070720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.086004] rcu_tort-19668 2d..2. 10612070720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.087745] rcuc/0-19 0d..2. 10612070721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.089444] rcuc/2-31 2d..2. 10612070722us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.091177] rcu_tort-19668 2dNh3. 10612071718us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.092246] rcu_tort-19668 2d..2. 10612071721us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.094022] rcuc/2-31 2D..4. 10612071726us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10612.095228] rcuc/2-31 2d..2. 10612071729us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.097464] rcu_tort-19668 2dNh3. 10612072717us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.099083] rcu_tort-75 0dNh3. 10612072717us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.100657] rcu_tort-75 0d..2. 10612072720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.102669] rcu_tort-19668 2d..2. 10612072720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.104422] rcuc/0-19 0d..2. 10612072721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.106128] rcuc/2-31 2d..2. 10612072723us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.107854] rcu_tort-75 0dNh3. 10612073717us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.108945] rcu_tort-19668 2dNh3. 10612073719us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.110023] rcu_tort-75 0d..2. 10612073720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.111750] rcuc/0-19 0d..2. 10612073721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.113505] rcu_tort-19668 2d..2. 10612073722us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.115262] rcuc/2-31 2D..4. 10612073726us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10612.116380] rcuc/2-31 2d..2. 10612073730us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.118115] rcu_tort-75 0dNh3. 10612074718us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.119187] rcu_tort-19668 2dNh3. 10612074718us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.120269] rcu_tort-75 0d..2. 10612074720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.121975] rcuc/0-19 0d..2. 10612074721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.123703] rcu_tort-19668 2d..2. 10612074721us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.125471] rcuc/2-31 2d..2. 10612074723us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.127223] rcu_tort-19668 2dNh3. 10612075717us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.128293] rcu_tort-75 0dNh4. 10612075718us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.129365] rcu_tort-19668 2d..2. 10612075720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.131116] rcu_tort-75 0d..2. 10612075720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.132833] rcuc/0-19 0d..2. 10612075722us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.134540] rcuc/2-31 2D..4. 10612075726us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10612.135660] rcuc/2-31 2d..2. 10612075729us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.137387] rcu_tort-75 0dNh3. 10612076717us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.138459] rcu_tort-19668 2dNh3. 10612076718us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.139530] rcu_tort-75 0d..2. 10612076719us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.141240] rcuc/0-19 0d..2. 10612076721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.142946] rcu_tort-19668 2d..2. 10612076722us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.144686] CPU:3 [LOST 16631447 EVENTS]
[10612.144686] rcu_tort-19649 3d..2. 10612076722us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/3 next_pid=39 next_prio=97
[10612.146847] rcuc/2-31 2d..2. 10612076723us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.148570] rcuc/3-39 3d..2. 10612076723us : sched_switch: prev_comm=rcuc/3 prev_pid=39 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.150296] rcu_tort-19668 2dNh3. 10612077717us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.151370] rcu_tort-75 0dNh3. 10612077718us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.152440] rcu_tort-19668 2d..2. 10612077720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.154169] rcu_tort-75 0d..2. 10612077720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.155865] rcuc/2-31 2d..2. 10612077722us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.157585] rcuc/0-19 0D..4. 10612077726us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10612.158695] rcu_tort-19649 3d..2. 10612077729us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10612.160468] rcuc/0-19 0d..2. 10612077730us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.162160] rcu_pree-16 3d..2. 10612077741us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.163918] rcu_tort-75 0dNh3. 10612078717us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.164979] rcu_tort-19649 3dNh3. 10612078718us : sched_wakeup: comm=rcuc/3 pid=39 prio=97 target_cpu=003
[10612.166039] rcu_tort-19668 2dNh4. 10612078718us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.167100] rcu_tort-75 0d..2. 10612078720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.168799] rcuc/0-19 0d..2. 10612078721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.170489] rcu_tort-19649 3dN.3. 10612078722us : sched_wakeup: comm=ksoftirqd/3 pid=40 prio=97 target_cpu=003
[10612.171600] rcu_tort-19668 2d..2. 10612078722us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.173333] rcu_tort-19649 3d..2. 10612078722us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/3 next_pid=39 next_prio=97
[10612.175050] rcuc/2-31 2d..2. 10612078724us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.176765] rcuc/3-39 3d..2. 10612078724us : sched_switch: prev_comm=rcuc/3 prev_pid=39 prev_prio=97 prev_state=S ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10612.178431] ksoftirq-40 3d..2. 10612078725us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.180191] rcu_tort-75 0dNh4. 10612079717us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.181255] rcu_tort-19668 2dNh3. 10612079718us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.182320] rcu_tort-75 0d..2. 10612079720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.184011] rcuc/0-19 0d..2. 10612079721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.185715] rcu_tort-19668 2d..2. 10612079722us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.187444] rcuc/2-31 2D..4. 10612079726us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10612.188554] rcu_tort-19649 3d..2. 10612079728us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10612.190330] rcuc/2-31 2d..2. 10612079730us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.192041] rcu_pree-16 3d..2. 10612079737us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.193807] rcu_tort-19649 3dNh4. 10612080717us : sched_wakeup: comm=rcuc/3 pid=39 prio=97 target_cpu=003
[10612.194873] rcu_tort-19668 2dNh4. 10612080717us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.195941] rcu_tort-75 0dNh3. 10612080717us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.197007] rcu_tort-75 0d..2. 10612080720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.198791] rcu_tort-19668 2d..2. 10612080721us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.200549] rcu_tort-19649 3d..2. 10612080721us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/3 next_pid=39 next_prio=97
[10612.202293] rcuc/0-19 0d..2. 10612080721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.204014] rcuc/2-31 2d..2. 10612080723us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.205812] rcuc/3-39 3d..2. 10612080723us : sched_switch: prev_comm=rcuc/3 prev_pid=39 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.207556] rcu_tort-75 0dNh4. 10612081718us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.208632] rcu_tort-19668 2dNh3. 10612081718us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.209714] rcu_tort-75 0d..2. 10612081720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.211477] rcu_tort-19649 3dN.3. 10612081720us : sched_wakeup: comm=ksoftirqd/3 pid=40 prio=97 target_cpu=003
[10612.213195] rcu_tort-19649 3d..2. 10612081721us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10612.215854] rcuc/0-19 0d..2. 10612081721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.218373] rcu_tort-19668 2d..2. 10612081722us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.220941] ksoftirq-40 3d..2. 10612081723us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.223583] rcuc/2-31 2D..4. 10612081726us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10612.225255] rcu_tort-19649 3d..2. 10612081728us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10612.227931] rcuc/2-31 2d..2. 10612081730us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.230513] rcu_pree-16 3d..2. 10612081736us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.233127] rcu_tort-75 0dNh3. 10612082717us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.234458] rcu_tort-19649 3dNh3. 10612082718us : sched_wakeup: comm=rcuc/3 pid=39 prio=97 target_cpu=003
[10612.235524] rcu_tort-19668 2dNh4. 10612082718us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.236591] rcu_tort-75 0d..2. 10612082719us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.238293] rcuc/0-19 0d..2. 10612082721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.239982] rcu_tort-19649 3d..2. 10612082721us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/3 next_pid=39 next_prio=97
[10612.241713] rcu_tort-19668 2d..2. 10612082722us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.243475] rcuc/2-31 2d..2. 10612082723us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.245195] rcuc/3-39 3d..2. 10612082723us : sched_switch: prev_comm=rcuc/3 prev_pid=39 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.246911] rcu_tort-75 0dNh3. 10612083718us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.247975] rcu_tort-19668 2dNh4. 10612083719us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.249040] rcu_tort-75 0d..2. 10612083720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.250767] rcu_tort-19649 3dN.3. 10612083720us : sched_wakeup: comm=ksoftirqd/3 pid=40 prio=97 target_cpu=003
[10612.251877] rcu_tort-19649 3d..2. 10612083721us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10612.253646] rcuc/0-19 0d..2. 10612083722us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.255345] rcu_tort-19668 2d..2. 10612083722us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.257071] ksoftirq-40 3d..2. 10612083723us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.258842] rcuc/2-31 2D..4. 10612083727us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10612.259951] rcu_tort-19649 3d..2. 10612083728us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10612.261728] rcuc/2-31 2d..2. 10612083730us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.263453] rcu_pree-16 3d..2. 10612083737us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.265217] rcu_tort-75 0dNh3. 10612084717us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.266337] rcu_tort-19649 3dNh3. 10612084718us : sched_wakeup: comm=rcuc/3 pid=39 prio=97 target_cpu=003
[10612.267401] rcu_tort-19668 2dNh3. 10612084718us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.268501] rcu_tort-75 0d..2. 10612084719us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.270270] rcuc/0-19 0d..2. 10612084721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.271987] rcu_tort-19649 3d..2. 10612084721us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/3 next_pid=39 next_prio=97
[10612.273718] rcu_tort-19668 2d..2. 10612084721us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.275454] rcuc/2-31 2d..2. 10612084723us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.277179] rcuc/3-39 3d..2. 10612084723us : sched_switch: prev_comm=rcuc/3 prev_pid=39 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.278891] rcu_tort-19668 2dNh4. 10612085717us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.279954] rcu_tort-75 0dNh3. 10612085717us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.281033] rcu_tort-19649 3dN.3. 10612085719us : sched_wakeup: comm=ksoftirqd/3 pid=40 prio=97 target_cpu=003
[10612.282151] rcu_tort-75 0d..2. 10612085720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.283857] rcu_tort-19668 2d..2. 10612085720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.285586] rcu_tort-19649 3d..2. 10612085720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10612.287360] rcuc/0-19 0d..2. 10612085721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.289043] ksoftirq-40 3d..2. 10612085722us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.290807] rcuc/2-31 2D..4. 10612085725us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10612.291915] rcu_tort-19649 3d..2. 10612085727us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10612.293689] rcuc/2-31 2d..2. 10612085729us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.295404] rcu_pree-16 3d..2. 10612085736us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.297167] rcu_tort-75 0dNh3. 10612086717us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.298233] rcu_tort-19649 3dNh4. 10612086719us : sched_wakeup: comm=rcuc/3 pid=39 prio=97 target_cpu=003
[10612.299295] rcu_tort-19668 2dNh3. 10612086719us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.300358] rcu_tort-75 0d..2. 10612086720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.302049] rcuc/0-19 0d..2. 10612086721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.303824] rcu_tort-19668 2d..2. 10612086722us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.305554] rcu_tort-19649 3d..2. 10612086722us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/3 next_pid=39 next_prio=97
[10612.307284] rcuc/2-31 2d..2. 10612086724us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.309010] rcuc/3-39 3d..2. 10612086724us : sched_switch: prev_comm=rcuc/3 prev_pid=39 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.310729] rcu_tort-75 0dNh3. 10612087717us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.311793] rcu_tort-19668 2dNh3. 10612087718us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.312858] rcu_tort-75 0d..2. 10612087719us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.314566] rcu_tort-19649 3dN.4. 10612087720us : sched_wakeup: comm=ksoftirqd/3 pid=40 prio=97 target_cpu=003
[10612.315677] rcuc/0-19 0d..2. 10612087721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.317374] rcu_tort-19649 3d..2. 10612087721us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10612.319151] rcu_tort-19668 2d..2. 10612087721us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.320876] ksoftirq-40 3d..2. 10612087723us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.322645] rcuc/2-31 2D..4. 10612087727us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10612.323758] rcu_tort-19649 3d..2. 10612087728us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10612.325538] rcuc/2-31 2d..2. 10612087730us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.327267] rcu_pree-16 3d..2. 10612087738us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.329031] rcu_tort-19668 2dNh3. 10612088717us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.330097] rcu_tort-19649 3dNh3. 10612088717us : sched_wakeup: comm=rcuc/3 pid=39 prio=97 target_cpu=003
[10612.331179] rcu_tort-75 0dNh4. 10612088718us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.332276] rcu_tort-19649 3d..2. 10612088720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/3 next_pid=39 next_prio=97
[10612.333998] rcu_tort-19668 2d..2. 10612088720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.335729] rcu_tort-75 0d..2. 10612088721us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.337435] rcuc/3-39 3d..2. 10612088722us : sched_switch: prev_comm=rcuc/3 prev_pid=39 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.339157] rcuc/0-19 0d..2. 10612088722us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.340845] rcuc/2-31 2d..2. 10612088722us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.342566] rcu_tort-75 0dNh3. 10612089717us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.343647] rcu_tort-19668 2dNh3. 10612089718us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.344714] rcu_tort-75 0d..2. 10612089719us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.346420] rcu_tort-19649 3dN.3. 10612089720us : sched_wakeup: comm=ksoftirqd/3 pid=40 prio=97 target_cpu=003
[10612.347531] rcu_tort-19649 3d..2. 10612089721us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10612.349321] rcu_tort-19668 2d..2. 10612089721us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.351049] rcuc/0-19 0d..2. 10612089721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.352749] ksoftirq-40 3d..2. 10612089723us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.354535] rcuc/2-31 2d..2. 10612089723us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.356297] rcu_tort-19668 2dNh3. 10612090717us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.357395] rcu_tort-19668 2d..2. 10612090720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.359154] rcuc/2-31 2D..4. 10612090726us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10612.360308] rcu_tort-19649 3d..2. 10612090727us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10612.362146] rcuc/2-31 2d..2. 10612090729us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.363857] rcu_pree-16 3d..2. 10612090734us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.365626] rcu_tort-19649 3dNh3. 10612091717us : sched_wakeup: comm=rcuc/3 pid=39 prio=97 target_cpu=003
[10612.366689] rcu_tort-19668 2dNh3. 10612091717us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.367751] rcu_tort-75 0dNh3. 10612091717us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.368815] rcu_tort-75 0d..2. 10612091719us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.370568] rcu_tort-19668 2d..2. 10612091720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.372303] rcu_tort-19649 3dN.3. 10612091720us : sched_wakeup: comm=ksoftirqd/3 pid=40 prio=97 target_cpu=003
[10612.373415] rcu_tort-19649 3d..2. 10612091721us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/3 next_pid=39 next_prio=97
[10612.375149] rcuc/0-19 0d..2. 10612091721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.376884] rcuc/2-31 2d..2. 10612091722us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.378649] rcuc/3-39 3d..2. 10612091723us : sched_switch: prev_comm=rcuc/3 prev_pid=39 prev_prio=97 prev_state=S ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10612.380310] ksoftirq-40 3d..2. 10612091724us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.382074] rcu_tort-75 0dNh3. 10612092717us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.383159] rcu_tort-19668 2dNh3. 10612092718us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.384279] rcu_tort-75 0d..2. 10612092719us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.386019] rcuc/0-19 0d..2. 10612092721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.387705] rcu_tort-19668 2d..2. 10612092721us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.389434] rcuc/2-31 2D..4. 10612092727us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10612.390682] rcu_tort-19649 3d..2. 10612092728us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10612.393333] rcuc/2-31 2d..2. 10612092730us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.395886] rcu_pree-16 3d..2. 10612092739us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.397669] rcu_tort-19649 3dNh3. 10612093717us : sched_wakeup: comm=rcuc/3 pid=39 prio=97 target_cpu=003
[10612.398740] rcu_tort-19668 2dNh4. 10612093717us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.399811] rcu_tort-75 0dNh3. 10612093717us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.400879] rcu_tort-75 0d..2. 10612093720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.402591] rcu_tort-19668 2d..2. 10612093720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.404326] rcu_tort-19649 3d..2. 10612093720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/3 next_pid=39 next_prio=97
[10612.406060] rcuc/0-19 0d..2. 10612093721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.407842] rcuc/2-31 2d..2. 10612093722us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.409632] rcuc/3-39 3d..2. 10612093722us : sched_switch: prev_comm=rcuc/3 prev_pid=39 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.411366] rcu_tort-75 0dNh4. 10612094717us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.412435] rcu_tort-19668 2dNh3. 10612094719us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.413511] rcu_tort-75 0d..2. 10612094720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.415225] rcu_tort-19649 3dN.3. 10612094720us : sched_wakeup: comm=ksoftirqd/3 pid=40 prio=97 target_cpu=003
[10612.416344] rcu_tort-19649 3d..2. 10612094721us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10612.418138] rcu_tort-19668 2d..2. 10612094721us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.420073] rcuc/0-19 0d..2. 10612094721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.422166] ksoftirq-40 3d..2. 10612094723us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.424368] rcuc/2-31 2D..4. 10612094727us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10612.425769] rcu_tort-19649 3d..2. 10612094728us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10612.427964] rcuc/2-31 2d..2. 10612094730us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.430090] rcu_pree-16 3d..2. 10612094736us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.432276] rcu_tort-75 0dNh3. 10612095717us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.433595] rcu_tort-19668 2dNh3. 10612095718us : sched_wakeup: comm=rcuc/2 pid=31 prio=97 target_cpu=002
[10612.434931] rcu_tort-75 0d..2. 10612095720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.437030] rcuc/0-19 0d..2. 10612095721us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.439140] rcu_tort-19668 2d..2. 10612095722us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/2 next_pid=31 next_prio=97
[10612.441285] rcuc/2-31 2d..2. 10612095724us : sched_switch: prev_comm=rcuc/2 prev_pid=31 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.443413] rcu_tort-19649 3dN.3. 10612096719us : sched_wakeup: comm=ksoftirqd/3 pid=40 prio=97 target_cpu=003
[10612.444819] rcu_tort-19649 3d..2. 10612096721us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10612.447042] ksoftirq-40 3d..2. 10612096722us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.449261] rcu_tort-19649 3dN.4. 10612098720us : sched_wakeup: comm=ksoftirqd/3 pid=40 prio=97 target_cpu=003
[10612.450633] rcu_tort-19649 3d..2. 10612098721us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10612.452819] ksoftirq-40 3d.s3. 10612098726us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=002
[10612.454194] rcu_tort-19668 2d..2. 10612098727us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10612.456380] ksoftirq-40 3d..2. 10612098729us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.458558] rcu_pree-16 2d..2. 10612098729us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.460734] rcu_tort-19668 2d..2. 10612099603us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=R+ ==> next_comm=rcu_torture_sta next_pid=72 next_prio=120
[10612.462979] rcu_tort-72 2dN.4. 10612103759us : sched_wakeup: comm=ksoftirqd/2 pid=32 prio=97 target_cpu=002
[10612.464378] rcu_tort-19649 3dN.3. 10612105721us : sched_wakeup: comm=ksoftirqd/3 pid=40 prio=97 target_cpu=003
[10612.465747] rcu_tort-19649 3d..2. 10612105722us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/3 next_pid=40 next_prio=97
[10612.467937] ksoftirq-40 3d..2. 10612105741us : sched_switch: prev_comm=ksoftirqd/3 prev_pid=40 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.470127] rcu_tort-72 2d..2. 10612105938us : sched_switch: prev_comm=rcu_torture_sta prev_pid=72 prev_prio=120 prev_state=R+ ==> next_comm=kworker/2:2 next_pid=18235 next_prio=120
[10612.472342] kworker/-18235 2dN.4. 10612105947us : sched_wakeup: comm=rcu_torture_bar pid=84 prio=139 target_cpu=002
[10612.473775] kworker/-18235 2d..2. 10612105948us : sched_switch: prev_comm=kworker/2:2 prev_pid=18235 prev_prio=120 prev_state=R+ ==> next_comm=rcu_torture_bar next_pid=84 next_prio=139
[10612.475980] rcu_tort-84 2d..2. 10612105952us : sched_switch: prev_comm=rcu_torture_bar prev_pid=84 prev_prio=139 prev_state=D ==> next_comm=kworker/2:2 next_pid=18235 next_prio=120
[10612.478184] kworker/-18235 2d..2. 10612105954us : sched_switch: prev_comm=kworker/2:2 prev_pid=18235 prev_prio=120 prev_state=I ==> next_comm=rcu_torture_sta next_pid=72 next_prio=120
[10612.480413] rcu_tort-75 0dN.3. 10612106719us : sched_wakeup: comm=ksoftirqd/0 pid=15 prio=97 target_cpu=000
[10612.481777] rcu_tort-75 0d..2. 10612106720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=ksoftirqd/0 next_pid=15 next_prio=97
[10612.483927] ksoftirq-15 0d..2. 10612106723us : sched_switch: prev_comm=ksoftirqd/0 prev_pid=15 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.486069] rcu_tort-72 2d..3. 10612107654us : dl_server_stop <-dequeue_entities
[10612.487139] rcu_tort-72 2d..2. 10612107658us : sched_switch: prev_comm=rcu_torture_sta prev_pid=72 prev_prio=120 prev_state=S ==> next_comm=ksoftirqd/2 next_pid=32 next_prio=97
[10612.489380] ksoftirq-32 2d.s3. 10612107664us : sched_wakeup: comm=rcu_preempt pid=16 prio=97 target_cpu=003
[10612.490768] rcu_tort-19649 3d..2. 10612107665us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=rcu_preempt next_pid=16 next_prio=97
[10612.492950] rcu_pree-16 3d..2. 10612107668us : sched_switch: prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==> next_comm=rcu_torture_boo next_pid=19649 next_prio=98
[10612.495130] ksoftirq-32 2d.s2. 10612107670us : dl_server_start <-enqueue_task_fair
[10612.496232] ksoftirq-32 2d..2. 10612107673us : sched_switch: prev_comm=ksoftirqd/2 prev_pid=32 prev_prio=97 prev_state=S ==> next_comm=torture_onoff next_pid=80 next_prio=120
[10612.498372] torture_-80 2d..2. 10612107683us : sched_switch: prev_comm=torture_onoff prev_pid=80 prev_prio=120 prev_state=D ==> next_comm=kcompactd0 next_pid=50 next_prio=120
[10612.500512] kcompact-50 2d..3. 10612107689us : dl_server_stop <-dequeue_entities
[10612.501572] kcompact-50 2d..2. 10612107691us : sched_switch: prev_comm=kcompactd0 prev_pid=50 prev_prio=120 prev_state=S ==> next_comm=rcu_torture_boo next_pid=19668 next_prio=98
[10612.503743] rcu_tort-75 0dNh3. 10612107718us : sched_wakeup: comm=rcuc/0 pid=19 prio=97 target_cpu=000
[10612.505065] rcu_tort-75 0d..2. 10612107720us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=R+ ==> next_comm=rcuc/0 next_pid=19 next_prio=97
[10612.507176] rcuc/0-19 0d..2. 10612107722us : sched_switch: prev_comm=rcuc/0 prev_pid=19 prev_prio=97 prev_state=S ==> next_comm=rcu_torture_boo next_pid=75 next_prio=98
[10612.509258] rcu_tort-19649 3d..2. 10612107729us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19649 prev_prio=98 prev_state=R+ ==> next_comm=rcu_torture_bar next_pid=82 next_prio=139
[10612.511542] rcu_tort-82 3d..4. 10612107734us : sched_wakeup: comm=rcu_torture_bar pid=86 prio=120 target_cpu=003
[10612.512958] rcu_tort-82 3d..2. 10612107737us : sched_switch: prev_comm=rcu_torture_bar prev_pid=82 prev_prio=139 prev_state=D ==> next_comm=rcu_torture_fak next_pid=66 next_prio=139
[10612.515147] rcu_tort-66 3d..2. 10612107742us : sched_switch: prev_comm=rcu_torture_fak prev_pid=66 prev_prio=139 prev_state=I ==> next_comm=rcu_torture_fak next_pid=64 next_prio=139
[10612.517333] rcu_tort-64 3d..2. 10612107746us : sched_switch: prev_comm=rcu_torture_fak prev_pid=64 prev_prio=139 prev_state=I ==> next_comm=rcu_torture_fak next_pid=65 next_prio=139
[10612.519515] rcu_tort-65 3d..2. 10612107749us : sched_switch: prev_comm=rcu_torture_fak prev_pid=65 prev_prio=139 prev_state=I ==> next_comm=rcu_torture_bar next_pid=86 next_prio=120
[10612.521710] rcu_tort-86 3d..2. 10612107758us : sched_switch: prev_comm=rcu_torture_bar prev_pid=86 prev_prio=120 prev_state=D ==> next_comm=torture_stutter next_pid=74 next_prio=120
[10612.523898] torture_-74 3d..2. 10612107769us : sched_switch: prev_comm=torture_stutter prev_pid=74 prev_prio=120 prev_state=I ==> next_comm=kworker/3:1 next_pid=19499 next_prio=120
[10612.526079] rcu_tort-19668 2d..3. 10612107772us : dl_server_start <-enqueue_task_fair
[10612.527161] rcu_tort-75 0d..2. 10612107773us : sched_switch: prev_comm=rcu_torture_boo prev_pid=75 prev_prio=98 prev_state=I ==> next_comm=kworker/0:0 next_pid=9796 next_prio=120
[10612.529317] rcu_tort-19668 2d..2. 10612107775us : sched_switch: prev_comm=rcu_torture_boo prev_pid=19668 prev_prio=98 prev_state=I ==> next_comm=rcu_torture_rea next_pid=69 next_prio=139
[10612.531532] kworker/-19499 3d..3. 10612107775us : sched_wakeup: comm=rcu_torture_wri pid=63 prio=120 target_cpu=003
[10612.532942] kworker/-19499 3d..2. 10612107777us : sched_switch: prev_comm=kworker/3:1 prev_pid=19499 prev_prio=120 prev_state=I ==> next_comm=rcu_torture_wri next_pid=63 next_prio=120
[10612.535122] rcu_tort-63 3d..4. 10612107784us : sched_wakeup: comm=rcu_exp_gp_kthr pid=20 prio=97 target_cpu=002
[10612.536521] rcu_tort-69 2dn.3. 10612107785us : dl_server_stop <-dequeue_entities
[10612.537576] rcu_tort-63 3d..2. 10612107786us : sched_switch: prev_comm=rcu_torture_wri prev_pid=63 prev_prio=120 prev_state=D ==> next_comm=rcu_torture_fak next_pid=67 next_prio=139
[10612.539764] kworker/-9796 0dN.3. 10612107787us : dl_server_stop <-dequeue_entities
[10612.540819] rcu_tort-69 2d..2. 10612107787us : sched_switch: prev_comm=rcu_torture_rea prev_pid=69 prev_prio=139 prev_state=I ==> next_comm=rcu_exp_gp_kthr next_pid=20 next_prio=97
[10612.543020] rcu_tort-67 3d..2. 10612107790us : sched_switch: prev_comm=rcu_torture_fak prev_pid=67 prev_prio=139 prev_state=I ==> next_comm=rcu_torture_rea next_pid=71 next_prio=139
[10612.545238] rcu_tort-71 3d..3. 10612107796us : dl_server_stop <-dequeue_entities
[10612.546299] kworker/-9796 0dN.3. 10612107797us : dl_server_start <-enqueue_task_fair
[10612.547376] ---------------------------------
[10612.547961] Dumping ftrace buffer:
[10612.548418] (ftrace buffer empty)
[10616.834128] Boost inversion persisted: No QS from CPU 0
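In case it helps anyone sift through dumps like the one above, the fixed-column
ftrace line format is easy to parse mechanically. A minimal sketch in Python
(the function and field names are my own, not part of any kernel tooling):

```python
import re

# One line of the dumped ftrace buffer looks like:
#   [10612.265217] rcu_tort-75 0dNh3. 10612084717us : sched_wakeup: comm=rcuc/0 ...
TRACE_RE = re.compile(
    r"\[\s*[\d.]+\]\s+"               # printk timestamp, e.g. [10612.268501]
    r"(?P<comm>.+)-(?P<pid>\d+)\s+"   # tracing task's comm and PID
    r"(?P<cpu>\d+)(?P<flags>\S+)\s+"  # CPU number fused with irq/preempt flag chars
    r"(?P<us>\d+)us\s*:\s+"           # trace timestamp in microseconds
    r"(?P<event>\w+):\s*(?P<rest>.*)" # event name and its key=value payload
)

def parse_trace_line(line):
    """Parse one dumped ftrace line into a dict, or return None on no match."""
    m = TRACE_RE.match(line.strip())
    if m is None:
        return None
    ev = m.groupdict()
    ev["pid"], ev["cpu"], ev["us"] = int(ev["pid"]), int(ev["cpu"]), int(ev["us"])
    # Keep only key=value pairs; this skips the "==>" separator in sched_switch.
    ev["args"] = dict(re.findall(r"(\w+)=(\S+)", ev.pop("rest")))
    return ev
```

From there it is straightforward to, say, compute how long rcu_torture_boo sat
runnable between consecutive sched_switch events on a given CPU.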
^ permalink raw reply [flat|nested] 67+ messages in thread
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-02 9:01 ` Tomas Glozar
2024-10-02 12:07 ` Paul E. McKenney
@ 2024-10-10 11:24 ` Tomas Glozar
2024-10-10 15:01 ` Paul E. McKenney
2024-10-22 6:33 ` Tomas Glozar
1 sibling, 2 replies; 67+ messages in thread
From: Tomas Glozar @ 2024-10-10 11:24 UTC (permalink / raw)
To: paulmck
Cc: Valentin Schneider, Chen Yu, Peter Zijlstra, linux-kernel, sfr,
linux-next, kernel-team
On Wed, Oct 2, 2024 at 11:01 AM Tomas Glozar <tglozar@redhat.com> wrote:
>
> FYI I have managed to reproduce the bug on our infrastructure after 21
> hours of 7*TREE03 and I will continue with trying to reproduce it with
> the tracers we want.
>
> Tomas
I also successfully reproduced the bug with the tracers active, after a
few 8-hour test runs on our infrastructure:
[ 0.000000] Linux version 6.11.0-g2004cef11ea0-dirty (...) #1 SMP
PREEMPT_DYNAMIC Wed Oct 9 12:13:40 EDT 2024
[ 0.000000] Command line: debug_boot_weak_hash panic=-1 selinux=0
initcall_debug debug console=ttyS0 rcutorture.n_barrier_cbs=4
rcutorture.stat_interval=15 rcutorture.shutdown_secs=25200
rcutorture.test_no_idle_hz=1 rcutorture.verbose=1
rcutorture.onoff_interval=200 rcutorture.onoff_holdoff=30
rcutree.gp_preinit_delay=12 rcutree.gp_init_delay=3
rcutree.gp_cleanup_delay=3 rcutree.kthread_prio=2 threadirqs
rcutree.use_softirq=0
trace_event=sched:sched_switch,sched:sched_wakeup
ftrace_filter=dl_server_start,dl_server_stop trace_buf_size=2k
ftrace=function torture.ftrace_dump_at_shutdown=1
...
[13550.127541] WARNING: CPU: 1 PID: 155 at kernel/sched/deadline.c:1971 enqueue_dl_entity+0x554/0x5d0
[13550.128982] Modules linked in:
[13550.129528] CPU: 1 UID: 0 PID: 155 Comm: rcu_torture_rea Tainted: G W 6.11.0-g2004cef11ea0-dirty #1
[13550.131419] Tainted: [W]=WARN
[13550.131979] Hardware name: Red Hat KVM/RHEL, BIOS 1.16.3-2.el9 04/01/2014
[13550.133230] RIP: 0010:enqueue_dl_entity+0x554/0x5d0
...
[13550.151286] Call Trace:
[13550.151749] <TASK>
[13550.152141] ? __warn+0x88/0x130
[13550.152717] ? enqueue_dl_entity+0x554/0x5d0
[13550.153485] ? report_bug+0x18e/0x1a0
[13550.154149] ? handle_bug+0x54/0x90
[13550.154792] ? exc_invalid_op+0x18/0x70
[13550.155484] ? asm_exc_invalid_op+0x1a/0x20
[13550.156249] ? enqueue_dl_entity+0x554/0x5d0
[13550.157055] dl_server_start+0x36/0xf0
[13550.157709] enqueue_task_fair+0x220/0x6b0
[13550.158447] activate_task+0x26/0x60
[13550.159131] attach_task+0x35/0x50
[13550.159756] sched_balance_rq+0x663/0xe00
[13550.160511] sched_balance_newidle.constprop.0+0x1a5/0x360
[13550.161520] pick_next_task_fair+0x2f/0x340
[13550.162290] __schedule+0x203/0x900
[13550.162958] ? enqueue_hrtimer+0x35/0x90
[13550.163703] schedule+0x27/0xd0
[13550.164299] schedule_hrtimeout_range_clock+0x99/0x120
[13550.165239] ? __pfx_hrtimer_wakeup+0x10/0x10
[13550.165954] torture_hrtimeout_us+0x7b/0xe0
[13550.166624] rcu_torture_reader+0x139/0x200
[13550.167284] ? __pfx_rcu_torture_timer+0x10/0x10
[13550.168019] ? __pfx_rcu_torture_reader+0x10/0x10
[13550.168764] kthread+0xd6/0x100
[13550.169262] ? __pfx_kthread+0x10/0x10
[13550.169860] ret_from_fork+0x34/0x50
[13550.170424] ? __pfx_kthread+0x10/0x10
[13550.171020] ret_from_fork_asm+0x1a/0x30
[13550.171657] </TASK>
Unfortunately, the RCU stalls that followed appear to have resulted in
abnormal termination of the VM, which meant the ftrace buffer was not
dumped to the console. I am currently re-running the same test with
"ftrace_dump_on_oops panic_on_warn=1" added, and hoping for the best.
Tomas
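(For reference, the boot-time tracing setup on the command line above can also
be reproduced at run time through tracefs; a sketch, assuming tracefs is
mounted at /sys/kernel/tracing — adjust if your system mounts it under
/sys/kernel/debug/tracing instead:)

```
cd /sys/kernel/tracing
echo 2 > buffer_size_kb                                  # trace_buf_size=2k
echo dl_server_start dl_server_stop > set_ftrace_filter  # ftrace_filter=...
echo function > current_tracer                           # ftrace=function
echo 1 > events/sched/sched_switch/enable                # trace_event=sched:...
echo 1 > events/sched/sched_wakeup/enable
```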
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-10 11:24 ` Tomas Glozar
@ 2024-10-10 15:01 ` Paul E. McKenney
2024-10-10 23:28 ` Paul E. McKenney
2024-10-22 6:33 ` Tomas Glozar
1 sibling, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-10-10 15:01 UTC (permalink / raw)
To: Tomas Glozar
Cc: Valentin Schneider, Chen Yu, Peter Zijlstra, linux-kernel, sfr,
linux-next, kernel-team
On Thu, Oct 10, 2024 at 01:24:11PM +0200, Tomas Glozar wrote:
> On Wed, Oct 2, 2024 at 11:01 AM Tomas Glozar <tglozar@redhat.com> wrote:
> >
> > FYI I have managed to reproduce the bug on our infrastructure after 21
> > hours of 7*TREE03 and I will continue with trying to reproduce it with
> > the tracers we want.
> >
> > Tomas
>
> I successfully reproduced the bug also with the tracers active after a
> few 8-hour test runs on our infrastructure:
>
> [ 0.000000] Linux version 6.11.0-g2004cef11ea0-dirty (...) #1 SMP
> PREEMPT_DYNAMIC Wed Oct 9 12:13:40 EDT 2024
> [ 0.000000] Command line: debug_boot_weak_hash panic=-1 selinux=0
> initcall_debug debug console=ttyS0 rcutorture.n_barrier_cbs=4
> rcutorture.stat_interval=15 rcutorture.shutdown_secs=25200
> rcutorture.test_no_idle_hz=1 rcutorture.verbose=1
> rcutorture.onoff_interval=200 rcutorture.onoff_holdoff=30
> rcutree.gp_preinit_delay=12 rcutree.gp_init_delay=3
> rcutree.gp_cleanup_delay=3 rcutree.kthread_prio=2 threadirqs
> rcutree.use_softirq=0
> trace_event=sched:sched_switch,sched:sched_wakeup
> ftrace_filter=dl_server_start,dl_server_stop trace_buf_size=2k
> ftrace=function torture.ftrace_dump_at_shutdown=1
> ...
> [13550.127541] WARNING: CPU: 1 PID: 155 at
> kernel/sched/deadline.c:1971 enqueue_dl_entity+0x554/0x5d0
> [13550.128982] Modules linked in:
> [13550.129528] CPU: 1 UID: 0 PID: 155 Comm: rcu_torture_rea Tainted: G
> W 6.11.0-g2004cef11ea0-dirty #1
> [13550.131419] Tainted: [W]=WARN
> [13550.131979] Hardware name: Red Hat KVM/RHEL, BIOS 1.16.3-2.el9 04/01/2014
> [13550.133230] RIP: 0010:enqueue_dl_entity+0x554/0x5d0
> ...
> [13550.151286] Call Trace:
> [13550.151749] <TASK>
> [13550.152141] ? __warn+0x88/0x130
> [13550.152717] ? enqueue_dl_entity+0x554/0x5d0
> [13550.153485] ? report_bug+0x18e/0x1a0
> [13550.154149] ? handle_bug+0x54/0x90
> [13550.154792] ? exc_invalid_op+0x18/0x70
> [13550.155484] ? asm_exc_invalid_op+0x1a/0x20
> [13550.156249] ? enqueue_dl_entity+0x554/0x5d0
> [13550.157055] dl_server_start+0x36/0xf0
> [13550.157709] enqueue_task_fair+0x220/0x6b0
> [13550.158447] activate_task+0x26/0x60
> [13550.159131] attach_task+0x35/0x50
> [13550.159756] sched_balance_rq+0x663/0xe00
> [13550.160511] sched_balance_newidle.constprop.0+0x1a5/0x360
> [13550.161520] pick_next_task_fair+0x2f/0x340
> [13550.162290] __schedule+0x203/0x900
> [13550.162958] ? enqueue_hrtimer+0x35/0x90
> [13550.163703] schedule+0x27/0xd0
> [13550.164299] schedule_hrtimeout_range_clock+0x99/0x120
> [13550.165239] ? __pfx_hrtimer_wakeup+0x10/0x10
> [13550.165954] torture_hrtimeout_us+0x7b/0xe0
> [13550.166624] rcu_torture_reader+0x139/0x200
> [13550.167284] ? __pfx_rcu_torture_timer+0x10/0x10
> [13550.168019] ? __pfx_rcu_torture_reader+0x10/0x10
> [13550.168764] kthread+0xd6/0x100
> [13550.169262] ? __pfx_kthread+0x10/0x10
> [13550.169860] ret_from_fork+0x34/0x50
> [13550.170424] ? __pfx_kthread+0x10/0x10
> [13550.171020] ret_from_fork_asm+0x1a/0x30
> [13550.171657] </TASK>
>
> Unfortunately, the following rcu stalls appear to have resulted in
> abnormal termination of the VM, which led to the ftrace buffer not
> being dumped into the console. Currently re-running the same test with
> the addition of "ftrace_dump_on_oops panic_on_warn=1" and hoping for
> the best.
Another approach would be rcupdate.rcu_cpu_stall_suppress=1.
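(That knob is an ordinary rcupdate module parameter, so it can be set either
at boot or on the fly; for example:)

```
# On the kernel command line:
rcupdate.rcu_cpu_stall_suppress=1

# Or at run time, via the module-parameter file:
echo 1 > /sys/module/rcupdate/parameters/rcu_cpu_stall_suppress
```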
We probably need to disable RCU CPU stall warnings automatically while
dumping ftrace buffers, but the asynchronous nature of printk() makes
it difficult to work out when to automatically re-enable them...
Thoughts?
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-10 15:01 ` Paul E. McKenney
@ 2024-10-10 23:28 ` Paul E. McKenney
2024-10-14 18:55 ` Paul E. McKenney
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-10-10 23:28 UTC (permalink / raw)
To: Tomas Glozar
Cc: Valentin Schneider, Chen Yu, Peter Zijlstra, linux-kernel, sfr,
linux-next, kernel-team
On Thu, Oct 10, 2024 at 08:01:35AM -0700, Paul E. McKenney wrote:
> On Thu, Oct 10, 2024 at 01:24:11PM +0200, Tomas Glozar wrote:
> > st 2. 10. 2024 v 11:01 odesílatel Tomas Glozar <tglozar@redhat.com> napsal:
> > >
> > > FYI I have managed to reproduce the bug on our infrastructure after 21
> > > hours of 7*TREE03 and I will continue with trying to reproduce it with
> > > the tracers we want.
> > >
> > > Tomas
> >
> > I successfully reproduced the bug also with the tracers active after a
> > few 8-hour test runs on our infrastructure:
> >
> > [ 0.000000] Linux version 6.11.0-g2004cef11ea0-dirty (...) #1 SMP
> > PREEMPT_DYNAMIC Wed Oct 9 12:13:40 EDT 2024
> > [ 0.000000] Command line: debug_boot_weak_hash panic=-1 selinux=0
> > initcall_debug debug console=ttyS0 rcutorture.n_barrier_cbs=4
> > rcutorture.stat_interval=15 rcutorture.shutdown_secs=25200
> > rcutorture.test_no_idle_hz=1 rcutorture.verbose=1
> > rcutorture.onoff_interval=200 rcutorture.onoff_holdoff=30
> > rcutree.gp_preinit_delay=12 rcutree.gp_init_delay=3
> > rcutree.gp_cleanup_delay=3 rcutree.kthread_prio=2 threadirqs
> > rcutree.use_softirq=0
> > trace_event=sched:sched_switch,sched:sched_wakeup
> > ftrace_filter=dl_server_start,dl_server_stop trace_buf_size=2k
> > ftrace=function torture.ftrace_dump_at_shutdown=1
> > ...
> > [13550.127541] WARNING: CPU: 1 PID: 155 at
> > kernel/sched/deadline.c:1971 enqueue_dl_entity+0x554/0x5d0
> > [13550.128982] Modules linked in:
> > [13550.129528] CPU: 1 UID: 0 PID: 155 Comm: rcu_torture_rea Tainted: G
> > W 6.11.0-g2004cef11ea0-dirty #1
> > [13550.131419] Tainted: [W]=WARN
> > [13550.131979] Hardware name: Red Hat KVM/RHEL, BIOS 1.16.3-2.el9 04/01/2014
> > [13550.133230] RIP: 0010:enqueue_dl_entity+0x554/0x5d0
> > ...
> > [13550.151286] Call Trace:
> > [13550.151749] <TASK>
> > [13550.152141] ? __warn+0x88/0x130
> > [13550.152717] ? enqueue_dl_entity+0x554/0x5d0
> > [13550.153485] ? report_bug+0x18e/0x1a0
> > [13550.154149] ? handle_bug+0x54/0x90
> > [13550.154792] ? exc_invalid_op+0x18/0x70
> > [13550.155484] ? asm_exc_invalid_op+0x1a/0x20
> > [13550.156249] ? enqueue_dl_entity+0x554/0x5d0
> > [13550.157055] dl_server_start+0x36/0xf0
> > [13550.157709] enqueue_task_fair+0x220/0x6b0
> > [13550.158447] activate_task+0x26/0x60
> > [13550.159131] attach_task+0x35/0x50
> > [13550.159756] sched_balance_rq+0x663/0xe00
> > [13550.160511] sched_balance_newidle.constprop.0+0x1a5/0x360
> > [13550.161520] pick_next_task_fair+0x2f/0x340
> > [13550.162290] __schedule+0x203/0x900
> > [13550.162958] ? enqueue_hrtimer+0x35/0x90
> > [13550.163703] schedule+0x27/0xd0
> > [13550.164299] schedule_hrtimeout_range_clock+0x99/0x120
> > [13550.165239] ? __pfx_hrtimer_wakeup+0x10/0x10
> > [13550.165954] torture_hrtimeout_us+0x7b/0xe0
> > [13550.166624] rcu_torture_reader+0x139/0x200
> > [13550.167284] ? __pfx_rcu_torture_timer+0x10/0x10
> > [13550.168019] ? __pfx_rcu_torture_reader+0x10/0x10
> > [13550.168764] kthread+0xd6/0x100
> > [13550.169262] ? __pfx_kthread+0x10/0x10
> > [13550.169860] ret_from_fork+0x34/0x50
> > [13550.170424] ? __pfx_kthread+0x10/0x10
> > [13550.171020] ret_from_fork_asm+0x1a/0x30
> > [13550.171657] </TASK>
> >
> > Unfortunately, the following rcu stalls appear to have resulted in
> > abnormal termination of the VM, which led to the ftrace buffer not
> > being dumped into the console. Currently re-running the same test with
> > the addition of "ftrace_dump_on_oops panic_on_warn=1" and hoping for
> > the best.
>
> Another approach would be rcupdate.rcu_cpu_stall_suppress=1.
>
> We probably need to disable RCU CPU stall warnings automatically while
> dumping ftrace buffers, but the asynchronous nature of printk() makes
> it difficult to work out when to automatically re-enable them...
And in the meantime, for whatever it is worth...
The pattern of failures motivated me to add to rcutorture a real-time
task that randomly preempts a randomly chosen online CPU. Here are
the (new and not-to-be-trusted) commits on -rcu's "dev" branch:
d1b99fa42af7 ("torture: Add dowarn argument to torture_sched_setaffinity()")
aed555adc22a ("rcutorture: Add random real-time preemption")
b09bcf8e1406 ("rcutorture: Make the TREE03 scenario do preemption")
Given these, the following sort of command, when run on dual-socket
systems, reproduces a silent failure within a few minutes:
tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 4h --configs "4*TREE03" --kconfig "CONFIG_NR_CPUS=4" --trust-make
But on my laptop, a 30-minute run resulted in zero failures. I am now
retrying with a four-hour laptop run.
I am also adjusting the preemption duration and frequency to see if a
more edifying failure mode might make itself apparent. :-/
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-10 23:28 ` Paul E. McKenney
@ 2024-10-14 18:55 ` Paul E. McKenney
2024-10-21 19:25 ` Paul E. McKenney
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-10-14 18:55 UTC (permalink / raw)
To: Tomas Glozar
Cc: Valentin Schneider, Chen Yu, Peter Zijlstra, linux-kernel, sfr,
linux-next, kernel-team
On Thu, Oct 10, 2024 at 04:28:38PM -0700, Paul E. McKenney wrote:
> On Thu, Oct 10, 2024 at 08:01:35AM -0700, Paul E. McKenney wrote:
> > On Thu, Oct 10, 2024 at 01:24:11PM +0200, Tomas Glozar wrote:
> > > st 2. 10. 2024 v 11:01 odesílatel Tomas Glozar <tglozar@redhat.com> napsal:
> > > >
> > > > FYI I have managed to reproduce the bug on our infrastructure after 21
> > > > hours of 7*TREE03 and I will continue with trying to reproduce it with
> > > > the tracers we want.
> > > >
> > > > Tomas
> > >
> > > I successfully reproduced the bug also with the tracers active after a
> > > few 8-hour test runs on our infrastructure:
> > >
> > > [ 0.000000] Linux version 6.11.0-g2004cef11ea0-dirty (...) #1 SMP
> > > PREEMPT_DYNAMIC Wed Oct 9 12:13:40 EDT 2024
> > > [ 0.000000] Command line: debug_boot_weak_hash panic=-1 selinux=0
> > > initcall_debug debug console=ttyS0 rcutorture.n_barrier_cbs=4
> > > rcutorture.stat_interval=15 rcutorture.shutdown_secs=25200
> > > rcutorture.test_no_idle_hz=1 rcutorture.verbose=1
> > > rcutorture.onoff_interval=200 rcutorture.onoff_holdoff=30
> > > rcutree.gp_preinit_delay=12 rcutree.gp_init_delay=3
> > > rcutree.gp_cleanup_delay=3 rcutree.kthread_prio=2 threadirqs
> > > rcutree.use_softirq=0
> > > trace_event=sched:sched_switch,sched:sched_wakeup
> > > ftrace_filter=dl_server_start,dl_server_stop trace_buf_size=2k
> > > ftrace=function torture.ftrace_dump_at_shutdown=1
> > > ...
> > > [13550.127541] WARNING: CPU: 1 PID: 155 at
> > > kernel/sched/deadline.c:1971 enqueue_dl_entity+0x554/0x5d0
> > > [13550.128982] Modules linked in:
> > > [13550.129528] CPU: 1 UID: 0 PID: 155 Comm: rcu_torture_rea Tainted: G
> > > W 6.11.0-g2004cef11ea0-dirty #1
> > > [13550.131419] Tainted: [W]=WARN
> > > [13550.131979] Hardware name: Red Hat KVM/RHEL, BIOS 1.16.3-2.el9 04/01/2014
> > > [13550.133230] RIP: 0010:enqueue_dl_entity+0x554/0x5d0
> > > ...
> > > [13550.151286] Call Trace:
> > > [13550.151749] <TASK>
> > > [13550.152141] ? __warn+0x88/0x130
> > > [13550.152717] ? enqueue_dl_entity+0x554/0x5d0
> > > [13550.153485] ? report_bug+0x18e/0x1a0
> > > [13550.154149] ? handle_bug+0x54/0x90
> > > [13550.154792] ? exc_invalid_op+0x18/0x70
> > > [13550.155484] ? asm_exc_invalid_op+0x1a/0x20
> > > [13550.156249] ? enqueue_dl_entity+0x554/0x5d0
> > > [13550.157055] dl_server_start+0x36/0xf0
> > > [13550.157709] enqueue_task_fair+0x220/0x6b0
> > > [13550.158447] activate_task+0x26/0x60
> > > [13550.159131] attach_task+0x35/0x50
> > > [13550.159756] sched_balance_rq+0x663/0xe00
> > > [13550.160511] sched_balance_newidle.constprop.0+0x1a5/0x360
> > > [13550.161520] pick_next_task_fair+0x2f/0x340
> > > [13550.162290] __schedule+0x203/0x900
> > > [13550.162958] ? enqueue_hrtimer+0x35/0x90
> > > [13550.163703] schedule+0x27/0xd0
> > > [13550.164299] schedule_hrtimeout_range_clock+0x99/0x120
> > > [13550.165239] ? __pfx_hrtimer_wakeup+0x10/0x10
> > > [13550.165954] torture_hrtimeout_us+0x7b/0xe0
> > > [13550.166624] rcu_torture_reader+0x139/0x200
> > > [13550.167284] ? __pfx_rcu_torture_timer+0x10/0x10
> > > [13550.168019] ? __pfx_rcu_torture_reader+0x10/0x10
> > > [13550.168764] kthread+0xd6/0x100
> > > [13550.169262] ? __pfx_kthread+0x10/0x10
> > > [13550.169860] ret_from_fork+0x34/0x50
> > > [13550.170424] ? __pfx_kthread+0x10/0x10
> > > [13550.171020] ret_from_fork_asm+0x1a/0x30
> > > [13550.171657] </TASK>
> > >
> > > Unfortunately, the following rcu stalls appear to have resulted in
> > > abnormal termination of the VM, which led to the ftrace buffer not
> > > being dumped into the console. Currently re-running the same test with
> > > the addition of "ftrace_dump_on_oops panic_on_warn=1" and hoping for
> > > the best.
> >
> > Another approach would be rcupdate.rcu_cpu_stall_suppress=1.
> >
> > We probably need to disable RCU CPU stall warnings automatically while
> > dumping ftrace buffers, but the asynchronous nature of printk() makes
> > it difficult to work out when to automatically re-enable them...
>
> And in the meantime, for whatever it is worth...
>
> The pattern of failures motivated me to add to rcutorture a real-time
> task that randomly preempts a randomly chosen online CPU. Here are
> the (new and not-to-be-trusted) commits on -rcu's "dev" branch:
>
> d1b99fa42af7 ("torture: Add dowarn argument to torture_sched_setaffinity()")
> aed555adc22a ("rcutorture: Add random real-time preemption")
> b09bcf8e1406 ("rcutorture: Make the TREE03 scenario do preemption")
>
> Given these, the following sort of command, when run on dual-socket
> systems, reproduces a silent failure within a few minutes:
>
> tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 4h --configs "4*TREE03" --kconfig "CONFIG_NR_CPUS=4" --trust-make
>
> But on my laptop, a 30-minute run resulted in zero failures. I am now
> retrying with a four-hour laptop run.
And this silent failure was me hurting myself with a change to scripting
to better handle test hosts disappearing (it does sometimes happen).
With the scripting fixed, I am getting simple too-short grace periods,
though only a few per 8-hour 400*TREE03 4-CPU guest-OS run.
> I am also adjusting the preemption duration and frequency to see if a
> more edifying failure mode might make itself apparent. :-/
But no big wins thus far, so this will be a slow process. My current test
disables CPU hotplug. I will be disabling other things in the hope of
better identifying the code paths that should be placed under suspicion.
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-14 18:55 ` Paul E. McKenney
@ 2024-10-21 19:25 ` Paul E. McKenney
2024-11-14 18:16 ` Paul E. McKenney
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-10-21 19:25 UTC (permalink / raw)
To: Tomas Glozar
Cc: Valentin Schneider, Chen Yu, Peter Zijlstra, linux-kernel, sfr,
linux-next, kernel-team
On Mon, Oct 14, 2024 at 11:55:05AM -0700, Paul E. McKenney wrote:
> On Thu, Oct 10, 2024 at 04:28:38PM -0700, Paul E. McKenney wrote:
> > On Thu, Oct 10, 2024 at 08:01:35AM -0700, Paul E. McKenney wrote:
> > > On Thu, Oct 10, 2024 at 01:24:11PM +0200, Tomas Glozar wrote:
> > > > st 2. 10. 2024 v 11:01 odesílatel Tomas Glozar <tglozar@redhat.com> napsal:
> > > > >
> > > > > FYI I have managed to reproduce the bug on our infrastructure after 21
> > > > > hours of 7*TREE03 and I will continue with trying to reproduce it with
> > > > > the tracers we want.
> > > > >
> > > > > Tomas
> > > >
> > > > I successfully reproduced the bug also with the tracers active after a
> > > > few 8-hour test runs on our infrastructure:
> > > >
> > > > [ 0.000000] Linux version 6.11.0-g2004cef11ea0-dirty (...) #1 SMP
> > > > PREEMPT_DYNAMIC Wed Oct 9 12:13:40 EDT 2024
> > > > [ 0.000000] Command line: debug_boot_weak_hash panic=-1 selinux=0
> > > > initcall_debug debug console=ttyS0 rcutorture.n_barrier_cbs=4
> > > > rcutorture.stat_interval=15 rcutorture.shutdown_secs=25200
> > > > rcutorture.test_no_idle_hz=1 rcutorture.verbose=1
> > > > rcutorture.onoff_interval=200 rcutorture.onoff_holdoff=30
> > > > rcutree.gp_preinit_delay=12 rcutree.gp_init_delay=3
> > > > rcutree.gp_cleanup_delay=3 rcutree.kthread_prio=2 threadirqs
> > > > rcutree.use_softirq=0
> > > > trace_event=sched:sched_switch,sched:sched_wakeup
> > > > ftrace_filter=dl_server_start,dl_server_stop trace_buf_size=2k
> > > > ftrace=function torture.ftrace_dump_at_shutdown=1
> > > > ...
> > > > [13550.127541] WARNING: CPU: 1 PID: 155 at
> > > > kernel/sched/deadline.c:1971 enqueue_dl_entity+0x554/0x5d0
> > > > [13550.128982] Modules linked in:
> > > > [13550.129528] CPU: 1 UID: 0 PID: 155 Comm: rcu_torture_rea Tainted: G
> > > > W 6.11.0-g2004cef11ea0-dirty #1
> > > > [13550.131419] Tainted: [W]=WARN
> > > > [13550.131979] Hardware name: Red Hat KVM/RHEL, BIOS 1.16.3-2.el9 04/01/2014
> > > > [13550.133230] RIP: 0010:enqueue_dl_entity+0x554/0x5d0
> > > > ...
> > > > [13550.151286] Call Trace:
> > > > [13550.151749] <TASK>
> > > > [13550.152141] ? __warn+0x88/0x130
> > > > [13550.152717] ? enqueue_dl_entity+0x554/0x5d0
> > > > [13550.153485] ? report_bug+0x18e/0x1a0
> > > > [13550.154149] ? handle_bug+0x54/0x90
> > > > [13550.154792] ? exc_invalid_op+0x18/0x70
> > > > [13550.155484] ? asm_exc_invalid_op+0x1a/0x20
> > > > [13550.156249] ? enqueue_dl_entity+0x554/0x5d0
> > > > [13550.157055] dl_server_start+0x36/0xf0
> > > > [13550.157709] enqueue_task_fair+0x220/0x6b0
> > > > [13550.158447] activate_task+0x26/0x60
> > > > [13550.159131] attach_task+0x35/0x50
> > > > [13550.159756] sched_balance_rq+0x663/0xe00
> > > > [13550.160511] sched_balance_newidle.constprop.0+0x1a5/0x360
> > > > [13550.161520] pick_next_task_fair+0x2f/0x340
> > > > [13550.162290] __schedule+0x203/0x900
> > > > [13550.162958] ? enqueue_hrtimer+0x35/0x90
> > > > [13550.163703] schedule+0x27/0xd0
> > > > [13550.164299] schedule_hrtimeout_range_clock+0x99/0x120
> > > > [13550.165239] ? __pfx_hrtimer_wakeup+0x10/0x10
> > > > [13550.165954] torture_hrtimeout_us+0x7b/0xe0
> > > > [13550.166624] rcu_torture_reader+0x139/0x200
> > > > [13550.167284] ? __pfx_rcu_torture_timer+0x10/0x10
> > > > [13550.168019] ? __pfx_rcu_torture_reader+0x10/0x10
> > > > [13550.168764] kthread+0xd6/0x100
> > > > [13550.169262] ? __pfx_kthread+0x10/0x10
> > > > [13550.169860] ret_from_fork+0x34/0x50
> > > > [13550.170424] ? __pfx_kthread+0x10/0x10
> > > > [13550.171020] ret_from_fork_asm+0x1a/0x30
> > > > [13550.171657] </TASK>
> > > >
> > > > Unfortunately, the following rcu stalls appear to have resulted in
> > > > abnormal termination of the VM, which led to the ftrace buffer not
> > > > being dumped into the console. Currently re-running the same test with
> > > > the addition of "ftrace_dump_on_oops panic_on_warn=1" and hoping for
> > > > the best.
> > >
> > > Another approach would be rcupdate.rcu_cpu_stall_suppress=1.
> > >
> > > We probably need to disable RCU CPU stall warnings automatically while
> > > dumping ftrace buffers, but the asynchronous nature of printk() makes
> > > it difficult to work out when to automatically re-enable them...
> >
> > And in the meantime, for whatever it is worth...
> >
> > The pattern of failures motivated me to add to rcutorture a real-time
> > task that randomly preempts a randomly chosen online CPU. Here are
> > the (new and not-to-be-trusted) commits on -rcu's "dev" branch:
> >
> > d1b99fa42af7 ("torture: Add dowarn argument to torture_sched_setaffinity()")
> > aed555adc22a ("rcutorture: Add random real-time preemption")
> > b09bcf8e1406 ("rcutorture: Make the TREE03 scenario do preemption")
> >
> > Given these, the following sort of command, when run on dual-socket
> > systems, reproduces a silent failure within a few minutes:
> >
> > tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 4h --configs "4*TREE03" --kconfig "CONFIG_NR_CPUS=4" --trust-make
> >
> > But on my laptop, a 30-minute run resulted in zero failures. I am now
> > retrying with a four-hour laptop run.
>
> And this silent failure was me hurting myself with a change to scripting
> to better handle test hosts disappearing (it does sometimes happen).
> With the scripting fixed, I am getting simple too-short grace periods,
> though only a few per 8-hour 400*TREE03 4-CPU guest-OS run.
>
> > I am also adjusting the preemption duration and frequency to see if a
> > more edifying failure mode might make itself apparent. :-/
>
> But no big wins thus far, so this will be a slow process. My current test
> disables CPU hotplug. I will be disabling other things in the hope of
> better identifying the code paths that should be placed under suspicion.
Disabling CPU hotplug seems to make the problem go away (though
all I can really say is that I am 99% confident that it reduces the
problem's incidence by at least a factor of two). This problem affects
non-preemptible kernels and non-preemptible RCU, though it is possible
that use of the latter reduces the failure rate (which is of course *not*
what you want for testing).
A number of experiments failed to significantly/usefully increase the
failure rate.
The problem does not seem to happen on straight normal call_rcu()
grace periods (rcutorture.gp_normal=1), synchronize_rcu() grace periods
(rcutorture.gp_sync=1), or synchronize_rcu_expedited() grace periods.
Of course, my reproduction rate is still low enough that I might be
fooled here.
However, the problem does occur reasonably often on polled grace periods
(rcutorture.gp_poll=1 rcutorture.gp_poll_exp=1 rcutorture.gp_poll_full=1
rcutorture.gp_poll_exp_full=1). This might be a bug in the polling
code itself, or it might be because polling grace periods do not incur
callback and/or wakeup delays (as in the bug might still be in the
underlying grace-period computation and polling makes it more apparent).
This also appears to have made the problem happen more frequently,
but I do not yet have enough data to be sure.
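For anyone not familiar with the polled interfaces being exercised here, the
cookie-based pattern can be sketched roughly as follows. This is a toy
userspace model, not the in-kernel implementation (which tracks grace periods
with rcu_seq sequence numbers); a "too-short grace period" corresponds to
poll_state() returning true before a full grace period has really elapsed
since the matching get_state().

```python
# Minimal model of RCU's polled grace-period API:
# get_state_synchronize_rcu() hands back a cookie, and
# poll_state_synchronize_rcu() reports whether a full grace
# period has elapsed since the cookie was obtained.

class PolledGP:
    def __init__(self):
        self.completed = 0  # grace periods fully completed so far

    def get_state(self):
        # Cookie satisfied once one more full grace period
        # completes after this call.
        return self.completed + 1

    def run_grace_period(self):
        self.completed += 1

    def poll_state(self, cookie):
        # True once the awaited grace period has ended.  Returning
        # True any earlier would be a too-short grace period.
        return self.completed >= cookie

gp = PolledGP()
cookie = gp.get_state()
assert not gp.poll_state(cookie)  # no grace period has elapsed yet
gp.run_grace_period()
assert gp.poll_state(cookie)      # now the cookie is satisfied
```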
Currently, rcutorture does millisecond-scale waits of duration randomly
chosen between zero and 15 milliseconds. I have started a run with
microsecond-scale waits of duration randomly chosen between zero and
128 microseconds. Here is hoping that this will cause the problem to
reproduce more quickly, and I will know more this evening, Pacific Time.
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-10 11:24 ` Tomas Glozar
2024-10-10 15:01 ` Paul E. McKenney
@ 2024-10-22 6:33 ` Tomas Glozar
1 sibling, 0 replies; 67+ messages in thread
From: Tomas Glozar @ 2024-10-22 6:33 UTC (permalink / raw)
To: paulmck
Cc: Valentin Schneider, Chen Yu, Peter Zijlstra, linux-kernel, sfr,
linux-next, kernel-team
čt 10. 10. 2024 v 13:24 odesílatel Tomas Glozar <tglozar@redhat.com> napsal:
>
> Unfortunately, the following rcu stalls appear to have resulted in
> abnormal termination of the VM, which led to the ftrace buffer not
> being dumped into the console. Currently re-running the same test with
> the addition of "ftrace_dump_on_oops panic_on_warn=1" and hoping for
> the best.
>
Adding ftrace_dump_on_oops and panic_on_warn fixed the tracing issue.
The last 20 lines on CPU 1, where the WARN was triggered, are:
[21031.703970] rcu_tort-6861 1dN.4. 21027715638us : sched_wakeup:
comm=ksoftirqd/1 pid=26 prio=97 target_cpu=001
[21031.705308] rcu_tort-6861 1d..2. 21027715639us : sched_switch:
prev_comm=rcu_torture_boo prev_pid=6861 prev_prio=98 prev_state=R+ ==>
next_comm=ksoftirqd/1 next_pid=26 next_prio=97
[21031.707388] ksoftirq-26 1d.s3. 21027715645us : sched_wakeup:
comm=rcu_preempt pid=16 prio=97 target_cpu=000
[21031.710718] ksoftirq-26 1d..2. 21027715649us : sched_switch:
prev_comm=ksoftirqd/1 prev_pid=26 prev_prio=97 prev_state=S ==>
next_comm=rcu_torture_boo next_pid=6861 next_prio=98
[21031.719409] rcu_tort-6861 1d..2. 21027719648us : sched_switch:
prev_comm=rcu_torture_boo prev_pid=6861 prev_prio=98 prev_state=R+ ==>
next_comm=rcu_preempt next_pid=16 next_prio=97
[21031.723490] rcu_pree-16 1d..2. 21027719657us : sched_switch:
prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==>
next_comm=rcu_torture_boo next_pid=6861 next_prio=98
[21031.725527] rcu_tort-6861 1dN.3. 21027724637us : sched_wakeup:
comm=ksoftirqd/1 pid=26 prio=97 target_cpu=001
[21031.726817] rcu_tort-6861 1d..2. 21027724638us : sched_switch:
prev_comm=rcu_torture_boo prev_pid=6861 prev_prio=98 prev_state=R+ ==>
next_comm=ksoftirqd/1 next_pid=26 next_prio=97
[21031.728867] ksoftirq-26 1d.s3. 21027724644us : sched_wakeup:
comm=rcu_preempt pid=16 prio=97 target_cpu=008
[21031.732215] ksoftirq-26 1d..2. 21027724648us : sched_switch:
prev_comm=ksoftirqd/1 prev_pid=26 prev_prio=97 prev_state=S ==>
next_comm=rcu_torture_boo next_pid=6861 next_prio=98
[21031.751734] rcu_tort-6861 1d..2. 21027729646us : sched_switch:
prev_comm=rcu_torture_boo prev_pid=6861 prev_prio=98 prev_state=R+ ==>
next_comm=rcu_preempt next_pid=16 next_prio=97
[21031.755815] rcu_pree-16 1d..2. 21027729656us : sched_switch:
prev_comm=rcu_preempt prev_pid=16 prev_prio=97 prev_state=I ==>
next_comm=rcu_torture_boo next_pid=6861 next_prio=98
[21031.757850] rcu_tort-6861 1dN.4. 21027734637us : sched_wakeup:
comm=ksoftirqd/1 pid=26 prio=97 target_cpu=001
[21031.759147] rcu_tort-6861 1d..2. 21027734638us : sched_switch:
prev_comm=rcu_torture_boo prev_pid=6861 prev_prio=98 prev_state=R+ ==>
next_comm=ksoftirqd/1 next_pid=26 next_prio=97
[21031.761193] ksoftirq-26 1d.s3. 21027734643us : sched_wakeup:
comm=rcu_preempt pid=16 prio=97 target_cpu=009
[21031.764531] ksoftirq-26 1d..2. 21027734648us : sched_switch:
prev_comm=ksoftirqd/1 prev_pid=26 prev_prio=97 prev_state=S ==>
next_comm=rcu_torture_boo next_pid=6861 next_prio=98
[21031.871062] rcu_tort-6861 1d..2. 21027768837us : sched_switch:
prev_comm=rcu_torture_boo prev_pid=6861 prev_prio=98 prev_state=I ==>
next_comm=kworker/1:1 next_pid=5774 next_prio=120
[21031.882721] kworker/-5774 1d..2. 21027768847us : sched_switch:
prev_comm=kworker/1:1 prev_pid=5774 prev_prio=120 prev_state=I ==>
next_comm=kworker/1:2 next_pid=6636 next_prio=120
[21031.889173] kworker/-6636 1dN.3. 21027768851us :
dl_server_stop <-dequeue_entities
[21031.902411] kworker/-6636 1dN.3. 21027768863us :
dl_server_start <-enqueue_task_fair
and then the trace goes silent for that CPU. That corresponds to the
warning after which the panic is triggered:
[21027.819253] WARNING: CPU: 1 PID: 6636 at
kernel/sched/deadline.c:1995 enqueue_dl_entity+0x516/0x5d0
[21027.820450] Modules linked in:
[21027.820856] CPU: 1 UID: 0 PID: 6636 Comm: kworker/1:2 Not tainted
6.11.0-g2004cef11ea0-dirty #1
[21027.821993] Hardware name: Red Hat KVM/RHEL, BIOS 1.16.3-2.el9 04/01/2014
[21027.822883] Workqueue: 0x0 (rcu_gp)
[21027.823363] RIP: 0010:enqueue_dl_entity+0x516/0x5d0
[21027.824002] Code: ff 48 89 ef e8 8b cf ff ff 0f b6 4d 54 e9 0e fc
ff ff 85 db 0f 84 d0 fe ff ff 5b 44 89 e6 48 89 ef 5d 41 5c e9 db d6
ff ff 90 <0f> 0b 90 e9 fa fa ff ff 48 8b bb f8 09 00 00 48 39 fe 0f 89
de fb
[21027.827134] RSP: 0000:ffff8cbd85713bf0 EFLAGS: 00010086
[21027.827817] RAX: 0000000000000001 RBX: ffff8919dc86cc28 RCX: 0000000000000002
[21027.828749] RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffff8919dc86cc28
[21027.829678] RBP: ffff8919dc86cc28 R08: 0000000000000001 R09: 00000000000001f4
[21027.830608] R10: 0000000000000000 R11: ffff8919c1efe610 R12: 0000000000000001
[21027.831537] R13: 0000000000225510 R14: ffff8919dc86c380 R15: ffff8919dc86c3c0
[21027.832468] FS: 0000000000000000(0000) GS:ffff8919dc840000(0000)
knlGS:0000000000000000
[21027.833521] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[21027.834277] CR2: 0000000000000000 CR3: 000000001e62e000 CR4: 00000000000006f0
[21027.835204] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[21027.836127] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[21027.837049] Call Trace:
[21027.837388] <TASK>
[21027.837671] ? __warn+0x88/0x130
[21027.838107] ? enqueue_dl_entity+0x516/0x5d0
[21027.838665] ? report_bug+0x18e/0x1a0
[21027.839181] ? handle_bug+0x54/0x90
[21027.839640] ? exc_invalid_op+0x18/0x70
[21027.840145] ? asm_exc_invalid_op+0x1a/0x20
[21027.840696] ? enqueue_dl_entity+0x516/0x5d0
[21027.841259] dl_server_start+0x36/0xf0
[21027.841751] enqueue_task_fair+0x220/0x6b0
[21027.842291] activate_task+0x26/0x60
[21027.842761] attach_task+0x35/0x50
[21027.843213] sched_balance_rq+0x663/0xe00
[21027.843739] sched_balance_newidle.constprop.0+0x1a5/0x360
[21027.844455] pick_next_task_fair+0x2f/0x340
[21027.845000] __schedule+0x203/0x900
[21027.845470] ? sync_rcu_do_polled_gp+0xba/0x110
[21027.846068] schedule+0x27/0xd0
[21027.846483] worker_thread+0x1a7/0x3b0
[21027.846974] ? __pfx_worker_thread+0x10/0x10
[21027.847539] kthread+0xd6/0x100
[21027.847955] ? __pfx_kthread+0x10/0x10
[21027.848455] ret_from_fork+0x34/0x50
[21027.848923] ? __pfx_kthread+0x10/0x10
[21027.849420] ret_from_fork_asm+0x1a/0x30
[21027.849934] </TASK>
Apparently, dl_server_start is not called twice, so it has to be
something else messing with the dl_se->rb_node.
Tomas
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-10-21 19:25 ` Paul E. McKenney
@ 2024-11-14 18:16 ` Paul E. McKenney
2024-12-15 18:31 ` Paul E. McKenney
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-11-14 18:16 UTC (permalink / raw)
To: Tomas Glozar
Cc: Valentin Schneider, Chen Yu, Peter Zijlstra, linux-kernel, sfr,
linux-next, kernel-team
On Mon, Oct 21, 2024 at 12:25:41PM -0700, Paul E. McKenney wrote:
> On Mon, Oct 14, 2024 at 11:55:05AM -0700, Paul E. McKenney wrote:
[ . . . ]
> > But no big wins thus far, so this will be a slow process. My current test
> > disables CPU hotplug. I will be disabling other things in the hope of
> > better identifying the code paths that should be placed under suspicion.
The "this will be a slow process" was no joke...
> Disabling CPU hotplug seems to make the problem go away (though
> all I can really say is that I am 99% confident that it reduces the
> problem's incidence by at least a factor of two). This problem affects
> non-preemptible kernels and non-preemptible RCU, though it is possible
> that use of the latter reduces the failure rate (which is of course *not*
> what you want for testing).
>
> A number of experiments failed to significantly/usefully increase the
> failure rate.
>
> The problem does not seem to happen on straight normal call_rcu()
> grace periods (rcutorture.gp_normal=1), synchronize_rcu() grace periods
> (rcutorture.gp_sync=1), or synchronize_rcu_expedited() grace periods.
> Of course, my reproduction rate is still low enough that I might be
> fooled here.
>
> However, the problem does occur reasonably often on polled grace periods
> (rcutorture.gp_poll=1 rcutorture.gp_poll_exp=1 rcutorture.gp_poll_full=1
> rcutorture.gp_poll_exp_full=1). This might be a bug in the polling
> code itself, or it might be because polling grace periods do not incur
> callback and/or wakeup delays (as in the bug might still be in the
> underlying grace-period computation and polling makes it more apparent).
> This also appears to have made the problem happen more frequently,
> but not enough data to be sure yet.
>
> Currently, rcutorture does millisecond-scale waits of duration randomly
> chosen between zero and 15 milliseconds. I have started a run with
> microsecond-scale waits of duration randomly chosen between zero and
> 128 microseconds. Here is hoping that this will cause the problem to
> reproduce more quickly, and I will know more this evening, Pacific Time.
Well, that evening was a long time ago, but here finally is an update.
After some time, varying the wait duration between zero and 16
microseconds with nanosecond granularity pushed the rate up to between
10 and 20 per hour. This allowed me to find one entertaining bug,
whose fix may be found here in my -rcu tree:
9dfca26bcbc8 ("rcu: Make expedited grace periods wait for initialization")
The fix ensures that an expedited grace period is fully initialized before
honoring any quiescent-state reports, thus avoiding a failure scenario
in which one of a pair of quiescent-state reports could "leak" into
the next expedited grace period. But only if a concurrent CPU-hotplug
operation shows up at just the wrong time.
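The failure scenario can be sketched with a toy model. This is hypothetical
illustration only, not kernel code: the real fix orders expedited-grace-period
initialization against quiescent-state-report processing, whereas the model
below abstracts that into tagging each report with the grace period it was
generated for and discarding stale ones.

```python
# Toy model of a quiescent-state report "leaking" from one expedited
# grace period into the next.  A stale report left over from GP N must
# not be allowed to satisfy GP N+1.

class ExpGP:
    def __init__(self):
        self.seq = 0          # current expedited grace-period number
        self.waiting = set()  # CPUs this GP still waits on

    def start(self, cpus):
        self.seq += 1
        self.waiting = set(cpus)

    def report_qs(self, cpu, report_seq, check_seq=True):
        # check_seq=False models the buggy behavior: honoring a report
        # without confirming it belongs to the current grace period.
        if check_seq and report_seq != self.seq:
            return  # stale report from an already-ended GP: discard
        self.waiting.discard(cpu)

    def done(self):
        return not self.waiting

# Buggy: a stale report generated during GP 1 satisfies GP 2 for CPU 0.
gp = ExpGP()
gp.start({0, 1})
stale = gp.seq            # report generated against GP 1
gp.start({0, 1})          # GP 2 begins before the report is processed
gp.report_qs(0, stale, check_seq=False)
gp.report_qs(1, gp.seq)
assert gp.done()          # GP 2 ends, but CPU 0 never really reported

# Fixed: the stale report is discarded, and GP 2 still waits on CPU 0.
gp = ExpGP()
gp.start({0, 1})
stale = gp.seq
gp.start({0, 1})
gp.report_qs(0, stale)
gp.report_qs(1, gp.seq)
assert not gp.done()
```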
There are also a couple of other minor fixes of things like needless
lockless accesses:
6142841a2389 ("rcu: Make rcu_report_exp_cpu_mult() caller acquire lock")
dd8104928722 ("rcu: Move rcu_report_exp_rdp() setting of ->cpu_no_qs.b.exp under lock")
Plus quite a few additional debug checks.
So problem solved, right?
Wrong!!!
Those changes at best reduced the bug rate by about 10%. So I am still
beating on this thing. But you will be happy (or more likely not)
to learn that the enqueue_dl_entity() splats that I was chasing when
starting on this bug still occasionally make their presence known. ;-)
Added diagnostics pushed the bug rate down to about four per hour,
which isn't quite as nice as between 10 and 20 per hour, but is still
something I can work with.
Back to beating on it. More info than anyone needs is available here:
https://docs.google.com/document/d/1-JQ4QYF1qid0TWSLa76O1kusdhER2wgm0dYdwFRUzU8/edit?usp=sharing
Thanx, Paul
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-11-14 18:16 ` Paul E. McKenney
@ 2024-12-15 18:31 ` Paul E. McKenney
2024-12-16 14:38 ` Tomas Glozar
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-12-15 18:31 UTC (permalink / raw)
To: Tomas Glozar
Cc: Valentin Schneider, Chen Yu, Peter Zijlstra, linux-kernel, sfr,
linux-next, kernel-team
On Thu, Nov 14, 2024 at 10:16:12AM -0800, Paul E. McKenney wrote:
> On Mon, Oct 21, 2024 at 12:25:41PM -0700, Paul E. McKenney wrote:
> > On Mon, Oct 14, 2024 at 11:55:05AM -0700, Paul E. McKenney wrote:
>
> [ . . . ]
>
> > > But no big wins thus far, so this will be a slow process. My current test
> > > disables CPU hotplug. I will be disabling other things in the hope of
> > > better identifying the code paths that should be placed under suspicion.
>
> The "this will be a slow process" was no joke...
[ . . . ]
> Back to beating on it. More info than anyone needs is available here:
>
> https://docs.google.com/document/d/1-JQ4QYF1qid0TWSLa76O1kusdhER2wgm0dYdwFRUzU8/edit?usp=sharing
And the fix for the TREE03 too-short grace periods is finally in, at
least in prototype form:
https://lore.kernel.org/all/da5065c4-79ba-431f-9d7e-1ca314394443@paulmck-laptop/
Or this commit on -rcu:
22bee20913a1 ("rcu: Fix get_state_synchronize_rcu_full() GP-start detection")
This passes more than 30 hours of 400 concurrent instances of rcutorture's
TREE03 scenario, with modifications that brought the bug reproduction
rate up to 50 per hour. I therefore have strong reason to believe that
this fix is a real fix.
With this fix in place, a 20-hour run of 400 concurrent instances
of rcutorture's TREE03 scenario resulted in 50 instances of the
enqueue_dl_entity() splat pair. One (untrimmed) instance of this pair
of splats is shown below.
You guys did reproduce this some time back, so unless you tell me
otherwise, I will assume that you have this in hand. I would of course
be quite happy to help, especially with adding carefully chosen debug
(heisenbug and all that) or testing of alleged fixes.
Just let me know!
Thanx, Paul
------------------------------------------------------------------------
------------[ cut here ]------------
smpboot: Booting Node 0 Processor 3 APIC 0x3
WARNING: CPU: 1 PID: 29304 at kernel/sched/deadline.c:1995 enqueue_dl_entity+0x511/0x5d0
Modules linked in:
CPU: 1 UID: 0 PID: 29304 Comm: kworker/1:2 Not tainted 6.13.0-rc1-00063-ga51301a307ab-dirty #2355
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
Workqueue: rcu_gp sync_rcu_do_polled_gp
RIP: 0010:enqueue_dl_entity+0x511/0x5d0
Code: ff 48 89 ef e8 10 cf ff ff 0f b6 4d 54 e9 0e fc ff ff 85 db 0f 84 d0 fe ff ff 5b 44 89 e6 48 89 ef 5d 41 5c e9 30 d6 ff ff 90 <0f> 0b 90 e9 fa fa ff ff 48 8b bb f8 09 00 00 48 39 fe 0f 89 de fb
RSP: 0018:ffff9b88c13ebaf0 EFLAGS: 00010086
RAX: 0000000000000001 RBX: ffff893a5f4ac8e8 RCX: 0000000000000002
RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffff893a5f4ac8e8
RBP: ffff893a5f4ac8e8 R08: 0000000000000001 R09: 0000000000000197
R10: 0000000000000000 R11: ffff893a4181bb90 R12: 0000000000000001
R13: 000000000016e360 R14: ffff893a5f4ac040 R15: ffff893a5f4ac080
FS: 0000000000000000(0000) GS:ffff893a5f480000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 0000000001d06000 CR4: 00000000000006f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
? __warn+0x83/0x130
? enqueue_dl_entity+0x511/0x5d0
? report_bug+0x18e/0x1a0
? handle_bug+0x54/0x90
? exc_invalid_op+0x18/0x70
? asm_exc_invalid_op+0x1a/0x20
? enqueue_dl_entity+0x511/0x5d0
dl_server_start+0x31/0xe0
enqueue_task_fair+0x21b/0x6b0
enqueue_task+0x2c/0x70
activate_task+0x21/0x50
attach_task+0x30/0x50
sched_balance_rq+0x654/0xdf0
sched_balance_newidle.constprop.0+0x190/0x360
pick_next_task_fair+0x2a/0x340
__schedule+0x1f3/0x8f0
schedule+0x22/0xd0
synchronize_rcu_expedited+0x1bf/0x350
? __pfx_autoremove_wake_function+0x10/0x10
? __pfx_wait_rcu_exp_gp+0x10/0x10
sync_rcu_do_polled_gp+0x4f/0x110
process_one_work+0x163/0x390
worker_thread+0x293/0x3b0
? __pfx_worker_thread+0x10/0x10
kthread+0xd1/0x100
? __pfx_kthread+0x10/0x10
ret_from_fork+0x2f/0x50
? __pfx_kthread+0x10/0x10
ret_from_fork_asm+0x1a/0x30
</TASK>
---[ end trace 0000000000000000 ]---
------------[ cut here ]------------
WARNING: CPU: 1 PID: 29304 at kernel/sched/deadline.c:1971 enqueue_dl_entity+0x54f/0x5d0
Modules linked in:
CPU: 1 UID: 0 PID: 29304 Comm: kworker/1:2 Tainted: G W 6.13.0-rc1-00063-ga51301a307ab-dirty #2355
Tainted: [W]=WARN
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
Workqueue: rcu_gp sync_rcu_do_polled_gp
RIP: 0010:enqueue_dl_entity+0x54f/0x5d0
Code: de fb ff ff e9 66 ff ff ff 89 c1 45 84 d2 0f 84 ce fb ff ff a8 20 0f 84 c6 fb ff ff 84 c0 0f 89 20 fe ff ff e9 b9 fb ff ff 90 <0f> 0b 90 e9 f4 fb ff ff 84 d2 0f 85 e3 fa ff ff 48 89 ea 48 8d b5
RSP: 0018:ffff9b88c13ebaf0 EFLAGS: 00010086
RAX: 00000000ffffff00 RBX: ffff893a5f4ac040 RCX: 0000000000000000
RDX: 0000000000000001 RSI: 0000000b1a2986b8 RDI: 0000000b1a268bc8
RBP: ffff893a5f4ac8e8 R08: ffff893a5f4ac880 R09: 000000003b9aca00
R10: 0000000000000001 R11: 00000000000ee6b2 R12: 0000000000000001
R13: 000000000016e360 R14: ffff893a5f4ac040 R15: ffff893a5f4ac080
FS: 0000000000000000(0000) GS:ffff893a5f480000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 0000000001d06000 CR4: 00000000000006f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
<TASK>
? __warn+0x83/0x130
? enqueue_dl_entity+0x54f/0x5d0
? report_bug+0x18e/0x1a0
? handle_bug+0x54/0x90
? exc_invalid_op+0x18/0x70
? asm_exc_invalid_op+0x1a/0x20
? enqueue_dl_entity+0x54f/0x5d0
dl_server_start+0x31/0xe0
enqueue_task_fair+0x21b/0x6b0
enqueue_task+0x2c/0x70
activate_task+0x21/0x50
attach_task+0x30/0x50
sched_balance_rq+0x654/0xdf0
sched_balance_newidle.constprop.0+0x190/0x360
pick_next_task_fair+0x2a/0x340
__schedule+0x1f3/0x8f0
schedule+0x22/0xd0
synchronize_rcu_expedited+0x1bf/0x350
? __pfx_autoremove_wake_function+0x10/0x10
? __pfx_wait_rcu_exp_gp+0x10/0x10
sync_rcu_do_polled_gp+0x4f/0x110
process_one_work+0x163/0x390
worker_thread+0x293/0x3b0
? __pfx_worker_thread+0x10/0x10
kthread+0xd1/0x100
? __pfx_kthread+0x10/0x10
ret_from_fork+0x2f/0x50
? __pfx_kthread+0x10/0x10
ret_from_fork_asm+0x1a/0x30
</TASK>
---[ end trace 0000000000000000 ]---
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-12-15 18:31 ` Paul E. McKenney
@ 2024-12-16 14:38 ` Tomas Glozar
2024-12-16 19:36 ` Paul E. McKenney
0 siblings, 1 reply; 67+ messages in thread
From: Tomas Glozar @ 2024-12-16 14:38 UTC (permalink / raw)
To: paulmck
Cc: Valentin Schneider, Chen Yu, Peter Zijlstra, linux-kernel, sfr,
linux-next, kernel-team
On Sun, Dec 15, 2024 at 7:41 PM, Paul E. McKenney <paulmck@kernel.org> wrote:
>
> And the fix for the TREE03 too-short grace periods is finally in, at
> least in prototype form:
>
> https://lore.kernel.org/all/da5065c4-79ba-431f-9d7e-1ca314394443@paulmck-laptop/
>
> Or this commit on -rcu:
>
> 22bee20913a1 ("rcu: Fix get_state_synchronize_rcu_full() GP-start detection")
>
> This passes more than 30 hours of 400 concurrent instances of rcutorture's
> TREE03 scenario, with modifications that brought the bug reproduction
> rate up to 50 per hour. I therefore have strong reason to believe that
> this fix is a real fix.
>
> With this fix in place, a 20-hour run of 400 concurrent instances
> of rcutorture's TREE03 scenario resulted in 50 instances of the
> enqueue_dl_entity() splat pair. One (untrimmed) instance of this pair
> of splats is shown below.
>
> You guys did reproduce this some time back, so unless you tell me
> otherwise, I will assume that you have this in hand. I would of course
> be quite happy to help, especially with adding carefully chosen debug
> (heisenbug and all that) or testing of alleged fixes.
>
The same splat was recently reported to LKML [1] and a patchset was
sent and merged into tip/sched/urgent that fixes a few bugs around
double-enqueue of the deadline server [2]. I'm currently re-running
TREE03 with those patches, hoping they will also fix this issue.
Also, last week I came up with some more extensive tracing, which
showed dl_server_update and dl_server_start happening right after each
other, apparently during the same run of enqueue_task_fair (see
below). I'm currently looking into that to figure out whether the
mechanism shown by the trace is fixed by the patchset.
--------------------------
rcu_tort-148 1dN.3. 20531758076us : dl_server_stop <-dequeue_entities
rcu_tort-148 1dN.2. 20531758076us : dl_server_queue: cpu=1
level=2 enqueue=0
rcu_tort-148 1dN.3. 20531758078us : <stack trace>
=> trace_event_raw_event_dl_server_queue
=> dl_server_stop
=> dequeue_entities
=> dequeue_task_fair
=> __schedule
=> schedule
=> schedule_hrtimeout_range_clock
=> torture_hrtimeout_us
=> rcu_torture_writer
=> kthread
=> ret_from_fork
=> ret_from_fork_asm
rcu_tort-148 1dN.3. 20531758095us : dl_server_update <-update_curr
rcu_tort-148 1dN.3. 20531758097us : dl_server_update <-update_curr
rcu_tort-148 1dN.2. 20531758101us : dl_server_queue: cpu=1
level=2 enqueue=1
rcu_tort-148 1dN.3. 20531758103us : <stack trace>
rcu_tort-148 1dN.2. 20531758104us : dl_server_queue: cpu=1
level=1 enqueue=1
rcu_tort-148 1dN.3. 20531758106us : <stack trace>
rcu_tort-148 1dN.2. 20531758106us : dl_server_queue: cpu=1
level=0 enqueue=1
rcu_tort-148 1dN.3. 20531758108us : <stack trace>
=> trace_event_raw_event_dl_server_queue
=> rb_insert_color
=> enqueue_dl_entity
=> update_curr_dl_se
=> update_curr
=> enqueue_task_fair
=> enqueue_task
=> activate_task
=> attach_task
=> sched_balance_rq
=> sched_balance_newidle.constprop.0
=> pick_next_task_fair
=> __schedule
=> schedule
=> schedule_hrtimeout_range_clock
=> torture_hrtimeout_us
=> rcu_torture_writer
=> kthread
=> ret_from_fork
=> ret_from_fork_asm
rcu_tort-148 1dN.3. 20531758110us : dl_server_start <-enqueue_task_fair
rcu_tort-148 1dN.2. 20531758110us : dl_server_queue: cpu=1
level=2 enqueue=1
rcu_tort-148 1dN.3. 20531760934us : <stack trace>
=> trace_event_raw_event_dl_server_queue
=> enqueue_dl_entity
=> dl_server_start
=> enqueue_task_fair
=> enqueue_task
=> activate_task
=> attach_task
=> sched_balance_rq
=> sched_balance_newidle.constprop.0
=> pick_next_task_fair
=> __schedule
=> schedule
=> schedule_hrtimeout_range_clock
=> torture_hrtimeout_us
=> rcu_torture_writer
=> kthread
=> ret_from_fork
=> ret_from_fork_asm
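The mechanism suggested by the trace above can be modeled as a toy state machine. This is purely an illustration of the suspected race, not kernel code: `update_curr()` taken during `enqueue_task_fair()` may already enqueue the deadline-server entity via `dl_server_update`, after which `dl_server_start` enqueues it a second time. All names below mirror the kernel functions only loosely; the flag and warning mechanics are assumptions for the sake of the sketch.

```python
# Toy model (NOT kernel code) of the suspected double-enqueue:
# enqueue_task_fair() first calls update_curr(), whose dl_server_update
# path may already enqueue the deadline-server entity, and then calls
# dl_server_start(), which enqueues it again.

class ToyDLServer:
    def __init__(self):
        self.on_dl_rq = False   # entity already queued on the dl runqueue?
        self.warnings = []

    def enqueue_dl_entity(self, caller):
        if self.on_dl_rq:
            # stands in for the WARN_ON in kernel/sched/deadline.c
            self.warnings.append(f"double enqueue via {caller}")
            return
        self.on_dl_rq = True

    def dl_server_update(self):
        # models update_curr_dl_se() (re)enqueueing the server entity
        self.enqueue_dl_entity("dl_server_update")

    def dl_server_start(self):
        # models an unconditional enqueue that does not recheck on_dl_rq
        self.enqueue_dl_entity("dl_server_start")

# Replaying the ordering seen in the trace above:
server = ToyDLServer()
server.dl_server_update()   # first enqueue, via update_curr()
server.dl_server_start()    # second enqueue -> the splat
print(server.warnings)      # ['double enqueue via dl_server_start']
```

If this model matches reality, any fix that makes `dl_server_start` a no-op when the entity is already queued (or that prevents the first enqueue) would silence the warning pair.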
[1] - https://lore.kernel.org/lkml/571b2045-320d-4ac2-95db-1423d0277613@ovn.org/
[2] - https://lore.kernel.org/lkml/20241213032244.877029-1-vineeth@bitbyteword.org/
> Just let me know!
>
> Thanx, Paul
Tomas
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-12-16 14:38 ` Tomas Glozar
@ 2024-12-16 19:36 ` Paul E. McKenney
2024-12-17 16:42 ` Paul E. McKenney
0 siblings, 1 reply; 67+ messages in thread
From: Paul E. McKenney @ 2024-12-16 19:36 UTC (permalink / raw)
To: Tomas Glozar
Cc: Valentin Schneider, Chen Yu, Peter Zijlstra, linux-kernel, sfr,
linux-next, kernel-team
On Mon, Dec 16, 2024 at 03:38:20PM +0100, Tomas Glozar wrote:
> On Sun, Dec 15, 2024 at 7:41 PM, Paul E. McKenney <paulmck@kernel.org> wrote:
> >
> > And the fix for the TREE03 too-short grace periods is finally in, at
> > least in prototype form:
> >
> > https://lore.kernel.org/all/da5065c4-79ba-431f-9d7e-1ca314394443@paulmck-laptop/
> >
> > Or this commit on -rcu:
> >
> > 22bee20913a1 ("rcu: Fix get_state_synchronize_rcu_full() GP-start detection")
> >
> > This passes more than 30 hours of 400 concurrent instances of rcutorture's
> > TREE03 scenario, with modifications that brought the bug reproduction
> > rate up to 50 per hour. I therefore have strong reason to believe that
> > this fix is a real fix.
> >
> > With this fix in place, a 20-hour run of 400 concurrent instances
> > of rcutorture's TREE03 scenario resulted in 50 instances of the
> > enqueue_dl_entity() splat pair. One (untrimmed) instance of this pair
> > of splats is shown below.
> >
> > You guys did reproduce this some time back, so unless you tell me
> > otherwise, I will assume that you have this in hand. I would of course
> > be quite happy to help, especially with adding carefully chosen debug
> > (heisenbug and all that) or testing of alleged fixes.
> >
>
> The same splat was recently reported to LKML [1] and a patchset was
> sent and merged into tip/sched/urgent that fixes a few bugs around
> double-enqueue of the deadline server [2]. I'm currently re-running
> TREE03 with those patches, hoping they will also fix this issue.
Thank you very much!
An initial four-hour test of 400 instances of an enhanced TREE03 ran
error-free. I would have expected about 10 errors, so this gives me
99.9+% confidence that the patches improved things at least a little
bit and 99% confidence that these patches reduced the error rate by at
least a factor of two.
I am starting an overnight run. If that completes without error, this
will provide 99% confidence that these patches reduced the error rate
by at least an order of magnitude.
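The confidence figures above follow from treating failures as a Poisson process: an error-free run that would ordinarily see `mu` failures happens by luck alone with probability exp(-mu). The original mail does not show its calculation, so the sketch below is my own reconstruction under that assumption:

```python
import math

# Illustrative Poisson arithmetic (an assumed reconstruction; the mail
# does not show its method): if a run is long enough that `mu` failures
# are expected on average, the chance of observing zero failures by
# luck alone is exp(-mu), so confidence of improvement is 1 - exp(-mu).

def confidence_of_improvement(expected_failures):
    """Confidence that an error-free run is not mere luck."""
    return 1.0 - math.exp(-expected_failures)

# About 10 expected errors in the four-hour run at the historical rate:
print(f"{confidence_of_improvement(10):.5f}")   # ~0.99995 -> "99.9+%"

# If the fix merely halved the rate, we would still expect 5 failures;
# seeing none gives 99%+ confidence of at least a factor-of-two gain:
print(f"{confidence_of_improvement(5):.5f}")    # ~0.99326 -> "99%"
```

On the same reasoning, an overnight run long enough to expect roughly five failures even at one-tenth the historical rate supports the order-of-magnitude claim at 99% confidence.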
> Also, last week I came up with some more extensive tracing, which
> showed dl_server_update and dl_server_start happening right after each
> other, apparently during the same run of enqueue_task_fair (see
> below). I'm currently looking into that to figure out whether the
> mechanism shown by the trace is fixed by the patchset.
And for whatever it is worth, judging by my console output, I am seeing
something similar.
Thanx, Paul
> --------------------------
>
> rcu_tort-148 1dN.3. 20531758076us : dl_server_stop <-dequeue_entities
> rcu_tort-148 1dN.2. 20531758076us : dl_server_queue: cpu=1
> level=2 enqueue=0
> rcu_tort-148 1dN.3. 20531758078us : <stack trace>
> => trace_event_raw_event_dl_server_queue
> => dl_server_stop
> => dequeue_entities
> => dequeue_task_fair
> => __schedule
> => schedule
> => schedule_hrtimeout_range_clock
> => torture_hrtimeout_us
> => rcu_torture_writer
> => kthread
> => ret_from_fork
> => ret_from_fork_asm
> rcu_tort-148 1dN.3. 20531758095us : dl_server_update <-update_curr
> rcu_tort-148 1dN.3. 20531758097us : dl_server_update <-update_curr
> rcu_tort-148 1dN.2. 20531758101us : dl_server_queue: cpu=1
> level=2 enqueue=1
> rcu_tort-148 1dN.3. 20531758103us : <stack trace>
> rcu_tort-148 1dN.2. 20531758104us : dl_server_queue: cpu=1
> level=1 enqueue=1
> rcu_tort-148 1dN.3. 20531758106us : <stack trace>
> rcu_tort-148 1dN.2. 20531758106us : dl_server_queue: cpu=1
> level=0 enqueue=1
> rcu_tort-148 1dN.3. 20531758108us : <stack trace>
> => trace_event_raw_event_dl_server_queue
> => rb_insert_color
> => enqueue_dl_entity
> => update_curr_dl_se
> => update_curr
> => enqueue_task_fair
> => enqueue_task
> => activate_task
> => attach_task
> => sched_balance_rq
> => sched_balance_newidle.constprop.0
> => pick_next_task_fair
> => __schedule
> => schedule
> => schedule_hrtimeout_range_clock
> => torture_hrtimeout_us
> => rcu_torture_writer
> => kthread
> => ret_from_fork
> => ret_from_fork_asm
> rcu_tort-148 1dN.3. 20531758110us : dl_server_start <-enqueue_task_fair
> rcu_tort-148 1dN.2. 20531758110us : dl_server_queue: cpu=1
> level=2 enqueue=1
> rcu_tort-148 1dN.3. 20531760934us : <stack trace>
> => trace_event_raw_event_dl_server_queue
> => enqueue_dl_entity
> => dl_server_start
> => enqueue_task_fair
> => enqueue_task
> => activate_task
> => attach_task
> => sched_balance_rq
> => sched_balance_newidle.constprop.0
> => pick_next_task_fair
> => __schedule
> => schedule
> => schedule_hrtimeout_range_clock
> => torture_hrtimeout_us
> => rcu_torture_writer
> => kthread
> => ret_from_fork
> => ret_from_fork_asm
>
> [1] - https://lore.kernel.org/lkml/571b2045-320d-4ac2-95db-1423d0277613@ovn.org/
> [2] - https://lore.kernel.org/lkml/20241213032244.877029-1-vineeth@bitbyteword.org/
>
> > Just let me know!
> >
> > Thanx, Paul
>
> Tomas
>
* Re: [BUG almost bisected] Splat in dequeue_rt_stack() and build error
2024-12-16 19:36 ` Paul E. McKenney
@ 2024-12-17 16:42 ` Paul E. McKenney
0 siblings, 0 replies; 67+ messages in thread
From: Paul E. McKenney @ 2024-12-17 16:42 UTC (permalink / raw)
To: Tomas Glozar
Cc: Valentin Schneider, Chen Yu, Peter Zijlstra, linux-kernel, sfr,
linux-next, kernel-team
On Mon, Dec 16, 2024 at 11:36:25AM -0800, Paul E. McKenney wrote:
> On Mon, Dec 16, 2024 at 03:38:20PM +0100, Tomas Glozar wrote:
> > On Sun, Dec 15, 2024 at 7:41 PM, Paul E. McKenney <paulmck@kernel.org> wrote:
> > >
> > > And the fix for the TREE03 too-short grace periods is finally in, at
> > > least in prototype form:
> > >
> > > https://lore.kernel.org/all/da5065c4-79ba-431f-9d7e-1ca314394443@paulmck-laptop/
> > >
> > > Or this commit on -rcu:
> > >
> > > 22bee20913a1 ("rcu: Fix get_state_synchronize_rcu_full() GP-start detection")
> > >
> > > This passes more than 30 hours of 400 concurrent instances of rcutorture's
> > > TREE03 scenario, with modifications that brought the bug reproduction
> > > rate up to 50 per hour. I therefore have strong reason to believe that
> > > this fix is a real fix.
> > >
> > > With this fix in place, a 20-hour run of 400 concurrent instances
> > > of rcutorture's TREE03 scenario resulted in 50 instances of the
> > > enqueue_dl_entity() splat pair. One (untrimmed) instance of this pair
> > > of splats is shown below.
> > >
> > > You guys did reproduce this some time back, so unless you tell me
> > > otherwise, I will assume that you have this in hand. I would of course
> > > be quite happy to help, especially with adding carefully chosen debug
> > > (heisenbug and all that) or testing of alleged fixes.
> > >
> >
> > The same splat was recently reported to LKML [1] and a patchset was
> > sent and merged into tip/sched/urgent that fixes a few bugs around
> > double-enqueue of the deadline server [2]. I'm currently re-running
> > TREE03 with those patches, hoping they will also fix this issue.
>
> Thank you very much!
>
> An initial four-hour test of 400 instances of an enhanced TREE03 ran
> error-free. I would have expected about 10 errors, so this gives me
> 99.9+% confidence that the patches improved things at least a little
> bit and 99% confidence that these patches reduced the error rate by at
> least a factor of two.
>
> I am starting an overnight run. If that completes without error, this
> will provide 99% confidence that these patches reduced the error rate
> by at least an order of magnitude.
And we have that level of confidence!
Tested-by: Paul E. McKenney <paulmck@kernel.org>
Thread overview: 67+ messages
2024-08-21 21:57 [BUG almost bisected] Splat in dequeue_rt_stack() and build error Paul E. McKenney
2024-08-22 23:01 ` Paul E. McKenney
2024-08-23 7:47 ` Peter Zijlstra
2024-08-23 12:46 ` Paul E. McKenney
2024-08-23 21:51 ` Paul E. McKenney
2024-08-24 6:54 ` Peter Zijlstra
2024-08-24 15:26 ` Paul E. McKenney
2024-08-25 2:10 ` Paul E. McKenney
2024-08-25 19:36 ` Paul E. McKenney
2024-08-26 11:44 ` Valentin Schneider
2024-08-26 16:31 ` Paul E. McKenney
2024-08-27 10:03 ` Valentin Schneider
2024-08-27 15:41 ` Valentin Schneider
2024-08-27 17:33 ` Paul E. McKenney
2024-08-27 18:35 ` Paul E. McKenney
2024-08-27 20:30 ` Valentin Schneider
2024-08-27 20:36 ` Paul E. McKenney
2024-08-28 12:35 ` Valentin Schneider
2024-08-28 13:03 ` Paul E. McKenney
2024-08-28 13:40 ` Paul E. McKenney
2024-08-28 13:44 ` Chen Yu
2024-08-28 14:32 ` Valentin Schneider
2024-08-28 16:35 ` Paul E. McKenney
2024-08-28 18:17 ` Valentin Schneider
2024-08-28 18:39 ` Paul E. McKenney
2024-08-29 10:28 ` Paul E. McKenney
2024-08-29 13:50 ` Valentin Schneider
2024-08-29 14:13 ` Paul E. McKenney
2024-09-08 16:32 ` Paul E. McKenney
2024-09-13 14:08 ` Paul E. McKenney
2024-09-13 16:55 ` Valentin Schneider
2024-09-13 18:00 ` Paul E. McKenney
2024-09-30 19:09 ` Paul E. McKenney
2024-09-30 20:44 ` Valentin Schneider
2024-10-01 10:10 ` Paul E. McKenney
2024-10-01 12:52 ` Valentin Schneider
2024-10-01 16:47 ` Paul E. McKenney
2024-10-02 9:01 ` Tomas Glozar
2024-10-02 12:07 ` Paul E. McKenney
2024-10-10 11:24 ` Tomas Glozar
2024-10-10 15:01 ` Paul E. McKenney
2024-10-10 23:28 ` Paul E. McKenney
2024-10-14 18:55 ` Paul E. McKenney
2024-10-21 19:25 ` Paul E. McKenney
2024-11-14 18:16 ` Paul E. McKenney
2024-12-15 18:31 ` Paul E. McKenney
2024-12-16 14:38 ` Tomas Glozar
2024-12-16 19:36 ` Paul E. McKenney
2024-12-17 16:42 ` Paul E. McKenney
2024-10-22 6:33 ` Tomas Glozar
2024-10-03 8:40 ` Peter Zijlstra
2024-10-03 8:47 ` Peter Zijlstra
2024-10-03 9:27 ` Peter Zijlstra
2024-10-03 12:28 ` Peter Zijlstra
2024-10-03 12:45 ` Paul E. McKenney
2024-10-03 14:22 ` Peter Zijlstra
2024-10-03 16:04 ` Paul E. McKenney
2024-10-03 18:50 ` Peter Zijlstra
2024-10-03 19:12 ` Paul E. McKenney
2024-10-04 13:22 ` Paul E. McKenney
2024-10-04 13:35 ` Peter Zijlstra
2024-10-06 20:44 ` Paul E. McKenney
2024-10-07 9:34 ` Peter Zijlstra
2024-10-08 11:11 ` Peter Zijlstra
2024-10-08 16:24 ` Paul E. McKenney
2024-10-08 22:34 ` Paul E. McKenney
2024-10-03 12:44 ` Paul E. McKenney