* My Dell XPS-9320 (kernel 6.10.x, et al.) doesn't detect Thunderbolt additions when coming out of suspend or hibernate
From: Kenneth Crudup @ 2024-08-21 22:05 UTC
To: linux-pm@vger.kernel.org, linux-usb; +Cc: Me
Subject says it all, but to recap: my laptop doesn't detect Thunderbolt
topology changes when resuming from suspend or hibernate. The only
change it does pick up is a TB cable being disconnected while suspended
or hibernated; if a cable is connected, or a different TB setup
altogether is attached, while the system is asleep, it doesn't notice
the topology change on resume until I disconnect and reconnect.
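To make the symptom concrete: after one of these resumes the new
topology simply never appears. E.g. (assuming the standard Thunderbolt
sysfs layout; purely illustrative):
    # after resuming with a different dock attached, this still lists
    # the stale pre-suspend topology until I replug the cable:
    ls /sys/bus/thunderbolt/devices/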
I'm currently running 6.10.6, but this has been going on for a while.
----
[ 0.000000] DMI: Dell Inc. XPS 9320/0KNXGD, BIOS 2.12.0 04/11/2024
...
[ 0.136807] smpboot: CPU0: 12th Gen Intel(R) Core(TM) i7-1280P
(family: 0x6, model: 0x9a, stepping: 0x3)
----
Let me know if you need any further information.
-Kenny
--
Kenneth R. Crudup / Sr. SW Engineer, Scott County Consulting, Orange
County CA
* Re: My Dell XPS-9320 (kernel 6.10.x, et al.) doesn't detect Thunderbolt additions when coming out of suspend or hibernate
From: Lukas Wunner @ 2024-08-26 3:06 UTC
To: Kenneth Crudup, Mika Westerberg, linux-usb
Cc: linux-pm@vger.kernel.org, linux-usb
[cc += Mika, linux-usb]
On Wed, Aug 21, 2024 at 03:05:59PM -0700, Kenneth Crudup wrote:
> Subject says it all, but to recap: my laptop doesn't detect Thunderbolt
> topology changes when resuming from suspend or hibernate. The only
> change it does pick up is a TB cable being disconnected while suspended
> or hibernated; if a cable is connected, or a different TB setup
> altogether is attached, while the system is asleep, it doesn't notice
> the topology change on resume until I disconnect and reconnect.
>
> I'm currently running 6.10.6, but this has been going on for a while.
>
> [ 0.000000] DMI: Dell Inc. XPS 9320/0KNXGD, BIOS 2.12.0 04/11/2024
> ...
> [ 0.136807] smpboot: CPU0: 12th Gen Intel(R) Core(TM) i7-1280P (family:
> 0x6, model: 0x9a, stepping: 0x3)
This commit went into v6.11-rc1 and will at least detect replacement
of PCI devices (to a certain extent):
https://git.kernel.org/linus/9d573d19547b
However PCI is layered on top of (tunneled through) the Thunderbolt
switching fabric and that's where the real problem likely is here.
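A rough user-space analogue of that check, if you want to see for
yourself what changed across a suspend cycle (filenames here are just
illustrative):
    # before suspending, with the first dock attached:
    lspci -nn > /tmp/pci-before.txt
    # swap docks while asleep, then after resume:
    lspci -nn > /tmp/pci-after.txt
    diff -u /tmp/pci-before.txt /tmp/pci-after.txt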
Maybe you can open a bug at bugzilla.kernel.org and attach full dmesg
and lspci -vvv output in the working case (device attachment at runtime)
and the non-working case (device attachment during system sleep).
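Something along these lines (filenames are just examples):
    sudo dmesg > dmesg-runtime-attach.txt
    sudo lspci -vvv > lspci-runtime-attach.txt
    # ...and the same again for the attach-during-sleep case:
    sudo dmesg > dmesg-sleep-attach.txt
    sudo lspci -vvv > lspci-sleep-attach.txt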
Does the machine wake up if you attach devices during system sleep?
Are you suspending to ACPI S0ix, S3 or S4?
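If you're not sure, the kernel reports the supported variants and
brackets the selected one; the output below is only an example:
    $ cat /sys/power/mem_sleep
    [s2idle] deep
Here "s2idle" is S0ix-style suspend-to-idle and "deep" is ACPI S3.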
Thanks,
Lukas
* Re: My Dell XPS-9320 (kernel 6.10.x, et al.) doesn't detect Thunderbolt additions when coming out of suspend or hibernate
From: Kenneth Crudup @ 2024-08-30 19:52 UTC
To: Lukas Wunner, Mika Westerberg, linux-usb; +Cc: linux-pm@vger.kernel.org, Me
Huh. So I checked out Linus' master (currently up to 6.11-rc5) and it
seems to be doing the right thing now; I left a USB-C alt-mode monitor setup
plugged in when I suspended, then came back to my 4K monitor setup via
TB and it came up in the right resolution and everything.
Excellent news, so I'll keep using Linus' master until 6.11 is released.
Oh, and to answer your question, no, my system doesn't (perceptibly)
wake up when suspended if I connect/disconnect USB/TB cables.
... and unfortunately all I have now is (power-hungry) s0ix sleep.
-Kenny
On 8/25/24 20:06, Lukas Wunner wrote:
> [cc += Mika, linux-usb]
>
> On Wed, Aug 21, 2024 at 03:05:59PM -0700, Kenneth Crudup wrote:
>> Subject says it all, but to recap: my laptop doesn't detect Thunderbolt
>> topology changes when resuming from suspend or hibernate. The only
>> change it does pick up is a TB cable being disconnected while suspended
>> or hibernated; if a cable is connected, or a different TB setup
>> altogether is attached, while the system is asleep, it doesn't notice
>> the topology change on resume until I disconnect and reconnect.
>>
>> I'm currently running 6.10.6, but this has been going on for a while.
>>
>> [ 0.000000] DMI: Dell Inc. XPS 9320/0KNXGD, BIOS 2.12.0 04/11/2024
>> ...
>> [ 0.136807] smpboot: CPU0: 12th Gen Intel(R) Core(TM) i7-1280P (family:
>> 0x6, model: 0x9a, stepping: 0x3)
>
> This commit went into v6.11-rc1 and will at least detect replacement
> of PCI devices (to a certain extent):
>
> https://git.kernel.org/linus/9d573d19547b
>
> However PCI is layered on top of (tunneled through) the Thunderbolt
> switching fabric and that's where the real problem likely is here.
>
> Maybe you can open a bug at bugzilla.kernel.org and attach full dmesg
> and lspci -vvv output in the working case (device attachment at runtime)
> and the non-working case (device attachment during system sleep).
>
> Does the machine wake up if you attach devices during system sleep?
> Are you suspending to ACPI S0ix, S3 or S4?
>
> Thanks,
>
> Lukas
>
--
Kenneth R. Crudup / Sr. SW Engineer, Scott County Consulting, Orange
County CA
* Re: My Dell XPS-9320 (kernel 6.10.x, et al.) doesn't detect Thunderbolt additions when coming out of suspend or hibernate
From: Kenneth Crudup @ 2024-09-04 6:10 UTC
To: Lukas Wunner, Mika Westerberg, linux-usb
Cc: linux-pm@vger.kernel.org, Kenneth Crudup
... or, maybe not. Turns out that sometimes my system can't suspend (it
just hangs, spinning hard somewhere, judging by the heat and the fans)
when plugged into a Thunderbolt dock at the time of suspend.
-K
On 8/30/24 12:52, Kenneth Crudup wrote:
>
> Huh. So I checked out Linus' master (currently up to 6.11-rc5) and it
> seems to be doing the right thing now; I left a USB-C alt-mode monitor setup
> plugged in when I suspended, then came back to my 4K monitor setup via
> TB and it came up in the right resolution and everything.
>
> Excellent news, so I'll keep using Linus' master until 6.11 is released.
>
> Oh, and to answer your question, no, my system doesn't (perceptibly)
> wake up when suspended if I connect/disconnect USB/TB cables.
>
> ... and unfortunately all I have now is (power-hungry) s0ix sleep.
>
> -Kenny
>
> On 8/25/24 20:06, Lukas Wunner wrote:
>> [cc += Mika, linux-usb]
>>
>> On Wed, Aug 21, 2024 at 03:05:59PM -0700, Kenneth Crudup wrote:
>>> Subject says it all, but to recap: my laptop doesn't detect
>>> Thunderbolt topology changes when resuming from suspend or hibernate.
>>> The only change it does pick up is a TB cable being disconnected
>>> while suspended or hibernated; if a cable is connected, or a different
>>> TB setup altogether is attached, while the system is asleep, it
>>> doesn't notice the topology change on resume until I disconnect and
>>> reconnect.
>>>
>>> I'm currently running 6.10.6, but this has been going on for a while.
>>>
>>> [ 0.000000] DMI: Dell Inc. XPS 9320/0KNXGD, BIOS 2.12.0 04/11/2024
>>> ...
>>> [ 0.136807] smpboot: CPU0: 12th Gen Intel(R) Core(TM) i7-1280P
>>> (family: 0x6, model: 0x9a, stepping: 0x3)
>>
>> This commit went into v6.11-rc1 and will at least detect replacement
>> of PCI devices (to a certain extent):
>>
>> https://git.kernel.org/linus/9d573d19547b
>>
>> However PCI is layered on top of (tunneled through) the Thunderbolt
>> switching fabric and that's where the real problem likely is here.
>>
>> Maybe you can open a bug at bugzilla.kernel.org and attach full dmesg
>> and lspci -vvv output in the working case (device attachment at runtime)
>> and the non-working case (device attachment during system sleep).
>>
>> Does the machine wake up if you attach devices during system sleep?
>> Are you suspending to ACPI S0ix, S3 or S4?
>>
>> Thanks,
>>
>> Lukas
>>
>
--
Kenneth R. Crudup / Sr. SW Engineer, Scott County Consulting, Orange
County CA
* Re: My Dell XPS-9320 (kernel 6.10.x, et al.) doesn't detect Thunderbolt additions when coming out of suspend or hibernate
From: Mika Westerberg @ 2024-09-04 12:28 UTC
To: Kenneth Crudup; +Cc: Lukas Wunner, linux-usb, linux-pm@vger.kernel.org
Hi,
On Tue, Sep 03, 2024 at 11:10:41PM -0700, Kenneth Crudup wrote:
>
> ... or, maybe not. Turns out that sometimes my system can't suspend (it
> just hangs, spinning hard somewhere, judging by the heat and the fans)
> when plugged into a Thunderbolt dock at the time of suspend.
Can you create a bug in bugzilla.kernel.org and attach a full dmesg
captured while entering suspend with the dock connected (so that the
issue reproduces)? Please also add "thunderbolt.dyndbg=+p" to the
kernel command line so we can see what the driver is doing. It would
probably also be good to add the lspci dumps, as Lukas asked.
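Regarding the dyndbg bit: either boot with the parameter on the
command line, or enable it at runtime through the standard
dynamic-debug control file:
    # on the kernel command line:
    thunderbolt.dyndbg=+p
    # or at runtime, without a reboot:
    echo 'module thunderbolt +p' | sudo tee /sys/kernel/debug/dynamic_debug/control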
* Re: My Dell XPS-9320 (kernel 6.10.x, et al.) doesn't detect Thunderbolt additions when coming out of suspend or hibernate
From: Kenneth Crudup @ 2024-09-09 7:51 UTC
To: Mika Westerberg, Kenneth Crudup
Cc: Lukas Wunner, linux-usb, linux-pm@vger.kernel.org
[-- Attachment #1: Type: text/plain, Size: 1096 bytes --]
I can't get to the dmesg when it crashes, but I did a SysRq-S/C and have
attached the crash output; let me know if this is at all helpful.
I see I'd done a SysRq-S/C on a previous hang as well; I've attached that one, too.
This particular time it suspended OK, but hung indefinitely when I
plugged it into another TB3 dock (the previous one was TB4, if it matters).
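For reference, that's SysRq-S to sync and then SysRq-C to force a
crash, so the console log lands in pstore; the "Panic#1 PartN" pieces
in the attachments are what pstore had saved after the reboot. The
same can be triggered from a shell when the key combo isn't usable:
    echo s | sudo tee /proc/sysrq-trigger   # emergency sync
    echo c | sudo tee /proc/sysrq-trigger   # force a crash dump
    # after reboot, the captured parts show up under:
    ls /sys/fs/pstore/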
On 9/4/24 05:28, Mika Westerberg wrote:
> Hi,
>
> On Tue, Sep 03, 2024 at 11:10:41PM -0700, Kenneth Crudup wrote:
>>
>> ... or, maybe not. Turns out that sometimes my system can't suspend (it
>> just hangs, spinning hard somewhere, judging by the heat and the fans)
>> when plugged into a Thunderbolt dock at the time of suspend.
>
> Can you create a bug in bugzilla.kernel.org and attach a full dmesg
> captured while entering suspend with the dock connected (so that the
> issue reproduces)? Please also add "thunderbolt.dyndbg=+p" to the
> kernel command line so we can see what the driver is doing. It would
> probably also be good to add the lspci dumps, as Lukas asked.
>
--
Kenneth R. Crudup / Sr. SW Engineer, Scott County Consulting, Orange
County CA
[-- Attachment #2: sysrq-c-output --]
[-- Type: text/plain, Size: 33032 bytes --]
Panic#1 Part5
<4>[21966.378002][ C11] lowmem_reserve[]: 0 0 30344 0
<4>[21966.378003][ C11] Node 0 Normal free:568068kB boost:0kB min:64940kB low:96012kB high:127084kB reserved_highatomic:22528KB active_anon:1157560kB inactive_anon:14167840kB active_file:3906004kB inactive_file:9175400kB unevictable:1133156kB writepending:0kB present:31711232kB managed:31077296kB mlocked:32kB bounce:0kB free_pcp:912kB local_pcp:912kB free_cma:0kB
<4>[21966.378007][ C11] lowmem_reserve[]: 0 0 0 0
<4>[21966.378008][ C11] Node 0 DMA: 1*4kB (M) 0*8kB 0*16kB 1*32kB (M) 9*64kB (M) 4*128kB (M) 5*256kB (M) 3*512kB (M) 1*1024kB (M) 0*2048kB 0*4096kB = 4964kB
<4>[21966.378016][ C11] Node 0 DMA32: 0*4kB 0*8kB 1*16kB (H) 1*32kB (H) 90*64kB (MH) 23*128kB (MH) 23*256kB (MH) 7*512kB (MH) 3*1024kB (MH) 2*2048kB (UM) 0*4096kB = 25392kB
<4>[21966.378022][ C11] Node 0 Normal: 26673*4kB (UMEH) 25024*8kB (UMEH) 2610*16kB (UMEH) 559*32kB (UMEH) 291*64kB (UMEH) 361*128kB (UMEH) 188*256kB (UMEH) 33*512kB (UMEH) 20*1024kB (UMEH) 19*2048kB (UMH) 3*4096kB (M) = 568068kB
<4>[21966.378031][ C11] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
<4>[21966.378032][ C11] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
<4>[21966.378033][ C11] 6220334 total pagecache pages
<4>[21966.378033][ C11] 2142 pages in swap cache
Panic#1 Part4
<4>[21966.378034][ C11] Free swap = 33210876kB
<4>[21966.378034][ C11] Total swap = 33554428kB
<4>[21966.378035][ C11] 8287317 pages RAM
<4>[21966.378035][ C11] 0 pages HighMem/MovableOnly
<4>[21966.378036][ C11] 175115 pages reserved
<4>[21966.378037][ C11] 0 pages hwpoisoned
<4>[21966.378038][ C11] Timer List Version: v0.10
<4>[21966.378038][ C11] HRTIMER_MAX_CLOCK_BASES: 8
<4>[21966.378039][ C11] now at 15896350821922 nsecs
<4>[21966.378039][ C11]
<4>[21966.378040][ C11] cpu: 11
<4>[21966.378041][ C11] clock 0:
<4>[21966.378041][ C11] .base: pK-error
<4>[21966.378042][ C11] .index: 0
<4>[21966.378043][ C11] .resolution: 1 nsecs
<4>[21966.378043][ C11] .get_time: ktime_get
<4>[21966.378044][ C11] .offset: 0 nsecs
<4>[21966.378045][ C11] active timers:
<4>[21966.378045][ C11] #0: < pK-error>, tick_nohz_handler
<4>[21966.378048][ C11] , S:01
<4>[21966.378048][ C11]
<4>[21966.378049][ C11] # expires at 15897239000000-15897239000000 nsecs [in 888178078 to 888178078 nsecs]
<4>[21966.378050][ C11] #1: < pK-error>, watchdog_timer_fn
<4>[21966.378051][ C11] , S:01
<4>[21966.378052][ C11]
<4>[21966.378052][ C11] # expires at 15900202515884-15900202515884 nsecs [in 3851693962 to 3851693962 nsecs]
<4>[21966.378053][ C11] #2: < pK-error>, hrtimer_wakeup
<4>[21966.378054][ C11] , S:01
<4>[21966.378055][ C11]
<4>[21966.378055][ C11] # expires at 16211873631493-16211873681493 nsecs [in 315522809571 to 315522859571 nsecs]
<4>[21966.378056][ C11] clock 1:
<4>[21966.378056][ C11] .base: pK-error
Panic#1 Part3
<4>[21966.378056][ C11] .index: 1
<4>[21966.378057][ C11] .resolution: 1 nsecs
<4>[21966.378057][ C11] .get_time: ktime_get_real
<4>[21966.378058][ C11] .offset: 1725851624333471085 nsecs
<4>[21966.378059][ C11] active timers:
<4>[21966.378059][ C11] clock 2:
<4>[21966.378059][ C11] .base: pK-error
<4>[21966.378060][ C11] .index: 2
<4>[21966.378060][ C11] .resolution: 1 nsecs
<4>[21966.378060][ C11] .get_time: ktime_get_boottime
<4>[21966.378061][ C11] .offset: 6069209201912 nsecs
<4>[21966.378062][ C11] active timers:
<4>[21966.378062][ C11] clock 3:
<4>[21966.378062][ C11] .base: pK-error
<4>[21966.378063][ C11] .index: 3
<4>[21966.378063][ C11] .resolution: 1 nsecs
<4>[21966.378063][ C11] .get_time: ktime_get_clocktai
<4>[21966.378064][ C11] .offset: 1725851624333471085 nsecs
<4>[21966.378065][ C11] active timers:
<4>[21966.378065][ C11] clock 4:
<4>[21966.378065][ C11] .base: pK-error
<4>[21966.378066][ C11] .index: 4
<4>[21966.378066][ C11] .resolution: 1 nsecs
<4>[21966.378066][ C11] .get_time: ktime_get
<4>[21966.378067][ C11] .offset: 0 nsecs
<4>[21966.378068][ C11] active timers:
<4>[21966.378068][ C11] clock 5:
<4>[21966.378068][ C11] .base: pK-error
<4>[21966.378069][ C11] .index: 5
<4>[21966.378069][ C11] .resolution: 1 nsecs
<4>[21966.378069][ C11] .get_time: ktime_get_real
<4>[21966.378070][ C11] .offset: 1725851624333471085 nsecs
<4>[21966.378071][ C11] active timers:
Panic#1 Part2
<4>[21966.378071][ C11] clock 6:
<4>[21966.378071][ C11] .base: pK-error
<4>[21966.378071][ C11] .index: 6
<4>[21966.378072][ C11] .resolution: 1 nsecs
<4>[21966.378072][ C11] .get_time: ktime_get_boottime
<4>[21966.378073][ C11] .offset: 6069209201912 nsecs
<4>[21966.378073][ C11] active timers:
<4>[21966.378074][ C11] clock 7:
<4>[21966.378074][ C11] .base: pK-error
<4>[21966.378074][ C11] .index: 7
<4>[21966.378075][ C11] .resolution: 1 nsecs
<4>[21966.378075][ C11] .get_time: ktime_get_clocktai
<4>[21966.378076][ C11] .offset: 1725851624333471085 nsecs
<4>[21966.378076][ C11] active timers:
<4>[21966.378076][ C11] .expires_next : 15897239000000 nsecs
<4>[21966.378077][ C11] .hres_active : 1
<4>[21966.378078][ C11] .nr_events : 531587
<4>[21966.378078][ C11] .nr_retries : 105
<4>[21966.378079][ C11] .nr_hangs : 0
<4>[21966.378080][ C11] .max_hang_time : 0
<4>[21966.378080][ C11] .nohz : 1
<4>[21966.378081][ C11] .highres : 1
<4>[21966.378081][ C11] .last_tick : 15896276000000 nsecs
<4>[21966.378082][ C11] .tick_stopped : 1
<4>[21966.378082][ C11] .idle_jiffies : 4310563452
<4>[21966.378083][ C11] .idle_calls : 785546
<4>[21966.378083][ C11] .idle_sleeps : 785510
<4>[21966.378084][ C11] .idle_entrytime : 15896279629203 nsecs
<4>[21966.378085][ C11] .idle_waketime : 15896279629203 nsecs
<4>[21966.378085][ C11] .idle_exittime : 15896275868357 nsecs
<4>[21966.378086][ C11] .idle_sleeptime : 15774976750651 nsecs
Panic#1 Part1
<4>[21966.378087][ C11] .iowait_sleeptime: 9288888905 nsecs
<4>[21966.378087][ C11] .last_jiffies : 4310563452
<4>[21966.378088][ C11] .next_timer : 15897239000000
<4>[21966.378088][ C11] .idle_expires : 15897239000000 nsecs
<4>[21966.378089][ C11] jiffies: 4310563457
<4>[21966.378089][ C11]
<4>[21966.378090][ C11] Tick Device: mode: 1
<4>[21966.378090][ C11] Broadcast device
<4>[21966.378090][ C11] Clock Event Device:
<4>[21966.378090][ C11] <NULL>
<4>[21966.378091][ C11] tick_broadcast_mask: 00000
<4>[21966.378092][ C11] tick_broadcast_oneshot_mask: 00000
<4>[21966.378092][ C11]
<4>[21966.378093][ C11] Tick Device: mode: 1
<4>[21966.378093][ C11] Per CPU device: 11
<4>[21966.378093][ C11] Clock Event Device:
<4>[21966.378094][ C11] lapic-deadline
<4>[21966.378094][ C11] max_delta_ns: 1101273695516
<4>[21966.378095][ C11] min_delta_ns: 1000
<4>[21966.378095][ C11] mult: 16750372
<4>[21966.378096][ C11] shift: 26
<4>[21966.378096][ C11] mode: 3
<4>[21966.378096][ C11] next_event: 15897239000000 nsecs
<4>[21966.378097][ C11] set_next_event: lapic_next_deadline
<4>[21966.378098][ C11] shutdown: lapic_timer_shutdown
<4>[21966.378098][ C11] periodic: lapic_timer_set_periodic
<4>[21966.378099][ C11] oneshot: lapic_timer_set_oneshot
<4>[21966.378100][ C11] oneshot stopped: lapic_timer_shutdown
<4>[21966.378101][ C11] event_handler: hrtimer_interrupt
<4>[21966.378102][ C11]
<4>[21966.378102][ C11] retries: 2404
<4>[21966.378103][ C11] Wakeup Device: <NULL>
<4>[21966.378103][ C11]
Panic#1 Part18
<6>[21966.377605][ C11] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007fce71914887
<6>[21966.377605][ C11] RDX: 0000000000000004 RSI: 00007ffca3e6ccc0 RDI: 0000000000000004
<6>[21966.377606][ C11] RBP: 00007ffca3e6ccc0 R08: 0000000000000004 R09: 000000007fffffff
<6>[21966.377606][ C11] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000004
<6>[21966.377607][ C11] R13: 000055e3818f22d0 R14: 00007fce71a16a00 R15: 0000000000000004
<6>[21966.377608][ C11] </TASK>
<6>[21966.377610][ C11] task:kioslave5 state:R stack:0 pid:34618 tgid:34618 ppid:1 flags:0x00004006
<6>[21966.377611][ C11] Call Trace:
<6>[21966.377612][ C11] <TASK>
<6>[21966.377612][ C11] __schedule+0x445/0x5e0
<6>[21966.377613][ C11] schedule+0x5e/0xc0
<6>[21966.377614][ C11] __refrigerator+0xd7/0x160
<6>[21966.377616][ C11] get_signal+0x4e6/0x510
<6>[21966.377618][ C11] arch_do_signal_or_restart+0x2c/0x240
<6>[21966.377619][ C11] ? __se_sys_ppoll+0x107/0x130
<6>[21966.377621][ C11] syscall_exit_to_user_mode+0x59/0x120
<6>[21966.377622][ C11] do_syscall_64+0x7d/0xf0
<6>[21966.377624][ C11] ? ksys_write+0x68/0xd0
<6>[21966.377626][ C11] ? __se_sys_ppoll+0x107/0x130
<6>[21966.377627][ C11] ? eventfd_write+0x18b/0x1b0
<6>[21966.377629][ C11] ? vfs_write+0x125/0x420
<6>[21966.377631][ C11] ? ksys_write+0x68/0xd0
<6>[21966.377633][ C11] ? do_syscall_64+0x7d/0xf0
<6>[21966.377635][ C11] ? irqentry_exit+0x16/0x40
<6>[21966.377635][ C11] ? exc_page_fault+0x72/0x90
<6>[21966.377636][ C11] entry_SYSCALL_64_after_hwframe+0x4b/0x53
Panic#1 Part17
<6>[21966.377637][ C11] RIP: 0033:0x7f5894b18c6f
<6>[21966.377638][ C11] RSP: 002b:00007ffcb0139660 EFLAGS: 00000246 ORIG_RAX: 000000000000010f
<6>[21966.377639][ C11] RAX: fffffffffffffdfe RBX: 00007ffcb01397f7 RCX: 00007f5894b18c6f
<6>[21966.377640][ C11] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 00007ffcb0139718
<6>[21966.377640][ C11] RBP: 00007ffcb0139718 R08: 0000000000000008 R09: 00007ffcb01397f7
<6>[21966.377641][ C11] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000001
<6>[21966.377641][ C11] R13: 0000000000000000 R14: 000056274e943b00 R15: 00007ffcb01397f6
<6>[21966.377642][ C11] </TASK>
<6>[21966.377644][ C11] task:systemd-udevd state:R stack:0 pid:34619 tgid:34619 ppid:568 flags:0x00004006
<6>[21966.377645][ C11] Call Trace:
<6>[21966.377646][ C11] <TASK>
<6>[21966.377646][ C11] __schedule+0x445/0x5e0
<6>[21966.377647][ C11] schedule+0x5e/0xc0
<6>[21966.377649][ C11] __refrigerator+0xd7/0x160
<6>[21966.377650][ C11] get_signal+0x4e6/0x510
<6>[21966.377652][ C11] ? unix_dgram_sendmsg+0x756/0x850
<6>[21966.377653][ C11] arch_do_signal_or_restart+0x2c/0x240
<6>[21966.377654][ C11] ? __x64_sys_epoll_wait+0x9f/0xd0
<6>[21966.377656][ C11] syscall_exit_to_user_mode+0x59/0x120
<6>[21966.377657][ C11] do_syscall_64+0x7d/0xf0
<6>[21966.377658][ C11] ? vfs_write+0x34b/0x420
<6>[21966.377660][ C11] ? ksys_write+0x68/0xd0
<6>[21966.377662][ C11] ? do_syscall_64+0x7d/0xf0
<6>[21966.377664][ C11] ? do_syscall_64+0x7d/0xf0
<6>[21966.377665][ C11] ? __check_object_size+0x6c/0x140
<6>[21966.377666][ C11] ? kmem_cache_free+0x2a/0x210
Panic#1 Part16
<6>[21966.377667][ C11] ? putname+0x4b/0x60
<6>[21966.377668][ C11] ? do_readlinkat+0x12a/0x140
<6>[21966.377670][ C11] ? do_syscall_64+0x7d/0xf0
<6>[21966.377672][ C11] ? do_syscall_64+0x7d/0xf0
<6>[21966.377673][ C11] ? do_syscall_64+0x7d/0xf0
<6>[21966.377675][ C11] ? sysvec_reschedule_ipi+0x61/0x70
<6>[21966.377675][ C11] entry_SYSCALL_64_after_hwframe+0x4b/0x53
<6>[21966.377676][ C11] RIP: 0033:0x7fb7b9925dea
<6>[21966.377677][ C11] RSP: 002b:00007ffd5ba1fae8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
<6>[21966.377678][ C11] RAX: fffffffffffffffc RBX: 00005617604f46f0 RCX: 00007fb7b9925dea
<6>[21966.377678][ C11] RDX: 0000000000000006 RSI: 00005617604ae020 RDI: 0000000000000003
<6>[21966.377679][ C11] RBP: 7fffffffffffffff R08: 00005617604ae020 R09: 0000000000000004
<6>[21966.377680][ C11] R10: 00000000ffffffff R11: 0000000000000246 R12: 0000000000000014
<6>[21966.377680][ C11] R13: 0000000000000006 R14: 0000000000000002 R15: 00005617604f4880
<6>[21966.377681][ C11] </TASK>
<6>[21966.377683][ C11] task:systemd-udevd state:R stack:0 pid:34620 tgid:34620 ppid:568 flags:0x00004006
<6>[21966.377684][ C11] Call Trace:
<6>[21966.377685][ C11] <TASK>
<6>[21966.377685][ C11] __schedule+0x445/0x5e0
<6>[21966.377686][ C11] schedule+0x5e/0xc0
<6>[21966.377688][ C11] __refrigerator+0xd7/0x160
<6>[21966.377689][ C11] get_signal+0x4e6/0x510
<6>[21966.377691][ C11] arch_do_signal_or_restart+0x2c/0x240
<6>[21966.377692][ C11] ? __x64_sys_epoll_wait+0x9f/0xd0
<6>[21966.377694][ C11] syscall_exit_to_user_mode+0x59/0x120
<6>[21966.377695][ C11] do_syscall_64+0x7d/0xf0
Panic#1 Part15
<6>[21966.377696][ C11] ? vfs_write+0x34b/0x420
<6>[21966.377698][ C11] ? ksys_write+0x68/0xd0
<6>[21966.377700][ C11] ? do_syscall_64+0x7d/0xf0
<6>[21966.377702][ C11] ? do_readlinkat+0x12a/0x140
<6>[21966.377704][ C11] ? do_syscall_64+0x7d/0xf0
<6>[21966.377706][ C11] ? do_syscall_64+0x7d/0xf0
<6>[21966.377707][ C11] ? do_syscall_64+0x7d/0xf0
<6>[21966.377709][ C11] ? __fput_sync+0x14/0x20
<6>[21966.377710][ C11] ? __se_sys_close.llvm.4875362272298582369+0x70/0xc0
<6>[21966.377712][ C11] ? do_syscall_64+0x7d/0xf0
<6>[21966.377713][ C11] ? do_syscall_64+0x7d/0xf0
<6>[21966.377715][ C11] ? do_syscall_64+0x7d/0xf0
<6>[21966.377716][ C11] ? sysvec_apic_timer_interrupt+0x48/0x80
<6>[21966.377717][ C11] entry_SYSCALL_64_after_hwframe+0x4b/0x53
<6>[21966.377718][ C11] RIP: 0033:0x7fb7b9925dea
<6>[21966.377718][ C11] RSP: 002b:00007ffd5ba1fae8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
<6>[21966.377719][ C11] RAX: fffffffffffffffc RBX: 00005617604f46f0 RCX: 00007fb7b9925dea
<6>[21966.377720][ C11] RDX: 0000000000000006 RSI: 000056176046f8a0 RDI: 0000000000000003
<6>[21966.377720][ C11] RBP: 7fffffffffffffff R08: 000056176046f8a0 R09: 0000000000000004
<6>[21966.377721][ C11] R10: 00000000ffffffff R11: 0000000000000246 R12: 0000000000000014
<6>[21966.377722][ C11] R13: 0000000000000006 R14: 0000000000000002 R15: 00005617604f4880
<6>[21966.377722][ C11] </TASK>
<6>[21966.377724][ C11] task:systemd-udevd state:R stack:0 pid:34622 tgid:34622 ppid:568 flags:0x00004006
<6>[21966.377725][ C11] Call Trace:
<6>[21966.377726][ C11] <TASK>
<6>[21966.377726][ C11] __schedule+0x445/0x5e0
Panic#1 Part14
<6>[21966.377728][ C11] schedule+0x5e/0xc0
<6>[21966.377729][ C11] __refrigerator+0xd7/0x160
<6>[21966.377731][ C11] get_signal+0x4e6/0x510
<6>[21966.377732][ C11] arch_do_signal_or_restart+0x2c/0x240
<6>[21966.377734][ C11] ? __x64_sys_epoll_wait+0x9f/0xd0
<6>[21966.377735][ C11] syscall_exit_to_user_mode+0x59/0x120
<6>[21966.377736][ C11] do_syscall_64+0x7d/0xf0
<6>[21966.377738][ C11] ? syscall_exit_to_user_mode+0x118/0x120
<6>[21966.377738][ C11] ? do_syscall_64+0x7d/0xf0
<6>[21966.377740][ C11] ? handle_pte_fault+0x16b/0x190
<6>[21966.377741][ C11] ? __handle_mm_fault+0x3e6/0x650
<6>[21966.377743][ C11] ? __count_memcg_events+0x6d/0x100
<6>[21966.377743][ C11] ? mm_account_fault+0x7e/0x110
<6>[21966.377745][ C11] ? handle_mm_fault+0xc7/0x1a0
<6>[21966.377746][ C11] ? do_user_addr_fault+0x410/0x590
<6>[21966.377747][ C11] ? irqentry_exit+0x16/0x40
<6>[21966.377748][ C11] ? exc_page_fault+0x72/0x90
<6>[21966.377749][ C11] entry_SYSCALL_64_after_hwframe+0x4b/0x53
<6>[21966.377749][ C11] RIP: 0033:0x7fb7b9925dea
<6>[21966.377750][ C11] RSP: 002b:00007ffd5ba1fae8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
<6>[21966.377751][ C11] RAX: fffffffffffffffc RBX: 00005617604f46f0 RCX: 00007fb7b9925dea
<6>[21966.377751][ C11] RDX: 0000000000000006 RSI: 00005617604868f0 RDI: 0000000000000003
<6>[21966.377752][ C11] RBP: 7fffffffffffffff R08: 00005617604868f0 R09: 0000000000000004
<6>[21966.377753][ C11] R10: 00000000ffffffff R11: 0000000000000246 R12: 0000000000000014
<6>[21966.377753][ C11] R13: 0000000000000006 R14: 0000000000000002 R15: 00005617604f4880
<6>[21966.377754][ C11] </TASK>
Panic#1 Part13
<6>[21966.377755][ C11] task:systemd-udevd state:R stack:0 pid:34626 tgid:34626 ppid:568 flags:0x00004006
<6>[21966.377756][ C11] Call Trace:
<6>[21966.377757][ C11] <TASK>
<6>[21966.377758][ C11] __schedule+0x445/0x5e0
<6>[21966.377759][ C11] schedule+0x5e/0xc0
<6>[21966.377760][ C11] __refrigerator+0xd7/0x160
<6>[21966.377762][ C11] get_signal+0x4e6/0x510
<6>[21966.377763][ C11] arch_do_signal_or_restart+0x2c/0x240
<6>[21966.377765][ C11] ? __x64_sys_epoll_wait+0x9f/0xd0
<6>[21966.377766][ C11] syscall_exit_to_user_mode+0x59/0x120
<6>[21966.377767][ C11] do_syscall_64+0x7d/0xf0
<6>[21966.377769][ C11] ? handle_irq_event+0x59/0x70
<6>[21966.377771][ C11] ? handle_edge_irq+0x1b7/0x1f0
<6>[21966.377772][ C11] ? irqentry_exit+0x16/0x40
<6>[21966.377773][ C11] ? common_interrupt+0x54/0xa0
<6>[21966.377775][ C11] entry_SYSCALL_64_after_hwframe+0x4b/0x53
<6>[21966.377776][ C11] RIP: 0033:0x7fb7b9925dea
<6>[21966.377776][ C11] RSP: 002b:00007ffd5ba1fae8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
<6>[21966.377777][ C11] RAX: fffffffffffffffc RBX: 00005617604f46f0 RCX: 00007fb7b9925dea
<6>[21966.377778][ C11] RDX: 0000000000000006 RSI: 000056176031ca70 RDI: 0000000000000003
<6>[21966.377778][ C11] RBP: 7fffffffffffffff R08: 000056176031ca70 R09: 0000000000000004
<6>[21966.377779][ C11] R10: 00000000ffffffff R11: 0000000000000246 R12: 0000000000000014
<6>[21966.377779][ C11] R13: 0000000000000006 R14: 0000000000000002 R15: 00005617604f4880
<6>[21966.377780][ C11] </TASK>
<6>[21966.377782][ C11] task:systemd-udevd state:R stack:0 pid:34627 tgid:34627 ppid:568 flags:0x00004006
Panic#1 Part12
<6>[21966.377783][ C11] Call Trace:
<6>[21966.377783][ C11] <TASK>
<6>[21966.377784][ C11] __schedule+0x445/0x5e0
<6>[21966.377785][ C11] ? update_load_avg+0x6c/0x5c0
<6>[21966.377786][ C11] schedule+0x5e/0xc0
<6>[21966.377787][ C11] __refrigerator+0xd7/0x160
<6>[21966.377789][ C11] get_signal+0x4e6/0x510
<6>[21966.377790][ C11] ? __switch_to+0x149/0x550
<6>[21966.377792][ C11] arch_do_signal_or_restart+0x2c/0x240
<6>[21966.377793][ C11] ? __schedule+0x44d/0x5e0
<6>[21966.377795][ C11] syscall_exit_to_user_mode+0x59/0x120
<6>[21966.377796][ C11] do_syscall_64+0x7d/0xf0
<6>[21966.377797][ C11] ? do_syscall_64+0x7d/0xf0
<6>[21966.377799][ C11] ? kmem_cache_free+0x162/0x210
<6>[21966.377800][ C11] ? __fput+0x199/0x2b0
<6>[21966.377801][ C11] ? __fput_sync+0x14/0x20
<6>[21966.377802][ C11] ? __se_sys_close.llvm.4875362272298582369+0x70/0xc0
<6>[21966.377804][ C11] ? do_syscall_64+0x7d/0xf0
<6>[21966.377805][ C11] ? exc_page_fault+0x72/0x90
<6>[21966.377806][ C11] entry_SYSCALL_64_after_hwframe+0x4b/0x53
<6>[21966.377807][ C11] RIP: 0033:0x7fb7b9925dea
<6>[21966.377807][ C11] RSP: 002b:00007ffd5ba1fae8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
<6>[21966.377808][ C11] RAX: fffffffffffffffc RBX: 00005617604f46f0 RCX: 00007fb7b9925dea
<6>[21966.377809][ C11] RDX: 0000000000000006 RSI: 000056176031ca70 RDI: 0000000000000003
<6>[21966.377809][ C11] RBP: 7fffffffffffffff R08: 000056176031ca70 R09: 0000000000000004
<6>[21966.377810][ C11] R10: 00000000ffffffff R11: 0000000000000246 R12: 0000000000000014
Panic#1 Part11
<6>[21966.377810][ C11] R13: 0000000000000006 R14: 0000000000000002 R15: 00005617604f4880
<6>[21966.377811][ C11] </TASK>
<6>[21966.377813][ C11] task:systemd-udevd state:R stack:0 pid:34632 tgid:34632 ppid:568 flags:0x00004006
<6>[21966.377814][ C11] Call Trace:
<6>[21966.377815][ C11] <TASK>
<6>[21966.377816][ C11] __schedule+0x445/0x5e0
<6>[21966.377817][ C11] schedule+0x5e/0xc0
<6>[21966.377818][ C11] __refrigerator+0xd7/0x160
<6>[21966.377820][ C11] get_signal+0x4e6/0x510
<6>[21966.377821][ C11] arch_do_signal_or_restart+0x2c/0x240
<6>[21966.377823][ C11] ? __x64_sys_epoll_wait+0x9f/0xd0
<6>[21966.377824][ C11] syscall_exit_to_user_mode+0x59/0x120
<6>[21966.377825][ C11] do_syscall_64+0x7d/0xf0
<6>[21966.377826][ C11] ? do_syscall_64+0x7d/0xf0
<6>[21966.377828][ C11] entry_SYSCALL_64_after_hwframe+0x4b/0x53
<6>[21966.377829][ C11] RIP: 0033:0x7fb7b9925dea
<6>[21966.377830][ C11] RSP: 002b:00007ffd5ba1fae8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
<6>[21966.377830][ C11] RAX: fffffffffffffffc RBX: 00005617604f46f0 RCX: 00007fb7b9925dea
<6>[21966.377831][ C11] RDX: 0000000000000006 RSI: 00005617604ae020 RDI: 0000000000000003
<6>[21966.377831][ C11] RBP: 7fffffffffffffff R08: 00005617604ae020 R09: 0000000000000000
<6>[21966.377832][ C11] R10: 00000000ffffffff R11: 0000000000000246 R12: 0000000000000014
<6>[21966.377833][ C11] R13: 0000000000000006 R14: 0000000000000002 R15: 00005617604f4880
<6>[21966.377834][ C11] </TASK>
<6>[21966.377835][ C11] task:kworker/8:0 state:I stack:0 pid:34633 tgid:34633 ppid:2 flags:0x00004000
Panic#1 Part10
<6>[21966.377837][ C11] Workqueue: 0x0 (events)
<6>[21966.377838][ C11] Call Trace:
<6>[21966.377838][ C11] <TASK>
<6>[21966.377839][ C11] __schedule+0x445/0x5e0
<6>[21966.377840][ C11] schedule+0x5e/0xc0
<6>[21966.377841][ C11] worker_thread+0x96/0x3c0
<6>[21966.377843][ C11] kthread+0xe9/0x100
<6>[21966.377844][ C11] ? pr_cont_work+0x1b0/0x1b0
<6>[21966.377846][ C11] ? kthread_blkcg+0x30/0x30
<6>[21966.377847][ C11] ret_from_fork+0x34/0x40
<6>[21966.377848][ C11] ? kthread_blkcg+0x30/0x30
<6>[21966.377849][ C11] ret_from_fork_asm+0x11/0x20
<6>[21966.377851][ C11] </TASK>
<6>[21966.377852][ C11] task:kworker/u80:6 state:I stack:0 pid:34634 tgid:34634 ppid:2 flags:0x00004000
<6>[21966.377853][ C11] Workqueue: 0x0 (ipv6_addrconf)
<6>[21966.377854][ C11] Call Trace:
<6>[21966.377855][ C11] <TASK>
<6>[21966.377856][ C11] __schedule+0x445/0x5e0
<6>[21966.377857][ C11] schedule+0x5e/0xc0
<6>[21966.377858][ C11] worker_thread+0x96/0x3c0
<6>[21966.377860][ C11] kthread+0xe9/0x100
<6>[21966.377861][ C11] ? pr_cont_work+0x1b0/0x1b0
<6>[21966.377863][ C11] ? kthread_blkcg+0x30/0x30
<6>[21966.377864][ C11] ret_from_fork+0x34/0x40
<6>[21966.377865][ C11] ? kthread_blkcg+0x30/0x30
<6>[21966.377866][ C11] ret_from_fork_asm+0x11/0x20
<6>[21966.377867][ C11] </TASK>
<6>[21966.377869][ C11] task:kworker/7:0 state:I stack:0 pid:34637 tgid:34637 ppid:2 flags:0x00004000
<6>[21966.377871][ C11] Workqueue: 0x0 (events)
<6>[21966.377872][ C11] Call Trace:
<6>[21966.377873][ C11] <TASK>
Panic#1 Part9
<6>[21966.377873][ C11] __schedule+0x445/0x5e0
<6>[21966.377874][ C11] schedule+0x5e/0xc0
<6>[21966.377875][ C11] worker_thread+0x96/0x3c0
<6>[21966.377877][ C11] kthread+0xe9/0x100
<6>[21966.377879][ C11] ? pr_cont_work+0x1b0/0x1b0
<6>[21966.377880][ C11] ? kthread_blkcg+0x30/0x30
<6>[21966.377881][ C11] ret_from_fork+0x34/0x40
<6>[21966.377882][ C11] ? kthread_blkcg+0x30/0x30
<6>[21966.377883][ C11] ret_from_fork_asm+0x11/0x20
<6>[21966.377885][ C11] </TASK>
<6>[21966.377886][ C11] task:kworker/11:1 state:I stack:0 pid:34639 tgid:34639 ppid:2 flags:0x00004000
<6>[21966.377888][ C11] Workqueue: 0x0 (events)
<6>[21966.377889][ C11] Call Trace:
<6>[21966.377889][ C11] <TASK>
<6>[21966.377890][ C11] __schedule+0x445/0x5e0
<6>[21966.377891][ C11] schedule+0x5e/0xc0
<6>[21966.377892][ C11] worker_thread+0x96/0x3c0
<6>[21966.377894][ C11] kthread+0xe9/0x100
<6>[21966.377895][ C11] ? pr_cont_work+0x1b0/0x1b0
<6>[21966.377897][ C11] ? kthread_blkcg+0x30/0x30
<6>[21966.377898][ C11] ret_from_fork+0x34/0x40
<6>[21966.377899][ C11] ? kthread_blkcg+0x30/0x30
<6>[21966.377900][ C11] ret_from_fork_asm+0x11/0x20
<6>[21966.377901][ C11] </TASK>
<6>[21966.377903][ C11] task:kworker/17:0 state:I stack:0 pid:34640 tgid:34640 ppid:2 flags:0x00004000
<6>[21966.377905][ C11] Workqueue: 0x0 (mm_percpu_wq)
<6>[21966.377906][ C11] Call Trace:
<6>[21966.377906][ C11] <TASK>
<6>[21966.377907][ C11] __schedule+0x445/0x5e0
<6>[21966.377908][ C11] schedule+0x5e/0xc0
<6>[21966.377909][ C11] worker_thread+0x96/0x3c0
<6>[21966.377911][ C11] kthread+0xe9/0x100
Panic#1 Part8
<6>[21966.377912][ C11] ? pr_cont_work+0x1b0/0x1b0
<6>[21966.377914][ C11] ? kthread_blkcg+0x30/0x30
<6>[21966.377915][ C11] ret_from_fork+0x34/0x40
<6>[21966.377916][ C11] ? kthread_blkcg+0x30/0x30
<6>[21966.377917][ C11] ret_from_fork_asm+0x11/0x20
<6>[21966.377918][ C11] </TASK>
<6>[21966.377920][ C11] task:kworker/2:1 state:I stack:0 pid:34642 tgid:34642 ppid:2 flags:0x00004000
<6>[21966.377921][ C11] Workqueue: 0x0 (events)
<6>[21966.377922][ C11] Call Trace:
<6>[21966.377922][ C11] <TASK>
<6>[21966.377923][ C11] __schedule+0x445/0x5e0
<6>[21966.377924][ C11] schedule+0x5e/0xc0
<6>[21966.377925][ C11] worker_thread+0x96/0x3c0
<6>[21966.377927][ C11] kthread+0xe9/0x100
<6>[21966.377928][ C11] ? pr_cont_work+0x1b0/0x1b0
<6>[21966.377930][ C11] ? kthread_blkcg+0x30/0x30
<6>[21966.377931][ C11] ret_from_fork+0x34/0x40
<6>[21966.377932][ C11] ? kthread_blkcg+0x30/0x30
<6>[21966.377933][ C11] ret_from_fork_asm+0x11/0x20
<6>[21966.377935][ C11] </TASK>
<6>[21966.377936][ C11] task:kworker/0:1 state:I stack:0 pid:34643 tgid:34643 ppid:2 flags:0x00004000
<6>[21966.377938][ C11] Workqueue: 0x0 (rcu_gp)
<6>[21966.377938][ C11] Call Trace:
<6>[21966.377939][ C11] <TASK>
<6>[21966.377940][ C11] __schedule+0x445/0x5e0
<6>[21966.377941][ C11] schedule+0x5e/0xc0
<6>[21966.377942][ C11] worker_thread+0x96/0x3c0
<6>[21966.377944][ C11] kthread+0xe9/0x100
<6>[21966.377945][ C11] ? pr_cont_work+0x1b0/0x1b0
<6>[21966.377947][ C11] ? kthread_blkcg+0x30/0x30
<6>[21966.377948][ C11] ret_from_fork+0x34/0x40
<6>[21966.377949][ C11] ? kthread_blkcg+0x30/0x30
Panic#1 Part7
<6>[21966.377950][ C11] ret_from_fork_asm+0x11/0x20
<6>[21966.377951][ C11] </TASK>
<6>[21966.377952][ C11] task:irq/217-mei_me state:S stack:0 pid:34646 tgid:34646 ppid:2 flags:0x00004000
<6>[21966.377953][ C11] Call Trace:
<6>[21966.377954][ C11] <TASK>
<6>[21966.377954][ C11] __schedule+0x445/0x5e0
<6>[21966.377955][ C11] ? irq_forced_thread_fn+0x70/0x70
<6>[21966.377957][ C11] schedule+0x5e/0xc0
<6>[21966.377958][ C11] irq_thread+0xa5/0x230
<6>[21966.377959][ C11] ? irq_thread_fn+0x50/0x50
<6>[21966.377960][ C11] kthread+0xe9/0x100
<6>[21966.377961][ C11] ? irq_forced_secondary_handler+0x20/0x20
<6>[21966.377962][ C11] ? kthread_blkcg+0x30/0x30
<6>[21966.377964][ C11] ret_from_fork+0x34/0x40
<6>[21966.377964][ C11] ? kthread_blkcg+0x30/0x30
<6>[21966.377966][ C11] ret_from_fork_asm+0x11/0x20
<6>[21966.377967][ C11] </TASK>
<6>[21966.377968][ C11] task:kworker/14:2 state:I stack:0 pid:34648 tgid:34648 ppid:2 flags:0x00004000
<6>[21966.377970][ C11] Workqueue: 0x0 (mm_percpu_wq)
<6>[21966.377970][ C11] Call Trace:
<6>[21966.377971][ C11] <TASK>
<6>[21966.377972][ C11] __schedule+0x445/0x5e0
<6>[21966.377973][ C11] schedule+0x5e/0xc0
<6>[21966.377974][ C11] worker_thread+0x96/0x3c0
<6>[21966.377976][ C11] kthread+0xe9/0x100
<6>[21966.377977][ C11] ? pr_cont_work+0x1b0/0x1b0
<6>[21966.377979][ C11] ? kthread_blkcg+0x30/0x30
<6>[21966.377980][ C11] ret_from_fork+0x34/0x40
<6>[21966.377981][ C11] ? kthread_blkcg+0x30/0x30
<6>[21966.377982][ C11] ret_from_fork_asm+0x11/0x20
<6>[21966.377983][ C11] </TASK>
<4>[21966.377984][ C11] Mem-Info:
Panic#1 Part6
<4>[21966.377986][ C11] active_anon:295291 inactive_anon:3701866 isolated_anon:0
<4>[21966.377986][ C11] active_file:997816 inactive_file:2436723 isolated_file:0
<4>[21966.377986][ C11] unevictable:283289 dirty:0 writeback:0
<4>[21966.377986][ C11] slab_reclaimable:109605 slab_unreclaimable:57431
<4>[21966.377986][ C11] mapped:1567699 shmem:2783558 pagetables:25635
<4>[21966.377986][ C11] sec_pagetables:2775 bounce:0
<4>[21966.377986][ C11] kernel_misc_reclaimable:0
<4>[21966.377986][ C11] free:149606 free_pcp:228 free_cma:0
<4>[21966.377990][ C11] Node 0 active_anon:1181164kB inactive_anon:14807464kB active_file:3991264kB inactive_file:9746892kB unevictable:1133156kB isolated(anon):0kB isolated(file):0kB mapped:6270796kB dirty:0kB writeback:0kB shmem:11134232kB shmem_thp:1574912kB shmem_pmdmapped:0kB anon_thp:0kB writeback_tmp:0kB kernel_stack:34400kB pagetables:102540kB sec_pagetables:11100kB all_unreclaimable? no
<4>[21966.377993][ C11] Node 0 DMA free:4964kB boost:0kB min:32kB low:44kB high:56kB reserved_highatomic:0KB active_anon:0kB inactive_anon:20kB active_file:0kB inactive_file:10368kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
<4>[21966.377996][ C11] lowmem_reserve[]: 0 1219 31564 0
<4>[21966.377998][ C11] Node 0 DMA32 free:25392kB boost:0kB min:2608kB low:3856kB high:5104kB reserved_highatomic:2048KB active_anon:23604kB inactive_anon:639604kB active_file:85260kB inactive_file:561124kB unevictable:0kB writepending:0kB present:1422044kB managed:1356152kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
Panic#1 Part20
<6>[21966.377534][ C11] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f1600291117
<6>[21966.377535][ C11] RDX: 0000000000000000 RSI: 0000000000000189 RDI: 000055ef1e4402c0
<6>[21966.377535][ C11] RBP: 000055ef1e440298 R08: 0000000000000000 R09: 00000000ffffffff
<6>[21966.377536][ C11] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
<6>[21966.377537][ C11] R13: 0000000000000000 R14: 0000000000000000 R15: 000055ef1e4402c0
<6>[21966.377538][ C11] </TASK>
<6>[21966.377539][ C11] task:kworker/u80:5 state:I stack:0 pid:34535 tgid:34535 ppid:2 flags:0x00004000
<6>[21966.377541][ C11] Workqueue: 0x0 (events_unbound)
<6>[21966.377542][ C11] Call Trace:
<6>[21966.377542][ C11] <TASK>
<6>[21966.377543][ C11] __schedule+0x445/0x5e0
<6>[21966.377544][ C11] schedule+0x5e/0xc0
<6>[21966.377545][ C11] worker_thread+0x96/0x3c0
<6>[21966.377547][ C11] kthread+0xe9/0x100
<6>[21966.377549][ C11] ? pr_cont_work+0x1b0/0x1b0
<6>[21966.377550][ C11] ? kthread_blkcg+0x30/0x30
<6>[21966.377552][ C11] ret_from_fork+0x34/0x40
<6>[21966.377552][ C11] ? kthread_blkcg+0x30/0x30
<6>[21966.377554][ C11] ret_from_fork_asm+0x11/0x20
<6>[21966.377555][ C11] </TASK>
<6>[21966.377556][ C11] task:systemd-sleep state:D stack:0 pid:34560 tgid:34560 ppid:1 flags:0x00004002
<6>[21966.377558][ C11] Call Trace:
<6>[21966.377558][ C11] <TASK>
<6>[21966.377559][ C11] __schedule+0x445/0x5e0
<6>[21966.377560][ C11] schedule+0x5e/0xc0
<6>[21966.377561][ C11] schedule_preempt_disabled+0x14/0x20
<6>[21966.377563][ C11] __mutex_lock+0x249/0x3d0
Panic#1 Part19
<6>[21966.377564][ C11] __mutex_lock_slowpath+0xe/0x10
<6>[21966.377566][ C11] mutex_lock+0x1f/0x30
<6>[21966.377567][ C11] device_resume+0x7f/0x340
<6>[21966.377568][ C11] dpm_resume+0x134/0x1a0
<6>[21966.377570][ C11] suspend_devices_and_enter+0x4f1/0x570
<6>[21966.377572][ C11] enter_state+0x1c0/0x2c0
<6>[21966.377574][ C11] pm_suspend+0x42/0x60
<6>[21966.377576][ C11] state_store+0x105/0x120
<6>[21966.377577][ C11] kobj_attr_store+0x13/0x20
<6>[21966.377578][ C11] sysfs_kf_write+0x33/0x50
<6>[21966.377580][ C11] kernfs_fop_write_iter.llvm.9000635758286636156+0x106/0x190
<6>[21966.377582][ C11] vfs_write+0x34b/0x420
<6>[21966.377584][ C11] ksys_write+0x68/0xd0
<6>[21966.377586][ C11] __x64_sys_write+0x1a/0x20
<6>[21966.377588][ C11] x64_sys_call+0x15a4/0x1ee0
<6>[21966.377589][ C11] do_syscall_64+0x6e/0xf0
<6>[21966.377590][ C11] ? kmem_cache_free+0x162/0x210
<6>[21966.377592][ C11] ? __fput+0x199/0x2b0
<6>[21966.377593][ C11] ? __fput_sync+0x14/0x20
<6>[21966.377594][ C11] ? __se_sys_close.llvm.4875362272298582369+0x70/0xc0
<6>[21966.377596][ C11] ? do_syscall_64+0x7d/0xf0
<6>[21966.377597][ C11] ? mm_account_fault+0x7e/0x110
<6>[21966.377598][ C11] ? handle_mm_fault+0xc7/0x1a0
<6>[21966.377599][ C11] ? do_user_addr_fault+0x410/0x590
<6>[21966.377601][ C11] ? irqentry_exit+0x16/0x40
<6>[21966.377601][ C11] ? exc_page_fault+0x72/0x90
<6>[21966.377602][ C11] entry_SYSCALL_64_after_hwframe+0x4b/0x53
<6>[21966.377603][ C11] RIP: 0033:0x7fce71914887
<6>[21966.377604][ C11] RSP: 002b:00007ffca3e6cc08 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
[-- Attachment #3: sysrq-c-output --]
[-- Type: text/plain, Size: 33548 bytes --]
Panic#1 Part12
<6>[63052.639653][ C11] ? __x64_sys_epoll_wait+0x9f/0xd0
<6>[63052.639655][ C11] syscall_exit_to_user_mode+0x59/0x120
<6>[63052.639656][ C11] do_syscall_64+0x7d/0xf0
<6>[63052.639658][ C11] ? ksys_write+0x68/0xd0
<6>[63052.639659][ C11] ? do_syscall_64+0x7d/0xf0
<6>[63052.639662][ C11] ? putname+0x4b/0x60
<6>[63052.639662][ C11] ? do_readlinkat+0x12a/0x140
<6>[63052.639664][ C11] ? do_syscall_64+0x7d/0xf0
<6>[63052.639666][ C11] ? __count_memcg_events+0x6d/0x100
<6>[63052.639667][ C11] ? mm_account_fault+0x7e/0x110
<6>[63052.639668][ C11] ? handle_mm_fault+0xc7/0x1a0
<6>[63052.639669][ C11] ? do_user_addr_fault+0x410/0x590
<6>[63052.639670][ C11] ? irqentry_exit+0x16/0x40
<6>[63052.639671][ C11] ? exc_page_fault+0x72/0x90
<6>[63052.639672][ C11] entry_SYSCALL_64_after_hwframe+0x4b/0x53
<6>[63052.639673][ C11] RIP: 0033:0x7fe20c525dea
<6>[63052.639674][ C11] RSP: 002b:00007fff4c132208 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
<6>[63052.639675][ C11] RAX: fffffffffffffffc RBX: 00005649141121e0 RCX: 00007fe20c525dea
<6>[63052.639675][ C11] RDX: 0000000000000006 RSI: 0000564913f8e070 RDI: 0000000000000003
<6>[63052.639676][ C11] RBP: 7fffffffffffffff R08: 0000564913f8e070 R09: 0000000000000004
<6>[63052.639676][ C11] R10: 00000000ffffffff R11: 0000000000000246 R12: 0000000000000014
<6>[63052.639677][ C11] R13: 0000000000000006 R14: 0000000000000002 R15: 0000564914112370
<6>[63052.639678][ C11] </TASK>
<6>[63052.639680][ C11] task:systemd-udevd state:R stack:0 pid:23384 tgid:23384 ppid:553 flags:0x00004006
<6>[63052.639681][ C11] Call Trace:
Panic#1 Part11
<6>[63052.639681][ C11] <TASK>
<6>[63052.639682][ C11] __schedule+0x445/0x5e0
<6>[63052.639683][ C11] schedule+0x5e/0xc0
<6>[63052.639684][ C11] __refrigerator+0xd7/0x160
<6>[63052.639686][ C11] get_signal+0x4e6/0x510
<6>[63052.639688][ C11] arch_do_signal_or_restart+0x2c/0x240
<6>[63052.639689][ C11] ? __x64_sys_epoll_wait+0x9f/0xd0
<6>[63052.639690][ C11] syscall_exit_to_user_mode+0x59/0x120
<6>[63052.639691][ C11] do_syscall_64+0x7d/0xf0
<6>[63052.639693][ C11] ? vfs_write+0x34b/0x420
<6>[63052.639695][ C11] ? ksys_write+0x68/0xd0
<6>[63052.639697][ C11] ? do_syscall_64+0x7d/0xf0
<6>[63052.639699][ C11] ? do_syscall_64+0x7d/0xf0
<6>[63052.639701][ C11] ? kmem_cache_free+0x2a/0x210
<6>[63052.639702][ C11] ? putname+0x4b/0x60
<6>[63052.639703][ C11] ? do_readlinkat+0x12a/0x140
<6>[63052.639704][ C11] ? do_syscall_64+0x7d/0xf0
<6>[63052.639706][ C11] ? do_syscall_64+0x7d/0xf0
<6>[63052.639708][ C11] ? __fput_sync+0x14/0x20
<6>[63052.639709][ C11] ? __se_sys_close.llvm.4875362272298582369+0x70/0xc0
<6>[63052.639711][ C11] ? do_syscall_64+0x7d/0xf0
<6>[63052.639712][ C11] ? do_syscall_64+0x7d/0xf0
<6>[63052.639714][ C11] ? do_syscall_64+0x7d/0xf0
<6>[63052.639715][ C11] ? do_syscall_64+0x7d/0xf0
<6>[63052.639717][ C11] entry_SYSCALL_64_after_hwframe+0x4b/0x53
<6>[63052.639718][ C11] RIP: 0033:0x7fe20c525dea
<6>[63052.639719][ C11] RSP: 002b:00007fff4c132208 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
<6>[63052.639720][ C11] RAX: fffffffffffffffc RBX: 00005649141121e0 RCX: 00007fe20c525dea
<6>[63052.639720][ C11] RDX: 0000000000000006 RSI: 0000564913f8baa0 RDI: 0000000000000003
Panic#1 Part10
<6>[63052.639721][ C11] RBP: 7fffffffffffffff R08: 0000564913f8baa0 R09: 0000000000000004
<6>[63052.639721][ C11] R10: 00000000ffffffff R11: 0000000000000246 R12: 0000000000000014
<6>[63052.639722][ C11] R13: 0000000000000006 R14: 0000000000000002 R15: 0000564914112370
<6>[63052.639723][ C11] </TASK>
<6>[63052.639724][ C11] task:kworker/0:1 state:I stack:0 pid:23392 tgid:23392 ppid:2 flags:0x00004000
<6>[63052.639726][ C11] Workqueue: 0x0 (mm_percpu_wq)
<6>[63052.639727][ C11] Call Trace:
<6>[63052.639727][ C11] <TASK>
<6>[63052.639728][ C11] __schedule+0x445/0x5e0
<6>[63052.639729][ C11] schedule+0x5e/0xc0
<6>[63052.639730][ C11] worker_thread+0x96/0x3c0
<6>[63052.639732][ C11] kthread+0xe9/0x100
<6>[63052.639734][ C11] ? pr_cont_work+0x1b0/0x1b0
<6>[63052.639735][ C11] ? kthread_blkcg+0x30/0x30
<6>[63052.639737][ C11] ret_from_fork+0x34/0x40
<6>[63052.639737][ C11] ? kthread_blkcg+0x30/0x30
<6>[63052.639738][ C11] ret_from_fork_asm+0x11/0x20
<6>[63052.639740][ C11] </TASK>
<6>[63052.639742][ C11] task:kworker/u80:4 state:I stack:0 pid:23393 tgid:23393 ppid:2 flags:0x00004000
<6>[63052.639743][ C11] Workqueue: 0x0 (events_power_efficient)
<6>[63052.639744][ C11] Call Trace:
<6>[63052.639745][ C11] <TASK>
<6>[63052.639746][ C11] __schedule+0x445/0x5e0
<6>[63052.639747][ C11] schedule+0x5e/0xc0
<6>[63052.639748][ C11] worker_thread+0x96/0x3c0
<6>[63052.639750][ C11] kthread+0xe9/0x100
<6>[63052.639751][ C11] ? pr_cont_work+0x1b0/0x1b0
<6>[63052.639753][ C11] ? kthread_blkcg+0x30/0x30
Panic#1 Part9
<6>[63052.639754][ C11] ret_from_fork+0x34/0x40
<6>[63052.639755][ C11] ? kthread_blkcg+0x30/0x30
<6>[63052.639756][ C11] ret_from_fork_asm+0x11/0x20
<6>[63052.639758][ C11] </TASK>
<6>[63052.639759][ C11] task:kworker/u80:6 state:I stack:0 pid:23395 tgid:23395 ppid:2 flags:0x00004000
<6>[63052.639760][ C11] Workqueue: 0x0 (events_power_efficient)
<6>[63052.639761][ C11] Call Trace:
<6>[63052.639762][ C11] <TASK>
<6>[63052.639763][ C11] __schedule+0x445/0x5e0
<6>[63052.639764][ C11] schedule+0x5e/0xc0
<6>[63052.639765][ C11] worker_thread+0x96/0x3c0
<6>[63052.639767][ C11] kthread+0xe9/0x100
<6>[63052.639768][ C11] ? pr_cont_work+0x1b0/0x1b0
<6>[63052.639770][ C11] ? kthread_blkcg+0x30/0x30
<6>[63052.639771][ C11] ret_from_fork+0x34/0x40
<6>[63052.639772][ C11] ? kthread_blkcg+0x30/0x30
<6>[63052.639773][ C11] ret_from_fork_asm+0x11/0x20
<6>[63052.639775][ C11] </TASK>
<6>[63052.639776][ C11] task:kworker/2:0 state:I stack:0 pid:23396 tgid:23396 ppid:2 flags:0x00004000
<6>[63052.639777][ C11] Workqueue: 0x0 (events)
<6>[63052.639778][ C11] Call Trace:
<6>[63052.639779][ C11] <TASK>
<6>[63052.639779][ C11] __schedule+0x445/0x5e0
<6>[63052.639780][ C11] schedule+0x5e/0xc0
<6>[63052.639782][ C11] worker_thread+0x96/0x3c0
<6>[63052.639783][ C11] kthread+0xe9/0x100
<6>[63052.639785][ C11] ? pr_cont_work+0x1b0/0x1b0
<6>[63052.639786][ C11] ? kthread_blkcg+0x30/0x30
<6>[63052.639788][ C11] ret_from_fork+0x34/0x40
<6>[63052.639788][ C11] ? kthread_blkcg+0x30/0x30
<6>[63052.639790][ C11] ret_from_fork_asm+0x11/0x20
<6>[63052.639791][ C11] </TASK>
Panic#1 Part8
<6>[63052.639792][ C11] task:kworker/5:2 state:I stack:0 pid:23398 tgid:23398 ppid:2 flags:0x00004000
<6>[63052.639794][ C11] Workqueue: 0x0 (mm_percpu_wq)
<6>[63052.639795][ C11] Call Trace:
<6>[63052.639795][ C11] <TASK>
<6>[63052.639796][ C11] __schedule+0x445/0x5e0
<6>[63052.639797][ C11] schedule+0x5e/0xc0
<6>[63052.639798][ C11] worker_thread+0x96/0x3c0
<6>[63052.639800][ C11] kthread+0xe9/0x100
<6>[63052.639801][ C11] ? pr_cont_work+0x1b0/0x1b0
<6>[63052.639803][ C11] ? kthread_blkcg+0x30/0x30
<6>[63052.639804][ C11] ret_from_fork+0x34/0x40
<6>[63052.639805][ C11] ? kthread_blkcg+0x30/0x30
<6>[63052.639806][ C11] ret_from_fork_asm+0x11/0x20
<6>[63052.639808][ C11] </TASK>
<6>[63052.639809][ C11] task:irq/217-mei_me state:S stack:0 pid:23400 tgid:23400 ppid:2 flags:0x00004000
<6>[63052.639810][ C11] Call Trace:
<6>[63052.639811][ C11] <TASK>
<6>[63052.639812][ C11] __schedule+0x445/0x5e0
<6>[63052.639813][ C11] ? irq_forced_thread_fn+0x70/0x70
<6>[63052.639814][ C11] schedule+0x5e/0xc0
<6>[63052.639816][ C11] irq_thread+0xa5/0x230
<6>[63052.639817][ C11] ? irq_thread_fn+0x50/0x50
<6>[63052.639818][ C11] kthread+0xe9/0x100
<6>[63052.639819][ C11] ? irq_forced_secondary_handler+0x20/0x20
<6>[63052.639820][ C11] ? kthread_blkcg+0x30/0x30
<6>[63052.639822][ C11] ret_from_fork+0x34/0x40
<6>[63052.639822][ C11] ? kthread_blkcg+0x30/0x30
<6>[63052.639823][ C11] ret_from_fork_asm+0x11/0x20
<6>[63052.639825][ C11] </TASK>
<6>[63052.639826][ C11] task:kworker/12:0 state:I stack:0 pid:23403 tgid:23403 ppid:2 flags:0x00004000
Panic#1 Part7
<6>[63052.639828][ C11] Call Trace:
<6>[63052.639829][ C11] <TASK>
<6>[63052.639829][ C11] __schedule+0x445/0x5e0
<6>[63052.639831][ C11] schedule+0x5e/0xc0
<6>[63052.639832][ C11] worker_thread+0x96/0x3c0
<6>[63052.639834][ C11] kthread+0xe9/0x100
<6>[63052.639835][ C11] ? pr_cont_work+0x1b0/0x1b0
<6>[63052.639836][ C11] ? kthread_blkcg+0x30/0x30
<6>[63052.639838][ C11] ret_from_fork+0x34/0x40
<6>[63052.639838][ C11] ? kthread_blkcg+0x30/0x30
<6>[63052.639840][ C11] ret_from_fork_asm+0x11/0x20
<6>[63052.639841][ C11] </TASK>
<6>[63052.639843][ C11] task:kworker/12:3 state:I stack:0 pid:23404 tgid:23404 ppid:2 flags:0x00004000
<6>[63052.639844][ C11] Call Trace:
<6>[63052.639845][ C11] <TASK>
<6>[63052.639845][ C11] __schedule+0x445/0x5e0
<6>[63052.639847][ C11] ? finish_task_switch+0xb6/0x270
<6>[63052.639848][ C11] schedule+0x5e/0xc0
<6>[63052.639850][ C11] worker_thread+0x96/0x3c0
<6>[63052.639851][ C11] kthread+0xe9/0x100
<6>[63052.639853][ C11] ? pr_cont_work+0x1b0/0x1b0
<6>[63052.639854][ C11] ? kthread_blkcg+0x30/0x30
<6>[63052.639855][ C11] ret_from_fork+0x34/0x40
<6>[63052.639856][ C11] ? kthread_blkcg+0x30/0x30
<6>[63052.639857][ C11] ret_from_fork_asm+0x11/0x20
<6>[63052.639859][ C11] </TASK>
<6>[63052.639860][ C11] task:kworker/11:1 state:I stack:0 pid:23405 tgid:23405 ppid:2 flags:0x00004000
<6>[63052.639861][ C11] Call Trace:
<6>[63052.639862][ C11] <TASK>
<6>[63052.639862][ C11] __schedule+0x445/0x5e0
<6>[63052.639863][ C11] ? finish_task_switch+0xb6/0x270
<6>[63052.639865][ C11] schedule+0x5e/0xc0
Panic#1 Part6
<6>[63052.639866][ C11] worker_thread+0x96/0x3c0
<6>[63052.639868][ C11] kthread+0xe9/0x100
<6>[63052.639869][ C11] ? pr_cont_work+0x1b0/0x1b0
<6>[63052.639871][ C11] ? kthread_blkcg+0x30/0x30
<6>[63052.639872][ C11] ret_from_fork+0x34/0x40
<6>[63052.639873][ C11] ? kthread_blkcg+0x30/0x30
<6>[63052.639874][ C11] ret_from_fork_asm+0x11/0x20
<6>[63052.639876][ C11] </TASK>
<4>[63052.639876][ C11] Mem-Info:
<4>[63052.639878][ C11] active_anon:26850 inactive_anon:2144290 isolated_anon:0
<4>[63052.639878][ C11] active_file:553230 inactive_file:509901 isolated_file:0
<4>[63052.639878][ C11] unevictable:8 dirty:0 writeback:0
<4>[63052.639878][ C11] slab_reclaimable:168685 slab_unreclaimable:51302
<4>[63052.639878][ C11] mapped:439945 shmem:273355 pagetables:21495
<4>[63052.639878][ C11] sec_pagetables:1341 bounce:0
<4>[63052.639878][ C11] kernel_misc_reclaimable:0
<4>[63052.639878][ C11] free:4551553 free_pcp:808 free_cma:0
<4>[63052.639882][ C11] Node 0 active_anon:107400kB inactive_anon:8577160kB active_file:2212920kB inactive_file:2039604kB unevictable:32kB isolated(anon):0kB isolated(file):0kB mapped:1759780kB dirty:0kB writeback:0kB shmem:1093420kB shmem_thp:0kB shmem_pmdmapped:0kB anon_thp:0kB writeback_tmp:0kB kernel_stack:30288kB pagetables:85980kB sec_pagetables:5364kB all_unreclaimable? no
<4>[63052.639885][ C11] Node 0 DMA free:15360kB boost:0kB min:32kB low:44kB high:56kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15360kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
Panic#1 Part5
<4>[63052.639889][ C11] lowmem_reserve[]: 0 1220 31565 0
<4>[63052.639891][ C11] Node 0 DMA32 free:1355076kB boost:0kB min:2608kB low:3856kB high:5104kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:1422044kB managed:1356208kB mlocked:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
<4>[63052.639894][ C11] lowmem_reserve[]: 0 0 30345 0
<4>[63052.639895][ C11] Node 0 Normal free:16835776kB boost:0kB min:64940kB low:96012kB high:127084kB reserved_highatomic:0KB active_anon:107400kB inactive_anon:8577160kB active_file:2212920kB inactive_file:2039604kB unevictable:32kB writepending:0kB present:31711232kB managed:31077308kB mlocked:32kB bounce:0kB free_pcp:3232kB local_pcp:3232kB free_cma:0kB
<4>[63052.639899][ C11] lowmem_reserve[]: 0 0 0 0
<4>[63052.639900][ C11] Node 0 DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15360kB
<4>[63052.639906][ C11] Node 0 DMA32: 5*4kB (UM) 6*8kB (UM) 6*16kB (M) 5*32kB (M) 6*64kB (M) 3*128kB (M) 5*256kB (M) 6*512kB (UM) 4*1024kB (UM) 3*2048kB (UM) 327*4096kB (M) = 1355076kB
<4>[63052.639914][ C11] Node 0 Normal: 4530*4kB (UME) 671*8kB (UME) 288*16kB (UME) 248*32kB (UME) 218*64kB (UME) 81*128kB (UME) 33*256kB (ME) 28*512kB (UME) 22*1024kB (UME) 21*2048kB (UME) 4074*4096kB (UME) = 16835776kB
<4>[63052.639922][ C11] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
<4>[63052.639923][ C11] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
<4>[63052.639923][ C11] 1336488 total pagecache pages
Panic#1 Part4
<4>[63052.639924][ C11] 0 pages in swap cache
<4>[63052.639924][ C11] Free swap = 33554428kB
<4>[63052.639925][ C11] Total swap = 33554428kB
<4>[63052.639926][ C11] 8287317 pages RAM
<4>[63052.639926][ C11] 0 pages HighMem/MovableOnly
<4>[63052.639927][ C11] 175098 pages reserved
<4>[63052.639927][ C11] 0 pages hwpoisoned
<4>[63052.639929][ C11] Timer List Version: v0.10
<4>[63052.639929][ C11] HRTIMER_MAX_CLOCK_BASES: 8
<4>[63052.639930][ C11] now at 61977522945278 nsecs
<4>[63052.639930][ C11]
<4>[63052.639931][ C11] cpu: 11
<4>[63052.639931][ C11] clock 0:
<4>[63052.639932][ C11] .base: pK-error
<4>[63052.639933][ C11] .index: 0
<4>[63052.639934][ C11] .resolution: 1 nsecs
<4>[63052.639934][ C11] .get_time: ktime_get
<4>[63052.639935][ C11] .offset: 0 nsecs
<4>[63052.639936][ C11] active timers:
<4>[63052.639936][ C11] #0: < pK-error>, watchdog_timer_fn
<4>[63052.639938][ C11] , S:01
<4>[63052.639938][ C11]
<4>[63052.639939][ C11] # expires at 61977792528935-61977792528935 nsecs [in 269583657 to 269583657 nsecs]
<4>[63052.639940][ C11] #1: < pK-error>, tick_nohz_handler
<4>[63052.639942][ C11] , S:01
<4>[63052.639942][ C11]
<4>[63052.639943][ C11] # expires at 61991510000000-61991510000000 nsecs [in 13987054722 to 13987054722 nsecs]
<4>[63052.639943][ C11] clock 1:
<4>[63052.639944][ C11] .base: pK-error
<4>[63052.639944][ C11] .index: 1
<4>[63052.639945][ C11] .resolution: 1 nsecs
<4>[63052.639945][ C11] .get_time: ktime_get_real
<4>[63052.639946][ C11] .offset: 1725320092651132105 nsecs
Panic#1 Part3
<4>[63052.639946][ C11] active timers:
<4>[63052.639947][ C11] #0: < pK-error>, hrtimer_wakeup
<4>[63052.639948][ C11] , S:01
<4>[63052.639948][ C11]
<4>[63052.639949][ C11] # expires at 1725407114938051965-1725407114938101965 nsecs [in 25044763974582 to 25044764024582 nsecs]
<4>[63052.639949][ C11] clock 2:
<4>[63052.639950][ C11] .base: pK-error
<4>[63052.639950][ C11] .index: 2
<4>[63052.639950][ C11] .resolution: 1 nsecs
<4>[63052.639951][ C11] .get_time: ktime_get_boottime
<4>[63052.639952][ C11] .offset: 1074537202439 nsecs
<4>[63052.639952][ C11] active timers:
<4>[63052.639953][ C11] clock 3:
<4>[63052.639953][ C11] .base: pK-error
<4>[63052.639953][ C11] .index: 3
<4>[63052.639954][ C11] .resolution: 1 nsecs
<4>[63052.639954][ C11] .get_time: ktime_get_clocktai
<4>[63052.639955][ C11] .offset: 1725320092651132105 nsecs
<4>[63052.639955][ C11] active timers:
<4>[63052.639956][ C11] clock 4:
<4>[63052.639956][ C11] .base: pK-error
<4>[63052.639956][ C11] .index: 4
<4>[63052.639957][ C11] .resolution: 1 nsecs
<4>[63052.639957][ C11] .get_time: ktime_get
<4>[63052.639958][ C11] .offset: 0 nsecs
<4>[63052.639958][ C11] active timers:
<4>[63052.639958][ C11] clock 5:
<4>[63052.639959][ C11] .base: pK-error
<4>[63052.639959][ C11] .index: 5
<4>[63052.639959][ C11] .resolution: 1 nsecs
<4>[63052.639960][ C11] .get_time: ktime_get_real
<4>[63052.639961][ C11] .offset: 1725320092651132105 nsecs
<4>[63052.639961][ C11] active timers:
Panic#1 Part2
<4>[63052.639961][ C11] clock 6:
<4>[63052.639962][ C11] .base: pK-error
<4>[63052.639962][ C11] .index: 6
<4>[63052.639962][ C11] .resolution: 1 nsecs
<4>[63052.639963][ C11] .get_time: ktime_get_boottime
<4>[63052.639964][ C11] .offset: 1074537202439 nsecs
<4>[63052.639964][ C11] active timers:
<4>[63052.639964][ C11] clock 7:
<4>[63052.639964][ C11] .base: pK-error
<4>[63052.639965][ C11] .index: 7
<4>[63052.639965][ C11] .resolution: 1 nsecs
<4>[63052.639965][ C11] .get_time: ktime_get_clocktai
<4>[63052.639966][ C11] .offset: 1725320092651132105 nsecs
<4>[63052.639967][ C11] active timers:
<4>[63052.639967][ C11] .expires_next : 61977792528935 nsecs
<4>[63052.639968][ C11] .hres_active : 1
<4>[63052.639968][ C11] .nr_events : 3128413
<4>[63052.639969][ C11] .nr_retries : 1037
<4>[63052.639970][ C11] .nr_hangs : 0
<4>[63052.639970][ C11] .max_hang_time : 0
<4>[63052.639971][ C11] .nohz : 1
<4>[63052.639971][ C11] .highres : 1
<4>[63052.639972][ C11] .last_tick : 61977454000000 nsecs
<4>[63052.639972][ C11] .tick_stopped : 1
<4>[63052.639973][ C11] .idle_jiffies : 4356644631
<4>[63052.639973][ C11] .idle_calls : 6524371
<4>[63052.639974][ C11] .idle_sleeps : 6524371
<4>[63052.639974][ C11] .idle_entrytime : 61977458470645 nsecs
<4>[63052.639975][ C11] .idle_waketime : 61977458470645 nsecs
<4>[63052.639976][ C11] .idle_exittime : 61977453585272 nsecs
<4>[63052.639976][ C11] .idle_sleeptime : 61005325232841 nsecs
Panic#1 Part1
<4>[63052.639976][ C11] .iowait_sleeptime: 6381331231 nsecs
<4>[63052.639977][ C11] .last_jiffies : 4356644631
<4>[63052.639978][ C11] .next_timer : 61991510000000
<4>[63052.639978][ C11] .idle_expires : 61991510000000 nsecs
<4>[63052.639979][ C11] jiffies: 4356644637
<4>[63052.639979][ C11]
<4>[63052.639979][ C11] Tick Device: mode: 1
<4>[63052.639980][ C11] Broadcast device
<4>[63052.639980][ C11] Clock Event Device:
<4>[63052.639980][ C11] <NULL>
<4>[63052.639981][ C11] tick_broadcast_mask: 00000
<4>[63052.639982][ C11] tick_broadcast_oneshot_mask: 00000
<4>[63052.639982][ C11]
<4>[63052.639983][ C11] Tick Device: mode: 1
<4>[63052.639983][ C11] Per CPU device: 11
<4>[63052.639984][ C11] Clock Event Device:
<4>[63052.639984][ C11] lapic-deadline
<4>[63052.639984][ C11] max_delta_ns: 1101273695516
<4>[63052.639985][ C11] min_delta_ns: 1000
<4>[63052.639985][ C11] mult: 16750372
<4>[63052.639985][ C11] shift: 26
<4>[63052.639986][ C11] mode: 3
<4>[63052.639986][ C11] next_event: 61977792528935 nsecs
<4>[63052.639987][ C11] set_next_event: lapic_next_deadline
<4>[63052.639988][ C11] shutdown: lapic_timer_shutdown
<4>[63052.639988][ C11] periodic: lapic_timer_set_periodic
<4>[63052.639989][ C11] oneshot: lapic_timer_set_oneshot
<4>[63052.639990][ C11] oneshot stopped: lapic_timer_shutdown
<4>[63052.639991][ C11] event_handler: hrtimer_interrupt
<4>[63052.639992][ C11]
<4>[63052.639992][ C11] retries: 12272
<4>[63052.639993][ C11] Wakeup Device: <NULL>
<4>[63052.639993][ C11]
Panic#1 Part20
<6>[63052.639406][ C11] schedule+0x5e/0xc0
<6>[63052.639407][ C11] __futex_wait+0x118/0x1a0
<6>[63052.639409][ C11] ? __futex_wake_mark+0x50/0x50
<6>[63052.639410][ C11] futex_wait+0x7c/0x120
<6>[63052.639411][ C11] ? switch_hrtimer_base+0x110/0x110
<6>[63052.639413][ C11] do_futex+0x16c/0x1f0
<6>[63052.639414][ C11] __se_sys_futex+0x11e/0x1a0
<6>[63052.639415][ C11] __x64_sys_futex+0x28/0x30
<6>[63052.639416][ C11] x64_sys_call+0x17d9/0x1ee0
<6>[63052.639417][ C11] do_syscall_64+0x6e/0xf0
<6>[63052.639419][ C11] ? do_user_addr_fault+0x410/0x590
<6>[63052.639420][ C11] ? irqentry_exit+0x16/0x40
<6>[63052.639421][ C11] ? exc_page_fault+0x72/0x90
<6>[63052.639422][ C11] entry_SYSCALL_64_after_hwframe+0x4b/0x53
<6>[63052.639423][ C11] RIP: 0033:0x7efea6891117
<6>[63052.639423][ C11] RSP: 002b:00007efe9dffdcc0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
<6>[63052.639424][ C11] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007efea6891117
<6>[63052.639425][ C11] RDX: 0000000000000000 RSI: 0000000000000089 RDI: 00007efe9dffde98
<6>[63052.639425][ C11] RBP: 00007efe9dffde70 R08: 0000000000000000 R09: 00000000ffffffff
<6>[63052.639426][ C11] R10: 00007efe9dffdeb0 R11: 0000000000000246 R12: 0000000000000000
<6>[63052.639426][ C11] R13: 0000000000000000 R14: 00007efe9dffde98 R15: 0000000000000000
<6>[63052.639427][ C11] </TASK>
<6>[63052.639428][ C11] task:ThreadPoolForeg state:R stack:0 pid:23298 tgid:23294 ppid:3378 flags:0x00000002
<6>[63052.639430][ C11] Call Trace:
<6>[63052.639430][ C11] <TASK>
<6>[63052.639431][ C11] __schedule+0x445/0x5e0
Panic#1 Part19
<6>[63052.639432][ C11] schedule+0x5e/0xc0
<6>[63052.639433][ C11] __futex_wait+0x118/0x1a0
<6>[63052.639434][ C11] ? __futex_wake_mark+0x50/0x50
<6>[63052.639436][ C11] futex_wait+0x7c/0x120
<6>[63052.639437][ C11] ? switch_hrtimer_base+0x110/0x110
<6>[63052.639439][ C11] do_futex+0x16c/0x1f0
<6>[63052.639440][ C11] __se_sys_futex+0x11e/0x1a0
<6>[63052.639441][ C11] __x64_sys_futex+0x28/0x30
<6>[63052.639442][ C11] x64_sys_call+0x17d9/0x1ee0
<6>[63052.639443][ C11] do_syscall_64+0x6e/0xf0
<6>[63052.639445][ C11] ? do_user_addr_fault+0x410/0x590
<6>[63052.639446][ C11] ? irqentry_exit+0x16/0x40
<6>[63052.639447][ C11] ? exc_page_fault+0x72/0x90
<6>[63052.639448][ C11] entry_SYSCALL_64_after_hwframe+0x4b/0x53
<6>[63052.639449][ C11] RIP: 0033:0x7efea6891117
<6>[63052.639449][ C11] RSP: 002b:00007efe9d5fdcc0 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
<6>[63052.639450][ C11] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007efea6891117
<6>[63052.639451][ C11] RDX: 0000000000000000 RSI: 0000000000000089 RDI: 00007efe9d5fde98
<6>[63052.639451][ C11] RBP: 00007efe9d5fde70 R08: 0000000000000000 R09: 00000000ffffffff
<6>[63052.639452][ C11] R10: 00007efe9d5fdeb0 R11: 0000000000000246 R12: 0000000000000000
<6>[63052.639452][ C11] R13: 0000000000000000 R14: 00007efe9d5fde98 R15: 0000000000000000
<6>[63052.639453][ C11] </TASK>
<6>[63052.639454][ C11] task:Chrome_ChildIOT state:R stack:0 pid:23299 tgid:23294 ppid:3378 flags:0x00004006
<6>[63052.639455][ C11] Call Trace:
<6>[63052.639456][ C11] <TASK>
<6>[63052.639457][ C11] __schedule+0x445/0x5e0
<6>[63052.639458][ C11] schedule+0x5e/0xc0
Panic#1 Part18
<6>[63052.639459][ C11] __refrigerator+0xd7/0x160
<6>[63052.639461][ C11] get_signal+0x4e6/0x510
<6>[63052.639462][ C11] arch_do_signal_or_restart+0x2c/0x240
<6>[63052.639464][ C11] ? __x64_sys_epoll_wait+0x9f/0xd0
<6>[63052.639465][ C11] syscall_exit_to_user_mode+0x59/0x120
<6>[63052.639466][ C11] do_syscall_64+0x7d/0xf0
<6>[63052.639468][ C11] ? do_syscall_64+0x7d/0xf0
<6>[63052.639470][ C11] entry_SYSCALL_64_after_hwframe+0x4b/0x53
<6>[63052.639471][ C11] RIP: 0033:0x7efea6925e2e
<6>[63052.639471][ C11] RSP: 002b:00007efe9cbfd980 EFLAGS: 00000293 ORIG_RAX: 00000000000000e8
<6>[63052.639472][ C11] RAX: fffffffffffffffc RBX: 7fffffffffffffff RCX: 00007efea6925e2e
<6>[63052.639473][ C11] RDX: 0000000000000010 RSI: 00007efe9cbfdca0 RDI: 0000000000000017
<6>[63052.639473][ C11] RBP: 00007efe9cbfdee0 R08: 0000000000000000 R09: 0000000000000000
<6>[63052.639474][ C11] R10: 00000000ffffffff R11: 0000000000000293 R12: 000029ac0002e920
<6>[63052.639474][ C11] R13: 000029ac0002e990 R14: 000029ac00011b80 R15: fffffffc00000000
<6>[63052.639475][ C11] </TASK>
<6>[63052.639477][ C11] task:systemd-sleep state:D stack:0 pid:23317 tgid:23317 ppid:1 flags:0x00004002
<6>[63052.639478][ C11] Call Trace:
<6>[63052.639479][ C11] <TASK>
<6>[63052.639480][ C11] __schedule+0x445/0x5e0
<6>[63052.639481][ C11] schedule+0x5e/0xc0
<6>[63052.639482][ C11] schedule_preempt_disabled+0x14/0x20
<6>[63052.639484][ C11] __mutex_lock+0x249/0x3d0
<6>[63052.639485][ C11] __mutex_lock_slowpath+0xe/0x10
<6>[63052.639487][ C11] mutex_lock+0x1f/0x30
<6>[63052.639488][ C11] device_resume+0x7f/0x340
Panic#1 Part17
<6>[63052.639490][ C11] dpm_resume+0x134/0x1a0
<6>[63052.639492][ C11] suspend_devices_and_enter+0x4f1/0x570
<6>[63052.639494][ C11] enter_state+0x1c0/0x2c0
<6>[63052.639496][ C11] pm_suspend+0x42/0x60
<6>[63052.639497][ C11] state_store+0x105/0x120
<6>[63052.639499][ C11] kobj_attr_store+0x13/0x20
<6>[63052.639500][ C11] sysfs_kf_write+0x33/0x50
<6>[63052.639502][ C11] kernfs_fop_write_iter.llvm.9000635758286636156+0x106/0x190
<6>[63052.639504][ C11] vfs_write+0x34b/0x420
<6>[63052.639506][ C11] ksys_write+0x68/0xd0
<6>[63052.639508][ C11] __x64_sys_write+0x1a/0x20
<6>[63052.639510][ C11] x64_sys_call+0x15a4/0x1ee0
<6>[63052.639511][ C11] do_syscall_64+0x6e/0xf0
<6>[63052.639512][ C11] ? ep_poll_callback+0x1ad/0x240
<6>[63052.639514][ C11] ? __wake_up_sync_key+0x7c/0x90
<6>[63052.639516][ C11] ? scm_destroy+0x10/0x30
<6>[63052.639517][ C11] ? unix_dgram_sendmsg+0x756/0x850
<6>[63052.639518][ C11] ? ____sys_sendmsg.llvm.5628874028608458902+0x171/0x250
<6>[63052.639520][ C11] ? __sys_sendmsg+0xdf/0x120
<6>[63052.639522][ C11] ? __handle_mm_fault+0x3e6/0x650
<6>[63052.639523][ C11] ? mm_account_fault+0x7e/0x110
<6>[63052.639524][ C11] ? handle_mm_fault+0xc7/0x1a0
<6>[63052.639525][ C11] ? do_user_addr_fault+0x410/0x590
<6>[63052.639527][ C11] ? do_syscall_64+0x7d/0xf0
<6>[63052.639528][ C11] entry_SYSCALL_64_after_hwframe+0x4b/0x53
<6>[63052.639529][ C11] RIP: 0033:0x7fae4e314887
<6>[63052.639530][ C11] RSP: 002b:00007ffcb35b7958 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
<6>[63052.639531][ C11] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007fae4e314887
Panic#1 Part16
<6>[63052.639531][ C11] RDX: 0000000000000004 RSI: 00007ffcb35b7a10 RDI: 0000000000000005
<6>[63052.639532][ C11] RBP: 00007ffcb35b7a10 R08: 0000000000000004 R09: 000000007fffffff
<6>[63052.639532][ C11] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000004
<6>[63052.639533][ C11] R13: 0000556b3fe3a2d0 R14: 00007fae4e416a00 R15: 0000000000000004
<6>[63052.639534][ C11] </TASK>
<6>[63052.639535][ C11] task:kworker/14:2 state:I stack:0 pid:23353 tgid:23353 ppid:2 flags:0x00004000
<6>[63052.639537][ C11] Workqueue: 0x0 (events)
<6>[63052.639538][ C11] Call Trace:
<6>[63052.639539][ C11] <TASK>
<6>[63052.639539][ C11] __schedule+0x445/0x5e0
<6>[63052.639541][ C11] schedule+0x5e/0xc0
<6>[63052.639542][ C11] worker_thread+0x96/0x3c0
<6>[63052.639544][ C11] kthread+0xe9/0x100
<6>[63052.639545][ C11] ? pr_cont_work+0x1b0/0x1b0
<6>[63052.639547][ C11] ? kthread_blkcg+0x30/0x30
<6>[63052.639548][ C11] ret_from_fork+0x34/0x40
<6>[63052.639549][ C11] ? kthread_blkcg+0x30/0x30
<6>[63052.639550][ C11] ret_from_fork_asm+0x11/0x20
<6>[63052.639552][ C11] </TASK>
<6>[63052.639553][ C11] task:systemd-udevd state:R stack:0 pid:23378 tgid:23378 ppid:553 flags:0x00004006
<6>[63052.639555][ C11] Call Trace:
<6>[63052.639555][ C11] <TASK>
<6>[63052.639556][ C11] __schedule+0x445/0x5e0
<6>[63052.639557][ C11] schedule+0x5e/0xc0
<6>[63052.639559][ C11] __refrigerator+0xd7/0x160
<6>[63052.639560][ C11] get_signal+0x4e6/0x510
<6>[63052.639562][ C11] ? handle_pte_fault+0x16b/0x190
<6>[63052.639563][ C11] arch_do_signal_or_restart+0x2c/0x240
Panic#1 Part15
<6>[63052.639565][ C11] ? __x64_sys_epoll_wait+0x9f/0xd0
<6>[63052.639566][ C11] syscall_exit_to_user_mode+0x59/0x120
<6>[63052.639567][ C11] do_syscall_64+0x7d/0xf0
<6>[63052.639569][ C11] ? do_syscall_64+0x7d/0xf0
<6>[63052.639570][ C11] ? do_user_addr_fault+0x410/0x590
<6>[63052.639572][ C11] ? irqentry_exit+0x16/0x40
<6>[63052.639573][ C11] ? exc_page_fault+0x72/0x90
<6>[63052.639573][ C11] entry_SYSCALL_64_after_hwframe+0x4b/0x53
<6>[63052.639575][ C11] RIP: 0033:0x7fe20c525dea
<6>[63052.639575][ C11] RSP: 002b:00007fff4c132208 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
<6>[63052.639576][ C11] RAX: fffffffffffffffc RBX: 00005649141121e0 RCX: 00007fe20c525dea
<6>[63052.639577][ C11] RDX: 0000000000000006 RSI: 0000564914105fb0 RDI: 0000000000000003
<6>[63052.639577][ C11] RBP: 7fffffffffffffff R08: 0000564914105fb0 R09: 0000000000000004
<6>[63052.639578][ C11] R10: 00000000ffffffff R11: 0000000000000246 R12: 0000000000000014
<6>[63052.639578][ C11] R13: 0000000000000006 R14: 0000000000000002 R15: 0000564914112370
<6>[63052.639579][ C11] </TASK>
<6>[63052.639581][ C11] task:systemd-udevd state:R stack:0 pid:23380 tgid:23380 ppid:553 flags:0x00004006
<6>[63052.639583][ C11] Call Trace:
<6>[63052.639583][ C11] <TASK>
<6>[63052.639584][ C11] __schedule+0x445/0x5e0
<6>[63052.639585][ C11] ? __kmalloc_node_track_caller_noprof+0x1d3/0x2b0
<6>[63052.639587][ C11] schedule+0x5e/0xc0
<6>[63052.639588][ C11] __refrigerator+0xd7/0x160
<6>[63052.639590][ C11] get_signal+0x4e6/0x510
<6>[63052.639591][ C11] arch_do_signal_or_restart+0x2c/0x240
Panic#1 Part14
<6>[63052.639593][ C11] ? __x64_sys_epoll_wait+0x9f/0xd0
<6>[63052.639594][ C11] syscall_exit_to_user_mode+0x59/0x120
<6>[63052.639595][ C11] do_syscall_64+0x7d/0xf0
<6>[63052.639597][ C11] ? sock_write_iter+0xf1/0x150
<6>[63052.639599][ C11] ? vfs_write+0x34b/0x420
<6>[63052.639601][ C11] ? ksys_write+0x68/0xd0
<6>[63052.639603][ C11] ? do_syscall_64+0x7d/0xf0
<6>[63052.639605][ C11] ? putname+0x4b/0x60
<6>[63052.639606][ C11] ? do_readlinkat+0x12a/0x140
<6>[63052.639607][ C11] ? do_syscall_64+0x7d/0xf0
<6>[63052.639609][ C11] ? sysvec_apic_timer_interrupt+0x48/0x80
<6>[63052.639610][ C11] entry_SYSCALL_64_after_hwframe+0x4b/0x53
<6>[63052.639611][ C11] RIP: 0033:0x7fe20c525dea
<6>[63052.639612][ C11] RSP: 002b:00007fff4c132208 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
<6>[63052.639612][ C11] RAX: fffffffffffffffc RBX: 00005649141121e0 RCX: 00007fe20c525dea
<6>[63052.639613][ C11] RDX: 0000000000000006 RSI: 0000564914106d30 RDI: 0000000000000003
<6>[63052.639614][ C11] RBP: 7fffffffffffffff R08: 0000564914106d30 R09: 0000000000000004
<6>[63052.639614][ C11] R10: 00000000ffffffff R11: 0000000000000246 R12: 0000000000000014
<6>[63052.639615][ C11] R13: 0000000000000006 R14: 0000000000000002 R15: 0000564914112370
<6>[63052.639616][ C11] </TASK>
<6>[63052.639617][ C11] task:systemd-udevd state:R stack:0 pid:23382 tgid:23382 ppid:553 flags:0x00004006
<6>[63052.639618][ C11] Call Trace:
<6>[63052.639619][ C11] <TASK>
<6>[63052.639619][ C11] __schedule+0x445/0x5e0
<6>[63052.639621][ C11] schedule+0x5e/0xc0
<6>[63052.639622][ C11] __refrigerator+0xd7/0x160
Panic#1 Part13
<6>[63052.639624][ C11] get_signal+0x4e6/0x510
<6>[63052.639625][ C11] arch_do_signal_or_restart+0x2c/0x240
<6>[63052.639627][ C11] ? __x64_sys_epoll_wait+0x9f/0xd0
<6>[63052.639628][ C11] syscall_exit_to_user_mode+0x59/0x120
<6>[63052.639629][ C11] do_syscall_64+0x7d/0xf0
<6>[63052.639631][ C11] ? do_syscall_64+0x7d/0xf0
<6>[63052.639633][ C11] ? irqentry_exit_to_user_mode+0x111/0x120
<6>[63052.639634][ C11] ? irqentry_exit+0x16/0x40
<6>[63052.639635][ C11] ? sysvec_reschedule_ipi+0x61/0x70
<6>[63052.639636][ C11] entry_SYSCALL_64_after_hwframe+0x4b/0x53
<6>[63052.639637][ C11] RIP: 0033:0x7fe20c525dea
<6>[63052.639637][ C11] RSP: 002b:00007fff4c132208 EFLAGS: 00000246 ORIG_RAX: 00000000000000e8
<6>[63052.639638][ C11] RAX: fffffffffffffffc RBX: 00005649141121e0 RCX: 00007fe20c525dea
<6>[63052.639639][ C11] RDX: 0000000000000006 RSI: 0000564913f8e070 RDI: 0000000000000003
<6>[63052.639639][ C11] RBP: 7fffffffffffffff R08: 0000564913f8e070 R09: 0000000000000004
<6>[63052.639640][ C11] R10: 00000000ffffffff R11: 0000000000000246 R12: 0000000000000014
<6>[63052.639641][ C11] R13: 0000000000000006 R14: 0000000000000002 R15: 0000564914112370
<6>[63052.639642][ C11] </TASK>
<6>[63052.639643][ C11] task:systemd-udevd state:R stack:0 pid:23383 tgid:23383 ppid:553 flags:0x00004006
<6>[63052.639644][ C11] Call Trace:
<6>[63052.639646][ C11] <TASK>
<6>[63052.639646][ C11] __schedule+0x445/0x5e0
<6>[63052.639647][ C11] schedule+0x5e/0xc0
<6>[63052.639649][ C11] __refrigerator+0xd7/0x160
<6>[63052.639650][ C11] get_signal+0x4e6/0x510
<6>[63052.639652][ C11] arch_do_signal_or_restart+0x2c/0x240
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: My Dell XPS-9320 (kernel 6.10.x, et al.) doesn't detect Thunderbolt additions when coming out of suspend or hibernate
2024-09-09 7:51 ` Kenneth Crudup
@ 2024-09-09 9:01 ` Mika Westerberg
2024-09-12 21:12 ` Kenneth Crudup
1 sibling, 0 replies; 13+ messages in thread
From: Mika Westerberg @ 2024-09-09 9:01 UTC (permalink / raw)
To: Kenneth Crudup; +Cc: Lukas Wunner, linux-usb, linux-pm@vger.kernel.org
Hi,
On Mon, Sep 09, 2024 at 12:51:18AM -0700, Kenneth Crudup wrote:
>
> I can't get to the dmesg when it crashes, but I did a SysRq-S/C and have
> attached the crash output; let me know if this is at all helpful.
>
> I see I'd SysRq-S/C on a previous hang, I've attached that one, too.
Unfortunately I did not see anything too useful in that. The suspend
thread is visible, but it does not show what exactly is hanging there.
> This particular time it suspended OK, but hung indefinitely when I plugged
> it into another TB3 dock (the previous one was TB4, if it matters).
Can you describe the flow in a bit more detail? And let's stick with
one dock for now (if both have the same issue anyway).
Do you do something like this?
1. Boot the system up, TB4 dock connected.
2. Verify everything is working. BTW, do you have monitor(s) connected
to the dock?
3. Enter system suspend.
4. Verify it is suspended (the suspend LED, if there is one, is
blinking and the fans are off).
5. Wake it up from keyboard.
Expectation: the system wakes up fine and all the devices work exactly
the same as before the suspend. All connected monitors display a picture.
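(If you want to cycle this repeatedly, something like the following can
automate steps 3-5, assuming your firmware wakes reliably from the RTC
alarm, which is not a given on every laptop:

$ sudo rtcwake -m mem -s 30   # suspend to RAM, wake after 30 seconds

Not a substitute for the real replug test, but handy for stress-testing
the resume path.)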
If this is the flow, can you do steps 1 through 3 with
"thunderbolt.dyndbg=+p" on the kernel command line and provide a full
dmesg of that, so that if nothing else I can try to reproduce it on our
end?
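For reference, one way to set that up persistently on a GRUB-based
distro; this is a sketch, and the file path and update command are
assumptions that differ per distro (e.g. grub2-mkconfig on Fedora):

$ sudo sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="/&thunderbolt.dyndbg=+p /' /etc/default/grub
$ sudo update-grub
# after reboot, reproduce the issue, then save the log:
$ sudo dmesg > dmesg-tb-suspend.txt

Enabling it at runtime ("echo 'module thunderbolt +p' | sudo tee
/sys/kernel/debug/dynamic_debug/control") works too, but only affects
messages logged from that point on, so having it on the command line
from boot is better here.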
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: My Dell XPS-9320 (kernel 6.10.x, et al.) doesn't detect Thunderbolt additions when coming out of suspend or hibernate
2024-09-09 7:51 ` Kenneth Crudup
2024-09-09 9:01 ` Mika Westerberg
@ 2024-09-12 21:12 ` Kenneth Crudup
2024-09-13 5:25 ` Mika Westerberg
1 sibling, 1 reply; 13+ messages in thread
From: Kenneth Crudup @ 2024-09-12 21:12 UTC (permalink / raw)
To: Mika Westerberg; +Cc: Lukas Wunner, linux-usb, linux-pm@vger.kernel.org, Me
I'll run the stuff you need, but now it looks like whatever is breaking
suspend/resume in Linus' master has been ported down from upstream into
6.10.10; I'm now getting the same panic()s as I did with master. I just
had a failed resume and the crash dump (which happened on its own) looks
the same as the one I'd posted here.
I may try and find some time to bisect the issue, but it'll take some time.
-K
On 9/9/24 00:51, Kenneth Crudup wrote:
>
> I can't get to the dmesg when it crashes, but I did a SysRq-S/C and have
> attached the crash output; let me know if this is at all helpful.
>
> I see I'd SysRq-S/C on a previous hang, I've attached that one, too.
>
> This particular time it suspended OK, but hung indefinitely when I
> plugged it into another TB3 dock (the previous one was TB4, if it matters).
>
>
> On 9/4/24 05:28, Mika Westerberg wrote:
>> Hi,
>>
>> On Tue, Sep 03, 2024 at 11:10:41PM -0700, Kenneth Crudup wrote:
>>>
>>> ... or, maybe not. Turns out that sometimes my system can't suspend
>>> (just
>>> hangs, spinning hard somewhere based on the heat and the fans) when
>>> plugged
>>> into a Thunderbolt dock at the time of suspend.
>>
>> Can you create a bug in bugzilla.kernel.org and attach full dmesg so
>> that you enter suspend with dock connected (so that the issue
>> reproduces)? Please also add "thunderbolt.dyndbg=+p" in the kernel
>> command line so we can see what the driver is doing. Also probably good
>> to add the lspci dumps too as Lukas asked.
>>
>
--
Kenneth R. Crudup / Sr. SW Engineer, Scott County Consulting, Orange
County CA
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: My Dell XPS-9320 (kernel 6.10.x, et al.) doesn't detect Thunderbolt additions when coming out of suspend or hibernate
2024-09-12 21:12 ` Kenneth Crudup
@ 2024-09-13 5:25 ` Mika Westerberg
2024-09-13 6:11 ` Kenneth Crudup
0 siblings, 1 reply; 13+ messages in thread
From: Mika Westerberg @ 2024-09-13 5:25 UTC (permalink / raw)
To: Kenneth Crudup; +Cc: Lukas Wunner, linux-usb, linux-pm@vger.kernel.org
Hi,
On Thu, Sep 12, 2024 at 02:12:27PM -0700, Kenneth Crudup wrote:
> I'll run the stuff you need, but now it looks like whatever is breaking
> suspend/resume in Linus' master has been ported down from upstream into
> 6.10.10; I'm now getting the same panic()s as I did with master. I just had
> a failed resume and the crash dump (which happened on its own) looks the
> same as the one I'd posted here.
Is the crash you see something different from the hang? If you can catch
that with the backtrace and the register dump it should help.
Couple of additional steps to try:
- Unplug monitors from the dock and see if that makes it work (assuming
you have monitors connected).
- Disable PCIe tunneling and see if that makes it work. This means the
PCIe devices on the dock will not be functional, but it can point us
in the right direction. You can do this on a regular distro (Ubuntu,
Fedora, etc.) like:
$ boltctl config auth-mode disabled
Or go to "Settings" -> "Privacy & Security" -> "Thunderbolt" and flip
off the "Direct Access" switch.
> I may try and find some time to bisect the issue, but it'll take some time.
Sure.
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: My Dell XPS-9320 (kernel 6.10.x, et al.) doesn't detect Thunderbolt additions when coming out of suspend or hibernate
2024-09-13 5:25 ` Mika Westerberg
@ 2024-09-13 6:11 ` Kenneth Crudup
2024-09-13 21:59 ` Kenneth Crudup
0 siblings, 1 reply; 13+ messages in thread
From: Kenneth Crudup @ 2024-09-13 6:11 UTC (permalink / raw)
To: Mika Westerberg, Me; +Cc: Lukas Wunner, linux-usb, linux-pm@vger.kernel.org
Well, now get this- I'm back to running Linus' master (as of
79a61cc3fc0) and I've been trying to get resumes to fail and they
haven't (which means the next time I try after hitting "send" it's going
to fail spectacularly).
My SWAG is it may be related to commits 79a61cc3fc or 3e705251d998c9,
but I'll see if it breaks and if it doesn't, all the better :)
-K
On 9/12/24 22:25, Mika Westerberg wrote:
> Hi,
>
> On Thu, Sep 12, 2024 at 02:12:27PM -0700, Kenneth Crudup wrote:
>> I'll run the stuff you need, but now it looks like whatever is breaking
>> suspend/resume in Linus' master has been ported down from upstream into
>> 6.10.10; I'm now getting the same panic()s as I did with master. I just had
>> a failed resume and the crash dump (which happened on its own) looks the
>> same as the one I'd posted here.
>
> Is the crash you see something different from the hang? If you can catch
> that with the backtrace and the register dump it should help.
>
> Couple of additional steps to try:
>
> - Unplug monitors from the dock and see if that makes it work (assuming
> you have monitors connected).
>
> - Disable PCIe tunneling and see if that makes it work. This means the
> PCIe devices on the dock will not be functional, but it can point us
> in the right direction. You can do this on a regular distro (Ubuntu,
> Fedora, etc.) like:
>
> $ boltctl config auth-mode disabled
>
> Or go to "Settings" -> "Privacy & Security" -> "Thunderbolt" and flip
> off the "Direct Access" switch.
>
>> I may try and find some time to bisect the issue, but it'll take some time.
>
> Sure.
>
--
Kenneth R. Crudup / Sr. SW Engineer, Scott County Consulting, Orange
County CA
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: My Dell XPS-9320 (kernel 6.10.x, et al.) doesn't detect Thunderbolt additions when coming out of suspend or hibernate
2024-09-13 6:11 ` Kenneth Crudup
@ 2024-09-13 21:59 ` Kenneth Crudup
2024-09-16 0:14 ` Kenneth Crudup
0 siblings, 1 reply; 13+ messages in thread
From: Kenneth Crudup @ 2024-09-13 21:59 UTC (permalink / raw)
To: Mika Westerberg
Cc: Lukas Wunner, linux-usb, linux-pm@vger.kernel.org, Kenneth Crudup
Huh. This particular kernel is proving to be quite resilient, as in
"announce that it's been fixed, as that'll definitely make it break"
resilient.
I've done at least 5/6 suspend/resume cycles going between no dock,
USB-C/DP docks and now TB(USB4) docks and it's resumed properly every
time (and thanks to 9d573d195 even seems to recognize topology changes
too).
(My main USB4/TB dock is at home, a CalDigit 4 with a 7680x2160 DP
monitor on it; this tends to be the problematic dock for suspend/resume,
and provided that calling these suspend/resume issues publicly "fixed"
doesn't invoke Murphy's Law, I'll know whether I've had continued
success tomorrow.)
-K
On 9/12/24 23:11, Kenneth Crudup wrote:
>
> Well, now get this- I'm back to running Linus' master (as of
> 79a61cc3fc0) and I've been trying to get resumes to fail and they
> haven't (which means the next time I try after hitting "send" it's going
> to fail spectacularly).
>
> My SWAG is it may be related to commits 79a61cc3fc or 3e705251d998c9,
> but I'll see if it breaks and if it doesn't, all the better :)
>
> -K
>
> On 9/12/24 22:25, Mika Westerberg wrote:
>> Hi,
>>
>> On Thu, Sep 12, 2024 at 02:12:27PM -0700, Kenneth Crudup wrote:
>>> I'll run the stuff you need, but now it looks like whatever is breaking
>>> suspend/resume in Linus' master has been ported down from upstream into
>>> 6.10.10; I'm now getting the same panic()s as I did with master. I
>>> just had
>>> a failed resume and the crash dump (which happened on its own) looks the
>>> same as the one I'd posted here.
>>
>> Is the crash you see something different from the hang? If you can catch
>> that with the backtrace and the register dump it should help.
>>
>> Couple of additional steps to try:
>>
>> - Unplug monitors from the dock and see if that makes it work (assuming
>> you have monitors connected).
>>
>> - Disable PCIe tunneling and see if that makes it work. This means the
>> PCIe devices on the dock will not be functional, but it can point us
>> in the right direction. You can do this on a regular distro (Ubuntu,
>> Fedora, etc.) like:
>>
>> $ boltctl config auth-mode disabled
>>
>> Or go to "Settings" -> "Privacy & Security" -> "Thunderbolt" and flip
>> off the "Direct Access" switch.
>>
>>> I may try and find some time to bisect the issue, but it'll take some
>>> time.
>>
>> Sure.
>>
>
--
Kenneth R. Crudup / Sr. SW Engineer, Scott County Consulting, Orange
County CA
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: My Dell XPS-9320 (kernel 6.10.x, et al.) doesn't detect Thunderbolt additions when coming out of suspend or hibernate
2024-09-13 21:59 ` Kenneth Crudup
@ 2024-09-16 0:14 ` Kenneth Crudup
2024-09-25 16:55 ` Kenneth Crudup
0 siblings, 1 reply; 13+ messages in thread
From: Kenneth Crudup @ 2024-09-16 0:14 UTC (permalink / raw)
To: Mika Westerberg, Kenneth Crudup
Cc: Lukas Wunner, linux-usb, linux-pm@vger.kernel.org
OK, it looks like the changes that went into the (just-released) 6.11
have fixed all the suspend/resume issues.
... and it turns out that my crashes on the CalDigit TB4 dock are
probably related to a Thunderbolt-to-NVMe enclosure that was always
plugged into the dock; apparently when resuming, "something" was
waiting for the now-disconnected NVMe drive to come back, leading to the
hangs. Disconnecting the enclosure from the dock seems to prevent the
resume crashes.
I may try and root-cause that issue later, if I have time.
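If I do, I'll probably start with pm-graph's sleepgraph to see which
device's resume callback is stalling; something like this, from memory
(the flags are a sketch, not verified on this box):

$ sudo sleepgraph -m mem -rtcwake 15
# writes an HTML timeline with per-device suspend/resume times; a
# device stuck in resume should stick out

That would at least tell me whether it's the NVMe driver itself or the
PCIe/USB4 layers underneath it.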
I guess we can call the Subject: issue mostly solved.
-Kenny
On 9/13/24 14:59, Kenneth Crudup wrote:
>
> Huh. This particular kernel is proving to be quite resilient, as in
> "announce that it's been fixed, as that'll definitely make it break"
> resilient.
>
> I've done at least 5/6 suspend/resume cycles going between no dock,
> USB-C/DP docks and now TB(USB4) docks and it's resumed properly every
> time (and thanks to 9d573d195 even seems to recognize topology
> changes too).
>
> (My main USB4/TB dock is at home, a CalDigit 4 with a 7680x2160 DP
> monitor on it; this tends to be the problematic dock for suspend/resume,
> and provided that calling these suspend/resume issues publicly "fixed"
> doesn't invoke Murphy's Law, I'll know whether I've had continued
> success tomorrow.)
>
> -K
>
> On 9/12/24 23:11, Kenneth Crudup wrote:
>>
>> Well, now get this- I'm back to running Linus' master (as of
>> 79a61cc3fc0) and I've been trying to get resumes to fail and they
>> haven't (which means the next time I try after hitting "send" it's
>> going to fail spectacularly).
>>
>> My SWAG is it may be related to commits 79a61cc3fc or 3e705251d998c9,
>> but I'll see if it breaks and if it doesn't, all the better :)
>>
>> -K
>>
>> On 9/12/24 22:25, Mika Westerberg wrote:
>>> Hi,
>>>
>>> On Thu, Sep 12, 2024 at 02:12:27PM -0700, Kenneth Crudup wrote:
>>>> I'll run the stuff you need, but now it looks like whatever is breaking
>>>> suspend/resume in Linus' master has been ported down from upstream into
>>>> 6.10.10; I'm now getting the same panic()s as I did with master. I
>>>> just had
>>>> a failed resume and the crash dump (which happened on its own) looks
>>>> the
>>>> same as the one I'd posted here.
>>>
>>> Is the crash you see something different from the hang? If you can catch
>>> that with the backtrace and the register dump it should help.
>>>
>>> Couple of additional steps to try:
>>>
>>> - Unplug monitors from the dock and see if that makes it work (assuming
>>> you have monitors connected).
>>>
>>> - Disable PCIe tunneling and see if that makes it work. This means
>>> the PCIe devices on the dock will not be functional, but it can
>>> point us in the right direction. You can do this on a regular
>>> distro (Ubuntu, Fedora, etc.) like:
>>>
>>> $ boltctl config auth-mode disabled
>>>
>>> Or go to "Settings" -> "Privacy & Security" -> "Thunderbolt" and
>>> flip off the "Direct Access" switch.
>>>
>>>> I may try and find some time to bisect the issue, but it'll take
>>>> some time.
>>>
>>> Sure.
>>>
>>
>
--
Kenneth R. Crudup / Sr. SW Engineer, Scott County Consulting, Orange
County CA
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: My Dell XPS-9320 (kernel 6.10.x, et al.) doesn't detect Thunderbolt additions when coming out of suspend or hibernate
2024-09-16 0:14 ` Kenneth Crudup
@ 2024-09-25 16:55 ` Kenneth Crudup
0 siblings, 0 replies; 13+ messages in thread
From: Kenneth Crudup @ 2024-09-25 16:55 UTC (permalink / raw)
To: Mika Westerberg, Me
Cc: Lukas Wunner, linux-usb, linux-pm@vger.kernel.org, linux-pci
On 9/15/24 17:14, Kenneth Crudup wrote:
> ... and it turns out that my crashes on the CalDigit TB4 dock are
> probably related to a Thunderbolt-to-NVMe enclosure that was always
> plugged into the dock; apparently when resuming, "something" was
> waiting for the now-disconnected NVMe drive to come back, leading to the
> hangs. Disconnecting the enclosure from the dock seems to prevent the
> resume crashes.
>
> I may try and root-cause that issue later, if I have time.
So I've determined this problem happened somewhere between 6.10.8 and
6.10.9; I don't always have the affected hardware so it'll take me a
couple of days to bisect the issue, but at least I have an idea of
where the problem is.
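The plan, once I have the hardware back, is a standard bisect of the
stable tree; roughly this, assuming a linux-stable checkout and that
the hang is reliable enough to test each step:

$ git bisect start v6.10.9 v6.10.8
# build + boot each step, suspend on the CalDigit dock with the NVMe
# enclosure attached, unplug the dock, then resume:
$ git bisect bad     # resume hung
$ git bisect good    # resume survived

With the relatively small number of commits between two adjacent
stable releases it should only take a few iterations.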
What's interesting is that plugging the NVMe-to-TB adaptor directly
into the laptop isn't enough to trigger the crashes; it has to be
plugged into the CalDigit TB4 dock at suspend time to trigger a hang on
resume if the CalDigit dock is disconnected in between.
-Kenny
--
Kenneth R. Crudup / Sr. SW Engineer, Scott County Consulting, Orange
County CA
^ permalink raw reply [flat|nested] 13+ messages in thread
end of thread, other threads:[~2024-09-25 17:14 UTC | newest]
Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-08-21 22:05 My Dell XPS-9320 (kernel 6.10.x, et al.) doesn't detect Thunderbolt additions when coming out of suspend or hibernate Kenneth Crudup
2024-08-26 3:06 ` Lukas Wunner
2024-08-30 19:52 ` Kenneth Crudup
2024-09-04 6:10 ` Kenneth Crudup
2024-09-04 12:28 ` Mika Westerberg
2024-09-09 7:51 ` Kenneth Crudup
2024-09-09 9:01 ` Mika Westerberg
2024-09-12 21:12 ` Kenneth Crudup
2024-09-13 5:25 ` Mika Westerberg
2024-09-13 6:11 ` Kenneth Crudup
2024-09-13 21:59 ` Kenneth Crudup
2024-09-16 0:14 ` Kenneth Crudup
2024-09-25 16:55 ` Kenneth Crudup