* rcu stalls during fstests runs for xfs
@ 2026-01-26 11:30 Shinichiro Kawasaki
2026-01-26 23:05 ` Dave Chinner
2026-01-28 9:55 ` Kunwu Chan
0 siblings, 2 replies; 14+ messages in thread
From: Shinichiro Kawasaki @ 2026-01-26 11:30 UTC (permalink / raw)
To: rcu@vger.kernel.org; +Cc: linux-xfs@vger.kernel.org, hch
[-- Attachment #1: Type: text/plain, Size: 1572 bytes --]
Hello all,
I regularly run fstests with the kernel at the xfs/for-next branch tip to
validate the zoned block device capability of xfs. Recently, I started
observing hangs of the test runs with the message:
"rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:"
The hangs occurred in different test cases, and simply rerunning those test
cases does not reproduce the hang. A single run of the whole fstests suite
also fails to reproduce it. However, when the whole fstests run is repeated,
the hang is eventually recreated. The hang looks rare, takes a very long time
to recreate, and is tough to chase down.
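
Roughly speaking, the repetition is just a loop over whole fstests runs; a
minimal sketch of such a wrapper is below (the fstests path, the test
selection and the dmesg pattern are placeholders for illustration, not my
exact harness):

  #!/usr/bin/env python3
  # Repeat whole fstests runs until an RCU stall message shows up in dmesg.
  # FSTESTS_DIR and the "-g auto" group are placeholders, adjust to taste.
  import subprocess

  FSTESTS_DIR = "/path/to/fstests"
  STALL_MARKER = "rcu_preempt detected stalls"

  for i in range(1, 101):
      print(f"=== fstests iteration {i} ===")
      # "./check -g auto" runs the whole auto group inside the fstests tree.
      subprocess.run(["./check", "-g", "auto"], cwd=FSTESTS_DIR)
      dmesg = subprocess.run(["dmesg"], capture_output=True,
                             text=True).stdout
      if STALL_MARKER in dmesg:
          print("RCU stall detected, stopping for analysis")
          break
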
To tackle this problem, I would like to seek the expertise of RCU developers. I
have attached the kernel message logs captured at the hangs for analysis [1][2][3].
Any insights or guidance on how to debug this problem would be appreciated.
[1] hang observed on Jan/23/2026
dmesg log file attached: generic_005_hang
hung test case: generic/005
kernel: xfs/for-next, 51aba4ca399, v6.19-rc5+
block device: dm-linear on HDD (non-zoned)
xfs: zoned
[2] hang observed on Jan/18/2026
dmesg log file attached: xfs_598_hang
hung test case: xfs/598
kernel: Christoph's xfs branch, ec6aea2a5, v6.19-rc1+
block device: TCMU (non-zoned)
xfs: non-zoned
[3] hang observed on Jan/14/2026
dmesg log file attached: generic_417_hang
hung test case: generic/417
kernel: xfs/for-next, ea44380376c, v6.19-rc1+
block device: null_blk (non-zoned)
xfs: zoned
[-- Attachment #2: generic_005_hang --]
[-- Type: text/plain, Size: 64642 bytes --]
[272051.694901][T844837] XFS (dm-1): EXPERIMENTAL zoned RT device feature enabled. Use at your own risk!
[272051.707676][T844837] XFS (dm-1): Mounting V5 Filesystem a93cfc07-0455-4fbe-80f9-9d083d9ce62d
[272051.920527][T844837] XFS (dm-1): Ending clean mount
[272051.922932][T844837] XFS (dm-1): limiting open zones to 39 due to total zone count (158)
[272051.925611][T844837] XFS (dm-1): 158 zones of 65536 blocks (39 max open zones)
[272052.581630][T844878] XFS (dm-1): Unmounting Filesystem a93cfc07-0455-4fbe-80f9-9d083d9ce62d
[272054.948871][T844935] XFS (dm-0): Unmounting Filesystem f5e12e54-47e6-4c29-9b87-ab176a9bf453
[272056.000375][T845127] XFS (dm-0): EXPERIMENTAL zoned RT device feature enabled. Use at your own risk!
[272056.010682][T845127] XFS (dm-0): Mounting V5 Filesystem f5e12e54-47e6-4c29-9b87-ab176a9bf453
[272056.087841][T845127] XFS (dm-0): Ending clean mount
[272056.089082][T845127] XFS (dm-0): limiting open zones to 39 due to total zone count (158)
[272056.090196][T845127] XFS (dm-0): 158 zones of 65536 blocks (39 max open zones)
[272056.903099][T843348] run fstests generic/001 at 2026-01-23 05:39:45
[272121.008177][T847786] XFS (dm-0): Unmounting Filesystem f5e12e54-47e6-4c29-9b87-ab176a9bf453
[272122.198536][T847978] XFS (dm-0): EXPERIMENTAL zoned RT device feature enabled. Use at your own risk!
[272122.208526][T847978] XFS (dm-0): Mounting V5 Filesystem f5e12e54-47e6-4c29-9b87-ab176a9bf453
[272122.292040][T847978] XFS (dm-0): Ending clean mount
[272122.293575][T847978] XFS (dm-0): limiting open zones to 39 due to total zone count (158)
[272122.295185][T847978] XFS (dm-0): 158 zones of 65536 blocks (39 max open zones)
[272123.547774][T843348] run fstests generic/002 at 2026-01-23 05:40:51
[272132.514378][T848476] XFS (dm-0): Unmounting Filesystem f5e12e54-47e6-4c29-9b87-ab176a9bf453
[272133.717016][T848668] XFS (dm-0): EXPERIMENTAL zoned RT device feature enabled. Use at your own risk!
[272133.727497][T848668] XFS (dm-0): Mounting V5 Filesystem f5e12e54-47e6-4c29-9b87-ab176a9bf453
[272133.803261][T848668] XFS (dm-0): Ending clean mount
[272133.804864][T848668] XFS (dm-0): limiting open zones to 39 due to total zone count (158)
[272133.806356][T848668] XFS (dm-0): 158 zones of 65536 blocks (39 max open zones)
[272134.683984][T843348] run fstests generic/003 at 2026-01-23 05:41:02
[272139.571327][T848961] XFS (dm-1): EXPERIMENTAL zoned RT device feature enabled. Use at your own risk!
[272139.581647][T848961] XFS (dm-1): Mounting V5 Filesystem e7503700-0efd-4da8-9404-b42b5e0393f6
[272139.742777][T848961] XFS (dm-1): Ending clean mount
[272139.744143][T848961] XFS (dm-1): limiting open zones to 39 due to total zone count (158)
[272139.745883][T848961] XFS (dm-1): 158 zones of 65536 blocks (39 max open zones)
[272140.340054][T849003] XFS (dm-1): Unmounting Filesystem e7503700-0efd-4da8-9404-b42b5e0393f6
[272141.501440][T849021] XFS (dm-1): EXPERIMENTAL zoned RT device feature enabled. Use at your own risk!
[272141.512544][T849021] XFS (dm-1): Mounting V5 Filesystem f96cfb7c-624f-42ad-95e9-c7ee07e09d84
[272141.664352][T849021] XFS (dm-1): Ending clean mount
[272141.665691][T849021] XFS (dm-1): limiting open zones to 39 due to total zone count (158)
[272141.666883][T849021] XFS (dm-1): 158 zones of 65536 blocks (39 max open zones)
[272144.519373][T849071] XFS (dm-1): Unmounting Filesystem f96cfb7c-624f-42ad-95e9-c7ee07e09d84
[272144.765769][T849075] XFS (dm-1): EXPERIMENTAL zoned RT device feature enabled. Use at your own risk!
[272144.777075][T849075] XFS (dm-1): Mounting V5 Filesystem f96cfb7c-624f-42ad-95e9-c7ee07e09d84
[272144.936905][T849075] XFS (dm-1): Ending clean mount
[272144.938784][T849075] XFS (dm-1): limiting open zones to 39 due to total zone count (158)
[272144.941117][T849075] XFS (dm-1): 158 zones of 65536 blocks (39 max open zones)
[272147.852814][T849127] XFS (dm-1): Unmounting Filesystem f96cfb7c-624f-42ad-95e9-c7ee07e09d84
[272148.090129][T849131] XFS (dm-1): EXPERIMENTAL zoned RT device feature enabled. Use at your own risk!
[272148.102258][T849131] XFS (dm-1): Mounting V5 Filesystem f96cfb7c-624f-42ad-95e9-c7ee07e09d84
[272148.307883][T849131] XFS (dm-1): Ending clean mount
[272148.310413][T849131] XFS (dm-1): limiting open zones to 39 due to total zone count (158)
[272148.312854][T849131] XFS (dm-1): 158 zones of 65536 blocks (39 max open zones)
[272152.252689][T849183] XFS (dm-1): Unmounting Filesystem f96cfb7c-624f-42ad-95e9-c7ee07e09d84
[272152.492283][T849187] XFS (dm-1): EXPERIMENTAL zoned RT device feature enabled. Use at your own risk!
[272152.506371][T849187] XFS (dm-1): Mounting V5 Filesystem f96cfb7c-624f-42ad-95e9-c7ee07e09d84
[272152.693141][T849187] XFS (dm-1): Ending clean mount
[272152.695123][T849187] XFS (dm-1): limiting open zones to 39 due to total zone count (158)
[272152.697075][T849187] XFS (dm-1): 158 zones of 65536 blocks (39 max open zones)
[272155.577112][T849237] XFS (dm-1): Unmounting Filesystem f96cfb7c-624f-42ad-95e9-c7ee07e09d84
[272155.805951][T849241] XFS (dm-1): EXPERIMENTAL zoned RT device feature enabled. Use at your own risk!
[272155.812104][T849241] XFS (dm-1): Mounting V5 Filesystem f96cfb7c-624f-42ad-95e9-c7ee07e09d84
[272155.987454][T849241] XFS (dm-1): Ending clean mount
[272156.025918][T849241] XFS (dm-1): limiting open zones to 39 due to total zone count (158)
[272156.028704][T849241] XFS (dm-1): 158 zones of 65536 blocks (39 max open zones)
[272156.758671][T849287] XFS (dm-1): Unmounting Filesystem f96cfb7c-624f-42ad-95e9-c7ee07e09d84
[272157.230429][T849296] XFS (dm-0): Unmounting Filesystem f5e12e54-47e6-4c29-9b87-ab176a9bf453
[272159.312184][T843348] run fstests generic/004 at 2026-01-23 05:41:27
[272162.827987][T849734] XFS (dm-0): EXPERIMENTAL zoned RT device feature enabled. Use at your own risk!
[272162.837904][T849734] XFS (dm-0): Mounting V5 Filesystem f5e12e54-47e6-4c29-9b87-ab176a9bf453
[272163.010417][T849734] XFS (dm-0): Ending clean mount
[272163.011944][T849734] XFS (dm-0): limiting open zones to 39 due to total zone count (158)
[272163.013318][T849734] XFS (dm-0): 158 zones of 65536 blocks (39 max open zones)
[272167.347426][T849914] XFS (dm-0): Unmounting Filesystem f5e12e54-47e6-4c29-9b87-ab176a9bf453
[272168.570465][T850106] XFS (dm-0): EXPERIMENTAL zoned RT device feature enabled. Use at your own risk!
[272168.581556][T850106] XFS (dm-0): Mounting V5 Filesystem f5e12e54-47e6-4c29-9b87-ab176a9bf453
[272168.660335][T850106] XFS (dm-0): Ending clean mount
[272168.661808][T850106] XFS (dm-0): limiting open zones to 39 due to total zone count (158)
[272168.662915][T850106] XFS (dm-0): 158 zones of 65536 blocks (39 max open zones)
[272169.553351][T843348] run fstests generic/005 at 2026-01-23 05:41:37
[272177.090939][T850502] XFS (dm-0): Unmounting Filesystem f5e12e54-47e6-4c29-9b87-ab176a9bf453
[272243.197322][ C15] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
[272243.199112][ C15] rcu: 5-...0: (7 GPs behind) idle=908c/1/0x4000000000000000 softirq=14655808/14655808 fqs=8125
[272243.203524][ C15] rcu: Tasks blocked on level-1 rcu_node (CPUs 0-11):
[272280.853887][ T18] rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { P850507 5-...D } 81366 jiffies s: 1086577 root: 0x1/.
[272280.859045][ T18] rcu: blocking rcu_node structures (internal RCU debug): l=1:0-11:0x20/T
[272280.861233][ T18] Sending NMI from CPU 12 to CPUs 5:
[272280.862649][ C5] NMI backtrace for cpu 5
[272280.862655][ C5] CPU: 5 UID: 0 PID: 0 Comm: swapper/5 Not tainted 6.19.0-rc5-kts-xfs-g51aba4ca399+ #1 PREEMPT(lazy)
[272280.862661][ C5] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[272280.862663][ C5] RIP: 0010:mm_get_cid+0x171/0x2c0
[272280.862672][ C5] Code: 90 a7 48 b8 00 00 00 00 00 fc ff df 83 e5 07 48 c1 eb 03 83 c5 03 48 01 c3 49 c7 c4 8c ca 51 a6 41 83 e4 07 41 83 c4 03 f3 90 <48> 8b 04 24 0f b6 00 41 38 c4 7c 08 84 c0 0f 85 f2 00 00 00 8b 35
[272280.862675][ C5] RSP: 0018:ff110001010c7cf8 EFLAGS: 00000046
[272280.862679][ C5] RAX: 0000000000000018 RBX: fffffbfff4f21c3c RCX: 0000000000000018
[272280.862682][ C5] RDX: 0000000000000018 RSI: dffffc0000000000 RDI: ff11000441730b90
[272280.862684][ C5] RBP: 0000000000000003 R08: 0000000040000000 R09: 0000000000000015
[272280.862686][ C5] R10: ff11000441730100 R11: ff11000162cfd5d4 R12: 0000000000000007
[272280.862688][ C5] R13: ff11000441730b80 R14: 0000000000000018 R15: 0000000000000018
[272280.862690][ C5] FS: 0000000000000000(0000) GS:ff11000f94139000(0000) knlGS:0000000000000000
[272280.862692][ C5] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[272280.862694][ C5] CR2: 00007fdfa0000020 CR3: 0000000146db6004 CR4: 0000000000773ef0
[272280.862699][ C5] PKRU: 55555554
[272280.862701][ C5] Call Trace:
[272280.862703][ C5] <TASK>
[272280.862707][ C5] mm_cid_switch_to+0xa6d/0x1300
[272280.862712][ C5] ? switch_mm_irqs_off+0x6fe/0x970
[272280.862718][ C5] __schedule+0x6b5/0x12a0
[272280.862723][ C5] ? __lock_release.isra.0+0x59/0x170
[272280.862728][ C5] ? __pfx___schedule+0x10/0x10
[272280.862731][ C5] ? do_raw_spin_unlock+0x14b/0x230
[272280.862735][ C5] ? lock_release.part.0+0x1c/0x50
[272280.862739][ C5] ? irqtime_account_process_tick+0x261/0x490
[272280.862744][ C5] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[272280.862749][ C5] schedule_idle+0x59/0x90
[272280.862752][ C5] do_idle+0x10e/0x190
[272280.862756][ C5] cpu_startup_entry+0x53/0x70
[272280.862760][ C5] start_secondary+0x220/0x2e0
[272280.862766][ C5] ? __pfx_start_secondary+0x10/0x10
[272280.862772][ C5] common_startup_64+0x13e/0x141
[272280.862781][ C5] </TASK>
[272416.020080][ T167] INFO: task rcu_exp_gp_kthr:18 blocked for more than 122 seconds.
[272416.022378][ T167] Not tainted 6.19.0-rc5-kts-xfs-g51aba4ca399+ #1
[272416.024251][ T167] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[272416.026435][ T167] task:rcu_exp_gp_kthr state:D stack:0 pid:18 tgid:18 ppid:2 task_flags:0x208040 flags:0x00080000
[272416.029193][ T167] Call Trace:
[272416.030064][ T167] <TASK>
[272416.030831][ T167] __schedule+0x841/0x12a0
[272416.032011][ T167] ? __pfx___schedule+0x10/0x10
[272416.033223][ T167] ? find_held_lock+0x2b/0x80
[272416.034415][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.035785][ T167] ? lock_acquire+0xf6/0x130
[272416.036967][ T167] schedule+0xd1/0x250
[272416.038028][ T167] schedule_timeout+0x103/0x260
[272416.039257][ T167] ? __pfx_schedule_timeout+0x10/0x10
[272416.040637][ T167] ? __pfx_process_timeout+0x10/0x10
[272416.041916][ T167] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[272416.043502][ T167] ? trace_hardirqs_on+0x18/0x140
[272416.044764][ T167] ? _raw_spin_unlock_irqrestore+0x44/0x60
[272416.046206][ T167] synchronize_rcu_expedited_wait_once+0x154/0x170
[272416.047845][ T167] ? __pfx_synchronize_rcu_expedited_wait_once+0x10/0x10
[272416.049516][ T167] ? nbcon_cpu_emergency_exit+0xbf/0x160
[272416.050893][ T167] ? __pfx_nbcon_cpu_emergency_exit+0x10/0x10
[272416.052378][ T167] ? rcu_exp_jiffies_till_stall_check+0x7c/0x1d0
[272416.053937][ T167] synchronize_rcu_expedited_wait+0x195/0x910
[272416.055407][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.056743][ T167] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[272416.058309][ T167] rcu_exp_wait_wake+0x1b/0x4d0
[272416.059536][ T167] kthread_worker_fn+0x27e/0x8d0
[272416.060763][ T167] ? __pfx_wait_rcu_exp_gp+0x10/0x10
[272416.062052][ T167] ? __pfx_kthread_worker_fn+0x10/0x10
[272416.063388][ T167] kthread+0x3a4/0x760
[272416.064418][ T167] ? __pfx_kthread+0x10/0x10
[272416.065578][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.066884][ T167] ? __pfx_kthread+0x10/0x10
[272416.068043][ T167] ret_from_fork+0x44d/0x680
[272416.069183][ T167] ? __pfx_ret_from_fork+0x10/0x10
[272416.070471][ T167] ? __switch_to+0x42c/0xd70
[272416.071667][ T167] ? __pfx_kthread+0x10/0x10
[272416.072820][ T167] ret_from_fork_asm+0x1a/0x30
[272416.074043][ T167] </TASK>
[272416.074849][ T167] INFO: task xfsaild/sde3:653 blocked for more than 122 seconds.
[272416.076718][ T167] Not tainted 6.19.0-rc5-kts-xfs-g51aba4ca399+ #1
[272416.078287][ T167] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[272416.080138][ T167] task:xfsaild/sde3 state:D stack:0 pid:653 tgid:653 ppid:2 task_flags:0x200840 flags:0x00080000
[272416.082667][ T167] Call Trace:
[272416.083416][ T167] <TASK>
[272416.084125][ T167] __schedule+0x841/0x12a0
[272416.085114][ T167] ? __pfx___schedule+0x10/0x10
[272416.086179][ T167] ? find_held_lock+0x2b/0x80
[272416.087215][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.088394][ T167] ? lock_acquire+0xf6/0x130
[272416.089433][ T167] schedule+0xd1/0x250
[272416.090340][ T167] io_schedule+0x8c/0x100
[272416.091335][ T167] blk_mq_get_tag+0x496/0xb00
[272416.092400][ T167] ? __pfx_blk_mq_get_tag+0x10/0x10
[272416.093507][ T167] ? __pfx_autoremove_wake_function+0x10/0x10
[272416.094715][ T167] ? lock_release.part.0+0x1c/0x50
[272416.095725][ T167] ? bfq_limit_depth+0x22e/0x950
[272416.096723][ T167] __blk_mq_alloc_requests+0x293/0xcf0
[272416.097812][ T167] blk_mq_submit_bio+0x87d/0x2470
[272416.098809][ T167] ? __pfx_blk_mq_submit_bio+0x10/0x10
[272416.099914][ T167] __submit_bio+0x362/0x700
[272416.100868][ T167] ? __pfx___submit_bio+0x10/0x10
[272416.101872][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.102925][ T167] ? __pfx_blk_cgroup_bio_start+0x10/0x10
[272416.104067][ T167] ? submit_bio_noacct_nocheck+0x3cc/0x5a0
[272416.105149][ T167] submit_bio_noacct_nocheck+0x3cc/0x5a0
[272416.106142][ T167] ? __pfx_submit_bio_noacct_nocheck+0x10/0x10
[272416.107231][ T167] ? submit_bio_noacct+0x248/0xf60
[272416.108170][ T167] xfs_buf_submit_bio+0x414/0x5d0 [xfs]
[272416.110021][ T167] ? kernel_fpu_begin_mask+0x118/0x1c0
[272416.111048][ T167] ? __pfx_xfs_buf_submit_bio+0x10/0x10 [xfs]
[272416.112926][ T167] ? trace_hardirqs_on+0x18/0x140
[272416.113852][ T167] ? xfs_inobt_write_verify+0x17/0x190 [xfs]
[272416.115526][ T167] ? xfs_btree_agblock_calc_crc+0x17a/0x250 [xfs]
[272416.117219][ T167] ? xfs_buf_submit+0x34c/0x700 [xfs]
[272416.118843][ T167] xfs_buf_delwri_submit_nowait+0x1fb/0x410 [xfs]
[272416.120671][ T167] ? _raw_spin_unlock_irqrestore+0x44/0x60
[272416.121638][ T167] ? __pfx_xfs_buf_delwri_submit_nowait+0x10/0x10 [xfs]
[272416.123506][ T167] ? __pfx_up+0x10/0x10
[272416.124217][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.125049][ T167] xfsaild_push+0x4d2/0x1c70 [xfs]
[272416.126519][ T167] ? __pfx_xfsaild_push+0x10/0x10 [xfs]
[272416.128087][ T167] ? __pfx___might_resched+0x10/0x10
[272416.128898][ T167] ? xfsaild+0x15f/0x450 [xfs]
[272416.130335][ T167] xfsaild+0x1c9/0x450 [xfs]
[272416.131754][ T167] ? __pfx_xfsaild+0x10/0x10 [xfs]
[272416.133154][ T167] kthread+0x3a4/0x760
[272416.133744][ T167] ? __pfx_kthread+0x10/0x10
[272416.134386][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.135147][ T167] ? __pfx_kthread+0x10/0x10
[272416.135796][ T167] ret_from_fork+0x44d/0x680
[272416.136442][ T167] ? __pfx_ret_from_fork+0x10/0x10
[272416.137167][ T167] ? __switch_to+0x42c/0xd70
[272416.137816][ T167] ? __pfx_kthread+0x10/0x10
[272416.138453][ T167] ret_from_fork_asm+0x1a/0x30
[272416.139156][ T167] </TASK>
[272416.139609][ T167] INFO: task journal-offline:850686 blocked for more than 122 seconds.
[272416.140724][ T167] Not tainted 6.19.0-rc5-kts-xfs-g51aba4ca399+ #1
[272416.141704][ T167] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[272416.142876][ T167] task:journal-offline state:D stack:0 pid:850686 tgid:736 ppid:1 task_flags:0x400040 flags:0x00080000
[272416.144378][ T167] Call Trace:
[272416.144836][ T167] <TASK>
[272416.145247][ T167] __schedule+0x841/0x12a0
[272416.145850][ T167] ? __pfx___schedule+0x10/0x10
[272416.146483][ T167] ? find_held_lock+0x2b/0x80
[272416.147131][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.147829][ T167] ? lock_acquire+0xf6/0x130
[272416.148436][ T167] schedule+0xd1/0x250
[272416.148994][ T167] io_schedule+0x8c/0x100
[272416.149582][ T167] blk_mq_get_tag+0x496/0xb00
[272416.150154][ T167] ? __pfx_blk_mq_get_tag+0x10/0x10
[272416.150811][ T167] ? __pfx_autoremove_wake_function+0x10/0x10
[272416.151540][ T167] ? lock_release.part.0+0x1c/0x50
[272416.152161][ T167] ? bfq_limit_depth+0x22e/0x950
[272416.152780][ T167] __blk_mq_alloc_requests+0x293/0xcf0
[272416.153449][ T167] blk_mq_submit_bio+0x87d/0x2470
[272416.154078][ T167] ? __pfx_blk_mq_submit_bio+0x10/0x10
[272416.154762][ T167] __submit_bio+0x362/0x700
[272416.155319][ T167] ? __pfx___submit_bio+0x10/0x10
[272416.155952][ T167] ? __pfx_blk_cgroup_bio_start+0x10/0x10
[272416.156673][ T167] ? __pfx_css_rstat_updated+0x10/0x10
[272416.157327][ T167] ? submit_bio_noacct_nocheck+0x3cc/0x5a0
[272416.158010][ T167] submit_bio_noacct_nocheck+0x3cc/0x5a0
[272416.158659][ T167] ? __pfx_submit_bio_noacct_nocheck+0x10/0x10
[272416.159366][ T167] ? submit_bio_noacct+0x248/0xf60
[272416.159965][ T167] iomap_ioend_writeback_submit+0xfa/0x180
[272416.160657][ T167] iomap_add_to_ioend+0x44f/0x1110
[272416.161246][ T167] xfs_writeback_range+0x5d/0x90 [xfs]
[272416.162386][ T167] iomap_writeback_folio+0x721/0xfe0
[272416.162995][ T167] ? __pfx_iomap_writeback_folio+0x10/0x10
[272416.163654][ T167] iomap_writepages+0x119/0x270
[272416.164188][ T167] ? __pfx_iomap_writepages+0x10/0x10
[272416.164801][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.165371][ T167] ? lock_acquire+0xf6/0x130
[272416.165890][ T167] xfs_vm_writepages+0x1a2/0x2c0 [xfs]
[272416.167004][ T167] ? __pfx_xfs_vm_writepages+0x10/0x10 [xfs]
[272416.168162][ T167] ? lock_acquire.part.0+0xb8/0x230
[272416.168769][ T167] do_writepages+0x21e/0x560
[272416.169271][ T167] ? __pfx_do_writepages+0x10/0x10
[272416.169840][ T167] ? lock_release.part.0+0x1c/0x50
[272416.170391][ T167] ? _raw_spin_unlock+0x23/0x40
[272416.170923][ T167] ? wbc_attach_and_unlock_inode.part.0+0x388/0x730
[272416.171619][ T167] filemap_writeback+0x1c4/0x280
[272416.172133][ T167] ? __pfx_filemap_writeback+0x10/0x10
[272416.172734][ T167] ? lock_acquire.part.0+0xb8/0x230
[272416.173257][ T167] ? __fget_files+0x31/0x2f0
[272416.173756][ T167] file_write_and_wait_range+0x97/0x150
[272416.174313][ T167] xfs_file_fsync+0x121/0x800 [xfs]
[272416.175312][ T167] ? __pfx_xfs_file_fsync+0x10/0x10 [xfs]
[272416.176383][ T167] do_fsync+0x3b/0x80
[272416.176809][ T167] ? syscall_trace_enter+0x8e/0x2b0
[272416.177317][ T167] __x64_sys_fsync+0x35/0x50
[272416.177784][ T167] do_syscall_64+0x98/0x5c0
[272416.178248][ T167] ? trace_hardirqs_on_prepare+0x101/0x140
[272416.178830][ T167] ? do_syscall_64+0x180/0x5c0
[272416.179310][ T167] ? __seccomp_filter+0xa2/0x920
[272416.179826][ T167] ? __pfx___seccomp_filter+0x10/0x10
[272416.180363][ T167] ? __pfx___seccomp_filter+0x10/0x10
[272416.180914][ T167] ? __pfx___do_sys_rseq+0x10/0x10
[272416.181413][ T167] ? trace_hardirqs_on_prepare+0x101/0x140
[272416.182004][ T167] ? do_syscall_64+0x180/0x5c0
[272416.182476][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.183014][ T167] ? switch_fpu_return+0x111/0x240
[272416.183489][ T167] ? trace_hardirqs_on_prepare+0x101/0x140
[272416.184047][ T167] ? ret_from_fork+0x290/0x680
[272416.184484][ T167] ? __pfx_ret_from_fork+0x10/0x10
[272416.184972][ T167] ? __switch_to+0x42c/0xd70
[272416.185412][ T167] ? clear_bhb_loop+0x50/0xa0
[272416.185861][ T167] ? clear_bhb_loop+0x50/0xa0
[272416.186297][ T167] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[272416.186851][ T167] RIP: 0033:0x7f20ab089422
[272416.187269][ T167] RSP: 002b:00007f20aa953958 EFLAGS: 00000246 ORIG_RAX: 000000000000004a
[272416.188056][ T167] RAX: ffffffffffffffda RBX: 000055ba3fa4be40 RCX: 00007f20ab089422
[272416.188775][ T167] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000016
[272416.189506][ T167] RBP: 00007f20aa953980 R08: 0000000000000000 R09: 0000000000000000
[272416.190228][ T167] R10: 0000000000000000 R11: 0000000000000246 R12: 00007f20ab5b6294
[272416.190951][ T167] R13: 00007ffd10b5ab50 R14: 00007f20aa954cdc R15: 00007ffd10b5ac57
[272416.191673][ T167] </TASK>
[272416.192001][ T167] INFO: task kworker/4:2H:1630165 blocked for more than 123 seconds.
[272416.192720][ T167] Not tainted 6.19.0-rc5-kts-xfs-g51aba4ca399+ #1
[272416.193312][ T167] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[272416.194084][ T167] task:kworker/4:2H state:D stack:0 pid:1630165 tgid:1630165 ppid:2 task_flags:0x4208060 flags:0x00080000
[272416.195148][ T167] Workqueue: kblockd blk_mq_timeout_work
[272416.195641][ T167] Call Trace:
[272416.195920][ T167] <TASK>
[272416.196181][ T167] __schedule+0x841/0x12a0
[272416.196583][ T167] ? __pfx___schedule+0x10/0x10
[272416.196998][ T167] ? find_held_lock+0x2b/0x80
[272416.197394][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.197873][ T167] schedule+0xd1/0x250
[272416.198235][ T167] schedule_timeout+0x17b/0x260
[272416.198675][ T167] ? __pfx_schedule_timeout+0x10/0x10
[272416.199134][ T167] ? mark_held_locks+0x40/0x70
[272416.199568][ T167] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[272416.200098][ T167] __wait_for_common+0x2d7/0x4a0
[272416.200500][ T167] ? __pfx_schedule_timeout+0x10/0x10
[272416.200956][ T167] ? __pfx___wait_for_common+0x10/0x10
[272416.201398][ T167] ? __pfx___call_rcu_common.constprop.0+0x10/0x10
[272416.201935][ T167] ? lockdep_init_map_type+0x5c/0x220
[272416.202374][ T167] ? __raw_spin_lock_init+0x44/0x120
[272416.202823][ T167] ? lockdep_init_map_type+0x5c/0x220
[272416.203262][ T167] wait_for_completion_state+0x21/0x40
[272416.203719][ T167] __wait_rcu_gp+0x1cd/0x410
[272416.204106][ T167] ? find_held_lock+0x2b/0x80
[272416.204487][ T167] synchronize_rcu_normal+0x4a8/0x510
[272416.204943][ T167] ? __pfx_synchronize_rcu_normal+0x10/0x10
[272416.205419][ T167] ? lock_release.part.0+0x1c/0x50
[272416.205854][ T167] ? __pfx_call_rcu_hurry+0x10/0x10
[272416.206294][ T167] ? __pfx_wakeme_after_rcu+0x10/0x10
[272416.206756][ T167] ? __pfx___might_resched+0x10/0x10
[272416.207196][ T167] ? lock_is_held_type+0x9a/0x110
[272416.207632][ T167] blk_mq_timeout_work+0x4aa/0x5d0
[272416.208052][ T167] ? __pfx_blk_mq_timeout_work+0x10/0x10
[272416.208499][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.208944][ T167] ? lock_acquire+0xf6/0x130
[272416.209324][ T167] process_one_work+0x86b/0x1490
[272416.209761][ T167] ? __pfx_process_one_work+0x10/0x10
[272416.210200][ T167] ? lock_acquire.part.0+0xb8/0x230
[272416.210642][ T167] ? lock_is_held_type+0x9a/0x110
[272416.211045][ T167] ? assign_work+0x156/0x390
[272416.211422][ T167] worker_thread+0x5f2/0xfd0
[272416.211823][ T167] ? __pfx_worker_thread+0x10/0x10
[272416.212238][ T167] ? __kthread_parkme+0xb3/0x1f0
[272416.212659][ T167] ? __pfx_worker_thread+0x10/0x10
[272416.213066][ T167] kthread+0x3a4/0x760
[272416.213397][ T167] ? __pfx_kthread+0x10/0x10
[272416.213792][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.214236][ T167] ? __pfx_kthread+0x10/0x10
[272416.214645][ T167] ret_from_fork+0x44d/0x680
[272416.215042][ T167] ? __pfx_ret_from_fork+0x10/0x10
[272416.215465][ T167] ? __switch_to+0x42c/0xd70
[272416.215858][ T167] ? __pfx_kthread+0x10/0x10
[272416.216239][ T167] ret_from_fork_asm+0x1a/0x30
[272416.216688][ T167] </TASK>
[272416.216939][ T167] INFO: task kworker/4:1H:3881240 blocked for more than 123 seconds.
[272416.217600][ T167] Not tainted 6.19.0-rc5-kts-xfs-g51aba4ca399+ #1
[272416.218160][ T167] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[272416.218855][ T167] task:kworker/4:1H state:D stack:0 pid:3881240 tgid:3881240 ppid:2 task_flags:0x4208060 flags:0x00080000
[272416.219841][ T167] Workqueue: kblockd blk_mq_timeout_work
[272416.220308][ T167] Call Trace:
[272416.220614][ T167] <TASK>
[272416.220864][ T167] __schedule+0x841/0x12a0
[272416.221231][ T167] ? __pfx___schedule+0x10/0x10
[272416.221643][ T167] ? find_held_lock+0x2b/0x80
[272416.222028][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.222454][ T167] schedule+0xd1/0x250
[272416.222807][ T167] schedule_timeout+0x17b/0x260
[272416.223207][ T167] ? __pfx_schedule_timeout+0x10/0x10
[272416.223664][ T167] ? mark_held_locks+0x40/0x70
[272416.224051][ T167] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[272416.224597][ T167] __wait_for_common+0x2d7/0x4a0
[272416.224998][ T167] ? __pfx_schedule_timeout+0x10/0x10
[272416.225434][ T167] ? __pfx___wait_for_common+0x10/0x10
[272416.225890][ T167] ? __pfx___call_rcu_common.constprop.0+0x10/0x10
[272416.226422][ T167] ? lockdep_init_map_type+0x5c/0x220
[272416.226867][ T167] ? __raw_spin_lock_init+0x44/0x120
[272416.227299][ T167] ? lockdep_init_map_type+0x5c/0x220
[272416.227761][ T167] wait_for_completion_state+0x21/0x40
[272416.228198][ T167] __wait_rcu_gp+0x1cd/0x410
[272416.228597][ T167] ? find_held_lock+0x2b/0x80
[272416.228976][ T167] synchronize_rcu_normal+0x4a8/0x510
[272416.229414][ T167] ? __pfx_synchronize_rcu_normal+0x10/0x10
[272416.229911][ T167] ? lock_release.part.0+0x1c/0x50
[272416.230331][ T167] ? __pfx_call_rcu_hurry+0x10/0x10
[272416.230780][ T167] ? __pfx_wakeme_after_rcu+0x10/0x10
[272416.231226][ T167] ? __pfx___might_resched+0x10/0x10
[272416.231689][ T167] ? lock_is_held_type+0x9a/0x110
[272416.232106][ T167] blk_mq_timeout_work+0x4aa/0x5d0
[272416.232517][ T167] ? __pfx_blk_mq_timeout_work+0x10/0x10
[272416.232999][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.233430][ T167] ? lock_acquire+0xf6/0x130
[272416.233829][ T167] process_one_work+0x86b/0x1490
[272416.234245][ T167] ? __pfx_process_one_work+0x10/0x10
[272416.234702][ T167] ? lock_acquire.part.0+0xb8/0x230
[272416.235128][ T167] ? rcuwait_wake_up+0x22/0x190
[272416.235517][ T167] ? lock_is_held_type+0x9a/0x110
[272416.235936][ T167] ? assign_work+0x156/0x390
[272416.236324][ T167] worker_thread+0x5f2/0xfd0
[272416.236738][ T167] ? __pfx_worker_thread+0x10/0x10
[272416.237153][ T167] kthread+0x3a4/0x760
[272416.237486][ T167] ? __pfx_kthread+0x10/0x10
[272416.237938][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.238383][ T167] ? __pfx_kthread+0x10/0x10
[272416.238786][ T167] ret_from_fork+0x44d/0x680
[272416.239170][ T167] ? __pfx_ret_from_fork+0x10/0x10
[272416.239609][ T167] ? __switch_to+0x42c/0xd70
[272416.239992][ T167] ? __pfx_kthread+0x10/0x10
[272416.240366][ T167] ret_from_fork_asm+0x1a/0x30
[272416.240789][ T167] </TASK>
[272416.241062][ T167] INFO: task kworker/u96:1:781928 blocked for more than 123 seconds.
[272416.241709][ T167] Not tainted 6.19.0-rc5-kts-xfs-g51aba4ca399+ #1
[272416.242258][ T167] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[272416.242961][ T167] task:kworker/u96:1 state:D stack:0 pid:781928 tgid:781928 ppid:2 task_flags:0x4208060 flags:0x00080000
[272416.243974][ T167] Workqueue: writeback wb_workfn (flush-8:64)
[272416.244474][ T167] Call Trace:
[272416.244807][ T167] <TASK>
[272416.245063][ T167] __schedule+0x841/0x12a0
[272416.245426][ T167] ? __pfx___schedule+0x10/0x10
[272416.245840][ T167] ? find_held_lock+0x2b/0x80
[272416.246231][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.246688][ T167] ? lock_acquire+0xf6/0x130
[272416.247073][ T167] schedule+0xd1/0x250
[272416.247409][ T167] io_schedule+0x8c/0x100
[272416.247791][ T167] blk_mq_get_tag+0x496/0xb00
[272416.248180][ T167] ? __pfx_blk_mq_get_tag+0x10/0x10
[272416.248629][ T167] ? __pfx_autoremove_wake_function+0x10/0x10
[272416.249125][ T167] ? lock_release.part.0+0x1c/0x50
[272416.249528][ T167] ? bfq_limit_depth+0x22e/0x950
[272416.249949][ T167] __blk_mq_alloc_requests+0x293/0xcf0
[272416.250406][ T167] blk_mq_submit_bio+0x87d/0x2470
[272416.250837][ T167] ? __pfx_blk_mq_submit_bio+0x10/0x10
[272416.251303][ T167] __submit_bio+0x362/0x700
[272416.251699][ T167] ? __lock_acquire+0x55d/0xbd0
[272416.252098][ T167] ? __pfx___submit_bio+0x10/0x10
[272416.252515][ T167] ? __pfx_blk_cgroup_bio_start+0x10/0x10
[272416.253005][ T167] ? submit_bio_noacct_nocheck+0x3cc/0x5a0
[272416.253473][ T167] submit_bio_noacct_nocheck+0x3cc/0x5a0
[272416.253955][ T167] ? __pfx_submit_bio_noacct_nocheck+0x10/0x10
[272416.254462][ T167] ? submit_bio_noacct+0x248/0xf60
[272416.254899][ T167] iomap_ioend_writeback_submit+0xfa/0x180
[272416.255381][ T167] iomap_add_to_ioend+0x44f/0x1110
[272416.255818][ T167] xfs_writeback_range+0x5d/0x90 [xfs]
[272416.256674][ T167] iomap_writeback_folio+0x721/0xfe0
[272416.257111][ T167] ? __pfx_iomap_writeback_folio+0x10/0x10
[272416.257600][ T167] iomap_writepages+0x119/0x270
[272416.258001][ T167] ? __pfx_iomap_writepages+0x10/0x10
[272416.258428][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.258874][ T167] ? lock_acquire+0xf6/0x130
[272416.259261][ T167] xfs_vm_writepages+0x1a2/0x2c0 [xfs]
[272416.260069][ T167] ? __pfx_xfs_vm_writepages+0x10/0x10 [xfs]
[272416.260941][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.261386][ T167] ? lock_acquire+0xf6/0x130
[272416.261780][ T167] do_writepages+0x21e/0x560
[272416.262176][ T167] ? __pfx_do_writepages+0x10/0x10
[272416.262620][ T167] __writeback_single_inode+0x119/0xc70
[272416.263075][ T167] ? __pfx___writeback_single_inode+0x10/0x10
[272416.263589][ T167] ? lock_release.part.0+0x1c/0x50
[272416.264002][ T167] ? _raw_spin_unlock+0x23/0x40
[272416.264398][ T167] ? wbc_attach_and_unlock_inode.part.0+0x388/0x730
[272416.264949][ T167] writeback_sb_inodes+0x67a/0x10a0
[272416.265388][ T167] ? __pfx_writeback_sb_inodes+0x10/0x10
[272416.265860][ T167] ? arch_stack_walk+0xa1/0x100
[272416.266290][ T167] __writeback_inodes_wb+0xf2/0x270
[272416.266740][ T167] ? __pfx___writeback_inodes_wb+0x10/0x10
[272416.267209][ T167] ? queue_io+0x2ab/0x4a0
[272416.267598][ T167] wb_writeback+0x5af/0x7d0
[272416.267963][ T167] ? __pfx_wb_writeback+0x10/0x10
[272416.268379][ T167] ? get_nr_dirty_inodes+0xd8/0x1d0
[272416.268819][ T167] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[272416.269357][ T167] wb_do_writeback+0x51c/0x7b0
[272416.269769][ T167] ? __pfx_wb_do_writeback+0x10/0x10
[272416.270206][ T167] ? lock_acquire.part.0+0xb8/0x230
[272416.270648][ T167] ? process_one_work+0x7e7/0x1490
[272416.271073][ T167] wb_workfn+0x9f/0x3d0
[272416.271414][ T167] process_one_work+0x86b/0x1490
[272416.271846][ T167] ? __pfx_process_one_work+0x10/0x10
[272416.272288][ T167] ? lock_acquire.part.0+0xb8/0x230
[272416.272734][ T167] ? lock_is_held_type+0x9a/0x110
[272416.273146][ T167] ? assign_work+0x156/0x390
[272416.273519][ T167] worker_thread+0x5f2/0xfd0
[272416.273926][ T167] ? __pfx_worker_thread+0x10/0x10
[272416.274343][ T167] kthread+0x3a4/0x760
[272416.274707][ T167] ? __pfx_kthread+0x10/0x10
[272416.275086][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.275520][ T167] ? __pfx_kthread+0x10/0x10
[272416.275911][ T167] ret_from_fork+0x44d/0x680
[272416.276292][ T167] ? __pfx_ret_from_fork+0x10/0x10
[272416.276733][ T167] ? __switch_to+0x42c/0xd70
[272416.277111][ T167] ? __pfx_kthread+0x10/0x10
[272416.277486][ T167] ret_from_fork_asm+0x1a/0x30
[272416.277907][ T167] </TASK>
[272416.278165][ T167] INFO: task kworker/u96:4:804951 blocked for more than 123 seconds.
[272416.278822][ T167] Not tainted 6.19.0-rc5-kts-xfs-g51aba4ca399+ #1
[272416.279381][ T167] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[272416.280088][ T167] task:kworker/u96:4 state:D stack:0 pid:804951 tgid:804951 ppid:2 task_flags:0x4208060 flags:0x00080000
[272416.281071][ T167] Workqueue: writeback wb_workfn (flush-8:64)
[272416.281590][ T167] Call Trace:
[272416.281856][ T167] <TASK>
[272416.282109][ T167] __schedule+0x841/0x12a0
[272416.282470][ T167] ? __pfx___schedule+0x10/0x10
[272416.282884][ T167] ? find_held_lock+0x2b/0x80
[272416.283288][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.283743][ T167] ? lock_acquire+0xf6/0x130
[272416.284129][ T167] schedule+0xd1/0x250
[272416.284458][ T167] io_schedule+0x8c/0x100
[272416.284834][ T167] blk_mq_get_tag+0x496/0xb00
[272416.285225][ T167] ? __pfx_blk_mq_get_tag+0x10/0x10
[272416.285674][ T167] ? __pfx_autoremove_wake_function+0x10/0x10
[272416.286165][ T167] ? lock_release.part.0+0x1c/0x50
[272416.286602][ T167] ? bfq_limit_depth+0x22e/0x950
[272416.287006][ T167] __blk_mq_alloc_requests+0x293/0xcf0
[272416.287451][ T167] blk_mq_submit_bio+0x87d/0x2470
[272416.287883][ T167] ? __pfx_blk_mq_submit_bio+0x10/0x10
[272416.288338][ T167] __submit_bio+0x362/0x700
[272416.288728][ T167] ? __lock_acquire+0x55d/0xbd0
[272416.289136][ T167] ? __pfx___submit_bio+0x10/0x10
[272416.289572][ T167] ? __pfx_blk_cgroup_bio_start+0x10/0x10
[272416.290040][ T167] ? submit_bio_noacct_nocheck+0x3cc/0x5a0
[272416.290503][ T167] submit_bio_noacct_nocheck+0x3cc/0x5a0
[272416.290973][ T167] ? __pfx_submit_bio_noacct_nocheck+0x10/0x10
[272416.291480][ T167] ? submit_bio_noacct+0x248/0xf60
[272416.291918][ T167] iomap_ioend_writeback_submit+0xfa/0x180
[272416.292402][ T167] iomap_add_to_ioend+0x44f/0x1110
[272416.292847][ T167] xfs_writeback_range+0x5d/0x90 [xfs]
[272416.293698][ T167] iomap_writeback_folio+0x721/0xfe0
[272416.294138][ T167] ? __pfx_iomap_writeback_folio+0x10/0x10
[272416.294627][ T167] iomap_writepages+0x119/0x270
[272416.295025][ T167] ? __pfx_iomap_writepages+0x10/0x10
[272416.295453][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.295910][ T167] ? lock_acquire+0xf6/0x130
[272416.296302][ T167] xfs_vm_writepages+0x1a2/0x2c0 [xfs]
[272416.297102][ T167] ? __pfx_xfs_vm_writepages+0x10/0x10 [xfs]
[272416.297969][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.298424][ T167] ? lock_acquire+0xf6/0x130
[272416.298821][ T167] do_writepages+0x21e/0x560
[272416.299208][ T167] ? __pfx_do_writepages+0x10/0x10
[272416.299653][ T167] __writeback_single_inode+0x119/0xc70
[272416.300111][ T167] ? __pfx___writeback_single_inode+0x10/0x10
[272416.300628][ T167] ? lock_release.part.0+0x1c/0x50
[272416.301044][ T167] ? _raw_spin_unlock+0x23/0x40
[272416.301436][ T167] ? wbc_attach_and_unlock_inode.part.0+0x388/0x730
[272416.301991][ T167] writeback_sb_inodes+0x67a/0x10a0
[272416.302417][ T167] ? __pfx_writeback_sb_inodes+0x10/0x10
[272416.302892][ T167] ? arch_stack_walk+0xa1/0x100
[272416.303335][ T167] __writeback_inodes_wb+0xf2/0x270
[272416.303786][ T167] ? __pfx___writeback_inodes_wb+0x10/0x10
[272416.304258][ T167] ? queue_io+0x2ab/0x4a0
[272416.304638][ T167] wb_writeback+0x5af/0x7d0
[272416.305015][ T167] ? __pfx_wb_writeback+0x10/0x10
[272416.305426][ T167] ? get_nr_dirty_inodes+0xd8/0x1d0
[272416.305863][ T167] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[272416.306385][ T167] wb_do_writeback+0x51c/0x7b0
[272416.306798][ T167] ? __pfx_wb_do_writeback+0x10/0x10
[272416.307251][ T167] ? lock_acquire.part.0+0xb8/0x230
[272416.307691][ T167] ? process_one_work+0x7e7/0x1490
[272416.308108][ T167] wb_workfn+0x9f/0x3d0
[272416.308449][ T167] process_one_work+0x86b/0x1490
[272416.308874][ T167] ? __pfx_process_one_work+0x10/0x10
[272416.309310][ T167] ? lock_acquire.part.0+0xb8/0x230
[272416.309755][ T167] ? lock_is_held_type+0x9a/0x110
[272416.310169][ T167] ? assign_work+0x156/0x390
[272416.310568][ T167] worker_thread+0x5f2/0xfd0
[272416.310947][ T167] ? __pfx_worker_thread+0x10/0x10
[272416.311372][ T167] kthread+0x3a4/0x760
[272416.311740][ T167] ? __pfx_kthread+0x10/0x10
[272416.312128][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.312586][ T167] ? __pfx_kthread+0x10/0x10
[272416.312953][ T167] ret_from_fork+0x44d/0x680
[272416.313335][ T167] ? __pfx_ret_from_fork+0x10/0x10
[272416.313798][ T167] ? __switch_to+0x42c/0xd70
[272416.314187][ T167] ? __pfx_kthread+0x10/0x10
[272416.314594][ T167] ret_from_fork_asm+0x1a/0x30
[272416.315000][ T167] </TASK>
[272416.315261][ T167] INFO: task systemd-timedat:848017 blocked for more than 123 seconds.
[272416.315932][ T167] Not tainted 6.19.0-rc5-kts-xfs-g51aba4ca399+ #1
[272416.316491][ T167] Blocked by coredump.
[272416.316890][ T167] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[272416.317601][ T167] task:systemd-timedat state:D stack:0 pid:848017 tgid:848017 ppid:1 task_flags:0x40010c flags:0x00080801
[272416.318597][ T167] Call Trace:
[272416.318874][ T167] <TASK>
[272416.319132][ T167] __schedule+0x841/0x12a0
[272416.319491][ T167] ? __pfx___schedule+0x10/0x10
[272416.319904][ T167] ? find_held_lock+0x2b/0x80
[272416.320297][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.320750][ T167] ? lock_acquire+0xf6/0x130
[272416.321142][ T167] schedule+0xd1/0x250
[272416.321472][ T167] synchronize_rcu_expedited+0x3c5/0x480
[272416.321956][ T167] ? __pfx_synchronize_rcu_expedited+0x10/0x10
[272416.322454][ T167] ? __pfx_autoremove_wake_function+0x10/0x10
[272416.322962][ T167] ? __call_rcu_common.constprop.0+0x3ca/0x840
[272416.323457][ T167] ? __pfx_wait_rcu_exp_gp+0x10/0x10
[272416.323923][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.324371][ T167] namespace_unlock+0x52b/0x8d0
[272416.324786][ T167] ? lock_acquire.part.0+0xb8/0x230
[272416.325212][ T167] ? __pfx_namespace_unlock+0x10/0x10
[272416.325672][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.326106][ T167] ? do_raw_spin_unlock+0x14b/0x230
[272416.326522][ T167] nsproxy_free+0x32/0x490
[272416.326903][ T167] do_exit+0x574/0xcd0
[272416.327249][ T167] ? __pfx_do_exit+0x10/0x10
[272416.327648][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.328080][ T167] ? lock_acquire+0xf6/0x130
[272416.328465][ T167] do_group_exit+0xb8/0x370
[272416.328862][ T167] __x64_sys_exit_group+0x3c/0x50
[272416.329271][ T167] x64_sys_call+0x14fd/0x1510
[272416.329675][ T167] do_syscall_64+0x98/0x5c0
[272416.330047][ T167] ? __lock_release.isra.0+0x59/0x170
[272416.330489][ T167] ? do_user_addr_fault+0x811/0xed0
[272416.330936][ T167] ? trace_hardirqs_on_prepare+0x101/0x140
[272416.331418][ T167] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[272416.331961][ T167] ? clear_bhb_loop+0x50/0xa0
[272416.332363][ T167] ? clear_bhb_loop+0x50/0xa0
[272416.332764][ T167] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[272416.333237][ T167] RIP: 0033:0x7f24b7ecf088
[272416.333627][ T167] RSP: 002b:00007ffc35bface8 EFLAGS: 00000206 ORIG_RAX: 00000000000000e7
[272416.334295][ T167] RAX: ffffffffffffffda RBX: 00007f24b7ff7fe8 RCX: 00007f24b7ecf088
[272416.334941][ T167] RDX: 00007f24b771b6c8 RSI: fffffffffffffe80 RDI: 0000000000000000
[272416.335613][ T167] RBP: 00007ffc35bfad40 R08: 0000000000000000 R09: 0000000000000000
[272416.336246][ T167] R10: 0000000000000002 R11: 0000000000000206 R12: 0000000000000001
[272416.336888][ T167] R13: 0000000000000000 R14: 00007f24b7ff6680 R15: 00007f24b7ff8000
[272416.337543][ T167] </TASK>
[272416.337807][ T167]
[272416.337807][ T167] Showing all locks held in the system:
[272416.338418][ T167] 1 lock held by rcu_preempt/16:
[272416.338844][ T167] 1 lock held by khungtaskd/167:
[272416.339253][ T167] #0: ffffffffa6b2f8a0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire.constprop.0+0x7/0x30
[272416.340155][ T167] 2 locks held by kworker/4:2H/1630165:
[272416.340630][ T167] #0: ff11000102b8d548 ((wq_completion)kblockd){+.+.}-{0:0}, at: process_one_work+0xef8/0x1490
[272416.341493][ T167] #1: ff110003ee7efca8 ((work_completion)(&q->timeout_work)){+.+.}-{0:0}, at: process_one_work+0x7e7/0x1490
[272416.342445][ T167] 2 locks held by kworker/4:1H/3881240:
[272416.342909][ T167] #0: ff11000102b8d548 ((wq_completion)kblockd){+.+.}-{0:0}, at: process_one_work+0xef8/0x1490
[272416.343766][ T167] #1: ff1100068290fca8 ((work_completion)(&q->timeout_work)){+.+.}-{0:0}, at: process_one_work+0x7e7/0x1490
[272416.344726][ T167] 3 locks held by kworker/u96:1/781928:
[272416.345170][ T167] #0: ff11000101fef948 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0xef8/0x1490
[272416.346034][ T167] #1: ff1100015b1ffca8 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x7e7/0x1490
[272416.346990][ T167] #2: ff1100014baa40e0 (&type->s_umount_key#58){++++}-{4:4}, at: super_trylock_shared+0x1e/0xb0
[272416.347848][ T167] 3 locks held by kworker/u96:4/804951:
[272416.348292][ T167] #0: ff11000101fef948 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0xef8/0x1490
[272416.349157][ T167] #1: ff1100049a3bfca8 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x7e7/0x1490
[272416.350105][ T167] #2: ff1100014baa40e0 (&type->s_umount_key#58){++++}-{4:4}, at: super_trylock_shared+0x1e/0xb0
[272416.350956][ T167] 2 locks held by kworker/4:0/836415:
[272416.351405][ T167] #0: ff110001000d0948 ((wq_completion)events_freezable_pwr_efficient){+.+.}-{0:0}, at: process_one_work+0xef8/0x1490
[272416.352412][ T167] #1: ff1100044f10fca8 ((work_completion)(&(&ev->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x7e7/0x1490
[272416.353374][ T167] 1 lock held by systemd-timedat/848017:
[272416.353842][ T167] #0: ffffffffa6bf6878 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x56b/0xa50
[272416.354676][ T167] 2 locks held by xfs_repair/850507:
[272416.355103][ T167]
[272416.355303][ T167] =============================================
[272416.355303][ T167]
[272526.611589][ T18] rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { P850507 5-...D } 327127 jiffies s: 1086577 root: 0x1/.
[272526.614938][ T18] rcu: blocking rcu_node structures (internal RCU debug): l=1:0-11:0x20/T
[272526.617066][ T18] Sending NMI from CPU 12 to CPUs 5:
[272526.618396][ C5] NMI backtrace for cpu 5
[272526.618403][ C5] CPU: 5 UID: 0 PID: 0 Comm: swapper/5 Not tainted 6.19.0-rc5-kts-xfs-g51aba4ca399+ #1 PREEMPT(lazy)
[272526.618407][ C5] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[272526.618409][ C5] RIP: 0010:mm_get_cid+0x171/0x2c0
[272526.618417][ C5] Code: 90 a7 48 b8 00 00 00 00 00 fc ff df 83 e5 07 48 c1 eb 03 83 c5 03 48 01 c3 49 c7 c4 8c ca 51 a6 41 83 e4 07 41 83 c4 03 f3 90 <48> 8b 04 24 0f b6 00 41 38 c4 7c 08 84 c0 0f 85 f2 00 00 00 8b 35
[272526.618420][ C5] RSP: 0018:ff110001010c7cf8 EFLAGS: 00000046
[272526.618423][ C5] RAX: 0000000000000018 RBX: fffffbfff4f21c3c RCX: 0000000000000018
[272526.618425][ C5] RDX: 0000000000000018 RSI: dffffc0000000000 RDI: ff11000441730b90
[272526.618427][ C5] RBP: 0000000000000003 R08: 0000000040000000 R09: 0000000000000015
[272526.618429][ C5] R10: ff11000441730100 R11: ff11000162cfd5d4 R12: 0000000000000007
[272526.618432][ C5] R13: ff11000441730b80 R14: 0000000000000018 R15: 0000000000000018
[272526.618434][ C5] FS: 0000000000000000(0000) GS:ff11000f94139000(0000) knlGS:0000000000000000
[272526.618436][ C5] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[272526.618438][ C5] CR2: 00007fdfa0000020 CR3: 0000000146db6004 CR4: 0000000000773ef0
[272526.618443][ C5] PKRU: 55555554
[272526.618445][ C5] Call Trace:
[272526.618447][ C5] <TASK>
[272526.618451][ C5] mm_cid_switch_to+0xa6d/0x1300
[272526.618455][ C5] ? switch_mm_irqs_off+0x6fe/0x970
[272526.618460][ C5] __schedule+0x6b5/0x12a0
[272526.618465][ C5] ? __lock_release.isra.0+0x59/0x170
[272526.618470][ C5] ? __pfx___schedule+0x10/0x10
[272526.618473][ C5] ? do_raw_spin_unlock+0x14b/0x230
[272526.618477][ C5] ? lock_release.part.0+0x1c/0x50
[272526.618481][ C5] ? irqtime_account_process_tick+0x261/0x490
[272526.618486][ C5] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[272526.618491][ C5] schedule_idle+0x59/0x90
[272526.618494][ C5] do_idle+0x10e/0x190
[272526.618498][ C5] cpu_startup_entry+0x53/0x70
[272526.618502][ C5] start_secondary+0x220/0x2e0
[272526.618507][ C5] ? __pfx_start_secondary+0x10/0x10
[272526.618513][ C5] common_startup_64+0x13e/0x141
[272526.618522][ C5] </TASK>
[272538.899457][ T167] INFO: task xfsaild/sde3:653 blocked for more than 245 seconds.
[272538.901546][ T167] Not tainted 6.19.0-rc5-kts-xfs-g51aba4ca399+ #1
[272538.903256][ T167] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[272538.905364][ T167] task:xfsaild/sde3 state:D stack:0 pid:653 tgid:653 ppid:2 task_flags:0x200840 flags:0x00080000
[272538.908190][ T167] Call Trace:
[272538.909077][ T167] <TASK>
[272538.909879][ T167] __schedule+0x841/0x12a0
[272538.911048][ T167] ? __pfx___schedule+0x10/0x10
[272538.912321][ T167] ? find_held_lock+0x2b/0x80
[272538.913573][ T167] ? __lock_release.isra.0+0x59/0x170
[272538.914913][ T167] ? lock_acquire+0xf6/0x130
[272538.916077][ T167] schedule+0xd1/0x250
[272538.917146][ T167] io_schedule+0x8c/0x100
[272538.918286][ T167] blk_mq_get_tag+0x496/0xb00
[272538.919494][ T167] ? __pfx_blk_mq_get_tag+0x10/0x10
[272538.920844][ T167] ? __pfx_autoremove_wake_function+0x10/0x10
[272538.922374][ T167] ? lock_release.part.0+0x1c/0x50
[272538.923664][ T167] ? bfq_limit_depth+0x22e/0x950
[272538.924922][ T167] __blk_mq_alloc_requests+0x293/0xcf0
[272538.926317][ T167] blk_mq_submit_bio+0x87d/0x2470
[272538.927595][ T167] ? __pfx_blk_mq_submit_bio+0x10/0x10
[272538.929019][ T167] __submit_bio+0x362/0x700
[272538.930200][ T167] ? __pfx___submit_bio+0x10/0x10
[272538.931483][ T167] ? __lock_release.isra.0+0x59/0x170
[272538.932796][ T167] ? __pfx_blk_cgroup_bio_start+0x10/0x10
[272538.934366][ T167] ? submit_bio_noacct_nocheck+0x3cc/0x5a0
[272538.935825][ T167] submit_bio_noacct_nocheck+0x3cc/0x5a0
[272538.937233][ T167] ? __pfx_submit_bio_noacct_nocheck+0x10/0x10
[272538.938798][ T167] ? submit_bio_noacct+0x248/0xf60
[272538.940151][ T167] xfs_buf_submit_bio+0x414/0x5d0 [xfs]
[272538.942712][ T167] ? kernel_fpu_begin_mask+0x118/0x1c0
[272538.944164][ T167] ? __pfx_xfs_buf_submit_bio+0x10/0x10 [xfs]
[272538.946772][ T167] ? trace_hardirqs_on+0x18/0x140
[272538.948091][ T167] ? xfs_inobt_write_verify+0x17/0x190 [xfs]
[272538.950661][ T167] ? xfs_btree_agblock_calc_crc+0x17a/0x250 [xfs]
[272538.953377][ T167] ? xfs_buf_submit+0x34c/0x700 [xfs]
[272538.955846][ T167] xfs_buf_delwri_submit_nowait+0x1fb/0x410 [xfs]
[272538.958520][ T167] ? _raw_spin_unlock_irqrestore+0x44/0x60
[272538.959894][ T167] ? __pfx_xfs_buf_delwri_submit_nowait+0x10/0x10 [xfs]
[272538.962411][ T167] ? __pfx_up+0x10/0x10
[272538.963373][ T167] ? __lock_release.isra.0+0x59/0x170
[272538.964593][ T167] xfsaild_push+0x4d2/0x1c70 [xfs]
[272538.966763][ T167] ? __pfx_xfsaild_push+0x10/0x10 [xfs]
[272538.969072][ T167] ? __pfx___might_resched+0x10/0x10
[272538.970281][ T167] ? xfsaild+0x15f/0x450 [xfs]
[272538.972413][ T167] xfsaild+0x1c9/0x450 [xfs]
[272538.974465][ T167] ? __pfx_xfsaild+0x10/0x10 [xfs]
[272538.976426][ T167] kthread+0x3a4/0x760
[272538.977260][ T167] ? __pfx_kthread+0x10/0x10
[272538.978212][ T167] ? __lock_release.isra.0+0x59/0x170
[272538.979294][ T167] ? __pfx_kthread+0x10/0x10
[272538.980245][ T167] ret_from_fork+0x44d/0x680
[272538.981200][ T167] ? __pfx_ret_from_fork+0x10/0x10
[272538.982257][ T167] ? __switch_to+0x42c/0xd70
[272538.983216][ T167] ? __pfx_kthread+0x10/0x10
[272538.984199][ T167] ret_from_fork_asm+0x1a/0x30
[272538.985229][ T167] </TASK>
[272538.985866][ T167] INFO: task journal-offline:850686 blocked for more than 245 seconds.
[272538.987372][ T167] Not tainted 6.19.0-rc5-kts-xfs-g51aba4ca399+ #1
[272538.988630][ T167] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[272538.990175][ T167] task:journal-offline state:D stack:0 pid:850686 tgid:736 ppid:1 task_flags:0x400040 flags:0x00080000
[272538.992298][ T167] Call Trace:
[272538.992900][ T167] <TASK>
[272538.993444][ T167] __schedule+0x841/0x12a0
[272538.994262][ T167] ? __pfx___schedule+0x10/0x10
[272538.995181][ T167] ? find_held_lock+0x2b/0x80
[272538.996009][ T167] ? __lock_release.isra.0+0x59/0x170
[272538.996865][ T167] ? lock_acquire+0xf6/0x130
[272538.997656][ T167] schedule+0xd1/0x250
[272538.998384][ T167] io_schedule+0x8c/0x100
[272538.999155][ T167] blk_mq_get_tag+0x496/0xb00
[272538.999918][ T167] ? __pfx_blk_mq_get_tag+0x10/0x10
[272539.000761][ T167] ? __pfx_autoremove_wake_function+0x10/0x10
[272539.001794][ T167] ? lock_release.part.0+0x1c/0x50
[272539.002647][ T167] ? bfq_limit_depth+0x22e/0x950
[272539.003515][ T167] __blk_mq_alloc_requests+0x293/0xcf0
[272539.004460][ T167] blk_mq_submit_bio+0x87d/0x2470
[272539.005305][ T167] ? __pfx_blk_mq_submit_bio+0x10/0x10
[272539.006232][ T167] __submit_bio+0x362/0x700
[272539.006957][ T167] ? __pfx___submit_bio+0x10/0x10
[272539.007729][ T167] ? __pfx_blk_cgroup_bio_start+0x10/0x10
[272539.008639][ T167] ? __pfx_css_rstat_updated+0x10/0x10
[272539.009486][ T167] ? submit_bio_noacct_nocheck+0x3cc/0x5a0
[272539.010391][ T167] submit_bio_noacct_nocheck+0x3cc/0x5a0
[272539.011225][ T167] ? __pfx_submit_bio_noacct_nocheck+0x10/0x10
[272539.012216][ T167] ? submit_bio_noacct+0x248/0xf60
[272539.013047][ T167] iomap_ioend_writeback_submit+0xfa/0x180
[272539.013947][ T167] iomap_add_to_ioend+0x44f/0x1110
[272539.014703][ T167] xfs_writeback_range+0x5d/0x90 [xfs]
[272539.016109][ T167] iomap_writeback_folio+0x721/0xfe0
[272539.016856][ T167] ? __pfx_iomap_writeback_folio+0x10/0x10
[272539.017720][ T167] iomap_writepages+0x119/0x270
[272539.018424][ T167] ? __pfx_iomap_writepages+0x10/0x10
[272539.019200][ T167] ? __lock_release.isra.0+0x59/0x170
[272539.019996][ T167] ? lock_acquire+0xf6/0x130
[272539.020659][ T167] xfs_vm_writepages+0x1a2/0x2c0 [xfs]
[272539.022061][ T167] ? __pfx_xfs_vm_writepages+0x10/0x10 [xfs]
[272539.023490][ T167] ? lock_acquire.part.0+0xb8/0x230
[272539.024209][ T167] do_writepages+0x21e/0x560
[272539.024824][ T167] ? __pfx_do_writepages+0x10/0x10
[272539.025517][ T167] ? lock_release.part.0+0x1c/0x50
[272539.026182][ T167] ? _raw_spin_unlock+0x23/0x40
[272539.026807][ T167] ? wbc_attach_and_unlock_inode.part.0+0x388/0x730
[272539.027692][ T167] filemap_writeback+0x1c4/0x280
[272539.028376][ T167] ? __pfx_filemap_writeback+0x10/0x10
[272539.029158][ T167] ? lock_acquire.part.0+0xb8/0x230
[272539.029840][ T167] ? __fget_files+0x31/0x2f0
[272539.030481][ T167] file_write_and_wait_range+0x97/0x150
[272539.031245][ T167] xfs_file_fsync+0x121/0x800 [xfs]
[272539.032531][ T167] ? __pfx_xfs_file_fsync+0x10/0x10 [xfs]
[272539.033781][ T167] do_fsync+0x3b/0x80
[272539.034311][ T167] ? syscall_trace_enter+0x8e/0x2b0
[272539.035031][ T167] __x64_sys_fsync+0x35/0x50
[272539.035603][ T167] do_syscall_64+0x98/0x5c0
[272539.036192][ T167] ? trace_hardirqs_on_prepare+0x101/0x140
[272539.036946][ T167] ? do_syscall_64+0x180/0x5c0
[272539.037560][ T167] ? __seccomp_filter+0xa2/0x920
[272539.038164][ T167] ? __pfx___seccomp_filter+0x10/0x10
[272539.038796][ T167] ? __pfx___seccomp_filter+0x10/0x10
[272539.039501][ T167] ? __pfx___do_sys_rseq+0x10/0x10
[272539.040169][ T167] ? trace_hardirqs_on_prepare+0x101/0x140
[272539.040836][ T167] ? do_syscall_64+0x180/0x5c0
[272539.041438][ T167] ? __lock_release.isra.0+0x59/0x170
[272539.042086][ T167] ? switch_fpu_return+0x111/0x240
[272539.042681][ T167] ? trace_hardirqs_on_prepare+0x101/0x140
[272539.043391][ T167] ? ret_from_fork+0x290/0x680
[272539.043976][ T167] ? __pfx_ret_from_fork+0x10/0x10
[272539.044571][ T167] ? __switch_to+0x42c/0xd70
[272539.045124][ T167] ? clear_bhb_loop+0x50/0xa0
[272539.045667][ T167] ? clear_bhb_loop+0x50/0xa0
[272539.046253][ T167] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[272539.047000][ T167] RIP: 0033:0x7f20ab089422
[272539.047512][ T167] RSP: 002b:00007f20aa953958 EFLAGS: 00000246 ORIG_RAX: 000000000000004a
[272539.048441][ T167] RAX: ffffffffffffffda RBX: 000055ba3fa4be40 RCX: 00007f20ab089422
[272539.049311][ T167] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000016
[272539.050187][ T167] RBP: 00007f20aa953980 R08: 0000000000000000 R09: 0000000000000000
[272539.051072][ T167] R10: 0000000000000000 R11: 0000000000000246 R12: 00007f20ab5b6294
[272539.051967][ T167] R13: 00007ffd10b5ab50 R14: 00007f20aa954cdc R15: 00007ffd10b5ac57
[272539.052854][ T167] </TASK>
[272539.053228][ T167] Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings
[272539.054227][ T167]
[272539.054227][ T167] Showing all locks held in the system:
[272539.055077][ T167] 1 lock held by rcu_preempt/16:
[272539.055599][ T167] 1 lock held by khungtaskd/167:
[272539.056136][ T167] #0: ffffffffa6b2f8a0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire.constprop.0+0x7/0x30
[272539.057239][ T167] 2 locks held by kworker/4:2H/1630165:
[272539.057819][ T167] #0: ff11000102b8d548 ((wq_completion)kblockd){+.+.}-{0:0}, at: process_one_work+0xef8/0x1490
[272539.058936][ T167] #1: ff110003ee7efca8 ((work_completion)(&q->timeout_work)){+.+.}-{0:0}, at: process_one_work+0x7e7/0x1490
[272539.060101][ T167] 2 locks held by kworker/4:1H/3881240:
[272539.060628][ T167] #0: ff11000102b8d548 ((wq_completion)kblockd){+.+.}-{0:0}, at: process_one_work+0xef8/0x1490
[272539.061661][ T167] #1: ff1100068290fca8 ((work_completion)(&q->timeout_work)){+.+.}-{0:0}, at: process_one_work+0x7e7/0x1490
[272539.062825][ T167] 3 locks held by kworker/u96:1/781928:
[272539.063411][ T167] #0: ff11000101fef948 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0xef8/0x1490
[272539.064452][ T167] #1: ff1100015b1ffca8 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x7e7/0x1490
[272539.065603][ T167] #2: ff1100014baa40e0 (&type->s_umount_key#58){++++}-{4:4}, at: super_trylock_shared+0x1e/0xb0
[272539.066667][ T167] 3 locks held by kworker/u96:4/804951:
[272539.067228][ T167] #0: ff11000101fef948 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0xef8/0x1490
[272539.068234][ T167] #1: ff1100049a3bfca8 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x7e7/0x1490
[272539.069359][ T167] #2: ff1100014baa40e0 (&type->s_umount_key#58){++++}-{4:4}, at: super_trylock_shared+0x1e/0xb0
[272539.070364][ T167] 2 locks held by kworker/4:0/836415:
[272539.070849][ T167] #0: ff110001000d0948 ((wq_completion)events_freezable_pwr_efficient){+.+.}-{0:0}, at: process_one_work+0xef8/0x1490
[272539.072073][ T167] #1: ff1100044f10fca8 ((work_completion)(&(&ev->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x7e7/0x1490
[272539.073193][ T167] 1 lock held by systemd-timedat/848017:
[272539.073700][ T167] #0: ffffffffa6bf6878 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x56b/0xa50
[272539.074621][ T167] 2 locks held by xfs_repair/850507:
[272539.075123][ T167]
[272539.075354][ T167] =============================================
[272539.075354][ T167]
[272772.386310][ T18] rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { P850507 5-...D } 572905 jiffies s: 1086577 root: 0x1/.
[272772.389672][ T18] rcu: blocking rcu_node structures (internal RCU debug): l=1:0-11:0x20/T
[272772.391838][ T18] Sending NMI from CPU 13 to CPUs 5:
[272772.393175][ C5] NMI backtrace for cpu 5
[272772.393184][ C5] CPU: 5 UID: 0 PID: 0 Comm: swapper/5 Not tainted 6.19.0-rc5-kts-xfs-g51aba4ca399+ #1 PREEMPT(lazy)
[272772.393188][ C5] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[272772.393190][ C5] RIP: 0010:mm_get_cid+0x175/0x2c0
[272772.393199][ C5] Code: 00 00 00 00 00 fc ff df 83 e5 07 48 c1 eb 03 83 c5 03 48 01 c3 49 c7 c4 8c ca 51 a6 41 83 e4 07 41 83 c4 03 f3 90 48 8b 04 24 <0f> b6 00 41 38 c4 7c 08 84 c0 0f 85 f2 00 00 00 8b 35 d1 65 ea 03
[272772.393201][ C5] RSP: 0018:ff110001010c7cf8 EFLAGS: 00000046
[272772.393205][ C5] RAX: fffffbfff4ca3951 RBX: fffffbfff4f21c3c RCX: 0000000000000018
[272772.393208][ C5] RDX: 0000000000000018 RSI: dffffc0000000000 RDI: ff11000441730b90
[272772.393210][ C5] RBP: 0000000000000003 R08: 0000000040000000 R09: 0000000000000015
[272772.393212][ C5] R10: ff11000441730100 R11: ff11000162cfd5d4 R12: 0000000000000007
[272772.393214][ C5] R13: ff11000441730b80 R14: 0000000000000018 R15: 0000000000000018
[272772.393216][ C5] FS: 0000000000000000(0000) GS:ff11000f94139000(0000) knlGS:0000000000000000
[272772.393218][ C5] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[272772.393220][ C5] CR2: 00007fdfa0000020 CR3: 0000000146db6004 CR4: 0000000000773ef0
[272772.393225][ C5] PKRU: 55555554
[272772.393227][ C5] Call Trace:
[272772.393229][ C5] <TASK>
[272772.393233][ C5] mm_cid_switch_to+0xa6d/0x1300
[272772.393237][ C5] ? switch_mm_irqs_off+0x6fe/0x970
[272772.393242][ C5] __schedule+0x6b5/0x12a0
[272772.393247][ C5] ? __lock_release.isra.0+0x59/0x170
[272772.393252][ C5] ? __pfx___schedule+0x10/0x10
[272772.393255][ C5] ? do_raw_spin_unlock+0x14b/0x230
[272772.393260][ C5] ? lock_release.part.0+0x1c/0x50
[272772.393264][ C5] ? irqtime_account_process_tick+0x261/0x490
[272772.393269][ C5] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[272772.393274][ C5] schedule_idle+0x59/0x90
[272772.393278][ C5] do_idle+0x10e/0x190
[272772.393282][ C5] cpu_startup_entry+0x53/0x70
[272772.393285][ C5] start_secondary+0x220/0x2e0
[272772.393291][ C5] ? __pfx_start_secondary+0x10/0x10
[272772.393297][ C5] common_startup_64+0x13e/0x141
[272772.393306][ C5] </TASK>
[273018.123757][ T18] rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { P850507 5-...D } 818646 jiffies s: 1086577 root: 0x1/.
[273018.127254][ T18] rcu: blocking rcu_node structures (internal RCU debug): l=1:0-11:0x20/T
[273018.129431][ T18] Sending NMI from CPU 12 to CPUs 5:
[273018.130909][ C5] NMI backtrace for cpu 5
[273018.130916][ C5] CPU: 5 UID: 0 PID: 0 Comm: swapper/5 Not tainted 6.19.0-rc5-kts-xfs-g51aba4ca399+ #1 PREEMPT(lazy)
[273018.130920][ C5] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[273018.130922][ C5] RIP: 0010:mm_get_cid+0x175/0x2c0
[273018.130928][ C5] Code: 00 00 00 00 00 fc ff df 83 e5 07 48 c1 eb 03 83 c5 03 48 01 c3 49 c7 c4 8c ca 51 a6 41 83 e4 07 41 83 c4 03 f3 90 48 8b 04 24 <0f> b6 00 41 38 c4 7c 08 84 c0 0f 85 f2 00 00 00 8b 35 d1 65 ea 03
[273018.130931][ C5] RSP: 0018:ff110001010c7cf8 EFLAGS: 00000046
[273018.130934][ C5] RAX: fffffbfff4ca3951 RBX: fffffbfff4f21c3c RCX: 0000000000000018
[273018.130936][ C5] RDX: 0000000000000018 RSI: dffffc0000000000 RDI: ff11000441730b90
[273018.130938][ C5] RBP: 0000000000000003 R08: 0000000040000000 R09: 0000000000000015
[273018.130940][ C5] R10: ff11000441730100 R11: ff11000162cfd5d4 R12: 0000000000000007
[273018.130942][ C5] R13: ff11000441730b80 R14: 0000000000000018 R15: 0000000000000018
[273018.130944][ C5] FS: 0000000000000000(0000) GS:ff11000f94139000(0000) knlGS:0000000000000000
[273018.130951][ C5] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[273018.130953][ C5] CR2: 00007fdfa0000020 CR3: 0000000146db6004 CR4: 0000000000773ef0
[273018.130958][ C5] PKRU: 55555554
[273018.130959][ C5] Call Trace:
[273018.130961][ C5] <TASK>
[273018.130965][ C5] mm_cid_switch_to+0xa6d/0x1300
[273018.130969][ C5] ? switch_mm_irqs_off+0x6fe/0x970
[273018.130973][ C5] __schedule+0x6b5/0x12a0
[273018.130978][ C5] ? __lock_release.isra.0+0x59/0x170
[273018.130982][ C5] ? __pfx___schedule+0x10/0x10
[273018.130986][ C5] ? do_raw_spin_unlock+0x14b/0x230
[273018.130990][ C5] ? lock_release.part.0+0x1c/0x50
[273018.130994][ C5] ? irqtime_account_process_tick+0x261/0x490
[273018.130998][ C5] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[273018.131003][ C5] schedule_idle+0x59/0x90
[273018.131007][ C5] do_idle+0x10e/0x190
[273018.131011][ C5] cpu_startup_entry+0x53/0x70
[273018.131015][ C5] start_secondary+0x220/0x2e0
[273018.131020][ C5] ? __pfx_start_secondary+0x10/0x10
[273018.131026][ C5] common_startup_64+0x13e/0x141
[273018.131034][ C5] </TASK>
[273263.880365][ T18] rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { P850507 5-...D } 1064406 jiffies s: 1086577 root: 0x1/.
[273263.883796][ T18] rcu: blocking rcu_node structures (internal RCU debug): l=1:0-11:0x20/T
[273263.886032][ T18] Sending NMI from CPU 12 to CPUs 5:
[273263.887501][ C5] NMI backtrace for cpu 5
[273263.887509][ C5] CPU: 5 UID: 0 PID: 0 Comm: swapper/5 Not tainted 6.19.0-rc5-kts-xfs-g51aba4ca399+ #1 PREEMPT(lazy)
[273263.887514][ C5] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[273263.887517][ C5] RIP: 0010:mm_get_cid+0x171/0x2c0
[273263.887526][ C5] Code: 90 a7 48 b8 00 00 00 00 00 fc ff df 83 e5 07 48 c1 eb 03 83 c5 03 48 01 c3 49 c7 c4 8c ca 51 a6 41 83 e4 07 41 83 c4 03 f3 90 <48> 8b 04 24 0f b6 00 41 38 c4 7c 08 84 c0 0f 85 f2 00 00 00 8b 35
[273263.887529][ C5] RSP: 0018:ff110001010c7cf8 EFLAGS: 00000046
[273263.887534][ C5] RAX: 0000000000000018 RBX: fffffbfff4f21c3c RCX: 0000000000000018
[273263.887538][ C5] RDX: 0000000000000018 RSI: dffffc0000000000 RDI: ff11000441730b90
[273263.887540][ C5] RBP: 0000000000000003 R08: 0000000040000000 R09: 0000000000000015
[273263.887542][ C5] R10: ff11000441730100 R11: ff11000162cfd5d4 R12: 0000000000000007
[273263.887545][ C5] R13: ff11000441730b80 R14: 0000000000000018 R15: 0000000000000018
[273263.887547][ C5] FS: 0000000000000000(0000) GS:ff11000f94139000(0000) knlGS:0000000000000000
[273263.887549][ C5] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[273263.887551][ C5] CR2: 00007fdfa0000020 CR3: 0000000146db6004 CR4: 0000000000773ef0
[273263.887557][ C5] PKRU: 55555554
[273263.887558][ C5] Call Trace:
[273263.887560][ C5] <TASK>
[273263.887565][ C5] mm_cid_switch_to+0xa6d/0x1300
[273263.887570][ C5] ? switch_mm_irqs_off+0x6fe/0x970
[273263.887575][ C5] __schedule+0x6b5/0x12a0
[273263.887581][ C5] ? __lock_release.isra.0+0x59/0x170
[273263.887585][ C5] ? __pfx___schedule+0x10/0x10
[273263.887588][ C5] ? do_raw_spin_unlock+0x14b/0x230
[273263.887593][ C5] ? lock_release.part.0+0x1c/0x50
[273263.887597][ C5] ? irqtime_account_process_tick+0x261/0x490
[273263.887602][ C5] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[273263.887607][ C5] schedule_idle+0x59/0x90
[273263.887611][ C5] do_idle+0x10e/0x190
[273263.887615][ C5] cpu_startup_entry+0x53/0x70
[273263.887618][ C5] start_secondary+0x220/0x2e0
[273263.887625][ C5] ? __pfx_start_secondary+0x10/0x10
[273263.887631][ C5] common_startup_64+0x13e/0x141
[273263.887640][ C5] </TASK>
...
[-- Attachment #3: xfs_598_hang --]
[-- Type: text/plain, Size: 77361 bytes --]
[164362.087760][T615746] run fstests xfs/598 at 2026-01-17 23:32:19
[164362.249183][T3695722] MODE SENSE: unimplemented page/subpage: 0x0a/0x05
[164362.683408][T3644020] MODE SENSE: unimplemented page/subpage: 0x0a/0x05
[164363.177458][T3712338] MODE SENSE: unimplemented page/subpage: 0x0a/0x05
[164363.238180][T3623436] MODE SENSE: unimplemented page/subpage: 0x0a/0x05
[164365.560575][T3731123] XFS (sde): Mounting V5 Filesystem 370e9d06-7cd7-4409-a646-8df1281ad340
[164365.590219][T3731123] XFS (sde): Ending clean mount
[164366.806697][T3712338] MODE SENSE: unimplemented page/subpage: 0x0a/0x05
[164367.434544][T3731222] XFS (sdf): Mounting V5 Filesystem 0f2f1dc1-26a3-4727-b406-02fd47068ba0
[164367.460312][T3731222] XFS (sdf): Ending clean mount
[164367.933481][T3731257] XFS (sdf): Unmounting Filesystem 0f2f1dc1-26a3-4727-b406-02fd47068ba0
[164368.522780][T3641638] MODE SENSE: unimplemented page/subpage: 0x0a/0x05
[164368.807146][T3731285] XFS (sdf): Mounting V5 Filesystem 1e2f895f-6082-4205-b1c5-1be269eed299
[164368.832462][T3731285] XFS (sdf): Ending clean mount
[164371.724834][T3731467] XFS (sdf): Unmounting Filesystem 1e2f895f-6082-4205-b1c5-1be269eed299
[164382.615394][T3644020] MODE SENSE: unimplemented page/subpage: 0x0a/0x05
[164383.675161][T3731484] XFS (sdf): Mounting V5 Filesystem 1e2f895f-6082-4205-b1c5-1be269eed299
[164383.864407][T3731484] XFS (sdf): Ending clean mount
[164384.706147][T3731546] TARGET_CORE[loopback]: Expected Transfer Length: 0 does not match SCSI CDB Length: 512 for SAM Opcode: 0x8f
[164386.423013][T3731676] XFS (sdf): Unmounting Filesystem 1e2f895f-6082-4205-b1c5-1be269eed299
[164386.653848][T3642378] MODE SENSE: unimplemented page/subpage: 0x0a/0x05
[164387.355073][T3731890] XFS (sdf): Mounting V5 Filesystem 1e2f895f-6082-4205-b1c5-1be269eed299
[164387.382939][T3731890] XFS (sdf): Ending clean mount
[164387.443458][T3731900] XFS (sdf): Unmounting Filesystem 1e2f895f-6082-4205-b1c5-1be269eed299
[164396.179956][T3642378] MODE SENSE: unimplemented page/subpage: 0x0a/0x05
[164397.268929][T3731912] XFS (sdf): Mounting V5 Filesystem 1e2f895f-6082-4205-b1c5-1be269eed299
[164397.475222][T3731912] XFS (sdf): Ending clean mount
[164398.292849][T3731974] TARGET_CORE[loopback]: Expected Transfer Length: 0 does not match SCSI CDB Length: 512 for SAM Opcode: 0x8f
[164399.987980][T3732104] XFS (sdf): Unmounting Filesystem 1e2f895f-6082-4205-b1c5-1be269eed299
[164400.223661][T3718281] MODE SENSE: unimplemented page/subpage: 0x0a/0x05
[164465.805668][ C7] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
[164465.807458][ C7] rcu: 5-...0: (1 GPs behind) idle=9184/1/0x4000000000000000 softirq=12412680/12412680 fqs=7111
[164465.811931][ C7] rcu: (detected by 7, t=65008 jiffies, g=63951405, q=3734 ncpus=16)
[164465.813731][ C7] Sending NMI from CPU 7 to CPUs 5:
[164465.813769][ C5] NMI backtrace for cpu 5
[164465.813779][ C5] CPU: 5 UID: 0 PID: 0 Comm: swapper/5 Not tainted 6.19.0-rc1-kts-xfs-zoned-fixes-gec6aea2a5+ #1 PREEMPT(lazy)
[164465.813786][ C5] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[164465.813789][ C5] RIP: 0010:mm_get_cid+0x171/0x2c0
[164465.813800][ C5] Code: d0 a4 48 b8 00 00 00 00 00 fc ff df 83 e5 07 48 c1 eb 03 83 c5 03 48 01 c3 49 c7 c4 8c ba 91 a3 41 83 e4 07 41 83 c4 03 f3 90 <48> 8b 04 24 0f b6 00 41 38 c4 7c 08 84 c0 0f 85 f2 00 00 00 8b 35
[164465.813804][ C5] RSP: 0018:ffff88810115fcf8 EFLAGS: 00000046
[164465.813809][ C5] RAX: 0000000000000010 RBX: fffffbfff49a1c1c RCX: 0000000000000010
[164465.813813][ C5] RDX: 0000000000000010 RSI: dffffc0000000000 RDI: ffff888128a4d510
[164465.813815][ C5] RBP: 0000000000000003 R08: 0000000040000002 R09: 000000000000000e
[164465.813818][ C5] R10: ffff888128a4ca80 R11: ffff888565575654 R12: 0000000000000007
[164465.813821][ C5] R13: ffff888128a4d500 R14: 0000000000000010 R15: 0000000000000010
[164465.813824][ C5] FS: 0000000000000000(0000) GS:ffff888c67339000(0000) knlGS:0000000000000000
[164465.813827][ C5] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[164465.813830][ C5] CR2: 00007f7661c41820 CR3: 000000013d927004 CR4: 0000000000772ef0
[164465.813836][ C5] PKRU: 55555554
[164465.813838][ C5] Call Trace:
[164465.813841][ C5] <TASK>
[164465.813847][ C5] mm_cid_switch_to+0xa6d/0x1300
[164465.813853][ C5] ? switch_mm_irqs_off+0x6fe/0x970
[164465.813860][ C5] __schedule+0x6b5/0x12a0
[164465.813868][ C5] ? __pfx___schedule+0x10/0x10
[164465.813874][ C5] ? tick_nohz_stop_idle+0x206/0x330
[164465.813880][ C5] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[164465.813887][ C5] schedule_idle+0x59/0x90
[164465.813892][ C5] do_idle+0x10e/0x190
[164465.813898][ C5] cpu_startup_entry+0x53/0x70
[164465.813903][ C5] start_secondary+0x220/0x2e0
[164465.813910][ C5] ? __pfx_start_secondary+0x10/0x10
[164465.813918][ C5] common_startup_64+0x13e/0x141
[164465.813930][ C5] </TASK>
[164465.814770][ C7] rcu: rcu_preempt kthread starved for 32507 jiffies! g63951405 f0x0 RCU_GP_DOING_FQS(6) ->state=0x0 ->cpu=12
[164465.861005][ C7] rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
[164465.863304][ C7] rcu: RCU grace-period kthread stack dump:
[164465.864533][ C7] task:rcu_preempt state:R running task stack:0 pid:16 tgid:16 ppid:2 task_flags:0x208040 flags:0x00080000
[164465.867474][ C7] Call Trace:
[164465.868274][ C7] <TASK>
[164465.868930][ C7] ? do_raw_spin_lock+0x1d9/0x270
[164465.870096][ C7] ? __pfx_do_raw_spin_lock+0x10/0x10
[164465.871337][ C7] ? lock_acquire+0xf6/0x130
[164465.872474][ C7] ? raw_spin_rq_lock_nested+0x24/0x170
[164465.873734][ C7] ? _raw_spin_rq_lock_irqsave+0x41/0x50
[164465.874911][ C7] ? resched_cpu+0x62/0xf0
[164465.875856][ C7] ? force_qs_rnp+0x67d/0xaa0
[164465.876839][ C7] ? __pfx_rcu_watching_snap_recheck+0x10/0x10
[164465.878084][ C7] ? rcu_gp_fqs_loop+0x948/0x11b0
[164465.879123][ C7] ? rcu_gp_init+0xb0f/0x14a0
[164465.880101][ C7] ? __pfx_rcu_gp_fqs_loop+0x10/0x10
[164465.881199][ C7] ? _raw_spin_unlock_irq+0x28/0x50
[164465.882245][ C7] ? rcu_gp_cleanup+0x824/0xf40
[164465.883218][ C7] ? __pfx_rcu_gp_init+0x10/0x10
[164465.884313][ C7] ? __pfx_rcu_gp_cleanup+0x10/0x10
[164465.885399][ C7] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[164465.886816][ C7] ? rcu_gp_kthread+0x4f2/0x660
[164465.887868][ C7] ? lock_acquire+0xf6/0x130
[164465.888768][ C7] ? __pfx_rcu_gp_kthread+0x10/0x10
[164465.889732][ C7] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[164465.890918][ C7] ? __pfx_rcu_gp_kthread+0x10/0x10
[164465.891888][ C7] ? __kthread_parkme+0xb3/0x1f0
[164465.892807][ C7] ? __pfx_rcu_gp_kthread+0x10/0x10
[164465.893771][ C7] ? kthread+0x3a4/0x760
[164465.894583][ C7] ? __pfx_kthread+0x10/0x10
[164465.895451][ C7] ? __lock_release.isra.0+0x59/0x170
[164465.896432][ C7] ? __pfx_kthread+0x10/0x10
[164465.897287][ C7] ? ret_from_fork+0x50e/0x670
[164465.898192][ C7] ? __pfx_ret_from_fork+0x10/0x10
[164465.899205][ C7] ? __switch_to+0x42c/0xd70
[164465.900087][ C7] ? __pfx_kthread+0x10/0x10
[164465.900919][ C7] ? ret_from_fork_asm+0x1a/0x30
[164465.901777][ C7] </TASK>
[164465.902303][ C7] rcu: Stack dump where RCU GP kthread last ran:
[164465.903374][ C7] Sending NMI from CPU 7 to CPUs 12:
[164465.904302][ C12] NMI backtrace for cpu 12
[164465.904310][ C12] CPU: 12 UID: 0 PID: 16 Comm: rcu_preempt Not tainted 6.19.0-rc1-kts-xfs-zoned-fixes-gec6aea2a5+ #1 PREEMPT(lazy)
[164465.904316][ C12] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[164465.904319][ C12] RIP: 0010:__pv_queued_spin_lock_slowpath+0x232/0xdc0
[164465.904328][ C12] Code: 44 8b 2b c6 44 24 60 00 66 45 85 ed 74 1e 41 81 fd ff ff 00 00 0f 86 5e 02 00 00 41 81 e5 00 ff 00 00 0f 85 51 02 00 00 f3 90 <eb> b5 48 89 df be 01 00 00 00 e8 9f 96 bc fd be 01 00 00 00 48 8d
[164465.904332][ C12] RSP: 0018:ffff8881010c7948 EFLAGS: 00000046
[164465.904337][ C12] RAX: 0000000000000000 RBX: ffff888c0d0c5440 RCX: 0000000000000001
[164465.904341][ C12] RDX: 0000000000000000 RSI: 0000000000000004 RDI: ffff888c0d0c5440
[164465.904343][ C12] RBP: 0000000000000003 R08: ffffffffa28edce6 R09: ffffed1181a18a88
[164465.904346][ C12] R10: ffffed1181a18a89 R11: 0000000000000001 R12: ffffed1181a18a88
[164465.904349][ C12] R13: 0000000000000000 R14: ffff888c0d446408 R15: ffff888c0d446400
[164465.904351][ C12] FS: 0000000000000000(0000) GS:ffff888c676b9000(0000) knlGS:0000000000000000
[164465.904354][ C12] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[164465.904357][ C12] CR2: 00007f7657ff6cb8 CR3: 0000000108544001 CR4: 0000000000772ef0
[164465.904363][ C12] PKRU: 55555554
[164465.904365][ C12] Call Trace:
[164465.904368][ C12] <TASK>
[164465.904372][ C12] ? lock_release.part.0+0x1c/0x50
[164465.904378][ C12] ? __virt_addr_valid+0x1d1/0x3f0
[164465.904383][ C12] ? __pfx___pv_queued_spin_lock_slowpath+0x10/0x10
[164465.904389][ C12] ? __lock_acquire+0x55d/0xbd0
[164465.904398][ C12] do_raw_spin_lock+0x1d9/0x270
[164465.904404][ C12] ? __pfx_do_raw_spin_lock+0x10/0x10
[164465.904409][ C12] ? lock_acquire+0xf6/0x130
[164465.904415][ C12] raw_spin_rq_lock_nested+0x24/0x170
[164465.904427][ C12] _raw_spin_rq_lock_irqsave+0x41/0x50
[164465.904431][ C12] resched_cpu+0x62/0xf0
[164465.904436][ C12] force_qs_rnp+0x67d/0xaa0
[164465.904442][ C12] ? __pfx_rcu_watching_snap_recheck+0x10/0x10
[164465.904451][ C12] rcu_gp_fqs_loop+0x948/0x11b0
[164465.904456][ C12] ? rcu_gp_init+0xb0f/0x14a0
[164465.904463][ C12] ? __pfx_rcu_gp_fqs_loop+0x10/0x10
[164465.904467][ C12] ? _raw_spin_unlock_irq+0x28/0x50
[164465.904470][ C12] ? rcu_gp_cleanup+0x824/0xf40
[164465.904476][ C12] ? __pfx_rcu_gp_init+0x10/0x10
[164465.904482][ C12] ? __pfx_rcu_gp_cleanup+0x10/0x10
[164465.904487][ C12] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[164465.904494][ C12] rcu_gp_kthread+0x4f2/0x660
[164465.904499][ C12] ? lock_acquire+0xf6/0x130
[164465.904504][ C12] ? __pfx_rcu_gp_kthread+0x10/0x10
[164465.904508][ C12] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[164465.904514][ C12] ? __pfx_rcu_gp_kthread+0x10/0x10
[164465.904518][ C12] ? __kthread_parkme+0xb3/0x1f0
[164465.904524][ C12] ? __pfx_rcu_gp_kthread+0x10/0x10
[164465.904529][ C12] kthread+0x3a4/0x760
[164465.904534][ C12] ? __pfx_kthread+0x10/0x10
[164465.904539][ C12] ? __lock_release.isra.0+0x59/0x170
[164465.904545][ C12] ? __pfx_kthread+0x10/0x10
[164465.904549][ C12] ret_from_fork+0x50e/0x670
[164465.904555][ C12] ? __pfx_ret_from_fork+0x10/0x10
[164465.904560][ C12] ? __switch_to+0x42c/0xd70
[164465.904565][ C12] ? __pfx_kthread+0x10/0x10
[164465.904570][ C12] ret_from_fork_asm+0x1a/0x30
[164465.904582][ C12] </TASK>
[164515.495757][ C0] BUG: workqueue lockup - pool cpus=0-15 flags=0x4 nice=0 stuck for 85s!
[164515.499167][ C0] Showing busy workqueues and worker pools:
[164515.501301][ C0] workqueue events_unbound: flags=0x2
[164515.502579][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=2 refcnt=3
[164515.502605][ C0] pending: toggle_allocation_gate, flush_memcg_stats_dwork
[164515.505814][ C0] workqueue events_unbound: flags=0x2
[164515.507059][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164515.507082][ C0] pending: crng_reseed
[164515.507123][ C0] workqueue kvfree_rcu_reclaim: flags=0xa
[164515.511082][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164515.511104][ C0] pending: kfree_rcu_monitor
[164515.511147][ C0] workqueue writeback: flags=0x4a
[164515.515016][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=4 refcnt=5
[164515.515036][ C0] in-flight: 3727760:wb_workfn
[164515.515061][ C0] pending: 3*wb_workfn
[164515.515160][ C0] workqueue ipv6_addrconf: flags=0x6000a
[164515.520291][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=19
[164515.520313][ C0] pending: addrconf_verify_work
[164515.520330][ C0] inactive: addrconf_verify_work
[164515.520367][ C0] workqueue xfs-blockgc/sdc3: flags=0xe
[164515.525585][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164515.525606][ C0] pending: xfs_blockgc_worker [xfs]
[164515.529856][ C0] workqueue xfs-sync/sdc3: flags=0x104
[164515.531106][ C0] pwq 62: cpus=15 node=0 flags=0x0 nice=0 active=1 refcnt=2
[164515.531128][ C0] in-flight: 3623436:xfs_log_worker [xfs]
[164515.535620][ C0] workqueue xfs-cil/sdc3: flags=0xe
[164515.536860][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164515.536883][ C0] pending: xlog_cil_push_work [xfs]
[164515.541202][ C0] workqueue xfs-sync/sde: flags=0x104
[164515.542517][ C0] pwq 58: cpus=14 node=0 flags=0x0 nice=0 active=1 refcnt=2
[164515.542542][ C0] in-flight: 3732215:xfs_log_worker [xfs]
[164515.547065][ C0] workqueue xfs-cil/sde: flags=0xe
[164515.548270][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164515.548293][ C0] pending: xlog_cil_push_work [xfs]
[164515.552585][ C0] pool 58: cpus=14 node=0 flags=0x0 nice=0 hung=0s workers=3 idle: 3644020 3712377
[164515.552626][ C0] pool 62: cpus=15 node=0 flags=0x0 nice=0 hung=0s workers=3 idle: 3727348 3706576
[164515.556810][ C0] pool 64: cpus=0-15 flags=0x4 nice=0 hung=85s workers=7 idle: 3667344 3645384 3691158 3661385 3578714 3713463
[164545.704750][ C0] BUG: workqueue lockup - pool cpus=0-15 flags=0x4 nice=0 stuck for 116s!
[164545.706787][ C0] Showing busy workqueues and worker pools:
[164545.708108][ C0] workqueue events_unbound: flags=0x2
[164545.709311][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=2 refcnt=3
[164545.709335][ C0] pending: toggle_allocation_gate, flush_memcg_stats_dwork
[164545.709382][ C0] workqueue events_unbound: flags=0x2
[164545.713977][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164545.713999][ C0] pending: crng_reseed
[164545.714039][ C0] workqueue kvfree_rcu_reclaim: flags=0xa
[164545.717969][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164545.717989][ C0] pending: kfree_rcu_monitor
[164545.718033][ C0] workqueue writeback: flags=0x4a
[164545.721826][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=4 refcnt=5
[164545.721846][ C0] in-flight: 3727760:wb_workfn
[164545.721869][ C0] pending: 3*wb_workfn
[164545.721965][ C0] workqueue ipv6_addrconf: flags=0x6000a
[164545.727049][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=20
[164545.727070][ C0] pending: addrconf_verify_work
[164545.727088][ C0] inactive: 2*addrconf_verify_work
[164545.727127][ C0] workqueue xfs-blockgc/sdc3: flags=0xe
[164545.732387][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164545.732407][ C0] pending: xfs_blockgc_worker [xfs]
[164545.736691][ C0] workqueue xfs-sync/sdc3: flags=0x104
[164545.737969][ C0] pwq 62: cpus=15 node=0 flags=0x0 nice=0 active=1 refcnt=2
[164545.737992][ C0] in-flight: 3623436:xfs_log_worker [xfs]
[164545.742479][ C0] workqueue xfs-cil/sdc3: flags=0xe
[164545.743637][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164545.743660][ C0] pending: xlog_cil_push_work [xfs]
[164545.748058][ C0] workqueue xfs-sync/sde: flags=0x104
[164545.749333][ C0] pwq 58: cpus=14 node=0 flags=0x0 nice=0 active=1 refcnt=2
[164545.749356][ C0] in-flight: 3732215:xfs_log_worker [xfs]
[164545.753841][ C0] workqueue xfs-cil/sde: flags=0xe
[164545.754986][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164545.755007][ C0] pending: xlog_cil_push_work [xfs]
[164545.759288][ C0] pool 58: cpus=14 node=0 flags=0x0 nice=0 hung=0s workers=3 idle: 3644020 3712377
[164545.759329][ C0] pool 62: cpus=15 node=0 flags=0x0 nice=0 hung=0s workers=3 idle: 3727348 3706576
[164545.759360][ C0] pool 64: cpus=0-15 flags=0x4 nice=0 hung=116s workers=7 idle: 3667344 3645384 3691158 3661385 3578714 3713463
[164575.912713][ C0] BUG: workqueue lockup - pool cpus=0-15 flags=0x4 nice=0 stuck for 146s!
[164575.913601][ C0] Showing busy workqueues and worker pools:
[164575.914149][ C0] workqueue events: flags=0x100
[164575.914633][ C0] pwq 42: cpus=10 node=0 flags=0x0 nice=0 active=2 refcnt=3
[164575.914645][ C0] in-flight: 3712027:netstamp_clear
[164575.914658][ C0] pending: psi_avgs_work
[164575.914674][ C0] workqueue events_unbound: flags=0x2
[164575.916842][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=2 refcnt=3
[164575.916852][ C0] pending: toggle_allocation_gate, flush_memcg_stats_dwork
[164575.916876][ C0] workqueue events_unbound: flags=0x2
[164575.918789][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164575.918798][ C0] pending: crng_reseed
[164575.918825][ C0] workqueue kvfree_rcu_reclaim: flags=0xa
[164575.920408][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164575.920417][ C0] pending: kfree_rcu_monitor
[164575.920448][ C0] workqueue writeback: flags=0x4a
[164575.922038][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=4 refcnt=5
[164575.922045][ C0] in-flight: 3727760:wb_workfn
[164575.922056][ C0] pending: 3*wb_workfn
[164575.922136][ C0] workqueue ipv6_addrconf: flags=0x6000a
[164575.924319][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=20
[164575.924329][ C0] pending: addrconf_verify_work
[164575.924337][ C0] inactive: 2*addrconf_verify_work
[164575.924365][ C0] workqueue xfs-blockgc/sdc3: flags=0xe
[164575.926540][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164575.926549][ C0] pending: xfs_blockgc_worker [xfs]
[164575.928422][ C0] workqueue xfs-sync/sdc3: flags=0x104
[164575.928914][ C0] pwq 62: cpus=15 node=0 flags=0x0 nice=0 active=1 refcnt=2
[164575.928925][ C0] in-flight: 3623436:xfs_log_worker [xfs]
[164575.929561][ C0] workqueue xfs-cil/sdc3: flags=0xe
[164575.931266][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164575.931277][ C0] pending: xlog_cil_push_work [xfs]
[164575.933154][ C0] workqueue xfs-sync/sde: flags=0x104
[164575.933653][ C0] pwq 58: cpus=14 node=0 flags=0x0 nice=0 active=1 refcnt=2
[164575.933664][ C0] in-flight: 3732215:xfs_log_worker [xfs]
[164575.935527][ C0] workqueue xfs-cil/sde: flags=0xe
[164575.935989][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164575.936000][ C0] pending: xlog_cil_push_work [xfs]
[164575.936652][ C0] pool 42: cpus=10 node=0 flags=0x0 nice=0 hung=13s workers=3 idle: 3726993 3662394
[164575.938713][ C0] pool 58: cpus=14 node=0 flags=0x0 nice=0 hung=0s workers=3 idle: 3644020 3712377
[164575.938731][ C0] pool 62: cpus=15 node=0 flags=0x0 nice=0 hung=0s workers=3 idle: 3727348 3706576
[164575.938746][ C0] pool 64: cpus=0-15 flags=0x4 nice=0 hung=146s workers=7 idle: 3667344 3645384 3691158 3661385 3578714 3713463
[164582.111906][ C10] watchdog: BUG: soft lockup - CPU#10 stuck for 23s! [kworker/10:3:3712027]
[164582.111915][ C10] CPU#10 Utilization every 4000ms during lockup:
[164582.111918][ C10] #1: 100% system, 0% softirq, 2% hardirq, 0% idle
[164582.111922][ C10] #2: 100% system, 0% softirq, 1% hardirq, 0% idle
[164582.111925][ C10] #3: 100% system, 0% softirq, 2% hardirq, 0% idle
[164582.111928][ C10] #4: 100% system, 0% softirq, 1% hardirq, 0% idle
[164582.111931][ C10] #5: 100% system, 0% softirq, 1% hardirq, 0% idle
[164582.111935][ C10] Modules linked in: iscsi_target_mod tcm_loop target_core_pscsi target_core_file target_core_iblock btrfs xor raid6_pq dm_flakey target_core_user target_core_mod rfkill nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables qrtr sunrpc intel_rapl_msr intel_rapl_common intel_uncore_frequency_common skx_edac_common nfit libnvdimm kvm_intel iTCO_wdt intel_pmc_bxt ppdev iTCO_vendor_support kvm irqbypass i2c_i801 parport_pc rapl i2c_smbus parport lpc_ich joydev fuse loop dm_multipath nfnetlink zram vsock_loopback lz4hc_compress lz4_compress vmw_vsock_virtio_transport_common zstd_compress vmw_vsock_vmci_transport vmw_vmci vsock xfs bochs drm_client_lib drm_shmem_helper drm_kms_helper virtio_net sym53c8xx net_failover ghash_clmulni_intel drm failover scsi_transport_spi serio_raw scsi_dh_rdac scsi_dh_emc scsi_dh_alua i2c_dev qemu_fw_cfg [last unloaded: scsi_debug]
[164582.112045][ C10] irq event stamp: 220528
[164582.112047][ C10] hardirqs last enabled at (220527): [<ffffffff9f40160a>] asm_sysvec_apic_timer_interrupt+0x1a/0x20
[164582.112057][ C10] hardirqs last disabled at (220528): [<ffffffffa28bf18e>] sysvec_apic_timer_interrupt+0xe/0x90
[164582.112062][ C10] softirqs last enabled at (220404): [<ffffffff9f9bcb82>] handle_softirqs+0x522/0x7b0
[164582.112068][ C10] softirqs last disabled at (220399): [<ffffffff9f9bcfa1>] __irq_exit_rcu+0x181/0x1d0
[164582.112075][ C10] CPU: 10 UID: 0 PID: 3712027 Comm: kworker/10:3 Not tainted 6.19.0-rc1-kts-xfs-zoned-fixes-gec6aea2a5+ #1 PREEMPT(lazy)
[164582.112081][ C10] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[164582.112084][ C10] Workqueue: events netstamp_clear
[164582.112090][ C10] RIP: 0010:smp_call_function_many_cond+0x888/0x1070
[164582.112096][ C10] Code: 08 84 c9 0f 85 6e 05 00 00 8b 43 08 a8 01 74 2e 48 89 f1 49 89 f6 48 c1 e9 03 41 83 e6 07 4c 01 f9 41 83 c6 03 f3 90 0f b6 01 <41> 38 c6 7c 08 84 c0 0f 85 db 04 00 00 8b 43 08 a8 01 75 e7 83 c2
[164582.112100][ C10] RSP: 0018:ffff888485a1f990 EFLAGS: 00000202
[164582.112104][ C10] RAX: 0000000000000000 RBX: ffffe8ffff082980 RCX: fffff91fffe10531
[164582.112107][ C10] RDX: 0000000000000005 RSI: ffffe8ffff082988 RDI: ffffffffa390b188
[164582.112109][ C10] RBP: ffffed1181a68d59 R08: ffff8881014fcd60 R09: 0000000000000000
[164582.112112][ C10] R10: 1ffff1102029f9ac R11: 0000000000000000 R12: ffff888c0d346ac0
[164582.112115][ C10] R13: ffffed1181a68d58 R14: 0000000000000003 R15: dffffc0000000000
[164582.112117][ C10] FS: 0000000000000000(0000) GS:ffff888c675b9000(0000) knlGS:0000000000000000
[164582.112121][ C10] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[164582.112123][ C10] CR2: 00007fb20918fa10 CR3: 00000007db69c004 CR4: 0000000000772ef0
[164582.112129][ C10] PKRU: 55555554
[164582.112131][ C10] Call Trace:
[164582.112134][ C10] <TASK>
[164582.112140][ C10] ? __pfx_do_sync_core+0x10/0x10
[164582.112147][ C10] ? __text_poke+0x371/0x870
[164582.112157][ C10] ? __pfx_smp_call_function_many_cond+0x10/0x10
[164582.112160][ C10] ? __pfx___text_poke+0x10/0x10
[164582.112166][ C10] ? __pfx___might_resched+0x10/0x10
[164582.112175][ C10] on_each_cpu_cond_mask+0x24/0x40
[164582.112179][ C10] smp_text_poke_batch_finish+0x45c/0xd20
[164582.112184][ C10] ? tpacket_rcv+0xe19/0x3f40
[164582.112192][ C10] ? __pfx_smp_text_poke_batch_finish+0x10/0x10
[164582.112197][ C10] ? __jump_label_patch+0x25f/0x320
[164582.112207][ C10] ? arch_jump_label_transform_queue+0xa8/0x110
[164582.112218][ C10] arch_jump_label_transform_apply+0x1c/0x30
[164582.112224][ C10] static_key_enable_cpuslocked+0x16c/0x230
[164582.112230][ C10] static_key_enable+0x1f/0x30
[164582.112235][ C10] process_one_work+0x86b/0x1490
[164582.112251][ C10] ? __pfx_process_one_work+0x10/0x10
[164582.112256][ C10] ? lock_acquire.part.0+0xb8/0x230
[164582.112265][ C10] ? lock_is_held_type+0x9a/0x110
[164582.112272][ C10] ? assign_work+0x156/0x390
[164582.112280][ C10] worker_thread+0x5f2/0xfd0
[164582.112289][ C10] ? __pfx_worker_thread+0x10/0x10
[164582.112293][ C10] ? __kthread_parkme+0xb3/0x1f0
[164582.112302][ C10] ? __pfx_worker_thread+0x10/0x10
[164582.112306][ C10] kthread+0x3a4/0x760
[164582.112313][ C10] ? __pfx_kthread+0x10/0x10
[164582.112318][ C10] ? __lock_release.isra.0+0x59/0x170
[164582.112332][ C10] ? __pfx_kthread+0x10/0x10
[164582.112339][ C10] ret_from_fork+0x50e/0x670
[164582.112346][ C10] ? __pfx_ret_from_fork+0x10/0x10
[164582.112352][ C10] ? __switch_to+0x42c/0xd70
[164582.112359][ C10] ? __pfx_kthread+0x10/0x10
[164582.112365][ C10] ret_from_fork_asm+0x1a/0x30
[164582.112384][ C10] </TASK>
[164606.120729][ C0] BUG: workqueue lockup - pool cpus=0-15 flags=0x4 nice=0 stuck for 176s!
[164606.121633][ C0] Showing busy workqueues and worker pools:
[164606.122170][ C0] workqueue events: flags=0x100
[164606.122663][ C0] pwq 42: cpus=10 node=0 flags=0x0 nice=0 active=2 refcnt=3
[164606.122674][ C0] in-flight: 3712027:netstamp_clear
[164606.122687][ C0] pending: psi_avgs_work
[164606.124344][ C0] workqueue events_unbound: flags=0x2
[164606.124838][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=2 refcnt=3
[164606.124848][ C0] pending: toggle_allocation_gate, flush_memcg_stats_dwork
[164606.124873][ C0] workqueue events_unbound: flags=0x2
[164606.126746][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164606.126755][ C0] pending: idle_cull_fn
[164606.126763][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164606.126770][ C0] pending: idle_cull_fn
[164606.126776][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164606.126783][ C0] pending: idle_cull_fn
[164606.126789][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164606.126796][ C0] pending: idle_cull_fn
[164606.126802][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164606.126809][ C0] pending: crng_reseed
[164606.126822][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164606.126829][ C0] pending: idle_cull_fn
[164606.126849][ C0] workqueue kvfree_rcu_reclaim: flags=0xa
[164606.133969][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164606.133978][ C0] pending: kfree_rcu_monitor
[164606.134010][ C0] workqueue writeback: flags=0x4a
[164606.135631][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=4 refcnt=5
[164606.135639][ C0] in-flight: 3727760:wb_workfn
[164606.135650][ C0] pending: 3*wb_workfn
[164606.137282][ C0] workqueue ipv6_addrconf: flags=0x6000a
[164606.137809][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=20
[164606.137819][ C0] pending: addrconf_verify_work
[164606.137827][ C0] inactive: 2*addrconf_verify_work
[164606.137852][ C0] workqueue xfs-blockgc/sdc3: flags=0xe
[164606.140035][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164606.140044][ C0] pending: xfs_blockgc_worker [xfs]
[164606.141941][ C0] workqueue xfs-sync/sdc3: flags=0x104
[164606.142486][ C0] pwq 62: cpus=15 node=0 flags=0x0 nice=0 active=1 refcnt=2
[164606.142496][ C0] in-flight: 3623436:xfs_log_worker [xfs]
[164606.144430][ C0] workqueue xfs-cil/sdc3: flags=0xe
[164606.144918][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164606.144928][ C0] pending: xlog_cil_push_work [xfs]
[164606.145650][ C0] workqueue xfs-sync/sde: flags=0x104
[164606.147518][ C0] pwq 58: cpus=14 node=0 flags=0x0 nice=0 active=1 refcnt=2
[164606.147549][ C0] in-flight: 3732215:xfs_log_worker [xfs]
[164606.150780][ C0] workqueue xfs-cil/sde: flags=0xe
[164606.151502][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164606.151519][ C0] pending: xlog_cil_push_work [xfs]
[164606.154116][ C0] pool 42: cpus=10 node=0 flags=0x0 nice=0 hung=43s workers=3 idle: 3726993 3662394
[164606.154153][ C0] pool 58: cpus=14 node=0 flags=0x0 nice=0 hung=0s workers=3 idle: 3644020 3712377
[164606.154174][ C0] pool 62: cpus=15 node=0 flags=0x0 nice=0 hung=0s workers=3 idle: 3727348 3706576
[164606.154193][ C0] pool 64: cpus=0-15 flags=0x4 nice=0 hung=176s workers=7 idle: 3667344 3645384 3691158 3661385 3578714 3713463
[164610.111910][ C10] watchdog: BUG: soft lockup - CPU#10 stuck for 49s! [kworker/10:3:3712027]
[164610.111916][ C10] CPU#10 Utilization every 4000ms during lockup:
[164610.111919][ C10] #1: 100% system, 0% softirq, 2% hardirq, 0% idle
[164610.111923][ C10] #2: 100% system, 0% softirq, 1% hardirq, 0% idle
[164610.111926][ C10] #3: 100% system, 0% softirq, 1% hardirq, 0% idle
[164610.111929][ C10] #4: 100% system, 0% softirq, 2% hardirq, 0% idle
[164610.111932][ C10] #5: 100% system, 0% softirq, 1% hardirq, 0% idle
[164610.111934][ C10] Modules linked in: iscsi_target_mod tcm_loop target_core_pscsi target_core_file target_core_iblock btrfs xor raid6_pq dm_flakey target_core_user target_core_mod rfkill nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables qrtr sunrpc intel_rapl_msr intel_rapl_common intel_uncore_frequency_common skx_edac_common nfit libnvdimm kvm_intel iTCO_wdt intel_pmc_bxt ppdev iTCO_vendor_support kvm irqbypass i2c_i801 parport_pc rapl i2c_smbus parport lpc_ich joydev fuse loop dm_multipath nfnetlink zram vsock_loopback lz4hc_compress lz4_compress vmw_vsock_virtio_transport_common zstd_compress vmw_vsock_vmci_transport vmw_vmci vsock xfs bochs drm_client_lib drm_shmem_helper drm_kms_helper virtio_net sym53c8xx net_failover ghash_clmulni_intel drm failover scsi_transport_spi serio_raw scsi_dh_rdac scsi_dh_emc scsi_dh_alua i2c_dev qemu_fw_cfg [last unloaded: scsi_debug]
[164610.112037][ C10] irq event stamp: 279272
[164610.112039][ C10] hardirqs last enabled at (279271): [<ffffffff9f40160a>] asm_sysvec_apic_timer_interrupt+0x1a/0x20
[164610.112049][ C10] hardirqs last disabled at (279272): [<ffffffffa28bf18e>] sysvec_apic_timer_interrupt+0xe/0x90
[164610.112055][ C10] softirqs last enabled at (279170): [<ffffffff9f9bcb82>] handle_softirqs+0x522/0x7b0
[164610.112060][ C10] softirqs last disabled at (279165): [<ffffffff9f9bcfa1>] __irq_exit_rcu+0x181/0x1d0
[164610.112068][ C10] CPU: 10 UID: 0 PID: 3712027 Comm: kworker/10:3 Tainted: G L 6.19.0-rc1-kts-xfs-zoned-fixes-gec6aea2a5+ #1 PREEMPT(lazy)
[164610.112075][ C10] Tainted: [L]=SOFTLOCKUP
[164610.112077][ C10] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[164610.112080][ C10] Workqueue: events netstamp_clear
[164610.112087][ C10] RIP: 0010:smp_call_function_many_cond+0x888/0x1070
[164610.112092][ C10] Code: 08 84 c9 0f 85 6e 05 00 00 8b 43 08 a8 01 74 2e 48 89 f1 49 89 f6 48 c1 e9 03 41 83 e6 07 4c 01 f9 41 83 c6 03 f3 90 0f b6 01 <41> 38 c6 7c 08 84 c0 0f 85 db 04 00 00 8b 43 08 a8 01 75 e7 83 c2
[164610.112097][ C10] RSP: 0018:ffff888485a1f990 EFLAGS: 00000202
[164610.112100][ C10] RAX: 0000000000000000 RBX: ffffe8ffff082980 RCX: fffff91fffe10531
[164610.112104][ C10] RDX: 0000000000000005 RSI: ffffe8ffff082988 RDI: ffffffffa390b188
[164610.112106][ C10] RBP: ffffed1181a68d59 R08: ffff8881014fcd60 R09: 0000000000000000
[164610.112109][ C10] R10: 1ffff1102029f9ac R11: 0000000000000000 R12: ffff888c0d346ac0
[164610.112112][ C10] R13: ffffed1181a68d58 R14: 0000000000000003 R15: dffffc0000000000
[164610.112114][ C10] FS: 0000000000000000(0000) GS:ffff888c675b9000(0000) knlGS:0000000000000000
[164610.112118][ C10] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[164610.112120][ C10] CR2: 00007fb20918fa10 CR3: 00000007db69c004 CR4: 0000000000772ef0
[164610.112126][ C10] PKRU: 55555554
[164610.112128][ C10] Call Trace:
[164610.112131][ C10] <TASK>
[164610.112137][ C10] ? __pfx_do_sync_core+0x10/0x10
[164610.112144][ C10] ? __text_poke+0x371/0x870
[164610.112154][ C10] ? __pfx_smp_call_function_many_cond+0x10/0x10
[164610.112158][ C10] ? __pfx___text_poke+0x10/0x10
[164610.112164][ C10] ? __pfx___might_resched+0x10/0x10
[164610.112173][ C10] on_each_cpu_cond_mask+0x24/0x40
[164610.112177][ C10] smp_text_poke_batch_finish+0x45c/0xd20
[164610.112181][ C10] ? tpacket_rcv+0xe19/0x3f40
[164610.112190][ C10] ? __pfx_smp_text_poke_batch_finish+0x10/0x10
[164610.112195][ C10] ? __jump_label_patch+0x25f/0x320
[164610.112205][ C10] ? arch_jump_label_transform_queue+0xa8/0x110
[164610.112216][ C10] arch_jump_label_transform_apply+0x1c/0x30
[164610.112221][ C10] static_key_enable_cpuslocked+0x16c/0x230
[164610.112228][ C10] static_key_enable+0x1f/0x30
[164610.112232][ C10] process_one_work+0x86b/0x1490
[164610.112248][ C10] ? __pfx_process_one_work+0x10/0x10
[164610.112253][ C10] ? lock_acquire.part.0+0xb8/0x230
[164610.112262][ C10] ? lock_is_held_type+0x9a/0x110
[164610.112270][ C10] ? assign_work+0x156/0x390
[164610.112277][ C10] worker_thread+0x5f2/0xfd0
[164610.112286][ C10] ? __pfx_worker_thread+0x10/0x10
[164610.112291][ C10] ? __kthread_parkme+0xb3/0x1f0
[164610.112299][ C10] ? __pfx_worker_thread+0x10/0x10
[164610.112303][ C10] kthread+0x3a4/0x760
[164610.112310][ C10] ? __pfx_kthread+0x10/0x10
[164610.112315][ C10] ? __lock_release.isra.0+0x59/0x170
[164610.112323][ C10] ? __pfx_kthread+0x10/0x10
[164610.112329][ C10] ret_from_fork+0x50e/0x670
[164610.112335][ C10] ? __pfx_ret_from_fork+0x10/0x10
[164610.112342][ C10] ? __switch_to+0x42c/0xd70
[164610.112349][ C10] ? __pfx_kthread+0x10/0x10
[164610.112355][ C10] ret_from_fork_asm+0x1a/0x30
[164610.112374][ C10] </TASK>
[164636.327795][ C0] BUG: workqueue lockup - pool cpus=0-15 flags=0x4 nice=0 stuck for 206s!
[164636.329851][ C0] Showing busy workqueues and worker pools:
[164636.331184][ C0] workqueue events: flags=0x100
[164636.332312][ C0] pwq 42: cpus=10 node=0 flags=0x0 nice=0 active=2 refcnt=3
[164636.332338][ C0] in-flight: 3712027:netstamp_clear
[164636.332363][ C0] pending: psi_avgs_work
[164636.332390][ C0] workqueue events_unbound: flags=0x2
[164636.337543][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=2 refcnt=3
[164636.337567][ C0] pending: toggle_allocation_gate, flush_memcg_stats_dwork
[164636.337611][ C0] workqueue events_unbound: flags=0x2
[164636.342083][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164636.342107][ C0] pending: idle_cull_fn
[164636.342125][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=2 refcnt=3
[164636.342141][ C0] pending: 2*idle_cull_fn
[164636.342155][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164636.342170][ C0] pending: idle_cull_fn
[164636.342182][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164636.342197][ C0] pending: idle_cull_fn
[164636.342209][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164636.342223][ C0] pending: crng_reseed
[164636.342243][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164636.342258][ C0] pending: idle_cull_fn
[164636.342285][ C0] workqueue kvfree_rcu_reclaim: flags=0xa
[164636.359392][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164636.359414][ C0] pending: kfree_rcu_monitor
[164636.359479][ C0] workqueue writeback: flags=0x4a
[164636.363221][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=4 refcnt=5
[164636.363241][ C0] in-flight: 3727760:wb_workfn
[164636.363265][ C0] pending: 3*wb_workfn
[164636.363365][ C0] workqueue ipv6_addrconf: flags=0x6000a
[164636.368365][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=20
[164636.368387][ C0] pending: addrconf_verify_work
[164636.368404][ C0] inactive: 2*addrconf_verify_work
[164636.368459][ C0] workqueue xfs-blockgc/sdc3: flags=0xe
[164636.374143][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164636.374164][ C0] pending: xfs_blockgc_worker [xfs]
[164636.378468][ C0] workqueue xfs-sync/sdc3: flags=0x104
[164636.379684][ C0] pwq 62: cpus=15 node=0 flags=0x0 nice=0 active=1 refcnt=2
[164636.379718][ C0] in-flight: 3623436:xfs_log_worker [xfs]
[164636.384175][ C0] workqueue xfs-cil/sdc3: flags=0xe
[164636.385389][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164636.385411][ C0] pending: xlog_cil_push_work [xfs]
[164636.389691][ C0] workqueue xfs-sync/sde: flags=0x104
[164636.390947][ C0] pwq 58: cpus=14 node=0 flags=0x0 nice=0 active=1 refcnt=2
[164636.390970][ C0] in-flight: 3732215:xfs_log_worker [xfs]
[164636.395191][ C0] workqueue xfs-cil/sde: flags=0xe
[164636.396274][ C0] pwq 64: cpus=0-15 flags=0x4 nice=0 active=1 refcnt=2
[164636.396295][ C0] pending: xlog_cil_push_work [xfs]
[164636.400174][ C0] pool 42: cpus=10 node=0 flags=0x0 nice=0 hung=73s workers=3 idle: 3726993 3662394
[164636.400223][ C0] pool 58: cpus=14 node=0 flags=0x0 nice=0 hung=0s workers=3 idle: 3644020 3712377
[164636.400253][ C0] pool 62: cpus=15 node=0 flags=0x0 nice=0 hung=0s workers=3 idle: 3727348 3706576
[164636.400280][ C0] pool 64: cpus=0-15 flags=0x4 nice=0 hung=206s workers=7 idle: 3667344 3645384 3691158 3661385 3578714 3713463
[164638.111923][ C10] watchdog: BUG: soft lockup - CPU#10 stuck for 75s! [kworker/10:3:3712027]
[164638.111929][ C10] CPU#10 Utilization every 4000ms during lockup:
[164638.111932][ C10] #1: 100% system, 0% softirq, 1% hardirq, 0% idle
[164638.111936][ C10] #2: 100% system, 0% softirq, 1% hardirq, 0% idle
[164638.111939][ C10] #3: 100% system, 0% softirq, 1% hardirq, 0% idle
[164638.111941][ C10] #4: 100% system, 0% softirq, 1% hardirq, 0% idle
[164638.111944][ C10] #5: 100% system, 0% softirq, 1% hardirq, 0% idle
[164638.111947][ C10] Modules linked in: iscsi_target_mod tcm_loop target_core_pscsi target_core_file target_core_iblock btrfs xor raid6_pq dm_flakey target_core_user target_core_mod rfkill nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables qrtr sunrpc intel_rapl_msr intel_rapl_common intel_uncore_frequency_common skx_edac_common nfit libnvdimm kvm_intel iTCO_wdt intel_pmc_bxt ppdev iTCO_vendor_support kvm irqbypass i2c_i801 parport_pc rapl i2c_smbus parport lpc_ich joydev fuse loop dm_multipath nfnetlink zram vsock_loopback lz4hc_compress lz4_compress vmw_vsock_virtio_transport_common zstd_compress vmw_vsock_vmci_transport vmw_vmci vsock xfs bochs drm_client_lib drm_shmem_helper drm_kms_helper virtio_net sym53c8xx net_failover ghash_clmulni_intel drm failover scsi_transport_spi serio_raw scsi_dh_rdac scsi_dh_emc scsi_dh_alua i2c_dev qemu_fw_cfg [last unloaded: scsi_debug]
[164638.112050][ C10] irq event stamp: 338004
[164638.112052][ C10] hardirqs last enabled at (338003): [<ffffffff9f40160a>] asm_sysvec_apic_timer_interrupt+0x1a/0x20
[164638.112061][ C10] hardirqs last disabled at (338004): [<ffffffffa28bf18e>] sysvec_apic_timer_interrupt+0xe/0x90
[164638.112066][ C10] softirqs last enabled at (337902): [<ffffffff9f9bcb82>] handle_softirqs+0x522/0x7b0
[164638.112071][ C10] softirqs last disabled at (337897): [<ffffffff9f9bcfa1>] __irq_exit_rcu+0x181/0x1d0
[164638.112079][ C10] CPU: 10 UID: 0 PID: 3712027 Comm: kworker/10:3 Tainted: G L 6.19.0-rc1-kts-xfs-zoned-fixes-gec6aea2a5+ #1 PREEMPT(lazy)
[164638.112086][ C10] Tainted: [L]=SOFTLOCKUP
[164638.112088][ C10] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[164638.112091][ C10] Workqueue: events netstamp_clear
[164638.112097][ C10] RIP: 0010:smp_call_function_many_cond+0x885/0x1070
[164638.112103][ C10] Code: 38 c8 7c 08 84 c9 0f 85 6e 05 00 00 8b 43 08 a8 01 74 2e 48 89 f1 49 89 f6 48 c1 e9 03 41 83 e6 07 4c 01 f9 41 83 c6 03 f3 90 <0f> b6 01 41 38 c6 7c 08 84 c0 0f 85 db 04 00 00 8b 43 08 a8 01 75
[164638.112107][ C10] RSP: 0018:ffff888485a1f990 EFLAGS: 00000202
[164638.112111][ C10] RAX: 0000000000000011 RBX: ffffe8ffff082980 RCX: fffff91fffe10531
[164638.112114][ C10] RDX: 0000000000000005 RSI: ffffe8ffff082988 RDI: ffffffffa390b188
[164638.112117][ C10] RBP: ffffed1181a68d59 R08: ffff8881014fcd60 R09: 0000000000000000
[164638.112119][ C10] R10: 1ffff1102029f9ac R11: 0000000000000000 R12: ffff888c0d346ac0
[164638.112122][ C10] R13: ffffed1181a68d58 R14: 0000000000000003 R15: dffffc0000000000
[164638.112125][ C10] FS: 0000000000000000(0000) GS:ffff888c675b9000(0000) knlGS:0000000000000000
[164638.112128][ C10] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[164638.112131][ C10] CR2: 00007fb20918fa10 CR3: 00000007db69c004 CR4: 0000000000772ef0
[164638.112136][ C10] PKRU: 55555554
[164638.112138][ C10] Call Trace:
[164638.112140][ C10] <TASK>
[164638.112147][ C10] ? __pfx_do_sync_core+0x10/0x10
[164638.112155][ C10] ? __text_poke+0x371/0x870
[164638.112164][ C10] ? __pfx_smp_call_function_many_cond+0x10/0x10
[164638.112168][ C10] ? __pfx___text_poke+0x10/0x10
[164638.112174][ C10] ? __pfx___might_resched+0x10/0x10
[164638.112183][ C10] on_each_cpu_cond_mask+0x24/0x40
[164638.112187][ C10] smp_text_poke_batch_finish+0x45c/0xd20
[164638.112191][ C10] ? tpacket_rcv+0xe19/0x3f40
[164638.112200][ C10] ? __pfx_smp_text_poke_batch_finish+0x10/0x10
[164638.112204][ C10] ? __jump_label_patch+0x25f/0x320
[164638.112214][ C10] ? arch_jump_label_transform_queue+0xa8/0x110
[164638.112226][ C10] arch_jump_label_transform_apply+0x1c/0x30
[164638.112231][ C10] static_key_enable_cpuslocked+0x16c/0x230
[164638.112237][ C10] static_key_enable+0x1f/0x30
[164638.112242][ C10] process_one_work+0x86b/0x1490
[164638.112258][ C10] ? __pfx_process_one_work+0x10/0x10
[164638.112262][ C10] ? lock_acquire.part.0+0xb8/0x230
[164638.112272][ C10] ? lock_is_held_type+0x9a/0x110
[164638.112279][ C10] ? assign_work+0x156/0x390
[164638.112287][ C10] worker_thread+0x5f2/0xfd0
[164638.112296][ C10] ? __pfx_worker_thread+0x10/0x10
[164638.112300][ C10] ? __kthread_parkme+0xb3/0x1f0
[164638.112309][ C10] ? __pfx_worker_thread+0x10/0x10
[164638.112313][ C10] kthread+0x3a4/0x760
[164638.112319][ C10] ? __pfx_kthread+0x10/0x10
[164638.112325][ C10] ? __lock_release.isra.0+0x59/0x170
[164638.112332][ C10] ? __pfx_kthread+0x10/0x10
[164638.112338][ C10] ret_from_fork+0x50e/0x670
[164638.112345][ C10] ? __pfx_ret_from_fork+0x10/0x10
[164638.112352][ C10] ? __switch_to+0x42c/0xd70
[164638.112359][ C10] ? __pfx_kthread+0x10/0x10
[164638.112364][ C10] ret_from_fork_asm+0x1a/0x30
[164638.112384][ C10] </TASK>
[164658.343906][ T118] INFO: task journal-offline:3732216 blocked for more than 122 seconds.
[164658.346054][ T118] Tainted: G L 6.19.0-rc1-kts-xfs-zoned-fixes-gec6aea2a5+ #1
[164658.348445][ T118] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[164658.350963][ T118] task:journal-offline state:D stack:0 pid:3732216 tgid:634 ppid:1 task_flags:0x400040 flags:0x00080000
[164658.353753][ T118] Call Trace:
[164658.354591][ T118] <TASK>
[164658.355381][ T118] __schedule+0x841/0x12a0
[164658.356569][ T118] ? __pfx___schedule+0x10/0x10
[164658.357790][ T118] ? find_held_lock+0x2b/0x80
[164658.358994][ T118] ? __lock_release.isra.0+0x59/0x170
[164658.360361][ T118] ? lock_acquire+0xf6/0x130
[164658.361569][ T118] schedule+0xd1/0x250
[164658.362594][ T118] io_schedule+0x8c/0x100
[164658.363641][ T118] blk_mq_get_tag+0x496/0xb00
[164658.364796][ T118] ? __pfx_blk_mq_get_tag+0x10/0x10
[164658.366069][ T118] ? __pfx_autoremove_wake_function+0x10/0x10
[164658.367635][ T118] ? lock_release.part.0+0x1c/0x50
[164658.368893][ T118] ? bfq_limit_depth+0x22e/0x950
[164658.370085][ T118] __blk_mq_alloc_requests+0x293/0xcf0
[164658.371505][ T118] blk_mq_submit_bio+0x8f3/0x2490
[164658.372817][ T118] ? __pfx_blk_mq_submit_bio+0x10/0x10
[164658.374168][ T118] __submit_bio+0x362/0x700
[164658.375301][ T118] ? __pfx___submit_bio+0x10/0x10
[164658.376545][ T118] ? __pfx_blk_cgroup_bio_start+0x10/0x10
[164658.377948][ T118] ? submit_bio_noacct_nocheck+0x3cc/0x5a0
[164658.379367][ T118] submit_bio_noacct_nocheck+0x3cc/0x5a0
[164658.380687][ T118] ? __pfx_submit_bio_noacct_nocheck+0x10/0x10
[164658.382155][ T118] ? submit_bio_noacct+0x248/0xf60
[164658.383483][ T118] iomap_ioend_writeback_submit+0xfa/0x180
[164658.384918][ T118] iomap_add_to_ioend+0x44f/0x1110
[164658.386189][ T118] xfs_writeback_range+0x5d/0x90 [xfs]
[164658.389042][ T118] iomap_writeback_folio+0x721/0xfe0
[164658.390393][ T118] ? __pfx_iomap_writeback_folio+0x10/0x10
[164658.391860][ T118] iomap_writepages+0x119/0x270
[164658.393058][ T118] ? __pfx_iomap_writepages+0x10/0x10
[164658.394472][ T118] ? __lock_release.isra.0+0x59/0x170
[164658.395824][ T118] ? lock_acquire+0xf6/0x130
[164658.397005][ T118] xfs_vm_writepages+0x1a2/0x2c0 [xfs]
[164658.399688][ T118] ? __pfx_xfs_vm_writepages+0x10/0x10 [xfs]
[164658.402533][ T118] ? lock_acquire.part.0+0xb8/0x230
[164658.403707][ T118] do_writepages+0x21e/0x560
[164658.404931][ T118] ? __pfx_do_writepages+0x10/0x10
[164658.406162][ T118] ? lock_release.part.0+0x1c/0x50
[164658.407573][ T118] ? _raw_spin_unlock+0x23/0x40
[164658.408855][ T118] ? wbc_attach_and_unlock_inode.part.0+0x388/0x730
[164658.410318][ T118] filemap_writeback+0x1c4/0x280
[164658.411478][ T118] ? __pfx_filemap_writeback+0x10/0x10
[164658.412858][ T118] ? lock_acquire.part.0+0xb8/0x230
[164658.414038][ T118] ? __fget_files+0x31/0x2f0
[164658.415077][ T118] file_write_and_wait_range+0x97/0x150
[164658.416407][ T118] xfs_file_fsync+0x121/0x800 [xfs]
[164658.419063][ T118] ? __pfx_xfs_file_fsync+0x10/0x10 [xfs]
[164658.421845][ T118] do_fsync+0x3b/0x80
[164658.422770][ T118] ? syscall_trace_enter+0x8e/0x2b0
[164658.423943][ T118] __x64_sys_fsync+0x35/0x50
[164658.425031][ T118] do_syscall_64+0x98/0x5c0
[164658.426076][ T118] ? lock_acquire.part.0+0xb8/0x230
[164658.427274][ T118] ? find_held_lock+0x2b/0x80
[164658.428439][ T118] ? __lock_release.isra.0+0x59/0x170
[164658.429775][ T118] ? lock_acquire+0xf6/0x130
[164658.430969][ T118] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[164658.432670][ T118] ? trace_hardirqs_on+0x18/0x140
[164658.433928][ T118] ? _raw_spin_unlock_irq+0x28/0x50
[164658.435111][ T118] ? __x64_sys_rt_sigprocmask+0x241/0x400
[164658.436482][ T118] ? __pfx___x64_sys_rt_sigprocmask+0x10/0x10
[164658.437867][ T118] ? switch_fpu_return+0x111/0x240
[164658.439032][ T118] ? trace_hardirqs_on_prepare+0x101/0x140
[164658.440333][ T118] ? do_syscall_64+0x16d/0x5c0
[164658.441425][ T118] ? __switch_to+0x42c/0xd70
[164658.442509][ T118] ? clear_bhb_loop+0x50/0xa0
[164658.443602][ T118] ? clear_bhb_loop+0x50/0xa0
[164658.444683][ T118] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[164658.446017][ T118] RIP: 0033:0x7fb20a689422
[164658.447014][ T118] RSP: 002b:00007fb2031fe958 EFLAGS: 00000246 ORIG_RAX: 000000000000004a
[164658.448865][ T118] RAX: ffffffffffffffda RBX: 000055a0fe937de0 RCX: 00007fb20a689422
[164658.450641][ T118] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000018
[164658.452542][ T118] RBP: 00007fb2031fe980 R08: 0000000000000000 R09: 0000000000000000
[164658.454549][ T118] R10: 0000000000000000 R11: 0000000000000246 R12: 00007fb20abb6294
[164658.456308][ T118] R13: 00007ffd5e7997b0 R14: 00007fb2031ffcdc R15: 00007ffd5e7998b7
[164658.458113][ T118] </TASK>
[164658.458960][ T118] INFO: task kworker/15:2:3623436 blocked for more than 122 seconds.
[164658.460754][ T118] Tainted: G L 6.19.0-rc1-kts-xfs-zoned-fixes-gec6aea2a5+ #1
[164658.462842][ T118] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[164658.464578][ T118] task:kworker/15:2 state:D stack:0 pid:3623436 tgid:3623436 ppid:2 task_flags:0x4208060 flags:0x00080000
[164658.467029][ T118] Workqueue: xfs-sync/sdc3 xfs_log_worker [xfs]
[164658.469625][ T118] Call Trace:
[164658.470298][ T118] <TASK>
[164658.470954][ T118] __schedule+0x841/0x12a0
[164658.471923][ T118] ? __pfx___schedule+0x10/0x10
[164658.472939][ T118] ? find_held_lock+0x2b/0x80
[164658.473931][ T118] ? __lock_release.isra.0+0x59/0x170
[164658.475065][ T118] schedule+0xd1/0x250
[164658.475993][ T118] schedule_timeout+0x17b/0x260
[164658.477066][ T118] ? __pfx_schedule_timeout+0x10/0x10
[164658.478239][ T118] ? mark_held_locks+0x40/0x70
[164658.479298][ T118] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[164658.480583][ T118] wait_for_completion+0x173/0x3c0
[164658.481619][ T118] ? __pfx_wait_for_completion+0x10/0x10
[164658.482695][ T118] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[164658.483957][ T118] ? flush_workqueue_prep_pwqs+0x337/0x460
[164658.485043][ T118] __flush_workqueue+0x36d/0x1090
[164658.486005][ T118] ? __pfx_try_to_wake_up+0x10/0x10
[164658.486984][ T118] ? find_held_lock+0x2b/0x80
[164658.487911][ T118] ? find_held_lock+0x2b/0x80
[164658.488821][ T118] ? __pfx___flush_workqueue+0x10/0x10
[164658.489898][ T118] ? find_held_lock+0x2b/0x80
[164658.490769][ T118] xlog_cil_push_now.isra.0+0x4c/0x1a0 [xfs]
[164658.492891][ T118] xlog_cil_force_seq+0x17b/0x800 [xfs]
[164658.494979][ T118] ? __pfx_xlog_cil_force_seq+0x10/0x10 [xfs]
[164658.497115][ T118] ? lock_acquire.part.0+0xb8/0x230
[164658.498021][ T118] ? find_held_lock+0x2b/0x80
[164658.498859][ T118] ? __lock_release.isra.0+0x59/0x170
[164658.499784][ T118] ? lock_acquire+0xf6/0x130
[164658.500525][ T118] ? xfs_log_worker+0x8a/0x2e0 [xfs]
[164658.502408][ T118] xfs_log_force+0x20d/0xa70 [xfs]
[164658.504233][ T118] xfs_log_worker+0x8a/0x2e0 [xfs]
[164658.506107][ T118] process_one_work+0x86b/0x1490
[164658.506931][ T118] ? __pfx_process_one_work+0x10/0x10
[164658.507782][ T118] ? lock_acquire.part.0+0xb8/0x230
[164658.508587][ T118] ? lock_is_held_type+0x9a/0x110
[164658.509409][ T118] ? assign_work+0x156/0x390
[164658.510143][ T118] worker_thread+0x5f2/0xfd0
[164658.510909][ T118] ? __pfx_worker_thread+0x10/0x10
[164658.511699][ T118] ? __kthread_parkme+0xb3/0x1f0
[164658.512485][ T118] ? __pfx_worker_thread+0x10/0x10
[164658.513230][ T118] kthread+0x3a4/0x760
[164658.513913][ T118] ? __pfx_kthread+0x10/0x10
[164658.514617][ T118] ? __lock_release.isra.0+0x59/0x170
[164658.515486][ T118] ? __pfx_kthread+0x10/0x10
[164658.516227][ T118] ret_from_fork+0x50e/0x670
[164658.517031][ T118] ? __pfx_ret_from_fork+0x10/0x10
[164658.517876][ T118] ? __switch_to+0x42c/0xd70
[164658.518578][ T118] ? __pfx_kthread+0x10/0x10
[164658.519259][ T118] ret_from_fork_asm+0x1a/0x30
[164658.520054][ T118] </TASK>
[164658.520576][ T118] INFO: task kworker/u64:2:3727760 blocked for more than 123 seconds.
[164658.521795][ T118] Tainted: G L 6.19.0-rc1-kts-xfs-zoned-fixes-gec6aea2a5+ #1
[164658.523099][ T118] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[164658.524242][ T118] task:kworker/u64:2 state:D stack:0 pid:3727760 tgid:3727760 ppid:2 task_flags:0x4208060 flags:0x00080000
[164658.525916][ T118] Workqueue: writeback wb_workfn (flush-8:32)
[164658.526787][ T118] Call Trace:
[164658.527218][ T118] <TASK>
[164658.527668][ T118] __schedule+0x841/0x12a0
[164658.528268][ T118] ? __pfx___schedule+0x10/0x10
[164658.528984][ T118] ? find_held_lock+0x2b/0x80
[164658.529628][ T118] ? __lock_release.isra.0+0x59/0x170
[164658.530373][ T118] ? lock_acquire+0xf6/0x130
[164658.531030][ T118] schedule+0xd1/0x250
[164658.531588][ T118] io_schedule+0x8c/0x100
[164658.532178][ T118] blk_mq_get_tag+0x496/0xb00
[164658.532847][ T118] ? __pfx_blk_mq_get_tag+0x10/0x10
[164658.533511][ T118] ? __pfx_autoremove_wake_function+0x10/0x10
[164658.534270][ T118] ? lock_release.part.0+0x1c/0x50
[164658.534968][ T118] ? bfq_limit_depth+0x22e/0x950
[164658.535621][ T118] __blk_mq_alloc_requests+0x293/0xcf0
[164658.536304][ T118] blk_mq_submit_bio+0x8f3/0x2490
[164658.536955][ T118] ? __pfx_blk_mq_submit_bio+0x10/0x10
[164658.537681][ T118] __submit_bio+0x362/0x700
[164658.538270][ T118] ? __pfx___submit_bio+0x10/0x10
[164658.538941][ T118] ? __pfx_page_vma_mkclean_one.constprop.0+0x10/0x10
[164658.539811][ T118] ? __pfx_blk_cgroup_bio_start+0x10/0x10
[164658.540527][ T118] ? submit_bio_noacct_nocheck+0x3cc/0x5a0
[164658.541256][ T118] submit_bio_noacct_nocheck+0x3cc/0x5a0
[164658.542019][ T118] ? __pfx_submit_bio_noacct_nocheck+0x10/0x10
[164658.542873][ T118] ? submit_bio_noacct+0x248/0xf60
[164658.543580][ T118] iomap_ioend_writeback_submit+0xfa/0x180
[164658.544373][ T118] iomap_add_to_ioend+0x44f/0x1110
[164658.545030][ T118] xfs_writeback_range+0x5d/0x90 [xfs]
[164658.546504][ T118] iomap_writeback_folio+0x721/0xfe0
[164658.547189][ T118] ? __pfx_iomap_writeback_folio+0x10/0x10
[164658.547975][ T118] iomap_writepages+0x119/0x270
[164658.548590][ T118] ? __pfx_iomap_writepages+0x10/0x10
[164658.549244][ T118] ? __lock_release.isra.0+0x59/0x170
[164658.549960][ T118] ? lock_acquire+0xf6/0x130
[164658.550508][ T118] xfs_vm_writepages+0x1a2/0x2c0 [xfs]
[164658.551923][ T118] ? __pfx_xfs_vm_writepages+0x10/0x10 [xfs]
[164658.553335][ T118] ? __lock_release.isra.0+0x59/0x170
[164658.553960][ T118] ? lock_acquire+0xf6/0x130
[164658.554509][ T118] do_writepages+0x21e/0x560
[164658.555045][ T118] ? __pfx_do_writepages+0x10/0x10
[164658.555676][ T118] __writeback_single_inode+0x119/0xc70
[164658.556316][ T118] ? __pfx___writeback_single_inode+0x10/0x10
[164658.557013][ T118] ? lock_release.part.0+0x1c/0x50
[164658.557590][ T118] ? _raw_spin_unlock+0x23/0x40
[164658.558102][ T118] ? wbc_attach_and_unlock_inode.part.0+0x388/0x730
[164658.558831][ T118] writeback_sb_inodes+0x67a/0x10a0
[164658.559399][ T118] ? __pfx_writeback_sb_inodes+0x10/0x10
[164658.560000][ T118] ? __pfx_do_raw_spin_lock+0x10/0x10
[164658.560606][ T118] ? do_raw_spin_lock+0x128/0x270
[164658.561183][ T118] __writeback_inodes_wb+0xf2/0x270
[164658.561788][ T118] ? __pfx___writeback_inodes_wb+0x10/0x10
[164658.562440][ T118] ? queue_io+0x2ab/0x4a0
[164658.562921][ T118] wb_writeback+0x5af/0x7d0
[164658.563423][ T118] ? __pfx_wb_writeback+0x10/0x10
[164658.563982][ T118] ? get_nr_dirty_inodes+0xd8/0x1d0
[164658.564602][ T118] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[164658.565373][ T118] wb_do_writeback+0x51c/0x7b0
[164658.565923][ T118] ? __pfx_wb_do_writeback+0x10/0x10
[164658.566527][ T118] ? lock_acquire.part.0+0xb8/0x230
[164658.567094][ T118] ? process_one_work+0x7e7/0x1490
[164658.567675][ T118] wb_workfn+0x9f/0x3d0
[164658.568137][ T118] process_one_work+0x86b/0x1490
[164658.568702][ T118] ? __pfx_process_one_work+0x10/0x10
[164658.569254][ T118] ? lock_acquire.part.0+0xb8/0x230
[164658.569829][ T118] ? lock_is_held_type+0x9a/0x110
[164658.570342][ T118] ? assign_work+0x156/0x390
[164658.570827][ T118] worker_thread+0x5f2/0xfd0
[164658.571271][ T118] ? __pfx_worker_thread+0x10/0x10
[164658.571824][ T118] ? __kthread_parkme+0xb3/0x1f0
[164658.572295][ T118] ? __pfx_worker_thread+0x10/0x10
[164658.572836][ T118] kthread+0x3a4/0x760
[164658.573227][ T118] ? __pfx_kthread+0x10/0x10
[164658.573699][ T118] ? __lock_release.isra.0+0x59/0x170
[164658.574223][ T118] ? __pfx_kthread+0x10/0x10
[164658.574741][ T118] ret_from_fork+0x50e/0x670
[164658.575177][ T118] ? __pfx_ret_from_fork+0x10/0x10
[164658.575746][ T118] ? __switch_to+0x42c/0xd70
[164658.576205][ T118] ? __pfx_kthread+0x10/0x10
[164658.576694][ T118] ret_from_fork_asm+0x1a/0x30
[164658.577208][ T118] </TASK>
[164658.577574][ T118] INFO: task kworker/14:0:3732215 blocked for more than 123 seconds.
[164658.578379][ T118] Tainted: G L 6.19.0-rc1-kts-xfs-zoned-fixes-gec6aea2a5+ #1
[164658.579301][ T118] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[164658.580169][ T118] task:kworker/14:0 state:D stack:0 pid:3732215 tgid:3732215 ppid:2 task_flags:0x4208060 flags:0x00080000
[164658.581410][ T118] Workqueue: xfs-sync/sde xfs_log_worker [xfs]
[164658.582684][ T118] Call Trace:
[164658.583023][ T118] <TASK>
[164658.583341][ T118] __schedule+0x841/0x12a0
[164658.583798][ T118] ? __pfx___schedule+0x10/0x10
[164658.584241][ T118] ? find_held_lock+0x2b/0x80
[164658.584772][ T118] ? __lock_release.isra.0+0x59/0x170
[164658.585266][ T118] schedule+0xd1/0x250
[164658.585690][ T118] schedule_timeout+0x17b/0x260
[164658.586187][ T118] ? __pfx_schedule_timeout+0x10/0x10
[164658.586805][ T118] ? mark_held_locks+0x40/0x70
[164658.587255][ T118] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[164658.587934][ T118] wait_for_completion+0x173/0x3c0
[164658.588442][ T118] ? __pfx_wait_for_completion+0x10/0x10
[164658.588992][ T118] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[164658.589662][ T118] ? flush_workqueue_prep_pwqs+0x337/0x460
[164658.590237][ T118] __flush_workqueue+0x36d/0x1090
[164658.590802][ T118] ? __pfx_try_to_wake_up+0x10/0x10
[164658.591275][ T118] ? lock_acquire.part.0+0xb8/0x230
[164658.591821][ T118] ? find_held_lock+0x2b/0x80
[164658.592274][ T118] ? find_held_lock+0x2b/0x80
[164658.592793][ T118] ? __pfx___flush_workqueue+0x10/0x10
[164658.593303][ T118] ? find_held_lock+0x2b/0x80
[164658.593792][ T118] xlog_cil_push_now.isra.0+0x4c/0x1a0 [xfs]
[164658.594996][ T118] xlog_cil_force_seq+0x17b/0x800 [xfs]
[164658.596170][ T118] ? __pfx_xlog_cil_force_seq+0x10/0x10 [xfs]
[164658.597416][ T118] ? mark_held_locks+0x40/0x70
[164658.597909][ T118] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[164658.598545][ T118] ? trace_hardirqs_on+0x18/0x140
[164658.599033][ T118] ? kasan_quarantine_put+0xf5/0x240
[164658.599580][ T118] xfs_log_force_seq+0x1c1/0x5b0 [xfs]
[164658.600832][ T118] __xfs_trans_commit+0x818/0xba0 [xfs]
[164658.602000][ T118] ? __pfx___xfs_trans_commit+0x10/0x10 [xfs]
[164658.603202][ T118] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[164658.603877][ T118] xfs_trans_commit+0xbf/0x180 [xfs]
[164658.605032][ T118] ? xfs_sb_to_disk+0x7f5/0x1fe0 [xfs]
[164658.606175][ T118] ? __pfx_xfs_trans_commit+0x10/0x10 [xfs]
[164658.607405][ T118] ? xfs_trans_dirty_buf+0x126/0x1b0 [xfs]
[164658.608644][ T118] xfs_sync_sb+0xee/0x110 [xfs]
[164658.609803][ T118] ? __pfx_xfs_sync_sb+0x10/0x10 [xfs]
[164658.610957][ T118] ? lock_release.part.0+0x1c/0x50
[164658.611531][ T118] ? _raw_spin_unlock+0x23/0x40
[164658.612040][ T118] ? xfs_log_need_covered.isra.0+0xbc/0x180 [xfs]
[164658.613370][ T118] xfs_log_worker+0x23d/0x2e0 [xfs]
[164658.614564][ T118] process_one_work+0x86b/0x1490
[164658.615059][ T118] ? __pfx_process_one_work+0x10/0x10
[164658.615610][ T118] ? lock_acquire.part.0+0xb8/0x230
[164658.616130][ T118] ? lock_is_held_type+0x9a/0x110
[164658.616640][ T118] ? assign_work+0x156/0x390
[164658.617094][ T118] worker_thread+0x5f2/0xfd0
[164658.617589][ T118] ? __pfx_worker_thread+0x10/0x10
[164658.618099][ T118] ? __kthread_parkme+0xb3/0x1f0
[164658.618593][ T118] ? __pfx_worker_thread+0x10/0x10
[164658.619108][ T118] kthread+0x3a4/0x760
[164658.619530][ T118] ? __pfx_kthread+0x10/0x10
[164658.619977][ T118] ? __lock_release.isra.0+0x59/0x170
[164658.620518][ T118] ? __pfx_kthread+0x10/0x10
[164658.620980][ T118] ret_from_fork+0x50e/0x670
[164658.621444][ T118] ? __pfx_ret_from_fork+0x10/0x10
[164658.621971][ T118] ? __switch_to+0x42c/0xd70
[164658.622479][ T118] ? __pfx_kthread+0x10/0x10
[164658.622967][ T118] ret_from_fork_asm+0x1a/0x30
[164658.623521][ T118] </TASK>
[164658.623879][ T118]
[164658.623879][ T118] Showing all locks held in the system:
[164658.624683][ T118] 1 lock held by rcu_preempt/16:
[164658.625187][ T118] 1 lock held by khungtaskd/118:
[164658.625704][ T118] #0: ffffffffa3f2fc20 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire.constprop.0+0x7/0x30
[164658.626879][ T118] 2 locks held by kworker/15:2/3623436:
[164658.627447][ T118] #0: ffff88812d1a9148 ((wq_completion)xfs-sync/sdc3){+.+.}-{0:0}, at: process_one_work+0xef8/0x1490
[164658.628547][ T118] #1: ffff88840be1fca8 ((work_completion)(&(&log->l_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7e7/0x1490
[164658.629704][ T118] 5 locks held by kworker/10:3/3712027:
[164658.630234][ T118] 3 locks held by kworker/u64:2/3727760:
[164658.630817][ T118] #0: ffff888102625148 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0xef8/0x1490
[164658.631852][ T118] #1: ffff8886b4b57ca8 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x7e7/0x1490
[164658.633052][ T118] #2: ffff88814fd480e0 (&type->s_umount_key#58){++++}-{4:4}, at: super_trylock_shared+0x1e/0xb0
[164658.634106][ T118] 1 lock held by xfs_repair/3732109:
[164658.634650][ T118] 2 locks held by kworker/14:0/3732215:
[164658.635179][ T118] #0: ffff88864dda8948 ((wq_completion)xfs-sync/sde){+.+.}-{0:0}, at: process_one_work+0xef8/0x1490
[164658.636235][ T118] #1: ffff88814109fca8 ((work_completion)(&(&log->l_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7e7/0x1490
[164658.637428][ T118]
[164658.637669][ T118] =============================================
[164658.637669][ T118]
[164660.955748][ C7] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
[164660.957272][ C7] rcu: 5-...0: (1 GPs behind) idle=9184/1/0x4000000000000000 softirq=12412680/12412680 fqs=7111
[164660.959517][ C7] rcu: (detected by 7, t=260155 jiffies, g=63951405, q=7286 ncpus=16)
[164660.961273][ C7] Sending NMI from CPU 7 to CPUs 5:
[164660.961298][ C5] NMI backtrace for cpu 5
[164660.961306][ C5] CPU: 5 UID: 0 PID: 0 Comm: swapper/5 Tainted: G L 6.19.0-rc1-kts-xfs-zoned-fixes-gec6aea2a5+ #1 PREEMPT(lazy)
[164660.961313][ C5] Tainted: [L]=SOFTLOCKUP
[164660.961315][ C5] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[164660.961318][ C5] RIP: 0010:_find_first_zero_bit+0x36/0x90
[164660.961327][ C5] Code: 48 be 00 00 00 00 00 fc ff df 48 83 ec 18 31 c9 eb 0d 48 83 c1 40 48 83 c7 08 48 39 c1 73 25 48 89 fa 48 c1 ea 03 80 3c 32 00 <75> 26 48 8b 17 48 83 f2 ff 74 dd f3 48 0f bc d2 48 01 d1 48 39 c8
[164660.961331][ C5] RSP: 0018:ffff88810115fcd8 EFLAGS: 00000046
[164660.961335][ C5] RAX: 0000000000000010 RBX: fffffbfff49a1c1c RCX: 0000000000000000
[164660.961339][ C5] RDX: 1ffff11025149aa2 RSI: dffffc0000000000 RDI: ffff888128a4d510
[164660.961342][ C5] RBP: 0000000000000003 R08: 0000000040000002 R09: 000000000000000e
[164660.961344][ C5] R10: ffff888128a4ca80 R11: ffff888565575654 R12: 0000000000000007
[164660.961347][ C5] R13: ffff888128a4d500 R14: 0000000000000010 R15: 0000000000000010
[164660.961350][ C5] FS: 0000000000000000(0000) GS:ffff888c67339000(0000) knlGS:0000000000000000
[164660.961353][ C5] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[164660.961356][ C5] CR2: 00007f7661c41820 CR3: 000000013d927004 CR4: 0000000000772ef0
[164660.961361][ C5] PKRU: 55555554
[164660.961363][ C5] Call Trace:
[164660.961366][ C5] <TASK>
[164660.961370][ C5] ? __lock_release.isra.0+0x59/0x170
[164660.961376][ C5] mm_get_cid+0x1b9/0x2c0
[164660.961384][ C5] mm_cid_switch_to+0xa6d/0x1300
[164660.961389][ C5] ? switch_mm_irqs_off+0x6fe/0x970
[164660.961396][ C5] __schedule+0x6b5/0x12a0
[164660.961402][ C5] ? __pfx___schedule+0x10/0x10
[164660.961408][ C5] ? tick_nohz_stop_idle+0x206/0x330
[164660.961413][ C5] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[164660.961420][ C5] schedule_idle+0x59/0x90
[164660.961424][ C5] do_idle+0x10e/0x190
[164660.961430][ C5] cpu_startup_entry+0x53/0x70
[164660.961435][ C5] start_secondary+0x220/0x2e0
[164660.961442][ C5] ? __pfx_start_secondary+0x10/0x10
[164660.961449][ C5] common_startup_64+0x13e/0x141
[164660.961468][ C5] </TASK>
[164660.962296][ C7] rcu: rcu_preempt kthread starved for 227654 jiffies! g63951405 f0x2 RCU_GP_DOING_FQS(6) ->state=0x0 ->cpu=12
[164661.011699][ C7] rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
[164661.013967][ C7] rcu: RCU grace-period kthread stack dump:
[164661.015518][ C7] task:rcu_preempt state:R running task stack:0 pid:16 tgid:16 ppid:2 task_flags:0x208040 flags:0x00080000
[164661.018657][ C7] Call Trace:
[164661.019421][ C7] <TASK>
[164661.020105][ C7] ? do_raw_spin_lock+0x1d9/0x270
[164661.021265][ C7] ? __pfx_do_raw_spin_lock+0x10/0x10
[164661.022469][ C7] ? lock_acquire+0xf6/0x130
[164661.023489][ C7] ? raw_spin_rq_lock_nested+0x24/0x170
[164661.024662][ C7] ? _raw_spin_rq_lock_irqsave+0x41/0x50
[164661.025827][ C7] ? resched_cpu+0x62/0xf0
[164661.026727][ C7] ? force_qs_rnp+0x67d/0xaa0
[164661.027673][ C7] ? __pfx_rcu_watching_snap_recheck+0x10/0x10
[164661.028951][ C7] ? rcu_gp_fqs_loop+0x948/0x11b0
[164661.029991][ C7] ? rcu_gp_init+0xb0f/0x14a0
[164661.030928][ C7] ? __pfx_rcu_gp_fqs_loop+0x10/0x10
[164661.031979][ C7] ? _raw_spin_unlock_irq+0x28/0x50
[164661.032977][ C7] ? rcu_gp_cleanup+0x824/0xf40
[164661.033966][ C7] ? __pfx_rcu_gp_init+0x10/0x10
[164661.034967][ C7] ? __pfx_rcu_gp_cleanup+0x10/0x10
[164661.036012][ C7] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[164661.037272][ C7] ? rcu_gp_kthread+0x4f2/0x660
[164661.038235][ C7] ? lock_acquire+0xf6/0x130
[164661.039172][ C7] ? __pfx_rcu_gp_kthread+0x10/0x10
[164661.040218][ C7] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[164661.041520][ C7] ? __pfx_rcu_gp_kthread+0x10/0x10
[164661.042476][ C7] ? __kthread_parkme+0xb3/0x1f0
[164661.043407][ C7] ? __pfx_rcu_gp_kthread+0x10/0x10
[164661.044376][ C7] ? kthread+0x3a4/0x760
[164661.045149][ C7] ? __pfx_kthread+0x10/0x10
[164661.045999][ C7] ? __lock_release.isra.0+0x59/0x170
[164661.046995][ C7] ? __pfx_kthread+0x10/0x10
[164661.047867][ C7] ? ret_from_fork+0x50e/0x670
[164661.048752][ C7] ? __pfx_ret_from_fork+0x10/0x10
[164661.049697][ C7] ? __switch_to+0x42c/0xd70
[164661.050567][ C7] ? __pfx_kthread+0x10/0x10
[164661.051365][ C7] ? ret_from_fork_asm+0x1a/0x30
[164661.052181][ C7] </TASK>
[164661.052729][ C7] rcu: Stack dump where RCU GP kthread last ran:
[164661.053767][ C7] Sending NMI from CPU 7 to CPUs 12:
[164661.054646][ C12] NMI backtrace for cpu 12
[164661.054655][ C12] CPU: 12 UID: 0 PID: 16 Comm: rcu_preempt Tainted: G L 6.19.0-rc1-kts-xfs-zoned-fixes-gec6aea2a5+ #1 PREEMPT(lazy)
[164661.054662][ C12] Tainted: [L]=SOFTLOCKUP
[164661.054664][ C12] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[164661.054667][ C12] RIP: 0010:__pv_queued_spin_lock_slowpath+0x232/0xdc0
[164661.054674][ C12] Code: 44 8b 2b c6 44 24 60 00 66 45 85 ed 74 1e 41 81 fd ff ff 00 00 0f 86 5e 02 00 00 41 81 e5 00 ff 00 00 0f 85 51 02 00 00 f3 90 <eb> b5 48 89 df be 01 00 00 00 e8 9f 96 bc fd be 01 00 00 00 48 8d
[164661.054678][ C12] RSP: 0018:ffff8881010c7948 EFLAGS: 00000046
[164661.054683][ C12] RAX: 0000000000000000 RBX: ffff888c0d0c5440 RCX: 0000000000000001
[164661.054686][ C12] RDX: 0000000000000000 RSI: 0000000000000004 RDI: ffff888c0d0c5440
[164661.054689][ C12] RBP: 0000000000000003 R08: ffffffffa28edce6 R09: ffffed1181a18a88
[164661.054692][ C12] R10: ffffed1181a18a89 R11: 0000000000000001 R12: ffffed1181a18a88
[164661.054694][ C12] R13: 0000000000000000 R14: ffff888c0d446408 R15: ffff888c0d446400
[164661.054697][ C12] FS: 0000000000000000(0000) GS:ffff888c676b9000(0000) knlGS:0000000000000000
[164661.054700][ C12] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[164661.054703][ C12] CR2: 00007f7657ff6cb8 CR3: 0000000108544001 CR4: 0000000000772ef0
[164661.054708][ C12] PKRU: 55555554
[164661.054710][ C12] Call Trace:
[164661.054713][ C12] <TASK>
[164661.054718][ C12] ? lock_release.part.0+0x1c/0x50
[164661.054725][ C12] ? __virt_addr_valid+0x1d1/0x3f0
[164661.054731][ C12] ? __pfx___pv_queued_spin_lock_slowpath+0x10/0x10
[164661.054736][ C12] ? __lock_acquire+0x55d/0xbd0
[164661.054745][ C12] do_raw_spin_lock+0x1d9/0x270
[164661.054754][ C12] ? __pfx_do_raw_spin_lock+0x10/0x10
[164661.054761][ C12] ? lock_acquire+0xf6/0x130
[164661.054768][ C12] raw_spin_rq_lock_nested+0x24/0x170
[164661.054774][ C12] _raw_spin_rq_lock_irqsave+0x41/0x50
[164661.054778][ C12] resched_cpu+0x62/0xf0
[164661.054783][ C12] force_qs_rnp+0x67d/0xaa0
[164661.054789][ C12] ? __pfx_rcu_watching_snap_recheck+0x10/0x10
[164661.054799][ C12] rcu_gp_fqs_loop+0x948/0x11b0
[164661.054804][ C12] ? rcu_gp_init+0xb0f/0x14a0
[164661.054810][ C12] ? __pfx_rcu_gp_fqs_loop+0x10/0x10
[164661.054814][ C12] ? _raw_spin_unlock_irq+0x28/0x50
[164661.054818][ C12] ? rcu_gp_cleanup+0x824/0xf40
[164661.054823][ C12] ? __pfx_rcu_gp_init+0x10/0x10
[164661.054829][ C12] ? __pfx_rcu_gp_cleanup+0x10/0x10
[164661.054835][ C12] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[164661.054841][ C12] rcu_gp_kthread+0x4f2/0x660
[164661.054847][ C12] ? lock_acquire+0xf6/0x130
[164661.054851][ C12] ? __pfx_rcu_gp_kthread+0x10/0x10
[164661.054856][ C12] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[164661.054861][ C12] ? __pfx_rcu_gp_kthread+0x10/0x10
[164661.054866][ C12] ? __kthread_parkme+0xb3/0x1f0
[164661.054872][ C12] ? __pfx_rcu_gp_kthread+0x10/0x10
[164661.054876][ C12] kthread+0x3a4/0x760
[164661.054882][ C12] ? __pfx_kthread+0x10/0x10
[164661.054886][ C12] ? __lock_release.isra.0+0x59/0x170
[164661.054892][ C12] ? __pfx_kthread+0x10/0x10
[164661.054896][ C12] ret_from_fork+0x50e/0x670
[164661.054902][ C12] ? __pfx_ret_from_fork+0x10/0x10
[164661.054907][ C12] ? __switch_to+0x42c/0xd70
[164661.054911][ C12] ? __pfx_kthread+0x10/0x10
[164661.054916][ C12] ret_from_fork_asm+0x1a/0x30
[164661.054928][ C12] </TASK>
[164686.111947][ C10] watchdog: BUG: soft lockup - CPU#10 stuck for 119s! [kworker/10:3:3712027]
[164686.111955][ C10] CPU#10 Utilization every 4000ms during lockup:
[164686.111958][ C10] #1: 100% system, 0% softirq, 1% hardirq, 0% idle
[164686.111961][ C10] #2: 100% system, 0% softirq, 1% hardirq, 0% idle
[164686.111964][ C10] #3: 100% system, 0% softirq, 1% hardirq, 0% idle
[164686.111967][ C10] #4: 100% system, 0% softirq, 1% hardirq, 0% idle
[164686.111970][ C10] #5: 100% system, 0% softirq, 1% hardirq, 0% idle
[164686.111972][ C10] Modules linked in: iscsi_target_mod tcm_loop target_core_pscsi target_core_file target_core_iblock btrfs xor raid6_pq dm_flakey target_core_user target_core_mod rfkill nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables qrtr sunrpc intel_rapl_msr intel_rapl_common intel_uncore_frequency_common skx_edac_common nfit libnvdimm kvm_intel iTCO_wdt intel_pmc_bxt ppdev iTCO_vendor_support kvm irqbypass i2c_i801 parport_pc rapl i2c_smbus parport lpc_ich joydev fuse loop dm_multipath nfnetlink zram vsock_loopback lz4hc_compress lz4_compress vmw_vsock_virtio_transport_common zstd_compress vmw_vsock_vmci_transport vmw_vmci vsock xfs bochs drm_client_lib drm_shmem_helper drm_kms_helper virtio_net sym53c8xx net_failover ghash_clmulni_intel drm failover scsi_transport_spi serio_raw scsi_dh_rdac scsi_dh_emc scsi_dh_alua i2c_dev qemu_fw_cfg [last unloaded: scsi_debug]
[164686.112076][ C10] irq event stamp: 438740
[164686.112078][ C10] hardirqs last enabled at (438739): [<ffffffff9f40160a>] asm_sysvec_apic_timer_interrupt+0x1a/0x20
[164686.112087][ C10] hardirqs last disabled at (438740): [<ffffffffa28bf18e>] sysvec_apic_timer_interrupt+0xe/0x90
[164686.112093][ C10] softirqs last enabled at (438704): [<ffffffff9f9bcb82>] handle_softirqs+0x522/0x7b0
[164686.112099][ C10] softirqs last disabled at (438699): [<ffffffff9f9bcfa1>] __irq_exit_rcu+0x181/0x1d0
[164686.112106][ C10] CPU: 10 UID: 0 PID: 3712027 Comm: kworker/10:3 Tainted: G L 6.19.0-rc1-kts-xfs-zoned-fixes-gec6aea2a5+ #1 PREEMPT(lazy)
[164686.112113][ C10] Tainted: [L]=SOFTLOCKUP
[164686.112115][ C10] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[164686.112118][ C10] Workqueue: events netstamp_clear
[164686.112125][ C10] RIP: 0010:smp_call_function_many_cond+0x885/0x1070
[164686.112130][ C10] Code: 38 c8 7c 08 84 c9 0f 85 6e 05 00 00 8b 43 08 a8 01 74 2e 48 89 f1 49 89 f6 48 c1 e9 03 41 83 e6 07 4c 01 f9 41 83 c6 03 f3 90 <0f> b6 01 41 38 c6 7c 08 84 c0 0f 85 db 04 00 00 8b 43 08 a8 01 75
[164686.112134][ C10] RSP: 0018:ffff888485a1f990 EFLAGS: 00000202
[164686.112138][ C10] RAX: 0000000000000011 RBX: ffffe8ffff082980 RCX: fffff91fffe10531
[164686.112141][ C10] RDX: 0000000000000005 RSI: ffffe8ffff082988 RDI: ffffffffa390b188
[164686.112144][ C10] RBP: ffffed1181a68d59 R08: ffff8881014fcd60 R09: 0000000000000000
[164686.112147][ C10] R10: 1ffff1102029f9ac R11: 0000000000000000 R12: ffff888c0d346ac0
[164686.112149][ C10] R13: ffffed1181a68d58 R14: 0000000000000003 R15: dffffc0000000000
[164686.112152][ C10] FS: 0000000000000000(0000) GS:ffff888c675b9000(0000) knlGS:0000000000000000
[164686.112155][ C10] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[164686.112158][ C10] CR2: 00007fb20918fa10 CR3: 00000007db69c004 CR4: 0000000000772ef0
[164686.112164][ C10] PKRU: 55555554
[164686.112166][ C10] Call Trace:
[164686.112168][ C10] <TASK>
[164686.112175][ C10] ? __pfx_do_sync_core+0x10/0x10
[164686.112182][ C10] ? __text_poke+0x371/0x870
[164686.112192][ C10] ? __pfx_smp_call_function_many_cond+0x10/0x10
[164686.112196][ C10] ? __pfx___text_poke+0x10/0x10
[164686.112202][ C10] ? __pfx___might_resched+0x10/0x10
[164686.112211][ C10] on_each_cpu_cond_mask+0x24/0x40
[164686.112215][ C10] smp_text_poke_batch_finish+0x45c/0xd20
[164686.112220][ C10] ? tpacket_rcv+0xe19/0x3f40
[164686.112228][ C10] ? __pfx_smp_text_poke_batch_finish+0x10/0x10
[164686.112232][ C10] ? __jump_label_patch+0x25f/0x320
[164686.112243][ C10] ? arch_jump_label_transform_queue+0xa8/0x110
[164686.112254][ C10] arch_jump_label_transform_apply+0x1c/0x30
[164686.112259][ C10] static_key_enable_cpuslocked+0x16c/0x230
[164686.112266][ C10] static_key_enable+0x1f/0x30
[164686.112270][ C10] process_one_work+0x86b/0x1490
[164686.112286][ C10] ? __pfx_process_one_work+0x10/0x10
[164686.112291][ C10] ? lock_acquire.part.0+0xb8/0x230
[164686.112300][ C10] ? lock_is_held_type+0x9a/0x110
[164686.112307][ C10] ? assign_work+0x156/0x390
[164686.112315][ C10] worker_thread+0x5f2/0xfd0
[164686.112324][ C10] ? __pfx_worker_thread+0x10/0x10
[164686.112329][ C10] ? __kthread_parkme+0xb3/0x1f0
[164686.112337][ C10] ? __pfx_worker_thread+0x10/0x10
[164686.112342][ C10] kthread+0x3a4/0x760
[164686.112348][ C10] ? __pfx_kthread+0x10/0x10
[164686.112353][ C10] ? __lock_release.isra.0+0x59/0x170
[164686.112361][ C10] ? __pfx_kthread+0x10/0x10
[164686.112367][ C10] ret_from_fork+0x50e/0x670
[164686.112374][ C10] ? __pfx_ret_from_fork+0x10/0x10
[164686.112381][ C10] ? __switch_to+0x42c/0xd70
[164686.112388][ C10] ? __pfx_kthread+0x10/0x10
[164686.112394][ C10] ret_from_fork_asm+0x1a/0x30
[164686.112413][ C10] </TASK>
[-- Attachment #4: generic_417_hang --]
[-- Type: text/plain, Size: 82556 bytes --]
[74538.343665][T3325018] run fstests generic/417 at 2026-01-13 22:48:08
[74541.776676][T4016687] XFS (nullb0): EXPERIMENTAL zoned RT device feature enabled. Use at your own risk!
[74541.784841][T4016687] XFS (nullb0): Mounting V5 Filesystem 705a8e01-bd9c-4946-b1d5-b436bbdb4cdc
[74541.819352][T4016687] XFS (nullb0): Ending clean mount
[74541.821872][T4016687] XFS (nullb0): limiting open zones to 7 due to total zone count (30)
[74541.824675][T4016687] XFS (nullb0): 30 zones of 65536 blocks (7 max open zones)
[74543.205819][T4016763] XFS (nullb1): EXPERIMENTAL zoned RT device feature enabled. Use at your own risk!
[74543.213143][T4016763] XFS (nullb1): Mounting V5 Filesystem 5a2cde0d-cf72-4f2e-a28b-2649cf2217a2
[74543.240888][T4016763] XFS (nullb1): Ending clean mount
[74543.242693][T4016763] XFS (nullb1): limiting open zones to 7 due to total zone count (30)
[74543.244556][T4016763] XFS (nullb1): 30 zones of 65536 blocks (7 max open zones)
[74543.695135][T4016798] XFS (nullb1): User initiated shutdown received.
[74543.697269][T4016798] XFS (nullb1): Metadata I/O Error (0x4) detected at xfs_fs_goingdown+0xc5/0x190 [xfs] (fs/xfs/xfs_fsops.c:472). Shutting down filesystem.
[74543.704807][T4016798] XFS (nullb1): Please unmount the filesystem and rectify the problem(s)
[74543.765043][T4016802] XFS (nullb1): Unmounting Filesystem 5a2cde0d-cf72-4f2e-a28b-2649cf2217a2
[74543.963741][T4016808] XFS (nullb1): EXPERIMENTAL zoned RT device feature enabled. Use at your own risk!
[74543.969895][T4016808] XFS (nullb1): Mounting V5 Filesystem 5a2cde0d-cf72-4f2e-a28b-2649cf2217a2
[74543.995903][T4016808] XFS (nullb1): Ending clean mount
[74543.997961][T4016808] XFS (nullb1): limiting open zones to 7 due to total zone count (30)
[74544.000221][T4016808] XFS (nullb1): 30 zones of 65536 blocks (7 max open zones)
[74547.452701][T4016845] XFS (nullb1): User initiated shutdown received.
[74547.501565][T4016845] XFS (nullb1): Metadata I/O Error (0x4) detected at xfs_fs_goingdown+0xc5/0x190 [xfs] (fs/xfs/xfs_fsops.c:472). Shutting down filesystem.
[74547.506722][T4016845] XFS (nullb1): Please unmount the filesystem and rectify the problem(s)
[74547.622016][T4016850] XFS (nullb1): Unmounting Filesystem 5a2cde0d-cf72-4f2e-a28b-2649cf2217a2
[74547.827582][T4016854] XFS (nullb1): EXPERIMENTAL zoned RT device feature enabled. Use at your own risk!
[74547.833525][T4016854] XFS (nullb1): Mounting V5 Filesystem 5a2cde0d-cf72-4f2e-a28b-2649cf2217a2
[74547.859663][T4016854] XFS (nullb1): Starting recovery (logdev: internal)
[74548.324835][T4016854] XFS (nullb1): Ending recovery (logdev: internal)
[74548.327665][T4016854] XFS (nullb1): limiting open zones to 7 due to total zone count (30)
[74548.330051][T4016854] XFS (nullb1): 30 zones of 65536 blocks (7 max open zones)
[74548.806950][T4016890] XFS (nullb1): Unmounting Filesystem 5a2cde0d-cf72-4f2e-a28b-2649cf2217a2
[74549.475861][T4017092] XFS (nullb1): EXPERIMENTAL zoned RT device feature enabled. Use at your own risk!
[74549.483659][T4017092] XFS (nullb1): Mounting V5 Filesystem 5a2cde0d-cf72-4f2e-a28b-2649cf2217a2
[74549.513461][T4017092] XFS (nullb1): Ending clean mount
[74549.515091][T4017092] XFS (nullb1): limiting open zones to 7 due to total zone count (30)
[74549.516936][T4017092] XFS (nullb1): 30 zones of 65536 blocks (7 max open zones)
[74552.983159][T4017129] XFS (nullb1): User initiated shutdown received.
[74553.020194][T4017129] XFS (nullb1): Metadata I/O Error (0x4) detected at xfs_fs_goingdown+0xc5/0x190 [xfs] (fs/xfs/xfs_fsops.c:472). Shutting down filesystem.
[74553.025978][T4017129] XFS (nullb1): Please unmount the filesystem and rectify the problem(s)
[74553.128603][T4017134] XFS (nullb1): Unmounting Filesystem 5a2cde0d-cf72-4f2e-a28b-2649cf2217a2
[74553.328406][T4017138] XFS (nullb1): EXPERIMENTAL zoned RT device feature enabled. Use at your own risk!
[74553.334896][T4017138] XFS (nullb1): Mounting V5 Filesystem 5a2cde0d-cf72-4f2e-a28b-2649cf2217a2
[74553.367157][T4017138] XFS (nullb1): Starting recovery (logdev: internal)
[74553.666555][T4017138] XFS (nullb1): Ending recovery (logdev: internal)
[74553.672192][T4017138] XFS (nullb1): limiting open zones to 7 due to total zone count (30)
[74553.674566][T4017138] XFS (nullb1): 30 zones of 65536 blocks (7 max open zones)
[74554.147022][T4017174] XFS (nullb1): Unmounting Filesystem 5a2cde0d-cf72-4f2e-a28b-2649cf2217a2
[74554.764106][T4017376] XFS (nullb1): EXPERIMENTAL zoned RT device feature enabled. Use at your own risk!
[74554.769911][T4017376] XFS (nullb1): Mounting V5 Filesystem 5a2cde0d-cf72-4f2e-a28b-2649cf2217a2
[74554.805168][T4017376] XFS (nullb1): Ending clean mount
[74554.807178][T4017376] XFS (nullb1): limiting open zones to 7 due to total zone count (30)
[74554.809444][T4017376] XFS (nullb1): 30 zones of 65536 blocks (7 max open zones)
[74558.292837][T4017413] XFS (nullb1): User initiated shutdown received.
[74558.331407][T4017413] XFS (nullb1): Metadata I/O Error (0x4) detected at xfs_fs_goingdown+0xc5/0x190 [xfs] (fs/xfs/xfs_fsops.c:472). Shutting down filesystem.
[74558.336966][T4017413] XFS (nullb1): Please unmount the filesystem and rectify the problem(s)
[74558.426728][T4017418] XFS (nullb1): Unmounting Filesystem 5a2cde0d-cf72-4f2e-a28b-2649cf2217a2
[74558.638994][T4017422] XFS (nullb1): EXPERIMENTAL zoned RT device feature enabled. Use at your own risk!
[74558.644810][T4017422] XFS (nullb1): Mounting V5 Filesystem 5a2cde0d-cf72-4f2e-a28b-2649cf2217a2
[74558.679313][T4017422] XFS (nullb1): Starting recovery (logdev: internal)
[74559.093398][T4017422] XFS (nullb1): Ending recovery (logdev: internal)
[74559.098780][T4017422] XFS (nullb1): limiting open zones to 7 due to total zone count (30)
[74559.101390][T4017422] XFS (nullb1): 30 zones of 65536 blocks (7 max open zones)
[74560.058920][T4017486] XFS (nullb1): Unmounting Filesystem 5a2cde0d-cf72-4f2e-a28b-2649cf2217a2
[74560.952072][T4017693] XFS (nullb0): Unmounting Filesystem 705a8e01-bd9c-4946-b1d5-b436bbdb4cdc
[74627.036659][ C2] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
[74627.038841][ C2] rcu: 17-...!: (2 ticks this GP) idle=9eec/1/0x4000000000000000 softirq=3813925/3813925 fqs=214
[74627.043437][ C2] rcu: (detected by 2, t=65008 jiffies, g=29102977, q=1655 ncpus=24)
[74627.045884][ C2] Sending NMI from CPU 2 to CPUs 17:
[74627.045912][ C17] NMI backtrace for cpu 17
[74627.045920][ C17] CPU: 17 UID: 0 PID: 0 Comm: swapper/17 Not tainted 6.19.0-rc1-kts-xfs-gea44380376c+ #1 PREEMPT(lazy)
[74627.045925][ C17] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[74627.045927][ C17] RIP: 0010:mm_get_cid+0x171/0x2c0
[74627.045936][ C17] Code: cf ae 48 b8 00 00 00 00 00 fc ff df 83 e5 07 48 c1 eb 03 83 c5 03 48 01 c3 49 c7 c4 8c 89 b0 ad 41 83 e4 07 41 83 c4 03 f3 90 <48> 8b 04 24 0f b6 00 41 38 c4 7c 08 84 c0 0f 85 f2 00 00 00 8b 35
[74627.045939][ C17] RSP: 0018:ff1100010126fcf8 EFLAGS: 00000046
[74627.045944][ C17] RAX: 0000000000000018 RBX: fffffbfff5d9ee1c RCX: 0000000000000018
[74627.045946][ C17] RDX: 0000000000000018 RSI: dffffc0000000000 RDI: ff110006d9e797d0
[74627.045948][ C17] RBP: 0000000000000003 R08: 0000000040000000 R09: 0000000000000011
[74627.045950][ C17] R10: ff110006d9e78d40 R11: ff11000597bd55d4 R12: 0000000000000007
[74627.045953][ C17] R13: ff110006d9e797c0 R14: 0000000000000018 R15: 0000000000000018
[74627.045956][ C17] FS: 0000000000000000(0000) GS:ff11000e8d366000(0000) knlGS:0000000000000000
[74627.045958][ C17] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[74627.045960][ C17] CR2: 00007ff129963660 CR3: 0000000241074005 CR4: 0000000000773ef0
[74627.045965][ C17] PKRU: 55555554
[74627.045967][ C17] Call Trace:
[74627.045969][ C17] <TASK>
[74627.045974][ C17] mm_cid_switch_to+0xa6d/0x1300
[74627.045980][ C17] ? switch_mm_irqs_off+0x6fe/0x970
[74627.045987][ C17] __schedule+0x6b5/0x12a0
[74627.045992][ C17] ? __pfx___schedule+0x10/0x10
[74627.045995][ C17] ? trace_hardirqs_on+0x18/0x140
[74627.045999][ C17] ? default_idle_call+0x93/0xb0
[74627.046004][ C17] ? __pfx_cpuidle_idle_call+0x10/0x10
[74627.046008][ C17] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[74627.046014][ C17] schedule_idle+0x59/0x90
[74627.046017][ C17] do_idle+0x10e/0x190
[74627.046021][ C17] cpu_startup_entry+0x53/0x70
[74627.046024][ C17] start_secondary+0x220/0x2e0
[74627.046029][ C17] ? __pfx_start_secondary+0x10/0x10
[74627.046035][ C17] common_startup_64+0x13e/0x141
[74627.046045][ C17] </TASK>
[74627.046912][ C2] rcu: rcu_preempt kthread timer wakeup didn't happen for 63942 jiffies! g29102977 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x200
[74627.105007][ C2] rcu: Possible timer handling issue on cpu=16 timer-softirq=1330160
[74627.106796][ C2] rcu: rcu_preempt kthread starved for 64005 jiffies! g29102977 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=16
[74627.109233][ C2] rcu: Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
[74627.111373][ C2] rcu: RCU grace-period kthread stack dump:
[74627.112728][ C2] task:rcu_preempt state:R stack:0 pid:16 tgid:16 ppid:2 task_flags:0x208040 flags:0x00080000
[74627.115098][ C2] Call Trace:
[74627.115809][ C2] <TASK>
[74627.116407][ C2] __schedule+0x841/0x12a0
[74627.117335][ C2] ? __pfx___schedule+0x10/0x10
[74627.118239][ C2] ? find_held_lock+0x2b/0x80
[74627.119109][ C2] ? __lock_release.isra.0+0x59/0x170
[74627.120152][ C2] ? lock_acquire+0xf6/0x130
[74627.121083][ C2] schedule+0xd1/0x250
[74627.121959][ C2] schedule_timeout+0x103/0x260
[74627.122934][ C2] ? find_held_lock+0x2b/0x80
[74627.123828][ C2] ? __pfx_schedule_timeout+0x10/0x10
[74627.124853][ C2] ? __pfx_process_timeout+0x10/0x10
[74627.125926][ C2] ? _raw_spin_unlock_irqrestore+0x44/0x60
[74627.127059][ C2] ? prepare_to_swait_event+0xe2/0x4f0
[74627.128027][ C2] rcu_gp_fqs_loop+0x208/0x11b0
[74627.128888][ C2] ? rcu_gp_init+0xb0f/0x14a0
[74627.129706][ C2] ? __pfx_rcu_gp_fqs_loop+0x10/0x10
[74627.130608][ C2] ? _raw_spin_unlock_irq+0x28/0x50
[74627.131561][ C2] ? rcu_gp_cleanup+0x824/0xf40
[74627.132454][ C2] ? __pfx_rcu_gp_init+0x10/0x10
[74627.133294][ C2] ? __pfx_rcu_gp_cleanup+0x10/0x10
[74627.134152][ C2] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[74627.135240][ C2] rcu_gp_kthread+0x4f2/0x660
[74627.136056][ C2] ? lock_acquire+0xf6/0x130
[74627.136862][ C2] ? __pfx_rcu_gp_kthread+0x10/0x10
[74627.137692][ C2] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[74627.138710][ C2] ? __pfx_rcu_gp_kthread+0x10/0x10
[74627.139492][ C2] ? __kthread_parkme+0xb3/0x1f0
[74627.140333][ C2] ? __pfx_rcu_gp_kthread+0x10/0x10
[74627.141192][ C2] kthread+0x3a4/0x760
[74627.141924][ C2] ? __pfx_kthread+0x10/0x10
[74627.142669][ C2] ? __lock_release.isra.0+0x59/0x170
[74627.143483][ C2] ? __pfx_kthread+0x10/0x10
[74627.144173][ C2] ret_from_fork+0x50e/0x670
[74627.144900][ C2] ? __pfx_ret_from_fork+0x10/0x10
[74627.145713][ C2] ? __switch_to+0x42c/0xd70
[74627.146409][ C2] ? __pfx_kthread+0x10/0x10
[74627.147098][ C2] ret_from_fork_asm+0x1a/0x30
[74627.147867][ C2] </TASK>
[74627.148314][ C2] rcu: Stack dump where RCU GP kthread last ran:
[74627.149199][ C2] Sending NMI from CPU 2 to CPUs 16:
[74627.150023][ C16] NMI backtrace for cpu 16
[74627.150029][ C16] CPU: 16 UID: 0 PID: 0 Comm: swapper/16 Not tainted 6.19.0-rc1-kts-xfs-gea44380376c+ #1 PREEMPT(lazy)
[74627.150034][ C16] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[74627.150037][ C16] RIP: 0010:__pv_queued_spin_lock_slowpath+0x232/0xdc0
[74627.150043][ C16] Code: 44 8b 2b c6 44 24 60 00 66 45 85 ed 74 1e 41 81 fd ff ff 00 00 0f 86 5e 02 00 00 41 81 e5 00 ff 00 00 0f 85 51 02 00 00 f3 90 <eb> b5 48 89 df be 01 00 00 00 e8 bf d0 bd fd be 01 00 00 00 48 8d
[74627.150047][ C16] RSP: 0018:ff11000e3d008ae0 EFLAGS: 00000046
[74627.150050][ C16] RAX: 0000000000000000 RBX: ff11000e3d0c53c0 RCX: 0000000000000001
[74627.150053][ C16] RDX: 0000000000000000 RSI: 0000000000000004 RDI: ff11000e3d0c53c0
[74627.150055][ C16] RBP: 0000000000000003 R08: ffffffffacab9a86 R09: ffe21c01c7a18a78
[74627.150057][ C16] R10: ffe21c01c7a18a79 R11: 0000000000000001 R12: ffe21c01c7a18a78
[74627.150059][ C16] R13: 0000000000000000 R14: ff11000e3d046388 R15: ff11000e3d046380
[74627.150061][ C16] FS: 0000000000000000(0000) GS:ff11000e8d2e6000(0000) knlGS:0000000000000000
[74627.150064][ C16] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[74627.150066][ C16] CR2: 00007ff11a7f4d10 CR3: 0000000241074006 CR4: 0000000000773ef0
[74627.150071][ C16] PKRU: 55555554
[74627.150072][ C16] Call Trace:
[74627.150075][ C16] <IRQ>
[74627.150079][ C16] ? __pfx___pv_queued_spin_lock_slowpath+0x10/0x10
[74627.150084][ C16] ? __lock_acquire+0x55d/0xbd0
[74627.150091][ C16] do_raw_spin_lock+0x1d9/0x270
[74627.150096][ C16] ? __pfx_do_raw_spin_lock+0x10/0x10
[74627.150099][ C16] ? lock_acquire.part.0+0xb8/0x230
[74627.150102][ C16] ? lock_acquire+0xf6/0x130
[74627.150106][ C16] raw_spin_rq_lock_nested+0x24/0x170
[74627.150112][ C16] dl_server_timer+0xe8/0x1020
[74627.150117][ C16] ? sched_balance_trigger+0x138/0x1e0
[74627.150122][ C16] ? __pfx_dl_server_timer+0x10/0x10
[74627.150130][ C16] dl_task_timer+0x24f/0x7e0
[74627.150133][ C16] ? find_held_lock+0x2b/0x80
[74627.150137][ C16] ? __pfx_dl_task_timer+0x10/0x10
[74627.150144][ C16] __hrtimer_run_queues+0x186/0x890
[74627.150147][ C16] ? __pfx_dl_task_timer+0x10/0x10
[74627.150152][ C16] ? __pfx___hrtimer_run_queues+0x10/0x10
[74627.150155][ C16] ? lock_release.part.0+0x1c/0x50
[74627.150158][ C16] ? ktime_get_update_offsets_now+0x8e/0x2f0
[74627.150164][ C16] hrtimer_interrupt+0x313/0x870
[74627.150171][ C16] __sysvec_apic_timer_interrupt+0xc0/0x2f0
[74627.150177][ C16] sysvec_apic_timer_interrupt+0x6c/0x90
[74627.150181][ C16] </IRQ>
[74627.150182][ C16] <TASK>
[74627.150184][ C16] asm_sysvec_apic_timer_interrupt+0x1a/0x20
[74627.150188][ C16] RIP: 0010:pv_native_safe_halt+0xf/0x20
[74627.150191][ C16] Code: 20 d0 c3 cc cc cc cc 0f 1f 40 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 66 90 0f 00 2d 53 51 20 00 fb f4 <e9> 4c 1f 03 00 66 66 2e 0f 1f 84 00 00 00 00 00 90 90 90 90 90 90
[74627.150194][ C16] RSP: 0018:ff1100010125fdf0 EFLAGS: 000002c2
[74627.150197][ C16] RAX: 000000000b714149 RBX: 0000000000000000 RCX: ffffffffa9d05990
[74627.150199][ C16] RDX: ff11000101243b80 RSI: 0000000000000000 RDI: ffffffffa9d05990
[74627.150201][ C16] RBP: ff11000101243b80 R08: 0000000000000000 R09: 0000000000000001
[74627.150202][ C16] R10: ffe21c01c7a07c0e R11: 0000000000000001 R12: 1fe220002024bfc1
[74627.150204][ C16] R13: 0000000000000000 R14: dffffc0000000000 R15: 0000000000000000
[74627.150209][ C16] ? cpuidle_idle_call+0x1e0/0x270
[74627.150212][ C16] ? cpuidle_idle_call+0x1e0/0x270
[74627.150216][ C16] default_idle+0x9/0x20
[74627.150219][ C16] default_idle_call+0x6a/0xb0
[74627.150222][ C16] cpuidle_idle_call+0x1e0/0x270
[74627.150226][ C16] ? __pfx_cpuidle_idle_call+0x10/0x10
[74627.150229][ C16] ? __pfx_tsc_verify_tsc_adjust+0x10/0x10
[74627.150232][ C16] ? lock_release.part.0+0x1c/0x50
[74627.150235][ C16] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[74627.150240][ C16] do_idle+0xee/0x190
[74627.150243][ C16] cpu_startup_entry+0x53/0x70
[74627.150246][ C16] start_secondary+0x220/0x2e0
[74627.150250][ C16] ? __pfx_start_secondary+0x10/0x10
[74627.150256][ C16] common_startup_64+0x13e/0x141
[74627.150264][ C16] </TASK>
[74672.276008][ C9] watchdog: BUG: soft lockup - CPU#9 stuck for 22s! [kworker/9:8:4005961]
[74672.276015][ C9] CPU#9 Utilization every 4000ms during lockup:
[74672.276017][ C9] #1: 100% system, 0% softirq, 1% hardirq, 0% idle
[74672.276021][ C9] #2: 100% system, 0% softirq, 1% hardirq, 0% idle
[74672.276023][ C9] #3: 100% system, 1% softirq, 1% hardirq, 0% idle
[74672.276025][ C9] #4: 100% system, 0% softirq, 1% hardirq, 0% idle
[74672.276027][ C9] #5: 100% system, 0% softirq, 1% hardirq, 0% idle
[74672.276029][ C9] Modules linked in: btrfs xor raid6_pq dm_flakey null_blk target_core_user target_core_mod rfkill nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables qrtr sunrpc intel_rapl_msr intel_rapl_common intel_uncore_frequency_common skx_edac_common nfit libnvdimm kvm_intel kvm ppdev iTCO_wdt intel_pmc_bxt iTCO_vendor_support irqbypass parport_pc i2c_i801 rapl i2c_smbus parport lpc_ich joydev fuse loop dm_multipath nfnetlink zram lz4hc_compress lz4_compress vsock_loopback zstd_compress vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport vmw_vmci vsock xfs bochs drm_client_lib drm_shmem_helper drm_kms_helper drm virtio_net ghash_clmulni_intel net_failover failover serio_raw scsi_dh_rdac scsi_dh_emc scsi_dh_alua i2c_dev qemu_fw_cfg [last unloaded: scsi_debug]
[74672.276105][ C9] irq event stamp: 129200
[74672.276106][ C9] hardirqs last enabled at (129199): [<ffffffffa960160a>] asm_sysvec_apic_timer_interrupt+0x1a/0x20
[74672.276114][ C9] hardirqs last disabled at (129200): [<ffffffffaca8afae>] sysvec_apic_timer_interrupt+0xe/0x90
[74672.276119][ C9] softirqs last enabled at (129130): [<ffffffffa9baa582>] handle_softirqs+0x522/0x7b0
[74672.276125][ C9] softirqs last disabled at (129125): [<ffffffffa9baa9a1>] __irq_exit_rcu+0x181/0x1d0
[74672.276131][ C9] CPU: 9 UID: 0 PID: 4005961 Comm: kworker/9:8 Not tainted 6.19.0-rc1-kts-xfs-gea44380376c+ #1 PREEMPT(lazy)
[74672.276136][ C9] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[74672.276139][ C9] Workqueue: events netstamp_clear
[74672.276145][ C9] RIP: 0010:smp_call_function_many_cond+0x885/0x1070
[74672.276151][ C9] Code: 38 c8 7c 08 84 c9 0f 85 6e 05 00 00 8b 43 08 a8 01 74 2e 48 89 f1 49 89 f6 48 c1 e9 03 41 83 e6 07 4c 01 f9 41 83 c6 03 f3 90 <0f> b6 01 41 38 c6 7c 08 84 c0 0f 85 db 04 00 00 8b 43 08 a8 01 75
[74672.276155][ C9] RSP: 0018:ff110006c740f990 EFLAGS: 00000202
[74672.276158][ C9] RAX: 0000000000000011 RBX: ffd1fffffe600d40 RCX: fffa3bffffcc01a9
[74672.276160][ C9] RDX: 0000000000000000 RSI: ffd1fffffe600d48 RDI: ffffffffadaf8160
[74672.276162][ C9] RBP: ffe21c01c7998d49 R08: ff11000174e44740 R09: 0000000000000000
[74672.276165][ C9] R10: 1fe220002e9c88e8 R11: 0000000000000000 R12: ff11000e3ccc6a40
[74672.276167][ C9] R13: ffe21c01c7998d48 R14: 0000000000000003 R15: dffffc0000000000
[74672.276169][ C9] FS: 0000000000000000(0000) GS:ff11000e8cf66000(0000) knlGS:0000000000000000
[74672.276171][ C9] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[74672.276177][ C9] CR2: 00007ff11b7f5cb8 CR3: 0000000ac5c96005 CR4: 0000000000773ef0
[74672.276182][ C9] PKRU: 55555554
[74672.276184][ C9] Call Trace:
[74672.276186][ C9] <TASK>
[74672.276192][ C9] ? __pfx_do_sync_core+0x10/0x10
[74672.276198][ C9] ? __text_poke+0x3b1/0x870
[74672.276205][ C9] ? __pfx_smp_call_function_many_cond+0x10/0x10
[74672.276209][ C9] ? __pfx___text_poke+0x10/0x10
[74672.276214][ C9] ? __pfx___might_resched+0x10/0x10
[74672.276222][ C9] on_each_cpu_cond_mask+0x24/0x40
[74672.276226][ C9] smp_text_poke_batch_finish+0x45c/0xd20
[74672.276230][ C9] ? tpacket_rcv+0xdc9/0x3ec0
[74672.276236][ C9] ? __pfx_smp_text_poke_batch_finish+0x10/0x10
[74672.276240][ C9] ? __jump_label_patch+0x25f/0x320
[74672.276247][ C9] ? arch_jump_label_transform_queue+0xa8/0x110
[74672.276256][ C9] arch_jump_label_transform_apply+0x1c/0x30
[74672.276260][ C9] static_key_enable_cpuslocked+0x16c/0x230
[74672.276265][ C9] static_key_enable+0x1f/0x30
[74672.276268][ C9] process_one_work+0x86b/0x1490
[74672.276280][ C9] ? __pfx_process_one_work+0x10/0x10
[74672.276283][ C9] ? lock_acquire.part.0+0xb8/0x230
[74672.276289][ C9] ? lock_is_held_type+0x9a/0x110
[74672.276295][ C9] ? assign_work+0x156/0x390
[74672.276303][ C9] worker_thread+0x5f2/0xfd0
[74672.276313][ C9] ? __pfx_worker_thread+0x10/0x10
[74672.276317][ C9] kthread+0x3a4/0x760
[74672.276321][ C9] ? __pfx_kthread+0x10/0x10
[74672.276325][ C9] ? __lock_release.isra.0+0x59/0x170
[74672.276330][ C9] ? __pfx_kthread+0x10/0x10
[74672.276334][ C9] ret_from_fork+0x50e/0x670
[74672.276339][ C9] ? __pfx_ret_from_fork+0x10/0x10
[74672.276344][ C9] ? __switch_to+0x42c/0xd70
[74672.276348][ C9] ? __pfx_kthread+0x10/0x10
[74672.276352][ C9] ret_from_fork_asm+0x1a/0x30
[74672.276366][ C9] </TASK>
[74695.040703][ C14] watchdog: BUG: soft lockup - CPU#14 stuck for 23s! [abrt-dump-journ:1115]
[74695.040711][ C14] CPU#14 Utilization every 4000ms during lockup:
[74695.040714][ C14] #1: 100% system, 0% softirq, 1% hardirq, 0% idle
[74695.040717][ C14] #2: 100% system, 1% softirq, 1% hardirq, 0% idle
[74695.040719][ C14] #3: 100% system, 0% softirq, 1% hardirq, 0% idle
[74695.040721][ C14] #4: 100% system, 0% softirq, 1% hardirq, 0% idle
[74695.040723][ C14] #5: 100% system, 0% softirq, 1% hardirq, 0% idle
[74695.040725][ C14] Modules linked in: btrfs xor raid6_pq dm_flakey null_blk target_core_user target_core_mod rfkill nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables qrtr sunrpc intel_rapl_msr intel_rapl_common intel_uncore_frequency_common skx_edac_common nfit libnvdimm kvm_intel kvm ppdev iTCO_wdt intel_pmc_bxt iTCO_vendor_support irqbypass parport_pc i2c_i801 rapl i2c_smbus parport lpc_ich joydev fuse loop dm_multipath nfnetlink zram lz4hc_compress lz4_compress vsock_loopback zstd_compress vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport vmw_vmci vsock xfs bochs drm_client_lib drm_shmem_helper drm_kms_helper drm virtio_net ghash_clmulni_intel net_failover failover serio_raw scsi_dh_rdac scsi_dh_emc scsi_dh_alua i2c_dev qemu_fw_cfg [last unloaded: scsi_debug]
[74695.040801][ C14] irq event stamp: 1666440
[74695.040802][ C14] hardirqs last enabled at (1666439): [<ffffffffa960160a>] asm_sysvec_apic_timer_interrupt+0x1a/0x20
[74695.040811][ C14] hardirqs last disabled at (1666440): [<ffffffffaca8afae>] sysvec_apic_timer_interrupt+0xe/0x90
[74695.040815][ C14] softirqs last enabled at (1666428): [<ffffffffa9baa582>] handle_softirqs+0x522/0x7b0
[74695.040820][ C14] softirqs last disabled at (1666423): [<ffffffffa9baa9a1>] __irq_exit_rcu+0x181/0x1d0
[74695.040827][ C14] CPU: 14 UID: 0 PID: 1115 Comm: abrt-dump-journ Tainted: G L 6.19.0-rc1-kts-xfs-gea44380376c+ #1 PREEMPT(lazy)
[74695.040833][ C14] Tainted: [L]=SOFTLOCKUP
[74695.040835][ C14] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[74695.040837][ C14] RIP: 0010:smp_call_function_many_cond+0x885/0x1070
[74695.040843][ C14] Code: 38 c8 7c 08 84 c9 0f 85 6e 05 00 00 8b 43 08 a8 01 74 2e 48 89 f1 49 89 f6 48 c1 e9 03 41 83 e6 07 4c 01 f9 41 83 c6 03 f3 90 <0f> b6 01 41 38 c6 7c 08 84 c0 0f 85 db 04 00 00 8b 43 08 a8 01 75
[74695.040847][ C14] RSP: 0018:ff110001502e7230 EFLAGS: 00000202
[74695.040850][ C14] RAX: 0000000000000011 RBX: ffd1fffffe604be0 RCX: fffa3bffffcc097d
[74695.040852][ C14] RDX: 0000000000000000 RSI: ffd1fffffe604be8 RDI: ffffffffadaf8160
[74695.040854][ C14] RBP: ffe21c01c79e8d49 R08: ff11000102129f00 R09: 0000000000000000
[74695.040856][ C14] R10: 1fe22000204253e0 R11: 0000000000000000 R12: ff11000e3cf46a40
[74695.040858][ C14] R13: ffe21c01c79e8d48 R14: 0000000000000003 R15: dffffc0000000000
[74695.040861][ C14] FS: 00007f806c869a00(0000) GS:ff11000e8d1e6000(0000) knlGS:0000000000000000
[74695.040863][ C14] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[74695.040865][ C14] CR2: 00007f806d14eb60 CR3: 00000001aa5a6005 CR4: 0000000000773ef0
[74695.040870][ C14] PKRU: 55555554
[74695.040871][ C14] Call Trace:
[74695.040874][ C14] <TASK>
[74695.040879][ C14] ? __pfx_flush_tlb_func+0x10/0x10
[74695.040890][ C14] ? __pfx_smp_call_function_many_cond+0x10/0x10
[74695.040899][ C14] on_each_cpu_cond_mask+0x24/0x40
[74695.040903][ C14] kvm_flush_tlb_multi+0x216/0x320
[74695.040911][ C14] ? __pfx_kvm_flush_tlb_multi+0x10/0x10
[74695.040916][ C14] ? do_raw_spin_lock+0x128/0x270
[74695.040921][ C14] ? __pfx_do_raw_spin_lock+0x10/0x10
[74695.040928][ C14] flush_tlb_mm_range+0x420/0x690
[74695.040934][ C14] ? __pte_offset_map_lock+0x150/0x2c0
[74695.040939][ C14] ? __pfx_flush_tlb_mm_range+0x10/0x10
[74695.040943][ C14] ? __pfx___pte_offset_map_lock+0x10/0x10
[74695.040946][ C14] ? lock_acquire+0xf6/0x130
[74695.040952][ C14] flush_tlb_batched_pending+0x88/0xd0
[74695.040958][ C14] zap_pte_range+0x189/0x850
[74695.040966][ C14] ? arch_stack_walk+0xb7/0x100
[74695.040979][ C14] ? __pfx_zap_pte_range+0x10/0x10
[74695.040985][ C14] ? __lock_acquire+0x55d/0xbd0
[74695.040993][ C14] ? __pfx_pmd_val+0x10/0x10
[74695.040996][ C14] ? find_held_lock+0x2b/0x80
[74695.041003][ C14] zap_pmd_range.isra.0+0x1bf/0x570
[74695.041009][ C14] ? lock_release.part.0+0x1c/0x50
[74695.041013][ C14] ? __pfx_zap_pmd_range.isra.0+0x10/0x10
[74695.041019][ C14] ? __pfx_p4d_offset.part.0.isra.0+0x10/0x10
[74695.041026][ C14] unmap_page_range+0x436/0x950
[74695.041038][ C14] unmap_vmas+0x1f1/0x3b0
[74695.041044][ C14] ? __pfx_unmap_vmas+0x10/0x10
[74695.041055][ C14] ? __lock_acquire+0x55d/0xbd0
[74695.041063][ C14] vms_clear_ptes+0x381/0x6f0
[74695.041071][ C14] ? __pfx_vms_clear_ptes+0x10/0x10
[74695.041086][ C14] vms_complete_munmap_vmas+0x1a2/0x720
[74695.041093][ C14] do_vmi_align_munmap+0x314/0x480
[74695.041100][ C14] ? __pfx_do_vmi_align_munmap+0x10/0x10
[74695.041110][ C14] ? __lock_acquire+0x55d/0xbd0
[74695.041130][ C14] __do_sys_brk+0x6cc/0x950
[74695.041139][ C14] ? __pfx___do_sys_brk+0x10/0x10
[74695.041151][ C14] ? __lock_release.isra.0+0x59/0x170
[74695.041157][ C14] do_syscall_64+0x98/0x5c0
[74695.041162][ C14] ? __lock_release.isra.0+0x59/0x170
[74695.041168][ C14] ? do_user_addr_fault+0x811/0xed0
[74695.041177][ C14] ? trace_hardirqs_on_prepare+0x101/0x140
[74695.041180][ C14] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[74695.041183][ C14] ? clear_bhb_loop+0x30/0x80
[74695.041186][ C14] ? clear_bhb_loop+0x30/0x80
[74695.041191][ C14] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[74695.041194][ C14] RIP: 0033:0x7f806dc4f16b
[74695.041198][ C14] Code: 85 c0 74 e0 4d 85 f6 74 0b 4c 89 e7 41 ff d6 4c 89 fe eb d9 48 89 f7 e8 a3 66 f1 ff eb f1 90 f3 0f 1e fa b8 0c 00 00 00 0f 05 <48> 8b 15 ce cc 0f 00 48 89 02 48 39 f8 72 06 31 c0 c3 0f 1f 00 48
[74695.041201][ C14] RSP: 002b:00007ffe7f391a18 EFLAGS: 00000206 ORIG_RAX: 000000000000000c
[74695.041204][ C14] RAX: ffffffffffffffda RBX: 000000000000000b RCX: 00007f806dc4f16b
[74695.041206][ C14] RDX: 000055bbb19ef000 RSI: 000055bbb19ef000 RDI: 000055bbb19d8000
[74695.041208][ C14] RBP: 00007ffe7f391a30 R08: 00000000000208d0 R09: 00007f806dd4cac0
[74695.041210][ C14] R10: 0000000000017700 R11: 0000000000000206 R12: 0000000000000000
[74695.041212][ C14] R13: 000055bbae28535a R14: 0000000000000000 R15: 00007ffe7f391d38
[74695.041223][ C14] </TASK>
[74700.275716][ C9] watchdog: BUG: soft lockup - CPU#9 stuck for 49s! [kworker/9:8:4005961]
[74700.275723][ C9] CPU#9 Utilization every 4000ms during lockup:
[74700.275725][ C9] #1: 100% system, 0% softirq, 1% hardirq, 0% idle
[74700.275728][ C9] #2: 100% system, 0% softirq, 1% hardirq, 0% idle
[74700.275730][ C9] #3: 100% system, 0% softirq, 1% hardirq, 0% idle
[74700.275732][ C9] #4: 100% system, 0% softirq, 1% hardirq, 0% idle
[74700.275734][ C9] #5: 100% system, 0% softirq, 1% hardirq, 0% idle
[74700.275736][ C9] Modules linked in: btrfs xor raid6_pq dm_flakey null_blk target_core_user target_core_mod rfkill nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables qrtr sunrpc intel_rapl_msr intel_rapl_common intel_uncore_frequency_common skx_edac_common nfit libnvdimm kvm_intel kvm ppdev iTCO_wdt intel_pmc_bxt iTCO_vendor_support irqbypass parport_pc i2c_i801 rapl i2c_smbus parport lpc_ich joydev fuse loop dm_multipath nfnetlink zram lz4hc_compress lz4_compress vsock_loopback zstd_compress vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport vmw_vmci vsock xfs bochs drm_client_lib drm_shmem_helper drm_kms_helper drm virtio_net ghash_clmulni_intel net_failover failover serio_raw scsi_dh_rdac scsi_dh_emc scsi_dh_alua i2c_dev qemu_fw_cfg [last unloaded: scsi_debug]
[74700.275811][ C9] irq event stamp: 188068
[74700.275813][ C9] hardirqs last enabled at (188067): [<ffffffffa960160a>] asm_sysvec_apic_timer_interrupt+0x1a/0x20
[74700.275822][ C9] hardirqs last disabled at (188068): [<ffffffffaca8afae>] sysvec_apic_timer_interrupt+0xe/0x90
[74700.275826][ C9] softirqs last enabled at (187950): [<ffffffffa9baa582>] handle_softirqs+0x522/0x7b0
[74700.275832][ C9] softirqs last disabled at (187945): [<ffffffffa9baa9a1>] __irq_exit_rcu+0x181/0x1d0
[74700.275839][ C9] CPU: 9 UID: 0 PID: 4005961 Comm: kworker/9:8 Tainted: G L 6.19.0-rc1-kts-xfs-gea44380376c+ #1 PREEMPT(lazy)
[74700.275845][ C9] Tainted: [L]=SOFTLOCKUP
[74700.275847][ C9] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[74700.275849][ C9] Workqueue: events netstamp_clear
[74700.275855][ C9] RIP: 0010:smp_call_function_many_cond+0x885/0x1070
[74700.275861][ C9] Code: 38 c8 7c 08 84 c9 0f 85 6e 05 00 00 8b 43 08 a8 01 74 2e 48 89 f1 49 89 f6 48 c1 e9 03 41 83 e6 07 4c 01 f9 41 83 c6 03 f3 90 <0f> b6 01 41 38 c6 7c 08 84 c0 0f 85 db 04 00 00 8b 43 08 a8 01 75
[74700.275865][ C9] RSP: 0018:ff110006c740f990 EFLAGS: 00000202
[74700.275868][ C9] RAX: 0000000000000011 RBX: ffd1fffffe600d40 RCX: fffa3bffffcc01a9
[74700.275870][ C9] RDX: 0000000000000000 RSI: ffd1fffffe600d48 RDI: ffffffffadaf8160
[74700.275873][ C9] RBP: ffe21c01c7998d49 R08: ff11000174e44740 R09: 0000000000000000
[74700.275875][ C9] R10: 1fe220002e9c88e8 R11: 0000000000000000 R12: ff11000e3ccc6a40
[74700.275879][ C9] R13: ffe21c01c7998d48 R14: 0000000000000003 R15: dffffc0000000000
[74700.275883][ C9] FS: 0000000000000000(0000) GS:ff11000e8cf66000(0000) knlGS:0000000000000000
[74700.275888][ C9] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[74700.275892][ C9] CR2: 00007ff11b7f5cb8 CR3: 0000000ac5c96005 CR4: 0000000000773ef0
[74700.275899][ C9] PKRU: 55555554
[74700.275902][ C9] Call Trace:
[74700.275905][ C9] <TASK>
[74700.275910][ C9] ? __pfx_do_sync_core+0x10/0x10
[74700.275915][ C9] ? __text_poke+0x3b1/0x870
[74700.275930][ C9] ? __pfx_smp_call_function_many_cond+0x10/0x10
[74700.275934][ C9] ? __pfx___text_poke+0x10/0x10
[74700.275939][ C9] ? __pfx___might_resched+0x10/0x10
[74700.275947][ C9] on_each_cpu_cond_mask+0x24/0x40
[74700.275952][ C9] smp_text_poke_batch_finish+0x45c/0xd20
[74700.275955][ C9] ? tpacket_rcv+0xdc9/0x3ec0
[74700.275963][ C9] ? __pfx_smp_text_poke_batch_finish+0x10/0x10
[74700.275966][ C9] ? __jump_label_patch+0x25f/0x320
[74700.275974][ C9] ? arch_jump_label_transform_queue+0xa8/0x110
[74700.275983][ C9] arch_jump_label_transform_apply+0x1c/0x30
[74700.275987][ C9] static_key_enable_cpuslocked+0x16c/0x230
[74700.275991][ C9] static_key_enable+0x1f/0x30
[74700.275995][ C9] process_one_work+0x86b/0x1490
[74700.276006][ C9] ? __pfx_process_one_work+0x10/0x10
[74700.276009][ C9] ? lock_acquire.part.0+0xb8/0x230
[74700.276015][ C9] ? lock_is_held_type+0x9a/0x110
[74700.276021][ C9] ? assign_work+0x156/0x390
[74700.276030][ C9] worker_thread+0x5f2/0xfd0
[74700.276040][ C9] ? __pfx_worker_thread+0x10/0x10
[74700.276043][ C9] kthread+0x3a4/0x760
[74700.276048][ C9] ? __pfx_kthread+0x10/0x10
[74700.276052][ C9] ? __lock_release.isra.0+0x59/0x170
[74700.276057][ C9] ? __pfx_kthread+0x10/0x10
[74700.276061][ C9] ret_from_fork+0x50e/0x670
[74700.276066][ C9] ? __pfx_ret_from_fork+0x10/0x10
[74700.276071][ C9] ? __switch_to+0x42c/0xd70
[74700.276076][ C9] ? __pfx_kthread+0x10/0x10
[74700.276080][ C9] ret_from_fork_asm+0x1a/0x30
[74700.276094][ C9] </TASK>
[74723.040410][ C14] watchdog: BUG: soft lockup - CPU#14 stuck for 49s! [abrt-dump-journ:1115]
[74723.040418][ C14] CPU#14 Utilization every 4000ms during lockup:
[74723.040421][ C14] #1: 100% system, 0% softirq, 1% hardirq, 0% idle
[74723.040424][ C14] #2: 100% system, 0% softirq, 1% hardirq, 0% idle
[74723.040427][ C14] #3: 100% system, 0% softirq, 1% hardirq, 0% idle
[74723.040429][ C14] #4: 100% system, 0% softirq, 1% hardirq, 0% idle
[74723.040430][ C14] #5: 100% system, 0% softirq, 1% hardirq, 0% idle
[74723.040432][ C14] Modules linked in: btrfs xor raid6_pq dm_flakey null_blk target_core_user target_core_mod rfkill nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables qrtr sunrpc intel_rapl_msr intel_rapl_common intel_uncore_frequency_common skx_edac_common nfit libnvdimm kvm_intel kvm ppdev iTCO_wdt intel_pmc_bxt iTCO_vendor_support irqbypass parport_pc i2c_i801 rapl i2c_smbus parport lpc_ich joydev fuse loop dm_multipath nfnetlink zram lz4hc_compress lz4_compress vsock_loopback zstd_compress vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport vmw_vmci vsock xfs bochs drm_client_lib drm_shmem_helper drm_kms_helper drm virtio_net ghash_clmulni_intel net_failover failover serio_raw scsi_dh_rdac scsi_dh_emc scsi_dh_alua i2c_dev qemu_fw_cfg [last unloaded: scsi_debug]
[74723.040507][ C14] irq event stamp: 1725162
[74723.040509][ C14] hardirqs last enabled at (1725161): [<ffffffffa960160a>] asm_sysvec_apic_timer_interrupt+0x1a/0x20
[74723.040517][ C14] hardirqs last disabled at (1725162): [<ffffffffaca8afae>] sysvec_apic_timer_interrupt+0xe/0x90
[74723.040522][ C14] softirqs last enabled at (1725096): [<ffffffffa9baa582>] handle_softirqs+0x522/0x7b0
[74723.040526][ C14] softirqs last disabled at (1725091): [<ffffffffa9baa9a1>] __irq_exit_rcu+0x181/0x1d0
[74723.040534][ C14] CPU: 14 UID: 0 PID: 1115 Comm: abrt-dump-journ Tainted: G L 6.19.0-rc1-kts-xfs-gea44380376c+ #1 PREEMPT(lazy)
[74723.040539][ C14] Tainted: [L]=SOFTLOCKUP
[74723.040541][ C14] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[74723.040543][ C14] RIP: 0010:smp_call_function_many_cond+0x885/0x1070
[74723.040549][ C14] Code: 38 c8 7c 08 84 c9 0f 85 6e 05 00 00 8b 43 08 a8 01 74 2e 48 89 f1 49 89 f6 48 c1 e9 03 41 83 e6 07 4c 01 f9 41 83 c6 03 f3 90 <0f> b6 01 41 38 c6 7c 08 84 c0 0f 85 db 04 00 00 8b 43 08 a8 01 75
[74723.040552][ C14] RSP: 0018:ff110001502e7230 EFLAGS: 00000202
[74723.040555][ C14] RAX: 0000000000000011 RBX: ffd1fffffe604be0 RCX: fffa3bffffcc097d
[74723.040558][ C14] RDX: 0000000000000000 RSI: ffd1fffffe604be8 RDI: ffffffffadaf8160
[74723.040560][ C14] RBP: ffe21c01c79e8d49 R08: ff11000102129f00 R09: 0000000000000000
[74723.040562][ C14] R10: 1fe22000204253e0 R11: 0000000000000000 R12: ff11000e3cf46a40
[74723.040564][ C14] R13: ffe21c01c79e8d48 R14: 0000000000000003 R15: dffffc0000000000
[74723.040566][ C14] FS: 00007f806c869a00(0000) GS:ff11000e8d1e6000(0000) knlGS:0000000000000000
[74723.040569][ C14] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[74723.040571][ C14] CR2: 00007f806d14eb60 CR3: 00000001aa5a6005 CR4: 0000000000773ef0
[74723.040576][ C14] PKRU: 55555554
[74723.040577][ C14] Call Trace:
[74723.040579][ C14] <TASK>
[74723.040585][ C14] ? __pfx_flush_tlb_func+0x10/0x10
[74723.040595][ C14] ? __pfx_smp_call_function_many_cond+0x10/0x10
[74723.040604][ C14] on_each_cpu_cond_mask+0x24/0x40
[74723.040608][ C14] kvm_flush_tlb_multi+0x216/0x320
[74723.040616][ C14] ? __pfx_kvm_flush_tlb_multi+0x10/0x10
[74723.040621][ C14] ? do_raw_spin_lock+0x128/0x270
[74723.040628][ C14] ? __pfx_do_raw_spin_lock+0x10/0x10
[74723.040634][ C14] flush_tlb_mm_range+0x420/0x690
[74723.040645][ C14] ? __pte_offset_map_lock+0x150/0x2c0
[74723.040653][ C14] ? __pfx_flush_tlb_mm_range+0x10/0x10
[74723.040659][ C14] ? __pfx___pte_offset_map_lock+0x10/0x10
[74723.040666][ C14] ? lock_acquire+0xf6/0x130
[74723.040673][ C14] flush_tlb_batched_pending+0x88/0xd0
[74723.040678][ C14] zap_pte_range+0x189/0x850
[74723.040686][ C14] ? arch_stack_walk+0xb7/0x100
[74723.040694][ C14] ? __pfx_zap_pte_range+0x10/0x10
[74723.040700][ C14] ? __lock_acquire+0x55d/0xbd0
[74723.040708][ C14] ? __pfx_pmd_val+0x10/0x10
[74723.040711][ C14] ? find_held_lock+0x2b/0x80
[74723.040725][ C14] zap_pmd_range.isra.0+0x1bf/0x570
[74723.040731][ C14] ? lock_release.part.0+0x1c/0x50
[74723.040735][ C14] ? __pfx_zap_pmd_range.isra.0+0x10/0x10
[74723.040741][ C14] ? __pfx_p4d_offset.part.0.isra.0+0x10/0x10
[74723.040748][ C14] unmap_page_range+0x436/0x950
[74723.040760][ C14] unmap_vmas+0x1f1/0x3b0
[74723.040766][ C14] ? __pfx_unmap_vmas+0x10/0x10
[74723.040777][ C14] ? __lock_acquire+0x55d/0xbd0
[74723.040785][ C14] vms_clear_ptes+0x381/0x6f0
[74723.040794][ C14] ? __pfx_vms_clear_ptes+0x10/0x10
[74723.040809][ C14] vms_complete_munmap_vmas+0x1a2/0x720
[74723.040816][ C14] do_vmi_align_munmap+0x314/0x480
[74723.040822][ C14] ? __pfx_do_vmi_align_munmap+0x10/0x10
[74723.040832][ C14] ? __lock_acquire+0x55d/0xbd0
[74723.040852][ C14] __do_sys_brk+0x6cc/0x950
[74723.040861][ C14] ? __pfx___do_sys_brk+0x10/0x10
[74723.040873][ C14] ? __lock_release.isra.0+0x59/0x170
[74723.040880][ C14] do_syscall_64+0x98/0x5c0
[74723.040884][ C14] ? __lock_release.isra.0+0x59/0x170
[74723.040891][ C14] ? do_user_addr_fault+0x811/0xed0
[74723.040899][ C14] ? trace_hardirqs_on_prepare+0x101/0x140
[74723.040902][ C14] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[74723.040906][ C14] ? clear_bhb_loop+0x30/0x80
[74723.040908][ C14] ? clear_bhb_loop+0x30/0x80
[74723.040913][ C14] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[74723.040917][ C14] RIP: 0033:0x7f806dc4f16b
[74723.040921][ C14] Code: 85 c0 74 e0 4d 85 f6 74 0b 4c 89 e7 41 ff d6 4c 89 fe eb d9 48 89 f7 e8 a3 66 f1 ff eb f1 90 f3 0f 1e fa b8 0c 00 00 00 0f 05 <48> 8b 15 ce cc 0f 00 48 89 02 48 39 f8 72 06 31 c0 c3 0f 1f 00 48
[74723.040923][ C14] RSP: 002b:00007ffe7f391a18 EFLAGS: 00000206 ORIG_RAX: 000000000000000c
[74723.040927][ C14] RAX: ffffffffffffffda RBX: 000000000000000b RCX: 00007f806dc4f16b
[74723.040929][ C14] RDX: 000055bbb19ef000 RSI: 000055bbb19ef000 RDI: 000055bbb19d8000
[74723.040931][ C14] RBP: 00007ffe7f391a30 R08: 00000000000208d0 R09: 00007f806dd4cac0
[74723.040932][ C14] R10: 0000000000017700 R11: 0000000000000206 R12: 0000000000000000
[74723.040934][ C14] R13: 000055bbae28535a R14: 0000000000000000 R15: 00007ffe7f391d38
[74723.040945][ C14] </TASK>
[74728.275422][ C9] watchdog: BUG: soft lockup - CPU#9 stuck for 75s! [kworker/9:8:4005961]
[74728.275428][ C9] CPU#9 Utilization every 4000ms during lockup:
[74728.275431][ C9] #1: 100% system, 0% softirq, 1% hardirq, 0% idle
[74728.275434][ C9] #2: 100% system, 0% softirq, 1% hardirq, 0% idle
[74728.275436][ C9] #3: 100% system, 0% softirq, 1% hardirq, 0% idle
[74728.275438][ C9] #4: 100% system, 0% softirq, 1% hardirq, 0% idle
[74728.275440][ C9] #5: 100% system, 0% softirq, 1% hardirq, 0% idle
[74728.275442][ C9] Modules linked in: btrfs xor raid6_pq dm_flakey null_blk target_core_user target_core_mod rfkill nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables qrtr sunrpc intel_rapl_msr intel_rapl_common intel_uncore_frequency_common skx_edac_common nfit libnvdimm kvm_intel kvm ppdev iTCO_wdt intel_pmc_bxt iTCO_vendor_support irqbypass parport_pc i2c_i801 rapl i2c_smbus parport lpc_ich joydev fuse loop dm_multipath nfnetlink zram lz4hc_compress lz4_compress vsock_loopback zstd_compress vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport vmw_vmci vsock xfs bochs drm_client_lib drm_shmem_helper drm_kms_helper drm virtio_net ghash_clmulni_intel net_failover failover serio_raw scsi_dh_rdac scsi_dh_emc scsi_dh_alua i2c_dev qemu_fw_cfg [last unloaded: scsi_debug]
[74728.275515][ C9] irq event stamp: 246798
[74728.275517][ C9] hardirqs last enabled at (246797): [<ffffffffa960160a>] asm_sysvec_apic_timer_interrupt+0x1a/0x20
[74728.275526][ C9] hardirqs last disabled at (246798): [<ffffffffaca8afae>] sysvec_apic_timer_interrupt+0xe/0x90
[74728.275530][ C9] softirqs last enabled at (246764): [<ffffffffa9baa582>] handle_softirqs+0x522/0x7b0
[74728.275535][ C9] softirqs last disabled at (246759): [<ffffffffa9baa9a1>] __irq_exit_rcu+0x181/0x1d0
[74728.275542][ C9] CPU: 9 UID: 0 PID: 4005961 Comm: kworker/9:8 Tainted: G L 6.19.0-rc1-kts-xfs-gea44380376c+ #1 PREEMPT(lazy)
[74728.275548][ C9] Tainted: [L]=SOFTLOCKUP
[74728.275550][ C9] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[74728.275552][ C9] Workqueue: events netstamp_clear
[74728.275558][ C9] RIP: 0010:smp_call_function_many_cond+0x885/0x1070
[74728.275564][ C9] Code: 38 c8 7c 08 84 c9 0f 85 6e 05 00 00 8b 43 08 a8 01 74 2e 48 89 f1 49 89 f6 48 c1 e9 03 41 83 e6 07 4c 01 f9 41 83 c6 03 f3 90 <0f> b6 01 41 38 c6 7c 08 84 c0 0f 85 db 04 00 00 8b 43 08 a8 01 75
[74728.275567][ C9] RSP: 0018:ff110006c740f990 EFLAGS: 00000202
[74728.275570][ C9] RAX: 0000000000000011 RBX: ffd1fffffe600d40 RCX: fffa3bffffcc01a9
[74728.275573][ C9] RDX: 0000000000000000 RSI: ffd1fffffe600d48 RDI: ffffffffadaf8160
[74728.275575][ C9] RBP: ffe21c01c7998d49 R08: ff11000174e44740 R09: 0000000000000000
[74728.275577][ C9] R10: 1fe220002e9c88e8 R11: 0000000000000000 R12: ff11000e3ccc6a40
[74728.275579][ C9] R13: ffe21c01c7998d48 R14: 0000000000000003 R15: dffffc0000000000
[74728.275581][ C9] FS: 0000000000000000(0000) GS:ff11000e8cf66000(0000) knlGS:0000000000000000
[74728.275586][ C9] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[74728.275589][ C9] CR2: 00007ff11b7f5cb8 CR3: 0000000ac5c96005 CR4: 0000000000773ef0
[74728.275597][ C9] PKRU: 55555554
[74728.275600][ C9] Call Trace:
[74728.275603][ C9] <TASK>
[74728.275610][ C9] ? __pfx_do_sync_core+0x10/0x10
[74728.275614][ C9] ? __text_poke+0x3b1/0x870
[74728.275622][ C9] ? __pfx_smp_call_function_many_cond+0x10/0x10
[74728.275626][ C9] ? __pfx___text_poke+0x10/0x10
[74728.275630][ C9] ? __pfx___might_resched+0x10/0x10
[74728.275639][ C9] on_each_cpu_cond_mask+0x24/0x40
[74728.275643][ C9] smp_text_poke_batch_finish+0x45c/0xd20
[74728.275647][ C9] ? tpacket_rcv+0xdc9/0x3ec0
[74728.275653][ C9] ? __pfx_smp_text_poke_batch_finish+0x10/0x10
[74728.275657][ C9] ? __jump_label_patch+0x25f/0x320
[74728.275664][ C9] ? arch_jump_label_transform_queue+0xa8/0x110
[74728.275679][ C9] arch_jump_label_transform_apply+0x1c/0x30
[74728.275684][ C9] static_key_enable_cpuslocked+0x16c/0x230
[74728.275688][ C9] static_key_enable+0x1f/0x30
[74728.275692][ C9] process_one_work+0x86b/0x1490
[74728.275703][ C9] ? __pfx_process_one_work+0x10/0x10
[74728.275706][ C9] ? lock_acquire.part.0+0xb8/0x230
[74728.275712][ C9] ? lock_is_held_type+0x9a/0x110
[74728.275718][ C9] ? assign_work+0x156/0x390
[74728.275726][ C9] worker_thread+0x5f2/0xfd0
[74728.275736][ C9] ? __pfx_worker_thread+0x10/0x10
[74728.275739][ C9] kthread+0x3a4/0x760
[74728.275744][ C9] ? __pfx_kthread+0x10/0x10
[74728.275747][ C9] ? __lock_release.isra.0+0x59/0x170
[74728.275753][ C9] ? __pfx_kthread+0x10/0x10
[74728.275757][ C9] ret_from_fork+0x50e/0x670
[74728.275761][ C9] ? __pfx_ret_from_fork+0x10/0x10
[74728.275766][ C9] ? __switch_to+0x42c/0xd70
[74728.275771][ C9] ? __pfx_kthread+0x10/0x10
[74728.275775][ C9] ret_from_fork_asm+0x1a/0x30
[74728.275789][ C9] </TASK>
[74744.781704][ C18] watchdog: BUG: soft lockup - CPU#18 stuck for 22s! [abrt-dump-journ:1114]
[74744.781711][ C18] CPU#18 Utilization every 4000ms during lockup:
[74744.781713][ C18] #1: 100% system, 0% softirq, 2% hardirq, 0% idle
[74744.781716][ C18] #2: 100% system, 0% softirq, 1% hardirq, 0% idle
[74744.781718][ C18] #3: 100% system, 0% softirq, 1% hardirq, 0% idle
[74744.781720][ C18] #4: 100% system, 0% softirq, 1% hardirq, 0% idle
[74744.781722][ C18] #5: 100% system, 0% softirq, 2% hardirq, 0% idle
[74744.781724][ C18] Modules linked in: btrfs xor raid6_pq dm_flakey null_blk target_core_user target_core_mod rfkill nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables qrtr sunrpc intel_rapl_msr intel_rapl_common intel_uncore_frequency_common skx_edac_common nfit libnvdimm kvm_intel kvm ppdev iTCO_wdt intel_pmc_bxt iTCO_vendor_support irqbypass parport_pc i2c_i801 rapl i2c_smbus parport lpc_ich joydev fuse loop dm_multipath nfnetlink zram lz4hc_compress lz4_compress vsock_loopback zstd_compress vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport vmw_vmci vsock xfs bochs drm_client_lib drm_shmem_helper drm_kms_helper drm virtio_net ghash_clmulni_intel net_failover failover serio_raw scsi_dh_rdac scsi_dh_emc scsi_dh_alua i2c_dev qemu_fw_cfg [last unloaded: scsi_debug]
[74744.781798][ C18] irq event stamp: 1710500
[74744.781800][ C18] hardirqs last enabled at (1710499): [<ffffffffa960160a>] asm_sysvec_apic_timer_interrupt+0x1a/0x20
[74744.781808][ C18] hardirqs last disabled at (1710500): [<ffffffffaca8afae>] sysvec_apic_timer_interrupt+0xe/0x90
[74744.781813][ C18] softirqs last enabled at (1710470): [<ffffffffa9baa582>] handle_softirqs+0x522/0x7b0
[74744.781818][ C18] softirqs last disabled at (1710465): [<ffffffffa9baa9a1>] __irq_exit_rcu+0x181/0x1d0
[74744.781824][ C18] CPU: 18 UID: 0 PID: 1114 Comm: abrt-dump-journ Tainted: G L 6.19.0-rc1-kts-xfs-gea44380376c+ #1 PREEMPT(lazy)
[74744.781830][ C18] Tainted: [L]=SOFTLOCKUP
[74744.781831][ C18] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[74744.781833][ C18] RIP: 0010:smp_call_function_many_cond+0x885/0x1070
[74744.781839][ C18] Code: 38 c8 7c 08 84 c9 0f 85 6e 05 00 00 8b 43 08 a8 01 74 2e 48 89 f1 49 89 f6 48 c1 e9 03 41 83 e6 07 4c 01 f9 41 83 c6 03 f3 90 <0f> b6 01 41 38 c6 7c 08 84 c0 0f 85 db 04 00 00 8b 43 08 a8 01 75
[74744.781842][ C18] RSP: 0018:ff110001477a7590 EFLAGS: 00000202
[74744.781846][ C18] RAX: 0000000000000011 RBX: ffd1fffffe6036e0 RCX: fffa3bffffcc06dd
[74744.781848][ C18] RDX: 0000000000000000 RSI: ffd1fffffe6036e8 RDI: ffffffffadaf8160
[74744.781850][ C18] RBP: ffe21c01c7a28d49 R08: ff1100010190ab00 R09: 0000000000000000
[74744.781852][ C18] R10: 1fe2200020321560 R11: 0000000000000000 R12: ff11000e3d146a40
[74744.781854][ C18] R13: ffe21c01c7a28d48 R14: 0000000000000003 R15: dffffc0000000000
[74744.781856][ C18] FS: 00007fcd1f693a00(0000) GS:ff11000e8d3e6000(0000) knlGS:0000000000000000
[74744.781859][ C18] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[74744.781861][ C18] CR2: 00007fcd0e5f5af8 CR3: 0000000163b13004 CR4: 0000000000773ef0
[74744.781866][ C18] PKRU: 55555554
[74744.781868][ C18] Call Trace:
[74744.781870][ C18] <TASK>
[74744.781876][ C18] ? __pfx_flush_tlb_func+0x10/0x10
[74744.781886][ C18] ? __pfx_smp_call_function_many_cond+0x10/0x10
[74744.781892][ C18] ? free_p4d_range+0x11c/0x3b0
[74744.781900][ C18] on_each_cpu_cond_mask+0x24/0x40
[74744.781904][ C18] kvm_flush_tlb_multi+0x216/0x320
[74744.781910][ C18] ? free_pgd_range+0x297/0x390
[74744.781915][ C18] ? __pfx_kvm_flush_tlb_multi+0x10/0x10
[74744.781924][ C18] flush_tlb_mm_range+0x420/0x690
[74744.781932][ C18] ? __pfx_flush_tlb_mm_range+0x10/0x10
[74744.781940][ C18] tlb_flush_mmu_tlbonly+0xb3/0x3e0
[74744.781948][ C18] tlb_finish_mmu+0x91/0x360
[74744.781953][ C18] vms_clear_ptes+0x49b/0x6f0
[74744.781962][ C18] ? __pfx_vms_clear_ptes+0x10/0x10
[74744.781977][ C18] vms_complete_munmap_vmas+0x1a2/0x720
[74744.781983][ C18] do_vmi_align_munmap+0x314/0x480
[74744.781990][ C18] ? __pfx_do_vmi_align_munmap+0x10/0x10
[74744.781994][ C18] ? vm_mmap_pgoff+0x2c2/0x3a0
[74744.782021][ C18] do_vmi_munmap+0x155/0x2d0
[74744.782028][ C18] __vm_munmap+0x14d/0x2d0
[74744.782034][ C18] ? __pfx___vm_munmap+0x10/0x10
[74744.782044][ C18] ? setfl+0x251/0x3e0
[74744.782050][ C18] ? trace_hardirqs_on_prepare+0x101/0x140
[74744.782058][ C18] __x64_sys_munmap+0x58/0x90
[74744.782062][ C18] do_syscall_64+0x98/0x5c0
[74744.782073][ C18] ? __x64_sys_fcntl+0x10d/0x1c0
[74744.782077][ C18] ? trace_hardirqs_on_prepare+0x101/0x140
[74744.782081][ C18] ? trace_hardirqs_on_prepare+0x101/0x140
[74744.782084][ C18] ? do_syscall_64+0x16d/0x5c0
[74744.782087][ C18] ? do_syscall_64+0x16d/0x5c0
[74744.782091][ C18] ? trace_hardirqs_on_prepare+0x101/0x140
[74744.782093][ C18] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[74744.782097][ C18] ? clear_bhb_loop+0x30/0x80
[74744.782101][ C18] ? clear_bhb_loop+0x30/0x80
[74744.782105][ C18] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[74744.782109][ C18] RIP: 0033:0x7fcd1ffc2f2b
[74744.782113][ C18] Code: 73 01 c3 48 8b 0d d5 4e 0f 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 0b 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d a5 4e 0f 00 f7 d8 64 89 01 48
[74744.782116][ C18] RSP: 002b:00007fff5b61e948 EFLAGS: 00000206 ORIG_RAX: 000000000000000b
[74744.782119][ C18] RAX: ffffffffffffffda RBX: 0000562349a284d0 RCX: 00007fcd1ffc2f2b
[74744.782121][ C18] RDX: 00007fcd1b803000 RSI: 0000000000800000 RDI: 00007fcd0d354000
[74744.782123][ C18] RBP: 00007fff5b61e960 R08: 0000000000000010 R09: 0000000002403000
[74744.782125][ C18] R10: 0000000000000001 R11: 0000000000000206 R12: 0000562349a10b80
[74744.782127][ C18] R13: 0000000000800000 R14: 0000562349a59540 R15: 0000562349a10b80
[74744.782138][ C18] </TASK>
[74751.040116][ C14] watchdog: BUG: soft lockup - CPU#14 stuck for 75s! [abrt-dump-journ:1115]
[74751.040122][ C14] CPU#14 Utilization every 4000ms during lockup:
[74751.040124][ C14] #1: 100% system, 0% softirq, 1% hardirq, 0% idle
[74751.040127][ C14] #2: 100% system, 0% softirq, 1% hardirq, 0% idle
[74751.040129][ C14] #3: 100% system, 0% softirq, 1% hardirq, 0% idle
[74751.040131][ C14] #4: 100% system, 0% softirq, 1% hardirq, 0% idle
[74751.040133][ C14] #5: 100% system, 0% softirq, 1% hardirq, 0% idle
[74751.040135][ C14] Modules linked in: btrfs xor raid6_pq dm_flakey null_blk target_core_user target_core_mod rfkill nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables qrtr sunrpc intel_rapl_msr intel_rapl_common intel_uncore_frequency_common skx_edac_common nfit libnvdimm kvm_intel kvm ppdev iTCO_wdt intel_pmc_bxt iTCO_vendor_support irqbypass parport_pc i2c_i801 rapl i2c_smbus parport lpc_ich joydev fuse loop dm_multipath nfnetlink zram lz4hc_compress lz4_compress vsock_loopback zstd_compress vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport vmw_vmci vsock xfs bochs drm_client_lib drm_shmem_helper drm_kms_helper drm virtio_net ghash_clmulni_intel net_failover failover serio_raw scsi_dh_rdac scsi_dh_emc scsi_dh_alua i2c_dev qemu_fw_cfg [last unloaded: scsi_debug]
[74751.040209][ C14] irq event stamp: 1783878
[74751.040210][ C14] hardirqs last enabled at (1783877): [<ffffffffa960160a>] asm_sysvec_apic_timer_interrupt+0x1a/0x20
[74751.040218][ C14] hardirqs last disabled at (1783878): [<ffffffffaca8afae>] sysvec_apic_timer_interrupt+0xe/0x90
[74751.040222][ C14] softirqs last enabled at (1783778): [<ffffffffa9baa582>] handle_softirqs+0x522/0x7b0
[74751.040227][ C14] softirqs last disabled at (1783773): [<ffffffffa9baa9a1>] __irq_exit_rcu+0x181/0x1d0
[74751.040234][ C14] CPU: 14 UID: 0 PID: 1115 Comm: abrt-dump-journ Tainted: G L 6.19.0-rc1-kts-xfs-gea44380376c+ #1 PREEMPT(lazy)
[74751.040240][ C14] Tainted: [L]=SOFTLOCKUP
[74751.040241][ C14] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[74751.040243][ C14] RIP: 0010:smp_call_function_many_cond+0x885/0x1070
[74751.040250][ C14] Code: 38 c8 7c 08 84 c9 0f 85 6e 05 00 00 8b 43 08 a8 01 74 2e 48 89 f1 49 89 f6 48 c1 e9 03 41 83 e6 07 4c 01 f9 41 83 c6 03 f3 90 <0f> b6 01 41 38 c6 7c 08 84 c0 0f 85 db 04 00 00 8b 43 08 a8 01 75
[74751.040253][ C14] RSP: 0018:ff110001502e7230 EFLAGS: 00000202
[74751.040256][ C14] RAX: 0000000000000011 RBX: ffd1fffffe604be0 RCX: fffa3bffffcc097d
[74751.040259][ C14] RDX: 0000000000000000 RSI: ffd1fffffe604be8 RDI: ffffffffadaf8160
[74751.040261][ C14] RBP: ffe21c01c79e8d49 R08: ff11000102129f00 R09: 0000000000000000
[74751.040263][ C14] R10: 1fe22000204253e0 R11: 0000000000000000 R12: ff11000e3cf46a40
[74751.040265][ C14] R13: ffe21c01c79e8d48 R14: 0000000000000003 R15: dffffc0000000000
[74751.040267][ C14] FS: 00007f806c869a00(0000) GS:ff11000e8d1e6000(0000) knlGS:0000000000000000
[74751.040270][ C14] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[74751.040272][ C14] CR2: 00007f806d14eb60 CR3: 00000001aa5a6005 CR4: 0000000000773ef0
[74751.040277][ C14] PKRU: 55555554
[74751.040278][ C14] Call Trace:
[74751.040281][ C14] <TASK>
[74751.040286][ C14] ? __pfx_flush_tlb_func+0x10/0x10
[74751.040296][ C14] ? __pfx_smp_call_function_many_cond+0x10/0x10
[74751.040305][ C14] on_each_cpu_cond_mask+0x24/0x40
[74751.040309][ C14] kvm_flush_tlb_multi+0x216/0x320
[74751.040318][ C14] ? __pfx_kvm_flush_tlb_multi+0x10/0x10
[74751.040323][ C14] ? do_raw_spin_lock+0x128/0x270
[74751.040329][ C14] ? __pfx_do_raw_spin_lock+0x10/0x10
[74751.040335][ C14] flush_tlb_mm_range+0x420/0x690
[74751.040341][ C14] ? __pte_offset_map_lock+0x150/0x2c0
[74751.040346][ C14] ? __pfx_flush_tlb_mm_range+0x10/0x10
[74751.040350][ C14] ? __pfx___pte_offset_map_lock+0x10/0x10
[74751.040353][ C14] ? lock_acquire+0xf6/0x130
[74751.040359][ C14] flush_tlb_batched_pending+0x88/0xd0
[74751.040365][ C14] zap_pte_range+0x189/0x850
[74751.040372][ C14] ? arch_stack_walk+0xb7/0x100
[74751.040380][ C14] ? __pfx_zap_pte_range+0x10/0x10
[74751.040386][ C14] ? __lock_acquire+0x55d/0xbd0
[74751.040394][ C14] ? __pfx_pmd_val+0x10/0x10
[74751.040397][ C14] ? find_held_lock+0x2b/0x80
[74751.040404][ C14] zap_pmd_range.isra.0+0x1bf/0x570
[74751.040410][ C14] ? lock_release.part.0+0x1c/0x50
[74751.040414][ C14] ? __pfx_zap_pmd_range.isra.0+0x10/0x10
[74751.040420][ C14] ? __pfx_p4d_offset.part.0.isra.0+0x10/0x10
[74751.040427][ C14] unmap_page_range+0x436/0x950
[74751.040439][ C14] unmap_vmas+0x1f1/0x3b0
[74751.040445][ C14] ? __pfx_unmap_vmas+0x10/0x10
[74751.040456][ C14] ? __lock_acquire+0x55d/0xbd0
[74751.040468][ C14] vms_clear_ptes+0x381/0x6f0
[74751.040476][ C14] ? __pfx_vms_clear_ptes+0x10/0x10
[74751.040491][ C14] vms_complete_munmap_vmas+0x1a2/0x720
[74751.040498][ C14] do_vmi_align_munmap+0x314/0x480
[74751.040505][ C14] ? __pfx_do_vmi_align_munmap+0x10/0x10
[74751.040515][ C14] ? __lock_acquire+0x55d/0xbd0
[74751.040535][ C14] __do_sys_brk+0x6cc/0x950
[74751.040544][ C14] ? __pfx___do_sys_brk+0x10/0x10
[74751.040556][ C14] ? __lock_release.isra.0+0x59/0x170
[74751.040562][ C14] do_syscall_64+0x98/0x5c0
[74751.040567][ C14] ? __lock_release.isra.0+0x59/0x170
[74751.040573][ C14] ? do_user_addr_fault+0x811/0xed0
[74751.040581][ C14] ? trace_hardirqs_on_prepare+0x101/0x140
[74751.040584][ C14] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[74751.040588][ C14] ? clear_bhb_loop+0x30/0x80
[74751.040591][ C14] ? clear_bhb_loop+0x30/0x80
[74751.040596][ C14] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[74751.040599][ C14] RIP: 0033:0x7f806dc4f16b
[74751.040604][ C14] Code: 85 c0 74 e0 4d 85 f6 74 0b 4c 89 e7 41 ff d6 4c 89 fe eb d9 48 89 f7 e8 a3 66 f1 ff eb f1 90 f3 0f 1e fa b8 0c 00 00 00 0f 05 <48> 8b 15 ce cc 0f 00 48 89 02 48 39 f8 72 06 31 c0 c3 0f 1f 00 48
[74751.040606][ C14] RSP: 002b:00007ffe7f391a18 EFLAGS: 00000206 ORIG_RAX: 000000000000000c
[74751.040609][ C14] RAX: ffffffffffffffda RBX: 000000000000000b RCX: 00007f806dc4f16b
[74751.040612][ C14] RDX: 000055bbb19ef000 RSI: 000055bbb19ef000 RDI: 000055bbb19d8000
[74751.040613][ C14] RBP: 00007ffe7f391a30 R08: 00000000000208d0 R09: 00007f806dd4cac0
[74751.040615][ C14] R10: 0000000000017700 R11: 0000000000000206 R12: 0000000000000000
[74751.040617][ C14] R13: 000055bbae28535a R14: 0000000000000000 R15: 00007ffe7f391d38
[74751.040628][ C14] </TASK>
[74756.275128][ C9] watchdog: BUG: soft lockup - CPU#9 stuck for 101s! [kworker/9:8:4005961]
[74756.275135][ C9] CPU#9 Utilization every 4000ms during lockup:
[74756.275137][ C9] #1: 100% system, 0% softirq, 1% hardirq, 0% idle
[74756.275140][ C9] #2: 100% system, 0% softirq, 1% hardirq, 0% idle
[74756.275142][ C9] #3: 100% system, 0% softirq, 1% hardirq, 0% idle
[74756.275144][ C9] #4: 100% system, 0% softirq, 1% hardirq, 0% idle
[74756.275146][ C9] #5: 100% system, 0% softirq, 1% hardirq, 0% idle
[74756.275148][ C9] Modules linked in: btrfs xor raid6_pq dm_flakey null_blk target_core_user target_core_mod rfkill nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables qrtr sunrpc intel_rapl_msr intel_rapl_common intel_uncore_frequency_common skx_edac_common nfit libnvdimm kvm_intel kvm ppdev iTCO_wdt intel_pmc_bxt iTCO_vendor_support irqbypass parport_pc i2c_i801 rapl i2c_smbus parport lpc_ich joydev fuse loop dm_multipath nfnetlink zram lz4hc_compress lz4_compress vsock_loopback zstd_compress vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport vmw_vmci vsock xfs bochs drm_client_lib drm_shmem_helper drm_kms_helper drm virtio_net ghash_clmulni_intel net_failover failover serio_raw scsi_dh_rdac scsi_dh_emc scsi_dh_alua i2c_dev qemu_fw_cfg [last unloaded: scsi_debug]
[74756.275221][ C9] irq event stamp: 305600
[74756.275222][ C9] hardirqs last enabled at (305599): [<ffffffffa960160a>] asm_sysvec_apic_timer_interrupt+0x1a/0x20
[74756.275230][ C9] hardirqs last disabled at (305600): [<ffffffffaca8afae>] sysvec_apic_timer_interrupt+0xe/0x90
[74756.275234][ C9] softirqs last enabled at (305502): [<ffffffffa9baa582>] handle_softirqs+0x522/0x7b0
[74756.275239][ C9] softirqs last disabled at (305497): [<ffffffffa9baa9a1>] __irq_exit_rcu+0x181/0x1d0
[74756.275246][ C9] CPU: 9 UID: 0 PID: 4005961 Comm: kworker/9:8 Tainted: G L 6.19.0-rc1-kts-xfs-gea44380376c+ #1 PREEMPT(lazy)
[74756.275252][ C9] Tainted: [L]=SOFTLOCKUP
[74756.275254][ C9] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[74756.275256][ C9] Workqueue: events netstamp_clear
[74756.275262][ C9] RIP: 0010:smp_call_function_many_cond+0x885/0x1070
[74756.275268][ C9] Code: 38 c8 7c 08 84 c9 0f 85 6e 05 00 00 8b 43 08 a8 01 74 2e 48 89 f1 49 89 f6 48 c1 e9 03 41 83 e6 07 4c 01 f9 41 83 c6 03 f3 90 <0f> b6 01 41 38 c6 7c 08 84 c0 0f 85 db 04 00 00 8b 43 08 a8 01 75
[74756.275271][ C9] RSP: 0018:ff110006c740f990 EFLAGS: 00000202
[74756.275274][ C9] RAX: 0000000000000011 RBX: ffd1fffffe600d40 RCX: fffa3bffffcc01a9
[74756.275277][ C9] RDX: 0000000000000000 RSI: ffd1fffffe600d48 RDI: ffffffffadaf8160
[74756.275279][ C9] RBP: ffe21c01c7998d49 R08: ff11000174e44740 R09: 0000000000000000
[74756.275281][ C9] R10: 1fe220002e9c88e8 R11: 0000000000000000 R12: ff11000e3ccc6a40
[74756.275283][ C9] R13: ffe21c01c7998d48 R14: 0000000000000003 R15: dffffc0000000000
[74756.275285][ C9] FS: 0000000000000000(0000) GS:ff11000e8cf66000(0000) knlGS:0000000000000000
[74756.275287][ C9] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[74756.275289][ C9] CR2: 00007ff11b7f5cb8 CR3: 0000000ac5c96005 CR4: 0000000000773ef0
[74756.275294][ C9] PKRU: 55555554
[74756.275296][ C9] Call Trace:
[74756.275298][ C9] <TASK>
[74756.275303][ C9] ? __pfx_do_sync_core+0x10/0x10
[74756.275309][ C9] ? __text_poke+0x3b1/0x870
[74756.275316][ C9] ? __pfx_smp_call_function_many_cond+0x10/0x10
[74756.275320][ C9] ? __pfx___text_poke+0x10/0x10
[74756.275325][ C9] ? __pfx___might_resched+0x10/0x10
[74756.275333][ C9] on_each_cpu_cond_mask+0x24/0x40
[74756.275337][ C9] smp_text_poke_batch_finish+0x45c/0xd20
[74756.275340][ C9] ? tpacket_rcv+0xdc9/0x3ec0
[74756.275347][ C9] ? __pfx_smp_text_poke_batch_finish+0x10/0x10
[74756.275350][ C9] ? __jump_label_patch+0x25f/0x320
[74756.275358][ C9] ? arch_jump_label_transform_queue+0xa8/0x110
[74756.275367][ C9] arch_jump_label_transform_apply+0x1c/0x30
[74756.275371][ C9] static_key_enable_cpuslocked+0x16c/0x230
[74756.275375][ C9] static_key_enable+0x1f/0x30
[74756.275378][ C9] process_one_work+0x86b/0x1490
[74756.275389][ C9] ? __pfx_process_one_work+0x10/0x10
[74756.275392][ C9] ? lock_acquire.part.0+0xb8/0x230
[74756.275399][ C9] ? lock_is_held_type+0x9a/0x110
[74756.275404][ C9] ? assign_work+0x156/0x390
[74756.275417][ C9] worker_thread+0x5f2/0xfd0
[74756.275427][ C9] ? __pfx_worker_thread+0x10/0x10
[74756.275430][ C9] kthread+0x3a4/0x760
[74756.275435][ C9] ? __pfx_kthread+0x10/0x10
[74756.275439][ C9] ? __lock_release.isra.0+0x59/0x170
[74756.275444][ C9] ? __pfx_kthread+0x10/0x10
[74756.275448][ C9] ret_from_fork+0x50e/0x670
[74756.275453][ C9] ? __pfx_ret_from_fork+0x10/0x10
[74756.275458][ C9] ? __switch_to+0x42c/0xd70
[74756.275463][ C9] ? __pfx_kthread+0x10/0x10
[74756.275467][ C9] ret_from_fork_asm+0x1a/0x30
[74756.275481][ C9] </TASK>
[74772.781410][ C18] watchdog: BUG: soft lockup - CPU#18 stuck for 48s! [abrt-dump-journ:1114]
[74772.781417][ C18] CPU#18 Utilization every 4000ms during lockup:
[74772.781420][ C18] #1: 100% system, 0% softirq, 1% hardirq, 0% idle
[74772.781423][ C18] #2: 100% system, 0% softirq, 1% hardirq, 0% idle
[74772.781425][ C18] #3: 100% system, 0% softirq, 1% hardirq, 0% idle
[74772.781427][ C18] #4: 100% system, 0% softirq, 1% hardirq, 0% idle
[74772.781428][ C18] #5: 100% system, 0% softirq, 1% hardirq, 0% idle
[74772.781431][ C18] Modules linked in: btrfs xor raid6_pq dm_flakey null_blk target_core_user target_core_mod rfkill nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables qrtr sunrpc intel_rapl_msr intel_rapl_common intel_uncore_frequency_common skx_edac_common nfit libnvdimm kvm_intel kvm ppdev iTCO_wdt intel_pmc_bxt iTCO_vendor_support irqbypass parport_pc i2c_i801 rapl i2c_smbus parport lpc_ich joydev fuse loop dm_multipath nfnetlink zram lz4hc_compress lz4_compress vsock_loopback zstd_compress vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport vmw_vmci vsock xfs bochs drm_client_lib drm_shmem_helper drm_kms_helper drm virtio_net ghash_clmulni_intel net_failover failover serio_raw scsi_dh_rdac scsi_dh_emc scsi_dh_alua i2c_dev qemu_fw_cfg [last unloaded: scsi_debug]
[74772.781505][ C18] irq event stamp: 1769362
[74772.781507][ C18] hardirqs last enabled at (1769361): [<ffffffffa960160a>] asm_sysvec_apic_timer_interrupt+0x1a/0x20
[74772.781515][ C18] hardirqs last disabled at (1769362): [<ffffffffaca8afae>] sysvec_apic_timer_interrupt+0xe/0x90
[74772.781520][ C18] softirqs last enabled at (1769346): [<ffffffffa9baa582>] handle_softirqs+0x522/0x7b0
[74772.781525][ C18] softirqs last disabled at (1769341): [<ffffffffa9baa9a1>] __irq_exit_rcu+0x181/0x1d0
[74772.781532][ C18] CPU: 18 UID: 0 PID: 1114 Comm: abrt-dump-journ Tainted: G L 6.19.0-rc1-kts-xfs-gea44380376c+ #1 PREEMPT(lazy)
[74772.781538][ C18] Tainted: [L]=SOFTLOCKUP
[74772.781539][ C18] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[74772.781541][ C18] RIP: 0010:smp_call_function_many_cond+0x885/0x1070
[74772.781547][ C18] Code: 38 c8 7c 08 84 c9 0f 85 6e 05 00 00 8b 43 08 a8 01 74 2e 48 89 f1 49 89 f6 48 c1 e9 03 41 83 e6 07 4c 01 f9 41 83 c6 03 f3 90 <0f> b6 01 41 38 c6 7c 08 84 c0 0f 85 db 04 00 00 8b 43 08 a8 01 75
[74772.781550][ C18] RSP: 0018:ff110001477a7590 EFLAGS: 00000202
[74772.781554][ C18] RAX: 0000000000000011 RBX: ffd1fffffe6036e0 RCX: fffa3bffffcc06dd
[74772.781556][ C18] RDX: 0000000000000000 RSI: ffd1fffffe6036e8 RDI: ffffffffadaf8160
[74772.781558][ C18] RBP: ffe21c01c7a28d49 R08: ff1100010190ab00 R09: 0000000000000000
[74772.781560][ C18] R10: 1fe2200020321560 R11: 0000000000000000 R12: ff11000e3d146a40
[74772.781562][ C18] R13: ffe21c01c7a28d48 R14: 0000000000000003 R15: dffffc0000000000
[74772.781564][ C18] FS: 00007fcd1f693a00(0000) GS:ff11000e8d3e6000(0000) knlGS:0000000000000000
[74772.781567][ C18] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[74772.781569][ C18] CR2: 00007fcd0e5f5af8 CR3: 0000000163b13004 CR4: 0000000000773ef0
[74772.781574][ C18] PKRU: 55555554
[74772.781575][ C18] Call Trace:
[74772.781578][ C18] <TASK>
[74772.781583][ C18] ? __pfx_flush_tlb_func+0x10/0x10
[74772.781593][ C18] ? __pfx_smp_call_function_many_cond+0x10/0x10
[74772.781599][ C18] ? free_p4d_range+0x11c/0x3b0
[74772.781608][ C18] on_each_cpu_cond_mask+0x24/0x40
[74772.781612][ C18] kvm_flush_tlb_multi+0x216/0x320
[74772.781619][ C18] ? free_pgd_range+0x297/0x390
[74772.781623][ C18] ? __pfx_kvm_flush_tlb_multi+0x10/0x10
[74772.781632][ C18] flush_tlb_mm_range+0x420/0x690
[74772.781640][ C18] ? __pfx_flush_tlb_mm_range+0x10/0x10
[74772.781648][ C18] tlb_flush_mmu_tlbonly+0xb3/0x3e0
[74772.781656][ C18] tlb_finish_mmu+0x91/0x360
[74772.781662][ C18] vms_clear_ptes+0x49b/0x6f0
[74772.781670][ C18] ? __pfx_vms_clear_ptes+0x10/0x10
[74772.781685][ C18] vms_complete_munmap_vmas+0x1a2/0x720
[74772.781692][ C18] do_vmi_align_munmap+0x314/0x480
[74772.781698][ C18] ? __pfx_do_vmi_align_munmap+0x10/0x10
[74772.781703][ C18] ? vm_mmap_pgoff+0x2c2/0x3a0
[74772.781730][ C18] do_vmi_munmap+0x155/0x2d0
[74772.781737][ C18] __vm_munmap+0x14d/0x2d0
[74772.781743][ C18] ? __pfx___vm_munmap+0x10/0x10
[74772.781754][ C18] ? setfl+0x251/0x3e0
[74772.781760][ C18] ? trace_hardirqs_on_prepare+0x101/0x140
[74772.781767][ C18] __x64_sys_munmap+0x58/0x90
[74772.781772][ C18] do_syscall_64+0x98/0x5c0
[74772.781783][ C18] ? __x64_sys_fcntl+0x10d/0x1c0
[74772.781787][ C18] ? trace_hardirqs_on_prepare+0x101/0x140
[74772.781791][ C18] ? trace_hardirqs_on_prepare+0x101/0x140
[74772.781794][ C18] ? do_syscall_64+0x16d/0x5c0
[74772.781797][ C18] ? do_syscall_64+0x16d/0x5c0
[74772.781800][ C18] ? trace_hardirqs_on_prepare+0x101/0x140
[74772.781803][ C18] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[74772.781807][ C18] ? clear_bhb_loop+0x30/0x80
[74772.781810][ C18] ? clear_bhb_loop+0x30/0x80
[74772.781815][ C18] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[74772.781818][ C18] RIP: 0033:0x7fcd1ffc2f2b
[74772.781822][ C18] Code: 73 01 c3 48 8b 0d d5 4e 0f 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa b8 0b 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d a5 4e 0f 00 f7 d8 64 89 01 48
[74772.781825][ C18] RSP: 002b:00007fff5b61e948 EFLAGS: 00000206 ORIG_RAX: 000000000000000b
[74772.781828][ C18] RAX: ffffffffffffffda RBX: 0000562349a284d0 RCX: 00007fcd1ffc2f2b
[74772.781830][ C18] RDX: 00007fcd1b803000 RSI: 0000000000800000 RDI: 00007fcd0d354000
[74772.781832][ C18] RBP: 00007fff5b61e960 R08: 0000000000000010 R09: 0000000002403000
[74772.781834][ C18] R10: 0000000000000001 R11: 0000000000000206 R12: 0000562349a10b80
[74772.781836][ C18] R13: 0000000000800000 R14: 0000562349a59540 R15: 0000562349a10b80
[74772.781847][ C18] </TASK>
[74779.039823][ C14] watchdog: BUG: soft lockup - CPU#14 stuck for 101s! [abrt-dump-journ:1115]
[74779.039830][ C14] CPU#14 Utilization every 4000ms during lockup:
[74779.039832][ C14] #1: 100% system, 0% softirq, 1% hardirq, 0% idle
[74779.039836][ C14] #2: 100% system, 0% softirq, 1% hardirq, 0% idle
[74779.039838][ C14] #3: 100% system, 0% softirq, 1% hardirq, 0% idle
[74779.039840][ C14] #4: 100% system, 0% softirq, 1% hardirq, 0% idle
[74779.039842][ C14] #5: 100% system, 0% softirq, 1% hardirq, 0% idle
[74779.039844][ C14] Modules linked in: btrfs xor raid6_pq dm_flakey null_blk target_core_user target_core_mod rfkill nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables qrtr sunrpc intel_rapl_msr intel_rapl_common intel_uncore_frequency_common skx_edac_common nfit libnvdimm kvm_intel kvm ppdev iTCO_wdt intel_pmc_bxt iTCO_vendor_support irqbypass parport_pc i2c_i801 rapl i2c_smbus parport lpc_ich joydev fuse loop dm_multipath nfnetlink zram lz4hc_compress lz4_compress vsock_loopback zstd_compress vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport vmw_vmci vsock xfs bochs drm_client_lib drm_shmem_helper drm_kms_helper drm virtio_net ghash_clmulni_intel net_failover failover serio_raw scsi_dh_rdac scsi_dh_emc scsi_dh_alua i2c_dev qemu_fw_cfg [last unloaded: scsi_debug]
[74779.039918][ C14] irq event stamp: 1842598
[74779.039920][ C14] hardirqs last enabled at (1842597): [<ffffffffa960160a>] asm_sysvec_apic_timer_interrupt+0x1a/0x20
[74779.039929][ C14] hardirqs last disabled at (1842598): [<ffffffffaca8afae>] sysvec_apic_timer_interrupt+0xe/0x90
[74779.039933][ C14] softirqs last enabled at (1842498): [<ffffffffa9baa582>] handle_softirqs+0x522/0x7b0
[74779.039938][ C14] softirqs last disabled at (1842493): [<ffffffffa9baa9a1>] __irq_exit_rcu+0x181/0x1d0
[74779.039946][ C14] CPU: 14 UID: 0 PID: 1115 Comm: abrt-dump-journ Tainted: G L 6.19.0-rc1-kts-xfs-gea44380376c+ #1 PREEMPT(lazy)
[74779.039951][ C14] Tainted: [L]=SOFTLOCKUP
[74779.039953][ C14] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[74779.039955][ C14] RIP: 0010:smp_call_function_many_cond+0x888/0x1070
[74779.039961][ C14] Code: 08 84 c9 0f 85 6e 05 00 00 8b 43 08 a8 01 74 2e 48 89 f1 49 89 f6 48 c1 e9 03 41 83 e6 07 4c 01 f9 41 83 c6 03 f3 90 0f b6 01 <41> 38 c6 7c 08 84 c0 0f 85 db 04 00 00 8b 43 08 a8 01 75 e7 83 c2
[74779.039964][ C14] RSP: 0018:ff110001502e7230 EFLAGS: 00000202
[74779.039968][ C14] RAX: 0000000000000000 RBX: ffd1fffffe604be0 RCX: fffa3bffffcc097d
[74779.039970][ C14] RDX: 0000000000000000 RSI: ffd1fffffe604be8 RDI: ffffffffadaf8160
[74779.039972][ C14] RBP: ffe21c01c79e8d49 R08: ff11000102129f00 R09: 0000000000000000
[74779.039974][ C14] R10: 1fe22000204253e0 R11: 0000000000000000 R12: ff11000e3cf46a40
[74779.039976][ C14] R13: ffe21c01c79e8d48 R14: 0000000000000003 R15: dffffc0000000000
[74779.039978][ C14] FS: 00007f806c869a00(0000) GS:ff11000e8d1e6000(0000) knlGS:0000000000000000
[74779.039981][ C14] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[74779.039983][ C14] CR2: 00007f806d14eb60 CR3: 00000001aa5a6005 CR4: 0000000000773ef0
[74779.039988][ C14] PKRU: 55555554
[74779.039989][ C14] Call Trace:
[74779.039992][ C14] <TASK>
[74779.039997][ C14] ? __pfx_flush_tlb_func+0x10/0x10
[74779.040007][ C14] ? __pfx_smp_call_function_many_cond+0x10/0x10
[74779.040016][ C14] on_each_cpu_cond_mask+0x24/0x40
[74779.040020][ C14] kvm_flush_tlb_multi+0x216/0x320
[74779.040029][ C14] ? __pfx_kvm_flush_tlb_multi+0x10/0x10
[74779.040033][ C14] ? do_raw_spin_lock+0x128/0x270
[74779.040039][ C14] ? __pfx_do_raw_spin_lock+0x10/0x10
[74779.040045][ C14] flush_tlb_mm_range+0x420/0x690
[74779.040052][ C14] ? __pte_offset_map_lock+0x150/0x2c0
[74779.040057][ C14] ? __pfx_flush_tlb_mm_range+0x10/0x10
[74779.040061][ C14] ? __pfx___pte_offset_map_lock+0x10/0x10
[74779.040064][ C14] ? lock_acquire+0xf6/0x130
[74779.040070][ C14] flush_tlb_batched_pending+0x88/0xd0
[74779.040076][ C14] zap_pte_range+0x189/0x850
[74779.040083][ C14] ? arch_stack_walk+0xb7/0x100
[74779.040091][ C14] ? __pfx_zap_pte_range+0x10/0x10
[74779.040098][ C14] ? __lock_acquire+0x55d/0xbd0
[74779.040106][ C14] ? __pfx_pmd_val+0x10/0x10
[74779.040108][ C14] ? find_held_lock+0x2b/0x80
[74779.040115][ C14] zap_pmd_range.isra.0+0x1bf/0x570
[74779.040121][ C14] ? lock_release.part.0+0x1c/0x50
[74779.040125][ C14] ? __pfx_zap_pmd_range.isra.0+0x10/0x10
[74779.040131][ C14] ? __pfx_p4d_offset.part.0.isra.0+0x10/0x10
[74779.040138][ C14] unmap_page_range+0x436/0x950
[74779.040150][ C14] unmap_vmas+0x1f1/0x3b0
[74779.040155][ C14] ? __pfx_unmap_vmas+0x10/0x10
[74779.040167][ C14] ? __lock_acquire+0x55d/0xbd0
[74779.040179][ C14] vms_clear_ptes+0x381/0x6f0
[74779.040188][ C14] ? __pfx_vms_clear_ptes+0x10/0x10
[74779.040203][ C14] vms_complete_munmap_vmas+0x1a2/0x720
[74779.040210][ C14] do_vmi_align_munmap+0x314/0x480
[74779.040217][ C14] ? __pfx_do_vmi_align_munmap+0x10/0x10
[74779.040227][ C14] ? __lock_acquire+0x55d/0xbd0
[74779.040247][ C14] __do_sys_brk+0x6cc/0x950
[74779.040256][ C14] ? __pfx___do_sys_brk+0x10/0x10
[74779.040268][ C14] ? __lock_release.isra.0+0x59/0x170
[74779.040274][ C14] do_syscall_64+0x98/0x5c0
[74779.040279][ C14] ? __lock_release.isra.0+0x59/0x170
[74779.040285][ C14] ? do_user_addr_fault+0x811/0xed0
[74779.040294][ C14] ? trace_hardirqs_on_prepare+0x101/0x140
[74779.040297][ C14] ? lockdep_hardirqs_on_prepare.part.0+0x9b/0x140
[74779.040301][ C14] ? clear_bhb_loop+0x30/0x80
[74779.040304][ C14] ? clear_bhb_loop+0x30/0x80
[74779.040309][ C14] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[74779.040312][ C14] RIP: 0033:0x7f806dc4f16b
[74779.040316][ C14] Code: 85 c0 74 e0 4d 85 f6 74 0b 4c 89 e7 41 ff d6 4c 89 fe eb d9 48 89 f7 e8 a3 66 f1 ff eb f1 90 f3 0f 1e fa b8 0c 00 00 00 0f 05 <48> 8b 15 ce cc 0f 00 48 89 02 48 39 f8 72 06 31 c0 c3 0f 1f 00 48
[74779.040319][ C14] RSP: 002b:00007ffe7f391a18 EFLAGS: 00000206 ORIG_RAX: 000000000000000c
[74779.040322][ C14] RAX: ffffffffffffffda RBX: 000000000000000b RCX: 00007f806dc4f16b
[74779.040324][ C14] RDX: 000055bbb19ef000 RSI: 000055bbb19ef000 RDI: 000055bbb19d8000
[74779.040326][ C14] RBP: 00007ffe7f391a30 R08: 00000000000208d0 R09: 00007f806dd4cac0
[74779.040328][ C14] R10: 0000000000017700 R11: 0000000000000206 R12: 0000000000000000
[74779.040330][ C14] R13: 000055bbae28535a R14: 0000000000000000 R15: 00007ffe7f391d38
[74779.040340][ C14] </TASK>
[74784.274835][ C9] watchdog: BUG: soft lockup - CPU#9 stuck for 127s! [kworker/9:8:4005961]
[74784.274841][ C9] CPU#9 Utilization every 4000ms during lockup:
[74784.274843][ C9] #1: 100% system, 0% softirq, 1% hardirq, 0% idle
[74784.274846][ C9] #2: 100% system, 0% softirq, 1% hardirq, 0% idle
[74784.274848][ C9] #3: 100% system, 0% softirq, 1% hardirq, 0% idle
[74784.274850][ C9] #4: 100% system, 0% softirq, 1% hardirq, 0% idle
[74784.274852][ C9] #5: 100% system, 0% softirq, 1% hardirq, 0% idle
[74784.274854][ C9] Modules linked in: btrfs xor raid6_pq dm_flakey null_blk target_core_user target_core_mod rfkill nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nf_tables qrtr sunrpc intel_rapl_msr intel_rapl_common intel_uncore_frequency_common skx_edac_common nfit libnvdimm kvm_intel kvm ppdev iTCO_wdt intel_pmc_bxt iTCO_vendor_support irqbypass parport_pc i2c_i801 rapl i2c_smbus parport lpc_ich joydev fuse loop dm_multipath nfnetlink zram lz4hc_compress lz4_compress vsock_loopback zstd_compress vmw_vsock_virtio_transport_common vmw_vsock_vmci_transport vmw_vmci vsock xfs bochs drm_client_lib drm_shmem_helper drm_kms_helper drm virtio_net ghash_clmulni_intel net_failover failover serio_raw scsi_dh_rdac scsi_dh_emc scsi_dh_alua i2c_dev qemu_fw_cfg [last unloaded: scsi_debug]
[74784.274923][ C9] irq event stamp: 364338
[74784.274925][ C9] hardirqs last enabled at (364337): [<ffffffffa960160a>] asm_sysvec_apic_timer_interrupt+0x1a/0x20
[74784.274933][ C9] hardirqs last disabled at (364338): [<ffffffffaca8afae>] sysvec_apic_timer_interrupt+0xe/0x90
[74784.274937][ C9] softirqs last enabled at (364214): [<ffffffffa9baa582>] handle_softirqs+0x522/0x7b0
[74784.274942][ C9] softirqs last disabled at (364209): [<ffffffffa9baa9a1>] __irq_exit_rcu+0x181/0x1d0
[74784.274948][ C9] CPU: 9 UID: 0 PID: 4005961 Comm: kworker/9:8 Tainted: G L 6.19.0-rc1-kts-xfs-gea44380376c+ #1 PREEMPT(lazy)
[74784.274954][ C9] Tainted: [L]=SOFTLOCKUP
[74784.274955][ C9] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.17.0-5.fc42 04/01/2014
[74784.274958][ C9] Workqueue: events netstamp_clear
[74784.274964][ C9] RIP: 0010:smp_call_function_many_cond+0x885/0x1070
[74784.274969][ C9] Code: 38 c8 7c 08 84 c9 0f 85 6e 05 00 00 8b 43 08 a8 01 74 2e 48 89 f1 49 89 f6 48 c1 e9 03 41 83 e6 07 4c 01 f9 41 83 c6 03 f3 90 <0f> b6 01 41 38 c6 7c 08 84 c0 0f 85 db 04 00 00 8b 43 08 a8 01 75
[74784.274972][ C9] RSP: 0018:ff110006c740f990 EFLAGS: 00000202
[74784.274975][ C9] RAX: 0000000000000011 RBX: ffd1fffffe600d40 RCX: fffa3bffffcc01a9
[74784.274978][ C9] RDX: 0000000000000000 RSI: ffd1fffffe600d48 RDI: ffffffffadaf8160
[74784.274980][ C9] RBP: ffe21c01c7998d49 R08: ff11000174e44740 R09: 0000000000000000
[74784.274982][ C9] R10: 1fe220002e9c88e8 R11: 0000000000000000 R12: ff11000e3ccc6a40
[74784.274984][ C9] R13: ffe21c01c7998d48 R14: 0000000000000003 R15: dffffc0000000000
[74784.274986][ C9] FS: 0000000000000000(0000) GS:ff11000e8cf66000(0000) knlGS:0000000000000000
[74784.274988][ C9] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[74784.274990][ C9] CR2: 00007ff11b7f5cb8 CR3: 0000000ac5c96005 CR4: 0000000000773ef0
[74784.274995][ C9] PKRU: 55555554
[74784.274997][ C9] Call Trace:
[74784.274999][ C9] <TASK>
[74784.275005][ C9] ? __pfx_do_sync_core+0x10/0x10
[74784.275009][ C9] ? __text_poke+0x3b1/0x870
[74784.275017][ C9] ? __pfx_smp_call_function_many_cond+0x10/0x10
[74784.275020][ C9] ? __pfx___text_poke+0x10/0x10
[74784.275026][ C9] ? __pfx___might_resched+0x10/0x10
[74784.275034][ C9] on_each_cpu_cond_mask+0x24/0x40
[74784.275038][ C9] smp_text_poke_batch_finish+0x45c/0xd20
[74784.275041][ C9] ? tpacket_rcv+0xdc9/0x3ec0
[74784.275048][ C9] ? __pfx_smp_text_poke_batch_finish+0x10/0x10
[74784.275051][ C9] ? __jump_label_patch+0x25f/0x320
[74784.275059][ C9] ? arch_jump_label_transform_queue+0xa8/0x110
[74784.275068][ C9] arch_jump_label_transform_apply+0x1c/0x30
[74784.275072][ C9] static_key_enable_cpuslocked+0x16c/0x230
[74784.275077][ C9] static_key_enable+0x1f/0x30
[74784.275080][ C9] process_one_work+0x86b/0x1490
[74784.275091][ C9] ? __pfx_process_one_work+0x10/0x10
[74784.275094][ C9] ? lock_acquire.part.0+0xb8/0x230
[74784.275100][ C9] ? lock_is_held_type+0x9a/0x110
[74784.275106][ C9] ? assign_work+0x156/0x390
[74784.275114][ C9] worker_thread+0x5f2/0xfd0
[74784.275128][ C9] ? __pfx_worker_thread+0x10/0x10
[74784.275131][ C9] kthread+0x3a4/0x760
[74784.275136][ C9] ? __pfx_kthread+0x10/0x10
[74784.275140][ C9] ? __lock_release.isra.0+0x59/0x170
[74784.275145][ C9] ? __pfx_kthread+0x10/0x10
[74784.275150][ C9] ret_from_fork+0x50e/0x670
[74784.275154][ C9] ? __pfx_ret_from_fork+0x10/0x10
[74784.275160][ C9] ? __switch_to+0x42c/0xd70
[74784.275165][ C9] ? __pfx_kthread+0x10/0x10
[74784.275169][ C9] ret_from_fork_asm+0x1a/0x30
[74784.275182][ C9] </TASK>
...
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: rcu stalls during fstests runs for xfs
2026-01-26 11:30 rcu stalls during fstests runs for xfs Shinichiro Kawasaki
@ 2026-01-26 23:05 ` Dave Chinner
2026-01-27 1:38 ` Shinichiro Kawasaki
2026-01-28 9:55 ` Kunwu Chan
1 sibling, 1 reply; 14+ messages in thread
From: Dave Chinner @ 2026-01-26 23:05 UTC (permalink / raw)
To: Shinichiro Kawasaki; +Cc: rcu@vger.kernel.org, linux-xfs@vger.kernel.org, hch
On Mon, Jan 26, 2026 at 11:30:17AM +0000, Shinichiro Kawasaki wrote:
> Hello all,
>
> I regularly run fstests with the kernel at xfs/for-next branch tip to validate
> the capability of zoned block device capability of xfs. Recently, I started
> observing hangs of the test runs with the message:
>
> "rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:"
>
> The hangs occurred in different test cases, and simply rerunning the test cases
> does not reproduce the hang. When I ran the whole fstests test cases, it also
> fails to reproduce the hang. However, when the whole fstests is repeated the
> hang is recreated. The hang looks rare, takes very long time to recreate and is
> tough to chase down.
>
> To tackle this problem, I would like to seek the expertise of rcu developers. I
> have attached kernel message logs captured at the hangs for analysis [1][2][3].
> Any insights or guidance on how to debug this problem will be appreciated.
>
Nothing XFS related in these. All the XFS traces are waiting on IO
submission - the block layer below XFS is typically sleeping waiting
for tags to be allocated.
> [1] hang observed on Jan/23/2026
>
> dmesg log file attached: generic_005_hang
> hanged test case: generic/005
> kernel: xfs/for-next, 51aba4ca399, v6.19-rc5+
> block device: dm-linear on HDD (non-zoned)
> xfs: zoned
The block device has an expired rq, so the timeout work is trying to
run synchronize_rcu():
> [272416.203262][ T167] wait_for_completion_state+0x21/0x40
> [272416.203719][ T167] __wait_rcu_gp+0x1cd/0x410
> [272416.204487][ T167] synchronize_rcu_normal+0x4a8/0x510
> [272416.207632][ T167] blk_mq_timeout_work+0x4aa/0x5d0
> [272416.209324][ T167] process_one_work+0x86b/0x1490
So that's possibly why IO is stuck, i.e. the block device is waiting
for the RCU grace period to end, and RCU processing has stalled
for some reason. Hence the block device appears to be a victim of
the issue, not the cause.
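As an aside, here is a minimal sketch of that dependency (illustrative
only, with made-up names, not code from the attached logs): a work item
that waits for a grace period the same way blk_mq_timeout_work() does
via synchronize_rcu_normal(). If the grace period cannot end because
RCU itself is stalled, the work item hangs as a victim, along with
anything queued behind it:

/*
 * Illustrative sketch only: a work item that waits for an RCU grace
 * period, mirroring what blk_mq_timeout_work() does.  If the grace
 * period never ends because the rcu_preempt kthread is stalled, this
 * work item hangs too.
 */
#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/rcupdate.h>

static void gp_wait_work_fn(struct work_struct *work)
{
	pr_info("gp_wait: waiting for an RCU grace period\n");
	synchronize_rcu();	/* waits for a full grace period to elapse */
	pr_info("gp_wait: grace period completed\n");
}

static DECLARE_WORK(gp_wait_work, gp_wait_work_fn);

static int __init gp_wait_init(void)
{
	schedule_work(&gp_wait_work);	/* system workqueue, as for the timeout work */
	return 0;
}

static void __exit gp_wait_exit(void)
{
	flush_work(&gp_wait_work);
}

module_init(gp_wait_init);
module_exit(gp_wait_exit);
MODULE_LICENSE("GPL");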
> [2] hang observed on Jan/18/2026
>
> dmesg log file attached: xfs_598_hang
> hanged test case: xfs/598
> kernel: Christophs' xfs branch, ec6aea2a5 v6.19-rc1+
> block device: TCMU (non-zoned)
> xfs: non-zoned
Looks like some kind of scheduler/static-key livelock or deadlock.
There are a bunch of tasks all doing stuff like:
> [164582.112175][ C10] on_each_cpu_cond_mask+0x24/0x40
> [164582.112179][ C10] smp_text_poke_batch_finish+0x45c/0xd20
> [164582.112218][ C10] arch_jump_label_transform_apply+0x1c/0x30
> [164582.112224][ C10] static_key_enable_cpuslocked+0x16c/0x230
> [164582.112230][ C10] static_key_enable+0x1f/0x30
> [164582.112235][ C10] process_one_work+0x86b/0x1490
Along with the rcu_preempt thread apparently spinning trying to
reschedule:
> [164661.054667][ C12] RIP: 0010:__pv_queued_spin_lock_slowpath+0x232/0xdc0
> [164661.054745][ C12] do_raw_spin_lock+0x1d9/0x270
> [164661.054768][ C12] raw_spin_rq_lock_nested+0x24/0x170
> [164661.054774][ C12] _raw_spin_rq_lock_irqsave+0x41/0x50
> [164661.054778][ C12] resched_cpu+0x62/0xf0
> [164661.054783][ C12] force_qs_rnp+0x67d/0xaa0
> [164661.054799][ C12] rcu_gp_fqs_loop+0x948/0x11b0
> [164661.054841][ C12] rcu_gp_kthread+0x4f2/0x660
> [164661.054876][ C12] kthread+0x3a4/0x760
I can't find anything obvious in the block layer waiting on RCU.
However, XFS is waiting in the block layer on mq tag allocation for
submission (like the 005 hang above) or waiting on journal write IO
completion, so the block layer may well be hung on RCU again.
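For reference, the static_key_enable() path in the traces above is not
specific to any one caller; every static key user ends up there. A
minimal sketch (illustrative only, the key name is hypothetical) of
what the stuck kworker is doing:

/*
 * Illustrative sketch: enabling a static key rewrites the patched
 * branch sites and then synchronizes the text change with every
 * online CPU (smp_text_poke_batch_finish() -> on_each_cpu_cond_mask()
 * in the traces above).  If other CPUs never get around to handling
 * those IPIs, the enabling task sits in that wait indefinitely.
 */
#include <linux/types.h>
#include <linux/jump_label.h>

static DEFINE_STATIC_KEY_FALSE(example_key);	/* hypothetical key */

bool example_fast_path(void)
{
	/* Compiled as a patchable jump rather than a load and test. */
	return static_branch_unlikely(&example_key);
}

void example_enable(void)
{
	/* Patches all jump sites, then waits for every CPU to sync up. */
	static_branch_enable(&example_key);
}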
> [3] hang observed on Jan/14/2026
>
> dmesg log file attached: generic_417_hang
> hanged test case: generic/417
> kernel: xfs/for-next, ea44380376c, v6.19-rc1+
> block device: null_blk (non-zoned)
> xfs: zoned
Same static key pattern in on_each_cpu_cond_mask(); there's also a
bunch of TLB flushes stuck in on_each_cpu_cond_mask(). The rcu_preempt
thread is not waking from:
> [74627.121083][ C2] schedule+0xd1/0x250
> [74627.121959][ C2] schedule_timeout+0x103/0x260
> [74627.128027][ C2] rcu_gp_fqs_loop+0x208/0x11b0
> [74627.135240][ C2] rcu_gp_kthread+0x4f2/0x660
There is nothing XFS or block related in the hung task traces
at all.
IOWs, this looks like some kind of RCU/static key/scheduler
interaction which may propagate into the block layer if it needs RCU
synchronisation. Hence it does not appear to have anything to do
with the filesystem layers, and it is possible the block layer is
collateral damage, too.
Probably best to hand this over to the core kernel people.
-Dave.
--
Dave Chinner
david@fromorbit.com
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: rcu stalls during fstests runs for xfs
2026-01-26 23:05 ` Dave Chinner
@ 2026-01-27 1:38 ` Shinichiro Kawasaki
0 siblings, 0 replies; 14+ messages in thread
From: Shinichiro Kawasaki @ 2026-01-27 1:38 UTC (permalink / raw)
To: Dave Chinner; +Cc: rcu@vger.kernel.org, linux-xfs@vger.kernel.org, hch
On Jan 27, 2026 / 10:05, Dave Chinner wrote:
[...]
> IOWs, this looks like some kind of RCU/static key/scheduler
> interaction which may propagate into the block layer if it needs RCU
> synchronisation. Hence it does not appear to have anything to do
> with the filesystem layers, and it is possible the block layer is
> collateral damage, too.
>
> Probably best to hand this over to the core kernel people.
Dave, thank you very much for looking into the logs and sharing your views.
I will wait for comments from the RCU experts and keep trying to recreate
the hangs.
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: rcu stalls during fstests runs for xfs
2026-01-26 11:30 rcu stalls during fstests runs for xfs Shinichiro Kawasaki
2026-01-26 23:05 ` Dave Chinner
@ 2026-01-28 9:55 ` Kunwu Chan
2026-01-28 16:42 ` Paul E. McKenney
1 sibling, 1 reply; 14+ messages in thread
From: Kunwu Chan @ 2026-01-28 9:55 UTC (permalink / raw)
To: Shinichiro Kawasaki, rcu@vger.kernel.org; +Cc: linux-xfs@vger.kernel.org, hch
On 1/26/26 19:30, Shinichiro Kawasaki wrote:
> kernel: xfs/for-next, 51aba4ca399, v6.19-rc5+
> block device: dm-linear on HDD (non-zoned)
> xfs: zoned
I had a quick look at the attached logs. Across the different runs, the
stall traces consistently show CPUs spending extended time in
mm_get_cid() along the mm/sched context switch path.
This doesn’t seem to indicate an immediate RCU issue by itself, but it
raises the question of whether context switch completion can be delayed
for unusually long periods under these test configurations.
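To illustrate why that path can consume so much CPU time, here is a
generic sketch (an assumption about the general shape of a compact-ID
allocator, not a copy of mm_get_cid()): picking the lowest free ID from
a shared bitmap with an atomic retry loop. Every lost race forces
another scan, and since this runs during the context switch itself, the
delay shows up exactly where the stall traces point:

/*
 * Generic sketch only, not mm_get_cid(): allocate the lowest free ID
 * from a shared bitmap with a test-and-set retry loop.  With many CPUs
 * switching into the same mm at once, each lost race forces a rescan,
 * so the loop can spin for a long time.
 */
#include <linux/bitmap.h>
#include <linux/bitops.h>

static int alloc_lowest_free_id(unsigned long *bitmap, unsigned int nr_ids)
{
	unsigned int id;

	for (;;) {
		id = find_first_zero_bit(bitmap, nr_ids);
		if (id >= nr_ids)
			return -1;		/* all IDs in use */
		if (!test_and_set_bit(id, bitmap))
			return id;		/* won the race */
		/* lost the race to another CPU: scan again */
	}
}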
--
Thanx, Kunwu
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: rcu stalls during fstests runs for xfs
2026-01-28 9:55 ` Kunwu Chan
@ 2026-01-28 16:42 ` Paul E. McKenney
2026-01-29 5:27 ` Shinichiro Kawasaki
0 siblings, 1 reply; 14+ messages in thread
From: Paul E. McKenney @ 2026-01-28 16:42 UTC (permalink / raw)
To: Kunwu Chan
Cc: Shinichiro Kawasaki, rcu@vger.kernel.org,
linux-xfs@vger.kernel.org, hch
On Wed, Jan 28, 2026 at 05:55:01PM +0800, Kunwu Chan wrote:
> On 1/26/26 19:30, Shinichiro Kawasaki wrote:
> > kernel: xfs/for-next, 51aba4ca399, v6.19-rc5+
> > block device: dm-linear on HDD (non-zoned)
> > xfs: zoned
>
> I had a quick look at the attached logs. Across the different runs, the
> stall traces consistently show CPUs spending extended time in
> mm_get_cid() along the mm/sched context switch path.
>
> This doesn’t seem to indicate an immediate RCU issue by itself, but it
> raises the question of whether context switch completion can be delayed
> for unusually long periods under these test configurations.
Thank you all!
Us RCU guys looked at this and it also looks to us that at least one
part of this issue is that mm_get_cid() is spinning. This is being
investigated over here:
https://lore.kernel.org/all/877bt29cgv.ffs@tglx/
https://lore.kernel.org/all/bdfea828-4585-40e8-8835-247c6a8a76b0@linux.ibm.com/
https://lore.kernel.org/all/87y0lh96xo.ffs@tglx/
I have seen the static-key pattern called out by Dave Chinner when running
KASAN on large systems. We worked around this by disabling KASAN's use
of static keys. In case you were running KASAN in these tests.
Thanx, Paul
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: rcu stalls during fstests runs for xfs
2026-01-28 16:42 ` Paul E. McKenney
@ 2026-01-29 5:27 ` Shinichiro Kawasaki
2026-01-29 17:46 ` Paul E. McKenney
0 siblings, 1 reply; 14+ messages in thread
From: Shinichiro Kawasaki @ 2026-01-29 5:27 UTC (permalink / raw)
To: Paul E. McKenney
Cc: Kunwu Chan, rcu@vger.kernel.org, linux-xfs@vger.kernel.org, hch
On Jan 28, 2026 / 08:42, Paul E. McKenney wrote:
> On Wed, Jan 28, 2026 at 05:55:01PM +0800, Kunwu Chan wrote:
> > On 1/26/26 19:30, Shinichiro Kawasaki wrote:
> > > kernel: xfs/for-next, 51aba4ca399, v6.19-rc5+
> > > block device: dm-linear on HDD (non-zoned)
> > > xfs: zoned
> >
> > I had a quick look at the attached logs. Across the different runs, the
> > stall traces consistently show CPUs spending extended time in
> > mm_get_cid() along the mm/sched context switch path.
> >
> > This doesn’t seem to indicate an immediate RCU issue by itself, but it
> > raises the question of whether context switch completion can be delayed
> > for unusually long periods under these test configurations.
>
> Thank you all!
>
> Us RCU guys looked at this and it also looks to us that at least one
> part of this issue is that mm_get_cid() is spinning. This is being
> investigated over here:
>
> https://lore.kernel.org/all/877bt29cgv.ffs@tglx/
> https://lore.kernel.org/all/bdfea828-4585-40e8-8835-247c6a8a76b0@linux.ibm.com/
> https://lore.kernel.org/all/87y0lh96xo.ffs@tglx/
Kunwu, Paul and RCU experts, thank you very much. It's good to know that a
similar issue is already under investigation. I hope that a fix becomes
available in a timely manner.
> I have seen the static-key pattern called out by Dave Chinner when running
> KASAN on large systems. We worked around this by disabling KASAN's use
> of static keys. In case you were running KASAN in these tests.
As to KASAN, yes, I enable it in my test runs. I find three static-keys under
mm/kasan/*. I will think if they can be disabled in my test runs. Thanks.
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: rcu stalls during fstests runs for xfs
2026-01-29 5:27 ` Shinichiro Kawasaki
@ 2026-01-29 17:46 ` Paul E. McKenney
2026-01-29 23:19 ` Paul E. McKenney
0 siblings, 1 reply; 14+ messages in thread
From: Paul E. McKenney @ 2026-01-29 17:46 UTC (permalink / raw)
To: Shinichiro Kawasaki
Cc: Kunwu Chan, rcu@vger.kernel.org, linux-xfs@vger.kernel.org, hch
On Thu, Jan 29, 2026 at 05:27:04AM +0000, Shinichiro Kawasaki wrote:
> On Jan 28, 2026 / 08:42, Paul E. McKenney wrote:
> > On Wed, Jan 28, 2026 at 05:55:01PM +0800, Kunwu Chan wrote:
> > > On 1/26/26 19:30, Shinichiro Kawasaki wrote:
> > > > kernel: xfs/for-next, 51aba4ca399, v6.19-rc5+
> > > > block device: dm-linear on HDD (non-zoned)
> > > > xfs: zoned
> > >
> > > I had a quick look at the attached logs. Across the different runs, the
> > > stall traces consistently show CPUs spending extended time in
> > > mm_get_cid() along the mm/sched context switch path.
> > >
> > > This doesn’t seem to indicate an immediate RCU issue by itself, but it
> > > raises the question of whether context switch completion can be delayed
> > > for unusually long periods under these test configurations.
> >
> > Thank you all!
> >
> > Us RCU guys looked at this and it also looks to us that at least one
> > part of this issue is that mm_get_cid() is spinning. This is being
> > investigated over here:
> >
> > https://lore.kernel.org/all/877bt29cgv.ffs@tglx/
> > https://lore.kernel.org/all/bdfea828-4585-40e8-8835-247c6a8a76b0@linux.ibm.com/
> > https://lore.kernel.org/all/87y0lh96xo.ffs@tglx/
>
> > Kunwu, Paul and RCU experts, thank you very much. It's good to know that a
> > similar issue is already under investigation. I hope that a fix becomes
> > available in a timely manner.
>
> > I have seen the static-key pattern called out by Dave Chinner when running
> > KASAN on large systems. We worked around this by disabling KASAN's use
> > of static keys. In case you were running KASAN in these tests.
>
> As to KASAN, yes, I enable it in my test runs. I find three static-keys under
> mm/kasan/*. I will think if they can be disabled in my test runs. Thanks.
There is a set of Kconfig options that disables static branches. If you
cannot find them quickly, please let me know and I can look them up.
Thanx, Paul
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: rcu stalls during fstests runs for xfs
2026-01-29 17:46 ` Paul E. McKenney
@ 2026-01-29 23:19 ` Paul E. McKenney
2026-01-30 11:16 ` Shinichiro Kawasaki
0 siblings, 1 reply; 14+ messages in thread
From: Paul E. McKenney @ 2026-01-29 23:19 UTC (permalink / raw)
To: Shinichiro Kawasaki
Cc: Kunwu Chan, rcu@vger.kernel.org, linux-xfs@vger.kernel.org, hch
On Thu, Jan 29, 2026 at 09:46:12AM -0800, Paul E. McKenney wrote:
> On Thu, Jan 29, 2026 at 05:27:04AM +0000, Shinichiro Kawasaki wrote:
> > On Jan 28, 2026 / 08:42, Paul E. McKenney wrote:
> > > On Wed, Jan 28, 2026 at 05:55:01PM +0800, Kunwu Chan wrote:
> > > > On 1/26/26 19:30, Shinichiro Kawasaki wrote:
> > > > > kernel: xfs/for-next, 51aba4ca399, v6.19-rc5+
> > > > > block device: dm-linear on HDD (non-zoned)
> > > > > xfs: zoned
> > > >
> > > > I had a quick look at the attached logs. Across the different runs, the
> > > > stall traces consistently show CPUs spending extended time in
> > > > mm_get_cid() along the mm/sched context switch path.
> > > >
> > > > This doesn’t seem to indicate an immediate RCU issue by itself, but it
> > > > raises the question of whether context switch completion can be delayed
> > > > for unusually long periods under these test configurations.
> > >
> > > Thank you all!
> > >
> > > Us RCU guys looked at this and it also looks to us that at least one
> > > part of this issue is that mm_get_cid() is spinning. This is being
> > > investigated over here:
> > >
> > > https://lore.kernel.org/all/877bt29cgv.ffs@tglx/
> > > https://lore.kernel.org/all/bdfea828-4585-40e8-8835-247c6a8a76b0@linux.ibm.com/
> > > https://lore.kernel.org/all/87y0lh96xo.ffs@tglx/
> >
> > Kunwu, Paul and RCU experts, thank you very much. It's good to know that the
> > similar issue is already under investigation. I hope that a fix gets available
> > in timely manner.
> >
> > > I have seen the static-key pattern called out by Dave Chinner when running
> > > KASAN on large systems. We worked around this by disabling KASAN's use
> > > of static keys. In case you were running KASAN in these tests.
> >
> > As to KASAN, yes, I enable it in my test runs. I find three static-keys under
> > mm/kasan/*. I will think if they can be disabled in my test runs. Thanks.
>
> There is a set of Kconfig options that disables static branches. If you
> cannot find them quickly, please let me know and I can look them up.
And Thomas Gleixner posted an alleged fix to the CID issue here:
https://lore.kernel.org/lkml/20260129210219.452851594@kernel.org/
Please let him know whether or not it helps.
Thanx, Paul
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: rcu stalls during fstests runs for xfs
2026-01-29 23:19 ` Paul E. McKenney
@ 2026-01-30 11:16 ` Shinichiro Kawasaki
2026-02-06 9:33 ` Matthieu Baerts
2026-02-13 1:26 ` Shinichiro Kawasaki
0 siblings, 2 replies; 14+ messages in thread
From: Shinichiro Kawasaki @ 2026-01-30 11:16 UTC (permalink / raw)
To: Paul E. McKenney
Cc: Kunwu Chan, rcu@vger.kernel.org, linux-xfs@vger.kernel.org, hch
On Jan 29, 2026 / 15:19, Paul E. McKenney wrote:
[...]
> > > > I have seen the static-key pattern called out by Dave Chinner when running
> > > > KASAN on large systems. We worked around this by disabling KASAN's use
> > > > of static keys. In case you were running KASAN in these tests.
> > >
> > > As to KASAN, yes, I enable it in my test runs. I find three static-keys under
> > > mm/kasan/*. I will think if they can be disabled in my test runs. Thanks.
> >
> > There is a set of Kconfig options that disables static branches. If you
> > cannot find them quickly, please let me know and I can look them up.
Thank you. But now I know that the fix series by Thomas is available, so I will
prioritize evaluating it. Later on, I will try disabling the static keys if that
is still required.
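(If it does come to that, my understanding is that the coarsest knob is building
without CONFIG_JUMP_LABEL, which makes static_branch_likely()/_unlikely() fall
back to ordinary conditional branches, e.g. in the .config:

  # CONFIG_JUMP_LABEL is not set

I assume this is among the Kconfig options Paul has in mind; there may also be
more targeted per-key options.)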
>
> And Thomas Gleixner posted an alleged fix to the CID issue here:
>
> https://lore.kernel.org/lkml/20260129210219.452851594@kernel.org/
>
> Please let him know whether or not it helps.
Good to see this fix candidate series, thanks :) I have set up the patches and
started my regular test runs. So far, the hangs have been observed once or twice
a week. To confirm the effect of the fix series, I think two weeks of runs will
be required. Once I get the result, I will share it on this thread and with Thomas.
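(In case it is useful to anyone else trying the series: fetching and applying it
from lore with b4 should work, roughly:

  $ b4 shazam https://lore.kernel.org/lkml/20260129210219.452851594@kernel.org/

run on top of the test branch; the exact steps may differ in your setup.)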
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: rcu stalls during fstests runs for xfs
2026-01-30 11:16 ` Shinichiro Kawasaki
@ 2026-02-06 9:33 ` Matthieu Baerts
2026-02-06 10:02 ` Shinichiro Kawasaki
2026-02-13 1:26 ` Shinichiro Kawasaki
1 sibling, 1 reply; 14+ messages in thread
From: Matthieu Baerts @ 2026-02-06 9:33 UTC (permalink / raw)
To: Shinichiro Kawasaki
Cc: Kunwu Chan, rcu@vger.kernel.org, linux-xfs@vger.kernel.org, hch,
Paul E. McKenney, MPTCP Linux
Hi Shinichiro,
Sorry to jump in, but I *think* our CI for the MPTCP subsystem is
hitting the same issue.
On 30/01/2026 12:16, Shinichiro Kawasaki wrote:
> On Jan 29, 2026 / 15:19, Paul E. McKenney wrote:
> [...]
>>>>> I have seen the static-key pattern called out by Dave Chinner when running
>>>>> KASAN on large systems. We worked around this by disabling KASAN's use
>>>>> of static keys. In case you were running KASAN in these tests.
>>>>
>>>> As to KASAN, yes, I enable it in my test runs. I find three static-keys under
>>>> mm/kasan/*. I will think if they can be disabled in my test runs. Thanks.
>>>
>>> There is a set of Kconfig options that disables static branches. If you
>>> cannot find them quickly, please let me know and I can look them up.
>
> Thank you. But now I know the fix series by Thomas is available. I prioritize
> the evaluation of the fix series. Later on, I will try disabling the static-keys
> if it is required.
>
>>
>> And Thomas Gleixner posted an alleged fix to the CID issue here:
>>
>> https://lore.kernel.org/lkml/20260129210219.452851594@kernel.org/
>>
>> Please let him know whether or not it helps.
>
> Good to see this fix candidate series, thanks :) I have set up the patches and
> started my regular test runs. So far, the hangs have been observed once or twice
> a week. To confirm the effect of the fix series, I think two weeks runs will be
> required. Once I get the result, will share it on this thread and with Thomas.
I know it is only one week now, but did you see any effects so far? On
my side, I applied the v2 series -- which has been applied in
tip/sched/urgent -- but I still have issues, and it looks like it is
even more frequent. Maybe what I see is different. If you no longer see
the issues on your side after one week, I'm going to start a new thread
for my issues so as not to mix them up.
Note that in my case, the issue is visible on a system where nested VMs
are used, with and without KASAN (enabled via debug.config), just after
having started a VSOCK listening socket via socat.
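(For reference, a minimal sketch of such a listener, not the exact command our
CI uses:

  $ socat VSOCK-LISTEN:1234,fork -

i.e. listen on a vsock port and relay the connection to stdio.)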
Cheers,
Matt
--
Sponsored by the NGI0 Core fund.
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: rcu stalls during fstests runs for xfs
2026-02-06 9:33 ` Matthieu Baerts
@ 2026-02-06 10:02 ` Shinichiro Kawasaki
2026-02-06 11:04 ` Matthieu Baerts
0 siblings, 1 reply; 14+ messages in thread
From: Shinichiro Kawasaki @ 2026-02-06 10:02 UTC (permalink / raw)
To: Matthieu Baerts
Cc: Kunwu Chan, rcu@vger.kernel.org, linux-xfs@vger.kernel.org, hch,
Paul E. McKenney, MPTCP Linux
On Feb 06, 2026 / 10:33, Matthieu Baerts wrote:
> Hi Shinichiro,
>
> Sorry to jump in, but I *think* our CI for the MPTCP subsystem is
> hitting the same issue.
Hi Matthieu,
> On 30/01/2026 12:16, Shinichiro Kawasaki wrote:
> > On Jan 29, 2026 / 15:19, Paul E. McKenney wrote:
> > [...]
> >>>>> I have seen the static-key pattern called out by Dave Chinner when running
> >>>>> KASAN on large systems. We worked around this by disabling KASAN's use
> >>>>> of static keys. In case you were running KASAN in these tests.
> >>>>
> >>>> As to KASAN, yes, I enable it in my test runs. I find three static-keys under
> >>>> mm/kasan/*. I will think if they can be disabled in my test runs. Thanks.
> >>>
> >>> There is a set of Kconfig options that disables static branches. If you
> >>> cannot find them quickly, please let me know and I can look them up.
> >
> > Thank you. But now I know the fix series by Thomas is available. I prioritize
> > the evaluation of the fix series. Later on, I will try disabling the static-keys
> > if it is required.
> >
> >>
> >> And Thomas Gleixner posted an alleged fix to the CID issue here:
> >>
> >> https://lore.kernel.org/lkml/20260129210219.452851594@kernel.org/
> >>
> >> Please let him know whether or not it helps.
> >
> > Good to see this fix candidate series, thanks :) I have set up the patches and
> > started my regular test runs. So far, the hangs have been observed once or twice
> > a week. To confirm the effect of the fix series, I think two weeks runs will be
> > required. Once I get the result, will share it on this thread and with Thomas.
>
> I know it is only one week now, but did you see any effects so far?
No, I have not seen any hang so far. And I hope there will be no hang in the
next week either. Fingers crossed...
> On
> my side, I applied the v2 series -- which has been applied in
> tip/sched/urgent -- but I still have issues, and it looks like it is
> even more frequent. Maybe what I see is different. If you no longer see
> the issues on your side after one week, I'm going to start a new thread
> with my issues not to mix them.
>
> Note that in my case, the issue is visible on a system where nested VMs
> are used, with and without KASAN (enabled via debug.config), just after
> having started a VSOCK listening socket via socat.
I applied the v1 series on top of my xfs test target kernel branches, with
KASAN enabled.
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: rcu stalls during fstests runs for xfs
2026-02-06 10:02 ` Shinichiro Kawasaki
@ 2026-02-06 11:04 ` Matthieu Baerts
0 siblings, 0 replies; 14+ messages in thread
From: Matthieu Baerts @ 2026-02-06 11:04 UTC (permalink / raw)
To: Shinichiro Kawasaki
Cc: Kunwu Chan, rcu@vger.kernel.org, linux-xfs@vger.kernel.org, hch,
Paul E. McKenney, MPTCP Linux
On 06/02/2026 11:02, Shinichiro Kawasaki wrote:
> On Feb 06, 2026 / 10:33, Matthieu Baerts wrote:
>> Hi Shinichiro,
>>
>> Sorry to jump in, but I *think* our CI for the MPTCP subsystem is
>> hitting the same issue.
>
> Hi Matthieu,
>
>> On 30/01/2026 12:16, Shinichiro Kawasaki wrote:
>>> On Jan 29, 2026 / 15:19, Paul E. McKenney wrote:
>>> [...]
>>>>>>> I have seen the static-key pattern called out by Dave Chinner when running
>>>>>>> KASAN on large systems. We worked around this by disabling KASAN's use
>>>>>>> of static keys. In case you were running KASAN in these tests.
>>>>>>
>>>>>> As to KASAN, yes, I enable it in my test runs. I find three static-keys under
>>>>>> mm/kasan/*. I will think if they can be disabled in my test runs. Thanks.
>>>>>
>>>>> There is a set of Kconfig options that disables static branches. If you
>>>>> cannot find them quickly, please let me know and I can look them up.
>>>
>>> Thank you. But now I know the fix series by Thomas is available. I prioritize
>>> the evaluation of the fix series. Later on, I will try disabling the static-keys
>>> if it is required.
>>>
>>>>
>>>> And Thomas Gleixner posted an alleged fix to the CID issue here:
>>>>
>>>> https://lore.kernel.org/lkml/20260129210219.452851594@kernel.org/
>>>>
>>>> Please let him know whether or not it helps.
>>>
>>> Good to see this fix candidate series, thanks :) I have set up the patches and
>>> started my regular test runs. So far, the hangs have been observed once or twice
>>> a week. To confirm the effect of the fix series, I think two weeks runs will be
>>> required. Once I get the result, will share it on this thread and with Thomas.
>>
>> I know it is only one week now, but did you see any effects so far?
>
> No, I do not see any hang so far. And I hope there will be no hang in the
> next week either. Fingers crossed...
Thank you for your reply!
>> On
>> my side, I applied the v2 series -- which has been applied in
>> tip/sched/urgent -- but I still have issues, and it looks like it is
>> even more frequent. Maybe what I see is different. If you no longer see
>> the issues on your side after one week, I'm going to start a new thread
>> with my issues not to mix them.
>>
>> Note that in my case, the issue is visible on a system where nested VMs
>> are used, with and without KASAN (enabled via debug.config), just after
>> having started a VSOCK listening socket via socat.
>
> I applied the v1 series on top of my test target xfs kernel branches enabling
> KASAN.
Sorry for the noise, I guess I have a different issue, even if the
traces look similar [1]. Hopefully someone can help me find the root
cause :)
[1]
https://github.com/multipath-tcp/mptcp_net-next/actions/runs/21723325004/job/62658752123#step:7:7288
Cheers,
Matt
--
Sponsored by the NGI0 Core fund.
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: rcu stalls during fstests runs for xfs
2026-01-30 11:16 ` Shinichiro Kawasaki
2026-02-06 9:33 ` Matthieu Baerts
@ 2026-02-13 1:26 ` Shinichiro Kawasaki
2026-02-20 19:01 ` Joel Fernandes
1 sibling, 1 reply; 14+ messages in thread
From: Shinichiro Kawasaki @ 2026-02-13 1:26 UTC (permalink / raw)
To: Paul E. McKenney
Cc: Kunwu Chan, rcu@vger.kernel.org, linux-xfs@vger.kernel.org, hch,
Thomas Gleixner
Cc+: Thomas,
On Jan 30, 2026 / 20:16, Shin'ichiro Kawasaki wrote:
> On Jan 29, 2026 / 15:19, Paul E. McKenney wrote:
> [...]
> > And Thomas Gleixner posted an alleged fix to the CID issue here:
> >
> > https://lore.kernel.org/lkml/20260129210219.452851594@kernel.org/
> >
> > Please let him know whether or not it helps.
>
> Good to see this fix candidate series, thanks :) I have set up the patches and
> started my regular test runs. So far, the hangs have been observed once or twice
> a week. To confirm the effect of the fix series, I think two weeks runs will be
> required. Once I get the result, will share it on this thread and with Thomas.
Two weeks have passed, and I did not observe the hang! So I'm confident that
the v1 fix series by Thomas avoided the rcu stall issue in my xfs-zoned test
systems. The series is already in the v6.19 kernel tag as v2. Great.
Thomas, just FYI.
I faced mysterious kernel hangs during my regular fstests runs [1]. RCU experts
suggested the hangs might be caused by the recent MMCID changes. I tried your v1
fix patch series "sched/mmcid: Cure mode transition woes", and confirmed it
avoids the hangs. Thank you for the fix. The v2 series is already in the v6.19
kernel tag, so this report might not be of much value, but just in case. (And
thank you again for the additional quick fix for my blktests failure caused by
one of the patches in the series).
[1] https://lore.kernel.org/rcu/aXdO52wh2rqTUi1E@shinmob/
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: rcu stalls during fstests runs for xfs
2026-02-13 1:26 ` Shinichiro Kawasaki
@ 2026-02-20 19:01 ` Joel Fernandes
0 siblings, 0 replies; 14+ messages in thread
From: Joel Fernandes @ 2026-02-20 19:01 UTC (permalink / raw)
To: Shinichiro Kawasaki
Cc: Paul E. McKenney, Kunwu Chan, rcu@vger.kernel.org,
linux-xfs@vger.kernel.org, hch, Thomas Gleixner
On Fri, Feb 13, 2026 at 01:26:32AM +0000, Shinichiro Kawasaki wrote:
> Cc+: Thomas,
>
> On Jan 30, 2026 / 20:16, Shin'ichiro Kawasaki wrote:
> > On Jan 29, 2026 / 15:19, Paul E. McKenney wrote:
> > [...]
> > > And Thomas Gleixner posted an alleged fix to the CID issue here:
> > >
> > > https://lore.kernel.org/lkml/20260129210219.452851594@kernel.org/
> > >
> > > Please let him know whether or not it helps.
> >
> > Good to see this fix candidate series, thanks :) I have set up the patches and
> > started my regular test runs. So far, the hangs have been observed once or twice
> > a week. To confirm the effect of the fix series, I think two weeks runs will be
> > required. Once I get the result, will share it on this thread and with Thomas.
>
> Two weeks have passed, and I did not observe the hang! So I'm confident that
> the v1 fix series by Thomas avoided the rcu stall issue in my xfs-zoned test
> systems. The series is already in the v6.19 kernel tag as v2. Great.
>
> Thomas, just FYI.
>
> I faced mysterious kernel hangs during my regular fstests runs [1]. RCU experts
> suggested the hangs might be caused by the recent MMCID changes. I tried your v1
> fix patch series "sched/mmcid: Cure mode transition woes", and confirmed it
> avoids the hangs. Thank you for the fix. The v2 series is already in v6.19
> kernel tag, so this report might not be so valuable, but just in case. (And
> thank you again for the additional quick fix for my blktests failure caused by
> one of the patches in the series).
>
> [1] https://lore.kernel.org/rcu/aXdO52wh2rqTUi1E@shinmob/
Good to see that this got resolved. I am guessing there's nothing from an RCU
point of view that could be done differently to diagnose this earlier, since
I pretty quickly spotted it was MMCID related when I saw the existing report.
Let me/us RCU folk know if there's anything else to do here, though.
thanks,
--
Joel Fernandes
^ permalink raw reply [flat|nested] 14+ messages in thread
end of thread, other threads:[~2026-02-20 19:01 UTC | newest]
Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-01-26 11:30 rcu stalls during fstests runs for xfs Shinichiro Kawasaki
2026-01-26 23:05 ` Dave Chinner
2026-01-27 1:38 ` Shinichiro Kawasaki
2026-01-28 9:55 ` Kunwu Chan
2026-01-28 16:42 ` Paul E. McKenney
2026-01-29 5:27 ` Shinichiro Kawasaki
2026-01-29 17:46 ` Paul E. McKenney
2026-01-29 23:19 ` Paul E. McKenney
2026-01-30 11:16 ` Shinichiro Kawasaki
2026-02-06 9:33 ` Matthieu Baerts
2026-02-06 10:02 ` Shinichiro Kawasaki
2026-02-06 11:04 ` Matthieu Baerts
2026-02-13 1:26 ` Shinichiro Kawasaki
2026-02-20 19:01 ` Joel Fernandes
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox