* [v6.12] WARNING: at kernel/sched/deadline.c:1995 enqueue_dl_entity (task blocked for more than 28262 seconds)
@ 2024-12-04 12:47 Ilya Maximets
From: Ilya Maximets @ 2024-12-04 12:47 UTC (permalink / raw)
To: LKML; +Cc: i.maximets, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Vincent Guittot
Hi.  It seems like I'm hitting some bug in the scheduler.

I'm running some tests with Open vSwitch on a v6.12 kernel, and some
5 to 8 hours into the run I start getting 'task blocked' splats, with a
WARNING triggered in the scheduler code right before that:
Dec 3 22:19:55 kernel: WARNING: CPU: 27 PID: 3391271 at kernel/sched/deadline.c:1995 enqueue_dl_entity
I have a lot of processes (kernel threads and userspace threads) stuck
in DN, Ds, D+ and D states.  It feels like IO tasks are queued for
scheduling, but the scheduler never picks them up, or they are not being
scheduled at all for whatever reason, and threads waiting on those tasks
are stuck.
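Such tasks can be filtered out of `ps` output by their state code.  A
minimal sketch, run here on canned sample input that mimics what
`ps -eo state,pid,comm` printed on the affected system (the exact pids
besides khugepaged's are illustrative):

```shell
# Filter tasks in uninterruptible sleep (state D and variants like DN,
# Ds, D+) from "STATE PID COMM" lines; on a live system the sample
# variable would be replaced by `ps -eo state=,pid=,comm=`.
sample='D      330 khugepaged
S        1 systemd
Ds 3479822 ovs-monitor-ips
R     4242 ps'
echo "$sample" | awk '$1 ~ /^D/ { print $2, $3 }'
# prints the two D-state tasks: khugepaged and ovs-monitor-ips
```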
Dec 3 22:22:45 kernel: INFO: task khugepaged:330 blocked for more than 122 seconds.
Dec 3 22:22:45 kernel: INFO: task ovs-monitor-ips:3479822 blocked for more than 122 seconds.
Dec 3 22:22:45 kernel: INFO: task mv:3483072 blocked for more than 122 seconds.
Dec 3 22:24:48 kernel: INFO: task khugepaged:330 blocked for more than 245 seconds.
Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3479822 blocked for more than 245 seconds.
Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3480383 blocked for more than 122 seconds.
Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3481787 blocked for more than 122 seconds.
Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3482631 blocked for more than 122 seconds.
Dec 3 22:24:48 kernel: INFO: task mv:3483072 blocked for more than 245 seconds.
Dec 3 22:26:51 kernel: INFO: task khugepaged:330 blocked for more than 368 seconds.
...
Dec 4 06:11:45 kernel: INFO: task khugepaged:330 blocked for more than 28262 seconds.
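The longest-blocked task can be picked out of hung-task log lines like
the ones above with a short awk one-liner; a sketch, assuming the
standard "INFO: task NAME:PID blocked for more than N seconds." format:

```shell
# Find the task with the largest "blocked for more than N seconds"
# value; sample lines are copied from the log excerpt above.
log='Dec  3 22:22:45 kernel: INFO: task khugepaged:330 blocked for more than 122 seconds.
Dec  3 22:22:45 kernel: INFO: task mv:3483072 blocked for more than 122 seconds.
Dec  4 06:11:45 kernel: INFO: task khugepaged:330 blocked for more than 28262 seconds.'
# $7 is "NAME:PID", $12 is the number of seconds.
echo "$log" | awk '$12 + 0 > max { max = $12; task = $7 } END { print task, max }'
# prints: khugepaged:330 28262
```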
I have two separate instances where this behavior reproduced.  One is
mostly around file systems; the other was more severe, as multiple kernel
threads got stuck in netlink code.  The traces do not have much in common,
except that most of the blocked tasks are inside the scheduler.  The
system is also idle, nothing is really running.  Some of these tasks are
holding resources, making other tasks block on those resources as well.
I seem to be able to reproduce the issue, but it takes 5-8 hours to do so.
Best regards, Ilya Maximets.
Below are logs from two instances.  The first one is from v6.12 plus one
small unrelated patch for network namespaces.  The second one is from
pure v6.12, but it's not decoded, as I lost the vmlinux from that run;
the system was also completely unresponsive when the issue was hit.
=====================
THE FIRST DECODED LOG:
=====================
Dec 3 22:19:55 kernel: WARNING: CPU: 27 PID: 3391271 at kernel/sched/deadline.c:1995 enqueue_dl_entity (kernel/sched/deadline.c:1995 (discriminator 1))
Dec 3 22:19:55 kernel: Modules linked in: vport_vxlan vxlan vport_gre ip_gre ip_tunnel gre vport_geneve geneve ip6_udp_tunnel udp_tunnel openvswitch nf_conncount nf_nat tls esp4 veth nfnetlink_cttimeout nfnetlink nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 intel_rapl_msr intel_rapl_common rfkill intel_uncore_frequency_common skx_edac_common nfit libnvdimm kvm_intel kvm rapl vfat fat iTCO_wdt iTCO_vendor_support virtio_gpu virtio_dma_buf i2c_i801 drm_shmem_helper lpc_ich pcspkr i2c_smbus virtio_balloon drm_kms_helper joydev drm xfs libcrc32c ahci crct10dif_pclmul libahci crc32_pclmul virtio_net crc32c_intel libata ghash_clmulni_intel net_failover virtio_blk virtio_console failover serio_raw sunrpc dm_mirror dm_region_hash dm_log dm_mod fuse [last unloaded: ip6_udp_tunnel]
Dec 3 22:19:55 kernel: CPU: 27 UID: 0 PID: 3391271 Comm: kworker/27:1 Kdump: loaded Not tainted 6.12.0+ #77
Dec 3 22:19:55 kernel: Hardware name: Red Hat KVM/RHEL, BIOS 1.16.1-1.el9 04/01/2014
Dec 3 22:19:55 kernel: Workqueue: 0x0 (mm_percpu_wq)
Dec 3 22:19:55 kernel: RIP: 0010:enqueue_dl_entity (kernel/sched/deadline.c:1995 (discriminator 1))
Dec 3 22:19:55 kernel: Code: d2 0f 89 14 fd ff ff e9 0d fb ff ff 45 85 ed 0f
84 65 fd ff ff 5b 44 89 e6 48 89 ef 5d 41 5c 41
5d 41 5e 41 5f e9 76 c4 ff ff <0f> 0b e9 bd f9 ff
ff 0f 0b e9 1f fb ff ff 8b 83 b0 0a 00 00 48 8b
All code
========
0: d2 0f rorb %cl,(%rdi)
2: 89 14 fd ff ff e9 0d mov %edx,0xde9ffff(,%rdi,8)
9: fb sti
a: ff (bad)
b: ff 45 85 incl -0x7b(%rbp)
e: ed in (%dx),%eax
f: 0f 84 65 fd ff ff je 0xfffffffffffffd7a
15: 5b pop %rbx
16: 44 89 e6 mov %r12d,%esi
19: 48 89 ef mov %rbp,%rdi
1c: 5d pop %rbp
1d: 41 5c pop %r12
1f: 41 5d pop %r13
21: 41 5e pop %r14
23: 41 5f pop %r15
25: e9 76 c4 ff ff jmpq 0xffffffffffffc4a0
2a:* 0f 0b ud2 <-- trapping instruction
2c: e9 bd f9 ff ff jmpq 0xfffffffffffff9ee
31: 0f 0b ud2
33: e9 1f fb ff ff jmpq 0xfffffffffffffb57
38: 8b 83 b0 0a 00 00 mov 0xab0(%rbx),%eax
3e: 48 rex.W
3f: 8b .byte 0x8b
Code starting with the faulting instruction
===========================================
0: 0f 0b ud2
2: e9 bd f9 ff ff jmpq 0xfffffffffffff9c4
7: 0f 0b ud2
9: e9 1f fb ff ff jmpq 0xfffffffffffffb2d
e: 8b 83 b0 0a 00 00 mov 0xab0(%rbx),%eax
14: 48 rex.W
15: 8b .byte 0x8b
Dec 3 22:19:55 kernel: RSP: 0018:ffffacba8d87fbb8 EFLAGS: 00010086
Dec 3 22:19:55 kernel: RAX: 0000000000000001 RBX: ffff8f5e3f7b65e8 RCX: 0000000000000000
Dec 3 22:19:55 kernel: RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffff8f5e3f7b65e8
Dec 3 22:19:55 kernel: RBP: ffff8f5e3f7b65e8 R08: 0000000000000000 R09: 0000000000000000
Dec 3 22:19:55 kernel: R10: ffff8f5e3f7b5d00 R11: ffff8f4f132ed610 R12: 0000000000000001
Dec 3 22:19:55 kernel: R13: 0000000000000001 R14: 00000000002dc6c0 R15: 0000000000000000
Dec 3 22:19:55 kernel: FS: 0000000000000000(0000) GS:ffff8f5e3f780000(0000) knlGS:0000000000000000
Dec 3 22:19:55 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Dec 3 22:19:55 kernel: CR2: 00007faf752d93e0 CR3: 000000011879a001 CR4: 0000000000772ef0
Dec 3 22:19:55 kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Dec 3 22:19:55 kernel: DR3: 0000000000000000 DR6: 00000000ffff4ff0 DR7: 0000000000000400
Dec 3 22:19:55 kernel: PKRU: 55555554
Dec 3 22:19:55 kernel: Call Trace:
Dec 3 22:19:55 kernel: <TASK>
Dec 3 22:19:55 kernel: ? __warn (kernel/panic.c:748)
Dec 3 22:19:55 kernel: ? enqueue_dl_entity (kernel/sched/deadline.c:1995 (discriminator 1))
Dec 3 22:19:55 kernel: ? report_bug (lib/bug.c:201 lib/bug.c:219)
Dec 3 22:19:55 kernel: ? handle_bug (arch/x86/kernel/traps.c:285)
Dec 3 22:19:55 kernel: ? exc_invalid_op (arch/x86/kernel/traps.c:309 (discriminator 1))
Dec 3 22:19:55 kernel: ? asm_exc_invalid_op (./arch/x86/include/asm/idtentry.h:621)
Dec 3 22:19:55 kernel: ? enqueue_dl_entity (kernel/sched/deadline.c:1995 (discriminator 1))
Dec 3 22:19:55 kernel: dl_server_start (kernel/sched/deadline.c:1651)
Dec 3 22:19:55 kernel: enqueue_task_fair (kernel/sched/sched.h:2745 kernel/sched/fair.c:7048)
Dec 3 22:19:55 kernel: enqueue_task (kernel/sched/core.c:2020)
Dec 3 22:19:55 kernel: activate_task (kernel/sched/core.c:2069)
Dec 3 22:19:55 kernel: sched_balance_rq (kernel/sched/fair.c:9642 kernel/sched/fair.c:9676 kernel/sched/fair.c:11753)
Dec 3 22:19:55 kernel: sched_balance_newidle (kernel/sched/fair.c:12799)
Dec 3 22:19:55 kernel: pick_next_task_fair (kernel/sched/fair.c:8950)
Dec 3 22:19:55 kernel: __pick_next_task (kernel/sched/core.c:5972)
Dec 3 22:19:55 kernel: __schedule (kernel/sched/core.c:6647)
Dec 3 22:19:55 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
Dec 3 22:19:55 kernel: worker_thread (kernel/workqueue.c:3344)
Dec 3 22:19:55 kernel: ? __pfx_worker_thread (kernel/workqueue.c:3337)
Dec 3 22:19:55 kernel: kthread (kernel/kthread.c:389)
Dec 3 22:19:55 kernel: ? __pfx_kthread (kernel/kthread.c:342)
Dec 3 22:19:55 kernel: ret_from_fork (arch/x86/kernel/process.c:147)
Dec 3 22:19:55 kernel: ? __pfx_kthread (kernel/kthread.c:342)
Dec 3 22:19:55 kernel: ret_from_fork_asm (arch/x86/entry/entry_64.S:257)
Dec 3 22:19:55 kernel: </TASK>
Dec 3 22:19:55 kernel: ---[ end trace 0000000000000000 ]---
Dec 3 22:19:55 kernel: ovs-p-13: entered promiscuous mode
Dec 3 22:22:45 kernel: INFO: task khugepaged:330 blocked for more than 122 seconds.
Dec 3 22:22:45 kernel: Tainted: G W 6.12.0+ #77
Dec 3 22:22:45 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 3 22:22:45 kernel: task:khugepaged state:D stack:0 pid:330 tgid:330 ppid:2 flags:0x00004000
Dec 3 22:22:45 kernel: Call Trace:
Dec 3 22:22:45 kernel: <TASK>
Dec 3 22:22:45 kernel: __schedule (kernel/sched/core.c:5328 kernel/sched/core.c:6693)
Dec 3 22:22:45 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
Dec 3 22:22:45 kernel: schedule_timeout (kernel/time/timer.c:2592)
Dec 3 22:22:45 kernel: ? kvm_sched_clock_read (arch/x86/kernel/kvmclock.c:91)
Dec 3 22:22:45 kernel: ? sched_clock (./arch/x86/include/asm/preempt.h:94 arch/x86/kernel/tsc.c:285)
Dec 3 22:22:45 kernel: ? sched_clock_cpu (kernel/sched/clock.c:394)
Dec 3 22:22:45 kernel: ? __smp_call_single_queue (kernel/smp.c:115 kernel/smp.c:411)
Dec 3 22:22:45 kernel: __wait_for_common (kernel/sched/completion.c:95 kernel/sched/completion.c:116)
Dec 3 22:22:45 kernel: ? __pfx_schedule_timeout (kernel/time/timer.c:2577)
Dec 3 22:22:45 kernel: __flush_work (kernel/workqueue.c:4222)
Dec 3 22:22:45 kernel: ? __pfx_wq_barrier_func (kernel/workqueue.c:3718)
Dec 3 22:22:45 kernel: __lru_add_drain_all (mm/swap.c:873 (discriminator 3))
Dec 3 22:22:45 kernel: khugepaged (mm/khugepaged.c:2499 mm/khugepaged.c:2571)
Dec 3 22:22:45 kernel: ? __pfx_khugepaged (mm/khugepaged.c:2564)
Dec 3 22:22:45 kernel: kthread (kernel/kthread.c:389)
Dec 3 22:22:45 kernel: ? __pfx_kthread (kernel/kthread.c:342)
Dec 3 22:22:45 kernel: ret_from_fork (arch/x86/kernel/process.c:147)
Dec 3 22:22:45 kernel: ? __pfx_kthread (kernel/kthread.c:342)
Dec 3 22:22:45 kernel: ret_from_fork_asm (arch/x86/entry/entry_64.S:257)
Dec 3 22:22:45 kernel: </TASK>
Dec 3 22:22:45 kernel: INFO: task ovs-monitor-ips:3479822 blocked for more than 122 seconds.
Dec 3 22:22:45 kernel: Tainted: G W 6.12.0+ #77
Dec 3 22:22:45 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 3 22:22:45 kernel: task:ovs-monitor-ips state:D stack:0 pid:3479822 tgid:3479822 ppid:1 flags:0x00000002
Dec 3 22:22:45 kernel: Call Trace:
Dec 3 22:22:45 kernel: <TASK>
Dec 3 22:22:45 kernel: __schedule (kernel/sched/core.c:5328 kernel/sched/core.c:6693)
Dec 3 22:22:45 kernel: ? _raw_spin_unlock_irqrestore (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:150 kernel/locking/spinlock.c:194)
Dec 3 22:22:45 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
Dec 3 22:22:45 kernel: io_schedule (kernel/sched/core.c:7552 kernel/sched/core.c:7578)
Dec 3 22:22:45 kernel: folio_wait_bit_common (mm/filemap.c:1301)
Dec 3 22:22:45 kernel: ? xas_load (./include/linux/xarray.h:175 ./include/linux/xarray.h:1264 lib/xarray.c:240)
Dec 3 22:22:45 kernel: ? __pfx_wake_page_function (mm/filemap.c:1117)
Dec 3 22:22:45 kernel: folio_wait_writeback (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:555 mm/page-writeback.c:3187)
Dec 3 22:22:45 kernel: truncate_inode_partial_folio (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:822 ./include/linux/page-flags.h:843 ./include/linux/mm.h:1115 ./include/linux/mm.h:2137 mm/truncate.c:209)
Dec 3 22:22:45 kernel: truncate_inode_pages_range (mm/truncate.c:354)
Dec 3 22:22:45 kernel: ? _raw_spin_unlock_irqrestore (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:150 kernel/locking/spinlock.c:194)
Dec 3 22:22:45 kernel: ? poll_freewait (fs/select.c:140 (discriminator 3))
Dec 3 22:22:45 kernel: ? do_select (fs/select.c:612)
Dec 3 22:22:45 kernel: ? __pfx_pollwake (fs/select.c:209)
Dec 3 22:22:45 kernel: ? __pfx_pollwake (fs/select.c:209)
Dec 3 22:22:45 kernel: ? down_read (./arch/x86/include/asm/preempt.h:79 kernel/locking/rwsem.c:1246 kernel/locking/rwsem.c:1261 kernel/locking/rwsem.c:1526)
Dec 3 22:22:45 kernel: ? unmap_mapping_range (mm/memory.c:3873)
Dec 3 22:22:45 kernel: truncate_pagecache (mm/truncate.c:728)
Dec 3 22:22:45 kernel: xfs_setattr_size+0x139/0x410 xfs
Dec 3 22:22:45 kernel: xfs_vn_setattr+0x78/0x140 xfs
Dec 3 22:22:45 kernel: notify_change (fs/attr.c:503)
Dec 3 22:22:45 kernel: ? do_truncate (./include/linux/fs.h:820 fs/open.c:66)
Dec 3 22:22:45 kernel: do_truncate (./include/linux/fs.h:820 fs/open.c:66)
Dec 3 22:22:45 kernel: do_open (fs/namei.c:3395 fs/namei.c:3778)
Dec 3 22:22:45 kernel: path_openat (fs/namei.c:3933)
Dec 3 22:22:45 kernel: ? memcg_list_lru_alloc (mm/list_lru.c:475 mm/list_lru.c:489)
Dec 3 22:22:45 kernel: do_filp_open (fs/namei.c:3960)
Dec 3 22:22:45 kernel: ? _raw_spin_unlock_irqrestore (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:150 kernel/locking/spinlock.c:194)
Dec 3 22:22:45 kernel: ? __skb_try_recv_datagram (net/core/datagram.c:267)
Dec 3 22:22:45 kernel: ? kmem_cache_alloc_noprof (mm/slub.c:4115 mm/slub.c:4141)
Dec 3 22:22:45 kernel: do_sys_openat2 (fs/open.c:1415)
Dec 3 22:22:45 kernel: __x64_sys_openat (fs/open.c:1441)
Dec 3 22:22:45 kernel: do_syscall_64 (arch/x86/entry/common.c:52 arch/x86/entry/common.c:83)
Dec 3 22:22:45 kernel: ? unix_stream_recvmsg (net/unix/af_unix.c:2997)
Dec 3 22:22:45 kernel: ? __pfx_unix_stream_read_actor (net/unix/af_unix.c:2957)
Dec 3 22:22:45 kernel: ? sock_recvmsg (net/socket.c:1051 net/socket.c:1073)
Dec 3 22:22:45 kernel: ? __sys_recvfrom (net/socket.c:2265)
Dec 3 22:22:45 kernel: ? __pte_offset_map (./include/linux/pgtable.h:324 ./include/linux/pgtable.h:594 mm/pgtable-generic.c:289)
Dec 3 22:22:45 kernel: ? _raw_spin_unlock (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:142 kernel/locking/spinlock.c:186)
Dec 3 22:22:45 kernel: ? do_wp_page (./include/linux/vmstat.h:75 mm/memory.c:3263 mm/memory.c:3731)
Dec 3 22:22:45 kernel: ? __handle_mm_fault (mm/memory.c:5909)
Dec 3 22:22:45 kernel: ? syscall_exit_work (./include/linux/audit.h:357 kernel/entry/common.c:166)
Dec 3 22:22:45 kernel: ? __count_memcg_events (mm/memcontrol.c:573 mm/memcontrol.c:836)
Dec 3 22:22:45 kernel: ? handle_mm_fault (mm/memory.c:5951 mm/memory.c:6103)
Dec 3 22:22:45 kernel: ? do_user_addr_fault (./include/linux/mm.h:730 arch/x86/mm/fault.c:1340)
Dec 3 22:22:45 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 3 22:22:45 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 3 22:22:45 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 3 22:22:45 kernel: entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
Dec 3 22:22:45 kernel: RIP: 0033:0x7f8537cfd70b
Dec 3 22:22:45 kernel: RSP: 002b:00007fff841fec70 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
Dec 3 22:22:45 kernel: RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f8537cfd70b
Dec 3 22:22:45 kernel: RDX: 0000000000080241 RSI: 00007f853707d290 RDI: 00000000ffffff9c
Dec 3 22:22:45 kernel: RBP: 00007f853707d290 R08: 0000000000000000 R09: 0000000000000000
Dec 3 22:22:45 kernel: R10: 00000000000001b6 R11: 0000000000000246 R12: 0000000000080241
Dec 3 22:22:45 kernel: R13: 00007f85371b4ea0 R14: 0000000000080241 R15: 0000000000000000
Dec 3 22:22:45 kernel: </TASK>
Dec 3 22:22:45 kernel: INFO: task mv:3483072 blocked for more than 122 seconds.
Dec 3 22:22:45 kernel: Tainted: G W 6.12.0+ #77
Dec 3 22:22:45 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 3 22:22:45 kernel: task:mv state:D stack:0 pid:3483072 tgid:3483072 ppid:3479428 flags:0x00000002
Dec 3 22:22:45 kernel: Call Trace:
Dec 3 22:22:45 kernel: <TASK>
Dec 3 22:22:45 kernel: __schedule (kernel/sched/core.c:5328 kernel/sched/core.c:6693)
Dec 3 22:22:45 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
Dec 3 22:22:45 kernel: io_schedule (kernel/sched/core.c:7552 kernel/sched/core.c:7578)
Dec 3 22:22:45 kernel: folio_wait_bit_common (mm/filemap.c:1301)
Dec 3 22:22:45 kernel: ? xas_load (./include/linux/xarray.h:175 ./include/linux/xarray.h:1264 lib/xarray.c:240)
Dec 3 22:22:45 kernel: ? __pfx_wake_page_function (mm/filemap.c:1117)
Dec 3 22:22:45 kernel: folio_wait_writeback (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:555 mm/page-writeback.c:3187)
Dec 3 22:22:45 kernel: truncate_inode_partial_folio (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:822 ./include/linux/page-flags.h:843 ./include/linux/mm.h:1115 ./include/linux/mm.h:2137 mm/truncate.c:209)
Dec 3 22:22:45 kernel: truncate_inode_pages_range (mm/truncate.c:354)
Dec 3 22:22:45 kernel: ? xfs_iunlock+0x108/0x200 xfs
Dec 3 22:22:45 kernel: ? xfs_rename+0x368/0x990 xfs
Dec 3 22:22:45 kernel: ? fsnotify_move (./include/linux/fsnotify.h:72 ./include/linux/fsnotify.h:64 ./include/linux/fsnotify.h:238)
Dec 3 22:22:45 kernel: ? _raw_spin_unlock (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:142 kernel/locking/spinlock.c:186)
Dec 3 22:22:45 kernel: ? locked_inode_to_wb_and_lock_list (fs/fs-writeback.c:355)
Dec 3 22:22:45 kernel: evict (fs/inode.c:728)
Dec 3 22:22:45 kernel: ? fsnotify_destroy_marks (fs/notify/mark.c:923)
Dec 3 22:22:45 kernel: ? _atomic_dec_and_lock (./arch/x86/include/asm/atomic.h:67 ./include/linux/atomic/atomic-arch-fallback.h:2278 ./include/linux/atomic/atomic-instrumented.h:1384 lib/dec_and_lock.c:29)
Dec 3 22:22:45 kernel: __dentry_kill (fs/dcache.c:618)
Dec 3 22:22:45 kernel: dput (fs/dcache.c:857 fs/dcache.c:845)
Dec 3 22:22:45 kernel: do_renameat2 (fs/namei.c:5174)
Dec 3 22:22:45 kernel: __x64_sys_rename (fs/namei.c:5215)
Dec 3 22:22:45 kernel: do_syscall_64 (arch/x86/entry/common.c:52 arch/x86/entry/common.c:83)
Dec 3 22:22:45 kernel: ? do_user_addr_fault (./include/linux/mm.h:730 arch/x86/mm/fault.c:1340)
Dec 3 22:22:45 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 3 22:22:45 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 3 22:22:45 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 3 22:22:45 kernel: entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
Dec 3 22:22:45 kernel: RIP: 0033:0x7f3185a5aadb
Dec 3 22:22:45 kernel: RSP: 002b:00007ffe258c4548 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
Dec 3 22:22:45 kernel: RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f3185a5aadb
Dec 3 22:22:45 kernel: RDX: 0000000000000025 RSI: 00007ffe258c5cc3 RDI: 00007ffe258c5cb7
Dec 3 22:22:45 kernel: RBP: 00007ffe258c48f0 R08: 00007ffe258c4670 R09: 00007ffe258c4ac0
Dec 3 22:22:45 kernel: R10: 0000000000000100 R11: 0000000000000246 R12: 0000000000000011
Dec 3 22:22:45 kernel: R13: 0000000000000000 R14: 00007ffe258c5cc3 R15: 00007ffe258c4ac0
Dec 3 22:22:45 kernel: </TASK>
Dec 3 22:24:48 kernel: INFO: task khugepaged:330 blocked for more than 245 seconds.
Dec 3 22:24:48 kernel: Tainted: G W 6.12.0+ #77
Dec 3 22:24:48 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 3 22:24:48 kernel: task:khugepaged state:D stack:0 pid:330 tgid:330 ppid:2 flags:0x00004000
Dec 3 22:24:48 kernel: Call Trace:
Dec 3 22:24:48 kernel: <TASK>
Dec 3 22:24:48 kernel: __schedule (kernel/sched/core.c:5328 kernel/sched/core.c:6693)
Dec 3 22:24:48 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
Dec 3 22:24:48 kernel: schedule_timeout (kernel/time/timer.c:2592)
Dec 3 22:24:48 kernel: ? kvm_sched_clock_read (arch/x86/kernel/kvmclock.c:91)
Dec 3 22:24:48 kernel: ? sched_clock (./arch/x86/include/asm/preempt.h:94 arch/x86/kernel/tsc.c:285)
Dec 3 22:24:48 kernel: ? sched_clock_cpu (kernel/sched/clock.c:394)
Dec 3 22:24:48 kernel: ? __smp_call_single_queue (kernel/smp.c:115 kernel/smp.c:411)
Dec 3 22:24:48 kernel: __wait_for_common (kernel/sched/completion.c:95 kernel/sched/completion.c:116)
Dec 3 22:24:48 kernel: ? __pfx_schedule_timeout (kernel/time/timer.c:2577)
Dec 3 22:24:48 kernel: __flush_work (kernel/workqueue.c:4222)
Dec 3 22:24:48 kernel: ? __pfx_wq_barrier_func (kernel/workqueue.c:3718)
Dec 3 22:24:48 kernel: __lru_add_drain_all (mm/swap.c:873 (discriminator 3))
Dec 3 22:24:48 kernel: khugepaged (mm/khugepaged.c:2499 mm/khugepaged.c:2571)
Dec 3 22:24:48 kernel: ? __pfx_khugepaged (mm/khugepaged.c:2564)
Dec 3 22:24:48 kernel: kthread (kernel/kthread.c:389)
Dec 3 22:24:48 kernel: ? __pfx_kthread (kernel/kthread.c:342)
Dec 3 22:24:48 kernel: ret_from_fork (arch/x86/kernel/process.c:147)
Dec 3 22:24:48 kernel: ? __pfx_kthread (kernel/kthread.c:342)
Dec 3 22:24:48 kernel: ret_from_fork_asm (arch/x86/entry/entry_64.S:257)
Dec 3 22:24:48 kernel: </TASK>
Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3479822 blocked for more than 245 seconds.
Dec 3 22:24:48 kernel: Tainted: G W 6.12.0+ #77
Dec 3 22:24:48 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 3 22:24:48 kernel: task:ovs-monitor-ips state:D stack:0 pid:3479822 tgid:3479822 ppid:1 flags:0x00000002
Dec 3 22:24:48 kernel: Call Trace:
Dec 3 22:24:48 kernel: <TASK>
Dec 3 22:24:48 kernel: __schedule (kernel/sched/core.c:5328 kernel/sched/core.c:6693)
Dec 3 22:24:48 kernel: ? _raw_spin_unlock_irqrestore (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:150 kernel/locking/spinlock.c:194)
Dec 3 22:24:48 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
Dec 3 22:24:48 kernel: io_schedule (kernel/sched/core.c:7552 kernel/sched/core.c:7578)
Dec 3 22:24:48 kernel: folio_wait_bit_common (mm/filemap.c:1301)
Dec 3 22:24:48 kernel: ? xas_load (./include/linux/xarray.h:175 ./include/linux/xarray.h:1264 lib/xarray.c:240)
Dec 3 22:24:48 kernel: ? __pfx_wake_page_function (mm/filemap.c:1117)
Dec 3 22:24:48 kernel: folio_wait_writeback (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:555 mm/page-writeback.c:3187)
Dec 3 22:24:48 kernel: truncate_inode_partial_folio (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:822 ./include/linux/page-flags.h:843 ./include/linux/mm.h:1115 ./include/linux/mm.h:2137 mm/truncate.c:209)
Dec 3 22:24:48 kernel: truncate_inode_pages_range (mm/truncate.c:354)
Dec 3 22:24:48 kernel: ? _raw_spin_unlock_irqrestore (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:150 kernel/locking/spinlock.c:194)
Dec 3 22:24:48 kernel: ? poll_freewait (fs/select.c:140 (discriminator 3))
Dec 3 22:24:48 kernel: ? do_select (fs/select.c:612)
Dec 3 22:24:48 kernel: ? __pfx_pollwake (fs/select.c:209)
Dec 3 22:24:48 kernel: ? __pfx_pollwake (fs/select.c:209)
Dec 3 22:24:48 kernel: ? down_read (./arch/x86/include/asm/preempt.h:79 kernel/locking/rwsem.c:1246 kernel/locking/rwsem.c:1261 kernel/locking/rwsem.c:1526)
Dec 3 22:24:48 kernel: ? unmap_mapping_range (mm/memory.c:3873)
Dec 3 22:24:48 kernel: truncate_pagecache (mm/truncate.c:728)
Dec 3 22:24:48 kernel: xfs_setattr_size+0x139/0x410 xfs
Dec 3 22:24:48 kernel: xfs_vn_setattr+0x78/0x140 xfs
Dec 3 22:24:48 kernel: notify_change (fs/attr.c:503)
Dec 3 22:24:48 kernel: ? do_truncate (./include/linux/fs.h:820 fs/open.c:66)
Dec 3 22:24:48 kernel: do_truncate (./include/linux/fs.h:820 fs/open.c:66)
Dec 3 22:24:48 kernel: do_open (fs/namei.c:3395 fs/namei.c:3778)
Dec 3 22:24:48 kernel: path_openat (fs/namei.c:3933)
Dec 3 22:24:48 kernel: ? memcg_list_lru_alloc (mm/list_lru.c:475 mm/list_lru.c:489)
Dec 3 22:24:48 kernel: do_filp_open (fs/namei.c:3960)
Dec 3 22:24:48 kernel: ? _raw_spin_unlock_irqrestore (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:150 kernel/locking/spinlock.c:194)
Dec 3 22:24:48 kernel: ? __skb_try_recv_datagram (net/core/datagram.c:267)
Dec 3 22:24:48 kernel: ? kmem_cache_alloc_noprof (mm/slub.c:4115 mm/slub.c:4141)
Dec 3 22:24:48 kernel: do_sys_openat2 (fs/open.c:1415)
Dec 3 22:24:48 kernel: __x64_sys_openat (fs/open.c:1441)
Dec 3 22:24:48 kernel: do_syscall_64 (arch/x86/entry/common.c:52 arch/x86/entry/common.c:83)
Dec 3 22:24:48 kernel: ? unix_stream_recvmsg (net/unix/af_unix.c:2997)
Dec 3 22:24:48 kernel: ? __pfx_unix_stream_read_actor (net/unix/af_unix.c:2957)
Dec 3 22:24:48 kernel: ? sock_recvmsg (net/socket.c:1051 net/socket.c:1073)
Dec 3 22:24:48 kernel: ? __sys_recvfrom (net/socket.c:2265)
Dec 3 22:24:48 kernel: ? __pte_offset_map (./include/linux/pgtable.h:324 ./include/linux/pgtable.h:594 mm/pgtable-generic.c:289)
Dec 3 22:24:48 kernel: ? _raw_spin_unlock (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:142 kernel/locking/spinlock.c:186)
Dec 3 22:24:48 kernel: ? do_wp_page (./include/linux/vmstat.h:75 mm/memory.c:3263 mm/memory.c:3731)
Dec 3 22:24:48 kernel: ? __handle_mm_fault (mm/memory.c:5909)
Dec 3 22:24:48 kernel: ? syscall_exit_work (./include/linux/audit.h:357 kernel/entry/common.c:166)
Dec 3 22:24:48 kernel: ? __count_memcg_events (mm/memcontrol.c:573 mm/memcontrol.c:836)
Dec 3 22:24:48 kernel: ? handle_mm_fault (mm/memory.c:5951 mm/memory.c:6103)
Dec 3 22:24:48 kernel: ? do_user_addr_fault (./include/linux/mm.h:730 arch/x86/mm/fault.c:1340)
Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 3 22:24:48 kernel: entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
Dec 3 22:24:48 kernel: RIP: 0033:0x7f8537cfd70b
Dec 3 22:24:48 kernel: RSP: 002b:00007fff841fec70 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
Dec 3 22:24:48 kernel: RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f8537cfd70b
Dec 3 22:24:48 kernel: RDX: 0000000000080241 RSI: 00007f853707d290 RDI: 00000000ffffff9c
Dec 3 22:24:48 kernel: RBP: 00007f853707d290 R08: 0000000000000000 R09: 0000000000000000
Dec 3 22:24:48 kernel: R10: 00000000000001b6 R11: 0000000000000246 R12: 0000000000080241
Dec 3 22:24:48 kernel: R13: 00007f85371b4ea0 R14: 0000000000080241 R15: 0000000000000000
Dec 3 22:24:48 kernel: </TASK>
Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3480383 blocked for more than 122 seconds.
Dec 3 22:24:48 kernel: Tainted: G W 6.12.0+ #77
Dec 3 22:24:48 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 3 22:24:48 kernel: task:ovs-monitor-ips state:D stack:0 pid:3480383 tgid:3480383 ppid:1 flags:0x00000002
Dec 3 22:24:48 kernel: Call Trace:
Dec 3 22:24:48 kernel: <TASK>
Dec 3 22:24:48 kernel: __schedule (kernel/sched/core.c:5328 kernel/sched/core.c:6693)
Dec 3 22:24:48 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
Dec 3 22:24:48 kernel: io_schedule (kernel/sched/core.c:7552 kernel/sched/core.c:7578)
Dec 3 22:24:48 kernel: folio_wait_bit_common (mm/filemap.c:1301)
Dec 3 22:24:48 kernel: ? xas_load (./include/linux/xarray.h:175 ./include/linux/xarray.h:1264 lib/xarray.c:240)
Dec 3 22:24:48 kernel: ? __pfx_wake_page_function (mm/filemap.c:1117)
Dec 3 22:24:48 kernel: folio_wait_writeback (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:555 mm/page-writeback.c:3187)
Dec 3 22:24:48 kernel: truncate_inode_partial_folio (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:822 ./include/linux/page-flags.h:843 ./include/linux/mm.h:1115 ./include/linux/mm.h:2137 mm/truncate.c:209)
Dec 3 22:24:48 kernel: truncate_inode_pages_range (mm/truncate.c:354)
Dec 3 22:24:48 kernel: ? __pfx_pollwake (fs/select.c:209)
Dec 3 22:24:48 kernel: ? arch_stack_walk (arch/x86/kernel/stacktrace.c:24)
Dec 3 22:24:48 kernel: ? __is_insn_slot_addr (kernel/kprobes.c:299)
Dec 3 22:24:48 kernel: ? is_bpf_text_address (kernel/bpf/core.c:768)
Dec 3 22:24:48 kernel: ? kernel_text_address (kernel/extable.c:97 kernel/extable.c:94)
Dec 3 22:24:48 kernel: ? __kernel_text_address (kernel/extable.c:79)
Dec 3 22:24:48 kernel: ? unwind_get_return_address (arch/x86/kernel/unwind_orc.c:369 arch/x86/kernel/unwind_orc.c:364)
Dec 3 22:24:48 kernel: ? __pfx_stack_trace_consume_entry (kernel/stacktrace.c:83)
Dec 3 22:24:48 kernel: ? arch_stack_walk (arch/x86/kernel/stacktrace.c:26)
Dec 3 22:24:48 kernel: ? kvm_sched_clock_read (arch/x86/kernel/kvmclock.c:91)
Dec 3 22:24:48 kernel: ? local_clock_noinstr (kernel/sched/clock.c:301)
Dec 3 22:24:48 kernel: ? local_clock (./arch/x86/include/asm/preempt.h:94 kernel/sched/clock.c:316)
Dec 3 22:24:48 kernel: ? metadata_update_state (mm/kfence/core.c:313)
Dec 3 22:24:48 kernel: ? inode_init_once (fs/inode.c:405 fs/inode.c:431)
Dec 3 22:24:48 kernel: ? kfence_guarded_alloc (mm/kfence/core.c:502)
Dec 3 22:24:48 kernel: ? sock_alloc_inode (net/socket.c:307)
Dec 3 22:24:48 kernel: ? __kfence_alloc (mm/kfence/core.c:1136)
Dec 3 22:24:48 kernel: ? __kfence_alloc (mm/kfence/core.c:209 mm/kfence/core.c:1130)
Dec 3 22:24:48 kernel: ? kmem_cache_alloc_lru_noprof (mm/slub.c:4119 mm/slub.c:4153)
Dec 3 22:24:48 kernel: ? sock_alloc_inode (net/socket.c:307)
Dec 3 22:24:48 kernel: ? down_read (./arch/x86/include/asm/preempt.h:79 kernel/locking/rwsem.c:1246 kernel/locking/rwsem.c:1261 kernel/locking/rwsem.c:1526)
Dec 3 22:24:48 kernel: ? unmap_mapping_range (mm/memory.c:3873)
Dec 3 22:24:48 kernel: truncate_pagecache (mm/truncate.c:728)
Dec 3 22:24:48 kernel: xfs_setattr_size+0x139/0x410 xfs
Dec 3 22:24:48 kernel: xfs_vn_setattr+0x78/0x140 xfs
Dec 3 22:24:48 kernel: notify_change (fs/attr.c:503)
Dec 3 22:24:48 kernel: ? do_truncate (./include/linux/fs.h:820 fs/open.c:66)
Dec 3 22:24:48 kernel: do_truncate (./include/linux/fs.h:820 fs/open.c:66)
Dec 3 22:24:48 kernel: do_open (fs/namei.c:3395 fs/namei.c:3778)
Dec 3 22:24:48 kernel: path_openat (fs/namei.c:3933)
Dec 3 22:24:48 kernel: do_filp_open (fs/namei.c:3960)
Dec 3 22:24:48 kernel: ? __pfx_unix_stream_read_actor (net/unix/af_unix.c:2957)
Dec 3 22:24:48 kernel: ? sock_recvmsg (net/socket.c:1051 net/socket.c:1073)
Dec 3 22:24:48 kernel: ? kmem_cache_alloc_noprof (mm/slub.c:4115 mm/slub.c:4141)
Dec 3 22:24:48 kernel: do_sys_openat2 (fs/open.c:1415)
Dec 3 22:24:48 kernel: __x64_sys_openat (fs/open.c:1441)
Dec 3 22:24:48 kernel: do_syscall_64 (arch/x86/entry/common.c:52 arch/x86/entry/common.c:83)
Dec 3 22:24:48 kernel: ? syscall_exit_work (./include/linux/audit.h:357 kernel/entry/common.c:166)
Dec 3 22:24:48 kernel: ? syscall_exit_to_user_mode (./arch/x86/include/asm/irqflags.h:37 ./arch/x86/include/asm/irqflags.h:92 ./include/linux/entry-common.h:231 kernel/entry/common.c:206 kernel/entry/common.c:218)
Dec 3 22:24:48 kernel: ? do_syscall_64 (arch/x86/entry/common.c:102)
Dec 3 22:24:48 kernel: ? rseq_ip_fixup (kernel/rseq.c:257 kernel/rseq.c:291)
Dec 3 22:24:48 kernel: ? ktime_get_ts64 (kernel/time/timekeeping.c:195 (discriminator 3) kernel/time/timekeeping.c:395 (discriminator 3) kernel/time/timekeeping.c:403 (discriminator 3) kernel/time/timekeeping.c:983 (discriminator 3))
Dec 3 22:24:48 kernel: ? switch_fpu_return (arch/x86/kernel/fpu/context.h:49 arch/x86/kernel/fpu/context.h:76 arch/x86/kernel/fpu/core.c:787)
Dec 3 22:24:48 kernel: ? syscall_exit_to_user_mode (./arch/x86/include/asm/entry-common.h:58 ./arch/x86/include/asm/entry-common.h:65 ./include/linux/entry-common.h:330 kernel/entry/common.c:207 kernel/entry/common.c:218)
Dec 3 22:24:48 kernel: ? __pte_offset_map (./include/linux/pgtable.h:324 ./include/linux/pgtable.h:594 mm/pgtable-generic.c:289)
Dec 3 22:24:48 kernel: ? _raw_spin_unlock (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:142 kernel/locking/spinlock.c:186)
Dec 3 22:24:48 kernel: ? do_wp_page (./include/linux/vmstat.h:75 mm/memory.c:3263 mm/memory.c:3731)
Dec 3 22:24:48 kernel: ? __handle_mm_fault (mm/memory.c:5909)
Dec 3 22:24:48 kernel: ? __count_memcg_events (mm/memcontrol.c:573 mm/memcontrol.c:836)
Dec 3 22:24:48 kernel: ? handle_mm_fault (mm/memory.c:5951 mm/memory.c:6103)
Dec 3 22:24:48 kernel: ? do_user_addr_fault (./include/linux/mm.h:730 arch/x86/mm/fault.c:1340)
Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 3 22:24:48 kernel: entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
Dec 3 22:24:48 kernel: RIP: 0033:0x7f85c7afd70b
Dec 3 22:24:48 kernel: RSP: 002b:00007ffe53f6c870 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
Dec 3 22:24:48 kernel: RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f85c7afd70b
Dec 3 22:24:48 kernel: RDX: 0000000000080241 RSI: 00007f85c6d74290 RDI: 00000000ffffff9c
Dec 3 22:24:48 kernel: RBP: 00007f85c6d74290 R08: 0000000000000000 R09: 0000000000000000
Dec 3 22:24:48 kernel: R10: 00000000000001b6 R11: 0000000000000246 R12: 0000000000080241
Dec 3 22:24:48 kernel: R13: 00007f85c6e01ea0 R14: 0000000000080241 R15: 0000000000000000
Dec 3 22:24:48 kernel: </TASK>
Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3481787 blocked for more than 122 seconds.
Dec 3 22:24:48 kernel: Tainted: G W 6.12.0+ #77
Dec 3 22:24:48 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 3 22:24:48 kernel: task:ovs-monitor-ips state:D stack:0 pid:3481787 tgid:3481787 ppid:1 flags:0x00000002
Dec 3 22:24:48 kernel: Call Trace:
Dec 3 22:24:48 kernel: <TASK>
Dec 3 22:24:48 kernel: __schedule (kernel/sched/core.c:5328 kernel/sched/core.c:6693)
Dec 3 22:24:48 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
Dec 3 22:24:48 kernel: io_schedule (kernel/sched/core.c:7552 kernel/sched/core.c:7578)
Dec 3 22:24:48 kernel: folio_wait_bit_common (mm/filemap.c:1301)
Dec 3 22:24:48 kernel: ? xas_load (./include/linux/xarray.h:175 ./include/linux/xarray.h:1264 lib/xarray.c:240)
Dec 3 22:24:48 kernel: ? __pfx_wake_page_function (mm/filemap.c:1117)
Dec 3 22:24:48 kernel: folio_wait_writeback (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:555 mm/page-writeback.c:3187)
Dec 3 22:24:48 kernel: truncate_inode_partial_folio (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:822 ./include/linux/page-flags.h:843 ./include/linux/mm.h:1115 ./include/linux/mm.h:2137 mm/truncate.c:209)
Dec 3 22:24:48 kernel: truncate_inode_pages_range (mm/truncate.c:354)
Dec 3 22:24:48 kernel: ? sock_alloc_inode (net/socket.c:307)
Dec 3 22:24:48 kernel: ? __kfence_alloc (mm/kfence/core.c:1136)
Dec 3 22:24:48 kernel: ? __kfence_alloc (mm/kfence/core.c:209 mm/kfence/core.c:1130)
Dec 3 22:24:48 kernel: ? kmem_cache_alloc_lru_noprof (mm/slub.c:4119 mm/slub.c:4153)
Dec 3 22:24:48 kernel: ? sock_alloc_inode (net/socket.c:307)
Dec 3 22:24:48 kernel: ? alloc_inode (fs/inode.c:265)
Dec 3 22:24:48 kernel: ? sock_alloc (net/socket.c:634)
Dec 3 22:24:48 kernel: ? do_accept (net/socket.c:1929)
Dec 3 22:24:48 kernel: ? __sys_accept4 (net/socket.c:1992 net/socket.c:2022)
Dec 3 22:24:48 kernel: ? __x64_sys_accept4 (net/socket.c:2033 net/socket.c:2030 net/socket.c:2030)
Dec 3 22:24:48 kernel: ? do_syscall_64 (arch/x86/entry/common.c:52 arch/x86/entry/common.c:83)
Dec 3 22:24:48 kernel: ? entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
Dec 3 22:24:48 kernel: ? __pfx_pollwake (fs/select.c:209)
Dec 3 22:24:48 kernel: ? xas_load (./include/linux/xarray.h:175 ./include/linux/xarray.h:1264 lib/xarray.c:240)
Dec 3 22:24:48 kernel: ? xfrm_state_mtu (net/xfrm/xfrm_state.c:2842 net/xfrm/xfrm_state.c:2824)
Dec 3 22:24:48 kernel: ? memcg_list_lru_alloc (mm/list_lru.c:475 mm/list_lru.c:489)
Dec 3 22:24:48 kernel: ? down_read (./arch/x86/include/asm/preempt.h:79 kernel/locking/rwsem.c:1246 kernel/locking/rwsem.c:1261 kernel/locking/rwsem.c:1526)
Dec 3 22:24:48 kernel: ? unmap_mapping_range (mm/memory.c:3873)
Dec 3 22:24:48 kernel: truncate_pagecache (mm/truncate.c:728)
Dec 3 22:24:48 kernel: xfs_setattr_size+0x139/0x410 xfs
Dec 3 22:24:48 kernel: xfs_vn_setattr+0x78/0x140 xfs
Dec 3 22:24:48 kernel: notify_change (fs/attr.c:503)
Dec 3 22:24:48 kernel: ? do_truncate (./include/linux/fs.h:820 fs/open.c:66)
Dec 3 22:24:48 kernel: do_truncate (./include/linux/fs.h:820 fs/open.c:66)
Dec 3 22:24:48 kernel: do_open (fs/namei.c:3395 fs/namei.c:3778)
Dec 3 22:24:48 kernel: path_openat (fs/namei.c:3933)
Dec 3 22:24:48 kernel: do_filp_open (fs/namei.c:3960)
Dec 3 22:24:48 kernel: ? do_wp_page (./include/linux/vmstat.h:75 mm/memory.c:3263 mm/memory.c:3731)
Dec 3 22:24:48 kernel: ? __handle_mm_fault (mm/memory.c:5909)
Dec 3 22:24:48 kernel: ? kmem_cache_alloc_noprof (mm/slub.c:4115 mm/slub.c:4141)
Dec 3 22:24:48 kernel: do_sys_openat2 (fs/open.c:1415)
Dec 3 22:24:48 kernel: __x64_sys_openat (fs/open.c:1441)
Dec 3 22:24:48 kernel: do_syscall_64 (arch/x86/entry/common.c:52 arch/x86/entry/common.c:83)
Dec 3 22:24:48 kernel: ? do_user_addr_fault (./include/linux/mm.h:730 arch/x86/mm/fault.c:1340)
Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 3 22:24:48 kernel: entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
Dec 3 22:24:48 kernel: RIP: 0033:0x7f79468fd70b
Dec 3 22:24:48 kernel: RSP: 002b:00007ffd27a6c700 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
Dec 3 22:24:48 kernel: RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f79468fd70b
Dec 3 22:24:48 kernel: RDX: 0000000000080241 RSI: 00007f7945b64290 RDI: 00000000ffffff9c
Dec 3 22:24:48 kernel: RBP: 00007f7945b64290 R08: 0000000000000000 R09: 0000000000000000
Dec 3 22:24:48 kernel: R10: 00000000000001b6 R11: 0000000000000246 R12: 0000000000080241
Dec 3 22:24:48 kernel: R13: 00007f7945bf2ea0 R14: 0000000000080241 R15: 0000000000000000
Dec 3 22:24:48 kernel: </TASK>
Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3482631 blocked for more than 122 seconds.
Dec 3 22:24:48 kernel: Tainted: G W 6.12.0+ #77
Dec 3 22:24:48 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 3 22:24:48 kernel: task:ovs-monitor-ips state:D stack:0 pid:3482631 tgid:3482631 ppid:1 flags:0x00000002
Dec 3 22:24:48 kernel: Call Trace:
Dec 3 22:24:48 kernel: <TASK>
Dec 3 22:24:48 kernel: __schedule (kernel/sched/core.c:5328 kernel/sched/core.c:6693)
Dec 3 22:24:48 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
Dec 3 22:24:48 kernel: io_schedule (kernel/sched/core.c:7552 kernel/sched/core.c:7578)
Dec 3 22:24:48 kernel: folio_wait_bit_common (mm/filemap.c:1301)
Dec 3 22:24:48 kernel: ? xas_load (./include/linux/xarray.h:175 ./include/linux/xarray.h:1264 lib/xarray.c:240)
Dec 3 22:24:48 kernel: ? __pfx_wake_page_function (mm/filemap.c:1117)
Dec 3 22:24:48 kernel: folio_wait_writeback (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:555 mm/page-writeback.c:3187)
Dec 3 22:24:48 kernel: truncate_inode_partial_folio (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:822 ./include/linux/page-flags.h:843 ./include/linux/mm.h:1115 ./include/linux/mm.h:2137 mm/truncate.c:209)
Dec 3 22:24:48 kernel: truncate_inode_pages_range (mm/truncate.c:354)
Dec 3 22:24:48 kernel: ? _raw_spin_unlock (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:142 kernel/locking/spinlock.c:186)
Dec 3 22:24:48 kernel: ? finish_task_switch.isra.0 (./arch/x86/include/asm/irqflags.h:42 ./arch/x86/include/asm/irqflags.h:97 kernel/sched/sched.h:1518 kernel/sched/core.c:5082 kernel/sched/core.c:5200)
Dec 3 22:24:48 kernel: ? __schedule (kernel/sched/core.c:6699)
Dec 3 22:24:48 kernel: ? xfrm_state_mtu (net/xfrm/xfrm_state.c:2842 net/xfrm/xfrm_state.c:2824)
Dec 3 22:24:48 kernel: ? schedule_hrtimeout_range_clock (kernel/time/hrtimer.c:1332 kernel/time/hrtimer.c:1449 kernel/time/hrtimer.c:2283)
Dec 3 22:24:48 kernel: ? remove_wait_queue (./include/linux/list.h:215 ./include/linux/list.h:229 ./include/linux/wait.h:207 kernel/sched/wait.c:55)
Dec 3 22:24:48 kernel: ? _raw_spin_unlock_irqrestore (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:150 kernel/locking/spinlock.c:194)
Dec 3 22:24:48 kernel: ? poll_freewait (fs/select.c:140 (discriminator 3))
Dec 3 22:24:48 kernel: ? do_select (fs/select.c:612)
Dec 3 22:24:48 kernel: ? down_read (./arch/x86/include/asm/preempt.h:79 kernel/locking/rwsem.c:1246 kernel/locking/rwsem.c:1261 kernel/locking/rwsem.c:1526)
Dec 3 22:24:48 kernel: ? unmap_mapping_range (mm/memory.c:3873)
Dec 3 22:24:48 kernel: truncate_pagecache (mm/truncate.c:728)
Dec 3 22:24:48 kernel: xfs_setattr_size+0x139/0x410 xfs
Dec 3 22:24:48 kernel: xfs_vn_setattr+0x78/0x140 xfs
Dec 3 22:24:48 kernel: notify_change (fs/attr.c:503)
Dec 3 22:24:48 kernel: ? do_truncate (./include/linux/fs.h:820 fs/open.c:66)
Dec 3 22:24:48 kernel: do_truncate (./include/linux/fs.h:820 fs/open.c:66)
Dec 3 22:24:48 kernel: do_open (fs/namei.c:3395 fs/namei.c:3778)
Dec 3 22:24:48 kernel: path_openat (fs/namei.c:3933)
Dec 3 22:24:48 kernel: do_filp_open (fs/namei.c:3960)
Dec 3 22:24:48 kernel: ? unix_stream_recvmsg (net/unix/af_unix.c:2997)
Dec 3 22:24:48 kernel: ? __pfx_unix_stream_read_actor (net/unix/af_unix.c:2957)
Dec 3 22:24:48 kernel: ? kmem_cache_alloc_noprof (mm/slub.c:4115 mm/slub.c:4141)
Dec 3 22:24:48 kernel: do_sys_openat2 (fs/open.c:1415)
Dec 3 22:24:48 kernel: __x64_sys_openat (fs/open.c:1441)
Dec 3 22:24:48 kernel: do_syscall_64 (arch/x86/entry/common.c:52 arch/x86/entry/common.c:83)
Dec 3 22:24:48 kernel: ? syscall_exit_work (./include/linux/audit.h:357 kernel/entry/common.c:166)
Dec 3 22:24:48 kernel: ? syscall_exit_to_user_mode (./arch/x86/include/asm/irqflags.h:37 ./arch/x86/include/asm/irqflags.h:92 ./include/linux/entry-common.h:231 kernel/entry/common.c:206 kernel/entry/common.c:218)
Dec 3 22:24:48 kernel: ? do_syscall_64 (arch/x86/entry/common.c:102)
Dec 3 22:24:48 kernel: ? _raw_spin_unlock_irqrestore (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:150 kernel/locking/spinlock.c:194)
Dec 3 22:24:48 kernel: ? __skb_try_recv_datagram (net/core/datagram.c:267)
Dec 3 22:24:48 kernel: ? __skb_recv_datagram (net/core/datagram.c:296)
Dec 3 22:24:48 kernel: ? __memcg_slab_free_hook (mm/memcontrol.c:3004 (discriminator 2))
Dec 3 22:24:48 kernel: ? __pte_offset_map (./include/linux/pgtable.h:324 ./include/linux/pgtable.h:594 mm/pgtable-generic.c:289)
Dec 3 22:24:48 kernel: ? _raw_spin_unlock (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:142 kernel/locking/spinlock.c:186)
Dec 3 22:24:48 kernel: ? do_wp_page (./include/linux/vmstat.h:75 mm/memory.c:3263 mm/memory.c:3731)
Dec 3 22:24:48 kernel: ? __handle_mm_fault (mm/memory.c:5909)
Dec 3 22:24:48 kernel: ? __count_memcg_events (mm/memcontrol.c:573 mm/memcontrol.c:836)
Dec 3 22:24:48 kernel: ? handle_mm_fault (mm/memory.c:5951 mm/memory.c:6103)
Dec 3 22:24:48 kernel: ? do_user_addr_fault (./include/linux/mm.h:730 arch/x86/mm/fault.c:1340)
Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 3 22:24:48 kernel: entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
Dec 3 22:24:48 kernel: RIP: 0033:0x7f10818fd70b
Dec 3 22:24:48 kernel: RSP: 002b:00007fff83e83f80 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
Dec 3 22:24:48 kernel: RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f10818fd70b
Dec 3 22:24:48 kernel: RDX: 0000000000080241 RSI: 00007f1080d5a8a0 RDI: 00000000ffffff9c
Dec 3 22:24:48 kernel: RBP: 00007f1080d5a8a0 R08: 0000000000000000 R09: 0000000000000000
Dec 3 22:24:48 kernel: R10: 00000000000001b6 R11: 0000000000000246 R12: 0000000000080241
Dec 3 22:24:48 kernel: R13: 00007f1080d50bb0 R14: 0000000000080241 R15: 0000000000000000
Dec 3 22:24:48 kernel: </TASK>
Dec 3 22:24:48 kernel: INFO: task mv:3483072 blocked for more than 245 seconds.
Dec 3 22:24:48 kernel: Tainted: G W 6.12.0+ #77
Dec 3 22:24:48 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 3 22:24:48 kernel: task:mv state:D stack:0 pid:3483072 tgid:3483072 ppid:3479428 flags:0x00000002
Dec 3 22:24:48 kernel: Call Trace:
Dec 3 22:24:48 kernel: <TASK>
Dec 3 22:24:48 kernel: __schedule (kernel/sched/core.c:5328 kernel/sched/core.c:6693)
Dec 3 22:24:48 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
Dec 3 22:24:48 kernel: io_schedule (kernel/sched/core.c:7552 kernel/sched/core.c:7578)
Dec 3 22:24:48 kernel: folio_wait_bit_common (mm/filemap.c:1301)
Dec 3 22:24:48 kernel: ? xas_load (./include/linux/xarray.h:175 ./include/linux/xarray.h:1264 lib/xarray.c:240)
Dec 3 22:24:48 kernel: ? __pfx_wake_page_function (mm/filemap.c:1117)
Dec 3 22:24:48 kernel: folio_wait_writeback (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:555 mm/page-writeback.c:3187)
Dec 3 22:24:48 kernel: truncate_inode_partial_folio (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:822 ./include/linux/page-flags.h:843 ./include/linux/mm.h:1115 ./include/linux/mm.h:2137 mm/truncate.c:209)
Dec 3 22:24:48 kernel: truncate_inode_pages_range (mm/truncate.c:354)
Dec 3 22:24:48 kernel: ? xfs_iunlock+0x108/0x200 xfs
Dec 3 22:24:48 kernel: ? xfs_rename+0x368/0x990 xfs
Dec 3 22:24:48 kernel: ? fsnotify_move (./include/linux/fsnotify.h:72 ./include/linux/fsnotify.h:64 ./include/linux/fsnotify.h:238)
Dec 3 22:24:48 kernel: ? _raw_spin_unlock (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:142 kernel/locking/spinlock.c:186)
Dec 3 22:24:48 kernel: ? locked_inode_to_wb_and_lock_list (fs/fs-writeback.c:355)
Dec 3 22:24:48 kernel: evict (fs/inode.c:728)
Dec 3 22:24:48 kernel: ? fsnotify_destroy_marks (fs/notify/mark.c:923)
Dec 3 22:24:48 kernel: ? _atomic_dec_and_lock (./arch/x86/include/asm/atomic.h:67 ./include/linux/atomic/atomic-arch-fallback.h:2278 ./include/linux/atomic/atomic-instrumented.h:1384 lib/dec_and_lock.c:29)
Dec 3 22:24:48 kernel: __dentry_kill (fs/dcache.c:618)
Dec 3 22:24:48 kernel: dput (fs/dcache.c:857 fs/dcache.c:845)
Dec 3 22:24:48 kernel: do_renameat2 (fs/namei.c:5174)
Dec 3 22:24:48 kernel: __x64_sys_rename (fs/namei.c:5215)
Dec 3 22:24:48 kernel: do_syscall_64 (arch/x86/entry/common.c:52 arch/x86/entry/common.c:83)
Dec 3 22:24:48 kernel: ? do_user_addr_fault (./include/linux/mm.h:730 arch/x86/mm/fault.c:1340)
Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 3 22:24:48 kernel: entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
Dec 3 22:24:48 kernel: RIP: 0033:0x7f3185a5aadb
Dec 3 22:24:48 kernel: RSP: 002b:00007ffe258c4548 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
Dec 3 22:24:48 kernel: RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f3185a5aadb
Dec 3 22:24:48 kernel: RDX: 0000000000000025 RSI: 00007ffe258c5cc3 RDI: 00007ffe258c5cb7
Dec 3 22:24:48 kernel: RBP: 00007ffe258c48f0 R08: 00007ffe258c4670 R09: 00007ffe258c4ac0
Dec 3 22:24:48 kernel: R10: 0000000000000100 R11: 0000000000000246 R12: 0000000000000011
Dec 3 22:24:48 kernel: R13: 0000000000000000 R14: 00007ffe258c5cc3 R15: 00007ffe258c4ac0
Dec 3 22:24:48 kernel: </TASK>
Dec 3 22:26:51 kernel: INFO: task khugepaged:330 blocked for more than 368 seconds.
Dec 3 22:26:51 kernel: Tainted: G W 6.12.0+ #77
Dec 3 22:26:51 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 3 22:26:51 kernel: task:khugepaged state:D stack:0 pid:330 tgid:330 ppid:2 flags:0x00004000
Dec 3 22:26:51 kernel: Call Trace:
Dec 3 22:26:51 kernel: <TASK>
Dec 3 22:26:51 kernel: __schedule (kernel/sched/core.c:5328 kernel/sched/core.c:6693)
Dec 3 22:26:51 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
Dec 3 22:26:51 kernel: schedule_timeout (kernel/time/timer.c:2592)
Dec 3 22:26:51 kernel: ? kvm_sched_clock_read (arch/x86/kernel/kvmclock.c:91)
Dec 3 22:26:51 kernel: ? sched_clock (./arch/x86/include/asm/preempt.h:94 arch/x86/kernel/tsc.c:285)
Dec 3 22:26:51 kernel: ? sched_clock_cpu (kernel/sched/clock.c:394)
Dec 3 22:26:51 kernel: ? __smp_call_single_queue (kernel/smp.c:115 kernel/smp.c:411)
Dec 3 22:26:51 kernel: __wait_for_common (kernel/sched/completion.c:95 kernel/sched/completion.c:116)
Dec 3 22:26:51 kernel: ? __pfx_schedule_timeout (kernel/time/timer.c:2577)
Dec 3 22:26:51 kernel: __flush_work (kernel/workqueue.c:4222)
Dec 3 22:26:51 kernel: ? __pfx_wq_barrier_func (kernel/workqueue.c:3718)
Dec 3 22:26:51 kernel: __lru_add_drain_all (mm/swap.c:873 (discriminator 3))
Dec 3 22:26:51 kernel: khugepaged (mm/khugepaged.c:2499 mm/khugepaged.c:2571)
Dec 3 22:26:51 kernel: ? __pfx_khugepaged (mm/khugepaged.c:2564)
Dec 3 22:26:51 kernel: kthread (kernel/kthread.c:389)
Dec 3 22:26:51 kernel: ? __pfx_kthread (kernel/kthread.c:342)
Dec 3 22:26:51 kernel: ret_from_fork (arch/x86/kernel/process.c:147)
Dec 3 22:26:51 kernel: ? __pfx_kthread (kernel/kthread.c:342)
Dec 3 22:26:51 kernel: ret_from_fork_asm (arch/x86/entry/entry_64.S:257)
Dec 3 22:26:51 kernel: </TASK>
Dec 3 22:26:51 kernel: Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings
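(For anyone reproducing this: after the suppression message above, further hung-task reports can be re-armed via the standard sysctl, assuming the default hung-task detector configuration. A minimal sketch:)

```shell
# Re-arm hung task reporting after suppression;
# -1 means report an unlimited number of warnings.
echo -1 > /proc/sys/kernel/hung_task_warnings
```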
Still blocked after many hours:
Dec 4 06:11:45 kernel: INFO: task khugepaged:330 blocked for more than 28262 seconds.
Dec 4 06:11:45 kernel: Tainted: G W 6.12.0+ #77
Dec 4 06:11:45 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 4 06:11:45 kernel: task:khugepaged state:D stack:0 pid:330 tgid:330 ppid:2 flags:0x00004000
Dec 4 06:11:45 kernel: Call Trace:
Dec 4 06:11:45 kernel: <TASK>
Dec 4 06:11:45 kernel: __schedule (kernel/sched/core.c:5328 kernel/sched/core.c:6693)
Dec 4 06:11:45 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
Dec 4 06:11:45 kernel: schedule_timeout (kernel/time/timer.c:2592)
Dec 4 06:11:45 kernel: ? kvm_sched_clock_read (arch/x86/kernel/kvmclock.c:91)
Dec 4 06:11:45 kernel: ? sched_clock (./arch/x86/include/asm/preempt.h:94 arch/x86/kernel/tsc.c:285)
Dec 4 06:11:45 kernel: ? sched_clock_cpu (kernel/sched/clock.c:394)
Dec 4 06:11:45 kernel: ? __smp_call_single_queue (kernel/smp.c:115 kernel/smp.c:411)
Dec 4 06:11:45 kernel: __wait_for_common (kernel/sched/completion.c:95 kernel/sched/completion.c:116)
Dec 4 06:11:45 kernel: ? __pfx_schedule_timeout (kernel/time/timer.c:2577)
Dec 4 06:11:45 kernel: __flush_work (kernel/workqueue.c:4222)
Dec 4 06:11:45 kernel: ? __pfx_wq_barrier_func (kernel/workqueue.c:3718)
Dec 4 06:11:45 kernel: __lru_add_drain_all (mm/swap.c:873 (discriminator 3))
Dec 4 06:11:45 kernel: khugepaged (mm/khugepaged.c:2499 mm/khugepaged.c:2571)
Dec 4 06:11:45 kernel: ? __pfx_khugepaged (mm/khugepaged.c:2564)
Dec 4 06:11:45 kernel: kthread (kernel/kthread.c:389)
Dec 4 06:11:45 kernel: ? __pfx_kthread (kernel/kthread.c:342)
Dec 4 06:11:45 kernel: ret_from_fork (arch/x86/kernel/process.c:147)
Dec 4 06:11:45 kernel: ? __pfx_kthread (kernel/kthread.c:342)
Dec 4 06:11:45 kernel: ret_from_fork_asm (arch/x86/entry/entry_64.S:257)
Dec 4 06:11:45 kernel: </TASK>
Dec 4 06:11:45 kernel: Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings
The system is actually idle:
Dec 4 06:27:07 kernel: sysrq: Show backtrace of all active CPUs
Dec 4 06:27:07 kernel: NMI backtrace for cpu 30
Dec 4 06:27:07 kernel: CPU: 30 UID: 0 PID: 10810 Comm: bash Kdump: loaded Tainted: G W 6.12.0+ #77
Dec 4 06:27:07 kernel: Tainted: [W]=WARN
Dec 4 06:27:07 kernel: Hardware name: Red Hat KVM/RHEL, BIOS 1.16.1-1.el9 04/01/2014
Dec 4 06:27:07 kernel: Call Trace:
Dec 4 06:27:07 kernel: <TASK>
Dec 4 06:27:07 kernel: dump_stack_lvl (lib/dump_stack.c:123)
Dec 4 06:27:07 kernel: nmi_cpu_backtrace (lib/nmi_backtrace.c:113)
Dec 4 06:27:07 kernel: ? __pfx_nmi_raise_cpu_backtrace (arch/x86/kernel/apic/hw_nmi.c:35)
Dec 4 06:27:07 kernel: nmi_trigger_cpumask_backtrace (lib/nmi_backtrace.c:62)
Dec 4 06:27:07 kernel: __handle_sysrq (drivers/tty/sysrq.c:613)
Dec 4 06:27:07 kernel: write_sysrq_trigger (drivers/tty/sysrq.c:1184)
Dec 4 06:27:07 kernel: proc_reg_write (fs/proc/inode.c:330 fs/proc/inode.c:342)
Dec 4 06:27:07 kernel: vfs_write (fs/read_write.c:681)
Dec 4 06:27:07 kernel: ? do_fcntl (fs/fcntl.c:463)
Dec 4 06:27:07 kernel: ksys_write (fs/read_write.c:736)
Dec 4 06:27:07 kernel: do_syscall_64 (arch/x86/entry/common.c:52 arch/x86/entry/common.c:83)
Dec 4 06:27:07 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 4 06:27:07 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 4 06:27:07 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 4 06:27:07 kernel: entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
Dec 4 06:27:07 kernel: RIP: 0033:0x7fb77cefda57
Dec 4 06:27:07 kernel: </TASK>
Dec 4 06:27:07 kernel: Sending NMI from CPU 30 to CPUs 0-29,31-39:
Dec 4 06:27:07 kernel: NMI backtrace for cpu 12 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 6 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 16 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 4 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 20 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 10 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 33 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 18 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 1 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 23 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 27 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 3 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 11 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 32 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 7 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 0 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 13 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 5 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 26 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 28 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 24 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 37 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 39 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 17 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 22 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 31 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 8 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 21 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 29 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 14 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 2 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 9 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 35 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 25 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 15 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 36 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 19 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 34 skipped: idling at default_idle
Dec 4 06:27:07 kernel: NMI backtrace for cpu 38 skipped: idling at default_idle
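(The all-CPU backtrace above was collected with the magic SysRq trigger; `write_sysrq_trigger` in the bash task's trace is consistent with something like the following, assuming CONFIG_MAGIC_SYSRQ is enabled:)

```shell
# Dump a backtrace of all active CPUs ('l' = show backtrace of all active CPUs).
echo l > /proc/sysrq-trigger
```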
=====================================================
SECOND LOG (raw, not decoded to source lines):
=====================================================
Dec 3 01:58:46 kernel: ------------[ cut here ]------------
Dec 3 01:58:46 kernel: watchdog: BUG: soft lockup - CPU#11 stuck for 21s! [kworker/11:1:866154]
Dec 3 01:58:46 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
Dec 3 01:58:46 kernel: rcu: 	17-...!: (0 ticks this GP) idle=b124/1/0x4000000000000000 softirq=2109356/2109356 fqs=0
Dec 3 01:58:46 kernel: rcu: 	(detected by 18, t=60004 jiffies, g=11126601, q=393720 ncpus=40)
Dec 3 01:58:46 kernel: Sending NMI from CPU 18 to CPUs 17:
Dec 3 01:58:46 kernel: rcu: rcu_preempt kthread timer wakeup didn't happen for 70001 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200
Dec 3 01:58:46 kernel: rcu: 	Possible timer handling issue on cpu=17 timer-softirq=273727
Dec 3 01:58:46 kernel: rcu: rcu_preempt kthread starved for 70004 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=17
Dec 3 01:58:46 kernel: rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
Dec 3 01:58:46 kernel: rcu: RCU grace-period kthread stack dump:
Dec 3 01:58:46 kernel: task:rcu_preempt state:R stack:0 pid:18 tgid:18 ppid:2 flags:0x00004008
Dec 3 01:58:46 kernel: Call Trace:
Dec 3 01:58:46 kernel: <TASK>
Dec 3 01:58:46 kernel: ? __pick_next_task+0x3e/0x1a0
Dec 3 01:58:46 kernel: ? __schedule+0xfe/0x620
Dec 3 01:58:46 kernel: ? _raw_spin_unlock_irqrestore+0xa/0x30
Dec 3 01:58:46 kernel: ? __pfx_rcu_gp_kthread+0x10/0x10
Dec 3 01:58:46 kernel: ? schedule+0x23/0xa0
Dec 3 01:58:46 kernel: ? schedule_timeout+0x8b/0x160
Dec 3 01:58:46 kernel: ? __pfx_process_timeout+0x10/0x10
Dec 3 01:58:46 kernel: ? rcu_gp_fqs_loop+0x10b/0x500
Dec 3 01:58:46 kernel: ? rcu_gp_kthread+0x13f/0x1d0
Dec 3 01:58:46 kernel: ? kthread+0xcc/0x100
Dec 3 01:58:46 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 01:58:46 kernel: ? ret_from_fork+0x2d/0x50
Dec 3 01:58:46 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 01:58:46 kernel: ? ret_from_fork_asm+0x1a/0x30
Dec 3 01:58:46 kernel: </TASK>
Dec 3 02:00:48 kernel: INFO: task kworker/u165:0:746685 blocked for more than 122 seconds.
Dec 3 02:00:48 kernel: Not tainted 6.12.0 #64
Dec 3 02:00:48 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 3 02:00:48 kernel: task:kworker/u165:0 state:D stack:0 pid:746685 tgid:746685 ppid:2 flags:0x00004000
Dec 3 02:00:48 kernel: Workqueue: events_unbound linkwatch_event
Dec 3 02:00:48 kernel: Call Trace:
Dec 3 02:00:48 kernel: <TASK>
Dec 3 02:00:48 kernel: __schedule+0x23f/0x620
Dec 3 02:00:48 kernel: schedule+0x23/0xa0
Dec 3 02:00:48 kernel: schedule_preempt_disabled+0x11/0x20
Dec 3 02:00:48 kernel: __mutex_lock.constprop.0+0x31d/0x650
Dec 3 02:00:48 kernel: ? __schedule+0x247/0x620
Dec 3 02:00:48 kernel: linkwatch_event+0xa/0x30
Dec 3 02:00:48 kernel: process_one_work+0x179/0x390
Dec 3 02:00:48 kernel: worker_thread+0x239/0x340
Dec 3 02:00:48 kernel: ? __pfx_worker_thread+0x10/0x10
Dec 3 02:00:48 kernel: kthread+0xcc/0x100
Dec 3 02:00:48 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:00:48 kernel: ret_from_fork+0x2d/0x50
Dec 3 02:00:48 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:00:48 kernel: ret_from_fork_asm+0x1a/0x30
Dec 3 02:00:48 kernel: </TASK>
Dec 3 02:00:48 kernel: INFO: task kworker/5:2:900494 blocked for more than 122 seconds.
Dec 3 02:00:48 kernel: Not tainted 6.12.0 #64
Dec 3 02:00:48 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 3 02:00:48 kernel: task:kworker/5:2 state:D stack:0 pid:900494 tgid:900494 ppid:2 flags:0x00004000
Dec 3 02:00:48 kernel: Workqueue: events xfrm_state_gc_task
Dec 3 02:00:48 kernel: Call Trace:
Dec 3 02:00:48 kernel: <TASK>
Dec 3 02:00:48 kernel: __schedule+0x23f/0x620
Dec 3 02:00:48 kernel: schedule+0x23/0xa0
Dec 3 02:00:48 kernel: schedule_timeout+0x14a/0x160
Dec 3 02:00:48 kernel: ? __hrtimer_start_range_ns+0x20b/0x2e0
Dec 3 02:00:48 kernel: ? kvm_clock_get_cycles+0x14/0x30
Dec 3 02:00:48 kernel: ? ktime_get+0x34/0xc0
Dec 3 02:00:48 kernel: ? timerqueue_del+0x2a/0x50
Dec 3 02:00:48 kernel: __wait_for_common+0x8f/0x1d0
Dec 3 02:00:48 kernel: ? __pfx_schedule_timeout+0x10/0x10
Dec 3 02:00:48 kernel: wait_for_completion_state+0x1d/0x40
Dec 3 02:00:48 kernel: __wait_rcu_gp+0x126/0x130
Dec 3 02:00:48 kernel: synchronize_rcu_normal.part.0+0x3a/0x60
Dec 3 02:00:48 kernel: ? __pfx_call_rcu_hurry+0x10/0x10
Dec 3 02:00:48 kernel: ? __pfx_wakeme_after_rcu+0x10/0x10
Dec 3 02:00:48 kernel: xfrm_state_gc_task+0x56/0xa0
Dec 3 02:00:48 kernel: process_one_work+0x179/0x390
Dec 3 02:00:48 kernel: worker_thread+0x239/0x340
Dec 3 02:00:48 kernel: ? __pfx_worker_thread+0x10/0x10
Dec 3 02:00:48 kernel: kthread+0xcc/0x100
Dec 3 02:00:48 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:00:48 kernel: ret_from_fork+0x2d/0x50
Dec 3 02:00:48 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:00:48 kernel: ret_from_fork_asm+0x1a/0x30
Dec 3 02:00:48 kernel: </TASK>
Dec 3 02:00:48 kernel: INFO: task systemd-udevd:995278 blocked for more than 122 seconds.
Dec 3 02:00:48 kernel: Not tainted 6.12.0 #64
Dec 3 02:00:48 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 3 02:00:48 kernel: task:systemd-udevd state:D stack:0 pid:995278 tgid:995278 ppid:1080 flags:0x00000002
Dec 3 02:00:48 kernel: Call Trace:
Dec 3 02:00:48 kernel: <TASK>
Dec 3 02:00:48 kernel: __schedule+0x23f/0x620
Dec 3 02:00:48 kernel: schedule+0x23/0xa0
Dec 3 02:00:48 kernel: netlink_table_grab.part.0+0x82/0xe0
Dec 3 02:00:48 kernel: ? __pfx_default_wake_function+0x10/0x10
Dec 3 02:00:48 kernel: netlink_release+0x36c/0x520
Dec 3 02:00:48 kernel: ? __pfx_netlink_hash+0x10/0x10
Dec 3 02:00:48 kernel: ? __pfx_netlink_compare+0x10/0x10
Dec 3 02:00:48 kernel: __sock_release+0x3a/0xc0
Dec 3 02:00:48 kernel: sock_close+0x11/0x20
Dec 3 02:00:48 kernel: __fput+0xdb/0x2a0
Dec 3 02:00:48 kernel: task_work_run+0x55/0x90
Dec 3 02:00:48 kernel: do_exit+0x279/0x4b0
Dec 3 02:00:48 kernel: do_group_exit+0x2c/0x80
Dec 3 02:00:48 kernel: __x64_sys_exit_group+0x14/0x20
Dec 3 02:00:48 kernel: x64_sys_call+0x1836/0x1840
Dec 3 02:00:48 kernel: do_syscall_64+0x79/0x150
Dec 3 02:00:48 kernel: ? __count_memcg_events+0x4f/0xe0
Dec 3 02:00:48 kernel: ? handle_mm_fault+0x18e/0x270
Dec 3 02:00:48 kernel: ? do_user_addr_fault+0x34c/0x680
Dec 3 02:00:48 kernel: ? clear_bhb_loop+0x45/0xa0
Dec 3 02:00:48 kernel: ? clear_bhb_loop+0x45/0xa0
Dec 3 02:00:48 kernel: ? clear_bhb_loop+0x45/0xa0
Dec 3 02:00:48 kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e
Dec 3 02:00:48 kernel: RIP: 0033:0x7f6cfa8d921d
Dec 3 02:00:48 kernel: RSP: 002b:00007ffd7e081b58 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
Dec 3 02:00:48 kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f6cfa8d921d
Dec 3 02:00:48 kernel: RDX: 00000000000000e7 RSI: fffffffffffffe88 RDI: 0000000000000000
Dec 3 02:00:48 kernel: RBP: 00007ffd7e081c00 R08: 00005615d9e8dbb0 R09: 0000000000000004
Dec 3 02:00:48 kernel: R10: 0000000000000018 R11: 0000000000000246 R12: 00007ffd7e081bb0
Dec 3 02:00:48 kernel: R13: 00005615d9d0e8b0 R14: 00005615d9e8dbb0 R15: 0000000000000000
Dec 3 02:00:48 kernel: </TASK>
Dec 3 02:00:48 kernel: INFO: task systemd-udevd:995279 blocked for more than 122 seconds.
Dec 3 02:00:48 kernel: Not tainted 6.12.0 #64
Dec 3 02:00:48 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 3 02:00:48 kernel: task:systemd-udevd state:D stack:0 pid:995279 tgid:995279 ppid:1080 flags:0x00004002
Dec 3 02:00:48 kernel: Call Trace:
Dec 3 02:00:48 kernel: <TASK>
Dec 3 02:00:48 kernel: __schedule+0x23f/0x620
Dec 3 02:00:48 kernel: schedule+0x23/0xa0
Dec 3 02:00:48 kernel: netlink_table_grab.part.0+0x82/0xe0
Dec 3 02:00:48 kernel: ? __pfx_default_wake_function+0x10/0x10
Dec 3 02:00:48 kernel: netlink_release+0x36c/0x520
Dec 3 02:00:48 kernel: ? __pfx_netlink_hash+0x10/0x10
Dec 3 02:00:48 kernel: ? __pfx_netlink_compare+0x10/0x10
Dec 3 02:00:48 kernel: __sock_release+0x3a/0xc0
Dec 3 02:00:48 kernel: sock_close+0x11/0x20
Dec 3 02:00:48 kernel: __fput+0xdb/0x2a0
Dec 3 02:00:48 kernel: task_work_run+0x55/0x90
Dec 3 02:00:48 kernel: do_exit+0x279/0x4b0
Dec 3 02:00:48 kernel: do_group_exit+0x2c/0x80
Dec 3 02:00:48 kernel: __x64_sys_exit_group+0x14/0x20
Dec 3 02:00:48 kernel: x64_sys_call+0x1836/0x1840
Dec 3 02:00:48 kernel: do_syscall_64+0x79/0x150
Dec 3 02:00:48 kernel: ? get_page_from_freelist+0x333/0x630
Dec 3 02:00:48 kernel: ? __alloc_pages_noprof+0x186/0x350
Dec 3 02:00:48 kernel: ? __mod_memcg_lruvec_state+0x95/0x150
Dec 3 02:00:48 kernel: ? __lruvec_stat_mod_folio+0x80/0xd0
Dec 3 02:00:48 kernel: ? __folio_mod_stat+0x2a/0x80
Dec 3 02:00:48 kernel: ? _raw_spin_unlock+0xa/0x30
Dec 3 02:00:48 kernel: ? wp_page_copy+0x4e0/0x710
Dec 3 02:00:48 kernel: ? __pte_offset_map+0x17/0x160
Dec 3 02:00:48 kernel: ? _raw_spin_unlock+0xa/0x30
Dec 3 02:00:48 kernel: ? do_wp_page+0x666/0x760
Dec 3 02:00:48 kernel: ? __handle_mm_fault+0x326/0x730
Dec 3 02:00:48 kernel: ? __count_memcg_events+0x4f/0xe0
Dec 3 02:00:48 kernel: ? handle_mm_fault+0x18e/0x270
Dec 3 02:00:48 kernel: ? do_user_addr_fault+0x34c/0x680
Dec 3 02:00:48 kernel: ? clear_bhb_loop+0x45/0xa0
Dec 3 02:00:48 kernel: ? clear_bhb_loop+0x45/0xa0
Dec 3 02:00:48 kernel: ? clear_bhb_loop+0x45/0xa0
Dec 3 02:00:48 kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e
Dec 3 02:00:48 kernel: RIP: 0033:0x7f6cfa8d921d
Dec 3 02:00:48 kernel: RSP: 002b:00007ffd7e081b58 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
Dec 3 02:00:48 kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f6cfa8d921d
Dec 3 02:00:48 kernel: RDX: 00000000000000e7 RSI: fffffffffffffe88 RDI: 0000000000000000
Dec 3 02:00:48 kernel: RBP: 00007ffd7e081c00 R08: 00005615d9e932b0 R09: 0000000000000004
Dec 3 02:00:48 kernel: R10: 0000000000000018 R11: 0000000000000246 R12: 00007ffd7e081bb0
Dec 3 02:00:48 kernel: R13: 00005615d9d0e8b0 R14: 00005615d9e932b0 R15: 0000000000000000
Dec 3 02:00:48 kernel: </TASK>
Dec 3 02:00:48 kernel: INFO: task ip:998743 blocked for more than 122 seconds.
Dec 3 02:00:48 kernel: Not tainted 6.12.0 #64
Dec 3 02:00:48 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 3 02:00:48 kernel: task:ip state:D stack:0 pid:998743 tgid:998743 ppid:998736 flags:0x00000002
Dec 3 02:00:48 kernel: Call Trace:
Dec 3 02:00:48 kernel: <TASK>
Dec 3 02:00:48 kernel: __schedule+0x23f/0x620
Dec 3 02:00:48 kernel: schedule+0x23/0xa0
Dec 3 02:00:48 kernel: schedule_preempt_disabled+0x11/0x20
Dec 3 02:00:48 kernel: __mutex_lock.constprop.0+0x31d/0x650
Dec 3 02:00:48 kernel: rtnetlink_rcv_msg+0x111/0x410
Dec 3 02:00:48 kernel: ? avc_has_perm_noaudit+0x67/0xf0
Dec 3 02:00:48 kernel: ? __pfx_rtnetlink_rcv_msg+0x10/0x10
Dec 3 02:00:48 kernel: netlink_rcv_skb+0x54/0x100
Dec 3 02:00:48 kernel: netlink_unicast+0x243/0x370
Dec 3 02:00:48 kernel: netlink_sendmsg+0x1f6/0x430
Dec 3 02:00:48 kernel: __sys_sendto+0x1f3/0x200
Dec 3 02:00:48 kernel: ? do_read_fault+0x10a/0x1e0
Dec 3 02:00:48 kernel: ? do_fault+0x21f/0x380
Dec 3 02:00:48 kernel: ? pte_offset_map_nolock+0x2b/0xb0
Dec 3 02:00:48 kernel: __x64_sys_sendto+0x20/0x30
Dec 3 02:00:48 kernel: do_syscall_64+0x79/0x150
Dec 3 02:00:48 kernel: ? __count_memcg_events+0x4f/0xe0
Dec 3 02:00:48 kernel: ? handle_mm_fault+0x18e/0x270
Dec 3 02:00:48 kernel: ? do_user_addr_fault+0x34c/0x680
Dec 3 02:00:48 kernel: ? clear_bhb_loop+0x45/0xa0
Dec 3 02:00:48 kernel: ? clear_bhb_loop+0x45/0xa0
Dec 3 02:00:48 kernel: ? clear_bhb_loop+0x45/0xa0
Dec 3 02:00:48 kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e
Dec 3 02:00:48 kernel: RIP: 0033:0x7f287830f860
Dec 3 02:00:48 kernel: RSP: 002b:00007ffdd4f2c518 EFLAGS: 00000246 ORIG_RAX: 000000000000002c
Dec 3 02:00:48 kernel: RAX: ffffffffffffffda RBX: 00007ffdd4f2cc88 RCX: 00007f287830f860
Dec 3 02:00:48 kernel: RDX: 0000000000000020 RSI: 00007ffdd4f2c520 RDI: 0000000000000003
Dec 3 02:00:48 kernel: RBP: 00007ffdd4f2d7e4 R08: 0000000000000000 R09: 0000000000000000
Dec 3 02:00:48 kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000002
Dec 3 02:00:48 kernel: R13: 000055969d419040 R14: 00007ffdd4f2cc78 R15: 0000000000000004
Dec 3 02:00:48 kernel: </TASK>
Dec 3 02:01:56 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
Dec 3 02:01:56 kernel: rcu: 	17-...!: (0 ticks this GP) idle=b124/1/0x4000000000000002 softirq=2109356/2109356 fqs=0
Dec 3 02:01:56 kernel: rcu: 	(detected by 8, t=250009 jiffies, g=11126601, q=1577177 ncpus=40)
Dec 3 02:01:56 kernel: Sending NMI from CPU 8 to CPUs 17:
Dec 3 02:01:56 kernel: rcu: rcu_preempt kthread timer wakeup didn't happen for 260006 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200
Dec 3 02:01:56 kernel: rcu: 	Possible timer handling issue on cpu=17 timer-softirq=273727
Dec 3 02:01:56 kernel: rcu: rcu_preempt kthread starved for 260009 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=17
Dec 3 02:01:56 kernel: rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
Dec 3 02:01:56 kernel: rcu: RCU grace-period kthread stack dump:
Dec 3 02:01:56 kernel: task:rcu_preempt state:R stack:0 pid:18 tgid:18 ppid:2 flags:0x00004008
Dec 3 02:01:56 kernel: Call Trace:
Dec 3 02:01:56 kernel: <TASK>
Dec 3 02:01:56 kernel: ? __pick_next_task+0x3e/0x1a0
Dec 3 02:01:56 kernel: ? __schedule+0xfe/0x620
Dec 3 02:01:56 kernel: ? _raw_spin_unlock_irqrestore+0xa/0x30
Dec 3 02:01:56 kernel: ? __pfx_rcu_gp_kthread+0x10/0x10
Dec 3 02:01:56 kernel: ? schedule+0x23/0xa0
Dec 3 02:01:56 kernel: ? schedule_timeout+0x8b/0x160
Dec 3 02:01:56 kernel: ? __pfx_process_timeout+0x10/0x10
Dec 3 02:01:56 kernel: ? rcu_gp_fqs_loop+0x10b/0x500
Dec 3 02:01:56 kernel: ? rcu_gp_kthread+0x13f/0x1d0
Dec 3 02:01:56 kernel: ? kthread+0xcc/0x100
Dec 3 02:01:56 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:01:56 kernel: ? ret_from_fork+0x2d/0x50
Dec 3 02:01:56 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:01:56 kernel: ? ret_from_fork_asm+0x1a/0x30
Dec 3 02:01:56 kernel: </TASK>
Dec 3 02:02:51 kernel: INFO: task kworker/u165:0:746685 blocked for more than 245 seconds.
Dec 3 02:02:51 kernel: Not tainted 6.12.0 #64
Dec 3 02:02:51 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 3 02:02:51 kernel: task:kworker/u165:0 state:D stack:0 pid:746685 tgid:746685 ppid:2 flags:0x00004000
Dec 3 02:02:51 kernel: Workqueue: events_unbound linkwatch_event
Dec 3 02:02:51 kernel: Call Trace:
Dec 3 02:02:51 kernel: <TASK>
Dec 3 02:02:51 kernel: __schedule+0x23f/0x620
Dec 3 02:02:51 kernel: schedule+0x23/0xa0
Dec 3 02:02:51 kernel: schedule_preempt_disabled+0x11/0x20
Dec 3 02:02:51 kernel: __mutex_lock.constprop.0+0x31d/0x650
Dec 3 02:02:51 kernel: ? __schedule+0x247/0x620
Dec 3 02:02:51 kernel: linkwatch_event+0xa/0x30
Dec 3 02:02:51 kernel: process_one_work+0x179/0x390
Dec 3 02:02:51 kernel: worker_thread+0x239/0x340
Dec 3 02:02:51 kernel: ? __pfx_worker_thread+0x10/0x10
Dec 3 02:02:51 kernel: kthread+0xcc/0x100
Dec 3 02:02:51 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:02:51 kernel: ret_from_fork+0x2d/0x50
Dec 3 02:02:51 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:02:51 kernel: ret_from_fork_asm+0x1a/0x30
Dec 3 02:02:51 kernel: </TASK>
Dec 3 02:02:51 kernel: INFO: task kworker/5:2:900494 blocked for more than 245 seconds.
Dec 3 02:02:51 kernel: Not tainted 6.12.0 #64
Dec 3 02:02:51 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 3 02:02:51 kernel: task:kworker/5:2 state:D stack:0 pid:900494 tgid:900494 ppid:2 flags:0x00004000
Dec 3 02:02:51 kernel: Workqueue: events xfrm_state_gc_task
Dec 3 02:02:51 kernel: Call Trace:
Dec 3 02:02:51 kernel: <TASK>
Dec 3 02:02:51 kernel: __schedule+0x23f/0x620
Dec 3 02:02:51 kernel: schedule+0x23/0xa0
Dec 3 02:02:51 kernel: schedule_timeout+0x14a/0x160
Dec 3 02:02:51 kernel: ? __hrtimer_start_range_ns+0x20b/0x2e0
Dec 3 02:02:51 kernel: ? kvm_clock_get_cycles+0x14/0x30
Dec 3 02:02:51 kernel: ? ktime_get+0x34/0xc0
Dec 3 02:02:51 kernel: ? timerqueue_del+0x2a/0x50
Dec 3 02:02:51 kernel: __wait_for_common+0x8f/0x1d0
Dec 3 02:02:51 kernel: ? __pfx_schedule_timeout+0x10/0x10
Dec 3 02:02:51 kernel: wait_for_completion_state+0x1d/0x40
Dec 3 02:02:51 kernel: __wait_rcu_gp+0x126/0x130
Dec 3 02:02:51 kernel: synchronize_rcu_normal.part.0+0x3a/0x60
Dec 3 02:02:51 kernel: ? __pfx_call_rcu_hurry+0x10/0x10
Dec 3 02:02:51 kernel: ? __pfx_wakeme_after_rcu+0x10/0x10
Dec 3 02:02:51 kernel: xfrm_state_gc_task+0x56/0xa0
Dec 3 02:02:51 kernel: process_one_work+0x179/0x390
Dec 3 02:02:51 kernel: worker_thread+0x239/0x340
Dec 3 02:02:51 kernel: ? __pfx_worker_thread+0x10/0x10
Dec 3 02:02:51 kernel: kthread+0xcc/0x100
Dec 3 02:02:51 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:02:51 kernel: ret_from_fork+0x2d/0x50
Dec 3 02:02:51 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:02:51 kernel: ret_from_fork_asm+0x1a/0x30
Dec 3 02:02:51 kernel: </TASK>
Dec 3 02:02:51 kernel: INFO: task systemd-udevd:995278 blocked for more than 245 seconds.
Dec 3 02:02:51 kernel: Not tainted 6.12.0 #64
Dec 3 02:02:51 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 3 02:02:51 kernel: task:systemd-udevd state:D stack:0 pid:995278 tgid:995278 ppid:1080 flags:0x00000002
Dec 3 02:02:51 kernel: Call Trace:
Dec 3 02:02:51 kernel: <TASK>
Dec 3 02:02:51 kernel: __schedule+0x23f/0x620
Dec 3 02:02:51 kernel: schedule+0x23/0xa0
Dec 3 02:02:51 kernel: netlink_table_grab.part.0+0x82/0xe0
Dec 3 02:02:51 kernel: ? __pfx_default_wake_function+0x10/0x10
Dec 3 02:02:51 kernel: netlink_release+0x36c/0x520
Dec 3 02:02:51 kernel: ? __pfx_netlink_hash+0x10/0x10
Dec 3 02:02:51 kernel: ? __pfx_netlink_compare+0x10/0x10
Dec 3 02:02:51 kernel: __sock_release+0x3a/0xc0
Dec 3 02:02:51 kernel: sock_close+0x11/0x20
Dec 3 02:02:51 kernel: __fput+0xdb/0x2a0
Dec 3 02:02:51 kernel: task_work_run+0x55/0x90
Dec 3 02:02:51 kernel: do_exit+0x279/0x4b0
Dec 3 02:02:51 kernel: do_group_exit+0x2c/0x80
Dec 3 02:02:51 kernel: __x64_sys_exit_group+0x14/0x20
Dec 3 02:02:51 kernel: x64_sys_call+0x1836/0x1840
Dec 3 02:02:51 kernel: do_syscall_64+0x79/0x150
Dec 3 02:02:51 kernel: ? __count_memcg_events+0x4f/0xe0
Dec 3 02:02:51 kernel: ? handle_mm_fault+0x18e/0x270
Dec 3 02:02:51 kernel: ? do_user_addr_fault+0x34c/0x680
Dec 3 02:02:51 kernel: ? clear_bhb_loop+0x45/0xa0
Dec 3 02:02:51 kernel: ? clear_bhb_loop+0x45/0xa0
Dec 3 02:02:51 kernel: ? clear_bhb_loop+0x45/0xa0
Dec 3 02:02:51 kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e
Dec 3 02:02:51 kernel: RIP: 0033:0x7f6cfa8d921d
Dec 3 02:02:51 kernel: RSP: 002b:00007ffd7e081b58 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
Dec 3 02:02:51 kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f6cfa8d921d
Dec 3 02:02:51 kernel: RDX: 00000000000000e7 RSI: fffffffffffffe88 RDI: 0000000000000000
Dec 3 02:02:51 kernel: RBP: 00007ffd7e081c00 R08: 00005615d9e8dbb0 R09: 0000000000000004
Dec 3 02:02:51 kernel: R10: 0000000000000018 R11: 0000000000000246 R12: 00007ffd7e081bb0
Dec 3 02:02:51 kernel: R13: 00005615d9d0e8b0 R14: 00005615d9e8dbb0 R15: 0000000000000000
Dec 3 02:02:51 kernel: </TASK>
Dec 3 02:02:51 kernel: INFO: task systemd-udevd:995279 blocked for more than 245 seconds.
Dec 3 02:02:51 kernel: Not tainted 6.12.0 #64
Dec 3 02:02:51 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 3 02:02:51 kernel: task:systemd-udevd state:D stack:0 pid:995279 tgid:995279 ppid:1080 flags:0x00004002
Dec 3 02:02:51 kernel: Call Trace:
Dec 3 02:02:51 kernel: <TASK>
Dec 3 02:02:51 kernel: __schedule+0x23f/0x620
Dec 3 02:02:51 kernel: schedule+0x23/0xa0
Dec 3 02:02:51 kernel: netlink_table_grab.part.0+0x82/0xe0
Dec 3 02:02:51 kernel: ? __pfx_default_wake_function+0x10/0x10
Dec 3 02:02:51 kernel: netlink_release+0x36c/0x520
Dec 3 02:02:51 kernel: ? __pfx_netlink_hash+0x10/0x10
Dec 3 02:02:51 kernel: ? __pfx_netlink_compare+0x10/0x10
Dec 3 02:02:51 kernel: __sock_release+0x3a/0xc0
Dec 3 02:02:51 kernel: sock_close+0x11/0x20
Dec 3 02:02:51 kernel: __fput+0xdb/0x2a0
Dec 3 02:02:51 kernel: task_work_run+0x55/0x90
Dec 3 02:02:51 kernel: do_exit+0x279/0x4b0
Dec 3 02:02:51 kernel: do_group_exit+0x2c/0x80
Dec 3 02:02:51 kernel: __x64_sys_exit_group+0x14/0x20
Dec 3 02:02:51 kernel: x64_sys_call+0x1836/0x1840
Dec 3 02:02:51 kernel: do_syscall_64+0x79/0x150
Dec 3 02:02:51 kernel: ? get_page_from_freelist+0x333/0x630
Dec 3 02:02:51 kernel: ? __alloc_pages_noprof+0x186/0x350
Dec 3 02:02:51 kernel: ? __mod_memcg_lruvec_state+0x95/0x150
Dec 3 02:02:51 kernel: ? __lruvec_stat_mod_folio+0x80/0xd0
Dec 3 02:02:51 kernel: ? __folio_mod_stat+0x2a/0x80
Dec 3 02:02:51 kernel: ? _raw_spin_unlock+0xa/0x30
Dec 3 02:02:51 kernel: ? wp_page_copy+0x4e0/0x710
Dec 3 02:02:51 kernel: ? __pte_offset_map+0x17/0x160
Dec 3 02:02:51 kernel: ? _raw_spin_unlock+0xa/0x30
Dec 3 02:02:51 kernel: ? do_wp_page+0x666/0x760
Dec 3 02:02:51 kernel: ? __handle_mm_fault+0x326/0x730
Dec 3 02:02:51 kernel: ? __count_memcg_events+0x4f/0xe0
Dec 3 02:02:51 kernel: ? handle_mm_fault+0x18e/0x270
Dec 3 02:02:51 kernel: ? do_user_addr_fault+0x34c/0x680
Dec 3 02:02:51 kernel: ? clear_bhb_loop+0x45/0xa0
Dec 3 02:02:51 kernel: ? clear_bhb_loop+0x45/0xa0
Dec 3 02:02:51 kernel: ? clear_bhb_loop+0x45/0xa0
Dec 3 02:02:51 kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e
Dec 3 02:02:51 kernel: RIP: 0033:0x7f6cfa8d921d
Dec 3 02:02:51 kernel: RSP: 002b:00007ffd7e081b58 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
Dec 3 02:02:51 kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f6cfa8d921d
Dec 3 02:02:51 kernel: RDX: 00000000000000e7 RSI: fffffffffffffe88 RDI: 0000000000000000
Dec 3 02:02:51 kernel: RBP: 00007ffd7e081c00 R08: 00005615d9e932b0 R09: 0000000000000004
Dec 3 02:02:51 kernel: R10: 0000000000000018 R11: 0000000000000246 R12: 00007ffd7e081bb0
Dec 3 02:02:51 kernel: R13: 00005615d9d0e8b0 R14: 00005615d9e932b0 R15: 0000000000000000
Dec 3 02:02:51 kernel: </TASK>
Dec 3 02:02:51 kernel: INFO: task ip:998743 blocked for more than 245 seconds.
Dec 3 02:02:51 kernel: Not tainted 6.12.0 #64
Dec 3 02:02:51 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 3 02:02:51 kernel: task:ip state:D stack:0 pid:998743 tgid:998743 ppid:998736 flags:0x00000002
Dec 3 02:02:51 kernel: Call Trace:
Dec 3 02:02:51 kernel: <TASK>
Dec 3 02:02:51 kernel: __schedule+0x23f/0x620
Dec 3 02:02:51 kernel: schedule+0x23/0xa0
Dec 3 02:02:51 kernel: schedule_preempt_disabled+0x11/0x20
Dec 3 02:02:51 kernel: __mutex_lock.constprop.0+0x31d/0x650
Dec 3 02:02:51 kernel: rtnetlink_rcv_msg+0x111/0x410
Dec 3 02:02:51 kernel: ? avc_has_perm_noaudit+0x67/0xf0
Dec 3 02:02:51 kernel: ? __pfx_rtnetlink_rcv_msg+0x10/0x10
Dec 3 02:02:51 kernel: netlink_rcv_skb+0x54/0x100
Dec 3 02:02:51 kernel: netlink_unicast+0x243/0x370
Dec 3 02:02:51 kernel: netlink_sendmsg+0x1f6/0x430
Dec 3 02:02:51 kernel: __sys_sendto+0x1f3/0x200
Dec 3 02:02:51 kernel: ? do_read_fault+0x10a/0x1e0
Dec 3 02:02:51 kernel: ? do_fault+0x21f/0x380
Dec 3 02:02:51 kernel: ? pte_offset_map_nolock+0x2b/0xb0
Dec 3 02:02:51 kernel: __x64_sys_sendto+0x20/0x30
Dec 3 02:02:51 kernel: do_syscall_64+0x79/0x150
Dec 3 02:02:51 kernel: ? __count_memcg_events+0x4f/0xe0
Dec 3 02:02:51 kernel: ? handle_mm_fault+0x18e/0x270
Dec 3 02:02:51 kernel: ? do_user_addr_fault+0x34c/0x680
Dec 3 02:02:51 kernel: ? clear_bhb_loop+0x45/0xa0
Dec 3 02:02:51 kernel: ? clear_bhb_loop+0x45/0xa0
Dec 3 02:02:51 kernel: ? clear_bhb_loop+0x45/0xa0
Dec 3 02:02:51 kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e
Dec 3 02:02:51 kernel: RIP: 0033:0x7f287830f860
Dec 3 02:02:51 kernel: RSP: 002b:00007ffdd4f2c518 EFLAGS: 00000246 ORIG_RAX: 000000000000002c
Dec 3 02:02:51 kernel: RAX: ffffffffffffffda RBX: 00007ffdd4f2cc88 RCX: 00007f287830f860
Dec 3 02:02:51 kernel: RDX: 0000000000000020 RSI: 00007ffdd4f2c520 RDI: 0000000000000003
Dec 3 02:02:51 kernel: RBP: 00007ffdd4f2d7e4 R08: 0000000000000000 R09: 0000000000000000
Dec 3 02:02:51 kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000002
Dec 3 02:02:51 kernel: R13: 000055969d419040 R14: 00007ffdd4f2cc78 R15: 0000000000000004
Dec 3 02:02:51 kernel: </TASK>
Dec 3 02:02:51 kernel: Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings
Dec 3 02:05:06 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
Dec 3 02:05:06 kernel: rcu: 	17-...!: (0 ticks this GP) idle=b124/1/0x4000000000000002 softirq=2109356/2109356 fqs=0
Dec 3 02:05:06 kernel: rcu: 	(detected by 5, t=440015 jiffies, g=11126601, q=2760447 ncpus=40)
Dec 3 02:05:06 kernel: Sending NMI from CPU 5 to CPUs 17:
Dec 3 02:05:06 kernel: rcu: rcu_preempt kthread timer wakeup didn't happen for 450012 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200
Dec 3 02:05:06 kernel: rcu: 	Possible timer handling issue on cpu=17 timer-softirq=273727
Dec 3 02:05:06 kernel: rcu: rcu_preempt kthread starved for 450015 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=17
Dec 3 02:05:06 kernel: rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
Dec 3 02:05:06 kernel: rcu: RCU grace-period kthread stack dump:
Dec 3 02:05:06 kernel: task:rcu_preempt state:R stack:0 pid:18 tgid:18 ppid:2 flags:0x00004008
Dec 3 02:05:06 kernel: Call Trace:
Dec 3 02:05:06 kernel: <TASK>
Dec 3 02:05:06 kernel: ? __pick_next_task+0x3e/0x1a0
Dec 3 02:05:06 kernel: ? __schedule+0xfe/0x620
Dec 3 02:05:06 kernel: ? _raw_spin_unlock_irqrestore+0xa/0x30
Dec 3 02:05:06 kernel: ? __pfx_rcu_gp_kthread+0x10/0x10
Dec 3 02:05:06 kernel: ? schedule+0x23/0xa0
Dec 3 02:05:06 kernel: ? schedule_timeout+0x8b/0x160
Dec 3 02:05:06 kernel: ? __pfx_process_timeout+0x10/0x10
Dec 3 02:05:06 kernel: ? rcu_gp_fqs_loop+0x10b/0x500
Dec 3 02:05:06 kernel: ? rcu_gp_kthread+0x13f/0x1d0
Dec 3 02:05:06 kernel: ? kthread+0xcc/0x100
Dec 3 02:05:06 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:05:06 kernel: ? ret_from_fork+0x2d/0x50
Dec 3 02:05:06 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:05:06 kernel: ? ret_from_fork_asm+0x1a/0x30
Dec 3 02:05:06 kernel: </TASK>
Dec 3 02:08:16 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
Dec 3 02:08:16 kernel: rcu: 	17-...!: (0 ticks this GP) idle=b124/1/0x4000000000000002 softirq=2109356/2109356 fqs=0
Dec 3 02:08:16 kernel: rcu: 	(detected by 15, t=630022 jiffies, g=11126601, q=3942985 ncpus=40)
Dec 3 02:08:16 kernel: Sending NMI from CPU 15 to CPUs 17:
Dec 3 02:08:16 kernel: rcu: rcu_preempt kthread timer wakeup didn't happen for 640019 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200
Dec 3 02:08:16 kernel: rcu: 	Possible timer handling issue on cpu=17 timer-softirq=273727
Dec 3 02:08:16 kernel: rcu: rcu_preempt kthread starved for 640022 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=17
Dec 3 02:08:16 kernel: rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
Dec 3 02:08:16 kernel: rcu: RCU grace-period kthread stack dump:
Dec 3 02:08:16 kernel: task:rcu_preempt state:R stack:0 pid:18 tgid:18 ppid:2 flags:0x00004008
Dec 3 02:08:16 kernel: Call Trace:
Dec 3 02:08:16 kernel: <TASK>
Dec 3 02:08:16 kernel: ? __pick_next_task+0x3e/0x1a0
Dec 3 02:08:16 kernel: ? __schedule+0xfe/0x620
Dec 3 02:08:16 kernel: ? _raw_spin_unlock_irqrestore+0xa/0x30
Dec 3 02:08:16 kernel: ? __pfx_rcu_gp_kthread+0x10/0x10
Dec 3 02:08:16 kernel: ? schedule+0x23/0xa0
Dec 3 02:08:16 kernel: ? schedule_timeout+0x8b/0x160
Dec 3 02:08:16 kernel: ? __pfx_process_timeout+0x10/0x10
Dec 3 02:08:16 kernel: ? rcu_gp_fqs_loop+0x10b/0x500
Dec 3 02:08:16 kernel: ? rcu_gp_kthread+0x13f/0x1d0
Dec 3 02:08:16 kernel: ? kthread+0xcc/0x100
Dec 3 02:08:16 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:08:16 kernel: ? ret_from_fork+0x2d/0x50
Dec 3 02:08:16 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:08:16 kernel: ? ret_from_fork_asm+0x1a/0x30
Dec 3 02:08:16 kernel: </TASK>
Dec 3 02:11:26 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
Dec 3 02:11:26 kernel: rcu: 	17-...!: (0 ticks this GP) idle=b124/1/0x4000000000000002 softirq=2109356/2109356 fqs=0
Dec 3 02:11:26 kernel: rcu: 	(detected by 13, t=820027 jiffies, g=11126601, q=5124103 ncpus=40)
Dec 3 02:11:26 kernel: Sending NMI from CPU 13 to CPUs 17:
Dec 3 02:11:26 kernel: rcu: rcu_preempt kthread timer wakeup didn't happen for 830024 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200
Dec 3 02:11:26 kernel: rcu: 	Possible timer handling issue on cpu=17 timer-softirq=273727
Dec 3 02:11:26 kernel: rcu: rcu_preempt kthread starved for 830027 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=17
Dec 3 02:11:26 kernel: rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
Dec 3 02:11:26 kernel: rcu: RCU grace-period kthread stack dump:
Dec 3 02:11:26 kernel: task:rcu_preempt state:R stack:0 pid:18 tgid:18 ppid:2 flags:0x00004008
Dec 3 02:11:26 kernel: Call Trace:
Dec 3 02:11:26 kernel: <TASK>
Dec 3 02:11:26 kernel: ? __pick_next_task+0x3e/0x1a0
Dec 3 02:11:26 kernel: ? __schedule+0xfe/0x620
Dec 3 02:11:26 kernel: ? _raw_spin_unlock_irqrestore+0xa/0x30
Dec 3 02:11:26 kernel: ? __pfx_rcu_gp_kthread+0x10/0x10
Dec 3 02:11:26 kernel: ? schedule+0x23/0xa0
Dec 3 02:11:26 kernel: ? schedule_timeout+0x8b/0x160
Dec 3 02:11:26 kernel: ? __pfx_process_timeout+0x10/0x10
Dec 3 02:11:26 kernel: ? rcu_gp_fqs_loop+0x10b/0x500
Dec 3 02:11:26 kernel: ? rcu_gp_kthread+0x13f/0x1d0
Dec 3 02:11:26 kernel: ? kthread+0xcc/0x100
Dec 3 02:11:26 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:11:26 kernel: ? ret_from_fork+0x2d/0x50
Dec 3 02:11:26 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:11:26 kernel: ? ret_from_fork_asm+0x1a/0x30
Dec 3 02:11:26 kernel: </TASK>
Dec 3 02:14:36 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
Dec 3 02:14:36 kernel: rcu: 	17-...!: (0 ticks this GP) idle=b124/1/0x4000000000000002 softirq=2109356/2109356 fqs=0
Dec 3 02:14:36 kernel: rcu: 	(detected by 29, t=1010032 jiffies, g=11126601, q=6331978 ncpus=40)
Dec 3 02:14:36 kernel: Sending NMI from CPU 29 to CPUs 17:
Dec 3 02:14:36 kernel: rcu: rcu_preempt kthread timer wakeup didn't happen for 1020029 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200
Dec 3 02:14:36 kernel: rcu: 	Possible timer handling issue on cpu=17 timer-softirq=273727
Dec 3 02:14:36 kernel: rcu: rcu_preempt kthread starved for 1020032 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=17
Dec 3 02:14:36 kernel: rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
Dec 3 02:14:36 kernel: rcu: RCU grace-period kthread stack dump:
Dec 3 02:14:36 kernel: task:rcu_preempt state:R stack:0 pid:18 tgid:18 ppid:2 flags:0x00004008
Dec 3 02:14:36 kernel: Call Trace:
Dec 3 02:14:36 kernel: <TASK>
Dec 3 02:14:36 kernel: ? __pick_next_task+0x3e/0x1a0
Dec 3 02:14:36 kernel: ? __schedule+0xfe/0x620
Dec 3 02:14:36 kernel: ? _raw_spin_unlock_irqrestore+0xa/0x30
Dec 3 02:14:36 kernel: ? __pfx_rcu_gp_kthread+0x10/0x10
Dec 3 02:14:36 kernel: ? schedule+0x23/0xa0
Dec 3 02:14:36 kernel: ? schedule_timeout+0x8b/0x160
Dec 3 02:14:36 kernel: ? __pfx_process_timeout+0x10/0x10
Dec 3 02:14:36 kernel: ? rcu_gp_fqs_loop+0x10b/0x500
Dec 3 02:14:36 kernel: ? rcu_gp_kthread+0x13f/0x1d0
Dec 3 02:14:36 kernel: ? kthread+0xcc/0x100
Dec 3 02:14:36 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:14:36 kernel: ? ret_from_fork+0x2d/0x50
Dec 3 02:14:36 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:14:36 kernel: ? ret_from_fork_asm+0x1a/0x30
Dec 3 02:14:36 kernel: </TASK>
Dec 3 02:17:46 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
Dec 3 02:17:46 kernel: rcu: 	17-...!: (0 ticks this GP) idle=b124/1/0x4000000000000002 softirq=2109356/2109356 fqs=0
Dec 3 02:17:46 kernel: rcu: 	(detected by 8, t=1200037 jiffies, g=11126601, q=7535150 ncpus=40)
Dec 3 02:17:46 kernel: Sending NMI from CPU 8 to CPUs 17:
Dec 3 02:17:46 kernel: rcu: rcu_preempt kthread timer wakeup didn't happen for 1210034 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200
Dec 3 02:17:46 kernel: rcu: 	Possible timer handling issue on cpu=17 timer-softirq=273727
Dec 3 02:17:46 kernel: rcu: rcu_preempt kthread starved for 1210037 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=17
Dec 3 02:17:46 kernel: rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
Dec 3 02:17:46 kernel: rcu: RCU grace-period kthread stack dump:
Dec 3 02:17:46 kernel: task:rcu_preempt state:R stack:0 pid:18 tgid:18 ppid:2 flags:0x00004008
Dec 3 02:17:46 kernel: Call Trace:
Dec 3 02:17:46 kernel: <TASK>
Dec 3 02:17:46 kernel: ? __pick_next_task+0x3e/0x1a0
Dec 3 02:17:46 kernel: ? __schedule+0xfe/0x620
Dec 3 02:17:46 kernel: ? _raw_spin_unlock_irqrestore+0xa/0x30
Dec 3 02:17:46 kernel: ? __pfx_rcu_gp_kthread+0x10/0x10
Dec 3 02:17:46 kernel: ? schedule+0x23/0xa0
Dec 3 02:17:46 kernel: ? schedule_timeout+0x8b/0x160
Dec 3 02:17:46 kernel: ? __pfx_process_timeout+0x10/0x10
Dec 3 02:17:46 kernel: ? rcu_gp_fqs_loop+0x10b/0x500
Dec 3 02:17:46 kernel: ? rcu_gp_kthread+0x13f/0x1d0
Dec 3 02:17:46 kernel: ? kthread+0xcc/0x100
Dec 3 02:17:46 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:17:46 kernel: ? ret_from_fork+0x2d/0x50
Dec 3 02:17:46 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:17:46 kernel: ? ret_from_fork_asm+0x1a/0x30
Dec 3 02:17:46 kernel: </TASK>
Dec 3 02:20:56 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
Dec 3 02:20:56 kernel: rcu: 	17-...!: (0 ticks this GP) idle=b124/1/0x4000000000000002 softirq=2109356/2109356 fqs=0
Dec 3 02:20:56 kernel: rcu: 	(detected by 8, t=1390043 jiffies, g=11126601, q=8740806 ncpus=40)
Dec 3 02:20:56 kernel: Sending NMI from CPU 8 to CPUs 17:
Dec 3 02:20:56 kernel: rcu: rcu_preempt kthread timer wakeup didn't happen for 1400040 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200
Dec 3 02:20:56 kernel: rcu: 	Possible timer handling issue on cpu=17 timer-softirq=273727
Dec 3 02:20:56 kernel: rcu: rcu_preempt kthread starved for 1400043 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=17
Dec 3 02:20:56 kernel: rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
Dec 3 02:20:56 kernel: rcu: RCU grace-period kthread stack dump:
Dec 3 02:20:56 kernel: task:rcu_preempt state:R stack:0 pid:18 tgid:18 ppid:2 flags:0x00004008
Dec 3 02:20:56 kernel: Call Trace:
Dec 3 02:20:56 kernel: <TASK>
Dec 3 02:20:56 kernel: ? __pick_next_task+0x3e/0x1a0
Dec 3 02:20:56 kernel: ? __schedule+0xfe/0x620
Dec 3 02:20:56 kernel: ? _raw_spin_unlock_irqrestore+0xa/0x30
Dec 3 02:20:56 kernel: ? __pfx_rcu_gp_kthread+0x10/0x10
Dec 3 02:20:56 kernel: ? schedule+0x23/0xa0
Dec 3 02:20:56 kernel: ? schedule_timeout+0x8b/0x160
Dec 3 02:20:56 kernel: ? __pfx_process_timeout+0x10/0x10
Dec 3 02:20:56 kernel: ? rcu_gp_fqs_loop+0x10b/0x500
Dec 3 02:20:56 kernel: ? rcu_gp_kthread+0x13f/0x1d0
Dec 3 02:20:56 kernel: ? kthread+0xcc/0x100
Dec 3 02:20:56 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:20:56 kernel: ? ret_from_fork+0x2d/0x50
Dec 3 02:20:56 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:20:56 kernel: ? ret_from_fork_asm+0x1a/0x30
Dec 3 02:20:56 kernel: </TASK>
Dec 3 02:24:06 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
Dec 3 02:24:06 kernel: rcu: 	17-...!: (0 ticks this GP) idle=b124/1/0x4000000000000002 softirq=2109356/2109356 fqs=0
Dec 3 02:24:06 kernel: rcu: 	(detected by 23, t=1580048 jiffies, g=11126601, q=9924647 ncpus=40)
Dec 3 02:24:06 kernel: Sending NMI from CPU 23 to CPUs 17:
Dec 3 02:24:06 kernel: rcu: rcu_preempt kthread timer wakeup didn't happen for 1590045 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200
Dec 3 02:24:06 kernel: rcu: 	Possible timer handling issue on cpu=17 timer-softirq=273727
Dec 3 02:24:06 kernel: rcu: rcu_preempt kthread starved for 1590048 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=17
Dec 3 02:24:06 kernel: rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
Dec 3 02:24:06 kernel: rcu: RCU grace-period kthread stack dump:
Dec 3 02:24:06 kernel: task:rcu_preempt state:R stack:0 pid:18 tgid:18 ppid:2 flags:0x00004008
Dec 3 02:24:06 kernel: Call Trace:
Dec 3 02:24:06 kernel: <TASK>
Dec 3 02:24:06 kernel: ? __pick_next_task+0x3e/0x1a0
Dec 3 02:24:06 kernel: ? __schedule+0xfe/0x620
Dec 3 02:24:06 kernel: ? _raw_spin_unlock_irqrestore+0xa/0x30
Dec 3 02:24:06 kernel: ? __pfx_rcu_gp_kthread+0x10/0x10
Dec 3 02:24:06 kernel: ? schedule+0x23/0xa0
Dec 3 02:24:06 kernel: ? schedule_timeout+0x8b/0x160
Dec 3 02:24:06 kernel: ? __pfx_process_timeout+0x10/0x10
Dec 3 02:24:06 kernel: ? rcu_gp_fqs_loop+0x10b/0x500
Dec 3 02:24:06 kernel: ? rcu_gp_kthread+0x13f/0x1d0
Dec 3 02:24:06 kernel: ? kthread+0xcc/0x100
Dec 3 02:24:06 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:24:06 kernel: ? ret_from_fork+0x2d/0x50
Dec 3 02:24:06 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:24:06 kernel: ? ret_from_fork_asm+0x1a/0x30
Dec 3 02:24:06 kernel: </TASK>
Dec 3 02:27:16 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
Dec 3 02:27:16 kernel: rcu: 	17-...!: (0 ticks this GP) idle=b124/1/0x4000000000000002 softirq=2109356/2109356 fqs=0
Dec 3 02:27:16 kernel: rcu: 	(detected by 18, t=1770053 jiffies, g=11126601, q=11108597 ncpus=40)
Dec 3 02:27:16 kernel: Sending NMI from CPU 18 to CPUs 17:
Dec 3 02:27:16 kernel: rcu: rcu_preempt kthread timer wakeup didn't happen for 1780050 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200
Dec 3 02:27:16 kernel: rcu: 	Possible timer handling issue on cpu=17 timer-softirq=273727
Dec 3 02:27:16 kernel: rcu: rcu_preempt kthread starved for 1780053 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=17
Dec 3 02:27:16 kernel: rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
Dec 3 02:27:16 kernel: rcu: RCU grace-period kthread stack dump:
Dec 3 02:27:16 kernel: task:rcu_preempt state:R stack:0 pid:18 tgid:18 ppid:2 flags:0x00004008
Dec 3 02:27:16 kernel: Call Trace:
Dec 3 02:27:16 kernel: <TASK>
Dec 3 02:27:16 kernel: ? __pick_next_task+0x3e/0x1a0
Dec 3 02:27:16 kernel: ? __schedule+0xfe/0x620
Dec 3 02:27:16 kernel: ? _raw_spin_unlock_irqrestore+0xa/0x30
Dec 3 02:27:16 kernel: ? __pfx_rcu_gp_kthread+0x10/0x10
Dec 3 02:27:16 kernel: ? schedule+0x23/0xa0
Dec 3 02:27:16 kernel: ? schedule_timeout+0x8b/0x160
Dec 3 02:27:16 kernel: ? __pfx_process_timeout+0x10/0x10
Dec 3 02:27:16 kernel: ? rcu_gp_fqs_loop+0x10b/0x500
Dec 3 02:27:16 kernel: ? rcu_gp_kthread+0x13f/0x1d0
Dec 3 02:27:16 kernel: ? kthread+0xcc/0x100
Dec 3 02:27:16 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:27:16 kernel: ? ret_from_fork+0x2d/0x50
Dec 3 02:27:16 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:27:16 kernel: ? ret_from_fork_asm+0x1a/0x30
Dec 3 02:27:16 kernel: </TASK>
Dec 3 02:30:26 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
Dec 3 02:30:26 kernel: rcu: #01117-...!: (0 ticks this GP) idle=b124/1/0x4000000000000002 softirq=2109356/2109356 fqs=0
Dec 3 02:30:26 kernel: rcu: #011(detected by 1, t=1960059 jiffies, g=11126601, q=12298028 ncpus=40)
Dec 3 02:30:26 kernel: Sending NMI from CPU 1 to CPUs 17:
Dec 3 02:30:26 kernel: rcu: rcu_preempt kthread timer wakeup didn't happen for 1970056 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200
Dec 3 02:30:26 kernel: rcu: #011Possible timer handling issue on cpu=17 timer-softirq=273727
Dec 3 02:30:26 kernel: rcu: rcu_preempt kthread starved for 1970059 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=17
Dec 3 02:30:26 kernel: rcu: #011Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
Dec 3 02:30:26 kernel: rcu: RCU grace-period kthread stack dump:
Dec 3 02:30:26 kernel: task:rcu_preempt state:R stack:0 pid:18 tgid:18 ppid:2 flags:0x00004008
Dec 3 02:30:26 kernel: Call Trace:
Dec 3 02:30:26 kernel: <TASK>
Dec 3 02:30:26 kernel: ? __pick_next_task+0x3e/0x1a0
Dec 3 02:30:26 kernel: ? __schedule+0xfe/0x620
Dec 3 02:30:26 kernel: ? _raw_spin_unlock_irqrestore+0xa/0x30
Dec 3 02:30:26 kernel: ? __pfx_rcu_gp_kthread+0x10/0x10
Dec 3 02:30:26 kernel: ? schedule+0x23/0xa0
Dec 3 02:30:26 kernel: ? schedule_timeout+0x8b/0x160
Dec 3 02:30:26 kernel: ? __pfx_process_timeout+0x10/0x10
Dec 3 02:30:26 kernel: ? rcu_gp_fqs_loop+0x10b/0x500
Dec 3 02:30:26 kernel: ? rcu_gp_kthread+0x13f/0x1d0
Dec 3 02:30:26 kernel: ? kthread+0xcc/0x100
Dec 3 02:30:26 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:30:26 kernel: ? ret_from_fork+0x2d/0x50
Dec 3 02:30:26 kernel: ? __pfx_kthread+0x10/0x10
Dec 3 02:30:26 kernel: ? ret_from_fork_asm+0x1a/0x30
Dec 3 02:30:26 kernel: </TASK>
* Re: [v6.12] WARNING: at kernel/sched/deadline.c:1995 enqueue_dl_entity (task blocked for more than 28262 seconds)
2024-12-04 12:47 [v6.12] WARNING: at kernel/sched/deadline.c:1995 enqueue_dl_entity (task blocked for more than 28262 seconds) Ilya Maximets
@ 2024-12-06 15:18 ` Joel Fernandes
2024-12-06 16:57 ` Vineeth Remanan Pillai
0 siblings, 1 reply; 22+ messages in thread
From: Joel Fernandes @ 2024-12-06 15:18 UTC (permalink / raw)
To: Ilya Maximets
Cc: LKML, Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
vineethrp, shraash, vineeth
On Wed, Dec 04, 2024 at 01:47:44PM +0100, Ilya Maximets wrote:
> Hi. It seems like I'm hitting some bug in the scheduler.
>
> I'm running some tests with Open vSwitch on v6.12 kernel and some time
> 5 to 8 hours down the line I'm getting task blocked splats and I also
> have a WARNING triggered in the scheduler code right before that:
>
> Dec 3 22:19:55 kernel: WARNING: CPU: 27 PID: 3391271 at kernel/sched/deadline.c:1995 enqueue_dl_entity
>
> I have a lot of processes (kernel threads and userspace threads) stuck
> in DN, Ds, D+ and D states. It feels like IO tasks are being scheduled,
> but the scheduler never picks them up, or they are not being scheduled
> at all for whatever reason, and threads waiting on these tasks are stuck.
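For reference, tasks in uninterruptible sleep can be enumerated on a live system with a ps/awk filter like the one below (an illustrative one-liner, not part of the original report; column names assume procps `ps`):

```shell
# List tasks whose state field begins with D
# (covers the D, DN, Ds and D+ variants mentioned above).
ps -eo pid,stat,comm --no-headers | awk '$2 ~ /^D/ { print $1, $2, $3 }'
```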
>
> Dec 3 22:22:45 kernel: INFO: task khugepaged:330 blocked for more than 122 seconds.
> Dec 3 22:22:45 kernel: INFO: task ovs-monitor-ips:3479822 blocked for more than 122 seconds.
> Dec 3 22:22:45 kernel: INFO: task mv:3483072 blocked for more than 122 seconds.
> Dec 3 22:24:48 kernel: INFO: task khugepaged:330 blocked for more than 245 seconds.
> Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3479822 blocked for more than 245 seconds.
> Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3480383 blocked for more than 122 seconds.
> Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3481787 blocked for more than 122 seconds.
> Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3482631 blocked for more than 122 seconds.
> Dec 3 22:24:48 kernel: INFO: task mv:3483072 blocked for more than 245 seconds.
> Dec 3 22:26:51 kernel: INFO: task khugepaged:330 blocked for more than 368 seconds.
> ...
> Dec 4 06:11:45 kernel: INFO: task khugepaged:330 blocked for more than 28262 seconds.
>
> I have two separate instances where this behavior is reproduced. One is mostly
> around file systems, the other was more severe as multiple kernel threads got
> stuck in netlink code. The traces do not have much in common, except that most
> of the blocked tasks are stuck in scheduling. The system is also idle; nothing
> is really running. Some of these tasks are holding resources that make other
> tasks block on those resources as well.
>
> I seem to be able to reproduce the issue, but it takes 5-8 hours to do so.
>
CC'ing a few more from my team as well.
We haven't seen such an issue with the DL server, but we are also testing on
slightly older kernels.
It's coming from:
WARN_ON_ONCE(on_dl_rq(dl_se));
To the CC'd Googlers: does this issue look familiar at all?
thanks,
- Joel
> Best regards, Ilya Maximets.
>
>
>
> Below are logs from two instances. The first one is from v6.12 + one small
> unrelated patch for network namespaces. The second one is from pure v6.12,
> but it's not decoded, as I lost the vmlinux from that run; the system was also
> completely unresponsive when the issue was hit.
>
> ======================
> THE FIRST DECODED LOG:
> ======================
>
> Dec 3 22:19:55 kernel: WARNING: CPU: 27 PID: 3391271 at kernel/sched/deadline.c:1995 enqueue_dl_entity (kernel/sched/deadline.c:1995 (discriminator 1))
> Dec 3 22:19:55 kernel: Modules linked in: vport_vxlan vxlan vport_gre ip_gre ip_tunnel gre vport_geneve geneve ip6_udp_tunnel udp_tunnel openvswitch nf_conncount nf_nat tls esp4 veth nfnetlink_cttimeout nfnetlink nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 intel_rapl_msr intel_rapl_common rfkill intel_uncore_frequency_common skx_edac_common nfit libnvdimm kvm_intel kvm rapl vfat fat iTCO_wdt iTCO_vendor_support virtio_gpu virtio_dma_buf i2c_i801 drm_shmem_helper lpc_ich pcspkr i2c_smbus virtio_balloon drm_kms_helper joydev drm xfs libcrc32c ahci crct10dif_pclmul libahci crc32_pclmul virtio_net crc32c_intel libata ghash_clmulni_intel net_failover virtio_blk virtio_console failover serio_raw sunrpc dm_mirror dm_region_hash dm_log dm_mod fuse [last unloaded: ip6_udp_tunnel]
> Dec 3 22:19:55 kernel: CPU: 27 UID: 0 PID: 3391271 Comm: kworker/27:1 Kdump: loaded Not tainted 6.12.0+ #77
> Dec 3 22:19:55 kernel: Hardware name: Red Hat KVM/RHEL, BIOS 1.16.1-1.el9 04/01/2014
> Dec 3 22:19:55 kernel: Workqueue: 0x0 (mm_percpu_wq)
> Dec 3 22:19:55 kernel: RIP: 0010:enqueue_dl_entity (kernel/sched/deadline.c:1995 (discriminator 1))
> Dec 3 22:19:55 kernel: Code: d2 0f 89 14 fd ff ff e9 0d fb ff ff 45 85 ed 0f
> 84 65 fd ff ff 5b 44 89 e6 48 89 ef 5d 41 5c 41
> 5d 41 5e 41 5f e9 76 c4 ff ff <0f> 0b e9 bd f9 ff
> ff 0f 0b e9 1f fb ff ff 8b 83 b0 0a 00 00 48 8b
> All code
> ========
> 0: d2 0f rorb %cl,(%rdi)
> 2: 89 14 fd ff ff e9 0d mov %edx,0xde9ffff(,%rdi,8)
> 9: fb sti
> a: ff (bad)
> b: ff 45 85 incl -0x7b(%rbp)
> e: ed in (%dx),%eax
> f: 0f 84 65 fd ff ff je 0xfffffffffffffd7a
> 15: 5b pop %rbx
> 16: 44 89 e6 mov %r12d,%esi
> 19: 48 89 ef mov %rbp,%rdi
> 1c: 5d pop %rbp
> 1d: 41 5c pop %r12
> 1f: 41 5d pop %r13
> 21: 41 5e pop %r14
> 23: 41 5f pop %r15
> 25: e9 76 c4 ff ff jmpq 0xffffffffffffc4a0
> 2a:* 0f 0b ud2 <-- trapping instruction
> 2c: e9 bd f9 ff ff jmpq 0xfffffffffffff9ee
> 31: 0f 0b ud2
> 33: e9 1f fb ff ff jmpq 0xfffffffffffffb57
> 38: 8b 83 b0 0a 00 00 mov 0xab0(%rbx),%eax
> 3e: 48 rex.W
> 3f: 8b .byte 0x8b
>
> Code starting with the faulting instruction
> ===========================================
> 0: 0f 0b ud2
> 2: e9 bd f9 ff ff jmpq 0xfffffffffffff9c4
> 7: 0f 0b ud2
> 9: e9 1f fb ff ff jmpq 0xfffffffffffffb2d
> e: 8b 83 b0 0a 00 00 mov 0xab0(%rbx),%eax
> 14: 48 rex.W
> 15: 8b .byte 0x8b
> Dec 3 22:19:55 kernel: RSP: 0018:ffffacba8d87fbb8 EFLAGS: 00010086
> Dec 3 22:19:55 kernel: RAX: 0000000000000001 RBX: ffff8f5e3f7b65e8 RCX: 0000000000000000
> Dec 3 22:19:55 kernel: RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffff8f5e3f7b65e8
> Dec 3 22:19:55 kernel: RBP: ffff8f5e3f7b65e8 R08: 0000000000000000 R09: 0000000000000000
> Dec 3 22:19:55 kernel: R10: ffff8f5e3f7b5d00 R11: ffff8f4f132ed610 R12: 0000000000000001
> Dec 3 22:19:55 kernel: R13: 0000000000000001 R14: 00000000002dc6c0 R15: 0000000000000000
> Dec 3 22:19:55 kernel: FS: 0000000000000000(0000) GS:ffff8f5e3f780000(0000) knlGS:0000000000000000
> Dec 3 22:19:55 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> Dec 3 22:19:55 kernel: CR2: 00007faf752d93e0 CR3: 000000011879a001 CR4: 0000000000772ef0
> Dec 3 22:19:55 kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> Dec 3 22:19:55 kernel: DR3: 0000000000000000 DR6: 00000000ffff4ff0 DR7: 0000000000000400
> Dec 3 22:19:55 kernel: PKRU: 55555554
> Dec 3 22:19:55 kernel: Call Trace:
> Dec 3 22:19:55 kernel: <TASK>
> Dec 3 22:19:55 kernel: ? __warn (kernel/panic.c:748)
> Dec 3 22:19:55 kernel: ? enqueue_dl_entity (kernel/sched/deadline.c:1995 (discriminator 1))
> Dec 3 22:19:55 kernel: ? report_bug (lib/bug.c:201 lib/bug.c:219)
> Dec 3 22:19:55 kernel: ? handle_bug (arch/x86/kernel/traps.c:285)
> Dec 3 22:19:55 kernel: ? exc_invalid_op (arch/x86/kernel/traps.c:309 (discriminator 1))
> Dec 3 22:19:55 kernel: ? asm_exc_invalid_op (./arch/x86/include/asm/idtentry.h:621)
> Dec 3 22:19:55 kernel: ? enqueue_dl_entity (kernel/sched/deadline.c:1995 (discriminator 1))
> Dec 3 22:19:55 kernel: dl_server_start (kernel/sched/deadline.c:1651)
> Dec 3 22:19:55 kernel: enqueue_task_fair (kernel/sched/sched.h:2745 kernel/sched/fair.c:7048)
> Dec 3 22:19:55 kernel: enqueue_task (kernel/sched/core.c:2020)
> Dec 3 22:19:55 kernel: activate_task (kernel/sched/core.c:2069)
> Dec 3 22:19:55 kernel: sched_balance_rq (kernel/sched/fair.c:9642 kernel/sched/fair.c:9676 kernel/sched/fair.c:11753)
> Dec 3 22:19:55 kernel: sched_balance_newidle (kernel/sched/fair.c:12799)
> Dec 3 22:19:55 kernel: pick_next_task_fair (kernel/sched/fair.c:8950)
> Dec 3 22:19:55 kernel: __pick_next_task (kernel/sched/core.c:5972)
> Dec 3 22:19:55 kernel: __schedule (kernel/sched/core.c:6647)
> Dec 3 22:19:55 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
> Dec 3 22:19:55 kernel: worker_thread (kernel/workqueue.c:3344)
> Dec 3 22:19:55 kernel: ? __pfx_worker_thread (kernel/workqueue.c:3337)
> Dec 3 22:19:55 kernel: kthread (kernel/kthread.c:389)
> Dec 3 22:19:55 kernel: ? __pfx_kthread (kernel/kthread.c:342)
> Dec 3 22:19:55 kernel: ret_from_fork (arch/x86/kernel/process.c:147)
> Dec 3 22:19:55 kernel: ? __pfx_kthread (kernel/kthread.c:342)
> Dec 3 22:19:55 kernel: ret_from_fork_asm (arch/x86/entry/entry_64.S:257)
> Dec 3 22:19:55 kernel: </TASK>
> Dec 3 22:19:55 kernel: ---[ end trace 0000000000000000 ]---
> Dec 3 22:19:55 kernel: ovs-p-13: entered promiscuous mode
> Dec 3 22:22:45 kernel: INFO: task khugepaged:330 blocked for more than 122 seconds.
> Dec 3 22:22:45 kernel: Tainted: G W 6.12.0+ #77
> Dec 3 22:22:45 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Dec 3 22:22:45 kernel: task:khugepaged state:D stack:0 pid:330 tgid:330 ppid:2 flags:0x00004000
> Dec 3 22:22:45 kernel: Call Trace:
> Dec 3 22:22:45 kernel: <TASK>
> Dec 3 22:22:45 kernel: __schedule (kernel/sched/core.c:5328 kernel/sched/core.c:6693)
> Dec 3 22:22:45 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
> Dec 3 22:22:45 kernel: schedule_timeout (kernel/time/timer.c:2592)
> Dec 3 22:22:45 kernel: ? kvm_sched_clock_read (arch/x86/kernel/kvmclock.c:91)
> Dec 3 22:22:45 kernel: ? sched_clock (./arch/x86/include/asm/preempt.h:94 arch/x86/kernel/tsc.c:285)
> Dec 3 22:22:45 kernel: ? sched_clock_cpu (kernel/sched/clock.c:394)
> Dec 3 22:22:45 kernel: ? __smp_call_single_queue (kernel/smp.c:115 kernel/smp.c:411)
> Dec 3 22:22:45 kernel: __wait_for_common (kernel/sched/completion.c:95 kernel/sched/completion.c:116)
> Dec 3 22:22:45 kernel: ? __pfx_schedule_timeout (kernel/time/timer.c:2577)
> Dec 3 22:22:45 kernel: __flush_work (kernel/workqueue.c:4222)
> Dec 3 22:22:45 kernel: ? __pfx_wq_barrier_func (kernel/workqueue.c:3718)
> Dec 3 22:22:45 kernel: __lru_add_drain_all (mm/swap.c:873 (discriminator 3))
> Dec 3 22:22:45 kernel: khugepaged (mm/khugepaged.c:2499 mm/khugepaged.c:2571)
> Dec 3 22:22:45 kernel: ? __pfx_khugepaged (mm/khugepaged.c:2564)
> Dec 3 22:22:45 kernel: kthread (kernel/kthread.c:389)
> Dec 3 22:22:45 kernel: ? __pfx_kthread (kernel/kthread.c:342)
> Dec 3 22:22:45 kernel: ret_from_fork (arch/x86/kernel/process.c:147)
> Dec 3 22:22:45 kernel: ? __pfx_kthread (kernel/kthread.c:342)
> Dec 3 22:22:45 kernel: ret_from_fork_asm (arch/x86/entry/entry_64.S:257)
> Dec 3 22:22:45 kernel: </TASK>
> Dec 3 22:22:45 kernel: INFO: task ovs-monitor-ips:3479822 blocked for more than 122 seconds.
> Dec 3 22:22:45 kernel: Tainted: G W 6.12.0+ #77
> Dec 3 22:22:45 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Dec 3 22:22:45 kernel: task:ovs-monitor-ips state:D stack:0 pid:3479822 tgid:3479822 ppid:1 flags:0x00000002
> Dec 3 22:22:45 kernel: Call Trace:
> Dec 3 22:22:45 kernel: <TASK>
> Dec 3 22:22:45 kernel: __schedule (kernel/sched/core.c:5328 kernel/sched/core.c:6693)
> Dec 3 22:22:45 kernel: ? _raw_spin_unlock_irqrestore (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:150 kernel/locking/spinlock.c:194)
> Dec 3 22:22:45 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
> Dec 3 22:22:45 kernel: io_schedule (kernel/sched/core.c:7552 kernel/sched/core.c:7578)
> Dec 3 22:22:45 kernel: folio_wait_bit_common (mm/filemap.c:1301)
> Dec 3 22:22:45 kernel: ? xas_load (./include/linux/xarray.h:175 ./include/linux/xarray.h:1264 lib/xarray.c:240)
> Dec 3 22:22:45 kernel: ? __pfx_wake_page_function (mm/filemap.c:1117)
> Dec 3 22:22:45 kernel: folio_wait_writeback (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:555 mm/page-writeback.c:3187)
> Dec 3 22:22:45 kernel: truncate_inode_partial_folio (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:822 ./include/linux/page-flags.h:843 ./include/linux/mm.h:1115 ./include/linux/mm.h:2137 mm/truncate.c:209)
> Dec 3 22:22:45 kernel: truncate_inode_pages_range (mm/truncate.c:354)
> Dec 3 22:22:45 kernel: ? _raw_spin_unlock_irqrestore (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:150 kernel/locking/spinlock.c:194)
> Dec 3 22:22:45 kernel: ? poll_freewait (fs/select.c:140 (discriminator 3))
> Dec 3 22:22:45 kernel: ? do_select (fs/select.c:612)
> Dec 3 22:22:45 kernel: ? __pfx_pollwake (fs/select.c:209)
> Dec 3 22:22:45 kernel: ? __pfx_pollwake (fs/select.c:209)
> Dec 3 22:22:45 kernel: ? down_read (./arch/x86/include/asm/preempt.h:79 kernel/locking/rwsem.c:1246 kernel/locking/rwsem.c:1261 kernel/locking/rwsem.c:1526)
> Dec 3 22:22:45 kernel: ? unmap_mapping_range (mm/memory.c:3873)
> Dec 3 22:22:45 kernel: truncate_pagecache (mm/truncate.c:728)
> Dec 3 22:22:45 kernel: xfs_setattr_size+0x139/0x410 xfs
> Dec 3 22:22:45 kernel: xfs_vn_setattr+0x78/0x140 xfs
> Dec 3 22:22:45 kernel: notify_change (fs/attr.c:503)
> Dec 3 22:22:45 kernel: ? do_truncate (./include/linux/fs.h:820 fs/open.c:66)
> Dec 3 22:22:45 kernel: do_truncate (./include/linux/fs.h:820 fs/open.c:66)
> Dec 3 22:22:45 kernel: do_open (fs/namei.c:3395 fs/namei.c:3778)
> Dec 3 22:22:45 kernel: path_openat (fs/namei.c:3933)
> Dec 3 22:22:45 kernel: ? memcg_list_lru_alloc (mm/list_lru.c:475 mm/list_lru.c:489)
> Dec 3 22:22:45 kernel: do_filp_open (fs/namei.c:3960)
> Dec 3 22:22:45 kernel: ? _raw_spin_unlock_irqrestore (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:150 kernel/locking/spinlock.c:194)
> Dec 3 22:22:45 kernel: ? __skb_try_recv_datagram (net/core/datagram.c:267)
> Dec 3 22:22:45 kernel: ? kmem_cache_alloc_noprof (mm/slub.c:4115 mm/slub.c:4141)
> Dec 3 22:22:45 kernel: do_sys_openat2 (fs/open.c:1415)
> Dec 3 22:22:45 kernel: __x64_sys_openat (fs/open.c:1441)
> Dec 3 22:22:45 kernel: do_syscall_64 (arch/x86/entry/common.c:52 arch/x86/entry/common.c:83)
> Dec 3 22:22:45 kernel: ? unix_stream_recvmsg (net/unix/af_unix.c:2997)
> Dec 3 22:22:45 kernel: ? __pfx_unix_stream_read_actor (net/unix/af_unix.c:2957)
> Dec 3 22:22:45 kernel: ? sock_recvmsg (net/socket.c:1051 net/socket.c:1073)
> Dec 3 22:22:45 kernel: ? __sys_recvfrom (net/socket.c:2265)
> Dec 3 22:22:45 kernel: ? __pte_offset_map (./include/linux/pgtable.h:324 ./include/linux/pgtable.h:594 mm/pgtable-generic.c:289)
> Dec 3 22:22:45 kernel: ? _raw_spin_unlock (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:142 kernel/locking/spinlock.c:186)
> Dec 3 22:22:45 kernel: ? do_wp_page (./include/linux/vmstat.h:75 mm/memory.c:3263 mm/memory.c:3731)
> Dec 3 22:22:45 kernel: ? __handle_mm_fault (mm/memory.c:5909)
> Dec 3 22:22:45 kernel: ? syscall_exit_work (./include/linux/audit.h:357 kernel/entry/common.c:166)
> Dec 3 22:22:45 kernel: ? __count_memcg_events (mm/memcontrol.c:573 mm/memcontrol.c:836)
> Dec 3 22:22:45 kernel: ? handle_mm_fault (mm/memory.c:5951 mm/memory.c:6103)
> Dec 3 22:22:45 kernel: ? do_user_addr_fault (./include/linux/mm.h:730 arch/x86/mm/fault.c:1340)
> Dec 3 22:22:45 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
> Dec 3 22:22:45 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
> Dec 3 22:22:45 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
> Dec 3 22:22:45 kernel: entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
> Dec 3 22:22:45 kernel: RIP: 0033:0x7f8537cfd70b
> Dec 3 22:22:45 kernel: RSP: 002b:00007fff841fec70 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
> Dec 3 22:22:45 kernel: RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f8537cfd70b
> Dec 3 22:22:45 kernel: RDX: 0000000000080241 RSI: 00007f853707d290 RDI: 00000000ffffff9c
> Dec 3 22:22:45 kernel: RBP: 00007f853707d290 R08: 0000000000000000 R09: 0000000000000000
> Dec 3 22:22:45 kernel: R10: 00000000000001b6 R11: 0000000000000246 R12: 0000000000080241
> Dec 3 22:22:45 kernel: R13: 00007f85371b4ea0 R14: 0000000000080241 R15: 0000000000000000
> Dec 3 22:22:45 kernel: </TASK>
> Dec 3 22:22:45 kernel: INFO: task mv:3483072 blocked for more than 122 seconds.
> Dec 3 22:22:45 kernel: Tainted: G W 6.12.0+ #77
> Dec 3 22:22:45 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Dec 3 22:22:45 kernel: task:mv state:D stack:0 pid:3483072 tgid:3483072 ppid:3479428 flags:0x00000002
> Dec 3 22:22:45 kernel: Call Trace:
> Dec 3 22:22:45 kernel: <TASK>
> Dec 3 22:22:45 kernel: __schedule (kernel/sched/core.c:5328 kernel/sched/core.c:6693)
> Dec 3 22:22:45 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
> Dec 3 22:22:45 kernel: io_schedule (kernel/sched/core.c:7552 kernel/sched/core.c:7578)
> Dec 3 22:22:45 kernel: folio_wait_bit_common (mm/filemap.c:1301)
> Dec 3 22:22:45 kernel: ? xas_load (./include/linux/xarray.h:175 ./include/linux/xarray.h:1264 lib/xarray.c:240)
> Dec 3 22:22:45 kernel: ? __pfx_wake_page_function (mm/filemap.c:1117)
> Dec 3 22:22:45 kernel: folio_wait_writeback (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:555 mm/page-writeback.c:3187)
> Dec 3 22:22:45 kernel: truncate_inode_partial_folio (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:822 ./include/linux/page-flags.h:843 ./include/linux/mm.h:1115 ./include/linux/mm.h:2137 mm/truncate.c:209)
> Dec 3 22:22:45 kernel: truncate_inode_pages_range (mm/truncate.c:354)
> Dec 3 22:22:45 kernel: ? xfs_iunlock+0x108/0x200 xfs
> Dec 3 22:22:45 kernel: ? xfs_rename+0x368/0x990 xfs
> Dec 3 22:22:45 kernel: ? fsnotify_move (./include/linux/fsnotify.h:72 ./include/linux/fsnotify.h:64 ./include/linux/fsnotify.h:238)
> Dec 3 22:22:45 kernel: ? _raw_spin_unlock (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:142 kernel/locking/spinlock.c:186)
> Dec 3 22:22:45 kernel: ? locked_inode_to_wb_and_lock_list (fs/fs-writeback.c:355)
> Dec 3 22:22:45 kernel: evict (fs/inode.c:728)
> Dec 3 22:22:45 kernel: ? fsnotify_destroy_marks (fs/notify/mark.c:923)
> Dec 3 22:22:45 kernel: ? _atomic_dec_and_lock (./arch/x86/include/asm/atomic.h:67 ./include/linux/atomic/atomic-arch-fallback.h:2278 ./include/linux/atomic/atomic-instrumented.h:1384 lib/dec_and_lock.c:29)
> Dec 3 22:22:45 kernel: __dentry_kill (fs/dcache.c:618)
> Dec 3 22:22:45 kernel: dput (fs/dcache.c:857 fs/dcache.c:845)
> Dec 3 22:22:45 kernel: do_renameat2 (fs/namei.c:5174)
> Dec 3 22:22:45 kernel: __x64_sys_rename (fs/namei.c:5215)
> Dec 3 22:22:45 kernel: do_syscall_64 (arch/x86/entry/common.c:52 arch/x86/entry/common.c:83)
> Dec 3 22:22:45 kernel: ? do_user_addr_fault (./include/linux/mm.h:730 arch/x86/mm/fault.c:1340)
> Dec 3 22:22:45 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
> Dec 3 22:22:45 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
> Dec 3 22:22:45 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
> Dec 3 22:22:45 kernel: entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
> Dec 3 22:22:45 kernel: RIP: 0033:0x7f3185a5aadb
> Dec 3 22:22:45 kernel: RSP: 002b:00007ffe258c4548 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
> Dec 3 22:22:45 kernel: RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f3185a5aadb
> Dec 3 22:22:45 kernel: RDX: 0000000000000025 RSI: 00007ffe258c5cc3 RDI: 00007ffe258c5cb7
> Dec 3 22:22:45 kernel: RBP: 00007ffe258c48f0 R08: 00007ffe258c4670 R09: 00007ffe258c4ac0
> Dec 3 22:22:45 kernel: R10: 0000000000000100 R11: 0000000000000246 R12: 0000000000000011
> Dec 3 22:22:45 kernel: R13: 0000000000000000 R14: 00007ffe258c5cc3 R15: 00007ffe258c4ac0
> Dec 3 22:22:45 kernel: </TASK>
> Dec 3 22:24:48 kernel: INFO: task khugepaged:330 blocked for more than 245 seconds.
> Dec 3 22:24:48 kernel: Tainted: G W 6.12.0+ #77
> Dec 3 22:24:48 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Dec 3 22:24:48 kernel: task:khugepaged state:D stack:0 pid:330 tgid:330 ppid:2 flags:0x00004000
> Dec 3 22:24:48 kernel: Call Trace:
> Dec 3 22:24:48 kernel: <TASK>
> Dec 3 22:24:48 kernel: __schedule (kernel/sched/core.c:5328 kernel/sched/core.c:6693)
> Dec 3 22:24:48 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
> Dec 3 22:24:48 kernel: schedule_timeout (kernel/time/timer.c:2592)
> Dec 3 22:24:48 kernel: ? kvm_sched_clock_read (arch/x86/kernel/kvmclock.c:91)
> Dec 3 22:24:48 kernel: ? sched_clock (./arch/x86/include/asm/preempt.h:94 arch/x86/kernel/tsc.c:285)
> Dec 3 22:24:48 kernel: ? sched_clock_cpu (kernel/sched/clock.c:394)
> Dec 3 22:24:48 kernel: ? __smp_call_single_queue (kernel/smp.c:115 kernel/smp.c:411)
> Dec 3 22:24:48 kernel: __wait_for_common (kernel/sched/completion.c:95 kernel/sched/completion.c:116)
> Dec 3 22:24:48 kernel: ? __pfx_schedule_timeout (kernel/time/timer.c:2577)
> Dec 3 22:24:48 kernel: __flush_work (kernel/workqueue.c:4222)
> Dec 3 22:24:48 kernel: ? __pfx_wq_barrier_func (kernel/workqueue.c:3718)
> Dec 3 22:24:48 kernel: __lru_add_drain_all (mm/swap.c:873 (discriminator 3))
> Dec 3 22:24:48 kernel: khugepaged (mm/khugepaged.c:2499 mm/khugepaged.c:2571)
> Dec 3 22:24:48 kernel: ? __pfx_khugepaged (mm/khugepaged.c:2564)
> Dec 3 22:24:48 kernel: kthread (kernel/kthread.c:389)
> Dec 3 22:24:48 kernel: ? __pfx_kthread (kernel/kthread.c:342)
> Dec 3 22:24:48 kernel: ret_from_fork (arch/x86/kernel/process.c:147)
> Dec 3 22:24:48 kernel: ? __pfx_kthread (kernel/kthread.c:342)
> Dec 3 22:24:48 kernel: ret_from_fork_asm (arch/x86/entry/entry_64.S:257)
> Dec 3 22:24:48 kernel: </TASK>
> Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3479822 blocked for more than 245 seconds.
> Dec 3 22:24:48 kernel: Tainted: G W 6.12.0+ #77
> Dec 3 22:24:48 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Dec 3 22:24:48 kernel: task:ovs-monitor-ips state:D stack:0 pid:3479822 tgid:3479822 ppid:1 flags:0x00000002
> Dec 3 22:24:48 kernel: Call Trace:
> Dec 3 22:24:48 kernel: <TASK>
> Dec 3 22:24:48 kernel: __schedule (kernel/sched/core.c:5328 kernel/sched/core.c:6693)
> Dec 3 22:24:48 kernel: ? _raw_spin_unlock_irqrestore (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:150 kernel/locking/spinlock.c:194)
> Dec 3 22:24:48 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
> Dec 3 22:24:48 kernel: io_schedule (kernel/sched/core.c:7552 kernel/sched/core.c:7578)
> Dec 3 22:24:48 kernel: folio_wait_bit_common (mm/filemap.c:1301)
> Dec 3 22:24:48 kernel: ? xas_load (./include/linux/xarray.h:175 ./include/linux/xarray.h:1264 lib/xarray.c:240)
> Dec 3 22:24:48 kernel: ? __pfx_wake_page_function (mm/filemap.c:1117)
> Dec 3 22:24:48 kernel: folio_wait_writeback (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:555 mm/page-writeback.c:3187)
> Dec 3 22:24:48 kernel: truncate_inode_partial_folio (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:822 ./include/linux/page-flags.h:843 ./include/linux/mm.h:1115 ./include/linux/mm.h:2137 mm/truncate.c:209)
> Dec 3 22:24:48 kernel: truncate_inode_pages_range (mm/truncate.c:354)
> Dec 3 22:24:48 kernel: ? _raw_spin_unlock_irqrestore (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:150 kernel/locking/spinlock.c:194)
> Dec 3 22:24:48 kernel: ? poll_freewait (fs/select.c:140 (discriminator 3))
> Dec 3 22:24:48 kernel: ? do_select (fs/select.c:612)
> Dec 3 22:24:48 kernel: ? __pfx_pollwake (fs/select.c:209)
> Dec 3 22:24:48 kernel: ? __pfx_pollwake (fs/select.c:209)
> Dec 3 22:24:48 kernel: ? down_read (./arch/x86/include/asm/preempt.h:79 kernel/locking/rwsem.c:1246 kernel/locking/rwsem.c:1261 kernel/locking/rwsem.c:1526)
> Dec 3 22:24:48 kernel: ? unmap_mapping_range (mm/memory.c:3873)
> Dec 3 22:24:48 kernel: truncate_pagecache (mm/truncate.c:728)
> Dec 3 22:24:48 kernel: xfs_setattr_size+0x139/0x410 xfs
> Dec 3 22:24:48 kernel: xfs_vn_setattr+0x78/0x140 xfs
> Dec 3 22:24:48 kernel: notify_change (fs/attr.c:503)
> Dec 3 22:24:48 kernel: ? do_truncate (./include/linux/fs.h:820 fs/open.c:66)
> Dec 3 22:24:48 kernel: do_truncate (./include/linux/fs.h:820 fs/open.c:66)
> Dec 3 22:24:48 kernel: do_open (fs/namei.c:3395 fs/namei.c:3778)
> Dec 3 22:24:48 kernel: path_openat (fs/namei.c:3933)
> Dec 3 22:24:48 kernel: ? memcg_list_lru_alloc (mm/list_lru.c:475 mm/list_lru.c:489)
> Dec 3 22:24:48 kernel: do_filp_open (fs/namei.c:3960)
> Dec 3 22:24:48 kernel: ? _raw_spin_unlock_irqrestore (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:150 kernel/locking/spinlock.c:194)
> Dec 3 22:24:48 kernel: ? __skb_try_recv_datagram (net/core/datagram.c:267)
> Dec 3 22:24:48 kernel: ? kmem_cache_alloc_noprof (mm/slub.c:4115 mm/slub.c:4141)
> Dec 3 22:24:48 kernel: do_sys_openat2 (fs/open.c:1415)
> Dec 3 22:24:48 kernel: __x64_sys_openat (fs/open.c:1441)
> Dec 3 22:24:48 kernel: do_syscall_64 (arch/x86/entry/common.c:52 arch/x86/entry/common.c:83)
> Dec 3 22:24:48 kernel: ? unix_stream_recvmsg (net/unix/af_unix.c:2997)
> Dec 3 22:24:48 kernel: ? __pfx_unix_stream_read_actor (net/unix/af_unix.c:2957)
> Dec 3 22:24:48 kernel: ? sock_recvmsg (net/socket.c:1051 net/socket.c:1073)
> Dec 3 22:24:48 kernel: ? __sys_recvfrom (net/socket.c:2265)
> Dec 3 22:24:48 kernel: ? __pte_offset_map (./include/linux/pgtable.h:324 ./include/linux/pgtable.h:594 mm/pgtable-generic.c:289)
> Dec 3 22:24:48 kernel: ? _raw_spin_unlock (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:142 kernel/locking/spinlock.c:186)
> Dec 3 22:24:48 kernel: ? do_wp_page (./include/linux/vmstat.h:75 mm/memory.c:3263 mm/memory.c:3731)
> Dec 3 22:24:48 kernel: ? __handle_mm_fault (mm/memory.c:5909)
> Dec 3 22:24:48 kernel: ? syscall_exit_work (./include/linux/audit.h:357 kernel/entry/common.c:166)
> Dec 3 22:24:48 kernel: ? __count_memcg_events (mm/memcontrol.c:573 mm/memcontrol.c:836)
> Dec 3 22:24:48 kernel: ? handle_mm_fault (mm/memory.c:5951 mm/memory.c:6103)
> Dec 3 22:24:48 kernel: ? do_user_addr_fault (./include/linux/mm.h:730 arch/x86/mm/fault.c:1340)
> Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
> Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
> Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
> Dec 3 22:24:48 kernel: entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
> Dec 3 22:24:48 kernel: RIP: 0033:0x7f8537cfd70b
> Dec 3 22:24:48 kernel: RSP: 002b:00007fff841fec70 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
> Dec 3 22:24:48 kernel: RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f8537cfd70b
> Dec 3 22:24:48 kernel: RDX: 0000000000080241 RSI: 00007f853707d290 RDI: 00000000ffffff9c
> Dec 3 22:24:48 kernel: RBP: 00007f853707d290 R08: 0000000000000000 R09: 0000000000000000
> Dec 3 22:24:48 kernel: R10: 00000000000001b6 R11: 0000000000000246 R12: 0000000000080241
> Dec 3 22:24:48 kernel: R13: 00007f85371b4ea0 R14: 0000000000080241 R15: 0000000000000000
> Dec 3 22:24:48 kernel: </TASK>
> Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3480383 blocked for more than 122 seconds.
> Dec 3 22:24:48 kernel: Tainted: G W 6.12.0+ #77
> Dec 3 22:24:48 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Dec 3 22:24:48 kernel: task:ovs-monitor-ips state:D stack:0 pid:3480383 tgid:3480383 ppid:1 flags:0x00000002
> Dec 3 22:24:48 kernel: Call Trace:
> Dec 3 22:24:48 kernel: <TASK>
> Dec 3 22:24:48 kernel: __schedule (kernel/sched/core.c:5328 kernel/sched/core.c:6693)
> Dec 3 22:24:48 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
> Dec 3 22:24:48 kernel: io_schedule (kernel/sched/core.c:7552 kernel/sched/core.c:7578)
> Dec 3 22:24:48 kernel: folio_wait_bit_common (mm/filemap.c:1301)
> Dec 3 22:24:48 kernel: ? xas_load (./include/linux/xarray.h:175 ./include/linux/xarray.h:1264 lib/xarray.c:240)
> Dec 3 22:24:48 kernel: ? __pfx_wake_page_function (mm/filemap.c:1117)
> Dec 3 22:24:48 kernel: folio_wait_writeback (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:555 mm/page-writeback.c:3187)
> Dec 3 22:24:48 kernel: truncate_inode_partial_folio (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:822 ./include/linux/page-flags.h:843 ./include/linux/mm.h:1115 ./include/linux/mm.h:2137 mm/truncate.c:209)
> Dec 3 22:24:48 kernel: truncate_inode_pages_range (mm/truncate.c:354)
> Dec 3 22:24:48 kernel: ? __pfx_pollwake (fs/select.c:209)
> Dec 3 22:24:48 kernel: ? arch_stack_walk (arch/x86/kernel/stacktrace.c:24)
> Dec 3 22:24:48 kernel: ? __is_insn_slot_addr (kernel/kprobes.c:299)
> Dec 3 22:24:48 kernel: ? is_bpf_text_address (kernel/bpf/core.c:768)
> Dec 3 22:24:48 kernel: ? kernel_text_address (kernel/extable.c:97 kernel/extable.c:94)
> Dec 3 22:24:48 kernel: ? __kernel_text_address (kernel/extable.c:79)
> Dec 3 22:24:48 kernel: ? unwind_get_return_address (arch/x86/kernel/unwind_orc.c:369 arch/x86/kernel/unwind_orc.c:364)
> Dec 3 22:24:48 kernel: ? __pfx_stack_trace_consume_entry (kernel/stacktrace.c:83)
> Dec 3 22:24:48 kernel: ? arch_stack_walk (arch/x86/kernel/stacktrace.c:26)
> Dec 3 22:24:48 kernel: ? kvm_sched_clock_read (arch/x86/kernel/kvmclock.c:91)
> Dec 3 22:24:48 kernel: ? local_clock_noinstr (kernel/sched/clock.c:301)
> Dec 3 22:24:48 kernel: ? local_clock (./arch/x86/include/asm/preempt.h:94 kernel/sched/clock.c:316)
> Dec 3 22:24:48 kernel: ? metadata_update_state (mm/kfence/core.c:313)
> Dec 3 22:24:48 kernel: ? inode_init_once (fs/inode.c:405 fs/inode.c:431)
> Dec 3 22:24:48 kernel: ? kfence_guarded_alloc (mm/kfence/core.c:502)
> Dec 3 22:24:48 kernel: ? sock_alloc_inode (net/socket.c:307)
> Dec 3 22:24:48 kernel: ? __kfence_alloc (mm/kfence/core.c:1136)
> Dec 3 22:24:48 kernel: ? __kfence_alloc (mm/kfence/core.c:209 mm/kfence/core.c:1130)
> Dec 3 22:24:48 kernel: ? kmem_cache_alloc_lru_noprof (mm/slub.c:4119 mm/slub.c:4153)
> Dec 3 22:24:48 kernel: ? sock_alloc_inode (net/socket.c:307)
> Dec 3 22:24:48 kernel: ? down_read (./arch/x86/include/asm/preempt.h:79 kernel/locking/rwsem.c:1246 kernel/locking/rwsem.c:1261 kernel/locking/rwsem.c:1526)
> Dec 3 22:24:48 kernel: ? unmap_mapping_range (mm/memory.c:3873)
> Dec 3 22:24:48 kernel: truncate_pagecache (mm/truncate.c:728)
> Dec 3 22:24:48 kernel: xfs_setattr_size+0x139/0x410 xfs
> Dec 3 22:24:48 kernel: xfs_vn_setattr+0x78/0x140 xfs
> Dec 3 22:24:48 kernel: notify_change (fs/attr.c:503)
> Dec 3 22:24:48 kernel: ? do_truncate (./include/linux/fs.h:820 fs/open.c:66)
> Dec 3 22:24:48 kernel: do_truncate (./include/linux/fs.h:820 fs/open.c:66)
> Dec 3 22:24:48 kernel: do_open (fs/namei.c:3395 fs/namei.c:3778)
> Dec 3 22:24:48 kernel: path_openat (fs/namei.c:3933)
> Dec 3 22:24:48 kernel: do_filp_open (fs/namei.c:3960)
> Dec 3 22:24:48 kernel: ? __pfx_unix_stream_read_actor (net/unix/af_unix.c:2957)
> Dec 3 22:24:48 kernel: ? sock_recvmsg (net/socket.c:1051 net/socket.c:1073)
> Dec 3 22:24:48 kernel: ? kmem_cache_alloc_noprof (mm/slub.c:4115 mm/slub.c:4141)
> Dec 3 22:24:48 kernel: do_sys_openat2 (fs/open.c:1415)
> Dec 3 22:24:48 kernel: __x64_sys_openat (fs/open.c:1441)
> Dec 3 22:24:48 kernel: do_syscall_64 (arch/x86/entry/common.c:52 arch/x86/entry/common.c:83)
> Dec 3 22:24:48 kernel: ? syscall_exit_work (./include/linux/audit.h:357 kernel/entry/common.c:166)
> Dec 3 22:24:48 kernel: ? syscall_exit_to_user_mode (./arch/x86/include/asm/irqflags.h:37 ./arch/x86/include/asm/irqflags.h:92 ./include/linux/entry-common.h:231 kernel/entry/common.c:206 kernel/entry/common.c:218)
> Dec 3 22:24:48 kernel: ? do_syscall_64 (arch/x86/entry/common.c:102)
> Dec 3 22:24:48 kernel: ? rseq_ip_fixup (kernel/rseq.c:257 kernel/rseq.c:291)
> Dec 3 22:24:48 kernel: ? ktime_get_ts64 (kernel/time/timekeeping.c:195 (discriminator 3) kernel/time/timekeeping.c:395 (discriminator 3) kernel/time/timekeeping.c:403 (discriminator 3) kernel/time/timekeeping.c:983 (discriminator 3))
> Dec 3 22:24:48 kernel: ? switch_fpu_return (arch/x86/kernel/fpu/context.h:49 arch/x86/kernel/fpu/context.h:76 arch/x86/kernel/fpu/core.c:787)
> Dec 3 22:24:48 kernel: ? syscall_exit_to_user_mode (./arch/x86/include/asm/entry-common.h:58 ./arch/x86/include/asm/entry-common.h:65 ./include/linux/entry-common.h:330 kernel/entry/common.c:207 kernel/entry/common.c:218)
> Dec 3 22:24:48 kernel: ? __pte_offset_map (./include/linux/pgtable.h:324 ./include/linux/pgtable.h:594 mm/pgtable-generic.c:289)
> Dec 3 22:24:48 kernel: ? _raw_spin_unlock (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:142 kernel/locking/spinlock.c:186)
> Dec 3 22:24:48 kernel: ? do_wp_page (./include/linux/vmstat.h:75 mm/memory.c:3263 mm/memory.c:3731)
> Dec 3 22:24:48 kernel: ? __handle_mm_fault (mm/memory.c:5909)
> Dec 3 22:24:48 kernel: ? __count_memcg_events (mm/memcontrol.c:573 mm/memcontrol.c:836)
> Dec 3 22:24:48 kernel: ? handle_mm_fault (mm/memory.c:5951 mm/memory.c:6103)
> Dec 3 22:24:48 kernel: ? do_user_addr_fault (./include/linux/mm.h:730 arch/x86/mm/fault.c:1340)
> Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
> Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
> Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
> Dec 3 22:24:48 kernel: entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
> Dec 3 22:24:48 kernel: RIP: 0033:0x7f85c7afd70b
> Dec 3 22:24:48 kernel: RSP: 002b:00007ffe53f6c870 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
> Dec 3 22:24:48 kernel: RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f85c7afd70b
> Dec 3 22:24:48 kernel: RDX: 0000000000080241 RSI: 00007f85c6d74290 RDI: 00000000ffffff9c
> Dec 3 22:24:48 kernel: RBP: 00007f85c6d74290 R08: 0000000000000000 R09: 0000000000000000
> Dec 3 22:24:48 kernel: R10: 00000000000001b6 R11: 0000000000000246 R12: 0000000000080241
> Dec 3 22:24:48 kernel: R13: 00007f85c6e01ea0 R14: 0000000000080241 R15: 0000000000000000
> Dec 3 22:24:48 kernel: </TASK>
> Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3481787 blocked for more than 122 seconds.
> Dec 3 22:24:48 kernel: Tainted: G W 6.12.0+ #77
> Dec 3 22:24:48 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Dec 3 22:24:48 kernel: task:ovs-monitor-ips state:D stack:0 pid:3481787 tgid:3481787 ppid:1 flags:0x00000002
> Dec 3 22:24:48 kernel: Call Trace:
> Dec 3 22:24:48 kernel: <TASK>
> Dec 3 22:24:48 kernel: __schedule (kernel/sched/core.c:5328 kernel/sched/core.c:6693)
> Dec 3 22:24:48 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
> Dec 3 22:24:48 kernel: io_schedule (kernel/sched/core.c:7552 kernel/sched/core.c:7578)
> Dec 3 22:24:48 kernel: folio_wait_bit_common (mm/filemap.c:1301)
> Dec 3 22:24:48 kernel: ? xas_load (./include/linux/xarray.h:175 ./include/linux/xarray.h:1264 lib/xarray.c:240)
> Dec 3 22:24:48 kernel: ? __pfx_wake_page_function (mm/filemap.c:1117)
> Dec 3 22:24:48 kernel: folio_wait_writeback (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:555 mm/page-writeback.c:3187)
> Dec 3 22:24:48 kernel: truncate_inode_partial_folio (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:822 ./include/linux/page-flags.h:843 ./include/linux/mm.h:1115 ./include/linux/mm.h:2137 mm/truncate.c:209)
> Dec 3 22:24:48 kernel: truncate_inode_pages_range (mm/truncate.c:354)
> Dec 3 22:24:48 kernel: ? sock_alloc_inode (net/socket.c:307)
> Dec 3 22:24:48 kernel: ? __kfence_alloc (mm/kfence/core.c:1136)
> Dec 3 22:24:48 kernel: ? __kfence_alloc (mm/kfence/core.c:209 mm/kfence/core.c:1130)
> Dec 3 22:24:48 kernel: ? kmem_cache_alloc_lru_noprof (mm/slub.c:4119 mm/slub.c:4153)
> Dec 3 22:24:48 kernel: ? sock_alloc_inode (net/socket.c:307)
> Dec 3 22:24:48 kernel: ? alloc_inode (fs/inode.c:265)
> Dec 3 22:24:48 kernel: ? sock_alloc (net/socket.c:634)
> Dec 3 22:24:48 kernel: ? do_accept (net/socket.c:1929)
> Dec 3 22:24:48 kernel: ? __sys_accept4 (net/socket.c:1992 net/socket.c:2022)
> Dec 3 22:24:48 kernel: ? __x64_sys_accept4 (net/socket.c:2033 net/socket.c:2030 net/socket.c:2030)
> Dec 3 22:24:48 kernel: ? do_syscall_64 (arch/x86/entry/common.c:52 arch/x86/entry/common.c:83)
> Dec 3 22:24:48 kernel: ? entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
> Dec 3 22:24:48 kernel: ? __pfx_pollwake (fs/select.c:209)
> Dec 3 22:24:48 kernel: ? xas_load (./include/linux/xarray.h:175 ./include/linux/xarray.h:1264 lib/xarray.c:240)
> Dec 3 22:24:48 kernel: ? xfrm_state_mtu (net/xfrm/xfrm_state.c:2842 net/xfrm/xfrm_state.c:2824)
> Dec 3 22:24:48 kernel: ? memcg_list_lru_alloc (mm/list_lru.c:475 mm/list_lru.c:489)
> Dec 3 22:24:48 kernel: ? down_read (./arch/x86/include/asm/preempt.h:79 kernel/locking/rwsem.c:1246 kernel/locking/rwsem.c:1261 kernel/locking/rwsem.c:1526)
> Dec 3 22:24:48 kernel: ? unmap_mapping_range (mm/memory.c:3873)
> Dec 3 22:24:48 kernel: truncate_pagecache (mm/truncate.c:728)
> Dec 3 22:24:48 kernel: xfs_setattr_size+0x139/0x410 xfs
> Dec 3 22:24:48 kernel: xfs_vn_setattr+0x78/0x140 xfs
> Dec 3 22:24:48 kernel: notify_change (fs/attr.c:503)
> Dec 3 22:24:48 kernel: ? do_truncate (./include/linux/fs.h:820 fs/open.c:66)
> Dec 3 22:24:48 kernel: do_truncate (./include/linux/fs.h:820 fs/open.c:66)
> Dec 3 22:24:48 kernel: do_open (fs/namei.c:3395 fs/namei.c:3778)
> Dec 3 22:24:48 kernel: path_openat (fs/namei.c:3933)
> Dec 3 22:24:48 kernel: do_filp_open (fs/namei.c:3960)
> Dec 3 22:24:48 kernel: ? do_wp_page (./include/linux/vmstat.h:75 mm/memory.c:3263 mm/memory.c:3731)
> Dec 3 22:24:48 kernel: ? __handle_mm_fault (mm/memory.c:5909)
> Dec 3 22:24:48 kernel: ? kmem_cache_alloc_noprof (mm/slub.c:4115 mm/slub.c:4141)
> Dec 3 22:24:48 kernel: do_sys_openat2 (fs/open.c:1415)
> Dec 3 22:24:48 kernel: __x64_sys_openat (fs/open.c:1441)
> Dec 3 22:24:48 kernel: do_syscall_64 (arch/x86/entry/common.c:52 arch/x86/entry/common.c:83)
> Dec 3 22:24:48 kernel: ? do_user_addr_fault (./include/linux/mm.h:730 arch/x86/mm/fault.c:1340)
> Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
> Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
> Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
> Dec 3 22:24:48 kernel: entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
> Dec 3 22:24:48 kernel: RIP: 0033:0x7f79468fd70b
> Dec 3 22:24:48 kernel: RSP: 002b:00007ffd27a6c700 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
> Dec 3 22:24:48 kernel: RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f79468fd70b
> Dec 3 22:24:48 kernel: RDX: 0000000000080241 RSI: 00007f7945b64290 RDI: 00000000ffffff9c
> Dec 3 22:24:48 kernel: RBP: 00007f7945b64290 R08: 0000000000000000 R09: 0000000000000000
> Dec 3 22:24:48 kernel: R10: 00000000000001b6 R11: 0000000000000246 R12: 0000000000080241
> Dec 3 22:24:48 kernel: R13: 00007f7945bf2ea0 R14: 0000000000080241 R15: 0000000000000000
> Dec 3 22:24:48 kernel: </TASK>
> Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3482631 blocked for more than 122 seconds.
> Dec 3 22:24:48 kernel: Tainted: G W 6.12.0+ #77
> Dec 3 22:24:48 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Dec 3 22:24:48 kernel: task:ovs-monitor-ips state:D stack:0 pid:3482631 tgid:3482631 ppid:1 flags:0x00000002
> Dec 3 22:24:48 kernel: Call Trace:
> Dec 3 22:24:48 kernel: <TASK>
> Dec 3 22:24:48 kernel: __schedule (kernel/sched/core.c:5328 kernel/sched/core.c:6693)
> Dec 3 22:24:48 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
> Dec 3 22:24:48 kernel: io_schedule (kernel/sched/core.c:7552 kernel/sched/core.c:7578)
> Dec 3 22:24:48 kernel: folio_wait_bit_common (mm/filemap.c:1301)
> Dec 3 22:24:48 kernel: ? xas_load (./include/linux/xarray.h:175 ./include/linux/xarray.h:1264 lib/xarray.c:240)
> Dec 3 22:24:48 kernel: ? __pfx_wake_page_function (mm/filemap.c:1117)
> Dec 3 22:24:48 kernel: folio_wait_writeback (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:555 mm/page-writeback.c:3187)
> Dec 3 22:24:48 kernel: truncate_inode_partial_folio (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:822 ./include/linux/page-flags.h:843 ./include/linux/mm.h:1115 ./include/linux/mm.h:2137 mm/truncate.c:209)
> Dec 3 22:24:48 kernel: truncate_inode_pages_range (mm/truncate.c:354)
> Dec 3 22:24:48 kernel: ? _raw_spin_unlock (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:142 kernel/locking/spinlock.c:186)
> Dec 3 22:24:48 kernel: ? finish_task_switch.isra.0 (./arch/x86/include/asm/irqflags.h:42 ./arch/x86/include/asm/irqflags.h:97 kernel/sched/sched.h:1518 kernel/sched/core.c:5082 kernel/sched/core.c:5200)
> Dec 3 22:24:48 kernel: ? __schedule (kernel/sched/core.c:6699)
> Dec 3 22:24:48 kernel: ? xfrm_state_mtu (net/xfrm/xfrm_state.c:2842 net/xfrm/xfrm_state.c:2824)
> Dec 3 22:24:48 kernel: ? schedule_hrtimeout_range_clock (kernel/time/hrtimer.c:1332 kernel/time/hrtimer.c:1449 kernel/time/hrtimer.c:2283)
> Dec 3 22:24:48 kernel: ? remove_wait_queue (./include/linux/list.h:215 ./include/linux/list.h:229 ./include/linux/wait.h:207 kernel/sched/wait.c:55)
> Dec 3 22:24:48 kernel: ? _raw_spin_unlock_irqrestore (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:150 kernel/locking/spinlock.c:194)
> Dec 3 22:24:48 kernel: ? poll_freewait (fs/select.c:140 (discriminator 3))
> Dec 3 22:24:48 kernel: ? do_select (fs/select.c:612)
> Dec 3 22:24:48 kernel: ? down_read (./arch/x86/include/asm/preempt.h:79 kernel/locking/rwsem.c:1246 kernel/locking/rwsem.c:1261 kernel/locking/rwsem.c:1526)
> Dec 3 22:24:48 kernel: ? unmap_mapping_range (mm/memory.c:3873)
> Dec 3 22:24:48 kernel: truncate_pagecache (mm/truncate.c:728)
> Dec 3 22:24:48 kernel: xfs_setattr_size+0x139/0x410 xfs
> Dec 3 22:24:48 kernel: xfs_vn_setattr+0x78/0x140 xfs
> Dec 3 22:24:48 kernel: notify_change (fs/attr.c:503)
> Dec 3 22:24:48 kernel: ? do_truncate (./include/linux/fs.h:820 fs/open.c:66)
> Dec 3 22:24:48 kernel: do_truncate (./include/linux/fs.h:820 fs/open.c:66)
> Dec 3 22:24:48 kernel: do_open (fs/namei.c:3395 fs/namei.c:3778)
> Dec 3 22:24:48 kernel: path_openat (fs/namei.c:3933)
> Dec 3 22:24:48 kernel: do_filp_open (fs/namei.c:3960)
> Dec 3 22:24:48 kernel: ? unix_stream_recvmsg (net/unix/af_unix.c:2997)
> Dec 3 22:24:48 kernel: ? __pfx_unix_stream_read_actor (net/unix/af_unix.c:2957)
> Dec 3 22:24:48 kernel: ? kmem_cache_alloc_noprof (mm/slub.c:4115 mm/slub.c:4141)
> Dec 3 22:24:48 kernel: do_sys_openat2 (fs/open.c:1415)
> Dec 3 22:24:48 kernel: __x64_sys_openat (fs/open.c:1441)
> Dec 3 22:24:48 kernel: do_syscall_64 (arch/x86/entry/common.c:52 arch/x86/entry/common.c:83)
> Dec 3 22:24:48 kernel: ? syscall_exit_work (./include/linux/audit.h:357 kernel/entry/common.c:166)
> Dec 3 22:24:48 kernel: ? syscall_exit_to_user_mode (./arch/x86/include/asm/irqflags.h:37 ./arch/x86/include/asm/irqflags.h:92 ./include/linux/entry-common.h:231 kernel/entry/common.c:206 kernel/entry/common.c:218)
> Dec 3 22:24:48 kernel: ? do_syscall_64 (arch/x86/entry/common.c:102)
> Dec 3 22:24:48 kernel: ? _raw_spin_unlock_irqrestore (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:150 kernel/locking/spinlock.c:194)
> Dec 3 22:24:48 kernel: ? __skb_try_recv_datagram (net/core/datagram.c:267)
> Dec 3 22:24:48 kernel: ? __skb_recv_datagram (net/core/datagram.c:296)
> Dec 3 22:24:48 kernel: ? __memcg_slab_free_hook (mm/memcontrol.c:3004 (discriminator 2))
> Dec 3 22:24:48 kernel: ? __pte_offset_map (./include/linux/pgtable.h:324 ./include/linux/pgtable.h:594 mm/pgtable-generic.c:289)
> Dec 3 22:24:48 kernel: ? _raw_spin_unlock (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:142 kernel/locking/spinlock.c:186)
> Dec 3 22:24:48 kernel: ? do_wp_page (./include/linux/vmstat.h:75 mm/memory.c:3263 mm/memory.c:3731)
> Dec 3 22:24:48 kernel: ? __handle_mm_fault (mm/memory.c:5909)
> Dec 3 22:24:48 kernel: ? __count_memcg_events (mm/memcontrol.c:573 mm/memcontrol.c:836)
> Dec 3 22:24:48 kernel: ? handle_mm_fault (mm/memory.c:5951 mm/memory.c:6103)
> Dec 3 22:24:48 kernel: ? do_user_addr_fault (./include/linux/mm.h:730 arch/x86/mm/fault.c:1340)
> Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
> Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
> Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
> Dec 3 22:24:48 kernel: entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
> Dec 3 22:24:48 kernel: RIP: 0033:0x7f10818fd70b
> Dec 3 22:24:48 kernel: RSP: 002b:00007fff83e83f80 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
> Dec 3 22:24:48 kernel: RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f10818fd70b
> Dec 3 22:24:48 kernel: RDX: 0000000000080241 RSI: 00007f1080d5a8a0 RDI: 00000000ffffff9c
> Dec 3 22:24:48 kernel: RBP: 00007f1080d5a8a0 R08: 0000000000000000 R09: 0000000000000000
> Dec 3 22:24:48 kernel: R10: 00000000000001b6 R11: 0000000000000246 R12: 0000000000080241
> Dec 3 22:24:48 kernel: R13: 00007f1080d50bb0 R14: 0000000000080241 R15: 0000000000000000
> Dec 3 22:24:48 kernel: </TASK>
> Dec 3 22:24:48 kernel: INFO: task mv:3483072 blocked for more than 245 seconds.
> Dec 3 22:24:48 kernel: Tainted: G W 6.12.0+ #77
> Dec 3 22:24:48 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Dec 3 22:24:48 kernel: task:mv state:D stack:0 pid:3483072 tgid:3483072 ppid:3479428 flags:0x00000002
> Dec 3 22:24:48 kernel: Call Trace:
> Dec 3 22:24:48 kernel: <TASK>
> Dec 3 22:24:48 kernel: __schedule (kernel/sched/core.c:5328 kernel/sched/core.c:6693)
> Dec 3 22:24:48 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
> Dec 3 22:24:48 kernel: io_schedule (kernel/sched/core.c:7552 kernel/sched/core.c:7578)
> Dec 3 22:24:48 kernel: folio_wait_bit_common (mm/filemap.c:1301)
> Dec 3 22:24:48 kernel: ? xas_load (./include/linux/xarray.h:175 ./include/linux/xarray.h:1264 lib/xarray.c:240)
> Dec 3 22:24:48 kernel: ? __pfx_wake_page_function (mm/filemap.c:1117)
> Dec 3 22:24:48 kernel: folio_wait_writeback (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:555 mm/page-writeback.c:3187)
> Dec 3 22:24:48 kernel: truncate_inode_partial_folio (./arch/x86/include/asm/bitops.h:206 ./arch/x86/include/asm/bitops.h:238 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/page-flags.h:822 ./include/linux/page-flags.h:843 ./include/linux/mm.h:1115 ./include/linux/mm.h:2137 mm/truncate.c:209)
> Dec 3 22:24:48 kernel: truncate_inode_pages_range (mm/truncate.c:354)
> Dec 3 22:24:48 kernel: ? xfs_iunlock+0x108/0x200 xfs
> Dec 3 22:24:48 kernel: ? xfs_rename+0x368/0x990 xfs
> Dec 3 22:24:48 kernel: ? fsnotify_move (./include/linux/fsnotify.h:72 ./include/linux/fsnotify.h:64 ./include/linux/fsnotify.h:238)
> Dec 3 22:24:48 kernel: ? _raw_spin_unlock (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:142 kernel/locking/spinlock.c:186)
> Dec 3 22:24:48 kernel: ? locked_inode_to_wb_and_lock_list (fs/fs-writeback.c:355)
> Dec 3 22:24:48 kernel: evict (fs/inode.c:728)
> Dec 3 22:24:48 kernel: ? fsnotify_destroy_marks (fs/notify/mark.c:923)
> Dec 3 22:24:48 kernel: ? _atomic_dec_and_lock (./arch/x86/include/asm/atomic.h:67 ./include/linux/atomic/atomic-arch-fallback.h:2278 ./include/linux/atomic/atomic-instrumented.h:1384 lib/dec_and_lock.c:29)
> Dec 3 22:24:48 kernel: __dentry_kill (fs/dcache.c:618)
> Dec 3 22:24:48 kernel: dput (fs/dcache.c:857 fs/dcache.c:845)
> Dec 3 22:24:48 kernel: do_renameat2 (fs/namei.c:5174)
> Dec 3 22:24:48 kernel: __x64_sys_rename (fs/namei.c:5215)
> Dec 3 22:24:48 kernel: do_syscall_64 (arch/x86/entry/common.c:52 arch/x86/entry/common.c:83)
> Dec 3 22:24:48 kernel: ? do_user_addr_fault (./include/linux/mm.h:730 arch/x86/mm/fault.c:1340)
> Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
> Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
> Dec 3 22:24:48 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
> Dec 3 22:24:48 kernel: entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
> Dec 3 22:24:48 kernel: RIP: 0033:0x7f3185a5aadb
> Dec 3 22:24:48 kernel: RSP: 002b:00007ffe258c4548 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
> Dec 3 22:24:48 kernel: RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f3185a5aadb
> Dec 3 22:24:48 kernel: RDX: 0000000000000025 RSI: 00007ffe258c5cc3 RDI: 00007ffe258c5cb7
> Dec 3 22:24:48 kernel: RBP: 00007ffe258c48f0 R08: 00007ffe258c4670 R09: 00007ffe258c4ac0
> Dec 3 22:24:48 kernel: R10: 0000000000000100 R11: 0000000000000246 R12: 0000000000000011
> Dec 3 22:24:48 kernel: R13: 0000000000000000 R14: 00007ffe258c5cc3 R15: 00007ffe258c4ac0
> Dec 3 22:24:48 kernel: </TASK>
> Dec 3 22:26:51 kernel: INFO: task khugepaged:330 blocked for more than 368 seconds.
> Dec 3 22:26:51 kernel: Tainted: G W 6.12.0+ #77
> Dec 3 22:26:51 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Dec 3 22:26:51 kernel: task:khugepaged state:D stack:0 pid:330 tgid:330 ppid:2 flags:0x00004000
> Dec 3 22:26:51 kernel: Call Trace:
> Dec 3 22:26:51 kernel: <TASK>
> Dec 3 22:26:51 kernel: __schedule (kernel/sched/core.c:5328 kernel/sched/core.c:6693)
> Dec 3 22:26:51 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
> Dec 3 22:26:51 kernel: schedule_timeout (kernel/time/timer.c:2592)
> Dec 3 22:26:51 kernel: ? kvm_sched_clock_read (arch/x86/kernel/kvmclock.c:91)
> Dec 3 22:26:51 kernel: ? sched_clock (./arch/x86/include/asm/preempt.h:94 arch/x86/kernel/tsc.c:285)
> Dec 3 22:26:51 kernel: ? sched_clock_cpu (kernel/sched/clock.c:394)
> Dec 3 22:26:51 kernel: ? __smp_call_single_queue (kernel/smp.c:115 kernel/smp.c:411)
> Dec 3 22:26:51 kernel: __wait_for_common (kernel/sched/completion.c:95 kernel/sched/completion.c:116)
> Dec 3 22:26:51 kernel: ? __pfx_schedule_timeout (kernel/time/timer.c:2577)
> Dec 3 22:26:51 kernel: __flush_work (kernel/workqueue.c:4222)
> Dec 3 22:26:51 kernel: ? __pfx_wq_barrier_func (kernel/workqueue.c:3718)
> Dec 3 22:26:51 kernel: __lru_add_drain_all (mm/swap.c:873 (discriminator 3))
> Dec 3 22:26:51 kernel: khugepaged (mm/khugepaged.c:2499 mm/khugepaged.c:2571)
> Dec 3 22:26:51 kernel: ? __pfx_khugepaged (mm/khugepaged.c:2564)
> Dec 3 22:26:51 kernel: kthread (kernel/kthread.c:389)
> Dec 3 22:26:51 kernel: ? __pfx_kthread (kernel/kthread.c:342)
> Dec 3 22:26:51 kernel: ret_from_fork (arch/x86/kernel/process.c:147)
> Dec 3 22:26:51 kernel: ? __pfx_kthread (kernel/kthread.c:342)
> Dec 3 22:26:51 kernel: ret_from_fork_asm (arch/x86/entry/entry_64.S:257)
> Dec 3 22:26:51 kernel: </TASK>
> Dec 3 22:26:51 kernel: Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings
>
>
> Still blocked after many hours:
>
> Dec 4 06:11:45 kernel: INFO: task khugepaged:330 blocked for more than 28262 seconds.
> Dec 4 06:11:45 kernel: Tainted: G W 6.12.0+ #77
> Dec 4 06:11:45 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Dec 4 06:11:45 kernel: task:khugepaged state:D stack:0 pid:330 tgid:330 ppid:2 flags:0x00004000
> Dec 4 06:11:45 kernel: Call Trace:
> Dec 4 06:11:45 kernel: <TASK>
> Dec 4 06:11:45 kernel: __schedule (kernel/sched/core.c:5328 kernel/sched/core.c:6693)
> Dec 4 06:11:45 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
> Dec 4 06:11:45 kernel: schedule_timeout (kernel/time/timer.c:2592)
> Dec 4 06:11:45 kernel: ? kvm_sched_clock_read (arch/x86/kernel/kvmclock.c:91)
> Dec 4 06:11:45 kernel: ? sched_clock (./arch/x86/include/asm/preempt.h:94 arch/x86/kernel/tsc.c:285)
> Dec 4 06:11:45 kernel: ? sched_clock_cpu (kernel/sched/clock.c:394)
> Dec 4 06:11:45 kernel: ? __smp_call_single_queue (kernel/smp.c:115 kernel/smp.c:411)
> Dec 4 06:11:45 kernel: __wait_for_common (kernel/sched/completion.c:95 kernel/sched/completion.c:116)
> Dec 4 06:11:45 kernel: ? __pfx_schedule_timeout (kernel/time/timer.c:2577)
> Dec 4 06:11:45 kernel: __flush_work (kernel/workqueue.c:4222)
> Dec 4 06:11:45 kernel: ? __pfx_wq_barrier_func (kernel/workqueue.c:3718)
> Dec 4 06:11:45 kernel: __lru_add_drain_all (mm/swap.c:873 (discriminator 3))
> Dec 4 06:11:45 kernel: khugepaged (mm/khugepaged.c:2499 mm/khugepaged.c:2571)
> Dec 4 06:11:45 kernel: ? __pfx_khugepaged (mm/khugepaged.c:2564)
> Dec 4 06:11:45 kernel: kthread (kernel/kthread.c:389)
> Dec 4 06:11:45 kernel: ? __pfx_kthread (kernel/kthread.c:342)
> Dec 4 06:11:45 kernel: ret_from_fork (arch/x86/kernel/process.c:147)
> Dec 4 06:11:45 kernel: ? __pfx_kthread (kernel/kthread.c:342)
> Dec 4 06:11:45 kernel: ret_from_fork_asm (arch/x86/entry/entry_64.S:257)
> Dec 4 06:11:45 kernel: </TASK>
> Dec 4 06:11:45 kernel: Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings
>
>
> The system is actually idle:
>
> Dec 4 06:27:07 kernel: sysrq: Show backtrace of all active CPUs
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 30
> Dec 4 06:27:07 kernel: CPU: 30 UID: 0 PID: 10810 Comm: bash Kdump: loaded Tainted: G W 6.12.0+ #77
> Dec 4 06:27:07 kernel: Tainted: [W]=WARN
> Dec 4 06:27:07 kernel: Hardware name: Red Hat KVM/RHEL, BIOS 1.16.1-1.el9 04/01/2014
> Dec 4 06:27:07 kernel: Call Trace:
> Dec 4 06:27:07 kernel: <TASK>
> Dec 4 06:27:07 kernel: dump_stack_lvl (lib/dump_stack.c:123)
> Dec 4 06:27:07 kernel: nmi_cpu_backtrace (lib/nmi_backtrace.c:113)
> Dec 4 06:27:07 kernel: ? __pfx_nmi_raise_cpu_backtrace (arch/x86/kernel/apic/hw_nmi.c:35)
> Dec 4 06:27:07 kernel: nmi_trigger_cpumask_backtrace (lib/nmi_backtrace.c:62)
> Dec 4 06:27:07 kernel: __handle_sysrq (drivers/tty/sysrq.c:613)
> Dec 4 06:27:07 kernel: write_sysrq_trigger (drivers/tty/sysrq.c:1184)
> Dec 4 06:27:07 kernel: proc_reg_write (fs/proc/inode.c:330 fs/proc/inode.c:342)
> Dec 4 06:27:07 kernel: vfs_write (fs/read_write.c:681)
> Dec 4 06:27:07 kernel: ? do_fcntl (fs/fcntl.c:463)
> Dec 4 06:27:07 kernel: ksys_write (fs/read_write.c:736)
> Dec 4 06:27:07 kernel: do_syscall_64 (arch/x86/entry/common.c:52 arch/x86/entry/common.c:83)
> Dec 4 06:27:07 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
> Dec 4 06:27:07 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
> Dec 4 06:27:07 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
> Dec 4 06:27:07 kernel: entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
> Dec 4 06:27:07 kernel: RIP: 0033:0x7fb77cefda57
> Dec 4 06:27:07 kernel: </TASK>
> Dec 4 06:27:07 kernel: Sending NMI from CPU 30 to CPUs 0-29,31-39:
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 12 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 6 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 16 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 4 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 20 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 10 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 33 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 18 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 1 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 23 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 27 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 3 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 11 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 32 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 7 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 0 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 13 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 5 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 26 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 28 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 24 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 37 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 39 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 17 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 22 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 31 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 8 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 21 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 29 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 14 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 2 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 9 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 35 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 25 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 15 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 36 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 19 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 34 skipped: idling at default_idle
> Dec 4 06:27:07 kernel: NMI backtrace for cpu 38 skipped: idling at default_idle
>
>
>
>
> =====================
> SECOND UNDECODED LOG:
> =====================
>
> Dec 3 01:58:46 kernel: ------------[ cut here ]------------
> Dec 3 01:58:46 kernel: watchdog: BUG: soft lockup - CPU#11 stuck for 21s! [kworker/11:1:866154]
> Dec 3 01:58:46 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> Dec 3 01:58:46 kernel: rcu: 	17-...!: (0 ticks this GP) idle=b124/1/0x4000000000000000 softirq=2109356/2109356 fqs=0
> Dec 3 01:58:46 kernel: rcu: 	(detected by 18, t=60004 jiffies, g=11126601, q=393720 ncpus=40)
> Dec 3 01:58:46 kernel: Sending NMI from CPU 18 to CPUs 17:
> Dec 3 01:58:46 kernel: rcu: rcu_preempt kthread timer wakeup didn't happen for 70001 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200
> Dec 3 01:58:46 kernel: rcu: 	Possible timer handling issue on cpu=17 timer-softirq=273727
> Dec 3 01:58:46 kernel: rcu: rcu_preempt kthread starved for 70004 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=17
> Dec 3 01:58:46 kernel: rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
> Dec 3 01:58:46 kernel: rcu: RCU grace-period kthread stack dump:
> Dec 3 01:58:46 kernel: task:rcu_preempt state:R stack:0 pid:18 tgid:18 ppid:2 flags:0x00004008
> Dec 3 01:58:46 kernel: Call Trace:
> Dec 3 01:58:46 kernel: <TASK>
> Dec 3 01:58:46 kernel: ? __pick_next_task+0x3e/0x1a0
> Dec 3 01:58:46 kernel: ? __schedule+0xfe/0x620
> Dec 3 01:58:46 kernel: ? _raw_spin_unlock_irqrestore+0xa/0x30
> Dec 3 01:58:46 kernel: ? __pfx_rcu_gp_kthread+0x10/0x10
> Dec 3 01:58:46 kernel: ? schedule+0x23/0xa0
> Dec 3 01:58:46 kernel: ? schedule_timeout+0x8b/0x160
> Dec 3 01:58:46 kernel: ? __pfx_process_timeout+0x10/0x10
> Dec 3 01:58:46 kernel: ? rcu_gp_fqs_loop+0x10b/0x500
> Dec 3 01:58:46 kernel: ? rcu_gp_kthread+0x13f/0x1d0
> Dec 3 01:58:46 kernel: ? kthread+0xcc/0x100
> Dec 3 01:58:46 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 01:58:46 kernel: ? ret_from_fork+0x2d/0x50
> Dec 3 01:58:46 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 01:58:46 kernel: ? ret_from_fork_asm+0x1a/0x30
> Dec 3 01:58:46 kernel: </TASK>
> Dec 3 02:00:48 kernel: INFO: task kworker/u165:0:746685 blocked for more than 122 seconds.
> Dec 3 02:00:48 kernel: Not tainted 6.12.0 #64
> Dec 3 02:00:48 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Dec 3 02:00:48 kernel: task:kworker/u165:0 state:D stack:0 pid:746685 tgid:746685 ppid:2 flags:0x00004000
> Dec 3 02:00:48 kernel: Workqueue: events_unbound linkwatch_event
> Dec 3 02:00:48 kernel: Call Trace:
> Dec 3 02:00:48 kernel: <TASK>
> Dec 3 02:00:48 kernel: __schedule+0x23f/0x620
> Dec 3 02:00:48 kernel: schedule+0x23/0xa0
> Dec 3 02:00:48 kernel: schedule_preempt_disabled+0x11/0x20
> Dec 3 02:00:48 kernel: __mutex_lock.constprop.0+0x31d/0x650
> Dec 3 02:00:48 kernel: ? __schedule+0x247/0x620
> Dec 3 02:00:48 kernel: linkwatch_event+0xa/0x30
> Dec 3 02:00:48 kernel: process_one_work+0x179/0x390
> Dec 3 02:00:48 kernel: worker_thread+0x239/0x340
> Dec 3 02:00:48 kernel: ? __pfx_worker_thread+0x10/0x10
> Dec 3 02:00:48 kernel: kthread+0xcc/0x100
> Dec 3 02:00:48 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:00:48 kernel: ret_from_fork+0x2d/0x50
> Dec 3 02:00:48 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:00:48 kernel: ret_from_fork_asm+0x1a/0x30
> Dec 3 02:00:48 kernel: </TASK>
> Dec 3 02:00:48 kernel: INFO: task kworker/5:2:900494 blocked for more than 122 seconds.
> Dec 3 02:00:48 kernel: Not tainted 6.12.0 #64
> Dec 3 02:00:48 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Dec 3 02:00:48 kernel: task:kworker/5:2 state:D stack:0 pid:900494 tgid:900494 ppid:2 flags:0x00004000
> Dec 3 02:00:48 kernel: Workqueue: events xfrm_state_gc_task
> Dec 3 02:00:48 kernel: Call Trace:
> Dec 3 02:00:48 kernel: <TASK>
> Dec 3 02:00:48 kernel: __schedule+0x23f/0x620
> Dec 3 02:00:48 kernel: schedule+0x23/0xa0
> Dec 3 02:00:48 kernel: schedule_timeout+0x14a/0x160
> Dec 3 02:00:48 kernel: ? __hrtimer_start_range_ns+0x20b/0x2e0
> Dec 3 02:00:48 kernel: ? kvm_clock_get_cycles+0x14/0x30
> Dec 3 02:00:48 kernel: ? ktime_get+0x34/0xc0
> Dec 3 02:00:48 kernel: ? timerqueue_del+0x2a/0x50
> Dec 3 02:00:48 kernel: __wait_for_common+0x8f/0x1d0
> Dec 3 02:00:48 kernel: ? __pfx_schedule_timeout+0x10/0x10
> Dec 3 02:00:48 kernel: wait_for_completion_state+0x1d/0x40
> Dec 3 02:00:48 kernel: __wait_rcu_gp+0x126/0x130
> Dec 3 02:00:48 kernel: synchronize_rcu_normal.part.0+0x3a/0x60
> Dec 3 02:00:48 kernel: ? __pfx_call_rcu_hurry+0x10/0x10
> Dec 3 02:00:48 kernel: ? __pfx_wakeme_after_rcu+0x10/0x10
> Dec 3 02:00:48 kernel: xfrm_state_gc_task+0x56/0xa0
> Dec 3 02:00:48 kernel: process_one_work+0x179/0x390
> Dec 3 02:00:48 kernel: worker_thread+0x239/0x340
> Dec 3 02:00:48 kernel: ? __pfx_worker_thread+0x10/0x10
> Dec 3 02:00:48 kernel: kthread+0xcc/0x100
> Dec 3 02:00:48 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:00:48 kernel: ret_from_fork+0x2d/0x50
> Dec 3 02:00:48 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:00:48 kernel: ret_from_fork_asm+0x1a/0x30
> Dec 3 02:00:48 kernel: </TASK>
> Dec 3 02:00:48 kernel: INFO: task systemd-udevd:995278 blocked for more than 122 seconds.
> Dec 3 02:00:48 kernel: Not tainted 6.12.0 #64
> Dec 3 02:00:48 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Dec 3 02:00:48 kernel: task:systemd-udevd state:D stack:0 pid:995278 tgid:995278 ppid:1080 flags:0x00000002
> Dec 3 02:00:48 kernel: Call Trace:
> Dec 3 02:00:48 kernel: <TASK>
> Dec 3 02:00:48 kernel: __schedule+0x23f/0x620
> Dec 3 02:00:48 kernel: schedule+0x23/0xa0
> Dec 3 02:00:48 kernel: netlink_table_grab.part.0+0x82/0xe0
> Dec 3 02:00:48 kernel: ? __pfx_default_wake_function+0x10/0x10
> Dec 3 02:00:48 kernel: netlink_release+0x36c/0x520
> Dec 3 02:00:48 kernel: ? __pfx_netlink_hash+0x10/0x10
> Dec 3 02:00:48 kernel: ? __pfx_netlink_compare+0x10/0x10
> Dec 3 02:00:48 kernel: __sock_release+0x3a/0xc0
> Dec 3 02:00:48 kernel: sock_close+0x11/0x20
> Dec 3 02:00:48 kernel: __fput+0xdb/0x2a0
> Dec 3 02:00:48 kernel: task_work_run+0x55/0x90
> Dec 3 02:00:48 kernel: do_exit+0x279/0x4b0
> Dec 3 02:00:48 kernel: do_group_exit+0x2c/0x80
> Dec 3 02:00:48 kernel: __x64_sys_exit_group+0x14/0x20
> Dec 3 02:00:48 kernel: x64_sys_call+0x1836/0x1840
> Dec 3 02:00:48 kernel: do_syscall_64+0x79/0x150
> Dec 3 02:00:48 kernel: ? __count_memcg_events+0x4f/0xe0
> Dec 3 02:00:48 kernel: ? handle_mm_fault+0x18e/0x270
> Dec 3 02:00:48 kernel: ? do_user_addr_fault+0x34c/0x680
> Dec 3 02:00:48 kernel: ? clear_bhb_loop+0x45/0xa0
> Dec 3 02:00:48 kernel: ? clear_bhb_loop+0x45/0xa0
> Dec 3 02:00:48 kernel: ? clear_bhb_loop+0x45/0xa0
> Dec 3 02:00:48 kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e
> Dec 3 02:00:48 kernel: RIP: 0033:0x7f6cfa8d921d
> Dec 3 02:00:48 kernel: RSP: 002b:00007ffd7e081b58 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
> Dec 3 02:00:48 kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f6cfa8d921d
> Dec 3 02:00:48 kernel: RDX: 00000000000000e7 RSI: fffffffffffffe88 RDI: 0000000000000000
> Dec 3 02:00:48 kernel: RBP: 00007ffd7e081c00 R08: 00005615d9e8dbb0 R09: 0000000000000004
> Dec 3 02:00:48 kernel: R10: 0000000000000018 R11: 0000000000000246 R12: 00007ffd7e081bb0
> Dec 3 02:00:48 kernel: R13: 00005615d9d0e8b0 R14: 00005615d9e8dbb0 R15: 0000000000000000
> Dec 3 02:00:48 kernel: </TASK>
> Dec 3 02:00:48 kernel: INFO: task systemd-udevd:995279 blocked for more than 122 seconds.
> Dec 3 02:00:48 kernel: Not tainted 6.12.0 #64
> Dec 3 02:00:48 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Dec 3 02:00:48 kernel: task:systemd-udevd state:D stack:0 pid:995279 tgid:995279 ppid:1080 flags:0x00004002
> Dec 3 02:00:48 kernel: Call Trace:
> Dec 3 02:00:48 kernel: <TASK>
> Dec 3 02:00:48 kernel: __schedule+0x23f/0x620
> Dec 3 02:00:48 kernel: schedule+0x23/0xa0
> Dec 3 02:00:48 kernel: netlink_table_grab.part.0+0x82/0xe0
> Dec 3 02:00:48 kernel: ? __pfx_default_wake_function+0x10/0x10
> Dec 3 02:00:48 kernel: netlink_release+0x36c/0x520
> Dec 3 02:00:48 kernel: ? __pfx_netlink_hash+0x10/0x10
> Dec 3 02:00:48 kernel: ? __pfx_netlink_compare+0x10/0x10
> Dec 3 02:00:48 kernel: __sock_release+0x3a/0xc0
> Dec 3 02:00:48 kernel: sock_close+0x11/0x20
> Dec 3 02:00:48 kernel: __fput+0xdb/0x2a0
> Dec 3 02:00:48 kernel: task_work_run+0x55/0x90
> Dec 3 02:00:48 kernel: do_exit+0x279/0x4b0
> Dec 3 02:00:48 kernel: do_group_exit+0x2c/0x80
> Dec 3 02:00:48 kernel: __x64_sys_exit_group+0x14/0x20
> Dec 3 02:00:48 kernel: x64_sys_call+0x1836/0x1840
> Dec 3 02:00:48 kernel: do_syscall_64+0x79/0x150
> Dec 3 02:00:48 kernel: ? get_page_from_freelist+0x333/0x630
> Dec 3 02:00:48 kernel: ? __alloc_pages_noprof+0x186/0x350
> Dec 3 02:00:48 kernel: ? __mod_memcg_lruvec_state+0x95/0x150
> Dec 3 02:00:48 kernel: ? __lruvec_stat_mod_folio+0x80/0xd0
> Dec 3 02:00:48 kernel: ? __folio_mod_stat+0x2a/0x80
> Dec 3 02:00:48 kernel: ? _raw_spin_unlock+0xa/0x30
> Dec 3 02:00:48 kernel: ? wp_page_copy+0x4e0/0x710
> Dec 3 02:00:48 kernel: ? __pte_offset_map+0x17/0x160
> Dec 3 02:00:48 kernel: ? _raw_spin_unlock+0xa/0x30
> Dec 3 02:00:48 kernel: ? do_wp_page+0x666/0x760
> Dec 3 02:00:48 kernel: ? __handle_mm_fault+0x326/0x730
> Dec 3 02:00:48 kernel: ? __count_memcg_events+0x4f/0xe0
> Dec 3 02:00:48 kernel: ? handle_mm_fault+0x18e/0x270
> Dec 3 02:00:48 kernel: ? do_user_addr_fault+0x34c/0x680
> Dec 3 02:00:48 kernel: ? clear_bhb_loop+0x45/0xa0
> Dec 3 02:00:48 kernel: ? clear_bhb_loop+0x45/0xa0
> Dec 3 02:00:48 kernel: ? clear_bhb_loop+0x45/0xa0
> Dec 3 02:00:48 kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e
> Dec 3 02:00:48 kernel: RIP: 0033:0x7f6cfa8d921d
> Dec 3 02:00:48 kernel: RSP: 002b:00007ffd7e081b58 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
> Dec 3 02:00:48 kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f6cfa8d921d
> Dec 3 02:00:48 kernel: RDX: 00000000000000e7 RSI: fffffffffffffe88 RDI: 0000000000000000
> Dec 3 02:00:48 kernel: RBP: 00007ffd7e081c00 R08: 00005615d9e932b0 R09: 0000000000000004
> Dec 3 02:00:48 kernel: R10: 0000000000000018 R11: 0000000000000246 R12: 00007ffd7e081bb0
> Dec 3 02:00:48 kernel: R13: 00005615d9d0e8b0 R14: 00005615d9e932b0 R15: 0000000000000000
> Dec 3 02:00:48 kernel: </TASK>
> Dec 3 02:00:48 kernel: INFO: task ip:998743 blocked for more than 122 seconds.
> Dec 3 02:00:48 kernel: Not tainted 6.12.0 #64
> Dec 3 02:00:48 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Dec 3 02:00:48 kernel: task:ip state:D stack:0 pid:998743 tgid:998743 ppid:998736 flags:0x00000002
> Dec 3 02:00:48 kernel: Call Trace:
> Dec 3 02:00:48 kernel: <TASK>
> Dec 3 02:00:48 kernel: __schedule+0x23f/0x620
> Dec 3 02:00:48 kernel: schedule+0x23/0xa0
> Dec 3 02:00:48 kernel: schedule_preempt_disabled+0x11/0x20
> Dec 3 02:00:48 kernel: __mutex_lock.constprop.0+0x31d/0x650
> Dec 3 02:00:48 kernel: rtnetlink_rcv_msg+0x111/0x410
> Dec 3 02:00:48 kernel: ? avc_has_perm_noaudit+0x67/0xf0
> Dec 3 02:00:48 kernel: ? __pfx_rtnetlink_rcv_msg+0x10/0x10
> Dec 3 02:00:48 kernel: netlink_rcv_skb+0x54/0x100
> Dec 3 02:00:48 kernel: netlink_unicast+0x243/0x370
> Dec 3 02:00:48 kernel: netlink_sendmsg+0x1f6/0x430
> Dec 3 02:00:48 kernel: __sys_sendto+0x1f3/0x200
> Dec 3 02:00:48 kernel: ? do_read_fault+0x10a/0x1e0
> Dec 3 02:00:48 kernel: ? do_fault+0x21f/0x380
> Dec 3 02:00:48 kernel: ? pte_offset_map_nolock+0x2b/0xb0
> Dec 3 02:00:48 kernel: __x64_sys_sendto+0x20/0x30
> Dec 3 02:00:48 kernel: do_syscall_64+0x79/0x150
> Dec 3 02:00:48 kernel: ? __count_memcg_events+0x4f/0xe0
> Dec 3 02:00:48 kernel: ? handle_mm_fault+0x18e/0x270
> Dec 3 02:00:48 kernel: ? do_user_addr_fault+0x34c/0x680
> Dec 3 02:00:48 kernel: ? clear_bhb_loop+0x45/0xa0
> Dec 3 02:00:48 kernel: ? clear_bhb_loop+0x45/0xa0
> Dec 3 02:00:48 kernel: ? clear_bhb_loop+0x45/0xa0
> Dec 3 02:00:48 kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e
> Dec 3 02:00:48 kernel: RIP: 0033:0x7f287830f860
> Dec 3 02:00:48 kernel: RSP: 002b:00007ffdd4f2c518 EFLAGS: 00000246 ORIG_RAX: 000000000000002c
> Dec 3 02:00:48 kernel: RAX: ffffffffffffffda RBX: 00007ffdd4f2cc88 RCX: 00007f287830f860
> Dec 3 02:00:48 kernel: RDX: 0000000000000020 RSI: 00007ffdd4f2c520 RDI: 0000000000000003
> Dec 3 02:00:48 kernel: RBP: 00007ffdd4f2d7e4 R08: 0000000000000000 R09: 0000000000000000
> Dec 3 02:00:48 kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000002
> Dec 3 02:00:48 kernel: R13: 000055969d419040 R14: 00007ffdd4f2cc78 R15: 0000000000000004
> Dec 3 02:00:48 kernel: </TASK>
> Dec 3 02:01:56 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> Dec 3 02:01:56 kernel: rcu: 	17-...!: (0 ticks this GP) idle=b124/1/0x4000000000000002 softirq=2109356/2109356 fqs=0
> Dec 3 02:01:56 kernel: rcu: 	(detected by 8, t=250009 jiffies, g=11126601, q=1577177 ncpus=40)
> Dec 3 02:01:56 kernel: Sending NMI from CPU 8 to CPUs 17:
> Dec 3 02:01:56 kernel: rcu: rcu_preempt kthread timer wakeup didn't happen for 260006 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200
> Dec 3 02:01:56 kernel: rcu: 	Possible timer handling issue on cpu=17 timer-softirq=273727
> Dec 3 02:01:56 kernel: rcu: rcu_preempt kthread starved for 260009 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=17
> Dec 3 02:01:56 kernel: rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
> Dec 3 02:01:56 kernel: rcu: RCU grace-period kthread stack dump:
> Dec 3 02:01:56 kernel: task:rcu_preempt state:R stack:0 pid:18 tgid:18 ppid:2 flags:0x00004008
> Dec 3 02:01:56 kernel: Call Trace:
> Dec 3 02:01:56 kernel: <TASK>
> Dec 3 02:01:56 kernel: ? __pick_next_task+0x3e/0x1a0
> Dec 3 02:01:56 kernel: ? __schedule+0xfe/0x620
> Dec 3 02:01:56 kernel: ? _raw_spin_unlock_irqrestore+0xa/0x30
> Dec 3 02:01:56 kernel: ? __pfx_rcu_gp_kthread+0x10/0x10
> Dec 3 02:01:56 kernel: ? schedule+0x23/0xa0
> Dec 3 02:01:56 kernel: ? schedule_timeout+0x8b/0x160
> Dec 3 02:01:56 kernel: ? __pfx_process_timeout+0x10/0x10
> Dec 3 02:01:56 kernel: ? rcu_gp_fqs_loop+0x10b/0x500
> Dec 3 02:01:56 kernel: ? rcu_gp_kthread+0x13f/0x1d0
> Dec 3 02:01:56 kernel: ? kthread+0xcc/0x100
> Dec 3 02:01:56 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:01:56 kernel: ? ret_from_fork+0x2d/0x50
> Dec 3 02:01:56 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:01:56 kernel: ? ret_from_fork_asm+0x1a/0x30
> Dec 3 02:01:56 kernel: </TASK>
> Dec 3 02:02:51 kernel: INFO: task kworker/u165:0:746685 blocked for more than 245 seconds.
> Dec 3 02:02:51 kernel: Not tainted 6.12.0 #64
> Dec 3 02:02:51 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Dec 3 02:02:51 kernel: task:kworker/u165:0 state:D stack:0 pid:746685 tgid:746685 ppid:2 flags:0x00004000
> Dec 3 02:02:51 kernel: Workqueue: events_unbound linkwatch_event
> Dec 3 02:02:51 kernel: Call Trace:
> Dec 3 02:02:51 kernel: <TASK>
> Dec 3 02:02:51 kernel: __schedule+0x23f/0x620
> Dec 3 02:02:51 kernel: schedule+0x23/0xa0
> Dec 3 02:02:51 kernel: schedule_preempt_disabled+0x11/0x20
> Dec 3 02:02:51 kernel: __mutex_lock.constprop.0+0x31d/0x650
> Dec 3 02:02:51 kernel: ? __schedule+0x247/0x620
> Dec 3 02:02:51 kernel: linkwatch_event+0xa/0x30
> Dec 3 02:02:51 kernel: process_one_work+0x179/0x390
> Dec 3 02:02:51 kernel: worker_thread+0x239/0x340
> Dec 3 02:02:51 kernel: ? __pfx_worker_thread+0x10/0x10
> Dec 3 02:02:51 kernel: kthread+0xcc/0x100
> Dec 3 02:02:51 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:02:51 kernel: ret_from_fork+0x2d/0x50
> Dec 3 02:02:51 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:02:51 kernel: ret_from_fork_asm+0x1a/0x30
> Dec 3 02:02:51 kernel: </TASK>
> Dec 3 02:02:51 kernel: INFO: task kworker/5:2:900494 blocked for more than 245 seconds.
> Dec 3 02:02:51 kernel: Not tainted 6.12.0 #64
> Dec 3 02:02:51 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Dec 3 02:02:51 kernel: task:kworker/5:2 state:D stack:0 pid:900494 tgid:900494 ppid:2 flags:0x00004000
> Dec 3 02:02:51 kernel: Workqueue: events xfrm_state_gc_task
> Dec 3 02:02:51 kernel: Call Trace:
> Dec 3 02:02:51 kernel: <TASK>
> Dec 3 02:02:51 kernel: __schedule+0x23f/0x620
> Dec 3 02:02:51 kernel: schedule+0x23/0xa0
> Dec 3 02:02:51 kernel: schedule_timeout+0x14a/0x160
> Dec 3 02:02:51 kernel: ? __hrtimer_start_range_ns+0x20b/0x2e0
> Dec 3 02:02:51 kernel: ? kvm_clock_get_cycles+0x14/0x30
> Dec 3 02:02:51 kernel: ? ktime_get+0x34/0xc0
> Dec 3 02:02:51 kernel: ? timerqueue_del+0x2a/0x50
> Dec 3 02:02:51 kernel: __wait_for_common+0x8f/0x1d0
> Dec 3 02:02:51 kernel: ? __pfx_schedule_timeout+0x10/0x10
> Dec 3 02:02:51 kernel: wait_for_completion_state+0x1d/0x40
> Dec 3 02:02:51 kernel: __wait_rcu_gp+0x126/0x130
> Dec 3 02:02:51 kernel: synchronize_rcu_normal.part.0+0x3a/0x60
> Dec 3 02:02:51 kernel: ? __pfx_call_rcu_hurry+0x10/0x10
> Dec 3 02:02:51 kernel: ? __pfx_wakeme_after_rcu+0x10/0x10
> Dec 3 02:02:51 kernel: xfrm_state_gc_task+0x56/0xa0
> Dec 3 02:02:51 kernel: process_one_work+0x179/0x390
> Dec 3 02:02:51 kernel: worker_thread+0x239/0x340
> Dec 3 02:02:51 kernel: ? __pfx_worker_thread+0x10/0x10
> Dec 3 02:02:51 kernel: kthread+0xcc/0x100
> Dec 3 02:02:51 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:02:51 kernel: ret_from_fork+0x2d/0x50
> Dec 3 02:02:51 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:02:51 kernel: ret_from_fork_asm+0x1a/0x30
> Dec 3 02:02:51 kernel: </TASK>
> Dec 3 02:02:51 kernel: INFO: task systemd-udevd:995278 blocked for more than 245 seconds.
> Dec 3 02:02:51 kernel: Not tainted 6.12.0 #64
> Dec 3 02:02:51 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Dec 3 02:02:51 kernel: task:systemd-udevd state:D stack:0 pid:995278 tgid:995278 ppid:1080 flags:0x00000002
> Dec 3 02:02:51 kernel: Call Trace:
> Dec 3 02:02:51 kernel: <TASK>
> Dec 3 02:02:51 kernel: __schedule+0x23f/0x620
> Dec 3 02:02:51 kernel: schedule+0x23/0xa0
> Dec 3 02:02:51 kernel: netlink_table_grab.part.0+0x82/0xe0
> Dec 3 02:02:51 kernel: ? __pfx_default_wake_function+0x10/0x10
> Dec 3 02:02:51 kernel: netlink_release+0x36c/0x520
> Dec 3 02:02:51 kernel: ? __pfx_netlink_hash+0x10/0x10
> Dec 3 02:02:51 kernel: ? __pfx_netlink_compare+0x10/0x10
> Dec 3 02:02:51 kernel: __sock_release+0x3a/0xc0
> Dec 3 02:02:51 kernel: sock_close+0x11/0x20
> Dec 3 02:02:51 kernel: __fput+0xdb/0x2a0
> Dec 3 02:02:51 kernel: task_work_run+0x55/0x90
> Dec 3 02:02:51 kernel: do_exit+0x279/0x4b0
> Dec 3 02:02:51 kernel: do_group_exit+0x2c/0x80
> Dec 3 02:02:51 kernel: __x64_sys_exit_group+0x14/0x20
> Dec 3 02:02:51 kernel: x64_sys_call+0x1836/0x1840
> Dec 3 02:02:51 kernel: do_syscall_64+0x79/0x150
> Dec 3 02:02:51 kernel: ? __count_memcg_events+0x4f/0xe0
> Dec 3 02:02:51 kernel: ? handle_mm_fault+0x18e/0x270
> Dec 3 02:02:51 kernel: ? do_user_addr_fault+0x34c/0x680
> Dec 3 02:02:51 kernel: ? clear_bhb_loop+0x45/0xa0
> Dec 3 02:02:51 kernel: ? clear_bhb_loop+0x45/0xa0
> Dec 3 02:02:51 kernel: ? clear_bhb_loop+0x45/0xa0
> Dec 3 02:02:51 kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e
> Dec 3 02:02:51 kernel: RIP: 0033:0x7f6cfa8d921d
> Dec 3 02:02:51 kernel: RSP: 002b:00007ffd7e081b58 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
> Dec 3 02:02:51 kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f6cfa8d921d
> Dec 3 02:02:51 kernel: RDX: 00000000000000e7 RSI: fffffffffffffe88 RDI: 0000000000000000
> Dec 3 02:02:51 kernel: RBP: 00007ffd7e081c00 R08: 00005615d9e8dbb0 R09: 0000000000000004
> Dec 3 02:02:51 kernel: R10: 0000000000000018 R11: 0000000000000246 R12: 00007ffd7e081bb0
> Dec 3 02:02:51 kernel: R13: 00005615d9d0e8b0 R14: 00005615d9e8dbb0 R15: 0000000000000000
> Dec 3 02:02:51 kernel: </TASK>
> Dec 3 02:02:51 kernel: INFO: task systemd-udevd:995279 blocked for more than 245 seconds.
> Dec 3 02:02:51 kernel: Not tainted 6.12.0 #64
> Dec 3 02:02:51 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Dec 3 02:02:51 kernel: task:systemd-udevd state:D stack:0 pid:995279 tgid:995279 ppid:1080 flags:0x00004002
> Dec 3 02:02:51 kernel: Call Trace:
> Dec 3 02:02:51 kernel: <TASK>
> Dec 3 02:02:51 kernel: __schedule+0x23f/0x620
> Dec 3 02:02:51 kernel: schedule+0x23/0xa0
> Dec 3 02:02:51 kernel: netlink_table_grab.part.0+0x82/0xe0
> Dec 3 02:02:51 kernel: ? __pfx_default_wake_function+0x10/0x10
> Dec 3 02:02:51 kernel: netlink_release+0x36c/0x520
> Dec 3 02:02:51 kernel: ? __pfx_netlink_hash+0x10/0x10
> Dec 3 02:02:51 kernel: ? __pfx_netlink_compare+0x10/0x10
> Dec 3 02:02:51 kernel: __sock_release+0x3a/0xc0
> Dec 3 02:02:51 kernel: sock_close+0x11/0x20
> Dec 3 02:02:51 kernel: __fput+0xdb/0x2a0
> Dec 3 02:02:51 kernel: task_work_run+0x55/0x90
> Dec 3 02:02:51 kernel: do_exit+0x279/0x4b0
> Dec 3 02:02:51 kernel: do_group_exit+0x2c/0x80
> Dec 3 02:02:51 kernel: __x64_sys_exit_group+0x14/0x20
> Dec 3 02:02:51 kernel: x64_sys_call+0x1836/0x1840
> Dec 3 02:02:51 kernel: do_syscall_64+0x79/0x150
> Dec 3 02:02:51 kernel: ? get_page_from_freelist+0x333/0x630
> Dec 3 02:02:51 kernel: ? __alloc_pages_noprof+0x186/0x350
> Dec 3 02:02:51 kernel: ? __mod_memcg_lruvec_state+0x95/0x150
> Dec 3 02:02:51 kernel: ? __lruvec_stat_mod_folio+0x80/0xd0
> Dec 3 02:02:51 kernel: ? __folio_mod_stat+0x2a/0x80
> Dec 3 02:02:51 kernel: ? _raw_spin_unlock+0xa/0x30
> Dec 3 02:02:51 kernel: ? wp_page_copy+0x4e0/0x710
> Dec 3 02:02:51 kernel: ? __pte_offset_map+0x17/0x160
> Dec 3 02:02:51 kernel: ? _raw_spin_unlock+0xa/0x30
> Dec 3 02:02:51 kernel: ? do_wp_page+0x666/0x760
> Dec 3 02:02:51 kernel: ? __handle_mm_fault+0x326/0x730
> Dec 3 02:02:51 kernel: ? __count_memcg_events+0x4f/0xe0
> Dec 3 02:02:51 kernel: ? handle_mm_fault+0x18e/0x270
> Dec 3 02:02:51 kernel: ? do_user_addr_fault+0x34c/0x680
> Dec 3 02:02:51 kernel: ? clear_bhb_loop+0x45/0xa0
> Dec 3 02:02:51 kernel: ? clear_bhb_loop+0x45/0xa0
> Dec 3 02:02:51 kernel: ? clear_bhb_loop+0x45/0xa0
> Dec 3 02:02:51 kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e
> Dec 3 02:02:51 kernel: RIP: 0033:0x7f6cfa8d921d
> Dec 3 02:02:51 kernel: RSP: 002b:00007ffd7e081b58 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
> Dec 3 02:02:51 kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f6cfa8d921d
> Dec 3 02:02:51 kernel: RDX: 00000000000000e7 RSI: fffffffffffffe88 RDI: 0000000000000000
> Dec 3 02:02:51 kernel: RBP: 00007ffd7e081c00 R08: 00005615d9e932b0 R09: 0000000000000004
> Dec 3 02:02:51 kernel: R10: 0000000000000018 R11: 0000000000000246 R12: 00007ffd7e081bb0
> Dec 3 02:02:51 kernel: R13: 00005615d9d0e8b0 R14: 00005615d9e932b0 R15: 0000000000000000
> Dec 3 02:02:51 kernel: </TASK>
> Dec 3 02:02:51 kernel: INFO: task ip:998743 blocked for more than 245 seconds.
> Dec 3 02:02:51 kernel: Not tainted 6.12.0 #64
> Dec 3 02:02:51 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Dec 3 02:02:51 kernel: task:ip state:D stack:0 pid:998743 tgid:998743 ppid:998736 flags:0x00000002
> Dec 3 02:02:51 kernel: Call Trace:
> Dec 3 02:02:51 kernel: <TASK>
> Dec 3 02:02:51 kernel: __schedule+0x23f/0x620
> Dec 3 02:02:51 kernel: schedule+0x23/0xa0
> Dec 3 02:02:51 kernel: schedule_preempt_disabled+0x11/0x20
> Dec 3 02:02:51 kernel: __mutex_lock.constprop.0+0x31d/0x650
> Dec 3 02:02:51 kernel: rtnetlink_rcv_msg+0x111/0x410
> Dec 3 02:02:51 kernel: ? avc_has_perm_noaudit+0x67/0xf0
> Dec 3 02:02:51 kernel: ? __pfx_rtnetlink_rcv_msg+0x10/0x10
> Dec 3 02:02:51 kernel: netlink_rcv_skb+0x54/0x100
> Dec 3 02:02:51 kernel: netlink_unicast+0x243/0x370
> Dec 3 02:02:51 kernel: netlink_sendmsg+0x1f6/0x430
> Dec 3 02:02:51 kernel: __sys_sendto+0x1f3/0x200
> Dec 3 02:02:51 kernel: ? do_read_fault+0x10a/0x1e0
> Dec 3 02:02:51 kernel: ? do_fault+0x21f/0x380
> Dec 3 02:02:51 kernel: ? pte_offset_map_nolock+0x2b/0xb0
> Dec 3 02:02:51 kernel: __x64_sys_sendto+0x20/0x30
> Dec 3 02:02:51 kernel: do_syscall_64+0x79/0x150
> Dec 3 02:02:51 kernel: ? __count_memcg_events+0x4f/0xe0
> Dec 3 02:02:51 kernel: ? handle_mm_fault+0x18e/0x270
> Dec 3 02:02:51 kernel: ? do_user_addr_fault+0x34c/0x680
> Dec 3 02:02:51 kernel: ? clear_bhb_loop+0x45/0xa0
> Dec 3 02:02:51 kernel: ? clear_bhb_loop+0x45/0xa0
> Dec 3 02:02:51 kernel: ? clear_bhb_loop+0x45/0xa0
> Dec 3 02:02:51 kernel: entry_SYSCALL_64_after_hwframe+0x76/0x7e
> Dec 3 02:02:51 kernel: RIP: 0033:0x7f287830f860
> Dec 3 02:02:51 kernel: RSP: 002b:00007ffdd4f2c518 EFLAGS: 00000246 ORIG_RAX: 000000000000002c
> Dec 3 02:02:51 kernel: RAX: ffffffffffffffda RBX: 00007ffdd4f2cc88 RCX: 00007f287830f860
> Dec 3 02:02:51 kernel: RDX: 0000000000000020 RSI: 00007ffdd4f2c520 RDI: 0000000000000003
> Dec 3 02:02:51 kernel: RBP: 00007ffdd4f2d7e4 R08: 0000000000000000 R09: 0000000000000000
> Dec 3 02:02:51 kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000002
> Dec 3 02:02:51 kernel: R13: 000055969d419040 R14: 00007ffdd4f2cc78 R15: 0000000000000004
> Dec 3 02:02:51 kernel: </TASK>
> Dec 3 02:02:51 kernel: Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings
> Dec 3 02:05:06 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> Dec 3 02:05:06 kernel: rcu: 	17-...!: (0 ticks this GP) idle=b124/1/0x4000000000000002 softirq=2109356/2109356 fqs=0
> Dec 3 02:05:06 kernel: rcu: 	(detected by 5, t=440015 jiffies, g=11126601, q=2760447 ncpus=40)
> Dec 3 02:05:06 kernel: Sending NMI from CPU 5 to CPUs 17:
> Dec 3 02:05:06 kernel: rcu: rcu_preempt kthread timer wakeup didn't happen for 450012 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200
> Dec 3 02:05:06 kernel: rcu: 	Possible timer handling issue on cpu=17 timer-softirq=273727
> Dec 3 02:05:06 kernel: rcu: rcu_preempt kthread starved for 450015 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=17
> Dec 3 02:05:06 kernel: rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
> Dec 3 02:05:06 kernel: rcu: RCU grace-period kthread stack dump:
> Dec 3 02:05:06 kernel: task:rcu_preempt state:R stack:0 pid:18 tgid:18 ppid:2 flags:0x00004008
> Dec 3 02:05:06 kernel: Call Trace:
> Dec 3 02:05:06 kernel: <TASK>
> Dec 3 02:05:06 kernel: ? __pick_next_task+0x3e/0x1a0
> Dec 3 02:05:06 kernel: ? __schedule+0xfe/0x620
> Dec 3 02:05:06 kernel: ? _raw_spin_unlock_irqrestore+0xa/0x30
> Dec 3 02:05:06 kernel: ? __pfx_rcu_gp_kthread+0x10/0x10
> Dec 3 02:05:06 kernel: ? schedule+0x23/0xa0
> Dec 3 02:05:06 kernel: ? schedule_timeout+0x8b/0x160
> Dec 3 02:05:06 kernel: ? __pfx_process_timeout+0x10/0x10
> Dec 3 02:05:06 kernel: ? rcu_gp_fqs_loop+0x10b/0x500
> Dec 3 02:05:06 kernel: ? rcu_gp_kthread+0x13f/0x1d0
> Dec 3 02:05:06 kernel: ? kthread+0xcc/0x100
> Dec 3 02:05:06 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:05:06 kernel: ? ret_from_fork+0x2d/0x50
> Dec 3 02:05:06 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:05:06 kernel: ? ret_from_fork_asm+0x1a/0x30
> Dec 3 02:05:06 kernel: </TASK>
> Dec 3 02:08:16 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> Dec 3 02:08:16 kernel: rcu: 	17-...!: (0 ticks this GP) idle=b124/1/0x4000000000000002 softirq=2109356/2109356 fqs=0
> Dec 3 02:08:16 kernel: rcu: 	(detected by 15, t=630022 jiffies, g=11126601, q=3942985 ncpus=40)
> Dec 3 02:08:16 kernel: Sending NMI from CPU 15 to CPUs 17:
> Dec 3 02:08:16 kernel: rcu: rcu_preempt kthread timer wakeup didn't happen for 640019 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200
> Dec 3 02:08:16 kernel: rcu: 	Possible timer handling issue on cpu=17 timer-softirq=273727
> Dec 3 02:08:16 kernel: rcu: rcu_preempt kthread starved for 640022 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=17
> Dec 3 02:08:16 kernel: rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
> Dec 3 02:08:16 kernel: rcu: RCU grace-period kthread stack dump:
> Dec 3 02:08:16 kernel: task:rcu_preempt state:R stack:0 pid:18 tgid:18 ppid:2 flags:0x00004008
> Dec 3 02:08:16 kernel: Call Trace:
> Dec 3 02:08:16 kernel: <TASK>
> Dec 3 02:08:16 kernel: ? __pick_next_task+0x3e/0x1a0
> Dec 3 02:08:16 kernel: ? __schedule+0xfe/0x620
> Dec 3 02:08:16 kernel: ? _raw_spin_unlock_irqrestore+0xa/0x30
> Dec 3 02:08:16 kernel: ? __pfx_rcu_gp_kthread+0x10/0x10
> Dec 3 02:08:16 kernel: ? schedule+0x23/0xa0
> Dec 3 02:08:16 kernel: ? schedule_timeout+0x8b/0x160
> Dec 3 02:08:16 kernel: ? __pfx_process_timeout+0x10/0x10
> Dec 3 02:08:16 kernel: ? rcu_gp_fqs_loop+0x10b/0x500
> Dec 3 02:08:16 kernel: ? rcu_gp_kthread+0x13f/0x1d0
> Dec 3 02:08:16 kernel: ? kthread+0xcc/0x100
> Dec 3 02:08:16 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:08:16 kernel: ? ret_from_fork+0x2d/0x50
> Dec 3 02:08:16 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:08:16 kernel: ? ret_from_fork_asm+0x1a/0x30
> Dec 3 02:08:16 kernel: </TASK>
> Dec 3 02:11:26 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> Dec 3 02:11:26 kernel: rcu: 	17-...!: (0 ticks this GP) idle=b124/1/0x4000000000000002 softirq=2109356/2109356 fqs=0
> Dec 3 02:11:26 kernel: rcu: 	(detected by 13, t=820027 jiffies, g=11126601, q=5124103 ncpus=40)
> Dec 3 02:11:26 kernel: Sending NMI from CPU 13 to CPUs 17:
> Dec 3 02:11:26 kernel: rcu: rcu_preempt kthread timer wakeup didn't happen for 830024 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200
> Dec 3 02:11:26 kernel: rcu: 	Possible timer handling issue on cpu=17 timer-softirq=273727
> Dec 3 02:11:26 kernel: rcu: rcu_preempt kthread starved for 830027 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=17
> Dec 3 02:11:26 kernel: rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
> Dec 3 02:11:26 kernel: rcu: RCU grace-period kthread stack dump:
> Dec 3 02:11:26 kernel: task:rcu_preempt state:R stack:0 pid:18 tgid:18 ppid:2 flags:0x00004008
> Dec 3 02:11:26 kernel: Call Trace:
> Dec 3 02:11:26 kernel: <TASK>
> Dec 3 02:11:26 kernel: ? __pick_next_task+0x3e/0x1a0
> Dec 3 02:11:26 kernel: ? __schedule+0xfe/0x620
> Dec 3 02:11:26 kernel: ? _raw_spin_unlock_irqrestore+0xa/0x30
> Dec 3 02:11:26 kernel: ? __pfx_rcu_gp_kthread+0x10/0x10
> Dec 3 02:11:26 kernel: ? schedule+0x23/0xa0
> Dec 3 02:11:26 kernel: ? schedule_timeout+0x8b/0x160
> Dec 3 02:11:26 kernel: ? __pfx_process_timeout+0x10/0x10
> Dec 3 02:11:26 kernel: ? rcu_gp_fqs_loop+0x10b/0x500
> Dec 3 02:11:26 kernel: ? rcu_gp_kthread+0x13f/0x1d0
> Dec 3 02:11:26 kernel: ? kthread+0xcc/0x100
> Dec 3 02:11:26 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:11:26 kernel: ? ret_from_fork+0x2d/0x50
> Dec 3 02:11:26 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:11:26 kernel: ? ret_from_fork_asm+0x1a/0x30
> Dec 3 02:11:26 kernel: </TASK>
> Dec 3 02:14:36 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> Dec 3 02:14:36 kernel: rcu: 	17-...!: (0 ticks this GP) idle=b124/1/0x4000000000000002 softirq=2109356/2109356 fqs=0
> Dec 3 02:14:36 kernel: rcu: 	(detected by 29, t=1010032 jiffies, g=11126601, q=6331978 ncpus=40)
> Dec 3 02:14:36 kernel: Sending NMI from CPU 29 to CPUs 17:
> Dec 3 02:14:36 kernel: rcu: rcu_preempt kthread timer wakeup didn't happen for 1020029 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200
> Dec 3 02:14:36 kernel: rcu: 	Possible timer handling issue on cpu=17 timer-softirq=273727
> Dec 3 02:14:36 kernel: rcu: rcu_preempt kthread starved for 1020032 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=17
> Dec 3 02:14:36 kernel: rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
> Dec 3 02:14:36 kernel: rcu: RCU grace-period kthread stack dump:
> Dec 3 02:14:36 kernel: task:rcu_preempt state:R stack:0 pid:18 tgid:18 ppid:2 flags:0x00004008
> Dec 3 02:14:36 kernel: Call Trace:
> Dec 3 02:14:36 kernel: <TASK>
> Dec 3 02:14:36 kernel: ? __pick_next_task+0x3e/0x1a0
> Dec 3 02:14:36 kernel: ? __schedule+0xfe/0x620
> Dec 3 02:14:36 kernel: ? _raw_spin_unlock_irqrestore+0xa/0x30
> Dec 3 02:14:36 kernel: ? __pfx_rcu_gp_kthread+0x10/0x10
> Dec 3 02:14:36 kernel: ? schedule+0x23/0xa0
> Dec 3 02:14:36 kernel: ? schedule_timeout+0x8b/0x160
> Dec 3 02:14:36 kernel: ? __pfx_process_timeout+0x10/0x10
> Dec 3 02:14:36 kernel: ? rcu_gp_fqs_loop+0x10b/0x500
> Dec 3 02:14:36 kernel: ? rcu_gp_kthread+0x13f/0x1d0
> Dec 3 02:14:36 kernel: ? kthread+0xcc/0x100
> Dec 3 02:14:36 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:14:36 kernel: ? ret_from_fork+0x2d/0x50
> Dec 3 02:14:36 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:14:36 kernel: ? ret_from_fork_asm+0x1a/0x30
> Dec 3 02:14:36 kernel: </TASK>
> Dec 3 02:17:46 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> Dec 3 02:17:46 kernel: rcu: #01117-...!: (0 ticks this GP) idle=b124/1/0x4000000000000002 softirq=2109356/2109356 fqs=0
> Dec 3 02:17:46 kernel: rcu: #011(detected by 8, t=1200037 jiffies, g=11126601, q=7535150 ncpus=40)
> Dec 3 02:17:46 kernel: Sending NMI from CPU 8 to CPUs 17:
> Dec 3 02:17:46 kernel: rcu: rcu_preempt kthread timer wakeup didn't happen for 1210034 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200
> Dec 3 02:17:46 kernel: rcu: #011Possible timer handling issue on cpu=17 timer-softirq=273727
> Dec 3 02:17:46 kernel: rcu: rcu_preempt kthread starved for 1210037 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=17
> Dec 3 02:17:46 kernel: rcu: #011Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
> Dec 3 02:17:46 kernel: rcu: RCU grace-period kthread stack dump:
> Dec 3 02:17:46 kernel: task:rcu_preempt state:R stack:0 pid:18 tgid:18 ppid:2 flags:0x00004008
> Dec 3 02:17:46 kernel: Call Trace:
> Dec 3 02:17:46 kernel: <TASK>
> Dec 3 02:17:46 kernel: ? __pick_next_task+0x3e/0x1a0
> Dec 3 02:17:46 kernel: ? __schedule+0xfe/0x620
> Dec 3 02:17:46 kernel: ? _raw_spin_unlock_irqrestore+0xa/0x30
> Dec 3 02:17:46 kernel: ? __pfx_rcu_gp_kthread+0x10/0x10
> Dec 3 02:17:46 kernel: ? schedule+0x23/0xa0
> Dec 3 02:17:46 kernel: ? schedule_timeout+0x8b/0x160
> Dec 3 02:17:46 kernel: ? __pfx_process_timeout+0x10/0x10
> Dec 3 02:17:46 kernel: ? rcu_gp_fqs_loop+0x10b/0x500
> Dec 3 02:17:46 kernel: ? rcu_gp_kthread+0x13f/0x1d0
> Dec 3 02:17:46 kernel: ? kthread+0xcc/0x100
> Dec 3 02:17:46 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:17:46 kernel: ? ret_from_fork+0x2d/0x50
> Dec 3 02:17:46 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:17:46 kernel: ? ret_from_fork_asm+0x1a/0x30
> Dec 3 02:17:46 kernel: </TASK>
> Dec 3 02:20:56 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> Dec 3 02:20:56 kernel: rcu: #01117-...!: (0 ticks this GP) idle=b124/1/0x4000000000000002 softirq=2109356/2109356 fqs=0
> Dec 3 02:20:56 kernel: rcu: #011(detected by 8, t=1390043 jiffies, g=11126601, q=8740806 ncpus=40)
> Dec 3 02:20:56 kernel: Sending NMI from CPU 8 to CPUs 17:
> Dec 3 02:20:56 kernel: rcu: rcu_preempt kthread timer wakeup didn't happen for 1400040 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200
> Dec 3 02:20:56 kernel: rcu: #011Possible timer handling issue on cpu=17 timer-softirq=273727
> Dec 3 02:20:56 kernel: rcu: rcu_preempt kthread starved for 1400043 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=17
> Dec 3 02:20:56 kernel: rcu: #011Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
> Dec 3 02:20:56 kernel: rcu: RCU grace-period kthread stack dump:
> Dec 3 02:20:56 kernel: task:rcu_preempt state:R stack:0 pid:18 tgid:18 ppid:2 flags:0x00004008
> Dec 3 02:20:56 kernel: Call Trace:
> Dec 3 02:20:56 kernel: <TASK>
> Dec 3 02:20:56 kernel: ? __pick_next_task+0x3e/0x1a0
> Dec 3 02:20:56 kernel: ? __schedule+0xfe/0x620
> Dec 3 02:20:56 kernel: ? _raw_spin_unlock_irqrestore+0xa/0x30
> Dec 3 02:20:56 kernel: ? __pfx_rcu_gp_kthread+0x10/0x10
> Dec 3 02:20:56 kernel: ? schedule+0x23/0xa0
> Dec 3 02:20:56 kernel: ? schedule_timeout+0x8b/0x160
> Dec 3 02:20:56 kernel: ? __pfx_process_timeout+0x10/0x10
> Dec 3 02:20:56 kernel: ? rcu_gp_fqs_loop+0x10b/0x500
> Dec 3 02:20:56 kernel: ? rcu_gp_kthread+0x13f/0x1d0
> Dec 3 02:20:56 kernel: ? kthread+0xcc/0x100
> Dec 3 02:20:56 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:20:56 kernel: ? ret_from_fork+0x2d/0x50
> Dec 3 02:20:56 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:20:56 kernel: ? ret_from_fork_asm+0x1a/0x30
> Dec 3 02:20:56 kernel: </TASK>
> Dec 3 02:24:06 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> Dec 3 02:24:06 kernel: rcu: #01117-...!: (0 ticks this GP) idle=b124/1/0x4000000000000002 softirq=2109356/2109356 fqs=0
> Dec 3 02:24:06 kernel: rcu: #011(detected by 23, t=1580048 jiffies, g=11126601, q=9924647 ncpus=40)
> Dec 3 02:24:06 kernel: Sending NMI from CPU 23 to CPUs 17:
> Dec 3 02:24:06 kernel: rcu: rcu_preempt kthread timer wakeup didn't happen for 1590045 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200
> Dec 3 02:24:06 kernel: rcu: #011Possible timer handling issue on cpu=17 timer-softirq=273727
> Dec 3 02:24:06 kernel: rcu: rcu_preempt kthread starved for 1590048 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=17
> Dec 3 02:24:06 kernel: rcu: #011Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
> Dec 3 02:24:06 kernel: rcu: RCU grace-period kthread stack dump:
> Dec 3 02:24:06 kernel: task:rcu_preempt state:R stack:0 pid:18 tgid:18 ppid:2 flags:0x00004008
> Dec 3 02:24:06 kernel: Call Trace:
> Dec 3 02:24:06 kernel: <TASK>
> Dec 3 02:24:06 kernel: ? __pick_next_task+0x3e/0x1a0
> Dec 3 02:24:06 kernel: ? __schedule+0xfe/0x620
> Dec 3 02:24:06 kernel: ? _raw_spin_unlock_irqrestore+0xa/0x30
> Dec 3 02:24:06 kernel: ? __pfx_rcu_gp_kthread+0x10/0x10
> Dec 3 02:24:06 kernel: ? schedule+0x23/0xa0
> Dec 3 02:24:06 kernel: ? schedule_timeout+0x8b/0x160
> Dec 3 02:24:06 kernel: ? __pfx_process_timeout+0x10/0x10
> Dec 3 02:24:06 kernel: ? rcu_gp_fqs_loop+0x10b/0x500
> Dec 3 02:24:06 kernel: ? rcu_gp_kthread+0x13f/0x1d0
> Dec 3 02:24:06 kernel: ? kthread+0xcc/0x100
> Dec 3 02:24:06 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:24:06 kernel: ? ret_from_fork+0x2d/0x50
> Dec 3 02:24:06 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:24:06 kernel: ? ret_from_fork_asm+0x1a/0x30
> Dec 3 02:24:06 kernel: </TASK>
> Dec 3 02:27:16 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> Dec 3 02:27:16 kernel: rcu: #01117-...!: (0 ticks this GP) idle=b124/1/0x4000000000000002 softirq=2109356/2109356 fqs=0
> Dec 3 02:27:16 kernel: rcu: #011(detected by 18, t=1770053 jiffies, g=11126601, q=11108597 ncpus=40)
> Dec 3 02:27:16 kernel: Sending NMI from CPU 18 to CPUs 17:
> Dec 3 02:27:16 kernel: rcu: rcu_preempt kthread timer wakeup didn't happen for 1780050 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200
> Dec 3 02:27:16 kernel: rcu: #011Possible timer handling issue on cpu=17 timer-softirq=273727
> Dec 3 02:27:16 kernel: rcu: rcu_preempt kthread starved for 1780053 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=17
> Dec 3 02:27:16 kernel: rcu: #011Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
> Dec 3 02:27:16 kernel: rcu: RCU grace-period kthread stack dump:
> Dec 3 02:27:16 kernel: task:rcu_preempt state:R stack:0 pid:18 tgid:18 ppid:2 flags:0x00004008
> Dec 3 02:27:16 kernel: Call Trace:
> Dec 3 02:27:16 kernel: <TASK>
> Dec 3 02:27:16 kernel: ? __pick_next_task+0x3e/0x1a0
> Dec 3 02:27:16 kernel: ? __schedule+0xfe/0x620
> Dec 3 02:27:16 kernel: ? _raw_spin_unlock_irqrestore+0xa/0x30
> Dec 3 02:27:16 kernel: ? __pfx_rcu_gp_kthread+0x10/0x10
> Dec 3 02:27:16 kernel: ? schedule+0x23/0xa0
> Dec 3 02:27:16 kernel: ? schedule_timeout+0x8b/0x160
> Dec 3 02:27:16 kernel: ? __pfx_process_timeout+0x10/0x10
> Dec 3 02:27:16 kernel: ? rcu_gp_fqs_loop+0x10b/0x500
> Dec 3 02:27:16 kernel: ? rcu_gp_kthread+0x13f/0x1d0
> Dec 3 02:27:16 kernel: ? kthread+0xcc/0x100
> Dec 3 02:27:16 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:27:16 kernel: ? ret_from_fork+0x2d/0x50
> Dec 3 02:27:16 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:27:16 kernel: ? ret_from_fork_asm+0x1a/0x30
> Dec 3 02:27:16 kernel: </TASK>
> Dec 3 02:30:26 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> Dec 3 02:30:26 kernel: rcu: #01117-...!: (0 ticks this GP) idle=b124/1/0x4000000000000002 softirq=2109356/2109356 fqs=0
> Dec 3 02:30:26 kernel: rcu: #011(detected by 1, t=1960059 jiffies, g=11126601, q=12298028 ncpus=40)
> Dec 3 02:30:26 kernel: Sending NMI from CPU 1 to CPUs 17:
> Dec 3 02:30:26 kernel: rcu: rcu_preempt kthread timer wakeup didn't happen for 1970056 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200
> Dec 3 02:30:26 kernel: rcu: #011Possible timer handling issue on cpu=17 timer-softirq=273727
> Dec 3 02:30:26 kernel: rcu: rcu_preempt kthread starved for 1970059 jiffies! g11126601 f0x2 RCU_GP_WAIT_FQS(5) ->state=0x200 ->cpu=17
> Dec 3 02:30:26 kernel: rcu: #011Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
> Dec 3 02:30:26 kernel: rcu: RCU grace-period kthread stack dump:
> Dec 3 02:30:26 kernel: task:rcu_preempt state:R stack:0 pid:18 tgid:18 ppid:2 flags:0x00004008
> Dec 3 02:30:26 kernel: Call Trace:
> Dec 3 02:30:26 kernel: <TASK>
> Dec 3 02:30:26 kernel: ? __pick_next_task+0x3e/0x1a0
> Dec 3 02:30:26 kernel: ? __schedule+0xfe/0x620
> Dec 3 02:30:26 kernel: ? _raw_spin_unlock_irqrestore+0xa/0x30
> Dec 3 02:30:26 kernel: ? __pfx_rcu_gp_kthread+0x10/0x10
> Dec 3 02:30:26 kernel: ? schedule+0x23/0xa0
> Dec 3 02:30:26 kernel: ? schedule_timeout+0x8b/0x160
> Dec 3 02:30:26 kernel: ? __pfx_process_timeout+0x10/0x10
> Dec 3 02:30:26 kernel: ? rcu_gp_fqs_loop+0x10b/0x500
> Dec 3 02:30:26 kernel: ? rcu_gp_kthread+0x13f/0x1d0
> Dec 3 02:30:26 kernel: ? kthread+0xcc/0x100
> Dec 3 02:30:26 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:30:26 kernel: ? ret_from_fork+0x2d/0x50
> Dec 3 02:30:26 kernel: ? __pfx_kthread+0x10/0x10
> Dec 3 02:30:26 kernel: ? ret_from_fork_asm+0x1a/0x30
> Dec 3 02:30:26 kernel: </TASK>
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [v6.12] WARNING: at kernel/sched/deadline.c:1995 enqueue_dl_entity (task blocked for more than 28262 seconds)
2024-12-06 15:18 ` Joel Fernandes
@ 2024-12-06 16:57 ` Vineeth Remanan Pillai
2024-12-06 17:24 ` Joel Fernandes
2024-12-09 10:55 ` Peter Zijlstra
0 siblings, 2 replies; 22+ messages in thread
From: Vineeth Remanan Pillai @ 2024-12-06 16:57 UTC (permalink / raw)
To: Joel Fernandes
Cc: Ilya Maximets, LKML, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Vincent Guittot, vineethrp, shraash
On Fri, Dec 6, 2024 at 10:18 AM Joel Fernandes <joel@joelfernandes.org> wrote:
>
> On Wed, Dec 04, 2024 at 01:47:44PM +0100, Ilya Maximets wrote:
> > Hi. It seems like I'm hitting some bug in the scheduler.
> >
> > I'm running some tests with Open vSwitch on a v6.12 kernel, and some 5 to 8
> > hours down the line I'm getting task-blocked splats; I also have a WARNING
> > triggered in the scheduler code right before that:
> >
> > Dec 3 22:19:55 kernel: WARNING: CPU: 27 PID: 3391271 at kernel/sched/deadline.c:1995 enqueue_dl_entity
> >
> > I have a lot of processes (kernel threads and userspace threads) stuck
> > in DN, Ds, D+ and D states. It feels like I/O tasks are being scheduled,
> > but the scheduler never picks them up, or they are not being scheduled at all
> > for whatever reason, and the threads waiting on these tasks are stuck.
> >
> > Dec 3 22:22:45 kernel: INFO: task khugepaged:330 blocked for more than 122 seconds.
> > Dec 3 22:22:45 kernel: INFO: task ovs-monitor-ips:3479822 blocked for more than 122 seconds.
> > Dec 3 22:22:45 kernel: INFO: task mv:3483072 blocked for more than 122 seconds.
> > Dec 3 22:24:48 kernel: INFO: task khugepaged:330 blocked for more than 245 seconds.
> > Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3479822 blocked for more than 245 seconds.
> > Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3480383 blocked for more than 122 seconds.
> > Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3481787 blocked for more than 122 seconds.
> > Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3482631 blocked for more than 122 seconds.
> > Dec 3 22:24:48 kernel: INFO: task mv:3483072 blocked for more than 245 seconds.
> > Dec 3 22:26:51 kernel: INFO: task khugepaged:330 blocked for more than 368 seconds.
> > ...
> > Dec 4 06:11:45 kernel: INFO: task khugepaged:330 blocked for more than 28262 seconds.
> >
> > I have two separate instances where this behavior is reproduced. One is mostly
> > around file systems, the other was more severe as multiple kernel threads got
> > stuck in netlink code. The traces do not have much in common, except that most
> > of the blocked tasks are in scheduling. The system is also idle, nothing is really
> > running. Some of these tasks are holding resources that make other tasks
> > block on those resources as well.
> >
> > I seem to be able to reproduce the issue, but it takes 5-8 hours to do so.
> >
>
> CC'ing a few more from my team as well.
>
> We haven't seen such an issue with the DL server, but we are also testing on
> slightly older kernels.
>
> It's coming from:
> WARN_ON_ONCE(on_dl_rq(dl_se));
>
Thanks for including me Joel :-)
I was able to reproduce this WARN_ON a couple of days back with
syzkaller. The dlserver's dl_se gets enqueued during an update_curr
while the dlserver is stopped, and a subsequent dlserver start will
then cause a double enqueue. On the periphery, we don't directly track
whether the dlserver is active or not, and explicit tracking could
solve this issue. But the root cause runs a little deeper, and I think
I understood the real cause. I have a potential fix and am doing more
testing to verify. Will send the fix out soon after a bit more
verification.
Thanks,
Vineeth
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [v6.12] WARNING: at kernel/sched/deadline.c:1995 enqueue_dl_entity (task blocked for more than 28262 seconds)
2024-12-06 16:57 ` Vineeth Remanan Pillai
@ 2024-12-06 17:24 ` Joel Fernandes
2024-12-09 10:48 ` Juri Lelli
2024-12-09 10:55 ` Peter Zijlstra
1 sibling, 1 reply; 22+ messages in thread
From: Joel Fernandes @ 2024-12-06 17:24 UTC (permalink / raw)
To: Vineeth Remanan Pillai
Cc: Ilya Maximets, LKML, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Vincent Guittot, vineethrp, shraash
On Fri, Dec 6, 2024 at 11:57 AM Vineeth Remanan Pillai
<vineeth@bitbyteword.org> wrote:
>
> On Fri, Dec 6, 2024 at 10:18 AM Joel Fernandes <joel@joelfernandes.org> wrote:
> >
> > On Wed, Dec 04, 2024 at 01:47:44PM +0100, Ilya Maximets wrote:
> > > Hi. It seems like I'm hitting some bug in the scheduler.
> > >
> > > I'm running some tests with Open vSwitch on a v6.12 kernel, and some 5 to 8
> > > hours down the line I'm getting task-blocked splats; I also have a WARNING
> > > triggered in the scheduler code right before that:
> > >
> > > Dec 3 22:19:55 kernel: WARNING: CPU: 27 PID: 3391271 at kernel/sched/deadline.c:1995 enqueue_dl_entity
> > >
> > > I have a lot of processes (kernel threads and userspace threads) stuck
> > > in DN, Ds, D+ and D states. It feels like I/O tasks are being scheduled,
> > > but the scheduler never picks them up, or they are not being scheduled at all
> > > for whatever reason, and the threads waiting on these tasks are stuck.
> > >
> > > Dec 3 22:22:45 kernel: INFO: task khugepaged:330 blocked for more than 122 seconds.
> > > Dec 3 22:22:45 kernel: INFO: task ovs-monitor-ips:3479822 blocked for more than 122 seconds.
> > > Dec 3 22:22:45 kernel: INFO: task mv:3483072 blocked for more than 122 seconds.
> > > Dec 3 22:24:48 kernel: INFO: task khugepaged:330 blocked for more than 245 seconds.
> > > Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3479822 blocked for more than 245 seconds.
> > > Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3480383 blocked for more than 122 seconds.
> > > Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3481787 blocked for more than 122 seconds.
> > > Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3482631 blocked for more than 122 seconds.
> > > Dec 3 22:24:48 kernel: INFO: task mv:3483072 blocked for more than 245 seconds.
> > > Dec 3 22:26:51 kernel: INFO: task khugepaged:330 blocked for more than 368 seconds.
> > > ...
> > > Dec 4 06:11:45 kernel: INFO: task khugepaged:330 blocked for more than 28262 seconds.
> > >
> > > I have two separate instances where this behavior is reproduced. One is mostly
> > > around file systems, the other was more severe as multiple kernel threads got
> > > stuck in netlink code. The traces do not have much in common, except that most
> > > of the blocked tasks are in scheduling. The system is also idle, nothing is really
> > > running. Some of these tasks are holding resources that make other tasks
> > > block on those resources as well.
> > >
> > > I seem to be able to reproduce the issue, but it takes 5-8 hours to do so.
> > >
> >
> > CC'ing a few more from my team as well.
> >
> > We haven't seen such an issue with the DL server, but we are also testing on
> > slightly older kernels.
> >
> > It's coming from:
> > WARN_ON_ONCE(on_dl_rq(dl_se));
> >
>
> Thanks for including me Joel :-)
>
> I was able to reproduce this WARN_ON a couple of days back with
> syzkaller. The dlserver's dl_se gets enqueued during an update_curr
> while the dlserver is stopped, and a subsequent dlserver start will
> then cause a double enqueue. On the periphery, we don't directly track
> whether the dlserver is active or not, and explicit tracking could
> solve this issue. But the root cause runs a little deeper, and I think
> I understood the real cause. I have a potential fix and am doing more
> testing to verify. Will send the fix out soon after a bit more
> verification.
Oh, so we _have_ seen this issue :-). Thanks Vineeth, looking forward
to your fix! By the way, I do now remember some variation of this that
happened a long time ago, but I thought it was fixed.
- Joel
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [v6.12] WARNING: at kernel/sched/deadline.c:1995 enqueue_dl_entity (task blocked for more than 28262 seconds)
2024-12-06 17:24 ` Joel Fernandes
@ 2024-12-09 10:48 ` Juri Lelli
0 siblings, 0 replies; 22+ messages in thread
From: Juri Lelli @ 2024-12-09 10:48 UTC (permalink / raw)
To: Joel Fernandes
Cc: Vineeth Remanan Pillai, Ilya Maximets, LKML, Ingo Molnar,
Peter Zijlstra, Vincent Guittot, vineethrp, shraash
On 06/12/24 12:24, Joel Fernandes wrote:
> On Fri, Dec 6, 2024 at 11:57 AM Vineeth Remanan Pillai
> <vineeth@bitbyteword.org> wrote:
> >
> > On Fri, Dec 6, 2024 at 10:18 AM Joel Fernandes <joel@joelfernandes.org> wrote:
> > >
> > > On Wed, Dec 04, 2024 at 01:47:44PM +0100, Ilya Maximets wrote:
> > > > Hi. It seems like I'm hitting some bug in the scheduler.
> > > >
> > > > I'm running some tests with Open vSwitch on a v6.12 kernel, and some 5 to 8
> > > > hours down the line I'm getting task-blocked splats; I also have a WARNING
> > > > triggered in the scheduler code right before that:
> > > >
> > > > Dec 3 22:19:55 kernel: WARNING: CPU: 27 PID: 3391271 at kernel/sched/deadline.c:1995 enqueue_dl_entity
> > > >
> > > > I have a lot of processes (kernel threads and userspace threads) stuck
> > > > in DN, Ds, D+ and D states. It feels like I/O tasks are being scheduled,
> > > > but the scheduler never picks them up, or they are not being scheduled at all
> > > > for whatever reason, and the threads waiting on these tasks are stuck.
> > > >
> > > > Dec 3 22:22:45 kernel: INFO: task khugepaged:330 blocked for more than 122 seconds.
> > > > Dec 3 22:22:45 kernel: INFO: task ovs-monitor-ips:3479822 blocked for more than 122 seconds.
> > > > Dec 3 22:22:45 kernel: INFO: task mv:3483072 blocked for more than 122 seconds.
> > > > Dec 3 22:24:48 kernel: INFO: task khugepaged:330 blocked for more than 245 seconds.
> > > > Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3479822 blocked for more than 245 seconds.
> > > > Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3480383 blocked for more than 122 seconds.
> > > > Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3481787 blocked for more than 122 seconds.
> > > > Dec 3 22:24:48 kernel: INFO: task ovs-monitor-ips:3482631 blocked for more than 122 seconds.
> > > > Dec 3 22:24:48 kernel: INFO: task mv:3483072 blocked for more than 245 seconds.
> > > > Dec 3 22:26:51 kernel: INFO: task khugepaged:330 blocked for more than 368 seconds.
> > > > ...
> > > > Dec 4 06:11:45 kernel: INFO: task khugepaged:330 blocked for more than 28262 seconds.
> > > >
> > > > I have two separate instances where this behavior is reproduced. One is mostly
> > > > around file systems, the other was more severe as multiple kernel threads got
> > > > stuck in netlink code. The traces do not have much in common, except that most
> > > > of the blocked tasks are in scheduling. The system is also idle, nothing is really
> > > > running. Some of these tasks are holding resources that make other tasks
> > > > block on those resources as well.
> > > >
> > > > I seem to be able to reproduce the issue, but it takes 5-8 hours to do so.
> > > >
> > >
> > > CC'ing a few more from my team as well.
> > >
> > > We haven't seen such an issue with the DL server, but we are also testing on
> > > slightly older kernels.
> > >
> > > It's coming from:
> > > WARN_ON_ONCE(on_dl_rq(dl_se));
> > >
> >
> > Thanks for including me Joel :-)
> >
> > I was able to reproduce this WARN_ON a couple of days back with
> > syzkaller. The dlserver's dl_se gets enqueued during an update_curr
> > while the dlserver is stopped, and a subsequent dlserver start will
> > then cause a double enqueue. On the periphery, we don't directly track
> > whether the dlserver is active or not, and explicit tracking could
> > solve this issue. But the root cause runs a little deeper, and I think
> > I understood the real cause. I have a potential fix and am doing more
> > testing to verify. Will send the fix out soon after a bit more
> > verification.
>
> Oh, so we _have_ seen this issue :-). Thanks Vineeth, looking forward
> to your fix! By the way, I do remember now some variation of this that
> happened a long time ago but I thought it was fixed.
Hey folks. We have been looking into this as well (there are also private
conversations going on with Peter on IRC), but we don't seem to
have a fix for it yet. So, Vineeth, if you have a fix, please send it out
when you feel comfortable with it. And thanks a lot for looking into
this! :)
Best,
Juri
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [v6.12] WARNING: at kernel/sched/deadline.c:1995 enqueue_dl_entity (task blocked for more than 28262 seconds)
2024-12-06 16:57 ` Vineeth Remanan Pillai
2024-12-06 17:24 ` Joel Fernandes
@ 2024-12-09 10:55 ` Peter Zijlstra
2024-12-09 12:29 ` Vineeth Remanan Pillai
1 sibling, 1 reply; 22+ messages in thread
From: Peter Zijlstra @ 2024-12-09 10:55 UTC (permalink / raw)
To: Vineeth Remanan Pillai
Cc: Joel Fernandes, Ilya Maximets, LKML, Ingo Molnar, Juri Lelli,
Vincent Guittot, vineethrp, shraash
On Fri, Dec 06, 2024 at 11:57:30AM -0500, Vineeth Remanan Pillai wrote:
> I was able to reproduce this WARN_ON a couple of days back with
> syzkaller. The dlserver's dl_se gets enqueued during an update_curr
> while the dlserver is stopped, and a subsequent dlserver start will
> then cause a double enqueue.
Right, I spotted that hole late last week. There is this thread:
https://lore.kernel.org/all/20241209094941.GF21636@noisy.programming.kicks-ass.net/T/#u
Where I just added this hunk:
@@ -1674,6 +1679,12 @@ void dl_server_start(struct sched_dl_entity *dl_se)
void dl_server_stop(struct sched_dl_entity *dl_se)
{
+ if (current->dl_server == dl_se) {
+ struct rq *rq = rq_of_dl_se(dl_se);
+ trace_printk("stop fair server %d\n", cpu_of(rq));
+ current->dl_server = NULL;
+ }
+
if (!dl_se->dl_runtime)
return;
Which was my attempt at plugging said hole. But since I do not have
means of reproduction, I'm not at all sure it is sufficient :/
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [v6.12] WARNING: at kernel/sched/deadline.c:1995 enqueue_dl_entity (task blocked for more than 28262 seconds)
2024-12-09 10:55 ` Peter Zijlstra
@ 2024-12-09 12:29 ` Vineeth Remanan Pillai
2024-12-09 12:34 ` Ilya Maximets
2024-12-09 12:56 ` Peter Zijlstra
0 siblings, 2 replies; 22+ messages in thread
From: Vineeth Remanan Pillai @ 2024-12-09 12:29 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Joel Fernandes, Ilya Maximets, LKML, Ingo Molnar, Juri Lelli,
Vincent Guittot, vineethrp, shraash
On Mon, Dec 9, 2024 at 5:55 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Fri, Dec 06, 2024 at 11:57:30AM -0500, Vineeth Remanan Pillai wrote:
>
> > I was able to reproduce this WARN_ON a couple of days back with
> > syzkaller. The dlserver's dl_se gets enqueued during an update_curr
> > while the dlserver is stopped, and a subsequent dlserver start will
> > then cause a double enqueue.
>
> Right, I spotted that hole late last week. There is this thread:
>
> https://lore.kernel.org/all/20241209094941.GF21636@noisy.programming.kicks-ass.net/T/#u
>
> Where I just added this hunk:
>
> @@ -1674,6 +1679,12 @@ void dl_server_start(struct sched_dl_entity *dl_se)
>
> void dl_server_stop(struct sched_dl_entity *dl_se)
> {
> + if (current->dl_server == dl_se) {
> + struct rq *rq = rq_of_dl_se(dl_se);
> + trace_printk("stop fair server %d\n", cpu_of(rq));
> + current->dl_server = NULL;
> + }
> +
> if (!dl_se->dl_runtime)
> return;
>
> Which was my attempt at plugging said hole. But since I do not have
> means of reproduction, I'm not at all sure it is sufficient :/
>
I think I was able to get to the root cause last week. The issue
seems to be that the dlserver is stopped in the pick_task path of the
dlserver itself when sched_delayed is set:
__pick_next_task
=> pick_task_dl -> server_pick_task
=> pick_task_fair
=> pick_next_entity (if (sched_delayed))
=> dequeue_entities
=> dl_server_stop
Now server_pick_task returns NULL, and then we set dl_yielded and call
update_curr_dl_se. But dl_se is already dequeued, and now the code is
confused and does all sorts of things, including setting a timer to
enqueue it back. This ultimately leads to a double enqueue the next time
the dlserver is started (depending on the timing of dl_server_start).
I think we should not call update_curr_dl_se when the dlserver is
dequeued. Based on this, I have a small patch and it seems to solve the
issue:
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index d9d5a702f1a6..a9f3f020e421 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2419,12 +2419,18 @@ static struct task_struct *__pick_task_dl(struct rq *rq)
if (dl_server(dl_se)) {
p = dl_se->server_pick_task(dl_se);
- if (!p) {
+ if (p) {
+ rq->dl_server = dl_se;
+ } else if (WARN_ON_ONCE(on_dl_rq(dl_se))) {
+ /*
+ * If server_pick_task returns NULL and dlserver is
+ * enqueued, we have a problem. Lets yield and do a
+ * pick again.
+ */
dl_se->dl_yielded = 1;
update_curr_dl_se(rq, dl_se, 0);
goto again;
}
- rq->dl_server = dl_se;
} else {
p = dl_task_of(dl_se);
}
I can send a formal patch if this looks okay to you all.
Thanks,
Vineeth
^ permalink raw reply related [flat|nested] 22+ messages in thread
* Re: [v6.12] WARNING: at kernel/sched/deadline.c:1995 enqueue_dl_entity (task blocked for more than 28262 seconds)
2024-12-09 12:29 ` Vineeth Remanan Pillai
@ 2024-12-09 12:34 ` Ilya Maximets
2024-12-10 0:31 ` Ilya Maximets
2024-12-09 12:56 ` Peter Zijlstra
1 sibling, 1 reply; 22+ messages in thread
From: Ilya Maximets @ 2024-12-09 12:34 UTC (permalink / raw)
To: Vineeth Remanan Pillai, Peter Zijlstra
Cc: i.maximets, Joel Fernandes, LKML, Ingo Molnar, Juri Lelli,
Vincent Guittot, vineethrp, shraash
On 12/9/24 13:29, Vineeth Remanan Pillai wrote:
> On Mon, Dec 9, 2024 at 5:55 AM Peter Zijlstra <peterz@infradead.org> wrote:
>>
>> On Fri, Dec 06, 2024 at 11:57:30AM -0500, Vineeth Remanan Pillai wrote:
>>
>>> I was able to reproduce this WARN_ON a couple of days back with
>>> syzkaller. The dlserver's dl_se gets enqueued during an update_curr
>>> while the dlserver is stopped, and a subsequent dlserver start will
>>> then cause a double enqueue.
>>
>> Right, I spotted that hole late last week. There is this thread:
>>
>> https://lore.kernel.org/all/20241209094941.GF21636@noisy.programming.kicks-ass.net/T/#u
>>
>> Where I just added this hunk:
>>
>> @@ -1674,6 +1679,12 @@ void dl_server_start(struct sched_dl_entity *dl_se)
>>
>> void dl_server_stop(struct sched_dl_entity *dl_se)
>> {
>> + if (current->dl_server == dl_se) {
>> + struct rq *rq = rq_of_dl_se(dl_se);
>> + trace_printk("stop fair server %d\n", cpu_of(rq));
>> + current->dl_server = NULL;
>> + }
>> +
>> if (!dl_se->dl_runtime)
>> return;
>>
>> Which was my attempt at plugging said hole. But since I do not have
>> means of reproduction, I'm not at all sure it is sufficient :/
>>
> I think I was able to get to the root cause last week. The issue
> seems to be that the dlserver is stopped in the pick_task path of the
> dlserver itself when sched_delayed is set:
> __pick_next_task
> => pick_task_dl -> server_pick_task
> => pick_task_fair
> => pick_next_entity (if (sched_delayed))
> => dequeue_entities
> => dl_server_stop
>
> Now server_pick_task returns NULL, and then we set dl_yielded and call
> update_curr_dl_se. But dl_se is already dequeued, and now the code is
> confused and does all sorts of things, including setting a timer to
> enqueue it back. This ultimately leads to a double enqueue the next time
> the dlserver is started (depending on the timing of dl_server_start).
>
> I think we should not call update_curr_dl_se when the dlserver is
> dequeued. Based on this, I have a small patch and it seems to solve the
> issue:
>
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index d9d5a702f1a6..a9f3f020e421 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -2419,12 +2419,18 @@ static struct task_struct *__pick_task_dl(struct rq *rq)
>
> if (dl_server(dl_se)) {
> p = dl_se->server_pick_task(dl_se);
> - if (!p) {
> + if (p) {
> + rq->dl_server = dl_se;
> + } else if (WARN_ON_ONCE(on_dl_rq(dl_se))) {
> + /*
> + * If server_pick_task returns NULL and dlserver is
> + * enqueued, we have a problem. Lets yield and do a
> + * pick again.
> + */
> dl_se->dl_yielded = 1;
> update_curr_dl_se(rq, dl_se, 0);
> goto again;
> }
> - rq->dl_server = dl_se;
> } else {
> p = dl_task_of(dl_se);
> }
>
> I can send a formal patch if this looks okay to you all.
Thanks!
I can try this out on my setup tonight (it takes a long time
to reproduce the issue on my setup).
Best regards, Ilya Maximets.
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [v6.12] WARNING: at kernel/sched/deadline.c:1995 enqueue_dl_entity (task blocked for more than 28262 seconds)
2024-12-09 12:29 ` Vineeth Remanan Pillai
2024-12-09 12:34 ` Ilya Maximets
@ 2024-12-09 12:56 ` Peter Zijlstra
2024-12-09 13:56 ` Vineeth Remanan Pillai
` (2 more replies)
1 sibling, 3 replies; 22+ messages in thread
From: Peter Zijlstra @ 2024-12-09 12:56 UTC (permalink / raw)
To: Vineeth Remanan Pillai
Cc: Joel Fernandes, Ilya Maximets, LKML, Ingo Molnar, Juri Lelli,
Vincent Guittot, vineethrp, shraash, marcel.ziswiler
On Mon, Dec 09, 2024 at 07:29:52AM -0500, Vineeth Remanan Pillai wrote:
> On Mon, Dec 9, 2024 at 5:55 AM Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > On Fri, Dec 06, 2024 at 11:57:30AM -0500, Vineeth Remanan Pillai wrote:
> >
> > > I was able to reproduce this WARN_ON a couple of days back with
> > > syzkaller. The dlserver's dl_se gets enqueued during an update_curr
> > > while the dlserver is stopped, and a subsequent dlserver start will
> > > then cause a double enqueue.
> >
> > Right, I spotted that hole late last week. There is this thread:
> >
> > https://lore.kernel.org/all/20241209094941.GF21636@noisy.programming.kicks-ass.net/T/#u
> >
> > Where I just added this hunk:
> >
> > @@ -1674,6 +1679,12 @@ void dl_server_start(struct sched_dl_entity *dl_se)
> >
> > void dl_server_stop(struct sched_dl_entity *dl_se)
> > {
> > + if (current->dl_server == dl_se) {
> > + struct rq *rq = rq_of_dl_se(dl_se);
> > + trace_printk("stop fair server %d\n", cpu_of(rq));
> > + current->dl_server = NULL;
> > + }
> > +
> > if (!dl_se->dl_runtime)
> > return;
> >
> > Which was my attempt at plugging said hole. But since I do not have
> > means of reproduction, I'm not at all sure it is sufficient :/
> >
> I think I was able to get to the root cause last week. The issue
> seems to be that the dlserver is stopped in the pick_task path of the
> dlserver itself when sched_delayed is set:
> __pick_next_task
> => pick_task_dl -> server_pick_task
> => pick_task_fair
> => pick_next_entity (if (sched_delayed))
> => dequeue_entities
> => dl_server_stop
Ooh, that's where it happens.
So the scenario I had in mind was that we were doing something like:
current->state = TASK_INTERRUPTIBLE();
schedule();
deactivate_task()
dl_server_stop();
pick_next_task()
pick_next_task_fair()
sched_balance_newidle()
rq_unlock(this_rq)
at which point another CPU can take our RQ-lock and do:
try_to_wake_up()
ttwu_queue()
rq_lock()
...
activate_task()
dl_server_start()
wakeup_preempt() := check_preempt_wakeup_fair()
update_curr()
update_curr_task()
if (current->dl_server)
dl_server_update()
enqueue_dl_entity()
Which then also goes *bang*. The above can't happen if we clear
current->dl_server in dl_server_stop().
I was worried that might not be it, because Marcel had bisected it to
the delayed stuff, but I'd not managed to reach the pick site yet :/
> Now server_pick_task returns NULL, and then we set dl_yielded and call
> update_curr_dl_se. But dl_se is already dequeued, and now the code is
> confused and does all sorts of things, including setting a timer to
> enqueue it back. This ultimately leads to a double enqueue when the
> dlserver is started next time (based on the timing of dl_server_start).
>
> I think we should not call update_curr_dl_se when the dlserver is
> dequeued. Based on this I have a small patch and it seems to solve the
> issue:
>
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index d9d5a702f1a6..a9f3f020e421 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -2419,12 +2419,18 @@ static struct task_struct *__pick_task_dl(struct rq *rq)
>
> if (dl_server(dl_se)) {
> p = dl_se->server_pick_task(dl_se);
> - if (!p) {
> + if (p) {
> + rq->dl_server = dl_se;
> + } else if (WARN_ON_ONCE(on_dl_rq(dl_se))) {
> + /*
> + * If server_pick_task returns NULL and dlserver is
> + * enqueued, we have a problem. Lets yield and do a
> + * pick again.
> + */
> dl_se->dl_yielded = 1;
> update_curr_dl_se(rq, dl_se, 0);
> goto again;
> }
> - rq->dl_server = dl_se;
> } else {
> p = dl_task_of(dl_se);
> }
Hmm.. so fundamentally that yield() makes sense, but yeah, it's lost
track of the fact that we've stopped the server and it should not
continue.
Does something like the below make sense?
diff --git a/include/linux/sched.h b/include/linux/sched.h
index d380bffee2ef..abebeb67de4e 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -664,6 +664,7 @@ struct sched_dl_entity {
unsigned int dl_non_contending : 1;
unsigned int dl_overrun : 1;
unsigned int dl_server : 1;
+ unsigned int dl_server_active : 1;
unsigned int dl_defer : 1;
unsigned int dl_defer_armed : 1;
unsigned int dl_defer_running : 1;
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index d9d5a702f1a6..e2b542f684db 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1647,6 +1647,7 @@ void dl_server_start(struct sched_dl_entity *dl_se)
if (!dl_se->dl_runtime)
return;
+ dl_se->dl_server_active = 1;
enqueue_dl_entity(dl_se, ENQUEUE_WAKEUP);
if (!dl_task(dl_se->rq->curr) || dl_entity_preempt(dl_se, &rq->curr->dl))
resched_curr(dl_se->rq);
@@ -1661,6 +1662,7 @@ void dl_server_stop(struct sched_dl_entity *dl_se)
hrtimer_try_to_cancel(&dl_se->dl_timer);
dl_se->dl_defer_armed = 0;
dl_se->dl_throttled = 0;
+ dl_se->dl_server_active = 0;
}
void dl_server_init(struct sched_dl_entity *dl_se, struct rq *rq,
@@ -2420,8 +2422,10 @@ static struct task_struct *__pick_task_dl(struct rq *rq)
if (dl_server(dl_se)) {
p = dl_se->server_pick_task(dl_se);
if (!p) {
- dl_se->dl_yielded = 1;
- update_curr_dl_se(rq, dl_se, 0);
+ if (dl_se->dl_server_active) {
+ dl_se->dl_yielded = 1;
+ update_curr_dl_se(rq, dl_se, 0);
+ }
goto again;
}
rq->dl_server = dl_se;
* Re: [v6.12] WARNING: at kernel/sched/deadline.c:1995 enqueue_dl_entity (task blocked for more than 28262 seconds)
2024-12-09 12:56 ` Peter Zijlstra
@ 2024-12-09 13:56 ` Vineeth Remanan Pillai
2024-12-09 14:01 ` Peter Zijlstra
2024-12-10 0:34 ` Ilya Maximets
2024-12-10 16:08 ` Marcel Ziswiler
2 siblings, 1 reply; 22+ messages in thread
From: Vineeth Remanan Pillai @ 2024-12-09 13:56 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Joel Fernandes, Ilya Maximets, LKML, Ingo Molnar, Juri Lelli,
Vincent Guittot, vineethrp, shraash, marcel.ziswiler
On Mon, Dec 9, 2024 at 7:56 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Mon, Dec 09, 2024 at 07:29:52AM -0500, Vineeth Remanan Pillai wrote:
> > On Mon, Dec 9, 2024 at 5:55 AM Peter Zijlstra <peterz@infradead.org> wrote:
> > >
> > > On Fri, Dec 06, 2024 at 11:57:30AM -0500, Vineeth Remanan Pillai wrote:
> > >
> > > > I was able to reproduce this WARN_ON a couple of days back with
> > > > syzkaller. The dlserver's dl_se gets enqueued during an update_curr
> > > > while the dlserver is stopped, and a subsequent dlserver start will
> > > > cause a double enqueue.
> > >
> > > Right, I spotted that hole late last week. There is this thread:
> > >
> > > https://lore.kernel.org/all/20241209094941.GF21636@noisy.programming.kicks-ass.net/T/#u
> > >
> > > Where I just added this hunk:
> > >
> > > @@ -1674,6 +1679,12 @@ void dl_server_start(struct sched_dl_entity *dl_se)
> > >
> > > void dl_server_stop(struct sched_dl_entity *dl_se)
> > > {
> > > + if (current->dl_server == dl_se) {
> > > + struct rq *rq = rq_of_dl_se(dl_se);
> > > + trace_printk("stop fair server %d\n", cpu_of(rq));
> > > + current->dl_server = NULL;
> > > + }
> > > +
> > > if (!dl_se->dl_runtime)
> > > return;
> > >
> > > Which was my attempt at plugging said hole. But since I do not have
> > > means of reproduction, I'm not at all sure it is sufficient :/
> > >
> > I think I was able to get to the root cause last week. So the issue
> > seems to be that dlserver is stopped in the pick_task path of dlserver
> > itself when the sched_delayed is set:
> > __pick_next_task
> > => pick_task_dl -> server_pick_task
> > => pick_task_fair
> > => pick_next_entity (if (sched_delayed))
> > => dequeue_entities
> > => dl_server_stop
>
> Ooh, that's where it happens.
>
> So the scenario I had in mind was that we were doing something like:
>
> current->state = TASK_INTERRUPTIBLE();
> schedule();
> deactivate_task()
> dl_server_stop();
> pick_next_task()
> pick_next_task_fair()
> sched_balance_newidle()
> rq_unlock(this_rq)
>
> at which point another CPU can take our RQ-lock and do:
>
> try_to_wake_up()
> ttwu_queue()
> rq_lock()
> ...
> activate_task()
> dl_server_start()
> wakeup_preempt() := check_preempt_wakeup_fair()
> update_curr()
> update_curr_task()
> if (current->dl_server)
> dl_server_update()
> enqueue_dl_entity()
>
>
> Which then also goes *bang*. The above can't happen if we clear
> current->dl_server in dl_server_stop().
>
I also thought this could be a possibility, but the previous deactivate
for this task would have cleared the dl_server, no? Soon after this in
update_curr() we again call dl_server_update if p->dl_server !=
rq->fair_server, and this is also another possibility of a double
enqueue. That's the reason I thought we should have a flag to denote
whether the dl_server is active or not. I initially had a fix as you
suggested below, but it was not fully fixing the issue because the
dl_yield was confusing the server. So I split it into 2 patches, with
the dl_server active flag as the second patch.
> I was worried that might not be it, because Marcel had bisected it to
> the delayed stuff, but I'd not managed to reach the pick site yet :/
>
> ...
> ....
> Hmm.. so fundamentally that yield() makes sense, but yeah, it's lost
> track of the fact that we've stopped the server and it should not
> continue.
>
> Does something like the below make sense?
>
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index d380bffee2ef..abebeb67de4e 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -664,6 +664,7 @@ struct sched_dl_entity {
> unsigned int dl_non_contending : 1;
> unsigned int dl_overrun : 1;
> unsigned int dl_server : 1;
> + unsigned int dl_server_active : 1;
> unsigned int dl_defer : 1;
> unsigned int dl_defer_armed : 1;
> unsigned int dl_defer_running : 1;
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index d9d5a702f1a6..e2b542f684db 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -1647,6 +1647,7 @@ void dl_server_start(struct sched_dl_entity *dl_se)
> if (!dl_se->dl_runtime)
> return;
>
> + dl_se->dl_server_active = 1;
> enqueue_dl_entity(dl_se, ENQUEUE_WAKEUP);
> if (!dl_task(dl_se->rq->curr) || dl_entity_preempt(dl_se, &rq->curr->dl))
> resched_curr(dl_se->rq);
> @@ -1661,6 +1662,7 @@ void dl_server_stop(struct sched_dl_entity *dl_se)
> hrtimer_try_to_cancel(&dl_se->dl_timer);
> dl_se->dl_defer_armed = 0;
> dl_se->dl_throttled = 0;
> + dl_se->dl_server_active = 0;
> }
>
> void dl_server_init(struct sched_dl_entity *dl_se, struct rq *rq,
> @@ -2420,8 +2422,10 @@ static struct task_struct *__pick_task_dl(struct rq *rq)
> if (dl_server(dl_se)) {
> p = dl_se->server_pick_task(dl_se);
> if (!p) {
> - dl_se->dl_yielded = 1;
> - update_curr_dl_se(rq, dl_se, 0);
> + if (dl_se->dl_server_active) {
> + dl_se->dl_yielded = 1;
> + update_curr_dl_se(rq, dl_se, 0);
> + }
> goto again;
> }
> rq->dl_server = dl_se;
This should work as well. I was planning to send a second patch with
the dl_server active flag as it was not strictly the root cause of
this. But the active flag serves the purpose here and this change
looks good to me :-). I will test this on my end and let you know. It
takes more than 12 hours to reproduce in my test case ;-)
I feel that p should never be NULL when the dl_server is active, and if
it is, that's a bug. From going through the code, I think we should
never hit this case, so a WARN_ON_ONCE would be good. What do you think
about it?
Thanks,
Vineeth
* Re: [v6.12] WARNING: at kernel/sched/deadline.c:1995 enqueue_dl_entity (task blocked for more than 28262 seconds)
2024-12-09 13:56 ` Vineeth Remanan Pillai
@ 2024-12-09 14:01 ` Peter Zijlstra
2024-12-09 14:12 ` Vineeth Remanan Pillai
0 siblings, 1 reply; 22+ messages in thread
From: Peter Zijlstra @ 2024-12-09 14:01 UTC (permalink / raw)
To: Vineeth Remanan Pillai
Cc: Joel Fernandes, Ilya Maximets, LKML, Ingo Molnar, Juri Lelli,
Vincent Guittot, vineethrp, shraash, marcel.ziswiler
On Mon, Dec 09, 2024 at 08:56:43AM -0500, Vineeth Remanan Pillai wrote:
> > So the scenario I had in mind was that we were doing something like:
> >
> > current->state = TASK_INTERRUPTIBLE();
> > schedule();
> > deactivate_task()
> > dl_server_stop();
> > pick_next_task()
> > pick_next_task_fair()
> > sched_balance_newidle()
> > rq_unlock(this_rq)
> >
> > at which point another CPU can take our RQ-lock and do:
> >
> > try_to_wake_up()
> > ttwu_queue()
> > rq_lock()
> > ...
> > activate_task()
> > dl_server_start()
> > wakeup_preempt() := check_preempt_wakeup_fair()
> > update_curr()
> > update_curr_task()
> > if (current->dl_server)
> > dl_server_update()
> > enqueue_dl_entity()
> >
> >
> > Which then also goes *bang*. The above can't happen if we clear
> > current->dl_server in dl_server_stop().
> >
> I also thought this could be a possibility, but the previous deactivate
> for this task would have cleared the dl_server, no?
That gets cleared in put_prev_set_next_task(), which gets called *after*
pick_next_task() completes. So until that time, current will have
dl_server set.
> Soon after this in
> update_curr() we again call dl_server_update if p->dl_server !=
> rq->fair_server and this is also another possibility of a double
> enqueue.
Right, there are a few possible paths there; I've not fully mapped them.
But I think clearing ->dl_server in dl_server_stop() is the cleanest
option for this.
> This should work as well. I was planning to send a second patch with
> the dl_server active flag as it was not strictly the root cause of
> this. But the active flag serves the purpose here and this change
> looks good to me :-). I will test this on my end and let you know. It
> takes more than 12 hours to reproduce in my test case ;-)
Urgh... Thanks!
* Re: [v6.12] WARNING: at kernel/sched/deadline.c:1995 enqueue_dl_entity (task blocked for more than 28262 seconds)
2024-12-09 14:01 ` Peter Zijlstra
@ 2024-12-09 14:12 ` Vineeth Remanan Pillai
0 siblings, 0 replies; 22+ messages in thread
From: Vineeth Remanan Pillai @ 2024-12-09 14:12 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Joel Fernandes, Ilya Maximets, LKML, Ingo Molnar, Juri Lelli,
Vincent Guittot, vineethrp, shraash, marcel.ziswiler
> > >
> > > Which then also goes *bang*. The above can't happen if we clear
> > > current->dl_server in dl_server_stop().
> > >
> > I also thought this could be a possibility, but the previous deactivate
> > for this task would have cleared the dl_server, no?
>
> That gets cleared in put_prev_set_next_task(), which gets called *after*
> pick_next_task() completes. So until that time, current will have
> dl_server set.
>
Ahh ok that makes sense.
> > Soon after this in
> > update_curr() we again call dl_server_update if p->dl_server !=
> > rq->fair_server and this is also another possibility of a double
> > enqueue.
>
> Right, there are a few possible paths there; I've not fully mapped them. But
> I think clearing ->dl_server in dl_server_stop() is the cleanest option
> for this.
>
Even clearing would not help here as it will still make this condition
true (NULL != &rq->fair_server). Now that we have the dl_active flag,
we should probably check dl_se->dl_active before doing the
dl_server_update.
Thanks,
Vineeth
* Re: [v6.12] WARNING: at kernel/sched/deadline.c:1995 enqueue_dl_entity (task blocked for more than 28262 seconds)
2024-12-09 12:34 ` Ilya Maximets
@ 2024-12-10 0:31 ` Ilya Maximets
0 siblings, 0 replies; 22+ messages in thread
From: Ilya Maximets @ 2024-12-10 0:31 UTC (permalink / raw)
To: Vineeth Remanan Pillai, Peter Zijlstra
Cc: i.maximets, Joel Fernandes, LKML, Ingo Molnar, Juri Lelli,
Vincent Guittot, vineethrp, shraash
On 12/9/24 13:34, Ilya Maximets wrote:
> On 12/9/24 13:29, Vineeth Remanan Pillai wrote:
>> On Mon, Dec 9, 2024 at 5:55 AM Peter Zijlstra <peterz@infradead.org> wrote:
>>>
>>> On Fri, Dec 06, 2024 at 11:57:30AM -0500, Vineeth Remanan Pillai wrote:
>>>
>>>> I was able to reproduce this WARN_ON a couple of days back with
>>>> syzkaller. The dlserver's dl_se gets enqueued during an update_curr
>>>> while the dlserver is stopped, and a subsequent dlserver start will
>>>> cause a double enqueue.
>>>
>>> Right, I spotted that hole late last week. There is this thread:
>>>
>>> https://lore.kernel.org/all/20241209094941.GF21636@noisy.programming.kicks-ass.net/T/#u
>>>
>>> Where I just added this hunk:
>>>
>>> @@ -1674,6 +1679,12 @@ void dl_server_start(struct sched_dl_entity *dl_se)
>>>
>>> void dl_server_stop(struct sched_dl_entity *dl_se)
>>> {
>>> + if (current->dl_server == dl_se) {
>>> + struct rq *rq = rq_of_dl_se(dl_se);
>>> + trace_printk("stop fair server %d\n", cpu_of(rq));
>>> + current->dl_server = NULL;
>>> + }
>>> +
>>> if (!dl_se->dl_runtime)
>>> return;
>>>
>>> Which was my attempt at plugging said hole. But since I do not have
>>> means of reproduction, I'm not at all sure it is sufficient :/
>>>
>> I think I was able to get to the root cause last week. So the issue
>> seems to be that dlserver is stopped in the pick_task path of dlserver
>> itself when the sched_delayed is set:
>> __pick_next_task
>> => pick_task_dl -> server_pick_task
>> => pick_task_fair
>> => pick_next_entity (if (sched_delayed))
>> => dequeue_entities
>> => dl_server_stop
>>
>> Now server_pick_task returns NULL, and then we set dl_yielded and call
>> update_curr_dl_se. But dl_se is already dequeued, and now the code is
>> confused and does all sorts of things, including setting a timer to
>> enqueue it back. This ultimately leads to a double enqueue when the
>> dlserver is started next time (based on the timing of dl_server_start).
>>
>> I think we should not call update_curr_dl_se when the dlserver is
>> dequeued. Based on this I have a small patch and it seems to solve the
>> issue:
>>
>> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>> index d9d5a702f1a6..a9f3f020e421 100644
>> --- a/kernel/sched/deadline.c
>> +++ b/kernel/sched/deadline.c
>> @@ -2419,12 +2419,18 @@ static struct task_struct *__pick_task_dl(struct rq *rq)
>>
>> if (dl_server(dl_se)) {
>> p = dl_se->server_pick_task(dl_se);
>> - if (!p) {
>> + if (p) {
>> + rq->dl_server = dl_se;
>> + } else if (WARN_ON_ONCE(on_dl_rq(dl_se))) {
>> + /*
>> + * If server_pick_task returns NULL and dlserver is
>> + * enqueued, we have a problem. Lets yield and do a
>> + * pick again.
>> + */
>> dl_se->dl_yielded = 1;
>> update_curr_dl_se(rq, dl_se, 0);
>> goto again;
>> }
>> - rq->dl_server = dl_se;
>> } else {
>> p = dl_task_of(dl_se);
>> }
>>
>> I can send a formal patch if this looks okay to you all..
>
> Thanks!
>
> I can try this out on my setup today over the night (it takes a long time
> to reproduce the issue on my setup).
So, I tried applying this one on top of v6.12 and I got the following after
about 20 minutes of testing:
Dec 9 19:08:31 kernel: ------------[ cut here ]------------
Dec 9 19:08:31 kernel: watchdog: BUG: soft lockup - CPU#2 stuck for 21s! [handler37:428139]
Dec 9 19:08:31 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
Dec 9 19:08:31 kernel: rcu: #0110-...0: (1 GPs behind) idle=e41c/1/0x4000000000000000 softirq=186760/186760 fqs=7457
Dec 9 19:08:31 kernel: rcu: #011(detected by 27, t=60002 jiffies, g=913205, q=503974 ncpus=40)
Dec 9 19:08:31 kernel: Sending NMI from CPU 27 to CPUs 0:
Dec 9 19:08:31 kernel: rcu: rcu_preempt kthread starved for 40001 jiffies! g913205 f0x2 RCU_GP_DOING_FQS(6) ->state=0x0 ->cpu=15
Dec 9 19:08:31 kernel: rcu: #011Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
Dec 9 19:08:31 kernel: rcu: RCU grace-period kthread stack dump:
Dec 9 19:08:31 kernel: task:rcu_preempt state:R running task stack:0 pid:18 tgid:18 ppid:2 flags:0x00004000
Dec 9 19:08:31 kernel: Call Trace:
Dec 9 19:08:31 kernel: <TASK>
Dec 9 19:08:31 kernel: ? lock_timer_base (kernel/time/timer.c:1051)
Dec 9 19:08:31 kernel: ? _raw_spin_lock (./arch/x86/include/asm/paravirt.h:584 ./arch/x86/include/asm/qspinlock.h:51 ./include/asm-generic/qspinlock.h:114 ./include/linux/spinlock.h:187 ./include/linux/spinlock_api_smp.h:134 kernel/locking/spinlock.c:154)
Dec 9 19:08:31 kernel: ? raw_spin_rq_lock_nested (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:600)
Dec 9 19:08:31 kernel: ? resched_cpu (./arch/x86/include/asm/bitops.h:239 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/cpumask.h:570 ./include/linux/cpumask.h:1117 kernel/sched/core.c:1109)
Dec 9 19:08:31 kernel: ? force_qs_rnp (kernel/rcu/tree.c:2734 (discriminator 6))
Dec 9 19:08:31 kernel: ? __pfx_rcu_watching_snap_recheck (kernel/rcu/tree.c:816)
Dec 9 19:08:31 kernel: ? __pfx_rcu_gp_kthread (kernel/rcu/tree.c:2222)
Dec 9 19:08:31 kernel: ? rcu_gp_fqs_loop (kernel/rcu/tree.c:2004 kernel/rcu/tree.c:2067)
Dec 9 19:08:31 kernel: ? rcu_gp_kthread (kernel/rcu/tree.c:2250)
Dec 9 19:08:31 kernel: ? kthread (kernel/kthread.c:389)
Dec 9 19:08:31 kernel: ? __pfx_kthread (kernel/kthread.c:342)
Dec 9 19:08:31 kernel: ? ret_from_fork (arch/x86/kernel/process.c:147)
Dec 9 19:08:31 kernel: ? __pfx_kthread (kernel/kthread.c:342)
Dec 9 19:08:31 kernel: ? ret_from_fork_asm (arch/x86/entry/entry_64.S:257)
Dec 9 19:08:31 kernel: </TASK>
Dec 9 19:08:31 kernel: rcu: Stack dump where RCU GP kthread last ran:
Dec 9 19:08:31 kernel: Sending NMI from CPU 27 to CPUs 15:
Dec 9 19:09:49 systemd[1]: systemd-udevd.service: Watchdog timeout (limit 3min)!
Dec 9 19:09:49 systemd[1]: systemd-udevd.service: Killing process 1087 (systemd-udevd) with signal SIGABRT.
Dec 9 19:11:19 systemd[1]: systemd-udevd.service: State 'stop-watchdog' timed out. Killing.
Dec 9 19:11:19 systemd[1]: systemd-udevd.service: Killing process 1087 (systemd-udevd) with signal SIGKILL.
Dec 9 19:11:51 kernel: rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
Dec 9 19:11:51 kernel: rcu: #0110-...0: (1 GPs behind) idle=e41c/1/0x4000000000000002 softirq=186760/186760 fqs=7457
Dec 9 19:11:51 kernel: rcu: #011(detected by 5, t=260010 jiffies, g=913205, q=2087901 ncpus=40)
Dec 9 19:11:51 kernel: Sending NMI from CPU 5 to CPUs 0:
Dec 9 19:11:51 kernel: rcu: rcu_preempt kthread starved for 240008 jiffies! g913205 f0x2 RCU_GP_DOING_FQS(6) ->state=0x0 ->cpu=15
Dec 9 19:11:51 kernel: rcu: #011Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
Dec 9 19:11:51 kernel: rcu: RCU grace-period kthread stack dump:
Dec 9 19:11:51 kernel: task:rcu_preempt state:R running task stack:0 pid:18 tgid:18 ppid:2 flags:0x00004000
Dec 9 19:11:51 kernel: Call Trace:
Dec 9 19:11:51 kernel: <TASK>
Dec 9 19:11:51 kernel: ? lock_timer_base (kernel/time/timer.c:1051)
Dec 9 19:11:51 kernel: ? _raw_spin_lock (./arch/x86/include/asm/paravirt.h:584 ./arch/x86/include/asm/qspinlock.h:51 ./include/asm-generic/qspinlock.h:114 ./include/linux/spinlock.h:187 ./include/linux/spinlock_api_smp.h:134 kernel/locking/spinlock.c:154)
Dec 9 19:11:51 kernel: ? raw_spin_rq_lock_nested (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:600)
Dec 9 19:11:51 kernel: ? resched_cpu (./arch/x86/include/asm/bitops.h:239 ./include/asm-generic/bitops/instrumented-non-atomic.h:142 ./include/linux/cpumask.h:570 ./include/linux/cpumask.h:1117 kernel/sched/core.c:1109)
Dec 9 19:11:51 kernel: ? force_qs_rnp (kernel/rcu/tree.c:2734 (discriminator 6))
Dec 9 19:11:51 kernel: ? __pfx_rcu_watching_snap_recheck (kernel/rcu/tree.c:816)
Dec 9 19:11:51 kernel: ? __pfx_rcu_gp_kthread (kernel/rcu/tree.c:2222)
Dec 9 19:11:51 kernel: ? rcu_gp_fqs_loop (kernel/rcu/tree.c:2004 kernel/rcu/tree.c:2067)
Dec 9 19:11:51 kernel: ? rcu_gp_kthread (kernel/rcu/tree.c:2250)
Dec 9 19:11:51 kernel: ? kthread (kernel/kthread.c:389)
Dec 9 19:11:51 kernel: ? __pfx_kthread (kernel/kthread.c:342)
Dec 9 19:11:51 kernel: ? ret_from_fork (arch/x86/kernel/process.c:147)
Dec 9 19:11:51 kernel: ? __pfx_kthread (kernel/kthread.c:342)
Dec 9 19:11:51 kernel: ? ret_from_fork_asm (arch/x86/entry/entry_64.S:257)
Dec 9 19:11:51 kernel: </TASK>
Dec 9 19:11:51 kernel: rcu: Stack dump where RCU GP kthread last ran:
Dec 9 19:11:51 kernel: Sending NMI from CPU 5 to CPUs 15:
Dec 9 19:12:49 systemd[1]: systemd-udevd.service: Processes still around after SIGKILL. Ignoring.
Dec 9 19:14:20 systemd[1]: systemd-udevd.service: State 'final-sigterm' timed out. Killing.
An interesting part here is that CPU 15 doesn't even react to NMI...
I had to kill the system as it got completely unresponsive.
Best regards, Ilya Maximets.
* Re: [v6.12] WARNING: at kernel/sched/deadline.c:1995 enqueue_dl_entity (task blocked for more than 28262 seconds)
2024-12-09 12:56 ` Peter Zijlstra
2024-12-09 13:56 ` Vineeth Remanan Pillai
@ 2024-12-10 0:34 ` Ilya Maximets
2024-12-10 2:52 ` Vineeth Remanan Pillai
2024-12-10 16:08 ` Marcel Ziswiler
2 siblings, 1 reply; 22+ messages in thread
From: Ilya Maximets @ 2024-12-10 0:34 UTC (permalink / raw)
To: Peter Zijlstra, Vineeth Remanan Pillai
Cc: i.maximets, Joel Fernandes, LKML, Ingo Molnar, Juri Lelli,
Vincent Guittot, vineethrp, shraash, marcel.ziswiler
On 12/9/24 13:56, Peter Zijlstra wrote:
>
> Does something like the below make sense?
>
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index d380bffee2ef..abebeb67de4e 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -664,6 +664,7 @@ struct sched_dl_entity {
> unsigned int dl_non_contending : 1;
> unsigned int dl_overrun : 1;
> unsigned int dl_server : 1;
> + unsigned int dl_server_active : 1;
> unsigned int dl_defer : 1;
> unsigned int dl_defer_armed : 1;
> unsigned int dl_defer_running : 1;
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index d9d5a702f1a6..e2b542f684db 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -1647,6 +1647,7 @@ void dl_server_start(struct sched_dl_entity *dl_se)
> if (!dl_se->dl_runtime)
> return;
>
> + dl_se->dl_server_active = 1;
> enqueue_dl_entity(dl_se, ENQUEUE_WAKEUP);
> if (!dl_task(dl_se->rq->curr) || dl_entity_preempt(dl_se, &rq->curr->dl))
> resched_curr(dl_se->rq);
> @@ -1661,6 +1662,7 @@ void dl_server_stop(struct sched_dl_entity *dl_se)
> hrtimer_try_to_cancel(&dl_se->dl_timer);
> dl_se->dl_defer_armed = 0;
> dl_se->dl_throttled = 0;
> + dl_se->dl_server_active = 0;
> }
>
> void dl_server_init(struct sched_dl_entity *dl_se, struct rq *rq,
> @@ -2420,8 +2422,10 @@ static struct task_struct *__pick_task_dl(struct rq *rq)
> if (dl_server(dl_se)) {
> p = dl_se->server_pick_task(dl_se);
> if (!p) {
> - dl_se->dl_yielded = 1;
> - update_curr_dl_se(rq, dl_se, 0);
> + if (dl_se->dl_server_active) {
> + dl_se->dl_yielded = 1;
> + update_curr_dl_se(rq, dl_se, 0);
> + }
> goto again;
> }
> rq->dl_server = dl_se;
And I tried this one on top of v6.12, but got a warning after about 1 minute (lucky?).
Funnily enough, this one is also on CPU 15, but it's a coincidence; it
happened on different CPUs before.
Dec 9 18:11:10 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1194465656 wd_nsec: 1194465705
Dec 9 18:11:10 kernel: ------------[ cut here ]------------
Dec 9 18:11:10 kernel: WARNING: CPU: 15 PID: 7389 at kernel/sched/deadline.c:1997 enqueue_dl_entity (kernel/sched/deadline.c:1997 (discriminator 1))
Dec 9 18:11:10 kernel: Modules linked in: vport_vxlan vxlan vport_gre ip_gre ip_tunnel gre vport_geneve geneve ip6_udp_tunnel udp_tunnel openvswitch nf_conncount nf_nat ib_core esp4 veth nfnetlink_cttimeout nfnetlink nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 intel_rapl_msr intel_rapl_common rfkill intel_uncore_frequency_common skx_edac_common nfit libnvdimm kvm_intel kvm rapl vfat fat iTCO_wdt virtio_gpu iTCO_vendor_support virtio_dma_buf i2c_i801 drm_shmem_helper virtio_balloon pcspkr i2c_smbus drm_kms_helper lpc_ich joydev drm xfs libcrc32c crct10dif_pclmul ahci crc32_pclmul libahci virtio_net crc32c_intel libata ghash_clmulni_intel net_failover virtio_console virtio_blk failover serio_raw sunrpc dm_mirror dm_region_hash dm_log dm_mod fuse [last unloaded: ip6_udp_tunnel]
Dec 9 18:11:10 kernel: CPU: 15 UID: 0 PID: 7389 Comm: revalidator42 Kdump: loaded Not tainted 6.12.0+ #78
Dec 9 18:11:10 kernel: Hardware name: Red Hat KVM/RHEL, BIOS 1.16.1-1.el9 04/01/2014
Dec 9 18:11:10 kernel: RIP: 0010:enqueue_dl_entity (kernel/sched/deadline.c:1997 (discriminator 1))
Dec 9 18:11:10 kernel: Code: 0a 00 00 0f b6 45 54 e9 d9 fc ff ff 45 85 ed 0f 84 5e fd ff ff 5b 44 89 e6 48 89 ef 5d 41 5c 41 5d 41 5e 41 5f e9 8b c4 ff ff <0f> 0b e9 b2 f9 ff ff 0f 0b e9 14 fb ff ff 8b 83 b0 0a 00 00 48 8b
All code
========
0: 0a 00 or (%rax),%al
2: 00 0f add %cl,(%rdi)
4: b6 45 mov $0x45,%dh
6: 54 push %rsp
7: e9 d9 fc ff ff jmpq 0xfffffffffffffce5
c: 45 85 ed test %r13d,%r13d
f: 0f 84 5e fd ff ff je 0xfffffffffffffd73
15: 5b pop %rbx
16: 44 89 e6 mov %r12d,%esi
19: 48 89 ef mov %rbp,%rdi
1c: 5d pop %rbp
1d: 41 5c pop %r12
1f: 41 5d pop %r13
21: 41 5e pop %r14
23: 41 5f pop %r15
25: e9 8b c4 ff ff jmpq 0xffffffffffffc4b5
2a:* 0f 0b ud2 <-- trapping instruction
2c: e9 b2 f9 ff ff jmpq 0xfffffffffffff9e3
31: 0f 0b ud2
33: e9 14 fb ff ff jmpq 0xfffffffffffffb4c
38: 8b 83 b0 0a 00 00 mov 0xab0(%rbx),%eax
3e: 48 rex.W
3f: 8b .byte 0x8b
Code starting with the faulting instruction
===========================================
0: 0f 0b ud2
2: e9 b2 f9 ff ff jmpq 0xfffffffffffff9b9
7: 0f 0b ud2
9: e9 14 fb ff ff jmpq 0xfffffffffffffb22
e: 8b 83 b0 0a 00 00 mov 0xab0(%rbx),%eax
14: 48 rex.W
15: 8b .byte 0x8b
Dec 9 18:11:10 kernel: RSP: 0018:ffffb03601d336f8 EFLAGS: 00010086
Dec 9 18:11:10 kernel: RAX: 0000000000000001 RBX: ffff9ee93f1b65e8 RCX: 0000000000000001
Dec 9 18:11:10 kernel: RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffff9ee93f1b65e8
Dec 9 18:11:10 kernel: RBP: ffff9ee93f1b65e8 R08: 0000000000000000 R09: 0000000000000000
Dec 9 18:11:10 kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000001
Dec 9 18:11:10 kernel: R13: 0000000000000001 R14: 00000000002dc6c0 R15: 0000000000000000
Dec 9 18:11:10 kernel: FS: 00007effb172e640(0000) GS:ffff9ee93f180000(0000) knlGS:0000000000000000
Dec 9 18:11:10 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Dec 9 18:11:10 kernel: CR2: 0000720000080000 CR3: 000000010d86a005 CR4: 0000000000772ef0
Dec 9 18:11:10 kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Dec 9 18:11:10 kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Dec 9 18:11:10 kernel: PKRU: 55555554
Dec 9 18:11:10 kernel: Call Trace:
Dec 9 18:11:10 kernel: <TASK>
Dec 9 18:11:10 kernel: ? __warn (kernel/panic.c:748)
Dec 9 18:11:10 kernel: ? enqueue_dl_entity (kernel/sched/deadline.c:1997 (discriminator 1))
Dec 9 18:11:10 kernel: ? report_bug (lib/bug.c:201 lib/bug.c:219)
Dec 9 18:11:10 kernel: ? handle_bug (arch/x86/kernel/traps.c:285)
Dec 9 18:11:10 kernel: ? exc_invalid_op (arch/x86/kernel/traps.c:309 (discriminator 1))
Dec 9 18:11:10 kernel: ? asm_exc_invalid_op (./arch/x86/include/asm/idtentry.h:621)
Dec 9 18:11:10 kernel: ? enqueue_dl_entity (kernel/sched/deadline.c:1997 (discriminator 1))
Dec 9 18:11:10 kernel: dl_server_start (kernel/sched/deadline.c:1652)
Dec 9 18:11:10 kernel: enqueue_task_fair (kernel/sched/sched.h:2745 kernel/sched/fair.c:7048)
Dec 9 18:11:10 kernel: enqueue_task (kernel/sched/core.c:2020)
Dec 9 18:11:10 kernel: activate_task (kernel/sched/core.c:2069)
Dec 9 18:11:10 kernel: sched_balance_rq (kernel/sched/fair.c:9642 kernel/sched/fair.c:9676 kernel/sched/fair.c:11753)
Dec 9 18:11:10 kernel: sched_balance_newidle (kernel/sched/fair.c:12799)
Dec 9 18:11:10 kernel: pick_next_task_fair (kernel/sched/fair.c:8950)
Dec 9 18:11:10 kernel: __pick_next_task (kernel/sched/core.c:5972)
Dec 9 18:11:10 kernel: __schedule (kernel/sched/core.c:6647)
Dec 9 18:11:10 kernel: ? plist_add (./include/linux/list.h:150 ./include/linux/list.h:183 lib/plist.c:111)
Dec 9 18:11:10 kernel: schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
Dec 9 18:11:10 kernel: futex_wait_queue (kernel/futex/waitwake.c:372 (discriminator 2))
Dec 9 18:11:10 kernel: __futex_wait (kernel/futex/waitwake.c:672)
Dec 9 18:11:10 kernel: ? __pfx_futex_wake_mark (kernel/futex/waitwake.c:135)
Dec 9 18:11:10 kernel: futex_wait (kernel/futex/waitwake.c:700)
Dec 9 18:11:10 kernel: do_futex (kernel/futex/syscalls.c:131)
Dec 9 18:11:10 kernel: __x64_sys_futex (kernel/futex/syscalls.c:179 kernel/futex/syscalls.c:160 kernel/futex/syscalls.c:160)
Dec 9 18:11:10 kernel: ? kvm_sched_clock_read (arch/x86/kernel/kvmclock.c:91)
Dec 9 18:11:10 kernel: ? sched_clock (./arch/x86/include/asm/preempt.h:94 arch/x86/kernel/tsc.c:285)
Dec 9 18:11:10 kernel: ? sched_clock_cpu (kernel/sched/clock.c:394)
Dec 9 18:11:10 kernel: ? raw_spin_rq_lock_nested (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:600)
Dec 9 18:11:10 kernel: do_syscall_64 (arch/x86/entry/common.c:52 arch/x86/entry/common.c:83)
Dec 9 18:11:10 kernel: ? _raw_spin_unlock (./arch/x86/include/asm/paravirt.h:589 ./arch/x86/include/asm/qspinlock.h:57 ./include/linux/spinlock.h:204 ./include/linux/spinlock_api_smp.h:142 kernel/locking/spinlock.c:186)
Dec 9 18:11:10 kernel: ? finish_task_switch.isra.0 (./arch/x86/include/asm/irqflags.h:42 ./arch/x86/include/asm/irqflags.h:97 kernel/sched/sched.h:1518 kernel/sched/core.c:5082 kernel/sched/core.c:5200)
Dec 9 18:11:10 kernel: ? __schedule (kernel/sched/core.c:6699)
Dec 9 18:11:10 kernel: ? schedule (./arch/x86/include/asm/preempt.h:84 kernel/sched/core.c:6771 kernel/sched/core.c:6785)
Dec 9 18:11:10 kernel: ? futex_wait_queue (kernel/futex/waitwake.c:372 (discriminator 2))
Dec 9 18:11:10 kernel: ? __futex_wait (kernel/futex/waitwake.c:672)
Dec 9 18:11:10 kernel: ? __pfx_futex_wake_mark (kernel/futex/waitwake.c:135)
Dec 9 18:11:10 kernel: ? futex_wait (kernel/futex/waitwake.c:700)
Dec 9 18:11:10 kernel: ? do_futex (kernel/futex/syscalls.c:131)
Dec 9 18:11:10 kernel: ? rseq_get_rseq_cs (kernel/rseq.c:161)
Dec 9 18:11:10 kernel: ? rseq_ip_fixup (kernel/rseq.c:257 kernel/rseq.c:291)
Dec 9 18:11:10 kernel: ? do_futex (kernel/futex/syscalls.c:131)
Dec 9 18:11:10 kernel: ? syscall_exit_to_user_mode (./arch/x86/include/asm/entry-common.h:58 ./arch/x86/include/asm/entry-common.h:65 ./include/linux/entry-common.h:330 kernel/entry/common.c:207 kernel/entry/common.c:218)
Dec 9 18:11:10 kernel: ? do_syscall_64 (arch/x86/entry/common.c:102)
Dec 9 18:11:10 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 9 18:11:10 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 9 18:11:10 kernel: ? clear_bhb_loop (arch/x86/entry/entry_64.S:1539)
Dec 9 18:11:10 kernel: entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130)
Dec 9 18:11:10 kernel: RIP: 0033:0x7effd0a86a80
Dec 9 18:11:10 kernel: Code: 41 89 f0 83 f8 02 74 0b b8 02 00 00 00 87 07 85 c0 74 47 90 44 89 c6 45 31 d2 ba 02 00 00 00 b8 ca 00 00 00 40 80 f6 80 0f 05 <48> 3d 00 f0 ff ff 76 d6 83 c0 0b 83 f8 0b 77 0b ba 81 08 00 00 48
All code
========
0: 41 89 f0 mov %esi,%r8d
3: 83 f8 02 cmp $0x2,%eax
6: 74 0b je 0x13
8: b8 02 00 00 00 mov $0x2,%eax
d: 87 07 xchg %eax,(%rdi)
f: 85 c0 test %eax,%eax
11: 74 47 je 0x5a
13: 90 nop
14: 44 89 c6 mov %r8d,%esi
17: 45 31 d2 xor %r10d,%r10d
1a: ba 02 00 00 00 mov $0x2,%edx
1f: b8 ca 00 00 00 mov $0xca,%eax
24: 40 80 f6 80 xor $0x80,%sil
28: 0f 05 syscall
2a:* 48 3d 00 f0 ff ff cmp $0xfffffffffffff000,%rax <-- trapping instruction
30: 76 d6 jbe 0x8
32: 83 c0 0b add $0xb,%eax
35: 83 f8 0b cmp $0xb,%eax
38: 77 0b ja 0x45
3a: ba 81 08 00 00 mov $0x881,%edx
3f: 48 rex.W
Code starting with the faulting instruction
===========================================
0: 48 3d 00 f0 ff ff cmp $0xfffffffffffff000,%rax
6: 76 d6 jbe 0xffffffffffffffde
8: 83 c0 0b add $0xb,%eax
b: 83 f8 0b cmp $0xb,%eax
e: 77 0b ja 0x1b
10: ba 81 08 00 00 mov $0x881,%edx
15: 48 rex.W
Dec 9 18:11:10 kernel: RSP: 002b:00007effb172a528 EFLAGS: 00000282 ORIG_RAX: 00000000000000ca
Dec 9 18:11:10 kernel: RAX: ffffffffffffffda RBX: 00007effb172e640 RCX: 00007effd0a86a80
Dec 9 18:11:10 kernel: RDX: 0000000000000002 RSI: 0000000000000080 RDI: 000000000171a3c0
Dec 9 18:11:10 kernel: RBP: 00007effb172a5e0 R08: 0000000000000000 R09: 00002effb172a89c
Dec 9 18:11:10 kernel: R10: 0000000000000000 R11: 0000000000000282 R12: 00007effb172e640
Dec 9 18:11:10 kernel: R13: 000000000000000b R14: 00007effd0a89a50 R15: 0000000000000000
Dec 9 18:11:10 kernel: </TASK>
Dec 9 18:11:10 kernel: ---[ end trace 0000000000000000 ]---
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [v6.12] WARNING: at kernel/sched/deadline.c:1995 enqueue_dl_entity (task blocked for more than 28262 seconds)
2024-12-10 0:34 ` Ilya Maximets
@ 2024-12-10 2:52 ` Vineeth Remanan Pillai
2024-12-10 2:58 ` Vineeth Remanan Pillai
2024-12-10 16:11 ` Marcel Ziswiler
0 siblings, 2 replies; 22+ messages in thread
From: Vineeth Remanan Pillai @ 2024-12-10 2:52 UTC (permalink / raw)
To: Ilya Maximets
Cc: Peter Zijlstra, Joel Fernandes, LKML, Ingo Molnar, Juri Lelli,
Vincent Guittot, vineethrp, shraash, marcel.ziswiler
On Mon, Dec 9, 2024 at 7:34 PM Ilya Maximets <i.maximets@ovn.org> wrote:
>
> On 12/9/24 13:56, Peter Zijlstra wrote:
> >
> > Does something like the below make sense?
> >
> > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > index d380bffee2ef..abebeb67de4e 100644
> > --- a/include/linux/sched.h
> > +++ b/include/linux/sched.h
> > @@ -664,6 +664,7 @@ struct sched_dl_entity {
> > unsigned int dl_non_contending : 1;
> > unsigned int dl_overrun : 1;
> > unsigned int dl_server : 1;
> > + unsigned int dl_server_active : 1;
> > unsigned int dl_defer : 1;
> > unsigned int dl_defer_armed : 1;
> > unsigned int dl_defer_running : 1;
> > diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> > index d9d5a702f1a6..e2b542f684db 100644
> > --- a/kernel/sched/deadline.c
> > +++ b/kernel/sched/deadline.c
> > @@ -1647,6 +1647,7 @@ void dl_server_start(struct sched_dl_entity *dl_se)
> > if (!dl_se->dl_runtime)
> > return;
> >
> > + dl_se->dl_server_active = 1;
> > enqueue_dl_entity(dl_se, ENQUEUE_WAKEUP);
> > if (!dl_task(dl_se->rq->curr) || dl_entity_preempt(dl_se, &rq->curr->dl))
> > resched_curr(dl_se->rq);
> > @@ -1661,6 +1662,7 @@ void dl_server_stop(struct sched_dl_entity *dl_se)
> > hrtimer_try_to_cancel(&dl_se->dl_timer);
> > dl_se->dl_defer_armed = 0;
> > dl_se->dl_throttled = 0;
> > + dl_se->dl_server_active = 0;
> > }
> >
> > void dl_server_init(struct sched_dl_entity *dl_se, struct rq *rq,
> > @@ -2420,8 +2422,10 @@ static struct task_struct *__pick_task_dl(struct rq *rq)
> > if (dl_server(dl_se)) {
> > p = dl_se->server_pick_task(dl_se);
> > if (!p) {
> > - dl_se->dl_yielded = 1;
> > - update_curr_dl_se(rq, dl_se, 0);
> > + if (dl_se->dl_server_active) {
> > + dl_se->dl_yielded = 1;
> > + update_curr_dl_se(rq, dl_se, 0);
> > + }
> > goto again;
> > }
> > rq->dl_server = dl_se;
>
> And I tried this one on top of v6.12, but got a warning after about 1 minute (lucky?).
>
Hmm, strange. I was running it for about 12 hours and it has not WARNed
till now. I am on 6.13-rc1, but git log did not show any dlserver-
related changes between 6.12 and 6.13. I also have another
patch for the double enqueue scenario we were discussing in this
thread (because of the wrong check in update_curr). Could you please
add the following changes to the above patches and see if the issue is
reproducible?
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fbdca89c677f..1f4b76c1f032 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1159,8 +1159,6 @@ static inline void update_curr_task(struct
task_struct *p, s64 delta_exec)
trace_sched_stat_runtime(p, delta_exec);
account_group_exec_runtime(p, delta_exec);
cgroup_account_cputime(p, delta_exec);
- if (p->dl_server)
- dl_server_update(p->dl_server, delta_exec);
}
static inline bool did_preempt_short(struct cfs_rq *cfs_rq, struct
sched_entity *curr)
@@ -1210,6 +1208,11 @@ s64 update_curr_common(struct rq *rq)
return delta_exec;
}
+static inline bool dl_server_active(struct dl_sched_entity *dl_se)
+{
+ return dl_se->dl_server_active;
+}
+
/*
* Update the current task's runtime statistics.
*/
@@ -1237,11 +1240,16 @@ static void update_curr(struct cfs_rq *cfs_rq)
update_curr_task(p, delta_exec);
/*
- * Any fair task that runs outside of fair_server should
- * account against fair_server such that it can account for
- * this time and possibly avoid running this period.
+ * If the fair_server is active, we need to account for the
+ * fair_server time whether or not the task is running on
+ * behalf of fair_server or not:
+ * - If the task is running on behalf of fair_server, we need
+ * to limit its time based on the assigned runtime.
+ * - Fair task that runs outside of fair_server should account
+ * against fair_server such that it can account for this time
+ * and possibly avoid running this period.
*/
- if (p->dl_server != &rq->fair_server)
+ if (dl_server_active(&rq->fair_server))
dl_server_update(&rq->fair_server, delta_exec);
}
Thanks for your time testing the fixes :-)
~Vineeth
* Re: [v6.12] WARNING: at kernel/sched/deadline.c:1995 enqueue_dl_entity (task blocked for more than 28262 seconds)
2024-12-10 2:52 ` Vineeth Remanan Pillai
@ 2024-12-10 2:58 ` Vineeth Remanan Pillai
2024-12-10 9:28 ` Ilya Maximets
2024-12-10 16:11 ` Marcel Ziswiler
1 sibling, 1 reply; 22+ messages in thread
From: Vineeth Remanan Pillai @ 2024-12-10 2:58 UTC (permalink / raw)
To: Ilya Maximets
Cc: Peter Zijlstra, Joel Fernandes, LKML, Ingo Molnar, Juri Lelli,
Vincent Guittot, vineethrp, shraash, marcel.ziswiler
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index fbdca89c677f..1f4b76c1f032 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1159,8 +1159,6 @@ static inline void update_curr_task(struct
> task_struct *p, s64 delta_exec)
> trace_sched_stat_runtime(p, delta_exec);
> account_group_exec_runtime(p, delta_exec);
> cgroup_account_cputime(p, delta_exec);
> - if (p->dl_server)
> - dl_server_update(p->dl_server, delta_exec);
> }
>
> static inline bool did_preempt_short(struct cfs_rq *cfs_rq, struct
> sched_entity *curr)
> @@ -1210,6 +1208,11 @@ s64 update_curr_common(struct rq *rq)
> return delta_exec;
> }
>
> +static inline bool dl_server_active(struct dl_sched_entity *dl_se)
Sorry, a small typo here: it should be struct sched_dl_entity,
not dl_sched_entity. The line should be:
"static inline bool dl_server_active(struct sched_dl_entity *dl_se)"
Thanks,
Vineeth
* Re: [v6.12] WARNING: at kernel/sched/deadline.c:1995 enqueue_dl_entity (task blocked for more than 28262 seconds)
2024-12-10 2:58 ` Vineeth Remanan Pillai
@ 2024-12-10 9:28 ` Ilya Maximets
2024-12-10 23:16 ` Ilya Maximets
0 siblings, 1 reply; 22+ messages in thread
From: Ilya Maximets @ 2024-12-10 9:28 UTC (permalink / raw)
To: Vineeth Remanan Pillai
Cc: i.maximets, Peter Zijlstra, Joel Fernandes, LKML, Ingo Molnar,
Juri Lelli, Vincent Guittot, vineethrp, shraash, marcel.ziswiler
On 12/10/24 03:58, Vineeth Remanan Pillai wrote:
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index fbdca89c677f..1f4b76c1f032 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -1159,8 +1159,6 @@ static inline void update_curr_task(struct
>> task_struct *p, s64 delta_exec)
>> trace_sched_stat_runtime(p, delta_exec);
>> account_group_exec_runtime(p, delta_exec);
>> cgroup_account_cputime(p, delta_exec);
>> - if (p->dl_server)
>> - dl_server_update(p->dl_server, delta_exec);
>> }
>>
>> static inline bool did_preempt_short(struct cfs_rq *cfs_rq, struct
>> sched_entity *curr)
>> @@ -1210,6 +1208,11 @@ s64 update_curr_common(struct rq *rq)
>> return delta_exec;
>> }
>>
>> +static inline bool dl_server_active(struct dl_sched_entity *dl_se)
> Sorry, a small typo here: it should be struct sched_dl_entity,
> not dl_sched_entity. The line should be:
>
> "static inline bool dl_server_active(struct sched_dl_entity *dl_se)"
Sure. I can try that.
Note: I did indeed get lucky with the warning after 1 minute the
first time. The second time I tried, the system didn't show any
issues until 6 hours into the test...
Best regards, Ilya Maximets.
* Re: [v6.12] WARNING: at kernel/sched/deadline.c:1995 enqueue_dl_entity (task blocked for more than 28262 seconds)
2024-12-09 12:56 ` Peter Zijlstra
2024-12-09 13:56 ` Vineeth Remanan Pillai
2024-12-10 0:34 ` Ilya Maximets
@ 2024-12-10 16:08 ` Marcel Ziswiler
2 siblings, 0 replies; 22+ messages in thread
From: Marcel Ziswiler @ 2024-12-10 16:08 UTC (permalink / raw)
To: Peter Zijlstra, Vineeth Remanan Pillai
Cc: Joel Fernandes, Ilya Maximets, LKML, Ingo Molnar, Juri Lelli,
Vincent Guittot, vineethrp, shraash
On Mon, 2024-12-09 at 13:56 +0100, Peter Zijlstra wrote:
> On Mon, Dec 09, 2024 at 07:29:52AM -0500, Vineeth Remanan Pillai wrote:
> > On Mon, Dec 9, 2024 at 5:55 AM Peter Zijlstra <peterz@infradead.org> wrote:
> > >
> > > On Fri, Dec 06, 2024 at 11:57:30AM -0500, Vineeth Remanan Pillai wrote:
> > >
> > > > I was able to reproduce this WARN_ON couple of days back with
> > > > syzkaller. dlserver's dl_se gets enqueued during a update_curr while
> > > > the dlserver is stopped. And subsequent dlserver start will cause a
> > > > double enqueue.
> > >
> > > Right, I spotted that hole late last week. There is this thread:
> > >
> > > https://lore.kernel.org/all/20241209094941.GF21636@noisy.programming.kicks-ass.net/T/#u
> > >
> > > Where I just added this thunk:
> > >
> > > @@ -1674,6 +1679,12 @@ void dl_server_start(struct sched_dl_entity *dl_se)
> > >
> > > void dl_server_stop(struct sched_dl_entity *dl_se)
> > > {
> > > + if (current->dl_server == dl_se) {
> > > + struct rq *rq = rq_of_dl_se(dl_se);
> > > + trace_printk("stop fair server %d\n", cpu_of(rq));
> > > + current->dl_server = NULL;
> > > + }
> > > +
> > > if (!dl_se->dl_runtime)
> > > return;
> > >
> > > Which was my attempt at plugging said hole. But since I do not have
> > > means of reproduction, I'm not at all sure it is sufficient :/
> > >
> > I think I was able to get to the root cause last week. So the issue
> > seems to be that dlserver is stopped in the pick_task path of dlserver
> > itself when the sched_delayed is set:
> > __pick_next_task
> > => pick_task_dl -> server_pick_task
> > => pick_task_fair
> > => pick_next_entity (if (sched_delayed))
> > => dequeue_entities
> > => dl_server_stop
>
> Ooh, that's where it happens.
>
> So the scenario I had in mind was that we were doing something like:
>
> current->state = TASK_INTERRUPTIBLE();
> schedule();
> deactivate_task()
> dl_stop_server();
> pick_next_task()
> pick_next_task_fair()
> sched_balance_newidle()
> rq_unlock(this_rq)
>
> at which point another CPU can take our RQ-lock and do:
>
> try_to_wake_up()
> ttwu_queue()
> rq_lock()
> ...
> activate_task()
> dl_server_start()
> wakeup_preempt() := check_preempt_wakeup_fair()
> update_curr()
> update_curr_task()
> if (current->dl_server)
> dl_server_update()
> enqueue_dl_entity()
>
>
> Which then also goes *bang*. The above can't happen if we clear
> current->dl_server in dl_stop_server().
>
> I was worried that might not be it, because Marcel had bisected it to
> the delayed stuff, but I'd not managed to reach the pick site yet :/
>
> > Now server_pick_task returns NULL and then we set dl_yielded and call
> > update_curr_dl_se. But dl_se is already dequeued and now the code is
> > confused and it does all sorts of things including setting a timer to
> > enqueue it back. This ultimately leads to double enqueue when dlserver
> > is started next time(based on timing of dl_server_start)
> >
> > I think we should not call update_curr_dl_se when the dlserver is
> > dequeued. Based on this I have a small patch and it seems to solve the
> > issue:
> >
> > diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> > index d9d5a702f1a6..a9f3f020e421 100644
> > --- a/kernel/sched/deadline.c
> > +++ b/kernel/sched/deadline.c
> > @@ -2419,12 +2419,18 @@ static struct task_struct *__pick_task_dl(struct rq *rq)
> >
> > if (dl_server(dl_se)) {
> > p = dl_se->server_pick_task(dl_se);
> > - if (!p) {
> > + if (p) {
> > + rq->dl_server = dl_se;
> > + } else if (WARN_ON_ONCE(on_dl_rq(dl_se))) {
> > + /*
> > + * If server_pick_task returns NULL and dlserver is
> > + * enqueued, we have a problem. Lets yield and do a
> > + * pick again.
> > + */
> > dl_se->dl_yielded = 1;
> > update_curr_dl_se(rq, dl_se, 0);
> > goto again;
> > }
> > - rq->dl_server = dl_se;
> > } else {
> > p = dl_task_of(dl_se);
> > }
>
> Hmm.. so fundamentally that yield() makes sense, but yeah, it's lost
> track of the fact that we've stopped the server and it should not
> continue.
>
> Does something like the below make sense?
At least it stopped crashing for me.
https://drive.codethink.co.uk/s/XRqN2y9BLwPsD9H
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index d380bffee2ef..abebeb67de4e 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -664,6 +664,7 @@ struct sched_dl_entity {
> unsigned int dl_non_contending : 1;
> unsigned int dl_overrun : 1;
> unsigned int dl_server : 1;
> + unsigned int dl_server_active : 1;
> unsigned int dl_defer : 1;
> unsigned int dl_defer_armed : 1;
> unsigned int dl_defer_running : 1;
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index d9d5a702f1a6..e2b542f684db 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -1647,6 +1647,7 @@ void dl_server_start(struct sched_dl_entity *dl_se)
> if (!dl_se->dl_runtime)
> return;
>
> + dl_se->dl_server_active = 1;
> enqueue_dl_entity(dl_se, ENQUEUE_WAKEUP);
> if (!dl_task(dl_se->rq->curr) || dl_entity_preempt(dl_se, &rq->curr->dl))
> resched_curr(dl_se->rq);
> @@ -1661,6 +1662,7 @@ void dl_server_stop(struct sched_dl_entity *dl_se)
> hrtimer_try_to_cancel(&dl_se->dl_timer);
> dl_se->dl_defer_armed = 0;
> dl_se->dl_throttled = 0;
> + dl_se->dl_server_active = 0;
> }
>
> void dl_server_init(struct sched_dl_entity *dl_se, struct rq *rq,
> @@ -2420,8 +2422,10 @@ static struct task_struct *__pick_task_dl(struct rq *rq)
> if (dl_server(dl_se)) {
> p = dl_se->server_pick_task(dl_se);
> if (!p) {
> - dl_se->dl_yielded = 1;
> - update_curr_dl_se(rq, dl_se, 0);
> + if (dl_se->dl_server_active) {
> + dl_se->dl_yielded = 1;
> + update_curr_dl_se(rq, dl_se, 0);
> + }
> goto again;
> }
> rq->dl_server = dl_se;
* Re: [v6.12] WARNING: at kernel/sched/deadline.c:1995 enqueue_dl_entity (task blocked for more than 28262 seconds)
2024-12-10 2:52 ` Vineeth Remanan Pillai
2024-12-10 2:58 ` Vineeth Remanan Pillai
@ 2024-12-10 16:11 ` Marcel Ziswiler
1 sibling, 0 replies; 22+ messages in thread
From: Marcel Ziswiler @ 2024-12-10 16:11 UTC (permalink / raw)
To: Vineeth Remanan Pillai, Ilya Maximets
Cc: Peter Zijlstra, Joel Fernandes, LKML, Ingo Molnar, Juri Lelli,
Vincent Guittot, vineethrp, shraash
On Mon, 2024-12-09 at 21:52 -0500, Vineeth Remanan Pillai wrote:
> On Mon, Dec 9, 2024 at 7:34 PM Ilya Maximets <i.maximets@ovn.org> wrote:
> >
> > On 12/9/24 13:56, Peter Zijlstra wrote:
> > >
> > > Does something like the below make sense?
> > >
> > > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > > index d380bffee2ef..abebeb67de4e 100644
> > > --- a/include/linux/sched.h
> > > +++ b/include/linux/sched.h
> > > @@ -664,6 +664,7 @@ struct sched_dl_entity {
> > > unsigned int dl_non_contending : 1;
> > > unsigned int dl_overrun : 1;
> > > unsigned int dl_server : 1;
> > > + unsigned int dl_server_active : 1;
> > > unsigned int dl_defer : 1;
> > > unsigned int dl_defer_armed : 1;
> > > unsigned int dl_defer_running : 1;
> > > diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> > > index d9d5a702f1a6..e2b542f684db 100644
> > > --- a/kernel/sched/deadline.c
> > > +++ b/kernel/sched/deadline.c
> > > @@ -1647,6 +1647,7 @@ void dl_server_start(struct sched_dl_entity *dl_se)
> > > if (!dl_se->dl_runtime)
> > > return;
> > >
> > > + dl_se->dl_server_active = 1;
> > > enqueue_dl_entity(dl_se, ENQUEUE_WAKEUP);
> > > if (!dl_task(dl_se->rq->curr) || dl_entity_preempt(dl_se, &rq->curr->dl))
> > > resched_curr(dl_se->rq);
> > > @@ -1661,6 +1662,7 @@ void dl_server_stop(struct sched_dl_entity *dl_se)
> > > hrtimer_try_to_cancel(&dl_se->dl_timer);
> > > dl_se->dl_defer_armed = 0;
> > > dl_se->dl_throttled = 0;
> > > + dl_se->dl_server_active = 0;
> > > }
> > >
> > > void dl_server_init(struct sched_dl_entity *dl_se, struct rq *rq,
> > > @@ -2420,8 +2422,10 @@ static struct task_struct *__pick_task_dl(struct rq *rq)
> > > if (dl_server(dl_se)) {
> > > p = dl_se->server_pick_task(dl_se);
> > > if (!p) {
> > > - dl_se->dl_yielded = 1;
> > > - update_curr_dl_se(rq, dl_se, 0);
> > > + if (dl_se->dl_server_active) {
> > > + dl_se->dl_yielded = 1;
> > > + update_curr_dl_se(rq, dl_se, 0);
> > > + }
> > > goto again;
> > > }
> > > rq->dl_server = dl_se;
> >
> > And I tried this one on top of v6.12, but got a warning after about 1 minute (lucky?).
> >
> Hmm, strange. I was running it for about 12 hours and it has not WARNed
> till now. I am on 6.13-rc1, but git log did not show any dlserver-
> related changes between 6.12 and 6.13. I also have another
> patch for the double enqueue scenario we were discussing in this
> thread (because of the wrong check in update_curr). Could you please
> add the following changes to the above patches and see if the issue is
> reproducible?
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index fbdca89c677f..1f4b76c1f032 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1159,8 +1159,6 @@ static inline void update_curr_task(struct
> task_struct *p, s64 delta_exec)
> trace_sched_stat_runtime(p, delta_exec);
> account_group_exec_runtime(p, delta_exec);
> cgroup_account_cputime(p, delta_exec);
> - if (p->dl_server)
> - dl_server_update(p->dl_server, delta_exec);
> }
>
> static inline bool did_preempt_short(struct cfs_rq *cfs_rq, struct
> sched_entity *curr)
> @@ -1210,6 +1208,11 @@ s64 update_curr_common(struct rq *rq)
> return delta_exec;
> }
>
> +static inline bool dl_server_active(struct dl_sched_entity *dl_se)
> +{
> + return dl_se->dl_server_active;
> +}
> +
> /*
> * Update the current task's runtime statistics.
> */
> @@ -1237,11 +1240,16 @@ static void update_curr(struct cfs_rq *cfs_rq)
> update_curr_task(p, delta_exec);
>
> /*
> - * Any fair task that runs outside of fair_server should
> - * account against fair_server such that it can account for
> - * this time and possibly avoid running this period.
> + * If the fair_server is active, we need to account for the
> + * fair_server time whether or not the task is running on
> + * behalf of fair_server or not:
> + * - If the task is running on behalf of fair_server, we need
> + * to limit its time based on the assigned runtime.
> + * - Fair task that runs outside of fair_server should account
> + * against fair_server such that it can account for this time
> + * and possibly avoid running this period.
> */
> - if (p->dl_server != &rq->fair_server)
> + if (dl_server_active(&rq->fair_server))
> dl_server_update(&rq->fair_server, delta_exec);
> }
That indeed also fixes it for me.
https://drive.codethink.co.uk/s/s9kZQs2Mz6DpH3X
> Thanks for your time testing the fixes :-)
You are very welcome. Thank you!
> ~Vineeth
Cheers
Marcel
* Re: [v6.12] WARNING: at kernel/sched/deadline.c:1995 enqueue_dl_entity (task blocked for more than 28262 seconds)
2024-12-10 9:28 ` Ilya Maximets
@ 2024-12-10 23:16 ` Ilya Maximets
2024-12-11 2:30 ` Vineeth Remanan Pillai
0 siblings, 1 reply; 22+ messages in thread
From: Ilya Maximets @ 2024-12-10 23:16 UTC (permalink / raw)
To: Vineeth Remanan Pillai
Cc: i.maximets, Peter Zijlstra, Joel Fernandes, LKML, Ingo Molnar,
Juri Lelli, Vincent Guittot, vineethrp, shraash, marcel.ziswiler
On 12/10/24 10:28, Ilya Maximets wrote:
> On 12/10/24 03:58, Vineeth Remanan Pillai wrote:
>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>> index fbdca89c677f..1f4b76c1f032 100644
>>> --- a/kernel/sched/fair.c
>>> +++ b/kernel/sched/fair.c
>>> @@ -1159,8 +1159,6 @@ static inline void update_curr_task(struct
>>> task_struct *p, s64 delta_exec)
>>> trace_sched_stat_runtime(p, delta_exec);
>>> account_group_exec_runtime(p, delta_exec);
>>> cgroup_account_cputime(p, delta_exec);
>>> - if (p->dl_server)
>>> - dl_server_update(p->dl_server, delta_exec);
>>> }
>>>
>>> static inline bool did_preempt_short(struct cfs_rq *cfs_rq, struct
>>> sched_entity *curr)
>>> @@ -1210,6 +1208,11 @@ s64 update_curr_common(struct rq *rq)
>>> return delta_exec;
>>> }
>>>
>>> +static inline bool dl_server_active(struct dl_sched_entity *dl_se)
>> Sorry, a small typo here: it should be struct sched_dl_entity,
>> not dl_sched_entity. The line should be:
>>
>> "static inline bool dl_server_active(struct sched_dl_entity *dl_se)"
>
> Sure. I can try that.
Running with this for about 8 hours and so far so good.
Will leave the test running for the night, just in case.
Best regards, Ilya Maximets.
>
> Note: I did indeed get lucky with the warning after 1 minute the
> first time. The second time I tried, the system didn't show any
> issues until 6 hours into the test...
>
> Best regards, Ilya Maximets.
>
* Re: [v6.12] WARNING: at kernel/sched/deadline.c:1995 enqueue_dl_entity (task blocked for more than 28262 seconds)
2024-12-10 23:16 ` Ilya Maximets
@ 2024-12-11 2:30 ` Vineeth Remanan Pillai
2024-12-11 9:48 ` Ilya Maximets
0 siblings, 1 reply; 22+ messages in thread
From: Vineeth Remanan Pillai @ 2024-12-11 2:30 UTC (permalink / raw)
To: Ilya Maximets
Cc: Peter Zijlstra, Joel Fernandes, LKML, Ingo Molnar, Juri Lelli,
Vincent Guittot, vineethrp, shraash, marcel.ziswiler
> > /*
> > - * Any fair task that runs outside of fair_server should
> > - * account against fair_server such that it can account for
> > - * this time and possibly avoid running this period.
> > + * If the fair_server is active, we need to account for the
> > + * fair_server time whether or not the task is running on
> > + * behalf of fair_server or not:
> > + * - If the task is running on behalf of fair_server, we need
> > + * to limit its time based on the assigned runtime.
> > + * - Fair task that runs outside of fair_server should account
> > + * against fair_server such that it can account for this time
> > + * and possibly avoid running this period.
> > */
> > - if (p->dl_server != &rq->fair_server)
> > + if (dl_server_active(&rq->fair_server))
> > dl_server_update(&rq->fair_server, delta_exec);
> > }
>
> That indeed also fixes it for me.
>
Thanks for the confirmation, Marcel.
> >>
> >> "static inline bool dl_server_active(struct sched_dl_entity *dl_se)"
> >
> > Sure. I can try that.
>
> Running with this for about 8 hours and so far so good.
> Will leave the test running for the night, just in case.
>
Thanks for the update, Ilya.
I have also been running the test for more than 24 hours now and have
not encountered any warnings or crashes.
Juri, Peter, shall I go ahead and send out a single patch folding the
two fixes in this thread (dl_server_active and the fix for the
dl_server_update call)?
Thanks,
Vineeth
* Re: [v6.12] WARNING: at kernel/sched/deadline.c:1995 enqueue_dl_entity (task blocked for more than 28262 seconds)
2024-12-11 2:30 ` Vineeth Remanan Pillai
@ 2024-12-11 9:48 ` Ilya Maximets
0 siblings, 0 replies; 22+ messages in thread
From: Ilya Maximets @ 2024-12-11 9:48 UTC (permalink / raw)
To: Vineeth Remanan Pillai
Cc: i.maximets, Peter Zijlstra, Joel Fernandes, LKML, Ingo Molnar,
Juri Lelli, Vincent Guittot, vineethrp, shraash, marcel.ziswiler
On 12/11/24 03:30, Vineeth Remanan Pillai wrote:
>>> /*
>>> - * Any fair task that runs outside of fair_server should
>>> - * account against fair_server such that it can account for
>>> - * this time and possibly avoid running this period.
>>> + * If the fair_server is active, we need to account for the
>>> + * fair_server time whether or not the task is running on
>>> + * behalf of fair_server or not:
>>> + * - If the task is running on behalf of fair_server, we need
>>> + * to limit its time based on the assigned runtime.
>>> + * - Fair task that runs outside of fair_server should account
>>> + * against fair_server such that it can account for this time
>>> + * and possibly avoid running this period.
>>> */
>>> - if (p->dl_server != &rq->fair_server)
>>> + if (dl_server_active(&rq->fair_server))
>>> dl_server_update(&rq->fair_server, delta_exec);
>>> }
>>
>> That indeed also fixes it for me.
>>
> Thanks for the confirmation Marcel
>
>>>>
>>>> "static inline bool dl_server_active(struct sched_dl_entity *dl_se)"
>>>
>>> Sure. I can try that.
>>
>> Running with this for about 8 hours and so far so good.
>> Will leave the test running for the night, just in case.
>>
> Thanks for the update Ilya
>
> I also have been running the test for more than 24 hours now and did
> not encounter warnings or crashes.
Nice! On my side, another 11 hours and no issues observed.
>
> Juri, Peter, shall I go ahead and send out a single patch folding the
> two fixes in this thread (dl_server_active and the fix for the
> dl_server_update call)?
>
> Thanks,
> Vineeth
end of thread, other threads:[~2024-12-11 9:48 UTC | newest]
Thread overview: 22+ messages
2024-12-04 12:47 [v6.12] WARNING: at kernel/sched/deadline.c:1995 enqueue_dl_entity (task blocked for more than 28262 seconds) Ilya Maximets
2024-12-06 15:18 ` Joel Fernandes
2024-12-06 16:57 ` Vineeth Remanan Pillai
2024-12-06 17:24 ` Joel Fernandes
2024-12-09 10:48 ` Juri Lelli
2024-12-09 10:55 ` Peter Zijlstra
2024-12-09 12:29 ` Vineeth Remanan Pillai
2024-12-09 12:34 ` Ilya Maximets
2024-12-10 0:31 ` Ilya Maximets
2024-12-09 12:56 ` Peter Zijlstra
2024-12-09 13:56 ` Vineeth Remanan Pillai
2024-12-09 14:01 ` Peter Zijlstra
2024-12-09 14:12 ` Vineeth Remanan Pillai
2024-12-10 0:34 ` Ilya Maximets
2024-12-10 2:52 ` Vineeth Remanan Pillai
2024-12-10 2:58 ` Vineeth Remanan Pillai
2024-12-10 9:28 ` Ilya Maximets
2024-12-10 23:16 ` Ilya Maximets
2024-12-11 2:30 ` Vineeth Remanan Pillai
2024-12-11 9:48 ` Ilya Maximets
2024-12-10 16:11 ` Marcel Ziswiler
2024-12-10 16:08 ` Marcel Ziswiler