* [PATCH sched_ext/for-7.1-fixes] sched_ext: Defer scx_hardlockup() out of NMI
@ 2026-04-24 20:14 Tejun Heo
From: Tejun Heo @ 2026-04-24 20:14 UTC (permalink / raw)
To: David Vernet, Andrea Righi, Changwoo Min; +Cc: sched-ext, emil, linux-kernel
scx_hardlockup() runs from NMI and eventually calls scx_claim_exit(),
which takes scx_sched_lock. scx_sched_lock isn't NMI-safe and grabbing
it from NMI context can lead to deadlocks.
The hardlockup handler is best-effort recovery and the disable path it
triggers runs off of irq_work anyway. Move the handle_lockup() call into
an irq_work so it runs in IRQ context.
Fixes: ebeca1f930ea ("sched_ext: Introduce cgroup sub-sched support")
Cc: stable@vger.kernel.org # v7.1+
Signed-off-by: Tejun Heo <tj@kernel.org>
---
kernel/sched/ext.c | 33 +++++++++++++++++++++++++++------
1 file changed, 27 insertions(+), 6 deletions(-)
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -5142,6 +5142,25 @@ void scx_softlockup(u32 dur_s)
 		       smp_processor_id(), dur_s);
 }
 
+/*
+ * scx_hardlockup() runs from NMI and eventually calls scx_claim_exit(),
+ * which takes scx_sched_lock. scx_sched_lock isn't NMI-safe and grabbing
+ * it from NMI context can lead to deadlocks. Defer via irq_work; the
+ * disable path runs off irq_work anyway.
+ */
+static atomic_t scx_hardlockup_cpu = ATOMIC_INIT(-1);
+
+static void scx_hardlockup_irq_workfn(struct irq_work *work)
+{
+	int cpu = atomic_xchg(&scx_hardlockup_cpu, -1);
+
+	if (cpu >= 0 && handle_lockup("hard lockup - CPU %d", cpu))
+		printk_deferred(KERN_ERR "sched_ext: Hard lockup - CPU %d, disabling BPF scheduler\n",
+				cpu);
+}
+
+static DEFINE_IRQ_WORK(scx_hardlockup_irq_work, scx_hardlockup_irq_workfn);
+
 /**
  * scx_hardlockup - sched_ext hardlockup handler
  *
@@ -5150,17 +5169,19 @@ void scx_softlockup(u32 dur_s)
  * Try kicking out the current scheduler in an attempt to recover the system to
  * a good state before taking more drastic actions.
  *
- * Returns %true if sched_ext is enabled and abort was initiated, which may
- * resolve the reported hardlockup. %false if sched_ext is not enabled or
- * someone else already initiated abort.
+ * Queues an irq_work; the handle_lockup() call happens in IRQ context (see
+ * scx_hardlockup_irq_workfn).
+ *
+ * Returns %true if sched_ext is enabled and the work was queued, %false
+ * otherwise.
  */
 bool scx_hardlockup(int cpu)
 {
-	if (!handle_lockup("hard lockup - CPU %d", cpu))
+	if (!rcu_access_pointer(scx_root))
 		return false;
 
-	printk_deferred(KERN_ERR "sched_ext: Hard lockup - CPU %d, disabling BPF scheduler\n",
-			cpu);
+	atomic_cmpxchg(&scx_hardlockup_cpu, -1, cpu);
+	irq_work_queue(&scx_hardlockup_irq_work);
 	return true;
 }
* Re: [PATCH sched_ext/for-7.1-fixes] sched_ext: Defer scx_hardlockup() out of NMI
From: Tejun Heo @ 2026-04-24 20:15 UTC (permalink / raw)
To: David Vernet, Andrea Righi, Changwoo Min; +Cc: sched-ext, emil, linux-kernel
On Fri, Apr 24, 2026 at 10:14:32AM -1000, Tejun Heo wrote:
> Cc: stable@vger.kernel.org # v7.1+
Please ignore this nonsensical stable cc.
Thanks.
--
tejun
* Re: [PATCH sched_ext/for-7.1-fixes] sched_ext: Defer scx_hardlockup() out of NMI
From: Andrea Righi @ 2026-04-24 22:19 UTC (permalink / raw)
To: Tejun Heo; +Cc: David Vernet, Changwoo Min, sched-ext, emil, linux-kernel
On Fri, Apr 24, 2026 at 10:14:32AM -1000, Tejun Heo wrote:
> scx_hardlockup() runs from NMI and eventually calls scx_claim_exit(),
> which takes scx_sched_lock. scx_sched_lock isn't NMI-safe and grabbing
> it from NMI context can lead to deadlocks.
>
> The hardlockup handler is best-effort recovery and the disable path it
> triggers runs off of irq_work anyway. Move the handle_lockup() call into
> an irq_work so it runs in IRQ context.
>
> Fixes: ebeca1f930ea ("sched_ext: Introduce cgroup sub-sched support")
> Cc: stable@vger.kernel.org # v7.1+
Apart from the Cc, as you mentioned. :)
Reviewed-by: Andrea Righi <arighi@nvidia.com>
Thanks,
-Andrea
* Re: [PATCH sched_ext/for-7.1-fixes] sched_ext: Defer scx_hardlockup() out of NMI
From: Tejun Heo @ 2026-04-25 0:23 UTC (permalink / raw)
To: David Vernet, Andrea Righi, Changwoo Min; +Cc: sched-ext, emil, linux-kernel
Hello,
> [PATCH sched_ext/for-7.1-fixes] sched_ext: Defer scx_hardlockup() out of NMI
Applied to sched_ext/for-7.1-fixes with the Cc: stable line dropped.
Thanks.
--
tejun