* [PATCH] sched_ext: Fix spurious WARN on stale ops_state in ops_dequeue()
From: Samuele Mariotti @ 2026-05-13 9:53 UTC
To: arighi, tj, void, changwoo
Cc: sched-ext, linux-kernel, Samuele Mariotti, Paolo Valente
ops_dequeue() can race with finish_dispatch() and spuriously trigger the
"queued task must be in BPF scheduler's custody" warning.
ops_dequeue() snapshots p->scx.ops_state via atomic_long_read_acquire()
and then, in the SCX_OPSS_QUEUED arm, asserts that SCX_TASK_IN_CUSTODY
is set. The two reads are not atomic w.r.t. a concurrent
finish_dispatch() running on another CPU:
  CPU 1                                  CPU 2
  =====                                  =====
                                         dequeue_task_scx()
                                           ops_dequeue()
                                             opss = read_acquire(ops_state)
                                                  = SCX_OPSS_QUEUED
  finish_dispatch()
    cmpxchg ops_state:
      SCX_OPSS_QUEUED ->
        SCX_OPSS_DISPATCHING [succeeds]
    dispatch_enqueue(SCX_DSQ_GLOBAL,
                     SCX_ENQ_CLEAR_OPSS)
      call_task_dequeue()
        p->scx.flags &= ~SCX_TASK_IN_CUSTODY
                                             WARN_ON_ONCE(!(p->scx.flags &
                                                            SCX_TASK_IN_CUSTODY))
                                             /* opss is stale: QUEUED,
                                              * but task already claimed */
      set_release(ops_state, SCX_OPSS_NONE)
The race has been observed via two distinct call chains: the most common
goes through sched_setaffinity(), a rarer variant through
sched_change_begin().
For SCX_DSQ_GLOBAL / SCX_DSQ_BYPASS, dispatch_enqueue() clears
SCX_TASK_IN_CUSTODY before clearing ops_state to SCX_OPSS_NONE
(intentional, to avoid concurrent non-atomic RMW of p->scx.flags against
ops_dequeue()). The window between those two writes is exactly what
ops_dequeue() observes as "QUEUED without custody".
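For reference, here is a simplified sketch of the tail of that path (as
described above; not the verbatim code):

	/* dispatch_enqueue(), SCX_ENQ_CLEAR_OPSS path, simplified */

	/*
	 * Drop custody first: p->scx.flags is updated with non-atomic
	 * RMWs, so this update must be finished before ops_dequeue()
	 * may touch the flags again.
	 */
	p->scx.flags &= ~SCX_TASK_IN_CUSTODY;

	/*
	 * <-- window: an ops_dequeue() that sampled QUEUED before the
	 *     DISPATCHING cmpxchg observes "QUEUED without custody" here
	 */

	/* release pairs with the read_acquire() in ops_dequeue() */
	atomic_long_set_release(&p->scx.ops_state, SCX_OPSS_NONE);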
The observed state is not actually inconsistent; it just means CPU 1 has
already claimed the task and the QUEUED value held by CPU 2 is stale.
Re-read ops_state in that case; the next read is guaranteed to return
SCX_OPSS_DISPATCHING or SCX_OPSS_NONE, both of which exit the switch
cleanly. The retry is bounded: once IN_CUSTODY is cleared, ops_state has
already advanced past QUEUED for this dispatch cycle, and a fresh QUEUED
would require re-enqueue under p's rq lock, which CPU 2 holds.
Fixes: ebf1ccff79c4 ("sched_ext: Fix ops.dequeue() semantics")
Suggested-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Samuele Mariotti <smariotti@disroot.org>
Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
---
kernel/sched/ext.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 23f7b3f63b09..d285e37f2177 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -2078,6 +2078,7 @@ static void ops_dequeue(struct rq *rq, struct task_struct *p, u64 deq_flags)
 	/* dequeue is always temporary, don't reset runnable_at */
 	clr_task_runnable(p, false);
 
+retry:
 	/* acquire ensures that we see the preceding updates on QUEUED */
 	opss = atomic_long_read_acquire(&p->scx.ops_state);
 
@@ -2092,7 +2093,9 @@ static void ops_dequeue(struct rq *rq, struct task_struct *p, u64 deq_flags)
 		BUG();
 	case SCX_OPSS_QUEUED:
 		/* A queued task must always be in BPF scheduler's custody */
-		WARN_ON_ONCE(!(p->scx.flags & SCX_TASK_IN_CUSTODY));
+		if (!(p->scx.flags & SCX_TASK_IN_CUSTODY))
+			goto retry;
+
 		if (atomic_long_try_cmpxchg(&p->scx.ops_state, &opss,
 					    SCX_OPSS_NONE))
 			break;
--
2.54.0
* Re: [PATCH] sched_ext: Fix spurious WARN on stale ops_state in ops_dequeue()
From: Andrea Righi @ 2026-05-13 14:26 UTC
To: Samuele Mariotti
Cc: tj, void, changwoo, sched-ext, linux-kernel, Paolo Valente
Hi Samuele,
On Wed, May 13, 2026 at 11:53:29AM +0200, Samuele Mariotti wrote:
> ops_dequeue() can race with finish_dispatch() and spuriously trigger the
> "queued task must be in BPF scheduler's custody" warning.
>
> ops_dequeue() snapshots p->scx.ops_state via atomic_long_read_acquire()
> and then, in the SCX_OPSS_QUEUED arm, asserts that SCX_TASK_IN_CUSTODY
> is set. The two reads are not atomic w.r.t. a concurrent
> finish_dispatch() running on another CPU:
>
>   CPU 1                                  CPU 2
>   =====                                  =====
>                                          dequeue_task_scx()
>                                            ops_dequeue()
>                                              opss = read_acquire(ops_state)
>                                                   = SCX_OPSS_QUEUED
>   finish_dispatch()
>     cmpxchg ops_state:
>       SCX_OPSS_QUEUED ->
>         SCX_OPSS_DISPATCHING [succeeds]
>     dispatch_enqueue(SCX_DSQ_GLOBAL,
>                      SCX_ENQ_CLEAR_OPSS)
>       call_task_dequeue()
>         p->scx.flags &= ~SCX_TASK_IN_CUSTODY
>                                              WARN_ON_ONCE(!(p->scx.flags &
>                                                             SCX_TASK_IN_CUSTODY))
>                                              /* opss is stale: QUEUED,
>                                               * but task already claimed */
>       set_release(ops_state, SCX_OPSS_NONE)
>
> The race has been observed via two distinct call chains: the most common
> goes through sched_setaffinity(), a rarer variant through
> sched_change_begin().
>
> For SCX_DSQ_GLOBAL / SCX_DSQ_BYPASS, dispatch_enqueue() clears
> SCX_TASK_IN_CUSTODY before clearing ops_state to SCX_OPSS_NONE
> (intentional, to avoid concurrent non-atomic RMW of p->scx.flags against
> ops_dequeue()). The window between those two writes is exactly what
> ops_dequeue() observes as "QUEUED without custody".
>
> The observed state is not actually inconsistent; it just means CPU 1 has
> already claimed the task and the QUEUED value held by CPU 2 is stale.
> Re-read ops_state in that case; the next read is guaranteed to return
> SCX_OPSS_DISPATCHING or SCX_OPSS_NONE, both of which exit the switch
> cleanly. The retry is bounded: once IN_CUSTODY is cleared, ops_state has
> already advanced past QUEUED for this dispatch cycle, and a fresh QUEUED
> would require re-enqueue under p's rq lock, which CPU 2 holds.
>
> Fixes: ebf1ccff79c4 ("sched_ext: Fix ops.dequeue() semantics")
> Suggested-by: Andrea Righi <arighi@nvidia.com>
> Signed-off-by: Samuele Mariotti <smariotti@disroot.org>
> Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
> ---
> kernel/sched/ext.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
> index 23f7b3f63b09..d285e37f2177 100644
> --- a/kernel/sched/ext.c
> +++ b/kernel/sched/ext.c
> @@ -2078,6 +2078,7 @@ static void ops_dequeue(struct rq *rq, struct task_struct *p, u64 deq_flags)
>  	/* dequeue is always temporary, don't reset runnable_at */
>  	clr_task_runnable(p, false);
>  
> +retry:
>  	/* acquire ensures that we see the preceding updates on QUEUED */
>  	opss = atomic_long_read_acquire(&p->scx.ops_state);
>  
> @@ -2092,7 +2093,9 @@ static void ops_dequeue(struct rq *rq, struct task_struct *p, u64 deq_flags)
>  		BUG();
>  	case SCX_OPSS_QUEUED:
>  		/* A queued task must always be in BPF scheduler's custody */
> -		WARN_ON_ONCE(!(p->scx.flags & SCX_TASK_IN_CUSTODY));
> +		if (!(p->scx.flags & SCX_TASK_IN_CUSTODY))
> +			goto retry;
Can we add a cpu_relax() before the goto? A hot spin polling two cachelines from
another CPU could be very unkind to SMT siblings and bus traffic.
Moreover, we completely lose the original WARN_ON_ONCE(), so we don't catch the
case where the invariant QUEUED -> IN_CUSTODY is violated by a real bug. How
about adding a retry limit as well, i.e., something like this:
	int retries = 0;

	...
retry:
	...
	if (!(p->scx.flags & SCX_TASK_IN_CUSTODY) &&
	    !WARN_ON_ONCE(retries++ >= 128)) {
		cpu_relax();
		goto retry;
	}
> +
>  		if (atomic_long_try_cmpxchg(&p->scx.ops_state, &opss,
>  					    SCX_OPSS_NONE))
>  			break;
> --
> 2.54.0
>
Thanks,
-Andrea
* Re: [PATCH] sched_ext: Fix spurious WARN on stale ops_state in ops_dequeue()
From: Samuele Mariotti @ 2026-05-13 16:41 UTC
To: Andrea Righi; +Cc: tj, void, changwoo, sched-ext, linux-kernel, Paolo Valente
Hi Andrea,
On 13/05/2026 16:26, Andrea Righi wrote:
>Hi Samuele,
>
>On Wed, May 13, 2026 at 11:53:29AM +0200, Samuele Mariotti wrote:
>> ops_dequeue() can race with finish_dispatch() and spuriously trigger the
>> "queued task must be in BPF scheduler's custody" warning.
>>
>> ops_dequeue() snapshots p->scx.ops_state via atomic_long_read_acquire()
>> and then, in the SCX_OPSS_QUEUED arm, asserts that SCX_TASK_IN_CUSTODY
>> is set. The two reads are not atomic w.r.t. a concurrent
>> finish_dispatch() running on another CPU:
>>
>>   CPU 1                                  CPU 2
>>   =====                                  =====
>>                                          dequeue_task_scx()
>>                                            ops_dequeue()
>>                                              opss = read_acquire(ops_state)
>>                                                   = SCX_OPSS_QUEUED
>>   finish_dispatch()
>>     cmpxchg ops_state:
>>       SCX_OPSS_QUEUED ->
>>         SCX_OPSS_DISPATCHING [succeeds]
>>     dispatch_enqueue(SCX_DSQ_GLOBAL,
>>                      SCX_ENQ_CLEAR_OPSS)
>>       call_task_dequeue()
>>         p->scx.flags &= ~SCX_TASK_IN_CUSTODY
>>                                              WARN_ON_ONCE(!(p->scx.flags &
>>                                                             SCX_TASK_IN_CUSTODY))
>>                                              /* opss is stale: QUEUED,
>>                                               * but task already claimed */
>>       set_release(ops_state, SCX_OPSS_NONE)
>>
>> The race has been observed via two distinct call chains: the most common
>> goes through sched_setaffinity(), a rarer variant through
>> sched_change_begin().
>>
>> For SCX_DSQ_GLOBAL / SCX_DSQ_BYPASS, dispatch_enqueue() clears
>> SCX_TASK_IN_CUSTODY before clearing ops_state to SCX_OPSS_NONE
>> (intentional, to avoid concurrent non-atomic RMW of p->scx.flags against
>> ops_dequeue()). The window between those two writes is exactly what
>> ops_dequeue() observes as "QUEUED without custody".
>>
>> The observed state is not actually inconsistent; it just means CPU 1 has
>> already claimed the task and the QUEUED value held by CPU 2 is stale.
>> Re-read ops_state in that case; the next read is guaranteed to return
>> SCX_OPSS_DISPATCHING or SCX_OPSS_NONE, both of which exit the switch
>> cleanly. The retry is bounded: once IN_CUSTODY is cleared, ops_state has
>> already advanced past QUEUED for this dispatch cycle, and a fresh QUEUED
>> would require re-enqueue under p's rq lock, which CPU 2 holds.
>>
>> Fixes: ebf1ccff79c4 ("sched_ext: Fix ops.dequeue() semantics")
>> Suggested-by: Andrea Righi <arighi@nvidia.com>
>> Signed-off-by: Samuele Mariotti <smariotti@disroot.org>
>> Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
>> ---
>> kernel/sched/ext.c | 5 ++++-
>> 1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
>> index 23f7b3f63b09..d285e37f2177 100644
>> --- a/kernel/sched/ext.c
>> +++ b/kernel/sched/ext.c
>> @@ -2078,6 +2078,7 @@ static void ops_dequeue(struct rq *rq, struct task_struct *p, u64 deq_flags)
>>  	/* dequeue is always temporary, don't reset runnable_at */
>>  	clr_task_runnable(p, false);
>>  
>> +retry:
>>  	/* acquire ensures that we see the preceding updates on QUEUED */
>>  	opss = atomic_long_read_acquire(&p->scx.ops_state);
>>  
>> @@ -2092,7 +2093,9 @@ static void ops_dequeue(struct rq *rq, struct task_struct *p, u64 deq_flags)
>>  		BUG();
>>  	case SCX_OPSS_QUEUED:
>>  		/* A queued task must always be in BPF scheduler's custody */
>> -		WARN_ON_ONCE(!(p->scx.flags & SCX_TASK_IN_CUSTODY));
>> +		if (!(p->scx.flags & SCX_TASK_IN_CUSTODY))
>> +			goto retry;
>
>Can we add a cpu_relax() before the goto? A hot spin polling two cachelines from
>another CPU could be very unkind to SMT siblings and bus traffic.
>
>Moreover, we completely lose the original WARN_ON_ONCE(), so we don't catch the
>case where the invariant QUEUED -> IN_CUSTODY is violated by a real bug. How
>about adding a retry limit as well, i.e., something like this:
>
>	int retries = 0;
>
>	...
>retry:
>	...
>	if (!(p->scx.flags & SCX_TASK_IN_CUSTODY) &&
>	    !WARN_ON_ONCE(retries++ >= 128)) {
>		cpu_relax();
>		goto retry;
>	}
>
>> +
>>  		if (atomic_long_try_cmpxchg(&p->scx.ops_state, &opss,
>>  					    SCX_OPSS_NONE))
>>  			break;
>> --
>> 2.54.0
>>
>
>Thanks,
>-Andrea
Thanks for the suggestion. I agree with adding cpu_relax() and the
retry limit to preserve the original WARN_ON_ONCE() as a safety net
for real bugs.
While touching this path, I would also convert the non-atomic read of
p->scx.flags to READ_ONCE(), preventing the compiler from caching the value
across retries and ensuring each iteration observes the latest value
written by the concurrent finish_dispatch(). I would also lower the retry
limit from 128 to 4: empirically I have never observed more than one retry,
so 4 leaves a reasonable safety margin without spinning unnecessarily long.
Something like this:
	if (!(READ_ONCE(p->scx.flags) & SCX_TASK_IN_CUSTODY) &&
	    !WARN_ON_ONCE(retries++ >= 4)) {
		cpu_relax();
		goto retry;
	}
Let me know if this looks good to you.
Thanks,
Samuele
* Re: [PATCH] sched_ext: Fix spurious WARN on stale ops_state in ops_dequeue()
From: Andrea Righi @ 2026-05-13 16:49 UTC
To: Samuele Mariotti
Cc: tj, void, changwoo, sched-ext, linux-kernel, Paolo Valente
On Wed, May 13, 2026 at 06:41:26PM +0200, Samuele Mariotti wrote:
...
> Thanks for the suggestion. I agree with adding cpu_relax() and the
> retry limit to preserve the original WARN_ON_ONCE() as a safety net
> for real bugs.
>
> While touching this path, I would also convert the non-atomic read of
> p->scx.flags to READ_ONCE(), preventing the compiler from caching the value
> across retries and ensuring each iteration observes the latest value
> written by the concurrent finish_dispatch(). I would also lower the retry
> limit from 128 to 4: empirically I have never observed more than one retry,
> so 4 leaves a reasonable safety margin without spinning unnecessarily long.
>
> Something like this:
>
> 	if (!(READ_ONCE(p->scx.flags) & SCX_TASK_IN_CUSTODY) &&
> 	    !WARN_ON_ONCE(retries++ >= 4)) {
> 		cpu_relax();
> 		goto retry;
> 	}
>
> Let me know if this looks good to you.
Yeah, that sounds reasonable to me.
Thanks,
-Andrea
* Re: [PATCH] sched_ext: Fix spurious WARN on stale ops_state in ops_dequeue()
From: Tejun Heo @ 2026-05-13 20:01 UTC
To: Samuele Mariotti
Cc: Andrea Righi, void, changwoo, sched-ext, linux-kernel,
Paolo Valente
Hello,
On Wed, May 13, 2026 at 06:41:26PM +0200, Samuele Mariotti wrote:
> 	if (!(READ_ONCE(p->scx.flags) & SCX_TASK_IN_CUSTODY) &&
> 	    !WARN_ON_ONCE(retries++ >= 4)) {
> 		cpu_relax();
> 		goto retry;
> 	}
Let's not do the WARN and exit. We shouldn't get this wrong and if we get
this wrong, it's going to be obvious from lockup detectors. Can you please
add a comment explaining the retry condition tho?
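Something along these lines, perhaps (untested, just to sketch the kind of
comment I mean):

	case SCX_OPSS_QUEUED:
		/*
		 * A queued task should be in the BPF scheduler's custody;
		 * however, a concurrent finish_dispatch() may have already
		 * claimed @p and cleared SCX_TASK_IN_CUSTODY without yet
		 * having moved ops_state past QUEUED. In that case the
		 * QUEUED value read above is stale: re-read ops_state,
		 * which will shortly become DISPATCHING or NONE.
		 */
		if (!(READ_ONCE(p->scx.flags) & SCX_TASK_IN_CUSTODY)) {
			cpu_relax();
			goto retry;
		}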
Thanks.
--
tejun