* [PATCH] sched_ext: Fix spurious WARN on stale ops_state in ops_dequeue()
@ 2026-05-13  9:53 Samuele Mariotti
  2026-05-13 14:26 ` Andrea Righi
  2026-05-14  4:00 ` sashiko-bot
  0 siblings, 2 replies; 9+ messages in thread
From: Samuele Mariotti @ 2026-05-13  9:53 UTC (permalink / raw)
  To: arighi, tj, void, changwoo
  Cc: sched-ext, linux-kernel, Samuele Mariotti, Paolo Valente

ops_dequeue() can race with finish_dispatch() and spuriously trigger the
"queued task must be in BPF scheduler's custody" warning.

ops_dequeue() snapshots p->scx.ops_state via atomic_long_read_acquire()
and then, in the SCX_OPSS_QUEUED arm, asserts that SCX_TASK_IN_CUSTODY
is set. The two reads are not atomic w.r.t. a concurrent
finish_dispatch() running on another CPU:

CPU 1                                    CPU 2
=====                                    =====
                                         dequeue_task_scx()
                                           ops_dequeue()
                                             opss = read_acquire(ops_state)
                                                  = SCX_OPSS_QUEUED
finish_dispatch()
  cmpxchg ops_state:
    SCX_OPSS_QUEUED -> SCX_OPSS_DISPATCHING  [succeeds]
  dispatch_enqueue(SCX_DSQ_GLOBAL,
                   SCX_ENQ_CLEAR_OPSS)
    call_task_dequeue()
      p->scx.flags &= ~SCX_TASK_IN_CUSTODY
                                             WARN_ON_ONCE(!(p->scx.flags &
                                                     SCX_TASK_IN_CUSTODY))
                                            /* opss is stale: QUEUED,
                                             * but task already claimed */
    set_release(ops_state, SCX_OPSS_NONE)

The race has been observed via two distinct call chains: the most common
goes through sched_setaffinity(), a rarer variant through
sched_change_begin().

For SCX_DSQ_GLOBAL / SCX_DSQ_BYPASS, dispatch_enqueue() clears
SCX_TASK_IN_CUSTODY before clearing ops_state to SCX_OPSS_NONE
(intentional, to avoid concurrent non-atomic RMW of p->scx.flags against
ops_dequeue()). The window between those two writes is exactly what
ops_dequeue() observes as "QUEUED without custody".

The observed state is not actually inconsistent; it just means that CPU 1
has already claimed the task and the QUEUED value held by CPU 2 is stale.
Re-read ops_state in that case; the next read is guaranteed to return
SCX_OPSS_DISPATCHING or SCX_OPSS_NONE, both of which exit the switch
cleanly. The retry is bounded: once IN_CUSTODY is cleared, ops_state has
already advanced past QUEUED for this dispatch cycle, and a fresh QUEUED
would require re-enqueue under p's rq lock, which CPU 2 holds.

Fixes: ebf1ccff79c4 ("sched_ext: Fix ops.dequeue() semantics")
Suggested-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Samuele Mariotti <smariotti@disroot.org>
Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
---
 kernel/sched/ext.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 23f7b3f63b09..d285e37f2177 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -2078,6 +2078,7 @@ static void ops_dequeue(struct rq *rq, struct task_struct *p, u64 deq_flags)
 	/* dequeue is always temporary, don't reset runnable_at */
 	clr_task_runnable(p, false);
 
+retry:
 	/* acquire ensures that we see the preceding updates on QUEUED */
 	opss = atomic_long_read_acquire(&p->scx.ops_state);
 
@@ -2092,7 +2093,9 @@ static void ops_dequeue(struct rq *rq, struct task_struct *p, u64 deq_flags)
 		BUG();
 	case SCX_OPSS_QUEUED:
 		/* A queued task must always be in BPF scheduler's custody */
-		WARN_ON_ONCE(!(p->scx.flags & SCX_TASK_IN_CUSTODY));
+		if (!(p->scx.flags & SCX_TASK_IN_CUSTODY))
+			goto retry;
+
 		if (atomic_long_try_cmpxchg(&p->scx.ops_state, &opss,
 					    SCX_OPSS_NONE))
 			break;
-- 
2.54.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH] sched_ext: Fix spurious WARN on stale ops_state in ops_dequeue()
  2026-05-13  9:53 [PATCH] sched_ext: Fix spurious WARN on stale ops_state in ops_dequeue() Samuele Mariotti
@ 2026-05-13 14:26 ` Andrea Righi
  2026-05-13 16:41   ` Samuele Mariotti
  2026-05-14  4:00 ` sashiko-bot
  1 sibling, 1 reply; 9+ messages in thread
From: Andrea Righi @ 2026-05-13 14:26 UTC (permalink / raw)
  To: Samuele Mariotti
  Cc: tj, void, changwoo, sched-ext, linux-kernel, Paolo Valente

Hi Samuele,

On Wed, May 13, 2026 at 11:53:29AM +0200, Samuele Mariotti wrote:
> ops_dequeue() can race with finish_dispatch() and spuriously trigger the
> "queued task must be in BPF scheduler's custody" warning.
> 
> ops_dequeue() snapshots p->scx.ops_state via atomic_long_read_acquire()
> and then, in the SCX_OPSS_QUEUED arm, asserts that SCX_TASK_IN_CUSTODY
> is set. The two reads are not atomic w.r.t. a concurrent
> finish_dispatch() running on another CPU:
> 
> CPU 1                                    CPU 2
> =====                                    =====
>                                          dequeue_task_scx()
>                                            ops_dequeue()
>                                              opss = read_acquire(ops_state)
>                                                   = SCX_OPSS_QUEUED
> finish_dispatch()
>   cmpxchg ops_state:
>     SCX_OPSS_QUEUED -> SCX_OPSS_DISPATCHING  [succeeds]
>   dispatch_enqueue(SCX_DSQ_GLOBAL,
>                    SCX_ENQ_CLEAR_OPSS)
>     call_task_dequeue()
>       p->scx.flags &= ~SCX_TASK_IN_CUSTODY
>                                              WARN_ON_ONCE(!(p->scx.flags &
>                                                      SCX_TASK_IN_CUSTODY))
>                                             /* opss is stale: QUEUED,
>                                              * but task already claimed */
>     set_release(ops_state, SCX_OPSS_NONE)
> 
> The race has been observed via two distinct call chains: the most common
> goes through sched_setaffinity(), a rarer variant through
> sched_change_begin().
> 
> For SCX_DSQ_GLOBAL / SCX_DSQ_BYPASS, dispatch_enqueue() clears
> SCX_TASK_IN_CUSTODY before clearing ops_state to SCX_OPSS_NONE
> (intentional, to avoid concurrent non-atomic RMW of p->scx.flags against
> ops_dequeue()). The window between those two writes is exactly what
> ops_dequeue() observes as "QUEUED without custody".
> 
> The observed state is not actually inconsistent, it just means CPU 1 has
> already claimed the task and the QUEUED value held by CPU 2 is stale.
> Re-read ops_state in that case; the next read is guaranteed to return
> SCX_OPSS_DISPATCHING or SCX_OPSS_NONE, both of which exit the switch
> cleanly. The retry is bounded: once IN_CUSTODY is cleared, ops_state has
> already advanced past QUEUED for this dispatch cycle, and a fresh QUEUED
> would require re-enqueue under p's rq lock, which CPU 2 holds.
> 
> Fixes: ebf1ccff79c4 ("sched_ext: Fix ops.dequeue() semantics")
> Suggested-by: Andrea Righi <arighi@nvidia.com>
> Signed-off-by: Samuele Mariotti <smariotti@disroot.org>
> Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
> ---
>  kernel/sched/ext.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
> index 23f7b3f63b09..d285e37f2177 100644
> --- a/kernel/sched/ext.c
> +++ b/kernel/sched/ext.c
> @@ -2078,6 +2078,7 @@ static void ops_dequeue(struct rq *rq, struct task_struct *p, u64 deq_flags)
>  	/* dequeue is always temporary, don't reset runnable_at */
>  	clr_task_runnable(p, false);
>  
> +retry:
>  	/* acquire ensures that we see the preceding updates on QUEUED */
>  	opss = atomic_long_read_acquire(&p->scx.ops_state);
>  
> @@ -2092,7 +2093,9 @@ static void ops_dequeue(struct rq *rq, struct task_struct *p, u64 deq_flags)
>  		BUG();
>  	case SCX_OPSS_QUEUED:
>  		/* A queued task must always be in BPF scheduler's custody */
> -		WARN_ON_ONCE(!(p->scx.flags & SCX_TASK_IN_CUSTODY));
> +		if (!(p->scx.flags & SCX_TASK_IN_CUSTODY))
> +			goto retry;

Can we add a cpu_relax() before the goto? A hot spin polling two cachelines from
another CPU could be very unkind to SMT siblings and bus traffic.

Moreover, we completely lose the original WARN_ON_ONCE(), so we don't catch the
case where the invariant QUEUED -> IN_CUSTODY is violated by a real bug. How
about adding a retry limit as well, i.e., something like this:

	int retries = 0;

	...
retry:
	...
	if (!(p->scx.flags & SCX_TASK_IN_CUSTODY) &&
	    !WARN_ON_ONCE(retries++ >= 128)) {
		cpu_relax();
		goto retry;
	}

> +
>  		if (atomic_long_try_cmpxchg(&p->scx.ops_state, &opss,
>  					    SCX_OPSS_NONE))
>  			break;
> -- 
> 2.54.0
> 

Thanks,
-Andrea

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] sched_ext: Fix spurious WARN on stale ops_state in ops_dequeue()
  2026-05-13 14:26 ` Andrea Righi
@ 2026-05-13 16:41   ` Samuele Mariotti
  2026-05-13 16:49     ` Andrea Righi
  2026-05-13 20:01     ` Tejun Heo
  0 siblings, 2 replies; 9+ messages in thread
From: Samuele Mariotti @ 2026-05-13 16:41 UTC (permalink / raw)
  To: Andrea Righi; +Cc: tj, void, changwoo, sched-ext, linux-kernel, Paolo Valente


Hi Andrea,

On 13/05/2026 16:26, Andrea Righi wrote:
>Hi Samuele,
>
>On Wed, May 13, 2026 at 11:53:29AM +0200, Samuele Mariotti wrote:
>> ops_dequeue() can race with finish_dispatch() and spuriously trigger the
>> "queued task must be in BPF scheduler's custody" warning.
>>
>> ops_dequeue() snapshots p->scx.ops_state via atomic_long_read_acquire()
>> and then, in the SCX_OPSS_QUEUED arm, asserts that SCX_TASK_IN_CUSTODY
>> is set. The two reads are not atomic w.r.t. a concurrent
>> finish_dispatch() running on another CPU:
>>
>> CPU 1                                    CPU 2
>> =====                                    =====
>>                                          dequeue_task_scx()
>>                                            ops_dequeue()
>>                                              opss = read_acquire(ops_state)
>>                                                   = SCX_OPSS_QUEUED
>> finish_dispatch()
>>   cmpxchg ops_state:
>>     SCX_OPSS_QUEUED -> SCX_OPSS_DISPATCHING  [succeeds]
>>   dispatch_enqueue(SCX_DSQ_GLOBAL,
>>                    SCX_ENQ_CLEAR_OPSS)
>>     call_task_dequeue()
>>       p->scx.flags &= ~SCX_TASK_IN_CUSTODY
>>                                              WARN_ON_ONCE(!(p->scx.flags &
>>                                                      SCX_TASK_IN_CUSTODY))
>>                                             /* opss is stale: QUEUED,
>>                                              * but task already claimed */
>>     set_release(ops_state, SCX_OPSS_NONE)
>>
>> The race has been observed via two distinct call chains: the most common
>> goes through sched_setaffinity(), a rarer variant through
>> sched_change_begin().
>>
>> For SCX_DSQ_GLOBAL / SCX_DSQ_BYPASS, dispatch_enqueue() clears
>> SCX_TASK_IN_CUSTODY before clearing ops_state to SCX_OPSS_NONE
>> (intentional, to avoid concurrent non-atomic RMW of p->scx.flags against
>> ops_dequeue()). The window between those two writes is exactly what
>> ops_dequeue() observes as "QUEUED without custody".
>>
>> The observed state is not actually inconsistent, it just means CPU 1 has
>> already claimed the task and the QUEUED value held by CPU 2 is stale.
>> Re-read ops_state in that case; the next read is guaranteed to return
>> SCX_OPSS_DISPATCHING or SCX_OPSS_NONE, both of which exit the switch
>> cleanly. The retry is bounded: once IN_CUSTODY is cleared, ops_state has
>> already advanced past QUEUED for this dispatch cycle, and a fresh QUEUED
>> would require re-enqueue under p's rq lock, which CPU 2 holds.
>>
>> Fixes: ebf1ccff79c4 ("sched_ext: Fix ops.dequeue() semantics")
>> Suggested-by: Andrea Righi <arighi@nvidia.com>
>> Signed-off-by: Samuele Mariotti <smariotti@disroot.org>
>> Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
>> ---
>>  kernel/sched/ext.c | 5 ++++-
>>  1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
>> index 23f7b3f63b09..d285e37f2177 100644
>> --- a/kernel/sched/ext.c
>> +++ b/kernel/sched/ext.c
>> @@ -2078,6 +2078,7 @@ static void ops_dequeue(struct rq *rq, struct task_struct *p, u64 deq_flags)
>>      /* dequeue is always temporary, don't reset runnable_at */
>>      clr_task_runnable(p, false);
>>
>> +retry:
>>      /* acquire ensures that we see the preceding updates on QUEUED */
>>      opss = atomic_long_read_acquire(&p->scx.ops_state);
>>
>> @@ -2092,7 +2093,9 @@ static void ops_dequeue(struct rq *rq, struct task_struct *p, u64 deq_flags)
>>          BUG();
>>      case SCX_OPSS_QUEUED:
>>          /* A queued task must always be in BPF scheduler's custody */
>> -        WARN_ON_ONCE(!(p->scx.flags & SCX_TASK_IN_CUSTODY));
>> +        if (!(p->scx.flags & SCX_TASK_IN_CUSTODY))
>> +            goto retry;
>
>Can we add a cpu_relax() before the goto? A hot spin polling two cachelines from
>another CPU could be very unkind to SMT siblings and bus traffic.
>
>Moreover, we completely lose the original WARN_ON_ONCE(), so we don't catch the
>case where the invariant QUEUED -> IN_CUSTODY is violated by a real bug. How
>about adding a retry limit as well, i.e., something like this:
>
>    int retries = 0;
>
>    ...
>retry:
>    ...
>    if (!(p->scx.flags & SCX_TASK_IN_CUSTODY) &&
>        !WARN_ON_ONCE(retries++ >= 128)) {
>        cpu_relax();
>        goto retry;
>    }
>
>> +
>>          if (atomic_long_try_cmpxchg(&p->scx.ops_state, &opss,
>>                          SCX_OPSS_NONE))
>>              break;
>> --
>> 2.54.0
>>
>
>Thanks,
>-Andrea

Thanks for the suggestion. I agree with adding cpu_relax() and the
retry limit to preserve the original WARN_ON_ONCE() as a safety net
for real bugs.

Given that we're touching this path anyway, I would also harden the
non-atomic read of p->scx.flags by using READ_ONCE(), preventing the
compiler from caching the value across retries and ensuring each
iteration observes the latest value written by the concurrent
finish_dispatch(). I would also lower the retry limit from 128 to 4:
the maximum number of retries observed empirically is 1, so 4 gives a
reasonable safety margin without spinning unnecessarily long.

Something like this:

if (!(READ_ONCE(p->scx.flags) & SCX_TASK_IN_CUSTODY) &&
    !WARN_ON_ONCE(retries++ >= 4)) {
	cpu_relax();
	goto retry;
}

Let me know if this looks good to you.

Thanks,
Samuele

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] sched_ext: Fix spurious WARN on stale ops_state in ops_dequeue()
  2026-05-13 16:41   ` Samuele Mariotti
@ 2026-05-13 16:49     ` Andrea Righi
  2026-05-13 20:01     ` Tejun Heo
  1 sibling, 0 replies; 9+ messages in thread
From: Andrea Righi @ 2026-05-13 16:49 UTC (permalink / raw)
  To: Samuele Mariotti
  Cc: tj, void, changwoo, sched-ext, linux-kernel, Paolo Valente

On Wed, May 13, 2026 at 06:41:26PM +0200, Samuele Mariotti wrote:
...
> Thanks for the suggestion. I agree with adding cpu_relax() and the
> retry limit to preserve the original WARN_ON_ONCE() as a safety net
> for real bugs.
> 
> Given the improvements to efficiency, I would also improve the non-atomic
> read of p->scx.flags by using READ_ONCE(), preventing the compiler from
> caching the value across retries and ensuring each iteration observes the
> latest value written by the concurrent finish_dispatch(). I would also
> lower the retry limit from 128 to 4, since the maximum number of retries
> observed empirically is 1, so 4 gives a reasonable safety margin without
> spinning unnecessarily long. 
> 
> Something like this: 
> 
> if (!(READ_ONCE(p->scx.flags) & SCX_TASK_IN_CUSTODY) &&
>      !WARN_ON_ONCE(retries++ >= 4)) {
>          cpu_relax();
>          goto retry;
> }
> 
> Let me know if this looks good to you.

Yeah, that sounds reasonable to me.

Thanks,
-Andrea

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] sched_ext: Fix spurious WARN on stale ops_state in ops_dequeue()
  2026-05-13 16:41   ` Samuele Mariotti
  2026-05-13 16:49     ` Andrea Righi
@ 2026-05-13 20:01     ` Tejun Heo
  2026-05-14  9:13       ` Samuele Mariotti
  1 sibling, 1 reply; 9+ messages in thread
From: Tejun Heo @ 2026-05-13 20:01 UTC (permalink / raw)
  To: Samuele Mariotti
  Cc: Andrea Righi, void, changwoo, sched-ext, linux-kernel,
	Paolo Valente

Hello,

On Wed, May 13, 2026 at 06:41:26PM +0200, Samuele Mariotti wrote:
> if (!(READ_ONCE(p->scx.flags) & SCX_TASK_IN_CUSTODY) &&
>      !WARN_ON_ONCE(retries++ >= 4)) {
>          cpu_relax();
>          goto retry;
> }

Let's not do the WARN and exit. We shouldn't get this wrong and if we get
this wrong, it's going to be obvious from lockup detectors. Can you please
add a comment explaining the retry condition tho?

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] sched_ext: Fix spurious WARN on stale ops_state in ops_dequeue()
  2026-05-13  9:53 [PATCH] sched_ext: Fix spurious WARN on stale ops_state in ops_dequeue() Samuele Mariotti
  2026-05-13 14:26 ` Andrea Righi
@ 2026-05-14  4:00 ` sashiko-bot
  1 sibling, 0 replies; 9+ messages in thread
From: sashiko-bot @ 2026-05-14  4:00 UTC (permalink / raw)
  To: Samuele Mariotti; +Cc: sched-ext

Thank you for your contribution! Sashiko AI review found 2 potential issue(s) to consider:
- [Medium] Concurrent plain read and write of `p->scx.flags` without proper annotations constitutes a C data race.
- [Low] The newly added `goto retry;` loop lacks a `cpu_relax()` call.
--

commit e9d6f8408ebc8ea1acdd33de496b607223fd733b
Author: Samuele Mariotti <smariotti@disroot.org>

sched_ext: Fix spurious WARN on stale ops_state in ops_dequeue()

This commit resolves a spurious warning in ops_dequeue() caused by a race
with finish_dispatch(). It introduces a retry loop that re-evaluates
ops_state if SCX_TASK_IN_CUSTODY is cleared while the task is still seen
as SCX_OPSS_QUEUED.

> diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
> --- a/kernel/sched/ext.c
> +++ b/kernel/sched/ext.c
[ ... ]
> @@ -2035,7 +2036,9 @@ static void ops_dequeue(struct rq *rq, struct task_struct *p, u64 deq_flags)
>  		BUG();
>  	case SCX_OPSS_QUEUED:
>  		/* A queued task must always be in BPF scheduler's custody */
> -		WARN_ON_ONCE(!(p->scx.flags & SCX_TASK_IN_CUSTODY));
> +		if (!(p->scx.flags & SCX_TASK_IN_CUSTODY))

The commit message mentions that this read can execute concurrently with
finish_dispatch() on another CPU performing a non-atomic read-modify-write
that clears SCX_TASK_IN_CUSTODY.

Since a plain read racing with a plain write constitutes a C data race,
can this trigger KCSAN splats or allow compiler optimizations like load
tearing?

Should this read use READ_ONCE() or data_race(), and the lockless writer
be appropriately annotated as well?

> +			goto retry;

Since the goto retry creates a spin-wait loop waiting for the other CPU's
memory writes to propagate, is it necessary to include a cpu_relax() call
here?

Standard kernel practice usually requires a cpu_relax() in polling loops to
yield execution resources to HyperThreading siblings and prevent the CPU
from hogging power during a tight spin.

> +
>  		if (atomic_long_try_cmpxchg(&p->scx.ops_state, &opss,
>  					    SCX_OPSS_NONE))
>  			break;

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260513095329.4029345-1-smariotti@disroot.org?part=1

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] sched_ext: Fix spurious WARN on stale ops_state in ops_dequeue()
  2026-05-13 20:01     ` Tejun Heo
@ 2026-05-14  9:13       ` Samuele Mariotti
  2026-05-14 20:08         ` Andrea Righi
  0 siblings, 1 reply; 9+ messages in thread
From: Samuele Mariotti @ 2026-05-14  9:13 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Andrea Righi, void, changwoo, sched-ext, linux-kernel,
	Paolo Valente

Hi Tejun,

>Let's not do the WARN and exit. We shouldn't get this wrong and if we get
>this wrong, it's going to be obvious from lockup detectors. Can you please
>add a comment explaining the retry condition tho?
>
>Thanks.
>
>-- 
>tejun

Thanks for the feedback. If I understood correctly, you prefer no retry
limit, letting the lockup detectors catch any real bug. I also added
unlikely() since the stale case is by definition rare.

Here is the updated version:

/*
 * If SCX_TASK_IN_CUSTODY is not set, opss is stale: finish_dispatch()
 * has already claimed the task and cleared SCX_TASK_IN_CUSTODY. Retry
 * to get a fresh view of p->scx.ops_state.
 */
if (unlikely(!(READ_ONCE(p->scx.flags) & SCX_TASK_IN_CUSTODY))) {
	cpu_relax();
	goto retry;
}

Let me know if this looks good to you.

Thanks,
Samuele

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] sched_ext: Fix spurious WARN on stale ops_state in ops_dequeue()
  2026-05-14  9:13       ` Samuele Mariotti
@ 2026-05-14 20:08         ` Andrea Righi
  2026-05-15 10:12           ` Samuele Mariotti
  0 siblings, 1 reply; 9+ messages in thread
From: Andrea Righi @ 2026-05-14 20:08 UTC (permalink / raw)
  To: Samuele Mariotti
  Cc: Tejun Heo, void, changwoo, sched-ext, linux-kernel, Paolo Valente

Hi Samuele,

On Thu, May 14, 2026 at 11:13:49AM +0200, Samuele Mariotti wrote:
> Hi Tejun,
> 
> > Let's not do the WARN and exit. We shouldn't get this wrong and if we get
> > this wrong, it's going to be obvious from lockup detectors. Can you please
> > add a comment explaining the retry condition tho?
> > 
> > Thanks.
> > 
> > -- 
> > tejun
> 
> Thanks for the feedback. If I understood correctly, you prefer no retry
> limit, letting the lockup detectors catch any real bug. I also added
> unlikely() since the stale case is by definition rare.
> 
> Here is the updated version:
> 
> /*
>  * If SCX_TASK_IN_CUSTODY is not set, opss is stale: finish_dispatch()
>  * has already claimed the task and cleared SCX_TASK_IN_CUSTODY. Retry
>  * to get a fresh view of p->scx.ops_state.
>  */
> if (unlikely(!(READ_ONCE(p->scx.flags) & SCX_TASK_IN_CUSTODY))) {
>     cpu_relax();
>     goto retry;
> }

The code looks good to me, I'd elaborate more on the comment to make it clear
that the retry loop is guaranteed to terminate (not a deadlock).

How about this (or something along these lines)?

 /*
  * A queued task must be in BPF scheduler's custody. If
  * SCX_TASK_IN_CUSTODY is clear, finish_dispatch() on another
  * CPU has already passed call_task_dequeue() (which clears the
  * flag), but has not yet written SCX_OPSS_NONE. That final
  * store does not require this rq's lock, so retrying with
  * cpu_relax() is bounded: we'll observe NONE (or DISPATCHING,
  * handled by the fallthrough) on a subsequent iteration.
  */

Thanks,
-Andrea

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH] sched_ext: Fix spurious WARN on stale ops_state in ops_dequeue()
  2026-05-14 20:08         ` Andrea Righi
@ 2026-05-15 10:12           ` Samuele Mariotti
  0 siblings, 0 replies; 9+ messages in thread
From: Samuele Mariotti @ 2026-05-15 10:12 UTC (permalink / raw)
  To: Andrea Righi
  Cc: Tejun Heo, void, changwoo, sched-ext, linux-kernel, Paolo Valente

Hello Andrea,

On 14/05/2026 22:08, Andrea Righi wrote:
...
>The code looks good to me, I'd elaborate more on the comment to make it clear
>that the retry loop is guaranteed to terminate (not a deadlock).
>
>How about this (or something along these lines)?
>
> /*
>  * A queued task must be in BPF scheduler's custody. If
>  * SCX_TASK_IN_CUSTODY is clear, finish_dispatch() on another
>  * CPU has already passed call_task_dequeue() (which clears the
>  * flag), but has not yet written SCX_OPSS_NONE. That final
>  * store does not require this rq's lock, so retrying with
>  * cpu_relax() is bounded: we'll observe NONE (or DISPATCHING,
>  * handled by the fallthrough) on a subsequent iteration.
>  */
>
>Thanks,
>-Andrea

Agreed, the comment covers all the relevant aspects and explains the if
condition clearly. I would go with it.


Thanks,
Samuele

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2026-05-15 10:12 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2026-05-13  9:53 [PATCH] sched_ext: Fix spurious WARN on stale ops_state in ops_dequeue() Samuele Mariotti
2026-05-13 14:26 ` Andrea Righi
2026-05-13 16:41   ` Samuele Mariotti
2026-05-13 16:49     ` Andrea Righi
2026-05-13 20:01     ` Tejun Heo
2026-05-14  9:13       ` Samuele Mariotti
2026-05-14 20:08         ` Andrea Righi
2026-05-15 10:12           ` Samuele Mariotti
2026-05-14  4:00 ` sashiko-bot
