From: sashiko-bot@kernel.org
To: "Samuele Mariotti" <smariotti@disroot.org>
Cc: sched-ext@lists.linux.dev
Subject: Re: [PATCH] sched_ext: Fix spurious WARN on stale ops_state in ops_dequeue()
Date: Thu, 14 May 2026 04:00:37 +0000	[thread overview]
Message-ID: <20260514040038.4C6D5C2BCB7@smtp.kernel.org> (raw)
In-Reply-To: <20260513095329.4029345-1-smariotti@disroot.org>

Thank you for your contribution! Sashiko AI review found 2 potential issues to consider:
- [Medium] Concurrent plain read and write of `p->scx.flags` without proper annotations constitutes a C data race.
- [Low] The newly added `goto retry;` loop lacks a `cpu_relax()` call.
--

commit e9d6f8408ebc8ea1acdd33de496b607223fd733b
Author: Samuele Mariotti <smariotti@disroot.org>

sched_ext: Fix spurious WARN on stale ops_state in ops_dequeue()

This commit resolves a spurious warning in ops_dequeue() caused by a race
with finish_dispatch(). It introduces a retry loop that re-evaluates
ops_state if SCX_TASK_IN_CUSTODY is cleared while the task is still seen
as SCX_OPSS_QUEUED.

> diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
> --- a/kernel/sched/ext.c
> +++ b/kernel/sched/ext.c
[ ... ]
> @@ -2035,7 +2036,9 @@ static void ops_dequeue(struct rq *rq, struct task_struct *p, u64 deq_flags)
>  		BUG();
>  	case SCX_OPSS_QUEUED:
>  		/* A queued task must always be in BPF scheduler's custody */
> -		WARN_ON_ONCE(!(p->scx.flags & SCX_TASK_IN_CUSTODY));
> +		if (!(p->scx.flags & SCX_TASK_IN_CUSTODY))

The commit message mentions that this read can execute concurrently with
finish_dispatch() on another CPU performing a non-atomic read-modify-write
that clears SCX_TASK_IN_CUSTODY.

Since a plain read racing with a plain write constitutes a C data race,
can this trigger KCSAN splats or allow compiler optimizations like load
tearing?

Should this read use READ_ONCE() or data_race(), and the lockless writer
be appropriately annotated as well?

> +			goto retry;

Since `goto retry` creates a spin-wait loop that polls until the other CPU's
stores become visible, should a cpu_relax() call be included here?

Standard kernel practice is to put cpu_relax() in such polling loops: it
yields execution resources to a HyperThreading sibling and reduces power
consumption during a tight spin.

> +
>  		if (atomic_long_try_cmpxchg(&p->scx.ops_state, &opss,
>  					    SCX_OPSS_NONE))
>  			break;

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260513095329.4029345-1-smariotti@disroot.org?part=1

Thread overview: 9+ messages
2026-05-13  9:53 [PATCH] sched_ext: Fix spurious WARN on stale ops_state in ops_dequeue() Samuele Mariotti
2026-05-13 14:26 ` Andrea Righi
2026-05-13 16:41   ` Samuele Mariotti
2026-05-13 16:49     ` Andrea Righi
2026-05-13 20:01     ` Tejun Heo
2026-05-14  9:13       ` Samuele Mariotti
2026-05-14 20:08         ` Andrea Righi
2026-05-15 10:12           ` Samuele Mariotti
2026-05-14  4:00 ` sashiko-bot [this message]
