public inbox for linux-kernel@vger.kernel.org
From: Andrea Righi <arighi@nvidia.com>
To: Zqiang <qiang.zhang@linux.dev>
Cc: tj@kernel.org, void@manifault.com, changwoo@igalia.com,
	sched-ext@lists.linux.dev, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] sched_ext: Fix incorrect sched_class settings for per-cpu migration tasks
Date: Mon, 1 Dec 2025 21:51:07 +0100	[thread overview]
Message-ID: <aS3_u1upYCkb3qzT@gpd4> (raw)
In-Reply-To: <20251201112540.5119-1-qiang.zhang@linux.dev>

Hi Zqiang,

On Mon, Dec 01, 2025 at 07:25:40PM +0800, Zqiang wrote:
> When loading an eBPF scheduler, the tasks on the scx_tasks list are
> traversed and __setscheduler_class() is invoked to determine each
> task's new sched_class. However, this also incorrectly sets the
> per-cpu migration tasks' ->sched_class to rt_sched_class, and even
> after the scheduler is unloaded, the migration tasks' ->sched_class
> remains rt_sched_class.
> 
> The log for this issue is as follows:
> 
> ./scx_rustland --stats 1
> [  199.245639][  T630] sched_ext: "rustland" does not implement cgroup cpu.weight
> [  199.269213][  T630] sched_ext: BPF scheduler "rustland" enabled
> 04:25:09 [INFO] RustLand scheduler attached
> 
> bpftrace -e 'iter:task /strcontains(ctx->task->comm, "migration")/
> { printf("%s:%d->%pS\n", ctx->task->comm, ctx->task->pid, ctx->task->sched_class); }'
> Attaching 1 probe...
> migration/0:24->rt_sched_class+0x0/0xe0
> migration/1:27->rt_sched_class+0x0/0xe0
> migration/2:33->rt_sched_class+0x0/0xe0
> migration/3:39->rt_sched_class+0x0/0xe0
> migration/4:45->rt_sched_class+0x0/0xe0
> migration/5:52->rt_sched_class+0x0/0xe0
> migration/6:58->rt_sched_class+0x0/0xe0
> migration/7:64->rt_sched_class+0x0/0xe0
> 
> sched_ext: BPF scheduler "rustland" disabled (unregistered from user space)
> EXIT: unregistered from user space
> 04:25:21 [INFO] Unregister RustLand scheduler
> 
> bpftrace -e 'iter:task /strcontains(ctx->task->comm, "migration")/
> { printf("%s:%d->%pS\n", ctx->task->comm, ctx->task->pid, ctx->task->sched_class); }'
> Attaching 1 probe...
> migration/0:24->rt_sched_class+0x0/0xe0
> migration/1:27->rt_sched_class+0x0/0xe0
> migration/2:33->rt_sched_class+0x0/0xe0
> migration/3:39->rt_sched_class+0x0/0xe0
> migration/4:45->rt_sched_class+0x0/0xe0
> migration/5:52->rt_sched_class+0x0/0xe0
> migration/6:58->rt_sched_class+0x0/0xe0
> migration/7:64->rt_sched_class+0x0/0xe0
> 
> This commit therefore introduces a new scx_setscheduler_class()
> helper, which adds a check for stop_sched_class, and uses it to
> replace the direct calls to __setscheduler_class().
> 
> Signed-off-by: Zqiang <qiang.zhang@linux.dev>

Good catch! It looks like we've had this since the beginning...
Maybe we should add:

Fixes: f0e1a0643a59b ("sched_ext: Implement BPF extensible scheduler class")

In any case, the fix looks correct to me.

Reviewed-by: Andrea Righi <arighi@nvidia.com>

Thanks,
-Andrea

> ---
>  kernel/sched/ext.c | 14 ++++++++++----
>  1 file changed, 10 insertions(+), 4 deletions(-)
> 
> diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
> index b40d35964cd4..9447fada0050 100644
> --- a/kernel/sched/ext.c
> +++ b/kernel/sched/ext.c
> @@ -248,6 +248,14 @@ static struct scx_dispatch_q *find_user_dsq(struct scx_sched *sch, u64 dsq_id)
>  	return rhashtable_lookup(&sch->dsq_hash, &dsq_id, dsq_hash_params);
>  }
>  
> +static const struct sched_class *scx_setscheduler_class(struct task_struct *p)
> +{
> +	if (p->sched_class == &stop_sched_class)
> +		return &stop_sched_class;
> +
> +	return __setscheduler_class(p->policy, p->prio);
> +}
> +
>  /*
>   * scx_kf_mask enforcement. Some kfuncs can only be called from specific SCX
>   * ops. When invoking SCX ops, SCX_CALL_OP[_RET]() should be used to indicate
> @@ -4241,8 +4249,7 @@ static void scx_disable_workfn(struct kthread_work *work)
>  	while ((p = scx_task_iter_next_locked(&sti))) {
>  		unsigned int queue_flags = DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK;
>  		const struct sched_class *old_class = p->sched_class;
> -		const struct sched_class *new_class =
> -			__setscheduler_class(p->policy, p->prio);
> +		const struct sched_class *new_class = scx_setscheduler_class(p);
>  
>  		update_rq_clock(task_rq(p));
>  
> @@ -5045,8 +5052,7 @@ static int scx_enable(struct sched_ext_ops *ops, struct bpf_link *link)
>  	while ((p = scx_task_iter_next_locked(&sti))) {
>  		unsigned int queue_flags = DEQUEUE_SAVE | DEQUEUE_MOVE;
>  		const struct sched_class *old_class = p->sched_class;
> -		const struct sched_class *new_class =
> -			__setscheduler_class(p->policy, p->prio);
> +		const struct sched_class *new_class = scx_setscheduler_class(p);
>  
>  		if (scx_get_task_state(p) != SCX_TASK_READY)
>  			continue;
> -- 
> 2.17.1
> 


Thread overview: 4+ messages
2025-12-01 11:25 [PATCH] sched_ext: Fix incorrect sched_class settings for per-cpu migration tasks Zqiang
2025-12-01 20:32 ` Tejun Heo
2025-12-01 20:51 ` Andrea Righi [this message]
2025-12-01 20:58   ` Tejun Heo
