From: Andrea Righi <arighi@nvidia.com>
To: Cheng-Yang Chou <yphbchou0911@gmail.com>
Cc: sched-ext@lists.linux.dev, tj@kernel.org, void@manifault.com,
changwoo@igalia.com, jserv@ccns.ncku.edu.tw
Subject: Re: [PATCH 2/2] sched_ext: Update demo schedulers and selftests to drop ops.cpu_acquire/release()
Date: Thu, 12 Mar 2026 08:40:30 +0100 [thread overview]
Message-ID: <abJt7tIXGC0qv7ex@gpd4> (raw)
In-Reply-To: <20260312042001.955675-3-yphbchou0911@gmail.com>
Hi Cheng-Yang,
On Thu, Mar 12, 2026 at 12:20:01PM +0800, Cheng-Yang Chou wrote:
> ops.cpu_acquire() and ops.cpu_release() are deprecated in favor of
> handling CPU preemption via the sched_switch tracepoint. Update scx_qmap
> and the maximal selftest to use the new approach.
We could mention that commit a3f5d4822253 ("sched_ext: Allow
scx_bpf_reenqueue_local() to be called from anywhere") is the change
deprecating ops.cpu_acquire/release().
>
> In scx_qmap, remove the cpu_release fallback and the
> __COMPAT_scx_bpf_reenqueue_local_from_anywhere() compat guard from
> qmap_sched_switch(), unconditionally handling preemption via the TP.
>
> In the maximal selftest, replace the cpu_acquire and cpu_release stubs
> with a minimal sched_switch TP program.
>
> Signed-off-by: Cheng-Yang Chou <yphbchou0911@gmail.com>
> ---
> tools/sched_ext/scx_qmap.bpf.c | 15 ++-------------
> tools/testing/selftests/sched_ext/maximal.bpf.c | 15 ++++++---------
> 2 files changed, 8 insertions(+), 22 deletions(-)
>
> diff --git a/tools/sched_ext/scx_qmap.bpf.c b/tools/sched_ext/scx_qmap.bpf.c
> index a4a1b84fe359..a11e27c8de77 100644
> --- a/tools/sched_ext/scx_qmap.bpf.c
> +++ b/tools/sched_ext/scx_qmap.bpf.c
> @@ -11,8 +11,8 @@
> *
> * - BPF-side queueing using PIDs.
> * - Sleepable per-task storage allocation using ops.prep_enable().
> - * - Using ops.cpu_release() to handle a higher priority scheduling class taking
> - * the CPU away.
> + * - Using the sched_switch tracepoint to handle a higher priority scheduling
> + * class taking the CPU away.
> * - Core-sched support.
> *
> * This scheduler is primarily for demonstration and testing of sched_ext
> @@ -562,9 +562,6 @@ SEC("tp_btf/sched_switch")
> int BPF_PROG(qmap_sched_switch, bool preempt, struct task_struct *prev,
> struct task_struct *next, unsigned long prev_state)
> {
> - if (!__COMPAT_scx_bpf_reenqueue_local_from_anywhere())
> - return 0;
> -
> /*
> * If @cpu is taken by a higher priority scheduling class, it is no
> * longer available for executing sched_ext tasks. As we don't want the
> @@ -586,13 +583,6 @@ int BPF_PROG(qmap_sched_switch, bool preempt, struct task_struct *prev,
> return 0;
> }
>
> -void BPF_STRUCT_OPS(qmap_cpu_release, s32 cpu, struct scx_cpu_release_args *args)
> -{
> - /* see qmap_sched_switch() to learn how to do this on newer kernels */
> - if (!__COMPAT_scx_bpf_reenqueue_local_from_anywhere())
> - scx_bpf_reenqueue_local();
> -}
> -
> s32 BPF_STRUCT_OPS(qmap_init_task, struct task_struct *p,
> struct scx_init_task_args *args)
> {
> @@ -999,7 +989,6 @@ SCX_OPS_DEFINE(qmap_ops,
> .dispatch = (void *)qmap_dispatch,
> .tick = (void *)qmap_tick,
> .core_sched_before = (void *)qmap_core_sched_before,
> - .cpu_release = (void *)qmap_cpu_release,
> .init_task = (void *)qmap_init_task,
> .dump = (void *)qmap_dump,
> .dump_cpu = (void *)qmap_dump_cpu,
> diff --git a/tools/testing/selftests/sched_ext/maximal.bpf.c b/tools/testing/selftests/sched_ext/maximal.bpf.c
> index 01cf4f3da4e0..5858f64313e9 100644
> --- a/tools/testing/selftests/sched_ext/maximal.bpf.c
> +++ b/tools/testing/selftests/sched_ext/maximal.bpf.c
> @@ -67,13 +67,12 @@ void BPF_STRUCT_OPS(maximal_set_cpumask, struct task_struct *p,
> void BPF_STRUCT_OPS(maximal_update_idle, s32 cpu, bool idle)
> {}
>
> -void BPF_STRUCT_OPS(maximal_cpu_acquire, s32 cpu,
> - struct scx_cpu_acquire_args *args)
> -{}
> -
> -void BPF_STRUCT_OPS(maximal_cpu_release, s32 cpu,
> - struct scx_cpu_release_args *args)
> -{}
> +SEC("tp_btf/sched_switch")
> +int BPF_PROG(maximal_sched_switch, bool preempt, struct task_struct *prev,
> + struct task_struct *next, unsigned long prev_state)
> +{
> + return 0;
> +}
I think this tracepoint is never attached. You can verify this by adding
a bpf_printk("hello") here and checking whether any message shows up in
/sys/kernel/tracing/trace_pipe.
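For example, something like this (just a sketch, the stub from your
patch with the printk added):

```c
SEC("tp_btf/sched_switch")
int BPF_PROG(maximal_sched_switch, bool preempt, struct task_struct *prev,
	     struct task_struct *next, unsigned long prev_state)
{
	/* Should appear in trace_pipe on every context switch once attached */
	bpf_printk("hello");
	return 0;
}
```

If nothing shows up while the test is running, the program was loaded
but never attached.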
To attach this TP you need to do something like this:
diff --git a/tools/testing/selftests/sched_ext/maximal.c b/tools/testing/selftests/sched_ext/maximal.c
index c6be50a9941d5..1dc3692246705 100644
--- a/tools/testing/selftests/sched_ext/maximal.c
+++ b/tools/testing/selftests/sched_ext/maximal.c
@@ -19,6 +19,9 @@ static enum scx_test_status setup(void **ctx)
SCX_ENUM_INIT(skel);
SCX_FAIL_IF(maximal__load(skel), "Failed to load skel");
+ bpf_map__set_autoattach(skel->maps.maximal_ops, false);
+ SCX_FAIL_IF(maximal__attach(skel), "Failed to attach skel");
+
*ctx = skel;
return SCX_TEST_PASS;
diff --git a/tools/testing/selftests/sched_ext/reload_loop.c b/tools/testing/selftests/sched_ext/reload_loop.c
index 308211d804364..49297b83d748d 100644
--- a/tools/testing/selftests/sched_ext/reload_loop.c
+++ b/tools/testing/selftests/sched_ext/reload_loop.c
@@ -23,6 +23,9 @@ static enum scx_test_status setup(void **ctx)
SCX_ENUM_INIT(skel);
SCX_FAIL_IF(maximal__load(skel), "Failed to load skel");
+ bpf_map__set_autoattach(skel->maps.maximal_ops, false);
+ SCX_FAIL_IF(maximal__attach(skel), "Failed to attach skel");
+
return SCX_TEST_PASS;
}
>
> void BPF_STRUCT_OPS(maximal_cpu_online, s32 cpu)
> {}
> @@ -150,8 +149,6 @@ struct sched_ext_ops maximal_ops = {
> .set_weight = (void *) maximal_set_weight,
> .set_cpumask = (void *) maximal_set_cpumask,
> .update_idle = (void *) maximal_update_idle,
> - .cpu_acquire = (void *) maximal_cpu_acquire,
> - .cpu_release = (void *) maximal_cpu_release,
> .cpu_online = (void *) maximal_cpu_online,
> .cpu_offline = (void *) maximal_cpu_offline,
> .init_task = (void *) maximal_init_task,
> --
> 2.48.1
>
Thanks,
-Andrea