* [PATCH 0/2] sched_ext: Update demo schedulers and selftests for deprecated APIs
@ 2026-03-12 4:19 Cheng-Yang Chou
2026-03-12 4:20 ` [PATCH 1/2] sched_ext: Update demo schedulers and selftests to use scx_bpf_task_set_dsq_vtime() Cheng-Yang Chou
2026-03-12 4:20 ` [PATCH 2/2] sched_ext: Update demo schedulers and selftests to drop ops.cpu_acquire/release() Cheng-Yang Chou
From: Cheng-Yang Chou @ 2026-03-12 4:19 UTC
To: sched-ext; +Cc: tj, void, arighi, changwoo, jserv, yphbchou0911
Two sets of sched_ext APIs have been deprecated:
- Direct writes to p->scx.dsq_vtime in favor of
scx_bpf_task_set_dsq_vtime()
- ops.cpu_acquire/release() in favor of handling CPU preemption via the
sched_switch tracepoint
This series updates the demo schedulers (scx_simple, scx_flatcg,
scx_qmap) and selftests (select_cpu_vtime, maximal) to use the new
APIs, keeping them in sync with current best practices.
Patch 1 uses bpf_ksym_exists() to fall back to direct assignment on
older kernels that don't have scx_bpf_task_set_dsq_vtime(), preserving
backwards compatibility.
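For reference, the fallback pattern used throughout patch 1 boils down
to the sketch below (illustration only: the helper name is made up, and
the weak ksym declaration normally comes from the scx headers; the
per-call-site changes in the diffs are the authoritative version):

	/*
	 * Sketch: prefer the kfunc when the running kernel has it,
	 * fall back to a direct write otherwise.
	 */
	void scx_bpf_task_set_dsq_vtime___new(struct task_struct *p,
					      u64 vtime) __ksym __weak;

	static void task_set_vtime(struct task_struct *p, u64 vtime)
	{
		if (bpf_ksym_exists(scx_bpf_task_set_dsq_vtime___new))
			scx_bpf_task_set_dsq_vtime___new(p, vtime);
		else
			p->scx.dsq_vtime = vtime;
	}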
Patch 2 removes the cpu_acquire/release stubs and the
__COMPAT_scx_bpf_reenqueue_local_from_anywhere() compat guard from
scx_qmap, unconditionally relying on the sched_switch TP.
Thanks,
Cheng-Yang
---
Cheng-Yang Chou (2):
sched_ext: Update demo schedulers and selftests to use
scx_bpf_task_set_dsq_vtime()
sched_ext: Update demo schedulers and selftests to drop
ops.cpu_acquire/release()
tools/sched_ext/scx_flatcg.bpf.c | 21 ++++++++++++++-----
tools/sched_ext/scx_qmap.bpf.c | 15 ++-----------
tools/sched_ext/scx_simple.bpf.c | 12 +++++++++--
.../testing/selftests/sched_ext/maximal.bpf.c | 15 ++++++-------
.../sched_ext/select_cpu_vtime.bpf.c | 13 ++++++++++--
5 files changed, 45 insertions(+), 31 deletions(-)
--
2.48.1
* [PATCH 1/2] sched_ext: Update demo schedulers and selftests to use scx_bpf_task_set_dsq_vtime()
2026-03-12 4:19 [PATCH 0/2] sched_ext: Update demo schedulers and selftests for deprecated APIs Cheng-Yang Chou
@ 2026-03-12 4:20 ` Cheng-Yang Chou
2026-03-12 6:41 ` Andrea Righi
2026-03-12 4:20 ` [PATCH 2/2] sched_ext: Update demo schedulers and selftests to drop ops.cpu_acquire/release() Cheng-Yang Chou
From: Cheng-Yang Chou @ 2026-03-12 4:20 UTC
To: sched-ext; +Cc: tj, void, arighi, changwoo, jserv, yphbchou0911
Direct writes to p->scx.dsq_vtime are deprecated in favor of
scx_bpf_task_set_dsq_vtime(). Update scx_simple, scx_flatcg, and the
select_cpu_vtime selftest to use the new kfunc.
Use bpf_ksym_exists() to fall back to direct assignment on older kernels
that don't have the new kfunc, preserving backwards compatibility.
Signed-off-by: Cheng-Yang Chou <yphbchou0911@gmail.com>
---
tools/sched_ext/scx_flatcg.bpf.c | 21 ++++++++++++++-----
tools/sched_ext/scx_simple.bpf.c | 12 +++++++++--
.../sched_ext/select_cpu_vtime.bpf.c | 13 ++++++++++--
3 files changed, 37 insertions(+), 9 deletions(-)
diff --git a/tools/sched_ext/scx_flatcg.bpf.c b/tools/sched_ext/scx_flatcg.bpf.c
index a8a9234bb41e..27d99bb1e60f 100644
--- a/tools/sched_ext/scx_flatcg.bpf.c
+++ b/tools/sched_ext/scx_flatcg.bpf.c
@@ -551,9 +551,14 @@ void BPF_STRUCT_OPS(fcg_stopping, struct task_struct *p, bool runnable)
* too much, determine the execution time by taking explicit timestamps
* instead of depending on @p->scx.slice.
*/
- if (!fifo_sched)
- p->scx.dsq_vtime +=
- (SCX_SLICE_DFL - p->scx.slice) * 100 / p->scx.weight;
+ u64 delta = (SCX_SLICE_DFL - p->scx.slice) * 100 / p->scx.weight;
+
+ if (!fifo_sched) {
+ if (bpf_ksym_exists(scx_bpf_task_set_dsq_vtime___new))
+ scx_bpf_task_set_dsq_vtime___new(p, p->scx.dsq_vtime + delta);
+ else
+ p->scx.dsq_vtime += delta;
+ }
taskc = bpf_task_storage_get(&task_ctx, p, 0, 0);
if (!taskc) {
@@ -822,7 +827,10 @@ s32 BPF_STRUCT_OPS(fcg_init_task, struct task_struct *p,
if (!(cgc = find_cgrp_ctx(args->cgroup)))
return -ENOENT;
- p->scx.dsq_vtime = cgc->tvtime_now;
+ if (bpf_ksym_exists(scx_bpf_task_set_dsq_vtime___new))
+ scx_bpf_task_set_dsq_vtime___new(p, cgc->tvtime_now);
+ else
+ p->scx.dsq_vtime = cgc->tvtime_now;
return 0;
}
@@ -924,7 +932,10 @@ void BPF_STRUCT_OPS(fcg_cgroup_move, struct task_struct *p,
return;
delta = time_delta(p->scx.dsq_vtime, from_cgc->tvtime_now);
- p->scx.dsq_vtime = to_cgc->tvtime_now + delta;
+ if (bpf_ksym_exists(scx_bpf_task_set_dsq_vtime___new))
+ scx_bpf_task_set_dsq_vtime___new(p, to_cgc->tvtime_now + delta);
+ else
+ p->scx.dsq_vtime = to_cgc->tvtime_now + delta;
}
s32 BPF_STRUCT_OPS_SLEEPABLE(fcg_init)
diff --git a/tools/sched_ext/scx_simple.bpf.c b/tools/sched_ext/scx_simple.bpf.c
index b456bd7cae77..61d3bcf54ce7 100644
--- a/tools/sched_ext/scx_simple.bpf.c
+++ b/tools/sched_ext/scx_simple.bpf.c
@@ -121,12 +121,20 @@ void BPF_STRUCT_OPS(simple_stopping, struct task_struct *p, bool runnable)
* too much, determine the execution time by taking explicit timestamps
* instead of depending on @p->scx.slice.
*/
- p->scx.dsq_vtime += (SCX_SLICE_DFL - p->scx.slice) * 100 / p->scx.weight;
+ u64 delta = (SCX_SLICE_DFL - p->scx.slice) * 100 / p->scx.weight;
+
+ if (bpf_ksym_exists(scx_bpf_task_set_dsq_vtime___new))
+ scx_bpf_task_set_dsq_vtime___new(p, p->scx.dsq_vtime + delta);
+ else
+ p->scx.dsq_vtime += delta;
}
void BPF_STRUCT_OPS(simple_enable, struct task_struct *p)
{
- p->scx.dsq_vtime = vtime_now;
+ if (bpf_ksym_exists(scx_bpf_task_set_dsq_vtime___new))
+ scx_bpf_task_set_dsq_vtime___new(p, vtime_now);
+ else
+ p->scx.dsq_vtime = vtime_now;
}
s32 BPF_STRUCT_OPS_SLEEPABLE(simple_init)
diff --git a/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
index bfcb96cd4954..17ed515c3e25 100644
--- a/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
+++ b/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
@@ -66,12 +66,21 @@ void BPF_STRUCT_OPS(select_cpu_vtime_running, struct task_struct *p)
void BPF_STRUCT_OPS(select_cpu_vtime_stopping, struct task_struct *p,
bool runnable)
{
- p->scx.dsq_vtime += (SCX_SLICE_DFL - p->scx.slice) * 100 / p->scx.weight;
+ u64 delta = (SCX_SLICE_DFL - p->scx.slice) * 100 / p->scx.weight;
+
+ if (bpf_ksym_exists(scx_bpf_task_set_dsq_vtime___new))
+ scx_bpf_task_set_dsq_vtime___new(p, p->scx.dsq_vtime + delta);
+ else
+ p->scx.dsq_vtime += delta;
+
}
void BPF_STRUCT_OPS(select_cpu_vtime_enable, struct task_struct *p)
{
- p->scx.dsq_vtime = vtime_now;
+ if (bpf_ksym_exists(scx_bpf_task_set_dsq_vtime___new))
+ scx_bpf_task_set_dsq_vtime___new(p, vtime_now);
+ else
+ p->scx.dsq_vtime = vtime_now;
}
s32 BPF_STRUCT_OPS_SLEEPABLE(select_cpu_vtime_init)
--
2.48.1
* [PATCH 2/2] sched_ext: Update demo schedulers and selftests to drop ops.cpu_acquire/release()
2026-03-12 4:19 [PATCH 0/2] sched_ext: Update demo schedulers and selftests for deprecated APIs Cheng-Yang Chou
2026-03-12 4:20 ` [PATCH 1/2] sched_ext: Update demo schedulers and selftests to use scx_bpf_task_set_dsq_vtime() Cheng-Yang Chou
@ 2026-03-12 4:20 ` Cheng-Yang Chou
2026-03-12 7:40 ` Andrea Righi
From: Cheng-Yang Chou @ 2026-03-12 4:20 UTC
To: sched-ext; +Cc: tj, void, arighi, changwoo, jserv, yphbchou0911
ops.cpu_acquire() and ops.cpu_release() are deprecated in favor of
handling CPU preemption via the sched_switch tracepoint. Update scx_qmap
and the maximal selftest to use the new approach.
In scx_qmap, remove the cpu_release fallback and the
__COMPAT_scx_bpf_reenqueue_local_from_anywhere() compat guard from
qmap_sched_switch(), unconditionally handling preemption via the TP.
In the maximal selftest, replace the cpu_acquire and cpu_release stubs
with a minimal sched_switch TP program.
Signed-off-by: Cheng-Yang Chou <yphbchou0911@gmail.com>
---
tools/sched_ext/scx_qmap.bpf.c | 15 ++-------------
tools/testing/selftests/sched_ext/maximal.bpf.c | 15 ++++++---------
2 files changed, 8 insertions(+), 22 deletions(-)
diff --git a/tools/sched_ext/scx_qmap.bpf.c b/tools/sched_ext/scx_qmap.bpf.c
index a4a1b84fe359..a11e27c8de77 100644
--- a/tools/sched_ext/scx_qmap.bpf.c
+++ b/tools/sched_ext/scx_qmap.bpf.c
@@ -11,8 +11,8 @@
*
* - BPF-side queueing using PIDs.
* - Sleepable per-task storage allocation using ops.prep_enable().
- * - Using ops.cpu_release() to handle a higher priority scheduling class taking
- * the CPU away.
+ * - Using the sched_switch tracepoint to handle a higher priority scheduling
+ * class taking the CPU away.
* - Core-sched support.
*
* This scheduler is primarily for demonstration and testing of sched_ext
@@ -562,9 +562,6 @@ SEC("tp_btf/sched_switch")
int BPF_PROG(qmap_sched_switch, bool preempt, struct task_struct *prev,
struct task_struct *next, unsigned long prev_state)
{
- if (!__COMPAT_scx_bpf_reenqueue_local_from_anywhere())
- return 0;
-
/*
* If @cpu is taken by a higher priority scheduling class, it is no
* longer available for executing sched_ext tasks. As we don't want the
@@ -586,13 +583,6 @@ int BPF_PROG(qmap_sched_switch, bool preempt, struct task_struct *prev,
return 0;
}
-void BPF_STRUCT_OPS(qmap_cpu_release, s32 cpu, struct scx_cpu_release_args *args)
-{
- /* see qmap_sched_switch() to learn how to do this on newer kernels */
- if (!__COMPAT_scx_bpf_reenqueue_local_from_anywhere())
- scx_bpf_reenqueue_local();
-}
-
s32 BPF_STRUCT_OPS(qmap_init_task, struct task_struct *p,
struct scx_init_task_args *args)
{
@@ -999,7 +989,6 @@ SCX_OPS_DEFINE(qmap_ops,
.dispatch = (void *)qmap_dispatch,
.tick = (void *)qmap_tick,
.core_sched_before = (void *)qmap_core_sched_before,
- .cpu_release = (void *)qmap_cpu_release,
.init_task = (void *)qmap_init_task,
.dump = (void *)qmap_dump,
.dump_cpu = (void *)qmap_dump_cpu,
diff --git a/tools/testing/selftests/sched_ext/maximal.bpf.c b/tools/testing/selftests/sched_ext/maximal.bpf.c
index 01cf4f3da4e0..5858f64313e9 100644
--- a/tools/testing/selftests/sched_ext/maximal.bpf.c
+++ b/tools/testing/selftests/sched_ext/maximal.bpf.c
@@ -67,13 +67,12 @@ void BPF_STRUCT_OPS(maximal_set_cpumask, struct task_struct *p,
void BPF_STRUCT_OPS(maximal_update_idle, s32 cpu, bool idle)
{}
-void BPF_STRUCT_OPS(maximal_cpu_acquire, s32 cpu,
- struct scx_cpu_acquire_args *args)
-{}
-
-void BPF_STRUCT_OPS(maximal_cpu_release, s32 cpu,
- struct scx_cpu_release_args *args)
-{}
+SEC("tp_btf/sched_switch")
+int BPF_PROG(maximal_sched_switch, bool preempt, struct task_struct *prev,
+ struct task_struct *next, unsigned long prev_state)
+{
+ return 0;
+}
void BPF_STRUCT_OPS(maximal_cpu_online, s32 cpu)
{}
@@ -150,8 +149,6 @@ struct sched_ext_ops maximal_ops = {
.set_weight = (void *) maximal_set_weight,
.set_cpumask = (void *) maximal_set_cpumask,
.update_idle = (void *) maximal_update_idle,
- .cpu_acquire = (void *) maximal_cpu_acquire,
- .cpu_release = (void *) maximal_cpu_release,
.cpu_online = (void *) maximal_cpu_online,
.cpu_offline = (void *) maximal_cpu_offline,
.init_task = (void *) maximal_init_task,
--
2.48.1
* Re: [PATCH 1/2] sched_ext: Update demo schedulers and selftests to use scx_bpf_task_set_dsq_vtime()
2026-03-12 4:20 ` [PATCH 1/2] sched_ext: Update demo schedulers and selftests to use scx_bpf_task_set_dsq_vtime() Cheng-Yang Chou
@ 2026-03-12 6:41 ` Andrea Righi
From: Andrea Righi @ 2026-03-12 6:41 UTC
To: Cheng-Yang Chou; +Cc: sched-ext, tj, void, changwoo, jserv
Hi Cheng-Yang,
On Thu, Mar 12, 2026 at 12:20:00PM +0800, Cheng-Yang Chou wrote:
> Direct writes to p->scx.dsq_vtime are deprecated in favor of
> scx_bpf_task_set_dsq_vtime(). Update scx_simple, scx_flatcg, and the
> select_cpu_vtime selftest to use the new kfunc.
>
> Use bpf_ksym_exists() to fall back to direct assignment on older kernels
> that don't have the new kfunc, preserving backwards compatibility.
>
> Signed-off-by: Cheng-Yang Chou <yphbchou0911@gmail.com>
> ---
> tools/sched_ext/scx_flatcg.bpf.c | 21 ++++++++++++++-----
> tools/sched_ext/scx_simple.bpf.c | 12 +++++++++--
> .../sched_ext/select_cpu_vtime.bpf.c | 13 ++++++++++--
> 3 files changed, 37 insertions(+), 9 deletions(-)
>
> diff --git a/tools/sched_ext/scx_flatcg.bpf.c b/tools/sched_ext/scx_flatcg.bpf.c
> index a8a9234bb41e..27d99bb1e60f 100644
> --- a/tools/sched_ext/scx_flatcg.bpf.c
> +++ b/tools/sched_ext/scx_flatcg.bpf.c
> @@ -551,9 +551,14 @@ void BPF_STRUCT_OPS(fcg_stopping, struct task_struct *p, bool runnable)
> * too much, determine the execution time by taking explicit timestamps
> * instead of depending on @p->scx.slice.
> */
> - if (!fifo_sched)
> - p->scx.dsq_vtime +=
> - (SCX_SLICE_DFL - p->scx.slice) * 100 / p->scx.weight;
> + u64 delta = (SCX_SLICE_DFL - p->scx.slice) * 100 / p->scx.weight;
While at it, can we use:
u64 delta = scale_by_task_weight(p, SCX_SLICE_DFL - p->scx.slice);
> +
> + if (!fifo_sched) {
> + if (bpf_ksym_exists(scx_bpf_task_set_dsq_vtime___new))
> + scx_bpf_task_set_dsq_vtime___new(p, p->scx.dsq_vtime + delta);
> + else
> + p->scx.dsq_vtime += delta;
> + }
No, let's just use:
scx_bpf_task_set_dsq_vtime(p, p->scx.dsq_vtime + delta);
It's already doing the bpf_ksym_exists() logic internally, see
tools/sched_ext/include/scx/compat.bpf.h.
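Roughly, the wrapper there amounts to something like this (simplified
sketch, the header has the exact definition):

	static inline void scx_bpf_task_set_dsq_vtime(struct task_struct *p,
						      u64 vtime)
	{
		if (bpf_ksym_exists(scx_bpf_task_set_dsq_vtime___new))
			scx_bpf_task_set_dsq_vtime___new(p, vtime);
		else
			p->scx.dsq_vtime = vtime;
	}

so callers don't need to open-code the fallback.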
>
> taskc = bpf_task_storage_get(&task_ctx, p, 0, 0);
> if (!taskc) {
> @@ -822,7 +827,10 @@ s32 BPF_STRUCT_OPS(fcg_init_task, struct task_struct *p,
> if (!(cgc = find_cgrp_ctx(args->cgroup)))
> return -ENOENT;
>
> - p->scx.dsq_vtime = cgc->tvtime_now;
> + if (bpf_ksym_exists(scx_bpf_task_set_dsq_vtime___new))
> + scx_bpf_task_set_dsq_vtime___new(p, cgc->tvtime_now);
> + else
> + p->scx.dsq_vtime = cgc->tvtime_now;
Ditto.
>
> return 0;
> }
> @@ -924,7 +932,10 @@ void BPF_STRUCT_OPS(fcg_cgroup_move, struct task_struct *p,
> return;
>
> delta = time_delta(p->scx.dsq_vtime, from_cgc->tvtime_now);
> - p->scx.dsq_vtime = to_cgc->tvtime_now + delta;
> + if (bpf_ksym_exists(scx_bpf_task_set_dsq_vtime___new))
> + scx_bpf_task_set_dsq_vtime___new(p, to_cgc->tvtime_now + delta);
> + else
> + p->scx.dsq_vtime = to_cgc->tvtime_now + delta;
Ditto.
> }
>
> s32 BPF_STRUCT_OPS_SLEEPABLE(fcg_init)
> diff --git a/tools/sched_ext/scx_simple.bpf.c b/tools/sched_ext/scx_simple.bpf.c
> index b456bd7cae77..61d3bcf54ce7 100644
> --- a/tools/sched_ext/scx_simple.bpf.c
> +++ b/tools/sched_ext/scx_simple.bpf.c
> @@ -121,12 +121,20 @@ void BPF_STRUCT_OPS(simple_stopping, struct task_struct *p, bool runnable)
> * too much, determine the execution time by taking explicit timestamps
> * instead of depending on @p->scx.slice.
> */
> - p->scx.dsq_vtime += (SCX_SLICE_DFL - p->scx.slice) * 100 / p->scx.weight;
> + u64 delta = (SCX_SLICE_DFL - p->scx.slice) * 100 / p->scx.weight;
> +
> + if (bpf_ksym_exists(scx_bpf_task_set_dsq_vtime___new))
> + scx_bpf_task_set_dsq_vtime___new(p, p->scx.dsq_vtime + delta);
> + else
> + p->scx.dsq_vtime += delta;
Ditto.
> }
>
> void BPF_STRUCT_OPS(simple_enable, struct task_struct *p)
> {
> - p->scx.dsq_vtime = vtime_now;
> + if (bpf_ksym_exists(scx_bpf_task_set_dsq_vtime___new))
> + scx_bpf_task_set_dsq_vtime___new(p, vtime_now);
> + else
> + p->scx.dsq_vtime = vtime_now;
Ditto.
> }
>
> s32 BPF_STRUCT_OPS_SLEEPABLE(simple_init)
> diff --git a/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
> index bfcb96cd4954..17ed515c3e25 100644
> --- a/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
> +++ b/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
> @@ -66,12 +66,21 @@ void BPF_STRUCT_OPS(select_cpu_vtime_running, struct task_struct *p)
> void BPF_STRUCT_OPS(select_cpu_vtime_stopping, struct task_struct *p,
> bool runnable)
> {
> - p->scx.dsq_vtime += (SCX_SLICE_DFL - p->scx.slice) * 100 / p->scx.weight;
> + u64 delta = (SCX_SLICE_DFL - p->scx.slice) * 100 / p->scx.weight;
Ditto (scale_by_task_weight).
> +
> + if (bpf_ksym_exists(scx_bpf_task_set_dsq_vtime___new))
> + scx_bpf_task_set_dsq_vtime___new(p, p->scx.dsq_vtime + delta);
> + else
> + p->scx.dsq_vtime += delta;
> +
Ditto.
> }
>
> void BPF_STRUCT_OPS(select_cpu_vtime_enable, struct task_struct *p)
> {
> - p->scx.dsq_vtime = vtime_now;
> + if (bpf_ksym_exists(scx_bpf_task_set_dsq_vtime___new))
> + scx_bpf_task_set_dsq_vtime___new(p, vtime_now);
> + else
> + p->scx.dsq_vtime = vtime_now;
Ditto.
> }
>
> s32 BPF_STRUCT_OPS_SLEEPABLE(select_cpu_vtime_init)
> --
> 2.48.1
>
Thanks,
-Andrea
* Re: [PATCH 2/2] sched_ext: Update demo schedulers and selftests to drop ops.cpu_acquire/release()
2026-03-12 4:20 ` [PATCH 2/2] sched_ext: Update demo schedulers and selftests to drop ops.cpu_acquire/release() Cheng-Yang Chou
@ 2026-03-12 7:40 ` Andrea Righi
From: Andrea Righi @ 2026-03-12 7:40 UTC
To: Cheng-Yang Chou; +Cc: sched-ext, tj, void, changwoo, jserv
Hi Cheng-Yang,
On Thu, Mar 12, 2026 at 12:20:01PM +0800, Cheng-Yang Chou wrote:
> ops.cpu_acquire() and ops.cpu_release() are deprecated in favor of
> handling CPU preemption via the sched_switch tracepoint. Update scx_qmap
> and the maximal selftest to use the new approach.
We could mention that commit a3f5d4822253 ("sched_ext: Allow
scx_bpf_reenqueue_local() to be called from anywhere") is deprecating
ops.cpu_acquire/release().
>
> In scx_qmap, remove the cpu_release fallback and the
> __COMPAT_scx_bpf_reenqueue_local_from_anywhere() compat guard from
> qmap_sched_switch(), unconditionally handling preemption via the TP.
>
> In the maximal selftest, replace the cpu_acquire and cpu_release stubs
> with a minimal sched_switch TP program.
>
> Signed-off-by: Cheng-Yang Chou <yphbchou0911@gmail.com>
> ---
> tools/sched_ext/scx_qmap.bpf.c | 15 ++-------------
> tools/testing/selftests/sched_ext/maximal.bpf.c | 15 ++++++---------
> 2 files changed, 8 insertions(+), 22 deletions(-)
>
> diff --git a/tools/sched_ext/scx_qmap.bpf.c b/tools/sched_ext/scx_qmap.bpf.c
> index a4a1b84fe359..a11e27c8de77 100644
> --- a/tools/sched_ext/scx_qmap.bpf.c
> +++ b/tools/sched_ext/scx_qmap.bpf.c
> @@ -11,8 +11,8 @@
> *
> * - BPF-side queueing using PIDs.
> * - Sleepable per-task storage allocation using ops.prep_enable().
> - * - Using ops.cpu_release() to handle a higher priority scheduling class taking
> - * the CPU away.
> + * - Using the sched_switch tracepoint to handle a higher priority scheduling
> + * class taking the CPU away.
> * - Core-sched support.
> *
> * This scheduler is primarily for demonstration and testing of sched_ext
> @@ -562,9 +562,6 @@ SEC("tp_btf/sched_switch")
> int BPF_PROG(qmap_sched_switch, bool preempt, struct task_struct *prev,
> struct task_struct *next, unsigned long prev_state)
> {
> - if (!__COMPAT_scx_bpf_reenqueue_local_from_anywhere())
> - return 0;
> -
> /*
> * If @cpu is taken by a higher priority scheduling class, it is no
> * longer available for executing sched_ext tasks. As we don't want the
> @@ -586,13 +583,6 @@ int BPF_PROG(qmap_sched_switch, bool preempt, struct task_struct *prev,
> return 0;
> }
>
> -void BPF_STRUCT_OPS(qmap_cpu_release, s32 cpu, struct scx_cpu_release_args *args)
> -{
> - /* see qmap_sched_switch() to learn how to do this on newer kernels */
> - if (!__COMPAT_scx_bpf_reenqueue_local_from_anywhere())
> - scx_bpf_reenqueue_local();
> -}
> -
> s32 BPF_STRUCT_OPS(qmap_init_task, struct task_struct *p,
> struct scx_init_task_args *args)
> {
> @@ -999,7 +989,6 @@ SCX_OPS_DEFINE(qmap_ops,
> .dispatch = (void *)qmap_dispatch,
> .tick = (void *)qmap_tick,
> .core_sched_before = (void *)qmap_core_sched_before,
> - .cpu_release = (void *)qmap_cpu_release,
> .init_task = (void *)qmap_init_task,
> .dump = (void *)qmap_dump,
> .dump_cpu = (void *)qmap_dump_cpu,
> diff --git a/tools/testing/selftests/sched_ext/maximal.bpf.c b/tools/testing/selftests/sched_ext/maximal.bpf.c
> index 01cf4f3da4e0..5858f64313e9 100644
> --- a/tools/testing/selftests/sched_ext/maximal.bpf.c
> +++ b/tools/testing/selftests/sched_ext/maximal.bpf.c
> @@ -67,13 +67,12 @@ void BPF_STRUCT_OPS(maximal_set_cpumask, struct task_struct *p,
> void BPF_STRUCT_OPS(maximal_update_idle, s32 cpu, bool idle)
> {}
>
> -void BPF_STRUCT_OPS(maximal_cpu_acquire, s32 cpu,
> - struct scx_cpu_acquire_args *args)
> -{}
> -
> -void BPF_STRUCT_OPS(maximal_cpu_release, s32 cpu,
> - struct scx_cpu_release_args *args)
> -{}
> +SEC("tp_btf/sched_switch")
> +int BPF_PROG(maximal_sched_switch, bool preempt, struct task_struct *prev,
> + struct task_struct *next, unsigned long prev_state)
> +{
> + return 0;
> +}
I think this tracepoint is never attached. You can verify it by adding a
bpf_printk("hello") here and checking whether you see any message in
/sys/kernel/tracing/trace_pipe.
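For example (debug-only, not meant for the final patch):

	SEC("tp_btf/sched_switch")
	int BPF_PROG(maximal_sched_switch, bool preempt, struct task_struct *prev,
		     struct task_struct *next, unsigned long prev_state)
	{
		/* temporary: confirm the program actually runs */
		bpf_printk("maximal: sched_switch fired");
		return 0;
	}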
To attach this TP you need to do something like this:
diff --git a/tools/testing/selftests/sched_ext/maximal.c b/tools/testing/selftests/sched_ext/maximal.c
index c6be50a9941d5..1dc3692246705 100644
--- a/tools/testing/selftests/sched_ext/maximal.c
+++ b/tools/testing/selftests/sched_ext/maximal.c
@@ -19,6 +19,9 @@ static enum scx_test_status setup(void **ctx)
SCX_ENUM_INIT(skel);
SCX_FAIL_IF(maximal__load(skel), "Failed to load skel");
+ bpf_map__set_autoattach(skel->maps.maximal_ops, false);
+ SCX_FAIL_IF(maximal__attach(skel), "Failed to attach skel");
+
*ctx = skel;
return SCX_TEST_PASS;
diff --git a/tools/testing/selftests/sched_ext/reload_loop.c b/tools/testing/selftests/sched_ext/reload_loop.c
index 308211d804364..49297b83d748d 100644
--- a/tools/testing/selftests/sched_ext/reload_loop.c
+++ b/tools/testing/selftests/sched_ext/reload_loop.c
@@ -23,6 +23,9 @@ static enum scx_test_status setup(void **ctx)
SCX_ENUM_INIT(skel);
SCX_FAIL_IF(maximal__load(skel), "Failed to load skel");
+ bpf_map__set_autoattach(skel->maps.maximal_ops, false);
+ SCX_FAIL_IF(maximal__attach(skel), "Failed to attach skel");
+
return SCX_TEST_PASS;
}
>
> void BPF_STRUCT_OPS(maximal_cpu_online, s32 cpu)
> {}
> @@ -150,8 +149,6 @@ struct sched_ext_ops maximal_ops = {
> .set_weight = (void *) maximal_set_weight,
> .set_cpumask = (void *) maximal_set_cpumask,
> .update_idle = (void *) maximal_update_idle,
> - .cpu_acquire = (void *) maximal_cpu_acquire,
> - .cpu_release = (void *) maximal_cpu_release,
> .cpu_online = (void *) maximal_cpu_online,
> .cpu_offline = (void *) maximal_cpu_offline,
> .init_task = (void *) maximal_init_task,
> --
> 2.48.1
>
Thanks,
-Andrea