* [PATCH 1/6] sched_ext: idle: Extend topology optimizations to all tasks
2025-03-21 22:10 [PATCHSET v6 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs Andrea Righi
@ 2025-03-21 22:10 ` Andrea Righi
2025-03-21 22:10 ` [PATCH 2/6] sched_ext: idle: Explicitly pass allowed cpumask to scx_select_cpu_dfl() Andrea Righi
` (5 subsequent siblings)
6 siblings, 0 replies; 19+ messages in thread
From: Andrea Righi @ 2025-03-21 22:10 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min; +Cc: Joel Fernandes, linux-kernel
The built-in idle selection policy, scx_select_cpu_dfl(), always
prioritizes picking idle CPUs within the same LLC or NUMA node, but
these optimizations are currently applied only when a task has no CPU
affinity constraints.
This is done primarily for efficiency, as it avoids the overhead of
updating a cpumask every time we need to select an idle CPU (which can
be costly in large SMP systems).
However, this approach limits the effectiveness of the built-in idle
policy and results in inconsistent behavior, as affinity-restricted
tasks don't benefit from topology-aware optimizations.
To address this, modify the policy to apply LLC and NUMA-aware
optimizations even when a task is constrained to a subset of CPUs.
We can still avoid updating the cpumasks by checking whether the LLC and
node CPU subsets are contained in the set of allowed CPUs usable by the
task (which is true in most cases, i.e. for tasks that don't have
affinity constraints).
Moreover, use temporary local per-CPU cpumasks to determine the LLC and
node subsets, minimizing potential overhead even on large SMP systems.
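The containment check can be modeled in plain userspace C (a minimal
sketch that treats cpumasks as 64-bit masks; the helper name and
signature are illustrative, not the kernel API):

```c
#include <stdint.h>

typedef uint64_t cpumask_t; /* toy stand-in for struct cpumask */

/*
 * Return the topology (LLC or node) mask restricted to the CPUs the
 * task is allowed to use, or 0 if the intersection is empty.
 *
 * When the task can run on all possible CPUs, the topology mask is
 * returned as-is, avoiding the cost of computing the intersection
 * (this mirrors the task_affinity_all() fast path).
 */
static cpumask_t topo_subset(cpumask_t topo, cpumask_t allowed,
			     unsigned int nr_allowed,
			     unsigned int nr_possible)
{
	if (nr_allowed >= nr_possible)	/* task_affinity_all() */
		return topo;
	return topo & allowed;		/* cpumask_and() into a per-CPU buffer */
}
```

In the kernel the intersection result lands in the temporary per-CPU
cpumask rather than a returned value, but the decision logic is the same.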
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
kernel/sched/ext_idle.c | 72 ++++++++++++++++++++++++++++-------------
1 file changed, 49 insertions(+), 23 deletions(-)
diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
index 52c36a70a3d04..9c36f7719fcf9 100644
--- a/kernel/sched/ext_idle.c
+++ b/kernel/sched/ext_idle.c
@@ -46,6 +46,12 @@ static struct scx_idle_cpus scx_idle_global_masks;
*/
static struct scx_idle_cpus **scx_idle_node_masks;
+/*
+ * Local per-CPU cpumasks (used to generate temporary idle cpumasks).
+ */
+static DEFINE_PER_CPU(cpumask_var_t, local_llc_idle_cpumask);
+static DEFINE_PER_CPU(cpumask_var_t, local_numa_idle_cpumask);
+
/*
* Return the idle masks associated to a target @node.
*
@@ -391,6 +397,14 @@ void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops)
static_branch_disable_cpuslocked(&scx_selcpu_topo_numa);
}
+/*
+ * Return true if @p can run on all possible CPUs, false otherwise.
+ */
+static inline bool task_affinity_all(const struct task_struct *p)
+{
+ return p->nr_cpus_allowed >= num_possible_cpus();
+}
+
/*
* Built-in CPU idle selection policy:
*
@@ -426,8 +440,7 @@ void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops)
*/
s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64 flags)
{
- const struct cpumask *llc_cpus = NULL;
- const struct cpumask *numa_cpus = NULL;
+ const struct cpumask *llc_cpus = NULL, *numa_cpus = NULL;
int node = scx_cpu_node_if_enabled(prev_cpu);
s32 cpu;
@@ -437,22 +450,27 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
rcu_read_lock();
/*
- * Determine the scheduling domain only if the task is allowed to run
- * on all CPUs.
- *
- * This is done primarily for efficiency, as it avoids the overhead of
- * updating a cpumask every time we need to select an idle CPU (which
- * can be costly in large SMP systems), but it also aligns logically:
- * if a task's scheduling domain is restricted by user-space (through
- * CPU affinity), the task will simply use the flat scheduling domain
- * defined by user-space.
+ * Determine the subset of CPUs that the task can use in its
+ * current LLC and node.
*/
- if (p->nr_cpus_allowed >= num_possible_cpus()) {
- if (static_branch_maybe(CONFIG_NUMA, &scx_selcpu_topo_numa))
- numa_cpus = numa_span(prev_cpu);
+ if (static_branch_maybe(CONFIG_NUMA, &scx_selcpu_topo_numa)) {
+ struct cpumask *local_cpus = this_cpu_cpumask_var_ptr(local_numa_idle_cpumask);
+ const struct cpumask *cpus = numa_span(prev_cpu);
+
+ if (task_affinity_all(p))
+ numa_cpus = cpus;
+ else if (cpus && cpumask_and(local_cpus, p->cpus_ptr, cpus))
+ numa_cpus = local_cpus;
+ }
- if (static_branch_maybe(CONFIG_SCHED_MC, &scx_selcpu_topo_llc))
- llc_cpus = llc_span(prev_cpu);
+ if (static_branch_maybe(CONFIG_SCHED_MC, &scx_selcpu_topo_llc)) {
+ struct cpumask *local_cpus = this_cpu_cpumask_var_ptr(local_llc_idle_cpumask);
+ const struct cpumask *cpus = llc_span(prev_cpu);
+
+ if (task_affinity_all(p))
+ llc_cpus = cpus;
+ else if (cpus && cpumask_and(local_cpus, p->cpus_ptr, cpus))
+ llc_cpus = local_cpus;
}
/*
@@ -598,7 +616,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
*/
void scx_idle_init_masks(void)
{
- int node;
+ int i;
/* Allocate global idle cpumasks */
BUG_ON(!alloc_cpumask_var(&scx_idle_global_masks.cpu, GFP_KERNEL));
@@ -609,13 +627,21 @@ void scx_idle_init_masks(void)
sizeof(*scx_idle_node_masks), GFP_KERNEL);
BUG_ON(!scx_idle_node_masks);
- for_each_node(node) {
- scx_idle_node_masks[node] = kzalloc_node(sizeof(**scx_idle_node_masks),
- GFP_KERNEL, node);
- BUG_ON(!scx_idle_node_masks[node]);
+ for_each_node(i) {
+ scx_idle_node_masks[i] = kzalloc_node(sizeof(**scx_idle_node_masks),
+ GFP_KERNEL, i);
+ BUG_ON(!scx_idle_node_masks[i]);
+
+ BUG_ON(!alloc_cpumask_var_node(&scx_idle_node_masks[i]->cpu, GFP_KERNEL, i));
+ BUG_ON(!alloc_cpumask_var_node(&scx_idle_node_masks[i]->smt, GFP_KERNEL, i));
+ }
- BUG_ON(!alloc_cpumask_var_node(&scx_idle_node_masks[node]->cpu, GFP_KERNEL, node));
- BUG_ON(!alloc_cpumask_var_node(&scx_idle_node_masks[node]->smt, GFP_KERNEL, node));
+ /* Allocate local per-cpu idle cpumasks */
+ for_each_possible_cpu(i) {
+ BUG_ON(!alloc_cpumask_var_node(&per_cpu(local_llc_idle_cpumask, i),
+ GFP_KERNEL, cpu_to_node(i)));
+ BUG_ON(!alloc_cpumask_var_node(&per_cpu(local_numa_idle_cpumask, i),
+ GFP_KERNEL, cpu_to_node(i)));
}
}
--
2.48.1
* [PATCH 2/6] sched_ext: idle: Explicitly pass allowed cpumask to scx_select_cpu_dfl()
2025-03-21 22:10 [PATCHSET v6 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs Andrea Righi
2025-03-21 22:10 ` [PATCH 1/6] sched_ext: idle: Extend topology optimizations to all tasks Andrea Righi
@ 2025-03-21 22:10 ` Andrea Righi
2025-03-31 21:50 ` Tejun Heo
2025-03-21 22:10 ` [PATCH 3/6] sched_ext: idle: Accept an arbitrary cpumask in scx_select_cpu_dfl() Andrea Righi
` (4 subsequent siblings)
6 siblings, 1 reply; 19+ messages in thread
From: Andrea Righi @ 2025-03-21 22:10 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min; +Cc: Joel Fernandes, linux-kernel
Modify scx_select_cpu_dfl() to take the allowed cpumask as an explicit
argument, instead of implicitly using @p->cpus_ptr.
This prepares for future changes where arbitrary cpumasks may be passed
to the built-in idle CPU selection policy.
This is a pure refactoring with no functional changes.
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
kernel/sched/ext.c | 2 +-
kernel/sched/ext_idle.c | 19 ++++++++++---------
kernel/sched/ext_idle.h | 3 ++-
3 files changed, 13 insertions(+), 11 deletions(-)
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 06561d6717c9a..f42352e8d889e 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -3395,7 +3395,7 @@ static int select_task_rq_scx(struct task_struct *p, int prev_cpu, int wake_flag
} else {
s32 cpu;
- cpu = scx_select_cpu_dfl(p, prev_cpu, wake_flags, 0);
+ cpu = scx_select_cpu_dfl(p, prev_cpu, wake_flags, p->cpus_ptr, 0);
if (cpu >= 0) {
p->scx.slice = SCX_SLICE_DFL;
p->scx.ddsp_dsq_id = SCX_DSQ_LOCAL;
diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
index 9c36f7719fcf9..2dcd758681170 100644
--- a/kernel/sched/ext_idle.c
+++ b/kernel/sched/ext_idle.c
@@ -438,7 +438,8 @@ static inline bool task_affinity_all(const struct task_struct *p)
* NOTE: tasks that can only run on 1 CPU are excluded by this logic, because
* we never call ops.select_cpu() for them, see select_task_rq().
*/
-s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64 flags)
+s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
+ const struct cpumask *cpus_allowed, u64 flags)
{
const struct cpumask *llc_cpus = NULL, *numa_cpus = NULL;
int node = scx_cpu_node_if_enabled(prev_cpu);
@@ -457,9 +458,9 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
struct cpumask *local_cpus = this_cpu_cpumask_var_ptr(local_numa_idle_cpumask);
const struct cpumask *cpus = numa_span(prev_cpu);
- if (task_affinity_all(p))
+ if (cpus_allowed == p->cpus_ptr && task_affinity_all(p))
numa_cpus = cpus;
- else if (cpus && cpumask_and(local_cpus, p->cpus_ptr, cpus))
+ else if (cpus && cpumask_and(local_cpus, cpus_allowed, cpus))
numa_cpus = local_cpus;
}
@@ -467,9 +468,9 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
struct cpumask *local_cpus = this_cpu_cpumask_var_ptr(local_llc_idle_cpumask);
const struct cpumask *cpus = llc_span(prev_cpu);
- if (task_affinity_all(p))
+ if (cpus_allowed == p->cpus_ptr && task_affinity_all(p))
llc_cpus = cpus;
- else if (cpus && cpumask_and(local_cpus, p->cpus_ptr, cpus))
+ else if (cpus && cpumask_and(local_cpus, cpus_allowed, cpus))
llc_cpus = local_cpus;
}
@@ -508,7 +509,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
cpu_rq(cpu)->scx.local_dsq.nr == 0 &&
(!(flags & SCX_PICK_IDLE_IN_NODE) || (waker_node == node)) &&
!cpumask_empty(idle_cpumask(waker_node)->cpu)) {
- if (cpumask_test_cpu(cpu, p->cpus_ptr))
+ if (cpumask_test_cpu(cpu, cpus_allowed))
goto out_unlock;
}
}
@@ -553,7 +554,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
* begin in prev_cpu's node and proceed to other nodes in
* order of increasing distance.
*/
- cpu = scx_pick_idle_cpu(p->cpus_ptr, node, flags | SCX_PICK_IDLE_CORE);
+ cpu = scx_pick_idle_cpu(cpus_allowed, node, flags | SCX_PICK_IDLE_CORE);
if (cpu >= 0)
goto out_unlock;
@@ -601,7 +602,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
* in prev_cpu's node and proceed to other nodes in order of
* increasing distance.
*/
- cpu = scx_pick_idle_cpu(p->cpus_ptr, node, flags);
+ cpu = scx_pick_idle_cpu(cpus_allowed, node, flags);
if (cpu >= 0)
goto out_unlock;
@@ -857,7 +858,7 @@ __bpf_kfunc s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
goto prev_cpu;
#ifdef CONFIG_SMP
- cpu = scx_select_cpu_dfl(p, prev_cpu, wake_flags, 0);
+ cpu = scx_select_cpu_dfl(p, prev_cpu, wake_flags, p->cpus_ptr, 0);
if (cpu >= 0) {
*is_idle = true;
return cpu;
diff --git a/kernel/sched/ext_idle.h b/kernel/sched/ext_idle.h
index 511cc2221f7a8..37be78a7502b3 100644
--- a/kernel/sched/ext_idle.h
+++ b/kernel/sched/ext_idle.h
@@ -27,7 +27,8 @@ static inline s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node
}
#endif /* CONFIG_SMP */
-s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64 flags);
+s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
+ const struct cpumask *cpus_allowed, u64 flags);
void scx_idle_enable(struct sched_ext_ops *ops);
void scx_idle_disable(void);
int scx_idle_init(void);
--
2.48.1
* Re: [PATCH 2/6] sched_ext: idle: Explicitly pass allowed cpumask to scx_select_cpu_dfl()
2025-03-21 22:10 ` [PATCH 2/6] sched_ext: idle: Explicitly pass allowed cpumask to scx_select_cpu_dfl() Andrea Righi
@ 2025-03-31 21:50 ` Tejun Heo
2025-04-01 6:21 ` Andrea Righi
0 siblings, 1 reply; 19+ messages in thread
From: Tejun Heo @ 2025-03-31 21:50 UTC (permalink / raw)
To: Andrea Righi; +Cc: David Vernet, Changwoo Min, Joel Fernandes, linux-kernel
Hello,
On Fri, Mar 21, 2025 at 11:10:48PM +0100, Andrea Righi wrote:
...
> +s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
> + const struct cpumask *cpus_allowed, u64 flags)
> {
> const struct cpumask *llc_cpus = NULL, *numa_cpus = NULL;
> int node = scx_cpu_node_if_enabled(prev_cpu);
> @@ -457,9 +458,9 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
> struct cpumask *local_cpus = this_cpu_cpumask_var_ptr(local_numa_idle_cpumask);
> const struct cpumask *cpus = numa_span(prev_cpu);
>
> - if (task_affinity_all(p))
> + if (cpus_allowed == p->cpus_ptr && task_affinity_all(p))
> numa_cpus = cpus;
Note that this test isn't quite correct. While the error isn't introduced by
this patchset, this becomes a lot more prominent with the series.
p->nr_cpus_allowed tracks the number of CPUs in p->cpus_mask. p->cpus_ptr
can point away from p->cpus_mask without updating p->nr_cpus_allowed, so the
condition that should be checked is p->cpus_ptr == &p->cpus_mask &&
p->nr_cpus_allowed == num_possible_cpus().
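The distinction can be sketched with a small userspace model
(illustrative only; the field names mirror struct task_struct, the
rest is a toy):

```c
#include <stdbool.h>
#include <stdint.h>

#define NR_POSSIBLE_CPUS 4

struct task {
	uint64_t cpus_mask;		/* stable affinity mask */
	const uint64_t *cpus_ptr;	/* usually &cpus_mask, but may be
					 * redirected, e.g. while migration
					 * is disabled */
	unsigned int nr_cpus_allowed;	/* tracks cpus_mask only */
};

/* Insufficient: ignores a redirected cpus_ptr */
static bool affinity_all_buggy(const struct task *p)
{
	return p->nr_cpus_allowed >= NR_POSSIBLE_CPUS;
}

/* Correct: cpus_ptr must still point at cpus_mask */
static bool affinity_all_fixed(const struct task *p)
{
	return p->cpus_ptr == &p->cpus_mask &&
	       p->nr_cpus_allowed >= NR_POSSIBLE_CPUS;
}
```

Once cpus_ptr is redirected (without nr_cpus_allowed being updated),
only the second check gives the right answer.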
Thanks.
--
tejun
* Re: [PATCH 2/6] sched_ext: idle: Explicitly pass allowed cpumask to scx_select_cpu_dfl()
2025-03-31 21:50 ` Tejun Heo
@ 2025-04-01 6:21 ` Andrea Righi
0 siblings, 0 replies; 19+ messages in thread
From: Andrea Righi @ 2025-04-01 6:21 UTC (permalink / raw)
To: Tejun Heo; +Cc: David Vernet, Changwoo Min, Joel Fernandes, linux-kernel
On Mon, Mar 31, 2025 at 11:50:27AM -1000, Tejun Heo wrote:
> Hello,
>
> On Fri, Mar 21, 2025 at 11:10:48PM +0100, Andrea Righi wrote:
> ...
> > +s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
> > + const struct cpumask *cpus_allowed, u64 flags)
> > {
> > const struct cpumask *llc_cpus = NULL, *numa_cpus = NULL;
> > int node = scx_cpu_node_if_enabled(prev_cpu);
> > @@ -457,9 +458,9 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
> > struct cpumask *local_cpus = this_cpu_cpumask_var_ptr(local_numa_idle_cpumask);
> > const struct cpumask *cpus = numa_span(prev_cpu);
> >
> > - if (task_affinity_all(p))
> > + if (cpus_allowed == p->cpus_ptr && task_affinity_all(p))
> > numa_cpus = cpus;
>
> Note that this test isn't quite correct. While the error isn't introduced by
> this patchset, this becomes a lot more prominent with the series.
> p->nr_cpus_allowed tracks the number of CPUs in p->cpus_mask. p->cpus_ptr
> can point away from p->cpus_mask without updating p->nr_cpus_allowed, so the
> condition that should be checked is p->cpus_ptr == &p->cpus_mask &&
> p->nr_cpus_allowed == num_possible_cpus().
Thanks for pointing this out. Considering that, it's more clear (and less
bug prone) to just use NULL when the caller doesn't want to specify an
additional cpumask. Will change it in the next version.
Thanks,
-Andrea
* [PATCH 3/6] sched_ext: idle: Accept an arbitrary cpumask in scx_select_cpu_dfl()
2025-03-21 22:10 [PATCHSET v6 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs Andrea Righi
2025-03-21 22:10 ` [PATCH 1/6] sched_ext: idle: Extend topology optimizations to all tasks Andrea Righi
2025-03-21 22:10 ` [PATCH 2/6] sched_ext: idle: Explicitly pass allowed cpumask to scx_select_cpu_dfl() Andrea Righi
@ 2025-03-21 22:10 ` Andrea Righi
2025-03-31 21:56 ` Tejun Heo
2025-03-21 22:10 ` [PATCH 4/6] sched_ext: idle: Introduce scx_bpf_select_cpu_and() Andrea Righi
` (3 subsequent siblings)
6 siblings, 1 reply; 19+ messages in thread
From: Andrea Righi @ 2025-03-21 22:10 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min; +Cc: Joel Fernandes, linux-kernel
Many scx schedulers implement their own hard or soft-affinity rules
to support topology characteristics, such as heterogeneous architectures
(e.g., big.LITTLE, P-cores/E-cores), or to categorize tasks based on
specific properties (e.g., running certain tasks only in a subset of
CPUs).
Currently, there is no mechanism that allows applying the built-in idle
CPU selection policy to an arbitrary subset of CPUs. As a result,
schedulers often implement their own idle CPU selection policies, which
are typically similar to one another, leading to a lot of code
duplication.
To address this, modify scx_select_cpu_dfl() to accept an arbitrary
cpumask that BPF schedulers can use to apply the existing built-in idle
CPU selection policy to a subset of allowed CPUs.
With this change, the idle CPU selection policy becomes the following:
- always prioritize CPUs from fully idle SMT cores (if SMT is enabled),
- select the same CPU if it's idle and in the allowed CPUs,
- select an idle CPU within the same LLC, if the LLC cpumask is a
subset of the allowed CPUs,
- select an idle CPU within the same node, if the node cpumask is a
subset of the allowed CPUs,
- select an idle CPU within the allowed CPUs.
This functionality will be exposed through a dedicated kfunc in a
separate patch.
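The resulting selection cascade can be sketched as a toy userspace
model (cpumasks as 64-bit masks; the SMT-core preference and
wake-sync heuristics of the real policy are omitted, and the helper
names are illustrative):

```c
#include <stdint.h>

typedef uint64_t cpumask_t;

static int first_cpu(cpumask_t m)
{
	return m ? __builtin_ctzll(m) : -1;
}

/*
 * Toy version of the cascade: the previous CPU if idle, then an idle
 * CPU in the same LLC, then in the same node, then anywhere in the
 * allowed set.
 */
static int pick_idle(int prev_cpu, cpumask_t allowed, cpumask_t llc,
		     cpumask_t node, cpumask_t idle)
{
	cpumask_t usable = idle & allowed;

	if (usable & (1ULL << prev_cpu))
		return prev_cpu;
	if (first_cpu(usable & llc) >= 0)
		return first_cpu(usable & llc);
	if (first_cpu(usable & node) >= 0)
		return first_cpu(usable & node);
	return first_cpu(usable);	/* -EBUSY in the kernel when empty */
}
```

Note that in the real implementation the LLC and node masks only
participate after being intersected with the allowed set, exactly as
in this sketch.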
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
kernel/sched/ext_idle.c | 68 +++++++++++++++++++++++++++++++++--------
1 file changed, 56 insertions(+), 12 deletions(-)
diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
index 2dcd758681170..faed4f89f95e9 100644
--- a/kernel/sched/ext_idle.c
+++ b/kernel/sched/ext_idle.c
@@ -49,6 +49,7 @@ static struct scx_idle_cpus **scx_idle_node_masks;
/*
* Local per-CPU cpumasks (used to generate temporary idle cpumasks).
*/
+static DEFINE_PER_CPU(cpumask_var_t, local_idle_cpumask);
static DEFINE_PER_CPU(cpumask_var_t, local_llc_idle_cpumask);
static DEFINE_PER_CPU(cpumask_var_t, local_numa_idle_cpumask);
@@ -417,13 +418,15 @@ static inline bool task_affinity_all(const struct task_struct *p)
* branch prediction optimizations.
*
* 3. Pick a CPU within the same LLC (Last-Level Cache):
- * - if the above conditions aren't met, pick a CPU that shares the same LLC
- * to maintain cache locality.
+ * - if the above conditions aren't met, pick a CPU that shares the same
+ * LLC, if the LLC domain is a subset of @cpus_allowed, to maintain
+ * cache locality.
*
* 4. Pick a CPU within the same NUMA node, if enabled:
- * - choose a CPU from the same NUMA node to reduce memory access latency.
+ * - choose a CPU from the same NUMA node, if the node cpumask is a
+ * subset of @cpus_allowed, to reduce memory access latency.
*
- * 5. Pick any idle CPU usable by the task.
+ * 5. Pick any idle CPU within the @cpus_allowed domain.
*
* Step 3 and 4 are performed only if the system has, respectively,
* multiple LLCs / multiple NUMA nodes (see scx_selcpu_topo_llc and
@@ -442,9 +445,43 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
const struct cpumask *cpus_allowed, u64 flags)
{
const struct cpumask *llc_cpus = NULL, *numa_cpus = NULL;
- int node = scx_cpu_node_if_enabled(prev_cpu);
+ const struct cpumask *allowed = p->cpus_ptr;
+ int node;
s32 cpu;
+ preempt_disable();
+
+ /*
+ * Determine the subset of CPUs usable by @p within @cpus_allowed.
+ */
+ if (cpus_allowed != p->cpus_ptr) {
+ struct cpumask *local_cpus = this_cpu_cpumask_var_ptr(local_idle_cpumask);
+
+ if (task_affinity_all(p)) {
+ allowed = cpus_allowed;
+ } else if (cpumask_and(local_cpus, cpus_allowed, p->cpus_ptr)) {
+ allowed = local_cpus;
+ } else {
+ cpu = -EBUSY;
+ goto out_enable;
+ }
+ }
+
+ /*
+ * If @prev_cpu is not in the allowed domain, try to assign a new
+ * arbitrary CPU usable by the task in the allowed domain.
+ */
+ if (!cpumask_test_cpu(prev_cpu, allowed)) {
+ cpu = cpumask_any_and_distribute(p->cpus_ptr, allowed);
+ if (cpu < nr_cpu_ids) {
+ prev_cpu = cpu;
+ } else {
+ cpu = -EBUSY;
+ goto out_enable;
+ }
+ }
+ node = scx_cpu_node_if_enabled(prev_cpu);
+
/*
* This is necessary to protect llc_cpus.
*/
@@ -453,14 +490,17 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
/*
* Determine the subset of CPUs that the task can use in its
* current LLC and node.
+ *
+ * If the task can run on all CPUs, use the node and LLC cpumasks
+ * directly.
*/
if (static_branch_maybe(CONFIG_NUMA, &scx_selcpu_topo_numa)) {
struct cpumask *local_cpus = this_cpu_cpumask_var_ptr(local_numa_idle_cpumask);
const struct cpumask *cpus = numa_span(prev_cpu);
- if (cpus_allowed == p->cpus_ptr && task_affinity_all(p))
+ if (allowed == p->cpus_ptr && task_affinity_all(p))
numa_cpus = cpus;
- else if (cpus && cpumask_and(local_cpus, cpus_allowed, cpus))
+ else if (cpus && cpumask_and(local_cpus, allowed, cpus))
numa_cpus = local_cpus;
}
@@ -468,9 +508,9 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
struct cpumask *local_cpus = this_cpu_cpumask_var_ptr(local_llc_idle_cpumask);
const struct cpumask *cpus = llc_span(prev_cpu);
- if (cpus_allowed == p->cpus_ptr && task_affinity_all(p))
+ if (allowed == p->cpus_ptr && task_affinity_all(p))
llc_cpus = cpus;
- else if (cpus && cpumask_and(local_cpus, cpus_allowed, cpus))
+ else if (cpus && cpumask_and(local_cpus, allowed, cpus))
llc_cpus = local_cpus;
}
@@ -509,7 +549,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
cpu_rq(cpu)->scx.local_dsq.nr == 0 &&
(!(flags & SCX_PICK_IDLE_IN_NODE) || (waker_node == node)) &&
!cpumask_empty(idle_cpumask(waker_node)->cpu)) {
- if (cpumask_test_cpu(cpu, cpus_allowed))
+ if (cpumask_test_cpu(cpu, allowed))
goto out_unlock;
}
}
@@ -554,7 +594,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
* begin in prev_cpu's node and proceed to other nodes in
* order of increasing distance.
*/
- cpu = scx_pick_idle_cpu(cpus_allowed, node, flags | SCX_PICK_IDLE_CORE);
+ cpu = scx_pick_idle_cpu(allowed, node, flags | SCX_PICK_IDLE_CORE);
if (cpu >= 0)
goto out_unlock;
@@ -602,12 +642,14 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
* in prev_cpu's node and proceed to other nodes in order of
* increasing distance.
*/
- cpu = scx_pick_idle_cpu(cpus_allowed, node, flags);
+ cpu = scx_pick_idle_cpu(allowed, node, flags);
if (cpu >= 0)
goto out_unlock;
out_unlock:
rcu_read_unlock();
+out_enable:
+ preempt_enable();
return cpu;
}
@@ -639,6 +681,8 @@ void scx_idle_init_masks(void)
/* Allocate local per-cpu idle cpumasks */
for_each_possible_cpu(i) {
+ BUG_ON(!alloc_cpumask_var_node(&per_cpu(local_idle_cpumask, i),
+ GFP_KERNEL, cpu_to_node(i)));
BUG_ON(!alloc_cpumask_var_node(&per_cpu(local_llc_idle_cpumask, i),
GFP_KERNEL, cpu_to_node(i)));
BUG_ON(!alloc_cpumask_var_node(&per_cpu(local_numa_idle_cpumask, i),
--
2.48.1
* Re: [PATCH 3/6] sched_ext: idle: Accept an arbitrary cpumask in scx_select_cpu_dfl()
2025-03-21 22:10 ` [PATCH 3/6] sched_ext: idle: Accept an arbitrary cpumask in scx_select_cpu_dfl() Andrea Righi
@ 2025-03-31 21:56 ` Tejun Heo
2025-04-01 6:33 ` Andrea Righi
0 siblings, 1 reply; 19+ messages in thread
From: Tejun Heo @ 2025-03-31 21:56 UTC (permalink / raw)
To: Andrea Righi; +Cc: David Vernet, Changwoo Min, Joel Fernandes, linux-kernel
On Fri, Mar 21, 2025 at 11:10:49PM +0100, Andrea Righi wrote:
...
> + /*
> + * If @prev_cpu is not in the allowed domain, try to assign a new
> + * arbitrary CPU usable by the task in the allowed domain.
> + */
> + if (!cpumask_test_cpu(prev_cpu, allowed)) {
> + cpu = cpumask_any_and_distribute(p->cpus_ptr, allowed);
> + if (cpu < nr_cpu_ids) {
> + prev_cpu = cpu;
> + } else {
> + cpu = -EBUSY;
> + goto out_enable;
> + }
> + }
Would it be better to clear it to -1 and disable @prev_cpu optimizations if
negative? Not a big deal, so please feel free to push back but things like
wake_sync optimization become a bit weird with @prev_cpu set to some random
CPU and down the line if we want to allow e.g. preferring previous idle CPU
even when the sibling CPU isn't idle which seems to help with some
workloads, this can become tricky.
Thanks.
--
tejun
* Re: [PATCH 3/6] sched_ext: idle: Accept an arbitrary cpumask in scx_select_cpu_dfl()
2025-03-31 21:56 ` Tejun Heo
@ 2025-04-01 6:33 ` Andrea Righi
0 siblings, 0 replies; 19+ messages in thread
From: Andrea Righi @ 2025-04-01 6:33 UTC (permalink / raw)
To: Tejun Heo; +Cc: David Vernet, Changwoo Min, Joel Fernandes, linux-kernel
On Mon, Mar 31, 2025 at 11:56:36AM -1000, Tejun Heo wrote:
> On Fri, Mar 21, 2025 at 11:10:49PM +0100, Andrea Righi wrote:
> ...
> > + /*
> > + * If @prev_cpu is not in the allowed domain, try to assign a new
> > + * arbitrary CPU usable by the task in the allowed domain.
> > + */
> > + if (!cpumask_test_cpu(prev_cpu, allowed)) {
> > + cpu = cpumask_any_and_distribute(p->cpus_ptr, allowed);
> > + if (cpu < nr_cpu_ids) {
> > + prev_cpu = cpu;
> > + } else {
> > + cpu = -EBUSY;
> > + goto out_enable;
> > + }
> > + }
>
> Would it be better to clear it to -1 and disable @prev_cpu optimizations if
> negative? Not a big deal, so please feel free to push back but things like
> wake_sync optimization become a bit weird with @prev_cpu set to some random
> CPU and down the line if we want to allow e.g. preferring previous idle CPU
> even when the sibling CPU isn't idle which seems to help with some
> workloads, this can become tricky.
Maybe a better strategy would be to try with prev_cpu = smp_processor_id(),
if it's in the subset p->cpus_ptr & allowed, which might be beneficial for
some waker->wakee scenarios, otherwise jump directly to the end, with
cpu = scx_pick_idle_cpu(allowed, node, flags).
Thanks,
-Andrea
* [PATCH 4/6] sched_ext: idle: Introduce scx_bpf_select_cpu_and()
2025-03-21 22:10 [PATCHSET v6 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs Andrea Righi
` (2 preceding siblings ...)
2025-03-21 22:10 ` [PATCH 3/6] sched_ext: idle: Accept an arbitrary cpumask in scx_select_cpu_dfl() Andrea Righi
@ 2025-03-21 22:10 ` Andrea Righi
2025-03-31 21:59 ` Tejun Heo
2025-03-21 22:10 ` [PATCH 5/6] selftests/sched_ext: Add test for scx_bpf_select_cpu_and() Andrea Righi
` (2 subsequent siblings)
6 siblings, 1 reply; 19+ messages in thread
From: Andrea Righi @ 2025-03-21 22:10 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min; +Cc: Joel Fernandes, linux-kernel
Provide a new kfunc, scx_bpf_select_cpu_and(), that can be used to apply
the built-in idle CPU selection policy to a subset of allowed CPUs.
This new helper is basically an extension of scx_bpf_select_cpu_dfl().
However, when an idle CPU can't be found, it returns a negative value
instead of @prev_cpu, aligning its behavior more closely with
scx_bpf_pick_idle_cpu().
It also accepts %SCX_PICK_IDLE_* flags, which can be used to restrict
the selection to @prev_cpu's node (%SCX_PICK_IDLE_IN_NODE), or to
request only a fully idle SMT core (%SCX_PICK_IDLE_CORE), while applying
the built-in selection logic.
With this helper, BPF schedulers can apply the built-in idle CPU
selection policy restricted to any arbitrary subset of CPUs.
Example usage
=============
Possible usage in ops.select_cpu():
s32 BPF_STRUCT_OPS(foo_select_cpu, struct task_struct *p,
s32 prev_cpu, u64 wake_flags)
{
const struct cpumask *cpus = task_allowed_cpus(p) ?: p->cpus_ptr;
s32 cpu;
cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, cpus, 0);
if (cpu >= 0) {
scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
return cpu;
}
return prev_cpu;
}
Results
=======
Load distribution on a 4 sockets, 4 cores per socket system, simulated
using virtme-ng, running a modified version of scx_bpfland that uses
scx_bpf_select_cpu_and() with 0xff00 as the allowed subset of CPUs:
$ vng --cpu 16,sockets=4,cores=4,threads=1
...
$ stress-ng -c 16
...
$ htop
...
0[ 0.0%] 8[||||||||||||||||||||||||100.0%]
1[ 0.0%] 9[||||||||||||||||||||||||100.0%]
2[ 0.0%] 10[||||||||||||||||||||||||100.0%]
3[ 0.0%] 11[||||||||||||||||||||||||100.0%]
4[ 0.0%] 12[||||||||||||||||||||||||100.0%]
5[ 0.0%] 13[||||||||||||||||||||||||100.0%]
6[ 0.0%] 14[||||||||||||||||||||||||100.0%]
7[ 0.0%] 15[||||||||||||||||||||||||100.0%]
With scx_bpf_select_cpu_dfl() tasks would be distributed evenly across
all the available CPUs.
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
kernel/sched/ext.c | 1 +
kernel/sched/ext_idle.c | 43 ++++++++++++++++++++++++
tools/sched_ext/include/scx/common.bpf.h | 2 ++
3 files changed, 46 insertions(+)
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index f42352e8d889e..343f066c1185d 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -465,6 +465,7 @@ struct sched_ext_ops {
* idle CPU tracking and the following helpers become unavailable:
*
* - scx_bpf_select_cpu_dfl()
+ * - scx_bpf_select_cpu_and()
* - scx_bpf_test_and_clear_cpu_idle()
* - scx_bpf_pick_idle_cpu()
*
diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
index faed4f89f95e9..220e11cd0ab67 100644
--- a/kernel/sched/ext_idle.c
+++ b/kernel/sched/ext_idle.c
@@ -914,6 +914,48 @@ __bpf_kfunc s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
return prev_cpu;
}
+/**
+ * scx_bpf_select_cpu_and - Pick an idle CPU usable by task @p,
+ * prioritizing those in @cpus_allowed
+ * @p: task_struct to select a CPU for
+ * @prev_cpu: CPU @p was on previously
+ * @wake_flags: %SCX_WAKE_* flags
+ * @cpus_allowed: cpumask of allowed CPUs
+ * @flags: %SCX_PICK_IDLE* flags
+ *
+ * Can only be called from ops.select_cpu() or ops.enqueue() if the
+ * built-in CPU selection is enabled: ops.update_idle() is missing or
+ * %SCX_OPS_KEEP_BUILTIN_IDLE is set.
+ *
+ * @p, @prev_cpu and @wake_flags match ops.select_cpu().
+ *
+ * Returns the selected idle CPU, which will be automatically awakened upon
+ * returning from ops.select_cpu() and can be used for direct dispatch, or
+ * a negative value if no idle CPU is available.
+ */
+__bpf_kfunc s32 scx_bpf_select_cpu_and(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
+ const struct cpumask *cpus_allowed, u64 flags)
+{
+ s32 cpu;
+
+ if (!ops_cpu_valid(prev_cpu, NULL))
+ return -EINVAL;
+
+ if (!check_builtin_idle_enabled())
+ return -EBUSY;
+
+ if (!scx_kf_allowed(SCX_KF_SELECT_CPU | SCX_KF_ENQUEUE))
+ return -EPERM;
+
+#ifdef CONFIG_SMP
+ cpu = scx_select_cpu_dfl(p, prev_cpu, wake_flags, cpus_allowed, flags);
+#else
+ cpu = -EBUSY;
+#endif
+
+ return cpu;
+}
+
/**
* scx_bpf_get_idle_cpumask_node - Get a referenced kptr to the
* idle-tracking per-CPU cpumask of a target NUMA node.
@@ -1222,6 +1264,7 @@ static const struct btf_kfunc_id_set scx_kfunc_set_idle = {
BTF_KFUNCS_START(scx_kfunc_ids_select_cpu)
BTF_ID_FLAGS(func, scx_bpf_select_cpu_dfl, KF_RCU)
+BTF_ID_FLAGS(func, scx_bpf_select_cpu_and, KF_RCU)
BTF_KFUNCS_END(scx_kfunc_ids_select_cpu)
static const struct btf_kfunc_id_set scx_kfunc_set_select_cpu = {
diff --git a/tools/sched_ext/include/scx/common.bpf.h b/tools/sched_ext/include/scx/common.bpf.h
index dc4333d23189f..6f1da61cf7f17 100644
--- a/tools/sched_ext/include/scx/common.bpf.h
+++ b/tools/sched_ext/include/scx/common.bpf.h
@@ -48,6 +48,8 @@ static inline void ___vmlinux_h_sanity_check___(void)
s32 scx_bpf_create_dsq(u64 dsq_id, s32 node) __ksym;
s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool *is_idle) __ksym;
+s32 scx_bpf_select_cpu_and(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
+ const struct cpumask *cpus_allowed, u64 flags) __ksym __weak;
void scx_bpf_dsq_insert(struct task_struct *p, u64 dsq_id, u64 slice, u64 enq_flags) __ksym __weak;
void scx_bpf_dsq_insert_vtime(struct task_struct *p, u64 dsq_id, u64 slice, u64 vtime, u64 enq_flags) __ksym __weak;
u32 scx_bpf_dispatch_nr_slots(void) __ksym;
--
2.48.1
* Re: [PATCH 4/6] sched_ext: idle: Introduce scx_bpf_select_cpu_and()
2025-03-21 22:10 ` [PATCH 4/6] sched_ext: idle: Introduce scx_bpf_select_cpu_and() Andrea Righi
@ 2025-03-31 21:59 ` Tejun Heo
2025-04-01 6:35 ` Andrea Righi
0 siblings, 1 reply; 19+ messages in thread
From: Tejun Heo @ 2025-03-31 21:59 UTC (permalink / raw)
To: Andrea Righi; +Cc: David Vernet, Changwoo Min, Joel Fernandes, linux-kernel
Hello,
On Fri, Mar 21, 2025 at 11:10:50PM +0100, Andrea Righi wrote:
...
> +__bpf_kfunc s32 scx_bpf_select_cpu_and(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
> + const struct cpumask *cpus_allowed, u64 flags)
> +{
> + s32 cpu;
> +
> + if (!ops_cpu_valid(prev_cpu, NULL))
> + return -EINVAL;
> +
> + if (!check_builtin_idle_enabled())
> + return -EBUSY;
> +
> + if (!scx_kf_allowed(SCX_KF_SELECT_CPU | SCX_KF_ENQUEUE))
> + return -EPERM;
> +
> +#ifdef CONFIG_SMP
> + cpu = scx_select_cpu_dfl(p, prev_cpu, wake_flags, cpus_allowed, flags);
> +#else
> + cpu = -EBUSY;
> +#endif
> +
> + return cpu;
> +}
Later in the series, I find scx_bpf_select_cpu_and() being called with
p->cpus_ptr really confusing. scx_bpf_select_cpu_and() is always constrained
by p->cpus_ptr (except for the currently buggy case where p->nr_cpus_allowed
is used while p->cpus_ptr is overridden), so what does it mean to call
scx_bpf_select_cpu_and() with p->cpus_ptr as @cpus_allowed? I'd much prefer
if the convention in such cases is calling with NULL @cpus_allowed.
@cpus_allowed is the extra mask to and to p->cpus_ptr when searching for an
idle CPU. If we're going to use p->cpus_ptr, we just don't have the extra
cpumask to and.
Thanks.
--
tejun
* Re: [PATCH 4/6] sched_ext: idle: Introduce scx_bpf_select_cpu_and()
2025-03-31 21:59 ` Tejun Heo
@ 2025-04-01 6:35 ` Andrea Righi
0 siblings, 0 replies; 19+ messages in thread
From: Andrea Righi @ 2025-04-01 6:35 UTC (permalink / raw)
To: Tejun Heo; +Cc: David Vernet, Changwoo Min, Joel Fernandes, linux-kernel
On Mon, Mar 31, 2025 at 11:59:42AM -1000, Tejun Heo wrote:
> Hello,
>
> On Fri, Mar 21, 2025 at 11:10:50PM +0100, Andrea Righi wrote:
> ...
> > +__bpf_kfunc s32 scx_bpf_select_cpu_and(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
> > + const struct cpumask *cpus_allowed, u64 flags)
> > +{
> > + s32 cpu;
> > +
> > + if (!ops_cpu_valid(prev_cpu, NULL))
> > + return -EINVAL;
> > +
> > + if (!check_builtin_idle_enabled())
> > + return -EBUSY;
> > +
> > + if (!scx_kf_allowed(SCX_KF_SELECT_CPU | SCX_KF_ENQUEUE))
> > + return -EPERM;
> > +
> > +#ifdef CONFIG_SMP
> > + cpu = scx_select_cpu_dfl(p, prev_cpu, wake_flags, cpus_allowed, flags);
> > +#else
> > + cpu = -EBUSY;
> > +#endif
> > +
> > + return cpu;
> > +}
>
> Later in the series, I find scx_bpf_select_cpu_and() being called with
> p->cpus_ptr really confusing. scx_bpf_select_cpu_and() is always constrained
> by p->cpus_ptr (except for the currently buggy case where p->nr_cpus_allowed
> is used while p->cpus_ptr is overridden), so what does it mean to call
> scx_bpf_select_cpu_and() with p->cpus_ptr as @cpus_allowed? I'd much prefer
> if the convention in such cases is calling with NULL @cpus_allowed.
> @cpus_allowed is the extra mask to and to p->cpus_ptr when searching for an
> idle CPU. If we're going to use p->cpus_ptr, we just don't have the extra
> cpumask to and.
Exactly, as mentioned in a previous email, I also agree that using NULL as
@cpus_allowed would be clearer and less bug-prone. Will change that.
-Andrea
* [PATCH 5/6] selftests/sched_ext: Add test for scx_bpf_select_cpu_and()
2025-03-21 22:10 [PATCHSET v6 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs Andrea Righi
` (3 preceding siblings ...)
2025-03-21 22:10 ` [PATCH 4/6] sched_ext: idle: Introduce scx_bpf_select_cpu_and() Andrea Righi
@ 2025-03-21 22:10 ` Andrea Righi
2025-03-21 22:10 ` [PATCH 6/6] sched_ext: idle: Deprecate scx_bpf_select_cpu_dfl() Andrea Righi
2025-03-22 3:56 ` [PATCHSET v6 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs Changwoo Min
6 siblings, 0 replies; 19+ messages in thread
From: Andrea Righi @ 2025-03-21 22:10 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min; +Cc: Joel Fernandes, linux-kernel
Add a selftest to validate the behavior of the built-in idle CPU
selection policy applied to a subset of allowed CPUs, using
scx_bpf_select_cpu_and().
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
tools/testing/selftests/sched_ext/Makefile | 1 +
.../selftests/sched_ext/allowed_cpus.bpf.c | 121 ++++++++++++++++++
.../selftests/sched_ext/allowed_cpus.c | 57 +++++++++
3 files changed, 179 insertions(+)
create mode 100644 tools/testing/selftests/sched_ext/allowed_cpus.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/allowed_cpus.c
diff --git a/tools/testing/selftests/sched_ext/Makefile b/tools/testing/selftests/sched_ext/Makefile
index f4531327b8e76..e9d5bc575f806 100644
--- a/tools/testing/selftests/sched_ext/Makefile
+++ b/tools/testing/selftests/sched_ext/Makefile
@@ -173,6 +173,7 @@ auto-test-targets := \
maybe_null \
minimal \
numa \
+ allowed_cpus \
prog_run \
reload_loop \
select_cpu_dfl \
diff --git a/tools/testing/selftests/sched_ext/allowed_cpus.bpf.c b/tools/testing/selftests/sched_ext/allowed_cpus.bpf.c
new file mode 100644
index 0000000000000..39d57f7f74099
--- /dev/null
+++ b/tools/testing/selftests/sched_ext/allowed_cpus.bpf.c
@@ -0,0 +1,121 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * A scheduler that validates the behavior of scx_bpf_select_cpu_and() by
+ * selecting idle CPUs strictly within a subset of allowed CPUs.
+ *
+ * Copyright (c) 2025 Andrea Righi <arighi@nvidia.com>
+ */
+
+#include <scx/common.bpf.h>
+
+char _license[] SEC("license") = "GPL";
+
+UEI_DEFINE(uei);
+
+private(PREF_CPUS) struct bpf_cpumask __kptr * allowed_cpumask;
+
+static void
+validate_idle_cpu(const struct task_struct *p, const struct cpumask *allowed, s32 cpu)
+{
+ if (scx_bpf_test_and_clear_cpu_idle(cpu))
+ scx_bpf_error("CPU %d should be marked as busy", cpu);
+
+ if (bpf_cpumask_subset(allowed, p->cpus_ptr) &&
+ !bpf_cpumask_test_cpu(cpu, allowed))
+ scx_bpf_error("CPU %d not in the allowed domain for %d (%s)",
+ cpu, p->pid, p->comm);
+}
+
+s32 BPF_STRUCT_OPS(allowed_cpus_select_cpu,
+ struct task_struct *p, s32 prev_cpu, u64 wake_flags)
+{
+ const struct cpumask *allowed;
+ s32 cpu;
+
+ allowed = cast_mask(allowed_cpumask);
+ if (!allowed) {
+ scx_bpf_error("allowed domain not initialized");
+ return -EINVAL;
+ }
+
+ /*
+ * Select an idle CPU strictly within the allowed domain.
+ */
+ cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, allowed, 0);
+ if (cpu >= 0) {
+ validate_idle_cpu(p, allowed, cpu);
+ scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
+
+ return cpu;
+ }
+
+ return prev_cpu;
+}
+
+void BPF_STRUCT_OPS(allowed_cpus_enqueue, struct task_struct *p, u64 enq_flags)
+{
+ const struct cpumask *allowed;
+ s32 prev_cpu = scx_bpf_task_cpu(p), cpu;
+
+ scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
+
+ allowed = cast_mask(allowed_cpumask);
+ if (!allowed) {
+ scx_bpf_error("allowed domain not initialized");
+ return;
+ }
+
+ /*
+ * Use scx_bpf_select_cpu_and() to proactively kick an idle CPU
+ * within @allowed_cpumask, usable by @p.
+ */
+ cpu = scx_bpf_select_cpu_and(p, prev_cpu, 0, allowed, 0);
+ if (cpu >= 0) {
+ validate_idle_cpu(p, allowed, cpu);
+ scx_bpf_kick_cpu(cpu, SCX_KICK_IDLE);
+ }
+}
+
+s32 BPF_STRUCT_OPS_SLEEPABLE(allowed_cpus_init)
+{
+ struct bpf_cpumask *mask;
+
+ mask = bpf_cpumask_create();
+ if (!mask)
+ return -ENOMEM;
+
+ mask = bpf_kptr_xchg(&allowed_cpumask, mask);
+ if (mask)
+ bpf_cpumask_release(mask);
+
+ bpf_rcu_read_lock();
+
+ /*
+ * Assign the first online CPU to the allowed domain.
+ */
+ mask = allowed_cpumask;
+ if (mask) {
+ const struct cpumask *online = scx_bpf_get_online_cpumask();
+
+ bpf_cpumask_set_cpu(bpf_cpumask_first(online), mask);
+ scx_bpf_put_cpumask(online);
+ }
+
+ bpf_rcu_read_unlock();
+
+ return 0;
+}
+
+void BPF_STRUCT_OPS(allowed_cpus_exit, struct scx_exit_info *ei)
+{
+ UEI_RECORD(uei, ei);
+}
+
+SEC(".struct_ops.link")
+struct sched_ext_ops allowed_cpus_ops = {
+ .select_cpu = (void *)allowed_cpus_select_cpu,
+ .enqueue = (void *)allowed_cpus_enqueue,
+ .init = (void *)allowed_cpus_init,
+ .exit = (void *)allowed_cpus_exit,
+ .name = "allowed_cpus",
+};
diff --git a/tools/testing/selftests/sched_ext/allowed_cpus.c b/tools/testing/selftests/sched_ext/allowed_cpus.c
new file mode 100644
index 0000000000000..a001a3a0e9f1f
--- /dev/null
+++ b/tools/testing/selftests/sched_ext/allowed_cpus.c
@@ -0,0 +1,57 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2025 Andrea Righi <arighi@nvidia.com>
+ */
+#include <bpf/bpf.h>
+#include <scx/common.h>
+#include <sys/wait.h>
+#include <unistd.h>
+#include "allowed_cpus.bpf.skel.h"
+#include "scx_test.h"
+
+static enum scx_test_status setup(void **ctx)
+{
+ struct allowed_cpus *skel;
+
+ skel = allowed_cpus__open();
+ SCX_FAIL_IF(!skel, "Failed to open");
+ SCX_ENUM_INIT(skel);
+ SCX_FAIL_IF(allowed_cpus__load(skel), "Failed to load skel");
+
+ *ctx = skel;
+
+ return SCX_TEST_PASS;
+}
+
+static enum scx_test_status run(void *ctx)
+{
+ struct allowed_cpus *skel = ctx;
+ struct bpf_link *link;
+
+ link = bpf_map__attach_struct_ops(skel->maps.allowed_cpus_ops);
+ SCX_FAIL_IF(!link, "Failed to attach scheduler");
+
+ /* Just sleeping is fine, plenty of scheduling events happening */
+ sleep(1);
+
+ SCX_EQ(skel->data->uei.kind, EXIT_KIND(SCX_EXIT_NONE));
+ bpf_link__destroy(link);
+
+ return SCX_TEST_PASS;
+}
+
+static void cleanup(void *ctx)
+{
+ struct allowed_cpus *skel = ctx;
+
+ allowed_cpus__destroy(skel);
+}
+
+struct scx_test allowed_cpus = {
+ .name = "allowed_cpus",
+ .description = "Verify scx_bpf_select_cpu_and()",
+ .setup = setup,
+ .run = run,
+ .cleanup = cleanup,
+};
+REGISTER_SCX_TEST(&allowed_cpus)
--
2.48.1
* [PATCH 6/6] sched_ext: idle: Deprecate scx_bpf_select_cpu_dfl()
2025-03-21 22:10 [PATCHSET v6 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs Andrea Righi
` (4 preceding siblings ...)
2025-03-21 22:10 ` [PATCH 5/6] selftests/sched_ext: Add test for scx_bpf_select_cpu_and() Andrea Righi
@ 2025-03-21 22:10 ` Andrea Righi
2025-03-31 22:01 ` Tejun Heo
2025-03-22 3:56 ` [PATCHSET v6 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs Changwoo Min
6 siblings, 1 reply; 19+ messages in thread
From: Andrea Righi @ 2025-03-21 22:10 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min; +Cc: Joel Fernandes, linux-kernel
With the introduction of scx_bpf_select_cpu_and(), we can deprecate
scx_bpf_select_cpu_dfl(), as it offers only a subset of features and
it's also more consistent with other idle-related APIs (returning a
negative value when no idle CPU is found).
Therefore, mark scx_bpf_select_cpu_dfl() as deprecated (printing a
warning when it's used), update all the scheduler examples and
kselftests to adopt the new API, and ensure backward (source and binary)
compatibility by providing the necessary macros and hooks.
Support for scx_bpf_select_cpu_dfl() can be maintained until v6.17.
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
Documentation/scheduler/sched-ext.rst | 11 +++---
kernel/sched/ext.c | 3 +-
kernel/sched/ext_idle.c | 18 ++-------
tools/sched_ext/include/scx/common.bpf.h | 3 +-
tools/sched_ext/include/scx/compat.bpf.h | 37 +++++++++++++++++++
tools/sched_ext/scx_flatcg.bpf.c | 12 +++---
tools/sched_ext/scx_simple.bpf.c | 9 +++--
.../sched_ext/enq_select_cpu_fails.bpf.c | 12 +-----
.../sched_ext/enq_select_cpu_fails.c | 2 +-
tools/testing/selftests/sched_ext/exit.bpf.c | 6 ++-
.../sched_ext/select_cpu_dfl_nodispatch.bpf.c | 13 +++----
.../sched_ext/select_cpu_dfl_nodispatch.c | 2 +-
12 files changed, 73 insertions(+), 55 deletions(-)
diff --git a/Documentation/scheduler/sched-ext.rst b/Documentation/scheduler/sched-ext.rst
index 0993e41353db7..7f36f4fcf5f31 100644
--- a/Documentation/scheduler/sched-ext.rst
+++ b/Documentation/scheduler/sched-ext.rst
@@ -142,15 +142,14 @@ optional. The following modified excerpt is from
s32 prev_cpu, u64 wake_flags)
{
s32 cpu;
- /* Need to initialize or the BPF verifier will reject the program */
- bool direct = false;
- cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &direct);
-
- if (direct)
+ cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, p->cpus_ptr, 0);
+ if (cpu >= 0) {
scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
+ return cpu;
+ }
- return cpu;
+ return prev_cpu;
}
/*
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 343f066c1185d..d82e9d3cbc0dc 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -464,13 +464,12 @@ struct sched_ext_ops {
* state. By default, implementing this operation disables the built-in
* idle CPU tracking and the following helpers become unavailable:
*
- * - scx_bpf_select_cpu_dfl()
* - scx_bpf_select_cpu_and()
* - scx_bpf_test_and_clear_cpu_idle()
* - scx_bpf_pick_idle_cpu()
*
* The user also must implement ops.select_cpu() as the default
- * implementation relies on scx_bpf_select_cpu_dfl().
+ * implementation relies on scx_bpf_select_cpu_and().
*
* Specify the %SCX_OPS_KEEP_BUILTIN_IDLE flag to keep the built-in idle
* tracking.
diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
index 220e11cd0ab67..746fd36050045 100644
--- a/kernel/sched/ext_idle.c
+++ b/kernel/sched/ext_idle.c
@@ -872,26 +872,16 @@ __bpf_kfunc int scx_bpf_cpu_node(s32 cpu)
#endif
}
-/**
- * scx_bpf_select_cpu_dfl - The default implementation of ops.select_cpu()
- * @p: task_struct to select a CPU for
- * @prev_cpu: CPU @p was on previously
- * @wake_flags: %SCX_WAKE_* flags
- * @is_idle: out parameter indicating whether the returned CPU is idle
- *
- * Can only be called from ops.select_cpu() if the built-in CPU selection is
- * enabled - ops.update_idle() is missing or %SCX_OPS_KEEP_BUILTIN_IDLE is set.
- * @p, @prev_cpu and @wake_flags match ops.select_cpu().
- *
- * Returns the picked CPU with *@is_idle indicating whether the picked CPU is
- * currently idle and thus a good candidate for direct dispatching.
- */
+/* Provided for backward binary compatibility, will be removed in v6.17. */
__bpf_kfunc s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
u64 wake_flags, bool *is_idle)
{
#ifdef CONFIG_SMP
s32 cpu;
#endif
+ printk_deferred_once(KERN_WARNING
+ "sched_ext: scx_bpf_select_cpu_dfl() deprecated in favor of scx_bpf_select_cpu_and()");
+
if (!ops_cpu_valid(prev_cpu, NULL))
goto prev_cpu;
diff --git a/tools/sched_ext/include/scx/common.bpf.h b/tools/sched_ext/include/scx/common.bpf.h
index 6f1da61cf7f17..1eb790eb90d40 100644
--- a/tools/sched_ext/include/scx/common.bpf.h
+++ b/tools/sched_ext/include/scx/common.bpf.h
@@ -47,7 +47,8 @@ static inline void ___vmlinux_h_sanity_check___(void)
}
s32 scx_bpf_create_dsq(u64 dsq_id, s32 node) __ksym;
-s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool *is_idle) __ksym;
+s32 scx_bpf_select_cpu_dfl(struct task_struct *p,
+ s32 prev_cpu, u64 wake_flags, bool *is_idle) __ksym __weak;
s32 scx_bpf_select_cpu_and(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
const struct cpumask *cpus_allowed, u64 flags) __ksym __weak;
void scx_bpf_dsq_insert(struct task_struct *p, u64 dsq_id, u64 slice, u64 enq_flags) __ksym __weak;
diff --git a/tools/sched_ext/include/scx/compat.bpf.h b/tools/sched_ext/include/scx/compat.bpf.h
index 9252e1a00556f..f9caa7baf356c 100644
--- a/tools/sched_ext/include/scx/compat.bpf.h
+++ b/tools/sched_ext/include/scx/compat.bpf.h
@@ -225,6 +225,43 @@ static inline bool __COMPAT_is_enq_cpu_selected(u64 enq_flags)
scx_bpf_pick_any_cpu_node(cpus_allowed, node, flags) : \
scx_bpf_pick_any_cpu(cpus_allowed, flags))
+/**
+ * scx_bpf_select_cpu_dfl - The default implementation of ops.select_cpu().
+ * We will preserve this compatible helper until v6.17.
+ *
+ * @p: task_struct to select a CPU for
+ * @prev_cpu: CPU @p was on previously
+ * @wake_flags: %SCX_WAKE_* flags
+ * @is_idle: out parameter indicating whether the returned CPU is idle
+ *
+ * Can only be called from ops.select_cpu() if the built-in CPU selection is
+ * enabled - ops.update_idle() is missing or %SCX_OPS_KEEP_BUILTIN_IDLE is set.
+ * @p, @prev_cpu and @wake_flags match ops.select_cpu().
+ *
+ * Returns the picked CPU with *@is_idle indicating whether the picked CPU is
+ * currently idle and thus a good candidate for direct dispatching.
+ */
+#define scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, is_idle) \
+({ \
+ s32 __cpu; \
+ \
+ if (bpf_ksym_exists(scx_bpf_select_cpu_and)) { \
+ __cpu = scx_bpf_select_cpu_and((p), (prev_cpu), (wake_flags), \
+ (p)->cpus_ptr, 0); \
+ if (__cpu >= 0) { \
+ *(is_idle) = true; \
+ } else { \
+ *(is_idle) = false; \
+ __cpu = (prev_cpu); \
+ } \
+ } else { \
+ __cpu = scx_bpf_select_cpu_dfl((p), (prev_cpu), \
+ (wake_flags), (is_idle)); \
+ } \
+ \
+ __cpu; \
+})
+
/*
* Define sched_ext_ops. This may be expanded to define multiple variants for
* backward compatibility. See compat.h::SCX_OPS_LOAD/ATTACH().
diff --git a/tools/sched_ext/scx_flatcg.bpf.c b/tools/sched_ext/scx_flatcg.bpf.c
index 2c720e3ecad59..0075bff928893 100644
--- a/tools/sched_ext/scx_flatcg.bpf.c
+++ b/tools/sched_ext/scx_flatcg.bpf.c
@@ -317,15 +317,12 @@ static void set_bypassed_at(struct task_struct *p, struct fcg_task_ctx *taskc)
s32 BPF_STRUCT_OPS(fcg_select_cpu, struct task_struct *p, s32 prev_cpu, u64 wake_flags)
{
struct fcg_task_ctx *taskc;
- bool is_idle = false;
s32 cpu;
- cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &is_idle);
-
taskc = bpf_task_storage_get(&task_ctx, p, 0, 0);
if (!taskc) {
scx_bpf_error("task_ctx lookup failed");
- return cpu;
+ return prev_cpu;
}
/*
@@ -333,13 +330,16 @@ s32 BPF_STRUCT_OPS(fcg_select_cpu, struct task_struct *p, s32 prev_cpu, u64 wake
* idle. Follow it and charge the cgroup later in fcg_stopping() after
* the fact.
*/
- if (is_idle) {
+ cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, p->cpus_ptr, 0);
+ if (cpu >= 0) {
set_bypassed_at(p, taskc);
stat_inc(FCG_STAT_LOCAL);
scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
+
+ return cpu;
}
- return cpu;
+ return prev_cpu;
}
void BPF_STRUCT_OPS(fcg_enqueue, struct task_struct *p, u64 enq_flags)
diff --git a/tools/sched_ext/scx_simple.bpf.c b/tools/sched_ext/scx_simple.bpf.c
index e6de99dba7db6..0e48b2e46a683 100644
--- a/tools/sched_ext/scx_simple.bpf.c
+++ b/tools/sched_ext/scx_simple.bpf.c
@@ -54,16 +54,17 @@ static void stat_inc(u32 idx)
s32 BPF_STRUCT_OPS(simple_select_cpu, struct task_struct *p, s32 prev_cpu, u64 wake_flags)
{
- bool is_idle = false;
s32 cpu;
- cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &is_idle);
- if (is_idle) {
+ cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, p->cpus_ptr, 0);
+ if (cpu >= 0) {
stat_inc(0); /* count local queueing */
scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
+
+ return cpu;
}
- return cpu;
+ return prev_cpu;
}
void BPF_STRUCT_OPS(simple_enqueue, struct task_struct *p, u64 enq_flags)
diff --git a/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c b/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c
index a7cf868d5e311..d3c0716aa79c9 100644
--- a/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c
+++ b/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c
@@ -9,10 +9,6 @@
char _license[] SEC("license") = "GPL";
-/* Manually specify the signature until the kfunc is added to the scx repo. */
-s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
- bool *found) __ksym;
-
s32 BPF_STRUCT_OPS(enq_select_cpu_fails_select_cpu, struct task_struct *p,
s32 prev_cpu, u64 wake_flags)
{
@@ -22,14 +18,8 @@ s32 BPF_STRUCT_OPS(enq_select_cpu_fails_select_cpu, struct task_struct *p,
void BPF_STRUCT_OPS(enq_select_cpu_fails_enqueue, struct task_struct *p,
u64 enq_flags)
{
- /*
- * Need to initialize the variable or the verifier will fail to load.
- * Improving these semantics is actively being worked on.
- */
- bool found = false;
-
/* Can only call from ops.select_cpu() */
- scx_bpf_select_cpu_dfl(p, 0, 0, &found);
+ scx_bpf_select_cpu_and(p, 0, 0, p->cpus_ptr, 0);
scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
}
diff --git a/tools/testing/selftests/sched_ext/enq_select_cpu_fails.c b/tools/testing/selftests/sched_ext/enq_select_cpu_fails.c
index a80e3a3b3698c..c964444998667 100644
--- a/tools/testing/selftests/sched_ext/enq_select_cpu_fails.c
+++ b/tools/testing/selftests/sched_ext/enq_select_cpu_fails.c
@@ -52,7 +52,7 @@ static void cleanup(void *ctx)
struct scx_test enq_select_cpu_fails = {
.name = "enq_select_cpu_fails",
- .description = "Verify we fail to call scx_bpf_select_cpu_dfl() "
+ .description = "Verify we fail to call scx_bpf_select_cpu_and() "
"from ops.enqueue()",
.setup = setup,
.run = run,
diff --git a/tools/testing/selftests/sched_ext/exit.bpf.c b/tools/testing/selftests/sched_ext/exit.bpf.c
index 4bc36182d3ffc..8122421856c1b 100644
--- a/tools/testing/selftests/sched_ext/exit.bpf.c
+++ b/tools/testing/selftests/sched_ext/exit.bpf.c
@@ -20,12 +20,14 @@ UEI_DEFINE(uei);
s32 BPF_STRUCT_OPS(exit_select_cpu, struct task_struct *p,
s32 prev_cpu, u64 wake_flags)
{
- bool found;
+ s32 cpu;
if (exit_point == EXIT_SELECT_CPU)
EXIT_CLEANLY();
- return scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &found);
+ cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, p->cpus_ptr, 0);
+
+ return cpu >= 0 ? cpu : prev_cpu;
}
void BPF_STRUCT_OPS(exit_enqueue, struct task_struct *p, u64 enq_flags)
diff --git a/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c
index 815f1d5d61ac4..4e1b698f710e7 100644
--- a/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c
+++ b/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c
@@ -27,10 +27,6 @@ struct {
__type(value, struct task_ctx);
} task_ctx_stor SEC(".maps");
-/* Manually specify the signature until the kfunc is added to the scx repo. */
-s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
- bool *found) __ksym;
-
s32 BPF_STRUCT_OPS(select_cpu_dfl_nodispatch_select_cpu, struct task_struct *p,
s32 prev_cpu, u64 wake_flags)
{
@@ -43,10 +39,13 @@ s32 BPF_STRUCT_OPS(select_cpu_dfl_nodispatch_select_cpu, struct task_struct *p,
return -ESRCH;
}
- cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags,
- &tctx->force_local);
+ cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, p->cpus_ptr, 0);
+ if (cpu >= 0) {
+ tctx->force_local = true;
+ return cpu;
+ }
- return cpu;
+ return prev_cpu;
}
void BPF_STRUCT_OPS(select_cpu_dfl_nodispatch_enqueue, struct task_struct *p,
diff --git a/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.c b/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.c
index 9b5d232efb7f6..2f450bb14e8d9 100644
--- a/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.c
+++ b/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.c
@@ -66,7 +66,7 @@ static void cleanup(void *ctx)
struct scx_test select_cpu_dfl_nodispatch = {
.name = "select_cpu_dfl_nodispatch",
- .description = "Verify behavior of scx_bpf_select_cpu_dfl() in "
+ .description = "Verify behavior of scx_bpf_select_cpu_and() in "
"ops.select_cpu()",
.setup = setup,
.run = run,
--
2.48.1
* Re: [PATCH 6/6] sched_ext: idle: Deprecate scx_bpf_select_cpu_dfl()
2025-03-21 22:10 ` [PATCH 6/6] sched_ext: idle: Deprecate scx_bpf_select_cpu_dfl() Andrea Righi
@ 2025-03-31 22:01 ` Tejun Heo
2025-04-01 6:38 ` Andrea Righi
0 siblings, 1 reply; 19+ messages in thread
From: Tejun Heo @ 2025-03-31 22:01 UTC (permalink / raw)
To: Andrea Righi; +Cc: David Vernet, Changwoo Min, Joel Fernandes, linux-kernel
Hello,
On Fri, Mar 21, 2025 at 11:10:52PM +0100, Andrea Righi wrote:
> With the introduction of scx_bpf_select_cpu_and(), we can deprecate
> scx_bpf_select_cpu_dfl(), as it offers only a subset of features and
> it's also more consistent with other idle-related APIs (returning a
> negative value when no idle CPU is found).
>
> Therefore, mark scx_bpf_select_cpu_dfl() as deprecated (printing a
> warning when it's used), update all the scheduler examples and
> kselftests to adopt the new API, and ensure backward (source and binary)
> compatibility by providing the necessary macros and hooks.
>
> Support for scx_bpf_select_cpu_dfl() can be maintained until v6.17.
Do we need to deprecate it?
...
> @@ -43,10 +39,13 @@ s32 BPF_STRUCT_OPS(select_cpu_dfl_nodispatch_select_cpu, struct task_struct *p,
> return -ESRCH;
> }
>
> - cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags,
> - &tctx->force_local);
> + cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, p->cpus_ptr, 0);
> + if (cpu >= 0) {
> + tctx->force_local = true;
> + return cpu;
> + }
>
> - return cpu;
> + return prev_cpu;
> }
scx_bpf_select_cpu_dfl() is simpler for simple cases. I don't see a pressing
need to convert everybody to _and().
Thanks.
--
tejun
* Re: [PATCH 6/6] sched_ext: idle: Deprecate scx_bpf_select_cpu_dfl()
2025-03-31 22:01 ` Tejun Heo
@ 2025-04-01 6:38 ` Andrea Righi
0 siblings, 0 replies; 19+ messages in thread
From: Andrea Righi @ 2025-04-01 6:38 UTC (permalink / raw)
To: Tejun Heo; +Cc: David Vernet, Changwoo Min, Joel Fernandes, linux-kernel
On Mon, Mar 31, 2025 at 12:01:22PM -1000, Tejun Heo wrote:
> Hello,
>
> On Fri, Mar 21, 2025 at 11:10:52PM +0100, Andrea Righi wrote:
> > With the introduction of scx_bpf_select_cpu_and(), we can deprecate
> > scx_bpf_select_cpu_dfl(), as it offers only a subset of features and
> > it's also more consistent with other idle-related APIs (returning a
> > negative value when no idle CPU is found).
> >
> > Therefore, mark scx_bpf_select_cpu_dfl() as deprecated (printing a
> > warning when it's used), update all the scheduler examples and
> > kselftests to adopt the new API, and ensure backward (source and binary)
> > compatibility by providing the necessary macros and hooks.
> >
> > Support for scx_bpf_select_cpu_dfl() can be maintained until v6.17.
>
> Do we need to deprecate it?
>
> ...
> > @@ -43,10 +39,13 @@ s32 BPF_STRUCT_OPS(select_cpu_dfl_nodispatch_select_cpu, struct task_struct *p,
> > return -ESRCH;
> > }
> >
> > - cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags,
> > - &tctx->force_local);
> > + cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, p->cpus_ptr, 0);
> > + if (cpu >= 0) {
> > + tctx->force_local = true;
> > + return cpu;
> > + }
> >
> > - return cpu;
> > + return prev_cpu;
> > }
>
> scx_bpf_select_cpu_dfl() is simpler for simple cases. I don't see a pressing
> need to convert everybody to _and().
Yeah, I don't have strong opinions on this, I included this patch mostly to
show that we can get rid of a kfunc if we want, but we don't really have to
and it's probably less work to just keep it. I'll drop this patch in the
next version.
Thanks for the review!
-Andrea
* Re: [PATCHSET v6 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs
2025-03-21 22:10 [PATCHSET v6 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs Andrea Righi
` (5 preceding siblings ...)
2025-03-21 22:10 ` [PATCH 6/6] sched_ext: idle: Deprecate scx_bpf_select_cpu_dfl() Andrea Righi
@ 2025-03-22 3:56 ` Changwoo Min
6 siblings, 0 replies; 19+ messages in thread
From: Changwoo Min @ 2025-03-22 3:56 UTC (permalink / raw)
To: Andrea Righi; +Cc: Tejun Heo, David Vernet, Joel Fernandes, linux-kernel
Hi Andrea,
Looks great to me.
Thanks!
Changwoo Min
On 2025-03-22 07:10, Andrea Righi wrote:
> Many scx schedulers implement their own hard or soft-affinity rules to
> support topology characteristics, such as heterogeneous architectures
> (e.g., big.LITTLE, P-cores/E-cores), or to categorize tasks based on
> specific properties (e.g., running certain tasks only in a subset of CPUs).
>
> Currently, there is no mechanism that allows applying the built-in idle CPU
> selection policy to an arbitrary subset of CPUs. As a result, schedulers
> often implement their own idle CPU selection policies, which are typically
> similar to one another, leading to a lot of code duplication.
>
> To address this, extend the built-in idle CPU selection policy by introducing
> the concept of allowed CPUs.
>
> With this concept, BPF schedulers can apply the built-in idle CPU selection
> policy to a subset of allowed CPUs, allowing them to implement their own
> hard/soft-affinity rules while still using the topology optimizations of
> the built-in policy, preventing code duplication across different
> schedulers.
>
> To implement this introduce a new helper kfunc scx_bpf_select_cpu_and()
> that accepts a cpumask of allowed CPUs:
>
> s32 scx_bpf_select_cpu_and(struct task_struct *p, s32 prev_cpu,
> u64 wake_flags,
> const struct cpumask *cpus_allowed, u64 flags);
>
> Example usage
> =============
>
> s32 BPF_STRUCT_OPS(foo_select_cpu, struct task_struct *p,
> s32 prev_cpu, u64 wake_flags)
> {
> const struct cpumask *cpus = task_allowed_cpus(p) ?: p->cpus_ptr;
> s32 cpu;
>
> cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, cpus, 0);
> if (cpu >= 0) {
> scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
> return cpu;
> }
>
> return prev_cpu;
> }
>
> Results
> =======
>
> Load distribution on a 4 sockets / 4 cores per socket system, simulated
> using virtme-ng, running a modified version of scx_bpfland that uses the
> new helper scx_bpf_select_cpu_and() and 0xff00 as allowed domain:
>
> $ vng --cpu 16,sockets=4,cores=4,threads=1
> ...
> $ stress-ng -c 16
> ...
> $ htop
> ...
> 0[ 0.0%] 8[||||||||||||||||||||||||100.0%]
> 1[ 0.0%] 9[||||||||||||||||||||||||100.0%]
> 2[ 0.0%] 10[||||||||||||||||||||||||100.0%]
> 3[ 0.0%] 11[||||||||||||||||||||||||100.0%]
> 4[ 0.0%] 12[||||||||||||||||||||||||100.0%]
> 5[ 0.0%] 13[||||||||||||||||||||||||100.0%]
> 6[ 0.0%] 14[||||||||||||||||||||||||100.0%]
> 7[ 0.0%] 15[||||||||||||||||||||||||100.0%]
>
> With scx_bpf_select_cpu_dfl() tasks would be distributed evenly across all
> the available CPUs.
>
> ChangeLog v5 -> v6:
> - prevent redundant cpumask_subset() + cpumask_equal() checks in all
> patches
> - remove cpumask_subset() + cpumask_and() combo with local cpumasks, as
> cpumask_and() alone is generally more efficient
> - cleanup patches to prevent unnecessary function renames
>
> ChangeLog v4 -> v5:
> - simplify code to compute the temporary task's cpumasks (and)
>
> ChangeLog v3 -> v4:
> - keep p->nr_cpus_allowed optimizations (skip cpumask operations when the
> task can run on all CPUs)
> - allow to call scx_bpf_select_cpu_and() also from ops.enqueue() and
> modify the kselftest to cover this case as well
> - rebase to the latest sched_ext/for-6.15
>
> ChangeLog v2 -> v3:
> - incrementally refactor scx_select_cpu_dfl() to accept idle flags and an
> arbitrary allowed cpumask
> - build scx_bpf_select_cpu_and() on top of the existing logic
> - re-arrange scx_select_cpu_dfl() prototype, aligning the first three
> arguments with select_task_rq()
> - do not use "domain" for the allowed cpumask to avoid potential ambiguity
> with sched_domain
>
> ChangeLog v1 -> v2:
> - rename scx_bpf_select_cpu_pref() to scx_bpf_select_cpu_and() and always
> select idle CPUs strictly within the allowed domain
> - rename preferred CPUs -> allowed CPU
> - drop %SCX_PICK_IDLE_IN_PREF (not required anymore)
> - deprecate scx_bpf_select_cpu_dfl() in favor of scx_bpf_select_cpu_and()
> and provide all the required backward compatibility boilerplate
>
> Andrea Righi (6):
> sched_ext: idle: Extend topology optimizations to all tasks
> sched_ext: idle: Explicitly pass allowed cpumask to scx_select_cpu_dfl()
> sched_ext: idle: Accept an arbitrary cpumask in scx_select_cpu_dfl()
> sched_ext: idle: Introduce scx_bpf_select_cpu_and()
> selftests/sched_ext: Add test for scx_bpf_select_cpu_and()
> sched_ext: idle: Deprecate scx_bpf_select_cpu_dfl()
>
> Documentation/scheduler/sched-ext.rst | 11 +-
> kernel/sched/ext.c | 6 +-
> kernel/sched/ext_idle.c | 196 ++++++++++++++++-----
> kernel/sched/ext_idle.h | 3 +-
> tools/sched_ext/include/scx/common.bpf.h | 5 +-
> tools/sched_ext/include/scx/compat.bpf.h | 37 ++++
> tools/sched_ext/scx_flatcg.bpf.c | 12 +-
> tools/sched_ext/scx_simple.bpf.c | 9 +-
> tools/testing/selftests/sched_ext/Makefile | 1 +
> .../testing/selftests/sched_ext/allowed_cpus.bpf.c | 121 +++++++++++++
> tools/testing/selftests/sched_ext/allowed_cpus.c | 57 ++++++
> .../selftests/sched_ext/enq_select_cpu_fails.bpf.c | 12 +-
> .../selftests/sched_ext/enq_select_cpu_fails.c | 2 +-
> tools/testing/selftests/sched_ext/exit.bpf.c | 6 +-
> .../sched_ext/select_cpu_dfl_nodispatch.bpf.c | 13 +-
> .../sched_ext/select_cpu_dfl_nodispatch.c | 2 +-
> 16 files changed, 404 insertions(+), 89 deletions(-)
> create mode 100644 tools/testing/selftests/sched_ext/allowed_cpus.bpf.c
> create mode 100644 tools/testing/selftests/sched_ext/allowed_cpus.c