* [PATCHSET v5 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs
@ 2025-03-20 7:36 Andrea Righi
2025-03-20 7:36 ` [PATCH 1/6] sched_ext: idle: Extend topology optimizations to all tasks Andrea Righi
` (6 more replies)
0 siblings, 7 replies; 13+ messages in thread
From: Andrea Righi @ 2025-03-20 7:36 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min; +Cc: Joel Fernandes, linux-kernel
Many scx schedulers implement their own hard or soft-affinity rules to
support topology characteristics, such as heterogeneous architectures
(e.g., big.LITTLE, P-cores/E-cores), or to categorize tasks based on
specific properties (e.g., running certain tasks only in a subset of CPUs).
Currently, there is no mechanism to apply the built-in idle CPU
selection policy to an arbitrary subset of CPUs. As a result, schedulers
often implement their own idle CPU selection policies, which are typically
similar to one another, leading to a lot of code duplication.
To address this, extend the built-in idle CPU selection policy by
introducing the concept of allowed CPUs.
With this concept, BPF schedulers can apply the built-in idle CPU selection
policy to a subset of allowed CPUs, allowing them to implement their own
hard/soft-affinity rules while still using the topology optimizations of
the built-in policy, preventing code duplication across different
schedulers.
To implement this, introduce a new kfunc helper, scx_bpf_select_cpu_and(),
that accepts a cpumask of allowed CPUs:
   s32 scx_bpf_select_cpu_and(struct task_struct *p, s32 prev_cpu,
                              u64 wake_flags,
                              const struct cpumask *cpus_allowed, u64 flags);
Example usage
=============
s32 BPF_STRUCT_OPS(foo_select_cpu, struct task_struct *p,
                   s32 prev_cpu, u64 wake_flags)
{
        const struct cpumask *cpus = task_allowed_cpus(p) ?: p->cpus_ptr;
        s32 cpu;

        cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, cpus, 0);
        if (cpu >= 0) {
                scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
                return cpu;
        }

        return prev_cpu;
}
Results
=======
Load distribution on a 4 sockets / 4 cores per socket system, simulated
using virtme-ng, running a modified version of scx_bpfland that uses the
new helper scx_bpf_select_cpu_and() with 0xff00 as the allowed domain:
$ vng --cpu 16,sockets=4,cores=4,threads=1
...
$ stress-ng -c 16
...
$ htop
...
0[ 0.0%] 8[||||||||||||||||||||||||100.0%]
1[ 0.0%] 9[||||||||||||||||||||||||100.0%]
2[ 0.0%] 10[||||||||||||||||||||||||100.0%]
3[ 0.0%] 11[||||||||||||||||||||||||100.0%]
4[ 0.0%] 12[||||||||||||||||||||||||100.0%]
5[ 0.0%] 13[||||||||||||||||||||||||100.0%]
6[ 0.0%] 14[||||||||||||||||||||||||100.0%]
7[ 0.0%] 15[||||||||||||||||||||||||100.0%]
With scx_bpf_select_cpu_dfl() tasks would be distributed evenly across all
the available CPUs.
ChangeLog v4 -> v5:
- simplify the code that computes (ANDs) the task's temporary cpumasks
ChangeLog v3 -> v4:
- keep p->nr_cpus_allowed optimizations (skip cpumask operations when the
task can run on all CPUs)
- allow scx_bpf_select_cpu_and() to also be called from ops.enqueue() and
  modify the kselftest to cover this case as well
- rebase to the latest sched_ext/for-6.15
ChangeLog v2 -> v3:
- incrementally refactor scx_select_cpu_dfl() to accept idle flags and an
arbitrary allowed cpumask
- build scx_bpf_select_cpu_and() on top of the existing logic
- re-arrange scx_select_cpu_dfl() prototype, aligning the first three
arguments with select_task_rq()
- do not use "domain" for the allowed cpumask to avoid potential ambiguity
with sched_domain
ChangeLog v1 -> v2:
- rename scx_bpf_select_cpu_pref() to scx_bpf_select_cpu_and() and always
select idle CPUs strictly within the allowed domain
- rename preferred CPUs -> allowed CPUs
- drop %SCX_PICK_IDLE_IN_PREF (not required anymore)
- deprecate scx_bpf_select_cpu_dfl() in favor of scx_bpf_select_cpu_and()
and provide all the required backward compatibility boilerplate
Andrea Righi (6):
sched_ext: idle: Extend topology optimizations to all tasks
sched_ext: idle: Explicitly pass allowed cpumask to scx_select_cpu_dfl()
sched_ext: idle: Accept an arbitrary cpumask in scx_select_cpu_dfl()
sched_ext: idle: Introduce scx_bpf_select_cpu_and()
selftests/sched_ext: Add test for scx_bpf_select_cpu_and()
sched_ext: idle: Deprecate scx_bpf_select_cpu_dfl()
Documentation/scheduler/sched-ext.rst | 11 +-
kernel/sched/ext.c | 6 +-
kernel/sched/ext_idle.c | 196 ++++++++++++++++-----
kernel/sched/ext_idle.h | 3 +-
tools/sched_ext/include/scx/common.bpf.h | 5 +-
tools/sched_ext/include/scx/compat.bpf.h | 37 ++++
tools/sched_ext/scx_flatcg.bpf.c | 12 +-
tools/sched_ext/scx_simple.bpf.c | 9 +-
tools/testing/selftests/sched_ext/Makefile | 1 +
.../testing/selftests/sched_ext/allowed_cpus.bpf.c | 121 +++++++++++++
tools/testing/selftests/sched_ext/allowed_cpus.c | 57 ++++++
.../selftests/sched_ext/enq_select_cpu_fails.bpf.c | 12 +-
.../selftests/sched_ext/enq_select_cpu_fails.c | 2 +-
tools/testing/selftests/sched_ext/exit.bpf.c | 6 +-
.../sched_ext/select_cpu_dfl_nodispatch.bpf.c | 13 +-
.../sched_ext/select_cpu_dfl_nodispatch.c | 2 +-
16 files changed, 404 insertions(+), 89 deletions(-)
create mode 100644 tools/testing/selftests/sched_ext/allowed_cpus.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/allowed_cpus.c
^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH 1/6] sched_ext: idle: Extend topology optimizations to all tasks
2025-03-20 7:36 [PATCHSET v5 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs Andrea Righi
@ 2025-03-20 7:36 ` Andrea Righi
2025-03-20 16:49 ` Tejun Heo
2025-03-20 7:36 ` [PATCH 2/6] sched_ext: idle: Explicitly pass allowed cpumask to scx_select_cpu_dfl() Andrea Righi
` (5 subsequent siblings)
6 siblings, 1 reply; 13+ messages in thread
From: Andrea Righi @ 2025-03-20 7:36 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min; +Cc: Joel Fernandes, linux-kernel
The built-in idle selection policy, scx_select_cpu_dfl(), always
prioritizes picking idle CPUs within the same LLC or NUMA node, but
these optimizations are currently applied only when a task has no CPU
affinity constraints.
This is done primarily for efficiency, as it avoids the overhead of
updating a cpumask every time we need to select an idle CPU (which can
be costly in large SMP systems).
However, this approach limits the effectiveness of the built-in idle
policy and results in inconsistent behavior, as affinity-restricted
tasks don't benefit from topology-aware optimizations.
To address this, modify the policy to apply LLC and NUMA-aware
optimizations even when a task is constrained to a subset of CPUs.
We can still avoid updating the cpumasks by checking whether the LLC and
node CPUs are contained in the subset of CPUs usable by the task (which
is true in most cases, i.e., for tasks that don't have affinity
constraints).
Moreover, use temporary local per-CPU cpumasks to determine the LLC and
node subsets, minimizing potential overhead even on large SMP systems.
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
kernel/sched/ext_idle.c | 78 ++++++++++++++++++++++++++++-------------
1 file changed, 54 insertions(+), 24 deletions(-)
diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
index 52c36a70a3d04..e1e020c27c07c 100644
--- a/kernel/sched/ext_idle.c
+++ b/kernel/sched/ext_idle.c
@@ -46,6 +46,12 @@ static struct scx_idle_cpus scx_idle_global_masks;
*/
static struct scx_idle_cpus **scx_idle_node_masks;
+/*
+ * Local per-CPU cpumasks (used to generate temporary idle cpumasks).
+ */
+static DEFINE_PER_CPU(cpumask_var_t, local_llc_idle_cpumask);
+static DEFINE_PER_CPU(cpumask_var_t, local_numa_idle_cpumask);
+
/*
* Return the idle masks associated to a target @node.
*
@@ -391,6 +397,30 @@ void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops)
static_branch_disable_cpuslocked(&scx_selcpu_topo_numa);
}
+/*
+ * Return the subset of @cpus that task @p can use or NULL if none of the
+ * CPUs in the @cpus cpumask can be used.
+ */
+static const struct cpumask *task_cpumask(const struct task_struct *p, const struct cpumask *cpus,
+ struct cpumask *local_cpus)
+{
+ /*
+ * If the task is allowed to run on all CPUs, simply use the
+ * architecture's cpumask directly. Otherwise, compute the
+ * intersection of the architecture's cpumask and the task's
+ * allowed cpumask.
+ */
+ if (!cpus || p->nr_cpus_allowed >= num_possible_cpus() ||
+ cpumask_subset(cpus, p->cpus_ptr))
+ return cpus;
+
+ if (!cpumask_equal(cpus, p->cpus_ptr) &&
+ cpumask_and(local_cpus, cpus, p->cpus_ptr))
+ return local_cpus;
+
+ return NULL;
+}
+
/*
* Built-in CPU idle selection policy:
*
@@ -426,8 +456,7 @@ void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops)
*/
s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64 flags)
{
- const struct cpumask *llc_cpus = NULL;
- const struct cpumask *numa_cpus = NULL;
+ const struct cpumask *llc_cpus = NULL, *numa_cpus = NULL;
int node = scx_cpu_node_if_enabled(prev_cpu);
s32 cpu;
@@ -437,23 +466,16 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
rcu_read_lock();
/*
- * Determine the scheduling domain only if the task is allowed to run
- * on all CPUs.
- *
- * This is done primarily for efficiency, as it avoids the overhead of
- * updating a cpumask every time we need to select an idle CPU (which
- * can be costly in large SMP systems), but it also aligns logically:
- * if a task's scheduling domain is restricted by user-space (through
- * CPU affinity), the task will simply use the flat scheduling domain
- * defined by user-space.
+ * Determine the subset of CPUs that the task can use in its
+ * current LLC and node.
*/
- if (p->nr_cpus_allowed >= num_possible_cpus()) {
- if (static_branch_maybe(CONFIG_NUMA, &scx_selcpu_topo_numa))
- numa_cpus = numa_span(prev_cpu);
+ if (static_branch_maybe(CONFIG_NUMA, &scx_selcpu_topo_numa))
+ numa_cpus = task_cpumask(p, numa_span(prev_cpu),
+ this_cpu_cpumask_var_ptr(local_numa_idle_cpumask));
- if (static_branch_maybe(CONFIG_SCHED_MC, &scx_selcpu_topo_llc))
- llc_cpus = llc_span(prev_cpu);
- }
+ if (static_branch_maybe(CONFIG_SCHED_MC, &scx_selcpu_topo_llc))
+ llc_cpus = task_cpumask(p, llc_span(prev_cpu),
+ this_cpu_cpumask_var_ptr(local_llc_idle_cpumask));
/*
* If WAKE_SYNC, try to migrate the wakee to the waker's CPU.
@@ -598,7 +620,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
*/
void scx_idle_init_masks(void)
{
- int node;
+ int i;
/* Allocate global idle cpumasks */
BUG_ON(!alloc_cpumask_var(&scx_idle_global_masks.cpu, GFP_KERNEL));
@@ -609,13 +631,21 @@ void scx_idle_init_masks(void)
sizeof(*scx_idle_node_masks), GFP_KERNEL);
BUG_ON(!scx_idle_node_masks);
- for_each_node(node) {
- scx_idle_node_masks[node] = kzalloc_node(sizeof(**scx_idle_node_masks),
- GFP_KERNEL, node);
- BUG_ON(!scx_idle_node_masks[node]);
+ for_each_node(i) {
+ scx_idle_node_masks[i] = kzalloc_node(sizeof(**scx_idle_node_masks),
+ GFP_KERNEL, i);
+ BUG_ON(!scx_idle_node_masks[i]);
+
+ BUG_ON(!alloc_cpumask_var_node(&scx_idle_node_masks[i]->cpu, GFP_KERNEL, i));
+ BUG_ON(!alloc_cpumask_var_node(&scx_idle_node_masks[i]->smt, GFP_KERNEL, i));
+ }
- BUG_ON(!alloc_cpumask_var_node(&scx_idle_node_masks[node]->cpu, GFP_KERNEL, node));
- BUG_ON(!alloc_cpumask_var_node(&scx_idle_node_masks[node]->smt, GFP_KERNEL, node));
+ /* Allocate local per-cpu idle cpumasks */
+ for_each_possible_cpu(i) {
+ BUG_ON(!alloc_cpumask_var_node(&per_cpu(local_llc_idle_cpumask, i),
+ GFP_KERNEL, cpu_to_node(i)));
+ BUG_ON(!alloc_cpumask_var_node(&per_cpu(local_numa_idle_cpumask, i),
+ GFP_KERNEL, cpu_to_node(i)));
}
}
--
2.48.1
* [PATCH 2/6] sched_ext: idle: Explicitly pass allowed cpumask to scx_select_cpu_dfl()
2025-03-20 7:36 [PATCHSET v5 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs Andrea Righi
2025-03-20 7:36 ` [PATCH 1/6] sched_ext: idle: Extend topology optimizations to all tasks Andrea Righi
@ 2025-03-20 7:36 ` Andrea Righi
2025-03-21 10:15 ` changwoo
2025-03-20 7:36 ` [PATCH 3/6] sched_ext: idle: Accept an arbitrary cpumask in scx_select_cpu_dfl() Andrea Righi
` (4 subsequent siblings)
6 siblings, 1 reply; 13+ messages in thread
From: Andrea Righi @ 2025-03-20 7:36 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min; +Cc: Joel Fernandes, linux-kernel
Modify scx_select_cpu_dfl() to take the allowed cpumask as an explicit
argument, instead of implicitly using @p->cpus_ptr.
This prepares for future changes where arbitrary cpumasks may be passed
to the built-in idle CPU selection policy.
This is a pure refactoring with no functional changes.
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
kernel/sched/ext.c | 2 +-
kernel/sched/ext_idle.c | 45 ++++++++++++++++++++++++++---------------
kernel/sched/ext_idle.h | 3 ++-
3 files changed, 32 insertions(+), 18 deletions(-)
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 06561d6717c9a..f42352e8d889e 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -3395,7 +3395,7 @@ static int select_task_rq_scx(struct task_struct *p, int prev_cpu, int wake_flag
} else {
s32 cpu;
- cpu = scx_select_cpu_dfl(p, prev_cpu, wake_flags, 0);
+ cpu = scx_select_cpu_dfl(p, prev_cpu, wake_flags, p->cpus_ptr, 0);
if (cpu >= 0) {
p->scx.slice = SCX_SLICE_DFL;
p->scx.ddsp_dsq_id = SCX_DSQ_LOCAL;
diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
index e1e020c27c07c..a90d85bce1ccb 100644
--- a/kernel/sched/ext_idle.c
+++ b/kernel/sched/ext_idle.c
@@ -397,11 +397,19 @@ void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops)
static_branch_disable_cpuslocked(&scx_selcpu_topo_numa);
}
+static inline bool task_allowed_all_cpus(const struct task_struct *p)
+{
+ return p->nr_cpus_allowed >= num_possible_cpus();
+}
+
/*
- * Return the subset of @cpus that task @p can use or NULL if none of the
- * CPUs in the @cpus cpumask can be used.
+ * Return the subset of @cpus that task @p can use, according to
+ * @cpus_allowed, or NULL if none of the CPUs in the @cpus cpumask can be
+ * used.
*/
-static const struct cpumask *task_cpumask(const struct task_struct *p, const struct cpumask *cpus,
+static const struct cpumask *task_cpumask(const struct task_struct *p,
+ const struct cpumask *cpus_allowed,
+ const struct cpumask *cpus,
struct cpumask *local_cpus)
{
/*
@@ -410,12 +418,10 @@ static const struct cpumask *task_cpumask(const struct task_struct *p, const str
* intersection of the architecture's cpumask and the task's
* allowed cpumask.
*/
- if (!cpus || p->nr_cpus_allowed >= num_possible_cpus() ||
- cpumask_subset(cpus, p->cpus_ptr))
+ if (!cpus || task_allowed_all_cpus(p) || cpumask_subset(cpus, cpus_allowed))
return cpus;
- if (!cpumask_equal(cpus, p->cpus_ptr) &&
- cpumask_and(local_cpus, cpus, p->cpus_ptr))
+ if (cpumask_and(local_cpus, cpus, cpus_allowed))
return local_cpus;
return NULL;
@@ -454,7 +460,8 @@ static const struct cpumask *task_cpumask(const struct task_struct *p, const str
* NOTE: tasks that can only run on 1 CPU are excluded by this logic, because
* we never call ops.select_cpu() for them, see select_task_rq().
*/
-s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64 flags)
+s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
+ const struct cpumask *cpus_allowed, u64 flags)
{
const struct cpumask *llc_cpus = NULL, *numa_cpus = NULL;
int node = scx_cpu_node_if_enabled(prev_cpu);
@@ -469,13 +476,19 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
* Determine the subset of CPUs that the task can use in its
* current LLC and node.
*/
- if (static_branch_maybe(CONFIG_NUMA, &scx_selcpu_topo_numa))
- numa_cpus = task_cpumask(p, numa_span(prev_cpu),
+ if (static_branch_maybe(CONFIG_NUMA, &scx_selcpu_topo_numa)) {
+ numa_cpus = task_cpumask(p, cpus_allowed, numa_span(prev_cpu),
this_cpu_cpumask_var_ptr(local_numa_idle_cpumask));
+ if (cpumask_equal(numa_cpus, cpus_allowed))
+ numa_cpus = NULL;
+ }
- if (static_branch_maybe(CONFIG_SCHED_MC, &scx_selcpu_topo_llc))
- llc_cpus = task_cpumask(p, llc_span(prev_cpu),
+ if (static_branch_maybe(CONFIG_SCHED_MC, &scx_selcpu_topo_llc)) {
+ llc_cpus = task_cpumask(p, cpus_allowed, llc_span(prev_cpu),
this_cpu_cpumask_var_ptr(local_llc_idle_cpumask));
+ if (cpumask_equal(llc_cpus, cpus_allowed))
+ llc_cpus = NULL;
+ }
/*
* If WAKE_SYNC, try to migrate the wakee to the waker's CPU.
@@ -512,7 +525,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
cpu_rq(cpu)->scx.local_dsq.nr == 0 &&
(!(flags & SCX_PICK_IDLE_IN_NODE) || (waker_node == node)) &&
!cpumask_empty(idle_cpumask(waker_node)->cpu)) {
- if (cpumask_test_cpu(cpu, p->cpus_ptr))
+ if (cpumask_test_cpu(cpu, cpus_allowed))
goto out_unlock;
}
}
@@ -557,7 +570,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
* begin in prev_cpu's node and proceed to other nodes in
* order of increasing distance.
*/
- cpu = scx_pick_idle_cpu(p->cpus_ptr, node, flags | SCX_PICK_IDLE_CORE);
+ cpu = scx_pick_idle_cpu(cpus_allowed, node, flags | SCX_PICK_IDLE_CORE);
if (cpu >= 0)
goto out_unlock;
@@ -605,7 +618,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
* in prev_cpu's node and proceed to other nodes in order of
* increasing distance.
*/
- cpu = scx_pick_idle_cpu(p->cpus_ptr, node, flags);
+ cpu = scx_pick_idle_cpu(cpus_allowed, node, flags);
if (cpu >= 0)
goto out_unlock;
@@ -861,7 +874,7 @@ __bpf_kfunc s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
goto prev_cpu;
#ifdef CONFIG_SMP
- cpu = scx_select_cpu_dfl(p, prev_cpu, wake_flags, 0);
+ cpu = scx_select_cpu_dfl(p, prev_cpu, wake_flags, p->cpus_ptr, 0);
if (cpu >= 0) {
*is_idle = true;
return cpu;
diff --git a/kernel/sched/ext_idle.h b/kernel/sched/ext_idle.h
index 511cc2221f7a8..37be78a7502b3 100644
--- a/kernel/sched/ext_idle.h
+++ b/kernel/sched/ext_idle.h
@@ -27,7 +27,8 @@ static inline s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node
}
#endif /* CONFIG_SMP */
-s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64 flags);
+s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
+ const struct cpumask *cpus_allowed, u64 flags);
void scx_idle_enable(struct sched_ext_ops *ops);
void scx_idle_disable(void);
int scx_idle_init(void);
--
2.48.1
* [PATCH 3/6] sched_ext: idle: Accept an arbitrary cpumask in scx_select_cpu_dfl()
2025-03-20 7:36 [PATCHSET v5 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs Andrea Righi
2025-03-20 7:36 ` [PATCH 1/6] sched_ext: idle: Extend topology optimizations to all tasks Andrea Righi
2025-03-20 7:36 ` [PATCH 2/6] sched_ext: idle: Explicitly pass allowed cpumask to scx_select_cpu_dfl() Andrea Righi
@ 2025-03-20 7:36 ` Andrea Righi
2025-03-20 7:36 ` [PATCH 4/6] sched_ext: idle: Introduce scx_bpf_select_cpu_and() Andrea Righi
` (3 subsequent siblings)
6 siblings, 0 replies; 13+ messages in thread
From: Andrea Righi @ 2025-03-20 7:36 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min; +Cc: Joel Fernandes, linux-kernel
Many scx schedulers implement their own hard or soft-affinity rules
to support topology characteristics, such as heterogeneous architectures
(e.g., big.LITTLE, P-cores/E-cores), or to categorize tasks based on
specific properties (e.g., running certain tasks only in a subset of
CPUs).
Currently, there is no mechanism to apply the built-in idle
CPU selection policy to an arbitrary subset of CPUs. As a result,
schedulers often implement their own idle CPU selection policies, which
are typically similar to one another, leading to a lot of code
duplication.
To address this, modify scx_select_cpu_dfl() to accept an arbitrary
cpumask that BPF schedulers can use to apply the existing built-in
idle CPU selection policy to a subset of allowed CPUs.
With this concept the idle CPU selection policy becomes the following:
- always prioritize CPUs from fully idle SMT cores (if SMT is enabled),
- select the same CPU if it's idle and in the allowed CPUs,
- select an idle CPU within the same LLC, if the LLC cpumask is a
subset of the allowed CPUs,
- select an idle CPU within the same node, if the node cpumask is a
subset of the allowed CPUs,
- select an idle CPU within the allowed CPUs.
This functionality will be exposed through a dedicated kfunc in a
separate patch.
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
kernel/sched/ext_idle.c | 110 +++++++++++++++++++++++++---------------
1 file changed, 69 insertions(+), 41 deletions(-)
diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
index a90d85bce1ccb..faed4f89f95e9 100644
--- a/kernel/sched/ext_idle.c
+++ b/kernel/sched/ext_idle.c
@@ -49,6 +49,7 @@ static struct scx_idle_cpus **scx_idle_node_masks;
/*
* Local per-CPU cpumasks (used to generate temporary idle cpumasks).
*/
+static DEFINE_PER_CPU(cpumask_var_t, local_idle_cpumask);
static DEFINE_PER_CPU(cpumask_var_t, local_llc_idle_cpumask);
static DEFINE_PER_CPU(cpumask_var_t, local_numa_idle_cpumask);
@@ -397,34 +398,12 @@ void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops)
static_branch_disable_cpuslocked(&scx_selcpu_topo_numa);
}
-static inline bool task_allowed_all_cpus(const struct task_struct *p)
-{
- return p->nr_cpus_allowed >= num_possible_cpus();
-}
-
/*
- * Return the subset of @cpus that task @p can use, according to
- * @cpus_allowed, or NULL if none of the CPUs in the @cpus cpumask can be
- * used.
+ * Return true if @p can run on all possible CPUs, false otherwise.
*/
-static const struct cpumask *task_cpumask(const struct task_struct *p,
- const struct cpumask *cpus_allowed,
- const struct cpumask *cpus,
- struct cpumask *local_cpus)
+static inline bool task_affinity_all(const struct task_struct *p)
{
- /*
- * If the task is allowed to run on all CPUs, simply use the
- * architecture's cpumask directly. Otherwise, compute the
- * intersection of the architecture's cpumask and the task's
- * allowed cpumask.
- */
- if (!cpus || task_allowed_all_cpus(p) || cpumask_subset(cpus, cpus_allowed))
- return cpus;
-
- if (cpumask_and(local_cpus, cpus, cpus_allowed))
- return local_cpus;
-
- return NULL;
+ return p->nr_cpus_allowed >= num_possible_cpus();
}
/*
@@ -439,13 +418,15 @@ static const struct cpumask *task_cpumask(const struct task_struct *p,
* branch prediction optimizations.
*
* 3. Pick a CPU within the same LLC (Last-Level Cache):
- * - if the above conditions aren't met, pick a CPU that shares the same LLC
- * to maintain cache locality.
+ * - if the above conditions aren't met, pick a CPU that shares the same
+ * LLC, if the LLC domain is a subset of @cpus_allowed, to maintain
+ * cache locality.
*
* 4. Pick a CPU within the same NUMA node, if enabled:
- * - choose a CPU from the same NUMA node to reduce memory access latency.
+ * - choose a CPU from the same NUMA node, if the node cpumask is a
+ * subset of @cpus_allowed, to reduce memory access latency.
*
- * 5. Pick any idle CPU usable by the task.
+ * 5. Pick any idle CPU within the @cpus_allowed domain.
*
* Step 3 and 4 are performed only if the system has, respectively,
* multiple LLCs / multiple NUMA nodes (see scx_selcpu_topo_llc and
@@ -464,9 +445,43 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
const struct cpumask *cpus_allowed, u64 flags)
{
const struct cpumask *llc_cpus = NULL, *numa_cpus = NULL;
- int node = scx_cpu_node_if_enabled(prev_cpu);
+ const struct cpumask *allowed = p->cpus_ptr;
+ int node;
s32 cpu;
+ preempt_disable();
+
+ /*
+ * Determine the subset of CPUs usable by @p within @cpus_allowed.
+ */
+ if (cpus_allowed != p->cpus_ptr) {
+ struct cpumask *local_cpus = this_cpu_cpumask_var_ptr(local_idle_cpumask);
+
+ if (task_affinity_all(p)) {
+ allowed = cpus_allowed;
+ } else if (cpumask_and(local_cpus, cpus_allowed, p->cpus_ptr)) {
+ allowed = local_cpus;
+ } else {
+ cpu = -EBUSY;
+ goto out_enable;
+ }
+ }
+
+ /*
+ * If @prev_cpu is not in the allowed domain, try to assign a new
+ * arbitrary CPU usable by the task in the allowed domain.
+ */
+ if (!cpumask_test_cpu(prev_cpu, allowed)) {
+ cpu = cpumask_any_and_distribute(p->cpus_ptr, allowed);
+ if (cpu < nr_cpu_ids) {
+ prev_cpu = cpu;
+ } else {
+ cpu = -EBUSY;
+ goto out_enable;
+ }
+ }
+ node = scx_cpu_node_if_enabled(prev_cpu);
+
/*
* This is necessary to protect llc_cpus.
*/
@@ -475,19 +490,28 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
/*
* Determine the subset of CPUs that the task can use in its
* current LLC and node.
+ *
+ * If the task can run on all CPUs, use the node and LLC cpumasks
+ * directly.
*/
if (static_branch_maybe(CONFIG_NUMA, &scx_selcpu_topo_numa)) {
- numa_cpus = task_cpumask(p, cpus_allowed, numa_span(prev_cpu),
- this_cpu_cpumask_var_ptr(local_numa_idle_cpumask));
- if (cpumask_equal(numa_cpus, cpus_allowed))
- numa_cpus = NULL;
+ struct cpumask *local_cpus = this_cpu_cpumask_var_ptr(local_numa_idle_cpumask);
+ const struct cpumask *cpus = numa_span(prev_cpu);
+
+ if (allowed == p->cpus_ptr && task_affinity_all(p))
+ numa_cpus = cpus;
+ else if (cpus && cpumask_and(local_cpus, allowed, cpus))
+ numa_cpus = local_cpus;
}
if (static_branch_maybe(CONFIG_SCHED_MC, &scx_selcpu_topo_llc)) {
- llc_cpus = task_cpumask(p, cpus_allowed, llc_span(prev_cpu),
- this_cpu_cpumask_var_ptr(local_llc_idle_cpumask));
- if (cpumask_equal(llc_cpus, cpus_allowed))
- llc_cpus = NULL;
+ struct cpumask *local_cpus = this_cpu_cpumask_var_ptr(local_llc_idle_cpumask);
+ const struct cpumask *cpus = llc_span(prev_cpu);
+
+ if (allowed == p->cpus_ptr && task_affinity_all(p))
+ llc_cpus = cpus;
+ else if (cpus && cpumask_and(local_cpus, allowed, cpus))
+ llc_cpus = local_cpus;
}
/*
@@ -525,7 +549,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
cpu_rq(cpu)->scx.local_dsq.nr == 0 &&
(!(flags & SCX_PICK_IDLE_IN_NODE) || (waker_node == node)) &&
!cpumask_empty(idle_cpumask(waker_node)->cpu)) {
- if (cpumask_test_cpu(cpu, cpus_allowed))
+ if (cpumask_test_cpu(cpu, allowed))
goto out_unlock;
}
}
@@ -570,7 +594,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
* begin in prev_cpu's node and proceed to other nodes in
* order of increasing distance.
*/
- cpu = scx_pick_idle_cpu(cpus_allowed, node, flags | SCX_PICK_IDLE_CORE);
+ cpu = scx_pick_idle_cpu(allowed, node, flags | SCX_PICK_IDLE_CORE);
if (cpu >= 0)
goto out_unlock;
@@ -618,12 +642,14 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
* in prev_cpu's node and proceed to other nodes in order of
* increasing distance.
*/
- cpu = scx_pick_idle_cpu(cpus_allowed, node, flags);
+ cpu = scx_pick_idle_cpu(allowed, node, flags);
if (cpu >= 0)
goto out_unlock;
out_unlock:
rcu_read_unlock();
+out_enable:
+ preempt_enable();
return cpu;
}
@@ -655,6 +681,8 @@ void scx_idle_init_masks(void)
/* Allocate local per-cpu idle cpumasks */
for_each_possible_cpu(i) {
+ BUG_ON(!alloc_cpumask_var_node(&per_cpu(local_idle_cpumask, i),
+ GFP_KERNEL, cpu_to_node(i)));
BUG_ON(!alloc_cpumask_var_node(&per_cpu(local_llc_idle_cpumask, i),
GFP_KERNEL, cpu_to_node(i)));
BUG_ON(!alloc_cpumask_var_node(&per_cpu(local_numa_idle_cpumask, i),
--
2.48.1
* [PATCH 4/6] sched_ext: idle: Introduce scx_bpf_select_cpu_and()
2025-03-20 7:36 [PATCHSET v5 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs Andrea Righi
` (2 preceding siblings ...)
2025-03-20 7:36 ` [PATCH 3/6] sched_ext: idle: Accept an arbitrary cpumask in scx_select_cpu_dfl() Andrea Righi
@ 2025-03-20 7:36 ` Andrea Righi
2025-03-20 7:36 ` [PATCH 5/6] selftests/sched_ext: Add test for scx_bpf_select_cpu_and() Andrea Righi
` (2 subsequent siblings)
6 siblings, 0 replies; 13+ messages in thread
From: Andrea Righi @ 2025-03-20 7:36 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min; +Cc: Joel Fernandes, linux-kernel
Provide a new kfunc, scx_bpf_select_cpu_and(), that can be used to apply
the built-in idle CPU selection policy to a subset of allowed CPUs.
This new helper is basically an extension of scx_bpf_select_cpu_dfl().
However, when an idle CPU can't be found, it returns a negative value
instead of @prev_cpu, aligning its behavior more closely with
scx_bpf_pick_idle_cpu().
It also accepts %SCX_PICK_IDLE_* flags, which can be used to enforce
strict selection to @prev_cpu's node (%SCX_PICK_IDLE_IN_NODE), or to
request only a full-idle SMT core (%SCX_PICK_IDLE_CORE), while applying
the built-in selection logic.
With this helper, BPF schedulers can apply the built-in idle CPU
selection policy restricted to any arbitrary subset of CPUs.
Example usage
=============
Possible usage in ops.select_cpu():
s32 BPF_STRUCT_OPS(foo_select_cpu, struct task_struct *p,
                   s32 prev_cpu, u64 wake_flags)
{
        const struct cpumask *cpus = task_allowed_cpus(p) ?: p->cpus_ptr;
        s32 cpu;

        cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, cpus, 0);
        if (cpu >= 0) {
                scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
                return cpu;
        }

        return prev_cpu;
}
Results
=======
Load distribution on a 4 sockets, 4 cores per socket system, simulated
using virtme-ng, running a modified version of scx_bpfland that uses
scx_bpf_select_cpu_and() with 0xff00 as the allowed subset of CPUs:
$ vng --cpu 16,sockets=4,cores=4,threads=1
...
$ stress-ng -c 16
...
$ htop
...
0[ 0.0%] 8[||||||||||||||||||||||||100.0%]
1[ 0.0%] 9[||||||||||||||||||||||||100.0%]
2[ 0.0%] 10[||||||||||||||||||||||||100.0%]
3[ 0.0%] 11[||||||||||||||||||||||||100.0%]
4[ 0.0%] 12[||||||||||||||||||||||||100.0%]
5[ 0.0%] 13[||||||||||||||||||||||||100.0%]
6[ 0.0%] 14[||||||||||||||||||||||||100.0%]
7[ 0.0%] 15[||||||||||||||||||||||||100.0%]
With scx_bpf_select_cpu_dfl() tasks would be distributed evenly across
all the available CPUs.
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
kernel/sched/ext.c | 1 +
kernel/sched/ext_idle.c | 43 ++++++++++++++++++++++++
tools/sched_ext/include/scx/common.bpf.h | 2 ++
3 files changed, 46 insertions(+)
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index f42352e8d889e..343f066c1185d 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -465,6 +465,7 @@ struct sched_ext_ops {
* idle CPU tracking and the following helpers become unavailable:
*
* - scx_bpf_select_cpu_dfl()
+ * - scx_bpf_select_cpu_and()
* - scx_bpf_test_and_clear_cpu_idle()
* - scx_bpf_pick_idle_cpu()
*
diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
index faed4f89f95e9..220e11cd0ab67 100644
--- a/kernel/sched/ext_idle.c
+++ b/kernel/sched/ext_idle.c
@@ -914,6 +914,48 @@ __bpf_kfunc s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
return prev_cpu;
}
+/**
+ * scx_bpf_select_cpu_and - Pick an idle CPU usable by task @p,
+ * prioritizing those in @cpus_allowed
+ * @p: task_struct to select a CPU for
+ * @prev_cpu: CPU @p was on previously
+ * @wake_flags: %SCX_WAKE_* flags
+ * @cpus_allowed: cpumask of allowed CPUs
+ * @flags: %SCX_PICK_IDLE* flags
+ *
+ * Can only be called from ops.select_cpu() or ops.enqueue() if the
+ * built-in CPU selection is enabled: ops.update_idle() is missing or
+ * %SCX_OPS_KEEP_BUILTIN_IDLE is set.
+ *
+ * @p, @prev_cpu and @wake_flags match ops.select_cpu().
+ *
+ * Returns the selected idle CPU, which will be automatically awakened upon
+ * returning from ops.select_cpu() and can be used for direct dispatch, or
+ * a negative value if no idle CPU is available.
+ */
+__bpf_kfunc s32 scx_bpf_select_cpu_and(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
+ const struct cpumask *cpus_allowed, u64 flags)
+{
+ s32 cpu;
+
+ if (!ops_cpu_valid(prev_cpu, NULL))
+ return -EINVAL;
+
+ if (!check_builtin_idle_enabled())
+ return -EBUSY;
+
+ if (!scx_kf_allowed(SCX_KF_SELECT_CPU | SCX_KF_ENQUEUE))
+ return -EPERM;
+
+#ifdef CONFIG_SMP
+ cpu = scx_select_cpu_dfl(p, prev_cpu, wake_flags, cpus_allowed, flags);
+#else
+ cpu = -EBUSY;
+#endif
+
+ return cpu;
+}
+
/**
* scx_bpf_get_idle_cpumask_node - Get a referenced kptr to the
* idle-tracking per-CPU cpumask of a target NUMA node.
@@ -1222,6 +1264,7 @@ static const struct btf_kfunc_id_set scx_kfunc_set_idle = {
BTF_KFUNCS_START(scx_kfunc_ids_select_cpu)
BTF_ID_FLAGS(func, scx_bpf_select_cpu_dfl, KF_RCU)
+BTF_ID_FLAGS(func, scx_bpf_select_cpu_and, KF_RCU)
BTF_KFUNCS_END(scx_kfunc_ids_select_cpu)
static const struct btf_kfunc_id_set scx_kfunc_set_select_cpu = {
diff --git a/tools/sched_ext/include/scx/common.bpf.h b/tools/sched_ext/include/scx/common.bpf.h
index dc4333d23189f..6f1da61cf7f17 100644
--- a/tools/sched_ext/include/scx/common.bpf.h
+++ b/tools/sched_ext/include/scx/common.bpf.h
@@ -48,6 +48,8 @@ static inline void ___vmlinux_h_sanity_check___(void)
s32 scx_bpf_create_dsq(u64 dsq_id, s32 node) __ksym;
s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool *is_idle) __ksym;
+s32 scx_bpf_select_cpu_and(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
+ const struct cpumask *cpus_allowed, u64 flags) __ksym __weak;
void scx_bpf_dsq_insert(struct task_struct *p, u64 dsq_id, u64 slice, u64 enq_flags) __ksym __weak;
void scx_bpf_dsq_insert_vtime(struct task_struct *p, u64 dsq_id, u64 slice, u64 vtime, u64 enq_flags) __ksym __weak;
u32 scx_bpf_dispatch_nr_slots(void) __ksym;
--
2.48.1
^ permalink raw reply related [flat|nested] 13+ messages in thread
* [PATCH 5/6] selftests/sched_ext: Add test for scx_bpf_select_cpu_and()
2025-03-20 7:36 [PATCHSET v5 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs Andrea Righi
` (3 preceding siblings ...)
2025-03-20 7:36 ` [PATCH 4/6] sched_ext: idle: Introduce scx_bpf_select_cpu_and() Andrea Righi
@ 2025-03-20 7:36 ` Andrea Righi
2025-03-20 7:36 ` [PATCH 6/6] sched_ext: idle: Deprecate scx_bpf_select_cpu_dfl() Andrea Righi
2025-03-20 14:05 ` [PATCHSET v5 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs Joel Fernandes
6 siblings, 0 replies; 13+ messages in thread
From: Andrea Righi @ 2025-03-20 7:36 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min; +Cc: Joel Fernandes, linux-kernel
Add a selftest to validate the behavior of the built-in idle CPU
selection policy applied to a subset of allowed CPUs, using
scx_bpf_select_cpu_and().
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
tools/testing/selftests/sched_ext/Makefile | 1 +
.../selftests/sched_ext/allowed_cpus.bpf.c | 121 ++++++++++++++++++
.../selftests/sched_ext/allowed_cpus.c | 57 +++++++++
3 files changed, 179 insertions(+)
create mode 100644 tools/testing/selftests/sched_ext/allowed_cpus.bpf.c
create mode 100644 tools/testing/selftests/sched_ext/allowed_cpus.c
diff --git a/tools/testing/selftests/sched_ext/Makefile b/tools/testing/selftests/sched_ext/Makefile
index f4531327b8e76..e9d5bc575f806 100644
--- a/tools/testing/selftests/sched_ext/Makefile
+++ b/tools/testing/selftests/sched_ext/Makefile
@@ -173,6 +173,7 @@ auto-test-targets := \
maybe_null \
minimal \
numa \
+ allowed_cpus \
prog_run \
reload_loop \
select_cpu_dfl \
diff --git a/tools/testing/selftests/sched_ext/allowed_cpus.bpf.c b/tools/testing/selftests/sched_ext/allowed_cpus.bpf.c
new file mode 100644
index 0000000000000..39d57f7f74099
--- /dev/null
+++ b/tools/testing/selftests/sched_ext/allowed_cpus.bpf.c
@@ -0,0 +1,121 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * A scheduler that validates the behavior of scx_bpf_select_cpu_and() by
+ * selecting idle CPUs strictly within a subset of allowed CPUs.
+ *
+ * Copyright (c) 2025 Andrea Righi <arighi@nvidia.com>
+ */
+
+#include <scx/common.bpf.h>
+
+char _license[] SEC("license") = "GPL";
+
+UEI_DEFINE(uei);
+
+private(PREF_CPUS) struct bpf_cpumask __kptr * allowed_cpumask;
+
+static void
+validate_idle_cpu(const struct task_struct *p, const struct cpumask *allowed, s32 cpu)
+{
+ if (scx_bpf_test_and_clear_cpu_idle(cpu))
+ scx_bpf_error("CPU %d should be marked as busy", cpu);
+
+ if (bpf_cpumask_subset(allowed, p->cpus_ptr) &&
+ !bpf_cpumask_test_cpu(cpu, allowed))
+ scx_bpf_error("CPU %d not in the allowed domain for %d (%s)",
+ cpu, p->pid, p->comm);
+}
+
+s32 BPF_STRUCT_OPS(allowed_cpus_select_cpu,
+ struct task_struct *p, s32 prev_cpu, u64 wake_flags)
+{
+ const struct cpumask *allowed;
+ s32 cpu;
+
+ allowed = cast_mask(allowed_cpumask);
+ if (!allowed) {
+ scx_bpf_error("allowed domain not initialized");
+ return -EINVAL;
+ }
+
+ /*
+ * Select an idle CPU strictly within the allowed domain.
+ */
+ cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, allowed, 0);
+ if (cpu >= 0) {
+ validate_idle_cpu(p, allowed, cpu);
+ scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
+
+ return cpu;
+ }
+
+ return prev_cpu;
+}
+
+void BPF_STRUCT_OPS(allowed_cpus_enqueue, struct task_struct *p, u64 enq_flags)
+{
+ const struct cpumask *allowed;
+ s32 prev_cpu = scx_bpf_task_cpu(p), cpu;
+
+ scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
+
+ allowed = cast_mask(allowed_cpumask);
+ if (!allowed) {
+ scx_bpf_error("allowed domain not initialized");
+ return;
+ }
+
+ /*
+ * Use scx_bpf_select_cpu_and() to proactively kick an idle CPU
+ * within @allowed_cpumask, usable by @p.
+ */
+ cpu = scx_bpf_select_cpu_and(p, prev_cpu, 0, allowed, 0);
+ if (cpu >= 0) {
+ validate_idle_cpu(p, allowed, cpu);
+ scx_bpf_kick_cpu(cpu, SCX_KICK_IDLE);
+ }
+}
+
+s32 BPF_STRUCT_OPS_SLEEPABLE(allowed_cpus_init)
+{
+ struct bpf_cpumask *mask;
+
+ mask = bpf_cpumask_create();
+ if (!mask)
+ return -ENOMEM;
+
+ mask = bpf_kptr_xchg(&allowed_cpumask, mask);
+ if (mask)
+ bpf_cpumask_release(mask);
+
+ bpf_rcu_read_lock();
+
+ /*
+ * Assign the first online CPU to the allowed domain.
+ */
+ mask = allowed_cpumask;
+ if (mask) {
+ const struct cpumask *online = scx_bpf_get_online_cpumask();
+
+ bpf_cpumask_set_cpu(bpf_cpumask_first(online), mask);
+ scx_bpf_put_cpumask(online);
+ }
+
+ bpf_rcu_read_unlock();
+
+ return 0;
+}
+
+void BPF_STRUCT_OPS(allowed_cpus_exit, struct scx_exit_info *ei)
+{
+ UEI_RECORD(uei, ei);
+}
+
+SEC(".struct_ops.link")
+struct sched_ext_ops allowed_cpus_ops = {
+ .select_cpu = (void *)allowed_cpus_select_cpu,
+ .enqueue = (void *)allowed_cpus_enqueue,
+ .init = (void *)allowed_cpus_init,
+ .exit = (void *)allowed_cpus_exit,
+ .name = "allowed_cpus",
+};
diff --git a/tools/testing/selftests/sched_ext/allowed_cpus.c b/tools/testing/selftests/sched_ext/allowed_cpus.c
new file mode 100644
index 0000000000000..a001a3a0e9f1f
--- /dev/null
+++ b/tools/testing/selftests/sched_ext/allowed_cpus.c
@@ -0,0 +1,57 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2025 Andrea Righi <arighi@nvidia.com>
+ */
+#include <bpf/bpf.h>
+#include <scx/common.h>
+#include <sys/wait.h>
+#include <unistd.h>
+#include "allowed_cpus.bpf.skel.h"
+#include "scx_test.h"
+
+static enum scx_test_status setup(void **ctx)
+{
+ struct allowed_cpus *skel;
+
+ skel = allowed_cpus__open();
+ SCX_FAIL_IF(!skel, "Failed to open");
+ SCX_ENUM_INIT(skel);
+ SCX_FAIL_IF(allowed_cpus__load(skel), "Failed to load skel");
+
+ *ctx = skel;
+
+ return SCX_TEST_PASS;
+}
+
+static enum scx_test_status run(void *ctx)
+{
+ struct allowed_cpus *skel = ctx;
+ struct bpf_link *link;
+
+ link = bpf_map__attach_struct_ops(skel->maps.allowed_cpus_ops);
+ SCX_FAIL_IF(!link, "Failed to attach scheduler");
+
+ /* Just sleeping is fine, plenty of scheduling events happening */
+ sleep(1);
+
+ SCX_EQ(skel->data->uei.kind, EXIT_KIND(SCX_EXIT_NONE));
+ bpf_link__destroy(link);
+
+ return SCX_TEST_PASS;
+}
+
+static void cleanup(void *ctx)
+{
+ struct allowed_cpus *skel = ctx;
+
+ allowed_cpus__destroy(skel);
+}
+
+struct scx_test allowed_cpus = {
+ .name = "allowed_cpus",
+ .description = "Verify scx_bpf_select_cpu_and()",
+ .setup = setup,
+ .run = run,
+ .cleanup = cleanup,
+};
+REGISTER_SCX_TEST(&allowed_cpus)
--
2.48.1
* [PATCH 6/6] sched_ext: idle: Deprecate scx_bpf_select_cpu_dfl()
2025-03-20 7:36 [PATCHSET v5 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs Andrea Righi
` (4 preceding siblings ...)
2025-03-20 7:36 ` [PATCH 5/6] selftests/sched_ext: Add test for scx_bpf_select_cpu_and() Andrea Righi
@ 2025-03-20 7:36 ` Andrea Righi
2025-03-20 14:05 ` [PATCHSET v5 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs Joel Fernandes
6 siblings, 0 replies; 13+ messages in thread
From: Andrea Righi @ 2025-03-20 7:36 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min; +Cc: Joel Fernandes, linux-kernel
With the introduction of scx_bpf_select_cpu_and(), we can deprecate
scx_bpf_select_cpu_dfl(), which offers only a subset of its features.
The new API is also more consistent with the other idle-related APIs,
returning a negative value when no idle CPU is found.
Therefore, mark scx_bpf_select_cpu_dfl() as deprecated (printing a
warning when it's used), update all the scheduler examples and
kselftests to adopt the new API, and ensure backward (source and binary)
compatibility by providing the necessary macros and hooks.
Support for scx_bpf_select_cpu_dfl() can be maintained until v6.17.
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
Documentation/scheduler/sched-ext.rst | 11 +++---
kernel/sched/ext.c | 3 +-
kernel/sched/ext_idle.c | 18 ++-------
tools/sched_ext/include/scx/common.bpf.h | 3 +-
tools/sched_ext/include/scx/compat.bpf.h | 37 +++++++++++++++++++
tools/sched_ext/scx_flatcg.bpf.c | 12 +++---
tools/sched_ext/scx_simple.bpf.c | 9 +++--
.../sched_ext/enq_select_cpu_fails.bpf.c | 12 +-----
.../sched_ext/enq_select_cpu_fails.c | 2 +-
tools/testing/selftests/sched_ext/exit.bpf.c | 6 ++-
.../sched_ext/select_cpu_dfl_nodispatch.bpf.c | 13 +++----
.../sched_ext/select_cpu_dfl_nodispatch.c | 2 +-
12 files changed, 73 insertions(+), 55 deletions(-)
diff --git a/Documentation/scheduler/sched-ext.rst b/Documentation/scheduler/sched-ext.rst
index 0993e41353db7..7f36f4fcf5f31 100644
--- a/Documentation/scheduler/sched-ext.rst
+++ b/Documentation/scheduler/sched-ext.rst
@@ -142,15 +142,14 @@ optional. The following modified excerpt is from
s32 prev_cpu, u64 wake_flags)
{
s32 cpu;
- /* Need to initialize or the BPF verifier will reject the program */
- bool direct = false;
- cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &direct);
-
- if (direct)
+ cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, p->cpus_ptr, 0);
+ if (cpu >= 0) {
scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
+ return cpu;
+ }
- return cpu;
+ return prev_cpu;
}
/*
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 343f066c1185d..d82e9d3cbc0dc 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -464,13 +464,12 @@ struct sched_ext_ops {
* state. By default, implementing this operation disables the built-in
* idle CPU tracking and the following helpers become unavailable:
*
- * - scx_bpf_select_cpu_dfl()
* - scx_bpf_select_cpu_and()
* - scx_bpf_test_and_clear_cpu_idle()
* - scx_bpf_pick_idle_cpu()
*
* The user also must implement ops.select_cpu() as the default
- * implementation relies on scx_bpf_select_cpu_dfl().
+ * implementation relies on scx_bpf_select_cpu_and().
*
* Specify the %SCX_OPS_KEEP_BUILTIN_IDLE flag to keep the built-in idle
* tracking.
diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
index 220e11cd0ab67..746fd36050045 100644
--- a/kernel/sched/ext_idle.c
+++ b/kernel/sched/ext_idle.c
@@ -872,26 +872,16 @@ __bpf_kfunc int scx_bpf_cpu_node(s32 cpu)
#endif
}
-/**
- * scx_bpf_select_cpu_dfl - The default implementation of ops.select_cpu()
- * @p: task_struct to select a CPU for
- * @prev_cpu: CPU @p was on previously
- * @wake_flags: %SCX_WAKE_* flags
- * @is_idle: out parameter indicating whether the returned CPU is idle
- *
- * Can only be called from ops.select_cpu() if the built-in CPU selection is
- * enabled - ops.update_idle() is missing or %SCX_OPS_KEEP_BUILTIN_IDLE is set.
- * @p, @prev_cpu and @wake_flags match ops.select_cpu().
- *
- * Returns the picked CPU with *@is_idle indicating whether the picked CPU is
- * currently idle and thus a good candidate for direct dispatching.
- */
+/* Provided for backward binary compatibility, will be removed in v6.17. */
__bpf_kfunc s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
u64 wake_flags, bool *is_idle)
{
#ifdef CONFIG_SMP
s32 cpu;
#endif
+ printk_deferred_once(KERN_WARNING
+ "sched_ext: scx_bpf_select_cpu_dfl() deprecated in favor of scx_bpf_select_cpu_and()");
+
if (!ops_cpu_valid(prev_cpu, NULL))
goto prev_cpu;
diff --git a/tools/sched_ext/include/scx/common.bpf.h b/tools/sched_ext/include/scx/common.bpf.h
index 6f1da61cf7f17..1eb790eb90d40 100644
--- a/tools/sched_ext/include/scx/common.bpf.h
+++ b/tools/sched_ext/include/scx/common.bpf.h
@@ -47,7 +47,8 @@ static inline void ___vmlinux_h_sanity_check___(void)
}
s32 scx_bpf_create_dsq(u64 dsq_id, s32 node) __ksym;
-s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool *is_idle) __ksym;
+s32 scx_bpf_select_cpu_dfl(struct task_struct *p,
+ s32 prev_cpu, u64 wake_flags, bool *is_idle) __ksym __weak;
s32 scx_bpf_select_cpu_and(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
const struct cpumask *cpus_allowed, u64 flags) __ksym __weak;
void scx_bpf_dsq_insert(struct task_struct *p, u64 dsq_id, u64 slice, u64 enq_flags) __ksym __weak;
diff --git a/tools/sched_ext/include/scx/compat.bpf.h b/tools/sched_ext/include/scx/compat.bpf.h
index 9252e1a00556f..f9caa7baf356c 100644
--- a/tools/sched_ext/include/scx/compat.bpf.h
+++ b/tools/sched_ext/include/scx/compat.bpf.h
@@ -225,6 +225,43 @@ static inline bool __COMPAT_is_enq_cpu_selected(u64 enq_flags)
scx_bpf_pick_any_cpu_node(cpus_allowed, node, flags) : \
scx_bpf_pick_any_cpu(cpus_allowed, flags))
+/**
+ * scx_bpf_select_cpu_dfl - The default implementation of ops.select_cpu().
+ * This compatibility helper will be preserved until v6.17.
+ *
+ * @p: task_struct to select a CPU for
+ * @prev_cpu: CPU @p was on previously
+ * @wake_flags: %SCX_WAKE_* flags
+ * @is_idle: out parameter indicating whether the returned CPU is idle
+ *
+ * Can only be called from ops.select_cpu() if the built-in CPU selection is
+ * enabled - ops.update_idle() is missing or %SCX_OPS_KEEP_BUILTIN_IDLE is set.
+ * @p, @prev_cpu and @wake_flags match ops.select_cpu().
+ *
+ * Returns the picked CPU with *@is_idle indicating whether the picked CPU is
+ * currently idle and thus a good candidate for direct dispatching.
+ */
+#define scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, is_idle) \
+({ \
+ s32 __cpu; \
+ \
+ if (bpf_ksym_exists(scx_bpf_select_cpu_and)) { \
+ __cpu = scx_bpf_select_cpu_and((p), (prev_cpu), (wake_flags), \
+ (p)->cpus_ptr, 0); \
+ if (__cpu >= 0) { \
+ *(is_idle) = true; \
+ } else { \
+ *(is_idle) = false; \
+ __cpu = (prev_cpu); \
+ } \
+ } else { \
+ __cpu = scx_bpf_select_cpu_dfl((p), (prev_cpu), \
+ (wake_flags), (is_idle)); \
+ } \
+ \
+ __cpu; \
+})
+
/*
* Define sched_ext_ops. This may be expanded to define multiple variants for
* backward compatibility. See compat.h::SCX_OPS_LOAD/ATTACH().
diff --git a/tools/sched_ext/scx_flatcg.bpf.c b/tools/sched_ext/scx_flatcg.bpf.c
index 2c720e3ecad59..0075bff928893 100644
--- a/tools/sched_ext/scx_flatcg.bpf.c
+++ b/tools/sched_ext/scx_flatcg.bpf.c
@@ -317,15 +317,12 @@ static void set_bypassed_at(struct task_struct *p, struct fcg_task_ctx *taskc)
s32 BPF_STRUCT_OPS(fcg_select_cpu, struct task_struct *p, s32 prev_cpu, u64 wake_flags)
{
struct fcg_task_ctx *taskc;
- bool is_idle = false;
s32 cpu;
- cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &is_idle);
-
taskc = bpf_task_storage_get(&task_ctx, p, 0, 0);
if (!taskc) {
scx_bpf_error("task_ctx lookup failed");
- return cpu;
+ return prev_cpu;
}
/*
@@ -333,13 +330,16 @@ s32 BPF_STRUCT_OPS(fcg_select_cpu, struct task_struct *p, s32 prev_cpu, u64 wake
* idle. Follow it and charge the cgroup later in fcg_stopping() after
* the fact.
*/
- if (is_idle) {
+ cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, p->cpus_ptr, 0);
+ if (cpu >= 0) {
set_bypassed_at(p, taskc);
stat_inc(FCG_STAT_LOCAL);
scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
+
+ return cpu;
}
- return cpu;
+ return prev_cpu;
}
void BPF_STRUCT_OPS(fcg_enqueue, struct task_struct *p, u64 enq_flags)
diff --git a/tools/sched_ext/scx_simple.bpf.c b/tools/sched_ext/scx_simple.bpf.c
index e6de99dba7db6..0e48b2e46a683 100644
--- a/tools/sched_ext/scx_simple.bpf.c
+++ b/tools/sched_ext/scx_simple.bpf.c
@@ -54,16 +54,17 @@ static void stat_inc(u32 idx)
s32 BPF_STRUCT_OPS(simple_select_cpu, struct task_struct *p, s32 prev_cpu, u64 wake_flags)
{
- bool is_idle = false;
s32 cpu;
- cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &is_idle);
- if (is_idle) {
+ cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, p->cpus_ptr, 0);
+ if (cpu >= 0) {
stat_inc(0); /* count local queueing */
scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
+
+ return cpu;
}
- return cpu;
+ return prev_cpu;
}
void BPF_STRUCT_OPS(simple_enqueue, struct task_struct *p, u64 enq_flags)
diff --git a/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c b/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c
index a7cf868d5e311..d3c0716aa79c9 100644
--- a/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c
+++ b/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c
@@ -9,10 +9,6 @@
char _license[] SEC("license") = "GPL";
-/* Manually specify the signature until the kfunc is added to the scx repo. */
-s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
- bool *found) __ksym;
-
s32 BPF_STRUCT_OPS(enq_select_cpu_fails_select_cpu, struct task_struct *p,
s32 prev_cpu, u64 wake_flags)
{
@@ -22,14 +18,8 @@ s32 BPF_STRUCT_OPS(enq_select_cpu_fails_select_cpu, struct task_struct *p,
void BPF_STRUCT_OPS(enq_select_cpu_fails_enqueue, struct task_struct *p,
u64 enq_flags)
{
- /*
- * Need to initialize the variable or the verifier will fail to load.
- * Improving these semantics is actively being worked on.
- */
- bool found = false;
-
/* Can only call from ops.select_cpu() */
- scx_bpf_select_cpu_dfl(p, 0, 0, &found);
+ scx_bpf_select_cpu_and(p, 0, 0, p->cpus_ptr, 0);
scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
}
diff --git a/tools/testing/selftests/sched_ext/enq_select_cpu_fails.c b/tools/testing/selftests/sched_ext/enq_select_cpu_fails.c
index a80e3a3b3698c..c964444998667 100644
--- a/tools/testing/selftests/sched_ext/enq_select_cpu_fails.c
+++ b/tools/testing/selftests/sched_ext/enq_select_cpu_fails.c
@@ -52,7 +52,7 @@ static void cleanup(void *ctx)
struct scx_test enq_select_cpu_fails = {
.name = "enq_select_cpu_fails",
- .description = "Verify we fail to call scx_bpf_select_cpu_dfl() "
+ .description = "Verify we fail to call scx_bpf_select_cpu_and() "
"from ops.enqueue()",
.setup = setup,
.run = run,
diff --git a/tools/testing/selftests/sched_ext/exit.bpf.c b/tools/testing/selftests/sched_ext/exit.bpf.c
index 4bc36182d3ffc..8122421856c1b 100644
--- a/tools/testing/selftests/sched_ext/exit.bpf.c
+++ b/tools/testing/selftests/sched_ext/exit.bpf.c
@@ -20,12 +20,14 @@ UEI_DEFINE(uei);
s32 BPF_STRUCT_OPS(exit_select_cpu, struct task_struct *p,
s32 prev_cpu, u64 wake_flags)
{
- bool found;
+ s32 cpu;
if (exit_point == EXIT_SELECT_CPU)
EXIT_CLEANLY();
- return scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags, &found);
+ cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, p->cpus_ptr, 0);
+
+ return cpu >= 0 ? cpu : prev_cpu;
}
void BPF_STRUCT_OPS(exit_enqueue, struct task_struct *p, u64 enq_flags)
diff --git a/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c
index 815f1d5d61ac4..4e1b698f710e7 100644
--- a/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c
+++ b/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c
@@ -27,10 +27,6 @@ struct {
__type(value, struct task_ctx);
} task_ctx_stor SEC(".maps");
-/* Manually specify the signature until the kfunc is added to the scx repo. */
-s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
- bool *found) __ksym;
-
s32 BPF_STRUCT_OPS(select_cpu_dfl_nodispatch_select_cpu, struct task_struct *p,
s32 prev_cpu, u64 wake_flags)
{
@@ -43,10 +39,13 @@ s32 BPF_STRUCT_OPS(select_cpu_dfl_nodispatch_select_cpu, struct task_struct *p,
return -ESRCH;
}
- cpu = scx_bpf_select_cpu_dfl(p, prev_cpu, wake_flags,
- &tctx->force_local);
+ cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, p->cpus_ptr, 0);
+ if (cpu >= 0) {
+ tctx->force_local = true;
+ return cpu;
+ }
- return cpu;
+ return prev_cpu;
}
void BPF_STRUCT_OPS(select_cpu_dfl_nodispatch_enqueue, struct task_struct *p,
diff --git a/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.c b/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.c
index 9b5d232efb7f6..2f450bb14e8d9 100644
--- a/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.c
+++ b/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.c
@@ -66,7 +66,7 @@ static void cleanup(void *ctx)
struct scx_test select_cpu_dfl_nodispatch = {
.name = "select_cpu_dfl_nodispatch",
- .description = "Verify behavior of scx_bpf_select_cpu_dfl() in "
+ .description = "Verify behavior of scx_bpf_select_cpu_and() in "
"ops.select_cpu()",
.setup = setup,
.run = run,
--
2.48.1
* Re: [PATCHSET v5 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs
2025-03-20 7:36 [PATCHSET v5 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs Andrea Righi
` (5 preceding siblings ...)
2025-03-20 7:36 ` [PATCH 6/6] sched_ext: idle: Deprecate scx_bpf_select_cpu_dfl() Andrea Righi
@ 2025-03-20 14:05 ` Joel Fernandes
2025-03-20 15:33 ` Andrea Righi
6 siblings, 1 reply; 13+ messages in thread
From: Joel Fernandes @ 2025-03-20 14:05 UTC (permalink / raw)
To: Andrea Righi, Tejun Heo, David Vernet, Changwoo Min; +Cc: linux-kernel
On 3/20/2025 8:36 AM, Andrea Righi wrote:
> Many scx schedulers implement their own hard or soft-affinity rules to
> support topology characteristics, such as heterogeneous architectures
> (e.g., big.LITTLE, P-cores/E-cores), or to categorize tasks based on
> specific properties (e.g., running certain tasks only in a subset of CPUs).
>
> Currently, there is no mechanism that allows to use the built-in idle CPU
> selection policy to an arbitrary subset of CPUs. As a result, schedulers
> often implement their own idle CPU selection policies, which are typically
> similar to one another, leading to a lot of code duplication.
>
> To address this, extend the built-in idle CPU selection policy introducing
> the concept of allowed CPUs.
>
> With this concept, BPF schedulers can apply the built-in idle CPU selection
> policy to a subset of allowed CPUs, allowing them to implement their own
> hard/soft-affinity rules while still using the topology optimizations of
> the built-in policy, preventing code duplication across different
> schedulers.
>
> To implement this introduce a new helper kfunc scx_bpf_select_cpu_and()
> that accepts a cpumask of allowed CPUs:
>
> s32 scx_bpf_select_cpu_and(struct task_struct *p, s32 prev_cpu,
> u64 wake_flags,
> const struct cpumask *cpus_allowed, u64 flags);
>
> Example usage
> =============
>
> s32 BPF_STRUCT_OPS(foo_select_cpu, struct task_struct *p,
> s32 prev_cpu, u64 wake_flags)
> {
> const struct cpumask *cpus = task_allowed_cpus(p) ?: p->cpus_ptr;
> s32 cpu;
Andrea, I'm curious why this expression cannot simply be moved into the default
select implementation. Then, for those that need a more custom mask, we could
call scx_bpf_select_cpu_and() as a second step.
Also, I think I'm missing something: what is the motivation in the existing code
for not doing LLC/NUMA-only scans if the task is restrained? Thanks for clarifying.
thanks,
- Joel
>
> cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, cpus, 0);
> if (cpu >= 0) {
> scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
> return cpu;
> }
>
> return prev_cpu;
> }
>
> Results
> =======
>
> Load distribution on a 4 sockets / 4 cores per socket system, simulated
> using virtme-ng, running a modified version of scx_bpfland that uses the
> new helper scx_bpf_select_cpu_and() and 0xff00 as allowed domain:
>
> $ vng --cpu 16,sockets=4,cores=4,threads=1
> ...
> $ stress-ng -c 16
> ...
> $ htop
> ...
> 0[ 0.0%] 8[||||||||||||||||||||||||100.0%]
> 1[ 0.0%] 9[||||||||||||||||||||||||100.0%]
> 2[ 0.0%] 10[||||||||||||||||||||||||100.0%]
> 3[ 0.0%] 11[||||||||||||||||||||||||100.0%]
> 4[ 0.0%] 12[||||||||||||||||||||||||100.0%]
> 5[ 0.0%] 13[||||||||||||||||||||||||100.0%]
> 6[ 0.0%] 14[||||||||||||||||||||||||100.0%]
> 7[ 0.0%] 15[||||||||||||||||||||||||100.0%]
>
> With scx_bpf_select_cpu_dfl() tasks would be distributed evenly across all
> the available CPUs.
>
> ChangeLog v4 -> v5:
> - simplify the code to compute (and) task's temporary cpumasks
>
> ChangeLog v3 -> v4:
> - keep p->nr_cpus_allowed optimizations (skip cpumask operations when the
> task can run on all CPUs)
> - allow to call scx_bpf_select_cpu_and() also from ops.enqueue() and
> modify the kselftest to cover this case as well
> - rebase to the latest sched_ext/for-6.15
>
> ChangeLog v2 -> v3:
> - incrementally refactor scx_select_cpu_dfl() to accept idle flags and an
> arbitrary allowed cpumask
> - build scx_bpf_select_cpu_and() on top of the existing logic
> - re-arrange scx_select_cpu_dfl() prototype, aligning the first three
> arguments with select_task_rq()
> - do not use "domain" for the allowed cpumask to avoid potential ambiguity
> with sched_domain
>
> ChangeLog v1 -> v2:
> - rename scx_bpf_select_cpu_pref() to scx_bpf_select_cpu_and() and always
> select idle CPUs strictly within the allowed domain
> - rename preferred CPUs -> allowed CPU
> - drop %SCX_PICK_IDLE_IN_PREF (not required anymore)
> - deprecate scx_bpf_select_cpu_dfl() in favor of scx_bpf_select_cpu_and()
> and provide all the required backward compatibility boilerplate
>
> Andrea Righi (6):
> sched_ext: idle: Extend topology optimizations to all tasks
> sched_ext: idle: Explicitly pass allowed cpumask to scx_select_cpu_dfl()
> sched_ext: idle: Accept an arbitrary cpumask in scx_select_cpu_dfl()
> sched_ext: idle: Introduce scx_bpf_select_cpu_and()
> selftests/sched_ext: Add test for scx_bpf_select_cpu_and()
> sched_ext: idle: Deprecate scx_bpf_select_cpu_dfl()
>
> Documentation/scheduler/sched-ext.rst | 11 +-
> kernel/sched/ext.c | 6 +-
> kernel/sched/ext_idle.c | 196 ++++++++++++++++-----
> kernel/sched/ext_idle.h | 3 +-
> tools/sched_ext/include/scx/common.bpf.h | 5 +-
> tools/sched_ext/include/scx/compat.bpf.h | 37 ++++
> tools/sched_ext/scx_flatcg.bpf.c | 12 +-
> tools/sched_ext/scx_simple.bpf.c | 9 +-
> tools/testing/selftests/sched_ext/Makefile | 1 +
> .../testing/selftests/sched_ext/allowed_cpus.bpf.c | 121 +++++++++++++
> tools/testing/selftests/sched_ext/allowed_cpus.c | 57 ++++++
> .../selftests/sched_ext/enq_select_cpu_fails.bpf.c | 12 +-
> .../selftests/sched_ext/enq_select_cpu_fails.c | 2 +-
> tools/testing/selftests/sched_ext/exit.bpf.c | 6 +-
> .../sched_ext/select_cpu_dfl_nodispatch.bpf.c | 13 +-
> .../sched_ext/select_cpu_dfl_nodispatch.c | 2 +-
> 16 files changed, 404 insertions(+), 89 deletions(-)
> create mode 100644 tools/testing/selftests/sched_ext/allowed_cpus.bpf.c
> create mode 100644 tools/testing/selftests/sched_ext/allowed_cpus.c
* Re: [PATCHSET v5 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs
2025-03-20 14:05 ` [PATCHSET v5 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs Joel Fernandes
@ 2025-03-20 15:33 ` Andrea Righi
0 siblings, 0 replies; 13+ messages in thread
From: Andrea Righi @ 2025-03-20 15:33 UTC (permalink / raw)
To: Joel Fernandes; +Cc: Tejun Heo, David Vernet, Changwoo Min, linux-kernel
On Thu, Mar 20, 2025 at 03:05:37PM +0100, Joel Fernandes wrote:
> On 3/20/2025 8:36 AM, Andrea Righi wrote:
...
> > Example usage
> > =============
> >
> > s32 BPF_STRUCT_OPS(foo_select_cpu, struct task_struct *p,
> > s32 prev_cpu, u64 wake_flags)
> > {
> > const struct cpumask *cpus = task_allowed_cpus(p) ?: p->cpus_ptr;
> > s32 cpu;
>
> Andrea, I'm curious why cannot this expression simply be moved into the default
> select implementation? And then for those that need a more custom mask, we can
> do the scx_bpf_select_cpu_and() as a second step.
Yeah, maybe the example could be improved a bit. Basically I'm doing
task_allowed_cpus(p) ?: p->cpus_ptr to highlight that you can't pass NULL
as the extra "and" cpumask (otherwise the verifier won't be happy).
Also, if you call the old scx_bpf_select_cpu_dfl(), the internal logic
already uses the same backend as scx_bpf_select_cpu_and() passing
p->cpus_ptr as @cpus_allowed.
>
> Also I think I am missing, what is the motivation in the existing code to not do
> LLC/NUMA-only scans if the task is restrained? Thanks for clarifying.
You can use the "flags" argument to restrict the selection to the current
node, setting SCX_PICK_IDLE_IN_NODE.
We currently don't have a SCX_PICK_IDLE_IN_LLC flag (it'd be nice to
introduce one), so for now the only way to restrict the selection to the
current LLC is to pass the LLC span as the additional "and" cpumask
(@cpus_allowed).
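A sketch of the two options just described, assuming the usual scx BPF headers; `llc_span()` is a hypothetical helper the scheduler itself would maintain (e.g. returning a non-NULL per-LLC bpf_cpumask), so this is illustrative, not a tested scheduler:

```c
s32 BPF_STRUCT_OPS(foo_select_cpu, struct task_struct *p,
		   s32 prev_cpu, u64 wake_flags)
{
	s32 cpu;

	/* Option 1: use the idle flags to stay within prev_cpu's NUMA node. */
	cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags,
				     p->cpus_ptr, SCX_PICK_IDLE_IN_NODE);
	if (cpu >= 0)
		goto dispatch;

	/* Option 2: there is no LLC flag, so restrict the search by passing
	 * the LLC span as the allowed cpumask (llc_span() is hypothetical
	 * and assumed to never return NULL). */
	cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags,
				     llc_span(prev_cpu), 0);
	if (cpu < 0)
		return prev_cpu;
dispatch:
	scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
	return cpu;
}
```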
Thanks,
-Andrea
>
> thanks,
>
> - Joel
>
>
>
> >
> > cpu = scx_bpf_select_cpu_and(p, prev_cpu, wake_flags, cpus, 0);
> > if (cpu >= 0) {
> > scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
> > return cpu;
> > }
> >
> > return prev_cpu;
> > }
> >
> > Results
> > =======
> >
> > Load distribution on a 4 sockets / 4 cores per socket system, simulated
> > using virtme-ng, running a modified version of scx_bpfland that uses the
> > new helper scx_bpf_select_cpu_and() and 0xff00 as allowed domain:
> >
> > $ vng --cpu 16,sockets=4,cores=4,threads=1
> > ...
> > $ stress-ng -c 16
> > ...
> > $ htop
> > ...
> > 0[ 0.0%] 8[||||||||||||||||||||||||100.0%]
> > 1[ 0.0%] 9[||||||||||||||||||||||||100.0%]
> > 2[ 0.0%] 10[||||||||||||||||||||||||100.0%]
> > 3[ 0.0%] 11[||||||||||||||||||||||||100.0%]
> > 4[ 0.0%] 12[||||||||||||||||||||||||100.0%]
> > 5[ 0.0%] 13[||||||||||||||||||||||||100.0%]
> > 6[ 0.0%] 14[||||||||||||||||||||||||100.0%]
> > 7[ 0.0%] 15[||||||||||||||||||||||||100.0%]
> >
> > With scx_bpf_select_cpu_dfl() tasks would be distributed evenly across all
> > the available CPUs.
> >
> > ChangeLog v4 -> v5:
> > - simplify the code to compute (and) task's temporary cpumasks
> >
> > ChangeLog v3 -> v4:
> > - keep p->nr_cpus_allowed optimizations (skip cpumask operations when the
> > task can run on all CPUs)
> > - allow to call scx_bpf_select_cpu_and() also from ops.enqueue() and
> > modify the kselftest to cover this case as well
> > - rebase to the latest sched_ext/for-6.15
> >
> > ChangeLog v2 -> v3:
> > - incrementally refactor scx_select_cpu_dfl() to accept idle flags and an
> > arbitrary allowed cpumask
> > - build scx_bpf_select_cpu_and() on top of the existing logic
> > - re-arrange scx_select_cpu_dfl() prototype, aligning the first three
> > arguments with select_task_rq()
> > - do not use "domain" for the allowed cpumask to avoid potential ambiguity
> > with sched_domain
> >
> > ChangeLog v1 -> v2:
> > - rename scx_bpf_select_cpu_pref() to scx_bpf_select_cpu_and() and always
> > select idle CPUs strictly within the allowed domain
> > - rename preferred CPUs -> allowed CPU
> > - drop %SCX_PICK_IDLE_IN_PREF (not required anymore)
> > - deprecate scx_bpf_select_cpu_dfl() in favor of scx_bpf_select_cpu_and()
> > and provide all the required backward compatibility boilerplate
> >
> > Andrea Righi (6):
> > sched_ext: idle: Extend topology optimizations to all tasks
> > sched_ext: idle: Explicitly pass allowed cpumask to scx_select_cpu_dfl()
> > sched_ext: idle: Accept an arbitrary cpumask in scx_select_cpu_dfl()
> > sched_ext: idle: Introduce scx_bpf_select_cpu_and()
> > selftests/sched_ext: Add test for scx_bpf_select_cpu_and()
> > sched_ext: idle: Deprecate scx_bpf_select_cpu_dfl()
> >
> > Documentation/scheduler/sched-ext.rst | 11 +-
> > kernel/sched/ext.c | 6 +-
> > kernel/sched/ext_idle.c | 196 ++++++++++++++++-----
> > kernel/sched/ext_idle.h | 3 +-
> > tools/sched_ext/include/scx/common.bpf.h | 5 +-
> > tools/sched_ext/include/scx/compat.bpf.h | 37 ++++
> > tools/sched_ext/scx_flatcg.bpf.c | 12 +-
> > tools/sched_ext/scx_simple.bpf.c | 9 +-
> > tools/testing/selftests/sched_ext/Makefile | 1 +
> > .../testing/selftests/sched_ext/allowed_cpus.bpf.c | 121 +++++++++++++
> > tools/testing/selftests/sched_ext/allowed_cpus.c | 57 ++++++
> > .../selftests/sched_ext/enq_select_cpu_fails.bpf.c | 12 +-
> > .../selftests/sched_ext/enq_select_cpu_fails.c | 2 +-
> > tools/testing/selftests/sched_ext/exit.bpf.c | 6 +-
> > .../sched_ext/select_cpu_dfl_nodispatch.bpf.c | 13 +-
> > .../sched_ext/select_cpu_dfl_nodispatch.c | 2 +-
> > 16 files changed, 404 insertions(+), 89 deletions(-)
> > create mode 100644 tools/testing/selftests/sched_ext/allowed_cpus.bpf.c
> > create mode 100644 tools/testing/selftests/sched_ext/allowed_cpus.c
>
* Re: [PATCH 1/6] sched_ext: idle: Extend topology optimizations to all tasks
2025-03-20 7:36 ` [PATCH 1/6] sched_ext: idle: Extend topology optimizations to all tasks Andrea Righi
@ 2025-03-20 16:49 ` Tejun Heo
2025-03-20 22:08 ` Andrea Righi
0 siblings, 1 reply; 13+ messages in thread
From: Tejun Heo @ 2025-03-20 16:49 UTC (permalink / raw)
To: Andrea Righi; +Cc: David Vernet, Changwoo Min, Joel Fernandes, linux-kernel
On Thu, Mar 20, 2025 at 08:36:41AM +0100, Andrea Righi wrote:
> +static const struct cpumask *task_cpumask(const struct task_struct *p, const struct cpumask *cpus,
> + struct cpumask *local_cpus)
> +{
> + /*
> + * If the task is allowed to run on all CPUs, simply use the
> + * architecture's cpumask directly. Otherwise, compute the
> + * intersection of the architecture's cpumask and the task's
> + * allowed cpumask.
> + */
> + if (!cpus || p->nr_cpus_allowed >= num_possible_cpus() ||
> + cpumask_subset(cpus, p->cpus_ptr))
> + return cpus;
> +
> + if (!cpumask_equal(cpus, p->cpus_ptr) &&
Weren't we talking about removing this test?
Thanks.
--
tejun
* Re: [PATCH 1/6] sched_ext: idle: Extend topology optimizations to all tasks
2025-03-20 16:49 ` Tejun Heo
@ 2025-03-20 22:08 ` Andrea Righi
0 siblings, 0 replies; 13+ messages in thread
From: Andrea Righi @ 2025-03-20 22:08 UTC (permalink / raw)
To: Tejun Heo; +Cc: David Vernet, Changwoo Min, Joel Fernandes, linux-kernel
Hi Tejun,
On Thu, Mar 20, 2025 at 06:49:47AM -1000, Tejun Heo wrote:
> On Thu, Mar 20, 2025 at 08:36:41AM +0100, Andrea Righi wrote:
> > +static const struct cpumask *task_cpumask(const struct task_struct *p, const struct cpumask *cpus,
> > + struct cpumask *local_cpus)
> > +{
> > + /*
> > + * If the task is allowed to run on all CPUs, simply use the
> > + * architecture's cpumask directly. Otherwise, compute the
> > + * intersection of the architecture's cpumask and the task's
> > + * allowed cpumask.
> > + */
> > + if (!cpus || p->nr_cpus_allowed >= num_possible_cpus() ||
> > + cpumask_subset(cpus, p->cpus_ptr))
> > + return cpus;
> > +
> > + if (!cpumask_equal(cpus, p->cpus_ptr) &&
>
> Weren't we talking about removing this test?
sorry, it's actually removed in PATCH 3/6, but I'll clean it up also in
this one.
Thanks,
-Andrea
>
> Thanks.
>
> --
> tejun
* Re: [PATCH 2/6] sched_ext: idle: Explicitly pass allowed cpumask to scx_select_cpu_dfl()
2025-03-20 7:36 ` [PATCH 2/6] sched_ext: idle: Explicitly pass allowed cpumask to scx_select_cpu_dfl() Andrea Righi
@ 2025-03-21 10:15 ` changwoo
2025-03-21 22:02 ` Andrea Righi
0 siblings, 1 reply; 13+ messages in thread
From: changwoo @ 2025-03-21 10:15 UTC (permalink / raw)
To: Andrea Righi, Tejun Heo, David Vernet; +Cc: Joel Fernandes, linux-kernel
Hi Andrea,
On 3/20/25 16:36, Andrea Righi wrote:
> Modify scx_select_cpu_dfl() to take the allowed cpumask as an explicit
> argument, instead of implicitly using @p->cpus_ptr.
>
> This prepares for future changes where arbitrary cpumasks may be passed
> to the built-in idle CPU selection policy.
>
> This is a pure refactoring with no functional changes.
>
> Signed-off-by: Andrea Righi <arighi@nvidia.com>
> ---
> kernel/sched/ext.c | 2 +-
> kernel/sched/ext_idle.c | 45 ++++++++++++++++++++++++++---------------
> kernel/sched/ext_idle.h | 3 ++-
> 3 files changed, 32 insertions(+), 18 deletions(-)
>
> diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
> index 06561d6717c9a..f42352e8d889e 100644
> --- a/kernel/sched/ext.c
> +++ b/kernel/sched/ext.c
> @@ -3395,7 +3395,7 @@ static int select_task_rq_scx(struct task_struct *p, int prev_cpu, int wake_flag
> } else {
> s32 cpu;
>
> - cpu = scx_select_cpu_dfl(p, prev_cpu, wake_flags, 0);
> + cpu = scx_select_cpu_dfl(p, prev_cpu, wake_flags, p->cpus_ptr, 0);
> if (cpu >= 0) {
> p->scx.slice = SCX_SLICE_DFL;
> p->scx.ddsp_dsq_id = SCX_DSQ_LOCAL;
> diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
> index e1e020c27c07c..a90d85bce1ccb 100644
> --- a/kernel/sched/ext_idle.c
> +++ b/kernel/sched/ext_idle.c
> @@ -397,11 +397,19 @@ void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops)
> static_branch_disable_cpuslocked(&scx_selcpu_topo_numa);
> }
>
> +static inline bool task_allowed_all_cpus(const struct task_struct *p)
> +{
> + return p->nr_cpus_allowed >= num_possible_cpus();
> +}
This function will be renamed to task_affinity_all() in patch #3.
Can we use the same name from the beginning?
That will make the commits easier to read.
> +
> /*
> - * Return the subset of @cpus that task @p can use or NULL if none of the
> - * CPUs in the @cpus cpumask can be used.
> + * Return the subset of @cpus that task @p can use, according to
> + * @cpus_allowed, or NULL if none of the CPUs in the @cpus cpumask can be
> + * used.
> */
> -static const struct cpumask *task_cpumask(const struct task_struct *p, const struct cpumask *cpus,
> +static const struct cpumask *task_cpumask(const struct task_struct *p,
> + const struct cpumask *cpus_allowed,
> + const struct cpumask *cpus,
> struct cpumask *local_cpus)
> {
> /*
> @@ -410,12 +418,10 @@ static const struct cpumask *task_cpumask(const struct task_struct *p, const str
> * intersection of the architecture's cpumask and the task's
> * allowed cpumask.
> */
> - if (!cpus || p->nr_cpus_allowed >= num_possible_cpus() ||
> - cpumask_subset(cpus, p->cpus_ptr))
> + if (!cpus || task_allowed_all_cpus(p) || cpumask_subset(cpus, cpus_allowed))
> return cpus;
>
> - if (!cpumask_equal(cpus, p->cpus_ptr) &&
> - cpumask_and(local_cpus, cpus, p->cpus_ptr))
> + if (cpumask_and(local_cpus, cpus, cpus_allowed))
> return local_cpus;
>
> return NULL;
> @@ -454,7 +460,8 @@ static const struct cpumask *task_cpumask(const struct task_struct *p, const str
> * NOTE: tasks that can only run on 1 CPU are excluded by this logic, because
> * we never call ops.select_cpu() for them, see select_task_rq().
> */
> -s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64 flags)
> +s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
> + const struct cpumask *cpus_allowed, u64 flags)
> {
> const struct cpumask *llc_cpus = NULL, *numa_cpus = NULL;
> int node = scx_cpu_node_if_enabled(prev_cpu);
> @@ -469,13 +476,19 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
> * Determine the subset of CPUs that the task can use in its
> * current LLC and node.
> */
> - if (static_branch_maybe(CONFIG_NUMA, &scx_selcpu_topo_numa))
> - numa_cpus = task_cpumask(p, numa_span(prev_cpu),
> + if (static_branch_maybe(CONFIG_NUMA, &scx_selcpu_topo_numa)) {
> + numa_cpus = task_cpumask(p, cpus_allowed, numa_span(prev_cpu),
> this_cpu_cpumask_var_ptr(local_numa_idle_cpumask));
> + if (cpumask_equal(numa_cpus, cpus_allowed))
Since task_cpumask() can return NULL, I think we should test if
numa_cpus is NULL or not here, something like this:
if (numa_cpus && cpumask_equal(numa_cpus, cpus_allowed))
> + numa_cpus = NULL;
> + }
>
> - if (static_branch_maybe(CONFIG_SCHED_MC, &scx_selcpu_topo_llc))
> - llc_cpus = task_cpumask(p, llc_span(prev_cpu),
> + if (static_branch_maybe(CONFIG_SCHED_MC, &scx_selcpu_topo_llc)) {
> + llc_cpus = task_cpumask(p, cpus_allowed, llc_span(prev_cpu),
> this_cpu_cpumask_var_ptr(local_llc_idle_cpumask));
> + if (cpumask_equal(llc_cpus, cpus_allowed))
Same here.
if (llc_cpus && cpumask_equal(llc_cpus, cpus_allowed))
> + llc_cpus = NULL;
> + }
>
> /*
> * If WAKE_SYNC, try to migrate the wakee to the waker's CPU.
> @@ -512,7 +525,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
> cpu_rq(cpu)->scx.local_dsq.nr == 0 &&
> (!(flags & SCX_PICK_IDLE_IN_NODE) || (waker_node == node)) &&
> !cpumask_empty(idle_cpumask(waker_node)->cpu)) {
> - if (cpumask_test_cpu(cpu, p->cpus_ptr))
> + if (cpumask_test_cpu(cpu, cpus_allowed))
> goto out_unlock;
> }
> }
> @@ -557,7 +570,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
> * begin in prev_cpu's node and proceed to other nodes in
> * order of increasing distance.
> */
> - cpu = scx_pick_idle_cpu(p->cpus_ptr, node, flags | SCX_PICK_IDLE_CORE);
> + cpu = scx_pick_idle_cpu(cpus_allowed, node, flags | SCX_PICK_IDLE_CORE);
> if (cpu >= 0)
> goto out_unlock;
>
> @@ -605,7 +618,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64
> * in prev_cpu's node and proceed to other nodes in order of
> * increasing distance.
> */
> - cpu = scx_pick_idle_cpu(p->cpus_ptr, node, flags);
> + cpu = scx_pick_idle_cpu(cpus_allowed, node, flags);
> if (cpu >= 0)
> goto out_unlock;
>
> @@ -861,7 +874,7 @@ __bpf_kfunc s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
> goto prev_cpu;
>
> #ifdef CONFIG_SMP
> - cpu = scx_select_cpu_dfl(p, prev_cpu, wake_flags, 0);
> + cpu = scx_select_cpu_dfl(p, prev_cpu, wake_flags, p->cpus_ptr, 0);
> if (cpu >= 0) {
> *is_idle = true;
> return cpu;
> diff --git a/kernel/sched/ext_idle.h b/kernel/sched/ext_idle.h
> index 511cc2221f7a8..37be78a7502b3 100644
> --- a/kernel/sched/ext_idle.h
> +++ b/kernel/sched/ext_idle.h
> @@ -27,7 +27,8 @@ static inline s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node
> }
> #endif /* CONFIG_SMP */
>
> -s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, u64 flags);
> +s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
> + const struct cpumask *cpus_allowed, u64 flags);
> void scx_idle_enable(struct sched_ext_ops *ops);
> void scx_idle_disable(void);
> int scx_idle_init(void);
Regards,
Changwoo Min
* Re: [PATCH 2/6] sched_ext: idle: Explicitly pass allowed cpumask to scx_select_cpu_dfl()
2025-03-21 10:15 ` changwoo
@ 2025-03-21 22:02 ` Andrea Righi
0 siblings, 0 replies; 13+ messages in thread
From: Andrea Righi @ 2025-03-21 22:02 UTC (permalink / raw)
To: changwoo; +Cc: Tejun Heo, David Vernet, Joel Fernandes, linux-kernel
On Fri, Mar 21, 2025 at 07:15:37PM +0900, changwoo wrote:
> Hi Andrea,
>
> On 3/20/25 16:36, Andrea Righi wrote:
> > Modify scx_select_cpu_dfl() to take the allowed cpumask as an explicit
> > argument, instead of implicitly using @p->cpus_ptr.
> >
> > This prepares for future changes where arbitrary cpumasks may be passed
> > to the built-in idle CPU selection policy.
> >
> > This is a pure refactoring with no functional changes.
> >
> > Signed-off-by: Andrea Righi <arighi@nvidia.com>
> > ---
> > kernel/sched/ext.c | 2 +-
> > kernel/sched/ext_idle.c | 45 ++++++++++++++++++++++++++---------------
> > kernel/sched/ext_idle.h | 3 ++-
> > 3 files changed, 32 insertions(+), 18 deletions(-)
> >
> > diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
> > index 06561d6717c9a..f42352e8d889e 100644
> > --- a/kernel/sched/ext.c
> > +++ b/kernel/sched/ext.c
> > @@ -3395,7 +3395,7 @@ static int select_task_rq_scx(struct task_struct *p, int prev_cpu, int wake_flag
> > } else {
> > s32 cpu;
> >
> > - cpu = scx_select_cpu_dfl(p, prev_cpu, wake_flags, 0);
> > + cpu = scx_select_cpu_dfl(p, prev_cpu, wake_flags, p->cpus_ptr, 0);
> > if (cpu >= 0) {
> > p->scx.slice = SCX_SLICE_DFL;
> > p->scx.ddsp_dsq_id = SCX_DSQ_LOCAL;
> > diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
> > index e1e020c27c07c..a90d85bce1ccb 100644
> > --- a/kernel/sched/ext_idle.c
> > +++ b/kernel/sched/ext_idle.c
> > @@ -397,11 +397,19 @@ void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops)
> > static_branch_disable_cpuslocked(&scx_selcpu_topo_numa);
> > }
> >
> > +static inline bool task_allowed_all_cpus(const struct task_struct *p)
> > +{
> > + return p->nr_cpus_allowed >= num_possible_cpus();
> > +}
>
> This function will be renamed to task_affinity_all() in patch #3.
> Can we use the same name from the beginning?
> That will make the commits easier to read.
Right, I'll also clean up this patch in the next version, thanks!
-Andrea
end of thread, other threads: [~2025-03-21 22:02 UTC | newest]
Thread overview: 13+ messages:
2025-03-20 7:36 [PATCHSET v5 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs Andrea Righi
2025-03-20 7:36 ` [PATCH 1/6] sched_ext: idle: Extend topology optimizations to all tasks Andrea Righi
2025-03-20 16:49 ` Tejun Heo
2025-03-20 22:08 ` Andrea Righi
2025-03-20 7:36 ` [PATCH 2/6] sched_ext: idle: Explicitly pass allowed cpumask to scx_select_cpu_dfl() Andrea Righi
2025-03-21 10:15 ` changwoo
2025-03-21 22:02 ` Andrea Righi
2025-03-20 7:36 ` [PATCH 3/6] sched_ext: idle: Accept an arbitrary cpumask in scx_select_cpu_dfl() Andrea Righi
2025-03-20 7:36 ` [PATCH 4/6] sched_ext: idle: Introduce scx_bpf_select_cpu_and() Andrea Righi
2025-03-20 7:36 ` [PATCH 5/6] selftests/sched_ext: Add test for scx_bpf_select_cpu_and() Andrea Righi
2025-03-20 7:36 ` [PATCH 6/6] sched_ext: idle: Deprecate scx_bpf_select_cpu_dfl() Andrea Righi
2025-03-20 14:05 ` [PATCHSET v5 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs Joel Fernandes
2025-03-20 15:33 ` Andrea Righi