* [PATCHSET v9 sched_ext/for-6.15] sched_ext: split global idle cpumask into per-NUMA cpumasks
@ 2025-02-06 20:15 Andrea Righi
2025-02-06 20:15 ` [PATCH 1/5] sched/topology: Introduce for_each_numa_node() iterator Andrea Righi
` (4 more replies)
0 siblings, 5 replies; 9+ messages in thread
From: Andrea Righi @ 2025-02-06 20:15 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Ian May, bpf, linux-kernel
= Overview =
As discussed during the sched_ext office hours, using a single global
cpumask to track idle CPUs can be inefficient and may not scale well on
large NUMA systems.
Therefore, split the idle cpumask into multiple per-NUMA node cpumasks to
improve scalability and performance on such large systems.
Scalability issues seem to be more noticeable on Intel Sapphire Rapids
dual-socket architectures.
= Test =
Hardware:
- System: DGX B200
- CPUs: 224 SMT threads (112 physical cores)
- Processor: INTEL(R) XEON(R) PLATINUM 8570
- 2 NUMA nodes
Scheduler:
- scx_simple [1] (so that we can focus on the built-in idle selection
  policy rather than on the scheduling policy itself)
Test:
- Run a parallel kernel build `make -j $(nproc)` and measure the average
elapsed time over 10 runs:
            avg time | stdev
           ----------+------
  before:   52.431s  | 2.895
  after:    50.342s  | 2.895
= Conclusion =
Splitting the global cpumask into multiple per-NUMA cpumasks helped to
achieve a speedup of approximately +4% with this particular architecture
and test case.
The same test on a DGX-1 (40 physical cores, Intel Xeon E5-2698 v4 @
2.20GHz, 2 NUMA nodes) shows a speedup of around 1.5-3%.
On smaller systems, I haven't noticed any measurable regressions or
improvements with the same test (parallel kernel build) and scheduler
(scx_simple).
Moreover, with a modified scx_bpfland that uses the new NUMA-aware APIs,
I observed an additional +2-2.5% performance improvement in the same test.
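For reference, the scheduler-side change is conceptually similar to the
following ops.select_cpu() sketch (simplified, not the actual scx_bpfland
code; it uses the compat wrappers introduced in the last patch, so it can
still load on kernels without the new kfuncs):

  s32 BPF_STRUCT_OPS(numa_select_cpu, struct task_struct *p,
                     s32 prev_cpu, u64 wake_flags)
  {
          /* Prefer the NUMA node where the task ran last */
          int node = __COMPAT_scx_bpf_cpu_to_node(prev_cpu);
          s32 cpu;

          /*
           * Look for a fully idle core, starting from @node and
           * expanding to more distant nodes only if none is found.
           */
          cpu = __COMPAT_scx_bpf_pick_idle_cpu_node(p->cpus_ptr, node,
                                                    SCX_PICK_IDLE_CORE);
          if (cpu >= 0)
                  return cpu;

          /* Fall back to any idle CPU, again in distance order */
          cpu = __COMPAT_scx_bpf_pick_idle_cpu_node(p->cpus_ptr, node, 0);

          return cpu >= 0 ? cpu : prev_cpu;
  }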
NOTE: splitting the global cpumask into multiple per-node cpumasks may
increase the overhead of scx_bpf_pick_idle_cpu() or ops.select_cpu() (for
schedulers relying on the built-in CPU idle selection policy) in the
presence of multiple NUMA nodes, particularly under high system load,
since we may have to scan multiple cpumasks to find an idle CPU.
However, this increased overhead appears to be more than offset by the
lower overhead of updating the idle state and by the fact that CPUs are
more likely to operate within their local idle cpumask, reducing the
stress on the cache coherency protocol.
= References =
[1] https://github.com/sched-ext/scx/blob/main/scheds/c/scx_simple.bpf.c
ChangeLog v8 -> v9:
- drop some preliminary refactoring patches that are now applied to
sched_ext/for-6.15
- rename SCX_PICK_IDLE_NODE -> SCX_PICK_IDLE_IN_NODE
- use NUMA_NO_NODE to refer to the global idle cpumask (and drop
NUMA_FLAT_NODE)
- do not implicitly translate NUMA_NO_NODE to the current node
- rename struct idle_cpumask -> idle_cpus
- drop "get_" prefix from the idle cpumask helpers
- rename for_each_numa_hop_node() -> for_each_numa_node()
- return MAX_NUMNODES from for_each_numa_node() at the end of the loop
(for coherency with other node iterators)
- do not get rid of the scx_selcpu_topo_numa logic (since it can still be
used when per-node cpumasks are not enabled)
ChangeLog v7 -> v8:
- patch set refactoring: move ext_idle.c as first patch and introduce more
preparation patches
- introduce SCX_PICK_IDLE_NODE to restrict idle CPU selection to a single
specified node
- trigger scx_ops_error() when the *_node() kfunc's are used without
enabling SCX_OPS_NODE_BUILTIN_IDLE
- check for node_possible() in validate_node()
- do node validation in the kfunc's (instead of the internal kernel
functions) and trigger scx_ops_error in case of failure
- rename idle_masks -> scx_idle_masks
- drop unused CL_ALIGNED_IF_ONSTACK
- drop unnecessary rcu_read_lock/unlock() when iterating NUMA nodes
ChangeLog v6 -> v7:
- addressed some issues based on Yury's review (thanks!)
- introduced a new iterator to navigate the NUMA nodes in order of
increasing distance
ChangeLog v5 -> v6:
- refactor patch set to introduce SCX_OPS_NODE_BUILTIN_IDLE before
the per-node cpumasks
- move idle CPU selection policy to a separate file (ext_idle.c)
(no functional change, just some code shuffling)
ChangeLog v4 -> v5:
- introduce new scx_bpf_cpu_to_node() kfunc
- provide __COMPAT_*() helpers for the new kfunc's
ChangeLog v3 -> v4:
- introduce SCX_OPS_NODE_BUILTIN_IDLE to select multiple per-node
cpumasks or single flat cpumask
- introduce new kfuncs to access per-node idle cpumasks information
- use for_each_numa_hop_mask() to traverse NUMA nodes in increasing
distance
- dropped nodemask helpers (not needed anymore)
- rebase to sched_ext/for-6.14
ChangeLog v2 -> v3:
- introduce for_each_online_node_wrap()
- re-introduce cpumask_intersects() in test_and_clear_cpu_idle() (to
reduce memory writes / cache coherence pressure)
- get rid of the redundant scx_selcpu_topo_numa logic
[test results are pretty much identical, so I haven't updated them from v2]
ChangeLog v1 -> v2:
- renamed for_each_node_mask|state_from() -> for_each_node_mask|state_wrap()
- misc cpumask optimizations (thanks to Yury)
Andrea Righi (5):
sched/topology: Introduce for_each_numa_node() iterator
sched_ext: idle: Introduce SCX_OPS_BUILTIN_IDLE_PER_NODE
sched_ext: idle: introduce SCX_PICK_IDLE_IN_NODE
sched_ext: idle: Per-node idle cpumasks
sched_ext: idle: Introduce node-aware idle cpu kfunc helpers
include/linux/topology.h | 31 ++-
kernel/sched/ext.c | 22 +-
kernel/sched/ext_idle.c | 390 ++++++++++++++++++++++++++-----
kernel/sched/ext_idle.h | 17 +-
kernel/sched/topology.c | 42 ++++
tools/sched_ext/include/scx/common.bpf.h | 4 +
tools/sched_ext/include/scx/compat.bpf.h | 19 ++
7 files changed, 462 insertions(+), 63 deletions(-)
* [PATCH 1/5] sched/topology: Introduce for_each_numa_node() iterator
2025-02-06 20:15 [PATCHSET v9 sched_ext/for-6.15] sched_ext: split global idle cpumask into per-NUMA cpumasks Andrea Righi
@ 2025-02-06 20:15 ` Andrea Righi
2025-02-07 3:57 ` Yury Norov
2025-02-06 20:15 ` [PATCH 2/5] sched_ext: idle: Introduce SCX_OPS_BUILTIN_IDLE_PER_NODE Andrea Righi
` (3 subsequent siblings)
4 siblings, 1 reply; 9+ messages in thread
From: Andrea Righi @ 2025-02-06 20:15 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Ian May, bpf, linux-kernel, Yury Norov
Introduce for_each_numa_node() and sched_numa_node() helpers to iterate
over node IDs in order of increasing NUMA distance from a given starting
node.
These iterator functions are similar to for_each_numa_hop_mask() and
sched_numa_hop_mask(), but instead of providing a cpumask at each
iteration, they provide a node ID.
Example usage:
   nodemask_t visited = NODE_MASK_NONE;
   int start = cpu_to_node(smp_processor_id());
   for_each_numa_node(node, start, visited, N_ONLINE)
           pr_info("node (%d, %d) -> %d\n",
                   start, node, node_distance(start, node));
On a system with equidistant nodes:
$ numactl -H
...
node distances:
node 0 1 2 3
0: 10 20 20 20
1: 20 10 20 20
2: 20 20 10 20
3: 20 20 20 10
Output of the example above (on node 0):
[ 7.367022] node (0, 0) -> 10
[ 7.367151] node (0, 1) -> 20
[ 7.367186] node (0, 2) -> 20
[ 7.367247] node (0, 3) -> 20
On a system with non-equidistant nodes (simulated using virtme-ng):
$ numactl -H
...
node distances:
node 0 1 2 3
0: 10 51 31 41
1: 51 10 21 61
2: 31 21 10 11
3: 41 61 11 10
Output of the example above (on node 0):
[ 8.953644] node (0, 0) -> 10
[ 8.953712] node (0, 2) -> 31
[ 8.953764] node (0, 3) -> 41
[ 8.953817] node (0, 1) -> 51
Cc: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
include/linux/topology.h | 31 ++++++++++++++++++++++++++++-
kernel/sched/topology.c | 42 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 72 insertions(+), 1 deletion(-)
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 52f5850730b3e..0c82b913a8814 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -248,12 +248,18 @@ static inline const struct cpumask *cpu_cpu_mask(int cpu)
#ifdef CONFIG_NUMA
int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node);
extern const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops);
-#else
+extern int sched_numa_node(nodemask_t *visited, int start, unsigned int state);
+#else /* !CONFIG_NUMA */
static __always_inline int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
{
return cpumask_nth_and(cpu, cpus, cpu_online_mask);
}
+static inline int sched_numa_node(nodemask_t *visited, int start, unsigned int state)
+{
+ return MAX_NUMNODES;
+}
+
static inline const struct cpumask *
sched_numa_hop_mask(unsigned int node, unsigned int hops)
{
@@ -261,6 +267,29 @@ sched_numa_hop_mask(unsigned int node, unsigned int hops)
}
#endif /* CONFIG_NUMA */
+/**
+ * for_each_numa_node - iterate over NUMA nodes at increasing hop distances
+ * from a given starting node.
+ * @node: the iteration variable, representing the current NUMA node.
+ * @start: the NUMA node to start the iteration from.
+ * @visited: a nodemask_t to track the visited nodes.
+ * @state: state of NUMA nodes to iterate.
+ *
+ * This macro iterates over NUMA nodes in increasing distance from
+ * @start and yields MAX_NUMNODES when all the nodes have been
+ * visited.
+ *
+ * The difference between for_each_node() and for_each_numa_node() is that
+ * the former allows to iterate over nodes in no particular order, whereas
+ * the latter iterates over nodes in increasing order of distance.
+ *
+ * Requires rcu_lock to be held.
+ */
+#define for_each_numa_node(node, start, visited, state) \
+ for (node = start; \
+ node != MAX_NUMNODES; \
+ node = sched_numa_node(&(visited), start, state))
+
/**
* for_each_numa_hop_mask - iterate over cpumasks of increasing NUMA distance
* from a given node.
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index da33ec9e94ab2..e1d0a33415fb5 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2183,6 +2183,48 @@ int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
}
EXPORT_SYMBOL_GPL(sched_numa_find_nth_cpu);
+/**
+ * sched_numa_node - Find the NUMA node at the closest distance from
+ * node @start.
+ *
+ * @visited: a pointer to a nodemask_t representing the visited nodes.
+ * @start: the node to start the search from.
+ * @state: the node state to filter nodes by.
+ *
+ * This function iterates over all nodes in the given state and calculates
+ * the distance to the starting node. It returns the node that is the
+ * closest in terms of distance that has not already been considered (not
+ * set in @visited and not the starting node). If the node is found, it is
+ * marked as visited in the @visited node mask.
+ *
+ * Returns the node ID closest in terms of hop distance from the @start
+ * node, or MAX_NUMNODES if no node is found (or all nodes have been
+ * visited).
+ */
+int sched_numa_node(nodemask_t *visited, int start, unsigned int state)
+{
+ int dist, n, min_node, min_dist;
+
+ min_node = MAX_NUMNODES;
+ min_dist = INT_MAX;
+
+ /* Find the nearest unvisited node */
+ for_each_node_state(n, state) {
+ if (n == start || node_isset(n, *visited))
+ continue;
+ dist = node_distance(start, n);
+ if (dist < min_dist) {
+ min_dist = dist;
+ min_node = n;
+ }
+ }
+ if (min_node != MAX_NUMNODES)
+ node_set(min_node, *visited);
+
+ return min_node;
+}
+EXPORT_SYMBOL_GPL(sched_numa_node);
+
/**
* sched_numa_hop_mask() - Get the cpumask of CPUs at most @hops hops away from
* @node
--
2.48.1
* [PATCH 2/5] sched_ext: idle: Introduce SCX_OPS_BUILTIN_IDLE_PER_NODE
2025-02-06 20:15 [PATCHSET v9 sched_ext/for-6.15] sched_ext: split global idle cpumask into per-NUMA cpumasks Andrea Righi
2025-02-06 20:15 ` [PATCH 1/5] sched/topology: Introduce for_each_numa_node() iterator Andrea Righi
@ 2025-02-06 20:15 ` Andrea Righi
2025-02-06 20:15 ` [PATCH 3/5] sched_ext: idle: introduce SCX_PICK_IDLE_IN_NODE Andrea Righi
` (2 subsequent siblings)
4 siblings, 0 replies; 9+ messages in thread
From: Andrea Righi @ 2025-02-06 20:15 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Ian May, bpf, linux-kernel
Add the new scheduler flag SCX_OPS_BUILTIN_IDLE_PER_NODE, which allows
scx schedulers to select between using a global flat idle cpumask or
multiple per-node cpumasks.
This only introduces the flag and the mechanism to enable/disable this
feature without affecting any scheduling behavior.
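For example, a scheduler opts in to the per-node cpumasks by setting the
flag in its struct sched_ext_ops (a sketch using the SCX_OPS_DEFINE()
helper from the scx tooling; the scheduler name and callbacks below are
made up):

  SCX_OPS_DEFINE(numa_ops,
                 .select_cpu  = (void *)numa_select_cpu,
                 .enqueue     = (void *)numa_enqueue,
                 .dispatch    = (void *)numa_dispatch,
                 .flags       = SCX_OPS_BUILTIN_IDLE_PER_NODE,
                 .name        = "numa");

Note that, as enforced by validate_ops() below, a scheduler that also
implements ops.update_idle() must set SCX_OPS_KEEP_BUILTIN_IDLE for this
flag to be accepted.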
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
kernel/sched/ext.c | 21 +++++++++++++++++++--
kernel/sched/ext_idle.c | 33 +++++++++++++++++++++++++--------
kernel/sched/ext_idle.h | 6 ++++--
3 files changed, 48 insertions(+), 12 deletions(-)
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 8a9a30895381a..0063a646124bc 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -125,6 +125,12 @@ enum scx_ops_flags {
*/
SCX_OPS_SWITCH_PARTIAL = 1LLU << 3,
+ /*
+ * If set, enable per-node idle cpumasks. If clear, use a single global
+ * flat idle cpumask.
+ */
+ SCX_OPS_BUILTIN_IDLE_PER_NODE = 1LLU << 4,
+
/*
* CPU cgroup support flags
*/
@@ -134,6 +140,7 @@ enum scx_ops_flags {
SCX_OPS_ENQ_LAST |
SCX_OPS_ENQ_EXITING |
SCX_OPS_SWITCH_PARTIAL |
+ SCX_OPS_BUILTIN_IDLE_PER_NODE |
SCX_OPS_HAS_CGROUP_WEIGHT,
};
@@ -3344,7 +3351,7 @@ static void handle_hotplug(struct rq *rq, bool online)
atomic_long_inc(&scx_hotplug_seq);
if (scx_enabled())
- scx_idle_update_selcpu_topology();
+ scx_idle_update_selcpu_topology(&scx_ops);
if (online && SCX_HAS_OP(cpu_online))
SCX_CALL_OP(SCX_KF_UNLOCKED, cpu_online, cpu);
@@ -5116,6 +5123,16 @@ static int validate_ops(const struct sched_ext_ops *ops)
return -EINVAL;
}
+ /*
+ * SCX_OPS_BUILTIN_IDLE_PER_NODE requires built-in CPU idle
+ * selection policy to be enabled.
+ */
+ if ((ops->flags & SCX_OPS_BUILTIN_IDLE_PER_NODE) &&
+ (ops->update_idle && !(ops->flags & SCX_OPS_KEEP_BUILTIN_IDLE))) {
+ scx_ops_error("SCX_OPS_BUILTIN_IDLE_PER_NODE requires CPU idle selection enabled");
+ return -EINVAL;
+ }
+
return 0;
}
@@ -5240,7 +5257,7 @@ static int scx_ops_enable(struct sched_ext_ops *ops, struct bpf_link *link)
static_branch_enable_cpuslocked(&scx_has_op[i]);
check_hotplug_seq(ops);
- scx_idle_update_selcpu_topology();
+ scx_idle_update_selcpu_topology(ops);
cpus_read_unlock();
diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
index cb981956005b4..a3f2b00903ac2 100644
--- a/kernel/sched/ext_idle.c
+++ b/kernel/sched/ext_idle.c
@@ -14,6 +14,9 @@
/* Enable/disable built-in idle CPU selection policy */
DEFINE_STATIC_KEY_FALSE(scx_builtin_idle_enabled);
+/* Enable/disable per-node idle cpumasks */
+DEFINE_STATIC_KEY_FALSE(scx_builtin_idle_per_node);
+
#ifdef CONFIG_SMP
#ifdef CONFIG_CPUMASK_OFFSTACK
#define CL_ALIGNED_IF_ONSTACK
@@ -204,9 +207,9 @@ static bool llc_numa_mismatch(void)
* CPU belongs to a single LLC domain, and that each LLC domain is entirely
* contained within a single NUMA node.
*/
-void scx_idle_update_selcpu_topology(void)
+void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops)
{
- bool enable_llc = false, enable_numa = false;
+ bool enable_llc = false, enable_numa = false, enable_idle_node = false;
unsigned int nr_cpus;
s32 cpu = cpumask_first(cpu_online_mask);
@@ -237,13 +240,21 @@ void scx_idle_update_selcpu_topology(void)
* If all CPUs belong to the same NUMA node and the same LLC domain,
* enabling both NUMA and LLC optimizations is unnecessary, as checking
* for an idle CPU in the same domain twice is redundant.
+ *
+ * If SCX_OPS_BUILTIN_IDLE_PER_NODE is enabled ignore the NUMA
+ * optimization, as we would naturally select idle CPUs within
+ * specific NUMA nodes querying the corresponding per-node cpumask.
*/
- nr_cpus = numa_weight(cpu);
- if (nr_cpus > 0) {
- if (nr_cpus < num_online_cpus() && llc_numa_mismatch())
- enable_numa = true;
- pr_debug("sched_ext: NUMA=%*pb weight=%u\n",
- cpumask_pr_args(numa_span(cpu)), numa_weight(cpu));
+ if (ops->flags & SCX_OPS_BUILTIN_IDLE_PER_NODE) {
+ enable_idle_node = true;
+ } else {
+ nr_cpus = numa_weight(cpu);
+ if (nr_cpus > 0) {
+ if (nr_cpus < num_online_cpus() && llc_numa_mismatch())
+ enable_numa = true;
+ pr_debug("sched_ext: NUMA=%*pb weight=%u\n",
+ cpumask_pr_args(numa_span(cpu)), numa_weight(cpu));
+ }
}
rcu_read_unlock();
@@ -251,6 +262,8 @@ void scx_idle_update_selcpu_topology(void)
str_enabled_disabled(enable_llc));
pr_debug("sched_ext: NUMA idle selection %s\n",
str_enabled_disabled(enable_numa));
+ pr_debug("sched_ext: per-node idle cpumask %s\n",
+ str_enabled_disabled(enable_idle_node));
if (enable_llc)
static_branch_enable_cpuslocked(&scx_selcpu_topo_llc);
@@ -260,6 +273,10 @@ void scx_idle_update_selcpu_topology(void)
static_branch_enable_cpuslocked(&scx_selcpu_topo_numa);
else
static_branch_disable_cpuslocked(&scx_selcpu_topo_numa);
+ if (enable_idle_node)
+ static_branch_enable_cpuslocked(&scx_builtin_idle_per_node);
+ else
+ static_branch_disable_cpuslocked(&scx_builtin_idle_per_node);
}
/*
diff --git a/kernel/sched/ext_idle.h b/kernel/sched/ext_idle.h
index 7a13a74815ba7..d005bd22c19a5 100644
--- a/kernel/sched/ext_idle.h
+++ b/kernel/sched/ext_idle.h
@@ -10,19 +10,21 @@
#ifndef _KERNEL_SCHED_EXT_IDLE_H
#define _KERNEL_SCHED_EXT_IDLE_H
+struct sched_ext_ops;
+
extern struct static_key_false scx_builtin_idle_enabled;
#ifdef CONFIG_SMP
extern struct static_key_false scx_selcpu_topo_llc;
extern struct static_key_false scx_selcpu_topo_numa;
-void scx_idle_update_selcpu_topology(void);
+void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops);
void scx_idle_reset_masks(void);
void scx_idle_init_masks(void);
bool scx_idle_test_and_clear_cpu(int cpu);
s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags);
#else /* !CONFIG_SMP */
-static inline void scx_idle_update_selcpu_topology(void) {}
+static inline void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops) {}
static inline void scx_idle_reset_masks(void) {}
static inline void scx_idle_init_masks(void) {}
static inline bool scx_idle_test_and_clear_cpu(int cpu) { return false; }
--
2.48.1
* [PATCH 3/5] sched_ext: idle: introduce SCX_PICK_IDLE_IN_NODE
2025-02-06 20:15 [PATCHSET v9 sched_ext/for-6.15] sched_ext: split global idle cpumask into per-NUMA cpumasks Andrea Righi
2025-02-06 20:15 ` [PATCH 1/5] sched/topology: Introduce for_each_numa_node() iterator Andrea Righi
2025-02-06 20:15 ` [PATCH 2/5] sched_ext: idle: Introduce SCX_OPS_BUILTIN_IDLE_PER_NODE Andrea Righi
@ 2025-02-06 20:15 ` Andrea Righi
2025-02-06 20:15 ` [PATCH 4/5] sched_ext: idle: Per-node idle cpumasks Andrea Righi
2025-02-06 20:15 ` [PATCH 5/5] sched_ext: idle: Introduce node-aware idle cpu kfunc helpers Andrea Righi
4 siblings, 0 replies; 9+ messages in thread
From: Andrea Righi @ 2025-02-06 20:15 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Ian May, bpf, linux-kernel
Introduce a new flag to restrict the selection of an idle CPU to a
single specific NUMA node.
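For example (a sketch of the intended usage; the node-aware signature of
scx_pick_idle_cpu() is added later in this series):

  /* Consider only idle CPUs within @node: */
  cpu = scx_pick_idle_cpu(p->cpus_ptr, node, SCX_PICK_IDLE_IN_NODE);

  /* Start from @node, but fall back to more distant nodes if needed: */
  cpu = scx_pick_idle_cpu(p->cpus_ptr, node, 0);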
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
kernel/sched/ext.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 0063a646124bc..8dbe22167c158 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -775,6 +775,7 @@ enum scx_deq_flags {
enum scx_pick_idle_cpu_flags {
SCX_PICK_IDLE_CORE = 1LLU << 0, /* pick a CPU whose SMT siblings are also idle */
+ SCX_PICK_IDLE_IN_NODE = 1LLU << 1, /* pick a CPU in the same target NUMA node */
};
enum scx_kick_flags {
--
2.48.1
* [PATCH 4/5] sched_ext: idle: Per-node idle cpumasks
2025-02-06 20:15 [PATCHSET v9 sched_ext/for-6.15] sched_ext: split global idle cpumask into per-NUMA cpumasks Andrea Righi
` (2 preceding siblings ...)
2025-02-06 20:15 ` [PATCH 3/5] sched_ext: idle: introduce SCX_PICK_IDLE_IN_NODE Andrea Righi
@ 2025-02-06 20:15 ` Andrea Righi
2025-02-06 20:15 ` [PATCH 5/5] sched_ext: idle: Introduce node-aware idle cpu kfunc helpers Andrea Righi
4 siblings, 0 replies; 9+ messages in thread
From: Andrea Righi @ 2025-02-06 20:15 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Ian May, bpf, linux-kernel
Using a single global idle mask can lead to inefficiencies and a lot of
stress on the cache coherency protocol on large systems with multiple
NUMA nodes, since all the CPUs can generate intense read/write activity
on the single global cpumask.
Therefore, split the global cpumask into multiple per-NUMA node cpumasks
to improve scalability and performance on large systems.
The idea is that each cpumask tracks only the idle CPUs within its
corresponding NUMA node, treating CPUs in other NUMA nodes as busy. In
this way, concurrent access to the idle cpumask is mostly confined
within each NUMA node.
The split into per-node idle cpumasks is controlled by the
SCX_OPS_BUILTIN_IDLE_PER_NODE flag.
By default SCX_OPS_BUILTIN_IDLE_PER_NODE is not enabled and a global
host-wide idle cpumask is used, maintaining the previous behavior.
NOTE: if a scheduler explicitly enables the per-node idle cpumasks (via
SCX_OPS_BUILTIN_IDLE_PER_NODE), scx_bpf_get_idle_cpu/smtmask() will
trigger an scx error, since there are no system-wide cpumasks.
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
kernel/sched/ext_idle.c | 236 ++++++++++++++++++++++++++++++++--------
kernel/sched/ext_idle.h | 11 +-
2 files changed, 197 insertions(+), 50 deletions(-)
diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
index a3f2b00903ac2..0deef6987f76e 100644
--- a/kernel/sched/ext_idle.c
+++ b/kernel/sched/ext_idle.c
@@ -18,25 +18,88 @@ DEFINE_STATIC_KEY_FALSE(scx_builtin_idle_enabled);
DEFINE_STATIC_KEY_FALSE(scx_builtin_idle_per_node);
#ifdef CONFIG_SMP
-#ifdef CONFIG_CPUMASK_OFFSTACK
-#define CL_ALIGNED_IF_ONSTACK
-#else
-#define CL_ALIGNED_IF_ONSTACK __cacheline_aligned_in_smp
-#endif
-
/* Enable/disable LLC aware optimizations */
DEFINE_STATIC_KEY_FALSE(scx_selcpu_topo_llc);
/* Enable/disable NUMA aware optimizations */
DEFINE_STATIC_KEY_FALSE(scx_selcpu_topo_numa);
-static struct {
+/*
+ * cpumasks to track idle CPUs within each NUMA node.
+ *
+ * If SCX_OPS_BUILTIN_IDLE_PER_NODE is not enabled, a single global cpumask
+ * is used to track all the idle CPUs in the system.
+ */
+struct idle_cpus {
cpumask_var_t cpu;
cpumask_var_t smt;
-} idle_masks CL_ALIGNED_IF_ONSTACK;
+};
+
+/*
+ * Global host-wide idle cpumasks (used when SCX_OPS_BUILTIN_IDLE_PER_NODE
+ * is not enabled).
+ */
+static struct idle_cpus scx_idle_global_masks;
+
+/*
+ * Per-node idle cpumasks.
+ */
+static struct idle_cpus **scx_idle_node_masks;
+
+/*
+ * Initialize per-node idle cpumasks.
+ *
+ * In case of a single NUMA node or if NUMA support is disabled, only a
+ * single global host-wide cpumask will be initialized.
+ */
+void scx_idle_init_masks(void)
+{
+ int node;
+
+ /* Allocate global idle cpumasks */
+ BUG_ON(!alloc_cpumask_var(&scx_idle_global_masks.cpu, GFP_KERNEL));
+ BUG_ON(!alloc_cpumask_var(&scx_idle_global_masks.smt, GFP_KERNEL));
+
+ /* Allocate per-node idle cpumasks */
+ scx_idle_node_masks = kcalloc(num_possible_nodes(),
+ sizeof(*scx_idle_node_masks), GFP_KERNEL);
+ BUG_ON(!scx_idle_node_masks);
+
+ for_each_node(node) {
+ scx_idle_node_masks[node] = kzalloc_node(sizeof(**scx_idle_node_masks),
+ GFP_KERNEL, node);
+ BUG_ON(!scx_idle_node_masks[node]);
+
+ BUG_ON(!alloc_cpumask_var_node(&scx_idle_node_masks[node]->cpu, GFP_KERNEL, node));
+ BUG_ON(!alloc_cpumask_var_node(&scx_idle_node_masks[node]->smt, GFP_KERNEL, node));
+ }
+}
+
+/*
+ * Return the idle masks associated to a target @node.
+ */
+static struct idle_cpus *idle_cpumask(int node)
+{
+ return node == NUMA_NO_NODE ? &scx_idle_global_masks : scx_idle_node_masks[node];
+}
+
+/*
+ * Return the node id associated to a target idle CPU (used to determine
+ * the proper idle cpumask).
+ */
+static int idle_cpu_to_node(int cpu)
+{
+ if (!static_branch_maybe(CONFIG_NUMA, &scx_builtin_idle_per_node))
+ return NUMA_NO_NODE;
+
+ return cpu_to_node(cpu);
+}
bool scx_idle_test_and_clear_cpu(int cpu)
{
+ int node = idle_cpu_to_node(cpu);
+ struct cpumask *idle_cpus = idle_cpumask(node)->cpu;
+
#ifdef CONFIG_SCHED_SMT
/*
* SMT mask should be cleared whether we can claim @cpu or not. The SMT
@@ -45,33 +108,38 @@ bool scx_idle_test_and_clear_cpu(int cpu)
*/
if (sched_smt_active()) {
const struct cpumask *smt = cpu_smt_mask(cpu);
+ struct cpumask *idle_smts = idle_cpumask(node)->smt;
/*
* If offline, @cpu is not its own sibling and
* scx_pick_idle_cpu() can get caught in an infinite loop as
- * @cpu is never cleared from idle_masks.smt. Ensure that @cpu
- * is eventually cleared.
+ * @cpu is never cleared from the idle SMT mask. Ensure that
+ * @cpu is eventually cleared.
*
* NOTE: Use cpumask_intersects() and cpumask_test_cpu() to
* reduce memory writes, which may help alleviate cache
* coherence pressure.
*/
- if (cpumask_intersects(smt, idle_masks.smt))
- cpumask_andnot(idle_masks.smt, idle_masks.smt, smt);
- else if (cpumask_test_cpu(cpu, idle_masks.smt))
- __cpumask_clear_cpu(cpu, idle_masks.smt);
+ if (cpumask_intersects(smt, idle_smts))
+ cpumask_andnot(idle_smts, idle_smts, smt);
+ else if (cpumask_test_cpu(cpu, idle_smts))
+ __cpumask_clear_cpu(cpu, idle_smts);
}
#endif
- return cpumask_test_and_clear_cpu(cpu, idle_masks.cpu);
+
+ return cpumask_test_and_clear_cpu(cpu, idle_cpus);
}
-s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags)
+/*
+ * Pick an idle CPU in a specific NUMA node.
+ */
+s32 pick_idle_cpu_from_node(const struct cpumask *cpus_allowed, int node, u64 flags)
{
int cpu;
retry:
if (sched_smt_active()) {
- cpu = cpumask_any_and_distribute(idle_masks.smt, cpus_allowed);
+ cpu = cpumask_any_and_distribute(idle_cpumask(node)->smt, cpus_allowed);
if (cpu < nr_cpu_ids)
goto found;
@@ -79,7 +147,7 @@ s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags)
return -EBUSY;
}
- cpu = cpumask_any_and_distribute(idle_masks.cpu, cpus_allowed);
+ cpu = cpumask_any_and_distribute(idle_cpumask(node)->cpu, cpus_allowed);
if (cpu >= nr_cpu_ids)
return -EBUSY;
@@ -90,6 +158,49 @@ s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags)
goto retry;
}
+/*
+ * Find the best idle CPU in the system, relative to @node.
+ */
+s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags)
+{
+ nodemask_t visited = NODE_MASK_NONE;
+ s32 cpu = -EBUSY;
+ int n;
+
+ if (!static_branch_maybe(CONFIG_NUMA, &scx_builtin_idle_per_node))
+ return pick_idle_cpu_from_node(cpus_allowed, NUMA_NO_NODE, flags);
+
+ /*
+ * Traverse all nodes in order of increasing distance, starting
+ * from @node.
+ *
+ * This loop is O(N^2), with N being the amount of NUMA nodes,
+ * which might be quite expensive in large NUMA systems. However,
+ * this complexity comes into play only when a scheduler enables
+ * SCX_OPS_BUILTIN_IDLE_PER_NODE and it's requesting an idle CPU
+ * without specifying a target NUMA node, so it shouldn't be a
+ * bottleneck in most cases.
+ *
+ * As a future optimization we may want to cache the list of hop
+ * nodes in a per-node array, instead of actually traversing them
+ * every time.
+ */
+ for_each_numa_node(n, node, visited, N_POSSIBLE) {
+ cpu = pick_idle_cpu_from_node(cpus_allowed, n, flags);
+ if (cpu >= 0)
+ break;
+
+ /*
+ * Check if the search is restricted to the same core or
+ * the same node.
+ */
+ if (flags & SCX_PICK_IDLE_IN_NODE)
+ break;
+ }
+
+ return cpu;
+}
+
/*
* Return the amount of CPUs in the same LLC domain of @cpu (or zero if the LLC
* domain is not defined).
@@ -310,6 +421,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool
{
const struct cpumask *llc_cpus = NULL;
const struct cpumask *numa_cpus = NULL;
+ int node = idle_cpu_to_node(prev_cpu);
s32 cpu;
*found = false;
@@ -367,9 +479,9 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool
* piled up on it even if there is an idle core elsewhere on
* the system.
*/
- if (!cpumask_empty(idle_masks.cpu) &&
- !(current->flags & PF_EXITING) &&
- cpu_rq(cpu)->scx.local_dsq.nr == 0) {
+ if (!(current->flags & PF_EXITING) &&
+ cpu_rq(cpu)->scx.local_dsq.nr == 0 &&
+ !cpumask_empty(idle_cpumask(cpu_to_node(cpu))->cpu)) {
if (cpumask_test_cpu(cpu, p->cpus_ptr))
goto cpu_found;
}
@@ -383,7 +495,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool
/*
* Keep using @prev_cpu if it's part of a fully idle core.
*/
- if (cpumask_test_cpu(prev_cpu, idle_masks.smt) &&
+ if (cpumask_test_cpu(prev_cpu, idle_cpumask(node)->smt) &&
scx_idle_test_and_clear_cpu(prev_cpu)) {
cpu = prev_cpu;
goto cpu_found;
@@ -393,7 +505,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool
* Search for any fully idle core in the same LLC domain.
*/
if (llc_cpus) {
- cpu = scx_pick_idle_cpu(llc_cpus, SCX_PICK_IDLE_CORE);
+ cpu = pick_idle_cpu_from_node(llc_cpus, node, SCX_PICK_IDLE_CORE);
if (cpu >= 0)
goto cpu_found;
}
@@ -402,15 +514,19 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool
* Search for any fully idle core in the same NUMA node.
*/
if (numa_cpus) {
- cpu = scx_pick_idle_cpu(numa_cpus, SCX_PICK_IDLE_CORE);
+ cpu = pick_idle_cpu_from_node(numa_cpus, node, SCX_PICK_IDLE_CORE);
if (cpu >= 0)
goto cpu_found;
}
/*
* Search for any full idle core usable by the task.
+ *
+ * If NUMA aware idle selection is enabled, the search will
+ * begin in prev_cpu's node and proceed to other nodes in
+ * order of increasing distance.
*/
- cpu = scx_pick_idle_cpu(p->cpus_ptr, SCX_PICK_IDLE_CORE);
+ cpu = scx_pick_idle_cpu(p->cpus_ptr, node, SCX_PICK_IDLE_CORE);
if (cpu >= 0)
goto cpu_found;
}
@@ -427,7 +543,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool
* Search for any idle CPU in the same LLC domain.
*/
if (llc_cpus) {
- cpu = scx_pick_idle_cpu(llc_cpus, 0);
+ cpu = pick_idle_cpu_from_node(llc_cpus, node, 0);
if (cpu >= 0)
goto cpu_found;
}
@@ -436,7 +552,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool
* Search for any idle CPU in the same NUMA node.
*/
if (numa_cpus) {
- cpu = scx_pick_idle_cpu(numa_cpus, 0);
+ cpu = pick_idle_cpu_from_node(numa_cpus, node, 0);
if (cpu >= 0)
goto cpu_found;
}
@@ -444,7 +560,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool
/*
* Search for any idle CPU usable by the task.
*/
- cpu = scx_pick_idle_cpu(p->cpus_ptr, 0);
+ cpu = scx_pick_idle_cpu(p->cpus_ptr, node, 0);
if (cpu >= 0)
goto cpu_found;
@@ -460,38 +576,50 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool
void scx_idle_reset_masks(void)
{
+ int node;
+
+ if (!static_branch_maybe(CONFIG_NUMA, &scx_builtin_idle_per_node)) {
+ cpumask_copy(idle_cpumask(NUMA_NO_NODE)->cpu, cpu_online_mask);
+ cpumask_copy(idle_cpumask(NUMA_NO_NODE)->smt, cpu_online_mask);
+ return;
+ }
+
/*
* Consider all online cpus idle. Should converge to the actual state
* quickly.
*/
- cpumask_copy(idle_masks.cpu, cpu_online_mask);
- cpumask_copy(idle_masks.smt, cpu_online_mask);
-}
+ for_each_node(node) {
+ const struct cpumask *node_mask = cpumask_of_node(node);
+ struct cpumask *idle_cpus = idle_cpumask(node)->cpu;
+ struct cpumask *idle_smts = idle_cpumask(node)->smt;
-void scx_idle_init_masks(void)
-{
- BUG_ON(!alloc_cpumask_var(&idle_masks.cpu, GFP_KERNEL));
- BUG_ON(!alloc_cpumask_var(&idle_masks.smt, GFP_KERNEL));
+ cpumask_and(idle_cpus, cpu_online_mask, node_mask);
+ cpumask_copy(idle_smts, idle_cpus);
+ }
}
static void update_builtin_idle(int cpu, bool idle)
{
- assign_cpu(cpu, idle_masks.cpu, idle);
+ int node = idle_cpu_to_node(cpu);
+ struct cpumask *idle_cpus = idle_cpumask(node)->cpu;
+
+ assign_cpu(cpu, idle_cpus, idle);
#ifdef CONFIG_SCHED_SMT
if (sched_smt_active()) {
const struct cpumask *smt = cpu_smt_mask(cpu);
+ struct cpumask *idle_smts = idle_cpumask(node)->smt;
if (idle) {
/*
- * idle_masks.smt handling is racy but that's fine as
- * it's only for optimization and self-correcting.
+ * idle_smt handling is racy but that's fine as it's
+ * only for optimization and self-correcting.
*/
- if (!cpumask_subset(smt, idle_masks.cpu))
+ if (!cpumask_subset(smt, idle_cpus))
return;
- cpumask_or(idle_masks.smt, idle_masks.smt, smt);
+ cpumask_or(idle_smts, idle_smts, smt);
} else {
- cpumask_andnot(idle_masks.smt, idle_masks.smt, smt);
+ cpumask_andnot(idle_smts, idle_smts, smt);
}
}
#endif
@@ -599,15 +727,21 @@ __bpf_kfunc s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
* scx_bpf_get_idle_cpumask - Get a referenced kptr to the idle-tracking
* per-CPU cpumask.
*
- * Returns NULL if idle tracking is not enabled, or running on a UP kernel.
+ * Returns an empty mask if idle tracking is not enabled, or running on a
+ * UP kernel.
*/
__bpf_kfunc const struct cpumask *scx_bpf_get_idle_cpumask(void)
{
if (!check_builtin_idle_enabled())
return cpu_none_mask;
+ if (static_branch_unlikely(&scx_builtin_idle_per_node)) {
+ scx_ops_error("SCX_OPS_BUILTIN_IDLE_PER_NODE enabled");
+ return cpu_none_mask;
+ }
+
#ifdef CONFIG_SMP
- return idle_masks.cpu;
+ return idle_cpumask(NUMA_NO_NODE)->cpu;
#else
return cpu_none_mask;
#endif
@@ -618,18 +752,24 @@ __bpf_kfunc const struct cpumask *scx_bpf_get_idle_cpumask(void)
* per-physical-core cpumask. Can be used to determine if an entire physical
* core is free.
*
- * Returns NULL if idle tracking is not enabled, or running on a UP kernel.
+ * Returns an empty mask if idle tracking is not enabled, or running on a
+ * UP kernel.
*/
__bpf_kfunc const struct cpumask *scx_bpf_get_idle_smtmask(void)
{
if (!check_builtin_idle_enabled())
return cpu_none_mask;
+ if (static_branch_unlikely(&scx_builtin_idle_per_node)) {
+ scx_ops_error("SCX_OPS_BUILTIN_IDLE_PER_NODE enabled");
+ return cpu_none_mask;
+ }
+
#ifdef CONFIG_SMP
if (sched_smt_active())
- return idle_masks.smt;
+ return idle_cpumask(NUMA_NO_NODE)->smt;
else
- return idle_masks.cpu;
+ return idle_cpumask(NUMA_NO_NODE)->cpu;
#else
return cpu_none_mask;
#endif
@@ -696,7 +836,7 @@ __bpf_kfunc s32 scx_bpf_pick_idle_cpu(const struct cpumask *cpus_allowed,
if (!check_builtin_idle_enabled())
return -EBUSY;
- return scx_pick_idle_cpu(cpus_allowed, flags);
+ return scx_pick_idle_cpu(cpus_allowed, NUMA_NO_NODE, flags);
}
/**
@@ -719,7 +859,7 @@ __bpf_kfunc s32 scx_bpf_pick_any_cpu(const struct cpumask *cpus_allowed,
s32 cpu;
if (static_branch_likely(&scx_builtin_idle_enabled)) {
- cpu = scx_pick_idle_cpu(cpus_allowed, flags);
+ cpu = scx_pick_idle_cpu(cpus_allowed, NUMA_NO_NODE, flags);
if (cpu >= 0)
return cpu;
}
diff --git a/kernel/sched/ext_idle.h b/kernel/sched/ext_idle.h
index d005bd22c19a5..b00ed5ad48e89 100644
--- a/kernel/sched/ext_idle.h
+++ b/kernel/sched/ext_idle.h
@@ -13,6 +13,7 @@
struct sched_ext_ops;
extern struct static_key_false scx_builtin_idle_enabled;
+extern struct static_key_false scx_builtin_idle_per_node;
#ifdef CONFIG_SMP
extern struct static_key_false scx_selcpu_topo_llc;
@@ -22,13 +23,18 @@ void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops);
void scx_idle_reset_masks(void);
void scx_idle_init_masks(void);
bool scx_idle_test_and_clear_cpu(int cpu);
-s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags);
+s32 pick_idle_cpu_from_node(const struct cpumask *cpus_allowed, int node, u64 flags);
+s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags);
#else /* !CONFIG_SMP */
static inline void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops) {}
static inline void scx_idle_reset_masks(void) {}
static inline void scx_idle_init_masks(void) {}
static inline bool scx_idle_test_and_clear_cpu(int cpu) { return false; }
-static inline s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags)
+static inline s32 pick_idle_cpu_from_node(const struct cpumask *cpus_allowed, int node, u64 flags)
+{
+ return -EBUSY;
+}
+static inline s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags)
{
return -EBUSY;
}
@@ -36,6 +42,7 @@ static inline s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flag
s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool *found);
+extern void scx_idle_init_masks(void);
extern int scx_idle_init(void);
#endif /* _KERNEL_SCHED_EXT_IDLE_H */
--
2.48.1
* [PATCH 5/5] sched_ext: idle: Introduce node-aware idle cpu kfunc helpers
2025-02-06 20:15 [PATCHSET v9 sched_ext/for-6.15] sched_ext: split global idle cpumask into per-NUMA cpumasks Andrea Righi
` (3 preceding siblings ...)
2025-02-06 20:15 ` [PATCH 4/5] sched_ext: idle: Per-node idle cpumasks Andrea Righi
@ 2025-02-06 20:15 ` Andrea Righi
4 siblings, 0 replies; 9+ messages in thread
From: Andrea Righi @ 2025-02-06 20:15 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Ian May, bpf, linux-kernel
Add the following kfunc's to provide scx schedulers direct access to
per-node idle cpumask information:
int scx_bpf_cpu_to_node(s32 cpu)
const struct cpumask *scx_bpf_get_idle_cpumask_node(int node)
const struct cpumask *scx_bpf_get_idle_smtmask_node(int node)
s32 scx_bpf_pick_idle_cpu_node(const cpumask_t *cpus_allowed,
int node, u64 flags)
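For example, from ops.select_cpu() a scheduler can check whether the
previous CPU's node still has fully idle cores and, if so, try to claim
one without leaving that node (a sketch; it assumes the node cpumasks
are released with scx_bpf_put_idle_cpumask(), like the global ones, and
uses the generic bpf_cpumask_empty() kfunc):

  int node = scx_bpf_cpu_to_node(prev_cpu);
  const struct cpumask *idle_smt = scx_bpf_get_idle_smtmask_node(node);
  bool has_idle_core = !bpf_cpumask_empty(idle_smt);
  s32 cpu = -1;

  scx_bpf_put_idle_cpumask(idle_smt);

  if (has_idle_core)
          cpu = scx_bpf_pick_idle_cpu_node(p->cpus_ptr, node,
                                           SCX_PICK_IDLE_CORE |
                                           SCX_PICK_IDLE_IN_NODE);
  if (cpu < 0)
          cpu = scx_bpf_pick_idle_cpu_node(p->cpus_ptr, node, 0);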
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
kernel/sched/ext_idle.c | 121 +++++++++++++++++++++++
tools/sched_ext/include/scx/common.bpf.h | 4 +
tools/sched_ext/include/scx/compat.bpf.h | 19 ++++
3 files changed, 144 insertions(+)
diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
index 0deef6987f76e..df8cd76c86fdd 100644
--- a/kernel/sched/ext_idle.c
+++ b/kernel/sched/ext_idle.c
@@ -680,6 +680,33 @@ void __scx_update_idle(struct rq *rq, bool idle, bool do_notify)
/********************************************************************************
* Helpers that can be called from the BPF scheduler.
*/
+
+static int validate_node(int node)
+{
+ if (!static_branch_likely(&scx_builtin_idle_per_node)) {
+ scx_ops_error("per-node idle tracking is disabled");
+ return -ENOTSUPP;
+ }
+
+ /* Return no entry for NUMA_NO_NODE (not a critical scx error) */
+ if (node == NUMA_NO_NODE)
+ return -ENOENT;
+
+ /* Make sure node is in a valid range */
+ if (node < 0 || node >= nr_node_ids) {
+ scx_ops_error("invalid node %d", node);
+ return -EINVAL;
+ }
+
+ /* Make sure the node is part of the set of possible nodes */
+ if (!node_possible(node)) {
+ scx_ops_error("unavailable node %d", node);
+ return -EINVAL;
+ }
+
+ return node;
+}
+
__bpf_kfunc_start_defs();
static bool check_builtin_idle_enabled(void)
@@ -691,6 +718,21 @@ static bool check_builtin_idle_enabled(void)
return false;
}
+/**
+ * scx_bpf_cpu_to_node - Return the NUMA node the given @cpu belongs to
+ */
+__bpf_kfunc int scx_bpf_cpu_to_node(s32 cpu)
+{
+#ifdef CONFIG_NUMA
+ if (cpu < 0 || cpu >= nr_cpu_ids)
+ return -EINVAL;
+
+ return idle_cpu_to_node(cpu);
+#else
+ return 0;
+#endif
+}
+
/**
* scx_bpf_select_cpu_dfl - The default implementation of ops.select_cpu()
* @p: task_struct to select a CPU for
@@ -723,6 +765,27 @@ __bpf_kfunc s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
return prev_cpu;
}
+/**
+ * scx_bpf_get_idle_cpumask_node - Get a referenced kptr to the
+ * idle-tracking per-CPU cpumask of a target NUMA node.
+ *
+ * Returns an empty cpumask if idle tracking is not enabled, if @node is
+ * not valid, or running on a UP kernel. In this case the actual error will
+ * be reported to the BPF scheduler via scx_ops_error().
+ */
+__bpf_kfunc const struct cpumask *scx_bpf_get_idle_cpumask_node(int node)
+{
+ node = validate_node(node);
+ if (node < 0)
+ return cpu_none_mask;
+
+#ifdef CONFIG_SMP
+ return idle_cpumask(node)->cpu;
+#else
+ return cpu_none_mask;
+#endif
+}
+
/**
* scx_bpf_get_idle_cpumask - Get a referenced kptr to the idle-tracking
* per-CPU cpumask.
@@ -747,6 +810,31 @@ __bpf_kfunc const struct cpumask *scx_bpf_get_idle_cpumask(void)
#endif
}
+/**
+ * scx_bpf_get_idle_smtmask_node - Get a referenced kptr to the
+ * idle-tracking, per-physical-core cpumask of a target NUMA node. Can be
+ * used to determine if an entire physical core is free.
+ *
+ * Returns an empty cpumask if idle tracking is not enabled, if @node is
+ * not valid, or running on a UP kernel. In this case the actual error will
+ * be reported to the BPF scheduler via scx_ops_error().
+ */
+__bpf_kfunc const struct cpumask *scx_bpf_get_idle_smtmask_node(int node)
+{
+ node = validate_node(node);
+ if (node < 0)
+ return cpu_none_mask;
+
+#ifdef CONFIG_SMP
+ if (sched_smt_active())
+ return idle_cpumask(node)->smt;
+ else
+ return idle_cpumask(node)->cpu;
+#else
+ return cpu_none_mask;
+#endif
+}
+
/**
* scx_bpf_get_idle_smtmask - Get a referenced kptr to the idle-tracking,
* per-physical-core cpumask. Can be used to determine if an entire physical
@@ -811,6 +899,35 @@ __bpf_kfunc bool scx_bpf_test_and_clear_cpu_idle(s32 cpu)
return false;
}
+/**
+ * scx_bpf_pick_idle_cpu_node - Pick and claim an idle cpu from a NUMA node
+ * @cpus_allowed: Allowed cpumask
+ * @node: target NUMA node
+ * @flags: %SCX_PICK_IDLE_* flags
+ *
+ * Pick and claim an idle cpu in @cpus_allowed from the NUMA node @node.
+ *
+ * Returns the picked idle cpu number on success, or -%EBUSY if no matching
+ * cpu was found.
+ *
+ * The search starts from @node and proceeds to other online NUMA nodes in
+ * order of increasing distance (unless SCX_PICK_IDLE_IN_NODE is specified,
+ * in which case the search is limited to the target @node).
+ *
+ * Always returns an error if ops.update_idle() is implemented and
+ * %SCX_OPS_KEEP_BUILTIN_IDLE is not set, or if
+ * %SCX_OPS_BUILTIN_IDLE_PER_NODE is not set.
+ */
+__bpf_kfunc s32 scx_bpf_pick_idle_cpu_node(const struct cpumask *cpus_allowed,
+ int node, u64 flags)
+{
+ node = validate_node(node);
+ if (node < 0)
+ return node;
+
+ return scx_pick_idle_cpu(cpus_allowed, node, flags);
+}
+
/**
* scx_bpf_pick_idle_cpu - Pick and claim an idle cpu
* @cpus_allowed: Allowed cpumask
@@ -874,10 +991,14 @@ __bpf_kfunc s32 scx_bpf_pick_any_cpu(const struct cpumask *cpus_allowed,
__bpf_kfunc_end_defs();
BTF_KFUNCS_START(scx_kfunc_ids_idle)
+BTF_ID_FLAGS(func, scx_bpf_cpu_to_node)
+BTF_ID_FLAGS(func, scx_bpf_get_idle_cpumask_node, KF_ACQUIRE)
BTF_ID_FLAGS(func, scx_bpf_get_idle_cpumask, KF_ACQUIRE)
+BTF_ID_FLAGS(func, scx_bpf_get_idle_smtmask_node, KF_ACQUIRE)
BTF_ID_FLAGS(func, scx_bpf_get_idle_smtmask, KF_ACQUIRE)
BTF_ID_FLAGS(func, scx_bpf_put_idle_cpumask, KF_RELEASE)
BTF_ID_FLAGS(func, scx_bpf_test_and_clear_cpu_idle)
+BTF_ID_FLAGS(func, scx_bpf_pick_idle_cpu_node, KF_RCU)
BTF_ID_FLAGS(func, scx_bpf_pick_idle_cpu, KF_RCU)
BTF_ID_FLAGS(func, scx_bpf_pick_any_cpu, KF_RCU)
BTF_KFUNCS_END(scx_kfunc_ids_idle)
diff --git a/tools/sched_ext/include/scx/common.bpf.h b/tools/sched_ext/include/scx/common.bpf.h
index 7055400030241..7dd8ba2964553 100644
--- a/tools/sched_ext/include/scx/common.bpf.h
+++ b/tools/sched_ext/include/scx/common.bpf.h
@@ -63,13 +63,17 @@ u32 scx_bpf_cpuperf_cap(s32 cpu) __ksym __weak;
u32 scx_bpf_cpuperf_cur(s32 cpu) __ksym __weak;
void scx_bpf_cpuperf_set(s32 cpu, u32 perf) __ksym __weak;
u32 scx_bpf_nr_cpu_ids(void) __ksym __weak;
+int scx_bpf_cpu_to_node(s32 cpu) __ksym __weak;
const struct cpumask *scx_bpf_get_possible_cpumask(void) __ksym __weak;
const struct cpumask *scx_bpf_get_online_cpumask(void) __ksym __weak;
void scx_bpf_put_cpumask(const struct cpumask *cpumask) __ksym __weak;
+const struct cpumask *scx_bpf_get_idle_cpumask_node(int node) __ksym __weak;
const struct cpumask *scx_bpf_get_idle_cpumask(void) __ksym;
+const struct cpumask *scx_bpf_get_idle_smtmask_node(int node) __ksym __weak;
const struct cpumask *scx_bpf_get_idle_smtmask(void) __ksym;
void scx_bpf_put_idle_cpumask(const struct cpumask *cpumask) __ksym;
bool scx_bpf_test_and_clear_cpu_idle(s32 cpu) __ksym;
+s32 scx_bpf_pick_idle_cpu_node(const cpumask_t *cpus_allowed, int node, u64 flags) __ksym __weak;
s32 scx_bpf_pick_idle_cpu(const cpumask_t *cpus_allowed, u64 flags) __ksym;
s32 scx_bpf_pick_any_cpu(const cpumask_t *cpus_allowed, u64 flags) __ksym;
bool scx_bpf_task_running(const struct task_struct *p) __ksym;
diff --git a/tools/sched_ext/include/scx/compat.bpf.h b/tools/sched_ext/include/scx/compat.bpf.h
index 50e1499ae0935..caa1a80f9a60c 100644
--- a/tools/sched_ext/include/scx/compat.bpf.h
+++ b/tools/sched_ext/include/scx/compat.bpf.h
@@ -130,6 +130,25 @@ bool scx_bpf_dispatch_vtime_from_dsq___compat(struct bpf_iter_scx_dsq *it__iter,
scx_bpf_now() : \
bpf_ktime_get_ns())
+#define __COMPAT_scx_bpf_cpu_to_node(cpu) \
+ (bpf_ksym_exists(scx_bpf_cpu_to_node) ? \
+ scx_bpf_cpu_to_node(cpu) : 0)
+
+#define __COMPAT_scx_bpf_get_idle_cpumask_node(node) \
+ (bpf_ksym_exists(scx_bpf_get_idle_cpumask_node) ? \
+ scx_bpf_get_idle_cpumask_node(node) : \
+ scx_bpf_get_idle_cpumask()) \
+
+#define __COMPAT_scx_bpf_get_idle_smtmask_node(node) \
+ (bpf_ksym_exists(scx_bpf_get_idle_smtmask_node) ? \
+ scx_bpf_get_idle_smtmask_node(node) : \
+ scx_bpf_get_idle_smtmask())
+
+#define __COMPAT_scx_bpf_pick_idle_cpu_node(cpus_allowed, node, flags) \
+ (bpf_ksym_exists(scx_bpf_pick_idle_cpu_node) ? \
+ scx_bpf_pick_idle_cpu_node(cpus_allowed, node, flags) : \
+ scx_bpf_pick_idle_cpu(cpus_allowed, flags))
+
/*
* Define sched_ext_ops. This may be expanded to define multiple variants for
* backward compatibility. See compat.h::SCX_OPS_LOAD/ATTACH().
--
2.48.1
* Re: [PATCH 1/5] sched/topology: Introduce for_each_numa_node() iterator
2025-02-06 20:15 ` [PATCH 1/5] sched/topology: Introduce for_each_numa_node() iterator Andrea Righi
@ 2025-02-07 3:57 ` Yury Norov
2025-02-07 15:44 ` Andrea Righi
0 siblings, 1 reply; 9+ messages in thread
From: Yury Norov @ 2025-02-07 3:57 UTC (permalink / raw)
To: Andrea Righi
Cc: Tejun Heo, David Vernet, Changwoo Min, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
Ian May, bpf, linux-kernel
On Thu, Feb 06, 2025 at 09:15:31PM +0100, Andrea Righi wrote:
> Introduce for_each_numa_node() and sched_numa_node() helpers to iterate
> over node IDs in order of increasing NUMA distance from a given starting
> node.
>
> These iterator functions are similar to for_each_numa_hop_mask() and
> sched_numa_hop_mask(), but instead of providing a cpumask at each
> iteration, they provide a node ID.
>
> Example usage:
>
> nodemask_t visited = NODE_MASK_NONE;
> int start = cpu_to_node(smp_processor_id());
>
> for_each_numa_node(node, start, visited, N_ONLINE)
> pr_info("node (%d, %d) -> %d\n",
> start, node, node_distance(start, node));
>
> On a system with equidistant nodes:
>
> $ numactl -H
> ...
> node distances:
> node 0 1 2 3
> 0: 10 20 20 20
> 1: 20 10 20 20
> 2: 20 20 10 20
> 3: 20 20 20 10
>
> Output of the example above (on node 0):
>
> [ 7.367022] node (0, 0) -> 10
> [ 7.367151] node (0, 1) -> 20
> [ 7.367186] node (0, 2) -> 20
> [ 7.367247] node (0, 3) -> 20
>
> On a system with non-equidistant nodes (simulated using virtme-ng):
>
> $ numactl -H
> ...
> node distances:
> node 0 1 2 3
> 0: 10 51 31 41
> 1: 51 10 21 61
> 2: 31 21 10 11
> 3: 41 61 11 10
>
> Output of the example above (on node 0):
>
> [ 8.953644] node (0, 0) -> 10
> [ 8.953712] node (0, 2) -> 31
> [ 8.953764] node (0, 3) -> 41
> [ 8.953817] node (0, 1) -> 51
>
> Cc: Yury Norov <yury.norov@gmail.com>
> Signed-off-by: Andrea Righi <arighi@nvidia.com>
Please keep me posted for the whole series.
> ---
> include/linux/topology.h | 31 ++++++++++++++++++++++++++++-
> kernel/sched/topology.c | 42 ++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 72 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/topology.h b/include/linux/topology.h
> index 52f5850730b3e..0c82b913a8814 100644
> --- a/include/linux/topology.h
> +++ b/include/linux/topology.h
> @@ -248,12 +248,18 @@ static inline const struct cpumask *cpu_cpu_mask(int cpu)
> #ifdef CONFIG_NUMA
> int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node);
> extern const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops);
> -#else
> +extern int sched_numa_node(nodemask_t *visited, int start, unsigned int state);
> +#else /* !CONFIG_NUMA */
> static __always_inline int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
> {
> return cpumask_nth_and(cpu, cpus, cpu_online_mask);
> }
>
> +static inline int sched_numa_node(nodemask_t *visited, int start, unsigned int state)
> +{
> + return MAX_NUMNODES;
> +}
> +
> static inline const struct cpumask *
> sched_numa_hop_mask(unsigned int node, unsigned int hops)
> {
> @@ -261,6 +267,29 @@ sched_numa_hop_mask(unsigned int node, unsigned int hops)
> }
> #endif /* CONFIG_NUMA */
>
> +/**
> + * for_each_numa_node - iterate over NUMA nodes at increasing hop distances
> + * from a given starting node.
> + * @node: the iteration variable, representing the current NUMA node.
> + * @start: the NUMA node to start the iteration from.
> + * @visited: a nodemask_t to track the visited nodes.
nit: s/nodemask_t/nodemask
> + * @state: state of NUMA nodes to iterate.
> + *
> + * This macro iterates over NUMA nodes in increasing distance from
> > + * @start and yields MAX_NUMNODES when all the nodes have been
> + * visited.
> + *
> + * The difference between for_each_node() and for_each_numa_node() is that
> + * the former allows to iterate over nodes in no particular order, whereas
> + * the latter iterates over nodes in increasing order of distance.
for_each_node iterates them in numerical order.
> + *
> + * Requires rcu_lock to be held.
> + */
Please mention complexity here, which is O(N^2).
> +#define for_each_numa_node(node, start, visited, state) \
> + for (node = start; \
> + node != MAX_NUMNODES; \
> + node = sched_numa_node(&(visited), start, state))
Please add braces around the parameters.
'node < MAX_NUMNODES' is better. It will cost you nothing but will give
some guarantee that if the user passes a broken start value, we don't
call sched_numa_node().
What about:
start = -EINVAL;
for_each_numa_node(node, start, visited, N_ONLINE)
        do_something(node);
Whatever garbage the user puts in 'start', at the very first iteration
do_something() will be executed against it. An offline node, -EINVAL or
NUMA_NO_NODE are examples.
> +
> /**
> * for_each_numa_hop_mask - iterate over cpumasks of increasing NUMA distance
> * from a given node.
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index da33ec9e94ab2..e1d0a33415fb5 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -2183,6 +2183,48 @@ int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
> }
> EXPORT_SYMBOL_GPL(sched_numa_find_nth_cpu);
>
> +/**
> + * sched_numa_node - Find the NUMA node at the closest distance from
> + * node @start.
> + *
> + * @visited: a pointer to a nodemask_t representing the visited nodes.
> + * @start: the node to start the search from.
Maybe just 'node'. The 'start' is only meaningful in the caller. Here
we don't start, we just look for the node that is nearest to a given
one.
What if NUMA_NO_NODE is passed?
> + * @state: the node state to filter nodes by.
> + *
> + * This function iterates over all nodes in the given state and calculates
> + * the distance to the starting node. It returns the node that is the
> + * closest in terms of distance that has not already been considered (not
> + * set in @visited and not the starting node). If the node is found, it is
> + * marked as visited in the @visited node mask.
> + *
> + * Returns the node ID closest in terms of hop distance from the @start
> + * node, or MAX_NUMNODES if no node is found (or all nodes have been
> + * visited).
> + */
> +int sched_numa_node(nodemask_t *visited, int start, unsigned int state)
The name is somewhat confusing. Sched_numa_node what? If you're looking
for a closest node, call your function find_closest_node().
We already have numa_nearest_node(). The difference between this one
and what you're implementing here is that you add an additional mask.
So, I'd suggest something like
int numa_nearest_node_andnot(int node, unsigned int state, nodemask_t *mask)
Also, there's nothing scheduler-specific here, so I'd suggest placing it
next to numa_nearest_node() in mm/mempolicy.c.
> +{
> + int dist, n, min_node, min_dist;
> +
> + min_node = MAX_NUMNODES;
> + min_dist = INT_MAX;
> +
> > + /* Find the nearest unvisited node */
If you name it properly, you don't need to explain your intentions in
the code. Also, at this level of abstraction, the 'visited' name may
be too specific. Let's refer to it as just a mask containing nodes
that user wants to skip for whatever reason.
> + for_each_node_state(n, state) {
> + if (n == start || node_isset(n, *visited))
> + continue;
Nah.
1. Skipping start node is wrong. The very first call should return
'start' exactly. Then it will be masked out, so the traversing will
move forward.
2. This should be for_each_node_state_andnot(n, state, mask).
This way you'll be able to drop the above condition entirely.
> + dist = node_distance(start, n);
> + if (dist < min_dist) {
> + min_dist = dist;
> + min_node = n;
> + }
> + }
> + if (min_node != MAX_NUMNODES)
> + node_set(min_node, *visited);
Is it possible to set the 'visited' bit in caller code? This way we'll
make the function pure, like other bitmap search functions.
Would something like this work?
int numa_nearest_node_andnot(int node, unsigned int state, const nodemask_t *mask);
#define for_each_numa_node(node, visited, state) \
for (int start = (node), n = numa_nearest_node_andnot(start, (state), &(visited)); \
n < MAX_NUMNODES; \
node_set(n, (visited)), n = numa_nearest_node_andnot(start, (state), &(visited)))
One other option to think about is that we can introduce numa_nearest_nodemask()
and use it like this
nodemask_t unvisited;
nodes_complement(unvisited, node_states[N_ONLINE]);
for_each_numa_node(node, unvisited)
        do_something(node);
Where:
int numa_nearest_nodemask(int node, const nodemask_t *mask);
#define for_each_numa_node(node, unvisited) \
for (int start = (node), n = numa_nearest_nodemask(start, &(unvisited)); \
n < MAX_NUMNODES; \
node_clear(n, (unvisited)), n = numa_nearest_nodemask(start, &(unvisited)))
Thanks,
Yury
> +
> + return min_node;
> +}
> +EXPORT_SYMBOL_GPL(sched_numa_node);
> +
> /**
> * sched_numa_hop_mask() - Get the cpumask of CPUs at most @hops hops away from
> * @node
> --
> 2.48.1
* Re: [PATCH 1/5] sched/topology: Introduce for_each_numa_node() iterator
2025-02-07 3:57 ` Yury Norov
@ 2025-02-07 15:44 ` Andrea Righi
2025-02-07 17:07 ` Yury Norov
0 siblings, 1 reply; 9+ messages in thread
From: Andrea Righi @ 2025-02-07 15:44 UTC (permalink / raw)
To: Yury Norov
Cc: Tejun Heo, David Vernet, Changwoo Min, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
Ian May, bpf, linux-kernel
Hi Yury,
On Thu, Feb 06, 2025 at 10:57:19PM -0500, Yury Norov wrote:
> On Thu, Feb 06, 2025 at 09:15:31PM +0100, Andrea Righi wrote:
...
> > @@ -261,6 +267,29 @@ sched_numa_hop_mask(unsigned int node, unsigned int hops)
> > }
> > #endif /* CONFIG_NUMA */
> >
> > +/**
> > + * for_each_numa_node - iterate over NUMA nodes at increasing hop distances
> > + * from a given starting node.
> > + * @node: the iteration variable, representing the current NUMA node.
> > + * @start: the NUMA node to start the iteration from.
> > + * @visited: a nodemask_t to track the visited nodes.
>
> nit: s/nodemask_t/nodemask
The type is actually nodemask_t; do you think it's better to mention
nodemask instead?
>
> > + * @state: state of NUMA nodes to iterate.
> > + *
> > + * This macro iterates over NUMA nodes in increasing distance from
> > + * @start and yields MAX_NUMNODES when all the nodes have been
> > + * visited.
> > + *
> > + * The difference between for_each_node() and for_each_numa_node() is that
> > + * the former allows to iterate over nodes in no particular order, whereas
> > + * the latter iterates over nodes in increasing order of distance.
>
> for_each_node iterates them in numerical order.
Oh yes, much better. :)
>
> > + *
> > + * Requires rcu_lock to be held.
> > + */
>
> Please mention complexity here, which is O(N^2).
Ok. Will also add a comment to describe why it's O(N^2) (each step of the
iteration rescans up to N nodes to find the next nearest one).
>
> > +#define for_each_numa_node(node, start, visited, state) \
> > + for (node = start; \
> > + node != MAX_NUMNODES; \
> > + node = sched_numa_node(&(visited), start, state))
>
> Please braces around parameters.
Ok.
>
> 'node < MAX_NUMNODES' is better. It will cost you nothing but will give
> some guarantee that if user passes broken start value, we don't call
> sched_numa_node().
Good point.
>
> What about:
> start = -EINVAL;
> for_each_numa_node(node, start, visited, N_ONLINE)
> do_something(node);
>
> Whatever garbage user puts in 'start', at the very first iteration,
> do_something() will be executed against it. Offline node, -EINVAL or
> NUMA_NO_NODE are the examples.
So, my idea was actually to use start == NUMA_NO_NODE for all the cases
where the starting node doesn't matter, for example when calling
scx_bpf_pick_idle_cpu(), which doesn't have a prev_cpu context.
Should we implicitly fall back to for_each_node() when
start == NUMA_NO_NODE? And if it's complete garbage, maybe just break and
never execute do_something(node)?
>
> > +
> > /**
> > * for_each_numa_hop_mask - iterate over cpumasks of increasing NUMA distance
> > * from a given node.
> > diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> > index da33ec9e94ab2..e1d0a33415fb5 100644
> > --- a/kernel/sched/topology.c
> > +++ b/kernel/sched/topology.c
> > @@ -2183,6 +2183,48 @@ int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
> > }
> > EXPORT_SYMBOL_GPL(sched_numa_find_nth_cpu);
> >
> > +/**
> > + * sched_numa_node - Find the NUMA node at the closest distance from
> > + * node @start.
> > + *
> > + * @visited: a pointer to a nodemask_t representing the visited nodes.
> > + * @start: the node to start the search from.
>
> Maybe just 'node' The 'start' is only meaningful in the caller. Here
> we don't start, we just look for a node that is the most nearest to a
> given one.
Ok.
>
> What if NUMA_NO_NODE is passed?
We could return the first non-visited node in numerical order. And still
fall back to for_each_node() from for_each_numa_node() when
start == NUMA_NO_NODE.
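Something along these lines maybe, on top of the current sched_numa_node()
signature (just a sketch):

	if (start == NUMA_NO_NODE) {
		nodemask_t remaining;

		nodes_andnot(remaining, node_states[state], *visited);
		return first_node(remaining);
	}

so first_node() naturally returns MAX_NUMNODES once everything has been
visited.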
>
> > + * @state: the node state to filter nodes by.
> > + *
> > + * This function iterates over all nodes in the given state and calculates
> > + * the distance to the starting node. It returns the node that is the
> > + * closest in terms of distance that has not already been considered (not
> > + * set in @visited and not the starting node). If the node is found, it is
> > + * marked as visited in the @visited node mask.
> > + *
> > + * Returns the node ID closest in terms of hop distance from the @start
> > + * node, or MAX_NUMNODES if no node is found (or all nodes have been
> > + * visited).
> > + */
> > +int sched_numa_node(nodemask_t *visited, int start, unsigned int state)
>
> The name is somewhat confusing. Sched_numa_node what? If you're looking
> for a closest node, call your function find_closest_node().
>
> We already have numa_nearest_node(). The difference between this one
> and what you're implementing here is that you add an additional mask.
> So, I'd suggest something like
>
> int numa_nearest_node_andnot(int node, unsigned int state, nodemask_t *mask)
>
> Also, there's nothing about the scheduler here, so I'd suggest placing it next to
> numa_nearest_node() in mm/mempolicy.c.
Makes sense, and I agree that mm/mempolicy.c is a better place for this.
>
> > +{
> > + int dist, n, min_node, min_dist;
> > +
> > + min_node = MAX_NUMNODES;
> > + min_dist = INT_MAX;
> > +
> > + /* Find the nearest unvisited node */
>
> If you name it properly, you don't need to explain your intentions in
> the code. Also, at this level of abstraction, the 'visited' name may
> be too specific. Let's refer to it as just a mask containing nodes
> that the user wants to skip for whatever reason.
Ok.
>
>
> > + for_each_node_state(n, state) {
> > + if (n == start || node_isset(n, *visited))
> > + continue;
>
> Nah.
>
> 1. Skipping start node is wrong. The very first call should return
> 'start' exactly. Then it will be masked out, so the traversing will
> move forward.
> 2. This should be for_each_node_state_andnot(n, state, mask).
>
> This way you'll be able to drop the above condition entirely.
Yeah, I agree, I'll revisit this, also considering the comments above.
>
> > + dist = node_distance(start, n);
> > + if (dist < min_dist) {
> > + min_dist = dist;
> > + min_node = n;
> > + }
> > + }
> > + if (min_node != MAX_NUMNODES)
> > + node_set(min_node, *visited);
>
> Is it possible to set the 'visited' bit in caller code? This way we'll
> make the function pure, like other bitmap search functions.
>
> Would something like this work?
>
> int numa_nearest_node_andnot(int node, unsigned int state, const nodemask_t *mask);
> #define for_each_numa_node(node, visited, state) \
> for (int start = (node), n = numa_nearest_node_andnot(start, (state), &(visited)); \
> n < MAX_NUMNODES; \
> node_set(n, (visited)), n = numa_nearest_node_andnot(start, (state), &(visited)))
>
> One other option to think about is that we can introduce numa_nearest_nodemask()
> and use it like this
>
> nodemask_t unvisited;
>
> nodes_complement(unvisited, node_states[N_ONLINE]);
> for_each_numa_node(node, unvisited)
> 	do_something(node);
>
> Where:
>
> int numa_nearest_nodemask(int node, const nodemask_t *mask);
> #define for_each_numa_node(node, unvisited) \
> for (int start = (node), n = numa_nearest_nodemask(start, &(unvisited)); \
> n < MAX_NUMNODES; \
> node_clear(n, (unvisited)), n = numa_nearest_nodemask(start, &(unvisited)))
I like the numa_nearest_nodemask() idea, I'll do some experiments with it.
Thanks!
-Andrea
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PATCH 1/5] sched/topology: Introduce for_each_numa_node() iterator
2025-02-07 15:44 ` Andrea Righi
@ 2025-02-07 17:07 ` Yury Norov
0 siblings, 0 replies; 9+ messages in thread
From: Yury Norov @ 2025-02-07 17:07 UTC (permalink / raw)
To: Andrea Righi
Cc: Tejun Heo, David Vernet, Changwoo Min, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
Ian May, bpf, linux-kernel
On Fri, Feb 07, 2025 at 04:44:07PM +0100, Andrea Righi wrote:
> Hi Yury,
>
> On Thu, Feb 06, 2025 at 10:57:19PM -0500, Yury Norov wrote:
> > On Thu, Feb 06, 2025 at 09:15:31PM +0100, Andrea Righi wrote:
> ...
> > > @@ -261,6 +267,29 @@ sched_numa_hop_mask(unsigned int node, unsigned int hops)
> > > }
> > > #endif /* CONFIG_NUMA */
> > >
> > > +/**
> > > + * for_each_numa_node - iterate over NUMA nodes at increasing hop distances
> > > + * from a given starting node.
> > > + * @node: the iteration variable, representing the current NUMA node.
> > > + * @start: the NUMA node to start the iteration from.
> > > + * @visited: a nodemask_t to track the visited nodes.
> >
> > nit: s/nodemask_t/nodemask
>
> The type is actually nodemask_t, do you think it's better to mention
> nodemask instead?
We just don't put types in variable descriptions. Refer to the comment
on top of memcpy():
/**
* memcpy - Copy one area of memory to another
* @dest: Where to copy to
* @src: Where to copy from
* @count: The size of the area.
*
* You should not use this function to access IO space, use memcpy_toio()
* or memcpy_fromio() instead.
*/
void *memcpy(void *dest, const void *src, size_t count)
We don't write something like
* @count: The size_t of the area.
Right?
> > int numa_nearest_nodemask(int node, const nodemask_t *mask);
> > #define for_each_numa_node(node, unvisited) \
> > for (int start = (node), n = numa_nearest_nodemask(start, &(unvisited)); \
> > n < MAX_NUMNODES; \
> > node_clear(n, (unvisited)), n = numa_nearest_nodemask(start, &(unvisited)))
>
> I like the numa_nearest_nodemask() idea, I'll do some experiments with it.
Yeah, this one looks like the most generic API I can figure out, and
still it fits your needs just as well.
Please mention in the comment that 'unvisited' will be modified by the
macro.
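Something as simple as this would do (a sketch):

/**
 * for_each_numa_node - iterate over nodes in @unvisited, nearest first
 * @node: the iteration variable
 * @unvisited: nodemask of the nodes still to visit
 *
 * Note: nodes are cleared from @unvisited as the iteration proceeds, so
 * callers must not rely on its contents after the loop.
 */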
^ permalink raw reply [flat|nested] 9+ messages in thread
end of thread
Thread overview: 9+ messages
2025-02-06 20:15 [PATCHSET v9 sched_ext/for-6.15] sched_ext: split global idle cpumask into per-NUMA cpumasks Andrea Righi
2025-02-06 20:15 ` [PATCH 1/5] sched/topology: Introduce for_each_numa_node() iterator Andrea Righi
2025-02-07 3:57 ` Yury Norov
2025-02-07 15:44 ` Andrea Righi
2025-02-07 17:07 ` Yury Norov
2025-02-06 20:15 ` [PATCH 2/5] sched_ext: idle: Introduce SCX_OPS_BUILTIN_IDLE_PER_NODE Andrea Righi
2025-02-06 20:15 ` [PATCH 3/5] sched_ext: idle: introduce SCX_PICK_IDLE_IN_NODE Andrea Righi
2025-02-06 20:15 ` [PATCH 4/5] sched_ext: idle: Per-node idle cpumasks Andrea Righi
2025-02-06 20:15 ` [PATCH 5/5] sched_ext: idle: Introduce node-aware idle cpu kfunc helpers Andrea Righi