* [PATCHSET v11 sched_ext/for-6.15] sched_ext: split global idle cpumask into per-NUMA cpumasks
@ 2025-02-12 16:48 Andrea Righi
2025-02-12 16:48 ` [PATCH 1/7] nodemask: numa: reorganize inclusion path Andrea Righi
` (6 more replies)
0 siblings, 7 replies; 21+ messages in thread
From: Andrea Righi @ 2025-02-12 16:48 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Joel Fernandes, Ian May, bpf, linux-kernel
= Overview =
As discussed during the sched_ext office hours, using a global cpumask to
track the idle CPUs can be inefficient and may not scale well on large
NUMA systems.
Therefore, split the idle cpumask into multiple per-NUMA node cpumasks to
improve scalability and performance on such large systems.
Scalability issues seem to be more noticeable on Intel Sapphire Rapids
dual-socket architectures.
= Test =
Hardware:
- System: DGX B200
- CPUs: 224 SMT threads (112 physical cores)
- Processor: INTEL(R) XEON(R) PLATINUM 8570
- 2 NUMA nodes
Scheduler:
- scx_simple [1] (so that we can focus on the built-in idle selection
policy rather than on the scheduling policy itself)
Test:
- Run a parallel kernel build `make -j $(nproc)` and measure the average
elapsed time over 10 runs:
         avg time | stdev
         ---------+------
before:  52.431s  | 2.895
after:   50.342s  | 2.895
= Conclusion =
Splitting the global cpumask into multiple per-NUMA cpumasks helped to
achieve a speedup of approximately +4% with this particular architecture
and test case.
The same test on a DGX-1 (40 physical cores, Intel Xeon E5-2698 v4 @
2.20GHz, 2 NUMA nodes) shows a speedup of around 1.5-3%.
On smaller systems, I haven't noticed any measurable regressions or
improvements with the same test (parallel kernel build) and scheduler
(scx_simple).
Moreover, with a modified scx_bpfland that uses the new NUMA-aware APIs,
I observed an additional +2-2.5% performance improvement in the same test.
NOTE: splitting the global cpumask into multiple per-node cpumasks may
increase the overhead of scx_bpf_pick_idle_cpu() or ops.select_cpu() (for
schedulers relying on the built-in idle CPU selection policy) in the
presence of multiple NUMA nodes, particularly under high system load,
since we may have to scan multiple cpumasks to find an idle CPU.
However, this extra overhead appears to be largely offset by the cheaper
idle-state updates and by the fact that CPUs mostly operate within their
local idle cpumask, which reduces pressure on the cache coherency
protocol.
= References =
[1] https://github.com/sched-ext/scx/blob/main/scheds/c/scx_simple.bpf.c
ChangeLog v10 -> v11:
- drop state from numa_nearest_nodemask() and rename it to
nearest_node_nodemask()
- rename for_each_numa_node() to for_each_node_numadist()
- rename pick_idle_cpu_from_node() to pick_idle_cpu_in_node() for better
consistency with SCX_PICK_IDLE_IN_NODE
- rename idle_cpu_to_node() to scx_cpu_node_if_enabled()
- rename scx_bpf_cpu_to_node() to scx_bpf_cpu_node() and trigger an scx
error when an invalid CPU is specified
- provide compatibility macros for SCX_OPS_BUILTIN_IDLE_PER_NODE and
SCX_PICK_IDLE_IN_NODE
- always trigger an scx error when a non-numa aware kfunc is used with
SCX_OPS_BUILTIN_IDLE_PER_NODE enabled
- make all static keys local to ext_idle.c (no need to expose them
publicly)
ChangeLog v9 -> v10:
- introduce for_each_numa_node() and numa_nearest_nodemask() helpers
ChangeLog v8 -> v9:
- drop some preliminary refactoring patches that are now applied to
sched_ext/for-6.15
- rename SCX_PICK_IDLE_NODE -> SCX_PICK_IDLE_IN_NODE
- use NUMA_NO_NODE to refer to the global idle cpumask and drop
NUMA_FLAT_NODE (which was quite confusing)
- NUMA_NO_NODE is not implicitly translated to the current node
- rename struct idle_cpumask -> idle_cpus
- drop "get_" prefix from the idle cpumask helpers
- rename for_each_numa_hop_node() -> for_each_numa_node()
- for_each_numa_node() returns MAX_NUMNODES at the end of the loop
(for consistency with other node iterators)
- do not get rid of the scx_selcpu_topo_numa logic (since it can still be
used when per-node cpumasks are not enabled)
ChangeLog v7 -> v8:
- patch set refactoring: move ext_idle.c as first patch and introduce more
preparation patches
- introduce SCX_PICK_IDLE_NODE to restrict idle CPU selection to a single
specified node
- trigger scx_ops_error() when the *_node() kfunc's are used without
enabling SCX_OPS_NODE_BUILTIN_IDLE
- check for node_possible() in validate_node()
- do node validation in the kfunc's (instead of the internal kernel
functions) and trigger scx_ops_error in case of failure
- rename idle_masks -> scx_idle_masks
- drop unused CL_ALIGNED_IF_ONSTACK
- drop unnecessary rcu_read_lock/unlock() when iterating NUMA nodes
ChangeLog v6 -> v7:
- addressed some issues based on Yury's review (thanks!)
- introduced a new iterator to navigate the NUMA nodes in order of
increasing distance
ChangeLog v5 -> v6:
- refactor patch set to introduce SCX_OPS_NODE_BUILTIN_IDLE before
the per-node cpumasks
- move idle CPU selection policy to a separate file (ext_idle.c)
(no functional change, just some code shuffling)
ChangeLog v4 -> v5:
- introduce new scx_bpf_cpu_to_node() kfunc
- provide __COMPAT_*() helpers for the new kfunc's
ChangeLog v3 -> v4:
- introduce SCX_OPS_NODE_BUILTIN_IDLE to select multiple per-node
cpumasks or single flat cpumask
- introduce new kfuncs to access per-node idle cpumasks information
- use for_each_numa_hop_mask() to traverse NUMA nodes in increasing
distance
- dropped nodemask helpers (not needed anymore)
- rebase to sched_ext/for-6.14
ChangeLog v2 -> v3:
- introduce for_each_online_node_wrap()
- re-introduce cpumask_intersects() in test_and_clear_cpu_idle() (to
reduce memory writes / cache coherence pressure)
- get rid of the redundant scx_selcpu_topo_numa logic
[test results are pretty much identical, so I haven't updated them from v2]
ChangeLog v1 -> v2:
- renamed for_each_node_mask|state_from() -> for_each_node_mask|state_wrap()
- misc cpumask optimizations (thanks to Yury)
Andrea Righi (6):
mm/numa: Introduce nearest_node_nodemask()
sched/topology: Introduce for_each_node_numadist() iterator
sched_ext: idle: Make idle static keys private
sched_ext: idle: Introduce SCX_OPS_BUILTIN_IDLE_PER_NODE
sched_ext: idle: Per-node idle cpumasks
sched_ext: idle: Introduce node-aware idle cpu kfunc helpers
Yury Norov (1):
nodemask: numa: reorganize inclusion path
include/linux/nodemask.h | 1 -
include/linux/nodemask_types.h | 11 +-
include/linux/numa.h | 17 +-
include/linux/topology.h | 30 ++
kernel/sched/ext.c | 31 +-
kernel/sched/ext_idle.c | 504 +++++++++++++++++++++++++++----
kernel/sched/ext_idle.h | 20 +-
mm/mempolicy.c | 32 ++
tools/sched_ext/include/scx/common.bpf.h | 5 +
tools/sched_ext/include/scx/compat.bpf.h | 31 ++
tools/sched_ext/include/scx/compat.h | 6 +
11 files changed, 593 insertions(+), 95 deletions(-)
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH 1/7] nodemask: numa: reorganize inclusion path
2025-02-12 16:48 [PATCHSET v11 sched_ext/for-6.15] sched_ext: split global idle cpumask into per-NUMA cpumasks Andrea Righi
@ 2025-02-12 16:48 ` Andrea Righi
2025-02-13 15:29 ` Yury Norov
2025-02-12 16:48 ` [PATCH 2/7] mm/numa: Introduce nearest_node_nodemask() Andrea Righi
` (5 subsequent siblings)
6 siblings, 1 reply; 21+ messages in thread
From: Andrea Righi @ 2025-02-12 16:48 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Joel Fernandes, Ian May, bpf, linux-kernel,
Yury Norov
From: Yury Norov <yury.norov@gmail.com>
Nodemasks currently pull in linux/numa.h for the MAX_NUMNODES and
NUMA_NO_NODE macros. This series makes numa.h depend on nodemasks, so we
would hit a circular dependency.
The nodemask library is heavily used by NUMA code, so it is logical to
resolve the circular dependency by making the NUMA headers depend on
nodemask.h.
Signed-off-by: Yury Norov <yury.norov@gmail.com>
---
include/linux/nodemask.h | 1 -
include/linux/nodemask_types.h | 11 ++++++++++-
include/linux/numa.h | 10 +---------
3 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h
index 9fd7a0ce9c1a7..27644a6edc6ee 100644
--- a/include/linux/nodemask.h
+++ b/include/linux/nodemask.h
@@ -94,7 +94,6 @@
#include <linux/bitmap.h>
#include <linux/minmax.h>
#include <linux/nodemask_types.h>
-#include <linux/numa.h>
#include <linux/random.h>
extern nodemask_t _unused_nodemask_arg_;
diff --git a/include/linux/nodemask_types.h b/include/linux/nodemask_types.h
index 6b28d97ea6ed0..f850a48742f1f 100644
--- a/include/linux/nodemask_types.h
+++ b/include/linux/nodemask_types.h
@@ -3,7 +3,16 @@
#define __LINUX_NODEMASK_TYPES_H
#include <linux/bitops.h>
-#include <linux/numa.h>
+
+#ifdef CONFIG_NODES_SHIFT
+#define NODES_SHIFT CONFIG_NODES_SHIFT
+#else
+#define NODES_SHIFT 0
+#endif
+
+#define MAX_NUMNODES (1 << NODES_SHIFT)
+
+#define NUMA_NO_NODE (-1)
typedef struct { DECLARE_BITMAP(bits, MAX_NUMNODES); } nodemask_t;
diff --git a/include/linux/numa.h b/include/linux/numa.h
index 3567e40329ebc..31d8bf8a951a7 100644
--- a/include/linux/numa.h
+++ b/include/linux/numa.h
@@ -3,16 +3,8 @@
#define _LINUX_NUMA_H
#include <linux/init.h>
#include <linux/types.h>
+#include <linux/nodemask.h>
-#ifdef CONFIG_NODES_SHIFT
-#define NODES_SHIFT CONFIG_NODES_SHIFT
-#else
-#define NODES_SHIFT 0
-#endif
-
-#define MAX_NUMNODES (1 << NODES_SHIFT)
-
-#define NUMA_NO_NODE (-1)
#define NUMA_NO_MEMBLK (-1)
static inline bool numa_valid_node(int nid)
--
2.48.1
* [PATCH 2/7] mm/numa: Introduce nearest_node_nodemask()
2025-02-12 16:48 [PATCHSET v11 sched_ext/for-6.15] sched_ext: split global idle cpumask into per-NUMA cpumasks Andrea Righi
2025-02-12 16:48 ` [PATCH 1/7] nodemask: numa: reorganize inclusion path Andrea Righi
@ 2025-02-12 16:48 ` Andrea Righi
2025-02-13 15:57 ` Yury Norov
2025-02-12 16:48 ` [PATCH 3/7] sched/topology: Introduce for_each_node_numadist() iterator Andrea Righi
` (4 subsequent siblings)
6 siblings, 1 reply; 21+ messages in thread
From: Andrea Righi @ 2025-02-12 16:48 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Joel Fernandes, Ian May, bpf, linux-kernel,
Yury Norov
Introduce the new helper nearest_node_nodemask() to find the closest
node in a specified nodemask from a given starting node.
Returns MAX_NUMNODES if no node is found.
Cc: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
include/linux/numa.h | 7 +++++++
mm/mempolicy.c | 32 ++++++++++++++++++++++++++++++++
2 files changed, 39 insertions(+)
diff --git a/include/linux/numa.h b/include/linux/numa.h
index 31d8bf8a951a7..e6baaf6051bcf 100644
--- a/include/linux/numa.h
+++ b/include/linux/numa.h
@@ -31,6 +31,8 @@ void __init alloc_offline_node_data(int nid);
/* Generic implementation available */
int numa_nearest_node(int node, unsigned int state);
+int nearest_node_nodemask(int node, nodemask_t *mask);
+
#ifndef memory_add_physaddr_to_nid
int memory_add_physaddr_to_nid(u64 start);
#endif
@@ -47,6 +49,11 @@ static inline int numa_nearest_node(int node, unsigned int state)
return NUMA_NO_NODE;
}
+static inline int nearest_node_nodemask(int node, nodemask_t *mask)
+{
+ return NUMA_NO_NODE;
+}
+
static inline int memory_add_physaddr_to_nid(u64 start)
{
return 0;
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 162407fbf2bc7..1e2acf187ea3a 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -196,6 +196,38 @@ int numa_nearest_node(int node, unsigned int state)
}
EXPORT_SYMBOL_GPL(numa_nearest_node);
+/**
+ * nearest_node_nodemask - Find the node in @mask at the nearest distance
+ * from @node.
+ *
+ * @node: the node to start the search from.
+ * @mask: a pointer to a nodemask representing the allowed nodes.
+ *
+ * This function iterates over all nodes in @mask and calculates
+ * the distance to the starting node.
+ *
+ * Returns the node ID in @mask that is the closest in terms of distance
+ * from @node, or MAX_NUMNODES if no node is found.
+ */
+int nearest_node_nodemask(int node, nodemask_t *mask)
+{
+ int dist, n, min_dist = INT_MAX, min_node = MAX_NUMNODES;
+
+ if (node == NUMA_NO_NODE)
+ return MAX_NUMNODES;
+
+ for_each_node_mask(n, *mask) {
+ dist = node_distance(node, n);
+ if (dist < min_dist) {
+ min_dist = dist;
+ min_node = n;
+ }
+ }
+
+ return min_node;
+}
+EXPORT_SYMBOL_GPL(nearest_node_nodemask);
+
struct mempolicy *get_task_policy(struct task_struct *p)
{
struct mempolicy *pol = p->mempolicy;
--
2.48.1
* [PATCH 3/7] sched/topology: Introduce for_each_node_numadist() iterator
2025-02-12 16:48 [PATCHSET v11 sched_ext/for-6.15] sched_ext: split global idle cpumask into per-NUMA cpumasks Andrea Righi
2025-02-12 16:48 ` [PATCH 1/7] nodemask: numa: reorganize inclusion path Andrea Righi
2025-02-12 16:48 ` [PATCH 2/7] mm/numa: Introduce nearest_node_nodemask() Andrea Righi
@ 2025-02-12 16:48 ` Andrea Righi
2025-02-13 16:02 ` Yury Norov
2025-02-12 16:48 ` [PATCH 4/7] sched_ext: idle: Make idle static keys private Andrea Righi
` (3 subsequent siblings)
6 siblings, 1 reply; 21+ messages in thread
From: Andrea Righi @ 2025-02-12 16:48 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Joel Fernandes, Ian May, bpf, linux-kernel,
Yury Norov
Introduce the new helper for_each_node_numadist() to iterate over node
IDs in order of increasing NUMA distance from a given starting node.
This iterator is somewhat similar to for_each_numa_hop_mask(), but
instead of providing a cpumask at each iteration, it provides a node ID.
Example usage:
nodemask_t unvisited = NODE_MASK_ALL;
int node, start = cpu_to_node(smp_processor_id());
node = start;
for_each_node_numadist(node, unvisited)
pr_info("node (%d, %d) -> %d\n",
start, node, node_distance(start, node));
On a system with equidistant nodes:
$ numactl -H
...
node distances:
node 0 1 2 3
0: 10 20 20 20
1: 20 10 20 20
2: 20 20 10 20
3: 20 20 20 10
Output of the example above (on node 0):
[ 7.367022] node (0, 0) -> 10
[ 7.367151] node (0, 1) -> 20
[ 7.367186] node (0, 2) -> 20
[ 7.367247] node (0, 3) -> 20
On a system with non-equidistant nodes (simulated using virtme-ng):
$ numactl -H
...
node distances:
node 0 1 2 3
0: 10 51 31 41
1: 51 10 21 61
2: 31 21 10 11
3: 41 61 11 10
Output of the example above (on node 0):
[ 8.953644] node (0, 0) -> 10
[ 8.953712] node (0, 2) -> 31
[ 8.953764] node (0, 3) -> 41
[ 8.953817] node (0, 1) -> 51
Cc: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
include/linux/topology.h | 30 ++++++++++++++++++++++++++++++
1 file changed, 30 insertions(+)
diff --git a/include/linux/topology.h b/include/linux/topology.h
index 52f5850730b3e..932d8b819c1b7 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -261,6 +261,36 @@ sched_numa_hop_mask(unsigned int node, unsigned int hops)
}
#endif /* CONFIG_NUMA */
+/**
+ * for_each_node_numadist() - iterate over nodes in increasing distance
+ * order, starting from a given node
+ * @node: the iteration variable and the starting node.
+ * @unvisited: a nodemask to keep track of the unvisited nodes.
+ *
+ * This macro iterates over NUMA node IDs in increasing distance from the
+ * starting @node and yields MAX_NUMNODES when all the nodes have been
+ * visited.
+ *
+ * Note that by the time the loop completes, the @unvisited nodemask will
+ * be fully cleared, unless the loop exits early.
+ *
+ * The difference between for_each_node() and for_each_node_numadist() is
+ * that the former iterates over nodes in numerical order, whereas
+ * the latter iterates over nodes in increasing order of distance.
+ *
+ * The complexity of this iterator is O(N^2), where N represents the
+ * number of nodes, as each iteration involves scanning all nodes to
+ * find the one with the shortest distance.
+ *
+ * Requires rcu_read_lock() to be held.
+ */
+#define for_each_node_numadist(node, unvisited) \
+ for (int start = (node), \
+ node = nearest_node_nodemask((start), &(unvisited)); \
+ node < MAX_NUMNODES; \
+ node_clear(node, (unvisited)), \
+ node = nearest_node_nodemask((start), &(unvisited)))
+
/**
* for_each_numa_hop_mask - iterate over cpumasks of increasing NUMA distance
* from a given node.
--
2.48.1
* [PATCH 4/7] sched_ext: idle: Make idle static keys private
2025-02-12 16:48 [PATCHSET v11 sched_ext/for-6.15] sched_ext: split global idle cpumask into per-NUMA cpumasks Andrea Righi
` (2 preceding siblings ...)
2025-02-12 16:48 ` [PATCH 3/7] sched/topology: Introduce for_each_node_numadist() iterator Andrea Righi
@ 2025-02-12 16:48 ` Andrea Righi
2025-02-12 16:48 ` [PATCH 5/7] sched_ext: idle: Introduce SCX_OPS_BUILTIN_IDLE_PER_NODE Andrea Righi
` (2 subsequent siblings)
6 siblings, 0 replies; 21+ messages in thread
From: Andrea Righi @ 2025-02-12 16:48 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Joel Fernandes, Ian May, bpf, linux-kernel,
Yury Norov
Make all the static keys used by the idle CPU selection policy private
to ext_idle.c. This avoids unnecessary exposure in headers and improves
code encapsulation.
Cc: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
kernel/sched/ext.c | 9 ++-------
kernel/sched/ext_idle.c | 39 ++++++++++++++++++++++++++-------------
kernel/sched/ext_idle.h | 12 ++++--------
3 files changed, 32 insertions(+), 28 deletions(-)
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 98d5f2f68f38c..c47e7e2024a94 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -4721,7 +4721,7 @@ static void scx_ops_disable_workfn(struct kthread_work *work)
static_branch_disable(&scx_ops_enq_exiting);
static_branch_disable(&scx_ops_enq_migration_disabled);
static_branch_disable(&scx_ops_cpu_preempt);
- static_branch_disable(&scx_builtin_idle_enabled);
+ scx_idle_disable();
synchronize_rcu();
if (ei->kind >= SCX_EXIT_ERROR) {
@@ -5358,12 +5358,7 @@ static int scx_ops_enable(struct sched_ext_ops *ops, struct bpf_link *link)
if (scx_ops.cpu_acquire || scx_ops.cpu_release)
static_branch_enable(&scx_ops_cpu_preempt);
- if (!ops->update_idle || (ops->flags & SCX_OPS_KEEP_BUILTIN_IDLE)) {
- scx_idle_reset_masks();
- static_branch_enable(&scx_builtin_idle_enabled);
- } else {
- static_branch_disable(&scx_builtin_idle_enabled);
- }
+ scx_idle_enable(ops);
/*
* Lock out forks, cgroup on/offlining and moves before opening the
diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
index cb981956005b4..ed1804506585b 100644
--- a/kernel/sched/ext_idle.c
+++ b/kernel/sched/ext_idle.c
@@ -12,7 +12,7 @@
#include "ext_idle.h"
/* Enable/disable built-in idle CPU selection policy */
-DEFINE_STATIC_KEY_FALSE(scx_builtin_idle_enabled);
+static DEFINE_STATIC_KEY_FALSE(scx_builtin_idle_enabled);
#ifdef CONFIG_SMP
#ifdef CONFIG_CPUMASK_OFFSTACK
@@ -22,10 +22,10 @@ DEFINE_STATIC_KEY_FALSE(scx_builtin_idle_enabled);
#endif
/* Enable/disable LLC aware optimizations */
-DEFINE_STATIC_KEY_FALSE(scx_selcpu_topo_llc);
+static DEFINE_STATIC_KEY_FALSE(scx_selcpu_topo_llc);
/* Enable/disable NUMA aware optimizations */
-DEFINE_STATIC_KEY_FALSE(scx_selcpu_topo_numa);
+static DEFINE_STATIC_KEY_FALSE(scx_selcpu_topo_numa);
static struct {
cpumask_var_t cpu;
@@ -441,16 +441,6 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool
return cpu;
}
-void scx_idle_reset_masks(void)
-{
- /*
- * Consider all online cpus idle. Should converge to the actual state
- * quickly.
- */
- cpumask_copy(idle_masks.cpu, cpu_online_mask);
- cpumask_copy(idle_masks.smt, cpu_online_mask);
-}
-
void scx_idle_init_masks(void)
{
BUG_ON(!alloc_cpumask_var(&idle_masks.cpu, GFP_KERNEL));
@@ -532,6 +522,29 @@ void __scx_update_idle(struct rq *rq, bool idle, bool do_notify)
}
#endif /* CONFIG_SMP */
+void scx_idle_enable(struct sched_ext_ops *ops)
+{
+ if (ops->update_idle && !(ops->flags & SCX_OPS_KEEP_BUILTIN_IDLE)) {
+ static_branch_disable(&scx_builtin_idle_enabled);
+ return;
+ }
+ static_branch_enable(&scx_builtin_idle_enabled);
+
+#ifdef CONFIG_SMP
+ /*
+ * Consider all online cpus idle. Should converge to the actual state
+ * quickly.
+ */
+ cpumask_copy(idle_masks.cpu, cpu_online_mask);
+ cpumask_copy(idle_masks.smt, cpu_online_mask);
+#endif
+}
+
+void scx_idle_disable(void)
+{
+ static_branch_disable(&scx_builtin_idle_enabled);
+}
+
/********************************************************************************
* Helpers that can be called from the BPF scheduler.
*/
diff --git a/kernel/sched/ext_idle.h b/kernel/sched/ext_idle.h
index 7a13a74815ba7..bbac0fd9a5ddd 100644
--- a/kernel/sched/ext_idle.h
+++ b/kernel/sched/ext_idle.h
@@ -10,20 +10,15 @@
#ifndef _KERNEL_SCHED_EXT_IDLE_H
#define _KERNEL_SCHED_EXT_IDLE_H
-extern struct static_key_false scx_builtin_idle_enabled;
+struct sched_ext_ops;
#ifdef CONFIG_SMP
-extern struct static_key_false scx_selcpu_topo_llc;
-extern struct static_key_false scx_selcpu_topo_numa;
-
void scx_idle_update_selcpu_topology(void);
-void scx_idle_reset_masks(void);
void scx_idle_init_masks(void);
bool scx_idle_test_and_clear_cpu(int cpu);
s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags);
#else /* !CONFIG_SMP */
static inline void scx_idle_update_selcpu_topology(void) {}
-static inline void scx_idle_reset_masks(void) {}
static inline void scx_idle_init_masks(void) {}
static inline bool scx_idle_test_and_clear_cpu(int cpu) { return false; }
static inline s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags)
@@ -33,7 +28,8 @@ static inline s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flag
#endif /* CONFIG_SMP */
s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool *found);
-
-extern int scx_idle_init(void);
+void scx_idle_enable(struct sched_ext_ops *ops);
+void scx_idle_disable(void);
+int scx_idle_init(void);
#endif /* _KERNEL_SCHED_EXT_IDLE_H */
--
2.48.1
* [PATCH 5/7] sched_ext: idle: Introduce SCX_OPS_BUILTIN_IDLE_PER_NODE
2025-02-12 16:48 [PATCHSET v11 sched_ext/for-6.15] sched_ext: split global idle cpumask into per-NUMA cpumasks Andrea Righi
` (3 preceding siblings ...)
2025-02-12 16:48 ` [PATCH 4/7] sched_ext: idle: Make idle static keys private Andrea Righi
@ 2025-02-12 16:48 ` Andrea Righi
2025-02-13 16:08 ` Yury Norov
2025-02-12 16:48 ` [PATCH 6/7] sched_ext: idle: Per-node idle cpumasks Andrea Righi
2025-02-12 16:48 ` [PATCH 7/7] sched_ext: idle: Introduce node-aware idle cpu kfunc helpers Andrea Righi
6 siblings, 1 reply; 21+ messages in thread
From: Andrea Righi @ 2025-02-12 16:48 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Joel Fernandes, Ian May, bpf, linux-kernel,
Yury Norov
Add the new scheduler flag SCX_OPS_BUILTIN_IDLE_PER_NODE, which allows
BPF schedulers to select between using a global flat idle cpumask or
multiple per-node cpumasks.
This only introduces the flag and the mechanism to enable/disable this
feature without affecting any scheduling behavior.
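For example, a BPF scheduler would opt in from its struct_ops definition,
along these lines (a sketch: the scheduler name and callbacks are
placeholders, SCX_OPS_DEFINE() is the helper macro from the scx tooling,
and on older kernels the compatibility macro mentioned in the cover
letter resolves the flag to zero):

```c
SCX_OPS_DEFINE(myscheduler_ops,
	       .select_cpu	= (void *)my_select_cpu,
	       .enqueue		= (void *)my_enqueue,
	       .flags		= SCX_OPS_BUILTIN_IDLE_PER_NODE,
	       .name		= "myscheduler");
```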
Cc: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
kernel/sched/ext.c | 21 ++++++++++++++++++--
kernel/sched/ext_idle.c | 29 +++++++++++++++++++++-------
kernel/sched/ext_idle.h | 4 ++--
tools/sched_ext/include/scx/compat.h | 3 +++
4 files changed, 46 insertions(+), 11 deletions(-)
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index c47e7e2024a94..c3e154f0e8188 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -138,6 +138,12 @@ enum scx_ops_flags {
*/
SCX_OPS_ENQ_MIGRATION_DISABLED = 1LLU << 4,
+ /*
+ * If set, enable per-node idle cpumasks. If clear, use a single global
+ * flat idle cpumask.
+ */
+ SCX_OPS_BUILTIN_IDLE_PER_NODE = 1LLU << 5,
+
/*
* CPU cgroup support flags
*/
@@ -148,6 +154,7 @@ enum scx_ops_flags {
SCX_OPS_ENQ_EXITING |
SCX_OPS_ENQ_MIGRATION_DISABLED |
SCX_OPS_SWITCH_PARTIAL |
+ SCX_OPS_BUILTIN_IDLE_PER_NODE |
SCX_OPS_HAS_CGROUP_WEIGHT,
};
@@ -3409,7 +3416,7 @@ static void handle_hotplug(struct rq *rq, bool online)
atomic_long_inc(&scx_hotplug_seq);
if (scx_enabled())
- scx_idle_update_selcpu_topology();
+ scx_idle_update_selcpu_topology(&scx_ops);
if (online && SCX_HAS_OP(cpu_online))
SCX_CALL_OP(SCX_KF_UNLOCKED, cpu_online, cpu);
@@ -5184,6 +5191,16 @@ static int validate_ops(const struct sched_ext_ops *ops)
return -EINVAL;
}
+ /*
+ * SCX_OPS_BUILTIN_IDLE_PER_NODE requires built-in CPU idle
+ * selection policy to be enabled.
+ */
+ if ((ops->flags & SCX_OPS_BUILTIN_IDLE_PER_NODE) &&
+ (ops->update_idle && !(ops->flags & SCX_OPS_KEEP_BUILTIN_IDLE))) {
+ scx_ops_error("SCX_OPS_BUILTIN_IDLE_PER_NODE requires CPU idle selection enabled");
+ return -EINVAL;
+ }
+
return 0;
}
@@ -5308,7 +5325,7 @@ static int scx_ops_enable(struct sched_ext_ops *ops, struct bpf_link *link)
static_branch_enable_cpuslocked(&scx_has_op[i]);
check_hotplug_seq(ops);
- scx_idle_update_selcpu_topology();
+ scx_idle_update_selcpu_topology(ops);
cpus_read_unlock();
diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
index ed1804506585b..59b9e95238e97 100644
--- a/kernel/sched/ext_idle.c
+++ b/kernel/sched/ext_idle.c
@@ -14,6 +14,9 @@
/* Enable/disable built-in idle CPU selection policy */
static DEFINE_STATIC_KEY_FALSE(scx_builtin_idle_enabled);
+/* Enable/disable per-node idle cpumasks */
+static DEFINE_STATIC_KEY_FALSE(scx_builtin_idle_per_node);
+
#ifdef CONFIG_SMP
#ifdef CONFIG_CPUMASK_OFFSTACK
#define CL_ALIGNED_IF_ONSTACK
@@ -204,7 +207,7 @@ static bool llc_numa_mismatch(void)
* CPU belongs to a single LLC domain, and that each LLC domain is entirely
* contained within a single NUMA node.
*/
-void scx_idle_update_selcpu_topology(void)
+void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops)
{
bool enable_llc = false, enable_numa = false;
unsigned int nr_cpus;
@@ -237,13 +240,19 @@ void scx_idle_update_selcpu_topology(void)
* If all CPUs belong to the same NUMA node and the same LLC domain,
* enabling both NUMA and LLC optimizations is unnecessary, as checking
* for an idle CPU in the same domain twice is redundant.
+ *
+ * If SCX_OPS_BUILTIN_IDLE_PER_NODE is enabled, ignore the NUMA
+ * optimization, as we would naturally select idle CPUs within
+ * specific NUMA nodes querying the corresponding per-node cpumask.
*/
- nr_cpus = numa_weight(cpu);
- if (nr_cpus > 0) {
- if (nr_cpus < num_online_cpus() && llc_numa_mismatch())
- enable_numa = true;
- pr_debug("sched_ext: NUMA=%*pb weight=%u\n",
- cpumask_pr_args(numa_span(cpu)), numa_weight(cpu));
+ if (!(ops->flags & SCX_OPS_BUILTIN_IDLE_PER_NODE)) {
+ nr_cpus = numa_weight(cpu);
+ if (nr_cpus > 0) {
+ if (nr_cpus < num_online_cpus() && llc_numa_mismatch())
+ enable_numa = true;
+ pr_debug("sched_ext: NUMA=%*pb weight=%u\n",
+ cpumask_pr_args(numa_span(cpu)), numa_weight(cpu));
+ }
}
rcu_read_unlock();
@@ -530,6 +539,11 @@ void scx_idle_enable(struct sched_ext_ops *ops)
}
static_branch_enable(&scx_builtin_idle_enabled);
+ if (ops->flags & SCX_OPS_BUILTIN_IDLE_PER_NODE)
+ static_branch_enable(&scx_builtin_idle_per_node);
+ else
+ static_branch_disable(&scx_builtin_idle_per_node);
+
#ifdef CONFIG_SMP
/*
* Consider all online cpus idle. Should converge to the actual state
@@ -543,6 +557,7 @@ void scx_idle_enable(struct sched_ext_ops *ops)
void scx_idle_disable(void)
{
static_branch_disable(&scx_builtin_idle_enabled);
+ static_branch_disable(&scx_builtin_idle_per_node);
}
/********************************************************************************
diff --git a/kernel/sched/ext_idle.h b/kernel/sched/ext_idle.h
index bbac0fd9a5ddd..339b6ec9c4cb7 100644
--- a/kernel/sched/ext_idle.h
+++ b/kernel/sched/ext_idle.h
@@ -13,12 +13,12 @@
struct sched_ext_ops;
#ifdef CONFIG_SMP
-void scx_idle_update_selcpu_topology(void);
+void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops);
void scx_idle_init_masks(void);
bool scx_idle_test_and_clear_cpu(int cpu);
s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags);
#else /* !CONFIG_SMP */
-static inline void scx_idle_update_selcpu_topology(void) {}
+static inline void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops) {}
static inline void scx_idle_init_masks(void) {}
static inline bool scx_idle_test_and_clear_cpu(int cpu) { return false; }
static inline s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags)
diff --git a/tools/sched_ext/include/scx/compat.h b/tools/sched_ext/include/scx/compat.h
index b50280e2ba2ba..d63cf40be8eee 100644
--- a/tools/sched_ext/include/scx/compat.h
+++ b/tools/sched_ext/include/scx/compat.h
@@ -109,6 +109,9 @@ static inline bool __COMPAT_struct_has_field(const char *type, const char *field
#define SCX_OPS_SWITCH_PARTIAL \
__COMPAT_ENUM_OR_ZERO("scx_ops_flags", "SCX_OPS_SWITCH_PARTIAL")
+#define SCX_OPS_BUILTIN_IDLE_PER_NODE \
+ __COMPAT_ENUM_OR_ZERO("scx_ops_flags", "SCX_OPS_BUILTIN_IDLE_PER_NODE")
+
static inline long scx_hotplug_seq(void)
{
int fd;
--
2.48.1
* [PATCH 6/7] sched_ext: idle: Per-node idle cpumasks
2025-02-12 16:48 [PATCHSET v11 sched_ext/for-6.15] sched_ext: split global idle cpumask into per-NUMA cpumasks Andrea Righi
` (4 preceding siblings ...)
2025-02-12 16:48 ` [PATCH 5/7] sched_ext: idle: Introduce SCX_OPS_BUILTIN_IDLE_PER_NODE Andrea Righi
@ 2025-02-12 16:48 ` Andrea Righi
2025-02-13 10:57 ` kernel test robot
2025-02-13 18:03 ` Yury Norov
2025-02-12 16:48 ` [PATCH 7/7] sched_ext: idle: Introduce node-aware idle cpu kfunc helpers Andrea Righi
6 siblings, 2 replies; 21+ messages in thread
From: Andrea Righi @ 2025-02-12 16:48 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Joel Fernandes, Ian May, bpf, linux-kernel,
Yury Norov
Using a single global idle mask can lead to inefficiencies and a lot of
stress on the cache coherency protocol on large systems with multiple
NUMA nodes, since all CPUs generate intense read/write activity on that
single global cpumask.
Therefore, split the global cpumask into multiple per-NUMA node cpumasks
to improve scalability and performance on large systems.
The concept is that each cpumask will track only the idle CPUs within
its corresponding NUMA node, treating CPUs in other NUMA nodes as busy.
In this way concurrent access to the idle cpumask will be restricted
within each NUMA node.
The split of multiple per-node idle cpumasks can be controlled using the
SCX_OPS_BUILTIN_IDLE_PER_NODE flag.
By default SCX_OPS_BUILTIN_IDLE_PER_NODE is not enabled and a global
host-wide idle cpumask is used, maintaining the previous behavior.
NOTE: if a scheduler explicitly enables the per-node idle cpumasks (via
SCX_OPS_BUILTIN_IDLE_PER_NODE), scx_bpf_get_idle_cpu/smtmask() will
trigger an scx error, since there are no system-wide cpumasks.
= Test =
Hardware:
- System: DGX B200
- CPUs: 224 SMT threads (112 physical cores)
- Processor: INTEL(R) XEON(R) PLATINUM 8570
- 2 NUMA nodes
Scheduler:
- scx_simple [1] (so that we can focus on the built-in idle selection
policy rather than on the scheduling policy itself)
Test:
- Run a parallel kernel build `make -j $(nproc)` and measure the average
elapsed time over 10 runs:
         avg time | stdev
         ---------+------
before:   52.431s | 2.895
after:    50.342s | 2.895
= Conclusion =
Splitting the global cpumask into multiple per-NUMA cpumasks helped
achieve a speedup of approximately +4% with this particular architecture
and test case.
The same test on a DGX-1 (40 physical cores, Intel Xeon E5-2698 v4 @
2.20GHz, 2 NUMA nodes) shows a speedup of around 1.5-3%.
On smaller systems, I haven't noticed any measurable regressions or
improvements with the same test (parallel kernel build) and scheduler
(scx_simple).
Moreover, with a modified scx_bpfland that uses the new NUMA-aware APIs
I observed an additional +2-2.5% performance improvement with the same
test.
[1] https://github.com/sched-ext/scx/blob/main/scheds/c/scx_simple.bpf.c
Cc: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
kernel/sched/ext.c | 1 +
kernel/sched/ext_idle.c | 276 ++++++++++++++++++++++-----
kernel/sched/ext_idle.h | 4 +-
tools/sched_ext/include/scx/compat.h | 3 +
4 files changed, 229 insertions(+), 55 deletions(-)
diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index c3e154f0e8188..129e77a779d3c 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -789,6 +789,7 @@ enum scx_deq_flags {
enum scx_pick_idle_cpu_flags {
SCX_PICK_IDLE_CORE = 1LLU << 0, /* pick a CPU whose SMT siblings are also idle */
+ SCX_PICK_IDLE_IN_NODE = 1LLU << 1, /* pick a CPU in the same target NUMA node */
};
enum scx_kick_flags {
diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
index 59b9e95238e97..d028fa809fe1d 100644
--- a/kernel/sched/ext_idle.c
+++ b/kernel/sched/ext_idle.c
@@ -18,25 +18,61 @@ static DEFINE_STATIC_KEY_FALSE(scx_builtin_idle_enabled);
static DEFINE_STATIC_KEY_FALSE(scx_builtin_idle_per_node);
#ifdef CONFIG_SMP
-#ifdef CONFIG_CPUMASK_OFFSTACK
-#define CL_ALIGNED_IF_ONSTACK
-#else
-#define CL_ALIGNED_IF_ONSTACK __cacheline_aligned_in_smp
-#endif
-
/* Enable/disable LLC aware optimizations */
static DEFINE_STATIC_KEY_FALSE(scx_selcpu_topo_llc);
/* Enable/disable NUMA aware optimizations */
static DEFINE_STATIC_KEY_FALSE(scx_selcpu_topo_numa);
-static struct {
+/*
+ * cpumasks to track idle CPUs within each NUMA node.
+ *
+ * If SCX_OPS_BUILTIN_IDLE_PER_NODE is not enabled, a single global cpumask
+ * is used to track all the idle CPUs in the system.
+ */
+struct scx_idle_cpus {
cpumask_var_t cpu;
cpumask_var_t smt;
-} idle_masks CL_ALIGNED_IF_ONSTACK;
+};
+
+/*
+ * Global host-wide idle cpumasks (used when SCX_OPS_BUILTIN_IDLE_PER_NODE
+ * is not enabled).
+ */
+static struct scx_idle_cpus scx_idle_global_masks;
+
+/*
+ * Per-node idle cpumasks.
+ */
+static struct scx_idle_cpus **scx_idle_node_masks;
+
+/*
+ * Return the idle masks associated with a target @node.
+ *
+ * NUMA_NO_NODE identifies the global idle cpumask.
+ */
+static struct scx_idle_cpus *idle_cpumask(int node)
+{
+ return node == NUMA_NO_NODE ? &scx_idle_global_masks : scx_idle_node_masks[node];
+}
+
+/*
+ * Returns the NUMA node ID associated with @cpu, or NUMA_NO_NODE if
+ * per-node idle cpumasks are disabled.
+ */
+static int scx_cpu_node_if_enabled(int cpu)
+{
+ if (!static_branch_maybe(CONFIG_NUMA, &scx_builtin_idle_per_node))
+ return NUMA_NO_NODE;
+
+ return cpu_to_node(cpu);
+}
bool scx_idle_test_and_clear_cpu(int cpu)
{
+ int node = scx_cpu_node_if_enabled(cpu);
+ struct cpumask *idle_cpus = idle_cpumask(node)->cpu;
+
#ifdef CONFIG_SCHED_SMT
/*
* SMT mask should be cleared whether we can claim @cpu or not. The SMT
@@ -45,33 +81,38 @@ bool scx_idle_test_and_clear_cpu(int cpu)
*/
if (sched_smt_active()) {
const struct cpumask *smt = cpu_smt_mask(cpu);
+ struct cpumask *idle_smts = idle_cpumask(node)->smt;
/*
* If offline, @cpu is not its own sibling and
* scx_pick_idle_cpu() can get caught in an infinite loop as
- * @cpu is never cleared from idle_masks.smt. Ensure that @cpu
- * is eventually cleared.
+ * @cpu is never cleared from the idle SMT mask. Ensure that
+ * @cpu is eventually cleared.
*
* NOTE: Use cpumask_intersects() and cpumask_test_cpu() to
* reduce memory writes, which may help alleviate cache
* coherence pressure.
*/
- if (cpumask_intersects(smt, idle_masks.smt))
- cpumask_andnot(idle_masks.smt, idle_masks.smt, smt);
- else if (cpumask_test_cpu(cpu, idle_masks.smt))
- __cpumask_clear_cpu(cpu, idle_masks.smt);
+ if (cpumask_intersects(smt, idle_smts))
+ cpumask_andnot(idle_smts, idle_smts, smt);
+ else if (cpumask_test_cpu(cpu, idle_smts))
+ __cpumask_clear_cpu(cpu, idle_smts);
}
#endif
- return cpumask_test_and_clear_cpu(cpu, idle_masks.cpu);
+
+ return cpumask_test_and_clear_cpu(cpu, idle_cpus);
}
-s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags)
+/*
+ * Pick an idle CPU in a specific NUMA node.
+ */
+static s32 pick_idle_cpu_in_node(const struct cpumask *cpus_allowed, int node, u64 flags)
{
int cpu;
retry:
if (sched_smt_active()) {
- cpu = cpumask_any_and_distribute(idle_masks.smt, cpus_allowed);
+ cpu = cpumask_any_and_distribute(idle_cpumask(node)->smt, cpus_allowed);
if (cpu < nr_cpu_ids)
goto found;
@@ -79,7 +120,7 @@ s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags)
return -EBUSY;
}
- cpu = cpumask_any_and_distribute(idle_masks.cpu, cpus_allowed);
+ cpu = cpumask_any_and_distribute(idle_cpumask(node)->cpu, cpus_allowed);
if (cpu >= nr_cpu_ids)
return -EBUSY;
@@ -90,6 +131,78 @@ s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags)
goto retry;
}
+static s32 pick_idle_cpu_from_other_nodes(const struct cpumask *cpus_allowed, int node, u64 flags)
+{
+ static DEFINE_PER_CPU(nodemask_t, per_cpu_unvisited);
+ nodemask_t *unvisited;
+ s32 cpu = -EBUSY;
+
+ preempt_disable();
+ unvisited = this_cpu_ptr(&per_cpu_unvisited);
+
+ /*
+ * Restrict the search to the online nodes, excluding the current
+ * one.
+ */
+ nodes_clear(*unvisited);
+ nodes_or(*unvisited, *unvisited, node_states[N_ONLINE]);
+ node_clear(node, *unvisited);
+
+ /*
+ * Traverse all nodes in order of increasing distance, starting
+ * from @node.
+ *
+ * This loop is O(N^2), with N being the number of NUMA nodes,
+ * which might be quite expensive on large NUMA systems. However,
+ * this complexity comes into play only when a scheduler enables
+ * SCX_OPS_BUILTIN_IDLE_PER_NODE and requests an idle CPU without
+ * specifying a target NUMA node, so it shouldn't be a bottleneck
+ * in most cases.
+ *
+ * As a future optimization we may want to cache the list of nodes
+ * in a per-node array, instead of actually traversing them every
+ * time.
+ */
+ for_each_node_numadist(node, *unvisited) {
+ cpu = pick_idle_cpu_in_node(cpus_allowed, node, flags);
+ if (cpu >= 0)
+ break;
+ }
+ preempt_enable();
+
+ return cpu;
+}
+
+/*
+ * Find an idle CPU in the system, starting from @node.
+ */
+s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags)
+{
+ s32 cpu;
+
+ /*
+ * Always search in the starting node first (this is an
+ * optimization that can save some cycles even when the search is
+ * not limited to a single node).
+ */
+ cpu = pick_idle_cpu_in_node(cpus_allowed, node, flags);
+ if (cpu >= 0)
+ return cpu;
+
+ /*
+ * Stop the search if we are using only a single global cpumask
+ * (NUMA_NO_NODE) or if the search is restricted to the first node
+ * only.
+ */
+ if (node == NUMA_NO_NODE || flags & SCX_PICK_IDLE_IN_NODE)
+ return -EBUSY;
+
+ /*
+ * Extend the search to the other nodes.
+ */
+ return pick_idle_cpu_from_other_nodes(cpus_allowed, node, flags);
+}
+
/*
* Return the amount of CPUs in the same LLC domain of @cpu (or zero if the LLC
* domain is not defined).
@@ -302,6 +415,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool
{
const struct cpumask *llc_cpus = NULL;
const struct cpumask *numa_cpus = NULL;
+ int node = scx_cpu_node_if_enabled(prev_cpu);
s32 cpu;
*found = false;
@@ -359,9 +473,9 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool
* piled up on it even if there is an idle core elsewhere on
* the system.
*/
- if (!cpumask_empty(idle_masks.cpu) &&
- !(current->flags & PF_EXITING) &&
- cpu_rq(cpu)->scx.local_dsq.nr == 0) {
+ if (!(current->flags & PF_EXITING) &&
+ cpu_rq(cpu)->scx.local_dsq.nr == 0 &&
+ !cpumask_empty(idle_cpumask(cpu_to_node(cpu))->cpu)) {
if (cpumask_test_cpu(cpu, p->cpus_ptr))
goto cpu_found;
}
@@ -375,7 +489,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool
/*
* Keep using @prev_cpu if it's part of a fully idle core.
*/
- if (cpumask_test_cpu(prev_cpu, idle_masks.smt) &&
+ if (cpumask_test_cpu(prev_cpu, idle_cpumask(node)->smt) &&
scx_idle_test_and_clear_cpu(prev_cpu)) {
cpu = prev_cpu;
goto cpu_found;
@@ -385,7 +499,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool
* Search for any fully idle core in the same LLC domain.
*/
if (llc_cpus) {
- cpu = scx_pick_idle_cpu(llc_cpus, SCX_PICK_IDLE_CORE);
+ cpu = pick_idle_cpu_in_node(llc_cpus, node, SCX_PICK_IDLE_CORE);
if (cpu >= 0)
goto cpu_found;
}
@@ -394,15 +508,19 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool
* Search for any fully idle core in the same NUMA node.
*/
if (numa_cpus) {
- cpu = scx_pick_idle_cpu(numa_cpus, SCX_PICK_IDLE_CORE);
+ cpu = pick_idle_cpu_in_node(numa_cpus, node, SCX_PICK_IDLE_CORE);
if (cpu >= 0)
goto cpu_found;
}
/*
* Search for any full idle core usable by the task.
+ *
+ * If NUMA aware idle selection is enabled, the search will
+ * begin in prev_cpu's node and proceed to other nodes in
+ * order of increasing distance.
*/
- cpu = scx_pick_idle_cpu(p->cpus_ptr, SCX_PICK_IDLE_CORE);
+ cpu = scx_pick_idle_cpu(p->cpus_ptr, node, SCX_PICK_IDLE_CORE);
if (cpu >= 0)
goto cpu_found;
}
@@ -419,7 +537,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool
* Search for any idle CPU in the same LLC domain.
*/
if (llc_cpus) {
- cpu = scx_pick_idle_cpu(llc_cpus, 0);
+ cpu = pick_idle_cpu_in_node(llc_cpus, node, 0);
if (cpu >= 0)
goto cpu_found;
}
@@ -428,7 +546,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool
* Search for any idle CPU in the same NUMA node.
*/
if (numa_cpus) {
- cpu = scx_pick_idle_cpu(numa_cpus, 0);
+ cpu = pick_idle_cpu_in_node(numa_cpus, node, 0);
if (cpu >= 0)
goto cpu_found;
}
@@ -436,7 +554,7 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool
/*
* Search for any idle CPU usable by the task.
*/
- cpu = scx_pick_idle_cpu(p->cpus_ptr, 0);
+ cpu = scx_pick_idle_cpu(p->cpus_ptr, node, 0);
if (cpu >= 0)
goto cpu_found;
@@ -450,30 +568,54 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags, bool
return cpu;
}
+/*
+ * Initialize global and per-node idle cpumasks.
+ */
void scx_idle_init_masks(void)
{
- BUG_ON(!alloc_cpumask_var(&idle_masks.cpu, GFP_KERNEL));
- BUG_ON(!alloc_cpumask_var(&idle_masks.smt, GFP_KERNEL));
+ int node;
+
+ /* Allocate global idle cpumasks */
+ BUG_ON(!alloc_cpumask_var(&scx_idle_global_masks.cpu, GFP_KERNEL));
+ BUG_ON(!alloc_cpumask_var(&scx_idle_global_masks.smt, GFP_KERNEL));
+
+ /* Allocate per-node idle cpumasks */
+ scx_idle_node_masks = kcalloc(num_possible_nodes(),
+ sizeof(*scx_idle_node_masks), GFP_KERNEL);
+ BUG_ON(!scx_idle_node_masks);
+
+ for_each_node(node) {
+ scx_idle_node_masks[node] = kzalloc_node(sizeof(**scx_idle_node_masks),
+ GFP_KERNEL, node);
+ BUG_ON(!scx_idle_node_masks[node]);
+
+ BUG_ON(!alloc_cpumask_var_node(&scx_idle_node_masks[node]->cpu, GFP_KERNEL, node));
+ BUG_ON(!alloc_cpumask_var_node(&scx_idle_node_masks[node]->smt, GFP_KERNEL, node));
+ }
}
static void update_builtin_idle(int cpu, bool idle)
{
- assign_cpu(cpu, idle_masks.cpu, idle);
+ int node = scx_cpu_node_if_enabled(cpu);
+ struct cpumask *idle_cpus = idle_cpumask(node)->cpu;
+
+ assign_cpu(cpu, idle_cpus, idle);
#ifdef CONFIG_SCHED_SMT
if (sched_smt_active()) {
const struct cpumask *smt = cpu_smt_mask(cpu);
+ struct cpumask *idle_smts = idle_cpumask(node)->smt;
if (idle) {
/*
- * idle_masks.smt handling is racy but that's fine as
- * it's only for optimization and self-correcting.
+ * idle_smt handling is racy but that's fine as it's
+ * only for optimization and self-correcting.
*/
- if (!cpumask_subset(smt, idle_masks.cpu))
+ if (!cpumask_subset(smt, idle_cpus))
return;
- cpumask_or(idle_masks.smt, idle_masks.smt, smt);
+ cpumask_or(idle_smts, idle_smts, smt);
} else {
- cpumask_andnot(idle_masks.smt, idle_masks.smt, smt);
+ cpumask_andnot(idle_smts, idle_smts, smt);
}
}
#endif
@@ -529,15 +671,36 @@ void __scx_update_idle(struct rq *rq, bool idle, bool do_notify)
if (do_notify || is_idle_task(rq->curr))
update_builtin_idle(cpu, idle);
}
+
+static void reset_idle_masks(struct sched_ext_ops *ops)
+{
+ int node;
+
+ /*
+ * Consider all online cpus idle. Should converge to the actual state
+ * quickly.
+ */
+ if (!(ops->flags & SCX_OPS_BUILTIN_IDLE_PER_NODE)) {
+ cpumask_copy(idle_cpumask(NUMA_NO_NODE)->cpu, cpu_online_mask);
+ cpumask_copy(idle_cpumask(NUMA_NO_NODE)->smt, cpu_online_mask);
+ return;
+ }
+
+ for_each_node(node) {
+ const struct cpumask *node_mask = cpumask_of_node(node);
+
+ cpumask_and(idle_cpumask(node)->cpu, cpu_online_mask, node_mask);
+ cpumask_and(idle_cpumask(node)->smt, cpu_online_mask, node_mask);
+ }
+}
#endif /* CONFIG_SMP */
void scx_idle_enable(struct sched_ext_ops *ops)
{
- if (ops->update_idle && !(ops->flags & SCX_OPS_KEEP_BUILTIN_IDLE)) {
+ if (!ops->update_idle || (ops->flags & SCX_OPS_KEEP_BUILTIN_IDLE))
+ static_branch_enable(&scx_builtin_idle_enabled);
+ else
static_branch_disable(&scx_builtin_idle_enabled);
- return;
- }
- static_branch_enable(&scx_builtin_idle_enabled);
if (ops->flags & SCX_OPS_BUILTIN_IDLE_PER_NODE)
static_branch_enable(&scx_builtin_idle_per_node);
@@ -545,12 +708,7 @@ void scx_idle_enable(struct sched_ext_ops *ops)
static_branch_disable(&scx_builtin_idle_per_node);
#ifdef CONFIG_SMP
- /*
- * Consider all online cpus idle. Should converge to the actual state
- * quickly.
- */
- cpumask_copy(idle_masks.cpu, cpu_online_mask);
- cpumask_copy(idle_masks.smt, cpu_online_mask);
+ reset_idle_masks(ops);
#endif
}
@@ -610,15 +768,21 @@ __bpf_kfunc s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
* scx_bpf_get_idle_cpumask - Get a referenced kptr to the idle-tracking
* per-CPU cpumask.
*
- * Returns NULL if idle tracking is not enabled, or running on a UP kernel.
+ * Returns an empty mask if idle tracking is not enabled, or running on a
+ * UP kernel.
*/
__bpf_kfunc const struct cpumask *scx_bpf_get_idle_cpumask(void)
{
+ if (static_branch_unlikely(&scx_builtin_idle_per_node)) {
+ scx_ops_error("SCX_OPS_BUILTIN_IDLE_PER_NODE enabled");
+ return cpu_none_mask;
+ }
+
if (!check_builtin_idle_enabled())
return cpu_none_mask;
#ifdef CONFIG_SMP
- return idle_masks.cpu;
+ return idle_cpumask(NUMA_NO_NODE)->cpu;
#else
return cpu_none_mask;
#endif
@@ -629,18 +793,24 @@ __bpf_kfunc const struct cpumask *scx_bpf_get_idle_cpumask(void)
* per-physical-core cpumask. Can be used to determine if an entire physical
* core is free.
*
- * Returns NULL if idle tracking is not enabled, or running on a UP kernel.
+ * Returns an empty mask if idle tracking is not enabled, or running on a
+ * UP kernel.
*/
__bpf_kfunc const struct cpumask *scx_bpf_get_idle_smtmask(void)
{
+ if (static_branch_unlikely(&scx_builtin_idle_per_node)) {
+ scx_ops_error("SCX_OPS_BUILTIN_IDLE_PER_NODE enabled");
+ return cpu_none_mask;
+ }
+
if (!check_builtin_idle_enabled())
return cpu_none_mask;
#ifdef CONFIG_SMP
if (sched_smt_active())
- return idle_masks.smt;
+ return idle_cpumask(NUMA_NO_NODE)->smt;
else
- return idle_masks.cpu;
+ return idle_cpumask(NUMA_NO_NODE)->cpu;
#else
return cpu_none_mask;
#endif
@@ -707,7 +877,7 @@ __bpf_kfunc s32 scx_bpf_pick_idle_cpu(const struct cpumask *cpus_allowed,
if (!check_builtin_idle_enabled())
return -EBUSY;
- return scx_pick_idle_cpu(cpus_allowed, flags);
+ return scx_pick_idle_cpu(cpus_allowed, NUMA_NO_NODE, flags);
}
/**
@@ -730,7 +900,7 @@ __bpf_kfunc s32 scx_bpf_pick_any_cpu(const struct cpumask *cpus_allowed,
s32 cpu;
if (static_branch_likely(&scx_builtin_idle_enabled)) {
- cpu = scx_pick_idle_cpu(cpus_allowed, flags);
+ cpu = scx_pick_idle_cpu(cpus_allowed, NUMA_NO_NODE, flags);
if (cpu >= 0)
return cpu;
}
diff --git a/kernel/sched/ext_idle.h b/kernel/sched/ext_idle.h
index 339b6ec9c4cb7..68c4307ce4f6f 100644
--- a/kernel/sched/ext_idle.h
+++ b/kernel/sched/ext_idle.h
@@ -16,12 +16,12 @@ struct sched_ext_ops;
void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops);
void scx_idle_init_masks(void);
bool scx_idle_test_and_clear_cpu(int cpu);
-s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags);
+s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags);
#else /* !CONFIG_SMP */
static inline void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops) {}
static inline void scx_idle_init_masks(void) {}
static inline bool scx_idle_test_and_clear_cpu(int cpu) { return false; }
-static inline s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags)
+static inline s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags)
{
return -EBUSY;
}
diff --git a/tools/sched_ext/include/scx/compat.h b/tools/sched_ext/include/scx/compat.h
index d63cf40be8eee..03e06bd15c738 100644
--- a/tools/sched_ext/include/scx/compat.h
+++ b/tools/sched_ext/include/scx/compat.h
@@ -112,6 +112,9 @@ static inline bool __COMPAT_struct_has_field(const char *type, const char *field
#define SCX_OPS_BUILTIN_IDLE_PER_NODE \
__COMPAT_ENUM_OR_ZERO("scx_ops_flags", "SCX_OPS_BUILTIN_IDLE_PER_NODE")
+#define SCX_PICK_IDLE_IN_NODE \
+ __COMPAT_ENUM_OR_ZERO("scx_pick_idle_cpu_flags", "SCX_PICK_IDLE_IN_NODE")
+
static inline long scx_hotplug_seq(void)
{
int fd;
--
2.48.1
* [PATCH 7/7] sched_ext: idle: Introduce node-aware idle cpu kfunc helpers
From: Andrea Righi @ 2025-02-12 16:48 UTC (permalink / raw)
To: Tejun Heo, David Vernet, Changwoo Min
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Joel Fernandes, Ian May, bpf, linux-kernel,
Yury Norov
Introduce a new kfunc to retrieve the node associated to a CPU:
int scx_bpf_cpu_node(s32 cpu)
Add the following kfuncs to provide BPF schedulers direct access to
per-node idle cpumasks information:
const struct cpumask *scx_bpf_get_idle_cpumask_node(int node)
const struct cpumask *scx_bpf_get_idle_smtmask_node(int node)
s32 scx_bpf_pick_idle_cpu_in_node(const cpumask_t *cpus_allowed,
int node, u64 flags)
s32 scx_bpf_pick_any_cpu_node(const cpumask_t *cpus_allowed,
int node, u64 flags)
Moreover, trigger an scx error when any of the non-node aware idle CPU
kfuncs are used when SCX_OPS_BUILTIN_IDLE_PER_NODE is enabled.
Cc: Yury Norov <yury.norov@gmail.com>
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
kernel/sched/ext_idle.c | 180 +++++++++++++++++++++++
tools/sched_ext/include/scx/common.bpf.h | 5 +
tools/sched_ext/include/scx/compat.bpf.h | 31 ++++
3 files changed, 216 insertions(+)
diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
index d028fa809fe1d..8196c533c983d 100644
--- a/kernel/sched/ext_idle.c
+++ b/kernel/sched/ext_idle.c
@@ -721,6 +721,33 @@ void scx_idle_disable(void)
/********************************************************************************
* Helpers that can be called from the BPF scheduler.
*/
+
+static int validate_node(int node)
+{
+ if (!static_branch_likely(&scx_builtin_idle_per_node)) {
+ scx_ops_error("per-node idle tracking is disabled");
+ return -EOPNOTSUPP;
+ }
+
+ /* Return no entry for NUMA_NO_NODE (not a critical scx error) */
+ if (node == NUMA_NO_NODE)
+ return -ENOENT;
+
+ /* Make sure node is in a valid range */
+ if (node < 0 || node >= nr_node_ids) {
+ scx_ops_error("invalid node %d", node);
+ return -EINVAL;
+ }
+
+ /* Make sure the node is part of the set of possible nodes */
+ if (!node_possible(node)) {
+ scx_ops_error("unavailable node %d", node);
+ return -EINVAL;
+ }
+
+ return node;
+}
+
__bpf_kfunc_start_defs();
static bool check_builtin_idle_enabled(void)
@@ -732,6 +759,23 @@ static bool check_builtin_idle_enabled(void)
return false;
}
+/**
+ * scx_bpf_cpu_node - Return the NUMA node the given @cpu belongs to, or
+ * trigger an error if @cpu is invalid
+ * @cpu: target CPU
+ */
+__bpf_kfunc int scx_bpf_cpu_node(s32 cpu)
+{
+#ifdef CONFIG_NUMA
+ if (!ops_cpu_valid(cpu, NULL))
+ return NUMA_NO_NODE;
+
+ return cpu_to_node(cpu);
+#else
+ return 0;
+#endif
+}
+
/**
* scx_bpf_select_cpu_dfl - The default implementation of ops.select_cpu()
* @p: task_struct to select a CPU for
@@ -764,6 +808,27 @@ __bpf_kfunc s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
return prev_cpu;
}
+/**
+ * scx_bpf_get_idle_cpumask_node - Get a referenced kptr to the
+ * idle-tracking per-CPU cpumask of a target NUMA node.
+ *
+ * Returns an empty cpumask if idle tracking is not enabled, if @node is
+ * not valid, or running on a UP kernel. In this case the actual error will
+ * be reported to the BPF scheduler via scx_ops_error().
+ */
+__bpf_kfunc const struct cpumask *scx_bpf_get_idle_cpumask_node(int node)
+{
+ node = validate_node(node);
+ if (node < 0)
+ return cpu_none_mask;
+
+#ifdef CONFIG_SMP
+ return idle_cpumask(node)->cpu;
+#else
+ return cpu_none_mask;
+#endif
+}
+
/**
* scx_bpf_get_idle_cpumask - Get a referenced kptr to the idle-tracking
* per-CPU cpumask.
@@ -788,6 +853,31 @@ __bpf_kfunc const struct cpumask *scx_bpf_get_idle_cpumask(void)
#endif
}
+/**
+ * scx_bpf_get_idle_smtmask_node - Get a referenced kptr to the
+ * idle-tracking, per-physical-core cpumask of a target NUMA node. Can be
+ * used to determine if an entire physical core is free.
+ *
+ * Returns an empty cpumask if idle tracking is not enabled, if @node is
+ * not valid, or running on a UP kernel. In this case the actual error will
+ * be reported to the BPF scheduler via scx_ops_error().
+ */
+__bpf_kfunc const struct cpumask *scx_bpf_get_idle_smtmask_node(int node)
+{
+ node = validate_node(node);
+ if (node < 0)
+ return cpu_none_mask;
+
+#ifdef CONFIG_SMP
+ if (sched_smt_active())
+ return idle_cpumask(node)->smt;
+ else
+ return idle_cpumask(node)->cpu;
+#else
+ return cpu_none_mask;
+#endif
+}
+
/**
* scx_bpf_get_idle_smtmask - Get a referenced kptr to the idle-tracking,
* per-physical-core cpumask. Can be used to determine if an entire physical
@@ -852,6 +942,35 @@ __bpf_kfunc bool scx_bpf_test_and_clear_cpu_idle(s32 cpu)
return false;
}
+/**
+ * scx_bpf_pick_idle_cpu_in_node - Pick and claim an idle cpu from @node
+ * @cpus_allowed: Allowed cpumask
+ * @node: target NUMA node
+ * @flags: %SCX_PICK_IDLE_* flags
+ *
+ * Pick and claim an idle cpu in @cpus_allowed from the NUMA node @node.
+ *
+ * Returns the picked idle cpu number on success, or -%EBUSY if no matching
+ * cpu was found.
+ *
+ * The search starts from @node and proceeds to other online NUMA nodes in
+ * order of increasing distance (unless SCX_PICK_IDLE_IN_NODE is specified,
+ * in which case the search is limited to the target @node).
+ *
+ * Always returns an error if ops.update_idle() is implemented and
+ * %SCX_OPS_KEEP_BUILTIN_IDLE is not set, or if
+ * %SCX_OPS_BUILTIN_IDLE_PER_NODE is not set.
+ */
+__bpf_kfunc s32 scx_bpf_pick_idle_cpu_in_node(const struct cpumask *cpus_allowed,
+ int node, u64 flags)
+{
+ node = validate_node(node);
+ if (node < 0)
+ return node;
+
+ return scx_pick_idle_cpu(cpus_allowed, node, flags);
+}
+
/**
* scx_bpf_pick_idle_cpu - Pick and claim an idle cpu
* @cpus_allowed: Allowed cpumask
@@ -870,16 +989,64 @@ __bpf_kfunc bool scx_bpf_test_and_clear_cpu_idle(s32 cpu)
*
* Unavailable if ops.update_idle() is implemented and
* %SCX_OPS_KEEP_BUILTIN_IDLE is not set.
+ *
+ * Always returns an error if %SCX_OPS_BUILTIN_IDLE_PER_NODE is set, use
+ * scx_bpf_pick_idle_cpu_in_node() instead.
*/
__bpf_kfunc s32 scx_bpf_pick_idle_cpu(const struct cpumask *cpus_allowed,
u64 flags)
{
+ if (static_branch_maybe(CONFIG_NUMA, &scx_builtin_idle_per_node)) {
+ scx_ops_error("per-node idle tracking is enabled");
+ return -EBUSY;
+ }
+
if (!check_builtin_idle_enabled())
return -EBUSY;
return scx_pick_idle_cpu(cpus_allowed, NUMA_NO_NODE, flags);
}
+/**
+ * scx_bpf_pick_any_cpu_in_node - Pick and claim an idle cpu if available
+ * or pick any CPU from @node
+ * @cpus_allowed: Allowed cpumask
+ * @node: target NUMA node
+ * @flags: %SCX_PICK_IDLE_CPU_* flags
+ *
+ * Pick and claim an idle cpu in @cpus_allowed. If none is available, pick any
+ * CPU in @cpus_allowed. Guaranteed to succeed and returns the picked idle cpu
+ * number if @cpus_allowed is not empty. -%EBUSY is returned if @cpus_allowed is
+ * empty.
+ *
+ * The search starts from @node and proceeds to other online NUMA nodes in
+ * order of increasing distance (unless SCX_PICK_IDLE_IN_NODE is specified,
+ * in which case the search is limited to the target @node).
+ *
+ * If ops.update_idle() is implemented and %SCX_OPS_KEEP_BUILTIN_IDLE is not
+ * set, this function can't tell which CPUs are idle and will always pick any
+ * CPU.
+ */
+__bpf_kfunc s32 scx_bpf_pick_any_cpu_in_node(const struct cpumask *cpus_allowed,
+ int node, u64 flags)
+{
+ s32 cpu;
+
+ node = validate_node(node);
+ if (node < 0)
+ return node;
+
+ cpu = scx_pick_idle_cpu(cpus_allowed, node, flags);
+ if (cpu >= 0)
+ return cpu;
+
+ cpu = cpumask_any_distribute(cpus_allowed);
+ if (cpu < nr_cpu_ids)
+ return cpu;
+ else
+ return -EBUSY;
+}
+
/**
* scx_bpf_pick_any_cpu - Pick and claim an idle cpu if available or pick any CPU
* @cpus_allowed: Allowed cpumask
@@ -893,12 +1060,20 @@ __bpf_kfunc s32 scx_bpf_pick_idle_cpu(const struct cpumask *cpus_allowed,
* If ops.update_idle() is implemented and %SCX_OPS_KEEP_BUILTIN_IDLE is not
* set, this function can't tell which CPUs are idle and will always pick any
* CPU.
+ *
+ * Always returns an error if %SCX_OPS_BUILTIN_IDLE_PER_NODE is set, use
+ * scx_bpf_pick_any_cpu_in_node() instead.
*/
__bpf_kfunc s32 scx_bpf_pick_any_cpu(const struct cpumask *cpus_allowed,
u64 flags)
{
s32 cpu;
+ if (static_branch_maybe(CONFIG_NUMA, &scx_builtin_idle_per_node)) {
+ scx_ops_error("per-node idle tracking is enabled");
+ return -EBUSY;
+ }
+
if (static_branch_likely(&scx_builtin_idle_enabled)) {
cpu = scx_pick_idle_cpu(cpus_allowed, NUMA_NO_NODE, flags);
if (cpu >= 0)
@@ -915,11 +1090,16 @@ __bpf_kfunc s32 scx_bpf_pick_any_cpu(const struct cpumask *cpus_allowed,
__bpf_kfunc_end_defs();
BTF_KFUNCS_START(scx_kfunc_ids_idle)
+BTF_ID_FLAGS(func, scx_bpf_cpu_node)
+BTF_ID_FLAGS(func, scx_bpf_get_idle_cpumask_node, KF_ACQUIRE)
BTF_ID_FLAGS(func, scx_bpf_get_idle_cpumask, KF_ACQUIRE)
+BTF_ID_FLAGS(func, scx_bpf_get_idle_smtmask_node, KF_ACQUIRE)
BTF_ID_FLAGS(func, scx_bpf_get_idle_smtmask, KF_ACQUIRE)
BTF_ID_FLAGS(func, scx_bpf_put_idle_cpumask, KF_RELEASE)
BTF_ID_FLAGS(func, scx_bpf_test_and_clear_cpu_idle)
+BTF_ID_FLAGS(func, scx_bpf_pick_idle_cpu_in_node, KF_RCU)
BTF_ID_FLAGS(func, scx_bpf_pick_idle_cpu, KF_RCU)
+BTF_ID_FLAGS(func, scx_bpf_pick_any_cpu_in_node, KF_RCU)
BTF_ID_FLAGS(func, scx_bpf_pick_any_cpu, KF_RCU)
BTF_KFUNCS_END(scx_kfunc_ids_idle)
diff --git a/tools/sched_ext/include/scx/common.bpf.h b/tools/sched_ext/include/scx/common.bpf.h
index f1caf9fc8f8cc..0acc0f4c087e5 100644
--- a/tools/sched_ext/include/scx/common.bpf.h
+++ b/tools/sched_ext/include/scx/common.bpf.h
@@ -64,14 +64,19 @@ u32 scx_bpf_cpuperf_cap(s32 cpu) __ksym __weak;
u32 scx_bpf_cpuperf_cur(s32 cpu) __ksym __weak;
void scx_bpf_cpuperf_set(s32 cpu, u32 perf) __ksym __weak;
u32 scx_bpf_nr_cpu_ids(void) __ksym __weak;
+int scx_bpf_cpu_node(s32 cpu) __ksym __weak;
const struct cpumask *scx_bpf_get_possible_cpumask(void) __ksym __weak;
const struct cpumask *scx_bpf_get_online_cpumask(void) __ksym __weak;
void scx_bpf_put_cpumask(const struct cpumask *cpumask) __ksym __weak;
+const struct cpumask *scx_bpf_get_idle_cpumask_node(int node) __ksym __weak;
const struct cpumask *scx_bpf_get_idle_cpumask(void) __ksym;
+const struct cpumask *scx_bpf_get_idle_smtmask_node(int node) __ksym __weak;
const struct cpumask *scx_bpf_get_idle_smtmask(void) __ksym;
void scx_bpf_put_idle_cpumask(const struct cpumask *cpumask) __ksym;
bool scx_bpf_test_and_clear_cpu_idle(s32 cpu) __ksym;
+s32 scx_bpf_pick_idle_cpu_in_node(const cpumask_t *cpus_allowed, int node, u64 flags) __ksym __weak;
s32 scx_bpf_pick_idle_cpu(const cpumask_t *cpus_allowed, u64 flags) __ksym;
+s32 scx_bpf_pick_any_cpu_in_node(const cpumask_t *cpus_allowed, int node, u64 flags) __ksym __weak;
s32 scx_bpf_pick_any_cpu(const cpumask_t *cpus_allowed, u64 flags) __ksym;
bool scx_bpf_task_running(const struct task_struct *p) __ksym;
s32 scx_bpf_task_cpu(const struct task_struct *p) __ksym;
diff --git a/tools/sched_ext/include/scx/compat.bpf.h b/tools/sched_ext/include/scx/compat.bpf.h
index e5fa72f9bf22b..617ed0ec85dc4 100644
--- a/tools/sched_ext/include/scx/compat.bpf.h
+++ b/tools/sched_ext/include/scx/compat.bpf.h
@@ -182,6 +182,37 @@ static inline bool __COMPAT_is_enq_cpu_selected(u64 enq_flags)
scx_bpf_now() : \
bpf_ktime_get_ns())
+/*
+ *
+ * v6.15: Introduce NUMA-aware kfuncs to operate with per-node idle
+ * cpumasks.
+ *
+ * Preserve the following __COMPAT_scx_*_node macros until v6.17.
+ */
+#define __COMPAT_scx_bpf_cpu_node(cpu) \
+ (bpf_ksym_exists(scx_bpf_cpu_node) ? \
+ scx_bpf_cpu_node(cpu) : 0)
+
+#define __COMPAT_scx_bpf_get_idle_cpumask_node(node) \
+ (bpf_ksym_exists(scx_bpf_get_idle_cpumask_node) ? \
+ scx_bpf_get_idle_cpumask_node(node) : \
+ scx_bpf_get_idle_cpumask()) \
+
+#define __COMPAT_scx_bpf_get_idle_smtmask_node(node) \
+ (bpf_ksym_exists(scx_bpf_get_idle_smtmask_node) ? \
+ scx_bpf_get_idle_smtmask_node(node) : \
+ scx_bpf_get_idle_smtmask())
+
+#define __COMPAT_scx_bpf_pick_idle_cpu_in_node(cpus_allowed, node, flags) \
+ (bpf_ksym_exists(scx_bpf_pick_idle_cpu_in_node) ? \
+ scx_bpf_pick_idle_cpu_in_node(cpus_allowed, node, flags) : \
+ scx_bpf_pick_idle_cpu(cpus_allowed, flags))
+
+#define __COMPAT_scx_bpf_pick_any_cpu_in_node(cpus_allowed, node, flags) \
+ (bpf_ksym_exists(scx_bpf_pick_any_cpu_in_node) ? \
+ scx_bpf_pick_any_cpu_in_node(cpus_allowed, node, flags) : \
+ scx_bpf_pick_any_cpu(cpus_allowed, flags))
+
/*
* Define sched_ext_ops. This may be expanded to define multiple variants for
* backward compatibility. See compat.h::SCX_OPS_LOAD/ATTACH().
--
2.48.1
* Re: [PATCH 6/7] sched_ext: idle: Per-node idle cpumasks
From: kernel test robot @ 2025-02-13 10:57 UTC (permalink / raw)
To: Andrea Righi, Tejun Heo, David Vernet, Changwoo Min
Cc: llvm, oe-kbuild-all, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
Mel Gorman, Valentin Schneider, Joel Fernandes, Ian May, bpf,
linux-kernel, Yury Norov
Hi Andrea,
kernel test robot noticed the following build errors:
[auto build test ERROR on next-20250212]
[cannot apply to tip/sched/core akpm-mm/mm-everything tip/master linus/master tip/auto-latest v6.14-rc2 v6.14-rc1 v6.13]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Andrea-Righi/mm-numa-Introduce-nearest_node_nodemask/20250213-014857
base: next-20250212
patch link: https://lore.kernel.org/r/20250212165006.490130-7-arighi%40nvidia.com
patch subject: [PATCH 6/7] sched_ext: idle: Per-node idle cpumasks
config: i386-buildonly-randconfig-005-20250213 (https://download.01.org/0day-ci/archive/20250213/202502131834.ni8ojoRO-lkp@intel.com/config)
compiler: clang version 19.1.3 (https://github.com/llvm/llvm-project ab51eccf88f5321e7c60591c5546b254b6afab99)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250213/202502131834.ni8ojoRO-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202502131834.ni8ojoRO-lkp@intel.com/
All errors (new ones prefixed by >>):
In file included from kernel/sched/build_policy.c:63:
kernel/sched/ext.c:6014:31: warning: bitwise operation between different enumeration types ('enum scx_enq_flags' and 'enum scx_deq_flags') [-Wenum-enum-conversion]
6014 | WRITE_ONCE(v, SCX_ENQ_WAKEUP | SCX_DEQ_SLEEP | SCX_KICK_PREEMPT |
| ~~~~~~~~~~~~~~ ^ ~~~~~~~~~~~~~
include/asm-generic/rwonce.h:61:18: note: expanded from macro 'WRITE_ONCE'
61 | __WRITE_ONCE(x, val); \
| ^~~
include/asm-generic/rwonce.h:55:33: note: expanded from macro '__WRITE_ONCE'
55 | *(volatile typeof(x) *)&(x) = (val); \
| ^~~
In file included from kernel/sched/build_policy.c:64:
>> kernel/sched/ext_idle.c:136:9: error: '__section__' attribute only applies to functions, global variables, Objective-C methods, and Objective-C properties
136 | static DEFINE_PER_CPU(nodemask_t, per_cpu_unvisited);
| ^
include/linux/percpu-defs.h:115:2: note: expanded from macro 'DEFINE_PER_CPU'
115 | DEFINE_PER_CPU_SECTION(type, name, "")
| ^
include/linux/percpu-defs.h:93:2: note: expanded from macro 'DEFINE_PER_CPU_SECTION'
93 | __PCPU_DUMMY_ATTRS char __pcpu_unique_##name; \
| ^
include/linux/percpu-defs.h:54:2: note: expanded from macro '__PCPU_DUMMY_ATTRS'
54 | __section(".discard") __attribute__((unused))
| ^
include/linux/compiler_attributes.h:321:56: note: expanded from macro '__section'
321 | #define __section(section) __attribute__((__section__(section)))
| ^
In file included from kernel/sched/build_policy.c:64:
>> kernel/sched/ext_idle.c:136:9: error: non-extern declaration of '__pcpu_unique_per_cpu_unvisited' follows extern declaration
include/linux/percpu-defs.h:115:2: note: expanded from macro 'DEFINE_PER_CPU'
115 | DEFINE_PER_CPU_SECTION(type, name, "")
| ^
include/linux/percpu-defs.h:93:26: note: expanded from macro 'DEFINE_PER_CPU_SECTION'
93 | __PCPU_DUMMY_ATTRS char __pcpu_unique_##name; \
| ^
<scratch space>:82:1: note: expanded from here
82 | __pcpu_unique_per_cpu_unvisited
| ^
kernel/sched/ext_idle.c:136:9: note: previous declaration is here
include/linux/percpu-defs.h:115:2: note: expanded from macro 'DEFINE_PER_CPU'
115 | DEFINE_PER_CPU_SECTION(type, name, "")
| ^
include/linux/percpu-defs.h:92:33: note: expanded from macro 'DEFINE_PER_CPU_SECTION'
92 | extern __PCPU_DUMMY_ATTRS char __pcpu_unique_##name; \
| ^
<scratch space>:81:1: note: expanded from here
81 | __pcpu_unique_per_cpu_unvisited
| ^
In file included from kernel/sched/build_policy.c:64:
>> kernel/sched/ext_idle.c:136:9: error: 'section' attribute only applies to functions, global variables, Objective-C methods, and Objective-C properties
136 | static DEFINE_PER_CPU(nodemask_t, per_cpu_unvisited);
| ^
include/linux/percpu-defs.h:115:2: note: expanded from macro 'DEFINE_PER_CPU'
115 | DEFINE_PER_CPU_SECTION(type, name, "")
| ^
include/linux/percpu-defs.h:95:2: note: expanded from macro 'DEFINE_PER_CPU_SECTION'
95 | __PCPU_ATTRS(sec) __weak __typeof__(type) name
| ^
include/linux/percpu-defs.h:50:26: note: expanded from macro '__PCPU_ATTRS'
50 | __percpu __attribute__((section(PER_CPU_BASE_SECTION sec))) \
| ^
In file included from kernel/sched/build_policy.c:64:
>> kernel/sched/ext_idle.c:136:36: error: non-extern declaration of 'per_cpu_unvisited' follows extern declaration
136 | static DEFINE_PER_CPU(nodemask_t, per_cpu_unvisited);
| ^
kernel/sched/ext_idle.c:136:36: note: previous declaration is here
>> kernel/sched/ext_idle.c:136:9: error: weak declaration cannot have internal linkage
136 | static DEFINE_PER_CPU(nodemask_t, per_cpu_unvisited);
| ^
include/linux/percpu-defs.h:115:2: note: expanded from macro 'DEFINE_PER_CPU'
115 | DEFINE_PER_CPU_SECTION(type, name, "")
| ^
include/linux/percpu-defs.h:95:20: note: expanded from macro 'DEFINE_PER_CPU_SECTION'
95 | __PCPU_ATTRS(sec) __weak __typeof__(type) name
| ^
include/linux/compiler_attributes.h:403:56: note: expanded from macro '__weak'
403 | #define __weak __attribute__((__weak__))
| ^
1 warning and 5 errors generated.
vim +/__section__ +136 kernel/sched/ext_idle.c
133
134 static s32 pick_idle_cpu_from_other_nodes(const struct cpumask *cpus_allowed, int node, u64 flags)
135 {
> 136 static DEFINE_PER_CPU(nodemask_t, per_cpu_unvisited);
137 nodemask_t *unvisited = this_cpu_ptr(&per_cpu_unvisited);
138 s32 cpu = -EBUSY;
139
140 preempt_disable();
141 unvisited = this_cpu_ptr(&per_cpu_unvisited);
142
143 /*
144 * Restrict the search to the online nodes, excluding the current
145 * one.
146 */
147 nodes_clear(*unvisited);
148 nodes_or(*unvisited, *unvisited, node_states[N_ONLINE]);
149 node_clear(node, *unvisited);
150
151 /*
152 * Traverse all nodes in order of increasing distance, starting
153 * from @node.
154 *
155 * This loop is O(N^2), with N being the amount of NUMA nodes,
156 * which might be quite expensive in large NUMA systems. However,
157 * this complexity comes into play only when a scheduler enables
158 * SCX_OPS_BUILTIN_IDLE_PER_NODE and it's requesting an idle CPU
159 * without specifying a target NUMA node, so it shouldn't be a
160 * bottleneck is most cases.
161 *
162 * As a future optimization we may want to cache the list of nodes
163 * in a per-node array, instead of actually traversing them every
164 * time.
165 */
166 for_each_node_numadist(node, *unvisited) {
167 cpu = pick_idle_cpu_in_node(cpus_allowed, node, flags);
168 if (cpu >= 0)
169 break;
170 }
171 preempt_enable();
172
173 return cpu;
174 }
175
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
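All five clang errors above stem from one issue: DEFINE_PER_CPU() expands to __section() and __weak attributes, which are only valid at file scope, not on a block-scope static inside a function. A sketch of the usual fix (hedged: not necessarily the author's final resolution) is to hoist the definition out of the function and keep only the this_cpu_ptr() lookup inside it:

```c
/* Per-CPU variables must be defined at file scope: DEFINE_PER_CPU()
 * relies on section and weak attributes that clang rejects inside a
 * function body. */
static DEFINE_PER_CPU(nodemask_t, per_cpu_unvisited);

static s32 pick_idle_cpu_from_other_nodes(const struct cpumask *cpus_allowed,
					  int node, u64 flags)
{
	nodemask_t *unvisited;
	s32 cpu = -EBUSY;

	preempt_disable();
	unvisited = this_cpu_ptr(&per_cpu_unvisited);
	/* ... search logic unchanged ... */
	preempt_enable();

	return cpu;
}
```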
* Re: [PATCH 1/7] nodemask: numa: reorganize inclusion path
2025-02-12 16:48 ` [PATCH 1/7] nodemask: numa: reorganize inclusion path Andrea Righi
@ 2025-02-13 15:29 ` Yury Norov
2025-02-13 15:59 ` Andrea Righi
0 siblings, 1 reply; 21+ messages in thread
From: Yury Norov @ 2025-02-13 15:29 UTC (permalink / raw)
To: Andrea Righi
Cc: Tejun Heo, David Vernet, Changwoo Min, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
Joel Fernandes, Ian May, bpf, linux-kernel
On Wed, Feb 12, 2025 at 05:48:08PM +0100, Andrea Righi wrote:
> From: Yury Norov <yury.norov@gmail.com>
>
> Nodemasks now pull linux/numa.h for MAX_NUMNODES and NUMA_NO_NODE
> macros. This series makes numa.h depending on nodemasks, so we hit
> a circular dependency.
>
> Nodemasks library is highly employed by NUMA code, and it would be
> logical to resolve the circular dependency by making NUMA headers
> dependent nodemask.h.
>
> Signed-off-by: Yury Norov <yury.norov@gmail.com>
You must sign-off this patch yourself as well, if you pull it with
your series.
> ---
> include/linux/nodemask.h | 1 -
> include/linux/nodemask_types.h | 11 ++++++++++-
> include/linux/numa.h | 10 +---------
> 3 files changed, 11 insertions(+), 11 deletions(-)
>
> diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h
> index 9fd7a0ce9c1a7..27644a6edc6ee 100644
> --- a/include/linux/nodemask.h
> +++ b/include/linux/nodemask.h
> @@ -94,7 +94,6 @@
> #include <linux/bitmap.h>
> #include <linux/minmax.h>
> #include <linux/nodemask_types.h>
> -#include <linux/numa.h>
> #include <linux/random.h>
>
> extern nodemask_t _unused_nodemask_arg_;
> diff --git a/include/linux/nodemask_types.h b/include/linux/nodemask_types.h
> index 6b28d97ea6ed0..f850a48742f1f 100644
> --- a/include/linux/nodemask_types.h
> +++ b/include/linux/nodemask_types.h
> @@ -3,7 +3,16 @@
> #define __LINUX_NODEMASK_TYPES_H
>
> #include <linux/bitops.h>
> -#include <linux/numa.h>
> +
> +#ifdef CONFIG_NODES_SHIFT
> +#define NODES_SHIFT CONFIG_NODES_SHIFT
> +#else
> +#define NODES_SHIFT 0
> +#endif
> +
> +#define MAX_NUMNODES (1 << NODES_SHIFT)
> +
> +#define NUMA_NO_NODE (-1)
>
> typedef struct { DECLARE_BITMAP(bits, MAX_NUMNODES); } nodemask_t;
>
> diff --git a/include/linux/numa.h b/include/linux/numa.h
> index 3567e40329ebc..31d8bf8a951a7 100644
> --- a/include/linux/numa.h
> +++ b/include/linux/numa.h
> @@ -3,16 +3,8 @@
> #define _LINUX_NUMA_H
> #include <linux/init.h>
> #include <linux/types.h>
> +#include <linux/nodemask.h>
>
> -#ifdef CONFIG_NODES_SHIFT
> -#define NODES_SHIFT CONFIG_NODES_SHIFT
> -#else
> -#define NODES_SHIFT 0
> -#endif
> -
> -#define MAX_NUMNODES (1 << NODES_SHIFT)
> -
> -#define NUMA_NO_NODE (-1)
> #define NUMA_NO_MEMBLK (-1)
>
> static inline bool numa_valid_node(int nid)
> --
> 2.48.1
* Re: [PATCH 2/7] mm/numa: Introduce nearest_node_nodemask()
2025-02-12 16:48 ` [PATCH 2/7] mm/numa: Introduce nearest_node_nodemask() Andrea Righi
@ 2025-02-13 15:57 ` Yury Norov
2025-02-13 16:19 ` Andrea Righi
0 siblings, 1 reply; 21+ messages in thread
From: Yury Norov @ 2025-02-13 15:57 UTC (permalink / raw)
To: Andrea Righi
Cc: Tejun Heo, David Vernet, Changwoo Min, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
Joel Fernandes, Ian May, bpf, linux-kernel
On Wed, Feb 12, 2025 at 05:48:09PM +0100, Andrea Righi wrote:
> Introduce the new helper nearest_node_nodemask() to find the closest
> node in a specified nodemask from a given starting node.
>
> Returns MAX_NUMNODES if no node is found.
>
> Cc: Yury Norov <yury.norov@gmail.com>
> Signed-off-by: Andrea Righi <arighi@nvidia.com>
Suggested-by: Yury Norov [NVIDIA] <yury.norov@gmail.com>
> ---
> include/linux/numa.h | 7 +++++++
> mm/mempolicy.c | 32 ++++++++++++++++++++++++++++++++
> 2 files changed, 39 insertions(+)
>
> diff --git a/include/linux/numa.h b/include/linux/numa.h
> index 31d8bf8a951a7..e6baaf6051bcf 100644
> --- a/include/linux/numa.h
> +++ b/include/linux/numa.h
> @@ -31,6 +31,8 @@ void __init alloc_offline_node_data(int nid);
> /* Generic implementation available */
> int numa_nearest_node(int node, unsigned int state);
>
> +int nearest_node_nodemask(int node, nodemask_t *mask);
> +
See how you use it. It looks a bit inconsistent with the other functions:
#define for_each_node_numadist(node, unvisited) \
for (int start = (node), \
node = nearest_node_nodemask((start), &(unvisited)); \
node < MAX_NUMNODES; \
node_clear(node, (unvisited)), \
node = nearest_node_nodemask((start), &(unvisited)))
I would suggest making it consistent with the rest of the API:
#define node_clear(node, dst) __node_clear((node), &(dst))
static __always_inline void __node_clear(int node, volatile nodemask_t *dstp)
{
clear_bit(node, dstp->bits);
}
> #ifndef memory_add_physaddr_to_nid
> int memory_add_physaddr_to_nid(u64 start);
> #endif
> @@ -47,6 +49,11 @@ static inline int numa_nearest_node(int node, unsigned int state)
> return NUMA_NO_NODE;
> }
>
> +static inline int nearest_node_nodemask(int node, nodemask_t *mask)
> +{
> + return NUMA_NO_NODE;
> +}
> +
> static inline int memory_add_physaddr_to_nid(u64 start)
> {
> return 0;
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 162407fbf2bc7..1e2acf187ea3a 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -196,6 +196,38 @@ int numa_nearest_node(int node, unsigned int state)
> }
> EXPORT_SYMBOL_GPL(numa_nearest_node);
>
> +/**
> + * nearest_node_nodemask - Find the node in @mask at the nearest distance
> + * from @node.
> + *
> + * @node: the node to start the search from.
> + * @mask: a pointer to a nodemask representing the allowed nodes.
> + *
> + * This function iterates over all nodes in the given state and calculates
> + * the distance to the starting node.
> + *
> + * Returns the node ID in @mask that is the closest in terms of distance
> + * from @node, or MAX_NUMNODES if no node is found.
> + */
> +int nearest_node_nodemask(int node, nodemask_t *mask)
> +{
> + int dist, n, min_dist = INT_MAX, min_node = MAX_NUMNODES;
> +
> + if (node == NUMA_NO_NODE)
> + return MAX_NUMNODES;
This makes it unclear: you make it legal to pass NUMA_NO_NODE, but
your function returns something useless. I don't think it would help
users in any reasonable scenario.
So, if you don't want users to call this with node == NUMA_NO_NODE,
just describe it in the comment on top of the function. Otherwise, please
do something useful like
if (node == NUMA_NO_NODE)
node = current_node;
I would go with option 1. Notice, node_distance() doesn't bother to
check against NUMA_NO_NODE.
> + for_each_node_mask(n, *mask) {
> + dist = node_distance(node, n);
> + if (dist < min_dist) {
> + min_dist = dist;
> + min_node = n;
> + }
> + }
> +
> + return min_node;
> +}
> +EXPORT_SYMBOL_GPL(nearest_node_nodemask);
> +
> struct mempolicy *get_task_policy(struct task_struct *p)
> {
> struct mempolicy *pol = p->mempolicy;
> --
> 2.48.1
* Re: [PATCH 1/7] nodemask: numa: reorganize inclusion path
2025-02-13 15:29 ` Yury Norov
@ 2025-02-13 15:59 ` Andrea Righi
0 siblings, 0 replies; 21+ messages in thread
From: Andrea Righi @ 2025-02-13 15:59 UTC (permalink / raw)
To: Yury Norov
Cc: Tejun Heo, David Vernet, Changwoo Min, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
Joel Fernandes, Ian May, bpf, linux-kernel
On Thu, Feb 13, 2025 at 10:29:48AM -0500, Yury Norov wrote:
> On Wed, Feb 12, 2025 at 05:48:08PM +0100, Andrea Righi wrote:
> > From: Yury Norov <yury.norov@gmail.com>
> >
> > Nodemasks now pull linux/numa.h for MAX_NUMNODES and NUMA_NO_NODE
> > macros. This series makes numa.h depending on nodemasks, so we hit
> > a circular dependency.
> >
> > Nodemasks library is highly employed by NUMA code, and it would be
> > logical to resolve the circular dependency by making NUMA headers
> > dependent nodemask.h.
> >
> > Signed-off-by: Yury Norov <yury.norov@gmail.com>
>
> You must sign-off this patch yourself as well, if you pull it with
> your series.
Ok, will fix it in the next version.
Thanks,
-Andrea
* Re: [PATCH 3/7] sched/topology: Introduce for_each_node_numadist() iterator
2025-02-12 16:48 ` [PATCH 3/7] sched/topology: Introduce for_each_node_numadist() iterator Andrea Righi
@ 2025-02-13 16:02 ` Yury Norov
2025-02-13 16:32 ` Andrea Righi
0 siblings, 1 reply; 21+ messages in thread
From: Yury Norov @ 2025-02-13 16:02 UTC (permalink / raw)
To: Andrea Righi
Cc: Tejun Heo, David Vernet, Changwoo Min, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
Joel Fernandes, Ian May, bpf, linux-kernel
On Wed, Feb 12, 2025 at 05:48:10PM +0100, Andrea Righi wrote:
> Introduce the new helper for_each_node_numadist() to iterate over node
> IDs in order of increasing NUMA distance from a given starting node.
>
> This iterator is somehow similar to for_each_numa_hop_mask(), but
> instead of providing a cpumask at each iteration, it provides a node ID.
>
> Example usage:
>
> nodemask_t unvisited = NODE_MASK_ALL;
> int node, start = cpu_to_node(smp_processor_id());
>
> node = start;
> for_each_node_numadist(node, unvisited)
> pr_info("node (%d, %d) -> %d\n",
> start, node, node_distance(start, node));
>
> On a system with equidistant nodes:
>
> $ numactl -H
> ...
> node distances:
> node 0 1 2 3
> 0: 10 20 20 20
> 1: 20 10 20 20
> 2: 20 20 10 20
> 3: 20 20 20 10
>
> Output of the example above (on node 0):
>
> [ 7.367022] node (0, 0) -> 10
> [ 7.367151] node (0, 1) -> 20
> [ 7.367186] node (0, 2) -> 20
> [ 7.367247] node (0, 3) -> 20
>
> On a system with non-equidistant nodes (simulated using virtme-ng):
>
> $ numactl -H
> ...
> node distances:
> node 0 1 2 3
> 0: 10 51 31 41
> 1: 51 10 21 61
> 2: 31 21 10 11
> 3: 41 61 11 10
>
> Output of the example above (on node 0):
>
> [ 8.953644] node (0, 0) -> 10
> [ 8.953712] node (0, 2) -> 31
> [ 8.953764] node (0, 3) -> 41
> [ 8.953817] node (0, 1) -> 51
>
> Cc: Yury Norov <yury.norov@gmail.com>
> Signed-off-by: Andrea Righi <arighi@nvidia.com>
Suggested-by: Yury Norov [NVIDIA] <yury.norov@gmail.com>
> ---
> include/linux/topology.h | 30 ++++++++++++++++++++++++++++++
> 1 file changed, 30 insertions(+)
>
> diff --git a/include/linux/topology.h b/include/linux/topology.h
> index 52f5850730b3e..932d8b819c1b7 100644
> --- a/include/linux/topology.h
> +++ b/include/linux/topology.h
> @@ -261,6 +261,36 @@ sched_numa_hop_mask(unsigned int node, unsigned int hops)
> }
> #endif /* CONFIG_NUMA */
>
> +/**
> + * for_each_node_numadist() - iterate over nodes in increasing distance
> + * order, starting from a given node
> + * @node: the iteration variable and the starting node.
> + * @unvisited: a nodemask to keep track of the unvisited nodes.
> + *
> + * This macro iterates over NUMA node IDs in increasing distance from the
> + * starting @node and yields MAX_NUMNODES when all the nodes have been
> + * visited.
> + *
> + * Note that by the time the loop completes, the @unvisited nodemask will
> + * be fully cleared, unless the loop exits early.
> + *
> + * The difference between for_each_node() and for_each_node_numadist() is
> + * that the former allows to iterate over nodes in numerical order, whereas
> + * the latter iterates over nodes in increasing order of distance.
> + *
> + * This complexity of this iterator is O(N^2), where N represents the
> + * number of nodes, as each iteration involves scanning all nodes to
> + * find the one with the shortest distance.
> + *
> + * Requires rcu_lock to be held.
> + */
> +#define for_each_node_numadist(node, unvisited) \
> + for (int start = (node), \
> + node = nearest_node_nodemask((start), &(unvisited)); \
> + node < MAX_NUMNODES; \
> + node_clear(node, (unvisited)), \
> + node = nearest_node_nodemask((start), &(unvisited)))
The 'node' should be protected with parentheses inside the macro; 'start'
should not, because you declare it right inside. Also, 'start' is a common
word, so there's a chance that you'll mask an already existing 'start' in
the enclosing scope. Maybe __start, or simply __s?
> /**
> * for_each_numa_hop_mask - iterate over cpumasks of increasing NUMA distance
> * from a given node.
> --
> 2.48.1
* Re: [PATCH 5/7] sched_ext: idle: Introduce SCX_OPS_BUILTIN_IDLE_PER_NODE
2025-02-12 16:48 ` [PATCH 5/7] sched_ext: idle: Introduce SCX_OPS_BUILTIN_IDLE_PER_NODE Andrea Righi
@ 2025-02-13 16:08 ` Yury Norov
2025-02-13 16:22 ` Andrea Righi
0 siblings, 1 reply; 21+ messages in thread
From: Yury Norov @ 2025-02-13 16:08 UTC (permalink / raw)
To: Andrea Righi
Cc: Tejun Heo, David Vernet, Changwoo Min, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
Joel Fernandes, Ian May, bpf, linux-kernel
On Wed, Feb 12, 2025 at 05:48:12PM +0100, Andrea Righi wrote:
> Add the new scheduler flag SCX_OPS_BUILTIN_IDLE_PER_NODE, which allows
> BPF schedulers to select between using a global flat idle cpumask or
> multiple per-node cpumasks.
>
> This only introduces the flag and the mechanism to enable/disable this
> feature without affecting any scheduling behavior.
>
> Cc: Yury Norov <yury.norov@gmail.com>
> Signed-off-by: Andrea Righi <arighi@nvidia.com>
> ---
> kernel/sched/ext.c | 21 ++++++++++++++++++--
> kernel/sched/ext_idle.c | 29 +++++++++++++++++++++-------
> kernel/sched/ext_idle.h | 4 ++--
> tools/sched_ext/include/scx/compat.h | 3 +++
> 4 files changed, 46 insertions(+), 11 deletions(-)
>
> diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
> index c47e7e2024a94..c3e154f0e8188 100644
> --- a/kernel/sched/ext.c
> +++ b/kernel/sched/ext.c
> @@ -138,6 +138,12 @@ enum scx_ops_flags {
> */
> SCX_OPS_ENQ_MIGRATION_DISABLED = 1LLU << 4,
>
> + /*
> + * If set, enable per-node idle cpumasks. If clear, use a single global
> + * flat idle cpumask.
> + */
> + SCX_OPS_BUILTIN_IDLE_PER_NODE = 1LLU << 5,
> +
> /*
> * CPU cgroup support flags
> */
> @@ -148,6 +154,7 @@ enum scx_ops_flags {
> SCX_OPS_ENQ_EXITING |
> SCX_OPS_ENQ_MIGRATION_DISABLED |
> SCX_OPS_SWITCH_PARTIAL |
> + SCX_OPS_BUILTIN_IDLE_PER_NODE |
> SCX_OPS_HAS_CGROUP_WEIGHT,
> };
>
> @@ -3409,7 +3416,7 @@ static void handle_hotplug(struct rq *rq, bool online)
> atomic_long_inc(&scx_hotplug_seq);
>
> if (scx_enabled())
> - scx_idle_update_selcpu_topology();
> + scx_idle_update_selcpu_topology(&scx_ops);
>
> if (online && SCX_HAS_OP(cpu_online))
> SCX_CALL_OP(SCX_KF_UNLOCKED, cpu_online, cpu);
> @@ -5184,6 +5191,16 @@ static int validate_ops(const struct sched_ext_ops *ops)
> return -EINVAL;
> }
>
> + /*
> + * SCX_OPS_BUILTIN_IDLE_PER_NODE requires built-in CPU idle
> + * selection policy to be enabled.
> + */
> + if ((ops->flags & SCX_OPS_BUILTIN_IDLE_PER_NODE) &&
> + (ops->update_idle && !(ops->flags & SCX_OPS_KEEP_BUILTIN_IDLE))) {
> + scx_ops_error("SCX_OPS_BUILTIN_IDLE_PER_NODE requires CPU idle selection enabled");
> + return -EINVAL;
> + }
> +
> return 0;
> }
>
> @@ -5308,7 +5325,7 @@ static int scx_ops_enable(struct sched_ext_ops *ops, struct bpf_link *link)
> static_branch_enable_cpuslocked(&scx_has_op[i]);
>
> check_hotplug_seq(ops);
> - scx_idle_update_selcpu_topology();
> + scx_idle_update_selcpu_topology(ops);
>
> cpus_read_unlock();
>
> diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
> index ed1804506585b..59b9e95238e97 100644
> --- a/kernel/sched/ext_idle.c
> +++ b/kernel/sched/ext_idle.c
> @@ -14,6 +14,9 @@
> /* Enable/disable built-in idle CPU selection policy */
> static DEFINE_STATIC_KEY_FALSE(scx_builtin_idle_enabled);
>
> +/* Enable/disable per-node idle cpumasks */
> +static DEFINE_STATIC_KEY_FALSE(scx_builtin_idle_per_node);
> +
> #ifdef CONFIG_SMP
> #ifdef CONFIG_CPUMASK_OFFSTACK
> #define CL_ALIGNED_IF_ONSTACK
> @@ -204,7 +207,7 @@ static bool llc_numa_mismatch(void)
> * CPU belongs to a single LLC domain, and that each LLC domain is entirely
> * contained within a single NUMA node.
> */
> -void scx_idle_update_selcpu_topology(void)
> +void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops)
> {
> bool enable_llc = false, enable_numa = false;
> unsigned int nr_cpus;
> @@ -237,13 +240,19 @@ void scx_idle_update_selcpu_topology(void)
> * If all CPUs belong to the same NUMA node and the same LLC domain,
> * enabling both NUMA and LLC optimizations is unnecessary, as checking
> * for an idle CPU in the same domain twice is redundant.
> + *
> + * If SCX_OPS_BUILTIN_IDLE_PER_NODE is enabled ignore the NUMA
> + * optimization, as we would naturally select idle CPUs within
> + * specific NUMA nodes querying the corresponding per-node cpumask.
> */
> - nr_cpus = numa_weight(cpu);
> - if (nr_cpus > 0) {
> - if (nr_cpus < num_online_cpus() && llc_numa_mismatch())
> - enable_numa = true;
> - pr_debug("sched_ext: NUMA=%*pb weight=%u\n",
> - cpumask_pr_args(numa_span(cpu)), numa_weight(cpu));
> + if (!(ops->flags & SCX_OPS_BUILTIN_IDLE_PER_NODE)) {
> + nr_cpus = numa_weight(cpu);
> + if (nr_cpus > 0) {
> + if (nr_cpus < num_online_cpus() && llc_numa_mismatch())
> + enable_numa = true;
> + pr_debug("sched_ext: NUMA=%*pb weight=%u\n",
> + cpumask_pr_args(numa_span(cpu)), numa_weight(cpu));
No need to call numa_weight(cpu) for the 2nd time, right?
> + }
> }
> rcu_read_unlock();
>
> @@ -530,6 +539,11 @@ void scx_idle_enable(struct sched_ext_ops *ops)
> }
> static_branch_enable(&scx_builtin_idle_enabled);
>
> + if (ops->flags & SCX_OPS_BUILTIN_IDLE_PER_NODE)
> + static_branch_enable(&scx_builtin_idle_per_node);
> + else
> + static_branch_disable(&scx_builtin_idle_per_node);
> +
> #ifdef CONFIG_SMP
> /*
> * Consider all online cpus idle. Should converge to the actual state
> @@ -543,6 +557,7 @@ void scx_idle_enable(struct sched_ext_ops *ops)
> void scx_idle_disable(void)
> {
> static_branch_disable(&scx_builtin_idle_enabled);
> + static_branch_disable(&scx_builtin_idle_per_node);
> }
>
> /********************************************************************************
> diff --git a/kernel/sched/ext_idle.h b/kernel/sched/ext_idle.h
> index bbac0fd9a5ddd..339b6ec9c4cb7 100644
> --- a/kernel/sched/ext_idle.h
> +++ b/kernel/sched/ext_idle.h
> @@ -13,12 +13,12 @@
> struct sched_ext_ops;
>
> #ifdef CONFIG_SMP
> -void scx_idle_update_selcpu_topology(void);
> +void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops);
> void scx_idle_init_masks(void);
> bool scx_idle_test_and_clear_cpu(int cpu);
> s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags);
> #else /* !CONFIG_SMP */
> -static inline void scx_idle_update_selcpu_topology(void) {}
> +static inline void scx_idle_update_selcpu_topology(struct sched_ext_ops *ops) {}
> static inline void scx_idle_init_masks(void) {}
> static inline bool scx_idle_test_and_clear_cpu(int cpu) { return false; }
> static inline s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags)
> diff --git a/tools/sched_ext/include/scx/compat.h b/tools/sched_ext/include/scx/compat.h
> index b50280e2ba2ba..d63cf40be8eee 100644
> --- a/tools/sched_ext/include/scx/compat.h
> +++ b/tools/sched_ext/include/scx/compat.h
> @@ -109,6 +109,9 @@ static inline bool __COMPAT_struct_has_field(const char *type, const char *field
> #define SCX_OPS_SWITCH_PARTIAL \
> __COMPAT_ENUM_OR_ZERO("scx_ops_flags", "SCX_OPS_SWITCH_PARTIAL")
>
> +#define SCX_OPS_BUILTIN_IDLE_PER_NODE \
> + __COMPAT_ENUM_OR_ZERO("scx_ops_flags", "SCX_OPS_BUILTIN_IDLE_PER_NODE")
> +
> static inline long scx_hotplug_seq(void)
> {
> int fd;
> --
> 2.48.1
* Re: [PATCH 2/7] mm/numa: Introduce nearest_node_nodemask()
2025-02-13 15:57 ` Yury Norov
@ 2025-02-13 16:19 ` Andrea Righi
2025-02-13 17:12 ` Yury Norov
0 siblings, 1 reply; 21+ messages in thread
From: Andrea Righi @ 2025-02-13 16:19 UTC (permalink / raw)
To: Yury Norov
Cc: Tejun Heo, David Vernet, Changwoo Min, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
Joel Fernandes, Ian May, bpf, linux-kernel
On Thu, Feb 13, 2025 at 10:57:00AM -0500, Yury Norov wrote:
> On Wed, Feb 12, 2025 at 05:48:09PM +0100, Andrea Righi wrote:
> > Introduce the new helper nearest_node_nodemask() to find the closest
> > node in a specified nodemask from a given starting node.
> >
> > Returns MAX_NUMNODES if no node is found.
> >
> > Cc: Yury Norov <yury.norov@gmail.com>
> > Signed-off-by: Andrea Righi <arighi@nvidia.com>
>
> Suggested-by: Yury Norov [NVIDIA] <yury.norov@gmail.com>
Ok.
>
> > ---
> > include/linux/numa.h | 7 +++++++
> > mm/mempolicy.c | 32 ++++++++++++++++++++++++++++++++
> > 2 files changed, 39 insertions(+)
> >
> > diff --git a/include/linux/numa.h b/include/linux/numa.h
> > index 31d8bf8a951a7..e6baaf6051bcf 100644
> > --- a/include/linux/numa.h
> > +++ b/include/linux/numa.h
> > @@ -31,6 +31,8 @@ void __init alloc_offline_node_data(int nid);
> > /* Generic implementation available */
> > int numa_nearest_node(int node, unsigned int state);
> >
> > +int nearest_node_nodemask(int node, nodemask_t *mask);
> > +
>
> See how you use it. It looks a bit inconsistent to the other functions:
>
> #define for_each_node_numadist(node, unvisited) \
> for (int start = (node), \
> node = nearest_node_nodemask((start), &(unvisited)); \
> node < MAX_NUMNODES; \
> node_clear(node, (unvisited)), \
> node = nearest_node_nodemask((start), &(unvisited)))
>
>
> I would suggest to make it aligned with the rest of the API:
>
> #define node_clear(node, dst) __node_clear((node), &(dst))
> static __always_inline void __node_clear(int node, volatile nodemask_t *dstp)
> {
> clear_bit(node, dstp->bits);
> }
Sorry Yury, can you elaborate more on this? What do you mean by
inconsistent? Is it the volatile nodemask_t *?
>
> > #ifndef memory_add_physaddr_to_nid
> > int memory_add_physaddr_to_nid(u64 start);
> > #endif
> > @@ -47,6 +49,11 @@ static inline int numa_nearest_node(int node, unsigned int state)
> > return NUMA_NO_NODE;
> > }
> >
> > +static inline int nearest_node_nodemask(int node, nodemask_t *mask)
> > +{
> > + return NUMA_NO_NODE;
> > +}
> > +
> > static inline int memory_add_physaddr_to_nid(u64 start)
> > {
> > return 0;
> > diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> > index 162407fbf2bc7..1e2acf187ea3a 100644
> > --- a/mm/mempolicy.c
> > +++ b/mm/mempolicy.c
> > @@ -196,6 +196,38 @@ int numa_nearest_node(int node, unsigned int state)
> > }
> > EXPORT_SYMBOL_GPL(numa_nearest_node);
> >
> > +/**
> > + * nearest_node_nodemask - Find the node in @mask at the nearest distance
> > + * from @node.
> > + *
> > + * @node: the node to start the search from.
> > + * @mask: a pointer to a nodemask representing the allowed nodes.
> > + *
> > + * This function iterates over all nodes in the given state and calculates
> > + * the distance to the starting node.
> > + *
> > + * Returns the node ID in @mask that is the closest in terms of distance
> > + * from @node, or MAX_NUMNODES if no node is found.
> > + */
> > +int nearest_node_nodemask(int node, nodemask_t *mask)
> > +{
> > + int dist, n, min_dist = INT_MAX, min_node = MAX_NUMNODES;
> > +
> > + if (node == NUMA_NO_NODE)
> > + return MAX_NUMNODES;
>
> This makes it unclear: you make it legal to pass NUMA_NO_NODE, but
> your function returns something useless. I don't think it would help
> users in any reasonable scenario.
>
> So, if you don't want user to call this with node == NUMA_NO_NODE,
> just describe it in comment on top of the function. Otherwise, please
> do something useful like
>
> if (node == NUMA_NO_NODE)
> node = current_node;
>
> I would go with option 1. Notice, node_distance() doesn't bother to
> check against NUMA_NO_NODE.
Hm... is it? Looking at __node_distance(), it doesn't seem safe to pass a
negative value (maybe I'm missing something?).
Anyway, I'd also prefer to go with option 1 and not implicitly assume
NUMA_NO_NODE == current node (it feels like that could hide nasty bugs).
So, I can add a comment in the description to clarify that NUMA_NO_NODE is
forbidden, but what if someone passes it anyway? Should we at least
WARN_ON_ONCE()?
>
> > + for_each_node_mask(n, *mask) {
> > + dist = node_distance(node, n);
> > + if (dist < min_dist) {
> > + min_dist = dist;
> > + min_node = n;
> > + }
> > + }
> > +
> > + return min_node;
> > +}
> > +EXPORT_SYMBOL_GPL(nearest_node_nodemask);
> > +
> > struct mempolicy *get_task_policy(struct task_struct *p)
> > {
> > struct mempolicy *pol = p->mempolicy;
> > --
> > 2.48.1
Thanks,
-Andrea
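For reference, option 1 with the guard floated above could take the following shape (purely a sketch of the alternative under discussion, not code from the series):

```c
int nearest_node_nodemask(int node, nodemask_t *mask)
{
	int dist, n, min_dist = INT_MAX, min_node = MAX_NUMNODES;

	/* Per the documented contract, NUMA_NO_NODE is a caller bug:
	 * warn once and return the "no node found" sentinel. */
	if (WARN_ON_ONCE(node == NUMA_NO_NODE))
		return MAX_NUMNODES;

	for_each_node_mask(n, *mask) {
		dist = node_distance(node, n);
		if (dist < min_dist) {
			min_dist = dist;
			min_node = n;
		}
	}

	return min_node;
}
```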
* Re: [PATCH 5/7] sched_ext: idle: Introduce SCX_OPS_BUILTIN_IDLE_PER_NODE
2025-02-13 16:08 ` Yury Norov
@ 2025-02-13 16:22 ` Andrea Righi
0 siblings, 0 replies; 21+ messages in thread
From: Andrea Righi @ 2025-02-13 16:22 UTC (permalink / raw)
To: Yury Norov
Cc: Tejun Heo, David Vernet, Changwoo Min, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
Joel Fernandes, Ian May, bpf, linux-kernel
On Thu, Feb 13, 2025 at 11:08:01AM -0500, Yury Norov wrote:
...
> > @@ -237,13 +240,19 @@ void scx_idle_update_selcpu_topology(void)
> > * If all CPUs belong to the same NUMA node and the same LLC domain,
> > * enabling both NUMA and LLC optimizations is unnecessary, as checking
> > * for an idle CPU in the same domain twice is redundant.
> > + *
> > + * If SCX_OPS_BUILTIN_IDLE_PER_NODE is enabled ignore the NUMA
> > + * optimization, as we would naturally select idle CPUs within
> > + * specific NUMA nodes querying the corresponding per-node cpumask.
> > */
> > - nr_cpus = numa_weight(cpu);
> > - if (nr_cpus > 0) {
> > - if (nr_cpus < num_online_cpus() && llc_numa_mismatch())
> > - enable_numa = true;
> > - pr_debug("sched_ext: NUMA=%*pb weight=%u\n",
> > - cpumask_pr_args(numa_span(cpu)), numa_weight(cpu));
> > + if (!(ops->flags & SCX_OPS_BUILTIN_IDLE_PER_NODE)) {
> > + nr_cpus = numa_weight(cpu);
> > + if (nr_cpus > 0) {
> > + if (nr_cpus < num_online_cpus() && llc_numa_mismatch())
> > + enable_numa = true;
> > + pr_debug("sched_ext: NUMA=%*pb weight=%u\n",
> > + cpumask_pr_args(numa_span(cpu)), numa_weight(cpu));
>
> No need to call numa_weight(cpu) for the 2nd time, right?
Ah good catch, will fix that.
Thanks,
-Andrea
* Re: [PATCH 3/7] sched/topology: Introduce for_each_node_numadist() iterator
2025-02-13 16:02 ` Yury Norov
@ 2025-02-13 16:32 ` Andrea Righi
0 siblings, 0 replies; 21+ messages in thread
From: Andrea Righi @ 2025-02-13 16:32 UTC (permalink / raw)
To: Yury Norov
Cc: Tejun Heo, David Vernet, Changwoo Min, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
Joel Fernandes, Ian May, bpf, linux-kernel
On Thu, Feb 13, 2025 at 11:02:44AM -0500, Yury Norov wrote:
...
> > Cc: Yury Norov <yury.norov@gmail.com>
> > Signed-off-by: Andrea Righi <arighi@nvidia.com>
>
> Suggested-by: Yury Norov [NVIDIA] <yury.norov@gmail.com>
Ok.
>
> > ---
> > include/linux/topology.h | 30 ++++++++++++++++++++++++++++++
> > 1 file changed, 30 insertions(+)
> >
> > diff --git a/include/linux/topology.h b/include/linux/topology.h
> > index 52f5850730b3e..932d8b819c1b7 100644
> > --- a/include/linux/topology.h
> > +++ b/include/linux/topology.h
> > @@ -261,6 +261,36 @@ sched_numa_hop_mask(unsigned int node, unsigned int hops)
> > }
> > #endif /* CONFIG_NUMA */
> >
> > +/**
> > + * for_each_node_numadist() - iterate over nodes in increasing distance
> > + * order, starting from a given node
> > + * @node: the iteration variable and the starting node.
> > + * @unvisited: a nodemask to keep track of the unvisited nodes.
> > + *
> > + * This macro iterates over NUMA node IDs in increasing distance from the
> > + * starting @node and yields MAX_NUMNODES when all the nodes have been
> > + * visited.
> > + *
> > + * Note that by the time the loop completes, the @unvisited nodemask will
> > + * be fully cleared, unless the loop exits early.
> > + *
> > + * The difference between for_each_node() and for_each_node_numadist() is
> > + * that the former allows to iterate over nodes in numerical order, whereas
> > + * the latter iterates over nodes in increasing order of distance.
> > + *
> > + * This complexity of this iterator is O(N^2), where N represents the
> > + * number of nodes, as each iteration involves scanning all nodes to
> > + * find the one with the shortest distance.
> > + *
> > + * Requires rcu_lock to be held.
> > + */
> > +#define for_each_node_numadist(node, unvisited) \
> > + for (int start = (node), \
> > + node = nearest_node_nodemask((start), &(unvisited)); \
> > + node < MAX_NUMNODES; \
> > + node_clear(node, (unvisited)), \
> > + node = nearest_node_nodemask((start), &(unvisited)))
>
> the 'node' should be protected with braces inside the macro, the start should
> not because you declare it just inside. Also, the 'start' is a common word,
> so there's a chance that you'll mask out already existing 'start' in the scope.
> Maybe __start, or simply __s?
Right, will also fix this (good thing I needed to send a new version
anyway, because the test robot found a build bug). :)
Thanks!
-Andrea
* Re: [PATCH 2/7] mm/numa: Introduce nearest_node_nodemask()
2025-02-13 16:19 ` Andrea Righi
@ 2025-02-13 17:12 ` Yury Norov
2025-02-14 8:55 ` Andrea Righi
0 siblings, 1 reply; 21+ messages in thread
From: Yury Norov @ 2025-02-13 17:12 UTC (permalink / raw)
To: Andrea Righi
Cc: Tejun Heo, David Vernet, Changwoo Min, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
Joel Fernandes, Ian May, bpf, linux-kernel
On Thu, Feb 13, 2025 at 05:19:58PM +0100, Andrea Righi wrote:
> On Thu, Feb 13, 2025 at 10:57:00AM -0500, Yury Norov wrote:
> > On Wed, Feb 12, 2025 at 05:48:09PM +0100, Andrea Righi wrote:
> > > Introduce the new helper nearest_node_nodemask() to find the closest
> > > node in a specified nodemask from a given starting node.
> > >
> > > Returns MAX_NUMNODES if no node is found.
> > >
> > > Cc: Yury Norov <yury.norov@gmail.com>
> > > Signed-off-by: Andrea Righi <arighi@nvidia.com>
> >
> > Suggested-by: Yury Norov [NVIDIA] <yury.norov@gmail.com>
>
> Ok.
>
> >
> > > ---
> > > include/linux/numa.h | 7 +++++++
> > > mm/mempolicy.c | 32 ++++++++++++++++++++++++++++++++
> > > 2 files changed, 39 insertions(+)
> > >
> > > diff --git a/include/linux/numa.h b/include/linux/numa.h
> > > index 31d8bf8a951a7..e6baaf6051bcf 100644
> > > --- a/include/linux/numa.h
> > > +++ b/include/linux/numa.h
> > > @@ -31,6 +31,8 @@ void __init alloc_offline_node_data(int nid);
> > > /* Generic implementation available */
> > > int numa_nearest_node(int node, unsigned int state);
> > >
> > > +int nearest_node_nodemask(int node, nodemask_t *mask);
> > > +
> >
> > See how you use it. It looks a bit inconsistent to the other functions:
> >
> > #define for_each_node_numadist(node, unvisited) \
> > for (int start = (node), \
> > node = nearest_node_nodemask((start), &(unvisited)); \
> > node < MAX_NUMNODES; \
> > node_clear(node, (unvisited)), \
> > node = nearest_node_nodemask((start), &(unvisited)))
> >
> >
> > I would suggest to make it aligned with the rest of the API:
> >
> > #define node_clear(node, dst) __node_clear((node), &(dst))
> > static __always_inline void __node_clear(int node, volatile nodemask_t *dstp)
> > {
> > clear_bit(node, dstp->bits);
> > }
>
> Sorry Yury, can you elaborate more on this? What do you mean by
> inconsistent? Is it the volatile nodemask_t *?
What I mean is:
#define nearest_node_nodemask(start, srcp)
__nearest_node_nodemask((start), &(srcp))
int __nearest_node_nodemask(int node, nodemask_t *mask);
That way you'll be able to make the above for-loop looking more
uniform:
#define for_each_node_numadist(node, unvisited) \
for (int __s = (node), \
(node) = nearest_node_nodemask(__s, (unvisited)); \
(node) < MAX_NUMNODES; \
node_clear((node), (unvisited)), \
(node) = nearest_node_nodemask(__s, (unvisited)))
> > > #ifndef memory_add_physaddr_to_nid
> > > int memory_add_physaddr_to_nid(u64 start);
> > > #endif
> > > @@ -47,6 +49,11 @@ static inline int numa_nearest_node(int node, unsigned int state)
> > > return NUMA_NO_NODE;
> > > }
> > >
> > > +static inline int nearest_node_nodemask(int node, nodemask_t *mask)
> > > +{
> > > + return NUMA_NO_NODE;
> > > +}
> > > +
> > > static inline int memory_add_physaddr_to_nid(u64 start)
> > > {
> > > return 0;
> > > diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> > > index 162407fbf2bc7..1e2acf187ea3a 100644
> > > --- a/mm/mempolicy.c
> > > +++ b/mm/mempolicy.c
> > > @@ -196,6 +196,38 @@ int numa_nearest_node(int node, unsigned int state)
> > > }
> > > EXPORT_SYMBOL_GPL(numa_nearest_node);
> > >
> > > +/**
> > > + * nearest_node_nodemask - Find the node in @mask at the nearest distance
> > > + * from @node.
> > > + *
> > > + * @node: the node to start the search from.
> > > + * @mask: a pointer to a nodemask representing the allowed nodes.
> > > + *
> > > + * This function iterates over all nodes in the given state and calculates
> > > + * the distance to the starting node.
> > > + *
> > > + * Returns the node ID in @mask that is the closest in terms of distance
> > > + * from @node, or MAX_NUMNODES if no node is found.
> > > + */
> > > +int nearest_node_nodemask(int node, nodemask_t *mask)
> > > +{
> > > + int dist, n, min_dist = INT_MAX, min_node = MAX_NUMNODES;
> > > +
> > > + if (node == NUMA_NO_NODE)
> > > + return MAX_NUMNODES;
> >
> > This makes it unclear: you make it legal to pass NUMA_NO_NODE, but
> > your function returns something useless. I don't think it would help
> > users in any reasonable scenario.
> >
> > So, if you don't want user to call this with node == NUMA_NO_NODE,
> > just describe it in comment on top of the function. Otherwise, please
> > do something useful like
> >
> > if (node == NUMA_NO_NODE)
> > node = current_node;
> >
> > I would go with option 1. Notice, node_distance() doesn't bother to
> > check against NUMA_NO_NODE.
>
> Hm... is it? Looking at __node_distance(), it doesn't seem really safe to
> pass a negative value (maybe I'm missing something?).
It's not safe, but inside the kernel we don't check parameters. Out of
courtesy you may decide to put a comment, but strictly speaking you
don't have to.
> Anyway, I'd also prefer to go with option 1 and not implicitly assuming
> NUMA_NO_NODE == current node (it feels that it might hide nasty bugs).
Yeah, very true
> So, I can add a comment in the description to clarify that NUMA_NO_NODE is
> forbidden, but what if someone is passing it? Should we WARN_ON_ONCE() at
> least?
He will brick his testing board, and learn to read comments the hard
way.
Speaking more seriously, you will be most likely CCed as an author of
that function, and you will be able to comment that on review. Also,
there's a great chance that it will be caught by KASAN or some other
sanitation tool even before someone sends a buggy patch.
This is an age-old and very well known problem, and everyone
is aware of it.
Thanks,
Yury
* Re: [PATCH 6/7] sched_ext: idle: Per-node idle cpumasks
2025-02-12 16:48 ` [PATCH 6/7] sched_ext: idle: Per-node idle cpumasks Andrea Righi
2025-02-13 10:57 ` kernel test robot
@ 2025-02-13 18:03 ` Yury Norov
1 sibling, 0 replies; 21+ messages in thread
From: Yury Norov @ 2025-02-13 18:03 UTC (permalink / raw)
To: Andrea Righi
Cc: Tejun Heo, David Vernet, Changwoo Min, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
Joel Fernandes, Ian May, bpf, linux-kernel
On Wed, Feb 12, 2025 at 05:48:13PM +0100, Andrea Righi wrote:
> @@ -90,6 +131,78 @@ s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags)
> goto retry;
> }
>
> +static s32 pick_idle_cpu_from_other_nodes(const struct cpumask *cpus_allowed, int node, u64 flags)
'From other node' sounds a bit vague
> +{
> + static DEFINE_PER_CPU(nodemask_t, per_cpu_unvisited);
> + nodemask_t *unvisited = this_cpu_ptr(&per_cpu_unvisited);
> + s32 cpu = -EBUSY;
> +
> + preempt_disable();
> + unvisited = this_cpu_ptr(&per_cpu_unvisited);
> +
> + /*
> + * Restrict the search to the online nodes, excluding the current
> + * one.
> + */
> + nodes_clear(*unvisited);
> + nodes_or(*unvisited, *unvisited, node_states[N_ONLINE]);
nodes_clear() + nodes_or() == nodes_copy()
Yeah, we miss it. The attached patch adds nodes_copy(). Can you
consider taking it for your series?
> + node_clear(node, *unvisited);
> +
> + /*
> + * Traverse all nodes in order of increasing distance, starting
> + * from @node.
> + *
> + * This loop is O(N^2), with N being the amount of NUMA nodes,
> + * which might be quite expensive in large NUMA systems. However,
> + * this complexity comes into play only when a scheduler enables
> + * SCX_OPS_BUILTIN_IDLE_PER_NODE and it's requesting an idle CPU
> + * without specifying a target NUMA node, so it shouldn't be a
> + * bottleneck in most cases.
> + *
> + * As a future optimization we may want to cache the list of nodes
> + * in a per-node array, instead of actually traversing them every
> + * time.
> + */
> + for_each_node_numadist(node, *unvisited) {
> + cpu = pick_idle_cpu_in_node(cpus_allowed, node, flags);
> + if (cpu >= 0)
> + break;
> + }
> + preempt_enable();
> +
> + return cpu;
> +}
> +
> +/*
> + * Find an idle CPU in the system, starting from @node.
> + */
> +s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, int node, u64 flags)
> +{
> + s32 cpu;
> +
> + /*
> + * Always search in the starting node first (this is an
> + * optimization that can save some cycles even when the search is
> + * not limited to a single node).
> + */
> + cpu = pick_idle_cpu_in_node(cpus_allowed, node, flags);
> + if (cpu >= 0)
> + return cpu;
> +
> + /*
> + * Stop the search if we are using only a single global cpumask
> + * (NUMA_NO_NODE) or if the search is restricted to the first node
> + * only.
> + */
> + if (node == NUMA_NO_NODE || flags & SCX_PICK_IDLE_IN_NODE)
> + return -EBUSY;
> +
> + /*
> + * Extend the search to the other nodes.
> + */
> + return pick_idle_cpu_from_other_nodes(cpus_allowed, node, flags);
> +}
From d69294cba9bffc05924dc3351a88601937c24213 Mon Sep 17 00:00:00 2001
From: Yury Norov <yury.norov@gmail.com>
Date: Thu, 13 Feb 2025 11:21:08 -0500
Subject: [PATCH] nodemask: add nodes_copy()
The nodemask API is missing a plain nodes_copy(), which is required by this series.
Signed-off-by: Yury Norov [NVIDIA] <yury.norov@gmail.com>
---
include/linux/nodemask.h | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/include/linux/nodemask.h b/include/linux/nodemask.h
index 9fd7a0ce9c1a..41cf43c4e70f 100644
--- a/include/linux/nodemask.h
+++ b/include/linux/nodemask.h
@@ -191,6 +191,13 @@ static __always_inline void __nodes_andnot(nodemask_t *dstp, const nodemask_t *s
bitmap_andnot(dstp->bits, src1p->bits, src2p->bits, nbits);
}
+#define nodes_copy(dst, src) __nodes_copy(&(dst), &(src), MAX_NUMNODES)
+static __always_inline void __nodes_copy(nodemask_t *dstp,
+ const nodemask_t *srcp, unsigned int nbits)
+{
+ bitmap_copy(dstp->bits, srcp->bits, nbits);
+}
+
#define nodes_complement(dst, src) \
__nodes_complement(&(dst), &(src), MAX_NUMNODES)
static __always_inline void __nodes_complement(nodemask_t *dstp,
--
2.43.0
* Re: [PATCH 2/7] mm/numa: Introduce nearest_node_nodemask()
2025-02-13 17:12 ` Yury Norov
@ 2025-02-14 8:55 ` Andrea Righi
2025-02-14 16:04 ` Yury Norov
0 siblings, 1 reply; 21+ messages in thread
From: Andrea Righi @ 2025-02-14 8:55 UTC (permalink / raw)
To: Yury Norov
Cc: Tejun Heo, David Vernet, Changwoo Min, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
Joel Fernandes, Ian May, bpf, linux-kernel
Hi Yury,
On Thu, Feb 13, 2025 at 12:12:46PM -0500, Yury Norov wrote:
...
> > > > include/linux/numa.h | 7 +++++++
> > > > mm/mempolicy.c | 32 ++++++++++++++++++++++++++++++++
> > > > 2 files changed, 39 insertions(+)
> > > >
> > > > diff --git a/include/linux/numa.h b/include/linux/numa.h
> > > > index 31d8bf8a951a7..e6baaf6051bcf 100644
> > > > --- a/include/linux/numa.h
> > > > +++ b/include/linux/numa.h
> > > > @@ -31,6 +31,8 @@ void __init alloc_offline_node_data(int nid);
> > > > /* Generic implementation available */
> > > > int numa_nearest_node(int node, unsigned int state);
> > > >
> > > > +int nearest_node_nodemask(int node, nodemask_t *mask);
> > > > +
> > >
> > > See how you use it. It looks a bit inconsistent to the other functions:
> > >
> > > #define for_each_node_numadist(node, unvisited) \
> > > for (int start = (node), \
> > > node = nearest_node_nodemask((start), &(unvisited)); \
> > > node < MAX_NUMNODES; \
> > > node_clear(node, (unvisited)), \
> > > node = nearest_node_nodemask((start), &(unvisited)))
> > >
> > >
> > > I would suggest to make it aligned with the rest of the API:
> > >
> > > #define node_clear(node, dst) __node_clear((node), &(dst))
> > > static __always_inline void __node_clear(int node, volatile nodemask_t *dstp)
> > > {
> > > clear_bit(node, dstp->bits);
> > > }
> >
> > Sorry Yury, can you elaborate more on this? What do you mean by
> > inconsistent? Is it the volatile nodemask_t *?
>
> What I mean is:
> #define nearest_node_nodemask(start, srcp)
> __nearest_node_nodemask((start), &(srcp))
> int __nearest_node_nodemask(int node, nodemask_t *mask);
This all makes sense assuming that nearest_node_nodemask() is placed in
include/linux/nodemask.h and is considered as a nodemask API, but I thought
we determined to place it in include/linux/numa.h, since it seems more of a
NUMA API, similar to numa_nearest_node(), so under this assumption I was
planning to follow the same style of numa_nearest_node().
Or do you think it should go in linux/nodemask.h and follow the style of
the other nodemask APIs?
>
> That way you'll be able to make the above for-loop looking more
> uniform:
>
> #define for_each_node_numadist(node, unvisited) \
> for (int __s = (node), \
> (node) = nearest_node_nodemask(__s, (unvisited)); \
> (node) < MAX_NUMNODES; \
> node_clear((node), (unvisited)), \
> (node) = nearest_node_nodemask(__s, (unvisited)))
>
> > > > #ifndef memory_add_physaddr_to_nid
> > > > int memory_add_physaddr_to_nid(u64 start);
> > > > #endif
> > > > @@ -47,6 +49,11 @@ static inline int numa_nearest_node(int node, unsigned int state)
> > > > return NUMA_NO_NODE;
> > > > }
> > > >
> > > > +static inline int nearest_node_nodemask(int node, nodemask_t *mask)
> > > > +{
> > > > + return NUMA_NO_NODE;
> > > > +}
> > > > +
> > > > static inline int memory_add_physaddr_to_nid(u64 start)
> > > > {
> > > > return 0;
> > > > diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> > > > index 162407fbf2bc7..1e2acf187ea3a 100644
> > > > --- a/mm/mempolicy.c
> > > > +++ b/mm/mempolicy.c
> > > > @@ -196,6 +196,38 @@ int numa_nearest_node(int node, unsigned int state)
> > > > }
> > > > EXPORT_SYMBOL_GPL(numa_nearest_node);
> > > >
> > > > +/**
> > > > + * nearest_node_nodemask - Find the node in @mask at the nearest distance
> > > > + * from @node.
> > > > + *
> > > > + * @node: the node to start the search from.
> > > > + * @mask: a pointer to a nodemask representing the allowed nodes.
> > > > + *
> > > > + * This function iterates over all nodes in the given state and calculates
> > > > + * the distance to the starting node.
> > > > + *
> > > > + * Returns the node ID in @mask that is the closest in terms of distance
> > > > + * from @node, or MAX_NUMNODES if no node is found.
> > > > + */
> > > > +int nearest_node_nodemask(int node, nodemask_t *mask)
> > > > +{
> > > > + int dist, n, min_dist = INT_MAX, min_node = MAX_NUMNODES;
> > > > +
> > > > + if (node == NUMA_NO_NODE)
> > > > + return MAX_NUMNODES;
> > >
> > > This makes it unclear: you make it legal to pass NUMA_NO_NODE, but
> > > your function returns something useless. I don't think it would help
> > > users in any reasonable scenario.
> > >
> > > So, if you don't want user to call this with node == NUMA_NO_NODE,
> > > just describe it in comment on top of the function. Otherwise, please
> > > do something useful like
> > >
> > > if (node == NUMA_NO_NODE)
> > > node = current_node;
> > >
> > > I would go with option 1. Notice, node_distance() doesn't bother to
> > > check against NUMA_NO_NODE.
> >
> > Hm... is it? Looking at __node_distance(), it doesn't seem really safe to
> > pass a negative value (maybe I'm missing something?).
>
> It's not safe, but inside the kernel we don't check parameters. Out of
> courtesy you may decide to put a comment, but strictly speaking you
> don't have to.
>
> > Anyway, I'd also prefer to go with option 1 and not implicitly assuming
> > NUMA_NO_NODE == current node (it feels that it might hide nasty bugs).
>
> Yeah, very true
>
> > So, I can add a comment in the description to clarify that NUMA_NO_NODE is
> > forbidden, but what if someone is passing it? Should we WARN_ON_ONCE() at
> > least?
>
> He will brick his testing board, and learn to read comments the hard
> way.
>
> Speaking more seriously, you will be most likely CCed as an author of
> that function, and you will be able to comment that on review. Also,
> there's a great chance that it will be caught by KASAN or some other
> sanitation tool even before someone sends a buggy patch.
>
> This is an age-old and very well known problem, and everyone
> is aware of it.
Ok, makes sense, I'll just clarify this in the comment then.
Thanks,
-Andrea
* Re: [PATCH 2/7] mm/numa: Introduce nearest_node_nodemask()
2025-02-14 8:55 ` Andrea Righi
@ 2025-02-14 16:04 ` Yury Norov
0 siblings, 0 replies; 21+ messages in thread
From: Yury Norov @ 2025-02-14 16:04 UTC (permalink / raw)
To: Andrea Righi
Cc: Tejun Heo, David Vernet, Changwoo Min, Ingo Molnar,
Peter Zijlstra, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
Joel Fernandes, Ian May, bpf, linux-kernel
On Fri, Feb 14, 2025 at 09:55:25AM +0100, Andrea Righi wrote:
> Hi Yury,
>
> On Thu, Feb 13, 2025 at 12:12:46PM -0500, Yury Norov wrote:
> ...
> > > > > include/linux/numa.h | 7 +++++++
> > > > > mm/mempolicy.c | 32 ++++++++++++++++++++++++++++++++
> > > > > 2 files changed, 39 insertions(+)
> > > > >
> > > > > diff --git a/include/linux/numa.h b/include/linux/numa.h
> > > > > index 31d8bf8a951a7..e6baaf6051bcf 100644
> > > > > --- a/include/linux/numa.h
> > > > > +++ b/include/linux/numa.h
> > > > > @@ -31,6 +31,8 @@ void __init alloc_offline_node_data(int nid);
> > > > > /* Generic implementation available */
> > > > > int numa_nearest_node(int node, unsigned int state);
> > > > >
> > > > > +int nearest_node_nodemask(int node, nodemask_t *mask);
> > > > > +
> > > >
> > > > See how you use it. It looks a bit inconsistent to the other functions:
> > > >
> > > > #define for_each_node_numadist(node, unvisited) \
> > > > for (int start = (node), \
> > > > node = nearest_node_nodemask((start), &(unvisited)); \
> > > > node < MAX_NUMNODES; \
> > > > node_clear(node, (unvisited)), \
> > > > node = nearest_node_nodemask((start), &(unvisited)))
> > > >
> > > >
> > > > I would suggest to make it aligned with the rest of the API:
> > > >
> > > > #define node_clear(node, dst) __node_clear((node), &(dst))
> > > > static __always_inline void __node_clear(int node, volatile nodemask_t *dstp)
> > > > {
> > > > clear_bit(node, dstp->bits);
> > > > }
> > >
> > > Sorry Yury, can you elaborate more on this? What do you mean by
> > > inconsistent? Is it the volatile nodemask_t *?
> >
> > What I mean is:
> > #define nearest_node_nodemask(start, srcp)
> > __nearest_node_nodemask((start), &(srcp))
> > int __nearest_node_nodemask(int node, nodemask_t *mask);
>
> This all makes sense assuming that nearest_node_nodemask() is placed in
> include/linux/nodemask.h and is considered as a nodemask API, but I thought
> we determined to place it in include/linux/numa.h, since it seems more of a
> NUMA API, similar to numa_nearest_node(), so under this assumption I was
> planning to follow the same style of numa_nearest_node().
>
> Or do you think it should go in linux/nodemask.h and follow the style of
> the other nodemask APIs?
Ok, I see. I have no strong opinion. I like to have the API looking
consistent, but I also like to have all functions of the same family
together. If we move nearest_node_nodemask to linux/nodemask.h, it
will help with consistency, but will separate it from the sibling
numa_nearest_node().
So, at your discretion. If you don't want to change anything - I'm OK
with that.
These are anyway just final nits, and I feel like the series is now
in good shape, almost ready to be merged.
Thanks,
Yury
Thread overview: 21+ messages
2025-02-12 16:48 [PATCHSET v11 sched_ext/for-6.15] sched_ext: split global idle cpumask into per-NUMA cpumasks Andrea Righi
2025-02-12 16:48 ` [PATCH 1/7] nodemask: numa: reorganize inclusion path Andrea Righi
2025-02-13 15:29 ` Yury Norov
2025-02-13 15:59 ` Andrea Righi
2025-02-12 16:48 ` [PATCH 2/7] mm/numa: Introduce nearest_node_nodemask() Andrea Righi
2025-02-13 15:57 ` Yury Norov
2025-02-13 16:19 ` Andrea Righi
2025-02-13 17:12 ` Yury Norov
2025-02-14 8:55 ` Andrea Righi
2025-02-14 16:04 ` Yury Norov
2025-02-12 16:48 ` [PATCH 3/7] sched/topology: Introduce for_each_node_numadist() iterator Andrea Righi
2025-02-13 16:02 ` Yury Norov
2025-02-13 16:32 ` Andrea Righi
2025-02-12 16:48 ` [PATCH 4/7] sched_ext: idle: Make idle static keys private Andrea Righi
2025-02-12 16:48 ` [PATCH 5/7] sched_ext: idle: Introduce SCX_OPS_BUILTIN_IDLE_PER_NODE Andrea Righi
2025-02-13 16:08 ` Yury Norov
2025-02-13 16:22 ` Andrea Righi
2025-02-12 16:48 ` [PATCH 6/7] sched_ext: idle: Per-node idle cpumasks Andrea Righi
2025-02-13 10:57 ` kernel test robot
2025-02-13 18:03 ` Yury Norov
2025-02-12 16:48 ` [PATCH 7/7] sched_ext: idle: Introduce node-aware idle cpu kfunc helpers Andrea Righi