* [PATCH v4 0/2] Fix NUMA sched domain build errors for GNR and CWF
@ 2025-09-19 17:50 Tim Chen
2025-09-19 17:50 ` [PATCH v4 1/2] sched: Create architecture specific sched domain distances Tim Chen
2025-09-19 17:50 ` [PATCH v4 2/2] sched/topology: Fix sched domain build error for GNR, CWF in SNC-3 mode Tim Chen
0 siblings, 2 replies; 8+ messages in thread
From: Tim Chen @ 2025-09-19 17:50 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: Tim Chen, Juri Lelli, Dietmar Eggemann, Ben Segall, Mel Gorman,
Valentin Schneider, Tim Chen, Vincent Guittot, Len Brown,
linux-kernel, Chen Yu, K Prateek Nayak, Gautham R . Shenoy,
Zhao Liu, Vinicius Costa Gomes, Arjan Van De Ven
While testing Granite Rapids (GNR) and Clearwater Forest (CWF) in
SNC-3 mode, we encountered sched domain build errors in dmesg.
Asymmetric node distances from the local node to nodes in a remote
package were not expected by the sched domain code. Multiple distances
to different remote nodes led to groups mixing partial remote nodes
with local nodes, and to too many sched domain hierarchy levels.
Simplify the remote node distances for the purpose of building sched
domains for GNR and CWF. Replace the distance to each node in a remote
package with the average distance to that remote package. This fixes the
domain build errors and reduces the number of NUMA sched domain levels.
The actual SLIT NUMA node distances are kept separately should the node
distances be modified for building sched domains. NUMA balancing still
needs the actual distances to locate remote nodes that are closer to
a task's numa_group.
Thanks to Prateek, Chen Yu and Peter for reviewing previous
versions of the patches and providing valuable feedback.
Please add your Reviewed-by if this version looks okay to you.
Thanks.
Tim
Changes in v4:
- Move average node distance computation to x86 specific code
- Put all the changes under CONFIG_NUMA.
- Use __free() to simplify code.
- Allocate separate distance array only if node distances are
modified.
- Assert that we don't have more than 2 packages for GNR/CWF
when replacing remote node distances with average remote node
distance.
- Comments and code style clean ups.
- Link to v3:
https://lore.kernel.org/lkml/cover.1757614784.git.tim.c.chen@linux.intel.com/
Changes in v3:
- Simplify sched_record_numa_dist() by getting rid of max distance
computation.
- minor clean ups.
- Link to v2:
https://lore.kernel.org/lkml/61a6adbb845c148361101e16737307c8aa7ee362.1757097030.git.tim.c.chen@linux.intel.com/
Changes in v2:
- Allow modification of NUMA distances by architecture to be the
sched domain NUMA distances for building sched domains to
simplify NUMA domains.
Maintain separate NUMA distances for the purpose of building
sched domains from actual NUMA distances.
- Use average remote node distance as the distance to nodes in remote
packages for GNR and CWF.
- Remove the original fix for topology_span_sane() that's superseded
by a better fix from Prateek:
https://lore.kernel.org/lkml/175688671425.1920.13690753997160836570.tip-bot2@tip-bot2/
- Link to v1: https://lore.kernel.org/lkml/cover.1755893468.git.tim.c.chen@linux.intel.com/
Tim Chen (2):
sched: Create architecture specific sched domain distances
sched/topology: Fix sched domain build error for GNR, CWF in SNC-3
mode
arch/x86/kernel/smpboot.c | 70 ++++++++++++++++++++
include/linux/sched/topology.h | 1 +
kernel/sched/topology.c | 117 ++++++++++++++++++++++++++-------
3 files changed, 166 insertions(+), 22 deletions(-)
--
2.32.0
* [PATCH v4 1/2] sched: Create architecture specific sched domain distances
2025-09-19 17:50 [PATCH v4 0/2] Fix NUMA sched domain build errors for GNR and CWF Tim Chen
@ 2025-09-19 17:50 ` Tim Chen
2025-09-27 12:34 ` Chen, Yu C
2025-09-19 17:50 ` [PATCH v4 2/2] sched/topology: Fix sched domain build error for GNR, CWF in SNC-3 mode Tim Chen
1 sibling, 1 reply; 8+ messages in thread
From: Tim Chen @ 2025-09-19 17:50 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: Tim Chen, Juri Lelli, Dietmar Eggemann, Ben Segall, Mel Gorman,
Valentin Schneider, Tim Chen, Vincent Guittot, Len Brown,
linux-kernel, Chen Yu, K Prateek Nayak, Gautham R . Shenoy,
Zhao Liu, Vinicius Costa Gomes, Arjan Van De Ven
Allow architecture specific sched domain NUMA distances that are
modified from actual NUMA node distances for the purpose of building
NUMA sched domains.
Keep the actual NUMA distances separately if modified distances
are used for building sched domains. Such distances
are still needed, as NUMA balancing benefits from finding the
NUMA nodes that are actually closer to a task's numa_group.
Consolidate the recording of unique NUMA distances in an array to
sched_record_numa_dist() so the function can be reused to record NUMA
distances when the NUMA distance metric is changed.
No functional change, and no additional distance array is
allocated if there are no arch specific NUMA distances
being defined.
Co-developed-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
include/linux/sched/topology.h | 1 +
kernel/sched/topology.c | 117 ++++++++++++++++++++++++++-------
2 files changed, 96 insertions(+), 22 deletions(-)
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 5263746b63e8..2d2d29553df8 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -56,6 +56,7 @@ static inline int cpu_numa_flags(void)
{
return SD_NUMA;
}
+extern int arch_sched_node_distance(int from, int to);
#endif
extern int arch_asym_cpu_priority(int cpu);
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 6e2f54169e66..f25e4402c63e 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1591,10 +1591,12 @@ static void claim_allocations(int cpu, struct sched_domain *sd)
enum numa_topology_type sched_numa_topology_type;
static int sched_domains_numa_levels;
+static int sched_numa_node_levels;
static int sched_domains_curr_level;
int sched_max_numa_distance;
static int *sched_domains_numa_distance;
+static int *sched_numa_node_distance;
static struct cpumask ***sched_domains_numa_masks;
#endif /* CONFIG_NUMA */
@@ -1808,10 +1810,10 @@ bool find_numa_distance(int distance)
return true;
rcu_read_lock();
- distances = rcu_dereference(sched_domains_numa_distance);
+ distances = rcu_dereference(sched_numa_node_distance);
if (!distances)
goto unlock;
- for (i = 0; i < sched_domains_numa_levels; i++) {
+ for (i = 0; i < sched_numa_node_levels; i++) {
if (distances[i] == distance) {
found = true;
break;
@@ -1887,14 +1889,48 @@ static void init_numa_topology_type(int offline_node)
#define NR_DISTANCE_VALUES (1 << DISTANCE_BITS)
-void sched_init_numa(int offline_node)
+/*
+ * An architecture could modify its NUMA distance, to change
+ * grouping of NUMA nodes and number of NUMA levels when creating
+ * NUMA level sched domains.
+ *
+ * A NUMA level is created for each unique
+ * arch_sched_node_distance.
+ */
+static bool __modified_sched_node_dist = true;
+
+int __weak arch_sched_node_distance(int from, int to)
{
- struct sched_domain_topology_level *tl;
- unsigned long *distance_map;
+ if (__modified_sched_node_dist)
+ __modified_sched_node_dist = false;
+
+ return node_distance(from, to);
+}
+
+static bool modified_sched_node_distance(void)
+{
+ /*
+ * Call arch_sched_node_distance()
+ * to determine if arch_sched_node_distance
+ * has been modified from node_distance()
+ * to arch specific distance.
+ */
+ arch_sched_node_distance(0, 0);
+ return __modified_sched_node_dist;
+}
+
+static int numa_node_dist(int i, int j)
+{
+ return node_distance(i, j);
+}
+
+static int sched_record_numa_dist(int offline_node, int (*n_dist)(int, int),
+ int **dist, int *levels)
+{
+ unsigned long *distance_map __free(bitmap) = NULL;
int nr_levels = 0;
int i, j;
int *distances;
- struct cpumask ***masks;
/*
* O(nr_nodes^2) de-duplicating selection sort -- in order to find the
@@ -1902,17 +1938,16 @@ void sched_init_numa(int offline_node)
*/
distance_map = bitmap_alloc(NR_DISTANCE_VALUES, GFP_KERNEL);
if (!distance_map)
- return;
+ return -ENOMEM;
bitmap_zero(distance_map, NR_DISTANCE_VALUES);
for_each_cpu_node_but(i, offline_node) {
for_each_cpu_node_but(j, offline_node) {
- int distance = node_distance(i, j);
+ int distance = n_dist(i, j);
if (distance < LOCAL_DISTANCE || distance >= NR_DISTANCE_VALUES) {
sched_numa_warn("Invalid distance value range");
- bitmap_free(distance_map);
- return;
+ return -EINVAL;
}
bitmap_set(distance_map, distance, 1);
@@ -1925,18 +1960,46 @@ void sched_init_numa(int offline_node)
nr_levels = bitmap_weight(distance_map, NR_DISTANCE_VALUES);
distances = kcalloc(nr_levels, sizeof(int), GFP_KERNEL);
- if (!distances) {
- bitmap_free(distance_map);
- return;
- }
+ if (!distances)
+ return -ENOMEM;
for (i = 0, j = 0; i < nr_levels; i++, j++) {
j = find_next_bit(distance_map, NR_DISTANCE_VALUES, j);
distances[i] = j;
}
- rcu_assign_pointer(sched_domains_numa_distance, distances);
+ *dist = distances;
+ *levels = nr_levels;
+
+ return 0;
+}
+
+void sched_init_numa(int offline_node)
+{
+ struct sched_domain_topology_level *tl;
+ int nr_levels, nr_node_levels;
+ int i, j;
+ int *distances, *domain_distances;
+ struct cpumask ***masks;
+
+ /* Record the NUMA distances from SLIT table */
+ if (sched_record_numa_dist(offline_node, numa_node_dist, &distances,
+ &nr_node_levels))
+ return;
- bitmap_free(distance_map);
+ /* Record modified NUMA distances for building sched domains */
+ if (modified_sched_node_distance()) {
+ if (sched_record_numa_dist(offline_node, arch_sched_node_distance,
+ &domain_distances, &nr_levels)) {
+ kfree(distances);
+ return;
+ }
+ } else {
+ domain_distances = distances;
+ nr_levels = nr_node_levels;
+ }
+ rcu_assign_pointer(sched_numa_node_distance, distances);
+ WRITE_ONCE(sched_max_numa_distance, distances[nr_node_levels - 1]);
+ WRITE_ONCE(sched_numa_node_levels, nr_node_levels);
/*
* 'nr_levels' contains the number of unique distances
@@ -1954,6 +2017,8 @@ void sched_init_numa(int offline_node)
*
* We reset it to 'nr_levels' at the end of this function.
*/
+ rcu_assign_pointer(sched_domains_numa_distance, domain_distances);
+
sched_domains_numa_levels = 0;
masks = kzalloc(sizeof(void *) * nr_levels, GFP_KERNEL);
@@ -1979,10 +2044,13 @@ void sched_init_numa(int offline_node)
masks[i][j] = mask;
for_each_cpu_node_but(k, offline_node) {
- if (sched_debug() && (node_distance(j, k) != node_distance(k, j)))
+ if (sched_debug() &&
+ (arch_sched_node_distance(j, k) !=
+ arch_sched_node_distance(k, j)))
sched_numa_warn("Node-distance not symmetric");
- if (node_distance(j, k) > sched_domains_numa_distance[i])
+ if (arch_sched_node_distance(j, k) >
+ sched_domains_numa_distance[i])
continue;
cpumask_or(mask, mask, cpumask_of_node(k));
@@ -2022,7 +2090,6 @@ void sched_init_numa(int offline_node)
sched_domain_topology = tl;
sched_domains_numa_levels = nr_levels;
- WRITE_ONCE(sched_max_numa_distance, sched_domains_numa_distance[nr_levels - 1]);
init_numa_topology_type(offline_node);
}
@@ -2030,14 +2097,18 @@ void sched_init_numa(int offline_node)
static void sched_reset_numa(void)
{
- int nr_levels, *distances;
+ int nr_levels, *distances, *dom_distances = NULL;
struct cpumask ***masks;
nr_levels = sched_domains_numa_levels;
+ sched_numa_node_levels = 0;
sched_domains_numa_levels = 0;
sched_max_numa_distance = 0;
sched_numa_topology_type = NUMA_DIRECT;
- distances = sched_domains_numa_distance;
+ distances = sched_numa_node_distance;
+ if (sched_numa_node_distance != sched_domains_numa_distance)
+ dom_distances = sched_domains_numa_distance;
+ rcu_assign_pointer(sched_numa_node_distance, NULL);
rcu_assign_pointer(sched_domains_numa_distance, NULL);
masks = sched_domains_numa_masks;
rcu_assign_pointer(sched_domains_numa_masks, NULL);
@@ -2046,6 +2117,7 @@ static void sched_reset_numa(void)
synchronize_rcu();
kfree(distances);
+ kfree(dom_distances);
for (i = 0; i < nr_levels && masks; i++) {
if (!masks[i])
continue;
@@ -2092,7 +2164,8 @@ void sched_domains_numa_masks_set(unsigned int cpu)
continue;
/* Set ourselves in the remote node's masks */
- if (node_distance(j, node) <= sched_domains_numa_distance[i])
+ if (arch_sched_node_distance(j, node) <=
+ sched_domains_numa_distance[i])
cpumask_set_cpu(cpu, sched_domains_numa_masks[i][j]);
}
}
--
2.32.0
* [PATCH v4 2/2] sched/topology: Fix sched domain build error for GNR, CWF in SNC-3 mode
2025-09-19 17:50 [PATCH v4 0/2] Fix NUMA sched domain build errors for GNR and CWF Tim Chen
2025-09-19 17:50 ` [PATCH v4 1/2] sched: Create architecture specific sched domain distances Tim Chen
@ 2025-09-19 17:50 ` Tim Chen
1 sibling, 0 replies; 8+ messages in thread
From: Tim Chen @ 2025-09-19 17:50 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar
Cc: Tim Chen, Juri Lelli, Dietmar Eggemann, Ben Segall, Mel Gorman,
Valentin Schneider, Tim Chen, Vincent Guittot, Len Brown,
linux-kernel, Chen Yu, K Prateek Nayak, Gautham R . Shenoy,
Zhao Liu, Vinicius Costa Gomes, Arjan Van De Ven
It is possible for Granite Rapids (GNR) and Clearwater Forest (CWF)
to have up to 3 dies per package. When sub-NUMA cluster (SNC-3) mode
is enabled, each die becomes a separate NUMA node in the package,
with different distances between dies within the same package.
For example, on GNR, we see the following NUMA distances for a 2-socket
system with 3 dies per socket:
     package 1           package 2
   --------------      --------------
   |            |      |            |
   | ---------  |      | ---------  |
   | |   0   |  |      | |   3   |  |
   | ---------  |      | ---------  |
   | |   1   |  |      | |   4   |  |
   | ---------  |      | ---------  |
   | |   2   |  |      | |   5   |  |
   | ---------  |      | ---------  |
   |            |      |            |
   --------------      --------------
node distances:
node   0   1   2   3   4   5
  0:  10  15  17  21  28  26
  1:  15  10  15  23  26  23
  2:  17  15  10  26  23  21
  3:  21  28  26  10  15  17
  4:  23  26  23  15  10  15
  5:  26  23  21  17  15  10
The node distances above led to 2 problems:
1. Asymmetric routes taken between nodes in different packages led to
an asymmetric sched domain perspective depending on which node you
are on. The current scheduler code failed to build domains properly
with asymmetric distances.
2. Multiple remote distances to the respective tiles on a remote package
created too many levels of domain hierarchy, grouping different nodes
between remote packages.
For example, the above GNR topology leads to the NUMA domains below.
Sched domains from the perspective of a CPU in node 0, where the numbers
in brackets represent node numbers:
NUMA-level 1 [0,1] [2]
NUMA-level 2 [0,1,2] [3]
NUMA-level 3 [0,1,2,3] [5]
NUMA-level 4 [0,1,2,3,5] [4]
Sched domains from the perspective of a CPU in node 4
NUMA-level 1 [4] [3,5]
NUMA-level 2 [3,4,5] [0,2]
NUMA-level 3 [0,2,3,4,5] [1]
The scheduler group peers for load balancing from the perspective of
CPUs in nodes 0 and 4 are different. An improper task could be chosen
for load balancing between groups such as [0,2,3,4,5] and [1]. Ideally,
nodes 0 or 2, which are in the same package as node 1, should be chosen
first. Instead, tasks in the remote package nodes 3, 4 and 5 could be
chosen with equal chance, which could lead to excessive remote package
migrations and load imbalance between packages. We should not group
partial remote nodes and local nodes together.
Simplify the remote distances for CWF and GNR for the purpose of
sched domains building, which maintains symmetry and leads to a more
reasonable load balance hierarchy.
The sched domains from the perspective of a CPU in node 0 are now:
NUMA-level 1 [0,1] [2]
NUMA-level 2 [0,1,2] [3,4,5]
The sched domains from the perspective of a CPU in node 4 are now:
NUMA-level 1 [4] [3,5]
NUMA-level 2 [3,4,5] [0,1,2]
We have the same balancing perspective from node 0 or node 4. Loads are
now balanced equally between packages.
Tested-by: Zhao Liu <zhao1.liu@intel.com>
Co-developed-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
---
arch/x86/kernel/smpboot.c | 70 +++++++++++++++++++++++++++++++++++++++
1 file changed, 70 insertions(+)
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 33e166f6ab12..d6b772990ec2 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -515,6 +515,76 @@ static void __init build_sched_topology(void)
set_sched_topology(topology);
}
+#ifdef CONFIG_NUMA
+static int sched_avg_remote_distance;
+static int avg_remote_numa_distance(void)
+{
+ int i, j;
+ int distance, nr_remote, total_distance;
+
+ if (sched_avg_remote_distance > 0)
+ return sched_avg_remote_distance;
+
+ nr_remote = 0;
+ total_distance = 0;
+ for_each_node_state(i, N_CPU) {
+ for_each_node_state(j, N_CPU) {
+ distance = node_distance(i, j);
+
+ if (distance >= REMOTE_DISTANCE) {
+ nr_remote++;
+ total_distance += distance;
+ }
+ }
+ }
+ if (nr_remote)
+ sched_avg_remote_distance = total_distance / nr_remote;
+ else
+ sched_avg_remote_distance = REMOTE_DISTANCE;
+
+ return sched_avg_remote_distance;
+}
+
+int arch_sched_node_distance(int from, int to)
+{
+ int d = node_distance(from, to);
+
+ switch (boot_cpu_data.x86_vfm) {
+ case INTEL_GRANITERAPIDS_X:
+ case INTEL_ATOM_DARKMONT_X:
+
+ if (!x86_has_numa_in_package || topology_max_packages() == 1 ||
+ d < REMOTE_DISTANCE)
+ return d;
+
+ /*
+ * With SNC enabled, there could be too many levels of remote
+ * NUMA node distances, creating NUMA domain levels
+ * including local nodes and partial remote nodes.
+ *
+ * Trim finer distance tuning for NUMA nodes in remote package
+ * for the purpose of building sched domains. Group NUMA nodes
+ * in the remote package in the same sched group.
+ * Simplify NUMA domains and avoid extra NUMA levels including
+ * different remote NUMA nodes and local nodes.
+ *
+ * GNR and CWF don't expect systems with more than 2 packages
+ * or more than 2 hops between packages. A single average remote
+ * distance won't be appropriate if there are more than 2
+ * packages as average distance to different remote packages
+ * could be different.
+ */
+ WARN_ONCE(topology_max_packages() > 2,
+ "sched: Expect only up to 2 packages for GNR or CWF, "
+ "but saw %d packages when building sched domains.",
+ topology_max_packages());
+
+ d = avg_remote_numa_distance();
+ }
+ return d;
+}
+#endif /* CONFIG_NUMA */
+
void set_cpu_sibling_map(int cpu)
{
bool has_smt = __max_threads_per_core > 1;
--
2.32.0
* Re: [PATCH v4 1/2] sched: Create architecture specific sched domain distances
2025-09-19 17:50 ` [PATCH v4 1/2] sched: Create architecture specific sched domain distances Tim Chen
@ 2025-09-27 12:34 ` Chen, Yu C
2025-09-29 22:18 ` Tim Chen
0 siblings, 1 reply; 8+ messages in thread
From: Chen, Yu C @ 2025-09-27 12:34 UTC (permalink / raw)
To: Tim Chen, Peter Zijlstra, Ingo Molnar
Cc: Juri Lelli, Dietmar Eggemann, Ben Segall, Mel Gorman,
Valentin Schneider, Tim Chen, Vincent Guittot, Len Brown,
linux-kernel, K Prateek Nayak, Gautham R . Shenoy, Zhao Liu,
Vinicius Costa Gomes, Arjan Van De Ven
On 9/20/2025 1:50 AM, Tim Chen wrote:
> Allow architecture specific sched domain NUMA distances that are
> modified from actual NUMA node distances for the purpose of building
> NUMA sched domains.
>
> Keep actual NUMA distances separately if modified distances
> are used for building sched domains. Such distances
> are still needed as NUMA balancing benefits from finding the
> NUMA nodes that are actually closer to a task numa_group.
>
> Consolidate the recording of unique NUMA distances in an array to
> sched_record_numa_dist() so the function can be reused to record NUMA
> distances when the NUMA distance metric is changed.
>
> No functional change and additional distance array
> allocated if there're no arch specific NUMA distances
> being defined.
>
> Co-developed-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
> Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com>
> Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
[snip]
> @@ -1591,10 +1591,12 @@ static void claim_allocations(int cpu, struct sched_domain *sd)
> enum numa_topology_type sched_numa_topology_type;
>
> static int sched_domains_numa_levels;
> +static int sched_numa_node_levels;
I agree that the benefit of maintaining two NUMA distances - one for the
sched_domain and another for the NUMA balancing/page allocation policy - is
to avoid complicating the sched_domain hierarchy while preserving the
advantages of NUMA locality.
Meanwhile, I wonder if we could also add an "orig" prefix to the original
NUMA distance. This way, we can quickly understand its meaning later.
For example,
sched_orig_node_levels
sched_orig_node_distance
> static int sched_domains_curr_level;
>
> int sched_max_numa_distance;
> static int *sched_domains_numa_distance;
> +static int *sched_numa_node_distance;
> static struct cpumask ***sched_domains_numa_masks;
> #endif /* CONFIG_NUMA */
>
> @@ -1808,10 +1810,10 @@ bool find_numa_distance(int distance)
> return true;
>
> rcu_read_lock();
> - distances = rcu_dereference(sched_domains_numa_distance);
> + distances = rcu_dereference(sched_numa_node_distance);
> if (!distances)
> goto unlock;
> - for (i = 0; i < sched_domains_numa_levels; i++) {
> + for (i = 0; i < sched_numa_node_levels; i++) {
> if (distances[i] == distance) {
> found = true;
> break;
> @@ -1887,14 +1889,48 @@ static void init_numa_topology_type(int offline_node)
>
> #define NR_DISTANCE_VALUES (1 << DISTANCE_BITS)
>
> -void sched_init_numa(int offline_node)
> +/*
> + * An architecture could modify its NUMA distance, to change
> + * grouping of NUMA nodes and number of NUMA levels when creating
> + * NUMA level sched domains.
> + *
> + * A NUMA level is created for each unique
> + * arch_sched_node_distance.
> + */
> +static bool __modified_sched_node_dist = true;
> +
> +int __weak arch_sched_node_distance(int from, int to)
> {
> - struct sched_domain_topology_level *tl;
> - unsigned long *distance_map;
> + if (__modified_sched_node_dist)
> + __modified_sched_node_dist = false;
> +
> + return node_distance(from, to);
> +}
> +
> +static bool modified_sched_node_distance(void)
> +{
> + /*
> + * Call arch_sched_node_distance()
> + * to determine if arch_sched_node_distance
> + * has been modified from node_distance()
> + * to arch specific distance.
> + */
> + arch_sched_node_distance(0, 0);
> + return __modified_sched_node_dist;
> +}
> +
If our goal is to figure out whether the arch_sched_node_distance()
has been overridden, how about the following alias?
int __weak arch_sched_node_distance(int from, int to)
{
return __node_distance(from, to);
}
int arch_sched_node_distance_original(int from, int to) __weak
__alias(arch_sched_node_distance);
static bool arch_sched_node_distance_is_overridden(void)
{
return arch_sched_node_distance != arch_sched_node_distance_original;
}
so arch_sched_node_distance_is_overridden() can replace
modified_sched_node_distance()
thanks,
Chenyu
* Re: [PATCH v4 1/2] sched: Create architecture specific sched domain distances
2025-09-27 12:34 ` Chen, Yu C
@ 2025-09-29 22:18 ` Tim Chen
2025-09-30 2:28 ` Chen, Yu C
0 siblings, 1 reply; 8+ messages in thread
From: Tim Chen @ 2025-09-29 22:18 UTC (permalink / raw)
To: Chen, Yu C, Peter Zijlstra, Ingo Molnar
Cc: Juri Lelli, Dietmar Eggemann, Ben Segall, Mel Gorman,
Valentin Schneider, Tim Chen, Vincent Guittot, Len Brown,
linux-kernel, K Prateek Nayak, Gautham R . Shenoy, Zhao Liu,
Vinicius Costa Gomes, Arjan Van De Ven
On Sat, 2025-09-27 at 20:34 +0800, Chen, Yu C wrote:
> [snip]
>
> > @@ -1591,10 +1591,12 @@ static void claim_allocations(int cpu, struct sched_domain *sd)
> > enum numa_topology_type sched_numa_topology_type;
> >
> > static int sched_domains_numa_levels;
> > +static int sched_numa_node_levels;
>
> I agree that the benefit of maintaining two NUMA distances - one for the
> sched_domain and another for the NUMA balancing/page allocation policy - is
> to avoid complicating the sched_domain hierarchy while preserving the
> advantages of NUMA locality.
>
> Meanwhile, I wonder if we could also add a "orig" prefix to the original
> NUMA distance. This way, we can quickly understand its meaning later.
> For example,
> sched_orig_node_levels
> sched_orig_node_distance
I am not sure adding orig will make the meaning any clearer.
I can add comments to note that:
sched_numa_node_distance means the node distance between NUMA nodes
sched_numa_node_levels means the number of unique distances between NUMA nodes
>
> > static int sched_domains_curr_level;
> >
> > int sched_max_numa_distance;
> > static int *sched_domains_numa_distance;
> > +static int *sched_numa_node_distance;
> > static struct cpumask ***sched_domains_numa_masks;
> > #endif /* CONFIG_NUMA */
> >
> > @@ -1808,10 +1810,10 @@ bool find_numa_distance(int distance)
> > return true;
> >
> > rcu_read_lock();
> > - distances = rcu_dereference(sched_domains_numa_distance);
> > + distances = rcu_dereference(sched_numa_node_distance);
> > if (!distances)
> > goto unlock;
> > - for (i = 0; i < sched_domains_numa_levels; i++) {
> > + for (i = 0; i < sched_numa_node_levels; i++) {
> > if (distances[i] == distance) {
> > found = true;
> > break;
> > @@ -1887,14 +1889,48 @@ static void init_numa_topology_type(int offline_node)
> >
> > #define NR_DISTANCE_VALUES (1 << DISTANCE_BITS)
> >
> > -void sched_init_numa(int offline_node)
> > +/*
> > + * An architecture could modify its NUMA distance, to change
> > + * grouping of NUMA nodes and number of NUMA levels when creating
> > + * NUMA level sched domains.
> > + *
> > + * A NUMA level is created for each unique
> > + * arch_sched_node_distance.
> > + */
> > +static bool __modified_sched_node_dist = true;
> > +
> > +int __weak arch_sched_node_distance(int from, int to)
> > {
> > - struct sched_domain_topology_level *tl;
> > - unsigned long *distance_map;
> > + if (__modified_sched_node_dist)
> > + __modified_sched_node_dist = false;
> > +
> > + return node_distance(from, to);
> > +}
> > +
> > +static bool modified_sched_node_distance(void)
> > +{
> > + /*
> > + * Call arch_sched_node_distance()
> > + * to determine if arch_sched_node_distance
> > + * has been modified from node_distance()
> > + * to arch specific distance.
> > + */
> > + arch_sched_node_distance(0, 0);
> > + return __modified_sched_node_dist;
> > +}
> > +
>
> If our goal is to figure out whether the arch_sched_node_distance()
> has been overridden, how about the following alias?
>
> int __weak arch_sched_node_distance(int from, int to)
> {
> return __node_distance(from, to);
> }
> int arch_sched_node_distance_original(int from, int to) __weak
> __alias(arch_sched_node_distance);
>
> static bool arch_sched_node_distance_is_overridden(void)
> {
> return arch_sched_node_distance != arch_sched_node_distance_original;
> }
>
> so arch_sched_node_distance_is_overridden() can replace
> modified_sched_node_distance()
>
I think that the alias version will still point to the replaced function
and not the originally defined one.
How about not using __weak and just explicitly defining
arch_sched_node_distance as a function pointer? Change the code like below.
Tim
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index d6b772990ec2..12db78af09d5 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -545,7 +545,7 @@ static int avg_remote_numa_distance(void)
return sched_avg_remote_distance;
}
-int arch_sched_node_distance(int from, int to)
+static int x86_arch_sched_node_distance(int from, int to)
{
int d = node_distance(from, to);
@@ -918,6 +918,9 @@ static int do_boot_cpu(u32 apicid, unsigned int cpu, struct task_struct *idle)
/* If 64-bit wakeup method exists, use the 64-bit mode trampoline IP */
if (apic->wakeup_secondary_cpu_64)
start_ip = real_mode_header->trampoline_start64;
+#endif
+#ifdef CONFIG_NUMA
+ arch_sched_node_distance = x86_arch_sched_node_distance;
#endif
idle->thread.sp = (unsigned long)task_pt_regs(idle);
initial_code = (unsigned long)start_secondary;
diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 2d2d29553df8..3549c4a19816 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -56,7 +56,7 @@ static inline int cpu_numa_flags(void)
{
return SD_NUMA;
}
-extern int arch_sched_node_distance(int from, int to);
+extern int (*arch_sched_node_distance)(int, int);
#endif
extern int arch_asym_cpu_priority(int cpu);
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index f25e4402c63e..7cfb7422e9d4 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1897,26 +1897,17 @@ static void init_numa_topology_type(int offline_node)
* A NUMA level is created for each unique
* arch_sched_node_distance.
*/
-static bool __modified_sched_node_dist = true;
-int __weak arch_sched_node_distance(int from, int to)
+static int default_sched_node_distance(int from, int to)
{
- if (__modified_sched_node_dist)
- __modified_sched_node_dist = false;
-
return node_distance(from, to);
}
+int (*arch_sched_node_distance)(int, int) = default_sched_node_distance;
+
static bool modified_sched_node_distance(void)
{
- /*
- * Call arch_sched_node_distance()
- * to determine if arch_sched_node_distance
- * has been modified from node_distance()
- * to arch specific distance.
- */
- arch_sched_node_distance(0, 0);
- return __modified_sched_node_dist;
+ return arch_sched_node_distance != default_sched_node_distance;
}
static int numa_node_dist(int i, int j)
* Re: [PATCH v4 1/2] sched: Create architecture specific sched domain distances
2025-09-29 22:18 ` Tim Chen
@ 2025-09-30 2:28 ` Chen, Yu C
2025-09-30 17:30 ` Tim Chen
0 siblings, 1 reply; 8+ messages in thread
From: Chen, Yu C @ 2025-09-30 2:28 UTC (permalink / raw)
To: Tim Chen, Peter Zijlstra, Ingo Molnar
Cc: Juri Lelli, Dietmar Eggemann, Ben Segall, Mel Gorman,
Valentin Schneider, Tim Chen, Vincent Guittot, Len Brown,
linux-kernel, K Prateek Nayak, Gautham R . Shenoy, Zhao Liu,
Vinicius Costa Gomes, Arjan Van De Ven
On 9/30/2025 6:18 AM, Tim Chen wrote:
> On Sat, 2025-09-27 at 20:34 +0800, Chen, Yu C wrote:
>> [snip]
>>
>>> @@ -1591,10 +1591,12 @@ static void claim_allocations(int cpu, struct sched_domain *sd)
>>> enum numa_topology_type sched_numa_topology_type;
>>>
>>> static int sched_domains_numa_levels;
>>> +static int sched_numa_node_levels;
>>
>> I agree that the benefit of maintaining two NUMA distances - one for the
>> sched_domain and another for the NUMA balancing/page allocation policy - is
>> to avoid complicating the sched_domain hierarchy while preserving the
>> advantages of NUMA locality.
>>
>> Meanwhile, I wonder if we could also add a "orig" prefix to the original
>> NUMA distance. This way, we can quickly understand its meaning later.
>> For example,
>> sched_orig_node_levels
>> sched_orig_node_distance
>
> I am not sure adding orig will make the meaning any clearer.
> I can add comments to note that
>
> sched_numa_node_distance mean the node distance between numa nodes
> sched_numa_nodel_levels mean the number of unique distances between numa nodes
>
OK, looks good to me.
>>
>>> static int sched_domains_curr_level;
>>>
>>> int sched_max_numa_distance;
>>> static int *sched_domains_numa_distance;
>>> +static int *sched_numa_node_distance;
>>> static struct cpumask ***sched_domains_numa_masks;
>>> #endif /* CONFIG_NUMA */
>>>
>>> @@ -1808,10 +1810,10 @@ bool find_numa_distance(int distance)
>>> return true;
>>>
>>> rcu_read_lock();
>>> - distances = rcu_dereference(sched_domains_numa_distance);
>>> + distances = rcu_dereference(sched_numa_node_distance);
>>> if (!distances)
>>> goto unlock;
>>> - for (i = 0; i < sched_domains_numa_levels; i++) {
>>> + for (i = 0; i < sched_numa_node_levels; i++) {
>>> if (distances[i] == distance) {
>>> found = true;
>>> break;
>>> @@ -1887,14 +1889,48 @@ static void init_numa_topology_type(int offline_node)
>>>
>>> #define NR_DISTANCE_VALUES (1 << DISTANCE_BITS)
>>>
>>> -void sched_init_numa(int offline_node)
>>> +/*
>>> + * An architecture could modify its NUMA distance, to change
>>> + * grouping of NUMA nodes and number of NUMA levels when creating
>>> + * NUMA level sched domains.
>>> + *
>>> + * A NUMA level is created for each unique
>>> + * arch_sched_node_distance.
>>> + */
>>> +static bool __modified_sched_node_dist = true;
>>> +
>>> +int __weak arch_sched_node_distance(int from, int to)
>>> {
>>> - struct sched_domain_topology_level *tl;
>>> - unsigned long *distance_map;
>>> + if (__modified_sched_node_dist)
>>> + __modified_sched_node_dist = false;
>>> +
>>> + return node_distance(from, to);
>>> +}
>>> +
>>> +static bool modified_sched_node_distance(void)
>>> +{
>>> + /*
>>> + * Call arch_sched_node_distance()
>>> + * to determine if arch_sched_node_distance
>>> + * has been modified from node_distance()
>>> + * to arch specific distance.
>>> + */
>>> + arch_sched_node_distance(0, 0);
>>> + return __modified_sched_node_dist;
>>> +}
>>> +
>>
>> If our goal is to figure out whether the arch_sched_node_distance()
>> has been overridden, how about the following alias?
>>
>> int __weak arch_sched_node_distance(int from, int to)
>> {
>> return __node_distance(from, to);
>> }
>> int arch_sched_node_distance_original(int from, int to) __weak
>> __alias(arch_sched_node_distance);
>>
>> static bool arch_sched_node_distance_is_overridden(void)
>> {
>> return arch_sched_node_distance != arch_sched_node_distance_original;
>> }
>>
>> so arch_sched_node_distance_is_overridden() can replace
>> modified_sched_node_distance()
>>
>
> I think that the alias version will still point to the replaced function and not
> the originally defined one.
>
> How about not using __weak and just explicitly define arch_sched_node_distance
> as a function pointer. Change the code like below.
>
The arch_sched_node_distance_original is defined as __weak, so it
should point to the old function even if the function has been
overridden. I did a test on an x86 VM and it seems to be so.
But using arch_sched_node_distance as a function pointer
should also be OK.
> Tim
>
> diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
> index d6b772990ec2..12db78af09d5 100644
> --- a/arch/x86/kernel/smpboot.c
> +++ b/arch/x86/kernel/smpboot.c
> @@ -545,7 +545,7 @@ static int avg_remote_numa_distance(void)
> return sched_avg_remote_distance;
> }
>
> -int arch_sched_node_distance(int from, int to)
> +static int x86_arch_sched_node_distance(int from, int to)
> {
> int d = node_distance(from, to);
>
> @@ -918,6 +918,9 @@ static int do_boot_cpu(u32 apicid, unsigned int cpu, struct task_struct *idle)
> /* If 64-bit wakeup method exists, use the 64-bit mode trampoline IP */
> if (apic->wakeup_secondary_cpu_64)
> start_ip = real_mode_header->trampoline_start64;
> +#endif
> +#ifdef CONFIG_NUMA
> + arch_sched_node_distance = x86_arch_sched_node_distance;
> #endif
The above might be called for several APs; maybe we can just call it
once in smp_prepare_cpus_common().
thanks,
Chenyu
> idle->thread.sp = (unsigned long)task_pt_regs(idle);
> initial_code = (unsigned long)start_secondary;
> diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
> index 2d2d29553df8..3549c4a19816 100644
> --- a/include/linux/sched/topology.h
> +++ b/include/linux/sched/topology.h
> @@ -56,7 +56,7 @@ static inline int cpu_numa_flags(void)
> {
> return SD_NUMA;
> }
> -extern int arch_sched_node_distance(int from, int to);
> +extern int (*arch_sched_node_distance)(int, int);
> #endif
>
> extern int arch_asym_cpu_priority(int cpu);
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index f25e4402c63e..7cfb7422e9d4 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -1897,26 +1897,17 @@ static void init_numa_topology_type(int offline_node)
> * A NUMA level is created for each unique
> * arch_sched_node_distance.
> */
> -static bool __modified_sched_node_dist = true;
>
> -int __weak arch_sched_node_distance(int from, int to)
> +static int default_sched_node_distance(int from, int to)
> {
> - if (__modified_sched_node_dist)
> - __modified_sched_node_dist = false;
> -
> return node_distance(from, to);
> }
>
> +int (*arch_sched_node_distance)(int, int) = default_sched_node_distance;
> +
> static bool modified_sched_node_distance(void)
> {
> - /*
> - * Call arch_sched_node_distance()
> - * to determine if arch_sched_node_distance
> - * has been modified from node_distance()
> - * to arch specific distance.
> - */
> - arch_sched_node_distance(0, 0);
> - return __modified_sched_node_dist;
> + return arch_sched_node_distance != default_sched_node_distance;
> }
>
> static int numa_node_dist(int i, int j)
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [PATCH v4 1/2] sched: Create architecture specific sched domain distances
2025-09-30 2:28 ` Chen, Yu C
@ 2025-09-30 17:30 ` Tim Chen
2025-10-01 1:10 ` Chen, Yu C
0 siblings, 1 reply; 8+ messages in thread
From: Tim Chen @ 2025-09-30 17:30 UTC (permalink / raw)
To: Chen, Yu C, Peter Zijlstra, Ingo Molnar
Cc: Juri Lelli, Dietmar Eggemann, Ben Segall, Mel Gorman,
Valentin Schneider, Tim Chen, Vincent Guittot, Len Brown,
linux-kernel, K Prateek Nayak, Gautham R . Shenoy, Zhao Liu,
Vinicius Costa Gomes, Arjan Van De Ven
On Tue, 2025-09-30 at 10:28 +0800, Chen, Yu C wrote:
> On 9/30/2025 6:18 AM, Tim Chen wrote:
> > On Sat, 2025-09-27 at 20:34 +0800, Chen, Yu C wrote:
> > >
[snip]
> > >
> > > If our goal is to figure out whether the arch_sched_node_distance()
> > > has been overridden, how about the following alias?
> > >
> > > int __weak arch_sched_node_distance(int from, int to)
> > > {
> > > return __node_distance(from, to);
> > > }
> > > int arch_sched_node_distance_original(int from, int to) __weak
> > > __alias(arch_sched_node_distance);
> > >
> > > static bool arch_sched_node_distance_is_overridden(void)
> > > {
> > > return arch_sched_node_distance != arch_sched_node_distance_original;
> > > }
> > >
> > > so arch_sched_node_distance_is_overridden() can replace
> > > modified_sched_node_distance()
> > >
> >
> > I think that the alias version will still point to the replaced function and not
> > the originally defined one.
> >
> > How about not using __weak and just explicitly define arch_sched_node_distance
> > as a function pointer. Change the code like below.
> >
>
>> The arch_sched_node_distance_original is defined as __weak, so it
>> should point to the old function even if the function has been
>> overridden. I did a test on an x86 VM and it seems to be so.
>> But using arch_sched_node_distance as a function pointer
>> should also be OK.
>
How about changing the code as follows? I think this change is cleaner.
I tested it in my VM and it works for detecting sched distance substitution.
Thanks.
Tim
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index f25e4402c63e..3dc941258df3 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1897,31 +1897,17 @@ static void init_numa_topology_type(int offline_node)
* A NUMA level is created for each unique
* arch_sched_node_distance.
*/
-static bool __modified_sched_node_dist = true;
-
-int __weak arch_sched_node_distance(int from, int to)
+static int numa_node_dist(int i, int j)
{
- if (__modified_sched_node_dist)
- __modified_sched_node_dist = false;
-
- return node_distance(from, to);
+ return node_distance(i, j);
}
-static bool modified_sched_node_distance(void)
-{
- /*
- * Call arch_sched_node_distance()
- * to determine if arch_sched_node_distance
- * has been modified from node_distance()
- * to arch specific distance.
- */
- arch_sched_node_distance(0, 0);
- return __modified_sched_node_dist;
-}
+int arch_sched_node_distance(int from, int to)
+ __weak __alias(numa_node_dist);
-static int numa_node_dist(int i, int j)
+static bool modified_sched_node_distance(void)
{
- return node_distance(i, j);
+ return numa_node_dist != arch_sched_node_distance;
}
static int sched_record_numa_dist(int offline_node, int (*n_dist)(int, int),
^ permalink raw reply related [flat|nested] 8+ messages in thread
* Re: [PATCH v4 1/2] sched: Create architecture specific sched domain distances
2025-09-30 17:30 ` Tim Chen
@ 2025-10-01 1:10 ` Chen, Yu C
0 siblings, 0 replies; 8+ messages in thread
From: Chen, Yu C @ 2025-10-01 1:10 UTC (permalink / raw)
To: Tim Chen, Peter Zijlstra, Ingo Molnar
Cc: Juri Lelli, Dietmar Eggemann, Ben Segall, Mel Gorman,
Valentin Schneider, Tim Chen, Vincent Guittot, Len Brown,
linux-kernel, K Prateek Nayak, Gautham R . Shenoy, Zhao Liu,
Vinicius Costa Gomes, Arjan Van De Ven
On 10/1/2025 1:30 AM, Tim Chen wrote:
> On Tue, 2025-09-30 at 10:28 +0800, Chen, Yu C wrote:
>> On 9/30/2025 6:18 AM, Tim Chen wrote:
>>> On Sat, 2025-09-27 at 20:34 +0800, Chen, Yu C wrote:
>>>>
>
> [snip]
>
>>>>
>>>> If our goal is to figure out whether the arch_sched_node_distance()
>>>> has been overridden, how about the following alias?
>>>>
>>>> int __weak arch_sched_node_distance(int from, int to)
>>>> {
>>>> return __node_distance(from, to);
>>>> }
>>>> int arch_sched_node_distance_original(int from, int to) __weak
>>>> __alias(arch_sched_node_distance);
>>>>
>>>> static bool arch_sched_node_distance_is_overridden(void)
>>>> {
>>>> return arch_sched_node_distance != arch_sched_node_distance_original;
>>>> }
>>>>
>>>> so arch_sched_node_distance_is_overridden() can replace
>>>> modified_sched_node_distance()
>>>>
>>>
>>> I think that the alias version will still point to the replaced function and not
>>> the originally defined one.
>>>
>>> How about not using __weak and just explicitly define arch_sched_node_distance
>>> as a function pointer. Change the code like below.
>>>
>>
>> The arch_sched_node_distance_original is defined as __weak, so it
>> should point to the old function even if the function has been
>> overridden. I did a test on an x86 VM and it seems to be so.
>> But using arch_sched_node_distance as a function pointer
>> should also be OK.
>>
>
> How about changing the code as follows? I think this change is cleaner.
> I tested it in my VM and it works for detecting sched distance substitution.
> Thanks.
>
Yes, the following change looks good to me.
Thanks,
Chenyu
> Tim
>
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index f25e4402c63e..3dc941258df3 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -1897,31 +1897,17 @@ static void init_numa_topology_type(int offline_node)
> * A NUMA level is created for each unique
> * arch_sched_node_distance.
> */
> -static bool __modified_sched_node_dist = true;
> -
> -int __weak arch_sched_node_distance(int from, int to)
> +static int numa_node_dist(int i, int j)
> {
> - if (__modified_sched_node_dist)
> - __modified_sched_node_dist = false;
> -
> - return node_distance(from, to);
> + return node_distance(i, j);
> }
>
> -static bool modified_sched_node_distance(void)
> -{
> - /*
> - * Call arch_sched_node_distance()
> - * to determine if arch_sched_node_distance
> - * has been modified from node_distance()
> - * to arch specific distance.
> - */
> - arch_sched_node_distance(0, 0);
> - return __modified_sched_node_dist;
> -}
> +int arch_sched_node_distance(int from, int to)
> + __weak __alias(numa_node_dist);
>
> -static int numa_node_dist(int i, int j)
> +static bool modified_sched_node_distance(void)
> {
> - return node_distance(i, j);
> + return numa_node_dist != arch_sched_node_distance;
> }
>
> static int sched_record_numa_dist(int offline_node, int (*n_dist)(int, int),
>
^ permalink raw reply [flat|nested] 8+ messages in thread
end of thread, other threads:[~2025-10-01 1:11 UTC | newest]
Thread overview: 8+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2025-09-19 17:50 [PATCH v4 0/2] Fix NUMA sched domain build errors for GNR and CWF Tim Chen
2025-09-19 17:50 ` [PATCH v4 1/2] sched: Create architecture specific sched domain distances Tim Chen
2025-09-27 12:34 ` Chen, Yu C
2025-09-29 22:18 ` Tim Chen
2025-09-30 2:28 ` Chen, Yu C
2025-09-30 17:30 ` Tim Chen
2025-10-01 1:10 ` Chen, Yu C
2025-09-19 17:50 ` [PATCH v4 2/2] sched/topology: Fix sched domain build error for GNR, CWF in SNC-3 mode Tim Chen
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox