The Linux Kernel Mailing List
* [PATCH 0/3] sched: Simplify ifdeffery around CONFIG_SCHED_SMT
@ 2026-05-06 11:00 Shrikanth Hegde
  2026-05-06 11:00 ` [PATCH 1/3] topology: Introduce cpu_smt_mask for CONFIG_SCHED_SMT=n Shrikanth Hegde
                   ` (3 more replies)
  0 siblings, 4 replies; 12+ messages in thread
From: Shrikanth Hegde @ 2026-05-06 11:00 UTC (permalink / raw)
  To: mingo, peterz, vincent.guittot, linux-kernel
  Cc: sshegde, kprateek.nayak, juri.lelli, vschneid, dietmar.eggemann,
	tj, rostedt, tglx, mgorman, bsegall, arighi

While working on the paravirt series for maintaining preferred CPUs, I
wanted to add/remove a core. To do that, I had to access cpu_smt_mask,
which meant wrapping the code in CONFIG_SCHED_SMT. That made me think:
for CONFIG_SCHED_SMT=n, it effectively means just adding/removing a CPU.
That's when I thought of using cpumask_of(cpu) when CONFIG_SCHED_SMT=n.

Semantics
=========
- For CONFIG_SCHED_SMT=y:
    No functional change.
- For CONFIG_SCHED_SMT=n:
    - cpu_smt_mask(cpu) becomes cpumask_of(cpu), effectively making it
      per CPU with no siblings.
    - sched_smt_present remains defined, but never becomes active,
      since cpumask_weight(cpumask_of(cpu)) == 1.

Performance impact
==================
- CONFIG_SCHED_SMT=y:
    No change in generated code.
- CONFIG_SCHED_SMT=n:
    - Small increase in text size (~0.01%) due to removal of compile-time
      stubs. Most paths remain effectively dead due to static keys.
    - Fast paths are protected using IS_ENABLED(CONFIG_SCHED_SMT).

Testing
=======
- Did build/boot tests on powerpc for CONFIG_SCHED_SMT=y/n.
- Ran hackbench on powerpc for CONFIG_SCHED_SMT=y/n. Didn't observe any
  major difference.
- Did build/boot tests on x86 for CONFIG_SCHED_SMT=y/n. For x86 I had to
  change the code to allow CONFIG_SCHED_SMT=n.


Plus, major distros set CONFIG_SCHED_SMT=y for all major archs, and a
few archs (x86, s390) set CONFIG_SCHED_SMT=y unconditionally, so
CONFIG_SCHED_SMT=n is a rare case.

With that, cpu_smt_mask() can be used unconditionally, which reduces
CONFIG_SCHED_SMT-specific code paths and improves readability and
maintainability.

Please review and consider if this simplification is a worthwhile cleanup.

Shrikanth Hegde (3):
  topology: Introduce cpu_smt_mask for CONFIG_SCHED_SMT=n
  sched: Simplify ifdeffery around cpu_smt_mask
  sched/fair: Add compile time check in fastpaths for CONFIG_SCHED_SMT=n

 include/linux/sched/smt.h |  4 ----
 include/linux/topology.h  | 15 +++++++++++++-
 kernel/sched/core.c       |  6 ------
 kernel/sched/ext_idle.c   |  6 ------
 kernel/sched/fair.c       | 41 +++++----------------------------------
 kernel/sched/sched.h      |  6 ------
 kernel/sched/topology.c   |  2 --
 kernel/stop_machine.c     |  2 --
 kernel/workqueue.c        |  4 ----
 9 files changed, 19 insertions(+), 67 deletions(-)

-- 
2.51.0


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 1/3] topology: Introduce cpu_smt_mask for CONFIG_SCHED_SMT=n
  2026-05-06 11:00 [PATCH 0/3] sched: Simplify ifdeffery around CONFIG_SCHED_SMT Shrikanth Hegde
@ 2026-05-06 11:00 ` Shrikanth Hegde
  2026-05-06 11:00 ` [PATCH 2/3] sched: Simplify ifdeffery around cpu_smt_mask Shrikanth Hegde
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 12+ messages in thread
From: Shrikanth Hegde @ 2026-05-06 11:00 UTC (permalink / raw)
  To: mingo, peterz, vincent.guittot, linux-kernel
  Cc: sshegde, kprateek.nayak, juri.lelli, vschneid, dietmar.eggemann,
	tj, rostedt, tglx, mgorman, bsegall, arighi

Define cpu_smt_mask for CONFIG_SCHED_SMT=n as cpumask_of(cpu). With
that config, the kernel is expected to treat each CPU as an individual
core; using cpumask_of(cpu) reflects that.

This helps to get rid of the ifdeffery that is spread across the
codebase, since cpu_smt_mask is currently defined only for
CONFIG_SCHED_SMT=y.

Note: There is no arch today which defines cpu_smt_mask
unconditionally, so defining it here shouldn't lead to redefinition
errors.

Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
---
 include/linux/topology.h | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/include/linux/topology.h b/include/linux/topology.h
index 6575af39fd10..3a36fd1066fe 100644
--- a/include/linux/topology.h
+++ b/include/linux/topology.h
@@ -230,11 +230,24 @@ static inline int cpu_to_mem(int cpu)
 #define topology_drawer_cpumask(cpu)		cpumask_of(cpu)
 #endif
 
-#if defined(CONFIG_SCHED_SMT) && !defined(cpu_smt_mask)
+/*
+ * Defining cpu_smt_mask as cpumask_of(cpu) helps to get rid
+ * of a lot of ifdeffery all around the codebase in case of
+ * CONFIG_SCHED_SMT=n. It just means there are no other siblings,
+ * which is what is expected.
+ */
+#if defined(CONFIG_SCHED_SMT)
+# if !defined(cpu_smt_mask)
 static inline const struct cpumask *cpu_smt_mask(int cpu)
 {
 	return topology_sibling_cpumask(cpu);
 }
+# endif
+#else	/* !CONFIG_SCHED_SMT */
+static inline const struct cpumask *cpu_smt_mask(int cpu)
+{
+	return cpumask_of(cpu);
+}
 #endif
 
 #ifndef topology_is_primary_thread
-- 
2.51.0



* [PATCH 2/3] sched: Simplify ifdeffery around cpu_smt_mask
  2026-05-06 11:00 [PATCH 0/3] sched: Simplify ifdeffery around CONFIG_SCHED_SMT Shrikanth Hegde
  2026-05-06 11:00 ` [PATCH 1/3] topology: Introduce cpu_smt_mask for CONFIG_SCHED_SMT=n Shrikanth Hegde
@ 2026-05-06 11:00 ` Shrikanth Hegde
  2026-05-11 12:53   ` Valentin Schneider
  2026-05-06 11:00 ` [PATCH 3/3] sched/fair: Add compile time check in fastpaths for CONFIG_SCHED_SMT=n Shrikanth Hegde
  2026-05-12 10:59 ` [PATCH 0/3] sched: Simplify ifdeffery around CONFIG_SCHED_SMT Valentin Schneider
  3 siblings, 1 reply; 12+ messages in thread
From: Shrikanth Hegde @ 2026-05-06 11:00 UTC (permalink / raw)
  To: mingo, peterz, vincent.guittot, linux-kernel
  Cc: sshegde, kprateek.nayak, juri.lelli, vschneid, dietmar.eggemann,
	tj, rostedt, tglx, mgorman, bsegall, arighi

Now that cpu_smt_mask is defined as cpumask_of(cpu) for
CONFIG_SCHED_SMT=n, it is possible to get rid of the ifdeffery.

Effectively:
- sched_smt_present is now always defined.

- cpumask_weight(cpumask_of(cpu)) == 1, so sched_smt_present_inc/dec
  will never enable sched_smt_present, which is expected.

- Paths that were compile-time eliminated become runtime guarded
  using static keys.

- Defines set_idle_cores, test_idle_cores etc., which could likely let
  CONFIG_SCHED_SMT=n systems use the same optimizations within the
  LLC at wakeups.

- This exposes the sched_smt_present and stop_core_cpuslocked symbols
  for CONFIG_SCHED_SMT=n. Likely not a concern.

- There is some code bloat for CONFIG_SCHED_SMT=n (NR_CPUS=2048):
  add/remove: 25/18 grow/shrink: 26/19 up/down: 6696/-3064 (3632)
  Total: Before=30771823, After=30775455, chg +0.01%

- No code bloat for CONFIG_SCHED_SMT=y, which is expected.

Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
---
 include/linux/sched/smt.h |  4 ----
 kernel/sched/core.c       |  6 ------
 kernel/sched/ext_idle.c   |  6 ------
 kernel/sched/fair.c       | 35 -----------------------------------
 kernel/sched/sched.h      |  6 ------
 kernel/sched/topology.c   |  2 --
 kernel/stop_machine.c     |  2 --
 kernel/workqueue.c        |  4 ----
 8 files changed, 65 deletions(-)

diff --git a/include/linux/sched/smt.h b/include/linux/sched/smt.h
index 166b19af956f..cde6679c0278 100644
--- a/include/linux/sched/smt.h
+++ b/include/linux/sched/smt.h
@@ -4,16 +4,12 @@
 
 #include <linux/static_key.h>
 
-#ifdef CONFIG_SCHED_SMT
 extern struct static_key_false sched_smt_present;
 
 static __always_inline bool sched_smt_active(void)
 {
 	return static_branch_likely(&sched_smt_present);
 }
-#else
-static __always_inline bool sched_smt_active(void) { return false; }
-#endif
 
 void arch_smt_update(void);
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b8871449d3c6..055db51c5483 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8604,18 +8604,14 @@ static void cpuset_cpu_inactive(unsigned int cpu)
 
 static inline void sched_smt_present_inc(int cpu)
 {
-#ifdef CONFIG_SCHED_SMT
 	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
 		static_branch_inc_cpuslocked(&sched_smt_present);
-#endif
 }
 
 static inline void sched_smt_present_dec(int cpu)
 {
-#ifdef CONFIG_SCHED_SMT
 	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
 		static_branch_dec_cpuslocked(&sched_smt_present);
-#endif
 }
 
 int sched_cpu_activate(unsigned int cpu)
@@ -8703,9 +8699,7 @@ int sched_cpu_deactivate(unsigned int cpu)
 	 */
 	sched_smt_present_dec(cpu);
 
-#ifdef CONFIG_SCHED_SMT
 	sched_core_cpu_deactivate(cpu);
-#endif
 
 	if (!sched_smp_initialized)
 		return 0;
diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
index 7468560a6d80..2bcf58e99c9b 100644
--- a/kernel/sched/ext_idle.c
+++ b/kernel/sched/ext_idle.c
@@ -79,7 +79,6 @@ static bool scx_idle_test_and_clear_cpu(int cpu)
 	int node = scx_cpu_node_if_enabled(cpu);
 	struct cpumask *idle_cpus = idle_cpumask(node)->cpu;
 
-#ifdef CONFIG_SCHED_SMT
 	/*
 	 * SMT mask should be cleared whether we can claim @cpu or not. The SMT
 	 * cluster is not wholly idle either way. This also prevents
@@ -104,7 +103,6 @@ static bool scx_idle_test_and_clear_cpu(int cpu)
 		else if (cpumask_test_cpu(cpu, idle_smts))
 			__cpumask_clear_cpu(cpu, idle_smts);
 	}
-#endif
 
 	return cpumask_test_and_clear_cpu(cpu, idle_cpus);
 }
@@ -622,7 +620,6 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
 		goto out_unlock;
 	}
 
-#ifdef CONFIG_SCHED_SMT
 	/*
 	 * Use @prev_cpu's sibling if it's idle.
 	 */
@@ -634,7 +631,6 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
 				goto out_unlock;
 		}
 	}
-#endif
 
 	/*
 	 * Search for any idle CPU in the same LLC domain.
@@ -714,7 +710,6 @@ static void update_builtin_idle(int cpu, bool idle)
 
 	assign_cpu(cpu, idle_cpus, idle);
 
-#ifdef CONFIG_SCHED_SMT
 	if (sched_smt_active()) {
 		const struct cpumask *smt = cpu_smt_mask(cpu);
 		struct cpumask *idle_smts = idle_cpumask(node)->smt;
@@ -731,7 +726,6 @@ static void update_builtin_idle(int cpu, bool idle)
 			cpumask_andnot(idle_smts, idle_smts, smt);
 		}
 	}
-#endif
 }
 
 /*
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 728965851842..d19c416d1b84 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1555,7 +1555,6 @@ update_stats_curr_start(struct cfs_rq *cfs_rq, struct sched_entity *se)
 
 static inline bool is_core_idle(int cpu)
 {
-#ifdef CONFIG_SCHED_SMT
 	int sibling;
 
 	for_each_cpu(sibling, cpu_smt_mask(cpu)) {
@@ -1565,7 +1564,6 @@ static inline bool is_core_idle(int cpu)
 		if (!idle_cpu(sibling))
 			return false;
 	}
-#endif
 
 	return true;
 }
@@ -2248,7 +2246,6 @@ numa_type numa_classify(unsigned int imbalance_pct,
 	return node_fully_busy;
 }
 
-#ifdef CONFIG_SCHED_SMT
 /* Forward declarations of select_idle_sibling helpers */
 static inline bool test_idle_cores(int cpu);
 static inline int numa_idle_core(int idle_core, int cpu)
@@ -2266,12 +2263,6 @@ static inline int numa_idle_core(int idle_core, int cpu)
 
 	return idle_core;
 }
-#else /* !CONFIG_SCHED_SMT: */
-static inline int numa_idle_core(int idle_core, int cpu)
-{
-	return idle_core;
-}
-#endif /* !CONFIG_SCHED_SMT */
 
 /*
  * Gather all necessary information to make NUMA balancing placement
@@ -7782,7 +7773,6 @@ static inline int __select_idle_cpu(int cpu, struct task_struct *p)
 	return -1;
 }
 
-#ifdef CONFIG_SCHED_SMT
 DEFINE_STATIC_KEY_FALSE(sched_smt_present);
 EXPORT_SYMBOL_GPL(sched_smt_present);
 
@@ -7892,29 +7882,6 @@ static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int t
 	return -1;
 }
 
-#else /* !CONFIG_SCHED_SMT: */
-
-static inline void set_idle_cores(int cpu, int val)
-{
-}
-
-static inline bool test_idle_cores(int cpu)
-{
-	return false;
-}
-
-static inline int select_idle_core(struct task_struct *p, int core, struct cpumask *cpus, int *idle_cpu)
-{
-	return __select_idle_cpu(core, p);
-}
-
-static inline int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
-{
-	return -1;
-}
-
-#endif /* !CONFIG_SCHED_SMT */
-
 /*
  * Scan the LLC domain for idle CPUs; this is dynamically regulated by
  * comparing the average scan cost (tracked in sd->avg_scan_cost) against the
@@ -12006,9 +11973,7 @@ static int should_we_balance(struct lb_env *env)
 			 * idle has been found, then its not needed to check other
 			 * SMT siblings for idleness:
 			 */
-#ifdef CONFIG_SCHED_SMT
 			cpumask_andnot(swb_cpus, swb_cpus, cpu_smt_mask(cpu));
-#endif
 			continue;
 		}
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 9f63b15d309d..e476623a0c2a 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1667,7 +1667,6 @@ do {						\
 	flags = _raw_spin_rq_lock_irqsave(rq);	\
 } while (0)
 
-#ifdef CONFIG_SCHED_SMT
 extern void __update_idle_core(struct rq *rq);
 
 static inline void update_idle_core(struct rq *rq)
@@ -1676,12 +1675,7 @@ static inline void update_idle_core(struct rq *rq)
 		__update_idle_core(rq);
 }
 
-#else /* !CONFIG_SCHED_SMT: */
-static inline void update_idle_core(struct rq *rq) { }
-#endif /* !CONFIG_SCHED_SMT */
-
 #ifdef CONFIG_FAIR_GROUP_SCHED
-
 static inline struct task_struct *task_of(struct sched_entity *se)
 {
 	WARN_ON_ONCE(!entity_is_task(se));
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 5847b83d9d55..a1f46e3f4ede 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1310,9 +1310,7 @@ static void init_sched_groups_capacity(int cpu, struct sched_domain *sd)
 		cpumask_copy(mask, sched_group_span(sg));
 		for_each_cpu(cpu, mask) {
 			cores++;
-#ifdef CONFIG_SCHED_SMT
 			cpumask_andnot(mask, mask, cpu_smt_mask(cpu));
-#endif
 		}
 		sg->cores = cores;
 
diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
index 3fe6b0c99f3d..e17afa52893c 100644
--- a/kernel/stop_machine.c
+++ b/kernel/stop_machine.c
@@ -632,7 +632,6 @@ int stop_machine(cpu_stop_fn_t fn, void *data, const struct cpumask *cpus)
 }
 EXPORT_SYMBOL_GPL(stop_machine);
 
-#ifdef CONFIG_SCHED_SMT
 int stop_core_cpuslocked(unsigned int cpu, cpu_stop_fn_t fn, void *data)
 {
 	const struct cpumask *smt_mask = cpu_smt_mask(cpu);
@@ -651,7 +650,6 @@ int stop_core_cpuslocked(unsigned int cpu, cpu_stop_fn_t fn, void *data)
 	return stop_cpus(smt_mask, multi_cpu_stop, &msdata);
 }
 EXPORT_SYMBOL_GPL(stop_core_cpuslocked);
-#endif
 
 /**
  * stop_machine_from_inactive_cpu - stop_machine() from inactive CPU
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 5f747f241a5f..99ef412f02a6 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -8187,11 +8187,7 @@ static bool __init cpus_dont_share(int cpu0, int cpu1)
 
 static bool __init cpus_share_smt(int cpu0, int cpu1)
 {
-#ifdef CONFIG_SCHED_SMT
 	return cpumask_test_cpu(cpu0, cpu_smt_mask(cpu1));
-#else
-	return false;
-#endif
 }
 
 static bool __init cpus_share_numa(int cpu0, int cpu1)
-- 
2.51.0



* [PATCH 3/3] sched/fair: Add compile time check in fastpaths for CONFIG_SCHED_SMT=n
  2026-05-06 11:00 [PATCH 0/3] sched: Simplify ifdeffery around CONFIG_SCHED_SMT Shrikanth Hegde
  2026-05-06 11:00 ` [PATCH 1/3] topology: Introduce cpu_smt_mask for CONFIG_SCHED_SMT=n Shrikanth Hegde
  2026-05-06 11:00 ` [PATCH 2/3] sched: Simplify ifdeffery around cpu_smt_mask Shrikanth Hegde
@ 2026-05-06 11:00 ` Shrikanth Hegde
  2026-05-12 10:59 ` [PATCH 0/3] sched: Simplify ifdeffery around CONFIG_SCHED_SMT Valentin Schneider
  3 siblings, 0 replies; 12+ messages in thread
From: Shrikanth Hegde @ 2026-05-06 11:00 UTC (permalink / raw)
  To: mingo, peterz, vincent.guittot, linux-kernel
  Cc: sshegde, kprateek.nayak, juri.lelli, vschneid, dietmar.eggemann,
	tj, rostedt, tglx, mgorman, bsegall, arighi

In fast paths such as wakeup and load balance, even minimal code
additions can show up. Add IS_ENABLED() checks there to ensure there is
no overhead.

The other places either have a sched_smt_active() check or are not in
fast paths.

Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
---
 kernel/sched/fair.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d19c416d1b84..cdd7f9633f98 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1557,6 +1557,9 @@ static inline bool is_core_idle(int cpu)
 {
 	int sibling;
 
+	if (!IS_ENABLED(CONFIG_SCHED_SMT))
+		return true;
+
 	for_each_cpu(sibling, cpu_smt_mask(cpu)) {
 		if (cpu == sibling)
 			continue;
@@ -11973,7 +11976,8 @@ static int should_we_balance(struct lb_env *env)
 			 * idle has been found, then its not needed to check other
 			 * SMT siblings for idleness:
 			 */
-			cpumask_andnot(swb_cpus, swb_cpus, cpu_smt_mask(cpu));
+			if (IS_ENABLED(CONFIG_SCHED_SMT))
+				cpumask_andnot(swb_cpus, swb_cpus, cpu_smt_mask(cpu));
 			continue;
 		}
 
-- 
2.51.0



* Re: [PATCH 2/3] sched: Simplify ifdeffery around cpu_smt_mask
  2026-05-06 11:00 ` [PATCH 2/3] sched: Simplify ifdeffery around cpu_smt_mask Shrikanth Hegde
@ 2026-05-11 12:53   ` Valentin Schneider
  2026-05-11 14:37     ` Shrikanth Hegde
  0 siblings, 1 reply; 12+ messages in thread
From: Valentin Schneider @ 2026-05-11 12:53 UTC (permalink / raw)
  To: Shrikanth Hegde, mingo, peterz, vincent.guittot, linux-kernel
  Cc: sshegde, kprateek.nayak, juri.lelli, dietmar.eggemann, tj,
	rostedt, tglx, mgorman, bsegall, arighi

On 06/05/26 16:30, Shrikanth Hegde wrote:
> Now, that cpu_smt_mask is defined as cpumask_of(cpu) for
> CONFIG_SCHED_SMT=n, it is possible to get rid of the ifdeffery.
>
> Effectively,
> - This makes sched_smt_present is defined always
>
> - cpumask_weight(cpumask_of(cpu)) == 1. So sched_smt_present_inc/dec
>   will never enable the sched_smt_present. Which is expected.
>
> - Paths that were compile-time eliminated become runtime guarded
>   using static keys.
>
> - Defines set_idle_cores, test_idle_cores etc which could likely benefit
>   the CONFIG_SCHED_SMT=n systems to use the same optimizations within the
>   LLC at wakeups.
>
> - This will expose sched_smt_present,stop_core_cpuslocked symbol for
>   CONFIG_SCHED_SMT=n. Likely not a concern.
>
> - There a bloat of code CONFIG_SCHED_SMT=n. (NR_CPUS=2048)
>   add/remove: 25/18 grow/shrink: 26/19 up/down: 6696/-3064 (3632)
>   Total: Before=30771823, After=30775455, chg +0.01%
>
> - No code bloat for CONFIG_SCHED_SMT=y, which is expected.
>

Some nitpicks below, otherwise this LGTM except the sched_ext bits which
I'm not familiar enough with.

> @@ -8703,9 +8699,7 @@ int sched_cpu_deactivate(unsigned int cpu)
>        */
>       sched_smt_present_dec(cpu);
>
> -#ifdef CONFIG_SCHED_SMT
>       sched_core_cpu_deactivate(cpu);
> -#endif

That ends up grabbing @core_lock; arguably this is during hotplug, but it
still seems a bit wasteful when, with CONFIG_SCHED_SMT=n, we know the mask
weight will never exceed 1. Probably worth adding a sched_smt_active()
check within the callee.


> @@ -632,7 +632,6 @@ int stop_machine(cpu_stop_fn_t fn, void *data, const struct cpumask *cpus)
>  }
>  EXPORT_SYMBOL_GPL(stop_machine);
>
> -#ifdef CONFIG_SCHED_SMT
>  int stop_core_cpuslocked(unsigned int cpu, cpu_stop_fn_t fn, void *data)

That seems to be only used by the INTEL_IFS selftest stuff which does some
wait_for_sibling_cpu() loop; at a quick glance it seems to do the right
thing for weight := 1 but IMO worth a proper look. That or have the IFS
code not run that when there is no SMT.

>  {
>       const struct cpumask *smt_mask = cpu_smt_mask(cpu);



* Re: [PATCH 2/3] sched: Simplify ifdeffery around cpu_smt_mask
  2026-05-11 12:53   ` Valentin Schneider
@ 2026-05-11 14:37     ` Shrikanth Hegde
  2026-05-11 18:46       ` Tejun Heo
                         ` (2 more replies)
  0 siblings, 3 replies; 12+ messages in thread
From: Shrikanth Hegde @ 2026-05-11 14:37 UTC (permalink / raw)
  To: Valentin Schneider
  Cc: kprateek.nayak, juri.lelli, dietmar.eggemann, tj, rostedt, tglx,
	mgorman, bsegall, arighi, linux-kernel, mingo, peterz,
	vincent.guittot

Hi Valentin, thanks for reviewing this patchset.

On 5/11/26 6:23 PM, Valentin Schneider wrote:
> On 06/05/26 16:30, Shrikanth Hegde wrote:
>> Now, that cpu_smt_mask is defined as cpumask_of(cpu) for
>> CONFIG_SCHED_SMT=n, it is possible to get rid of the ifdeffery.
>>
>> Effectively,
>> - This makes sched_smt_present is defined always
>>
>> - cpumask_weight(cpumask_of(cpu)) == 1. So sched_smt_present_inc/dec
>>    will never enable the sched_smt_present. Which is expected.
>>
>> - Paths that were compile-time eliminated become runtime guarded
>>    using static keys.
>>
>> - Defines set_idle_cores, test_idle_cores etc which could likely benefit
>>    the CONFIG_SCHED_SMT=n systems to use the same optimizations within the
>>    LLC at wakeups.
>>
>> - This will expose sched_smt_present,stop_core_cpuslocked symbol for
>>    CONFIG_SCHED_SMT=n. Likely not a concern.
>>
>> - There a bloat of code CONFIG_SCHED_SMT=n. (NR_CPUS=2048)
>>    add/remove: 25/18 grow/shrink: 26/19 up/down: 6696/-3064 (3632)
>>    Total: Before=30771823, After=30775455, chg +0.01%
>>
>> - No code bloat for CONFIG_SCHED_SMT=y, which is expected.
>>
> 
> Some nitpicks below, otherwise this LGTM except the sched_ext bits which
> I'm not familiar enough with.

sched_ext just added the ifdefs for the masks, I think.
It already has sched_smt_active() checks.

> 
>> @@ -8703,9 +8699,7 @@ int sched_cpu_deactivate(unsigned int cpu)
>>         */
>>        sched_smt_present_dec(cpu);
>>
>> -#ifdef CONFIG_SCHED_SMT
>>        sched_core_cpu_deactivate(cpu);
>> -#endif
> 
> That ends up grabbing @core_lock, arguably this is during hotplug but still
> seems a bit wasteful when, with CONFIG_SCHED_SMT=1, we know the mask weight
> will never exceed 1. Probably worth adding a sched_smt_active() check
> within the callee.
> 

Ok, fair enough. The CPU bringup path could use the same optimization too.
Something like below?
---

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 084ec3987d7c..add0fcc8ba90 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6494,6 +6494,10 @@ static void sched_core_cpu_starting(unsigned int cpu)
         struct rq *rq = cpu_rq(cpu), *core_rq = NULL;
         int t;
  
+       /* No point in doing anything further if SMT is not active */
+       if (!sched_smt_active())
+               return;
+
         guard(core_lock)(&cpu);
  
         WARN_ON_ONCE(rq->core != rq);
@@ -6533,6 +6537,10 @@ static void sched_core_cpu_deactivate(unsigned int cpu)
         struct rq *rq = cpu_rq(cpu), *core_rq = NULL;
         int t;
  
+       /* No point in doing anything further if SMT is not active */
+       if (!sched_smt_active())
+               return;
+
         guard(core_lock)(&cpu);
  
         /* if we're the last man standing, nothing to do */


> 
>> @@ -632,7 +632,6 @@ int stop_machine(cpu_stop_fn_t fn, void *data, const struct cpumask *cpus)
>>   }
>>   EXPORT_SYMBOL_GPL(stop_machine);
>>
>> -#ifdef CONFIG_SCHED_SMT
>>   int stop_core_cpuslocked(unsigned int cpu, cpu_stop_fn_t fn, void *data)
> 
> That seems to be only used by the INTEL_IFS selftest stuff which does some
> wait_for_sibling_cpu() loop; at a quick glance it seems to do the right
> thing for weight := 1 but IMO worth a proper look. That or have the IFS
> code not run that when there is no SMT.
> 

Right, intel_ifs is the only user.
INTEL_IFS depends on SMP, and on x86 SMP implies CONFIG_SCHED_SMT=y.

Symbol: INTEL_IFS [=n]
Depends on: X86_PLATFORM_DEVICES [=y] && X86 [=y] && CPU_SUP_INTEL [=y] && 64BIT [=y] && SMP [=y]

So maybe leave this as is, to avoid the code bloat for CONFIG_SCHED_SMT=n,
since there is no user? Maybe I will add a comment about it.

From ./bloat-o-meter, it adds about 260 bytes:
stop_core_cpuslocked                           -     260    +260

Does that make sense?


* Re: [PATCH 2/3] sched: Simplify ifdeffery around cpu_smt_mask
  2026-05-11 14:37     ` Shrikanth Hegde
@ 2026-05-11 18:46       ` Tejun Heo
  2026-05-12 10:13       ` Shrikanth Hegde
  2026-05-12 10:58       ` Valentin Schneider
  2 siblings, 0 replies; 12+ messages in thread
From: Tejun Heo @ 2026-05-11 18:46 UTC (permalink / raw)
  To: Shrikanth Hegde
  Cc: Valentin Schneider, kprateek.nayak, juri.lelli, dietmar.eggemann,
	rostedt, tglx, mgorman, bsegall, arighi, linux-kernel, mingo,
	peterz, vincent.guittot

Hello,

On Mon, May 11, 2026 at 08:07:23PM +0530, Shrikanth Hegde wrote:
> sched_ext just added the ifdefs for the masks i think.
> It has sched_smt_active() already.

Yeah, sched_ext parts look good to me.

Acked-by: Tejun Heo <tj@kernel.org>

Thanks.

-- 
tejun


* Re: [PATCH 2/3] sched: Simplify ifdeffery around cpu_smt_mask
  2026-05-11 14:37     ` Shrikanth Hegde
  2026-05-11 18:46       ` Tejun Heo
@ 2026-05-12 10:13       ` Shrikanth Hegde
  2026-05-12 10:58       ` Valentin Schneider
  2 siblings, 0 replies; 12+ messages in thread
From: Shrikanth Hegde @ 2026-05-12 10:13 UTC (permalink / raw)
  To: Valentin Schneider
  Cc: kprateek.nayak, juri.lelli, dietmar.eggemann, tj, rostedt, tglx,
	mgorman, bsegall, arighi, linux-kernel, mingo, peterz,
	vincent.guittot



On 5/11/26 8:07 PM, Shrikanth Hegde wrote:
> Hi Valentin, Thanks for the reviewing this patchset.
> 
> On 5/11/26 6:23 PM, Valentin Schneider wrote:
>> On 06/05/26 16:30, Shrikanth Hegde wrote:
>>> Now, that cpu_smt_mask is defined as cpumask_of(cpu) for
>>> CONFIG_SCHED_SMT=n, it is possible to get rid of the ifdeffery.
>>>
>>> Effectively,
>>> - This makes sched_smt_present is defined always
>>>
>>> - cpumask_weight(cpumask_of(cpu)) == 1. So sched_smt_present_inc/dec
>>>    will never enable the sched_smt_present. Which is expected.
>>>
>>> - Paths that were compile-time eliminated become runtime guarded
>>>    using static keys.
>>>
>>> - Defines set_idle_cores, test_idle_cores etc which could likely benefit
>>>    the CONFIG_SCHED_SMT=n systems to use the same optimizations 
>>> within the
>>>    LLC at wakeups.
>>>
>>> - This will expose sched_smt_present,stop_core_cpuslocked symbol for
>>>    CONFIG_SCHED_SMT=n. Likely not a concern.
>>>
>>> - There a bloat of code CONFIG_SCHED_SMT=n. (NR_CPUS=2048)
>>>    add/remove: 25/18 grow/shrink: 26/19 up/down: 6696/-3064 (3632)
>>>    Total: Before=30771823, After=30775455, chg +0.01%
>>>
>>> - No code bloat for CONFIG_SCHED_SMT=y, which is expected.
>>>
>>
>> Some nitpicks below, otherwise this LGTM except the sched_ext bits which
>> I'm not familiar enough with.
> 
> sched_ext just added the ifdefs for the masks i think.
> It has sched_smt_active() already.
> 
>>
>>> @@ -8703,9 +8699,7 @@ int sched_cpu_deactivate(unsigned int cpu)
>>>         */
>>>        sched_smt_present_dec(cpu);
>>>
>>> -#ifdef CONFIG_SCHED_SMT
>>>        sched_core_cpu_deactivate(cpu);
>>> -#endif
>>
>> That ends up grabbing @core_lock; arguably this is during hotplug, but it
>> still seems a bit wasteful when, with CONFIG_SCHED_SMT=n, we know the mask
>> weight will never exceed 1. Probably worth adding a sched_smt_active()
>> check within the callee.
>>
> 
> Ok. Fair enough. Even cpu bringup path too could use the same opt.
> Something like below?
> ---
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 084ec3987d7c..add0fcc8ba90 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6494,6 +6494,10 @@ static void sched_core_cpu_starting(unsigned int 
> cpu)
>          struct rq *rq = cpu_rq(cpu), *core_rq = NULL;
>          int t;
> 
> +       /* No point in doing anything further if SMT is not active */
> +       if (!sched_smt_active())
> +               return;
> +
>          guard(core_lock)(&cpu);
> 
>          WARN_ON_ONCE(rq->core != rq);
> @@ -6533,6 +6537,10 @@ static void sched_core_cpu_deactivate(unsigned 
> int cpu)
>          struct rq *rq = cpu_rq(cpu), *core_rq = NULL;
>          int t;
> 
> +       /* No point in doing anything further if SMT is not active */
> +       if (!sched_smt_active())
> +               return;
> +
>          guard(core_lock)(&cpu);
> 
>          /* if we're the last man standing, nothing to do */
> 
> 

This isn't necessary. Both sched_core_cpu_starting/sched_core_cpu_deactivate bail out
quickly when cpumask_weight(cpu_smt_mask) == 1.

Given that it is not a fast path, I will skip the above diff.



* Re: [PATCH 2/3] sched: Simplify ifdeffery around cpu_smt_mask
  2026-05-11 14:37     ` Shrikanth Hegde
  2026-05-11 18:46       ` Tejun Heo
  2026-05-12 10:13       ` Shrikanth Hegde
@ 2026-05-12 10:58       ` Valentin Schneider
  2 siblings, 0 replies; 12+ messages in thread
From: Valentin Schneider @ 2026-05-12 10:58 UTC (permalink / raw)
  To: Shrikanth Hegde
  Cc: kprateek.nayak, juri.lelli, dietmar.eggemann, tj, rostedt, tglx,
	mgorman, bsegall, arighi, linux-kernel, mingo, peterz,
	vincent.guittot

On 11/05/26 20:07, Shrikanth Hegde wrote:
> On 5/11/26 6:23 PM, Valentin Schneider wrote:
>> That ends up grabbing @core_lock; arguably this is during hotplug, but it
>> still seems a bit wasteful when, with CONFIG_SCHED_SMT=n, we know the mask
>> weight will never exceed 1. Probably worth adding a sched_smt_active()
>> check within the callee.
>>
>
> Ok. Fair enough. Even cpu bringup path too could use the same opt.
> Something like below?

LGTM

> ---
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 084ec3987d7c..add0fcc8ba90 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6494,6 +6494,10 @@ static void sched_core_cpu_starting(unsigned int cpu)
>          struct rq *rq = cpu_rq(cpu), *core_rq = NULL;
>          int t;
>
> +       /* No point in doing anything further if SMT is not active */
> +       if (!sched_smt_active())
> +               return;
> +
>          guard(core_lock)(&cpu);
>
>          WARN_ON_ONCE(rq->core != rq);
> @@ -6533,6 +6537,10 @@ static void sched_core_cpu_deactivate(unsigned int cpu)
>          struct rq *rq = cpu_rq(cpu), *core_rq = NULL;
>          int t;
>
> +       /* No point in doing anything further if SMT is not active */
> +       if (!sched_smt_active())
> +               return;
> +
>          guard(core_lock)(&cpu);
>
>          /* if we're the last man standing, nothing to do */
>
>
>>
>>> @@ -632,7 +632,6 @@ int stop_machine(cpu_stop_fn_t fn, void *data, const struct cpumask *cpus)
>>>   }
>>>   EXPORT_SYMBOL_GPL(stop_machine);
>>>
>>> -#ifdef CONFIG_SCHED_SMT
>>>   int stop_core_cpuslocked(unsigned int cpu, cpu_stop_fn_t fn, void *data)
>>
>> That seems to be only used by the INTEL_IFS selftest stuff which does some
>> wait_for_sibling_cpu() loop; at a quick glance it seems to do the right
>> thing for weight := 1 but IMO worth a proper look. That or have the IFS
>> code not run that when there is no SMT.
>>
>
> Right. intel_ifs is the only user.
> INTEL_IFS depends on SMP. SMP select CONFIG_SMT on x86.
>
> Symbol: INTEL_IFS [=n]
> Depends on: X86_PLATFORM_DEVICES [=y] && X86 [=y] && CPU_SUP_INTEL [=y] && 64BIT [=y] && SMP [=y]
>

Ah, I had looked at that config but didn't expand on SMP implying SMT.

> So, maybe leave this as is, to avoid the code bloat for CONFIG_SCHED_SMT=n as there is no user?
> Maybe i will add a comment about it.
>
> from ./bloat-o-meter. it adds about 260
> stop_core_cpuslocked                           -     260    +260
>
> Does that make sense?

Yup!



* Re: [PATCH 0/3] sched: Simplify ifdeffery around CONFIG_SCHED_SMT
  2026-05-06 11:00 [PATCH 0/3] sched: Simplify ifdeffery around CONFIG_SCHED_SMT Shrikanth Hegde
                   ` (2 preceding siblings ...)
  2026-05-06 11:00 ` [PATCH 3/3] sched/fair: Add compile time check in fastpaths for CONFIG_SCHED_SMT=n Shrikanth Hegde
@ 2026-05-12 10:59 ` Valentin Schneider
  2026-05-12 12:25   ` Shrikanth Hegde
  3 siblings, 1 reply; 12+ messages in thread
From: Valentin Schneider @ 2026-05-12 10:59 UTC (permalink / raw)
  To: Shrikanth Hegde, mingo, peterz, vincent.guittot, linux-kernel
  Cc: sshegde, kprateek.nayak, juri.lelli, dietmar.eggemann, tj,
	rostedt, tglx, mgorman, bsegall, arighi

On 06/05/26 16:30, Shrikanth Hegde wrote:
> Plus, major distros make CONFIG_SCHED_SMT=y for all major
> archs, and a few archs unconditionally make CONFIG_SCHED_SMT=y (x86,
> s390), so CONFIG_SCHED_SMT=n is a rare case.
>

That leaves only a handful of #ifdefs:

  include/linux/sched/topology.h:35:2:#ifdef CONFIG_SCHED_SMT
  kernel/sched/topology.c:1757:2:#ifdef CONFIG_SCHED_SMT
  kernel/sched/topology.c:1802:2:#ifdef CONFIG_SCHED_SMT
  arch/powerpc/kernel/smp.c:986:2:#ifdef CONFIG_SCHED_SMT
  arch/powerpc/kernel/smp.c:1038:2:#ifdef CONFIG_SCHED_SMT
  arch/powerpc/kernel/smp.c:1720:2:#ifdef CONFIG_SCHED_SMT
  arch/powerpc/include/asm/smp.h:141:2:#ifdef CONFIG_SCHED_SMT

The topology code could deal with unconditionally building an SMT layer as
it'll just get degenerated. The powerpc usage is mostly topology and
cpu_smt_mask(). So if we want to push for it, we could even get rid of the
config entirely.



* Re: [PATCH 0/3] sched: Simplify ifdeffery around CONFIG_SCHED_SMT
  2026-05-12 10:59 ` [PATCH 0/3] sched: Simplify ifdeffery around CONFIG_SCHED_SMT Valentin Schneider
@ 2026-05-12 12:25   ` Shrikanth Hegde
  2026-05-12 15:32     ` Valentin Schneider
  0 siblings, 1 reply; 12+ messages in thread
From: Shrikanth Hegde @ 2026-05-12 12:25 UTC (permalink / raw)
  To: Valentin Schneider, mingo, peterz, vincent.guittot, linux-kernel
  Cc: kprateek.nayak, juri.lelli, dietmar.eggemann, tj, rostedt, tglx,
	mgorman, bsegall, arighi

Hi Valentin.

On 5/12/26 4:29 PM, Valentin Schneider wrote:
> On 06/05/26 16:30, Shrikanth Hegde wrote:
>> Plus, major distros make CONFIG_SCHED_SMT=y for all major
>> archs, and a few archs unconditionally make CONFIG_SCHED_SMT=y (x86,
>> s390), so CONFIG_SCHED_SMT=n is a rare case.
>>
> 
> That leaves only a handful of #ifdefs:
> 
>    include/linux/sched/topology.h:35:2:#ifdef CONFIG_SCHED_SMT
>    kernel/sched/topology.c:1757:2:#ifdef CONFIG_SCHED_SMT
>    kernel/sched/topology.c:1802:2:#ifdef CONFIG_SCHED_SMT
>    arch/powerpc/kernel/smp.c:986:2:#ifdef CONFIG_SCHED_SMT
>    arch/powerpc/kernel/smp.c:1038:2:#ifdef CONFIG_SCHED_SMT
>    arch/powerpc/kernel/smp.c:1720:2:#ifdef CONFIG_SCHED_SMT
>    arch/powerpc/include/asm/smp.h:141:2:#ifdef CONFIG_SCHED_SMT
> 
> The topology code could deal with unconditionally building an SMT layer as
> it'll just get degenerated. The powerpc usage is mostly topology and
> cpu_smt_mask(). So if we want to push for it, we could even get rid of the
> config entirely.
> 

I think that's tricky. Here is my take; feel free to add/correct.

1 - That's a policy decision, and it is only from the scheduler's
     perspective. The hardware topology mapping need not necessarily
     be on the same page. Hence we have the config today.


2 - Implementation is difficult as well.

Let's say we define it as:

#if !defined(cpu_smt_mask)
static inline const struct cpumask *cpu_smt_mask(int cpu)
{
         return topology_sibling_cpumask(cpu);
}
#endif

Then topology_sibling_cpumask could still point to the actual HW siblings, and the SMT
domain may not degenerate, as it could have more than one CPU. And since powerpc, for
example, defines cpu_smt_mask, it would always have an SMT domain. That's not correct IMO.


* Re: [PATCH 0/3] sched: Simplify ifdeffery around CONFIG_SCHED_SMT
  2026-05-12 12:25   ` Shrikanth Hegde
@ 2026-05-12 15:32     ` Valentin Schneider
  0 siblings, 0 replies; 12+ messages in thread
From: Valentin Schneider @ 2026-05-12 15:32 UTC (permalink / raw)
  To: Shrikanth Hegde, mingo, peterz, vincent.guittot, linux-kernel
  Cc: kprateek.nayak, juri.lelli, dietmar.eggemann, tj, rostedt, tglx,
	mgorman, bsegall, arighi

On 12/05/26 17:55, Shrikanth Hegde wrote:
> Hi Valentin.
>
> On 5/12/26 4:29 PM, Valentin Schneider wrote:
>> On 06/05/26 16:30, Shrikanth Hegde wrote:
>>> Plus, major distros make CONFIG_SCHED_SMT=y for all major
>>> archs, and a few archs unconditionally make CONFIG_SCHED_SMT=y (x86,
>>> s390), so CONFIG_SCHED_SMT=n is a rare case.
>>>
>>
>> That leaves only a handful of #ifdefs:
>>
>>    include/linux/sched/topology.h:35:2:#ifdef CONFIG_SCHED_SMT
>>    kernel/sched/topology.c:1757:2:#ifdef CONFIG_SCHED_SMT
>>    kernel/sched/topology.c:1802:2:#ifdef CONFIG_SCHED_SMT
>>    arch/powerpc/kernel/smp.c:986:2:#ifdef CONFIG_SCHED_SMT
>>    arch/powerpc/kernel/smp.c:1038:2:#ifdef CONFIG_SCHED_SMT
>>    arch/powerpc/kernel/smp.c:1720:2:#ifdef CONFIG_SCHED_SMT
>>    arch/powerpc/include/asm/smp.h:141:2:#ifdef CONFIG_SCHED_SMT
>>
>> The topology code could deal with unconditionally building an SMT layer as
>> it'll just get degenerated. The powerpc usage is mostly topology and
>> cpu_smt_mask(). So if we want to push for it, we could even get rid of the
>> config entirely.
>>
>
> I think that's tricky. Here is my take; feel free to add/correct.
>
> 1 - That's a policy decision, and it is only from the scheduler's
>      perspective. The hardware topology mapping need not necessarily
>      be on the same page. Hence we have the config today.
>
>
> 2 - Implementation is difficult as well.
>
> Let's say we define it as:
>
> #if !defined(cpu_smt_mask)
> static inline const struct cpumask *cpu_smt_mask(int cpu)
> {
>          return topology_sibling_cpumask(cpu);
> }
> #endif
>
> Then topology_sibling_cpumask could still point to the actual HW siblings, and the SMT
> domain may not degenerate, as it could have more than one CPU. And since powerpc, for
> example, defines cpu_smt_mask, it would always have an SMT domain. That's not correct IMO.

Yeah you're right, let's forget about it for now :-)



end of thread, other threads:[~2026-05-12 15:32 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
2026-05-06 11:00 [PATCH 0/3] sched: Simplify ifdeffery around CONFIG_SCHED_SMT Shrikanth Hegde
2026-05-06 11:00 ` [PATCH 1/3] topology: Introduce cpu_smt_mask for CONFIG_SCHED_SMT=n Shrikanth Hegde
2026-05-06 11:00 ` [PATCH 2/3] sched: Simplify ifdeffery around cpu_smt_mask Shrikanth Hegde
2026-05-11 12:53   ` Valentin Schneider
2026-05-11 14:37     ` Shrikanth Hegde
2026-05-11 18:46       ` Tejun Heo
2026-05-12 10:13       ` Shrikanth Hegde
2026-05-12 10:58       ` Valentin Schneider
2026-05-06 11:00 ` [PATCH 3/3] sched/fair: Add compile time check in fastpaths for CONFIG_SCHED_SMT=n Shrikanth Hegde
2026-05-12 10:59 ` [PATCH 0/3] sched: Simplify ifdeffery around CONFIG_SCHED_SMT Valentin Schneider
2026-05-12 12:25   ` Shrikanth Hegde
2026-05-12 15:32     ` Valentin Schneider
