public inbox for linux-kernel@vger.kernel.org
* [PATCH v4 0/6] sched/fair: SMT-aware asymmetric CPU capacity
@ 2026-04-28  5:16 Andrea Righi
  2026-04-28  5:16 ` [PATCH 1/6] sched/fair: Use guard(rcu) for sched_domain RCU sections Andrea Righi
                   ` (5 more replies)
  0 siblings, 6 replies; 20+ messages in thread
From: Andrea Righi @ 2026-04-28  5:16 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
  Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, K Prateek Nayak, Christian Loehle, Koba Ko,
	Felix Abecassis, Balbir Singh, Joel Fernandes, Shrikanth Hegde,
	linux-kernel

This series attempts to improve SD_ASYM_CPUCAPACITY scheduling by introducing
SMT awareness.

= Problem =

Nominal per-logical-CPU capacity can overstate the usable compute when an SMT
sibling is busy, because the physical core no longer delivers its full nominal
capacity. As a result, several asym-capacity paths may pick high-capacity idle
CPUs that are not actually good destinations.

= Solution =

This patch set aligns those paths with a simple rule already used elsewhere:
when SMT is active, prefer fully idle cores and avoid treating partially idle
SMT siblings as full-capacity targets where that would mislead load balance.

Patch set summary:
 - Attach sched_domain_shared to sd_asym_cpucapacity, so the has_idle_cores
   hint is used consistently in the wakeup idle scan, and rename
   sd_llc_shared -> sd_balance_shared.
 - Prefer fully-idle SMT cores in asym-capacity idle selection: in the wakeup
   fast path, extend select_idle_capacity() / asym_fits_cpu() so idle
   selection can prefer CPUs on fully idle cores.
 - Reject misfit pulls onto busy SMT siblings on SD_ASYM_CPUCAPACITY.
 - Add SIS_UTIL support to select_idle_capacity(): reuse the same
   SIS_UTIL-controlled idle-scan limit already used by select_idle_cpu().
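
As a rough illustration of the core rule (a minimal sketch, not code from the
patches; the helper name is made up, while sched_smt_active() and
is_core_idle() are the existing helpers this series builds on):

	/*
	 * Illustration only: an idle CPU counts as a full-capacity target
	 * when SMT is off or its whole core is idle.
	 */
	static inline bool smt_aware_full_capacity(int cpu)
	{
		return !sched_smt_active() || is_core_idle(cpu);
	}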

This patch set has been tested on the new NVIDIA Vera Rubin platform, where SMT
is enabled and the firmware exposes small frequency variations (+/-~5%) as
differences in CPU capacity, resulting in SD_ASYM_CPUCAPACITY being set.

Without these patches, performance can drop by up to ~2x with CPU-intensive
workloads, because the SD_ASYM_CPUCAPACITY idle selection policy does not
account for busy SMT siblings.

Alternative approaches have been evaluated, such as equalizing CPU capacities,
either by exposing uniform values via firmware or normalizing them in the kernel
by grouping CPUs within a small capacity window (+/-5%).

However, the SMT-aware SD_ASYM_CPUCAPACITY approach has shown better results so
far. Improving this policy also seems worthwhile in general, as future platforms
may enable SMT with asymmetric CPU topologies.

Performance results on Vera Rubin, SD_ASYM_CPUCAPACITY (mainline) vs
SD_ASYM_CPUCAPACITY + SMT awareness:

- NVBLAS benchblas (one task / SMT core):

 +---------------------------------+--------+
 | Configuration                   | gflops |
 +---------------------------------+--------+
 | ASYM (mainline) + SIS_UTIL      |  5478  |
 | ASYM (mainline) + NO_SIS_UTIL   |  5491  |
 |                                 |        |
 | NO ASYM + SIS_UTIL              |  8912  |
 | NO ASYM + NO_SIS_UTIL           |  8978  |
 |                                 |        |
 | ASYM + SMT + SIS_UTIL           |  9259  |
 | ASYM + SMT + NO_SIS_UTIL        |  9291  |
 +---------------------------------+--------+

- DCPerf MediaWiki (all CPUs):

 +---------------------------------+--------+--------+--------+--------+
 | Configuration                   |   rps  |  p50   |  p95   |  p99   |
 +---------------------------------+--------+--------+--------+--------+
 | ASYM (mainline) + SIS_UTIL      |  7994  |  0.052 |  0.223 |  0.246 |
 | ASYM (mainline) + NO_SIS_UTIL   |  7993  |  0.052 |  0.221 |  0.245 |
 |                                 |        |        |        |        |
 | NO ASYM + SIS_UTIL              |  8113  |  0.067 |  0.184 |  0.225 |
 | NO ASYM + NO_SIS_UTIL           |  8093  |  0.068 |  0.184 |  0.223 |
 |                                 |        |        |        |        |
 | ASYM + SMT + SIS_UTIL           |  8129  |  0.076 |  0.149 |  0.188 |
 | ASYM + SMT + NO_SIS_UTIL        |  8138  |  0.076 |  0.148 |  0.186 |
 +---------------------------------+--------+--------+--------+--------+

In the MediaWiki case SMT awareness is less impactful, because all CPUs are
busy for the majority of the run, but it still seems to provide some benefit
in reducing tail latency.

Tests have also been conducted on NVIDIA Grace (which does not support SMT)
to ensure that SIS_UTIL support in select_idle_capacity() does not introduce
regressions; results show slight improvements under the same workloads.

See also:
 - https://lore.kernel.org/lkml/20260324005509.1134981-1-arighi@nvidia.com
 - https://lore.kernel.org/lkml/20260318092214.130908-1-arighi@nvidia.com

Changes in v4:
 - Rename sd_llc_shared -> sd_balance_shared
 - Add preliminary cleanup patch to use guard(rcu)() for sched_domain RCU
   (Prateek Nayak)
 - Apply SIS_UTIL scan cap only with !prefers_idle_core, matching
   select_idle_cpu() / has_idle_core logic (Vincent Guittot)
 - Cache env->dst_cpu idle state to reduce is_core_idle() calls (Prateek Nayak)
 - Remove warning about CPU capacity asymmetry not supporting SMT
 - Link to v3: https://lore.kernel.org/all/20260423074135.380390-1-arighi@nvidia.com

Changes in v3:
 - Add SIS_UTIL support to select_idle_capacity() (K Prateek Nayak)
 - Attach sched_domain_shared to sd_asym_cpucapacity (K Prateek Nayak)
 - Add enum for the different fit state (K Prateek Nayak)
 - Update has_idle_cores hint (Vincent Guittot)
 - Link to v2: https://lore.kernel.org/all/20260403053654.1559142-1-arighi@nvidia.com

Changes in v2:
 - Rework SMT awareness logic in select_idle_capacity() (K Prateek Nayak)
 - Drop EAS and find_new_ilb() changes for now
 - Link to v1: https://lore.kernel.org/all/20260326151211.1862600-1-arighi@nvidia.com

Git tree: git://git.kernel.org/pub/scm/linux/kernel/git/arighi/linux.git sched-asym-smt

Andrea Righi (4):
      sched/fair: Use guard(rcu) for sched_domain RCU sections
      sched/fair: Prefer fully-idle SMT cores in asym-capacity idle selection
      sched/fair: Reject misfit pulls onto busy SMT siblings on asym-capacity
      sched/topology: Remove SMT/asym capacity warning

K Prateek Nayak (2):
      sched/fair: Attach sched_domain_shared to sd_asym_cpucapacity
      sched/fair: Add SIS_UTIL support to select_idle_capacity()

 kernel/sched/fair.c     | 261 ++++++++++++++++++++++++++++++++----------------
 kernel/sched/sched.h    |   2 +-
 kernel/sched/topology.c |  95 ++++++++++++++----
 3 files changed, 256 insertions(+), 102 deletions(-)

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 1/6] sched/fair: Use guard(rcu) for sched_domain RCU sections
  2026-04-28  5:16 [PATCH v4 0/6] sched/fair: SMT-aware asymmetric CPU capacity Andrea Righi
@ 2026-04-28  5:16 ` Andrea Righi
  2026-04-28  8:33   ` K Prateek Nayak
  2026-04-28  5:16 ` [PATCH 2/6] sched/fair: Attach sched_domain_shared to sd_asym_cpucapacity Andrea Righi
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 20+ messages in thread
From: Andrea Righi @ 2026-04-28  5:16 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
  Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, K Prateek Nayak, Christian Loehle, Koba Ko,
	Felix Abecassis, Balbir Singh, Joel Fernandes, Shrikanth Hegde,
	linux-kernel

Use the scoped guard(rcu)() helper to safely access sched_domain
pointers.

No functional change intended; this is preparation for topology work
where sched_domain lifetimes are easier to reason about with explicit,
scope-bounded RCU critical sections.
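
For example (illustration only; the actual conversions are in the diff
below), a pattern like:

	rcu_read_lock();
	sd = rcu_dereference_all(per_cpu(sd_llc, cpu));
	...
	rcu_read_unlock();

becomes:

	guard(rcu)();
	sd = rcu_dereference_all(per_cpu(sd_llc, cpu));
	...

and the RCU read-side critical section now ends automatically when the scope
is left, including on early returns.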

Suggested-by: K Prateek Nayak <kprateek.nayak@amd.com>
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
 kernel/sched/fair.c | 141 ++++++++++++++++++++++----------------------
 1 file changed, 71 insertions(+), 70 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 69361c63353ad..fc0828150c780 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8083,6 +8083,8 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 	 */
 	lockdep_assert_irqs_disabled();
 
+	guard(rcu)();
+
 	if (choose_idle_cpu(target, p) &&
 	    asym_fits_cpu(task_util, util_min, util_max, target))
 		return target;
@@ -12701,55 +12703,16 @@ static void kick_ilb(unsigned int flags)
 }
 
 /*
- * Current decision point for kicking the idle load balancer in the presence
- * of idle CPUs in the system.
+ * Decide whether the ILB needs a stats and/or balance kick based on
+ * sched_domain state.
  */
-static void nohz_balancer_kick(struct rq *rq)
+static bool nohz_balancer_needs_kick(struct rq *rq)
 {
-	unsigned long now = jiffies;
 	struct sched_domain_shared *sds;
 	struct sched_domain *sd;
 	int nr_busy, i, cpu = rq->cpu;
-	unsigned int flags = 0;
-
-	if (unlikely(rq->idle_balance))
-		return;
-
-	/*
-	 * We may be recently in ticked or tickless idle mode. At the first
-	 * busy tick after returning from idle, we will update the busy stats.
-	 */
-	nohz_balance_exit_idle(rq);
-
-	if (READ_ONCE(nohz.has_blocked_load) &&
-	    time_after(now, READ_ONCE(nohz.next_blocked)))
-		flags = NOHZ_STATS_KICK;
-
-	/*
-	 * Most of the time system is not 100% busy. i.e nohz.nr_cpus > 0
-	 * Skip the read if time is not due.
-	 *
-	 * If none are in tickless mode, there maybe a narrow window
-	 * (28 jiffies, HZ=1000) where flags maybe set and kick_ilb called.
-	 * But idle load balancing is not done as find_new_ilb fails.
-	 * That's very rare. So read nohz.nr_cpus only if time is due.
-	 */
-	if (time_before(now, nohz.next_balance))
-		goto out;
 
-	/*
-	 * None are in tickless mode and hence no need for NOHZ idle load
-	 * balancing
-	 */
-	if (unlikely(cpumask_empty(nohz.idle_cpus_mask)))
-		return;
-
-	if (rq->nr_running >= 2) {
-		flags = NOHZ_STATS_KICK | NOHZ_BALANCE_KICK;
-		goto out;
-	}
-
-	rcu_read_lock();
+	guard(rcu)();
 
 	sd = rcu_dereference_all(rq->sd);
 	if (sd) {
@@ -12757,10 +12720,8 @@ static void nohz_balancer_kick(struct rq *rq)
 		 * If there's a runnable CFS task and the current CPU has reduced
 		 * capacity, kick the ILB to see if there's a better CPU to run on:
 		 */
-		if (rq->cfs.h_nr_runnable >= 1 && check_cpu_capacity(rq, sd)) {
-			flags = NOHZ_STATS_KICK | NOHZ_BALANCE_KICK;
-			goto unlock;
-		}
+		if (rq->cfs.h_nr_runnable >= 1 && check_cpu_capacity(rq, sd))
+			return true;
 	}
 
 	sd = rcu_dereference_all(per_cpu(sd_asym_packing, cpu));
@@ -12774,10 +12735,8 @@ static void nohz_balancer_kick(struct rq *rq)
 		 * preferred CPU must be idle.
 		 */
 		for_each_cpu_and(i, sched_domain_span(sd), nohz.idle_cpus_mask) {
-			if (sched_asym(sd, i, cpu)) {
-				flags = NOHZ_STATS_KICK | NOHZ_BALANCE_KICK;
-				goto unlock;
-			}
+			if (sched_asym(sd, i, cpu))
+				return true;
 		}
 	}
 
@@ -12787,10 +12746,8 @@ static void nohz_balancer_kick(struct rq *rq)
 		 * When ASYM_CPUCAPACITY; see if there's a higher capacity CPU
 		 * to run the misfit task on.
 		 */
-		if (check_misfit_status(rq)) {
-			flags = NOHZ_STATS_KICK | NOHZ_BALANCE_KICK;
-			goto unlock;
-		}
+		if (check_misfit_status(rq))
+			return true;
 
 		/*
 		 * For asymmetric systems, we do not want to nicely balance
@@ -12799,7 +12756,7 @@ static void nohz_balancer_kick(struct rq *rq)
 		 *
 		 * Skip the LLC logic because it's not relevant in that case.
 		 */
-		goto unlock;
+		return false;
 	}
 
 	sds = rcu_dereference_all(per_cpu(sd_llc_shared, cpu));
@@ -12814,13 +12771,61 @@ static void nohz_balancer_kick(struct rq *rq)
 		 * like this LLC domain has tasks we could move.
 		 */
 		nr_busy = atomic_read(&sds->nr_busy_cpus);
-		if (nr_busy > 1) {
-			flags = NOHZ_STATS_KICK | NOHZ_BALANCE_KICK;
-			goto unlock;
-		}
+		if (nr_busy > 1)
+			return true;
 	}
-unlock:
-	rcu_read_unlock();
+
+	return false;
+}
+
+/*
+ * Current decision point for kicking the idle load balancer in the presence
+ * of idle CPUs in the system.
+ */
+static void nohz_balancer_kick(struct rq *rq)
+{
+	unsigned long now = jiffies;
+	unsigned int flags = 0;
+
+	if (unlikely(rq->idle_balance))
+		return;
+
+	/*
+	 * We may be recently in ticked or tickless idle mode. At the first
+	 * busy tick after returning from idle, we will update the busy stats.
+	 */
+	nohz_balance_exit_idle(rq);
+
+	if (READ_ONCE(nohz.has_blocked_load) &&
+	    time_after(now, READ_ONCE(nohz.next_blocked)))
+		flags = NOHZ_STATS_KICK;
+
+	/*
+	 * Most of the time system is not 100% busy. i.e nohz.nr_cpus > 0
+	 * Skip the read if time is not due.
+	 *
+	 * If none are in tickless mode, there maybe a narrow window
+	 * (28 jiffies, HZ=1000) where flags maybe set and kick_ilb called.
+	 * But idle load balancing is not done as find_new_ilb fails.
+	 * That's very rare. So read nohz.nr_cpus only if time is due.
+	 */
+	if (time_before(now, nohz.next_balance))
+		goto out;
+
+	/*
+	 * None are in tickless mode and hence no need for NOHZ idle load
+	 * balancing
+	 */
+	if (unlikely(cpumask_empty(nohz.idle_cpus_mask)))
+		return;
+
+	if (rq->nr_running >= 2) {
+		flags = NOHZ_STATS_KICK | NOHZ_BALANCE_KICK;
+		goto out;
+	}
+
+	if (nohz_balancer_needs_kick(rq))
+		flags |= NOHZ_STATS_KICK | NOHZ_BALANCE_KICK;
 out:
 	if (READ_ONCE(nohz.needs_update))
 		flags |= NOHZ_NEXT_KICK;
@@ -12833,16 +12838,14 @@ static void set_cpu_sd_state_busy(int cpu)
 {
 	struct sched_domain *sd;
 
-	rcu_read_lock();
+	guard(rcu)();
 	sd = rcu_dereference_all(per_cpu(sd_llc, cpu));
 
 	if (!sd || !sd->nohz_idle)
-		goto unlock;
+		return;
 	sd->nohz_idle = 0;
 
 	atomic_inc(&sd->shared->nr_busy_cpus);
-unlock:
-	rcu_read_unlock();
 }
 
 void nohz_balance_exit_idle(struct rq *rq)
@@ -12862,16 +12865,14 @@ static void set_cpu_sd_state_idle(int cpu)
 {
 	struct sched_domain *sd;
 
-	rcu_read_lock();
+	guard(rcu)();
 	sd = rcu_dereference_all(per_cpu(sd_llc, cpu));
 
 	if (!sd || sd->nohz_idle)
-		goto unlock;
+		return;
 	sd->nohz_idle = 1;
 
 	atomic_dec(&sd->shared->nr_busy_cpus);
-unlock:
-	rcu_read_unlock();
 }
 
 /*
-- 
2.54.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 2/6] sched/fair: Attach sched_domain_shared to sd_asym_cpucapacity
  2026-04-28  5:16 [PATCH v4 0/6] sched/fair: SMT-aware asymmetric CPU capacity Andrea Righi
  2026-04-28  5:16 ` [PATCH 1/6] sched/fair: Use guard(rcu) for sched_domain RCU sections Andrea Righi
@ 2026-04-28  5:16 ` Andrea Righi
  2026-04-28  6:45   ` Shrikanth Hegde
  2026-04-28  5:16 ` [PATCH 3/6] sched/fair: Prefer fully-idle SMT cores in asym-capacity idle selection Andrea Righi
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 20+ messages in thread
From: Andrea Righi @ 2026-04-28  5:16 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
  Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, K Prateek Nayak, Christian Loehle, Koba Ko,
	Felix Abecassis, Balbir Singh, Joel Fernandes, Shrikanth Hegde,
	linux-kernel

From: K Prateek Nayak <kprateek.nayak@amd.com>

On asymmetric CPU capacity systems, the wakeup path uses
select_idle_capacity(), which scans the span of sd_asym_cpucapacity
rather than sd_llc.

The has_idle_cores hint however lives on sd_llc->shared, so the
wakeup-time read of has_idle_cores operates on an LLC-scoped blob while
the actual scan/decision spans the asym domain; nr_busy_cpus also lives
in the same shared sched_domain data, but it's never used in the asym
CPU capacity scenario.

Therefore, move the sched_domain_shared object to sd_asym_cpucapacity
whenever the CPU has a SD_ASYM_CPUCAPACITY_FULL ancestor and that
ancestor is non-overlapping (i.e., not built from SD_NUMA). In that case
the scope of has_idle_cores matches the scope of the wakeup scan.

Fall back to attaching the shared object to sd_llc in three cases:

  1) plain symmetric systems (no SD_ASYM_CPUCAPACITY_FULL anywhere);

  2) CPUs in an exclusive cpuset that carves out a symmetric capacity
     island: has_asym is system-wide but those CPUs have no
     SD_ASYM_CPUCAPACITY_FULL ancestor in their hierarchy and follow
     the symmetric LLC path in select_idle_sibling();

  3) exotic topologies where SD_ASYM_CPUCAPACITY_FULL lands on an
     SD_NUMA-built domain. init_sched_domain_shared() keys the shared
     blob off cpumask_first(span), which on overlapping NUMA domains
     would alias unrelated spans onto the same blob. Keep the shared
     object on the LLC there; select_idle_capacity() gracefully skips
     the has_idle_cores preference when sd->shared is NULL.

While at it, also rename the per-CPU sd_llc_shared to sd_balance_shared,
as it is no longer strictly tied to the LLC.
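
As a minimal reader-side sketch (not part of this patch) of why the move
matters: on a non-NUMA asym-capacity system the hint and the wakeup scan now
cover the same span,

	sd  = rcu_dereference(per_cpu(sd_asym_cpucapacity, target));
	sds = rcu_dereference(per_cpu(sd_balance_shared, target)); /* same span as sd */
	if (sd && sds && READ_ONCE(sds->has_idle_cores))
		/* select_idle_capacity() scans sched_domain_span(sd) */
		scan_preferring_fully_idle_cores(p, sd, target);

where scan_preferring_fully_idle_cores() merely stands in for the actual
select_idle_capacity() logic.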

Co-developed-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
 kernel/sched/fair.c     | 20 +++++----
 kernel/sched/sched.h    |  2 +-
 kernel/sched/topology.c | 91 +++++++++++++++++++++++++++++++++++------
 3 files changed, 91 insertions(+), 22 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fc0828150c780..ece3a26f59c27 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7790,7 +7790,7 @@ static inline void set_idle_cores(int cpu, int val)
 {
 	struct sched_domain_shared *sds;
 
-	sds = rcu_dereference_all(per_cpu(sd_llc_shared, cpu));
+	sds = rcu_dereference_all(per_cpu(sd_balance_shared, cpu));
 	if (sds)
 		WRITE_ONCE(sds->has_idle_cores, val);
 }
@@ -7799,7 +7799,7 @@ static inline bool test_idle_cores(int cpu)
 {
 	struct sched_domain_shared *sds;
 
-	sds = rcu_dereference_all(per_cpu(sd_llc_shared, cpu));
+	sds = rcu_dereference_all(per_cpu(sd_balance_shared, cpu));
 	if (sds)
 		return READ_ONCE(sds->has_idle_cores);
 
@@ -7808,7 +7808,7 @@ static inline bool test_idle_cores(int cpu)
 
 /*
  * Scans the local SMT mask to see if the entire core is idle, and records this
- * information in sd_llc_shared->has_idle_cores.
+ * information in sd_balance_shared->has_idle_cores.
  *
  * Since SMT siblings share all cache levels, inspecting this limited remote
  * state should be fairly cheap.
@@ -7925,7 +7925,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
 	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
 	int i, cpu, idle_cpu = -1, nr = INT_MAX;
 
-	if (sched_feat(SIS_UTIL)) {
+	if (sched_feat(SIS_UTIL) && sd->shared) {
 		/*
 		 * Increment because !--nr is the condition to stop scan.
 		 *
@@ -12759,7 +12759,7 @@ static bool nohz_balancer_needs_kick(struct rq *rq)
 		return false;
 	}
 
-	sds = rcu_dereference_all(per_cpu(sd_llc_shared, cpu));
+	sds = rcu_dereference_all(per_cpu(sd_balance_shared, cpu));
 	if (sds) {
 		/*
 		 * If there is an imbalance between LLC domains (IOW we could
@@ -12841,10 +12841,13 @@ static void set_cpu_sd_state_busy(int cpu)
 	guard(rcu)();
 	sd = rcu_dereference_all(per_cpu(sd_llc, cpu));
 
-	if (!sd || !sd->nohz_idle)
+	/*
+	 * sd->nohz_idle only pairs with nr_busy_cpus on sd->shared; if this LLC
+	 * domain has no shared object there is nothing to clear or account.
+	 */
+	if (!sd || !sd->shared || !sd->nohz_idle)
 		return;
 	sd->nohz_idle = 0;
-
 	atomic_inc(&sd->shared->nr_busy_cpus);
 }
 
@@ -12868,7 +12871,8 @@ static void set_cpu_sd_state_idle(int cpu)
 	guard(rcu)();
 	sd = rcu_dereference_all(per_cpu(sd_llc, cpu));
 
-	if (!sd || sd->nohz_idle)
+	/* See set_cpu_sd_state_busy(): nohz_idle is only used with sd->shared. */
+	if (!sd || !sd->shared || sd->nohz_idle)
 		return;
 	sd->nohz_idle = 1;
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 9f63b15d309d1..330f5893c4561 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2170,7 +2170,7 @@ DECLARE_PER_CPU(struct sched_domain __rcu *, sd_llc);
 DECLARE_PER_CPU(int, sd_llc_size);
 DECLARE_PER_CPU(int, sd_llc_id);
 DECLARE_PER_CPU(int, sd_share_id);
-DECLARE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
+DECLARE_PER_CPU(struct sched_domain_shared __rcu *, sd_balance_shared);
 DECLARE_PER_CPU(struct sched_domain __rcu *, sd_numa);
 DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
 DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_cpucapacity);
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 5847b83d9d552..1e6ce369a4bbc 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -665,7 +665,7 @@ DEFINE_PER_CPU(struct sched_domain __rcu *, sd_llc);
 DEFINE_PER_CPU(int, sd_llc_size);
 DEFINE_PER_CPU(int, sd_llc_id);
 DEFINE_PER_CPU(int, sd_share_id);
-DEFINE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
+DEFINE_PER_CPU(struct sched_domain_shared __rcu *, sd_balance_shared);
 DEFINE_PER_CPU(struct sched_domain __rcu *, sd_numa);
 DEFINE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
 DEFINE_PER_CPU(struct sched_domain __rcu *, sd_asym_cpucapacity);
@@ -680,20 +680,39 @@ static void update_top_cache_domain(int cpu)
 	int id = cpu;
 	int size = 1;
 
+	sd = lowest_flag_domain(cpu, SD_ASYM_CPUCAPACITY_FULL);
+	/*
+	 * The shared object is attached to sd_asym_cpucapacity only when the
+	 * asym domain is non-overlapping (i.e., not built from SD_NUMA).
+	 * On overlapping (NUMA) asym domains we fall back to letting the
+	 * SD_SHARE_LLC path own the shared object, so sd->shared may be NULL
+	 * here.
+	 */
+	if (sd && sd->shared)
+		sds = sd->shared;
+
+	rcu_assign_pointer(per_cpu(sd_asym_cpucapacity, cpu), sd);
+
 	sd = highest_flag_domain(cpu, SD_SHARE_LLC);
 	if (sd) {
 		id = cpumask_first(sched_domain_span(sd));
 		size = cpumask_weight(sched_domain_span(sd));
 
-		/* If sd_llc exists, sd_llc_shared should exist too. */
-		WARN_ON_ONCE(!sd->shared);
-		sds = sd->shared;
+		/*
+		 * If sd_asym_cpucapacity didn't claim the shared object,
+		 * sd_llc must have one linked.
+		 */
+		if (!sds) {
+			WARN_ON_ONCE(!sd->shared);
+			sds = sd->shared;
+		}
 	}
 
 	rcu_assign_pointer(per_cpu(sd_llc, cpu), sd);
 	per_cpu(sd_llc_size, cpu) = size;
 	per_cpu(sd_llc_id, cpu) = id;
-	rcu_assign_pointer(per_cpu(sd_llc_shared, cpu), sds);
+
+	rcu_assign_pointer(per_cpu(sd_balance_shared, cpu), sds);
 
 	sd = lowest_flag_domain(cpu, SD_CLUSTER);
 	if (sd)
@@ -711,9 +730,6 @@ static void update_top_cache_domain(int cpu)
 
 	sd = highest_flag_domain(cpu, SD_ASYM_PACKING);
 	rcu_assign_pointer(per_cpu(sd_asym_packing, cpu), sd);
-
-	sd = lowest_flag_domain(cpu, SD_ASYM_CPUCAPACITY_FULL);
-	rcu_assign_pointer(per_cpu(sd_asym_cpucapacity, cpu), sd);
 }
 
 /*
@@ -2650,6 +2666,49 @@ static void adjust_numa_imbalance(struct sched_domain *sd_llc)
 	}
 }
 
+static void init_sched_domain_shared(struct s_data *d, struct sched_domain *sd)
+{
+	int sd_id = cpumask_first(sched_domain_span(sd));
+
+	sd->shared = *per_cpu_ptr(d->sds, sd_id);
+	atomic_set(&sd->shared->nr_busy_cpus, sd->span_weight);
+	atomic_inc(&sd->shared->ref);
+}
+
+/*
+ * For asymmetric CPU capacity, attach sched_domain_shared on the innermost
+ * SD_ASYM_CPUCAPACITY_FULL ancestor of @cpu's base domain when that ancestor is
+ * not an overlapping NUMA-built domain (then LLC should claim shared).
+ *
+ * A CPU may lack any FULL ancestor (e.g., exclusive cpuset symmetric island),
+ * then LLC must claim shared instead.
+ *
+ * Note: SD_ASYM_CPUCAPACITY_FULL is only set when multiple distinct capacities
+ * exist in the domain span, so the asym domain we attach to cannot degenerate
+ * into a single-capacity group. The relevant edge cases are instead covered by
+ * the caveats above.
+ *
+ * Return true if this CPU's asym path claimed sd->shared, false otherwise.
+ */
+static bool claim_asym_sched_domain_shared(struct s_data *d, int cpu)
+{
+	struct sched_domain *sd = *per_cpu_ptr(d->sd, cpu);
+	struct sched_domain *sd_asym;
+
+	if (!sd)
+		return false;
+
+	sd_asym = sd;
+	while (sd_asym && !(sd_asym->flags & SD_ASYM_CPUCAPACITY_FULL))
+		sd_asym = sd_asym->parent;
+
+	if (!sd_asym || (sd_asym->flags & SD_NUMA))
+		return false;
+
+	init_sched_domain_shared(d, sd_asym);
+	return true;
+}
+
 /*
  * Build sched domains for a given set of CPUs and attach the sched domains
  * to the individual CPUs
@@ -2708,20 +2767,26 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
 	}
 
 	for_each_cpu(i, cpu_map) {
+		bool asym_claimed = false;
+
 		sd = *per_cpu_ptr(d.sd, i);
 		if (!sd)
 			continue;
 
+		if (has_asym)
+			asym_claimed = claim_asym_sched_domain_shared(&d, i);
+
 		/* First, find the topmost SD_SHARE_LLC domain */
 		while (sd->parent && (sd->parent->flags & SD_SHARE_LLC))
 			sd = sd->parent;
 
 		if (sd->flags & SD_SHARE_LLC) {
-			int sd_id = cpumask_first(sched_domain_span(sd));
-
-			sd->shared = *per_cpu_ptr(d.sds, sd_id);
-			atomic_set(&sd->shared->nr_busy_cpus, sd->span_weight);
-			atomic_inc(&sd->shared->ref);
+			/*
+			 * Initialize the sd->shared for SD_SHARE_LLC unless
+			 * the asym path above already claimed it.
+			 */
+			if (!asym_claimed)
+				init_sched_domain_shared(&d, sd);
 
 			/*
 			 * In presence of higher domains, adjust the
-- 
2.54.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 3/6] sched/fair: Prefer fully-idle SMT cores in asym-capacity idle selection
  2026-04-28  5:16 [PATCH v4 0/6] sched/fair: SMT-aware asymmetric CPU capacity Andrea Righi
  2026-04-28  5:16 ` [PATCH 1/6] sched/fair: Use guard(rcu) for sched_domain RCU sections Andrea Righi
  2026-04-28  5:16 ` [PATCH 2/6] sched/fair: Attach sched_domain_shared to sd_asym_cpucapacity Andrea Righi
@ 2026-04-28  5:16 ` Andrea Righi
  2026-04-28  5:16 ` [PATCH 4/6] sched/fair: Reject misfit pulls onto busy SMT siblings on asym-capacity Andrea Righi
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 20+ messages in thread
From: Andrea Righi @ 2026-04-28  5:16 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
  Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, K Prateek Nayak, Christian Loehle, Koba Ko,
	Felix Abecassis, Balbir Singh, Joel Fernandes, Shrikanth Hegde,
	linux-kernel

On systems with asymmetric CPU capacity (e.g., ACPI/CPPC reporting
different per-core frequencies), the wakeup path uses
select_idle_capacity() and prioritizes idle CPUs with higher capacity
for better task placement. However, when those CPUs belong to SMT cores,
their effective capacity can be much lower than the nominal capacity
when the sibling thread is busy: SMT siblings compete for shared
resources, so a "high capacity" CPU that is idle but whose sibling is
busy does not deliver its full capacity. This effective capacity
reduction cannot be modeled by the static capacity value alone.

Introduce SMT awareness in the asym-capacity idle selection policy: when
SMT is active, always prefer fully-idle SMT cores over partially-idle
ones.

Prioritizing fully-idle SMT cores yields better task placement: since the
effective capacity of partially-idle SMT cores is reduced, always preferring
fully-idle cores when available leads to more accurate capacity-based
decisions at task wakeup.

On an SMT system with asymmetric CPU capacities, SMT-aware idle
selection has been shown to improve throughput by around 15-18% for
CPU-bound workloads running a number of tasks equal to the number of
SMT cores.
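
For reference, the resulting ranking, derived from the enum and the fits
adjustments introduced below (lower value means more preferred; "preferred
core" is a fully idle core, or any CPU when no idle-core preference is in
effect; "busy sibling" is a CPU whose SMT sibling is busy while idle cores
are being preferred):

	fits > 0,  preferred core -> return cpu immediately
	fits < 0,  preferred core -> ASYM_IDLE_CORE_UCLAMP_MISFIT    (-4)
	fits == 0, preferred core -> ASYM_IDLE_CORE_COMPLETE_MISFIT  (-3)
	fits > 0,  busy sibling   -> ASYM_IDLE_THREAD_FITS           (-2)
	fits < 0,  busy sibling   -> ASYM_IDLE_THREAD_UCLAMP_MISFIT  (-1)
	fits == 0, busy sibling   -> ASYM_IDLE_COMPLETE_MISFIT        (0)

where fits is the util_fits_cpu() return value.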

Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Christian Loehle <christian.loehle@arm.com>
Cc: Koba Ko <kobak@nvidia.com>
Reviewed-by: K Prateek Nayak <kprateek.nayak@amd.com>
Reported-by: Felix Abecassis <fabecassis@nvidia.com>
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
 kernel/sched/fair.c | 70 +++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 65 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ece3a26f59c27..6349a425cda7a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7989,6 +7989,22 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
 	return idle_cpu;
 }
 
+/*
+ * Idle-capacity scan ranks transformed util_fits_cpu() outcomes; lower values
+ * are more preferred (see select_idle_capacity()).
+ */
+enum asym_fits_state {
+	/* In descending order of preference */
+	ASYM_IDLE_CORE_UCLAMP_MISFIT = -4,
+	ASYM_IDLE_CORE_COMPLETE_MISFIT,
+	ASYM_IDLE_THREAD_FITS,
+	ASYM_IDLE_THREAD_UCLAMP_MISFIT,
+	ASYM_IDLE_COMPLETE_MISFIT,
+
+	/* util_fits_cpu() bias for an idle core. */
+	ASYM_IDLE_CORE_BIAS = -3,
+};
+
 /*
  * Scan the asym_capacity domain for idle CPUs; pick the first idle one on which
  * the task fits. If no CPU is big enough, but there are idle ones, try to
@@ -7997,8 +8013,9 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
 static int
 select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
 {
+	bool prefers_idle_core = sched_smt_active() && test_idle_cores(target);
 	unsigned long task_util, util_min, util_max, best_cap = 0;
-	int fits, best_fits = 0;
+	int fits, best_fits = ASYM_IDLE_COMPLETE_MISFIT;
 	int cpu, best_cpu = -1;
 	struct cpumask *cpus;
 
@@ -8010,6 +8027,7 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
 	util_max = uclamp_eff_value(p, UCLAMP_MAX);
 
 	for_each_cpu_wrap(cpu, cpus, target) {
+		bool preferred_core = !prefers_idle_core || is_core_idle(cpu);
 		unsigned long cpu_cap = capacity_of(cpu);
 
 		if (!choose_idle_cpu(cpu, p))
@@ -8018,7 +8036,7 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
 		fits = util_fits_cpu(task_util, util_min, util_max, cpu);
 
 		/* This CPU fits with all requirements */
-		if (fits > 0)
+		if (fits > 0 && preferred_core)
 			return cpu;
 		/*
 		 * Only the min performance hint (i.e. uclamp_min) doesn't fit.
@@ -8026,9 +8044,33 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
 		 */
 		else if (fits < 0)
 			cpu_cap = get_actual_cpu_capacity(cpu);
+		/*
+		 * fits > 0 implies we are not on a preferred core
+		 * but the util fits CPU capacity. Set fits to ASYM_IDLE_THREAD_FITS
+		 * so the effective range becomes
+		 * [ASYM_IDLE_THREAD_FITS, ASYM_IDLE_COMPLETE_MISFIT], where:
+		 *    ASYM_IDLE_COMPLETE_MISFIT - does not fit
+		 *    ASYM_IDLE_THREAD_UCLAMP_MISFIT - fits with the exception of UCLAMP_MIN
+		 *    ASYM_IDLE_THREAD_FITS - fits with the exception of preferred_core
+		 */
+		else if (fits > 0)
+			fits = ASYM_IDLE_THREAD_FITS;
+
+		/*
+		 * If we are on a preferred core, translate the range of fits
+		 * of [ASYM_IDLE_THREAD_UCLAMP_MISFIT, ASYM_IDLE_COMPLETE_MISFIT] to
+		 * [ASYM_IDLE_CORE_UCLAMP_MISFIT, ASYM_IDLE_CORE_COMPLETE_MISFIT].
+		 * This ensures that an idle core is always given priority over
+		 * (partially) busy core.
+		 *
+		 * A fully fitting idle core would have returned early and hence
+		 * fits > 0 for preferred_core need not be dealt with.
+		 */
+		if (preferred_core)
+			fits += ASYM_IDLE_CORE_BIAS;
 
 		/*
-		 * First, select CPU which fits better (-1 being better than 0).
+		 * First, select CPU which fits better (lower is more preferred).
 		 * Then, select the one with best capacity at same level.
 		 */
 		if ((fits < best_fits) ||
@@ -8039,6 +8081,19 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
 		}
 	}
 
+	/*
+	 * A value in the [ASYM_IDLE_CORE_UCLAMP_MISFIT, ASYM_IDLE_CORE_BIAS]
+	 * range means the chosen CPU is in a fully idle SMT core. Values above
+	 * ASYM_IDLE_CORE_BIAS mean we never ranked such a CPU best.
+	 *
+	 * The asym-capacity wakeup path returns from select_idle_sibling()
+	 * after this function and never runs select_idle_cpu(), so the usual
+	 * select_idle_cpu() tail that clears idle cores must live here when the
+	 * idle-core preference did not win.
+	 */
+	if (prefers_idle_core && best_fits > ASYM_IDLE_CORE_BIAS)
+		set_idle_cores(target, false);
+
 	return best_cpu;
 }
 
@@ -8047,12 +8102,17 @@ static inline bool asym_fits_cpu(unsigned long util,
 				 unsigned long util_max,
 				 int cpu)
 {
-	if (sched_asym_cpucap_active())
+	if (sched_asym_cpucap_active()) {
 		/*
 		 * Return true only if the cpu fully fits the task requirements
 		 * which include the utilization and the performance hints.
+		 *
+		 * When SMT is active, also require that the core has no busy
+		 * siblings.
 		 */
-		return (util_fits_cpu(util, util_min, util_max, cpu) > 0);
+		return (!sched_smt_active() || is_core_idle(cpu)) &&
+		       (util_fits_cpu(util, util_min, util_max, cpu) > 0);
+	}
 
 	return true;
 }
-- 
2.54.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 4/6] sched/fair: Reject misfit pulls onto busy SMT siblings on asym-capacity
  2026-04-28  5:16 [PATCH v4 0/6] sched/fair: SMT-aware asymmetric CPU capacity Andrea Righi
                   ` (2 preceding siblings ...)
  2026-04-28  5:16 ` [PATCH 3/6] sched/fair: Prefer fully-idle SMT cores in asym-capacity idle selection Andrea Righi
@ 2026-04-28  5:16 ` Andrea Righi
  2026-04-28  5:16 ` [PATCH 5/6] sched/fair: Add SIS_UTIL support to select_idle_capacity() Andrea Righi
  2026-04-28  5:16 ` [PATCH 6/6] sched/topology: Remove SMT/asym capacity warning Andrea Righi
  5 siblings, 0 replies; 20+ messages in thread
From: Andrea Righi @ 2026-04-28  5:16 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
  Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, K Prateek Nayak, Christian Loehle, Koba Ko,
	Felix Abecassis, Balbir Singh, Joel Fernandes, Shrikanth Hegde,
	linux-kernel

When SD_ASYM_CPUCAPACITY load balancing considers pulling a misfit task,
capacity_of(dst_cpu) can overstate available compute if the SMT sibling is
busy: the core does not deliver its full nominal capacity.

If SMT is active and dst_cpu is not on a fully idle core, skip this
destination so we do not migrate a misfit expecting a capacity upgrade we
cannot actually provide.
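
The gist of the added dst-side condition, as a minimal sketch (the actual
change caches the state in lb_env::dst_core_idle and folds it into the
existing group_misfit_task check, see below):

	if (sched_smt_active() && !is_core_idle(env->dst_cpu))
		return false;	/* dst_cpu is not a real capacity upgrade */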

Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Christian Loehle <christian.loehle@arm.com>
Cc: Koba Ko <kobak@nvidia.com>
Cc: K Prateek Nayak <kprateek.nayak@amd.com>
Reported-by: Felix Abecassis <fabecassis@nvidia.com>
Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
 kernel/sched/fair.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6349a425cda7a..3e61e1ec513c5 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9612,6 +9612,7 @@ struct lb_env {
 
 	int			dst_cpu;
 	struct rq		*dst_rq;
+	bool			dst_core_idle;
 
 	struct cpumask		*dst_grpmask;
 	int			new_dst_cpu;
@@ -10837,10 +10838,16 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 	 * We can use max_capacity here as reduction in capacity on some
 	 * CPUs in the group should either be possible to resolve
 	 * internally or be covered by avg_load imbalance (eventually).
+	 *
+	 * When SMT is active, only pull a misfit to dst_cpu if it is on a
+	 * fully idle core; otherwise the effective capacity of the core is
+	 * reduced and we may not actually provide more capacity than the
+	 * source.
 	 */
 	if ((env->sd->flags & SD_ASYM_CPUCAPACITY) &&
 	    (sgs->group_type == group_misfit_task) &&
-	    (!capacity_greater(capacity_of(env->dst_cpu), sg->sgc->max_capacity) ||
+	    (!env->dst_core_idle ||
+	     !capacity_greater(capacity_of(env->dst_cpu), sg->sgc->max_capacity) ||
 	     sds->local_stat.group_type != group_has_spare))
 		return false;
 
@@ -11404,6 +11411,8 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 	unsigned long sum_util = 0;
 	bool sg_overloaded = 0, sg_overutilized = 0;
 
+	env->dst_core_idle = !sched_smt_active() || is_core_idle(env->dst_cpu);
+
 	do {
 		struct sg_lb_stats *sgs = &tmp_sgs;
 		int local_group;
-- 
2.54.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 5/6] sched/fair: Add SIS_UTIL support to select_idle_capacity()
  2026-04-28  5:16 [PATCH v4 0/6] sched/fair: SMT-aware asymmetric CPU capacity Andrea Righi
                   ` (3 preceding siblings ...)
  2026-04-28  5:16 ` [PATCH 4/6] sched/fair: Reject misfit pulls onto busy SMT siblings on asym-capacity Andrea Righi
@ 2026-04-28  5:16 ` Andrea Righi
  2026-04-28  5:16 ` [PATCH 6/6] sched/topology: Remove SMT/asym capacity warning Andrea Righi
  5 siblings, 0 replies; 20+ messages in thread
From: Andrea Righi @ 2026-04-28  5:16 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
  Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, K Prateek Nayak, Christian Loehle, Koba Ko,
	Felix Abecassis, Balbir Singh, Joel Fernandes, Shrikanth Hegde,
	linux-kernel

From: K Prateek Nayak <kprateek.nayak@amd.com>

Add to select_idle_capacity() the same SIS_UTIL-controlled idle-scan
mechanism already used by select_idle_cpu(): when sched_feat(SIS_UTIL)
is enabled and the scanned domain has sched_domain_shared data, derive
the per-attempt scan limit from sd->shared->nr_idle_scan.

That bounds the walk over large domain spans and allows an early return
once the scan limit is reached, if we already picked a sufficiently
strong idle-core candidate (best_fits == ASYM_IDLE_CORE_UCLAMP_MISFIT).
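
For quick A/B comparisons the behaviour can be toggled at runtime via the
scheduler features interface (assuming CONFIG_SCHED_DEBUG), matching the
SIS_UTIL / NO_SIS_UTIL configurations in the cover letter:

	# echo NO_SIS_UTIL > /sys/kernel/debug/sched/features
	# echo SIS_UTIL > /sys/kernel/debug/sched/features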

Co-developed-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
 kernel/sched/fair.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3e61e1ec513c5..01b95e3b7c50b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8018,6 +8018,7 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
 	int fits, best_fits = ASYM_IDLE_COMPLETE_MISFIT;
 	int cpu, best_cpu = -1;
 	struct cpumask *cpus;
+	int nr = INT_MAX;
 
 	cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
 	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
@@ -8026,10 +8027,28 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
 	util_min = uclamp_eff_value(p, UCLAMP_MIN);
 	util_max = uclamp_eff_value(p, UCLAMP_MAX);
 
+	if (sched_feat(SIS_UTIL) && sd->shared) {
+		/*
+		 * Same nr_idle_scan hint as select_idle_cpu(), nr only limits
+		 * the scan when not preferring an idle core.
+		 */
+		nr = READ_ONCE(sd->shared->nr_idle_scan) + 1;
+		/* overloaded domain is unlikely to have idle cpu/core */
+		if (nr == 1)
+			return -1;
+	}
+
 	for_each_cpu_wrap(cpu, cpus, target) {
 		bool preferred_core = !prefers_idle_core || is_core_idle(cpu);
 		unsigned long cpu_cap = capacity_of(cpu);
 
+		/*
+		 * Good-enough early exit (mirrors select_idle_cpu() logic).
+		 */
+		if (!prefers_idle_core &&
+		    --nr <= 0 && best_fits == ASYM_IDLE_CORE_UCLAMP_MISFIT)
+			return best_cpu;
+
 		if (!choose_idle_cpu(cpu, p))
 			continue;
 
-- 
2.54.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 6/6] sched/topology: Remove SMT/asym capacity warning
  2026-04-28  5:16 [PATCH v4 0/6] sched/fair: SMT-aware asymmetric CPU capacity Andrea Righi
                   ` (4 preceding siblings ...)
  2026-04-28  5:16 ` [PATCH 5/6] sched/fair: Add SIS_UTIL support to select_idle_capacity() Andrea Righi
@ 2026-04-28  5:16 ` Andrea Righi
  2026-04-28  5:28   ` K Prateek Nayak
  5 siblings, 1 reply; 20+ messages in thread
From: Andrea Righi @ 2026-04-28  5:16 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
  Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, K Prateek Nayak, Christian Loehle, Koba Ko,
	Felix Abecassis, Balbir Singh, Joel Fernandes, Shrikanth Hegde,
	linux-kernel

Now that asymmetric CPU capacity supports SMT, the combination of
SD_SHARE_CPUCAPACITY and SD_ASYM_CPUCAPACITY is a valid topology
configuration.

The existing WARN_ONCE() therefore triggers on legitimate systems and no
longer provides useful information. Remove the warning to avoid spurious
reports.

Signed-off-by: Andrea Righi <arighi@nvidia.com>
---
 kernel/sched/topology.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 1e6ce369a4bbc..39805f09761cc 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1732,10 +1732,6 @@ sd_init(struct sched_domain_topology_level *tl,
 		.name			= tl->name,
 	};
 
-	WARN_ONCE((sd->flags & (SD_SHARE_CPUCAPACITY | SD_ASYM_CPUCAPACITY)) ==
-		  (SD_SHARE_CPUCAPACITY | SD_ASYM_CPUCAPACITY),
-		  "CPU capacity asymmetry not supported on SMT\n");
-
 	/*
 	 * Convert topological properties into behaviour.
 	 */
-- 
2.54.0


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH 6/6] sched/topology: Remove SMT/asym capacity warning
  2026-04-28  5:16 ` [PATCH 6/6] sched/topology: Remove SMT/asym capacity warning Andrea Righi
@ 2026-04-28  5:28   ` K Prateek Nayak
  2026-04-28  5:54     ` Andrea Righi
  0 siblings, 1 reply; 20+ messages in thread
From: K Prateek Nayak @ 2026-04-28  5:28 UTC (permalink / raw)
  To: Andrea Righi, Ingo Molnar, Peter Zijlstra, Juri Lelli,
	Vincent Guittot
  Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, Christian Loehle, Koba Ko, Felix Abecassis,
	Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel

Hello Andrea,

On 4/28/2026 10:46 AM, Andrea Righi wrote:
> @@ -1732,10 +1732,6 @@ sd_init(struct sched_domain_topology_level *tl,
>  		.name			= tl->name,
>  	};
>  
> -	WARN_ONCE((sd->flags & (SD_SHARE_CPUCAPACITY | SD_ASYM_CPUCAPACITY)) ==
> -		  (SD_SHARE_CPUCAPACITY | SD_ASYM_CPUCAPACITY),
> -		  "CPU capacity asymmetry not supported on SMT\n");
> -

Wait! Doesn't this mean we have ASYM_CAPACITY between the SMT siblings?
That seems wrong since we expect the siblings of cores to have the same
capacity.

Have you tripped this warning during your testing?

>  	/*
>  	 * Convert topological properties into behaviour.
>  	 */

-- 
Thanks and Regards,
Prateek


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 6/6] sched/topology: Remove SMT/asym capacity warning
  2026-04-28  5:28   ` K Prateek Nayak
@ 2026-04-28  5:54     ` Andrea Righi
  2026-04-28  6:04       ` Andrea Righi
  0 siblings, 1 reply; 20+ messages in thread
From: Andrea Righi @ 2026-04-28  5:54 UTC (permalink / raw)
  To: K Prateek Nayak
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, Christian Loehle, Koba Ko, Felix Abecassis,
	Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel

On Tue, Apr 28, 2026 at 10:58:05AM +0530, K Prateek Nayak wrote:
> Hello Andrea,
> 
> On 4/28/2026 10:46 AM, Andrea Righi wrote:
> > @@ -1732,10 +1732,6 @@ sd_init(struct sched_domain_topology_level *tl,
> >  		.name			= tl->name,
> >  	};
> >  
> > -	WARN_ONCE((sd->flags & (SD_SHARE_CPUCAPACITY | SD_ASYM_CPUCAPACITY)) ==
> > -		  (SD_SHARE_CPUCAPACITY | SD_ASYM_CPUCAPACITY),
> > -		  "CPU capacity asymmetry not supported on SMT\n");
> > -
> 
> Wait! Doesn't this mean we have ASYM_CAPACITY between the SMT siblings?
> That seems wrong since we expect the siblings of cores to have the same
> capacity.
> 
> Have you tripped this warning during your testing?

Hm... I think you're right, I may have seen this only in a VM, let me double
check.

-Andrea

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 6/6] sched/topology: Remove SMT/asym capacity warning
  2026-04-28  5:54     ` Andrea Righi
@ 2026-04-28  6:04       ` Andrea Righi
  0 siblings, 0 replies; 20+ messages in thread
From: Andrea Righi @ 2026-04-28  6:04 UTC (permalink / raw)
  To: K Prateek Nayak
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, Christian Loehle, Koba Ko, Felix Abecassis,
	Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel

On Tue, Apr 28, 2026 at 07:54:08AM +0200, Andrea Righi wrote:
> On Tue, Apr 28, 2026 at 10:58:05AM +0530, K Prateek Nayak wrote:
> > Hello Andrea,
> > 
> > On 4/28/2026 10:46 AM, Andrea Righi wrote:
> > > @@ -1732,10 +1732,6 @@ sd_init(struct sched_domain_topology_level *tl,
> > >  		.name			= tl->name,
> > >  	};
> > >  
> > > -	WARN_ONCE((sd->flags & (SD_SHARE_CPUCAPACITY | SD_ASYM_CPUCAPACITY)) ==
> > > -		  (SD_SHARE_CPUCAPACITY | SD_ASYM_CPUCAPACITY),
> > > -		  "CPU capacity asymmetry not supported on SMT\n");
> > > -
> > 
> > Wait! Doesn't this mean we have ASYM_CAPACITY between the SMT siblings?
> > That seems wrong since we expect the siblings of cores to have the same
> > capacity.
> > 
> > Have you tripped this warning during your testing?
> 
> Hm... I think you're right, I may have seen this only in a VM, let me double
> check.

Tested and I confirm that I don't see the warning on Vera, so please ignore
PATCH 6/6.

Thanks!
-Andrea

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 2/6] sched/fair: Attach sched_domain_shared to sd_asym_cpucapacity
  2026-04-28  5:16 ` [PATCH 2/6] sched/fair: Attach sched_domain_shared to sd_asym_cpucapacity Andrea Righi
@ 2026-04-28  6:45   ` Shrikanth Hegde
  2026-04-28  8:47     ` Andrea Righi
  0 siblings, 1 reply; 20+ messages in thread
From: Shrikanth Hegde @ 2026-04-28  6:45 UTC (permalink / raw)
  To: Andrea Righi, K Prateek Nayak
  Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, Christian Loehle, Koba Ko, Felix Abecassis,
	Balbir Singh, Joel Fernandes, linux-kernel, Ingo Molnar,
	Peter Zijlstra, Juri Lelli, Vincent Guittot



On 4/28/26 10:46 AM, Andrea Righi wrote:
> From: K Prateek Nayak <kprateek.nayak@amd.com>
> 
> On asymmetric CPU capacity systems, the wakeup path uses
> select_idle_capacity(), which scans the span of sd_asym_cpucapacity
> rather than sd_llc.
> 
> The has_idle_cores hint however lives on sd_llc->shared, so the
> wakeup-time read of has_idle_cores operates on an LLC-scoped blob while
> the actual scan/decision spans the asym domain; nr_busy_cpus also lives
> in the same shared sched_domain data, but it's never used in the asym
> CPU capacity scenario.
> 
> Therefore, move the sched_domain_shared object to sd_asym_cpucapacity
> whenever the CPU has a SD_ASYM_CPUCAPACITY_FULL ancestor and that
> ancestor is non-overlapping (i.e., not built from SD_NUMA). In that case
> the scope of has_idle_cores matches the scope of the wakeup scan.
> 
> Fall back to attaching the shared object to sd_llc in three cases:
> 
>    1) plain symmetric systems (no SD_ASYM_CPUCAPACITY_FULL anywhere);
> 
>    2) CPUs in an exclusive cpuset that carves out a symmetric capacity
>       island: has_asym is system-wide but those CPUs have no
>       SD_ASYM_CPUCAPACITY_FULL ancestor in their hierarchy and follow
>       the symmetric LLC path in select_idle_sibling();
> 
>    3) exotic topologies where SD_ASYM_CPUCAPACITY_FULL lands on an
>       SD_NUMA-built domain. init_sched_domain_shared() keys the shared
>       blob off cpumask_first(span), which on overlapping NUMA domains
>       would alias unrelated spans onto the same blob. Keep the shared
>       object on the LLC there; select_idle_capacity() gracefully skips
>       the has_idle_cores preference when sd->shared is NULL.
> 

Can you share the example topology where this benefits?

Is SD_ASYM_CPUCAPACITY_FULL one level above LLC but below NUMA?

> While at it, also rename the per-CPU sd_llc_shared to sd_balance_shared,
> as it is no longer strictly tied to the LLC.
> 

LLC scans happen at wakeup; the name sd_balance_shared suggests it is only for load balance.

> Co-developed-by: Andrea Righi <arighi@nvidia.com>
> Signed-off-by: Andrea Righi <arighi@nvidia.com>
> Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
> ---
>   kernel/sched/fair.c     | 20 +++++----
>   kernel/sched/sched.h    |  2 +-
>   kernel/sched/topology.c | 91 +++++++++++++++++++++++++++++++++++------
>   3 files changed, 91 insertions(+), 22 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index fc0828150c780..ece3a26f59c27 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7790,7 +7790,7 @@ static inline void set_idle_cores(int cpu, int val)
>   {
>   	struct sched_domain_shared *sds;
>   
> -	sds = rcu_dereference_all(per_cpu(sd_llc_shared, cpu));
> +	sds = rcu_dereference_all(per_cpu(sd_balance_shared, cpu));
>   	if (sds)
>   		WRITE_ONCE(sds->has_idle_cores, val);
>   }
> @@ -7799,7 +7799,7 @@ static inline bool test_idle_cores(int cpu)
>   {
>   	struct sched_domain_shared *sds;
>   
> -	sds = rcu_dereference_all(per_cpu(sd_llc_shared, cpu));
> +	sds = rcu_dereference_all(per_cpu(sd_balance_shared, cpu));
>   	if (sds)
>   		return READ_ONCE(sds->has_idle_cores);
>   
> @@ -7808,7 +7808,7 @@ static inline bool test_idle_cores(int cpu)
>   
>   /*
>    * Scans the local SMT mask to see if the entire core is idle, and records this
> - * information in sd_llc_shared->has_idle_cores.
> + * information in sd_balance_shared->has_idle_cores.
>    *
>    * Since SMT siblings share all cache levels, inspecting this limited remote
>    * state should be fairly cheap.
> @@ -7925,7 +7925,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
>   	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
>   	int i, cpu, idle_cpu = -1, nr = INT_MAX;
>   
> -	if (sched_feat(SIS_UTIL)) {
> +	if (sched_feat(SIS_UTIL) && sd->shared) {
>   		/*
>   		 * Increment because !--nr is the condition to stop scan.
>   		 *
> @@ -12759,7 +12759,7 @@ static bool nohz_balancer_needs_kick(struct rq *rq)
>   		return false;
>   	}
>   
> -	sds = rcu_dereference_all(per_cpu(sd_llc_shared, cpu));
> +	sds = rcu_dereference_all(per_cpu(sd_balance_shared, cpu));
>   	if (sds) {
>   		/*
>   		 * If there is an imbalance between LLC domains (IOW we could
> @@ -12841,10 +12841,13 @@ static void set_cpu_sd_state_busy(int cpu)
>   	guard(rcu)();
>   	sd = rcu_dereference_all(per_cpu(sd_llc, cpu));
>   
> -	if (!sd || !sd->nohz_idle)
> +	/*
> +	 * sd->nohz_idle only pairs with nr_busy_cpus on sd->shared; if this LLC
> +	 * domain has no shared object there is nothing to clear or account.
> +	 */
> +	if (!sd || !sd->shared || !sd->nohz_idle)
>   		return;
>   	sd->nohz_idle = 0;
> -
>   	atomic_inc(&sd->shared->nr_busy_cpus);
>   }
>   
> @@ -12868,7 +12871,8 @@ static void set_cpu_sd_state_idle(int cpu)
>   	guard(rcu)();
>   	sd = rcu_dereference_all(per_cpu(sd_llc, cpu));
>   
> -	if (!sd || sd->nohz_idle)
> +	/* See set_cpu_sd_state_busy(): nohz_idle is only used with sd->shared. */
> +	if (!sd || !sd->shared || sd->nohz_idle)
>   		return;
>   	sd->nohz_idle = 1;
>   
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 9f63b15d309d1..330f5893c4561 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -2170,7 +2170,7 @@ DECLARE_PER_CPU(struct sched_domain __rcu *, sd_llc);
>   DECLARE_PER_CPU(int, sd_llc_size);
>   DECLARE_PER_CPU(int, sd_llc_id);
>   DECLARE_PER_CPU(int, sd_share_id);
> -DECLARE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
> +DECLARE_PER_CPU(struct sched_domain_shared __rcu *, sd_balance_shared);
>   DECLARE_PER_CPU(struct sched_domain __rcu *, sd_numa);
>   DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
>   DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_cpucapacity);
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 5847b83d9d552..1e6ce369a4bbc 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -665,7 +665,7 @@ DEFINE_PER_CPU(struct sched_domain __rcu *, sd_llc);
>   DEFINE_PER_CPU(int, sd_llc_size);
>   DEFINE_PER_CPU(int, sd_llc_id);
>   DEFINE_PER_CPU(int, sd_share_id);
> -DEFINE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
> +DEFINE_PER_CPU(struct sched_domain_shared __rcu *, sd_balance_shared);
>   DEFINE_PER_CPU(struct sched_domain __rcu *, sd_numa);
>   DEFINE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
>   DEFINE_PER_CPU(struct sched_domain __rcu *, sd_asym_cpucapacity);
> @@ -680,20 +680,39 @@ static void update_top_cache_domain(int cpu)
>   	int id = cpu;
>   	int size = 1;
>   
> +	sd = lowest_flag_domain(cpu, SD_ASYM_CPUCAPACITY_FULL);
> +	/*
> +	 * The shared object is attached to sd_asym_cpucapacity only when the
> +	 * asym domain is non-overlapping (i.e., not built from SD_NUMA).
> +	 * On overlapping (NUMA) asym domains we fall back to letting the
> +	 * SD_SHARE_LLC path own the shared object, so sd->shared may be NULL
> +	 * here.
> +	 */
> +	if (sd && sd->shared)
> +		sds = sd->shared;
> +
> +	rcu_assign_pointer(per_cpu(sd_asym_cpucapacity, cpu), sd);
> +
>   	sd = highest_flag_domain(cpu, SD_SHARE_LLC);
>   	if (sd) {
>   		id = cpumask_first(sched_domain_span(sd));
>   		size = cpumask_weight(sched_domain_span(sd));
>   
> -		/* If sd_llc exists, sd_llc_shared should exist too. */
> -		WARN_ON_ONCE(!sd->shared);
> -		sds = sd->shared;
> +		/*
> +		 * If sd_asym_cpucapacity didn't claim the shared object,
> +		 * sd_llc must have one linked.
> +		 */
> +		if (!sds) {
> +			WARN_ON_ONCE(!sd->shared);
> +			sds = sd->shared;
> +		}
>   	}
>   
>   	rcu_assign_pointer(per_cpu(sd_llc, cpu), sd);
>   	per_cpu(sd_llc_size, cpu) = size;
>   	per_cpu(sd_llc_id, cpu) = id;
> -	rcu_assign_pointer(per_cpu(sd_llc_shared, cpu), sds);
> +
> +	rcu_assign_pointer(per_cpu(sd_balance_shared, cpu), sds);
>   
>   	sd = lowest_flag_domain(cpu, SD_CLUSTER);
>   	if (sd)
> @@ -711,9 +730,6 @@ static void update_top_cache_domain(int cpu)
>   
>   	sd = highest_flag_domain(cpu, SD_ASYM_PACKING);
>   	rcu_assign_pointer(per_cpu(sd_asym_packing, cpu), sd);
> -
> -	sd = lowest_flag_domain(cpu, SD_ASYM_CPUCAPACITY_FULL);
> -	rcu_assign_pointer(per_cpu(sd_asym_cpucapacity, cpu), sd);
>   }
>   
>   /*
> @@ -2650,6 +2666,49 @@ static void adjust_numa_imbalance(struct sched_domain *sd_llc)
>   	}
>   }
>   
> +static void init_sched_domain_shared(struct s_data *d, struct sched_domain *sd)
> +{
> +	int sd_id = cpumask_first(sched_domain_span(sd));
> +
> +	sd->shared = *per_cpu_ptr(d->sds, sd_id);
> +	atomic_set(&sd->shared->nr_busy_cpus, sd->span_weight);
> +	atomic_inc(&sd->shared->ref);
> +}
> +
> +/*
> + * For asymmetric CPU capacity, attach sched_domain_shared on the innermost
> + * SD_ASYM_CPUCAPACITY_FULL ancestor of @cpu's base domain when that ancestor is
> + * not an overlapping NUMA-built domain (then LLC should claim shared).
> + *
> + * A CPU may lack any FULL ancestor (e.g., exclusive cpuset symmetric island),
> + * then LLC must claim shared instead.
> + *
> + * Note: SD_ASYM_CPUCAPACITY_FULL is only set when multiple distinct capacities
> + * exist in the domain span, so the asym domain we attach to cannot degenerate
> + * into a single-capacity group. The relevant edge cases are instead covered by
> + * the caveats above.
> + *
> + * Return true if this CPU's asym path claimed sd->shared, false otherwise.
> + */
> +static bool claim_asym_sched_domain_shared(struct s_data *d, int cpu)
> +{
> +	struct sched_domain *sd = *per_cpu_ptr(d->sd, cpu);
> +	struct sched_domain *sd_asym;
> +
> +	if (!sd)
> +		return false;
> +
> +	sd_asym = sd;
> +	while (sd_asym && !(sd_asym->flags & SD_ASYM_CPUCAPACITY_FULL))
> +		sd_asym = sd_asym->parent;
> +
> +	if (!sd_asym || (sd_asym->flags & SD_NUMA))
> +		return false;
> +
> +	init_sched_domain_shared(d, sd_asym);
> +	return true;
> +}
> +
>   /*
>    * Build sched domains for a given set of CPUs and attach the sched domains
>    * to the individual CPUs
> @@ -2708,20 +2767,26 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
>   	}
>   
>   	for_each_cpu(i, cpu_map) {
> +		bool asym_claimed = false;
> +
>   		sd = *per_cpu_ptr(d.sd, i);
>   		if (!sd)
>   			continue;
>   
> +		if (has_asym)
> +			asym_claimed = claim_asym_sched_domain_shared(&d, i);
> +
>   		/* First, find the topmost SD_SHARE_LLC domain */
>   		while (sd->parent && (sd->parent->flags & SD_SHARE_LLC))
>   			sd = sd->parent;
>   
>   		if (sd->flags & SD_SHARE_LLC) {
> -			int sd_id = cpumask_first(sched_domain_span(sd));
> -
> -			sd->shared = *per_cpu_ptr(d.sds, sd_id);
> -			atomic_set(&sd->shared->nr_busy_cpus, sd->span_weight);
> -			atomic_inc(&sd->shared->ref);
> +			/*
> +			 * Initialize the sd->shared for SD_SHARE_LLC unless
> +			 * the asym path above already claimed it.
> +			 */
> +			if (!asym_claimed)
> +				init_sched_domain_shared(&d, sd);
>   
>   			/*
>   			 * In presence of higher domains, adjust the


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/6] sched/fair: Use guard(rcu) for sched_domain RCU sections
  2026-04-28  5:16 ` [PATCH 1/6] sched/fair: Use guard(rcu) for sched_domain RCU sections Andrea Righi
@ 2026-04-28  8:33   ` K Prateek Nayak
  2026-04-28 10:43     ` Andrea Righi
  2026-04-28 14:12     ` Steven Rostedt
  0 siblings, 2 replies; 20+ messages in thread
From: K Prateek Nayak @ 2026-04-28  8:33 UTC (permalink / raw)
  To: Andrea Righi, Ingo Molnar, Peter Zijlstra, Juri Lelli,
	Vincent Guittot
  Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, Christian Loehle, Koba Ko, Felix Abecassis,
	Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel

Hello Andrea,

On 4/28/2026 10:46 AM, Andrea Righi wrote:
> Use the scoped guard(rcu)() helper to safely access sched_domain
> pointers.
> 
> No functional change intended; this is preparation for topology work
> where sched_domain lifetimes are easier to reason about with explicit,
> scope-bounded RCU critical sections.
> 
> Suggested-by: K Prateek Nayak <kprateek.nayak@amd.com>
> Signed-off-by: Andrea Righi <arighi@nvidia.com>
> ---
>  kernel/sched/fair.c | 141 ++++++++++++++++++++++----------------------
>  1 file changed, 71 insertions(+), 70 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 69361c63353ad..fc0828150c780 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8083,6 +8083,8 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
>  	 */
>  	lockdep_assert_irqs_disabled();
>  
> +	guard(rcu)();

Since IRQs are disabled, we don't need an additional RCU read lock here.
See a03fee333a2f ("sched/fair: Remove superfluous rcu_read_lock()")
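
IOW, on top of this patch, the function can simply keep relying on the
existing assertion, something like (untested sketch):

	lockdep_assert_irqs_disabled();

	if (choose_idle_cpu(target, p) &&
	    asym_fits_cpu(task_util, util_min, util_max, target))
		return target;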

> +
>  	if (choose_idle_cpu(target, p) &&
>  	    asym_fits_cpu(task_util, util_min, util_max, target))
>  		return target;
> @@ -12701,55 +12703,16 @@ static void kick_ilb(unsigned int flags)
>  }
>  
>  /*
> - * Current decision point for kicking the idle load balancer in the presence
> - * of idle CPUs in the system.
> + * Decide whether the ILB needs a stats and/or balance kick based on
> + * sched_domain state.
>   */
> -static void nohz_balancer_kick(struct rq *rq)
> +static bool nohz_balancer_needs_kick(struct rq *rq)
>  {
> -	unsigned long now = jiffies;
>  	struct sched_domain_shared *sds;
>  	struct sched_domain *sd;
>  	int nr_busy, i, cpu = rq->cpu;
> -	unsigned int flags = 0;
> -
> -	if (unlikely(rq->idle_balance))
> -		return;
> -
> -	/*
> -	 * We may be recently in ticked or tickless idle mode. At the first
> -	 * busy tick after returning from idle, we will update the busy stats.
> -	 */
> -	nohz_balance_exit_idle(rq);
> -
> -	if (READ_ONCE(nohz.has_blocked_load) &&
> -	    time_after(now, READ_ONCE(nohz.next_blocked)))
> -		flags = NOHZ_STATS_KICK;
> -
> -	/*
> -	 * Most of the time system is not 100% busy. i.e nohz.nr_cpus > 0
> -	 * Skip the read if time is not due.
> -	 *
> -	 * If none are in tickless mode, there maybe a narrow window
> -	 * (28 jiffies, HZ=1000) where flags maybe set and kick_ilb called.
> -	 * But idle load balancing is not done as find_new_ilb fails.
> -	 * That's very rare. So read nohz.nr_cpus only if time is due.
> -	 */
> -	if (time_before(now, nohz.next_balance))
> -		goto out;
>  
> -	/*
> -	 * None are in tickless mode and hence no need for NOHZ idle load
> -	 * balancing
> -	 */
> -	if (unlikely(cpumask_empty(nohz.idle_cpus_mask)))
> -		return;
> -
> -	if (rq->nr_running >= 2) {
> -		flags = NOHZ_STATS_KICK | NOHZ_BALANCE_KICK;
> -		goto out;
> -	}
> -
> -	rcu_read_lock();
> +	guard(rcu)();

and since this is only called from:

  sched_tick() /* IRQs disabled */
    sched_balance_trigger()
      nohz_balancer_kick()

with IRQs disabled, we can get rid of that rcu_read_lock() entirely.
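
Something along these lines on top of this patch (untested sketch):

-	guard(rcu)();
+	/*
+	 * sched_tick() -> sched_balance_trigger() -> nohz_balancer_kick()
+	 * runs with IRQs disabled, which is already a valid RCU read-side
+	 * context for the rcu_dereference_all() calls below.
+	 */

 	sd = rcu_dereference_all(rq->sd);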

>  
>  	sd = rcu_dereference_all(rq->sd);
>  	if (sd) {
-- 
Thanks and Regards,
Prateek


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 2/6] sched/fair: Attach sched_domain_shared to sd_asym_cpucapacity
  2026-04-28  6:45   ` Shrikanth Hegde
@ 2026-04-28  8:47     ` Andrea Righi
  0 siblings, 0 replies; 20+ messages in thread
From: Andrea Righi @ 2026-04-28  8:47 UTC (permalink / raw)
  To: Shrikanth Hegde
  Cc: K Prateek Nayak, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Mel Gorman, Valentin Schneider, Christian Loehle, Koba Ko,
	Felix Abecassis, Balbir Singh, Joel Fernandes, linux-kernel,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot

Hi Shrikanth,

On Tue, Apr 28, 2026 at 12:15:15PM +0530, Shrikanth Hegde wrote:
> On 4/28/26 10:46 AM, Andrea Righi wrote:
> > From: K Prateek Nayak <kprateek.nayak@amd.com>
> > 
> > On asymmetric CPU capacity systems, the wakeup path uses
> > select_idle_capacity(), which scans the span of sd_asym_cpucapacity
> > rather than sd_llc.
> > 
> > The has_idle_cores hint however lives on sd_llc->shared, so the
> > wakeup-time read of has_idle_cores operates on an LLC-scoped blob while
> > the actual scan/decision spans the asym domain; nr_busy_cpus also lives
> > in the same shared sched_domain data, but it's never used in the asym
> > CPU capacity scenario.
> > 
> > Therefore, move the sched_domain_shared object to sd_asym_cpucapacity
> > whenever the CPU has a SD_ASYM_CPUCAPACITY_FULL ancestor and that
> > ancestor is non-overlapping (i.e., not built from SD_NUMA). In that case
> > the scope of has_idle_cores matches the scope of the wakeup scan.
> > 
> > Fall back to attaching the shared object to sd_llc in three cases:
> > 
> >    1) plain symmetric systems (no SD_ASYM_CPUCAPACITY_FULL anywhere);
> > 
> >    2) CPUs in an exclusive cpuset that carves out a symmetric capacity
> >       island: has_asym is system-wide but those CPUs have no
> >       SD_ASYM_CPUCAPACITY_FULL ancestor in their hierarchy and follow
> >       the symmetric LLC path in select_idle_sibling();
> > 
> >    3) exotic topologies where SD_ASYM_CPUCAPACITY_FULL lands on an
> >       SD_NUMA-built domain. init_sched_domain_shared() keys the shared
> >       blob off cpumask_first(span), which on overlapping NUMA domains
> >       would alias unrelated spans onto the same blob. Keep the shared
> >       object on the LLC there; select_idle_capacity() gracefully skips
> >       the has_idle_cores preference when sd->shared is NULL.
> > 
> 
> Can you share an example topology that benefits from this?

I've tested this on two systems: one with 1 NUMA node, 1 LLC, 88 SMT cores per
LLC (176 CPUs total), and one with 2 NUMA nodes, 2 LLCs (one per node), 88 SMT
cores per LLC (352 CPUs total). The CPU capacities range from 992 to 1024.

> 
> Is SD_ASYM_CPUCAPACITY_FULL one level above LLC but below NUMA?

In the system with a single node, SD_ASYM_CPUCAPACITY_FULL is at the LLC level;
in the system with 2 nodes it's at the NUMA level.
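
Roughly, the hierarchies look like this (simplified, just to show where the
flag sits):

  1 node : SMT -> LLC                 (SD_ASYM_CPUCAPACITY_FULL at LLC)
  2 nodes: SMT -> LLC -> NUMA         (SD_ASYM_CPUCAPACITY_FULL at NUMA)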

> 
> > While at it, also rename the per-CPU sd_llc_shared to sd_balance_shared,
> > as it is no longer strictly tied to the LLC.
> > 
> 
> LLC scans happen at wakeup; the name sd_balance_shared indicates it is for load balancing.

True, but sd_llc/balance_shared is used for the balancer kick logic, and the
idle CPU scan is still a form of balancing in the end... but I'm open to
suggestions if we find a better name.

Thanks,
-Andrea

> 
> > Co-developed-by: Andrea Righi <arighi@nvidia.com>
> > Signed-off-by: Andrea Righi <arighi@nvidia.com>
> > Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
> > ---
> >   kernel/sched/fair.c     | 20 +++++----
> >   kernel/sched/sched.h    |  2 +-
> >   kernel/sched/topology.c | 91 +++++++++++++++++++++++++++++++++++------
> >   3 files changed, 91 insertions(+), 22 deletions(-)
> > 
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index fc0828150c780..ece3a26f59c27 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -7790,7 +7790,7 @@ static inline void set_idle_cores(int cpu, int val)
> >   {
> >   	struct sched_domain_shared *sds;
> > -	sds = rcu_dereference_all(per_cpu(sd_llc_shared, cpu));
> > +	sds = rcu_dereference_all(per_cpu(sd_balance_shared, cpu));
> >   	if (sds)
> >   		WRITE_ONCE(sds->has_idle_cores, val);
> >   }
> > @@ -7799,7 +7799,7 @@ static inline bool test_idle_cores(int cpu)
> >   {
> >   	struct sched_domain_shared *sds;
> > -	sds = rcu_dereference_all(per_cpu(sd_llc_shared, cpu));
> > +	sds = rcu_dereference_all(per_cpu(sd_balance_shared, cpu));
> >   	if (sds)
> >   		return READ_ONCE(sds->has_idle_cores);
> > @@ -7808,7 +7808,7 @@ static inline bool test_idle_cores(int cpu)
> >   /*
> >    * Scans the local SMT mask to see if the entire core is idle, and records this
> > - * information in sd_llc_shared->has_idle_cores.
> > + * information in sd_balance_shared->has_idle_cores.
> >    *
> >    * Since SMT siblings share all cache levels, inspecting this limited remote
> >    * state should be fairly cheap.
> > @@ -7925,7 +7925,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
> >   	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
> >   	int i, cpu, idle_cpu = -1, nr = INT_MAX;
> > -	if (sched_feat(SIS_UTIL)) {
> > +	if (sched_feat(SIS_UTIL) && sd->shared) {
> >   		/*
> >   		 * Increment because !--nr is the condition to stop scan.
> >   		 *
> > @@ -12759,7 +12759,7 @@ static bool nohz_balancer_needs_kick(struct rq *rq)
> >   		return false;
> >   	}
> > -	sds = rcu_dereference_all(per_cpu(sd_llc_shared, cpu));
> > +	sds = rcu_dereference_all(per_cpu(sd_balance_shared, cpu));
> >   	if (sds) {
> >   		/*
> >   		 * If there is an imbalance between LLC domains (IOW we could
> > @@ -12841,10 +12841,13 @@ static void set_cpu_sd_state_busy(int cpu)
> >   	guard(rcu)();
> >   	sd = rcu_dereference_all(per_cpu(sd_llc, cpu));
> > -	if (!sd || !sd->nohz_idle)
> > +	/*
> > +	 * sd->nohz_idle only pairs with nr_busy_cpus on sd->shared; if this LLC
> > +	 * domain has no shared object there is nothing to clear or account.
> > +	 */
> > +	if (!sd || !sd->shared || !sd->nohz_idle)
> >   		return;
> >   	sd->nohz_idle = 0;
> > -
> >   	atomic_inc(&sd->shared->nr_busy_cpus);
> >   }
> > @@ -12868,7 +12871,8 @@ static void set_cpu_sd_state_idle(int cpu)
> >   	guard(rcu)();
> >   	sd = rcu_dereference_all(per_cpu(sd_llc, cpu));
> > -	if (!sd || sd->nohz_idle)
> > +	/* See set_cpu_sd_state_busy(): nohz_idle is only used with sd->shared. */
> > +	if (!sd || !sd->shared || sd->nohz_idle)
> >   		return;
> >   	sd->nohz_idle = 1;
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index 9f63b15d309d1..330f5893c4561 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -2170,7 +2170,7 @@ DECLARE_PER_CPU(struct sched_domain __rcu *, sd_llc);
> >   DECLARE_PER_CPU(int, sd_llc_size);
> >   DECLARE_PER_CPU(int, sd_llc_id);
> >   DECLARE_PER_CPU(int, sd_share_id);
> > -DECLARE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
> > +DECLARE_PER_CPU(struct sched_domain_shared __rcu *, sd_balance_shared);
> >   DECLARE_PER_CPU(struct sched_domain __rcu *, sd_numa);
> >   DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
> >   DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_cpucapacity);
> > diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> > index 5847b83d9d552..1e6ce369a4bbc 100644
> > --- a/kernel/sched/topology.c
> > +++ b/kernel/sched/topology.c
> > @@ -665,7 +665,7 @@ DEFINE_PER_CPU(struct sched_domain __rcu *, sd_llc);
> >   DEFINE_PER_CPU(int, sd_llc_size);
> >   DEFINE_PER_CPU(int, sd_llc_id);
> >   DEFINE_PER_CPU(int, sd_share_id);
> > -DEFINE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
> > +DEFINE_PER_CPU(struct sched_domain_shared __rcu *, sd_balance_shared);
> >   DEFINE_PER_CPU(struct sched_domain __rcu *, sd_numa);
> >   DEFINE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
> >   DEFINE_PER_CPU(struct sched_domain __rcu *, sd_asym_cpucapacity);
> > @@ -680,20 +680,39 @@ static void update_top_cache_domain(int cpu)
> >   	int id = cpu;
> >   	int size = 1;
> > +	sd = lowest_flag_domain(cpu, SD_ASYM_CPUCAPACITY_FULL);
> > +	/*
> > +	 * The shared object is attached to sd_asym_cpucapacity only when the
> > +	 * asym domain is non-overlapping (i.e., not built from SD_NUMA).
> > +	 * On overlapping (NUMA) asym domains we fall back to letting the
> > +	 * SD_SHARE_LLC path own the shared object, so sd->shared may be NULL
> > +	 * here.
> > +	 */
> > +	if (sd && sd->shared)
> > +		sds = sd->shared;
> > +
> > +	rcu_assign_pointer(per_cpu(sd_asym_cpucapacity, cpu), sd);
> > +
> >   	sd = highest_flag_domain(cpu, SD_SHARE_LLC);
> >   	if (sd) {
> >   		id = cpumask_first(sched_domain_span(sd));
> >   		size = cpumask_weight(sched_domain_span(sd));
> > -		/* If sd_llc exists, sd_llc_shared should exist too. */
> > -		WARN_ON_ONCE(!sd->shared);
> > -		sds = sd->shared;
> > +		/*
> > +		 * If sd_asym_cpucapacity didn't claim the shared object,
> > +		 * sd_llc must have one linked.
> > +		 */
> > +		if (!sds) {
> > +			WARN_ON_ONCE(!sd->shared);
> > +			sds = sd->shared;
> > +		}
> >   	}
> >   	rcu_assign_pointer(per_cpu(sd_llc, cpu), sd);
> >   	per_cpu(sd_llc_size, cpu) = size;
> >   	per_cpu(sd_llc_id, cpu) = id;
> > -	rcu_assign_pointer(per_cpu(sd_llc_shared, cpu), sds);
> > +
> > +	rcu_assign_pointer(per_cpu(sd_balance_shared, cpu), sds);
> >   	sd = lowest_flag_domain(cpu, SD_CLUSTER);
> >   	if (sd)
> > @@ -711,9 +730,6 @@ static void update_top_cache_domain(int cpu)
> >   	sd = highest_flag_domain(cpu, SD_ASYM_PACKING);
> >   	rcu_assign_pointer(per_cpu(sd_asym_packing, cpu), sd);
> > -
> > -	sd = lowest_flag_domain(cpu, SD_ASYM_CPUCAPACITY_FULL);
> > -	rcu_assign_pointer(per_cpu(sd_asym_cpucapacity, cpu), sd);
> >   }
> >   /*
> > @@ -2650,6 +2666,49 @@ static void adjust_numa_imbalance(struct sched_domain *sd_llc)
> >   	}
> >   }
> > +static void init_sched_domain_shared(struct s_data *d, struct sched_domain *sd)
> > +{
> > +	int sd_id = cpumask_first(sched_domain_span(sd));
> > +
> > +	sd->shared = *per_cpu_ptr(d->sds, sd_id);
> > +	atomic_set(&sd->shared->nr_busy_cpus, sd->span_weight);
> > +	atomic_inc(&sd->shared->ref);
> > +}
> > +
> > +/*
> > + * For asymmetric CPU capacity, attach sched_domain_shared to the innermost
> > + * SD_ASYM_CPUCAPACITY_FULL ancestor of @cpu's base domain when that ancestor
> > + * is not an overlapping NUMA-built domain (otherwise LLC should claim shared).
> > + *
> > + * A CPU may lack any FULL ancestor (e.g., an exclusive cpuset symmetric
> > + * island); in that case LLC must claim shared instead.
> > + *
> > + * Note: SD_ASYM_CPUCAPACITY_FULL is only set when multiple distinct capacities
> > + * exist in the domain span, so the asym domain we attach to cannot degenerate
> > + * into a single-capacity group. The relevant edge cases are instead covered by
> > + * the caveats above.
> > + *
> > + * Return true if this CPU's asym path claimed sd->shared, false otherwise.
> > + */
> > +static bool claim_asym_sched_domain_shared(struct s_data *d, int cpu)
> > +{
> > +	struct sched_domain *sd = *per_cpu_ptr(d->sd, cpu);
> > +	struct sched_domain *sd_asym;
> > +
> > +	if (!sd)
> > +		return false;
> > +
> > +	sd_asym = sd;
> > +	while (sd_asym && !(sd_asym->flags & SD_ASYM_CPUCAPACITY_FULL))
> > +		sd_asym = sd_asym->parent;
> > +
> > +	if (!sd_asym || (sd_asym->flags & SD_NUMA))
> > +		return false;
> > +
> > +	init_sched_domain_shared(d, sd_asym);
> > +	return true;
> > +}
> > +
> >   /*
> >    * Build sched domains for a given set of CPUs and attach the sched domains
> >    * to the individual CPUs
> > @@ -2708,20 +2767,26 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
> >   	}
> >   	for_each_cpu(i, cpu_map) {
> > +		bool asym_claimed = false;
> > +
> >   		sd = *per_cpu_ptr(d.sd, i);
> >   		if (!sd)
> >   			continue;
> > +		if (has_asym)
> > +			asym_claimed = claim_asym_sched_domain_shared(&d, i);
> > +
> >   		/* First, find the topmost SD_SHARE_LLC domain */
> >   		while (sd->parent && (sd->parent->flags & SD_SHARE_LLC))
> >   			sd = sd->parent;
> >   		if (sd->flags & SD_SHARE_LLC) {
> > -			int sd_id = cpumask_first(sched_domain_span(sd));
> > -
> > -			sd->shared = *per_cpu_ptr(d.sds, sd_id);
> > -			atomic_set(&sd->shared->nr_busy_cpus, sd->span_weight);
> > -			atomic_inc(&sd->shared->ref);
> > +			/*
> > +			 * Initialize the sd->shared for SD_SHARE_LLC unless
> > +			 * the asym path above already claimed it.
> > +			 */
> > +			if (!asym_claimed)
> > +				init_sched_domain_shared(&d, sd);
> >   			/*
> >   			 * In presence of higher domains, adjust the
> 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/6] sched/fair: Use guard(rcu) for sched_domain RCU sections
  2026-04-28  8:33   ` K Prateek Nayak
@ 2026-04-28 10:43     ` Andrea Righi
  2026-04-28 11:04       ` K Prateek Nayak
  2026-04-28 11:50       ` Peter Zijlstra
  2026-04-28 14:12     ` Steven Rostedt
  1 sibling, 2 replies; 20+ messages in thread
From: Andrea Righi @ 2026-04-28 10:43 UTC (permalink / raw)
  To: K Prateek Nayak
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, Christian Loehle, Koba Ko, Felix Abecassis,
	Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel

Hi Prateek,

On Tue, Apr 28, 2026 at 02:03:59PM +0530, K Prateek Nayak wrote:
> Hello Andrea,
> 
> On 4/28/2026 10:46 AM, Andrea Righi wrote:
> > Use the scoped guard(rcu)() helper to safely access sched_domain
> > pointers.
> > 
> > No functional change intended; this is preparation for topology work
> > where sched_domain lifetimes are easier to reason about with explicit,
> > scope-bounded RCU critical sections.
> > 
> > Suggested-by: K Prateek Nayak <kprateek.nayak@amd.com>
> > Signed-off-by: Andrea Righi <arighi@nvidia.com>
> > ---
> >  kernel/sched/fair.c | 141 ++++++++++++++++++++++----------------------
> >  1 file changed, 71 insertions(+), 70 deletions(-)
> > 
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 69361c63353ad..fc0828150c780 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -8083,6 +8083,8 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
> >  	 */
> >  	lockdep_assert_irqs_disabled();
> >  
> > +	guard(rcu)();
> 
> Since IRQs are disabled, we don't need an additional RCU read lock here.
> See a03fee333a2f ("sched/fair: Remove superfluous rcu_read_lock()")

Ack.

> 
> > +
> >  	if (choose_idle_cpu(target, p) &&
> >  	    asym_fits_cpu(task_util, util_min, util_max, target))
> >  		return target;
> > @@ -12701,55 +12703,16 @@ static void kick_ilb(unsigned int flags)
> >  }
> >  
> >  /*
> > - * Current decision point for kicking the idle load balancer in the presence
> > - * of idle CPUs in the system.
> > + * Decide whether the ILB needs a stats and/or balance kick based on
> > + * sched_domain state.
> >   */
> > -static void nohz_balancer_kick(struct rq *rq)
> > +static bool nohz_balancer_needs_kick(struct rq *rq)
> >  {
> > -	unsigned long now = jiffies;
> >  	struct sched_domain_shared *sds;
> >  	struct sched_domain *sd;
> >  	int nr_busy, i, cpu = rq->cpu;
> > -	unsigned int flags = 0;
> > -
> > -	if (unlikely(rq->idle_balance))
> > -		return;
> > -
> > -	/*
> > -	 * We may be recently in ticked or tickless idle mode. At the first
> > -	 * busy tick after returning from idle, we will update the busy stats.
> > -	 */
> > -	nohz_balance_exit_idle(rq);
> > -
> > -	if (READ_ONCE(nohz.has_blocked_load) &&
> > -	    time_after(now, READ_ONCE(nohz.next_blocked)))
> > -		flags = NOHZ_STATS_KICK;
> > -
> > -	/*
> > -	 * Most of the time system is not 100% busy. i.e nohz.nr_cpus > 0
> > -	 * Skip the read if time is not due.
> > -	 *
> > -	 * If none are in tickless mode, there maybe a narrow window
> > -	 * (28 jiffies, HZ=1000) where flags maybe set and kick_ilb called.
> > -	 * But idle load balancing is not done as find_new_ilb fails.
> > -	 * That's very rare. So read nohz.nr_cpus only if time is due.
> > -	 */
> > -	if (time_before(now, nohz.next_balance))
> > -		goto out;
> >  
> > -	/*
> > -	 * None are in tickless mode and hence no need for NOHZ idle load
> > -	 * balancing
> > -	 */
> > -	if (unlikely(cpumask_empty(nohz.idle_cpus_mask)))
> > -		return;
> > -
> > -	if (rq->nr_running >= 2) {
> > -		flags = NOHZ_STATS_KICK | NOHZ_BALANCE_KICK;
> > -		goto out;
> > -	}
> > -
> > -	rcu_read_lock();
> > +	guard(rcu)();
> 
> and since this is only called from:
> 
>   sched_tick() /* IRQs disabled */
>     sched_balance_trigger()
>       nohz_balancer_kick()
> 
> with IRQs disabled, we can get rid of that rcu_read_lock() entirely.

Yeah, all makes sense. I'll update the patch dropping rcu_read_lock/unlock()
completely.

Is it worth adding a lockdep_assert_irqs_disabled()?
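
Something like this, just to make sure we mean the same thing (untested
sketch on top of the reworked patch):

 static bool nohz_balancer_needs_kick(struct rq *rq)
 {
 	struct sched_domain_shared *sds;
 	struct sched_domain *sd;
 	int nr_busy, i, cpu = rq->cpu;

+	/* Only called from sched_tick() with IRQs disabled. */
+	lockdep_assert_irqs_disabled();

 	sd = rcu_dereference_all(rq->sd);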

Thanks,
-Andrea

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/6] sched/fair: Use guard(rcu) for sched_domain RCU sections
  2026-04-28 10:43     ` Andrea Righi
@ 2026-04-28 11:04       ` K Prateek Nayak
  2026-04-28 11:50       ` Peter Zijlstra
  1 sibling, 0 replies; 20+ messages in thread
From: K Prateek Nayak @ 2026-04-28 11:04 UTC (permalink / raw)
  To: Andrea Righi
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, Christian Loehle, Koba Ko, Felix Abecassis,
	Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel

Hello Andrea,

On 4/28/2026 4:13 PM, Andrea Righi wrote:
>>> -	rcu_read_lock();
>>> +	guard(rcu)();
>>
>> and since this is only called from:
>>
>>   sched_tick() /* IRQs disabled */
>>     sched_balance_trigger()
>>       nohz_balancer_kick()
>>
>> with IRQs disabled, we can get rid of that rcu_read_lock() entirely.
> 
> Yeah, all makes sense. I'll update the patch dropping rcu_read_lock/unlock()
> completely.
> 
> Is it worth adding a lockdep_assert_irqs_disabled()?

Yes, that would be nice. Thank you!

-- 
Thanks and Regards,
Prateek


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/6] sched/fair: Use guard(rcu) for sched_domain RCU sections
  2026-04-28 10:43     ` Andrea Righi
  2026-04-28 11:04       ` K Prateek Nayak
@ 2026-04-28 11:50       ` Peter Zijlstra
  2026-04-28 13:16         ` Andrea Righi
  1 sibling, 1 reply; 20+ messages in thread
From: Peter Zijlstra @ 2026-04-28 11:50 UTC (permalink / raw)
  To: Andrea Righi
  Cc: K Prateek Nayak, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, Christian Loehle, Koba Ko, Felix Abecassis,
	Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel

On Tue, Apr 28, 2026 at 12:43:22PM +0200, Andrea Righi wrote:

> > and since this is only called from:
> > 
> >   sched_tick() /* IRQs disabled */
> >     sched_balance_trigger()
> >       nohz_balancer_kick()
> > 
> > with IRQs disabled, we can get rid of that rcu_read_lock() entirely.
> 
> Yeah, all makes sense. I'll update the patch dropping rcu_read_lock/unlock()
> completely.
> 
> Is it worth adding a lockdep_assert_irqs_disabled()?

I think the rcu_dereference_all() thing will scream if it doesn't
have any of IRQs/preempt/rcu disabled.
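
Roughly (not the literal definition, just the idea), it boils down to a
checked dereference that accepts any RCU reader context:

	#define rcu_dereference_all(p) \
		rcu_dereference_check((p), rcu_read_lock_any_held())

and that condition is also satisfied with IRQs or preemption disabled, so
lockdep will complain if none of those hold.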

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/6] sched/fair: Use guard(rcu) for sched_domain RCU sections
  2026-04-28 11:50       ` Peter Zijlstra
@ 2026-04-28 13:16         ` Andrea Righi
  0 siblings, 0 replies; 20+ messages in thread
From: Andrea Righi @ 2026-04-28 13:16 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: K Prateek Nayak, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider, Christian Loehle, Koba Ko, Felix Abecassis,
	Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel

Hi Peter,

On Tue, Apr 28, 2026 at 01:50:17PM +0200, Peter Zijlstra wrote:
> On Tue, Apr 28, 2026 at 12:43:22PM +0200, Andrea Righi wrote:
> 
> > > and since this is only called from:
> > > 
> > >   sched_tick() /* IRQs disabled */
> > >     sched_balance_trigger()
> > >       nohz_balancer_kick()
> > > 
> > > with IRQs disabled, we can get rid of that rcu_read_lock() entirely.
> > 
> > Yeah, all makes sense. I'll update the patch dropping rcu_read_lock/unlock()
> > completely.
> > 
> > Is it worth adding a lockdep_assert_irqs_disabled()?
> 
> I think the rcu_dereference_all() thing will scream if it doesn't
> have any of IRQs/preempt/rcu disabled.

Ah yes, indeed, I'll drop the lockdep_assert_irqs_disabled().

Thanks,
-Andrea

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/6] sched/fair: Use guard(rcu) for sched_domain RCU sections
  2026-04-28  8:33   ` K Prateek Nayak
  2026-04-28 10:43     ` Andrea Righi
@ 2026-04-28 14:12     ` Steven Rostedt
  2026-04-28 14:26       ` Andrea Righi
  1 sibling, 1 reply; 20+ messages in thread
From: Steven Rostedt @ 2026-04-28 14:12 UTC (permalink / raw)
  To: K Prateek Nayak
  Cc: Andrea Righi, Ingo Molnar, Peter Zijlstra, Juri Lelli,
	Vincent Guittot, Dietmar Eggemann, Ben Segall, Mel Gorman,
	Valentin Schneider, Christian Loehle, Koba Ko, Felix Abecassis,
	Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel

On Tue, 28 Apr 2026 14:03:59 +0530
K Prateek Nayak <kprateek.nayak@amd.com> wrote:

> > -	if (rq->nr_running >= 2) {
> > -		flags = NOHZ_STATS_KICK | NOHZ_BALANCE_KICK;
> > -		goto out;
> > -	}
> > -
> > -	rcu_read_lock();
> > +	guard(rcu)();  
> 
> and since this is only called from:
> 
>   sched_tick() /* IRQs disabled */
>     sched_balance_trigger()
>       nohz_balancer_kick()
> 
> with IRQs disabled, we can get rid of that rcu_read_lock() entirely.

Then we need to add a lockdep_assert_irqs_disabled() here too.

-- Steve

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/6] sched/fair: Use guard(rcu) for sched_domain RCU sections
  2026-04-28 14:12     ` Steven Rostedt
@ 2026-04-28 14:26       ` Andrea Righi
  2026-04-28 14:29         ` Steven Rostedt
  0 siblings, 1 reply; 20+ messages in thread
From: Andrea Righi @ 2026-04-28 14:26 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: K Prateek Nayak, Ingo Molnar, Peter Zijlstra, Juri Lelli,
	Vincent Guittot, Dietmar Eggemann, Ben Segall, Mel Gorman,
	Valentin Schneider, Christian Loehle, Koba Ko, Felix Abecassis,
	Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel

Hi Steven,

On Tue, Apr 28, 2026 at 10:12:09AM -0400, Steven Rostedt wrote:
> On Tue, 28 Apr 2026 14:03:59 +0530
> K Prateek Nayak <kprateek.nayak@amd.com> wrote:
> 
> > > -	if (rq->nr_running >= 2) {
> > > -		flags = NOHZ_STATS_KICK | NOHZ_BALANCE_KICK;
> > > -		goto out;
> > > -	}
> > > -
> > > -	rcu_read_lock();
> > > +	guard(rcu)();  
> > 
> > and since this is only called from:
> > 
> >   sched_tick() /* IRQs disabled */
> >     sched_balance_trigger()
> >       nohz_balancer_kick()
> > 
> > with IRQs disabled, we can get rid of that rcu_read_lock() entirely.
> 
> Then we need to add a lockdep_assert_irqs_disabled() here too.

Yeah, initially I suggested adding a lockdep_assert_irqs_disabled() in
nohz_balancer_kick(), but what we care about here is RCU safety; if we do
something unsafe in the future we should be able to catch that with
rcu_dereference_all(), as pointed out by Peter, right?

Thanks,
-Andrea

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/6] sched/fair: Use guard(rcu) for sched_domain RCU sections
  2026-04-28 14:26       ` Andrea Righi
@ 2026-04-28 14:29         ` Steven Rostedt
  0 siblings, 0 replies; 20+ messages in thread
From: Steven Rostedt @ 2026-04-28 14:29 UTC (permalink / raw)
  To: Andrea Righi
  Cc: K Prateek Nayak, Ingo Molnar, Peter Zijlstra, Juri Lelli,
	Vincent Guittot, Dietmar Eggemann, Ben Segall, Mel Gorman,
	Valentin Schneider, Christian Loehle, Koba Ko, Felix Abecassis,
	Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel

On Tue, 28 Apr 2026 16:26:02 +0200
Andrea Righi <arighi@nvidia.com> wrote:

> > > with IRQs disabled, we can get rid of that rcu_read_lock() entirely.  
> > 
> > Then we need to add a lockdep_assert_irqs_disabled() here too.  
> 
> Yeah, initially I suggested adding a lockdep_assert_irqs_disabled() in
> nohz_balancer_kick(), but what we care about here is RCU safety; if we do
> something unsafe in the future we should be able to catch that with
> rcu_dereference_all(), as pointed out by Peter, right?

Yeah, I posted this before I saw Peter's reply.

-- Steve

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2026-04-28 14:29 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2026-04-28  5:16 [PATCH v4 0/6] sched/fair: SMT-aware asymmetric CPU capacity Andrea Righi
2026-04-28  5:16 ` [PATCH 1/6] sched/fair: Use guard(rcu) for sched_domain RCU sections Andrea Righi
2026-04-28  8:33   ` K Prateek Nayak
2026-04-28 10:43     ` Andrea Righi
2026-04-28 11:04       ` K Prateek Nayak
2026-04-28 11:50       ` Peter Zijlstra
2026-04-28 13:16         ` Andrea Righi
2026-04-28 14:12     ` Steven Rostedt
2026-04-28 14:26       ` Andrea Righi
2026-04-28 14:29         ` Steven Rostedt
2026-04-28  5:16 ` [PATCH 2/6] sched/fair: Attach sched_domain_shared to sd_asym_cpucapacity Andrea Righi
2026-04-28  6:45   ` Shrikanth Hegde
2026-04-28  8:47     ` Andrea Righi
2026-04-28  5:16 ` [PATCH 3/6] sched/fair: Prefer fully-idle SMT cores in asym-capacity idle selection Andrea Righi
2026-04-28  5:16 ` [PATCH 4/6] sched/fair: Reject misfit pulls onto busy SMT siblings on asym-capacity Andrea Righi
2026-04-28  5:16 ` [PATCH 5/6] sched/fair: Add SIS_UTIL support to select_idle_capacity() Andrea Righi
2026-04-28  5:16 ` [PATCH 6/6] sched/topology: Remove SMT/asym capacity warning Andrea Righi
2026-04-28  5:28   ` K Prateek Nayak
2026-04-28  5:54     ` Andrea Righi
2026-04-28  6:04       ` Andrea Righi

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox