* Re: [PATCH 1/5] sched/fair: Drop redundant RCU read lock in NOHZ kick path
[not found] ` <20260428144352.3575863-2-arighi@nvidia.com>
@ 2026-05-05 9:15 ` Dietmar Eggemann
2026-05-05 9:22 ` Andrea Righi
0 siblings, 1 reply; 21+ messages in thread
From: Dietmar Eggemann @ 2026-05-05 9:15 UTC (permalink / raw)
To: Andrea Righi, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Vincent Guittot
Cc: Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
K Prateek Nayak, Christian Loehle, Koba Ko, Felix Abecassis,
Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel
On 28.04.26 16:41, Andrea Righi wrote:
[...]
> @@ -12799,10 +12795,10 @@ static void nohz_balancer_kick(struct rq *rq)
> *
> * Skip the LLC logic because it's not relevant in that case.
> */
> - goto unlock;
> + goto out;
> }
>
> - sds = rcu_dereference_all(per_cpu(sd_llc_shared, cpu));
> + sds = rcu_dereference_all(per_cpu(sd_balance_shared, cpu));
nit: sd_balance_shared is only defined in 2/5.
[...]
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 1/5] sched/fair: Drop redundant RCU read lock in NOHZ kick path
2026-05-05 9:15 ` [PATCH 1/5] sched/fair: Drop redundant RCU read lock in NOHZ kick path Dietmar Eggemann
@ 2026-05-05 9:22 ` Andrea Righi
0 siblings, 0 replies; 21+ messages in thread
From: Andrea Righi @ 2026-05-05 9:22 UTC (permalink / raw)
To: Dietmar Eggemann
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
K Prateek Nayak, Christian Loehle, Koba Ko, Felix Abecassis,
Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel
Hi Dietmar,
On Tue, May 05, 2026 at 11:15:12AM +0200, Dietmar Eggemann wrote:
> On 28.04.26 16:41, Andrea Righi wrote:
>
> [...]
>
> > @@ -12799,10 +12795,10 @@ static void nohz_balancer_kick(struct rq *rq)
> > *
> > * Skip the LLC logic because it's not relevant in that case.
> > */
> > - goto unlock;
> > + goto out;
> > }
> >
> > - sds = rcu_dereference_all(per_cpu(sd_llc_shared, cpu));
> > + sds = rcu_dereference_all(per_cpu(sd_balance_shared, cpu));
>
> nit: sd_balance_shared is only defined in 2/5.
Ah, good catch! Apparently I forgot to test-build each individual patch;
I'll fix this.
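Something like

  $ git rebase <base> --exec "make -j$(nproc) kernel/sched/"

across the series should catch these going forward (<base> being
whatever the series applies on top of).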
Thanks,
-Andrea
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 2/5] sched/fair: Attach sched_domain_shared to sd_asym_cpucapacity
[not found] ` <20260428144352.3575863-3-arighi@nvidia.com>
@ 2026-05-05 12:48 ` Dietmar Eggemann
2026-05-06 9:45 ` Vincent Guittot
1 sibling, 0 replies; 21+ messages in thread
From: Dietmar Eggemann @ 2026-05-05 12:48 UTC (permalink / raw)
To: Andrea Righi, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Vincent Guittot
Cc: Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
K Prateek Nayak, Christian Loehle, Koba Ko, Felix Abecassis,
Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel
On 28.04.26 16:41, Andrea Righi wrote:
> From: K Prateek Nayak <kprateek.nayak@amd.com>
>
> On asymmetric CPU capacity systems, the wakeup path uses
> select_idle_capacity(), which scans the span of sd_asym_cpucapacity
> rather than sd_llc.
>
> The has_idle_cores hint however lives on sd_llc->shared, so the
> wakeup-time read of has_idle_cores operates on an LLC-scoped blob while
> the actual scan/decision spans the asym domain; nr_busy_cpus also lives
> in the same shared sched_domain data, but it's never used in the asym
> CPU capacity scenario.
>
> Therefore, move the sched_domain_shared object to sd_asym_cpucapacity
> whenever the CPU has a SD_ASYM_CPUCAPACITY_FULL ancestor and that
> ancestor is non-overlapping (i.e., not built from SD_NUMA). In that case
> the scope of has_idle_cores matches the scope of the wakeup scan.
>
> Fall back to attaching the shared object to sd_llc in three cases:
>
> 1) plain symmetric systems (no SD_ASYM_CPUCAPACITY_FULL anywhere);
>
> 2) CPUs in an exclusive cpuset that carves out a symmetric capacity
> island: has_asym is system-wide but those CPUs have no
> SD_ASYM_CPUCAPACITY_FULL ancestor in their hierarchy and follow
> the symmetric LLC path in select_idle_sibling();
>
> 3) exotic topologies where SD_ASYM_CPUCAPACITY_FULL lands on an
> SD_NUMA-built domain. init_sched_domain_shared() keys the shared
> blob off cpumask_first(span), which on overlapping NUMA domains
> would alias unrelated spans onto the same blob. Keep the shared
> object on the LLC there; select_idle_capacity() gracefully skips
> the has_idle_cores preference when sd->shared is NULL.
Tested it with a couple of real & exotic topologies; seems to work nicely.
$ cat /sys/devices/system/cpu/cpu*/cpu_capacity
160
160
160
160
498
498
1024
1024
(1) grouping CPUs with same CPU capacities
$ cat /sys/kernel/debug/sched/domains/cpu[0-7]/domain*/name
MC
PKG
$ cat /sys/kernel/debug/sched/domains/cpu[0-7]/domain*/flags
... SD_SHARE_LLC
... SD_ASYM_CPUCAPACITY SD_ASYM_CPUCAPACITY_FULL ...
PKG { 0-7 }
MC {0-3} {4,5} {6,7}
(2) flat
$ cat /sys/kernel/debug/sched/domains/cpu[0-7]/domain*/name
MC
$ cat /sys/kernel/debug/sched/domains/cpu[0-7]/domain*/flags
... SD_ASYM_CPUCAPACITY SD_ASYM_CPUCAPACITY_FULL ...
MC { 0-7 }
(3) flat, exotic, since w/ SMT
$ cat /sys/kernel/debug/sched/domains/cpu[0-7]/domain*/name
SMT
MC
$ cat /sys/kernel/debug/sched/domains/cpu[0-7]/domain*/flags
... SD_SHARE_CPUCAPACITY SD_SHARE_LLC ...
... SD_ASYM_CPUCAPACITY SD_ASYM_CPUCAPACITY_FULL SD_SHARE_LLC ...
MC { 0-7 }
SMT {0-1} {2-3} {4-5} {6-7}
(4) exotic, since asymmetric and w/ SMT
$ cat /sys/kernel/debug/sched/domains/cpu[0-3]/domain*/name
SMT
MC
PKG
$ cat /sys/kernel/debug/sched/domains/cpu[0-3]/domain*/flags
... SD_SHARE_CPUCAPACITY SD_SHARE_LLC ...
... SD_SHARE_LLC
... SD_ASYM_CPUCAPACITY SD_ASYM_CPUCAPACITY_FULL ...
$ cat /sys/kernel/debug/sched/domains/cpu[4-7]/domain*/name
SMT
PKG
$ cat /sys/kernel/debug/sched/domains/cpu[4-7]/domain*/flags
... SD_SHARE_CPUCAPACITY SD_SHARE_LLC ...
... SD_ASYM_CPUCAPACITY SD_ASYM_CPUCAPACITY_FULL ...
PKG { 0-7 }
MC { 0-3 }
SMT {0-1} {2-3} {4-5} {6-7}
(5) same as (4) but partial CPU capacity asymmetry in MC { 0-3 }
$ cat /sys/devices/system/cpu/cpu*/cpu_capacity
160
160
498
498
160
160
1024
1024
$ cat /sys/kernel/debug/sched/domains/cpu[0-3]/domain*/flags
... SD_SHARE_CPUCAPACITY SD_SHARE_LLC ...
... SD_ASYM_CPUCAPACITY SD_SHARE_LLC ...
^^^^^^^^^^^^^^^^^^^
... SD_ASYM_CPUCAPACITY SD_ASYM_CPUCAPACITY_FULL ...
(6) same as (5) but w/ exclusive cpusets with one symmetric island
cd /sys/fs/cgroup
echo +cpuset > cgroup.subtree_control
mkdir cs1
echo "threaded" > cs1/cgroup.type
echo 0-1,4-5 > cs1/cpuset.cpus
echo 0 > cs1/cpuset.mems
echo root > cs1/cpuset.cpus.partition
mkdir cs2
echo "threaded" > cs2/cgroup.type
echo 0 > cs2/cpuset.mems
echo 2-3,6-7 > cs2/cpuset.cpus
echo root > cs2/cpuset.cpus.partition
[ 0.006866] claim_asym_sched_domain_shared() (2) sd_asym=PKG cpu=0
[ 0.006868] claim_asym_sched_domain_shared() (2) sd_asym=PKG cpu=1
[ 0.006869] claim_asym_sched_domain_shared() (2) sd_asym=PKG cpu=2
[ 0.006869] claim_asym_sched_domain_shared() (2) sd_asym=PKG cpu=3
[ 0.006869] claim_asym_sched_domain_shared() (2) sd_asym=PKG cpu=4
[ 0.006869] claim_asym_sched_domain_shared() (2) sd_asym=PKG cpu=5
[ 0.006870] claim_asym_sched_domain_shared() (2) sd_asym=PKG cpu=6
[ 0.006870] claim_asym_sched_domain_shared() (2) sd_asym=PKG cpu=7
...
[ 222.767275] claim_asym_sched_domain_shared() (2) sd_asym=PKG cpu=2
[ 222.767324] claim_asym_sched_domain_shared() (2) sd_asym=PKG cpu=3
[ 222.767710] claim_asym_sched_domain_shared() (2) sd_asym=PKG cpu=6
[ 222.767789] claim_asym_sched_domain_shared() (2) sd_asym=PKG cpu=7
[ 222.781015] build_sched_domains() (3) sd=MC cpu=0
[ 222.781017] build_sched_domains() (3) sd=MC cpu=1
[ 222.781017] build_sched_domains() (3) sd=MC cpu=4
[ 222.781018] build_sched_domains() (3) sd=MC cpu=5
[...]
> @@ -2650,6 +2665,49 @@ static void adjust_numa_imbalance(struct sched_domain *sd_llc)
> }
> }
>
> +static void init_sched_domain_shared(struct s_data *d, struct sched_domain *sd)
> +{
> + int sd_id = cpumask_first(sched_domain_span(sd));
> +
> + sd->shared = *per_cpu_ptr(d->sds, sd_id);
> + atomic_set(&sd->shared->nr_busy_cpus, sd->span_weight);
Will be used only for sd_llc->shared, not for sd_asym, right?
> + atomic_inc(&sd->shared->ref);
> +}
> +
> +/*
> + * For asymmetric CPU capacity, attach sched_domain_shared on the innermost
> + * SD_ASYM_CPUCAPACITY_FULL ancestor of @cpu's base domain when that ancestor is
> + * not an overlapping NUMA-built domain (then LLC should claim shared).
> + *
> + * A CPU may lack any FULL ancestor (e.g., exclusive cpuset symmetric island),
> + * then LLC must claim shared instead.
> + *
> + * Note: SD_ASYM_CPUCAPACITY_FULL is only set when multiple distinct capacities
s/multiple/all/ ? We want to see all possible CPU capacity values at wakeup.
> + * exist in the domain span, so the asym domain we attach to cannot degenerate
> + * into a single-capacity group. The relevant edge cases are instead covered by
> + * the caveats above.
[...]
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 3/5] sched/fair: Prefer fully-idle SMT cores in asym-capacity idle selection
[not found] ` <20260428144352.3575863-4-arighi@nvidia.com>
@ 2026-05-05 17:20 ` Dietmar Eggemann
2026-05-06 18:31 ` Andrea Righi
2026-05-06 10:29 ` Vincent Guittot
1 sibling, 1 reply; 21+ messages in thread
From: Dietmar Eggemann @ 2026-05-05 17:20 UTC (permalink / raw)
To: Andrea Righi, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Vincent Guittot
Cc: Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
K Prateek Nayak, Christian Loehle, Koba Ko, Felix Abecassis,
Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel
On 28.04.26 16:41, Andrea Righi wrote:
> On systems with asymmetric CPU capacity (e.g., ACPI/CPPC reporting
> different per-core frequencies), the wakeup path uses
I assume those CPPC systems w/ different per-core frequencies (like your
Vera) are the only real ones which would make use of this. Mobile
big.LITTLE/DynamIQ don't have SMT.
Phil mentioned other machines (PowerPC ?) which had issues with using
select_idle_capacity():
https://lore.kernel.org/r/20260325124840.GA98184@pauld.westford.csb
[...]
> On an SMT system with asymmetric CPU capacities, SMT-aware idle
> selection has been shown to improve throughput by around 15-18% for
> CPU-bound workloads, running a number of tasks equal to the number of
> SMT cores.
Just to make sure, this should be your internal NVBLAS benchmark. Is
this 'ASYM (mainline) vs. ASYM + SMT' or 'NO_ASYM vs. ASYM + SMT'? I'm
trying to match the cover letter's table numbers.
[...]
> @@ -7997,8 +8013,9 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
> static int
> select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
> {
> + bool prefers_idle_core = sched_smt_active() && test_idle_cores(target);
nit: why prefers_idle_core and not has_idle_core like in sis()?
[...]
> @@ -8047,12 +8102,17 @@ static inline bool asym_fits_cpu(unsigned long util,
> unsigned long util_max,
> int cpu)
> {
> - if (sched_asym_cpucap_active())
> + if (sched_asym_cpucap_active()) {
> /*
> * Return true only if the cpu fully fits the task requirements
> * which include the utilization and the performance hints.
> + *
> + * When SMT is active, also require that the core has no busy
> + * siblings.
> */
> - return (util_fits_cpu(util, util_min, util_max, cpu) > 0);
> + return (!sched_smt_active() || is_core_idle(cpu)) &&
> + (util_fits_cpu(util, util_min, util_max, cpu) > 0);
> + }
Not sure whether this has been discussed already. This makes all early
bailout conditions in sis() idle-core aware for 'ASYM + SMT' but not
for 'NO_ASYM'?
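I.e. w/ this change the sis() early returns, e.g. the target check
(mainline select_idle_sibling(), roughly):

	if ((available_idle_cpu(target) || sched_idle_cpu(target)) &&
	    asym_fits_cpu(task_util, util_min, util_max, target))
		return target;

now demand a fully idle core for 'ASYM + SMT', while a symmetric SMT
system can still return target w/ a busy sibling.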
Otherwise, LGTM.
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH v5 0/5] sched/fair: SMT-aware asymmetric CPU capacity
[not found] <20260428144352.3575863-1-arighi@nvidia.com>
[not found] ` <20260428144352.3575863-2-arighi@nvidia.com>
[not found] ` <20260428144352.3575863-4-arighi@nvidia.com>
@ 2026-05-05 20:40 ` Dietmar Eggemann
[not found] ` <20260428144352.3575863-3-arighi@nvidia.com>
[not found] ` <20260428144352.3575863-6-arighi@nvidia.com>
4 siblings, 0 replies; 21+ messages in thread
From: Dietmar Eggemann @ 2026-05-05 20:40 UTC (permalink / raw)
To: Andrea Righi, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Vincent Guittot
Cc: Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
K Prateek Nayak, Christian Loehle, Koba Ko, Felix Abecassis,
Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel
On 28.04.26 16:41, Andrea Righi wrote:
[...]
> - DCPerf MediaWiki (all CPUs):
>
> +---------------------------------+--------+--------+--------+--------+
> | Configuration | rps | p50 | p95 | p99 |
> +---------------------------------+--------+--------+--------+--------+
> | ASYM (mainline) + SIS_UTIL | 7994 | 0.052 | 0.223 | 0.246 |
> | ASYM (mainline) + NO_SIS_UTIL | 7993 | 0.052 | 0.221 | 0.245 |
> | | | | | |
> | NO ASYM + SIS_UTIL | 8113 | 0.067 | 0.184 | 0.225 |
> | NO ASYM + NO_SIS_UTIL | 8093 | 0.068 | 0.184 | 0.223 |
> | | | | | |
> | ASYM + SMT + SIS_UTIL | 8129 | 0.076 | 0.149 | 0.188 |
> | ASYM + SMT + NO_SIS_UTIL | 8138 | 0.076 | 0.148 | 0.186 |
> +---------------------------------+--------+--------+--------+--------+
>
> In the MediaWiki case SMT awareness is less impactful, because for the majority
> of the run all CPUs are used, but it still seems to provide some benefits at
> reducing tail latency.
>
> Tests have also been conducted on NVIDIA Grace (which does not support SMT) to
> ensure that SIS_UTIL support in select_idle_capacity() does not introduce
> regressions and results show slight improvements under the same workloads.
Somewhat unrelated to this SMT extension, but I always wanted to know why
even with !SMT (e.g. Grace) we can see better values w/ ASYM.
DCPerf Mediawiki: Grace 72 CPUs, ~800 tasks (last test run):
+---------------------------------+--------+--------+--------+--------+
| Configuration | rps | p50 | p95 | p99 |
+---------------------------------+--------+--------+--------+--------+
| v6.8 NO ASYM | 4470 | 0.026 | 0.040 | 0.046 |
| v6.8 ASYM | 4636 | 0.022 | 0.037 | 0.043 |
+---------------------------------+--------+--------+--------+--------+
values from run_details.json: Wrk RPS, Nginx P50 {, P90, P95, P99} time
I always got 4%-5% higher rps and slightly better latencies w/ ASYM.
Possible explanation:
NO_ASYM
* More local wakeups
* sis()->select_idle_cpu() runs pretty fast into SIS_UTIL !nr_idle_scan
-> falls back to pick this_cpu or prev_cpu
* Causes more runqueue contention -> more load balancing
* More short idle periods + migrations
ASYM
* More remote wakeups
* select_idle_capacity() always scans sd_asym
* Less balancing needed; CPUs go idle less often but for longer
* Better placement -> less contention -> higher rps
AFAICS, in this high-load scenario, ASYM avoids the !nr_idle_scan
bailout, spreading tasks more effectively and so reducing contention and
balancing overhead.
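I.e. in the NO_ASYM case select_idle_cpu() gives up as soon as the LLC
looks saturated; mainline does (roughly):

	if (sched_feat(SIS_UTIL)) {
		sd_share = rcu_dereference(per_cpu(sd_llc_shared, target));
		if (sd_share) {
			/* because !--nr is the condition to stop scan */
			nr = READ_ONCE(sd_share->nr_idle_scan) + 1;
			if (nr == 1)
				return -1;
		}
	}

whereas select_idle_capacity() has no such bound (before 5/5) and keeps
scanning sd_asym.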
Would you have a chance to check this on mainline on your Grace machine?
[...]
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 2/5] sched/fair: Attach sched_domain_shared to sd_asym_cpucapacity
[not found] ` <20260428144352.3575863-3-arighi@nvidia.com>
2026-05-05 12:48 ` [PATCH 2/5] sched/fair: Attach sched_domain_shared to sd_asym_cpucapacity Dietmar Eggemann
@ 2026-05-06 9:45 ` Vincent Guittot
2026-05-06 10:19 ` K Prateek Nayak
1 sibling, 1 reply; 21+ messages in thread
From: Vincent Guittot @ 2026-05-06 9:45 UTC (permalink / raw)
To: Andrea Righi
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
K Prateek Nayak, Christian Loehle, Koba Ko, Felix Abecassis,
Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel
On Tue, 28 Apr 2026 at 16:44, Andrea Righi <arighi@nvidia.com> wrote:
>
> From: K Prateek Nayak <kprateek.nayak@amd.com>
>
> On asymmetric CPU capacity systems, the wakeup path uses
> select_idle_capacity(), which scans the span of sd_asym_cpucapacity
> rather than sd_llc.
>
> The has_idle_cores hint however lives on sd_llc->shared, so the
> wakeup-time read of has_idle_cores operates on an LLC-scoped blob while
> the actual scan/decision spans the asym domain; nr_busy_cpus also lives
> in the same shared sched_domain data, but it's never used in the asym
> CPU capacity scenario.
>
> Therefore, move the sched_domain_shared object to sd_asym_cpucapacity
> whenever the CPU has a SD_ASYM_CPUCAPACITY_FULL ancestor and that
> ancestor is non-overlapping (i.e., not built from SD_NUMA). In that case
> the scope of has_idle_cores matches the scope of the wakeup scan.
>
> Fall back to attaching the shared object to sd_llc in three cases:
>
> 1) plain symmetric systems (no SD_ASYM_CPUCAPACITY_FULL anywhere);
>
> 2) CPUs in an exclusive cpuset that carves out a symmetric capacity
> island: has_asym is system-wide but those CPUs have no
> SD_ASYM_CPUCAPACITY_FULL ancestor in their hierarchy and follow
> the symmetric LLC path in select_idle_sibling();
>
> 3) exotic topologies where SD_ASYM_CPUCAPACITY_FULL lands on an
> SD_NUMA-built domain. init_sched_domain_shared() keys the shared
> blob off cpumask_first(span), which on overlapping NUMA domains
> would alias unrelated spans onto the same blob. Keep the shared
> object on the LLC there; select_idle_capacity() gracefully skips
> the has_idle_cores preference when sd->shared is NULL.
>
> While at it, also rename the per-CPU sd_llc_shared to sd_balance_shared,
> as it is no longer strictly tied to the LLC.
>
> Co-developed-by: Andrea Righi <arighi@nvidia.com>
> Signed-off-by: Andrea Righi <arighi@nvidia.com>
> Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
> ---
> kernel/sched/fair.c | 17 +++++---
> kernel/sched/sched.h | 2 +-
> kernel/sched/topology.c | 90 +++++++++++++++++++++++++++++++++++------
> 3 files changed, 89 insertions(+), 20 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e0f75dedc8456..bbdf537f61154 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7790,7 +7790,7 @@ static inline void set_idle_cores(int cpu, int val)
> {
> struct sched_domain_shared *sds;
>
> - sds = rcu_dereference_all(per_cpu(sd_llc_shared, cpu));
> + sds = rcu_dereference_all(per_cpu(sd_balance_shared, cpu));
> if (sds)
> WRITE_ONCE(sds->has_idle_cores, val);
> }
> @@ -7799,7 +7799,7 @@ static inline bool test_idle_cores(int cpu)
> {
> struct sched_domain_shared *sds;
>
> - sds = rcu_dereference_all(per_cpu(sd_llc_shared, cpu));
> + sds = rcu_dereference_all(per_cpu(sd_balance_shared, cpu));
> if (sds)
> return READ_ONCE(sds->has_idle_cores);
>
> @@ -7808,7 +7808,7 @@ static inline bool test_idle_cores(int cpu)
>
> /*
> * Scans the local SMT mask to see if the entire core is idle, and records this
> - * information in sd_llc_shared->has_idle_cores.
> + * information in sd_balance_shared->has_idle_cores.
> *
> * Since SMT siblings share all cache levels, inspecting this limited remote
> * state should be fairly cheap.
> @@ -7925,7 +7925,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
> struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
> int i, cpu, idle_cpu = -1, nr = INT_MAX;
>
> - if (sched_feat(SIS_UTIL)) {
> + if (sched_feat(SIS_UTIL) && sd->shared) {
If shared is attached to sd_asym_cpucapacity instead of sd_llc we
should never reach this point. Or I'm missing a case ?
> /*
> * Increment because !--nr is the condition to stop scan.
> *
> @@ -12826,7 +12826,11 @@ static void set_cpu_sd_state_busy(int cpu)
> struct sched_domain *sd;
> sd = rcu_dereference_all(per_cpu(sd_llc, cpu));
>
> - if (!sd || !sd->nohz_idle)
> + /*
> + * sd->nohz_idle only pairs with nr_busy_cpus on sd->shared; if this
> + * domain has no shared object there is nothing to clear or account.
> + */
> + if (!sd || !sd->shared || !sd->nohz_idle)
> return;
> sd->nohz_idle = 0;
>
> @@ -12851,7 +12855,8 @@ static void set_cpu_sd_state_idle(int cpu)
> struct sched_domain *sd;
> sd = rcu_dereference_all(per_cpu(sd_llc, cpu));
>
> - if (!sd || sd->nohz_idle)
> + /* See set_cpu_sd_state_busy(): nohz_idle is only used with sd->shared. */
> + if (!sd || !sd->shared || sd->nohz_idle)
> return;
> sd->nohz_idle = 1;
>
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 9f63b15d309d1..330f5893c4561 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -2170,7 +2170,7 @@ DECLARE_PER_CPU(struct sched_domain __rcu *, sd_llc);
> DECLARE_PER_CPU(int, sd_llc_size);
> DECLARE_PER_CPU(int, sd_llc_id);
> DECLARE_PER_CPU(int, sd_share_id);
> -DECLARE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
> +DECLARE_PER_CPU(struct sched_domain_shared __rcu *, sd_balance_shared);
> DECLARE_PER_CPU(struct sched_domain __rcu *, sd_numa);
> DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
> DECLARE_PER_CPU(struct sched_domain __rcu *, sd_asym_cpucapacity);
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 5847b83d9d552..69d465cc93ab4 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -665,7 +665,7 @@ DEFINE_PER_CPU(struct sched_domain __rcu *, sd_llc);
> DEFINE_PER_CPU(int, sd_llc_size);
> DEFINE_PER_CPU(int, sd_llc_id);
> DEFINE_PER_CPU(int, sd_share_id);
> -DEFINE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
> +DEFINE_PER_CPU(struct sched_domain_shared __rcu *, sd_balance_shared);
> DEFINE_PER_CPU(struct sched_domain __rcu *, sd_numa);
> DEFINE_PER_CPU(struct sched_domain __rcu *, sd_asym_packing);
> DEFINE_PER_CPU(struct sched_domain __rcu *, sd_asym_cpucapacity);
> @@ -680,20 +680,38 @@ static void update_top_cache_domain(int cpu)
> int id = cpu;
> int size = 1;
>
> + sd = lowest_flag_domain(cpu, SD_ASYM_CPUCAPACITY_FULL);
> + /*
> + * The shared object is attached to sd_asym_cpucapacity only when the
> + * asym domain is non-overlapping (i.e., not built from SD_NUMA).
> + * On overlapping (NUMA) asym domains we fall back to letting the
> + * SD_SHARE_LLC path own the shared object, so sd->shared may be NULL
> + * here.
> + */
> + if (sd && sd->shared)
> + sds = sd->shared;
> +
> + rcu_assign_pointer(per_cpu(sd_asym_cpucapacity, cpu), sd);
> +
> sd = highest_flag_domain(cpu, SD_SHARE_LLC);
> if (sd) {
> id = cpumask_first(sched_domain_span(sd));
> size = cpumask_weight(sched_domain_span(sd));
>
> - /* If sd_llc exists, sd_llc_shared should exist too. */
> - WARN_ON_ONCE(!sd->shared);
> - sds = sd->shared;
> + /*
> + * If sd_asym_cpucapacity didn't claim the shared object,
> + * sd_llc must have one linked.
> + */
> + if (!sds) {
> + WARN_ON_ONCE(!sd->shared);
> + sds = sd->shared;
> + }
> }
>
> rcu_assign_pointer(per_cpu(sd_llc, cpu), sd);
> per_cpu(sd_llc_size, cpu) = size;
> per_cpu(sd_llc_id, cpu) = id;
> - rcu_assign_pointer(per_cpu(sd_llc_shared, cpu), sds);
> + rcu_assign_pointer(per_cpu(sd_balance_shared, cpu), sds);
>
> sd = lowest_flag_domain(cpu, SD_CLUSTER);
> if (sd)
> @@ -711,9 +729,6 @@ static void update_top_cache_domain(int cpu)
>
> sd = highest_flag_domain(cpu, SD_ASYM_PACKING);
> rcu_assign_pointer(per_cpu(sd_asym_packing, cpu), sd);
> -
> - sd = lowest_flag_domain(cpu, SD_ASYM_CPUCAPACITY_FULL);
> - rcu_assign_pointer(per_cpu(sd_asym_cpucapacity, cpu), sd);
> }
>
> /*
> @@ -2650,6 +2665,49 @@ static void adjust_numa_imbalance(struct sched_domain *sd_llc)
> }
> }
>
> +static void init_sched_domain_shared(struct s_data *d, struct sched_domain *sd)
> +{
> + int sd_id = cpumask_first(sched_domain_span(sd));
> +
> + sd->shared = *per_cpu_ptr(d->sds, sd_id);
> + atomic_set(&sd->shared->nr_busy_cpus, sd->span_weight);
> + atomic_inc(&sd->shared->ref);
> +}
> +
> +/*
> + * For asymmetric CPU capacity, attach sched_domain_shared on the innermost
> + * SD_ASYM_CPUCAPACITY_FULL ancestor of @cpu's base domain when that ancestor is
> + * not an overlapping NUMA-built domain (then LLC should claim shared).
> + *
> + * A CPU may lack any FULL ancestor (e.g., exclusive cpuset symmetric island),
> + * then LLC must claim shared instead.
> + *
> + * Note: SD_ASYM_CPUCAPACITY_FULL is only set when multiple distinct capacities
> + * exist in the domain span, so the asym domain we attach to cannot degenerate
> + * into a single-capacity group. The relevant edge cases are instead covered by
> + * the caveats above.
> + *
> + * Return true if this CPU's asym path claimed sd->shared, false otherwise.
> + */
> +static bool claim_asym_sched_domain_shared(struct s_data *d, int cpu)
> +{
> + struct sched_domain *sd = *per_cpu_ptr(d->sd, cpu);
> + struct sched_domain *sd_asym;
> +
> + if (!sd)
> + return false;
> +
> + sd_asym = sd;
> + while (sd_asym && !(sd_asym->flags & SD_ASYM_CPUCAPACITY_FULL))
> + sd_asym = sd_asym->parent;
> +
> + if (!sd_asym || (sd_asym->flags & SD_NUMA))
> + return false;
> +
> + init_sched_domain_shared(d, sd_asym);
> + return true;
> +}
> +
> /*
> * Build sched domains for a given set of CPUs and attach the sched domains
> * to the individual CPUs
> @@ -2708,20 +2766,26 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
> }
>
> for_each_cpu(i, cpu_map) {
> + bool asym_claimed = false;
> +
> sd = *per_cpu_ptr(d.sd, i);
> if (!sd)
> continue;
>
> + if (has_asym)
> + asym_claimed = claim_asym_sched_domain_shared(&d, i);
> +
> /* First, find the topmost SD_SHARE_LLC domain */
> while (sd->parent && (sd->parent->flags & SD_SHARE_LLC))
> sd = sd->parent;
>
> if (sd->flags & SD_SHARE_LLC) {
> - int sd_id = cpumask_first(sched_domain_span(sd));
> -
> - sd->shared = *per_cpu_ptr(d.sds, sd_id);
> - atomic_set(&sd->shared->nr_busy_cpus, sd->span_weight);
> - atomic_inc(&sd->shared->ref);
> + /*
> + * Initialize the sd->shared for SD_SHARE_LLC unless
> + * the asym path above already claimed it.
> + */
> + if (!asym_claimed)
> + init_sched_domain_shared(&d, sd);
>
> /*
> * In presence of higher domains, adjust the
> --
> 2.54.0
>
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 2/5] sched/fair: Attach sched_domain_shared to sd_asym_cpucapacity
2026-05-06 9:45 ` Vincent Guittot
@ 2026-05-06 10:19 ` K Prateek Nayak
2026-05-06 10:30 ` Vincent Guittot
0 siblings, 1 reply; 21+ messages in thread
From: K Prateek Nayak @ 2026-05-06 10:19 UTC (permalink / raw)
To: Vincent Guittot, Andrea Righi
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
Christian Loehle, Koba Ko, Felix Abecassis, Balbir Singh,
Joel Fernandes, Shrikanth Hegde, linux-kernel
Hello Vincent,
On 5/6/2026 3:15 PM, Vincent Guittot wrote:
>> @@ -7925,7 +7925,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
>> struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
>> int i, cpu, idle_cpu = -1, nr = INT_MAX;
>>
>> - if (sched_feat(SIS_UTIL)) {
>> + if (sched_feat(SIS_UTIL) && sd->shared) {
>
> If shared is attached to sd_asym_cpucapacity instead of sd_llc we
> should never reach this point. Or I'm missing a case ?
So a hotplug-triggered domain rebuild might race with a wakeup like:

  CPU0 (domain rebuild)                      CPU1 (wakeup)

  claim_asym_sched_domain_shared()
    init_sched_domain_shared(d, sd_asym);
    return true;
  update_top_cache_domain()
    rcu_assign_pointer(sd_llc, sd);
                                             select_idle_sibling()
                                               sd = rcu_dereference_all(sd_asym_cpucapacity);
                                               /* sd_asym_cpucapacity still hasn't been updated */
                                               if (sd /* NULL */) { ... }
                                               sd = rcu_dereference_all(sd_llc); /* Valid */
                                               select_idle_cpu(sd)
                                                 sd->shared /* NULL */
  rcu_assign_pointer(sd_asym_cpucapacity, sd);
This prevents that rare race where a remote CPU will see sd_llc
before sd_asym is published and take the !ASYM wakeup route only
to find sd->shared is NULL since sd_asym has claimed it.
>
>> /*
>> * Increment because !--nr is the condition to stop scan.
>> *
--
Thanks and Regards,
Prateek
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 3/5] sched/fair: Prefer fully-idle SMT cores in asym-capacity idle selection
[not found] ` <20260428144352.3575863-4-arighi@nvidia.com>
2026-05-05 17:20 ` [PATCH 3/5] sched/fair: Prefer fully-idle SMT cores in asym-capacity idle selection Dietmar Eggemann
@ 2026-05-06 10:29 ` Vincent Guittot
2026-05-06 12:34 ` Vincent Guittot
2026-05-06 18:15 ` Andrea Righi
1 sibling, 2 replies; 21+ messages in thread
From: Vincent Guittot @ 2026-05-06 10:29 UTC (permalink / raw)
To: Andrea Righi
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
K Prateek Nayak, Christian Loehle, Koba Ko, Felix Abecassis,
Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel
On Tue, 28 Apr 2026 at 16:44, Andrea Righi <arighi@nvidia.com> wrote:
>
> On systems with asymmetric CPU capacity (e.g., ACPI/CPPC reporting
> different per-core frequencies), the wakeup path uses
> select_idle_capacity() and prioritizes idle CPUs with higher capacity
> for better task placement. However, when those CPUs belong to SMT cores,
> their effective capacity can be much lower than the nominal capacity
> when the sibling thread is busy: SMT siblings compete for shared
> resources, so a "high capacity" CPU that is idle but whose sibling is
> busy does not deliver its full capacity. This effective capacity
> reduction cannot be modeled by the static capacity value alone.
>
> Introduce SMT awareness in the asym-capacity idle selection policy: when
> SMT is active, always prefer fully-idle SMT cores over partially-idle
> ones.
>
> Prioritizing fully-idle SMT cores yields better task placement because
> the effective capacity of partially-idle SMT cores is reduced; always
> preferring them when available leads to more accurate capacity usage on
> task wakeup.
>
> On an SMT system with asymmetric CPU capacities, SMT-aware idle
> selection has been shown to improve throughput by around 15-18% for
> > CPU-bound workloads, running a number of tasks equal to the number of
> SMT cores.
>
> Cc: Vincent Guittot <vincent.guittot@linaro.org>
> Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> Cc: Christian Loehle <christian.loehle@arm.com>
> Cc: Koba Ko <kobak@nvidia.com>
> Reviewed-by: K Prateek Nayak <kprateek.nayak@amd.com>
> Reported-by: Felix Abecassis <fabecassis@nvidia.com>
> Signed-off-by: Andrea Righi <arighi@nvidia.com>
> ---
> kernel/sched/fair.c | 70 +++++++++++++++++++++++++++++++++++++++++----
> 1 file changed, 65 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index bbdf537f61154..6a7e4943804b5 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7989,6 +7989,22 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
> return idle_cpu;
> }
>
> +/*
> + * Idle-capacity scan ranks transformed util_fits_cpu() outcomes; lower values
> + * are more preferred (see select_idle_capacity()).
> + */
> +enum asym_fits_state {
> + /* In descending order of preference */
> + ASYM_IDLE_CORE_UCLAMP_MISFIT = -4,
> + ASYM_IDLE_CORE_COMPLETE_MISFIT,
> + ASYM_IDLE_THREAD_FITS,
> + ASYM_IDLE_THREAD_UCLAMP_MISFIT,
> + ASYM_IDLE_COMPLETE_MISFIT,
> +
> + /* util_fits_cpu() bias for an idle core. */
> + ASYM_IDLE_CORE_BIAS = -3,
> +};
> +
> /*
> * Scan the asym_capacity domain for idle CPUs; pick the first idle one on which
> * the task fits. If no CPU is big enough, but there are idle ones, try to
> @@ -7997,8 +8013,9 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
> static int
> select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
> {
> + bool prefers_idle_core = sched_smt_active() && test_idle_cores(target);
> unsigned long task_util, util_min, util_max, best_cap = 0;
> - int fits, best_fits = 0;
> + int fits, best_fits = ASYM_IDLE_COMPLETE_MISFIT;
> int cpu, best_cpu = -1;
> struct cpumask *cpus;
>
> @@ -8010,6 +8027,7 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
> util_max = uclamp_eff_value(p, UCLAMP_MAX);
>
> for_each_cpu_wrap(cpu, cpus, target) {
> + bool preferred_core = !prefers_idle_core || is_core_idle(cpu);
> unsigned long cpu_cap = capacity_of(cpu);
>
> if (!choose_idle_cpu(cpu, p))
> @@ -8018,7 +8036,7 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
> fits = util_fits_cpu(task_util, util_min, util_max, cpu);
>
> /* This CPU fits with all requirements */
> - if (fits > 0)
> + if (fits > 0 && preferred_core)
> return cpu;
> /*
> * Only the min performance hint (i.e. uclamp_min) doesn't fit.
> @@ -8026,9 +8044,33 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
> */
> else if (fits < 0)
> cpu_cap = get_actual_cpu_capacity(cpu);
> + /*
> + * fits > 0 implies we are not on a preferred core
> + * but the util fits CPU capacity. Set fits to ASYM_IDLE_THREAD_FITS
> + * so the effective range becomes
> + * [ASYM_IDLE_THREAD_FITS, ASYM_IDLE_COMPLETE_MISFIT], where:
> + * ASYM_IDLE_COMPLETE_MISFIT - does not fit
> + * ASYM_IDLE_THREAD_UCLAMP_MISFIT - fits with the exception of UCLAMP_MIN
> + * ASYM_IDLE_THREAD_FITS - fits with the exception of preferred_core
> + */
> + else if (fits > 0)
> + fits = ASYM_IDLE_THREAD_FITS;
> +
> + /*
> + * If we are on a preferred core, translate the range of fits
> + * of [ASYM_IDLE_THREAD_UCLAMP_MISFIT, ASYM_IDLE_COMPLETE_MISFIT] to
> + * [ASYM_IDLE_CORE_UCLAMP_MISFIT, ASYM_IDLE_CORE_COMPLETE_MISFIT].
> + * This ensures that an idle core is always given priority over
> + * (partially) busy core.
> + *
> + * A fully fitting idle core would have returned early and hence
> + * fits > 0 for preferred_core need not be dealt with.
> + */
> + if (preferred_core)
> + fits += ASYM_IDLE_CORE_BIAS;
It might be good to add a comment stating that if the system doesn't
have SMT, prefers_idle_core and preferred_core are always true.
This is okay because CPU == Core in this case but the value differs
from the default 0 or -1 of util_fits_cpu
>
> /*
> - * First, select CPU which fits better (-1 being better than 0).
> + * First, select CPU which fits better (lower is more preferred).
> * Then, select the one with best capacity at same level.
> */
> if ((fits < best_fits) ||
> @@ -8039,6 +8081,19 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
> }
> }
>
> + /*
> + * A value in the [ASYM_IDLE_CORE_UCLAMP_MISFIT, ASYM_IDLE_CORE_BIAS]
s/ASYM_IDLE_CORE_BIAS/ASYM_IDLE_CORE_COMPLETE_MISFIT/
ASYM_IDLE_CORE_BIAS is an offset to move an idle core that doesn't
fully fit in the preferred range [ASYM_IDLE_CORE_UCLAMP_MISFIT,
ASYM_IDLE_CORE_COMPLETE_MISFIT]
Keeping in mind that ASYM_IDLE_CORE_BIAS == -3 == ASYM_IDLE_CORE_COMPLETE_MISFIT
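IOW, my reading of the transform, with the enum values spelled out:

  util_fits_cpu()       !preferred_core            preferred_core (+= -3)
   1 (fits)             -2 THREAD_FITS             (early return)
  -1 (uclamp misfit)    -1 THREAD_UCLAMP_MISFIT    -4 CORE_UCLAMP_MISFIT
   0 (complete misfit)   0 COMPLETE_MISFIT         -3 CORE_COMPLETE_MISFIT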
> + * range means the chosen CPU is in a fully idle SMT core. Values above
> + * ASYM_IDLE_CORE_BIAS mean we never ranked such a CPU best.
s/ASYM_IDLE_CORE_BIAS/ASYM_IDLE_CORE_COMPLETE_MISFIT/
> + *
> + * The asym-capacity wakeup path returns from select_idle_sibling()
> + * after this function and never runs select_idle_cpu(), so the usual
> + * select_idle_cpu() tail that clears idle cores must live here when the
> + * idle-core preference did not win.
> + */
> + if (prefers_idle_core && best_fits > ASYM_IDLE_CORE_BIAS)
s/ASYM_IDLE_CORE_BIAS/ASYM_IDLE_CORE_COMPLETE_MISFIT/
> + set_idle_cores(target, false);
> +
> return best_cpu;
> }
>
> @@ -8047,12 +8102,17 @@ static inline bool asym_fits_cpu(unsigned long util,
> unsigned long util_max,
> int cpu)
> {
> - if (sched_asym_cpucap_active())
> + if (sched_asym_cpucap_active()) {
> /*
> * Return true only if the cpu fully fits the task requirements
> * which include the utilization and the performance hints.
> + *
> + * When SMT is active, also require that the core has no busy
> + * siblings.
> */
> - return (util_fits_cpu(util, util_min, util_max, cpu) > 0);
> + return (!sched_smt_active() || is_core_idle(cpu)) &&
> + (util_fits_cpu(util, util_min, util_max, cpu) > 0);
> + }
>
> return true;
> }
> --
> 2.54.0
>
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 2/5] sched/fair: Attach sched_domain_shared to sd_asym_cpucapacity
2026-05-06 10:19 ` K Prateek Nayak
@ 2026-05-06 10:30 ` Vincent Guittot
0 siblings, 0 replies; 21+ messages in thread
From: Vincent Guittot @ 2026-05-06 10:30 UTC (permalink / raw)
To: K Prateek Nayak
Cc: Andrea Righi, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, Christian Loehle, Koba Ko, Felix Abecassis,
Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel
On Wed, 6 May 2026 at 12:20, K Prateek Nayak <kprateek.nayak@amd.com> wrote:
>
> Hello Vincent,
>
> On 5/6/2026 3:15 PM, Vincent Guittot wrote:
> >> @@ -7925,7 +7925,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
> >> struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
> >> int i, cpu, idle_cpu = -1, nr = INT_MAX;
> >>
> >> - if (sched_feat(SIS_UTIL)) {
> >> + if (sched_feat(SIS_UTIL) && sd->shared) {
> >
> > If shared is attached to sd_asym_cpucapacity instead of sd_llc we
> > should never reach this point. Or I'm missing a case ?
>
> So a hotplug-triggered domain rebuild might race with a wakeup like:
>
>   CPU0 (domain rebuild)                      CPU1 (wakeup)
>
>   claim_asym_sched_domain_shared()
>     init_sched_domain_shared(d, sd_asym);
>     return true;
>   update_top_cache_domain()
>     rcu_assign_pointer(sd_llc, sd);
>                                              select_idle_sibling()
>                                                sd = rcu_dereference_all(sd_asym_cpucapacity);
>                                                /* sd_asym_cpucapacity still hasn't been updated */
>                                                if (sd /* NULL */) { ... }
>                                                sd = rcu_dereference_all(sd_llc); /* Valid */
>                                                select_idle_cpu(sd)
>                                                  sd->shared /* NULL */
>   rcu_assign_pointer(sd_asym_cpucapacity, sd);
>
>
> This prevents that rare race where a remote CPU will see sd_llc
> before sd_asym is published and take the !ASYM wakeup route only
> to find sd->shared is NULL since sd_asym has claimed it.
fair enough
>
> >
> >> /*
> >> * Increment because !--nr is the condition to stop scan.
> >> *
>
> --
> Thanks and Regards,
> Prateek
>
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 3/5] sched/fair: Prefer fully-idle SMT cores in asym-capacity idle selection
2026-05-06 10:29 ` Vincent Guittot
@ 2026-05-06 12:34 ` Vincent Guittot
2026-05-06 18:15 ` Andrea Righi
1 sibling, 0 replies; 21+ messages in thread
From: Vincent Guittot @ 2026-05-06 12:34 UTC (permalink / raw)
To: Andrea Righi
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
K Prateek Nayak, Christian Loehle, Koba Ko, Felix Abecassis,
Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel
On Wed, 6 May 2026 at 12:29, Vincent Guittot <vincent.guittot@linaro.org> wrote:
>
> On Tue, 28 Apr 2026 at 16:44, Andrea Righi <arighi@nvidia.com> wrote:
> >
> > On systems with asymmetric CPU capacity (e.g., ACPI/CPPC reporting
> > different per-core frequencies), the wakeup path uses
> > select_idle_capacity() and prioritizes idle CPUs with higher capacity
> > for better task placement. However, when those CPUs belong to SMT cores,
> > their effective capacity can be much lower than the nominal capacity
> > when the sibling thread is busy: SMT siblings compete for shared
> > resources, so a "high capacity" CPU that is idle but whose sibling is
> > busy does not deliver its full capacity. This effective capacity
> > reduction cannot be modeled by the static capacity value alone.
> >
> > Introduce SMT awareness in the asym-capacity idle selection policy: when
> > SMT is active, always prefer fully-idle SMT cores over partially-idle
> > ones.
> >
> > Prioritizing fully-idle SMT cores yields better task placement because
> > the effective capacity of partially-idle SMT cores is reduced; always
> > preferring them when available leads to more accurate capacity usage on
> > task wakeup.
> >
> > On an SMT system with asymmetric CPU capacities, SMT-aware idle
> > selection has been shown to improve throughput by around 15-18% for
> > CPU-bound workloads, running an amount of tasks equal to the amount of
> > SMT cores.
> >
> > Cc: Vincent Guittot <vincent.guittot@linaro.org>
> > Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> > Cc: Christian Loehle <christian.loehle@arm.com>
> > Cc: Koba Ko <kobak@nvidia.com>
> > Reviewed-by: K Prateek Nayak <kprateek.nayak@amd.com>
> > Reported-by: Felix Abecassis <fabecassis@nvidia.com>
> > Signed-off-by: Andrea Righi <arighi@nvidia.com>
> > ---
> > kernel/sched/fair.c | 70 +++++++++++++++++++++++++++++++++++++++++----
> > 1 file changed, 65 insertions(+), 5 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index bbdf537f61154..6a7e4943804b5 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -7989,6 +7989,22 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
> > return idle_cpu;
> > }
> >
> > +/*
> > + * Idle-capacity scan ranks transformed util_fits_cpu() outcomes; lower values
> > + * are more preferred (see select_idle_capacity()).
> > + */
> > +enum asym_fits_state {
> > + /* In descending order of preference */
> > + ASYM_IDLE_CORE_UCLAMP_MISFIT = -4,
> > + ASYM_IDLE_CORE_COMPLETE_MISFIT,
> > + ASYM_IDLE_THREAD_FITS,
> > + ASYM_IDLE_THREAD_UCLAMP_MISFIT,
> > + ASYM_IDLE_COMPLETE_MISFIT,
> > +
> > + /* util_fits_cpu() bias for an idle core. */
> > + ASYM_IDLE_CORE_BIAS = -3,
> > +};
> > +
> > /*
> > * Scan the asym_capacity domain for idle CPUs; pick the first idle one on which
> > * the task fits. If no CPU is big enough, but there are idle ones, try to
> > @@ -7997,8 +8013,9 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
> > static int
> > select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
> > {
> > + bool prefers_idle_core = sched_smt_active() && test_idle_cores(target);
> > unsigned long task_util, util_min, util_max, best_cap = 0;
> > - int fits, best_fits = 0;
> > + int fits, best_fits = ASYM_IDLE_COMPLETE_MISFIT;
> > int cpu, best_cpu = -1;
> > struct cpumask *cpus;
> >
> > @@ -8010,6 +8027,7 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
> > util_max = uclamp_eff_value(p, UCLAMP_MAX);
> >
> > for_each_cpu_wrap(cpu, cpus, target) {
> > + bool preferred_core = !prefers_idle_core || is_core_idle(cpu);
> > unsigned long cpu_cap = capacity_of(cpu);
> >
> > if (!choose_idle_cpu(cpu, p))
> > @@ -8018,7 +8036,7 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
> > fits = util_fits_cpu(task_util, util_min, util_max, cpu);
> >
> > /* This CPU fits with all requirements */
> > - if (fits > 0)
> > + if (fits > 0 && preferred_core)
> > return cpu;
> > /*
> > * Only the min performance hint (i.e. uclamp_min) doesn't fit.
> > @@ -8026,9 +8044,33 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
> > */
> > else if (fits < 0)
> > cpu_cap = get_actual_cpu_capacity(cpu);
> > + /*
> > + * fits > 0 implies we are not on a preferred core
> > + * but the util fits CPU capacity. Set fits to ASYM_IDLE_THREAD_FITS
> > + * so the effective range becomes
> > + * [ASYM_IDLE_THREAD_FITS, ASYM_IDLE_COMPLETE_MISFIT], where:
> > + * ASYM_IDLE_COMPLETE_MISFIT - does not fit
> > + * ASYM_IDLE_THREAD_UCLAMP_MISFIT - fits with the exception of UCLAMP_MIN
> > + * ASYM_IDLE_THREAD_FITS - fits with the exception of preferred_core
> > + */
> > + else if (fits > 0)
> > + fits = ASYM_IDLE_THREAD_FITS;
> > +
> > + /*
> > + * If we are on a preferred core, translate the range of fits
> > + * of [ASYM_IDLE_THREAD_UCLAMP_MISFIT, ASYM_IDLE_COMPLETE_MISFIT] to
> > + * [ASYM_IDLE_CORE_UCLAMP_MISFIT, ASYM_IDLE_CORE_COMPLETE_MISFIT].
> > + * This ensures that an idle core is always given priority over
> > + * (partially) busy core.
> > + *
> > + * A fully fitting idle core would have returned early and hence
> > + * fits > 0 for preferred_core need not be dealt with.
> > + */
> > + if (preferred_core)
> > + fits += ASYM_IDLE_CORE_BIAS;
>
> It might be good to add a comment stating that if the system doesn't
> have SMT, prefers_idle_core and preferred_core are always true.
I meant prefers_idle_core is always false and preferred_core is always true
[...]
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 5/5] sched/fair: Add SIS_UTIL support to select_idle_capacity()
[not found] ` <20260428144352.3575863-6-arighi@nvidia.com>
@ 2026-05-06 12:59 ` Vincent Guittot
2026-05-06 17:01 ` Dietmar Eggemann
0 siblings, 1 reply; 21+ messages in thread
From: Vincent Guittot @ 2026-05-06 12:59 UTC (permalink / raw)
To: Andrea Righi
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
K Prateek Nayak, Christian Loehle, Koba Ko, Felix Abecassis,
Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel
On Tue, 28 Apr 2026 at 16:44, Andrea Righi <arighi@nvidia.com> wrote:
>
> From: K Prateek Nayak <kprateek.nayak@amd.com>
>
> Add to select_idle_capacity() the same SIS_UTIL-controlled idle-scan
> mechanism, already used by select_idle_cpu(): when sched_feat(SIS_UTIL)
> is enabled and the LLC domain has sched_domain_shared data, derive the
> per-attempt scan limit from sd->shared->nr_idle_scan.
>
> That bounds the walk on large LLCs and allows an early return once the
> scan limit is reached, if we already picked a sufficiently strong
> idle-core candidate (best_fits == ASYM_IDLE_CORE_UCLAMP_MISFIT).
>
> Co-developed-by: Andrea Righi <arighi@nvidia.com>
> Signed-off-by: Andrea Righi <arighi@nvidia.com>
> Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
> ---
> kernel/sched/fair.c | 19 +++++++++++++++++++
> 1 file changed, 19 insertions(+)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index a1f4d70f6b3d9..1cde3a9b1e0f5 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8018,6 +8018,7 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
> int fits, best_fits = ASYM_IDLE_COMPLETE_MISFIT;
> int cpu, best_cpu = -1;
> struct cpumask *cpus;
> + int nr = INT_MAX;
>
> cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
> cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> @@ -8026,10 +8027,28 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
> util_min = uclamp_eff_value(p, UCLAMP_MIN);
> util_max = uclamp_eff_value(p, UCLAMP_MAX);
>
> + if (sched_feat(SIS_UTIL) && sd->shared) {
> + /*
> + * Same nr_idle_scan hint as select_idle_cpu(), nr only limits
> + * the scan when not preferring an idle core.
> + */
> + nr = READ_ONCE(sd->shared->nr_idle_scan) + 1;
> + /* overloaded domain is unlikely to have idle cpu/core */
> + if (nr == 1)
> + return -1;
> + }
> +
> for_each_cpu_wrap(cpu, cpus, target) {
> bool preferred_core = !prefers_idle_core || is_core_idle(cpu);
> unsigned long cpu_cap = capacity_of(cpu);
>
> + /*
> + * Good-enough early exit (mirrors select_idle_cpu() logic).
> + */
> + if (!prefers_idle_core &&
> + --nr <= 0 && best_fits == ASYM_IDLE_CORE_UCLAMP_MISFIT)
With SMT, !prefers_idle_core implies that there is no idle core; is
best_fits == ASYM_IDLE_CORE_UCLAMP_MISFIT really expected in that
case?
With !SMT, !prefers_idle_core is always true and we will bail out
early as expected
> + return best_cpu;
> +
> if (!choose_idle_cpu(cpu, p))
> continue;
>
> --
> 2.54.0
>
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 5/5] sched/fair: Add SIS_UTIL support to select_idle_capacity()
2026-05-06 12:59 ` [PATCH 5/5] sched/fair: Add SIS_UTIL support to select_idle_capacity() Vincent Guittot
@ 2026-05-06 17:01 ` Dietmar Eggemann
2026-05-06 18:11 ` Andrea Righi
0 siblings, 1 reply; 21+ messages in thread
From: Dietmar Eggemann @ 2026-05-06 17:01 UTC (permalink / raw)
To: Vincent Guittot, Andrea Righi
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Steven Rostedt,
Ben Segall, Mel Gorman, Valentin Schneider, K Prateek Nayak,
Christian Loehle, Koba Ko, Felix Abecassis, Balbir Singh,
Joel Fernandes, Shrikanth Hegde, linux-kernel
On 06.05.26 14:59, Vincent Guittot wrote:
> On Tue, 28 Apr 2026 at 16:44, Andrea Righi <arighi@nvidia.com> wrote:
>>
>> From: K Prateek Nayak <kprateek.nayak@amd.com>
[...]
>> @@ -8026,10 +8027,28 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
>> util_min = uclamp_eff_value(p, UCLAMP_MIN);
>> util_max = uclamp_eff_value(p, UCLAMP_MAX);
>>
>> + if (sched_feat(SIS_UTIL) && sd->shared) {
>> + /*
>> + * Same nr_idle_scan hint as select_idle_cpu(), nr only limits
>> + * the scan when not preferring an idle core.
>> + */
>> + nr = READ_ONCE(sd->shared->nr_idle_scan) + 1;
>> + /* overloaded domain is unlikely to have idle cpu/core */
>> + if (nr == 1)
>> + return -1;
>> + }
>> +
>> for_each_cpu_wrap(cpu, cpus, target) {
>> bool preferred_core = !prefers_idle_core || is_core_idle(cpu);
>> unsigned long cpu_cap = capacity_of(cpu);
>>
>> + /*
>> + * Good-enough early exit (mirrors select_idle_cpu() logic).
>> + */
>> + if (!prefers_idle_core &&
>> + --nr <= 0 && best_fits == ASYM_IDLE_CORE_UCLAMP_MISFIT)
>
> With SMT, !prefers_idle_core implies that there is no idle core; is
> best_fits == ASYM_IDLE_CORE_UCLAMP_MISFIT really expected in that
> case?
>
> With !SMT, !prefers_idle_core is always true and we will bail out
> early as expected
I struggle to comprehend:
I assume the mirrored select_idle_cpu() logic is:
for_each_cpu_wrap(cpu, cpus, target + 1)
if (has_idle_core)
else
if (--nr <= 0)
return -1
Should this condition not be just:
if (!prefers_idle_core && --nr <= 0)
return best_cpu
since if we do a:
if (!choose_idle_cpu(cpu, p))
continue;
right after that?
best_cpu is -1 by default, so sis() will return target; in case we
already found a best_cpu, sis() will return that instead.
What am I missing here?
>
>
>> + return best_cpu;
>> +
>> if (!choose_idle_cpu(cpu, p))
>> continue;
[...]
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 5/5] sched/fair: Add SIS_UTIL support to select_idle_capacity()
2026-05-06 17:01 ` Dietmar Eggemann
@ 2026-05-06 18:11 ` Andrea Righi
2026-05-07 6:47 ` Vincent Guittot
0 siblings, 1 reply; 21+ messages in thread
From: Andrea Righi @ 2026-05-06 18:11 UTC (permalink / raw)
To: Dietmar Eggemann
Cc: Vincent Guittot, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
K Prateek Nayak, Christian Loehle, Koba Ko, Felix Abecassis,
Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel
Hi Dietmar and Vincent,
On Wed, May 06, 2026 at 07:01:35PM +0200, Dietmar Eggemann wrote:
> On 06.05.26 14:59, Vincent Guittot wrote:
> > On Tue, 28 Apr 2026 at 16:44, Andrea Righi <arighi@nvidia.com> wrote:
> >>
> >> From: K Prateek Nayak <kprateek.nayak@amd.com>
>
> [...]
>
> >> @@ -8026,10 +8027,28 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
> >> util_min = uclamp_eff_value(p, UCLAMP_MIN);
> >> util_max = uclamp_eff_value(p, UCLAMP_MAX);
> >>
> >> + if (sched_feat(SIS_UTIL) && sd->shared) {
> >> + /*
> >> + * Same nr_idle_scan hint as select_idle_cpu(), nr only limits
> >> + * the scan when not preferring an idle core.
> >> + */
> >> + nr = READ_ONCE(sd->shared->nr_idle_scan) + 1;
> >> + /* overloaded domain is unlikely to have idle cpu/core */
> >> + if (nr == 1)
> >> + return -1;
> >> + }
> >> +
> >> for_each_cpu_wrap(cpu, cpus, target) {
> >> bool preferred_core = !prefers_idle_core || is_core_idle(cpu);
> >> unsigned long cpu_cap = capacity_of(cpu);
> >>
> >> + /*
> >> + * Good-enough early exit (mirrors select_idle_cpu() logic).
> >> + */
> >> + if (!prefers_idle_core &&
> >> + --nr <= 0 && best_fits == ASYM_IDLE_CORE_UCLAMP_MISFIT)
> >
> > With SMT, !prefers_idle_core implies that there is no idle core; is
> > best_fits == ASYM_IDLE_CORE_UCLAMP_MISFIT really expected in that
> > case?
> >
> > With !SMT, !prefers_idle_core is always true and we will bail out
> > early as expected
>
> I struggle to comprehend:
>
> I assume the mirrored select_idle_cpu() logic is:
>
> for_each_cpu_wrap(cpu, cpus, target + 1)
>
> if (has_idle_core)
>
> else
> if (--nr <= 0)
> return -1
So, the logic in select_idle_cpu() is that as soon as nr <= 0, we stop the walk
and return -1, without any "only stop if the answer is good enough" guard.
With this change in select_idle_capacity(), when nr is exhausted, we stop only
if best_cpu is "good enough" (ASYM_IDLE_CORE_UCLAMP_MISFIT); otherwise we keep
scanning. Therefore, we're not perfectly mirroring select_idle_cpu().
>
> Should this condition not be just:
>
> if (!prefers_idle_core && --nr <= 0)
> return best_cpu
I think this would match select_idle_cpu() more closely. However,
select_idle_cpu() doesn't have the "best partial idle placement" logic at all;
it either returns an idle CPU or -1.
I guess it's a policy decision here: do we want to mirror the scan bound
exactly (nr <= 0 -> hard stop) or allow extra scanning based on ranking
quality (nr <= 0 -> stop early only if satisfied)?
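IOW, the two options would look something like this (sketch):

	/* (a) mirror select_idle_cpu(): hard stop once the scan budget is gone */
	if (!prefers_idle_core && --nr <= 0)
		return best_cpu;

	/* (b) what the patch does: stop early only with a good-enough candidate */
	if (!prefers_idle_core && --nr <= 0 &&
	    best_fits == ASYM_IDLE_CORE_UCLAMP_MISFIT)
		return best_cpu;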
Thanks,
-Andrea
>
> since if we do a:
>
> if (!choose_idle_cpu(cpu, p))
> continue;
>
> right after that?
>
> best_cpu is -1 by default so sis() will return target, in case we
> already found a best_cpu then sis() will return this instead.
>
> What do I miss here?
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 3/5] sched/fair: Prefer fully-idle SMT cores in asym-capacity idle selection
2026-05-06 10:29 ` Vincent Guittot
2026-05-06 12:34 ` Vincent Guittot
@ 2026-05-06 18:15 ` Andrea Righi
1 sibling, 0 replies; 21+ messages in thread
From: Andrea Righi @ 2026-05-06 18:15 UTC (permalink / raw)
To: Vincent Guittot
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
K Prateek Nayak, Christian Loehle, Koba Ko, Felix Abecassis,
Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel
Hi Vincent,
On Wed, May 06, 2026 at 12:29:10PM +0200, Vincent Guittot wrote:
> On Tue, 28 Apr 2026 at 16:44, Andrea Righi <arighi@nvidia.com> wrote:
> >
> > On systems with asymmetric CPU capacity (e.g., ACPI/CPPC reporting
> > different per-core frequencies), the wakeup path uses
> > select_idle_capacity() and prioritizes idle CPUs with higher capacity
> > for better task placement. However, when those CPUs belong to SMT cores,
> > their effective capacity can be much lower than the nominal capacity
> > when the sibling thread is busy: SMT siblings compete for shared
> > resources, so a "high capacity" CPU that is idle but whose sibling is
> > busy does not deliver its full capacity. This effective capacity
> > reduction cannot be modeled by the static capacity value alone.
> >
> > Introduce SMT awareness in the asym-capacity idle selection policy: when
> > SMT is active, always prefer fully-idle SMT cores over partially-idle
> > ones.
> >
> > Prioritizing fully-idle SMT cores yields better task placement because
> > the effective capacity of partially-idle SMT cores is reduced; always
> > preferring them when available leads to more accurate capacity usage on
> > task wakeup.
> >
> > On an SMT system with asymmetric CPU capacities, SMT-aware idle
> > selection has been shown to improve throughput by around 15-18% for
> > CPU-bound workloads running a number of tasks equal to the number of
> > SMT cores.
> >
> > Cc: Vincent Guittot <vincent.guittot@linaro.org>
> > Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> > Cc: Christian Loehle <christian.loehle@arm.com>
> > Cc: Koba Ko <kobak@nvidia.com>
> > Reviewed-by: K Prateek Nayak <kprateek.nayak@amd.com>
> > Reported-by: Felix Abecassis <fabecassis@nvidia.com>
> > Signed-off-by: Andrea Righi <arighi@nvidia.com>
> > ---
> > kernel/sched/fair.c | 70 +++++++++++++++++++++++++++++++++++++++++----
> > 1 file changed, 65 insertions(+), 5 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index bbdf537f61154..6a7e4943804b5 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -7989,6 +7989,22 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
> > return idle_cpu;
> > }
> >
> > +/*
> > + * Idle-capacity scan ranks transformed util_fits_cpu() outcomes; lower values
> > + * are more preferred (see select_idle_capacity()).
> > + */
> > +enum asym_fits_state {
> > + /* In descending order of preference */
> > + ASYM_IDLE_CORE_UCLAMP_MISFIT = -4,
> > + ASYM_IDLE_CORE_COMPLETE_MISFIT,
> > + ASYM_IDLE_THREAD_FITS,
> > + ASYM_IDLE_THREAD_UCLAMP_MISFIT,
> > + ASYM_IDLE_COMPLETE_MISFIT,
> > +
> > + /* util_fits_cpu() bias for an idle core. */
> > + ASYM_IDLE_CORE_BIAS = -3,
> > +};
> > +
> > /*
> > * Scan the asym_capacity domain for idle CPUs; pick the first idle one on which
> > * the task fits. If no CPU is big enough, but there are idle ones, try to
> > @@ -7997,8 +8013,9 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
> > static int
> > select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
> > {
> > + bool prefers_idle_core = sched_smt_active() && test_idle_cores(target);
> > unsigned long task_util, util_min, util_max, best_cap = 0;
> > - int fits, best_fits = 0;
> > + int fits, best_fits = ASYM_IDLE_COMPLETE_MISFIT;
> > int cpu, best_cpu = -1;
> > struct cpumask *cpus;
> >
> > @@ -8010,6 +8027,7 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
> > util_max = uclamp_eff_value(p, UCLAMP_MAX);
> >
> > for_each_cpu_wrap(cpu, cpus, target) {
> > + bool preferred_core = !prefers_idle_core || is_core_idle(cpu);
> > unsigned long cpu_cap = capacity_of(cpu);
> >
> > if (!choose_idle_cpu(cpu, p))
> > @@ -8018,7 +8036,7 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
> > fits = util_fits_cpu(task_util, util_min, util_max, cpu);
> >
> > /* This CPU fits with all requirements */
> > - if (fits > 0)
> > + if (fits > 0 && preferred_core)
> > return cpu;
> > /*
> > * Only the min performance hint (i.e. uclamp_min) doesn't fit.
> > @@ -8026,9 +8044,33 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
> > */
> > else if (fits < 0)
> > cpu_cap = get_actual_cpu_capacity(cpu);
> > + /*
> > + * fits > 0 implies we are not on a preferred core
> > + * but the util fits CPU capacity. Set fits to ASYM_IDLE_THREAD_FITS
> > + * so the effective range becomes
> > + * [ASYM_IDLE_THREAD_FITS, ASYM_IDLE_COMPLETE_MISFIT], where:
> > + * ASYM_IDLE_COMPLETE_MISFIT - does not fit
> > + * ASYM_IDLE_THREAD_UCLAMP_MISFIT - fits with the exception of UCLAMP_MIN
> > + * ASYM_IDLE_THREAD_FITS - fits with the exception of preferred_core
> > + */
> > + else if (fits > 0)
> > + fits = ASYM_IDLE_THREAD_FITS;
> > +
> > + /*
> > + * If we are on a preferred core, translate the range of fits
> > + * of [ASYM_IDLE_THREAD_UCLAMP_MISFIT, ASYM_IDLE_COMPLETE_MISFIT] to
> > + * [ASYM_IDLE_CORE_UCLAMP_MISFIT, ASYM_IDLE_CORE_COMPLETE_MISFIT].
> > + * This ensures that an idle core is always given priority over
> > + * (partially) busy core.
> > + *
> > + * A fully fitting idle core would have returned early and hence
> > + * fits > 0 for preferred_core need not be dealt with.
> > + */
> > + if (preferred_core)
> > + fits += ASYM_IDLE_CORE_BIAS;
>
> It might be good to add a comment stating that if the system doesn't
> have SMT, prefers_idle_core is always false and preferred_core is always true.
>
> This is okay because CPU == Core in this case but the value differs
> from the default 0 or -1 of util_fits_cpu
Ack.
>
> >
> > /*
> > - * First, select CPU which fits better (-1 being better than 0).
> > + * First, select CPU which fits better (lower is more preferred).
> > * Then, select the one with best capacity at same level.
> > */
> > if ((fits < best_fits) ||
> > @@ -8039,6 +8081,19 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
> > }
> > }
> >
> > + /*
> > + * A value in the [ASYM_IDLE_CORE_UCLAMP_MISFIT, ASYM_IDLE_CORE_BIAS]
>
> s/ASYM_IDLE_CORE_BIAS/ASYM_IDLE_CORE_COMPLETE_MISFIT/
>
> ASYM_IDLE_CORE_BIAS is an offset to move an idle core that doesn't
> fully fit in the preferred range [ASYM_IDLE_CORE_UCLAMP_MISFIT,
> ASYM_IDLE_CORE_COMPLETE_MISFIT]
>
> Keeping in mind that ASYM_IDLE_CORE_BIAS == -3 == ASYM_IDLE_CORE_COMPLETE_MISFIT
Ah yes, using ASYM_IDLE_CORE_BIAS is just confusing; we should definitely use
[ASYM_IDLE_CORE_UCLAMP_MISFIT, ASYM_IDLE_CORE_COMPLETE_MISFIT]. Will fix this.
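
For the record, here's the arithmetic behind the bias as a standalone
userspace snippet (enum values copied from this patch; illustration only,
not kernel code):

    #include <stdio.h>

    enum asym_fits_state {
            ASYM_IDLE_CORE_UCLAMP_MISFIT   = -4,
            ASYM_IDLE_CORE_COMPLETE_MISFIT = -3,
            ASYM_IDLE_THREAD_FITS          = -2,
            ASYM_IDLE_THREAD_UCLAMP_MISFIT = -1,
            ASYM_IDLE_COMPLETE_MISFIT      =  0,
            ASYM_IDLE_CORE_BIAS            = -3,
    };

    int main(void)
    {
            /* fits values reachable on a preferred core before the bias */
            int fits[] = { ASYM_IDLE_THREAD_UCLAMP_MISFIT,
                           ASYM_IDLE_COMPLETE_MISFIT };

            for (int i = 0; i < 2; i++)
                    printf("%d + bias(%d) = %d\n", fits[i],
                           ASYM_IDLE_CORE_BIAS, fits[i] + ASYM_IDLE_CORE_BIAS);
            /*
             * Prints:
             *   -1 + bias(-3) = -4  (ASYM_IDLE_CORE_UCLAMP_MISFIT)
             *    0 + bias(-3) = -3  (ASYM_IDLE_CORE_COMPLETE_MISFIT)
             * i.e. the bias maps the idle-thread range onto the idle-core
             * range, which is why the bias and the misfit value share -3.
             */
            return 0;
    }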
>
> > + * range means the chosen CPU is in a fully idle SMT core. Values above
> > + * ASYM_IDLE_CORE_BIAS mean we never ranked such a CPU best.
>
> s/ASYM_IDLE_CORE_BIAS/ASYM_IDLE_CORE_COMPLETE_MISFIT/
Ack.
>
> > + *
> > + * The asym-capacity wakeup path returns from select_idle_sibling()
> > + * after this function and never runs select_idle_cpu(), so the usual
> > + * select_idle_cpu() tail that clears idle cores must live here when the
> > + * idle-core preference did not win.
> > + */
> > + if (prefers_idle_core && best_fits > ASYM_IDLE_CORE_BIAS)
>
> s/ASYM_IDLE_CORE_BIAS/ASYM_IDLE_CORE_COMPLETE_MISFIT/
Ack.
>
> > + set_idle_cores(target, false);
> > +
> > return best_cpu;
> > }
> >
> > @@ -8047,12 +8102,17 @@ static inline bool asym_fits_cpu(unsigned long util,
> > unsigned long util_max,
> > int cpu)
> > {
> > - if (sched_asym_cpucap_active())
> > + if (sched_asym_cpucap_active()) {
> > /*
> > * Return true only if the cpu fully fits the task requirements
> > * which include the utilization and the performance hints.
> > + *
> > + * When SMT is active, also require that the core has no busy
> > + * siblings.
> > */
> > - return (util_fits_cpu(util, util_min, util_max, cpu) > 0);
> > + return (!sched_smt_active() || is_core_idle(cpu)) &&
> > + (util_fits_cpu(util, util_min, util_max, cpu) > 0);
> > + }
> >
> > return true;
> > }
> > --
> > 2.54.0
> >
Thanks,
-Andrea
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 3/5] sched/fair: Prefer fully-idle SMT cores in asym-capacity idle selection
2026-05-05 17:20 ` [PATCH 3/5] sched/fair: Prefer fully-idle SMT cores in asym-capacity idle selection Dietmar Eggemann
@ 2026-05-06 18:31 ` Andrea Righi
0 siblings, 0 replies; 21+ messages in thread
From: Andrea Righi @ 2026-05-06 18:31 UTC (permalink / raw)
To: Dietmar Eggemann
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
K Prateek Nayak, Christian Loehle, Koba Ko, Felix Abecassis,
Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel
Hi Dietmar,
On Tue, May 05, 2026 at 07:20:35PM +0200, Dietmar Eggemann wrote:
> On 28.04.26 16:41, Andrea Righi wrote:
> > On systems with asymmetric CPU capacity (e.g., ACPI/CPPC reporting
> > different per-core frequencies), the wakeup path uses
>
> I assume those CPPC systems w/ different per-core frequencies (like your
> Vera) are the only real ones that would make use of this. Mobile
> big.LITTLE/DynamIQ don't have SMT.
>
> Phil mentioned other machines (PowerPC ?) which had issues with using
> select_idle_capacity():
>
> https://lore.kernel.org/r/20260325124840.GA98184@pauld.westford.csb
>
> [...]
>
> > On an SMT system with asymmetric CPU capacities, SMT-aware idle
> > selection has been shown to improve throughput by around 15-18% for
> > CPU-bound workloads running a number of tasks equal to the number of
> > SMT cores.
>
> Just to make sure, this should be your internal NVBLAS benchmark. Is
> this 'ASYM (mainline) vs. ASYM + SMT' or 'NO_ASYM vs. ASYM + SMT'? I'm
> trying to match the cover letter's table numbers.
Yes, the 15-18% is with NVBLAS and it's NO_ASYM (mainline) vs ASYM + SMT. The
speedup of ASYM (mainline) vs ASYM + SMT is around +60% (keep in mind that with
this workload the SMT part plays a big role, because it creates exactly
nr_cpus/2 tasks => 1 task per SMT core, hence the big speedup number).
>
> [...]
>
> > @@ -7997,8 +8013,9 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
> > static int
> > select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
> > {
> > + bool prefers_idle_core = sched_smt_active() && test_idle_cores(target);
>
> nit: why prefers_idle_core and not has_idle_core like in sis()?
Yeah, sounds good, I'll change to has_idle_core.
>
> [...]
>
> > @@ -8047,12 +8102,17 @@ static inline bool asym_fits_cpu(unsigned long util,
> > unsigned long util_max,
> > int cpu)
> > {
> > - if (sched_asym_cpucap_active())
> > + if (sched_asym_cpucap_active()) {
> > /*
> > * Return true only if the cpu fully fits the task requirements
> > * which include the utilization and the performance hints.
> > + *
> > + * When SMT is active, also require that the core has no busy
> > + * siblings.
> > */
> > - return (util_fits_cpu(util, util_min, util_max, cpu) > 0);
> > + return (!sched_smt_active() || is_core_idle(cpu)) &&
> > + (util_fits_cpu(util, util_min, util_max, cpu) > 0);
> > + }
>
> Not sure whether this has been discussed already. This makes all early
> bailout conditions in sis() idle-core-aware for 'ASYM + SMT' but it's
> not for 'NO_ASYM'?
Yeah, that's another difference from NO_ASYM and I think it's worth a comment.
Maybe in the future it'd be interesting to see how NO_ASYM behaves with the same
idle-core-aware early bailout conditions (not for this series, I'd say).
Thanks,
-Andrea
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 5/5] sched/fair: Add SIS_UTIL support to select_idle_capacity()
2026-05-06 18:11 ` Andrea Righi
@ 2026-05-07 6:47 ` Vincent Guittot
2026-05-08 14:49 ` Dietmar Eggemann
0 siblings, 1 reply; 21+ messages in thread
From: Vincent Guittot @ 2026-05-07 6:47 UTC (permalink / raw)
To: Andrea Righi
Cc: Dietmar Eggemann, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
K Prateek Nayak, Christian Loehle, Koba Ko, Felix Abecassis,
Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel
On Wed, 6 May 2026 at 20:11, Andrea Righi <arighi@nvidia.com> wrote:
>
> Hi Dietmar and Vincent,
>
> On Wed, May 06, 2026 at 07:01:35PM +0200, Dietmar Eggemann wrote:
> > On 06.05.26 14:59, Vincent Guittot wrote:
> > > On Tue, 28 Apr 2026 at 16:44, Andrea Righi <arighi@nvidia.com> wrote:
> > >>
> > >> From: K Prateek Nayak <kprateek.nayak@amd.com>
> >
> > [...]
> >
> > >> @@ -8026,10 +8027,28 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
> > >> util_min = uclamp_eff_value(p, UCLAMP_MIN);
> > >> util_max = uclamp_eff_value(p, UCLAMP_MAX);
> > >>
> > >> + if (sched_feat(SIS_UTIL) && sd->shared) {
> > >> + /*
> > >> + * Same nr_idle_scan hint as select_idle_cpu(), nr only limits
> > >> + * the scan when not preferring an idle core.
> > >> + */
> > >> + nr = READ_ONCE(sd->shared->nr_idle_scan) + 1;
> > >> + /* overloaded domain is unlikely to have idle cpu/core */
> > >> + if (nr == 1)
> > >> + return -1;
> > >> + }
> > >> +
> > >> for_each_cpu_wrap(cpu, cpus, target) {
> > >> bool preferred_core = !prefers_idle_core || is_core_idle(cpu);
> > >> unsigned long cpu_cap = capacity_of(cpu);
> > >>
> > >> + /*
> > >> + * Good-enough early exit (mirrors select_idle_cpu() logic).
> > >> + */
> > >> + if (!prefers_idle_core &&
> > >> + --nr <= 0 && best_fits == ASYM_IDLE_CORE_UCLAMP_MISFIT)
> > >
> > > With SMT, !prefers_idle_core implies that there is no idle core; Is
> > > best_fits == ASYM_IDLE_CORE_UCLAMP_MISFIT really expected in such case
> > > ?
> > >
> > > With !SMT, !prefers_idle_core is always true and we will bail out
> > > early as expected
> >
> > I struggle to comprehend:
> >
> > I assume the mirrored select_idle_cpu() logic is:
> >
> > for_each_cpu_wrap(cpu, cpus, target + 1)
> >
> > if (has_idle_core)
> >
> > else
> > if (--nr <= 0)
> > return -1
>
> So, the logic in select_idle_cpu() is that as soon as nr <= 0, we stop the walk
> and return -1, without any "only stop if the answer is good enough" guard.
>
> With this change in select_idle_capacity() when nr is exhausted, we stop only if
> best_cpu is "good enough" (ASYM_IDLE_CORE_UCLAMP_MISFIT), otherwise we keep
> scanning. Therefore, we're not perfectly mirroring select_idle_cpu().
Okay, one reason for my confusion is that:
With !SMT, preferred_core is always true and CPU == core in asym_fits_state.
With SMT and test_idle_cores being true, preferred_core reflects
core/CPU idleness.
But with SMT and test_idle_cores being false, prefers_idle_core is
always false, so preferred_core is always true and we are back to the
!SMT case where CPU == core in the asym_fits_state.
So the condition is relevant:
if (!prefers_idle_core && --nr <= 0 && best_fits ==
ASYM_IDLE_CORE_UCLAMP_MISFIT)
We need a better description of which asym_fits_state range is used in
which conditions.
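
Something along these lines, maybe (my reading of patch 3/5, so please
double-check):

    /*
     * prefers_idle_core = sched_smt_active() && test_idle_cores(target);
     * preferred_core    = !prefers_idle_core || is_core_idle(cpu);
     *
     * !SMT:
     *   prefers_idle_core == false, preferred_core always true; every
     *   reachable fits value gets the bias, so best_fits stays in
     *   [ASYM_IDLE_CORE_UCLAMP_MISFIT, ASYM_IDLE_CORE_COMPLETE_MISFIT]
     *   and CPU == core.
     *
     * SMT && test_idle_cores():
     *   preferred_core == is_core_idle(cpu); CPUs on fully idle cores
     *   land in [ASYM_IDLE_CORE_UCLAMP_MISFIT, ASYM_IDLE_CORE_COMPLETE_MISFIT],
     *   the rest in [ASYM_IDLE_THREAD_FITS, ASYM_IDLE_COMPLETE_MISFIT].
     *
     * SMT && !test_idle_cores():
     *   prefers_idle_core == false, preferred_core always true; same
     *   shape as !SMT, with each CPU effectively treated as a core.
     */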
>
> >
> > Should this condition not be just:
> >
> > if (!prefers_idle_core && --nr <= 0)
> > return best_cpu
>
> I think this would match more closely select_idle_cpu(). However,
> select_idle_cpu() doesn't have the "best partial idle placement" logic at all,
> it either returns an idle CPU or -1.
>
> I guess it's a policy decision here: do we want to mirror the scan bound exactly
> (nr <= 0 -> hard stop) or allow extra scanning based on the ranking quality
> (nr <= 0 -> stop only if the best candidate is good enough)?
The current proposal is OK for me:
With SMT and an idle core, we loop until we find the best idle core.
Without SMT or an idle core, we loop until we find a CPU on which the
task utilization matches at least the max capacity.
>
> Thanks,
> -Andrea
>
> >
> > since if we do a:
> >
> > if (!choose_idle_cpu(cpu, p)))
> > continue;
> >
> > right after that?
> >
> > best_cpu is -1 by default so sis() will return target, in case we
> > already found a best_cpu then sis() will return this instead.
> >
> > What do I miss here?
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 5/5] sched/fair: Add SIS_UTIL support to select_idle_capacity()
2026-05-07 6:47 ` Vincent Guittot
@ 2026-05-08 14:49 ` Dietmar Eggemann
2026-05-08 22:05 ` Andrea Righi
0 siblings, 1 reply; 21+ messages in thread
From: Dietmar Eggemann @ 2026-05-08 14:49 UTC (permalink / raw)
To: Vincent Guittot, Andrea Righi
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Steven Rostedt,
Ben Segall, Mel Gorman, Valentin Schneider, K Prateek Nayak,
Christian Loehle, Koba Ko, Felix Abecassis, Balbir Singh,
Joel Fernandes, Shrikanth Hegde, linux-kernel
On 07.05.26 08:47, Vincent Guittot wrote:
> On Wed, 6 May 2026 at 20:11, Andrea Righi <arighi@nvidia.com> wrote:
>>
>> Hi Dietmar and Vincent,
>>
>> On Wed, May 06, 2026 at 07:01:35PM +0200, Dietmar Eggemann wrote:
>>> On 06.05.26 14:59, Vincent Guittot wrote:
>>>> On Tue, 28 Apr 2026 at 16:44, Andrea Righi <arighi@nvidia.com> wrote:
>>>>>
>>>>> From: K Prateek Nayak <kprateek.nayak@amd.com>
>>>
>>> [...]
>>>
>>>>> @@ -8026,10 +8027,28 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
>>>>> util_min = uclamp_eff_value(p, UCLAMP_MIN);
>>>>> util_max = uclamp_eff_value(p, UCLAMP_MAX);
>>>>>
>>>>> + if (sched_feat(SIS_UTIL) && sd->shared) {
>>>>> + /*
>>>>> + * Same nr_idle_scan hint as select_idle_cpu(), nr only limits
>>>>> + * the scan when not preferring an idle core.
>>>>> + */
>>>>> + nr = READ_ONCE(sd->shared->nr_idle_scan) + 1;
>>>>> + /* overloaded domain is unlikely to have idle cpu/core */
>>>>> + if (nr == 1)
>>>>> + return -1;
>>>>> + }
>>>>> +
>>>>> for_each_cpu_wrap(cpu, cpus, target) {
>>>>> bool preferred_core = !prefers_idle_core || is_core_idle(cpu);
>>>>> unsigned long cpu_cap = capacity_of(cpu);
>>>>>
>>>>> + /*
>>>>> + * Good-enough early exit (mirrors select_idle_cpu() logic).
>>>>> + */
>>>>> + if (!prefers_idle_core &&
>>>>> + --nr <= 0 && best_fits == ASYM_IDLE_CORE_UCLAMP_MISFIT)
>>>>
>>>> With SMT, !prefers_idle_core implies that there is no idle core; Is
>>>> best_fits == ASYM_IDLE_CORE_UCLAMP_MISFIT really expected in such case
>>>> ?
>>>>
>>>> With !SMT, !prefers_idle_core is always true and we will bail out
>>>> early as expected
>>>
>>> I struggle to comprehend:
>>>
>>> I assume the mirrored select_idle_cpu() logic is:
>>>
>>> for_each_cpu_wrap(cpu, cpus, target + 1)
>>>
>>> if (has_idle_core)
>>>
>>> else
>>> if (--nr <= 0)
>>> return -1
>>
>> So, the logic in select_idle_cpu() is that as soon as nr <= 0, we stop the walk
>> and return -1, without any "only stop if the answer is good enough" guard.
>>
>> With this change in select_idle_capacity() when nr is exhausted, we stop only if
>> best_cpu is "good enough" (ASYM_IDLE_CORE_UCLAMP_MISFIT), otherwise we keep
>> scanning. Therefore, we're not perfectly mirroring select_idle_cpu().
But when '--nr <= 0', does it actually make sense to continue scanning
for an _idle_ CPU?
for_each_cpu_wrap(cpu, cpus, target)
if (!prefers_idle_core &&
--nr <= 0 && best_fits == ASYM_IDLE_CORE_UCLAMP_MISFIT)
return best_cpu;
if (!choose_idle_cpu(cpu, p)) <--- !!!
continue;
I thought we want to bail since it doesn't. The likelihood that
choose_idle_cpu() will return 0 is high, so from the point where '--nr <= 0'
we would likely not be able to reach the condition that alters best_cpu anymore?
Isn't this similar to select_idle_cpu()?
for_each_cpu_wrap(cpu, cpus, target + 1)
else
if (--nr <= 0)
return -1;
idle_cpu = __select_idle_cpu(cpu, p);
choose_idle_cpu(cpu, p)
if ((unsigned int)idle_cpu < nr_cpumask_bits)
break;
[...]
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH 5/5] sched/fair: Add SIS_UTIL support to select_idle_capacity()
2026-05-08 14:49 ` Dietmar Eggemann
@ 2026-05-08 22:05 ` Andrea Righi
0 siblings, 0 replies; 21+ messages in thread
From: Andrea Righi @ 2026-05-08 22:05 UTC (permalink / raw)
To: Dietmar Eggemann
Cc: Vincent Guittot, Ingo Molnar, Peter Zijlstra, Juri Lelli,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
K Prateek Nayak, Christian Loehle, Koba Ko, Felix Abecassis,
Balbir Singh, Joel Fernandes, Shrikanth Hegde, linux-kernel
Hi Dietmar,
On Fri, May 08, 2026 at 04:49:06PM +0200, Dietmar Eggemann wrote:
> On 07.05.26 08:47, Vincent Guittot wrote:
> > On Wed, 6 May 2026 at 20:11, Andrea Righi <arighi@nvidia.com> wrote:
> >>
> >> Hi Dietmar and Vincent,
> >>
> >> On Wed, May 06, 2026 at 07:01:35PM +0200, Dietmar Eggemann wrote:
> >>> On 06.05.26 14:59, Vincent Guittot wrote:
> >>>> On Tue, 28 Apr 2026 at 16:44, Andrea Righi <arighi@nvidia.com> wrote:
> >>>>>
> >>>>> From: K Prateek Nayak <kprateek.nayak@amd.com>
> >>>
> >>> [...]
> >>>
> >>>>> @@ -8026,10 +8027,28 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
> >>>>> util_min = uclamp_eff_value(p, UCLAMP_MIN);
> >>>>> util_max = uclamp_eff_value(p, UCLAMP_MAX);
> >>>>>
> >>>>> + if (sched_feat(SIS_UTIL) && sd->shared) {
> >>>>> + /*
> >>>>> + * Same nr_idle_scan hint as select_idle_cpu(), nr only limits
> >>>>> + * the scan when not preferring an idle core.
> >>>>> + */
> >>>>> + nr = READ_ONCE(sd->shared->nr_idle_scan) + 1;
> >>>>> + /* overloaded domain is unlikely to have idle cpu/core */
> >>>>> + if (nr == 1)
> >>>>> + return -1;
> >>>>> + }
> >>>>> +
> >>>>> for_each_cpu_wrap(cpu, cpus, target) {
> >>>>> bool preferred_core = !prefers_idle_core || is_core_idle(cpu);
> >>>>> unsigned long cpu_cap = capacity_of(cpu);
> >>>>>
> >>>>> + /*
> >>>>> + * Good-enough early exit (mirrors select_idle_cpu() logic).
> >>>>> + */
> >>>>> + if (!prefers_idle_core &&
> >>>>> + --nr <= 0 && best_fits == ASYM_IDLE_CORE_UCLAMP_MISFIT)
> >>>>
> >>>> With SMT, !prefers_idle_core implies that there is no idle core; Is
> >>>> best_fits == ASYM_IDLE_CORE_UCLAMP_MISFIT really expected in such case
> >>>> ?
> >>>>
> >>>> With !SMT, !prefers_idle_core is always true and we will bail out
> >>>> early as expected
> >>>
> >>> I struggle to comprehend:
> >>>
> >>> I assume the mirrored select_idle_cpu() logic is:
> >>>
> >>> for_each_cpu_wrap(cpu, cpus, target + 1)
> >>>
> >>> if (has_idle_core)
> >>>
> >>> else
> >>> if (--nr <= 0)
> >>> return -1
> >>
> >> So, the logic in select_idle_cpu() is that as soon as nr <= 0, we stop the walk
> >> and return -1, without any "only stop if the answer is good enough" guard.
> >>
> >> With this change in select_idle_capacity() when nr is exhausted, we stop only if
> >> best_cpu is "good enough" (ASYM_IDLE_CORE_UCLAMP_MISFIT), otherwise we keep
> >> scanning. Therefore, we're not perfectly mirroring select_idle_cpu().
>
> But when '--nr <= 0', does it actually make sense to continue scanning
> for an _idle_ CPU?
>
> for_each_cpu_wrap(cpu, cpus, target)
>
> if (!prefers_idle_core &&
> --nr <= 0 && best_fits == ASYM_IDLE_CORE_UCLAMP_MISFIT)
> return best_cpu;
>
> if (!choose_idle_cpu(cpu, p)) <--- !!!
> continue;
Hm... yeah, and only an idle CPU can update best_fits via the ranking down below:
/*
* First, select CPU which fits better (lower is more preferred).
* Then, select the one with best capacity at same level.
*/
if ((fits < best_fits) ||
((fits == best_fits) && (cpu_cap > best_cap))) {
best_cap = cpu_cap;
best_cpu = cpu;
best_fits = fits;
}
So, we'll likely keep skipping CPUs via choose_idle_cpu(), and the chance of
best_fits flipping to ASYM_IDLE_CORE_UCLAMP_MISFIT after nr is exhausted is low.
>
> I thought we want to bail since it doesn't. The likelihood that
> choose_idle_cpu() will return 0 is high so from the point of '--nr <= 0'
> we would not be able to reach the condition to alter best_cpu anymore?
>
> Isn't this similar to select_idle_cpu()?
>
> for_each_cpu_wrap(cpu, cpus, target + 1)
>
> else
> if (--nr <= 0)
> return -1;
> idle_cpu = __select_idle_cpu(cpu, p);
> choose_idle_cpu(cpu, p)
> if ((unsigned int)idle_cpu < nr_cpumask_bits)
> break;
Yes, with that said I think the right thing to do is to just mirror
select_idle_cpu() unconditionally and do:

	if (!prefers_idle_core && --nr <= 0)
		return best_cpu;

If we all agree on this, I'll fold this change into the next version (and re-test).
Thanks,
-Andrea
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH 5/5] sched/fair: Add SIS_UTIL support to select_idle_capacity()
2026-05-09 18:01 Andrea Righi
@ 2026-05-09 18:01 ` Andrea Righi
0 siblings, 0 replies; 21+ messages in thread
From: Andrea Righi @ 2026-05-09 18:01 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, K Prateek Nayak, Christian Loehle, Phil Auld,
Koba Ko, Felix Abecassis, Balbir Singh, Joel Fernandes,
Shrikanth Hegde, linux-kernel
From: K Prateek Nayak <kprateek.nayak@amd.com>
Add to select_idle_capacity() the same SIS_UTIL-controlled idle-scan
mechanism already used by select_idle_cpu(): when sched_feat(SIS_UTIL)
is enabled and the LLC domain has sched_domain_shared data, derive the
per-attempt scan limit from sd->shared->nr_idle_scan.
That bounds the walk on large LLCs: once nr_idle_scan is exhausted,
return the best CPU seen so far. The early exit is gated on
!has_idle_core so an active idle-core search (SMT with idle cores
reported by test_idle_cores()) isn't cut short before it gets a chance
to find one.
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Co-developed-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
kernel/sched/fair.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2ddba8bd27e59..494149f14d98f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8084,6 +8084,7 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
int fits, best_fits = ASYM_IDLE_COMPLETE_MISFIT;
int cpu, best_cpu = -1;
struct cpumask *cpus;
+ int nr = INT_MAX;
cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
@@ -8092,10 +8093,28 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
util_min = uclamp_eff_value(p, UCLAMP_MIN);
util_max = uclamp_eff_value(p, UCLAMP_MAX);
+ if (sched_feat(SIS_UTIL) && sd->shared) {
+ /*
+ * Same nr_idle_scan hint as select_idle_cpu(), nr only limits
+ * the scan when not preferring an idle core.
+ */
+ nr = READ_ONCE(sd->shared->nr_idle_scan) + 1;
+ /* overloaded domain is unlikely to have idle cpu/core */
+ if (nr == 1)
+ return -1;
+ }
+
for_each_cpu_wrap(cpu, cpus, target) {
bool preferred_core = !has_idle_core || is_core_idle(cpu);
unsigned long cpu_cap = capacity_of(cpu);
+ /*
+ * Stop when the nr_idle_scan is exhausted (mirrors
+ * select_idle_cpu() logic).
+ */
+ if (!has_idle_core && --nr <= 0)
+ return best_cpu;
+
if (!choose_idle_cpu(cpu, p))
continue;
--
2.54.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
* [PATCH 5/5] sched/fair: Add SIS_UTIL support to select_idle_capacity()
2026-05-09 18:07 [PATCH v6 0/5 RESEND] sched/fair: SMT-aware asymmetric CPU capacity Andrea Righi
@ 2026-05-09 18:07 ` Andrea Righi
2026-05-11 13:08 ` Vincent Guittot
0 siblings, 1 reply; 21+ messages in thread
From: Andrea Righi @ 2026-05-09 18:07 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot
Cc: Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
Valentin Schneider, K Prateek Nayak, Christian Loehle, Phil Auld,
Koba Ko, Felix Abecassis, Balbir Singh, Joel Fernandes,
Shrikanth Hegde, linux-kernel
From: K Prateek Nayak <kprateek.nayak@amd.com>
Add to select_idle_capacity() the same SIS_UTIL-controlled idle-scan
mechanism already used by select_idle_cpu(): when sched_feat(SIS_UTIL)
is enabled and the LLC domain has sched_domain_shared data, derive the
per-attempt scan limit from sd->shared->nr_idle_scan.
That bounds the walk on large LLCs: once nr_idle_scan is exhausted,
return the best CPU seen so far. The early exit is gated on
!has_idle_core so an active idle-core search (SMT with idle cores
reported by test_idle_cores()) isn't cut short before it gets a chance
to find one.
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Co-developed-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
kernel/sched/fair.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2ddba8bd27e59..494149f14d98f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8084,6 +8084,7 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
int fits, best_fits = ASYM_IDLE_COMPLETE_MISFIT;
int cpu, best_cpu = -1;
struct cpumask *cpus;
+ int nr = INT_MAX;
cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
@@ -8092,10 +8093,28 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
util_min = uclamp_eff_value(p, UCLAMP_MIN);
util_max = uclamp_eff_value(p, UCLAMP_MAX);
+ if (sched_feat(SIS_UTIL) && sd->shared) {
+ /*
+ * Same nr_idle_scan hint as select_idle_cpu(), nr only limits
+ * the scan when not preferring an idle core.
+ */
+ nr = READ_ONCE(sd->shared->nr_idle_scan) + 1;
+ /* overloaded domain is unlikely to have idle cpu/core */
+ if (nr == 1)
+ return -1;
+ }
+
for_each_cpu_wrap(cpu, cpus, target) {
bool preferred_core = !has_idle_core || is_core_idle(cpu);
unsigned long cpu_cap = capacity_of(cpu);
+ /*
+ * Stop when the nr_idle_scan is exhausted (mirrors
+ * select_idle_cpu() logic).
+ */
+ if (!has_idle_core && --nr <= 0)
+ return best_cpu;
+
if (!choose_idle_cpu(cpu, p))
continue;
--
2.54.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
* Re: [PATCH 5/5] sched/fair: Add SIS_UTIL support to select_idle_capacity()
2026-05-09 18:07 ` [PATCH 5/5] sched/fair: Add SIS_UTIL support to select_idle_capacity() Andrea Righi
@ 2026-05-11 13:08 ` Vincent Guittot
0 siblings, 0 replies; 21+ messages in thread
From: Vincent Guittot @ 2026-05-11 13:08 UTC (permalink / raw)
To: Andrea Righi
Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
K Prateek Nayak, Christian Loehle, Phil Auld, Koba Ko,
Felix Abecassis, Balbir Singh, Joel Fernandes, Shrikanth Hegde,
linux-kernel
On Sat, 9 May 2026 at 20:10, Andrea Righi <arighi@nvidia.com> wrote:
>
> From: K Prateek Nayak <kprateek.nayak@amd.com>
>
> Add to select_idle_capacity() the same SIS_UTIL-controlled idle-scan
> mechanism already used by select_idle_cpu(): when sched_feat(SIS_UTIL)
> is enabled and the LLC domain has sched_domain_shared data, derive the
> per-attempt scan limit from sd->shared->nr_idle_scan.
>
> That bounds the walk on large LLCs: once nr_idle_scan is exhausted,
> return the best CPU seen so far. The early exit is gated on
> !has_idle_core so an active idle-core search (SMT with idle cores
> reported by test_idle_cores()) isn't cut short before it gets a chance
> to find one.
>
> Cc: Vincent Guittot <vincent.guittot@linaro.org>
> Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> Co-developed-by: Andrea Righi <arighi@nvidia.com>
> Signed-off-by: Andrea Righi <arighi@nvidia.com>
> Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
> kernel/sched/fair.c | 19 +++++++++++++++++++
> 1 file changed, 19 insertions(+)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 2ddba8bd27e59..494149f14d98f 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8084,6 +8084,7 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
> int fits, best_fits = ASYM_IDLE_COMPLETE_MISFIT;
> int cpu, best_cpu = -1;
> struct cpumask *cpus;
> + int nr = INT_MAX;
>
> cpus = this_cpu_cpumask_var_ptr(select_rq_mask);
> cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> @@ -8092,10 +8093,28 @@ select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
> util_min = uclamp_eff_value(p, UCLAMP_MIN);
> util_max = uclamp_eff_value(p, UCLAMP_MAX);
>
> + if (sched_feat(SIS_UTIL) && sd->shared) {
> + /*
> + * Same nr_idle_scan hint as select_idle_cpu(), nr only limits
> + * the scan when not preferring an idle core.
> + */
> + nr = READ_ONCE(sd->shared->nr_idle_scan) + 1;
> + /* overloaded domain is unlikely to have idle cpu/core */
> + if (nr == 1)
> + return -1;
> + }
> +
> for_each_cpu_wrap(cpu, cpus, target) {
> bool preferred_core = !has_idle_core || is_core_idle(cpu);
> unsigned long cpu_cap = capacity_of(cpu);
>
> + /*
> + * Stop when the nr_idle_scan is exhausted (mirrors
> + * select_idle_cpu() logic).
> + */
> + if (!has_idle_core && --nr <= 0)
> + return best_cpu;
> +
> if (!choose_idle_cpu(cpu, p))
> continue;
>
> --
> 2.54.0
>
^ permalink raw reply [flat|nested] 21+ messages in thread
end of thread, other threads:[~2026-05-11 13:08 UTC | newest]
Thread overview: 21+ messages
[not found] <20260428144352.3575863-1-arighi@nvidia.com>
[not found] ` <20260428144352.3575863-2-arighi@nvidia.com>
2026-05-05 9:15 ` [PATCH 1/5] sched/fair: Drop redundant RCU read lock in NOHZ kick path Dietmar Eggemann
2026-05-05 9:22 ` Andrea Righi
[not found] ` <20260428144352.3575863-4-arighi@nvidia.com>
2026-05-05 17:20 ` [PATCH 3/5] sched/fair: Prefer fully-idle SMT cores in asym-capacity idle selection Dietmar Eggemann
2026-05-06 18:31 ` Andrea Righi
2026-05-06 10:29 ` Vincent Guittot
2026-05-06 12:34 ` Vincent Guittot
2026-05-06 18:15 ` Andrea Righi
2026-05-05 20:40 ` [PATCH v5 0/5] sched/fair: SMT-aware asymmetric CPU capacity Dietmar Eggemann
[not found] ` <20260428144352.3575863-3-arighi@nvidia.com>
2026-05-05 12:48 ` [PATCH 2/5] sched/fair: Attach sched_domain_shared to sd_asym_cpucapacity Dietmar Eggemann
2026-05-06 9:45 ` Vincent Guittot
2026-05-06 10:19 ` K Prateek Nayak
2026-05-06 10:30 ` Vincent Guittot
[not found] ` <20260428144352.3575863-6-arighi@nvidia.com>
2026-05-06 12:59 ` [PATCH 5/5] sched/fair: Add SIS_UTIL support to select_idle_capacity() Vincent Guittot
2026-05-06 17:01 ` Dietmar Eggemann
2026-05-06 18:11 ` Andrea Righi
2026-05-07 6:47 ` Vincent Guittot
2026-05-08 14:49 ` Dietmar Eggemann
2026-05-08 22:05 ` Andrea Righi
2026-05-09 18:01 Andrea Righi
2026-05-09 18:01 ` [PATCH 5/5] sched/fair: Add SIS_UTIL support to select_idle_capacity() Andrea Righi
-- strict thread matches above, loose matches on Subject: below --
2026-05-09 18:07 [PATCH v6 0/5 RESEND] sched/fair: SMT-aware asymmetric CPU capacity Andrea Righi
2026-05-09 18:07 ` [PATCH 5/5] sched/fair: Add SIS_UTIL support to select_idle_capacity() Andrea Righi
2026-05-11 13:08 ` Vincent Guittot