public inbox for linux-kernel@vger.kernel.org
From: Andrea Righi <arighi@nvidia.com>
To: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Ingo Molnar <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Valentin Schneider <vschneid@redhat.com>,
	Christian Loehle <christian.loehle@arm.com>,
	Koba Ko <kobak@nvidia.com>,
	Felix Abecassis <fabecassis@nvidia.com>,
	Balbir Singh <balbirs@nvidia.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 3/4] sched/fair: Enable EAS with SMT on SD_ASYM_CPUCAPACITY systems
Date: Fri, 27 Mar 2026 10:45:27 +0100	[thread overview]
Message-ID: <acZRt_zLmbcqaSZP@gpd4> (raw)
In-Reply-To: <CAKfTPtAD_=JatsAi+9EG=D7u5D=pVHN=zj0N0UPrNH8gkb4ejA@mail.gmail.com>

On Fri, Mar 27, 2026 at 09:09:35AM +0100, Vincent Guittot wrote:
> On Thu, 26 Mar 2026 at 16:12, Andrea Righi <arighi@nvidia.com> wrote:
> >
> > Drop the sched_is_eas_possible() guard that rejects EAS whenever SMT is
> > active. This allows EAS to be enabled and perf-domain setup to succeed
> > on SD_ASYM_CPUCAPACITY topologies with SMT enabled.
> 
> I don't think that we want to enable EAS with SMT. So keep EAS and SMT
> exclusive, at least for now.

Ack.

Thanks,
-Andrea

> 
> 
> >
> > Moreover, apply to find_energy_efficient_cpu() the same SMT-aware
> > preference as the non-EAS wakeup path: when SMT is active and there is a
> > fully-idle core in the relevant domain, prefer max-spare-capacity
> > candidates on fully-idle cores. Otherwise, fall back to the prior
> > behavior and also consider partially-idle SMT siblings.
> >
> > Cc: Vincent Guittot <vincent.guittot@linaro.org>
> > Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
> > Cc: Christian Loehle <christian.loehle@arm.com>
> > Cc: Koba Ko <kobak@nvidia.com>
> > Reported-by: Felix Abecassis <fabecassis@nvidia.com>
> > Signed-off-by: Andrea Righi <arighi@nvidia.com>
> > ---
> >  kernel/sched/fair.c     | 50 +++++++++++++++++++++++++++++++++++++++--
> >  kernel/sched/topology.c |  9 --------
> >  2 files changed, 48 insertions(+), 11 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index f8deaaa5bfc85..593a89f688679 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -8658,13 +8658,15 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> >         eenv_task_busy_time(&eenv, p, prev_cpu);
> >
> >         for (; pd; pd = pd->next) {
> > -               unsigned long util_min = p_util_min, util_max = p_util_max;
> >                 unsigned long cpu_cap, cpu_actual_cap, util;
> >                 long prev_spare_cap = -1, max_spare_cap = -1;
> > +               long max_spare_cap_fallback = -1;
> >                 unsigned long rq_util_min, rq_util_max;
> >                 unsigned long cur_delta, base_energy;
> > -               int max_spare_cap_cpu = -1;
> > +               int max_spare_cap_cpu = -1, max_spare_cap_cpu_fallback = -1;
> >                 int fits, max_fits = -1;
> > +               int max_fits_fallback = -1;
> > +               bool prefer_idle_cores;
> >
> >                 if (!cpumask_and(cpus, perf_domain_span(pd), cpu_online_mask))
> >                         continue;
> > @@ -8676,6 +8678,8 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> >                 eenv.cpu_cap = cpu_actual_cap;
> >                 eenv.pd_cap = 0;
> >
> > +               prefer_idle_cores = sched_smt_active() && test_idle_cores(prev_cpu);
> > +
> >                 for_each_cpu(cpu, cpus) {
> >                         struct rq *rq = cpu_rq(cpu);
> >
> > @@ -8687,6 +8691,11 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> >                         if (!cpumask_test_cpu(cpu, p->cpus_ptr))
> >                                 continue;
> >
> > +                       if (prefer_idle_cores && cpu != prev_cpu && !is_core_idle(cpu))
> > +                               goto fallback;
> > +
> > +                       unsigned long util_min = p_util_min, util_max = p_util_max;
> > +
> >                         util = cpu_util(cpu, p, cpu, 0);
> >                         cpu_cap = capacity_of(cpu);
> >
> > @@ -8733,6 +8742,43 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
> >                                 max_spare_cap_cpu = cpu;
> >                                 max_fits = fits;
> >                         }
> > +
> > +fallback:
> > +                       if (!prefer_idle_cores || cpu == prev_cpu || is_core_idle(cpu))
> > +                               continue;
> > +
> > +                       util_min = p_util_min;
> > +                       util_max = p_util_max;
> > +                       util = cpu_util(cpu, p, cpu, 0);
> > +                       cpu_cap = capacity_of(cpu);
> > +
> > +                       if (uclamp_is_used() && !uclamp_rq_is_idle(rq)) {
> > +                               rq_util_min = uclamp_rq_get(rq, UCLAMP_MIN);
> > +                               rq_util_max = uclamp_rq_get(rq, UCLAMP_MAX);
> > +
> > +                               util_min = max(rq_util_min, p_util_min);
> > +                               util_max = max(rq_util_max, p_util_max);
> > +                       }
> > +
> > +                       fits = util_fits_cpu(util, util_min, util_max, cpu);
> > +                       if (!fits)
> > +                               continue;
> > +
> > +                       lsub_positive(&cpu_cap, util);
> > +
> > +                       if ((fits > max_fits_fallback) ||
> > +                           ((fits == max_fits_fallback) &&
> > +                            ((long)cpu_cap > max_spare_cap_fallback))) {
> > +                               max_spare_cap_fallback = cpu_cap;
> > +                               max_spare_cap_cpu_fallback = cpu;
> > +                               max_fits_fallback = fits;
> > +                       }
> > +               }
> > +
> > +               if (max_spare_cap_cpu < 0 && max_spare_cap_cpu_fallback >= 0) {
> > +                       max_spare_cap = max_spare_cap_fallback;
> > +                       max_spare_cap_cpu = max_spare_cap_cpu_fallback;
> > +                       max_fits = max_fits_fallback;
> >                 }
> >
> >                 if (max_spare_cap_cpu < 0 && prev_spare_cap < 0)
> > diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> > index 061f8c85f5552..cb060fe56aec1 100644
> > --- a/kernel/sched/topology.c
> > +++ b/kernel/sched/topology.c
> > @@ -232,15 +232,6 @@ static bool sched_is_eas_possible(const struct cpumask *cpu_mask)
> >                 return false;
> >         }
> >
> > -       /* EAS definitely does *not* handle SMT */
> > -       if (sched_smt_active()) {
> > -               if (sched_debug()) {
> > -                       pr_info("rd %*pbl: Checking EAS, SMT is not supported\n",
> > -                               cpumask_pr_args(cpu_mask));
> > -               }
> > -               return false;
> > -       }
> > -
> >         if (!arch_scale_freq_invariant()) {
> >                 if (sched_debug()) {
> >                         pr_info("rd %*pbl: Checking EAS: frequency-invariant load tracking not yet supported",
> > --
> > 2.53.0
> >

Thread overview: 24+ messages
2026-03-26 15:02 [PATCH 0/4] sched/fair: SMT-aware asymmetric CPU capacity Andrea Righi
2026-03-26 15:02 ` [PATCH 1/4] sched/fair: Prefer fully-idle SMT cores in asym-capacity idle selection Andrea Righi
2026-03-27  8:09   ` Vincent Guittot
2026-03-27  9:46     ` Andrea Righi
2026-03-27 10:44   ` K Prateek Nayak
2026-03-27 10:58     ` Andrea Righi
2026-03-27 11:14       ` K Prateek Nayak
2026-03-27 16:39         ` Andrea Righi
2026-03-26 15:02 ` [PATCH 2/4] sched/fair: Reject misfit pulls onto busy SMT siblings on asym-capacity Andrea Righi
2026-03-26 15:02 ` [PATCH 3/4] sched/fair: Enable EAS with SMT on SD_ASYM_CPUCAPACITY systems Andrea Righi
2026-03-27  8:09   ` Vincent Guittot
2026-03-27  9:45     ` Andrea Righi [this message]
2026-03-26 15:02 ` [PATCH 4/4] sched/fair: Prefer fully-idle SMT core for NOHZ idle load balancer Andrea Righi
2026-03-27  8:45   ` Vincent Guittot
2026-03-27  9:44     ` Andrea Righi
2026-03-27 11:34       ` K Prateek Nayak
2026-03-27 20:36         ` Andrea Righi
2026-03-27 22:45           ` Andrea Righi
2026-03-27 13:44   ` Shrikanth Hegde
2026-03-26 16:33 ` [PATCH 0/4] sched/fair: SMT-aware asymmetric CPU capacity Christian Loehle
2026-03-27  6:52   ` Andrea Righi
2026-03-27 16:31 ` Shrikanth Hegde
2026-03-27 17:08   ` Andrea Righi
2026-03-28  6:51     ` Shrikanth Hegde
