public inbox for linux-pm@vger.kernel.org
From: Qais Yousef <qyousef@layalina.io>
To: Lukasz Luba <lukasz.luba@arm.com>
Cc: Xuewen Yan <xuewen.yan94@gmail.com>,
	Viresh Kumar <viresh.kumar@linaro.org>,
	Xuewen Yan <xuewen.yan@unisoc.com>,
	rui.zhang@intel.com, rafael@kernel.org, linux-pm@vger.kernel.org,
	amit.kachhap@gmail.com, daniel.lezcano@kernel.org,
	linux-kernel@vger.kernel.org, ke.wang@unisoc.com,
	di.shen@unisoc.com, jeson.gao@unisoc.com,
	Peter Zijlstra <peterz@infradead.org>,
	Vincent Guittot <vincent.guittot@linaro.org>
Subject: Re: [RFC PATCH 1/2] thermal/cpufreq_cooling: remove unused cpu_idx in get_load()
Date: Thu, 26 Mar 2026 09:05:54 +0000	[thread overview]
Message-ID: <20260326090554.jerlaudbe3rkovsi@airbuntu> (raw)
In-Reply-To: <35d472ac-8a58-44c5-a0b1-5e1de8ac6cfc@arm.com>

On 03/24/26 10:46, Lukasz Luba wrote:
> 
> On 3/24/26 02:20, Xuewen Yan wrote:
> > On Mon, Mar 23, 2026 at 9:25 PM Lukasz Luba <lukasz.luba@arm.com> wrote:
> > > 
> > > 
> > > 
> > > On 3/23/26 11:06, Viresh Kumar wrote:
> > > > On 23-03-26, 10:52, Lukasz Luba wrote:
> > > > > > How is that okay? What am I missing?
> > > > 
> > > > I was missing !SMP :)
> > > > 
> > > > > Right, there is a mix of two things.
> > > > > The 'i' left but should be removed as well, since
> > > > > this is !SMP code with only 1 cpu and i=0.
> > 
> > That's also why we sent out patch 1/2; after all, it is always 0 on
> > !SMP systems.
> > 
> > > > > 
> > > > > The whole split that was made for getting
> > > > > the load or utilization from the CPU(s) needs
> > > > > to be cleaned up. The compiled code looks different
> > > > > since the compiler knows a non-SMP config is used.
> > > > 
> > > > Right, we are allocating that for num_cpus (which should be 1 CPU
> > > > anyway). The entire thing needs to be cleaned up.
> > > > 
> > > > > Do you want to clean that or I should do this?
> > > > 
> > > > It would be helpful if you can do it :)
> > > > 
> > > 
> > > OK, I will. Thanks for your involvement Viresh!
> > > 
> > > Xuewen, please hold off on your v2; I will send
> > > a redesign of this leftover code today.
> > 
> > Okay, and Qais's point is also worth considering: do we actually need
> > sched_cpu_util()?
> > The way I see it, generally speaking, the request_power derived from
> > idle_time might be higher than what we get from sched_cpu_util().
> > Take this scenario as an example:
> > Consider a CPU running at the lowest frequency with 50% idle time,
> > versus one running at the highest frequency with the same 50% idle
> > time.
> > In this case, using idle_time yields the same load value for both.
> > However, sched_cpu_util() would report a lower load when the CPU
> > frequency is low. This results in a smaller request_power...

Invariance will stretch the settling time, but the signal should settle to the
correct value eventually. More generally, another argument against util is that
it has grown into a description of compute demand rather than of how truly idle
the system is.

> 
> Right, there are 2 things to consider:
> 1. what is the utilization when the CPU still has idle time, e.g.
>    this 50% that you mentioned
> 2. what is the utilization when there is no idle time and CPU
>    is fully busy (and starts throttling due to heat)

Hmm, I think what you're trying to say here is that we need to distinguish
between the two cases, 50% busy vs. fully busy? I think "how idle is the
system" is a better question to ask than "what is the utilization", given how
ubiquitous that signal has become nowadays.

> 
> In this thermal fwk we are mostly in the 2nd case. In that case the

But from the power allocator's perspective (which I think is the context here,
right?), what you want to know is whether you can shift power?

> utilization on the CPU's runqueue goes to 1024 no matter the CPU's frequency.
> We know the highest frequency that was allowed to run and we pick the power
> value from the EM for it. That's why the estimation is not that bad (apart
> from power variation for different flavors of workloads: heavy SIMD vs.
> normal integer/load).
> 
> In the 1st scenario we might underestimate the power, but that
> is not a thermal stress situation anyway, so the max OPP is
> still allowed.
> 
> So far it is hard to find the best power model to use and a robust CPU
> load mechanism. Adding more complexity and creating some
> over-engineered code in the kernel to maintain might not make sense.
> Thermal solutions are handled in firmware nowadays, since the
> kernel won't react fast enough to some rapid changes.
> 
> We have to balance the complexity here.

I am not versed in all the details, so I'm not sure what complexity you are
referring to. IMHO idle time is a more stable view of how much breathing
room the CPU has. It also deals better with the long decay of blocked load
over-estimating the utilization. AFAICS, just sample idle time over a window
when you need to take a decision and you'd solve several problems in one go.

> Let's improve the situation a bit. It would be very much appreciated if
> you could share whether those changes help your platform
> (some older boards might not show any benefit with the new code).
> 
> Regards,
> Lukasz
> 


Thread overview: 18+ messages
2026-03-20 11:31 [RFC PATCH 1/2] thermal/cpufreq_cooling: remove unused cpu_idx in get_load() Xuewen Yan
2026-03-20 11:31 ` [RFC PATCH 2/2] thermal/cpufreq_cooling: Use idle_time to get cpu_load when scx_enabled Xuewen Yan
2026-03-24  1:41   ` Qais Yousef
2026-03-20 12:32 ` [RFC PATCH 1/2] thermal/cpufreq_cooling: remove unused cpu_idx in get_load() Lukasz Luba
2026-03-21  8:48   ` Xuewen Yan
2026-03-23  5:34   ` Viresh Kumar
2026-03-23  9:20     ` Lukasz Luba
2026-03-23 10:41       ` Viresh Kumar
2026-03-23 10:52         ` Lukasz Luba
2026-03-23 11:06           ` Viresh Kumar
2026-03-23 13:25             ` Lukasz Luba
2026-03-24  2:20               ` Xuewen Yan
2026-03-24 10:46                 ` Lukasz Luba
2026-03-24 12:03                   ` Xuewen Yan
2026-03-25  8:31                     ` Lukasz Luba
2026-03-26  9:05                   ` Qais Yousef [this message]
2026-03-26  9:21                     ` Lukasz Luba
2026-03-28  8:09                       ` Qais Yousef
