Linux Power Management development
From: Dietmar Eggemann <dietmar.eggemann@arm.com>
To: Vincent Guittot <vincent.guittot@linaro.org>
Cc: linux@armlinux.org.uk, catalin.marinas@arm.com, will@kernel.org,
	sudeep.holla@arm.com, rafael@kernel.org, viresh.kumar@linaro.org,
	agross@kernel.org, andersson@kernel.org,
	konrad.dybcio@linaro.org, mingo@redhat.com, peterz@infradead.org,
	juri.lelli@redhat.com, rostedt@goodmis.org, bsegall@google.com,
	mgorman@suse.de, bristot@redhat.com, vschneid@redhat.com,
	lukasz.luba@arm.com, rui.zhang@intel.com, mhiramat@kernel.org,
	daniel.lezcano@linaro.org, amit.kachhap@gmail.com,
	corbet@lwn.net, gregkh@linuxfoundation.org,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org,
	linux-arm-msm@vger.kernel.org,
	linux-trace-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	qyousef@layalina.io
Subject: Re: [PATCH v3 0/5] Rework system pressure interface to the scheduler
Date: Wed, 10 Jan 2024 19:10:08 +0100	[thread overview]
Message-ID: <5ac0df44-82b6-463b-a805-65f93d181215@arm.com> (raw)
In-Reply-To: <CAKfTPtAOSgnStDSao1QarHuUW9BTfk1o7r6NO4LhwEJMhq1drg@mail.gmail.com>

On 09/01/2024 14:29, Vincent Guittot wrote:
> On Tue, 9 Jan 2024 at 12:34, Dietmar Eggemann <dietmar.eggemann@arm.com> wrote:
>>
>> On 08/01/2024 14:48, Vincent Guittot wrote:
>>> Following the consolidation and cleanup of CPU capacity in [1], this series
>>> reworks how the scheduler gets the pressures on CPUs. We need to take into
>>> account all pressures applied by cpufreq on the compute capacity of a CPU
>>> for dozens of ms or more, and not only the cpufreq cooling device or HW
>>> mitigations. We split the pressure applied on a CPU's capacity in 2 parts:
>>> - one from cpufreq and freq_qos
>>> - one from HW high-frequency mitigation.
>>>
>>> The next step will be to add a dedicated interface for long-standing
>>> capping of the CPU capacity (i.e. for seconds or more) like the
>>> scaling_max_freq of cpufreq sysfs. The latter is already taken into
>>> account by this series, but as a temporary pressure, which is not always
>>> the best choice when we know that it will happen for seconds or more.
>>
>> I guess this is related to the 'user space system pressure' (*) slide of
>> your OSPM '23 talk.
> 
> yes
> 
>>
>> Where do you draw the line when it comes to time between (*) and the
>> 'medium pace system pressure' (e.g. thermal and FREQ_QOS).
> 
> My goal is to consider the /sys/../scaling_max_freq as the 'user space
> system pressure'
> 
>>
>> IIRC, with (*) you want to rebuild the sched domains etc.
> 
> The easiest way would be to rebuild the sched domains, but the cost is
> not small, so I would prefer to skip the rebuild and add a new signal
> that keeps track of this capped capacity.

Are you saying that you don't need to rebuild the sched domains since the
cpu_capacity information of the sched domain hierarchy is independently
updated via:

update_sd_lb_stats() {

  update_group_capacity() {

    if (!child)
      update_cpu_capacity(sd, cpu) {

        capacity = scale_rt_capacity(cpu) {

          max = get_actual_cpu_capacity(cpu) <- (*)
        }

        sdg->sgc->capacity = capacity;
        sdg->sgc->min_capacity = capacity;
        sdg->sgc->max_capacity = capacity;
      }

  }

}
        
(*) influence of temporary and permanent (to be added) frequency
pressure on cpu_capacity (per-cpu and in sd data)
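
The effect of (*) can be sketched with a simplified, standalone model
(hypothetical helper signatures, not the kernel implementation; the
arch/PELT signals are folded into plain parameters):

```c
#include <assert.h>

/*
 * Simplified model of the call chain above. get_actual_cpu_capacity()
 * lowers the original capacity by the larger of the cpufreq and HW
 * pressure signals; scale_rt_capacity() then removes the RT/DL/IRQ
 * utilization from what is left.
 */
unsigned long get_actual_cpu_capacity(unsigned long orig_capacity,
				      unsigned long cpufreq_pressure,
				      unsigned long hw_pressure)
{
	unsigned long pressure = cpufreq_pressure > hw_pressure ?
				 cpufreq_pressure : hw_pressure;

	/* never report a capacity of 0 */
	return pressure >= orig_capacity ? 1 : orig_capacity - pressure;
}

unsigned long scale_rt_capacity(unsigned long orig_capacity,
				unsigned long cpufreq_pressure,
				unsigned long hw_pressure,
				unsigned long rt_dl_irq_util)
{
	unsigned long max = get_actual_cpu_capacity(orig_capacity,
						    cpufreq_pressure,
						    hw_pressure);

	if (rt_dl_irq_util >= max)
		return 1;

	return max - rt_dl_irq_util;
}
```

So a permanent (user-space) cap would show up in cpu_capacity the same
way a temporary one does, without a sched domain rebuild.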


example: hackbench on h960 with IPA:
                                                                                  cap  min  max
...
hackbench-2284 [007] .Ns..  2170.796726: update_group_capacity: sdg !child cpu=7 1017 1017 1017
hackbench-2456 [007] ..s..  2170.920729: update_group_capacity: sdg !child cpu=7 1018 1018 1018
    <...>-2314 [007] ..s1.  2171.044724: update_group_capacity: sdg !child cpu=7 1011 1011 1011
hackbench-2541 [007] ..s..  2171.168734: update_group_capacity: sdg !child cpu=7  918  918  918
hackbench-2558 [007] .Ns..  2171.228716: update_group_capacity: sdg !child cpu=7  912  912  912
    <...>-2321 [007] ..s..  2171.352718: update_group_capacity: sdg !child cpu=7  812  812  812
hackbench-2553 [007] ..s..  2171.476721: update_group_capacity: sdg !child cpu=7  640  640  640
    <...>-2446 [007] ..s2.  2171.600743: update_group_capacity: sdg !child cpu=7  610  610  610
hackbench-2347 [007] ..s..  2171.724738: update_group_capacity: sdg !child cpu=7  406  406  406
hackbench-2331 [007] .Ns1.  2171.848768: update_group_capacity: sdg !child cpu=7  390  390  390
hackbench-2421 [007] ..s..  2171.972733: update_group_capacity: sdg !child cpu=7  388  388  388
...
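
Assuming CPU7 here has an original capacity of 1024, the falling
cap/min/max values track IPA lowering the frequency cap. The mapping
from a capped frequency to a capacity can be sketched as (illustrative
only; in the kernel the original capacity would come from
arch_scale_cpu_capacity()):

```c
#include <assert.h>

/*
 * Illustrative mapping from a frequency cap to a capacity value:
 * a cap at ~38% of fmax on a 1024-capacity CPU gives the ~390 range
 * seen at the end of the hackbench run above (made-up frequencies).
 */
unsigned long capped_capacity(unsigned long orig_capacity,
			      unsigned long capped_freq_khz,
			      unsigned long max_freq_khz)
{
	return orig_capacity * capped_freq_khz / max_freq_khz;
}
```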
