From: Kajetan Puchalski <kajetan.puchalski@arm.com>
To: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Qais Yousef <qyousef@layalina.io>,
rafael@kernel.org, daniel.lezcano@linaro.org,
lukasz.luba@arm.com, Dietmar.Eggemann@arm.com,
dsmythies@telus.net, yu.chen.surf@gmail.com,
linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org,
Peter Zijlstra <peterz@infradead.org>,
Ulf Hansson <ulf.hansson@linaro.org>
Subject: Re: [PATCH v6 2/2] cpuidle: teo: Introduce util-awareness
Date: Tue, 28 May 2024 13:12:28 +0100 [thread overview]
Message-ID: <ZlXKLOnVkegQfdKA@e126311.manchester.arm.com> (raw)
In-Reply-To: <CAKfTPtA6ZzRR-zMN7sodOW+N_P+GqwNv4tGR+aMB5VXRT2b5bg@mail.gmail.com>
Hi Vincent,
On Tue, May 28, 2024 at 11:29:02AM +0200, Vincent Guittot wrote:
> Hi All,
>
> I'm quite late on this thread but this patchset creates a major
> regression for the psci cpuidle driver when using OSI mode (OS
> initiated mode). In such a case, the cpuidle driver takes care only of
> CPU power states; the deeper C-states, which include cluster and
> other power domains, are handled by the power domain framework. In such
> a configuration, cpuidle has only 2 C-states: WFI and CPU off. The
> other states, which include the clusters, are managed by genpd and
> its governor.
>
> This patch selects cpuidle C-state N-1 as soon as the utilization is
> above CPU capacity / 64, which means at most a level of 16 on the big
> core but as low as 4 on little cores. These levels are very low,
> and the main result is that as soon as there is even very little activity
> on a CPU, cpuidle always selects the WFI state whatever the estimated
> sleep duration, which prevents any deeper states. Another effect is
> that it also keeps the tick firing every 1ms in my case.
>
> IMO, we should at least increase the utilization level
I think you're most likely right on this. The reason why I ended up
leaving the threshold at cap/64 was that at cap/32 it would be too high
for the mechanism to actually have any effect on the device I was
testing this on.
The issue then of course is that if you tailor the threshold to little
cores it becomes too high for big cores, conversely if you tailor it to
big cores it becomes too low for small ones.
We could get around this by making sure the threshold doesn't end up
being lower than a certain capacity-independent minimum; how does that sound?
cpu_data->util_threshold = max(MIN_THRESHOLD, max_capacity >> UTIL_THRESHOLD_SHIFT);
And we'd set MIN_THRESHOLD to something like 10, 15 or 20. I'm not sure
which one would be more appropriate, but I think this could help alleviate
some of the issues with the mechanism being too aggressive.
>
> Regards,
> Vincent
>
> On Sun, 17 Sept 2023 at 03:05, Qais Yousef <qyousef@layalina.io> wrote:
> >
> > Hi Kajetan
> >
> > On 07/18/23 14:24, Qais Yousef wrote:
> >
> > > These patches are in GKI. So we'll see if there are uncaught problems I guess :)
> > >
> > > No appetite for a knob, but the very low value for littles did strike me and I
> > > thought I'd better ask at least. Today's littles are too tiny for their own good
> > > and it seemed the threshold could end up being too aggressive, especially in a low
> > > activity state. You are effectively saying that if we have a few 100us of
> > > activity, normal TEO predictions based on timers are no good and it's better to stay
> > > shallower anyway.
> > >
> > > Note that due to NOHZ, if we go to idle for an extended period the util value
> > > might not decay for a while and we could miss some opportunities. Especially since,
> > > when it next wakes up, it's enough for this wakeup to run for a few 100s of us to block
> > > a deeper state before going back to sleep for an extended period of time.
> > >
> > > But we shall see. I got the answer I was looking for for now.
> >
> > Unfortunately, not too long after the patches got merged I got a regression report
> > of worse power. As you know, on Android things are not as they are in mainline, so I need to
> > untangle this to make sure it's not a red herring. But if you want to take my
> > word for it, I think the chances of it being a true regression are high. I had
> > to introduce knobs to allow controlling the thresholds for now, so the good
> > news is they do help and it's not a total revert. I don't have a lot of info to
> > share, but it's the low activity use cases that seem to have been impacted, like
> > video playback for instance.
> >
> > Generally, I'm trying to remove some hardcoded values from the scheduler that
> > enforce a behavior that is not universally desired on all systems/workloads.
> > And I think the way the util-awareness thresholds are done today falls into the
> > same category.
> >
> > As I tried to highlight before, it is easy to trick the threshold by a task
> > that runs for a short time then goes back to sleep for a long time.
> >
> > And when the system runs full throttle for a while, it'll take around 150+ms
> > for the util to decay to the threshold value. That's a long time to block
> > entering deeper idle states for. I'm not sure whether NOHZ and blocked average
> > updates could make this potentially worse.
> >
> > In my view, the absolute comparison against util can be misleading. Even when
> > util is 512 for example, we still have 50% of idle time. How this time is
> > distributed can't be known from util alone. It could be one task waking up and
> > sleeping. It could be multiple tasks at many combination of patterns all
> > leading to the same outcome of CPU util being 512.
> >
> > IIUC the idea is that if we have even a small amount of activity, then erring on the
> > shallow side is better. But given that target residency is usually in the few ms
> > range, do we really need to be that cautious? With a target residency of 3ms, for
> > example, even at a util of 900 there can be opportunities to enter it.
> >
> > Can't we instead sample util at entry to the idle loop and see if it is on a rising
> > or falling trend? When it's rising, it makes sense to say there's demand, let's block
> > the deeper idle state. But if it is falling, then if the decay time is longer than
> > the target residency we can say it's okay to permit the deeper idle states?
> >
> > I need to think more about this; but I think it's worth trying to make these
> > thresholds more deterministic and quantifiable. There are too many workload
> > and system variations. I'm not sure a knob to control these thresholds is
> > good for anything but a workaround like the one I had to do. These hardcoded values
> > can be improved IMHO. Happy to help find alternatives.
> >
> >
> > Cheers
> >
> > --
> > Qais Yousef
2023-01-05 14:51 [PATCH v6 0/2] cpuidle: teo: Introduce util-awareness Kajetan Puchalski
2023-01-05 14:51 ` [PATCH v6 1/2] cpuidle: teo: Optionally skip polling states in teo_find_shallower_state() Kajetan Puchalski
2023-01-05 14:51 ` [PATCH v6 2/2] cpuidle: teo: Introduce util-awareness Kajetan Puchalski
2023-01-05 15:07 ` Rafael J. Wysocki
2023-01-05 15:20 ` Lukasz Luba
2023-01-05 15:34 ` Vincent Guittot
2023-01-05 17:11 ` Rafael J. Wysocki
2023-07-11 17:58 ` Qais Yousef
2023-07-17 13:47 ` Lukasz Luba
2023-07-17 18:21 ` Qais Yousef
2023-07-18 10:23 ` Lukasz Luba
2023-07-18 12:45 ` Qais Yousef
2023-07-18 12:02 ` Kajetan Puchalski
2023-07-18 13:24 ` Qais Yousef
2023-07-19 15:07 ` Kajetan Puchalski
2023-09-17 1:05 ` Qais Yousef
2023-09-18 11:41 ` Kajetan Puchalski
2023-09-19 0:04 ` Qais Yousef
2024-05-28 9:29 ` Vincent Guittot
2024-05-28 9:59 ` Lukasz Luba
2024-05-28 14:07 ` Vincent Guittot
2024-05-29 13:09 ` Christian Loehle
2024-05-31 8:57 ` Vincent Guittot
2024-06-12 7:25 ` Lukasz Luba
2024-06-12 9:04 ` Vincent Guittot
2024-06-12 9:17 ` Lukasz Luba
2024-06-17 8:52 ` Vincent Guittot
2024-06-19 12:20 ` Lukasz Luba
2024-05-28 10:35 ` Christian Loehle
2024-05-28 12:12 ` Kajetan Puchalski [this message]
2024-05-29 10:23 ` Qais Yousef
2024-05-29 10:19 ` Qais Yousef
2024-06-12 7:53 ` Lukasz Luba
2024-06-16 21:48 ` Qais Yousef
2024-06-17 8:13 ` Lukasz Luba
2023-01-12 19:22 ` [PATCH v6 0/2] " Rafael J. Wysocki
2023-01-13 15:21 ` Kajetan Puchalski