From: Kevin Hilman <khilman@baylibre.com>
To: Ulf Hansson <ulf.hansson@linaro.org>,
Saravana Kannan <saravanak@google.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>,
Vincent Guittot <vincent.guittot@linaro.org>,
Stephen Boyd <sboyd@kernel.org>,
linux-pm@vger.kernel.org
Subject: arm64 s2idle vs. workqueues
Date: Wed, 09 Oct 2024 17:19:31 -0700
Message-ID: <7ho73shkrw.fsf@baylibre.com>

Hello,

Looking for some pointers/tips on debugging s2idle, and in particular
why it is not staying in an idle state as long as expected.

I'm attempting to use s2idle on a 4-core, single-cluster ARM64 SoC (TI
AM62x), which doesn't (yet) have any DT-defined idle-states, so it is
just doing WFI when idle.

I'm doing an 8-second s2idle with RTC wakeup using:

  rtcwake -m freeze -s8

and what I see is that 3 of the CPUs stay in their idle state for the
full 8 seconds, but one of them keeps waking due to the A53
arm_arch_timer firing and processing misc. workqueue-related activity
(example work listed below[1]).

I realize that these workqueues are not WQ_FREEZABLE, so I don't expect
the freezer part of suspend to stop/freeze them. However, I am a bit
surprised to see this non-frozen workqueue activity happening often
enough (a few times per second) to prevent all 4 CPUs from being idle
for long periods at the same time, thus preventing a deeper cluster-idle
state.

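(For context on what I mean by "not WQ_FREEZABLE": as far as I
understand it, only workqueues explicitly allocated with WQ_FREEZABLE
take part in the freezer, roughly as in the hypothetical example below;
the system workqueues listed in [1] are not created that way, so their
work keeps running across s2idle.)

  #include <linux/workqueue.h>

  static struct workqueue_struct *my_wq;

  static int __init my_driver_init(void)
  {
          /*
           * Hypothetical driver: WQ_FREEZABLE means work on this queue
           * is flushed and further queueing is held off while the
           * system is frozen for suspend.
           */
          my_wq = alloc_workqueue("my_freezable_wq", WQ_FREEZABLE, 0);
          if (!my_wq)
                  return -ENOMEM;

          return 0;
  }
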
Is there something else I'm missing that is needed to keep these
workqueues quiet for longer? I had assumed that most of this workqueue
work would be deferred, and shouldn't need to wake up a CPU just to run.

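Just to make concrete what I assumed most of this periodic work looks
like, here's a rough sketch (hypothetical function/work names, not
taken from any of the call sites in [1]) of the pattern I had in mind:
delayed work on a deferrable timer, queued to the power-efficient
workqueue, which shouldn't need to wake an idle CPU just to re-run
itself:

  #include <linux/workqueue.h>

  static void my_periodic_fn(struct work_struct *work);
  static DECLARE_DEFERRABLE_WORK(my_periodic_work, my_periodic_fn);

  static void my_periodic_fn(struct work_struct *work)
  {
          /* ... periodic housekeeping ... */

          /*
           * Re-arm on a deferrable timer: if the target CPU is idle,
           * the timer shouldn't fire until that CPU wakes up for some
           * other reason.
           */
          queue_delayed_work(system_power_efficient_wq,
                             &my_periodic_work, HZ);
  }
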
In case it's helpful, I have published a trace.dat[2] from trace-cmd
which captures power, sched, irq, timer and workqueue events. With
kernelshark, it's pretty easy to visualize what's happening: CPU0, CPU1
and CPU3 are all nicely idle for 8 sec while CPU2 keeps waking due to
the timer and workqueue activity.

Any pointers on how to improve this situation, or on what else needs to
be tweaked here, would be greatly appreciated.

Thanks,

Kevin

[1]
function, workqueue name
------------------------
- page_pool_release_retry(), "events"
- vmstat_shepherd(), "events"
- vmstat_update(), "mm_percpu_wq"
- crng_reseed(), "events_unbound"
- kfree_rcu_monitor(), "events"
- flush_memcg_stats_dwork(), "events_unbound"
- neigh_managed_work(), "events_power_efficient"
- async_run_entry_fn(), "async"
- deferred_probe_work_func(), "events"
- tcp_orphan_update(), timer expiry
[2] https://drive.google.com/file/d/1U51eTTeb4_13-CZWa2llHXTh9DfZ_4sF/view?usp=sharing