From: sudeep.holla@arm.com (Sudeep Holla)
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH 5/9] ARM: common: Introduce PM domains for CPUs/clusters
Date: Fri, 14 Aug 2015 10:52:01 +0100
Message-ID: <55CDBA41.2080703@arm.com>
In-Reply-To: <20150813192713.GA84868@linaro.org>
On 13/08/15 20:27, Lina Iyer wrote:
> On Thu, Aug 13 2015 at 11:26 -0600, Sudeep Holla wrote:
>>
[...]
>>
>> Having gone through this series and the one using it[1], the only
>> common activity is the cluster PM notifiers. Other than that it's just
>> creating indirection for now. The scenario might change in the future,
>> but for now it seems unnecessary.
>>
> Not sure what seems unnecessary to you. Platforms do have to send
> cluster PM notifications, and each of them has to duplicate the
> reference counting. Also, the PM domain framework allows a hierarchy,
> which is quite desirable to power down parts of the SoC that are
> powered on or have to be clocked high while a CPU is running.
>
Agreed, no argument on using genpd for CPU PM and all the goodies genpd
provides; my concern is with the way this patch creates the genpd
domains. That needs to be part of your power controller driver.
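To make that concrete, here is a minimal sketch (the "foo" names and the
"foo,cluster-pd" compatible are made up for illustration; none of this
is taken from the series) of a SoC power controller driver creating its
cluster genpd itself and issuing the cluster PM notifications from its
own callbacks:

#include <linux/cpu_pm.h>
#include <linux/init.h>
#include <linux/of.h>
#include <linux/pm_domain.h>

static int foo_pd_power_off(struct generic_pm_domain *genpd)
{
	/* Notify CPU_CLUSTER_PM_ENTER listeners (GIC, VFP, debug, ...) */
	cpu_cluster_pm_enter();
	/* ... SoC-specific power controller programming goes here ... */
	return 0;
}

static int foo_pd_power_on(struct generic_pm_domain *genpd)
{
	/* ... SoC-specific power-up sequence ... */
	cpu_cluster_pm_exit();
	return 0;
}

static struct generic_pm_domain foo_cluster_pd = {
	.name		= "foo-cluster-pd",
	.power_off	= foo_pd_power_off,
	.power_on	= foo_pd_power_on,
};

static int __init foo_pd_init(void)
{
	struct device_node *np;

	np = of_find_compatible_node(NULL, NULL, "foo,cluster-pd");
	if (!np)
		return -ENODEV;

	/* Register the domain and expose it as a DT power-domain provider */
	pm_genpd_init(&foo_cluster_pd, NULL, false);
	return of_genpd_add_provider_simple(np, &foo_cluster_pd);
}

The point being that the pm_genpd_init()/of_genpd_add_provider_simple()
calls live in the power controller driver, with only helpers shared.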
> Cluster PM notifications are just one aspect of this that we currently
> handle in the first submission. The patchset as a whole provides a way
> to determine in Linux the last man down and the first man up and carry
> out activities. There are a bunch of things that are done to save power
> when the last man goes down - turn off debuggers, switch off PLLs,
> reduce bus clocks, flush caches, amongst a few that I know of. Some of
> them are platform specific and some of them aren't. These patches
> provide a way for both to be done easily. CPU runtime PM and PM domains
> as a framework closely track what the hardware does.
>
Again no argument, I just favour common interface functions. Since each
power controller/platform will have its own specific sequence, it might
be hard to generalize that, but I may be wrong. OTOH, common interface
functions to handle those components might give some flexibility to the
power controllers.
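For illustration only, such common interface functions could take a
shape like the following (this API does not exist, the names are
hypothetical): the common ARM code would own the genpd, the last-man
reference counting and the CPU_CLUSTER_PM notifications, and call back
into the power controller only for the SoC-specific sequence.

#include <linux/of.h>
#include <linux/types.h>

/* Hypothetical ops a power controller driver would register */
struct cpu_pd_ops {
	int (*power_off)(u32 cluster);	/* SoC-specific power-down sequence */
	int (*power_on)(u32 cluster);	/* SoC-specific power-up sequence */
};

/*
 * Hypothetical helper: create/attach the genpd for @cluster_np and wire
 * the common refcounting and cluster PM notifier handling around @ops.
 */
int cpu_pd_register(struct device_node *cluster_np,
		    const struct cpu_pd_ops *ops);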
> Mentioned in an other mail in this thread, is also an option to
> determine the cluster flush state and use it in conjunction with PSCI to
> do OS-Initiated cluster power down.
>
I haven't explored that route yet; with platform co-ordination we don't
need much complexity in the kernel :). OS co-ordination is a different
story, as we need to consider the secure/non-secure world dimensions
there, along with the privileges/restrictions Linux has.
>> Also if you look at the shmobile power controller driver, it covers
>> all the devices including the CPUs, unlike the QCOM power controller
>> which handles only the CPUs. Yes, we can skip the CPU genpd creation
>> there, but IMO creating the power domains should be part of the power
>> controller driver.
>>
>> You can add helper functions for all the ARM-specific code that can be
>> reused by multiple power controller drivers handling CPU/cluster power
>> domains.
>>
> Sure, some architectures may desire that. I have them addressed in [2].
>
I haven't looked at that yet, but will do soon.
>>> An analogy to this would be the "arm,idle-state" that defines the DT
>>> node as something that also depicts a generic cpuidle C state.
>>>
>>
>> I tend to disagree. In idle-states, the nodes define generic
>> properties and they can be parsed in a generic way. That's not the
>> case here. Each power controller binding differs.
>>
> Yes, maybe we will have common elements like the latency and residency
> of powering a domain on/off that a genpd governor can utilize in
> determining whether it's worth powering off the domain or not.
>
Makes sense, but as Rob pointed out, how generic those are across
various platforms is something we need to check against a few platforms
before we build the generic infrastructure.
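As an illustration of what feeding such numbers to genpd could look like
(the DT property names below are made up, loosely modelled on the
arm,idle-state binding, and I am assuming the existing
power_{on,off}_latency_ns fields in struct generic_pm_domain, which the
genpd core already consults and updates with measured values, are an
acceptable place for them):

#include <linux/of.h>
#include <linux/pm_domain.h>
#include <linux/time.h>

static void cpu_pd_parse_latencies(struct device_node *np,
				   struct generic_pm_domain *genpd)
{
	u32 lat_us;

	/* Hypothetical properties describing the domain's on/off cost */
	if (!of_property_read_u32(np, "power-on-latency-us", &lat_us))
		genpd->power_on_latency_ns = lat_us * NSEC_PER_USEC;
	if (!of_property_read_u32(np, "power-off-latency-us", &lat_us))
		genpd->power_off_latency_ns = lat_us * NSEC_PER_USEC;
}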
>> Yes, the generic compatible might be useful to identify that this
>> power domain handles a CPU/cluster, but there will be more power
>> controller specific things compared to the generic code.
>>
> Agreed, not debating that. The power controller is very SoC specific,
> but not the debuggers, GIC, caches, buses, etc. Many SoCs have almost
> similar needs for much of this hardware supplemental to the CPUs, and
> the trend seems to be to generalize many of these components.
>
Generalizing those components and using genpd is absolutely fine; it is
just the mechanics that I am debating.
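For the hierarchy side of that, a sketch of what the generalization
might look like with genpd (reusing the made-up foo_cluster_pd from the
earlier sketch, with foo_soc_pd standing in for a wider domain holding
debug logic, PLLs and buses; both initialized with pm_genpd_init() as
above):

static struct generic_pm_domain foo_soc_pd = {
	.name = "foo-soc-pd",
	/* .power_off/.power_on turn off debug, PLLs, drop bus clocks */
};

static int __init foo_pd_link(void)
{
	/*
	 * Make the cluster a subdomain of the SoC domain: genpd keeps
	 * foo_soc_pd powered while the cluster is up, and may power it
	 * off once the last cluster (and any attached devices) is down.
	 */
	return pm_genpd_add_subdomain(&foo_soc_pd, &foo_cluster_pd);
}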
Regards,
Sudeep