From: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
To: Grygorii Strashko <grygorii.strashko@ti.com>
Cc: Grant Likely <grant.likely@secretlab.ca>,
Geert Uytterhoeven <geert+renesas@glider.be>,
ulf.hansson@linaro.org, Kevin Hilman <khilman@linaro.org>,
Mike Turquette <mturquette@linaro.org>,
Tomasz Figa <tomasz.figa@gmail.com>,
Ben Dooks <ben.dooks@codethink.co.uk>,
Simon Horman <horms@verge.net.au>,
Magnus Damm <magnus.damm@gmail.com>,
"Rafael J. Wysocki" <rjw@rjwysocki.net>,
"linux-sh@vger.kernel.org" <linux-sh@vger.kernel.org>,
Linux PM list <linux-pm@vger.kernel.org>,
"devicetree@vger.kernel.org" <devicetree@vger.kernel.org>,
"linux-omap@vger.kernel.org" <linux-omap@vger.kernel.org>,
"linux-arm-kernel@lists.infradead.org"
<linux-arm-kernel@lists.infradead.org>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH 2/2] of/clk: use "clkops-clocks" to specify clocks handled by clock_ops domain
Date: Fri, 12 Dec 2014 19:40:52 +0200 [thread overview]
Message-ID: <2061380.HdoWyg3PvY@avalon> (raw)
In-Reply-To: <53D8F24B.7010104@ti.com>
Hi Grygorii,
I've found this mail deep inside my inbox :-)
On Wednesday 30 July 2014 16:25:31 Grygorii Strashko wrote:
> On 07/30/2014 03:06 AM, Laurent Pinchart wrote:
> > On Monday 28 July 2014 23:52:34 Grant Likely wrote:
> >> On Mon, Jul 28, 2014 at 11:47 AM, Grygorii Strashko wrote:
> >>> On 07/28/2014 05:05 PM, Grant Likely wrote:
> >>>> On Thu, 12 Jun 2014 19:53:43 +0300, Grygorii Strashko wrote:
> >>>>> Use the "clkops-clocks" property to specify clocks handled by the
> >>>>> clock_ops PM domain. Only clocks defined in the "clkops-clocks"
> >>>>> set will be handled by Runtime PM through the clock_ops
> >>>>> PM domain.
> >>>>>
> >>>>> Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
> >>>>> ---
> >>>>>
> >>>>> drivers/of/of_clk.c | 7 ++-----
> >>>>> 1 file changed, 2 insertions(+), 5 deletions(-)
> >>>>>
> >>>>> diff --git a/drivers/of/of_clk.c b/drivers/of/of_clk.c
> >>>>> index 35f5e9f..5f9b90e 100644
> >>>>> --- a/drivers/of/of_clk.c
> >>>>> +++ b/drivers/of/of_clk.c
> >>>>> @@ -86,11 +86,8 @@ int of_clk_register_runtime_pm_clocks(struct
> >>>>> device_node *np,
> >>>>>
> >>>>> struct clk *clk;
> >>>>> int error;
> >>>>>
> >>>>> - for (i = 0; (clk = of_clk_get(np, i)) && !IS_ERR(clk); i++) {
> >>>>> - if (!clk_may_runtime_pm(clk)) {
> >>>>> - clk_put(clk);
> >>>>> - continue;
> >>>>> - }
> >>>>> + for (i = 0; (clk = of_clk_get_from_set(np, "clkops", i)) &&
> >>>>> + !IS_ERR(clk); i++) {
> >>>>
> >>>> This really looks like an ABI break to me. What happens to all the
> >>>> existing platforms who don't have this new clkops-clocks in their
> >>>> device tree?
> >>>
> >>> Agree. This patch as is will break such platforms.
> >>> As a possible solution to the above problem, NULL can be used as the
> >>> clocks' prefix by default, and platform code can configure a new
> >>> prefix during initialization.
> >>> In addition, to make this solution complete, of_clk_get_by_name()
> >>> would need to be modified too.
> >>>
> >>> But please note, these are pure RFC patches which I did to find
> >>> answers to these questions:
> >>> - What is better: maintaining the Runtime PM clock configuration
> >>>   in DT or in code?
> >>
> >> In code. I don't think it is workable to embed runtime PM behaviour
> >> into the DT bindings. I think there will be too much variance in what
> >> hardware requires. We can create helpers to make this simpler, but I
> >> don't think it is a good idea to set it up automatically without any
> >> control from the driver itself.
> >>
> >>> - Where and when to call of_clk_register_runtime_pm_clocks()?
> >>>
> >>> Bus notifier/ platform core/ device drivers
> >>
> >> I would say in device drivers.
> >
> > I tend to agree with that.
> >
> > It will help here to take a step back and remember what problem we're
> > trying to solve.
> >
> > At the root is clock management. Our system comprise many clocks, and they
> > need to be handled. The Common Clock Framework nicely models the clocks,
> > and offers an API for drivers to retrieve device clocks and control them.
> > Drivers can thus implement clock management manually without much pain.
> >
> > A clock can be managed in roughly three different ways:
> >
> > - it can be enabled at probe time and disabled at remove time ;
> >
> > - it can be enabled right before the device leaves its idle state and
> > disabled when the device goes back to idle ; or
> >
> > - it can be enabled and disabled in a more fine-grained, device-specific
> > manner.
> >
> > The selected clock management granularity depends on constraints specific
> > to the device and on how aggressive power saving needs to be. Enabling
> > the clocks at probe time and disabling them at remove time is enough for
> > most devices, but leads to a high power consumption. For that reason the
> > second clock management scheme is often desired.
> >
> > Managing clocks manually in the driver is a valid option. However, when
> > adding runtime PM to the equation, and realizing that the clocks need to
> > be enabled in the runtime PM resume handler and disabled in the suspend
> > handler, the clock management code starts looking very similar in most
> > drivers. We're thus tempted to factorize it away from the drivers into a
> > shared location.
> >
> > It's important to note at this point that the goal here is only to
> > simplify drivers. Moving clock management code out of the drivers doesn't
> > (unless I'm missing something) open the door to new possibilities, it just
> > serves as a simplification.
> >
> > Now, as Grygorii mentioned, differences between how a given IP core is
> > integrated in various SoCs can make clock management SoC-dependent. In the
> > vast majority of cases (which is really what we need to target, given that
> > our target is simplifying drivers) SoC integration can be described as a
> > list of clocks that must be managed. That list can be common to all
> > devices in a given SoC, or can be device-dependent as well.
>
> That's actually a problem: now we have a static list of managed clocks
> per SoC and not per device.
>
> > A few locations can be used to express a per-device list of per-SoC
> > clocks. We can have clock lists in a per-SoC and per-device location,
> > per-device clock lists in an SoC-specific location, or per-SoC clock
> > lists in a device-specific location.
> >
> > The first option would require listing clocks to be managed by runtime PM
> > in DT nodes, as proposed by this patch set. I don't think this is the
> > best option, as that information is a mix of hardware description and
> > software policy, with the hardware description part being already present
> > in DT in the clocks property.
>
> I don't fully agree here. A clock is a "functional clock" if it's managed
> by runtime PM. And all such clocks need to be enabled/disabled whenever the
> device is powered on/off. So, from my point of view it's a HW description
> and it follows the TRM.
>
> Other clocks are optional
That's actually use-case dependent; some of them might be mandatory.
> and only drivers should control them.
> And the question is how best to define sets of such clocks?
>
> > The second option calls for storing the lists in SoC code under arch/. As
> > we're trying to minimize the amount of SoC code there (and even remove SoC
> > code completely when possible) I don't think that's a good option.
> >
> > The third option would require storing the clocks lists in device drivers.
> > I believe this is our best option, as a trade-off between simplicity and
> > versatility. Drivers that use runtime PM already need to enable it
> > explicitly when probing devices. Passing a list of clock names to runtime
> > PM at that point wouldn't complicate drivers much. When the clocks list
> > isn't SoC-dependent it could be stored as static information. Otherwise
> > it could be derived from DT (or any other source of hardware description)
> > using C code, offering all the versatility we need.
>
> OK, if I understand right, you propose something like this:
> 1) DT based solution:
>
> devA {
>         clocks = <&clkpa>, <&clkcpgmac>, <&chipclk12>;
>         rpm-clocks = <&clkpa>, <&clkcpgmac>;
> - or -
>         clocks = <&clkpa>, <&clkcpgmac>, <&chipclk12>;
>         clock-names = "clk_pa", "clk_cpgmac", "cpsw_cpts_rft_clk";
>         rpm-clocks = "clk_pa", "clk_cpgmac";
> };
On a side note I believe the "rpm-clocks" name is too tied to the Linux
implementation. A name similar to "functional-clocks" would be better.
> in driver:
> pm_runtime_enable();
>
> |- of_clk_register_runtime_pm_clocks()
>
> - or -
> of_clk_register_runtime_pm_clocks()
> pm_runtime_enable();
I prefer the second option, as an explicit opt-in is less likely to cause
regressions, and would also offer an easy way for drivers to opt-out.
> 2) Static solution:
> char *con_ids_davinci[] =
> { "fck", "master", "slave", NULL };
> char *con_ids_keystone[] =
> { "clk_pa", "clk_cpgmac", NULL };
>
> static struct of_device_id of_match[] = {
> { .compatible = "ti,keystone", .data = con_ids_keystone },
> { .compatible = "ti,davinci", .data = con_ids_davinci },
> {},
> };
>
> Personally, I like option 1, and it seems it will not break the ABI.
Is option 2 really representative of most use cases? The list of clock inputs
to an IP core is a property of the IP core itself. How those inputs are
connected in the SoC is a property of the SoC integration. The clock
references in DT can thus vary per SoC, but the clock names should be pretty
much constant for a given IP core. Thus, if we have a single list of clocks to
manage for a given IP core, it shouldn't be difficult to pass that list to
the of_clk_register_runtime_pm_clocks() function.
> > The only drawback of this solution I can think of right now is that the
> > runtime PM core couldn't manage device clocks before probing the device.
> > Specifically device clocks couldn't be managed if no driver is loaded for
> > that device. I somehow recall that someone raised this as being a
> > problem, but I can't remember why.
>
> I can recall only OMAP2+ SoCs, where an abstraction called HW_MOD is
> used during platform initialization to reset all devices and turn off
> unused ones before the devices are probed. But clock_ops are not used by
> OMAP2+ :)
--
Regards,
Laurent Pinchart