* [RESEND PATCH 0/4] clk: Provide option to unprepare unused clocks at late init
@ 2013-03-12 19:26 Ulf Hansson
2013-03-12 19:26 ` [RESEND PATCH 1/4] clk: Introduce optional is_prepared callback Ulf Hansson
` (3 more replies)
0 siblings, 4 replies; 7+ messages in thread
From: Ulf Hansson @ 2013-03-12 19:26 UTC (permalink / raw)
To: linux-arm-kernel
From: Ulf Hansson <ulf.hansson@linaro.org>
The disable_unused sequence executed at late init already takes care of
gating the fast unused ungated clocks. This patchset extends the sequence
so that the slow unused prepared clocks are unprepared as well.
The default behavior does not change with this patchset. To have unused
clocks unprepared during the disable_unused sequence, the clk_hw needs to
implement the new optional callback, is_prepared.
The motivation for this patchset is to save power. Clocks that were
prepared|enabled by the bootloader, but are not used, should be
unprepared|disabled.
Patch 4 is specific to ux500 and implements the is_prepared callback for
its PRCMU clocks.
Note that patches 1-3 have been sent earlier, but since a proof of concept
patch such as patch 4 could be useful, the whole patchset is resent.
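For illustration, a minimal provider-side sketch of the new hook; the
foo_* names, the FOO_GATE register and the to_foo_clk() helper are
hypothetical and not part of this series:

/* Hypothetical provider: report the real hardware gate state, e.g. as
 * left by the bootloader, so the late init sequence can unprepare the
 * clock if it turns out to be unused.
 */
static int foo_clk_is_prepared(struct clk_hw *hw)
{
	struct foo_clk *clk = to_foo_clk(hw);

	return !!(foo_readl(clk->base + FOO_GATE) & BIT(clk->bit));
}

static struct clk_ops foo_clk_ops = {
	.prepare	= foo_clk_prepare,
	.unprepare	= foo_clk_unprepare,
	.is_prepared	= foo_clk_is_prepared,
};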
Ulf Hansson (4):
clk: Introduce optional is_prepared callback
clk: Unprepare the unused prepared slow clocks at late init
clk: Introduce optional unprepare_unused callback
clk: ux500: Support is_prepared callback for clk-prcmu
drivers/clk/clk.c | 53 ++++++++++++++++
drivers/clk/ux500/clk-prcmu.c | 134 ++++++++++++++++++++++++-----------------
include/linux/clk-provider.h | 11 ++++
3 files changed, 144 insertions(+), 54 deletions(-)
--
1.7.10
^ permalink raw reply [flat|nested] 7+ messages in thread
* [RESEND PATCH 1/4] clk: Introduce optional is_prepared callback
2013-03-12 19:26 [RESEND PATCH 0/4] clk: Provide option to unprepare unused clocks at late init Ulf Hansson
@ 2013-03-12 19:26 ` Ulf Hansson
2013-03-12 19:26 ` [RESEND PATCH 2/4] clk: Unprepare the unused prepared slow clocks at late init Ulf Hansson
` (2 subsequent siblings)
3 siblings, 0 replies; 7+ messages in thread
From: Ulf Hansson @ 2013-03-12 19:26 UTC (permalink / raw)
To: linux-arm-kernel
From: Ulf Hansson <ulf.hansson@linaro.org>
To reflect whether a clk_hw is prepared, the clk_hw may implement the
optional is_prepared callback. If it is not implemented, we fall back to
using the software prepare counter.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
---
drivers/clk/clk.c | 21 +++++++++++++++++++++
include/linux/clk-provider.h | 6 ++++++
2 files changed, 27 insertions(+)
diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index 593a2e4..deb259a 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -458,6 +458,27 @@ unsigned long __clk_get_flags(struct clk *clk)
return !clk ? 0 : clk->flags;
}
+bool __clk_is_prepared(struct clk *clk)
+{
+ int ret;
+
+ if (!clk)
+ return false;
+
+ /*
+ * .is_prepared is optional for clocks that can prepare;
+ * fall back to the software usage counter if it is missing.
+ */
+ if (!clk->ops->is_prepared) {
+ ret = clk->prepare_count ? 1 : 0;
+ goto out;
+ }
+
+ ret = clk->ops->is_prepared(clk->hw);
+out:
+ return !!ret;
+}
+
bool __clk_is_enabled(struct clk *clk)
{
int ret;
diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h
index 4989b8a..86ff6be 100644
--- a/include/linux/clk-provider.h
+++ b/include/linux/clk-provider.h
@@ -45,6 +45,10 @@ struct clk_hw;
* undo any work done in the @prepare callback. Called with
* prepare_lock held.
*
+ * @is_prepared: Queries the hardware to determine if the clock is prepared.
+ * This function is allowed to sleep. Optional, if this op is not
+ * set then the prepare count will be used.
+ *
* @enable: Enable the clock atomically. This must not return until the
* clock is generating a valid clock signal, usable by consumer
* devices. Called with enable_lock held. This function must not
@@ -108,6 +112,7 @@ struct clk_hw;
struct clk_ops {
int (*prepare)(struct clk_hw *hw);
void (*unprepare)(struct clk_hw *hw);
+ int (*is_prepared)(struct clk_hw *hw);
int (*enable)(struct clk_hw *hw);
void (*disable)(struct clk_hw *hw);
int (*is_enabled)(struct clk_hw *hw);
@@ -351,6 +356,7 @@ unsigned int __clk_get_enable_count(struct clk *clk);
unsigned int __clk_get_prepare_count(struct clk *clk);
unsigned long __clk_get_rate(struct clk *clk);
unsigned long __clk_get_flags(struct clk *clk);
+bool __clk_is_prepared(struct clk *clk);
bool __clk_is_enabled(struct clk *clk);
struct clk *__clk_lookup(const char *name);
--
1.7.10
^ permalink raw reply related [flat|nested] 7+ messages in thread
* [RESEND PATCH 2/4] clk: Unprepare the unused prepared slow clocks at late init
2013-03-12 19:26 [RESEND PATCH 0/4] clk: Provide option to unprepare unused clocks at late init Ulf Hansson
2013-03-12 19:26 ` [RESEND PATCH 1/4] clk: Introduce optional is_prepared callback Ulf Hansson
@ 2013-03-12 19:26 ` Ulf Hansson
2013-03-19 19:57 ` Mike Turquette
2013-03-12 19:26 ` [RESEND PATCH 3/4] clk: Introduce optional unprepare_unused callback Ulf Hansson
2013-03-12 19:26 ` [RESEND PATCH 4/4] clk: ux500: Support is_prepared callback for clk-prcmu Ulf Hansson
3 siblings, 1 reply; 7+ messages in thread
From: Ulf Hansson @ 2013-03-12 19:26 UTC (permalink / raw)
To: linux-arm-kernel
From: Ulf Hansson <ulf.hansson@linaro.org>
The unused ungated fast clocks are already being disabled from
clk_disable_unused at late init. This patch extends the sequence so that
the slow unused prepared clocks are unprepared as well.
Unless the optional .is_prepared callback is implemented by a clk_hw,
the clk_disable_unused sequence will not unprepare any unused clocks,
since it falls back to using the software prepare counter.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
---
drivers/clk/clk.c | 29 +++++++++++++++++++++++++++++
1 file changed, 29 insertions(+)
diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index deb259a..0944348 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -341,6 +341,29 @@ static inline int clk_debug_register(struct clk *clk) { return 0; }
#endif
/* caller must hold prepare_lock */
+static void clk_unprepare_unused_subtree(struct clk *clk)
+{
+ struct clk *child;
+ struct hlist_node *tmp;
+
+ if (!clk)
+ return;
+
+ hlist_for_each_entry(child, tmp, &clk->children, child_node)
+ clk_unprepare_unused_subtree(child);
+
+ if (clk->prepare_count)
+ return;
+
+ if (clk->flags & CLK_IGNORE_UNUSED)
+ return;
+
+ if (__clk_is_prepared(clk))
+ if (clk->ops->unprepare)
+ clk->ops->unprepare(clk->hw);
+}
+
+/* caller must hold prepare_lock */
static void clk_disable_unused_subtree(struct clk *clk)
{
struct clk *child;
@@ -393,6 +416,12 @@ static int clk_disable_unused(void)
hlist_for_each_entry(clk, tmp, &clk_orphan_list, child_node)
clk_disable_unused_subtree(clk);
+ hlist_for_each_entry(clk, tmp, &clk_root_list, child_node)
+ clk_unprepare_unused_subtree(clk);
+
+ hlist_for_each_entry(clk, tmp, &clk_orphan_list, child_node)
+ clk_unprepare_unused_subtree(clk);
+
mutex_unlock(&prepare_lock);
return 0;
--
1.7.10
^ permalink raw reply related [flat|nested] 7+ messages in thread
* [RESEND PATCH 3/4] clk: Introduce optional unprepare_unused callback
2013-03-12 19:26 [RESEND PATCH 0/4] clk: Provide option to unprepare unused clocks at late init Ulf Hansson
2013-03-12 19:26 ` [RESEND PATCH 1/4] clk: Introduce optional is_prepared callback Ulf Hansson
2013-03-12 19:26 ` [RESEND PATCH 2/4] clk: Unprepare the unused prepared slow clocks at late init Ulf Hansson
@ 2013-03-12 19:26 ` Ulf Hansson
2013-03-12 19:26 ` [RESEND PATCH 4/4] clk: ux500: Support is_prepared callback for clk-prcmu Ulf Hansson
3 siblings, 0 replies; 7+ messages in thread
From: Ulf Hansson @ 2013-03-12 19:26 UTC (permalink / raw)
To: linux-arm-kernel
From: Ulf Hansson <ulf.hansson@linaro.org>
An unprepare_unused callback is introduced for the same reasons that the
disable_unused callback was added.
During the clk_disable_unused sequence, a clk_hw that needs specific
treatment with regard to being unprepared shall implement the
unprepare_unused callback.
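As a hedged illustration of when a clk_hw might want this (the bar_*
names are hypothetical and not part of this series): a clock whose
regular unprepare also releases a resource that was never requested for
a clock left prepared by the bootloader, so the late init path should
only gate the clock itself:

/* Hypothetical provider. Only called from clk_disable_unused: gate the
 * clock but skip the resource release done by the regular unprepare,
 * since software never requested that resource for this clock.
 */
static void bar_clk_unprepare_unused(struct clk_hw *hw)
{
	struct bar_clk *clk = to_bar_clk(hw);

	bar_clk_gate(clk, false);
}

static struct clk_ops bar_clk_ops = {
	.prepare		= bar_clk_prepare,
	.unprepare		= bar_clk_unprepare,
	.is_prepared		= bar_clk_is_prepared,
	.unprepare_unused	= bar_clk_unprepare_unused,
};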
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
---
drivers/clk/clk.c | 7 +++++--
include/linux/clk-provider.h | 5 +++++
2 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index 0944348..d8dc6d7c 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -358,9 +358,12 @@ static void clk_unprepare_unused_subtree(struct clk *clk)
if (clk->flags & CLK_IGNORE_UNUSED)
return;
- if (__clk_is_prepared(clk))
- if (clk->ops->unprepare)
+ if (__clk_is_prepared(clk)) {
+ if (clk->ops->unprepare_unused)
+ clk->ops->unprepare_unused(clk->hw);
+ else if (clk->ops->unprepare)
clk->ops->unprepare(clk->hw);
+ }
}
/* caller must hold prepare_lock */
diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h
index 86ff6be..9e97fb4 100644
--- a/include/linux/clk-provider.h
+++ b/include/linux/clk-provider.h
@@ -49,6 +49,10 @@ struct clk_hw;
* This function is allowed to sleep. Optional, if this op is not
* set then the prepare count will be used.
*
+ * @unprepare_unused: Unprepare the clock. Only called from
+ * clk_disable_unused for prepared clocks with special needs.
+ * Called with prepare mutex held. This function may sleep.
+ *
* @enable: Enable the clock atomically. This must not return until the
* clock is generating a valid clock signal, usable by consumer
* devices. Called with enable_lock held. This function must not
@@ -113,6 +117,7 @@ struct clk_ops {
int (*prepare)(struct clk_hw *hw);
void (*unprepare)(struct clk_hw *hw);
int (*is_prepared)(struct clk_hw *hw);
+ void (*unprepare_unused)(struct clk_hw *hw);
int (*enable)(struct clk_hw *hw);
void (*disable)(struct clk_hw *hw);
int (*is_enabled)(struct clk_hw *hw);
--
1.7.10
^ permalink raw reply related [flat|nested] 7+ messages in thread
* [RESEND PATCH 4/4] clk: ux500: Support is_prepared callback for clk-prcmu
2013-03-12 19:26 [RESEND PATCH 0/4] clk: Provide option to unprepare unused clocks at late init Ulf Hansson
` (2 preceding siblings ...)
2013-03-12 19:26 ` [RESEND PATCH 3/4] clk: Introduce optional unprepare_unused callback Ulf Hansson
@ 2013-03-12 19:26 ` Ulf Hansson
3 siblings, 0 replies; 7+ messages in thread
From: Ulf Hansson @ 2013-03-12 19:26 UTC (permalink / raw)
To: linux-arm-kernel
From: Ulf Hansson <ulf.hansson@linaro.org>
To allow the clk_disable_unused sequence to unprepare unused PRCMU clocks,
clk-prcmu now implements the is_prepared callback.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
---
drivers/clk/ux500/clk-prcmu.c | 134 ++++++++++++++++++++++++-----------------
1 file changed, 80 insertions(+), 54 deletions(-)
diff --git a/drivers/clk/ux500/clk-prcmu.c b/drivers/clk/ux500/clk-prcmu.c
index 74faa7e..9d83256 100644
--- a/drivers/clk/ux500/clk-prcmu.c
+++ b/drivers/clk/ux500/clk-prcmu.c
@@ -20,15 +20,23 @@
struct clk_prcmu {
struct clk_hw hw;
u8 cg_sel;
+ int is_prepared;
int is_enabled;
+ int opp_requested;
};
/* PRCMU clock operations. */
static int clk_prcmu_prepare(struct clk_hw *hw)
{
+ int ret;
struct clk_prcmu *clk = to_clk_prcmu(hw);
- return prcmu_request_clock(clk->cg_sel, true);
+
+ ret = prcmu_request_clock(clk->cg_sel, true);
+ if (!ret)
+ clk->is_prepared = 1;
+
+ return ret;
}
static void clk_prcmu_unprepare(struct clk_hw *hw)
@@ -37,6 +45,14 @@ static void clk_prcmu_unprepare(struct clk_hw *hw)
if (prcmu_request_clock(clk->cg_sel, false))
pr_err("clk_prcmu: %s failed to disable %s.\n", __func__,
hw->init->name);
+ else
+ clk->is_prepared = 0;
+}
+
+static int clk_prcmu_is_prepared(struct clk_hw *hw)
+{
+ struct clk_prcmu *clk = to_clk_prcmu(hw);
+ return clk->is_prepared;
}
static int clk_prcmu_enable(struct clk_hw *hw)
@@ -79,58 +95,52 @@ static int clk_prcmu_set_rate(struct clk_hw *hw, unsigned long rate,
return prcmu_set_clock_rate(clk->cg_sel, rate);
}
-static int request_ape_opp100(bool enable)
-{
- static int reqs;
- int err = 0;
-
- if (enable) {
- if (!reqs)
- err = prcmu_qos_add_requirement(PRCMU_QOS_APE_OPP,
- "clock", 100);
- if (!err)
- reqs++;
- } else {
- reqs--;
- if (!reqs)
- prcmu_qos_remove_requirement(PRCMU_QOS_APE_OPP,
- "clock");
- }
- return err;
-}
-
static int clk_prcmu_opp_prepare(struct clk_hw *hw)
{
int err;
struct clk_prcmu *clk = to_clk_prcmu(hw);
- err = request_ape_opp100(true);
- if (err) {
- pr_err("clk_prcmu: %s failed to request APE OPP100 for %s.\n",
- __func__, hw->init->name);
- return err;
+ if (!clk->opp_requested) {
+ err = prcmu_qos_add_requirement(PRCMU_QOS_APE_OPP,
+ (char *)__clk_get_name(hw->clk),
+ 100);
+ if (err) {
+ pr_err("clk_prcmu: %s fail req APE OPP for %s.\n",
+ __func__, hw->init->name);
+ return err;
+ }
+ clk->opp_requested = 1;
}
err = prcmu_request_clock(clk->cg_sel, true);
- if (err)
- request_ape_opp100(false);
+ if (err) {
+ prcmu_qos_remove_requirement(PRCMU_QOS_APE_OPP,
+ (char *)__clk_get_name(hw->clk));
+ clk->opp_requested = 0;
+ return err;
+ }
- return err;
+ clk->is_prepared = 1;
+ return 0;
}
static void clk_prcmu_opp_unprepare(struct clk_hw *hw)
{
struct clk_prcmu *clk = to_clk_prcmu(hw);
- if (prcmu_request_clock(clk->cg_sel, false))
- goto out_error;
- if (request_ape_opp100(false))
- goto out_error;
- return;
-
-out_error:
- pr_err("clk_prcmu: %s failed to disable %s.\n", __func__,
- hw->init->name);
+ if (prcmu_request_clock(clk->cg_sel, false)) {
+ pr_err("clk_prcmu: %s failed to disable %s.\n", __func__,
+ hw->init->name);
+ return;
+ }
+
+ if (clk->opp_requested) {
+ prcmu_qos_remove_requirement(PRCMU_QOS_APE_OPP,
+ (char *)__clk_get_name(hw->clk));
+ clk->opp_requested = 0;
+ }
+
+ clk->is_prepared = 0;
}
static int clk_prcmu_opp_volt_prepare(struct clk_hw *hw)
@@ -138,38 +148,49 @@ static int clk_prcmu_opp_volt_prepare(struct clk_hw *hw)
int err;
struct clk_prcmu *clk = to_clk_prcmu(hw);
- err = prcmu_request_ape_opp_100_voltage(true);
- if (err) {
- pr_err("clk_prcmu: %s failed to request APE OPP VOLT for %s.\n",
- __func__, hw->init->name);
- return err;
+ if (!clk->opp_requested) {
+ err = prcmu_request_ape_opp_100_voltage(true);
+ if (err) {
+ pr_err("clk_prcmu: %s fail req APE OPP VOLT for %s.\n",
+ __func__, hw->init->name);
+ return err;
+ }
+ clk->opp_requested = 1;
}
err = prcmu_request_clock(clk->cg_sel, true);
- if (err)
+ if (err) {
prcmu_request_ape_opp_100_voltage(false);
+ clk->opp_requested = 0;
+ return err;
+ }
- return err;
+ clk->is_prepared = 1;
+ return 0;
}
static void clk_prcmu_opp_volt_unprepare(struct clk_hw *hw)
{
struct clk_prcmu *clk = to_clk_prcmu(hw);
- if (prcmu_request_clock(clk->cg_sel, false))
- goto out_error;
- if (prcmu_request_ape_opp_100_voltage(false))
- goto out_error;
- return;
-
-out_error:
- pr_err("clk_prcmu: %s failed to disable %s.\n", __func__,
- hw->init->name);
+ if (prcmu_request_clock(clk->cg_sel, false)) {
+ pr_err("clk_prcmu: %s failed to disable %s.\n", __func__,
+ hw->init->name);
+ return;
+ }
+
+ if (clk->opp_requested) {
+ prcmu_request_ape_opp_100_voltage(false);
+ clk->opp_requested = 0;
+ }
+
+ clk->is_prepared = 0;
}
static struct clk_ops clk_prcmu_scalable_ops = {
.prepare = clk_prcmu_prepare,
.unprepare = clk_prcmu_unprepare,
+ .is_prepared = clk_prcmu_is_prepared,
.enable = clk_prcmu_enable,
.disable = clk_prcmu_disable,
.is_enabled = clk_prcmu_is_enabled,
@@ -181,6 +202,7 @@ static struct clk_ops clk_prcmu_scalable_ops = {
static struct clk_ops clk_prcmu_gate_ops = {
.prepare = clk_prcmu_prepare,
.unprepare = clk_prcmu_unprepare,
+ .is_prepared = clk_prcmu_is_prepared,
.enable = clk_prcmu_enable,
.disable = clk_prcmu_disable,
.is_enabled = clk_prcmu_is_enabled,
@@ -202,6 +224,7 @@ static struct clk_ops clk_prcmu_rate_ops = {
static struct clk_ops clk_prcmu_opp_gate_ops = {
.prepare = clk_prcmu_opp_prepare,
.unprepare = clk_prcmu_opp_unprepare,
+ .is_prepared = clk_prcmu_is_prepared,
.enable = clk_prcmu_enable,
.disable = clk_prcmu_disable,
.is_enabled = clk_prcmu_is_enabled,
@@ -211,6 +234,7 @@ static struct clk_ops clk_prcmu_opp_gate_ops = {
static struct clk_ops clk_prcmu_opp_volt_scalable_ops = {
.prepare = clk_prcmu_opp_volt_prepare,
.unprepare = clk_prcmu_opp_volt_unprepare,
+ .is_prepared = clk_prcmu_is_prepared,
.enable = clk_prcmu_enable,
.disable = clk_prcmu_disable,
.is_enabled = clk_prcmu_is_enabled,
@@ -242,7 +266,9 @@ static struct clk *clk_reg_prcmu(const char *name,
}
clk->cg_sel = cg_sel;
+ clk->is_prepared = 1;
clk->is_enabled = 1;
+ clk->opp_requested = 0;
/* "rate" can be used for changing the initial frequency */
if (rate)
prcmu_set_clock_rate(cg_sel, rate);
--
1.7.10
^ permalink raw reply related [flat|nested] 7+ messages in thread
* [RESEND PATCH 2/4] clk: Unprepare the unused prepared slow clocks at late init
2013-03-12 19:26 ` [RESEND PATCH 2/4] clk: Unprepare the unused prepared slow clocks at late init Ulf Hansson
@ 2013-03-19 19:57 ` Mike Turquette
2013-03-20 9:33 ` Ulf Hansson
0 siblings, 1 reply; 7+ messages in thread
From: Mike Turquette @ 2013-03-19 19:57 UTC (permalink / raw)
To: linux-arm-kernel
Quoting Ulf Hansson (2013-03-12 12:26:03)
> From: Ulf Hansson <ulf.hansson@linaro.org>
>
> The unused ungated fast clocks are already being disabled from
> clk_disable_unused at late init. This patch extends the sequence so that
> the slow unused prepared clocks are unprepared as well.
>
> Unless the optional .is_prepared callback is implemented by a clk_hw,
> the clk_disable_unused sequence will not unprepare any unused clocks,
> since it falls back to using the software prepare counter.
>
> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
> Acked-by: Linus Walleij <linus.walleij@linaro.org>
Hi Ulf,
I've taken all of these patches into clk-next. Thanks for the resend.
I had to fix up this patch due to commit b67bfe0d, "hlist: drop the node
parameter from iterators". The merged version is below. Let me know if
you have any issues with it.
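For reference, the fixup boils down to dropping the now-removed node
cursor from the iterator calls; a minimal before/after taken from the
subtree walk in this patch:

	/* before b67bfe0d: the iterator took an hlist_node cursor */
	struct hlist_node *tmp;

	hlist_for_each_entry(child, tmp, &clk->children, child_node)
		clk_unprepare_unused_subtree(child);

	/* after b67bfe0d: the node parameter is gone */
	hlist_for_each_entry(child, &clk->children, child_node)
		clk_unprepare_unused_subtree(child);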
Regards,
Mike
From ae3865ca119ad82afbf003b4eee38533f32bd0fe Mon Sep 17 00:00:00 2001
From: Ulf Hansson <ulf.hansson@linaro.org>
Date: Tue, 12 Mar 2013 20:26:03 +0100
Subject: [PATCH] clk: Unprepare the unused prepared slow clocks at late init
The unused ungated fast clocks are already being disabled from
clk_disable_unused at late init. This patch extends the sequence so that
the slow unused prepared clocks are unprepared as well.
Unless the optional .is_prepared callback is implemented by a clk_hw,
the clk_disable_unused sequence will not unprepare any unused clocks,
since it falls back to using the software prepare counter.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Mike Turquette <mturquette@linaro.org>
[mturquette@linaro.org: fixed hlist accessors per b67bfe0d]
---
drivers/clk/clk.c | 28 ++++++++++++++++++++++++++++
1 file changed, 28 insertions(+)
diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index 7571b50..c0141f3 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -336,6 +336,28 @@ static inline int clk_debug_register(struct clk *clk) { return 0; }
#endif
/* caller must hold prepare_lock */
+static void clk_unprepare_unused_subtree(struct clk *clk)
+{
+ struct clk *child;
+
+ if (!clk)
+ return;
+
+ hlist_for_each_entry(child, &clk->children, child_node)
+ clk_unprepare_unused_subtree(child);
+
+ if (clk->prepare_count)
+ return;
+
+ if (clk->flags & CLK_IGNORE_UNUSED)
+ return;
+
+ if (__clk_is_prepared(clk))
+ if (clk->ops->unprepare)
+ clk->ops->unprepare(clk->hw);
+}
+
+/* caller must hold prepare_lock */
static void clk_disable_unused_subtree(struct clk *clk)
{
struct clk *child;
@@ -386,6 +408,12 @@ static int clk_disable_unused(void)
hlist_for_each_entry(clk, &clk_orphan_list, child_node)
clk_disable_unused_subtree(clk);
+ hlist_for_each_entry(clk, &clk_root_list, child_node)
+ clk_unprepare_unused_subtree(clk);
+
+ hlist_for_each_entry(clk, &clk_orphan_list, child_node)
+ clk_unprepare_unused_subtree(clk);
+
mutex_unlock(&prepare_lock);
return 0;
--
1.7.10.4
^ permalink raw reply related [flat|nested] 7+ messages in thread
* [RESEND PATCH 2/4] clk: Unprepare the unused prepared slow clocks at late init
2013-03-19 19:57 ` Mike Turquette
@ 2013-03-20 9:33 ` Ulf Hansson
0 siblings, 0 replies; 7+ messages in thread
From: Ulf Hansson @ 2013-03-20 9:33 UTC (permalink / raw)
To: linux-arm-kernel
On 19 March 2013 20:57, Mike Turquette <mturquette@linaro.org> wrote:
> Quoting Ulf Hansson (2013-03-12 12:26:03)
>> From: Ulf Hansson <ulf.hansson@linaro.org>
>>
>> The unused ungated fast clocks are already being disabled from
>> clk_disable_unused at late init. This patch extends the sequence so that
>> the slow unused prepared clocks are unprepared as well.
>>
>> Unless the optional .is_prepared callback is implemented by a clk_hw,
>> the clk_disable_unused sequence will not unprepare any unused clocks,
>> since it falls back to using the software prepare counter.
>>
>> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
>> Acked-by: Linus Walleij <linus.walleij@linaro.org>
>
> Hi Ulf,
>
> I've taken all of these patches into clk-next. Thanks for the resend.
>
> I had to fix up this patch due to commit b67bfe0d, "hlist: drop the node
> parameter from iterators". The merged version is below. Let me know if
> you have any issues with it.
>
> Regards,
> Mike
>
>
>
> From ae3865ca119ad82afbf003b4eee38533f32bd0fe Mon Sep 17 00:00:00 2001
> From: Ulf Hansson <ulf.hansson@linaro.org>
> Date: Tue, 12 Mar 2013 20:26:03 +0100
> Subject: [PATCH] clk: Unprepare the unused prepared slow clocks at late init
>
> The unused ungated fast clocks are already being disabled from
> clk_disable_unused at late init. This patch extends the sequence so that
> the slow unused prepared clocks are unprepared as well.
>
> Unless the optional .is_prepared callback is implemented by a clk_hw,
> the clk_disable_unused sequence will not unprepare any unused clocks,
> since it falls back to using the software prepare counter.
>
> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
> Acked-by: Linus Walleij <linus.walleij@linaro.org>
> Signed-off-by: Mike Turquette <mturquette@linaro.org>
> [mturquette@linaro.org: fixed hlist accessors per b67bfe0d]
> ---
> drivers/clk/clk.c | 28 ++++++++++++++++++++++++++++
> 1 file changed, 28 insertions(+)
>
> diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
> index 7571b50..c0141f3 100644
> --- a/drivers/clk/clk.c
> +++ b/drivers/clk/clk.c
> @@ -336,6 +336,28 @@ static inline int clk_debug_register(struct clk *clk) { return 0; }
> #endif
>
> /* caller must hold prepare_lock */
> +static void clk_unprepare_unused_subtree(struct clk *clk)
> +{
> + struct clk *child;
> +
> + if (!clk)
> + return;
> +
> + hlist_for_each_entry(child, &clk->children, child_node)
> + clk_unprepare_unused_subtree(child);
> +
> + if (clk->prepare_count)
> + return;
> +
> + if (clk->flags & CLK_IGNORE_UNUSED)
> + return;
> +
> + if (__clk_is_prepared(clk))
> + if (clk->ops->unprepare)
> + clk->ops->unprepare(clk->hw);
> +}
> +
> +/* caller must hold prepare_lock */
> static void clk_disable_unused_subtree(struct clk *clk)
> {
> struct clk *child;
> @@ -386,6 +408,12 @@ static int clk_disable_unused(void)
> hlist_for_each_entry(clk, &clk_orphan_list, child_node)
> clk_disable_unused_subtree(clk);
>
> + hlist_for_each_entry(clk, &clk_root_list, child_node)
> + clk_unprepare_unused_subtree(clk);
> +
> + hlist_for_each_entry(clk, &clk_orphan_list, child_node)
> + clk_unprepare_unused_subtree(clk);
> +
> mutex_unlock(&prepare_lock);
>
> return 0;
> --
> 1.7.10.4
>
Thanks Mike, looks good!
Kind regards
Ulf Hansson
^ permalink raw reply [flat|nested] 7+ messages in thread
end of thread, other threads: [~2013-03-20 9:33 UTC | newest]
Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-03-12 19:26 [RESEND PATCH 0/4] clk: Provide option to unprepare unused clocks at late init Ulf Hansson
2013-03-12 19:26 ` [RESEND PATCH 1/4] clk: Introduce optional is_prepared callback Ulf Hansson
2013-03-12 19:26 ` [RESEND PATCH 2/4] clk: Unprepare the unused prepared slow clocks at late init Ulf Hansson
2013-03-19 19:57 ` Mike Turquette
2013-03-20 9:33 ` Ulf Hansson
2013-03-12 19:26 ` [RESEND PATCH 3/4] clk: Introduce optional unprepare_unused callback Ulf Hansson
2013-03-12 19:26 ` [RESEND PATCH 4/4] clk: ux500: Support is_prepared callback for clk-prcmu Ulf Hansson