* [RFC PATCH 1/3] ARM: Add cpu power management notifiers
2011-02-10 21:31 [RFC PATCH 0/3] CPU PM notifiers Colin Cross
@ 2011-02-10 21:31 ` Colin Cross
2011-02-12 14:46 ` Russell King - ARM Linux
2011-02-10 21:31 ` [RFC PATCH 2/3] ARM: gic: Use cpu pm notifiers to save gic state Colin Cross
` (2 subsequent siblings)
3 siblings, 1 reply; 12+ messages in thread
From: Colin Cross @ 2011-02-10 21:31 UTC (permalink / raw)
To: linux-arm-kernel
In some CPU power modes entered from idle, hotplug and
suspend, peripherals located in the CPU power domain, such as
the GIC, localtimers, and VFP, may be powered down. Add a
notifier chain that allows drivers for those peripherals to
be notified before and after they may be reset.
Signed-off-by: Colin Cross <ccross@android.com>
---
arch/arm/include/asm/cpu_pm.h | 123 +++++++++++++++++++++++++++++++++++++++++
arch/arm/kernel/Makefile | 1 +
arch/arm/kernel/cpu_pm.c | 116 ++++++++++++++++++++++++++++++++++++++
3 files changed, 240 insertions(+), 0 deletions(-)
create mode 100644 arch/arm/include/asm/cpu_pm.h
create mode 100644 arch/arm/kernel/cpu_pm.c
diff --git a/arch/arm/include/asm/cpu_pm.h b/arch/arm/include/asm/cpu_pm.h
new file mode 100644
index 0000000..07b1b6e
--- /dev/null
+++ b/arch/arm/include/asm/cpu_pm.h
@@ -0,0 +1,123 @@
+/*
+ * Copyright (C) 2011 Google, Inc.
+ *
+ * Author:
+ * Colin Cross <ccross@android.com>
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef _ASMARM_CPU_PM_H
+#define _ASMARM_CPU_PM_H
+
+#include <linux/kernel.h>
+#include <linux/notifier.h>
+
+/*
+ * When a CPU goes to a low power state that turns off power to the CPU's
+ * power domain, the contents of some blocks (floating point coprocessors,
+ * interrupt controllers, caches, timers) in the same power domain can
+ * be lost. The cpu_pm notifiers provide a method for platform idle, suspend,
+ * and hotplug implementations to notify the drivers for these blocks that
+ * they may be reset.
+ *
+ * All cpu_pm notifications must be called with interrupts disabled.
+ *
+ * The notifications are split into two classes, CPU notifications and CPU
+ * complex notifications.
+ *
+ * CPU notifications apply to a single CPU, and must be called on the affected
+ * CPU. They are used to save per-cpu context for affected blocks.
+ *
+ * CPU complex notifications apply to all CPUs in a single power domain. They
+ * are used to save any global context for affected blocks, and must be called
+ * after all the CPUs in the power domain have been notified of the low power
+ * state.
+ *
+ */
+
+/*
+ * Event codes passed as unsigned long val to notifier calls
+ */
+enum cpu_pm_event {
+ /* A single cpu is entering a low power state */
+ CPU_PM_ENTER,
+
+ /* A single cpu failed to enter a low power state */
+ CPU_PM_ENTER_FAILED,
+
+ /* A single cpu is exiting a low power state */
+ CPU_PM_EXIT,
+
+ /* A cpu power domain is entering a low power state */
+ CPU_COMPLEX_PM_ENTER,
+
+ /* A cpu power domain failed to enter a low power state */
+ CPU_COMPLEX_PM_ENTER_FAILED,
+
+ /* A cpu power domain is exiting a low power state */
+ CPU_COMPLEX_PM_EXIT,
+};
+
+int cpu_pm_register_notifier(struct notifier_block *nb);
+int cpu_pm_unregister_notifier(struct notifier_block *nb);
+
+/*
+ * cpu_pm_enter
+ *
+ * Notifies listeners that a single cpu is entering a low power state that may
+ * cause some blocks in the same power domain as the cpu to reset.
+ *
+ * Must be called on the affected cpu with interrupts disabled. Platform is
+ * responsible for ensuring that cpu_pm_enter is not called twice on the same
+ * cpu before cpu_pm_exit is called.
+ */
+int cpu_pm_enter(void);
+
+/*
+ * cpu_pm_exit
+ *
+ * Notifies listeners that a single cpu is exiting a low power state that may
+ * have caused some blocks in the same power domain as the cpu to reset.
+ *
+ * Must be called on the affected cpu with interrupts disabled.
+ */
+int cpu_pm_exit(void);
+
+/*
+ * cpu_complex_pm_enter
+ *
+ * Notifies listeners that all cpus in a power domain are entering a low power
+ * state that may cause some blocks in the same power domain to reset.
+ *
+ * Must be called after cpu_pm_enter has been called on all cpus in the power
+ * domain, and before cpu_pm_exit has been called on any cpu in the power
+ * domain.
+ *
+ * Must be called with interrupts disabled.
+ */
+int cpu_complex_pm_enter(void);
+
+/*
+ * cpu_complex_pm_exit
+ *
+ * Notifies listeners that all cpus in a power domain are exiting a low power
+ * state that may have caused some blocks in the same power domain to reset.
+ *
+ * Must be called after cpu_pm_enter has been called on all cpus in the power
+ * domain, and before cpu_pm_exit has been called on any cpu in the power
+ * domain.
+ *
+ * Must be called with interrupts disabled.
+ */
+int cpu_complex_pm_exit(void);
+
+#endif
diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
index 185ee82..b0f25cb 100644
--- a/arch/arm/kernel/Makefile
+++ b/arch/arm/kernel/Makefile
@@ -58,6 +58,7 @@ obj-$(CONFIG_CPU_PJ4) += pj4-cp0.o
obj-$(CONFIG_IWMMXT) += iwmmxt.o
obj-$(CONFIG_CPU_HAS_PMU) += pmu.o
obj-$(CONFIG_HW_PERF_EVENTS) += perf_event.o
+obj-$(CONFIG_CPU_IDLE) += cpu_pm.o
AFLAGS_iwmmxt.o := -Wa,-mcpu=iwmmxt
ifneq ($(CONFIG_ARCH_EBSA110),y)
diff --git a/arch/arm/kernel/cpu_pm.c b/arch/arm/kernel/cpu_pm.c
new file mode 100644
index 0000000..9a04ba1
--- /dev/null
+++ b/arch/arm/kernel/cpu_pm.c
@@ -0,0 +1,116 @@
+/*
+ * Copyright (C) 2011 Google, Inc.
+ *
+ * Author:
+ * Colin Cross <ccross@android.com>
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/notifier.h>
+#include <linux/smp.h>
+
+#include <asm/cpu_pm.h>
+
+static DEFINE_SPINLOCK(idle_notifier_lock);
+static RAW_NOTIFIER_HEAD(idle_notifier_chain);
+
+int cpu_pm_register_notifier(struct notifier_block *nb)
+{
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&idle_notifier_lock, flags);
+ ret = raw_notifier_chain_register(&idle_notifier_chain, nb);
+ spin_unlock_irqrestore(&idle_notifier_lock, flags);
+
+ return ret;
+}
+
+int cpu_pm_unregister_notifier(struct notifier_block *nb)
+{
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&idle_notifier_lock, flags);
+ ret = raw_notifier_chain_unregister(&idle_notifier_chain, nb);
+ spin_unlock_irqrestore(&idle_notifier_lock, flags);
+
+ return ret;
+}
+
+static int __idle_notify(enum cpu_pm_event event, int nr_to_call,
+ int *nr_calls)
+{
+ int ret;
+
+ ret = __raw_notifier_call_chain(&idle_notifier_chain, event, NULL,
+ nr_to_call, nr_calls);
+
+ return notifier_to_errno(ret);
+}
+
+int cpu_pm_enter(void)
+{
+ int nr_calls;
+ int ret;
+
+ spin_lock(&idle_notifier_lock);
+ ret = __idle_notify(CPU_PM_ENTER, -1, &nr_calls);
+ if (ret) {
+ __idle_notify(CPU_PM_ENTER_FAILED, nr_calls - 1, NULL);
+ spin_unlock(&idle_notifier_lock);
+ return ret;
+ }
+ spin_unlock(&idle_notifier_lock);
+
+ return 0;
+}
+
+int cpu_pm_exit(void)
+{
+ int ret;
+
+ spin_lock(&idle_notifier_lock);
+ ret = __idle_notify(CPU_PM_EXIT, -1, NULL);
+ spin_unlock(&idle_notifier_lock);
+
+ return ret;
+}
+
+int cpu_complex_pm_enter(void)
+{
+ int nr_calls;
+ int ret;
+
+ spin_lock(&idle_notifier_lock);
+ ret = __idle_notify(CPU_COMPLEX_PM_ENTER, -1, &nr_calls);
+ if (ret) {
+ __idle_notify(CPU_COMPLEX_PM_ENTER_FAILED, nr_calls - 1, NULL);
+ spin_unlock(&idle_notifier_lock);
+ return ret;
+ }
+ spin_unlock(&idle_notifier_lock);
+
+ return 0;
+}
+
+int cpu_complex_pm_exit(void)
+{
+ int ret;
+
+ spin_lock(&idle_notifier_lock);
+ ret = __idle_notify(CPU_COMPLEX_PM_EXIT, -1, NULL);
+ spin_unlock(&idle_notifier_lock);
+
+ return ret;
+}
--
1.7.3.1
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [RFC PATCH 1/3] ARM: Add cpu power management notifiers
2011-02-10 21:31 ` [RFC PATCH 1/3] ARM: Add cpu power management notifiers Colin Cross
@ 2011-02-12 14:46 ` Russell King - ARM Linux
0 siblings, 0 replies; 12+ messages in thread
From: Russell King - ARM Linux @ 2011-02-12 14:46 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Feb 10, 2011 at 01:31:25PM -0800, Colin Cross wrote:
> +int cpu_pm_enter(void)
> +{
> + int nr_calls;
> + int ret;
> +
> + spin_lock(&idle_notifier_lock);
> + ret = __idle_notify(CPU_PM_ENTER, -1, &nr_calls);
> + if (ret) {
> + __idle_notify(CPU_PM_ENTER_FAILED, nr_calls - 1, NULL);
> + spin_unlock(&idle_notifier_lock);
> + return ret;
> + }
> + spin_unlock(&idle_notifier_lock);
> +
> + return 0;
Wouldn't:
spin_lock(&idle_notifier_lock);
ret = __idle_notify(CPU_PM_ENTER, -1, &nr_calls);
if (ret)
__idle_notify(CPU_PM_ENTER_FAILED, nr_calls - 1, NULL);
spin_unlock(&idle_notifier_lock);
return ret;
be easier reading?
* [RFC PATCH 2/3] ARM: gic: Use cpu pm notifiers to save gic state
2011-02-10 21:31 [RFC PATCH 0/3] CPU PM notifiers Colin Cross
2011-02-10 21:31 ` [RFC PATCH 1/3] ARM: Add cpu power management notifiers Colin Cross
@ 2011-02-10 21:31 ` Colin Cross
2011-02-18 0:58 ` Colin Cross
2011-02-10 21:31 ` [RFC PATCH 3/3] ARM: vfp: Use cpu pm notifiers to save vfp state Colin Cross
2011-02-12 10:23 ` [RFC PATCH 0/3] CPU PM notifiers Santosh Shilimkar
3 siblings, 1 reply; 12+ messages in thread
From: Colin Cross @ 2011-02-10 21:31 UTC (permalink / raw)
To: linux-arm-kernel
Signed-off-by: Colin Cross <ccross@android.com>
---
arch/arm/common/gic.c | 204 +++++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 204 insertions(+), 0 deletions(-)
diff --git a/arch/arm/common/gic.c b/arch/arm/common/gic.c
index 2243772..219eb71 100644
--- a/arch/arm/common/gic.c
+++ b/arch/arm/common/gic.c
@@ -29,6 +29,7 @@
#include <linux/cpumask.h>
#include <linux/io.h>
+#include <asm/cpu_pm.h>
#include <asm/irq.h>
#include <asm/mach/irq.h>
#include <asm/hardware/gic.h>
@@ -42,6 +43,17 @@ struct gic_chip_data {
unsigned int irq_offset;
void __iomem *dist_base;
void __iomem *cpu_base;
+#ifdef CONFIG_PM
+ u32 saved_spi_enable[DIV_ROUND_UP(1020, 32)];
+ u32 saved_spi_conf[DIV_ROUND_UP(1020, 16)];
+ u32 saved_spi_pri[DIV_ROUND_UP(1020, 4)];
+ u32 saved_spi_target[DIV_ROUND_UP(1020, 4)];
+ u32 __percpu *saved_ppi_enable;
+ u32 __percpu *saved_ppi_conf;
+ u32 __percpu *saved_ppi_pri;
+#endif
+
+ unsigned int gic_irqs;
};
#ifndef MAX_GIC_NR
@@ -237,6 +249,8 @@ static void __init gic_dist_init(struct gic_chip_data *gic,
if (gic_irqs > 1020)
gic_irqs = 1020;
+ gic->gic_irqs = gic_irqs;
+
/*
* Set all global interrupts to be level triggered, active low.
*/
@@ -305,6 +319,182 @@ static void __cpuinit gic_cpu_init(struct gic_chip_data *gic)
writel(1, base + GIC_CPU_CTRL);
}
+/*
+ * Saves the GIC distributor registers during suspend or idle. Must be called
+ * with interrupts disabled but before powering down the GIC. After calling
+ * this function, no interrupts will be delivered by the GIC, and another
+ * platform-specific wakeup source must be enabled.
+ */
+static void gic_dist_save(unsigned int gic_nr)
+{
+ unsigned int gic_irqs;
+ void __iomem *dist_base;
+ int i;
+
+ if (gic_nr >= MAX_GIC_NR)
+ BUG();
+
+ gic_irqs = gic_data[gic_nr].gic_irqs;
+ dist_base = gic_data[gic_nr].dist_base;
+
+ if (!dist_base)
+ return;
+
+ for (i = 0; i < DIV_ROUND_UP(gic_irqs, 16); i++)
+ gic_data[gic_nr].saved_spi_conf[i] =
+ readl(dist_base + GIC_DIST_CONFIG + i * 4);
+
+ for (i = 0; i < DIV_ROUND_UP(gic_irqs, 4); i++)
+ gic_data[gic_nr].saved_spi_pri[i] =
+ readl(dist_base + GIC_DIST_PRI + i * 4);
+
+ for (i = 0; i < DIV_ROUND_UP(gic_irqs, 4); i++)
+ gic_data[gic_nr].saved_spi_target[i] =
+ readl(dist_base + GIC_DIST_TARGET + i * 4);
+
+ for (i = 0; i < DIV_ROUND_UP(gic_irqs, 32); i++)
+ gic_data[gic_nr].saved_spi_enable[i] =
+ readl(dist_base + GIC_DIST_ENABLE_SET + i * 4);
+
+ writel(0, dist_base + GIC_DIST_CTRL);
+}
+
+/*
+ * Restores the GIC distributor registers during resume or when coming out of
+ * idle. Must be called before enabling interrupts. If a level interrupt
+ * that occurred while the GIC was suspended is still present, it will be
+ * handled normally, but any edge interrupts that occurred will not be seen by
+ * the GIC and need to be handled by the platform-specific wakeup source.
+ */
+static void gic_dist_restore(unsigned int gic_nr)
+{
+ unsigned int gic_irqs;
+ unsigned int i;
+ void __iomem *dist_base;
+
+ if (gic_nr >= MAX_GIC_NR)
+ BUG();
+
+ gic_irqs = gic_data[gic_nr].gic_irqs;
+ dist_base = gic_data[gic_nr].dist_base;
+
+ if (!dist_base)
+ return;
+
+ writel(0, dist_base + GIC_DIST_CTRL);
+
+ for (i = 0; i < DIV_ROUND_UP(gic_irqs, 16); i++)
+ writel(gic_data[gic_nr].saved_spi_conf[i],
+ dist_base + GIC_DIST_CONFIG + i * 4);
+
+ for (i = 0; i < DIV_ROUND_UP(gic_irqs, 4); i++)
+ writel(gic_data[gic_nr].saved_spi_pri[i],
+ dist_base + GIC_DIST_PRI + i * 4);
+
+ for (i = 0; i < DIV_ROUND_UP(gic_irqs, 4); i++)
+ writel(gic_data[gic_nr].saved_spi_target[i],
+ dist_base + GIC_DIST_TARGET + i * 4);
+
+ for (i = 0; i < DIV_ROUND_UP(gic_irqs, 32); i++)
+ writel(gic_data[gic_nr].saved_spi_enable[i],
+ dist_base + GIC_DIST_ENABLE_SET + i * 4);
+
+ writel(1, dist_base + GIC_DIST_CTRL);
+}
+
+static void gic_cpu_save(unsigned int gic_nr)
+{
+ int i;
+ u32 *ptr;
+ void __iomem *dist_base;
+ void __iomem *cpu_base;
+
+ if (gic_nr >= MAX_GIC_NR)
+ BUG();
+
+ dist_base = gic_data[gic_nr].dist_base;
+ cpu_base = gic_data[gic_nr].cpu_base;
+
+ if (!dist_base || !cpu_base)
+ return;
+
+ ptr = __this_cpu_ptr(gic_data[gic_nr].saved_ppi_enable);
+ for (i = 0; i < DIV_ROUND_UP(32, 32); i++)
+ ptr[i] = readl(dist_base + GIC_DIST_ENABLE_SET + i * 4);
+
+ ptr = __this_cpu_ptr(gic_data[gic_nr].saved_ppi_conf);
+ for (i = 0; i < DIV_ROUND_UP(32, 16); i++)
+ ptr[i] = readl(dist_base + GIC_DIST_CONFIG + i * 4);
+
+ ptr = __this_cpu_ptr(gic_data[gic_nr].saved_ppi_pri);
+ for (i = 0; i < DIV_ROUND_UP(32, 4); i++)
+ ptr[i] = readl(dist_base + GIC_DIST_PRI + i * 4);
+
+ writel(0, cpu_base + GIC_CPU_CTRL);
+}
+
+static void gic_cpu_restore(unsigned int gic_nr)
+{
+ int i;
+ u32 *ptr;
+ void __iomem *dist_base;
+ void __iomem *cpu_base;
+
+ if (gic_nr >= MAX_GIC_NR)
+ BUG();
+
+ dist_base = gic_data[gic_nr].dist_base;
+ cpu_base = gic_data[gic_nr].cpu_base;
+
+ if (!dist_base || !cpu_base)
+ return;
+
+ ptr = __this_cpu_ptr(gic_data[gic_nr].saved_ppi_enable);
+ for (i = 0; i < DIV_ROUND_UP(32, 32); i++)
+ writel(ptr[i], dist_base + GIC_DIST_ENABLE_SET + i * 4);
+
+ ptr = __this_cpu_ptr(gic_data[gic_nr].saved_ppi_conf);
+ for (i = 0; i < DIV_ROUND_UP(32, 16); i++)
+ writel(ptr[i], dist_base + GIC_DIST_CONFIG + i * 4);
+
+ ptr = __this_cpu_ptr(gic_data[gic_nr].saved_ppi_pri);
+ for (i = 0; i < DIV_ROUND_UP(32, 4); i++)
+ writel(ptr[i], dist_base + GIC_DIST_PRI + i * 4);
+
+ writel(0xf0, cpu_base + GIC_CPU_PRIMASK);
+ writel(1, cpu_base + GIC_CPU_CTRL);
+}
+
+static int gic_notifier(struct notifier_block *self, unsigned long cmd, void *v)
+{
+ int i;
+
+ for (i = 0; i < MAX_GIC_NR; i++) {
+ switch (cmd) {
+ case CPU_PM_ENTER:
+ gic_cpu_save(i);
+ break;
+ case CPU_PM_ENTER_FAILED:
+ case CPU_PM_EXIT:
+ gic_cpu_restore(i);
+ break;
+ case CPU_COMPLEX_PM_ENTER:
+ gic_dist_save(i);
+ break;
+ case CPU_COMPLEX_PM_ENTER_FAILED:
+ case CPU_COMPLEX_PM_EXIT:
+ gic_dist_restore(i);
+ break;
+ }
+ }
+
+ return NOTIFY_OK;
+}
+
+static struct notifier_block gic_notifier_block = {
+ .notifier_call = gic_notifier,
+};
+
void __init gic_init(unsigned int gic_nr, unsigned int irq_start,
void __iomem *dist_base, void __iomem *cpu_base)
{
@@ -322,6 +512,20 @@ void __init gic_init(unsigned int gic_nr, unsigned int irq_start,
gic_dist_init(gic, irq_start);
gic_cpu_init(gic);
+
+ gic->saved_ppi_enable = __alloc_percpu(DIV_ROUND_UP(32, 32) * 4,
+ sizeof(u32));
+ BUG_ON(!gic->saved_ppi_enable);
+
+ gic->saved_ppi_conf = __alloc_percpu(DIV_ROUND_UP(32, 16) * 4,
+ sizeof(u32));
+ BUG_ON(!gic->saved_ppi_conf);
+
+ gic->saved_ppi_pri = __alloc_percpu(DIV_ROUND_UP(32, 4) * 4,
+ sizeof(u32));
+ BUG_ON(!gic->saved_ppi_pri);
+
+ cpu_pm_register_notifier(&gic_notifier_block);
}
void __cpuinit gic_secondary_init(unsigned int gic_nr)
--
1.7.3.1
* [RFC PATCH 2/3] ARM: gic: Use cpu pm notifiers to save gic state
2011-02-10 21:31 ` [RFC PATCH 2/3] ARM: gic: Use cpu pm notifiers to save gic state Colin Cross
@ 2011-02-18 0:58 ` Colin Cross
0 siblings, 0 replies; 12+ messages in thread
From: Colin Cross @ 2011-02-18 0:58 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Feb 10, 2011 at 1:31 PM, Colin Cross <ccross@android.com> wrote:
<snip>
> +#ifdef CONFIG_PM
> +	u32 saved_spi_enable[DIV_ROUND_UP(1020, 32)];
> +	u32 saved_spi_conf[DIV_ROUND_UP(1020, 16)];
> +	u32 saved_spi_pri[DIV_ROUND_UP(1020, 4)];
> +	u32 saved_spi_target[DIV_ROUND_UP(1020, 4)];
> +	u32 __percpu *saved_ppi_enable;
> +	u32 __percpu *saved_ppi_conf;
> +	u32 __percpu *saved_ppi_pri;
> +#endif
The #ifdef CONFIG_PM breaks the build when CONFIG_PM is disabled, and
this should depend on CONFIG_CPU_IDLE, not CONFIG_PM.
> +static void gic_cpu_save(unsigned int gic_nr)
> +{
> +	int i;
> +	u32 *ptr;
> +	void __iomem *dist_base;
> +	void __iomem *cpu_base;
> +
> +	if (gic_nr >= MAX_GIC_NR)
> +		BUG();
> +
> +	dist_base = gic_data[gic_nr].dist_base;
> +	cpu_base = gic_data[gic_nr].cpu_base;
> +
> +	if (!dist_base || !cpu_base)
> +		return;
> +
> +	ptr = __this_cpu_ptr(gic_data[gic_nr].saved_ppi_enable);
> +	for (i = 0; i < DIV_ROUND_UP(32, 32); i++)
> +		ptr[i] = readl(dist_base + GIC_DIST_ENABLE_SET + i * 4);
> +
> +	ptr = __this_cpu_ptr(gic_data[gic_nr].saved_ppi_conf);
> +	for (i = 0; i < DIV_ROUND_UP(32, 16); i++)
> +		ptr[i] = readl(dist_base + GIC_DIST_CONFIG + i * 4);
> +
> +	ptr = __this_cpu_ptr(gic_data[gic_nr].saved_ppi_pri);
> +	for (i = 0; i < DIV_ROUND_UP(32, 4); i++)
> +		ptr[i] = readl(dist_base + GIC_DIST_PRI + i * 4);
> +
> +	writel(0, cpu_base + GIC_CPU_CTRL);
> +}
Disabling the GIC cpu interface here prevents SGIs from waking the CPU
from WFI. On Tegra2, it is useful to be able to go to WFI, and then
either back to normal or into reset depending on the state of the
other CPU. Is it safe to leave the GIC CPU control on when going to
reset?
* [RFC PATCH 3/3] ARM: vfp: Use cpu pm notifiers to save vfp state
2011-02-10 21:31 [RFC PATCH 0/3] CPU PM notifiers Colin Cross
2011-02-10 21:31 ` [RFC PATCH 1/3] ARM: Add cpu power management notifiers Colin Cross
2011-02-10 21:31 ` [RFC PATCH 2/3] ARM: gic: Use cpu pm notifiers to save gic state Colin Cross
@ 2011-02-10 21:31 ` Colin Cross
2011-02-11 12:12 ` Catalin Marinas
2011-02-12 10:23 ` [RFC PATCH 0/3] CPU PM notifiers Santosh Shilimkar
3 siblings, 1 reply; 12+ messages in thread
From: Colin Cross @ 2011-02-10 21:31 UTC (permalink / raw)
To: linux-arm-kernel
Signed-off-by: Colin Cross <ccross@android.com>
---
arch/arm/vfp/vfpmodule.c | 24 ++++++++++++++++++++++++
1 files changed, 24 insertions(+), 0 deletions(-)
diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
index 0797cb5..8b27c18 100644
--- a/arch/arm/vfp/vfpmodule.c
+++ b/arch/arm/vfp/vfpmodule.c
@@ -21,6 +21,7 @@
#include <asm/cputype.h>
#include <asm/thread_notify.h>
#include <asm/vfp.h>
+#include <asm/cpu_pm.h>
#include "vfpinstr.h"
#include "vfp.h"
@@ -149,6 +150,28 @@ static struct notifier_block vfp_notifier_block = {
.notifier_call = vfp_notifier,
};
+static int vfp_idle_notifier(struct notifier_block *self, unsigned long cmd,
+ void *v)
+{
+ u32 fpexc = fmrx(FPEXC);
+ unsigned int cpu = smp_processor_id();
+
+ if (cmd != CPU_PM_ENTER)
+ return NOTIFY_OK;
+
+ /* The VFP may be reset in idle, save the state */
+ if ((fpexc & FPEXC_EN) && last_VFP_context[cpu]) {
+ vfp_save_state(last_VFP_context[cpu], fpexc);
+ last_VFP_context[cpu]->hard.cpu = cpu;
+ }
+
+ return NOTIFY_OK;
+}
+
+static struct notifier_block vfp_idle_notifier_block = {
+ .notifier_call = vfp_idle_notifier,
+};
+
/*
* Raise a SIGFPE for the current process.
* sicode describes the signal being raised.
@@ -549,6 +572,7 @@ static int __init vfp_init(void)
vfp_vector = vfp_support_entry;
thread_register_notifier(&vfp_notifier_block);
+ cpu_pm_register_notifier(&vfp_idle_notifier_block);
vfp_pm_init();
/*
--
1.7.3.1
* [RFC PATCH 3/3] ARM: vfp: Use cpu pm notifiers to save vfp state
2011-02-10 21:31 ` [RFC PATCH 3/3] ARM: vfp: Use cpu pm notifiers to save vfp state Colin Cross
@ 2011-02-11 12:12 ` Catalin Marinas
2011-02-11 12:24 ` Russell King - ARM Linux
2011-02-11 19:50 ` Colin Cross
0 siblings, 2 replies; 12+ messages in thread
From: Catalin Marinas @ 2011-02-11 12:12 UTC (permalink / raw)
To: linux-arm-kernel
Colin,
On Thu, 2011-02-10 at 21:31 +0000, Colin Cross wrote:
> +static int vfp_idle_notifier(struct notifier_block *self, unsigned long cmd,
> + void *v)
> +{
> + u32 fpexc = fmrx(FPEXC);
> + unsigned int cpu = smp_processor_id();
> +
> + if (cmd != CPU_PM_ENTER)
> + return NOTIFY_OK;
> +
> + /* The VFP may be reset in idle, save the state */
> + if ((fpexc & FPEXC_EN) && last_VFP_context[cpu]) {
> + vfp_save_state(last_VFP_context[cpu], fpexc);
> + last_VFP_context[cpu]->hard.cpu = cpu;
> + }
Should we only handle the case where the VFP is enabled? At context
switch we disable the VFP and re-enable it when an application tries to
use it, but it remains disabled if the application hasn't used the
VFP. So switching to the idle thread would cause the VFP to be disabled
but the state not necessarily saved.
On SMP systems, we save the VFP at every context switch to deal with
thread migration (though I have a plan to make this lazy on SMP as
well). On UP, however, we don't save the VFP registers at context switch;
we just disable it and save it lazily if it is used later in a different task.
Something like below (untested):
if (last_VFP_context[cpu]) {
vfp_save_state(last_VFP_context[cpu], fpexc);
/* force a reload when coming back from idle */
last_VFP_context[cpu] = NULL;
fmxr(FPEXC, fpexc & ~FPEXC_EN);
}
The last line (disabling) may not be necessary if we know that it comes
back from idle as disabled.
I wonder whether the current vfp_pm_suspend() function needs fixing for
UP systems as well. It is fine if the hardware preserves the VFP
registers (which may not be the case).
--
Catalin
* [RFC PATCH 3/3] ARM: vfp: Use cpu pm notifiers to save vfp state
2011-02-11 12:12 ` Catalin Marinas
@ 2011-02-11 12:24 ` Russell King - ARM Linux
2011-02-11 12:55 ` Catalin Marinas
2011-02-11 19:50 ` Colin Cross
1 sibling, 1 reply; 12+ messages in thread
From: Russell King - ARM Linux @ 2011-02-11 12:24 UTC (permalink / raw)
To: linux-arm-kernel
On Fri, Feb 11, 2011 at 12:12:25PM +0000, Catalin Marinas wrote:
> On SMP systems, we save the VFP at every context switch to deal with the
> thread migration (though I have a plan to make this lazily on SMP as
> well).
I'm not sure it's worth the complexity. You'd have to do an IPI to the
old CPU to provoke it to save the context from its VFP unit. You'd have
to do that in some kind of atomic way as the old CPU may be in the middle
of already saving it. You're also going to have to add locking to the
last_VFP_context[] array as other CPUs will be accessing non-local
entries, and that means doing locking in assembly. Yuck.
No, let's not go there. Stick with what we currently have which works
well.
* [RFC PATCH 3/3] ARM: vfp: Use cpu pm notifiers to save vfp state
2011-02-11 12:24 ` Russell King - ARM Linux
@ 2011-02-11 12:55 ` Catalin Marinas
0 siblings, 0 replies; 12+ messages in thread
From: Catalin Marinas @ 2011-02-11 12:55 UTC (permalink / raw)
To: linux-arm-kernel
On 11 February 2011 12:24, Russell King - ARM Linux
<linux@arm.linux.org.uk> wrote:
> On Fri, Feb 11, 2011 at 12:12:25PM +0000, Catalin Marinas wrote:
>> On SMP systems, we save the VFP at every context switch to deal with the
>> thread migration (though I have a plan to make this lazily on SMP as
>> well).
>
> I'm not sure it's worth the complexity.  You'd have to do an IPI to the
> old CPU to provoke it to save the context from its VFP unit.  You'd have
> to do that in some kind of atomic way as the old CPU may be in the middle
> of already saving it.  You're also going to have to add locking to the
> last_VFP_context[] array as other CPUs will be accessing non-local
> entries, and that means doing locking in assembly.  Yuck.
I wasn't thinking about that, too complex indeed. But it may be easier
to detect thread migration, possibly with some hooks into the generic
scheduler, and only save the VFP state at that point. I haven't looked
at it in detail, but I heard the x86 people have patches for something
similar.
--
Catalin
* [RFC PATCH 3/3] ARM: vfp: Use cpu pm notifiers to save vfp state
2011-02-11 12:12 ` Catalin Marinas
2011-02-11 12:24 ` Russell King - ARM Linux
@ 2011-02-11 19:50 ` Colin Cross
2011-02-13 21:25 ` Colin Cross
1 sibling, 1 reply; 12+ messages in thread
From: Colin Cross @ 2011-02-11 19:50 UTC (permalink / raw)
To: linux-arm-kernel
On Fri, Feb 11, 2011 at 4:12 AM, Catalin Marinas
<catalin.marinas@arm.com> wrote:
> Colin,
>
> On Thu, 2011-02-10 at 21:31 +0000, Colin Cross wrote:
>> +static int vfp_idle_notifier(struct notifier_block *self, unsigned long cmd,
>> +	void *v)
>> +{
>> +	u32 fpexc = fmrx(FPEXC);
>> +	unsigned int cpu = smp_processor_id();
>> +
>> +	if (cmd != CPU_PM_ENTER)
>> +		return NOTIFY_OK;
>> +
>> +	/* The VFP may be reset in idle, save the state */
>> +	if ((fpexc & FPEXC_EN) && last_VFP_context[cpu]) {
>> +		vfp_save_state(last_VFP_context[cpu], fpexc);
>> +		last_VFP_context[cpu]->hard.cpu = cpu;
>> +	}
>
> Should we only handle the case where the VFP is enabled? At context
> switch we disable the VFP and re-enable it when an application tries to
> use it but it will remain disabled even the application hasn't used the
> VFP. So switching to the idle thread would cause the VFP to be disabled
> but the state not necessarily saved.
Right
> On SMP systems, we save the VFP at every context switch to deal with the
> thread migration (though I have a plan to make this lazily on SMP as
> well). On UP however, we don't save the VFP registers at context switch,
> we just disable it and save it lazily if used later in a different task
>
> Something like below (untested):
>
> 	if (last_VFP_context[cpu]) {
> 		vfp_save_state(last_VFP_context[cpu], fpexc);
> 		/* force a reload when coming back from idle */
> 		last_VFP_context[cpu] = NULL;
> 		fmxr(FPEXC, fpexc & ~FPEXC_EN);
> 	}
>
> The last line (disabling) may not be necessary if we know that it comes
> back from idle as disabled.
It shouldn't be necessary; the context switch into the idle thread
should have disabled it, but it doesn't hurt. We should also disable
it when exiting idle.
> I wonder whether the current vfp_pm_suspend() function needs fixing for
> UP systems as well. It is find if the hardware preserves the VFP
> registers (which may not be the case).
I think there is a case where the VFP registers can be lost in suspend
on UP platforms that don't save the VFP registers in their platform
suspend. If a thread is using the VFP, and then context switches to a
thread that does not use VFP but triggers suspend by writing to
/sys/power/state, vfp_pm_suspend will be called with the VFP disabled
but the registers not saved. I think this would work:
/* save state for resumption */
if (last_VFP_context[ti->cpu]) {
printk(KERN_DEBUG "%s: saving vfp state\n", __func__);
vfp_save_state(last_VFP_context[ti->cpu], fpexc);
/* disable, just in case */
fmxr(FPEXC, fpexc & ~FPEXC_EN);
}
If the thread that wrote to /sys/power/state is using VFP,
last_VFP_context will be the same as ti->vfpstate, so we can always
save last_VFP_context.
* [RFC PATCH 3/3] ARM: vfp: Use cpu pm notifiers to save vfp state
2011-02-11 19:50 ` Colin Cross
@ 2011-02-13 21:25 ` Colin Cross
0 siblings, 0 replies; 12+ messages in thread
From: Colin Cross @ 2011-02-13 21:25 UTC (permalink / raw)
To: linux-arm-kernel
On Fri, Feb 11, 2011 at 11:50 AM, Colin Cross <ccross@android.com> wrote:
>> Something like below (untested):
>>
>> 	if (last_VFP_context[cpu]) {
>> 		vfp_save_state(last_VFP_context[cpu], fpexc);
>> 		/* force a reload when coming back from idle */
>> 		last_VFP_context[cpu] = NULL;
>> 		fmxr(FPEXC, fpexc & ~FPEXC_EN);
>> 	}
One more fix is necessary: the VFP will usually not be enabled when
this is called. The VFP needs to be enabled before vfp_save_state,
and then disabled after.
> 	/* save state for resumption */
> 	if (last_VFP_context[ti->cpu]) {
> 		printk(KERN_DEBUG "%s: saving vfp state\n", __func__);
> 		vfp_save_state(last_VFP_context[ti->cpu], fpexc);
>
> 		/* disable, just in case */
> 		fmxr(FPEXC, fpexc & ~FPEXC_EN);
> 	}
Same fix is needed here.
* [RFC PATCH 0/3] CPU PM notifiers
2011-02-10 21:31 [RFC PATCH 0/3] CPU PM notifiers Colin Cross
` (2 preceding siblings ...)
2011-02-10 21:31 ` [RFC PATCH 3/3] ARM: vfp: Use cpu pm notifiers to save vfp state Colin Cross
@ 2011-02-12 10:23 ` Santosh Shilimkar
3 siblings, 0 replies; 12+ messages in thread
From: Santosh Shilimkar @ 2011-02-12 10:23 UTC (permalink / raw)
To: linux-arm-kernel
> -----Original Message-----
> From: Colin Cross [mailto:ccross@android.com]
> Sent: Friday, February 11, 2011 3:01 AM
> To: linux-arm-kernel@lists.infradead.org; Russell King
> Cc: linux@arm.linux.org.uk; santosh.shilimkar@ti.com;
> catalin.marinas@arm.com; will.deacon@arm.com; Colin Cross
> Subject: [RFC PATCH 0/3] CPU PM notifiers
>
> This patch set tries to address Russell's concerns with platform
> pm code calling into the driver for every block in the Cortex A9s
> during idle, hotplug, and suspend. The first patch adds cpu pm
> notifiers that can be called by platform code, the second uses
> the notifier to save and restore the GIC state, and the third
> saves the VFP state.
>
> The notifiers are used for two types of events, CPU PM events and
> CPU complex PM events. CPU PM events are used to save per-cpu
> context when a single CPU is preparing to enter or has just exited
> a low power state. For example, the VFP saves the last thread
> context, and the GIC saves banked CPU registers.
>
> CPU complex events are used after all the CPUs in a power domain
> have been prepared for the low power state. The GIC uses these
> events to save global register state.
>
> What is not included:
> * Multiple power states - it is assumed that if the platform
> code calls cpu_pm_enter(), every listener needs to save
> its context.
> * L2 cache - The L2 cache will need very different behavior
> depending on the HW implementation and power mode being
> entered.
>
> Both problems could be solved by defining a set of power states
> shared by all platforms, if an agreeable set exists. For example:
> * CPU reset (TWD, GIC, VFP), L1 retention, L2 untouched
> * CPU reset + L1 lost, L2 retention
> * CPU reset, L1 + L2 lost
>
> Santosh previously mentioned that the GIC is not reset in the first
> two states on OMAP, which starts to make the list complicated. Does
> disabling the GIC cause a problem in these states?
>
Yep, it will be an issue. The L2 and GIC are not part of the CPU power
domain; they are part of another power domain.
OMAP has many aspects, like more power domains, trust zone, and multiple
power state combinations, that would complicate most of the generic
code. I suggest you go ahead with what suits the majority of SoCs.
> An alternate solution is to pass a set of flags instead of a power
> state:
> CPU_PM_LOCALTIMERS_RESET
> CPU_PM_INTERRUPTS_RESET
> CPU_PM_L1_RETENTION
> CPU_PM_L1_RESET
> CPU_PM_L2_RETENTION
> CPU_PM_L2_RESET
>
> arch/arm/common/gic.c | 204
> +++++++++++++++++++++++++++++++++++++++++
> arch/arm/include/asm/cpu_pm.h | 123 +++++++++++++++++++++++++
> arch/arm/kernel/Makefile | 1 +
> arch/arm/kernel/cpu_pm.c | 116 +++++++++++++++++++++++
> arch/arm/vfp/vfpmodule.c | 24 +++++
> 5 files changed, 468 insertions(+), 0 deletions(-)