* [PATCHv2 0/5] coupled cpuidle state support
@ 2012-03-14 18:29 Colin Cross
2012-03-14 18:29 ` [PATCHv2 1/5] cpuidle: refactor out cpuidle_enter_state Colin Cross
` (5 more replies)
0 siblings, 6 replies; 17+ messages in thread
From: Colin Cross @ 2012-03-14 18:29 UTC (permalink / raw)
To: linux-arm-kernel
On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
cpus cannot be independently powered down, either due to
sequencing restrictions (on Tegra 2, cpu 0 must be the last to
power down), or due to HW bugs (on OMAP4460, a cpu powering up
will corrupt the gic state unless the other cpu runs a work
around). Each cpu has a power state that it can enter without
coordinating with the other cpu (usually Wait For Interrupt, or
WFI), and one or more "coupled" power states that affect blocks
shared between the cpus (L2 cache, interrupt controller, and
sometimes the whole SoC). Entering a coupled power state must
be tightly controlled on both cpus.
The easiest solution to implementing coupled cpu power states is
to hotplug all but one cpu whenever possible, usually using a
cpufreq governor that looks at cpu load to determine when to
enable the secondary cpus. This causes problems, as hotplug is an
expensive operation, so the number of hotplug transitions must be
minimized, leading to very slow response to loads, often on the
order of seconds.
This patch series implements an alternative solution, where each
cpu will wait in the WFI state until all cpus are ready to enter
a coupled state, at which point the coupled state function will
be called on all cpus at approximately the same time.
Once all cpus are ready to enter idle, they are woken by an smp
cross call. At this point, there is a chance that one of the
cpus will find work to do, and choose not to enter suspend. A
final pass is needed to guarantee that all cpus will call the
power state enter function at the same time. During this pass,
each cpu will increment the ready counter, and continue once the
ready counter matches the number of online coupled cpus. If any
cpu exits idle, the other cpus will decrement their counter and
retry.
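In rough pseudocode, each cpu's path through the two stages described
above looks like the following sketch (simplified; the all_cpus_*() and
deepest_common_state() helpers are shorthand for the counter checks in
patch 3, and the poke IPIs, memory barriers and hotplug handling are
left out):

	/* simplified sketch of cpuidle_enter_state_coupled, not the real code */
	atomic_inc(&coupled->waiting_count);

retry:
	/* stage 1: wait in the safe (WFI) state until all coupled cpus are idle */
	while (!need_resched() && !all_cpus_waiting(coupled))
		cpuidle_enter_state(dev, drv, dev->safe_state_index);

	if (need_resched()) {
		atomic_dec(&coupled->waiting_count);
		return;			/* this cpu aborts idle */
	}

	/* stage 2: commit; no cpu may abort after incrementing ready_count */
	atomic_inc(&coupled->ready_count);
	while (!all_cpus_ready(coupled)) {
		if (!all_cpus_waiting(coupled)) {
			/* another cpu bailed out, drop back to stage 1 */
			atomic_dec(&coupled->ready_count);
			goto retry;
		}
		cpu_relax();
	}

	/* every cpu reaches this point at approximately the same time */
	cpuidle_enter_state(dev, drv, deepest_common_state(coupled));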
To use coupled cpuidle states, a cpuidle driver must:
Set struct cpuidle_device.coupled_cpus to the mask of all
coupled cpus, usually the same as cpu_possible_mask if all cpus
are part of the same cluster. The coupled_cpus mask must be
set in the struct cpuidle_device for each cpu.
Set struct cpuidle_device.safe_state_index to the index of a state
that is not a coupled state. This is usually WFI.
Set CPUIDLE_FLAG_COUPLED in struct cpuidle_state.flags for each
state that affects multiple cpus.
Provide a struct cpuidle_state.enter function for each state
that affects multiple cpus. This function is guaranteed to be
called on all cpus at approximately the same time. The driver
should ensure that the cpus all abort together if any cpu tries
to abort once the function is called.
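As a concrete illustration, the per-cpu setup in a platform driver might
look roughly like this (the foo_* names and the two-state table are made
up for the example; only the coupled-specific fields matter here):

	static int foo_enter_coupled(struct cpuidle_device *dev,
				     struct cpuidle_driver *drv, int index)
	{
		/* runs on all coupled cpus at approximately the same time */
		if (foo_power_down_cluster())
			return -1;	/* driver must make all cpus abort together */
		return index;
	}

	/* state 0 is plain WFI (the safe state), state 1 is the coupled state */
	drv->states[1].flags |= CPUIDLE_FLAG_COUPLED;
	drv->states[1].enter = foo_enter_coupled;

	for_each_possible_cpu(cpu) {
		struct cpuidle_device *dev = &per_cpu(foo_cpuidle_device, cpu);

		dev->cpu = cpu;
		dev->safe_state_index = 0;
		cpumask_copy(&dev->coupled_cpus, cpu_possible_mask);
		cpuidle_register_device(dev);
	}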
This series has been tested by implementing a test cpuidle state
that uses the parallel barrier helper function to verify that
all cpus call the function at the same time.
This patch set has a few disadvantages over the hotplug governor,
but I think they are all fairly minor:
* Worst-case interrupt latency can be increased. If one cpu
receives an interrupt while the other is spinning in the
ready_count loop, the second cpu will be stuck with
interrupts off until the first cpu finishes processing
its interrupt and exits idle. This will increase the worst
case interrupt latency by the worst-case interrupt processing
time, but should be very rare.
* Interrupts are processed while still inside pm_idle.
Normally, interrupts are only processed at the very end of
pm_idle, just before it returns to the idle loop. Coupled
states require processing interrupts inside
cpuidle_enter_state_coupled in order to distinguish between
the smp_cross_call from another cpu that is now idle and an
interrupt that should cause idle to exit.
I don't see a way to fix this without either being able to
read the next pending irq from the interrupt chip, or
querying the irq core for which interrupts were processed.
* Since interrupts are processed inside cpuidle, the next
timer event could change. The new timer event will be
handled correctly, but the idle state decision made by
the governor will be out of date, and will not be revisited.
The governor select function could be called again every time,
but this could lead to a lot of work being done by an idle
cpu if the other cpu was mostly busy.
v2:
* removed the coupled lock, replacing it with atomic counters
* added a check for outstanding pokes before beginning the
final transition to avoid extra wakeups
* made the cpuidle_coupled struct completely private
* fixed kerneldoc comment formatting
* added a patch with a helper function for resynchronizing
cpus after aborting idle
* added a patch (not for merging) to add trace events for
verification and performance testing
* [PATCHv2 1/5] cpuidle: refactor out cpuidle_enter_state
2012-03-14 18:29 [PATCHv2 0/5] coupled cpuidle state support Colin Cross
@ 2012-03-14 18:29 ` Colin Cross
2012-03-14 18:29 ` [PATCHv2 2/5] cpuidle: fix error handling in __cpuidle_register_device Colin Cross
` (4 subsequent siblings)
5 siblings, 0 replies; 17+ messages in thread
From: Colin Cross @ 2012-03-14 18:29 UTC (permalink / raw)
To: linux-arm-kernel
Split the code to enter a state and update the stats into a helper
function, cpuidle_enter_state, and export it. This function will
be called by the coupled state code to handle entering the safe
state and the final coupled state.
Signed-off-by: Colin Cross <ccross@android.com>
---
drivers/cpuidle/cpuidle.c | 44 ++++++++++++++++++++++++++++++--------------
drivers/cpuidle/cpuidle.h | 2 ++
2 files changed, 32 insertions(+), 14 deletions(-)
v2:
* fixed kerneldoc comment format
diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index 59f4261..1453830 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -54,6 +54,35 @@ static void cpuidle_kick_cpus(void) {}
static int __cpuidle_register_device(struct cpuidle_device *dev);
/**
+ * cpuidle_enter_state - enter the state and update stats
+ * @dev: cpuidle device for this cpu
+ * @drv: cpuidle driver for this cpu
+ * @next_state: index into drv->states of the state to enter
+ */
+int cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv,
+ int next_state)
+{
+ int entered_state;
+ struct cpuidle_state *target_state;
+
+ target_state = &drv->states[next_state];
+
+ entered_state = target_state->enter(dev, drv, next_state);
+
+ if (entered_state >= 0) {
+ /* Update cpuidle counters */
+ /* This can be moved to within driver enter routine
+ * but that results in multiple copies of same code.
+ */
+ dev->states_usage[entered_state].time +=
+ (unsigned long long)dev->last_residency;
+ dev->states_usage[entered_state].usage++;
+ }
+
+ return entered_state;
+}
+
+/**
* cpuidle_idle_call - the main idle loop
*
* NOTE: no locks or semaphores should be used here
@@ -63,7 +92,6 @@ int cpuidle_idle_call(void)
{
struct cpuidle_device *dev = __this_cpu_read(cpuidle_devices);
struct cpuidle_driver *drv = cpuidle_get_driver();
- struct cpuidle_state *target_state;
int next_state, entered_state;
if (off)
@@ -92,26 +120,14 @@ int cpuidle_idle_call(void)
return 0;
}
- target_state = &drv->states[next_state];
-
trace_power_start(POWER_CSTATE, next_state, dev->cpu);
trace_cpu_idle(next_state, dev->cpu);
- entered_state = target_state->enter(dev, drv, next_state);
+ entered_state = cpuidle_enter_state(dev, drv, next_state);
trace_power_end(dev->cpu);
trace_cpu_idle(PWR_EVENT_EXIT, dev->cpu);
- if (entered_state >= 0) {
- /* Update cpuidle counters */
- /* This can be moved to within driver enter routine
- * but that results in multiple copies of same code.
- */
- dev->states_usage[entered_state].time +=
- (unsigned long long)dev->last_residency;
- dev->states_usage[entered_state].usage++;
- }
-
/* give the governor an opportunity to reflect on the outcome */
if (cpuidle_curr_governor->reflect)
cpuidle_curr_governor->reflect(dev, entered_state);
diff --git a/drivers/cpuidle/cpuidle.h b/drivers/cpuidle/cpuidle.h
index 7db1866..d8a3ccc 100644
--- a/drivers/cpuidle/cpuidle.h
+++ b/drivers/cpuidle/cpuidle.h
@@ -14,6 +14,8 @@ extern struct list_head cpuidle_detected_devices;
extern struct mutex cpuidle_lock;
extern spinlock_t cpuidle_driver_lock;
extern int cpuidle_disabled(void);
+extern int cpuidle_enter_state(struct cpuidle_device *dev,
+ struct cpuidle_driver *drv, int next_state);
/* idle loop */
extern void cpuidle_install_idle_handler(void);
--
1.7.9.2
* [PATCHv2 2/5] cpuidle: fix error handling in __cpuidle_register_device
2012-03-14 18:29 [PATCHv2 0/5] coupled cpuidle state support Colin Cross
2012-03-14 18:29 ` [PATCHv2 1/5] cpuidle: refactor out cpuidle_enter_state Colin Cross
@ 2012-03-14 18:29 ` Colin Cross
2012-03-14 18:29 ` [PATCHv2 3/5] cpuidle: add support for states that affect multiple cpus Colin Cross
` (3 subsequent siblings)
5 siblings, 0 replies; 17+ messages in thread
From: Colin Cross @ 2012-03-14 18:29 UTC (permalink / raw)
To: linux-arm-kernel
Fix the error handling in __cpuidle_register_device to include
the missing list_del. Move it to a label, which will simplify
the error handling when coupled states are added.
Signed-off-by: Colin Cross <ccross@android.com>
---
drivers/cpuidle/cpuidle.c | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
v2:
* fix after rename of sys_dev to cpu_dev
* reorder error path to reverse of probe path
diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index 1453830..aacf2f0 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -319,13 +319,18 @@ static int __cpuidle_register_device(struct cpuidle_device *dev)
per_cpu(cpuidle_devices, dev->cpu) = dev;
list_add(&dev->device_list, &cpuidle_detected_devices);
- if ((ret = cpuidle_add_sysfs(cpu_dev))) {
- module_put(cpuidle_driver->owner);
- return ret;
- }
+ ret = cpuidle_add_sysfs(cpu_dev);
+ if (ret)
+ goto err_sysfs;
dev->registered = 1;
return 0;
+
+err_sysfs:
+ list_del(&dev->device_list);
+ per_cpu(cpuidle_devices, dev->cpu) = NULL;
+ module_put(cpuidle_driver->owner);
+ return ret;
}
/**
--
1.7.9.2
* [PATCHv2 3/5] cpuidle: add support for states that affect multiple cpus
2012-03-14 18:29 [PATCHv2 0/5] coupled cpuidle state support Colin Cross
2012-03-14 18:29 ` [PATCHv2 1/5] cpuidle: refactor out cpuidle_enter_state Colin Cross
2012-03-14 18:29 ` [PATCHv2 2/5] cpuidle: fix error handling in __cpuidle_register_device Colin Cross
@ 2012-03-14 18:29 ` Colin Cross
2012-03-16 0:04 ` Kevin Hilman
2012-03-17 12:29 ` Santosh Shilimkar
2012-03-14 18:29 ` [PATCHv2 4/5] cpuidle: coupled: add parallel barrier function Colin Cross
` (2 subsequent siblings)
5 siblings, 2 replies; 17+ messages in thread
From: Colin Cross @ 2012-03-14 18:29 UTC (permalink / raw)
To: linux-arm-kernel
On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
cpus cannot be independently powered down, either due to
sequencing restrictions (on Tegra 2, cpu 0 must be the last to
power down), or due to HW bugs (on OMAP4460, a cpu powering up
will corrupt the gic state unless the other cpu runs a work
around). Each cpu has a power state that it can enter without
coordinating with the other cpu (usually Wait For Interrupt, or
WFI), and one or more "coupled" power states that affect blocks
shared between the cpus (L2 cache, interrupt controller, and
sometimes the whole SoC). Entering a coupled power state must
be tightly controlled on both cpus.
The easiest solution to implementing coupled cpu power states is
to hotplug all but one cpu whenever possible, usually using a
cpufreq governor that looks at cpu load to determine when to
enable the secondary cpus. This causes problems, as hotplug is an
expensive operation, so the number of hotplug transitions must be
minimized, leading to very slow response to loads, often on the
order of seconds.
This file implements an alternative solution, where each cpu will
wait in the WFI state until all cpus are ready to enter a coupled
state, at which point the coupled state function will be called
on all cpus at approximately the same time.
Once all cpus are ready to enter idle, they are woken by an smp
cross call. At this point, there is a chance that one of the
cpus will find work to do, and choose not to enter idle. A
final pass is needed to guarantee that all cpus will call the
power state enter function at the same time. During this pass,
each cpu will increment the ready counter, and continue once the
ready counter matches the number of online coupled cpus. If any
cpu exits idle, the other cpus will decrement their counter and
retry.
To use coupled cpuidle states, a cpuidle driver must:
Set struct cpuidle_device.coupled_cpus to the mask of all
coupled cpus, usually the same as cpu_possible_mask if all cpus
are part of the same cluster. The coupled_cpus mask must be
set in the struct cpuidle_device for each cpu.
Set struct cpuidle_device.safe_state_index to the index of a state
that is not a coupled state. This is usually WFI.
Set CPUIDLE_FLAG_COUPLED in struct cpuidle_state.flags for each
state that affects multiple cpus.
Provide a struct cpuidle_state.enter function for each state
that affects multiple cpus. This function is guaranteed to be
called on all cpus at approximately the same time. The driver
should ensure that the cpus all abort together if any cpu tries
to abort once the function is called.
Signed-off-by: Colin Cross <ccross@android.com>
Cc: Len Brown <len.brown@intel.com>
Cc: Kevin Hilman <khilman@ti.com>
Cc: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: Amit Kucheria <amit.kucheria@linaro.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Trinabh Gupta <g.trinabh@gmail.com>
Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
---
drivers/cpuidle/Kconfig | 3 +
drivers/cpuidle/Makefile | 1 +
drivers/cpuidle/coupled.c | 568 +++++++++++++++++++++++++++++++++++++++++++++
drivers/cpuidle/cpuidle.c | 15 +-
drivers/cpuidle/cpuidle.h | 30 +++
include/linux/cpuidle.h | 7 +
6 files changed, 623 insertions(+), 1 deletion(-)
create mode 100644 drivers/cpuidle/coupled.c
v2:
* removed the coupled lock, replacing it with atomic counters
* added a check for outstanding pokes before beginning the
final transition to avoid extra wakeups
* made the cpuidle_coupled struct completely private
* fixed kerneldoc comment formatting
diff --git a/drivers/cpuidle/Kconfig b/drivers/cpuidle/Kconfig
index 78a666d..a76b689 100644
--- a/drivers/cpuidle/Kconfig
+++ b/drivers/cpuidle/Kconfig
@@ -18,3 +18,6 @@ config CPU_IDLE_GOV_MENU
bool
depends on CPU_IDLE && NO_HZ
default y
+
+config ARCH_NEEDS_CPU_IDLE_COUPLED
+ def_bool n
diff --git a/drivers/cpuidle/Makefile b/drivers/cpuidle/Makefile
index 5634f88..38c8f69 100644
--- a/drivers/cpuidle/Makefile
+++ b/drivers/cpuidle/Makefile
@@ -3,3 +3,4 @@
#
obj-y += cpuidle.o driver.o governor.o sysfs.o governors/
+obj-$(CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED) += coupled.o
diff --git a/drivers/cpuidle/coupled.c b/drivers/cpuidle/coupled.c
new file mode 100644
index 0000000..046fccb
--- /dev/null
+++ b/drivers/cpuidle/coupled.c
@@ -0,0 +1,568 @@
+/*
+ * coupled.c - helper functions to enter the same idle state on multiple cpus
+ *
+ * Copyright (c) 2011 Google, Inc.
+ *
+ * Author: Colin Cross <ccross@android.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/cpu.h>
+#include <linux/cpuidle.h>
+#include <linux/mutex.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+#include "cpuidle.h"
+
+/*
+ * coupled cpuidle states
+ *
+ * On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
+ * cpus cannot be independently powered down, either due to
+ * sequencing restrictions (on Tegra 2, cpu 0 must be the last to
+ * power down), or due to HW bugs (on OMAP4460, a cpu powering up
+ * will corrupt the gic state unless the other cpu runs a work
+ * around). Each cpu has a power state that it can enter without
+ * coordinating with the other cpu (usually Wait For Interrupt, or
+ * WFI), and one or more "coupled" power states that affect blocks
+ * shared between the cpus (L2 cache, interrupt controller, and
+ * sometimes the whole SoC). Entering a coupled power state must
+ * be tightly controlled on both cpus.
+ *
+ * The easiest solution to implementing coupled cpu power states is
+ * to hotplug all but one cpu whenever possible, usually using a
+ * cpufreq governor that looks at cpu load to determine when to
+ * enable the secondary cpus. This causes problems, as hotplug is an
+ * expensive operation, so the number of hotplug transitions must be
+ * minimized, leading to very slow response to loads, often on the
+ * order of seconds.
+ *
+ * This file implements an alternative solution, where each cpu will
+ * wait in the WFI state until all cpus are ready to enter a coupled
+ * state, at which point the coupled state function will be called
+ * on all cpus at approximately the same time.
+ *
+ * Once all cpus are ready to enter idle, they are woken by an smp
+ * cross call. At this point, there is a chance that one of the
+ * cpus will find work to do, and choose not to enter idle. A
+ * final pass is needed to guarantee that all cpus will call the
+ * power state enter function at the same time. During this pass,
+ * each cpu will increment the ready counter, and continue once the
+ * ready counter matches the number of online coupled cpus. If any
+ * cpu exits idle, the other cpus will decrement their counter and
+ * retry.
+ *
+ * requested_state stores the deepest coupled idle state each cpu
+ * is ready for. It is assumed that the states are indexed from
+ * shallowest (highest power, lowest exit latency) to deepest
+ * (lowest power, highest exit latency). The requested_state
+ * variable is not locked. It is only written from the cpu that
+ * it stores (or by the on/offlining cpu if that cpu is offline),
+ * and only read after all the cpus are ready for the coupled idle
+ * state and are no longer updating it.
+ *
+ * Three atomic counters are used. alive_count tracks the number
+ * of cpus in the coupled set that are currently or soon will be
+ * online. waiting_count tracks the number of cpus that are in
+ * the waiting loop, in the ready loop, or in the coupled idle state.
+ * ready_count tracks the number of cpus that are in the ready loop
+ * or in the coupled idle state.
+ *
+ * To use coupled cpuidle states, a cpuidle driver must:
+ *
+ * Set struct cpuidle_device.coupled_cpus to the mask of all
+ * coupled cpus, usually the same as cpu_possible_mask if all cpus
+ * are part of the same cluster. The coupled_cpus mask must be
+ * set in the struct cpuidle_device for each cpu.
+ *
+ * Set struct cpuidle_device.safe_state_index to the index of a state
+ * that is not a coupled state. This is usually WFI.
+ *
+ * Set CPUIDLE_FLAG_COUPLED in struct cpuidle_state.flags for each
+ * state that affects multiple cpus.
+ *
+ * Provide a struct cpuidle_state.enter function for each state
+ * that affects multiple cpus. This function is guaranteed to be
+ * called on all cpus at approximately the same time. The driver
+ * should ensure that the cpus all abort together if any cpu tries
+ * to abort once the function is called. The function should return
+ * with interrupts still disabled.
+ */
+
+/**
+ * struct cpuidle_coupled - data for set of cpus that share a coupled idle state
+ * @coupled_cpus: mask of cpus that are part of the coupled set
+ * @requested_state: array of requested states for cpus in the coupled set
+ * @ready_count: count of cpus that are ready for the final idle transition
+ * @waiting_count: count of cpus that are waiting for all other cpus to be idle
+ * @alive_count: count of cpus that are online or soon will be
+ * @refcnt: reference count of cpuidle devices that are using this struct
+ */
+struct cpuidle_coupled {
+ cpumask_t coupled_cpus;
+ int requested_state[NR_CPUS];
+ atomic_t ready_count;
+ atomic_t waiting_count;
+ atomic_t alive_count;
+ int refcnt;
+};
+
+#define CPUIDLE_COUPLED_NOT_IDLE (-1)
+#define CPUIDLE_COUPLED_DEAD (-2)
+
+static DEFINE_MUTEX(cpuidle_coupled_lock);
+static DEFINE_PER_CPU(struct call_single_data, cpuidle_coupled_poke_cb);
+
+/*
+ * The cpuidle_coupled_poked_mask is used to avoid calling
+ * __smp_call_function_single with the per cpu call_single_data struct already
+ * in use. This prevents a deadlock where two cpus are waiting for each other's
+ * call_single_data struct to be available.
+ */
+static cpumask_t cpuidle_coupled_poked_mask;
+
+/**
+ * cpuidle_state_is_coupled - check if a state is part of a coupled set
+ * @dev: struct cpuidle_device for the current cpu
+ * @drv: struct cpuidle_driver for the platform
+ * @state: index of the target state in drv->states
+ *
+ * Returns true if the target state is coupled with cpus besides this one
+ */
+bool cpuidle_state_is_coupled(struct cpuidle_device *dev,
+ struct cpuidle_driver *drv, int state)
+{
+ return drv->states[state].flags & CPUIDLE_FLAG_COUPLED;
+}
+
+/**
+ * cpuidle_coupled_cpus_waiting - check if all cpus in a coupled set are waiting
+ * @coupled: the struct coupled that contains the current cpu
+ *
+ * Returns true if all cpus coupled to this target state are in the wait loop
+ */
+static inline bool cpuidle_coupled_cpus_waiting(struct cpuidle_coupled *coupled)
+{
+ int alive;
+ int waiting;
+
+ /*
+ * Read alive before reading waiting so a booting cpu is not treated as
+ * idle
+ */
+ alive = atomic_read(&coupled->alive_count);
+ smp_rmb();
+ waiting = atomic_read(&coupled->waiting_count);
+
+ return (waiting == alive);
+}
+
+/**
+ * cpuidle_coupled_get_state - determine the deepest idle state
+ * @dev: struct cpuidle_device for this cpu
+ * @coupled: the struct coupled that contains the current cpu
+ *
+ * Returns the deepest idle state that all coupled cpus can enter
+ */
+static inline int cpuidle_coupled_get_state(struct cpuidle_device *dev,
+ struct cpuidle_coupled *coupled)
+{
+ int i;
+ int state = INT_MAX;
+
+ for_each_cpu_mask(i, coupled->coupled_cpus)
+ if (coupled->requested_state[i] != CPUIDLE_COUPLED_DEAD &&
+ coupled->requested_state[i] < state)
+ state = coupled->requested_state[i];
+
+ BUG_ON(state >= dev->state_count || state < 0);
+
+ return state;
+}
+
+static void cpuidle_coupled_poked(void *info)
+{
+ int cpu = (unsigned long)info;
+ cpumask_clear_cpu(cpu, &cpuidle_coupled_poked_mask);
+}
+
+/**
+ * cpuidle_coupled_poke - wake up a cpu that may be waiting
+ * @cpu: target cpu
+ *
+ * Ensures that the target cpu exits its waiting idle state (if it is in it)
+ * and will see updates to waiting_count before it re-enters its waiting idle
+ * state.
+ *
+ * If cpuidle_coupled_poked_mask is already set for the target cpu, that cpu
+ * either has or will soon have a pending IPI that will wake it out of idle,
+ * or it is currently processing the IPI and is not in idle.
+ */
+static void cpuidle_coupled_poke(int cpu)
+{
+ struct call_single_data *csd = &per_cpu(cpuidle_coupled_poke_cb, cpu);
+
+ if (!cpumask_test_and_set_cpu(cpu, &cpuidle_coupled_poked_mask))
+ __smp_call_function_single(cpu, csd, 0);
+}
+
+/**
+ * cpuidle_coupled_poke_others - wake up all other cpus that may be waiting
+ * @dev: struct cpuidle_device for this cpu
+ * @coupled: the struct coupled that contains the current cpu
+ *
+ * Calls cpuidle_coupled_poke on all other online cpus.
+ */
+static void cpuidle_coupled_poke_others(struct cpuidle_device *dev,
+ struct cpuidle_coupled *coupled)
+{
+ int cpu;
+
+ for_each_cpu_mask(cpu, coupled->coupled_cpus)
+ if (cpu != dev->cpu && cpu_online(cpu))
+ cpuidle_coupled_poke(cpu);
+}
+
+/**
+ * cpuidle_coupled_set_waiting - mark this cpu as in the wait loop
+ * @dev: struct cpuidle_device for this cpu
+ * @coupled: the struct coupled that contains the current cpu
+ * @next_state: the index in drv->states of the requested state for this cpu
+ *
+ * Updates the requested idle state for the specified cpuidle device,
+ * poking all coupled cpus out of idle if necessary to let them see the new
+ * state.
+ *
+ * Provides memory ordering around waiting_count.
+ */
+static void cpuidle_coupled_set_waiting(struct cpuidle_device *dev,
+ struct cpuidle_coupled *coupled, int next_state)
+{
+ int alive;
+
+ BUG_ON(coupled->requested_state[dev->cpu] >= 0);
+
+ coupled->requested_state[dev->cpu] = next_state;
+
+ /*
+ * If this is the last cpu to enter the waiting state, poke
+ * all the other cpus out of their waiting state so they can
+ * enter a deeper state. This can race with one of the cpus
+ * exiting the waiting state due to an interrupt and
+ * decrementing waiting_count, see comment below.
+ */
+ alive = atomic_read(&coupled->alive_count);
+ if (atomic_inc_return(&coupled->waiting_count) == alive)
+ cpuidle_coupled_poke_others(dev, coupled);
+}
+
+/**
+ * cpuidle_coupled_set_not_waiting - mark this cpu as leaving the wait loop
+ * @dev: struct cpuidle_device for this cpu
+ * @coupled: the struct coupled that contains the current cpu
+ *
+ * Removes the requested idle state for the specified cpuidle device.
+ *
+ * Provides memory ordering around waiting_count.
+ */
+static void cpuidle_coupled_set_not_waiting(struct cpuidle_device *dev,
+ struct cpuidle_coupled *coupled)
+{
+ BUG_ON(coupled->requested_state[dev->cpu] < 0);
+
+ /*
+ * Decrementing waiting_count can race with incrementing it in
+ * cpuidle_coupled_set_waiting, but that's OK. Worst case, some
+ * cpus will increment ready_count and then spin until they
+ * notice that this cpu has cleared its requested_state.
+ */
+
+ smp_mb__before_atomic_dec();
+ atomic_dec(&coupled->waiting_count);
+ smp_mb__after_atomic_dec();
+
+ coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
+}
+
+/**
+ * cpuidle_enter_state_coupled - attempt to enter a state with coupled cpus
+ * @dev: struct cpuidle_device for the current cpu
+ * @drv: struct cpuidle_driver for the platform
+ * @next_state: index of the requested state in drv->states
+ *
+ * Coordinate with coupled cpus to enter the target state. This is a two
+ * stage process. In the first stage, the cpus are operating independently,
+ * and may call into cpuidle_enter_state_coupled at completely different times.
+ * To save as much power as possible, the first cpus to call this function will
+ * go to an intermediate state (the cpuidle_device's safe state), and wait for
+ * all the other cpus to call this function. Once all coupled cpus are idle,
+ * the second stage will start. Each coupled cpu will spin until all cpus have
+ * guaranteed that they will enter the target state.
+ */
+int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
+ struct cpuidle_driver *drv, int next_state)
+{
+ int entered_state = -1;
+ struct cpuidle_coupled *coupled = dev->coupled;
+ int alive;
+
+ if (!coupled)
+ return -EINVAL;
+
+ BUG_ON(atomic_read(&coupled->ready_count));
+ cpuidle_coupled_set_waiting(dev, coupled, next_state);
+
+retry:
+ /*
+ * Wait for all coupled cpus to be idle, using the deepest state
+ * allowed for a single cpu.
+ */
+ while (!need_resched() && !cpuidle_coupled_cpus_waiting(coupled)) {
+ entered_state = cpuidle_enter_state(dev, drv,
+ dev->safe_state_index);
+
+ local_irq_enable();
+ while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
+ cpu_relax();
+ local_irq_disable();
+ }
+
+ /* give a chance to process any remaining pokes */
+ local_irq_enable();
+ while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
+ cpu_relax();
+ local_irq_disable();
+
+ if (need_resched()) {
+ cpuidle_coupled_set_not_waiting(dev, coupled);
+ goto out;
+ }
+
+ /*
+ * All coupled cpus are probably idle. There is a small chance that
+ * one of the other cpus just became active. Increment a counter when
+ * ready, and spin until all coupled cpus have incremented the counter.
+ * Once a cpu has incremented the counter, it cannot abort idle and must
+ * spin until either the count has hit alive_count, or another cpu
+ * leaves idle.
+ */
+
+ smp_mb__before_atomic_inc();
+ atomic_inc(&coupled->ready_count);
+ smp_mb__after_atomic_inc();
+ /* alive_count can't change while ready_count > 0 */
+ alive = atomic_read(&coupled->alive_count);
+ while (atomic_read(&coupled->ready_count) != alive) {
+ /* Check if any other cpus bailed out of idle. */
+ if (!cpuidle_coupled_cpus_waiting(coupled)) {
+ atomic_dec(&coupled->ready_count);
+ smp_mb__after_atomic_dec();
+ goto retry;
+ }
+
+ cpu_relax();
+ }
+
+ /* all cpus have acked the coupled state */
+ smp_rmb();
+
+ next_state = cpuidle_coupled_get_state(dev, coupled);
+
+ entered_state = cpuidle_enter_state(dev, drv, next_state);
+
+ cpuidle_coupled_set_not_waiting(dev, coupled);
+ atomic_dec(&coupled->ready_count);
+ smp_mb__after_atomic_dec();
+
+out:
+ /*
+ * Normal cpuidle states are expected to return with irqs enabled.
+ * That leads to an inefficiency where a cpu receiving an interrupt
+ * that brings it out of idle will process that interrupt before
+ * exiting the idle enter function and decrementing ready_count. All
+ * other cpus will need to spin waiting for the cpu that is processing
+ * the interrupt. If the driver returns with interrupts disabled,
+ * all other cpus will loop back into the safe idle state instead of
+ * spinning, saving power.
+ *
+ * Calling local_irq_enable here allows coupled states to return with
+ * interrupts disabled, but won't cause problems for drivers that
+ * exit with interrupts enabled.
+ */
+ local_irq_enable();
+
+ /*
+ * Wait until all coupled cpus have exited idle. There is no risk that
+ * a cpu exits and re-enters the ready state because this cpu has
+ * already decremented its waiting_count.
+ */
+ while (atomic_read(&coupled->ready_count) != 0)
+ cpu_relax();
+
+ smp_rmb();
+
+ return entered_state;
+}
+
+/**
+ * cpuidle_coupled_register_device - register a coupled cpuidle device
+ * @dev: struct cpuidle_device for the current cpu
+ *
+ * Called from cpuidle_register_device to handle coupled idle init. Finds the
+ * cpuidle_coupled struct for this set of coupled cpus, or creates one if none
+ * exists yet.
+ */
+int cpuidle_coupled_register_device(struct cpuidle_device *dev)
+{
+ int cpu;
+ struct cpuidle_device *other_dev;
+ struct call_single_data *csd;
+ struct cpuidle_coupled *coupled;
+
+ if (cpumask_empty(&dev->coupled_cpus))
+ return 0;
+
+ for_each_cpu_mask(cpu, dev->coupled_cpus) {
+ other_dev = per_cpu(cpuidle_devices, cpu);
+ if (other_dev && other_dev->coupled) {
+ coupled = other_dev->coupled;
+ goto have_coupled;
+ }
+ }
+
+ /* No existing coupled info found, create a new one */
+ coupled = kzalloc(sizeof(struct cpuidle_coupled), GFP_KERNEL);
+ if (!coupled)
+ return -ENOMEM;
+
+ coupled->coupled_cpus = dev->coupled_cpus;
+ for_each_cpu_mask(cpu, coupled->coupled_cpus)
+ coupled->requested_state[cpu] = CPUIDLE_COUPLED_DEAD;
+
+have_coupled:
+ dev->coupled = coupled;
+ BUG_ON(!cpumask_equal(&dev->coupled_cpus, &coupled->coupled_cpus));
+
+ if (cpu_online(dev->cpu)) {
+ coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
+ atomic_inc(&coupled->alive_count);
+ }
+
+ coupled->refcnt++;
+
+ csd = &per_cpu(cpuidle_coupled_poke_cb, dev->cpu);
+ csd->func = cpuidle_coupled_poked;
+ csd->info = (void *)(unsigned long)dev->cpu;
+
+ return 0;
+}
+
+/**
+ * cpuidle_coupled_unregister_device - unregister a coupled cpuidle device
+ * @dev: struct cpuidle_device for the current cpu
+ *
+ * Called from cpuidle_unregister_device to tear down coupled idle. Removes the
+ * cpu from the coupled idle set, and frees the cpuidle_coupled struct if
+ * this was the last cpu in the set.
+ */
+void cpuidle_coupled_unregister_device(struct cpuidle_device *dev)
+{
+ struct cpuidle_coupled *coupled = dev->coupled;
+
+ if (cpumask_empty(&dev->coupled_cpus))
+ return;
+
+ if (!--coupled->refcnt)
+ kfree(coupled);
+ dev->coupled = NULL;
+}
+
+/**
+ * cpuidle_coupled_cpu_set_alive - adjust alive_count during hotplug transitions
+ * @cpu: target cpu number
+ * @alive: whether the target cpu is going up or down
+ *
+ * Run on the cpu that is bringing up the target cpu, before the target cpu
+ * has been booted, or after the target cpu is completely dead.
+ */
+static void cpuidle_coupled_cpu_set_alive(int cpu, bool alive)
+{
+ struct cpuidle_device *dev;
+ struct cpuidle_coupled *coupled;
+
+ mutex_lock(&cpuidle_lock);
+
+ dev = per_cpu(cpuidle_devices, cpu);
+ if (!dev->coupled)
+ goto out;
+
+ coupled = dev->coupled;
+
+ /*
+ * waiting_count must be at least 1 less than alive_count, because
+ * this cpu is not waiting. Spin until all cpus have noticed this cpu
+ * is not idle and exited the ready loop before changing alive_count.
+ */
+ while (atomic_read(&coupled->ready_count))
+ cpu_relax();
+
+ smp_mb__before_atomic_inc();
+ atomic_inc(&coupled->alive_count);
+ smp_mb__after_atomic_inc();
+
+ if (alive)
+ coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
+ else
+ coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_DEAD;
+
+out:
+ mutex_unlock(&cpuidle_lock);
+}
+
+/**
+ * cpuidle_coupled_cpu_notify - notifier called during hotplug transitions
+ * @nb: notifier block
+ * @action: hotplug transition
+ * @hcpu: target cpu number
+ *
+ * Called when a cpu is brought online or offline using hotplug. Updates the
+ * coupled cpu set appropriately.
+ */
+static int cpuidle_coupled_cpu_notify(struct notifier_block *nb,
+ unsigned long action, void *hcpu)
+{
+ int cpu = (unsigned long)hcpu;
+
+ switch (action & ~CPU_TASKS_FROZEN) {
+ case CPU_DEAD:
+ case CPU_UP_CANCELED:
+ cpuidle_coupled_cpu_set_alive(cpu, false);
+ break;
+ case CPU_UP_PREPARE:
+ cpuidle_coupled_cpu_set_alive(cpu, true);
+ break;
+ }
+ return NOTIFY_OK;
+}
+
+static struct notifier_block cpuidle_coupled_cpu_notifier = {
+ .notifier_call = cpuidle_coupled_cpu_notify,
+};
+
+static int __init cpuidle_coupled_init(void)
+{
+ return register_cpu_notifier(&cpuidle_coupled_cpu_notifier);
+}
+core_initcall(cpuidle_coupled_init);
diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index aacf2f0..a203437 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -123,7 +123,11 @@ int cpuidle_idle_call(void)
trace_power_start(POWER_CSTATE, next_state, dev->cpu);
trace_cpu_idle(next_state, dev->cpu);
- entered_state = cpuidle_enter_state(dev, drv, next_state);
+ if (cpuidle_state_is_coupled(dev, drv, next_state))
+ entered_state = cpuidle_enter_state_coupled(dev, drv,
+ next_state);
+ else
+ entered_state = cpuidle_enter_state(dev, drv, next_state);
trace_power_end(dev->cpu);
trace_cpu_idle(PWR_EVENT_EXIT, dev->cpu);
@@ -323,9 +327,16 @@ static int __cpuidle_register_device(struct cpuidle_device *dev)
if (ret)
goto err_sysfs;
+ ret = cpuidle_coupled_register_device(dev);
+ if (ret)
+ goto err_coupled;
+
dev->registered = 1;
return 0;
+err_coupled:
+ cpuidle_remove_sysfs(cpu_dev);
+ wait_for_completion(&dev->kobj_unregister);
err_sysfs:
list_del(&dev->device_list);
per_cpu(cpuidle_devices, dev->cpu) = NULL;
@@ -380,6 +391,8 @@ void cpuidle_unregister_device(struct cpuidle_device *dev)
wait_for_completion(&dev->kobj_unregister);
per_cpu(cpuidle_devices, dev->cpu) = NULL;
+ cpuidle_coupled_unregister_device(dev);
+
cpuidle_resume_and_unlock();
module_put(cpuidle_driver->owner);
diff --git a/drivers/cpuidle/cpuidle.h b/drivers/cpuidle/cpuidle.h
index d8a3ccc..76e7f69 100644
--- a/drivers/cpuidle/cpuidle.h
+++ b/drivers/cpuidle/cpuidle.h
@@ -32,4 +32,34 @@ extern void cpuidle_remove_state_sysfs(struct cpuidle_device *device);
extern int cpuidle_add_sysfs(struct device *dev);
extern void cpuidle_remove_sysfs(struct device *dev);
+#ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED
+bool cpuidle_state_is_coupled(struct cpuidle_device *dev,
+ struct cpuidle_driver *drv, int state);
+int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
+ struct cpuidle_driver *drv, int next_state);
+int cpuidle_coupled_register_device(struct cpuidle_device *dev);
+void cpuidle_coupled_unregister_device(struct cpuidle_device *dev);
+#else
+static inline bool cpuidle_state_is_coupled(struct cpuidle_device *dev,
+ struct cpuidle_driver *drv, int state)
+{
+ return false;
+}
+
+static inline int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
+ struct cpuidle_driver *drv, int next_state)
+{
+ return -1;
+}
+
+static inline int cpuidle_coupled_register_device(struct cpuidle_device *dev)
+{
+ return 0;
+}
+
+static inline void cpuidle_coupled_unregister_device(struct cpuidle_device *dev)
+{
+}
+#endif
+
#endif /* __DRIVER_CPUIDLE_H */
diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h
index 712abcc..71f2fba 100644
--- a/include/linux/cpuidle.h
+++ b/include/linux/cpuidle.h
@@ -53,6 +53,7 @@ struct cpuidle_state {
/* Idle State Flags */
#define CPUIDLE_FLAG_TIME_VALID (0x01) /* is residency time measurable? */
+#define CPUIDLE_FLAG_COUPLED (0x02) /* state applies to multiple cpus */
#define CPUIDLE_DRIVER_FLAGS_MASK (0xFFFF0000)
@@ -97,6 +98,12 @@ struct cpuidle_device {
struct kobject kobj;
struct completion kobj_unregister;
void *governor_data;
+
+#ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED
+ int safe_state_index;
+ cpumask_t coupled_cpus;
+ struct cpuidle_coupled *coupled;
+#endif
};
DECLARE_PER_CPU(struct cpuidle_device *, cpuidle_devices);
--
1.7.9.2
* [PATCHv2 4/5] cpuidle: coupled: add parallel barrier function
2012-03-14 18:29 [PATCHv2 0/5] coupled cpuidle state support Colin Cross
` (2 preceding siblings ...)
2012-03-14 18:29 ` [PATCHv2 3/5] cpuidle: add support for states that affect multiple cpus Colin Cross
@ 2012-03-14 18:29 ` Colin Cross
2012-03-14 18:29 ` [PATCHv2 5/5] cpuidle: coupled: add trace events Colin Cross
2012-03-15 23:37 ` [PATCHv2 0/5] coupled cpuidle state support Colin Cross
5 siblings, 0 replies; 17+ messages in thread
From: Colin Cross @ 2012-03-14 18:29 UTC (permalink / raw)
To: linux-arm-kernel
Adds cpuidle_coupled_parallel_barrier, which can be used by coupled
cpuidle state enter functions to handle resynchronization after
determining if any cpu needs to abort. The normal use case will
be:
static bool abort_flag;
static atomic_t abort_barrier;
int arch_cpuidle_enter(struct cpuidle_device *dev, ...)
{
if (arch_turn_off_irq_controller()) {
/* returns an error if an irq is pending and would be lost
if idle continued and turned off power */
abort_flag = true;
}
cpuidle_coupled_parallel_barrier(dev, &abort_barrier);
if (abort_flag) {
/* One of the cpus didn't turn off its irq controller */
arch_turn_on_irq_controller();
return -EINTR;
}
/* continue with idle */
...
}
This will cause all cpus to abort idle together if one of them needs
to abort.
Signed-off-by: Colin Cross <ccross@android.com>
---
drivers/cpuidle/coupled.c | 37 +++++++++++++++++++++++++++++++++++++
include/linux/cpuidle.h | 4 ++++
2 files changed, 41 insertions(+)
diff --git a/drivers/cpuidle/coupled.c b/drivers/cpuidle/coupled.c
index 046fccb..188a53f 100644
--- a/drivers/cpuidle/coupled.c
+++ b/drivers/cpuidle/coupled.c
@@ -134,6 +134,43 @@ static DEFINE_PER_CPU(struct call_single_data, cpuidle_coupled_poke_cb);
static cpumask_t cpuidle_coupled_poked_mask;
/**
+ * cpuidle_coupled_parallel_barrier - synchronize all online coupled cpus
+ * @dev: cpuidle_device of the calling cpu
+ * @a: atomic variable to hold the barrier
+ *
+ * No caller to this function will return from this function until all online
+ * cpus in the same coupled group have called this function. Once any caller
+ * has returned from this function, the barrier is immediately available for
+ * reuse.
+ *
+ * The atomic variable a must be initialized to 0 before any cpu calls
+ * this function, and will be reset to 0 before any cpu returns from this function.
+ *
+ * Must only be called from within a coupled idle state handler
+ * (state.enter when state.flags has CPUIDLE_FLAG_COUPLED set).
+ *
+ * Provides full smp barrier semantics before and after calling.
+ */
+void cpuidle_coupled_parallel_barrier(struct cpuidle_device *dev, atomic_t *a)
+{
+ int n = atomic_read(&dev->coupled->alive_count);
+
+ smp_mb__before_atomic_inc();
+ atomic_inc(a);
+
+ while (atomic_read(a) < n)
+ cpu_relax();
+
+ if (atomic_inc_return(a) == n * 2) {
+ atomic_set(a, 0);
+ return;
+ }
+
+ while (atomic_read(a) > n)
+ cpu_relax();
+}
+
+/**
* cpuidle_state_is_coupled - check if a state is part of a coupled set
* @dev: struct cpuidle_device for the current cpu
* @drv: struct cpuidle_driver for the platform
diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h
index 71f2fba..37ae622 100644
--- a/include/linux/cpuidle.h
+++ b/include/linux/cpuidle.h
@@ -167,6 +167,10 @@ static inline void cpuidle_disable_device(struct cpuidle_device *dev) { }
#endif
+#ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED
+void cpuidle_coupled_parallel_barrier(struct cpuidle_device *dev, atomic_t *a);
+#endif
+
/******************************
* CPUIDLE GOVERNOR INTERFACE *
******************************/
--
1.7.9.2
* [PATCHv2 5/5] cpuidle: coupled: add trace events
2012-03-14 18:29 [PATCHv2 0/5] coupled cpuidle state support Colin Cross
` (3 preceding siblings ...)
2012-03-14 18:29 ` [PATCHv2 4/5] cpuidle: coupled: add parallel barrier function Colin Cross
@ 2012-03-14 18:29 ` Colin Cross
2012-04-09 6:59 ` Santosh Shilimkar
2012-03-15 23:37 ` [PATCHv2 0/5] coupled cpuidle state support Colin Cross
5 siblings, 1 reply; 17+ messages in thread
From: Colin Cross @ 2012-03-14 18:29 UTC (permalink / raw)
To: linux-arm-kernel
Adds trace events to allow debugging of coupled cpuidle.
Can be used to verify cpuidle performance, including time spent
spinning and time spent in safe states. Not intended for merging.
Signed-off-by: Colin Cross <ccross@android.com>
---
drivers/cpuidle/coupled.c | 48 +++++++-
include/trace/events/cpuidle.h | 243 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 287 insertions(+), 4 deletions(-)
create mode 100644 include/trace/events/cpuidle.h
diff --git a/drivers/cpuidle/coupled.c b/drivers/cpuidle/coupled.c
index 188a53f..3bc8a02 100644
--- a/drivers/cpuidle/coupled.c
+++ b/drivers/cpuidle/coupled.c
@@ -15,10 +15,12 @@
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*/
+#define DEBUG
#include <linux/kernel.h>
#include <linux/cpu.h>
#include <linux/cpuidle.h>
+#include <linux/delay.h>
#include <linux/mutex.h>
#include <linux/sched.h>
#include <linux/slab.h>
@@ -26,6 +28,11 @@
#include "cpuidle.h"
+#define CREATE_TRACE_POINTS
+#include <trace/events/cpuidle.h>
+
+atomic_t cpuidle_trace_seq;
+
/*
* coupled cpuidle states
*
@@ -154,20 +161,33 @@ static cpumask_t cpuidle_coupled_poked_mask;
void cpuidle_coupled_parallel_barrier(struct cpuidle_device *dev, atomic_t *a)
{
int n = atomic_read(&dev->coupled->alive_count);
+#ifdef DEBUG
+ int loops = 0;
+#endif
smp_mb__before_atomic_inc();
atomic_inc(a);
- while (atomic_read(a) < n)
+ while (atomic_read(a) < n) {
+#ifdef DEBUG
+ BUG_ON(loops++ > loops_per_jiffy);
+#else
cpu_relax();
+#endif
+ }
if (atomic_inc_return(a) == n * 2) {
atomic_set(a, 0);
return;
}
- while (atomic_read(a) > n)
+ while (atomic_read(a) > n) {
+#ifdef DEBUG
+ BUG_ON(loops++ > loops_per_jiffy);
+#else
cpu_relax();
+#endif
+ }
}
/**
@@ -232,6 +252,7 @@ static inline int cpuidle_coupled_get_state(struct cpuidle_device *dev,
static void cpuidle_coupled_poked(void *info)
{
int cpu = (unsigned long)info;
+ trace_coupled_poked(cpu);
cpumask_clear_cpu(cpu, &cpuidle_coupled_poked_mask);
}
@@ -251,8 +272,10 @@ static void cpuidle_coupled_poke(int cpu)
{
struct call_single_data *csd = &per_cpu(cpuidle_coupled_poke_cb, cpu);
- if (!cpumask_test_and_set_cpu(cpu, &cpuidle_coupled_poked_mask))
+ if (!cpumask_test_and_set_cpu(cpu, &cpuidle_coupled_poked_mask)) {
+ trace_coupled_poke(cpu);
__smp_call_function_single(cpu, csd, 0);
+ }
}
/**
@@ -361,28 +384,37 @@ int cpuidle_enter_state_coupled(struct cpuidle_device *dev,
BUG_ON(atomic_read(&coupled->ready_count));
cpuidle_coupled_set_waiting(dev, coupled, next_state);
+ trace_coupled_enter(dev->cpu);
+
retry:
/*
* Wait for all coupled cpus to be idle, using the deepest state
* allowed for a single cpu.
*/
while (!need_resched() && !cpuidle_coupled_cpus_waiting(coupled)) {
+ trace_coupled_safe_enter(dev->cpu);
entered_state = cpuidle_enter_state(dev, drv,
dev->safe_state_index);
+ trace_coupled_safe_exit(dev->cpu);
+ trace_coupled_spin(dev->cpu);
local_irq_enable();
while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
cpu_relax();
local_irq_disable();
+ trace_coupled_unspin(dev->cpu);
}
/* give a chance to process any remaining pokes */
+ trace_coupled_spin(dev->cpu);
local_irq_enable();
while (cpumask_test_cpu(dev->cpu, &cpuidle_coupled_poked_mask))
cpu_relax();
local_irq_disable();
+ trace_coupled_unspin(dev->cpu);
if (need_resched()) {
+ trace_coupled_abort(dev->cpu);
cpuidle_coupled_set_not_waiting(dev, coupled);
goto out;
}
@@ -401,29 +433,35 @@ retry:
smp_mb__after_atomic_inc();
/* alive_count can't change while ready_count > 0 */
alive = atomic_read(&coupled->alive_count);
+ trace_coupled_spin(dev->cpu);
while (atomic_read(&coupled->ready_count) != alive) {
/* Check if any other cpus bailed out of idle. */
if (!cpuidle_coupled_cpus_waiting(coupled)) {
atomic_dec(&coupled->ready_count);
smp_mb__after_atomic_dec();
+ trace_coupled_detected_abort(dev->cpu);
goto retry;
}
cpu_relax();
}
+ trace_coupled_unspin(dev->cpu);
/* all cpus have acked the coupled state */
smp_rmb();
next_state = cpuidle_coupled_get_state(dev, coupled);
-
+ trace_coupled_idle_enter(dev->cpu);
entered_state = cpuidle_enter_state(dev, drv, next_state);
+ trace_coupled_idle_exit(dev->cpu);
cpuidle_coupled_set_not_waiting(dev, coupled);
atomic_dec(&coupled->ready_count);
smp_mb__after_atomic_dec();
out:
+ trace_coupled_exit(dev->cpu);
+
/*
* Normal cpuidle states are expected to return with irqs enabled.
* That leads to an inefficiency where a cpu receiving an interrupt
@@ -445,8 +483,10 @@ out:
* a cpu exits and re-enters the ready state because this cpu has
* already decremented its waiting_count.
*/
+ trace_coupled_spin(dev->cpu);
while (atomic_read(&coupled->ready_count) != 0)
cpu_relax();
+ trace_coupled_unspin(dev->cpu);
smp_rmb();
diff --git a/include/trace/events/cpuidle.h b/include/trace/events/cpuidle.h
new file mode 100644
index 0000000..9b2cbbb
--- /dev/null
+++ b/include/trace/events/cpuidle.h
@@ -0,0 +1,243 @@
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM cpuidle
+
+#if !defined(_TRACE_CPUIDLE_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_CPUIDLE_H
+
+#include <linux/atomic.h>
+#include <linux/tracepoint.h>
+
+extern atomic_t cpuidle_trace_seq;
+
+TRACE_EVENT(coupled_enter,
+
+ TP_PROTO(unsigned int cpu),
+
+ TP_ARGS(cpu),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, cpu)
+ __field(unsigned int, seq)
+ ),
+
+ TP_fast_assign(
+ __entry->cpu = cpu;
+ __entry->seq = atomic_inc_return(&cpuidle_trace_seq);
+ ),
+
+ TP_printk("%u %u", __entry->seq, __entry->cpu)
+);
+
+TRACE_EVENT(coupled_exit,
+
+ TP_PROTO(unsigned int cpu),
+
+ TP_ARGS(cpu),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, cpu)
+ __field(unsigned int, seq)
+ ),
+
+ TP_fast_assign(
+ __entry->cpu = cpu;
+ __entry->seq = atomic_inc_return(&cpuidle_trace_seq);
+ ),
+
+ TP_printk("%u %u", __entry->seq, __entry->cpu)
+);
+
+TRACE_EVENT(coupled_spin,
+
+ TP_PROTO(unsigned int cpu),
+
+ TP_ARGS(cpu),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, cpu)
+ __field(unsigned int, seq)
+ ),
+
+ TP_fast_assign(
+ __entry->cpu = cpu;
+ __entry->seq = atomic_inc_return(&cpuidle_trace_seq);
+ ),
+
+ TP_printk("%u %u", __entry->seq, __entry->cpu)
+);
+
+TRACE_EVENT(coupled_unspin,
+
+ TP_PROTO(unsigned int cpu),
+
+ TP_ARGS(cpu),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, cpu)
+ __field(unsigned int, seq)
+ ),
+
+ TP_fast_assign(
+ __entry->cpu = cpu;
+ __entry->seq = atomic_inc_return(&cpuidle_trace_seq);
+ ),
+
+ TP_printk("%u %u", __entry->seq, __entry->cpu)
+);
+
+TRACE_EVENT(coupled_safe_enter,
+
+ TP_PROTO(unsigned int cpu),
+
+ TP_ARGS(cpu),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, cpu)
+ __field(unsigned int, seq)
+ ),
+
+ TP_fast_assign(
+ __entry->cpu = cpu;
+ __entry->seq = atomic_inc_return(&cpuidle_trace_seq);
+ ),
+
+ TP_printk("%u %u", __entry->seq, __entry->cpu)
+);
+
+TRACE_EVENT(coupled_safe_exit,
+
+ TP_PROTO(unsigned int cpu),
+
+ TP_ARGS(cpu),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, cpu)
+ __field(unsigned int, seq)
+ ),
+
+ TP_fast_assign(
+ __entry->cpu = cpu;
+ __entry->seq = atomic_inc_return(&cpuidle_trace_seq);
+ ),
+
+ TP_printk("%u %u", __entry->seq, __entry->cpu)
+);
+
+TRACE_EVENT(coupled_idle_enter,
+
+ TP_PROTO(unsigned int cpu),
+
+ TP_ARGS(cpu),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, cpu)
+ __field(unsigned int, seq)
+ ),
+
+ TP_fast_assign(
+ __entry->cpu = cpu;
+ __entry->seq = atomic_inc_return(&cpuidle_trace_seq);
+ ),
+
+ TP_printk("%u %u", __entry->seq, __entry->cpu)
+);
+
+TRACE_EVENT(coupled_idle_exit,
+
+ TP_PROTO(unsigned int cpu),
+
+ TP_ARGS(cpu),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, cpu)
+ __field(unsigned int, seq)
+ ),
+
+ TP_fast_assign(
+ __entry->cpu = cpu;
+ __entry->seq = atomic_inc_return(&cpuidle_trace_seq);
+ ),
+
+ TP_printk("%u %u", __entry->seq, __entry->cpu)
+);
+
+TRACE_EVENT(coupled_abort,
+
+ TP_PROTO(unsigned int cpu),
+
+ TP_ARGS(cpu),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, cpu)
+ __field(unsigned int, seq)
+ ),
+
+ TP_fast_assign(
+ __entry->cpu = cpu;
+ __entry->seq = atomic_inc_return(&cpuidle_trace_seq);
+ ),
+
+ TP_printk("%u %u", __entry->seq, __entry->cpu)
+);
+
+TRACE_EVENT(coupled_detected_abort,
+
+ TP_PROTO(unsigned int cpu),
+
+ TP_ARGS(cpu),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, cpu)
+ __field(unsigned int, seq)
+ ),
+
+ TP_fast_assign(
+ __entry->cpu = cpu;
+ __entry->seq = atomic_inc_return(&cpuidle_trace_seq);
+ ),
+
+ TP_printk("%u %u", __entry->seq, __entry->cpu)
+);
+
+TRACE_EVENT(coupled_poke,
+
+ TP_PROTO(unsigned int cpu),
+
+ TP_ARGS(cpu),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, cpu)
+ __field(unsigned int, seq)
+ ),
+
+ TP_fast_assign(
+ __entry->cpu = cpu;
+ __entry->seq = atomic_inc_return(&cpuidle_trace_seq);
+ ),
+
+ TP_printk("%u %u", __entry->seq, __entry->cpu)
+);
+
+TRACE_EVENT(coupled_poked,
+
+ TP_PROTO(unsigned int cpu),
+
+ TP_ARGS(cpu),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, cpu)
+ __field(unsigned int, seq)
+ ),
+
+ TP_fast_assign(
+ __entry->cpu = cpu;
+ __entry->seq = atomic_inc_return(&cpuidle_trace_seq);
+ ),
+
+ TP_printk("%u %u", __entry->seq, __entry->cpu)
+);
+
+#endif /* if !defined(_TRACE_CPUIDLE_H) || defined(TRACE_HEADER_MULTI_READ) */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
--
1.7.9.2
* [PATCHv2 0/5] coupled cpuidle state support
2012-03-14 18:29 [PATCHv2 0/5] coupled cpuidle state support Colin Cross
` (4 preceding siblings ...)
2012-03-14 18:29 ` [PATCHv2 5/5] cpuidle: coupled: add trace events Colin Cross
@ 2012-03-15 23:37 ` Colin Cross
2012-03-30 12:53 ` Santosh Shilimkar
5 siblings, 1 reply; 17+ messages in thread
From: Colin Cross @ 2012-03-15 23:37 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Mar 14, 2012 at 11:29 AM, Colin Cross <ccross@android.com> wrote:
> On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
> cpus cannot be independently powered down, either due to
> sequencing restrictions (on Tegra 2, cpu 0 must be the last to
> power down), or due to HW bugs (on OMAP4460, a cpu powering up
> will corrupt the gic state unless the other cpu runs a work
> around). Each cpu has a power state that it can enter without
> coordinating with the other cpu (usually Wait For Interrupt, or
> WFI), and one or more "coupled" power states that affect blocks
> shared between the cpus (L2 cache, interrupt controller, and
> sometimes the whole SoC). Entering a coupled power state must
> be tightly controlled on both cpus.
>
> The easiest solution to implementing coupled cpu power states is
> to hotplug all but one cpu whenever possible, usually using a
> cpufreq governor that looks at cpu load to determine when to
> enable the secondary cpus. This causes problems, as hotplug is an
> expensive operation, so the number of hotplug transitions must be
> minimized, leading to very slow response to loads, often on the
> order of seconds.
>
> This patch series implements an alternative solution, where each
> cpu will wait in the WFI state until all cpus are ready to enter
> a coupled state, at which point the coupled state function will
> be called on all cpus at approximately the same time.
>
> Once all cpus are ready to enter idle, they are woken by an smp
> cross call. At this point, there is a chance that one of the
> cpus will find work to do, and choose not to enter suspend. A
> final pass is needed to guarantee that all cpus will call the
> power state enter function at the same time. During this pass,
> each cpu will increment the ready counter, and continue once the
> ready counter matches the number of online coupled cpus. If any
> cpu exits idle, the other cpus will decrement their counter and
> retry.
>
> To use coupled cpuidle states, a cpuidle driver must:
>
> Set struct cpuidle_device.coupled_cpus to the mask of all
> coupled cpus, usually the same as cpu_possible_mask if all cpus
> are part of the same cluster. The coupled_cpus mask must be
> set in the struct cpuidle_device for each cpu.
>
> Set struct cpuidle_device.safe_state_index to the index of a state
> that is not a coupled state. This is usually WFI.
>
> Set CPUIDLE_FLAG_COUPLED in struct cpuidle_state.flags for each
> state that affects multiple cpus.
>
> Provide a struct cpuidle_state.enter function for each state
> that affects multiple cpus. This function is guaranteed to be
> called on all cpus at approximately the same time. The driver
> should ensure that the cpus all abort together if any cpu tries
> to abort once the function is called.
>
> This series has been tested by implementing a test cpuidle state
> that uses the parallel barrier helper function to verify that
> all cpus call the function at the same time.
>
> This patch set has a few disadvantages over the hotplug governor,
> but I think they are all fairly minor:
> * Worst-case interrupt latency can be increased. If one cpu
> receives an interrupt while the other is spinning in the
> ready_count loop, the second cpu will be stuck with
> interrupts off until the first cpu finishes processing
> its interrupt and exits idle. This will increase the worst
> case interrupt latency by the worst-case interrupt processing
> time, but should be very rare.
> * Interrupts are processed while still inside pm_idle.
> Normally, interrupts are only processed at the very end of
> pm_idle, just before it returns to the idle loop. Coupled
> states require processing interrupts inside
> cpuidle_enter_state_coupled in order to distinguish between
> the smp_cross_call from another cpu that is now idle and an
> interrupt that should cause idle to exit.
> I don't see a way to fix this without either being able to
> read the next pending irq from the interrupt chip, or
> querying the irq core for which interrupts were processed.
> * Since interrupts are processed inside cpuidle, the next
> timer event could change. The new timer event will be
> handled correctly, but the idle state decision made by
> the governor will be out of date, and will not be revisited.
> The governor select function could be called again every time,
> but this could lead to a lot of work being done by an idle
> cpu if the other cpu was mostly busy.
>
> v2:
> * removed the coupled lock, replacing it with atomic counters
> * added a check for outstanding pokes before beginning the
> final transition to avoid extra wakeups
> * made the cpuidle_coupled struct completely private
> * fixed kerneldoc comment formatting
> * added a patch with a helper function for resynchronizing
> cpus after aborting idle
> * added a patch (not for merging) to add trace events for
> verification and performance testing
I forgot to mention, this patch series is on v3.3-rc7, and will
conflict with the cpuidle timekeeping patches. If those go in first
(which is likely), I will rework this series on top of it. I left it
on v3.3-rc7 now to make testing easier.
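
The ready-counter handshake described in the quoted cover letter can be
sketched roughly as follows. This is a simplified illustration, not code
from the series: coupled_cpu_has_pending_work() is a hypothetical helper,
and the real implementation also has to poke the other cpus when one of
them backs out so that they retry.

/*
 * Simplified sketch of the final pass: every cpu increments ready_count
 * and spins until all online coupled cpus (alive_count) have done the
 * same.  A cpu that finds pending work decrements ready_count and bails
 * out, which makes the group retry.
 */
static bool coupled_final_pass(atomic_t *ready_count, atomic_t *alive_count)
{
        atomic_inc(ready_count);

        while (atomic_read(ready_count) < atomic_read(alive_count)) {
                if (coupled_cpu_has_pending_work()) {
                        atomic_dec(ready_count);
                        return false;   /* fall back to the safe state */
                }
                cpu_relax();
        }

        return true;    /* all cpus ready: enter the coupled state */
}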
* [PATCHv2 3/5] cpuidle: add support for states that affect multiple cpus
2012-03-14 18:29 ` [PATCHv2 3/5] cpuidle: add support for states that affect multiple cpus Colin Cross
@ 2012-03-16 0:04 ` Kevin Hilman
2012-03-16 0:20 ` Colin Cross
2012-03-17 12:29 ` Santosh Shilimkar
1 sibling, 1 reply; 17+ messages in thread
From: Kevin Hilman @ 2012-03-16 0:04 UTC (permalink / raw)
To: linux-arm-kernel
Colin Cross <ccross@android.com> writes:
> +/**
> + * cpuidle_coupled_cpu_set_alive - adjust alive_count during hotplug transitions
> + * @cpu: target cpu number
> + * @alive: whether the target cpu is going up or down
> + *
> + * Run on the cpu that is bringing up the target cpu, before the target cpu
> + * has been booted, or after the target cpu is completely dead.
> + */
> +static void cpuidle_coupled_cpu_set_alive(int cpu, bool alive)
> +{
> +        struct cpuidle_device *dev;
> +        struct cpuidle_coupled *coupled;
> +
> +        mutex_lock(&cpuidle_lock);
> +
> +        dev = per_cpu(cpuidle_devices, cpu);
> +        if (!dev->coupled)
> +                goto out;
> +
> +        coupled = dev->coupled;
> +
> +        /*
> +         * waiting_count must be at least 1 less than alive_count, because
> +         * this cpu is not waiting. Spin until all cpus have noticed this cpu
> +         * is not idle and exited the ready loop before changing alive_count.
> +         */
> +        while (atomic_read(&coupled->ready_count))
> +                cpu_relax();
> +
> +        smp_mb__before_atomic_inc();
> +        atomic_inc(&coupled->alive_count);
This doesn't look quite right. alive_count is incremented whether the
CPU is going up or down?
Maybe I misunderstood something, but I don't see anywhere where
alive_count is decremented after a CPU is removed.
Kevin
> +        smp_mb__after_atomic_inc();
> +
> +        if (alive)
> +                coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
> +        else
> +                coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_DEAD;
> +
> +out:
> +        mutex_unlock(&cpuidle_lock);
> +}
> +
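
For context, a function with this contract (run on the cpu bringing up the
target cpu, or after the target cpu is completely dead) is typically driven
from a cpu hotplug notifier on a v3.3-era kernel. The sketch below is an
illustration only; the actual notifier in this series may use different
hotplug actions.

static int cpuidle_coupled_cpu_notify(struct notifier_block *nb,
                                      unsigned long action, void *hcpu)
{
        int cpu = (unsigned long)hcpu;

        switch (action & ~CPU_TASKS_FROZEN) {
        case CPU_UP_PREPARE:
                /* mark the incoming cpu alive before it starts idling */
                cpuidle_coupled_cpu_set_alive(cpu, true);
                break;
        case CPU_DEAD:
                /* the cpu is completely gone, drop it from the coupled set */
                cpuidle_coupled_cpu_set_alive(cpu, false);
                break;
        }

        return NOTIFY_OK;
}

static struct notifier_block cpuidle_coupled_cpu_notifier = {
        .notifier_call = cpuidle_coupled_cpu_notify,
};

/* registered during init with register_cpu_notifier(&cpuidle_coupled_cpu_notifier) */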
* [PATCHv2 3/5] cpuidle: add support for states that affect multiple cpus
2012-03-16 0:04 ` Kevin Hilman
@ 2012-03-16 0:20 ` Colin Cross
2012-03-16 1:52 ` Arjan van de Ven
0 siblings, 1 reply; 17+ messages in thread
From: Colin Cross @ 2012-03-16 0:20 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Mar 15, 2012 at 5:04 PM, Kevin Hilman <khilman@ti.com> wrote:
> Colin Cross <ccross@android.com> writes:
>
>> +/**
>> + * cpuidle_coupled_cpu_set_alive - adjust alive_count during hotplug transitions
>> + * @cpu: target cpu number
>> + * @alive: whether the target cpu is going up or down
>> + *
>> + * Run on the cpu that is bringing up the target cpu, before the target cpu
>> + * has been booted, or after the target cpu is completely dead.
>> + */
>> +static void cpuidle_coupled_cpu_set_alive(int cpu, bool alive)
>> +{
>> +        struct cpuidle_device *dev;
>> +        struct cpuidle_coupled *coupled;
>> +
>> +        mutex_lock(&cpuidle_lock);
>> +
>> +        dev = per_cpu(cpuidle_devices, cpu);
>> +        if (!dev->coupled)
>> +                goto out;
>> +
>> +        coupled = dev->coupled;
>> +
>> +        /*
>> +         * waiting_count must be at least 1 less than alive_count, because
>> +         * this cpu is not waiting. Spin until all cpus have noticed this cpu
>> +         * is not idle and exited the ready loop before changing alive_count.
>> +         */
>> +        while (atomic_read(&coupled->ready_count))
>> +                cpu_relax();
>> +
>> +        smp_mb__before_atomic_inc();
>> +        atomic_inc(&coupled->alive_count);
>
> This doesn't look quite right. alive_count is incremented whether the
> CPU is going up or down?
>
> Maybe I misunderstood something, but I don't see anywhere where
> alive_count is decremented after a CPU is removed.
Oops, dropped the atomic_dec when I merged from two separate functions
for up and down to a single function that takes a bool.
>> +        smp_mb__after_atomic_inc();
>> +
>> +        if (alive)
>> +                coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
>> +        else
>> +                coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_DEAD;
>> +
>> +out:
>> +        mutex_unlock(&cpuidle_lock);
>> +}
>> +
* [PATCHv2 3/5] cpuidle: add support for states that affect multiple cpus
2012-03-16 0:20 ` Colin Cross
@ 2012-03-16 1:52 ` Arjan van de Ven
0 siblings, 0 replies; 17+ messages in thread
From: Arjan van de Ven @ 2012-03-16 1:52 UTC (permalink / raw)
To: linux-arm-kernel
On 3/15/2012 5:20 PM, Colin Cross wrote:
> Oops, dropped the atomic_dec when I merged from two separate functions
> for up and down to a single function that takes a bool.
Generally in Linux we tend not to like "multiplexer" common functions
like this.
My suggestion: make your current function a __ prefixed one, then
provide two small inlines that call the __ one...
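
A minimal sketch of the shape being suggested here, using hypothetical
wrapper names (an illustration, not code from the posted series):

/* the worker keeps the shared logic; callers never pass a bare bool */
static void __cpuidle_coupled_cpu_set_alive(int cpu, bool alive)
{
        /* existing body: spin on ready_count, adjust alive_count, ... */
}

static inline void cpuidle_coupled_cpu_up(int cpu)
{
        __cpuidle_coupled_cpu_set_alive(cpu, true);
}

static inline void cpuidle_coupled_cpu_down(int cpu)
{
        __cpuidle_coupled_cpu_set_alive(cpu, false);
}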
* [PATCHv2 3/5] cpuidle: add support for states that affect multiple cpus
2012-03-14 18:29 ` [PATCHv2 3/5] cpuidle: add support for states that affect multiple cpus Colin Cross
2012-03-16 0:04 ` Kevin Hilman
@ 2012-03-17 12:29 ` Santosh Shilimkar
2012-03-17 19:21 ` Colin Cross
1 sibling, 1 reply; 17+ messages in thread
From: Santosh Shilimkar @ 2012-03-17 12:29 UTC (permalink / raw)
To: linux-arm-kernel
Colin,
On Wednesday 14 March 2012 11:59 PM, Colin Cross wrote:
> On some ARM SMP SoCs (OMAP4460, Tegra 2, and probably more), the
> cpus cannot be independently powered down, either due to
> sequencing restrictions (on Tegra 2, cpu 0 must be the last to
> power down), or due to HW bugs (on OMAP4460, a cpu powering up
> will corrupt the gic state unless the other cpu runs a work
> around). Each cpu has a power state that it can enter without
> coordinating with the other cpu (usually Wait For Interrupt, or
> WFI), and one or more "coupled" power states that affect blocks
> shared between the cpus (L2 cache, interrupt controller, and
> sometimes the whole SoC). Entering a coupled power state must
> be tightly controlled on both cpus.
>
> The easiest solution to implementing coupled cpu power states is
> to hotplug all but one cpu whenever possible, usually using a
> cpufreq governor that looks at cpu load to determine when to
> enable the secondary cpus. This causes problems, as hotplug is an
> expensive operation, so the number of hotplug transitions must be
> minimized, leading to very slow response to loads, often on the
> order of seconds.
>
> This file implements an alternative solution, where each cpu will
> wait in the WFI state until all cpus are ready to enter a coupled
> state, at which point the coupled state function will be called
> on all cpus at approximately the same time.
>
> Once all cpus are ready to enter idle, they are woken by an smp
> cross call. At this point, there is a chance that one of the
> cpus will find work to do, and choose not to enter idle. A
> final pass is needed to guarantee that all cpus will call the
> power state enter function at the same time. During this pass,
> each cpu will increment the ready counter, and continue once the
> ready counter matches the number of online coupled cpus. If any
> cpu exits idle, the other cpus will decrement their counter and
> retry.
>
> To use coupled cpuidle states, a cpuidle driver must:
>
> Set struct cpuidle_device.coupled_cpus to the mask of all
> coupled cpus, usually the same as cpu_possible_mask if all cpus
> are part of the same cluster. The coupled_cpus mask must be
> set in the struct cpuidle_device for each cpu.
>
> Set struct cpuidle_device.safe_state to a state that is not a
> coupled state. This is usually WFI.
>
> Set CPUIDLE_FLAG_COUPLED in struct cpuidle_state.flags for each
> state that affects multiple cpus.
>
> Provide a struct cpuidle_state.enter function for each state
> that affects multiple cpus. This function is guaranteed to be
> called on all cpus at approximately the same time. The driver
> should ensure that the cpus all abort together if any cpu tries
> to abort once the function is called.
>
> Signed-off-by: Colin Cross <ccross@android.com>
> Cc: Len Brown <len.brown@intel.com>
> Cc: Kevin Hilman <khilman@ti.com>
> Cc: Santosh Shilimkar <santosh.shilimkar@ti.com>
> Cc: Amit Kucheria <amit.kucheria@linaro.org>
> Cc: Arjan van de Ven <arjan@linux.intel.com>
> Cc: Trinabh Gupta <g.trinabh@gmail.com>
> Cc: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
> ---
[..]
> diff --git a/drivers/cpuidle/coupled.c b/drivers/cpuidle/coupled.c
> new file mode 100644
> index 0000000..046fccb
> --- /dev/null
> +++ b/drivers/cpuidle/coupled.c
> @@ -0,0 +1,568 @@
> +/*
> + * coupled.c - helper functions to enter the same idle state on multiple cpus
> + *
> + * Copyright (c) 2011 Google, Inc.
> + *
> + * Author: Colin Cross <ccross@android.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
> + * more details.
> + */
[...]
> +/**
> + * cpuidle_coupled_cpu_set_alive - adjust alive_count during hotplug transitions
> + * @cpu: target cpu number
> + * @alive: whether the target cpu is going up or down
> + *
> + * Run on the cpu that is bringing up the target cpu, before the target cpu
> + * has been booted, or after the target cpu is completely dead.
> + */
> +static void cpuidle_coupled_cpu_set_alive(int cpu, bool alive)
> +{
> +        struct cpuidle_device *dev;
> +        struct cpuidle_coupled *coupled;
> +
> +        mutex_lock(&cpuidle_lock);
> +
> +        dev = per_cpu(cpuidle_devices, cpu);
> +        if (!dev->coupled)
> +                goto out;
> +
> +        coupled = dev->coupled;
> +
> +        /*
> +         * waiting_count must be at least 1 less than alive_count, because
> +         * this cpu is not waiting. Spin until all cpus have noticed this cpu
> +         * is not idle and exited the ready loop before changing alive_count.
> +         */
> +        while (atomic_read(&coupled->ready_count))
> +                cpu_relax();
> +
> +        smp_mb__before_atomic_inc();
> +        atomic_inc(&coupled->alive_count);
> +        smp_mb__after_atomic_inc();
> +
> +        if (alive)
> +                coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
> +        else
> +                coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_DEAD;
> +
While playing around with this version, I noticed that
'alive_count' is ever-increasing, which leads to the coupled
idle state never being attempted after a single CPU offline
attempt.
A fix for this is at the end of the email. With it, coupled states
continue to work across CPU online/offline transitions.
Feel free to fold it into the original patch. The fix is also
attached in case the mailer eats tabs.
Regards
Santosh
cpuidle: coupled: Fix the alive_count based on CPU online/offline.
Currently alive_count is ever-increasing, which leads to the coupled
idle state not being attempted after a CPU offline attempt.
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
---
drivers/cpuidle/coupled.c | 14 +++++++++-----
1 files changed, 9 insertions(+), 5 deletions(-)
diff --git a/drivers/cpuidle/coupled.c b/drivers/cpuidle/coupled.c
index 3bc8a02..708bcfe 100644
--- a/drivers/cpuidle/coupled.c
+++ b/drivers/cpuidle/coupled.c
@@ -595,14 +595,18 @@ static void cpuidle_coupled_cpu_set_alive(int cpu, bool alive)
         while (atomic_read(&coupled->ready_count))
                 cpu_relax();
-        smp_mb__before_atomic_inc();
-        atomic_inc(&coupled->alive_count);
-        smp_mb__after_atomic_inc();
-        if (alive)
+        if (alive) {
+                smp_mb__before_atomic_inc();
+                atomic_inc(&coupled->alive_count);
+                smp_mb__after_atomic_inc();
                 coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_NOT_IDLE;
-        else
+        } else {
+                smp_mb__before_atomic_inc();
+                atomic_dec(&coupled->alive_count);
+                smp_mb__after_atomic_inc();
                 coupled->requested_state[dev->cpu] = CPUIDLE_COUPLED_DEAD;
+        }
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0001-cpuidle-coupled-Fix-the-alive_count-based-on-CPU-onl.patch
Type: text/x-patch
Size: 1388 bytes
Desc: not available
URL: <http://lists.infradead.org/pipermail/linux-arm-kernel/attachments/20120317/4ef98221/attachment.bin>
* [PATCHv2 3/5] cpuidle: add support for states that affect multiple cpus
2012-03-17 12:29 ` Santosh Shilimkar
@ 2012-03-17 19:21 ` Colin Cross
2012-03-18 7:18 ` Shilimkar, Santosh
0 siblings, 1 reply; 17+ messages in thread
From: Colin Cross @ 2012-03-17 19:21 UTC (permalink / raw)
To: linux-arm-kernel
On Sat, Mar 17, 2012 at 5:29 AM, Santosh Shilimkar
<santosh.shilimkar@ti.com> wrote:
<snip>
> While playing around with this version, I noticed that
> 'alive_count' is ever-increasing, which leads to the coupled
> idle state never being attempted after a single CPU offline
> attempt.
>
> A fix for this is at the end of the email. With it, coupled states
> continue to work across CPU online/offline transitions.
>
> Feel free to fold it into the original patch. The fix is also
> attached in case the mailer eats tabs.
<snip>
Thanks, Kevin reported this issue and I'll fix it in the next version.
* [PATCHv2 3/5] cpuidle: add support for states that affect multiple cpus
2012-03-17 19:21 ` Colin Cross
@ 2012-03-18 7:18 ` Shilimkar, Santosh
0 siblings, 0 replies; 17+ messages in thread
From: Shilimkar, Santosh @ 2012-03-18 7:18 UTC (permalink / raw)
To: linux-arm-kernel
On Sun, Mar 18, 2012 at 12:51 AM, Colin Cross <ccross@android.com> wrote:
> On Sat, Mar 17, 2012 at 5:29 AM, Santosh Shilimkar
> <santosh.shilimkar@ti.com> wrote:
>
> <snip>
>
>> While playing around with this version, I noticed that
>> 'alive_count' is ever-increasing, which leads to the coupled
>> idle state never being attempted after a single CPU offline
>> attempt.
>>
>> A fix for this is at the end of the email. With it, coupled states
>> continue to work across CPU online/offline transitions.
>>
>> Feel free to fold it into the original patch. The fix is also
>> attached in case the mailer eats tabs.
>
> <snip>
>
> Thanks, Kevin reported this issue and I'll fix it in the next version.
Ahh... I see Kevin's comments now. Somehow I missed them;
otherwise it would have saved me a few minutes of debugging.
Regards
Santosh
* [PATCHv2 0/5] coupled cpuidle state support
2012-03-15 23:37 ` [PATCHv2 0/5] coupled cpuidle state support Colin Cross
@ 2012-03-30 12:53 ` Santosh Shilimkar
2012-04-09 7:11 ` Santosh Shilimkar
0 siblings, 1 reply; 17+ messages in thread
From: Santosh Shilimkar @ 2012-03-30 12:53 UTC (permalink / raw)
To: linux-arm-kernel
Colin,
On Friday 16 March 2012 05:07 AM, Colin Cross wrote:
> On Wed, Mar 14, 2012 at 11:29 AM, Colin Cross <ccross@android.com> wrote:
[...]
>>
>> v2:
>> * removed the coupled lock, replacing it with atomic counters
>> * added a check for outstanding pokes before beginning the
>> final transition to avoid extra wakeups
>> * made the cpuidle_coupled struct completely private
>> * fixed kerneldoc comment formatting
>> * added a patch with a helper function for resynchronizing
>> cpus after aborting idle
>> * added a patch (not for merging) to add trace events for
>> verification and performance testing
>
> I forgot to mention, this patch series is on v3.3-rc7, and will
> conflict with the cpuidle timekeeping patches. If those go in first
> (which is likely), I will rework this series on top of it. I left it
> on v3.3-rc7 now to make testing easier.
I have rebased your series against Len Brown's
next branch [1], which has the timekeeping and other cpuidle patches.
I have also folded in the CPU hotplug fix which I posted against the
original coupled idle patch.
If you want, the updated branch can be pulled from:
git://gitorious.org/omap-sw-develoment/linux-omap-dev.git
for_3.5/coupled_cpuidle-rebase
Regards
Santosh
[1] Power Management
git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux.git
* [PATCHv2 5/5] cpuidle: coupled: add trace events
2012-03-14 18:29 ` [PATCHv2 5/5] cpuidle: coupled: add trace events Colin Cross
@ 2012-04-09 6:59 ` Santosh Shilimkar
0 siblings, 0 replies; 17+ messages in thread
From: Santosh Shilimkar @ 2012-04-09 6:59 UTC (permalink / raw)
To: linux-arm-kernel
On Wednesday 14 March 2012 11:59 PM, Colin Cross wrote:
> Adds trace events to allow debugging of coupled cpuidle.
> Can be used to verify cpuidle performance, including time spent
> spinning and time spent in safe states. Not intended for merging.
>
> Signed-off-by: Colin Cross <ccross@android.com>
> ---
I found the trace events quite useful for debugging and also for
monitoring cpuidle performance.
I think we should add these trace events to the core code unless
there is a strong reason not to do so.
Regards
Santosh
* [PATCHv2 0/5] coupled cpuidle state support
2012-03-30 12:53 ` Santosh Shilimkar
@ 2012-04-09 7:11 ` Santosh Shilimkar
2012-04-09 23:35 ` [linux-pm] " Kevin Hilman
0 siblings, 1 reply; 17+ messages in thread
From: Santosh Shilimkar @ 2012-04-09 7:11 UTC (permalink / raw)
To: linux-arm-kernel
On Friday 30 March 2012 06:23 PM, Santosh Shilimkar wrote:
> Colin,
>
> On Friday 16 March 2012 05:07 AM, Colin Cross wrote:
>> On Wed, Mar 14, 2012 at 11:29 AM, Colin Cross <ccross@android.com> wrote:
>
> [...]
>
>>>
>>> v2:
>>> * removed the coupled lock, replacing it with atomic counters
>>> * added a check for outstanding pokes before beginning the
>>> final transition to avoid extra wakeups
>>> * made the cpuidle_coupled struct completely private
>>> * fixed kerneldoc comment formatting
>>> * added a patch with a helper function for resynchronizing
>>> cpus after aborting idle
>>> * added a patch (not for merging) to add trace events for
>>> verification and performance testing
>>
>> I forgot to mention, this patch series is on v3.3-rc7, and will
>> conflict with the cpuidle timekeeping patches. If those go in first
>> (which is likely), I will rework this series on top of it. I left it
>> on v3.3-rc7 now to make testing easier.
>
> I have rebased your series against Len Brown's
> next branch [1], which has the timekeeping and other cpuidle patches.
> I have also folded in the CPU hotplug fix which I posted against the
> original coupled idle patch.
>
As you know, we have been playing around with this series on OMAP
for the last few weeks. This version of the series seems to work as
intended, and I have found it pretty stable in my testing. Apart from
the cpu hotplug fix and the trace event comment, the series looks
fine to me.
FWIW,
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
An updated version of this series, along with OMAP cpuidle
driver updates against 3.4-rc2, is available here [1] in
case somebody is interested in looking at it.
Regards
Santosh
[1] git://gitorious.org/omap-sw-develoment/linux-omap-dev.git
for_3.5/coupled_cpuidle-rebase
* [linux-pm] [PATCHv2 0/5] coupled cpuidle state support
2012-04-09 7:11 ` Santosh Shilimkar
@ 2012-04-09 23:35 ` Kevin Hilman
0 siblings, 0 replies; 17+ messages in thread
From: Kevin Hilman @ 2012-04-09 23:35 UTC (permalink / raw)
To: linux-arm-kernel
Santosh Shilimkar <santosh.shilimkar@ti.com> writes:
> On Friday 30 March 2012 06:23 PM, Santosh Shilimkar wrote:
>> Colin,
>>
>> On Friday 16 March 2012 05:07 AM, Colin Cross wrote:
>>> On Wed, Mar 14, 2012 at 11:29 AM, Colin Cross <ccross@android.com> wrote:
>>
>> [...]
>>
>>>>
>>>> v2:
>>>> * removed the coupled lock, replacing it with atomic counters
>>>> * added a check for outstanding pokes before beginning the
>>>> final transition to avoid extra wakeups
>>>> * made the cpuidle_coupled struct completely private
>>>> * fixed kerneldoc comment formatting
>>>> * added a patch with a helper function for resynchronizing
>>>> cpus after aborting idle
>>>> * added a patch (not for merging) to add trace events for
>>>> verification and performance testing
>>>
>>> I forgot to mention, this patch series is on v3.3-rc7, and will
>>> conflict with the cpuidle timekeeping patches. If those go in first
>>> (which is likely), I will rework this series on top of it. I left it
>>> on v3.3-rc7 now to make testing easier.
>>
>> I have rebased your series against Len Brown's
>> next branch [1], which has the timekeeping and other cpuidle patches.
>> I have also folded in the CPU hotplug fix which I posted against the
>> original coupled idle patch.
>>
> As you know, we have been playing around with this series on OMAP
> for the last few weeks. This version of the series seems to work as
> intended, and I have found it pretty stable in my testing. Apart from
> the cpu hotplug fix and the trace event comment, the series looks
> fine to me.
>
> FWIW,
> Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
> Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Also
Reviewed-by: Kevin Hilman <khilman@ti.com>
Tested-by: Kevin Hilman <khilman@ti.com>
as I've been working with Santosh on getting this stabilized on OMAP and
we are very keen to see this functionality merged.
Thanks,
Kevin
Thread overview: 17+ messages (ending 2012-04-09 23:35 UTC):
2012-03-14 18:29 [PATCHv2 0/5] coupled cpuidle state support Colin Cross
2012-03-14 18:29 ` [PATCHv2 1/5] cpuidle: refactor out cpuidle_enter_state Colin Cross
2012-03-14 18:29 ` [PATCHv2 2/5] cpuidle: fix error handling in __cpuidle_register_device Colin Cross
2012-03-14 18:29 ` [PATCHv2 3/5] cpuidle: add support for states that affect multiple cpus Colin Cross
2012-03-16 0:04 ` Kevin Hilman
2012-03-16 0:20 ` Colin Cross
2012-03-16 1:52 ` Arjan van de Ven
2012-03-17 12:29 ` Santosh Shilimkar
2012-03-17 19:21 ` Colin Cross
2012-03-18 7:18 ` Shilimkar, Santosh
2012-03-14 18:29 ` [PATCHv2 4/5] cpuidle: coupled: add parallel barrier function Colin Cross
2012-03-14 18:29 ` [PATCHv2 5/5] cpuidle: coupled: add trace events Colin Cross
2012-04-09 6:59 ` Santosh Shilimkar
2012-03-15 23:37 ` [PATCHv2 0/5] coupled cpuidle state support Colin Cross
2012-03-30 12:53 ` Santosh Shilimkar
2012-04-09 7:11 ` Santosh Shilimkar
2012-04-09 23:35 ` [linux-pm] " Kevin Hilman