* [PATCH 4.14 014/126] cpu/hotplug: Adjust misplaced smb() in cpuhp_thread_fun()
From: Greg Kroah-Hartman @ 2018-09-17 22:41 UTC
To: linux-kernel
Cc: Greg Kroah-Hartman, stable, Neeraj Upadhyay, Thomas Gleixner,
Peter Zijlstra (Intel), josh, peterz, jiangshanlai, dzickus,
brendan.jackman, malat, mojha, sramana, linux-arm-msm
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Neeraj Upadhyay <neeraju@codeaurora.org>
commit f8b7530aa0a1def79c93101216b5b17cf408a70a upstream.
The smp_mb() in cpuhp_thread_fun() is misplaced. It needs to be after the
load of st->should_run to prevent reordering of the later load/stores
w.r.t. the load of st->should_run.
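For illustration, a minimal sketch of the corrected ordering (simplified,
not the full function body):

	/* The load of ->should_run happens first */
	if (WARN_ON_ONCE(!st->should_run))
		return;

	/*
	 * Full barrier after the load: the later loads/stores of the
	 * hotplug state cannot be reordered before the ->should_run check.
	 */
	smp_mb();

	/* ... the rest of st's state is accessed only after this point ... */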
Fixes: 4dddfb5faa61 ("smp/hotplug: Rewrite AP state machine core")
Signed-off-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: josh@joshtriplett.org
Cc: peterz@infradead.org
Cc: jiangshanlai@gmail.com
Cc: dzickus@redhat.com
Cc: brendan.jackman@arm.com
Cc: malat@debian.org
Cc: mojha@codeaurora.org
Cc: sramana@codeaurora.org
Cc: linux-arm-msm@vger.kernel.org
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/1536126727-11629-1-git-send-email-neeraju@codeaurora.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
kernel/cpu.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -612,15 +612,15 @@ static void cpuhp_thread_fun(unsigned in
 	bool bringup = st->bringup;
 	enum cpuhp_state state;
 
+	if (WARN_ON_ONCE(!st->should_run))
+		return;
+
 	/*
 	 * ACQUIRE for the cpuhp_should_run() load of ->should_run. Ensures
 	 * that if we see ->should_run we also see the rest of the state.
 	 */
 	smp_mb();
 
-	if (WARN_ON_ONCE(!st->should_run))
-		return;
-
 	cpuhp_lock_acquire(bringup);
 
 	if (st->single) {
* [PATCH 4.14 015/126] cpu/hotplug: Prevent state corruption on error rollback
From: Greg Kroah-Hartman @ 2018-09-17 22:41 UTC
To: linux-kernel
Cc: Greg Kroah-Hartman, stable, Neeraj Upadhyay, Thomas Gleixner,
Geert Uytterhoeven, Sudeep Holla, josh, peterz, jiangshanlai,
dzickus, brendan.jackman, malat, sramana, linux-arm-msm
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Thomas Gleixner <tglx@linutronix.de>
commit 69fa6eb7d6a64801ea261025cce9723d9442d773 upstream.
When a teardown callback fails, the CPU hotplug code brings the CPU back to
the previous state. The previous state becomes the new target state. The
rollback happens in undo_cpu_down() which increments the state
unconditionally even if the state is already the same as the target.
As a consequence the next CPU hotplug operation will start at the wrong
state. This is easy to observe when __cpu_disable() fails.
Prevent the unconditional undo by checking the state vs. the target before
incrementing the state, and fix up the consequently wrong conditional in the
unplug code which handles the failure of the final CPU take down on the
control CPU side.
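As a rough sketch of the failure mode (simplified; the real loop lives in
cpuhp_down_callbacks() and the callback invocation is abbreviated here as
invoke_teardown()):

	enum cpuhp_state prev_state = st->state;	/* also the rollback target */

	for (; st->state > target; st->state--) {
		ret = invoke_teardown(cpu, st->state);	/* abbreviated callback call */
		if (ret) {
			st->target = prev_state;
			/*
			 * If the very first callback failed, st->state still
			 * equals prev_state. The old unconditional
			 * undo_cpu_down() incremented st->state anyway,
			 * leaving it beyond the target, hence the new guard:
			 */
			if (st->state < prev_state)
				undo_cpu_down(cpu, st);
			break;
		}
	}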
Fixes: 4dddfb5faa61 ("smp/hotplug: Rewrite AP state machine core")
Reported-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Tested-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Cc: josh@joshtriplett.org
Cc: peterz@infradead.org
Cc: jiangshanlai@gmail.com
Cc: dzickus@redhat.com
Cc: brendan.jackman@arm.com
Cc: malat@debian.org
Cc: sramana@codeaurora.org
Cc: linux-arm-msm@vger.kernel.org
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1809051419580.1416@nanos.tec.linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
kernel/cpu.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -932,7 +932,8 @@ static int cpuhp_down_callbacks(unsigned
 		ret = cpuhp_invoke_callback(cpu, st->state, false, NULL, NULL);
 		if (ret) {
 			st->target = prev_state;
-			undo_cpu_down(cpu, st);
+			if (st->state < prev_state)
+				undo_cpu_down(cpu, st);
 			break;
 		}
 	}
@@ -985,7 +986,7 @@ static int __ref _cpu_down(unsigned int
 	 * to do the further cleanups.
 	 */
 	ret = cpuhp_down_callbacks(cpu, st, target);
-	if (ret && st->state > CPUHP_TEARDOWN_CPU && st->state < prev_state) {
+	if (ret && st->state == CPUHP_TEARDOWN_CPU && st->state < prev_state) {
 		cpuhp_reset_state(st, prev_state);
 		__cpuhp_kick_ap(st);
 	}
* [PATCH 4.14 035/126] timers: Clear timer_base::must_forward_clk with timer_base::lock held
From: Greg Kroah-Hartman @ 2018-09-17 22:41 UTC
To: linux-kernel
Cc: Greg Kroah-Hartman, stable, Gaurav Kohli, Thomas Gleixner,
john.stultz, sboyd, linux-arm-msm, Sasha Levin
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Gaurav Kohli <gkohli@codeaurora.org>
[ Upstream commit 363e934d8811d799c88faffc5bfca782fd728334 ]
timer_base::must_forward_clk indicates that the base clock might be stale
due to a long idle sleep.
The forwarding of the base clock takes place in the timer softirq or when a
timer is enqueued to a base which is idle. If the enqueue of a timer to an
idle base happens from a remote CPU, then the following race can happen:
  CPU0                                    CPU1
  run_timer_softirq                       mod_timer
                                          base = lock_timer_base(timer);
  base->must_forward_clk = false
                                          if (base->must_forward_clk)
                                              forward(base); -> skipped
                                          enqueue_timer(base, timer, idx);
                                          -> idx is calculated high due to
                                             stale base
                                          unlock_timer_base(timer);
  base = lock_timer_base(timer);
  forward(base);
The root cause is that timer_base::must_forward_clk is cleared outside of
the region protected by timer_base::lock, so the remote enqueuing CPU
observes it as cleared while the base clock is still stale. This can cause
large granularity values for timers, i.e. the accuracy of the expiry time
suffers.
Prevent this by clearing the flag with timer_base::lock held, so that the
forwarding takes place before the cleared flag is observable by a remote
CPU.
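In outline (a simplified sketch of the intent, not the full __run_timers()
body), the clear now sits inside the locked section:

	raw_spin_lock_irq(&base->lock);

	/*
	 * Cleared with base->lock held: a remote mod_timer() that takes the
	 * lock afterwards either saw the flag still set (and forwarded the
	 * clock itself) or runs after base->clk has been brought up to date
	 * by the expiry loop below.
	 */
	base->must_forward_clk = false;

	while (time_after_eq(jiffies, base->clk)) {
		/* ... collect and expire timers, advancing base->clk ... */
	}

	raw_spin_unlock_irq(&base->lock);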
Signed-off-by: Gaurav Kohli <gkohli@codeaurora.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: john.stultz@linaro.org
Cc: sboyd@kernel.org
Cc: linux-arm-msm@vger.kernel.org
Link: https://lkml.kernel.org/r/1533199863-22748-1-git-send-email-gkohli@codeaurora.org
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
kernel/time/timer.c | 29 ++++++++++++++++-------------
1 file changed, 16 insertions(+), 13 deletions(-)
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -1609,6 +1609,22 @@ static inline void __run_timers(struct t
 
 	raw_spin_lock_irq(&base->lock);
 
+	/*
+	 * timer_base::must_forward_clk must be cleared before running
+	 * timers so that any timer functions that call mod_timer() will
+	 * not try to forward the base. Idle tracking / clock forwarding
+	 * logic is only used with BASE_STD timers.
+	 *
+	 * The must_forward_clk flag is cleared unconditionally also for
+	 * the deferrable base. The deferrable base is not affected by idle
+	 * tracking and never forwarded, so clearing the flag is a NOOP.
+	 *
+	 * The fact that the deferrable base is never forwarded can cause
+	 * large variations in granularity for deferrable timers, but they
+	 * can be deferred for long periods due to idle anyway.
+	 */
+	base->must_forward_clk = false;
+
 	while (time_after_eq(jiffies, base->clk)) {
 
 		levels = collect_expired_timers(base, heads);
@@ -1628,19 +1644,6 @@ static __latent_entropy void run_timer_s
 {
 	struct timer_base *base = this_cpu_ptr(&timer_bases[BASE_STD]);
 
-	/*
-	 * must_forward_clk must be cleared before running timers so that any
-	 * timer functions that call mod_timer will not try to forward the
-	 * base. idle trcking / clock forwarding logic is only used with
-	 * BASE_STD timers.
-	 *
-	 * The deferrable base does not do idle tracking at all, so we do
-	 * not forward it. This can result in very large variations in
-	 * granularity for deferrable timers, but they can be deferred for
-	 * long periods due to idle.
-	 */
-	base->must_forward_clk = false;
-
 	__run_timers(base);
 	if (IS_ENABLED(CONFIG_NO_HZ_COMMON))
 		__run_timers(this_cpu_ptr(&timer_bases[BASE_DEF]));