* [PATCH] clockevents: Fix cpu_down() race for hrtimer based broadcasting
@ 2015-04-28 9:19 Preeti U Murthy
From: Preeti U Murthy @ 2015-04-28 9:19 UTC
To: stable; +Cc: nico, peterz, shreyas, rjw, gregkh, mpe, tglx, mingo
commit 345527b1edce8df719e0884500c76832a18211c3 upstream
It was found when doing a hotplug stress test on POWER, that the
machine either hit softlockups or rcu_sched stall warnings. The
issue was traced to commit:
7cba160ad789 ("powernv/cpuidle: Redesign idle states management")
which exposed the cpu_down() race with hrtimer based broadcast mode:
5d1638acb9f6 ("tick: Introduce hrtimer based broadcast")
The race is the following:
Assume CPU1 is the CPU which holds the hrtimer broadcasting duty
before it is taken down.
CPU0                                    CPU1

cpu_down()                              take_cpu_down()
                                        disable_interrupts()

cpu_die()

 while (CPU1 != CPU_DEAD) {
     msleep(100);
     switch_to_idle();
     stop_cpu_timer();
     schedule_broadcast();
 }

tick_cleanup_cpu_dead()
     take_over_broadcast()
So after CPU1 disabled interrupts it cannot handle the broadcast
hrtimer anymore, so CPU0 will be stuck forever.
Fix this by explicitly taking over broadcast duty before cpu_die().
This is a temporary workaround. What we really want is a callback
in the clockevent device which allows us to do that from the dying
CPU by pushing the hrtimer onto a different cpu. That might involve
an IPI and is definitely more complex than this immediate fix.
Changelog was picked up from:
https://lkml.org/lkml/2015/2/16/213
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
Cc: linuxppc-dev@lists.ozlabs.org
Cc: mpe@ellerman.id.au
Cc: nicolas.pitre@linaro.org
Cc: peterz@infradead.org
Cc: rjw@rjwysocki.net
Fixes: http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html
Link: http://lkml.kernel.org/r/20150330092410.24979.59887.stgit@preeti.in.ibm.com
[ Merged it to the latest timer tree, renamed the callback, tidied up the changelog. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
Please apply this to 3.19 stable.
 kernel/cpu.c                 |  2 ++
 kernel/time/tick-broadcast.c | 19 +++++++++++--------
 2 files changed, 13 insertions(+), 8 deletions(-)
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 5d22023..53bec17 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -20,6 +20,7 @@
 #include <linux/gfp.h>
 #include <linux/suspend.h>
 #include <linux/lockdep.h>
+#include <linux/tick.h>
 #include <trace/events/power.h>
 
 #include "smpboot.h"
@@ -421,6 +422,7 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
         while (!idle_cpu(cpu))
                 cpu_relax();
 
+        hotplug_cpu__broadcast_tick_pull(cpu);
         /* This actually kills the CPU. */
         __cpu_die(cpu);
diff --git a/kernel/time/tick-broadcast.c b/kernel/time/tick-broadcast.c
index 066f0ec..25fb004 100644
--- a/kernel/time/tick-broadcast.c
+++ b/kernel/time/tick-broadcast.c
@@ -669,14 +669,19 @@ static void broadcast_shutdown_local(struct clock_event_device *bc,
         clockevents_set_mode(dev, CLOCK_EVT_MODE_SHUTDOWN);
 }
 
-static void broadcast_move_bc(int deadcpu)
+void hotplug_cpu__broadcast_tick_pull(int deadcpu)
 {
-        struct clock_event_device *bc = tick_broadcast_device.evtdev;
+        struct clock_event_device *bc;
+        unsigned long flags;
 
-        if (!bc || !broadcast_needs_cpu(bc, deadcpu))
-                return;
-        /* This moves the broadcast assignment to this cpu */
-        clockevents_program_event(bc, bc->next_event, 1);
+        raw_spin_lock_irqsave(&tick_broadcast_lock, flags);
+        bc = tick_broadcast_device.evtdev;
+
+        if (bc && broadcast_needs_cpu(bc, deadcpu)) {
+                /* This moves the broadcast assignment to this CPU: */
+                clockevents_program_event(bc, bc->next_event, 1);
+        }
+        raw_spin_unlock_irqrestore(&tick_broadcast_lock, flags);
 }
 
 /*
@@ -913,8 +918,6 @@ void tick_shutdown_broadcast_oneshot(unsigned int *cpup)
         cpumask_clear_cpu(cpu, tick_broadcast_pending_mask);
         cpumask_clear_cpu(cpu, tick_broadcast_force_mask);
 
-        broadcast_move_bc(cpu);
-
         raw_spin_unlock_irqrestore(&tick_broadcast_lock, flags);
 }
* Re: [PATCH] clockevents: Fix cpu_down() race for hrtimer based broadcasting
@ 2015-05-02 18:35 ` Greg KH
From: Greg KH @ 2015-05-02 18:35 UTC
To: Preeti U Murthy; +Cc: stable, nico, peterz, shreyas, rjw, mpe, tglx, mingo
On Tue, Apr 28, 2015 at 02:49:55PM +0530, Preeti U Murthy wrote:
> commit 345527b1edce8df719e0884500c76832a18211c3 upstream
>
> It was found when doing a hotplug stress test on POWER, that the
> machine either hit softlockups or rcu_sched stall warnings. The
> issue was traced to commit:
>
> 7cba160ad789 ("powernv/cpuidle: Redesign idle states management")
>
> which exposed the cpu_down() race with hrtimer based broadcast mode:
>
> 5d1638acb9f6 ("tick: Introduce hrtimer based broadcast")
>
> The race is the following:
>
> Assume CPU1 is the CPU which holds the hrtimer broadcasting duty
> before it is taken down.
>
> CPU0                                    CPU1
>
> cpu_down()                              take_cpu_down()
>                                         disable_interrupts()
>
> cpu_die()
>
>  while (CPU1 != CPU_DEAD) {
>      msleep(100);
>      switch_to_idle();
>      stop_cpu_timer();
>      schedule_broadcast();
>  }
>
> tick_cleanup_cpu_dead()
>      take_over_broadcast()
>
> So after CPU1 disabled interrupts it cannot handle the broadcast
> hrtimer anymore, so CPU0 will be stuck forever.
>
> Fix this by explicitly taking over broadcast duty before cpu_die().
>
> This is a temporary workaround. What we really want is a callback
> in the clockevent device which allows us to do that from the dying
> CPU by pushing the hrtimer onto a different cpu. That might involve
> an IPI and is definitely more complex than this immediate fix.
>
> Changelog was picked up from:
>
> https://lkml.org/lkml/2015/2/16/213
>
> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
> Tested-by: Nicolas Pitre <nico@linaro.org>
> Signed-off-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: mpe@ellerman.id.au
> Cc: nicolas.pitre@linaro.org
> Cc: peterz@infradead.org
> Cc: rjw@rjwysocki.net
> Fixes: http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html
> Link: http://lkml.kernel.org/r/20150330092410.24979.59887.stgit@preeti.in.ibm.com
> [ Merged it to the latest timer tree, renamed the callback, tidied up the changelog. ]
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
> ---
>
> Please apply this to 3.19 stable.
What about 4.0 stable?
And this doesn't look like it's the same backport, you didn't modify
tick.h, why not?
thanks,
greg k-h
* Re: [PATCH] clockevents: Fix cpu_down() race for hrtimer based broadcasting
@ 2015-05-04 5:21 ` Preeti U Murthy
From: Preeti U Murthy @ 2015-05-04 5:21 UTC
To: Greg KH; +Cc: stable, nico, peterz, shreyas, rjw, mpe, tglx, mingo
On 05/03/2015 12:05 AM, Greg KH wrote:
> On Tue, Apr 28, 2015 at 02:49:55PM +0530, Preeti U Murthy wrote:
>> commit 345527b1edce8df719e0884500c76832a18211c3 upstream
>>
>> It was found when doing a hotplug stress test on POWER, that the
>> machine either hit softlockups or rcu_sched stall warnings. The
>> issue was traced to commit:
>>
>> 7cba160ad789 ("powernv/cpuidle: Redesign idle states management")
>>
>> which exposed the cpu_down() race with hrtimer based broadcast mode:
>>
>> 5d1638acb9f6 ("tick: Introduce hrtimer based broadcast")
>>
>> The race is the following:
>>
>> Assume CPU1 is the CPU which holds the hrtimer broadcasting duty
>> before it is taken down.
>>
>> CPU0                                    CPU1
>>
>> cpu_down()                              take_cpu_down()
>>                                         disable_interrupts()
>>
>> cpu_die()
>>
>>  while (CPU1 != CPU_DEAD) {
>>      msleep(100);
>>      switch_to_idle();
>>      stop_cpu_timer();
>>      schedule_broadcast();
>>  }
>>
>> tick_cleanup_cpu_dead()
>>      take_over_broadcast()
>>
>> So after CPU1 disabled interrupts it cannot handle the broadcast
>> hrtimer anymore, so CPU0 will be stuck forever.
>>
>> Fix this by explicitly taking over broadcast duty before cpu_die().
>>
>> This is a temporary workaround. What we really want is a callback
>> in the clockevent device which allows us to do that from the dying
>> CPU by pushing the hrtimer onto a different cpu. That might involve
>> an IPI and is definitely more complex than this immediate fix.
>>
>> Changelog was picked up from:
>>
>> https://lkml.org/lkml/2015/2/16/213
>>
>> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
>> Tested-by: Nicolas Pitre <nico@linaro.org>
>> Signed-off-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
>> Cc: linuxppc-dev@lists.ozlabs.org
>> Cc: mpe@ellerman.id.au
>> Cc: nicolas.pitre@linaro.org
>> Cc: peterz@infradead.org
>> Cc: rjw@rjwysocki.net
>> Fixes: http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html
>> Link: http://lkml.kernel.org/r/20150330092410.24979.59887.stgit@preeti.in.ibm.com
>> [ Merged it to the latest timer tree, renamed the callback, tidied up the changelog. ]
>> Signed-off-by: Ingo Molnar <mingo@kernel.org>
>> ---
>>
>> Please apply this to 3.19 stable.
>
> What about 4.0 stable?
It needs to be applied to 4.0 as well. I had pulled the stable tree
before posting and the 4.0 branch was not there at that time.
>
> And this doesn't look like it's the same backport, you didn't modify
> tick.h, why not?
This was a mistake, apologies for that; I am not sure how the hunk got
missed. I have resent the patch with the RESEND tag, this time including
the missing tick.h hunk, and it should be applied to both 3.19 and 4.0.
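For reference, the piece that was missing is only the declaration of the
new helper in include/linux/tick.h, plus a no-op stub for configurations
where there is no broadcast duty to pull. Roughly like the below — this is
just a sketch, and the exact #ifdef guards should be taken from the
upstream commit:

#if defined(CONFIG_GENERIC_CLOCKEVENTS) && defined(CONFIG_HOTPLUG_CPU)
/* Pull the hrtimer broadcast duty off the CPU that is being taken down */
extern void hotplug_cpu__broadcast_tick_pull(int deadcpu);
#else
static inline void hotplug_cpu__broadcast_tick_pull(int deadcpu) { }
#endif

The stub is what keeps the new call in _cpu_down() building on
configurations where tick-broadcast.c is not compiled in.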
Thank you
Regards
Preeti U Murthy
>
> thanks,
>
> greg k-h
>