From: Preeti U Murthy <preeti@linux.vnet.ibm.com>
To: Ingo Molnar <mingo@kernel.org>, peterz@infradead.org
Cc: nicolas.pitre@linaro.org, rjw@rjwysocki.net,
linux-kernel@vger.kernel.org, tglx@linutronix.de,
linuxppc-dev@lists.ozlabs.org
Subject: Re: [PATCH V2] clockevents: Fix cpu down race for hrtimer based broadcasting
Date: Thu, 02 Apr 2015 16:55:39 +0530 [thread overview]
Message-ID: <551D2733.7040108@linux.vnet.ibm.com> (raw)
In-Reply-To: <20150402104226.GB21105@gmail.com>
On 04/02/2015 04:12 PM, Ingo Molnar wrote:
>
> * Preeti U Murthy <preeti@linux.vnet.ibm.com> wrote:
>
>> It was found when doing a hotplug stress test on POWER, that the machine
>> either hit softlockups or rcu_sched stall warnings. The issue was
>> traced to commit 7cba160ad789a ("powernv/cpuidle: Redesign idle states
>> management"), which exposed the cpu_down() race with hrtimer based
>> broadcast mode (commit 5d1638acb9f6 "tick: Introduce hrtimer based
>> broadcast"). This is explained below.
>>
>> Assume CPU1 is the CPU which holds the hrtimer broadcasting duty before
>> it is taken down.
>>
>>     CPU0                                CPU1
>>
>>     cpu_down()                          take_cpu_down()
>>                                         disable_interrupts()
>>
>>     cpu_die()
>>
>>     while (CPU1 != CPU_DEAD) {
>>         msleep(100);
>>         switch_to_idle();
>>         stop_cpu_timer();
>>         schedule_broadcast();
>>     }
>>
>>     tick_cleanup_cpu_dead()
>>         take_over_broadcast()
>>
>> So after CPU1 disabled interrupts it cannot handle the broadcast hrtimer
>> anymore, so CPU0 will be stuck forever.
>>
>> Fix this by explicitly taking over broadcast duty before cpu_die().
>> This is a temporary workaround. What we really want is a callback in the
>> clockevent device which allows us to do that from the dying CPU by
>> pushing the hrtimer onto a different cpu. That might involve an IPI and
>> is definitely more complex than this immediate fix.
>
> So why not use a suitable CPU_DOWN* notifier for this, instead of open
> coding it all into a random place in the hotplug machinery?
This is because each of the notifier stages is unsuitable for a different
reason:
1. The CPU_DOWN_PREPARE stage can still fail: the CPU in question may not
actually go down, in which case we would have pulled the hrtimer over
unnecessarily.
2. CPU_DYING notifiers run on the CPU that is going down, so the
alternative would be to IPI an online CPU to take over the broadcast duty.
3. The CPU_DEAD and CPU_POST_DEAD stages both have the drawback described
in the changelog: by then CPU0 may already be stuck waiting for CPU1 to
die, with nobody servicing the broadcast hrtimer.
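To make the intent concrete: the CPU driving the hotplug operation (CPU0
above) should pull the broadcast hrtimer over to itself right before
__cpu_die(), i.e. at a point where the offline can no longer fail and
before CPU0 starts waiting for CPU1 to die. Something roughly along these
lines is what I have in mind (untested sketch; broadcast_tick_pull() and
broadcast_needs_cpu() are made-up names, the latter standing for "the
broadcast device is the hrtimer-based one and is currently bound to the
dying CPU"):

    /* Illustrative sketch only, not the actual patch. */
    void broadcast_tick_pull(int deadcpu)
    {
        struct clock_event_device *bc;
        unsigned long flags;

        raw_spin_lock_irqsave(&tick_broadcast_lock, flags);
        bc = tick_broadcast_device.evtdev;

        if (bc && broadcast_needs_cpu(bc, deadcpu)) {
            /*
             * Reprogramming the broadcast device from this CPU moves
             * the hrtimer, and with it the wakeup duty, off the dying
             * CPU and onto the current one.
             */
            clockevents_program_event(bc, bc->next_event, 1);
        }
        raw_spin_unlock_irqrestore(&tick_broadcast_lock, flags);
    }

with the caller in _cpu_down():

        broadcast_tick_pull(cpu);

        /* This actually kills the CPU. */
        __cpu_die(cpu);

On hardware with a real broadcast clockevent device the check would simply
fail and this becomes a no-op; the problem only exists in hrtimer mode.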
I hope I got your question right.
Regards
Preeti U Murthy
>
> Also, I improved the changelog (attached below), but decided against
> applying it until these questions are cleared - please use that for
> future versions of this patch.
>
> Thanks,
>
> Ingo
>
> ===================>
> From 413fbf5193b330c5f478ef7aaeaaee08907a993e Mon Sep 17 00:00:00 2001
> From: Preeti U Murthy <preeti@linux.vnet.ibm.com>
> Date: Mon, 30 Mar 2015 14:59:19 +0530
> Subject: [PATCH] clockevents: Fix cpu_down() race for hrtimer based broadcasting
>
> It was found when doing a hotplug stress test on POWER, that the
> machine either hit softlockups or rcu_sched stall warnings. The
> issue was traced to commit:
>
> 7cba160ad789 ("powernv/cpuidle: Redesign idle states management")
>
> which exposed the cpu_down() race with hrtimer based broadcast mode:
>
> 5d1638acb9f6 ("tick: Introduce hrtimer based broadcast")
>
> The race is the following:
>
> Assume CPU1 is the CPU which holds the hrtimer broadcasting duty
> before it is taken down.
>
>     CPU0                                CPU1
>
>     cpu_down()                          take_cpu_down()
>                                         disable_interrupts()
>
>     cpu_die()
>
>     while (CPU1 != CPU_DEAD) {
>         msleep(100);
>         switch_to_idle();
>         stop_cpu_timer();
>         schedule_broadcast();
>     }
>
>     tick_cleanup_cpu_dead()
>         take_over_broadcast()
>
> So after CPU1 disabled interrupts it cannot handle the broadcast
> hrtimer anymore, so CPU0 will be stuck forever.
>
> Fix this by explicitly taking over broadcast duty before cpu_die().
>
> This is a temporary workaround. What we really want is a callback
> in the clockevent device which allows us to do that from the dying
> CPU by pushing the hrtimer onto a different cpu. That might involve
> an IPI and is definitely more complex than this immediate fix.
>
> Changelog was picked up from:
>
> https://lkml.org/lkml/2015/2/16/213
>
> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
> Tested-by: Nicolas Pitre <nico@linaro.org>
> Signed-off-by: Preeti U. Murthy <preeti@linux.vnet.ibm.com>
> Cc: linuxppc-dev@lists.ozlabs.org
> Cc: mpe@ellerman.id.au
> Cc: nicolas.pitre@linaro.org
> Cc: peterz@infradead.org
> Cc: rjw@rjwysocki.net
> Fixes: http://linuxppc.10917.n7.nabble.com/offlining-cpus-breakage-td88619.html
>