From: ebiederm@xmission.com (Eric W. Biederman)
To: "Américo Wang" <xiyou.wangcong@gmail.com>
Cc: holzheu@linux.vnet.ibm.com, Vivek Goyal <vgoyal@redhat.com>,
akpm@linux-foundation.org, schwidefsky@de.ibm.com,
heiko.carstens@de.ibm.com, kexec@lists.infradead.org,
linux-kernel@vger.kernel.org
Subject: Re: kdump: crash_kexec()-smp_send_stop() race in panic
Date: Mon, 24 Oct 2011 10:07:19 -0700
Message-ID: <m1aa8qfhdk.fsf@fess.ebiederm.org>
In-Reply-To: <CAM_iQpVktWsgOFCWoT47dQ1YdSTnnbTZAAKNYEEX=5wWzhHMzg@mail.gmail.com> ("Américo Wang"'s message of "Mon, 24 Oct 2011 23:23:33 +0800")
Américo Wang <xiyou.wangcong@gmail.com> writes:
> On Mon, Oct 24, 2011 at 11:14 PM, Eric W. Biederman
> <ebiederm@xmission.com> wrote:
>> Michael Holzheu <holzheu@linux.vnet.ibm.com> writes:
>>
>>> Hello Vivek,
>>>
>>> In our tests we ran into the following scenario:
>>>
>>> Two CPUs have called panic at the same time. The first CPU called
>>> crash_kexec() and the second CPU called smp_send_stop() in panic()
>>> before crash_kexec() finished on the first CPU. So the second CPU
>>> stopped the first CPU and therefore kdump failed.
>>>
>>> 1st CPU:
>>> panic()->crash_kexec()->mutex_trylock(&kexec_mutex)-> do kdump
>>>
>>> 2nd CPU:
>>> panic()->crash_kexec()->kexec_mutex already held by 1st CPU
>>> ->smp_send_stop()-> stop CPU 1 (stop kdump)
>>>
>>> How should we fix this problem? One possibility could be to do
>>> smp_send_stop() before we call crash_kexec().
>>>
>>> What do you think?
>>
>> smp_send_stop is insufficiently reliable to be used before crash_kexec.
>>
>> My first reaction would be to test oops_in_progress and wait until
>> oops_in_progress == 1 before calling smp_send_stop.
>>
>
> +1
>
> One of my colleagues mentioned the same problem to me inside
> RH. Given that the race window is small, it would not be easy
> to reproduce this scenario.
As for reproducing it, I have a hunch you could hack up something
horrible with smp_call_function and kprobes.
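Something along the lines of this completely untested sketch might do
for the smp_call_function half of it (the module and function names
below are made up purely for illustration):

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/smp.h>

/* Make whichever cpu runs this panic immediately. */
static void force_panic(void *unused)
{
        panic("racy panic on cpu %d", raw_smp_processor_id());
}

static int __init panic_race_init(void)
{
        /*
         * Kick every other online cpu into panic() without waiting
         * for them, then panic on this cpu as well, so several cpus
         * enter panic() at (nearly) the same time.
         */
        smp_call_function(force_panic, NULL, 0);
        force_panic(NULL);
        return 0;
}
module_init(panic_race_init);

MODULE_LICENSE("GPL");

A kprobe that delays one of the cpus in the right spot would
presumably widen the window further.
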
On a little more reflection we can't wait until oops_in_progress goes
to 1 before calling smp_send_stop, because if crash_kexec is not
involved we will never call smp_send_stop.

So my second thought is to introduce another atomic variable
panic_in_progress, visible only in panic. The cpu that
increments panic_in_progress can call smp_send_stop. The rest of
the cpus can just go into a busy wait. That should stop nasty
fights about who is going to come out of smp_send_stop first.
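Roughly something like this untested sketch (the helper name is made
up, and the real change to panic() would of course need a bit more
care):

/* In kernel/panic.c, which already pulls in what this needs. */
static atomic_t panic_in_progress = ATOMIC_INIT(0);

/*
 * Called at the very top of panic().  Only the first cpu to get here
 * returns and goes on to do crash_kexec()/smp_send_stop(); any other
 * cpu that panics concurrently just parks itself quietly.
 */
static void panic_wait_if_in_progress(void)
{
        if (atomic_inc_return(&panic_in_progress) == 1)
                return;

        local_irq_disable();
        while (1)
                cpu_relax();
}

panic() would then call this before doing anything else, so only one
cpu ever reaches crash_kexec() and smp_send_stop().
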
Eric