From: Frederic Weisbecker <fweisbec@gmail.com>
To: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
Cc: peterz@infradead.org, tglx@linutronix.de, mingo@kernel.org,
tj@kernel.org, rusty@rustcorp.com.au, akpm@linux-foundation.org,
hch@infradead.org, mgorman@suse.de, riel@redhat.com, bp@suse.de,
rostedt@goodmis.org, mgalbraith@suse.de, ego@linux.vnet.ibm.com,
paulmck@linux.vnet.ibm.com, oleg@redhat.com, rjw@rjwysocki.net,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 1/2] smp: Print more useful debug info upon receiving IPI on an offline CPU
Date: Tue, 13 May 2014 17:38:48 +0200
Message-ID: <20140513153845.GD13828@localhost.localdomain>
In-Reply-To: <20140511203647.17152.45125.stgit@srivatsabhat.in.ibm.com>
On Mon, May 12, 2014 at 02:06:49AM +0530, Srivatsa S. Bhat wrote:
> Today the smp-call-function code just prints a warning if we get an IPI on
> an offline CPU. That is enough to tell us that something went wrong, but
> it is often very hard to debug exactly who sent the IPI and why, from this
> information alone.
>
> In most cases, we get the warning about the IPI to an offline CPU immediately
> after the CPU going offline comes out of the stop-machine phase and reenables
> interrupts. Since all online CPUs participate in stop-machine, the information
> about the sender of the IPI is already lost by the time we exit the
> stop-machine loop. So even if we dump the stack on every CPU at this point,
> we won't find anything useful, since all of them will show the stack-trace of
> the stopper thread. We need a better way to figure out who sent the IPI and
> why.
>
> To achieve this, when we detect an IPI targeted at an offline CPU, loop through
> the call-single-data linked list and print out the payload (i.e., the name
> of the function that was supposed to be executed by the target CPU). This
> gives us insight into who might have sent the IPI and helps us debug it
> further.
>
> Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
> ---
>
> kernel/smp.c | 18 ++++++++++++++----
> 1 file changed, 14 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/smp.c b/kernel/smp.c
> index 06d574e..f864921 100644
> --- a/kernel/smp.c
> +++ b/kernel/smp.c
> @@ -185,14 +185,24 @@ void generic_smp_call_function_single_interrupt(void)
> {
> struct llist_node *entry;
> struct call_single_data *csd, *csd_next;
> + static bool warned;
> +
> + entry = llist_del_all(&__get_cpu_var(call_single_queue));
> + entry = llist_reverse_order(entry);
>
> /*
> * Shouldn't receive this interrupt on a cpu that is not yet online.
> */
> - WARN_ON_ONCE(!cpu_online(smp_processor_id()));
> -
> - entry = llist_del_all(&__get_cpu_var(call_single_queue));
> - entry = llist_reverse_order(entry);
> + if (unlikely(!cpu_online(smp_processor_id()) && !warned)) {
> + warned = true;
> + WARN_ON(1);
A more detailed message might be better here:
WARN_ONCE(1, "IPI on offline CPU");
> + /*
> + * We don't have to use the _safe() variant here
> + * because we are not invoking the IPI handlers yet.
> + */
> + llist_for_each_entry(csd, entry, llist)
> + pr_warn("SMP IPI Payload: %pS \n", csd->func);
"Payload" is kind of vague. How about "IPI func %pS sent on offline CPU"?
> + }
>
> llist_for_each_entry_safe(csd, csd_next, entry, llist) {
> csd->func(csd->info);
>
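For reference, putting both suggestions together, the offline-CPU branch
could look something like this (just a sketch, untested, using the names
from the patch above):

	if (unlikely(!cpu_online(smp_processor_id()) && !warned)) {
		warned = true;
		WARN_ONCE(1, "IPI on offline CPU %d\n", smp_processor_id());

		/*
		 * The plain (non-_safe) iterator is fine here: we only
		 * read csd->func, and the handlers haven't been invoked
		 * yet, so nothing can free the entries under us.
		 */
		llist_for_each_entry(csd, entry, llist)
			pr_warn("IPI func %pS sent to offline CPU\n",
				csd->func);
	}

The static "warned" flag is still useful even with WARN_ONCE, since it
also guards the pr_warn() loop so repeated offline IPIs don't flood the
log.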