* [PATCH] x86: Avoid intermixing cpu dump_stack output on multi-processor systems
@ 2012-05-24 14:42 Russ Anderson
2012-05-24 15:34 ` Frederic Weisbecker
2012-05-29 17:53 ` Don Zickus
0 siblings, 2 replies; 10+ messages in thread
From: Russ Anderson @ 2012-05-24 14:42 UTC (permalink / raw)
To: linux-kernel, x86
Cc: Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Andrew Morton,
Greg Kroah-Hartman, Russ Anderson
When multiple cpus on a multi-processor system call dump_stack()
at the same time, the backtrace lines get intermixed, making
the output worthless. Add a lock so each cpu stack dump comes
out as a coherent set.
For example, when a multi-processor system is NMIed, all of the
cpus call dump_stack() at the same time, resulting in output for
all of the cpus getting intermixed, making it impossible to tell what
any individual cpu was doing. With this patch each cpu prints
its stack lines as a coherent set, so one can see what each cpu
was doing.
It has been tested on a 4096 cpu system.
Signed-off-by: Russ Anderson <rja@sgi.com>
---
arch/x86/kernel/dumpstack.c | 3 +++
1 file changed, 3 insertions(+)
Index: linux/arch/x86/kernel/dumpstack.c
===================================================================
--- linux.orig/arch/x86/kernel/dumpstack.c 2012-05-03 14:31:13.602345805 -0500
+++ linux/arch/x86/kernel/dumpstack.c 2012-05-03 14:51:43.805197563 -0500
@@ -186,7 +186,9 @@ void dump_stack(void)
 {
 	unsigned long bp;
 	unsigned long stack;
+	static DEFINE_SPINLOCK(lock);	/* Serialise the printks */
 
+	spin_lock(&lock);
 	bp = stack_frame(current, NULL);
 	printk("Pid: %d, comm: %.20s %s %s %.*s\n",
 		current->pid, current->comm, print_tainted(),
@@ -194,6 +196,7 @@ void dump_stack(void)
 		(int)strcspn(init_utsname()->version, " "),
 		init_utsname()->version);
 	show_trace(NULL, NULL, &stack, bp);
+	spin_unlock(&lock);
 }
 EXPORT_SYMBOL(dump_stack);
--
Russ Anderson, OS RAS/Partitioning Project Lead
SGI - Silicon Graphics Inc rja@sgi.com
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH] x86: Avoid intermixing cpu dump_stack output on multi-processor systems
2012-05-24 14:42 [PATCH] x86: Avoid intermixing cpu dump_stack output on multi-processor systems Russ Anderson
@ 2012-05-24 15:34 ` Frederic Weisbecker
2012-05-29 18:50 ` Russ Anderson
2012-05-29 17:53 ` Don Zickus
1 sibling, 1 reply; 10+ messages in thread
From: Frederic Weisbecker @ 2012-05-24 15:34 UTC (permalink / raw)
To: Russ Anderson
Cc: linux-kernel, x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
Andrew Morton, Greg Kroah-Hartman
On Thu, May 24, 2012 at 09:42:29AM -0500, Russ Anderson wrote:
> When multiple cpus on a multi-processor system call dump_stack()
> at the same time, the backtrace lines get intermixed, making
> the output worthless. Add a lock so each cpu stack dump comes
> out as a coherent set.
>
> For example, when a multi-processor system is NMIed, all of the
> cpus call dump_stack() at the same time, resulting in output for
> all of cpus getting intermixed, making it impossible to tell what
> any individual cpu was doing. With this patch each cpu prints
> its stack lines as a coherent set, so one can see what each cpu
> was doing.
>
> It has been tested on a 4096 cpu system.
>
> Signed-off-by: Russ Anderson <rja@sgi.com>
I don't think this is a good idea. What if an interrupt comes
and calls this at the same time? Sure you can mask irqs but NMIs
can call that too. In this case I prefer to have a messy report
rather than a deadlock on the debug path.
Maybe something like this:
static atomic_t dump_lock = ATOMIC_INIT(-1);

static void dump_stack(void)
{
	int was_locked;
	int old;
	int cpu;

	preempt_disable();
retry:
	cpu = smp_processor_id();
	old = atomic_cmpxchg(&dump_lock, -1, cpu);
	if (old == -1) {
		was_locked = 0;
	} else if (old == cpu) {
		was_locked = 1;
	} else {
		cpu_relax();
		goto retry;
	}

	__dump_trace();

	if (!was_locked)
		atomic_set(&dump_lock, -1);

	preempt_enable();
}
You could also use a spinlock with irq disabled and test in_nmi()
but we could have a dump_trace() in an NMI before the nmi count is
incremented. So the above is perhaps more robust.
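Frederic's pattern above can be modeled outside the kernel with C11 atomics. The following is a hypothetical userspace sketch (the kernel would use atomic_cmpxchg(), smp_processor_id() and cpu_relax(); here a caller-supplied cpu id and stdatomic calls stand in for them) showing why re-entry from an NMI on the owning cpu cannot self-deadlock:

```c
#include <assert.h>
#include <stdatomic.h>

/* -1 means unlocked; otherwise the word holds the owner's cpu id. */
static atomic_int dump_lock = -1;

/*
 * Try to take the dump lock for `cpu`.  Returns 1 if this call took the
 * lock (and the caller must release it), 0 if this cpu already held it,
 * i.e. the re-entrant case of an NMI interrupting its own cpu's dump.
 */
static int dump_lock_acquire(int cpu)
{
	for (;;) {
		int expected = -1;

		if (atomic_compare_exchange_strong(&dump_lock, &expected, cpu))
			return 1;	/* was free; now ours */
		if (expected == cpu)
			return 0;	/* already ours: nested entry */
		/* held by another cpu: keep spinning (kernel: cpu_relax()) */
	}
}

static void dump_lock_release(int cpu, int took_it)
{
	(void)cpu;
	if (took_it)
		atomic_store(&dump_lock, -1);	/* only the outermost level unlocks */
}
```

The key property is that the nested level sees its own id in the lock word, returns 0 from the acquire, and therefore skips the release, so the outer level's unlock is the only one.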
* Re: [PATCH] x86: Avoid intermixing cpu dump_stack output on multi-processor systems
2012-05-24 14:42 [PATCH] x86: Avoid intermixing cpu dump_stack output on multi-processor systems Russ Anderson
2012-05-24 15:34 ` Frederic Weisbecker
@ 2012-05-29 17:53 ` Don Zickus
2012-05-29 19:19 ` Russ Anderson
1 sibling, 1 reply; 10+ messages in thread
From: Don Zickus @ 2012-05-29 17:53 UTC (permalink / raw)
To: Russ Anderson
Cc: linux-kernel, x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
Andrew Morton, Greg Kroah-Hartman
On Thu, May 24, 2012 at 09:42:29AM -0500, Russ Anderson wrote:
> When multiple cpus on a multi-processor system call dump_stack()
> at the same time, the backtrace lines get intermixed, making
> the output worthless. Add a lock so each cpu stack dump comes
> out as a coherent set.
>
> For example, when a multi-processor system is NMIed, all of the
> cpus call dump_stack() at the same time, resulting in output for
> all of cpus getting intermixed, making it impossible to tell what
> any individual cpu was doing. With this patch each cpu prints
> its stack lines as a coherent set, so one can see what each cpu
> was doing.
For this particular test case, it sounds like you are doing what
trigger_all_cpu_backtrace() is doing? It doesn't solve the general
problem, but probably your particular usage?
Cheers,
Don
>
> It has been tested on a 4096 cpu system.
>
> Signed-off-by: Russ Anderson <rja@sgi.com>
>
> ---
> arch/x86/kernel/dumpstack.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> Index: linux/arch/x86/kernel/dumpstack.c
> ===================================================================
> --- linux.orig/arch/x86/kernel/dumpstack.c 2012-05-03 14:31:13.602345805 -0500
> +++ linux/arch/x86/kernel/dumpstack.c 2012-05-03 14:51:43.805197563 -0500
> @@ -186,7 +186,9 @@ void dump_stack(void)
>  {
>  	unsigned long bp;
>  	unsigned long stack;
> +	static DEFINE_SPINLOCK(lock);	/* Serialise the printks */
>  
> +	spin_lock(&lock);
>  	bp = stack_frame(current, NULL);
>  	printk("Pid: %d, comm: %.20s %s %s %.*s\n",
>  		current->pid, current->comm, print_tainted(),
> @@ -194,6 +196,7 @@ void dump_stack(void)
>  		(int)strcspn(init_utsname()->version, " "),
>  		init_utsname()->version);
>  	show_trace(NULL, NULL, &stack, bp);
> +	spin_unlock(&lock);
>  }
>  EXPORT_SYMBOL(dump_stack);
>
> --
> Russ Anderson, OS RAS/Partitioning Project Lead
> SGI - Silicon Graphics Inc rja@sgi.com
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
* Re: [PATCH] x86: Avoid intermixing cpu dump_stack output on multi-processor systems
2012-05-24 15:34 ` Frederic Weisbecker
@ 2012-05-29 18:50 ` Russ Anderson
0 siblings, 0 replies; 10+ messages in thread
From: Russ Anderson @ 2012-05-29 18:50 UTC (permalink / raw)
To: Frederic Weisbecker
Cc: linux-kernel, x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
Andrew Morton, Greg Kroah-Hartman, rja
On Thu, May 24, 2012 at 05:34:13PM +0200, Frederic Weisbecker wrote:
> On Thu, May 24, 2012 at 09:42:29AM -0500, Russ Anderson wrote:
> > When multiple cpus on a multi-processor system call dump_stack()
> > at the same time, the backtrace lines get intermixed, making
> > the output worthless. Add a lock so each cpu stack dump comes
> > out as a coherent set.
> >
> > For example, when a multi-processor system is NMIed, all of the
> > cpus call dump_stack() at the same time, resulting in output for
> > all of cpus getting intermixed, making it impossible to tell what
> > any individual cpu was doing. With this patch each cpu prints
> > its stack lines as a coherent set, so one can see what each cpu
> > was doing.
> >
> > It has been tested on a 4096 cpu system.
> >
> > Signed-off-by: Russ Anderson <rja@sgi.com>
>
> I don't think this is a good idea. What if an interrupt comes
> and calls this at the same time? Sure you can mask irqs but NMIs
> can call that too. In this case I prefer to have a messy report
> rather than a deadlock on the debug path.
Below is an updated patch with your recommended changes.
> May be something like that:
>
> static atomic_t dump_lock = ATOMIC_INIT(-1);
>
> static void dump_stack(void)
> {
> 	int was_locked;
> 	int old;
> 	int cpu;
>
> 	preempt_disable();
> retry:
> 	cpu = smp_processor_id();
> 	old = atomic_cmpxchg(&dump_lock, -1, cpu);
> 	if (old == -1) {
> 		was_locked = 0;
> 	} else if (old == cpu) {
> 		was_locked = 1;
> 	} else {
> 		cpu_relax();
> 		goto retry;
> 	}
>
> 	__dump_trace();
>
> 	if (!was_locked)
> 		atomic_set(&dump_lock, -1);
>
> 	preempt_enable();
> }
>
> You could also use a spinlock with irq disabled and test in_nmi()
> but we could have a dump_trace() in an NMI before the nmi count is
> incremented. So the above is perhaps more robust.
> --
---
When multiple cpus on a multi-processor system call dump_stack()
at the same time, the backtrace lines get intermixed, making
the output worthless. Add a lock so each cpu stack dump comes
out as a coherent set.
For example, when a multi-processor system is NMIed, all of the
cpus call dump_stack() at the same time, resulting in output for
all of the cpus getting intermixed, making it impossible to tell what
any individual cpu was doing. With this patch each cpu prints
its stack lines as a coherent set, so one can see what each cpu
was doing.
It has been tested on a 4096 cpu system.
Signed-off-by: Russ Anderson <rja@sgi.com>
---
arch/x86/kernel/dumpstack.c | 28 +++++++++++++++++++++++++++-
1 file changed, 27 insertions(+), 1 deletion(-)
Index: linux/arch/x86/kernel/dumpstack.c
===================================================================
--- linux.orig/arch/x86/kernel/dumpstack.c 2012-05-24 10:05:36.477576977 -0500
+++ linux/arch/x86/kernel/dumpstack.c 2012-05-27 16:58:26.527212233 -0500
@@ -182,7 +182,7 @@ void show_stack(struct task_struct *task
 /*
  * The architecture-independent dump_stack generator
  */
-void dump_stack(void)
+void __dump_stack(void)
 {
 	unsigned long bp;
 	unsigned long stack;
@@ -195,6 +195,32 @@ void dump_stack(void)
 		init_utsname()->version);
 	show_trace(NULL, NULL, &stack, bp);
 }
+
+static atomic_t dump_lock = ATOMIC_INIT(-1);
+
+void dump_stack(void)
+{
+	int was_locked, old, cpu;
+
+	preempt_disable();
+retry:
+	cpu = smp_processor_id();
+	old = atomic_cmpxchg(&dump_lock, -1, cpu);
+	if (old == -1) {
+		was_locked = 0;
+	} else if (old == cpu) {
+		was_locked = 1;
+	} else {
+		cpu_relax();
+		goto retry;
+	}
+
+	__dump_stack();
+
+	if (!was_locked)
+		atomic_set(&dump_lock, -1);
+	preempt_enable();
+}
 EXPORT_SYMBOL(dump_stack);
 
 static arch_spinlock_t die_lock = __ARCH_SPIN_LOCK_UNLOCKED;
--
Russ Anderson, OS RAS/Partitioning Project Lead
SGI - Silicon Graphics Inc rja@sgi.com
* Re: [PATCH] x86: Avoid intermixing cpu dump_stack output on multi-processor systems
2012-05-29 17:53 ` Don Zickus
@ 2012-05-29 19:19 ` Russ Anderson
2012-05-29 22:39 ` Don Zickus
0 siblings, 1 reply; 10+ messages in thread
From: Russ Anderson @ 2012-05-29 19:19 UTC (permalink / raw)
To: Don Zickus
Cc: linux-kernel, x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
Andrew Morton, Greg Kroah-Hartman, rja
On Tue, May 29, 2012 at 01:53:53PM -0400, Don Zickus wrote:
> On Thu, May 24, 2012 at 09:42:29AM -0500, Russ Anderson wrote:
> > When multiple cpus on a multi-processor system call dump_stack()
> > at the same time, the backtrace lines get intermixed, making
> > the output worthless. Add a lock so each cpu stack dump comes
> > out as a coherent set.
> >
> > For example, when a multi-processor system is NMIed, all of the
> > cpus call dump_stack() at the same time, resulting in output for
> > all of the cpus getting intermixed, making it impossible to tell what
> > any individual cpu was doing. With this patch each cpu prints
> > its stack lines as a coherent set, so one can see what each cpu
> > was doing.
>
> For this particular test case, it sounds like you are doing what
> trigger_all_cpu_backtrace() is doing? It doesn't solve the general
> problem, but probably your particular usage?
In this case, I am just using the hardware NMI, which sends the NMI
signal to each logical cpu. Since each cpu receives the NMI at nearly
the exact same time, they end up in dump_stack() at the same time.
Without some form of locking, trace lines from different cpus end
up intermixed, making it impossible to tell what any individual
cpu was doing.
> Cheers,
> Don
>
> >
> > It has been tested on a 4096 cpu system.
> >
> > Signed-off-by: Russ Anderson <rja@sgi.com>
> >
> > ---
> > arch/x86/kernel/dumpstack.c | 3 +++
> > 1 file changed, 3 insertions(+)
> >
> > Index: linux/arch/x86/kernel/dumpstack.c
> > ===================================================================
> > --- linux.orig/arch/x86/kernel/dumpstack.c 2012-05-03 14:31:13.602345805 -0500
> > +++ linux/arch/x86/kernel/dumpstack.c 2012-05-03 14:51:43.805197563 -0500
> > @@ -186,7 +186,9 @@ void dump_stack(void)
> >  {
> >  	unsigned long bp;
> >  	unsigned long stack;
> > +	static DEFINE_SPINLOCK(lock);	/* Serialise the printks */
> >  
> > +	spin_lock(&lock);
> >  	bp = stack_frame(current, NULL);
> >  	printk("Pid: %d, comm: %.20s %s %s %.*s\n",
> >  		current->pid, current->comm, print_tainted(),
> > @@ -194,6 +196,7 @@ void dump_stack(void)
> >  		(int)strcspn(init_utsname()->version, " "),
> >  		init_utsname()->version);
> >  	show_trace(NULL, NULL, &stack, bp);
> > +	spin_unlock(&lock);
> >  }
> >  EXPORT_SYMBOL(dump_stack);
> >
> > --
> > Russ Anderson, OS RAS/Partitioning Project Lead
> > SGI - Silicon Graphics Inc rja@sgi.com
--
Russ Anderson, OS RAS/Partitioning Project Lead
SGI - Silicon Graphics Inc rja@sgi.com
* Re: [PATCH] x86: Avoid intermixing cpu dump_stack output on multi-processor systems
2012-05-29 19:19 ` Russ Anderson
@ 2012-05-29 22:39 ` Don Zickus
2012-05-29 23:11 ` Russ Anderson
0 siblings, 1 reply; 10+ messages in thread
From: Don Zickus @ 2012-05-29 22:39 UTC (permalink / raw)
To: Russ Anderson
Cc: linux-kernel, x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
Andrew Morton, Greg Kroah-Hartman, rja
On Tue, May 29, 2012 at 02:19:35PM -0500, Russ Anderson wrote:
> On Tue, May 29, 2012 at 01:53:53PM -0400, Don Zickus wrote:
> > On Thu, May 24, 2012 at 09:42:29AM -0500, Russ Anderson wrote:
> > > When multiple cpus on a multi-processor system call dump_stack()
> > > at the same time, the backtrace lines get intermixed, making
> > > the output worthless. Add a lock so each cpu stack dump comes
> > > out as a coherent set.
> > >
> > > For example, when a multi-processor system is NMIed, all of the
> > > cpus call dump_stack() at the same time, resulting in output for
> > > all of cpus getting intermixed, making it impossible to tell what
> > > any individual cpu was doing. With this patch each cpu prints
> > > its stack lines as a coherent set, so one can see what each cpu
> > > was doing.
> >
> > For this particular test case, it sounds like you are doing what
> > trigger_all_cpu_backtrace() is doing? It doesn't solve the general
> > problem, but probably your particular usage?
>
> In this case, I am just using the hardware NMI, which sends the NMI
> signal to each logical cpu. Since each cpu receives the NMI at nearly
> the exact same time, they end up in dump_stack() at the same time.
> Without some form of locking, trace lines from different cpus end
> up intermixed, making it impossible to tell what any individual
> cpu was doing.
I forgot the original reasons for having the NMI go to each CPU instead of
just the boot CPU (commit 78c06176), but it seems like if you revert that
patch and have the nmi handler just call trigger_all_cpu_backtrace()
instead (which does stack trace locking for pretty output), that would
solve your problem, no? That locking is safe because it is only called in
the NMI context.
Whereas the lock you are proposing can be called in a mixture of NMI and
IRQ which could cause deadlocks I believe.
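The deadlock Don describes is the classic self-deadlock of a plain, non-reentrant lock: a cpu takes it in IRQ context, an NMI arrives on that same cpu and spins on the same lock, which the interrupted context can never release. A hypothetical userspace sketch with a C11 atomic_flag (a try-acquire is used here so the demonstration terminates rather than hanging where a real spin_lock() would):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

static atomic_flag plain_lock = ATOMIC_FLAG_INIT;

/* A plain test-and-set lock records nothing about its owner, so it
 * cannot tell "held by another cpu" apart from "held by myself". */
static bool plain_trylock(void)
{
	return !atomic_flag_test_and_set(&plain_lock);
}

static void plain_unlock(void)
{
	atomic_flag_clear(&plain_lock);
}
```

With a spinning acquire instead of the trylock, the second acquisition on the same cpu would loop forever, since the only code that could release the lock is the context the NMI interrupted.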
Cheers,
Don
* Re: [PATCH] x86: Avoid intermixing cpu dump_stack output on multi-processor systems
2012-05-29 22:39 ` Don Zickus
@ 2012-05-29 23:11 ` Russ Anderson
2012-05-29 23:54 ` Don Zickus
0 siblings, 1 reply; 10+ messages in thread
From: Russ Anderson @ 2012-05-29 23:11 UTC (permalink / raw)
To: Don Zickus
Cc: linux-kernel, x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
Andrew Morton, Greg Kroah-Hartman, rja
On Tue, May 29, 2012 at 06:39:23PM -0400, Don Zickus wrote:
> On Tue, May 29, 2012 at 02:19:35PM -0500, Russ Anderson wrote:
> > On Tue, May 29, 2012 at 01:53:53PM -0400, Don Zickus wrote:
> > > On Thu, May 24, 2012 at 09:42:29AM -0500, Russ Anderson wrote:
> > > > When multiple cpus on a multi-processor system call dump_stack()
> > > > at the same time, the backtrace lines get intermixed, making
> > > > the output worthless. Add a lock so each cpu stack dump comes
> > > > out as a coherent set.
> > > >
> > > > For example, when a multi-processor system is NMIed, all of the
> > > > cpus call dump_stack() at the same time, resulting in output for
> > > > all of cpus getting intermixed, making it impossible to tell what
> > > > any individual cpu was doing. With this patch each cpu prints
> > > > its stack lines as a coherent set, so one can see what each cpu
> > > > was doing.
> > >
> > > For this particular test case, it sounds like you are doing what
> > > trigger_all_cpu_backtrace() is doing? It doesn't solve the general
> > > problem, but probably your particular usage?
> >
> > In this case, I am just using the hardware NMI, which sends the NMI
> > signal to each logical cpu. Since each cpu receives the NMI at nearly
> > the exact same time, they end up in dump_stack() at the same time.
> > Without some form of locking, trace lines from different cpus end
> > up intermixed, making it impossible to tell what any individual
> > cpu was doing.
>
> I forgot the original reasons for having the NMI go to each CPU instead of
> just the boot CPU (commit 78c06176), but it seems like if you revert that
> patch and have the nmi handler just call trigger_all_cpu_backtrace()
> instead (which does stack trace locking for pretty output), that would
> solve your problem, no? That locking is safe because it is only called in
> the NMI context.
We want NMI to hit all the cpus at the same time to get a coherent
snapshot of what is happening in the system at one point in time.
Sending an IPI one cpu at a time skews the results, and doesn't
really solve the problem of multiple cpus going into dump_stack()
at the same time. NMI isn't the only possible caller of dump_stack().
FWIW, "Wait for up to 10 seconds for all CPUs to do the backtrace" on
a 4096 cpu system isn't long enough. :-)
> Whereas the lock you are proposing can be called in a mixture of NMI and
> IRQ which could cause deadlocks I believe.
Since this is a lock just around the dump_stack printk, would
checking for forward progress and a timeout to catch any possible
deadlock be sufficient? In the unlikely case of a deadlock the
lock gets broken and some of the cpu backtraces get intermixed.
That is still a huge improvement over the current case where
all of the backtraces get intermixed.
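Russ's forward-progress idea could look roughly like this hypothetical userspace sketch (C11 atomics; MAX_SPINS, the flag-based lock, and the unlocked fallback are illustrative assumptions, not kernel API): spin a bounded number of times, then give up and print anyway, trading a possibly intermixed dump for the guarantee of no deadlock.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

#define MAX_SPINS 1000000L	/* illustrative bound, not a tuned value */

static atomic_flag dump_lock = ATOMIC_FLAG_INIT;

/*
 * Returns true if the lock was taken; false if we gave up after
 * MAX_SPINS attempts, in which case the caller dumps unlocked and its
 * output may intermix with the stuck holder's output.
 */
static bool dump_trylock_bounded(void)
{
	for (long i = 0; i < MAX_SPINS; i++) {
		if (!atomic_flag_test_and_set(&dump_lock))
			return true;
	}
	return false;	/* timed out: proceed without the lock */
}

static void dump_unlock(bool locked)
{
	if (locked)
		atomic_flag_clear(&dump_lock);
}
```

A real implementation would also need to decide whether a timed-out waiter steals the lock or merely bypasses it; this sketch takes the simpler bypass option.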
> Cheers,
> Don
--
Russ Anderson, OS RAS/Partitioning Project Lead
SGI - Silicon Graphics Inc rja@sgi.com
* Re: [PATCH] x86: Avoid intermixing cpu dump_stack output on multi-processor systems
2012-05-29 23:11 ` Russ Anderson
@ 2012-05-29 23:54 ` Don Zickus
2012-06-01 22:56 ` Russ Anderson
0 siblings, 1 reply; 10+ messages in thread
From: Don Zickus @ 2012-05-29 23:54 UTC (permalink / raw)
To: Russ Anderson
Cc: linux-kernel, x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
Andrew Morton, Greg Kroah-Hartman, rja
On Tue, May 29, 2012 at 06:11:35PM -0500, Russ Anderson wrote:
> > > In this case, I am just using the hardware NMI, which sends the NMI
> > > signal to each logical cpu. Since each cpu receives the NMI at nearly
> > > the exact same time, they end up in dump_stack() at the same time.
> > > Without some form of locking, trace lines from different cpus end
> > > up intermixed, making it impossible to tell what any individual
> > > cpu was doing.
> >
> > I forgot the original reasons for having the NMI go to each CPU instead of
> > just the boot CPU (commit 78c06176), but it seems like if you revert that
> > patch and have the nmi handler just call trigger_all_cpu_backtrace()
> > instead (which does stack trace locking for pretty output), that would
> > solve your problem, no? That locking is safe because it is only called in
> > the NMI context.
>
> We want NMI to hit all the cpus at the same time to get a coherent
> snapshot of what is happening in the system at one point in time.
> Sending an IPI one cpu at a time skews the results, and doesn't
Oh, I thought it was broadcasting, but I see the apic_uv code serializes
it. Though getting all those hardware locks in the nmi handler has to be
time consuming? But I know you guys did some tricks to speed that up.
> really solve the problem of multiple cpus going into dump_stack()
> at the same time. NMI isn't the only possible caller of dump_stack().
I am curious, your NMI handler has locking wrapped around dump_stack,
shouldn't that serialize the output the way you want it? Why isn't that
working?
>
> FWIW, "Wait for up to 10 seconds for all CPUs to do the backtrace" on
> a 4096 cpu system isn't long enough. :-)
Good point. :-)
>
> > Whereas the lock you are proposing can be called in a mixture of NMI and
> > IRQ which could cause deadlocks I believe.
>
> Since this is a lock just around the dump_stack printk, would
> checking for forward progress and a timeout to catch any possible
> deadlock be sufficient? In the unlikely case of a deadlock the
> lock gets broken and some of the cpu backtraces get intermixed.
> That is still a huge improvement over the current case where
> all of the backtraces get intermixed.
I saw your new patch based on Frederic's input. It seems to take care of
deadlock situations, though you run into the lock starvation problem that
ticket spinlocks solved. Which is why I am curious why moving the
locking one layer up to the NMI handler (which is where it is currently),
didn't fix your problem.
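The starvation Don refers to: in a bare cmpxchg loop nothing orders the waiters, so an unlucky cpu can lose the race indefinitely, whereas ticket spinlocks serve waiters strictly FIFO. A minimal single-threaded model in C11 atomics (illustrative only, not the kernel's arch_spinlock_t):

```c
#include <assert.h>
#include <stdatomic.h>

struct ticket_lock {
	atomic_uint next_ticket;	/* ticket handed to the next arrival */
	atomic_uint now_serving;	/* ticket currently allowed to run   */
};

/* Take a ticket, then wait until it is called: strict FIFO order. */
static unsigned ticket_lock(struct ticket_lock *l)
{
	unsigned me = atomic_fetch_add(&l->next_ticket, 1);

	while (atomic_load(&l->now_serving) != me)
		;	/* spin; the kernel would cpu_relax() here */
	return me;
}

static void ticket_unlock(struct ticket_lock *l)
{
	atomic_fetch_add(&l->now_serving, 1);	/* call the next ticket */
}
```

Because each waiter is admitted exactly when its ticket comes up, no waiter can be overtaken arbitrarily often, which is the fairness the cmpxchg loop lacks.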
Cheers,
Don
* Re: [PATCH] x86: Avoid intermixing cpu dump_stack output on multi-processor systems
2012-05-29 23:54 ` Don Zickus
@ 2012-06-01 22:56 ` Russ Anderson
2012-06-04 14:23 ` Don Zickus
0 siblings, 1 reply; 10+ messages in thread
From: Russ Anderson @ 2012-06-01 22:56 UTC (permalink / raw)
To: Don Zickus
Cc: linux-kernel, x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
Andrew Morton, Greg Kroah-Hartman, rja
On Tue, May 29, 2012 at 07:54:07PM -0400, Don Zickus wrote:
> On Tue, May 29, 2012 at 06:11:35PM -0500, Russ Anderson wrote:
> > > > In this case, I am just using the hardware NMI, which sends the NMI
> > > > signal to each logical cpu. Since each cpu receives the NMI at nearly
> > > > the exact same time, they end up in dump_stack() at the same time.
> > > > Without some form of locking, trace lines from different cpus end
> > > > up intermixed, making it impossible to tell what any individual
> > > > cpu was doing.
> > >
> > > I forgot the original reasons for having the NMI go to each CPU instead of
> > > just the boot CPU (commit 78c06176), but it seems like if you revert that
> > > patch and have the nmi handler just call trigger_all_cpu_backtrace()
> > > instead (which does stack trace locking for pretty output), that would
> > > solve your problem, no? That locking is safe because it is only called in
> > > the NMI context.
> >
> > We want NMI to hit all the cpus at the same time to get a coherent
> > snapshot of what is happening in the system at one point in time.
> > Sending an IPI one cpu at a time skews the results, and doesn't
>
> Oh, I thought it was broadcasting, but I see the apic_uv code serializes
> it. Though getting all those hardware locks in the nmi handler has to be
> time consuming? But I know you guys did some tricks to speed that up.
>
> > really solve the problem of multiple cpus going into dump_stack()
> > at the same time. NMI isn't the only possible caller of dump_stack().
>
> I am curious, your NMI handler has locking wrapped around dump_stack,
> shouldn't that serialize the output the way you want it? Why isn't that
> working?
Yes, you're right, it does. It is working. I'd forgotten that
the community kernel has uv_nmi_lock in uv_handle_nmi. Must
be working too much with distro kernels. :-) But that doesn't
help for all the other code paths that call dump_stack.
> > FWIW, "Wait for up to 10 seconds for all CPUs to do the backtrace" on
> > a 4096 cpu system isn't long enough. :-)
>
> Good point. :-)
>
> >
> > > Whereas the lock you are proposing can be called in a mixture of NMI and
> > > IRQ which could cause deadlocks I believe.
> >
> > Since this is a lock just around the dump_stack printk, would
> > checking for forward progress and a timeout to catch any possible
> > deadlock be sufficient? In the unlikely case of a deadlock the
> > lock gets broken and some of the cpu backtraces get intermixed.
> > That is still a huge improvement over the current case where
> > all of the backtraces get intermixed.
>
> I saw your new patch based on Frederick's input. It seems to take care of
> deadlock situations though you run into the starving lock problem that
> ticketed spinlocks solved. Which is why I am curious why moving the
> locking one layer up to the NMI handler (which is where it is currently),
> didn't fix your problem.
Locking in dump_stack would remove the need for uv_nmi_lock.
--
Russ Anderson, OS RAS/Partitioning Project Lead
SGI - Silicon Graphics Inc rja@sgi.com
* Re: [PATCH] x86: Avoid intermixing cpu dump_stack output on multi-processor systems
2012-06-01 22:56 ` Russ Anderson
@ 2012-06-04 14:23 ` Don Zickus
0 siblings, 0 replies; 10+ messages in thread
From: Don Zickus @ 2012-06-04 14:23 UTC (permalink / raw)
To: Russ Anderson
Cc: linux-kernel, x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
Andrew Morton, Greg Kroah-Hartman, rja
On Fri, Jun 01, 2012 at 05:56:03PM -0500, Russ Anderson wrote:
> > I am curious, your NMI handler has locking wrapped around dump_stack,
> > shouldn't that serialize the output the way you want it? Why isn't that
> > working?
>
> Yes, you're right, it does. It is working. I'd forgotten that
> the community kernel has uv_nmi_lock in uv_handle_nmi. Must
> be working too much with distro kernels. :-) But that doesn't
> help for all the other code paths than call dump_stack.
Sure, I agree. But not every caller of dump_stack needs to dump for all
cpus. I was just trying to avoid the ugly, complicated cmpxchg code
you were proposing.
>
>
> > > FWIW, "Wait for up to 10 seconds for all CPUs to do the backtrace" on
> > > a 4096 cpu system isn't long enough. :-)
> >
> > Good point. :-)
> >
> > >
> > > > Whereas the lock you are proposing can be called in a mixture of NMI and
> > > > IRQ which could cause deadlocks I believe.
> > >
> > > Since this is a lock just around the dump_stack printk, would
> > > checking for forward progress and a timeout to catch any possible
> > > deadlock be sufficient? In the unlikely case of a deadlock the
> > > lock gets broken and some of the cpu backtraces get intermixed.
> > > That is still a huge improvement over the current case where
> > > all of the backtraces get intermixed.
> >
> > I saw your new patch based on Frederick's input. It seems to take care of
> > deadlock situations though you run into the starving lock problem that
> > ticketed spinlocks solved. Which is why I am curious why moving the
> > locking one layer up to the NMI handler (which is where it is currently),
> > didn't fix your problem.
>
> Locking in dump_stack would remove the need for uv_nmi_lock.
I agree. I was just wondering if the added complexities were worth it for
the normal case.
If the uv_nmi_lock works, then I feel comfortable that your second patch
based on Frederic's suggestion will work too. I just feel uncomfortable
with locking on a stack dump that should be reliable. It is one thing to
do the locking in only NMI space or only IRQ space, but now we are
traversing both. I don't think there will be any deadlocks (based on the
else path).
It could just be overhyped paranoia on my part. But that was my biggest
hesitation.
Cheers,
Don