public inbox for linux-kernel@vger.kernel.org
* [PATCH 1/3] x86: only call smp_processor_id in non-preempt cases
@ 2010-11-12 14:50 Don Zickus
  2010-11-12 14:50 ` [PATCH 2/3] x86, hw_nmi: Move backtrace_mask declaration under ARCH_HAS_NMI_WATCHDOG Don Zickus
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Don Zickus @ 2010-11-12 14:50 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: LKML, Don Zickus

There are some paths that walk the die_chain with preemption on.
Make sure we are in an NMI call before we start doing anything.

Reported-by: Jan Kiszka <jan.kiszka@web.de>
Signed-off-by: Don Zickus <dzickus@redhat.com>
---
 arch/x86/kernel/apic/hw_nmi.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
index 5c4f952..ef4755d 100644
--- a/arch/x86/kernel/apic/hw_nmi.c
+++ b/arch/x86/kernel/apic/hw_nmi.c
@@ -49,7 +49,7 @@ arch_trigger_all_cpu_backtrace_handler(struct notifier_block *self,
 {
 	struct die_args *args = __args;
 	struct pt_regs *regs;
-	int cpu = smp_processor_id();
+	int cpu;
 
 	switch (cmd) {
 	case DIE_NMI:
@@ -60,6 +60,7 @@ arch_trigger_all_cpu_backtrace_handler(struct notifier_block *self,
 	}
 
 	regs = args->regs;
+	cpu = smp_processor_id();
 
 	if (cpumask_test_cpu(cpu, to_cpumask(backtrace_mask))) {
 		static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
-- 
1.7.2.3



* [PATCH 2/3] x86, hw_nmi: Move backtrace_mask declaration under ARCH_HAS_NMI_WATCHDOG.
  2010-11-12 14:50 [PATCH 1/3] x86: only call smp_processor_id in non-preempt cases Don Zickus
@ 2010-11-12 14:50 ` Don Zickus
  2010-11-12 14:50 ` [PATCH 3/3] x86: Avoid calling arch_trigger_all_cpu_backtrace() at the same time Don Zickus
  2010-11-18  8:14 ` [PATCH 1/3] x86: only call smp_processor_id in non-preempt cases Ingo Molnar
  2 siblings, 0 replies; 9+ messages in thread
From: Don Zickus @ 2010-11-12 14:50 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: LKML, Rakib Mullick, Don Zickus

From: Rakib Mullick <rakib.mullick@gmail.com>

backtrace_mask is only used under the ARCH_HAS_NMI_WATCHDOG code
context, so put it into that context. Otherwise we get the following
warning:

arch/x86/kernel/apic/hw_nmi.c:21: warning: ‘backtrace_mask’ defined but not used

Signed-off-by: Rakib Mullick <rakib.mullick@gmail.com>
Signed-off-by: Don Zickus <dzickus@redhat.com>
---
 arch/x86/kernel/apic/hw_nmi.c |    7 ++++---
 1 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
index ef4755d..f349647 100644
--- a/arch/x86/kernel/apic/hw_nmi.c
+++ b/arch/x86/kernel/apic/hw_nmi.c
@@ -17,15 +17,16 @@
 #include <linux/nmi.h>
 #include <linux/module.h>
 
-/* For reliability, we're prepared to waste bits here. */
-static DECLARE_BITMAP(backtrace_mask, NR_CPUS) __read_mostly;
-
 u64 hw_nmi_get_sample_period(void)
 {
 	return (u64)(cpu_khz) * 1000 * 60;
 }
 
 #ifdef ARCH_HAS_NMI_WATCHDOG
+
+/* For reliability, we're prepared to waste bits here. */
+static DECLARE_BITMAP(backtrace_mask, NR_CPUS) __read_mostly;
+
 void arch_trigger_all_cpu_backtrace(void)
 {
 	int i;
-- 
1.7.2.3



* [PATCH 3/3] x86: Avoid calling arch_trigger_all_cpu_backtrace() at the same time
  2010-11-12 14:50 [PATCH 1/3] x86: only call smp_processor_id in non-preempt cases Don Zickus
  2010-11-12 14:50 ` [PATCH 2/3] x86, hw_nmi: Move backtrace_mask declaration under ARCH_HAS_NMI_WATCHDOG Don Zickus
@ 2010-11-12 14:50 ` Don Zickus
  2010-11-18 15:57   ` Frederic Weisbecker
  2010-11-18  8:14 ` [PATCH 1/3] x86: only call smp_processor_id in non-preempt cases Ingo Molnar
  2 siblings, 1 reply; 9+ messages in thread
From: Don Zickus @ 2010-11-12 14:50 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: LKML, Dongdong Deng, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	x86, Don Zickus

From: Dongdong Deng <dongdong.deng@windriver.com>

The spin_lock_debug/rcu_cpu_stall detector uses
trigger_all_cpu_backtrace() to dump cpu backtrace.
Therefore it is possible that trigger_all_cpu_backtrace()
could be called at the same time on different CPUs, which
triggers an 'unknown reason NMI' warning. The following case
illustrates the problem:

      CPU1                    CPU2                     ...   CPU N
                       trigger_all_cpu_backtrace()
                       set "backtrace_mask" to cpu mask
                               |
generate NMI interrupts  generate NMI interrupts       ...
    \                          |                               /
     \                         |                              /

The "backtrace_mask" will be cleared by the first NMI interrupt
at nmi_watchdog_tick(), so the subsequent NMI interrupts generated
by the other CPUs' arch_trigger_all_cpu_backtrace() will be treated as
unknown-reason NMI interrupts.

This patch uses a test_and_set to avoid the problem, bailing out of
arch_trigger_all_cpu_backtrace() early to avoid dumping duplicate
cpu backtrace info when a trigger_all_cpu_backtrace() is already in
progress.

Signed-off-by: Dongdong Deng <dongdong.deng@windriver.com>
Reviewed-by: Bruce Ashfield <bruce.ashfield@windriver.com>
CC: Thomas Gleixner <tglx@linutronix.de>
CC: Ingo Molnar <mingo@redhat.com>
CC: "H. Peter Anvin" <hpa@zytor.com>
CC: x86@kernel.org
CC: linux-kernel@vger.kernel.org
Signed-off-by: Don Zickus <dzickus@redhat.com>
---
 arch/x86/kernel/apic/hw_nmi.c |   24 ++++++++++++++++++++++++
 1 files changed, 24 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
index f349647..d892896 100644
--- a/arch/x86/kernel/apic/hw_nmi.c
+++ b/arch/x86/kernel/apic/hw_nmi.c
@@ -27,9 +27,27 @@ u64 hw_nmi_get_sample_period(void)
 /* For reliability, we're prepared to waste bits here. */
 static DECLARE_BITMAP(backtrace_mask, NR_CPUS) __read_mostly;
 
+/* "in progress" flag of arch_trigger_all_cpu_backtrace */
+static unsigned long backtrace_flag;
+
 void arch_trigger_all_cpu_backtrace(void)
 {
 	int i;
+	unsigned long flags;
+
+	/*
+	 * Have to disable irq here, as the
+	 * arch_trigger_all_cpu_backtrace() could be
+	 * triggered by "spin_lock()" with irqs on.
+	 */
+	local_irq_save(flags);
+
+	if (test_and_set_bit(0, &backtrace_flag))
+		/*
+		 * If there is already a trigger_all_cpu_backtrace() in progress
+		 * (backtrace_flag == 1), don't output double cpu dump infos.
+		 */
+		goto out_restore_irq;
 
 	cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
 
@@ -42,6 +60,12 @@ void arch_trigger_all_cpu_backtrace(void)
 			break;
 		mdelay(1);
 	}
+
+	clear_bit(0, &backtrace_flag);
+	smp_mb__after_clear_bit();
+
+out_restore_irq:
+	local_irq_restore(flags);
 }
 
 static int __kprobes
-- 
1.7.2.3



* Re: [PATCH 1/3] x86: only call smp_processor_id in non-preempt cases
  2010-11-12 14:50 [PATCH 1/3] x86: only call smp_processor_id in non-preempt cases Don Zickus
  2010-11-12 14:50 ` [PATCH 2/3] x86, hw_nmi: Move backtrace_mask declaration under ARCH_HAS_NMI_WATCHDOG Don Zickus
  2010-11-12 14:50 ` [PATCH 3/3] x86: Avoid calling arch_trigger_all_cpu_backtrace() at the same time Don Zickus
@ 2010-11-18  8:14 ` Ingo Molnar
  2010-11-18 14:22   ` Don Zickus
  2 siblings, 1 reply; 9+ messages in thread
From: Ingo Molnar @ 2010-11-18  8:14 UTC (permalink / raw)
  To: Don Zickus; +Cc: LKML, Peter Zijlstra, Frédéric Weisbecker


* Don Zickus <dzickus@redhat.com> wrote:

> There are some paths that walk the die_chain with preemption on.

What are those codepaths? At minimum it's worth documenting them.

Thanks,

	Ingo


* Re: [PATCH 1/3] x86: only call smp_processor_id in non-preempt cases
  2010-11-18  8:14 ` [PATCH 1/3] x86: only call smp_processor_id in non-preempt cases Ingo Molnar
@ 2010-11-18 14:22   ` Don Zickus
  2010-11-18 14:49     ` Ingo Molnar
  0 siblings, 1 reply; 9+ messages in thread
From: Don Zickus @ 2010-11-18 14:22 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: LKML, Peter Zijlstra, Frédéric Weisbecker

On Thu, Nov 18, 2010 at 09:14:07AM +0100, Ingo Molnar wrote:
> 
> * Don Zickus <dzickus@redhat.com> wrote:
> 
> > There are some paths that walk the die_chain with preemption on.
> 
> What are those codepaths? At minimum it's worth documenting them.

Well, the one that caused the bug was do_general_protection, which walks
the die_chain with DIE_GPF.

I can document them, though it might be time consuming to audit them and
hope they don't change.  I guess my bigger question is: is it expected
that anyone who walks the die_chain has preemption disabled?  If not,
then does it matter if we document it?

Cheers,
Don


* Re: [PATCH 1/3] x86: only call smp_processor_id in non-preempt cases
  2010-11-18 14:22   ` Don Zickus
@ 2010-11-18 14:49     ` Ingo Molnar
  2010-11-18 15:35       ` Don Zickus
  0 siblings, 1 reply; 9+ messages in thread
From: Ingo Molnar @ 2010-11-18 14:49 UTC (permalink / raw)
  To: Don Zickus; +Cc: LKML, Peter Zijlstra, Frédéric Weisbecker


* Don Zickus <dzickus@redhat.com> wrote:

> On Thu, Nov 18, 2010 at 09:14:07AM +0100, Ingo Molnar wrote:
> > 
> > * Don Zickus <dzickus@redhat.com> wrote:
> > 
> > > There are some paths that walk the die_chain with preemption on.
> > 
> > What are those codepaths? At minimum it's worth documenting them.
> 
> Well the one that caused the bug was do_general_protection which walks the
> die_chain with DIE_GPF.
> 
> I can document them, though it might be time consuming to audit them and hope they 
> don't change.

Listing one example is enough.

> [...]  I guess my bigger question is, is it expected that anyone who calls the 
> die_chain to have preemption disabled?  If not, then does it matter if we document 
> it?

Yes, it might be a bug to call those handlers with preemption on (or even with irqs
on). But if the code is fine as-is, then documenting a single example would be nice.

Thanks,

	Ingo


* Re: [PATCH 1/3] x86: only call smp_processor_id in non-preempt cases
  2010-11-18 14:49     ` Ingo Molnar
@ 2010-11-18 15:35       ` Don Zickus
  0 siblings, 0 replies; 9+ messages in thread
From: Don Zickus @ 2010-11-18 15:35 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: LKML, Peter Zijlstra, Frédéric Weisbecker

On Thu, Nov 18, 2010 at 03:49:21PM +0100, Ingo Molnar wrote:
> 
> * Don Zickus <dzickus@redhat.com> wrote:
> 
> > On Thu, Nov 18, 2010 at 09:14:07AM +0100, Ingo Molnar wrote:
> > > 
> > > * Don Zickus <dzickus@redhat.com> wrote:
> > > 
> > > > There are some paths that walk the die_chain with preemption on.
> > > 
> > > What are those codepaths? At minimum it's worth documenting them.
> > 
> > Well the one that caused the bug was do_general_protection which walks the
> > die_chain with DIE_GPF.
> > 
> > I can document them, though it might be time consuming to audit them and hope they 
> > don't change.
> 
> Listing one example is enough.
> 
> > [...]  I guess my bigger question is, is it expected that anyone who calls the 
> > die_chain to have preemption disabled?  If not, then does it matter if we document 
> > it?
> 
> Yes, it might be a bug to call those handlers with preemption on (or even with irqs 
> on). But if the code is fine as-is then documenting a single example would be nice.
> 

Is this better?

Cheers, 
Don

------------------------------------->
From: Don Zickus <dzickus@redhat.com>
Date: Mon, 1 Nov 2010 13:34:33 -0400
Subject: [PATCH 1/6] x86: only call smp_processor_id in non-preempt cases

There are some paths that walk the die_chain with preemption on.
Make sure we are in an NMI call before we start doing anything.

This was triggered by do_general_protection calling notify_die with
DIE_GPF.

Reported-by: Jan Kiszka <jan.kiszka@web.de>
Signed-off-by: Don Zickus <dzickus@redhat.com>
---
 arch/x86/kernel/apic/hw_nmi.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
index 5c4f952..ef4755d 100644
--- a/arch/x86/kernel/apic/hw_nmi.c
+++ b/arch/x86/kernel/apic/hw_nmi.c
@@ -49,7 +49,7 @@ arch_trigger_all_cpu_backtrace_handler(struct notifier_block *self,
 {
 	struct die_args *args = __args;
 	struct pt_regs *regs;
-	int cpu = smp_processor_id();
+	int cpu;
 
 	switch (cmd) {
 	case DIE_NMI:
@@ -60,6 +60,7 @@ arch_trigger_all_cpu_backtrace_handler(struct notifier_block *self,
 	}
 
 	regs = args->regs;
+	cpu = smp_processor_id();
 
 	if (cpumask_test_cpu(cpu, to_cpumask(backtrace_mask))) {
 		static arch_spinlock_t lock = __ARCH_SPIN_LOCK_UNLOCKED;
-- 
1.7.3.2



* Re: [PATCH 3/3] x86: Avoid calling arch_trigger_all_cpu_backtrace() at the same time
  2010-11-12 14:50 ` [PATCH 3/3] x86: Avoid calling arch_trigger_all_cpu_backtrace() at the same time Don Zickus
@ 2010-11-18 15:57   ` Frederic Weisbecker
  2010-11-19  3:00     ` DDD
  0 siblings, 1 reply; 9+ messages in thread
From: Frederic Weisbecker @ 2010-11-18 15:57 UTC (permalink / raw)
  To: Don Zickus
  Cc: Ingo Molnar, LKML, Dongdong Deng, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, x86

On Fri, Nov 12, 2010 at 09:50:55AM -0500, Don Zickus wrote:
> From: Dongdong Deng <dongdong.deng@windriver.com>
> 
> The spin_lock_debug/rcu_cpu_stall detector uses
> trigger_all_cpu_backtrace() to dump cpu backtrace.
> Therefore it is possible that trigger_all_cpu_backtrace()
> could be called at the same time on different CPUs, which
> triggers an 'unknown reason NMI' warning. The following case
> illustrates the problem:
> 
>       CPU1                    CPU2                     ...   CPU N
>                        trigger_all_cpu_backtrace()
>                        set "backtrace_mask" to cpu mask
>                                |
> generate NMI interrupts  generate NMI interrupts       ...
>     \                          |                               /
>      \                         |                              /
> 
> The "backtrace_mask" will be cleaned by the first NMI interrupt
> at nmi_watchdog_tick(), then the following NMI interrupts generated
> by other cpus's arch_trigger_all_cpu_backtrace() will be taken as
> unknown reason NMI interrupts.
> 
> This patch uses a test_and_set to avoid the problem, and stop the
> arch_trigger_all_cpu_backtrace() from calling to avoid dumping a
> double cpu backtrace info when there is already a
> trigger_all_cpu_backtrace() in progress.
> 
> Signed-off-by: Dongdong Deng <dongdong.deng@windriver.com>
> Reviewed-by: Bruce Ashfield <bruce.ashfield@windriver.com>
> CC: Thomas Gleixner <tglx@linutronix.de>
> CC: Ingo Molnar <mingo@redhat.com>
> CC: "H. Peter Anvin" <hpa@zytor.com>
> CC: x86@kernel.org
> CC: linux-kernel@vger.kernel.org
> Signed-off-by: Don Zickus <dzickus@redhat.com>
> ---
>  arch/x86/kernel/apic/hw_nmi.c |   24 ++++++++++++++++++++++++
>  1 files changed, 24 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
> index f349647..d892896 100644
> --- a/arch/x86/kernel/apic/hw_nmi.c
> +++ b/arch/x86/kernel/apic/hw_nmi.c
> @@ -27,9 +27,27 @@ u64 hw_nmi_get_sample_period(void)
>  /* For reliability, we're prepared to waste bits here. */
>  static DECLARE_BITMAP(backtrace_mask, NR_CPUS) __read_mostly;
>  
> +/* "in progress" flag of arch_trigger_all_cpu_backtrace */
> +static unsigned long backtrace_flag;
> +
>  void arch_trigger_all_cpu_backtrace(void)
>  {
>  	int i;
> +	unsigned long flags;
> +
> +	/*
> +	 * Have to disable irq here, as the
> +	 * arch_trigger_all_cpu_backtrace() could be
> +	 * triggered by "spin_lock()" with irqs on.
> +	 */
> +	local_irq_save(flags);



I'm not sure I understand why you disable irqs here. It looks
safe with the test_and_set_bit already.



> +
> +	if (test_and_set_bit(0, &backtrace_flag))
> +		/*
> +		 * If there is already a trigger_all_cpu_backtrace() in progress
> +		 * (backtrace_flag == 1), don't output double cpu dump infos.
> +		 */
> +		goto out_restore_irq;
>  
>  	cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
>  
> @@ -42,6 +60,12 @@ void arch_trigger_all_cpu_backtrace(void)
>  			break;
>  		mdelay(1);
>  	}
> +
> +	clear_bit(0, &backtrace_flag);
> +	smp_mb__after_clear_bit();
> +
> +out_restore_irq:
> +	local_irq_restore(flags);
>  }
>  
>  static int __kprobes
> -- 
> 1.7.2.3
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/



* Re: [PATCH 3/3] x86: Avoid calling arch_trigger_all_cpu_backtrace() at the same time
  2010-11-18 15:57   ` Frederic Weisbecker
@ 2010-11-19  3:00     ` DDD
  0 siblings, 0 replies; 9+ messages in thread
From: DDD @ 2010-11-19  3:00 UTC (permalink / raw)
  To: Frederic Weisbecker, Don Zickus
  Cc: Ingo Molnar, LKML, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	x86

Frederic Weisbecker wrote:
> On Fri, Nov 12, 2010 at 09:50:55AM -0500, Don Zickus wrote:
>> From: Dongdong Deng <dongdong.deng@windriver.com>
>>
>> The spin_lock_debug/rcu_cpu_stall detector uses
>> trigger_all_cpu_backtrace() to dump cpu backtrace.
>> Therefore it is possible that trigger_all_cpu_backtrace()
>> could be called at the same time on different CPUs, which
>> triggers an 'unknown reason NMI' warning. The following case
>> illustrates the problem:
>>
>>       CPU1                    CPU2                     ...   CPU N
>>                        trigger_all_cpu_backtrace()
>>                        set "backtrace_mask" to cpu mask
>>                                |
>> generate NMI interrupts  generate NMI interrupts       ...
>>     \                          |                               /
>>      \                         |                              /
>>
>> The "backtrace_mask" will be cleaned by the first NMI interrupt
>> at nmi_watchdog_tick(), then the following NMI interrupts generated
>> by other cpus's arch_trigger_all_cpu_backtrace() will be taken as
>> unknown reason NMI interrupts.
>>
>> This patch uses a test_and_set to avoid the problem, and stop the
>> arch_trigger_all_cpu_backtrace() from calling to avoid dumping a
>> double cpu backtrace info when there is already a
>> trigger_all_cpu_backtrace() in progress.
>>
>> Signed-off-by: Dongdong Deng <dongdong.deng@windriver.com>
>> Reviewed-by: Bruce Ashfield <bruce.ashfield@windriver.com>
>> CC: Thomas Gleixner <tglx@linutronix.de>
>> CC: Ingo Molnar <mingo@redhat.com>
>> CC: "H. Peter Anvin" <hpa@zytor.com>
>> CC: x86@kernel.org
>> CC: linux-kernel@vger.kernel.org
>> Signed-off-by: Don Zickus <dzickus@redhat.com>
>> ---
>>  arch/x86/kernel/apic/hw_nmi.c |   24 ++++++++++++++++++++++++
>>  1 files changed, 24 insertions(+), 0 deletions(-)
>>
>> diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
>> index f349647..d892896 100644
>> --- a/arch/x86/kernel/apic/hw_nmi.c
>> +++ b/arch/x86/kernel/apic/hw_nmi.c
>> @@ -27,9 +27,27 @@ u64 hw_nmi_get_sample_period(void)
>>  /* For reliability, we're prepared to waste bits here. */
>>  static DECLARE_BITMAP(backtrace_mask, NR_CPUS) __read_mostly;
>>  
>> +/* "in progress" flag of arch_trigger_all_cpu_backtrace */
>> +static unsigned long backtrace_flag;
>> +
>>  void arch_trigger_all_cpu_backtrace(void)
>>  {
>>  	int i;
>> +	unsigned long flags;
>> +
>> +	/*
>> +	 * Have to disable irq here, as the
>> +	 * arch_trigger_all_cpu_backtrace() could be
>> +	 * triggered by "spin_lock()" with irqs on.
>> +	 */
>> +	local_irq_save(flags);
> 
> 
> 
> I'm not sure I understand why you disable irqs here. It looks
> safe with the test_and_set_bit already.

Hi Frederic,

Yep, after we use test_and_set_bit to replace spin_lock, the
irq-disabling ops obviously can be removed here.

I will redo this patch, and send it to Don.

Thanks,
Dongdong

> 
> 
> 
>> +
>> +	if (test_and_set_bit(0, &backtrace_flag))
>> +		/*
>> +		 * If there is already a trigger_all_cpu_backtrace() in progress
>> +		 * (backtrace_flag == 1), don't output double cpu dump infos.
>> +		 */
>> +		goto out_restore_irq;
>>  
>>  	cpumask_copy(to_cpumask(backtrace_mask), cpu_online_mask);
>>  
>> @@ -42,6 +60,12 @@ void arch_trigger_all_cpu_backtrace(void)
>>  			break;
>>  		mdelay(1);
>>  	}
>> +
>> +	clear_bit(0, &backtrace_flag);
>> +	smp_mb__after_clear_bit();
>> +
>> +out_restore_irq:
>> +	local_irq_restore(flags);
>>  }
>>  
>>  static int __kprobes
>> -- 
>> 1.7.2.3
>>
> 
> 


