public inbox for linux-kernel@vger.kernel.org
* [GIT PULL] x86 stacktrace updates
@ 2011-05-12 20:32 Frederic Weisbecker
  2011-05-12 20:32 ` [PATCH 1/2] x86: Remove warning and warning_symbol from struct stacktrace_ops Frederic Weisbecker
  2011-05-12 20:32 ` [PATCH 2/2] x86: Make the x86-64 stacktrace code safely callable from scheduler Frederic Weisbecker
  0 siblings, 2 replies; 13+ messages in thread
From: Frederic Weisbecker @ 2011-05-12 20:32 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: LKML, Frederic Weisbecker, Peter Zijlstra, H. Peter Anvin,
	Thomas Gleixner, Steven Rostedt, Richard Weinberger,
	Oleg Nesterov, Andrew Morton, Huang Ying,
	Soeren Sandmann Pedersen, Namhyung Kim, x86, Robert Richter

Ingo,

Please pull the perf/stacktrace branch that can be found at:

git://git.kernel.org/pub/scm/linux/kernel/git/frederic/random-tracing.git
	perf/stacktrace

Thanks,
	Frederic
---

Frederic Weisbecker (1):
      x86: Make the x86-64 stacktrace code safely callable from scheduler

Richard Weinberger (1):
      x86: Remove warning and warning_symbol from struct stacktrace_ops


 arch/x86/include/asm/stacktrace.h |    3 ---
 arch/x86/kernel/cpu/perf_event.c  |   13 -------------
 arch/x86/kernel/dumpstack.c       |   16 ----------------
 arch/x86/kernel/dumpstack_64.c    |   14 ++++++++++----
 arch/x86/kernel/stacktrace.c      |   13 -------------
 arch/x86/oprofile/backtrace.c     |   13 -------------
 6 files changed, 10 insertions(+), 62 deletions(-)

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH 1/2] x86: Remove warning and warning_symbol from struct stacktrace_ops
  2011-05-12 20:32 [GIT PULL] x86 stacktrace updates Frederic Weisbecker
@ 2011-05-12 20:32 ` Frederic Weisbecker
  2011-07-14 11:01   ` Mel Gorman
  2011-05-12 20:32 ` [PATCH 2/2] x86: Make the x86-64 stacktrace code safely callable from scheduler Frederic Weisbecker
  1 sibling, 1 reply; 13+ messages in thread
From: Frederic Weisbecker @ 2011-05-12 20:32 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: LKML, Richard Weinberger, Oleg Nesterov, Andrew Morton,
	Huang Ying, Soeren Sandmann Pedersen, Namhyung Kim, x86,
	H. Peter Anvin, Thomas Gleixner, Robert Richter, Paul Mundt,
	Frederic Weisbecker

From: Richard Weinberger <richard@nod.at>

Neither warning nor warning_symbol is used anywhere.
Let's get rid of them.

Signed-off-by: Richard Weinberger <richard@nod.at>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Soeren Sandmann Pedersen <ssp@redhat.com>
Cc: Namhyung Kim <namhyung@gmail.com>
Cc: x86 <x86@kernel.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Robert Richter <robert.richter@amd.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Link: http://lkml.kernel.org/r/1305205872-10321-2-git-send-email-richard@nod.at
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
---
 arch/x86/include/asm/stacktrace.h |    3 ---
 arch/x86/kernel/cpu/perf_event.c  |   13 -------------
 arch/x86/kernel/dumpstack.c       |   16 ----------------
 arch/x86/kernel/stacktrace.c      |   13 -------------
 arch/x86/oprofile/backtrace.c     |   13 -------------
 5 files changed, 0 insertions(+), 58 deletions(-)

diff --git a/arch/x86/include/asm/stacktrace.h b/arch/x86/include/asm/stacktrace.h
index d7e89c8..70bbe39 100644
--- a/arch/x86/include/asm/stacktrace.h
+++ b/arch/x86/include/asm/stacktrace.h
@@ -37,9 +37,6 @@ print_context_stack_bp(struct thread_info *tinfo,
 /* Generic stack tracer with callbacks */
 
 struct stacktrace_ops {
-	void (*warning)(void *data, char *msg);
-	/* msg must contain %s for the symbol */
-	void (*warning_symbol)(void *data, char *msg, unsigned long symbol);
 	void (*address)(void *data, unsigned long address, int reliable);
 	/* On negative return stop dumping */
 	int (*stack)(void *data, char *name);
diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
index 0de6b2b..3a0338b 100644
--- a/arch/x86/kernel/cpu/perf_event.c
+++ b/arch/x86/kernel/cpu/perf_event.c
@@ -1773,17 +1773,6 @@ static struct pmu pmu = {
  * callchain support
  */
 
-static void
-backtrace_warning_symbol(void *data, char *msg, unsigned long symbol)
-{
-	/* Ignore warnings */
-}
-
-static void backtrace_warning(void *data, char *msg)
-{
-	/* Ignore warnings */
-}
-
 static int backtrace_stack(void *data, char *name)
 {
 	return 0;
@@ -1797,8 +1786,6 @@ static void backtrace_address(void *data, unsigned long addr, int reliable)
 }
 
 static const struct stacktrace_ops backtrace_ops = {
-	.warning		= backtrace_warning,
-	.warning_symbol		= backtrace_warning_symbol,
 	.stack			= backtrace_stack,
 	.address		= backtrace_address,
 	.walk_stack		= print_context_stack_bp,
diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
index e2a3f06..f478ff6 100644
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -135,20 +135,6 @@ print_context_stack_bp(struct thread_info *tinfo,
 }
 EXPORT_SYMBOL_GPL(print_context_stack_bp);
 
-
-static void
-print_trace_warning_symbol(void *data, char *msg, unsigned long symbol)
-{
-	printk(data);
-	print_symbol(msg, symbol);
-	printk("\n");
-}
-
-static void print_trace_warning(void *data, char *msg)
-{
-	printk("%s%s\n", (char *)data, msg);
-}
-
 static int print_trace_stack(void *data, char *name)
 {
 	printk("%s <%s> ", (char *)data, name);
@@ -166,8 +152,6 @@ static void print_trace_address(void *data, unsigned long addr, int reliable)
 }
 
 static const struct stacktrace_ops print_trace_ops = {
-	.warning		= print_trace_warning,
-	.warning_symbol		= print_trace_warning_symbol,
 	.stack			= print_trace_stack,
 	.address		= print_trace_address,
 	.walk_stack		= print_context_stack,
diff --git a/arch/x86/kernel/stacktrace.c b/arch/x86/kernel/stacktrace.c
index 6515733..55d9bc0 100644
--- a/arch/x86/kernel/stacktrace.c
+++ b/arch/x86/kernel/stacktrace.c
@@ -9,15 +9,6 @@
 #include <linux/uaccess.h>
 #include <asm/stacktrace.h>
 
-static void save_stack_warning(void *data, char *msg)
-{
-}
-
-static void
-save_stack_warning_symbol(void *data, char *msg, unsigned long symbol)
-{
-}
-
 static int save_stack_stack(void *data, char *name)
 {
 	return 0;
@@ -53,16 +44,12 @@ save_stack_address_nosched(void *data, unsigned long addr, int reliable)
 }
 
 static const struct stacktrace_ops save_stack_ops = {
-	.warning	= save_stack_warning,
-	.warning_symbol	= save_stack_warning_symbol,
 	.stack		= save_stack_stack,
 	.address	= save_stack_address,
 	.walk_stack	= print_context_stack,
 };
 
 static const struct stacktrace_ops save_stack_ops_nosched = {
-	.warning	= save_stack_warning,
-	.warning_symbol	= save_stack_warning_symbol,
 	.stack		= save_stack_stack,
 	.address	= save_stack_address_nosched,
 	.walk_stack	= print_context_stack,
diff --git a/arch/x86/oprofile/backtrace.c b/arch/x86/oprofile/backtrace.c
index 2d49d4e..a5b64ab 100644
--- a/arch/x86/oprofile/backtrace.c
+++ b/arch/x86/oprofile/backtrace.c
@@ -16,17 +16,6 @@
 #include <asm/stacktrace.h>
 #include <linux/compat.h>
 
-static void backtrace_warning_symbol(void *data, char *msg,
-				     unsigned long symbol)
-{
-	/* Ignore warnings */
-}
-
-static void backtrace_warning(void *data, char *msg)
-{
-	/* Ignore warnings */
-}
-
 static int backtrace_stack(void *data, char *name)
 {
 	/* Yes, we want all stacks */
@@ -42,8 +31,6 @@ static void backtrace_address(void *data, unsigned long addr, int reliable)
 }
 
 static struct stacktrace_ops backtrace_ops = {
-	.warning	= backtrace_warning,
-	.warning_symbol	= backtrace_warning_symbol,
 	.stack		= backtrace_stack,
 	.address	= backtrace_address,
 	.walk_stack	= print_context_stack,
-- 
1.7.3.2



* [PATCH 2/2] x86: Make the x86-64 stacktrace code safely callable from scheduler
  2011-05-12 20:32 [GIT PULL] x86 stacktrace updates Frederic Weisbecker
  2011-05-12 20:32 ` [PATCH 1/2] x86: Remove warning and warning_symbol from struct stacktrace_ops Frederic Weisbecker
@ 2011-05-12 20:32 ` Frederic Weisbecker
  2011-05-12 20:40   ` Frederic Weisbecker
                     ` (2 more replies)
  1 sibling, 3 replies; 13+ messages in thread
From: Frederic Weisbecker @ 2011-05-12 20:32 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: LKML, Frederic Weisbecker, Peter Zijlstra, H. Peter Anvin,
	Thomas Gleixner, Ingo Molnar, Steven Rostedt

Avoid potential scheduler recursion and deadlock from the
stacktrace code by avoiding rescheduling when we re-enable
preemption.

This robustifies some scheduler trace events like sched switch
when they are used to produce callchains in perf or ftrace.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
 arch/x86/kernel/dumpstack_64.c |   14 ++++++++++----
 1 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/dumpstack_64.c b/arch/x86/kernel/dumpstack_64.c
index e71c98d..8e4f1d0 100644
--- a/arch/x86/kernel/dumpstack_64.c
+++ b/arch/x86/kernel/dumpstack_64.c
@@ -143,13 +143,16 @@ void dump_trace(struct task_struct *task, struct pt_regs *regs,
 		unsigned long *stack, unsigned long bp,
 		const struct stacktrace_ops *ops, void *data)
 {
-	const unsigned cpu = get_cpu();
-	unsigned long *irq_stack_end =
-		(unsigned long *)per_cpu(irq_stack_ptr, cpu);
+	unsigned long *irq_stack_end;
 	unsigned used = 0;
 	struct thread_info *tinfo;
 	int graph = 0;
 	unsigned long dummy;
+	int cpu;
+
+	preempt_disable();
+
+	cpu = smp_processor_id();
 
 	if (!task)
 		task = current;
@@ -168,6 +171,7 @@ void dump_trace(struct task_struct *task, struct pt_regs *regs,
 	 * exceptions
 	 */
 	tinfo = task_thread_info(task);
+	irq_stack_end = (unsigned long *)__get_cpu_var(irq_stack_ptr);
 	for (;;) {
 		char *id;
 		unsigned long *estack_end;
@@ -219,7 +223,9 @@ void dump_trace(struct task_struct *task, struct pt_regs *regs,
 	 * This handles the process stack:
 	 */
 	bp = ops->walk_stack(tinfo, stack, bp, ops, data, NULL, &graph);
-	put_cpu();
+
+	/* We want stacktrace to be computable anywhere, even in the scheduler */
+	preempt_enable_no_resched();
 }
 EXPORT_SYMBOL(dump_trace);
 
-- 
1.7.3.2
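
The semantic difference between put_cpu() and preempt_enable_no_resched() that
this patch relies on can be sketched with a toy userspace model (this is not
kernel code; the names only loosely mirror the kernel API and the counters are
simplifications):

```c
/*
 * Toy userspace model of the preemption bookkeeping involved.
 * The names mirror the kernel API, but this is a sketch only.
 */
#include <assert.h>
#include <stdbool.h>

static int  preempt_count;   /* nesting depth of preempt_disable() */
static bool need_resched;    /* models TIF_NEED_RESCHED */
static int  reschedules;     /* times we would re-enter the scheduler */

static void preempt_disable(void)
{
	preempt_count++;
}

/*
 * preempt_enable() re-checks for a pending reschedule once the count
 * drops to zero -- the scheduler re-entry that dump_trace() must avoid
 * when it is itself called from scheduler context.
 */
static void preempt_enable(void)
{
	if (--preempt_count == 0 && need_resched)
		reschedules++;	/* would call preempt_schedule() */
}

/* preempt_enable_no_resched() drops the count without the check. */
static void preempt_enable_no_resched(void)
{
	--preempt_count;
}
```

With a pending need_resched, preempt_enable() would re-enter the scheduler
while preempt_enable_no_resched() only drops the count; a reschedule requested
during the trace is therefore deferred, which is exactly what the follow-up
discussion turns on.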



* Re: [PATCH 2/2] x86: Make the x86-64 stacktrace code safely callable from scheduler
  2011-05-12 20:32 ` [PATCH 2/2] x86: Make the x86-64 stacktrace code safely callable from scheduler Frederic Weisbecker
@ 2011-05-12 20:40   ` Frederic Weisbecker
  2011-05-12 21:28   ` Ingo Molnar
  2011-05-13 10:48   ` Steven Rostedt
  2 siblings, 0 replies; 13+ messages in thread
From: Frederic Weisbecker @ 2011-05-12 20:40 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: LKML, Peter Zijlstra, H. Peter Anvin, Thomas Gleixner,
	Steven Rostedt

On Thu, May 12, 2011 at 10:32:06PM +0200, Frederic Weisbecker wrote:
> Avoid potential scheduler recursion and deadlock from the
> stacktrace code by avoiding rescheduling when we re-enable
> preemption.
> 
> This robustifies some scheduler trace events like sched switch
> when they are used to produce callchains in perf or ftrace.

Actually trace events are already safe because they already disable
preemption so callchains won't call schedule() there. But still
let's be careful, we never know how stacktrace can be used.


* Re: [PATCH 2/2] x86: Make the x86-64 stacktrace code safely callable from scheduler
  2011-05-12 20:32 ` [PATCH 2/2] x86: Make the x86-64 stacktrace code safely callable from scheduler Frederic Weisbecker
  2011-05-12 20:40   ` Frederic Weisbecker
@ 2011-05-12 21:28   ` Ingo Molnar
  2011-05-12 21:43     ` Frederic Weisbecker
  2011-05-13 10:48   ` Steven Rostedt
  2 siblings, 1 reply; 13+ messages in thread
From: Ingo Molnar @ 2011-05-12 21:28 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Peter Zijlstra, H. Peter Anvin, Thomas Gleixner,
	Steven Rostedt


* Frederic Weisbecker <fweisbec@gmail.com> wrote:

> Avoid potential scheduler recursion and deadlock from the
> stacktrace code by avoiding rescheduling when we re-enable
> preemption.
> 
> This robustifies some scheduler trace events like sched switch
> when they are used to produce callchains in perf or ftrace.

> -	put_cpu();
> +
> +	/* We want stacktrace to be computable anywhere, even in the scheduler */
> +	preempt_enable_no_resched();

So what happens if a callchain profiling happens to be interrupted by a hardirq 
and the interrupt reschedules the current task? We'll miss the reschedule, 
right?

preempt_enable_no_resched() is not a magic 'solve scheduler recursions' bullet 
- it's to be used only if something else will guarantee the preemption check! 
But nothing guarantees it here AFAICS.

A better fix would be to use local_irq_save()/restore().

Thanks,

	Ingo


* Re: [PATCH 2/2] x86: Make the x86-64 stacktrace code safely callable from scheduler
  2011-05-12 21:28   ` Ingo Molnar
@ 2011-05-12 21:43     ` Frederic Weisbecker
  2011-05-12 21:55       ` Ingo Molnar
  0 siblings, 1 reply; 13+ messages in thread
From: Frederic Weisbecker @ 2011-05-12 21:43 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: LKML, Peter Zijlstra, H. Peter Anvin, Thomas Gleixner,
	Steven Rostedt

On Thu, May 12, 2011 at 11:28:10PM +0200, Ingo Molnar wrote:
> 
> * Frederic Weisbecker <fweisbec@gmail.com> wrote:
> 
> > Avoid potential scheduler recursion and deadlock from the
> > stacktrace code by avoiding rescheduling when we re-enable
> > preemption.
> > 
> > This robustifies some scheduler trace events like sched switch
> > when they are used to produce callchains in perf or ftrace.
> 
> > -	put_cpu();
> > +
> > +	/* We want stacktrace to be computable anywhere, even in the scheduler */
> > +	preempt_enable_no_resched();
> 
> So what happens if a callchain profiling happens to be interrupted by a hardirq 
> and the interrupt reschedules the current task? We'll miss the reschedule, 
> right?
> 
> preempt_enable_no_resched() is not a magic 'solve scheduler recursions' bullet 
> - it's to be used only if something else will guarantee the preemption check! 
> But nothing guarantees it here AFAICS.
> 
> A better fix would be to use local_irq_save()/restore().

Good point, but then lockdep itself might trigger a stacktrace from local_irq_save(),
leading to a stacktrace recursion.

I can use raw_local_irq_disable(), or maybe add a stacktrace recursion protection.
I fear the second solution could make us lose useful information
if a stacktrace interrupts another one. OK, these are extreme cases...


* Re: [PATCH 2/2] x86: Make the x86-64 stacktrace code safely callable from scheduler
  2011-05-12 21:43     ` Frederic Weisbecker
@ 2011-05-12 21:55       ` Ingo Molnar
  0 siblings, 0 replies; 13+ messages in thread
From: Ingo Molnar @ 2011-05-12 21:55 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: LKML, Peter Zijlstra, H. Peter Anvin, Thomas Gleixner,
	Steven Rostedt


* Frederic Weisbecker <fweisbec@gmail.com> wrote:

> On Thu, May 12, 2011 at 11:28:10PM +0200, Ingo Molnar wrote:
> > 
> > * Frederic Weisbecker <fweisbec@gmail.com> wrote:
> > 
> > > Avoid potential scheduler recursion and deadlock from the
> > > stacktrace code by avoiding rescheduling when we re-enable
> > > preemption.
> > > 
> > > This robustifies some scheduler trace events like sched switch
> > > when they are used to produce callchains in perf or ftrace.
> > 
> > > -	put_cpu();
> > > +
> > > +	/* We want stacktrace to be computable anywhere, even in the scheduler */
> > > +	preempt_enable_no_resched();
> > 
> > So what happens if a callchain profiling happens to be interrupted by a hardirq 
> > and the interrupt reschedules the current task? We'll miss the reschedule, 
> > right?
> > 
> > preempt_enable_no_resched() is not a magic 'solve scheduler recursions' bullet 
> > - it's to be used only if something else will guarantee the preemption check! 
> > But nothing guarantees it here AFAICS.
> > 
> > A better fix would be to use local_irq_save()/restore().
> 
> Good point, but then lockdep itself might trigger a stacktrace from local_irq_save,
> leading to a stacktrace recursion.
> 
> I can use raw_local_irq_disable(), or may be have a stacktrace recursion 
> protection. I fear the second solution could lead us to potentially lose 
> useful information if a stacktrace interrupts another one. Ok these are 
> extreme cases...

i think raw_local_irq_disable() would be justified in this case.

Thanks,

	Ingo


* Re: [PATCH 2/2] x86: Make the x86-64 stacktrace code safely callable from scheduler
  2011-05-12 20:32 ` [PATCH 2/2] x86: Make the x86-64 stacktrace code safely callable from scheduler Frederic Weisbecker
  2011-05-12 20:40   ` Frederic Weisbecker
  2011-05-12 21:28   ` Ingo Molnar
@ 2011-05-13 10:48   ` Steven Rostedt
  2011-05-13 12:48     ` Frederic Weisbecker
  2 siblings, 1 reply; 13+ messages in thread
From: Steven Rostedt @ 2011-05-13 10:48 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: Ingo Molnar, LKML, Peter Zijlstra, H. Peter Anvin,
	Thomas Gleixner

On Thu, 2011-05-12 at 22:32 +0200, Frederic Weisbecker wrote:
> Avoid potential scheduler recursion and deadlock from the
> stacktrace code by avoiding rescheduling when we re-enable
> preemption.

I'm curious as to where you saw this deadlock. I have the function stack
tracer using preempt_disable_notrace and enable_notrace without any
issues, and it traces all functions in the kernel[*]. I have no issue
with using raw_local_irq_save/restore() if it is to protect the per_cpu
variable from interrupt corruption, but I don't see the problem with
recursion.

There's only one function I had to worry about with preempt disable, not
the entire scheduler. That was the function preempt_schedule(). This
function is called by preempt_enable() and that will cause an infinite
loop if you have something in preempt_schedule() call preempt_enable().

Remember that ftrace_preempt_disable/enable() crap that I did to try to
avoid the scheduler deadlock? I found it was complex and unnecessary
because the scheduler itself was not an issue, it was only
preempt_schedule(). I replaced all that crappy code with a single line
that added notrace to preempt_schedule() and everything just worked.

Thus, if you disable interrupts to protect the cpu data, that's fine;
just say so in the change log. I would really like to know whether you
actually saw this deadlock. Yes, enabling preemption in the scheduler may
recurse, but it will only do so once.

I still argue that interrupt enabling is slow. I've seen a large slowdown
of the code by switching the stack tracer from preempt disable to irq
disable. I used perf to see why, and it told me that disabling
interrupts was fine, but enabling interrupts can cost you quite a bit.

-- Steve

[*] of course function tracing does not trace other notrace functions.
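
Steven's argument that preempt_schedule(), not the whole scheduler, is the
only recursion hazard can be sketched with a hypothetical userspace model (the
notrace annotation is reduced to a boolean here; nothing below is real ftrace
code):

```c
/*
 * Toy model: only preempt_schedule() is the recursion hazard; a
 * missing 'notrace' annotation is modeled as a plain boolean.
 */
#include <assert.h>
#include <stdbool.h>

static bool preempt_schedule_traced;	/* true == missing notrace */
static bool need_resched = true;
static int  hook_calls;

static void tracer_hook(void);

/* Entered from preempt_enable() when a reschedule is pending. */
static void preempt_schedule(void)
{
	if (preempt_schedule_traced)
		tracer_hook();		/* tracing it recurses */
	need_resched = false;		/* the reschedule happened */
}

/* A tracer hook whose epilogue may reschedule, like preempt_enable(). */
static void tracer_hook(void)
{
	if (++hook_calls > 100)		/* cap the would-be infinite loop */
		return;
	/* ... record a trace entry here ... */
	if (need_resched)
		preempt_schedule();
}
```

With the hook traced, every preempt_schedule() entry re-invokes the hook and
the recursion runs away; marking it notrace breaks the cycle after a single
clean invocation.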



* Re: [PATCH 2/2] x86: Make the x86-64 stacktrace code safely callable from scheduler
  2011-05-13 10:48   ` Steven Rostedt
@ 2011-05-13 12:48     ` Frederic Weisbecker
  2011-05-13 13:19       ` Peter Zijlstra
  0 siblings, 1 reply; 13+ messages in thread
From: Frederic Weisbecker @ 2011-05-13 12:48 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Ingo Molnar, LKML, Peter Zijlstra, H. Peter Anvin,
	Thomas Gleixner

On Fri, May 13, 2011 at 06:48:26AM -0400, Steven Rostedt wrote:
> On Thu, 2011-05-12 at 22:32 +0200, Frederic Weisbecker wrote:
> > Avoid potential scheduler recursion and deadlock from the
> > stacktrace code by avoiding rescheduling when we re-enable
> > preemption.
> 
> I'm curious as to where you saw this deadlock. I have the function stack
> tracer using preempt_disable_notrace and enable_notrace without any
> issues, and it traces all functions in the kernel[*]. I have no issue
> with using raw_local_irq_save/restore() if it is to protect the per_cpu
> variable from interrupt corruption, but I don't see the problem with
> recursion.
> 
> There's only one function I had to worry about with preempt disable, not
> the entire scheduler. That was the function preempt_schedule(). This
> function is called by preempt_enable() and that will cause an infinite
> loop if you have something in preempt_schedule() call preempt_enable().
> 
> Remember that ftrace_preempt_disable/enable() crap that I did to try to
> avoid the scheduler deadlock? I found it was complex and unnecessary
> because the scheduler itself was not an issue, it was only
> preempt_schedule(). I replaced all that crappy code with a single line
> that added notrace to preempt_schedule() and everything just worked.
> 
> Thus, if you disable interrupts to protect the cpu data, that's fine;
> just say so in the change log. I would really like to know whether you
> actually saw this deadlock. Yes, enabling preemption in the scheduler may
> recurse, but it will only do so once.
> 
> I still argue that interrupt enabling is slow. I've seen a large slowdown
> of the code by switching the stack tracer from preempt disable to irq
> disable. I used perf to see why, and it told me that disabling
> interrupts was fine, but enabling interrupts can cost you quite a bit.
> 
> -- Steve
> 
> [*] of course function tracing does not trace other notrace functions.
> 

I haven't observed any deadlock. Trace events disable preemption, and
other tracers do too (my changelog was buggy).

I just worried about potential other users, like a WARN_ON in the
scheduler or so.

My worry is the following scenario:

schedule() {
	acquire(rq)
							set_tsk_need_resched
	WARN_ON() {
		stack_trace() {
			preempt_enable() {
				preempt_schedule() {
					acquire(rq)
				}
			}
		}
	}
}

I don't know whether it can happen that TIF_NEED_RESCHED is set remotely,
or whether TIF_NEED_RESCHED can be set while we hold the rq lock, and then
be followed by a WARN_ON ...
So I preferred to be careful.


* Re: [PATCH 2/2] x86: Make the x86-64 stacktrace code safely callable from scheduler
  2011-05-13 12:48     ` Frederic Weisbecker
@ 2011-05-13 13:19       ` Peter Zijlstra
  2011-05-13 13:29         ` Frederic Weisbecker
  0 siblings, 1 reply; 13+ messages in thread
From: Peter Zijlstra @ 2011-05-13 13:19 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: Steven Rostedt, Ingo Molnar, LKML, H. Peter Anvin,
	Thomas Gleixner

On Fri, 2011-05-13 at 14:48 +0200, Frederic Weisbecker wrote:
> I haven't observed any deadlock. Trace events disable preemption, and
> other tracers do too (my changelog was buggy).
> 
> I just worried about potential other users, like a WARN_ON in the
> scheduler or so.
> 
> My worry is the following scenario:
> 
> schedule() {
>         acquire(rq)
>                                                         set_tsk_need_resched
>         WARN_ON() {
>                 stack_trace() {
>                         preempt_enable() {
>                                 preempt_schedule() {

Would never happen, because rq->lock is a spinlock which holds another
preempt count so preempt_enable() would never schedule.

>                                         acquire(rq)
>                                 }
>                         }
>                 }
>         }
> }



> I don't know whether it can happen that TIF_NEED_RESCHED is set remotely,

Yes

> or whether TIF_NEED_RESCHED can be set while we hold the rq lock,

Yes

>  and then
> be followed by a WARN_ON ...

Not quite sure, but possible.

> So I preferred to be careful. 

Still not quite seeing how all things could go bang.
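
Peter's point can be checked with a similar toy model: holding rq->lock keeps
the (modeled) preemption count above zero, so the nested preempt_enable() in
the WARN_ON scenario can never reschedule (again a userspace sketch, not
kernel code):

```c
/*
 * Toy model of Peter's argument: a held spinlock keeps preempt_count
 * above zero, so a nested preempt_enable() cannot reschedule.
 */
#include <assert.h>
#include <stdbool.h>

static int  preempt_count;
static bool need_resched;
static int  reschedules;

static void preempt_disable(void) { preempt_count++; }

static void preempt_enable(void)
{
	if (--preempt_count == 0 && need_resched)
		reschedules++;		/* would enter preempt_schedule() */
}

/* Spinlocks disable preemption for as long as they are held. */
static void spin_lock(void)   { preempt_disable(); }
static void spin_unlock(void) { preempt_enable(); }
```

The nested enable only drops the count from 2 to 1; the deferred reschedule
fires at spin_unlock(), which is why the feared rq->lock recursion cannot
happen.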


* Re: [PATCH 2/2] x86: Make the x86-64 stacktrace code safely callable from scheduler
  2011-05-13 13:19       ` Peter Zijlstra
@ 2011-05-13 13:29         ` Frederic Weisbecker
  0 siblings, 0 replies; 13+ messages in thread
From: Frederic Weisbecker @ 2011-05-13 13:29 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Steven Rostedt, Ingo Molnar, LKML, H. Peter Anvin,
	Thomas Gleixner

On Fri, May 13, 2011 at 03:19:58PM +0200, Peter Zijlstra wrote:
> On Fri, 2011-05-13 at 14:48 +0200, Frederic Weisbecker wrote:
> > I haven't observed any deadlock. Trace events disable preemption, and
> > other tracers do too (my changelog was buggy).
> > 
> > I just worried about potential other users, like a WARN_ON in the
> > scheduler or so.
> > 
> > My worry is the following scenario:
> > 
> > schedule() {
> >         acquire(rq)
> >                                                         set_tsk_need_resched
> >         WARN_ON() {
> >                 stack_trace() {
> >                         preempt_enable() {
> >                                 preempt_schedule() {
> 
> Would never happen, because rq->lock is a spinlock which holds another
> preempt count so preempt_enable() would never schedule.

Oh right.

> 
> >                                         acquire(rq)
> >                                 }
> >                         }
> >                 }
> >         }
> > }
> 
> 
> 
> > I don't know whether it can happen that TIF_NEED_RESCHED is set remotely,
> 
> Yes
> 
> > or whether TIF_NEED_RESCHED can be set while we hold the rq lock,
> 
> Yes
> 
> >  and then
> > be followed by a WARN_ON ...
> 
> Not quite sure, but possible.
> 
> > So I preferred to be careful. 
> 
> Still not quite seeing how all things could go bang.

Nah, forget about that, I was just confused ;)


* Re: [PATCH 1/2] x86: Remove warning and warning_symbol from struct stacktrace_ops
  2011-05-12 20:32 ` [PATCH 1/2] x86: Remove warning and warning_symbol from struct stacktrace_ops Frederic Weisbecker
@ 2011-07-14 11:01   ` Mel Gorman
  2011-07-14 13:50     ` Frederic Weisbecker
  0 siblings, 1 reply; 13+ messages in thread
From: Mel Gorman @ 2011-07-14 11:01 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: Ingo Molnar, LKML, Richard Weinberger, Oleg Nesterov,
	Andrew Morton, Huang Ying, Soeren Sandmann Pedersen, Namhyung Kim,
	x86, H. Peter Anvin, Thomas Gleixner, Robert Richter, Paul Mundt

On Thu, May 12, 2011 at 10:32:05PM +0200, Frederic Weisbecker wrote:
> From: Richard Weinberger <richard@nod.at>
> 
> Neither warning nor warning_symbol is used anywhere.
> Let's get rid of them.
> 
> Signed-off-by: Richard Weinberger <richard@nod.at>
> Cc: Oleg Nesterov <oleg@redhat.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Huang Ying <ying.huang@intel.com>
> Cc: Soeren Sandmann Pedersen <ssp@redhat.com>
> Cc: Namhyung Kim <namhyung@gmail.com>
> Cc: x86 <x86@kernel.org>
> Cc: H. Peter Anvin <hpa@zytor.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Robert Richter <robert.richter@amd.com>
> Cc: Paul Mundt <lethal@linux-sh.org>
> Link: http://lkml.kernel.org/r/1305205872-10321-2-git-send-email-richard@nod.at
> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>

This change breaks systemtap. I know it's out of tree, and systemtap
doesn't even do anything useful with the symbols as it assigns them
to dummy callbacks, but the breakage is there.

-- 
Mel Gorman
SUSE Labs


* Re: [PATCH 1/2] x86: Remove warning and warning_symbol from struct stacktrace_ops
  2011-07-14 11:01   ` Mel Gorman
@ 2011-07-14 13:50     ` Frederic Weisbecker
  0 siblings, 0 replies; 13+ messages in thread
From: Frederic Weisbecker @ 2011-07-14 13:50 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Ingo Molnar, LKML, Richard Weinberger, Oleg Nesterov,
	Andrew Morton, Huang Ying, Soeren Sandmann Pedersen, Namhyung Kim,
	x86, H. Peter Anvin, Thomas Gleixner, Robert Richter, Paul Mundt

On Thu, Jul 14, 2011 at 12:01:02PM +0100, Mel Gorman wrote:
> On Thu, May 12, 2011 at 10:32:05PM +0200, Frederic Weisbecker wrote:
> > From: Richard Weinberger <richard@nod.at>
> > 
> > Neither warning nor warning_symbol is used anywhere.
> > Let's get rid of them.
> > 
> > Signed-off-by: Richard Weinberger <richard@nod.at>
> > Cc: Oleg Nesterov <oleg@redhat.com>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: Huang Ying <ying.huang@intel.com>
> > Cc: Soeren Sandmann Pedersen <ssp@redhat.com>
> > Cc: Namhyung Kim <namhyung@gmail.com>
> > Cc: x86 <x86@kernel.org>
> > Cc: H. Peter Anvin <hpa@zytor.com>
> > Cc: Thomas Gleixner <tglx@linutronix.de>
> > Cc: Robert Richter <robert.richter@amd.com>
> > Cc: Paul Mundt <lethal@linux-sh.org>
> > Link: http://lkml.kernel.org/r/1305205872-10321-2-git-send-email-richard@nod.at
> > Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
> 
> This change breaks systemtap. I know it's out of tree, and systemtap
> doesn't even do anything useful with the symbols as it assigns them
> to dummy callbacks, but the breakage is there.

Then it's up to the systemtap developers to deal with that. We don't
maintain out-of-tree code.


end of thread, other threads:[~2011-07-14 13:50 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
2011-05-12 20:32 [GIT PULL] x86 stacktrace updates Frederic Weisbecker
2011-05-12 20:32 ` [PATCH 1/2] x86: Remove warning and warning_symbol from struct stacktrace_ops Frederic Weisbecker
2011-07-14 11:01   ` Mel Gorman
2011-07-14 13:50     ` Frederic Weisbecker
2011-05-12 20:32 ` [PATCH 2/2] x86: Make the x86-64 stacktrace code safely callable from scheduler Frederic Weisbecker
2011-05-12 20:40   ` Frederic Weisbecker
2011-05-12 21:28   ` Ingo Molnar
2011-05-12 21:43     ` Frederic Weisbecker
2011-05-12 21:55       ` Ingo Molnar
2011-05-13 10:48   ` Steven Rostedt
2011-05-13 12:48     ` Frederic Weisbecker
2011-05-13 13:19       ` Peter Zijlstra
2011-05-13 13:29         ` Frederic Weisbecker
