public inbox for linux-kernel@vger.kernel.org
* [patch 0/8] Series to improve the x86 backtracing code
@ 2008-01-12  5:26 Arjan van de Ven
  2008-01-12  5:28 ` [patch 1/8] x86: Fix x86 32 bit FRAME_POINTER chasing code Arjan van de Ven
                   ` (9 more replies)
  0 siblings, 10 replies; 11+ messages in thread
From: Arjan van de Ven @ 2008-01-12  5:26 UTC (permalink / raw)
  To: linux-kernel; +Cc: mingo, akpm, torvalds, tglx, hpa

Hi,

this patch series improves the x86 backtracing code in the following ways:
1) Fix a bad bug in x86 (32 bit) FRAME_POINTER backtracing (2.6.24 material)
2) Add the capability to mark a backtrace entry as reliable / unreliable
3) Change the x86 (32 bit) FRAME_POINTER backtracing to use the normal backtrace but use the frames
   as guidance to the reliability of the backtrace entries
4) Patch to capture the EBP register earlier in the backtrace callchain to get a better FRAME_POINTER
   based backtrace
5) Split the x86 (64 bit) backtrace code up similar to the 32 bit code by turning a macro into a function
6) Enable FRAME_POINTER backtracing on 64 bit similar to the 32 bit patch in number 3
7) Add a simple self-test module for backtraces (will move to tests/ later when that gets merged)
8) Add the feature to x86 64bit to print a set of bytes prior to the current EIP in the Code: line
   (already present in 32 bit)

I have not yet implemented 2 requested items:
1) Give dump_stack() and co a parameter for the KERN_ log level; this is a patch that touches 125 files
   and while I like the idea I'm not sure it's worth THAT.
2) Linus' request to print only the first 5 entries of a backtrace at an important loglevel, and print the rest
   as KERN_WARNING or so. This realistically wants 1) and I've not gotten to it yet (and it's independent
   functionality anyway)


Andrew:  You already have patch 1 in your tree, and it hasn't been modified since then, so no need to refresh

-- 
If you want to reach me at my work email, use arjan@linux.intel.com
For development, discussion and tips for power savings, 
visit http://www.lesswatts.org

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [patch 1/8] x86: Fix x86 32 bit FRAME_POINTER chasing code
  2008-01-12  5:26 [patch 0/8] Series to improve the x86 backtracing code Arjan van de Ven
@ 2008-01-12  5:28 ` Arjan van de Ven
  2008-01-12  5:29 ` [patch 2/8] x86: Add the capability to print fuzzy backtraces Arjan van de Ven
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Arjan van de Ven @ 2008-01-12  5:28 UTC (permalink / raw)
  To: linux-kernel; +Cc: Arjan van de Ven, mingo, akpm, torvalds, tglx, hpa

Subject: Fix x86 32 bit FRAME_POINTER chasing code
From: Arjan van de Ven <arjan@linux.intel.com>

The current x86 32 bit FRAME_POINTER chasing code has a nasty bug: the
EBP tracer never updates the value of EBP it is tracing, so the code
doesn't actually switch to the IRQ stack properly.

The result is a truncated backtrace:

WARNING: at timeroops.c:8 kerneloops_regression_test() (Not tainted)
Pid: 0, comm: swapper Not tainted 2.6.24-0.77.rc4.git4.fc9 #1
 [<c040649a>] show_trace_log_lvl+0x1a/0x2f
 [<c0406d41>] show_trace+0x12/0x14
 [<c0407061>] dump_stack+0x6c/0x72
 [<e0258049>] kerneloops_regression_test+0x44/0x46 [timeroops]
 [<c04371ac>] run_timer_softirq+0x127/0x18f
 [<c0434685>] __do_softirq+0x78/0xff
 [<c0407759>] do_softirq+0x74/0xf7
 =======================

This patch fixes the code to update EBP properly, and to check the EIP
before printing (as the non-frame-pointer backtracer does), so that
the same test backtrace now looks like this:

WARNING: at timeroops.c:8 kerneloops_regression_test()
Pid: 0, comm: swapper Not tainted 2.6.24-rc7 #4
 [<c0405d17>] show_trace_log_lvl+0x1a/0x2f
 [<c0406681>] show_trace+0x12/0x14
 [<c0406ef2>] dump_stack+0x6a/0x70
 [<e01f6040>] kerneloops_regression_test+0x3b/0x3d [timeroops]
 [<c0426f07>] run_timer_softirq+0x11b/0x17c
 [<c04243ac>] __do_softirq+0x42/0x94
 [<c040704c>] do_softirq+0x50/0xb6
 [<c04242a9>] irq_exit+0x37/0x67
 [<c040714c>] do_IRQ+0x9a/0xaf
 [<c04057da>] common_interrupt+0x2e/0x34
 [<c05807fe>] cpuidle_idle_call+0x52/0x78
 [<c04034f3>] cpu_idle+0x46/0x60
 [<c05fbbd3>] rest_init+0x43/0x45
 [<c070aa3d>] start_kernel+0x279/0x27f
 =======================

This shows that the backtrace goes all the way down to user context now.
This bug was found during the port to 64 bit of the frame pointer backtracer.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>

---
 arch/x86/kernel/traps_32.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

Index: linux-2.6.24-rc7/arch/x86/kernel/traps_32.c
===================================================================
--- linux-2.6.24-rc7.orig/arch/x86/kernel/traps_32.c
+++ linux-2.6.24-rc7/arch/x86/kernel/traps_32.c
@@ -124,7 +124,8 @@ static inline unsigned long print_contex
 		unsigned long addr;
 
 		addr = frame->return_address;
-		ops->address(data, addr);
+		if (__kernel_text_address(addr))
+			ops->address(data, addr);
 		/*
 		 * break out of recursive entries (such as
 		 * end_of_stack_stop_unwind_function). Also,
@@ -132,6 +133,7 @@ static inline unsigned long print_contex
 		 * move downwards!
 		 */
 		next = frame->next_frame;
+		ebp = (unsigned long) next;
 		if (next <= frame)
 			break;
 		frame = next;



-- 
If you want to reach me at my work email, use arjan@linux.intel.com
For development, discussion and tips for power savings, 
visit http://www.lesswatts.org

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [patch 2/8] x86: Add the capability to print fuzzy backtraces
  2008-01-12  5:26 [patch 0/8] Series to improve the x86 backtracing code Arjan van de Ven
  2008-01-12  5:28 ` [patch 1/8] x86: Fix x86 32 bit FRAME_POINTER chasing code Arjan van de Ven
@ 2008-01-12  5:29 ` Arjan van de Ven
  2008-01-12  5:30 ` [patch 3/8] x86: Improve the 32 bit Frame Pointer backtracer to also use the traditional backtrace Arjan van de Ven
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Arjan van de Ven @ 2008-01-12  5:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: Arjan van de Ven, mingo, akpm, torvalds, tglx, hpa

Subject: x86: Add the capability to print fuzzy backtraces
From: Arjan van de Ven <arjan@linux.intel.com>

For enhancing the 32 bit EBP based backtracer, I need the capability
for the backtracer to tell its caller whether an entry is reliable or
unreliable, and the backtrace printing code then needs to print the
unreliable ones slightly differently.

This patch adds the basic capability; the next patch will add a user
of this capability.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>

---
 arch/x86/kernel/stacktrace.c  |    2 +-
 arch/x86/kernel/traps_32.c    |    8 +++++---
 arch/x86/kernel/traps_64.c    |   37 ++++++++++++++++++++++---------------
 arch/x86/oprofile/backtrace.c |    2 +-
 include/asm-x86/stacktrace.h  |    2 +-
 5 files changed, 30 insertions(+), 21 deletions(-)

Index: linux-2.6.24-rc7/arch/x86/kernel/traps_32.c
===================================================================
--- linux-2.6.24-rc7.orig/arch/x86/kernel/traps_32.c
+++ linux-2.6.24-rc7/arch/x86/kernel/traps_32.c
@@ -125,7 +125,7 @@ static inline unsigned long print_contex
 
 		addr = frame->return_address;
 		if (__kernel_text_address(addr))
-			ops->address(data, addr);
+			ops->address(data, addr, 1);
 		/*
 		 * break out of recursive entries (such as
 		 * end_of_stack_stop_unwind_function). Also,
@@ -144,7 +144,7 @@ static inline unsigned long print_contex
 
 		addr = *stack++;
 		if (__kernel_text_address(addr))
-			ops->address(data, addr);
+			ops->address(data, addr, 1);
 	}
 #endif
 	return ebp;
@@ -219,9 +219,11 @@ static int print_trace_stack(void *data,
 /*
  * Print one address/symbol entries per line.
  */
-static void print_trace_address(void *data, unsigned long addr)
+static void print_trace_address(void *data, unsigned long addr, int reliable)
 {
 	printk("%s [<%08lx>] ", (char *)data, addr);
+	if (!reliable)
+		printk("? ");
 	print_symbol("%s\n", addr);
 	touch_nmi_watchdog();
 }
Index: linux-2.6.24-rc7/arch/x86/kernel/traps_64.c
===================================================================
--- linux-2.6.24-rc7.orig/arch/x86/kernel/traps_64.c
+++ linux-2.6.24-rc7/arch/x86/kernel/traps_64.c
@@ -99,13 +99,14 @@ static inline void preempt_conditional_c
 int kstack_depth_to_print = 12;
 
 #ifdef CONFIG_KALLSYMS
-void printk_address(unsigned long address)
+void printk_address(unsigned long address, int reliable)
 {
 	unsigned long offset = 0, symsize;
 	const char *symname;
 	char *modname;
 	char *delim = ":";
 	char namebuf[128];
+	char reliab[4] = "";
 
 	symname = kallsyms_lookup(address, &symsize, &offset,
 					&modname, namebuf);
@@ -113,13 +114,16 @@ void printk_address(unsigned long addres
 		printk(" [<%016lx>]\n", address);
 		return;
 	}
+	if (!reliable)
+		strcpy(reliab,"? ");
+
 	if (!modname)
 		modname = delim = ""; 		
-	printk(" [<%016lx>] %s%s%s%s+0x%lx/0x%lx\n",
-		address, delim, modname, delim, symname, offset, symsize);
+	printk(" [<%016lx>] %s%s%s%s%s+0x%lx/0x%lx\n",
+		address, reliab, delim, modname, delim, symname, offset, symsize);
 }
 #else
-void printk_address(unsigned long address)
+void printk_address(unsigned long address, int reliable)
 {
 	printk(" [<%016lx>]\n", address);
 }
@@ -215,7 +219,7 @@ static inline int valid_stack_ptr(struct
 }
 
 void dump_trace(struct task_struct *tsk, struct pt_regs *regs,
-		unsigned long *stack,
+		unsigned long *stack, unsigned long ebp,
 		const struct stacktrace_ops *ops, void *data)
 {
 	const unsigned cpu = get_cpu();
@@ -252,7 +256,7 @@ void dump_trace(struct task_struct *tsk,
 			 * down the cause of the crash will be able to figure \
 			 * out the call path that was taken. \
 			 */ \
-			ops->address(data, addr);   \
+			ops->address(data, addr, 1);   \
 		} \
 	} while (0)
 
@@ -331,10 +335,10 @@ static int print_trace_stack(void *data,
 	return 0;
 }
 
-static void print_trace_address(void *data, unsigned long addr)
+static void print_trace_address(void *data, unsigned long addr, int reliable)
 {
 	touch_nmi_watchdog();
-	printk_address(addr);
+	printk_address(addr, reliable);
 }
 
 static const struct stacktrace_ops print_trace_ops = {
@@ -345,15 +349,17 @@ static const struct stacktrace_ops print
 };
 
 void
-show_trace(struct task_struct *tsk, struct pt_regs *regs, unsigned long *stack)
+show_trace(struct task_struct *tsk, struct pt_regs *regs, unsigned long *stack,
+		unsigned long ebp)
 {
 	printk("\nCall Trace:\n");
-	dump_trace(tsk, regs, stack, &print_trace_ops, NULL);
+	dump_trace(tsk, regs, stack, ebp, &print_trace_ops, NULL);
 	printk("\n");
 }
 
 static void
-_show_stack(struct task_struct *tsk, struct pt_regs *regs, unsigned long *rsp)
+_show_stack(struct task_struct *tsk, struct pt_regs *regs, unsigned long *rsp,
+							unsigned long ebp)
 {
 	unsigned long *stack;
 	int i;
@@ -387,12 +393,12 @@ _show_stack(struct task_struct *tsk, str
 		printk(" %016lx", *stack++);
 		touch_nmi_watchdog();
 	}
-	show_trace(tsk, regs, rsp);
+	show_trace(tsk, regs, rsp, ebp);
 }
 
 void show_stack(struct task_struct *tsk, unsigned long * rsp)
 {
-	_show_stack(tsk, NULL, rsp);
+	_show_stack(tsk, NULL, rsp, 0);
 }
 
 /*
@@ -401,13 +407,14 @@ void show_stack(struct task_struct *tsk,
 void dump_stack(void)
 {
 	unsigned long dummy;
+	unsigned long ebp = 0;
 
 	printk("Pid: %d, comm: %.20s %s %s %.*s\n",
 		current->pid, current->comm, print_tainted(),
 		init_utsname()->release,
 		(int)strcspn(init_utsname()->version, " "),
 		init_utsname()->version);
-	show_trace(NULL, NULL, &dummy);
+	show_trace(NULL, NULL, &dummy, ebp);
 }
 
 EXPORT_SYMBOL(dump_stack);
@@ -432,7 +439,7 @@ void show_registers(struct pt_regs *regs
 	 */
 	if (in_kernel) {
 		printk("Stack: ");
-		_show_stack(NULL, regs, (unsigned long*)rsp);
+		_show_stack(NULL, regs, (unsigned long *)rsp, regs->rbp);
 
 		printk("\nCode: ");
 		if (regs->rip < PAGE_OFFSET)
Index: linux-2.6.24-rc7/arch/x86/oprofile/backtrace.c
===================================================================
--- linux-2.6.24-rc7.orig/arch/x86/oprofile/backtrace.c
+++ linux-2.6.24-rc7/arch/x86/oprofile/backtrace.c
@@ -32,7 +32,7 @@ static int backtrace_stack(void *data, c
 	return 0;
 }
 
-static void backtrace_address(void *data, unsigned long addr)
+static void backtrace_address(void *data, unsigned long addr, int reliable)
 {
 	unsigned int *depth = data;
 
Index: linux-2.6.24-rc7/include/asm-x86/stacktrace.h
===================================================================
--- linux-2.6.24-rc7.orig/include/asm-x86/stacktrace.h
+++ linux-2.6.24-rc7/include/asm-x86/stacktrace.h
@@ -9,7 +9,7 @@ struct stacktrace_ops {
 	void (*warning)(void *data, char *msg);
 	/* msg must contain %s for the symbol */
 	void (*warning_symbol)(void *data, char *msg, unsigned long symbol);
-	void (*address)(void *data, unsigned long address);
+	void (*address)(void *data, unsigned long address, int reliable);
 	/* On negative return stop dumping */
 	int (*stack)(void *data, char *name);
 };
Index: linux-2.6.24-rc7/arch/x86/kernel/stacktrace.c
===================================================================
--- linux-2.6.24-rc7.orig/arch/x86/kernel/stacktrace.c
+++ linux-2.6.24-rc7/arch/x86/kernel/stacktrace.c
@@ -22,7 +22,7 @@ static int save_stack_stack(void *data, 
 	return -1;
 }
 
-static void save_stack_address(void *data, unsigned long addr)
+static void save_stack_address(void *data, unsigned long addr, int reliable)
 {
 	struct stack_trace *trace = (struct stack_trace *)data;
 	if (trace->skip > 0) {



-- 
If you want to reach me at my work email, use arjan@linux.intel.com
For development, discussion and tips for power savings, 
visit http://www.lesswatts.org

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [patch 3/8] x86: Improve the 32 bit Frame Pointer backtracer to also use the traditional backtrace
  2008-01-12  5:26 [patch 0/8] Series to improve the x86 backtracing code Arjan van de Ven
  2008-01-12  5:28 ` [patch 1/8] x86: Fix x86 32 bit FRAME_POINTER chasing code Arjan van de Ven
  2008-01-12  5:29 ` [patch 2/8] x86: Add the capability to print fuzzy backtraces Arjan van de Ven
@ 2008-01-12  5:30 ` Arjan van de Ven
  2008-01-12  5:31 ` [patch 4/8] x86: pull EBP calculation earlier into the backtrace path Arjan van de Ven
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Arjan van de Ven @ 2008-01-12  5:30 UTC (permalink / raw)
  To: Arjan van de Ven; +Cc: linux-kernel, mingo, akpm, torvalds, tglx, hpa

Subject: x86: Improve the 32 bit Frame Pointer backtracer to also use the traditional backtrace
From: Arjan van de Ven <arjan@linux.intel.com>

The 32 bit Frame Pointer backtracer code checks whether EBP is valid
before doing a backtrace; however, on failure it currently just gives up
and prints nothing. That's not very nice; we can do better and still
print a decent backtrace.

This patch changes the backtracer to run the regular backtracing algorithm
at the same time as the EBP backtracer; the EBP backtracer is basically
used to figure out which parts of the backtrace are reliable vs which
are likely to be noise.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>

---
 arch/x86/kernel/traps_32.c |   44 ++++++++++++++++++++------------------------
 1 file changed, 20 insertions(+), 24 deletions(-)

Index: linux-2.6.24-rc7/arch/x86/kernel/traps_32.c
===================================================================
--- linux-2.6.24-rc7.orig/arch/x86/kernel/traps_32.c
+++ linux-2.6.24-rc7/arch/x86/kernel/traps_32.c
@@ -117,36 +117,32 @@ static inline unsigned long print_contex
 				unsigned long *stack, unsigned long ebp,
 				const struct stacktrace_ops *ops, void *data)
 {
-#ifdef	CONFIG_FRAME_POINTER
 	struct stack_frame *frame = (struct stack_frame *)ebp;
-	while (valid_stack_ptr(tinfo, frame, sizeof(*frame))) {
-		struct stack_frame *next;
-		unsigned long addr;
 
-		addr = frame->return_address;
-		if (__kernel_text_address(addr))
-			ops->address(data, addr, 1);
-		/*
-		 * break out of recursive entries (such as
-		 * end_of_stack_stop_unwind_function). Also,
-		 * we can never allow a frame pointer to
-		 * move downwards!
-		 */
-		next = frame->next_frame;
-		ebp = (unsigned long) next;
-		if (next <= frame)
-			break;
-		frame = next;
-	}
-#else
+	/*
+	 * if EBP is "deeper" into the stack than the actual stack pointer,
+	 * we need to rewind the stack pointer a little to start at the
+	 * first stack frame, but only if EBP is in this stack frame.
+	 */
+	if (stack > (unsigned long *) ebp
+			&& valid_stack_ptr(tinfo, frame, sizeof(*frame)))
+		stack = (unsigned long *) ebp;
+
 	while (valid_stack_ptr(tinfo, stack, sizeof(*stack))) {
 		unsigned long addr;
 
-		addr = *stack++;
-		if (__kernel_text_address(addr))
-			ops->address(data, addr, 1);
+		addr = *stack;
+		if (__kernel_text_address(addr)) {
+			if ((unsigned long) stack == ebp + 4) {
+				ops->address(data, addr, 1);
+				frame = frame->next_frame;
+				ebp = (unsigned long) frame;
+			} else {
+				ops->address(data, addr, 0);
+			}
+		}
+		stack++;
 	}
-#endif
 	return ebp;
 }
 


-- 
If you want to reach me at my work email, use arjan@linux.intel.com
For development, discussion and tips for power savings, 
visit http://www.lesswatts.org

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [patch 4/8] x86: pull EBP calculation earlier into the backtrace path
  2008-01-12  5:26 [patch 0/8] Series to improve the x86 backtracing code Arjan van de Ven
                   ` (2 preceding siblings ...)
  2008-01-12  5:30 ` [patch 3/8] x86: Improve the 32 bit Frame Pointer backtracer to also use the traditional backtrace Arjan van de Ven
@ 2008-01-12  5:31 ` Arjan van de Ven
  2008-01-12  5:32 ` [patch 5/8] x86: Turn 64 bit x86 HANDLE_STACK into print_context_stack like 32 bit has Arjan van de Ven
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Arjan van de Ven @ 2008-01-12  5:31 UTC (permalink / raw)
  To: Arjan van de Ven; +Cc: linux-kernel, mingo, akpm, torvalds, tglx, hpa

Subject: x86: pull EBP calculation earlier into the backtrace path
From: Arjan van de Ven <arjan@linux.intel.com>

Right now, we take the stack pointer early in the backtrace path, but
only calculate EBP several functions deeper, making it hard to reconcile
the stack and EBP backtraces (as well as showing several internal backtrace
functions on the stack with EBP based backtracing).

This patch moves the EBP capture to the same place we take the stack pointer;
sadly this ripples through several layers of the backtracing stack,
but it's not all that bad in the end, I hope.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>

---
 arch/x86/kernel/process_32.c   |    2 +-
 arch/x86/kernel/process_64.c   |    2 +-
 arch/x86/kernel/stacktrace.c   |    2 +-
 arch/x86/kernel/traps_32.c     |   39 +++++++++++++++++----------------------
 arch/x86/oprofile/backtrace.c  |    2 +-
 include/asm-x86/processor_32.h |    3 ++-
 include/asm-x86/proto.h        |    3 ++-
 include/asm-x86/stacktrace.h   |    1 +
 8 files changed, 26 insertions(+), 28 deletions(-)

Index: linux-2.6.24-rc7/arch/x86/kernel/process_32.c
===================================================================
--- linux-2.6.24-rc7.orig/arch/x86/kernel/process_32.c
+++ linux-2.6.24-rc7/arch/x86/kernel/process_32.c
@@ -358,7 +358,7 @@ void __show_registers(struct pt_regs *re
 void show_regs(struct pt_regs *regs)
 {
 	__show_registers(regs, 1);
-	show_trace(NULL, regs, &regs->esp);
+	show_trace(NULL, regs, &regs->esp, regs->ebp);
 }
 
 /*
Index: linux-2.6.24-rc7/arch/x86/kernel/process_64.c
===================================================================
--- linux-2.6.24-rc7.orig/arch/x86/kernel/process_64.c
+++ linux-2.6.24-rc7/arch/x86/kernel/process_64.c
@@ -368,7 +368,7 @@ void show_regs(struct pt_regs *regs)
 {
 	printk("CPU %d:", smp_processor_id());
 	__show_regs(regs);
-	show_trace(NULL, regs, (void *)(regs + 1));
+	show_trace(NULL, regs, (void *)(regs + 1), regs->rbp);
 }
 
 /*
Index: linux-2.6.24-rc7/arch/x86/kernel/stacktrace.c
===================================================================
--- linux-2.6.24-rc7.orig/arch/x86/kernel/stacktrace.c
+++ linux-2.6.24-rc7/arch/x86/kernel/stacktrace.c
@@ -45,7 +45,7 @@ static const struct stacktrace_ops save_
  */
 void save_stack_trace(struct stack_trace *trace)
 {
-	dump_trace(current, NULL, NULL, &save_stack_ops, trace);
+	dump_trace(current, NULL, NULL, 0, &save_stack_ops, trace);
 	if (trace->nr_entries < trace->max_entries)
 		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
Index: linux-2.6.24-rc7/arch/x86/kernel/traps_32.c
===================================================================
--- linux-2.6.24-rc7.orig/arch/x86/kernel/traps_32.c
+++ linux-2.6.24-rc7/arch/x86/kernel/traps_32.c
@@ -119,15 +119,6 @@ static inline unsigned long print_contex
 {
 	struct stack_frame *frame = (struct stack_frame *)ebp;
 
-	/*
-	 * if EBP is "deeper" into the stack than the actual stack pointer,
-	 * we need to rewind the stack pointer a little to start at the
-	 * first stack frame, but only if EBP is in this stack frame.
-	 */
-	if (stack > (unsigned long *) ebp
-			&& valid_stack_ptr(tinfo, frame, sizeof(*frame)))
-		stack = (unsigned long *) ebp;
-
 	while (valid_stack_ptr(tinfo, stack, sizeof(*stack))) {
 		unsigned long addr;
 
@@ -138,7 +129,7 @@ static inline unsigned long print_contex
 				frame = frame->next_frame;
 				ebp = (unsigned long) frame;
 			} else {
-				ops->address(data, addr, 0);
+				ops->address(data, addr, ebp == 0);
 			}
 		}
 		stack++;
@@ -149,11 +140,9 @@ static inline unsigned long print_contex
 #define MSG(msg) ops->warning(data, msg)
 
 void dump_trace(struct task_struct *task, struct pt_regs *regs,
-	        unsigned long *stack,
+		unsigned long *stack, unsigned long ebp,
 		const struct stacktrace_ops *ops, void *data)
 {
-	unsigned long ebp = 0;
-
 	if (!task)
 		task = current;
 
@@ -233,20 +222,20 @@ static const struct stacktrace_ops print
 
 static void
 show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
-		   unsigned long * stack, char *log_lvl)
+		unsigned long *stack, unsigned long ebp, char *log_lvl)
 {
-	dump_trace(task, regs, stack, &print_trace_ops, log_lvl);
+	dump_trace(task, regs, stack, ebp, &print_trace_ops, log_lvl);
 	printk("%s =======================\n", log_lvl);
 }
 
 void show_trace(struct task_struct *task, struct pt_regs *regs,
-		unsigned long * stack)
+		unsigned long *stack, unsigned long ebp)
 {
-	show_trace_log_lvl(task, regs, stack, "");
+	show_trace_log_lvl(task, regs, stack, ebp, "");
 }
 
 static void show_stack_log_lvl(struct task_struct *task, struct pt_regs *regs,
-			       unsigned long *esp, char *log_lvl)
+		       unsigned long *esp, unsigned long ebp, char *log_lvl)
 {
 	unsigned long *stack;
 	int i;
@@ -267,13 +256,13 @@ static void show_stack_log_lvl(struct ta
 		printk("%08lx ", *stack++);
 	}
 	printk("\n%sCall Trace:\n", log_lvl);
-	show_trace_log_lvl(task, regs, esp, log_lvl);
+	show_trace_log_lvl(task, regs, esp, ebp, log_lvl);
 }
 
 void show_stack(struct task_struct *task, unsigned long *esp)
 {
 	printk("       ");
-	show_stack_log_lvl(task, NULL, esp, "");
+	show_stack_log_lvl(task, NULL, esp, 0, "");
 }
 
 /*
@@ -282,13 +271,19 @@ void show_stack(struct task_struct *task
 void dump_stack(void)
 {
 	unsigned long stack;
+	unsigned long ebp = 0;
+
+#ifdef CONFIG_FRAME_POINTER
+	if (!ebp)
+		asm("movl %%ebp, %0" : "=r" (ebp):);
+#endif
 
 	printk("Pid: %d, comm: %.20s %s %s %.*s\n",
 		current->pid, current->comm, print_tainted(),
 		init_utsname()->release,
 		(int)strcspn(init_utsname()->version, " "),
 		init_utsname()->version);
-	show_trace(current, NULL, &stack);
+	show_trace(current, NULL, &stack, ebp);
 }
 
 EXPORT_SYMBOL(dump_stack);
@@ -313,7 +308,7 @@ void show_registers(struct pt_regs *regs
 		unsigned char c;
 
 		printk("\n" KERN_EMERG "Stack: ");
-		show_stack_log_lvl(NULL, regs, &regs->esp, KERN_EMERG);
+		show_stack_log_lvl(NULL, regs, &regs->esp, 0, KERN_EMERG);
 
 		printk(KERN_EMERG "Code: ");
 
Index: linux-2.6.24-rc7/arch/x86/oprofile/backtrace.c
===================================================================
--- linux-2.6.24-rc7.orig/arch/x86/oprofile/backtrace.c
+++ linux-2.6.24-rc7/arch/x86/oprofile/backtrace.c
@@ -81,7 +81,7 @@ x86_backtrace(struct pt_regs * const reg
 
 	if (!user_mode_vm(regs)) {
 		if (depth)
-			dump_trace(NULL, regs, (unsigned long *)stack,
+			dump_trace(NULL, regs, (unsigned long *)stack, 0,
 				   &backtrace_ops, &depth);
 		return;
 	}
Index: linux-2.6.24-rc7/include/asm-x86/processor_32.h
===================================================================
--- linux-2.6.24-rc7.orig/include/asm-x86/processor_32.h
+++ linux-2.6.24-rc7/include/asm-x86/processor_32.h
@@ -423,7 +423,8 @@ extern void prepare_to_copy(struct task_
 extern int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags);
 
 extern unsigned long thread_saved_pc(struct task_struct *tsk);
-void show_trace(struct task_struct *task, struct pt_regs *regs, unsigned long *stack);
+void show_trace(struct task_struct *task, struct pt_regs *regs,
+		unsigned long *stack, unsigned long ebp);
 
 unsigned long get_wchan(struct task_struct *p);
 
Index: linux-2.6.24-rc7/include/asm-x86/proto.h
===================================================================
--- linux-2.6.24-rc7.orig/include/asm-x86/proto.h
+++ linux-2.6.24-rc7/include/asm-x86/proto.h
@@ -53,7 +53,8 @@ extern void load_gs_index(unsigned gs);
 
 extern unsigned long end_pfn_map; 
 
-extern void show_trace(struct task_struct *, struct pt_regs *, unsigned long * rsp);
+extern void show_trace(struct task_struct *, struct pt_regs *,
+		unsigned long *rsp, unsigned long ebp);
 extern void show_registers(struct pt_regs *regs);
 
 extern void exception_table_check(void);
Index: linux-2.6.24-rc7/include/asm-x86/stacktrace.h
===================================================================
--- linux-2.6.24-rc7.orig/include/asm-x86/stacktrace.h
+++ linux-2.6.24-rc7/include/asm-x86/stacktrace.h
@@ -15,6 +15,7 @@ struct stacktrace_ops {
 };
 
 void dump_trace(struct task_struct *tsk, struct pt_regs *regs, unsigned long *stack,
+		unsigned long ebp,
 		const struct stacktrace_ops *ops, void *data);
 
 #endif



-- 
If you want to reach me at my work email, use arjan@linux.intel.com
For development, discussion and tips for power savings, 
visit http://www.lesswatts.org

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [patch 5/8] x86: Turn 64 bit x86 HANDLE_STACK into print_context_stack like 32 bit has
  2008-01-12  5:26 [patch 0/8] Series to improve the x86 backtracing code Arjan van de Ven
                   ` (3 preceding siblings ...)
  2008-01-12  5:31 ` [patch 4/8] x86: pull EBP calculation earlier into the backtrace path Arjan van de Ven
@ 2008-01-12  5:32 ` Arjan van de Ven
  2008-01-12  5:32 ` [patch 6/8] x86: Use the stack frames to get exact stack-traces for CONFIG_FRAMEPOINTER on x86-64 Arjan van de Ven
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Arjan van de Ven @ 2008-01-12  5:32 UTC (permalink / raw)
  To: linux-kernel; +Cc: Arjan van de Ven, mingo, akpm, torvalds, tglx, hpa

Subject: x86: Turn 64 bit x86 HANDLE_STACK into print_context_stack like 32 bit has
From: Arjan van de Ven <arjan@linux.intel.com>

This patch turns the x86 64 bit HANDLE_STACK macro in the backtrace code
into a function, just like 32 bit has. This is preparatory work needed in
order to get exact backtraces for CONFIG_FRAME_POINTER working.

The function and its arguments are not the same as on 32 bit; due to the
exception/interrupt stack handling of x86-64 there are a few differences.

This patch should not have any behavior changes, only code movement.

Due to the fragility and importance of the backtrace code, this needs to be
well reviewed and well tested before merging into mainline.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
---
 arch/x86/kernel/traps_64.c |   74 +++++++++++++++++++++++++++------------------
 1 file changed, 45 insertions(+), 29 deletions(-)

Index: linux-2.6.24-rc7/arch/x86/kernel/traps_64.c
===================================================================
--- linux-2.6.24-rc7.orig/arch/x86/kernel/traps_64.c
+++ linux-2.6.24-rc7/arch/x86/kernel/traps_64.c
@@ -212,10 +212,46 @@ static unsigned long *in_exception_stack
  * severe exception (double fault, nmi, stack fault, debug, mce) hardware stack
  */
 
-static inline int valid_stack_ptr(struct thread_info *tinfo, void *p)
+static inline int valid_stack_ptr(struct thread_info *tinfo,
+			void *p, unsigned int size, void *end)
 {
 	void *t = (void *)tinfo;
-        return p > t && p < t + THREAD_SIZE - 3;
+	if (end) {
+		if (p < end && p >= (end-THREAD_SIZE))
+			return 1;
+		else
+			return 0;
+	}
+	return p > t && p < t + THREAD_SIZE - size;
+}
+
+static inline unsigned long print_context_stack(struct thread_info *tinfo,
+				unsigned long *stack, unsigned long ebp,
+				const struct stacktrace_ops *ops, void *data,
+				unsigned long *end)
+{
+	/*
+	 * Print function call entries within a stack. 'cond' is the
+	 * "end of stackframe" condition, that the 'stack++'
+	 * iteration will eventually trigger.
+	 */
+	while (valid_stack_ptr(tinfo, stack, 3, end)) {
+		unsigned long addr = *stack++;
+		/* Use unlocked access here because except for NMIs
+		   we should be already protected against module unloads */
+		if (__kernel_text_address(addr)) {
+			/*
+			 * If the address is either in the text segment of the
+			 * kernel, or in the region which contains vmalloc'ed
+			 * memory, it *may* be the address of a calling
+			 * routine; if so, print it so that someone tracing
+			 * down the cause of the crash will be able to figure
+			 * out the call path that was taken.
+			 */
+			ops->address(data, addr, 1);
+		}
+	}
+	return ebp;
 }
 
 void dump_trace(struct task_struct *tsk, struct pt_regs *regs,
@@ -229,6 +265,7 @@ void dump_trace(struct task_struct *tsk,
 
 	if (!tsk)
 		tsk = current;
+	tinfo = task_thread_info(tsk);
 
 	if (!stack) {
 		unsigned long dummy;
@@ -237,28 +274,6 @@ void dump_trace(struct task_struct *tsk,
 			stack = (unsigned long *)tsk->thread.rsp;
 	}
 
-	/*
-	 * Print function call entries within a stack. 'cond' is the
-	 * "end of stackframe" condition, that the 'stack++'
-	 * iteration will eventually trigger.
-	 */
-#define HANDLE_STACK(cond) \
-	do while (cond) { \
-		unsigned long addr = *stack++; \
-		/* Use unlocked access here because except for NMIs	\
-		   we should be already protected against module unloads */ \
-		if (__kernel_text_address(addr)) { \
-			/* \
-			 * If the address is either in the text segment of the \
-			 * kernel, or in the region which contains vmalloc'ed \
-			 * memory, it *may* be the address of a calling \
-			 * routine; if so, print it so that someone tracing \
-			 * down the cause of the crash will be able to figure \
-			 * out the call path that was taken. \
-			 */ \
-			ops->address(data, addr, 1);   \
-		} \
-	} while (0)
 
 	/*
 	 * Print function call entries in all stacks, starting at the
@@ -274,7 +289,9 @@ void dump_trace(struct task_struct *tsk,
 		if (estack_end) {
 			if (ops->stack(data, id) < 0)
 				break;
-			HANDLE_STACK (stack < estack_end);
+
+			print_context_stack(tinfo, stack, 0, ops,
+						data, estack_end);
 			ops->stack(data, "<EOE>");
 			/*
 			 * We link to the next stack via the
@@ -292,7 +309,8 @@ void dump_trace(struct task_struct *tsk,
 			if (stack >= irqstack && stack < irqstack_end) {
 				if (ops->stack(data, "IRQ") < 0)
 					break;
-				HANDLE_STACK (stack < irqstack_end);
+				print_context_stack(tinfo, stack, 0, ops,
+							 data, irqstack_end);
 				/*
 				 * We link to the next stack (which would be
 				 * the process stack normally) the last
@@ -310,9 +328,7 @@ void dump_trace(struct task_struct *tsk,
 	/*
 	 * This handles the process stack:
 	 */
-	tinfo = task_thread_info(tsk);
-	HANDLE_STACK (valid_stack_ptr(tinfo, stack));
-#undef HANDLE_STACK
+	print_context_stack(tinfo, stack, 0, ops, data, NULL);
 	put_cpu();
 }
 EXPORT_SYMBOL(dump_trace);


-- 
If you want to reach me at my work email, use arjan@linux.intel.com
For development, discussion and tips for power savings, 
visit http://www.lesswatts.org

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [patch 6/8] x86: Use the stack frames to get exact stack-traces for CONFIG_FRAMEPOINTER on x86-64
  2008-01-12  5:26 [patch 0/8] Series to improve the x86 backtracing code Arjan van de Ven
                   ` (4 preceding siblings ...)
  2008-01-12  5:32 ` [patch 5/8] x86: Turn 64 bit x86 HANDLE_STACK into print_context_stack like 32 bit has Arjan van de Ven
@ 2008-01-12  5:32 ` Arjan van de Ven
  2008-01-12  5:33 ` [patch 7/8] x86: Add a simple backtrace test module Arjan van de Ven
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Arjan van de Ven @ 2008-01-12  5:32 UTC (permalink / raw)
  To: linux-kernel; +Cc: Arjan van de Ven, mingo, akpm, torvalds, tglx, hpa


Subject: x86: Use the stack frames to get exact stack-traces for CONFIG_FRAMEPOINTER on x86-64
From: Arjan van de Ven <arjan@linux.intel.com>

x86 32 bit already has this feature: this patch turns the stack frames with
frame pointers into an exact stack trace, by following the frame pointer chain.
This only affects kernels built with the CONFIG_FRAME_POINTER config option
enabled, and greatly reduces the amount of noise in oopses.

This code uses the traditional method of doing backtraces, but if it
finds a valid frame pointer chain, it will use that to show which parts
of the backtrace are reliable and which parts are not.

Due to the fragility and importance of the backtrace code, this needs to
be well reviewed and well tested before merging into mainline.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>

---
 arch/x86/kernel/traps_64.c |   67 +++++++++++++++++++++++++++++----------------
 1 file changed, 44 insertions(+), 23 deletions(-)

Index: linux-2.6.24-rc7/arch/x86/kernel/traps_64.c
===================================================================
--- linux-2.6.24-rc7.orig/arch/x86/kernel/traps_64.c
+++ linux-2.6.24-rc7/arch/x86/kernel/traps_64.c
@@ -225,31 +225,34 @@ static inline int valid_stack_ptr(struct
 	return p > t && p < t + THREAD_SIZE - size;
 }
 
+/* The form of the top of the frame on the stack */
+struct stack_frame {
+	struct stack_frame *next_frame;
+	unsigned long return_address;
+};
+
+
 static inline unsigned long print_context_stack(struct thread_info *tinfo,
 				unsigned long *stack, unsigned long ebp,
 				const struct stacktrace_ops *ops, void *data,
 				unsigned long *end)
 {
-	/*
-	 * Print function call entries within a stack. 'cond' is the
-	 * "end of stackframe" condition, that the 'stack++'
-	 * iteration will eventually trigger.
-	 */
-	while (valid_stack_ptr(tinfo, stack, 3, end)) {
-		unsigned long addr = *stack++;
-		/* Use unlocked access here because except for NMIs
-		   we should be already protected against module unloads */
+	struct stack_frame *frame = (struct stack_frame *)ebp;
+
+	while (valid_stack_ptr(tinfo, stack, sizeof(*stack), end)) {
+		unsigned long addr;
+
+		addr = *stack;
 		if (__kernel_text_address(addr)) {
-			/*
-			 * If the address is either in the text segment of the
-			 * kernel, or in the region which contains vmalloc'ed
-			 * memory, it *may* be the address of a calling
-			 * routine; if so, print it so that someone tracing
-			 * down the cause of the crash will be able to figure
-			 * out the call path that was taken.
-			 */
-			ops->address(data, addr, 1);
+			if ((unsigned long) stack == ebp + 8) {
+				ops->address(data, addr, 1);
+				frame = frame->next_frame;
+				ebp = (unsigned long) frame;
+			} else {
+				ops->address(data, addr, ebp == 0);
+			}
 		}
+		stack++;
 	}
 	return ebp;
 }
@@ -274,6 +277,19 @@ void dump_trace(struct task_struct *tsk,
 			stack = (unsigned long *)tsk->thread.rsp;
 	}
 
+#ifdef CONFIG_FRAME_POINTER
+	if (!ebp) {
+		if (tsk == current) {
+			/* Grab ebp right from our regs */
+			asm("movq %%rbp, %0" : "=r" (ebp):);
+		} else {
+			/* ebp is the last reg pushed by switch_to */
+			ebp = *(unsigned long *) tsk->thread.rsp;
+		}
+	}
+#endif
+
+
 
 	/*
 	 * Print function call entries in all stacks, starting at the
@@ -290,8 +306,8 @@ void dump_trace(struct task_struct *tsk,
 			if (ops->stack(data, id) < 0)
 				break;
 
-			print_context_stack(tinfo, stack, 0, ops,
-						data, estack_end);
+			ebp = print_context_stack(tinfo, stack, ebp, ops,
+							data, estack_end);
 			ops->stack(data, "<EOE>");
 			/*
 			 * We link to the next stack via the
@@ -309,8 +325,8 @@ void dump_trace(struct task_struct *tsk,
 			if (stack >= irqstack && stack < irqstack_end) {
 				if (ops->stack(data, "IRQ") < 0)
 					break;
-				print_context_stack(tinfo, stack, 0, ops,
-							 data, irqstack_end);
+				ebp = print_context_stack(tinfo, stack, ebp,
+						ops, data, irqstack_end);
 				/*
 				 * We link to the next stack (which would be
 				 * the process stack normally) the last
@@ -328,7 +344,7 @@ void dump_trace(struct task_struct *tsk,
 	/*
 	 * This handles the process stack:
 	 */
-	print_context_stack(tinfo, stack, 0, ops, data, NULL);
+	ebp = print_context_stack(tinfo, stack, ebp, ops, data, NULL);
 	put_cpu();
 }
 EXPORT_SYMBOL(dump_trace);
@@ -425,6 +441,11 @@ void dump_stack(void)
 	unsigned long dummy;
 	unsigned long ebp = 0;
 
+#ifdef CONFIG_FRAME_POINTER
+	if (!ebp)
+		asm("movq %%rbp, %0" : "=r" (ebp):);
+#endif
+
 	printk("Pid: %d, comm: %.20s %s %s %.*s\n",
 		current->pid, current->comm, print_tainted(),
 		init_utsname()->release,


* [patch 7/8] x86: Add a simple backtrace test module
  2008-01-12  5:26 [patch 0/8] Series to improve the x86 backtracing code Arjan van de Ven
                   ` (5 preceding siblings ...)
  2008-01-12  5:32 ` [patch 6/8] x86: Use the stack frames to get exact stack-traces for CONFIG_FRAMEPOINTER on x86-64 Arjan van de Ven
@ 2008-01-12  5:33 ` Arjan van de Ven
  2008-01-12  5:34 ` [patch 8/8] x86: Add the "print code before the trapping instruction" feature to 64 bit Arjan van de Ven
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Arjan van de Ven @ 2008-01-12  5:33 UTC (permalink / raw)
  To: linux-kernel; +Cc: Arjan van de Ven, mingo, akpm, torvalds, tglx, hpa

Subject: x86: Add a simple backtrace test module
From: Arjan van de Ven <arjan@linux.intel.com>

During the work on the x86 32 and 64 bit backtrace code I found it useful
to have a simple test module to exercise process and irq context backtraces.
Since the existing backtrace code was buggy, I figure it might be useful
to have such a test module in the kernel so that maybe we can even
detect such bugs earlier.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>

---
 kernel/Makefile        |    1 +
 kernel/backtracetest.c |   47 +++++++++++++++++++++++++++++++++++++++++++++++
 lib/Kconfig.debug      |   12 ++++++++++++
 3 files changed, 60 insertions(+)

Index: linux-2.6.24-rc7/kernel/Makefile
===================================================================
--- linux-2.6.24-rc7.orig/kernel/Makefile
+++ linux-2.6.24-rc7/kernel/Makefile
@@ -52,6 +52,7 @@ obj-$(CONFIG_DETECT_SOFTLOCKUP) += softl
 obj-$(CONFIG_GENERIC_HARDIRQS) += irq/
 obj-$(CONFIG_SECCOMP) += seccomp.o
 obj-$(CONFIG_RCU_TORTURE_TEST) += rcutorture.o
+obj-$(CONFIG_BACKTRACE_SELF_TEST) += backtracetest.o
 obj-$(CONFIG_RELAY) += relay.o
 obj-$(CONFIG_SYSCTL) += utsname_sysctl.o
 obj-$(CONFIG_TASK_DELAY_ACCT) += delayacct.o
Index: linux-2.6.24-rc7/kernel/backtracetest.c
===================================================================
--- /dev/null
+++ linux-2.6.24-rc7/kernel/backtracetest.c
@@ -0,0 +1,47 @@
+/*
+ * Simple stack backtrace regression test module
+ *
+ * (C) Copyright 2008 Intel Corporation
+ * Author: Arjan van de Ven <arjan@linux.intel.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; version 2
+ * of the License.
+ */
+
+#include <linux/module.h>
+#include <linux/sched.h>
+
+static struct timer_list backtrace_timer;
+
+static void backtrace_test_timer(unsigned long data)
+{
+	printk("Testing a backtrace from irq context.\n");
+	printk("The following trace is a kernel self test and not a bug!\n");
+	dump_stack();
+}
+static int backtrace_regression_test(void)
+{
+	printk("====[ backtrace testing ]===========\n");
+	printk("Testing a backtrace from process context.\n");
+	printk("The following trace is a kernel self test and not a bug!\n");
+	dump_stack();
+
+	init_timer(&backtrace_timer);
+	backtrace_timer.function = backtrace_test_timer;
+	mod_timer(&backtrace_timer, jiffies + 10);
+
+	msleep(10);
+	printk("====[ end of backtrace testing ]====\n");
+	return 0;
+}
+
+static void exitf(void)
+{
+}
+
+module_init(backtrace_regression_test);
+module_exit(exitf);
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Arjan van de Ven <arjan@linux.intel.com>");
Index: linux-2.6.24-rc7/lib/Kconfig.debug
===================================================================
--- linux-2.6.24-rc7.orig/lib/Kconfig.debug
+++ linux-2.6.24-rc7/lib/Kconfig.debug
@@ -462,6 +462,18 @@ config RCU_TORTURE_TEST
 	  Say M if you want the RCU torture tests to build as a module.
 	  Say N if you are unsure.
 
+config BACKTRACE_SELF_TEST
+	tristate "Self test for the backtrace code"
+	depends on DEBUG_KERNEL
+	default n
+	help
+	  This option provides a kernel module that can be used to test
+	  the kernel stack backtrace code. This option is not useful
+	  for distributions or general kernels, but only for kernel
+	  developers working on architecture code.
+
+	  Say N if you are unsure.
+
 config LKDTM
 	tristate "Linux Kernel Dump Test Tool Module"
 	depends on DEBUG_KERNEL



* [patch 8/8] x86: Add the "print code before the trapping instruction" feature to 64 bit
  2008-01-12  5:26 [patch 0/8] Series to improve the x86 backtracing code Arjan van de Ven
                   ` (6 preceding siblings ...)
  2008-01-12  5:33 ` [patch 7/8] x86: Add a simple backtrace test module Arjan van de Ven
@ 2008-01-12  5:34 ` Arjan van de Ven
  2008-01-12  6:30 ` [patch 0/8] Series to improve the x86 backtracing code Ingo Molnar
  2008-01-12 18:24 ` Linus Torvalds
  9 siblings, 0 replies; 11+ messages in thread
From: Arjan van de Ven @ 2008-01-12  5:34 UTC (permalink / raw)
  To: Arjan van de Ven; +Cc: linux-kernel, mingo, akpm, torvalds, tglx, hpa


Subject: x86: Add the "print code before the trapping instruction" feature to 64 bit
From: Arjan van de Ven <arjan@linux.intel.com>

The 32 bit x86 tree has a very useful feature that prints the Code: line
for the code even before the trapping instruction (the start of the
trapping instruction is then denoted with a <>). Unfortunately, the 64 bit
x86 tree does not yet have this feature, making diagnosing backtraces harder
than needed.

This patch adds this feature in the same way as the 32 bit tree has it
(including the same kernel boot parameter), together with a bugfix
to make the code use probe_kernel_address() rather than a buggy (deadlocking)
__get_user().

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>

---
 Documentation/kernel-parameters.txt |    4 +--
 arch/x86/kernel/traps_64.c          |   44 +++++++++++++++++++++++++++---------
 2 files changed, 35 insertions(+), 13 deletions(-)

Index: linux-2.6.24-rc7/Documentation/kernel-parameters.txt
===================================================================
--- linux-2.6.24-rc7.orig/Documentation/kernel-parameters.txt
+++ linux-2.6.24-rc7/Documentation/kernel-parameters.txt
@@ -408,8 +408,8 @@ and is between 256 and 4096 characters. 
 			[SPARC64] tick
 			[X86-64] hpet,tsc
 
-	code_bytes	[IA32] How many bytes of object code to print in an
-			oops report.
+	code_bytes	[IA32/X86_64] How many bytes of object code to print
+			in an oops report.
 			Range: 0 - 8192
 			Default: 64
 
Index: linux-2.6.24-rc7/arch/x86/kernel/traps_64.c
===================================================================
--- linux-2.6.24-rc7.orig/arch/x86/kernel/traps_64.c
+++ linux-2.6.24-rc7/arch/x86/kernel/traps_64.c
@@ -74,6 +74,8 @@ asmlinkage void alignment_check(void);
 asmlinkage void machine_check(void);
 asmlinkage void spurious_interrupt_bug(void);
 
+static unsigned int code_bytes = 64;
+
 static inline void conditional_sti(struct pt_regs *regs)
 {
 	if (regs->eflags & X86_EFLAGS_IF)
@@ -459,12 +461,15 @@ EXPORT_SYMBOL(dump_stack);
 void show_registers(struct pt_regs *regs)
 {
 	int i;
-	int in_kernel = !user_mode(regs);
 	unsigned long rsp;
 	const int cpu = smp_processor_id();
 	struct task_struct *cur = cpu_pda(cpu)->pcurrent;
+	u8 *rip;
+	unsigned int code_prologue = code_bytes * 43 / 64;
+	unsigned int code_len = code_bytes;
 
 	rsp = regs->rsp;
+	rip = (u8 *) regs->rip - code_prologue;
 	printk("CPU %d ", cpu);
 	__show_regs(regs);
 	printk("Process %s (pid: %d, threadinfo %p, task %p)\n",
@@ -474,22 +479,28 @@ void show_registers(struct pt_regs *regs
 	 * When in-kernel, we also print out the stack and code at the
 	 * time of the fault..
 	 */
-	if (in_kernel) {
+	if (!user_mode(regs)) {
+		unsigned char c;
 		printk("Stack: ");
 		_show_stack(NULL, regs, (unsigned long *)rsp, regs->rbp);
+		printk("\n");
 
-		printk("\nCode: ");
-		if (regs->rip < PAGE_OFFSET)
-			goto bad;
-
-		for (i=0; i<20; i++) {
-			unsigned char c;
-			if (__get_user(c, &((unsigned char*)regs->rip)[i])) {
-bad:
+		printk(KERN_EMERG "Code: ");
+		if (rip < (u8 *)PAGE_OFFSET || probe_kernel_address(rip, c)) {
+			/* try starting at RIP */
+			rip = (u8 *) regs->rip;
+			code_len = code_len - code_prologue + 1;
+		}
+		for (i = 0; i < code_len; i++, rip++) {
+			if (rip < (u8 *)PAGE_OFFSET ||
+					probe_kernel_address(rip, c)) {
 				printk(" Bad RIP value.");
 				break;
 			}
-			printk("%02x ", c);
+			if (rip == (u8 *)regs->rip)
+				printk("<%02x> ", c);
+			else
+				printk("%02x ", c);
 		}
 	}
 	printk("\n");
@@ -1187,3 +1198,14 @@ static int __init kstack_setup(char *s)
 	return 0;
 }
 early_param("kstack", kstack_setup);
+
+
+static int __init code_bytes_setup(char *s)
+{
+	code_bytes = simple_strtoul(s, NULL, 0);
+	if (code_bytes > 8192)
+		code_bytes = 8192;
+
+	return 1;
+}
+__setup("code_bytes=", code_bytes_setup);


* Re: [patch 0/8] Series to improve the x86 backtracing code
  2008-01-12  5:26 [patch 0/8] Series to improve the x86 backtracing code Arjan van de Ven
                   ` (7 preceding siblings ...)
  2008-01-12  5:34 ` [patch 8/8] x86: Add the "print code before the trapping instruction" feature to 64 bit Arjan van de Ven
@ 2008-01-12  6:30 ` Ingo Molnar
  2008-01-12 18:24 ` Linus Torvalds
  9 siblings, 0 replies; 11+ messages in thread
From: Ingo Molnar @ 2008-01-12  6:30 UTC (permalink / raw)
  To: Arjan van de Ven; +Cc: linux-kernel, akpm, torvalds, tglx, hpa


* Arjan van de Ven <arjan@infradead.org> wrote:

> this patch series improves the x86 backtracing code in the following ways:
> 1) Fix a bad bug in x86 (32 bit) FRAME_POINTER backtracing (2.6.24 material)
> 2) Add the capability to mark a backtrace entry as reliable / unreliable
> 3) Change the x86 (32 bit) FRAME_POINTER backtracing to use the normal backtrace but use the frames
>    as guidance to the reliability of the backtrace entries
> 4) Patch to capture the EBP register earlier in the backtrace callchain to get a better FRAME_POINTER
>    based backtrace
> 5) Split the x86 (64 bit) backtrace code up similar to the 32 bit code by turning a macro into a function
> 6) Enable FRAME_POINTER backtracing on 64 bit similar to the 32 bit patch in number 3
> 7) Add a simple self-test module for backtraces (will move to tests/ later when that gets merged)
> 8) Add the feature to x86 64bit to print a set of bytes prior to the current EIP in the Code: line
>    (already present in 32 bit)

thanks, i've applied your new series to x86.git. (Note that i had to 
move around some of the changes to make the series bisectable.)

	Ingo


* Re: [patch 0/8] Series to improve the x86 backtracing code
  2008-01-12  5:26 [patch 0/8] Series to improve the x86 backtracing code Arjan van de Ven
                   ` (8 preceding siblings ...)
  2008-01-12  6:30 ` [patch 0/8] Series to improve the x86 backtracing code Ingo Molnar
@ 2008-01-12 18:24 ` Linus Torvalds
  9 siblings, 0 replies; 11+ messages in thread
From: Linus Torvalds @ 2008-01-12 18:24 UTC (permalink / raw)
  To: Arjan van de Ven; +Cc: linux-kernel, mingo, akpm, tglx, hpa



On Fri, 11 Jan 2008, Arjan van de Ven wrote:
> 
> this patch series improves the x86 backtracing code in the following ways:

I heartily approve of this series.

Good jorb. Especially the fact that you also fixed x86-64, which has been 
a total disaster in this area.

			Linus

