From: Josh Poimboeuf <jpoimboe@redhat.com>
To: Jessica Yu <jeyu@redhat.com>, Jiri Kosina <jikos@kernel.org>,
Miroslav Benes <mbenes@suse.cz>, Ingo Molnar <mingo@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
Michael Ellerman <mpe@ellerman.id.au>,
Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: live-patching@vger.kernel.org, linux-kernel@vger.kernel.org,
x86@kernel.org, linuxppc-dev@lists.ozlabs.org,
linux-s390@vger.kernel.org, Vojtech Pavlik <vojtech@suse.com>,
Jiri Slaby <jslaby@suse.cz>, Petr Mladek <pmladek@suse.com>,
Chris J Arges <chris.j.arges@canonical.com>,
Andy Lutomirski <luto@kernel.org>
Subject: [RFC PATCH v2 07/18] stacktrace/x86: function for detecting reliable stack traces
Date: Thu, 28 Apr 2016 15:44:38 -0500
Message-ID: <4d34d452bf8f85c7d6d5f93db1d3eeb4cba335c7.1461875890.git.jpoimboe@redhat.com>
In-Reply-To: <cover.1461875890.git.jpoimboe@redhat.com>

For live patching and possibly other use cases, a stack trace is only
useful if it can be guaranteed to be completely reliable.  Add a new
save_stack_trace_tsk_reliable() function to achieve that.

Scenarios which indicate that a stack trace may be unreliable:

- running tasks
- interrupt stacks
- preemption
- corrupted stack data
- the stack grows the wrong way
- the stack walk doesn't reach the bottom
- the user didn't provide a large enough entries array

Also add a config option so arch-independent code can determine at build
time whether the function is implemented.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
---
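
To illustrate the intended calling convention, here is a rough sketch of
how a consumer such as the livepatch consistency code might use the new
interface.  This sketch is not part of the patch; the helper name, the
MAX_STACK_ENTRIES value and the surrounding logic are made up for the
example:

#define MAX_STACK_ENTRIES	64	/* illustrative value */

/* Check whether a sleeping (or current) task's stack can be trusted. */
static int check_task_stack(struct task_struct *task)
{
	/*
	 * static to keep the stack footprint small; a real caller would
	 * need to serialize access to this buffer.
	 */
	static unsigned long entries[MAX_STACK_ENTRIES];
	struct stack_trace trace;
	int ret;

	trace.skip = 0;
	trace.nr_entries = 0;
	trace.max_entries = MAX_STACK_ENTRIES;
	trace.entries = entries;

	ret = save_stack_trace_tsk_reliable(task, &trace);
	if (ret == -ENOSYS)
		return 0;	/* arch has no reliable unwinder; fall back */
	if (ret)
		return ret;	/* stack couldn't be trusted; try again later */

	/* ... inspect trace.entries[0 .. trace.nr_entries - 1] ... */
	return 0;
}
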
arch/Kconfig | 6 ++++
arch/x86/Kconfig | 1 +
arch/x86/kernel/dumpstack.c | 77 ++++++++++++++++++++++++++++++++++++++++++++
arch/x86/kernel/stacktrace.c | 24 ++++++++++++++
include/linux/kernel.h | 1 +
include/linux/stacktrace.h | 20 +++++++++---
kernel/extable.c | 2 +-
kernel/stacktrace.c | 4 +--
lib/Kconfig.debug | 6 ++++
9 files changed, 134 insertions(+), 7 deletions(-)
diff --git a/arch/Kconfig b/arch/Kconfig
index 8f84fd2..ec4d480 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -598,6 +598,12 @@ config HAVE_STACK_VALIDATION
Architecture supports the 'objtool check' host tool command, which
performs compile-time stack metadata validation.
+config HAVE_RELIABLE_STACKTRACE
+ bool
+ help
+ Architecture has a save_stack_trace_tsk_reliable() function which
+ only returns a stack trace if it can guarantee the trace is reliable.
+
#
# ABI hall of shame
#
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 0b128b4..78c4e00 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -140,6 +140,7 @@ config X86
select HAVE_PERF_REGS
select HAVE_PERF_USER_STACK_DUMP
select HAVE_REGS_AND_STACK_ACCESS_API
+ select HAVE_RELIABLE_STACKTRACE if X86_64 && FRAME_POINTER
select HAVE_SYSCALL_TRACEPOINTS
select HAVE_UID16 if X86_32 || IA32_EMULATION
select HAVE_UNSTABLE_SCHED_CLOCK
diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
index 13d240c..70d0013 100644
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -145,6 +145,83 @@ int print_context_stack_bp(struct thread_info *tinfo,
}
EXPORT_SYMBOL_GPL(print_context_stack_bp);
+#ifdef CONFIG_RELIABLE_STACKTRACE
+/*
+ * Only succeeds if the stack trace is deemed reliable. This relies on the
+ * fact that frame pointers are reliable thanks to CONFIG_STACK_VALIDATION.
+ *
+ * The caller must ensure that the task is either sleeping or is the current
+ * task.
+ */
+int print_context_stack_reliable(struct thread_info *tinfo,
+ unsigned long *stack, unsigned long *bp,
+ const struct stacktrace_ops *ops,
+ void *data, unsigned long *end, int *graph)
+{
+ struct stack_frame *frame = (struct stack_frame *)*bp;
+ struct stack_frame *last_frame = NULL;
+ unsigned long *ret_addr = &frame->return_address;
+
+ /*
+ * If the kernel was preempted by an IRQ, we can't trust the stack
+ * because the preempted function might not have gotten the chance to
+ * save the frame pointer on the stack before it was interrupted.
+ */
+ if (tinfo->task->flags & PF_PREEMPT_IRQ)
+ return -EINVAL;
+
+ /*
+ * A freshly forked task has an empty stack trace. We can consider
+ * that to be reliable.
+ */
+ if (test_ti_thread_flag(tinfo, TIF_FORK))
+ return 0;
+
+ while (valid_stack_ptr(tinfo, ret_addr, sizeof(*ret_addr), end)) {
+ unsigned long addr = *ret_addr;
+
+ /*
+ * Make sure the stack only grows down.
+ */
+ if (frame <= last_frame)
+ return -EINVAL;
+
+ /*
+ * Make sure the frame refers to a valid kernel function.
+ */
+ if (!core_kernel_text(addr) && !init_kernel_text(addr) &&
+ !is_module_text_address(addr))
+ return -EINVAL;
+
+ /*
+ * Save the kernel text address and make sure the entries array
+ * isn't full.
+ */
+ if (ops->address(data, addr, 1))
+ return -EINVAL;
+
+ /*
+ * If the function graph tracer is in effect, save the real
+ * function address.
+ */
+ print_ftrace_graph_addr(addr, data, ops, tinfo, graph);
+
+ last_frame = frame;
+ frame = frame->next_frame;
+ ret_addr = &frame->return_address;
+ }
+
+ /*
+ * Make sure we reached the bottom of the stack.
+ */
+ if (last_frame + 1 != (void *)task_pt_regs(tinfo->task))
+ return -EINVAL;
+
+ *bp = (unsigned long)frame;
+ return 0;
+}
+#endif /* CONFIG_RELIABLE_STACKTRACE */
+
static int print_trace_stack(void *data, char *name)
{
printk("%s <%s> ", (char *)data, name);
diff --git a/arch/x86/kernel/stacktrace.c b/arch/x86/kernel/stacktrace.c
index 9ee98ee..10882e4 100644
--- a/arch/x86/kernel/stacktrace.c
+++ b/arch/x86/kernel/stacktrace.c
@@ -148,3 +148,27 @@ void save_stack_trace_user(struct stack_trace *trace)
trace->entries[trace->nr_entries++] = ULONG_MAX;
}
+#ifdef CONFIG_RELIABLE_STACKTRACE
+
+static int save_stack_stack_reliable(void *data, char *name)
+{
+ return -EINVAL;
+}
+
+static const struct stacktrace_ops save_stack_ops_reliable = {
+ .stack = save_stack_stack_reliable,
+ .address = save_stack_address,
+ .walk_stack = print_context_stack_reliable,
+};
+
+/*
+ * Returns 0 if the stack trace is deemed reliable. The caller must ensure
+ * that the task is either sleeping or is the current task.
+ */
+int save_stack_trace_tsk_reliable(struct task_struct *tsk,
+ struct stack_trace *trace)
+{
+ return dump_trace(tsk, NULL, NULL, 0, &save_stack_ops_reliable, trace);
+}
+
+#endif /* CONFIG_RELIABLE_STACKTRACE */
diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index cc73982..6be1e82 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -429,6 +429,7 @@ extern char *get_options(const char *str, int nints, int *ints);
extern unsigned long long memparse(const char *ptr, char **retptr);
extern bool parse_option_str(const char *str, const char *option);
+extern int init_kernel_text(unsigned long addr);
extern int core_kernel_text(unsigned long addr);
extern int core_kernel_data(unsigned long addr);
extern int __kernel_text_address(unsigned long addr);
diff --git a/include/linux/stacktrace.h b/include/linux/stacktrace.h
index 0a34489..527e4cc 100644
--- a/include/linux/stacktrace.h
+++ b/include/linux/stacktrace.h
@@ -2,17 +2,18 @@
#define __LINUX_STACKTRACE_H
#include <linux/types.h>
+#include <linux/errno.h>
struct task_struct;
struct pt_regs;
-#ifdef CONFIG_STACKTRACE
struct stack_trace {
unsigned int nr_entries, max_entries;
unsigned long *entries;
int skip; /* input argument: How many entries to skip */
};
+#ifdef CONFIG_STACKTRACE
extern void save_stack_trace(struct stack_trace *trace);
extern void save_stack_trace_regs(struct pt_regs *regs,
struct stack_trace *trace);
@@ -29,12 +30,23 @@ extern void save_stack_trace_user(struct stack_trace *trace);
# define save_stack_trace_user(trace) do { } while (0)
#endif
-#else
+#else /* !CONFIG_STACKTRACE */
# define save_stack_trace(trace) do { } while (0)
# define save_stack_trace_tsk(tsk, trace) do { } while (0)
# define save_stack_trace_user(trace) do { } while (0)
# define print_stack_trace(trace, spaces) do { } while (0)
# define snprint_stack_trace(buf, size, trace, spaces) do { } while (0)
-#endif
+#endif /* CONFIG_STACKTRACE */
-#endif
+#ifdef CONFIG_RELIABLE_STACKTRACE
+extern int save_stack_trace_tsk_reliable(struct task_struct *tsk,
+ struct stack_trace *trace);
+#else
+static inline int save_stack_trace_tsk_reliable(struct task_struct *tsk,
+ struct stack_trace *trace)
+{
+ return -ENOSYS;
+}
+#endif /* CONFIG_RELIABLE_STACKTRACE */
+
+#endif /* __LINUX_STACKTRACE_H */
diff --git a/kernel/extable.c b/kernel/extable.c
index e820cce..c085844 100644
--- a/kernel/extable.c
+++ b/kernel/extable.c
@@ -58,7 +58,7 @@ const struct exception_table_entry *search_exception_tables(unsigned long addr)
return e;
}
-static inline int init_kernel_text(unsigned long addr)
+int init_kernel_text(unsigned long addr)
{
if (addr >= (unsigned long)_sinittext &&
addr < (unsigned long)_einittext)
diff --git a/kernel/stacktrace.c b/kernel/stacktrace.c
index b6e4c16..f35bc5d 100644
--- a/kernel/stacktrace.c
+++ b/kernel/stacktrace.c
@@ -58,8 +58,8 @@ int snprint_stack_trace(char *buf, size_t size,
EXPORT_SYMBOL_GPL(snprint_stack_trace);
/*
- * Architectures that do not implement save_stack_trace_tsk or
- * save_stack_trace_regs get this weak alias and a once-per-bootup warning
+ * Architectures that do not implement save_stack_trace_*()
+ * get this weak alias and a once-per-bootup warning
* (whenever this facility is utilized - for example by procfs):
*/
__weak void
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 5d57177..189a2d7 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1164,6 +1164,12 @@ config STACKTRACE
It is also used by various kernel debugging features that require
stack trace generation.
+config RELIABLE_STACKTRACE
+ def_bool y
+ depends on HAVE_RELIABLE_STACKTRACE
+ depends on STACKTRACE
+ depends on STACK_VALIDATION
+
config DEBUG_KOBJECT
bool "kobject debugging"
depends on DEBUG_KERNEL
--
2.4.11