* [PATCH 0/3] arm64: stacktrace: cleanups
From: Mark Rutland @ 2023-03-24 13:49 UTC
  To: linux-arm-kernel
  Cc: broonie, catalin.marinas, kaleshsingh, madvenka, mark.rutland,
	will

Hi Catalin, Will,

These are a few minor cleanups for the arm64 stacktrace code, based on
v6.3-rc3. The first patch is a minor fix (which I don't think we need to
worry about backporting), and the last two are cleanups for
consistency.

I'd originally intended to send these as part of a more substantial
rework of the stacktrace code, but due to some other recent time sinks
that rework is still half-baked, and I suspect I won't get it out
before v6.4.

Thanks,
Mark.

Mark Rutland (3):
  arm64: stacktrace: recover return address for first entry
  arm64: stacktrace: move dump functions to end of file
  arm64: stacktrace: always inline core stacktrace functions

 arch/arm64/kernel/stacktrace.c | 141 ++++++++++++++++++---------------
 1 file changed, 75 insertions(+), 66 deletions(-)

-- 
2.30.2



* [PATCH 1/3] arm64: stacktrace: recover return address for first entry
From: Mark Rutland @ 2023-03-24 13:49 UTC
  To: linux-arm-kernel
  Cc: broonie, catalin.marinas, kaleshsingh, madvenka, mark.rutland,
	will

The function which calls the top-level backtracing function may have
been instrumented with ftrace and/or kprobes, and hence the first return
address may have been rewritten.

Factor out the existing fgraph / kretprobes address recovery, and use
this for the first address. As the comment for the fgraph case isn't all
that helpful, I've also dropped that.
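
For illustration, a minimal sketch of the problem (foo() is a
hypothetical caller, not code from this patch):

static noinline void foo(void)
{
	/*
	 * If foo() has been instrumented, the first return address the
	 * unwinder encounters may be return_to_handler (fgraph) or the
	 * kretprobe trampoline rather than foo()'s real caller, and so
	 * must be recovered just like any other entry.
	 */
	dump_stack();
}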

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Kalesh Singh <kaleshsingh@google.com>
Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/stacktrace.c | 52 +++++++++++++++++++---------------
 1 file changed, 29 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 83154303e682c..219ce0668a3dd 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -69,6 +69,31 @@ static __always_inline void unwind_init_from_task(struct unwind_state *state,
 	state->pc = thread_saved_pc(task);
 }
 
+static __always_inline int
+unwind_recover_return_address(struct unwind_state *state)
+{
+	struct task_struct *tsk = state->task;
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+	if (tsk->ret_stack &&
+	    (state->pc == (unsigned long)return_to_handler)) {
+		unsigned long orig_pc;
+		orig_pc = ftrace_graph_ret_addr(tsk, NULL, state->pc,
+						(void *)state->fp);
+		if (WARN_ON_ONCE(state->pc == orig_pc))
+			return -EINVAL;
+		state->pc = orig_pc;
+	}
+#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
+
+#ifdef CONFIG_KRETPROBES
+	if (is_kretprobe_trampoline(state->pc))
+		state->pc = kretprobe_find_ret_addr(tsk, (void *)state->fp, &state->kr_cur);
+#endif /* CONFIG_KRETPROBES */
+
+	return 0;
+}
+
 /*
  * Unwind from one frame record (A) to the next frame record (B).
  *
@@ -92,35 +117,16 @@ static int notrace unwind_next(struct unwind_state *state)
 
 	state->pc = ptrauth_strip_insn_pac(state->pc);
 
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
-	if (tsk->ret_stack &&
-		(state->pc == (unsigned long)return_to_handler)) {
-		unsigned long orig_pc;
-		/*
-		 * This is a case where function graph tracer has
-		 * modified a return address (LR) in a stack frame
-		 * to hook a function return.
-		 * So replace it to an original value.
-		 */
-		orig_pc = ftrace_graph_ret_addr(tsk, NULL, state->pc,
-						(void *)state->fp);
-		if (WARN_ON_ONCE(state->pc == orig_pc))
-			return -EINVAL;
-		state->pc = orig_pc;
-	}
-#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
-#ifdef CONFIG_KRETPROBES
-	if (is_kretprobe_trampoline(state->pc))
-		state->pc = kretprobe_find_ret_addr(tsk, (void *)state->fp, &state->kr_cur);
-#endif
-
-	return 0;
+	return unwind_recover_return_address(state);
 }
 NOKPROBE_SYMBOL(unwind_next);
 
 static void notrace unwind(struct unwind_state *state,
 			   stack_trace_consume_fn consume_entry, void *cookie)
 {
+	if (unwind_recover_return_address(state))
+		return;
+
 	while (1) {
 		int ret;
 
-- 
2.30.2



* [PATCH 2/3] arm64: stacktrace: move dump functions to end of file
From: Mark Rutland @ 2023-03-24 13:49 UTC
  To: linux-arm-kernel
  Cc: broonie, catalin.marinas, kaleshsingh, madvenka, mark.rutland,
	will

For historical reasons, the backtrace dumping functions are placed in
the middle of stacktrace.c, despite using functions defined later. For
clarity, and to make subsequent refactoring easier, move the dumping
functions to the end of stacktrace.c.
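
The resulting layout, in outline:

  unwind_init_*()                 // unwind state setup
  unwind_next(), unwind()         // core unwind loop
  arch_stack_walk()               // kernel-facing entry point
  dump_backtrace(), show_stack()  // printing, now at the end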

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Kalesh Singh <kaleshsingh@google.com>
Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/stacktrace.c | 66 +++++++++++++++++-----------------
 1 file changed, 33 insertions(+), 33 deletions(-)

diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 219ce0668a3dd..5857e2de147a7 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -139,39 +139,6 @@ static void notrace unwind(struct unwind_state *state,
 }
 NOKPROBE_SYMBOL(unwind);
 
-static bool dump_backtrace_entry(void *arg, unsigned long where)
-{
-	char *loglvl = arg;
-	printk("%s %pSb\n", loglvl, (void *)where);
-	return true;
-}
-
-void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
-		    const char *loglvl)
-{
-	pr_debug("%s(regs = %p tsk = %p)\n", __func__, regs, tsk);
-
-	if (regs && user_mode(regs))
-		return;
-
-	if (!tsk)
-		tsk = current;
-
-	if (!try_get_task_stack(tsk))
-		return;
-
-	printk("%sCall trace:\n", loglvl);
-	arch_stack_walk(dump_backtrace_entry, (void *)loglvl, tsk, regs);
-
-	put_task_stack(tsk);
-}
-
-void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
-{
-	dump_backtrace(NULL, tsk, loglvl);
-	barrier();
-}
-
 /*
  * Per-cpu stacks are only accessible when unwinding the current task in a
  * non-preemptible context.
@@ -236,3 +203,36 @@ noinline noinstr void arch_stack_walk(stack_trace_consume_fn consume_entry,
 
 	unwind(&state, consume_entry, cookie);
 }
+
+static bool dump_backtrace_entry(void *arg, unsigned long where)
+{
+	char *loglvl = arg;
+	printk("%s %pSb\n", loglvl, (void *)where);
+	return true;
+}
+
+void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
+		    const char *loglvl)
+{
+	pr_debug("%s(regs = %p tsk = %p)\n", __func__, regs, tsk);
+
+	if (regs && user_mode(regs))
+		return;
+
+	if (!tsk)
+		tsk = current;
+
+	if (!try_get_task_stack(tsk))
+		return;
+
+	printk("%sCall trace:\n", loglvl);
+	arch_stack_walk(dump_backtrace_entry, (void *)loglvl, tsk, regs);
+
+	put_task_stack(tsk);
+}
+
+void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
+{
+	dump_backtrace(NULL, tsk, loglvl);
+	barrier();
+}
-- 
2.30.2



* [PATCH 3/3] arm64: stacktrace: always inline core stacktrace functions
From: Mark Rutland @ 2023-03-24 13:49 UTC
  To: linux-arm-kernel
  Cc: broonie, catalin.marinas, kaleshsingh, madvenka, mark.rutland,
	will

The arm64 stacktrace code can be used in kprobe context, and so cannot
be safely probed. Some (but not all) of the unwind functions are
annotated with `NOKPROBE_SYMBOL()` to ensure this, with others marked as
`__always_inline`, relying on the top-level unwind function being marked
as `noinstr`.

This patch has stacktrace.c consistently mark the internal stacktrace
functions as `__always_inline`, removing the need for NOKPROBE_SYMBOL(),
as the top-level unwind function (arch_stack_walk()) is marked as
`noinstr`. This is more consistent and is a simpler pattern to follow
for future additions to stacktrace.c.
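
In outline, this replaces per-function annotations with a single
protected entry point (a sketch using signatures from stacktrace.c,
not the full code):

/* Before: each out-of-line helper must be excluded from kprobes. */
static int notrace unwind_next(struct unwind_state *state);
NOKPROBE_SYMBOL(unwind_next);

/* After: helpers are folded into the noinstr entry point. */
static __always_inline int unwind_next(struct unwind_state *state);

noinline noinstr void arch_stack_walk(stack_trace_consume_fn consume_entry,
				      void *cookie, struct task_struct *task,
				      struct pt_regs *regs);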

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Kalesh Singh <kaleshsingh@google.com>
Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/stacktrace.c | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 5857e2de147a7..ffe7f6c93fea8 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -25,8 +25,9 @@
  *
  * The regs must be on a stack currently owned by the calling task.
  */
-static __always_inline void unwind_init_from_regs(struct unwind_state *state,
-						  struct pt_regs *regs)
+static __always_inline void
+unwind_init_from_regs(struct unwind_state *state,
+		      struct pt_regs *regs)
 {
 	unwind_init_common(state, current);
 
@@ -42,7 +43,8 @@ static __always_inline void unwind_init_from_regs(struct unwind_state *state,
  *
  * The function which invokes this must be noinline.
  */
-static __always_inline void unwind_init_from_caller(struct unwind_state *state)
+static __always_inline void
+unwind_init_from_caller(struct unwind_state *state)
 {
 	unwind_init_common(state, current);
 
@@ -60,8 +62,9 @@ static __always_inline void unwind_init_from_caller(struct unwind_state *state)
  * duration of the unwind, or the unwind will be bogus. It is never valid to
  * call this for the current task.
  */
-static __always_inline void unwind_init_from_task(struct unwind_state *state,
-						  struct task_struct *task)
+static __always_inline void
+unwind_init_from_task(struct unwind_state *state,
+		      struct task_struct *task)
 {
 	unwind_init_common(state, task);
 
@@ -101,7 +104,8 @@ unwind_recover_return_address(struct unwind_state *state)
  * records (e.g. a cycle), determined based on the location and fp value of A
  * and the location (but not the fp value) of B.
  */
-static int notrace unwind_next(struct unwind_state *state)
+static __always_inline int
+unwind_next(struct unwind_state *state)
 {
 	struct task_struct *tsk = state->task;
 	unsigned long fp = state->fp;
@@ -119,10 +123,10 @@ static int notrace unwind_next(struct unwind_state *state)
 
 	return unwind_recover_return_address(state);
 }
-NOKPROBE_SYMBOL(unwind_next);
 
-static void notrace unwind(struct unwind_state *state,
-			   stack_trace_consume_fn consume_entry, void *cookie)
+static __always_inline void
+unwind(struct unwind_state *state, stack_trace_consume_fn consume_entry,
+       void *cookie)
 {
 	if (unwind_recover_return_address(state))
 		return;
@@ -137,7 +141,6 @@ static void notrace unwind(struct unwind_state *state,
 			break;
 	}
 }
-NOKPROBE_SYMBOL(unwind);
 
 /*
  * Per-cpu stacks are only accessible when unwinding the current task in a
-- 
2.30.2



* Re: [PATCH 1/3] arm64: stacktrace: recover return address for first entry
From: Kalesh Singh @ 2023-03-29  8:58 UTC
  To: Mark Rutland; +Cc: linux-arm-kernel, broonie, catalin.marinas, madvenka, will

On Fri, Mar 24, 2023 at 6:50 AM Mark Rutland <mark.rutland@arm.com> wrote:
>
> The function which calls the top-level backtracing function may have
> been instrumented with ftrace and/or kprobes, and hence the first return
> address may have been rewritten.
>
> Factor out the existing fgraph / kretprobes address recovery, and use
> this for the first address. As the comment for the fgraph case isn't all
> that helpful, I've also dropped that.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Kalesh Singh <kaleshsingh@google.com>
> Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
> Cc: Mark Brown <broonie@kernel.org>
> Cc: Will Deacon <will@kernel.org>

Reviewed-by: Kalesh Singh <kaleshsingh@google.com>

Thanks,
Kalesh



* Re: [PATCH 2/3] arm64: stacktrace: move dump functions to end of file
From: Kalesh Singh @ 2023-03-29  8:58 UTC
  To: Mark Rutland; +Cc: linux-arm-kernel, broonie, catalin.marinas, madvenka, will

On Fri, Mar 24, 2023 at 6:50 AM Mark Rutland <mark.rutland@arm.com> wrote:
>
> For historical reasons, the backtrace dumping functions are placed in
> the middle of stacktrace.c, despite using functions defined later. For
> clarity, and to make subsequent refactoring easier, move the dumping
> functions to the end of stacktrace.c.
>
> There should be no functional change as a result of this patch.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Kalesh Singh <kaleshsingh@google.com>
> Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
> Cc: Mark Brown <broonie@kernel.org>
> Cc: Will Deacon <will@kernel.org>

Reviewed-by: Kalesh Singh <kaleshsingh@google.com>

Thanks,
Kalesh



* Re: [PATCH 3/3] arm64: stacktrace: always inline core stacktrace functions
From: Kalesh Singh @ 2023-03-29  8:59 UTC
  To: Mark Rutland; +Cc: linux-arm-kernel, broonie, catalin.marinas, madvenka, will

On Fri, Mar 24, 2023 at 6:50 AM Mark Rutland <mark.rutland@arm.com> wrote:
>
> The arm64 stacktrace code can be used in kprobe context, and so cannot
> be safely probed. Some (but not all) of the unwind functions are
> annotated with `NOKPROBE_SYMBOL()` to ensure this, with others marked as
> `__always_inline`, relying on the top-level unwind function being marked
> as `noinstr`.
>
> This patch has stacktrace.c consistently mark the internal stacktrace
> functions as `__always_inline`, removing the need for NOKPROBE_SYMBOL()
> as the top-level unwind function (arch_stack_walk()) is marked as
> `noinstr`. This is more consistent and is a simpler pattern to follow
> for future additions to stacktrace.c.
>
> There should be no functional change as a result of this patch.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Kalesh Singh <kaleshsingh@google.com>
> Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
> Cc: Mark Brown <broonie@kernel.org>
> Cc: Will Deacon <will@kernel.org>

Reviewed-by: Kalesh Singh <kaleshsingh@google.com>

Thanks,
Kalesh



* Re: [PATCH 1/3] arm64: stacktrace: recover return address for first entry
From: Will Deacon @ 2023-04-06 15:25 UTC
  To: Mark Rutland
  Cc: linux-arm-kernel, broonie, catalin.marinas, kaleshsingh, madvenka

On Fri, Mar 24, 2023 at 01:49:56PM +0000, Mark Rutland wrote:
> The function which calls the top-level backtracing function may have
> been instrumented with ftrace and/or kprobes, and hence the first return
> address may have been rewritten.
> 
> Factor out the existing fgraph / kretprobes address recovery, and use
> this for the first address. As the comment for the fgraph case isn't all
> that helpful, I've also dropped that.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Kalesh Singh <kaleshsingh@google.com>
> Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
> Cc: Mark Brown <broonie@kernel.org>
> Cc: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/kernel/stacktrace.c | 52 +++++++++++++++++++---------------
>  1 file changed, 29 insertions(+), 23 deletions(-)
> 
> diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
> index 83154303e682c..219ce0668a3dd 100644
> --- a/arch/arm64/kernel/stacktrace.c
> +++ b/arch/arm64/kernel/stacktrace.c
> @@ -69,6 +69,31 @@ static __always_inline void unwind_init_from_task(struct unwind_state *state,
>  	state->pc = thread_saved_pc(task);
>  }
>  
> +static __always_inline int
> +unwind_recover_return_address(struct unwind_state *state)
> +{
> +	struct task_struct *tsk = state->task;
> +
> +#ifdef CONFIG_FUNCTION_GRAPH_TRACER
> +	if (tsk->ret_stack &&
> +	    (state->pc == (unsigned long)return_to_handler)) {
> +		unsigned long orig_pc;
> +		orig_pc = ftrace_graph_ret_addr(tsk, NULL, state->pc,
> +						(void *)state->fp);
> +		if (WARN_ON_ONCE(state->pc == orig_pc))
> +			return -EINVAL;
> +		state->pc = orig_pc;
> +	}
> +#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
> +
> +#ifdef CONFIG_KRETPROBES
> +	if (is_kretprobe_trampoline(state->pc))
> +		state->pc = kretprobe_find_ret_addr(tsk, (void *)state->fp, &state->kr_cur);
> +#endif /* CONFIG_KRETPROBES */
> +
> +	return 0;
> +}

This generates an unused variable warning with defconfig:

arch/arm64/kernel/stacktrace.c: In function ‘unwind_recover_return_address’:
arch/arm64/kernel/stacktrace.c:78:22: warning: unused variable ‘tsk’ [-Wunused-variable]
   78 |  struct task_struct *tsk = state->task;
      |                      ^~~
 

Will


* Re: [PATCH 1/3] arm64: stacktrace: recover return address for first entry
From: Mark Rutland @ 2023-04-11 16:12 UTC
  To: Will Deacon
  Cc: linux-arm-kernel, broonie, catalin.marinas, kaleshsingh, madvenka

On Thu, Apr 06, 2023 at 04:25:50PM +0100, Will Deacon wrote:
> On Fri, Mar 24, 2023 at 01:49:56PM +0000, Mark Rutland wrote:
> > +static __always_inline int
> > +unwind_recover_return_address(struct unwind_state *state)
> > +{
> > +	struct task_struct *tsk = state->task;
> > +
> > +#ifdef CONFIG_FUNCTION_GRAPH_TRACER
> > +	if (tsk->ret_stack &&
> > +	    (state->pc == (unsigned long)return_to_handler)) {
> > +		unsigned long orig_pc;
> > +		orig_pc = ftrace_graph_ret_addr(tsk, NULL, state->pc,
> > +						(void *)state->fp);
> > +		if (WARN_ON_ONCE(state->pc == orig_pc))
> > +			return -EINVAL;
> > +		state->pc = orig_pc;
> > +	}
> > +#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
> > +
> > +#ifdef CONFIG_KRETPROBES
> > +	if (is_kretprobe_trampoline(state->pc))
> > +		state->pc = kretprobe_find_ret_addr(tsk, (void *)state->fp, &state->kr_cur);
> > +#endif /* CONFIG_KRETPROBES */
> > +
> > +	return 0;
> > +}
> 
> This generates an unused variable warning with defconfig:
> 
> arch/arm64/kernel/stacktrace.c: In function ‘unwind_recover_return_address’:
> arch/arm64/kernel/stacktrace.c:78:22: warning: unused variable ‘tsk’ [-Wunused-variable]
>    78 |  struct task_struct *tsk = state->task;
>       |                      ^~~

Sorry about that; I had tested with and without each of
CONFIG_FUNCTION_GRAPH_TRACER and CONFIG_KRETPROBES, but hadn't tested without
both of them.

I'll get rid of `tsk` and use `state->task` directly in each case for v2.
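
i.e. something like the below (untested sketch):

static __always_inline int
unwind_recover_return_address(struct unwind_state *state)
{
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
	if (state->task->ret_stack &&
	    (state->pc == (unsigned long)return_to_handler)) {
		unsigned long orig_pc;
		orig_pc = ftrace_graph_ret_addr(state->task, NULL, state->pc,
						(void *)state->fp);
		if (WARN_ON_ONCE(state->pc == orig_pc))
			return -EINVAL;
		state->pc = orig_pc;
	}
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */

#ifdef CONFIG_KRETPROBES
	if (is_kretprobe_trampoline(state->pc))
		state->pc = kretprobe_find_ret_addr(state->task,
						    (void *)state->fp,
						    &state->kr_cur);
#endif /* CONFIG_KRETPROBES */

	return 0;
}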

Thanks,
Mark.

