From: Mark Rutland <mark.rutland@arm.com>
To: madvenka@linux.microsoft.com
Cc: broonie@kernel.org, jpoimboe@redhat.com, ardb@kernel.org,
nobuta.keiya@fujitsu.com, sjitindarsingh@gmail.com,
catalin.marinas@arm.com, will@kernel.org,
jamorris@linux.microsoft.com,
linux-arm-kernel@lists.infradead.org,
live-patching@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v15 6/6] arm64: Introduce arch_stack_walk_reliable()
Date: Sun, 26 Jun 2022 09:57:57 +0100 [thread overview]
Message-ID: <YrgflcfxP7pYtob7@FVFF77S0Q05N> (raw)
In-Reply-To: <20220617210717.27126-7-madvenka@linux.microsoft.com>
On Fri, Jun 17, 2022 at 04:07:17PM -0500, madvenka@linux.microsoft.com wrote:
> From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>
>
> Introduce arch_stack_walk_reliable() for ARM64. This works like
> arch_stack_walk() except that it returns -EINVAL if the stack trace is not
> reliable.
>
> Until all the reliability checks are in place, arch_stack_walk_reliable()
> may not be used by livepatch. But it may be used by debug and test code.
For the moment I would strongly prefer *not* to add this until we have the
missing bits and pieces sorted out.
Until then, I'd like to ensure that any infrastructure we add is immediately
useful and tested. One way to do that would be to enhance the stack dumping
code (i.e. dump_backtrace()) to log some metadata.
As an end-goal, I'd like to get to a point where we can do:
* Explicit logging when the trace terminates at the final frame, e.g.

    stacktrace:
      function_c+offset/total
      function_b+offset/total
      function_a+offset/total
      <unwind successful>

* Explicit logging of early termination, e.g.

    stacktrace:
      function_c+offset/total
      <unwind terminated early (bad FP)>

* Unreliability on individual elements, e.g.

    stacktrace:
      function_c+offset/total
      function_b+offset/total (?)
      function_a+offset/total

* Annotations for special unwinding, e.g.

    stacktrace:
      function_c+offset/total (K)         // kretprobes trampoline
      function_b+offset/total (F)         // ftrace trampoline
      function_a+offset/total (FK)        // ftrace and kretprobes
      other_function+offset/total (P)     // from pt_regs::pc
      another_function+offset/total (L?)  // from pt_regs::lr, unreliable
      something_else+offset/total

Note: the comments here are just to explain the idea; I don't expect those in
the actual output. A rough sketch of how such per-frame metadata could be
carried follows below.
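To make that concrete, the unwinder could record a small amount of metadata
per frame for the printer to consume. A minimal sketch, with hypothetical
names (unwind_frame_info and the UNWIND_FRAME_* flags are illustrative, not
existing kernel interfaces):

enum unwind_frame_flags {
	UNWIND_FRAME_UNRELIABLE	= (1 << 0),	/* "(?)"                   */
	UNWIND_FRAME_FTRACE	= (1 << 1),	/* "(F)" ftrace trampoline */
	UNWIND_FRAME_KRETPROBE	= (1 << 2),	/* "(K)" kretprobes        */
	UNWIND_FRAME_REGS_PC	= (1 << 3),	/* "(P)" from pt_regs::pc  */
	UNWIND_FRAME_REGS_LR	= (1 << 4),	/* "(L)" from pt_regs::lr  */
};

struct unwind_frame_info {
	unsigned long	pc;		/* return address for this frame */
	unsigned int	flags;		/* UNWIND_FRAME_* annotations    */
};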
That'll justify some of the infrastructure we need for reliable unwinding, and
ensure that it is tested, well before we actually enable reliable stacktracing.
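For illustration only, the dump_backtrace() side could then render the
annotations from those hypothetical flags roughly as below
(dump_backtrace_frame() is a made-up name, not an existing function):

static void dump_backtrace_frame(const char *loglvl,
				 const struct unwind_frame_info *info)
{
	char annot[8] = { 0 };
	int n = 0;

	if (info->flags & UNWIND_FRAME_FTRACE)
		annot[n++] = 'F';
	if (info->flags & UNWIND_FRAME_KRETPROBE)
		annot[n++] = 'K';
	if (info->flags & UNWIND_FRAME_REGS_PC)
		annot[n++] = 'P';
	if (info->flags & UNWIND_FRAME_REGS_LR)
		annot[n++] = 'L';
	if (info->flags & UNWIND_FRAME_UNRELIABLE)
		annot[n++] = '?';

	if (n)
		printk("%s %pS (%s)\n", loglvl, (void *)info->pc, annot);
	else
		printk("%s %pS\n", loglvl, (void *)info->pc);
}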
Thanks,
Mark.
>
> Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
> Reviewed-by: Mark Brown <broonie@kernel.org>
> ---
> arch/arm64/kernel/stacktrace.c | 23 +++++++++++++++++++++++
> 1 file changed, 23 insertions(+)
>
> diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
> index eda8581f7dbe..8016ba0e2c96 100644
> --- a/arch/arm64/kernel/stacktrace.c
> +++ b/arch/arm64/kernel/stacktrace.c
> @@ -383,3 +383,26 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
>
> unwind(&state, consume_entry, cookie);
> }
> +
> +/*
> + * arch_stack_walk_reliable() may not be used for livepatch until all of
> + * the reliability checks are in place in unwind_consume(). However,
> + * debug and test code can choose to use it even if all the checks are not
> + * in place.
> + */
> +noinline int notrace arch_stack_walk_reliable(
> + stack_trace_consume_fn consume_entry,
> + void *cookie,
> + struct task_struct *task)
> +{
> + struct unwind_state state;
> + bool reliable;
> +
> + if (task == current)
> + unwind_init_from_caller(&state);
> + else
> + unwind_init_from_task(&state, task);
> +
> + reliable = unwind(&state, consume_entry, cookie);
> + return reliable ? 0 : -EINVAL;
> +}
> --
> 2.25.1
>