* [PATCH 1/6] KVM: arm64: Move PROTECTED_NVHE_STACKTRACE around
2022-07-27 14:29 ` [PATCH 0/6] KVM: arm64: nVHE stack unwinder rework Marc Zyngier
@ 2022-07-27 14:29 ` Marc Zyngier
2022-07-27 14:29 ` [PATCH 2/6] KVM: arm64: Move nVHE stacktrace unwinding into its own compilation unit Marc Zyngier
` (7 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: Marc Zyngier @ 2022-07-27 14:29 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: mark.rutland, broonie, madvenka, tabba, oliver.upton, qperret,
kaleshsingh, james.morse, alexandru.elisei, suzuki.poulose,
catalin.marinas, andreyknvl, vincenzo.frascino, mhiramat, ast,
wangkefeng.wang, elver, keirf, yuzenghui, ardb, oupton,
kernel-team
Make the dependency on NVHE_EL2_DEBUG more obvious by moving the
stacktrace configuration *after* it.
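The effect is purely organisational: the depended-upon symbol is declared
before its dependent, so a reader scanning the Kconfig file top-to-bottom
meets the debug option first. As a minimal illustration of the pattern
(symbol names shortened for illustration, not the actual Kconfig text):

```
config FOO_DEBUG
	bool "Debug mode for FOO"

config FOO_STACKTRACE
	bool "FOO stacktraces"
	depends on FOO_DEBUG
```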
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/Kconfig | 24 +++++++++++-------------
1 file changed, 11 insertions(+), 13 deletions(-)
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 09c995869916..815cc118c675 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -46,6 +46,16 @@ menuconfig KVM
If unsure, say N.
+config NVHE_EL2_DEBUG
+ bool "Debug mode for non-VHE EL2 object"
+ depends on KVM
+ help
+ Say Y here to enable the debug mode for the non-VHE KVM EL2 object.
+ Failure reports will BUG() in the hypervisor. This is intended for
+ local EL2 hypervisor development.
+
+ If unsure, say N.
+
config PROTECTED_NVHE_STACKTRACE
bool "Protected KVM hypervisor stacktraces"
depends on NVHE_EL2_DEBUG
@@ -53,22 +63,10 @@ config PROTECTED_NVHE_STACKTRACE
help
Say Y here to enable pKVM hypervisor stacktraces on hyp_panic()
- If you are not using protected nVHE (pKVM), say N.
-
If using protected nVHE mode, but cannot afford the associated
memory cost (less than 0.75 page per CPU) of pKVM stacktraces,
say N.
- If unsure, say N.
-
-config NVHE_EL2_DEBUG
- bool "Debug mode for non-VHE EL2 object"
- depends on KVM
- help
- Say Y here to enable the debug mode for the non-VHE KVM EL2 object.
- Failure reports will BUG() in the hypervisor. This is intended for
- local EL2 hypervisor development.
-
- If unsure, say N.
+ If unsure, or not using protected nVHE (pKVM), say N.
endif # VIRTUALIZATION
--
2.34.1
* [PATCH 2/6] KVM: arm64: Move nVHE stacktrace unwinding into its own compilation unit
2022-07-27 14:29 ` [PATCH 0/6] KVM: arm64: nVHE stack unwinder rework Marc Zyngier
2022-07-27 14:29 ` [PATCH 1/6] KVM: arm64: Move PROTECTED_NVHE_STACKTRACE around Marc Zyngier
@ 2022-07-27 14:29 ` Marc Zyngier
2022-07-27 14:29 ` [PATCH 3/6] KVM: arm64: Make unwind()/on_accessible_stack() per-unwinder functions Marc Zyngier
` (6 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: Marc Zyngier @ 2022-07-27 14:29 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: mark.rutland, broonie, madvenka, tabba, oliver.upton, qperret,
kaleshsingh, james.morse, alexandru.elisei, suzuki.poulose,
catalin.marinas, andreyknvl, vincenzo.frascino, mhiramat, ast,
wangkefeng.wang, elver, keirf, yuzenghui, ardb, oupton,
kernel-team
The unwinding code doesn't really belong to the exit handling
code. Instead, move it to a file (conveniently named stacktrace.c
to confuse the reviewer), and move all the stacktrace-related
stuff there.
It will be joined by more code very soon.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/stacktrace/nvhe.h | 2 +
arch/arm64/kvm/Makefile | 2 +-
arch/arm64/kvm/handle_exit.c | 98 ------------------
arch/arm64/kvm/stacktrace.c | 120 +++++++++++++++++++++++
4 files changed, 123 insertions(+), 99 deletions(-)
create mode 100644 arch/arm64/kvm/stacktrace.c
diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/asm/stacktrace/nvhe.h
index 600dbc2220b6..8a5cb96d7143 100644
--- a/arch/arm64/include/asm/stacktrace/nvhe.h
+++ b/arch/arm64/include/asm/stacktrace/nvhe.h
@@ -172,5 +172,7 @@ static inline int notrace unwind_next(struct unwind_state *state)
}
NOKPROBE_SYMBOL(unwind_next);
+void kvm_nvhe_dump_backtrace(unsigned long hyp_offset);
+
#endif /* __KVM_NVHE_HYPERVISOR__ */
#endif /* __ASM_STACKTRACE_NVHE_H */
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index aa127ae9f675..5e33c2d4645a 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -12,7 +12,7 @@ obj-$(CONFIG_KVM) += hyp/
kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
inject_fault.o va_layout.o handle_exit.o \
- guest.o debug.o reset.o sys_regs.o \
+ guest.o debug.o reset.o sys_regs.o stacktrace.o \
vgic-sys-reg-v3.o fpsimd.o pkvm.o \
arch_timer.o trng.o vmid.o \
vgic/vgic.o vgic/vgic-init.o \
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index c14fc4ba4422..ef8b57953aa2 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -319,104 +319,6 @@ void handle_exit_early(struct kvm_vcpu *vcpu, int exception_index)
kvm_handle_guest_serror(vcpu, kvm_vcpu_get_esr(vcpu));
}
-/*
- * kvm_nvhe_dump_backtrace_entry - Symbolize and print an nVHE backtrace entry
- *
- * @arg : the hypervisor offset, used for address translation
- * @where : the program counter corresponding to the stack frame
- */
-static bool kvm_nvhe_dump_backtrace_entry(void *arg, unsigned long where)
-{
- unsigned long va_mask = GENMASK_ULL(vabits_actual - 1, 0);
- unsigned long hyp_offset = (unsigned long)arg;
-
- /* Mask tags and convert to kern addr */
- where = (where & va_mask) + hyp_offset;
- kvm_err(" [<%016lx>] %pB\n", where, (void *)(where + kaslr_offset()));
-
- return true;
-}
-
-static inline void kvm_nvhe_dump_backtrace_start(void)
-{
- kvm_err("nVHE call trace:\n");
-}
-
-static inline void kvm_nvhe_dump_backtrace_end(void)
-{
- kvm_err("---[ end nVHE call trace ]---\n");
-}
-
-/*
- * hyp_dump_backtrace - Dump the non-protected nVHE backtrace.
- *
- * @hyp_offset: hypervisor offset, used for address translation.
- *
- * The host can directly access HYP stack pages in non-protected
- * mode, so the unwinding is done directly from EL1. This removes
- * the need for shared buffers between host and hypervisor for
- * the stacktrace.
- */
-static void hyp_dump_backtrace(unsigned long hyp_offset)
-{
- struct kvm_nvhe_stacktrace_info *stacktrace_info;
- struct unwind_state state;
-
- stacktrace_info = this_cpu_ptr_nvhe_sym(kvm_stacktrace_info);
-
- kvm_nvhe_unwind_init(&state, stacktrace_info->fp, stacktrace_info->pc);
-
- kvm_nvhe_dump_backtrace_start();
- unwind(&state, kvm_nvhe_dump_backtrace_entry, (void *)hyp_offset);
- kvm_nvhe_dump_backtrace_end();
-}
-
-#ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
-DECLARE_KVM_NVHE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)],
- pkvm_stacktrace);
-
-/*
- * pkvm_dump_backtrace - Dump the protected nVHE HYP backtrace.
- *
- * @hyp_offset: hypervisor offset, used for address translation.
- *
- * Dumping of the pKVM HYP backtrace is done by reading the
- * stack addresses from the shared stacktrace buffer, since the
- * host cannot directly access hypervisor memory in protected
- * mode.
- */
-static void pkvm_dump_backtrace(unsigned long hyp_offset)
-{
- unsigned long *stacktrace
- = (unsigned long *) this_cpu_ptr_nvhe_sym(pkvm_stacktrace);
- int i, size = NVHE_STACKTRACE_SIZE / sizeof(long);
-
- kvm_nvhe_dump_backtrace_start();
- /* The saved stacktrace is terminated by a null entry */
- for (i = 0; i < size && stacktrace[i]; i++)
- kvm_nvhe_dump_backtrace_entry((void *)hyp_offset, stacktrace[i]);
- kvm_nvhe_dump_backtrace_end();
-}
-#else /* !CONFIG_PROTECTED_NVHE_STACKTRACE */
-static void pkvm_dump_backtrace(unsigned long hyp_offset)
-{
- kvm_err("Cannot dump pKVM nVHE stacktrace: !CONFIG_PROTECTED_NVHE_STACKTRACE\n");
-}
-#endif /* CONFIG_PROTECTED_NVHE_STACKTRACE */
-
-/*
- * kvm_nvhe_dump_backtrace - Dump KVM nVHE hypervisor backtrace.
- *
- * @hyp_offset: hypervisor offset, used for address translation.
- */
-static void kvm_nvhe_dump_backtrace(unsigned long hyp_offset)
-{
- if (is_protected_kvm_enabled())
- pkvm_dump_backtrace(hyp_offset);
- else
- hyp_dump_backtrace(hyp_offset);
-}
-
void __noreturn __cold nvhe_hyp_panic_handler(u64 esr, u64 spsr,
u64 elr_virt, u64 elr_phys,
u64 par, uintptr_t vcpu,
diff --git a/arch/arm64/kvm/stacktrace.c b/arch/arm64/kvm/stacktrace.c
new file mode 100644
index 000000000000..9812aefdcfb4
--- /dev/null
+++ b/arch/arm64/kvm/stacktrace.c
@@ -0,0 +1,120 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * KVM nVHE hypervisor stack tracing support.
+ *
+ * The unwinder implementation depends on the nVHE mode:
+ *
+ * 1) Non-protected nVHE mode - the host can directly access the
+ * HYP stack pages and unwind the HYP stack in EL1. This saves having
+ * to allocate shared buffers for the host to read the unwinded
+ * stacktrace.
+ *
+ * 2) pKVM (protected nVHE) mode - the host cannot directly access
+ * the HYP memory. The stack is unwinded in EL2 and dumped to a shared
+ * buffer where the host can read and print the stacktrace.
+ *
+ * Copyright (C) 2022 Google LLC
+ */
+
+#include <linux/kvm.h>
+#include <linux/kvm_host.h>
+
+#include <asm/stacktrace/nvhe.h>
+
+/*
+ * kvm_nvhe_dump_backtrace_entry - Symbolize and print an nVHE backtrace entry
+ *
+ * @arg : the hypervisor offset, used for address translation
+ * @where : the program counter corresponding to the stack frame
+ */
+static bool kvm_nvhe_dump_backtrace_entry(void *arg, unsigned long where)
+{
+ unsigned long va_mask = GENMASK_ULL(vabits_actual - 1, 0);
+ unsigned long hyp_offset = (unsigned long)arg;
+
+ /* Mask tags and convert to kern addr */
+ where = (where & va_mask) + hyp_offset;
+ kvm_err(" [<%016lx>] %pB\n", where, (void *)(where + kaslr_offset()));
+
+ return true;
+}
+
+static void kvm_nvhe_dump_backtrace_start(void)
+{
+ kvm_err("nVHE call trace:\n");
+}
+
+static void kvm_nvhe_dump_backtrace_end(void)
+{
+ kvm_err("---[ end nVHE call trace ]---\n");
+}
+
+/*
+ * hyp_dump_backtrace - Dump the non-protected nVHE backtrace.
+ *
+ * @hyp_offset: hypervisor offset, used for address translation.
+ *
+ * The host can directly access HYP stack pages in non-protected
+ * mode, so the unwinding is done directly from EL1. This removes
+ * the need for shared buffers between host and hypervisor for
+ * the stacktrace.
+ */
+static void hyp_dump_backtrace(unsigned long hyp_offset)
+{
+ struct kvm_nvhe_stacktrace_info *stacktrace_info;
+ struct unwind_state state;
+
+ stacktrace_info = this_cpu_ptr_nvhe_sym(kvm_stacktrace_info);
+
+ kvm_nvhe_unwind_init(&state, stacktrace_info->fp, stacktrace_info->pc);
+
+ kvm_nvhe_dump_backtrace_start();
+ unwind(&state, kvm_nvhe_dump_backtrace_entry, (void *)hyp_offset);
+ kvm_nvhe_dump_backtrace_end();
+}
+
+#ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
+DECLARE_KVM_NVHE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)],
+ pkvm_stacktrace);
+
+/*
+ * pkvm_dump_backtrace - Dump the protected nVHE HYP backtrace.
+ *
+ * @hyp_offset: hypervisor offset, used for address translation.
+ *
+ * Dumping of the pKVM HYP backtrace is done by reading the
+ * stack addresses from the shared stacktrace buffer, since the
+ * host cannot directly access hypervisor memory in protected
+ * mode.
+ */
+static void pkvm_dump_backtrace(unsigned long hyp_offset)
+{
+ unsigned long *stacktrace
+ = (unsigned long *) this_cpu_ptr_nvhe_sym(pkvm_stacktrace);
+ int i, size = NVHE_STACKTRACE_SIZE / sizeof(long);
+
+ kvm_nvhe_dump_backtrace_start();
+ /* The saved stacktrace is terminated by a null entry */
+ for (i = 0; i < size && stacktrace[i]; i++)
+ kvm_nvhe_dump_backtrace_entry((void *)hyp_offset, stacktrace[i]);
+ kvm_nvhe_dump_backtrace_end();
+}
+#else /* !CONFIG_PROTECTED_NVHE_STACKTRACE */
+static void pkvm_dump_backtrace(unsigned long hyp_offset)
+{
+ kvm_err("Cannot dump pKVM nVHE stacktrace: !CONFIG_PROTECTED_NVHE_STACKTRACE\n");
+}
+#endif /* CONFIG_PROTECTED_NVHE_STACKTRACE */
+
+/*
+ * kvm_nvhe_dump_backtrace - Dump KVM nVHE hypervisor backtrace.
+ *
+ * @hyp_offset: hypervisor offset, used for address translation.
+ */
+void kvm_nvhe_dump_backtrace(unsigned long hyp_offset)
+{
+ if (is_protected_kvm_enabled())
+ pkvm_dump_backtrace(hyp_offset);
+ else
+ hyp_dump_backtrace(hyp_offset);
+}
--
2.34.1
* [PATCH 3/6] KVM: arm64: Make unwind()/on_accessible_stack() per-unwinder functions
2022-07-27 14:29 ` [PATCH 0/6] KVM: arm64: nVHE stack unwinder rework Marc Zyngier
2022-07-27 14:29 ` [PATCH 1/6] KVM: arm64: Move PROTECTED_NVHE_STACKTRACE around Marc Zyngier
2022-07-27 14:29 ` [PATCH 2/6] KVM: arm64: Move nVHE stacktrace unwinding into its own compilation unit Marc Zyngier
@ 2022-07-27 14:29 ` Marc Zyngier
2022-07-27 17:32 ` Mark Brown
2022-07-27 14:29 ` [PATCH 4/6] KVM: arm64: Move nVHE-only helpers into kvm/stacktrace.c Marc Zyngier
` (5 subsequent siblings)
8 siblings, 1 reply; 11+ messages in thread
From: Marc Zyngier @ 2022-07-27 14:29 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: mark.rutland, broonie, madvenka, tabba, oliver.upton, qperret,
kaleshsingh, james.morse, alexandru.elisei, suzuki.poulose,
catalin.marinas, andreyknvl, vincenzo.frascino, mhiramat, ast,
wangkefeng.wang, elver, keirf, yuzenghui, ardb, oupton,
kernel-team
Having multiple versions of on_accessible_stack() (one per unwinder)
makes it very hard to reason about what is used where due to the
complexity of the various includes, the forward declarations, and
the reliance on everything being 'inline'.
Instead, move the code back where it should be. Each unwinder
implements:
- on_accessible_stack() as well as the helpers it depends on,
- unwind()/unwind_next(), as they pass on_accessible_stack as
a parameter to unwind_next_common() (which is the only common
code here)
This hardly results in any duplication, and makes it much
easier to reason about the code.
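The resulting shape — a single common frame-walking core that takes the
accessibility check as a callback, with each unwinder supplying its own —
can be sketched as a stand-alone C program. All names, values and the
frame-record layout below are illustrative (modelled on the AArch64
convention of fp[0] = previous fp, fp[1] = return address), not the
kernel's actual code:

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal unwind state: a frame pointer and a program counter. */
struct unwind_state {
	unsigned long fp;
	unsigned long pc;
};

/* Per-unwinder policy, passed as a parameter rather than being a fixed
 * inline function: is [sp, sp + size) on a stack this unwinder may read? */
typedef bool (*on_accessible_stack_fn)(unsigned long sp, unsigned long size);

/*
 * Common core shared by all unwinders: validate the current frame via the
 * callback, then step to the next frame record.
 */
static int unwind_next_common(struct unwind_state *state,
			      on_accessible_stack_fn accessible)
{
	unsigned long *frame;

	if (state->fp & 0x7)		/* frame records are 8-byte aligned */
		return -1;
	if (!accessible(state->fp, 16))	/* a record is two 8-byte words */
		return -1;

	frame = (unsigned long *)state->fp;
	state->fp = frame[0];		/* previous frame pointer */
	state->pc = frame[1];		/* return address */

	return state->fp ? 0 : -1;	/* treat a NULL fp as end-of-stack */
}

/* One example "unwinder": its only stack is a small synthetic buffer. */
static unsigned long demo_stack[8];

static bool demo_accessible(unsigned long sp, unsigned long size)
{
	unsigned long low = (unsigned long)demo_stack;
	unsigned long high = low + sizeof(demo_stack);

	return sp >= low && sp + size <= high;
}
```

A second unwinder would only add its own `accessible` callback; the
walking logic is written once, which is the duplication-avoidance argument
made above.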
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/stacktrace.h | 74 ------------------
arch/arm64/include/asm/stacktrace/common.h | 55 ++++---------
arch/arm64/include/asm/stacktrace/nvhe.h | 84 +-------------------
arch/arm64/kernel/stacktrace.c | 90 ++++++++++++++++++++++
arch/arm64/kvm/hyp/nvhe/stacktrace.c | 52 +++++++++++++
arch/arm64/kvm/stacktrace.c | 55 +++++++++++++
6 files changed, 213 insertions(+), 197 deletions(-)
diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index ea828579a98b..6ebdcdff77f5 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -57,78 +57,4 @@ static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
struct stack_info *info) { return false; }
#endif
-
-/*
- * We can only safely access per-cpu stacks from current in a non-preemptible
- * context.
- */
-static inline bool on_accessible_stack(const struct task_struct *tsk,
- unsigned long sp, unsigned long size,
- struct stack_info *info)
-{
- if (on_accessible_stack_common(tsk, sp, size, info))
- return true;
-
- if (on_task_stack(tsk, sp, size, info))
- return true;
- if (tsk != current || preemptible())
- return false;
- if (on_irq_stack(sp, size, info))
- return true;
- if (on_sdei_stack(sp, size, info))
- return true;
-
- return false;
-}
-
-/*
- * Unwind from one frame record (A) to the next frame record (B).
- *
- * We terminate early if the location of B indicates a malformed chain of frame
- * records (e.g. a cycle), determined based on the location and fp value of A
- * and the location (but not the fp value) of B.
- */
-static inline int notrace unwind_next(struct unwind_state *state)
-{
- struct task_struct *tsk = state->task;
- unsigned long fp = state->fp;
- struct stack_info info;
- int err;
-
- /* Final frame; nothing to unwind */
- if (fp == (unsigned long)task_pt_regs(tsk)->stackframe)
- return -ENOENT;
-
- err = unwind_next_common(state, &info, NULL);
- if (err)
- return err;
-
- state->pc = ptrauth_strip_insn_pac(state->pc);
-
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
- if (tsk->ret_stack &&
- (state->pc == (unsigned long)return_to_handler)) {
- unsigned long orig_pc;
- /*
- * This is a case where function graph tracer has
- * modified a return address (LR) in a stack frame
- * to hook a function return.
- * So replace it to an original value.
- */
- orig_pc = ftrace_graph_ret_addr(tsk, NULL, state->pc,
- (void *)state->fp);
- if (WARN_ON_ONCE(state->pc == orig_pc))
- return -EINVAL;
- state->pc = orig_pc;
- }
-#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
-#ifdef CONFIG_KRETPROBES
- if (is_kretprobe_trampoline(state->pc))
- state->pc = kretprobe_find_ret_addr(tsk, (void *)state->fp, &state->kr_cur);
-#endif
-
- return 0;
-}
-NOKPROBE_SYMBOL(unwind_next);
-
#endif /* __ASM_STACKTRACE_H */
diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
index 3ebb69ea374a..18046a7248a2 100644
--- a/arch/arm64/include/asm/stacktrace/common.h
+++ b/arch/arm64/include/asm/stacktrace/common.h
@@ -79,15 +79,6 @@ struct unwind_state {
struct task_struct *task;
};
-static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
- struct stack_info *info);
-
-static inline bool on_accessible_stack(const struct task_struct *tsk,
- unsigned long sp, unsigned long size,
- struct stack_info *info);
-
-static inline int unwind_next(struct unwind_state *state);
-
static inline bool on_stack(unsigned long sp, unsigned long size,
unsigned long low, unsigned long high,
enum stack_type type, struct stack_info *info)
@@ -106,21 +97,6 @@ static inline bool on_stack(unsigned long sp, unsigned long size,
return true;
}
-static inline bool on_accessible_stack_common(const struct task_struct *tsk,
- unsigned long sp,
- unsigned long size,
- struct stack_info *info)
-{
- if (info)
- info->type = STACK_TYPE_UNKNOWN;
-
- /*
- * Both the kernel and nvhe hypervisor make use of
- * an overflow_stack
- */
- return on_overflow_stack(sp, size, info);
-}
-
static inline void unwind_init_common(struct unwind_state *state,
struct task_struct *task)
{
@@ -156,8 +132,22 @@ static inline void unwind_init_common(struct unwind_state *state,
typedef bool (*stack_trace_translate_fp_fn)(unsigned long *fp,
enum stack_type type);
+/*
+ * on_accessible_stack_fn() - Check whether a stack range is on any
+ * of the possible stacks.
+ *
+ * @tsk: task whose stack is being unwound
+ * @sp: stack address being checked
+ * @size: size of the stack range being checked
+ * @info: stack unwinding context
+ */
+typedef bool (*on_accessible_stack_fn)(const struct task_struct *tsk,
+ unsigned long sp, unsigned long size,
+ struct stack_info *info);
+
static inline int unwind_next_common(struct unwind_state *state,
struct stack_info *info,
+ on_accessible_stack_fn accessible,
stack_trace_translate_fp_fn translate_fp)
{
unsigned long fp = state->fp, kern_fp = fp;
@@ -166,7 +156,7 @@ static inline int unwind_next_common(struct unwind_state *state,
if (fp & 0x7)
return -EINVAL;
- if (!on_accessible_stack(tsk, fp, 16, info))
+ if (!accessible(tsk, fp, 16, info))
return -EINVAL;
if (test_bit(info->type, state->stacks_done))
@@ -212,19 +202,4 @@ static inline int unwind_next_common(struct unwind_state *state,
return 0;
}
-static inline void notrace unwind(struct unwind_state *state,
- stack_trace_consume_fn consume_entry,
- void *cookie)
-{
- while (1) {
- int ret;
-
- if (!consume_entry(cookie, state->pc))
- break;
- ret = unwind_next(state);
- if (ret < 0)
- break;
- }
-}
-NOKPROBE_SYMBOL(unwind);
#endif /* __ASM_STACKTRACE_COMMON_H */
diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/asm/stacktrace/nvhe.h
index 8a5cb96d7143..a096216d8970 100644
--- a/arch/arm64/include/asm/stacktrace/nvhe.h
+++ b/arch/arm64/include/asm/stacktrace/nvhe.h
@@ -37,59 +37,7 @@ static inline void kvm_nvhe_unwind_init(struct unwind_state *state,
state->pc = pc;
}
-static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
- struct stack_info *info);
-
-static inline bool on_accessible_stack(const struct task_struct *tsk,
- unsigned long sp, unsigned long size,
- struct stack_info *info)
-{
- if (on_accessible_stack_common(tsk, sp, size, info))
- return true;
-
- if (on_hyp_stack(sp, size, info))
- return true;
-
- return false;
-}
-
-#ifdef __KVM_NVHE_HYPERVISOR__
-/*
- * Protected nVHE HYP stack unwinder
- *
- * In protected mode, the unwinding is done by the hypervisor in EL2.
- */
-
-#ifdef CONFIG_PROTECTED_NVHE_STACKTRACE
-static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
- struct stack_info *info)
-{
- unsigned long low = (unsigned long)this_cpu_ptr(overflow_stack);
- unsigned long high = low + OVERFLOW_STACK_SIZE;
-
- return on_stack(sp, size, low, high, STACK_TYPE_OVERFLOW, info);
-}
-
-static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
- struct stack_info *info)
-{
- struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params);
- unsigned long high = params->stack_hyp_va;
- unsigned long low = high - PAGE_SIZE;
-
- return on_stack(sp, size, low, high, STACK_TYPE_HYP, info);
-}
-
-static inline int notrace unwind_next(struct unwind_state *state)
-{
- struct stack_info info;
-
- return unwind_next_common(state, &info, NULL);
-}
-NOKPROBE_SYMBOL(unwind_next);
-#endif /* CONFIG_PROTECTED_NVHE_STACKTRACE */
-
-#else /* !__KVM_NVHE_HYPERVISOR__ */
+#ifndef __KVM_NVHE_HYPERVISOR__
/*
* Conventional (non-protected) nVHE HYP stack unwinder
*
@@ -142,36 +90,6 @@ static inline bool kvm_nvhe_stack_kern_va(unsigned long *addr,
return true;
}
-static inline bool on_overflow_stack(unsigned long sp, unsigned long size,
- struct stack_info *info)
-{
- struct kvm_nvhe_stacktrace_info *stacktrace_info
- = this_cpu_ptr_nvhe_sym(kvm_stacktrace_info);
- unsigned long low = (unsigned long)stacktrace_info->overflow_stack_base;
- unsigned long high = low + OVERFLOW_STACK_SIZE;
-
- return on_stack(sp, size, low, high, STACK_TYPE_OVERFLOW, info);
-}
-
-static inline bool on_hyp_stack(unsigned long sp, unsigned long size,
- struct stack_info *info)
-{
- struct kvm_nvhe_stacktrace_info *stacktrace_info
- = this_cpu_ptr_nvhe_sym(kvm_stacktrace_info);
- unsigned long low = (unsigned long)stacktrace_info->stack_base;
- unsigned long high = low + PAGE_SIZE;
-
- return on_stack(sp, size, low, high, STACK_TYPE_HYP, info);
-}
-
-static inline int notrace unwind_next(struct unwind_state *state)
-{
- struct stack_info info;
-
- return unwind_next_common(state, &info, kvm_nvhe_stack_kern_va);
-}
-NOKPROBE_SYMBOL(unwind_next);
-
void kvm_nvhe_dump_backtrace(unsigned long hyp_offset);
#endif /* __KVM_NVHE_HYPERVISOR__ */
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 9fa60ee48499..ce190ee18a20 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -67,6 +67,96 @@ static inline void unwind_init_from_task(struct unwind_state *state,
state->pc = thread_saved_pc(task);
}
+/*
+ * We can only safely access per-cpu stacks from current in a non-preemptible
+ * context.
+ */
+static bool on_accessible_stack(const struct task_struct *tsk,
+ unsigned long sp, unsigned long size,
+ struct stack_info *info)
+{
+ if (info)
+ info->type = STACK_TYPE_UNKNOWN;
+
+ if (on_task_stack(tsk, sp, size, info))
+ return true;
+ if (tsk != current || preemptible())
+ return false;
+ if (on_irq_stack(sp, size, info))
+ return true;
+ if (on_overflow_stack(sp, size, info))
+ return true;
+ if (on_sdei_stack(sp, size, info))
+ return true;
+
+ return false;
+}
+
+/*
+ * Unwind from one frame record (A) to the next frame record (B).
+ *
+ * We terminate early if the location of B indicates a malformed chain of frame
+ * records (e.g. a cycle), determined based on the location and fp value of A
+ * and the location (but not the fp value) of B.
+ */
+static int notrace unwind_next(struct unwind_state *state)
+{
+ struct task_struct *tsk = state->task;
+ unsigned long fp = state->fp;
+ struct stack_info info;
+ int err;
+
+ /* Final frame; nothing to unwind */
+ if (fp == (unsigned long)task_pt_regs(tsk)->stackframe)
+ return -ENOENT;
+
+ err = unwind_next_common(state, &info, on_accessible_stack, NULL);
+ if (err)
+ return err;
+
+ state->pc = ptrauth_strip_insn_pac(state->pc);
+
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ if (tsk->ret_stack &&
+ (state->pc == (unsigned long)return_to_handler)) {
+ unsigned long orig_pc;
+ /*
+ * This is a case where function graph tracer has
+ * modified a return address (LR) in a stack frame
+ * to hook a function return.
+ * So replace it to an original value.
+ */
+ orig_pc = ftrace_graph_ret_addr(tsk, NULL, state->pc,
+ (void *)state->fp);
+ if (WARN_ON_ONCE(state->pc == orig_pc))
+ return -EINVAL;
+ state->pc = orig_pc;
+ }
+#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
+#ifdef CONFIG_KRETPROBES
+ if (is_kretprobe_trampoline(state->pc))
+ state->pc = kretprobe_find_ret_addr(tsk, (void *)state->fp, &state->kr_cur);
+#endif
+
+ return 0;
+}
+NOKPROBE_SYMBOL(unwind_next);
+
+static void notrace unwind(struct unwind_state *state,
+ stack_trace_consume_fn consume_entry, void *cookie)
+{
+ while (1) {
+ int ret;
+
+ if (!consume_entry(cookie, state->pc))
+ break;
+ ret = unwind_next(state);
+ if (ret < 0)
+ break;
+ }
+}
+NOKPROBE_SYMBOL(unwind);
+
static bool dump_backtrace_entry(void *arg, unsigned long where)
{
char *loglvl = arg;
diff --git a/arch/arm64/kvm/hyp/nvhe/stacktrace.c b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
index 900324b7a08f..acbe272ecb32 100644
--- a/arch/arm64/kvm/hyp/nvhe/stacktrace.c
+++ b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
@@ -39,6 +39,58 @@ static void hyp_prepare_backtrace(unsigned long fp, unsigned long pc)
DEFINE_PER_CPU(unsigned long [NVHE_STACKTRACE_SIZE/sizeof(long)], pkvm_stacktrace);
+static bool on_overflow_stack(unsigned long sp, unsigned long size,
+ struct stack_info *info)
+{
+ unsigned long low = (unsigned long)this_cpu_ptr(overflow_stack);
+ unsigned long high = low + OVERFLOW_STACK_SIZE;
+
+ return on_stack(sp, size, low, high, STACK_TYPE_OVERFLOW, info);
+}
+
+static bool on_hyp_stack(unsigned long sp, unsigned long size,
+ struct stack_info *info)
+{
+ struct kvm_nvhe_init_params *params = this_cpu_ptr(&kvm_init_params);
+ unsigned long high = params->stack_hyp_va;
+ unsigned long low = high - PAGE_SIZE;
+
+ return on_stack(sp, size, low, high, STACK_TYPE_HYP, info);
+}
+
+static bool on_accessible_stack(const struct task_struct *tsk,
+ unsigned long sp, unsigned long size,
+ struct stack_info *info)
+{
+ if (info)
+ info->type = STACK_TYPE_UNKNOWN;
+
+ return (on_overflow_stack(sp, size, info) ||
+ on_hyp_stack(sp, size, info));
+}
+
+static int unwind_next(struct unwind_state *state)
+{
+ struct stack_info info;
+
+ return unwind_next_common(state, &info, on_accessible_stack, NULL);
+}
+
+static void notrace unwind(struct unwind_state *state,
+ stack_trace_consume_fn consume_entry,
+ void *cookie)
+{
+ while (1) {
+ int ret;
+
+ if (!consume_entry(cookie, state->pc))
+ break;
+ ret = unwind_next(state);
+ if (ret < 0)
+ break;
+ }
+}
+
/*
* pkvm_save_backtrace_entry - Saves a protected nVHE HYP stacktrace entry
*
diff --git a/arch/arm64/kvm/stacktrace.c b/arch/arm64/kvm/stacktrace.c
index 9812aefdcfb4..4d5fec3175ff 100644
--- a/arch/arm64/kvm/stacktrace.c
+++ b/arch/arm64/kvm/stacktrace.c
@@ -21,6 +21,61 @@
#include <asm/stacktrace/nvhe.h>
+static bool on_overflow_stack(unsigned long sp, unsigned long size,
+ struct stack_info *info)
+{
+ struct kvm_nvhe_stacktrace_info *stacktrace_info
+ = this_cpu_ptr_nvhe_sym(kvm_stacktrace_info);
+ unsigned long low = (unsigned long)stacktrace_info->overflow_stack_base;
+ unsigned long high = low + OVERFLOW_STACK_SIZE;
+
+ return on_stack(sp, size, low, high, STACK_TYPE_OVERFLOW, info);
+}
+
+static bool on_hyp_stack(unsigned long sp, unsigned long size,
+ struct stack_info *info)
+{
+ struct kvm_nvhe_stacktrace_info *stacktrace_info
+ = this_cpu_ptr_nvhe_sym(kvm_stacktrace_info);
+ unsigned long low = (unsigned long)stacktrace_info->stack_base;
+ unsigned long high = low + PAGE_SIZE;
+
+ return on_stack(sp, size, low, high, STACK_TYPE_HYP, info);
+}
+
+static bool on_accessible_stack(const struct task_struct *tsk,
+ unsigned long sp, unsigned long size,
+ struct stack_info *info)
+{
+ if (info)
+ info->type = STACK_TYPE_UNKNOWN;
+
+ return (on_overflow_stack(sp, size, info) ||
+ on_hyp_stack(sp, size, info));
+}
+
+static int unwind_next(struct unwind_state *state)
+{
+ struct stack_info info;
+
+ return unwind_next_common(state, &info, on_accessible_stack,
+ kvm_nvhe_stack_kern_va);
+}
+
+static void unwind(struct unwind_state *state,
+ stack_trace_consume_fn consume_entry, void *cookie)
+{
+ while (1) {
+ int ret;
+
+ if (!consume_entry(cookie, state->pc))
+ break;
+ ret = unwind_next(state);
+ if (ret < 0)
+ break;
+ }
+}
+
/*
* kvm_nvhe_dump_backtrace_entry - Symbolize and print an nVHE backtrace entry
*
--
2.34.1
* Re: [PATCH 3/6] KVM: arm64: Make unwind()/on_accessible_stack() per-unwinder functions
2022-07-27 14:29 ` [PATCH 3/6] KVM: arm64: Make unwind()/on_accessible_stack() per-unwinder functions Marc Zyngier
@ 2022-07-27 17:32 ` Mark Brown
0 siblings, 0 replies; 11+ messages in thread
From: Mark Brown @ 2022-07-27 17:32 UTC (permalink / raw)
To: Marc Zyngier
Cc: linux-arm-kernel, kvmarm, kvm, mark.rutland, madvenka, tabba,
oliver.upton, qperret, kaleshsingh, james.morse, alexandru.elisei,
suzuki.poulose, catalin.marinas, andreyknvl, vincenzo.frascino,
mhiramat, ast, wangkefeng.wang, elver, keirf, yuzenghui, ardb,
oupton, kernel-team
On Wed, Jul 27, 2022 at 03:29:03PM +0100, Marc Zyngier wrote:
> Having multiple versions of on_accessible_stack() (one per unwinder)
> makes it very hard to reason about what is used where due to the
> complexity of the various includes, the forward declarations, and
> the reliance on everything being 'inline'.
>
> Instead, move the code back where it should be. Each unwinder
> implements:
>
> - on_accessible_stack() as well as the helpers it depends on,
>
> - unwind()/unwind_next(), as they pass on_accessible_stack as
> a parameter to unwind_next_common() (which is the only common
> code here)
Reviewed-by: Mark Brown <broonie@kernel.org>
It feels like more of the accessibility stuff *should* be sharable, but
yeah.
* [PATCH 4/6] KVM: arm64: Move nVHE-only helpers into kvm/stacktrace.c
2022-07-27 14:29 ` [PATCH 0/6] KVM: arm64: nVHE stack unwinder rework Marc Zyngier
` (2 preceding siblings ...)
2022-07-27 14:29 ` [PATCH 3/6] KVM: arm64: Make unwind()/on_accessible_stack() per-unwinder functions Marc Zyngier
@ 2022-07-27 14:29 ` Marc Zyngier
2022-07-27 14:29 ` [PATCH 5/6] KVM: arm64: Don't open code ARRAY_SIZE() Marc Zyngier
` (4 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: Marc Zyngier @ 2022-07-27 14:29 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: mark.rutland, broonie, madvenka, tabba, oliver.upton, qperret,
kaleshsingh, james.morse, alexandru.elisei, suzuki.poulose,
catalin.marinas, andreyknvl, vincenzo.frascino, mhiramat, ast,
wangkefeng.wang, elver, keirf, yuzenghui, ardb, oupton,
kernel-team
kvm_nvhe_stack_kern_va() only makes sense as part of the nVHE
unwinder, so simply move it there.
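The translation being moved is a simple linear offset between two virtual
mappings of the same stack, which is why it belongs with the EL1-side
unwinder rather than in a shared header. A minimal sketch of the
arithmetic (the addresses below are made up; the kernel picks per-CPU
bases at runtime):

```c
#include <assert.h>

/*
 * Convert an address within the hypervisor ("private" VA) mapping of a
 * stack to the corresponding address in the kernel mapping, given the
 * base of the stack in each mapping. Only the offset into the stack is
 * carried across; no fixed-offset translation macro is involved.
 */
static unsigned long stack_hyp_to_kern(unsigned long addr,
				       unsigned long hyp_base,
				       unsigned long kern_base)
{
	return kern_base + (addr - hyp_base);
}
```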
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/stacktrace/nvhe.h | 41 ------------------------
arch/arm64/kvm/stacktrace.c | 41 ++++++++++++++++++++++++
2 files changed, 41 insertions(+), 41 deletions(-)
diff --git a/arch/arm64/include/asm/stacktrace/nvhe.h b/arch/arm64/include/asm/stacktrace/nvhe.h
index a096216d8970..d5527b600390 100644
--- a/arch/arm64/include/asm/stacktrace/nvhe.h
+++ b/arch/arm64/include/asm/stacktrace/nvhe.h
@@ -49,47 +49,6 @@ DECLARE_KVM_NVHE_PER_CPU(unsigned long [OVERFLOW_STACK_SIZE/sizeof(long)], overf
DECLARE_KVM_NVHE_PER_CPU(struct kvm_nvhe_stacktrace_info, kvm_stacktrace_info);
DECLARE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
-/*
- * kvm_nvhe_stack_kern_va - Convert KVM nVHE HYP stack addresses to kernel VAs
- *
- * The nVHE hypervisor stack is mapped in the flexible 'private' VA range, to
- * allow for guard pages below the stack. Consequently, the fixed offset address
- * translation macros won't work here.
- *
- * The kernel VA is calculated as an offset from the kernel VA of the hypervisor
- * stack base.
- *
- * Returns true on success and updates @addr to its corresponding kernel VA;
- * otherwise returns false.
- */
-static inline bool kvm_nvhe_stack_kern_va(unsigned long *addr,
- enum stack_type type)
-{
- struct kvm_nvhe_stacktrace_info *stacktrace_info;
- unsigned long hyp_base, kern_base, hyp_offset;
-
- stacktrace_info = this_cpu_ptr_nvhe_sym(kvm_stacktrace_info);
-
- switch (type) {
- case STACK_TYPE_HYP:
- kern_base = (unsigned long)*this_cpu_ptr(&kvm_arm_hyp_stack_page);
- hyp_base = (unsigned long)stacktrace_info->stack_base;
- break;
- case STACK_TYPE_OVERFLOW:
- kern_base = (unsigned long)this_cpu_ptr_nvhe_sym(overflow_stack);
- hyp_base = (unsigned long)stacktrace_info->overflow_stack_base;
- break;
- default:
- return false;
- }
-
- hyp_offset = *addr - hyp_base;
-
- *addr = kern_base + hyp_offset;
-
- return true;
-}
-
void kvm_nvhe_dump_backtrace(unsigned long hyp_offset);
#endif /* __KVM_NVHE_HYPERVISOR__ */
diff --git a/arch/arm64/kvm/stacktrace.c b/arch/arm64/kvm/stacktrace.c
index 4d5fec3175ff..417665854f86 100644
--- a/arch/arm64/kvm/stacktrace.c
+++ b/arch/arm64/kvm/stacktrace.c
@@ -21,6 +21,47 @@
#include <asm/stacktrace/nvhe.h>
+/*
+ * kvm_nvhe_stack_kern_va - Convert KVM nVHE HYP stack addresses to kernel VAs
+ *
+ * The nVHE hypervisor stack is mapped in the flexible 'private' VA range, to
+ * allow for guard pages below the stack. Consequently, the fixed offset address
+ * translation macros won't work here.
+ *
+ * The kernel VA is calculated as an offset from the kernel VA of the hypervisor
+ * stack base.
+ *
+ * Returns true on success and updates @addr to its corresponding kernel VA;
+ * otherwise returns false.
+ */
+static bool kvm_nvhe_stack_kern_va(unsigned long *addr,
+ enum stack_type type)
+{
+ struct kvm_nvhe_stacktrace_info *stacktrace_info;
+ unsigned long hyp_base, kern_base, hyp_offset;
+
+ stacktrace_info = this_cpu_ptr_nvhe_sym(kvm_stacktrace_info);
+
+ switch (type) {
+ case STACK_TYPE_HYP:
+ kern_base = (unsigned long)*this_cpu_ptr(&kvm_arm_hyp_stack_page);
+ hyp_base = (unsigned long)stacktrace_info->stack_base;
+ break;
+ case STACK_TYPE_OVERFLOW:
+ kern_base = (unsigned long)this_cpu_ptr_nvhe_sym(overflow_stack);
+ hyp_base = (unsigned long)stacktrace_info->overflow_stack_base;
+ break;
+ default:
+ return false;
+ }
+
+ hyp_offset = *addr - hyp_base;
+
+ *addr = kern_base + hyp_offset;
+
+ return true;
+}
+
static bool on_overflow_stack(unsigned long sp, unsigned long size,
struct stack_info *info)
{
--
2.34.1
* [PATCH 5/6] KVM: arm64: Don't open code ARRAY_SIZE()
2022-07-27 14:29 ` [PATCH 0/6] KVM: arm64: nVHE stack unwinder rework Marc Zyngier
` (3 preceding siblings ...)
2022-07-27 14:29 ` [PATCH 4/6] KVM: arm64: Move nVHE-only helpers into kvm/stacktrace.c Marc Zyngier
@ 2022-07-27 14:29 ` Marc Zyngier
2022-07-27 14:29 ` [PATCH 6/6] arm64: Update 'unwinder howto' Marc Zyngier
` (3 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: Marc Zyngier @ 2022-07-27 14:29 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: mark.rutland, broonie, madvenka, tabba, oliver.upton, qperret,
kaleshsingh, james.morse, alexandru.elisei, suzuki.poulose,
catalin.marinas, andreyknvl, vincenzo.frascino, mhiramat, ast,
wangkefeng.wang, elver, keirf, yuzenghui, ardb, oupton,
kernel-team
From: Oliver Upton <oliver.upton@linux.dev>
Use ARRAY_SIZE() instead of an open-coded version.
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/hyp/nvhe/stacktrace.c | 3 +--
arch/arm64/kvm/stacktrace.c | 6 ++++--
2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/kvm/hyp/nvhe/stacktrace.c b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
index acbe272ecb32..58f645ad66bc 100644
--- a/arch/arm64/kvm/hyp/nvhe/stacktrace.c
+++ b/arch/arm64/kvm/hyp/nvhe/stacktrace.c
@@ -103,14 +103,13 @@ static void notrace unwind(struct unwind_state *state,
static bool pkvm_save_backtrace_entry(void *arg, unsigned long where)
{
unsigned long *stacktrace = this_cpu_ptr(pkvm_stacktrace);
- int size = NVHE_STACKTRACE_SIZE / sizeof(long);
int *idx = (int *)arg;
/*
* Need 2 free slots: 1 for current entry and 1 for the
* delimiter.
*/
- if (*idx > size - 2)
+ if (*idx > ARRAY_SIZE(pkvm_stacktrace) - 2)
return false;
stacktrace[*idx] = where;
diff --git a/arch/arm64/kvm/stacktrace.c b/arch/arm64/kvm/stacktrace.c
index 417665854f86..949d19d603fb 100644
--- a/arch/arm64/kvm/stacktrace.c
+++ b/arch/arm64/kvm/stacktrace.c
@@ -187,11 +187,13 @@ static void pkvm_dump_backtrace(unsigned long hyp_offset)
{
unsigned long *stacktrace
= (unsigned long *) this_cpu_ptr_nvhe_sym(pkvm_stacktrace);
- int i, size = NVHE_STACKTRACE_SIZE / sizeof(long);
+ int i;
kvm_nvhe_dump_backtrace_start();
/* The saved stacktrace is terminated by a null entry */
- for (i = 0; i < size && stacktrace[i]; i++)
+ for (i = 0;
+ i < ARRAY_SIZE(kvm_nvhe_sym(pkvm_stacktrace)) && stacktrace[i];
+ i++)
kvm_nvhe_dump_backtrace_entry((void *)hyp_offset, stacktrace[i]);
kvm_nvhe_dump_backtrace_end();
}
--
2.34.1
* [PATCH 6/6] arm64: Update 'unwinder howto'
2022-07-27 14:29 ` [PATCH 0/6] KVM: arm64: nVHE stack unwinder rework Marc Zyngier
` (4 preceding siblings ...)
2022-07-27 14:29 ` [PATCH 5/6] KVM: arm64: Don't open code ARRAY_SIZE() Marc Zyngier
@ 2022-07-27 14:29 ` Marc Zyngier
2022-07-27 15:56 ` [PATCH 0/6] KVM: arm64: nVHE stack unwinder rework Kalesh Singh
` (2 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: Marc Zyngier @ 2022-07-27 14:29 UTC (permalink / raw)
To: linux-arm-kernel, kvmarm, kvm
Cc: mark.rutland, broonie, madvenka, tabba, oliver.upton, qperret,
kaleshsingh, james.morse, alexandru.elisei, suzuki.poulose,
catalin.marinas, andreyknvl, vincenzo.frascino, mhiramat, ast,
wangkefeng.wang, elver, keirf, yuzenghui, ardb, oupton,
kernel-team
Implementing a new unwinder is a bit more involved than writing
a couple of helpers, so let's not lure the reader into a false
sense of comfort. Instead, let's point out what they should
call into, and what sort of parameter they need to provide.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/stacktrace/common.h | 14 ++++----------
1 file changed, 4 insertions(+), 10 deletions(-)
diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
index 18046a7248a2..f58eb944c46f 100644
--- a/arch/arm64/include/asm/stacktrace/common.h
+++ b/arch/arm64/include/asm/stacktrace/common.h
@@ -5,17 +5,11 @@
* To implement a new arm64 stack unwinder:
* 1) Include this header
*
- * 2) Provide implementations for the following functions:
- * on_overflow_stack(): Returns true if SP is on the overflow
- * stack.
- * on_accessible_stack(): Returns true if SP is on any accessible
- * stack.
- * unwind_next(): Performs validation checks on the frame
- * pointer, and transitions unwind_state
- * to the next frame.
+ * 2) Call into unwind_next_common() from your top level unwind
+ * function, passing it the validation and translation callbacks
+ * (though the latter can be NULL if no translation is required).
*
- * See: arch/arm64/include/asm/stacktrace.h for reference
- * implementations.
+ * See: arch/arm64/kernel/stacktrace.c for the reference implementation.
*
* Copyright (C) 2012 ARM Ltd.
*/
--
2.34.1
* Re: [PATCH 0/6] KVM: arm64: nVHE stack unwinder rework
2022-07-27 14:29 ` [PATCH 0/6] KVM: arm64: nVHE stack unwinder rework Marc Zyngier
` (5 preceding siblings ...)
2022-07-27 14:29 ` [PATCH 6/6] arm64: Update 'unwinder howto' Marc Zyngier
@ 2022-07-27 15:56 ` Kalesh Singh
2022-07-27 16:01 ` Oliver Upton
2022-07-27 17:45 ` Marc Zyngier
8 siblings, 0 replies; 11+ messages in thread
From: Kalesh Singh @ 2022-07-27 15:56 UTC (permalink / raw)
To: Marc Zyngier
Cc: moderated list:ARM64 PORT (AARCH64 ARCHITECTURE), kvmarm, kvm,
Mark Rutland, Mark Brown, Madhavan T. Venkataraman, Fuad Tabba,
Oliver Upton, Quentin Perret, James Morse, Alexandru Elisei,
Suzuki K Poulose, Catalin Marinas, andreyknvl, vincenzo.frascino,
Masami Hiramatsu, Alexei Starovoitov, Kefeng Wang, Marco Elver,
Keir Fraser, Zenghui Yu, Ard Biesheuvel, Oliver Upton,
Android Kernel
On Wed, Jul 27, 2022 at 7:29 AM Marc Zyngier <maz@kernel.org> wrote:
>
> Hi all,
>
> As Kalesh's series[1] has already been through quite a few rounds and
> has proved to be an extremely useful debugging aid, I'd like to
> queue it for 5.20.
>
> However, there are a couple of nits that I'd like to address:
>
> - the code is extremely hard to follow, due to the include maze and
> the various levels of inline functions that have forward
> declarations...
>
> - there is a subtle bug in the way the kernel on_accessible_stack()
> helper has been rewritten
>
> - the config symbol for the protected unwinder is oddly placed
Hi Marc,
Thanks for doing this rework.
For the series:
Reviewed-by: Kalesh Singh <kaleshsingh@google.com>
Tested-by: Kalesh Singh <kaleshsingh@google.com>
Thanks,
Kalesh
>
> Instead of going for another round and missing the merge window, I
> propose to stash the following patches on top, which IMHO result in
> something much more readable.
>
> This series directly applies on top of Kalesh's.
>
> [1] https://lore.kernel.org/r/20220726073750.3219117-1-kaleshsingh@google.com
>
> Marc Zyngier (5):
> KVM: arm64: Move PROTECTED_NVHE_STACKTRACE around
> KVM: arm64: Move nVHE stacktrace unwinding into its own compilation
> unit
> KVM: arm64: Make unwind()/on_accessible_stack() per-unwinder functions
> KVM: arm64: Move nVHE-only helpers into kvm/stacktrace.c
> arm64: Update 'unwinder howto'
>
> Oliver Upton (1):
> KVM: arm64: Don't open code ARRAY_SIZE()
>
> arch/arm64/include/asm/stacktrace.h | 74 -------
> arch/arm64/include/asm/stacktrace/common.h | 69 ++-----
> arch/arm64/include/asm/stacktrace/nvhe.h | 125 +-----------
> arch/arm64/kernel/stacktrace.c | 90 +++++++++
> arch/arm64/kvm/Kconfig | 24 ++-
> arch/arm64/kvm/Makefile | 2 +-
> arch/arm64/kvm/handle_exit.c | 98 ---------
> arch/arm64/kvm/hyp/nvhe/stacktrace.c | 55 +++++-
> arch/arm64/kvm/stacktrace.c | 218 +++++++++++++++++++++
> 9 files changed, 394 insertions(+), 361 deletions(-)
> create mode 100644 arch/arm64/kvm/stacktrace.c
>
> --
> 2.34.1
>
* Re: [PATCH 0/6] KVM: arm64: nVHE stack unwinder rework
2022-07-27 14:29 ` [PATCH 0/6] KVM: arm64: nVHE stack unwinder rework Marc Zyngier
` (6 preceding siblings ...)
2022-07-27 15:56 ` [PATCH 0/6] KVM: arm64: nVHE stack unwinder rework Kalesh Singh
@ 2022-07-27 16:01 ` Oliver Upton
2022-07-27 17:45 ` Marc Zyngier
8 siblings, 0 replies; 11+ messages in thread
From: Oliver Upton @ 2022-07-27 16:01 UTC (permalink / raw)
To: Marc Zyngier
Cc: linux-arm-kernel, kvmarm, kvm, mark.rutland, broonie, madvenka,
tabba, qperret, kaleshsingh, james.morse, alexandru.elisei,
suzuki.poulose, catalin.marinas, andreyknvl, vincenzo.frascino,
mhiramat, ast, wangkefeng.wang, elver, keirf, yuzenghui, ardb,
kernel-team
On Wed, Jul 27, 2022 at 03:29:00PM +0100, Marc Zyngier wrote:
> Hi all,
>
> As Kalesh's series[1] has already been through quite a few rounds and
> has proved to be an extremely useful debugging aid, I'd like to
> queue it for 5.20.
>
> However, there are a couple of nits that I'd like to address:
>
> - the code is extremely hard to follow, due to the include maze and
> the various levels of inline functions that have forward
> declarations...
>
> - there is a subtle bug in the way the kernel on_accessible_stack()
> helper has been rewritten
>
> - the config symbol for the protected unwinder is oddly placed
>
> Instead of going for another round and missing the merge window, I
> propose to stash the following patches on top, which IMHO result in
> something much more readable.
>
> This series directly applies on top of Kalesh's.
>
> [1] https://lore.kernel.org/r/20220726073750.3219117-1-kaleshsingh@google.com
For the series (besides my own patch of course):
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
--
Thanks,
Oliver
* Re: [PATCH 0/6] KVM: arm64: nVHE stack unwinder rework
2022-07-27 14:29 ` [PATCH 0/6] KVM: arm64: nVHE stack unwinder rework Marc Zyngier
` (7 preceding siblings ...)
2022-07-27 16:01 ` Oliver Upton
@ 2022-07-27 17:45 ` Marc Zyngier
8 siblings, 0 replies; 11+ messages in thread
From: Marc Zyngier @ 2022-07-27 17:45 UTC (permalink / raw)
To: Marc Zyngier, kvm, kvmarm, linux-arm-kernel
Cc: keirf, kaleshsingh, catalin.marinas, broonie, madvenka, qperret,
wangkefeng.wang, tabba, ast, vincenzo.frascino, elver,
mark.rutland, mhiramat, oupton, james.morse, oliver.upton,
suzuki.poulose, kernel-team, alexandru.elisei, ardb, andreyknvl,
yuzenghui
On Wed, 27 Jul 2022 15:29:00 +0100, Marc Zyngier wrote:
> As Kalesh's series[1] has already been through quite a few rounds and
> has proved to be an extremely useful debugging aid, I'd like to
> queue it for 5.20.
>
> However, there are a couple of nits that I'd like to address:
>
> - the code is extremely hard to follow, due to the include maze and
> the various levels of inline functions that have forward
> declarations...
>
> [...]
Applied to next, thanks!
[1/6] KVM: arm64: Move PROTECTED_NVHE_STACKTRACE around
commit: 03fe9cd05b9f38353208c23bd791dac47c912054
[2/6] KVM: arm64: Move nVHE stacktrace unwinding into its own compilation unit
commit: 9f5fee05f6897d0fe0e3a44ade71bb85cd97b2ef
[3/6] KVM: arm64: Make unwind()/on_accessible_stack() per-unwinder functions
commit: 4e00532f37365967e9896966b1fe61888e659259
[4/6] KVM: arm64: Move nVHE-only helpers into kvm/stacktrace.c
commit: 0e773da1e688a1425ef7deae58fa11c5c7e09533
[5/6] KVM: arm64: Don't open code ARRAY_SIZE()
commit: 62ae21627aa96f6ef361981dd181c74dc7aa314c
[6/6] arm64: Update 'unwinder howto'
commit: a4c750e2328a117dc9b19a2a61db0d4347902029
Cheers,
M.
--
Marc Zyngier <maz@kernel.org>