From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 6 Apr 2026 18:50:00 +0000
In-Reply-To: <20260406185000.1378082-1-dylanbhatch@google.com>
Precedence: bulk
X-Mailing-List: linux-toolchains@vger.kernel.org
Mime-Version: 1.0
References: <20260406185000.1378082-1-dylanbhatch@google.com>
X-Mailer: git-send-email 2.53.0.1213.gd9a14994de-goog
Message-ID: <20260406185000.1378082-9-dylanbhatch@google.com>
Subject: [PATCH v3 8/8] unwind: arm64: Use sframe to unwind interrupt frames.
From: Dylan Hatch
To: Roman Gushchin, Weinan Liu, Will Deacon, Josh Poimboeuf, Indu Bhagat,
	Peter Zijlstra, Steven Rostedt, Catalin Marinas, Jiri Kosina
Cc: Dylan Hatch, Mark Rutland, Prasanna Kumar T S M, Puranjay Mohan,
	Song Liu, joe.lawrence@redhat.com, linux-toolchains@vger.kernel.org,
	linux-kernel@vger.kernel.org, live-patching@vger.kernel.org,
	Jens Remus, linux-arm-kernel@lists.infradead.org
Content-Type: text/plain; charset="UTF-8"

Add an unwind_next_frame_sframe() function to unwind using sframe info
when it is present. Use this method at exception boundaries, falling
back to frame-pointer unwind only on failure. In such failure cases, the
stacktrace is considered unreliable. During normal unwind, prefer frame
pointer unwind (for better performance) with sframe as a backup.

This change restores the LR behavior originally introduced in commit
c2c6b27b5aa14fa2 ("arm64: stacktrace: unwind exception boundaries"), but
later removed in commit 32ed1205682e ("arm64: stacktrace: Skip reporting
LR at exception boundaries"). This is possible because the sframe data
can be used to determine whether the LR is current for the PC value
recovered from pt_regs at the exception boundary.
Signed-off-by: Weinan Liu
Signed-off-by: Dylan Hatch
Reviewed-by: Prasanna Kumar T S M
---
 arch/arm64/include/asm/stacktrace/common.h |   6 +
 arch/arm64/kernel/stacktrace.c             | 242 +++++++++++++++++++--
 2 files changed, 228 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace/common.h b/arch/arm64/include/asm/stacktrace/common.h
index 821a8fdd31af..96c4c0a7e6de 100644
--- a/arch/arm64/include/asm/stacktrace/common.h
+++ b/arch/arm64/include/asm/stacktrace/common.h
@@ -21,6 +21,8 @@ struct stack_info {
  *
  * @fp: The fp value in the frame record (or the real fp)
  * @pc: The lr value in the frame record (or the real lr)
+ * @sp: The sp value at the call site of the current function.
+ * @unreliable: Stacktrace is unreliable.
  *
  * @stack: The stack currently being unwound.
  * @stacks: An array of stacks which can be unwound.
@@ -29,7 +31,11 @@ struct stack_info {
 struct unwind_state {
 	unsigned long fp;
 	unsigned long pc;
+#ifdef CONFIG_SFRAME_UNWINDER
+	unsigned long sp;
+#endif
+	bool unreliable;
 	struct stack_info stack;
 	struct stack_info *stacks;
 	int nr_stacks;
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 3ebcf8c53fb0..16a4eb31c5c1 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include

 #include
 #include
@@ -26,6 +27,7 @@ enum kunwind_source {
 	KUNWIND_SOURCE_CALLER,
 	KUNWIND_SOURCE_TASK,
 	KUNWIND_SOURCE_REGS_PC,
+	KUNWIND_SOURCE_REGS_LR,
 };

 union unwind_flags {
@@ -85,6 +87,9 @@ kunwind_init_from_regs(struct kunwind_state *state,
 	state->regs = regs;
 	state->common.fp = regs->regs[29];
 	state->common.pc = regs->pc;
+#ifdef CONFIG_SFRAME_UNWINDER
+	state->common.sp = regs->sp;
+#endif
 	state->source = KUNWIND_SOURCE_REGS_PC;
 }

@@ -103,6 +108,9 @@ kunwind_init_from_caller(struct kunwind_state *state)

 	state->common.fp = (unsigned long)__builtin_frame_address(1);
 	state->common.pc = (unsigned long)__builtin_return_address(0);
+#ifdef CONFIG_SFRAME_UNWINDER
+	state->common.sp = (unsigned long)__builtin_frame_address(0);
+#endif
 	state->source = KUNWIND_SOURCE_CALLER;
 }

@@ -124,6 +132,9 @@ kunwind_init_from_task(struct kunwind_state *state,

 	state->common.fp = thread_saved_fp(task);
 	state->common.pc = thread_saved_pc(task);
+#ifdef CONFIG_SFRAME_UNWINDER
+	state->common.sp = thread_saved_sp(task);
+#endif
 	state->source = KUNWIND_SOURCE_TASK;
 }

@@ -181,7 +192,6 @@ int kunwind_next_regs_pc(struct kunwind_state *state)
 	state->regs = regs;
 	state->common.pc = regs->pc;
 	state->common.fp = regs->regs[29];
-	state->regs = NULL;
 	state->source = KUNWIND_SOURCE_REGS_PC;
 	return 0;
 }
@@ -237,6 +247,9 @@ kunwind_next_frame_record(struct kunwind_state *state)

 	unwind_consume_stack(&state->common, info, fp, sizeof(*record));

+#ifdef CONFIG_SFRAME_UNWINDER
+	state->common.sp = state->common.fp;
+#endif
 	state->common.fp = new_fp;
 	state->common.pc = new_pc;
 	state->source = KUNWIND_SOURCE_FRAME;
@@ -244,6 +257,172 @@ kunwind_next_frame_record(struct kunwind_state *state)
 	return 0;
 }

+#ifdef CONFIG_SFRAME_UNWINDER
+
+static __always_inline struct stack_info *
+get_word(struct unwind_state *state, unsigned long *word)
+{
+	unsigned long addr = *word;
+	struct stack_info *info;
+
+	info = unwind_find_stack(state, addr, sizeof(addr));
+	if (!info)
+		return info;
+
+	*word = READ_ONCE(*(unsigned long *)addr);
+
+	return info;
+}
+
+static __always_inline int
+get_consume_word(struct unwind_state *state, unsigned long *word)
+{
+	struct stack_info *info;
+	unsigned long addr = *word;
+
+	info = get_word(state, word);
+	if (!info)
+		return -EINVAL;
+
+	unwind_consume_stack(state, info, addr, sizeof(addr));
+	return 0;
+}
+
+/*
+ * Unwind to the next frame according to sframe.
+ */
+static __always_inline int
+unwind_next_frame_sframe(struct kunwind_state *state)
+{
+	struct unwind_frame frame;
+	unsigned long cfa, fp, ra;
+	enum kunwind_source source = KUNWIND_SOURCE_FRAME;
+	struct pt_regs *regs = state->regs;
+
+	int err;
+
+	/* FP/SP alignment 8 bytes */
+	if (state->common.fp & 0x7 || state->common.sp & 0x7)
+		return -EINVAL;
+
+	/*
+	 * Most/all outermost functions are not visible to sframe. So, check for
+	 * a meta frame record if the sframe lookup fails.
+	 */
+	err = sframe_find_kernel(state->common.pc, &frame);
+	if (err)
+		return kunwind_next_frame_record_meta(state);
+
+	if (frame.outermost)
+		return -ENOENT;
+
+	/* Get the Canonical Frame Address (CFA) */
+	switch (frame.cfa.rule) {
+	case UNWIND_CFA_RULE_SP_OFFSET:
+		cfa = state->common.sp;
+		break;
+	case UNWIND_CFA_RULE_FP_OFFSET:
+		if (state->common.fp < state->common.sp)
+			return -EINVAL;
+		cfa = state->common.fp;
+		break;
+	case UNWIND_CFA_RULE_REG_OFFSET:
+	case UNWIND_CFA_RULE_REG_OFFSET_DEREF:
+		if (!regs)
+			return -EINVAL;
+		cfa = regs->regs[frame.cfa.regnum];
+		break;
+	default:
+		WARN_ON_ONCE(1);
+		return -EINVAL;
+	}
+	cfa += frame.cfa.offset;
+
+	/*
+	 * CFA typically points to a higher address than RA or FP, so don't
+	 * consume from the stack when we read it.
+	 */
+	if (frame.cfa.rule & UNWIND_RULE_DEREF &&
+	    !get_word(&state->common, &cfa))
+		return -EINVAL;
+
+	/* CFA alignment 8 bytes */
+	if (cfa & 0x7)
+		return -EINVAL;
+
+	/* Get the Return Address (RA) */
+	switch (frame.ra.rule) {
+	case UNWIND_RULE_RETAIN:
+		if (!regs)
+			return -EINVAL;
+		ra = regs->regs[30];
+		source = KUNWIND_SOURCE_REGS_LR;
+		break;
+	/* UNWIND_USER_RULE_CFA_OFFSET not implemented on purpose */
+	case UNWIND_RULE_CFA_OFFSET_DEREF:
+		ra = cfa + frame.ra.offset;
+		break;
+	case UNWIND_RULE_REG_OFFSET:
+	case UNWIND_RULE_REG_OFFSET_DEREF:
+		if (!regs)
+			return -EINVAL;
+		ra = regs->regs[frame.cfa.regnum];
+		ra += frame.ra.offset;
+		break;
+	default:
+		WARN_ON_ONCE(1);
+		return -EINVAL;
+	}
+
+	/* Get the Frame Pointer (FP) */
+	switch (frame.fp.rule) {
+	case UNWIND_RULE_RETAIN:
+		fp = state->common.fp;
+		break;
+	/* UNWIND_USER_RULE_CFA_OFFSET not implemented on purpose */
+	case UNWIND_RULE_CFA_OFFSET_DEREF:
+		fp = cfa + frame.fp.offset;
+		break;
+	case UNWIND_RULE_REG_OFFSET:
+	case UNWIND_RULE_REG_OFFSET_DEREF:
+		if (!regs)
+			return -EINVAL;
+		fp = regs->regs[frame.fp.regnum];
+		fp += frame.fp.offset;
+		break;
+	default:
+		WARN_ON_ONCE(1);
+		return -EINVAL;
+	}
+
+	/*
+	 * Consume RA and FP from the stack. The frame record puts FP at a lower
+	 * address than RA, so we always read FP first.
+	 */
+	if (frame.fp.rule & UNWIND_RULE_DEREF &&
+	    !get_word(&state->common, &fp))
+		return -EINVAL;
+
+	if (frame.ra.rule & UNWIND_RULE_DEREF &&
+	    get_consume_word(&state->common, &ra))
+		return -EINVAL;
+
+	state->common.pc = ra;
+	state->common.sp = cfa;
+	state->common.fp = fp;
+
+	state->source = source;
+
+	return 0;
+}
+
+#else
+
+static __always_inline int
+unwind_next_frame_sframe(struct kunwind_state *state) { return -EINVAL; }
+
+#endif
+
 /*
  * Unwind from one frame record (A) to the next frame record (B).
  *
@@ -259,12 +438,25 @@ kunwind_next(struct kunwind_state *state)
 	state->flags.all = 0;

 	switch (state->source) {
+	case KUNWIND_SOURCE_REGS_PC:
+		err = unwind_next_frame_sframe(state);
+
+		if (err && err != -ENOENT) {
+			/* Fallback to FP based unwinder */
+			err = kunwind_next_frame_record(state);
+			state->common.unreliable = true;
+		}
+		state->regs = NULL;
+		break;
 	case KUNWIND_SOURCE_FRAME:
 	case KUNWIND_SOURCE_CALLER:
 	case KUNWIND_SOURCE_TASK:
-	case KUNWIND_SOURCE_REGS_PC:
+	case KUNWIND_SOURCE_REGS_LR:
 		err = kunwind_next_frame_record(state);
+		if (err && err != -ENOENT)
+			err = unwind_next_frame_sframe(state);
 		break;
+
 	default:
 		err = -EINVAL;
 	}
@@ -350,6 +542,9 @@ kunwind_stack_walk(kunwind_consume_fn consume_state,
 		.common = {
 			.stacks = stacks,
 			.nr_stacks = ARRAY_SIZE(stacks),
+#ifdef CONFIG_SFRAME_UNWINDER
+			.sp = 0,
+#endif
 		},
 	};

@@ -390,34 +585,40 @@ noinline noinstr void arch_stack_walk(stack_trace_consume_fn consume_entry,
 	kunwind_stack_walk(arch_kunwind_consume_entry, &data, task, regs);
 }

+struct kunwind_reliable_consume_entry_data {
+	stack_trace_consume_fn consume_entry;
+	void *cookie;
+	bool unreliable;
+};
+
 static __always_inline bool
-arch_reliable_kunwind_consume_entry(const struct kunwind_state *state, void *cookie)
+arch_kunwind_reliable_consume_entry(const struct kunwind_state *state, void *cookie)
 {
-	/*
-	 * At an exception boundary we can reliably consume the saved PC. We do
-	 * not know whether the LR was live when the exception was taken, and
-	 * so we cannot perform the next unwind step reliably.
-	 *
-	 * All that matters is whether the *entire* unwind is reliable, so give
-	 * up as soon as we hit an exception boundary.
-	 */
-	if (state->source == KUNWIND_SOURCE_REGS_PC)
-		return false;
+	struct kunwind_reliable_consume_entry_data *data = cookie;

-	return arch_kunwind_consume_entry(state, cookie);
+	if (state->common.unreliable) {
+		data->unreliable = true;
+		return false;
+	}
+	return data->consume_entry(data->cookie, state->common.pc);
 }

-noinline noinstr int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
-					      void *cookie,
-					      struct task_struct *task)
+noinline notrace int arch_stack_walk_reliable(
+				stack_trace_consume_fn consume_entry,
+				void *cookie, struct task_struct *task)
 {
-	struct kunwind_consume_entry_data data = {
+	struct kunwind_reliable_consume_entry_data data = {
 		.consume_entry = consume_entry,
 		.cookie = cookie,
+		.unreliable = false,
 	};

-	return kunwind_stack_walk(arch_reliable_kunwind_consume_entry, &data,
-				  task, NULL);
+	kunwind_stack_walk(arch_kunwind_reliable_consume_entry, &data, task, NULL);
+
+	if (data.unreliable)
+		return -EINVAL;
+
+	return 0;
 }

 struct bpf_unwind_consume_entry_data {
@@ -452,6 +653,7 @@ static const char *state_source_string(const struct kunwind_state *state)
 	case KUNWIND_SOURCE_CALLER:	return "C";
 	case KUNWIND_SOURCE_TASK:	return "T";
 	case KUNWIND_SOURCE_REGS_PC:	return "P";
+	case KUNWIND_SOURCE_REGS_LR:	return "L";
 	default:			return "U";
 	}
 }
-- 
2.53.0.1213.gd9a14994de-goog