[PATCH v3 0/4] arm64: Add return address protection to asm code
From: Ard Biesheuvel @ 2022-12-09 15:20 UTC
To: linux-arm-kernel
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
Kees Cook, Catalin Marinas, Mark Brown
Control flow integrity features such as shadow call stack or PAC work by
placing special instructions between the reload of the link register
from the stack and the function return. The point of this is not only to
protect the control flow when calling that particular function, but also
to ensure that the sequence of instructions appearing at the end of the
function cannot be subverted and used in ways other than intended, e.g.,
in a ROP/JOP style attack.
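To illustrate the mechanism (a minimal sketch, not taken from this
series; 'func' is a hypothetical routine): with PAC, the return address
is signed before it is spilled, and authenticated again after it has
been reloaded, right before the return:

func:
	paciasp				// sign x30, using SP as the modifier
	stp	x29, x30, [sp, #-16]!	// spill the signed return address
	...
	ldp	x29, x30, [sp], #16	// reload the signed return address
	autiasp				// authenticate x30 before returning
	ret

With shadow call stack, a str/ldr of x30 to/from the x18-based shadow
stack sits in the same two positions and plays the same role.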
This means that it is generally a bad idea to incorporate any code that
is rarely or never used, but lacks such protections. So add some macros
that we can invoke in assembler code to protect the return address while
it is stored on the stack, and wire them up in the ftrace code, which is
often built into production kernels even when not used.
Another example of this is crypto code, and some fixes have been queued
up in the cryptodev tree to ensure that the frame_push and frame_pop
macros are used consistently.
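For example (a hypothetical routine, shown only to illustrate how the
macros are meant to be used):

SYM_FUNC_START(example_func)
	frame_push	2, 16	// spill x19-x20 and x29/x30, and reserve
				// 16 bytes for locals
	mov	x19, x0
	...
	frame_pop		// reload the spilled registers
	ret
SYM_FUNC_END(example_func)

With this series applied, frame_push additionally signs and/or pushes
the return address and frame_pop authenticates and/or pops it, so such
routines receive the same protection as compiler generated code.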
v3:
- rebase onto updated ftrace tree
- drop EFI changes for the time being, I'll bring those back later
- emit unwind directives for return address registers != x30, and handle
them in the dynamic SCS patching code
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Ard Biesheuvel (4):
arm64: assembler: Force error on misuse of .Lframe_local_offset
arm64: assembler: Protect return addresses in asm routines
arm64: ftrace: Preserve original link register value in ftrace_regs
arm64: ftrace: Add return address protection
arch/arm64/include/asm/assembler.h | 76 ++++++++++++++++++++
arch/arm64/include/asm/ftrace.h | 2 +-
arch/arm64/kernel/entry-ftrace.S | 27 +++++--
arch/arm64/kernel/patch-scs.c | 70 +++++++++++++-----
arch/arm64/kernel/stacktrace.c | 1 +
5 files changed, 151 insertions(+), 25 deletions(-)
--
2.35.1
[PATCH v3 1/4] arm64: assembler: Force error on misuse of .Lframe_local_offset
From: Ard Biesheuvel @ 2022-12-09 15:20 UTC
To: linux-arm-kernel
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
Kees Cook, Catalin Marinas, Mark Brown
The frame_push macro sets a local symbol .Lframe_local_offset to the
offset where the local variable area resides in the stack frame.
However, while we take care not to nest frame_push and frame_pop
sequences, .Lframe_local_offset retains its most recent value, allowing
it to be referenced erroneously from outside a frame_push/frame_pop
pair. So have frame_pop set it to an obviously wrong value that is
guaranteed to trigger a link error if it is ever referenced again.
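As an illustration (hypothetical code, not part of the patch), a stale
reference that previously assembled silently now fails to link:

	frame_push	1, 16
	str	x0, [sp, #.Lframe_local_offset]	// OK: inside the pair
	frame_pop
	...
	str	x0, [sp, #.Lframe_local_offset]	// now refers to the
				// undefined symbol frame_local_offset_error,
				// so this triggers an error at link time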
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/include/asm/assembler.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index e5957a53be3983ac..1c04701e4fda8458 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -758,6 +758,7 @@ alternative_endif
.endif
ldp x29, x30, [sp], #.Lframe_local_offset + .Lframe_extra
.set .Lframe_regcount, -1
+ .set .Lframe_local_offset, frame_local_offset_error
.endif
.endm
--
2.35.1
[PATCH v3 2/4] arm64: assembler: Protect return addresses in asm routines
From: Ard Biesheuvel @ 2022-12-09 15:20 UTC
To: linux-arm-kernel
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
Kees Cook, Catalin Marinas, Mark Brown
Introduce a set of macros that can be invoked to protect and restore the
return address when it is being spilled to memory. Just like for
ordinary C code, the chosen method is based on
CONFIG_ARM64_PTR_AUTH_KERNEL, CONFIG_SHADOW_CALL_STACK and
CONFIG_DYNAMIC_SCS, and may involve boot-time patching, depending on the
runtime capabilities of the system (and potential command line
overrides).
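As a sketch of the intended usage with a return address register other
than x30 (hypothetical code; the real users are added in subsequent
patches):

	protect_return_address	x9	// pacia x9, sp - or, if address
					// auth is absent, a special NOP
	...
	restore_return_address	x9	// autia x9, sp - or a special NOP
	ret	x9

With CONFIG_DYNAMIC_SCS, the boot-time patching code locates these
instructions via the unwind directives emitted by the macros, and
rewrites them into shadow call stack pushes and pops, i.e.,

	str	x9, [x18], #8		// patched over the push-side NOP
	ldr	x9, [x18, #-8]!		// patched over the pop-side NOP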
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/include/asm/assembler.h | 75 ++++++++++++++++++++
arch/arm64/kernel/patch-scs.c | 70 +++++++++++++-----
2 files changed, 127 insertions(+), 18 deletions(-)
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 1c04701e4fda8458..8b4afa2aaa9b0600 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -698,6 +698,79 @@ alternative_endif
#endif
.endm
+ /*
+ * protect_return_address - protect the return address value in
+ * register @reg, either by signing it using PAC and/or storing it on
+ * the shadow call stack. When dynamic shadow call stack is enabled,
+ * unwind directives are emitted so that the patching logic can find
+ * the instructions.
+ *
+ * These macros must not be used with reg != x30 in functions marked as
+ * SYM_FUNC, as in that case, each occurrence of this macro needs its
+ * own SYM_FUNC_CFI_START/_END section, and so these have to be emitted
+ * explicitly rather than via SYM_FUNC_START/_END. This is required to
+ * encode the return address correctly (which can only be encoded once
+ * per function)
+ */
+ .macro protect_return_address, reg=x30
+#ifdef CONFIG_ARM64_PTR_AUTH_KERNEL
+#ifdef CONFIG_UNWIND_TABLES
+ .cfi_startproc
+#endif
+ .arch_extension pauth
+ .ifnc \reg, x30
+alternative_if_not ARM64_HAS_ADDRESS_AUTH
+ // NOP encoding with bit #22 cleared (for patching to STR)
+ orr xzr, xzr, xzr, lsl #0
+alternative_else
+ pacia \reg, sp
+alternative_endif
+ .else
+ paciasp
+ .endif
+#ifdef CONFIG_UNWIND_TABLES
+ .cfi_return_column \reg
+ .cfi_negate_ra_state
+ .cfi_endproc
+#endif
+#endif
+#if defined(CONFIG_SHADOW_CALL_STACK) && !defined(CONFIG_DYNAMIC_SCS)
+ str \reg, [x18], #8
+#endif
+ .endm
+
+ /*
+ * restore_return_address - restore the return address value in
+ * register @reg, either by authenticating it using PAC and/or
+ * reloading it from the shadow call stack.
+ */
+ .macro restore_return_address, reg=x30
+#if defined(CONFIG_SHADOW_CALL_STACK) && !defined(CONFIG_DYNAMIC_SCS)
+ ldr \reg, [x18, #-8]!
+#endif
+#ifdef CONFIG_ARM64_PTR_AUTH_KERNEL
+#ifdef CONFIG_UNWIND_TABLES
+ .cfi_startproc
+#endif
+ .arch_extension pauth
+ .ifnc \reg, x30
+alternative_if_not ARM64_HAS_ADDRESS_AUTH
+ // NOP encoding with bit #22 set (for patching to LDR)
+ orr xzr, xzr, xzr, lsr #0
+alternative_else
+ autia \reg, sp
+alternative_endif
+ .else
+ autiasp
+ .endif
+#ifdef CONFIG_UNWIND_TABLES
+ .cfi_return_column \reg
+ .cfi_negate_ra_state
+ .cfi_endproc
+#endif
+#endif
+ .endm
+
/*
* frame_push - Push @regcount callee saved registers to the stack,
* starting at x19, as well as x29/x30, and set x29 to
@@ -705,6 +778,7 @@ alternative_endif
* for locals.
*/
.macro frame_push, regcount:req, extra
+ protect_return_address
__frame st, \regcount, \extra
.endm
@@ -716,6 +790,7 @@ alternative_endif
*/
.macro frame_pop
__frame ld
+ restore_return_address
.endm
.macro __frame_regs, reg1, reg2, op, num
diff --git a/arch/arm64/kernel/patch-scs.c b/arch/arm64/kernel/patch-scs.c
index 1b3da02d5b741bc3..d7319d10ca799167 100644
--- a/arch/arm64/kernel/patch-scs.c
+++ b/arch/arm64/kernel/patch-scs.c
@@ -54,20 +54,33 @@ extern const u8 __eh_frame_start[], __eh_frame_end[];
enum {
PACIASP = 0xd503233f,
AUTIASP = 0xd50323bf,
- SCS_PUSH = 0xf800865e,
- SCS_POP = 0xf85f8e5e,
+ SCS_PUSH = 0xf8008640,
+ SCS_POP = 0xf85f8e40,
+
+ // Special NOP encodings to identify locations where a register other
+ // than x30 is being used to carry the return address
+ NOP_PUSH = 0xaa1f03ff, // orr xzr, xzr, xzr, lsl #0
+ NOP_POP = 0xaa5f03ff, // orr xzr, xzr, xzr, lsr #0
};
-static void __always_inline scs_patch_loc(u64 loc)
+static void __always_inline scs_patch_loc(u64 loc, int ra_reg)
{
u32 insn = le32_to_cpup((void *)loc);
switch (insn) {
+ case NOP_PUSH:
+ if (WARN_ON(ra_reg == 30))
+ break;
+ fallthrough;
case PACIASP:
- *(u32 *)loc = cpu_to_le32(SCS_PUSH);
+ *(u32 *)loc = cpu_to_le32(SCS_PUSH | ra_reg);
break;
+ case NOP_POP:
+ if (WARN_ON(ra_reg == 30))
+ break;
+ fallthrough;
case AUTIASP:
- *(u32 *)loc = cpu_to_le32(SCS_POP);
+ *(u32 *)loc = cpu_to_le32(SCS_POP | ra_reg);
break;
default:
/*
@@ -76,9 +89,12 @@ static void __always_inline scs_patch_loc(u64 loc)
* also appear after a DW_CFA_restore_state directive that
* restores a state that is only partially accurate, and is
* followed by DW_CFA_negate_ra_state directive to toggle the
- * PAC bit again. So we permit other instructions here, and ignore
- * them.
+ * PAC bit again. So we permit other instructions here, and
+ * ignore them (unless they appear in handwritten assembly
+ * using a different return address register, where this should
+ * never happen).
*/
+ WARN_ON(ra_reg != 30);
return;
}
dcache_clean_pou(loc, loc + sizeof(u32));
@@ -130,7 +146,8 @@ struct eh_frame {
static int noinstr scs_handle_fde_frame(const struct eh_frame *frame,
bool fde_has_augmentation_data,
- int code_alignment_factor)
+ int code_alignment_factor,
+ int ra_reg)
{
int size = frame->size - offsetof(struct eh_frame, opcodes) + 4;
u64 loc = (u64)offset_to_ptr(&frame->initial_loc);
@@ -184,7 +201,7 @@ static int noinstr scs_handle_fde_frame(const struct eh_frame *frame,
break;
case DW_CFA_negate_ra_state:
- scs_patch_loc(loc - 4);
+ scs_patch_loc(loc - 4, ra_reg);
break;
case 0x40 ... 0x7f:
@@ -206,6 +223,7 @@ static int noinstr scs_handle_fde_frame(const struct eh_frame *frame,
int noinstr scs_patch(const u8 eh_frame[], int size)
{
const u8 *p = eh_frame;
+ int ra_reg = 30;
while (size > 4) {
const struct eh_frame *frame = (const void *)p;
@@ -219,23 +237,39 @@ int noinstr scs_patch(const u8 eh_frame[], int size)
break;
if (frame->cie_id_or_pointer == 0) {
- const u8 *p = frame->augmentation_string;
+ const u8 *as = frame->augmentation_string;
/* a 'z' in the augmentation string must come first */
- fde_has_augmentation_data = *p == 'z';
+ fde_has_augmentation_data = *as == 'z';
+ as += strlen(as) + 1;
+
+ /* check for at least 3 more bytes in the frame */
+ if (as - (u8 *)&frame->cie_id_or_pointer + 3 > frame->size)
+ return -ENOEXEC;
/*
- * The code alignment factor is a uleb128 encoded field
- * but given that the only sensible values are 1 or 4,
- * there is no point in decoding the whole thing.
+ * The code and data alignment factors are uleb128
+ * encoded fields but given that the only sensible
+ * values are 1 or 4, there is no point in decoding
+ * them entirely. The return address register number is
+ * a single byte in version 1 and a uleb128 in newer
+ * versions.
*/
- p += strlen(p) + 1;
- if (!WARN_ON(*p & BIT(7)))
- code_alignment_factor = *p;
+ if (WARN_ON(as[0] & BIT(7) || as[1] & BIT(7) ||
+ (as[2] & BIT(7)) && frame->version > 1))
+ return -ENOEXEC;
+
+ code_alignment_factor = as[0];
+
+ // Grab the return address register
+ ra_reg = as[2];
+ if (WARN_ON(ra_reg > 30))
+ return -ENOEXEC;
} else {
ret = scs_handle_fde_frame(frame,
fde_has_augmentation_data,
- code_alignment_factor);
+ code_alignment_factor,
+ ra_reg);
if (ret)
return ret;
}
--
2.35.1
[PATCH v3 3/4] arm64: ftrace: Preserve original link register value in ftrace_regs
From: Ard Biesheuvel @ 2022-12-09 15:20 UTC
To: linux-arm-kernel
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
Kees Cook, Catalin Marinas, Mark Brown
In order to be able to add pointer authentication and/or shadow call
stack support to the ftrace asm routines, they will need to reason about
whether or not the callsite's return address was updated to point to
return_to_handler(): in that case, we want the authentication to
occur there and not before returning to the call site.
To make this a bit easier, preserve the value of register X9, which
carries the callsite's LR value upon entry to ftrace_caller, so in a
later patch, we can compare it to the callsite's effective LR upon
return, and omit the authentication if the caller will be returning via
return_to_handler().
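For context: with -fpatchable-function-entry based ftrace on arm64,
every traced function begins with two patchable instructions, roughly
(sketch; 'traced_func' is hypothetical):

SYM_FUNC_START(traced_func)
	mov	x9, x30			// preserve the callsite's LR in x9
	bl	ftrace_caller		// patched in when tracing is enabled
	...

so on entry to ftrace_caller, x9 carries the callsite's original LR
while x30 points back into traced_func.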
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/include/asm/ftrace.h | 2 +-
arch/arm64/kernel/entry-ftrace.S | 12 ++++++------
2 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h
index 5664729800ae1c13..b07501645a74031a 100644
--- a/arch/arm64/include/asm/ftrace.h
+++ b/arch/arm64/include/asm/ftrace.h
@@ -86,7 +86,7 @@ struct ftrace_ops;
struct ftrace_regs {
/* x0 - x8 */
unsigned long regs[9];
- unsigned long __unused;
+ unsigned long orig_lr; // must follow &regs[8]
unsigned long fp;
unsigned long lr;
diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S
index 30cc2a9d1757a6a7..bccd525241ab615d 100644
--- a/arch/arm64/kernel/entry-ftrace.S
+++ b/arch/arm64/kernel/entry-ftrace.S
@@ -42,12 +42,12 @@ SYM_CODE_START(ftrace_caller)
/* Make room for ftrace regs, plus two frame records */
sub sp, sp, #(FREGS_SIZE + 32)
- /* Save function arguments */
+ /* Save function arguments and original callsite LR */
stp x0, x1, [sp, #FREGS_X0]
stp x2, x3, [sp, #FREGS_X2]
stp x4, x5, [sp, #FREGS_X4]
stp x6, x7, [sp, #FREGS_X6]
- str x8, [sp, #FREGS_X8]
+ stp x8, x9, [sp, #FREGS_X8]
/* Save the callsite's FP, LR, SP */
str x29, [sp, #FREGS_FP]
@@ -78,22 +78,22 @@ SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL)
* x19-x29 per the AAPCS, and we created frame records upon entry, so we need
* to restore x0-x8, x29, and x30.
*/
- /* Restore function arguments */
+ /* Restore function arguments and original callsite LR */
ldp x0, x1, [sp, #FREGS_X0]
ldp x2, x3, [sp, #FREGS_X2]
ldp x4, x5, [sp, #FREGS_X4]
ldp x6, x7, [sp, #FREGS_X6]
- ldr x8, [sp, #FREGS_X8]
+ ldp x8, x9, [sp, #FREGS_X8]
/* Restore the callsite's FP, LR, PC */
ldr x29, [sp, #FREGS_FP]
ldr x30, [sp, #FREGS_LR]
- ldr x9, [sp, #FREGS_PC]
+ ldr x10, [sp, #FREGS_PC]
/* Restore the callsite's SP */
add sp, sp, #FREGS_SIZE + 32
- ret x9
+ ret x10
SYM_CODE_END(ftrace_caller)
#else /* CONFIG_DYNAMIC_FTRACE_WITH_ARGS */
--
2.35.1
[PATCH v3 4/4] arm64: ftrace: Add return address protection
From: Ard Biesheuvel @ 2022-12-09 15:20 UTC
To: linux-arm-kernel
Cc: Ard Biesheuvel, Marc Zyngier, Will Deacon, Mark Rutland,
Kees Cook, Catalin Marinas, Mark Brown
Use the newly added asm macros to protect and restore the return address
in the ftrace call wrappers, based on whichever method is active (PAC
and/or shadow call stack).
If the graph tracer is in use, this covers both the return address *to*
the ftrace call site as well as the return address *at* the call site,
and the latter will either be restored in return_to_handler(), or before
returning to the call site.
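In outline, the exit path of ftrace_caller then behaves as follows (a
sketch of the logic in the hunk below):

	restore_return_address	x10	// authenticate the callsite's PC
	cmp	x9, x30			// is the LR still the original value?
	b.ne	0f			// no: the graph tracer redirected it
					// to return_to_handler(), which will
					// authenticate it instead
	restore_return_address	x30	// yes: authenticate it here
0:	ret	x10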
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
arch/arm64/kernel/entry-ftrace.S | 17 ++++++++++++++++-
arch/arm64/kernel/stacktrace.c | 1 +
2 files changed, 17 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S
index bccd525241ab615d..4acfe12ac594da58 100644
--- a/arch/arm64/kernel/entry-ftrace.S
+++ b/arch/arm64/kernel/entry-ftrace.S
@@ -33,9 +33,13 @@
* record, its caller is missing from the LR and existing chain of frame
* records.
*/
+
SYM_CODE_START(ftrace_caller)
bti c
+ protect_return_address x9
+ protect_return_address x30
+
/* Save original SP */
mov x10, sp
@@ -65,6 +69,9 @@ SYM_CODE_START(ftrace_caller)
stp x29, x30, [sp, #FREGS_SIZE]
add x29, sp, #FREGS_SIZE
+ alternative_insn nop, "xpaci x30", ARM64_HAS_ADDRESS_AUTH, \
+ IS_ENABLED(CONFIG_ARM64_PTR_AUTH_KERNEL)
+
sub x0, x30, #AARCH64_INSN_SIZE // ip (callsite's BL insn)
mov x1, x9 // parent_ip (callsite's LR)
ldr_l x2, function_trace_op // op
@@ -93,7 +100,14 @@ SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL)
/* Restore the callsite's SP */
add sp, sp, #FREGS_SIZE + 32
- ret x10
+ restore_return_address x10
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ // Check whether the callsite's LR has been overridden
+ cmp x9, x30
+ b.ne 0f
+#endif
+ restore_return_address x30
+0: ret x10
SYM_CODE_END(ftrace_caller)
#else /* CONFIG_DYNAMIC_FTRACE_WITH_ARGS */
@@ -265,6 +279,7 @@ SYM_CODE_START(return_to_handler)
ldp x6, x7, [sp, #48]
add sp, sp, #64
+ restore_return_address x30
ret
SYM_CODE_END(return_to_handler)
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 634279b3b03d1b07..e323a8ac50168261 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -102,6 +102,7 @@ static int notrace unwind_next(struct unwind_state *state)
*/
orig_pc = ftrace_graph_ret_addr(tsk, NULL, state->pc,
(void *)state->fp);
+ orig_pc = ptrauth_strip_insn_pac(orig_pc);
if (WARN_ON_ONCE(state->pc == orig_pc))
return -EINVAL;
state->pc = orig_pc;
--
2.35.1