* [PATCH bpf-next v7 0/2] Support kCFI + BPF on arm64
@ 2024-06-17 16:27 Maxwell Bland
2024-06-17 16:30 ` [PATCH bpf-next v7 1/2] cfi: add C CFI type macro Maxwell Bland
2024-06-17 16:31 ` [PATCH bpf-next v7 2/2] arm64/cfi,bpf: Support kCFI + BPF on arm64 Maxwell Bland
0 siblings, 2 replies; 4+ messages in thread
From: Maxwell Bland @ 2024-06-17 16:27 UTC (permalink / raw)
To: open list:BPF [GENERAL] (Safe Dynamic Programs and Tools)
Cc: Catalin Marinas, Will Deacon, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev,
Hao Luo, Jiri Olsa, Zi Shen Lim, Mark Rutland, Suzuki K Poulose,
Mark Brown, linux-arm-kernel, open list, Josh Poimboeuf,
Puranjay Mohan
Adds CFI checks to BPF dispatchers on aarch64, for example:
<bpf_dispatcher_*_func>:
paciasp
stp x29, x30, [sp, #-0x10]!
mov x29, sp
+ ldur w16, [x2, #-0x4]
+ movk w17, #0x1881
+ movk w17, #0xd942, lsl #16
+ cmp w16, w17
+ b.eq <bpf_dispatcher_*_func+0x24>
+ brk #0x8222
blr x2
ldp x29, x30, [sp], #0x10
autiasp
ret
Changes in v6->v7:
https://lore.kernel.org/all/illfkwuxwq3adca2h4shibz2xub62kku3g2wte4sqp7xj7cwkb@ckn3qg7zxjuv/
- Squash one of the commits to avoid code churn
Changes in v5->v6:
https://lore.kernel.org/all/mafwhrai2nz3u4wn4fu72kvzjm6krs57klc3qqvd2sz2mham6d@x4ukf6xqp4f4/
- Add include for cfi_types, fixing riscv compile error
- Fix authorship sign-off information
Changes in v4->v5:
https://lore.kernel.org/all/wtb6czzpvtqq23t4g6hf7on257dtxzdb4fa4nuq3dtq32odmli@xoyyrtthafar/
- Fix failing BPF selftests from misplaced variable declaration
Changes in v3->v4:
https://lore.kernel.org/all/fhdcjdzqdqnoehenxbipfaorseeamt3q7fbm7ghe6z5s2chif5@lrhtasolawud/
- Fix authorship attribution.
Changes in v2->v3:
https://lore.kernel.org/all/20240324211518.93892-1-puranjay12@gmail.com/
- Simplify cfi_get_func_hash to avoid needless failure case
- Use DEFINE_CFI_TYPE as suggested by Mark Rutland
Changes in v1->v2:
https://lore.kernel.org/bpf/20240227151115.4623-1-puranjay12@gmail.com/
- Rebased on latest bpf-next/master
Mark Rutland (1):
cfi: add C CFI type macro
Puranjay Mohan (1):
arm64/cfi,bpf: Support kCFI + BPF on arm64
arch/arm64/include/asm/cfi.h | 23 ++++++++++++++++++++++
arch/arm64/kernel/alternative.c | 18 +++++++++++++++++
arch/arm64/net/bpf_jit_comp.c | 21 +++++++++++++++++---
arch/riscv/kernel/cfi.c | 35 +++------------------------------
arch/x86/kernel/alternative.c | 35 +++------------------------------
include/linux/cfi_types.h | 23 ++++++++++++++++++++++
6 files changed, 88 insertions(+), 67 deletions(-)
create mode 100644 arch/arm64/include/asm/cfi.h
--
2.43.0
* [PATCH bpf-next v7 1/2] cfi: add C CFI type macro
2024-06-17 16:27 [PATCH bpf-next v7 0/2] Support kCFI + BPF on arm64 Maxwell Bland
@ 2024-06-17 16:30 ` Maxwell Bland
2024-06-17 16:31 ` [PATCH bpf-next v7 2/2] arm64/cfi,bpf: Support kCFI + BPF on arm64 Maxwell Bland
1 sibling, 0 replies; 4+ messages in thread
From: Maxwell Bland @ 2024-06-17 16:30 UTC (permalink / raw)
To: open list:BPF [GENERAL] (Safe Dynamic Programs and Tools)
Cc: Catalin Marinas, Will Deacon, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev,
Hao Luo, Jiri Olsa, Zi Shen Lim, Mark Rutland, Suzuki K Poulose,
Mark Brown, linux-arm-kernel, open list, Josh Poimboeuf,
Puranjay Mohan
From: Mark Rutland <mark.rutland@arm.com>
Currently x86 and riscv open-code 4 instances of the same logic to
define a u32 variable with the KCFI typeid of a given function.
Replace the duplicate logic with a common macro.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Maxwell Bland <mbland@motorola.com>
---
arch/riscv/kernel/cfi.c | 35 +++--------------------------------
arch/x86/kernel/alternative.c | 35 +++--------------------------------
include/linux/cfi_types.h | 23 +++++++++++++++++++++++
3 files changed, 29 insertions(+), 64 deletions(-)
diff --git a/arch/riscv/kernel/cfi.c b/arch/riscv/kernel/cfi.c
index 64bdd3e1ab8c..e7aec5f36dd5 100644
--- a/arch/riscv/kernel/cfi.c
+++ b/arch/riscv/kernel/cfi.c
@@ -4,6 +4,7 @@
*
* Copyright (C) 2023 Google LLC
*/
+#include <linux/cfi_types.h>
#include <linux/cfi.h>
#include <asm/insn.h>
@@ -82,41 +83,11 @@ struct bpf_insn;
/* Must match bpf_func_t / DEFINE_BPF_PROG_RUN() */
extern unsigned int __bpf_prog_runX(const void *ctx,
const struct bpf_insn *insn);
-
-/*
- * Force a reference to the external symbol so the compiler generates
- * __kcfi_typid.
- */
-__ADDRESSABLE(__bpf_prog_runX);
-
-/* u32 __ro_after_init cfi_bpf_hash = __kcfi_typeid___bpf_prog_runX; */
-asm (
-" .pushsection .data..ro_after_init,\"aw\",@progbits \n"
-" .type cfi_bpf_hash,@object \n"
-" .globl cfi_bpf_hash \n"
-" .p2align 2, 0x0 \n"
-"cfi_bpf_hash: \n"
-" .word __kcfi_typeid___bpf_prog_runX \n"
-" .size cfi_bpf_hash, 4 \n"
-" .popsection \n"
-);
+DEFINE_CFI_TYPE(cfi_bpf_hash, __bpf_prog_runX);
/* Must match bpf_callback_t */
extern u64 __bpf_callback_fn(u64, u64, u64, u64, u64);
-
-__ADDRESSABLE(__bpf_callback_fn);
-
-/* u32 __ro_after_init cfi_bpf_subprog_hash = __kcfi_typeid___bpf_callback_fn; */
-asm (
-" .pushsection .data..ro_after_init,\"aw\",@progbits \n"
-" .type cfi_bpf_subprog_hash,@object \n"
-" .globl cfi_bpf_subprog_hash \n"
-" .p2align 2, 0x0 \n"
-"cfi_bpf_subprog_hash: \n"
-" .word __kcfi_typeid___bpf_callback_fn \n"
-" .size cfi_bpf_subprog_hash, 4 \n"
-" .popsection \n"
-);
+DEFINE_CFI_TYPE(cfi_bpf_subprog_hash, __bpf_callback_fn);
u32 cfi_get_func_hash(void *func)
{
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 89de61243272..933d0c13a0d8 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0-only
#define pr_fmt(fmt) "SMP alternatives: " fmt
+#include <linux/cfi_types.h>
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/perf_event.h>
@@ -901,41 +902,11 @@ struct bpf_insn;
/* Must match bpf_func_t / DEFINE_BPF_PROG_RUN() */
extern unsigned int __bpf_prog_runX(const void *ctx,
const struct bpf_insn *insn);
-
-/*
- * Force a reference to the external symbol so the compiler generates
- * __kcfi_typid.
- */
-__ADDRESSABLE(__bpf_prog_runX);
-
-/* u32 __ro_after_init cfi_bpf_hash = __kcfi_typeid___bpf_prog_runX; */
-asm (
-" .pushsection .data..ro_after_init,\"aw\",@progbits \n"
-" .type cfi_bpf_hash,@object \n"
-" .globl cfi_bpf_hash \n"
-" .p2align 2, 0x0 \n"
-"cfi_bpf_hash: \n"
-" .long __kcfi_typeid___bpf_prog_runX \n"
-" .size cfi_bpf_hash, 4 \n"
-" .popsection \n"
-);
+DEFINE_CFI_TYPE(cfi_bpf_hash, __bpf_prog_runX);
/* Must match bpf_callback_t */
extern u64 __bpf_callback_fn(u64, u64, u64, u64, u64);
-
-__ADDRESSABLE(__bpf_callback_fn);
-
-/* u32 __ro_after_init cfi_bpf_subprog_hash = __kcfi_typeid___bpf_callback_fn; */
-asm (
-" .pushsection .data..ro_after_init,\"aw\",@progbits \n"
-" .type cfi_bpf_subprog_hash,@object \n"
-" .globl cfi_bpf_subprog_hash \n"
-" .p2align 2, 0x0 \n"
-"cfi_bpf_subprog_hash: \n"
-" .long __kcfi_typeid___bpf_callback_fn \n"
-" .size cfi_bpf_subprog_hash, 4 \n"
-" .popsection \n"
-);
+DEFINE_CFI_TYPE(cfi_bpf_subprog_hash, __bpf_callback_fn);
u32 cfi_get_func_hash(void *func)
{
diff --git a/include/linux/cfi_types.h b/include/linux/cfi_types.h
index 6b8713675765..f510e62ca8b1 100644
--- a/include/linux/cfi_types.h
+++ b/include/linux/cfi_types.h
@@ -41,5 +41,28 @@
SYM_TYPED_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)
#endif
+#else /* __ASSEMBLY__ */
+
+#ifdef CONFIG_CFI_CLANG
+#define DEFINE_CFI_TYPE(name, func) \
+ /* \
+ * Force a reference to the function so the compiler generates \
+ * __kcfi_typeid_<func>. \
+ */ \
+ __ADDRESSABLE(func); \
+ /* u32 name = __kcfi_typeid_<func> */ \
+ extern u32 name; \
+ asm ( \
+ " .pushsection .data..ro_after_init,\"aw\",@progbits \n" \
+ " .type " #name ",@object \n" \
+ " .globl " #name " \n" \
+ " .p2align 2, 0x0 \n" \
+ #name ": \n" \
+ " .long __kcfi_typeid_" #func " \n" \
+ " .size " #name ", 4 \n" \
+ " .popsection \n" \
+ );
+#endif
+
#endif /* __ASSEMBLY__ */
#endif /* _LINUX_CFI_TYPES_H */
--
2.43.0
* [PATCH bpf-next v7 2/2] arm64/cfi,bpf: Support kCFI + BPF on arm64
2024-06-17 16:27 [PATCH bpf-next v7 0/2] Support kCFI + BPF on arm64 Maxwell Bland
2024-06-17 16:30 ` [PATCH bpf-next v7 1/2] cfi: add C CFI type macro Maxwell Bland
@ 2024-06-17 16:31 ` Maxwell Bland
2024-06-18 13:30 ` Puranjay Mohan
1 sibling, 1 reply; 4+ messages in thread
From: Maxwell Bland @ 2024-06-17 16:31 UTC (permalink / raw)
To: open list:BPF [GENERAL] (Safe Dynamic Programs and Tools)
Cc: Catalin Marinas, Will Deacon, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev,
Hao Luo, Jiri Olsa, Zi Shen Lim, Mark Rutland, Suzuki K Poulose,
Mark Brown, linux-arm-kernel, open list, Josh Poimboeuf,
Puranjay Mohan
From: Puranjay Mohan <puranjay12@gmail.com>
Currently, bpf_dispatcher_*_func() is marked with `__nocfi`, so calling
BPF programs through this interface does not trigger CFI warnings.
However, when BPF programs are called directly from C, e.g. from BPF
helpers or struct_ops, CFI warnings are generated.
Implement proper CFI prologues for the BPF programs and callbacks and
drop __nocfi for arm64. Fix the trampoline generation code to emit kCFI
prologue when a struct_ops trampoline is being prepared.
Signed-off-by: Puranjay Mohan <puranjay12@gmail.com>
Signed-off-by: Maxwell Bland <mbland@motorola.com>
---
arch/arm64/include/asm/cfi.h | 23 +++++++++++++++++++++++
arch/arm64/kernel/alternative.c | 18 ++++++++++++++++++
arch/arm64/net/bpf_jit_comp.c | 21 ++++++++++++++++++---
3 files changed, 59 insertions(+), 3 deletions(-)
create mode 100644 arch/arm64/include/asm/cfi.h
diff --git a/arch/arm64/include/asm/cfi.h b/arch/arm64/include/asm/cfi.h
new file mode 100644
index 000000000000..670e191f8628
--- /dev/null
+++ b/arch/arm64/include/asm/cfi.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_ARM64_CFI_H
+#define _ASM_ARM64_CFI_H
+
+#ifdef CONFIG_CFI_CLANG
+#define __bpfcall
+static inline int cfi_get_offset(void)
+{
+ return 4;
+}
+#define cfi_get_offset cfi_get_offset
+extern u32 cfi_bpf_hash;
+extern u32 cfi_bpf_subprog_hash;
+extern u32 cfi_get_func_hash(void *func);
+#else
+#define cfi_bpf_hash 0U
+#define cfi_bpf_subprog_hash 0U
+static inline u32 cfi_get_func_hash(void *func)
+{
+ return 0;
+}
+#endif /* CONFIG_CFI_CLANG */
+#endif /* _ASM_ARM64_CFI_H */
diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
index 8ff6610af496..d7a58eca7665 100644
--- a/arch/arm64/kernel/alternative.c
+++ b/arch/arm64/kernel/alternative.c
@@ -8,11 +8,13 @@
#define pr_fmt(fmt) "alternatives: " fmt
+#include <linux/cfi_types.h>
#include <linux/init.h>
#include <linux/cpu.h>
#include <linux/elf.h>
#include <asm/cacheflush.h>
#include <asm/alternative.h>
+#include <asm/cfi.h>
#include <asm/cpufeature.h>
#include <asm/insn.h>
#include <asm/module.h>
@@ -298,3 +300,19 @@ noinstr void alt_cb_patch_nops(struct alt_instr *alt, __le32 *origptr,
updptr[i] = cpu_to_le32(aarch64_insn_gen_nop());
}
EXPORT_SYMBOL(alt_cb_patch_nops);
+
+#ifdef CONFIG_CFI_CLANG
+struct bpf_insn;
+/* Must match bpf_func_t / DEFINE_BPF_PROG_RUN() */
+extern unsigned int __bpf_prog_runX(const void *ctx,
+ const struct bpf_insn *insn);
+DEFINE_CFI_TYPE(cfi_bpf_hash, __bpf_prog_runX);
+/* Must match bpf_callback_t */
+extern u64 __bpf_callback_fn(u64, u64, u64, u64, u64);
+DEFINE_CFI_TYPE(cfi_bpf_subprog_hash, __bpf_callback_fn);
+u32 cfi_get_func_hash(void *func)
+{
+ u32 *hashp = func - cfi_get_offset();
+ return READ_ONCE(*hashp);
+}
+#endif
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 720336d28856..211e1c29f004 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -17,6 +17,7 @@
#include <asm/asm-extable.h>
#include <asm/byteorder.h>
#include <asm/cacheflush.h>
+#include <asm/cfi.h>
#include <asm/debug-monitors.h>
#include <asm/insn.h>
#include <asm/patching.h>
@@ -162,6 +163,12 @@ static inline void emit_bti(u32 insn, struct jit_ctx *ctx)
emit(insn, ctx);
}
+static inline void emit_kcfi(u32 hash, struct jit_ctx *ctx)
+{
+ if (IS_ENABLED(CONFIG_CFI_CLANG))
+ emit(hash, ctx);
+}
+
/*
* Kernel addresses in the vmalloc space use at most 48 bits, and the
* remaining bits are guaranteed to be 0x1. So we can compose the address
@@ -311,7 +318,6 @@ static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf,
const u8 tcc = bpf2a64[TCALL_CNT];
const u8 fpb = bpf2a64[FP_BOTTOM];
const u8 arena_vm_base = bpf2a64[ARENA_VM_START];
- const int idx0 = ctx->idx;
int cur_offset;
/*
@@ -337,6 +343,9 @@ static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf,
*
*/
+ emit_kcfi(is_main_prog ? cfi_bpf_hash : cfi_bpf_subprog_hash, ctx);
+ const int idx0 = ctx->idx;
+
/* bpf function may be invoked by 3 instruction types:
* 1. bl, attached via freplace to bpf prog via short jump
* 2. br, attached via freplace to bpf prog via long jump
@@ -1849,9 +1858,9 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
jit_data->ro_header = ro_header;
}
- prog->bpf_func = (void *)ctx.ro_image;
+ prog->bpf_func = (void *)ctx.ro_image + cfi_get_offset();
prog->jited = 1;
- prog->jited_len = prog_size;
+ prog->jited_len = prog_size - cfi_get_offset();
if (!prog->is_func || extra_pass) {
int i;
@@ -2104,6 +2113,12 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
/* return address locates above FP */
retaddr_off = stack_size + 8;
+ if (flags & BPF_TRAMP_F_INDIRECT) {
+ /*
+ * Indirect call for bpf_struct_ops
+ */
+ emit_kcfi(cfi_get_func_hash(func_addr), ctx);
+ }
/* bpf trampoline may be invoked by 3 instruction types:
* 1. bl, attached to bpf prog or kernel function via short jump
* 2. br, attached to bpf prog or kernel function via long jump
--
2.43.0
* Re: [PATCH bpf-next v7 2/2] arm64/cfi,bpf: Support kCFI + BPF on arm64
2024-06-17 16:31 ` [PATCH bpf-next v7 2/2] arm64/cfi,bpf: Support kCFI + BPF on arm64 Maxwell Bland
@ 2024-06-18 13:30 ` Puranjay Mohan
0 siblings, 0 replies; 4+ messages in thread
From: Puranjay Mohan @ 2024-06-18 13:30 UTC (permalink / raw)
To: Maxwell Bland,
open list:BPF [GENERAL] (Safe Dynamic Programs and Tools)
Cc: Catalin Marinas, Will Deacon, Alexei Starovoitov, Daniel Borkmann,
Andrii Nakryiko, Martin KaFai Lau, Eduard Zingerman, Song Liu,
Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev,
Hao Luo, Jiri Olsa, Zi Shen Lim, Mark Rutland, Suzuki K Poulose,
Mark Brown, linux-arm-kernel, open list, Josh Poimboeuf
Hi Maxwell,
I am happy to test your code every time, but it would be great if you
tested the code before posting it to the list; otherwise it takes
multiple revisions for the patches to be accepted. I understand that
testing this is non-trivial because you need clang and everything, and
you also need arm64 hardware or a QEMU setup. But if you enjoy kernel
development you will find all of this is worth it.
> +u32 cfi_get_func_hash(void *func)
> +{
> + u32 *hashp = func - cfi_get_offset();
> + return READ_ONCE(*hashp);
The above assumes that hashp is always a valid address, and when it is
not, it crashes the kernel:
Building these patches with clang and with CONFIG_CFI_CLANG=y and then
running `sudo ./test_progs -a dummy_st_ops` crashes the kernel like:
Internal error: Oops: 0000000096000006 [#1] SMP
Modules linked in: bpf_testmod(OE) nls_ascii nls_cp437 aes_ce_blk aes_ce_cipher ghash_ce sha1_ce button sunrpc sch_fq_codel dm_mod dax configfs dmi_sysfs sha2_ce sha256_arm64
CPU: 47 PID: 5746 Comm: test_progs Tainted: G W OE 6.10.0-rc2+ #41
Hardware name: Amazon EC2 c6g.16xlarge/, BIOS 1.0 11/1/2018
pstate: 00400005 (nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : cfi_get_func_hash+0xc/0x1c
lr : prepare_trampoline+0xcc/0xf44
sp : ffff80008b1637e0
x29: ffff80008b163810 x28: ffff0003c9e4b0c0 x27: 0000000000000000
x26: 0000000000000010 x25: ffff0003d4ab7000 x24: 0000000000000040
x23: 0000000000000018 x22: 0000000000000037 x21: 0000000000000001
x20: 0000000000000020 x19: ffff80008b163870 x18: 0000000000000000
x17: 00000000ad6b63b6 x16: 00000000ad6b63b6 x15: ffff80008002eed4
x14: ffff80008002eff4 x13: ffff80008b160000 x12: ffff80008b164000
x11: 0000000000000082 x10: 0000000000000010 x9 : ffff80008004b724
x8 : 0000000000000110 x7 : 0000000000000000 x6 : 0000000000000001
x5 : 0000000000000110 x4 : 0000000000000001 x3 : 0000000000000000
x2 : ffff0003d4ab7000 x1 : 000000000000003f x0 : 0000000000000000
Call trace:
cfi_get_func_hash+0xc/0x1c
arch_bpf_trampoline_size+0xe8/0x158
bpf_struct_ops_prepare_trampoline+0x8
[...]
Here is my understanding of the above: we are trying to read the CFI
hash from <func_addr> - 4, but clang does not emit a CFI hash for every
function. When the hash is not emitted, <func_addr> - 4 can be an
invalid address, or can point to something that is not a CFI hash.
In my original patch I had:
u32 cfi_get_func_hash(void *func)
{
u32 hash;
if (get_kernel_nofault(hash, func - cfi_get_offset()))
return 0;
return hash;
}
I think we need to keep the get_kernel_nofault() to fix this issue.
cfi_get_func_hash() in arch/x86/kernel/alternative.c also uses
get_kernel_nofault() and I think it is there for the same reason.
Thanks,
Puranjay