* [PATCH bpf-next v9 0/2] Support kCFI + BPF on arm64
@ 2025-05-05 22:34 Sami Tolvanen
2025-05-05 22:34 ` [PATCH bpf-next v9 1/2] cfi: add C CFI type macro Sami Tolvanen
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: Sami Tolvanen @ 2025-05-05 22:34 UTC
To: bpf, Puranjay Mohan, Alexei Starovoitov, Daniel Borkmann
Cc: Catalin Marinas, Will Deacon, Andrii Nakryiko, Mark Rutland,
linux-arm-kernel, linux-kernel, Maxwell Bland, Sami Tolvanen
Hi folks,
These patches add KCFI types to arm64 BPF JIT output. Puranjay and
Maxwell have been working on this for some time now, but I haven't
seen any progress since June 2024, so I decided to pick up the latest
version[1] posted by Maxwell and fix the few remaining issues I
noticed. I confirmed that with these patches applied, I no longer see
CFI failures when running BPF self-tests on arm64.
[1] https://lore.kernel.org/linux-arm-kernel/ptrugmna4xb5o5lo4xislf4rlz7avdmd4pfho5fjwtjj7v422u@iqrwfrbwuxrq/
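(For reference, the self-tests in question are the ones under
tools/testing/selftests/bpf; a typical run builds that directory and
executes the resulting test_progs binary on a kernel built with
CONFIG_CFI_CLANG=y.)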
Sami
---
v9:
- Rebased to bpf-next/master to fix x86 merge conflicts.
- Fixed checkpatch warnings about Co-developed-by tags and including
<asm/cfi.h>.
- Picked up Tested-by tags.
v8: https://lore.kernel.org/bpf/20250310222942.1988975-4-samitolvanen@google.com/
- Changed DEFINE_CFI_TYPE to use .4byte to match __CFI_TYPE.
- Changed cfi_get_func_hash() to again use get_kernel_nofault().
- Fixed a panic in bpf_jit_free() by resetting prog->bpf_func before
calling bpf_jit_binary_pack_hdr().
---
Mark Rutland (1):
cfi: add C CFI type macro
Puranjay Mohan (1):
arm64/cfi,bpf: Support kCFI + BPF on arm64
arch/arm64/include/asm/cfi.h | 23 ++++++++++++++++++++++
arch/arm64/kernel/alternative.c | 25 +++++++++++++++++++++++
arch/arm64/net/bpf_jit_comp.c | 22 ++++++++++++++++++---
arch/riscv/kernel/cfi.c | 35 +++------------------------------
arch/x86/kernel/alternative.c | 31 +++--------------------------
include/linux/cfi_types.h | 23 ++++++++++++++++++++++
6 files changed, 96 insertions(+), 63 deletions(-)
create mode 100644 arch/arm64/include/asm/cfi.h
base-commit: f263336a41da287c5aebd35be8f1e0422e49bc5c
--
2.49.0.967.g6a0df3ecc3-goog
* [PATCH bpf-next v9 1/2] cfi: add C CFI type macro
2025-05-05 22:34 [PATCH bpf-next v9 0/2] Support kCFI + BPF on arm64 Sami Tolvanen
@ 2025-05-05 22:34 ` Sami Tolvanen
2025-05-05 22:34 ` [PATCH bpf-next v9 2/2] arm64/cfi,bpf: Support kCFI + BPF on arm64 Sami Tolvanen
2025-05-09 18:03 ` [PATCH bpf-next v9 0/2] " Maxwell Bland
2 siblings, 0 replies; 7+ messages in thread
From: Sami Tolvanen @ 2025-05-05 22:34 UTC
To: bpf, Puranjay Mohan, Alexei Starovoitov, Daniel Borkmann
Cc: Catalin Marinas, Will Deacon, Andrii Nakryiko, Mark Rutland,
linux-arm-kernel, linux-kernel, Maxwell Bland, Sami Tolvanen,
Dao Huang
From: Mark Rutland <mark.rutland@arm.com>
Currently x86 and riscv open-code 4 instances of the same logic to
define a u32 variable with the KCFI typeid of a given function.
Replace the duplicate logic with a common macro.
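For background: with kCFI, the compiler places a 32-bit type hash directly
in front of each address-taken function and instruments indirect call sites
to compare the hash found at the call target against the expected hash for
the function pointer's type. A rough C-level sketch of that check follows
(illustrative only: the helper name is made up, the real check is emitted
inline by the compiler, the trap instruction is architecture-specific, and
the hash offset differs on x86):

	/* Illustrative sketch, not real compiler output. */
	typedef unsigned int (*bpf_func_t)(const void *ctx,
					   const struct bpf_insn *insn);

	static void call_bpf_prog(bpf_func_t target, const void *ctx,
				  const struct bpf_insn *insn)
	{
		/* cfi_bpf_hash is the expected hash for this prototype */
		u32 found = *(u32 *)((void *)target - 4);

		if (found != cfi_bpf_hash)
			BUG();	/* real code traps with BRK/UD2 */

		target(ctx, insn);
	}

This is why the u32 variables defined here (cfi_bpf_hash and
cfi_bpf_subprog_hash) are needed: they hold the hashes that compiled
callers expect to find in front of JIT'd BPF programs and callbacks.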
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Co-developed-by: Maxwell Bland <mbland@motorola.com>
Signed-off-by: Maxwell Bland <mbland@motorola.com>
Co-developed-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Tested-by: Dao Huang <huangdao1@oppo.com>
---
arch/riscv/kernel/cfi.c | 35 +++--------------------------------
arch/x86/kernel/alternative.c | 31 +++----------------------------
include/linux/cfi_types.h | 23 +++++++++++++++++++++++
3 files changed, 29 insertions(+), 60 deletions(-)
diff --git a/arch/riscv/kernel/cfi.c b/arch/riscv/kernel/cfi.c
index 64bdd3e1ab8c..e7aec5f36dd5 100644
--- a/arch/riscv/kernel/cfi.c
+++ b/arch/riscv/kernel/cfi.c
@@ -4,6 +4,7 @@
*
* Copyright (C) 2023 Google LLC
*/
+#include <linux/cfi_types.h>
#include <linux/cfi.h>
#include <asm/insn.h>
@@ -82,41 +83,11 @@ struct bpf_insn;
/* Must match bpf_func_t / DEFINE_BPF_PROG_RUN() */
extern unsigned int __bpf_prog_runX(const void *ctx,
const struct bpf_insn *insn);
-
-/*
- * Force a reference to the external symbol so the compiler generates
- * __kcfi_typid.
- */
-__ADDRESSABLE(__bpf_prog_runX);
-
-/* u32 __ro_after_init cfi_bpf_hash = __kcfi_typeid___bpf_prog_runX; */
-asm (
-" .pushsection .data..ro_after_init,\"aw\",@progbits \n"
-" .type cfi_bpf_hash,@object \n"
-" .globl cfi_bpf_hash \n"
-" .p2align 2, 0x0 \n"
-"cfi_bpf_hash: \n"
-" .word __kcfi_typeid___bpf_prog_runX \n"
-" .size cfi_bpf_hash, 4 \n"
-" .popsection \n"
-);
+DEFINE_CFI_TYPE(cfi_bpf_hash, __bpf_prog_runX);
/* Must match bpf_callback_t */
extern u64 __bpf_callback_fn(u64, u64, u64, u64, u64);
-
-__ADDRESSABLE(__bpf_callback_fn);
-
-/* u32 __ro_after_init cfi_bpf_subprog_hash = __kcfi_typeid___bpf_callback_fn; */
-asm (
-" .pushsection .data..ro_after_init,\"aw\",@progbits \n"
-" .type cfi_bpf_subprog_hash,@object \n"
-" .globl cfi_bpf_subprog_hash \n"
-" .p2align 2, 0x0 \n"
-"cfi_bpf_subprog_hash: \n"
-" .word __kcfi_typeid___bpf_callback_fn \n"
-" .size cfi_bpf_subprog_hash, 4 \n"
-" .popsection \n"
-);
+DEFINE_CFI_TYPE(cfi_bpf_subprog_hash, __bpf_callback_fn);
u32 cfi_get_func_hash(void *func)
{
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index bf82c6f7d690..a5147fcd8397 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0-only
#define pr_fmt(fmt) "SMP alternatives: " fmt
+#include <linux/cfi_types.h>
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/perf_event.h>
@@ -947,37 +948,11 @@ struct bpf_insn;
/* Must match bpf_func_t / DEFINE_BPF_PROG_RUN() */
extern unsigned int __bpf_prog_runX(const void *ctx,
const struct bpf_insn *insn);
-
-KCFI_REFERENCE(__bpf_prog_runX);
-
-/* u32 __ro_after_init cfi_bpf_hash = __kcfi_typeid___bpf_prog_runX; */
-asm (
-" .pushsection .data..ro_after_init,\"aw\",@progbits \n"
-" .type cfi_bpf_hash,@object \n"
-" .globl cfi_bpf_hash \n"
-" .p2align 2, 0x0 \n"
-"cfi_bpf_hash: \n"
-" .long __kcfi_typeid___bpf_prog_runX \n"
-" .size cfi_bpf_hash, 4 \n"
-" .popsection \n"
-);
+DEFINE_CFI_TYPE(cfi_bpf_hash, __bpf_prog_runX);
/* Must match bpf_callback_t */
extern u64 __bpf_callback_fn(u64, u64, u64, u64, u64);
-
-KCFI_REFERENCE(__bpf_callback_fn);
-
-/* u32 __ro_after_init cfi_bpf_subprog_hash = __kcfi_typeid___bpf_callback_fn; */
-asm (
-" .pushsection .data..ro_after_init,\"aw\",@progbits \n"
-" .type cfi_bpf_subprog_hash,@object \n"
-" .globl cfi_bpf_subprog_hash \n"
-" .p2align 2, 0x0 \n"
-"cfi_bpf_subprog_hash: \n"
-" .long __kcfi_typeid___bpf_callback_fn \n"
-" .size cfi_bpf_subprog_hash, 4 \n"
-" .popsection \n"
-);
+DEFINE_CFI_TYPE(cfi_bpf_subprog_hash, __bpf_callback_fn);
u32 cfi_get_func_hash(void *func)
{
diff --git a/include/linux/cfi_types.h b/include/linux/cfi_types.h
index 6b8713675765..209c8a16ac4e 100644
--- a/include/linux/cfi_types.h
+++ b/include/linux/cfi_types.h
@@ -41,5 +41,28 @@
SYM_TYPED_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)
#endif
+#else /* __ASSEMBLY__ */
+
+#ifdef CONFIG_CFI_CLANG
+#define DEFINE_CFI_TYPE(name, func) \
+ /* \
+ * Force a reference to the function so the compiler generates \
+ * __kcfi_typeid_<func>. \
+ */ \
+ __ADDRESSABLE(func); \
+ /* u32 name = __kcfi_typeid_<func> */ \
+ extern u32 name; \
+ asm ( \
+ " .pushsection .data..ro_after_init,\"aw\",@progbits \n" \
+ " .type " #name ",@object \n" \
+ " .globl " #name " \n" \
+ " .p2align 2, 0x0 \n" \
+ #name ": \n" \
+ " .4byte __kcfi_typeid_" #func " \n" \
+ " .size " #name ", 4 \n" \
+ " .popsection \n" \
+ );
+#endif
+
#endif /* __ASSEMBLY__ */
#endif /* _LINUX_CFI_TYPES_H */
--
2.49.0.967.g6a0df3ecc3-goog
* [PATCH bpf-next v9 2/2] arm64/cfi,bpf: Support kCFI + BPF on arm64
2025-05-05 22:34 [PATCH bpf-next v9 0/2] Support kCFI + BPF on arm64 Sami Tolvanen
2025-05-05 22:34 ` [PATCH bpf-next v9 1/2] cfi: add C CFI type macro Sami Tolvanen
@ 2025-05-05 22:34 ` Sami Tolvanen
2025-07-11 14:26 ` Will Deacon
2025-05-09 18:03 ` [PATCH bpf-next v9 0/2] " Maxwell Bland
2 siblings, 1 reply; 7+ messages in thread
From: Sami Tolvanen @ 2025-05-05 22:34 UTC
To: bpf, Puranjay Mohan, Alexei Starovoitov, Daniel Borkmann
Cc: Catalin Marinas, Will Deacon, Andrii Nakryiko, Mark Rutland,
linux-arm-kernel, linux-kernel, Maxwell Bland, Puranjay Mohan,
Sami Tolvanen, Dao Huang
From: Puranjay Mohan <puranjay12@gmail.com>
Currently, bpf_dispatcher_*_func() is marked with `__nocfi`, so calling
BPF programs through this interface does not trigger CFI warnings.
However, when BPF programs are called directly from C, e.g. from BPF
helpers or struct_ops, CFI warnings are generated.
Implement proper CFI prologues for BPF programs and callbacks, and drop
__nocfi for arm64. Fix the trampoline generation code to emit a kCFI
prologue when a struct_ops trampoline is being prepared.
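Concretely, the arm64 JIT image now starts with the 32-bit type hash and
prog->bpf_func points just past it. A rough sketch of the resulting layout
(illustrative; the actual code is in the hunks below):

	/*
	 * ro_image + 0:  .word cfi_bpf_hash (or cfi_bpf_subprog_hash)
	 * ro_image + 4:  first prologue instruction   <-- prog->bpf_func
	 *
	 * A kCFI-instrumented caller loads the word at bpf_func - 4 and
	 * compares it against the hash it expects for the prototype.
	 */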
Signed-off-by: Puranjay Mohan <puranjay12@gmail.com>
Co-developed-by: Maxwell Bland <mbland@motorola.com>
Signed-off-by: Maxwell Bland <mbland@motorola.com>
Co-developed-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Tested-by: Dao Huang <huangdao1@oppo.com>
---
arch/arm64/include/asm/cfi.h | 23 +++++++++++++++++++++++
arch/arm64/kernel/alternative.c | 25 +++++++++++++++++++++++++
arch/arm64/net/bpf_jit_comp.c | 22 +++++++++++++++++++---
3 files changed, 67 insertions(+), 3 deletions(-)
create mode 100644 arch/arm64/include/asm/cfi.h
diff --git a/arch/arm64/include/asm/cfi.h b/arch/arm64/include/asm/cfi.h
new file mode 100644
index 000000000000..670e191f8628
--- /dev/null
+++ b/arch/arm64/include/asm/cfi.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_ARM64_CFI_H
+#define _ASM_ARM64_CFI_H
+
+#ifdef CONFIG_CFI_CLANG
+#define __bpfcall
+static inline int cfi_get_offset(void)
+{
+ return 4;
+}
+#define cfi_get_offset cfi_get_offset
+extern u32 cfi_bpf_hash;
+extern u32 cfi_bpf_subprog_hash;
+extern u32 cfi_get_func_hash(void *func);
+#else
+#define cfi_bpf_hash 0U
+#define cfi_bpf_subprog_hash 0U
+static inline u32 cfi_get_func_hash(void *func)
+{
+ return 0;
+}
+#endif /* CONFIG_CFI_CLANG */
+#endif /* _ASM_ARM64_CFI_H */
diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
index 8ff6610af496..71c153488dad 100644
--- a/arch/arm64/kernel/alternative.c
+++ b/arch/arm64/kernel/alternative.c
@@ -8,11 +8,13 @@
#define pr_fmt(fmt) "alternatives: " fmt
+#include <linux/cfi_types.h>
#include <linux/init.h>
#include <linux/cpu.h>
#include <linux/elf.h>
#include <asm/cacheflush.h>
#include <asm/alternative.h>
+#include <asm/cfi.h>
#include <asm/cpufeature.h>
#include <asm/insn.h>
#include <asm/module.h>
@@ -298,3 +300,26 @@ noinstr void alt_cb_patch_nops(struct alt_instr *alt, __le32 *origptr,
updptr[i] = cpu_to_le32(aarch64_insn_gen_nop());
}
EXPORT_SYMBOL(alt_cb_patch_nops);
+
+#ifdef CONFIG_CFI_CLANG
+struct bpf_insn;
+
+/* Must match bpf_func_t / DEFINE_BPF_PROG_RUN() */
+extern unsigned int __bpf_prog_runX(const void *ctx,
+ const struct bpf_insn *insn);
+DEFINE_CFI_TYPE(cfi_bpf_hash, __bpf_prog_runX);
+
+/* Must match bpf_callback_t */
+extern u64 __bpf_callback_fn(u64, u64, u64, u64, u64);
+DEFINE_CFI_TYPE(cfi_bpf_subprog_hash, __bpf_callback_fn);
+
+u32 cfi_get_func_hash(void *func)
+{
+ u32 hash;
+
+ if (get_kernel_nofault(hash, func - cfi_get_offset()))
+ return 0;
+
+ return hash;
+}
+#endif /* CONFIG_CFI_CLANG */
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 70d7c89d3ac9..3b3691e88dd5 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -9,6 +9,7 @@
#include <linux/bitfield.h>
#include <linux/bpf.h>
+#include <linux/cfi.h>
#include <linux/filter.h>
#include <linux/memory.h>
#include <linux/printk.h>
@@ -164,6 +165,12 @@ static inline void emit_bti(u32 insn, struct jit_ctx *ctx)
emit(insn, ctx);
}
+static inline void emit_kcfi(u32 hash, struct jit_ctx *ctx)
+{
+ if (IS_ENABLED(CONFIG_CFI_CLANG))
+ emit(hash, ctx);
+}
+
/*
* Kernel addresses in the vmalloc space use at most 48 bits, and the
* remaining bits are guaranteed to be 0x1. So we can compose the address
@@ -474,7 +481,6 @@ static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf)
const bool is_main_prog = !bpf_is_subprog(prog);
const u8 fp = bpf2a64[BPF_REG_FP];
const u8 arena_vm_base = bpf2a64[ARENA_VM_START];
- const int idx0 = ctx->idx;
int cur_offset;
/*
@@ -500,6 +506,9 @@ static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf)
*
*/
+ emit_kcfi(is_main_prog ? cfi_bpf_hash : cfi_bpf_subprog_hash, ctx);
+ const int idx0 = ctx->idx;
+
/* bpf function may be invoked by 3 instruction types:
* 1. bl, attached via freplace to bpf prog via short jump
* 2. br, attached via freplace to bpf prog via long jump
@@ -2009,9 +2018,9 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
jit_data->ro_header = ro_header;
}
- prog->bpf_func = (void *)ctx.ro_image;
+ prog->bpf_func = (void *)ctx.ro_image + cfi_get_offset();
prog->jited = 1;
- prog->jited_len = prog_size;
+ prog->jited_len = prog_size - cfi_get_offset();
if (!prog->is_func || extra_pass) {
int i;
@@ -2271,6 +2280,12 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
/* return address locates above FP */
retaddr_off = stack_size + 8;
+ if (flags & BPF_TRAMP_F_INDIRECT) {
+ /*
+ * Indirect call for bpf_struct_ops
+ */
+ emit_kcfi(cfi_get_func_hash(func_addr), ctx);
+ }
/* bpf trampoline may be invoked by 3 instruction types:
* 1. bl, attached to bpf prog or kernel function via short jump
* 2. br, attached to bpf prog or kernel function via long jump
@@ -2790,6 +2805,7 @@ void bpf_jit_free(struct bpf_prog *prog)
sizeof(jit_data->header->size));
kfree(jit_data);
}
+ prog->bpf_func -= cfi_get_offset();
hdr = bpf_jit_binary_pack_hdr(prog);
bpf_jit_binary_pack_free(hdr, NULL);
WARN_ON_ONCE(!bpf_prog_kallsyms_verify_off(prog));
--
2.49.0.967.g6a0df3ecc3-goog
* Re: [PATCH bpf-next v9 0/2] Support kCFI + BPF on arm64
2025-05-05 22:34 [PATCH bpf-next v9 0/2] Support kCFI + BPF on arm64 Sami Tolvanen
2025-05-05 22:34 ` [PATCH bpf-next v9 1/2] cfi: add C CFI type macro Sami Tolvanen
2025-05-05 22:34 ` [PATCH bpf-next v9 2/2] arm64/cfi,bpf: Support kCFI + BPF on arm64 Sami Tolvanen
@ 2025-05-09 18:03 ` Maxwell Bland
2 siblings, 0 replies; 7+ messages in thread
From: Maxwell Bland @ 2025-05-09 18:03 UTC
To: Sami Tolvanen
Cc: bpf, Puranjay Mohan, Alexei Starovoitov, Daniel Borkmann,
Catalin Marinas, Will Deacon, Andrii Nakryiko, Mark Rutland,
linux-arm-kernel, linux-kernel, Sami Tolvanen
On Mon, May 05, 2025 at 10:34:38PM +0000, Sami Tolvanen wrote:
> Hi folks,
>
> These patches add KCFI types to arm64 BPF JIT output. Puranjay and
> Maxwell have been working on this for some time now, but I haven't
> seen any progress since June 2024, so I decided to pick up the latest
> version[1] posted by Maxwell and fix the few remaining issues I
> noticed. I confirmed that with these patches applied, I no longer see
> CFI failures when running BPF self-tests on arm64.
Bump! Thank you, Sami, for following up on this; hopefully the maintainers
will have time to take a look!
Regards,
Maxwell
* Re: [PATCH bpf-next v9 2/2] arm64/cfi,bpf: Support kCFI + BPF on arm64
2025-05-05 22:34 ` [PATCH bpf-next v9 2/2] arm64/cfi,bpf: Support kCFI + BPF on arm64 Sami Tolvanen
@ 2025-07-11 14:26 ` Will Deacon
2025-07-11 18:49 ` Sami Tolvanen
0 siblings, 1 reply; 7+ messages in thread
From: Will Deacon @ 2025-07-11 14:26 UTC
To: Sami Tolvanen
Cc: bpf, Puranjay Mohan, Alexei Starovoitov, Daniel Borkmann,
Catalin Marinas, Andrii Nakryiko, Mark Rutland, linux-arm-kernel,
linux-kernel, Maxwell Bland, Puranjay Mohan, Dao Huang
On Mon, May 05, 2025 at 10:34:40PM +0000, Sami Tolvanen wrote:
> From: Puranjay Mohan <puranjay12@gmail.com>
>
> Currently, bpf_dispatcher_*_func() is marked with `__nocfi`, so calling
> BPF programs through this interface does not trigger CFI warnings.
>
> However, when BPF programs are called directly from C, e.g. from BPF
> helpers or struct_ops, CFI warnings are generated.
>
> Implement proper CFI prologues for BPF programs and callbacks, and drop
> __nocfi for arm64. Fix the trampoline generation code to emit a kCFI
> prologue when a struct_ops trampoline is being prepared.
>
> Signed-off-by: Puranjay Mohan <puranjay12@gmail.com>
> Co-developed-by: Maxwell Bland <mbland@motorola.com>
> Signed-off-by: Maxwell Bland <mbland@motorola.com>
> Co-developed-by: Sami Tolvanen <samitolvanen@google.com>
> Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
> Tested-by: Dao Huang <huangdao1@oppo.com>
> ---
> arch/arm64/include/asm/cfi.h | 23 +++++++++++++++++++++++
> arch/arm64/kernel/alternative.c | 25 +++++++++++++++++++++++++
> arch/arm64/net/bpf_jit_comp.c | 22 +++++++++++++++++++---
> 3 files changed, 67 insertions(+), 3 deletions(-)
> create mode 100644 arch/arm64/include/asm/cfi.h
>
> diff --git a/arch/arm64/include/asm/cfi.h b/arch/arm64/include/asm/cfi.h
> new file mode 100644
> index 000000000000..670e191f8628
> --- /dev/null
> +++ b/arch/arm64/include/asm/cfi.h
> @@ -0,0 +1,23 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef _ASM_ARM64_CFI_H
> +#define _ASM_ARM64_CFI_H
> +
> +#ifdef CONFIG_CFI_CLANG
> +#define __bpfcall
> +static inline int cfi_get_offset(void)
> +{
> + return 4;
Needs a comment.
> +}
> +#define cfi_get_offset cfi_get_offset
> +extern u32 cfi_bpf_hash;
> +extern u32 cfi_bpf_subprog_hash;
> +extern u32 cfi_get_func_hash(void *func);
> +#else
> +#define cfi_bpf_hash 0U
> +#define cfi_bpf_subprog_hash 0U
> +static inline u32 cfi_get_func_hash(void *func)
> +{
> + return 0;
> +}
> +#endif /* CONFIG_CFI_CLANG */
> +#endif /* _ASM_ARM64_CFI_H */
This looks like an awful lot of boilerplate to me. The only thing you
seem to need is the CFI offset -- why isn't that just a constant that we
can define (or a Kconfig symbol?).
> diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
> index 8ff6610af496..71c153488dad 100644
> --- a/arch/arm64/kernel/alternative.c
> +++ b/arch/arm64/kernel/alternative.c
> @@ -8,11 +8,13 @@
>
> #define pr_fmt(fmt) "alternatives: " fmt
>
> +#include <linux/cfi_types.h>
> #include <linux/init.h>
> #include <linux/cpu.h>
> #include <linux/elf.h>
> #include <asm/cacheflush.h>
> #include <asm/alternative.h>
> +#include <asm/cfi.h>
> #include <asm/cpufeature.h>
> #include <asm/insn.h>
> #include <asm/module.h>
> @@ -298,3 +300,26 @@ noinstr void alt_cb_patch_nops(struct alt_instr *alt, __le32 *origptr,
> updptr[i] = cpu_to_le32(aarch64_insn_gen_nop());
> }
> EXPORT_SYMBOL(alt_cb_patch_nops);
> +
> +#ifdef CONFIG_CFI_CLANG
> +struct bpf_insn;
> +
> +/* Must match bpf_func_t / DEFINE_BPF_PROG_RUN() */
> +extern unsigned int __bpf_prog_runX(const void *ctx,
> + const struct bpf_insn *insn);
> +DEFINE_CFI_TYPE(cfi_bpf_hash, __bpf_prog_runX);
> +
> +/* Must match bpf_callback_t */
> +extern u64 __bpf_callback_fn(u64, u64, u64, u64, u64);
> +DEFINE_CFI_TYPE(cfi_bpf_subprog_hash, __bpf_callback_fn);
> +
> +u32 cfi_get_func_hash(void *func)
> +{
> + u32 hash;
> +
> + if (get_kernel_nofault(hash, func - cfi_get_offset()))
> + return 0;
> +
> + return hash;
> +}
> +#endif /* CONFIG_CFI_CLANG */
I don't think this should be in alternative.c. It's probably better off
as a 'static inline' in the new cfi.h header.
> diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
> index 70d7c89d3ac9..3b3691e88dd5 100644
> --- a/arch/arm64/net/bpf_jit_comp.c
> +++ b/arch/arm64/net/bpf_jit_comp.c
> @@ -9,6 +9,7 @@
>
> #include <linux/bitfield.h>
> #include <linux/bpf.h>
> +#include <linux/cfi.h>
> #include <linux/filter.h>
> #include <linux/memory.h>
> #include <linux/printk.h>
> @@ -164,6 +165,12 @@ static inline void emit_bti(u32 insn, struct jit_ctx *ctx)
> emit(insn, ctx);
> }
>
> +static inline void emit_kcfi(u32 hash, struct jit_ctx *ctx)
> +{
> + if (IS_ENABLED(CONFIG_CFI_CLANG))
> + emit(hash, ctx);
> +}
> +
> /*
> * Kernel addresses in the vmalloc space use at most 48 bits, and the
> * remaining bits are guaranteed to be 0x1. So we can compose the address
> @@ -474,7 +481,6 @@ static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf)
> const bool is_main_prog = !bpf_is_subprog(prog);
> const u8 fp = bpf2a64[BPF_REG_FP];
> const u8 arena_vm_base = bpf2a64[ARENA_VM_START];
> - const int idx0 = ctx->idx;
> int cur_offset;
>
> /*
> @@ -500,6 +506,9 @@ static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf)
> *
> */
>
> + emit_kcfi(is_main_prog ? cfi_bpf_hash : cfi_bpf_subprog_hash, ctx);
> + const int idx0 = ctx->idx;
> +
> /* bpf function may be invoked by 3 instruction types:
> * 1. bl, attached via freplace to bpf prog via short jump
> * 2. br, attached via freplace to bpf prog via long jump
> @@ -2009,9 +2018,9 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
> jit_data->ro_header = ro_header;
> }
>
> - prog->bpf_func = (void *)ctx.ro_image;
> + prog->bpf_func = (void *)ctx.ro_image + cfi_get_offset();
> prog->jited = 1;
> - prog->jited_len = prog_size;
> + prog->jited_len = prog_size - cfi_get_offset();
Why do we add the offset even when CONFIG_CFI_CLANG is not enabled?
Will
* Re: [PATCH bpf-next v9 2/2] arm64/cfi,bpf: Support kCFI + BPF on arm64
2025-07-11 14:26 ` Will Deacon
@ 2025-07-11 18:49 ` Sami Tolvanen
2025-07-13 11:01 ` Will Deacon
0 siblings, 1 reply; 7+ messages in thread
From: Sami Tolvanen @ 2025-07-11 18:49 UTC
To: Will Deacon
Cc: bpf, Puranjay Mohan, Alexei Starovoitov, Daniel Borkmann,
Catalin Marinas, Andrii Nakryiko, Mark Rutland, linux-arm-kernel,
linux-kernel, Maxwell Bland, Puranjay Mohan, Dao Huang
Hi Will,
On Fri, Jul 11, 2025 at 7:26 AM Will Deacon <will@kernel.org> wrote:
>
> On Mon, May 05, 2025 at 10:34:40PM +0000, Sami Tolvanen wrote:
> > From: Puranjay Mohan <puranjay12@gmail.com>
> >
> > Currently, bpf_dispatcher_*_func() is marked with `__nocfi`, so calling
> > BPF programs through this interface does not trigger CFI warnings.
> >
> > However, when BPF programs are called directly from C, e.g. from BPF
> > helpers or struct_ops, CFI warnings are generated.
> >
> > Implement proper CFI prologues for BPF programs and callbacks, and drop
> > __nocfi for arm64. Fix the trampoline generation code to emit a kCFI
> > prologue when a struct_ops trampoline is being prepared.
> >
> > Signed-off-by: Puranjay Mohan <puranjay12@gmail.com>
> > Co-developed-by: Maxwell Bland <mbland@motorola.com>
> > Signed-off-by: Maxwell Bland <mbland@motorola.com>
> > Co-developed-by: Sami Tolvanen <samitolvanen@google.com>
> > Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
> > Tested-by: Dao Huang <huangdao1@oppo.com>
> > ---
> > arch/arm64/include/asm/cfi.h | 23 +++++++++++++++++++++++
> > arch/arm64/kernel/alternative.c | 25 +++++++++++++++++++++++++
> > arch/arm64/net/bpf_jit_comp.c | 22 +++++++++++++++++++---
> > 3 files changed, 67 insertions(+), 3 deletions(-)
> > create mode 100644 arch/arm64/include/asm/cfi.h
> >
> > diff --git a/arch/arm64/include/asm/cfi.h b/arch/arm64/include/asm/cfi.h
> > new file mode 100644
> > index 000000000000..670e191f8628
> > --- /dev/null
> > +++ b/arch/arm64/include/asm/cfi.h
> > @@ -0,0 +1,23 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +#ifndef _ASM_ARM64_CFI_H
> > +#define _ASM_ARM64_CFI_H
> > +
> > +#ifdef CONFIG_CFI_CLANG
> > +#define __bpfcall
> > +static inline int cfi_get_offset(void)
> > +{
> > + return 4;
>
> Needs a comment.
Ack.
> > +}
> > +#define cfi_get_offset cfi_get_offset
> > +extern u32 cfi_bpf_hash;
> > +extern u32 cfi_bpf_subprog_hash;
> > +extern u32 cfi_get_func_hash(void *func);
> > +#else
> > +#define cfi_bpf_hash 0U
> > +#define cfi_bpf_subprog_hash 0U
> > +static inline u32 cfi_get_func_hash(void *func)
> > +{
> > + return 0;
> > +}
> > +#endif /* CONFIG_CFI_CLANG */
> > +#endif /* _ASM_ARM64_CFI_H */
>
> This looks like an awful lot of boilerplate to me. The only thing you
> seem to need is the CFI offset -- why isn't that just a constant that we
> can define (or a Kconfig symbol?).
The cfi_get_offset function was originally added in commit
4f9087f16651 ("x86/cfi,bpf: Fix BPF JIT call") because the offset can
change on x86 depending on which CFI scheme is enabled at runtime.
Starting with commit 2cd3e3772e41 ("x86/cfi,bpf: Fix bpf_struct_ops
CFI") the function is also called by the generic BPF code, so we can't
trivially replace it with a constant. However, since this defaults to
`4` unless the architecture adds pre-function NOPs, I think we could
simply move the default implementation to include/linux/cfi.h (and
also drop the RISC-V version). Come to think of it, we could probably
move most of this boilerplate to generic code. I'll refactor this and
send a new version.
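A minimal sketch of what that shared default could look like (assuming the
usual #ifndef override pattern; illustrative only, not a tested patch):

	/* include/linux/cfi.h, under CONFIG_CFI_CLANG */
	#ifndef cfi_get_offset
	static inline int cfi_get_offset(void)
	{
		/* one 32-bit type hash word precedes the entry point */
		return 4;
	}
	#define cfi_get_offset cfi_get_offset
	#endif

Architectures that add pre-function NOPs, such as x86, would keep their own
definition.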
> > diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
> > index 8ff6610af496..71c153488dad 100644
> > --- a/arch/arm64/kernel/alternative.c
> > +++ b/arch/arm64/kernel/alternative.c
> > @@ -8,11 +8,13 @@
> >
> > #define pr_fmt(fmt) "alternatives: " fmt
> >
> > +#include <linux/cfi_types.h>
> > #include <linux/init.h>
> > #include <linux/cpu.h>
> > #include <linux/elf.h>
> > #include <asm/cacheflush.h>
> > #include <asm/alternative.h>
> > +#include <asm/cfi.h>
> > #include <asm/cpufeature.h>
> > #include <asm/insn.h>
> > #include <asm/module.h>
> > @@ -298,3 +300,26 @@ noinstr void alt_cb_patch_nops(struct alt_instr *alt, __le32 *origptr,
> > updptr[i] = cpu_to_le32(aarch64_insn_gen_nop());
> > }
> > EXPORT_SYMBOL(alt_cb_patch_nops);
> > +
> > +#ifdef CONFIG_CFI_CLANG
> > +struct bpf_insn;
> > +
> > +/* Must match bpf_func_t / DEFINE_BPF_PROG_RUN() */
> > +extern unsigned int __bpf_prog_runX(const void *ctx,
> > + const struct bpf_insn *insn);
> > +DEFINE_CFI_TYPE(cfi_bpf_hash, __bpf_prog_runX);
> > +
> > +/* Must match bpf_callback_t */
> > +extern u64 __bpf_callback_fn(u64, u64, u64, u64, u64);
> > +DEFINE_CFI_TYPE(cfi_bpf_subprog_hash, __bpf_callback_fn);
> > +
> > +u32 cfi_get_func_hash(void *func)
> > +{
> > + u32 hash;
> > +
> > + if (get_kernel_nofault(hash, func - cfi_get_offset()))
> > + return 0;
> > +
> > + return hash;
> > +}
> > +#endif /* CONFIG_CFI_CLANG */
>
> I don't think this should be in alternative.c. It's probably better off
> as a 'static inline' in the new cfi.h header.
Sure, I'll find a better place for this. RISC-V again seems to have
the exact same function, so I think a __weak implementation in
kernel/cfi.c would work here, allowing x86 to still conveniently
override the function.
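Roughly like this (a sketch only, assuming the arm64/RISC-V body can be
lifted verbatim):

	/* kernel/cfi.c */
	u32 __weak cfi_get_func_hash(void *func)
	{
		u32 hash;

		if (get_kernel_nofault(hash, func - cfi_get_offset()))
			return 0;

		return hash;
	}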
> > diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
> > index 70d7c89d3ac9..3b3691e88dd5 100644
> > --- a/arch/arm64/net/bpf_jit_comp.c
> > +++ b/arch/arm64/net/bpf_jit_comp.c
> > @@ -9,6 +9,7 @@
> >
> > #include <linux/bitfield.h>
> > #include <linux/bpf.h>
> > +#include <linux/cfi.h>
> > #include <linux/filter.h>
> > #include <linux/memory.h>
> > #include <linux/printk.h>
> > @@ -164,6 +165,12 @@ static inline void emit_bti(u32 insn, struct jit_ctx *ctx)
> > emit(insn, ctx);
> > }
> >
> > +static inline void emit_kcfi(u32 hash, struct jit_ctx *ctx)
> > +{
> > + if (IS_ENABLED(CONFIG_CFI_CLANG))
> > + emit(hash, ctx);
> > +}
> > +
> > /*
> > * Kernel addresses in the vmalloc space use at most 48 bits, and the
> > * remaining bits are guaranteed to be 0x1. So we can compose the address
> > @@ -474,7 +481,6 @@ static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf)
> > const bool is_main_prog = !bpf_is_subprog(prog);
> > const u8 fp = bpf2a64[BPF_REG_FP];
> > const u8 arena_vm_base = bpf2a64[ARENA_VM_START];
> > - const int idx0 = ctx->idx;
> > int cur_offset;
> >
> > /*
> > @@ -500,6 +506,9 @@ static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf)
> > *
> > */
> >
> > + emit_kcfi(is_main_prog ? cfi_bpf_hash : cfi_bpf_subprog_hash, ctx);
> > + const int idx0 = ctx->idx;
> > +
> > /* bpf function may be invoked by 3 instruction types:
> > * 1. bl, attached via freplace to bpf prog via short jump
> > * 2. br, attached via freplace to bpf prog via long jump
> > @@ -2009,9 +2018,9 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
> > jit_data->ro_header = ro_header;
> > }
> >
> > - prog->bpf_func = (void *)ctx.ro_image;
> > + prog->bpf_func = (void *)ctx.ro_image + cfi_get_offset();
> > prog->jited = 1;
> > - prog->jited_len = prog_size;
> > + prog->jited_len = prog_size - cfi_get_offset();
>
> Why do we add the offset even when CONFIG_CFI_CLANG is not enabled?
The function returns zero if CFI is not enabled, so I believe it's
just to avoid extra if (IS_ENABLED(CONFIG_CFI_CLANG)) statements in
the code. IMO this is cleaner, but I can certainly change this if you
prefer.
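For reference, the stub in question is the generic default in
include/linux/cfi.h, which is roughly:

	#ifndef cfi_get_offset
	static inline int cfi_get_offset(void)
	{
		return 0;
	}
	#endif

so with CONFIG_CFI_CLANG=n the offset adjustments above compile down to
no-ops.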
Thanks for taking a look!
Sami
* Re: [PATCH bpf-next v9 2/2] arm64/cfi,bpf: Support kCFI + BPF on arm64
2025-07-11 18:49 ` Sami Tolvanen
@ 2025-07-13 11:01 ` Will Deacon
0 siblings, 0 replies; 7+ messages in thread
From: Will Deacon @ 2025-07-13 11:01 UTC
To: Sami Tolvanen
Cc: bpf, Puranjay Mohan, Alexei Starovoitov, Daniel Borkmann,
Catalin Marinas, Andrii Nakryiko, Mark Rutland, linux-arm-kernel,
linux-kernel, Maxwell Bland, Puranjay Mohan, Dao Huang
Hey Sami,
On Fri, Jul 11, 2025 at 11:49:29AM -0700, Sami Tolvanen wrote:
> > > +#define cfi_get_offset cfi_get_offset
> > > +extern u32 cfi_bpf_hash;
> > > +extern u32 cfi_bpf_subprog_hash;
> > > +extern u32 cfi_get_func_hash(void *func);
> > > +#else
> > > +#define cfi_bpf_hash 0U
> > > +#define cfi_bpf_subprog_hash 0U
> > > +static inline u32 cfi_get_func_hash(void *func)
> > > +{
> > > + return 0;
> > > +}
> > > +#endif /* CONFIG_CFI_CLANG */
> > > +#endif /* _ASM_ARM64_CFI_H */
> >
> > This looks like an awful lot of boilerplate to me. The only thing you
> > seem to need is the CFI offset -- why isn't that just a constant that we
> > can define (or a Kconfig symbol?).
>
> The cfi_get_offset function was originally added in commit
> 4f9087f16651 ("x86/cfi,bpf: Fix BPF JIT call") because the offset can
> change on x86 depending on which CFI scheme is enabled at runtime.
> Starting with commit 2cd3e3772e41 ("x86/cfi,bpf: Fix bpf_struct_ops
> CFI") the function is also called by the generic BPF code, so we can't
> trivially replace it with a constant. However, since this defaults to
> `4` unless the architecture adds pre-function NOPs, I think we could
> simply move the default implementation to include/linux/cfi.h (and
> also drop the RISC-V version). Come to think of it, we could probably
> move most of this boilerplate to generic code. I'll refactor this and
> send a new version.
Excellent, thanks.
> > > @@ -2009,9 +2018,9 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
> > > jit_data->ro_header = ro_header;
> > > }
> > >
> > > - prog->bpf_func = (void *)ctx.ro_image;
> > > + prog->bpf_func = (void *)ctx.ro_image + cfi_get_offset();
> > > prog->jited = 1;
> > > - prog->jited_len = prog_size;
> > > + prog->jited_len = prog_size - cfi_get_offset();
> >
> > Why do we add the offset even when CONFIG_CFI_CLANG is not enabled?
>
> The function returns zero if CFI is not enabled, so I believe it's
> just to avoid extra if (IS_ENABLED(CONFIG_CFI_CLANG)) statements in
> the code. IMO this is cleaner, but I can certainly change this if you
> prefer.
Ah, that caught me out because the !CONFIG_CFI_CLANG stub is in the
core code (and I'm extra susceptible to being caught out on a warm
Friday evening!).
Hopefully if you're able to trim down the boilerplate then this will
become more obvious too.
Cheers,
Will