* [PATCH bpf-next v5 00/16] bpf: Support stack arguments for BPF functions and kfuncs
@ 2026-04-17 3:46 Yonghong Song
2026-04-17 3:47 ` [PATCH bpf-next v5 01/16] bpf: Remove unused parameter from check_map_kptr_access() Yonghong Song
` (15 more replies)
0 siblings, 16 replies; 45+ messages in thread
From: Yonghong Song @ 2026-04-17 3:46 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
Currently, bpf function calls and kfuncs are limited to 5 register-based
parameters. For bpf function calls with more than 5 parameters,
developers can force inlining, or pack the extra parameters into a
struct and pass a pointer to that struct. But there is no workaround
for kfuncs if more than 5 parameters are needed.
This patch set lifts the 5-argument limit by introducing stack-based
argument passing for BPF functions and kfuncs, coordinated with
compiler support in LLVM [1]. The compiler emits loads/stores through
a new bpf register r11 (BPF_REG_PARAMS) to pass arguments beyond
the 5th, keeping the stack arg area separate from the r10-based program
stack. The maximum number of arguments is capped at MAX_BPF_FUNC_ARGS
(12), which is sufficient for the vast majority of use cases.
In the verifier, r11-based stores can survive bpf-to-bpf and kfunc
calls. For example:
*(u64 *)(r11 - 8) = r6;
*(u64 *)(r11 - 16) = r7;
call bar1; // arg6 = r6, arg7 = r7
call bar2; // reuses same arg6, arg7 without re-storing
If a different parameter value is needed for bar2(), a new store can be
placed after bar1() and before bar2():
*(u64 *)(r11 - 8) = r6;
*(u64 *)(r11 - 16) = r7;
call bar1; // arg6 = r6, arg7 = r7
*(u64 *)(r11 - 16) = r8;
call bar2; // arg6 = r6, arg7 = r8
The x86_64 JIT translates r11-relative accesses to RBP-relative
native instructions. Each function's stack allocation is extended
by 'max_outgoing' bytes to hold the outgoing arg area below the
callee-saved registers. This simplifies the implementation, since the
existing r10-based addressing scheme can be reused for stack argument
access. At both bpf-to-bpf and kfunc calls, outgoing args are stored
directly at the locations expected by the calling convention, and the
callee reads incoming parameters directly from the caller's stack.
To support kfunc stack arguments, the existing code is first
refactored/modified to use bpf_reg_state as much as possible instead of
a regno, and to pass a single non-negative 'argno' that is encoded to
cover both register and stack arguments.
Global subprogs with >5 args are not yet supported. Only x86_64
is supported for now.
As for the patch breakdown: patches 1-4 are preparatory changes to ease
the later kfunc stack argument support. Patches 5-8 support bpf-to-bpf
stack arguments. Patch 9 rejects stack arguments in interpreted
(non-JITed) programs. Patch 10 rejects stack arguments in subprogs that
are tail-call reachable. Patch 11 adds stack argument support for
kfuncs. Patch 12 enables stack arguments for x86_64 and patch 13
implements the x86_64 JIT support. Patches 14-16 add test cases.
[1] https://github.com/llvm/llvm-project/pull/189060
Note:
- The patch set is on top of the following commit:
1f5ffc672165 Fix mismerge of the arm64 / timer-core interrupt handling changes
- This patch set requires the latest llvm23 compiler. A build failure
  like the following may appear:
/home/yhs/work/bpf-next/scripts/mod/modpost.c:59:13: error: variable 'extra_warn' set but not used [-Werror,-Wunused-but-set-global]
59 | static bool extra_warn;
| ^
1 error generated.
In this case, the following hack can work around the build issue:
--- a/Makefile
+++ b/Makefile
@@ -467,7 +467,7 @@ KERNELDOC = $(srctree)/tools/docs/kernel-doc
export KERNELDOC
KBUILD_USERHOSTCFLAGS := -Wall -Wmissing-prototypes -Wstrict-prototypes \
- -O2 -fomit-frame-pointer -std=gnu11
+ -O2 -fomit-frame-pointer -std=gnu11 -Wno-unused-but-set-global
KBUILD_USERCFLAGS := $(KBUILD_USERHOSTCFLAGS) $(USERCFLAGS)
KBUILD_USERLDFLAGS := $(USERLDFLAGS)
Changelogs:
v4 -> v5:
- v4: https://lore.kernel.org/bpf/20260412045826.254200-1-yonghong.song@linux.dev/
- Use r11 instead of r12; llvm is also updated to use r11.
- Change the int 'reg_or_arg' to a u32 'argno', where 'argno' is encoded to
  cover both bpf registers and stack arguments.
- Track a per-state bitmask 'out_stack_arg_mask' for r11-based stores, so at any
  particular call site it is known which stores are available. This is important
  since the stores may be in different basic blocks.
- Previously, all store slots were invalidated after each call. This patch set
  disables such invalidation.
- Ensure the r11 reg only appears in allowed insns. Also avoid r11 for reg
  tracking purposes.
- Make stack_arg_regs more similar to regular regs (struct bpf_reg_state *).
- Reorder r11-based stores from "arg6:off:-24, arg7:off:-16, arg8:off:-8" to
  "arg6:off:-8, arg7:off:-16, arg8:off:-24".
- Add a few more tests, including e.g., two callees with different numbers of
  stack arguments, shared r11 stores in different branches, etc.
v3 -> v4:
- v3: https://lore.kernel.org/bpf/20260405172505.1329392-1-yonghong.song@linux.dev/
- Refactor/modify code to make the later kfunc stack argument support easier.
- Invalidate outgoing slots immediately after the call to prevent reuse
- Fix interaction between stack argument PTR_TO_STACK and dead slot poisoning
- Reject stack arguments if tail call reachable
- Disable private stack if stack argument is used
- Allocate the outgoing stack argument region after the callee-saved registers,
  which simplifies the JITed code a lot.
v2 -> v3:
- v2: https://lore.kernel.org/bpf/20260405165300.826241-1-yonghong.song@linux.dev/
- Fix selftest stack_arg_gap_at_minus8().
- Fix a few 'UTF-8' issues.
v1 -> v2:
- v1: https://lore.kernel.org/bpf/20260402012727.3916819-1-yonghong.song@linux.dev/
- Add stack_arg_safe() to do pruning for stack arguments.
- Fix an issue with KF_ARG_PTR_TO_MEM_SIZE. Since a faked register is
  used, add verifier log messages to indicate the start and end of such
  faked register usage.
- For the x86_64 JIT, copy incoming parameter values directly from the caller's stack.
- Add test cases with stack arguments e.g. mem, mem+size, dynptr, iter, etc.
Yonghong Song (16):
bpf: Remove unused parameter from check_map_kptr_access()
bpf: Refactor to avoid redundant calculation of bpf_reg_state
bpf: Refactor to handle memory and size together
bpf: Prepare verifier logs for upcoming kfunc stack arguments
bpf: Introduce bpf register BPF_REG_PARAMS
bpf: Limit the scope of BPF_REG_PARAMS usage
bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments
bpf: Support stack arguments for bpf functions
bpf: Reject stack arguments in non-JITed programs
bpf: Reject stack arguments if tail call reachable
bpf: Support stack arguments for kfunc calls
bpf: Enable stack argument support for x86_64
bpf,x86: Implement JIT support for stack arguments
selftests/bpf: Add tests for BPF function stack arguments
selftests/bpf: Add negative test for greater-than-8-byte kfunc stack
argument
selftests/bpf: Add verifier tests for stack argument validation
arch/x86/net/bpf_jit_comp.c | 154 ++-
include/linux/bpf.h | 6 +
include/linux/bpf_verifier.h | 29 +-
include/linux/filter.h | 6 +-
kernel/bpf/btf.c | 20 +-
kernel/bpf/const_fold.c | 9 +-
kernel/bpf/core.c | 11 +-
kernel/bpf/fixups.c | 32 +-
kernel/bpf/liveness.c | 9 +-
kernel/bpf/states.c | 41 +
kernel/bpf/verifier.c | 1200 +++++++++++------
.../testing/selftests/bpf/prog_tests/bpf_nf.c | 22 +-
.../selftests/bpf/prog_tests/cb_refs.c | 2 +-
.../selftests/bpf/prog_tests/ctx_rewrite.c | 14 +-
.../selftests/bpf/prog_tests/kfunc_call.c | 2 +-
.../selftests/bpf/prog_tests/linked_list.c | 4 +-
.../selftests/bpf/prog_tests/stack_arg.c | 133 ++
.../selftests/bpf/prog_tests/stack_arg_fail.c | 24 +
.../selftests/bpf/prog_tests/verifier.c | 2 +
.../selftests/bpf/progs/cgrp_kfunc_failure.c | 14 +-
.../selftests/bpf/progs/cpumask_failure.c | 10 +-
.../testing/selftests/bpf/progs/dynptr_fail.c | 22 +-
.../selftests/bpf/progs/file_reader_fail.c | 4 +-
tools/testing/selftests/bpf/progs/irq.c | 4 +-
tools/testing/selftests/bpf/progs/iters.c | 6 +-
.../selftests/bpf/progs/iters_state_safety.c | 14 +-
.../selftests/bpf/progs/iters_testmod.c | 4 +-
.../selftests/bpf/progs/iters_testmod_seq.c | 4 +-
.../selftests/bpf/progs/map_kptr_fail.c | 2 +-
.../selftests/bpf/progs/percpu_alloc_fail.c | 4 +-
.../testing/selftests/bpf/progs/rbtree_fail.c | 6 +-
.../bpf/progs/refcounted_kptr_fail.c | 2 +-
tools/testing/selftests/bpf/progs/stack_arg.c | 254 ++++
.../selftests/bpf/progs/stack_arg_fail.c | 32 +
.../selftests/bpf/progs/stack_arg_kfunc.c | 164 +++
.../testing/selftests/bpf/progs/stream_fail.c | 2 +-
.../selftests/bpf/progs/task_kfunc_failure.c | 18 +-
.../selftests/bpf/progs/task_work_fail.c | 6 +-
.../selftests/bpf/progs/test_bpf_nf_fail.c | 8 +-
.../bpf/progs/test_kfunc_dynptr_param.c | 2 +-
.../bpf/progs/test_kfunc_param_nullable.c | 2 +-
.../selftests/bpf/progs/verifier_bits_iter.c | 4 +-
.../bpf/progs/verifier_bpf_fastcall.c | 24 +-
.../selftests/bpf/progs/verifier_may_goto_1.c | 12 +-
.../bpf/progs/verifier_ref_tracking.c | 6 +-
.../selftests/bpf/progs/verifier_sdiv.c | 64 +-
.../selftests/bpf/progs/verifier_stack_arg.c | 463 +++++++
.../selftests/bpf/progs/verifier_vfs_reject.c | 8 +-
.../testing/selftests/bpf/progs/wq_failures.c | 2 +-
.../selftests/bpf/test_kmods/bpf_testmod.c | 73 +
.../bpf/test_kmods/bpf_testmod_kfunc.h | 26 +
tools/testing/selftests/bpf/verifier/calls.c | 14 +-
52 files changed, 2442 insertions(+), 558 deletions(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/stack_arg.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/stack_arg_fail.c
create mode 100644 tools/testing/selftests/bpf/progs/stack_arg.c
create mode 100644 tools/testing/selftests/bpf/progs/stack_arg_fail.c
create mode 100644 tools/testing/selftests/bpf/progs/stack_arg_kfunc.c
create mode 100644 tools/testing/selftests/bpf/progs/verifier_stack_arg.c
--
2.52.0
^ permalink raw reply	[flat|nested] 45+ messages in thread

* [PATCH bpf-next v5 01/16] bpf: Remove unused parameter from check_map_kptr_access()
  2026-04-17  3:46 [PATCH bpf-next v5 00/16] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
@ 2026-04-17  3:47 ` Yonghong Song
  2026-04-17  3:47 ` [PATCH bpf-next v5 02/16] bpf: Refactor to avoid redundant calculation of bpf_reg_state Yonghong Song
  ` (14 subsequent siblings)
  15 siblings, 0 replies; 45+ messages in thread
From: Yonghong Song @ 2026-04-17  3:47 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
      Jose E . Marchesi, kernel-team, Martin KaFai Lau

The parameter 'regno' in check_map_kptr_access() is unused. Remove it.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 kernel/bpf/verifier.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9e4980128151..44d0af8f73d1 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4711,7 +4711,7 @@ static int mark_uptr_ld_reg(struct bpf_verifier_env *env, u32 regno,
 	return 0;
 }
 
-static int check_map_kptr_access(struct bpf_verifier_env *env, u32 regno,
+static int check_map_kptr_access(struct bpf_verifier_env *env,
 				 int value_regno, int insn_idx,
 				 struct btf_field *kptr_field)
 {
@@ -6358,7 +6358,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 		kptr_field = btf_record_find(reg->map_ptr->record,
 					     off + reg->var_off.value, BPF_KPTR | BPF_UPTR);
 		if (kptr_field) {
-			err = check_map_kptr_access(env, regno, value_regno, insn_idx, kptr_field);
+			err = check_map_kptr_access(env, value_regno, insn_idx, kptr_field);
 		} else if (t == BPF_READ && value_regno >= 0) {
 			struct bpf_map *map = reg->map_ptr;
 
-- 
2.52.0

^ permalink raw reply related	[flat|nested] 45+ messages in thread
* [PATCH bpf-next v5 02/16] bpf: Refactor to avoid redundant calculation of bpf_reg_state
  2026-04-17  3:46 [PATCH bpf-next v5 00/16] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
  2026-04-17  3:47 ` [PATCH bpf-next v5 01/16] bpf: Remove unused parameter from check_map_kptr_access() Yonghong Song
@ 2026-04-17  3:47 ` Yonghong Song
  2026-04-17  3:47 ` [PATCH bpf-next v5 03/16] bpf: Refactor to handle memory and size together Yonghong Song
  ` (13 subsequent siblings)
  15 siblings, 0 replies; 45+ messages in thread
From: Yonghong Song @ 2026-04-17  3:47 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
      Jose E . Marchesi, kernel-team, Martin KaFai Lau

In many cases, once a bpf_reg_state is obtained, it can be passed down
to callees. Otherwise, a callee needs to compute the bpf_reg_state again
based on the regno. More importantly, this is needed for the later kfunc
stack argument support, since the register state for a stack argument
does not have a corresponding regno. So it makes sense to pass the reg
state down to callees.

The following is the only change needed to avoid a compilation warning:

  static int sanitize_check_bounds(struct bpf_verifier_env *env,
				   const struct bpf_insn *insn,
  -				   const struct bpf_reg_state *dst_reg)
  +				   struct bpf_reg_state *dst_reg)

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 kernel/bpf/verifier.c | 213 ++++++++++++++++++------------------
 1 file changed, 93 insertions(+), 120 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 44d0af8f73d1..2bedaa193d54 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3933,13 +3933,13 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
 static int check_stack_write_var_off(struct bpf_verifier_env *env,
 				     /* func where register points to */
 				     struct bpf_func_state *state,
-				     int ptr_regno, int off, int size,
+				     struct bpf_reg_state *ptr_reg, int off, int size,
 				     int value_regno, int insn_idx)
 {
 	struct bpf_func_state *cur; /* state of the current function */
 	int min_off, max_off;
 	int i, err;
-	struct bpf_reg_state *ptr_reg = NULL, *value_reg = NULL;
+	struct bpf_reg_state *value_reg = NULL;
 	struct bpf_insn *insn = &env->prog->insnsi[insn_idx];
 	bool writing_zero = false;
 	/* set if the fact that we're writing a zero is used to let any
@@ -3948,7 +3948,6 @@ static int check_stack_write_var_off(struct bpf_verifier_env *env,
 	bool zero_used = false;
 
 	cur = env->cur_state->frame[env->cur_state->curframe];
-	ptr_reg = &cur->regs[ptr_regno];
 	min_off = ptr_reg->smin_value + off;
 	max_off = ptr_reg->smax_value + off + size;
 	if (value_regno >= 0)
@@ -4245,7 +4244,7 @@ enum bpf_access_src {
 	ACCESS_HELPER = 2, /* the access is performed by a helper */
 };
 
-static int check_stack_range_initialized(struct bpf_verifier_env *env,
+static int check_stack_range_initialized(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
 					 int regno, int off, int access_size,
 					 bool zero_size_allowed, enum bpf_access_type type,
 					 struct bpf_call_arg_meta *meta)
@@ -4269,18 +4268,16 @@ static struct bpf_reg_state *reg_state(struct bpf_verifier_env *env, int regno)
  * offset; for a fixed offset check_stack_read_fixed_off should be used
  * instead.
  */
-static int check_stack_read_var_off(struct bpf_verifier_env *env,
+static int check_stack_read_var_off(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
 				    int ptr_regno, int off, int size, int dst_regno)
 {
-	/* The state of the source register. */
-	struct bpf_reg_state *reg = reg_state(env, ptr_regno);
 	struct bpf_func_state *ptr_state = bpf_func(env, reg);
 	int err;
 	int min_off, max_off;
 
 	/* Note that we pass a NULL meta, so raw access will not be permitted. */
-	err = check_stack_range_initialized(env, ptr_regno, off, size,
+	err = check_stack_range_initialized(env, reg, ptr_regno, off, size,
 					    false, BPF_READ, NULL);
 	if (err)
 		return err;
@@ -4302,10 +4299,9 @@ static int check_stack_read_var_off(struct bpf_verifier_env *env,
  * can be -1, meaning that the read value is not going to a register.
  */
 static int check_stack_read(struct bpf_verifier_env *env,
-			    int ptr_regno, int off, int size,
+			    struct bpf_reg_state *reg, int ptr_regno, int off, int size,
 			    int dst_regno)
 {
-	struct bpf_reg_state *reg = reg_state(env, ptr_regno);
 	struct bpf_func_state *state = bpf_func(env, reg);
 	int err;
 	/* Some accesses are only permitted with a static offset. */
@@ -4341,7 +4337,7 @@ static int check_stack_read(struct bpf_verifier_env *env,
 		 * than fixed offset ones. Note that dst_regno >= 0 on this
 		 * branch.
 		 */
-		err = check_stack_read_var_off(env, ptr_regno, off, size,
+		err = check_stack_read_var_off(env, reg, ptr_regno, off, size,
 					       dst_regno);
 	}
 	return err;
@@ -4358,10 +4354,9 @@ static int check_stack_read(struct bpf_verifier_env *env,
  * The caller must ensure that the offset falls within the maximum stack size.
  */
 static int check_stack_write(struct bpf_verifier_env *env,
-			     int ptr_regno, int off, int size,
+			     struct bpf_reg_state *reg, int off, int size,
 			     int value_regno, int insn_idx)
 {
-	struct bpf_reg_state *reg = reg_state(env, ptr_regno);
 	struct bpf_func_state *state = bpf_func(env, reg);
 	int err;
 
@@ -4374,16 +4369,15 @@ static int check_stack_write(struct bpf_verifier_env *env,
 		 * than fixed offset ones.
 		 */
 		err = check_stack_write_var_off(env, state,
-						ptr_regno, off, size,
+						reg, off, size,
 						value_regno, insn_idx);
 	}
 	return err;
 }
 
-static int check_map_access_type(struct bpf_verifier_env *env, u32 regno,
+static int check_map_access_type(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
 				 int off, int size, enum bpf_access_type type)
 {
-	struct bpf_reg_state *reg = reg_state(env, regno);
 	struct bpf_map *map = reg->map_ptr;
 	u32 cap = bpf_map_flags_to_cap(map);
 
@@ -4403,17 +4397,15 @@ static int check_map_access_type(struct bpf_verifier_env *env, u32 regno,
 }
 
 /* check read/write into memory region (e.g., map value, ringbuf sample, etc) */
-static int __check_mem_access(struct bpf_verifier_env *env, int regno,
+static int __check_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
 			      int off, int size, u32 mem_size,
 			      bool zero_size_allowed)
 {
 	bool size_ok = size > 0 || (size == 0 && zero_size_allowed);
-	struct bpf_reg_state *reg;
 
 	if (off >= 0 && size_ok && (u64)off + size <= mem_size)
 		return 0;
 
-	reg = &cur_regs(env)[regno];
 	switch (reg->type) {
 	case PTR_TO_MAP_KEY:
 		verbose(env, "invalid access to map key, key_size=%d off=%d size=%d\n",
@@ -4443,13 +4435,10 @@ static int __check_mem_access(struct bpf_verifier_env *env, int regno,
 }
 
 /* check read/write into a memory region with possible variable offset */
-static int check_mem_region_access(struct bpf_verifier_env *env, u32 regno,
+static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
 				   int off, int size, u32 mem_size,
 				   bool zero_size_allowed)
 {
-	struct bpf_verifier_state *vstate = env->cur_state;
-	struct bpf_func_state *state = vstate->frame[vstate->curframe];
-	struct bpf_reg_state *reg = &state->regs[regno];
 	int err;
 
 	/* We may have adjusted the register pointing to memory region, so we
@@ -4470,7 +4459,7 @@ static int check_mem_region_access(struct bpf_verifier_env *env, u32 regno,
 			regno);
 		return -EACCES;
 	}
-	err = __check_mem_access(env, regno, reg->smin_value + off, size,
+	err = __check_mem_access(env, reg, regno, reg->smin_value + off, size,
 				 mem_size, zero_size_allowed);
 	if (err) {
 		verbose(env, "R%d min value is outside of the allowed memory range\n",
@@ -4487,7 +4476,7 @@ static int check_mem_region_access(struct bpf_verifier_env *env, u32 regno,
 			regno);
 		return -EACCES;
 	}
-	err = __check_mem_access(env, regno, reg->umax_value + off, size,
+	err = __check_mem_access(env, reg, regno, reg->umax_value + off, size,
 				 mem_size, zero_size_allowed);
 	if (err) {
 		verbose(env, "R%d max value is outside of the allowed memory range\n",
@@ -4788,19 +4777,16 @@ static u32 map_mem_size(const struct bpf_map *map)
 }
 
 /* check read/write into a map element with possible variable offset */
-static int check_map_access(struct bpf_verifier_env *env, u32 regno,
+static int check_map_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
 			    int off, int size, bool zero_size_allowed,
 			    enum bpf_access_src src)
 {
-	struct bpf_verifier_state *vstate = env->cur_state;
-	struct bpf_func_state *state = vstate->frame[vstate->curframe];
-	struct bpf_reg_state *reg = &state->regs[regno];
 	struct bpf_map *map = reg->map_ptr;
 	u32 mem_size = map_mem_size(map);
 	struct btf_record *rec;
 	int err, i;
 
-	err = check_mem_region_access(env, regno, off, size, mem_size, zero_size_allowed);
+	err = check_mem_region_access(env, reg, regno, off, size, mem_size, zero_size_allowed);
 	if (err)
 		return err;
 
@@ -4896,10 +4882,9 @@ static bool may_access_direct_pkt_data(struct bpf_verifier_env *env,
 	}
 }
 
-static int check_packet_access(struct bpf_verifier_env *env, u32 regno, int off,
+static int check_packet_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno, int off,
 			       int size, bool zero_size_allowed)
 {
-	struct bpf_reg_state *reg = reg_state(env, regno);
 	int err;
 
 	if (reg->range < 0) {
@@ -4907,7 +4892,7 @@ static int check_packet_access(struct bpf_verifier_env *env, u32 regno, int off,
 		return -EINVAL;
 	}
 
-	err = check_mem_region_access(env, regno, off, size, reg->range, zero_size_allowed);
+	err = check_mem_region_access(env, reg, regno, off, size, reg->range, zero_size_allowed);
 	if (err)
 		return err;
 
@@ -4962,7 +4947,7 @@ static int __check_ctx_access(struct bpf_verifier_env *env, int insn_idx, int of
 	return -EACCES;
 }
 
-static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, u32 regno,
+static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, u32 regno,
 			    int off, int access_size, enum bpf_access_type t,
 			    struct bpf_insn_access_aux *info)
 {
@@ -4972,12 +4957,10 @@ static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 	 */
 	bool var_off_ok = is_var_ctx_off_allowed(env->prog);
 	bool fixed_off_ok = !env->ops->convert_ctx_access;
-	struct bpf_reg_state *regs = cur_regs(env);
-	struct bpf_reg_state *reg = regs + regno;
 	int err;
 
 	if (var_off_ok)
-		err = check_mem_region_access(env, regno, off, access_size, U16_MAX, false);
+		err = check_mem_region_access(env, reg, regno, off, access_size, U16_MAX, false);
 	else
 		err = __check_ptr_off_reg(env, reg, regno, fixed_off_ok);
 	if (err)
@@ -5003,10 +4986,9 @@ static int check_flow_keys_access(struct bpf_verifier_env *env, int off,
 }
 
 static int check_sock_access(struct bpf_verifier_env *env, int insn_idx,
-			     u32 regno, int off, int size,
+			     struct bpf_reg_state *reg, u32 regno, int off, int size,
 			     enum bpf_access_type t)
 {
-	struct bpf_reg_state *reg = reg_state(env, regno);
 	struct bpf_insn_access_aux info = {};
 	bool valid;
 
@@ -5969,12 +5951,11 @@ static bool type_is_trusted_or_null(struct bpf_verifier_env *env,
 }
 
 static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
-				   struct bpf_reg_state *regs,
+				   struct bpf_reg_state *regs, struct bpf_reg_state *reg,
 				   int regno, int off, int size,
 				   enum bpf_access_type atype,
 				   int value_regno)
 {
-	struct bpf_reg_state *reg = regs + regno;
 	const struct btf_type *t = btf_type_by_id(reg->btf, reg->btf_id);
 	const char *tname = btf_name_by_offset(reg->btf, t->name_off);
 	const char *field_name = NULL;
@@ -6126,12 +6107,11 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
 }
 
 static int check_ptr_to_map_access(struct bpf_verifier_env *env,
-				   struct bpf_reg_state *regs,
+				   struct bpf_reg_state *regs, struct bpf_reg_state *reg,
 				   int regno, int off, int size,
 				   enum bpf_access_type atype,
 				   int value_regno)
 {
-	struct bpf_reg_state *reg = regs + regno;
 	struct bpf_map *map = reg->map_ptr;
 	struct bpf_reg_state map_reg;
 	enum bpf_type_flag flag = 0;
@@ -6220,11 +6200,10 @@ static int check_stack_slot_within_bounds(struct bpf_verifier_env *env,
  * 'off' includes `regno->offset`, but not its dynamic part (if any).
  */
 static int check_stack_access_within_bounds(
-		struct bpf_verifier_env *env,
+		struct bpf_verifier_env *env, struct bpf_reg_state *reg,
 		int regno, int off, int access_size,
 		enum bpf_access_type type)
 {
-	struct bpf_reg_state *reg = reg_state(env, regno);
 	struct bpf_func_state *state = bpf_func(env, reg);
 	s64 min_off, max_off;
 	int err;
@@ -6312,12 +6291,11 @@ static void add_scalar_to_reg(struct bpf_reg_state *dst_reg, s64 val)
  * if t==write && value_regno==-1, some unknown value is stored into memory
  * if t==read && value_regno==-1, don't care what we read from memory
 */
-static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regno,
+static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, u32 regno,
 			    int off, int bpf_size, enum bpf_access_type t,
 			    int value_regno, bool strict_alignment_once, bool is_ldsx)
 {
 	struct bpf_reg_state *regs = cur_regs(env);
-	struct bpf_reg_state *reg = regs + regno;
 	int size, err = 0;
 
 	size = bpf_size_to_bytes(bpf_size);
@@ -6334,7 +6312,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 			return -EACCES;
 		}
 
-		err = check_mem_region_access(env, regno, off, size,
+		err = check_mem_region_access(env, reg, regno, off, size,
 					      reg->map_ptr->key_size, false);
 		if (err)
 			return err;
@@ -6348,10 +6326,10 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 			verbose(env, "R%d leaks addr into map\n", value_regno);
 			return -EACCES;
 		}
-		err = check_map_access_type(env, regno, off, size, t);
+		err = check_map_access_type(env, reg, regno, off, size, t);
 		if (err)
 			return err;
-		err = check_map_access(env, regno, off, size, false, ACCESS_DIRECT);
+		err = check_map_access(env, reg, regno, off, size, false, ACCESS_DIRECT);
 		if (err)
 			return err;
 		if (tnum_is_const(reg->var_off))
@@ -6420,7 +6398,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 		 * instructions, hence no need to check bounds in that case.
 		 */
 		if (!rdonly_untrusted)
-			err = check_mem_region_access(env, regno, off, size,
+			err = check_mem_region_access(env, reg, regno, off, size,
 						      reg->mem_size, false);
 		if (!err && value_regno >= 0 && (t == BPF_READ || rdonly_mem))
 			mark_reg_unknown(env, regs, value_regno);
@@ -6438,7 +6416,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 			return -EACCES;
 		}
 
-		err = check_ctx_access(env, insn_idx, regno, off, size, t, &info);
+		err = check_ctx_access(env, insn_idx, reg, regno, off, size, t, &info);
 		if (!err && t == BPF_READ && value_regno >= 0) {
 			/* ctx access returns either a scalar, or a
 			 * PTR_TO_PACKET[_META,_END]. In the latter
@@ -6475,15 +6453,15 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 
 	} else if (reg->type == PTR_TO_STACK) {
 		/* Basic bounds checks. */
-		err = check_stack_access_within_bounds(env, regno, off, size, t);
+		err = check_stack_access_within_bounds(env, reg, regno, off, size, t);
 		if (err)
 			return err;
 
 		if (t == BPF_READ)
-			err = check_stack_read(env, regno, off, size,
+			err = check_stack_read(env, reg, regno, off, size,
 					       value_regno);
 		else
-			err = check_stack_write(env, regno, off, size,
+			err = check_stack_write(env, reg, off, size,
 						value_regno, insn_idx);
 	} else if (reg_is_pkt_pointer(reg)) {
 		if (t == BPF_WRITE && !may_access_direct_pkt_data(env, NULL, t)) {
@@ -6496,7 +6474,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 				value_regno);
 			return -EACCES;
 		}
-		err = check_packet_access(env, regno, off, size, false);
+		err = check_packet_access(env, reg, regno, off, size, false);
 		if (!err && t == BPF_READ && value_regno >= 0)
 			mark_reg_unknown(env, regs, value_regno);
 	} else if (reg->type == PTR_TO_FLOW_KEYS) {
@@ -6516,7 +6494,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 				regno, reg_type_str(env, reg->type));
 			return -EACCES;
 		}
-		err = check_sock_access(env, insn_idx, regno, off, size, t);
+		err = check_sock_access(env, insn_idx, reg, regno, off, size, t);
 		if (!err && value_regno >= 0)
 			mark_reg_unknown(env, regs, value_regno);
 	} else if (reg->type == PTR_TO_TP_BUFFER) {
@@ -6525,10 +6503,10 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 			mark_reg_unknown(env, regs, value_regno);
 	} else if (base_type(reg->type) == PTR_TO_BTF_ID &&
 		   !type_may_be_null(reg->type)) {
-		err = check_ptr_to_btf_access(env, regs, regno, off, size, t,
+		err = check_ptr_to_btf_access(env, regs, reg, regno, off, size, t,
 					      value_regno);
 	} else if (reg->type == CONST_PTR_TO_MAP) {
-		err = check_ptr_to_map_access(env, regs, regno, off, size, t,
+		err = check_ptr_to_map_access(env, regs, reg, regno, off, size, t,
 					      value_regno);
 	} else if (base_type(reg->type) == PTR_TO_BUF &&
 		   !type_may_be_null(reg->type)) {
@@ -6597,7 +6575,7 @@ static int check_load_mem(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	/* Check if (src_reg + off) is readable. The state of dst_reg will be
 	 * updated by this call.
 	 */
-	err = check_mem_access(env, env->insn_idx, insn->src_reg, insn->off,
+	err = check_mem_access(env, env->insn_idx, regs + insn->src_reg, insn->src_reg, insn->off,
 			       BPF_SIZE(insn->code), BPF_READ, insn->dst_reg,
 			       strict_alignment_once, is_ldsx);
 	err = err ?: save_aux_ptr_type(env, src_reg_type,
@@ -6627,7 +6605,7 @@ static int check_store_reg(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	dst_reg_type = regs[insn->dst_reg].type;
 
 	/* Check if (dst_reg + off) is writeable. */
-	err = check_mem_access(env, env->insn_idx, insn->dst_reg, insn->off,
+	err = check_mem_access(env, env->insn_idx, regs + insn->dst_reg, insn->dst_reg, insn->off,
 			       BPF_SIZE(insn->code), BPF_WRITE, insn->src_reg,
 			       strict_alignment_once, false);
 	err = err ?: save_aux_ptr_type(env, dst_reg_type, false);
@@ -6638,6 +6616,7 @@ static int check_store_reg(struct bpf_verifier_env *env, struct bpf_insn *insn,
 static int check_atomic_rmw(struct bpf_verifier_env *env,
 			    struct bpf_insn *insn)
 {
+	struct bpf_reg_state *dst_reg;
 	int load_reg;
 	int err;
 
@@ -6699,13 +6678,15 @@ static int check_atomic_rmw(struct bpf_verifier_env *env,
 		load_reg = -1;
 	}
 
+	dst_reg = cur_regs(env) + insn->dst_reg;
+
 	/* Check whether we can read the memory, with second call for fetch
 	 * case to simulate the register fill.
 	 */
-	err = check_mem_access(env, env->insn_idx, insn->dst_reg, insn->off,
+	err = check_mem_access(env, env->insn_idx, dst_reg, insn->dst_reg, insn->off,
 			       BPF_SIZE(insn->code), BPF_READ, -1, true, false);
 	if (!err && load_reg >= 0)
-		err = check_mem_access(env, env->insn_idx, insn->dst_reg,
+		err = check_mem_access(env, env->insn_idx, dst_reg, insn->dst_reg,
 				       insn->off, BPF_SIZE(insn->code),
 				       BPF_READ, load_reg, true, false);
 	if (err)
@@ -6717,7 +6698,7 @@ static int check_atomic_rmw(struct bpf_verifier_env *env,
 			return err;
 	}
 	/* Check whether we can write into the same memory. */
-	err = check_mem_access(env, env->insn_idx, insn->dst_reg, insn->off,
+	err = check_mem_access(env, env->insn_idx, dst_reg, insn->dst_reg, insn->off,
 			       BPF_SIZE(insn->code), BPF_WRITE, -1, true, false);
 	if (err)
 		return err;
@@ -6806,11 +6787,10 @@ static int check_atomic(struct bpf_verifier_env *env, struct bpf_insn *insn)
  * read offsets are marked as read.
 */
 static int check_stack_range_initialized(
-		struct bpf_verifier_env *env, int regno, int off,
+		struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int off,
 		int access_size, bool zero_size_allowed,
 		enum bpf_access_type type, struct bpf_call_arg_meta *meta)
 {
-	struct bpf_reg_state *reg = reg_state(env, regno);
 	struct bpf_func_state *state = bpf_func(env, reg);
 	int err, min_off, max_off, i, j, slot, spi;
 	/* Some accesses can write anything into the stack, others are
@@ -6832,7 +6812,7 @@ static int check_stack_range_initialized(
 		return -EACCES;
 	}
 
-	err = check_stack_access_within_bounds(env, regno, off, access_size, type);
+	err = check_stack_access_within_bounds(env, reg, regno, off, access_size, type);
 	if (err)
 		return err;
 
@@ -6963,7 +6943,7 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
 	switch (base_type(reg->type)) {
 	case PTR_TO_PACKET:
 	case PTR_TO_PACKET_META:
-		return check_packet_access(env, regno, 0, access_size,
+		return check_packet_access(env, reg, regno, 0, access_size,
 					   zero_size_allowed);
 	case PTR_TO_MAP_KEY:
 		if (access_type == BPF_WRITE) {
@@ -6971,12 +6951,12 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
 				reg_type_str(env, reg->type));
 			return -EACCES;
 		}
-		return check_mem_region_access(env, regno, 0, access_size,
+		return check_mem_region_access(env, reg, regno, 0, access_size,
 					       reg->map_ptr->key_size, false);
 	case PTR_TO_MAP_VALUE:
-		if (check_map_access_type(env, regno, 0, access_size, access_type))
+		if (check_map_access_type(env, reg, regno, 0, access_size, access_type))
 			return -EACCES;
-		return check_map_access(env, regno, 0, access_size,
+		return check_map_access(env, reg, regno, 0, access_size,
 					zero_size_allowed, ACCESS_HELPER);
 	case PTR_TO_MEM:
 		if (type_is_rdonly_mem(reg->type)) {
@@ -6986,7 +6966,7 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
 				return -EACCES;
 			}
 		}
-		return check_mem_region_access(env, regno, 0,
+		return check_mem_region_access(env, reg, regno, 0,
 					       access_size, reg->mem_size,
 					       zero_size_allowed);
 	case PTR_TO_BUF:
@@ -7006,16 +6986,16 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
 					   max_access);
 	case PTR_TO_STACK:
 		return check_stack_range_initialized(
-				env,
+				env, reg,
 				regno, 0, access_size, zero_size_allowed,
 				access_type, meta);
 	case PTR_TO_BTF_ID:
-		return check_ptr_to_btf_access(env, regs, regno, 0,
+		return check_ptr_to_btf_access(env, regs, reg, regno, 0,
 					       access_size, BPF_READ, -1);
	case PTR_TO_CTX:
 		/* Only permit reading or writing syscall context using helper calls. */
 		if (is_var_ctx_off_allowed(env->prog)) {
-			int err = check_mem_region_access(env, regno, 0, access_size, U16_MAX,
+			int err = check_mem_region_access(env, reg, regno, 0, access_size, U16_MAX,
 							  zero_size_allowed);
 			if (err)
 				return err;
@@ -7178,11 +7158,10 @@ enum {
 * env->cur_state->active_locks remembers which map value element or allocated
 * object got locked and clears it after bpf_spin_unlock.
 */
-static int process_spin_lock(struct bpf_verifier_env *env, int regno, int flags)
+static int process_spin_lock(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int flags)
 {
 	bool is_lock = flags & PROCESS_SPIN_LOCK, is_res_lock = flags & PROCESS_RES_LOCK;
 	const char *lock_str = is_res_lock ? "bpf_res_spin" : "bpf_spin";
-	struct bpf_reg_state *reg = reg_state(env, regno);
 	struct bpf_verifier_state *cur = env->cur_state;
 	bool is_const = tnum_is_const(reg->var_off);
 	bool is_irq = flags & PROCESS_LOCK_IRQ;
@@ -7295,11 +7274,10 @@ static int process_spin_lock(struct bpf_verifier_env *env, int regno, int flags)
 }
 
 /* Check if @regno is a pointer to a specific field in a map value */
-static int check_map_field_pointer(struct bpf_verifier_env *env, u32 regno,
+static int check_map_field_pointer(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
 				   enum btf_field_type field_type,
 				   struct bpf_map_desc *map_desc)
 {
-	struct bpf_reg_state *reg = reg_state(env, regno);
 	bool is_const = tnum_is_const(reg->var_off);
 	struct bpf_map *map = reg->map_ptr;
 	u64 val = reg->var_off.value;
@@ -7349,26 +7327,26 @@ static int check_map_field_pointer(struct bpf_verifier_env *env, u32 regno,
 	return 0;
 }
 
-static int process_timer_func(struct bpf_verifier_env *env, int regno,
+static int process_timer_func(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
 			      struct bpf_map_desc *map)
 {
 	if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
 		verbose(env, "bpf_timer cannot be used for PREEMPT_RT.\n");
 		return -EOPNOTSUPP;
 	}
-	return check_map_field_pointer(env, regno, BPF_TIMER, map);
+	return check_map_field_pointer(env, reg, regno, BPF_TIMER, map);
 }
 
-static int process_timer_helper(struct bpf_verifier_env *env, int regno,
+static int process_timer_helper(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
 				struct bpf_call_arg_meta *meta)
 {
-	return process_timer_func(env, regno, &meta->map);
+	return process_timer_func(env, reg, regno, &meta->map);
 }
 
-static int process_timer_kfunc(struct bpf_verifier_env *env, int regno,
+static int process_timer_kfunc(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
 			       struct bpf_kfunc_call_arg_meta *meta)
 {
-	return process_timer_func(env, regno, &meta->map);
+	return process_timer_func(env, reg, regno,
&meta->map); } static int process_kptr_func(struct bpf_verifier_env *env, int regno, @@ -7444,10 +7422,9 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno, * Helpers which do not mutate the bpf_dynptr set MEM_RDONLY in their argument * type, and declare it as 'const struct bpf_dynptr *' in their prototype. */ -static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn_idx, +static int process_dynptr_func(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int insn_idx, enum bpf_arg_type arg_type, int clone_ref_obj_id) { - struct bpf_reg_state *reg = reg_state(env, regno); int err; if (reg->type != PTR_TO_STACK && reg->type != CONST_PTR_TO_DYNPTR) { @@ -7490,7 +7467,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn /* we write BPF_DW bits (8 bytes) at a time */ for (i = 0; i < BPF_DYNPTR_SIZE; i += 8) { - err = check_mem_access(env, insn_idx, regno, + err = check_mem_access(env, insn_idx, reg, regno, i, BPF_DW, BPF_WRITE, -1, false, false); if (err) return err; @@ -7560,10 +7537,9 @@ static bool is_kfunc_arg_iter(struct bpf_kfunc_call_arg_meta *meta, int arg_idx, return btf_param_match_suffix(meta->btf, arg, "__iter"); } -static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_idx, +static int process_iter_arg(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int insn_idx, struct bpf_kfunc_call_arg_meta *meta) { - struct bpf_reg_state *reg = reg_state(env, regno); const struct btf_type *t; int spi, err, i, nr_slots, btf_id; @@ -7595,7 +7571,7 @@ static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_id } for (i = 0; i < nr_slots * 8; i += BPF_REG_SIZE) { - err = check_mem_access(env, insn_idx, regno, + err = check_mem_access(env, insn_idx, reg, regno, i, BPF_DW, BPF_WRITE, -1, false, false); if (err) return err; @@ -8034,12 +8010,11 @@ static const struct bpf_reg_types *compatible_reg_types[__BPF_ARG_TYPE_MAX] 
= { [ARG_PTR_TO_DYNPTR] = &dynptr_types, }; -static int check_reg_type(struct bpf_verifier_env *env, u32 regno, +static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno, enum bpf_arg_type arg_type, const u32 *arg_btf_id, struct bpf_call_arg_meta *meta) { - struct bpf_reg_state *reg = reg_state(env, regno); enum bpf_reg_type expected, type = reg->type; const struct bpf_reg_types *compatible; int i, j, err; @@ -8382,7 +8357,7 @@ static int check_reg_const_str(struct bpf_verifier_env *env, return -EACCES; } - err = check_map_access(env, regno, 0, + err = check_map_access(env, reg, regno, 0, map->value_size - reg->var_off.value, false, ACCESS_HELPER); if (err) @@ -8518,7 +8493,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg, base_type(arg_type) == ARG_PTR_TO_SPIN_LOCK) arg_btf_id = fn->arg_btf_id[arg]; - err = check_reg_type(env, regno, arg_type, arg_btf_id, meta); + err = check_reg_type(env, reg, regno, arg_type, arg_btf_id, meta); if (err) return err; @@ -8656,11 +8631,11 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg, return -EACCES; } if (meta->func_id == BPF_FUNC_spin_lock) { - err = process_spin_lock(env, regno, PROCESS_SPIN_LOCK); + err = process_spin_lock(env, reg, regno, PROCESS_SPIN_LOCK); if (err) return err; } else if (meta->func_id == BPF_FUNC_spin_unlock) { - err = process_spin_lock(env, regno, 0); + err = process_spin_lock(env, reg, regno, 0); if (err) return err; } else { @@ -8669,7 +8644,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg, } break; case ARG_PTR_TO_TIMER: - err = process_timer_helper(env, regno, meta); + err = process_timer_helper(env, reg, regno, meta); if (err) return err; break; @@ -8704,7 +8679,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg, true, meta); break; case ARG_PTR_TO_DYNPTR: - err = process_dynptr_func(env, regno, insn_idx, arg_type, 0); + err = process_dynptr_func(env, reg, regno, insn_idx, arg_type, 0); if 
(err) return err; break; @@ -9363,7 +9338,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog, if (ret) return ret; - ret = process_dynptr_func(env, regno, -1, arg->arg_type, 0); + ret = process_dynptr_func(env, reg, regno, -1, arg->arg_type, 0); if (ret) return ret; } else if (base_type(arg->arg_type) == ARG_PTR_TO_BTF_ID) { @@ -9374,7 +9349,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog, continue; memset(&meta, 0, sizeof(meta)); /* leave func_id as zero */ - err = check_reg_type(env, regno, arg->arg_type, &arg->btf_id, &meta); + err = check_reg_type(env, reg, regno, arg->arg_type, &arg->btf_id, &meta); err = err ?: check_func_arg_reg_off(env, reg, regno, arg->arg_type); if (err) return err; @@ -10332,18 +10307,18 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn if (err) return err; + regs = cur_regs(env); + /* Mark slots with STACK_MISC in case of raw mode, stack offset * is inferred from register state. 
*/ for (i = 0; i < meta.access_size; i++) { - err = check_mem_access(env, insn_idx, meta.regno, i, BPF_B, + err = check_mem_access(env, insn_idx, regs + meta.regno, meta.regno, i, BPF_B, BPF_WRITE, -1, false, false); if (err) return err; } - regs = cur_regs(env); - if (meta.release_regno) { err = -EINVAL; if (arg_type_is_dynptr(fn->arg_type[meta.release_regno - BPF_REG_1])) { @@ -11347,11 +11322,10 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env, struct bpf_kfunc_call_arg_meta *meta, const struct btf_type *t, const struct btf_type *ref_t, const char *ref_tname, const struct btf_param *args, - int argno, int nargs) + int argno, int nargs, struct bpf_reg_state *reg) { u32 regno = argno + 1; struct bpf_reg_state *regs = cur_regs(env); - struct bpf_reg_state *reg = ®s[regno]; bool arg_mem_size = false; if (meta->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx] || @@ -11518,10 +11492,9 @@ static int process_kf_arg_ptr_to_btf_id(struct bpf_verifier_env *env, return 0; } -static int process_irq_flag(struct bpf_verifier_env *env, int regno, +static int process_irq_flag(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, struct bpf_kfunc_call_arg_meta *meta) { - struct bpf_reg_state *reg = reg_state(env, regno); int err, kfunc_class = IRQ_NATIVE_KFUNC; bool irq_save; @@ -11546,7 +11519,7 @@ static int process_irq_flag(struct bpf_verifier_env *env, int regno, return -EINVAL; } - err = check_mem_access(env, env->insn_idx, regno, 0, BPF_DW, BPF_WRITE, -1, false, false); + err = check_mem_access(env, env->insn_idx, reg, regno, 0, BPF_DW, BPF_WRITE, -1, false, false); if (err) return err; @@ -12134,7 +12107,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ ref_t = btf_type_skip_modifiers(btf, t->type, &ref_id); ref_tname = btf_name_by_offset(btf, ref_t->name_off); - kf_arg_type = get_kfunc_ptr_arg_type(env, meta, t, ref_t, ref_tname, args, i, nargs); + kf_arg_type = get_kfunc_ptr_arg_type(env, meta, t, ref_t, 
ref_tname, args, i, nargs, reg); if (kf_arg_type < 0) return kf_arg_type; @@ -12299,7 +12272,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ } } - ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id); + ret = process_dynptr_func(env, reg, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id); if (ret < 0) return ret; @@ -12324,7 +12297,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ return -EINVAL; } } - ret = process_iter_arg(env, regno, insn_idx, meta); + ret = process_iter_arg(env, reg, regno, insn_idx, meta); if (ret < 0) return ret; break; @@ -12501,7 +12474,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ verbose(env, "arg#%d doesn't point to a map value\n", i); return -EINVAL; } - ret = check_map_field_pointer(env, regno, BPF_WORKQUEUE, &meta->map); + ret = check_map_field_pointer(env, reg, regno, BPF_WORKQUEUE, &meta->map); if (ret < 0) return ret; break; @@ -12510,7 +12483,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ verbose(env, "arg#%d doesn't point to a map value\n", i); return -EINVAL; } - ret = process_timer_kfunc(env, regno, meta); + ret = process_timer_kfunc(env, reg, regno, meta); if (ret < 0) return ret; break; @@ -12519,7 +12492,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ verbose(env, "arg#%d doesn't point to a map value\n", i); return -EINVAL; } - ret = check_map_field_pointer(env, regno, BPF_TASK_WORK, &meta->map); + ret = check_map_field_pointer(env, reg, regno, BPF_TASK_WORK, &meta->map); if (ret < 0) return ret; break; @@ -12528,7 +12501,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ verbose(env, "arg#%d doesn't point to an irq flag on stack\n", i); return -EINVAL; } - ret = process_irq_flag(env, regno, meta); + ret = process_irq_flag(env, reg, regno, meta); if (ret < 0) return ret; 
break; @@ -12549,7 +12522,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ if (meta->func_id == special_kfunc_list[KF_bpf_res_spin_lock_irqsave] || meta->func_id == special_kfunc_list[KF_bpf_res_spin_unlock_irqrestore]) flags |= PROCESS_LOCK_IRQ; - ret = process_spin_lock(env, regno, flags); + ret = process_spin_lock(env, reg, regno, flags); if (ret < 0) return ret; break; @@ -13683,7 +13656,7 @@ static int check_stack_access_for_ptr_arithmetic( static int sanitize_check_bounds(struct bpf_verifier_env *env, const struct bpf_insn *insn, - const struct bpf_reg_state *dst_reg) + struct bpf_reg_state *dst_reg) { u32 dst = insn->dst_reg; @@ -13700,7 +13673,7 @@ static int sanitize_check_bounds(struct bpf_verifier_env *env, return -EACCES; break; case PTR_TO_MAP_VALUE: - if (check_map_access(env, dst, 0, 1, false, ACCESS_HELPER)) { + if (check_map_access(env, dst_reg, dst, 0, 1, false, ACCESS_HELPER)) { verbose(env, "R%d pointer arithmetic of map value goes out of range, " "prohibited for !root\n", dst); return -EACCES; @@ -17584,7 +17557,7 @@ static int do_check_insn(struct bpf_verifier_env *env, bool *do_print_state) dst_reg_type = cur_regs(env)[insn->dst_reg].type; - err = check_mem_access(env, env->insn_idx, insn->dst_reg, + err = check_mem_access(env, env->insn_idx, cur_regs(env) + insn->dst_reg, insn->dst_reg, insn->off, BPF_SIZE(insn->code), BPF_WRITE, -1, false, false); if (err) -- 2.52.0 ^ permalink raw reply related [flat|nested] 45+ messages in thread
* [PATCH bpf-next v5 03/16] bpf: Refactor to handle memory and size together 2026-04-17 3:46 [PATCH bpf-next v5 00/16] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song 2026-04-17 3:47 ` [PATCH bpf-next v5 01/16] bpf: Remove unused parameter from check_map_kptr_access() Yonghong Song 2026-04-17 3:47 ` [PATCH bpf-next v5 02/16] bpf: Refactor to avoid redundant calculation of bpf_reg_state Yonghong Song @ 2026-04-17 3:47 ` Yonghong Song 2026-04-17 4:49 ` sashiko-bot 2026-04-18 0:52 ` bot+bpf-ci 2026-04-17 3:47 ` [PATCH bpf-next v5 04/16] bpf: Prepare verifier logs for upcoming kfunc stack arguments Yonghong Song ` (12 subsequent siblings) 15 siblings, 2 replies; 45+ messages in thread From: Yonghong Song @ 2026-04-17 3:47 UTC (permalink / raw) To: bpf Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Jose E . Marchesi, kernel-team, Martin KaFai Lau Similar to the previous patch, pass bpf_reg_state from the caller to the callee. Both mem_reg and size_reg are now passed down to the helper functions. This is important for stack arguments, which may live beyond registers R1-R5. 
Signed-off-by: Yonghong Song <yonghong.song@linux.dev> --- kernel/bpf/verifier.c | 59 ++++++++++++++++++++++--------------------- 1 file changed, 30 insertions(+), 29 deletions(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 2bedaa193d54..7a7024d94cf0 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -6932,12 +6932,12 @@ static int check_stack_range_initialized( return 0; } -static int check_helper_mem_access(struct bpf_verifier_env *env, int regno, +static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int access_size, enum bpf_access_type access_type, bool zero_size_allowed, struct bpf_call_arg_meta *meta) { - struct bpf_reg_state *regs = cur_regs(env), *reg = ®s[regno]; + struct bpf_reg_state *regs = cur_regs(env); u32 *max_access; switch (base_type(reg->type)) { @@ -7020,15 +7020,17 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno, /* verify arguments to helpers or kfuncs consisting of a pointer and an access * size. * - * @regno is the register containing the access size. regno-1 is the register - * containing the pointer. + * @mem_regno is the register containing the pointer, mem_regno+1 is the register + * containing the access size. */ static int check_mem_size_reg(struct bpf_verifier_env *env, - struct bpf_reg_state *reg, u32 regno, + struct bpf_reg_state *mem_reg, + struct bpf_reg_state *size_reg, u32 mem_regno, enum bpf_access_type access_type, bool zero_size_allowed, struct bpf_call_arg_meta *meta) { + int size_regno = mem_regno + 1; int err; /* This is used to refine r0 return value bounds for helpers @@ -7039,37 +7041,37 @@ static int check_mem_size_reg(struct bpf_verifier_env *env, * out. Only upper bounds can be learned because retval is an * int type and negative retvals are allowed. 
*/ - meta->msize_max_value = reg->umax_value; + meta->msize_max_value = size_reg->umax_value; /* The register is SCALAR_VALUE; the access check happens using * its boundaries. For unprivileged variable accesses, disable * raw mode so that the program is required to initialize all * the memory that the helper could just partially fill up. */ - if (!tnum_is_const(reg->var_off)) + if (!tnum_is_const(size_reg->var_off)) meta = NULL; - if (reg->smin_value < 0) { + if (size_reg->smin_value < 0) { verbose(env, "R%d min value is negative, either use unsigned or 'var &= const'\n", - regno); + size_regno); return -EACCES; } - if (reg->umin_value == 0 && !zero_size_allowed) { + if (size_reg->umin_value == 0 && !zero_size_allowed) { verbose(env, "R%d invalid zero-sized read: u64=[%lld,%lld]\n", - regno, reg->umin_value, reg->umax_value); + size_regno, size_reg->umin_value, size_reg->umax_value); return -EACCES; } - if (reg->umax_value >= BPF_MAX_VAR_SIZ) { + if (size_reg->umax_value >= BPF_MAX_VAR_SIZ) { verbose(env, "R%d unbounded memory access, use 'var &= const' or 'if (var < const)'\n", - regno); + size_regno); return -EACCES; } - err = check_helper_mem_access(env, regno - 1, reg->umax_value, + err = check_helper_mem_access(env, mem_reg, mem_regno, size_reg->umax_value, access_type, zero_size_allowed, meta); if (!err) - err = mark_chain_precision(env, regno); + err = mark_chain_precision(env, size_regno); return err; } @@ -7094,8 +7096,8 @@ static int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg int size = base_type(reg->type) == PTR_TO_STACK ? 
-(int)mem_size : mem_size; - err = check_helper_mem_access(env, regno, size, BPF_READ, true, NULL); - err = err ?: check_helper_mem_access(env, regno, size, BPF_WRITE, true, NULL); + err = check_helper_mem_access(env, reg, regno, size, BPF_READ, true, NULL); + err = err ?: check_helper_mem_access(env, reg, regno, size, BPF_WRITE, true, NULL); if (may_be_null) *reg = saved_reg; @@ -7103,16 +7105,15 @@ static int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg return err; } -static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg, - u32 regno) +static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg_state *mem_reg, + struct bpf_reg_state *size_reg, u32 mem_regno) { - struct bpf_reg_state *mem_reg = &cur_regs(env)[regno - 1]; bool may_be_null = type_may_be_null(mem_reg->type); struct bpf_reg_state saved_reg; struct bpf_call_arg_meta meta; int err; - WARN_ON_ONCE(regno < BPF_REG_2 || regno > BPF_REG_5); + WARN_ON_ONCE(mem_regno > BPF_REG_4); memset(&meta, 0, sizeof(meta)); @@ -7121,8 +7122,8 @@ static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg mark_ptr_not_null_reg(mem_reg); } - err = check_mem_size_reg(env, reg, regno, BPF_READ, true, &meta); - err = err ?: check_mem_size_reg(env, reg, regno, BPF_WRITE, true, &meta); + err = check_mem_size_reg(env, mem_reg, size_reg, mem_regno, BPF_READ, true, &meta); + err = err ?: check_mem_size_reg(env, mem_reg, size_reg, mem_regno, BPF_WRITE, true, &meta); if (may_be_null) *mem_reg = saved_reg; @@ -8586,7 +8587,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg, return -EFAULT; } key_size = meta->map.ptr->key_size; - err = check_helper_mem_access(env, regno, key_size, BPF_READ, false, NULL); + err = check_helper_mem_access(env, reg, regno, key_size, BPF_READ, false, NULL); if (err) return err; if (can_elide_value_nullness(meta->map.ptr->map_type)) { @@ -8613,7 +8614,7 @@ static int 
check_func_arg(struct bpf_verifier_env *env, u32 arg, return -EFAULT; } meta->raw_mode = arg_type & MEM_UNINIT; - err = check_helper_mem_access(env, regno, meta->map.ptr->value_size, + err = check_helper_mem_access(env, reg, regno, meta->map.ptr->value_size, arg_type & MEM_WRITE ? BPF_WRITE : BPF_READ, false, meta); break; @@ -8657,7 +8658,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg, */ meta->raw_mode = arg_type & MEM_UNINIT; if (arg_type & MEM_FIXED_SIZE) { - err = check_helper_mem_access(env, regno, fn->arg_size[arg], + err = check_helper_mem_access(env, reg, regno, fn->arg_size[arg], arg_type & MEM_WRITE ? BPF_WRITE : BPF_READ, false, meta); if (err) @@ -8667,13 +8668,13 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg, } break; case ARG_CONST_SIZE: - err = check_mem_size_reg(env, reg, regno, + err = check_mem_size_reg(env, reg_state(env, regno - 1), reg, regno - 1, fn->arg_type[arg - 1] & MEM_WRITE ? BPF_WRITE : BPF_READ, false, meta); break; case ARG_CONST_SIZE_OR_ZERO: - err = check_mem_size_reg(env, reg, regno, + err = check_mem_size_reg(env, reg_state(env, regno - 1), reg, regno - 1, fn->arg_type[arg - 1] & MEM_WRITE ? BPF_WRITE : BPF_READ, true, meta); @@ -12407,7 +12408,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ const struct btf_param *size_arg = &args[i + 1]; if (!bpf_register_is_null(buff_reg) || !is_kfunc_arg_nullable(meta->btf, buff_arg)) { - ret = check_kfunc_mem_size_reg(env, size_reg, regno + 1); + ret = check_kfunc_mem_size_reg(env, buff_reg, size_reg, regno); if (ret < 0) { verbose(env, "arg#%d arg#%d memory, len pair leads to invalid memory access\n", i, i + 1); return ret; -- 2.52.0 ^ permalink raw reply related [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 03/16] bpf: Refactor to handle memory and size together 2026-04-17 3:47 ` [PATCH bpf-next v5 03/16] bpf: Refactor to handle memory and size together Yonghong Song @ 2026-04-17 4:49 ` sashiko-bot 2026-04-18 0:52 ` bot+bpf-ci 1 sibling, 0 replies; 45+ messages in thread From: sashiko-bot @ 2026-04-17 4:49 UTC (permalink / raw) To: Yonghong Song; +Cc: bpf Sashiko AI review found 1 potential issue(s): - [High] Incomplete refactoring for stack arguments breaks verifier precision tracking, marking incorrect registers. -- commit 7bebb6c179745d67fba0c807428d095b66738135 Author: Yonghong Song <yonghong.song@linux.dev> bpf: Refactor to handle memory and size together This commit refactors memory and size register handling in BPF helper verification, passing bpf_reg_state directly to prepare for stack arguments. > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c > index 2bedaa193d54d..7a7024d94cf00 100644 > --- a/kernel/bpf/verifier.c > +++ b/kernel/bpf/verifier.c [ ... ] > @@ -7020,15 +7020,17 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno, > /* verify arguments to helpers or kfuncs consisting of a pointer and an access > * size. > * > - * @regno is the register containing the access size. regno-1 is the register > - * containing the pointer. > + * @mem_regno is the register containing the pointer, mem_regno+1 is the register > + * containing the access size. > */ > static int check_mem_size_reg(struct bpf_verifier_env *env, > - struct bpf_reg_state *reg, u32 regno, > + struct bpf_reg_state *mem_reg, > + struct bpf_reg_state *size_reg, u32 mem_regno, > enum bpf_access_type access_type, > bool zero_size_allowed, > struct bpf_call_arg_meta *meta) > { > + int size_regno = mem_regno + 1; > int err; Since this refactoring is intended to support stack arguments, does this hardcoded size_regno = mem_regno + 1 break verifier precision tracking? 
If the memory pointer is passed in R5, mem_regno is 5, making size_regno 6. This would cause mark_chain_precision() to incorrectly backtrack the callee-saved register BPF_REG_6 instead of the stack slot containing the size argument. Furthermore, if mem_regno is later modified to represent a 0-based argument index, size_regno will evaluate to the wrong register entirely. [ ... ] > - err = check_helper_mem_access(env, regno - 1, reg->umax_value, > + err = check_helper_mem_access(env, mem_reg, mem_regno, size_reg->umax_value, > access_type, zero_size_allowed, meta); > if (!err) > - err = mark_chain_precision(env, regno); > + err = mark_chain_precision(env, size_regno); Does marking the wrong register leave the actual size argument imprecise, potentially corrupting verifier state pruning and allowing out-of-bounds memory accesses to bypass verification? -- Sashiko AI review · https://sashiko.dev/#/patchset/20260417034658.2625353-1-yonghong.song@linux.dev?part=3 ^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 03/16] bpf: Refactor to handle memory and size together 2026-04-17 3:47 ` [PATCH bpf-next v5 03/16] bpf: Refactor to handle memory and size together Yonghong Song 2026-04-17 4:49 ` sashiko-bot @ 2026-04-18 0:52 ` bot+bpf-ci 1 sibling, 0 replies; 45+ messages in thread From: bot+bpf-ci @ 2026-04-18 0:52 UTC (permalink / raw) To: yonghong.song, bpf Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai [-- Attachment #1: Type: text/plain, Size: 2378 bytes --] > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c > index 2bedaa193d54..7a7024d94cf0 100644 > --- a/kernel/bpf/verifier.c > +++ b/kernel/bpf/verifier.c [ ... ] > @@ -7020,15 +7020,17 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno, > /* verify arguments to helpers or kfuncs consisting of a pointer and an access > * size. > * > - * @regno is the register containing the access size. regno-1 is the register > - * containing the pointer. > + * @mem_regno is the register containing the pointer, mem_regno+1 is the register > + * containing the access size. > */ > static int check_mem_size_reg(struct bpf_verifier_env *env, > - struct bpf_reg_state *reg, u32 regno, > + struct bpf_reg_state *mem_reg, > + struct bpf_reg_state *size_reg, u32 mem_regno, > enum bpf_access_type access_type, > bool zero_size_allowed, > struct bpf_call_arg_meta *meta) > { > + int size_regno = mem_regno + 1; > int err; > > /* This is used to refine r0 return value bounds for helpers The refactoring passes size_reg as a parameter to prepare for stack arguments, but this code still hardcodes size_regno as mem_regno + 1. Does this assumption hold when stack arguments are used, where the size might not be at mem_regno + 1? 
Looking at how mark_chain_precision() is called later in this function: > err = check_helper_mem_access(env, mem_reg, mem_regno, size_reg->umax_value, > access_type, zero_size_allowed, meta); > if (!err) > - err = mark_chain_precision(env, regno); > + err = mark_chain_precision(env, size_regno); The incorrect size_regno value would cause mark_chain_precision() to mark the wrong register for precision tracking. For example, if the memory pointer is passed in R5 and the size is in a stack slot, mem_regno would be 5, making size_regno 6. This would incorrectly backtrack BPF_REG_6 instead of the stack slot containing the size argument. Could this break verifier state pruning when the commit message's stated goal of handling "stack arguments as they may be beyond registers 1-5" is implemented? --- AI reviewed your patch. Please fix the bug or email reply why it's not a bug. See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24592562564 ^ permalink raw reply [flat|nested] 45+ messages in thread
* [PATCH bpf-next v5 04/16] bpf: Prepare verifier logs for upcoming kfunc stack arguments 2026-04-17 3:46 [PATCH bpf-next v5 00/16] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song ` (2 preceding siblings ...) 2026-04-17 3:47 ` [PATCH bpf-next v5 03/16] bpf: Refactor to handle memory and size together Yonghong Song @ 2026-04-17 3:47 ` Yonghong Song 2026-04-17 3:47 ` [PATCH bpf-next v5 05/16] bpf: Introduce bpf register BPF_REG_PARAMS Yonghong Song ` (11 subsequent siblings) 15 siblings, 0 replies; 45+ messages in thread From: Yonghong Song @ 2026-04-17 3:47 UTC (permalink / raw) To: bpf Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Jose E . Marchesi, kernel-team, Martin KaFai Lau This change prepares verifier log reporting for upcoming kfunc stack argument support. Today, the verifier log code mostly assumes that an argument can be described directly by a register number. That works for arguments passed in `R1` to `R5`, but it does not work once kfunc arguments can also be passed on the stack. Introduce an internal `argno` representation such that register-passed arguments keep using their real register numbers, while stack-passed arguments use an encoded value above a dedicated base. `reg_arg_name()` converts this representation into either `R%d` or `*(R11-off)` when emitting verifier logs. If a particular `argno` corresponds to a stack argument, print `*(R11-off)`; otherwise, print `R%d`. Here R11 represents the base of the stack argument area. This keeps existing logs readable for register arguments and allows the same log sites to handle future stack arguments without open-coding special cases. Update selftests accordingly. 
Signed-off-by: Yonghong Song <yonghong.song@linux.dev> --- include/linux/bpf_verifier.h | 1 + kernel/bpf/verifier.c | 649 ++++++++++-------- .../testing/selftests/bpf/prog_tests/bpf_nf.c | 22 +- .../selftests/bpf/prog_tests/cb_refs.c | 2 +- .../selftests/bpf/prog_tests/kfunc_call.c | 2 +- .../selftests/bpf/prog_tests/linked_list.c | 4 +- .../selftests/bpf/progs/cgrp_kfunc_failure.c | 14 +- .../selftests/bpf/progs/cpumask_failure.c | 10 +- .../testing/selftests/bpf/progs/dynptr_fail.c | 22 +- .../selftests/bpf/progs/file_reader_fail.c | 4 +- tools/testing/selftests/bpf/progs/irq.c | 4 +- tools/testing/selftests/bpf/progs/iters.c | 6 +- .../selftests/bpf/progs/iters_state_safety.c | 14 +- .../selftests/bpf/progs/iters_testmod.c | 4 +- .../selftests/bpf/progs/iters_testmod_seq.c | 4 +- .../selftests/bpf/progs/map_kptr_fail.c | 2 +- .../selftests/bpf/progs/percpu_alloc_fail.c | 4 +- .../testing/selftests/bpf/progs/rbtree_fail.c | 6 +- .../bpf/progs/refcounted_kptr_fail.c | 2 +- .../testing/selftests/bpf/progs/stream_fail.c | 2 +- .../selftests/bpf/progs/task_kfunc_failure.c | 18 +- .../selftests/bpf/progs/task_work_fail.c | 6 +- .../selftests/bpf/progs/test_bpf_nf_fail.c | 8 +- .../bpf/progs/test_kfunc_dynptr_param.c | 2 +- .../bpf/progs/test_kfunc_param_nullable.c | 2 +- .../selftests/bpf/progs/verifier_bits_iter.c | 4 +- .../bpf/progs/verifier_ref_tracking.c | 6 +- .../selftests/bpf/progs/verifier_vfs_reject.c | 8 +- .../testing/selftests/bpf/progs/wq_failures.c | 2 +- tools/testing/selftests/bpf/verifier/calls.c | 14 +- 30 files changed, 474 insertions(+), 374 deletions(-) diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index 53e8664cb566..29a8a2605a12 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -912,6 +912,7 @@ struct bpf_verifier_env { * e.g., in reg_type_str() to generate reg_type string */ char tmp_str_buf[TMP_STR_BUF_LEN]; + char tmp_reg_arg_name_buf[32]; struct bpf_insn insn_buf[INSN_BUF_SIZE]; 
struct bpf_insn epilogue_buf[INSN_BUF_SIZE]; struct bpf_scc_callchain callchain_buf; diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 7a7024d94cf0..ff0c55d80311 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -1751,6 +1751,55 @@ static struct bpf_verifier_state *push_stack(struct bpf_verifier_env *env, return &elem->st; } +/* + * Unified argument number encoding for verifier log messages. + * Register args (arg_idx 0-4) use their register number (R1-R5). + * Stack args (arg_idx 5+) are encoded as STACK_ARGNO_BASE + arg_idx + * to avoid collision with register numbers. reg_arg_name() decodes + * this back to a human-readable string like "*(R11-8)" for logs. + */ +#define STACK_ARGNO_BASE 100 + +static bool is_stack_argno(int argno) +{ + return argno >= STACK_ARGNO_BASE; +} + +static u32 make_argno(u32 arg_idx) +{ + if (arg_idx < MAX_BPF_FUNC_REG_ARGS) + return BPF_REG_1 + arg_idx; + return STACK_ARGNO_BASE + arg_idx; +} + +static u32 arg_idx_from_argno(int argno) +{ + if (is_stack_argno(argno)) + return argno - STACK_ARGNO_BASE; + return argno - BPF_REG_1; +} + +static int next_argno(int argno) +{ + return make_argno(arg_idx_from_argno(argno) + 1); +} + +static const char *reg_arg_name(struct bpf_verifier_env *env, int argno) +{ + char *buf = env->tmp_reg_arg_name_buf; + int len = sizeof(env->tmp_reg_arg_name_buf); + u32 idx; + + if (!is_stack_argno(argno)) { + snprintf(buf, len, "R%d", argno); + return buf; + } + + idx = arg_idx_from_argno(argno); + snprintf(buf, len, "*(R11-%u)", (idx - MAX_BPF_FUNC_REG_ARGS + 1) * BPF_REG_SIZE); + return buf; +} + static const int caller_saved[CALLER_SAVED_REGS] = { BPF_REG_0, BPF_REG_1, BPF_REG_2, BPF_REG_3, BPF_REG_4, BPF_REG_5 }; @@ -4245,7 +4294,7 @@ enum bpf_access_src { }; static int check_stack_range_initialized(struct bpf_verifier_env *env, struct bpf_reg_state *reg, - int regno, int off, int access_size, + int argno, int off, int access_size, bool zero_size_allowed, enum bpf_access_type 
type, struct bpf_call_arg_meta *meta); @@ -4269,7 +4318,7 @@ static struct bpf_reg_state *reg_state(struct bpf_verifier_env *env, int regno) * instead. */ static int check_stack_read_var_off(struct bpf_verifier_env *env, struct bpf_reg_state *reg, - int ptr_regno, int off, int size, int dst_regno) + int ptr_argno, int off, int size, int dst_regno) { struct bpf_func_state *ptr_state = bpf_func(env, reg); int err; @@ -4277,7 +4326,7 @@ static int check_stack_read_var_off(struct bpf_verifier_env *env, struct bpf_reg /* Note that we pass a NULL meta, so raw access will not be permitted. */ - err = check_stack_range_initialized(env, reg, ptr_regno, off, size, + err = check_stack_range_initialized(env, reg, ptr_argno, off, size, false, BPF_READ, NULL); if (err) return err; @@ -4299,7 +4348,7 @@ static int check_stack_read_var_off(struct bpf_verifier_env *env, struct bpf_reg * can be -1, meaning that the read value is not going to a register. */ static int check_stack_read(struct bpf_verifier_env *env, - struct bpf_reg_state *reg, int ptr_regno, int off, int size, + struct bpf_reg_state *reg, int ptr_argno, int off, int size, int dst_regno) { struct bpf_func_state *state = bpf_func(env, reg); @@ -4337,7 +4386,7 @@ static int check_stack_read(struct bpf_verifier_env *env, * than fixed offset ones. Note that dst_regno >= 0 on this * branch. 
*/ - err = check_stack_read_var_off(env, reg, ptr_regno, off, size, + err = check_stack_read_var_off(env, reg, ptr_argno, off, size, dst_regno); } return err; @@ -4375,7 +4424,7 @@ static int check_stack_write(struct bpf_verifier_env *env, return err; } -static int check_map_access_type(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno, +static int check_map_access_type(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 argno, int off, int size, enum bpf_access_type type) { struct bpf_map *map = reg->map_ptr; @@ -4397,7 +4446,7 @@ static int check_map_access_type(struct bpf_verifier_env *env, struct bpf_reg_st } /* check read/write into memory region (e.g., map value, ringbuf sample, etc) */ -static int __check_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, +static int __check_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno, int off, int size, u32 mem_size, bool zero_size_allowed) { @@ -4418,8 +4467,8 @@ static int __check_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state case PTR_TO_PACKET: case PTR_TO_PACKET_META: case PTR_TO_PACKET_END: - verbose(env, "invalid access to packet, off=%d size=%d, R%d(id=%d,off=%d,r=%d)\n", - off, size, regno, reg->id, off, mem_size); + verbose(env, "invalid access to packet, off=%d size=%d, %s(id=%d,off=%d,r=%d)\n", + off, size, reg_arg_name(env, argno), reg->id, off, mem_size); break; case PTR_TO_CTX: verbose(env, "invalid access to context, ctx_size=%d off=%d size=%d\n", @@ -4435,7 +4484,7 @@ static int __check_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state } /* check read/write into a memory region with possible variable offset */ -static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno, +static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 argno, int off, int size, u32 mem_size, bool zero_size_allowed) { @@ -4455,15 +4504,15 
@@ static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_ (reg->smin_value == S64_MIN || (off + reg->smin_value != (s64)(s32)(off + reg->smin_value)) || reg->smin_value + off < 0)) { - verbose(env, "R%d min value is negative, either use unsigned index or do a if (index >=0) check.\n", - regno); + verbose(env, "%s min value is negative, either use unsigned index or do a if (index >=0) check.\n", + reg_arg_name(env, argno)); return -EACCES; } - err = __check_mem_access(env, reg, regno, reg->smin_value + off, size, + err = __check_mem_access(env, reg, argno, reg->smin_value + off, size, mem_size, zero_size_allowed); if (err) { - verbose(env, "R%d min value is outside of the allowed memory range\n", - regno); + verbose(env, "%s min value is outside of the allowed memory range\n", + reg_arg_name(env, argno)); return err; } @@ -4472,15 +4521,15 @@ static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_ * If reg->umax_value + off could overflow, treat that as unbounded too. 
*/ if (reg->umax_value >= BPF_MAX_VAR_OFF) { - verbose(env, "R%d unbounded memory access, make sure to bounds check any such access\n", - regno); + verbose(env, "%s unbounded memory access, make sure to bounds check any such access\n", + reg_arg_name(env, argno)); return -EACCES; } - err = __check_mem_access(env, reg, regno, reg->umax_value + off, size, + err = __check_mem_access(env, reg, argno, reg->umax_value + off, size, mem_size, zero_size_allowed); if (err) { - verbose(env, "R%d max value is outside of the allowed memory range\n", - regno); + verbose(env, "%s max value is outside of the allowed memory range\n", + reg_arg_name(env, argno)); return err; } @@ -4488,7 +4537,7 @@ static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_ } static int __check_ptr_off_reg(struct bpf_verifier_env *env, - const struct bpf_reg_state *reg, int regno, + const struct bpf_reg_state *reg, u32 argno, bool fixed_off_ok) { /* Access to this pointer-typed register or passing it to a helper @@ -4505,14 +4554,14 @@ static int __check_ptr_off_reg(struct bpf_verifier_env *env, } if (reg->smin_value < 0) { - verbose(env, "negative offset %s ptr R%d off=%lld disallowed\n", - reg_type_str(env, reg->type), regno, reg->var_off.value); + verbose(env, "negative offset %s ptr %s off=%lld disallowed\n", + reg_type_str(env, reg->type), reg_arg_name(env, argno), reg->var_off.value); return -EACCES; } if (!fixed_off_ok && reg->var_off.value != 0) { - verbose(env, "dereference of modified %s ptr R%d off=%lld disallowed\n", - reg_type_str(env, reg->type), regno, reg->var_off.value); + verbose(env, "dereference of modified %s ptr %s off=%lld disallowed\n", + reg_type_str(env, reg->type), reg_arg_name(env, argno), reg->var_off.value); return -EACCES; } @@ -4882,17 +4931,17 @@ static bool may_access_direct_pkt_data(struct bpf_verifier_env *env, } } -static int check_packet_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno, int off, +static int 
check_packet_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 argno, int off, int size, bool zero_size_allowed) { int err; if (reg->range < 0) { - verbose(env, "R%d offset is outside of the packet\n", regno); + verbose(env, "%s offset is outside of the packet\n", reg_arg_name(env, argno)); return -EINVAL; } - err = check_mem_region_access(env, reg, regno, off, size, reg->range, zero_size_allowed); + err = check_mem_region_access(env, reg, argno, off, size, reg->range, zero_size_allowed); if (err) return err; @@ -4947,7 +4996,7 @@ static int __check_ctx_access(struct bpf_verifier_env *env, int insn_idx, int of return -EACCES; } -static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, u32 regno, +static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, u32 argno, int off, int access_size, enum bpf_access_type t, struct bpf_insn_access_aux *info) { @@ -4960,9 +5009,9 @@ static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, struct b int err; if (var_off_ok) - err = check_mem_region_access(env, reg, regno, off, access_size, U16_MAX, false); + err = check_mem_region_access(env, reg, argno, off, access_size, U16_MAX, false); else - err = __check_ptr_off_reg(env, reg, regno, fixed_off_ok); + err = __check_ptr_off_reg(env, reg, argno, fixed_off_ok); if (err) return err; off += reg->umax_value; @@ -4986,15 +5035,15 @@ static int check_flow_keys_access(struct bpf_verifier_env *env, int off, } static int check_sock_access(struct bpf_verifier_env *env, int insn_idx, - struct bpf_reg_state *reg, u32 regno, int off, int size, + struct bpf_reg_state *reg, u32 argno, int off, int size, enum bpf_access_type t) { struct bpf_insn_access_aux info = {}; bool valid; if (reg->smin_value < 0) { - verbose(env, "R%d min value is negative, either use unsigned index or do a if (index >=0) check.\n", - regno); + verbose(env, "%s min value is negative, either use unsigned 
index or do a if (index >=0) check.\n", + reg_arg_name(env, argno)); return -EACCES; } @@ -5022,8 +5071,8 @@ static int check_sock_access(struct bpf_verifier_env *env, int insn_idx, return 0; } - verbose(env, "R%d invalid %s access off=%d size=%d\n", - regno, reg_type_str(env, reg->type), off, size); + verbose(env, "%s invalid %s access off=%d size=%d\n", + reg_arg_name(env, argno), reg_type_str(env, reg->type), off, size); return -EACCES; } @@ -5533,12 +5582,12 @@ static int check_max_stack_depth(struct bpf_verifier_env *env) static int __check_buffer_access(struct bpf_verifier_env *env, const char *buf_info, const struct bpf_reg_state *reg, - int regno, int off, int size) + int argno, int off, int size) { if (off < 0) { verbose(env, - "R%d invalid %s buffer access: off=%d, size=%d\n", - regno, buf_info, off, size); + "%s invalid %s buffer access: off=%d, size=%d\n", + reg_arg_name(env, argno), buf_info, off, size); return -EACCES; } if (!tnum_is_const(reg->var_off)) { @@ -5546,8 +5595,8 @@ static int __check_buffer_access(struct bpf_verifier_env *env, tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off); verbose(env, - "R%d invalid variable buffer offset: off=%d, var_off=%s\n", - regno, off, tn_buf); + "%s invalid variable buffer offset: off=%d, var_off=%s\n", + reg_arg_name(env, argno), off, tn_buf); return -EACCES; } @@ -5556,11 +5605,11 @@ static int __check_buffer_access(struct bpf_verifier_env *env, static int check_tp_buffer_access(struct bpf_verifier_env *env, const struct bpf_reg_state *reg, - int regno, int off, int size) + int argno, int off, int size) { int err; - err = __check_buffer_access(env, "tracepoint", reg, regno, off, size); + err = __check_buffer_access(env, "tracepoint", reg, argno, off, size); if (err) return err; @@ -5572,14 +5621,14 @@ static int check_tp_buffer_access(struct bpf_verifier_env *env, static int check_buffer_access(struct bpf_verifier_env *env, const struct bpf_reg_state *reg, - int regno, int off, int size, + int argno, int off, 
int size, bool zero_size_allowed, u32 *max_access) { const char *buf_info = type_is_rdonly_mem(reg->type) ? "rdonly" : "rdwr"; int err; - err = __check_buffer_access(env, buf_info, reg, regno, off, size); + err = __check_buffer_access(env, buf_info, reg, argno, off, size); if (err) return err; @@ -5952,7 +6001,7 @@ static bool type_is_trusted_or_null(struct bpf_verifier_env *env, static int check_ptr_to_btf_access(struct bpf_verifier_env *env, struct bpf_reg_state *regs, struct bpf_reg_state *reg, - int regno, int off, int size, + int argno, int off, int size, enum bpf_access_type atype, int value_regno) { @@ -5981,8 +6030,8 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env, tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off); verbose(env, - "R%d is ptr_%s invalid variable offset: off=%d, var_off=%s\n", - regno, tname, off, tn_buf); + "%s is ptr_%s invalid variable offset: off=%d, var_off=%s\n", + reg_arg_name(env, argno), tname, off, tn_buf); return -EACCES; } @@ -5990,22 +6039,22 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env, if (off < 0) { verbose(env, - "R%d is ptr_%s invalid negative access: off=%d\n", - regno, tname, off); + "%s is ptr_%s invalid negative access: off=%d\n", + reg_arg_name(env, argno), tname, off); return -EACCES; } if (reg->type & MEM_USER) { verbose(env, - "R%d is ptr_%s access user memory: off=%d\n", - regno, tname, off); + "%s is ptr_%s access user memory: off=%d\n", + reg_arg_name(env, argno), tname, off); return -EACCES; } if (reg->type & MEM_PERCPU) { verbose(env, - "R%d is ptr_%s access percpu memory: off=%d\n", - regno, tname, off); + "%s is ptr_%s access percpu memory: off=%d\n", + reg_arg_name(env, argno), tname, off); return -EACCES; } @@ -6108,7 +6157,7 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env, static int check_ptr_to_map_access(struct bpf_verifier_env *env, struct bpf_reg_state *regs, struct bpf_reg_state *reg, - int regno, int off, int size, + int argno, int off, int 
size, enum bpf_access_type atype, int value_regno) { @@ -6142,8 +6191,8 @@ static int check_ptr_to_map_access(struct bpf_verifier_env *env, } if (off < 0) { - verbose(env, "R%d is %s invalid negative access: off=%d\n", - regno, tname, off); + verbose(env, "%s is %s invalid negative access: off=%d\n", + reg_arg_name(env, argno), tname, off); return -EACCES; } @@ -6201,7 +6250,7 @@ static int check_stack_slot_within_bounds(struct bpf_verifier_env *env, */ static int check_stack_access_within_bounds( struct bpf_verifier_env *env, struct bpf_reg_state *reg, - int regno, int off, int access_size, + int argno, int off, int access_size, enum bpf_access_type type) { struct bpf_func_state *state = bpf_func(env, reg); @@ -6220,8 +6269,8 @@ static int check_stack_access_within_bounds( } else { if (reg->smax_value >= BPF_MAX_VAR_OFF || reg->smin_value <= -BPF_MAX_VAR_OFF) { - verbose(env, "invalid unbounded variable-offset%s stack R%d\n", - err_extra, regno); + verbose(env, "invalid unbounded variable-offset%s stack %s\n", + err_extra, reg_arg_name(env, argno)); return -EACCES; } min_off = reg->smin_value + off; @@ -6239,14 +6288,14 @@ static int check_stack_access_within_bounds( if (err) { if (tnum_is_const(reg->var_off)) { - verbose(env, "invalid%s stack R%d off=%lld size=%d\n", - err_extra, regno, min_off, access_size); + verbose(env, "invalid%s stack %s off=%lld size=%d\n", + err_extra, reg_arg_name(env, argno), min_off, access_size); } else { char tn_buf[48]; tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off); - verbose(env, "invalid variable-offset%s stack R%d var_off=%s off=%d size=%d\n", - err_extra, regno, tn_buf, off, access_size); + verbose(env, "invalid variable-offset%s stack %s var_off=%s off=%d size=%d\n", + err_extra, reg_arg_name(env, argno), tn_buf, off, access_size); } return err; } @@ -6291,7 +6340,7 @@ static void add_scalar_to_reg(struct bpf_reg_state *dst_reg, s64 val) * if t==write && value_regno==-1, some unknown value is stored into memory * if t==read 
&& value_regno==-1, don't care what we read from memory */ -static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, u32 regno, +static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, u32 argno, int off, int bpf_size, enum bpf_access_type t, int value_regno, bool strict_alignment_once, bool is_ldsx) { @@ -6308,11 +6357,12 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b if (reg->type == PTR_TO_MAP_KEY) { if (t == BPF_WRITE) { - verbose(env, "write to change key R%d not allowed\n", regno); + verbose(env, "write to change key %s not allowed\n", + reg_arg_name(env, argno)); return -EACCES; } - err = check_mem_region_access(env, reg, regno, off, size, + err = check_mem_region_access(env, reg, argno, off, size, reg->map_ptr->key_size, false); if (err) return err; @@ -6326,10 +6376,10 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b verbose(env, "R%d leaks addr into map\n", value_regno); return -EACCES; } - err = check_map_access_type(env, reg, regno, off, size, t); + err = check_map_access_type(env, reg, argno, off, size, t); if (err) return err; - err = check_map_access(env, reg, regno, off, size, false, ACCESS_DIRECT); + err = check_map_access(env, reg, argno, off, size, false, ACCESS_DIRECT); if (err) return err; if (tnum_is_const(reg->var_off)) @@ -6376,14 +6426,14 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b bool rdonly_untrusted = rdonly_mem && (reg->type & PTR_UNTRUSTED); if (type_may_be_null(reg->type)) { - verbose(env, "R%d invalid mem access '%s'\n", regno, + verbose(env, "%s invalid mem access '%s'\n", reg_arg_name(env, argno), reg_type_str(env, reg->type)); return -EACCES; } if (t == BPF_WRITE && rdonly_mem) { - verbose(env, "R%d cannot write into %s\n", - regno, reg_type_str(env, reg->type)); + verbose(env, "%s cannot write into %s\n", + reg_arg_name(env, argno), 
reg_type_str(env, reg->type)); return -EACCES; } @@ -6398,7 +6448,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b * instructions, hence no need to check bounds in that case. */ if (!rdonly_untrusted) - err = check_mem_region_access(env, reg, regno, off, size, + err = check_mem_region_access(env, reg, argno, off, size, reg->mem_size, false); if (!err && value_regno >= 0 && (t == BPF_READ || rdonly_mem)) mark_reg_unknown(env, regs, value_regno); @@ -6416,7 +6466,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b return -EACCES; } - err = check_ctx_access(env, insn_idx, reg, regno, off, size, t, &info); + err = check_ctx_access(env, insn_idx, reg, argno, off, size, t, &info); if (!err && t == BPF_READ && value_regno >= 0) { /* ctx access returns either a scalar, or a * PTR_TO_PACKET[_META,_END]. In the latter @@ -6453,12 +6503,12 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b } else if (reg->type == PTR_TO_STACK) { /* Basic bounds checks. 
*/ - err = check_stack_access_within_bounds(env, reg, regno, off, size, t); + err = check_stack_access_within_bounds(env, reg, argno, off, size, t); if (err) return err; if (t == BPF_READ) - err = check_stack_read(env, reg, regno, off, size, + err = check_stack_read(env, reg, argno, off, size, value_regno); else err = check_stack_write(env, reg, off, size, @@ -6474,7 +6524,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b value_regno); return -EACCES; } - err = check_packet_access(env, reg, regno, off, size, false); + err = check_packet_access(env, reg, argno, off, size, false); if (!err && t == BPF_READ && value_regno >= 0) mark_reg_unknown(env, regs, value_regno); } else if (reg->type == PTR_TO_FLOW_KEYS) { @@ -6490,23 +6540,23 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b mark_reg_unknown(env, regs, value_regno); } else if (type_is_sk_pointer(reg->type)) { if (t == BPF_WRITE) { - verbose(env, "R%d cannot write into %s\n", - regno, reg_type_str(env, reg->type)); + verbose(env, "%s cannot write into %s\n", + reg_arg_name(env, argno), reg_type_str(env, reg->type)); return -EACCES; } - err = check_sock_access(env, insn_idx, reg, regno, off, size, t); + err = check_sock_access(env, insn_idx, reg, argno, off, size, t); if (!err && value_regno >= 0) mark_reg_unknown(env, regs, value_regno); } else if (reg->type == PTR_TO_TP_BUFFER) { - err = check_tp_buffer_access(env, reg, regno, off, size); + err = check_tp_buffer_access(env, reg, argno, off, size); if (!err && t == BPF_READ && value_regno >= 0) mark_reg_unknown(env, regs, value_regno); } else if (base_type(reg->type) == PTR_TO_BTF_ID && !type_may_be_null(reg->type)) { - err = check_ptr_to_btf_access(env, regs, reg, regno, off, size, t, + err = check_ptr_to_btf_access(env, regs, reg, argno, off, size, t, value_regno); } else if (reg->type == CONST_PTR_TO_MAP) { - err = check_ptr_to_map_access(env, regs, reg, regno, off, size, t, + err = 
check_ptr_to_map_access(env, regs, reg, argno, off, size, t, value_regno); } else if (base_type(reg->type) == PTR_TO_BUF && !type_may_be_null(reg->type)) { @@ -6515,8 +6565,8 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b if (rdonly_mem) { if (t == BPF_WRITE) { - verbose(env, "R%d cannot write into %s\n", - regno, reg_type_str(env, reg->type)); + verbose(env, "%s cannot write into %s\n", + reg_arg_name(env, argno), reg_type_str(env, reg->type)); return -EACCES; } max_access = &env->prog->aux->max_rdonly_access; @@ -6524,7 +6574,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b max_access = &env->prog->aux->max_rdwr_access; } - err = check_buffer_access(env, reg, regno, off, size, false, + err = check_buffer_access(env, reg, argno, off, size, false, max_access); if (!err && value_regno >= 0 && (rdonly_mem || t == BPF_READ)) @@ -6533,7 +6583,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b if (t == BPF_READ && value_regno >= 0) mark_reg_unknown(env, regs, value_regno); } else { - verbose(env, "R%d invalid mem access '%s'\n", regno, + verbose(env, "%s invalid mem access '%s'\n", reg_arg_name(env, argno), reg_type_str(env, reg->type)); return -EACCES; } @@ -6787,7 +6837,7 @@ static int check_atomic(struct bpf_verifier_env *env, struct bpf_insn *insn) * read offsets are marked as read. 
*/ static int check_stack_range_initialized( - struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int off, + struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno, int off, int access_size, bool zero_size_allowed, enum bpf_access_type type, struct bpf_call_arg_meta *meta) { @@ -6812,7 +6862,7 @@ static int check_stack_range_initialized( return -EACCES; } - err = check_stack_access_within_bounds(env, reg, regno, off, access_size, type); + err = check_stack_access_within_bounds(env, reg, argno, off, access_size, type); if (err) return err; @@ -6829,8 +6879,8 @@ static int check_stack_range_initialized( char tn_buf[48]; tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off); - verbose(env, "R%d variable offset stack access prohibited for !root, var_off=%s\n", - regno, tn_buf); + verbose(env, "%s variable offset stack access prohibited for !root, var_off=%s\n", + reg_arg_name(env, argno), tn_buf); return -EACCES; } /* Only initialized buffer on stack is allowed to be accessed @@ -6873,7 +6923,7 @@ static int check_stack_range_initialized( } } meta->access_size = access_size; - meta->regno = regno; + meta->regno = argno; return 0; } @@ -6913,17 +6963,17 @@ static int check_stack_range_initialized( if (*stype == STACK_POISON) { if (allow_poison) goto mark; - verbose(env, "reading from stack R%d off %d+%d size %d, slot poisoned by dead code elimination\n", - regno, min_off, i - min_off, access_size); + verbose(env, "reading from stack %s off %d+%d size %d, slot poisoned by dead code elimination\n", + reg_arg_name(env, argno), min_off, i - min_off, access_size); } else if (tnum_is_const(reg->var_off)) { - verbose(env, "invalid read from stack R%d off %d+%d size %d\n", - regno, min_off, i - min_off, access_size); + verbose(env, "invalid read from stack %s off %d+%d size %d\n", + reg_arg_name(env, argno), min_off, i - min_off, access_size); } else { char tn_buf[48]; tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off); - verbose(env, "invalid read from 
stack R%d var_off %s+%d size %d\n", - regno, tn_buf, i - min_off, access_size); + verbose(env, "invalid read from stack %s var_off %s+%d size %d\n", + reg_arg_name(env, argno), tn_buf, i - min_off, access_size); } return -EACCES; mark: @@ -6932,7 +6982,7 @@ static int check_stack_range_initialized( return 0; } -static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, +static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno, int access_size, enum bpf_access_type access_type, bool zero_size_allowed, struct bpf_call_arg_meta *meta) @@ -6943,37 +6993,37 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_ switch (base_type(reg->type)) { case PTR_TO_PACKET: case PTR_TO_PACKET_META: - return check_packet_access(env, reg, regno, 0, access_size, + return check_packet_access(env, reg, argno, 0, access_size, zero_size_allowed); case PTR_TO_MAP_KEY: if (access_type == BPF_WRITE) { - verbose(env, "R%d cannot write into %s\n", regno, - reg_type_str(env, reg->type)); + verbose(env, "%s cannot write into %s\n", + reg_arg_name(env, argno), reg_type_str(env, reg->type)); return -EACCES; } - return check_mem_region_access(env, reg, regno, 0, access_size, + return check_mem_region_access(env, reg, argno, 0, access_size, reg->map_ptr->key_size, false); case PTR_TO_MAP_VALUE: - if (check_map_access_type(env, reg, regno, 0, access_size, access_type)) + if (check_map_access_type(env, reg, argno, 0, access_size, access_type)) return -EACCES; - return check_map_access(env, reg, regno, 0, access_size, + return check_map_access(env, reg, argno, 0, access_size, zero_size_allowed, ACCESS_HELPER); case PTR_TO_MEM: if (type_is_rdonly_mem(reg->type)) { if (access_type == BPF_WRITE) { - verbose(env, "R%d cannot write into %s\n", regno, - reg_type_str(env, reg->type)); + verbose(env, "%s cannot write into %s\n", + reg_arg_name(env, argno), reg_type_str(env, reg->type)); return 
-EACCES; } } - return check_mem_region_access(env, reg, regno, 0, + return check_mem_region_access(env, reg, argno, 0, access_size, reg->mem_size, zero_size_allowed); case PTR_TO_BUF: if (type_is_rdonly_mem(reg->type)) { if (access_type == BPF_WRITE) { - verbose(env, "R%d cannot write into %s\n", regno, - reg_type_str(env, reg->type)); + verbose(env, "%s cannot write into %s\n", + reg_arg_name(env, argno), reg_type_str(env, reg->type)); return -EACCES; } @@ -6981,21 +7031,21 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_ } else { max_access = &env->prog->aux->max_rdwr_access; } - return check_buffer_access(env, reg, regno, 0, + return check_buffer_access(env, reg, argno, 0, access_size, zero_size_allowed, max_access); case PTR_TO_STACK: return check_stack_range_initialized( env, reg, - regno, 0, access_size, + argno, 0, access_size, zero_size_allowed, access_type, meta); case PTR_TO_BTF_ID: - return check_ptr_to_btf_access(env, regs, reg, regno, 0, + return check_ptr_to_btf_access(env, regs, reg, argno, 0, access_size, BPF_READ, -1); case PTR_TO_CTX: /* Only permit reading or writing syscall context using helper calls. 
*/ if (is_var_ctx_off_allowed(env->prog)) { - int err = check_mem_region_access(env, reg, regno, 0, access_size, U16_MAX, + int err = check_mem_region_access(env, reg, argno, 0, access_size, U16_MAX, zero_size_allowed); if (err) return err; @@ -7010,7 +7060,7 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_ bpf_register_is_null(reg)) return 0; - verbose(env, "R%d type=%s ", regno, + verbose(env, "%s type=%s ", reg_arg_name(env, argno), reg_type_str(env, reg->type)); verbose(env, "expected=%s\n", reg_type_str(env, PTR_TO_STACK)); return -EACCES; @@ -7025,12 +7075,12 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_ */ static int check_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg_state *mem_reg, - struct bpf_reg_state *size_reg, u32 mem_regno, + struct bpf_reg_state *size_reg, u32 mem_argno, enum bpf_access_type access_type, bool zero_size_allowed, struct bpf_call_arg_meta *meta) { - int size_regno = mem_regno + 1; + int size_argno = next_argno(mem_argno); int err; /* This is used to refine r0 return value bounds for helpers @@ -7052,31 +7102,31 @@ static int check_mem_size_reg(struct bpf_verifier_env *env, meta = NULL; if (size_reg->smin_value < 0) { - verbose(env, "R%d min value is negative, either use unsigned or 'var &= const'\n", - size_regno); + verbose(env, "%s min value is negative, either use unsigned or 'var &= const'\n", + reg_arg_name(env, size_argno)); return -EACCES; } if (size_reg->umin_value == 0 && !zero_size_allowed) { - verbose(env, "R%d invalid zero-sized read: u64=[%lld,%lld]\n", - size_regno, size_reg->umin_value, size_reg->umax_value); + verbose(env, "%s invalid zero-sized read: u64=[%lld,%lld]\n", + reg_arg_name(env, size_argno), size_reg->umin_value, size_reg->umax_value); return -EACCES; } if (size_reg->umax_value >= BPF_MAX_VAR_SIZ) { - verbose(env, "R%d unbounded memory access, use 'var &= const' or 'if (var < const)'\n", - size_regno); + verbose(env, "%s 
unbounded memory access, use 'var &= const' or 'if (var < const)'\n", + reg_arg_name(env, size_argno)); return -EACCES; } - err = check_helper_mem_access(env, mem_reg, mem_regno, size_reg->umax_value, + err = check_helper_mem_access(env, mem_reg, mem_argno, size_reg->umax_value, access_type, zero_size_allowed, meta); - if (!err) - err = mark_chain_precision(env, size_regno); + if (!err && !is_stack_argno(size_argno)) + err = mark_chain_precision(env, size_argno); return err; } static int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg, - u32 regno, u32 mem_size) + u32 argno, u32 mem_size) { bool may_be_null = type_may_be_null(reg->type); struct bpf_reg_state saved_reg; @@ -7096,8 +7146,8 @@ static int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg int size = base_type(reg->type) == PTR_TO_STACK ? -(int)mem_size : mem_size; - err = check_helper_mem_access(env, reg, regno, size, BPF_READ, true, NULL); - err = err ?: check_helper_mem_access(env, reg, regno, size, BPF_WRITE, true, NULL); + err = check_helper_mem_access(env, reg, argno, size, BPF_READ, true, NULL); + err = err ?: check_helper_mem_access(env, reg, argno, size, BPF_WRITE, true, NULL); if (may_be_null) *reg = saved_reg; @@ -7106,14 +7156,15 @@ static int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg } static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg_state *mem_reg, - struct bpf_reg_state *size_reg, u32 mem_regno) + struct bpf_reg_state *size_reg, u32 mem_argno) { bool may_be_null = type_may_be_null(mem_reg->type); struct bpf_reg_state saved_reg; struct bpf_call_arg_meta meta; + u32 argno = make_argno(mem_argno); int err; - WARN_ON_ONCE(mem_regno > BPF_REG_4); + WARN_ON_ONCE(mem_argno > BPF_REG_3); memset(&meta, 0, sizeof(meta)); @@ -7122,8 +7173,8 @@ static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg mark_ptr_not_null_reg(mem_reg); } - err = check_mem_size_reg(env, mem_reg, 
size_reg, mem_regno, BPF_READ, true, &meta); - err = err ?: check_mem_size_reg(env, mem_reg, size_reg, mem_regno, BPF_WRITE, true, &meta); + err = check_mem_size_reg(env, mem_reg, size_reg, argno, BPF_READ, true, &meta); + err = err ?: check_mem_size_reg(env, mem_reg, size_reg, argno, BPF_WRITE, true, &meta); if (may_be_null) *mem_reg = saved_reg; @@ -7159,7 +7210,7 @@ enum { * env->cur_state->active_locks remembers which map value element or allocated * object got locked and clears it after bpf_spin_unlock. */ -static int process_spin_lock(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int flags) +static int process_spin_lock(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno, int flags) { bool is_lock = flags & PROCESS_SPIN_LOCK, is_res_lock = flags & PROCESS_RES_LOCK; const char *lock_str = is_res_lock ? "bpf_res_spin" : "bpf_spin"; @@ -7175,8 +7226,8 @@ static int process_spin_lock(struct bpf_verifier_env *env, struct bpf_reg_state if (!is_const) { verbose(env, - "R%d doesn't have constant offset. %s_lock has to be at the constant offset\n", - regno, lock_str); + "%s doesn't have constant offset. %s_lock has to be at the constant offset\n", + reg_arg_name(env, argno), lock_str); return -EINVAL; } if (reg->type == PTR_TO_MAP_VALUE) { @@ -7275,7 +7326,7 @@ static int process_spin_lock(struct bpf_verifier_env *env, struct bpf_reg_state } /* Check if @regno is a pointer to a specific field in a map value */ -static int check_map_field_pointer(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno, +static int check_map_field_pointer(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 argno, enum btf_field_type field_type, struct bpf_map_desc *map_desc) { @@ -7287,8 +7338,8 @@ static int check_map_field_pointer(struct bpf_verifier_env *env, struct bpf_reg_ if (!is_const) { verbose(env, - "R%d doesn't have constant offset. 
%s has to be at the constant offset\n", - regno, struct_name); + "%s doesn't have constant offset. %s has to be at the constant offset\n", + reg_arg_name(env, argno), struct_name); return -EINVAL; } if (!map->btf) { @@ -7328,26 +7379,26 @@ static int check_map_field_pointer(struct bpf_verifier_env *env, struct bpf_reg_ return 0; } -static int process_timer_func(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, +static int process_timer_func(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno, struct bpf_map_desc *map) { if (IS_ENABLED(CONFIG_PREEMPT_RT)) { verbose(env, "bpf_timer cannot be used for PREEMPT_RT.\n"); return -EOPNOTSUPP; } - return check_map_field_pointer(env, reg, regno, BPF_TIMER, map); + return check_map_field_pointer(env, reg, argno, BPF_TIMER, map); } -static int process_timer_helper(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, +static int process_timer_helper(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno, struct bpf_call_arg_meta *meta) { - return process_timer_func(env, reg, regno, &meta->map); + return process_timer_func(env, reg, argno, &meta->map); } -static int process_timer_kfunc(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, +static int process_timer_kfunc(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno, struct bpf_kfunc_call_arg_meta *meta) { - return process_timer_func(env, reg, regno, &meta->map); + return process_timer_func(env, reg, argno, &meta->map); } static int process_kptr_func(struct bpf_verifier_env *env, int regno, @@ -7423,15 +7474,15 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno, * Helpers which do not mutate the bpf_dynptr set MEM_RDONLY in their argument * type, and declare it as 'const struct bpf_dynptr *' in their prototype. 
  */
-static int process_dynptr_func(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int insn_idx,
+static int process_dynptr_func(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno, int insn_idx,
 			       enum bpf_arg_type arg_type, int clone_ref_obj_id)
 {
 	int err;
 
 	if (reg->type != PTR_TO_STACK && reg->type != CONST_PTR_TO_DYNPTR) {
 		verbose(env,
-			"arg#%d expected pointer to stack or const struct bpf_dynptr\n",
-			regno - 1);
+			"%s expected pointer to stack or const struct bpf_dynptr\n",
+			reg_arg_name(env, argno));
 		return -EINVAL;
 	}
@@ -7468,7 +7519,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, struct bpf_reg_stat
 
 		/* we write BPF_DW bits (8 bytes) at a time */
 		for (i = 0; i < BPF_DYNPTR_SIZE; i += 8) {
-			err = check_mem_access(env, insn_idx, reg, regno,
+			err = check_mem_access(env, insn_idx, reg, argno,
 					       i, BPF_DW, BPF_WRITE, -1, false, false);
 			if (err)
 				return err;
@@ -7483,17 +7534,16 @@ static int process_dynptr_func(struct bpf_verifier_env *env, struct bpf_reg_stat
 		}
 
 		if (!is_dynptr_reg_valid_init(env, reg)) {
-			verbose(env,
-				"Expected an initialized dynptr as arg #%d\n",
-				regno - 1);
+			verbose(env, "Expected an initialized dynptr as %s\n",
+				reg_arg_name(env, argno));
 			return -EINVAL;
 		}
 
 		/* Fold modifiers (in this case, MEM_RDONLY) when checking expected type */
 		if (!is_dynptr_type_expected(env, reg, arg_type & ~MEM_RDONLY)) {
-			verbose(env,
-				"Expected a dynptr of type %s as arg #%d\n",
-				dynptr_type_str(arg_to_dynptr_type(arg_type)), regno - 1);
+			verbose(env, "Expected a dynptr of type %s as %s\n",
+				dynptr_type_str(arg_to_dynptr_type(arg_type)),
+				reg_arg_name(env, argno));
 			return -EINVAL;
 		}
@@ -7538,14 +7588,16 @@ static bool is_kfunc_arg_iter(struct bpf_kfunc_call_arg_meta *meta, int arg_idx,
 	return btf_param_match_suffix(meta->btf, arg, "__iter");
 }
 
-static int process_iter_arg(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int insn_idx,
+static int process_iter_arg(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno, int insn_idx,
 			    struct bpf_kfunc_call_arg_meta *meta)
 {
 	const struct btf_type *t;
+	u32 arg_idx = arg_idx_from_argno(argno);
 	int spi, err, i, nr_slots, btf_id;
 
 	if (reg->type != PTR_TO_STACK) {
-		verbose(env, "arg#%d expected pointer to an iterator on stack\n", regno - 1);
+		verbose(env, "%s expected pointer to an iterator on stack\n",
+			reg_arg_name(env, argno));
 		return -EINVAL;
 	}
@@ -7555,9 +7607,10 @@ static int process_iter_arg(struct bpf_verifier_env *env, struct bpf_reg_state *
 	 * to any kfunc, if arg has "__iter" suffix, we need to be a bit more
 	 * conservative here.
 	 */
-	btf_id = btf_check_iter_arg(meta->btf, meta->func_proto, regno - 1);
+	btf_id = btf_check_iter_arg(meta->btf, meta->func_proto, arg_idx);
 	if (btf_id < 0) {
-		verbose(env, "expected valid iter pointer as arg #%d\n", regno - 1);
+		verbose(env, "expected valid iter pointer as %s\n",
+			reg_arg_name(env, argno));
 		return -EINVAL;
 	}
 	t = btf_type_by_id(meta->btf, btf_id);
@@ -7566,13 +7619,13 @@ static int process_iter_arg(struct bpf_verifier_env *env, struct bpf_reg_state *
 	if (is_iter_new_kfunc(meta)) {
 		/* bpf_iter_<type>_new() expects pointer to uninit iter state */
 		if (!is_iter_reg_valid_uninit(env, reg, nr_slots)) {
-			verbose(env, "expected uninitialized iter_%s as arg #%d\n",
-				iter_type_str(meta->btf, btf_id), regno - 1);
+			verbose(env, "expected uninitialized iter_%s as %s\n",
+				iter_type_str(meta->btf, btf_id), reg_arg_name(env, argno));
 			return -EINVAL;
 		}
 
 		for (i = 0; i < nr_slots * 8; i += BPF_REG_SIZE) {
-			err = check_mem_access(env, insn_idx, reg, regno,
+			err = check_mem_access(env, insn_idx, reg, argno,
 					       i, BPF_DW, BPF_WRITE, -1, false, false);
 			if (err)
 				return err;
@@ -7590,8 +7643,8 @@ static int process_iter_arg(struct bpf_verifier_env *env, struct bpf_reg_state *
 		case 0:
 			break;
 		case -EINVAL:
-			verbose(env, "expected an initialized iter_%s as arg #%d\n",
-				iter_type_str(meta->btf, btf_id), regno - 1);
+			verbose(env, "expected an initialized iter_%s as %s\n",
+				iter_type_str(meta->btf, btf_id), reg_arg_name(env, argno));
 			return err;
 		case -EPROTO:
 			verbose(env, "expected an RCU CS when using %s\n", meta->func_name);
@@ -8011,7 +8064,7 @@ static const struct bpf_reg_types *compatible_reg_types[__BPF_ARG_TYPE_MAX] = {
 	[ARG_PTR_TO_DYNPTR]		= &dynptr_types,
 };
 
-static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
+static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 argno,
 			  enum bpf_arg_type arg_type,
 			  const u32 *arg_btf_id,
 			  struct bpf_call_arg_meta *meta)
@@ -8046,7 +8099,8 @@ static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *re
 	type &= ~DYNPTR_TYPE_FLAG_MASK;
 
 	/* Local kptr types are allowed as the source argument of bpf_kptr_xchg */
-	if (meta->func_id == BPF_FUNC_kptr_xchg && type_is_alloc(type) && regno == BPF_REG_2) {
+	if (meta->func_id == BPF_FUNC_kptr_xchg && type_is_alloc(type) &&
+	    !is_stack_argno(argno) && argno == BPF_REG_2) {
 		type &= ~MEM_ALLOC;
 		type &= ~MEM_PERCPU;
 	}
@@ -8060,7 +8114,7 @@ static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *re
 			goto found;
 	}
 
-	verbose(env, "R%d type=%s expected=", regno, reg_type_str(env, reg->type));
+	verbose(env, "%s type=%s expected=", reg_arg_name(env, argno), reg_type_str(env, reg->type));
 	for (j = 0; j + 1 < i; j++)
 		verbose(env, "%s, ", reg_type_str(env, compatible->types[j]));
 	verbose(env, "%s\n", reg_type_str(env, compatible->types[j]));
@@ -8073,9 +8127,9 @@ static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *re
 	if (compatible == &mem_types) {
 		if (!(arg_type & MEM_RDONLY)) {
 			verbose(env,
-				"%s() may write into memory pointed by R%d type=%s\n",
+				"%s() may write into memory pointed by %s type=%s\n",
 				func_id_name(meta->func_id),
-				regno, reg_type_str(env, reg->type));
+				reg_arg_name(env, argno), reg_type_str(env, reg->type));
 			return -EACCES;
 		}
 		return 0;
@@ -8098,7 +8152,8 @@ static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *re
 
 	if (type_may_be_null(reg->type) && (!type_may_be_null(arg_type) ||
 					    arg_type_is_release(arg_type))) {
-		verbose(env, "Possibly NULL pointer passed to helper arg%d\n", regno);
+		verbose(env, "Possibly NULL pointer passed to helper %s\n",
+			reg_arg_name(env, argno));
 		return -EACCES;
 	}
@@ -8111,25 +8166,26 @@ static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *re
 	}
 
 	if (meta->func_id == BPF_FUNC_kptr_xchg) {
-		if (map_kptr_match_type(env, meta->kptr_field, reg, regno))
+		if (map_kptr_match_type(env, meta->kptr_field, reg, argno))
 			return -EACCES;
 	} else {
 		if (arg_btf_id == BPF_PTR_POISON) {
 			verbose(env, "verifier internal error:");
-			verbose(env, "R%d has non-overwritten BPF_PTR_POISON type\n",
-				regno);
+			verbose(env, "%s has non-overwritten BPF_PTR_POISON type\n",
+				reg_arg_name(env, argno));
 			return -EACCES;
 		}
 
-		err = __check_ptr_off_reg(env, reg, regno, true);
+		err = __check_ptr_off_reg(env, reg, argno, true);
 		if (err)
 			return err;
 
 		if (!btf_struct_ids_match(&env->log, reg->btf, reg->btf_id, reg->var_off.value,
 					  btf_vmlinux, *arg_btf_id,
 					  strict_type_match)) {
-			verbose(env, "R%d is of type %s but %s is expected\n",
-				regno, btf_type_name(reg->btf, reg->btf_id),
+			verbose(env, "%s is of type %s but %s is expected\n",
+				reg_arg_name(env, argno),
+				btf_type_name(reg->btf, reg->btf_id),
 				btf_type_name(btf_vmlinux, *arg_btf_id));
 			return -EACCES;
 		}
@@ -8146,8 +8202,9 @@ static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *re
 			return -EFAULT;
 		}
 		/* Check if local kptr in src arg matches kptr in dst arg */
-		if (meta->func_id == BPF_FUNC_kptr_xchg && regno == BPF_REG_2) {
-			if (map_kptr_match_type(env, meta->kptr_field, reg, regno))
+		if (meta->func_id == BPF_FUNC_kptr_xchg &&
+		    !is_stack_argno(argno) && argno == BPF_REG_2) {
+			if (map_kptr_match_type(env, meta->kptr_field, reg, argno))
 				return -EACCES;
 		}
 		break;
@@ -8181,7 +8238,7 @@ reg_find_field_offset(const struct bpf_reg_state *reg, s32 off, u32 fields)
 }
 
 static int check_func_arg_reg_off(struct bpf_verifier_env *env,
-				  const struct bpf_reg_state *reg, int regno,
+				  const struct bpf_reg_state *reg, int argno,
 				  enum bpf_arg_type arg_type)
 {
 	u32 type = reg->type;
@@ -8207,8 +8264,8 @@ static int check_func_arg_reg_off(struct bpf_verifier_env *env,
 		 * to give the user a better error message.
 		 */
 		if (!tnum_is_const(reg->var_off) || reg->var_off.value != 0) {
-			verbose(env, "R%d must have zero offset when passed to release func or trusted arg to kfunc\n",
-				regno);
+			verbose(env, "%s must have zero offset when passed to release func or trusted arg to kfunc\n",
+				reg_arg_name(env, argno));
 			return -EINVAL;
 		}
 	}
@@ -8244,7 +8301,7 @@ static int check_func_arg_reg_off(struct bpf_verifier_env *env,
 		 * cases. var_off always must be 0 for PTR_TO_BTF_ID, hence we
 		 * still need to do checks instead of returning.
 		 */
-		return __check_ptr_off_reg(env, reg, regno, true);
+		return __check_ptr_off_reg(env, reg, argno, true);
 	case PTR_TO_CTX:
 		/*
 		 * Allow fixed and variable offsets for syscall context, but
@@ -8256,7 +8313,7 @@ static int check_func_arg_reg_off(struct bpf_verifier_env *env,
 			return 0;
 		fallthrough;
 	default:
-		return __check_ptr_off_reg(env, reg, regno, false);
+		return __check_ptr_off_reg(env, reg, argno, false);
 	}
 }
@@ -8326,8 +8383,8 @@ static enum bpf_dynptr_type dynptr_get_type(struct bpf_verifier_env *env,
 	return state->stack[spi].spilled_ptr.dynptr.type;
 }
 
-static int check_reg_const_str(struct bpf_verifier_env *env,
-			       struct bpf_reg_state *reg, u32 regno)
+static int check_arg_const_str(struct bpf_verifier_env *env,
+			       struct bpf_reg_state *reg, u32 argno)
 {
 	struct bpf_map *map = reg->map_ptr;
 	int err;
@@ -8339,17 +8396,18 @@ static int check_reg_const_str(struct bpf_verifier_env *env,
 		return -EINVAL;
 
 	if (map->map_type == BPF_MAP_TYPE_INSN_ARRAY) {
-		verbose(env, "R%d points to insn_array map which cannot be used as const string\n", regno);
+		verbose(env, "%s points to insn_array map which cannot be used as const string\n",
+			reg_arg_name(env, argno));
 		return -EACCES;
 	}
 
 	if (!bpf_map_is_rdonly(map)) {
-		verbose(env, "R%d does not point to a readonly map'\n", regno);
+		verbose(env, "%s does not point to a readonly map'\n", reg_arg_name(env, argno));
 		return -EACCES;
 	}
 
 	if (!tnum_is_const(reg->var_off)) {
-		verbose(env, "R%d is not a constant address'\n", regno);
+		verbose(env, "%s is not a constant address'\n", reg_arg_name(env, argno));
 		return -EACCES;
 	}
@@ -8358,7 +8416,7 @@ static int check_reg_const_str(struct bpf_verifier_env *env,
 		return -EACCES;
 	}
 
-	err = check_map_access(env, reg, regno, 0,
+	err = check_map_access(env, reg, argno, 0,
 			       map->value_size - reg->var_off.value, false,
 			       ACCESS_HELPER);
 	if (err)
@@ -8697,7 +8755,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 		break;
 	case ARG_PTR_TO_CONST_STR:
 	{
-		err = check_reg_const_str(env, reg, regno);
+		err = check_arg_const_str(env, reg, regno);
 		if (err)
 			return err;
 		break;
@@ -9286,13 +9344,14 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
 	 * verifier sees.
 	 */
 	for (i = 0; i < sub->arg_cnt; i++) {
+		u32 argno = make_argno(i);
 		u32 regno = i + 1;
 		struct bpf_reg_state *reg = &regs[regno];
 		struct bpf_subprog_arg_info *arg = &sub->args[i];
 
 		if (arg->arg_type == ARG_ANYTHING) {
 			if (reg->type != SCALAR_VALUE) {
-				bpf_log(log, "R%d is not a scalar\n", regno);
+				bpf_log(log, "%s is not a scalar\n", reg_arg_name(env, argno));
 				return -EINVAL;
 			}
 		} else if (arg->arg_type & PTR_UNTRUSTED) {
@@ -9302,24 +9361,26 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
 			 * invalid memory access.
 			 */
 		} else if (arg->arg_type == ARG_PTR_TO_CTX) {
-			ret = check_func_arg_reg_off(env, reg, regno, ARG_PTR_TO_CTX);
+			ret = check_func_arg_reg_off(env, reg, argno, ARG_PTR_TO_CTX);
 			if (ret < 0)
 				return ret;
 			/* If function expects ctx type in BTF check that caller
 			 * is passing PTR_TO_CTX.
 			 */
 			if (reg->type != PTR_TO_CTX) {
-				bpf_log(log, "arg#%d expects pointer to ctx\n", i);
+				bpf_log(log, "%s expects pointer to ctx\n",
+					reg_arg_name(env, argno));
 				return -EINVAL;
 			}
 		} else if (base_type(arg->arg_type) == ARG_PTR_TO_MEM) {
-			ret = check_func_arg_reg_off(env, reg, regno, ARG_DONTCARE);
+			ret = check_func_arg_reg_off(env, reg, argno, ARG_DONTCARE);
 			if (ret < 0)
 				return ret;
 
-			if (check_mem_reg(env, reg, regno, arg->mem_size))
+			if (check_mem_reg(env, reg, argno, arg->mem_size))
 				return -EINVAL;
 			if (!(arg->arg_type & PTR_MAYBE_NULL) && (reg->type & PTR_MAYBE_NULL)) {
-				bpf_log(log, "arg#%d is expected to be non-NULL\n", i);
+				bpf_log(log, "%s is expected to be non-NULL\n",
+					reg_arg_name(env, argno));
 				return -EINVAL;
 			}
 		} else if (base_type(arg->arg_type) == ARG_PTR_TO_ARENA) {
@@ -9331,15 +9392,16 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
 			 * run-time debug nightmare.
 			 */
 			if (reg->type != PTR_TO_ARENA && reg->type != SCALAR_VALUE) {
-				bpf_log(log, "R%d is not a pointer to arena or scalar.\n", regno);
+				bpf_log(log, "%s is not a pointer to arena or scalar.\n",
+					reg_arg_name(env, argno));
 				return -EINVAL;
 			}
 		} else if (arg->arg_type == (ARG_PTR_TO_DYNPTR | MEM_RDONLY)) {
-			ret = check_func_arg_reg_off(env, reg, regno, ARG_PTR_TO_DYNPTR);
+			ret = check_func_arg_reg_off(env, reg, argno, ARG_PTR_TO_DYNPTR);
 			if (ret)
 				return ret;
 
-			ret = process_dynptr_func(env, reg, regno, -1, arg->arg_type, 0);
+			ret = process_dynptr_func(env, reg, argno, -1, arg->arg_type, 0);
 			if (ret)
 				return ret;
 		} else if (base_type(arg->arg_type) == ARG_PTR_TO_BTF_ID) {
@@ -9350,12 +9412,13 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
 				continue;
 
 			memset(&meta, 0, sizeof(meta)); /* leave func_id as zero */
-			err = check_reg_type(env, reg, regno, arg->arg_type, &arg->btf_id, &meta);
-			err = err ?: check_func_arg_reg_off(env, reg, regno, arg->arg_type);
+			err = check_reg_type(env, reg, argno, arg->arg_type, &arg->btf_id, &meta);
+			err = err ?: check_func_arg_reg_off(env, reg, argno, arg->arg_type);
 			if (err)
 				return err;
 		} else {
-			verifier_bug(env, "unrecognized arg#%d type %d", i, arg->arg_type);
+			verifier_bug(env, "unrecognized %s type %d",
+				     reg_arg_name(env, argno), arg->arg_type);
 			return -EFAULT;
 		}
 	}
@@ -11398,8 +11461,9 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
 	if ((base_type(reg->type) == PTR_TO_BTF_ID || reg2btf_ids[base_type(reg->type)])) {
 		if (!btf_type_is_struct(ref_t)) {
-			verbose(env, "kernel function %s args#%d pointer type %s %s is not supported\n",
-				meta->func_name, argno, btf_type_str(ref_t), ref_tname);
+			verbose(env, "kernel function %s %s pointer type %s %s is not supported\n",
+				meta->func_name, reg_arg_name(env, make_argno(argno)),
+				btf_type_str(ref_t), ref_tname);
 			return -EINVAL;
 		}
 		return KF_ARG_PTR_TO_BTF_ID;
@@ -11415,8 +11479,9 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
 	 */
 	if (!btf_type_is_scalar(ref_t) && !__btf_type_is_scalar_struct(env, meta->btf, ref_t, 0) &&
 	    (arg_mem_size ? !btf_type_is_void(ref_t) : 1)) {
-		verbose(env, "arg#%d pointer type %s %s must point to %sscalar, or struct with scalar\n",
-			argno, btf_type_str(ref_t), ref_tname, arg_mem_size ? "void, " : "");
+		verbose(env, "%s pointer type %s %s must point to %sscalar, or struct with scalar\n",
+			reg_arg_name(env, make_argno(argno)),
+			btf_type_str(ref_t), ref_tname, arg_mem_size ? "void, " : "");
 		return -EINVAL;
 	}
 	return arg_mem_size ? KF_ARG_PTR_TO_MEM_SIZE : KF_ARG_PTR_TO_MEM;
@@ -11485,15 +11550,16 @@ static int process_kf_arg_ptr_to_btf_id(struct bpf_verifier_env *env,
 	 */
 	taking_projection = btf_is_projection_of(ref_tname, reg_ref_tname);
 	if (!taking_projection && !struct_same) {
-		verbose(env, "kernel function %s args#%d expected pointer to %s %s but R%d has a pointer to %s %s\n",
-			meta->func_name, argno, btf_type_str(ref_t), ref_tname, argno + 1,
+		verbose(env, "kernel function %s %s expected pointer to %s %s but %s has a pointer to %s %s\n",
+			meta->func_name, reg_arg_name(env, make_argno(argno)),
+			btf_type_str(ref_t), ref_tname, reg_arg_name(env, make_argno(argno)),
 			btf_type_str(reg_ref_t), reg_ref_tname);
 		return -EINVAL;
 	}
 	return 0;
 }
 
-static int process_irq_flag(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
+static int process_irq_flag(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno,
 			    struct bpf_kfunc_call_arg_meta *meta)
 {
 	int err, kfunc_class = IRQ_NATIVE_KFUNC;
@@ -11516,11 +11582,13 @@ static int process_irq_flag(struct bpf_verifier_env *env, struct bpf_reg_state *
 
 	if (irq_save) {
 		if (!is_irq_flag_reg_valid_uninit(env, reg)) {
-			verbose(env, "expected uninitialized irq flag as arg#%d\n", regno - 1);
+			verbose(env, "expected uninitialized irq flag as %s\n",
+				reg_arg_name(env, argno));
 			return -EINVAL;
 		}
 
-		err = check_mem_access(env, env->insn_idx, reg, regno, 0, BPF_DW, BPF_WRITE, -1, false, false);
+		err = check_mem_access(env, env->insn_idx, reg, argno, 0, BPF_DW,
+				       BPF_WRITE, -1, false, false);
 		if (err)
 			return err;
@@ -11530,7 +11598,8 @@ static int process_irq_flag(struct bpf_verifier_env *env, struct bpf_reg_state *
 	} else {
 		err = is_irq_flag_reg_valid_init(env, reg);
 		if (err) {
-			verbose(env, "expected an initialized irq flag as arg#%d\n", regno - 1);
+			verbose(env, "expected an initialized irq flag as %s\n",
+				reg_arg_name(env, argno));
 			return err;
 		}
@@ -11821,7 +11890,7 @@ static bool check_kfunc_is_graph_node_api(struct bpf_verifier_env *env,
 
 static int
 __process_kf_arg_ptr_to_graph_root(struct bpf_verifier_env *env,
-				   struct bpf_reg_state *reg, u32 regno,
+				   struct bpf_reg_state *reg, u32 argno,
 				   struct bpf_kfunc_call_arg_meta *meta,
 				   enum btf_field_type head_field_type,
 				   struct btf_field **head_field)
@@ -11842,8 +11911,8 @@ __process_kf_arg_ptr_to_graph_root(struct bpf_verifier_env *env,
 	head_type_name = btf_field_type_name(head_field_type);
 	if (!tnum_is_const(reg->var_off)) {
 		verbose(env,
-			"R%d doesn't have constant offset. %s has to be at the constant offset\n",
-			regno, head_type_name);
+			"%s doesn't have constant offset. %s has to be at the constant offset\n",
+			reg_arg_name(env, argno), head_type_name);
 		return -EINVAL;
 	}
@@ -11871,24 +11940,24 @@ __process_kf_arg_ptr_to_graph_root(struct bpf_verifier_env *env,
 }
 
 static int process_kf_arg_ptr_to_list_head(struct bpf_verifier_env *env,
-					   struct bpf_reg_state *reg, u32 regno,
+					   struct bpf_reg_state *reg, u32 argno,
 					   struct bpf_kfunc_call_arg_meta *meta)
 {
-	return __process_kf_arg_ptr_to_graph_root(env, reg, regno, meta, BPF_LIST_HEAD,
+	return __process_kf_arg_ptr_to_graph_root(env, reg, argno, meta, BPF_LIST_HEAD,
						  &meta->arg_list_head.field);
 }
 
 static int process_kf_arg_ptr_to_rbtree_root(struct bpf_verifier_env *env,
-					     struct bpf_reg_state *reg, u32 regno,
+					     struct bpf_reg_state *reg, u32 argno,
 					     struct bpf_kfunc_call_arg_meta *meta)
 {
-	return __process_kf_arg_ptr_to_graph_root(env, reg, regno, meta, BPF_RB_ROOT,
+	return __process_kf_arg_ptr_to_graph_root(env, reg, argno, meta, BPF_RB_ROOT,
						  &meta->arg_rbtree_root.field);
 }
 
 static int
 __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
-				   struct bpf_reg_state *reg, u32 regno,
+				   struct bpf_reg_state *reg, u32 argno,
 				   struct bpf_kfunc_call_arg_meta *meta,
 				   enum btf_field_type head_field_type,
 				   enum btf_field_type node_field_type,
@@ -11910,8 +11979,8 @@ __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
 	node_type_name = btf_field_type_name(node_field_type);
 	if (!tnum_is_const(reg->var_off)) {
 		verbose(env,
-			"R%d doesn't have constant offset. %s has to be at the constant offset\n",
-			regno, node_type_name);
+			"%s doesn't have constant offset. %s has to be at the constant offset\n",
+			reg_arg_name(env, argno), node_type_name);
 		return -EINVAL;
 	}
@@ -11952,19 +12021,19 @@ __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
 }
 
 static int process_kf_arg_ptr_to_list_node(struct bpf_verifier_env *env,
-					   struct bpf_reg_state *reg, u32 regno,
+					   struct bpf_reg_state *reg, u32 argno,
 					   struct bpf_kfunc_call_arg_meta *meta)
 {
-	return __process_kf_arg_ptr_to_graph_node(env, reg, regno, meta,
+	return __process_kf_arg_ptr_to_graph_node(env, reg, argno, meta,
						  BPF_LIST_HEAD, BPF_LIST_NODE,
						  &meta->arg_list_head.field);
 }
 
 static int process_kf_arg_ptr_to_rbtree_node(struct bpf_verifier_env *env,
-					     struct bpf_reg_state *reg, u32 regno,
+					     struct bpf_reg_state *reg, u32 argno,
 					     struct bpf_kfunc_call_arg_meta *meta)
 {
-	return __process_kf_arg_ptr_to_graph_node(env, reg, regno, meta,
+	return __process_kf_arg_ptr_to_graph_node(env, reg, argno, meta,
						  BPF_RB_ROOT, BPF_RB_NODE,
						  &meta->arg_rbtree_root.field);
 }
@@ -12016,6 +12085,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 		struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[i + 1];
 		const struct btf_type *t, *ref_t, *resolve_ret;
 		enum bpf_arg_type arg_type = ARG_DONTCARE;
+		u32 argno = make_argno(i);
 		u32 regno = i + 1, ref_id, type_size;
 		bool is_ret_buf_sz = false;
 		int kf_arg_type;
@@ -12038,7 +12108,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 
 		if (btf_type_is_scalar(t)) {
 			if (reg->type != SCALAR_VALUE) {
-				verbose(env, "R%d is not a scalar\n", regno);
+				verbose(env, "%s is not a scalar\n", reg_arg_name(env, argno));
 				return -EINVAL;
 			}
@@ -12048,7 +12118,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 					return -EFAULT;
 				}
 				if (!tnum_is_const(reg->var_off)) {
-					verbose(env, "R%d must be a known constant\n", regno);
+					verbose(env, "%s must be a known constant\n",
+						reg_arg_name(env, argno));
 					return -EINVAL;
 				}
 				ret = mark_chain_precision(env, regno);
@@ -12070,7 +12141,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				}
 
 				if (!tnum_is_const(reg->var_off)) {
-					verbose(env, "R%d is not a const\n", regno);
+					verbose(env, "%s is not a const\n",
						reg_arg_name(env, argno));
 					return -EINVAL;
 				}
@@ -12083,20 +12155,22 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 		}
 
 		if (!btf_type_is_ptr(t)) {
-			verbose(env, "Unrecognized arg#%d type %s\n", i, btf_type_str(t));
+			verbose(env, "Unrecognized %s type %s\n",
+				reg_arg_name(env, argno), btf_type_str(t));
 			return -EINVAL;
 		}
 
 		if ((bpf_register_is_null(reg) || type_may_be_null(reg->type)) &&
 		    !is_kfunc_arg_nullable(meta->btf, &args[i])) {
-			verbose(env, "Possibly NULL pointer passed to trusted arg%d\n", i);
+			verbose(env, "Possibly NULL pointer passed to trusted %s\n",
+				reg_arg_name(env, argno));
 			return -EACCES;
 		}
 
 		if (reg->ref_obj_id) {
 			if (is_kfunc_release(meta) && meta->ref_obj_id) {
-				verifier_bug(env, "more than one arg with ref_obj_id R%d %u %u",
-					     regno, reg->ref_obj_id,
+				verifier_bug(env, "more than one arg with ref_obj_id %s %u %u",
+					     reg_arg_name(env, argno), reg->ref_obj_id,
					     meta->ref_obj_id);
 				return -EFAULT;
 			}
@@ -12117,7 +12191,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			continue;
 		case KF_ARG_PTR_TO_MAP:
 			if (!reg->map_ptr) {
-				verbose(env, "pointer in R%d isn't map pointer\n", regno);
+				verbose(env, "pointer in %s isn't map pointer\n",
+					reg_arg_name(env, argno));
 				return -EINVAL;
 			}
 			if (meta->map.ptr && (reg->map_ptr->record->wq_off >= 0 ||
@@ -12155,11 +12230,13 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 		case KF_ARG_PTR_TO_BTF_ID:
 			if (!is_trusted_reg(reg)) {
 				if (!is_kfunc_rcu(meta)) {
-					verbose(env, "R%d must be referenced or trusted\n", regno);
+					verbose(env, "%s must be referenced or trusted\n",
+						reg_arg_name(env, argno));
 					return -EINVAL;
 				}
 				if (!is_rcu_reg(reg)) {
-					verbose(env, "R%d must be a rcu pointer\n", regno);
+					verbose(env, "%s must be a rcu pointer\n",
+						reg_arg_name(env, argno));
 					return -EINVAL;
 				}
 			}
@@ -12191,15 +12268,15 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 		if (is_kfunc_release(meta) && reg->ref_obj_id)
 			arg_type |= OBJ_RELEASE;
-		ret = check_func_arg_reg_off(env, reg, regno, arg_type);
+		ret = check_func_arg_reg_off(env, reg, argno, arg_type);
 		if (ret < 0)
 			return ret;
 
 		switch (kf_arg_type) {
 		case KF_ARG_PTR_TO_CTX:
 			if (reg->type != PTR_TO_CTX) {
-				verbose(env, "arg#%d expected pointer to ctx, but got %s\n",
-					i, reg_type_str(env, reg->type));
+				verbose(env, "%s expected pointer to ctx, but got %s\n",
+					reg_arg_name(env, argno), reg_type_str(env, reg->type));
 				return -EINVAL;
 			}
@@ -12213,16 +12290,19 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 		case KF_ARG_PTR_TO_ALLOC_BTF_ID:
 			if (reg->type == (PTR_TO_BTF_ID | MEM_ALLOC)) {
 				if (!is_bpf_obj_drop_kfunc(meta->func_id)) {
-					verbose(env, "arg#%d expected for bpf_obj_drop()\n", i);
+					verbose(env, "%s expected for bpf_obj_drop()\n",
+						reg_arg_name(env, argno));
 					return -EINVAL;
 				}
 			} else if (reg->type == (PTR_TO_BTF_ID | MEM_ALLOC | MEM_PERCPU)) {
 				if (!is_bpf_percpu_obj_drop_kfunc(meta->func_id)) {
-					verbose(env, "arg#%d expected for bpf_percpu_obj_drop()\n", i);
+					verbose(env, "%s expected for bpf_percpu_obj_drop()\n",
+						reg_arg_name(env, argno));
 					return -EINVAL;
 				}
 			} else {
-				verbose(env, "arg#%d expected pointer to allocated object\n", i);
+				verbose(env, "%s expected pointer to allocated object\n",
+					reg_arg_name(env, argno));
 				return -EINVAL;
 			}
 			if (!reg->ref_obj_id) {
@@ -12273,7 +12353,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				}
 			}
 
-			ret = process_dynptr_func(env, reg, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id);
+			ret = process_dynptr_func(env, reg, argno, insn_idx,
+						  dynptr_arg_type, clone_ref_obj_id);
 			if (ret < 0)
 				return ret;
@@ -12298,55 +12379,59 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 					return -EINVAL;
 				}
 			}
-			ret = process_iter_arg(env, reg, regno, insn_idx, meta);
+			ret = process_iter_arg(env, reg, argno, insn_idx, meta);
 			if (ret < 0)
 				return ret;
 			break;
 		case KF_ARG_PTR_TO_LIST_HEAD:
 			if (reg->type != PTR_TO_MAP_VALUE &&
 			    reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) {
-				verbose(env, "arg#%d expected pointer to map value or allocated object\n", i);
+				verbose(env, "%s expected pointer to map value or allocated object\n",
+					reg_arg_name(env, argno));
 				return -EINVAL;
 			}
 			if (reg->type == (PTR_TO_BTF_ID | MEM_ALLOC) && !reg->ref_obj_id) {
 				verbose(env, "allocated object must be referenced\n");
 				return -EINVAL;
 			}
-			ret = process_kf_arg_ptr_to_list_head(env, reg, regno, meta);
+			ret = process_kf_arg_ptr_to_list_head(env, reg, argno, meta);
 			if (ret < 0)
 				return ret;
 			break;
 		case KF_ARG_PTR_TO_RB_ROOT:
 			if (reg->type != PTR_TO_MAP_VALUE &&
 			    reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) {
-				verbose(env, "arg#%d expected pointer to map value or allocated object\n", i);
+				verbose(env, "%s expected pointer to map value or allocated object\n",
+					reg_arg_name(env, argno));
 				return -EINVAL;
 			}
 			if (reg->type == (PTR_TO_BTF_ID | MEM_ALLOC) && !reg->ref_obj_id) {
 				verbose(env, "allocated object must be referenced\n");
 				return -EINVAL;
 			}
-			ret = process_kf_arg_ptr_to_rbtree_root(env, reg, regno, meta);
+			ret = process_kf_arg_ptr_to_rbtree_root(env, reg, argno, meta);
 			if (ret < 0)
 				return ret;
 			break;
 		case KF_ARG_PTR_TO_LIST_NODE:
 			if (reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) {
-				verbose(env, "arg#%d expected pointer to allocated object\n", i);
+				verbose(env, "%s expected pointer to allocated object\n",
+					reg_arg_name(env, argno));
 				return -EINVAL;
 			}
 			if (!reg->ref_obj_id) {
 				verbose(env, "allocated object must be referenced\n");
 				return -EINVAL;
 			}
-			ret = process_kf_arg_ptr_to_list_node(env, reg, regno, meta);
+			ret = process_kf_arg_ptr_to_list_node(env, reg, argno, meta);
 			if (ret < 0)
 				return ret;
 			break;
 		case KF_ARG_PTR_TO_RB_NODE:
 			if (is_bpf_rbtree_add_kfunc(meta->func_id)) {
 				if (reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) {
-					verbose(env, "arg#%d expected pointer to allocated object\n", i);
+					verbose(env, "%s expected pointer to allocated object\n",
+						reg_arg_name(env, argno));
 					return -EINVAL;
 				}
 				if (!reg->ref_obj_id) {
@@ -12364,7 +12449,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				}
 			}
 
-			ret = process_kf_arg_ptr_to_rbtree_node(env, reg, regno, meta);
+			ret = process_kf_arg_ptr_to_rbtree_node(env, reg, argno, meta);
 			if (ret < 0)
 				return ret;
 			break;
@@ -12379,7 +12464,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			if ((base_type(reg->type) != PTR_TO_BTF_ID ||
 			     (bpf_type_has_unsafe_modifiers(reg->type) && !is_rcu_reg(reg))) &&
 			    !reg2btf_ids[base_type(reg->type)]) {
-				verbose(env, "arg#%d is %s ", i, reg_type_str(env, reg->type));
+				verbose(env, "%s is %s ", reg_arg_name(env, argno),
+					reg_type_str(env, reg->type));
 				verbose(env, "expected %s or socket\n",
					reg_type_str(env, base_type(reg->type) |
						     (type_flag(reg->type) & BPF_REG_TRUSTED_MODIFIERS)));
@@ -12392,11 +12478,12 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 		case KF_ARG_PTR_TO_MEM:
 			resolve_ret = btf_resolve_size(btf, ref_t, &type_size);
 			if (IS_ERR(resolve_ret)) {
-				verbose(env, "arg#%d reference type('%s %s') size cannot be determined: %ld\n",
-					i, btf_type_str(ref_t), ref_tname, PTR_ERR(resolve_ret));
+				verbose(env, "%s reference type('%s %s') size cannot be determined: %ld\n",
+					reg_arg_name(env, argno), btf_type_str(ref_t),
+					ref_tname, PTR_ERR(resolve_ret));
 				return -EINVAL;
 			}
-			ret = check_mem_reg(env, reg, regno, type_size);
+			ret = check_mem_reg(env, reg, argno, type_size);
 			if (ret < 0)
 				return ret;
 			break;
@@ -12408,9 +12495,11 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			const struct btf_param *size_arg = &args[i + 1];
 
 			if (!bpf_register_is_null(buff_reg) || !is_kfunc_arg_nullable(meta->btf, buff_arg)) {
-				ret = check_kfunc_mem_size_reg(env, buff_reg, size_reg, regno);
+				ret = check_kfunc_mem_size_reg(env, buff_reg, size_reg, i);
 				if (ret < 0) {
-					verbose(env, "arg#%d arg#%d memory, len pair leads to invalid memory access\n", i, i + 1);
+					verbose(env, "%s and ", reg_arg_name(env, argno));
+					verbose(env, "%s memory, len pair leads to invalid memory access\n",
+						reg_arg_name(env, next_argno(argno)));
 					return ret;
 				}
 			}
@@ -12421,7 +12510,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 					return -EFAULT;
 				}
 				if (!tnum_is_const(size_reg->var_off)) {
-					verbose(env, "R%d must be a known constant\n", regno + 1);
+					verbose(env, "%s must be a known constant\n",
+						reg_arg_name(env, next_argno(argno)));
 					return -EINVAL;
 				}
 				meta->arg_constant.found = true;
@@ -12434,14 +12524,16 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 		}
 		case KF_ARG_PTR_TO_CALLBACK:
 			if (reg->type != PTR_TO_FUNC) {
-				verbose(env, "arg%d expected pointer to func\n", i);
+				verbose(env, "%s expected pointer to func\n",
+					reg_arg_name(env, argno));
 				return -EINVAL;
 			}
 			meta->subprogno = reg->subprogno;
 			break;
 		case KF_ARG_PTR_TO_REFCOUNTED_KPTR:
 			if (!type_is_ptr_alloc_obj(reg->type)) {
-				verbose(env, "arg#%d is neither owning or non-owning ref\n", i);
+				verbose(env, "%s is neither owning or non-owning ref\n",
+					reg_arg_name(env, argno));
 				return -EINVAL;
 			}
 			if (!type_is_non_owning_ref(reg->type))
@@ -12454,7 +12546,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			}
 
 			if (rec->refcount_off < 0) {
-				verbose(env, "arg#%d doesn't point to a type with bpf_refcount field\n", i);
+				verbose(env, "%s doesn't point to a type with bpf_refcount field\n",
+					reg_arg_name(env, argno));
 				return -EINVAL;
 			}
@@ -12463,46 +12556,51 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			break;
 		case KF_ARG_PTR_TO_CONST_STR:
 			if (reg->type != PTR_TO_MAP_VALUE) {
-				verbose(env, "arg#%d doesn't point to a const string\n", i);
+				verbose(env, "%s doesn't point to a const string\n",
+					reg_arg_name(env, argno));
 				return -EINVAL;
 			}
-			ret = check_reg_const_str(env, reg, regno);
+			ret = check_arg_const_str(env, reg, argno);
 			if (ret)
 				return ret;
 			break;
 		case KF_ARG_PTR_TO_WORKQUEUE:
 			if (reg->type != PTR_TO_MAP_VALUE) {
-				verbose(env, "arg#%d doesn't point to a map value\n", i);
+				verbose(env, "%s doesn't point to a map value\n",
+					reg_arg_name(env, argno));
 				return -EINVAL;
 			}
-			ret = check_map_field_pointer(env, reg, regno, BPF_WORKQUEUE, &meta->map);
+			ret = check_map_field_pointer(env, reg, argno, BPF_WORKQUEUE, &meta->map);
 			if (ret < 0)
 				return ret;
 			break;
 		case KF_ARG_PTR_TO_TIMER:
 			if (reg->type != PTR_TO_MAP_VALUE) {
-				verbose(env, "arg#%d doesn't point to a map value\n", i);
+				verbose(env, "%s doesn't point to a map value\n",
+					reg_arg_name(env, argno));
 				return -EINVAL;
 			}
-			ret = process_timer_kfunc(env, reg, regno, meta);
+			ret = process_timer_kfunc(env, reg, argno, meta);
 			if (ret < 0)
 				return ret;
 			break;
 		case KF_ARG_PTR_TO_TASK_WORK:
 			if (reg->type != PTR_TO_MAP_VALUE) {
-				verbose(env, "arg#%d doesn't point to a map value\n", i);
+				verbose(env, "%s doesn't point to a map value\n",
+					reg_arg_name(env, argno));
 				return -EINVAL;
 			}
-			ret = check_map_field_pointer(env, reg, regno, BPF_TASK_WORK, &meta->map);
+			ret = check_map_field_pointer(env, reg, argno, BPF_TASK_WORK, &meta->map);
 			if (ret < 0)
 				return ret;
 			break;
 		case KF_ARG_PTR_TO_IRQ_FLAG:
 			if (reg->type != PTR_TO_STACK) {
-				verbose(env, "arg#%d doesn't point to an irq flag on stack\n", i);
+				verbose(env, "%s doesn't point to an irq flag on stack\n",
+					reg_arg_name(env, argno));
 				return -EINVAL;
 			}
-			ret = process_irq_flag(env, reg, regno, meta);
+			ret = process_irq_flag(env, reg, argno, meta);
 			if (ret < 0)
 				return ret;
 			break;
@@ -12511,7 +12609,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			int flags = PROCESS_RES_LOCK;
 
 			if (reg->type != PTR_TO_MAP_VALUE && reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) {
-				verbose(env, "arg#%d doesn't point to map value or allocated object\n", i);
+				verbose(env, "%s doesn't point to map value or allocated object\n",
+					reg_arg_name(env, argno));
 				return -EINVAL;
 			}
@@ -12523,7 +12622,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			if (meta->func_id == special_kfunc_list[KF_bpf_res_spin_lock_irqsave] ||
 			    meta->func_id == special_kfunc_list[KF_bpf_res_spin_unlock_irqrestore])
 				flags |= PROCESS_LOCK_IRQ;
-			ret = process_spin_lock(env, reg, regno, flags);
+			ret = process_spin_lock(env, reg, argno, flags);
 			if (ret < 0)
 				return ret;
 			break;
@@ -18737,7 +18836,7 @@ static int do_check_common(struct bpf_verifier_env *env, int subprog)
 			mark_reg_unknown(env, regs, i);
 		} else {
 			verifier_bug(env, "unhandled arg#%d type %d",
-				     i - BPF_REG_1, arg->arg_type);
+				     i - BPF_REG_1 + 1, arg->arg_type);
 			ret = -EFAULT;
 			goto out;
 		}
diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_nf.c b/tools/testing/selftests/bpf/prog_tests/bpf_nf.c
index 215878ea04de..b33dba4b126e 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_nf.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_nf.c
@@ -11,18 +11,18 @@ struct {
 	const char *prog_name;
 	const char *err_msg;
 } test_bpf_nf_fail_tests[] = {
-	{ "alloc_release", "kernel function bpf_ct_release args#0 expected pointer to STRUCT nf_conn but" },
-	{ "insert_insert", "kernel function bpf_ct_insert_entry args#0 expected pointer to STRUCT nf_conn___init but" },
-	{ "lookup_insert", "kernel function bpf_ct_insert_entry args#0 expected pointer to STRUCT nf_conn___init but" },
-	{ "set_timeout_after_insert", "kernel function bpf_ct_set_timeout args#0 expected pointer to STRUCT nf_conn___init but" },
-	{ "set_status_after_insert", "kernel
function bpf_ct_set_status args#0 expected pointer to STRUCT nf_conn___init but" }, - { "change_timeout_after_alloc", "kernel function bpf_ct_change_timeout args#0 expected pointer to STRUCT nf_conn but" }, - { "change_status_after_alloc", "kernel function bpf_ct_change_status args#0 expected pointer to STRUCT nf_conn but" }, + { "alloc_release", "kernel function bpf_ct_release R1 expected pointer to STRUCT nf_conn but" }, + { "insert_insert", "kernel function bpf_ct_insert_entry R1 expected pointer to STRUCT nf_conn___init but" }, + { "lookup_insert", "kernel function bpf_ct_insert_entry R1 expected pointer to STRUCT nf_conn___init but" }, + { "set_timeout_after_insert", "kernel function bpf_ct_set_timeout R1 expected pointer to STRUCT nf_conn___init but" }, + { "set_status_after_insert", "kernel function bpf_ct_set_status R1 expected pointer to STRUCT nf_conn___init but" }, + { "change_timeout_after_alloc", "kernel function bpf_ct_change_timeout R1 expected pointer to STRUCT nf_conn but" }, + { "change_status_after_alloc", "kernel function bpf_ct_change_status R1 expected pointer to STRUCT nf_conn but" }, { "write_not_allowlisted_field", "no write support to nf_conn at off" }, - { "lookup_null_bpf_tuple", "Possibly NULL pointer passed to trusted arg1" }, - { "lookup_null_bpf_opts", "Possibly NULL pointer passed to trusted arg3" }, - { "xdp_lookup_null_bpf_tuple", "Possibly NULL pointer passed to trusted arg1" }, - { "xdp_lookup_null_bpf_opts", "Possibly NULL pointer passed to trusted arg3" }, + { "lookup_null_bpf_tuple", "Possibly NULL pointer passed to trusted R2" }, + { "lookup_null_bpf_opts", "Possibly NULL pointer passed to trusted R4" }, + { "xdp_lookup_null_bpf_tuple", "Possibly NULL pointer passed to trusted R2" }, + { "xdp_lookup_null_bpf_opts", "Possibly NULL pointer passed to trusted R4" }, }; enum { diff --git a/tools/testing/selftests/bpf/prog_tests/cb_refs.c b/tools/testing/selftests/bpf/prog_tests/cb_refs.c index c40df623a8f7..6300b67a3a84 100644 
--- a/tools/testing/selftests/bpf/prog_tests/cb_refs.c +++ b/tools/testing/selftests/bpf/prog_tests/cb_refs.c @@ -12,7 +12,7 @@ struct { const char *err_msg; } cb_refs_tests[] = { { "underflow_prog", "must point to scalar, or struct with scalar" }, - { "leak_prog", "Possibly NULL pointer passed to helper arg2" }, + { "leak_prog", "Possibly NULL pointer passed to helper R2" }, { "nested_cb", "Unreleased reference id=4 alloc_insn=2" }, /* alloc_insn=2{4,5} */ { "non_cb_transfer_ref", "Unreleased reference id=4 alloc_insn=1" }, /* alloc_insn=1{1,2} */ }; diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c index 62f3fb79f5d1..3df07680f9e0 100644 --- a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c +++ b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c @@ -68,7 +68,7 @@ static struct kfunc_test_params kfunc_tests[] = { TC_FAIL(kfunc_call_test_get_mem_fail_oob, 0, "min value is outside of the allowed memory range"), TC_FAIL(kfunc_call_test_get_mem_fail_not_const, 0, "is not a const"), TC_FAIL(kfunc_call_test_mem_acquire_fail, 0, "acquire kernel function does not return PTR_TO_BTF_ID"), - TC_FAIL(kfunc_call_test_pointer_arg_type_mismatch, 0, "arg#0 expected pointer to ctx, but got scalar"), + TC_FAIL(kfunc_call_test_pointer_arg_type_mismatch, 0, "R1 expected pointer to ctx, but got scalar"), /* success cases */ TC_TEST(kfunc_call_test1, 12), diff --git a/tools/testing/selftests/bpf/prog_tests/linked_list.c b/tools/testing/selftests/bpf/prog_tests/linked_list.c index 6f25b5f39a79..dbff099860ba 100644 --- a/tools/testing/selftests/bpf/prog_tests/linked_list.c +++ b/tools/testing/selftests/bpf/prog_tests/linked_list.c @@ -81,8 +81,8 @@ static struct { { "direct_write_node", "direct access to bpf_list_node is disallowed" }, { "use_after_unlock_push_front", "invalid mem access 'scalar'" }, { "use_after_unlock_push_back", "invalid mem access 'scalar'" }, - { "double_push_front", "arg#1 expected pointer 
to allocated object" }, - { "double_push_back", "arg#1 expected pointer to allocated object" }, + { "double_push_front", "R2 expected pointer to allocated object" }, + { "double_push_back", "R2 expected pointer to allocated object" }, { "no_node_value_type", "bpf_list_node not found at offset=0" }, { "incorrect_value_type", "operation on bpf_list_head expects arg#1 bpf_list_node at offset=48 in struct foo, " diff --git a/tools/testing/selftests/bpf/progs/cgrp_kfunc_failure.c b/tools/testing/selftests/bpf/progs/cgrp_kfunc_failure.c index 9fe9c4a4e8f6..a875ba8e5007 100644 --- a/tools/testing/selftests/bpf/progs/cgrp_kfunc_failure.c +++ b/tools/testing/selftests/bpf/progs/cgrp_kfunc_failure.c @@ -29,7 +29,7 @@ static struct __cgrps_kfunc_map_value *insert_lookup_cgrp(struct cgroup *cgrp) } SEC("tp_btf/cgroup_mkdir") -__failure __msg("Possibly NULL pointer passed to trusted arg0") +__failure __msg("Possibly NULL pointer passed to trusted R1") int BPF_PROG(cgrp_kfunc_acquire_untrusted, struct cgroup *cgrp, const char *path) { struct cgroup *acquired; @@ -48,7 +48,7 @@ int BPF_PROG(cgrp_kfunc_acquire_untrusted, struct cgroup *cgrp, const char *path } SEC("tp_btf/cgroup_mkdir") -__failure __msg("Possibly NULL pointer passed to trusted arg0") +__failure __msg("Possibly NULL pointer passed to trusted R1") int BPF_PROG(cgrp_kfunc_acquire_no_null_check, struct cgroup *cgrp, const char *path) { struct cgroup *acquired; @@ -64,7 +64,7 @@ int BPF_PROG(cgrp_kfunc_acquire_no_null_check, struct cgroup *cgrp, const char * } SEC("tp_btf/cgroup_mkdir") -__failure __msg("arg#0 pointer type STRUCT cgroup must point") +__failure __msg("R1 pointer type STRUCT cgroup must point") int BPF_PROG(cgrp_kfunc_acquire_fp, struct cgroup *cgrp, const char *path) { struct cgroup *acquired, *stack_cgrp = (struct cgroup *)&path; @@ -106,7 +106,7 @@ int BPF_PROG(cgrp_kfunc_acquire_trusted_walked, struct cgroup *cgrp, const char } SEC("tp_btf/cgroup_mkdir") -__failure __msg("Possibly NULL pointer passed 
to trusted arg0") +__failure __msg("Possibly NULL pointer passed to trusted R1") int BPF_PROG(cgrp_kfunc_acquire_null, struct cgroup *cgrp, const char *path) { struct cgroup *acquired; @@ -175,7 +175,7 @@ int BPF_PROG(cgrp_kfunc_rcu_get_release, struct cgroup *cgrp, const char *path) } SEC("tp_btf/cgroup_mkdir") -__failure __msg("Possibly NULL pointer passed to trusted arg0") +__failure __msg("Possibly NULL pointer passed to trusted R1") int BPF_PROG(cgrp_kfunc_release_untrusted, struct cgroup *cgrp, const char *path) { struct __cgrps_kfunc_map_value *v; @@ -191,7 +191,7 @@ int BPF_PROG(cgrp_kfunc_release_untrusted, struct cgroup *cgrp, const char *path } SEC("tp_btf/cgroup_mkdir") -__failure __msg("arg#0 pointer type STRUCT cgroup must point") +__failure __msg("R1 pointer type STRUCT cgroup must point") int BPF_PROG(cgrp_kfunc_release_fp, struct cgroup *cgrp, const char *path) { struct cgroup *acquired = (struct cgroup *)&path; @@ -203,7 +203,7 @@ int BPF_PROG(cgrp_kfunc_release_fp, struct cgroup *cgrp, const char *path) } SEC("tp_btf/cgroup_mkdir") -__failure __msg("Possibly NULL pointer passed to trusted arg0") +__failure __msg("Possibly NULL pointer passed to trusted R1") int BPF_PROG(cgrp_kfunc_release_null, struct cgroup *cgrp, const char *path) { struct __cgrps_kfunc_map_value local, *v; diff --git a/tools/testing/selftests/bpf/progs/cpumask_failure.c b/tools/testing/selftests/bpf/progs/cpumask_failure.c index 61c32e91e8c3..4c45346fe6f7 100644 --- a/tools/testing/selftests/bpf/progs/cpumask_failure.c +++ b/tools/testing/selftests/bpf/progs/cpumask_failure.c @@ -45,7 +45,7 @@ int BPF_PROG(test_alloc_no_release, struct task_struct *task, u64 clone_flags) } SEC("tp_btf/task_newtask") -__failure __msg("NULL pointer passed to trusted arg0") +__failure __msg("NULL pointer passed to trusted R1") int BPF_PROG(test_alloc_double_release, struct task_struct *task, u64 clone_flags) { struct bpf_cpumask *cpumask; @@ -73,7 +73,7 @@ int BPF_PROG(test_acquire_wrong_cpumask, 
struct task_struct *task, u64 clone_fla } SEC("tp_btf/task_newtask") -__failure __msg("bpf_cpumask_set_cpu args#1 expected pointer to STRUCT bpf_cpumask") +__failure __msg("bpf_cpumask_set_cpu R2 expected pointer to STRUCT bpf_cpumask") int BPF_PROG(test_mutate_cpumask, struct task_struct *task, u64 clone_flags) { /* Can't set the CPU of a non-struct bpf_cpumask. */ @@ -107,7 +107,7 @@ int BPF_PROG(test_insert_remove_no_release, struct task_struct *task, u64 clone_ } SEC("tp_btf/task_newtask") -__failure __msg("NULL pointer passed to trusted arg0") +__failure __msg("NULL pointer passed to trusted R1") int BPF_PROG(test_cpumask_null, struct task_struct *task, u64 clone_flags) { /* NULL passed to kfunc. */ @@ -151,7 +151,7 @@ int BPF_PROG(test_global_mask_out_of_rcu, struct task_struct *task, u64 clone_fl } SEC("tp_btf/task_newtask") -__failure __msg("NULL pointer passed to trusted arg1") +__failure __msg("NULL pointer passed to trusted R2") int BPF_PROG(test_global_mask_no_null_check, struct task_struct *task, u64 clone_flags) { struct bpf_cpumask *local, *prev; @@ -179,7 +179,7 @@ int BPF_PROG(test_global_mask_no_null_check, struct task_struct *task, u64 clone } SEC("tp_btf/task_newtask") -__failure __msg("Possibly NULL pointer passed to helper arg2") +__failure __msg("Possibly NULL pointer passed to helper R2") int BPF_PROG(test_global_mask_rcu_no_null_check, struct task_struct *task, u64 clone_flags) { struct bpf_cpumask *prev, *curr; diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c index b62773ce5219..dbd97add5a5a 100644 --- a/tools/testing/selftests/bpf/progs/dynptr_fail.c +++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c @@ -149,7 +149,7 @@ int ringbuf_release_uninit_dynptr(void *ctx) /* A dynptr can't be used after it has been invalidated */ SEC("?raw_tp") -__failure __msg("Expected an initialized dynptr as arg #2") +__failure __msg("Expected an initialized dynptr as R3") int 
use_after_invalid(void *ctx) { struct bpf_dynptr ptr; @@ -448,7 +448,7 @@ int invalid_helper2(void *ctx) /* A bpf_dynptr is invalidated if it's been written into */ SEC("?raw_tp") -__failure __msg("Expected an initialized dynptr as arg #0") +__failure __msg("Expected an initialized dynptr as R1") int invalid_write1(void *ctx) { struct bpf_dynptr ptr; @@ -1642,7 +1642,7 @@ int invalid_slice_rdwr_rdonly(struct __sk_buff *skb) /* bpf_dynptr_adjust can only be called on initialized dynptrs */ SEC("?raw_tp") -__failure __msg("Expected an initialized dynptr as arg #0") +__failure __msg("Expected an initialized dynptr as R1") int dynptr_adjust_invalid(void *ctx) { struct bpf_dynptr ptr = {}; @@ -1655,7 +1655,7 @@ int dynptr_adjust_invalid(void *ctx) /* bpf_dynptr_is_null can only be called on initialized dynptrs */ SEC("?raw_tp") -__failure __msg("Expected an initialized dynptr as arg #0") +__failure __msg("Expected an initialized dynptr as R1") int dynptr_is_null_invalid(void *ctx) { struct bpf_dynptr ptr = {}; @@ -1668,7 +1668,7 @@ int dynptr_is_null_invalid(void *ctx) /* bpf_dynptr_is_rdonly can only be called on initialized dynptrs */ SEC("?raw_tp") -__failure __msg("Expected an initialized dynptr as arg #0") +__failure __msg("Expected an initialized dynptr as R1") int dynptr_is_rdonly_invalid(void *ctx) { struct bpf_dynptr ptr = {}; @@ -1681,7 +1681,7 @@ int dynptr_is_rdonly_invalid(void *ctx) /* bpf_dynptr_size can only be called on initialized dynptrs */ SEC("?raw_tp") -__failure __msg("Expected an initialized dynptr as arg #0") +__failure __msg("Expected an initialized dynptr as R1") int dynptr_size_invalid(void *ctx) { struct bpf_dynptr ptr = {}; @@ -1694,7 +1694,7 @@ int dynptr_size_invalid(void *ctx) /* Only initialized dynptrs can be cloned */ SEC("?raw_tp") -__failure __msg("Expected an initialized dynptr as arg #0") +__failure __msg("Expected an initialized dynptr as R1") int clone_invalid1(void *ctx) { struct bpf_dynptr ptr1 = {}; @@ -1728,7 +1728,7 @@ int 
clone_invalid2(struct xdp_md *xdp) /* Invalidating a dynptr should invalidate its clones */ SEC("?raw_tp") -__failure __msg("Expected an initialized dynptr as arg #2") +__failure __msg("Expected an initialized dynptr as R3") int clone_invalidate1(void *ctx) { struct bpf_dynptr clone; @@ -1749,7 +1749,7 @@ int clone_invalidate1(void *ctx) /* Invalidating a dynptr should invalidate its parent */ SEC("?raw_tp") -__failure __msg("Expected an initialized dynptr as arg #2") +__failure __msg("Expected an initialized dynptr as R3") int clone_invalidate2(void *ctx) { struct bpf_dynptr ptr; @@ -1770,7 +1770,7 @@ int clone_invalidate2(void *ctx) /* Invalidating a dynptr should invalidate its siblings */ SEC("?raw_tp") -__failure __msg("Expected an initialized dynptr as arg #2") +__failure __msg("Expected an initialized dynptr as R3") int clone_invalidate3(void *ctx) { struct bpf_dynptr ptr; @@ -1981,7 +1981,7 @@ __noinline long global_call_bpf_dynptr(const struct bpf_dynptr *dynptr) } SEC("?raw_tp") -__failure __msg("arg#0 expected pointer to stack or const struct bpf_dynptr") +__failure __msg("R1 expected pointer to stack or const struct bpf_dynptr") int test_dynptr_reg_type(void *ctx) { struct task_struct *current = NULL; diff --git a/tools/testing/selftests/bpf/progs/file_reader_fail.c b/tools/testing/selftests/bpf/progs/file_reader_fail.c index 32fe28ed2439..0739620dea8a 100644 --- a/tools/testing/selftests/bpf/progs/file_reader_fail.c +++ b/tools/testing/selftests/bpf/progs/file_reader_fail.c @@ -30,7 +30,7 @@ int on_nanosleep_unreleased_ref(void *ctx) SEC("xdp") __failure -__msg("Expected a dynptr of type file as arg #0") +__msg("Expected a dynptr of type file as R1") int xdp_wrong_dynptr_type(struct xdp_md *xdp) { struct bpf_dynptr dynptr; @@ -42,7 +42,7 @@ int xdp_wrong_dynptr_type(struct xdp_md *xdp) SEC("xdp") __failure -__msg("Expected an initialized dynptr as arg #0") +__msg("Expected an initialized dynptr as R1") int xdp_no_dynptr_type(struct xdp_md *xdp) { 
struct bpf_dynptr dynptr; diff --git a/tools/testing/selftests/bpf/progs/irq.c b/tools/testing/selftests/bpf/progs/irq.c index e11e82d98904..a4a007866a33 100644 --- a/tools/testing/selftests/bpf/progs/irq.c +++ b/tools/testing/selftests/bpf/progs/irq.c @@ -15,7 +15,7 @@ struct bpf_res_spin_lock lockA __hidden SEC(".data.A"); struct bpf_res_spin_lock lockB __hidden SEC(".data.B"); SEC("?tc") -__failure __msg("arg#0 doesn't point to an irq flag on stack") +__failure __msg("R1 doesn't point to an irq flag on stack") int irq_save_bad_arg(struct __sk_buff *ctx) { bpf_local_irq_save(&global_flags); @@ -23,7 +23,7 @@ int irq_save_bad_arg(struct __sk_buff *ctx) } SEC("?tc") -__failure __msg("arg#0 doesn't point to an irq flag on stack") +__failure __msg("R1 doesn't point to an irq flag on stack") int irq_restore_bad_arg(struct __sk_buff *ctx) { bpf_local_irq_restore(&global_flags); diff --git a/tools/testing/selftests/bpf/progs/iters.c b/tools/testing/selftests/bpf/progs/iters.c index 86b74e3579d9..0fa70b133d93 100644 --- a/tools/testing/selftests/bpf/progs/iters.c +++ b/tools/testing/selftests/bpf/progs/iters.c @@ -1605,7 +1605,7 @@ int iter_subprog_check_stacksafe(const void *ctx) struct bpf_iter_num global_it; SEC("raw_tp") -__failure __msg("arg#0 expected pointer to an iterator on stack") +__failure __msg("R1 expected pointer to an iterator on stack") int iter_new_bad_arg(const void *ctx) { bpf_iter_num_new(&global_it, 0, 1); @@ -1613,7 +1613,7 @@ int iter_new_bad_arg(const void *ctx) } SEC("raw_tp") -__failure __msg("arg#0 expected pointer to an iterator on stack") +__failure __msg("R1 expected pointer to an iterator on stack") int iter_next_bad_arg(const void *ctx) { bpf_iter_num_next(&global_it); @@ -1621,7 +1621,7 @@ int iter_next_bad_arg(const void *ctx) } SEC("raw_tp") -__failure __msg("arg#0 expected pointer to an iterator on stack") +__failure __msg("R1 expected pointer to an iterator on stack") int iter_destroy_bad_arg(const void *ctx) { 
bpf_iter_num_destroy(&global_it); diff --git a/tools/testing/selftests/bpf/progs/iters_state_safety.c b/tools/testing/selftests/bpf/progs/iters_state_safety.c index d273b46dfc7c..af8f9ec1ea98 100644 --- a/tools/testing/selftests/bpf/progs/iters_state_safety.c +++ b/tools/testing/selftests/bpf/progs/iters_state_safety.c @@ -73,7 +73,7 @@ int create_and_forget_to_destroy_fail(void *ctx) } SEC("?raw_tp") -__failure __msg("expected an initialized iter_num as arg #0") +__failure __msg("expected an initialized iter_num as R1") int destroy_without_creating_fail(void *ctx) { /* init with zeros to stop verifier complaining about uninit stack */ @@ -91,7 +91,7 @@ int destroy_without_creating_fail(void *ctx) } SEC("?raw_tp") -__failure __msg("expected an initialized iter_num as arg #0") +__failure __msg("expected an initialized iter_num as R1") int compromise_iter_w_direct_write_fail(void *ctx) { struct bpf_iter_num iter; @@ -143,7 +143,7 @@ int compromise_iter_w_direct_write_and_skip_destroy_fail(void *ctx) } SEC("?raw_tp") -__failure __msg("expected an initialized iter_num as arg #0") +__failure __msg("expected an initialized iter_num as R1") int compromise_iter_w_helper_write_fail(void *ctx) { struct bpf_iter_num iter; @@ -230,7 +230,7 @@ int valid_stack_reuse(void *ctx) } SEC("?raw_tp") -__failure __msg("expected uninitialized iter_num as arg #0") +__failure __msg("expected uninitialized iter_num as R1") int double_create_fail(void *ctx) { struct bpf_iter_num iter; @@ -258,7 +258,7 @@ int double_create_fail(void *ctx) } SEC("?raw_tp") -__failure __msg("expected an initialized iter_num as arg #0") +__failure __msg("expected an initialized iter_num as R1") int double_destroy_fail(void *ctx) { struct bpf_iter_num iter; @@ -284,7 +284,7 @@ int double_destroy_fail(void *ctx) } SEC("?raw_tp") -__failure __msg("expected an initialized iter_num as arg #0") +__failure __msg("expected an initialized iter_num as R1") int next_without_new_fail(void *ctx) { struct bpf_iter_num iter; 
@@ -305,7 +305,7 @@ int next_without_new_fail(void *ctx) } SEC("?raw_tp") -__failure __msg("expected an initialized iter_num as arg #0") +__failure __msg("expected an initialized iter_num as R1") int next_after_destroy_fail(void *ctx) { struct bpf_iter_num iter; diff --git a/tools/testing/selftests/bpf/progs/iters_testmod.c b/tools/testing/selftests/bpf/progs/iters_testmod.c index 5379e9960ffd..76012dbbdb41 100644 --- a/tools/testing/selftests/bpf/progs/iters_testmod.c +++ b/tools/testing/selftests/bpf/progs/iters_testmod.c @@ -29,7 +29,7 @@ int iter_next_trusted(const void *ctx) } SEC("raw_tp/sys_enter") -__failure __msg("Possibly NULL pointer passed to trusted arg0") +__failure __msg("Possibly NULL pointer passed to trusted R1") int iter_next_trusted_or_null(const void *ctx) { struct task_struct *cur_task = bpf_get_current_task_btf(); @@ -67,7 +67,7 @@ int iter_next_rcu(const void *ctx) } SEC("raw_tp/sys_enter") -__failure __msg("Possibly NULL pointer passed to trusted arg0") +__failure __msg("Possibly NULL pointer passed to trusted R1") int iter_next_rcu_or_null(const void *ctx) { struct task_struct *cur_task = bpf_get_current_task_btf(); diff --git a/tools/testing/selftests/bpf/progs/iters_testmod_seq.c b/tools/testing/selftests/bpf/progs/iters_testmod_seq.c index 83791348bed5..9b760dac333e 100644 --- a/tools/testing/selftests/bpf/progs/iters_testmod_seq.c +++ b/tools/testing/selftests/bpf/progs/iters_testmod_seq.c @@ -79,7 +79,7 @@ int testmod_seq_truncated(const void *ctx) SEC("?raw_tp") __failure -__msg("expected an initialized iter_testmod_seq as arg #1") +__msg("expected an initialized iter_testmod_seq as R2") int testmod_seq_getter_before_bad(const void *ctx) { struct bpf_iter_testmod_seq it; @@ -89,7 +89,7 @@ int testmod_seq_getter_before_bad(const void *ctx) SEC("?raw_tp") __failure -__msg("expected an initialized iter_testmod_seq as arg #1") +__msg("expected an initialized iter_testmod_seq as R2") int testmod_seq_getter_after_bad(const void *ctx) { 
struct bpf_iter_testmod_seq it; diff --git a/tools/testing/selftests/bpf/progs/map_kptr_fail.c b/tools/testing/selftests/bpf/progs/map_kptr_fail.c index 6443b320c732..431c218de068 100644 --- a/tools/testing/selftests/bpf/progs/map_kptr_fail.c +++ b/tools/testing/selftests/bpf/progs/map_kptr_fail.c @@ -364,7 +364,7 @@ int kptr_xchg_ref_state(struct __sk_buff *ctx) } SEC("?tc") -__failure __msg("Possibly NULL pointer passed to helper arg2") +__failure __msg("Possibly NULL pointer passed to helper R2") int kptr_xchg_possibly_null(struct __sk_buff *ctx) { struct prog_test_ref_kfunc *p; diff --git a/tools/testing/selftests/bpf/progs/percpu_alloc_fail.c b/tools/testing/selftests/bpf/progs/percpu_alloc_fail.c index 81813c724fa9..08379c3b6a03 100644 --- a/tools/testing/selftests/bpf/progs/percpu_alloc_fail.c +++ b/tools/testing/selftests/bpf/progs/percpu_alloc_fail.c @@ -110,7 +110,7 @@ int BPF_PROG(test_array_map_3) } SEC("?fentry.s/bpf_fentry_test1") -__failure __msg("arg#0 expected for bpf_percpu_obj_drop()") +__failure __msg("R1 expected for bpf_percpu_obj_drop()") int BPF_PROG(test_array_map_4) { struct val_t __percpu_kptr *p; @@ -124,7 +124,7 @@ int BPF_PROG(test_array_map_4) } SEC("?fentry.s/bpf_fentry_test1") -__failure __msg("arg#0 expected for bpf_obj_drop()") +__failure __msg("R1 expected for bpf_obj_drop()") int BPF_PROG(test_array_map_5) { struct val_t *p; diff --git a/tools/testing/selftests/bpf/progs/rbtree_fail.c b/tools/testing/selftests/bpf/progs/rbtree_fail.c index 70b7baf9304b..555379952dcc 100644 --- a/tools/testing/selftests/bpf/progs/rbtree_fail.c +++ b/tools/testing/selftests/bpf/progs/rbtree_fail.c @@ -134,7 +134,7 @@ long rbtree_api_remove_no_drop(void *ctx) } SEC("?tc") -__failure __msg("arg#1 expected pointer to allocated object") +__failure __msg("R2 expected pointer to allocated object") long rbtree_api_add_to_multiple_trees(void *ctx) { struct node_data *n; @@ -153,7 +153,7 @@ long rbtree_api_add_to_multiple_trees(void *ctx) } SEC("?tc") 
-__failure __msg("Possibly NULL pointer passed to trusted arg1") +__failure __msg("Possibly NULL pointer passed to trusted R2") long rbtree_api_use_unchecked_remove_retval(void *ctx) { struct bpf_rb_node *res; @@ -281,7 +281,7 @@ long add_with_cb(bool (cb)(struct bpf_rb_node *a, const struct bpf_rb_node *b)) } SEC("?tc") -__failure __msg("arg#1 expected pointer to allocated object") +__failure __msg("R2 expected pointer to allocated object") long rbtree_api_add_bad_cb_bad_fn_call_add(void *ctx) { return add_with_cb(less__bad_fn_call_add); diff --git a/tools/testing/selftests/bpf/progs/refcounted_kptr_fail.c b/tools/testing/selftests/bpf/progs/refcounted_kptr_fail.c index b2808bfcec29..7247a20c0a3b 100644 --- a/tools/testing/selftests/bpf/progs/refcounted_kptr_fail.c +++ b/tools/testing/selftests/bpf/progs/refcounted_kptr_fail.c @@ -54,7 +54,7 @@ long rbtree_refcounted_node_ref_escapes(void *ctx) } SEC("?tc") -__failure __msg("Possibly NULL pointer passed to trusted arg0") +__failure __msg("Possibly NULL pointer passed to trusted R1") long refcount_acquire_maybe_null(void *ctx) { struct node_acquire *n, *m; diff --git a/tools/testing/selftests/bpf/progs/stream_fail.c b/tools/testing/selftests/bpf/progs/stream_fail.c index 8e8249f3521c..21428bb1ee59 100644 --- a/tools/testing/selftests/bpf/progs/stream_fail.c +++ b/tools/testing/selftests/bpf/progs/stream_fail.c @@ -23,7 +23,7 @@ int stream_vprintk_scalar_arg(void *ctx) } SEC("syscall") -__failure __msg("arg#1 doesn't point to a const string") +__failure __msg("R2 doesn't point to a const string") int stream_vprintk_string_arg(void *ctx) { bpf_stream_vprintk(BPF_STDOUT, ctx, NULL, 0); diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_failure.c b/tools/testing/selftests/bpf/progs/task_kfunc_failure.c index 4c07ea193f72..41047d81ec42 100644 --- a/tools/testing/selftests/bpf/progs/task_kfunc_failure.c +++ b/tools/testing/selftests/bpf/progs/task_kfunc_failure.c @@ -28,7 +28,7 @@ static struct 
__tasks_kfunc_map_value *insert_lookup_task(struct task_struct *ta } SEC("tp_btf/task_newtask") -__failure __msg("Possibly NULL pointer passed to trusted arg0") +__failure __msg("Possibly NULL pointer passed to trusted R1") int BPF_PROG(task_kfunc_acquire_untrusted, struct task_struct *task, u64 clone_flags) { struct task_struct *acquired; @@ -49,7 +49,7 @@ int BPF_PROG(task_kfunc_acquire_untrusted, struct task_struct *task, u64 clone_f } SEC("tp_btf/task_newtask") -__failure __msg("arg#0 pointer type STRUCT task_struct must point") +__failure __msg("R1 pointer type STRUCT task_struct must point") int BPF_PROG(task_kfunc_acquire_fp, struct task_struct *task, u64 clone_flags) { struct task_struct *acquired, *stack_task = (struct task_struct *)&clone_flags; @@ -100,7 +100,7 @@ int BPF_PROG(task_kfunc_acquire_unsafe_kretprobe_rcu, struct task_struct *task, } SEC("tp_btf/task_newtask") -__failure __msg("Possibly NULL pointer passed to trusted arg0") +__failure __msg("Possibly NULL pointer passed to trusted R1") int BPF_PROG(task_kfunc_acquire_null, struct task_struct *task, u64 clone_flags) { struct task_struct *acquired; @@ -149,7 +149,7 @@ int BPF_PROG(task_kfunc_xchg_unreleased, struct task_struct *task, u64 clone_fla } SEC("tp_btf/task_newtask") -__failure __msg("Possibly NULL pointer passed to trusted arg0") +__failure __msg("Possibly NULL pointer passed to trusted R1") int BPF_PROG(task_kfunc_acquire_release_no_null_check, struct task_struct *task, u64 clone_flags) { struct task_struct *acquired; @@ -162,7 +162,7 @@ int BPF_PROG(task_kfunc_acquire_release_no_null_check, struct task_struct *task, } SEC("tp_btf/task_newtask") -__failure __msg("Possibly NULL pointer passed to trusted arg0") +__failure __msg("Possibly NULL pointer passed to trusted R1") int BPF_PROG(task_kfunc_release_untrusted, struct task_struct *task, u64 clone_flags) { struct __tasks_kfunc_map_value *v; @@ -178,7 +178,7 @@ int BPF_PROG(task_kfunc_release_untrusted, struct task_struct *task, u64 
clone_f } SEC("tp_btf/task_newtask") -__failure __msg("arg#0 pointer type STRUCT task_struct must point") +__failure __msg("R1 pointer type STRUCT task_struct must point") int BPF_PROG(task_kfunc_release_fp, struct task_struct *task, u64 clone_flags) { struct task_struct *acquired = (struct task_struct *)&clone_flags; @@ -190,7 +190,7 @@ int BPF_PROG(task_kfunc_release_fp, struct task_struct *task, u64 clone_flags) } SEC("tp_btf/task_newtask") -__failure __msg("Possibly NULL pointer passed to trusted arg0") +__failure __msg("Possibly NULL pointer passed to trusted R1") int BPF_PROG(task_kfunc_release_null, struct task_struct *task, u64 clone_flags) { struct __tasks_kfunc_map_value local, *v; @@ -234,7 +234,7 @@ int BPF_PROG(task_kfunc_release_unacquired, struct task_struct *task, u64 clone_ } SEC("tp_btf/task_newtask") -__failure __msg("Possibly NULL pointer passed to trusted arg0") +__failure __msg("Possibly NULL pointer passed to trusted R1") int BPF_PROG(task_kfunc_from_pid_no_null_check, struct task_struct *task, u64 clone_flags) { struct task_struct *acquired; @@ -248,7 +248,7 @@ int BPF_PROG(task_kfunc_from_pid_no_null_check, struct task_struct *task, u64 cl } SEC("tp_btf/task_newtask") -__failure __msg("Possibly NULL pointer passed to trusted arg0") +__failure __msg("Possibly NULL pointer passed to trusted R1") int BPF_PROG(task_kfunc_from_vpid_no_null_check, struct task_struct *task, u64 clone_flags) { struct task_struct *acquired; diff --git a/tools/testing/selftests/bpf/progs/task_work_fail.c b/tools/testing/selftests/bpf/progs/task_work_fail.c index 82e4b8913333..3186e7b4b24e 100644 --- a/tools/testing/selftests/bpf/progs/task_work_fail.c +++ b/tools/testing/selftests/bpf/progs/task_work_fail.c @@ -58,7 +58,7 @@ int mismatch_map(struct pt_regs *args) } SEC("perf_event") -__failure __msg("arg#1 doesn't point to a map value") +__failure __msg("R2 doesn't point to a map value") int no_map_task_work(struct pt_regs *args) { struct task_struct *task; @@ -70,7 
+70,7 @@ int no_map_task_work(struct pt_regs *args) } SEC("perf_event") -__failure __msg("Possibly NULL pointer passed to trusted arg1") +__failure __msg("Possibly NULL pointer passed to trusted R2") int task_work_null(struct pt_regs *args) { struct task_struct *task; @@ -81,7 +81,7 @@ int task_work_null(struct pt_regs *args) } SEC("perf_event") -__failure __msg("Possibly NULL pointer passed to trusted arg2") +__failure __msg("Possibly NULL pointer passed to trusted R3") int map_null(struct pt_regs *args) { struct elem *work; diff --git a/tools/testing/selftests/bpf/progs/test_bpf_nf_fail.c b/tools/testing/selftests/bpf/progs/test_bpf_nf_fail.c index 2c156cd166af..332cda89caba 100644 --- a/tools/testing/selftests/bpf/progs/test_bpf_nf_fail.c +++ b/tools/testing/selftests/bpf/progs/test_bpf_nf_fail.c @@ -152,7 +152,7 @@ int change_status_after_alloc(struct __sk_buff *ctx) } SEC("?tc") -__failure __msg("Possibly NULL pointer passed to trusted arg1") +__failure __msg("Possibly NULL pointer passed to trusted R2") int lookup_null_bpf_tuple(struct __sk_buff *ctx) { struct bpf_ct_opts___local opts = {}; @@ -165,7 +165,7 @@ int lookup_null_bpf_tuple(struct __sk_buff *ctx) } SEC("?tc") -__failure __msg("Possibly NULL pointer passed to trusted arg3") +__failure __msg("Possibly NULL pointer passed to trusted R4") int lookup_null_bpf_opts(struct __sk_buff *ctx) { struct bpf_sock_tuple tup = {}; @@ -178,7 +178,7 @@ int lookup_null_bpf_opts(struct __sk_buff *ctx) } SEC("?xdp") -__failure __msg("Possibly NULL pointer passed to trusted arg1") +__failure __msg("Possibly NULL pointer passed to trusted R2") int xdp_lookup_null_bpf_tuple(struct xdp_md *ctx) { struct bpf_ct_opts___local opts = {}; @@ -191,7 +191,7 @@ int xdp_lookup_null_bpf_tuple(struct xdp_md *ctx) } SEC("?xdp") -__failure __msg("Possibly NULL pointer passed to trusted arg3") +__failure __msg("Possibly NULL pointer passed to trusted R4") int xdp_lookup_null_bpf_opts(struct xdp_md *ctx) { struct bpf_sock_tuple tup = 
{}; diff --git a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c index d249113ed657..41da6e619940 100644 --- a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c +++ b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c @@ -45,7 +45,7 @@ int BPF_PROG(not_valid_dynptr, int cmd, union bpf_attr *attr, unsigned int size, } SEC("?lsm.s/bpf") -__failure __msg("arg#0 expected pointer to stack or const struct bpf_dynptr") +__failure __msg("R1 expected pointer to stack or const struct bpf_dynptr") int BPF_PROG(not_ptr_to_stack, int cmd, union bpf_attr *attr, unsigned int size, bool kernel) { static struct bpf_dynptr val; diff --git a/tools/testing/selftests/bpf/progs/test_kfunc_param_nullable.c b/tools/testing/selftests/bpf/progs/test_kfunc_param_nullable.c index 967081bbcfe1..ca35b92ea095 100644 --- a/tools/testing/selftests/bpf/progs/test_kfunc_param_nullable.c +++ b/tools/testing/selftests/bpf/progs/test_kfunc_param_nullable.c @@ -29,7 +29,7 @@ int kfunc_dynptr_nullable_test2(struct __sk_buff *skb) } SEC("tc") -__failure __msg("Possibly NULL pointer passed to trusted arg0") +__failure __msg("Possibly NULL pointer passed to trusted R1") int kfunc_dynptr_nullable_test3(struct __sk_buff *skb) { struct bpf_dynptr data; diff --git a/tools/testing/selftests/bpf/progs/verifier_bits_iter.c b/tools/testing/selftests/bpf/progs/verifier_bits_iter.c index 8bcddadfc4da..dd97f2027505 100644 --- a/tools/testing/selftests/bpf/progs/verifier_bits_iter.c +++ b/tools/testing/selftests/bpf/progs/verifier_bits_iter.c @@ -32,7 +32,7 @@ int BPF_PROG(no_destroy, struct bpf_iter_meta *meta, struct cgroup *cgrp) SEC("iter/cgroup") __description("uninitialized iter in ->next()") -__failure __msg("expected an initialized iter_bits as arg #0") +__failure __msg("expected an initialized iter_bits as R1") int BPF_PROG(next_uninit, struct bpf_iter_meta *meta, struct cgroup *cgrp) { struct bpf_iter_bits it = {}; @@ 
-43,7 +43,7 @@ int BPF_PROG(next_uninit, struct bpf_iter_meta *meta, struct cgroup *cgrp) SEC("iter/cgroup") __description("uninitialized iter in ->destroy()") -__failure __msg("expected an initialized iter_bits as arg #0") +__failure __msg("expected an initialized iter_bits as R1") int BPF_PROG(destroy_uninit, struct bpf_iter_meta *meta, struct cgroup *cgrp) { struct bpf_iter_bits it = {}; diff --git a/tools/testing/selftests/bpf/progs/verifier_ref_tracking.c b/tools/testing/selftests/bpf/progs/verifier_ref_tracking.c index 910365201f68..139f70bb3595 100644 --- a/tools/testing/selftests/bpf/progs/verifier_ref_tracking.c +++ b/tools/testing/selftests/bpf/progs/verifier_ref_tracking.c @@ -263,7 +263,7 @@ l0_%=: r0 = 0; \ SEC("lsm.s/bpf") __description("reference tracking: release user key reference without check") -__failure __msg("Possibly NULL pointer passed to trusted arg0") +__failure __msg("Possibly NULL pointer passed to trusted R1") __naked void user_key_reference_without_check(void) { asm volatile (" \ @@ -282,7 +282,7 @@ __naked void user_key_reference_without_check(void) SEC("lsm.s/bpf") __description("reference tracking: release system key reference without check") -__failure __msg("Possibly NULL pointer passed to trusted arg0") +__failure __msg("Possibly NULL pointer passed to trusted R1") __naked void system_key_reference_without_check(void) { asm volatile (" \ @@ -300,7 +300,7 @@ __naked void system_key_reference_without_check(void) SEC("lsm.s/bpf") __description("reference tracking: release with NULL key pointer") -__failure __msg("Possibly NULL pointer passed to trusted arg0") +__failure __msg("Possibly NULL pointer passed to trusted R1") __naked void release_with_null_key_pointer(void) { asm volatile (" \ diff --git a/tools/testing/selftests/bpf/progs/verifier_vfs_reject.c b/tools/testing/selftests/bpf/progs/verifier_vfs_reject.c index 4b392c6c8fc4..0990de076844 100644 --- a/tools/testing/selftests/bpf/progs/verifier_vfs_reject.c +++ 
b/tools/testing/selftests/bpf/progs/verifier_vfs_reject.c @@ -13,7 +13,7 @@ static char buf[PATH_MAX]; SEC("lsm.s/file_open") -__failure __msg("Possibly NULL pointer passed to trusted arg0") +__failure __msg("Possibly NULL pointer passed to trusted R1") int BPF_PROG(get_task_exe_file_kfunc_null) { struct file *acquired; @@ -28,7 +28,7 @@ int BPF_PROG(get_task_exe_file_kfunc_null) } SEC("lsm.s/inode_getxattr") -__failure __msg("arg#0 pointer type STRUCT task_struct must point to scalar, or struct with scalar") +__failure __msg("R1 pointer type STRUCT task_struct must point to scalar, or struct with scalar") int BPF_PROG(get_task_exe_file_kfunc_fp) { u64 x; @@ -89,7 +89,7 @@ int BPF_PROG(put_file_kfunc_unacquired, struct file *file) } SEC("lsm.s/file_open") -__failure __msg("Possibly NULL pointer passed to trusted arg0") +__failure __msg("Possibly NULL pointer passed to trusted R1") int BPF_PROG(path_d_path_kfunc_null) { /* Can't pass NULL value to bpf_path_d_path() kfunc. */ @@ -128,7 +128,7 @@ int BPF_PROG(path_d_path_kfunc_untrusted_from_current) } SEC("lsm.s/file_open") -__failure __msg("kernel function bpf_path_d_path args#0 expected pointer to STRUCT path but R1 has a pointer to STRUCT file") +__failure __msg("kernel function bpf_path_d_path R1 expected pointer to STRUCT path but R1 has a pointer to STRUCT file") int BPF_PROG(path_d_path_kfunc_type_mismatch, struct file *file) { bpf_path_d_path((struct path *)&file->f_task_work, buf, sizeof(buf)); diff --git a/tools/testing/selftests/bpf/progs/wq_failures.c b/tools/testing/selftests/bpf/progs/wq_failures.c index 3767f5595bbc..32dc8827e128 100644 --- a/tools/testing/selftests/bpf/progs/wq_failures.c +++ b/tools/testing/selftests/bpf/progs/wq_failures.c @@ -98,7 +98,7 @@ __failure * is a correct bpf_wq pointer. 
*/ __msg(": (85) call bpf_wq_set_callback#") /* anchor message */ -__msg("arg#0 doesn't point to a map value") +__msg("R1 doesn't point to a map value") long test_wrong_wq_pointer(void *ctx) { int key = 0; diff --git a/tools/testing/selftests/bpf/verifier/calls.c b/tools/testing/selftests/bpf/verifier/calls.c index c3164b9b2be5..0bb4337552c8 100644 --- a/tools/testing/selftests/bpf/verifier/calls.c +++ b/tools/testing/selftests/bpf/verifier/calls.c @@ -31,7 +31,7 @@ }, .prog_type = BPF_PROG_TYPE_SCHED_CLS, .result = REJECT, - .errstr = "arg#0 pointer type STRUCT prog_test_fail1 must point to scalar", + .errstr = "R1 pointer type STRUCT prog_test_fail1 must point to scalar", .fixup_kfunc_btf_id = { { "bpf_kfunc_call_test_fail1", 2 }, }, @@ -46,7 +46,7 @@ }, .prog_type = BPF_PROG_TYPE_SCHED_CLS, .result = REJECT, - .errstr = "max struct nesting depth exceeded\narg#0 pointer type STRUCT prog_test_fail2", + .errstr = "max struct nesting depth exceeded\nR1 pointer type STRUCT prog_test_fail2", .fixup_kfunc_btf_id = { { "bpf_kfunc_call_test_fail2", 2 }, }, @@ -61,7 +61,7 @@ }, .prog_type = BPF_PROG_TYPE_SCHED_CLS, .result = REJECT, - .errstr = "arg#0 pointer type STRUCT prog_test_fail3 must point to scalar", + .errstr = "R1 pointer type STRUCT prog_test_fail3 must point to scalar", .fixup_kfunc_btf_id = { { "bpf_kfunc_call_test_fail3", 2 }, }, @@ -76,7 +76,7 @@ }, .prog_type = BPF_PROG_TYPE_SCHED_CLS, .result = REJECT, - .errstr = "arg#0 expected pointer to ctx, but got fp", + .errstr = "R1 expected pointer to ctx, but got fp", .fixup_kfunc_btf_id = { { "bpf_kfunc_call_test_pass_ctx", 2 }, }, @@ -91,7 +91,7 @@ }, .prog_type = BPF_PROG_TYPE_SCHED_CLS, .result = REJECT, - .errstr = "arg#0 pointer type UNKNOWN must point to scalar", + .errstr = "R1 pointer type UNKNOWN must point to scalar", .fixup_kfunc_btf_id = { { "bpf_kfunc_call_test_mem_len_fail1", 2 }, }, @@ -109,7 +109,7 @@ }, .prog_type = BPF_PROG_TYPE_SCHED_CLS, .result = REJECT, - .errstr = "Possibly NULL pointer 
passed to trusted arg0", + .errstr = "Possibly NULL pointer passed to trusted R1", .fixup_kfunc_btf_id = { { "bpf_kfunc_call_test_acquire", 3 }, { "bpf_kfunc_call_test_release", 5 }, @@ -152,7 +152,7 @@ }, .prog_type = BPF_PROG_TYPE_SCHED_CLS, .result = REJECT, - .errstr = "kernel function bpf_kfunc_call_memb1_release args#0 expected pointer", + .errstr = "kernel function bpf_kfunc_call_memb1_release R1 expected pointer", .fixup_kfunc_btf_id = { { "bpf_kfunc_call_memb_acquire", 1 }, { "bpf_kfunc_call_memb1_release", 5 }, -- 2.52.0 ^ permalink raw reply related [flat|nested] 45+ messages in thread
* [PATCH bpf-next v5 05/16] bpf: Introduce bpf register BPF_REG_PARAMS 2026-04-17 3:46 [PATCH bpf-next v5 00/16] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song ` (3 preceding siblings ...) 2026-04-17 3:47 ` [PATCH bpf-next v5 04/16] bpf: Prepare verifier logs for upcoming kfunc stack arguments Yonghong Song @ 2026-04-17 3:47 ` Yonghong Song 2026-04-17 3:47 ` [PATCH bpf-next v5 06/16] bpf: Limit the scope of BPF_REG_PARAMS usage Yonghong Song ` (10 subsequent siblings) 15 siblings, 0 replies; 45+ messages in thread From: Yonghong Song @ 2026-04-17 3:47 UTC (permalink / raw) To: bpf Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Jose E . Marchesi, kernel-team, Martin KaFai Lau The newly-added register BPF_REG_PARAMS corresponds to BPF register R11 in LLVM. R11 is used as the base for stack arguments, so it does not interfere with the R10-based stack. The kernel-internal register BPF_REG_AX was R11 previously. With this change, BPF_REG_AX will be R12. Signed-off-by: Yonghong Song <yonghong.song@linux.dev> --- include/linux/filter.h | 5 +- kernel/bpf/core.c | 4 +- .../selftests/bpf/prog_tests/ctx_rewrite.c | 14 ++-- .../bpf/progs/verifier_bpf_fastcall.c | 24 +++---- .../selftests/bpf/progs/verifier_may_goto_1.c | 12 ++-- .../selftests/bpf/progs/verifier_sdiv.c | 64 +++++++++---------- 6 files changed, 62 insertions(+), 61 deletions(-) diff --git a/include/linux/filter.h b/include/linux/filter.h index f552170eacf4..ae094328d973 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -58,8 +58,9 @@ struct ctl_table_header; #define BPF_REG_H BPF_REG_9 /* hlen, callee-saved */ /* Kernel hidden auxiliary/helper register.
*/ -#define BPF_REG_AX MAX_BPF_REG -#define MAX_BPF_EXT_REG (MAX_BPF_REG + 1) +#define BPF_REG_PARAMS MAX_BPF_REG +#define BPF_REG_AX (MAX_BPF_REG + 1) +#define MAX_BPF_EXT_REG (MAX_BPF_REG + 2) #define MAX_BPF_JIT_REG MAX_BPF_EXT_REG /* unused opcode to mark special call to bpf_tail_call() helper */ diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 066b86e7233c..e7e97ffa2a8b 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -1299,8 +1299,8 @@ static int bpf_jit_blind_insn(const struct bpf_insn *from, u32 imm_rnd = get_random_u32(); s16 off; - BUILD_BUG_ON(BPF_REG_AX + 1 != MAX_BPF_JIT_REG); - BUILD_BUG_ON(MAX_BPF_REG + 1 != MAX_BPF_JIT_REG); + BUILD_BUG_ON(BPF_REG_PARAMS + 2 != MAX_BPF_JIT_REG); + BUILD_BUG_ON(BPF_REG_AX + 1 != MAX_BPF_JIT_REG); /* Constraints on AX register: * diff --git a/tools/testing/selftests/bpf/prog_tests/ctx_rewrite.c b/tools/testing/selftests/bpf/prog_tests/ctx_rewrite.c index 469e92869523..83d870e32239 100644 --- a/tools/testing/selftests/bpf/prog_tests/ctx_rewrite.c +++ b/tools/testing/selftests/bpf/prog_tests/ctx_rewrite.c @@ -69,19 +69,19 @@ static struct test_case test_cases[] = { #if defined(__x86_64__) || defined(__aarch64__) { N(SCHED_CLS, struct __sk_buff, tstamp), - .read = "r11 = *(u8 *)($ctx + sk_buff::__mono_tc_offset);" - "if w11 & 0x4 goto pc+1;" + .read = "r12 = *(u8 *)($ctx + sk_buff::__mono_tc_offset);" + "if w12 & 0x4 goto pc+1;" "goto pc+4;" - "if w11 & 0x3 goto pc+1;" + "if w12 & 0x3 goto pc+1;" "goto pc+2;" "$dst = 0;" "goto pc+1;" "$dst = *(u64 *)($ctx + sk_buff::tstamp);", - .write = "r11 = *(u8 *)($ctx + sk_buff::__mono_tc_offset);" - "if w11 & 0x4 goto pc+1;" + .write = "r12 = *(u8 *)($ctx + sk_buff::__mono_tc_offset);" + "if w12 & 0x4 goto pc+1;" "goto pc+2;" - "w11 &= -4;" - "*(u8 *)($ctx + sk_buff::__mono_tc_offset) = r11;" + "w12 &= -4;" + "*(u8 *)($ctx + sk_buff::__mono_tc_offset) = r12;" "*(u64 *)($ctx + sk_buff::tstamp) = $src;", }, #endif diff --git 
a/tools/testing/selftests/bpf/progs/verifier_bpf_fastcall.c b/tools/testing/selftests/bpf/progs/verifier_bpf_fastcall.c index fb4fa465d67c..0d9e167555b5 100644 --- a/tools/testing/selftests/bpf/progs/verifier_bpf_fastcall.c +++ b/tools/testing/selftests/bpf/progs/verifier_bpf_fastcall.c @@ -630,13 +630,13 @@ __xlated("...") __xlated("4: r0 = &(void __percpu *)(r0)") __xlated("...") /* may_goto expansion starts */ -__xlated("6: r11 = *(u64 *)(r10 -24)") -__xlated("7: if r11 == 0x0 goto pc+6") -__xlated("8: r11 -= 1") -__xlated("9: if r11 != 0x0 goto pc+2") -__xlated("10: r11 = -24") +__xlated("6: r12 = *(u64 *)(r10 -24)") +__xlated("7: if r12 == 0x0 goto pc+6") +__xlated("8: r12 -= 1") +__xlated("9: if r12 != 0x0 goto pc+2") +__xlated("10: r12 = -24") __xlated("11: call unknown") -__xlated("12: *(u64 *)(r10 -24) = r11") +__xlated("12: *(u64 *)(r10 -24) = r12") /* may_goto expansion ends */ __xlated("13: *(u64 *)(r10 -8) = r1") __xlated("14: exit") @@ -668,13 +668,13 @@ __xlated("1: *(u64 *)(r10 -16) =") __xlated("2: r1 = 1") __xlated("3: call bpf_get_smp_processor_id") /* may_goto expansion starts */ -__xlated("4: r11 = *(u64 *)(r10 -24)") -__xlated("5: if r11 == 0x0 goto pc+6") -__xlated("6: r11 -= 1") -__xlated("7: if r11 != 0x0 goto pc+2") -__xlated("8: r11 = -24") +__xlated("4: r12 = *(u64 *)(r10 -24)") +__xlated("5: if r12 == 0x0 goto pc+6") +__xlated("6: r12 -= 1") +__xlated("7: if r12 != 0x0 goto pc+2") +__xlated("8: r12 = -24") __xlated("9: call unknown") -__xlated("10: *(u64 *)(r10 -24) = r11") +__xlated("10: *(u64 *)(r10 -24) = r12") /* may_goto expansion ends */ __xlated("11: *(u64 *)(r10 -8) = r1") __xlated("12: exit") diff --git a/tools/testing/selftests/bpf/progs/verifier_may_goto_1.c b/tools/testing/selftests/bpf/progs/verifier_may_goto_1.c index 6d1edaef9213..4bdf4256a41e 100644 --- a/tools/testing/selftests/bpf/progs/verifier_may_goto_1.c +++ b/tools/testing/selftests/bpf/progs/verifier_may_goto_1.c @@ -81,13 +81,13 @@ __arch_s390x __arch_arm64 
__xlated("0: *(u64 *)(r10 -16) = 65535") __xlated("1: *(u64 *)(r10 -8) = 0") -__xlated("2: r11 = *(u64 *)(r10 -16)") -__xlated("3: if r11 == 0x0 goto pc+6") -__xlated("4: r11 -= 1") -__xlated("5: if r11 != 0x0 goto pc+2") -__xlated("6: r11 = -16") +__xlated("2: r12 = *(u64 *)(r10 -16)") +__xlated("3: if r12 == 0x0 goto pc+6") +__xlated("4: r12 -= 1") +__xlated("5: if r12 != 0x0 goto pc+2") +__xlated("6: r12 = -16") __xlated("7: call unknown") -__xlated("8: *(u64 *)(r10 -16) = r11") +__xlated("8: *(u64 *)(r10 -16) = r12") __xlated("9: r0 = 1") __xlated("10: r0 = 2") __xlated("11: exit") diff --git a/tools/testing/selftests/bpf/progs/verifier_sdiv.c b/tools/testing/selftests/bpf/progs/verifier_sdiv.c index fd59d57e8e37..95f3239ce228 100644 --- a/tools/testing/selftests/bpf/progs/verifier_sdiv.c +++ b/tools/testing/selftests/bpf/progs/verifier_sdiv.c @@ -778,10 +778,10 @@ __arch_x86_64 __xlated("0: r2 = 0x8000000000000000") __xlated("2: r3 = -1") __xlated("3: r4 = r2") -__xlated("4: r11 = r3") -__xlated("5: r11 += 1") -__xlated("6: if r11 > 0x1 goto pc+4") -__xlated("7: if r11 == 0x0 goto pc+1") +__xlated("4: r12 = r3") +__xlated("5: r12 += 1") +__xlated("6: if r12 > 0x1 goto pc+4") +__xlated("7: if r12 == 0x0 goto pc+1") __xlated("8: r2 = 0") __xlated("9: r2 = -r2") __xlated("10: goto pc+1") @@ -812,10 +812,10 @@ __success __retval(-5) __arch_x86_64 __xlated("0: r2 = 5") __xlated("1: r3 = -1") -__xlated("2: r11 = r3") -__xlated("3: r11 += 1") -__xlated("4: if r11 > 0x1 goto pc+4") -__xlated("5: if r11 == 0x0 goto pc+1") +__xlated("2: r12 = r3") +__xlated("3: r12 += 1") +__xlated("4: if r12 > 0x1 goto pc+4") +__xlated("5: if r12 == 0x0 goto pc+1") __xlated("6: r2 = 0") __xlated("7: r2 = -r2") __xlated("8: goto pc+1") @@ -890,10 +890,10 @@ __arch_x86_64 __xlated("0: w2 = -2147483648") __xlated("1: w3 = -1") __xlated("2: w4 = w2") -__xlated("3: r11 = r3") -__xlated("4: w11 += 1") -__xlated("5: if w11 > 0x1 goto pc+4") -__xlated("6: if w11 == 0x0 goto pc+1") 
+__xlated("3: r12 = r3") +__xlated("4: w12 += 1") +__xlated("5: if w12 > 0x1 goto pc+4") +__xlated("6: if w12 == 0x0 goto pc+1") __xlated("7: w2 = 0") __xlated("8: w2 = -w2") __xlated("9: goto pc+1") @@ -925,10 +925,10 @@ __arch_x86_64 __xlated("0: w2 = -5") __xlated("1: w3 = -1") __xlated("2: w4 = w2") -__xlated("3: r11 = r3") -__xlated("4: w11 += 1") -__xlated("5: if w11 > 0x1 goto pc+4") -__xlated("6: if w11 == 0x0 goto pc+1") +__xlated("3: r12 = r3") +__xlated("4: w12 += 1") +__xlated("5: if w12 > 0x1 goto pc+4") +__xlated("6: if w12 == 0x0 goto pc+1") __xlated("7: w2 = 0") __xlated("8: w2 = -w2") __xlated("9: goto pc+1") @@ -1004,10 +1004,10 @@ __arch_x86_64 __xlated("0: r2 = 0x8000000000000000") __xlated("2: r3 = -1") __xlated("3: r4 = r2") -__xlated("4: r11 = r3") -__xlated("5: r11 += 1") -__xlated("6: if r11 > 0x1 goto pc+3") -__xlated("7: if r11 == 0x1 goto pc+3") +__xlated("4: r12 = r3") +__xlated("5: r12 += 1") +__xlated("6: if r12 > 0x1 goto pc+3") +__xlated("7: if r12 == 0x1 goto pc+3") __xlated("8: w2 = 0") __xlated("9: goto pc+1") __xlated("10: r2 s%= r3") @@ -1034,10 +1034,10 @@ __arch_x86_64 __xlated("0: r2 = 5") __xlated("1: r3 = -1") __xlated("2: r4 = r2") -__xlated("3: r11 = r3") -__xlated("4: r11 += 1") -__xlated("5: if r11 > 0x1 goto pc+3") -__xlated("6: if r11 == 0x1 goto pc+3") +__xlated("3: r12 = r3") +__xlated("4: r12 += 1") +__xlated("5: if r12 > 0x1 goto pc+3") +__xlated("6: if r12 == 0x1 goto pc+3") __xlated("7: w2 = 0") __xlated("8: goto pc+1") __xlated("9: r2 s%= r3") @@ -1108,10 +1108,10 @@ __arch_x86_64 __xlated("0: w2 = -2147483648") __xlated("1: w3 = -1") __xlated("2: w4 = w2") -__xlated("3: r11 = r3") -__xlated("4: w11 += 1") -__xlated("5: if w11 > 0x1 goto pc+3") -__xlated("6: if w11 == 0x1 goto pc+4") +__xlated("3: r12 = r3") +__xlated("4: w12 += 1") +__xlated("5: if w12 > 0x1 goto pc+3") +__xlated("6: if w12 == 0x1 goto pc+4") __xlated("7: w2 = 0") __xlated("8: goto pc+1") __xlated("9: w2 s%= w3") @@ -1140,10 +1140,10 @@ 
__arch_x86_64 __xlated("0: w2 = -5") __xlated("1: w3 = -1") __xlated("2: w4 = w2") -__xlated("3: r11 = r3") -__xlated("4: w11 += 1") -__xlated("5: if w11 > 0x1 goto pc+3") -__xlated("6: if w11 == 0x1 goto pc+4") +__xlated("3: r12 = r3") +__xlated("4: w12 += 1") +__xlated("5: if w12 > 0x1 goto pc+3") +__xlated("6: if w12 == 0x1 goto pc+4") __xlated("7: w2 = 0") __xlated("8: goto pc+1") __xlated("9: w2 s%= w3") -- 2.52.0 ^ permalink raw reply related [flat|nested] 45+ messages in thread
* [PATCH bpf-next v5 06/16] bpf: Limit the scope of BPF_REG_PARAMS usage 2026-04-17 3:46 [PATCH bpf-next v5 00/16] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song ` (4 preceding siblings ...) 2026-04-17 3:47 ` [PATCH bpf-next v5 05/16] bpf: Introduce bpf register BPF_REG_PARAMS Yonghong Song @ 2026-04-17 3:47 ` Yonghong Song 2026-04-17 4:30 ` bot+bpf-ci ` (2 more replies) 2026-04-17 3:47 ` [PATCH bpf-next v5 07/16] bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments Yonghong Song ` (9 subsequent siblings) 15 siblings, 3 replies; 45+ messages in thread From: Yonghong Song @ 2026-04-17 3:47 UTC (permalink / raw) To: bpf Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Jose E . Marchesi, kernel-team, Martin KaFai Lau BPF_REG_PARAMS (r11) is used for stack argument accesses. The LLVM compiler [1] emits BPF_REG_PARAMS only in the following instruction forms: - BPF_LDX | BPF_MEM (load incoming stack arg) - BPF_ST | BPF_MEM (store immediate to outgoing stack arg) - BPF_STX | BPF_MEM (store register to outgoing stack arg) Reject any other use of BPF_REG_PARAMS in check_and_resolve_insns() to prevent misuse. Since BPF_REG_PARAMS is beyond MAX_BPF_REG, array-based register tracking indexed by register number would cause out-of-bounds accesses. So return early where needed.
[1] https://github.com/llvm/llvm-project/pull/189060 Signed-off-by: Yonghong Song <yonghong.song@linux.dev> --- kernel/bpf/const_fold.c | 9 +++++++-- kernel/bpf/liveness.c | 9 +++++++-- kernel/bpf/verifier.c | 17 +++++++++++++---- 3 files changed, 27 insertions(+), 8 deletions(-) diff --git a/kernel/bpf/const_fold.c b/kernel/bpf/const_fold.c index db73c4740b1e..09db7fdb370f 100644 --- a/kernel/bpf/const_fold.c +++ b/kernel/bpf/const_fold.c @@ -51,13 +51,18 @@ static void const_reg_xfer(struct bpf_verifier_env *env, struct const_arg_info * struct bpf_insn *insn, struct bpf_insn *insns, int idx) { struct const_arg_info unknown = { .state = CONST_ARG_UNKNOWN, .val = 0 }; - struct const_arg_info *dst = &ci_out[insn->dst_reg]; - struct const_arg_info *src = &ci_out[insn->src_reg]; + struct const_arg_info *dst, *src; u8 class = BPF_CLASS(insn->code); u8 mode = BPF_MODE(insn->code); u8 opcode = BPF_OP(insn->code) | BPF_SRC(insn->code); int r; + /* Stack arguments using BPF_REG_PARAMS are outside the tracked register set. */ + if (insn->dst_reg >= MAX_BPF_REG || insn->src_reg >= MAX_BPF_REG) + return; + + dst = &ci_out[insn->dst_reg]; + src = &ci_out[insn->src_reg]; switch (class) { case BPF_ALU: case BPF_ALU64: diff --git a/kernel/bpf/liveness.c b/kernel/bpf/liveness.c index 1fb4c511db5a..993d7e543e9f 100644 --- a/kernel/bpf/liveness.c +++ b/kernel/bpf/liveness.c @@ -1056,11 +1056,16 @@ static void arg_track_xfer(struct bpf_verifier_env *env, struct bpf_insn *insn, int depth = instance->depth; u8 class = BPF_CLASS(insn->code); u8 code = BPF_OP(insn->code); - struct arg_track *dst = &at_out[insn->dst_reg]; - struct arg_track *src = &at_out[insn->src_reg]; + struct arg_track *dst, *src; struct arg_track none = { .frame = ARG_NONE }; int r; + /* Stack arguments using BPF_REG_PARAMS are outside the tracked register set. 
*/ + if (insn->dst_reg >= MAX_BPF_REG || insn->src_reg >= MAX_BPF_REG) + return; + + dst = &at_out[insn->dst_reg]; + src = &at_out[insn->src_reg]; if (class == BPF_ALU64 && BPF_SRC(insn->code) == BPF_K) { if (code == BPF_MOV) { *dst = none; diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index ff0c55d80311..f25a56cfabac 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -18487,13 +18487,22 @@ static int check_and_resolve_insns(struct bpf_verifier_env *env) return err; for (i = 0; i < insn_cnt; i++, insn++) { + u8 class = BPF_CLASS(insn->code); + u8 mode = BPF_MODE(insn->code); + if (insn->dst_reg >= MAX_BPF_REG) { - verbose(env, "R%d is invalid\n", insn->dst_reg); - return -EINVAL; + if (insn->dst_reg != BPF_REG_PARAMS || + !((class == BPF_ST || class == BPF_STX) && mode == BPF_MEM)) { + verbose(env, "R%d is invalid\n", insn->dst_reg); + return -EINVAL; + } } if (insn->src_reg >= MAX_BPF_REG) { - verbose(env, "R%d is invalid\n", insn->src_reg); - return -EINVAL; + if (insn->src_reg != BPF_REG_PARAMS || class != BPF_LDX || + mode != BPF_MEM) { + verbose(env, "R%d is invalid\n", insn->src_reg); + return -EINVAL; + } } if (insn[0].code == (BPF_LD | BPF_IMM | BPF_DW)) { struct bpf_insn_aux_data *aux; -- 2.52.0 ^ permalink raw reply related [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 06/16] bpf: Limit the scope of BPF_REG_PARAMS usage 2026-04-17 3:47 ` [PATCH bpf-next v5 06/16] bpf: Limit the scope of BPF_REG_PARAMS usage Yonghong Song @ 2026-04-17 4:30 ` bot+bpf-ci 2026-04-17 4:50 ` sashiko-bot 2026-04-18 1:04 ` bot+bpf-ci 2 siblings, 0 replies; 45+ messages in thread From: bot+bpf-ci @ 2026-04-17 4:30 UTC (permalink / raw) To: yonghong.song, bpf Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai [-- Attachment #1: Type: text/plain, Size: 3092 bytes --] > diff --git a/kernel/bpf/const_fold.c b/kernel/bpf/const_fold.c > index db73c4740b1e..09db7fdb370f 100644 > --- a/kernel/bpf/const_fold.c > +++ b/kernel/bpf/const_fold.c [ ... ] > diff --git a/kernel/bpf/liveness.c b/kernel/bpf/liveness.c > index 1fb4c511db5a..993d7e543e9f 100644 > --- a/kernel/bpf/liveness.c > +++ b/kernel/bpf/liveness.c [ ... ] > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c > index ff0c55d80311..f25a56cfabac 100644 > --- a/kernel/bpf/verifier.c > +++ b/kernel/bpf/verifier.c > @@ -18487,13 +18487,22 @@ static int check_and_resolve_insns(struct bpf_verifier_env *env) > return err; > > for (i = 0; i < insn_cnt; i++, insn++) { > + u8 class = BPF_CLASS(insn->code); > + u8 mode = BPF_MODE(insn->code); > + > if (insn->dst_reg >= MAX_BPF_REG) { > - verbose(env, "R%d is invalid\n", insn->dst_reg); > - return -EINVAL; > + if (insn->dst_reg != BPF_REG_PARAMS || > + !((class == BPF_ST || class == BPF_STX) && mode == BPF_MEM)) { > + verbose(env, "R%d is invalid\n", insn->dst_reg); > + return -EINVAL; > + } > } > if (insn->src_reg >= MAX_BPF_REG) { > - verbose(env, "R%d is invalid\n", insn->src_reg); > - return -EINVAL; > + if (insn->src_reg != BPF_REG_PARAMS || class != BPF_LDX || > + mode != BPF_MEM) { > + verbose(env, "R%d is invalid\n", insn->src_reg); > + return -EINVAL; > + } > } > if (insn[0].code == (BPF_LD | BPF_IMM | BPF_DW)) { > struct 
bpf_insn_aux_data *aux; Can these instructions with BPF_REG_PARAMS trigger out-of-bounds accesses in subsequent verification functions? The regs array in struct bpf_func_state is declared as regs[MAX_BPF_REG] with MAX_BPF_REG=11, giving valid indices 0-10. When instructions with src_reg=BPF_REG_PARAMS (11) reach check_load_mem(): kernel/bpf/verifier.c:check_load_mem() { ... src_reg_type = regs[insn->src_reg].type; ... } This performs regs[11] which is one past the end of the array. Similarly, when instructions with dst_reg=BPF_REG_PARAMS reach __check_reg_arg(): kernel/bpf/verifier.c:__check_reg_arg() { ... reg = ®s[regno]; ... } The commit message states "array-based register tracking indexed by register number would cause out-of-bounds accesses. So do early return if needed." However, the patch adds early returns in const_fold.c and liveness.c but not in the main verification path functions like check_load_mem() or __check_reg_arg(). The fix appears to arrive in subsequent commit 212295359353 ("bpf: Support stack arguments for bpf functions"). Between this commit and that fix, could user-provided BPF programs with BPF_REG_PARAMS instructions trigger out-of-bounds memory access in the kernel verifier? This commit also lacks a Fixes: tag. The suggested tag would be: Fixes: 595885eced33 ("bpf: Introduce bpf register BPF_REG_PARAMS") --- AI reviewed your patch. Please fix the bug or email reply why it's not a bug. See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24546989054 ^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 06/16] bpf: Limit the scope of BPF_REG_PARAMS usage 2026-04-17 3:47 ` [PATCH bpf-next v5 06/16] bpf: Limit the scope of BPF_REG_PARAMS usage Yonghong Song 2026-04-17 4:30 ` bot+bpf-ci @ 2026-04-17 4:50 ` sashiko-bot 2026-04-18 1:04 ` bot+bpf-ci 2 siblings, 0 replies; 45+ messages in thread From: sashiko-bot @ 2026-04-17 4:50 UTC (permalink / raw) To: Yonghong Song; +Cc: bpf Sashiko AI review found 2 potential issue(s): - [Critical] Early returns in dataflow tracking passes skip necessary state updates for the destination register during BPF_LDX using BPF_REG_PARAMS. - [Critical] Permitting BPF_REG_PARAMS causes severe out-of-bounds memory accesses to the `regs` array during the main verifier loop (`do_check_main`). -- commit d68722297de575b838cc2f76141cf9c052e63448 Author: Yonghong Song <yonghong.song@linux.dev> bpf: Limit the scope of BPF_REG_PARAMS usage This commit restricts the usage of BPF_REG_PARAMS (r11) to specific memory load and store instruction forms. It also adds early returns in dataflow tracking passes to prevent out-of-bounds accesses for stack arguments. > diff --git a/kernel/bpf/const_fold.c b/kernel/bpf/const_fold.c > index db73c4740b1e1..09db7fdb370fd 100644 > --- a/kernel/bpf/const_fold.c > +++ b/kernel/bpf/const_fold.c [ ... ] > @@ -51,13 +51,18 @@ static void const_reg_xfer(struct bpf_verifier_env *env, struct const_arg_info * > struct bpf_insn *insn, struct bpf_insn *insns, int idx) > { > struct const_arg_info unknown = { .state = CONST_ARG_UNKNOWN, .val = 0 }; > - struct const_arg_info *dst = &ci_out[insn->dst_reg]; > - struct const_arg_info *src = &ci_out[insn->src_reg]; > + struct const_arg_info *dst, *src; > u8 class = BPF_CLASS(insn->code); > u8 mode = BPF_MODE(insn->code); > u8 opcode = BPF_OP(insn->code) | BPF_SRC(insn->code); > int r; > > + /* Stack arguments using BPF_REG_PARAMS are outside the tracked register set. 
*/ > + if (insn->dst_reg >= MAX_BPF_REG || insn->src_reg >= MAX_BPF_REG) > + return; If we return early here during a BPF_LDX instruction where src_reg is BPF_REG_PARAMS (11), doesn't this skip necessary state updates for dst_reg (e.g., registers 0 through 9)? Since dst_reg is not marked as unknown, won't it retain its stale tracking state? Could bpf_prune_dead_branches() rely on this stale constant to incorrectly rewrite conditional jumps, allowing verifier safety checks to be bypassed? > + > + dst = &ci_out[insn->dst_reg]; > + src = &ci_out[insn->src_reg]; [ ... ] > diff --git a/kernel/bpf/liveness.c b/kernel/bpf/liveness.c > index 332e6e003f270..0c7eef3385513 100644 > --- a/kernel/bpf/liveness.c > +++ b/kernel/bpf/liveness.c > @@ -1068,11 +1068,16 @@ static void arg_track_xfer(struct bpf_verifier_env *env, struct bpf_insn *insn, > int depth = instance->depth; > u8 class = BPF_CLASS(insn->code); > u8 code = BPF_OP(insn->code); > - struct arg_track *dst = &at_out[insn->dst_reg]; > - struct arg_track *src = &at_out[insn->src_reg]; > + struct arg_track *dst, *src; > struct arg_track none = { .frame = ARG_NONE }; > int r; > > + /* Stack arguments using BPF_REG_PARAMS are outside the tracked register set. */ > + if (insn->dst_reg >= MAX_BPF_REG || insn->src_reg >= MAX_BPF_REG) > + return; Similarly, does this early return skip clearing the arg_track state for the destination register when loading from a stack argument? > + > + dst = &at_out[insn->dst_reg]; > + src = &at_out[insn->src_reg]; [ ... 
] > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c > index ff0c55d803117..f25a56cfabaca 100644 > --- a/kernel/bpf/verifier.c > +++ b/kernel/bpf/verifier.c > @@ -18487,13 +18487,22 @@ static int check_and_resolve_insns(struct bpf_verifier_env *env) > return err; > > for (i = 0; i < insn_cnt; i++, insn++) { > + u8 class = BPF_CLASS(insn->code); > + u8 mode = BPF_MODE(insn->code); > + > if (insn->dst_reg >= MAX_BPF_REG) { > - verbose(env, "R%d is invalid\n", insn->dst_reg); > - return -EINVAL; > + if (insn->dst_reg != BPF_REG_PARAMS || > + !((class == BPF_ST || class == BPF_STX) && mode == BPF_MEM)) { > + verbose(env, "R%d is invalid\n", insn->dst_reg); > + return -EINVAL; > + } > } By permitting BPF_REG_PARAMS (which equals MAX_BPF_REG) to pass this validation check, won't this cause out-of-bounds memory accesses to the regs array (e.g., struct bpf_reg_state regs[MAX_BPF_REG]) during the main verifier loop? When do_check_main() processes these instructions, functions like check_reg_arg() will attempt to access regs[11], which overlays verifier internal state. I note this regression is fixed later in the patch series in commit 6ec4a3fcb5e83 ("bpf: Support stack arguments for bpf functions"), but it appears to introduce an issue in this intermediate patch. -- Sashiko AI review · https://sashiko.dev/#/patchset/20260417034658.2625353-1-yonghong.song@linux.dev?part=6 ^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 06/16] bpf: Limit the scope of BPF_REG_PARAMS usage 2026-04-17 3:47 ` [PATCH bpf-next v5 06/16] bpf: Limit the scope of BPF_REG_PARAMS usage Yonghong Song 2026-04-17 4:30 ` bot+bpf-ci 2026-04-17 4:50 ` sashiko-bot @ 2026-04-18 1:04 ` bot+bpf-ci 2 siblings, 0 replies; 45+ messages in thread From: bot+bpf-ci @ 2026-04-18 1:04 UTC (permalink / raw) To: yonghong.song, bpf Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai [-- Attachment #1: Type: text/plain, Size: 4436 bytes --] > diff --git a/kernel/bpf/const_fold.c b/kernel/bpf/const_fold.c > --- a/kernel/bpf/const_fold.c > +++ b/kernel/bpf/const_fold.c > @@ -51,13 +51,18 @@ static void const_reg_xfer(struct bpf_verifier_env *env, struct const_arg_info *ci_out, > struct bpf_insn *insn, struct bpf_insn *insns, int idx) > { > struct const_arg_info unknown = { .state = CONST_ARG_UNKNOWN, .val = 0 }; > - struct const_arg_info *dst = &ci_out[insn->dst_reg]; > - struct const_arg_info *src = &ci_out[insn->src_reg]; > + struct const_arg_info *dst, *src; > u8 class = BPF_CLASS(insn->code); > u8 mode = BPF_MODE(insn->code); > u8 opcode = BPF_OP(insn->code) | BPF_SRC(insn->code); > int r; > > + /* Stack arguments using BPF_REG_PARAMS are outside the tracked register set. */ > + if (insn->dst_reg >= MAX_BPF_REG || insn->src_reg >= MAX_BPF_REG) > + return; > + > + dst = &ci_out[insn->dst_reg]; > + src = &ci_out[insn->src_reg]; If we return early here during a BPF_LDX instruction where src_reg is BPF_REG_PARAMS (11), doesn't this skip necessary state updates for dst_reg (e.g., registers 0 through 9)? Since dst_reg is not marked as unknown, won't it retain its stale tracking state? Could bpf_prune_dead_branches() rely on this stale constant to incorrectly rewrite conditional jumps, allowing verifier safety checks to be bypassed? 
A similar issue exists in liveness.c where the early return skips clearing the arg_track state for the destination register when loading from a stack argument. > switch (class) { > case BPF_ALU: > case BPF_ALU64: [ ... ] > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c > --- a/kernel/bpf/verifier.c > +++ b/kernel/bpf/verifier.c > @@ -18487,13 +18487,22 @@ static int check_and_resolve_insns(struct bpf_verifier_env *env) > return err; > > for (i = 0; i < insn_cnt; i++, insn++) { > + u8 class = BPF_CLASS(insn->code); > + u8 mode = BPF_MODE(insn->code); > + > if (insn->dst_reg >= MAX_BPF_REG) { > - verbose(env, "R%d is invalid\n", insn->dst_reg); > - return -EINVAL; > + if (insn->dst_reg != BPF_REG_PARAMS || > + !((class == BPF_ST || class == BPF_STX) && mode == BPF_MEM)) { > + verbose(env, "R%d is invalid\n", insn->dst_reg); > + return -EINVAL; > + } > } > if (insn->src_reg >= MAX_BPF_REG) { > - verbose(env, "R%d is invalid\n", insn->src_reg); > - return -EINVAL; > + if (insn->src_reg != BPF_REG_PARAMS || class != BPF_LDX || > + mode != BPF_MEM) { > + verbose(env, "R%d is invalid\n", insn->src_reg); > + return -EINVAL; > + } > } Can these instructions with BPF_REG_PARAMS trigger out-of-bounds accesses in subsequent verification functions? The regs array in struct bpf_func_state is declared as regs[MAX_BPF_REG] with MAX_BPF_REG=11, giving valid indices 0-10. When instructions with src_reg=BPF_REG_PARAMS (11) reach check_load_mem(): kernel/bpf/verifier.c:check_load_mem() { ... src_reg_type = regs[insn->src_reg].type; ... } This performs regs[11] which is one past the end of the array. Similarly, when instructions with dst_reg=BPF_REG_PARAMS reach __check_reg_arg(): kernel/bpf/verifier.c:__check_reg_arg() { ... reg = ®s[regno]; ... } The commit message states "array-based register tracking indexed by register number would cause out-of-bounds accesses. So do early return if needed." 
However, the patch adds early returns in const_fold.c and liveness.c but not in the main verification path functions like check_load_mem() or __check_reg_arg(). The fix appears to arrive in subsequent commit 212295359353. Between this commit and that fix, could user-provided BPF programs with BPF_REG_PARAMS instructions trigger out-of-bounds memory access in the kernel verifier? > if (insn[0].code == (BPF_LD | BPF_IMM | BPF_DW)) { > struct bpf_insn_aux_data *aux; [ ... ] This commit fixes a bug but lacks a Fixes: tag. The commit prevents out-of-bounds array accesses when BPF_REG_PARAMS is used as an index in array-based register tracking. The bug was introduced when BPF_REG_PARAMS was added as MAX_BPF_REG without adding bounds checks. Suggested: Fixes: fa38458af04a ("bpf: Introduce bpf register BPF_REG_PARAMS") --- AI reviewed your patch. Please fix the bug or email reply why it's not a bug. See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24592562564 ^ permalink raw reply [flat|nested] 45+ messages in thread
* [PATCH bpf-next v5 07/16] bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments 2026-04-17 3:46 [PATCH bpf-next v5 00/16] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song ` (5 preceding siblings ...) 2026-04-17 3:47 ` [PATCH bpf-next v5 06/16] bpf: Limit the scope of BPF_REG_PARAMS usage Yonghong Song @ 2026-04-17 3:47 ` Yonghong Song 2026-04-17 4:30 ` bot+bpf-ci 2026-04-18 0:52 ` bot+bpf-ci 2026-04-17 3:47 ` [PATCH bpf-next v5 08/16] bpf: Support stack arguments for bpf functions Yonghong Song ` (8 subsequent siblings) 15 siblings, 2 replies; 45+ messages in thread From: Yonghong Song @ 2026-04-17 3:47 UTC (permalink / raw) To: bpf Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Jose E . Marchesi, kernel-team, Martin KaFai Lau Currently, MAX_BPF_FUNC_ARGS is used for tracepoint related progs where the number of parameters cannot exceed MAX_BPF_FUNC_ARGS. Here, MAX_BPF_FUNC_ARGS is reused to set a limit of the number of arguments for bpf functions and kfunc's. The current value for MAX_BPF_FUNC_ARGS is 12 which should be sufficient for majority of bpf functions and kfunc's. Signed-off-by: Yonghong Song <yonghong.song@linux.dev> --- include/linux/bpf.h | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 0136a108d083..b0f956be73d2 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1151,6 +1151,10 @@ struct bpf_prog_offload { /* The longest tracepoint has 12 args. * See include/trace/bpf_probe.h + * + * Also reuse this macro for maximum number of arguments a BPF function + * or a kfunc can have. Args 1-5 are passed in registers, args 6-12 via + * stack arg slots. */ #define MAX_BPF_FUNC_ARGS 12 -- 2.52.0 ^ permalink raw reply related [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 07/16] bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments 2026-04-17 3:47 ` [PATCH bpf-next v5 07/16] bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments Yonghong Song @ 2026-04-17 4:30 ` bot+bpf-ci 2026-04-18 0:52 ` bot+bpf-ci 1 sibling, 0 replies; 45+ messages in thread From: bot+bpf-ci @ 2026-04-17 4:30 UTC (permalink / raw) To: yonghong.song, bpf Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai [-- Attachment #1: Type: text/plain, Size: 1537 bytes --] > diff --git a/include/linux/bpf.h b/include/linux/bpf.h > --- a/include/linux/bpf.h > +++ b/include/linux/bpf.h > @@ -1151,6 +1151,10 @@ struct bpf_prog_offload { > > /* The longest tracepoint has 12 args. > * See include/trace/bpf_probe.h > + * > + * Also reuse this macro for maximum number of arguments a BPF function > + * or a kfunc can have. Args 1-5 are passed in registers, args 6-12 via > + * stack arg slots. Is this description of the calling convention accurate? The comment states "Args 1-5 are passed in registers, args 6-12 via stack arg slots," but doesn't argument 6 get passed in the R9 register rather than on the stack? Looking at arch/x86/net/bpf_jit_comp.c, the existing comment states: "x86-64 supports up to MAX_BPF_FUNC_ARGS arguments. 1-6 are passed through regs, the remains are through stack." The follow-up commit 0ef907c3ec2c in this patch series ("bpf,x86: Implement JIT support for stack arguments") documents the actual calling convention as: "Follow x86-64 calling convention for both BPF-to-BPF and kfunc calls: - Arg 6 is passed in the R9 register - Args 7+ are passed on the stack at [rsp]" Should the comment say "Args 1-5 are passed in registers, arg 6 in R9 register, args 7-12 via stack arg slots" instead? > */ > #define MAX_BPF_FUNC_ARGS 12 --- AI reviewed your patch. Please fix the bug or email reply why it's not a bug. 
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24546989054 ^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 07/16] bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments 2026-04-17 3:47 ` [PATCH bpf-next v5 07/16] bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments Yonghong Song 2026-04-17 4:30 ` bot+bpf-ci @ 2026-04-18 0:52 ` bot+bpf-ci 1 sibling, 0 replies; 45+ messages in thread From: bot+bpf-ci @ 2026-04-18 0:52 UTC (permalink / raw) To: yonghong.song, bpf Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai [-- Attachment #1: Type: text/plain, Size: 2313 bytes --] > diff --git a/include/linux/bpf.h b/include/linux/bpf.h > --- a/include/linux/bpf.h > +++ b/include/linux/bpf.h > @@ -1151,6 +1151,10 @@ struct bpf_prog_offload { > > -/* The longest tracepoint has 12 args. > +/* The longest tracepoint has 12 args. This isn't a bug, but the multi-line comment opening doesn't follow BPF subsystem style. Per bpf.md: 'Multi-line comments MUST have the opening /* on its own line'. The rule applies even if surrounding code uses the old style. Should this be reformatted as: /* * The longest tracepoint has 12 args. * See include/trace/bpf_probe.h * * Also reuse this macro for maximum number of arguments a BPF function * or a kfunc can have. Args 1-5 are passed in registers, args 6-12 via * stack arg slots. */ > * See include/trace/bpf_probe.h > + * > + * Also reuse this macro for maximum number of arguments a BPF function > + * or a kfunc can have. Args 1-5 are passed in registers, args 6-12 via Should this say "kfuncs" instead of "kfunc's"? The context is plural (referring to multiple functions), not possessive. Additionally, bot+bpf-ci@kernel.org raised a concern in v5 about the calling convention description. The comment states "Args 1-5 are passed in registers, args 6-12 via stack arg slots," but argument 6 is actually passed in the R9 register, not on the stack. 
Looking at arch/x86/net/bpf_jit_comp.c, the existing comment states: "x86-64 supports up to MAX_BPF_FUNC_ARGS arguments. 1-6 are passed through regs, the remains are through stack." The follow-up commit 0ef907c3ec2c in this patch series documents the actual calling convention as: "Follow x86-64 calling convention for both BPF-to-BPF and kfunc calls: - Arg 6 is passed in the R9 register - Args 7+ are passed on the stack at [rsp]" Should the comment say "Args 1-5 are passed in registers, arg 6 in R9 register, args 7-12 via stack arg slots" instead? Reference: https://lore.kernel.org/bpf/d23db60b9c7e22d8ab443688b4f2da8d5594d6c07698cdc282faccaade757430@mail.kernel.org/ > + * stack arg slots. > */ > #define MAX_BPF_FUNC_ARGS 12 > --- AI reviewed your patch. Please fix the bug or email reply why it's not a bug. See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24592562564 ^ permalink raw reply [flat|nested] 45+ messages in thread
* [PATCH bpf-next v5 08/16] bpf: Support stack arguments for bpf functions 2026-04-17 3:46 [PATCH bpf-next v5 00/16] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song ` (6 preceding siblings ...) 2026-04-17 3:47 ` [PATCH bpf-next v5 07/16] bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments Yonghong Song @ 2026-04-17 3:47 ` Yonghong Song 2026-04-17 4:35 ` sashiko-bot ` (2 more replies) 2026-04-17 3:47 ` [PATCH bpf-next v5 09/16] bpf: Reject stack arguments in non-JITed programs Yonghong Song ` (7 subsequent siblings) 15 siblings, 3 replies; 45+ messages in thread From: Yonghong Song @ 2026-04-17 3:47 UTC (permalink / raw) To: bpf Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Jose E . Marchesi, kernel-team, Martin KaFai Lau Currently BPF functions (subprogs) are limited to 5 register arguments. With [1], the compiler can emit code that passes additional arguments via a dedicated stack area through bpf register BPF_REG_PARAMS (r11), introduced in the previous patch. The compiler uses positive r11 offsets for incoming (callee-side) args and negative r11 offsets for outgoing (caller-side) args, following the x86_64/arm64 calling convention direction. There is an 8-byte gap at offset 0 separating the two regions: Incoming (callee reads): r11+8 (arg6), r11+16 (arg7), ... Outgoing (caller writes): r11-8 (arg6), r11-16 (arg7), ... The following is an example to show how stack arguments are saved and transferred between caller and callee: int foo(int a1, int a2, int a3, int a4, int a5, int a6, int a7) { ... bar(a1, a2, a3, a4, a5, a6, a7, a8); ... 
} Caller (foo) Callee (bar) ============ ============ Incoming (positive offsets): Incoming (positive offsets): r11+8: [incoming arg 6] r11+8: [incoming arg 6] <-+ r11+16: [incoming arg 7] r11+16: [incoming arg 7] <-|+ r11+24: [incoming arg 8] <-||+ Outgoing (negative offsets): ||| r11-8: [outgoing arg 6 to bar] -------->-------------------------+|| r11-16: [outgoing arg 7 to bar] -------->--------------------------+| r11-24: [outgoing arg 8 to bar] -------->---------------------------+ If the bpf function has more than one call: int foo(int a1, int a2, int a3, int a4, int a5, int a6, int a7) { ... bar1(a1, a2, a3, a4, a5, a6, a7, a8); ... bar2(a1, a2, a3, a4, a5, a6, a7, a8, a9); ... } Caller (foo) Callee (bar2) ============ ============== Incoming (positive offsets): Incoming (positive offsets): r11+8: [incoming arg 6] r11+8: [incoming arg 6] <+ r11+16: [incoming arg 7] r11+16: [incoming arg 7] <|+ r11+24: [incoming arg 8] <||+ Outgoing for bar2 (negative offsets): r11+32: [incoming arg 9] <|||+ r11-8: [outgoing arg 6] ---->----------->-------------------------+||| r11-16: [outgoing arg 7] ---->----------->--------------------------+|| r11-24: [outgoing arg 8] ---->----------->---------------------------+| r11-32: [outgoing arg 9] ---->----------->----------------------------+ The verifier tracks stack arguments separately from the regular r10 stack. The stack_arg_regs are stored in bpf_func_state. This separation keeps the stack arg area from interfering with the normal stack and frame pointer (r10) bookkeeping. Similar to stacksafe(), introduce stack_arg_safe() to do pruning check. A per-state bitmask out_stack_arg_mask tracks which outgoing stack arg slots have been written on the current path. Each bit corresponds to an outgoing slot index (bit 0 = r11-8 = arg6, bit 1 = r11-16 = arg7, etc.). At a call site, the verifier checks that all slots required by the callee have their corresponding mask bits set. 
This enables precise per-path tracking: if one branch of a conditional writes arg6 but another does not, the mask correctly reflects the difference and the verifier rejects the uninitialized path. The mask is included in stack_arg_safe() so that states with different sets of initialized slots are not incorrectly pruned together. Outgoing stack arg slots are not invalidated after a call. This allows the compiler to hoist shared stores above a call and reuse them for subsequent calls. The following are a few examples. Example 1: *(u64 *)(r11 - 8) = r6; *(u64 *)(r11 - 16) = r7; call bar1; // arg6 = r6, arg7 = r7 call bar2; // reuses same arg6, arg7 without re-storing Example 2: If the caller wants different values for a later call, it simply overwrites the slot before that call: *(u64 *)(r11 - 8) = r6; call bar1; // arg6 = r6 *(u64 *)(r11 - 8) = r8; // overwrite arg6 call bar2; // arg6 = r8 Example 3: The compiler can hoist the shared stack arg stores above the branch: *(u64 *)(r11 - 16) = r7; ... if cond goto else; *(u64 *)(r11 - 8) = r8; call bar1; // arg6 = r8, arg7 = r7 goto end; else: *(u64 *)(r11 - 8) = r9; call bar2; // arg6 = r9, arg7 = r7 end: Example 4: The compiler hoists the store above the loop: *(u64 *)(r11 - 8) = r6; // arg6, before loop loop: call bar; // reuses arg6 each iteration if ... goto loop; A separate max_out_stack_arg_depth field in bpf_subprog_info tracks the deepest outgoing offset actually written. This intends to reject programs that write to offsets beyond what any callee expects. Callback functions with stack arguments need kernel setup parameter types (including stack parameters) properly and then callback function can retrieve such information for verification purpose. Global subprogs with >5 args are not yet supported. 
[1] https://github.com/llvm/llvm-project/pull/189060 Signed-off-by: Yonghong Song <yonghong.song@linux.dev> --- include/linux/bpf.h | 2 + include/linux/bpf_verifier.h | 28 +++- kernel/bpf/btf.c | 14 +- kernel/bpf/fixups.c | 26 +++- kernel/bpf/states.c | 41 +++++ kernel/bpf/verifier.c | 284 ++++++++++++++++++++++++++++++++++- 6 files changed, 383 insertions(+), 12 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index b0f956be73d2..5e061ec42940 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -1666,6 +1666,8 @@ struct bpf_prog_aux { u32 max_pkt_offset; u32 max_tp_access; u32 stack_depth; + u16 incoming_stack_arg_depth; + u16 stack_arg_depth; /* both incoming and max outgoing of stack arguments */ u32 id; u32 func_cnt; /* used by non-func prog as the number of func progs */ u32 real_func_cnt; /* includes hidden progs, only used for JIT and freeing progs */ diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h index 29a8a2605a12..6223c055b028 100644 --- a/include/linux/bpf_verifier.h +++ b/include/linux/bpf_verifier.h @@ -372,6 +372,11 @@ struct bpf_func_state { * `stack`. allocated_stack is always a multiple of BPF_REG_SIZE. */ int allocated_stack; + + u16 stack_arg_depth; /* Size of incoming + max outgoing stack args in bytes. */ + u16 incoming_stack_arg_depth; /* Size of incoming stack args in bytes. */ + u16 out_stack_arg_mask; /* Bitmask of outgoing stack arg slots that have been written. */ + struct bpf_reg_state *stack_arg_regs; /* On-stack arguments */ }; #define MAX_CALL_FRAMES 8 @@ -508,6 +513,17 @@ struct bpf_verifier_state { iter < frame->allocated_stack / BPF_REG_SIZE; \ iter++, reg = bpf_get_spilled_reg(iter, frame, mask)) +#define bpf_get_spilled_stack_arg(slot, frame, mask) \ + (((slot < frame->stack_arg_depth / BPF_REG_SIZE) && \ + ((1 << frame->stack_arg_regs[slot].type) & (mask))) \ + ? 
&frame->stack_arg_regs[slot] : NULL) + +/* Iterate over 'frame', setting 'reg' to either NULL or a spilled stack arg. */ +#define bpf_for_each_spilled_stack_arg(iter, frame, reg, mask) \ + for (iter = 0, reg = bpf_get_spilled_stack_arg(iter, frame, mask); \ + iter < frame->stack_arg_depth / BPF_REG_SIZE; \ + iter++, reg = bpf_get_spilled_stack_arg(iter, frame, mask)) + #define bpf_for_each_reg_in_vstate_mask(__vst, __state, __reg, __mask, __expr) \ ({ \ struct bpf_verifier_state *___vstate = __vst; \ @@ -525,6 +541,11 @@ struct bpf_verifier_state { continue; \ (void)(__expr); \ } \ + bpf_for_each_spilled_stack_arg(___j, __state, __reg, __mask) { \ + if (!__reg) \ + continue; \ + (void)(__expr); \ + } \ } \ }) @@ -738,10 +759,13 @@ struct bpf_subprog_info { bool keep_fastcall_stack: 1; bool changes_pkt_data: 1; bool might_sleep: 1; - u8 arg_cnt:3; + u8 arg_cnt:4; enum priv_stack_mode priv_stack_mode; - struct bpf_subprog_arg_info args[MAX_BPF_FUNC_REG_ARGS]; + struct bpf_subprog_arg_info args[MAX_BPF_FUNC_ARGS]; + u16 incoming_stack_arg_depth; + u16 outgoing_stack_arg_depth; + u16 max_out_stack_arg_depth; }; struct bpf_verifier_env; diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index a62d78581207..c5f3aa05d5a3 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -7887,13 +7887,19 @@ int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog) } args = (const struct btf_param *)(t + 1); nargs = btf_type_vlen(t); - if (nargs > MAX_BPF_FUNC_REG_ARGS) { - if (!is_global) - return -EINVAL; - bpf_log(log, "Global function %s() with %d > %d args. 
Buggy compiler.\n", + if (nargs > MAX_BPF_FUNC_ARGS) { + bpf_log(log, "Function %s() with %d > %d args not supported.\n", + tname, nargs, MAX_BPF_FUNC_ARGS); + return -EINVAL; + } + if (is_global && nargs > MAX_BPF_FUNC_REG_ARGS) { + bpf_log(log, "Global function %s() with %d > %d args not supported.\n", tname, nargs, MAX_BPF_FUNC_REG_ARGS); return -EINVAL; } + if (nargs > MAX_BPF_FUNC_REG_ARGS) + sub->incoming_stack_arg_depth = (nargs - MAX_BPF_FUNC_REG_ARGS) * BPF_REG_SIZE; + /* check that function is void or returns int, exception cb also requires this */ t = btf_type_by_id(btf, t->type); while (btf_type_is_modifier(t)) diff --git a/kernel/bpf/fixups.c b/kernel/bpf/fixups.c index 67c9b28767e1..f9233945b0f2 100644 --- a/kernel/bpf/fixups.c +++ b/kernel/bpf/fixups.c @@ -983,8 +983,14 @@ int bpf_jit_subprogs(struct bpf_verifier_env *env) int err, num_exentries; int old_len, subprog_start_adjustment = 0; - if (env->subprog_cnt <= 1) + if (env->subprog_cnt <= 1) { + /* + * Even without subprogs, kfunc calls with >5 args need stack arg space + * allocated by the root program. 
+ */ + prog->aux->stack_arg_depth = env->subprog_info[0].outgoing_stack_arg_depth; return 0; + } for (i = 0, insn = prog->insnsi; i < prog->len; i++, insn++) { if (!bpf_pseudo_func(insn) && !bpf_pseudo_call(insn)) @@ -1074,6 +1080,9 @@ int bpf_jit_subprogs(struct bpf_verifier_env *env) func[i]->aux->name[0] = 'F'; func[i]->aux->stack_depth = env->subprog_info[i].stack_depth; + func[i]->aux->incoming_stack_arg_depth = env->subprog_info[i].incoming_stack_arg_depth; + func[i]->aux->stack_arg_depth = env->subprog_info[i].incoming_stack_arg_depth + + env->subprog_info[i].outgoing_stack_arg_depth; if (env->subprog_info[i].priv_stack_mode == PRIV_STACK_ADAPTIVE) func[i]->aux->jits_use_priv_stack = true; @@ -1265,9 +1274,20 @@ int bpf_fixup_call_args(struct bpf_verifier_env *env) struct bpf_prog *prog = env->prog; struct bpf_insn *insn = prog->insnsi; bool has_kfunc_call = bpf_prog_has_kfunc_call(prog); - int i, depth; + int depth; #endif - int err = 0; + int i, err = 0; + + for (i = 0; i < env->subprog_cnt; i++) { + struct bpf_subprog_info *subprog = &env->subprog_info[i]; + + if (subprog->max_out_stack_arg_depth > subprog->outgoing_stack_arg_depth) { + verbose(env, + "func#%d writes stack arg slot at depth %u, but calls only require %u bytes\n", + i, subprog->max_out_stack_arg_depth, subprog->outgoing_stack_arg_depth); + return -EINVAL; + } + } if (env->prog->jit_requested && !bpf_prog_is_offloaded(env->prog->aux)) { diff --git a/kernel/bpf/states.c b/kernel/bpf/states.c index 8478d2c6ed5b..235841d23fe3 100644 --- a/kernel/bpf/states.c +++ b/kernel/bpf/states.c @@ -838,6 +838,44 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old, return true; } +/* + * Compare stack arg slots between old and current states. + * Outgoing stack args are path-local state and must agree for pruning. 
+ */ +static bool stack_arg_safe(struct bpf_verifier_env *env, struct bpf_func_state *old, + struct bpf_func_state *cur, struct bpf_idmap *idmap, + enum exact_level exact) +{ + int i, nslots; + + if (old->incoming_stack_arg_depth != cur->incoming_stack_arg_depth) + return false; + + /* Compare both incoming and outgoing stack arg slots. */ + if (old->stack_arg_depth != cur->stack_arg_depth) + return false; + + if (old->out_stack_arg_mask != cur->out_stack_arg_mask) + return false; + + nslots = old->stack_arg_depth / BPF_REG_SIZE; + for (i = 0; i < nslots; i++) { + struct bpf_reg_state *old_arg = &old->stack_arg_regs[i]; + struct bpf_reg_state *cur_arg = &cur->stack_arg_regs[i]; + + if (old_arg->type == NOT_INIT && cur_arg->type == NOT_INIT) + continue; + + if (exact == EXACT && old_arg->type != cur_arg->type) + return false; + + if (!regsafe(env, old_arg, cur_arg, idmap, exact)) + return false; + } + + return true; +} + static bool refsafe(struct bpf_verifier_state *old, struct bpf_verifier_state *cur, struct bpf_idmap *idmap) { @@ -929,6 +967,9 @@ static bool func_states_equal(struct bpf_verifier_env *env, struct bpf_func_stat if (!stacksafe(env, old, cur, &env->idmap_scratch, exact)) return false; + if (!stack_arg_safe(env, old, cur, &env->idmap_scratch, exact)) + return false; + return true; } diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index f25a56cfabac..7a65b532e84a 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -1340,6 +1340,20 @@ static int copy_stack_state(struct bpf_func_state *dst, const struct bpf_func_st return -ENOMEM; dst->allocated_stack = src->allocated_stack; + + /* copy stack args state */ + n = src->stack_arg_depth / BPF_REG_SIZE; + if (n) { + dst->stack_arg_regs = copy_array(dst->stack_arg_regs, src->stack_arg_regs, n, + sizeof(struct bpf_reg_state), + GFP_KERNEL_ACCOUNT); + if (!dst->stack_arg_regs) + return -ENOMEM; + + dst->stack_arg_depth = src->stack_arg_depth; + dst->incoming_stack_arg_depth = 
src->incoming_stack_arg_depth; + dst->out_stack_arg_mask = src->out_stack_arg_mask; + } return 0; } @@ -1381,6 +1395,25 @@ static int grow_stack_state(struct bpf_verifier_env *env, struct bpf_func_state return 0; } +static int grow_stack_arg_slots(struct bpf_verifier_env *env, + struct bpf_func_state *state, int size) +{ + size_t old_n = state->stack_arg_depth / BPF_REG_SIZE, n; + + size = round_up(size, BPF_REG_SIZE); + n = size / BPF_REG_SIZE; + if (old_n >= n) + return 0; + + state->stack_arg_regs = realloc_array(state->stack_arg_regs, old_n, n, + sizeof(struct bpf_reg_state)); + if (!state->stack_arg_regs) + return -ENOMEM; + + state->stack_arg_depth = size; + return 0; +} + /* Acquire a pointer id from the env and update the state->refs to include * this new pointer reference. * On success, returns a valid pointer id to associate with the register @@ -1543,6 +1576,7 @@ static void free_func_state(struct bpf_func_state *state) { if (!state) return; + kfree(state->stack_arg_regs); kfree(state->stack); kfree(state); } @@ -4215,6 +4249,13 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env, } if (type == STACK_INVALID && env->allow_uninit_stack) continue; + /* + * Cross-frame reads may hit slots poisoned by dead code elimination. + * Static liveness can't track indirect references through pointers, + * so allow the read conservatively. 
+ */ + if (type == STACK_POISON && reg_state != state) + continue; if (type == STACK_POISON) { verbose(env, "reading from stack off %d+%d size %d, slot poisoned by dead code elimination\n", off, i, size); @@ -4270,6 +4311,8 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env, continue; if (type == STACK_INVALID && env->allow_uninit_stack) continue; + if (type == STACK_POISON && reg_state != state) + continue; if (type == STACK_POISON) { verbose(env, "reading from stack off %d+%d size %d, slot poisoned by dead code elimination\n", off, i, size); @@ -4424,6 +4467,123 @@ static int check_stack_write(struct bpf_verifier_env *env, return err; } +/* Validate that a stack arg access is 8-byte sized and aligned. */ +static int check_stack_arg_access(struct bpf_verifier_env *env, + struct bpf_insn *insn, const char *op) +{ + int size = bpf_size_to_bytes(BPF_SIZE(insn->code)); + + if (size != BPF_REG_SIZE) { + verbose(env, "stack arg %s must be %d bytes, got %d\n", + op, BPF_REG_SIZE, size); + return -EINVAL; + } + if (insn->off == 0 || insn->off % BPF_REG_SIZE) { + verbose(env, "stack arg %s offset %d not aligned to %d\n", + op, insn->off, BPF_REG_SIZE); + return -EINVAL; + } + /* Reads use positive offsets (incoming), writes use negative (outgoing) */ + if (op[0] == 'r' && insn->off < 0) { + verbose(env, "stack arg read must use positive offset, got %d\n", + insn->off); + return -EINVAL; + } + if (op[0] == 'w' && insn->off > 0) { + verbose(env, "stack arg write must use negative offset, got %d\n", + insn->off); + return -EINVAL; + } + return 0; +} + +static int out_arg_idx_from_off(int off) +{ + return -off / BPF_REG_SIZE - 1; +} + +static int out_arg_spi(const struct bpf_func_state *state, int idx) +{ + return state->incoming_stack_arg_depth / BPF_REG_SIZE + idx; +} + +static u16 out_arg_req_mask(int nr_stack_arg_regs) +{ + return nr_stack_arg_regs ? (1U << nr_stack_arg_regs) - 1 : 0; +} + +/* + * Write a value to the outgoing stack arg area. 
+ * off is a negative offset from r11 (e.g. -8 for arg6, -16 for arg7). + * Callers ensure off < 0, 8-byte aligned, and size is BPF_REG_SIZE. + */ +static int check_stack_arg_write(struct bpf_verifier_env *env, struct bpf_func_state *state, + int off, int value_regno) +{ + int max_stack_arg_regs = MAX_BPF_FUNC_ARGS - MAX_BPF_FUNC_REG_ARGS; + int idx = out_arg_idx_from_off(off); + int spi = out_arg_spi(state, idx); + struct bpf_subprog_info *subprog; + struct bpf_func_state *cur; + int err; + + if (idx >= max_stack_arg_regs) { + verbose(env, "stack arg write offset %d exceeds max %d stack args\n", + off, max_stack_arg_regs); + return -EINVAL; + } + + err = grow_stack_arg_slots(env, state, state->incoming_stack_arg_depth + (-off)); + if (err) + return err; + + /* Track the max outgoing stack arg access depth. */ + subprog = &env->subprog_info[state->subprogno]; + if (-off > subprog->max_out_stack_arg_depth) + subprog->max_out_stack_arg_depth = -off; + + cur = env->cur_state->frame[env->cur_state->curframe]; + if (value_regno >= 0) { + state->stack_arg_regs[spi] = cur->regs[value_regno]; + } else { + /* BPF_ST: store immediate, treat as scalar */ + struct bpf_reg_state *arg = &state->stack_arg_regs[spi]; + + arg->type = SCALAR_VALUE; + __mark_reg_known(arg, env->prog->insnsi[env->insn_idx].imm); + } + state->out_stack_arg_mask |= BIT(idx); + return 0; +} + +/* + * Read a value from the incoming stack arg area. + * off is a positive offset from r11 (e.g. +8 for arg6, +16 for arg7). + * Callers ensure off > 0, 8-byte aligned, and size is BPF_REG_SIZE. 
+ */ +static int check_stack_arg_read(struct bpf_verifier_env *env, struct bpf_func_state *state, + int off, int dst_regno) +{ + int spi = off / BPF_REG_SIZE - 1; + struct bpf_func_state *cur; + struct bpf_reg_state *arg; + + if (off > state->incoming_stack_arg_depth) { + verbose(env, "invalid read from stack arg off %d depth %d\n", + off, state->incoming_stack_arg_depth); + return -EACCES; + } + + arg = &state->stack_arg_regs[spi]; + cur = env->cur_state->frame[env->cur_state->curframe]; + + if (is_spillable_regtype(arg->type)) + copy_register_state(&cur->regs[dst_regno], arg); + else + mark_reg_unknown(env, cur->regs, dst_regno); + return 0; +} + static int check_map_access_type(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 argno, int off, int size, enum bpf_access_type type) { @@ -6606,10 +6766,23 @@ static int check_load_mem(struct bpf_verifier_env *env, struct bpf_insn *insn, bool strict_alignment_once, bool is_ldsx, bool allow_trust_mismatch, const char *ctx) { + struct bpf_verifier_state *vstate = env->cur_state; + struct bpf_func_state *state = vstate->frame[vstate->curframe]; struct bpf_reg_state *regs = cur_regs(env); enum bpf_reg_type src_reg_type; int err; + /* Handle stack arg access */ + if (insn->src_reg == BPF_REG_PARAMS) { + err = check_reg_arg(env, insn->dst_reg, DST_OP_NO_MARK); + if (err) + return err; + err = check_stack_arg_access(env, insn, "read"); + if (err) + return err; + return check_stack_arg_read(env, state, insn->off, insn->dst_reg); + } + /* check src operand */ err = check_reg_arg(env, insn->src_reg, SRC_OP); if (err) @@ -6638,10 +6811,23 @@ static int check_load_mem(struct bpf_verifier_env *env, struct bpf_insn *insn, static int check_store_reg(struct bpf_verifier_env *env, struct bpf_insn *insn, bool strict_alignment_once) { + struct bpf_verifier_state *vstate = env->cur_state; + struct bpf_func_state *state = vstate->frame[vstate->curframe]; struct bpf_reg_state *regs = cur_regs(env); enum bpf_reg_type 
dst_reg_type; int err; + /* Handle stack arg write */ + if (insn->dst_reg == BPF_REG_PARAMS) { + err = check_reg_arg(env, insn->src_reg, SRC_OP); + if (err) + return err; + err = check_stack_arg_access(env, insn, "write"); + if (err) + return err; + return check_stack_arg_write(env, state, insn->off, insn->src_reg); + } + /* check src1 operand */ err = check_reg_arg(env, insn->src_reg, SRC_OP); if (err) @@ -9327,6 +9513,20 @@ static int setup_func_entry(struct bpf_verifier_env *env, int subprog, int calls return err; } +static struct bpf_reg_state *get_func_arg_reg(struct bpf_verifier_env *env, + struct bpf_reg_state *regs, int argno) +{ + struct bpf_func_state *caller; + int spi; + + if (argno < MAX_BPF_FUNC_REG_ARGS) + return ®s[argno + 1]; + + caller = cur_func(env); + spi = out_arg_spi(caller, argno - MAX_BPF_FUNC_REG_ARGS); + return &caller->stack_arg_regs[spi]; +} + static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog, const struct btf *btf, struct bpf_reg_state *regs) @@ -9345,8 +9545,24 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog, */ for (i = 0; i < sub->arg_cnt; i++) { u32 argno = make_argno(i); - u32 regno = i + 1; - struct bpf_reg_state *reg = ®s[regno]; + struct bpf_reg_state *reg; + + if (i >= MAX_BPF_FUNC_REG_ARGS) { + struct bpf_func_state *caller = cur_func(env); + int spi = out_arg_spi(caller, i - MAX_BPF_FUNC_REG_ARGS); + + /* + * The compiler may constant-fold stack arg values into the + * callee, eliminating the r11 stores. The BTF still declares + * these parameters, but no outgoing stack slots exist. 
+ */ + if (spi >= (caller->stack_arg_depth / BPF_REG_SIZE)) { + verbose(env, "stack %s not found in caller state\n", + reg_arg_name(env, argno)); + return -EINVAL; + } + } + reg = get_func_arg_reg(env, regs, i); struct bpf_subprog_arg_info *arg = &sub->args[i]; if (arg->arg_type == ARG_ANYTHING) { @@ -9534,8 +9750,10 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn, int *insn_idx) { struct bpf_verifier_state *state = env->cur_state; + struct bpf_subprog_info *caller_info; struct bpf_func_state *caller; int err, subprog, target_insn; + u16 callee_incoming; target_insn = *insn_idx + insn->imm + 1; subprog = bpf_find_subprog(env, target_insn); @@ -9587,6 +9805,15 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn, return 0; } + /* + * Track caller's outgoing stack arg depth (max across all callees). + * This is needed so the JIT knows how much stack arg space to allocate. + */ + caller_info = &env->subprog_info[caller->subprogno]; + callee_incoming = env->subprog_info[subprog].incoming_stack_arg_depth; + if (callee_incoming > caller_info->outgoing_stack_arg_depth) + caller_info->outgoing_stack_arg_depth = callee_incoming; + /* for regular function entry setup new frame and continue * from that frame. */ @@ -9640,6 +9867,7 @@ static int set_callee_state(struct bpf_verifier_env *env, struct bpf_func_state *caller, struct bpf_func_state *callee, int insn_idx) { + struct bpf_subprog_info *callee_info; int i; /* copy r1 - r5 args that callee can access. The copy includes parent @@ -9647,6 +9875,45 @@ static int set_callee_state(struct bpf_verifier_env *env, */ for (i = BPF_REG_1; i <= BPF_REG_5; i++) callee->regs[i] = caller->regs[i]; + + /* + * Transfer stack args from caller's outgoing area to callee's incoming + * area. + * + * Caller stores outgoing args at negative r11 offsets: -8 (arg6), + * -16 (arg7), -24 (arg8), ... In the caller's slot array, outgoing + * spi 0 is arg6, spi 1 is arg7, and so on. 
+ * + * Callee reads incoming args at positive r11 offsets: +8 (arg6), + * +16 (arg7), ... Incoming spi 0 is arg6. + */ + callee_info = &env->subprog_info[callee->subprogno]; + if (callee_info->incoming_stack_arg_depth) { + int callee_incoming_slots = callee_info->incoming_stack_arg_depth / BPF_REG_SIZE; + u16 req_mask = out_arg_req_mask(callee_incoming_slots); + int err, caller_spi; + + callee->incoming_stack_arg_depth = callee_info->incoming_stack_arg_depth; + err = grow_stack_arg_slots(env, callee, callee_info->incoming_stack_arg_depth); + if (err) + return err; + + if ((caller->out_stack_arg_mask & req_mask) != req_mask) { + for (i = 0; i < callee_incoming_slots; i++) { + if (caller->out_stack_arg_mask & BIT(i)) + continue; + verbose(env, "stack arg#%d not properly initialized\n", + i + MAX_BPF_FUNC_REG_ARGS + 1); + return -EINVAL; + } + } + + for (i = 0; i < callee_incoming_slots; i++) { + caller_spi = out_arg_spi(caller, i); + callee->stack_arg_regs[i] = caller->stack_arg_regs[caller_spi]; + } + } + return 0; } @@ -17649,6 +17916,17 @@ static int do_check_insn(struct bpf_verifier_env *env, bool *do_print_state) return check_store_reg(env, insn, false); case BPF_ST: { + /* Handle stack arg write (store immediate) */ + if (insn->dst_reg == BPF_REG_PARAMS) { + struct bpf_verifier_state *vstate = env->cur_state; + struct bpf_func_state *state = vstate->frame[vstate->curframe]; + + err = check_stack_arg_access(env, insn, "write"); + if (err) + return err; + return check_stack_arg_write(env, state, insn->off, -1); + } + enum bpf_reg_type dst_reg_type; err = check_reg_arg(env, insn->dst_reg, SRC_OP); @@ -18807,7 +19085,7 @@ static int do_check_common(struct bpf_verifier_env *env, int subprog) goto out; } } - for (i = BPF_REG_1; i <= sub->arg_cnt; i++) { + for (i = BPF_REG_1; i <= min_t(u32, sub->arg_cnt, MAX_BPF_FUNC_REG_ARGS); i++) { arg = &sub->args[i - BPF_REG_1]; reg = &regs[i]; -- 2.52.0 ^ permalink raw reply related [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 08/16] bpf: Support stack arguments for bpf functions 2026-04-17 3:47 ` [PATCH bpf-next v5 08/16] bpf: Support stack arguments for bpf functions Yonghong Song @ 2026-04-17 4:35 ` sashiko-bot 2026-04-17 4:43 ` bot+bpf-ci 2026-04-18 1:04 ` bot+bpf-ci 2 siblings, 0 replies; 45+ messages in thread From: sashiko-bot @ 2026-04-17 4:35 UTC (permalink / raw) To: Yonghong Song; +Cc: bpf Sashiko AI review found 5 potential issue(s): - [Critical] Verifier State Leakage in `copy_stack_state` when reusing states. - [Critical] Security / Liveness Tracking Bypass via `stack_arg_regs`. - [Critical] Use-After-Free via Type Mismatch in `bpf_get_spilled_stack_arg()`. - [Medium] Unnecessary Rejection of Valid Programs due to Dead Code Elimination. - [Low] Missing Parentheses in `bpf_get_spilled_stack_arg` Macro. -- commit 6ec4a3fcb5e83a95626cd2bb13a60b2283e1c3e2 Author: Yonghong Song <yonghong.song@linux.dev> bpf: Support stack arguments for bpf functions This commit adds support for passing more than 5 arguments to BPF subprograms by utilizing a dedicated stack area accessed via register r11. It introduces new fields to track incoming and outgoing stack arguments for bounds checking and precise verifier path tracking. Link: https://github.com/llvm/llvm-project/pull/189060 > diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h > --- a/include/linux/bpf_verifier.h > +++ b/include/linux/bpf_verifier.h > @@ -508,6 +513,17 @@ struct bpf_verifier_state { > iter < frame->allocated_stack / BPF_REG_SIZE; \ > iter++, reg = bpf_get_spilled_reg(iter, frame, mask)) > > +#define bpf_get_spilled_stack_arg(slot, frame, mask) \ > + (((slot < frame->stack_arg_depth / BPF_REG_SIZE) && \ > + ((1 << frame->stack_arg_regs[slot].type) & (mask))) \ > + ? &frame->stack_arg_regs[slot] : NULL) Is there a type mismatch in this check? 
The mask parameter here is passed as 1 << STACK_SPILL, but frame->stack_arg_regs[slot].type is an enum bpf_reg_type, not an enum bpf_stack_slot_type. For pointer types, this expression can evaluate to 0. This would cause the iterator to silently skip pointers stored in the stack argument area. If an acquired reference is dropped via release_reference(), the copy in the stack argument area wouldn't be invalidated, potentially allowing a use-after-free if a program subsequently reads the spilled pointer back from r11. Also, this isn't a bug, but should the slot parameter be enclosed in parentheses to prevent operator precedence issues if a complex expression is passed in the future? [ ... ] > diff --git a/kernel/bpf/fixups.c b/kernel/bpf/fixups.c > --- a/kernel/bpf/fixups.c > +++ b/kernel/bpf/fixups.c > @@ -1266,9 +1275,20 @@ int bpf_fixup_call_args(struct bpf_verifier_env *env) [ ... ] > + for (i = 0; i < env->subprog_cnt; i++) { > + struct bpf_subprog_info *subprog = &env->subprog_info[i]; > + > + if (subprog->max_out_stack_arg_depth > subprog->outgoing_stack_arg_depth) { > + verbose(env, > + "func#%d writes stack arg slot at depth %u, but calls only require %u bytes\n", > + i, subprog->max_out_stack_arg_depth, subprog->outgoing_stack_arg_depth); > + return -EINVAL; > + } > + } Could this reject valid programs if the compiler hoists a stack argument store above a branch that is later eliminated as dead code? If the compiler optimizes a store above a conditional check, max_out_stack_arg_depth would be updated. However, if the configuration constants cause the branch containing the actual function call to be eliminated, outgoing_stack_arg_depth would remain 0, leading to a rejection of a safe verification path. [ ... 
] > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c > --- a/kernel/bpf/verifier.c > +++ b/kernel/bpf/verifier.c > @@ -1340,6 +1340,20 @@ static int copy_stack_state(struct bpf_func_state *dst, const struct bpf_func_st > return -ENOMEM; > > dst->allocated_stack = src->allocated_stack; > + > + /* copy stack args state */ > + n = src->stack_arg_depth / BPF_REG_SIZE; > + if (n) { > + dst->stack_arg_regs = copy_array(dst->stack_arg_regs, src->stack_arg_regs, n, > + sizeof(struct bpf_reg_state), > + GFP_KERNEL_ACCOUNT); > + if (!dst->stack_arg_regs) > + return -ENOMEM; > + > + dst->stack_arg_depth = src->stack_arg_depth; > + dst->incoming_stack_arg_depth = src->incoming_stack_arg_depth; > + dst->out_stack_arg_mask = src->out_stack_arg_mask; > + } > return 0; > } Does this skip clearing the destination's stack argument fields if the source state has no stack arguments? If src->stack_arg_depth is 0, dst->stack_arg_depth, out_stack_arg_mask, and incoming_stack_arg_depth are not updated. When the verifier reuses an existing state during pop_stack() via bpf_copy_verifier_state(), the destination state might incorrectly inherit a stale stack argument initialization mask from a previously evaluated path. Could this allow a program to initialize stack arguments on one branch, and then safely perform a function call on a different branch without proper initialization? [ ... ] > @@ -4424,6 +4467,123 @@ static int check_stack_write(struct bpf_verifier_env *env, [ ... ] > +static int check_stack_arg_write(struct bpf_verifier_env *env, struct bpf_func_state *state, > + int off, int value_regno) > +{ [ ... 
] > + cur = env->cur_state->frame[env->cur_state->curframe]; > + if (value_regno >= 0) { > + state->stack_arg_regs[spi] = cur->regs[value_regno]; > + } else { > + /* BPF_ST: store immediate, treat as scalar */ > + struct bpf_reg_state *arg = &state->stack_arg_regs[spi]; > + > + arg->type = SCALAR_VALUE; > + __mark_reg_known(arg, env->prog->insnsi[env->insn_idx].imm); > + } > + state->out_stack_arg_mask |= BIT(idx); > + return 0; > +} > + > +/* > + * Read a value from the incoming stack arg area. > + * off is a positive offset from r11 (e.g. +8 for arg6, +16 for arg7). > + * Callers ensure off > 0, 8-byte aligned, and size is BPF_REG_SIZE. > + */ > +static int check_stack_arg_read(struct bpf_verifier_env *env, struct bpf_func_state *state, > + int off, int dst_regno) > +{ [ ... ] > + arg = &state->stack_arg_regs[spi]; > + cur = env->cur_state->frame[env->cur_state->curframe]; > + > + if (is_spillable_regtype(arg->type)) > + copy_register_state(&cur->regs[dst_regno], arg); > + else > + mark_reg_unknown(env, cur->regs, dst_regno); > + return 0; > +} Does this bypass liveness tracking for stack arguments? Because the state is copied directly without calling mark_reg_read(), the live field of stack arguments is never marked as read. During state pruning, stack_arg_safe() delegates to regsafe(), which contains a fast-path that unconditionally returns true if the old state's register was never marked as read. Could this cause the verifier to aggressively prune verification paths even if they contain completely different and potentially unsafe stack arguments? -- Sashiko AI review · https://sashiko.dev/#/patchset/20260417034658.2625353-1-yonghong.song@linux.dev?part=8 ^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 08/16] bpf: Support stack arguments for bpf functions 2026-04-17 3:47 ` [PATCH bpf-next v5 08/16] bpf: Support stack arguments for bpf functions Yonghong Song 2026-04-17 4:35 ` sashiko-bot @ 2026-04-17 4:43 ` bot+bpf-ci 2026-04-18 1:04 ` bot+bpf-ci 2 siblings, 0 replies; 45+ messages in thread From: bot+bpf-ci @ 2026-04-17 4:43 UTC (permalink / raw) To: yonghong.song, bpf Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai [-- Attachment #1: Type: text/plain, Size: 4602 bytes --] > diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h > index 7bbd62c51bb5c..e0341fa0b0f7c 100644 > --- a/include/linux/bpf_verifier.h > +++ b/include/linux/bpf_verifier.h [ ... ] > @@ -508,6 +513,17 @@ struct bpf_func_state { > u32 callback_depth; > u32 async_entry_cnt; > > + /* Stack argument tracking for current function. */ > + struct bpf_reg_state *stack_arg_regs; /* Stack arg register states */ > + u32 stack_arg_depth; /* Depth of stack_arg_regs array */ > + u32 incoming_stack_arg_depth; /* Incoming stack arg count (callee) */ > + u64 out_stack_arg_mask; /* Outgoing stack arg mask (caller) */ > +}; > + > +#define bpf_get_spilled_stack_arg(slot, frame, mask) \ > + (((slot < frame->stack_arg_depth / BPF_REG_SIZE) && \ > + ((1 << frame->stack_arg_regs[slot].type) & (mask))) \ > + ? &frame->stack_arg_regs[slot] : NULL) Does this macro work correctly? The existing bpf_get_spilled_reg macro filters on frame->stack[slot].slot_type[], which is drawn from the STACK_* enum (STACK_SPILL, STACK_ITER, etc.) and callers pass masks like (1 << STACK_SPILL) | (1 << STACK_ITER). This new macro filters on bpf_reg_state.type which is from a completely different enum bpf_reg_type (SCALAR_VALUE, PTR_TO_BTF_ID, etc.) with type modifier bits (PTR_MAYBE_NULL = BIT(8), PTR_TRUSTED = BIT(20), etc.). 
For example, check_kfunc_call() in the rcu_unlock path uses: u32 clear_mask = (1 << STACK_SPILL) | (1 << STACK_ITER); ... bpf_for_each_reg_in_vstate_mask(env->cur_state, state, reg, clear_mask, ...); A stack-arg slot holding a MEM_RCU-tagged register would have reg->type including MEM_RCU (BIT(13)). Then (1 << 13) = 0x2000, and 0x2000 & clear_mask (0x22) = 0, so the slot is skipped and MEM_RCU is not cleared on rcu_unlock. Additionally, when n >= 32 (for modifier bits like PTR_TRUSTED at BIT(20)), the shift (1 << n) invokes undefined behavior in C. Should this use a different approach to filter stack arg slots, perhaps based on initialization status (type != NOT_INIT)? > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c > index 32cc59810e14d..30f2d1f06a988 100644 > --- a/kernel/bpf/verifier.c > +++ b/kernel/bpf/verifier.c [ ... ] > @@ -1340,6 +1340,20 @@ static int copy_stack_state(struct bpf_func_state *dst, > goto out_free_stack; > } > > + n = src->stack_arg_depth / BPF_REG_SIZE; > + if (n) { > + dst->stack_arg_regs = copy_array(dst->stack_arg_regs, src->stack_arg_regs, n, > + sizeof(struct bpf_reg_state), > + GFP_KERNEL_ACCOUNT); > + if (!dst->stack_arg_regs) > + return -ENOMEM; > + > + dst->stack_arg_depth = src->stack_arg_depth; > + dst->incoming_stack_arg_depth = src->incoming_stack_arg_depth; > + dst->out_stack_arg_mask = src->out_stack_arg_mask; > + } > + > + return 0; What happens when src->stack_arg_depth == 0 but dst has non-zero values from prior work? The stack_arg_depth, incoming_stack_arg_depth, out_stack_arg_mask, and stack_arg_regs fields come after the 'stack' field in struct bpf_func_state, so the preceding memcpy(dst, src, offsetof(..., stack)) in copy_func_state does not overwrite them. When n == 0, the assignments to dst->stack_arg_depth, dst->incoming_stack_arg_depth, and dst->out_stack_arg_mask are skipped, leaving stale values in dst. This can be triggered via: 1. Frame starts with stack_arg_depth = 0, out_stack_arg_mask = 0 2. 
check_cond_jmp_op calls push_stack which saves this state 3. Verifier continues on one branch; BPF_ST/BPF_STX writes to r11-8, r11-16 cause check_stack_arg_write to call grow_stack_arg_slots, setting stack_arg_depth = 16 and out_stack_arg_mask = BIT(0)|BIT(1) 4. Branch ends; pop_stack calls bpf_copy_verifier_state which calls copy_stack_state with src->stack_arg_depth == 0 5. Because n == 0, the assignment block is skipped, and dst retains stack_arg_depth = 16 and out_stack_arg_mask = BIT(0)|BIT(1) 6. The other branch never writes r11-8 or r11-16, but when it calls a subprog, set_callee_state checks: if ((caller->out_stack_arg_mask & req_mask) != req_mask) The stale mask makes this check pass incorrectly Would the verifier accept a call with uninitialized stack args on that second branch? Should these fields be assigned unconditionally, similar to how allocated_stack is handled earlier in this function? --- AI reviewed your patch. Please fix the bug or email reply why it's not a bug. See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24546989054 ^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 08/16] bpf: Support stack arguments for bpf functions 2026-04-17 3:47 ` [PATCH bpf-next v5 08/16] bpf: Support stack arguments for bpf functions Yonghong Song 2026-04-17 4:35 ` sashiko-bot 2026-04-17 4:43 ` bot+bpf-ci @ 2026-04-18 1:04 ` bot+bpf-ci 2 siblings, 0 replies; 45+ messages in thread From: bot+bpf-ci @ 2026-04-18 1:04 UTC (permalink / raw) To: yonghong.song, bpf Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai [-- Attachment #1: Type: text/plain, Size: 6540 bytes --] > diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h > index 29a8a2605a12..6223c055b028 100644 > --- a/include/linux/bpf_verifier.h > +++ b/include/linux/bpf_verifier.h [ ... ] > @@ -508,6 +513,17 @@ struct bpf_verifier_state { > iter < frame->allocated_stack / BPF_REG_SIZE; \ > iter++, reg = bpf_get_spilled_reg(iter, frame, mask)) > > +#define bpf_get_spilled_stack_arg(slot, frame, mask) \ > + (((slot < frame->stack_arg_depth / BPF_REG_SIZE) && \ > + ((1 << frame->stack_arg_regs[slot].type) & (mask))) \ > + ? &frame->stack_arg_regs[slot] : NULL) ^^^^ The shift count in this macro appears to be incorrect. The macro uses `1 << frame->stack_arg_regs[slot].type` where type is an `enum bpf_reg_type`. Looking at bpf_get_spilled_reg (which this macro is modeled after), it uses `1 << frame->stack[slot].slot_type[BPF_REG_SIZE - 1]` where slot_type is an `enum bpf_stack_slot_type` with values in the range 0..7. However, `enum bpf_reg_type` values frequently carry high bit flags like PTR_MAYBE_NULL (BIT(8)), PTR_UNTRUSTED (BIT(14)), PTR_TRUSTED (BIT(20)), and MEM_RCU (BIT(21)): include/linux/bpf.h: enum bpf_type_flag { PTR_MAYBE_NULL = BIT(8), ... PTR_UNTRUSTED = BIT(14), ... PTR_TRUSTED = BIT(20), MEM_RCU = BIT(21), ... 
}; When a stack arg has type PTR_TO_BTF_ID | MEM_RCU, the value is around 0x200010, and `1 << 0x200010` is undefined behavior (shift count >= width of int). The concrete impact occurs where bpf_for_each_reg_in_vstate (line 541-554) walks all registers including stack args: include/linux/bpf_verifier.h: #define bpf_for_each_reg_in_vstate_mask(...) bpf_for_each_spilled_reg(___j, __state, __reg, __mask) { ... } bpf_for_each_spilled_stack_arg(___j, __state, __reg, __mask) { if (!__reg) continue; (void)(__expr); } At kernel/bpf/verifier.c:13438, this iterator is used with clear_mask = (1 << STACK_SPILL) | (1 << STACK_ITER) to strip MEM_RCU from all surviving pointers after rcu_read_unlock. For a stack arg with PTR_TO_BTF_ID | MEM_RCU, the mask test becomes `(1 << (type & 31)) & 130` which does not match 130, so the macro returns NULL and the MEM_RCU stripping code never runs. MEM_RCU pointers passed as arguments 6+ remain tagged MEM_RCU across rcu_read_unlock, allowing later dereferences without PTR_UNTRUSTED semantics. Could the mask test be rewritten to avoid bit-shifting an enum bpf_reg_type value? Perhaps using is_spillable_regtype() or a similar type predicate rather than a bit pattern? 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c > index f25a56cfabac..7a65b532e84a 100644 > --- a/kernel/bpf/verifier.c > +++ b/kernel/bpf/verifier.c > @@ -1340,6 +1340,20 @@ static int copy_stack_state(struct bpf_func_state *dst, const struct bpf_func_st > return -ENOMEM; > > dst->allocated_stack = src->allocated_stack; > + > + /* copy stack args state */ > + n = src->stack_arg_depth / BPF_REG_SIZE; > + if (n) { > + dst->stack_arg_regs = copy_array(dst->stack_arg_regs, src->stack_arg_regs, n, > + sizeof(struct bpf_reg_state), > + GFP_KERNEL_ACCOUNT); > + if (!dst->stack_arg_regs) > + return -ENOMEM; > + > + dst->stack_arg_depth = src->stack_arg_depth; > + dst->incoming_stack_arg_depth = src->incoming_stack_arg_depth; > + dst->out_stack_arg_mask = src->out_stack_arg_mask; > + } > return 0; > } Are dst->stack_arg_depth, dst->incoming_stack_arg_depth, and dst->out_stack_arg_mask updated when src->stack_arg_depth is 0? copy_stack_state only updates those fields when n != 0. When src has no stack args, dst retains stale values from a prior copy. This differs from the unconditional update of dst->allocated_stack a few lines above, and from copy_reference_state which always updates active_locks and acquired_refs. The function copy_func_state() calls memcpy up to offsetof(struct bpf_func_state, stack). Since stack_arg_depth, incoming_stack_arg_depth, out_stack_arg_mask, and stack_arg_regs are all declared AFTER stack, memcpy does not copy them. copy_stack_state is the only place those fields are updated from src to dst. The staleness is visible through pop_stack: pop_stack() calls bpf_copy_verifier_state(cur, &head->st). cur->frame[i] may have out_stack_arg_mask/stack_arg_depth populated from a previously explored branch that called a stack-arg function. head->st is the saved state from before the branch, with mask=0 and stack_arg_depth=0. For that frame, src->stack_arg_depth==0 so the copy is skipped, and cur keeps the prior branch's mask and stack_arg_regs. 
The commit message promises "A per-state bitmask out_stack_arg_mask tracks which outgoing stack arg slots have been written on the current path... if one branch of a conditional writes arg6 but another does not, the mask correctly reflects the difference and the verifier rejects the uninitialized path." After pop_stack, the other branch is processed on top of cur, where the mask still carries bits set by the taken branch. Consequences in set_callee_state: kernel/bpf/verifier.c:set_callee_state() { ... if ((caller->out_stack_arg_mask & req_mask) != req_mask) { for (i = 0; i < callee_incoming_slots; i++) { if (caller->out_stack_arg_mask & BIT(i)) continue; verbose(env, "stack arg#%d not properly initialized\n", ...); return -EINVAL; } } ... } With stale bits, this check can pass on a path where the current branch never wrote the required r11 offsets. The verifier then copies stale caller->stack_arg_regs[caller_spi] into the callee's incoming slot. Similarly in btf_check_func_arg_match: kernel/bpf/verifier.c:btf_check_func_arg_match() { ... if (spi >= (caller->stack_arg_depth / BPF_REG_SIZE)) { verbose(env, "stack %s not found in caller state\n", ...); return -EINVAL; } ... } With stale caller->stack_arg_depth, this bound check can pass on paths where the actual live stack_arg_depth is 0. Should these fields be updated unconditionally, even when n==0? --- AI reviewed your patch. Please fix the bug or email reply why it's not a bug. See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24592562564 ^ permalink raw reply [flat|nested] 45+ messages in thread
* [PATCH bpf-next v5 09/16] bpf: Reject stack arguments in non-JITed programs 2026-04-17 3:46 [PATCH bpf-next v5 00/16] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song ` (7 preceding siblings ...) 2026-04-17 3:47 ` [PATCH bpf-next v5 08/16] bpf: Support stack arguments for bpf functions Yonghong Song @ 2026-04-17 3:47 ` Yonghong Song 2026-04-17 4:30 ` bot+bpf-ci 2026-04-18 0:52 ` bot+bpf-ci 2026-04-17 3:47 ` [PATCH bpf-next v5 10/16] bpf: Reject stack arguments if tail call reachable Yonghong Song ` (6 subsequent siblings) 15 siblings, 2 replies; 45+ messages in thread From: Yonghong Song @ 2026-04-17 3:47 UTC (permalink / raw) To: bpf Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Jose E . Marchesi, kernel-team, Martin KaFai Lau The interpreter does not understand the bpf register r11 (BPF_REG_PARAMS) used for stack argument addressing. So reject interpreter usage if stack arguments are used either in the main program or any subprogram. Signed-off-by: Yonghong Song <yonghong.song@linux.dev> --- kernel/bpf/core.c | 2 +- kernel/bpf/fixups.c | 6 ++++++ 2 files changed, 7 insertions(+), 1 deletion(-) diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index e7e97ffa2a8b..10c36a74431d 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -2555,7 +2555,7 @@ struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err) goto finalize; if (IS_ENABLED(CONFIG_BPF_JIT_ALWAYS_ON) || - bpf_prog_has_kfunc_call(fp)) + bpf_prog_has_kfunc_call(fp) || fp->aux->stack_arg_depth) jit_needed = true; if (!bpf_prog_select_interpreter(fp)) diff --git a/kernel/bpf/fixups.c b/kernel/bpf/fixups.c index f9233945b0f2..179e17007240 100644 --- a/kernel/bpf/fixups.c +++ b/kernel/bpf/fixups.c @@ -1302,6 +1302,12 @@ int bpf_fixup_call_args(struct bpf_verifier_env *env) verbose(env, "calling kernel functions are not allowed in non-JITed programs\n"); return -EINVAL; } + for (i = 0; i < env->subprog_cnt; i++) { + if 
(env->subprog_info[i].incoming_stack_arg_depth) { + verbose(env, "stack args are not supported in non-JITed programs\n"); + return -EINVAL; + } + } if (env->subprog_cnt > 1 && env->prog->aux->tail_call_reachable) { /* When JIT fails the progs with bpf2bpf calls and tail_calls * have to be rejected, since interpreter doesn't support them yet. -- 2.52.0 ^ permalink raw reply related [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 09/16] bpf: Reject stack arguments in non-JITed programs 2026-04-17 3:47 ` [PATCH bpf-next v5 09/16] bpf: Reject stack arguments in non-JITed programs Yonghong Song @ 2026-04-17 4:30 ` bot+bpf-ci 2026-04-18 0:52 ` bot+bpf-ci 1 sibling, 0 replies; 45+ messages in thread From: bot+bpf-ci @ 2026-04-17 4:30 UTC (permalink / raw) To: yonghong.song, bpf Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai [-- Attachment #1: Type: text/plain, Size: 762 bytes --] commit 6f3447bea13f86a674e917f5f8ef4474cdd3b0bc Author: Yonghong Song <yonghong.song@linux.dev> bpf: Reject stack arguments in non-JITed programs The interpreter does not understand the bpf register r11 (BPF_REG_PARAMS) used for stack argument addressing. So reject interpreter usage if stack arguments are used either in the main program or any subprogram. This commit fixes a bug but lacks a Fixes: tag. Suggested: Fixes: 21229535935384b01e3cef3c117233b23b2f3902 ("bpf: Support stack arguments for bpf functions") --- AI reviewed your patch. Please fix the bug or email reply why it's not a bug. See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24546989054 ^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 09/16] bpf: Reject stack arguments in non-JITed programs 2026-04-17 3:47 ` [PATCH bpf-next v5 09/16] bpf: Reject stack arguments in non-JITed programs Yonghong Song 2026-04-17 4:30 ` bot+bpf-ci @ 2026-04-18 0:52 ` bot+bpf-ci 1 sibling, 0 replies; 45+ messages in thread From: bot+bpf-ci @ 2026-04-18 0:52 UTC (permalink / raw) To: yonghong.song, bpf Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai [-- Attachment #1: Type: text/plain, Size: 1026 bytes --] > bpf: Reject stack arguments in non-JITed programs > > The interpreter does not understand the bpf register r11 > (BPF_REG_PARAMS) used for stack argument addressing. So > reject interpreter usage if stack arguments are used either > in the main program or any subprogram. > > Signed-off-by: Yonghong Song <yonghong.song@linux.dev> Should this include a Fixes tag? The commit addresses a bug introduced in commit 13ee5d67a7c9 ("bpf: Support stack arguments for bpf functions"). Additionally, bot+bpf-ci@kernel.org noted in v5 review on lore that this commit fixes a bug but lacks the tag: https://lore.kernel.org/bpf/9b0ca90ef418abe68516e5570b953bbbee79f418c1550255784baf8422e9e3c0@mail.kernel.org/ Suggested: Fixes: 13ee5d67a7c9 ("bpf: Support stack arguments for bpf functions") --- AI reviewed your patch. Please fix the bug or email reply why it's not a bug. See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24592562564 ^ permalink raw reply [flat|nested] 45+ messages in thread
* [PATCH bpf-next v5 10/16] bpf: Reject stack arguments if tail call reachable 2026-04-17 3:46 [PATCH bpf-next v5 00/16] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song ` (8 preceding siblings ...) 2026-04-17 3:47 ` [PATCH bpf-next v5 09/16] bpf: Reject stack arguments in non-JITed programs Yonghong Song @ 2026-04-17 3:47 ` Yonghong Song 2026-04-17 4:08 ` sashiko-bot ` (2 more replies) 2026-04-17 3:47 ` [PATCH bpf-next v5 11/16] bpf: Support stack arguments for kfunc calls Yonghong Song ` (5 subsequent siblings) 15 siblings, 3 replies; 45+ messages in thread From: Yonghong Song @ 2026-04-17 3:47 UTC (permalink / raw) To: bpf Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Jose E . Marchesi, kernel-team, Martin KaFai Lau Tailcalls are been deprecated. So reject stack arguments if tail call is in the way. Signed-off-by: Yonghong Song <yonghong.song@linux.dev> --- kernel/bpf/verifier.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 7a65b532e84a..739c3127520c 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -5662,6 +5662,11 @@ static int check_max_stack_depth_subprog(struct bpf_verifier_env *env, int idx, verbose(env, "cannot tail call within exception cb\n"); return -EINVAL; } + if (subprog[tmp].incoming_stack_arg_depth || + subprog[tmp].outgoing_stack_arg_depth) { + verbose(env, "tail_calls are not allowed in programs with stack args\n"); + return -EINVAL; + } subprog[tmp].tail_call_reachable = true; } if (subprog[0].tail_call_reachable) -- 2.52.0 ^ permalink raw reply related [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 10/16] bpf: Reject stack arguments if tail call reachable 2026-04-17 3:47 ` [PATCH bpf-next v5 10/16] bpf: Reject stack arguments if tail call reachable Yonghong Song @ 2026-04-17 4:08 ` sashiko-bot 2026-04-17 4:30 ` bot+bpf-ci 2026-04-18 1:04 ` bot+bpf-ci 2 siblings, 0 replies; 45+ messages in thread From: sashiko-bot @ 2026-04-17 4:08 UTC (permalink / raw) To: Yonghong Song; +Cc: bpf Sashiko AI review found 2 potential issue(s): - [Critical] False negative allowing tail calls and stack arguments together, leading to JIT memory corruption - [High] State leak in verifier DFS causing false positive rejections of valid BPF programs -- commit df24720ae892549300f1910df635621a3875dbdf Author: Yonghong Song <yonghong.song@linux.dev> bpf: Reject stack arguments if tail call reachable This commit updates the verifier to reject BPF programs if they contain both tail calls and stack arguments. This is enforced during the maximum stack depth check traversal. > Tailcalls are been deprecated. So reject stack arguments > if tail call is in the way. This isn't a bug, but there is a grammatical error here. "are been" should likely be "have been". 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c > index 7a65b532e84a8..739c3127520c3 100644 > --- a/kernel/bpf/verifier.c > +++ b/kernel/bpf/verifier.c > @@ -5662,6 +5662,11 @@ static int check_max_stack_depth_subprog(struct bpf_verifier_env *env, int idx, > verbose(env, "cannot tail call within exception cb\n"); > return -EINVAL; > } > + if (subprog[tmp].incoming_stack_arg_depth || > + subprog[tmp].outgoing_stack_arg_depth) { > + verbose(env, "tail_calls are not allowed in programs with stack args\n"); > + return -EINVAL; > + } Looking at the earlier parts of check_max_stack_depth_subprog(), the local variable tail_call_reachable only appears to be set to true when jumping into a callee that has a tail call: if (subprog[idx].has_tail_call) tail_call_reachable = true; If the main program (subprog[0]) natively contains a tail call, tail_call_reachable would remain false, causing this entire check block to be skipped. Does this logic successfully catch the case where the main program itself contains a tail call and also uses stack arguments? If this escapes the verifier, the JIT might emit prologue tail call instructions (pushing 24 bytes on x86) while calculating stack argument offsets assuming no tail call pushes, overwriting the BPF program stack. Could this regression lead to memory corruption in the JIT? When the traversal pops a frame to return to the caller and jumps back to continue_func, the tail_call_reachable variable is not reset to false: /* pop caller idx from callee */ idx = dinfo[idx].caller; ... goto continue_func; If the verifier traverses a subprogram with a tail call, returns, and then traverses a completely unrelated sibling subprogram that uses stack arguments, it seems it would incorrectly evaluate tail_call_reachable as true and reject the valid sibling. Is there a state leak in this DFS traversal that could cause false positive rejections of valid BPF programs? 
> subprog[tmp].tail_call_reachable = true; > } > if (subprog[0].tail_call_reachable) -- Sashiko AI review · https://sashiko.dev/#/patchset/20260417034658.2625353-1-yonghong.song@linux.dev?part=10 ^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 10/16] bpf: Reject stack arguments if tail call reachable 2026-04-17 3:47 ` [PATCH bpf-next v5 10/16] bpf: Reject stack arguments if tail call reachable Yonghong Song 2026-04-17 4:08 ` sashiko-bot @ 2026-04-17 4:30 ` bot+bpf-ci 2026-04-18 1:04 ` bot+bpf-ci 2 siblings, 0 replies; 45+ messages in thread From: bot+bpf-ci @ 2026-04-17 4:30 UTC (permalink / raw) To: yonghong.song, bpf Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai [-- Attachment #1: Type: text/plain, Size: 2723 bytes --] > commit b77ef401ea17eab379f390ca5d89aeab4b271ca7 > Author: Yonghong Song <yonghong.song@linux.dev> > Date: Thu Apr 16 20:47:49 2026 -0700 > > bpf: Reject stack arguments if tail call reachable > > Tailcalls are been deprecated. So reject stack arguments > if tail call is in the way. The commit message has a grammatical error: "Tailcalls are been deprecated" should be either "have been" or "are being". Also, is the claim that tail calls are deprecated accurate? Based on earlier mailing list discussion, the restriction exists because tail calls and stack arguments require incompatible stack frame handling, not because tail calls are being deprecated. Could the commit message be more precise about the actual technical reason for the restriction? This also appears to be a bug fix for the stack arguments feature. Should this include a Fixes tag for the original stack arguments commit? 
Suggested: Fixes: 21229535935384b01e3cef3c117233b23b2f3902 ("bpf: Support stack arguments for bpf functions") > > Signed-off-by: Yonghong Song <yonghong.song@linux.dev> > > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c > index 7a65b532e84a..739c3127520c 100644 > --- a/kernel/bpf/verifier.c > +++ b/kernel/bpf/verifier.c > @@ -5662,6 +5662,11 @@ static int check_max_stack_depth_subprog(struct bpf_verifier_env *env, int idx, > verbose(env, "cannot tail call within exception cb\n"); > return -EINVAL; > } > + if (subprog[tmp].incoming_stack_arg_depth || > + subprog[tmp].outgoing_stack_arg_depth) { > + verbose(env, "tail_calls are not allowed in programs with stack args\n"); > + return -EINVAL; > + } > subprog[tmp].tail_call_reachable = true; > } > if (subprog[0].tail_call_reachable) When the traversal pops a frame to return to the caller and jumps back to continue_func, is the tail_call_reachable variable reset to false? Looking at the broader context of check_max_stack_depth_subprog(), after processing a callee, the code pops back to the caller: /* pop caller idx from callee */ idx = dinfo[idx].caller; ... goto continue_func; If the verifier traverses a subprogram with a tail call, returns, and then traverses a completely unrelated sibling subprogram that uses stack arguments, would it incorrectly evaluate tail_call_reachable as true and reject the valid sibling? Is there a state leak in this DFS traversal that could cause false positive rejections of valid BPF programs? --- AI reviewed your patch. Please fix the bug or email reply why it's not a bug. See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24546989054 ^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 10/16] bpf: Reject stack arguments if tail call reachable 2026-04-17 3:47 ` [PATCH bpf-next v5 10/16] bpf: Reject stack arguments if tail call reachable Yonghong Song 2026-04-17 4:08 ` sashiko-bot 2026-04-17 4:30 ` bot+bpf-ci @ 2026-04-18 1:04 ` bot+bpf-ci 2 siblings, 0 replies; 45+ messages in thread From: bot+bpf-ci @ 2026-04-18 1:04 UTC (permalink / raw) To: yonghong.song, bpf Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai [-- Attachment #1: Type: text/plain, Size: 2665 bytes --] > bpf: Reject stack arguments if tail call reachable > > Tailcalls are been deprecated. So reject stack arguments This isn't a bug, but there's a grammar error: 'are been' should be 'have been' or 'are being'. The commit message states that "Tailcalls are been deprecated." Is this claim accurate? Based on earlier mailing list discussion, the restriction exists because tail calls and stack arguments require incompatible stack frame handling, not because tail calls are being deprecated. Could the commit message be more precise about the actual technical reason for the restriction? This commit appears to fix a bug but lacks a Fixes: tag. 
The suggested tag would be: Fixes: 13ee5d67a7c9 ("bpf: Support stack arguments for bpf functions") > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c > index 7a65b532e84a..739c3127520c 100644 > --- a/kernel/bpf/verifier.c > +++ b/kernel/bpf/verifier.c > @@ -5662,6 +5662,11 @@ static int check_max_stack_depth_subprog(struct bpf_verifier_env *env, int idx, > verbose(env, "cannot tail call within exception cb\n"); > return -EINVAL; > } > + if (subprog[tmp].incoming_stack_arg_depth || > + subprog[tmp].outgoing_stack_arg_depth) { > + verbose(env, "tail_calls are not allowed in programs with stack args\n"); > + return -EINVAL; > + } > subprog[tmp].tail_call_reachable = true; > } > if (subprog[0].tail_call_reachable) When the DFS traversal in check_max_stack_depth_subprog() pops a frame to return to the caller (moving from a child subprogram back to its parent), is the tail_call_reachable variable reset to false? Looking at the code, tail_call_reachable is initialized at the function start but appears to persist across the entire traversal. Consider this scenario: 1. The verifier traverses a subprogram that contains a tail call, setting tail_call_reachable = true 2. The traversal returns to the parent (pops the frame via idx = dinfo[idx].caller) 3. The traversal then visits a sibling subprogram that uses stack arguments In this scenario, wouldn't tail_call_reachable still be true from step 1, causing the check in the new code to incorrectly reject the sibling subprogram even though it has no tail calls in its own call chain? This could cause false positive rejections of valid BPF programs where one subprogram uses tail calls and a completely unrelated sibling subprogram uses stack arguments. --- AI reviewed your patch. Please fix the bug or email reply why it's not a bug. 
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24592562564

^ permalink raw reply [flat|nested] 45+ messages in thread
* [PATCH bpf-next v5 11/16] bpf: Support stack arguments for kfunc calls 2026-04-17 3:46 [PATCH bpf-next v5 00/16] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song ` (9 preceding siblings ...) 2026-04-17 3:47 ` [PATCH bpf-next v5 10/16] bpf: Reject stack arguments if tail call reachable Yonghong Song @ 2026-04-17 3:47 ` Yonghong Song 2026-04-17 4:40 ` sashiko-bot ` (2 more replies) 2026-04-17 3:47 ` [PATCH bpf-next v5 12/16] bpf: Enable stack argument support for x86_64 Yonghong Song ` (4 subsequent siblings) 15 siblings, 3 replies; 45+ messages in thread From: Yonghong Song @ 2026-04-17 3:47 UTC (permalink / raw) To: bpf Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Jose E . Marchesi, kernel-team, Martin KaFai Lau Extend the stack argument mechanism to kfunc calls, allowing kfuncs with more than 5 parameters to receive additional arguments via the r11-based stack arg area. For kfuncs, the caller is a BPF program and the callee is a kernel function. The BPF program writes outgoing args at negative r11 offsets, following the same convention as BPF-to-BPF calls: Outgoing: r11 - 8 (arg6), ..., r11 - N*8 (last arg) The following is an example: int foo(int a1, int a2, int a3, int a4, int a5, int a6, int a7) { ... kfunc1(a1, a2, a3, a4, a5, a6, a7, a8); ... kfunc2(a1, a2, a3, a4, a5, a6, a7, a8, a9); ... } Caller (foo), generated by llvm =============================== Incoming (positive offsets): r11+8: [incoming arg 6] r11+16: [incoming arg 7] Outgoing for kfunc1 (negative offsets): r11-8: [outgoing arg 6] r11-16: [outgoing arg 7] r11-24: [outgoing arg 8] Outgoing for kfunc2 (negative offsets): r11-8: [outgoing arg 6] r11-16: [outgoing arg 7] r11-24: [outgoing arg 8] r11-32: [outgoing arg 9] Later JIT will marshal outgoing arguments to the native calling convention for kfunc1() and kfunc2(). There are two places where meta->release_regno needs to keep regno for later releasing the reference. 
Also, 'cur_aux(env)->arg_prog = regno' is also keeping regno for later fixup. Since stack arguments don't have a valid register number (regno is set to -1), these three cases are rejected for now if the argument is on the stack. Signed-off-by: Yonghong Song <yonghong.song@linux.dev> --- kernel/bpf/verifier.c | 114 ++++++++++++++++++++++++++++++++++-------- 1 file changed, 94 insertions(+), 20 deletions(-) diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index 739c3127520c..a3f307909e40 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -4497,6 +4497,14 @@ static int check_stack_arg_access(struct bpf_verifier_env *env, return 0; } +/* Check that a stack arg slot has been properly initialized. */ +static bool is_stack_arg_slot_initialized(struct bpf_func_state *state, int spi) +{ + if (spi >= (int)(state->stack_arg_depth / BPF_REG_SIZE)) + return false; + return state->stack_arg_regs[spi].type != NOT_INIT; +} + static int out_arg_idx_from_off(int off) { return -off / BPF_REG_SIZE - 1; @@ -7355,8 +7363,6 @@ static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg u32 argno = make_argno(mem_argno); int err; - WARN_ON_ONCE(mem_argno > BPF_REG_3); - memset(&meta, 0, sizeof(meta)); if (may_be_null) { @@ -11653,6 +11659,19 @@ bool bpf_is_kfunc_pkt_changing(struct bpf_kfunc_call_arg_meta *meta) return meta->func_id == special_kfunc_list[KF_bpf_xdp_pull_data]; } +static struct bpf_reg_state *get_kfunc_arg_reg(struct bpf_verifier_env *env, int argno) +{ + struct bpf_func_state *caller; + int spi; + + if (argno < MAX_BPF_FUNC_REG_ARGS) + return &cur_regs(env)[argno + 1]; + + caller = cur_func(env); + spi = out_arg_spi(caller, argno - MAX_BPF_FUNC_REG_ARGS); + return &caller->stack_arg_regs[spi]; +} + static enum kfunc_ptr_arg_type get_kfunc_ptr_arg_type(struct bpf_verifier_env *env, struct bpf_kfunc_call_arg_meta *meta, @@ -11660,8 +11679,6 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env, const char *ref_tname, const 
struct btf_param *args, int argno, int nargs, struct bpf_reg_state *reg) { - u32 regno = argno + 1; - struct bpf_reg_state *regs = cur_regs(env); bool arg_mem_size = false; if (meta->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx] || @@ -11670,8 +11687,8 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env, return KF_ARG_PTR_TO_CTX; if (argno + 1 < nargs && - (is_kfunc_arg_mem_size(meta->btf, &args[argno + 1], ®s[regno + 1]) || - is_kfunc_arg_const_mem_size(meta->btf, &args[argno + 1], ®s[regno + 1]))) + (is_kfunc_arg_mem_size(meta->btf, &args[argno + 1], get_kfunc_arg_reg(env, argno + 1)) || + is_kfunc_arg_const_mem_size(meta->btf, &args[argno + 1], get_kfunc_arg_reg(env, argno + 1)))) arg_mem_size = true; /* In this function, we verify the kfunc's BTF as per the argument type, @@ -12344,9 +12361,9 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ args = (const struct btf_param *)(meta->func_proto + 1); nargs = btf_type_vlen(meta->func_proto); - if (nargs > MAX_BPF_FUNC_REG_ARGS) { + if (nargs > MAX_BPF_FUNC_ARGS) { verbose(env, "Function %s has %d > %d args\n", func_name, nargs, - MAX_BPF_FUNC_REG_ARGS); + MAX_BPF_FUNC_ARGS); return -EINVAL; } @@ -12354,20 +12371,44 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ * verifier sees. */ for (i = 0; i < nargs; i++) { - struct bpf_reg_state *regs = cur_regs(env), *reg = ®s[i + 1]; + struct bpf_reg_state *regs = cur_regs(env), *reg; const struct btf_type *t, *ref_t, *resolve_ret; enum bpf_arg_type arg_type = ARG_DONTCARE; u32 argno = make_argno(i); - u32 regno = i + 1, ref_id, type_size; + struct bpf_reg_state tmp_reg; + int regno = i + 1; + u32 ref_id, type_size; bool is_ret_buf_sz = false; int kf_arg_type; + if (i < MAX_BPF_FUNC_REG_ARGS) { + reg = ®s[i + 1]; + } else { + /* Retrieve the reg state from the outgoing stack arg slot. 
*/ + struct bpf_func_state *caller = cur_func(env); + int spi = out_arg_spi(caller, i - MAX_BPF_FUNC_REG_ARGS); + + if (!is_stack_arg_slot_initialized(caller, spi)) { + verbose(env, "stack %s not properly initialized\n", + reg_arg_name(env, argno)); + return -EINVAL; + } + + tmp_reg = caller->stack_arg_regs[spi]; + reg = &tmp_reg; + regno = -1; + } + if (is_kfunc_arg_prog_aux(btf, &args[i])) { /* Reject repeated use bpf_prog_aux */ if (meta->arg_prog) { verifier_bug(env, "Only 1 prog->aux argument supported per-kfunc"); return -EFAULT; } + if (regno < 0) { + verbose(env, "%s prog->aux cannot be a stack argument\n", reg_arg_name(env, argno)); + return -EINVAL; + } meta->arg_prog = true; cur_aux(env)->arg_prog = regno; continue; @@ -12394,9 +12435,11 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ reg_arg_name(env, argno)); return -EINVAL; } - ret = mark_chain_precision(env, regno); - if (ret < 0) - return ret; + if (regno > 0) { + ret = mark_chain_precision(env, regno); + if (ret < 0) + return ret; + } meta->arg_constant.found = true; meta->arg_constant.value = reg->var_off.value; } else if (is_kfunc_arg_scalar_with_name(btf, &args[i], "rdonly_buf_size")) { @@ -12419,9 +12462,11 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ } meta->r0_size = reg->var_off.value; - ret = mark_chain_precision(env, regno); - if (ret) - return ret; + if (regno > 0) { + ret = mark_chain_precision(env, regno); + if (ret) + return ret; + } } continue; } @@ -12447,8 +12492,13 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ return -EFAULT; } meta->ref_obj_id = reg->ref_obj_id; - if (is_kfunc_release(meta)) + if (is_kfunc_release(meta)) { + if (regno < 0) { + verbose(env, "%s release arg cannot be a stack argument\n", reg_arg_name(env, argno)); + return -EINVAL; + } meta->release_regno = regno; + } } ref_t = btf_type_skip_modifiers(btf, t->type, &ref_id); @@ -12607,6 +12657,10 @@ 
static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ dynptr_arg_type |= DYNPTR_TYPE_FILE; } else if (meta->func_id == special_kfunc_list[KF_bpf_dynptr_file_discard]) { dynptr_arg_type |= DYNPTR_TYPE_FILE; + if (regno < 0) { + verbose(env, "%s release arg cannot be a stack argument\n", reg_arg_name(env, argno)); + return -EINVAL; + } meta->release_regno = regno; } else if (meta->func_id == special_kfunc_list[KF_bpf_dynptr_clone] && (dynptr_arg_type & MEM_UNINIT)) { @@ -12761,9 +12815,9 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ break; case KF_ARG_PTR_TO_MEM_SIZE: { - struct bpf_reg_state *buff_reg = ®s[regno]; + struct bpf_reg_state *buff_reg = reg; const struct btf_param *buff_arg = &args[i]; - struct bpf_reg_state *size_reg = ®s[regno + 1]; + struct bpf_reg_state *size_reg = get_kfunc_arg_reg(env, i + 1); const struct btf_param *size_arg = &args[i + 1]; if (!bpf_register_is_null(buff_reg) || !is_kfunc_arg_nullable(meta->btf, buff_arg)) { @@ -13667,7 +13721,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, nargs = btf_type_vlen(meta.func_proto); args = (const struct btf_param *)(meta.func_proto + 1); - for (i = 0; i < nargs; i++) { + for (i = 0; i < min_t(int, nargs, MAX_BPF_FUNC_REG_ARGS); i++) { u32 regno = i + 1; t = btf_type_skip_modifiers(desc_btf, args[i].type, NULL); @@ -13678,6 +13732,16 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, mark_btf_func_reg_size(env, regno, t->size); } + /* Track outgoing stack arg depth for kfuncs with >5 args */ + if (nargs > MAX_BPF_FUNC_REG_ARGS) { + struct bpf_func_state *caller = cur_func(env); + struct bpf_subprog_info *caller_info = &env->subprog_info[caller->subprogno]; + u16 kfunc_stack_arg_depth = (nargs - MAX_BPF_FUNC_REG_ARGS) * BPF_REG_SIZE; + + if (kfunc_stack_arg_depth > caller_info->outgoing_stack_arg_depth) + caller_info->outgoing_stack_arg_depth = kfunc_stack_arg_depth; + 
} + if (bpf_is_iter_next_kfunc(&meta)) { err = process_iter_next_call(env, insn_idx, &meta); if (err) @@ -20145,6 +20209,16 @@ int bpf_fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, if (!bpf_jit_supports_far_kfunc_call()) insn->imm = BPF_CALL_IMM(desc->addr); + /* + * After resolving the kfunc address, insn->off is no longer needed + * for BTF fd index. Repurpose it to store the number of stack args + * so the JIT can marshal them. + */ + if (desc->func_model.nr_args > MAX_BPF_FUNC_REG_ARGS) + insn->off = desc->func_model.nr_args - MAX_BPF_FUNC_REG_ARGS; + else + insn->off = 0; + if (is_bpf_obj_new_kfunc(desc->func_id) || is_bpf_percpu_obj_new_kfunc(desc->func_id)) { struct btf_struct_meta *kptr_struct_meta = env->insn_aux_data[insn_idx].kptr_struct_meta; struct bpf_insn addr[2] = { BPF_LD_IMM64(BPF_REG_2, (long)kptr_struct_meta) }; -- 2.52.0 ^ permalink raw reply related [flat|nested] 45+ messages in thread
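The slot arithmetic in the hunks above is small enough to check in isolation. out_arg_idx_from_off() is reproduced from the patch; out_arg_off_from_argno() is a hypothetical inverse added here purely to make the mapping explicit (outgoing arg6 at r11 - 8, arg7 at r11 - 16, and so on, matching the kfunc2 example in the commit message where arg9 lands at r11 - 32).

```c
#define BPF_REG_SIZE 8
#define MAX_BPF_FUNC_REG_ARGS 5

/* Reproduced from the patch: map a negative r11 offset to a slot index,
 * so r11 - 8 is slot 0 (arg6), r11 - 16 is slot 1 (arg7), ... */
static int out_arg_idx_from_off(int off)
{
	return -off / BPF_REG_SIZE - 1;
}

/* Hypothetical inverse, for illustration only: which r11 offset holds
 * overall argument 'argno' (0-based), for argno >= MAX_BPF_FUNC_REG_ARGS. */
static int out_arg_off_from_argno(int argno)
{
	return -(argno - MAX_BPF_FUNC_REG_ARGS + 1) * BPF_REG_SIZE;
}
```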
* Re: [PATCH bpf-next v5 11/16] bpf: Support stack arguments for kfunc calls 2026-04-17 3:47 ` [PATCH bpf-next v5 11/16] bpf: Support stack arguments for kfunc calls Yonghong Song @ 2026-04-17 4:40 ` sashiko-bot 2026-04-17 4:43 ` bot+bpf-ci 2026-04-18 1:04 ` bot+bpf-ci 2 siblings, 0 replies; 45+ messages in thread From: sashiko-bot @ 2026-04-17 4:40 UTC (permalink / raw) To: Yonghong Song; +Cc: bpf Sashiko AI review found 3 potential issue(s): - [Critical] Precision tracking (`mark_chain_precision`) is incorrectly bypassed or misapplied for kfunc scalar arguments passed via the stack, allowing malicious BPF programs to bypass verifier safety checks through state pruning. - [High] `get_kfunc_arg_reg()` fetches lookahead stack arguments without bounds checking, leading to an out-of-bounds memory read or NULL pointer dereference when resolving memory sizes. - [High] Overwriting `insn->off` in `bpf_fixup_kfunc_call` destroys the BTF file descriptor index, breaking module kfunc calls on architectures that use JIT far calls. -- commit 15b2d48bd7c1080e972b83211cc9794ac2ba763f Author: Yonghong Song <yonghong.song@linux.dev> bpf: Support stack arguments for kfunc calls Extend the stack argument mechanism to kfunc calls, allowing kfuncs with more than 5 parameters to receive additional arguments via the r11-based stack arg area. > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c > index 739c3127520c3..a3f307909e401 100644 > --- a/kernel/bpf/verifier.c > +++ b/kernel/bpf/verifier.c [ ... 
] > @@ -11653,6 +11659,19 @@ bool bpf_is_kfunc_pkt_changing(struct bpf_kfunc_call_arg_meta *meta) > return meta->func_id == special_kfunc_list[KF_bpf_xdp_pull_data]; > } > > +static struct bpf_reg_state *get_kfunc_arg_reg(struct bpf_verifier_env *env, int argno) > +{ > + struct bpf_func_state *caller; > + int spi; > + > + if (argno < MAX_BPF_FUNC_REG_ARGS) > + return &cur_regs(env)[argno + 1]; > + > + caller = cur_func(env); > + spi = out_arg_spi(caller, argno - MAX_BPF_FUNC_REG_ARGS); > + return &caller->stack_arg_regs[spi]; > +} Does get_kfunc_arg_reg() need to verify that spi is within bounds and the stack slot is initialized? When verifying pointer arguments that have a trailing size argument (KF_ARG_PTR_TO_MEM_SIZE), the verifier looks ahead at argno + 1. If a BPF program maliciously omits the stack size argument, could spi exceed caller->stack_arg_depth and lead to an out-of-bounds memory read? [ ... ] > @@ -12394,9 +12435,11 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ > reg_arg_name(env, argno)); > return -EINVAL; > } > - ret = mark_chain_precision(env, regno); > - if (ret < 0) > - return ret; > + if (regno > 0) { > + ret = mark_chain_precision(env, regno); > + if (ret < 0) > + return ret; > + } Stack arguments use regno = -1, so it seems mark_chain_precision() is bypassed for scalar constants and size arguments passed on the stack. If precision tracking is bypassed, could the verifier's state equivalence logic consider states with different scalar values in these stack slots as equivalent and incorrectly prune them? [ ... ] > @@ -20145,6 +20209,16 @@ int bpf_fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, > if (!bpf_jit_supports_far_kfunc_call()) > insn->imm = BPF_CALL_IMM(desc->addr); > > + /* > + * After resolving the kfunc address, insn->off is no longer needed > + * for BTF fd index. Repurpose it to store the number of stack args > + * so the JIT can marshal them. 
> + */ > + if (desc->func_model.nr_args > MAX_BPF_FUNC_REG_ARGS) > + insn->off = desc->func_model.nr_args - MAX_BPF_FUNC_REG_ARGS; > + else > + insn->off = 0; For architectures where bpf_jit_supports_far_kfunc_call() returns true (such as s390x, loongarch, and powerpc), they resolve the address later using bpf_get_kfunc_addr(), which requires the BTF file descriptor index stored in insn->off. If insn->off is overwritten here, could it destroy the BTF file descriptor index and break module kfunc calls during JIT compilation? -- Sashiko AI review · https://sashiko.dev/#/patchset/20260417034658.2625353-1-yonghong.song@linux.dev?part=11 ^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 11/16] bpf: Support stack arguments for kfunc calls 2026-04-17 3:47 ` [PATCH bpf-next v5 11/16] bpf: Support stack arguments for kfunc calls Yonghong Song 2026-04-17 4:40 ` sashiko-bot @ 2026-04-17 4:43 ` bot+bpf-ci 2026-04-18 1:04 ` bot+bpf-ci 2 siblings, 0 replies; 45+ messages in thread From: bot+bpf-ci @ 2026-04-17 4:43 UTC (permalink / raw) To: yonghong.song, bpf Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai [-- Attachment #1: Type: text/plain, Size: 2566 bytes --] > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c > --- a/kernel/bpf/verifier.c > +++ b/kernel/bpf/verifier.c [ ... ] > @@ -20145,6 +20209,16 @@ int bpf_fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, > if (!bpf_jit_supports_far_kfunc_call()) > insn->imm = BPF_CALL_IMM(desc->addr); > > + /* > + * After resolving the kfunc address, insn->off is no longer needed > + * for BTF fd index. Repurpose it to store the number of stack args > + * so the JIT can marshal them. > + */ > + if (desc->func_model.nr_args > MAX_BPF_FUNC_REG_ARGS) > + insn->off = desc->func_model.nr_args - MAX_BPF_FUNC_REG_ARGS; > + else > + insn->off = 0; > + Is insn->off really "no longer needed for BTF fd index" after this point? Looking at bpf_jit_find_kfunc_model() in kernel/bpf/fixups.c, the function is called during JIT compilation from every BPF JIT backend that supports kfunc calls (x86_32, s390, riscv64, ppc64, loongarch). It performs a bsearch on the kfunc_tab keyed by (insn->imm, insn->off) using kfunc_desc_cmp_by_imm_off: kernel/bpf/fixups.c:kfunc_desc_cmp_by_imm_off() { ... if (a->imm != b->imm) return a->imm < b->imm ? -1 : 1; if (a->offset != b->offset) return a->offset < b->offset ? -1 : 1; return 0; } The desc->offset was populated from the original insn->off during bpf_add_kfunc_call(), where for module kfuncs it is a non-zero BTF fd index. 
The call order is: bpf_check() -> bpf_do_misc_fixups() -> bpf_fixup_kfunc_call() (overwrites insn->off to 0 or stack_arg_count) -> bpf_prog_select_runtime() -> bpf_int_jit_compile() -> arch JIT -> bpf_jit_find_kfunc_model() (reads insn->off for bsearch key) After this patch, for a kfunc with 5 or fewer args that lives in a module (non-zero btf_fd_idx), insn->off is zeroed, and the bsearch will fail to find the entry that was stored with offset=btf_fd_idx. For kfuncs with more than 5 args, the new value of insn->off (stack arg count) also will not match desc->offset (btf_fd_idx or 0), so module kfuncs with stack args are also broken, and vmlinux kfuncs with more than 5 args get a nonzero insn->off that no longer matches desc->offset=0. Can the stack arg count be communicated to the JIT without clobbering the btf_fd_idx that bsearch still keys on? --- AI reviewed your patch. Please fix the bug or email reply why it's not a bug. See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24546989054 ^ permalink raw reply [flat|nested] 45+ messages in thread
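The lookup failure described above can be reproduced with a stand-alone model of the (imm, off)-keyed bsearch. The comparator mirrors the quoted kfunc_desc_cmp_by_imm_off(); the struct and helper names are stand-ins, and the imm/offset values are made up. Once insn->off no longer matches the offset the descriptor was stored with (the btf_fd_idx), the search comes back empty.

```c
#include <stddef.h>
#include <stdlib.h>

/* Stand-in for a kfunc descriptor keyed on (imm, offset). */
struct model_kfunc_desc {
	int imm;
	int offset;
};

/* Mirrors the quoted comparator: order by imm, then by offset. */
static int desc_cmp_by_imm_off(const void *a, const void *b)
{
	const struct model_kfunc_desc *da = a, *db = b;

	if (da->imm != db->imm)
		return da->imm < db->imm ? -1 : 1;
	if (da->offset != db->offset)
		return da->offset < db->offset ? -1 : 1;
	return 0;
}

/* Look up a descriptor the way the JIT would, from (insn_imm, insn_off). */
static const struct model_kfunc_desc *
find_desc(const struct model_kfunc_desc *tab, size_t n, int insn_imm, int insn_off)
{
	struct model_kfunc_desc key = { .imm = insn_imm, .offset = insn_off };

	return bsearch(&key, tab, n, sizeof(*tab), desc_cmp_by_imm_off);
}

/* Example table, sorted by (imm, offset): one "module kfunc" stored with
 * btf_fd_idx 3 and one "vmlinux kfunc" stored with offset 0. */
static const struct model_kfunc_desc model_tab[2] = {
	{ .imm = 100, .offset = 3 },
	{ .imm = 200, .offset = 0 },
};
```

In this model, looking up imm 100 with off 3 succeeds, while looking it up after off has been rewritten to 0 fails, which is the mismatch the review describes for module kfuncs.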
* Re: [PATCH bpf-next v5 11/16] bpf: Support stack arguments for kfunc calls 2026-04-17 3:47 ` [PATCH bpf-next v5 11/16] bpf: Support stack arguments for kfunc calls Yonghong Song 2026-04-17 4:40 ` sashiko-bot 2026-04-17 4:43 ` bot+bpf-ci @ 2026-04-18 1:04 ` bot+bpf-ci 2 siblings, 0 replies; 45+ messages in thread From: bot+bpf-ci @ 2026-04-18 1:04 UTC (permalink / raw) To: yonghong.song, bpf Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai [-- Attachment #1: Type: text/plain, Size: 5392 bytes --] > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c > --- a/kernel/bpf/verifier.c > +++ b/kernel/bpf/verifier.c [ ... ] > @@ -11653,6 +11659,19 @@ bool bpf_is_kfunc_pkt_changing(struct bpf_kfunc_call_arg_meta *meta) > +static struct bpf_reg_state *get_kfunc_arg_reg(struct bpf_verifier_env *env, int argno) > +{ > + struct bpf_func_state *caller; > + int spi; > + > + if (argno < MAX_BPF_FUNC_REG_ARGS) > + return &cur_regs(env)[argno + 1]; > + > + caller = cur_func(env); > + spi = out_arg_spi(caller, argno - MAX_BPF_FUNC_REG_ARGS); > + return &caller->stack_arg_regs[spi]; > +} Can the verifier crash here? Looking at get_kfunc_arg_reg, when argno >= MAX_BPF_FUNC_REG_ARGS, the function returns &caller->stack_arg_regs[spi] without checking whether stack_arg_regs is NULL. The stack_arg_regs array is only allocated by grow_stack_arg_slots, which is called from check_stack_arg_write when the program writes outgoing stack args via r11. If a malformed BPF program invokes a kfunc without emitting any stack stores beforehand, stack_arg_regs will remain NULL. Consider get_kfunc_ptr_arg_type calling get_kfunc_arg_reg to peek at argument i+1 when checking for buffer+__sz pairs: kernel/bpf/verifier.c:get_kfunc_ptr_arg_type() { if (argno + 1 < nargs && (is_kfunc_arg_mem_size(meta->btf, &args[argno + 1], get_kfunc_arg_reg(env, argno + 1)) || ... 
} When argno=4 (pointer in r5) and argno+1=5 (size in first stack slot), get_kfunc_arg_reg(env, 5) enters the stack-arg branch and dereferences caller->stack_arg_regs even though it may be NULL. If the arg type is a scalar (which __sz/__szk suffixes imply), is_kfunc_arg_mem_size will try to read reg->type, causing a NULL pointer dereference. The is_stack_arg_slot_initialized gate you added in check_kfunc_args only fires when the loop reaches i=5; by then the peek at i=4 has already dereferenced the NULL pointer. Should get_kfunc_arg_reg verify that stack_arg_regs is non-NULL and spi < stack_arg_depth/BPF_REG_SIZE before returning the pointer? Alternatively, should check_kfunc_call add an out_stack_arg_mask pre-check mirroring the bpf-to-bpf gate that prevents check_func_arg_reg_off from reading uninitialized stack slots? > @@ -11660,8 +11679,6 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env, > const char *ref_tname, const struct btf_param *args, > int argno, int nargs, struct bpf_reg_state *reg) > { > - u32 regno = argno + 1; > - struct bpf_reg_state *regs = cur_regs(env); > bool arg_mem_size = false; > > if (meta->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx] || > @@ -11670,8 +11687,8 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env, > return KF_ARG_PTR_TO_CTX; > > if (argno + 1 < nargs && > - (is_kfunc_arg_mem_size(meta->btf, &args[argno + 1], ®s[regno + 1]) || > - is_kfunc_arg_const_mem_size(meta->btf, &args[argno + 1], ®s[regno + 1]))) > + (is_kfunc_arg_mem_size(meta->btf, &args[argno + 1], get_kfunc_arg_reg(env, argno + 1)) || > + is_kfunc_arg_const_mem_size(meta->btf, &args[argno + 1], get_kfunc_arg_reg(env, argno + 1)))) > arg_mem_size = true; [ ... ] > @@ -20145,6 +20209,16 @@ int bpf_fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn, > if (!bpf_jit_supports_far_kfunc_call()) > insn->imm = BPF_CALL_IMM(desc->addr); > > + /* > + * After resolving the kfunc address, insn->off is no longer needed > + * for BTF fd index. 
Repurpose it to store the number of stack args > + * so the JIT can marshal them. > + */ > + if (desc->func_model.nr_args > MAX_BPF_FUNC_REG_ARGS) > + insn->off = desc->func_model.nr_args - MAX_BPF_FUNC_REG_ARGS; > + else > + insn->off = 0; > + This concern was raised by the bpf-ci bot and sashiko-bot in the v5 review thread but does not appear to have been addressed. Overwriting insn->off destroys the BTF file descriptor index required by bpf_jit_find_kfunc_model() for module kfunc lookups. Looking at the bsearch in kernel/bpf/fixups.c, it uses (insn->imm, insn->off) as the key: kernel/bpf/fixups.c:bpf_jit_find_kfunc_model() { key.addr = insn->imm; key.offset = insn->off; entry = bsearch(&key, tab, tab_count, sizeof(*tab), kfunc_addr_cmp); } For module kfuncs, insn->off stores a non-zero btf_fd_idx set during find_kfunc_desc. After this patch overwrites it with the stack arg count (or 0), the bsearch will fail to find descriptors that were stored with offset=btf_fd_idx. This breaks module kfunc calls on architectures using JIT far calls (s390x, loongarch, powerpc) for any module kfunc with 5 or fewer args (insn->off zeroed) or with more than 5 args (insn->off now contains the stack arg count instead of btf_fd_idx, still mismatching desc->offset). Vmlinux kfuncs with more than 5 args also get a nonzero insn->off that no longer matches desc->offset=0. Can the stack arg count be communicated to the JIT without clobbering the btf_fd_idx that bsearch still keys on? The original concern was raised here: https://lore.kernel.org/bpf/e5a6e801c10994cc2e03b0cd57e76152eb234493a6b7395a9859feade4a6575b@mail.kernel.org/ --- AI reviewed your patch. Please fix the bug or email reply why it's not a bug. See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24592562564 ^ permalink raw reply [flat|nested] 45+ messages in thread
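The guarded accessor the review asks for can be sketched as follows. Types and names are simplified stand-ins for the verifier's (the real bpf_func_state carries much more state); the point is only the two checks that get_kfunc_arg_reg() currently skips: a NULL stack_arg_regs array and a slot index at or past the tracked depth.

```c
#include <stdbool.h>
#include <stddef.h>

#define BPF_REG_SIZE 8

/* Simplified stand-in for the relevant bpf_func_state fields. */
struct model_func_state {
	void *stack_arg_regs;		/* NULL until the first r11 store */
	unsigned int stack_arg_depth;	/* bytes of initialized slots */
};

/* Return false (instead of dereferencing) when the stack-arg state was
 * never allocated or the slot index is outside the tracked depth. */
static bool stack_arg_slot_valid(const struct model_func_state *st, int spi)
{
	if (!st->stack_arg_regs)
		return false;
	return spi >= 0 && spi < (int)(st->stack_arg_depth / BPF_REG_SIZE);
}

/* Exercise the guard: no allocation, then two 8-byte slots tracked. */
static bool demo_checks(void)
{
	static int dummy;
	const struct model_func_state empty = { NULL, 0 };
	const struct model_func_state two_slots = { &dummy, 16 };

	return !stack_arg_slot_valid(&empty, 0) &&
	       stack_arg_slot_valid(&two_slots, 0) &&
	       stack_arg_slot_valid(&two_slots, 1) &&
	       !stack_arg_slot_valid(&two_slots, 2) &&
	       !stack_arg_slot_valid(&two_slots, -1);
}
```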
* [PATCH bpf-next v5 12/16] bpf: Enable stack argument support for x86_64 2026-04-17 3:46 [PATCH bpf-next v5 00/16] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song ` (10 preceding siblings ...) 2026-04-17 3:47 ` [PATCH bpf-next v5 11/16] bpf: Support stack arguments for kfunc calls Yonghong Song @ 2026-04-17 3:47 ` Yonghong Song 2026-04-17 4:30 ` bot+bpf-ci ` (2 more replies) 2026-04-17 3:48 ` [PATCH bpf-next v5 13/16] bpf,x86: Implement JIT support for stack arguments Yonghong Song ` (3 subsequent siblings) 15 siblings, 3 replies; 45+ messages in thread From: Yonghong Song @ 2026-04-17 3:47 UTC (permalink / raw) To: bpf Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Jose E . Marchesi, kernel-team, Martin KaFai Lau Add stack argument support for x86_64. Signed-off-by: Yonghong Song <yonghong.song@linux.dev> --- arch/x86/net/bpf_jit_comp.c | 5 +++++ include/linux/filter.h | 1 + kernel/bpf/btf.c | 8 +++++++- kernel/bpf/core.c | 5 +++++ kernel/bpf/verifier.c | 5 +++++ 5 files changed, 23 insertions(+), 1 deletion(-) diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index e9b78040d703..32864dbc2c4e 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -3937,6 +3937,11 @@ bool bpf_jit_supports_kfunc_call(void) return true; } +bool bpf_jit_supports_stack_args(void) +{ + return true; +} + void *bpf_arch_text_copy(void *dst, void *src, size_t len) { if (text_poke_copy(dst, src, len) == NULL) diff --git a/include/linux/filter.h b/include/linux/filter.h index ae094328d973..e00348c50ac7 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -1161,6 +1161,7 @@ bool bpf_jit_inlines_helper_call(s32 imm); bool bpf_jit_supports_subprog_tailcalls(void); bool bpf_jit_supports_percpu_insn(void); bool bpf_jit_supports_kfunc_call(void); +bool bpf_jit_supports_stack_args(void); bool bpf_jit_supports_far_kfunc_call(void); bool bpf_jit_supports_exceptions(void); bool 
bpf_jit_supports_ptr_xchg(void); diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index c5f3aa05d5a3..3497e218c02d 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -7897,8 +7897,14 @@ int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog) tname, nargs, MAX_BPF_FUNC_REG_ARGS); return -EINVAL; } - if (nargs > MAX_BPF_FUNC_REG_ARGS) + if (nargs > MAX_BPF_FUNC_REG_ARGS) { + if (!bpf_jit_supports_stack_args()) { + bpf_log(log, "JIT does not support function %s() with %d args\n", + tname, nargs); + return -ENOTSUPP; + } sub->incoming_stack_arg_depth = (nargs - MAX_BPF_FUNC_REG_ARGS) * BPF_REG_SIZE; + } /* check that function is void or returns int, exception cb also requires this */ t = btf_type_by_id(btf, t->type); diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c index 10c36a74431d..49ab4470cc20 100644 --- a/kernel/bpf/core.c +++ b/kernel/bpf/core.c @@ -3157,6 +3157,11 @@ bool __weak bpf_jit_supports_kfunc_call(void) return false; } +bool __weak bpf_jit_supports_stack_args(void) +{ + return false; +} + bool __weak bpf_jit_supports_far_kfunc_call(void) { return false; diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c index a3f307909e40..4d538fa11b01 100644 --- a/kernel/bpf/verifier.c +++ b/kernel/bpf/verifier.c @@ -12366,6 +12366,11 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_ MAX_BPF_FUNC_ARGS); return -EINVAL; } + if (nargs > MAX_BPF_FUNC_REG_ARGS && !bpf_jit_supports_stack_args()) { + verbose(env, "JIT does not support kfunc %s() with %d args\n", + func_name, nargs); + return -ENOTSUPP; + } /* Check that BTF function arguments match actual types that the * verifier sees. -- 2.52.0 ^ permalink raw reply related [flat|nested] 45+ messages in thread
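The gating this patch adds boils down to two threshold checks. Below is a minimal model, assuming the constants from the cover letter (MAX_BPF_FUNC_REG_ARGS = 5, MAX_BPF_FUNC_ARGS = 12) and the error codes used in the diff; the function itself is illustrative, not kernel code, and the MODEL_-prefixed errno values are local stand-ins.

```c
#include <stdbool.h>

#define MAX_BPF_FUNC_REG_ARGS 5
#define MAX_BPF_FUNC_ARGS 12

/* Local stand-ins for the kernel errno values used in the patch. */
enum { MODEL_EINVAL = 22, MODEL_ENOTSUPP = 524 };

/*
 * Model of the argument-count gating: more than 12 args is always
 * rejected; 6..12 args additionally require JIT stack-arg support.
 */
static int check_nargs(int nargs, bool jit_supports_stack_args)
{
	if (nargs > MAX_BPF_FUNC_ARGS)
		return -MODEL_EINVAL;
	if (nargs > MAX_BPF_FUNC_REG_ARGS && !jit_supports_stack_args)
		return -MODEL_ENOTSUPP;
	return 0;
}
```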
* Re: [PATCH bpf-next v5 12/16] bpf: Enable stack argument support for x86_64 2026-04-17 3:47 ` [PATCH bpf-next v5 12/16] bpf: Enable stack argument support for x86_64 Yonghong Song @ 2026-04-17 4:30 ` bot+bpf-ci 2026-04-17 5:03 ` sashiko-bot 2026-04-18 1:04 ` bot+bpf-ci 2 siblings, 0 replies; 45+ messages in thread From: bot+bpf-ci @ 2026-04-17 4:30 UTC (permalink / raw) To: yonghong.song, bpf Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai [-- Attachment #1: Type: text/plain, Size: 1140 bytes --] commit 96653dbc686aeae7abf4f70b15720267e083674d Author: Yonghong Song <yonghong.song@linux.dev> bpf: Enable stack argument support for x86_64 Add stack argument support for x86_64. This commit fixes a bug but lacks a Fixes: tag. The commit adds JIT capability checks (bpf_jit_supports_stack_args) before allowing stack arguments, which was missing when stack argument support was originally introduced. Commit 212295359353 added code in btf.c that unconditionally set incoming_stack_arg_depth when nargs > MAX_BPF_FUNC_REG_ARGS without checking if the JIT actually supports stack arguments. Similarly, commit 173008b815e6 extended this to kfuncs in verifier.c without JIT capability checks. This could lead to programs with stack arguments being incorrectly accepted on architectures that don't support them. Suggested: Fixes: 21229535935384 ("bpf: Support stack arguments for bpf functions") --- AI reviewed your patch. Please fix the bug or email reply why it's not a bug. See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24546989054 ^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 12/16] bpf: Enable stack argument support for x86_64 2026-04-17 3:47 ` [PATCH bpf-next v5 12/16] bpf: Enable stack argument support for x86_64 Yonghong Song 2026-04-17 4:30 ` bot+bpf-ci @ 2026-04-17 5:03 ` sashiko-bot 2026-04-18 1:04 ` bot+bpf-ci 2 siblings, 0 replies; 45+ messages in thread From: sashiko-bot @ 2026-04-17 5:03 UTC (permalink / raw) To: Yonghong Song; +Cc: bpf Sashiko AI review found 1 potential issue(s): - [High] Enabling `bpf_jit_supports_stack_args()` for x86_64 without the corresponding JIT implementation introduces a bisection-breaking regression that can lead to arbitrary kernel memory writes and ABI violations. -- commit b40feee41e4472d0e27be38b4c99c940abe3c1ba Author: Yonghong Song <yonghong.song@linux.dev> bpf: Enable stack argument support for x86_64 This commit enables stack argument support for the x86_64 JIT and verifier to allow BPF programs and kfuncs to use more than 5 arguments. > diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c > index e9b78040d703e..32864dbc2c4e8 100644 > --- a/arch/x86/net/bpf_jit_comp.c > +++ b/arch/x86/net/bpf_jit_comp.c > @@ -3937,6 +3937,11 @@ bool bpf_jit_supports_kfunc_call(void) > return true; > } > > +bool bpf_jit_supports_stack_args(void) > +{ > + return true; > +} > + Does enabling this without the actual JIT implementation break bisection? If bpf_jit_supports_stack_args() returns true, the verifier will allow instructions using BPF_REG_PARAMS (register 11) to handle stack arguments. However, since the JIT support isn't implemented in this commit, BPF_REG_PARAMS is not yet defined in the JIT's reg2hex array. Could this cause reg2hex[11] to implicitly default to 0, which maps to RAX (BPF_REG_0)? If the verifier emits instructions using BPF_REG_PARAMS, the x86 JIT might miscompile them using RAX as the base register, potentially allowing an arbitrary kernel memory write since R0 is controllable by the BPF program. 
Additionally, it appears kfunc calls with >5 arguments won't have their arguments correctly marshaled to the System V ABI (like moving the 6th argument to R9), which could lead to reading uninitialized garbage from registers and the stack. I see the JIT implementation is added later in the series in commit 41dcf99330b6b ("bpf,x86: Implement JIT support for stack arguments"). Should the JIT implementation be merged before or in the same patch as this feature enablement to maintain bisection? [ ... ] -- Sashiko AI review · https://sashiko.dev/#/patchset/20260417034658.2625353-1-yonghong.song@linux.dev?part=12 ^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 12/16] bpf: Enable stack argument support for x86_64 2026-04-17 3:47 ` [PATCH bpf-next v5 12/16] bpf: Enable stack argument support for x86_64 Yonghong Song 2026-04-17 4:30 ` bot+bpf-ci 2026-04-17 5:03 ` sashiko-bot @ 2026-04-18 1:04 ` bot+bpf-ci 2 siblings, 0 replies; 45+ messages in thread From: bot+bpf-ci @ 2026-04-18 1:04 UTC (permalink / raw) To: yonghong.song, bpf Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai [-- Attachment #1: Type: text/plain, Size: 1933 bytes --] > diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c > index e9b78040d703..32864dbc2c4e 100644 > --- a/arch/x86/net/bpf_jit_comp.c > +++ b/arch/x86/net/bpf_jit_comp.c > @@ -3937,6 +3937,11 @@ bool bpf_jit_supports_kfunc_call(void) > return true; > } > > +bool bpf_jit_supports_stack_args(void) > +{ > + return true; > +} > + Does enabling this without the actual JIT implementation break bisection? If bpf_jit_supports_stack_args() returns true, the verifier will allow instructions using BPF_REG_PARAMS (register 11) to handle stack arguments. However, since the JIT support isn't implemented in this commit, BPF_REG_PARAMS is not yet defined in the JIT's reg2hex array. Could this cause reg2hex[11] to implicitly default to 0, which maps to RAX (BPF_REG_0)? If the verifier emits instructions using BPF_REG_PARAMS, the x86 JIT might miscompile them using RAX as the base register, potentially allowing an arbitrary kernel memory write since R0 is controllable by the BPF program. Additionally, it appears kfunc calls with >5 arguments won't have their arguments correctly marshaled to the System V ABI (like moving the 6th argument to R9), which could lead to reading uninitialized garbage from registers and the stack. The JIT implementation appears in a later commit in the series. 
Should the JIT implementation be merged before or in the same patch as this feature enablement to maintain bisection? This concern was raised by sashiko-bot@kernel.org in v5 review: https://lore.kernel.org/bpf/20260417050346.F2E3FC19425@smtp.kernel.org > void *bpf_arch_text_copy(void *dst, void *src, size_t len) > { > if (text_poke_copy(dst, src, len) == NULL) [ ... ] --- AI reviewed your patch. Please fix the bug or email reply why it's not a bug. See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24592562564 ^ permalink raw reply [flat|nested] 45+ messages in thread
* [PATCH bpf-next v5 13/16] bpf,x86: Implement JIT support for stack arguments 2026-04-17 3:46 [PATCH bpf-next v5 00/16] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song ` (11 preceding siblings ...) 2026-04-17 3:47 ` [PATCH bpf-next v5 12/16] bpf: Enable stack argument support for x86_64 Yonghong Song @ 2026-04-17 3:48 ` Yonghong Song 2026-04-17 4:44 ` sashiko-bot 2026-04-17 3:48 ` [PATCH bpf-next v5 14/16] selftests/bpf: Add tests for BPF function " Yonghong Song ` (2 subsequent siblings) 15 siblings, 1 reply; 45+ messages in thread From: Yonghong Song @ 2026-04-17 3:48 UTC (permalink / raw) To: bpf Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Jose E . Marchesi, kernel-team, Martin KaFai Lau Add x86_64 JIT support for BPF functions and kfuncs with more than 5 arguments. The extra arguments are passed through a stack area addressed by register r11 (BPF_REG_PARAMS) in BPF bytecode, which the JIT translates to native code. The JIT follows the x86-64 calling convention for both BPF-to-BPF and kfunc calls: - Arg 6 is passed in the R9 register - Args 7+ are passed on the stack Incoming arg 6 (BPF r11+8) is translated to a MOV from R9 rather than a memory load. Incoming args 7+ (BPF r11+16, r11+24, ...) map directly to [rbp + 16], [rbp + 24], ..., matching the x86-64 stack layout after CALL + PUSH RBP, so no offset adjustment is needed. The verifier guarantees that neither tail_call_reachable nor priv_stack is set when stack args exist, so R9 is always available. When BPF bytecode writes to the arg-6 stack slot (offset -8), the JIT emits a MOV into R9 instead of a memory store. Outgoing args 7+ are placed at [rsp] in a pre-allocated area below callee-saved registers, using: native_off = outgoing_arg_base - outgoing_rsp - bpf_off - 16 The native x86_64 stack layout with stack arguments: high address +-------------------------+ | incoming stack arg N | [rbp + 16 + (N-7)*8] (from caller) | ... 
| | incoming stack arg 7 | [rbp + 16] +-------------------------+ | return address | [rbp + 8] | saved rbp | [rbp] +-------------------------+ | BPF program stack | (round_up(stack_depth, 8) bytes) +-------------------------+ | callee-saved regs | (r12, rbx, r13, r14, r15 as needed) +-------------------------+ | outgoing arg M | [rsp + (M-7)*8] | ... | | outgoing arg 7 | [rsp] +-------------------------+ rsp low address Signed-off-by: Yonghong Song <yonghong.song@linux.dev> --- arch/x86/net/bpf_jit_comp.c | 149 ++++++++++++++++++++++++++++++++++-- 1 file changed, 143 insertions(+), 6 deletions(-) diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c index 32864dbc2c4e..25b4357de023 100644 --- a/arch/x86/net/bpf_jit_comp.c +++ b/arch/x86/net/bpf_jit_comp.c @@ -390,6 +390,34 @@ static void pop_callee_regs(u8 **pprog, bool *callee_regs_used) *pprog = prog; } +/* add rsp, depth */ +static void emit_add_rsp(u8 **pprog, u16 depth) +{ + u8 *prog = *pprog; + + if (!depth) + return; + if (is_imm8(depth)) + EMIT4(0x48, 0x83, 0xC4, depth); /* add rsp, imm8 */ + else + EMIT3_off32(0x48, 0x81, 0xC4, depth); /* add rsp, imm32 */ + *pprog = prog; +} + +/* sub rsp, depth */ +static void emit_sub_rsp(u8 **pprog, u16 depth) +{ + u8 *prog = *pprog; + + if (!depth) + return; + if (is_imm8(depth)) + EMIT4(0x48, 0x83, 0xEC, depth); /* sub rsp, imm8 */ + else + EMIT3_off32(0x48, 0x81, 0xEC, depth); /* sub rsp, imm32 */ + *pprog = prog; +} + static void emit_nops(u8 **pprog, int len) { u8 *prog = *pprog; @@ -1664,16 +1692,45 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image int i, excnt = 0; int ilen, proglen = 0; u8 *prog = temp; + u16 stack_arg_depth, incoming_stack_arg_depth, outgoing_stack_arg_depth; + u16 outgoing_rsp; u32 stack_depth; + int callee_saved_size; + s32 outgoing_arg_base; int err; stack_depth = bpf_prog->aux->stack_depth; + stack_arg_depth = bpf_prog->aux->stack_arg_depth; + incoming_stack_arg_depth = 
bpf_prog->aux->incoming_stack_arg_depth; + outgoing_stack_arg_depth = stack_arg_depth - incoming_stack_arg_depth; priv_stack_ptr = bpf_prog->aux->priv_stack_ptr; if (priv_stack_ptr) { priv_frame_ptr = priv_stack_ptr + PRIV_STACK_GUARD_SZ + round_up(stack_depth, 8); stack_depth = 0; } + /* + * Follow x86-64 calling convention for both BPF-to-BPF and + * kfunc calls: + * - Arg 6 is passed in R9 register + * - Args 7+ are passed on the stack at [rsp] + * + * Incoming arg 6 is read from R9 (BPF r11+8 → MOV from R9). + * Incoming args 7+ are read from [rbp + 16], [rbp + 24], ... + * (BPF r11+16, r11+24, ... map directly with no offset change). + * + * The verifier guarantees that neither tail_call_reachable nor + * priv_stack is set when outgoing stack args exist, so R9 is + * always available. + * + * Stack layout (high to low): + * [rbp + 16 + ...] incoming stack args 7+ (from caller) + * [rbp + 8] return address + * [rbp] saved rbp + * [rbp - prog_stack] program stack + * [below] callee-saved regs + * [below] outgoing args 7+ (= rsp) + */ arena_vm_start = bpf_arena_get_kern_vm_start(bpf_prog->aux->arena); user_vm_start = bpf_arena_get_user_vm_start(bpf_prog->aux->arena); @@ -1700,6 +1757,42 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image push_r12(&prog); push_callee_regs(&prog, callee_regs_used); } + + /* Compute callee-saved register area size. */ + callee_saved_size = 0; + if (bpf_prog->aux->exception_boundary || arena_vm_start) + callee_saved_size += 8; /* r12 */ + if (bpf_prog->aux->exception_boundary) { + callee_saved_size += 4 * 8; /* rbx, r13, r14, r15 */ + } else { + int j; + + for (j = 0; j < 4; j++) + if (callee_regs_used[j]) + callee_saved_size += 8; + } + /* + * Base offset from rbp for translating BPF outgoing args 7+ + * to native offsets. BPF uses negative offsets from r11 + * (r11-8 for arg6, r11-16 for arg7, ...) while x86 uses + * positive offsets from rsp ([rsp+0] for arg7, [rsp+8] for + * arg8, ...). 
Arg 6 goes to R9 directly. + * + * The translation reverses direction: + * native_off = outgoing_arg_base - outgoing_rsp - bpf_off - 16 + * + * Note that tail_call_reachable is guaranteed to be false when + * stack args exist, so tcc pushes need not be accounted for. + */ + outgoing_arg_base = -(round_up(stack_depth, 8) + callee_saved_size); + + /* + * Allocate outgoing stack arg area for args 7+ only. + * Arg 6 goes into r9 register, not on stack. + */ + outgoing_rsp = outgoing_stack_arg_depth > 8 ? outgoing_stack_arg_depth - 8 : 0; + emit_sub_rsp(&prog, outgoing_rsp); + if (arena_vm_start) emit_mov_imm64(&prog, X86_REG_R12, arena_vm_start >> 32, (u32) arena_vm_start); @@ -1721,7 +1814,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image u8 b2 = 0, b3 = 0; u8 *start_of_ldx; s64 jmp_offset; - s16 insn_off; + s32 insn_off; u8 jmp_cond; u8 *func; int nops; @@ -2129,12 +2222,26 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image EMIT1(0xC7); goto st; case BPF_ST | BPF_MEM | BPF_DW: + if (dst_reg == BPF_REG_PARAMS && insn->off == -8) { + /* Arg 6: store immediate in r9 register */ + emit_mov_imm64(&prog, X86_REG_R9, imm32 >> 31, (u32)imm32); + break; + } EMIT2(add_1mod(0x48, dst_reg), 0xC7); -st: if (is_imm8(insn->off)) - EMIT2(add_1reg(0x40, dst_reg), insn->off); +st: insn_off = insn->off; + if (dst_reg == BPF_REG_PARAMS) { + /* Args 7+: reverse BPF negative offsets to + * x86 positive rsp offsets. + * BPF off=-16 → [rsp+0], off=-24 → [rsp+8], ... 
+ */ + insn_off = outgoing_arg_base - outgoing_rsp - insn_off - 16; + dst_reg = BPF_REG_FP; + } + if (is_imm8(insn_off)) + EMIT2(add_1reg(0x40, dst_reg), insn_off); else - EMIT1_off32(add_1reg(0x80, dst_reg), insn->off); + EMIT1_off32(add_1reg(0x80, dst_reg), insn_off); EMIT(imm32, bpf_size_to_x86_bytes(BPF_SIZE(insn->code))); break; @@ -2144,7 +2251,17 @@ st: if (is_imm8(insn->off)) case BPF_STX | BPF_MEM | BPF_H: case BPF_STX | BPF_MEM | BPF_W: case BPF_STX | BPF_MEM | BPF_DW: - emit_stx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn->off); + if (dst_reg == BPF_REG_PARAMS && insn->off == -8) { + /* Arg 6: store register value in r9 */ + EMIT_mov(X86_REG_R9, src_reg); + break; + } + insn_off = insn->off; + if (dst_reg == BPF_REG_PARAMS) { + insn_off = outgoing_arg_base - outgoing_rsp - insn_off - 16; + dst_reg = BPF_REG_FP; + } + emit_stx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn_off); break; case BPF_ST | BPF_PROBE_MEM32 | BPF_B: @@ -2243,6 +2360,19 @@ st: if (is_imm8(insn->off)) case BPF_LDX | BPF_PROBE_MEMSX | BPF_H: case BPF_LDX | BPF_PROBE_MEMSX | BPF_W: insn_off = insn->off; + if (src_reg == BPF_REG_PARAMS) { + if (insn_off == 8) { + /* Incoming arg 6: read from r9 */ + EMIT_mov(dst_reg, X86_REG_R9); + break; + } + src_reg = BPF_REG_FP; + /* + * Incoming args 7+: native_off == bpf_off + * (r11+16 → [rbp+16], r11+24 → [rbp+24], ...) + * No offset adjustment needed. + */ + } if (BPF_MODE(insn->code) == BPF_PROBE_MEM || BPF_MODE(insn->code) == BPF_PROBE_MEMSX) { @@ -2734,6 +2864,8 @@ st: if (is_imm8(insn->off)) if (emit_spectre_bhb_barrier(&prog, ip, bpf_prog)) return -EINVAL; } + /* Deallocate outgoing args 7+ area. 
*/ + emit_add_rsp(&prog, outgoing_rsp); if (bpf_prog->aux->exception_boundary) { pop_callee_regs(&prog, all_callee_regs_used); pop_r12(&prog); @@ -3757,7 +3889,12 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog) prog->aux->jit_data = jit_data; } priv_stack_ptr = prog->aux->priv_stack_ptr; - if (!priv_stack_ptr && prog->aux->jits_use_priv_stack) { + /* + * x86-64 uses R9 for both private stack frame pointer and arg 6, + * so disable private stack when stack args are present. + */ + if (!priv_stack_ptr && prog->aux->jits_use_priv_stack && + prog->aux->stack_arg_depth == 0) { /* Allocate actual private stack size with verifier-calculated * stack size plus two memory guards to protect overflow and * underflow. -- 2.52.0 ^ permalink raw reply related [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 13/16] bpf,x86: Implement JIT support for stack arguments 2026-04-17 3:48 ` [PATCH bpf-next v5 13/16] bpf,x86: Implement JIT support for stack arguments Yonghong Song @ 2026-04-17 4:44 ` sashiko-bot 0 siblings, 0 replies; 45+ messages in thread From: sashiko-bot @ 2026-04-17 4:44 UTC (permalink / raw) To: Yonghong Song; +Cc: bpf Sashiko AI review found 4 potential issue(s): - [High] Data corruption of Argument 6 across function calls due to caller-saved register clobbering. - [High] Missing size handling for Argument 6 allows verifier bounds tracking bypass and out-of-bounds memory writes. - [High] The offset translation formula for outgoing arguments 7+ reverses the memory layout, corrupting sub-slot accesses. - [Medium] The JIT misroutes stack accesses because it rigidly assumes LDX only targets incoming arguments and STX/ST only target outgoing arguments. -- commit 41dcf99330b6b0a3804185f2a284a76be8340ed1 Author: Yonghong Song <yonghong.song@linux.dev> bpf,x86: Implement JIT support for stack arguments Add x86_64 JIT support for BPF functions and kfuncs with more than 5 arguments. The extra arguments are passed through a stack area addressed by register r11 (BPF_REG_PARAMS) in BPF bytecode, which the JIT translates to native code. > diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c > index 32864dbc2c4e8..25b4357de023b 100644 > --- a/arch/x86/net/bpf_jit_comp.c > +++ b/arch/x86/net/bpf_jit_comp.c [ ... 
] > @@ -1664,16 +1692,45 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image > stack_depth = bpf_prog->aux->stack_depth; > + stack_arg_depth = bpf_prog->aux->stack_arg_depth; > + incoming_stack_arg_depth = bpf_prog->aux->incoming_stack_arg_depth; > + outgoing_stack_arg_depth = stack_arg_depth - incoming_stack_arg_depth; > priv_stack_ptr = bpf_prog->aux->priv_stack_ptr; > if (priv_stack_ptr) { > priv_frame_ptr = priv_stack_ptr + PRIV_STACK_GUARD_SZ + round_up(stack_depth, 8); > stack_depth = 0; > } > > + /* > + * Follow x86-64 calling convention for both BPF-to-BPF and > + * kfunc calls: > + * - Arg 6 is passed in R9 register > + * - Args 7+ are passed on the stack at [rsp] Can the use of the R9 register cause data corruption across function calls? The x86-64 C calling convention defines R9 as a caller-saved register. However, the BPF verifier models argument 6 as stack memory, which implies its value should be preserved across calls. If a BPF program reads incoming argument 6 after making a helper or kfunc call, will it receive a clobbered value instead of the original argument? Similarly, could a helper call scheduled by LLVM between the setup of outgoing argument 6 and the target call corrupt the R9 register? [ ... ] > + /* > + * Base offset from rbp for translating BPF outgoing args 7+ > + * to native offsets. BPF uses negative offsets from r11 > + * (r11-8 for arg6, r11-16 for arg7, ...) while x86 uses > + * positive offsets from rsp ([rsp+0] for arg7, [rsp+8] for > + * arg8, ...). Arg 6 goes to R9 directly. > + * > + * The translation reverses direction: > + * native_off = outgoing_arg_base - outgoing_rsp - bpf_off - 16 > + * > + * Note that tail_call_reachable is guaranteed to be false when > + * stack args exist, so tcc pushes need not be accounted for. > + */ Does this formula correctly handle multi-byte sub-slot accesses? 
Since the formula subtracts the BPF offset, the memory layout within the 8-byte slots appears mathematically reversed. While this correctly maps the 8-byte slot bases (e.g., -16 maps to rsp + 0), it seems to point to the wrong memory for partial writes. For example, if a program writes to the upper 32 bits of Arg 7 at BPF offset -12, the formula yields BASE - (-12) - 16 = BASE - 4. Will this map the access out-of-bounds below the stack pointer instead of to [rsp + 4]? [ ... ] > @@ -2129,12 +2222,26 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image > EMIT1(0xC7); > goto st; > case BPF_ST | BPF_MEM | BPF_DW: > + if (dst_reg == BPF_REG_PARAMS && insn->off == -8) { > + /* Arg 6: store immediate in r9 register */ > + emit_mov_imm64(&prog, X86_REG_R9, imm32 >> 31, (u32)imm32); > + break; > + } > EMIT2(add_1mod(0x48, dst_reg), 0xC7); > > -st: if (is_imm8(insn->off)) > - EMIT2(add_1reg(0x40, dst_reg), insn->off); > +st: insn_off = insn->off; > + if (dst_reg == BPF_REG_PARAMS) { > + /* Args 7+: reverse BPF negative offsets to > + * x86 positive rsp offsets. > + * BPF off=-16 → [rsp+0], off=-24 → [rsp+8], ... > + */ > + insn_off = outgoing_arg_base - outgoing_rsp - insn_off - 16; > + dst_reg = BPF_REG_FP; > + } Are the size constraints missing for Argument 6 here? The check for Arg 6 (insn->off == -8) is only implemented under the BPF_DW case. For sizes B, H, and W, execution will fall through to the st: label. This would incorrectly translate the access as a normal stack operation using the formula above, writing out-of-bounds below the stack pointer at [rsp - 8] instead of populating R9. Additionally, for BPF_LDX, a 1-byte load emits a full 64-bit move to R9. Since the upper 56 bits of R9 are undefined for 1-byte arguments, could they contain garbage data? This would bypass the verifier's strict value bounds tracking, which assumes the loaded value is strictly within [0, 255]. [ ... 
] > @@ -2243,6 +2360,19 @@ st: if (is_imm8(insn->off)) > case BPF_LDX | BPF_PROBE_MEMSX | BPF_H: > case BPF_LDX | BPF_PROBE_MEMSX | BPF_W: > insn_off = insn->off; > + if (src_reg == BPF_REG_PARAMS) { > + if (insn_off == 8) { > + /* Incoming arg 6: read from r9 */ > + EMIT_mov(dst_reg, X86_REG_R9); > + break; > + } > + src_reg = BPF_REG_FP; > + /* > + * Incoming args 7+: native_off == bpf_off > + * (r11+16 → [rbp+16], r11+24 → [rbp+24], ...) > + * No offset adjustment needed. > + */ > + } Does this logic misroute stack accesses by assuming LDX always targets incoming arguments and STX/ST always target outgoing arguments? If a program writes an outgoing argument ([r11 - 16]) and then reads it back via LDX, the JIT appears to apply the incoming argument translation mapping the read to [rbp - 16] (the local stack) instead of [rsp] (the outgoing area). This would return uninitialized data. Conversely, if a program modifies an incoming argument ([r11 + 16]) using STX, the JIT would apply the outgoing translation formula, yielding BASE - 16 - 16 = BASE - 32, which writes out-of-bounds below the stack pointer. -- Sashiko AI review · https://sashiko.dev/#/patchset/20260417034658.2625353-1-yonghong.song@linux.dev?part=13 ^ permalink raw reply [flat|nested] 45+ messages in thread
* [PATCH bpf-next v5 14/16] selftests/bpf: Add tests for BPF function stack arguments 2026-04-17 3:46 [PATCH bpf-next v5 00/16] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song ` (12 preceding siblings ...) 2026-04-17 3:48 ` [PATCH bpf-next v5 13/16] bpf,x86: Implement JIT support for stack arguments Yonghong Song @ 2026-04-17 3:48 ` Yonghong Song 2026-04-17 4:20 ` sashiko-bot 2026-04-18 0:52 ` bot+bpf-ci 2026-04-17 3:48 ` [PATCH bpf-next v5 15/16] selftests/bpf: Add negative test for greater-than-8-byte kfunc stack argument Yonghong Song 2026-04-17 3:48 ` [PATCH bpf-next v5 16/16] selftests/bpf: Add verifier tests for stack argument validation Yonghong Song 15 siblings, 2 replies; 45+ messages in thread From: Yonghong Song @ 2026-04-17 3:48 UTC (permalink / raw) To: bpf Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Jose E . Marchesi, kernel-team, Martin KaFai Lau Add selftests covering stack argument passing for both BPF-to-BPF subprog calls and kfunc calls with more than 5 arguments. All tests are guarded by __BPF_FEATURE_STACK_ARGUMENT and __TARGET_ARCH_x86. 
BPF-to-BPF subprog call tests (stack_arg.c): - Scalar stack args - Pointer stack args - Mixed pointer/scalar stack args - Nested calls - Dynptr stack arg - Two callees with different stack arg counts - Async callback Kfunc call tests (stack_arg_kfunc.c, with bpf_testmod kfuncs): - Scalar stack args - Pointer stack args - Mixed pointer/scalar stack args - Dynptr stack arg - Memory buffer + size pair - Iterator - Const string pointer - Timer pointer Signed-off-by: Yonghong Song <yonghong.song@linux.dev> --- .../selftests/bpf/prog_tests/stack_arg.c | 133 +++++++++ tools/testing/selftests/bpf/progs/stack_arg.c | 254 ++++++++++++++++++ .../selftests/bpf/progs/stack_arg_kfunc.c | 164 +++++++++++ .../selftests/bpf/test_kmods/bpf_testmod.c | 66 +++++ .../bpf/test_kmods/bpf_testmod_kfunc.h | 20 +- 5 files changed, 636 insertions(+), 1 deletion(-) create mode 100644 tools/testing/selftests/bpf/prog_tests/stack_arg.c create mode 100644 tools/testing/selftests/bpf/progs/stack_arg.c create mode 100644 tools/testing/selftests/bpf/progs/stack_arg_kfunc.c diff --git a/tools/testing/selftests/bpf/prog_tests/stack_arg.c b/tools/testing/selftests/bpf/prog_tests/stack_arg.c new file mode 100644 index 000000000000..130eaf1c4a78 --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/stack_arg.c @@ -0,0 +1,133 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. 
*/ + +#include <test_progs.h> +#include <network_helpers.h> +#include "stack_arg.skel.h" +#include "stack_arg_kfunc.skel.h" + +static void run_subtest(struct bpf_program *prog, int expected) +{ + int err, prog_fd; + LIBBPF_OPTS(bpf_test_run_opts, topts, + .data_in = &pkt_v4, + .data_size_in = sizeof(pkt_v4), + .repeat = 1, + ); + + prog_fd = bpf_program__fd(prog); + err = bpf_prog_test_run_opts(prog_fd, &topts); + ASSERT_OK(err, "test_run"); + ASSERT_EQ(topts.retval, expected, "retval"); +} + +static void test_global_many(void) +{ + struct stack_arg *skel; + + skel = stack_arg__open(); + if (!ASSERT_OK_PTR(skel, "open")) + return; + + if (!skel->rodata->has_stack_arg) { + test__skip(); + goto out; + } + + if (!ASSERT_OK(stack_arg__load(skel), "load")) + goto out; + + run_subtest(skel->progs.test_global_many_args, 36); + +out: + stack_arg__destroy(skel); +} + +static void test_async_cb_many(void) +{ + struct stack_arg *skel; + + skel = stack_arg__open(); + if (!ASSERT_OK_PTR(skel, "open")) + return; + + if (!skel->rodata->has_stack_arg) { + test__skip(); + goto out; + } + + if (!ASSERT_OK(stack_arg__load(skel), "load")) + goto out; + + run_subtest(skel->progs.test_async_cb_many_args, 0); + +out: + stack_arg__destroy(skel); +} + +static void test_bpf2bpf(void) +{ + struct stack_arg *skel; + + skel = stack_arg__open(); + if (!ASSERT_OK_PTR(skel, "open")) + return; + + if (!skel->rodata->has_stack_arg) { + test__skip(); + goto out; + } + + if (!ASSERT_OK(stack_arg__load(skel), "load")) + goto out; + + run_subtest(skel->progs.test_bpf2bpf_ptr_stack_arg, 45); + run_subtest(skel->progs.test_bpf2bpf_mix_stack_args, 51); + run_subtest(skel->progs.test_bpf2bpf_nesting_stack_arg, 50); + run_subtest(skel->progs.test_bpf2bpf_dynptr_stack_arg, 69); + run_subtest(skel->progs.test_two_callees, 91); + +out: + stack_arg__destroy(skel); +} + +static void test_kfunc(void) +{ + struct stack_arg_kfunc *skel; + + skel = stack_arg_kfunc__open(); + if (!ASSERT_OK_PTR(skel, "open")) + 
return; + + if (!skel->rodata->has_stack_arg) { + test__skip(); + goto out; + } + + if (!ASSERT_OK(stack_arg_kfunc__load(skel), "load")) + goto out; + + run_subtest(skel->progs.test_stack_arg_scalar, 36); + run_subtest(skel->progs.test_stack_arg_ptr, 45); + run_subtest(skel->progs.test_stack_arg_mix, 51); + run_subtest(skel->progs.test_stack_arg_dynptr, 69); + run_subtest(skel->progs.test_stack_arg_mem, 151); + run_subtest(skel->progs.test_stack_arg_iter, 115); + run_subtest(skel->progs.test_stack_arg_const_str, 15); + run_subtest(skel->progs.test_stack_arg_timer, 15); + +out: + stack_arg_kfunc__destroy(skel); +} + +void test_stack_arg(void) +{ + if (test__start_subtest("global_many_args")) + test_global_many(); + if (test__start_subtest("async_cb_many_args")) + test_async_cb_many(); + if (test__start_subtest("bpf2bpf")) + test_bpf2bpf(); + if (test__start_subtest("kfunc")) + test_kfunc(); +} diff --git a/tools/testing/selftests/bpf/progs/stack_arg.c b/tools/testing/selftests/bpf/progs/stack_arg.c new file mode 100644 index 000000000000..8c198ee952ff --- /dev/null +++ b/tools/testing/selftests/bpf/progs/stack_arg.c @@ -0,0 +1,254 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. 
*/ + +#include <vmlinux.h> +#include <stdbool.h> +#include <bpf/bpf_helpers.h> +#include "bpf_kfuncs.h" + +#define CLOCK_MONOTONIC 1 + +long a, b, c, d, e, f, g, i; + +struct timer_elem { + struct bpf_timer timer; +}; + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __uint(max_entries, 1); + __type(key, int); + __type(value, struct timer_elem); +} timer_map SEC(".maps"); + +int timer_result; + +#if defined(__TARGET_ARCH_x86) && defined(__BPF_FEATURE_STACK_ARGUMENT) + +const volatile bool has_stack_arg = true; + +__noinline static int static_func_many_args(int a, int b, int c, int d, + int e, int f, int g, int h) +{ + return a + b + c + d + e + f + g + h; +} + +__noinline int global_calls_many_args(int a, int b, int c) +{ + return static_func_many_args(a, b, c, 4, 5, 6, 7, 8); +} + +SEC("tc") +int test_global_many_args(void) +{ + return global_calls_many_args(1, 2, 3); +} + +struct test_data { + long x; + long y; +}; + +/* 1 + 2 + 3 + 4 + 5 + 10 + 20 = 45 */ +__noinline static long func_with_ptr_stack_arg(long a, long b, long c, long d, + long e, struct test_data *p) +{ + return a + b + c + d + e + p->x + p->y; +} + +__noinline long global_ptr_stack_arg(long a, long b, long c, long d, long e) +{ + struct test_data data = { .x = 10, .y = 20 }; + + return func_with_ptr_stack_arg(a, b, c, d, e, &data); +} + +SEC("tc") +int test_bpf2bpf_ptr_stack_arg(void) +{ + return global_ptr_stack_arg(1, 2, 3, 4, 5); +} + +/* 1 + 2 + 3 + 4 + 5 + 10 + 6 + 20 = 51 */ +__noinline static long func_with_mix_stack_args(long a, long b, long c, long d, + long e, struct test_data *p, + long f, struct test_data *q) +{ + return a + b + c + d + e + p->x + f + q->y; +} + +__noinline long global_mix_stack_args(long a, long b, long c, long d, long e) +{ + struct test_data p = { .x = 10 }; + struct test_data q = { .y = 20 }; + + return func_with_mix_stack_args(a, b, c, d, e, &p, e + 1, &q); +} + +SEC("tc") +int test_bpf2bpf_mix_stack_args(void) +{ + return global_mix_stack_args(1, 2, 3, 4, 5); +} 
+ +/* + * Nesting test: func_outer calls func_inner, both with struct pointer + * as stack arg. + * + * func_inner: (a+1) + (b+1) + (c+1) + (d+1) + (e+1) + p->x + p->y + * = 2 + 3 + 4 + 5 + 6 + 10 + 20 = 50 + */ +__noinline static long func_inner_ptr(long a, long b, long c, long d, + long e, struct test_data *p) +{ + return a + b + c + d + e + p->x + p->y; +} + +__noinline static long func_outer_ptr(long a, long b, long c, long d, + long e, struct test_data *p) +{ + return func_inner_ptr(a + 1, b + 1, c + 1, d + 1, e + 1, p); +} + +__noinline long global_nesting_ptr(long a, long b, long c, long d, long e) +{ + struct test_data data = { .x = 10, .y = 20 }; + + return func_outer_ptr(a, b, c, d, e, &data); +} + +SEC("tc") +int test_bpf2bpf_nesting_stack_arg(void) +{ + return global_nesting_ptr(1, 2, 3, 4, 5); +} + +/* 1 + 2 + 3 + 4 + 5 + sizeof(pkt_v4) = 15 + 54 = 69 */ +__noinline static long func_with_dynptr(long a, long b, long c, long d, + long e, struct bpf_dynptr *ptr) +{ + return a + b + c + d + e + bpf_dynptr_size(ptr); +} + +__noinline long global_dynptr_stack_arg(void *ctx __arg_ctx, long a, long b, + long c, long d) +{ + struct bpf_dynptr ptr; + + bpf_dynptr_from_skb(ctx, 0, &ptr); + return func_with_dynptr(a, b, c, d, d + 1, &ptr); +} + +SEC("tc") +int test_bpf2bpf_dynptr_stack_arg(struct __sk_buff *skb) +{ + return global_dynptr_stack_arg(skb, 1, 2, 3, 4); +} + +/* foo1: a+b+c+d+e+f+g+h */ +__noinline static int foo1(int a, int b, int c, int d, + int e, int f, int g, int h) +{ + return a + b + c + d + e + f + g + h; +} + +/* foo2: a+b+c+d+e+f+g+h+i+j */ +__noinline static int foo2(int a, int b, int c, int d, int e, + int f, int g, int h, int i, int j) +{ + return a + b + c + d + e + f + g + h + i + j; +} + +/* bar calls foo1 (3 stack args) and foo2 (5 stack args). + * The outgoing stack arg area is sized for foo2 (the larger callee). + * Stores for foo1 are a subset of the area used by foo2. 
+ * Result: foo1(1,2,3,4,5,6,7,8) + foo2(1,2,3,4,5,6,7,8,9,10) = 36 + 55 = 91 + * + * Pass a-e through so the compiler can't constant-fold the stack args away. + */ +__noinline int global_two_callees(int a, int b, int c, int d, int e) +{ + int ret; + + ret = foo1(a, b, c, d, e, a + 5, a + 6, a + 7); + ret += foo2(a, b, c, d, e, a + 5, a + 6, a + 7, a + 8, a + 9); + return ret; +} + +SEC("tc") +int test_two_callees(void) +{ + return global_two_callees(1, 2, 3, 4, 5); +} + +static int timer_cb_many_args(void *map, int *key, struct bpf_timer *timer) +{ + timer_result = static_func_many_args(10, 20, 30, 40, 50, 60, 70, 80); + return 0; +} + +SEC("tc") +int test_async_cb_many_args(void) +{ + struct timer_elem *elem; + int key = 0; + + elem = bpf_map_lookup_elem(&timer_map, &key); + if (!elem) + return -1; + + bpf_timer_init(&elem->timer, &timer_map, CLOCK_MONOTONIC); + bpf_timer_set_callback(&elem->timer, timer_cb_many_args); + bpf_timer_start(&elem->timer, 1, 0); + return 0; +} + +#else + +const volatile bool has_stack_arg = false; + +SEC("tc") +int test_global_many_args(void) +{ + return 0; +} + +SEC("tc") +int test_bpf2bpf_ptr_stack_arg(void) +{ + return 0; +} + +SEC("tc") +int test_bpf2bpf_mix_stack_args(void) +{ + return 0; +} + +SEC("tc") +int test_bpf2bpf_nesting_stack_arg(void) +{ + return 0; +} + +SEC("tc") +int test_bpf2bpf_dynptr_stack_arg(struct __sk_buff *skb) +{ + return 0; +} + +SEC("tc") +int test_two_callees(void) +{ + return 0; +} + +SEC("tc") +int test_async_cb_many_args(void) +{ + return 0; +} + +#endif + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/stack_arg_kfunc.c b/tools/testing/selftests/bpf/progs/stack_arg_kfunc.c new file mode 100644 index 000000000000..6cc404d57863 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/stack_arg_kfunc.c @@ -0,0 +1,164 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. 
*/ + +#include <vmlinux.h> +#include <bpf/bpf_helpers.h> +#include "bpf_kfuncs.h" +#include "../test_kmods/bpf_testmod_kfunc.h" + +#if defined(__TARGET_ARCH_x86) && defined(__BPF_FEATURE_STACK_ARGUMENT) + +const volatile bool has_stack_arg = true; + +struct bpf_iter_testmod_seq { + u64 :64; + u64 :64; +}; + +extern int bpf_iter_testmod_seq_new(struct bpf_iter_testmod_seq *it, s64 value, int cnt) __ksym; +extern int *bpf_iter_testmod_seq_next(struct bpf_iter_testmod_seq *it) __ksym; +extern void bpf_iter_testmod_seq_destroy(struct bpf_iter_testmod_seq *it) __ksym; + +struct timer_map_value { + struct bpf_timer timer; +}; + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __uint(max_entries, 1); + __type(key, int); + __type(value, struct timer_map_value); +} kfunc_timer_map SEC(".maps"); + +SEC("tc") +int test_stack_arg_scalar(struct __sk_buff *skb) +{ + return bpf_kfunc_call_stack_arg(1, 2, 3, 4, 5, 6, 7, 8); +} + +SEC("tc") +int test_stack_arg_ptr(struct __sk_buff *skb) +{ + struct prog_test_pass1 p = { .x0 = 10, .x1 = 20 }; + + return bpf_kfunc_call_stack_arg_ptr(1, 2, 3, 4, 5, &p); +} + +SEC("tc") +int test_stack_arg_mix(struct __sk_buff *skb) +{ + struct prog_test_pass1 p = { .x0 = 10 }; + struct prog_test_pass1 q = { .x1 = 20 }; + + return bpf_kfunc_call_stack_arg_mix(1, 2, 3, 4, 5, &p, 6, &q); +} + +/* 1 + 2 + 3 + 4 + 5 + sizeof(pkt_v4) = 15 + 54 = 69 */ +SEC("tc") +int test_stack_arg_dynptr(struct __sk_buff *skb) +{ + struct bpf_dynptr ptr; + + bpf_dynptr_from_skb(skb, 0, &ptr); + return bpf_kfunc_call_stack_arg_dynptr(1, 2, 3, 4, 5, &ptr); +} + +/* 1 + 2 + 3 + 4 + 5 + (1 + 2 + ... 
+ 16) = 15 + 136 = 151 */ +SEC("tc") +int test_stack_arg_mem(struct __sk_buff *skb) +{ + char buf[16] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}; + + return bpf_kfunc_call_stack_arg_mem(1, 2, 3, 4, 5, buf, sizeof(buf)); +} + +/* 1 + 2 + 3 + 4 + 5 + 100 = 115 */ +SEC("tc") +int test_stack_arg_iter(struct __sk_buff *skb) +{ + struct bpf_iter_testmod_seq it; + u64 ret; + + bpf_iter_testmod_seq_new(&it, 100, 10); + ret = bpf_kfunc_call_stack_arg_iter(1, 2, 3, 4, 5, &it); + bpf_iter_testmod_seq_destroy(&it); + return ret; +} + +const char cstr[] = "hello"; + +/* 1 + 2 + 3 + 4 + 5 = 15 */ +SEC("tc") +int test_stack_arg_const_str(struct __sk_buff *skb) +{ + return bpf_kfunc_call_stack_arg_const_str(1, 2, 3, 4, 5, cstr); +} + +/* 1 + 2 + 3 + 4 + 5 = 15 */ +SEC("tc") +int test_stack_arg_timer(struct __sk_buff *skb) +{ + struct timer_map_value *val; + int key = 0; + + val = bpf_map_lookup_elem(&kfunc_timer_map, &key); + if (!val) + return 0; + return bpf_kfunc_call_stack_arg_timer(1, 2, 3, 4, 5, &val->timer); +} + +#else + +const volatile bool has_stack_arg = false; + +SEC("tc") +int test_stack_arg_scalar(struct __sk_buff *skb) +{ + return 0; +} + +SEC("tc") +int test_stack_arg_ptr(struct __sk_buff *skb) +{ + return 0; +} + +SEC("tc") +int test_stack_arg_mix(struct __sk_buff *skb) +{ + return 0; +} + +SEC("tc") +int test_stack_arg_dynptr(struct __sk_buff *skb) +{ + return 0; +} + +SEC("tc") +int test_stack_arg_mem(struct __sk_buff *skb) +{ + return 0; +} + +SEC("tc") +int test_stack_arg_iter(struct __sk_buff *skb) +{ + return 0; +} + +SEC("tc") +int test_stack_arg_const_str(struct __sk_buff *skb) +{ + return 0; +} + +SEC("tc") +int test_stack_arg_timer(struct __sk_buff *skb) +{ + return 0; +} + +#endif + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c index d876314a4d67..ea82a6d32d9f 100644 --- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c 
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c @@ -825,6 +825,63 @@ __bpf_kfunc int bpf_kfunc_call_test5(u8 a, u16 b, u32 c) return 0; } +__bpf_kfunc u64 bpf_kfunc_call_stack_arg(u64 a, u64 b, u64 c, u64 d, + u64 e, u64 f, u64 g, u64 h) +{ + return a + b + c + d + e + f + g + h; +} + +__bpf_kfunc u64 bpf_kfunc_call_stack_arg_ptr(u64 a, u64 b, u64 c, u64 d, u64 e, + struct prog_test_pass1 *p) +{ + return a + b + c + d + e + p->x0 + p->x1; +} + +__bpf_kfunc u64 bpf_kfunc_call_stack_arg_mix(u64 a, u64 b, u64 c, u64 d, u64 e, + struct prog_test_pass1 *p, u64 f, + struct prog_test_pass1 *q) +{ + return a + b + c + d + e + p->x0 + f + q->x1; +} + +__bpf_kfunc u64 bpf_kfunc_call_stack_arg_dynptr(u64 a, u64 b, u64 c, u64 d, u64 e, + struct bpf_dynptr *ptr) +{ + const struct bpf_dynptr_kern *kern_ptr = (void *)ptr; + + return a + b + c + d + e + (kern_ptr->size & 0xFFFFFF); +} + +__bpf_kfunc u64 bpf_kfunc_call_stack_arg_mem(u64 a, u64 b, u64 c, u64 d, u64 e, + void *mem, int mem__sz) +{ + const unsigned char *p = mem; + u64 sum = a + b + c + d + e; + int i; + + for (i = 0; i < mem__sz; i++) + sum += p[i]; + return sum; +} + +__bpf_kfunc u64 bpf_kfunc_call_stack_arg_iter(u64 a, u64 b, u64 c, u64 d, u64 e, + struct bpf_iter_testmod_seq *it__iter) +{ + return a + b + c + d + e + it__iter->value; +} + +__bpf_kfunc u64 bpf_kfunc_call_stack_arg_const_str(u64 a, u64 b, u64 c, u64 d, u64 e, + const char *str__str) +{ + return a + b + c + d + e; +} + +__bpf_kfunc u64 bpf_kfunc_call_stack_arg_timer(u64 a, u64 b, u64 c, u64 d, u64 e, + struct bpf_timer *timer) +{ + return a + b + c + d + e; +} + static struct prog_test_ref_kfunc prog_test_struct = { .a = 42, .b = 108, @@ -1288,6 +1345,15 @@ BTF_ID_FLAGS(func, bpf_kfunc_call_test2) BTF_ID_FLAGS(func, bpf_kfunc_call_test3) BTF_ID_FLAGS(func, bpf_kfunc_call_test4) BTF_ID_FLAGS(func, bpf_kfunc_call_test5) +BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg) +BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_ptr) +BTF_ID_FLAGS(func, 
bpf_kfunc_call_stack_arg_mix) +BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_dynptr) +BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_mem) +BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_iter) +BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_const_str) +BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_timer) +BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_pass1) BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail1) BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail2) BTF_ID_FLAGS(func, bpf_kfunc_call_test_acquire, KF_ACQUIRE | KF_RET_NULL) diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h index aa0b8d41e71b..2c1cb118f886 100644 --- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h +++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h @@ -26,6 +26,8 @@ struct prog_test_ref_kfunc { }; #endif +struct bpf_iter_testmod_seq; + struct prog_test_pass1 { int x0; struct { @@ -111,7 +113,23 @@ int bpf_kfunc_call_test2(struct sock *sk, __u32 a, __u32 b) __ksym; struct sock *bpf_kfunc_call_test3(struct sock *sk) __ksym; long bpf_kfunc_call_test4(signed char a, short b, int c, long d) __ksym; int bpf_kfunc_call_test5(__u8 a, __u16 b, __u32 c) __ksym; - +__u64 bpf_kfunc_call_stack_arg(__u64 a, __u64 b, __u64 c, __u64 d, + __u64 e, __u64 f, __u64 g, __u64 h) __ksym; +__u64 bpf_kfunc_call_stack_arg_ptr(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e, + struct prog_test_pass1 *p) __ksym; +__u64 bpf_kfunc_call_stack_arg_mix(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e, + struct prog_test_pass1 *p, __u64 f, + struct prog_test_pass1 *q) __ksym; +__u64 bpf_kfunc_call_stack_arg_dynptr(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e, + struct bpf_dynptr *ptr) __ksym; +__u64 bpf_kfunc_call_stack_arg_mem(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e, + void *mem, int mem__sz) __ksym; +__u64 bpf_kfunc_call_stack_arg_iter(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e, + struct bpf_iter_testmod_seq *it__iter) __ksym; 
+__u64 bpf_kfunc_call_stack_arg_const_str(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e, + const char *str__str) __ksym; +__u64 bpf_kfunc_call_stack_arg_timer(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e, + struct bpf_timer *timer) __ksym; void bpf_kfunc_call_test_pass_ctx(struct __sk_buff *skb) __ksym; void bpf_kfunc_call_test_pass1(struct prog_test_pass1 *p) __ksym; void bpf_kfunc_call_test_pass2(struct prog_test_pass2 *p) __ksym; -- 2.52.0 ^ permalink raw reply related [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 14/16] selftests/bpf: Add tests for BPF function stack arguments 2026-04-17 3:48 ` [PATCH bpf-next v5 14/16] selftests/bpf: Add tests for BPF function " Yonghong Song @ 2026-04-17 4:20 ` sashiko-bot 2026-04-18 0:52 ` bot+bpf-ci 1 sibling, 0 replies; 45+ messages in thread From: sashiko-bot @ 2026-04-17 4:20 UTC (permalink / raw) To: Yonghong Song; +Cc: bpf Sashiko AI review found 1 potential issue(s): - [Medium] The `test_async_cb_many` test fails to verify the execution and result of the asynchronous timer callback. -- commit ef139a03093697d9985422f1f900de9a12f9a0dc Author: Yonghong Song <yonghong.song@linux.dev> selftests/bpf: Add tests for BPF function stack arguments This commit adds selftests for BPF-to-BPF subprog and kfunc calls that utilize more than 5 arguments, verifying stack argument passing. It introduces test scenarios for various stack argument types, including scalars, pointers, and async callbacks. > diff --git a/tools/testing/selftests/bpf/prog_tests/stack_arg.c b/tools/testing/selftests/bpf/prog_tests/stack_arg.c > index 0000000000000..130eaf1c4a78e > --- /dev/null > +++ b/tools/testing/selftests/bpf/prog_tests/stack_arg.c [ ... ] > +static void test_async_cb_many(void) > +{ > + struct stack_arg *skel; > + > + skel = stack_arg__open(); > + if (!ASSERT_OK_PTR(skel, "open")) > + return; > + > + if (!skel->rodata->has_stack_arg) { > + test__skip(); > + goto out; > + } > + > + if (!ASSERT_OK(stack_arg__load(skel), "load")) > + goto out; > + > + run_subtest(skel->progs.test_async_cb_many_args, 0); Does this test verify the asynchronous callback execution and its result? Since run_subtest() only runs the program and asserts the return value is 0, the skeleton is immediately destroyed afterward. The timer callback might not have time to execute and could be canceled during map destruction. > + > +out: > + stack_arg__destroy(skel); > +} [ ... 
] > diff --git a/tools/testing/selftests/bpf/progs/stack_arg.c b/tools/testing/selftests/bpf/progs/stack_arg.c > index 0000000000000..8c198ee952ff4 > --- /dev/null > +++ b/tools/testing/selftests/bpf/progs/stack_arg.c [ ... ] > +static int timer_cb_many_args(void *map, int *key, struct bpf_timer *timer) > +{ > + timer_result = static_func_many_args(10, 20, 30, 40, 50, 60, 70, 80); > + return 0; > +} The result of the callback (timer_result) is never verified in userspace. Should there be a polling loop waiting for the timer to fire in the test, followed by an assertion checking that skel->bss->timer_result == 360? -- Sashiko AI review · https://sashiko.dev/#/patchset/20260417034658.2625353-1-yonghong.song@linux.dev?part=14 ^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 14/16] selftests/bpf: Add tests for BPF function stack arguments 2026-04-17 3:48 ` [PATCH bpf-next v5 14/16] selftests/bpf: Add tests for BPF function " Yonghong Song 2026-04-17 4:20 ` sashiko-bot @ 2026-04-18 0:52 ` bot+bpf-ci 1 sibling, 0 replies; 45+ messages in thread From: bot+bpf-ci @ 2026-04-18 0:52 UTC (permalink / raw) To: yonghong.song, bpf Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai [-- Attachment #1: Type: text/plain, Size: 2199 bytes --] > diff --git a/tools/testing/selftests/bpf/prog_tests/stack_arg.c b/tools/testing/selftests/bpf/prog_tests/stack_arg.c > new file mode 100644 > index 000000000000..130eaf1c4a78 > --- /dev/null > +++ b/tools/testing/selftests/bpf/prog_tests/stack_arg.c > @@ -0,0 +1,133 @@ > +// SPDX-License-Identifier: GPL-2.0 > +/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */ > + > +#include <test_progs.h> > +#include <network_helpers.h> > +#include "stack_arg.skel.h" > +#include "stack_arg_kfunc.skel.h" > + > +static void run_subtest(struct bpf_program *prog, int expected) > +{ > + int err, prog_fd; > + LIBBPF_OPTS(bpf_test_run_opts, topts, > + .data_in = &pkt_v4, > + .data_size_in = sizeof(pkt_v4), > + .repeat = 1, > + ); > + > + prog_fd = bpf_program__fd(prog); > + err = bpf_prog_test_run_opts(prog_fd, &topts); > + ASSERT_OK(err, "test_run"); > + ASSERT_EQ(topts.retval, expected, "retval"); > +} [ ... 
] > +static void test_async_cb_many(void) > +{ > + struct stack_arg *skel; > + > + skel = stack_arg__open(); > + if (!ASSERT_OK_PTR(skel, "open")) > + return; > + > + if (!skel->rodata->has_stack_arg) { > + test__skip(); > + goto out; > + } > + > + if (!ASSERT_OK(stack_arg__load(skel), "load")) > + goto out; > + > + run_subtest(skel->progs.test_async_cb_many_args, 0); > + > +out: > + stack_arg__destroy(skel); > +} Does this test adequately verify the asynchronous timer callback execution and its result? Since run_subtest() only runs the program and asserts the return value is 0, the skeleton is immediately destroyed afterward. The timer callback might not have time to execute and could be canceled during map destruction. The result of the callback (timer_result) is never verified in userspace. Should there be a polling loop waiting for the timer to fire, followed by an assertion checking that skel->bss->timer_result == 360? Reference: https://lore.kernel.org/bpf/20260417042017.CF065C19425@smtp.kernel.org/ [ ... ] --- AI reviewed your patch. Please fix the bug or email reply why it's not a bug. See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24592562564 ^ permalink raw reply [flat|nested] 45+ messages in thread
* [PATCH bpf-next v5 15/16] selftests/bpf: Add negative test for greater-than-8-byte kfunc stack argument 2026-04-17 3:46 [PATCH bpf-next v5 00/16] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song ` (13 preceding siblings ...) 2026-04-17 3:48 ` [PATCH bpf-next v5 14/16] selftests/bpf: Add tests for BPF function " Yonghong Song @ 2026-04-17 3:48 ` Yonghong Song 2026-04-17 4:28 ` sashiko-bot 2026-04-18 0:52 ` bot+bpf-ci 2026-04-17 3:48 ` [PATCH bpf-next v5 16/16] selftests/bpf: Add verifier tests for stack argument validation Yonghong Song 15 siblings, 2 replies; 45+ messages in thread From: Yonghong Song @ 2026-04-17 3:48 UTC (permalink / raw) To: bpf Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Jose E . Marchesi, kernel-team, Martin KaFai Lau Add a test that the verifier rejects kfunc calls where a stack argument exceeds 8 bytes (the register-sized slot limit). Signed-off-by: Yonghong Song <yonghong.song@linux.dev> --- .../selftests/bpf/prog_tests/stack_arg_fail.c | 24 ++++++++++++++ .../selftests/bpf/progs/stack_arg_fail.c | 32 +++++++++++++++++++ .../selftests/bpf/test_kmods/bpf_testmod.c | 7 ++++ .../bpf/test_kmods/bpf_testmod_kfunc.h | 8 +++++ 4 files changed, 71 insertions(+) create mode 100644 tools/testing/selftests/bpf/prog_tests/stack_arg_fail.c create mode 100644 tools/testing/selftests/bpf/progs/stack_arg_fail.c diff --git a/tools/testing/selftests/bpf/prog_tests/stack_arg_fail.c b/tools/testing/selftests/bpf/prog_tests/stack_arg_fail.c new file mode 100644 index 000000000000..328a79edee45 --- /dev/null +++ b/tools/testing/selftests/bpf/prog_tests/stack_arg_fail.c @@ -0,0 +1,24 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. 
*/ + +#include <test_progs.h> +#include "stack_arg_fail.skel.h" + +void test_stack_arg_fail(void) +{ + struct stack_arg_fail *skel; + + skel = stack_arg_fail__open(); + if (!ASSERT_OK_PTR(skel, "open")) + return; + + if (!skel->rodata->has_stack_arg) { + test__skip(); + goto out; + } + + ASSERT_ERR(stack_arg_fail__load(skel), "load_should_fail"); + +out: + stack_arg_fail__destroy(skel); +} diff --git a/tools/testing/selftests/bpf/progs/stack_arg_fail.c b/tools/testing/selftests/bpf/progs/stack_arg_fail.c new file mode 100644 index 000000000000..caa63b6f6a80 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/stack_arg_fail.c @@ -0,0 +1,32 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */ + +#include <vmlinux.h> +#include <bpf/bpf_helpers.h> +#include "../test_kmods/bpf_testmod_kfunc.h" + +#if defined(__BPF_FEATURE_STACK_ARGUMENT) + +const volatile bool has_stack_arg = true; + +SEC("tc") +int test_stack_arg_big(struct __sk_buff *skb) +{ + struct prog_test_big_arg s = { .a = 1, .b = 2 }; + + return bpf_kfunc_call_stack_arg_big(1, 2, 3, 4, 5, s); +} + +#else + +const volatile bool has_stack_arg = false; + +SEC("tc") +int test_stack_arg_big(struct __sk_buff *skb) +{ + return 0; +} + +#endif + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c index ea82a6d32d9f..bd467560787e 100644 --- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c +++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c @@ -882,6 +882,12 @@ __bpf_kfunc u64 bpf_kfunc_call_stack_arg_timer(u64 a, u64 b, u64 c, u64 d, u64 e return a + b + c + d + e; } +__bpf_kfunc u64 bpf_kfunc_call_stack_arg_big(u64 a, u64 b, u64 c, u64 d, u64 e, + struct prog_test_big_arg s) +{ + return a + b + c + d + e + s.a + s.b; +} + static struct prog_test_ref_kfunc prog_test_struct = { .a = 42, .b = 108, @@ -1353,6 +1359,7 @@ BTF_ID_FLAGS(func, 
bpf_kfunc_call_stack_arg_mem) BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_iter) BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_const_str) BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_timer) +BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_big) BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_pass1) BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail1) BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail2) diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h index 2c1cb118f886..2a40f80b074a 100644 --- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h +++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h @@ -50,6 +50,11 @@ struct prog_test_pass2 { } x; }; +struct prog_test_big_arg { + long a; + long b; +}; + struct prog_test_fail1 { void *p; int x; @@ -130,6 +135,9 @@ __u64 bpf_kfunc_call_stack_arg_const_str(__u64 a, __u64 b, __u64 c, __u64 d, __u const char *str__str) __ksym; __u64 bpf_kfunc_call_stack_arg_timer(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e, struct bpf_timer *timer) __ksym; +__u64 bpf_kfunc_call_stack_arg_big(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e, + struct prog_test_big_arg s) __ksym; + void bpf_kfunc_call_test_pass_ctx(struct __sk_buff *skb) __ksym; void bpf_kfunc_call_test_pass1(struct prog_test_pass1 *p) __ksym; void bpf_kfunc_call_test_pass2(struct prog_test_pass2 *p) __ksym; -- 2.52.0 ^ permalink raw reply related [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 15/16] selftests/bpf: Add negative test for greater-than-8-byte kfunc stack argument 2026-04-17 3:48 ` [PATCH bpf-next v5 15/16] selftests/bpf: Add negative test for greater-than-8-byte kfunc stack argument Yonghong Song @ 2026-04-17 4:28 ` sashiko-bot 2026-04-18 0:52 ` bot+bpf-ci 1 sibling, 0 replies; 45+ messages in thread From: sashiko-bot @ 2026-04-17 4:28 UTC (permalink / raw) To: Yonghong Song; +Cc: bpf Sashiko AI review found 2 potential issue(s): - [Medium] The `prog_test_big_arg` struct uses `long`, which fails to test the >8 byte limit on 32-bit architectures and causes BTF size mismatches. - [Medium] Using `ASSERT_ERR(stack_arg_fail__load(skel))` masks the actual verifier error reason, allowing the negative test to pass for unrelated reasons. -- commit 1ec644adfded3aa88befcb32fc73958c68b7e018 Author: Yonghong Song <yonghong.song@linux.dev> selftests/bpf: Add negative test for greater-than-8-byte kfunc stack argument This commit adds a selftest to ensure the verifier rejects kfunc calls where a stack argument exceeds the 8-byte register-sized slot limit. It introduces a new kfunc taking a struct argument and a BPF program that attempts to pass a large struct to it. > diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h > index 2c1cb118f8867..2a40f80b074a3 100644 > --- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h > +++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h > @@ -50,6 +50,11 @@ struct prog_test_pass2 { > } x; > }; > > +struct prog_test_big_arg { > + long a; > + long b; > +}; > + Does this structure achieve the intended >8 byte size on 32-bit architectures? When the test kernel module is compiled on a 32-bit host, long is 32-bit, making the struct exactly 8 bytes. Since the BPF target always treats long as a 64-bit integer, this makes the struct 16 bytes in the BPF program. 
This size discrepancy between the BPF program's BTF and the host kernel's BTF might cause the verifier to reject the load for a BTF size mismatch rather than the intended stack argument limit. Could these fields use explicitly sized types like __u64 to ensure the struct is consistently larger than 8 bytes across all architectures? [ ... ] > diff --git a/tools/testing/selftests/bpf/prog_tests/stack_arg_fail.c b/tools/testing/selftests/bpf/prog_tests/stack_arg_fail.c > new file mode 100644 > --- /dev/null > +++ b/tools/testing/selftests/bpf/prog_tests/stack_arg_fail.c > @@ -0,0 +1,24 @@ [ ... ] > + if (!skel->rodata->has_stack_arg) { > + test__skip(); > + goto out; > + } > + > + ASSERT_ERR(stack_arg_fail__load(skel), "load_should_fail"); Could this generic error assertion mask the actual verifier rejection reason? If the program fails to load due to an unrelated issue, such as a missing BTF, a syntax error, or the architecture-dependent size mismatch described above, the test will incorrectly report success because it only checks that the load operation failed. Would it be better to refactor this test to use the standard test_loader infrastructure (via RUN_TESTS) with __failure and __msg annotations in the BPF program? That would ensure the rejection is explicitly due to the argument size limit. > + > +out: > + stack_arg_fail__destroy(skel); > +} -- Sashiko AI review · https://sashiko.dev/#/patchset/20260417034658.2625353-1-yonghong.song@linux.dev?part=15 ^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 15/16] selftests/bpf: Add negative test for greater-than-8-byte kfunc stack argument 2026-04-17 3:48 ` [PATCH bpf-next v5 15/16] selftests/bpf: Add negative test for greater-than-8-byte kfunc stack argument Yonghong Song 2026-04-17 4:28 ` sashiko-bot @ 2026-04-18 0:52 ` bot+bpf-ci 1 sibling, 0 replies; 45+ messages in thread From: bot+bpf-ci @ 2026-04-18 0:52 UTC (permalink / raw) To: yonghong.song, bpf Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai [-- Attachment #1: Type: text/plain, Size: 2788 bytes --] > diff --git a/tools/testing/selftests/bpf/prog_tests/stack_arg_fail.c b/tools/testing/selftests/bpf/prog_tests/stack_arg_fail.c > new file mode 100644 > index 000000000000..328a79edee45 > --- /dev/null > +++ b/tools/testing/selftests/bpf/prog_tests/stack_arg_fail.c > @@ -0,0 +1,24 @@ > +// SPDX-License-Identifier: GPL-2.0 > +/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */ > + > +#include <test_progs.h> > +#include "stack_arg_fail.skel.h" > + > +void test_stack_arg_fail(void) > +{ > + struct stack_arg_fail *skel; > + > + skel = stack_arg_fail__open(); > + if (!ASSERT_OK_PTR(skel, "open")) > + return; > + > + if (!skel->rodata->has_stack_arg) { > + test__skip(); > + goto out; > + } > + > + ASSERT_ERR(stack_arg_fail__load(skel), "load_should_fail"); Could this generic error assertion mask the actual verifier rejection reason? If the program fails to load due to an unrelated issue, such as a missing BTF, a syntax error, or the architecture-dependent size mismatch described below, the test will incorrectly report success because it only checks that the load operation failed. Would it be better to refactor this test to use the standard test_loader infrastructure (via RUN_TESTS) with __failure and __msg annotations in the BPF program? That would ensure the rejection is explicitly due to the argument size limit. 
> + > +out: > + stack_arg_fail__destroy(skel); > +} [ ... ] > diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h > index 2c1cb118f886..2a40f80b074a 100644 > --- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h > +++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h > @@ -50,6 +50,11 @@ struct prog_test_pass2 { > } x; > }; > > +struct prog_test_big_arg { > + long a; > + long b; > +}; Does this structure achieve the intended >8 byte size on 32-bit architectures? When the test kernel module is compiled on a 32-bit host, long is 32-bit, making the struct exactly 8 bytes. Since the BPF target always treats long as a 64-bit integer, this makes the struct 16 bytes in the BPF program. This size discrepancy between the BPF program's BTF and the host kernel's BTF might cause the verifier to reject the load for a BTF size mismatch rather than the intended stack argument limit. Could these fields use explicitly sized types like __u64 to ensure the struct is consistently larger than 8 bytes across all architectures? > + > struct prog_test_fail1 { > void *p; > int x; [ ... ] --- AI reviewed your patch. Please fix the bug or email reply why it's not a bug. See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24592562564 ^ permalink raw reply [flat|nested] 45+ messages in thread
* [PATCH bpf-next v5 16/16] selftests/bpf: Add verifier tests for stack argument validation 2026-04-17 3:46 [PATCH bpf-next v5 00/16] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song ` (14 preceding siblings ...) 2026-04-17 3:48 ` [PATCH bpf-next v5 15/16] selftests/bpf: Add negative test for greater-than-8-byte kfunc stack argument Yonghong Song @ 2026-04-17 3:48 ` Yonghong Song 2026-04-17 4:38 ` sashiko-bot 2026-04-18 0:52 ` bot+bpf-ci 15 siblings, 2 replies; 45+ messages in thread From: Yonghong Song @ 2026-04-17 3:48 UTC (permalink / raw) To: bpf Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann, Jose E . Marchesi, kernel-team, Martin KaFai Lau Add inline-asm based verifier tests that exercise stack argument validation logic directly. Positive tests: - subprog call with 6 args - Two sequential calls to different subprogs (6-arg and 7-arg) - Share an r11 store for both branches - Sequential calls reuse stack args Negative tests — verifier rejection: - Read from uninitialized incoming stack arg slot - Gap in outgoing slots: only r11-16 written, r11-8 missing - Sub-8-byte stack arg write (4 bytes instead of 8) - Write at r11-80, exceeding max 7 stack args - Missing store on one branch with a shared store Negative tests — pointer/ref tracking: - Pruning type mismatch: one branch stores PTR_TO_STACK, the other stores a scalar, callee dereferences — must not prune - Release invalidation: bpf_sk_release invalidates a socket pointer stored in a stack arg slot - Packet pointer invalidation: bpf_skb_pull_data invalidates a packet pointer stored in a stack arg slot - Null propagation: PTR_TO_MAP_VALUE_OR_NULL stored in stack arg slot, null branch attempts dereference via callee Signed-off-by: Yonghong Song <yonghong.song@linux.dev> --- .../selftests/bpf/prog_tests/verifier.c | 2 + .../selftests/bpf/progs/verifier_stack_arg.c | 463 ++++++++++++++++++ 2 files changed, 465 insertions(+) create mode 100644
tools/testing/selftests/bpf/progs/verifier_stack_arg.c diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c index a96b25ebff23..aef21cf2987b 100644 --- a/tools/testing/selftests/bpf/prog_tests/verifier.c +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c @@ -91,6 +91,7 @@ #include "verifier_sockmap_mutate.skel.h" #include "verifier_spill_fill.skel.h" #include "verifier_spin_lock.skel.h" +#include "verifier_stack_arg.skel.h" #include "verifier_stack_ptr.skel.h" #include "verifier_store_release.skel.h" #include "verifier_subprog_precision.skel.h" @@ -238,6 +239,7 @@ void test_verifier_sock_addr(void) { RUN(verifier_sock_addr); } void test_verifier_sockmap_mutate(void) { RUN(verifier_sockmap_mutate); } void test_verifier_spill_fill(void) { RUN(verifier_spill_fill); } void test_verifier_spin_lock(void) { RUN(verifier_spin_lock); } +void test_verifier_stack_arg(void) { RUN(verifier_stack_arg); } void test_verifier_stack_ptr(void) { RUN(verifier_stack_ptr); } void test_verifier_store_release(void) { RUN(verifier_store_release); } void test_verifier_subprog_precision(void) { RUN(verifier_subprog_precision); } diff --git a/tools/testing/selftests/bpf/progs/verifier_stack_arg.c b/tools/testing/selftests/bpf/progs/verifier_stack_arg.c new file mode 100644 index 000000000000..d212b6c3cac7 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/verifier_stack_arg.c @@ -0,0 +1,463 @@ +// SPDX-License-Identifier: GPL-2.0 +/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. 
*/ + +#include <linux/bpf.h> +#include <bpf/bpf_helpers.h> +#include "bpf_misc.h" + +struct { + __uint(type, BPF_MAP_TYPE_HASH); + __uint(max_entries, 1); + __type(key, long long); + __type(value, long long); +} map_hash_8b SEC(".maps"); + +#if defined(__TARGET_ARCH_x86) && defined(__BPF_FEATURE_STACK_ARGUMENT) + +__noinline __used +static int subprog_6args(int a, int b, int c, int d, int e, int f) +{ + return a + b + c + d + e + f; +} + +__noinline __used +static int subprog_7args(int a, int b, int c, int d, int e, int f, int g) +{ + return a + b + c + d + e + f + g; +} + +__noinline __used +static int subprog_8args(int a, int b, int c, int d, int e, int f, int g, int h) +{ + return a + b + c + d + e + f + g + h; +} + +__noinline __used +static long subprog_deref_arg6(long a, long b, long c, long d, long e, long *f) +{ + return *f; +} + +SEC("tc") +__description("stack_arg: subprog with 6 args") +__success +__arch_x86_64 +__naked void stack_arg_6args(void) +{ + asm volatile ( + "r1 = 1;" + "r2 = 2;" + "r3 = 3;" + "r4 = 4;" + "r5 = 5;" + "*(u64 *)(r11 - 8) = 6;" + "call subprog_6args;" + "exit;" + ::: __clobber_all + ); +} + +SEC("tc") +__description("stack_arg: two subprogs with >5 args") +__success +__arch_x86_64 +__naked void stack_arg_two_subprogs(void) +{ + asm volatile ( + "r1 = 1;" + "r2 = 2;" + "r3 = 3;" + "r4 = 4;" + "r5 = 5;" + "*(u64 *)(r11 - 8) = 10;" + "call subprog_6args;" + "r6 = r0;" + "r1 = 1;" + "r2 = 2;" + "r3 = 3;" + "r4 = 4;" + "r5 = 5;" + "*(u64 *)(r11 - 16) = 30;" + "*(u64 *)(r11 - 8) = 20;" + "call subprog_7args;" + "r0 += r6;" + "exit;" + ::: __clobber_all + ); +} + +SEC("tc") +__description("stack_arg: read from uninitialized stack arg slot") +__failure +__arch_x86_64 +__msg("invalid read from stack arg") +__naked void stack_arg_read_uninitialized(void) +{ + asm volatile ( + "r0 = *(u64 *)(r11 + 8);" + "r0 = 0;" + "exit;" + ::: __clobber_all + ); +} + +SEC("tc") +__description("stack_arg: gap at offset -8, only wrote -16") +__failure 
+__arch_x86_64 +__msg("stack arg#6 not properly initialized") +__naked void stack_arg_gap_at_minus8(void) +{ + asm volatile ( + "r1 = 1;" + "r2 = 2;" + "r3 = 3;" + "r4 = 4;" + "r5 = 5;" + "*(u64 *)(r11 - 16) = 30;" + "call subprog_7args;" + "exit;" + ::: __clobber_all + ); +} + +SEC("tc") +__description("stack_arg: incorrect size of stack arg write") +__failure +__arch_x86_64 +__msg("stack arg write must be 8 bytes, got 4") +__naked void stack_arg_not_written(void) +{ + asm volatile ( + "r1 = 1;" + "r2 = 2;" + "r3 = 3;" + "r4 = 4;" + "r5 = 5;" + "*(u32 *)(r11 - 8) = 30;" + "call subprog_6args;" + "exit;" + ::: __clobber_all + ); +} + +SEC("tc") +__description("stack_arg: pruning with different stack arg types") +__failure +__flag(BPF_F_TEST_STATE_FREQ) +__arch_x86_64 +__msg("R1 invalid mem access") +__naked void stack_arg_pruning_type_mismatch(void) +{ + asm volatile ( + "call %[bpf_get_prandom_u32];" + "r6 = r0;" + /* local = 0 on program stack */ + "r7 = 0;" + "*(u64 *)(r10 - 8) = r7;" + /* Branch based on random value */ + "if r6 s> 3 goto l0_%=;" + /* Path 1: store stack pointer to outgoing arg6 */ + "r1 = r10;" + "r1 += -8;" + "*(u64 *)(r11 - 8) = r1;" + "goto l1_%=;" + "l0_%=:" + /* Path 2: store scalar to outgoing arg6 */ + "*(u64 *)(r11 - 8) = 42;" + "l1_%=:" + /* Call subprog that dereferences arg6 */ + "r1 = r6;" + "r2 = 0;" + "r3 = 0;" + "r4 = 0;" + "r5 = 0;" + "call subprog_deref_arg6;" + "exit;" + :: __imm(bpf_get_prandom_u32) + : __clobber_all + ); +} + +SEC("tc") +__description("stack_arg: release_reference invalidates stack arg slot") +__failure +__arch_x86_64 +__msg("R1 invalid sock access") +__naked void stack_arg_release_ref(void) +{ + asm volatile ( + "r6 = r1;" + /* struct bpf_sock_tuple tuple = {} */ + "r2 = 0;" + "*(u32 *)(r10 - 8) = r2;" + "*(u64 *)(r10 - 16) = r2;" + "*(u64 *)(r10 - 24) = r2;" + "*(u64 *)(r10 - 32) = r2;" + "*(u64 *)(r10 - 40) = r2;" + "*(u64 *)(r10 - 48) = r2;" + /* sk = bpf_sk_lookup_tcp(ctx, &tuple, sizeof(tuple), 0, 0) 
*/ + "r1 = r6;" + "r2 = r10;" + "r2 += -48;" + "r3 = %[sizeof_bpf_sock_tuple];" + "r4 = 0;" + "r5 = 0;" + "call %[bpf_sk_lookup_tcp];" + /* r0 = sk (PTR_TO_SOCK_OR_NULL) */ + "if r0 == 0 goto l0_%=;" + /* Store sock ref to outgoing arg6 slot */ + "*(u64 *)(r11 - 8) = r0;" + /* Release the reference — invalidates the stack arg slot */ + "r1 = r0;" + "call %[bpf_sk_release];" + /* Call subprog that dereferences arg6 — should fail */ + "r1 = 1;" + "r2 = 2;" + "r3 = 3;" + "r4 = 4;" + "r5 = 5;" + "call subprog_deref_arg6;" + "l0_%=:" + "r0 = 0;" + "exit;" + : + : __imm(bpf_sk_lookup_tcp), + __imm(bpf_sk_release), + __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple)) + : __clobber_all + ); +} + +SEC("tc") +__description("stack_arg: pkt pointer in stack arg slot invalidated after pull_data") +__failure +__arch_x86_64 +__msg("invalid access to packet") +__naked void stack_arg_stale_pkt_ptr(void) +{ + asm volatile ( + "r6 = r1;" + "r7 = *(u32 *)(r6 + %[__sk_buff_data]);" + "r8 = *(u32 *)(r6 + %[__sk_buff_data_end]);" + /* check pkt has at least 1 byte */ + "r0 = r7;" + "r0 += 1;" + "if r0 > r8 goto l0_%=;" + /* Store valid pkt pointer to outgoing arg6 slot */ + "*(u64 *)(r11 - 8) = r7;" + /* bpf_skb_pull_data invalidates all pkt pointers */ + "r1 = r6;" + "r2 = 0;" + "call %[bpf_skb_pull_data];" + /* Call subprog that dereferences arg6 — should fail */ + "r1 = 1;" + "r2 = 2;" + "r3 = 3;" + "r4 = 4;" + "r5 = 5;" + "call subprog_deref_arg6;" + "l0_%=:" + "r0 = 0;" + "exit;" + : + : __imm(bpf_skb_pull_data), + __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)), + __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end)) + : __clobber_all + ); +} + +SEC("tc") +__description("stack_arg: null propagation rejects deref on null branch") +__failure +__arch_x86_64 +__msg("R1 invalid mem access") +__naked void stack_arg_null_propagation_fail(void) +{ + asm volatile ( + "r1 = 0;" + "*(u64 *)(r10 - 8) = r1;" + /* r0 = 
bpf_map_lookup_elem(&map_hash_8b, &key) */ + "r2 = r10;" + "r2 += -8;" + "r1 = %[map_hash_8b] ll;" + "call %[bpf_map_lookup_elem];" + /* Store PTR_TO_MAP_VALUE_OR_NULL to outgoing arg6 slot */ + "*(u64 *)(r11 - 8) = r0;" + /* null check on r0 */ + "if r0 != 0 goto l0_%=;" + /* + * On null branch, outgoing slot is SCALAR(0). + * Call subprog that dereferences arg6 — should fail. + */ + "r1 = 0;" + "r2 = 0;" + "r3 = 0;" + "r4 = 0;" + "r5 = 0;" + "call subprog_deref_arg6;" + "l0_%=:" + "r0 = 0;" + "exit;" + : + : __imm(bpf_map_lookup_elem), + __imm_addr(map_hash_8b) + : __clobber_all + ); +} + +SEC("tc") +__description("stack_arg: missing store on one branch") +__failure +__arch_x86_64 +__msg("stack arg#6 not properly initialized") +__naked void stack_arg_missing_store_one_branch(void) +{ + asm volatile ( + "r1 = 1;" + "r2 = 2;" + "r3 = 3;" + "r4 = 4;" + "r5 = 5;" + /* Write arg7 (r11-16) before branch */ + "*(u64 *)(r11 - 16) = 20;" + "call %[bpf_get_prandom_u32];" + "if r0 > 0 goto l0_%=;" + /* Path 1: write arg6 and call */ + "*(u64 *)(r11 - 8) = 10;" + "r1 = 1;" + "r2 = 2;" + "r3 = 3;" + "r4 = 4;" + "r5 = 5;" + "call subprog_7args;" + "goto l1_%=;" + "l0_%=:" + /* Path 2: missing arg6 store, call should fail */ + "r1 = 1;" + "r2 = 2;" + "r3 = 3;" + "r4 = 4;" + "r5 = 5;" + "call subprog_7args;" + "l1_%=:" + "r0 = 0;" + "exit;" + :: __imm(bpf_get_prandom_u32) + : __clobber_all + ); +} + +SEC("tc") +__description("stack_arg: share a store for both branches") +__success __retval(0) +__arch_x86_64 +__naked void stack_arg_shared_store(void) +{ + asm volatile ( + "r1 = 1;" + "r2 = 2;" + "r3 = 3;" + "r4 = 4;" + "r5 = 5;" + /* Write arg7 (r11-16) before branch */ + "*(u64 *)(r11 - 16) = 20;" + "call %[bpf_get_prandom_u32];" + "if r0 > 0 goto l0_%=;" + /* Path 1: write arg6 and call */ + "*(u64 *)(r11 - 8) = 10;" + "r1 = 1;" + "r2 = 2;" + "r3 = 3;" + "r4 = 4;" + "r5 = 5;" + "call subprog_7args;" + "goto l1_%=;" + "l0_%=:" + /* Path 2: also write arg6 and call */ + "*(u64 
*)(r11 - 8) = 30;" + "r1 = 1;" + "r2 = 2;" + "r3 = 3;" + "r4 = 4;" + "r5 = 5;" + "call subprog_7args;" + "l1_%=:" + "r0 = 0;" + "exit;" + :: __imm(bpf_get_prandom_u32) + : __clobber_all + ); +} + +SEC("tc") +__description("stack_arg: write beyond max outgoing depth") +__failure +__arch_x86_64 +__msg("stack arg write offset -80 exceeds max 7 stack args") +__naked void stack_arg_write_beyond_max(void) +{ + asm volatile ( + "r1 = 1;" + "r2 = 2;" + "r3 = 3;" + "r4 = 4;" + "r5 = 5;" + /* Write to offset -80, way beyond any callee's needs */ + "*(u64 *)(r11 - 80) = 99;" + "*(u64 *)(r11 - 16) = 20;" + "*(u64 *)(r11 - 8) = 10;" + "call subprog_7args;" + "r0 = 0;" + "exit;" + ::: __clobber_all + ); +} + +SEC("tc") +__description("stack_arg: sequential calls reuse slots") +__success __retval(56) +__arch_x86_64 +__naked void stack_arg_sequential_calls(void) +{ + asm volatile ( + /* call1: subprog_7args(1,2,3,4,5,6,7) = 28 */ + "r1 = 1;" + "r2 = 2;" + "r3 = 3;" + "r4 = 4;" + "r5 = 5;" + "*(u64 *)(r11 - 8) = 6;" + "*(u64 *)(r11 - 16) = 7;" + "call subprog_7args;" + "r6 = r0;" + /* call2: reuse stack arg slots from call1, + * subprog_7args(1,2,3,4,5,6,7) = 28 + */ + "r1 = 1;" + "r2 = 2;" + "r3 = 3;" + "r4 = 4;" + "r5 = 5;" + "call subprog_7args;" + /* total: 28 + 28 = 56 */ + "r0 += r6;" + "exit;" + ::: __clobber_all + ); +} + +#else + +SEC("socket") +__description("stack_arg is not supported by compiler or jit, use a dummy test") +__success +int dummy_test(void) +{ + return 0; +} + +#endif + +char _license[] SEC("license") = "GPL"; -- 2.52.0 ^ permalink raw reply related [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 16/16] selftests/bpf: Add verifier tests for stack argument validation 2026-04-17 3:48 ` [PATCH bpf-next v5 16/16] selftests/bpf: Add verifier tests for stack argument validation Yonghong Song @ 2026-04-17 4:38 ` sashiko-bot 2026-04-18 0:52 ` bot+bpf-ci 1 sibling, 0 replies; 45+ messages in thread From: sashiko-bot @ 2026-04-17 4:38 UTC (permalink / raw) To: Yonghong Song; +Cc: bpf Sashiko AI review found 3 potential issue(s): - [Medium] Negative verifier tests for stack arguments use inherently invalid memory accesses, causing them to pass even if the verifier has bugs. - [Medium] `stack_arg_read_uninitialized` reads from `r11 + 8` instead of a valid negative offset, failing to properly test uninitialized stack argument slots. - [Low] Unused `subprog_8args` function defined in the test file. -- commit 24b1eb352e7d04e7c35a9fbc88c6b91d08b17832 Author: Yonghong Song <yonghong.song@linux.dev> selftests/bpf: Add verifier tests for stack argument validation This commit adds inline-asm based verifier tests to directly exercise stack argument validation logic. > diff --git a/tools/testing/selftests/bpf/progs/verifier_stack_arg.c b/tools/testing/selftests/bpf/progs/verifier_stack_arg.c > new file mode 100644 > index 0000000000000..d212b6c3cac75 > --- /dev/null > +++ b/tools/testing/selftests/bpf/progs/verifier_stack_arg.c [ ... ] > +__noinline __used > +static int subprog_8args(int a, int b, int c, int d, int e, int f, int g, int h) > +{ > + return a + b + c + d + e + f + g + h; > +} Is subprog_8args intended to be used in any of the tests? It does not seem to be called anywhere in this file. [ ... 
] > +SEC("tc") > +__description("stack_arg: read from uninitialized stack arg slot") > +__failure > +__arch_x86_64 > +__msg("invalid read from stack arg") > +__naked void stack_arg_read_uninitialized(void) > +{ > + asm volatile ( > + "r0 = *(u64 *)(r11 + 8);" > + "r0 = 0;" > + "exit;" > + ::: __clobber_all > + ); > +} Should this read from r11 - 8 instead? Since outgoing stack arguments are at negative offsets from r11, reading from r11 + 8 is out of bounds and might be rejected for the wrong reason, rather than testing an uninitialized but valid stack argument slot. [ ... ] > +__noinline __used > +static long subprog_deref_arg6(long a, long b, long c, long d, long e, long *f) > +{ > + return *f; > +} [ ... ] > +SEC("tc") > +__description("stack_arg: release_reference invalidates stack arg slot") > +__failure > +__arch_x86_64 > +__msg("R1 invalid sock access") > +__naked void stack_arg_release_ref(void) > +{ > + asm volatile ( [ ... ] > + /* Release the reference — invalidates the stack arg slot */ > + "r1 = r0;" > + "call %[bpf_sk_release];" > + /* Call subprog that dereferences arg6 — should fail */ > + "r1 = 1;" > + "r2 = 2;" > + "r3 = 3;" > + "r4 = 4;" > + "r5 = 5;" > + "call subprog_deref_arg6;" Can this test pass even if the verifier fails to invalidate the stack slot? subprog_deref_arg6() performs an 8-byte read on the argument. Since BPF sockets only allow up to 4-byte reads, an 8-byte read is always invalid. If the reference isn't correctly invalidated, the valid socket pointer will still be rejected with "R1 invalid sock access". To properly test the invalidation, should the subprogram perform a valid 4-byte read instead? [ ... 
] > +SEC("tc") > +__description("stack_arg: pkt pointer in stack arg slot invalidated after pull_data") > +__failure > +__arch_x86_64 > +__msg("invalid access to packet") > +__naked void stack_arg_stale_pkt_ptr(void) > +{ > + asm volatile ( > + "r6 = r1;" > + "r7 = *(u32 *)(r6 + %[__sk_buff_data]);" > + "r8 = *(u32 *)(r6 + %[__sk_buff_data_end]);" > + /* check pkt has at least 1 byte */ > + "r0 = r7;" > + "r0 += 1;" > + "if r0 > r8 goto l0_%=;" > + /* Store valid pkt pointer to outgoing arg6 slot */ > + "*(u64 *)(r11 - 8) = r7;" > + /* bpf_skb_pull_data invalidates all pkt pointers */ > + "r1 = r6;" > + "r2 = 0;" > + "call %[bpf_skb_pull_data];" > + /* Call subprog that dereferences arg6 — should fail */ > + "r1 = 1;" > + "r2 = 2;" > + "r3 = 3;" > + "r4 = 4;" > + "r5 = 5;" > + "call subprog_deref_arg6;" Similarly, does this properly test packet pointer invalidation? The test verifies 1 byte of the packet pointer, but subprog_deref_arg6() reads 8 bytes. If bpf_skb_pull_data() fails to invalidate the pointer, the 8-byte read would still be rejected as out-of-bounds, yielding the expected "invalid access to packet" message. Should the packet bounds check verify at least 8 bytes so that a valid read is guaranteed if the pointer isn't invalidated? [ ... ] > +SEC("tc") > +__description("stack_arg: null propagation rejects deref on null branch") > +__failure > +__arch_x86_64 > +__msg("R1 invalid mem access") > +__naked void stack_arg_null_propagation_fail(void) > +{ > + asm volatile ( [ ... ] > + "call %[bpf_map_lookup_elem];" > + /* Store PTR_TO_MAP_VALUE_OR_NULL to outgoing arg6 slot */ > + "*(u64 *)(r11 - 8) = r0;" > + /* null check on r0 */ > + "if r0 != 0 goto l0_%=;" > + /* > + * On null branch, outgoing slot is SCALAR(0). > + * Call subprog that dereferences arg6 — should fail. > + */ > + "r1 = 0;" > + "r2 = 0;" > + "r3 = 0;" > + "r4 = 0;" > + "r5 = 0;" > + "call subprog_deref_arg6;" Does the __msg() substring match accept the wrong failure mode here? 
Dereferencing an unrefined PTR_TO_MAP_VALUE_OR_NULL yields "R1 invalid mem access 'map_value_or_null'", while dereferencing SCALAR yields "R1 invalid mem access 'scalar'". The substring match "R1 invalid mem access" accepts both, allowing the test to pass even if null propagation fails and the register is still tracked as a map pointer. Should it match the exact scalar error, or perhaps test the non-null branch for success instead? -- Sashiko AI review · https://sashiko.dev/#/patchset/20260417034658.2625353-1-yonghong.song@linux.dev?part=16 ^ permalink raw reply [flat|nested] 45+ messages in thread
* Re: [PATCH bpf-next v5 16/16] selftests/bpf: Add verifier tests for stack argument validation 2026-04-17 3:48 ` [PATCH bpf-next v5 16/16] selftests/bpf: Add verifier tests for stack argument validation Yonghong Song 2026-04-17 4:38 ` sashiko-bot @ 2026-04-18 0:52 ` bot+bpf-ci 1 sibling, 0 replies; 45+ messages in thread From: bot+bpf-ci @ 2026-04-18 0:52 UTC (permalink / raw) To: yonghong.song, bpf Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai [-- Attachment #1: Type: text/plain, Size: 5877 bytes --] > diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c > index a96b25ebff23..aef21cf2987b 100644 > --- a/tools/testing/selftests/bpf/prog_tests/verifier.c > +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c [ ... ] > diff --git a/tools/testing/selftests/bpf/progs/verifier_stack_arg.c b/tools/testing/selftests/bpf/progs/verifier_stack_arg.c > new file mode 100644 > index 000000000000..d212b6c3cac7 > --- /dev/null > +++ b/tools/testing/selftests/bpf/progs/verifier_stack_arg.c > @@ -0,0 +1,463 @@ > +// SPDX-License-Identifier: GPL-2.0 > +/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */ > + > +#include <linux/bpf.h> > +#include <bpf/bpf_helpers.h> > +#include "bpf_misc.h" [ ... ] > +__noinline __used > +static int subprog_8args(int a, int b, int c, int d, int e, int f, int g, int h) > +{ > + return a + b + c + d + e + f + g + h; > +} Is subprog_8args intended to be used in any of the tests? It does not seem to be called anywhere in this file. Reference: https://lore.kernel.org/bpf/20260417043846.66CFEC2BCB0@smtp.kernel.org/ [ ... 
] > +SEC("tc") > +__description("stack_arg: read from uninitialized stack arg slot") > +__failure > +__arch_x86_64 > +__msg("invalid read from stack arg") > +__naked void stack_arg_read_uninitialized(void) > +{ > + asm volatile ( > + "r0 = *(u64 *)(r11 + 8);" > + "r0 = 0;" > + "exit;" > + ::: __clobber_all > + ); > +} Should this read from r11 - 8 instead? Since outgoing stack arguments are at negative offsets from r11, reading from r11 + 8 is out of bounds and might be rejected for the wrong reason, rather than testing an uninitialized but valid stack argument slot. Reference: https://lore.kernel.org/bpf/20260417043846.66CFEC2BCB0@smtp.kernel.org/ [ ... ] > +SEC("tc") > +__description("stack_arg: release_reference invalidates stack arg slot") > +__failure > +__arch_x86_64 > +__msg("R1 invalid sock access") > +__naked void stack_arg_release_ref(void) > +{ [ ... ] > + /* Release the reference — invalidates the stack arg slot */ > + "r1 = r0;" > + "call %[bpf_sk_release];" > + /* Call subprog that dereferences arg6 — should fail */ > + "r1 = 1;" > + "r2 = 2;" > + "r3 = 3;" > + "r4 = 4;" > + "r5 = 5;" > + "call subprog_deref_arg6;" Can this test pass even if the verifier fails to invalidate the stack slot? subprog_deref_arg6() performs an 8-byte read on the argument. Since BPF sockets only allow up to 4-byte reads, an 8-byte read is always invalid. If the reference isn't correctly invalidated, the valid socket pointer will still be rejected with "R1 invalid sock access". To properly test the invalidation, should the subprogram perform a valid 4-byte read instead? Reference: https://lore.kernel.org/bpf/20260417043846.66CFEC2BCB0@smtp.kernel.org/ [ ... 
] > +SEC("tc") > +__description("stack_arg: pkt pointer in stack arg slot invalidated after pull_data") > +__failure > +__arch_x86_64 > +__msg("invalid access to packet") > +__naked void stack_arg_stale_pkt_ptr(void) > +{ > + asm volatile ( > + "r6 = r1;" > + "r7 = *(u32 *)(r6 + %[__sk_buff_data]);" > + "r8 = *(u32 *)(r6 + %[__sk_buff_data_end]);" > + /* check pkt has at least 1 byte */ > + "r0 = r7;" > + "r0 += 1;" > + "if r0 > r8 goto l0_%=;" > + /* Store valid pkt pointer to outgoing arg6 slot */ > + "*(u64 *)(r11 - 8) = r7;" > + /* bpf_skb_pull_data invalidates all pkt pointers */ > + "r1 = r6;" > + "r2 = 0;" > + "call %[bpf_skb_pull_data];" > + /* Call subprog that dereferences arg6 — should fail */ > + "r1 = 1;" > + "r2 = 2;" > + "r3 = 3;" > + "r4 = 4;" > + "r5 = 5;" > + "call subprog_deref_arg6;" Similarly, does this properly test packet pointer invalidation? The test verifies 1 byte of the packet pointer, but subprog_deref_arg6() reads 8 bytes. If bpf_skb_pull_data() fails to invalidate the pointer, the 8-byte read would still be rejected as out-of-bounds, yielding the expected "invalid access to packet" message. Should the packet bounds check verify at least 8 bytes so that a valid read is guaranteed if the pointer isn't invalidated? Reference: https://lore.kernel.org/bpf/20260417043846.66CFEC2BCB0@smtp.kernel.org/ [ ... ] > +SEC("tc") > +__description("stack_arg: null propagation rejects deref on null branch") > +__failure > +__arch_x86_64 > +__msg("R1 invalid mem access") > +__naked void stack_arg_null_propagation_fail(void) > +{ > + asm volatile ( > + "r1 = 0;" > + "*(u64 *)(r10 - 8) = r1;" > + /* r0 = bpf_map_lookup_elem(&map_hash_8b, &key) */ > + "r2 = r10;" > + "r2 += -8;" > + "r1 = %[map_hash_8b] ll;" > + "call %[bpf_map_lookup_elem];" > + /* Store PTR_TO_MAP_VALUE_OR_NULL to outgoing arg6 slot */ > + "*(u64 *)(r11 - 8) = r0;" > + /* null check on r0 */ > + "if r0 != 0 goto l0_%=;" > + /* > + * On null branch, outgoing slot is SCALAR(0). 
> + * Call subprog that dereferences arg6 — should fail. > + */ > + "r1 = 0;" > + "r2 = 0;" > + "r3 = 0;" > + "r4 = 0;" > + "r5 = 0;" > + "call subprog_deref_arg6;" Does the __msg() substring match accept the wrong failure mode here? Dereferencing an unrefined PTR_TO_MAP_VALUE_OR_NULL yields "R1 invalid mem access 'map_value_or_null'", while dereferencing SCALAR yields "R1 invalid mem access 'scalar'". The substring match "R1 invalid mem access" accepts both, allowing the test to pass even if null propagation fails and the register is still tracked as a map pointer. Should it match the exact scalar error, or perhaps test the non-null branch for success instead? Reference: https://lore.kernel.org/bpf/20260417043846.66CFEC2BCB0@smtp.kernel.org/ --- AI reviewed your patch. Please fix the bug or email reply why it's not a bug. See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24592562564 ^ permalink raw reply [flat|nested] 45+ messages in thread
end of thread, other threads:[~2026-04-18 1:04 UTC | newest] Thread overview: 45+ messages (download: mbox.gz follow: Atom feed -- links below jump to the message on this page -- 2026-04-17 3:46 [PATCH bpf-next v5 00/16] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song 2026-04-17 3:47 ` [PATCH bpf-next v5 01/16] bpf: Remove unused parameter from check_map_kptr_access() Yonghong Song 2026-04-17 3:47 ` [PATCH bpf-next v5 02/16] bpf: Refactor to avoid redundant calculation of bpf_reg_state Yonghong Song 2026-04-17 3:47 ` [PATCH bpf-next v5 03/16] bpf: Refactor to handle memory and size together Yonghong Song 2026-04-17 4:49 ` sashiko-bot 2026-04-18 0:52 ` bot+bpf-ci 2026-04-17 3:47 ` [PATCH bpf-next v5 04/16] bpf: Prepare verifier logs for upcoming kfunc stack arguments Yonghong Song 2026-04-17 3:47 ` [PATCH bpf-next v5 05/16] bpf: Introduce bpf register BPF_REG_PARAMS Yonghong Song 2026-04-17 3:47 ` [PATCH bpf-next v5 06/16] bpf: Limit the scope of BPF_REG_PARAMS usage Yonghong Song 2026-04-17 4:30 ` bot+bpf-ci 2026-04-17 4:50 ` sashiko-bot 2026-04-18 1:04 ` bot+bpf-ci 2026-04-17 3:47 ` [PATCH bpf-next v5 07/16] bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments Yonghong Song 2026-04-17 4:30 ` bot+bpf-ci 2026-04-18 0:52 ` bot+bpf-ci 2026-04-17 3:47 ` [PATCH bpf-next v5 08/16] bpf: Support stack arguments for bpf functions Yonghong Song 2026-04-17 4:35 ` sashiko-bot 2026-04-17 4:43 ` bot+bpf-ci 2026-04-18 1:04 ` bot+bpf-ci 2026-04-17 3:47 ` [PATCH bpf-next v5 09/16] bpf: Reject stack arguments in non-JITed programs Yonghong Song 2026-04-17 4:30 ` bot+bpf-ci 2026-04-18 0:52 ` bot+bpf-ci 2026-04-17 3:47 ` [PATCH bpf-next v5 10/16] bpf: Reject stack arguments if tail call reachable Yonghong Song 2026-04-17 4:08 ` sashiko-bot 2026-04-17 4:30 ` bot+bpf-ci 2026-04-18 1:04 ` bot+bpf-ci 2026-04-17 3:47 ` [PATCH bpf-next v5 11/16] bpf: Support stack arguments for kfunc calls Yonghong Song 2026-04-17 4:40 ` sashiko-bot 2026-04-17 4:43 ` 
bot+bpf-ci 2026-04-18 1:04 ` bot+bpf-ci 2026-04-17 3:47 ` [PATCH bpf-next v5 12/16] bpf: Enable stack argument support for x86_64 Yonghong Song 2026-04-17 4:30 ` bot+bpf-ci 2026-04-17 5:03 ` sashiko-bot 2026-04-18 1:04 ` bot+bpf-ci 2026-04-17 3:48 ` [PATCH bpf-next v5 13/16] bpf,x86: Implement JIT support for stack arguments Yonghong Song 2026-04-17 4:44 ` sashiko-bot 2026-04-17 3:48 ` [PATCH bpf-next v5 14/16] selftests/bpf: Add tests for BPF function " Yonghong Song 2026-04-17 4:20 ` sashiko-bot 2026-04-18 0:52 ` bot+bpf-ci 2026-04-17 3:48 ` [PATCH bpf-next v5 15/16] selftests/bpf: Add negative test for greater-than-8-byte kfunc stack argument Yonghong Song 2026-04-17 4:28 ` sashiko-bot 2026-04-18 0:52 ` bot+bpf-ci 2026-04-17 3:48 ` [PATCH bpf-next v5 16/16] selftests/bpf: Add verifier tests for stack argument validation Yonghong Song 2026-04-17 4:38 ` sashiko-bot 2026-04-18 0:52 ` bot+bpf-ci