* [PATCH bpf-next v6 00/17] bpf: Support stack arguments for BPF functions and kfuncs
@ 2026-04-19 16:33 Yonghong Song
2026-04-19 16:33 ` [PATCH bpf-next v6 01/17] bpf: Remove unused parameter from check_map_kptr_access() Yonghong Song
` (17 more replies)
0 siblings, 18 replies; 51+ messages in thread
From: Yonghong Song @ 2026-04-19 16:33 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
Currently, bpf function calls and kfuncs are limited to 5 register-based
parameters. For bpf functions with more than 5 parameters, developers
can force inlining, or pack the extra parameters into a struct and pass
a pointer to that struct, although both workarounds are inconvenient.
For kfuncs there is no workaround at all if more than 5 parameters are
needed.
This patch set lifts the 5-argument limit by introducing stack-based
argument passing for BPF functions and kfuncs, coordinated with
compiler support in LLVM [1]. The compiler emits loads/stores through
a new bpf register r11 (BPF_REG_PARAMS) to pass arguments beyond
the 5th, keeping the stack arg area separate from the r10-based program
stack. The current maximum number of arguments is capped at
MAX_BPF_FUNC_ARGS (12), which is sufficient for the vast majority of
use cases.
All kfunc/bpf-function arguments are caller saved, including stack
arguments. For register arguments (r1-r5), the verifier already marks
them as clobbered after each call. For stack arguments, the verifier
invalidates all outgoing stack arg slots immediately after a call,
requiring the compiler to re-store them before any subsequent call.
This follows the native calling convention where all function
parameters are caller saved.
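As an illustration (pseudo BPF assembly, my own sketch, assuming the
arg6 slot at r11-8 and the arg7 slot at r11-16), the compiler must
re-store the stack args between back-to-back calls:

```
; store args 6..7 through r11, then call
*(u64 *)(r11 - 8)  = r6   ; arg6
*(u64 *)(r11 - 16) = r7   ; arg7
call foo
; after the call, the outgoing slots are invalidated by the verifier;
; they must be stored again before the next call
*(u64 *)(r11 - 8)  = r6   ; arg6 re-stored
*(u64 *)(r11 - 16) = r7   ; arg7 re-stored
call foo
```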
The x86_64 JIT translates r11-relative accesses into RBP-relative
native instructions. Each function's stack allocation is extended
by 'max_outgoing' bytes to hold the outgoing arg area below the
callee-saved registers. This makes the implementation easier, as r10
can be reused for stack argument access. At both BPF-to-BPF and kfunc
call sites, outgoing args are stored directly at the locations expected
by the calling convention, and incoming parameters can be read directly
from the caller's frame.
To support kfunc stack arguments, existing code is first
refactored/modified to use bpf_reg_state as much as possible instead of
regno, and to pass a single non-negative 'argno' variable, encoded to
cover both register and stack arguments.
Global subprogs with >5 args are not yet supported. Only x86_64
is supported for now.
As for the individual patches: patches 1-4 are preparatory changes to
ease the later kfunc stack argument support. Patches 5-7 support
bpf-to-bpf stack arguments. Patch 8 rejects stack arguments in
non-JITed (interpreter-only) programs. Patch 9 tracks r11 in const_fold
and liveness. Patches 10-11 prepare architecture JIT support and enable
r11 based insns. Patch 12 adds stack argument support for kfuncs.
Patch 13 rejects stack arguments if a tail call is reachable. Patch 14
implements the x86_64 JIT support. Patches 15-17 add selftests.
[1] https://github.com/llvm/llvm-project/pull/189060
Note:
- The patch set is on top of the following commit:
eb0d6d97c27c Merge tag 'bpf-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf
- This patch set requires the latest llvm23 compiler. It is possible that a
  build failure like the following may appear:
/home/yhs/work/bpf-next/scripts/mod/modpost.c:59:13: error: variable 'extra_warn' set but not used [-Werror,-Wunused-but-set-global]
59 | static bool extra_warn;
| ^
1 error generated.
In this case, the following hack can work around the build issue:
--- a/Makefile
+++ b/Makefile
@@ -467,7 +467,7 @@ KERNELDOC = $(srctree)/tools/docs/kernel-doc
export KERNELDOC
KBUILD_USERHOSTCFLAGS := -Wall -Wmissing-prototypes -Wstrict-prototypes \
- -O2 -fomit-frame-pointer -std=gnu11
+ -O2 -fomit-frame-pointer -std=gnu11 -Wno-unused-but-set-global
KBUILD_USERCFLAGS := $(KBUILD_USERHOSTCFLAGS) $(USERCFLAGS)
KBUILD_USERLDFLAGS := $(USERLDFLAGS)
Changelogs:
v5 -> v6:
- v5: https://lore.kernel.org/bpf/20260417034658.2625353-1-yonghong.song@linux.dev/
- Do stack argument invalidation after a bpf function or kfunc call. This
  follows the native architecture calling convention.
- Fix some r11 related issues in const_fold, liveness and insn checking.
- Fix a few places for precision marking for stack arguments. All these cases
have const value and mark them as precise.
- Unconditionally copy some incoming/outgoing state.
- Fix a missed tailcall case where the main prog has a tailcall and there
  are no other subprogs.
- Add and fix tests as suggested in v5.
- Reorder some patches, e.g., preparing stack arguments in bpf functions
  and disabling JIT support before allowing proper r11 usage.
v4 -> v5:
- v4: https://lore.kernel.org/bpf/20260412045826.254200-1-yonghong.song@linux.dev/
- Use r11 instead of r12, llvm also updated with r11.
- Change the int type 'reg_or_arg' to a u32 'argno', where 'argno' is
  encoded to support both bpf registers and stack arguments.
- Track a per-state bitmask 'out_stack_arg_mask' for r11 based stores, so at
  any particular call, the verifier knows which stores are available. This is
  important since the stores may be in different basic blocks.
- Previously, all store slots were invalidated after each call. This patch
  set disabled such invalidation.
- Ensure the r11 reg only appears in allowed insns. Also avoid r11 for reg
  tracking purposes.
- Make stack_arg_regs more similar to regular regs (struct bpf_reg_state *).
- Reorder r11 based stores from "arg6:off:-24, arg7:off:-16, arg8:off:-8" to
  "arg6:off:-8, arg7:off:-16, arg8:off:-24".
- Add a few more tests, e.g., two callees with different numbers of stack
  arguments, shared r11 stores in different branches, etc.
v3 -> v4:
- v3: https://lore.kernel.org/bpf/20260405172505.1329392-1-yonghong.song@linux.dev/
- Refactor/modify code to make later kfunc stack argument support easier.
- Invalidate outgoing slots immediately after the call to prevent reuse
- Fix interaction between stack argument PTR_TO_STACK and dead slot poisoning
- Reject stack arguments if tail call reachable
- Disable private stack if stack argument is used
- Allocate the outgoing stack argument region after the callee-saved
  registers, which simplifies the JITed code a lot.
v2 -> v3:
- v2: https://lore.kernel.org/bpf/20260405165300.826241-1-yonghong.song@linux.dev/
- Fix selftest stack_arg_gap_at_minus8().
- Fix a few 'UTF-8' issues.
v1 -> v2:
- v1: https://lore.kernel.org/bpf/20260402012727.3916819-1-yonghong.song@linux.dev/
- Add stack_arg_safe() to do pruning for stack arguments.
- Fix an issue with KF_ARG_PTR_TO_MEM_SIZE. Since a fake register is
  used, add verifier log messages to indicate the start and end of such
  fake register usage.
- For the x86_64 JIT, copy incoming parameter values directly from the
  caller's stack.
- Add test cases with stack arguments e.g. mem, mem+size, dynptr, iter, etc.
Yonghong Song (17):
bpf: Remove unused parameter from check_map_kptr_access()
bpf: Refactor to avoid redundant calculation of bpf_reg_state
bpf: Refactor to handle memory and size together
bpf: Prepare verifier logs for upcoming kfunc stack arguments
bpf: Introduce bpf register BPF_REG_PARAMS
bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments
bpf: Support stack arguments for bpf functions
bpf: Reject stack arguments in non-JITed programs
bpf: Track r11 registers in const_fold and liveness
bpf: Prepare architecture JIT support for stack arguments
bpf: Enable r11 based insns
bpf: Support stack arguments for kfunc calls
bpf: Reject stack arguments if tail call reachable
bpf,x86: Implement JIT support for stack arguments
selftests/bpf: Add tests for BPF function stack arguments
selftests/bpf: Add tests for stack argument validation
selftests/bpf: Add verifier tests for stack argument validation
arch/x86/net/bpf_jit_comp.c | 154 +-
include/linux/bpf.h | 7 +
include/linux/bpf_verifier.h | 29 +-
include/linux/filter.h | 6 +-
kernel/bpf/btf.c | 20 +-
kernel/bpf/const_fold.c | 14 +-
kernel/bpf/core.c | 11 +-
kernel/bpf/fixups.c | 28 +-
kernel/bpf/liveness.c | 14 +-
kernel/bpf/states.c | 41 +
kernel/bpf/verifier.c | 1233 ++++++++++++-----
.../testing/selftests/bpf/prog_tests/bpf_nf.c | 22 +-
.../selftests/bpf/prog_tests/cb_refs.c | 2 +-
.../selftests/bpf/prog_tests/ctx_rewrite.c | 14 +-
.../selftests/bpf/prog_tests/kfunc_call.c | 2 +-
.../selftests/bpf/prog_tests/linked_list.c | 4 +-
.../selftests/bpf/prog_tests/stack_arg.c | 139 ++
.../selftests/bpf/prog_tests/stack_arg_fail.c | 10 +
.../selftests/bpf/prog_tests/verifier.c | 2 +
.../selftests/bpf/progs/cgrp_kfunc_failure.c | 14 +-
.../selftests/bpf/progs/cpumask_failure.c | 10 +-
.../testing/selftests/bpf/progs/dynptr_fail.c | 22 +-
.../selftests/bpf/progs/file_reader_fail.c | 4 +-
tools/testing/selftests/bpf/progs/irq.c | 4 +-
tools/testing/selftests/bpf/progs/iters.c | 6 +-
.../selftests/bpf/progs/iters_state_safety.c | 14 +-
.../selftests/bpf/progs/iters_testmod.c | 4 +-
.../selftests/bpf/progs/iters_testmod_seq.c | 4 +-
.../selftests/bpf/progs/map_kptr_fail.c | 2 +-
.../selftests/bpf/progs/percpu_alloc_fail.c | 4 +-
.../testing/selftests/bpf/progs/rbtree_fail.c | 6 +-
.../bpf/progs/refcounted_kptr_fail.c | 2 +-
tools/testing/selftests/bpf/progs/stack_arg.c | 254 ++++
.../selftests/bpf/progs/stack_arg_fail.c | 124 ++
.../selftests/bpf/progs/stack_arg_kfunc.c | 164 +++
.../testing/selftests/bpf/progs/stream_fail.c | 2 +-
.../selftests/bpf/progs/task_kfunc_failure.c | 18 +-
.../selftests/bpf/progs/task_work_fail.c | 6 +-
.../selftests/bpf/progs/test_bpf_nf_fail.c | 8 +-
.../bpf/progs/test_kfunc_dynptr_param.c | 2 +-
.../bpf/progs/test_kfunc_param_nullable.c | 2 +-
.../selftests/bpf/progs/verifier_bits_iter.c | 4 +-
.../bpf/progs/verifier_bpf_fastcall.c | 24 +-
.../selftests/bpf/progs/verifier_may_goto_1.c | 12 +-
.../bpf/progs/verifier_ref_tracking.c | 6 +-
.../selftests/bpf/progs/verifier_sdiv.c | 64 +-
.../selftests/bpf/progs/verifier_stack_arg.c | 433 ++++++
.../selftests/bpf/progs/verifier_vfs_reject.c | 8 +-
.../testing/selftests/bpf/progs/wq_failures.c | 2 +-
.../selftests/bpf/test_kmods/bpf_testmod.c | 73 +
.../bpf/test_kmods/bpf_testmod_kfunc.h | 26 +
tools/testing/selftests/bpf/verifier/calls.c | 14 +-
52 files changed, 2533 insertions(+), 561 deletions(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/stack_arg.c
create mode 100644 tools/testing/selftests/bpf/prog_tests/stack_arg_fail.c
create mode 100644 tools/testing/selftests/bpf/progs/stack_arg.c
create mode 100644 tools/testing/selftests/bpf/progs/stack_arg_fail.c
create mode 100644 tools/testing/selftests/bpf/progs/stack_arg_kfunc.c
create mode 100644 tools/testing/selftests/bpf/progs/verifier_stack_arg.c
--
2.52.0
^ permalink raw reply [flat|nested] 51+ messages in thread
* [PATCH bpf-next v6 01/17] bpf: Remove unused parameter from check_map_kptr_access()
2026-04-19 16:33 [PATCH bpf-next v6 00/17] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
@ 2026-04-19 16:33 ` Yonghong Song
2026-04-19 16:33 ` [PATCH bpf-next v6 02/17] bpf: Refactor to avoid redundant calculation of bpf_reg_state Yonghong Song
` (16 subsequent siblings)
17 siblings, 0 replies; 51+ messages in thread
From: Yonghong Song @ 2026-04-19 16:33 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
The parameter 'regno' in check_map_kptr_access() is unused. Remove it.
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
kernel/bpf/verifier.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 69d75515ed3f..0f9da0f6220b 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4719,7 +4719,7 @@ static int mark_uptr_ld_reg(struct bpf_verifier_env *env, u32 regno,
return 0;
}
-static int check_map_kptr_access(struct bpf_verifier_env *env, u32 regno,
+static int check_map_kptr_access(struct bpf_verifier_env *env,
int value_regno, int insn_idx,
struct btf_field *kptr_field)
{
@@ -6366,7 +6366,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
kptr_field = btf_record_find(reg->map_ptr->record,
off + reg->var_off.value, BPF_KPTR | BPF_UPTR);
if (kptr_field) {
- err = check_map_kptr_access(env, regno, value_regno, insn_idx, kptr_field);
+ err = check_map_kptr_access(env, value_regno, insn_idx, kptr_field);
} else if (t == BPF_READ && value_regno >= 0) {
struct bpf_map *map = reg->map_ptr;
--
2.52.0
^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH bpf-next v6 02/17] bpf: Refactor to avoid redundant calculation of bpf_reg_state
2026-04-19 16:33 [PATCH bpf-next v6 00/17] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
2026-04-19 16:33 ` [PATCH bpf-next v6 01/17] bpf: Remove unused parameter from check_map_kptr_access() Yonghong Song
@ 2026-04-19 16:33 ` Yonghong Song
2026-04-19 16:33 ` [PATCH bpf-next v6 03/17] bpf: Refactor to handle memory and size together Yonghong Song
` (15 subsequent siblings)
17 siblings, 0 replies; 51+ messages in thread
From: Yonghong Song @ 2026-04-19 16:33 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
In many cases, once a bpf_reg_state is obtained, it can be passed
directly to callees; otherwise, each callee needs to look up the
bpf_reg_state again based on regno. More importantly, this is needed
for the upcoming kfunc stack arguments, since the register state for a
stack argument has no corresponding regno. So it makes sense to pass
the reg state to callees.
The following is the only signature change needed to avoid a compilation
warning:
static int sanitize_check_bounds(struct bpf_verifier_env *env,
const struct bpf_insn *insn,
- const struct bpf_reg_state *dst_reg)
+ struct bpf_reg_state *dst_reg)
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
kernel/bpf/verifier.c | 213 ++++++++++++++++++------------------------
1 file changed, 93 insertions(+), 120 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 0f9da0f6220b..514fee971f96 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3938,13 +3938,13 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
static int check_stack_write_var_off(struct bpf_verifier_env *env,
/* func where register points to */
struct bpf_func_state *state,
- int ptr_regno, int off, int size,
+ struct bpf_reg_state *ptr_reg, int off, int size,
int value_regno, int insn_idx)
{
struct bpf_func_state *cur; /* state of the current function */
int min_off, max_off;
int i, err;
- struct bpf_reg_state *ptr_reg = NULL, *value_reg = NULL;
+ struct bpf_reg_state *value_reg = NULL;
struct bpf_insn *insn = &env->prog->insnsi[insn_idx];
bool writing_zero = false;
/* set if the fact that we're writing a zero is used to let any
@@ -3953,7 +3953,6 @@ static int check_stack_write_var_off(struct bpf_verifier_env *env,
bool zero_used = false;
cur = env->cur_state->frame[env->cur_state->curframe];
- ptr_reg = &cur->regs[ptr_regno];
min_off = ptr_reg->smin_value + off;
max_off = ptr_reg->smax_value + off + size;
if (value_regno >= 0)
@@ -4250,7 +4249,7 @@ enum bpf_access_src {
ACCESS_HELPER = 2, /* the access is performed by a helper */
};
-static int check_stack_range_initialized(struct bpf_verifier_env *env,
+static int check_stack_range_initialized(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
int regno, int off, int access_size,
bool zero_size_allowed,
enum bpf_access_type type,
@@ -4274,18 +4273,16 @@ static struct bpf_reg_state *reg_state(struct bpf_verifier_env *env, int regno)
* offset; for a fixed offset check_stack_read_fixed_off should be used
* instead.
*/
-static int check_stack_read_var_off(struct bpf_verifier_env *env,
+static int check_stack_read_var_off(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
int ptr_regno, int off, int size, int dst_regno)
{
- /* The state of the source register. */
- struct bpf_reg_state *reg = reg_state(env, ptr_regno);
struct bpf_func_state *ptr_state = bpf_func(env, reg);
int err;
int min_off, max_off;
/* Note that we pass a NULL meta, so raw access will not be permitted.
*/
- err = check_stack_range_initialized(env, ptr_regno, off, size,
+ err = check_stack_range_initialized(env, reg, ptr_regno, off, size,
false, BPF_READ, NULL);
if (err)
return err;
@@ -4307,10 +4304,9 @@ static int check_stack_read_var_off(struct bpf_verifier_env *env,
* can be -1, meaning that the read value is not going to a register.
*/
static int check_stack_read(struct bpf_verifier_env *env,
- int ptr_regno, int off, int size,
+ struct bpf_reg_state *reg, int ptr_regno, int off, int size,
int dst_regno)
{
- struct bpf_reg_state *reg = reg_state(env, ptr_regno);
struct bpf_func_state *state = bpf_func(env, reg);
int err;
/* Some accesses are only permitted with a static offset. */
@@ -4346,7 +4342,7 @@ static int check_stack_read(struct bpf_verifier_env *env,
* than fixed offset ones. Note that dst_regno >= 0 on this
* branch.
*/
- err = check_stack_read_var_off(env, ptr_regno, off, size,
+ err = check_stack_read_var_off(env, reg, ptr_regno, off, size,
dst_regno);
}
return err;
@@ -4363,10 +4359,9 @@ static int check_stack_read(struct bpf_verifier_env *env,
* The caller must ensure that the offset falls within the maximum stack size.
*/
static int check_stack_write(struct bpf_verifier_env *env,
- int ptr_regno, int off, int size,
+ struct bpf_reg_state *reg, int off, int size,
int value_regno, int insn_idx)
{
- struct bpf_reg_state *reg = reg_state(env, ptr_regno);
struct bpf_func_state *state = bpf_func(env, reg);
int err;
@@ -4379,16 +4374,15 @@ static int check_stack_write(struct bpf_verifier_env *env,
* than fixed offset ones.
*/
err = check_stack_write_var_off(env, state,
- ptr_regno, off, size,
+ reg, off, size,
value_regno, insn_idx);
}
return err;
}
-static int check_map_access_type(struct bpf_verifier_env *env, u32 regno,
+static int check_map_access_type(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
int off, int size, enum bpf_access_type type)
{
- struct bpf_reg_state *reg = reg_state(env, regno);
struct bpf_map *map = reg->map_ptr;
u32 cap = bpf_map_flags_to_cap(map);
@@ -4408,17 +4402,15 @@ static int check_map_access_type(struct bpf_verifier_env *env, u32 regno,
}
/* check read/write into memory region (e.g., map value, ringbuf sample, etc) */
-static int __check_mem_access(struct bpf_verifier_env *env, int regno,
+static int __check_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
int off, int size, u32 mem_size,
bool zero_size_allowed)
{
bool size_ok = size > 0 || (size == 0 && zero_size_allowed);
- struct bpf_reg_state *reg;
if (off >= 0 && size_ok && (u64)off + size <= mem_size)
return 0;
- reg = &cur_regs(env)[regno];
switch (reg->type) {
case PTR_TO_MAP_KEY:
verbose(env, "invalid access to map key, key_size=%d off=%d size=%d\n",
@@ -4448,13 +4440,10 @@ static int __check_mem_access(struct bpf_verifier_env *env, int regno,
}
/* check read/write into a memory region with possible variable offset */
-static int check_mem_region_access(struct bpf_verifier_env *env, u32 regno,
+static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
int off, int size, u32 mem_size,
bool zero_size_allowed)
{
- struct bpf_verifier_state *vstate = env->cur_state;
- struct bpf_func_state *state = vstate->frame[vstate->curframe];
- struct bpf_reg_state *reg = &state->regs[regno];
int err;
/* We may have adjusted the register pointing to memory region, so we
@@ -4475,7 +4464,7 @@ static int check_mem_region_access(struct bpf_verifier_env *env, u32 regno,
regno);
return -EACCES;
}
- err = __check_mem_access(env, regno, reg->smin_value + off, size,
+ err = __check_mem_access(env, reg, regno, reg->smin_value + off, size,
mem_size, zero_size_allowed);
if (err) {
verbose(env, "R%d min value is outside of the allowed memory range\n",
@@ -4492,7 +4481,7 @@ static int check_mem_region_access(struct bpf_verifier_env *env, u32 regno,
regno);
return -EACCES;
}
- err = __check_mem_access(env, regno, reg->umax_value + off, size,
+ err = __check_mem_access(env, reg, regno, reg->umax_value + off, size,
mem_size, zero_size_allowed);
if (err) {
verbose(env, "R%d max value is outside of the allowed memory range\n",
@@ -4796,19 +4785,16 @@ static u32 map_mem_size(const struct bpf_map *map)
}
/* check read/write into a map element with possible variable offset */
-static int check_map_access(struct bpf_verifier_env *env, u32 regno,
+static int check_map_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
int off, int size, bool zero_size_allowed,
enum bpf_access_src src)
{
- struct bpf_verifier_state *vstate = env->cur_state;
- struct bpf_func_state *state = vstate->frame[vstate->curframe];
- struct bpf_reg_state *reg = &state->regs[regno];
struct bpf_map *map = reg->map_ptr;
u32 mem_size = map_mem_size(map);
struct btf_record *rec;
int err, i;
- err = check_mem_region_access(env, regno, off, size, mem_size, zero_size_allowed);
+ err = check_mem_region_access(env, reg, regno, off, size, mem_size, zero_size_allowed);
if (err)
return err;
@@ -4904,10 +4890,9 @@ static bool may_access_direct_pkt_data(struct bpf_verifier_env *env,
}
}
-static int check_packet_access(struct bpf_verifier_env *env, u32 regno, int off,
+static int check_packet_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno, int off,
int size, bool zero_size_allowed)
{
- struct bpf_reg_state *reg = reg_state(env, regno);
int err;
if (reg->range < 0) {
@@ -4915,7 +4900,7 @@ static int check_packet_access(struct bpf_verifier_env *env, u32 regno, int off,
return -EINVAL;
}
- err = check_mem_region_access(env, regno, off, size, reg->range, zero_size_allowed);
+ err = check_mem_region_access(env, reg, regno, off, size, reg->range, zero_size_allowed);
if (err)
return err;
@@ -4970,7 +4955,7 @@ static int __check_ctx_access(struct bpf_verifier_env *env, int insn_idx, int of
return -EACCES;
}
-static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, u32 regno,
+static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, u32 regno,
int off, int access_size, enum bpf_access_type t,
struct bpf_insn_access_aux *info)
{
@@ -4980,12 +4965,10 @@ static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
*/
bool var_off_ok = is_var_ctx_off_allowed(env->prog);
bool fixed_off_ok = !env->ops->convert_ctx_access;
- struct bpf_reg_state *regs = cur_regs(env);
- struct bpf_reg_state *reg = regs + regno;
int err;
if (var_off_ok)
- err = check_mem_region_access(env, regno, off, access_size, U16_MAX, false);
+ err = check_mem_region_access(env, reg, regno, off, access_size, U16_MAX, false);
else
err = __check_ptr_off_reg(env, reg, regno, fixed_off_ok);
if (err)
@@ -5011,10 +4994,9 @@ static int check_flow_keys_access(struct bpf_verifier_env *env, int off,
}
static int check_sock_access(struct bpf_verifier_env *env, int insn_idx,
- u32 regno, int off, int size,
+ struct bpf_reg_state *reg, u32 regno, int off, int size,
enum bpf_access_type t)
{
- struct bpf_reg_state *reg = reg_state(env, regno);
struct bpf_insn_access_aux info = {};
bool valid;
@@ -5977,12 +5959,11 @@ static bool type_is_trusted_or_null(struct bpf_verifier_env *env,
}
static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
- struct bpf_reg_state *regs,
+ struct bpf_reg_state *regs, struct bpf_reg_state *reg,
int regno, int off, int size,
enum bpf_access_type atype,
int value_regno)
{
- struct bpf_reg_state *reg = regs + regno;
const struct btf_type *t = btf_type_by_id(reg->btf, reg->btf_id);
const char *tname = btf_name_by_offset(reg->btf, t->name_off);
const char *field_name = NULL;
@@ -6134,12 +6115,11 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
}
static int check_ptr_to_map_access(struct bpf_verifier_env *env,
- struct bpf_reg_state *regs,
+ struct bpf_reg_state *regs, struct bpf_reg_state *reg,
int regno, int off, int size,
enum bpf_access_type atype,
int value_regno)
{
- struct bpf_reg_state *reg = regs + regno;
struct bpf_map *map = reg->map_ptr;
struct bpf_reg_state map_reg;
enum bpf_type_flag flag = 0;
@@ -6228,11 +6208,10 @@ static int check_stack_slot_within_bounds(struct bpf_verifier_env *env,
* 'off' includes `regno->offset`, but not its dynamic part (if any).
*/
static int check_stack_access_within_bounds(
- struct bpf_verifier_env *env,
+ struct bpf_verifier_env *env, struct bpf_reg_state *reg,
int regno, int off, int access_size,
enum bpf_access_type type)
{
- struct bpf_reg_state *reg = reg_state(env, regno);
struct bpf_func_state *state = bpf_func(env, reg);
s64 min_off, max_off;
int err;
@@ -6320,12 +6299,11 @@ static void add_scalar_to_reg(struct bpf_reg_state *dst_reg, s64 val)
* if t==write && value_regno==-1, some unknown value is stored into memory
* if t==read && value_regno==-1, don't care what we read from memory
*/
-static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regno,
+static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, u32 regno,
int off, int bpf_size, enum bpf_access_type t,
int value_regno, bool strict_alignment_once, bool is_ldsx)
{
struct bpf_reg_state *regs = cur_regs(env);
- struct bpf_reg_state *reg = regs + regno;
int size, err = 0;
size = bpf_size_to_bytes(bpf_size);
@@ -6342,7 +6320,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
return -EACCES;
}
- err = check_mem_region_access(env, regno, off, size,
+ err = check_mem_region_access(env, reg, regno, off, size,
reg->map_ptr->key_size, false);
if (err)
return err;
@@ -6356,10 +6334,10 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
verbose(env, "R%d leaks addr into map\n", value_regno);
return -EACCES;
}
- err = check_map_access_type(env, regno, off, size, t);
+ err = check_map_access_type(env, reg, regno, off, size, t);
if (err)
return err;
- err = check_map_access(env, regno, off, size, false, ACCESS_DIRECT);
+ err = check_map_access(env, reg, regno, off, size, false, ACCESS_DIRECT);
if (err)
return err;
if (tnum_is_const(reg->var_off))
@@ -6428,7 +6406,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
* instructions, hence no need to check bounds in that case.
*/
if (!rdonly_untrusted)
- err = check_mem_region_access(env, regno, off, size,
+ err = check_mem_region_access(env, reg, regno, off, size,
reg->mem_size, false);
if (!err && value_regno >= 0 && (t == BPF_READ || rdonly_mem))
mark_reg_unknown(env, regs, value_regno);
@@ -6446,7 +6424,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
return -EACCES;
}
- err = check_ctx_access(env, insn_idx, regno, off, size, t, &info);
+ err = check_ctx_access(env, insn_idx, reg, regno, off, size, t, &info);
if (!err && t == BPF_READ && value_regno >= 0) {
/* ctx access returns either a scalar, or a
* PTR_TO_PACKET[_META,_END]. In the latter
@@ -6483,15 +6461,15 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
} else if (reg->type == PTR_TO_STACK) {
/* Basic bounds checks. */
- err = check_stack_access_within_bounds(env, regno, off, size, t);
+ err = check_stack_access_within_bounds(env, reg, regno, off, size, t);
if (err)
return err;
if (t == BPF_READ)
- err = check_stack_read(env, regno, off, size,
+ err = check_stack_read(env, reg, regno, off, size,
value_regno);
else
- err = check_stack_write(env, regno, off, size,
+ err = check_stack_write(env, reg, off, size,
value_regno, insn_idx);
} else if (reg_is_pkt_pointer(reg)) {
if (t == BPF_WRITE && !may_access_direct_pkt_data(env, NULL, t)) {
@@ -6504,7 +6482,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
value_regno);
return -EACCES;
}
- err = check_packet_access(env, regno, off, size, false);
+ err = check_packet_access(env, reg, regno, off, size, false);
if (!err && t == BPF_READ && value_regno >= 0)
mark_reg_unknown(env, regs, value_regno);
} else if (reg->type == PTR_TO_FLOW_KEYS) {
@@ -6524,7 +6502,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
regno, reg_type_str(env, reg->type));
return -EACCES;
}
- err = check_sock_access(env, insn_idx, regno, off, size, t);
+ err = check_sock_access(env, insn_idx, reg, regno, off, size, t);
if (!err && value_regno >= 0)
mark_reg_unknown(env, regs, value_regno);
} else if (reg->type == PTR_TO_TP_BUFFER) {
@@ -6533,10 +6511,10 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
mark_reg_unknown(env, regs, value_regno);
} else if (base_type(reg->type) == PTR_TO_BTF_ID &&
!type_may_be_null(reg->type)) {
- err = check_ptr_to_btf_access(env, regs, regno, off, size, t,
+ err = check_ptr_to_btf_access(env, regs, reg, regno, off, size, t,
value_regno);
} else if (reg->type == CONST_PTR_TO_MAP) {
- err = check_ptr_to_map_access(env, regs, regno, off, size, t,
+ err = check_ptr_to_map_access(env, regs, reg, regno, off, size, t,
value_regno);
} else if (base_type(reg->type) == PTR_TO_BUF &&
!type_may_be_null(reg->type)) {
@@ -6605,7 +6583,7 @@ static int check_load_mem(struct bpf_verifier_env *env, struct bpf_insn *insn,
/* Check if (src_reg + off) is readable. The state of dst_reg will be
* updated by this call.
*/
- err = check_mem_access(env, env->insn_idx, insn->src_reg, insn->off,
+ err = check_mem_access(env, env->insn_idx, regs + insn->src_reg, insn->src_reg, insn->off,
BPF_SIZE(insn->code), BPF_READ, insn->dst_reg,
strict_alignment_once, is_ldsx);
err = err ?: save_aux_ptr_type(env, src_reg_type,
@@ -6635,7 +6613,7 @@ static int check_store_reg(struct bpf_verifier_env *env, struct bpf_insn *insn,
dst_reg_type = regs[insn->dst_reg].type;
/* Check if (dst_reg + off) is writeable. */
- err = check_mem_access(env, env->insn_idx, insn->dst_reg, insn->off,
+ err = check_mem_access(env, env->insn_idx, regs + insn->dst_reg, insn->dst_reg, insn->off,
BPF_SIZE(insn->code), BPF_WRITE, insn->src_reg,
strict_alignment_once, false);
err = err ?: save_aux_ptr_type(env, dst_reg_type, false);
@@ -6646,6 +6624,7 @@ static int check_store_reg(struct bpf_verifier_env *env, struct bpf_insn *insn,
static int check_atomic_rmw(struct bpf_verifier_env *env,
struct bpf_insn *insn)
{
+ struct bpf_reg_state *dst_reg;
int load_reg;
int err;
@@ -6707,13 +6686,15 @@ static int check_atomic_rmw(struct bpf_verifier_env *env,
load_reg = -1;
}
+ dst_reg = cur_regs(env) + insn->dst_reg;
+
/* Check whether we can read the memory, with second call for fetch
* case to simulate the register fill.
*/
- err = check_mem_access(env, env->insn_idx, insn->dst_reg, insn->off,
+ err = check_mem_access(env, env->insn_idx, dst_reg, insn->dst_reg, insn->off,
BPF_SIZE(insn->code), BPF_READ, -1, true, false);
if (!err && load_reg >= 0)
- err = check_mem_access(env, env->insn_idx, insn->dst_reg,
+ err = check_mem_access(env, env->insn_idx, dst_reg, insn->dst_reg,
insn->off, BPF_SIZE(insn->code),
BPF_READ, load_reg, true, false);
if (err)
@@ -6725,7 +6706,7 @@ static int check_atomic_rmw(struct bpf_verifier_env *env,
return err;
}
/* Check whether we can write into the same memory. */
- err = check_mem_access(env, env->insn_idx, insn->dst_reg, insn->off,
+ err = check_mem_access(env, env->insn_idx, dst_reg, insn->dst_reg, insn->off,
BPF_SIZE(insn->code), BPF_WRITE, -1, true, false);
if (err)
return err;
@@ -6814,11 +6795,10 @@ static int check_atomic(struct bpf_verifier_env *env, struct bpf_insn *insn)
* read offsets are marked as read.
*/
static int check_stack_range_initialized(
- struct bpf_verifier_env *env, int regno, int off,
+ struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int off,
int access_size, bool zero_size_allowed,
enum bpf_access_type type, struct bpf_call_arg_meta *meta)
{
- struct bpf_reg_state *reg = reg_state(env, regno);
struct bpf_func_state *state = bpf_func(env, reg);
int err, min_off, max_off, i, j, slot, spi;
/* Some accesses can write anything into the stack, others are
@@ -6840,7 +6820,7 @@ static int check_stack_range_initialized(
return -EACCES;
}
- err = check_stack_access_within_bounds(env, regno, off, access_size, type);
+ err = check_stack_access_within_bounds(env, reg, regno, off, access_size, type);
if (err)
return err;
@@ -6971,7 +6951,7 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
switch (base_type(reg->type)) {
case PTR_TO_PACKET:
case PTR_TO_PACKET_META:
- return check_packet_access(env, regno, 0, access_size,
+ return check_packet_access(env, reg, regno, 0, access_size,
zero_size_allowed);
case PTR_TO_MAP_KEY:
if (access_type == BPF_WRITE) {
@@ -6979,12 +6959,12 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
reg_type_str(env, reg->type));
return -EACCES;
}
- return check_mem_region_access(env, regno, 0, access_size,
+ return check_mem_region_access(env, reg, regno, 0, access_size,
reg->map_ptr->key_size, false);
case PTR_TO_MAP_VALUE:
- if (check_map_access_type(env, regno, 0, access_size, access_type))
+ if (check_map_access_type(env, reg, regno, 0, access_size, access_type))
return -EACCES;
- return check_map_access(env, regno, 0, access_size,
+ return check_map_access(env, reg, regno, 0, access_size,
zero_size_allowed, ACCESS_HELPER);
case PTR_TO_MEM:
if (type_is_rdonly_mem(reg->type)) {
@@ -6994,7 +6974,7 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
return -EACCES;
}
}
- return check_mem_region_access(env, regno, 0,
+ return check_mem_region_access(env, reg, regno, 0,
access_size, reg->mem_size,
zero_size_allowed);
case PTR_TO_BUF:
@@ -7014,16 +6994,16 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
max_access);
case PTR_TO_STACK:
return check_stack_range_initialized(
- env,
+ env, reg,
regno, 0, access_size,
zero_size_allowed, access_type, meta);
case PTR_TO_BTF_ID:
- return check_ptr_to_btf_access(env, regs, regno, 0,
+ return check_ptr_to_btf_access(env, regs, reg, regno, 0,
access_size, BPF_READ, -1);
case PTR_TO_CTX:
/* Only permit reading or writing syscall context using helper calls. */
if (is_var_ctx_off_allowed(env->prog)) {
- int err = check_mem_region_access(env, regno, 0, access_size, U16_MAX,
+ int err = check_mem_region_access(env, reg, regno, 0, access_size, U16_MAX,
zero_size_allowed);
if (err)
return err;
@@ -7186,11 +7166,10 @@ enum {
* env->cur_state->active_locks remembers which map value element or allocated
* object got locked and clears it after bpf_spin_unlock.
*/
-static int process_spin_lock(struct bpf_verifier_env *env, int regno, int flags)
+static int process_spin_lock(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int flags)
{
bool is_lock = flags & PROCESS_SPIN_LOCK, is_res_lock = flags & PROCESS_RES_LOCK;
const char *lock_str = is_res_lock ? "bpf_res_spin" : "bpf_spin";
- struct bpf_reg_state *reg = reg_state(env, regno);
struct bpf_verifier_state *cur = env->cur_state;
bool is_const = tnum_is_const(reg->var_off);
bool is_irq = flags & PROCESS_LOCK_IRQ;
@@ -7303,11 +7282,10 @@ static int process_spin_lock(struct bpf_verifier_env *env, int regno, int flags)
}
/* Check if @regno is a pointer to a specific field in a map value */
-static int check_map_field_pointer(struct bpf_verifier_env *env, u32 regno,
+static int check_map_field_pointer(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
enum btf_field_type field_type,
struct bpf_map_desc *map_desc)
{
- struct bpf_reg_state *reg = reg_state(env, regno);
bool is_const = tnum_is_const(reg->var_off);
struct bpf_map *map = reg->map_ptr;
u64 val = reg->var_off.value;
@@ -7357,26 +7335,26 @@ static int check_map_field_pointer(struct bpf_verifier_env *env, u32 regno,
return 0;
}
-static int process_timer_func(struct bpf_verifier_env *env, int regno,
+static int process_timer_func(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
struct bpf_map_desc *map)
{
if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
verbose(env, "bpf_timer cannot be used for PREEMPT_RT.\n");
return -EOPNOTSUPP;
}
- return check_map_field_pointer(env, regno, BPF_TIMER, map);
+ return check_map_field_pointer(env, reg, regno, BPF_TIMER, map);
}
-static int process_timer_helper(struct bpf_verifier_env *env, int regno,
+static int process_timer_helper(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
struct bpf_call_arg_meta *meta)
{
- return process_timer_func(env, regno, &meta->map);
+ return process_timer_func(env, reg, regno, &meta->map);
}
-static int process_timer_kfunc(struct bpf_verifier_env *env, int regno,
+static int process_timer_kfunc(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
struct bpf_kfunc_call_arg_meta *meta)
{
- return process_timer_func(env, regno, &meta->map);
+ return process_timer_func(env, reg, regno, &meta->map);
}
static int process_kptr_func(struct bpf_verifier_env *env, int regno,
@@ -7452,10 +7430,9 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
* Helpers which do not mutate the bpf_dynptr set MEM_RDONLY in their argument
* type, and declare it as 'const struct bpf_dynptr *' in their prototype.
*/
-static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn_idx,
+static int process_dynptr_func(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int insn_idx,
enum bpf_arg_type arg_type, int clone_ref_obj_id)
{
- struct bpf_reg_state *reg = reg_state(env, regno);
int err;
if (reg->type != PTR_TO_STACK && reg->type != CONST_PTR_TO_DYNPTR) {
@@ -7498,7 +7475,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
/* we write BPF_DW bits (8 bytes) at a time */
for (i = 0; i < BPF_DYNPTR_SIZE; i += 8) {
- err = check_mem_access(env, insn_idx, regno,
+ err = check_mem_access(env, insn_idx, reg, regno,
i, BPF_DW, BPF_WRITE, -1, false, false);
if (err)
return err;
@@ -7568,10 +7545,9 @@ static bool is_kfunc_arg_iter(struct bpf_kfunc_call_arg_meta *meta, int arg_idx,
return btf_param_match_suffix(meta->btf, arg, "__iter");
}
-static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_idx,
+static int process_iter_arg(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int insn_idx,
struct bpf_kfunc_call_arg_meta *meta)
{
- struct bpf_reg_state *reg = reg_state(env, regno);
const struct btf_type *t;
int spi, err, i, nr_slots, btf_id;
@@ -7603,7 +7579,7 @@ static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_id
}
for (i = 0; i < nr_slots * 8; i += BPF_REG_SIZE) {
- err = check_mem_access(env, insn_idx, regno,
+ err = check_mem_access(env, insn_idx, reg, regno,
i, BPF_DW, BPF_WRITE, -1, false, false);
if (err)
return err;
@@ -8042,12 +8018,11 @@ static const struct bpf_reg_types *compatible_reg_types[__BPF_ARG_TYPE_MAX] = {
[ARG_PTR_TO_DYNPTR] = &dynptr_types,
};
-static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
+static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
enum bpf_arg_type arg_type,
const u32 *arg_btf_id,
struct bpf_call_arg_meta *meta)
{
- struct bpf_reg_state *reg = reg_state(env, regno);
enum bpf_reg_type expected, type = reg->type;
const struct bpf_reg_types *compatible;
int i, j, err;
@@ -8390,7 +8365,7 @@ static int check_reg_const_str(struct bpf_verifier_env *env,
return -EACCES;
}
- err = check_map_access(env, regno, 0,
+ err = check_map_access(env, reg, regno, 0,
map->value_size - reg->var_off.value, false,
ACCESS_HELPER);
if (err)
@@ -8526,7 +8501,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
base_type(arg_type) == ARG_PTR_TO_SPIN_LOCK)
arg_btf_id = fn->arg_btf_id[arg];
- err = check_reg_type(env, regno, arg_type, arg_btf_id, meta);
+ err = check_reg_type(env, reg, regno, arg_type, arg_btf_id, meta);
if (err)
return err;
@@ -8664,11 +8639,11 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
return -EACCES;
}
if (meta->func_id == BPF_FUNC_spin_lock) {
- err = process_spin_lock(env, regno, PROCESS_SPIN_LOCK);
+ err = process_spin_lock(env, reg, regno, PROCESS_SPIN_LOCK);
if (err)
return err;
} else if (meta->func_id == BPF_FUNC_spin_unlock) {
- err = process_spin_lock(env, regno, 0);
+ err = process_spin_lock(env, reg, regno, 0);
if (err)
return err;
} else {
@@ -8677,7 +8652,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
}
break;
case ARG_PTR_TO_TIMER:
- err = process_timer_helper(env, regno, meta);
+ err = process_timer_helper(env, reg, regno, meta);
if (err)
return err;
break;
@@ -8712,7 +8687,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
true, meta);
break;
case ARG_PTR_TO_DYNPTR:
- err = process_dynptr_func(env, regno, insn_idx, arg_type, 0);
+ err = process_dynptr_func(env, reg, regno, insn_idx, arg_type, 0);
if (err)
return err;
break;
@@ -9371,7 +9346,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
if (ret)
return ret;
- ret = process_dynptr_func(env, regno, -1, arg->arg_type, 0);
+ ret = process_dynptr_func(env, reg, regno, -1, arg->arg_type, 0);
if (ret)
return ret;
} else if (base_type(arg->arg_type) == ARG_PTR_TO_BTF_ID) {
@@ -9382,7 +9357,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
continue;
memset(&meta, 0, sizeof(meta)); /* leave func_id as zero */
- err = check_reg_type(env, regno, arg->arg_type, &arg->btf_id, &meta);
+ err = check_reg_type(env, reg, regno, arg->arg_type, &arg->btf_id, &meta);
err = err ?: check_func_arg_reg_off(env, reg, regno, arg->arg_type);
if (err)
return err;
@@ -10340,18 +10315,18 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
if (err)
return err;
+ regs = cur_regs(env);
+
/* Mark slots with STACK_MISC in case of raw mode, stack offset
* is inferred from register state.
*/
for (i = 0; i < meta.access_size; i++) {
- err = check_mem_access(env, insn_idx, meta.regno, i, BPF_B,
+ err = check_mem_access(env, insn_idx, regs + meta.regno, meta.regno, i, BPF_B,
BPF_WRITE, -1, false, false);
if (err)
return err;
}
- regs = cur_regs(env);
-
if (meta.release_regno) {
err = -EINVAL;
if (arg_type_is_dynptr(fn->arg_type[meta.release_regno - BPF_REG_1])) {
@@ -11355,11 +11330,10 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
struct bpf_kfunc_call_arg_meta *meta,
const struct btf_type *t, const struct btf_type *ref_t,
const char *ref_tname, const struct btf_param *args,
- int argno, int nargs)
+ int argno, int nargs, struct bpf_reg_state *reg)
{
u32 regno = argno + 1;
struct bpf_reg_state *regs = cur_regs(env);
- struct bpf_reg_state *reg = &regs[regno];
bool arg_mem_size = false;
if (meta->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx] ||
@@ -11526,10 +11500,9 @@ static int process_kf_arg_ptr_to_btf_id(struct bpf_verifier_env *env,
return 0;
}
-static int process_irq_flag(struct bpf_verifier_env *env, int regno,
+static int process_irq_flag(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
struct bpf_kfunc_call_arg_meta *meta)
{
- struct bpf_reg_state *reg = reg_state(env, regno);
int err, kfunc_class = IRQ_NATIVE_KFUNC;
bool irq_save;
@@ -11554,7 +11527,7 @@ static int process_irq_flag(struct bpf_verifier_env *env, int regno,
return -EINVAL;
}
- err = check_mem_access(env, env->insn_idx, regno, 0, BPF_DW, BPF_WRITE, -1, false, false);
+ err = check_mem_access(env, env->insn_idx, reg, regno, 0, BPF_DW, BPF_WRITE, -1, false, false);
if (err)
return err;
@@ -12142,7 +12115,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
ref_t = btf_type_skip_modifiers(btf, t->type, &ref_id);
ref_tname = btf_name_by_offset(btf, ref_t->name_off);
- kf_arg_type = get_kfunc_ptr_arg_type(env, meta, t, ref_t, ref_tname, args, i, nargs);
+ kf_arg_type = get_kfunc_ptr_arg_type(env, meta, t, ref_t, ref_tname, args, i, nargs, reg);
if (kf_arg_type < 0)
return kf_arg_type;
@@ -12307,7 +12280,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
}
}
- ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id);
+ ret = process_dynptr_func(env, reg, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id);
if (ret < 0)
return ret;
@@ -12332,7 +12305,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
return -EINVAL;
}
}
- ret = process_iter_arg(env, regno, insn_idx, meta);
+ ret = process_iter_arg(env, reg, regno, insn_idx, meta);
if (ret < 0)
return ret;
break;
@@ -12509,7 +12482,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
verbose(env, "arg#%d doesn't point to a map value\n", i);
return -EINVAL;
}
- ret = check_map_field_pointer(env, regno, BPF_WORKQUEUE, &meta->map);
+ ret = check_map_field_pointer(env, reg, regno, BPF_WORKQUEUE, &meta->map);
if (ret < 0)
return ret;
break;
@@ -12518,7 +12491,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
verbose(env, "arg#%d doesn't point to a map value\n", i);
return -EINVAL;
}
- ret = process_timer_kfunc(env, regno, meta);
+ ret = process_timer_kfunc(env, reg, regno, meta);
if (ret < 0)
return ret;
break;
@@ -12527,7 +12500,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
verbose(env, "arg#%d doesn't point to a map value\n", i);
return -EINVAL;
}
- ret = check_map_field_pointer(env, regno, BPF_TASK_WORK, &meta->map);
+ ret = check_map_field_pointer(env, reg, regno, BPF_TASK_WORK, &meta->map);
if (ret < 0)
return ret;
break;
@@ -12536,7 +12509,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
verbose(env, "arg#%d doesn't point to an irq flag on stack\n", i);
return -EINVAL;
}
- ret = process_irq_flag(env, regno, meta);
+ ret = process_irq_flag(env, reg, regno, meta);
if (ret < 0)
return ret;
break;
@@ -12557,7 +12530,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
if (meta->func_id == special_kfunc_list[KF_bpf_res_spin_lock_irqsave] ||
meta->func_id == special_kfunc_list[KF_bpf_res_spin_unlock_irqrestore])
flags |= PROCESS_LOCK_IRQ;
- ret = process_spin_lock(env, regno, flags);
+ ret = process_spin_lock(env, reg, regno, flags);
if (ret < 0)
return ret;
break;
@@ -13691,7 +13664,7 @@ static int check_stack_access_for_ptr_arithmetic(
static int sanitize_check_bounds(struct bpf_verifier_env *env,
const struct bpf_insn *insn,
- const struct bpf_reg_state *dst_reg)
+ struct bpf_reg_state *dst_reg)
{
u32 dst = insn->dst_reg;
@@ -13708,7 +13681,7 @@ static int sanitize_check_bounds(struct bpf_verifier_env *env,
return -EACCES;
break;
case PTR_TO_MAP_VALUE:
- if (check_map_access(env, dst, 0, 1, false, ACCESS_HELPER)) {
+ if (check_map_access(env, dst_reg, dst, 0, 1, false, ACCESS_HELPER)) {
verbose(env, "R%d pointer arithmetic of map value goes out of range, "
"prohibited for !root\n", dst);
return -EACCES;
@@ -17594,7 +17567,7 @@ static int do_check_insn(struct bpf_verifier_env *env, bool *do_print_state)
dst_reg_type = cur_regs(env)[insn->dst_reg].type;
- err = check_mem_access(env, env->insn_idx, insn->dst_reg,
+ err = check_mem_access(env, env->insn_idx, cur_regs(env) + insn->dst_reg, insn->dst_reg,
insn->off, BPF_SIZE(insn->code),
BPF_WRITE, -1, false, false);
if (err)
--
2.52.0
* [PATCH bpf-next v6 03/17] bpf: Refactor to handle memory and size together
2026-04-19 16:33 [PATCH bpf-next v6 00/17] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
2026-04-19 16:33 ` [PATCH bpf-next v6 01/17] bpf: Remove unused parameter from check_map_kptr_access() Yonghong Song
2026-04-19 16:33 ` [PATCH bpf-next v6 02/17] bpf: Refactor to avoid redundant calculation of bpf_reg_state Yonghong Song
@ 2026-04-19 16:33 ` Yonghong Song
2026-04-20 23:58 ` Alexei Starovoitov
2026-04-19 16:33 ` [PATCH bpf-next v6 04/17] bpf: Prepare verifier logs for upcoming kfunc stack arguments Yonghong Song
` (14 subsequent siblings)
17 siblings, 1 reply; 51+ messages in thread
From: Yonghong Song @ 2026-04-19 16:33 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
Similar to the previous patch, pass bpf_reg_state from caller to
callee. Both mem_reg and size_reg are now passed to helper functions
explicitly. This is important for stack arguments, which may live beyond
registers 1-5 and therefore cannot be derived from a register number alone.
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
kernel/bpf/verifier.c | 59 ++++++++++++++++++++++---------------------
1 file changed, 30 insertions(+), 29 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 514fee971f96..3716d9688d00 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -6940,12 +6940,12 @@ static int check_stack_range_initialized(
return 0;
}
-static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
+static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
int access_size, enum bpf_access_type access_type,
bool zero_size_allowed,
struct bpf_call_arg_meta *meta)
{
- struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[regno];
+ struct bpf_reg_state *regs = cur_regs(env);
u32 *max_access;
switch (base_type(reg->type)) {
@@ -7028,15 +7028,17 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
/* verify arguments to helpers or kfuncs consisting of a pointer and an access
* size.
*
- * @regno is the register containing the access size. regno-1 is the register
- * containing the pointer.
+ * @mem_regno is the register containing the pointer, mem_regno+1 is the register
+ * containing the access size.
*/
static int check_mem_size_reg(struct bpf_verifier_env *env,
- struct bpf_reg_state *reg, u32 regno,
+ struct bpf_reg_state *mem_reg,
+ struct bpf_reg_state *size_reg, u32 mem_regno,
enum bpf_access_type access_type,
bool zero_size_allowed,
struct bpf_call_arg_meta *meta)
{
+ int size_regno = mem_regno + 1;
int err;
/* This is used to refine r0 return value bounds for helpers
@@ -7047,37 +7049,37 @@ static int check_mem_size_reg(struct bpf_verifier_env *env,
* out. Only upper bounds can be learned because retval is an
* int type and negative retvals are allowed.
*/
- meta->msize_max_value = reg->umax_value;
+ meta->msize_max_value = size_reg->umax_value;
/* The register is SCALAR_VALUE; the access check happens using
* its boundaries. For unprivileged variable accesses, disable
* raw mode so that the program is required to initialize all
* the memory that the helper could just partially fill up.
*/
- if (!tnum_is_const(reg->var_off))
+ if (!tnum_is_const(size_reg->var_off))
meta = NULL;
- if (reg->smin_value < 0) {
+ if (size_reg->smin_value < 0) {
verbose(env, "R%d min value is negative, either use unsigned or 'var &= const'\n",
- regno);
+ size_regno);
return -EACCES;
}
- if (reg->umin_value == 0 && !zero_size_allowed) {
+ if (size_reg->umin_value == 0 && !zero_size_allowed) {
verbose(env, "R%d invalid zero-sized read: u64=[%lld,%lld]\n",
- regno, reg->umin_value, reg->umax_value);
+ size_regno, size_reg->umin_value, size_reg->umax_value);
return -EACCES;
}
- if (reg->umax_value >= BPF_MAX_VAR_SIZ) {
+ if (size_reg->umax_value >= BPF_MAX_VAR_SIZ) {
verbose(env, "R%d unbounded memory access, use 'var &= const' or 'if (var < const)'\n",
- regno);
+ size_regno);
return -EACCES;
}
- err = check_helper_mem_access(env, regno - 1, reg->umax_value,
+ err = check_helper_mem_access(env, mem_reg, mem_regno, size_reg->umax_value,
access_type, zero_size_allowed, meta);
if (!err)
- err = mark_chain_precision(env, regno);
+ err = mark_chain_precision(env, size_regno);
return err;
}
@@ -7102,8 +7104,8 @@ static int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg
int size = base_type(reg->type) == PTR_TO_STACK ? -(int)mem_size : mem_size;
- err = check_helper_mem_access(env, regno, size, BPF_READ, true, NULL);
- err = err ?: check_helper_mem_access(env, regno, size, BPF_WRITE, true, NULL);
+ err = check_helper_mem_access(env, reg, regno, size, BPF_READ, true, NULL);
+ err = err ?: check_helper_mem_access(env, reg, regno, size, BPF_WRITE, true, NULL);
if (may_be_null)
*reg = saved_reg;
@@ -7111,16 +7113,15 @@ static int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg
return err;
}
-static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
- u32 regno)
+static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg_state *mem_reg,
+ struct bpf_reg_state *size_reg, u32 mem_regno)
{
- struct bpf_reg_state *mem_reg = &cur_regs(env)[regno - 1];
bool may_be_null = type_may_be_null(mem_reg->type);
struct bpf_reg_state saved_reg;
struct bpf_call_arg_meta meta;
int err;
- WARN_ON_ONCE(regno < BPF_REG_2 || regno > BPF_REG_5);
+ WARN_ON_ONCE(mem_regno > BPF_REG_4);
memset(&meta, 0, sizeof(meta));
@@ -7129,8 +7130,8 @@ static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg
mark_ptr_not_null_reg(mem_reg);
}
- err = check_mem_size_reg(env, reg, regno, BPF_READ, true, &meta);
- err = err ?: check_mem_size_reg(env, reg, regno, BPF_WRITE, true, &meta);
+ err = check_mem_size_reg(env, mem_reg, size_reg, mem_regno, BPF_READ, true, &meta);
+ err = err ?: check_mem_size_reg(env, mem_reg, size_reg, mem_regno, BPF_WRITE, true, &meta);
if (may_be_null)
*mem_reg = saved_reg;
@@ -8594,7 +8595,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
return -EFAULT;
}
key_size = meta->map.ptr->key_size;
- err = check_helper_mem_access(env, regno, key_size, BPF_READ, false, NULL);
+ err = check_helper_mem_access(env, reg, regno, key_size, BPF_READ, false, NULL);
if (err)
return err;
if (can_elide_value_nullness(meta->map.ptr->map_type)) {
@@ -8621,7 +8622,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
return -EFAULT;
}
meta->raw_mode = arg_type & MEM_UNINIT;
- err = check_helper_mem_access(env, regno, meta->map.ptr->value_size,
+ err = check_helper_mem_access(env, reg, regno, meta->map.ptr->value_size,
arg_type & MEM_WRITE ? BPF_WRITE : BPF_READ,
false, meta);
break;
@@ -8665,7 +8666,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
*/
meta->raw_mode = arg_type & MEM_UNINIT;
if (arg_type & MEM_FIXED_SIZE) {
- err = check_helper_mem_access(env, regno, fn->arg_size[arg],
+ err = check_helper_mem_access(env, reg, regno, fn->arg_size[arg],
arg_type & MEM_WRITE ? BPF_WRITE : BPF_READ,
false, meta);
if (err)
@@ -8675,13 +8676,13 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
}
break;
case ARG_CONST_SIZE:
- err = check_mem_size_reg(env, reg, regno,
+ err = check_mem_size_reg(env, reg_state(env, regno - 1), reg, regno - 1,
fn->arg_type[arg - 1] & MEM_WRITE ?
BPF_WRITE : BPF_READ,
false, meta);
break;
case ARG_CONST_SIZE_OR_ZERO:
- err = check_mem_size_reg(env, reg, regno,
+ err = check_mem_size_reg(env, reg_state(env, regno - 1), reg, regno - 1,
fn->arg_type[arg - 1] & MEM_WRITE ?
BPF_WRITE : BPF_READ,
true, meta);
@@ -12415,7 +12416,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
const struct btf_param *size_arg = &args[i + 1];
if (!bpf_register_is_null(buff_reg) || !is_kfunc_arg_nullable(meta->btf, buff_arg)) {
- ret = check_kfunc_mem_size_reg(env, size_reg, regno + 1);
+ ret = check_kfunc_mem_size_reg(env, buff_reg, size_reg, regno);
if (ret < 0) {
verbose(env, "arg#%d arg#%d memory, len pair leads to invalid memory access\n", i, i + 1);
return ret;
--
2.52.0
* [PATCH bpf-next v6 04/17] bpf: Prepare verifier logs for upcoming kfunc stack arguments
2026-04-19 16:33 [PATCH bpf-next v6 00/17] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
` (2 preceding siblings ...)
2026-04-19 16:33 ` [PATCH bpf-next v6 03/17] bpf: Refactor to handle memory and size together Yonghong Song
@ 2026-04-19 16:33 ` Yonghong Song
2026-04-21 0:03 ` Alexei Starovoitov
2026-04-19 16:33 ` [PATCH bpf-next v6 05/17] bpf: Introduce bpf register BPF_REG_PARAMS Yonghong Song
` (13 subsequent siblings)
17 siblings, 1 reply; 51+ messages in thread
From: Yonghong Song @ 2026-04-19 16:33 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
This change prepares verifier log reporting for upcoming kfunc stack
argument support.
Today, the verifier log code mostly assumes that an argument can be
described directly by a register number. That works for arguments passed
in `R1` to `R5`, but it does not work once kfunc arguments can also be
passed on the stack.
Introduce an internal `argno` representation such that register-passed
arguments keep using their real register numbers, while stack-passed
arguments use an encoded value above a dedicated base.
`reg_arg_name()` converts this representation into either `R%d` or
`*(R11-off)` when emitting verifier logs. If a particular `argno`
corresponds to a stack argument, it prints `*(R11-off)`; otherwise,
it prints `R%d`. Here R11 represents the base of the stack argument area.
This keeps existing logs readable for register arguments and allows the
same log sites to handle future stack arguments without open-coding
special cases.
Update selftests accordingly.
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
include/linux/bpf_verifier.h | 1 +
kernel/bpf/verifier.c | 649 ++++++++++--------
.../testing/selftests/bpf/prog_tests/bpf_nf.c | 22 +-
.../selftests/bpf/prog_tests/cb_refs.c | 2 +-
.../selftests/bpf/prog_tests/kfunc_call.c | 2 +-
.../selftests/bpf/prog_tests/linked_list.c | 4 +-
.../selftests/bpf/progs/cgrp_kfunc_failure.c | 14 +-
.../selftests/bpf/progs/cpumask_failure.c | 10 +-
.../testing/selftests/bpf/progs/dynptr_fail.c | 22 +-
.../selftests/bpf/progs/file_reader_fail.c | 4 +-
tools/testing/selftests/bpf/progs/irq.c | 4 +-
tools/testing/selftests/bpf/progs/iters.c | 6 +-
.../selftests/bpf/progs/iters_state_safety.c | 14 +-
.../selftests/bpf/progs/iters_testmod.c | 4 +-
.../selftests/bpf/progs/iters_testmod_seq.c | 4 +-
.../selftests/bpf/progs/map_kptr_fail.c | 2 +-
.../selftests/bpf/progs/percpu_alloc_fail.c | 4 +-
.../testing/selftests/bpf/progs/rbtree_fail.c | 6 +-
.../bpf/progs/refcounted_kptr_fail.c | 2 +-
.../testing/selftests/bpf/progs/stream_fail.c | 2 +-
.../selftests/bpf/progs/task_kfunc_failure.c | 18 +-
.../selftests/bpf/progs/task_work_fail.c | 6 +-
.../selftests/bpf/progs/test_bpf_nf_fail.c | 8 +-
.../bpf/progs/test_kfunc_dynptr_param.c | 2 +-
.../bpf/progs/test_kfunc_param_nullable.c | 2 +-
.../selftests/bpf/progs/verifier_bits_iter.c | 4 +-
.../bpf/progs/verifier_ref_tracking.c | 6 +-
.../selftests/bpf/progs/verifier_vfs_reject.c | 8 +-
.../testing/selftests/bpf/progs/wq_failures.c | 2 +-
tools/testing/selftests/bpf/verifier/calls.c | 14 +-
30 files changed, 474 insertions(+), 374 deletions(-)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index b148f816f25b..9fbbddc40d21 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -913,6 +913,7 @@ struct bpf_verifier_env {
* e.g., in reg_type_str() to generate reg_type string
*/
char tmp_str_buf[TMP_STR_BUF_LEN];
+ char tmp_reg_arg_name_buf[32];
struct bpf_insn insn_buf[INSN_BUF_SIZE];
struct bpf_insn epilogue_buf[INSN_BUF_SIZE];
struct bpf_scc_callchain callchain_buf;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 3716d9688d00..6aa4dc161a56 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1751,6 +1751,55 @@ static struct bpf_verifier_state *push_stack(struct bpf_verifier_env *env,
return &elem->st;
}
+/*
+ * Unified argument number encoding for verifier log messages.
+ * Register args (arg_idx 0-4) use their register number (R1-R5).
+ * Stack args (arg_idx 5+) are encoded as STACK_ARGNO_BASE + arg_idx
+ * to avoid collision with register numbers. reg_arg_name() decodes
+ * this back to a human-readable string like "*(R11-8)" for logs.
+ */
+#define STACK_ARGNO_BASE 100
+
+static bool is_stack_argno(int argno)
+{
+ return argno >= STACK_ARGNO_BASE;
+}
+
+static u32 make_argno(u32 arg_idx)
+{
+ if (arg_idx < MAX_BPF_FUNC_REG_ARGS)
+ return BPF_REG_1 + arg_idx;
+ return STACK_ARGNO_BASE + arg_idx;
+}
+
+static u32 arg_idx_from_argno(int argno)
+{
+ if (is_stack_argno(argno))
+ return argno - STACK_ARGNO_BASE;
+ return argno - BPF_REG_1;
+}
+
+static int next_argno(int argno)
+{
+ return make_argno(arg_idx_from_argno(argno) + 1);
+}
+
+static const char *reg_arg_name(struct bpf_verifier_env *env, int argno)
+{
+ char *buf = env->tmp_reg_arg_name_buf;
+ int len = sizeof(env->tmp_reg_arg_name_buf);
+ u32 idx;
+
+ if (!is_stack_argno(argno)) {
+ snprintf(buf, len, "R%d", argno);
+ return buf;
+ }
+
+ idx = arg_idx_from_argno(argno);
+ snprintf(buf, len, "*(R11-%u)", (idx - MAX_BPF_FUNC_REG_ARGS + 1) * BPF_REG_SIZE);
+ return buf;
+}
+
static const int caller_saved[CALLER_SAVED_REGS] = {
BPF_REG_0, BPF_REG_1, BPF_REG_2, BPF_REG_3, BPF_REG_4, BPF_REG_5
};
@@ -4250,7 +4299,7 @@ enum bpf_access_src {
};
static int check_stack_range_initialized(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
- int regno, int off, int access_size,
+ int argno, int off, int access_size,
bool zero_size_allowed,
enum bpf_access_type type,
struct bpf_call_arg_meta *meta);
@@ -4274,7 +4323,7 @@ static struct bpf_reg_state *reg_state(struct bpf_verifier_env *env, int regno)
* instead.
*/
static int check_stack_read_var_off(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
- int ptr_regno, int off, int size, int dst_regno)
+ int ptr_argno, int off, int size, int dst_regno)
{
struct bpf_func_state *ptr_state = bpf_func(env, reg);
int err;
@@ -4282,7 +4331,7 @@ static int check_stack_read_var_off(struct bpf_verifier_env *env, struct bpf_reg
/* Note that we pass a NULL meta, so raw access will not be permitted.
*/
- err = check_stack_range_initialized(env, reg, ptr_regno, off, size,
+ err = check_stack_range_initialized(env, reg, ptr_argno, off, size,
false, BPF_READ, NULL);
if (err)
return err;
@@ -4304,7 +4353,7 @@ static int check_stack_read_var_off(struct bpf_verifier_env *env, struct bpf_reg
* can be -1, meaning that the read value is not going to a register.
*/
static int check_stack_read(struct bpf_verifier_env *env,
- struct bpf_reg_state *reg, int ptr_regno, int off, int size,
+ struct bpf_reg_state *reg, int ptr_argno, int off, int size,
int dst_regno)
{
struct bpf_func_state *state = bpf_func(env, reg);
@@ -4342,7 +4391,7 @@ static int check_stack_read(struct bpf_verifier_env *env,
* than fixed offset ones. Note that dst_regno >= 0 on this
* branch.
*/
- err = check_stack_read_var_off(env, reg, ptr_regno, off, size,
+ err = check_stack_read_var_off(env, reg, ptr_argno, off, size,
dst_regno);
}
return err;
@@ -4380,7 +4429,7 @@ static int check_stack_write(struct bpf_verifier_env *env,
return err;
}
-static int check_map_access_type(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
+static int check_map_access_type(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 argno,
int off, int size, enum bpf_access_type type)
{
struct bpf_map *map = reg->map_ptr;
@@ -4402,7 +4451,7 @@ static int check_map_access_type(struct bpf_verifier_env *env, struct bpf_reg_st
}
/* check read/write into memory region (e.g., map value, ringbuf sample, etc) */
-static int __check_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
+static int __check_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno,
int off, int size, u32 mem_size,
bool zero_size_allowed)
{
@@ -4423,8 +4472,8 @@ static int __check_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state
case PTR_TO_PACKET:
case PTR_TO_PACKET_META:
case PTR_TO_PACKET_END:
- verbose(env, "invalid access to packet, off=%d size=%d, R%d(id=%d,off=%d,r=%d)\n",
- off, size, regno, reg->id, off, mem_size);
+ verbose(env, "invalid access to packet, off=%d size=%d, %s(id=%d,off=%d,r=%d)\n",
+ off, size, reg_arg_name(env, argno), reg->id, off, mem_size);
break;
case PTR_TO_CTX:
verbose(env, "invalid access to context, ctx_size=%d off=%d size=%d\n",
@@ -4440,7 +4489,7 @@ static int __check_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state
}
/* check read/write into a memory region with possible variable offset */
-static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
+static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 argno,
int off, int size, u32 mem_size,
bool zero_size_allowed)
{
@@ -4460,15 +4509,15 @@ static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_
(reg->smin_value == S64_MIN ||
(off + reg->smin_value != (s64)(s32)(off + reg->smin_value)) ||
reg->smin_value + off < 0)) {
- verbose(env, "R%d min value is negative, either use unsigned index or do a if (index >=0) check.\n",
- regno);
+ verbose(env, "%s min value is negative, either use unsigned index or do a if (index >=0) check.\n",
+ reg_arg_name(env, argno));
return -EACCES;
}
- err = __check_mem_access(env, reg, regno, reg->smin_value + off, size,
+ err = __check_mem_access(env, reg, argno, reg->smin_value + off, size,
mem_size, zero_size_allowed);
if (err) {
- verbose(env, "R%d min value is outside of the allowed memory range\n",
- regno);
+ verbose(env, "%s min value is outside of the allowed memory range\n",
+ reg_arg_name(env, argno));
return err;
}
@@ -4477,15 +4526,15 @@ static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_
* If reg->umax_value + off could overflow, treat that as unbounded too.
*/
if (reg->umax_value >= BPF_MAX_VAR_OFF) {
- verbose(env, "R%d unbounded memory access, make sure to bounds check any such access\n",
- regno);
+ verbose(env, "%s unbounded memory access, make sure to bounds check any such access\n",
+ reg_arg_name(env, argno));
return -EACCES;
}
- err = __check_mem_access(env, reg, regno, reg->umax_value + off, size,
+ err = __check_mem_access(env, reg, argno, reg->umax_value + off, size,
mem_size, zero_size_allowed);
if (err) {
- verbose(env, "R%d max value is outside of the allowed memory range\n",
- regno);
+ verbose(env, "%s max value is outside of the allowed memory range\n",
+ reg_arg_name(env, argno));
return err;
}
@@ -4493,7 +4542,7 @@ static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_
}
static int __check_ptr_off_reg(struct bpf_verifier_env *env,
- const struct bpf_reg_state *reg, int regno,
+ const struct bpf_reg_state *reg, u32 argno,
bool fixed_off_ok)
{
/* Access to this pointer-typed register or passing it to a helper
@@ -4510,14 +4559,14 @@ static int __check_ptr_off_reg(struct bpf_verifier_env *env,
}
if (reg->smin_value < 0) {
- verbose(env, "negative offset %s ptr R%d off=%lld disallowed\n",
- reg_type_str(env, reg->type), regno, reg->var_off.value);
+ verbose(env, "negative offset %s ptr %s off=%lld disallowed\n",
+ reg_type_str(env, reg->type), reg_arg_name(env, argno), reg->var_off.value);
return -EACCES;
}
if (!fixed_off_ok && reg->var_off.value != 0) {
- verbose(env, "dereference of modified %s ptr R%d off=%lld disallowed\n",
- reg_type_str(env, reg->type), regno, reg->var_off.value);
+ verbose(env, "dereference of modified %s ptr %s off=%lld disallowed\n",
+ reg_type_str(env, reg->type), reg_arg_name(env, argno), reg->var_off.value);
return -EACCES;
}
@@ -4890,17 +4939,17 @@ static bool may_access_direct_pkt_data(struct bpf_verifier_env *env,
}
}
-static int check_packet_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno, int off,
+static int check_packet_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 argno, int off,
int size, bool zero_size_allowed)
{
int err;
if (reg->range < 0) {
- verbose(env, "R%d offset is outside of the packet\n", regno);
+ verbose(env, "%s offset is outside of the packet\n", reg_arg_name(env, argno));
return -EINVAL;
}
- err = check_mem_region_access(env, reg, regno, off, size, reg->range, zero_size_allowed);
+ err = check_mem_region_access(env, reg, argno, off, size, reg->range, zero_size_allowed);
if (err)
return err;
@@ -4955,7 +5004,7 @@ static int __check_ctx_access(struct bpf_verifier_env *env, int insn_idx, int of
return -EACCES;
}
-static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, u32 regno,
+static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, u32 argno,
int off, int access_size, enum bpf_access_type t,
struct bpf_insn_access_aux *info)
{
@@ -4968,9 +5017,9 @@ static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, struct b
int err;
if (var_off_ok)
- err = check_mem_region_access(env, reg, regno, off, access_size, U16_MAX, false);
+ err = check_mem_region_access(env, reg, argno, off, access_size, U16_MAX, false);
else
- err = __check_ptr_off_reg(env, reg, regno, fixed_off_ok);
+ err = __check_ptr_off_reg(env, reg, argno, fixed_off_ok);
if (err)
return err;
off += reg->umax_value;
@@ -4994,15 +5043,15 @@ static int check_flow_keys_access(struct bpf_verifier_env *env, int off,
}
static int check_sock_access(struct bpf_verifier_env *env, int insn_idx,
- struct bpf_reg_state *reg, u32 regno, int off, int size,
+ struct bpf_reg_state *reg, u32 argno, int off, int size,
enum bpf_access_type t)
{
struct bpf_insn_access_aux info = {};
bool valid;
if (reg->smin_value < 0) {
- verbose(env, "R%d min value is negative, either use unsigned index or do a if (index >=0) check.\n",
- regno);
+ verbose(env, "%s min value is negative, either use unsigned index or do a if (index >=0) check.\n",
+ reg_arg_name(env, argno));
return -EACCES;
}
@@ -5030,8 +5079,8 @@ static int check_sock_access(struct bpf_verifier_env *env, int insn_idx,
return 0;
}
- verbose(env, "R%d invalid %s access off=%d size=%d\n",
- regno, reg_type_str(env, reg->type), off, size);
+ verbose(env, "%s invalid %s access off=%d size=%d\n",
+ reg_arg_name(env, argno), reg_type_str(env, reg->type), off, size);
return -EACCES;
}
@@ -5541,12 +5590,12 @@ static int check_max_stack_depth(struct bpf_verifier_env *env)
static int __check_buffer_access(struct bpf_verifier_env *env,
const char *buf_info,
const struct bpf_reg_state *reg,
- int regno, int off, int size)
+ int argno, int off, int size)
{
if (off < 0) {
verbose(env,
- "R%d invalid %s buffer access: off=%d, size=%d\n",
- regno, buf_info, off, size);
+ "%s invalid %s buffer access: off=%d, size=%d\n",
+ reg_arg_name(env, argno), buf_info, off, size);
return -EACCES;
}
if (!tnum_is_const(reg->var_off)) {
@@ -5554,8 +5603,8 @@ static int __check_buffer_access(struct bpf_verifier_env *env,
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
verbose(env,
- "R%d invalid variable buffer offset: off=%d, var_off=%s\n",
- regno, off, tn_buf);
+ "%s invalid variable buffer offset: off=%d, var_off=%s\n",
+ reg_arg_name(env, argno), off, tn_buf);
return -EACCES;
}
@@ -5564,11 +5613,11 @@ static int __check_buffer_access(struct bpf_verifier_env *env,
static int check_tp_buffer_access(struct bpf_verifier_env *env,
const struct bpf_reg_state *reg,
- int regno, int off, int size)
+ int argno, int off, int size)
{
int err;
- err = __check_buffer_access(env, "tracepoint", reg, regno, off, size);
+ err = __check_buffer_access(env, "tracepoint", reg, argno, off, size);
if (err)
return err;
@@ -5580,14 +5629,14 @@ static int check_tp_buffer_access(struct bpf_verifier_env *env,
static int check_buffer_access(struct bpf_verifier_env *env,
const struct bpf_reg_state *reg,
- int regno, int off, int size,
+ int argno, int off, int size,
bool zero_size_allowed,
u32 *max_access)
{
const char *buf_info = type_is_rdonly_mem(reg->type) ? "rdonly" : "rdwr";
int err;
- err = __check_buffer_access(env, buf_info, reg, regno, off, size);
+ err = __check_buffer_access(env, buf_info, reg, argno, off, size);
if (err)
return err;
@@ -5960,7 +6009,7 @@ static bool type_is_trusted_or_null(struct bpf_verifier_env *env,
static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
struct bpf_reg_state *regs, struct bpf_reg_state *reg,
- int regno, int off, int size,
+ int argno, int off, int size,
enum bpf_access_type atype,
int value_regno)
{
@@ -5989,8 +6038,8 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
verbose(env,
- "R%d is ptr_%s invalid variable offset: off=%d, var_off=%s\n",
- regno, tname, off, tn_buf);
+ "%s is ptr_%s invalid variable offset: off=%d, var_off=%s\n",
+ reg_arg_name(env, argno), tname, off, tn_buf);
return -EACCES;
}
@@ -5998,22 +6047,22 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
if (off < 0) {
verbose(env,
- "R%d is ptr_%s invalid negative access: off=%d\n",
- regno, tname, off);
+ "%s is ptr_%s invalid negative access: off=%d\n",
+ reg_arg_name(env, argno), tname, off);
return -EACCES;
}
if (reg->type & MEM_USER) {
verbose(env,
- "R%d is ptr_%s access user memory: off=%d\n",
- regno, tname, off);
+ "%s is ptr_%s access user memory: off=%d\n",
+ reg_arg_name(env, argno), tname, off);
return -EACCES;
}
if (reg->type & MEM_PERCPU) {
verbose(env,
- "R%d is ptr_%s access percpu memory: off=%d\n",
- regno, tname, off);
+ "%s is ptr_%s access percpu memory: off=%d\n",
+ reg_arg_name(env, argno), tname, off);
return -EACCES;
}
@@ -6116,7 +6165,7 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
static int check_ptr_to_map_access(struct bpf_verifier_env *env,
struct bpf_reg_state *regs, struct bpf_reg_state *reg,
- int regno, int off, int size,
+ int argno, int off, int size,
enum bpf_access_type atype,
int value_regno)
{
@@ -6150,8 +6199,8 @@ static int check_ptr_to_map_access(struct bpf_verifier_env *env,
}
if (off < 0) {
- verbose(env, "R%d is %s invalid negative access: off=%d\n",
- regno, tname, off);
+ verbose(env, "%s is %s invalid negative access: off=%d\n",
+ reg_arg_name(env, argno), tname, off);
return -EACCES;
}
@@ -6209,7 +6258,7 @@ static int check_stack_slot_within_bounds(struct bpf_verifier_env *env,
*/
static int check_stack_access_within_bounds(
struct bpf_verifier_env *env, struct bpf_reg_state *reg,
- int regno, int off, int access_size,
+ int argno, int off, int access_size,
enum bpf_access_type type)
{
struct bpf_func_state *state = bpf_func(env, reg);
@@ -6228,8 +6277,8 @@ static int check_stack_access_within_bounds(
} else {
if (reg->smax_value >= BPF_MAX_VAR_OFF ||
reg->smin_value <= -BPF_MAX_VAR_OFF) {
- verbose(env, "invalid unbounded variable-offset%s stack R%d\n",
- err_extra, regno);
+ verbose(env, "invalid unbounded variable-offset%s stack %s\n",
+ err_extra, reg_arg_name(env, argno));
return -EACCES;
}
min_off = reg->smin_value + off;
@@ -6247,14 +6296,14 @@ static int check_stack_access_within_bounds(
if (err) {
if (tnum_is_const(reg->var_off)) {
- verbose(env, "invalid%s stack R%d off=%lld size=%d\n",
- err_extra, regno, min_off, access_size);
+ verbose(env, "invalid%s stack %s off=%lld size=%d\n",
+ err_extra, reg_arg_name(env, argno), min_off, access_size);
} else {
char tn_buf[48];
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
- verbose(env, "invalid variable-offset%s stack R%d var_off=%s off=%d size=%d\n",
- err_extra, regno, tn_buf, off, access_size);
+ verbose(env, "invalid variable-offset%s stack %s var_off=%s off=%d size=%d\n",
+ err_extra, reg_arg_name(env, argno), tn_buf, off, access_size);
}
return err;
}
@@ -6299,7 +6348,7 @@ static void add_scalar_to_reg(struct bpf_reg_state *dst_reg, s64 val)
* if t==write && value_regno==-1, some unknown value is stored into memory
* if t==read && value_regno==-1, don't care what we read from memory
*/
-static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, u32 regno,
+static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, u32 argno,
int off, int bpf_size, enum bpf_access_type t,
int value_regno, bool strict_alignment_once, bool is_ldsx)
{
@@ -6316,11 +6365,12 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
if (reg->type == PTR_TO_MAP_KEY) {
if (t == BPF_WRITE) {
- verbose(env, "write to change key R%d not allowed\n", regno);
+ verbose(env, "write to change key %s not allowed\n",
+ reg_arg_name(env, argno));
return -EACCES;
}
- err = check_mem_region_access(env, reg, regno, off, size,
+ err = check_mem_region_access(env, reg, argno, off, size,
reg->map_ptr->key_size, false);
if (err)
return err;
@@ -6334,10 +6384,10 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
verbose(env, "R%d leaks addr into map\n", value_regno);
return -EACCES;
}
- err = check_map_access_type(env, reg, regno, off, size, t);
+ err = check_map_access_type(env, reg, argno, off, size, t);
if (err)
return err;
- err = check_map_access(env, reg, regno, off, size, false, ACCESS_DIRECT);
+ err = check_map_access(env, reg, argno, off, size, false, ACCESS_DIRECT);
if (err)
return err;
if (tnum_is_const(reg->var_off))
@@ -6384,14 +6434,14 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
bool rdonly_untrusted = rdonly_mem && (reg->type & PTR_UNTRUSTED);
if (type_may_be_null(reg->type)) {
- verbose(env, "R%d invalid mem access '%s'\n", regno,
+ verbose(env, "%s invalid mem access '%s'\n", reg_arg_name(env, argno),
reg_type_str(env, reg->type));
return -EACCES;
}
if (t == BPF_WRITE && rdonly_mem) {
- verbose(env, "R%d cannot write into %s\n",
- regno, reg_type_str(env, reg->type));
+ verbose(env, "%s cannot write into %s\n",
+ reg_arg_name(env, argno), reg_type_str(env, reg->type));
return -EACCES;
}
@@ -6406,7 +6456,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
* instructions, hence no need to check bounds in that case.
*/
if (!rdonly_untrusted)
- err = check_mem_region_access(env, reg, regno, off, size,
+ err = check_mem_region_access(env, reg, argno, off, size,
reg->mem_size, false);
if (!err && value_regno >= 0 && (t == BPF_READ || rdonly_mem))
mark_reg_unknown(env, regs, value_regno);
@@ -6424,7 +6474,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
return -EACCES;
}
- err = check_ctx_access(env, insn_idx, reg, regno, off, size, t, &info);
+ err = check_ctx_access(env, insn_idx, reg, argno, off, size, t, &info);
if (!err && t == BPF_READ && value_regno >= 0) {
/* ctx access returns either a scalar, or a
* PTR_TO_PACKET[_META,_END]. In the latter
@@ -6461,12 +6511,12 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
} else if (reg->type == PTR_TO_STACK) {
/* Basic bounds checks. */
- err = check_stack_access_within_bounds(env, reg, regno, off, size, t);
+ err = check_stack_access_within_bounds(env, reg, argno, off, size, t);
if (err)
return err;
if (t == BPF_READ)
- err = check_stack_read(env, reg, regno, off, size,
+ err = check_stack_read(env, reg, argno, off, size,
value_regno);
else
err = check_stack_write(env, reg, off, size,
@@ -6482,7 +6532,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
value_regno);
return -EACCES;
}
- err = check_packet_access(env, reg, regno, off, size, false);
+ err = check_packet_access(env, reg, argno, off, size, false);
if (!err && t == BPF_READ && value_regno >= 0)
mark_reg_unknown(env, regs, value_regno);
} else if (reg->type == PTR_TO_FLOW_KEYS) {
@@ -6498,23 +6548,23 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
mark_reg_unknown(env, regs, value_regno);
} else if (type_is_sk_pointer(reg->type)) {
if (t == BPF_WRITE) {
- verbose(env, "R%d cannot write into %s\n",
- regno, reg_type_str(env, reg->type));
+ verbose(env, "%s cannot write into %s\n",
+ reg_arg_name(env, argno), reg_type_str(env, reg->type));
return -EACCES;
}
- err = check_sock_access(env, insn_idx, reg, regno, off, size, t);
+ err = check_sock_access(env, insn_idx, reg, argno, off, size, t);
if (!err && value_regno >= 0)
mark_reg_unknown(env, regs, value_regno);
} else if (reg->type == PTR_TO_TP_BUFFER) {
- err = check_tp_buffer_access(env, reg, regno, off, size);
+ err = check_tp_buffer_access(env, reg, argno, off, size);
if (!err && t == BPF_READ && value_regno >= 0)
mark_reg_unknown(env, regs, value_regno);
} else if (base_type(reg->type) == PTR_TO_BTF_ID &&
!type_may_be_null(reg->type)) {
- err = check_ptr_to_btf_access(env, regs, reg, regno, off, size, t,
+ err = check_ptr_to_btf_access(env, regs, reg, argno, off, size, t,
value_regno);
} else if (reg->type == CONST_PTR_TO_MAP) {
- err = check_ptr_to_map_access(env, regs, reg, regno, off, size, t,
+ err = check_ptr_to_map_access(env, regs, reg, argno, off, size, t,
value_regno);
} else if (base_type(reg->type) == PTR_TO_BUF &&
!type_may_be_null(reg->type)) {
@@ -6523,8 +6573,8 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
if (rdonly_mem) {
if (t == BPF_WRITE) {
- verbose(env, "R%d cannot write into %s\n",
- regno, reg_type_str(env, reg->type));
+ verbose(env, "%s cannot write into %s\n",
+ reg_arg_name(env, argno), reg_type_str(env, reg->type));
return -EACCES;
}
max_access = &env->prog->aux->max_rdonly_access;
@@ -6532,7 +6582,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
max_access = &env->prog->aux->max_rdwr_access;
}
- err = check_buffer_access(env, reg, regno, off, size, false,
+ err = check_buffer_access(env, reg, argno, off, size, false,
max_access);
if (!err && value_regno >= 0 && (rdonly_mem || t == BPF_READ))
@@ -6541,7 +6591,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
if (t == BPF_READ && value_regno >= 0)
mark_reg_unknown(env, regs, value_regno);
} else {
- verbose(env, "R%d invalid mem access '%s'\n", regno,
+ verbose(env, "%s invalid mem access '%s'\n", reg_arg_name(env, argno),
reg_type_str(env, reg->type));
return -EACCES;
}
@@ -6795,7 +6845,7 @@ static int check_atomic(struct bpf_verifier_env *env, struct bpf_insn *insn)
* read offsets are marked as read.
*/
static int check_stack_range_initialized(
- struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int off,
+ struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno, int off,
int access_size, bool zero_size_allowed,
enum bpf_access_type type, struct bpf_call_arg_meta *meta)
{
@@ -6820,7 +6870,7 @@ static int check_stack_range_initialized(
return -EACCES;
}
- err = check_stack_access_within_bounds(env, reg, regno, off, access_size, type);
+ err = check_stack_access_within_bounds(env, reg, argno, off, access_size, type);
if (err)
return err;
@@ -6837,8 +6887,8 @@ static int check_stack_range_initialized(
char tn_buf[48];
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
- verbose(env, "R%d variable offset stack access prohibited for !root, var_off=%s\n",
- regno, tn_buf);
+ verbose(env, "%s variable offset stack access prohibited for !root, var_off=%s\n",
+ reg_arg_name(env, argno), tn_buf);
return -EACCES;
}
/* Only initialized buffer on stack is allowed to be accessed
@@ -6881,7 +6931,7 @@ static int check_stack_range_initialized(
}
}
meta->access_size = access_size;
- meta->regno = regno;
+ meta->regno = argno;
return 0;
}
@@ -6921,17 +6971,17 @@ static int check_stack_range_initialized(
if (*stype == STACK_POISON) {
if (allow_poison)
goto mark;
- verbose(env, "reading from stack R%d off %d+%d size %d, slot poisoned by dead code elimination\n",
- regno, min_off, i - min_off, access_size);
+ verbose(env, "reading from stack %s off %d+%d size %d, slot poisoned by dead code elimination\n",
+ reg_arg_name(env, argno), min_off, i - min_off, access_size);
} else if (tnum_is_const(reg->var_off)) {
- verbose(env, "invalid read from stack R%d off %d+%d size %d\n",
- regno, min_off, i - min_off, access_size);
+ verbose(env, "invalid read from stack %s off %d+%d size %d\n",
+ reg_arg_name(env, argno), min_off, i - min_off, access_size);
} else {
char tn_buf[48];
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
- verbose(env, "invalid read from stack R%d var_off %s+%d size %d\n",
- regno, tn_buf, i - min_off, access_size);
+ verbose(env, "invalid read from stack %s var_off %s+%d size %d\n",
+ reg_arg_name(env, argno), tn_buf, i - min_off, access_size);
}
return -EACCES;
mark:
@@ -6940,7 +6990,7 @@ static int check_stack_range_initialized(
return 0;
}
-static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
+static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno,
int access_size, enum bpf_access_type access_type,
bool zero_size_allowed,
struct bpf_call_arg_meta *meta)
@@ -6951,37 +7001,37 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_
switch (base_type(reg->type)) {
case PTR_TO_PACKET:
case PTR_TO_PACKET_META:
- return check_packet_access(env, reg, regno, 0, access_size,
+ return check_packet_access(env, reg, argno, 0, access_size,
zero_size_allowed);
case PTR_TO_MAP_KEY:
if (access_type == BPF_WRITE) {
- verbose(env, "R%d cannot write into %s\n", regno,
- reg_type_str(env, reg->type));
+ verbose(env, "%s cannot write into %s\n",
+ reg_arg_name(env, argno), reg_type_str(env, reg->type));
return -EACCES;
}
- return check_mem_region_access(env, reg, regno, 0, access_size,
+ return check_mem_region_access(env, reg, argno, 0, access_size,
reg->map_ptr->key_size, false);
case PTR_TO_MAP_VALUE:
- if (check_map_access_type(env, reg, regno, 0, access_size, access_type))
+ if (check_map_access_type(env, reg, argno, 0, access_size, access_type))
return -EACCES;
- return check_map_access(env, reg, regno, 0, access_size,
+ return check_map_access(env, reg, argno, 0, access_size,
zero_size_allowed, ACCESS_HELPER);
case PTR_TO_MEM:
if (type_is_rdonly_mem(reg->type)) {
if (access_type == BPF_WRITE) {
- verbose(env, "R%d cannot write into %s\n", regno,
- reg_type_str(env, reg->type));
+ verbose(env, "%s cannot write into %s\n",
+ reg_arg_name(env, argno), reg_type_str(env, reg->type));
return -EACCES;
}
}
- return check_mem_region_access(env, reg, regno, 0,
+ return check_mem_region_access(env, reg, argno, 0,
access_size, reg->mem_size,
zero_size_allowed);
case PTR_TO_BUF:
if (type_is_rdonly_mem(reg->type)) {
if (access_type == BPF_WRITE) {
- verbose(env, "R%d cannot write into %s\n", regno,
- reg_type_str(env, reg->type));
+ verbose(env, "%s cannot write into %s\n",
+ reg_arg_name(env, argno), reg_type_str(env, reg->type));
return -EACCES;
}
@@ -6989,21 +7039,21 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_
} else {
max_access = &env->prog->aux->max_rdwr_access;
}
- return check_buffer_access(env, reg, regno, 0,
+ return check_buffer_access(env, reg, argno, 0,
access_size, zero_size_allowed,
max_access);
case PTR_TO_STACK:
return check_stack_range_initialized(
env, reg,
- regno, 0, access_size,
+ argno, 0, access_size,
zero_size_allowed, access_type, meta);
case PTR_TO_BTF_ID:
- return check_ptr_to_btf_access(env, regs, reg, regno, 0,
+ return check_ptr_to_btf_access(env, regs, reg, argno, 0,
access_size, BPF_READ, -1);
case PTR_TO_CTX:
/* Only permit reading or writing syscall context using helper calls. */
if (is_var_ctx_off_allowed(env->prog)) {
- int err = check_mem_region_access(env, reg, regno, 0, access_size, U16_MAX,
+ int err = check_mem_region_access(env, reg, argno, 0, access_size, U16_MAX,
zero_size_allowed);
if (err)
return err;
@@ -7018,7 +7068,7 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_
bpf_register_is_null(reg))
return 0;
- verbose(env, "R%d type=%s ", regno,
+ verbose(env, "%s type=%s ", reg_arg_name(env, argno),
reg_type_str(env, reg->type));
verbose(env, "expected=%s\n", reg_type_str(env, PTR_TO_STACK));
return -EACCES;
@@ -7033,12 +7083,12 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_
*/
static int check_mem_size_reg(struct bpf_verifier_env *env,
struct bpf_reg_state *mem_reg,
- struct bpf_reg_state *size_reg, u32 mem_regno,
+ struct bpf_reg_state *size_reg, u32 mem_argno,
enum bpf_access_type access_type,
bool zero_size_allowed,
struct bpf_call_arg_meta *meta)
{
- int size_regno = mem_regno + 1;
+ int size_argno = next_argno(mem_argno);
int err;
/* This is used to refine r0 return value bounds for helpers
@@ -7060,31 +7110,31 @@ static int check_mem_size_reg(struct bpf_verifier_env *env,
meta = NULL;
if (size_reg->smin_value < 0) {
- verbose(env, "R%d min value is negative, either use unsigned or 'var &= const'\n",
- size_regno);
+ verbose(env, "%s min value is negative, either use unsigned or 'var &= const'\n",
+ reg_arg_name(env, size_argno));
return -EACCES;
}
if (size_reg->umin_value == 0 && !zero_size_allowed) {
- verbose(env, "R%d invalid zero-sized read: u64=[%lld,%lld]\n",
- size_regno, size_reg->umin_value, size_reg->umax_value);
+ verbose(env, "%s invalid zero-sized read: u64=[%lld,%lld]\n",
+ reg_arg_name(env, size_argno), size_reg->umin_value, size_reg->umax_value);
return -EACCES;
}
if (size_reg->umax_value >= BPF_MAX_VAR_SIZ) {
- verbose(env, "R%d unbounded memory access, use 'var &= const' or 'if (var < const)'\n",
- size_regno);
+ verbose(env, "%s unbounded memory access, use 'var &= const' or 'if (var < const)'\n",
+ reg_arg_name(env, size_argno));
return -EACCES;
}
- err = check_helper_mem_access(env, mem_reg, mem_regno, size_reg->umax_value,
+ err = check_helper_mem_access(env, mem_reg, mem_argno, size_reg->umax_value,
access_type, zero_size_allowed, meta);
- if (!err)
- err = mark_chain_precision(env, size_regno);
+ if (!err && !is_stack_argno(size_argno))
+ err = mark_chain_precision(env, size_argno);
return err;
}
static int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
- u32 regno, u32 mem_size)
+ u32 argno, u32 mem_size)
{
bool may_be_null = type_may_be_null(reg->type);
struct bpf_reg_state saved_reg;
@@ -7104,8 +7154,8 @@ static int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg
int size = base_type(reg->type) == PTR_TO_STACK ? -(int)mem_size : mem_size;
- err = check_helper_mem_access(env, reg, regno, size, BPF_READ, true, NULL);
- err = err ?: check_helper_mem_access(env, reg, regno, size, BPF_WRITE, true, NULL);
+ err = check_helper_mem_access(env, reg, argno, size, BPF_READ, true, NULL);
+ err = err ?: check_helper_mem_access(env, reg, argno, size, BPF_WRITE, true, NULL);
if (may_be_null)
*reg = saved_reg;
@@ -7114,14 +7164,15 @@ static int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg
}
static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg_state *mem_reg,
- struct bpf_reg_state *size_reg, u32 mem_regno)
+ struct bpf_reg_state *size_reg, u32 mem_argno)
{
bool may_be_null = type_may_be_null(mem_reg->type);
struct bpf_reg_state saved_reg;
struct bpf_call_arg_meta meta;
+ u32 argno = make_argno(mem_argno);
int err;
- WARN_ON_ONCE(mem_regno > BPF_REG_4);
+ WARN_ON_ONCE(mem_argno > BPF_REG_3);
memset(&meta, 0, sizeof(meta));
@@ -7130,8 +7181,8 @@ static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg
mark_ptr_not_null_reg(mem_reg);
}
- err = check_mem_size_reg(env, mem_reg, size_reg, mem_regno, BPF_READ, true, &meta);
- err = err ?: check_mem_size_reg(env, mem_reg, size_reg, mem_regno, BPF_WRITE, true, &meta);
+ err = check_mem_size_reg(env, mem_reg, size_reg, argno, BPF_READ, true, &meta);
+ err = err ?: check_mem_size_reg(env, mem_reg, size_reg, argno, BPF_WRITE, true, &meta);
if (may_be_null)
*mem_reg = saved_reg;
@@ -7167,7 +7218,7 @@ enum {
* env->cur_state->active_locks remembers which map value element or allocated
* object got locked and clears it after bpf_spin_unlock.
*/
-static int process_spin_lock(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int flags)
+static int process_spin_lock(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno, int flags)
{
bool is_lock = flags & PROCESS_SPIN_LOCK, is_res_lock = flags & PROCESS_RES_LOCK;
const char *lock_str = is_res_lock ? "bpf_res_spin" : "bpf_spin";
@@ -7183,8 +7234,8 @@ static int process_spin_lock(struct bpf_verifier_env *env, struct bpf_reg_state
if (!is_const) {
verbose(env,
- "R%d doesn't have constant offset. %s_lock has to be at the constant offset\n",
- regno, lock_str);
+ "%s doesn't have constant offset. %s_lock has to be at the constant offset\n",
+ reg_arg_name(env, argno), lock_str);
return -EINVAL;
}
if (reg->type == PTR_TO_MAP_VALUE) {
@@ -7283,7 +7334,7 @@ static int process_spin_lock(struct bpf_verifier_env *env, struct bpf_reg_state
}
/* Check if @regno is a pointer to a specific field in a map value */
-static int check_map_field_pointer(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
+static int check_map_field_pointer(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 argno,
enum btf_field_type field_type,
struct bpf_map_desc *map_desc)
{
@@ -7295,8 +7346,8 @@ static int check_map_field_pointer(struct bpf_verifier_env *env, struct bpf_reg_
if (!is_const) {
verbose(env,
- "R%d doesn't have constant offset. %s has to be at the constant offset\n",
- regno, struct_name);
+ "%s doesn't have constant offset. %s has to be at the constant offset\n",
+ reg_arg_name(env, argno), struct_name);
return -EINVAL;
}
if (!map->btf) {
@@ -7336,26 +7387,26 @@ static int check_map_field_pointer(struct bpf_verifier_env *env, struct bpf_reg_
return 0;
}
-static int process_timer_func(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
+static int process_timer_func(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno,
struct bpf_map_desc *map)
{
if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
verbose(env, "bpf_timer cannot be used for PREEMPT_RT.\n");
return -EOPNOTSUPP;
}
- return check_map_field_pointer(env, reg, regno, BPF_TIMER, map);
+ return check_map_field_pointer(env, reg, argno, BPF_TIMER, map);
}
-static int process_timer_helper(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
+static int process_timer_helper(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno,
struct bpf_call_arg_meta *meta)
{
- return process_timer_func(env, reg, regno, &meta->map);
+ return process_timer_func(env, reg, argno, &meta->map);
}
-static int process_timer_kfunc(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
+static int process_timer_kfunc(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno,
struct bpf_kfunc_call_arg_meta *meta)
{
- return process_timer_func(env, reg, regno, &meta->map);
+ return process_timer_func(env, reg, argno, &meta->map);
}
static int process_kptr_func(struct bpf_verifier_env *env, int regno,
@@ -7431,15 +7482,15 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
* Helpers which do not mutate the bpf_dynptr set MEM_RDONLY in their argument
* type, and declare it as 'const struct bpf_dynptr *' in their prototype.
*/
-static int process_dynptr_func(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int insn_idx,
+static int process_dynptr_func(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno, int insn_idx,
enum bpf_arg_type arg_type, int clone_ref_obj_id)
{
int err;
if (reg->type != PTR_TO_STACK && reg->type != CONST_PTR_TO_DYNPTR) {
verbose(env,
- "arg#%d expected pointer to stack or const struct bpf_dynptr\n",
- regno - 1);
+ "%s expected pointer to stack or const struct bpf_dynptr\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
@@ -7476,7 +7527,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, struct bpf_reg_stat
/* we write BPF_DW bits (8 bytes) at a time */
for (i = 0; i < BPF_DYNPTR_SIZE; i += 8) {
- err = check_mem_access(env, insn_idx, reg, regno,
+ err = check_mem_access(env, insn_idx, reg, argno,
i, BPF_DW, BPF_WRITE, -1, false, false);
if (err)
return err;
@@ -7491,17 +7542,16 @@ static int process_dynptr_func(struct bpf_verifier_env *env, struct bpf_reg_stat
}
if (!is_dynptr_reg_valid_init(env, reg)) {
- verbose(env,
- "Expected an initialized dynptr as arg #%d\n",
- regno - 1);
+ verbose(env, "Expected an initialized dynptr as %s\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
/* Fold modifiers (in this case, MEM_RDONLY) when checking expected type */
if (!is_dynptr_type_expected(env, reg, arg_type & ~MEM_RDONLY)) {
- verbose(env,
- "Expected a dynptr of type %s as arg #%d\n",
- dynptr_type_str(arg_to_dynptr_type(arg_type)), regno - 1);
+ verbose(env, "Expected a dynptr of type %s as %s\n",
+ dynptr_type_str(arg_to_dynptr_type(arg_type)),
+ reg_arg_name(env, argno));
return -EINVAL;
}
@@ -7546,14 +7596,16 @@ static bool is_kfunc_arg_iter(struct bpf_kfunc_call_arg_meta *meta, int arg_idx,
return btf_param_match_suffix(meta->btf, arg, "__iter");
}
-static int process_iter_arg(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int insn_idx,
+static int process_iter_arg(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno, int insn_idx,
struct bpf_kfunc_call_arg_meta *meta)
{
const struct btf_type *t;
+ u32 arg_idx = arg_idx_from_argno(argno);
int spi, err, i, nr_slots, btf_id;
if (reg->type != PTR_TO_STACK) {
- verbose(env, "arg#%d expected pointer to an iterator on stack\n", regno - 1);
+ verbose(env, "%s expected pointer to an iterator on stack\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
@@ -7563,9 +7615,10 @@ static int process_iter_arg(struct bpf_verifier_env *env, struct bpf_reg_state *
* to any kfunc, if arg has "__iter" suffix, we need to be a bit more
* conservative here.
*/
- btf_id = btf_check_iter_arg(meta->btf, meta->func_proto, regno - 1);
+ btf_id = btf_check_iter_arg(meta->btf, meta->func_proto, arg_idx);
if (btf_id < 0) {
- verbose(env, "expected valid iter pointer as arg #%d\n", regno - 1);
+ verbose(env, "expected valid iter pointer as %s\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
t = btf_type_by_id(meta->btf, btf_id);
@@ -7574,13 +7627,13 @@ static int process_iter_arg(struct bpf_verifier_env *env, struct bpf_reg_state *
if (is_iter_new_kfunc(meta)) {
/* bpf_iter_<type>_new() expects pointer to uninit iter state */
if (!is_iter_reg_valid_uninit(env, reg, nr_slots)) {
- verbose(env, "expected uninitialized iter_%s as arg #%d\n",
- iter_type_str(meta->btf, btf_id), regno - 1);
+ verbose(env, "expected uninitialized iter_%s as %s\n",
+ iter_type_str(meta->btf, btf_id), reg_arg_name(env, argno));
return -EINVAL;
}
for (i = 0; i < nr_slots * 8; i += BPF_REG_SIZE) {
- err = check_mem_access(env, insn_idx, reg, regno,
+ err = check_mem_access(env, insn_idx, reg, argno,
i, BPF_DW, BPF_WRITE, -1, false, false);
if (err)
return err;
@@ -7598,8 +7651,8 @@ static int process_iter_arg(struct bpf_verifier_env *env, struct bpf_reg_state *
case 0:
break;
case -EINVAL:
- verbose(env, "expected an initialized iter_%s as arg #%d\n",
- iter_type_str(meta->btf, btf_id), regno - 1);
+ verbose(env, "expected an initialized iter_%s as %s\n",
+ iter_type_str(meta->btf, btf_id), reg_arg_name(env, argno));
return err;
case -EPROTO:
verbose(env, "expected an RCU CS when using %s\n", meta->func_name);
@@ -8019,7 +8072,7 @@ static const struct bpf_reg_types *compatible_reg_types[__BPF_ARG_TYPE_MAX] = {
[ARG_PTR_TO_DYNPTR] = &dynptr_types,
};
-static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
+static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 argno,
enum bpf_arg_type arg_type,
const u32 *arg_btf_id,
struct bpf_call_arg_meta *meta)
@@ -8054,7 +8107,8 @@ static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *re
type &= ~DYNPTR_TYPE_FLAG_MASK;
/* Local kptr types are allowed as the source argument of bpf_kptr_xchg */
- if (meta->func_id == BPF_FUNC_kptr_xchg && type_is_alloc(type) && regno == BPF_REG_2) {
+ if (meta->func_id == BPF_FUNC_kptr_xchg && type_is_alloc(type) &&
+ !is_stack_argno(argno) && argno == BPF_REG_2) {
type &= ~MEM_ALLOC;
type &= ~MEM_PERCPU;
}
@@ -8068,7 +8122,7 @@ static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *re
goto found;
}
- verbose(env, "R%d type=%s expected=", regno, reg_type_str(env, reg->type));
+ verbose(env, "%s type=%s expected=", reg_arg_name(env, argno), reg_type_str(env, reg->type));
for (j = 0; j + 1 < i; j++)
verbose(env, "%s, ", reg_type_str(env, compatible->types[j]));
verbose(env, "%s\n", reg_type_str(env, compatible->types[j]));
@@ -8081,9 +8135,9 @@ static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *re
if (compatible == &mem_types) {
if (!(arg_type & MEM_RDONLY)) {
verbose(env,
- "%s() may write into memory pointed by R%d type=%s\n",
+ "%s() may write into memory pointed by %s type=%s\n",
func_id_name(meta->func_id),
- regno, reg_type_str(env, reg->type));
+ reg_arg_name(env, argno), reg_type_str(env, reg->type));
return -EACCES;
}
return 0;
@@ -8106,7 +8160,8 @@ static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *re
if (type_may_be_null(reg->type) &&
(!type_may_be_null(arg_type) || arg_type_is_release(arg_type))) {
- verbose(env, "Possibly NULL pointer passed to helper arg%d\n", regno);
+ verbose(env, "Possibly NULL pointer passed to helper %s\n",
+ reg_arg_name(env, argno));
return -EACCES;
}
@@ -8119,25 +8174,26 @@ static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *re
}
if (meta->func_id == BPF_FUNC_kptr_xchg) {
- if (map_kptr_match_type(env, meta->kptr_field, reg, regno))
+ if (map_kptr_match_type(env, meta->kptr_field, reg, argno))
return -EACCES;
} else {
if (arg_btf_id == BPF_PTR_POISON) {
verbose(env, "verifier internal error:");
- verbose(env, "R%d has non-overwritten BPF_PTR_POISON type\n",
- regno);
+ verbose(env, "%s has non-overwritten BPF_PTR_POISON type\n",
+ reg_arg_name(env, argno));
return -EACCES;
}
- err = __check_ptr_off_reg(env, reg, regno, true);
+ err = __check_ptr_off_reg(env, reg, argno, true);
if (err)
return err;
if (!btf_struct_ids_match(&env->log, reg->btf, reg->btf_id,
reg->var_off.value, btf_vmlinux, *arg_btf_id,
strict_type_match)) {
- verbose(env, "R%d is of type %s but %s is expected\n",
- regno, btf_type_name(reg->btf, reg->btf_id),
+ verbose(env, "%s is of type %s but %s is expected\n",
+ reg_arg_name(env, argno),
+ btf_type_name(reg->btf, reg->btf_id),
btf_type_name(btf_vmlinux, *arg_btf_id));
return -EACCES;
}
@@ -8154,8 +8210,9 @@ static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *re
return -EFAULT;
}
/* Check if local kptr in src arg matches kptr in dst arg */
- if (meta->func_id == BPF_FUNC_kptr_xchg && regno == BPF_REG_2) {
- if (map_kptr_match_type(env, meta->kptr_field, reg, regno))
+ if (meta->func_id == BPF_FUNC_kptr_xchg &&
+ !is_stack_argno(argno) && argno == BPF_REG_2) {
+ if (map_kptr_match_type(env, meta->kptr_field, reg, argno))
return -EACCES;
}
break;
@@ -8189,7 +8246,7 @@ reg_find_field_offset(const struct bpf_reg_state *reg, s32 off, u32 fields)
}
static int check_func_arg_reg_off(struct bpf_verifier_env *env,
- const struct bpf_reg_state *reg, int regno,
+ const struct bpf_reg_state *reg, int argno,
enum bpf_arg_type arg_type)
{
u32 type = reg->type;
@@ -8215,8 +8272,8 @@ static int check_func_arg_reg_off(struct bpf_verifier_env *env,
* to give the user a better error message.
*/
if (!tnum_is_const(reg->var_off) || reg->var_off.value != 0) {
- verbose(env, "R%d must have zero offset when passed to release func or trusted arg to kfunc\n",
- regno);
+ verbose(env, "%s must have zero offset when passed to release func or trusted arg to kfunc\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
}
@@ -8252,7 +8309,7 @@ static int check_func_arg_reg_off(struct bpf_verifier_env *env,
* cases. var_off always must be 0 for PTR_TO_BTF_ID, hence we
* still need to do checks instead of returning.
*/
- return __check_ptr_off_reg(env, reg, regno, true);
+ return __check_ptr_off_reg(env, reg, argno, true);
case PTR_TO_CTX:
/*
* Allow fixed and variable offsets for syscall context, but
@@ -8264,7 +8321,7 @@ static int check_func_arg_reg_off(struct bpf_verifier_env *env,
return 0;
fallthrough;
default:
- return __check_ptr_off_reg(env, reg, regno, false);
+ return __check_ptr_off_reg(env, reg, argno, false);
}
}
@@ -8334,8 +8391,8 @@ static enum bpf_dynptr_type dynptr_get_type(struct bpf_verifier_env *env,
return state->stack[spi].spilled_ptr.dynptr.type;
}
-static int check_reg_const_str(struct bpf_verifier_env *env,
- struct bpf_reg_state *reg, u32 regno)
+static int check_arg_const_str(struct bpf_verifier_env *env,
+ struct bpf_reg_state *reg, u32 argno)
{
struct bpf_map *map = reg->map_ptr;
int err;
@@ -8347,17 +8404,18 @@ static int check_reg_const_str(struct bpf_verifier_env *env,
return -EINVAL;
if (map->map_type == BPF_MAP_TYPE_INSN_ARRAY) {
- verbose(env, "R%d points to insn_array map which cannot be used as const string\n", regno);
+ verbose(env, "%s points to insn_array map which cannot be used as const string\n",
+ reg_arg_name(env, argno));
return -EACCES;
}
if (!bpf_map_is_rdonly(map)) {
- verbose(env, "R%d does not point to a readonly map'\n", regno);
+ verbose(env, "%s does not point to a readonly map'\n", reg_arg_name(env, argno));
return -EACCES;
}
if (!tnum_is_const(reg->var_off)) {
- verbose(env, "R%d is not a constant address'\n", regno);
+ verbose(env, "%s is not a constant address'\n", reg_arg_name(env, argno));
return -EACCES;
}
@@ -8366,7 +8424,7 @@ static int check_reg_const_str(struct bpf_verifier_env *env,
return -EACCES;
}
- err = check_map_access(env, reg, regno, 0,
+ err = check_map_access(env, reg, argno, 0,
map->value_size - reg->var_off.value, false,
ACCESS_HELPER);
if (err)
@@ -8705,7 +8763,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
break;
case ARG_PTR_TO_CONST_STR:
{
- err = check_reg_const_str(env, reg, regno);
+ err = check_arg_const_str(env, reg, regno);
if (err)
return err;
break;
@@ -9294,13 +9352,14 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
* verifier sees.
*/
for (i = 0; i < sub->arg_cnt; i++) {
+ u32 argno = make_argno(i);
u32 regno = i + 1;
struct bpf_reg_state *reg = &regs[regno];
struct bpf_subprog_arg_info *arg = &sub->args[i];
if (arg->arg_type == ARG_ANYTHING) {
if (reg->type != SCALAR_VALUE) {
- bpf_log(log, "R%d is not a scalar\n", regno);
+ bpf_log(log, "%s is not a scalar\n", reg_arg_name(env, argno));
return -EINVAL;
}
} else if (arg->arg_type & PTR_UNTRUSTED) {
@@ -9310,24 +9369,26 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
* invalid memory access.
*/
} else if (arg->arg_type == ARG_PTR_TO_CTX) {
- ret = check_func_arg_reg_off(env, reg, regno, ARG_PTR_TO_CTX);
+ ret = check_func_arg_reg_off(env, reg, argno, ARG_PTR_TO_CTX);
if (ret < 0)
return ret;
/* If function expects ctx type in BTF check that caller
* is passing PTR_TO_CTX.
*/
if (reg->type != PTR_TO_CTX) {
- bpf_log(log, "arg#%d expects pointer to ctx\n", i);
+ bpf_log(log, "%s expects pointer to ctx\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
} else if (base_type(arg->arg_type) == ARG_PTR_TO_MEM) {
- ret = check_func_arg_reg_off(env, reg, regno, ARG_DONTCARE);
+ ret = check_func_arg_reg_off(env, reg, argno, ARG_DONTCARE);
if (ret < 0)
return ret;
- if (check_mem_reg(env, reg, regno, arg->mem_size))
+ if (check_mem_reg(env, reg, argno, arg->mem_size))
return -EINVAL;
if (!(arg->arg_type & PTR_MAYBE_NULL) && (reg->type & PTR_MAYBE_NULL)) {
- bpf_log(log, "arg#%d is expected to be non-NULL\n", i);
+ bpf_log(log, "%s is expected to be non-NULL\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
} else if (base_type(arg->arg_type) == ARG_PTR_TO_ARENA) {
@@ -9339,15 +9400,16 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
* run-time debug nightmare.
*/
if (reg->type != PTR_TO_ARENA && reg->type != SCALAR_VALUE) {
- bpf_log(log, "R%d is not a pointer to arena or scalar.\n", regno);
+ bpf_log(log, "%s is not a pointer to arena or scalar.\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
} else if (arg->arg_type == (ARG_PTR_TO_DYNPTR | MEM_RDONLY)) {
- ret = check_func_arg_reg_off(env, reg, regno, ARG_PTR_TO_DYNPTR);
+ ret = check_func_arg_reg_off(env, reg, argno, ARG_PTR_TO_DYNPTR);
if (ret)
return ret;
- ret = process_dynptr_func(env, reg, regno, -1, arg->arg_type, 0);
+ ret = process_dynptr_func(env, reg, argno, -1, arg->arg_type, 0);
if (ret)
return ret;
} else if (base_type(arg->arg_type) == ARG_PTR_TO_BTF_ID) {
@@ -9358,12 +9420,13 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
continue;
memset(&meta, 0, sizeof(meta)); /* leave func_id as zero */
- err = check_reg_type(env, reg, regno, arg->arg_type, &arg->btf_id, &meta);
- err = err ?: check_func_arg_reg_off(env, reg, regno, arg->arg_type);
+ err = check_reg_type(env, reg, argno, arg->arg_type, &arg->btf_id, &meta);
+ err = err ?: check_func_arg_reg_off(env, reg, argno, arg->arg_type);
if (err)
return err;
} else {
- verifier_bug(env, "unrecognized arg#%d type %d", i, arg->arg_type);
+ verifier_bug(env, "unrecognized %s type %d",
+ reg_arg_name(env, argno), arg->arg_type);
return -EFAULT;
}
}
@@ -11406,8 +11469,9 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
if ((base_type(reg->type) == PTR_TO_BTF_ID || reg2btf_ids[base_type(reg->type)])) {
if (!btf_type_is_struct(ref_t)) {
- verbose(env, "kernel function %s args#%d pointer type %s %s is not supported\n",
- meta->func_name, argno, btf_type_str(ref_t), ref_tname);
+ verbose(env, "kernel function %s %s pointer type %s %s is not supported\n",
+ meta->func_name, reg_arg_name(env, make_argno(argno)),
+ btf_type_str(ref_t), ref_tname);
return -EINVAL;
}
return KF_ARG_PTR_TO_BTF_ID;
@@ -11423,8 +11487,9 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
*/
if (!btf_type_is_scalar(ref_t) && !__btf_type_is_scalar_struct(env, meta->btf, ref_t, 0) &&
(arg_mem_size ? !btf_type_is_void(ref_t) : 1)) {
- verbose(env, "arg#%d pointer type %s %s must point to %sscalar, or struct with scalar\n",
- argno, btf_type_str(ref_t), ref_tname, arg_mem_size ? "void, " : "");
+ verbose(env, "%s pointer type %s %s must point to %sscalar, or struct with scalar\n",
+ reg_arg_name(env, make_argno(argno)),
+ btf_type_str(ref_t), ref_tname, arg_mem_size ? "void, " : "");
return -EINVAL;
}
return arg_mem_size ? KF_ARG_PTR_TO_MEM_SIZE : KF_ARG_PTR_TO_MEM;
@@ -11493,15 +11558,16 @@ static int process_kf_arg_ptr_to_btf_id(struct bpf_verifier_env *env,
*/
taking_projection = btf_is_projection_of(ref_tname, reg_ref_tname);
if (!taking_projection && !struct_same) {
- verbose(env, "kernel function %s args#%d expected pointer to %s %s but R%d has a pointer to %s %s\n",
- meta->func_name, argno, btf_type_str(ref_t), ref_tname, argno + 1,
+ verbose(env, "kernel function %s %s expected pointer to %s %s but %s has a pointer to %s %s\n",
+ meta->func_name, reg_arg_name(env, make_argno(argno)),
+ btf_type_str(ref_t), ref_tname, reg_arg_name(env, make_argno(argno)),
btf_type_str(reg_ref_t), reg_ref_tname);
return -EINVAL;
}
return 0;
}
-static int process_irq_flag(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
+static int process_irq_flag(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno,
struct bpf_kfunc_call_arg_meta *meta)
{
int err, kfunc_class = IRQ_NATIVE_KFUNC;
@@ -11524,11 +11590,13 @@ static int process_irq_flag(struct bpf_verifier_env *env, struct bpf_reg_state *
if (irq_save) {
if (!is_irq_flag_reg_valid_uninit(env, reg)) {
- verbose(env, "expected uninitialized irq flag as arg#%d\n", regno - 1);
+ verbose(env, "expected uninitialized irq flag as %s\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
- err = check_mem_access(env, env->insn_idx, reg, regno, 0, BPF_DW, BPF_WRITE, -1, false, false);
+ err = check_mem_access(env, env->insn_idx, reg, argno, 0, BPF_DW,
+ BPF_WRITE, -1, false, false);
if (err)
return err;
@@ -11538,7 +11606,8 @@ static int process_irq_flag(struct bpf_verifier_env *env, struct bpf_reg_state *
} else {
err = is_irq_flag_reg_valid_init(env, reg);
if (err) {
- verbose(env, "expected an initialized irq flag as arg#%d\n", regno - 1);
+ verbose(env, "expected an initialized irq flag as %s\n",
+ reg_arg_name(env, argno));
return err;
}
@@ -11829,7 +11898,7 @@ static bool check_kfunc_is_graph_node_api(struct bpf_verifier_env *env,
static int
__process_kf_arg_ptr_to_graph_root(struct bpf_verifier_env *env,
- struct bpf_reg_state *reg, u32 regno,
+ struct bpf_reg_state *reg, u32 argno,
struct bpf_kfunc_call_arg_meta *meta,
enum btf_field_type head_field_type,
struct btf_field **head_field)
@@ -11850,8 +11919,8 @@ __process_kf_arg_ptr_to_graph_root(struct bpf_verifier_env *env,
head_type_name = btf_field_type_name(head_field_type);
if (!tnum_is_const(reg->var_off)) {
verbose(env,
- "R%d doesn't have constant offset. %s has to be at the constant offset\n",
- regno, head_type_name);
+ "%s doesn't have constant offset. %s has to be at the constant offset\n",
+ reg_arg_name(env, argno), head_type_name);
return -EINVAL;
}
@@ -11879,24 +11948,24 @@ __process_kf_arg_ptr_to_graph_root(struct bpf_verifier_env *env,
}
static int process_kf_arg_ptr_to_list_head(struct bpf_verifier_env *env,
- struct bpf_reg_state *reg, u32 regno,
+ struct bpf_reg_state *reg, u32 argno,
struct bpf_kfunc_call_arg_meta *meta)
{
- return __process_kf_arg_ptr_to_graph_root(env, reg, regno, meta, BPF_LIST_HEAD,
+ return __process_kf_arg_ptr_to_graph_root(env, reg, argno, meta, BPF_LIST_HEAD,
&meta->arg_list_head.field);
}
static int process_kf_arg_ptr_to_rbtree_root(struct bpf_verifier_env *env,
- struct bpf_reg_state *reg, u32 regno,
+ struct bpf_reg_state *reg, u32 argno,
struct bpf_kfunc_call_arg_meta *meta)
{
- return __process_kf_arg_ptr_to_graph_root(env, reg, regno, meta, BPF_RB_ROOT,
+ return __process_kf_arg_ptr_to_graph_root(env, reg, argno, meta, BPF_RB_ROOT,
&meta->arg_rbtree_root.field);
}
static int
__process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
- struct bpf_reg_state *reg, u32 regno,
+ struct bpf_reg_state *reg, u32 argno,
struct bpf_kfunc_call_arg_meta *meta,
enum btf_field_type head_field_type,
enum btf_field_type node_field_type,
@@ -11918,8 +11987,8 @@ __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
node_type_name = btf_field_type_name(node_field_type);
if (!tnum_is_const(reg->var_off)) {
verbose(env,
- "R%d doesn't have constant offset. %s has to be at the constant offset\n",
- regno, node_type_name);
+ "%s doesn't have constant offset. %s has to be at the constant offset\n",
+ reg_arg_name(env, argno), node_type_name);
return -EINVAL;
}
@@ -11960,19 +12029,19 @@ __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
}
static int process_kf_arg_ptr_to_list_node(struct bpf_verifier_env *env,
- struct bpf_reg_state *reg, u32 regno,
+ struct bpf_reg_state *reg, u32 argno,
struct bpf_kfunc_call_arg_meta *meta)
{
- return __process_kf_arg_ptr_to_graph_node(env, reg, regno, meta,
+ return __process_kf_arg_ptr_to_graph_node(env, reg, argno, meta,
BPF_LIST_HEAD, BPF_LIST_NODE,
&meta->arg_list_head.field);
}
static int process_kf_arg_ptr_to_rbtree_node(struct bpf_verifier_env *env,
- struct bpf_reg_state *reg, u32 regno,
+ struct bpf_reg_state *reg, u32 argno,
struct bpf_kfunc_call_arg_meta *meta)
{
- return __process_kf_arg_ptr_to_graph_node(env, reg, regno, meta,
+ return __process_kf_arg_ptr_to_graph_node(env, reg, argno, meta,
BPF_RB_ROOT, BPF_RB_NODE,
&meta->arg_rbtree_root.field);
}
@@ -12024,6 +12093,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[i + 1];
const struct btf_type *t, *ref_t, *resolve_ret;
enum bpf_arg_type arg_type = ARG_DONTCARE;
+ u32 argno = make_argno(i);
u32 regno = i + 1, ref_id, type_size;
bool is_ret_buf_sz = false;
int kf_arg_type;
@@ -12046,7 +12116,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
if (btf_type_is_scalar(t)) {
if (reg->type != SCALAR_VALUE) {
- verbose(env, "R%d is not a scalar\n", regno);
+ verbose(env, "%s is not a scalar\n", reg_arg_name(env, argno));
return -EINVAL;
}
@@ -12056,7 +12126,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
return -EFAULT;
}
if (!tnum_is_const(reg->var_off)) {
- verbose(env, "R%d must be a known constant\n", regno);
+ verbose(env, "%s must be a known constant\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
ret = mark_chain_precision(env, regno);
@@ -12078,7 +12149,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
}
if (!tnum_is_const(reg->var_off)) {
- verbose(env, "R%d is not a const\n", regno);
+ verbose(env, "%s is not a const\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
@@ -12091,20 +12163,22 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
}
if (!btf_type_is_ptr(t)) {
- verbose(env, "Unrecognized arg#%d type %s\n", i, btf_type_str(t));
+ verbose(env, "Unrecognized %s type %s\n",
+ reg_arg_name(env, argno), btf_type_str(t));
return -EINVAL;
}
if ((bpf_register_is_null(reg) || type_may_be_null(reg->type)) &&
!is_kfunc_arg_nullable(meta->btf, &args[i])) {
- verbose(env, "Possibly NULL pointer passed to trusted arg%d\n", i);
+ verbose(env, "Possibly NULL pointer passed to trusted %s\n",
+ reg_arg_name(env, argno));
return -EACCES;
}
if (reg->ref_obj_id) {
if (is_kfunc_release(meta) && meta->ref_obj_id) {
- verifier_bug(env, "more than one arg with ref_obj_id R%d %u %u",
- regno, reg->ref_obj_id,
+ verifier_bug(env, "more than one arg with ref_obj_id %s %u %u",
+ reg_arg_name(env, argno), reg->ref_obj_id,
meta->ref_obj_id);
return -EFAULT;
}
@@ -12125,7 +12199,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
continue;
case KF_ARG_PTR_TO_MAP:
if (!reg->map_ptr) {
- verbose(env, "pointer in R%d isn't map pointer\n", regno);
+ verbose(env, "pointer in %s isn't map pointer\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
if (meta->map.ptr && (reg->map_ptr->record->wq_off >= 0 ||
@@ -12163,11 +12238,13 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
case KF_ARG_PTR_TO_BTF_ID:
if (!is_trusted_reg(reg)) {
if (!is_kfunc_rcu(meta)) {
- verbose(env, "R%d must be referenced or trusted\n", regno);
+ verbose(env, "%s must be referenced or trusted\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
if (!is_rcu_reg(reg)) {
- verbose(env, "R%d must be a rcu pointer\n", regno);
+ verbose(env, "%s must be a rcu pointer\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
}
@@ -12199,15 +12276,15 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
if (is_kfunc_release(meta) && reg->ref_obj_id)
arg_type |= OBJ_RELEASE;
- ret = check_func_arg_reg_off(env, reg, regno, arg_type);
+ ret = check_func_arg_reg_off(env, reg, argno, arg_type);
if (ret < 0)
return ret;
switch (kf_arg_type) {
case KF_ARG_PTR_TO_CTX:
if (reg->type != PTR_TO_CTX) {
- verbose(env, "arg#%d expected pointer to ctx, but got %s\n",
- i, reg_type_str(env, reg->type));
+ verbose(env, "%s expected pointer to ctx, but got %s\n",
+ reg_arg_name(env, argno), reg_type_str(env, reg->type));
return -EINVAL;
}
@@ -12221,16 +12298,19 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
case KF_ARG_PTR_TO_ALLOC_BTF_ID:
if (reg->type == (PTR_TO_BTF_ID | MEM_ALLOC)) {
if (!is_bpf_obj_drop_kfunc(meta->func_id)) {
- verbose(env, "arg#%d expected for bpf_obj_drop()\n", i);
+ verbose(env, "%s expected for bpf_obj_drop()\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
} else if (reg->type == (PTR_TO_BTF_ID | MEM_ALLOC | MEM_PERCPU)) {
if (!is_bpf_percpu_obj_drop_kfunc(meta->func_id)) {
- verbose(env, "arg#%d expected for bpf_percpu_obj_drop()\n", i);
+ verbose(env, "%s expected for bpf_percpu_obj_drop()\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
} else {
- verbose(env, "arg#%d expected pointer to allocated object\n", i);
+ verbose(env, "%s expected pointer to allocated object\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
if (!reg->ref_obj_id) {
@@ -12281,7 +12361,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
}
}
- ret = process_dynptr_func(env, reg, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id);
+ ret = process_dynptr_func(env, reg, argno, insn_idx,
+ dynptr_arg_type, clone_ref_obj_id);
if (ret < 0)
return ret;
@@ -12306,55 +12387,59 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
return -EINVAL;
}
}
- ret = process_iter_arg(env, reg, regno, insn_idx, meta);
+ ret = process_iter_arg(env, reg, argno, insn_idx, meta);
if (ret < 0)
return ret;
break;
case KF_ARG_PTR_TO_LIST_HEAD:
if (reg->type != PTR_TO_MAP_VALUE &&
reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) {
- verbose(env, "arg#%d expected pointer to map value or allocated object\n", i);
+ verbose(env, "%s expected pointer to map value or allocated object\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
if (reg->type == (PTR_TO_BTF_ID | MEM_ALLOC) && !reg->ref_obj_id) {
verbose(env, "allocated object must be referenced\n");
return -EINVAL;
}
- ret = process_kf_arg_ptr_to_list_head(env, reg, regno, meta);
+ ret = process_kf_arg_ptr_to_list_head(env, reg, argno, meta);
if (ret < 0)
return ret;
break;
case KF_ARG_PTR_TO_RB_ROOT:
if (reg->type != PTR_TO_MAP_VALUE &&
reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) {
- verbose(env, "arg#%d expected pointer to map value or allocated object\n", i);
+ verbose(env, "%s expected pointer to map value or allocated object\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
if (reg->type == (PTR_TO_BTF_ID | MEM_ALLOC) && !reg->ref_obj_id) {
verbose(env, "allocated object must be referenced\n");
return -EINVAL;
}
- ret = process_kf_arg_ptr_to_rbtree_root(env, reg, regno, meta);
+ ret = process_kf_arg_ptr_to_rbtree_root(env, reg, argno, meta);
if (ret < 0)
return ret;
break;
case KF_ARG_PTR_TO_LIST_NODE:
if (reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) {
- verbose(env, "arg#%d expected pointer to allocated object\n", i);
+ verbose(env, "%s expected pointer to allocated object\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
if (!reg->ref_obj_id) {
verbose(env, "allocated object must be referenced\n");
return -EINVAL;
}
- ret = process_kf_arg_ptr_to_list_node(env, reg, regno, meta);
+ ret = process_kf_arg_ptr_to_list_node(env, reg, argno, meta);
if (ret < 0)
return ret;
break;
case KF_ARG_PTR_TO_RB_NODE:
if (is_bpf_rbtree_add_kfunc(meta->func_id)) {
if (reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) {
- verbose(env, "arg#%d expected pointer to allocated object\n", i);
+ verbose(env, "%s expected pointer to allocated object\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
if (!reg->ref_obj_id) {
@@ -12372,7 +12457,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
}
}
- ret = process_kf_arg_ptr_to_rbtree_node(env, reg, regno, meta);
+ ret = process_kf_arg_ptr_to_rbtree_node(env, reg, argno, meta);
if (ret < 0)
return ret;
break;
@@ -12387,7 +12472,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
if ((base_type(reg->type) != PTR_TO_BTF_ID ||
(bpf_type_has_unsafe_modifiers(reg->type) && !is_rcu_reg(reg))) &&
!reg2btf_ids[base_type(reg->type)]) {
- verbose(env, "arg#%d is %s ", i, reg_type_str(env, reg->type));
+ verbose(env, "%s is %s ", reg_arg_name(env, argno),
+ reg_type_str(env, reg->type));
verbose(env, "expected %s or socket\n",
reg_type_str(env, base_type(reg->type) |
(type_flag(reg->type) & BPF_REG_TRUSTED_MODIFIERS)));
@@ -12400,11 +12486,12 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
case KF_ARG_PTR_TO_MEM:
resolve_ret = btf_resolve_size(btf, ref_t, &type_size);
if (IS_ERR(resolve_ret)) {
- verbose(env, "arg#%d reference type('%s %s') size cannot be determined: %ld\n",
- i, btf_type_str(ref_t), ref_tname, PTR_ERR(resolve_ret));
+ verbose(env, "%s reference type('%s %s') size cannot be determined: %ld\n",
+ reg_arg_name(env, argno), btf_type_str(ref_t),
+ ref_tname, PTR_ERR(resolve_ret));
return -EINVAL;
}
- ret = check_mem_reg(env, reg, regno, type_size);
+ ret = check_mem_reg(env, reg, argno, type_size);
if (ret < 0)
return ret;
break;
@@ -12416,9 +12503,11 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
const struct btf_param *size_arg = &args[i + 1];
if (!bpf_register_is_null(buff_reg) || !is_kfunc_arg_nullable(meta->btf, buff_arg)) {
- ret = check_kfunc_mem_size_reg(env, buff_reg, size_reg, regno);
+ ret = check_kfunc_mem_size_reg(env, buff_reg, size_reg, i);
if (ret < 0) {
- verbose(env, "arg#%d arg#%d memory, len pair leads to invalid memory access\n", i, i + 1);
+ verbose(env, "%s and ", reg_arg_name(env, argno));
+ verbose(env, "%s memory, len pair leads to invalid memory access\n",
+ reg_arg_name(env, next_argno(argno)));
return ret;
}
}
@@ -12429,7 +12518,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
return -EFAULT;
}
if (!tnum_is_const(size_reg->var_off)) {
- verbose(env, "R%d must be a known constant\n", regno + 1);
+ verbose(env, "%s must be a known constant\n",
+ reg_arg_name(env, next_argno(argno)));
return -EINVAL;
}
meta->arg_constant.found = true;
@@ -12442,14 +12532,16 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
}
case KF_ARG_PTR_TO_CALLBACK:
if (reg->type != PTR_TO_FUNC) {
- verbose(env, "arg%d expected pointer to func\n", i);
+ verbose(env, "%s expected pointer to func\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
meta->subprogno = reg->subprogno;
break;
case KF_ARG_PTR_TO_REFCOUNTED_KPTR:
if (!type_is_ptr_alloc_obj(reg->type)) {
- verbose(env, "arg#%d is neither owning or non-owning ref\n", i);
+ verbose(env, "%s is neither owning or non-owning ref\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
if (!type_is_non_owning_ref(reg->type))
@@ -12462,7 +12554,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
}
if (rec->refcount_off < 0) {
- verbose(env, "arg#%d doesn't point to a type with bpf_refcount field\n", i);
+ verbose(env, "%s doesn't point to a type with bpf_refcount field\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
@@ -12471,46 +12564,51 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
break;
case KF_ARG_PTR_TO_CONST_STR:
if (reg->type != PTR_TO_MAP_VALUE) {
- verbose(env, "arg#%d doesn't point to a const string\n", i);
+ verbose(env, "%s doesn't point to a const string\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
- ret = check_reg_const_str(env, reg, regno);
+ ret = check_arg_const_str(env, reg, argno);
if (ret)
return ret;
break;
case KF_ARG_PTR_TO_WORKQUEUE:
if (reg->type != PTR_TO_MAP_VALUE) {
- verbose(env, "arg#%d doesn't point to a map value\n", i);
+ verbose(env, "%s doesn't point to a map value\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
- ret = check_map_field_pointer(env, reg, regno, BPF_WORKQUEUE, &meta->map);
+ ret = check_map_field_pointer(env, reg, argno, BPF_WORKQUEUE, &meta->map);
if (ret < 0)
return ret;
break;
case KF_ARG_PTR_TO_TIMER:
if (reg->type != PTR_TO_MAP_VALUE) {
- verbose(env, "arg#%d doesn't point to a map value\n", i);
+ verbose(env, "%s doesn't point to a map value\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
- ret = process_timer_kfunc(env, reg, regno, meta);
+ ret = process_timer_kfunc(env, reg, argno, meta);
if (ret < 0)
return ret;
break;
case KF_ARG_PTR_TO_TASK_WORK:
if (reg->type != PTR_TO_MAP_VALUE) {
- verbose(env, "arg#%d doesn't point to a map value\n", i);
+ verbose(env, "%s doesn't point to a map value\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
- ret = check_map_field_pointer(env, reg, regno, BPF_TASK_WORK, &meta->map);
+ ret = check_map_field_pointer(env, reg, argno, BPF_TASK_WORK, &meta->map);
if (ret < 0)
return ret;
break;
case KF_ARG_PTR_TO_IRQ_FLAG:
if (reg->type != PTR_TO_STACK) {
- verbose(env, "arg#%d doesn't point to an irq flag on stack\n", i);
+ verbose(env, "%s doesn't point to an irq flag on stack\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
- ret = process_irq_flag(env, reg, regno, meta);
+ ret = process_irq_flag(env, reg, argno, meta);
if (ret < 0)
return ret;
break;
@@ -12519,7 +12617,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
int flags = PROCESS_RES_LOCK;
if (reg->type != PTR_TO_MAP_VALUE && reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) {
- verbose(env, "arg#%d doesn't point to map value or allocated object\n", i);
+ verbose(env, "%s doesn't point to map value or allocated object\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
@@ -12531,7 +12630,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
if (meta->func_id == special_kfunc_list[KF_bpf_res_spin_lock_irqsave] ||
meta->func_id == special_kfunc_list[KF_bpf_res_spin_unlock_irqrestore])
flags |= PROCESS_LOCK_IRQ;
- ret = process_spin_lock(env, reg, regno, flags);
+ ret = process_spin_lock(env, reg, argno, flags);
if (ret < 0)
return ret;
break;
@@ -18747,7 +18846,7 @@ static int do_check_common(struct bpf_verifier_env *env, int subprog)
mark_reg_unknown(env, regs, i);
} else {
verifier_bug(env, "unhandled arg#%d type %d",
- i - BPF_REG_1, arg->arg_type);
+ i - BPF_REG_1 + 1, arg->arg_type);
ret = -EFAULT;
goto out;
}
diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_nf.c b/tools/testing/selftests/bpf/prog_tests/bpf_nf.c
index 215878ea04de..b33dba4b126e 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_nf.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_nf.c
@@ -11,18 +11,18 @@ struct {
const char *prog_name;
const char *err_msg;
} test_bpf_nf_fail_tests[] = {
- { "alloc_release", "kernel function bpf_ct_release args#0 expected pointer to STRUCT nf_conn but" },
- { "insert_insert", "kernel function bpf_ct_insert_entry args#0 expected pointer to STRUCT nf_conn___init but" },
- { "lookup_insert", "kernel function bpf_ct_insert_entry args#0 expected pointer to STRUCT nf_conn___init but" },
- { "set_timeout_after_insert", "kernel function bpf_ct_set_timeout args#0 expected pointer to STRUCT nf_conn___init but" },
- { "set_status_after_insert", "kernel function bpf_ct_set_status args#0 expected pointer to STRUCT nf_conn___init but" },
- { "change_timeout_after_alloc", "kernel function bpf_ct_change_timeout args#0 expected pointer to STRUCT nf_conn but" },
- { "change_status_after_alloc", "kernel function bpf_ct_change_status args#0 expected pointer to STRUCT nf_conn but" },
+ { "alloc_release", "kernel function bpf_ct_release R1 expected pointer to STRUCT nf_conn but" },
+ { "insert_insert", "kernel function bpf_ct_insert_entry R1 expected pointer to STRUCT nf_conn___init but" },
+ { "lookup_insert", "kernel function bpf_ct_insert_entry R1 expected pointer to STRUCT nf_conn___init but" },
+ { "set_timeout_after_insert", "kernel function bpf_ct_set_timeout R1 expected pointer to STRUCT nf_conn___init but" },
+ { "set_status_after_insert", "kernel function bpf_ct_set_status R1 expected pointer to STRUCT nf_conn___init but" },
+ { "change_timeout_after_alloc", "kernel function bpf_ct_change_timeout R1 expected pointer to STRUCT nf_conn but" },
+ { "change_status_after_alloc", "kernel function bpf_ct_change_status R1 expected pointer to STRUCT nf_conn but" },
{ "write_not_allowlisted_field", "no write support to nf_conn at off" },
- { "lookup_null_bpf_tuple", "Possibly NULL pointer passed to trusted arg1" },
- { "lookup_null_bpf_opts", "Possibly NULL pointer passed to trusted arg3" },
- { "xdp_lookup_null_bpf_tuple", "Possibly NULL pointer passed to trusted arg1" },
- { "xdp_lookup_null_bpf_opts", "Possibly NULL pointer passed to trusted arg3" },
+ { "lookup_null_bpf_tuple", "Possibly NULL pointer passed to trusted R2" },
+ { "lookup_null_bpf_opts", "Possibly NULL pointer passed to trusted R4" },
+ { "xdp_lookup_null_bpf_tuple", "Possibly NULL pointer passed to trusted R2" },
+ { "xdp_lookup_null_bpf_opts", "Possibly NULL pointer passed to trusted R4" },
};
enum {
diff --git a/tools/testing/selftests/bpf/prog_tests/cb_refs.c b/tools/testing/selftests/bpf/prog_tests/cb_refs.c
index c40df623a8f7..6300b67a3a84 100644
--- a/tools/testing/selftests/bpf/prog_tests/cb_refs.c
+++ b/tools/testing/selftests/bpf/prog_tests/cb_refs.c
@@ -12,7 +12,7 @@ struct {
const char *err_msg;
} cb_refs_tests[] = {
{ "underflow_prog", "must point to scalar, or struct with scalar" },
- { "leak_prog", "Possibly NULL pointer passed to helper arg2" },
+ { "leak_prog", "Possibly NULL pointer passed to helper R2" },
{ "nested_cb", "Unreleased reference id=4 alloc_insn=2" }, /* alloc_insn=2{4,5} */
{ "non_cb_transfer_ref", "Unreleased reference id=4 alloc_insn=1" }, /* alloc_insn=1{1,2} */
};
diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c
index 62f3fb79f5d1..3df07680f9e0 100644
--- a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c
+++ b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c
@@ -68,7 +68,7 @@ static struct kfunc_test_params kfunc_tests[] = {
TC_FAIL(kfunc_call_test_get_mem_fail_oob, 0, "min value is outside of the allowed memory range"),
TC_FAIL(kfunc_call_test_get_mem_fail_not_const, 0, "is not a const"),
TC_FAIL(kfunc_call_test_mem_acquire_fail, 0, "acquire kernel function does not return PTR_TO_BTF_ID"),
- TC_FAIL(kfunc_call_test_pointer_arg_type_mismatch, 0, "arg#0 expected pointer to ctx, but got scalar"),
+ TC_FAIL(kfunc_call_test_pointer_arg_type_mismatch, 0, "R1 expected pointer to ctx, but got scalar"),
/* success cases */
TC_TEST(kfunc_call_test1, 12),
diff --git a/tools/testing/selftests/bpf/prog_tests/linked_list.c b/tools/testing/selftests/bpf/prog_tests/linked_list.c
index 6f25b5f39a79..dbff099860ba 100644
--- a/tools/testing/selftests/bpf/prog_tests/linked_list.c
+++ b/tools/testing/selftests/bpf/prog_tests/linked_list.c
@@ -81,8 +81,8 @@ static struct {
{ "direct_write_node", "direct access to bpf_list_node is disallowed" },
{ "use_after_unlock_push_front", "invalid mem access 'scalar'" },
{ "use_after_unlock_push_back", "invalid mem access 'scalar'" },
- { "double_push_front", "arg#1 expected pointer to allocated object" },
- { "double_push_back", "arg#1 expected pointer to allocated object" },
+ { "double_push_front", "R2 expected pointer to allocated object" },
+ { "double_push_back", "R2 expected pointer to allocated object" },
{ "no_node_value_type", "bpf_list_node not found at offset=0" },
{ "incorrect_value_type",
"operation on bpf_list_head expects arg#1 bpf_list_node at offset=48 in struct foo, "
diff --git a/tools/testing/selftests/bpf/progs/cgrp_kfunc_failure.c b/tools/testing/selftests/bpf/progs/cgrp_kfunc_failure.c
index 9fe9c4a4e8f6..a875ba8e5007 100644
--- a/tools/testing/selftests/bpf/progs/cgrp_kfunc_failure.c
+++ b/tools/testing/selftests/bpf/progs/cgrp_kfunc_failure.c
@@ -29,7 +29,7 @@ static struct __cgrps_kfunc_map_value *insert_lookup_cgrp(struct cgroup *cgrp)
}
SEC("tp_btf/cgroup_mkdir")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(cgrp_kfunc_acquire_untrusted, struct cgroup *cgrp, const char *path)
{
struct cgroup *acquired;
@@ -48,7 +48,7 @@ int BPF_PROG(cgrp_kfunc_acquire_untrusted, struct cgroup *cgrp, const char *path
}
SEC("tp_btf/cgroup_mkdir")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(cgrp_kfunc_acquire_no_null_check, struct cgroup *cgrp, const char *path)
{
struct cgroup *acquired;
@@ -64,7 +64,7 @@ int BPF_PROG(cgrp_kfunc_acquire_no_null_check, struct cgroup *cgrp, const char *
}
SEC("tp_btf/cgroup_mkdir")
-__failure __msg("arg#0 pointer type STRUCT cgroup must point")
+__failure __msg("R1 pointer type STRUCT cgroup must point")
int BPF_PROG(cgrp_kfunc_acquire_fp, struct cgroup *cgrp, const char *path)
{
struct cgroup *acquired, *stack_cgrp = (struct cgroup *)&path;
@@ -106,7 +106,7 @@ int BPF_PROG(cgrp_kfunc_acquire_trusted_walked, struct cgroup *cgrp, const char
}
SEC("tp_btf/cgroup_mkdir")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(cgrp_kfunc_acquire_null, struct cgroup *cgrp, const char *path)
{
struct cgroup *acquired;
@@ -175,7 +175,7 @@ int BPF_PROG(cgrp_kfunc_rcu_get_release, struct cgroup *cgrp, const char *path)
}
SEC("tp_btf/cgroup_mkdir")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(cgrp_kfunc_release_untrusted, struct cgroup *cgrp, const char *path)
{
struct __cgrps_kfunc_map_value *v;
@@ -191,7 +191,7 @@ int BPF_PROG(cgrp_kfunc_release_untrusted, struct cgroup *cgrp, const char *path
}
SEC("tp_btf/cgroup_mkdir")
-__failure __msg("arg#0 pointer type STRUCT cgroup must point")
+__failure __msg("R1 pointer type STRUCT cgroup must point")
int BPF_PROG(cgrp_kfunc_release_fp, struct cgroup *cgrp, const char *path)
{
struct cgroup *acquired = (struct cgroup *)&path;
@@ -203,7 +203,7 @@ int BPF_PROG(cgrp_kfunc_release_fp, struct cgroup *cgrp, const char *path)
}
SEC("tp_btf/cgroup_mkdir")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(cgrp_kfunc_release_null, struct cgroup *cgrp, const char *path)
{
struct __cgrps_kfunc_map_value local, *v;
diff --git a/tools/testing/selftests/bpf/progs/cpumask_failure.c b/tools/testing/selftests/bpf/progs/cpumask_failure.c
index 61c32e91e8c3..4c45346fe6f7 100644
--- a/tools/testing/selftests/bpf/progs/cpumask_failure.c
+++ b/tools/testing/selftests/bpf/progs/cpumask_failure.c
@@ -45,7 +45,7 @@ int BPF_PROG(test_alloc_no_release, struct task_struct *task, u64 clone_flags)
}
SEC("tp_btf/task_newtask")
-__failure __msg("NULL pointer passed to trusted arg0")
+__failure __msg("NULL pointer passed to trusted R1")
int BPF_PROG(test_alloc_double_release, struct task_struct *task, u64 clone_flags)
{
struct bpf_cpumask *cpumask;
@@ -73,7 +73,7 @@ int BPF_PROG(test_acquire_wrong_cpumask, struct task_struct *task, u64 clone_fla
}
SEC("tp_btf/task_newtask")
-__failure __msg("bpf_cpumask_set_cpu args#1 expected pointer to STRUCT bpf_cpumask")
+__failure __msg("bpf_cpumask_set_cpu R2 expected pointer to STRUCT bpf_cpumask")
int BPF_PROG(test_mutate_cpumask, struct task_struct *task, u64 clone_flags)
{
/* Can't set the CPU of a non-struct bpf_cpumask. */
@@ -107,7 +107,7 @@ int BPF_PROG(test_insert_remove_no_release, struct task_struct *task, u64 clone_
}
SEC("tp_btf/task_newtask")
-__failure __msg("NULL pointer passed to trusted arg0")
+__failure __msg("NULL pointer passed to trusted R1")
int BPF_PROG(test_cpumask_null, struct task_struct *task, u64 clone_flags)
{
/* NULL passed to kfunc. */
@@ -151,7 +151,7 @@ int BPF_PROG(test_global_mask_out_of_rcu, struct task_struct *task, u64 clone_fl
}
SEC("tp_btf/task_newtask")
-__failure __msg("NULL pointer passed to trusted arg1")
+__failure __msg("NULL pointer passed to trusted R2")
int BPF_PROG(test_global_mask_no_null_check, struct task_struct *task, u64 clone_flags)
{
struct bpf_cpumask *local, *prev;
@@ -179,7 +179,7 @@ int BPF_PROG(test_global_mask_no_null_check, struct task_struct *task, u64 clone
}
SEC("tp_btf/task_newtask")
-__failure __msg("Possibly NULL pointer passed to helper arg2")
+__failure __msg("Possibly NULL pointer passed to helper R2")
int BPF_PROG(test_global_mask_rcu_no_null_check, struct task_struct *task, u64 clone_flags)
{
struct bpf_cpumask *prev, *curr;
diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c
index b62773ce5219..dbd97add5a5a 100644
--- a/tools/testing/selftests/bpf/progs/dynptr_fail.c
+++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c
@@ -149,7 +149,7 @@ int ringbuf_release_uninit_dynptr(void *ctx)
/* A dynptr can't be used after it has been invalidated */
SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #2")
+__failure __msg("Expected an initialized dynptr as R3")
int use_after_invalid(void *ctx)
{
struct bpf_dynptr ptr;
@@ -448,7 +448,7 @@ int invalid_helper2(void *ctx)
/* A bpf_dynptr is invalidated if it's been written into */
SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #0")
+__failure __msg("Expected an initialized dynptr as R1")
int invalid_write1(void *ctx)
{
struct bpf_dynptr ptr;
@@ -1642,7 +1642,7 @@ int invalid_slice_rdwr_rdonly(struct __sk_buff *skb)
/* bpf_dynptr_adjust can only be called on initialized dynptrs */
SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #0")
+__failure __msg("Expected an initialized dynptr as R1")
int dynptr_adjust_invalid(void *ctx)
{
struct bpf_dynptr ptr = {};
@@ -1655,7 +1655,7 @@ int dynptr_adjust_invalid(void *ctx)
/* bpf_dynptr_is_null can only be called on initialized dynptrs */
SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #0")
+__failure __msg("Expected an initialized dynptr as R1")
int dynptr_is_null_invalid(void *ctx)
{
struct bpf_dynptr ptr = {};
@@ -1668,7 +1668,7 @@ int dynptr_is_null_invalid(void *ctx)
/* bpf_dynptr_is_rdonly can only be called on initialized dynptrs */
SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #0")
+__failure __msg("Expected an initialized dynptr as R1")
int dynptr_is_rdonly_invalid(void *ctx)
{
struct bpf_dynptr ptr = {};
@@ -1681,7 +1681,7 @@ int dynptr_is_rdonly_invalid(void *ctx)
/* bpf_dynptr_size can only be called on initialized dynptrs */
SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #0")
+__failure __msg("Expected an initialized dynptr as R1")
int dynptr_size_invalid(void *ctx)
{
struct bpf_dynptr ptr = {};
@@ -1694,7 +1694,7 @@ int dynptr_size_invalid(void *ctx)
/* Only initialized dynptrs can be cloned */
SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #0")
+__failure __msg("Expected an initialized dynptr as R1")
int clone_invalid1(void *ctx)
{
struct bpf_dynptr ptr1 = {};
@@ -1728,7 +1728,7 @@ int clone_invalid2(struct xdp_md *xdp)
/* Invalidating a dynptr should invalidate its clones */
SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #2")
+__failure __msg("Expected an initialized dynptr as R3")
int clone_invalidate1(void *ctx)
{
struct bpf_dynptr clone;
@@ -1749,7 +1749,7 @@ int clone_invalidate1(void *ctx)
/* Invalidating a dynptr should invalidate its parent */
SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #2")
+__failure __msg("Expected an initialized dynptr as R3")
int clone_invalidate2(void *ctx)
{
struct bpf_dynptr ptr;
@@ -1770,7 +1770,7 @@ int clone_invalidate2(void *ctx)
/* Invalidating a dynptr should invalidate its siblings */
SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #2")
+__failure __msg("Expected an initialized dynptr as R3")
int clone_invalidate3(void *ctx)
{
struct bpf_dynptr ptr;
@@ -1981,7 +1981,7 @@ __noinline long global_call_bpf_dynptr(const struct bpf_dynptr *dynptr)
}
SEC("?raw_tp")
-__failure __msg("arg#0 expected pointer to stack or const struct bpf_dynptr")
+__failure __msg("R1 expected pointer to stack or const struct bpf_dynptr")
int test_dynptr_reg_type(void *ctx)
{
struct task_struct *current = NULL;
diff --git a/tools/testing/selftests/bpf/progs/file_reader_fail.c b/tools/testing/selftests/bpf/progs/file_reader_fail.c
index 32fe28ed2439..0739620dea8a 100644
--- a/tools/testing/selftests/bpf/progs/file_reader_fail.c
+++ b/tools/testing/selftests/bpf/progs/file_reader_fail.c
@@ -30,7 +30,7 @@ int on_nanosleep_unreleased_ref(void *ctx)
SEC("xdp")
__failure
-__msg("Expected a dynptr of type file as arg #0")
+__msg("Expected a dynptr of type file as R1")
int xdp_wrong_dynptr_type(struct xdp_md *xdp)
{
struct bpf_dynptr dynptr;
@@ -42,7 +42,7 @@ int xdp_wrong_dynptr_type(struct xdp_md *xdp)
SEC("xdp")
__failure
-__msg("Expected an initialized dynptr as arg #0")
+__msg("Expected an initialized dynptr as R1")
int xdp_no_dynptr_type(struct xdp_md *xdp)
{
struct bpf_dynptr dynptr;
diff --git a/tools/testing/selftests/bpf/progs/irq.c b/tools/testing/selftests/bpf/progs/irq.c
index e11e82d98904..a4a007866a33 100644
--- a/tools/testing/selftests/bpf/progs/irq.c
+++ b/tools/testing/selftests/bpf/progs/irq.c
@@ -15,7 +15,7 @@ struct bpf_res_spin_lock lockA __hidden SEC(".data.A");
struct bpf_res_spin_lock lockB __hidden SEC(".data.B");
SEC("?tc")
-__failure __msg("arg#0 doesn't point to an irq flag on stack")
+__failure __msg("R1 doesn't point to an irq flag on stack")
int irq_save_bad_arg(struct __sk_buff *ctx)
{
bpf_local_irq_save(&global_flags);
@@ -23,7 +23,7 @@ int irq_save_bad_arg(struct __sk_buff *ctx)
}
SEC("?tc")
-__failure __msg("arg#0 doesn't point to an irq flag on stack")
+__failure __msg("R1 doesn't point to an irq flag on stack")
int irq_restore_bad_arg(struct __sk_buff *ctx)
{
bpf_local_irq_restore(&global_flags);
diff --git a/tools/testing/selftests/bpf/progs/iters.c b/tools/testing/selftests/bpf/progs/iters.c
index 86b74e3579d9..0fa70b133d93 100644
--- a/tools/testing/selftests/bpf/progs/iters.c
+++ b/tools/testing/selftests/bpf/progs/iters.c
@@ -1605,7 +1605,7 @@ int iter_subprog_check_stacksafe(const void *ctx)
struct bpf_iter_num global_it;
SEC("raw_tp")
-__failure __msg("arg#0 expected pointer to an iterator on stack")
+__failure __msg("R1 expected pointer to an iterator on stack")
int iter_new_bad_arg(const void *ctx)
{
bpf_iter_num_new(&global_it, 0, 1);
@@ -1613,7 +1613,7 @@ int iter_new_bad_arg(const void *ctx)
}
SEC("raw_tp")
-__failure __msg("arg#0 expected pointer to an iterator on stack")
+__failure __msg("R1 expected pointer to an iterator on stack")
int iter_next_bad_arg(const void *ctx)
{
bpf_iter_num_next(&global_it);
@@ -1621,7 +1621,7 @@ int iter_next_bad_arg(const void *ctx)
}
SEC("raw_tp")
-__failure __msg("arg#0 expected pointer to an iterator on stack")
+__failure __msg("R1 expected pointer to an iterator on stack")
int iter_destroy_bad_arg(const void *ctx)
{
bpf_iter_num_destroy(&global_it);
diff --git a/tools/testing/selftests/bpf/progs/iters_state_safety.c b/tools/testing/selftests/bpf/progs/iters_state_safety.c
index d273b46dfc7c..af8f9ec1ea98 100644
--- a/tools/testing/selftests/bpf/progs/iters_state_safety.c
+++ b/tools/testing/selftests/bpf/progs/iters_state_safety.c
@@ -73,7 +73,7 @@ int create_and_forget_to_destroy_fail(void *ctx)
}
SEC("?raw_tp")
-__failure __msg("expected an initialized iter_num as arg #0")
+__failure __msg("expected an initialized iter_num as R1")
int destroy_without_creating_fail(void *ctx)
{
/* init with zeros to stop verifier complaining about uninit stack */
@@ -91,7 +91,7 @@ int destroy_without_creating_fail(void *ctx)
}
SEC("?raw_tp")
-__failure __msg("expected an initialized iter_num as arg #0")
+__failure __msg("expected an initialized iter_num as R1")
int compromise_iter_w_direct_write_fail(void *ctx)
{
struct bpf_iter_num iter;
@@ -143,7 +143,7 @@ int compromise_iter_w_direct_write_and_skip_destroy_fail(void *ctx)
}
SEC("?raw_tp")
-__failure __msg("expected an initialized iter_num as arg #0")
+__failure __msg("expected an initialized iter_num as R1")
int compromise_iter_w_helper_write_fail(void *ctx)
{
struct bpf_iter_num iter;
@@ -230,7 +230,7 @@ int valid_stack_reuse(void *ctx)
}
SEC("?raw_tp")
-__failure __msg("expected uninitialized iter_num as arg #0")
+__failure __msg("expected uninitialized iter_num as R1")
int double_create_fail(void *ctx)
{
struct bpf_iter_num iter;
@@ -258,7 +258,7 @@ int double_create_fail(void *ctx)
}
SEC("?raw_tp")
-__failure __msg("expected an initialized iter_num as arg #0")
+__failure __msg("expected an initialized iter_num as R1")
int double_destroy_fail(void *ctx)
{
struct bpf_iter_num iter;
@@ -284,7 +284,7 @@ int double_destroy_fail(void *ctx)
}
SEC("?raw_tp")
-__failure __msg("expected an initialized iter_num as arg #0")
+__failure __msg("expected an initialized iter_num as R1")
int next_without_new_fail(void *ctx)
{
struct bpf_iter_num iter;
@@ -305,7 +305,7 @@ int next_without_new_fail(void *ctx)
}
SEC("?raw_tp")
-__failure __msg("expected an initialized iter_num as arg #0")
+__failure __msg("expected an initialized iter_num as R1")
int next_after_destroy_fail(void *ctx)
{
struct bpf_iter_num iter;
diff --git a/tools/testing/selftests/bpf/progs/iters_testmod.c b/tools/testing/selftests/bpf/progs/iters_testmod.c
index 5379e9960ffd..76012dbbdb41 100644
--- a/tools/testing/selftests/bpf/progs/iters_testmod.c
+++ b/tools/testing/selftests/bpf/progs/iters_testmod.c
@@ -29,7 +29,7 @@ int iter_next_trusted(const void *ctx)
}
SEC("raw_tp/sys_enter")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int iter_next_trusted_or_null(const void *ctx)
{
struct task_struct *cur_task = bpf_get_current_task_btf();
@@ -67,7 +67,7 @@ int iter_next_rcu(const void *ctx)
}
SEC("raw_tp/sys_enter")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int iter_next_rcu_or_null(const void *ctx)
{
struct task_struct *cur_task = bpf_get_current_task_btf();
diff --git a/tools/testing/selftests/bpf/progs/iters_testmod_seq.c b/tools/testing/selftests/bpf/progs/iters_testmod_seq.c
index 83791348bed5..9b760dac333e 100644
--- a/tools/testing/selftests/bpf/progs/iters_testmod_seq.c
+++ b/tools/testing/selftests/bpf/progs/iters_testmod_seq.c
@@ -79,7 +79,7 @@ int testmod_seq_truncated(const void *ctx)
SEC("?raw_tp")
__failure
-__msg("expected an initialized iter_testmod_seq as arg #1")
+__msg("expected an initialized iter_testmod_seq as R2")
int testmod_seq_getter_before_bad(const void *ctx)
{
struct bpf_iter_testmod_seq it;
@@ -89,7 +89,7 @@ int testmod_seq_getter_before_bad(const void *ctx)
SEC("?raw_tp")
__failure
-__msg("expected an initialized iter_testmod_seq as arg #1")
+__msg("expected an initialized iter_testmod_seq as R2")
int testmod_seq_getter_after_bad(const void *ctx)
{
struct bpf_iter_testmod_seq it;
diff --git a/tools/testing/selftests/bpf/progs/map_kptr_fail.c b/tools/testing/selftests/bpf/progs/map_kptr_fail.c
index ee053b24e6ca..8f36e74fd8f9 100644
--- a/tools/testing/selftests/bpf/progs/map_kptr_fail.c
+++ b/tools/testing/selftests/bpf/progs/map_kptr_fail.c
@@ -364,7 +364,7 @@ int kptr_xchg_ref_state(struct __sk_buff *ctx)
}
SEC("?tc")
-__failure __msg("Possibly NULL pointer passed to helper arg2")
+__failure __msg("Possibly NULL pointer passed to helper R2")
int kptr_xchg_possibly_null(struct __sk_buff *ctx)
{
struct prog_test_ref_kfunc *p;
diff --git a/tools/testing/selftests/bpf/progs/percpu_alloc_fail.c b/tools/testing/selftests/bpf/progs/percpu_alloc_fail.c
index 81813c724fa9..08379c3b6a03 100644
--- a/tools/testing/selftests/bpf/progs/percpu_alloc_fail.c
+++ b/tools/testing/selftests/bpf/progs/percpu_alloc_fail.c
@@ -110,7 +110,7 @@ int BPF_PROG(test_array_map_3)
}
SEC("?fentry.s/bpf_fentry_test1")
-__failure __msg("arg#0 expected for bpf_percpu_obj_drop()")
+__failure __msg("R1 expected for bpf_percpu_obj_drop()")
int BPF_PROG(test_array_map_4)
{
struct val_t __percpu_kptr *p;
@@ -124,7 +124,7 @@ int BPF_PROG(test_array_map_4)
}
SEC("?fentry.s/bpf_fentry_test1")
-__failure __msg("arg#0 expected for bpf_obj_drop()")
+__failure __msg("R1 expected for bpf_obj_drop()")
int BPF_PROG(test_array_map_5)
{
struct val_t *p;
diff --git a/tools/testing/selftests/bpf/progs/rbtree_fail.c b/tools/testing/selftests/bpf/progs/rbtree_fail.c
index 70b7baf9304b..555379952dcc 100644
--- a/tools/testing/selftests/bpf/progs/rbtree_fail.c
+++ b/tools/testing/selftests/bpf/progs/rbtree_fail.c
@@ -134,7 +134,7 @@ long rbtree_api_remove_no_drop(void *ctx)
}
SEC("?tc")
-__failure __msg("arg#1 expected pointer to allocated object")
+__failure __msg("R2 expected pointer to allocated object")
long rbtree_api_add_to_multiple_trees(void *ctx)
{
struct node_data *n;
@@ -153,7 +153,7 @@ long rbtree_api_add_to_multiple_trees(void *ctx)
}
SEC("?tc")
-__failure __msg("Possibly NULL pointer passed to trusted arg1")
+__failure __msg("Possibly NULL pointer passed to trusted R2")
long rbtree_api_use_unchecked_remove_retval(void *ctx)
{
struct bpf_rb_node *res;
@@ -281,7 +281,7 @@ long add_with_cb(bool (cb)(struct bpf_rb_node *a, const struct bpf_rb_node *b))
}
SEC("?tc")
-__failure __msg("arg#1 expected pointer to allocated object")
+__failure __msg("R2 expected pointer to allocated object")
long rbtree_api_add_bad_cb_bad_fn_call_add(void *ctx)
{
return add_with_cb(less__bad_fn_call_add);
diff --git a/tools/testing/selftests/bpf/progs/refcounted_kptr_fail.c b/tools/testing/selftests/bpf/progs/refcounted_kptr_fail.c
index b2808bfcec29..7247a20c0a3b 100644
--- a/tools/testing/selftests/bpf/progs/refcounted_kptr_fail.c
+++ b/tools/testing/selftests/bpf/progs/refcounted_kptr_fail.c
@@ -54,7 +54,7 @@ long rbtree_refcounted_node_ref_escapes(void *ctx)
}
SEC("?tc")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
long refcount_acquire_maybe_null(void *ctx)
{
struct node_acquire *n, *m;
diff --git a/tools/testing/selftests/bpf/progs/stream_fail.c b/tools/testing/selftests/bpf/progs/stream_fail.c
index 8e8249f3521c..21428bb1ee59 100644
--- a/tools/testing/selftests/bpf/progs/stream_fail.c
+++ b/tools/testing/selftests/bpf/progs/stream_fail.c
@@ -23,7 +23,7 @@ int stream_vprintk_scalar_arg(void *ctx)
}
SEC("syscall")
-__failure __msg("arg#1 doesn't point to a const string")
+__failure __msg("R2 doesn't point to a const string")
int stream_vprintk_string_arg(void *ctx)
{
bpf_stream_vprintk(BPF_STDOUT, ctx, NULL, 0);
diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_failure.c b/tools/testing/selftests/bpf/progs/task_kfunc_failure.c
index 4c07ea193f72..41047d81ec42 100644
--- a/tools/testing/selftests/bpf/progs/task_kfunc_failure.c
+++ b/tools/testing/selftests/bpf/progs/task_kfunc_failure.c
@@ -28,7 +28,7 @@ static struct __tasks_kfunc_map_value *insert_lookup_task(struct task_struct *ta
}
SEC("tp_btf/task_newtask")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(task_kfunc_acquire_untrusted, struct task_struct *task, u64 clone_flags)
{
struct task_struct *acquired;
@@ -49,7 +49,7 @@ int BPF_PROG(task_kfunc_acquire_untrusted, struct task_struct *task, u64 clone_f
}
SEC("tp_btf/task_newtask")
-__failure __msg("arg#0 pointer type STRUCT task_struct must point")
+__failure __msg("R1 pointer type STRUCT task_struct must point")
int BPF_PROG(task_kfunc_acquire_fp, struct task_struct *task, u64 clone_flags)
{
struct task_struct *acquired, *stack_task = (struct task_struct *)&clone_flags;
@@ -100,7 +100,7 @@ int BPF_PROG(task_kfunc_acquire_unsafe_kretprobe_rcu, struct task_struct *task,
}
SEC("tp_btf/task_newtask")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(task_kfunc_acquire_null, struct task_struct *task, u64 clone_flags)
{
struct task_struct *acquired;
@@ -149,7 +149,7 @@ int BPF_PROG(task_kfunc_xchg_unreleased, struct task_struct *task, u64 clone_fla
}
SEC("tp_btf/task_newtask")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(task_kfunc_acquire_release_no_null_check, struct task_struct *task, u64 clone_flags)
{
struct task_struct *acquired;
@@ -162,7 +162,7 @@ int BPF_PROG(task_kfunc_acquire_release_no_null_check, struct task_struct *task,
}
SEC("tp_btf/task_newtask")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(task_kfunc_release_untrusted, struct task_struct *task, u64 clone_flags)
{
struct __tasks_kfunc_map_value *v;
@@ -178,7 +178,7 @@ int BPF_PROG(task_kfunc_release_untrusted, struct task_struct *task, u64 clone_f
}
SEC("tp_btf/task_newtask")
-__failure __msg("arg#0 pointer type STRUCT task_struct must point")
+__failure __msg("R1 pointer type STRUCT task_struct must point")
int BPF_PROG(task_kfunc_release_fp, struct task_struct *task, u64 clone_flags)
{
struct task_struct *acquired = (struct task_struct *)&clone_flags;
@@ -190,7 +190,7 @@ int BPF_PROG(task_kfunc_release_fp, struct task_struct *task, u64 clone_flags)
}
SEC("tp_btf/task_newtask")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(task_kfunc_release_null, struct task_struct *task, u64 clone_flags)
{
struct __tasks_kfunc_map_value local, *v;
@@ -234,7 +234,7 @@ int BPF_PROG(task_kfunc_release_unacquired, struct task_struct *task, u64 clone_
}
SEC("tp_btf/task_newtask")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(task_kfunc_from_pid_no_null_check, struct task_struct *task, u64 clone_flags)
{
struct task_struct *acquired;
@@ -248,7 +248,7 @@ int BPF_PROG(task_kfunc_from_pid_no_null_check, struct task_struct *task, u64 cl
}
SEC("tp_btf/task_newtask")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(task_kfunc_from_vpid_no_null_check, struct task_struct *task, u64 clone_flags)
{
struct task_struct *acquired;
diff --git a/tools/testing/selftests/bpf/progs/task_work_fail.c b/tools/testing/selftests/bpf/progs/task_work_fail.c
index 82e4b8913333..3186e7b4b24e 100644
--- a/tools/testing/selftests/bpf/progs/task_work_fail.c
+++ b/tools/testing/selftests/bpf/progs/task_work_fail.c
@@ -58,7 +58,7 @@ int mismatch_map(struct pt_regs *args)
}
SEC("perf_event")
-__failure __msg("arg#1 doesn't point to a map value")
+__failure __msg("R2 doesn't point to a map value")
int no_map_task_work(struct pt_regs *args)
{
struct task_struct *task;
@@ -70,7 +70,7 @@ int no_map_task_work(struct pt_regs *args)
}
SEC("perf_event")
-__failure __msg("Possibly NULL pointer passed to trusted arg1")
+__failure __msg("Possibly NULL pointer passed to trusted R2")
int task_work_null(struct pt_regs *args)
{
struct task_struct *task;
@@ -81,7 +81,7 @@ int task_work_null(struct pt_regs *args)
}
SEC("perf_event")
-__failure __msg("Possibly NULL pointer passed to trusted arg2")
+__failure __msg("Possibly NULL pointer passed to trusted R3")
int map_null(struct pt_regs *args)
{
struct elem *work;
diff --git a/tools/testing/selftests/bpf/progs/test_bpf_nf_fail.c b/tools/testing/selftests/bpf/progs/test_bpf_nf_fail.c
index 2c156cd166af..332cda89caba 100644
--- a/tools/testing/selftests/bpf/progs/test_bpf_nf_fail.c
+++ b/tools/testing/selftests/bpf/progs/test_bpf_nf_fail.c
@@ -152,7 +152,7 @@ int change_status_after_alloc(struct __sk_buff *ctx)
}
SEC("?tc")
-__failure __msg("Possibly NULL pointer passed to trusted arg1")
+__failure __msg("Possibly NULL pointer passed to trusted R2")
int lookup_null_bpf_tuple(struct __sk_buff *ctx)
{
struct bpf_ct_opts___local opts = {};
@@ -165,7 +165,7 @@ int lookup_null_bpf_tuple(struct __sk_buff *ctx)
}
SEC("?tc")
-__failure __msg("Possibly NULL pointer passed to trusted arg3")
+__failure __msg("Possibly NULL pointer passed to trusted R4")
int lookup_null_bpf_opts(struct __sk_buff *ctx)
{
struct bpf_sock_tuple tup = {};
@@ -178,7 +178,7 @@ int lookup_null_bpf_opts(struct __sk_buff *ctx)
}
SEC("?xdp")
-__failure __msg("Possibly NULL pointer passed to trusted arg1")
+__failure __msg("Possibly NULL pointer passed to trusted R2")
int xdp_lookup_null_bpf_tuple(struct xdp_md *ctx)
{
struct bpf_ct_opts___local opts = {};
@@ -191,7 +191,7 @@ int xdp_lookup_null_bpf_tuple(struct xdp_md *ctx)
}
SEC("?xdp")
-__failure __msg("Possibly NULL pointer passed to trusted arg3")
+__failure __msg("Possibly NULL pointer passed to trusted R4")
int xdp_lookup_null_bpf_opts(struct xdp_md *ctx)
{
struct bpf_sock_tuple tup = {};
diff --git a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
index d249113ed657..41da6e619940 100644
--- a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
+++ b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
@@ -45,7 +45,7 @@ int BPF_PROG(not_valid_dynptr, int cmd, union bpf_attr *attr, unsigned int size,
}
SEC("?lsm.s/bpf")
-__failure __msg("arg#0 expected pointer to stack or const struct bpf_dynptr")
+__failure __msg("R1 expected pointer to stack or const struct bpf_dynptr")
int BPF_PROG(not_ptr_to_stack, int cmd, union bpf_attr *attr, unsigned int size, bool kernel)
{
static struct bpf_dynptr val;
diff --git a/tools/testing/selftests/bpf/progs/test_kfunc_param_nullable.c b/tools/testing/selftests/bpf/progs/test_kfunc_param_nullable.c
index 967081bbcfe1..ca35b92ea095 100644
--- a/tools/testing/selftests/bpf/progs/test_kfunc_param_nullable.c
+++ b/tools/testing/selftests/bpf/progs/test_kfunc_param_nullable.c
@@ -29,7 +29,7 @@ int kfunc_dynptr_nullable_test2(struct __sk_buff *skb)
}
SEC("tc")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int kfunc_dynptr_nullable_test3(struct __sk_buff *skb)
{
struct bpf_dynptr data;
diff --git a/tools/testing/selftests/bpf/progs/verifier_bits_iter.c b/tools/testing/selftests/bpf/progs/verifier_bits_iter.c
index 8bcddadfc4da..dd97f2027505 100644
--- a/tools/testing/selftests/bpf/progs/verifier_bits_iter.c
+++ b/tools/testing/selftests/bpf/progs/verifier_bits_iter.c
@@ -32,7 +32,7 @@ int BPF_PROG(no_destroy, struct bpf_iter_meta *meta, struct cgroup *cgrp)
SEC("iter/cgroup")
__description("uninitialized iter in ->next()")
-__failure __msg("expected an initialized iter_bits as arg #0")
+__failure __msg("expected an initialized iter_bits as R1")
int BPF_PROG(next_uninit, struct bpf_iter_meta *meta, struct cgroup *cgrp)
{
struct bpf_iter_bits it = {};
@@ -43,7 +43,7 @@ int BPF_PROG(next_uninit, struct bpf_iter_meta *meta, struct cgroup *cgrp)
SEC("iter/cgroup")
__description("uninitialized iter in ->destroy()")
-__failure __msg("expected an initialized iter_bits as arg #0")
+__failure __msg("expected an initialized iter_bits as R1")
int BPF_PROG(destroy_uninit, struct bpf_iter_meta *meta, struct cgroup *cgrp)
{
struct bpf_iter_bits it = {};
diff --git a/tools/testing/selftests/bpf/progs/verifier_ref_tracking.c b/tools/testing/selftests/bpf/progs/verifier_ref_tracking.c
index 910365201f68..139f70bb3595 100644
--- a/tools/testing/selftests/bpf/progs/verifier_ref_tracking.c
+++ b/tools/testing/selftests/bpf/progs/verifier_ref_tracking.c
@@ -263,7 +263,7 @@ l0_%=: r0 = 0; \
SEC("lsm.s/bpf")
__description("reference tracking: release user key reference without check")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
__naked void user_key_reference_without_check(void)
{
asm volatile (" \
@@ -282,7 +282,7 @@ __naked void user_key_reference_without_check(void)
SEC("lsm.s/bpf")
__description("reference tracking: release system key reference without check")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
__naked void system_key_reference_without_check(void)
{
asm volatile (" \
@@ -300,7 +300,7 @@ __naked void system_key_reference_without_check(void)
SEC("lsm.s/bpf")
__description("reference tracking: release with NULL key pointer")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
__naked void release_with_null_key_pointer(void)
{
asm volatile (" \
diff --git a/tools/testing/selftests/bpf/progs/verifier_vfs_reject.c b/tools/testing/selftests/bpf/progs/verifier_vfs_reject.c
index 4b392c6c8fc4..0990de076844 100644
--- a/tools/testing/selftests/bpf/progs/verifier_vfs_reject.c
+++ b/tools/testing/selftests/bpf/progs/verifier_vfs_reject.c
@@ -13,7 +13,7 @@
static char buf[PATH_MAX];
SEC("lsm.s/file_open")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(get_task_exe_file_kfunc_null)
{
struct file *acquired;
@@ -28,7 +28,7 @@ int BPF_PROG(get_task_exe_file_kfunc_null)
}
SEC("lsm.s/inode_getxattr")
-__failure __msg("arg#0 pointer type STRUCT task_struct must point to scalar, or struct with scalar")
+__failure __msg("R1 pointer type STRUCT task_struct must point to scalar, or struct with scalar")
int BPF_PROG(get_task_exe_file_kfunc_fp)
{
u64 x;
@@ -89,7 +89,7 @@ int BPF_PROG(put_file_kfunc_unacquired, struct file *file)
}
SEC("lsm.s/file_open")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(path_d_path_kfunc_null)
{
/* Can't pass NULL value to bpf_path_d_path() kfunc. */
@@ -128,7 +128,7 @@ int BPF_PROG(path_d_path_kfunc_untrusted_from_current)
}
SEC("lsm.s/file_open")
-__failure __msg("kernel function bpf_path_d_path args#0 expected pointer to STRUCT path but R1 has a pointer to STRUCT file")
+__failure __msg("kernel function bpf_path_d_path R1 expected pointer to STRUCT path but R1 has a pointer to STRUCT file")
int BPF_PROG(path_d_path_kfunc_type_mismatch, struct file *file)
{
bpf_path_d_path((struct path *)&file->f_task_work, buf, sizeof(buf));
diff --git a/tools/testing/selftests/bpf/progs/wq_failures.c b/tools/testing/selftests/bpf/progs/wq_failures.c
index 3767f5595bbc..32dc8827e128 100644
--- a/tools/testing/selftests/bpf/progs/wq_failures.c
+++ b/tools/testing/selftests/bpf/progs/wq_failures.c
@@ -98,7 +98,7 @@ __failure
* is a correct bpf_wq pointer.
*/
__msg(": (85) call bpf_wq_set_callback#") /* anchor message */
-__msg("arg#0 doesn't point to a map value")
+__msg("R1 doesn't point to a map value")
long test_wrong_wq_pointer(void *ctx)
{
int key = 0;
diff --git a/tools/testing/selftests/bpf/verifier/calls.c b/tools/testing/selftests/bpf/verifier/calls.c
index c3164b9b2be5..0bb4337552c8 100644
--- a/tools/testing/selftests/bpf/verifier/calls.c
+++ b/tools/testing/selftests/bpf/verifier/calls.c
@@ -31,7 +31,7 @@
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = REJECT,
- .errstr = "arg#0 pointer type STRUCT prog_test_fail1 must point to scalar",
+ .errstr = "R1 pointer type STRUCT prog_test_fail1 must point to scalar",
.fixup_kfunc_btf_id = {
{ "bpf_kfunc_call_test_fail1", 2 },
},
@@ -46,7 +46,7 @@
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = REJECT,
- .errstr = "max struct nesting depth exceeded\narg#0 pointer type STRUCT prog_test_fail2",
+ .errstr = "max struct nesting depth exceeded\nR1 pointer type STRUCT prog_test_fail2",
.fixup_kfunc_btf_id = {
{ "bpf_kfunc_call_test_fail2", 2 },
},
@@ -61,7 +61,7 @@
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = REJECT,
- .errstr = "arg#0 pointer type STRUCT prog_test_fail3 must point to scalar",
+ .errstr = "R1 pointer type STRUCT prog_test_fail3 must point to scalar",
.fixup_kfunc_btf_id = {
{ "bpf_kfunc_call_test_fail3", 2 },
},
@@ -76,7 +76,7 @@
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = REJECT,
- .errstr = "arg#0 expected pointer to ctx, but got fp",
+ .errstr = "R1 expected pointer to ctx, but got fp",
.fixup_kfunc_btf_id = {
{ "bpf_kfunc_call_test_pass_ctx", 2 },
},
@@ -91,7 +91,7 @@
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = REJECT,
- .errstr = "arg#0 pointer type UNKNOWN must point to scalar",
+ .errstr = "R1 pointer type UNKNOWN must point to scalar",
.fixup_kfunc_btf_id = {
{ "bpf_kfunc_call_test_mem_len_fail1", 2 },
},
@@ -109,7 +109,7 @@
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = REJECT,
- .errstr = "Possibly NULL pointer passed to trusted arg0",
+ .errstr = "Possibly NULL pointer passed to trusted R1",
.fixup_kfunc_btf_id = {
{ "bpf_kfunc_call_test_acquire", 3 },
{ "bpf_kfunc_call_test_release", 5 },
@@ -152,7 +152,7 @@
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = REJECT,
- .errstr = "kernel function bpf_kfunc_call_memb1_release args#0 expected pointer",
+ .errstr = "kernel function bpf_kfunc_call_memb1_release R1 expected pointer",
.fixup_kfunc_btf_id = {
{ "bpf_kfunc_call_memb_acquire", 1 },
{ "bpf_kfunc_call_memb1_release", 5 },
--
2.52.0
^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH bpf-next v6 05/17] bpf: Introduce bpf register BPF_REG_PARAMS
2026-04-19 16:33 [PATCH bpf-next v6 00/17] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
` (3 preceding siblings ...)
2026-04-19 16:33 ` [PATCH bpf-next v6 04/17] bpf: Prepare verifier logs for upcoming kfunc stack arguments Yonghong Song
@ 2026-04-19 16:33 ` Yonghong Song
2026-04-19 17:06 ` sashiko-bot
2026-04-19 16:33 ` [PATCH bpf-next v6 06/17] bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments Yonghong Song
` (12 subsequent siblings)
17 siblings, 1 reply; 51+ messages in thread
From: Yonghong Song @ 2026-04-19 16:33 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
Introduce BPF_REG_PARAMS as a dedicated BPF register for stack
argument accesses. It occupies BPF register number 11 (R11) and
serves as the base pointer for the stack argument area, keeping that
area separate from the R10-based (BPF_REG_FP) program stack.
The kernel-internal hidden register BPF_REG_AX previously occupied
slot 11 (MAX_BPF_REG). With BPF_REG_PARAMS taking that slot,
BPF_REG_AX moves to slot 12 and MAX_BPF_EXT_REG increases
accordingly.
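As a minimal sketch (assuming MAX_BPF_REG is 11, i.e. r0-r10 are the
architected registers), the resulting numbering is:

```c
/* Sketch of the register numbering after this patch.
 * Assumption: MAX_BPF_REG == 11 (architected registers r0..r10).
 */
#define MAX_BPF_REG     11
#define BPF_REG_PARAMS  MAX_BPF_REG        /* r11: stack arg base pointer */
#define BPF_REG_AX      (MAX_BPF_REG + 1)  /* r12: was r11 before this patch */
#define MAX_BPF_EXT_REG (MAX_BPF_REG + 2)
```

This is why the selftest expectations below change r11/w11 to r12/w12:
the hidden AX register used by may_goto and sdiv expansions now maps
to r12.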
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
include/linux/filter.h | 5 +-
kernel/bpf/core.c | 4 +-
.../selftests/bpf/prog_tests/ctx_rewrite.c | 14 ++--
.../bpf/progs/verifier_bpf_fastcall.c | 24 +++----
.../selftests/bpf/progs/verifier_may_goto_1.c | 12 ++--
.../selftests/bpf/progs/verifier_sdiv.c | 64 +++++++++----------
6 files changed, 62 insertions(+), 61 deletions(-)
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 1ec6d5ba64cc..b77d0b06db6e 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -58,8 +58,9 @@ struct ctl_table_header;
#define BPF_REG_H BPF_REG_9 /* hlen, callee-saved */
/* Kernel hidden auxiliary/helper register. */
-#define BPF_REG_AX MAX_BPF_REG
-#define MAX_BPF_EXT_REG (MAX_BPF_REG + 1)
+#define BPF_REG_PARAMS MAX_BPF_REG
+#define BPF_REG_AX (MAX_BPF_REG + 1)
+#define MAX_BPF_EXT_REG (MAX_BPF_REG + 2)
#define MAX_BPF_JIT_REG MAX_BPF_EXT_REG
/* unused opcode to mark special call to bpf_tail_call() helper */
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 8b018ff48875..ae10b9ca018d 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1299,8 +1299,8 @@ static int bpf_jit_blind_insn(const struct bpf_insn *from,
u32 imm_rnd = get_random_u32();
s16 off;
- BUILD_BUG_ON(BPF_REG_AX + 1 != MAX_BPF_JIT_REG);
- BUILD_BUG_ON(MAX_BPF_REG + 1 != MAX_BPF_JIT_REG);
+ BUILD_BUG_ON(BPF_REG_PARAMS + 2 != MAX_BPF_JIT_REG);
+ BUILD_BUG_ON(BPF_REG_AX + 1 != MAX_BPF_JIT_REG);
/* Constraints on AX register:
*
diff --git a/tools/testing/selftests/bpf/prog_tests/ctx_rewrite.c b/tools/testing/selftests/bpf/prog_tests/ctx_rewrite.c
index 469e92869523..83d870e32239 100644
--- a/tools/testing/selftests/bpf/prog_tests/ctx_rewrite.c
+++ b/tools/testing/selftests/bpf/prog_tests/ctx_rewrite.c
@@ -69,19 +69,19 @@ static struct test_case test_cases[] = {
#if defined(__x86_64__) || defined(__aarch64__)
{
N(SCHED_CLS, struct __sk_buff, tstamp),
- .read = "r11 = *(u8 *)($ctx + sk_buff::__mono_tc_offset);"
- "if w11 & 0x4 goto pc+1;"
+ .read = "r12 = *(u8 *)($ctx + sk_buff::__mono_tc_offset);"
+ "if w12 & 0x4 goto pc+1;"
"goto pc+4;"
- "if w11 & 0x3 goto pc+1;"
+ "if w12 & 0x3 goto pc+1;"
"goto pc+2;"
"$dst = 0;"
"goto pc+1;"
"$dst = *(u64 *)($ctx + sk_buff::tstamp);",
- .write = "r11 = *(u8 *)($ctx + sk_buff::__mono_tc_offset);"
- "if w11 & 0x4 goto pc+1;"
+ .write = "r12 = *(u8 *)($ctx + sk_buff::__mono_tc_offset);"
+ "if w12 & 0x4 goto pc+1;"
"goto pc+2;"
- "w11 &= -4;"
- "*(u8 *)($ctx + sk_buff::__mono_tc_offset) = r11;"
+ "w12 &= -4;"
+ "*(u8 *)($ctx + sk_buff::__mono_tc_offset) = r12;"
"*(u64 *)($ctx + sk_buff::tstamp) = $src;",
},
#endif
diff --git a/tools/testing/selftests/bpf/progs/verifier_bpf_fastcall.c b/tools/testing/selftests/bpf/progs/verifier_bpf_fastcall.c
index fb4fa465d67c..0d9e167555b5 100644
--- a/tools/testing/selftests/bpf/progs/verifier_bpf_fastcall.c
+++ b/tools/testing/selftests/bpf/progs/verifier_bpf_fastcall.c
@@ -630,13 +630,13 @@ __xlated("...")
__xlated("4: r0 = &(void __percpu *)(r0)")
__xlated("...")
/* may_goto expansion starts */
-__xlated("6: r11 = *(u64 *)(r10 -24)")
-__xlated("7: if r11 == 0x0 goto pc+6")
-__xlated("8: r11 -= 1")
-__xlated("9: if r11 != 0x0 goto pc+2")
-__xlated("10: r11 = -24")
+__xlated("6: r12 = *(u64 *)(r10 -24)")
+__xlated("7: if r12 == 0x0 goto pc+6")
+__xlated("8: r12 -= 1")
+__xlated("9: if r12 != 0x0 goto pc+2")
+__xlated("10: r12 = -24")
__xlated("11: call unknown")
-__xlated("12: *(u64 *)(r10 -24) = r11")
+__xlated("12: *(u64 *)(r10 -24) = r12")
/* may_goto expansion ends */
__xlated("13: *(u64 *)(r10 -8) = r1")
__xlated("14: exit")
@@ -668,13 +668,13 @@ __xlated("1: *(u64 *)(r10 -16) =")
__xlated("2: r1 = 1")
__xlated("3: call bpf_get_smp_processor_id")
/* may_goto expansion starts */
-__xlated("4: r11 = *(u64 *)(r10 -24)")
-__xlated("5: if r11 == 0x0 goto pc+6")
-__xlated("6: r11 -= 1")
-__xlated("7: if r11 != 0x0 goto pc+2")
-__xlated("8: r11 = -24")
+__xlated("4: r12 = *(u64 *)(r10 -24)")
+__xlated("5: if r12 == 0x0 goto pc+6")
+__xlated("6: r12 -= 1")
+__xlated("7: if r12 != 0x0 goto pc+2")
+__xlated("8: r12 = -24")
__xlated("9: call unknown")
-__xlated("10: *(u64 *)(r10 -24) = r11")
+__xlated("10: *(u64 *)(r10 -24) = r12")
/* may_goto expansion ends */
__xlated("11: *(u64 *)(r10 -8) = r1")
__xlated("12: exit")
diff --git a/tools/testing/selftests/bpf/progs/verifier_may_goto_1.c b/tools/testing/selftests/bpf/progs/verifier_may_goto_1.c
index 6d1edaef9213..4bdf4256a41e 100644
--- a/tools/testing/selftests/bpf/progs/verifier_may_goto_1.c
+++ b/tools/testing/selftests/bpf/progs/verifier_may_goto_1.c
@@ -81,13 +81,13 @@ __arch_s390x
__arch_arm64
__xlated("0: *(u64 *)(r10 -16) = 65535")
__xlated("1: *(u64 *)(r10 -8) = 0")
-__xlated("2: r11 = *(u64 *)(r10 -16)")
-__xlated("3: if r11 == 0x0 goto pc+6")
-__xlated("4: r11 -= 1")
-__xlated("5: if r11 != 0x0 goto pc+2")
-__xlated("6: r11 = -16")
+__xlated("2: r12 = *(u64 *)(r10 -16)")
+__xlated("3: if r12 == 0x0 goto pc+6")
+__xlated("4: r12 -= 1")
+__xlated("5: if r12 != 0x0 goto pc+2")
+__xlated("6: r12 = -16")
__xlated("7: call unknown")
-__xlated("8: *(u64 *)(r10 -16) = r11")
+__xlated("8: *(u64 *)(r10 -16) = r12")
__xlated("9: r0 = 1")
__xlated("10: r0 = 2")
__xlated("11: exit")
diff --git a/tools/testing/selftests/bpf/progs/verifier_sdiv.c b/tools/testing/selftests/bpf/progs/verifier_sdiv.c
index fd59d57e8e37..95f3239ce228 100644
--- a/tools/testing/selftests/bpf/progs/verifier_sdiv.c
+++ b/tools/testing/selftests/bpf/progs/verifier_sdiv.c
@@ -778,10 +778,10 @@ __arch_x86_64
__xlated("0: r2 = 0x8000000000000000")
__xlated("2: r3 = -1")
__xlated("3: r4 = r2")
-__xlated("4: r11 = r3")
-__xlated("5: r11 += 1")
-__xlated("6: if r11 > 0x1 goto pc+4")
-__xlated("7: if r11 == 0x0 goto pc+1")
+__xlated("4: r12 = r3")
+__xlated("5: r12 += 1")
+__xlated("6: if r12 > 0x1 goto pc+4")
+__xlated("7: if r12 == 0x0 goto pc+1")
__xlated("8: r2 = 0")
__xlated("9: r2 = -r2")
__xlated("10: goto pc+1")
@@ -812,10 +812,10 @@ __success __retval(-5)
__arch_x86_64
__xlated("0: r2 = 5")
__xlated("1: r3 = -1")
-__xlated("2: r11 = r3")
-__xlated("3: r11 += 1")
-__xlated("4: if r11 > 0x1 goto pc+4")
-__xlated("5: if r11 == 0x0 goto pc+1")
+__xlated("2: r12 = r3")
+__xlated("3: r12 += 1")
+__xlated("4: if r12 > 0x1 goto pc+4")
+__xlated("5: if r12 == 0x0 goto pc+1")
__xlated("6: r2 = 0")
__xlated("7: r2 = -r2")
__xlated("8: goto pc+1")
@@ -890,10 +890,10 @@ __arch_x86_64
__xlated("0: w2 = -2147483648")
__xlated("1: w3 = -1")
__xlated("2: w4 = w2")
-__xlated("3: r11 = r3")
-__xlated("4: w11 += 1")
-__xlated("5: if w11 > 0x1 goto pc+4")
-__xlated("6: if w11 == 0x0 goto pc+1")
+__xlated("3: r12 = r3")
+__xlated("4: w12 += 1")
+__xlated("5: if w12 > 0x1 goto pc+4")
+__xlated("6: if w12 == 0x0 goto pc+1")
__xlated("7: w2 = 0")
__xlated("8: w2 = -w2")
__xlated("9: goto pc+1")
@@ -925,10 +925,10 @@ __arch_x86_64
__xlated("0: w2 = -5")
__xlated("1: w3 = -1")
__xlated("2: w4 = w2")
-__xlated("3: r11 = r3")
-__xlated("4: w11 += 1")
-__xlated("5: if w11 > 0x1 goto pc+4")
-__xlated("6: if w11 == 0x0 goto pc+1")
+__xlated("3: r12 = r3")
+__xlated("4: w12 += 1")
+__xlated("5: if w12 > 0x1 goto pc+4")
+__xlated("6: if w12 == 0x0 goto pc+1")
__xlated("7: w2 = 0")
__xlated("8: w2 = -w2")
__xlated("9: goto pc+1")
@@ -1004,10 +1004,10 @@ __arch_x86_64
__xlated("0: r2 = 0x8000000000000000")
__xlated("2: r3 = -1")
__xlated("3: r4 = r2")
-__xlated("4: r11 = r3")
-__xlated("5: r11 += 1")
-__xlated("6: if r11 > 0x1 goto pc+3")
-__xlated("7: if r11 == 0x1 goto pc+3")
+__xlated("4: r12 = r3")
+__xlated("5: r12 += 1")
+__xlated("6: if r12 > 0x1 goto pc+3")
+__xlated("7: if r12 == 0x1 goto pc+3")
__xlated("8: w2 = 0")
__xlated("9: goto pc+1")
__xlated("10: r2 s%= r3")
@@ -1034,10 +1034,10 @@ __arch_x86_64
__xlated("0: r2 = 5")
__xlated("1: r3 = -1")
__xlated("2: r4 = r2")
-__xlated("3: r11 = r3")
-__xlated("4: r11 += 1")
-__xlated("5: if r11 > 0x1 goto pc+3")
-__xlated("6: if r11 == 0x1 goto pc+3")
+__xlated("3: r12 = r3")
+__xlated("4: r12 += 1")
+__xlated("5: if r12 > 0x1 goto pc+3")
+__xlated("6: if r12 == 0x1 goto pc+3")
__xlated("7: w2 = 0")
__xlated("8: goto pc+1")
__xlated("9: r2 s%= r3")
@@ -1108,10 +1108,10 @@ __arch_x86_64
__xlated("0: w2 = -2147483648")
__xlated("1: w3 = -1")
__xlated("2: w4 = w2")
-__xlated("3: r11 = r3")
-__xlated("4: w11 += 1")
-__xlated("5: if w11 > 0x1 goto pc+3")
-__xlated("6: if w11 == 0x1 goto pc+4")
+__xlated("3: r12 = r3")
+__xlated("4: w12 += 1")
+__xlated("5: if w12 > 0x1 goto pc+3")
+__xlated("6: if w12 == 0x1 goto pc+4")
__xlated("7: w2 = 0")
__xlated("8: goto pc+1")
__xlated("9: w2 s%= w3")
@@ -1140,10 +1140,10 @@ __arch_x86_64
__xlated("0: w2 = -5")
__xlated("1: w3 = -1")
__xlated("2: w4 = w2")
-__xlated("3: r11 = r3")
-__xlated("4: w11 += 1")
-__xlated("5: if w11 > 0x1 goto pc+3")
-__xlated("6: if w11 == 0x1 goto pc+4")
+__xlated("3: r12 = r3")
+__xlated("4: w12 += 1")
+__xlated("5: if w12 > 0x1 goto pc+3")
+__xlated("6: if w12 == 0x1 goto pc+4")
__xlated("7: w2 = 0")
__xlated("8: goto pc+1")
__xlated("9: w2 s%= w3")
--
2.52.0
* [PATCH bpf-next v6 06/17] bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments
2026-04-19 16:33 [PATCH bpf-next v6 00/17] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
` (4 preceding siblings ...)
2026-04-19 16:33 ` [PATCH bpf-next v6 05/17] bpf: Introduce bpf register BPF_REG_PARAMS Yonghong Song
@ 2026-04-19 16:33 ` Yonghong Song
2026-04-19 16:33 ` [PATCH bpf-next v6 07/17] bpf: Support stack arguments for bpf functions Yonghong Song
` (11 subsequent siblings)
17 siblings, 0 replies; 51+ messages in thread
From: Yonghong Song @ 2026-04-19 16:33 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
Currently, MAX_BPF_FUNC_ARGS caps the number of parameters for
tracepoint-related programs.
Reuse MAX_BPF_FUNC_ARGS to also cap the number of arguments for bpf
functions and kfunc's. Its current value of 12 should be sufficient
for the vast majority of bpf functions and kfunc's.
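A minimal sketch of the resulting split, matching the btf.c change
below (assuming 8-byte slots; the helper name is illustrative):

```c
#define MAX_BPF_FUNC_REG_ARGS 5   /* args 1-5 are passed in r1-r5 */
#define MAX_BPF_FUNC_ARGS     12
#define BPF_REG_SIZE          8

/* Bytes of incoming stack arg area a function with nargs arguments needs. */
static int incoming_stack_arg_depth(int nargs)
{
	return nargs > MAX_BPF_FUNC_REG_ARGS
		? (nargs - MAX_BPF_FUNC_REG_ARGS) * BPF_REG_SIZE : 0;
}
```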
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
include/linux/bpf.h | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index b4b703c90ca9..c2e614236e1c 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1151,6 +1151,11 @@ struct bpf_prog_offload {
/* The longest tracepoint has 12 args.
* See include/trace/bpf_probe.h
+ *
+ * Also reuse this macro for maximum number of arguments a BPF function
+ * or a kfunc can have. Args 1-5 are passed in registers, args 6-12 via
+ * stack arg slots. The JIT may map some stack arg slots to registers based
+ * on the native calling convention (e.g., arg 6 to R9 on x86-64).
*/
#define MAX_BPF_FUNC_ARGS 12
--
2.52.0
* [PATCH bpf-next v6 07/17] bpf: Support stack arguments for bpf functions
2026-04-19 16:33 [PATCH bpf-next v6 00/17] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
` (5 preceding siblings ...)
2026-04-19 16:33 ` [PATCH bpf-next v6 06/17] bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments Yonghong Song
@ 2026-04-19 16:33 ` Yonghong Song
2026-04-19 19:15 ` sashiko-bot
2026-04-21 0:37 ` Alexei Starovoitov
2026-04-19 16:33 ` [PATCH bpf-next v6 08/17] bpf: Reject stack arguments in non-JITed programs Yonghong Song
` (10 subsequent siblings)
17 siblings, 2 replies; 51+ messages in thread
From: Yonghong Song @ 2026-04-19 16:33 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
Currently BPF functions (subprogs) are limited to 5 register arguments.
With [1], the compiler can emit code that passes additional arguments
via a dedicated stack area through bpf register BPF_REG_PARAMS (r11),
introduced in the previous patch.
The compiler uses positive r11 offsets for incoming (callee-side) args
and negative r11 offsets for outgoing (caller-side) args, following the
x86_64/arm64 calling convention direction. There is an 8-byte gap at
offset 0 separating the two regions:
Incoming (callee reads): r11+8 (arg6), r11+16 (arg7), ...
Outgoing (caller writes): r11-8 (arg6), r11-16 (arg7), ...
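The offset convention above can be sketched as (illustrative helpers,
assuming 8-byte slots; not actual kernel functions):

```c
/* r11-relative offset of incoming stack arg N (N >= 6): +8, +16, ... */
static int incoming_arg_off(int arg_no)
{
	return (arg_no - 5) * 8;
}

/* r11-relative offset of outgoing stack arg N (N >= 6): -8, -16, ... */
static int outgoing_arg_off(int arg_no)
{
	return -(arg_no - 5) * 8;
}
```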
The following is an example to show how stack arguments are saved
and transferred between caller and callee:
int foo(int a1, int a2, int a3, int a4, int a5, int a6, int a7) {
int a8 = ...; /* local value passed as bar()'s 8th argument */
...
bar(a1, a2, a3, a4, a5, a6, a7, a8);
...
}
Caller (foo) Callee (bar)
============ ============
Incoming (positive offsets): Incoming (positive offsets):
r11+8: [incoming arg 6] r11+8: [incoming arg 6] <-+
r11+16: [incoming arg 7] r11+16: [incoming arg 7] <-|+
r11+24: [incoming arg 8] <-||+
Outgoing (negative offsets): |||
r11-8: [outgoing arg 6 to bar] -------->-------------------------+||
r11-16: [outgoing arg 7 to bar] -------->--------------------------+|
r11-24: [outgoing arg 8 to bar] -------->---------------------------+
If the bpf function has more than one call:
int foo(int a1, int a2, int a3, int a4, int a5, int a6, int a7) {
int a8 = ...; /* locals passed as extra arguments */
int a9 = ...;
...
bar1(a1, a2, a3, a4, a5, a6, a7, a8);
...
bar2(a1, a2, a3, a4, a5, a6, a7, a8, a9);
...
}
Caller (foo) Callee (bar2)
============ ==============
Incoming (positive offsets): Incoming (positive offsets):
r11+8: [incoming arg 6] r11+8: [incoming arg 6] <+
r11+16: [incoming arg 7] r11+16: [incoming arg 7] <|+
r11+24: [incoming arg 8] <||+
Outgoing for bar2 (negative offsets): r11+32: [incoming arg 9] <|||+
r11-8: [outgoing arg 6] ---->----------->-------------------------+|||
r11-16: [outgoing arg 7] ---->----------->--------------------------+||
r11-24: [outgoing arg 8] ---->----------->---------------------------+|
r11-32: [outgoing arg 9] ---->----------->----------------------------+
The verifier tracks stack arguments separately from the regular r10
stack. The stack_arg_regs are stored in bpf_func_state. This separation
keeps the stack arg area from interfering with the normal stack and
frame pointer (r10) bookkeeping. Similar to stacksafe(), introduce
stack_arg_safe() to do pruning check.
A per-state bitmask out_stack_arg_mask tracks which outgoing stack arg
slots have been written on the current path. Each bit corresponds to
an outgoing slot index (bit 0 = r11-8 = arg6, bit 1 = r11-16 = arg7,
etc.). At a call site, the verifier checks that all slots required by
the callee have their corresponding mask bits set. This enables
precise per-path tracking: if one branch of a conditional writes arg6
but another does not, the mask correctly reflects the difference and
the verifier rejects the uninitialized path. The mask is included in
stack_arg_safe() so that states with different sets of initialized
slots are not incorrectly pruned together.
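The slot-to-bit mapping can be sketched as (illustrative helpers; not
actual kernel functions):

```c
/* Bit for the outgoing slot holding arg N: bit 0 = r11-8 = arg6, etc. */
static unsigned int out_slot_bit(int arg_no)
{
	return 1u << (arg_no - 6);
}

/* Mask of outgoing slots a callee taking nargs arguments requires. */
static unsigned int required_out_mask(int nargs)
{
	return nargs > 5 ? (1u << (nargs - 5)) - 1 : 0;
}
```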
Outgoing stack arg slots are invalidated after a call. This follows
native architecture calling convention where all function parameters
are caller saved. The following are a few examples.
Example 1:
*(u64 *)(r11 - 8) = r6;
*(u64 *)(r11 - 16) = r7;
call bar1; // arg6 = r6, arg7 = r7
call bar2; // bar2 also needs 2 stack args, but the slots were invalidated by the bar1 call: rejected
Example 2:
To fix Example 1, re-store the stack args before the second call:
*(u64 *)(r11 - 8) = r6;
*(u64 *)(r11 - 16) = r7;
call bar1; // arg6 = r6, arg7 = r7
*(u64 *)(r11 - 8) = r8;
*(u64 *)(r11 - 16) = r9;
call bar2; // arg6 = r8, arg7 = r9
Example 3:
The compiler can hoist the shared stack arg stores above the branch:
*(u64 *)(r11 - 16) = r7;
...
if cond goto else;
*(u64 *)(r11 - 8) = r8;
call bar1; // arg6 = r8, arg7 = r7
goto end;
else:
*(u64 *)(r11 - 8) = r9;
call bar2; // arg6 = r9, arg7 = r7
end:
Example 4:
Within a loop:
loop:
*(u64 *)(r11 - 8) = r6; // arg6, re-stored on each iteration
call bar; // arg6 is valid on every iteration
if ... goto loop;
A separate max_out_stack_arg_depth field in bpf_subprog_info tracks
the deepest outgoing offset actually written. It is used to reject
programs that write stack arg slots deeper than any callee expects.
For callback functions with stack arguments, the kernel must set up
the parameter types (including the stack parameter types) properly so
that the verifier can retrieve this information when checking the
callback.
Global subprogs with >5 args are not yet supported.
[1] https://github.com/llvm/llvm-project/pull/189060
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
include/linux/bpf.h | 2 +
include/linux/bpf_verifier.h | 28 +++-
kernel/bpf/btf.c | 14 +-
kernel/bpf/fixups.c | 22 ++-
kernel/bpf/states.c | 41 +++++
kernel/bpf/verifier.c | 295 ++++++++++++++++++++++++++++++++++-
6 files changed, 390 insertions(+), 12 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index c2e614236e1c..8450c5dd1e20 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1669,6 +1669,8 @@ struct bpf_prog_aux {
u32 max_pkt_offset;
u32 max_tp_access;
u32 stack_depth;
+ u16 incoming_stack_arg_depth;
+ u16 stack_arg_depth; /* both incoming and max outgoing of stack arguments */
u32 id;
u32 func_cnt; /* used by non-func prog as the number of func progs */
u32 real_func_cnt; /* includes hidden progs, only used for JIT and freeing progs */
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 9fbbddc40d21..bb6d8cab3a35 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -372,6 +372,11 @@ struct bpf_func_state {
* `stack`. allocated_stack is always a multiple of BPF_REG_SIZE.
*/
int allocated_stack;
+
+ u16 stack_arg_depth; /* Size of incoming + max outgoing stack args in bytes. */
+ u16 incoming_stack_arg_depth; /* Size of incoming stack args in bytes. */
+ u16 out_stack_arg_mask; /* Bitmask of outgoing stack arg slots that have been written. */
+ struct bpf_reg_state *stack_arg_regs; /* On-stack arguments */
};
#define MAX_CALL_FRAMES 8
@@ -508,6 +513,17 @@ struct bpf_verifier_state {
iter < frame->allocated_stack / BPF_REG_SIZE; \
iter++, reg = bpf_get_spilled_reg(iter, frame, mask))
+#define bpf_get_spilled_stack_arg(slot, frame, mask) \
+ ((((slot) < frame->stack_arg_depth / BPF_REG_SIZE) && \
+ (frame->stack_arg_regs[slot].type != NOT_INIT)) \
+ ? &frame->stack_arg_regs[slot] : NULL)
+
+/* Iterate over 'frame', setting 'reg' to either NULL or a spilled stack arg. */
+#define bpf_for_each_spilled_stack_arg(iter, frame, reg, mask) \
+ for (iter = 0, reg = bpf_get_spilled_stack_arg(iter, frame, mask); \
+ iter < frame->stack_arg_depth / BPF_REG_SIZE; \
+ iter++, reg = bpf_get_spilled_stack_arg(iter, frame, mask))
+
#define bpf_for_each_reg_in_vstate_mask(__vst, __state, __reg, __mask, __expr) \
({ \
struct bpf_verifier_state *___vstate = __vst; \
@@ -525,6 +541,11 @@ struct bpf_verifier_state {
continue; \
(void)(__expr); \
} \
+ bpf_for_each_spilled_stack_arg(___j, __state, __reg, __mask) { \
+ if (!__reg) \
+ continue; \
+ (void)(__expr); \
+ } \
} \
})
@@ -739,10 +760,13 @@ struct bpf_subprog_info {
bool keep_fastcall_stack: 1;
bool changes_pkt_data: 1;
bool might_sleep: 1;
- u8 arg_cnt:3;
+ u8 arg_cnt:4;
enum priv_stack_mode priv_stack_mode;
- struct bpf_subprog_arg_info args[MAX_BPF_FUNC_REG_ARGS];
+ struct bpf_subprog_arg_info args[MAX_BPF_FUNC_ARGS];
+ u16 incoming_stack_arg_depth;
+ u16 outgoing_stack_arg_depth;
+ u16 max_out_stack_arg_depth;
};
struct bpf_verifier_env;
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index a62d78581207..c5f3aa05d5a3 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -7887,13 +7887,19 @@ int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog)
}
args = (const struct btf_param *)(t + 1);
nargs = btf_type_vlen(t);
- if (nargs > MAX_BPF_FUNC_REG_ARGS) {
- if (!is_global)
- return -EINVAL;
- bpf_log(log, "Global function %s() with %d > %d args. Buggy compiler.\n",
+ if (nargs > MAX_BPF_FUNC_ARGS) {
+ bpf_log(log, "Function %s() with %d > %d args not supported.\n",
+ tname, nargs, MAX_BPF_FUNC_ARGS);
+ return -EINVAL;
+ }
+ if (is_global && nargs > MAX_BPF_FUNC_REG_ARGS) {
+ bpf_log(log, "Global function %s() with %d > %d args not supported.\n",
tname, nargs, MAX_BPF_FUNC_REG_ARGS);
return -EINVAL;
}
+ if (nargs > MAX_BPF_FUNC_REG_ARGS)
+ sub->incoming_stack_arg_depth = (nargs - MAX_BPF_FUNC_REG_ARGS) * BPF_REG_SIZE;
+
/* check that function is void or returns int, exception cb also requires this */
t = btf_type_by_id(btf, t->type);
while (btf_type_is_modifier(t))
diff --git a/kernel/bpf/fixups.c b/kernel/bpf/fixups.c
index fba9e8c00878..c4e0224ad2f2 100644
--- a/kernel/bpf/fixups.c
+++ b/kernel/bpf/fixups.c
@@ -1123,6 +1123,9 @@ static int jit_subprogs(struct bpf_verifier_env *env)
func[i]->aux->name[0] = 'F';
func[i]->aux->stack_depth = env->subprog_info[i].stack_depth;
+ func[i]->aux->incoming_stack_arg_depth = env->subprog_info[i].incoming_stack_arg_depth;
+ func[i]->aux->stack_arg_depth = env->subprog_info[i].incoming_stack_arg_depth +
+ env->subprog_info[i].outgoing_stack_arg_depth;
if (env->subprog_info[i].priv_stack_mode == PRIV_STACK_ADAPTIVE)
func[i]->aux->jits_use_priv_stack = true;
@@ -1301,8 +1304,10 @@ int bpf_jit_subprogs(struct bpf_verifier_env *env)
struct bpf_insn_aux_data *orig_insn_aux;
u32 *orig_subprog_starts;
- if (env->subprog_cnt <= 1)
+ if (env->subprog_cnt <= 1) {
+ env->prog->aux->stack_arg_depth = env->subprog_info[0].outgoing_stack_arg_depth;
return 0;
+ }
prog = orig_prog = env->prog;
if (bpf_prog_need_blind(prog)) {
@@ -1378,9 +1383,20 @@ int bpf_fixup_call_args(struct bpf_verifier_env *env)
struct bpf_prog *prog = env->prog;
struct bpf_insn *insn = prog->insnsi;
bool has_kfunc_call = bpf_prog_has_kfunc_call(prog);
- int i, depth;
+ int depth;
#endif
- int err = 0;
+ int i, err = 0;
+
+ for (i = 0; i < env->subprog_cnt; i++) {
+ struct bpf_subprog_info *subprog = &env->subprog_info[i];
+
+ if (subprog->max_out_stack_arg_depth > subprog->outgoing_stack_arg_depth) {
+ verbose(env,
+ "func#%d writes stack arg slot at depth %u, but calls only require %u bytes\n",
+ i, subprog->max_out_stack_arg_depth, subprog->outgoing_stack_arg_depth);
+ return -EINVAL;
+ }
+ }
if (env->prog->jit_requested &&
!bpf_prog_is_offloaded(env->prog->aux)) {
diff --git a/kernel/bpf/states.c b/kernel/bpf/states.c
index 8478d2c6ed5b..235841d23fe3 100644
--- a/kernel/bpf/states.c
+++ b/kernel/bpf/states.c
@@ -838,6 +838,44 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
return true;
}
+/*
+ * Compare stack arg slots between old and current states.
+ * Outgoing stack args are path-local state and must agree for pruning.
+ */
+static bool stack_arg_safe(struct bpf_verifier_env *env, struct bpf_func_state *old,
+ struct bpf_func_state *cur, struct bpf_idmap *idmap,
+ enum exact_level exact)
+{
+ int i, nslots;
+
+ if (old->incoming_stack_arg_depth != cur->incoming_stack_arg_depth)
+ return false;
+
+ /* Compare both incoming and outgoing stack arg slots. */
+ if (old->stack_arg_depth != cur->stack_arg_depth)
+ return false;
+
+ if (old->out_stack_arg_mask != cur->out_stack_arg_mask)
+ return false;
+
+ nslots = old->stack_arg_depth / BPF_REG_SIZE;
+ for (i = 0; i < nslots; i++) {
+ struct bpf_reg_state *old_arg = &old->stack_arg_regs[i];
+ struct bpf_reg_state *cur_arg = &cur->stack_arg_regs[i];
+
+ if (old_arg->type == NOT_INIT && cur_arg->type == NOT_INIT)
+ continue;
+
+ if (exact == EXACT && old_arg->type != cur_arg->type)
+ return false;
+
+ if (!regsafe(env, old_arg, cur_arg, idmap, exact))
+ return false;
+ }
+
+ return true;
+}
+
static bool refsafe(struct bpf_verifier_state *old, struct bpf_verifier_state *cur,
struct bpf_idmap *idmap)
{
@@ -929,6 +967,9 @@ static bool func_states_equal(struct bpf_verifier_env *env, struct bpf_func_stat
if (!stacksafe(env, old, cur, &env->idmap_scratch, exact))
return false;
+ if (!stack_arg_safe(env, old, cur, &env->idmap_scratch, exact))
+ return false;
+
return true;
}
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 6aa4dc161a56..78c9322870a5 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1340,6 +1340,20 @@ static int copy_stack_state(struct bpf_func_state *dst, const struct bpf_func_st
return -ENOMEM;
dst->allocated_stack = src->allocated_stack;
+
+ /* copy stack args state */
+ n = src->stack_arg_depth / BPF_REG_SIZE;
+ if (n) {
+ dst->stack_arg_regs = copy_array(dst->stack_arg_regs, src->stack_arg_regs, n,
+ sizeof(struct bpf_reg_state),
+ GFP_KERNEL_ACCOUNT);
+ if (!dst->stack_arg_regs)
+ return -ENOMEM;
+ }
+
+ dst->stack_arg_depth = src->stack_arg_depth;
+ dst->incoming_stack_arg_depth = src->incoming_stack_arg_depth;
+ dst->out_stack_arg_mask = src->out_stack_arg_mask;
return 0;
}
@@ -1381,6 +1395,25 @@ static int grow_stack_state(struct bpf_verifier_env *env, struct bpf_func_state
return 0;
}
+static int grow_stack_arg_slots(struct bpf_verifier_env *env,
+ struct bpf_func_state *state, int size)
+{
+ size_t old_n = state->stack_arg_depth / BPF_REG_SIZE, n;
+
+ size = round_up(size, BPF_REG_SIZE);
+ n = size / BPF_REG_SIZE;
+ if (old_n >= n)
+ return 0;
+
+ state->stack_arg_regs = realloc_array(state->stack_arg_regs, old_n, n,
+ sizeof(struct bpf_reg_state));
+ if (!state->stack_arg_regs)
+ return -ENOMEM;
+
+ state->stack_arg_depth = size;
+ return 0;
+}
+
/* Acquire a pointer id from the env and update the state->refs to include
* this new pointer reference.
* On success, returns a valid pointer id to associate with the register
@@ -1543,6 +1576,7 @@ static void free_func_state(struct bpf_func_state *state)
{
if (!state)
return;
+ kfree(state->stack_arg_regs);
kfree(state->stack);
kfree(state);
}
@@ -4220,6 +4254,13 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
}
if (type == STACK_INVALID && env->allow_uninit_stack)
continue;
+ /*
+ * Cross-frame reads may hit slots poisoned by dead code elimination.
+ * Static liveness can't track indirect references through pointers,
+ * so allow the read conservatively.
+ */
+ if (type == STACK_POISON && reg_state != state)
+ continue;
if (type == STACK_POISON) {
verbose(env, "reading from stack off %d+%d size %d, slot poisoned by dead code elimination\n",
off, i, size);
@@ -4275,6 +4316,8 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
continue;
if (type == STACK_INVALID && env->allow_uninit_stack)
continue;
+ if (type == STACK_POISON && reg_state != state)
+ continue;
if (type == STACK_POISON) {
verbose(env, "reading from stack off %d+%d size %d, slot poisoned by dead code elimination\n",
off, i, size);
@@ -4429,6 +4472,123 @@ static int check_stack_write(struct bpf_verifier_env *env,
return err;
}
+/* Validate that a stack arg access is 8-byte sized and aligned. */
+static int check_stack_arg_access(struct bpf_verifier_env *env,
+ struct bpf_insn *insn, const char *op)
+{
+ int size = bpf_size_to_bytes(BPF_SIZE(insn->code));
+
+ if (size != BPF_REG_SIZE) {
+ verbose(env, "stack arg %s must be %d bytes, got %d\n",
+ op, BPF_REG_SIZE, size);
+ return -EINVAL;
+ }
+ if (insn->off == 0 || insn->off % BPF_REG_SIZE) {
+ verbose(env, "stack arg %s offset %d not aligned to %d\n",
+ op, insn->off, BPF_REG_SIZE);
+ return -EINVAL;
+ }
+ /* Reads use positive offsets (incoming), writes use negative (outgoing) */
+ if (op[0] == 'r' && insn->off <= 0) {
+ verbose(env, "stack arg read must use positive offset, got %d\n",
+ insn->off);
+ return -EINVAL;
+ }
+ if (op[0] == 'w' && insn->off >= 0) {
+ verbose(env, "stack arg write must use negative offset, got %d\n",
+ insn->off);
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static int out_arg_idx_from_off(int off)
+{
+ return -off / BPF_REG_SIZE - 1;
+}
+
+static int out_arg_spi(const struct bpf_func_state *state, int idx)
+{
+ return state->incoming_stack_arg_depth / BPF_REG_SIZE + idx;
+}
+
+static u16 out_arg_req_mask(int nr_stack_arg_regs)
+{
+ return nr_stack_arg_regs ? (1U << nr_stack_arg_regs) - 1 : 0;
+}
+
+/*
+ * Write a value to the outgoing stack arg area.
+ * off is a negative offset from r11 (e.g. -8 for arg6, -16 for arg7).
+ * Callers ensure off < 0, 8-byte aligned, and size is BPF_REG_SIZE.
+ */
+static int check_stack_arg_write(struct bpf_verifier_env *env, struct bpf_func_state *state,
+ int off, int value_regno)
+{
+ int max_stack_arg_regs = MAX_BPF_FUNC_ARGS - MAX_BPF_FUNC_REG_ARGS;
+ int idx = out_arg_idx_from_off(off);
+ int spi = out_arg_spi(state, idx);
+ struct bpf_subprog_info *subprog;
+ struct bpf_func_state *cur;
+ int err;
+
+ if (idx >= max_stack_arg_regs) {
+ verbose(env, "stack arg write offset %d exceeds max %d stack args\n",
+ off, max_stack_arg_regs);
+ return -EINVAL;
+ }
+
+ err = grow_stack_arg_slots(env, state, state->incoming_stack_arg_depth + (-off));
+ if (err)
+ return err;
+
+ /* Track the max outgoing stack arg access depth. */
+ subprog = &env->subprog_info[state->subprogno];
+ if (-off > subprog->max_out_stack_arg_depth)
+ subprog->max_out_stack_arg_depth = -off;
+
+ cur = env->cur_state->frame[env->cur_state->curframe];
+ if (value_regno >= 0) {
+ state->stack_arg_regs[spi] = cur->regs[value_regno];
+ } else {
+ /* BPF_ST: store immediate, treat as scalar */
+ struct bpf_reg_state *arg = &state->stack_arg_regs[spi];
+
+ arg->type = SCALAR_VALUE;
+ __mark_reg_known(arg, env->prog->insnsi[env->insn_idx].imm);
+ }
+ state->out_stack_arg_mask |= BIT(idx);
+ return 0;
+}
+
+/*
+ * Read a value from the incoming stack arg area.
+ * off is a positive offset from r11 (e.g. +8 for arg6, +16 for arg7).
+ * Callers ensure off > 0, 8-byte aligned, and size is BPF_REG_SIZE.
+ */
+static int check_stack_arg_read(struct bpf_verifier_env *env, struct bpf_func_state *state,
+ int off, int dst_regno)
+{
+ int spi = off / BPF_REG_SIZE - 1;
+ struct bpf_func_state *cur;
+ struct bpf_reg_state *arg;
+
+ if (off > state->incoming_stack_arg_depth) {
+ verbose(env, "invalid read from stack arg off %d depth %d\n",
+ off, state->incoming_stack_arg_depth);
+ return -EACCES;
+ }
+
+ arg = &state->stack_arg_regs[spi];
+ cur = env->cur_state->frame[env->cur_state->curframe];
+
+ if (is_spillable_regtype(arg->type))
+ copy_register_state(&cur->regs[dst_regno], arg);
+ else
+ mark_reg_unknown(env, cur->regs, dst_regno);
+ return 0;
+}
+
static int check_map_access_type(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 argno,
int off, int size, enum bpf_access_type type)
{
@@ -6614,10 +6774,23 @@ static int check_load_mem(struct bpf_verifier_env *env, struct bpf_insn *insn,
bool strict_alignment_once, bool is_ldsx,
bool allow_trust_mismatch, const char *ctx)
{
+ struct bpf_verifier_state *vstate = env->cur_state;
+ struct bpf_func_state *state = vstate->frame[vstate->curframe];
struct bpf_reg_state *regs = cur_regs(env);
enum bpf_reg_type src_reg_type;
int err;
+ /* Handle stack arg access */
+ if (insn->src_reg == BPF_REG_PARAMS) {
+ err = check_reg_arg(env, insn->dst_reg, DST_OP_NO_MARK);
+ if (err)
+ return err;
+ err = check_stack_arg_access(env, insn, "read");
+ if (err)
+ return err;
+ return check_stack_arg_read(env, state, insn->off, insn->dst_reg);
+ }
+
/* check src operand */
err = check_reg_arg(env, insn->src_reg, SRC_OP);
if (err)
@@ -6646,10 +6819,23 @@ static int check_load_mem(struct bpf_verifier_env *env, struct bpf_insn *insn,
static int check_store_reg(struct bpf_verifier_env *env, struct bpf_insn *insn,
bool strict_alignment_once)
{
+ struct bpf_verifier_state *vstate = env->cur_state;
+ struct bpf_func_state *state = vstate->frame[vstate->curframe];
struct bpf_reg_state *regs = cur_regs(env);
enum bpf_reg_type dst_reg_type;
int err;
+ /* Handle stack arg write */
+ if (insn->dst_reg == BPF_REG_PARAMS) {
+ err = check_reg_arg(env, insn->src_reg, SRC_OP);
+ if (err)
+ return err;
+ err = check_stack_arg_access(env, insn, "write");
+ if (err)
+ return err;
+ return check_stack_arg_write(env, state, insn->off, insn->src_reg);
+ }
+
/* check src1 operand */
err = check_reg_arg(env, insn->src_reg, SRC_OP);
if (err)
@@ -9278,6 +9464,16 @@ static void clear_caller_saved_regs(struct bpf_verifier_env *env,
}
}
+static void invalidate_outgoing_stack_args(struct bpf_func_state *state)
+{
+ int i, outgoing_start = state->incoming_stack_arg_depth / BPF_REG_SIZE;
+ int nslots = state->stack_arg_depth / BPF_REG_SIZE;
+
+ for (i = outgoing_start; i < nslots; i++)
+ state->stack_arg_regs[i].type = NOT_INIT;
+ state->out_stack_arg_mask = 0;
+}
+
typedef int (*set_callee_state_fn)(struct bpf_verifier_env *env,
struct bpf_func_state *caller,
struct bpf_func_state *callee,
@@ -9335,6 +9531,20 @@ static int setup_func_entry(struct bpf_verifier_env *env, int subprog, int calls
return err;
}
+static struct bpf_reg_state *get_func_arg_reg(struct bpf_verifier_env *env,
+ struct bpf_reg_state *regs, int argno)
+{
+ struct bpf_func_state *caller;
+ int spi;
+
+ if (argno < MAX_BPF_FUNC_REG_ARGS)
+ return &regs[argno + 1];
+
+ caller = cur_func(env);
+ spi = out_arg_spi(caller, argno - MAX_BPF_FUNC_REG_ARGS);
+ return &caller->stack_arg_regs[spi];
+}
+
static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
const struct btf *btf,
struct bpf_reg_state *regs)
@@ -9353,8 +9563,24 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
*/
for (i = 0; i < sub->arg_cnt; i++) {
u32 argno = make_argno(i);
- u32 regno = i + 1;
- struct bpf_reg_state *reg = &regs[regno];
+ struct bpf_reg_state *reg;
+
+ if (i >= MAX_BPF_FUNC_REG_ARGS) {
+ struct bpf_func_state *caller = cur_func(env);
+ int spi = out_arg_spi(caller, i - MAX_BPF_FUNC_REG_ARGS);
+
+ /*
+ * The compiler may constant-fold stack arg values into the
+ * callee, eliminating the r11 stores. The BTF still declares
+ * these parameters, but no outgoing stack slots exist.
+ */
+ if (spi >= (caller->stack_arg_depth / BPF_REG_SIZE)) {
+ verbose(env, "stack %s not found in caller state\n",
+ reg_arg_name(env, argno));
+ return -EINVAL;
+ }
+ }
+ reg = get_func_arg_reg(env, regs, i);
struct bpf_subprog_arg_info *arg = &sub->args[i];
if (arg->arg_type == ARG_ANYTHING) {
@@ -9542,8 +9768,10 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
int *insn_idx)
{
struct bpf_verifier_state *state = env->cur_state;
+ struct bpf_subprog_info *caller_info;
struct bpf_func_state *caller;
int err, subprog, target_insn;
+ u16 callee_incoming;
target_insn = *insn_idx + insn->imm + 1;
subprog = bpf_find_subprog(env, target_insn);
@@ -9595,6 +9823,15 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
return 0;
}
+ /*
+ * Track caller's outgoing stack arg depth (max across all callees).
+ * This is needed so the JIT knows how much stack arg space to allocate.
+ */
+ caller_info = &env->subprog_info[caller->subprogno];
+ callee_incoming = env->subprog_info[subprog].incoming_stack_arg_depth;
+ if (callee_incoming > caller_info->outgoing_stack_arg_depth)
+ caller_info->outgoing_stack_arg_depth = callee_incoming;
+
/* for regular function entry setup new frame and continue
* from that frame.
*/
@@ -9603,6 +9840,7 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
return err;
clear_caller_saved_regs(env, caller->regs);
+ invalidate_outgoing_stack_args(caller);
/* and go analyze first insn of the callee */
*insn_idx = env->subprog_info[subprog].start - 1;
@@ -9648,6 +9886,7 @@ static int set_callee_state(struct bpf_verifier_env *env,
struct bpf_func_state *caller,
struct bpf_func_state *callee, int insn_idx)
{
+ struct bpf_subprog_info *callee_info;
int i;
/* copy r1 - r5 args that callee can access. The copy includes parent
@@ -9655,6 +9894,45 @@ static int set_callee_state(struct bpf_verifier_env *env,
*/
for (i = BPF_REG_1; i <= BPF_REG_5; i++)
callee->regs[i] = caller->regs[i];
+
+ /*
+ * Transfer stack args from caller's outgoing area to callee's incoming
+ * area.
+ *
+ * Caller stores outgoing args at negative r11 offsets: -8 (arg6),
+ * -16 (arg7), -24 (arg8), ... In the caller's slot array, outgoing
+ * spi 0 is arg6, spi 1 is arg7, and so on.
+ *
+ * Callee reads incoming args at positive r11 offsets: +8 (arg6),
+ * +16 (arg7), ... Incoming spi 0 is arg6.
+ */
+ callee_info = &env->subprog_info[callee->subprogno];
+ if (callee_info->incoming_stack_arg_depth) {
+ int callee_incoming_slots = callee_info->incoming_stack_arg_depth / BPF_REG_SIZE;
+ u16 req_mask = out_arg_req_mask(callee_incoming_slots);
+ int err, caller_spi;
+
+ callee->incoming_stack_arg_depth = callee_info->incoming_stack_arg_depth;
+ err = grow_stack_arg_slots(env, callee, callee_info->incoming_stack_arg_depth);
+ if (err)
+ return err;
+
+ if ((caller->out_stack_arg_mask & req_mask) != req_mask) {
+ for (i = 0; i < callee_incoming_slots; i++) {
+ if (caller->out_stack_arg_mask & BIT(i))
+ continue;
+ verbose(env, "stack arg#%d not properly initialized\n",
+ i + MAX_BPF_FUNC_REG_ARGS + 1);
+ return -EINVAL;
+ }
+ }
+
+ for (i = 0; i < callee_incoming_slots; i++) {
+ caller_spi = out_arg_spi(caller, i);
+ callee->stack_arg_regs[i] = caller->stack_arg_regs[caller_spi];
+ }
+ }
+
return 0;
}
@@ -17659,6 +17937,17 @@ static int do_check_insn(struct bpf_verifier_env *env, bool *do_print_state)
return check_store_reg(env, insn, false);
case BPF_ST: {
+ /* Handle stack arg write (store immediate) */
+ if (insn->dst_reg == BPF_REG_PARAMS) {
+ struct bpf_verifier_state *vstate = env->cur_state;
+ struct bpf_func_state *state = vstate->frame[vstate->curframe];
+
+ err = check_stack_arg_access(env, insn, "write");
+ if (err)
+ return err;
+ return check_stack_arg_write(env, state, insn->off, -1);
+ }
+
enum bpf_reg_type dst_reg_type;
err = check_reg_arg(env, insn->dst_reg, SRC_OP);
@@ -18808,7 +19097,7 @@ static int do_check_common(struct bpf_verifier_env *env, int subprog)
goto out;
}
}
- for (i = BPF_REG_1; i <= sub->arg_cnt; i++) {
+ for (i = BPF_REG_1; i <= min_t(u32, sub->arg_cnt, MAX_BPF_FUNC_REG_ARGS); i++) {
arg = &sub->args[i - BPF_REG_1];
reg = &regs[i];
--
2.52.0
* [PATCH bpf-next v6 08/17] bpf: Reject stack arguments in non-JITed programs
From: Yonghong Song @ 2026-04-19 16:33 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
The interpreter does not understand the BPF register r11
(BPF_REG_PARAMS) used for stack argument addressing, so reject
interpreter execution if stack arguments are used in either the
main program or any subprogram.
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
kernel/bpf/core.c | 2 +-
kernel/bpf/fixups.c | 6 ++++++
2 files changed, 7 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index ae10b9ca018d..ec8523e6e4eb 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2599,7 +2599,7 @@ struct bpf_prog *__bpf_prog_select_runtime(struct bpf_verifier_env *env, struct
goto finalize;
if (IS_ENABLED(CONFIG_BPF_JIT_ALWAYS_ON) ||
- bpf_prog_has_kfunc_call(fp))
+ bpf_prog_has_kfunc_call(fp) || fp->aux->stack_arg_depth)
jit_needed = true;
if (!bpf_prog_select_interpreter(fp))
diff --git a/kernel/bpf/fixups.c b/kernel/bpf/fixups.c
index c4e0224ad2f2..3c9a82823913 100644
--- a/kernel/bpf/fixups.c
+++ b/kernel/bpf/fixups.c
@@ -1411,6 +1411,12 @@ int bpf_fixup_call_args(struct bpf_verifier_env *env)
verbose(env, "calling kernel functions are not allowed in non-JITed programs\n");
return -EINVAL;
}
+ for (i = 0; i < env->subprog_cnt; i++) {
+ if (env->subprog_info[i].incoming_stack_arg_depth) {
+ verbose(env, "stack args are not supported in non-JITed programs\n");
+ return -EINVAL;
+ }
+ }
if (env->subprog_cnt > 1 && env->prog->aux->tail_call_reachable) {
/* When JIT fails the progs with bpf2bpf calls and tail_calls
* have to be rejected, since interpreter doesn't support them yet.
--
2.52.0
* [PATCH bpf-next v6 09/17] bpf: Track r11 registers in const_fold and liveness
From: Yonghong Song @ 2026-04-19 16:34 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
Since BPF_REG_PARAMS (r11) lies outside the tracked register
range (its index is >= MAX_BPF_REG), using it as an array index into
the register tracking arrays in const_fold.c and liveness.c could
cause out-of-bounds accesses.
When dst_reg is BPF_REG_PARAMS (BPF_ST/BPF_STX storing to the stack
arg area), no tracked register state changes, so an early return is
sufficient.
When src_reg is BPF_REG_PARAMS (BPF_LDX loading from the stack arg
area), dst_reg is a normal register that gets overwritten. Simply
returning early would leave dst_reg with stale state from a prior
path, which could cause incorrect constant folding or liveness
analysis. Mark dst_reg as unknown/none before returning to prevent
this.
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
kernel/bpf/const_fold.c | 14 ++++++++++++--
kernel/bpf/liveness.c | 14 ++++++++++++--
2 files changed, 24 insertions(+), 4 deletions(-)
diff --git a/kernel/bpf/const_fold.c b/kernel/bpf/const_fold.c
index db73c4740b1e..41e5a406432c 100644
--- a/kernel/bpf/const_fold.c
+++ b/kernel/bpf/const_fold.c
@@ -51,13 +51,23 @@ static void const_reg_xfer(struct bpf_verifier_env *env, struct const_arg_info *
struct bpf_insn *insn, struct bpf_insn *insns, int idx)
{
struct const_arg_info unknown = { .state = CONST_ARG_UNKNOWN, .val = 0 };
- struct const_arg_info *dst = &ci_out[insn->dst_reg];
- struct const_arg_info *src = &ci_out[insn->src_reg];
+ struct const_arg_info *dst, *src;
u8 class = BPF_CLASS(insn->code);
u8 mode = BPF_MODE(insn->code);
u8 opcode = BPF_OP(insn->code) | BPF_SRC(insn->code);
int r;
+ /* Stack arguments using BPF_REG_PARAMS are outside the tracked register set. */
+ if (insn->dst_reg >= MAX_BPF_REG)
+ return;
+ if (insn->src_reg >= MAX_BPF_REG) {
+ if (class == BPF_LDX)
+ ci_out[insn->dst_reg] = unknown;
+ return;
+ }
+
+ dst = &ci_out[insn->dst_reg];
+ src = &ci_out[insn->src_reg];
switch (class) {
case BPF_ALU:
case BPF_ALU64:
diff --git a/kernel/bpf/liveness.c b/kernel/bpf/liveness.c
index 332e6e003f27..87022da94f3a 100644
--- a/kernel/bpf/liveness.c
+++ b/kernel/bpf/liveness.c
@@ -1068,11 +1068,21 @@ static void arg_track_xfer(struct bpf_verifier_env *env, struct bpf_insn *insn,
int depth = instance->depth;
u8 class = BPF_CLASS(insn->code);
u8 code = BPF_OP(insn->code);
- struct arg_track *dst = &at_out[insn->dst_reg];
- struct arg_track *src = &at_out[insn->src_reg];
+ struct arg_track *dst, *src;
struct arg_track none = { .frame = ARG_NONE };
int r;
+ /* Stack arguments using BPF_REG_PARAMS are outside the tracked register set. */
+ if (insn->dst_reg >= MAX_BPF_REG)
+ return;
+ if (insn->src_reg >= MAX_BPF_REG) {
+ if (class == BPF_LDX)
+ at_out[insn->dst_reg] = none;
+ return;
+ }
+
+ dst = &at_out[insn->dst_reg];
+ src = &at_out[insn->src_reg];
if (class == BPF_ALU64 && BPF_SRC(insn->code) == BPF_K) {
if (code == BPF_MOV) {
*dst = none;
--
2.52.0
* [PATCH bpf-next v6 10/17] bpf: Prepare architecture JIT support for stack arguments
From: Yonghong Song @ 2026-04-19 16:34 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
Add bpf_jit_supports_stack_args() as a weak function defaulting to
false. Architectures that implement JIT support for stack arguments
override it to return true.
Reject BPF functions and kfuncs with more than 5 parameters at
verification time if the architecture does not support stack
arguments.
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
include/linux/filter.h | 1 +
kernel/bpf/btf.c | 8 +++++++-
kernel/bpf/core.c | 5 +++++
kernel/bpf/verifier.c | 5 +++++
4 files changed, 18 insertions(+), 1 deletion(-)
diff --git a/include/linux/filter.h b/include/linux/filter.h
index b77d0b06db6e..911205dd670e 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -1163,6 +1163,7 @@ bool bpf_jit_inlines_helper_call(s32 imm);
bool bpf_jit_supports_subprog_tailcalls(void);
bool bpf_jit_supports_percpu_insn(void);
bool bpf_jit_supports_kfunc_call(void);
+bool bpf_jit_supports_stack_args(void);
bool bpf_jit_supports_far_kfunc_call(void);
bool bpf_jit_supports_exceptions(void);
bool bpf_jit_supports_ptr_xchg(void);
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index c5f3aa05d5a3..3497e218c02d 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -7897,8 +7897,14 @@ int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog)
tname, nargs, MAX_BPF_FUNC_REG_ARGS);
return -EINVAL;
}
- if (nargs > MAX_BPF_FUNC_REG_ARGS)
+ if (nargs > MAX_BPF_FUNC_REG_ARGS) {
+ if (!bpf_jit_supports_stack_args()) {
+ bpf_log(log, "JIT does not support function %s() with %d args\n",
+ tname, nargs);
+ return -ENOTSUPP;
+ }
sub->incoming_stack_arg_depth = (nargs - MAX_BPF_FUNC_REG_ARGS) * BPF_REG_SIZE;
+ }
/* check that function is void or returns int, exception cb also requires this */
t = btf_type_by_id(btf, t->type);
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index ec8523e6e4eb..7522b3d7b267 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -3217,6 +3217,11 @@ bool __weak bpf_jit_supports_kfunc_call(void)
return false;
}
+bool __weak bpf_jit_supports_stack_args(void)
+{
+ return false;
+}
+
bool __weak bpf_jit_supports_far_kfunc_call(void)
{
return false;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 78c9322870a5..e952ebcc2f8f 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12363,6 +12363,11 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
MAX_BPF_FUNC_REG_ARGS);
return -EINVAL;
}
+ if (nargs > MAX_BPF_FUNC_REG_ARGS && !bpf_jit_supports_stack_args()) {
+ verbose(env, "JIT does not support kfunc %s() with %d args\n",
+ func_name, nargs);
+ return -ENOTSUPP;
+ }
/* Check that BTF function arguments match actual types that the
* verifier sees.
--
2.52.0
* [PATCH bpf-next v6 11/17] bpf: Enable r11 based insns
From: Yonghong Song @ 2026-04-19 16:34 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
BPF_REG_PARAMS (r11) is used for stack argument accesses, and
the following are the only insns in which r11 may appear:
- BPF_LDX | BPF_MEM | BPF_DW (load incoming stack arg)
- BPF_STX | BPF_MEM | BPF_DW (store register to outgoing stack arg)
- BPF_ST | BPF_MEM | BPF_DW (store immediate to outgoing stack arg)
The LLVM compiler implements emission of these BPF_REG_PARAMS
insns [1].
[1] https://github.com/llvm/llvm-project/pull/189060
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
kernel/bpf/verifier.c | 19 +++++++++++++++----
1 file changed, 15 insertions(+), 4 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index e952ebcc2f8f..2a346e4f28e1 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -18791,13 +18791,24 @@ static int check_and_resolve_insns(struct bpf_verifier_env *env)
return err;
for (i = 0; i < insn_cnt; i++, insn++) {
+ u8 class = BPF_CLASS(insn->code);
+ u8 mode = BPF_MODE(insn->code);
+ u8 size = BPF_SIZE(insn->code);
+
if (insn->dst_reg >= MAX_BPF_REG) {
- verbose(env, "R%d is invalid\n", insn->dst_reg);
- return -EINVAL;
+ if (insn->dst_reg != BPF_REG_PARAMS ||
+ (class != BPF_ST && class != BPF_STX) ||
+ mode != BPF_MEM || size != BPF_DW) {
+ verbose(env, "R%d is invalid\n", insn->dst_reg);
+ return -EINVAL;
+ }
}
if (insn->src_reg >= MAX_BPF_REG) {
- verbose(env, "R%d is invalid\n", insn->src_reg);
- return -EINVAL;
+ if (insn->src_reg != BPF_REG_PARAMS ||
+ insn->code != (BPF_LDX | BPF_MEM | BPF_DW)) {
+ verbose(env, "R%d is invalid\n", insn->src_reg);
+ return -EINVAL;
+ }
}
if (insn[0].code == (BPF_LD | BPF_IMM | BPF_DW)) {
struct bpf_insn_aux_data *aux;
--
2.52.0
* [PATCH bpf-next v6 12/17] bpf: Support stack arguments for kfunc calls
From: Yonghong Song @ 2026-04-19 16:34 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
Extend the stack argument mechanism to kfunc calls, allowing kfuncs
with more than 5 parameters to receive additional arguments via the
r11-based stack arg area.
For kfuncs, the caller is a BPF program and the callee is a kernel
function. The BPF program writes outgoing args at negative r11
offsets, following the same convention as BPF-to-BPF calls:
Outgoing: r11 - 8 (arg6), ..., r11 - N*8 (last arg)
The following is an example:
int foo(int a1, int a2, int a3, int a4, int a5, int a6, int a7) {
int a8 = ..., a9 = ...;
...
kfunc1(a1, a2, a3, a4, a5, a6, a7, a8);
...
kfunc2(a1, a2, a3, a4, a5, a6, a7, a8, a9);
...
}
Caller (foo), generated by llvm
===============================
Incoming (positive offsets):
r11+8: [incoming arg 6]
r11+16: [incoming arg 7]
Outgoing for kfunc1 (negative offsets):
r11-8: [outgoing arg 6]
r11-16: [outgoing arg 7]
r11-24: [outgoing arg 8]
Outgoing for kfunc2 (negative offsets):
r11-8: [outgoing arg 6]
r11-16: [outgoing arg 7]
r11-24: [outgoing arg 8]
r11-32: [outgoing arg 9]
Later JIT will marshal outgoing arguments to the native calling
convention for kfunc1() and kfunc2().
There are two places where meta->release_regno needs to record
regno for releasing the reference later. Similarly,
'cur_aux(env)->arg_prog = regno' records regno for a later fixup.
Since stack arguments don't have a valid register number (regno is
set to -1), these three cases are rejected for now if the argument
is on the stack.
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
kernel/bpf/verifier.c | 127 ++++++++++++++++++++++++++++++++++--------
1 file changed, 103 insertions(+), 24 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 2a346e4f28e1..13f1fd788092 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4502,6 +4502,14 @@ static int check_stack_arg_access(struct bpf_verifier_env *env,
return 0;
}
+/* Check whether a stack arg slot has been properly initialized. */
+static bool is_stack_arg_slot_initialized(struct bpf_func_state *state, int spi)
+{
+ if (spi >= (int)(state->stack_arg_depth / BPF_REG_SIZE))
+ return false;
+ return state->stack_arg_regs[spi].type != NOT_INIT;
+}
+
static int out_arg_idx_from_off(int off)
{
return -off / BPF_REG_SIZE - 1;
@@ -7314,8 +7322,12 @@ static int check_mem_size_reg(struct bpf_verifier_env *env,
}
err = check_helper_mem_access(env, mem_reg, mem_argno, size_reg->umax_value,
access_type, zero_size_allowed, meta);
- if (!err && !is_stack_argno(size_argno))
- err = mark_chain_precision(env, size_argno);
+ if (!err) {
+ if (is_stack_argno(size_argno))
+ size_reg->precise = true;
+ else
+ err = mark_chain_precision(env, size_argno);
+ }
return err;
}
@@ -7358,8 +7370,6 @@ static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg
u32 argno = make_argno(mem_argno);
int err;
- WARN_ON_ONCE(mem_argno > BPF_REG_3);
-
memset(&meta, 0, sizeof(meta));
if (may_be_null) {
@@ -11667,6 +11677,21 @@ bool bpf_is_kfunc_pkt_changing(struct bpf_kfunc_call_arg_meta *meta)
return meta->func_id == special_kfunc_list[KF_bpf_xdp_pull_data];
}
+static struct bpf_reg_state *get_kfunc_arg_reg(struct bpf_verifier_env *env, int argno)
+{
+ struct bpf_func_state *caller;
+ int spi;
+
+ if (argno < MAX_BPF_FUNC_REG_ARGS)
+ return &cur_regs(env)[argno + 1];
+
+ caller = cur_func(env);
+ spi = out_arg_spi(caller, argno - MAX_BPF_FUNC_REG_ARGS);
+ if (spi >= caller->stack_arg_depth / BPF_REG_SIZE)
+ return NULL;
+ return &caller->stack_arg_regs[spi];
+}
+
static enum kfunc_ptr_arg_type
get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
struct bpf_kfunc_call_arg_meta *meta,
@@ -11674,8 +11699,6 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
const char *ref_tname, const struct btf_param *args,
int argno, int nargs, struct bpf_reg_state *reg)
{
- u32 regno = argno + 1;
- struct bpf_reg_state *regs = cur_regs(env);
bool arg_mem_size = false;
if (meta->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx] ||
@@ -11683,10 +11706,14 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
meta->func_id == special_kfunc_list[KF_bpf_session_cookie])
return KF_ARG_PTR_TO_CTX;
- if (argno + 1 < nargs &&
- (is_kfunc_arg_mem_size(meta->btf, &args[argno + 1], &regs[regno + 1]) ||
- is_kfunc_arg_const_mem_size(meta->btf, &args[argno + 1], &regs[regno + 1])))
- arg_mem_size = true;
+ if (argno + 1 < nargs) {
+ struct bpf_reg_state *next_reg = get_kfunc_arg_reg(env, argno + 1);
+
+ if (next_reg &&
+ (is_kfunc_arg_mem_size(meta->btf, &args[argno + 1], next_reg) ||
+ is_kfunc_arg_const_mem_size(meta->btf, &args[argno + 1], next_reg)))
+ arg_mem_size = true;
+ }
/* In this function, we verify the kfunc's BTF as per the argument type,
* leaving the rest of the verification with respect to the register
@@ -12358,9 +12385,9 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
args = (const struct btf_param *)(meta->func_proto + 1);
nargs = btf_type_vlen(meta->func_proto);
- if (nargs > MAX_BPF_FUNC_REG_ARGS) {
+ if (nargs > MAX_BPF_FUNC_ARGS) {
verbose(env, "Function %s has %d > %d args\n", func_name, nargs,
- MAX_BPF_FUNC_REG_ARGS);
+ MAX_BPF_FUNC_ARGS);
return -EINVAL;
}
if (nargs > MAX_BPF_FUNC_REG_ARGS && !bpf_jit_supports_stack_args()) {
@@ -12373,20 +12400,44 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
* verifier sees.
*/
for (i = 0; i < nargs; i++) {
- struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[i + 1];
+ struct bpf_reg_state *regs = cur_regs(env), *reg;
const struct btf_type *t, *ref_t, *resolve_ret;
enum bpf_arg_type arg_type = ARG_DONTCARE;
u32 argno = make_argno(i);
- u32 regno = i + 1, ref_id, type_size;
+ struct bpf_reg_state tmp_reg;
+ int regno = i + 1;
+ u32 ref_id, type_size;
bool is_ret_buf_sz = false;
int kf_arg_type;
+ if (i < MAX_BPF_FUNC_REG_ARGS) {
+ reg = &regs[i + 1];
+ } else {
+ /* Retrieve the reg state from the outgoing stack arg slot. */
+ struct bpf_func_state *caller = cur_func(env);
+ int spi = out_arg_spi(caller, i - MAX_BPF_FUNC_REG_ARGS);
+
+ if (!is_stack_arg_slot_initialized(caller, spi)) {
+ verbose(env, "stack %s not properly initialized\n",
+ reg_arg_name(env, argno));
+ return -EINVAL;
+ }
+
+ tmp_reg = caller->stack_arg_regs[spi];
+ reg = &tmp_reg;
+ regno = -1;
+ }
+
if (is_kfunc_arg_prog_aux(btf, &args[i])) {
/* Reject repeated use bpf_prog_aux */
if (meta->arg_prog) {
verifier_bug(env, "Only 1 prog->aux argument supported per-kfunc");
return -EFAULT;
}
+ if (regno < 0) {
+ verbose(env, "%s prog->aux cannot be a stack argument\n", reg_arg_name(env, argno));
+ return -EINVAL;
+ }
meta->arg_prog = true;
cur_aux(env)->arg_prog = regno;
continue;
@@ -12413,9 +12464,13 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
reg_arg_name(env, argno));
return -EINVAL;
}
- ret = mark_chain_precision(env, regno);
- if (ret < 0)
- return ret;
+ if (regno < 0) {
+ reg->precise = true;
+ } else {
+ ret = mark_chain_precision(env, regno);
+ if (ret < 0)
+ return ret;
+ }
meta->arg_constant.found = true;
meta->arg_constant.value = reg->var_off.value;
} else if (is_kfunc_arg_scalar_with_name(btf, &args[i], "rdonly_buf_size")) {
@@ -12438,9 +12493,13 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
}
meta->r0_size = reg->var_off.value;
- ret = mark_chain_precision(env, regno);
- if (ret)
- return ret;
+ if (regno < 0) {
+ reg->precise = true;
+ } else {
+ ret = mark_chain_precision(env, regno);
+ if (ret)
+ return ret;
+ }
}
continue;
}
@@ -12466,8 +12525,13 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
return -EFAULT;
}
meta->ref_obj_id = reg->ref_obj_id;
- if (is_kfunc_release(meta))
+ if (is_kfunc_release(meta)) {
+ if (regno < 0) {
+ verbose(env, "%s release arg cannot be a stack argument\n", reg_arg_name(env, argno));
+ return -EINVAL;
+ }
meta->release_regno = regno;
+ }
}
ref_t = btf_type_skip_modifiers(btf, t->type, &ref_id);
@@ -12626,6 +12690,10 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
dynptr_arg_type |= DYNPTR_TYPE_FILE;
} else if (meta->func_id == special_kfunc_list[KF_bpf_dynptr_file_discard]) {
dynptr_arg_type |= DYNPTR_TYPE_FILE;
+ if (regno < 0) {
+ verbose(env, "%s release arg cannot be a stack argument\n", reg_arg_name(env, argno));
+ return -EINVAL;
+ }
meta->release_regno = regno;
} else if (meta->func_id == special_kfunc_list[KF_bpf_dynptr_clone] &&
(dynptr_arg_type & MEM_UNINIT)) {
@@ -12780,9 +12848,9 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
break;
case KF_ARG_PTR_TO_MEM_SIZE:
{
- struct bpf_reg_state *buff_reg = &regs[regno];
+ struct bpf_reg_state *buff_reg = reg;
const struct btf_param *buff_arg = &args[i];
- struct bpf_reg_state *size_reg = &regs[regno + 1];
+ struct bpf_reg_state *size_reg = get_kfunc_arg_reg(env, i + 1);
const struct btf_param *size_arg = &args[i + 1];
if (!bpf_register_is_null(buff_reg) || !is_kfunc_arg_nullable(meta->btf, buff_arg)) {
@@ -13686,7 +13754,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
nargs = btf_type_vlen(meta.func_proto);
args = (const struct btf_param *)(meta.func_proto + 1);
- for (i = 0; i < nargs; i++) {
+ for (i = 0; i < min_t(int, nargs, MAX_BPF_FUNC_REG_ARGS); i++) {
u32 regno = i + 1;
t = btf_type_skip_modifiers(desc_btf, args[i].type, NULL);
@@ -13697,6 +13765,17 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
mark_btf_func_reg_size(env, regno, t->size);
}
+ /* Track outgoing stack arg depth for kfuncs with >5 args */
+ if (nargs > MAX_BPF_FUNC_REG_ARGS) {
+ struct bpf_func_state *caller = cur_func(env);
+ struct bpf_subprog_info *caller_info = &env->subprog_info[caller->subprogno];
+ u16 kfunc_stack_arg_depth = (nargs - MAX_BPF_FUNC_REG_ARGS) * BPF_REG_SIZE;
+
+ if (kfunc_stack_arg_depth > caller_info->outgoing_stack_arg_depth)
+ caller_info->outgoing_stack_arg_depth = kfunc_stack_arg_depth;
+ invalidate_outgoing_stack_args(caller);
+ }
+
if (bpf_is_iter_next_kfunc(&meta)) {
err = process_iter_next_call(env, insn_idx, &meta);
if (err)
--
2.52.0
^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH bpf-next v6 13/17] bpf: Reject stack arguments if tail call reachable
2026-04-19 16:33 [PATCH bpf-next v6 00/17] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
` (11 preceding siblings ...)
2026-04-19 16:34 ` [PATCH bpf-next v6 12/17] bpf: Support stack arguments for kfunc calls Yonghong Song
@ 2026-04-19 16:34 ` Yonghong Song
2026-04-19 17:08 ` sashiko-bot
2026-04-19 16:34 ` [PATCH bpf-next v6 14/17] bpf,x86: Implement JIT support for stack arguments Yonghong Song
` (4 subsequent siblings)
17 siblings, 1 reply; 51+ messages in thread
From: Yonghong Song @ 2026-04-19 16:34 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
Tail calls have been deprecated, so reject stack arguments
if a tail call is reachable from the program.
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
kernel/bpf/verifier.c | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 13f1fd788092..af8975049883 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5519,6 +5519,11 @@ struct bpf_subprog_call_depth_info {
int frame; /* # of consecutive static call stack frames on top of stack */
};
+static bool subprog_has_stack_args(const struct bpf_subprog_info *si)
+{
+ return si->incoming_stack_arg_depth || si->outgoing_stack_arg_depth;
+}
+
/* starting from main bpf function walk all instructions of the function
* and recursively walk all callees that given function can call.
* Ignore jump and exit insns.
@@ -5672,14 +5677,23 @@ static int check_max_stack_depth_subprog(struct bpf_verifier_env *env, int idx,
* this info will be utilized by JIT so that we will be preserving the
* tail call counter throughout bpf2bpf calls combined with tailcalls
*/
- if (tail_call_reachable)
+ if (tail_call_reachable) {
for (tmp = idx; tmp >= 0; tmp = dinfo[tmp].caller) {
if (subprog[tmp].is_exception_cb) {
verbose(env, "cannot tail call within exception cb\n");
return -EINVAL;
}
+ if (subprog_has_stack_args(&subprog[tmp])) {
+ verbose(env, "tail_calls are not allowed in programs with stack args\n");
+ return -EINVAL;
+ }
subprog[tmp].tail_call_reachable = true;
}
+ } else if (!idx && subprog[0].has_tail_call && subprog_has_stack_args(&subprog[0])) {
+ verbose(env, "tail_calls are not allowed in programs with stack args\n");
+ return -EINVAL;
+ }
+
if (subprog[0].tail_call_reachable)
env->prog->aux->tail_call_reachable = true;
--
2.52.0
* [PATCH bpf-next v6 14/17] bpf,x86: Implement JIT support for stack arguments
2026-04-19 16:33 [PATCH bpf-next v6 00/17] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
` (12 preceding siblings ...)
2026-04-19 16:34 ` [PATCH bpf-next v6 13/17] bpf: Reject stack arguments if tail call reachable Yonghong Song
@ 2026-04-19 16:34 ` Yonghong Song
2026-04-19 17:25 ` sashiko-bot
2026-04-19 16:34 ` [PATCH bpf-next v6 15/17] selftests/bpf: Add tests for BPF function " Yonghong Song
` (3 subsequent siblings)
17 siblings, 1 reply; 51+ messages in thread
From: Yonghong Song @ 2026-04-19 16:34 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
Add x86_64 JIT support for BPF functions and kfuncs with more than
5 arguments. The extra arguments are passed through a stack area
addressed by register r11 (BPF_REG_PARAMS) in BPF bytecode,
which the JIT translates to native code.
The JIT follows the x86-64 calling convention for both BPF-to-BPF
and kfunc calls:
- Arg 6 is passed in the R9 register
- Args 7+ are passed on the stack
Incoming arg 6 (BPF r11+8) is translated to a MOV from R9 rather
than a memory load. Incoming args 7+ (BPF r11+16, r11+24, ...) map
directly to [rbp + 16], [rbp + 24], ..., matching the x86-64 stack
layout after CALL + PUSH RBP, so no offset adjustment is needed.
The verifier guarantees that neither tail_call_reachable nor
priv_stack is set when stack args exist, so R9 is always
available. When BPF bytecode writes to the arg-6 stack slot
(offset -8), the JIT emits a MOV into R9 instead of a memory store.
Outgoing args 7+ are placed at [rsp] in a pre-allocated area below
callee-saved registers, using:
native_off = outgoing_arg_base - outgoing_rsp - bpf_off - 16
The native x86_64 stack layout with stack arguments:
high address
+-------------------------+
| incoming stack arg N | [rbp + 16 + (N-7)*8] (from caller)
| ... |
| incoming stack arg 7 | [rbp + 16]
+-------------------------+
| return address | [rbp + 8]
| saved rbp | [rbp]
+-------------------------+
| BPF program stack | (round_up(stack_depth, 8) bytes)
+-------------------------+
| callee-saved regs | (r12, rbx, r13, r14, r15 as needed)
+-------------------------+
| outgoing arg M | [rsp + (M-7)*8]
| ... |
| outgoing arg 7 | [rsp]
+-------------------------+ rsp
low address
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
arch/x86/net/bpf_jit_comp.c | 154 ++++++++++++++++++++++++++++++++++--
1 file changed, 148 insertions(+), 6 deletions(-)
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index ea9e707e8abf..06f33fef1e8e 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -390,6 +390,34 @@ static void pop_callee_regs(u8 **pprog, bool *callee_regs_used)
*pprog = prog;
}
+/* add rsp, depth */
+static void emit_add_rsp(u8 **pprog, u16 depth)
+{
+ u8 *prog = *pprog;
+
+ if (!depth)
+ return;
+ if (is_imm8(depth))
+ EMIT4(0x48, 0x83, 0xC4, depth); /* add rsp, imm8 */
+ else
+ EMIT3_off32(0x48, 0x81, 0xC4, depth); /* add rsp, imm32 */
+ *pprog = prog;
+}
+
+/* sub rsp, depth */
+static void emit_sub_rsp(u8 **pprog, u16 depth)
+{
+ u8 *prog = *pprog;
+
+ if (!depth)
+ return;
+ if (is_imm8(depth))
+ EMIT4(0x48, 0x83, 0xEC, depth); /* sub rsp, imm8 */
+ else
+ EMIT3_off32(0x48, 0x81, 0xEC, depth); /* sub rsp, imm32 */
+ *pprog = prog;
+}
+
static void emit_nops(u8 **pprog, int len)
{
u8 *prog = *pprog;
@@ -1664,16 +1692,45 @@ static int do_jit(struct bpf_verifier_env *env, struct bpf_prog *bpf_prog, int *
int i, excnt = 0;
int ilen, proglen = 0;
u8 *ip, *prog = temp;
+ u16 stack_arg_depth, incoming_stack_arg_depth, outgoing_stack_arg_depth;
+ u16 outgoing_rsp;
u32 stack_depth;
+ int callee_saved_size;
+ s32 outgoing_arg_base;
int err;
stack_depth = bpf_prog->aux->stack_depth;
+ stack_arg_depth = bpf_prog->aux->stack_arg_depth;
+ incoming_stack_arg_depth = bpf_prog->aux->incoming_stack_arg_depth;
+ outgoing_stack_arg_depth = stack_arg_depth - incoming_stack_arg_depth;
priv_stack_ptr = bpf_prog->aux->priv_stack_ptr;
if (priv_stack_ptr) {
priv_frame_ptr = priv_stack_ptr + PRIV_STACK_GUARD_SZ + round_up(stack_depth, 8);
stack_depth = 0;
}
+ /*
+ * Follow x86-64 calling convention for both BPF-to-BPF and
+ * kfunc calls:
+ * - Arg 6 is passed in R9 register
+ * - Args 7+ are passed on the stack at [rsp]
+ *
+ * Incoming arg 6 is read from R9 (BPF r11+8 → MOV from R9).
+ * Incoming args 7+ are read from [rbp + 16], [rbp + 24], ...
+ * (BPF r11+16, r11+24, ... map directly with no offset change).
+ *
+ * The verifier guarantees that neither tail_call_reachable nor
+ * priv_stack is set when outgoing stack args exist, so R9 is
+ * always available.
+ *
+ * Stack layout (high to low):
+ * [rbp + 16 + ...] incoming stack args 7+ (from caller)
+ * [rbp + 8] return address
+ * [rbp] saved rbp
+ * [rbp - prog_stack] program stack
+ * [below] callee-saved regs
+ * [below] outgoing args 7+ (= rsp)
+ */
arena_vm_start = bpf_arena_get_kern_vm_start(bpf_prog->aux->arena);
user_vm_start = bpf_arena_get_user_vm_start(bpf_prog->aux->arena);
@@ -1700,6 +1757,42 @@ static int do_jit(struct bpf_verifier_env *env, struct bpf_prog *bpf_prog, int *
push_r12(&prog);
push_callee_regs(&prog, callee_regs_used);
}
+
+ /* Compute callee-saved register area size. */
+ callee_saved_size = 0;
+ if (bpf_prog->aux->exception_boundary || arena_vm_start)
+ callee_saved_size += 8; /* r12 */
+ if (bpf_prog->aux->exception_boundary) {
+ callee_saved_size += 4 * 8; /* rbx, r13, r14, r15 */
+ } else {
+ int j;
+
+ for (j = 0; j < 4; j++)
+ if (callee_regs_used[j])
+ callee_saved_size += 8;
+ }
+ /*
+ * Base offset from rbp for translating BPF outgoing args 7+
+ * to native offsets. BPF uses negative offsets from r11
+ * (r11-8 for arg6, r11-16 for arg7, ...) while x86 uses
+ * positive offsets from rsp ([rsp+0] for arg7, [rsp+8] for
+ * arg8, ...). Arg 6 goes to R9 directly.
+ *
+ * The translation reverses direction:
+ * native_off = outgoing_arg_base - outgoing_rsp - bpf_off - 16
+ *
+ * Note that tail_call_reachable is guaranteed to be false when
+ * stack args exist, so tcc pushes need not be accounted for.
+ */
+ outgoing_arg_base = -(round_up(stack_depth, 8) + callee_saved_size);
+
+ /*
+ * Allocate outgoing stack arg area for args 7+ only.
+ * Arg 6 goes into r9 register, not on stack.
+ */
+ outgoing_rsp = outgoing_stack_arg_depth > 8 ? outgoing_stack_arg_depth - 8 : 0;
+ emit_sub_rsp(&prog, outgoing_rsp);
+
if (arena_vm_start)
emit_mov_imm64(&prog, X86_REG_R12,
arena_vm_start >> 32, (u32) arena_vm_start);
@@ -1721,7 +1814,7 @@ static int do_jit(struct bpf_verifier_env *env, struct bpf_prog *bpf_prog, int *
u8 b2 = 0, b3 = 0;
u8 *start_of_ldx;
s64 jmp_offset;
- s16 insn_off;
+ s32 insn_off;
u8 jmp_cond;
u8 *func;
int nops;
@@ -2134,12 +2227,26 @@ static int do_jit(struct bpf_verifier_env *env, struct bpf_prog *bpf_prog, int *
EMIT1(0xC7);
goto st;
case BPF_ST | BPF_MEM | BPF_DW:
+ if (dst_reg == BPF_REG_PARAMS && insn->off == -8) {
+ /* Arg 6: store immediate in r9 register */
+ emit_mov_imm64(&prog, X86_REG_R9, imm32 >> 31, (u32)imm32);
+ break;
+ }
EMIT2(add_1mod(0x48, dst_reg), 0xC7);
-st: if (is_imm8(insn->off))
- EMIT2(add_1reg(0x40, dst_reg), insn->off);
+st: insn_off = insn->off;
+ if (dst_reg == BPF_REG_PARAMS) {
+ /* Args 7+: reverse BPF negative offsets to
+ * x86 positive rsp offsets.
+ * BPF off=-16 → [rsp+0], off=-24 → [rsp+8], ...
+ */
+ insn_off = outgoing_arg_base - outgoing_rsp - insn_off - 16;
+ dst_reg = BPF_REG_FP;
+ }
+ if (is_imm8(insn_off))
+ EMIT2(add_1reg(0x40, dst_reg), insn_off);
else
- EMIT1_off32(add_1reg(0x80, dst_reg), insn->off);
+ EMIT1_off32(add_1reg(0x80, dst_reg), insn_off);
EMIT(imm32, bpf_size_to_x86_bytes(BPF_SIZE(insn->code)));
break;
@@ -2149,7 +2256,17 @@ st: if (is_imm8(insn->off))
case BPF_STX | BPF_MEM | BPF_H:
case BPF_STX | BPF_MEM | BPF_W:
case BPF_STX | BPF_MEM | BPF_DW:
- emit_stx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn->off);
+ if (dst_reg == BPF_REG_PARAMS && insn->off == -8) {
+ /* Arg 6: store register value in r9 */
+ EMIT_mov(X86_REG_R9, src_reg);
+ break;
+ }
+ insn_off = insn->off;
+ if (dst_reg == BPF_REG_PARAMS) {
+ insn_off = outgoing_arg_base - outgoing_rsp - insn_off - 16;
+ dst_reg = BPF_REG_FP;
+ }
+ emit_stx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn_off);
break;
case BPF_ST | BPF_PROBE_MEM32 | BPF_B:
@@ -2248,6 +2365,19 @@ st: if (is_imm8(insn->off))
case BPF_LDX | BPF_PROBE_MEMSX | BPF_H:
case BPF_LDX | BPF_PROBE_MEMSX | BPF_W:
insn_off = insn->off;
+ if (src_reg == BPF_REG_PARAMS) {
+ if (insn_off == 8) {
+ /* Incoming arg 6: read from r9 */
+ EMIT_mov(dst_reg, X86_REG_R9);
+ break;
+ }
+ src_reg = BPF_REG_FP;
+ /*
+ * Incoming args 7+: native_off == bpf_off
+ * (r11+16 → [rbp+16], r11+24 → [rbp+24], ...)
+ * No offset adjustment needed.
+ */
+ }
if (BPF_MODE(insn->code) == BPF_PROBE_MEM ||
BPF_MODE(insn->code) == BPF_PROBE_MEMSX) {
@@ -2736,6 +2866,8 @@ st: if (is_imm8(insn->off))
if (emit_spectre_bhb_barrier(&prog, ip, bpf_prog))
return -EINVAL;
}
+ /* Deallocate outgoing args 7+ area. */
+ emit_add_rsp(&prog, outgoing_rsp);
if (bpf_prog->aux->exception_boundary) {
pop_callee_regs(&prog, all_callee_regs_used);
pop_r12(&prog);
@@ -3743,7 +3875,12 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_verifier_env *env, struct bpf_pr
prog->aux->jit_data = jit_data;
}
priv_stack_ptr = prog->aux->priv_stack_ptr;
- if (!priv_stack_ptr && prog->aux->jits_use_priv_stack) {
+ /*
+ * x86-64 uses R9 for both private stack frame pointer and arg 6,
+ * so disable private stack when stack args are present.
+ */
+ if (!priv_stack_ptr && prog->aux->jits_use_priv_stack &&
+ prog->aux->stack_arg_depth == 0) {
/* Allocate actual private stack size with verifier-calculated
* stack size plus two memory guards to protect overflow and
* underflow.
@@ -3910,6 +4047,11 @@ bool bpf_jit_supports_kfunc_call(void)
return true;
}
+bool bpf_jit_supports_stack_args(void)
+{
+ return true;
+}
+
void *bpf_arch_text_copy(void *dst, void *src, size_t len)
{
if (text_poke_copy(dst, src, len) == NULL)
--
2.52.0
* [PATCH bpf-next v6 15/17] selftests/bpf: Add tests for BPF function stack arguments
2026-04-19 16:33 [PATCH bpf-next v6 00/17] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
` (13 preceding siblings ...)
2026-04-19 16:34 ` [PATCH bpf-next v6 14/17] bpf,x86: Implement JIT support for stack arguments Yonghong Song
@ 2026-04-19 16:34 ` Yonghong Song
2026-04-19 17:15 ` sashiko-bot
2026-04-19 16:34 ` [PATCH bpf-next v6 16/17] selftests/bpf: Add tests for stack argument validation Yonghong Song
` (2 subsequent siblings)
17 siblings, 1 reply; 51+ messages in thread
From: Yonghong Song @ 2026-04-19 16:34 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
Add selftests covering stack argument passing for both BPF-to-BPF
subprog calls and kfunc calls with more than 5 arguments. All tests
are guarded by __BPF_FEATURE_STACK_ARGUMENT and __TARGET_ARCH_x86.
BPF-to-BPF subprog call tests (stack_arg.c):
- Scalar stack args
- Pointer stack args
- Mixed pointer/scalar stack args
- Nested calls
- Dynptr stack arg
- Two callees with different stack arg counts
- Async callback
Kfunc call tests (stack_arg_kfunc.c, with bpf_testmod kfuncs):
- Scalar stack args
- Pointer stack args
- Mixed pointer/scalar stack args
- Dynptr stack arg
- Memory buffer + size pair
- Iterator
- Const string pointer
- Timer pointer
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
.../selftests/bpf/prog_tests/stack_arg.c | 139 ++++++++++
tools/testing/selftests/bpf/progs/stack_arg.c | 254 ++++++++++++++++++
.../selftests/bpf/progs/stack_arg_kfunc.c | 164 +++++++++++
.../selftests/bpf/test_kmods/bpf_testmod.c | 66 +++++
.../bpf/test_kmods/bpf_testmod_kfunc.h | 20 +-
5 files changed, 642 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/stack_arg.c
create mode 100644 tools/testing/selftests/bpf/progs/stack_arg.c
create mode 100644 tools/testing/selftests/bpf/progs/stack_arg_kfunc.c
diff --git a/tools/testing/selftests/bpf/prog_tests/stack_arg.c b/tools/testing/selftests/bpf/prog_tests/stack_arg.c
new file mode 100644
index 000000000000..d61bac33f809
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/stack_arg.c
@@ -0,0 +1,139 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+
+#include <test_progs.h>
+#include <network_helpers.h>
+#include "stack_arg.skel.h"
+#include "stack_arg_kfunc.skel.h"
+
+static void run_subtest(struct bpf_program *prog, int expected)
+{
+ int err, prog_fd;
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 1,
+ );
+
+ prog_fd = bpf_program__fd(prog);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+ ASSERT_EQ(topts.retval, expected, "retval");
+}
+
+static void test_global_many(void)
+{
+ struct stack_arg *skel;
+
+ skel = stack_arg__open();
+ if (!ASSERT_OK_PTR(skel, "open"))
+ return;
+
+ if (!skel->rodata->has_stack_arg) {
+ test__skip();
+ goto out;
+ }
+
+ if (!ASSERT_OK(stack_arg__load(skel), "load"))
+ goto out;
+
+ run_subtest(skel->progs.test_global_many_args, 36);
+
+out:
+ stack_arg__destroy(skel);
+}
+
+static void test_async_cb_many(void)
+{
+ struct stack_arg *skel;
+
+ skel = stack_arg__open();
+ if (!ASSERT_OK_PTR(skel, "open"))
+ return;
+
+ if (!skel->rodata->has_stack_arg) {
+ test__skip();
+ goto out;
+ }
+
+ if (!ASSERT_OK(stack_arg__load(skel), "load"))
+ goto out;
+
+ run_subtest(skel->progs.test_async_cb_many_args, 0);
+
+ /* Wait for the timer callback to fire and verify the result.
+ * 10+20+30+40+50+60+70+80 = 360
+ */
+ usleep(50);
+ ASSERT_EQ(skel->bss->timer_result, 360, "timer_result");
+
+out:
+ stack_arg__destroy(skel);
+}
+
+static void test_bpf2bpf(void)
+{
+ struct stack_arg *skel;
+
+ skel = stack_arg__open();
+ if (!ASSERT_OK_PTR(skel, "open"))
+ return;
+
+ if (!skel->rodata->has_stack_arg) {
+ test__skip();
+ goto out;
+ }
+
+ if (!ASSERT_OK(stack_arg__load(skel), "load"))
+ goto out;
+
+ run_subtest(skel->progs.test_bpf2bpf_ptr_stack_arg, 45);
+ run_subtest(skel->progs.test_bpf2bpf_mix_stack_args, 51);
+ run_subtest(skel->progs.test_bpf2bpf_nesting_stack_arg, 50);
+ run_subtest(skel->progs.test_bpf2bpf_dynptr_stack_arg, 69);
+ run_subtest(skel->progs.test_two_callees, 91);
+
+out:
+ stack_arg__destroy(skel);
+}
+
+static void test_kfunc(void)
+{
+ struct stack_arg_kfunc *skel;
+
+ skel = stack_arg_kfunc__open();
+ if (!ASSERT_OK_PTR(skel, "open"))
+ return;
+
+ if (!skel->rodata->has_stack_arg) {
+ test__skip();
+ goto out;
+ }
+
+ if (!ASSERT_OK(stack_arg_kfunc__load(skel), "load"))
+ goto out;
+
+ run_subtest(skel->progs.test_stack_arg_scalar, 36);
+ run_subtest(skel->progs.test_stack_arg_ptr, 45);
+ run_subtest(skel->progs.test_stack_arg_mix, 51);
+ run_subtest(skel->progs.test_stack_arg_dynptr, 69);
+ run_subtest(skel->progs.test_stack_arg_mem, 151);
+ run_subtest(skel->progs.test_stack_arg_iter, 115);
+ run_subtest(skel->progs.test_stack_arg_const_str, 15);
+ run_subtest(skel->progs.test_stack_arg_timer, 15);
+
+out:
+ stack_arg_kfunc__destroy(skel);
+}
+
+void test_stack_arg(void)
+{
+ if (test__start_subtest("global_many_args"))
+ test_global_many();
+ if (test__start_subtest("async_cb_many_args"))
+ test_async_cb_many();
+ if (test__start_subtest("bpf2bpf"))
+ test_bpf2bpf();
+ if (test__start_subtest("kfunc"))
+ test_kfunc();
+}
diff --git a/tools/testing/selftests/bpf/progs/stack_arg.c b/tools/testing/selftests/bpf/progs/stack_arg.c
new file mode 100644
index 000000000000..8c198ee952ff
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/stack_arg.c
@@ -0,0 +1,254 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+
+#include <vmlinux.h>
+#include <stdbool.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_kfuncs.h"
+
+#define CLOCK_MONOTONIC 1
+
+long a, b, c, d, e, f, g, i;
+
+struct timer_elem {
+ struct bpf_timer timer;
+};
+
+struct {
+ __uint(type, BPF_MAP_TYPE_ARRAY);
+ __uint(max_entries, 1);
+ __type(key, int);
+ __type(value, struct timer_elem);
+} timer_map SEC(".maps");
+
+int timer_result;
+
+#if defined(__TARGET_ARCH_x86) && defined(__BPF_FEATURE_STACK_ARGUMENT)
+
+const volatile bool has_stack_arg = true;
+
+__noinline static int static_func_many_args(int a, int b, int c, int d,
+ int e, int f, int g, int h)
+{
+ return a + b + c + d + e + f + g + h;
+}
+
+__noinline int global_calls_many_args(int a, int b, int c)
+{
+ return static_func_many_args(a, b, c, 4, 5, 6, 7, 8);
+}
+
+SEC("tc")
+int test_global_many_args(void)
+{
+ return global_calls_many_args(1, 2, 3);
+}
+
+struct test_data {
+ long x;
+ long y;
+};
+
+/* 1 + 2 + 3 + 4 + 5 + 10 + 20 = 45 */
+__noinline static long func_with_ptr_stack_arg(long a, long b, long c, long d,
+ long e, struct test_data *p)
+{
+ return a + b + c + d + e + p->x + p->y;
+}
+
+__noinline long global_ptr_stack_arg(long a, long b, long c, long d, long e)
+{
+ struct test_data data = { .x = 10, .y = 20 };
+
+ return func_with_ptr_stack_arg(a, b, c, d, e, &data);
+}
+
+SEC("tc")
+int test_bpf2bpf_ptr_stack_arg(void)
+{
+ return global_ptr_stack_arg(1, 2, 3, 4, 5);
+}
+
+/* 1 + 2 + 3 + 4 + 5 + 10 + 6 + 20 = 51 */
+__noinline static long func_with_mix_stack_args(long a, long b, long c, long d,
+ long e, struct test_data *p,
+ long f, struct test_data *q)
+{
+ return a + b + c + d + e + p->x + f + q->y;
+}
+
+__noinline long global_mix_stack_args(long a, long b, long c, long d, long e)
+{
+ struct test_data p = { .x = 10 };
+ struct test_data q = { .y = 20 };
+
+ return func_with_mix_stack_args(a, b, c, d, e, &p, e + 1, &q);
+}
+
+SEC("tc")
+int test_bpf2bpf_mix_stack_args(void)
+{
+ return global_mix_stack_args(1, 2, 3, 4, 5);
+}
+
+/*
+ * Nesting test: func_outer calls func_inner, both with struct pointer
+ * as stack arg.
+ *
+ * func_inner: (a+1) + (b+1) + (c+1) + (d+1) + (e+1) + p->x + p->y
+ * = 2 + 3 + 4 + 5 + 6 + 10 + 20 = 50
+ */
+__noinline static long func_inner_ptr(long a, long b, long c, long d,
+ long e, struct test_data *p)
+{
+ return a + b + c + d + e + p->x + p->y;
+}
+
+__noinline static long func_outer_ptr(long a, long b, long c, long d,
+ long e, struct test_data *p)
+{
+ return func_inner_ptr(a + 1, b + 1, c + 1, d + 1, e + 1, p);
+}
+
+__noinline long global_nesting_ptr(long a, long b, long c, long d, long e)
+{
+ struct test_data data = { .x = 10, .y = 20 };
+
+ return func_outer_ptr(a, b, c, d, e, &data);
+}
+
+SEC("tc")
+int test_bpf2bpf_nesting_stack_arg(void)
+{
+ return global_nesting_ptr(1, 2, 3, 4, 5);
+}
+
+/* 1 + 2 + 3 + 4 + 5 + sizeof(pkt_v4) = 15 + 54 = 69 */
+__noinline static long func_with_dynptr(long a, long b, long c, long d,
+ long e, struct bpf_dynptr *ptr)
+{
+ return a + b + c + d + e + bpf_dynptr_size(ptr);
+}
+
+__noinline long global_dynptr_stack_arg(void *ctx __arg_ctx, long a, long b,
+ long c, long d)
+{
+ struct bpf_dynptr ptr;
+
+ bpf_dynptr_from_skb(ctx, 0, &ptr);
+ return func_with_dynptr(a, b, c, d, d + 1, &ptr);
+}
+
+SEC("tc")
+int test_bpf2bpf_dynptr_stack_arg(struct __sk_buff *skb)
+{
+ return global_dynptr_stack_arg(skb, 1, 2, 3, 4);
+}
+
+/* foo1: a+b+c+d+e+f+g+h */
+__noinline static int foo1(int a, int b, int c, int d,
+ int e, int f, int g, int h)
+{
+ return a + b + c + d + e + f + g + h;
+}
+
+/* foo2: a+b+c+d+e+f+g+h+i+j */
+__noinline static int foo2(int a, int b, int c, int d, int e,
+ int f, int g, int h, int i, int j)
+{
+ return a + b + c + d + e + f + g + h + i + j;
+}
+
+/* bar calls foo1 (3 stack args) and foo2 (5 stack args).
+ * The outgoing stack arg area is sized for foo2 (the larger callee).
+ * Stores for foo1 are a subset of the area used by foo2.
+ * Result: foo1(1,2,3,4,5,6,7,8) + foo2(1,2,3,4,5,6,7,8,9,10) = 36 + 55 = 91
+ *
+ * Pass a-e through so the compiler can't constant-fold the stack args away.
+ */
+__noinline int global_two_callees(int a, int b, int c, int d, int e)
+{
+ int ret;
+
+ ret = foo1(a, b, c, d, e, a + 5, a + 6, a + 7);
+ ret += foo2(a, b, c, d, e, a + 5, a + 6, a + 7, a + 8, a + 9);
+ return ret;
+}
+
+SEC("tc")
+int test_two_callees(void)
+{
+ return global_two_callees(1, 2, 3, 4, 5);
+}
+
+static int timer_cb_many_args(void *map, int *key, struct bpf_timer *timer)
+{
+ timer_result = static_func_many_args(10, 20, 30, 40, 50, 60, 70, 80);
+ return 0;
+}
+
+SEC("tc")
+int test_async_cb_many_args(void)
+{
+ struct timer_elem *elem;
+ int key = 0;
+
+ elem = bpf_map_lookup_elem(&timer_map, &key);
+ if (!elem)
+ return -1;
+
+ bpf_timer_init(&elem->timer, &timer_map, CLOCK_MONOTONIC);
+ bpf_timer_set_callback(&elem->timer, timer_cb_many_args);
+ bpf_timer_start(&elem->timer, 1, 0);
+ return 0;
+}
+
+#else
+
+const volatile bool has_stack_arg = false;
+
+SEC("tc")
+int test_global_many_args(void)
+{
+ return 0;
+}
+
+SEC("tc")
+int test_bpf2bpf_ptr_stack_arg(void)
+{
+ return 0;
+}
+
+SEC("tc")
+int test_bpf2bpf_mix_stack_args(void)
+{
+ return 0;
+}
+
+SEC("tc")
+int test_bpf2bpf_nesting_stack_arg(void)
+{
+ return 0;
+}
+
+SEC("tc")
+int test_bpf2bpf_dynptr_stack_arg(struct __sk_buff *skb)
+{
+ return 0;
+}
+
+SEC("tc")
+int test_two_callees(void)
+{
+ return 0;
+}
+
+SEC("tc")
+int test_async_cb_many_args(void)
+{
+ return 0;
+}
+
+#endif
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/stack_arg_kfunc.c b/tools/testing/selftests/bpf/progs/stack_arg_kfunc.c
new file mode 100644
index 000000000000..6cc404d57863
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/stack_arg_kfunc.c
@@ -0,0 +1,164 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_kfuncs.h"
+#include "../test_kmods/bpf_testmod_kfunc.h"
+
+#if defined(__TARGET_ARCH_x86) && defined(__BPF_FEATURE_STACK_ARGUMENT)
+
+const volatile bool has_stack_arg = true;
+
+struct bpf_iter_testmod_seq {
+ u64 :64;
+ u64 :64;
+};
+
+extern int bpf_iter_testmod_seq_new(struct bpf_iter_testmod_seq *it, s64 value, int cnt) __ksym;
+extern int *bpf_iter_testmod_seq_next(struct bpf_iter_testmod_seq *it) __ksym;
+extern void bpf_iter_testmod_seq_destroy(struct bpf_iter_testmod_seq *it) __ksym;
+
+struct timer_map_value {
+ struct bpf_timer timer;
+};
+
+struct {
+ __uint(type, BPF_MAP_TYPE_ARRAY);
+ __uint(max_entries, 1);
+ __type(key, int);
+ __type(value, struct timer_map_value);
+} kfunc_timer_map SEC(".maps");
+
+SEC("tc")
+int test_stack_arg_scalar(struct __sk_buff *skb)
+{
+ return bpf_kfunc_call_stack_arg(1, 2, 3, 4, 5, 6, 7, 8);
+}
+
+SEC("tc")
+int test_stack_arg_ptr(struct __sk_buff *skb)
+{
+ struct prog_test_pass1 p = { .x0 = 10, .x1 = 20 };
+
+ return bpf_kfunc_call_stack_arg_ptr(1, 2, 3, 4, 5, &p);
+}
+
+SEC("tc")
+int test_stack_arg_mix(struct __sk_buff *skb)
+{
+ struct prog_test_pass1 p = { .x0 = 10 };
+ struct prog_test_pass1 q = { .x1 = 20 };
+
+ return bpf_kfunc_call_stack_arg_mix(1, 2, 3, 4, 5, &p, 6, &q);
+}
+
+/* 1 + 2 + 3 + 4 + 5 + sizeof(pkt_v4) = 15 + 54 = 69 */
+SEC("tc")
+int test_stack_arg_dynptr(struct __sk_buff *skb)
+{
+ struct bpf_dynptr ptr;
+
+ bpf_dynptr_from_skb(skb, 0, &ptr);
+ return bpf_kfunc_call_stack_arg_dynptr(1, 2, 3, 4, 5, &ptr);
+}
+
+/* 1 + 2 + 3 + 4 + 5 + (1 + 2 + ... + 16) = 15 + 136 = 151 */
+SEC("tc")
+int test_stack_arg_mem(struct __sk_buff *skb)
+{
+ char buf[16] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16};
+
+ return bpf_kfunc_call_stack_arg_mem(1, 2, 3, 4, 5, buf, sizeof(buf));
+}
+
+/* 1 + 2 + 3 + 4 + 5 + 100 = 115 */
+SEC("tc")
+int test_stack_arg_iter(struct __sk_buff *skb)
+{
+ struct bpf_iter_testmod_seq it;
+ u64 ret;
+
+ bpf_iter_testmod_seq_new(&it, 100, 10);
+ ret = bpf_kfunc_call_stack_arg_iter(1, 2, 3, 4, 5, &it);
+ bpf_iter_testmod_seq_destroy(&it);
+ return ret;
+}
+
+const char cstr[] = "hello";
+
+/* 1 + 2 + 3 + 4 + 5 = 15 */
+SEC("tc")
+int test_stack_arg_const_str(struct __sk_buff *skb)
+{
+ return bpf_kfunc_call_stack_arg_const_str(1, 2, 3, 4, 5, cstr);
+}
+
+/* 1 + 2 + 3 + 4 + 5 = 15 */
+SEC("tc")
+int test_stack_arg_timer(struct __sk_buff *skb)
+{
+ struct timer_map_value *val;
+ int key = 0;
+
+ val = bpf_map_lookup_elem(&kfunc_timer_map, &key);
+ if (!val)
+ return 0;
+ return bpf_kfunc_call_stack_arg_timer(1, 2, 3, 4, 5, &val->timer);
+}
+
+#else
+
+const volatile bool has_stack_arg = false;
+
+SEC("tc")
+int test_stack_arg_scalar(struct __sk_buff *skb)
+{
+ return 0;
+}
+
+SEC("tc")
+int test_stack_arg_ptr(struct __sk_buff *skb)
+{
+ return 0;
+}
+
+SEC("tc")
+int test_stack_arg_mix(struct __sk_buff *skb)
+{
+ return 0;
+}
+
+SEC("tc")
+int test_stack_arg_dynptr(struct __sk_buff *skb)
+{
+ return 0;
+}
+
+SEC("tc")
+int test_stack_arg_mem(struct __sk_buff *skb)
+{
+ return 0;
+}
+
+SEC("tc")
+int test_stack_arg_iter(struct __sk_buff *skb)
+{
+ return 0;
+}
+
+SEC("tc")
+int test_stack_arg_const_str(struct __sk_buff *skb)
+{
+ return 0;
+}
+
+SEC("tc")
+int test_stack_arg_timer(struct __sk_buff *skb)
+{
+ return 0;
+}
+
+#endif
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
index d876314a4d67..ea82a6d32d9f 100644
--- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
@@ -825,6 +825,63 @@ __bpf_kfunc int bpf_kfunc_call_test5(u8 a, u16 b, u32 c)
return 0;
}
+__bpf_kfunc u64 bpf_kfunc_call_stack_arg(u64 a, u64 b, u64 c, u64 d,
+ u64 e, u64 f, u64 g, u64 h)
+{
+ return a + b + c + d + e + f + g + h;
+}
+
+__bpf_kfunc u64 bpf_kfunc_call_stack_arg_ptr(u64 a, u64 b, u64 c, u64 d, u64 e,
+ struct prog_test_pass1 *p)
+{
+ return a + b + c + d + e + p->x0 + p->x1;
+}
+
+__bpf_kfunc u64 bpf_kfunc_call_stack_arg_mix(u64 a, u64 b, u64 c, u64 d, u64 e,
+ struct prog_test_pass1 *p, u64 f,
+ struct prog_test_pass1 *q)
+{
+ return a + b + c + d + e + p->x0 + f + q->x1;
+}
+
+__bpf_kfunc u64 bpf_kfunc_call_stack_arg_dynptr(u64 a, u64 b, u64 c, u64 d, u64 e,
+ struct bpf_dynptr *ptr)
+{
+ const struct bpf_dynptr_kern *kern_ptr = (void *)ptr;
+
+ return a + b + c + d + e + (kern_ptr->size & 0xFFFFFF);
+}
+
+__bpf_kfunc u64 bpf_kfunc_call_stack_arg_mem(u64 a, u64 b, u64 c, u64 d, u64 e,
+ void *mem, int mem__sz)
+{
+ const unsigned char *p = mem;
+ u64 sum = a + b + c + d + e;
+ int i;
+
+ for (i = 0; i < mem__sz; i++)
+ sum += p[i];
+ return sum;
+}
+
+__bpf_kfunc u64 bpf_kfunc_call_stack_arg_iter(u64 a, u64 b, u64 c, u64 d, u64 e,
+ struct bpf_iter_testmod_seq *it__iter)
+{
+ return a + b + c + d + e + it__iter->value;
+}
+
+__bpf_kfunc u64 bpf_kfunc_call_stack_arg_const_str(u64 a, u64 b, u64 c, u64 d, u64 e,
+ const char *str__str)
+{
+ return a + b + c + d + e;
+}
+
+__bpf_kfunc u64 bpf_kfunc_call_stack_arg_timer(u64 a, u64 b, u64 c, u64 d, u64 e,
+ struct bpf_timer *timer)
+{
+ return a + b + c + d + e;
+}
+
static struct prog_test_ref_kfunc prog_test_struct = {
.a = 42,
.b = 108,
@@ -1288,6 +1345,15 @@ BTF_ID_FLAGS(func, bpf_kfunc_call_test2)
BTF_ID_FLAGS(func, bpf_kfunc_call_test3)
BTF_ID_FLAGS(func, bpf_kfunc_call_test4)
BTF_ID_FLAGS(func, bpf_kfunc_call_test5)
+BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg)
+BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_ptr)
+BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_mix)
+BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_dynptr)
+BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_mem)
+BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_iter)
+BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_const_str)
+BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_timer)
+BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_pass1)
BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail1)
BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail2)
BTF_ID_FLAGS(func, bpf_kfunc_call_test_acquire, KF_ACQUIRE | KF_RET_NULL)
diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
index aa0b8d41e71b..2c1cb118f886 100644
--- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
@@ -26,6 +26,8 @@ struct prog_test_ref_kfunc {
};
#endif
+struct bpf_iter_testmod_seq;
+
struct prog_test_pass1 {
int x0;
struct {
@@ -111,7 +113,23 @@ int bpf_kfunc_call_test2(struct sock *sk, __u32 a, __u32 b) __ksym;
struct sock *bpf_kfunc_call_test3(struct sock *sk) __ksym;
long bpf_kfunc_call_test4(signed char a, short b, int c, long d) __ksym;
int bpf_kfunc_call_test5(__u8 a, __u16 b, __u32 c) __ksym;
-
+__u64 bpf_kfunc_call_stack_arg(__u64 a, __u64 b, __u64 c, __u64 d,
+ __u64 e, __u64 f, __u64 g, __u64 h) __ksym;
+__u64 bpf_kfunc_call_stack_arg_ptr(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e,
+ struct prog_test_pass1 *p) __ksym;
+__u64 bpf_kfunc_call_stack_arg_mix(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e,
+ struct prog_test_pass1 *p, __u64 f,
+ struct prog_test_pass1 *q) __ksym;
+__u64 bpf_kfunc_call_stack_arg_dynptr(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e,
+ struct bpf_dynptr *ptr) __ksym;
+__u64 bpf_kfunc_call_stack_arg_mem(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e,
+ void *mem, int mem__sz) __ksym;
+__u64 bpf_kfunc_call_stack_arg_iter(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e,
+ struct bpf_iter_testmod_seq *it__iter) __ksym;
+__u64 bpf_kfunc_call_stack_arg_const_str(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e,
+ const char *str__str) __ksym;
+__u64 bpf_kfunc_call_stack_arg_timer(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e,
+ struct bpf_timer *timer) __ksym;
void bpf_kfunc_call_test_pass_ctx(struct __sk_buff *skb) __ksym;
void bpf_kfunc_call_test_pass1(struct prog_test_pass1 *p) __ksym;
void bpf_kfunc_call_test_pass2(struct prog_test_pass2 *p) __ksym;
--
2.52.0
^ permalink raw reply related [flat|nested] 51+ messages in thread
* [PATCH bpf-next v6 16/17] selftests/bpf: Add tests for stack argument validation
2026-04-19 16:33 [PATCH bpf-next v6 00/17] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
` (14 preceding siblings ...)
2026-04-19 16:34 ` [PATCH bpf-next v6 15/17] selftests/bpf: Add tests for BPF function " Yonghong Song
@ 2026-04-19 16:34 ` Yonghong Song
2026-04-19 16:34 ` [PATCH bpf-next v6 17/17] selftests/bpf: Add verifier " Yonghong Song
2026-04-20 15:41 ` [PATCH bpf-next v6 00/17] bpf: Support stack arguments for BPF functions and kfuncs Puranjay Mohan
17 siblings, 0 replies; 51+ messages in thread
From: Yonghong Song @ 2026-04-19 16:34 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
Add negative tests that exercise kfunc argument checking (rejecting a
kfunc call with a >8-byte struct passed as a stack argument) and the
verifier (rejecting invalid uses of r11 for stack arguments).
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
.../selftests/bpf/prog_tests/stack_arg_fail.c | 10 ++
.../selftests/bpf/progs/stack_arg_fail.c | 124 ++++++++++++++++++
.../selftests/bpf/test_kmods/bpf_testmod.c | 7 +
.../bpf/test_kmods/bpf_testmod_kfunc.h | 8 ++
4 files changed, 149 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/stack_arg_fail.c
create mode 100644 tools/testing/selftests/bpf/progs/stack_arg_fail.c
diff --git a/tools/testing/selftests/bpf/prog_tests/stack_arg_fail.c b/tools/testing/selftests/bpf/prog_tests/stack_arg_fail.c
new file mode 100644
index 000000000000..090af1330953
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/stack_arg_fail.c
@@ -0,0 +1,10 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+
+#include <test_progs.h>
+#include "stack_arg_fail.skel.h"
+
+void test_stack_arg_fail(void)
+{
+ RUN_TESTS(stack_arg_fail);
+}
diff --git a/tools/testing/selftests/bpf/progs/stack_arg_fail.c b/tools/testing/selftests/bpf/progs/stack_arg_fail.c
new file mode 100644
index 000000000000..219eafed7f93
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/stack_arg_fail.c
@@ -0,0 +1,124 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include "../test_kmods/bpf_testmod_kfunc.h"
+#include "bpf_misc.h"
+
+#if defined(__BPF_FEATURE_STACK_ARGUMENT)
+
+SEC("tc")
+__failure __msg("Unrecognized *(R11-8) type STRUCT")
+int test_stack_arg_big(struct __sk_buff *skb)
+{
+ struct prog_test_big_arg s = { .a = 1, .b = 2 };
+
+ return bpf_kfunc_call_stack_arg_big(1, 2, 3, 4, 5, s);
+}
+
+SEC("socket")
+__description("r11 in ALU instruction")
+__failure __msg("R11 is invalid")
+__naked void r11_alu_reject(void)
+{
+ asm volatile (
+ "r11 += 1;"
+ "r0 = 0;"
+ "exit;"
+ ::: __clobber_all);
+}
+
+SEC("socket")
+__description("r11 store with non-DW size")
+__failure __msg("R11 is invalid")
+__naked void r11_store_non_dw(void)
+{
+ asm volatile (
+ "*(u32 *)(r11 - 8) = r1;"
+ "r0 = 0;"
+ "exit;"
+ ::: __clobber_all);
+}
+
+SEC("socket")
+__description("r11 store with unaligned offset")
+__failure __msg("stack arg write offset -4 not aligned")
+__naked void r11_store_unaligned(void)
+{
+ asm volatile (
+ "*(u64 *)(r11 - 4) = r1;"
+ "r0 = 0;"
+ "exit;"
+ ::: __clobber_all);
+}
+
+SEC("socket")
+__description("r11 store with positive offset")
+__failure __msg("stack arg write must use negative offset")
+__naked void r11_store_positive_off(void)
+{
+ asm volatile (
+ "*(u64 *)(r11 + 8) = r1;"
+ "r0 = 0;"
+ "exit;"
+ ::: __clobber_all);
+}
+
+SEC("socket")
+__description("r11 load with negative offset")
+__failure __msg("stack arg read must use positive offset")
+__naked void r11_load_negative_off(void)
+{
+ asm volatile (
+ "r0 = *(u64 *)(r11 - 8);"
+ "exit;"
+ ::: __clobber_all);
+}
+
+SEC("socket")
+__description("r11 load with non-DW size")
+__failure __msg("R11 is invalid")
+__naked void r11_load_non_dw(void)
+{
+ asm volatile (
+ "r0 = *(u32 *)(r11 + 8);"
+ "exit;"
+ ::: __clobber_all);
+}
+
+SEC("socket")
+__description("r11 store with zero offset")
+__failure __msg("stack arg write offset 0 not aligned")
+__naked void r11_store_zero_off(void)
+{
+ asm volatile (
+ "*(u64 *)(r11 + 0) = r1;"
+ "r0 = 0;"
+ "exit;"
+ ::: __clobber_all);
+}
+
+#else
+
+SEC("tc")
+__success
+int test_stack_arg_big(struct __sk_buff *skb)
+{
+ return 0;
+}
+
+SEC("socket")
+__description("stack arg dummy (no stack arg support)")
+__success
+__naked void stack_arg_dummy(void)
+{
+ asm volatile (
+ "r0 = 0;"
+ "exit;"
+ ::: __clobber_all);
+}
+
+#endif
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
index ea82a6d32d9f..bd467560787e 100644
--- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
@@ -882,6 +882,12 @@ __bpf_kfunc u64 bpf_kfunc_call_stack_arg_timer(u64 a, u64 b, u64 c, u64 d, u64 e
return a + b + c + d + e;
}
+__bpf_kfunc u64 bpf_kfunc_call_stack_arg_big(u64 a, u64 b, u64 c, u64 d, u64 e,
+ struct prog_test_big_arg s)
+{
+ return a + b + c + d + e + s.a + s.b;
+}
+
static struct prog_test_ref_kfunc prog_test_struct = {
.a = 42,
.b = 108,
@@ -1353,6 +1359,7 @@ BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_mem)
BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_iter)
BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_const_str)
BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_timer)
+BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_big)
BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_pass1)
BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail1)
BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail2)
diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
index 2c1cb118f886..2edc36b66de9 100644
--- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
@@ -50,6 +50,11 @@ struct prog_test_pass2 {
} x;
};
+struct prog_test_big_arg {
+ __u64 a;
+ __u64 b;
+};
+
struct prog_test_fail1 {
void *p;
int x;
@@ -130,6 +135,9 @@ __u64 bpf_kfunc_call_stack_arg_const_str(__u64 a, __u64 b, __u64 c, __u64 d, __u
const char *str__str) __ksym;
__u64 bpf_kfunc_call_stack_arg_timer(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e,
struct bpf_timer *timer) __ksym;
+__u64 bpf_kfunc_call_stack_arg_big(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e,
+ struct prog_test_big_arg s) __ksym;
+
void bpf_kfunc_call_test_pass_ctx(struct __sk_buff *skb) __ksym;
void bpf_kfunc_call_test_pass1(struct prog_test_pass1 *p) __ksym;
void bpf_kfunc_call_test_pass2(struct prog_test_pass2 *p) __ksym;
--
2.52.0
* [PATCH bpf-next v6 17/17] selftests/bpf: Add verifier tests for stack argument validation
2026-04-19 16:33 [PATCH bpf-next v6 00/17] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
` (15 preceding siblings ...)
2026-04-19 16:34 ` [PATCH bpf-next v6 16/17] selftests/bpf: Add tests for stack argument validation Yonghong Song
@ 2026-04-19 16:34 ` Yonghong Song
2026-04-19 17:21 ` sashiko-bot
2026-04-20 15:41 ` [PATCH bpf-next v6 00/17] bpf: Support stack arguments for BPF functions and kfuncs Puranjay Mohan
17 siblings, 1 reply; 51+ messages in thread
From: Yonghong Song @ 2026-04-19 16:34 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
Add inline-asm based verifier tests that exercise stack argument
validation logic directly.
Positive tests:
- Subprog call with 6 args
- Two sequential calls to different subprogs (6-arg and 7-arg)
- Share an r11 store across both branches
Negative tests — verifier rejection:
- Read from uninitialized incoming stack arg slot
- Gap in outgoing slots: only r11-16 written, r11-8 missing
- Write at r11-80, exceeding max 7 stack args
- Missing store on one branch with a shared store
- First call has proper stack arguments; the second call attempts to
  inherit them, which the verifier must reject
Negative tests — pointer/ref tracking:
- Pruning type mismatch: one branch stores PTR_TO_STACK, the
other stores a scalar, callee dereferences — must not prune
- Release invalidation: bpf_sk_release invalidates a socket
pointer stored in a stack arg slot
- Packet pointer invalidation: bpf_skb_pull_data invalidates
a packet pointer stored in a stack arg slot
- Null propagation: PTR_TO_MAP_VALUE_OR_NULL stored in stack
arg slot, null branch attempts dereference via callee
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
.../selftests/bpf/prog_tests/verifier.c | 2 +
.../selftests/bpf/progs/verifier_stack_arg.c | 433 ++++++++++++++++++
2 files changed, 435 insertions(+)
create mode 100644 tools/testing/selftests/bpf/progs/verifier_stack_arg.c
diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index a96b25ebff23..aef21cf2987b 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -91,6 +91,7 @@
#include "verifier_sockmap_mutate.skel.h"
#include "verifier_spill_fill.skel.h"
#include "verifier_spin_lock.skel.h"
+#include "verifier_stack_arg.skel.h"
#include "verifier_stack_ptr.skel.h"
#include "verifier_store_release.skel.h"
#include "verifier_subprog_precision.skel.h"
@@ -238,6 +239,7 @@ void test_verifier_sock_addr(void) { RUN(verifier_sock_addr); }
void test_verifier_sockmap_mutate(void) { RUN(verifier_sockmap_mutate); }
void test_verifier_spill_fill(void) { RUN(verifier_spill_fill); }
void test_verifier_spin_lock(void) { RUN(verifier_spin_lock); }
+void test_verifier_stack_arg(void) { RUN(verifier_stack_arg); }
void test_verifier_stack_ptr(void) { RUN(verifier_stack_ptr); }
void test_verifier_store_release(void) { RUN(verifier_store_release); }
void test_verifier_subprog_precision(void) { RUN(verifier_subprog_precision); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_stack_arg.c b/tools/testing/selftests/bpf/progs/verifier_stack_arg.c
new file mode 100644
index 000000000000..66dd11840a63
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_stack_arg.c
@@ -0,0 +1,433 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+struct {
+ __uint(type, BPF_MAP_TYPE_HASH);
+ __uint(max_entries, 1);
+ __type(key, long long);
+ __type(value, long long);
+} map_hash_8b SEC(".maps");
+
+#if defined(__TARGET_ARCH_x86) && defined(__BPF_FEATURE_STACK_ARGUMENT)
+
+__noinline __used
+static int subprog_6args(int a, int b, int c, int d, int e, int f)
+{
+ return a + b + c + d + e + f;
+}
+
+__noinline __used
+static int subprog_7args(int a, int b, int c, int d, int e, int f, int g)
+{
+ return a + b + c + d + e + f + g;
+}
+
+__noinline __used
+static long subprog_deref_arg6(long a, long b, long c, long d, long e, long *f)
+{
+ return *f;
+}
+
+SEC("tc")
+__description("stack_arg: subprog with 6 args")
+__success
+__arch_x86_64
+__naked void stack_arg_6args(void)
+{
+ asm volatile (
+ "r1 = 1;"
+ "r2 = 2;"
+ "r3 = 3;"
+ "r4 = 4;"
+ "r5 = 5;"
+ "*(u64 *)(r11 - 8) = 6;"
+ "call subprog_6args;"
+ "exit;"
+ ::: __clobber_all
+ );
+}
+
+SEC("tc")
+__description("stack_arg: two subprogs with >5 args")
+__success
+__arch_x86_64
+__naked void stack_arg_two_subprogs(void)
+{
+ asm volatile (
+ "r1 = 1;"
+ "r2 = 2;"
+ "r3 = 3;"
+ "r4 = 4;"
+ "r5 = 5;"
+ "*(u64 *)(r11 - 8) = 10;"
+ "call subprog_6args;"
+ "r6 = r0;"
+ "r1 = 1;"
+ "r2 = 2;"
+ "r3 = 3;"
+ "r4 = 4;"
+ "r5 = 5;"
+ "*(u64 *)(r11 - 16) = 30;"
+ "*(u64 *)(r11 - 8) = 20;"
+ "call subprog_7args;"
+ "r0 += r6;"
+ "exit;"
+ ::: __clobber_all
+ );
+}
+
+SEC("tc")
+__description("stack_arg: read from uninitialized stack arg slot")
+__failure
+__arch_x86_64
+__msg("invalid read from stack arg off 8 depth 0")
+__naked void stack_arg_read_uninitialized(void)
+{
+ asm volatile (
+ "r0 = *(u64 *)(r11 + 8);"
+ "r0 = 0;"
+ "exit;"
+ ::: __clobber_all
+ );
+}
+
+SEC("tc")
+__description("stack_arg: gap at offset -8, only wrote -16")
+__failure
+__arch_x86_64
+__msg("stack arg#6 not properly initialized")
+__naked void stack_arg_gap_at_minus8(void)
+{
+ asm volatile (
+ "r1 = 1;"
+ "r2 = 2;"
+ "r3 = 3;"
+ "r4 = 4;"
+ "r5 = 5;"
+ "*(u64 *)(r11 - 16) = 30;"
+ "call subprog_7args;"
+ "exit;"
+ ::: __clobber_all
+ );
+}
+
+SEC("tc")
+__description("stack_arg: pruning with different stack arg types")
+__failure
+__flag(BPF_F_TEST_STATE_FREQ)
+__arch_x86_64
+__msg("R1 invalid mem access 'scalar'")
+__naked void stack_arg_pruning_type_mismatch(void)
+{
+ asm volatile (
+ "call %[bpf_get_prandom_u32];"
+ "r6 = r0;"
+ /* local = 0 on program stack */
+ "r7 = 0;"
+ "*(u64 *)(r10 - 8) = r7;"
+ /* Branch based on random value */
+ "if r6 s> 3 goto l0_%=;"
+ /* Path 1: store stack pointer to outgoing arg6 */
+ "r1 = r10;"
+ "r1 += -8;"
+ "*(u64 *)(r11 - 8) = r1;"
+ "goto l1_%=;"
+ "l0_%=:"
+ /* Path 2: store scalar to outgoing arg6 */
+ "*(u64 *)(r11 - 8) = 42;"
+ "l1_%=:"
+ /* Call subprog that dereferences arg6 */
+ "r1 = r6;"
+ "r2 = 0;"
+ "r3 = 0;"
+ "r4 = 0;"
+ "r5 = 0;"
+ "call subprog_deref_arg6;"
+ "exit;"
+ :: __imm(bpf_get_prandom_u32)
+ : __clobber_all
+ );
+}
+
+SEC("tc")
+__description("stack_arg: release_reference invalidates stack arg slot")
+__failure
+__arch_x86_64
+__msg("R1 invalid mem access 'scalar'")
+__naked void stack_arg_release_ref(void)
+{
+ asm volatile (
+ "r6 = r1;"
+ /* struct bpf_sock_tuple tuple = {} */
+ "r2 = 0;"
+ "*(u32 *)(r10 - 8) = r2;"
+ "*(u64 *)(r10 - 16) = r2;"
+ "*(u64 *)(r10 - 24) = r2;"
+ "*(u64 *)(r10 - 32) = r2;"
+ "*(u64 *)(r10 - 40) = r2;"
+ "*(u64 *)(r10 - 48) = r2;"
+ /* sk = bpf_sk_lookup_tcp(ctx, &tuple, sizeof(tuple), 0, 0) */
+ "r1 = r6;"
+ "r2 = r10;"
+ "r2 += -48;"
+ "r3 = %[sizeof_bpf_sock_tuple];"
+ "r4 = 0;"
+ "r5 = 0;"
+ "call %[bpf_sk_lookup_tcp];"
+ /* r0 = sk (PTR_TO_SOCK_OR_NULL) */
+ "if r0 == 0 goto l0_%=;"
+ /* Store sock ref to outgoing arg6 slot */
+ "*(u64 *)(r11 - 8) = r0;"
+ /* Release the reference — invalidates the stack arg slot */
+ "r1 = r0;"
+ "call %[bpf_sk_release];"
+ /* Call subprog that dereferences arg6 — should fail */
+ "r1 = 1;"
+ "r2 = 2;"
+ "r3 = 3;"
+ "r4 = 4;"
+ "r5 = 5;"
+ "call subprog_deref_arg6;"
+ "l0_%=:"
+ "r0 = 0;"
+ "exit;"
+ :
+ : __imm(bpf_sk_lookup_tcp),
+ __imm(bpf_sk_release),
+ __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+ : __clobber_all
+ );
+}
+
+SEC("tc")
+__description("stack_arg: pkt pointer in stack arg slot invalidated after pull_data")
+__failure
+__arch_x86_64
+__msg("R1 invalid mem access 'scalar'")
+__naked void stack_arg_stale_pkt_ptr(void)
+{
+ asm volatile (
+ "r6 = r1;"
+ "r7 = *(u32 *)(r6 + %[__sk_buff_data]);"
+ "r8 = *(u32 *)(r6 + %[__sk_buff_data_end]);"
+ /* check pkt has at least 1 byte */
+ "r0 = r7;"
+ "r0 += 8;"
+ "if r0 > r8 goto l0_%=;"
+ /* Store valid pkt pointer to outgoing arg6 slot */
+ "*(u64 *)(r11 - 8) = r7;"
+ /* bpf_skb_pull_data invalidates all pkt pointers */
+ "r1 = r6;"
+ "r2 = 0;"
+ "call %[bpf_skb_pull_data];"
+ /* Call subprog that dereferences arg6 — should fail */
+ "r1 = 1;"
+ "r2 = 2;"
+ "r3 = 3;"
+ "r4 = 4;"
+ "r5 = 5;"
+ "call subprog_deref_arg6;"
+ "l0_%=:"
+ "r0 = 0;"
+ "exit;"
+ :
+ : __imm(bpf_skb_pull_data),
+ __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+ __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+ : __clobber_all
+ );
+}
+
+SEC("tc")
+__description("stack_arg: null propagation rejects deref on null branch")
+__failure
+__arch_x86_64
+__msg("R1 invalid mem access 'scalar'")
+__naked void stack_arg_null_propagation_fail(void)
+{
+ asm volatile (
+ "r1 = 0;"
+ "*(u64 *)(r10 - 8) = r1;"
+ /* r0 = bpf_map_lookup_elem(&map_hash_8b, &key) */
+ "r2 = r10;"
+ "r2 += -8;"
+ "r1 = %[map_hash_8b] ll;"
+ "call %[bpf_map_lookup_elem];"
+ /* Store PTR_TO_MAP_VALUE_OR_NULL to outgoing arg6 slot */
+ "*(u64 *)(r11 - 8) = r0;"
+ /* null check on r0 */
+ "if r0 != 0 goto l0_%=;"
+ /*
+ * On null branch, outgoing slot is SCALAR(0).
+ * Call subprog that dereferences arg6 — should fail.
+ */
+ "r1 = 0;"
+ "r2 = 0;"
+ "r3 = 0;"
+ "r4 = 0;"
+ "r5 = 0;"
+ "call subprog_deref_arg6;"
+ "l0_%=:"
+ "r0 = 0;"
+ "exit;"
+ :
+ : __imm(bpf_map_lookup_elem),
+ __imm_addr(map_hash_8b)
+ : __clobber_all
+ );
+}
+
+SEC("tc")
+__description("stack_arg: missing store on one branch")
+__failure
+__arch_x86_64
+__msg("stack arg#6 not properly initialized")
+__naked void stack_arg_missing_store_one_branch(void)
+{
+ asm volatile (
+ "r1 = 1;"
+ "r2 = 2;"
+ "r3 = 3;"
+ "r4 = 4;"
+ "r5 = 5;"
+ /* Write arg7 (r11-16) before branch */
+ "*(u64 *)(r11 - 16) = 20;"
+ "call %[bpf_get_prandom_u32];"
+ "if r0 > 0 goto l0_%=;"
+ /* Path 1: write arg6 and call */
+ "*(u64 *)(r11 - 8) = 10;"
+ "r1 = 1;"
+ "r2 = 2;"
+ "r3 = 3;"
+ "r4 = 4;"
+ "r5 = 5;"
+ "call subprog_7args;"
+ "goto l1_%=;"
+ "l0_%=:"
+ /* Path 2: missing arg6 store, call should fail */
+ "r1 = 1;"
+ "r2 = 2;"
+ "r3 = 3;"
+ "r4 = 4;"
+ "r5 = 5;"
+ "call subprog_7args;"
+ "l1_%=:"
+ "r0 = 0;"
+ "exit;"
+ :: __imm(bpf_get_prandom_u32)
+ : __clobber_all
+ );
+}
+
+SEC("tc")
+__description("stack_arg: share a store for both branches")
+__success __retval(0)
+__arch_x86_64
+__naked void stack_arg_shared_store(void)
+{
+ asm volatile (
+ "r1 = 1;"
+ "r2 = 2;"
+ "r3 = 3;"
+ "r4 = 4;"
+ "r5 = 5;"
+ /* Write arg7 (r11-16) before branch */
+ "*(u64 *)(r11 - 16) = 20;"
+ "call %[bpf_get_prandom_u32];"
+ "if r0 > 0 goto l0_%=;"
+ /* Path 1: write arg6 and call */
+ "*(u64 *)(r11 - 8) = 10;"
+ "r1 = 1;"
+ "r2 = 2;"
+ "r3 = 3;"
+ "r4 = 4;"
+ "r5 = 5;"
+ "call subprog_7args;"
+ "goto l1_%=;"
+ "l0_%=:"
+ /* Path 2: also write arg6 and call */
+ "*(u64 *)(r11 - 8) = 30;"
+ "r1 = 1;"
+ "r2 = 2;"
+ "r3 = 3;"
+ "r4 = 4;"
+ "r5 = 5;"
+ "call subprog_7args;"
+ "l1_%=:"
+ "r0 = 0;"
+ "exit;"
+ :: __imm(bpf_get_prandom_u32)
+ : __clobber_all
+ );
+}
+
+SEC("tc")
+__description("stack_arg: write beyond max outgoing depth")
+__failure
+__arch_x86_64
+__msg("stack arg write offset -80 exceeds max 7 stack args")
+__naked void stack_arg_write_beyond_max(void)
+{
+ asm volatile (
+ "r1 = 1;"
+ "r2 = 2;"
+ "r3 = 3;"
+ "r4 = 4;"
+ "r5 = 5;"
+ /* Write to offset -80, way beyond any callee's needs */
+ "*(u64 *)(r11 - 80) = 99;"
+ "*(u64 *)(r11 - 16) = 20;"
+ "*(u64 *)(r11 - 8) = 10;"
+ "call subprog_7args;"
+ "r0 = 0;"
+ "exit;"
+ ::: __clobber_all
+ );
+}
+
+SEC("tc")
+__description("stack_arg: sequential calls reuse slots")
+__failure
+__arch_x86_64
+__msg("stack arg#6 not properly initialized")
+__naked void stack_arg_sequential_calls(void)
+{
+ asm volatile (
+ "r1 = 1;"
+ "r2 = 2;"
+ "r3 = 3;"
+ "r4 = 4;"
+ "r5 = 5;"
+ "*(u64 *)(r11 - 8) = 6;"
+ "*(u64 *)(r11 - 16) = 7;"
+ "call subprog_7args;"
+ "r6 = r0;"
+ "r1 = 1;"
+ "r2 = 2;"
+ "r3 = 3;"
+ "r4 = 4;"
+ "r5 = 5;"
+ "call subprog_7args;"
+ "r0 += r6;"
+ "exit;"
+ ::: __clobber_all
+ );
+}
+
+#else
+
+SEC("socket")
+__description("stack_arg is not supported by compiler or jit, use a dummy test")
+__success
+int dummy_test(void)
+{
+ return 0;
+}
+
+#endif
+
+char _license[] SEC("license") = "GPL";
--
2.52.0
* Re: [PATCH bpf-next v6 05/17] bpf: Introduce bpf register BPF_REG_PARAMS
2026-04-19 16:33 ` [PATCH bpf-next v6 05/17] bpf: Introduce bpf register BPF_REG_PARAMS Yonghong Song
@ 2026-04-19 17:06 ` sashiko-bot
2026-04-19 18:14 ` Yonghong Song
0 siblings, 1 reply; 51+ messages in thread
From: sashiko-bot @ 2026-04-19 17:06 UTC (permalink / raw)
To: Yonghong Song; +Cc: bpf
Sashiko AI review found 1 potential issue(s):
- [Critical] Unvalidated BPF_REG_PARAMS instructions cause silent register aliasing and arbitrary kernel memory corruption on architectures lacking stack argument support.
--
commit 9b78142c31c81cff58a84123ee2833f9b935c211
Author: Yonghong Song <yonghong.song@linux.dev>
bpf: Introduce bpf register BPF_REG_PARAMS
This commit introduces BPF_REG_PARAMS as a dedicated BPF register for stack
argument accesses, occupying BPF register number 11 (R11). It shifts the
kernel-internal hidden register BPF_REG_AX to slot 12.
> diff --git a/include/linux/filter.h b/include/linux/filter.h
> index 1ec6d5ba64cc4..b77d0b06db6eb 100644
> --- a/include/linux/filter.h
> +++ b/include/linux/filter.h
> @@ -58,8 +58,9 @@ struct ctl_table_header;
> #define BPF_REG_H BPF_REG_9 /* hlen, callee-saved */
>
> /* Kernel hidden auxiliary/helper register. */
> -#define BPF_REG_AX MAX_BPF_REG
> -#define MAX_BPF_EXT_REG (MAX_BPF_REG + 1)
> +#define BPF_REG_PARAMS MAX_BPF_REG
Does the introduction of BPF_REG_PARAMS without JIT-support validation allow
architectures lacking stack argument support (like ARM64, RISC-V, s390x) to
inadvertently use uninitialized JIT register mappings?
If the verifier doesn't gate BPF_REG_PARAMS usage behind a check like
bpf_jit_supports_stack_args(), could a crafted program trigger silent
register aliasing?
For example, if a program sets BPF_REG_1 to an arbitrary kernel address and
executes BPF_STX_MEM(BPF_DW, BPF_REG_PARAMS, BPF_REG_2, -8), the verifier may
allow it as a safe outgoing stack argument write. When the ARM64 JIT compiles
the instruction, it looks up index 11. If that index is uninitialized and
defaults to 0, it would emit a store using x0 (which is BPF_REG_1) as the
base pointer (e.g., STR x1, [x0, #-8]).
Could this write an arbitrary value into a controlled kernel address and
bypass verifier memory safety guarantees?
> +#define BPF_REG_AX (MAX_BPF_REG + 1)
> +#define MAX_BPF_EXT_REG (MAX_BPF_REG + 2)
> #define MAX_BPF_JIT_REG MAX_BPF_EXT_REG
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260419163316.731019-1-yonghong.song@linux.dev?part=5
* Re: [PATCH bpf-next v6 13/17] bpf: Reject stack arguments if tail call reachable
2026-04-19 16:34 ` [PATCH bpf-next v6 13/17] bpf: Reject stack arguments if tail call reachable Yonghong Song
@ 2026-04-19 17:08 ` sashiko-bot
2026-04-19 18:20 ` Yonghong Song
0 siblings, 1 reply; 51+ messages in thread
From: sashiko-bot @ 2026-04-19 17:08 UTC (permalink / raw)
To: Yonghong Song; +Cc: bpf
Sashiko AI review found 1 potential issue(s):
- [High] The verifier incorrectly rejects perfectly valid subprograms with stack arguments due to an order-dependent state leak of `tail_call_reachable` during the call graph traversal.
--
commit 2677e39a30cc9fcf44cadb1927d1362ad39f7b4e
Author: Yonghong Song <yonghong.song@linux.dev>
bpf: Reject stack arguments if tail call reachable
This commit disallows tail calls in the presence of stack arguments,
rejecting subprograms with stack arguments if a tail call is reachable.
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 13f1fd7880921..af89750498836 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
[ ... ]
> @@ -5672,14 +5677,23 @@ static int check_max_stack_depth_subprog(struct bpf_verifier_env *env, int idx,
> * this info will be utilized by JIT so that we will be preserving the
> * tail call counter throughout bpf2bpf calls combined with tailcalls
> */
> - if (tail_call_reachable)
> + if (tail_call_reachable) {
> for (tmp = idx; tmp >= 0; tmp = dinfo[tmp].caller) {
> if (subprog[tmp].is_exception_cb) {
> verbose(env, "cannot tail call within exception cb\n");
> return -EINVAL;
> }
> + if (subprog_has_stack_args(&subprog[tmp])) {
> + verbose(env, "tail_calls are not allowed in programs with stack args\n");
> + return -EINVAL;
> + }
Can this leak the tail_call_reachable state across disjoint call graph branches?
In check_max_stack_depth_subprog(), tail_call_reachable is a local variable
that seems to act as an accumulator. If the main program first calls subprogram A
(which has a tail call) and subsequently calls subprogram B (which has stack
arguments but no tail call), tail_call_reachable remains true while traversing B.
When the traversal finishes subprogram B, this newly added check evaluates the
current call stack (main -> B). Since tail_call_reachable leaked from the main
-> A traversal, and subprogram B has stack arguments, the check would falsely
reject the program.
Conversely, if the bytecode is ordered so that B is called before A,
tail_call_reachable would be false while traversing B, allowing the program to
load successfully.
Does this introduce an unpredictable, order-dependent functional regression?
> subprog[tmp].tail_call_reachable = true;
> }
> + } else if (!idx && subprog[0].has_tail_call && subprog_has_stack_args(&subprog[0])) {
> + verbose(env, "tail_calls are not allowed in programs with stack args\n");
> + return -EINVAL;
> + }
> +
> if (subprog[0].tail_call_reachable)
> env->prog->aux->tail_call_reachable = true;
>
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260419163316.731019-1-yonghong.song@linux.dev?part=13
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH bpf-next v6 12/17] bpf: Support stack arguments for kfunc calls
2026-04-19 16:34 ` [PATCH bpf-next v6 12/17] bpf: Support stack arguments for kfunc calls Yonghong Song
@ 2026-04-19 17:08 ` sashiko-bot
2026-04-19 18:18 ` Yonghong Song
0 siblings, 1 reply; 51+ messages in thread
From: sashiko-bot @ 2026-04-19 17:08 UTC (permalink / raw)
To: Yonghong Song; +Cc: bpf
Sashiko AI review found 2 potential issue(s):
- [Critical] Loss of precision tracking for scalar stack arguments due to modifying a local copy (`tmp_reg`).
- [Critical] Bypassing `mark_chain_precision` for stack arguments breaks data-flow backtracking, enabling unsafe state pruning.
--
commit 96adcc4832214af51212820f2056565abaf42a32
Author: Yonghong Song <yonghong.song@linux.dev>
bpf: Support stack arguments for kfunc calls
Extend the stack argument mechanism to kfunc calls, allowing kfuncs
with more than 5 parameters to receive additional arguments via the
r11-based stack arg area.
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 2a346e4f28e15..13f1fd7880921 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
[ ... ]
> @@ -7314,8 +7322,12 @@ static int check_mem_size_reg(struct bpf_verifier_env *env,
> }
> err = check_helper_mem_access(env, mem_reg, mem_argno, size_reg->umax_value,
> access_type, zero_size_allowed, meta);
> - if (!err && !is_stack_argno(size_argno))
> - err = mark_chain_precision(env, size_argno);
> + if (!err) {
> + if (is_stack_argno(size_argno))
> + size_reg->precise = true;
> + else
> + err = mark_chain_precision(env, size_argno);
> + }
> return err;
> }
Does setting size_reg->precise = true without calling mark_chain_precision()
break data-flow backtracking?
If only the current state's flag is set, parent states and the source
registers that populated the stack argument will not be marked precise.
Could this lead to the verifier incorrectly pruning states if execution paths
merge before the stack store?
[ ... ]
> @@ -12373,20 +12400,44 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
[ ... ]
> if (i < MAX_BPF_FUNC_REG_ARGS) {
> reg = &regs[i + 1];
> } else {
> /* Retrieve the reg state from the outgoing stack arg slot. */
> struct bpf_func_state *caller = cur_func(env);
> int spi = out_arg_spi(caller, i - MAX_BPF_FUNC_REG_ARGS);
>
> if (!is_stack_arg_slot_initialized(caller, spi)) {
> verbose(env, "stack %s not properly initialized\n",
> reg_arg_name(env, argno));
> return -EINVAL;
> }
>
> tmp_reg = caller->stack_arg_regs[spi];
> reg = &tmp_reg;
> regno = -1;
> }
[ ... ]
> @@ -12413,9 +12464,13 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
> reg_arg_name(env, argno));
> return -EINVAL;
> }
> - ret = mark_chain_precision(env, regno);
> - if (ret < 0)
> - return ret;
> + if (regno < 0) {
> + reg->precise = true;
> + } else {
> + ret = mark_chain_precision(env, regno);
> + if (ret < 0)
> + return ret;
> + }
> meta->arg_constant.found = true;
> meta->arg_constant.value = reg->var_off.value;
Does setting reg->precise = true here modify a local copy instead of the actual
verifier state?
Since reg points to tmp_reg for stack arguments:
tmp_reg = caller->stack_arg_regs[spi];
reg = &tmp_reg;
it seems that reg->precise = true will only update the block-scoped tmp_reg,
leaving the real state in caller->stack_arg_regs[spi] untouched. Could this
cause the verifier to lose track of precision requirements for scalar
constraints?
Additionally, similar to check_mem_size_reg() above, bypassing
mark_chain_precision() for stack arguments appears to skip backtracking
entirely.
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260419163316.731019-1-yonghong.song@linux.dev?part=12
* Re: [PATCH bpf-next v6 15/17] selftests/bpf: Add tests for BPF function stack arguments
2026-04-19 16:34 ` [PATCH bpf-next v6 15/17] selftests/bpf: Add tests for BPF function " Yonghong Song
@ 2026-04-19 17:15 ` sashiko-bot
2026-04-20 5:52 ` Yonghong Song
0 siblings, 1 reply; 51+ messages in thread
From: sashiko-bot @ 2026-04-19 17:15 UTC (permalink / raw)
To: Yonghong Song; +Cc: bpf
Sashiko AI review found 3 potential issue(s):
- [Medium] The `test_async_cb_many` test uses an extremely short `usleep(50)` to wait for a BPF timer callback to execute, leading to potential flakiness.
- [Low] The block comment for `global_two_callees` incorrectly refers to the function as `bar`.
- [Low] The `bpf_iter_testmod_seq_next` kfunc is incorrectly declared with an `int *` return type in `stack_arg_kfunc.c`.
--
commit 63c1c9ef99d02448833e84a39ecb212ccaaf5b80
Author: Yonghong Song <yonghong.song@linux.dev>
selftests/bpf: Add tests for BPF function stack arguments
This commit adds selftests covering stack argument passing for both BPF-to-BPF
subprog calls and kfunc calls with more than 5 arguments.
> diff --git a/tools/testing/selftests/bpf/prog_tests/stack_arg.c b/tools/testing/selftests/bpf/prog_tests/stack_arg.c
> new file mode 100644
> index 0000000000000..d61bac33f8092
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/prog_tests/stack_arg.c
[ ... ]
> +static void test_async_cb_many(void)
> +{
> + struct stack_arg *skel;
> +
> + skel = stack_arg__open();
> + if (!ASSERT_OK_PTR(skel, "open"))
> + return;
> +
> + if (!skel->rodata->has_stack_arg) {
> + test__skip();
> + goto out;
> + }
> +
> + if (!ASSERT_OK(stack_arg__load(skel), "load"))
> + goto out;
> +
> + run_subtest(skel->progs.test_async_cb_many_args, 0);
> +
> + /* Wait for the timer callback to fire and verify the result.
> + * 10+20+30+40+50+60+70+80 = 360
> + */
> + usleep(50);
> + ASSERT_EQ(skel->bss->timer_result, 360, "timer_result");
Is 50 microseconds enough time to reliably wait for the timer callback to
execute?
In CI environments or heavily loaded virtualization systems, might the
scheduling and softirq execution be delayed by more than a fraction of a
millisecond? If the timer does not fire in exactly that short window, the
test would incorrectly read a timer_result of 0 and fail.
Would it be more robust to use a polling loop or a significantly larger
timeout to avoid flaky tests?
[ ... ]
> diff --git a/tools/testing/selftests/bpf/progs/stack_arg.c b/tools/testing/selftests/bpf/progs/stack_arg.c
> new file mode 100644
> index 0000000000000..8c198ee952ff4
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/stack_arg.c
[ ... ]
> +/* bar calls foo1 (3 stack args) and foo2 (5 stack args).
> + * The outgoing stack arg area is sized for foo2 (the larger callee).
> + * Stores for foo1 are a subset of the area used by foo2.
> + * Result: foo1(1,2,3,4,5,6,7,8) + foo2(1,2,3,4,5,6,7,8,9,10) = 36 + 55 = 91
> + *
> + * Pass a-e through so the compiler can't constant-fold the stack args away.
> + */
> +__noinline int global_two_callees(int a, int b, int c, int d, int e)
This isn't a bug, but the comment refers to a function named bar, while the
function being described is actually named global_two_callees.
[ ... ]
> diff --git a/tools/testing/selftests/bpf/progs/stack_arg_kfunc.c b/tools/testing/selftests/bpf/progs/stack_arg_kfunc.c
> new file mode 100644
> index 0000000000000..6cc404d578639
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/stack_arg_kfunc.c
[ ... ]
> +#if defined(__TARGET_ARCH_x86) && defined(__BPF_FEATURE_STACK_ARGUMENT)
> +
> +const volatile bool has_stack_arg = true;
> +
> +struct bpf_iter_testmod_seq {
> + u64 :64;
> + u64 :64;
> +};
> +
> +extern int bpf_iter_testmod_seq_new(struct bpf_iter_testmod_seq *it, s64 value, int cnt) __ksym;
> +extern int *bpf_iter_testmod_seq_next(struct bpf_iter_testmod_seq *it) __ksym;
Does bpf_iter_testmod_seq_next() actually return an int pointer?
Looking at the corresponding kfunc definition in bpf_testmod.c, the function
appears to return an s64 pointer. Although this function isn't actively called
in this test file (which prevents a BTF verifier failure here), could the
mismatched return type declaration cause confusion or subtle issues in the
future?
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260419163316.731019-1-yonghong.song@linux.dev?part=15
* Re: [PATCH bpf-next v6 17/17] selftests/bpf: Add verifier tests for stack argument validation
2026-04-19 16:34 ` [PATCH bpf-next v6 17/17] selftests/bpf: Add verifier " Yonghong Song
@ 2026-04-19 17:21 ` sashiko-bot
2026-04-20 6:14 ` Yonghong Song
0 siblings, 1 reply; 51+ messages in thread
From: sashiko-bot @ 2026-04-19 17:21 UTC (permalink / raw)
To: Yonghong Song; +Cc: bpf
Sashiko AI review found 1 potential issue(s):
- [Medium] Test fragility in `subprog_deref_arg6` error matching due to compiler register allocation
--
commit 3936b0a6dea45627278be80c25b20ff18f2821d4
Author: Yonghong Song <yonghong.song@linux.dev>
selftests/bpf: Add verifier tests for stack argument validation
This commit adds inline-asm based verifier tests to exercise BPF stack
argument validation logic directly, including both positive and negative
tests for tracking pointers, references, and null propagation.
> diff --git a/tools/testing/selftests/bpf/progs/verifier_stack_arg.c b/tools/testing/selftests/bpf/progs/verifier_stack_arg.c
> new file mode 100644
> index 0000000000000..66dd11840a63e
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/verifier_stack_arg.c
[ ... ]
> +__noinline __used
> +static long subprog_deref_arg6(long a, long b, long c, long d, long e, long *f)
> +{
> + return *f;
> +}
[ ... ]
> +SEC("tc")
> +__description("stack_arg: pruning with different stack arg types")
> +__failure
> +__flag(BPF_F_TEST_STATE_FREQ)
> +__arch_x86_64
> +__msg("R1 invalid mem access 'scalar'")
Does this hardcode an assumption about Clang's register allocation?
Since subprog_deref_arg6 is standard C rather than a __naked inline
assembly function, the compiler determines which register is used to
dereference f. If a future LLVM version chooses a different register, the
tests will fail because the verifier error will mention the new register
instead of R1.
Could these tests use __regex to match the register dynamically, or be
rewritten so subprog_deref_arg6 is a __naked function to guarantee the
register choice?
> +__naked void stack_arg_pruning_type_mismatch(void)
> +{
> + asm volatile (
> + "call %[bpf_get_prandom_u32];"
[ ... ]
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260419163316.731019-1-yonghong.song@linux.dev?part=17
* Re: [PATCH bpf-next v6 14/17] bpf,x86: Implement JIT support for stack arguments
2026-04-19 16:34 ` [PATCH bpf-next v6 14/17] bpf,x86: Implement JIT support for stack arguments Yonghong Song
@ 2026-04-19 17:25 ` sashiko-bot
2026-04-19 18:55 ` Yonghong Song
0 siblings, 1 reply; 51+ messages in thread
From: sashiko-bot @ 2026-04-19 17:25 UTC (permalink / raw)
To: Yonghong Song; +Cc: bpf
Sashiko AI review found 4 potential issue(s):
- [Critical] Incoming Arg 6 mapped to volatile R9 is clobbered by calls, causing verifier type confusion.
- [Critical] Sub-register BPF_LDX from incoming Arg 6 incorrectly copies the full 64-bit R9 register, breaking verifier bounds tracking.
- [High] BPF_ST instructions for outgoing Arg 6 miss size checks, causing out-of-bounds stack corruption and uninitialized arguments.
- [High] Sub-register BPF_STX to outgoing Arg 6 overwrites the entire 64-bit R9 register.
--
commit 16df4bd097c0d6a600a22bdb40ebfc360ece0fc6
Author: Yonghong Song <yonghong.song@linux.dev>
bpf,x86: Implement JIT support for stack arguments
This commit adds x86_64 JIT support for BPF functions with more than 5
arguments. The extra arguments are passed through a stack area addressed by
register r11 (BPF_REG_PARAMS), mapping Arg 6 to R9 and Args 7+ to the stack.
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index ea9e707e8abff..06f33fef1e8e1 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
[ ... ]
> @@ -2134,12 +2227,26 @@ static int do_jit(struct bpf_verifier_env *env, struct bpf_prog *bpf_prog, int *
> EMIT1(0xC7);
> goto st;
> case BPF_ST | BPF_MEM | BPF_DW:
> + if (dst_reg == BPF_REG_PARAMS && insn->off == -8) {
> + /* Arg 6: store immediate in r9 register */
> + emit_mov_imm64(&prog, X86_REG_R9, imm32 >> 31, (u32)imm32);
> + break;
> + }
> EMIT2(add_1mod(0x48, dst_reg), 0xC7);
>
> -st: if (is_imm8(insn->off))
> - EMIT2(add_1reg(0x40, dst_reg), insn->off);
> +st: insn_off = insn->off;
> + if (dst_reg == BPF_REG_PARAMS) {
> + /* Args 7+: reverse BPF negative offsets to
> + * x86 positive rsp offsets.
> + * BPF off=-16 → [rsp+0], off=-24 → [rsp+8], ...
> + */
> + insn_off = outgoing_arg_base - outgoing_rsp - insn_off - 16;
Does this code handle 32-bit, 16-bit, or 8-bit stores to Arg 6 correctly?
The check for dst_reg == BPF_REG_PARAMS && insn->off == -8 is uniquely placed
inside the BPF_ST | BPF_MEM | BPF_DW case block. If a BPF program uses a
smaller size store like BPF_W to prepare Arg 6, the code bypasses this check
and falls through to the st: label.
At the st: label, the insn_off calculation equates to RSP - 8 relative to
RBP, emitting a memory store that corrupts unallocated stack memory. In
addition, R9 is never updated, which leaves Arg 6 uninitialized for the callee.
[ ... ]
> @@ -2149,7 +2256,17 @@ st: if (is_imm8(insn->off))
> case BPF_STX | BPF_MEM | BPF_B:
> case BPF_STX | BPF_MEM | BPF_H:
> case BPF_STX | BPF_MEM | BPF_W:
> case BPF_STX | BPF_MEM | BPF_DW:
> - emit_stx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn->off);
> + if (dst_reg == BPF_REG_PARAMS && insn->off == -8) {
> + /* Arg 6: store register value in r9 */
> + EMIT_mov(X86_REG_R9, src_reg);
> + break;
> + }
Does this full 64-bit move break semantics for partial writes?
For smaller sizes like BPF_B or BPF_W, EMIT_mov generates a full 64-bit move
that copies all 64 bits from src_reg to R9.
BPF semantics dictate that a sub-register store should only modify the
specified lower bytes of the target location, leaving the rest unmodified. This
full copy could overwrite data previously stored in the upper bytes of Arg 6
via other partial writes.
[ ... ]
> @@ -2248,6 +2365,19 @@ st: if (is_imm8(insn->off))
> case BPF_LDX | BPF_PROBE_MEMSX | BPF_H:
> case BPF_LDX | BPF_PROBE_MEMSX | BPF_W:
> insn_off = insn->off;
> + if (src_reg == BPF_REG_PARAMS) {
> + if (insn_off == 8) {
> + /* Incoming arg 6: read from r9 */
> + EMIT_mov(dst_reg, X86_REG_R9);
> + break;
> + }
Can R9 be clobbered by function calls when used for Arg 6?
The verifier treats incoming stack arguments as stable memory preserved across
function calls. However, under the x86-64 System V ABI, R9 is a caller-saved
(volatile) register.
If a BPF program reads [r11+8] after making a helper, kfunc, or subprog call,
it will read the clobbered value left in R9. The verifier will incorrectly
track this as the uncorrupted argument, which could cause type confusion.
Additionally, does EMIT_mov(dst_reg, X86_REG_R9) zero-extend sub-register
loads?
This generates a full 64-bit register-to-register move regardless of the load
size. For smaller loads like BPF_B, BPF semantics require the value to be
zero-extended.
Since the entire 64-bit R9 register is copied without zeroing the upper 56
bits, garbage left by the caller in the upper bits will leak into the
destination register, breaking the bounds tracked by the verifier.
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260419163316.731019-1-yonghong.song@linux.dev?part=14
* Re: [PATCH bpf-next v6 05/17] bpf: Introduce bpf register BPF_REG_PARAMS
2026-04-19 17:06 ` sashiko-bot
@ 2026-04-19 18:14 ` Yonghong Song
0 siblings, 0 replies; 51+ messages in thread
From: Yonghong Song @ 2026-04-19 18:14 UTC (permalink / raw)
To: sashiko; +Cc: bpf
On 4/19/26 10:06 AM, sashiko-bot@kernel.org wrote:
> Sashiko AI review found 1 potential issue(s):
> - [Critical] Unvalidated BPF_REG_PARAMS instructions cause silent register aliasing and arbitrary kernel memory corruption on architectures lacking stack argument support.
> --
>
> commit 9b78142c31c81cff58a84123ee2833f9b935c211
> Author: Yonghong Song <yonghong.song@linux.dev>
>
> bpf: Introduce bpf register BPF_REG_PARAMS
>
> This commit introduces BPF_REG_PARAMS as a dedicated BPF register for stack
> argument accesses, occupying BPF register number 11 (R11). It shifts the
> kernel-internal hidden register BPF_REG_AX to slot 12.
>
>> diff --git a/include/linux/filter.h b/include/linux/filter.h
>> index 1ec6d5ba64cc4..b77d0b06db6eb 100644
>> --- a/include/linux/filter.h
>> +++ b/include/linux/filter.h
>> @@ -58,8 +58,9 @@ struct ctl_table_header;
>> #define BPF_REG_H BPF_REG_9 /* hlen, callee-saved */
>>
>> /* Kernel hidden auxiliary/helper register. */
>> -#define BPF_REG_AX MAX_BPF_REG
>> -#define MAX_BPF_EXT_REG (MAX_BPF_REG + 1)
>> +#define BPF_REG_PARAMS MAX_BPF_REG
> Does the introduction of BPF_REG_PARAMS without JIT-support validation allow
> architectures lacking stack argument support (like ARM64, RISC-V, s390x) to
> inadvertently use uninitialized JIT register mappings?
See the patch 'bpf: Enable r11 based insns'. r11-based insns will be
rejected earlier in check_and_resolve_insns(). And before that patch,
although bpf function support is done, JIT support is disabled,
so we are fine.
>
> If the verifier doesn't gate BPF_REG_PARAMS usage behind a check like
> bpf_jit_supports_stack_args(), could a crafted program trigger silent
> register aliasing?
This is not an issue. All potential r11 related issues are resolved
(as far as I know) before "bpf: Enable r11 based insns".
>
> For example, if a program sets BPF_REG_1 to an arbitrary kernel address and
> executes BPF_STX_MEM(BPF_DW, BPF_REG_PARAMS, BPF_REG_2, -8), the verifier may
> allow it as a safe outgoing stack argument write. When the ARM64 JIT compiles
> the instruction, it looks up index 11. If that index is uninitialized and
> defaults to 0, it would emit a store using x0 (which is BPF_REG_1) as the
> base pointer (e.g., STR x1, [x0, #-8]).
>
> Could this write an arbitrary value into a controlled kernel address and
> bypass verifier memory safety guarantees?
>
>> +#define BPF_REG_AX (MAX_BPF_REG + 1)
>> +#define MAX_BPF_EXT_REG (MAX_BPF_REG + 2)
>> #define MAX_BPF_JIT_REG MAX_BPF_EXT_REG
* Re: [PATCH bpf-next v6 12/17] bpf: Support stack arguments for kfunc calls
2026-04-19 17:08 ` sashiko-bot
@ 2026-04-19 18:18 ` Yonghong Song
0 siblings, 0 replies; 51+ messages in thread
From: Yonghong Song @ 2026-04-19 18:18 UTC (permalink / raw)
To: sashiko; +Cc: bpf
On 4/19/26 10:08 AM, sashiko-bot@kernel.org wrote:
> Sashiko AI review found 2 potential issue(s):
> - [Critical] Loss of precision tracking for scalar stack arguments due to modifying a local copy (`tmp_reg`).
> - [Critical] Bypassing `mark_chain_precision` for stack arguments breaks data-flow backtracking, enabling unsafe state pruning.
> --
>
> commit 96adcc4832214af51212820f2056565abaf42a32
> Author: Yonghong Song <yonghong.song@linux.dev>
>
> bpf: Support stack arguments for kfunc calls
>
> Extend the stack argument mechanism to kfunc calls, allowing kfuncs
> with more than 5 parameters to receive additional arguments via the
> r11-based stack arg area.
>
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index 2a346e4f28e15..13f1fd7880921 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
> [ ... ]
>> @@ -7314,8 +7322,12 @@ static int check_mem_size_reg(struct bpf_verifier_env *env,
>> }
>> err = check_helper_mem_access(env, mem_reg, mem_argno, size_reg->umax_value,
>> access_type, zero_size_allowed, meta);
>> - if (!err && !is_stack_argno(size_argno))
>> - err = mark_chain_precision(env, size_argno);
>> + if (!err) {
>> + if (is_stack_argno(size_argno))
>> + size_reg->precise = true;
>> + else
>> + err = mark_chain_precision(env, size_argno);
>> + }
>> return err;
>> }
> Does setting size_reg->precise = true without calling mark_chain_precision()
> break data-flow backtracking?
>
> If only the current state's flag is set, parent states and the source
> registers that populated the stack argument will not be marked precise.
> Could this lead to the verifier incorrectly pruning states if execution paths
> merge before the stack store?
You are right. Let me think about how to resolve this.
>
> [ ... ]
>> @@ -12373,20 +12400,44 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
> [ ... ]
>> if (i < MAX_BPF_FUNC_REG_ARGS) {
>> reg = &regs[i + 1];
>> } else {
>> /* Retrieve the reg state from the outgoing stack arg slot. */
>> struct bpf_func_state *caller = cur_func(env);
>> int spi = out_arg_spi(caller, i - MAX_BPF_FUNC_REG_ARGS);
>>
>> if (!is_stack_arg_slot_initialized(caller, spi)) {
>> verbose(env, "stack %s not properly initialized\n",
>> reg_arg_name(env, argno));
>> return -EINVAL;
>> }
>>
>> tmp_reg = caller->stack_arg_regs[spi];
>> reg = &tmp_reg;
>> regno = -1;
>> }
> [ ... ]
>> @@ -12413,9 +12464,13 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
>> reg_arg_name(env, argno));
>> return -EINVAL;
>> }
>> - ret = mark_chain_precision(env, regno);
>> - if (ret < 0)
>> - return ret;
>> + if (regno < 0) {
>> + reg->precise = true;
>> + } else {
>> + ret = mark_chain_precision(env, regno);
>> + if (ret < 0)
>> + return ret;
>> + }
>> meta->arg_constant.found = true;
>> meta->arg_constant.value = reg->var_off.value;
> Does setting reg->precise = true here modify a local copy instead of the actual
> verifier state?
>
> Since reg points to tmp_reg for stack arguments:
>
> tmp_reg = caller->stack_arg_regs[spi];
> reg = &tmp_reg;
>
> it seems that reg->precise = true will only update the block-scoped tmp_reg,
> leaving the real state in caller->stack_arg_regs[spi] untouched. Could this
> cause the verifier to lose track of precision requirements for scalar
> constraints?
Right, tmp_reg is not used. I guess my previous usage of tmp_reg was due to
an early hack that I didn't remove. Will remove tmp_reg in the next revision.
>
> Additionally, similar to check_mem_size_reg() above, bypassing
> mark_chain_precision() for stack arguments appears to skip backtracking
> entirely.
Ack.
* Re: [PATCH bpf-next v6 13/17] bpf: Reject stack arguments if tail call reachable
2026-04-19 17:08 ` sashiko-bot
@ 2026-04-19 18:20 ` Yonghong Song
0 siblings, 0 replies; 51+ messages in thread
From: Yonghong Song @ 2026-04-19 18:20 UTC (permalink / raw)
To: sashiko; +Cc: bpf
On 4/19/26 10:08 AM, sashiko-bot@kernel.org wrote:
> Sashiko AI review found 1 potential issue(s):
> - [High] The verifier incorrectly rejects perfectly valid subprograms with stack arguments due to an order-dependent state leak of `tail_call_reachable` during the call graph traversal.
> --
>
> commit 2677e39a30cc9fcf44cadb1927d1362ad39f7b4e
> Author: Yonghong Song <yonghong.song@linux.dev>
>
> bpf: Reject stack arguments if tail call reachable
>
This commit disallows tail calls in the presence of stack arguments,
rejecting subprograms with stack arguments if a tail call is reachable.
>
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index 13f1fd7880921..af89750498836 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
> [ ... ]
>
>> @@ -5672,14 +5677,23 @@ static int check_max_stack_depth_subprog(struct bpf_verifier_env *env, int idx,
>> * this info will be utilized by JIT so that we will be preserving the
>> * tail call counter throughout bpf2bpf calls combined with tailcalls
>> */
>> - if (tail_call_reachable)
>> + if (tail_call_reachable) {
>> for (tmp = idx; tmp >= 0; tmp = dinfo[tmp].caller) {
>> if (subprog[tmp].is_exception_cb) {
>> verbose(env, "cannot tail call within exception cb\n");
>> return -EINVAL;
>> }
>> + if (subprog_has_stack_args(&subprog[tmp])) {
>> + verbose(env, "tail_calls are not allowed in programs with stack args\n");
>> + return -EINVAL;
>> + }
> Can this leak the tail_call_reachable state across disjoint call graph branches?
I guess this is a known issue in that tail_call_reachable is marked more
than necessary, since this has been mentioned a couple of times. I will fix
it in the next revision.
>
> In check_max_stack_depth_subprog(), tail_call_reachable is a local variable
> that seems to act as an accumulator. If the main program first calls subprogram A
> (which has a tail call) and subsequently calls subprogram B (which has stack
> arguments but no tail call), tail_call_reachable remains true while traversing B.
>
> When the traversal finishes subprogram B, this newly added check evaluates the
> current call stack (main -> B). Since tail_call_reachable leaked from the main
> -> A traversal, and subprogram B has stack arguments, the check would falsely
> reject the program.
>
> Conversely, if the bytecode is ordered so that B is called before A,
> tail_call_reachable would be false while traversing B, allowing the program to
> load successfully.
>
> Does this introduce an unpredictable, order-dependent functional regression?
>
>> subprog[tmp].tail_call_reachable = true;
>> }
>> + } else if (!idx && subprog[0].has_tail_call && subprog_has_stack_args(&subprog[0])) {
>> + verbose(env, "tail_calls are not allowed in programs with stack args\n");
>> + return -EINVAL;
>> + }
>> +
>> if (subprog[0].tail_call_reachable)
>> env->prog->aux->tail_call_reachable = true;
>>
* Re: [PATCH bpf-next v6 08/17] bpf: Reject stack arguments in non-JITed programs
2026-04-19 16:33 ` [PATCH bpf-next v6 08/17] bpf: Reject stack arguments in non-JITed programs Yonghong Song
@ 2026-04-19 18:21 ` sashiko-bot
2026-04-20 4:23 ` Yonghong Song
0 siblings, 1 reply; 51+ messages in thread
From: sashiko-bot @ 2026-04-19 18:21 UTC (permalink / raw)
To: Yonghong Song; +Cc: bpf
Sashiko AI review found 2 potential issue(s):
- [High] The loop in `bpf_fixup_call_args()` incorrectly rejects main BPF programs with >5 arguments in their BTF signature when JIT is disabled, even though main programs do not actually use stack arguments.
- [High] Standalone global functions with stack arguments can incorrectly fall back to the interpreter if JIT compilation fails, leading to a kernel panic.
--
commit f56b0fedf39a845476910805e073f6b2d3a4ad82
Author: Yonghong Song <yonghong.song@linux.dev>
bpf: Reject stack arguments in non-JITed programs
The interpreter does not understand the bpf register r11 (BPF_REG_PARAMS)
used for stack argument addressing. So reject interpreter usage if stack
arguments are used either in the main program or any subprogram.
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index ae10b9ca018df..ec8523e6e4eb4 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -2599,7 +2599,7 @@ struct bpf_prog *__bpf_prog_select_runtime(struct bpf_verifier_env *env, struct
> goto finalize;
>
> if (IS_ENABLED(CONFIG_BPF_JIT_ALWAYS_ON) ||
> - bpf_prog_has_kfunc_call(fp))
> + bpf_prog_has_kfunc_call(fp) || fp->aux->stack_arg_depth)
> jit_needed = true;
When loading a standalone global function (env->subprog_cnt <= 1) with >5
arguments, bpf_jit_subprogs() sets fp->aux->stack_arg_depth to 0 (the
outgoing_stack_arg_depth). If JIT is requested, bpf_fixup_call_args() will
exit early, bypassing the newly added loop that checks
incoming_stack_arg_depth.
Because fp->aux->stack_arg_depth has been cleared to 0, jit_needed evaluates
to false here.
If JIT compilation subsequently fails (e.g., due to memory limits or
unsupported instructions), does the kernel silently fall back to the
interpreter instead of rejecting the program?
If executed, the interpreter encounters instructions utilizing BPF_REG_PARAMS
(r11, mapped to regs[11]), which is initialized to 0 in the interpreter's
stack frame. Could dereferencing this register to read stack arguments
result in a NULL pointer dereference and cause a kernel panic?
> diff --git a/kernel/bpf/fixups.c b/kernel/bpf/fixups.c
> index c4e0224ad2f2e..3c9a82823913e 100644
> --- a/kernel/bpf/fixups.c
> +++ b/kernel/bpf/fixups.c
> @@ -1411,6 +1411,12 @@ int bpf_fixup_call_args(struct bpf_verifier_env *env)
> verbose(env, "calling kernel functions are not allowed in non-JITed programs\n");
> return -EINVAL;
> }
> + for (i = 0; i < env->subprog_cnt; i++) {
> + if (env->subprog_info[i].incoming_stack_arg_depth) {
When verifying the main BPF program (i = 0), if the provided BTF signature
has >5 arguments, the verifier sets
env->subprog_info[0].incoming_stack_arg_depth > 0.
Although the verifier subsequently realizes this signature doesn't match the
expected single ctx argument and marks the BTF info as unreliable, it never
clears the incoming_stack_arg_depth.
Because this loop starts at i = 0 and unconditionally checks
incoming_stack_arg_depth, will it reject the entire program if JIT is
disabled (e.g., via sysctl net.core.bpf_jit_enable=0), even though main
programs do not actually use stack arguments?
Does this break backwards compatibility for existing valid BPF programs with
6+ argument BTF signatures that used to load successfully?
> + verbose(env, "stack args are not supported in non-JITed programs\n");
> + return -EINVAL;
> + }
> + }
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260419163316.731019-1-yonghong.song@linux.dev?part=8
* Re: [PATCH bpf-next v6 14/17] bpf,x86: Implement JIT support for stack arguments
2026-04-19 17:25 ` sashiko-bot
@ 2026-04-19 18:55 ` Yonghong Song
0 siblings, 0 replies; 51+ messages in thread
From: Yonghong Song @ 2026-04-19 18:55 UTC (permalink / raw)
To: sashiko; +Cc: bpf
On 4/19/26 10:25 AM, sashiko-bot@kernel.org wrote:
> Sashiko AI review found 4 potential issue(s):
> - [Critical] Incoming Arg 6 mapped to volatile R9 is clobbered by calls, causing verifier type confusion.
> - [Critical] Sub-register BPF_LDX from incoming Arg 6 incorrectly copies the full 64-bit R9 register, breaking verifier bounds tracking.
> - [High] BPF_ST instructions for outgoing Arg 6 miss size checks, causing out-of-bounds stack corruption and uninitialized arguments.
> - [High] Sub-register BPF_STX to outgoing Arg 6 overwrites the entire 64-bit R9 register.
> --
>
> commit 16df4bd097c0d6a600a22bdb40ebfc360ece0fc6
> Author: Yonghong Song <yonghong.song@linux.dev>
>
> bpf,x86: Implement JIT support for stack arguments
>
> This commit adds x86_64 JIT support for BPF functions with more than 5
> arguments. The extra arguments are passed through a stack area addressed by
> register r11 (BPF_REG_PARAMS), mapping Arg 6 to R9 and Args 7+ to the stack.
>
>> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
>> index ea9e707e8abff..06f33fef1e8e1 100644
>> --- a/arch/x86/net/bpf_jit_comp.c
>> +++ b/arch/x86/net/bpf_jit_comp.c
> [ ... ]
>> @@ -2134,12 +2227,26 @@ static int do_jit(struct bpf_verifier_env *env, struct bpf_prog *bpf_prog, int *
>> EMIT1(0xC7);
>> goto st;
>> case BPF_ST | BPF_MEM | BPF_DW:
>> + if (dst_reg == BPF_REG_PARAMS && insn->off == -8) {
>> + /* Arg 6: store immediate in r9 register */
>> + emit_mov_imm64(&prog, X86_REG_R9, imm32 >> 31, (u32)imm32);
>> + break;
>> + }
>> EMIT2(add_1mod(0x48, dst_reg), 0xC7);
>>
>> -st: if (is_imm8(insn->off))
>> - EMIT2(add_1reg(0x40, dst_reg), insn->off);
>> +st: insn_off = insn->off;
>> + if (dst_reg == BPF_REG_PARAMS) {
>> + /* Args 7+: reverse BPF negative offsets to
>> + * x86 positive rsp offsets.
>> + * BPF off=-16 → [rsp+0], off=-24 → [rsp+8], ...
>> + */
>> + insn_off = outgoing_arg_base - outgoing_rsp - insn_off - 16;
> Does this code handle 32-bit, 16-bit, or 8-bit stores to Arg 6 correctly?
No. Only 64-bit stores are supported. See patch "bpf: Enable r11 based insns".
>
> The check for dst_reg == BPF_REG_PARAMS && insn->off == -8 is uniquely placed
> inside the BPF_ST | BPF_MEM | BPF_DW case block. If a BPF program uses a
> smaller size store like BPF_W to prepare Arg 6, the code bypasses this check
> and falls through to the st: label.
>
> At the st: label, the insn_off calculation equates to RSP - 8 relative to
> RBP, emitting a memory store that corrupts unallocated stack memory. In
> addition, R9 is never updated, which leaves Arg 6 uninitialized for the callee.
>
> [ ... ]
>> @@ -2149,7 +2256,17 @@ st: if (is_imm8(insn->off))
>> case BPF_STX | BPF_MEM | BPF_B:
>> case BPF_STX | BPF_MEM | BPF_H:
>> case BPF_STX | BPF_MEM | BPF_W:
>> case BPF_STX | BPF_MEM | BPF_DW:
>> - emit_stx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn->off);
>> + if (dst_reg == BPF_REG_PARAMS && insn->off == -8) {
>> + /* Arg 6: store register value in r9 */
>> + EMIT_mov(X86_REG_R9, src_reg);
>> + break;
>> + }
> Does this full 64-bit move break semantics for partial writes?
It should be okay. The llvm compiler will promote values to 64 bits before
emitting r11-based loads/stores.
>
> For smaller sizes like BPF_B or BPF_W, EMIT_mov generates a full 64-bit move
> that copies all 64 bits from src_reg to R9.
>
> BPF semantics dictate that a sub-register store should only modify the
> specified lower bytes of the target location, leaving the rest unmodified. This
> full copy could overwrite data previously stored in the upper bytes of Arg 6
> via other partial writes.
>
> [ ... ]
>> @@ -2248,6 +2365,19 @@ st: if (is_imm8(insn->off))
>> case BPF_LDX | BPF_PROBE_MEMSX | BPF_H:
>> case BPF_LDX | BPF_PROBE_MEMSX | BPF_W:
>> insn_off = insn->off;
>> + if (src_reg == BPF_REG_PARAMS) {
>> + if (insn_off == 8) {
>> + /* Incoming arg 6: read from r9 */
>> + EMIT_mov(dst_reg, X86_REG_R9);
>> + break;
>> + }
> Can R9 be clobbered by function calls when used for Arg 6?
Yes, I think so, as you mentioned below for the Arg 6 (r9) usage.
>
> The verifier treats incoming stack arguments as stable memory preserved across
> function calls. However, under the x86-64 System V ABI, R9 is a caller-saved
> (volatile) register.
>
> If a BPF program reads [r11+8] after making a helper, kfunc, or subprog call,
> it will read the clobbered value left in R9. The verifier will incorrectly
> track this as the uncorrupted argument, which could cause type confusion.
To make things easier, I think the verifier can enforce that all stack-argument
loads happen before any stack-argument stores, and that all stack-argument
loads happen before any function/kfunc calls. The following is an example:
$ cat t.c
int bar1(int, int, int, int, int, int);
int foo(int a, int b, int c, int d, int e, int f) {
int ret;
ret = bar1(a, a, a, b, b, b);
ret += bar1(a, a, a, b, b, f);
return ret;
}
$ llvm-objdump -d t.o
t.o: file format elf64-x86-64
Disassembly of section .text:
0000000000000000 <foo>:
0: 55 pushq %rbp
1: 41 57 pushq %r15
3: 41 56 pushq %r14
5: 53 pushq %rbx
6: 50 pushq %rax
7: 44 89 cb movl %r9d, %ebx
a: 89 f5 movl %esi, %ebp
c: 41 89 fe movl %edi, %r14d
f: 89 fe movl %edi, %esi
11: 89 fa movl %edi, %edx
13: 89 e9 movl %ebp, %ecx
15: 41 89 e8 movl %ebp, %r8d
18: 41 89 e9 movl %ebp, %r9d
1b: e8 00 00 00 00 callq 0x20 <foo+0x20>
20: 41 89 c7 movl %eax, %r15d
23: 44 89 f7 movl %r14d, %edi
26: 44 89 f6 movl %r14d, %esi
29: 44 89 f2 movl %r14d, %edx
2c: 89 e9 movl %ebp, %ecx
2e: 41 89 e8 movl %ebp, %r8d
31: 41 89 d9 movl %ebx, %r9d
34: e8 00 00 00 00 callq 0x39 <foo+0x39>
39: 44 01 f8 addl %r15d, %eax
3c: 48 83 c4 08 addq $0x8, %rsp
40: 5b popq %rbx
41: 41 5e popq %r14
43: 41 5f popq %r15
45: 5d popq %rbp
46: c3 retq
foo()'s argument 'f' is first loaded and saved to a callee-saved register:
movl %r9d, %ebx
and after the first bar1() call, it does
movl %ebx, %r9d
to pass it as the 6th argument.
So I think we can enforce such rules (as mentioned above)
in the verifier.
>
> Additionally, does EMIT_mov(dst_reg, X86_REG_R9) zero-extend sub-register
> loads?
Only 64-bit register-to-register moves are supported.
>
> This generates a full 64-bit register-to-register move regardless of the load
> size. For smaller loads like BPF_B, BPF semantics require the value to be
> zero-extended.
>
> Since the entire 64-bit R9 register is copied without zeroing the upper 56
> bits, garbage left by the caller in the upper bits will leak into the
> destination register, breaking the bounds tracked by the verifier.
>
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH bpf-next v6 07/17] bpf: Support stack arguments for bpf functions
2026-04-19 16:33 ` [PATCH bpf-next v6 07/17] bpf: Support stack arguments for bpf functions Yonghong Song
@ 2026-04-19 19:15 ` sashiko-bot
2026-04-20 4:35 ` Yonghong Song
2026-04-21 0:37 ` Alexei Starovoitov
1 sibling, 1 reply; 51+ messages in thread
From: sashiko-bot @ 2026-04-19 19:15 UTC (permalink / raw)
To: Yonghong Song; +Cc: bpf
Sashiko AI review found 3 potential issue(s):
- [Critical] Missing precision tracking for stack arguments (`BPF_REG_PARAMS` / `r11`) leads to incorrect verifier state pruning and arbitrary out-of-bounds memory access.
- [Medium] `BPF_PROG_TYPE_EXT` (freplace) programs are incorrectly rejected when targeting static subprograms with more than 5 arguments.
- [Low] `stack_arg_safe()` strictly compares `stack_arg_depth`, preventing safe paths from being pruned and leading to state explosion.
--
commit 0c58d1bb907c4a6a387f1afd22baae9d43e95a55
Author: Yonghong Song <yonghong.song@linux.dev>
Subject: bpf: Support stack arguments for bpf functions
This commit introduces support for passing more than 5 arguments to BPF
subprograms by using a dedicated stack area accessed via register r11.
It tracks these stack arguments separately from the regular stack to ensure
proper verification and pruning.
Link: https://github.com/llvm/llvm-project/pull/189060
> diff --git a/kernel/bpf/states.c b/kernel/bpf/states.c
> index 8478d2c6ed5b6..235841d23fe3f 100644
> --- a/kernel/bpf/states.c
> +++ b/kernel/bpf/states.c
[ ... ]
> @@ -838,6 +838,44 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
> return true;
> }
>
> +/*
> + * Compare stack arg slots between old and current states.
> + * Outgoing stack args are path-local state and must agree for pruning.
> + */
> +static bool stack_arg_safe(struct bpf_verifier_env *env, struct bpf_func_state *old,
> + struct bpf_func_state *cur, struct bpf_idmap *idmap,
> + enum exact_level exact)
> +{
> + int i, nslots;
> +
> + if (old->incoming_stack_arg_depth != cur->incoming_stack_arg_depth)
> + return false;
> +
> + /* Compare both incoming and outgoing stack arg slots. */
> + if (old->stack_arg_depth != cur->stack_arg_depth)
> + return false;
Does this strict comparison of stack_arg_depth prevent safe paths from
being pruned?
Since stack_arg_depth tracks the allocated capacity rather than the
active outgoing arguments tracked by out_stack_arg_mask, two branches
that call subprograms with different stack argument requirements might
end up with permanently different stack_arg_depth values.
Could this unnecessarily increase verification complexity by failing to
prune functionally equivalent states?
[ ... ]
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 6aa4dc161a563..78c9322870a51 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
[ ... ]
> +static int check_stack_arg_write(struct bpf_verifier_env *env, struct bpf_func_state *state,
> + int off, int value_regno)
> +{
> + int max_stack_arg_regs = MAX_BPF_FUNC_ARGS - MAX_BPF_FUNC_REG_ARGS;
[ ... ]
> + state->out_stack_arg_mask |= BIT(idx);
> + return 0;
> +}
> +
> +/*
> + * Read a value from the incoming stack arg area.
> + * off is a positive offset from r11 (e.g. +8 for arg6, +16 for arg7).
> + * Callers ensure off > 0, 8-byte aligned, and size is BPF_REG_SIZE.
> + */
> +static int check_stack_arg_read(struct bpf_verifier_env *env, struct bpf_func_state *state,
> + int off, int dst_regno)
> +{
[ ... ]
> + if (is_spillable_regtype(arg->type))
> + copy_register_state(&cur->regs[dst_regno], arg);
> + else
> + mark_reg_unknown(env, cur->regs, dst_regno);
> + return 0;
> +}
Should check_stack_arg_read() and check_stack_arg_write() call
bpf_push_jmp_history() with the INSN_F_STACK_ACCESS flag?
When the callee uses a stack argument in a context requiring precision,
backtracking is initiated. Because the INSN_F_STACK_ACCESS flag is
missing, backtrack_insn() might silently return 0. This clears the
destination register without propagating the precision requirement to
the stack slot.
Could this cause the verifier to incorrectly prune execution paths
containing different, potentially unsafe scalar values, allowing
programs to bypass verifier bounds checks and perform arbitrary
out-of-bounds accesses? Note that the subsequent patch that tracks
r11 registers in const_fold and liveness does not address this
backtracking logic.
[ ... ]
> @@ -18808,7 +19097,7 @@ static int do_check_common(struct bpf_verifier_env *env, int subprog)
> goto out;
> }
> }
> - for (i = BPF_REG_1; i <= sub->arg_cnt; i++) {
> + for (i = BPF_REG_1; i <= min_t(u32, sub->arg_cnt, MAX_BPF_FUNC_REG_ARGS); i++) {
> arg = &sub->args[i - BPF_REG_1];
> reg = ®s[i];
For freplace programs targeting static subprograms with more than 5
arguments, do we need to initialize state->frame[0]->incoming_stack_arg_depth
here or allocate stack_arg_regs?
Currently it seems to only initialize arguments up to MAX_BPF_FUNC_REG_ARGS.
If incoming_stack_arg_depth remains 0, an EXT program attempting to read
its 6th argument via BPF_REG_PARAMS would fail verification in
check_stack_arg_read() with -EACCES.
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260419163316.731019-1-yonghong.song@linux.dev?part=7
* Re: [PATCH bpf-next v6 08/17] bpf: Reject stack arguments in non-JITed programs
2026-04-19 18:21 ` sashiko-bot
@ 2026-04-20 4:23 ` Yonghong Song
0 siblings, 0 replies; 51+ messages in thread
From: Yonghong Song @ 2026-04-20 4:23 UTC (permalink / raw)
To: sashiko; +Cc: bpf
On 4/19/26 11:21 AM, sashiko-bot@kernel.org wrote:
> Sashiko AI review found 2 potential issue(s):
> - [High] The loop in `bpf_fixup_call_args()` incorrectly rejects main BPF programs with >5 arguments in their BTF signature when JIT is disabled, even though main programs do not actually use stack arguments.
> - [High] Standalone global functions with stack arguments can incorrectly fall back to the interpreter if JIT compilation fails, leading to a kernel panic.
> --
>
> commit f56b0fedf39a845476910805e073f6b2d3a4ad82
> Author: Yonghong Song <yonghong.song@linux.dev>
>
> bpf: Reject stack arguments in non-JITed programs
>
> The interpreter does not understand the bpf register r11 (BPF_REG_PARAMS)
> used for stack argument addressing. So reject interpreter usage if stack
> arguments are used either in the main program or any subprogram.
>
>> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
>> index ae10b9ca018df..ec8523e6e4eb4 100644
>> --- a/kernel/bpf/core.c
>> +++ b/kernel/bpf/core.c
>> @@ -2599,7 +2599,7 @@ struct bpf_prog *__bpf_prog_select_runtime(struct bpf_verifier_env *env, struct
>> goto finalize;
>>
>> if (IS_ENABLED(CONFIG_BPF_JIT_ALWAYS_ON) ||
>> - bpf_prog_has_kfunc_call(fp))
>> + bpf_prog_has_kfunc_call(fp) || fp->aux->stack_arg_depth)
>> jit_needed = true;
> When loading a standalone global function (env->subprog_cnt <= 1) with >5
> arguments, bpf_jit_subprogs() sets fp->aux->stack_arg_depth to 0 (the
> outgoing_stack_arg_depth). If JIT is requested, bpf_fixup_call_args() will
> exit early, bypassing the newly added loop that checks
> incoming_stack_arg_depth.
>
> Because fp->aux->stack_arg_depth has been cleared to 0, jit_needed evaluates
> to false here.
I think the below change should fix the problem.
- if (env->subprog_cnt <= 1)
+ if (env->subprog_cnt <= 1) {
+ env->prog->aux->stack_arg_depth = env->subprog_info[0].outgoing_stack_arg_depth
+ + env->subprog_info[0].incoming_stack_arg_depth;
return 0;
+ }
In the above scenario, env->prog->aux->stack_arg_depth will not be 0, so jit_needed
evaluates to true.
>
> If JIT compilation subsequently fails (e.g., due to memory limits or
> unsupported instructions), does the kernel silently fall back to the
> interpreter instead of rejecting the program?
>
> If executed, the interpreter encounters instructions utilizing BPF_REG_PARAMS
> (r11, mapped to regs[11]), which is initialized to 0 in the interpreter's
> stack frame. Could dereferencing this register to read stack arguments
> result in a NULL pointer dereference and cause a kernel panic?
>
>> diff --git a/kernel/bpf/fixups.c b/kernel/bpf/fixups.c
>> index c4e0224ad2f2e..3c9a82823913e 100644
>> --- a/kernel/bpf/fixups.c
>> +++ b/kernel/bpf/fixups.c
>> @@ -1411,6 +1411,12 @@ int bpf_fixup_call_args(struct bpf_verifier_env *env)
>> verbose(env, "calling kernel functions are not allowed in non-JITed programs\n");
>> return -EINVAL;
>> }
>> + for (i = 0; i < env->subprog_cnt; i++) {
>> + if (env->subprog_info[i].incoming_stack_arg_depth) {
> When verifying the main BPF program (i = 0), if the provided BTF signature
> has >5 arguments, the verifier sets
> env->subprog_info[0].incoming_stack_arg_depth > 0.
We can ignore i = 0: the first subprog is the main prog, so we can skip it.
>
> Although the verifier subsequently realizes this signature doesn't match the
> expected single ctx argument and marks the BTF info as unreliable, it never
> clears the incoming_stack_arg_depth.
>
> Because this loop starts at i = 0 and unconditionally checks
> incoming_stack_arg_depth, will it reject the entire program if JIT is
> disabled (e.g., via sysctl net.core.bpf_jit_enable=0), even though main
> programs do not actually use stack arguments?
>
> Does this break backwards compatibility for existing valid BPF programs with
> 6+ argument BTF signatures that used to load successfully?
>
>> + verbose(env, "stack args are not supported in non-JITed programs\n");
>> + return -EINVAL;
>> + }
>> + }
* Re: [PATCH bpf-next v6 07/17] bpf: Support stack arguments for bpf functions
2026-04-19 19:15 ` sashiko-bot
@ 2026-04-20 4:35 ` Yonghong Song
0 siblings, 0 replies; 51+ messages in thread
From: Yonghong Song @ 2026-04-20 4:35 UTC (permalink / raw)
To: sashiko; +Cc: bpf
On 4/19/26 12:15 PM, sashiko-bot@kernel.org wrote:
> Sashiko AI review found 3 potential issue(s):
> - [Critical] Missing precision tracking for stack arguments (`BPF_REG_PARAMS` / `r11`) leads to incorrect verifier state pruning and arbitrary out-of-bounds memory access.
> - [Medium] `BPF_PROG_TYPE_EXT` (freplace) programs are incorrectly rejected when targeting static subprograms with more than 5 arguments.
> - [Low] `stack_arg_safe()` strictly compares `stack_arg_depth`, preventing safe paths from being pruned and leading to state explosion.
> --
>
> commit 0c58d1bb907c4a6a387f1afd22baae9d43e95a55
> Author: Yonghong Song <yonghong.song@linux.dev>
> Subject: bpf: Support stack arguments for bpf functions
>
> This commit introduces support for passing more than 5 arguments to BPF
> subprograms by using a dedicated stack area accessed via register r11.
> It tracks these stack arguments separately from the regular stack to ensure
> proper verification and pruning.
>
> Link: https://github.com/llvm/llvm-project/pull/189060
>
>> diff --git a/kernel/bpf/states.c b/kernel/bpf/states.c
>> index 8478d2c6ed5b6..235841d23fe3f 100644
>> --- a/kernel/bpf/states.c
>> +++ b/kernel/bpf/states.c
> [ ... ]
>> @@ -838,6 +838,44 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
>> return true;
>> }
>>
>> +/*
>> + * Compare stack arg slots between old and current states.
>> + * Outgoing stack args are path-local state and must agree for pruning.
>> + */
>> +static bool stack_arg_safe(struct bpf_verifier_env *env, struct bpf_func_state *old,
>> + struct bpf_func_state *cur, struct bpf_idmap *idmap,
>> + enum exact_level exact)
>> +{
>> + int i, nslots;
>> +
>> + if (old->incoming_stack_arg_depth != cur->incoming_stack_arg_depth)
>> + return false;
>> +
>> + /* Compare both incoming and outgoing stack arg slots. */
>> + if (old->stack_arg_depth != cur->stack_arg_depth)
>> + return false;
> Does this strict comparison of stack_arg_depth prevent safe paths from
> being pruned?
>
> Since stack_arg_depth tracks the allocated capacity rather than the
> active outgoing arguments tracked by out_stack_arg_mask, two branches
> that call subprograms with different stack argument requirements might
> end up with permanently different stack_arg_depth values.
>
> Could this unnecessarily increase verification complexity by failing to
> prune functionally equivalent states?
You are right. We should not compare stack_arg_depth, since it tracks allocated
capacity rather than active arguments. Comparing out_stack_arg_mask is the
correct approach.
>
> [ ... ]
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index 6aa4dc161a563..78c9322870a51 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
> [ ... ]
>> +static int check_stack_arg_write(struct bpf_verifier_env *env, struct bpf_func_state *state,
>> + int off, int value_regno)
>> +{
>> + int max_stack_arg_regs = MAX_BPF_FUNC_ARGS - MAX_BPF_FUNC_REG_ARGS;
> [ ... ]
>> + state->out_stack_arg_mask |= BIT(idx);
>> + return 0;
>> +}
>> +
>> +/*
>> + * Read a value from the incoming stack arg area.
>> + * off is a positive offset from r11 (e.g. +8 for arg6, +16 for arg7).
>> + * Callers ensure off > 0, 8-byte aligned, and size is BPF_REG_SIZE.
>> + */
>> +static int check_stack_arg_read(struct bpf_verifier_env *env, struct bpf_func_state *state,
>> + int off, int dst_regno)
>> +{
> [ ... ]
>> + if (is_spillable_regtype(arg->type))
>> + copy_register_state(&cur->regs[dst_regno], arg);
>> + else
>> + mark_reg_unknown(env, cur->regs, dst_regno);
>> + return 0;
>> +}
> Should check_stack_arg_read() and check_stack_arg_write() call
> bpf_push_jmp_history() with the INSN_F_STACK_ACCESS flag?
>
> When the callee uses a stack argument in a context requiring precision,
> backtracking is initiated. Because the INSN_F_STACK_ACCESS flag is
> missing, backtrack_insn() might silently return 0. This clears the
> destination register without propagating the precision requirement to
> the stack slot.
>
> Could this cause the verifier to incorrectly prune execution paths
> containing different, potentially unsafe scalar values, allowing
> programs to bypass verifier bounds checks and perform arbitrary
> out-of-bounds accesses? Note that the subsequent patch that tracks
> r11 registers in const_fold and liveness does not address this
> backtracking logic.
Yes, you are right. I will think of how to get backtracking work
for check_stack_arg_read() and check_stack_arg_write().
>
> [ ... ]
>> @@ -18808,7 +19097,7 @@ static int do_check_common(struct bpf_verifier_env *env, int subprog)
>> goto out;
>> }
>> }
>> - for (i = BPF_REG_1; i <= sub->arg_cnt; i++) {
>> + for (i = BPF_REG_1; i <= min_t(u32, sub->arg_cnt, MAX_BPF_FUNC_REG_ARGS); i++) {
>> arg = &sub->args[i - BPF_REG_1];
>> reg = ®s[i];
> For freplace programs targeting static subprograms with more than 5
> arguments, do we need to initialize state->frame[0]->incoming_stack_arg_depth
> here or allocate stack_arg_regs?
>
> Currently it seems to only initialize arguments up to MAX_BPF_FUNC_REG_ARGS.
> If incoming_stack_arg_depth remains 0, an EXT program attempting to read
> its 6th argument via BPF_REG_PARAMS would fail verification in
> check_stack_arg_read() with -EACCES.
For freplace, since the patch set is already quite big, I would like
to delay this. I will add a check so that an freplace program targeting
a subprog with more than 5 parameters causes verification failure.
* Re: [PATCH bpf-next v6 15/17] selftests/bpf: Add tests for BPF function stack arguments
2026-04-19 17:15 ` sashiko-bot
@ 2026-04-20 5:52 ` Yonghong Song
0 siblings, 0 replies; 51+ messages in thread
From: Yonghong Song @ 2026-04-20 5:52 UTC (permalink / raw)
To: sashiko; +Cc: bpf
On 4/19/26 10:15 AM, sashiko-bot@kernel.org wrote:
> Sashiko AI review found 3 potential issue(s):
> - [Medium] The `test_async_cb_many` test uses an extremely short `usleep(50)` to wait for a BPF timer callback to execute, leading to potential flakiness.
> - [Low] The block comment for `global_two_callees` incorrectly refers to the function as `bar`.
> - [Low] The `bpf_iter_testmod_seq_next` kfunc is incorrectly declared with an `int *` return type in `stack_arg_kfunc.c`.
> --
>
> commit 63c1c9ef99d02448833e84a39ecb212ccaaf5b80
> Author: Yonghong Song <yonghong.song@linux.dev>
>
> selftests/bpf: Add tests for BPF function stack arguments
>
> This commit adds selftests covering stack argument passing for both BPF-to-BPF
> subprog calls and kfunc calls with more than 5 arguments.
>
>> diff --git a/tools/testing/selftests/bpf/prog_tests/stack_arg.c b/tools/testing/selftests/bpf/prog_tests/stack_arg.c
>> new file mode 100644
>> index 0000000000000..d61bac33f8092
>> --- /dev/null
>> +++ b/tools/testing/selftests/bpf/prog_tests/stack_arg.c
> [ ... ]
>
>> +static void test_async_cb_many(void)
>> +{
>> + struct stack_arg *skel;
>> +
>> + skel = stack_arg__open();
>> + if (!ASSERT_OK_PTR(skel, "open"))
>> + return;
>> +
>> + if (!skel->rodata->has_stack_arg) {
>> + test__skip();
>> + goto out;
>> + }
>> +
>> + if (!ASSERT_OK(stack_arg__load(skel), "load"))
>> + goto out;
>> +
>> + run_subtest(skel->progs.test_async_cb_many_args, 0);
>> +
>> + /* Wait for the timer callback to fire and verify the result.
>> + * 10+20+30+40+50+60+70+80 = 360
>> + */
>> + usleep(50);
>> + ASSERT_EQ(skel->bss->timer_result, 360, "timer_result");
> Is 50 microseconds enough time to reliably wait for the timer callback to
> execute?
>
> In CI environments or heavily loaded virtualization systems, might the
> scheduling and softirq execution be delayed by more than a fraction of a
> millisecond? If the timer does not fire in exactly that short window, the
> test would incorrectly read a timer_result of 0 and fail.
>
> Would it be more robust to use a polling loop or a significantly larger
> timeout to avoid flaky tests?
I am using the same 'usleep(50)' in prog_tests/timer.c:
timer__detach(timer_skel);
usleep(50); /* 10 usecs should be enough, but give it extra */
/* check that timer_cb1() was executed 10+10 times */
...
So I think 50 us should be okay.
>
> [ ... ]
>
>> diff --git a/tools/testing/selftests/bpf/progs/stack_arg.c b/tools/testing/selftests/bpf/progs/stack_arg.c
>> new file mode 100644
>> index 0000000000000..8c198ee952ff4
>> --- /dev/null
>> +++ b/tools/testing/selftests/bpf/progs/stack_arg.c
> [ ... ]
>
>> +/* bar calls foo1 (3 stack args) and foo2 (5 stack args).
>> + * The outgoing stack arg area is sized for foo2 (the larger callee).
>> + * Stores for foo1 are a subset of the area used by foo2.
>> + * Result: foo1(1,2,3,4,5,6,7,8) + foo2(1,2,3,4,5,6,7,8,9,10) = 36 + 55 = 91
>> + *
>> + * Pass a-e through so the compiler can't constant-fold the stack args away.
>> + */
>> +__noinline int global_two_callees(int a, int b, int c, int d, int e)
> This isn't a bug, but the comment refers to a function named bar, while the
> function being described is actually named global_two_callees.
Thanks. Will fix.
>
> [ ... ]
>
>> diff --git a/tools/testing/selftests/bpf/progs/stack_arg_kfunc.c b/tools/testing/selftests/bpf/progs/stack_arg_kfunc.c
>> new file mode 100644
>> index 0000000000000..6cc404d578639
>> --- /dev/null
>> +++ b/tools/testing/selftests/bpf/progs/stack_arg_kfunc.c
> [ ... ]
>
>> +#if defined(__TARGET_ARCH_x86) && defined(__BPF_FEATURE_STACK_ARGUMENT)
>> +
>> +const volatile bool has_stack_arg = true;
>> +
>> +struct bpf_iter_testmod_seq {
>> + u64 :64;
>> + u64 :64;
>> +};
>> +
>> +extern int bpf_iter_testmod_seq_new(struct bpf_iter_testmod_seq *it, s64 value, int cnt) __ksym;
>> +extern int *bpf_iter_testmod_seq_next(struct bpf_iter_testmod_seq *it) __ksym;
> Does bpf_iter_testmod_seq_next() actually return an int pointer?
This one
+extern int *bpf_iter_testmod_seq_next(struct bpf_iter_testmod_seq *it) __ksym;
is dead code. Will remove.
>
> Looking at the corresponding kfunc definition in bpf_testmod.c, the function
> appears to return an s64 pointer. Although this function isn't actively called
> in this test file (which prevents a BTF verifier failure here), could the
> mismatched return type declaration cause confusion or subtle issues in the
> future?
>
* Re: [PATCH bpf-next v6 17/17] selftests/bpf: Add verifier tests for stack argument validation
2026-04-19 17:21 ` sashiko-bot
@ 2026-04-20 6:14 ` Yonghong Song
0 siblings, 0 replies; 51+ messages in thread
From: Yonghong Song @ 2026-04-20 6:14 UTC (permalink / raw)
To: sashiko; +Cc: bpf
On 4/19/26 10:21 AM, sashiko-bot@kernel.org wrote:
> Sashiko AI review found 1 potential issue(s):
> - [Medium] Test fragility in `subprog_deref_arg6` error matching due to compiler register allocation
> --
>
> commit 3936b0a6dea45627278be80c25b20ff18f2821d4
> Author: Yonghong Song <yonghong.song@linux.dev>
>
> selftests/bpf: Add verifier tests for stack argument validation
>
> This commit adds inline-asm based verifier tests to exercise BPF stack
> argument validation logic directly, including both positive and negative
> tests for tracking pointers, references, and null propagation.
>
>> diff --git a/tools/testing/selftests/bpf/progs/verifier_stack_arg.c b/tools/testing/selftests/bpf/progs/verifier_stack_arg.c
>> new file mode 100644
>> index 0000000000000..66dd11840a63e
>> --- /dev/null
>> +++ b/tools/testing/selftests/bpf/progs/verifier_stack_arg.c
> [ ... ]
>
>> +__noinline __used
>> +static long subprog_deref_arg6(long a, long b, long c, long d, long e, long *f)
>> +{
>> + return *f;
>> +}
> [ ... ]
>
>> +SEC("tc")
>> +__description("stack_arg: pruning with different stack arg types")
>> +__failure
>> +__flag(BPF_F_TEST_STATE_FREQ)
>> +__arch_x86_64
>> +__msg("R1 invalid mem access 'scalar'")
> Does this hardcode an assumption about Clang's register allocation?
Yes, it is possible that clang may pick a different register.
So, as you suggested, I will use a regex like the one below:
__msg("R{{[0-9]}} invalid mem access 'scalar'")
which should fix the issue.
>
> Since subprog_deref_arg6 is standard C rather than a __naked inline
> assembly function, the compiler determines which register is used to
> dereference f. If a future LLVM version chooses a different register, the
> tests will fail because the verifier error will mention the new register
> instead of R1.
>
> Could these tests use __regex to match the register dynamically, or be
> rewritten so subprog_deref_arg6 is a __naked function to guarantee the
> register choice?
>
>> +__naked void stack_arg_pruning_type_mismatch(void)
>> +{
>> + asm volatile (
>> + "call %[bpf_get_prandom_u32];"
> [ ... ]
>
* Re: [PATCH bpf-next v6 00/17] bpf: Support stack arguments for BPF functions and kfuncs
2026-04-19 16:33 [PATCH bpf-next v6 00/17] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
` (16 preceding siblings ...)
2026-04-19 16:34 ` [PATCH bpf-next v6 17/17] selftests/bpf: Add verifier " Yonghong Song
@ 2026-04-20 15:41 ` Puranjay Mohan
2026-04-20 20:22 ` Yonghong Song
17 siblings, 1 reply; 51+ messages in thread
From: Puranjay Mohan @ 2026-04-20 15:41 UTC (permalink / raw)
To: Yonghong Song, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau, Puranjay Mohan
Yonghong Song <yonghong.song@linux.dev> writes:
> Currently, bpf function calls and kfuncs are limited to 5 register-based
> parameters. For function calls with more than 5 parameters,
> developers can force inlining or pack the extra parameters into a
> struct and pass a struct pointer, although this can be
> inconvenient. But there is no workaround for kfuncs if more
> than 5 parameters are needed.
>
> This patch set lifts the 5-argument limit by introducing stack-based
> argument passing for BPF functions and kfunc's, coordinated with
> compiler support in LLVM [1]. The compiler emits loads/stores through
> a new bpf register r11 (BPF_REG_PARAMS) to pass arguments beyond
> the 5th, keeping the stack arg area separate from the r10-based program
> stack. The current maximum number of arguments is capped at
> MAX_BPF_FUNC_ARGS (12), which is sufficient for the vast majority of
> use cases.
>
> All kfunc/bpf-function arguments are caller saved, including stack
> arguments. For register arguments (r1-r5), the verifier already marks
> them as clobbered after each call. For stack arguments, the verifier
> invalidates all outgoing stack arg slots immediately after a call,
> requiring the compiler to re-store them before any subsequent call.
> This follows the native calling convention where all function
> parameters are caller saved.
>
> The x86_64 JIT translates r11-relative accesses to RBP-relative
> native instructions. Each function's stack allocation is extended
> by 'max_outgoing' bytes to hold the outgoing arg area below the
> callee-saved registers. This makes implementation easier as the r10
> can be reused for stack argument access. At both BPF-to-BPF and kfunc
> calls, outgoing args are pushed onto the expected calling convention
> locations directly. The callee can read incoming parameters directly
> from the caller's stack.
>
> To support kfunc stack arguments, the existing code is first
> refactored/modified to use bpf_reg_state as much
> as possible instead of regno, and to pass a non-negative argno,
> encoded to support both registers and stack arguments, as a single
> variable.
>
> Global subprogs with >5 args are not yet supported. Only x86_64
> is supported for now.
>
> Among the remaining patches, patches 1-4 make changes that prepare
> for future kfunc stack arguments. Patches 5-8
> support bpf-to-bpf stack arguments. Patch 9 rejects the interpreter
> for stack arguments. Patch 10 rejects subprogs if tailcall reachable.
> Patch 11 adds stack argument support for kfuncs. Patch 12 enables
> stack arguments for x86_64 and Patch 13 implements the x86_64 JIT.
> Patches 14-16 are some test cases.
>
> [1] https://github.com/llvm/llvm-project/pull/189060
>
> Note:
> - The patch set is on top of the following commit:
> eb0d6d97c27c Merge tag 'bpf-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf
> - This patch set requires latest llvm23 compiler. It is possible that a build
> failure may appear:
> /home/yhs/work/bpf-next/scripts/mod/modpost.c:59:13: error: variable 'extra_warn' set but not used [-Werror,-Wunused-but-set-global]
> 59 | static bool extra_warn;
> | ^
> 1 error generated.
> In this case, the following hack can workaround the build issue:
> --- a/Makefile
> +++ b/Makefile
> @@ -467,7 +467,7 @@ KERNELDOC = $(srctree)/tools/docs/kernel-doc
> export KERNELDOC
>
> KBUILD_USERHOSTCFLAGS := -Wall -Wmissing-prototypes -Wstrict-prototypes \
> - -O2 -fomit-frame-pointer -std=gnu11
> + -O2 -fomit-frame-pointer -std=gnu11 -Wno-unused-but-set-global
> KBUILD_USERCFLAGS := $(KBUILD_USERHOSTCFLAGS) $(USERCFLAGS)
> KBUILD_USERLDFLAGS := $(USERLDFLAGS)
>
> Changelogs:
> v5 -> v6:
> - v5: https://lore.kernel.org/bpf/20260417034658.2625353-1-yonghong.song@linux.dev/
> - Invalidate stack arguments after a bpf function or kfunc call. This is to
> follow the native architecture calling convention.
> - Fix some r11 related issues in const_fold, liveness and insn checking.
> - Fix a few places for precision marking of stack arguments. All these cases
> have const values, which are marked as precise.
> - Unconditionally copy some incoming/outgoing state.
> - Fix a missing tailcall case with main prog having tailcall and there is no
> other subprog's.
> - Add and fix tests as suggested in v5.
> - Reorder some patches, e.g., preparing stack arguments in bpf functions,
> disabling JIT, before allowing proper r11 usage.
> v4 -> v5:
> - v4: https://lore.kernel.org/bpf/20260412045826.254200-1-yonghong.song@linux.dev/
> - Use r11 instead of r12, llvm also updated with r11.
> - Change the int 'reg_or_arg' to a u32 'argno', where 'argno' is encoded to
>   support both bpf registers and stack arguments.
> - Track a per-state bitmask 'out_stack_arg_mask' for r11 based stores, so at
>   any particular call the verifier knows which stores are available. This is
>   important since the stores may be in different basic blocks.
> - Previously, all store slots were invalidated after each call. This patch
>   set disables such invalidation.
> - Ensure the r11 reg only appears in allowed insns. Also avoid r11 for reg
>   tracking purposes.
> - Make stack_arg_regs more similar to regular regs (struct bpf_reg_state *).
> - Reorder r11 based stores from "arg6:off:-24, arg7:off:-16, arg8:off:-8" to
>   "arg6:off:-8, arg7:off:-16, arg8:off:-24".
> - Add a few more tests, e.g., two callees with different numbers of stack
>   arguments, shared r11 stores in different branches, etc.
>
> v3 -> v4:
> - v3: https://lore.kernel.org/bpf/20260405172505.1329392-1-yonghong.song@linux.dev/
> - Refactor/modify code to make later kfunc stack argument support easier
> - Invalidate outgoing slots immediately after the call to prevent reuse
> - Fix interaction between stack argument PTR_TO_STACK and dead slot poisoning
> - Reject stack arguments if tail call reachable
> - Disable private stack if stack argument is used
> - Allocate the outgoing stack argument region after callee-saved registers;
>   this simplifies the JITed code a lot.
> v2 -> v3:
> - v2: https://lore.kernel.org/bpf/20260405165300.826241-1-yonghong.song@linux.dev/
> - Fix selftest stack_arg_gap_at_minus8().
> - Fix a few 'UTF-8' issues.
> v1 -> v2:
> - v1: https://lore.kernel.org/bpf/20260402012727.3916819-1-yonghong.song@linux.dev/
> - Add stack_arg_safe() to do pruning for stack arguments.
> - Fix an issue with KF_ARG_PTR_TO_MEM_SIZE. Since a faked register is
>   used, add verification log messages to indicate the start and end of such
>   faked register usage.
> - For the x86_64 JIT, copy incoming parameter values directly from the caller's stack.
> - Add test cases with stack arguments e.g. mem, mem+size, dynptr, iter, etc.
>
> Yonghong Song (17):
> bpf: Remove unused parameter from check_map_kptr_access()
> bpf: Refactor to avoid redundant calculation of bpf_reg_state
> bpf: Refactor to handle memory and size together
> bpf: Prepare verifier logs for upcoming kfunc stack arguments
> bpf: Introduce bpf register BPF_REG_PARAMS
> bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments
> bpf: Support stack arguments for bpf functions
> bpf: Reject stack arguments in non-JITed programs
> bpf: Track r11 registers in const_fold and liveness
> bpf: Prepare architecture JIT support for stack arguments
> bpf: Enable r11 based insns
> bpf: Support stack arguments for kfunc calls
> bpf: Reject stack arguments if tail call reachable
> bpf,x86: Implement JIT support for stack arguments
> selftests/bpf: Add tests for BPF function stack arguments
> selftests/bpf: Add tests for stack argument validation
> selftests/bpf: Add verifier tests for stack argument validation
>
This set looks good to me overall:
Acked-by: Puranjay Mohan <puranjay@kernel.org>
I added support for arm64 in https://lore.kernel.org/all/20260420153603.4097618-1-puranjay@kernel.org/
Please incorporate it into your series if you do another version.
Thanks,
Puranjay
* Re: [PATCH bpf-next v6 00/17] bpf: Support stack arguments for BPF functions and kfuncs
2026-04-20 15:41 ` [PATCH bpf-next v6 00/17] bpf: Support stack arguments for BPF functions and kfuncs Puranjay Mohan
@ 2026-04-20 20:22 ` Yonghong Song
2026-04-20 20:25 ` Puranjay Mohan
0 siblings, 1 reply; 51+ messages in thread
From: Yonghong Song @ 2026-04-20 20:22 UTC (permalink / raw)
To: Puranjay Mohan, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau, Puranjay Mohan
On 4/20/26 8:41 AM, Puranjay Mohan wrote:
> Yonghong Song <yonghong.song@linux.dev> writes:
>
>> [...]
> This set looks good to me overall:
>
> Acked-by: Puranjay Mohan <puranjay@kernel.org>
>
> I added support for arm64 in https://lore.kernel.org/all/20260420153603.4097618-1-puranjay@kernel.org/
> Please incorporate it into your series if you do another version.
Thanks, Puranjay! I will send another revision later which adds support
for precision backtracking and additional/modified test cases, among other
things. I won't include your patches in the next revision; once the patch
set is stable enough, I can add them. Alternatively, since this patch set
is already large, you could send your patches separately after it is
merged.
>
> Thanks,
> Puranjay
* Re: [PATCH bpf-next v6 00/17] bpf: Support stack arguments for BPF functions and kfuncs
2026-04-20 20:22 ` Yonghong Song
@ 2026-04-20 20:25 ` Puranjay Mohan
2026-04-20 21:49 ` Alexei Starovoitov
0 siblings, 1 reply; 51+ messages in thread
From: Puranjay Mohan @ 2026-04-20 20:25 UTC (permalink / raw)
To: Yonghong Song
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
On Mon, Apr 20, 2026 at 9:23 PM Yonghong Song <yonghong.song@linux.dev> wrote:
>
>
>
> On 4/20/26 8:41 AM, Puranjay Mohan wrote:
> > Yonghong Song <yonghong.song@linux.dev> writes:
> >
> >> [...]
> > This set looks good to me overall:
> >
> > Acked-by: Puranjay Mohan <puranjay@kernel.org>
> >
> > I added support for arm64 in https://lore.kernel.org/all/20260420153603.4097618-1-puranjay@kernel.org/
> > Please incorporate it into your series if you do another version.
>
> Thanks, Puranjay! I will send another revision later which adds support
> for precision backtracking and additional/modified test cases among other
> things. I won't include your patches in the next revision. Once the patch
> set is stable enough, I can add your patches. Optionally, the patch set is
> already large enough, so maybe at some point, after this patch set is merged,
> you can send your patch set separately.
Okay, doing it separately is better I guess, I will wait for your
patches to land and then resend the arm64 support.
Thanks,
Puranjay
* Re: [PATCH bpf-next v6 00/17] bpf: Support stack arguments for BPF functions and kfuncs
2026-04-20 20:25 ` Puranjay Mohan
@ 2026-04-20 21:49 ` Alexei Starovoitov
2026-04-20 23:44 ` Yonghong Song
0 siblings, 1 reply; 51+ messages in thread
From: Alexei Starovoitov @ 2026-04-20 21:49 UTC (permalink / raw)
To: Puranjay Mohan
Cc: Yonghong Song, bpf, Alexei Starovoitov, Andrii Nakryiko,
Daniel Borkmann, Jose E . Marchesi, Kernel Team, Martin KaFai Lau
On Mon, Apr 20, 2026 at 1:25 PM Puranjay Mohan <puranjay12@gmail.com> wrote:
>
>
> Okay, doing it separately is better I guess, I will wait for your
> patches to land and then resend the arm64 support.
Just combine them into one patchset, so that CI tests it all together.
* Re: [PATCH bpf-next v6 00/17] bpf: Support stack arguments for BPF functions and kfuncs
2026-04-20 21:49 ` Alexei Starovoitov
@ 2026-04-20 23:44 ` Yonghong Song
0 siblings, 0 replies; 51+ messages in thread
From: Yonghong Song @ 2026-04-20 23:44 UTC (permalink / raw)
To: Alexei Starovoitov, Puranjay Mohan
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, Kernel Team, Martin KaFai Lau
On 4/20/26 2:49 PM, Alexei Starovoitov wrote:
> On Mon, Apr 20, 2026 at 1:25 PM Puranjay Mohan <puranjay12@gmail.com> wrote:
>>
>> Okay, doing it separately is better I guess, I will wait for your
>> patches to land and then resend the arm64 support.
> Just combine them into one patchset, so that CI tests it all together.
Okay, will do.
* Re: [PATCH bpf-next v6 03/17] bpf: Refactor to handle memory and size together
2026-04-19 16:33 ` [PATCH bpf-next v6 03/17] bpf: Refactor to handle memory and size together Yonghong Song
@ 2026-04-20 23:58 ` Alexei Starovoitov
2026-04-21 4:04 ` Yonghong Song
0 siblings, 1 reply; 51+ messages in thread
From: Alexei Starovoitov @ 2026-04-20 23:58 UTC (permalink / raw)
To: Yonghong Song, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
On Sun Apr 19, 2026 at 9:33 AM PDT, Yonghong Song wrote:
> + * @mem_regno is the register containing the pointer, mem_regno+1 is the register
> + * containing the access size.
wrong comment. should probably say that size_reg is the register that contains size.
>
> - WARN_ON_ONCE(regno < BPF_REG_2 || regno > BPF_REG_5);
> + WARN_ON_ONCE(mem_regno > BPF_REG_4);
this warn is too late here. Can be removed.
>
> memset(&meta, 0, sizeof(meta));
>
> @@ -7129,8 +7130,8 @@ static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg
> mark_ptr_not_null_reg(mem_reg);
> }
>
> - err = check_mem_size_reg(env, reg, regno, BPF_READ, true, &meta);
> - err = err ?: check_mem_size_reg(env, reg, regno, BPF_WRITE, true, &meta);
> + err = check_mem_size_reg(env, mem_reg, size_reg, mem_regno, BPF_READ, true, &meta);
> + err = err ?: check_mem_size_reg(env, mem_reg, size_reg, mem_regno, BPF_WRITE, true, &meta);
>
> if (may_be_null)
> *mem_reg = saved_reg;
> @@ -8594,7 +8595,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
> return -EFAULT;
> }
> key_size = meta->map.ptr->key_size;
> - err = check_helper_mem_access(env, regno, key_size, BPF_READ, false, NULL);
> + err = check_helper_mem_access(env, reg, regno, key_size, BPF_READ, false, NULL);
> if (err)
> return err;
> if (can_elide_value_nullness(meta->map.ptr->map_type)) {
> @@ -8621,7 +8622,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
> return -EFAULT;
> }
> meta->raw_mode = arg_type & MEM_UNINIT;
> - err = check_helper_mem_access(env, regno, meta->map.ptr->value_size,
> + err = check_helper_mem_access(env, reg, regno, meta->map.ptr->value_size,
> arg_type & MEM_WRITE ? BPF_WRITE : BPF_READ,
> false, meta);
> break;
> @@ -8665,7 +8666,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
> */
> meta->raw_mode = arg_type & MEM_UNINIT;
> if (arg_type & MEM_FIXED_SIZE) {
> - err = check_helper_mem_access(env, regno, fn->arg_size[arg],
> + err = check_helper_mem_access(env, reg, regno, fn->arg_size[arg],
> arg_type & MEM_WRITE ? BPF_WRITE : BPF_READ,
> false, meta);
> if (err)
> @@ -8675,13 +8676,13 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
> }
> break;
> case ARG_CONST_SIZE:
> - err = check_mem_size_reg(env, reg, regno,
> + err = check_mem_size_reg(env, reg_state(env, regno - 1), reg, regno - 1,
or moved here.
I would just remove it.
* Re: [PATCH bpf-next v6 04/17] bpf: Prepare verifier logs for upcoming kfunc stack arguments
2026-04-19 16:33 ` [PATCH bpf-next v6 04/17] bpf: Prepare verifier logs for upcoming kfunc stack arguments Yonghong Song
@ 2026-04-21 0:03 ` Alexei Starovoitov
2026-04-21 4:06 ` Yonghong Song
2026-04-21 6:07 ` Yonghong Song
0 siblings, 2 replies; 51+ messages in thread
From: Alexei Starovoitov @ 2026-04-21 0:03 UTC (permalink / raw)
To: Yonghong Song, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
On Sun Apr 19, 2026 at 9:33 AM PDT, Yonghong Song wrote:
> This change prepares verifier log reporting for upcoming kfunc stack
> argument support.
>
> Today verifier log code mostly assumes that an argument can be described
> directly by a register number. That works for arguments passed in `R1`
> to `R5`, but it does not work once kfunc arguments can also be
> passed on the stack.
>
> Introduce an internal `argno` representation such that register-passed
> arguments keep using their real register numbers, while stack-passed
> arguments use an encoded value above a dedicated base.
> `reg_arg_name()` converts this representation into either `R%d` or
> `*(R11-off)` when emitting verifier logs. If a particular `argno`
> is corresponding to a stack argument, print `*(R11-off)`. Otherwise,
> print `R%d`. Here R11 presents the base of stack arguments.
>
> This keeps existing logs readable for register arguments and allows the
> same log sites to handle future stack arguments without open-coding
> special cases.
>
> Update selftests accordingly.
>
> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
> ---
> include/linux/bpf_verifier.h | 1 +
> kernel/bpf/verifier.c | 649 ++++++++++--------
> .../testing/selftests/bpf/prog_tests/bpf_nf.c | 22 +-
> .../selftests/bpf/prog_tests/cb_refs.c | 2 +-
> .../selftests/bpf/prog_tests/kfunc_call.c | 2 +-
> .../selftests/bpf/prog_tests/linked_list.c | 4 +-
> .../selftests/bpf/progs/cgrp_kfunc_failure.c | 14 +-
> .../selftests/bpf/progs/cpumask_failure.c | 10 +-
> .../testing/selftests/bpf/progs/dynptr_fail.c | 22 +-
> .../selftests/bpf/progs/file_reader_fail.c | 4 +-
> tools/testing/selftests/bpf/progs/irq.c | 4 +-
> tools/testing/selftests/bpf/progs/iters.c | 6 +-
> .../selftests/bpf/progs/iters_state_safety.c | 14 +-
> .../selftests/bpf/progs/iters_testmod.c | 4 +-
> .../selftests/bpf/progs/iters_testmod_seq.c | 4 +-
> .../selftests/bpf/progs/map_kptr_fail.c | 2 +-
> .../selftests/bpf/progs/percpu_alloc_fail.c | 4 +-
> .../testing/selftests/bpf/progs/rbtree_fail.c | 6 +-
> .../bpf/progs/refcounted_kptr_fail.c | 2 +-
> .../testing/selftests/bpf/progs/stream_fail.c | 2 +-
> .../selftests/bpf/progs/task_kfunc_failure.c | 18 +-
> .../selftests/bpf/progs/task_work_fail.c | 6 +-
> .../selftests/bpf/progs/test_bpf_nf_fail.c | 8 +-
> .../bpf/progs/test_kfunc_dynptr_param.c | 2 +-
> .../bpf/progs/test_kfunc_param_nullable.c | 2 +-
> .../selftests/bpf/progs/verifier_bits_iter.c | 4 +-
> .../bpf/progs/verifier_ref_tracking.c | 6 +-
> .../selftests/bpf/progs/verifier_vfs_reject.c | 8 +-
> .../testing/selftests/bpf/progs/wq_failures.c | 2 +-
> tools/testing/selftests/bpf/verifier/calls.c | 14 +-
> 30 files changed, 474 insertions(+), 374 deletions(-)
>
> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> index b148f816f25b..9fbbddc40d21 100644
> --- a/include/linux/bpf_verifier.h
> +++ b/include/linux/bpf_verifier.h
> @@ -913,6 +913,7 @@ struct bpf_verifier_env {
> * e.g., in reg_type_str() to generate reg_type string
> */
> char tmp_str_buf[TMP_STR_BUF_LEN];
> + char tmp_reg_arg_name_buf[32];
the name is too long.
Just tmp_arg_name ?
> struct bpf_insn insn_buf[INSN_BUF_SIZE];
> struct bpf_insn epilogue_buf[INSN_BUF_SIZE];
> struct bpf_scc_callchain callchain_buf;
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 3716d9688d00..6aa4dc161a56 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -1751,6 +1751,55 @@ static struct bpf_verifier_state *push_stack(struct bpf_verifier_env *env,
> return &elem->st;
> }
>
> +/*
> + * Unified argument number encoding for verifier log messages.
> + * Register args (arg_idx 0-4) use their register number (R1-R5).
> + * Stack args (arg_idx 5+) are encoded as STACK_ARGNO_BASE + arg_idx
> + * to avoid collision with register numbers. reg_arg_name() decodes
> + * this back to a human-readable string like "*(R11-8)" for logs.
> + */
> +#define STACK_ARGNO_BASE 100
> +
> +static bool is_stack_argno(int argno)
> +{
> + return argno >= STACK_ARGNO_BASE;
> +}
> +
> +static u32 make_argno(u32 arg_idx)
> +{
> + if (arg_idx < MAX_BPF_FUNC_REG_ARGS)
> + return BPF_REG_1 + arg_idx;
> + return STACK_ARGNO_BASE + arg_idx;
> +}
> +
> +static u32 arg_idx_from_argno(int argno)
> +{
> + if (is_stack_argno(argno))
> + return argno - STACK_ARGNO_BASE;
> + return argno - BPF_REG_1;
> +}
> +
> +static int next_argno(int argno)
> +{
> + return make_argno(arg_idx_from_argno(argno) + 1);
> +}
I don't like this +1, -1 dance all around. Makes the whole thing
hard to follow.
Keep argno starting at 1. So old regno == argno.
> /* Check if local kptr in src arg matches kptr in dst arg */
> - if (meta->func_id == BPF_FUNC_kptr_xchg && regno == BPF_REG_2) {
> - if (map_kptr_match_type(env, meta->kptr_field, reg, regno))
> + if (meta->func_id == BPF_FUNC_kptr_xchg &&
> + !is_stack_argno(argno) && argno == BPF_REG_2) {
> + if (map_kptr_match_type(env, meta->kptr_field, reg, argno))
And then this argno == BPF_REG_2 will look fine.
With argno base 0 the above looks broken.
Also is_stack_argno() looks like defensive programming. remove it.
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH bpf-next v6 07/17] bpf: Support stack arguments for bpf functions
2026-04-19 16:33 ` [PATCH bpf-next v6 07/17] bpf: Support stack arguments for bpf functions Yonghong Song
2026-04-19 19:15 ` sashiko-bot
@ 2026-04-21 0:37 ` Alexei Starovoitov
2026-04-21 4:15 ` Yonghong Song
1 sibling, 1 reply; 51+ messages in thread
From: Alexei Starovoitov @ 2026-04-21 0:37 UTC (permalink / raw)
To: Yonghong Song, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
On Sun Apr 19, 2026 at 9:33 AM PDT, Yonghong Song wrote:
> Currently BPF functions (subprogs) are limited to 5 register arguments.
> With [1], the compiler can emit code that passes additional arguments
> via a dedicated stack area through bpf register BPF_REG_PARAMS (r11),
> introduced in the previous patch.
>
> The compiler uses positive r11 offsets for incoming (callee-side) args
> and negative r11 offsets for outgoing (caller-side) args, following the
> x86_64/arm64 calling convention direction. There is an 8-byte gap at
> offset 0 separating the two regions:
> Incoming (callee reads): r11+8 (arg6), r11+16 (arg7), ...
> Outgoing (caller writes): r11-8 (arg6), r11-16 (arg7), ...
This part looks clean now.
> A per-state bitmask out_stack_arg_mask tracks which outgoing stack arg
> slots have been written on the current path. Each bit corresponds to
> an outgoing slot index (bit 0 = r11-8 = arg6, bit 1 = r11-16 = arg7,
> etc.). At a call site, the verifier checks that all slots required by
> the callee have their corresponding mask bits set. This enables
> precise per-path tracking: if one branch of a conditional writes arg6
> but another does not, the mask correctly reflects the difference and
> the verifier rejects the uninitialized path. The mask is included in
> stack_arg_safe() so that states with different sets of initialized
> slots are not incorrectly pruned together.
But this part I don't understand.
why do you need this bitmask?
Even when they're written out of order, stack_arg_depth is all you need.
Then compare old->stack_arg_regs vs cur->stack_arg_regs.
If one is not written its state will be NOT_INIT.
so
*(u64 *)(r11 - 16) = r7;
// without *(u64 *)(r11 - 8) = r6;
call bar1; // arg6 = r6, arg7 = r7
will fail the verification.
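And the well-formed counterpart (sketch) that re-stores both slots
before the call would pass:

*(u64 *)(r11 - 8) = r6;
*(u64 *)(r11 - 16) = r7;
call bar1; // arg6 = r6, arg7 = r7, both slots initialized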
> @@ -1669,6 +1669,8 @@ struct bpf_prog_aux {
> u32 max_pkt_offset;
> u32 max_tp_access;
> u32 stack_depth;
> + u16 incoming_stack_arg_depth;
> + u16 stack_arg_depth; /* both incoming and max outgoing of stack arguments */
these two are ok. I'm assuming you don't want JIT to recompute them.
> u32 id;
> u32 func_cnt; /* used by non-func prog as the number of func progs */
> u32 real_func_cnt; /* includes hidden progs, only used for JIT and freeing progs */
> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> index 9fbbddc40d21..bb6d8cab3a35 100644
> --- a/include/linux/bpf_verifier.h
> +++ b/include/linux/bpf_verifier.h
> @@ -372,6 +372,11 @@ struct bpf_func_state {
> * `stack`. allocated_stack is always a multiple of BPF_REG_SIZE.
> */
> int allocated_stack;
> +
> + u16 stack_arg_depth; /* Size of incoming + max outgoing stack args in bytes. */
> + u16 incoming_stack_arg_depth; /* Size of incoming stack args in bytes. */
but incoming_stack_arg_depth looks odd.
Callee should be accessing caller's stack_arg_depth and
caller's stack_arg_regs.
> + u16 out_stack_arg_mask; /* Bitmask of outgoing stack arg slots that have been written. */
> + struct bpf_reg_state *stack_arg_regs; /* On-stack arguments */
> };
>
> #define MAX_CALL_FRAMES 8
> @@ -508,6 +513,17 @@ struct bpf_verifier_state {
> iter < frame->allocated_stack / BPF_REG_SIZE; \
> iter++, reg = bpf_get_spilled_reg(iter, frame, mask))
>
> +#define bpf_get_spilled_stack_arg(slot, frame, mask) \
> + ((((slot) < frame->stack_arg_depth / BPF_REG_SIZE) && \
> + (frame->stack_arg_regs[slot].type != NOT_INIT)) \
> + ? &frame->stack_arg_regs[slot] : NULL)
> +
> +/* Iterate over 'frame', setting 'reg' to either NULL or a spilled stack arg. */
> +#define bpf_for_each_spilled_stack_arg(iter, frame, reg, mask) \
> + for (iter = 0, reg = bpf_get_spilled_stack_arg(iter, frame, mask); \
> + iter < frame->stack_arg_depth / BPF_REG_SIZE; \
> + iter++, reg = bpf_get_spilled_stack_arg(iter, frame, mask))
> +
> #define bpf_for_each_reg_in_vstate_mask(__vst, __state, __reg, __mask, __expr) \
> ({ \
> struct bpf_verifier_state *___vstate = __vst; \
> @@ -525,6 +541,11 @@ struct bpf_verifier_state {
> continue; \
> (void)(__expr); \
> } \
> + bpf_for_each_spilled_stack_arg(___j, __state, __reg, __mask) { \
> + if (!__reg) \
> + continue; \
> + (void)(__expr); \
> + } \
> } \
> })
>
> @@ -739,10 +760,13 @@ struct bpf_subprog_info {
> bool keep_fastcall_stack: 1;
> bool changes_pkt_data: 1;
> bool might_sleep: 1;
> - u8 arg_cnt:3;
> + u8 arg_cnt:4;
>
> enum priv_stack_mode priv_stack_mode;
> - struct bpf_subprog_arg_info args[MAX_BPF_FUNC_REG_ARGS];
> + struct bpf_subprog_arg_info args[MAX_BPF_FUNC_ARGS];
> + u16 incoming_stack_arg_depth;
> + u16 outgoing_stack_arg_depth;
> + u16 max_out_stack_arg_depth;
but you already have them in prog_aux?! another copy in bpf_subprog_info?!
Remove one of them. The JIT only needs one set.
> };
>
> struct bpf_verifier_env;
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> index a62d78581207..c5f3aa05d5a3 100644
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
> @@ -7887,13 +7887,19 @@ int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog)
> }
> args = (const struct btf_param *)(t + 1);
> nargs = btf_type_vlen(t);
> - if (nargs > MAX_BPF_FUNC_REG_ARGS) {
> - if (!is_global)
> - return -EINVAL;
> - bpf_log(log, "Global function %s() with %d > %d args. Buggy compiler.\n",
> + if (nargs > MAX_BPF_FUNC_ARGS) {
> + bpf_log(log, "Function %s() with %d > %d args not supported.\n",
> + tname, nargs, MAX_BPF_FUNC_ARGS);
> + return -EINVAL;
> + }
> + if (is_global && nargs > MAX_BPF_FUNC_REG_ARGS) {
> + bpf_log(log, "Global function %s() with %d > %d args not supported.\n",
> tname, nargs, MAX_BPF_FUNC_REG_ARGS);
> return -EINVAL;
> }
> + if (nargs > MAX_BPF_FUNC_REG_ARGS)
> + sub->incoming_stack_arg_depth = (nargs - MAX_BPF_FUNC_REG_ARGS) * BPF_REG_SIZE;
> +
> /* check that function is void or returns int, exception cb also requires this */
> t = btf_type_by_id(btf, t->type);
> while (btf_type_is_modifier(t))
> diff --git a/kernel/bpf/fixups.c b/kernel/bpf/fixups.c
> index fba9e8c00878..c4e0224ad2f2 100644
> --- a/kernel/bpf/fixups.c
> +++ b/kernel/bpf/fixups.c
> @@ -1123,6 +1123,9 @@ static int jit_subprogs(struct bpf_verifier_env *env)
>
> func[i]->aux->name[0] = 'F';
> func[i]->aux->stack_depth = env->subprog_info[i].stack_depth;
> + func[i]->aux->incoming_stack_arg_depth = env->subprog_info[i].incoming_stack_arg_depth;
> + func[i]->aux->stack_arg_depth = env->subprog_info[i].incoming_stack_arg_depth +
> + env->subprog_info[i].outgoing_stack_arg_depth;
> if (env->subprog_info[i].priv_stack_mode == PRIV_STACK_ADAPTIVE)
> func[i]->aux->jits_use_priv_stack = true;
>
> @@ -1301,8 +1304,10 @@ int bpf_jit_subprogs(struct bpf_verifier_env *env)
> struct bpf_insn_aux_data *orig_insn_aux;
> u32 *orig_subprog_starts;
>
> - if (env->subprog_cnt <= 1)
> + if (env->subprog_cnt <= 1) {
> + env->prog->aux->stack_arg_depth = env->subprog_info[0].outgoing_stack_arg_depth;
> return 0;
> + }
>
> prog = orig_prog = env->prog;
> if (bpf_prog_need_blind(prog)) {
> @@ -1378,9 +1383,20 @@ int bpf_fixup_call_args(struct bpf_verifier_env *env)
> struct bpf_prog *prog = env->prog;
> struct bpf_insn *insn = prog->insnsi;
> bool has_kfunc_call = bpf_prog_has_kfunc_call(prog);
> - int i, depth;
> + int depth;
> #endif
> - int err = 0;
> + int i, err = 0;
> +
> + for (i = 0; i < env->subprog_cnt; i++) {
> + struct bpf_subprog_info *subprog = &env->subprog_info[i];
> +
> + if (subprog->max_out_stack_arg_depth > subprog->outgoing_stack_arg_depth) {
> + verbose(env,
> + "func#%d writes stack arg slot at depth %u, but calls only require %u bytes\n",
> + i, subprog->max_out_stack_arg_depth, subprog->outgoing_stack_arg_depth);
> + return -EINVAL;
> + }
> + }
>
> if (env->prog->jit_requested &&
> !bpf_prog_is_offloaded(env->prog->aux)) {
> diff --git a/kernel/bpf/states.c b/kernel/bpf/states.c
> index 8478d2c6ed5b..235841d23fe3 100644
> --- a/kernel/bpf/states.c
> +++ b/kernel/bpf/states.c
> @@ -838,6 +838,44 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
> return true;
> }
>
> +/*
> + * Compare stack arg slots between old and current states.
> + * Outgoing stack args are path-local state and must agree for pruning.
> + */
> +static bool stack_arg_safe(struct bpf_verifier_env *env, struct bpf_func_state *old,
> + struct bpf_func_state *cur, struct bpf_idmap *idmap,
> + enum exact_level exact)
> +{
> + int i, nslots;
> +
> + if (old->incoming_stack_arg_depth != cur->incoming_stack_arg_depth)
> + return false;
> +
> + /* Compare both incoming and outgoing stack arg slots. */
> + if (old->stack_arg_depth != cur->stack_arg_depth)
> + return false;
> +
> + if (old->out_stack_arg_mask != cur->out_stack_arg_mask)
> + return false;
shouldn't be necessary.
> +
> + nslots = old->stack_arg_depth / BPF_REG_SIZE;
> + for (i = 0; i < nslots; i++) {
> + struct bpf_reg_state *old_arg = &old->stack_arg_regs[i];
> + struct bpf_reg_state *cur_arg = &cur->stack_arg_regs[i];
> +
> + if (old_arg->type == NOT_INIT && cur_arg->type == NOT_INIT)
> + continue;
> +
> + if (exact == EXACT && old_arg->type != cur_arg->type)
> + return false;
> +
> + if (!regsafe(env, old_arg, cur_arg, idmap, exact))
> + return false;
> + }
> +
> + return true;
> +}
> +
> static bool refsafe(struct bpf_verifier_state *old, struct bpf_verifier_state *cur,
> struct bpf_idmap *idmap)
> {
> @@ -929,6 +967,9 @@ static bool func_states_equal(struct bpf_verifier_env *env, struct bpf_func_stat
> if (!stacksafe(env, old, cur, &env->idmap_scratch, exact))
> return false;
>
> + if (!stack_arg_safe(env, old, cur, &env->idmap_scratch, exact))
> + return false;
> +
> return true;
> }
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 6aa4dc161a56..78c9322870a5 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -1340,6 +1340,20 @@ static int copy_stack_state(struct bpf_func_state *dst, const struct bpf_func_st
> return -ENOMEM;
>
> dst->allocated_stack = src->allocated_stack;
> +
> + /* copy stack args state */
> + n = src->stack_arg_depth / BPF_REG_SIZE;
> + if (n) {
> + dst->stack_arg_regs = copy_array(dst->stack_arg_regs, src->stack_arg_regs, n,
> + sizeof(struct bpf_reg_state),
> + GFP_KERNEL_ACCOUNT);
copy is unnecessary.
> @@ -4220,6 +4254,13 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
> }
> if (type == STACK_INVALID && env->allow_uninit_stack)
> continue;
> + /*
> + * Cross-frame reads may hit slots poisoned by dead code elimination.
> + * Static liveness can't track indirect references through pointers,
> + * so allow the read conservatively.
> + */
> + if (type == STACK_POISON && reg_state != state)
> + continue;
wait what? Is this a real issue? Are you saying you have a test prog
that passes FP derived pointers in arg 6, 7 and arg_tracking cannot detect it?
Then it should be added properly in arg_tracking.
This hack to allow reading poisoned slots is not ok.
This is a serious issue.
pw-bot: cr

* Re: [PATCH bpf-next v6 03/17] bpf: Refactor to handle memory and size together
2026-04-20 23:58 ` Alexei Starovoitov
@ 2026-04-21 4:04 ` Yonghong Song
0 siblings, 0 replies; 51+ messages in thread
From: Yonghong Song @ 2026-04-21 4:04 UTC (permalink / raw)
To: Alexei Starovoitov, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
On 4/20/26 4:58 PM, Alexei Starovoitov wrote:
> On Sun Apr 19, 2026 at 9:33 AM PDT, Yonghong Song wrote:
>> + * @mem_regno is the register containing the pointer, mem_regno+1 is the register
>> + * containing the access size.
> Wrong comment. It should probably say that size_reg is the register that contains the size.
>
>>
>> - WARN_ON_ONCE(regno < BPF_REG_2 || regno > BPF_REG_5);
>> + WARN_ON_ONCE(mem_regno > BPF_REG_4);
> this warn is too late here. Can be removed.
>
>>
>> memset(&meta, 0, sizeof(meta));
>>
>> @@ -7129,8 +7130,8 @@ static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg
>> mark_ptr_not_null_reg(mem_reg);
>> }
>>
>> - err = check_mem_size_reg(env, reg, regno, BPF_READ, true, &meta);
>> - err = err ?: check_mem_size_reg(env, reg, regno, BPF_WRITE, true, &meta);
>> + err = check_mem_size_reg(env, mem_reg, size_reg, mem_regno, BPF_READ, true, &meta);
>> + err = err ?: check_mem_size_reg(env, mem_reg, size_reg, mem_regno, BPF_WRITE, true, &meta);
>>
>> if (may_be_null)
>> *mem_reg = saved_reg;
>> @@ -8594,7 +8595,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
>> return -EFAULT;
>> }
>> key_size = meta->map.ptr->key_size;
>> - err = check_helper_mem_access(env, regno, key_size, BPF_READ, false, NULL);
>> + err = check_helper_mem_access(env, reg, regno, key_size, BPF_READ, false, NULL);
>> if (err)
>> return err;
>> if (can_elide_value_nullness(meta->map.ptr->map_type)) {
>> @@ -8621,7 +8622,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
>> return -EFAULT;
>> }
>> meta->raw_mode = arg_type & MEM_UNINIT;
>> - err = check_helper_mem_access(env, regno, meta->map.ptr->value_size,
>> + err = check_helper_mem_access(env, reg, regno, meta->map.ptr->value_size,
>> arg_type & MEM_WRITE ? BPF_WRITE : BPF_READ,
>> false, meta);
>> break;
>> @@ -8665,7 +8666,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
>> */
>> meta->raw_mode = arg_type & MEM_UNINIT;
>> if (arg_type & MEM_FIXED_SIZE) {
>> - err = check_helper_mem_access(env, regno, fn->arg_size[arg],
>> + err = check_helper_mem_access(env, reg, regno, fn->arg_size[arg],
>> arg_type & MEM_WRITE ? BPF_WRITE : BPF_READ,
>> false, meta);
>> if (err)
>> @@ -8675,13 +8676,13 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
>> }
>> break;
>> case ARG_CONST_SIZE:
>> - err = check_mem_size_reg(env, reg, regno,
>> + err = check_mem_size_reg(env, reg_state(env, regno - 1), reg, regno - 1,
> or moved here.
> I would just remove it.
I checked the logic and it seems this warn should not really happen, so removing it sounds like a good idea.
* Re: [PATCH bpf-next v6 04/17] bpf: Prepare verifier logs for upcoming kfunc stack arguments
2026-04-21 0:03 ` Alexei Starovoitov
@ 2026-04-21 4:06 ` Yonghong Song
2026-04-21 6:07 ` Yonghong Song
1 sibling, 0 replies; 51+ messages in thread
From: Yonghong Song @ 2026-04-21 4:06 UTC (permalink / raw)
To: Alexei Starovoitov, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
On 4/20/26 5:03 PM, Alexei Starovoitov wrote:
> On Sun Apr 19, 2026 at 9:33 AM PDT, Yonghong Song wrote:
>> This change prepares verifier log reporting for upcoming kfunc stack
>> argument support.
>>
>> Today verifier log code mostly assumes that an argument can be described
>> directly by a register number. That works for arguments passed in `R1`
>> to `R5`, but it does not work once kfunc arguments can also be
>> passed on the stack.
>>
>> Introduce an internal `argno` representation such that register-passed
>> arguments keep using their real register numbers, while stack-passed
>> arguments use an encoded value above a dedicated base.
>> `reg_arg_name()` converts this representation into either `R%d` or
>> `*(R11-off)` when emitting verifier logs. If a particular `argno`
>> corresponds to a stack argument, print `*(R11-off)`; otherwise,
>> print `R%d`. Here R11 represents the base of stack arguments.
>>
>> This keeps existing logs readable for register arguments and allows the
>> same log sites to handle future stack arguments without open-coding
>> special cases.
>>
>> Update selftests accordingly.
>>
>> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
>> ---
>> include/linux/bpf_verifier.h | 1 +
>> kernel/bpf/verifier.c | 649 ++++++++++--------
>> .../testing/selftests/bpf/prog_tests/bpf_nf.c | 22 +-
>> .../selftests/bpf/prog_tests/cb_refs.c | 2 +-
>> .../selftests/bpf/prog_tests/kfunc_call.c | 2 +-
>> .../selftests/bpf/prog_tests/linked_list.c | 4 +-
>> .../selftests/bpf/progs/cgrp_kfunc_failure.c | 14 +-
>> .../selftests/bpf/progs/cpumask_failure.c | 10 +-
>> .../testing/selftests/bpf/progs/dynptr_fail.c | 22 +-
>> .../selftests/bpf/progs/file_reader_fail.c | 4 +-
>> tools/testing/selftests/bpf/progs/irq.c | 4 +-
>> tools/testing/selftests/bpf/progs/iters.c | 6 +-
>> .../selftests/bpf/progs/iters_state_safety.c | 14 +-
>> .../selftests/bpf/progs/iters_testmod.c | 4 +-
>> .../selftests/bpf/progs/iters_testmod_seq.c | 4 +-
>> .../selftests/bpf/progs/map_kptr_fail.c | 2 +-
>> .../selftests/bpf/progs/percpu_alloc_fail.c | 4 +-
>> .../testing/selftests/bpf/progs/rbtree_fail.c | 6 +-
>> .../bpf/progs/refcounted_kptr_fail.c | 2 +-
>> .../testing/selftests/bpf/progs/stream_fail.c | 2 +-
>> .../selftests/bpf/progs/task_kfunc_failure.c | 18 +-
>> .../selftests/bpf/progs/task_work_fail.c | 6 +-
>> .../selftests/bpf/progs/test_bpf_nf_fail.c | 8 +-
>> .../bpf/progs/test_kfunc_dynptr_param.c | 2 +-
>> .../bpf/progs/test_kfunc_param_nullable.c | 2 +-
>> .../selftests/bpf/progs/verifier_bits_iter.c | 4 +-
>> .../bpf/progs/verifier_ref_tracking.c | 6 +-
>> .../selftests/bpf/progs/verifier_vfs_reject.c | 8 +-
>> .../testing/selftests/bpf/progs/wq_failures.c | 2 +-
>> tools/testing/selftests/bpf/verifier/calls.c | 14 +-
>> 30 files changed, 474 insertions(+), 374 deletions(-)
>>
>> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
>> index b148f816f25b..9fbbddc40d21 100644
>> --- a/include/linux/bpf_verifier.h
>> +++ b/include/linux/bpf_verifier.h
>> @@ -913,6 +913,7 @@ struct bpf_verifier_env {
>> * e.g., in reg_type_str() to generate reg_type string
>> */
>> char tmp_str_buf[TMP_STR_BUF_LEN];
>> + char tmp_reg_arg_name_buf[32];
> the name is too long.
> Just tmp_arg_name ?
Okay.
>
>> struct bpf_insn insn_buf[INSN_BUF_SIZE];
>> struct bpf_insn epilogue_buf[INSN_BUF_SIZE];
>> struct bpf_scc_callchain callchain_buf;
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index 3716d9688d00..6aa4dc161a56 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -1751,6 +1751,55 @@ static struct bpf_verifier_state *push_stack(struct bpf_verifier_env *env,
>> return &elem->st;
>> }
>>
>> +/*
>> + * Unified argument number encoding for verifier log messages.
>> + * Register args (arg_idx 0-4) use their register number (R1-R5).
>> + * Stack args (arg_idx 5+) are encoded as STACK_ARGNO_BASE + arg_idx
>> + * to avoid collision with register numbers. reg_arg_name() decodes
>> + * this back to a human-readable string like "*(R11-8)" for logs.
>> + */
>> +#define STACK_ARGNO_BASE 100
>> +
>> +static bool is_stack_argno(int argno)
>> +{
>> + return argno >= STACK_ARGNO_BASE;
>> +}
>> +
>> +static u32 make_argno(u32 arg_idx)
>> +{
>> + if (arg_idx < MAX_BPF_FUNC_REG_ARGS)
>> + return BPF_REG_1 + arg_idx;
>> + return STACK_ARGNO_BASE + arg_idx;
>> +}
>> +
>> +static u32 arg_idx_from_argno(int argno)
>> +{
>> + if (is_stack_argno(argno))
>> + return argno - STACK_ARGNO_BASE;
>> + return argno - BPF_REG_1;
>> +}
>> +
>> +static int next_argno(int argno)
>> +{
>> + return make_argno(arg_idx_from_argno(argno) + 1);
>> +}
> I don't like this +1, -1 dance all around. Makes the whole thing
> hard to follow.
> Keep argno starting at 1. So old regno == argno.
Sure. Will try to simplify things...
>
>> /* Check if local kptr in src arg matches kptr in dst arg */
>> - if (meta->func_id == BPF_FUNC_kptr_xchg && regno == BPF_REG_2) {
>> - if (map_kptr_match_type(env, meta->kptr_field, reg, regno))
>> + if (meta->func_id == BPF_FUNC_kptr_xchg &&
>> + !is_stack_argno(argno) && argno == BPF_REG_2) {
>> + if (map_kptr_match_type(env, meta->kptr_field, reg, argno))
> And then this argno == BPF_REG_2 will look fine.
> With argno base 0 the above looks broken.
> Also is_stack_argno() looks like defensive programming. remove it.
Okay.
* Re: [PATCH bpf-next v6 07/17] bpf: Support stack arguments for bpf functions
2026-04-21 0:37 ` Alexei Starovoitov
@ 2026-04-21 4:15 ` Yonghong Song
0 siblings, 0 replies; 51+ messages in thread
From: Yonghong Song @ 2026-04-21 4:15 UTC (permalink / raw)
To: Alexei Starovoitov, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
On 4/20/26 5:37 PM, Alexei Starovoitov wrote:
> On Sun Apr 19, 2026 at 9:33 AM PDT, Yonghong Song wrote:
>> Currently BPF functions (subprogs) are limited to 5 register arguments.
>> With [1], the compiler can emit code that passes additional arguments
>> via a dedicated stack area through bpf register BPF_REG_PARAMS (r11),
>> introduced in the previous patch.
>>
>> The compiler uses positive r11 offsets for incoming (callee-side) args
>> and negative r11 offsets for outgoing (caller-side) args, following the
>> x86_64/arm64 calling convention direction. There is an 8-byte gap at
>> offset 0 separating the two regions:
>> Incoming (callee reads): r11+8 (arg6), r11+16 (arg7), ...
>> Outgoing (caller writes): r11-8 (arg6), r11-16 (arg7), ...
> This part looks clean now.
>
>> A per-state bitmask out_stack_arg_mask tracks which outgoing stack arg
>> slots have been written on the current path. Each bit corresponds to
>> an outgoing slot index (bit 0 = r11-8 = arg6, bit 1 = r11-16 = arg7,
>> etc.). At a call site, the verifier checks that all slots required by
>> the callee have their corresponding mask bits set. This enables
>> precise per-path tracking: if one branch of a conditional writes arg6
>> but another does not, the mask correctly reflects the difference and
>> the verifier rejects the uninitialized path. The mask is included in
>> stack_arg_safe() so that states with different sets of initialized
>> slots are not incorrectly pruned together.
> But this part I don't understand.
> why do you need this bitmask?
> Even when they're written out of order, stack_arg_depth is all you need.
> Then compare old->stack_arg_regs vs cur->stack_arg_regs.
> If one is not written its state will be NOT_INIT.
> so
> *(u64 *)(r11 - 16) = r7;
> // without *(u64 *)(r11 - 8) = r6;
> call bar1; // arg6 = r6, arg7 = r7
>
> will fail the verification.
I added the bitmask to make early comparison for pruning easy,
since maintaining the bitmask is cheap. But I think this is probably
not necessary. Will remove.
>
>> @@ -1669,6 +1669,8 @@ struct bpf_prog_aux {
>> u32 max_pkt_offset;
>> u32 max_tp_access;
>> u32 stack_depth;
>> + u16 incoming_stack_arg_depth;
>> + u16 stack_arg_depth; /* both incoming and max outgoing of stack arguments */
> these two are ok. I'm assuming you don't want JIT to recompute them.
>
>> u32 id;
>> u32 func_cnt; /* used by non-func prog as the number of func progs */
>> u32 real_func_cnt; /* includes hidden progs, only used for JIT and freeing progs */
>> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
>> index 9fbbddc40d21..bb6d8cab3a35 100644
>> --- a/include/linux/bpf_verifier.h
>> +++ b/include/linux/bpf_verifier.h
>> @@ -372,6 +372,11 @@ struct bpf_func_state {
>> * `stack`. allocated_stack is always a multiple of BPF_REG_SIZE.
>> */
>> int allocated_stack;
>> +
>> + u16 stack_arg_depth; /* Size of incoming + max outgoing stack args in bytes. */
>> + u16 incoming_stack_arg_depth; /* Size of incoming stack args in bytes. */
> but incoming_stack_arg_depth looks odd.
>
> Callee should be accessing caller's stack_arg_depth and
> caller's stack_arg_regs.
Okay. Will reduce local caching and get the value from the caller.
>
>> + u16 out_stack_arg_mask; /* Bitmask of outgoing stack arg slots that have been written. */
>> + struct bpf_reg_state *stack_arg_regs; /* On-stack arguments */
>> };
>>
>> #define MAX_CALL_FRAMES 8
>> @@ -508,6 +513,17 @@ struct bpf_verifier_state {
>> iter < frame->allocated_stack / BPF_REG_SIZE; \
>> iter++, reg = bpf_get_spilled_reg(iter, frame, mask))
>>
>> +#define bpf_get_spilled_stack_arg(slot, frame, mask) \
>> + ((((slot) < frame->stack_arg_depth / BPF_REG_SIZE) && \
>> + (frame->stack_arg_regs[slot].type != NOT_INIT)) \
>> + ? &frame->stack_arg_regs[slot] : NULL)
>> +
>> +/* Iterate over 'frame', setting 'reg' to either NULL or a spilled stack arg. */
>> +#define bpf_for_each_spilled_stack_arg(iter, frame, reg, mask) \
>> + for (iter = 0, reg = bpf_get_spilled_stack_arg(iter, frame, mask); \
>> + iter < frame->stack_arg_depth / BPF_REG_SIZE; \
>> + iter++, reg = bpf_get_spilled_stack_arg(iter, frame, mask))
>> +
>> #define bpf_for_each_reg_in_vstate_mask(__vst, __state, __reg, __mask, __expr) \
>> ({ \
>> struct bpf_verifier_state *___vstate = __vst; \
>> @@ -525,6 +541,11 @@ struct bpf_verifier_state {
>> continue; \
>> (void)(__expr); \
>> } \
>> + bpf_for_each_spilled_stack_arg(___j, __state, __reg, __mask) { \
>> + if (!__reg) \
>> + continue; \
>> + (void)(__expr); \
>> + } \
>> } \
>> })
>>
>> @@ -739,10 +760,13 @@ struct bpf_subprog_info {
>> bool keep_fastcall_stack: 1;
>> bool changes_pkt_data: 1;
>> bool might_sleep: 1;
>> - u8 arg_cnt:3;
>> + u8 arg_cnt:4;
>>
>> enum priv_stack_mode priv_stack_mode;
>> - struct bpf_subprog_arg_info args[MAX_BPF_FUNC_REG_ARGS];
>> + struct bpf_subprog_arg_info args[MAX_BPF_FUNC_ARGS];
>> + u16 incoming_stack_arg_depth;
>> + u16 outgoing_stack_arg_depth;
>> + u16 max_out_stack_arg_depth;
> but you already have them in prog_aux?! another copy in bpf_subprog_info?!
> Remove one of them. The JIT only needs one set.
Okay.
>
>> };
>>
>> struct bpf_verifier_env;
>> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
>> index a62d78581207..c5f3aa05d5a3 100644
>> --- a/kernel/bpf/btf.c
>> +++ b/kernel/bpf/btf.c
>> @@ -7887,13 +7887,19 @@ int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog)
>> }
>> args = (const struct btf_param *)(t + 1);
>> nargs = btf_type_vlen(t);
>> - if (nargs > MAX_BPF_FUNC_REG_ARGS) {
>> - if (!is_global)
>> - return -EINVAL;
>> - bpf_log(log, "Global function %s() with %d > %d args. Buggy compiler.\n",
>> + if (nargs > MAX_BPF_FUNC_ARGS) {
>> + bpf_log(log, "Function %s() with %d > %d args not supported.\n",
>> + tname, nargs, MAX_BPF_FUNC_ARGS);
>> + return -EINVAL;
>> + }
>> + if (is_global && nargs > MAX_BPF_FUNC_REG_ARGS) {
>> + bpf_log(log, "Global function %s() with %d > %d args not supported.\n",
>> tname, nargs, MAX_BPF_FUNC_REG_ARGS);
>> return -EINVAL;
>> }
>> + if (nargs > MAX_BPF_FUNC_REG_ARGS)
>> + sub->incoming_stack_arg_depth = (nargs - MAX_BPF_FUNC_REG_ARGS) * BPF_REG_SIZE;
>> +
>> /* check that function is void or returns int, exception cb also requires this */
>> t = btf_type_by_id(btf, t->type);
>> while (btf_type_is_modifier(t))
>> diff --git a/kernel/bpf/fixups.c b/kernel/bpf/fixups.c
>> index fba9e8c00878..c4e0224ad2f2 100644
>> --- a/kernel/bpf/fixups.c
>> +++ b/kernel/bpf/fixups.c
>> @@ -1123,6 +1123,9 @@ static int jit_subprogs(struct bpf_verifier_env *env)
>>
>> func[i]->aux->name[0] = 'F';
>> func[i]->aux->stack_depth = env->subprog_info[i].stack_depth;
>> + func[i]->aux->incoming_stack_arg_depth = env->subprog_info[i].incoming_stack_arg_depth;
>> + func[i]->aux->stack_arg_depth = env->subprog_info[i].incoming_stack_arg_depth +
>> + env->subprog_info[i].outgoing_stack_arg_depth;
>> if (env->subprog_info[i].priv_stack_mode == PRIV_STACK_ADAPTIVE)
>> func[i]->aux->jits_use_priv_stack = true;
>>
>> @@ -1301,8 +1304,10 @@ int bpf_jit_subprogs(struct bpf_verifier_env *env)
>> struct bpf_insn_aux_data *orig_insn_aux;
>> u32 *orig_subprog_starts;
>>
>> - if (env->subprog_cnt <= 1)
>> + if (env->subprog_cnt <= 1) {
>> + env->prog->aux->stack_arg_depth = env->subprog_info[0].outgoing_stack_arg_depth;
>> return 0;
>> + }
>>
>> prog = orig_prog = env->prog;
>> if (bpf_prog_need_blind(prog)) {
>> @@ -1378,9 +1383,20 @@ int bpf_fixup_call_args(struct bpf_verifier_env *env)
>> struct bpf_prog *prog = env->prog;
>> struct bpf_insn *insn = prog->insnsi;
>> bool has_kfunc_call = bpf_prog_has_kfunc_call(prog);
>> - int i, depth;
>> + int depth;
>> #endif
>> - int err = 0;
>> + int i, err = 0;
>> +
>> + for (i = 0; i < env->subprog_cnt; i++) {
>> + struct bpf_subprog_info *subprog = &env->subprog_info[i];
>> +
>> + if (subprog->max_out_stack_arg_depth > subprog->outgoing_stack_arg_depth) {
>> + verbose(env,
>> + "func#%d writes stack arg slot at depth %u, but calls only require %u bytes\n",
>> + i, subprog->max_out_stack_arg_depth, subprog->outgoing_stack_arg_depth);
>> + return -EINVAL;
>> + }
>> + }
>>
>> if (env->prog->jit_requested &&
>> !bpf_prog_is_offloaded(env->prog->aux)) {
>> diff --git a/kernel/bpf/states.c b/kernel/bpf/states.c
>> index 8478d2c6ed5b..235841d23fe3 100644
>> --- a/kernel/bpf/states.c
>> +++ b/kernel/bpf/states.c
>> @@ -838,6 +838,44 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
>> return true;
>> }
>>
>> +/*
>> + * Compare stack arg slots between old and current states.
>> + * Outgoing stack args are path-local state and must agree for pruning.
>> + */
>> +static bool stack_arg_safe(struct bpf_verifier_env *env, struct bpf_func_state *old,
>> + struct bpf_func_state *cur, struct bpf_idmap *idmap,
>> + enum exact_level exact)
>> +{
>> + int i, nslots;
>> +
>> + if (old->incoming_stack_arg_depth != cur->incoming_stack_arg_depth)
>> + return false;
>> +
>> + /* Compare both incoming and outgoing stack arg slots. */
>> + if (old->stack_arg_depth != cur->stack_arg_depth)
>> + return false;
>> +
>> + if (old->out_stack_arg_mask != cur->out_stack_arg_mask)
>> + return false;
> shouldn't be necessary.
Okay.
>
>> +
>> + nslots = old->stack_arg_depth / BPF_REG_SIZE;
>> + for (i = 0; i < nslots; i++) {
>> + struct bpf_reg_state *old_arg = &old->stack_arg_regs[i];
>> + struct bpf_reg_state *cur_arg = &cur->stack_arg_regs[i];
>> +
>> + if (old_arg->type == NOT_INIT && cur_arg->type == NOT_INIT)
>> + continue;
>> +
>> + if (exact == EXACT && old_arg->type != cur_arg->type)
>> + return false;
>> +
>> + if (!regsafe(env, old_arg, cur_arg, idmap, exact))
>> + return false;
>> + }
>> +
>> + return true;
>> +}
>> +
>> static bool refsafe(struct bpf_verifier_state *old, struct bpf_verifier_state *cur,
>> struct bpf_idmap *idmap)
>> {
>> @@ -929,6 +967,9 @@ static bool func_states_equal(struct bpf_verifier_env *env, struct bpf_func_stat
>> if (!stacksafe(env, old, cur, &env->idmap_scratch, exact))
>> return false;
>>
>> + if (!stack_arg_safe(env, old, cur, &env->idmap_scratch, exact))
>> + return false;
>> +
>> return true;
>> }
>>
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index 6aa4dc161a56..78c9322870a5 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -1340,6 +1340,20 @@ static int copy_stack_state(struct bpf_func_state *dst, const struct bpf_func_st
>> return -ENOMEM;
>>
>> dst->allocated_stack = src->allocated_stack;
>> +
>> + /* copy stack args state */
>> + n = src->stack_arg_depth / BPF_REG_SIZE;
>> + if (n) {
>> + dst->stack_arg_regs = copy_array(dst->stack_arg_regs, src->stack_arg_regs, n,
>> + sizeof(struct bpf_reg_state),
>> + GFP_KERNEL_ACCOUNT);
> copy is unnecessary.
Okay.
>
>> @@ -4220,6 +4254,13 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
>> }
>> if (type == STACK_INVALID && env->allow_uninit_stack)
>> continue;
>> + /*
>> + * Cross-frame reads may hit slots poisoned by dead code elimination.
>> + * Static liveness can't track indirect references through pointers,
>> + * so allow the read conservatively.
>> + */
>> + if (type == STACK_POISON && reg_state != state)
>> + continue;
> wait what? Is this a real issue? Are you saying you have a test prog
> that passes FP derived pointers in arg 6, 7 and arg_tracking cannot detect it?
> Then it should be added properly in arg_tracking.
> This hack to allow reading poisoned slots is not ok.
> This is a serious issue.
Okay. The reason is that the callee is not getting the proper state from the caller.
I will do a proper implementation here.
>
> pw-bot: cr
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH bpf-next v6 04/17] bpf: Prepare verifier logs for upcoming kfunc stack arguments
2026-04-21 0:03 ` Alexei Starovoitov
2026-04-21 4:06 ` Yonghong Song
@ 2026-04-21 6:07 ` Yonghong Song
2026-04-21 13:48 ` Alexei Starovoitov
1 sibling, 1 reply; 51+ messages in thread
From: Yonghong Song @ 2026-04-21 6:07 UTC (permalink / raw)
To: Alexei Starovoitov, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
On 4/20/26 5:03 PM, Alexei Starovoitov wrote:
> On Sun Apr 19, 2026 at 9:33 AM PDT, Yonghong Song wrote:
>> This change prepares verifier log reporting for upcoming kfunc stack
>> argument support.
>>
>> Today verifier log code mostly assumes that an argument can be described
>> directly by a register number. That works for arguments passed in `R1`
>> to `R5`, but it does not work once kfunc arguments can also be
>> passed on the stack.
>>
>> Introduce an internal `argno` representation such that register-passed
>> arguments keep using their real register numbers, while stack-passed
>> arguments use an encoded value above a dedicated base.
>> `reg_arg_name()` converts this representation into either `R%d` or
>> `*(R11-off)` when emitting verifier logs. If a particular `argno`
>> corresponds to a stack argument, print `*(R11-off)`; otherwise,
>> print `R%d`. Here R11 represents the base of stack arguments.
>>
>> This keeps existing logs readable for register arguments and allows the
>> same log sites to handle future stack arguments without open-coding
>> special cases.
>>
>> Update selftests accordingly.
>>
>> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
>> ---
>> include/linux/bpf_verifier.h | 1 +
>> kernel/bpf/verifier.c | 649 ++++++++++--------
>> .../testing/selftests/bpf/prog_tests/bpf_nf.c | 22 +-
>> .../selftests/bpf/prog_tests/cb_refs.c | 2 +-
>> .../selftests/bpf/prog_tests/kfunc_call.c | 2 +-
>> .../selftests/bpf/prog_tests/linked_list.c | 4 +-
>> .../selftests/bpf/progs/cgrp_kfunc_failure.c | 14 +-
>> .../selftests/bpf/progs/cpumask_failure.c | 10 +-
>> .../testing/selftests/bpf/progs/dynptr_fail.c | 22 +-
>> .../selftests/bpf/progs/file_reader_fail.c | 4 +-
>> tools/testing/selftests/bpf/progs/irq.c | 4 +-
>> tools/testing/selftests/bpf/progs/iters.c | 6 +-
>> .../selftests/bpf/progs/iters_state_safety.c | 14 +-
>> .../selftests/bpf/progs/iters_testmod.c | 4 +-
>> .../selftests/bpf/progs/iters_testmod_seq.c | 4 +-
>> .../selftests/bpf/progs/map_kptr_fail.c | 2 +-
>> .../selftests/bpf/progs/percpu_alloc_fail.c | 4 +-
>> .../testing/selftests/bpf/progs/rbtree_fail.c | 6 +-
>> .../bpf/progs/refcounted_kptr_fail.c | 2 +-
>> .../testing/selftests/bpf/progs/stream_fail.c | 2 +-
>> .../selftests/bpf/progs/task_kfunc_failure.c | 18 +-
>> .../selftests/bpf/progs/task_work_fail.c | 6 +-
>> .../selftests/bpf/progs/test_bpf_nf_fail.c | 8 +-
>> .../bpf/progs/test_kfunc_dynptr_param.c | 2 +-
>> .../bpf/progs/test_kfunc_param_nullable.c | 2 +-
>> .../selftests/bpf/progs/verifier_bits_iter.c | 4 +-
>> .../bpf/progs/verifier_ref_tracking.c | 6 +-
>> .../selftests/bpf/progs/verifier_vfs_reject.c | 8 +-
>> .../testing/selftests/bpf/progs/wq_failures.c | 2 +-
>> tools/testing/selftests/bpf/verifier/calls.c | 14 +-
>> 30 files changed, 474 insertions(+), 374 deletions(-)
>>
>> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
>> index b148f816f25b..9fbbddc40d21 100644
>> --- a/include/linux/bpf_verifier.h
>> +++ b/include/linux/bpf_verifier.h
>> @@ -913,6 +913,7 @@ struct bpf_verifier_env {
>> * e.g., in reg_type_str() to generate reg_type string
>> */
>> char tmp_str_buf[TMP_STR_BUF_LEN];
>> + char tmp_reg_arg_name_buf[32];
> the name is too long.
> Just tmp_arg_name ?
>
>> struct bpf_insn insn_buf[INSN_BUF_SIZE];
>> struct bpf_insn epilogue_buf[INSN_BUF_SIZE];
>> struct bpf_scc_callchain callchain_buf;
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index 3716d9688d00..6aa4dc161a56 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -1751,6 +1751,55 @@ static struct bpf_verifier_state *push_stack(struct bpf_verifier_env *env,
>> return &elem->st;
>> }
>>
>> +/*
>> + * Unified argument number encoding for verifier log messages.
>> + * Register args (arg_idx 0-4) use their register number (R1-R5).
>> + * Stack args (arg_idx 5+) are encoded as STACK_ARGNO_BASE + arg_idx
>> + * to avoid collision with register numbers. reg_arg_name() decodes
>> + * this back to a human-readable string like "*(R11-8)" for logs.
>> + */
>> +#define STACK_ARGNO_BASE 100
>> +
>> +static bool is_stack_argno(int argno)
>> +{
>> + return argno >= STACK_ARGNO_BASE;
>> +}
>> +
>> +static u32 make_argno(u32 arg_idx)
>> +{
>> + if (arg_idx < MAX_BPF_FUNC_REG_ARGS)
>> + return BPF_REG_1 + arg_idx;
>> + return STACK_ARGNO_BASE + arg_idx;
>> +}
>> +
>> +static u32 arg_idx_from_argno(int argno)
>> +{
>> + if (is_stack_argno(argno))
>> + return argno - STACK_ARGNO_BASE;
>> + return argno - BPF_REG_1;
>> +}
>> +
>> +static int next_argno(int argno)
>> +{
>> + return make_argno(arg_idx_from_argno(argno) + 1);
>> +}
> I don't like this +1, -1 dance all around. Makes the whole thing
> hard to follow.
> Keep argno starting at 1. So old regno == argno.
The above make_argno() returns an argno having regno == argno
for regno 1-5.
I think we do not need next_argno(). It is a special case for
KF_ARG_PTR_TO_MEM_SIZE. We can do something like
make_argno(i) and make_argno(i + 1) for the current and next argno.
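The 1-based argno encoding requested in review could look like the following standalone sketch. The function and macro names follow the patch, but this is an illustrative reimplementation under the "argno starts at 1, regno == argno for R1-R5" convention, not the final kernel code:

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_BPF_FUNC_REG_ARGS 5
#define STACK_ARGNO_BASE 100

static bool is_stack_argno(int argno)
{
	return argno >= STACK_ARGNO_BASE;
}

/* 1-based encoding: argno 1-5 is the register number itself (R1-R5);
 * argno 6+ is offset by STACK_ARGNO_BASE to avoid colliding with regnos. */
static int make_argno(int argno)
{
	if (argno <= MAX_BPF_FUNC_REG_ARGS)
		return argno;		/* regno == argno for R1-R5 */
	return STACK_ARGNO_BASE + argno;
}

static int arg_idx_from_argno(int argno)
{
	if (is_stack_argno(argno))
		return argno - STACK_ARGNO_BASE;
	return argno;
}
```

With this shape, checks like `argno == BPF_REG_2` read naturally, since register argnos and register numbers coincide.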
>
>> /* Check if local kptr in src arg matches kptr in dst arg */
>> - if (meta->func_id == BPF_FUNC_kptr_xchg && regno == BPF_REG_2) {
>> - if (map_kptr_match_type(env, meta->kptr_field, reg, regno))
>> + if (meta->func_id == BPF_FUNC_kptr_xchg &&
>> + !is_stack_argno(argno) && argno == BPF_REG_2) {
>> + if (map_kptr_match_type(env, meta->kptr_field, reg, argno))
> And then this argno == BPF_REG_2 will look fine.
> With argno base 0 the above looks broken.
> Also is_stack_argno() looks like defensive programming. remove it.
>
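A hypothetical reg_arg_name() matching the commit message's format strings might look like the sketch below. The R11 offset layout (one 8-byte slot per stack argument, starting at -8) is an assumption for illustration, as is the 1-based argno encoding:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAX_BPF_FUNC_REG_ARGS 5
#define STACK_ARGNO_BASE 100
#define BPF_REG_SIZE 8

static bool is_stack_argno(int argno)
{
	return argno >= STACK_ARGNO_BASE;
}

/* Register args print as "R%d"; stack args as "*(R11-off)", assuming the
 * first stack arg lives at R11-8 and each further arg one slot lower. */
static const char *reg_arg_name(char *buf, size_t len, int argno)
{
	if (is_stack_argno(argno)) {
		int slot = argno - STACK_ARGNO_BASE - MAX_BPF_FUNC_REG_ARGS;

		snprintf(buf, len, "*(R11-%d)", slot * BPF_REG_SIZE);
	} else {
		snprintf(buf, len, "R%d", argno);
	}
	return buf;
}
```

This keeps a single format site for both kinds of arguments, which is the point of the argno indirection in the patch.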
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH bpf-next v6 04/17] bpf: Prepare verifier logs for upcoming kfunc stack arguments
2026-04-21 6:07 ` Yonghong Song
@ 2026-04-21 13:48 ` Alexei Starovoitov
2026-04-21 15:41 ` Yonghong Song
0 siblings, 1 reply; 51+ messages in thread
From: Alexei Starovoitov @ 2026-04-21 13:48 UTC (permalink / raw)
To: Yonghong Song
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, Kernel Team, Martin KaFai Lau
On Mon, Apr 20, 2026 at 11:07 PM Yonghong Song <yonghong.song@linux.dev> wrote:
>
>
>
> On 4/20/26 5:03 PM, Alexei Starovoitov wrote:
> > On Sun Apr 19, 2026 at 9:33 AM PDT, Yonghong Song wrote:
> >> [...]
> >> +static int next_argno(int argno)
> >> +{
> >> + return make_argno(arg_idx_from_argno(argno) + 1);
> >> +}
> > I don't like this +1, -1 dance all around. Makes the whole thing
> > hard to follow.
> > Keep argno starting at 1. So old regno == argno.
>
> The above make_argno() returns an argno having regno == argno
> for regno 1-5.
No. It doesn't.
it does:
+static u32 make_argno(u32 arg_idx)
+{
+ if (arg_idx < MAX_BPF_FUNC_REG_ARGS)
+ return BPF_REG_1 + arg_idx;
so for argno == 0 it returns 1 and that's my objection.
Make argno starting at 1.
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH bpf-next v6 04/17] bpf: Prepare verifier logs for upcoming kfunc stack arguments
2026-04-21 13:48 ` Alexei Starovoitov
@ 2026-04-21 15:41 ` Yonghong Song
2026-04-21 15:46 ` Alexei Starovoitov
0 siblings, 1 reply; 51+ messages in thread
From: Yonghong Song @ 2026-04-21 15:41 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, Kernel Team, Martin KaFai Lau
On 4/21/26 6:48 AM, Alexei Starovoitov wrote:
> On Mon, Apr 20, 2026 at 11:07 PM Yonghong Song <yonghong.song@linux.dev> wrote:
>>
>>
>> On 4/20/26 5:03 PM, Alexei Starovoitov wrote:
>>> On Sun Apr 19, 2026 at 9:33 AM PDT, Yonghong Song wrote:
>>>> [...]
>>>> +static int next_argno(int argno)
>>>> +{
>>>> + return make_argno(arg_idx_from_argno(argno) + 1);
>>>> +}
>>> I don't like this +1, -1 dance all around. Makes the whole thing
>>> hard to follow.
>>> Keep argno starting at 1. So old regno == argno.
>> The above make_argno() returns an argno having regno == argno
>> for regno 1-5.
> No. It doesn't.
>
> it does:
>
> +static u32 make_argno(u32 arg_idx)
> +{
> + if (arg_idx < MAX_BPF_FUNC_REG_ARGS)
> + return BPF_REG_1 + arg_idx;
>
> so for argno == 0 it returns 1 and that's my objection.
> Make argno starting at 1.
Okay. Will do.
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH bpf-next v6 04/17] bpf: Prepare verifier logs for upcoming kfunc stack arguments
2026-04-21 15:41 ` Yonghong Song
@ 2026-04-21 15:46 ` Alexei Starovoitov
2026-04-21 16:37 ` Yonghong Song
2026-04-21 17:24 ` Yonghong Song
0 siblings, 2 replies; 51+ messages in thread
From: Alexei Starovoitov @ 2026-04-21 15:46 UTC (permalink / raw)
To: Yonghong Song
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, Kernel Team, Martin KaFai Lau
On Tue Apr 21, 2026 at 8:41 AM PDT, Yonghong Song wrote:
>>
>> it does:
>>
>> +static u32 make_argno(u32 arg_idx)
>> +{
>> + if (arg_idx < MAX_BPF_FUNC_REG_ARGS)
>> + return BPF_REG_1 + arg_idx;
>>
>> so for argno == 0 it returns 1 and that's my objection.
>> Make argno starting at 1.
>
> Okay. Will do.
To move things forward please resend the first 6 patches only.
So we can make sure that they're in a good shape and land them first.
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH bpf-next v6 04/17] bpf: Prepare verifier logs for upcoming kfunc stack arguments
2026-04-21 15:46 ` Alexei Starovoitov
@ 2026-04-21 16:37 ` Yonghong Song
2026-04-21 17:24 ` Yonghong Song
1 sibling, 0 replies; 51+ messages in thread
From: Yonghong Song @ 2026-04-21 16:37 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, Kernel Team, Martin KaFai Lau
On 4/21/26 8:46 AM, Alexei Starovoitov wrote:
> On Tue Apr 21, 2026 at 8:41 AM PDT, Yonghong Song wrote:
>>> it does:
>>>
>>> +static u32 make_argno(u32 arg_idx)
>>> +{
>>> + if (arg_idx < MAX_BPF_FUNC_REG_ARGS)
>>> + return BPF_REG_1 + arg_idx;
>>>
>>> so for argno == 0 it returns 1 and that's my objection.
>>> Make argno starting at 1.
>> Okay. Will do.
> To move things forward please resend the first 6 patches only.
> So we can make sure that they're in a good shape and land them first.
Good point. Will do.
^ permalink raw reply [flat|nested] 51+ messages in thread
* Re: [PATCH bpf-next v6 04/17] bpf: Prepare verifier logs for upcoming kfunc stack arguments
2026-04-21 15:46 ` Alexei Starovoitov
2026-04-21 16:37 ` Yonghong Song
@ 2026-04-21 17:24 ` Yonghong Song
1 sibling, 0 replies; 51+ messages in thread
From: Yonghong Song @ 2026-04-21 17:24 UTC (permalink / raw)
To: Alexei Starovoitov
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, Kernel Team, Martin KaFai Lau
On 4/21/26 8:46 AM, Alexei Starovoitov wrote:
> On Tue Apr 21, 2026 at 8:41 AM PDT, Yonghong Song wrote:
>>> it does:
>>>
>>> +static u32 make_argno(u32 arg_idx)
>>> +{
>>> + if (arg_idx < MAX_BPF_FUNC_REG_ARGS)
>>> + return BPF_REG_1 + arg_idx;
>>>
>>> so for argno == 0 it returns 1 and that's my objection.
>>> Make argno starting at 1.
>> Okay. Will do.
> To move things forward please resend the first 6 patches only.
> So we can make sure that they're in a good shape and land them first.
>
Just sent out the preparation patch set:
https://lore.kernel.org/bpf/20260421171927.3507554-1-yonghong.song@linux.dev/
For one patch: bpf: Fix tail_call_reachable leak
AI commented multiple times on this, so I implemented it. You can decide whether
it is important or not.
^ permalink raw reply [flat|nested] 51+ messages in thread
end of thread, other threads:[~2026-04-21 17:24 UTC | newest]
Thread overview: 51+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2026-04-19 16:33 [PATCH bpf-next v6 00/17] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
2026-04-19 16:33 ` [PATCH bpf-next v6 01/17] bpf: Remove unused parameter from check_map_kptr_access() Yonghong Song
2026-04-19 16:33 ` [PATCH bpf-next v6 02/17] bpf: Refactor to avoid redundant calculation of bpf_reg_state Yonghong Song
2026-04-19 16:33 ` [PATCH bpf-next v6 03/17] bpf: Refactor to handle memory and size together Yonghong Song
2026-04-20 23:58 ` Alexei Starovoitov
2026-04-21 4:04 ` Yonghong Song
2026-04-19 16:33 ` [PATCH bpf-next v6 04/17] bpf: Prepare verifier logs for upcoming kfunc stack arguments Yonghong Song
2026-04-21 0:03 ` Alexei Starovoitov
2026-04-21 4:06 ` Yonghong Song
2026-04-21 6:07 ` Yonghong Song
2026-04-21 13:48 ` Alexei Starovoitov
2026-04-21 15:41 ` Yonghong Song
2026-04-21 15:46 ` Alexei Starovoitov
2026-04-21 16:37 ` Yonghong Song
2026-04-21 17:24 ` Yonghong Song
2026-04-19 16:33 ` [PATCH bpf-next v6 05/17] bpf: Introduce bpf register BPF_REG_PARAMS Yonghong Song
2026-04-19 17:06 ` sashiko-bot
2026-04-19 18:14 ` Yonghong Song
2026-04-19 16:33 ` [PATCH bpf-next v6 06/17] bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments Yonghong Song
2026-04-19 16:33 ` [PATCH bpf-next v6 07/17] bpf: Support stack arguments for bpf functions Yonghong Song
2026-04-19 19:15 ` sashiko-bot
2026-04-20 4:35 ` Yonghong Song
2026-04-21 0:37 ` Alexei Starovoitov
2026-04-21 4:15 ` Yonghong Song
2026-04-19 16:33 ` [PATCH bpf-next v6 08/17] bpf: Reject stack arguments in non-JITed programs Yonghong Song
2026-04-19 18:21 ` sashiko-bot
2026-04-20 4:23 ` Yonghong Song
2026-04-19 16:34 ` [PATCH bpf-next v6 09/17] bpf: Track r11 registers in const_fold and liveness Yonghong Song
2026-04-19 16:34 ` [PATCH bpf-next v6 10/17] bpf: Prepare architecture JIT support for stack arguments Yonghong Song
2026-04-19 16:34 ` [PATCH bpf-next v6 11/17] bpf: Enable r11 based insns Yonghong Song
2026-04-19 16:34 ` [PATCH bpf-next v6 12/17] bpf: Support stack arguments for kfunc calls Yonghong Song
2026-04-19 17:08 ` sashiko-bot
2026-04-19 18:18 ` Yonghong Song
2026-04-19 16:34 ` [PATCH bpf-next v6 13/17] bpf: Reject stack arguments if tail call reachable Yonghong Song
2026-04-19 17:08 ` sashiko-bot
2026-04-19 18:20 ` Yonghong Song
2026-04-19 16:34 ` [PATCH bpf-next v6 14/17] bpf,x86: Implement JIT support for stack arguments Yonghong Song
2026-04-19 17:25 ` sashiko-bot
2026-04-19 18:55 ` Yonghong Song
2026-04-19 16:34 ` [PATCH bpf-next v6 15/17] selftests/bpf: Add tests for BPF function " Yonghong Song
2026-04-19 17:15 ` sashiko-bot
2026-04-20 5:52 ` Yonghong Song
2026-04-19 16:34 ` [PATCH bpf-next v6 16/17] selftests/bpf: Add tests for stack argument validation Yonghong Song
2026-04-19 16:34 ` [PATCH bpf-next v6 17/17] selftests/bpf: Add verifier " Yonghong Song
2026-04-19 17:21 ` sashiko-bot
2026-04-20 6:14 ` Yonghong Song
2026-04-20 15:41 ` [PATCH bpf-next v6 00/17] bpf: Support stack arguments for BPF functions and kfuncs Puranjay Mohan
2026-04-20 20:22 ` Yonghong Song
2026-04-20 20:25 ` Puranjay Mohan
2026-04-20 21:49 ` Alexei Starovoitov
2026-04-20 23:44 ` Yonghong Song
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox