* [PATCH bpf-next 0/9] bpf: Prepare to support stack arguments
@ 2026-04-21 17:19 Yonghong Song
2026-04-21 17:19 ` [PATCH bpf-next 1/9] bpf: Remove unused parameter from check_map_kptr_access() Yonghong Song
` (9 more replies)
0 siblings, 10 replies; 25+ messages in thread
From: Yonghong Song @ 2026-04-21 17:19 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
The patch set prepares to support stack arguments for bpf functions
and kfuncs. The major changes include:
- Avoid redundant calculation of bpf_reg_state. Stack arguments
have no corresponding register number.
- Refactor check_kfunc_mem_size_reg() to take bpf_reg_state
pointers for both mem_reg and size_reg.
- Allow verifier logs to print stack arguments when there is no
corresponding register.
Please see individual patches for details.
Yonghong Song (9):
bpf: Remove unused parameter from check_map_kptr_access()
bpf: Fix tail_call_reachable leak
bpf: Remove WARN_ON_ONCE in check_kfunc_mem_size_reg()
bpf: Refactor to avoid redundant calculation of bpf_reg_state
bpf: Refactor to handle memory and size together
bpf: Rename existing argno to arg
bpf: Prepare verifier logs for upcoming kfunc stack arguments
bpf: Introduce bpf register BPF_REG_PARAMS
bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments
include/linux/bpf.h | 5 +
include/linux/bpf_verifier.h | 1 +
include/linux/filter.h | 5 +-
kernel/bpf/core.c | 4 +-
kernel/bpf/verifier.c | 813 ++++++++++--------
.../testing/selftests/bpf/prog_tests/bpf_nf.c | 22 +-
.../selftests/bpf/prog_tests/cb_refs.c | 2 +-
.../selftests/bpf/prog_tests/ctx_rewrite.c | 14 +-
.../selftests/bpf/prog_tests/kfunc_call.c | 2 +-
.../selftests/bpf/prog_tests/linked_list.c | 4 +-
.../selftests/bpf/progs/cgrp_kfunc_failure.c | 14 +-
.../selftests/bpf/progs/cpumask_failure.c | 10 +-
.../testing/selftests/bpf/progs/dynptr_fail.c | 22 +-
.../selftests/bpf/progs/file_reader_fail.c | 4 +-
tools/testing/selftests/bpf/progs/irq.c | 4 +-
tools/testing/selftests/bpf/progs/iters.c | 6 +-
.../selftests/bpf/progs/iters_state_safety.c | 14 +-
.../selftests/bpf/progs/iters_testmod.c | 4 +-
.../selftests/bpf/progs/iters_testmod_seq.c | 4 +-
.../selftests/bpf/progs/map_kptr_fail.c | 2 +-
.../selftests/bpf/progs/percpu_alloc_fail.c | 4 +-
.../testing/selftests/bpf/progs/rbtree_fail.c | 6 +-
.../bpf/progs/refcounted_kptr_fail.c | 2 +-
.../testing/selftests/bpf/progs/stream_fail.c | 2 +-
.../selftests/bpf/progs/task_kfunc_failure.c | 18 +-
.../selftests/bpf/progs/task_work_fail.c | 6 +-
.../selftests/bpf/progs/test_bpf_nf_fail.c | 8 +-
.../bpf/progs/test_kfunc_dynptr_param.c | 2 +-
.../bpf/progs/test_kfunc_param_nullable.c | 2 +-
.../selftests/bpf/progs/verifier_bits_iter.c | 4 +-
.../bpf/progs/verifier_bpf_fastcall.c | 24 +-
.../selftests/bpf/progs/verifier_may_goto_1.c | 12 +-
.../bpf/progs/verifier_ref_tracking.c | 6 +-
.../selftests/bpf/progs/verifier_sdiv.c | 64 +-
.../selftests/bpf/progs/verifier_vfs_reject.c | 8 +-
.../testing/selftests/bpf/progs/wq_failures.c | 2 +-
tools/testing/selftests/bpf/verifier/calls.c | 14 +-
37 files changed, 604 insertions(+), 536 deletions(-)
--
2.52.0
^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH bpf-next 1/9] bpf: Remove unused parameter from check_map_kptr_access()
2026-04-21 17:19 [PATCH bpf-next 0/9] bpf: Prepare to support stack arguments Yonghong Song
@ 2026-04-21 17:19 ` Yonghong Song
2026-04-21 17:19 ` [PATCH bpf-next 2/9] bpf: Fix tail_call_reachable leak Yonghong Song
` (8 subsequent siblings)
9 siblings, 0 replies; 25+ messages in thread
From: Yonghong Song @ 2026-04-21 17:19 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau, Puranjay Mohan
The parameter 'regno' in check_map_kptr_access() is unused. Remove it.
Acked-by: Puranjay Mohan <puranjay@kernel.org>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
kernel/bpf/verifier.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 185210b73385..a768359b22cb 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -4710,7 +4710,7 @@ static int mark_uptr_ld_reg(struct bpf_verifier_env *env, u32 regno,
return 0;
}
-static int check_map_kptr_access(struct bpf_verifier_env *env, u32 regno,
+static int check_map_kptr_access(struct bpf_verifier_env *env,
int value_regno, int insn_idx,
struct btf_field *kptr_field)
{
@@ -6357,7 +6357,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
kptr_field = btf_record_find(reg->map_ptr->record,
off + reg->var_off.value, BPF_KPTR | BPF_UPTR);
if (kptr_field) {
- err = check_map_kptr_access(env, regno, value_regno, insn_idx, kptr_field);
+ err = check_map_kptr_access(env, value_regno, insn_idx, kptr_field);
} else if (t == BPF_READ && value_regno >= 0) {
struct bpf_map *map = reg->map_ptr;
--
2.52.0
* [PATCH bpf-next 2/9] bpf: Fix tail_call_reachable leak
2026-04-21 17:19 [PATCH bpf-next 0/9] bpf: Prepare to support stack arguments Yonghong Song
2026-04-21 17:19 ` [PATCH bpf-next 1/9] bpf: Remove unused parameter from check_map_kptr_access() Yonghong Song
@ 2026-04-21 17:19 ` Yonghong Song
2026-04-21 18:06 ` bot+bpf-ci
2026-04-21 17:19 ` [PATCH bpf-next 3/9] bpf: Remove WARN_ON_ONCE in check_kfunc_mem_size_reg() Yonghong Song
` (7 subsequent siblings)
9 siblings, 1 reply; 25+ messages in thread
From: Yonghong Song @ 2026-04-21 17:19 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
In check_max_stack_depth_subprog(), the local variable
tail_call_reachable is set when entering a callee that has a tail
call, but never reset when popping back to the parent. This causes
the flag to leak across sibling subprogs in the DFS traversal.
This results in unnecessary JIT overhead: the JIT emits tail call
counter preservation code for subprogs that can never be reached
via a tail call path.
Fix this by resetting tail_call_reachable to the parent's actual
per-subprog flag when popping a frame. If the parent was already
marked tail_call_reachable by a previous sibling's traversal, the
local variable stays true. Otherwise it resets to false, so
subsequent siblings start with a clean state.
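The leak and the reset can be illustrated with a standalone toy model
(ordinary userspace C, not verifier code; struct and function names
below are made up for the example):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the DFS in check_max_stack_depth_subprog():
 * subprog 0 calls subprog 1, which contains a tail call, and then
 * its sibling subprog 2, which does not.  The local flag set while
 * visiting subprog 1 must be reset when popping back to subprog 0,
 * or it leaks into the visit of subprog 2.
 */
struct subprog_info {
	bool has_tail_call;       /* subprog itself performs a tail call */
	bool tail_call_reachable; /* analysis result */
};

static void mark_reachability(struct subprog_info *sp, bool with_fix)
{
	bool tail_call_reachable = false;
	const int children[] = { 1, 2 }; /* both called from subprog 0 */

	for (int i = 0; i < 2; i++) {
		int idx = children[i];

		/* entering the callee */
		if (sp[idx].has_tail_call)
			tail_call_reachable = true;
		if (tail_call_reachable)
			sp[idx].tail_call_reachable = true;

		/* popping back to the parent (subprog 0): the fix
		 * resets the local flag to the parent's recorded state
		 */
		if (with_fix)
			tail_call_reachable = sp[0].tail_call_reachable;
	}
}
```

Without the reset, subprog 2 is wrongly marked tail_call_reachable
merely because its sibling was visited first; with it, each sibling
starts from the parent's actual state.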
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
kernel/bpf/verifier.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index a768359b22cb..34696af96b3e 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5490,6 +5490,9 @@ static int check_max_stack_depth_subprog(struct bpf_verifier_env *env, int idx,
frame = dinfo[idx].frame;
i = dinfo[idx].ret_insn;
+ /* reset tail_call_reachable to the parent's actual state */
+ tail_call_reachable = subprog[idx].tail_call_reachable;
+
goto continue_func;
}
--
2.52.0
* [PATCH bpf-next 3/9] bpf: Remove WARN_ON_ONCE in check_kfunc_mem_size_reg()
2026-04-21 17:19 [PATCH bpf-next 0/9] bpf: Prepare to support stack arguments Yonghong Song
2026-04-21 17:19 ` [PATCH bpf-next 1/9] bpf: Remove unused parameter from check_map_kptr_access() Yonghong Song
2026-04-21 17:19 ` [PATCH bpf-next 2/9] bpf: Fix tail_call_reachable leak Yonghong Song
@ 2026-04-21 17:19 ` Yonghong Song
2026-04-21 17:19 ` [PATCH bpf-next 4/9] bpf: Refactor to avoid redundant calculation of bpf_reg_state Yonghong Song
` (6 subsequent siblings)
9 siblings, 0 replies; 25+ messages in thread
From: Yonghong Song @ 2026-04-21 17:19 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
If this condition ever triggered, the warning would fire too late to
be useful. Remove it.
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
kernel/bpf/verifier.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 34696af96b3e..ed04fef49f6c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -7134,8 +7134,6 @@ static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg
struct bpf_call_arg_meta meta;
int err;
- WARN_ON_ONCE(regno < BPF_REG_2 || regno > BPF_REG_5);
-
memset(&meta, 0, sizeof(meta));
if (may_be_null) {
--
2.52.0
* [PATCH bpf-next 4/9] bpf: Refactor to avoid redundant calculation of bpf_reg_state
2026-04-21 17:19 [PATCH bpf-next 0/9] bpf: Prepare to support stack arguments Yonghong Song
` (2 preceding siblings ...)
2026-04-21 17:19 ` [PATCH bpf-next 3/9] bpf: Remove WARN_ON_ONCE in check_kfunc_mem_size_reg() Yonghong Song
@ 2026-04-21 17:19 ` Yonghong Song
2026-04-21 21:40 ` Amery Hung
2026-04-21 17:19 ` [PATCH bpf-next 5/9] bpf: Refactor to handle memory and size together Yonghong Song
` (5 subsequent siblings)
9 siblings, 1 reply; 25+ messages in thread
From: Yonghong Song @ 2026-04-21 17:19 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau, Puranjay Mohan
In many cases, once a bpf_reg_state is obtained, it can be passed
directly to callees; otherwise each callee must look up the
bpf_reg_state again based on regno. More importantly, this is needed
for the upcoming stack arguments for kfuncs, since the register state
for a stack argument has no corresponding regno. So it makes sense to
pass the reg state down to callees.
The following is the only change needed to avoid a compilation warning:
static int sanitize_check_bounds(struct bpf_verifier_env *env,
const struct bpf_insn *insn,
- const struct bpf_reg_state *dst_reg)
+ struct bpf_reg_state *dst_reg)
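The overall pattern of the refactoring can be sketched in isolation
(ordinary userspace C, not verifier code; reg_state and the
check_access_* names are invented for the example):

```c
#include <assert.h>

/* Toy illustration of the refactoring pattern in this patch: the
 * caller resolves the register state once and passes the pointer
 * down, instead of every callee re-deriving it from regno.
 */
struct reg_state { int type; };

/* before: the callee looks the register up again from regno */
static int check_access_old(struct reg_state *regs, int regno)
{
	struct reg_state *reg = &regs[regno];

	return reg->type;
}

/* after: the caller passes the already-resolved state; regno is kept
 * only where it is needed (e.g. for log messages), so a stack
 * argument, which has no regno, can flow through the same path
 */
static int check_access_new(struct reg_state *reg, int regno)
{
	(void)regno; /* used only for diagnostics */
	return reg->type;
}
```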
Acked-by: Puranjay Mohan <puranjay@kernel.org>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
kernel/bpf/verifier.c | 213 ++++++++++++++++++------------------------
1 file changed, 93 insertions(+), 120 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index ed04fef49f6c..b56a11fc3856 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -3929,13 +3929,13 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
static int check_stack_write_var_off(struct bpf_verifier_env *env,
/* func where register points to */
struct bpf_func_state *state,
- int ptr_regno, int off, int size,
+ struct bpf_reg_state *ptr_reg, int off, int size,
int value_regno, int insn_idx)
{
struct bpf_func_state *cur; /* state of the current function */
int min_off, max_off;
int i, err;
- struct bpf_reg_state *ptr_reg = NULL, *value_reg = NULL;
+ struct bpf_reg_state *value_reg = NULL;
struct bpf_insn *insn = &env->prog->insnsi[insn_idx];
bool writing_zero = false;
/* set if the fact that we're writing a zero is used to let any
@@ -3944,7 +3944,6 @@ static int check_stack_write_var_off(struct bpf_verifier_env *env,
bool zero_used = false;
cur = env->cur_state->frame[env->cur_state->curframe];
- ptr_reg = &cur->regs[ptr_regno];
min_off = ptr_reg->smin_value + off;
max_off = ptr_reg->smax_value + off + size;
if (value_regno >= 0)
@@ -4241,7 +4240,7 @@ enum bpf_access_src {
ACCESS_HELPER = 2, /* the access is performed by a helper */
};
-static int check_stack_range_initialized(struct bpf_verifier_env *env,
+static int check_stack_range_initialized(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
int regno, int off, int access_size,
bool zero_size_allowed,
enum bpf_access_type type,
@@ -4265,18 +4264,16 @@ static struct bpf_reg_state *reg_state(struct bpf_verifier_env *env, int regno)
* offset; for a fixed offset check_stack_read_fixed_off should be used
* instead.
*/
-static int check_stack_read_var_off(struct bpf_verifier_env *env,
+static int check_stack_read_var_off(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
int ptr_regno, int off, int size, int dst_regno)
{
- /* The state of the source register. */
- struct bpf_reg_state *reg = reg_state(env, ptr_regno);
struct bpf_func_state *ptr_state = bpf_func(env, reg);
int err;
int min_off, max_off;
/* Note that we pass a NULL meta, so raw access will not be permitted.
*/
- err = check_stack_range_initialized(env, ptr_regno, off, size,
+ err = check_stack_range_initialized(env, reg, ptr_regno, off, size,
false, BPF_READ, NULL);
if (err)
return err;
@@ -4298,10 +4295,9 @@ static int check_stack_read_var_off(struct bpf_verifier_env *env,
* can be -1, meaning that the read value is not going to a register.
*/
static int check_stack_read(struct bpf_verifier_env *env,
- int ptr_regno, int off, int size,
+ struct bpf_reg_state *reg, int ptr_regno, int off, int size,
int dst_regno)
{
- struct bpf_reg_state *reg = reg_state(env, ptr_regno);
struct bpf_func_state *state = bpf_func(env, reg);
int err;
/* Some accesses are only permitted with a static offset. */
@@ -4337,7 +4333,7 @@ static int check_stack_read(struct bpf_verifier_env *env,
* than fixed offset ones. Note that dst_regno >= 0 on this
* branch.
*/
- err = check_stack_read_var_off(env, ptr_regno, off, size,
+ err = check_stack_read_var_off(env, reg, ptr_regno, off, size,
dst_regno);
}
return err;
@@ -4354,10 +4350,9 @@ static int check_stack_read(struct bpf_verifier_env *env,
* The caller must ensure that the offset falls within the maximum stack size.
*/
static int check_stack_write(struct bpf_verifier_env *env,
- int ptr_regno, int off, int size,
+ struct bpf_reg_state *reg, int off, int size,
int value_regno, int insn_idx)
{
- struct bpf_reg_state *reg = reg_state(env, ptr_regno);
struct bpf_func_state *state = bpf_func(env, reg);
int err;
@@ -4370,16 +4365,15 @@ static int check_stack_write(struct bpf_verifier_env *env,
* than fixed offset ones.
*/
err = check_stack_write_var_off(env, state,
- ptr_regno, off, size,
+ reg, off, size,
value_regno, insn_idx);
}
return err;
}
-static int check_map_access_type(struct bpf_verifier_env *env, u32 regno,
+static int check_map_access_type(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
int off, int size, enum bpf_access_type type)
{
- struct bpf_reg_state *reg = reg_state(env, regno);
struct bpf_map *map = reg->map_ptr;
u32 cap = bpf_map_flags_to_cap(map);
@@ -4399,17 +4393,15 @@ static int check_map_access_type(struct bpf_verifier_env *env, u32 regno,
}
/* check read/write into memory region (e.g., map value, ringbuf sample, etc) */
-static int __check_mem_access(struct bpf_verifier_env *env, int regno,
+static int __check_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
int off, int size, u32 mem_size,
bool zero_size_allowed)
{
bool size_ok = size > 0 || (size == 0 && zero_size_allowed);
- struct bpf_reg_state *reg;
if (off >= 0 && size_ok && (u64)off + size <= mem_size)
return 0;
- reg = &cur_regs(env)[regno];
switch (reg->type) {
case PTR_TO_MAP_KEY:
verbose(env, "invalid access to map key, key_size=%d off=%d size=%d\n",
@@ -4439,13 +4431,10 @@ static int __check_mem_access(struct bpf_verifier_env *env, int regno,
}
/* check read/write into a memory region with possible variable offset */
-static int check_mem_region_access(struct bpf_verifier_env *env, u32 regno,
+static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
int off, int size, u32 mem_size,
bool zero_size_allowed)
{
- struct bpf_verifier_state *vstate = env->cur_state;
- struct bpf_func_state *state = vstate->frame[vstate->curframe];
- struct bpf_reg_state *reg = &state->regs[regno];
int err;
/* We may have adjusted the register pointing to memory region, so we
@@ -4466,7 +4455,7 @@ static int check_mem_region_access(struct bpf_verifier_env *env, u32 regno,
regno);
return -EACCES;
}
- err = __check_mem_access(env, regno, reg->smin_value + off, size,
+ err = __check_mem_access(env, reg, regno, reg->smin_value + off, size,
mem_size, zero_size_allowed);
if (err) {
verbose(env, "R%d min value is outside of the allowed memory range\n",
@@ -4483,7 +4472,7 @@ static int check_mem_region_access(struct bpf_verifier_env *env, u32 regno,
regno);
return -EACCES;
}
- err = __check_mem_access(env, regno, reg->umax_value + off, size,
+ err = __check_mem_access(env, reg, regno, reg->umax_value + off, size,
mem_size, zero_size_allowed);
if (err) {
verbose(env, "R%d max value is outside of the allowed memory range\n",
@@ -4787,19 +4776,16 @@ static u32 map_mem_size(const struct bpf_map *map)
}
/* check read/write into a map element with possible variable offset */
-static int check_map_access(struct bpf_verifier_env *env, u32 regno,
+static int check_map_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
int off, int size, bool zero_size_allowed,
enum bpf_access_src src)
{
- struct bpf_verifier_state *vstate = env->cur_state;
- struct bpf_func_state *state = vstate->frame[vstate->curframe];
- struct bpf_reg_state *reg = &state->regs[regno];
struct bpf_map *map = reg->map_ptr;
u32 mem_size = map_mem_size(map);
struct btf_record *rec;
int err, i;
- err = check_mem_region_access(env, regno, off, size, mem_size, zero_size_allowed);
+ err = check_mem_region_access(env, reg, regno, off, size, mem_size, zero_size_allowed);
if (err)
return err;
@@ -4895,10 +4881,9 @@ static bool may_access_direct_pkt_data(struct bpf_verifier_env *env,
}
}
-static int check_packet_access(struct bpf_verifier_env *env, u32 regno, int off,
+static int check_packet_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno, int off,
int size, bool zero_size_allowed)
{
- struct bpf_reg_state *reg = reg_state(env, regno);
int err;
if (reg->range < 0) {
@@ -4906,7 +4891,7 @@ static int check_packet_access(struct bpf_verifier_env *env, u32 regno, int off,
return -EINVAL;
}
- err = check_mem_region_access(env, regno, off, size, reg->range, zero_size_allowed);
+ err = check_mem_region_access(env, reg, regno, off, size, reg->range, zero_size_allowed);
if (err)
return err;
@@ -4961,7 +4946,7 @@ static int __check_ctx_access(struct bpf_verifier_env *env, int insn_idx, int of
return -EACCES;
}
-static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, u32 regno,
+static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, u32 regno,
int off, int access_size, enum bpf_access_type t,
struct bpf_insn_access_aux *info)
{
@@ -4971,12 +4956,10 @@ static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
*/
bool var_off_ok = is_var_ctx_off_allowed(env->prog);
bool fixed_off_ok = !env->ops->convert_ctx_access;
- struct bpf_reg_state *regs = cur_regs(env);
- struct bpf_reg_state *reg = regs + regno;
int err;
if (var_off_ok)
- err = check_mem_region_access(env, regno, off, access_size, U16_MAX, false);
+ err = check_mem_region_access(env, reg, regno, off, access_size, U16_MAX, false);
else
err = __check_ptr_off_reg(env, reg, regno, fixed_off_ok);
if (err)
@@ -5002,10 +4985,9 @@ static int check_flow_keys_access(struct bpf_verifier_env *env, int off,
}
static int check_sock_access(struct bpf_verifier_env *env, int insn_idx,
- u32 regno, int off, int size,
+ struct bpf_reg_state *reg, u32 regno, int off, int size,
enum bpf_access_type t)
{
- struct bpf_reg_state *reg = reg_state(env, regno);
struct bpf_insn_access_aux info = {};
bool valid;
@@ -5971,12 +5953,11 @@ static bool type_is_trusted_or_null(struct bpf_verifier_env *env,
}
static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
- struct bpf_reg_state *regs,
+ struct bpf_reg_state *regs, struct bpf_reg_state *reg,
int regno, int off, int size,
enum bpf_access_type atype,
int value_regno)
{
- struct bpf_reg_state *reg = regs + regno;
const struct btf_type *t = btf_type_by_id(reg->btf, reg->btf_id);
const char *tname = btf_name_by_offset(reg->btf, t->name_off);
const char *field_name = NULL;
@@ -6128,12 +6109,11 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
}
static int check_ptr_to_map_access(struct bpf_verifier_env *env,
- struct bpf_reg_state *regs,
+ struct bpf_reg_state *regs, struct bpf_reg_state *reg,
int regno, int off, int size,
enum bpf_access_type atype,
int value_regno)
{
- struct bpf_reg_state *reg = regs + regno;
struct bpf_map *map = reg->map_ptr;
struct bpf_reg_state map_reg;
enum bpf_type_flag flag = 0;
@@ -6222,11 +6202,10 @@ static int check_stack_slot_within_bounds(struct bpf_verifier_env *env,
* 'off' includes `regno->offset`, but not its dynamic part (if any).
*/
static int check_stack_access_within_bounds(
- struct bpf_verifier_env *env,
+ struct bpf_verifier_env *env, struct bpf_reg_state *reg,
int regno, int off, int access_size,
enum bpf_access_type type)
{
- struct bpf_reg_state *reg = reg_state(env, regno);
struct bpf_func_state *state = bpf_func(env, reg);
s64 min_off, max_off;
int err;
@@ -6314,12 +6293,11 @@ static void add_scalar_to_reg(struct bpf_reg_state *dst_reg, s64 val)
* if t==write && value_regno==-1, some unknown value is stored into memory
* if t==read && value_regno==-1, don't care what we read from memory
*/
-static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regno,
+static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, u32 regno,
int off, int bpf_size, enum bpf_access_type t,
int value_regno, bool strict_alignment_once, bool is_ldsx)
{
struct bpf_reg_state *regs = cur_regs(env);
- struct bpf_reg_state *reg = regs + regno;
int size, err = 0;
size = bpf_size_to_bytes(bpf_size);
@@ -6336,7 +6314,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
return -EACCES;
}
- err = check_mem_region_access(env, regno, off, size,
+ err = check_mem_region_access(env, reg, regno, off, size,
reg->map_ptr->key_size, false);
if (err)
return err;
@@ -6350,10 +6328,10 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
verbose(env, "R%d leaks addr into map\n", value_regno);
return -EACCES;
}
- err = check_map_access_type(env, regno, off, size, t);
+ err = check_map_access_type(env, reg, regno, off, size, t);
if (err)
return err;
- err = check_map_access(env, regno, off, size, false, ACCESS_DIRECT);
+ err = check_map_access(env, reg, regno, off, size, false, ACCESS_DIRECT);
if (err)
return err;
if (tnum_is_const(reg->var_off))
@@ -6422,7 +6400,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
* instructions, hence no need to check bounds in that case.
*/
if (!rdonly_untrusted)
- err = check_mem_region_access(env, regno, off, size,
+ err = check_mem_region_access(env, reg, regno, off, size,
reg->mem_size, false);
if (!err && value_regno >= 0 && (t == BPF_READ || rdonly_mem))
mark_reg_unknown(env, regs, value_regno);
@@ -6440,7 +6418,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
return -EACCES;
}
- err = check_ctx_access(env, insn_idx, regno, off, size, t, &info);
+ err = check_ctx_access(env, insn_idx, reg, regno, off, size, t, &info);
if (!err && t == BPF_READ && value_regno >= 0) {
/* ctx access returns either a scalar, or a
* PTR_TO_PACKET[_META,_END]. In the latter
@@ -6477,15 +6455,15 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
} else if (reg->type == PTR_TO_STACK) {
/* Basic bounds checks. */
- err = check_stack_access_within_bounds(env, regno, off, size, t);
+ err = check_stack_access_within_bounds(env, reg, regno, off, size, t);
if (err)
return err;
if (t == BPF_READ)
- err = check_stack_read(env, regno, off, size,
+ err = check_stack_read(env, reg, regno, off, size,
value_regno);
else
- err = check_stack_write(env, regno, off, size,
+ err = check_stack_write(env, reg, off, size,
value_regno, insn_idx);
} else if (reg_is_pkt_pointer(reg)) {
if (t == BPF_WRITE && !may_access_direct_pkt_data(env, NULL, t)) {
@@ -6498,7 +6476,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
value_regno);
return -EACCES;
}
- err = check_packet_access(env, regno, off, size, false);
+ err = check_packet_access(env, reg, regno, off, size, false);
if (!err && t == BPF_READ && value_regno >= 0)
mark_reg_unknown(env, regs, value_regno);
} else if (reg->type == PTR_TO_FLOW_KEYS) {
@@ -6518,7 +6496,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
regno, reg_type_str(env, reg->type));
return -EACCES;
}
- err = check_sock_access(env, insn_idx, regno, off, size, t);
+ err = check_sock_access(env, insn_idx, reg, regno, off, size, t);
if (!err && value_regno >= 0)
mark_reg_unknown(env, regs, value_regno);
} else if (reg->type == PTR_TO_TP_BUFFER) {
@@ -6527,10 +6505,10 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
mark_reg_unknown(env, regs, value_regno);
} else if (base_type(reg->type) == PTR_TO_BTF_ID &&
!type_may_be_null(reg->type)) {
- err = check_ptr_to_btf_access(env, regs, regno, off, size, t,
+ err = check_ptr_to_btf_access(env, regs, reg, regno, off, size, t,
value_regno);
} else if (reg->type == CONST_PTR_TO_MAP) {
- err = check_ptr_to_map_access(env, regs, regno, off, size, t,
+ err = check_ptr_to_map_access(env, regs, reg, regno, off, size, t,
value_regno);
} else if (base_type(reg->type) == PTR_TO_BUF &&
!type_may_be_null(reg->type)) {
@@ -6599,7 +6577,7 @@ static int check_load_mem(struct bpf_verifier_env *env, struct bpf_insn *insn,
/* Check if (src_reg + off) is readable. The state of dst_reg will be
* updated by this call.
*/
- err = check_mem_access(env, env->insn_idx, insn->src_reg, insn->off,
+ err = check_mem_access(env, env->insn_idx, regs + insn->src_reg, insn->src_reg, insn->off,
BPF_SIZE(insn->code), BPF_READ, insn->dst_reg,
strict_alignment_once, is_ldsx);
err = err ?: save_aux_ptr_type(env, src_reg_type,
@@ -6629,7 +6607,7 @@ static int check_store_reg(struct bpf_verifier_env *env, struct bpf_insn *insn,
dst_reg_type = regs[insn->dst_reg].type;
/* Check if (dst_reg + off) is writeable. */
- err = check_mem_access(env, env->insn_idx, insn->dst_reg, insn->off,
+ err = check_mem_access(env, env->insn_idx, regs + insn->dst_reg, insn->dst_reg, insn->off,
BPF_SIZE(insn->code), BPF_WRITE, insn->src_reg,
strict_alignment_once, false);
err = err ?: save_aux_ptr_type(env, dst_reg_type, false);
@@ -6640,6 +6618,7 @@ static int check_store_reg(struct bpf_verifier_env *env, struct bpf_insn *insn,
static int check_atomic_rmw(struct bpf_verifier_env *env,
struct bpf_insn *insn)
{
+ struct bpf_reg_state *dst_reg;
int load_reg;
int err;
@@ -6701,13 +6680,15 @@ static int check_atomic_rmw(struct bpf_verifier_env *env,
load_reg = -1;
}
+ dst_reg = cur_regs(env) + insn->dst_reg;
+
/* Check whether we can read the memory, with second call for fetch
* case to simulate the register fill.
*/
- err = check_mem_access(env, env->insn_idx, insn->dst_reg, insn->off,
+ err = check_mem_access(env, env->insn_idx, dst_reg, insn->dst_reg, insn->off,
BPF_SIZE(insn->code), BPF_READ, -1, true, false);
if (!err && load_reg >= 0)
- err = check_mem_access(env, env->insn_idx, insn->dst_reg,
+ err = check_mem_access(env, env->insn_idx, dst_reg, insn->dst_reg,
insn->off, BPF_SIZE(insn->code),
BPF_READ, load_reg, true, false);
if (err)
@@ -6719,7 +6700,7 @@ static int check_atomic_rmw(struct bpf_verifier_env *env,
return err;
}
/* Check whether we can write into the same memory. */
- err = check_mem_access(env, env->insn_idx, insn->dst_reg, insn->off,
+ err = check_mem_access(env, env->insn_idx, dst_reg, insn->dst_reg, insn->off,
BPF_SIZE(insn->code), BPF_WRITE, -1, true, false);
if (err)
return err;
@@ -6808,11 +6789,10 @@ static int check_atomic(struct bpf_verifier_env *env, struct bpf_insn *insn)
* read offsets are marked as read.
*/
static int check_stack_range_initialized(
- struct bpf_verifier_env *env, int regno, int off,
+ struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int off,
int access_size, bool zero_size_allowed,
enum bpf_access_type type, struct bpf_call_arg_meta *meta)
{
- struct bpf_reg_state *reg = reg_state(env, regno);
struct bpf_func_state *state = bpf_func(env, reg);
int err, min_off, max_off, i, j, slot, spi;
/* Some accesses can write anything into the stack, others are
@@ -6834,7 +6814,7 @@ static int check_stack_range_initialized(
return -EACCES;
}
- err = check_stack_access_within_bounds(env, regno, off, access_size, type);
+ err = check_stack_access_within_bounds(env, reg, regno, off, access_size, type);
if (err)
return err;
@@ -6965,7 +6945,7 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
switch (base_type(reg->type)) {
case PTR_TO_PACKET:
case PTR_TO_PACKET_META:
- return check_packet_access(env, regno, 0, access_size,
+ return check_packet_access(env, reg, regno, 0, access_size,
zero_size_allowed);
case PTR_TO_MAP_KEY:
if (access_type == BPF_WRITE) {
@@ -6973,12 +6953,12 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
reg_type_str(env, reg->type));
return -EACCES;
}
- return check_mem_region_access(env, regno, 0, access_size,
+ return check_mem_region_access(env, reg, regno, 0, access_size,
reg->map_ptr->key_size, false);
case PTR_TO_MAP_VALUE:
- if (check_map_access_type(env, regno, 0, access_size, access_type))
+ if (check_map_access_type(env, reg, regno, 0, access_size, access_type))
return -EACCES;
- return check_map_access(env, regno, 0, access_size,
+ return check_map_access(env, reg, regno, 0, access_size,
zero_size_allowed, ACCESS_HELPER);
case PTR_TO_MEM:
if (type_is_rdonly_mem(reg->type)) {
@@ -6988,7 +6968,7 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
return -EACCES;
}
}
- return check_mem_region_access(env, regno, 0,
+ return check_mem_region_access(env, reg, regno, 0,
access_size, reg->mem_size,
zero_size_allowed);
case PTR_TO_BUF:
@@ -7008,16 +6988,16 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
max_access);
case PTR_TO_STACK:
return check_stack_range_initialized(
- env,
+ env, reg,
regno, 0, access_size,
zero_size_allowed, access_type, meta);
case PTR_TO_BTF_ID:
- return check_ptr_to_btf_access(env, regs, regno, 0,
+ return check_ptr_to_btf_access(env, regs, reg, regno, 0,
access_size, BPF_READ, -1);
case PTR_TO_CTX:
/* Only permit reading or writing syscall context using helper calls. */
if (is_var_ctx_off_allowed(env->prog)) {
- int err = check_mem_region_access(env, regno, 0, access_size, U16_MAX,
+ int err = check_mem_region_access(env, reg, regno, 0, access_size, U16_MAX,
zero_size_allowed);
if (err)
return err;
@@ -7178,11 +7158,10 @@ enum {
* env->cur_state->active_locks remembers which map value element or allocated
* object got locked and clears it after bpf_spin_unlock.
*/
-static int process_spin_lock(struct bpf_verifier_env *env, int regno, int flags)
+static int process_spin_lock(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int flags)
{
bool is_lock = flags & PROCESS_SPIN_LOCK, is_res_lock = flags & PROCESS_RES_LOCK;
const char *lock_str = is_res_lock ? "bpf_res_spin" : "bpf_spin";
- struct bpf_reg_state *reg = reg_state(env, regno);
struct bpf_verifier_state *cur = env->cur_state;
bool is_const = tnum_is_const(reg->var_off);
bool is_irq = flags & PROCESS_LOCK_IRQ;
@@ -7295,11 +7274,10 @@ static int process_spin_lock(struct bpf_verifier_env *env, int regno, int flags)
}
/* Check if @regno is a pointer to a specific field in a map value */
-static int check_map_field_pointer(struct bpf_verifier_env *env, u32 regno,
+static int check_map_field_pointer(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
enum btf_field_type field_type,
struct bpf_map_desc *map_desc)
{
- struct bpf_reg_state *reg = reg_state(env, regno);
bool is_const = tnum_is_const(reg->var_off);
struct bpf_map *map = reg->map_ptr;
u64 val = reg->var_off.value;
@@ -7349,26 +7327,26 @@ static int check_map_field_pointer(struct bpf_verifier_env *env, u32 regno,
return 0;
}
-static int process_timer_func(struct bpf_verifier_env *env, int regno,
+static int process_timer_func(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
struct bpf_map_desc *map)
{
if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
verbose(env, "bpf_timer cannot be used for PREEMPT_RT.\n");
return -EOPNOTSUPP;
}
- return check_map_field_pointer(env, regno, BPF_TIMER, map);
+ return check_map_field_pointer(env, reg, regno, BPF_TIMER, map);
}
-static int process_timer_helper(struct bpf_verifier_env *env, int regno,
+static int process_timer_helper(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
struct bpf_call_arg_meta *meta)
{
- return process_timer_func(env, regno, &meta->map);
+ return process_timer_func(env, reg, regno, &meta->map);
}
-static int process_timer_kfunc(struct bpf_verifier_env *env, int regno,
+static int process_timer_kfunc(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
struct bpf_kfunc_call_arg_meta *meta)
{
- return process_timer_func(env, regno, &meta->map);
+ return process_timer_func(env, reg, regno, &meta->map);
}
static int process_kptr_func(struct bpf_verifier_env *env, int regno,
@@ -7433,10 +7411,9 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
* use case. The second level is tracked using the upper bit of bpf_dynptr->size
* and checked dynamically during runtime.
*/
-static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn_idx,
+static int process_dynptr_func(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int insn_idx,
enum bpf_arg_type arg_type, int clone_ref_obj_id)
{
- struct bpf_reg_state *reg = reg_state(env, regno);
int err;
if (reg->type != PTR_TO_STACK && reg->type != CONST_PTR_TO_DYNPTR) {
@@ -7470,7 +7447,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
/* we write BPF_DW bits (8 bytes) at a time */
for (i = 0; i < BPF_DYNPTR_SIZE; i += 8) {
- err = check_mem_access(env, insn_idx, regno,
+ err = check_mem_access(env, insn_idx, reg, regno,
i, BPF_DW, BPF_WRITE, -1, false, false);
if (err)
return err;
@@ -7540,10 +7517,9 @@ static bool is_kfunc_arg_iter(struct bpf_kfunc_call_arg_meta *meta, int arg_idx,
return btf_param_match_suffix(meta->btf, arg, "__iter");
}
-static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_idx,
+static int process_iter_arg(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int insn_idx,
struct bpf_kfunc_call_arg_meta *meta)
{
- struct bpf_reg_state *reg = reg_state(env, regno);
const struct btf_type *t;
int spi, err, i, nr_slots, btf_id;
@@ -7575,7 +7551,7 @@ static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_id
}
for (i = 0; i < nr_slots * 8; i += BPF_REG_SIZE) {
- err = check_mem_access(env, insn_idx, regno,
+ err = check_mem_access(env, insn_idx, reg, regno,
i, BPF_DW, BPF_WRITE, -1, false, false);
if (err)
return err;
@@ -8014,12 +7990,11 @@ static const struct bpf_reg_types *compatible_reg_types[__BPF_ARG_TYPE_MAX] = {
[ARG_PTR_TO_DYNPTR] = &dynptr_types,
};
-static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
+static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
enum bpf_arg_type arg_type,
const u32 *arg_btf_id,
struct bpf_call_arg_meta *meta)
{
- struct bpf_reg_state *reg = reg_state(env, regno);
enum bpf_reg_type expected, type = reg->type;
const struct bpf_reg_types *compatible;
int i, j, err;
@@ -8362,7 +8337,7 @@ static int check_reg_const_str(struct bpf_verifier_env *env,
return -EACCES;
}
- err = check_map_access(env, regno, 0,
+ err = check_map_access(env, reg, regno, 0,
map->value_size - reg->var_off.value, false,
ACCESS_HELPER);
if (err)
@@ -8498,7 +8473,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
base_type(arg_type) == ARG_PTR_TO_SPIN_LOCK)
arg_btf_id = fn->arg_btf_id[arg];
- err = check_reg_type(env, regno, arg_type, arg_btf_id, meta);
+ err = check_reg_type(env, reg, regno, arg_type, arg_btf_id, meta);
if (err)
return err;
@@ -8636,11 +8611,11 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
return -EACCES;
}
if (meta->func_id == BPF_FUNC_spin_lock) {
- err = process_spin_lock(env, regno, PROCESS_SPIN_LOCK);
+ err = process_spin_lock(env, reg, regno, PROCESS_SPIN_LOCK);
if (err)
return err;
} else if (meta->func_id == BPF_FUNC_spin_unlock) {
- err = process_spin_lock(env, regno, 0);
+ err = process_spin_lock(env, reg, regno, 0);
if (err)
return err;
} else {
@@ -8649,7 +8624,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
}
break;
case ARG_PTR_TO_TIMER:
- err = process_timer_helper(env, regno, meta);
+ err = process_timer_helper(env, reg, regno, meta);
if (err)
return err;
break;
@@ -8684,7 +8659,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
true, meta);
break;
case ARG_PTR_TO_DYNPTR:
- err = process_dynptr_func(env, regno, insn_idx, arg_type, 0);
+ err = process_dynptr_func(env, reg, regno, insn_idx, arg_type, 0);
if (err)
return err;
break;
@@ -9343,7 +9318,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
if (ret)
return ret;
- ret = process_dynptr_func(env, regno, -1, arg->arg_type, 0);
+ ret = process_dynptr_func(env, reg, regno, -1, arg->arg_type, 0);
if (ret)
return ret;
} else if (base_type(arg->arg_type) == ARG_PTR_TO_BTF_ID) {
@@ -9354,7 +9329,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
continue;
memset(&meta, 0, sizeof(meta)); /* leave func_id as zero */
- err = check_reg_type(env, regno, arg->arg_type, &arg->btf_id, &meta);
+ err = check_reg_type(env, reg, regno, arg->arg_type, &arg->btf_id, &meta);
err = err ?: check_func_arg_reg_off(env, reg, regno, arg->arg_type);
if (err)
return err;
@@ -10312,18 +10287,18 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
if (err)
return err;
+ regs = cur_regs(env);
+
/* Mark slots with STACK_MISC in case of raw mode, stack offset
* is inferred from register state.
*/
for (i = 0; i < meta.access_size; i++) {
- err = check_mem_access(env, insn_idx, meta.regno, i, BPF_B,
+ err = check_mem_access(env, insn_idx, regs + meta.regno, meta.regno, i, BPF_B,
BPF_WRITE, -1, false, false);
if (err)
return err;
}
- regs = cur_regs(env);
-
if (meta.release_regno) {
err = -EINVAL;
if (arg_type_is_dynptr(fn->arg_type[meta.release_regno - BPF_REG_1])) {
@@ -11327,11 +11302,10 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
struct bpf_kfunc_call_arg_meta *meta,
const struct btf_type *t, const struct btf_type *ref_t,
const char *ref_tname, const struct btf_param *args,
- int argno, int nargs)
+ int argno, int nargs, struct bpf_reg_state *reg)
{
u32 regno = argno + 1;
struct bpf_reg_state *regs = cur_regs(env);
- struct bpf_reg_state *reg = &regs[regno];
bool arg_mem_size = false;
if (meta->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx] ||
@@ -11498,10 +11472,9 @@ static int process_kf_arg_ptr_to_btf_id(struct bpf_verifier_env *env,
return 0;
}
-static int process_irq_flag(struct bpf_verifier_env *env, int regno,
+static int process_irq_flag(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
struct bpf_kfunc_call_arg_meta *meta)
{
- struct bpf_reg_state *reg = reg_state(env, regno);
int err, kfunc_class = IRQ_NATIVE_KFUNC;
bool irq_save;
@@ -11526,7 +11499,7 @@ static int process_irq_flag(struct bpf_verifier_env *env, int regno,
return -EINVAL;
}
- err = check_mem_access(env, env->insn_idx, regno, 0, BPF_DW, BPF_WRITE, -1, false, false);
+ err = check_mem_access(env, env->insn_idx, reg, regno, 0, BPF_DW, BPF_WRITE, -1, false, false);
if (err)
return err;
@@ -12114,7 +12087,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
ref_t = btf_type_skip_modifiers(btf, t->type, &ref_id);
ref_tname = btf_name_by_offset(btf, ref_t->name_off);
- kf_arg_type = get_kfunc_ptr_arg_type(env, meta, t, ref_t, ref_tname, args, i, nargs);
+ kf_arg_type = get_kfunc_ptr_arg_type(env, meta, t, ref_t, ref_tname, args, i, nargs, reg);
if (kf_arg_type < 0)
return kf_arg_type;
@@ -12276,7 +12249,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
}
}
- ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id);
+ ret = process_dynptr_func(env, reg, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id);
if (ret < 0)
return ret;
@@ -12301,7 +12274,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
return -EINVAL;
}
}
- ret = process_iter_arg(env, regno, insn_idx, meta);
+ ret = process_iter_arg(env, reg, regno, insn_idx, meta);
if (ret < 0)
return ret;
break;
@@ -12478,7 +12451,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
verbose(env, "arg#%d doesn't point to a map value\n", i);
return -EINVAL;
}
- ret = check_map_field_pointer(env, regno, BPF_WORKQUEUE, &meta->map);
+ ret = check_map_field_pointer(env, reg, regno, BPF_WORKQUEUE, &meta->map);
if (ret < 0)
return ret;
break;
@@ -12487,7 +12460,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
verbose(env, "arg#%d doesn't point to a map value\n", i);
return -EINVAL;
}
- ret = process_timer_kfunc(env, regno, meta);
+ ret = process_timer_kfunc(env, reg, regno, meta);
if (ret < 0)
return ret;
break;
@@ -12496,7 +12469,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
verbose(env, "arg#%d doesn't point to a map value\n", i);
return -EINVAL;
}
- ret = check_map_field_pointer(env, regno, BPF_TASK_WORK, &meta->map);
+ ret = check_map_field_pointer(env, reg, regno, BPF_TASK_WORK, &meta->map);
if (ret < 0)
return ret;
break;
@@ -12505,7 +12478,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
verbose(env, "arg#%d doesn't point to an irq flag on stack\n", i);
return -EINVAL;
}
- ret = process_irq_flag(env, regno, meta);
+ ret = process_irq_flag(env, reg, regno, meta);
if (ret < 0)
return ret;
break;
@@ -12526,7 +12499,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
if (meta->func_id == special_kfunc_list[KF_bpf_res_spin_lock_irqsave] ||
meta->func_id == special_kfunc_list[KF_bpf_res_spin_unlock_irqrestore])
flags |= PROCESS_LOCK_IRQ;
- ret = process_spin_lock(env, regno, flags);
+ ret = process_spin_lock(env, reg, regno, flags);
if (ret < 0)
return ret;
break;
@@ -13660,7 +13633,7 @@ static int check_stack_access_for_ptr_arithmetic(
static int sanitize_check_bounds(struct bpf_verifier_env *env,
const struct bpf_insn *insn,
- const struct bpf_reg_state *dst_reg)
+ struct bpf_reg_state *dst_reg)
{
u32 dst = insn->dst_reg;
@@ -13677,7 +13650,7 @@ static int sanitize_check_bounds(struct bpf_verifier_env *env,
return -EACCES;
break;
case PTR_TO_MAP_VALUE:
- if (check_map_access(env, dst, 0, 1, false, ACCESS_HELPER)) {
+ if (check_map_access(env, dst_reg, dst, 0, 1, false, ACCESS_HELPER)) {
verbose(env, "R%d pointer arithmetic of map value goes out of range, "
"prohibited for !root\n", dst);
return -EACCES;
@@ -17563,7 +17536,7 @@ static int do_check_insn(struct bpf_verifier_env *env, bool *do_print_state)
dst_reg_type = cur_regs(env)[insn->dst_reg].type;
- err = check_mem_access(env, env->insn_idx, insn->dst_reg,
+ err = check_mem_access(env, env->insn_idx, cur_regs(env) + insn->dst_reg, insn->dst_reg,
insn->off, BPF_SIZE(insn->code),
BPF_WRITE, -1, false, false);
if (err)
--
2.52.0
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH bpf-next 5/9] bpf: Refactor to handle memory and size together
2026-04-21 17:19 [PATCH bpf-next 0/9] bpf: Prepare to support stack arguments Yonghong Song
` (3 preceding siblings ...)
2026-04-21 17:19 ` [PATCH bpf-next 4/9] bpf: Refactor to avoid redundant calculation of bpf_reg_state Yonghong Song
@ 2026-04-21 17:19 ` Yonghong Song
2026-04-21 17:19 ` [PATCH bpf-next 6/9] bpf: Rename existing argno to arg Yonghong Song
` (4 subsequent siblings)
9 siblings, 0 replies; 25+ messages in thread
From: Yonghong Song @ 2026-04-21 17:19 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau, Puranjay Mohan
Similar to the previous patch, pass bpf_reg_state from the caller to the
callee: both mem_reg and size_reg are now handed to the helper functions.
This matters for stack arguments, which may live beyond registers R1-R5.
Acked-by: Puranjay Mohan <puranjay@kernel.org>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
kernel/bpf/verifier.c | 57 +++++++++++++++++++++----------------------
1 file changed, 28 insertions(+), 29 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index b56a11fc3856..e389442a3f5c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -6934,12 +6934,12 @@ static int check_stack_range_initialized(
return 0;
}
-static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
+static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
int access_size, enum bpf_access_type access_type,
bool zero_size_allowed,
struct bpf_call_arg_meta *meta)
{
- struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[regno];
+ struct bpf_reg_state *regs = cur_regs(env);
u32 *max_access;
switch (base_type(reg->type)) {
@@ -7022,12 +7022,12 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
/* verify arguments to helpers or kfuncs consisting of a pointer and an access
* size.
*
- * @regno is the register containing the access size. regno-1 is the register
- * containing the pointer.
+ * @mem_reg contains the pointer, @size_reg contains the access size.
*/
static int check_mem_size_reg(struct bpf_verifier_env *env,
- struct bpf_reg_state *reg, u32 regno,
- enum bpf_access_type access_type,
+ struct bpf_reg_state *mem_reg,
+ struct bpf_reg_state *size_reg, u32 mem_regno,
+ u32 size_regno, enum bpf_access_type access_type,
bool zero_size_allowed,
struct bpf_call_arg_meta *meta)
{
@@ -7041,37 +7041,37 @@ static int check_mem_size_reg(struct bpf_verifier_env *env,
* out. Only upper bounds can be learned because retval is an
* int type and negative retvals are allowed.
*/
- meta->msize_max_value = reg->umax_value;
+ meta->msize_max_value = size_reg->umax_value;
/* The register is SCALAR_VALUE; the access check happens using
* its boundaries. For unprivileged variable accesses, disable
* raw mode so that the program is required to initialize all
* the memory that the helper could just partially fill up.
*/
- if (!tnum_is_const(reg->var_off))
+ if (!tnum_is_const(size_reg->var_off))
meta = NULL;
- if (reg->smin_value < 0) {
+ if (size_reg->smin_value < 0) {
verbose(env, "R%d min value is negative, either use unsigned or 'var &= const'\n",
- regno);
+ size_regno);
return -EACCES;
}
- if (reg->umin_value == 0 && !zero_size_allowed) {
+ if (size_reg->umin_value == 0 && !zero_size_allowed) {
verbose(env, "R%d invalid zero-sized read: u64=[%lld,%lld]\n",
- regno, reg->umin_value, reg->umax_value);
+ size_regno, size_reg->umin_value, size_reg->umax_value);
return -EACCES;
}
- if (reg->umax_value >= BPF_MAX_VAR_SIZ) {
+ if (size_reg->umax_value >= BPF_MAX_VAR_SIZ) {
verbose(env, "R%d unbounded memory access, use 'var &= const' or 'if (var < const)'\n",
- regno);
+ size_regno);
return -EACCES;
}
- err = check_helper_mem_access(env, regno - 1, reg->umax_value,
+ err = check_helper_mem_access(env, mem_reg, mem_regno, size_reg->umax_value,
access_type, zero_size_allowed, meta);
if (!err)
- err = mark_chain_precision(env, regno);
+ err = mark_chain_precision(env, size_regno);
return err;
}
@@ -7096,8 +7096,8 @@ static int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg
int size = base_type(reg->type) == PTR_TO_STACK ? -(int)mem_size : mem_size;
- err = check_helper_mem_access(env, regno, size, BPF_READ, true, NULL);
- err = err ?: check_helper_mem_access(env, regno, size, BPF_WRITE, true, NULL);
+ err = check_helper_mem_access(env, reg, regno, size, BPF_READ, true, NULL);
+ err = err ?: check_helper_mem_access(env, reg, regno, size, BPF_WRITE, true, NULL);
if (may_be_null)
*reg = saved_reg;
@@ -7105,10 +7105,9 @@ static int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg
return err;
}
-static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
- u32 regno)
+static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg_state *mem_reg,
+ struct bpf_reg_state *size_reg, u32 mem_regno, u32 size_regno)
{
- struct bpf_reg_state *mem_reg = &cur_regs(env)[regno - 1];
bool may_be_null = type_may_be_null(mem_reg->type);
struct bpf_reg_state saved_reg;
struct bpf_call_arg_meta meta;
@@ -7121,8 +7120,8 @@ static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg
mark_ptr_not_null_reg(mem_reg);
}
- err = check_mem_size_reg(env, reg, regno, BPF_READ, true, &meta);
- err = err ?: check_mem_size_reg(env, reg, regno, BPF_WRITE, true, &meta);
+ err = check_mem_size_reg(env, mem_reg, size_reg, mem_regno, size_regno, BPF_READ, true, &meta);
+ err = err ?: check_mem_size_reg(env, mem_reg, size_reg, mem_regno, size_regno, BPF_WRITE, true, &meta);
if (may_be_null)
*mem_reg = saved_reg;
@@ -8566,7 +8565,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
return -EFAULT;
}
key_size = meta->map.ptr->key_size;
- err = check_helper_mem_access(env, regno, key_size, BPF_READ, false, NULL);
+ err = check_helper_mem_access(env, reg, regno, key_size, BPF_READ, false, NULL);
if (err)
return err;
if (can_elide_value_nullness(meta->map.ptr->map_type)) {
@@ -8593,7 +8592,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
return -EFAULT;
}
meta->raw_mode = arg_type & MEM_UNINIT;
- err = check_helper_mem_access(env, regno, meta->map.ptr->value_size,
+ err = check_helper_mem_access(env, reg, regno, meta->map.ptr->value_size,
arg_type & MEM_WRITE ? BPF_WRITE : BPF_READ,
false, meta);
break;
@@ -8637,7 +8636,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
*/
meta->raw_mode = arg_type & MEM_UNINIT;
if (arg_type & MEM_FIXED_SIZE) {
- err = check_helper_mem_access(env, regno, fn->arg_size[arg],
+ err = check_helper_mem_access(env, reg, regno, fn->arg_size[arg],
arg_type & MEM_WRITE ? BPF_WRITE : BPF_READ,
false, meta);
if (err)
@@ -8647,13 +8646,13 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
}
break;
case ARG_CONST_SIZE:
- err = check_mem_size_reg(env, reg, regno,
+ err = check_mem_size_reg(env, reg_state(env, regno - 1), reg, regno - 1, regno,
fn->arg_type[arg - 1] & MEM_WRITE ?
BPF_WRITE : BPF_READ,
false, meta);
break;
case ARG_CONST_SIZE_OR_ZERO:
- err = check_mem_size_reg(env, reg, regno,
+ err = check_mem_size_reg(env, reg_state(env, regno - 1), reg, regno - 1, regno,
fn->arg_type[arg - 1] & MEM_WRITE ?
BPF_WRITE : BPF_READ,
true, meta);
@@ -12384,7 +12383,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
const struct btf_param *size_arg = &args[i + 1];
if (!bpf_register_is_null(buff_reg) || !is_kfunc_arg_nullable(meta->btf, buff_arg)) {
- ret = check_kfunc_mem_size_reg(env, size_reg, regno + 1);
+ ret = check_kfunc_mem_size_reg(env, buff_reg, size_reg, regno, regno + 1);
if (ret < 0) {
verbose(env, "arg#%d arg#%d memory, len pair leads to invalid memory access\n", i, i + 1);
return ret;
--
2.52.0
* [PATCH bpf-next 6/9] bpf: Rename existing argno to arg
2026-04-21 17:19 [PATCH bpf-next 0/9] bpf: Prepare to support stack arguments Yonghong Song
` (4 preceding siblings ...)
2026-04-21 17:19 ` [PATCH bpf-next 5/9] bpf: Refactor to handle memory and size together Yonghong Song
@ 2026-04-21 17:19 ` Yonghong Song
2026-04-21 17:20 ` [PATCH bpf-next 7/9] bpf: Prepare verifier logs for upcoming kfunc stack arguments Yonghong Song
` (3 subsequent siblings)
9 siblings, 0 replies; 25+ messages in thread
From: Yonghong Song @ 2026-04-21 17:19 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
In later patches, to support stack arguments, argno will represent
both register and stack arguments. To avoid confusion, rename the
existing argno to arg.
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
kernel/bpf/verifier.c | 54 +++++++++++++++++++++----------------------
1 file changed, 27 insertions(+), 27 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index e389442a3f5c..18ab92581452 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -11301,9 +11301,9 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
struct bpf_kfunc_call_arg_meta *meta,
const struct btf_type *t, const struct btf_type *ref_t,
const char *ref_tname, const struct btf_param *args,
- int argno, int nargs, struct bpf_reg_state *reg)
+ int arg, int nargs, struct bpf_reg_state *reg)
{
- u32 regno = argno + 1;
+ u32 regno = arg + 1;
struct bpf_reg_state *regs = cur_regs(env);
bool arg_mem_size = false;
@@ -11312,9 +11312,9 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
meta->func_id == special_kfunc_list[KF_bpf_session_cookie])
return KF_ARG_PTR_TO_CTX;
- if (argno + 1 < nargs &&
- (is_kfunc_arg_mem_size(meta->btf, &args[argno + 1], &regs[regno + 1]) ||
- is_kfunc_arg_const_mem_size(meta->btf, &args[argno + 1], &regs[regno + 1])))
+ if (arg + 1 < nargs &&
+ (is_kfunc_arg_mem_size(meta->btf, &args[arg + 1], &regs[regno + 1]) ||
+ is_kfunc_arg_const_mem_size(meta->btf, &args[arg + 1], &regs[regno + 1])))
arg_mem_size = true;
/* In this function, we verify the kfunc's BTF as per the argument type,
@@ -11322,68 +11322,68 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
* type to our caller. When a set of conditions hold in the BTF type of
* arguments, we resolve it to a known kfunc_ptr_arg_type.
*/
- if (btf_is_prog_ctx_type(&env->log, meta->btf, t, resolve_prog_type(env->prog), argno))
+ if (btf_is_prog_ctx_type(&env->log, meta->btf, t, resolve_prog_type(env->prog), arg))
return KF_ARG_PTR_TO_CTX;
- if (is_kfunc_arg_nullable(meta->btf, &args[argno]) && bpf_register_is_null(reg) &&
+ if (is_kfunc_arg_nullable(meta->btf, &args[arg]) && bpf_register_is_null(reg) &&
!arg_mem_size)
return KF_ARG_PTR_TO_NULL;
- if (is_kfunc_arg_alloc_obj(meta->btf, &args[argno]))
+ if (is_kfunc_arg_alloc_obj(meta->btf, &args[arg]))
return KF_ARG_PTR_TO_ALLOC_BTF_ID;
- if (is_kfunc_arg_refcounted_kptr(meta->btf, &args[argno]))
+ if (is_kfunc_arg_refcounted_kptr(meta->btf, &args[arg]))
return KF_ARG_PTR_TO_REFCOUNTED_KPTR;
- if (is_kfunc_arg_dynptr(meta->btf, &args[argno]))
+ if (is_kfunc_arg_dynptr(meta->btf, &args[arg]))
return KF_ARG_PTR_TO_DYNPTR;
- if (is_kfunc_arg_iter(meta, argno, &args[argno]))
+ if (is_kfunc_arg_iter(meta, arg, &args[arg]))
return KF_ARG_PTR_TO_ITER;
- if (is_kfunc_arg_list_head(meta->btf, &args[argno]))
+ if (is_kfunc_arg_list_head(meta->btf, &args[arg]))
return KF_ARG_PTR_TO_LIST_HEAD;
- if (is_kfunc_arg_list_node(meta->btf, &args[argno]))
+ if (is_kfunc_arg_list_node(meta->btf, &args[arg]))
return KF_ARG_PTR_TO_LIST_NODE;
- if (is_kfunc_arg_rbtree_root(meta->btf, &args[argno]))
+ if (is_kfunc_arg_rbtree_root(meta->btf, &args[arg]))
return KF_ARG_PTR_TO_RB_ROOT;
- if (is_kfunc_arg_rbtree_node(meta->btf, &args[argno]))
+ if (is_kfunc_arg_rbtree_node(meta->btf, &args[arg]))
return KF_ARG_PTR_TO_RB_NODE;
- if (is_kfunc_arg_const_str(meta->btf, &args[argno]))
+ if (is_kfunc_arg_const_str(meta->btf, &args[arg]))
return KF_ARG_PTR_TO_CONST_STR;
- if (is_kfunc_arg_map(meta->btf, &args[argno]))
+ if (is_kfunc_arg_map(meta->btf, &args[arg]))
return KF_ARG_PTR_TO_MAP;
- if (is_kfunc_arg_wq(meta->btf, &args[argno]))
+ if (is_kfunc_arg_wq(meta->btf, &args[arg]))
return KF_ARG_PTR_TO_WORKQUEUE;
- if (is_kfunc_arg_timer(meta->btf, &args[argno]))
+ if (is_kfunc_arg_timer(meta->btf, &args[arg]))
return KF_ARG_PTR_TO_TIMER;
- if (is_kfunc_arg_task_work(meta->btf, &args[argno]))
+ if (is_kfunc_arg_task_work(meta->btf, &args[arg]))
return KF_ARG_PTR_TO_TASK_WORK;
- if (is_kfunc_arg_irq_flag(meta->btf, &args[argno]))
+ if (is_kfunc_arg_irq_flag(meta->btf, &args[arg]))
return KF_ARG_PTR_TO_IRQ_FLAG;
- if (is_kfunc_arg_res_spin_lock(meta->btf, &args[argno]))
+ if (is_kfunc_arg_res_spin_lock(meta->btf, &args[arg]))
return KF_ARG_PTR_TO_RES_SPIN_LOCK;
if ((base_type(reg->type) == PTR_TO_BTF_ID || reg2btf_ids[base_type(reg->type)])) {
if (!btf_type_is_struct(ref_t)) {
verbose(env, "kernel function %s args#%d pointer type %s %s is not supported\n",
- meta->func_name, argno, btf_type_str(ref_t), ref_tname);
+ meta->func_name, arg, btf_type_str(ref_t), ref_tname);
return -EINVAL;
}
return KF_ARG_PTR_TO_BTF_ID;
}
- if (is_kfunc_arg_callback(env, meta->btf, &args[argno]))
+ if (is_kfunc_arg_callback(env, meta->btf, &args[arg]))
return KF_ARG_PTR_TO_CALLBACK;
/* This is the catch all argument type of register types supported by
@@ -11394,7 +11394,7 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
if (!btf_type_is_scalar(ref_t) && !__btf_type_is_scalar_struct(env, meta->btf, ref_t, 0) &&
(arg_mem_size ? !btf_type_is_void(ref_t) : 1)) {
verbose(env, "arg#%d pointer type %s %s must point to %sscalar, or struct with scalar\n",
- argno, btf_type_str(ref_t), ref_tname, arg_mem_size ? "void, " : "");
+ arg, btf_type_str(ref_t), ref_tname, arg_mem_size ? "void, " : "");
return -EINVAL;
}
return arg_mem_size ? KF_ARG_PTR_TO_MEM_SIZE : KF_ARG_PTR_TO_MEM;
@@ -11405,7 +11405,7 @@ static int process_kf_arg_ptr_to_btf_id(struct bpf_verifier_env *env,
const struct btf_type *ref_t,
const char *ref_tname, u32 ref_id,
struct bpf_kfunc_call_arg_meta *meta,
- int argno)
+ int arg)
{
const struct btf_type *reg_ref_t;
bool strict_type_match = false;
@@ -11464,7 +11464,7 @@ static int process_kf_arg_ptr_to_btf_id(struct bpf_verifier_env *env,
taking_projection = btf_is_projection_of(ref_tname, reg_ref_tname);
if (!taking_projection && !struct_same) {
verbose(env, "kernel function %s args#%d expected pointer to %s %s but R%d has a pointer to %s %s\n",
- meta->func_name, argno, btf_type_str(ref_t), ref_tname, argno + 1,
+ meta->func_name, arg, btf_type_str(ref_t), ref_tname, arg + 1,
btf_type_str(reg_ref_t), reg_ref_tname);
return -EINVAL;
}
--
2.52.0
* [PATCH bpf-next 7/9] bpf: Prepare verifier logs for upcoming kfunc stack arguments
2026-04-21 17:19 [PATCH bpf-next 0/9] bpf: Prepare to support stack arguments Yonghong Song
` (5 preceding siblings ...)
2026-04-21 17:19 ` [PATCH bpf-next 6/9] bpf: Rename existing argno to arg Yonghong Song
@ 2026-04-21 17:20 ` Yonghong Song
2026-04-21 22:07 ` Alexei Starovoitov
2026-04-21 17:20 ` [PATCH bpf-next 8/9] bpf: Introduce bpf register BPF_REG_PARAMS Yonghong Song
` (2 subsequent siblings)
9 siblings, 1 reply; 25+ messages in thread
From: Yonghong Song @ 2026-04-21 17:20 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau, Puranjay Mohan
This change prepares verifier log reporting for upcoming kfunc stack
argument support.
Today the verifier log code mostly assumes that an argument can be
described directly by a register number. That works for arguments passed
in `R1` to `R5`, but it does not work once kfunc arguments can also be
passed on the stack.
Introduce an internal `argno` representation such that register-passed
arguments keep using their real register numbers, while stack-passed
arguments use an encoded value above a dedicated base.
`reg_arg_name()` converts this representation into either `R%d` or
`*(R11-off)` when emitting verifier logs. If a particular `argno`
is corresponding to a stack argument, print `*(R11-off)`. Otherwise,
print `R%d`. Here R11 presents the base of stack arguments.
This keeps existing logs readable for register arguments and allows the
same log sites to handle future stack arguments without open-coding
special cases.
Update selftests accordingly.
Acked-by: Puranjay Mohan <puranjay@kernel.org>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
include/linux/bpf_verifier.h | 1 +
kernel/bpf/verifier.c | 640 ++++++++++--------
.../testing/selftests/bpf/prog_tests/bpf_nf.c | 22 +-
.../selftests/bpf/prog_tests/cb_refs.c | 2 +-
.../selftests/bpf/prog_tests/kfunc_call.c | 2 +-
.../selftests/bpf/prog_tests/linked_list.c | 4 +-
.../selftests/bpf/progs/cgrp_kfunc_failure.c | 14 +-
.../selftests/bpf/progs/cpumask_failure.c | 10 +-
.../testing/selftests/bpf/progs/dynptr_fail.c | 22 +-
.../selftests/bpf/progs/file_reader_fail.c | 4 +-
tools/testing/selftests/bpf/progs/irq.c | 4 +-
tools/testing/selftests/bpf/progs/iters.c | 6 +-
.../selftests/bpf/progs/iters_state_safety.c | 14 +-
.../selftests/bpf/progs/iters_testmod.c | 4 +-
.../selftests/bpf/progs/iters_testmod_seq.c | 4 +-
.../selftests/bpf/progs/map_kptr_fail.c | 2 +-
.../selftests/bpf/progs/percpu_alloc_fail.c | 4 +-
.../testing/selftests/bpf/progs/rbtree_fail.c | 6 +-
.../bpf/progs/refcounted_kptr_fail.c | 2 +-
.../testing/selftests/bpf/progs/stream_fail.c | 2 +-
.../selftests/bpf/progs/task_kfunc_failure.c | 18 +-
.../selftests/bpf/progs/task_work_fail.c | 6 +-
.../selftests/bpf/progs/test_bpf_nf_fail.c | 8 +-
.../bpf/progs/test_kfunc_dynptr_param.c | 2 +-
.../bpf/progs/test_kfunc_param_nullable.c | 2 +-
.../selftests/bpf/progs/verifier_bits_iter.c | 4 +-
.../bpf/progs/verifier_ref_tracking.c | 6 +-
.../selftests/bpf/progs/verifier_vfs_reject.c | 8 +-
.../testing/selftests/bpf/progs/wq_failures.c | 2 +-
tools/testing/selftests/bpf/verifier/calls.c | 14 +-
30 files changed, 464 insertions(+), 375 deletions(-)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index b148f816f25b..d5b4303315dd 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -913,6 +913,7 @@ struct bpf_verifier_env {
* e.g., in reg_type_str() to generate reg_type string
*/
char tmp_str_buf[TMP_STR_BUF_LEN];
+ char tmp_arg_name[32];
struct bpf_insn insn_buf[INSN_BUF_SIZE];
struct bpf_insn epilogue_buf[INSN_BUF_SIZE];
struct bpf_scc_callchain callchain_buf;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 18ab92581452..82568a427211 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1742,6 +1742,44 @@ static struct bpf_verifier_state *push_stack(struct bpf_verifier_env *env,
return &elem->st;
}
+#define STACK_ARGNO_BASE 100
+
+static bool is_stack_argno(int argno)
+{
+ return argno > STACK_ARGNO_BASE;
+}
+
+/* arg starts at 1 */
+static u32 make_argno(u32 arg)
+{
+ if (arg <= MAX_BPF_FUNC_REG_ARGS)
+ return arg;
+ return STACK_ARGNO_BASE + arg;
+}
+
+static u32 arg_from_argno(int argno)
+{
+ if (is_stack_argno(argno))
+ return argno - STACK_ARGNO_BASE;
+ return argno;
+}
+
+static const char *reg_arg_name(struct bpf_verifier_env *env, int argno)
+{
+ char *buf = env->tmp_arg_name;
+ int len = sizeof(env->tmp_arg_name);
+ u32 arg;
+
+ if (!is_stack_argno(argno)) {
+ snprintf(buf, len, "R%d", argno);
+ return buf;
+ }
+
+ arg = arg_from_argno(argno);
+ snprintf(buf, len, "*(R11-%u)", (arg - MAX_BPF_FUNC_REG_ARGS) * BPF_REG_SIZE);
+ return buf;
+}
+
static const int caller_saved[CALLER_SAVED_REGS] = {
BPF_REG_0, BPF_REG_1, BPF_REG_2, BPF_REG_3, BPF_REG_4, BPF_REG_5
};
@@ -4241,7 +4279,7 @@ enum bpf_access_src {
};
static int check_stack_range_initialized(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
- int regno, int off, int access_size,
+ int argno, int off, int access_size,
bool zero_size_allowed,
enum bpf_access_type type,
struct bpf_call_arg_meta *meta);
@@ -4265,7 +4303,7 @@ static struct bpf_reg_state *reg_state(struct bpf_verifier_env *env, int regno)
* instead.
*/
static int check_stack_read_var_off(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
- int ptr_regno, int off, int size, int dst_regno)
+ int ptr_argno, int off, int size, int dst_regno)
{
struct bpf_func_state *ptr_state = bpf_func(env, reg);
int err;
@@ -4273,7 +4311,7 @@ static int check_stack_read_var_off(struct bpf_verifier_env *env, struct bpf_reg
/* Note that we pass a NULL meta, so raw access will not be permitted.
*/
- err = check_stack_range_initialized(env, reg, ptr_regno, off, size,
+ err = check_stack_range_initialized(env, reg, ptr_argno, off, size,
false, BPF_READ, NULL);
if (err)
return err;
@@ -4295,7 +4333,7 @@ static int check_stack_read_var_off(struct bpf_verifier_env *env, struct bpf_reg
* can be -1, meaning that the read value is not going to a register.
*/
static int check_stack_read(struct bpf_verifier_env *env,
- struct bpf_reg_state *reg, int ptr_regno, int off, int size,
+ struct bpf_reg_state *reg, int ptr_argno, int off, int size,
int dst_regno)
{
struct bpf_func_state *state = bpf_func(env, reg);
@@ -4333,7 +4371,7 @@ static int check_stack_read(struct bpf_verifier_env *env,
* than fixed offset ones. Note that dst_regno >= 0 on this
* branch.
*/
- err = check_stack_read_var_off(env, reg, ptr_regno, off, size,
+ err = check_stack_read_var_off(env, reg, ptr_argno, off, size,
dst_regno);
}
return err;
@@ -4371,7 +4409,7 @@ static int check_stack_write(struct bpf_verifier_env *env,
return err;
}
-static int check_map_access_type(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
+static int check_map_access_type(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 argno,
int off, int size, enum bpf_access_type type)
{
struct bpf_map *map = reg->map_ptr;
@@ -4393,7 +4431,7 @@ static int check_map_access_type(struct bpf_verifier_env *env, struct bpf_reg_st
}
/* check read/write into memory region (e.g., map value, ringbuf sample, etc) */
-static int __check_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
+static int __check_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno,
int off, int size, u32 mem_size,
bool zero_size_allowed)
{
@@ -4414,8 +4452,8 @@ static int __check_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state
case PTR_TO_PACKET:
case PTR_TO_PACKET_META:
case PTR_TO_PACKET_END:
- verbose(env, "invalid access to packet, off=%d size=%d, R%d(id=%d,off=%d,r=%d)\n",
- off, size, regno, reg->id, off, mem_size);
+ verbose(env, "invalid access to packet, off=%d size=%d, %s(id=%d,off=%d,r=%d)\n",
+ off, size, reg_arg_name(env, argno), reg->id, off, mem_size);
break;
case PTR_TO_CTX:
verbose(env, "invalid access to context, ctx_size=%d off=%d size=%d\n",
@@ -4431,7 +4469,7 @@ static int __check_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state
}
/* check read/write into a memory region with possible variable offset */
-static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
+static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 argno,
int off, int size, u32 mem_size,
bool zero_size_allowed)
{
@@ -4451,15 +4489,15 @@ static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_
(reg->smin_value == S64_MIN ||
(off + reg->smin_value != (s64)(s32)(off + reg->smin_value)) ||
reg->smin_value + off < 0)) {
- verbose(env, "R%d min value is negative, either use unsigned index or do a if (index >=0) check.\n",
- regno);
+ verbose(env, "%s min value is negative, either use unsigned index or do a if (index >=0) check.\n",
+ reg_arg_name(env, argno));
return -EACCES;
}
- err = __check_mem_access(env, reg, regno, reg->smin_value + off, size,
+ err = __check_mem_access(env, reg, argno, reg->smin_value + off, size,
mem_size, zero_size_allowed);
if (err) {
- verbose(env, "R%d min value is outside of the allowed memory range\n",
- regno);
+ verbose(env, "%s min value is outside of the allowed memory range\n",
+ reg_arg_name(env, argno));
return err;
}
@@ -4468,15 +4506,15 @@ static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_
* If reg->umax_value + off could overflow, treat that as unbounded too.
*/
if (reg->umax_value >= BPF_MAX_VAR_OFF) {
- verbose(env, "R%d unbounded memory access, make sure to bounds check any such access\n",
- regno);
+ verbose(env, "%s unbounded memory access, make sure to bounds check any such access\n",
+ reg_arg_name(env, argno));
return -EACCES;
}
- err = __check_mem_access(env, reg, regno, reg->umax_value + off, size,
+ err = __check_mem_access(env, reg, argno, reg->umax_value + off, size,
mem_size, zero_size_allowed);
if (err) {
- verbose(env, "R%d max value is outside of the allowed memory range\n",
- regno);
+ verbose(env, "%s max value is outside of the allowed memory range\n",
+ reg_arg_name(env, argno));
return err;
}
@@ -4484,7 +4522,7 @@ static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_
}
static int __check_ptr_off_reg(struct bpf_verifier_env *env,
- const struct bpf_reg_state *reg, int regno,
+ const struct bpf_reg_state *reg, u32 argno,
bool fixed_off_ok)
{
/* Access to this pointer-typed register or passing it to a helper
@@ -4501,14 +4539,14 @@ static int __check_ptr_off_reg(struct bpf_verifier_env *env,
}
if (reg->smin_value < 0) {
- verbose(env, "negative offset %s ptr R%d off=%lld disallowed\n",
- reg_type_str(env, reg->type), regno, reg->var_off.value);
+ verbose(env, "negative offset %s ptr %s off=%lld disallowed\n",
+ reg_type_str(env, reg->type), reg_arg_name(env, argno), reg->var_off.value);
return -EACCES;
}
if (!fixed_off_ok && reg->var_off.value != 0) {
- verbose(env, "dereference of modified %s ptr R%d off=%lld disallowed\n",
- reg_type_str(env, reg->type), regno, reg->var_off.value);
+ verbose(env, "dereference of modified %s ptr %s off=%lld disallowed\n",
+ reg_type_str(env, reg->type), reg_arg_name(env, argno), reg->var_off.value);
return -EACCES;
}
@@ -4881,17 +4919,17 @@ static bool may_access_direct_pkt_data(struct bpf_verifier_env *env,
}
}
-static int check_packet_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno, int off,
+static int check_packet_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 argno, int off,
int size, bool zero_size_allowed)
{
int err;
if (reg->range < 0) {
- verbose(env, "R%d offset is outside of the packet\n", regno);
+ verbose(env, "%s offset is outside of the packet\n", reg_arg_name(env, argno));
return -EINVAL;
}
- err = check_mem_region_access(env, reg, regno, off, size, reg->range, zero_size_allowed);
+ err = check_mem_region_access(env, reg, argno, off, size, reg->range, zero_size_allowed);
if (err)
return err;
@@ -4946,7 +4984,7 @@ static int __check_ctx_access(struct bpf_verifier_env *env, int insn_idx, int of
return -EACCES;
}
-static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, u32 regno,
+static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, u32 argno,
int off, int access_size, enum bpf_access_type t,
struct bpf_insn_access_aux *info)
{
@@ -4959,9 +4997,9 @@ static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, struct b
int err;
if (var_off_ok)
- err = check_mem_region_access(env, reg, regno, off, access_size, U16_MAX, false);
+ err = check_mem_region_access(env, reg, argno, off, access_size, U16_MAX, false);
else
- err = __check_ptr_off_reg(env, reg, regno, fixed_off_ok);
+ err = __check_ptr_off_reg(env, reg, argno, fixed_off_ok);
if (err)
return err;
off += reg->umax_value;
@@ -4985,15 +5023,15 @@ static int check_flow_keys_access(struct bpf_verifier_env *env, int off,
}
static int check_sock_access(struct bpf_verifier_env *env, int insn_idx,
- struct bpf_reg_state *reg, u32 regno, int off, int size,
+ struct bpf_reg_state *reg, u32 argno, int off, int size,
enum bpf_access_type t)
{
struct bpf_insn_access_aux info = {};
bool valid;
if (reg->smin_value < 0) {
- verbose(env, "R%d min value is negative, either use unsigned index or do a if (index >=0) check.\n",
- regno);
+ verbose(env, "%s min value is negative, either use unsigned index or do a if (index >=0) check.\n",
+ reg_arg_name(env, argno));
return -EACCES;
}
@@ -5021,8 +5059,8 @@ static int check_sock_access(struct bpf_verifier_env *env, int insn_idx,
return 0;
}
- verbose(env, "R%d invalid %s access off=%d size=%d\n",
- regno, reg_type_str(env, reg->type), off, size);
+ verbose(env, "%s invalid %s access off=%d size=%d\n",
+ reg_arg_name(env, argno), reg_type_str(env, reg->type), off, size);
return -EACCES;
}
@@ -5535,12 +5573,12 @@ static int check_max_stack_depth(struct bpf_verifier_env *env)
static int __check_buffer_access(struct bpf_verifier_env *env,
const char *buf_info,
const struct bpf_reg_state *reg,
- int regno, int off, int size)
+ int argno, int off, int size)
{
if (off < 0) {
verbose(env,
- "R%d invalid %s buffer access: off=%d, size=%d\n",
- regno, buf_info, off, size);
+ "%s invalid %s buffer access: off=%d, size=%d\n",
+ reg_arg_name(env, argno), buf_info, off, size);
return -EACCES;
}
if (!tnum_is_const(reg->var_off)) {
@@ -5548,8 +5586,8 @@ static int __check_buffer_access(struct bpf_verifier_env *env,
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
verbose(env,
- "R%d invalid variable buffer offset: off=%d, var_off=%s\n",
- regno, off, tn_buf);
+ "%s invalid variable buffer offset: off=%d, var_off=%s\n",
+ reg_arg_name(env, argno), off, tn_buf);
return -EACCES;
}
@@ -5558,11 +5596,11 @@ static int __check_buffer_access(struct bpf_verifier_env *env,
static int check_tp_buffer_access(struct bpf_verifier_env *env,
const struct bpf_reg_state *reg,
- int regno, int off, int size)
+ int argno, int off, int size)
{
int err;
- err = __check_buffer_access(env, "tracepoint", reg, regno, off, size);
+ err = __check_buffer_access(env, "tracepoint", reg, argno, off, size);
if (err)
return err;
@@ -5574,14 +5612,14 @@ static int check_tp_buffer_access(struct bpf_verifier_env *env,
static int check_buffer_access(struct bpf_verifier_env *env,
const struct bpf_reg_state *reg,
- int regno, int off, int size,
+ int argno, int off, int size,
bool zero_size_allowed,
u32 *max_access)
{
const char *buf_info = type_is_rdonly_mem(reg->type) ? "rdonly" : "rdwr";
int err;
- err = __check_buffer_access(env, buf_info, reg, regno, off, size);
+ err = __check_buffer_access(env, buf_info, reg, argno, off, size);
if (err)
return err;
@@ -5954,7 +5992,7 @@ static bool type_is_trusted_or_null(struct bpf_verifier_env *env,
static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
struct bpf_reg_state *regs, struct bpf_reg_state *reg,
- int regno, int off, int size,
+ int argno, int off, int size,
enum bpf_access_type atype,
int value_regno)
{
@@ -5983,8 +6021,8 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
verbose(env,
- "R%d is ptr_%s invalid variable offset: off=%d, var_off=%s\n",
- regno, tname, off, tn_buf);
+ "%s is ptr_%s invalid variable offset: off=%d, var_off=%s\n",
+ reg_arg_name(env, argno), tname, off, tn_buf);
return -EACCES;
}
@@ -5992,22 +6030,22 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
if (off < 0) {
verbose(env,
- "R%d is ptr_%s invalid negative access: off=%d\n",
- regno, tname, off);
+ "%s is ptr_%s invalid negative access: off=%d\n",
+ reg_arg_name(env, argno), tname, off);
return -EACCES;
}
if (reg->type & MEM_USER) {
verbose(env,
- "R%d is ptr_%s access user memory: off=%d\n",
- regno, tname, off);
+ "%s is ptr_%s access user memory: off=%d\n",
+ reg_arg_name(env, argno), tname, off);
return -EACCES;
}
if (reg->type & MEM_PERCPU) {
verbose(env,
- "R%d is ptr_%s access percpu memory: off=%d\n",
- regno, tname, off);
+ "%s is ptr_%s access percpu memory: off=%d\n",
+ reg_arg_name(env, argno), tname, off);
return -EACCES;
}
@@ -6110,7 +6148,7 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
static int check_ptr_to_map_access(struct bpf_verifier_env *env,
struct bpf_reg_state *regs, struct bpf_reg_state *reg,
- int regno, int off, int size,
+ int argno, int off, int size,
enum bpf_access_type atype,
int value_regno)
{
@@ -6144,8 +6182,8 @@ static int check_ptr_to_map_access(struct bpf_verifier_env *env,
}
if (off < 0) {
- verbose(env, "R%d is %s invalid negative access: off=%d\n",
- regno, tname, off);
+ verbose(env, "%s is %s invalid negative access: off=%d\n",
+ reg_arg_name(env, argno), tname, off);
return -EACCES;
}
@@ -6203,7 +6241,7 @@ static int check_stack_slot_within_bounds(struct bpf_verifier_env *env,
*/
static int check_stack_access_within_bounds(
struct bpf_verifier_env *env, struct bpf_reg_state *reg,
- int regno, int off, int access_size,
+ int argno, int off, int access_size,
enum bpf_access_type type)
{
struct bpf_func_state *state = bpf_func(env, reg);
@@ -6222,8 +6260,8 @@ static int check_stack_access_within_bounds(
} else {
if (reg->smax_value >= BPF_MAX_VAR_OFF ||
reg->smin_value <= -BPF_MAX_VAR_OFF) {
- verbose(env, "invalid unbounded variable-offset%s stack R%d\n",
- err_extra, regno);
+ verbose(env, "invalid unbounded variable-offset%s stack %s\n",
+ err_extra, reg_arg_name(env, argno));
return -EACCES;
}
min_off = reg->smin_value + off;
@@ -6241,14 +6279,14 @@ static int check_stack_access_within_bounds(
if (err) {
if (tnum_is_const(reg->var_off)) {
- verbose(env, "invalid%s stack R%d off=%lld size=%d\n",
- err_extra, regno, min_off, access_size);
+ verbose(env, "invalid%s stack %s off=%lld size=%d\n",
+ err_extra, reg_arg_name(env, argno), min_off, access_size);
} else {
char tn_buf[48];
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
- verbose(env, "invalid variable-offset%s stack R%d var_off=%s off=%d size=%d\n",
- err_extra, regno, tn_buf, off, access_size);
+ verbose(env, "invalid variable-offset%s stack %s var_off=%s off=%d size=%d\n",
+ err_extra, reg_arg_name(env, argno), tn_buf, off, access_size);
}
return err;
}
@@ -6293,7 +6331,7 @@ static void add_scalar_to_reg(struct bpf_reg_state *dst_reg, s64 val)
* if t==write && value_regno==-1, some unknown value is stored into memory
* if t==read && value_regno==-1, don't care what we read from memory
*/
-static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, u32 regno,
+static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, u32 argno,
int off, int bpf_size, enum bpf_access_type t,
int value_regno, bool strict_alignment_once, bool is_ldsx)
{
@@ -6310,11 +6348,12 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
if (reg->type == PTR_TO_MAP_KEY) {
if (t == BPF_WRITE) {
- verbose(env, "write to change key R%d not allowed\n", regno);
+ verbose(env, "write to change key %s not allowed\n",
+ reg_arg_name(env, argno));
return -EACCES;
}
- err = check_mem_region_access(env, reg, regno, off, size,
+ err = check_mem_region_access(env, reg, argno, off, size,
reg->map_ptr->key_size, false);
if (err)
return err;
@@ -6328,10 +6367,10 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
verbose(env, "R%d leaks addr into map\n", value_regno);
return -EACCES;
}
- err = check_map_access_type(env, reg, regno, off, size, t);
+ err = check_map_access_type(env, reg, argno, off, size, t);
if (err)
return err;
- err = check_map_access(env, reg, regno, off, size, false, ACCESS_DIRECT);
+ err = check_map_access(env, reg, argno, off, size, false, ACCESS_DIRECT);
if (err)
return err;
if (tnum_is_const(reg->var_off))
@@ -6378,14 +6417,14 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
bool rdonly_untrusted = rdonly_mem && (reg->type & PTR_UNTRUSTED);
if (type_may_be_null(reg->type)) {
- verbose(env, "R%d invalid mem access '%s'\n", regno,
+ verbose(env, "%s invalid mem access '%s'\n", reg_arg_name(env, argno),
reg_type_str(env, reg->type));
return -EACCES;
}
if (t == BPF_WRITE && rdonly_mem) {
- verbose(env, "R%d cannot write into %s\n",
- regno, reg_type_str(env, reg->type));
+ verbose(env, "%s cannot write into %s\n",
+ reg_arg_name(env, argno), reg_type_str(env, reg->type));
return -EACCES;
}
@@ -6400,7 +6439,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
* instructions, hence no need to check bounds in that case.
*/
if (!rdonly_untrusted)
- err = check_mem_region_access(env, reg, regno, off, size,
+ err = check_mem_region_access(env, reg, argno, off, size,
reg->mem_size, false);
if (!err && value_regno >= 0 && (t == BPF_READ || rdonly_mem))
mark_reg_unknown(env, regs, value_regno);
@@ -6418,7 +6457,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
return -EACCES;
}
- err = check_ctx_access(env, insn_idx, reg, regno, off, size, t, &info);
+ err = check_ctx_access(env, insn_idx, reg, argno, off, size, t, &info);
if (!err && t == BPF_READ && value_regno >= 0) {
/* ctx access returns either a scalar, or a
* PTR_TO_PACKET[_META,_END]. In the latter
@@ -6455,12 +6494,12 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
} else if (reg->type == PTR_TO_STACK) {
/* Basic bounds checks. */
- err = check_stack_access_within_bounds(env, reg, regno, off, size, t);
+ err = check_stack_access_within_bounds(env, reg, argno, off, size, t);
if (err)
return err;
if (t == BPF_READ)
- err = check_stack_read(env, reg, regno, off, size,
+ err = check_stack_read(env, reg, argno, off, size,
value_regno);
else
err = check_stack_write(env, reg, off, size,
@@ -6476,7 +6515,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
value_regno);
return -EACCES;
}
- err = check_packet_access(env, reg, regno, off, size, false);
+ err = check_packet_access(env, reg, argno, off, size, false);
if (!err && t == BPF_READ && value_regno >= 0)
mark_reg_unknown(env, regs, value_regno);
} else if (reg->type == PTR_TO_FLOW_KEYS) {
@@ -6492,23 +6531,23 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
mark_reg_unknown(env, regs, value_regno);
} else if (type_is_sk_pointer(reg->type)) {
if (t == BPF_WRITE) {
- verbose(env, "R%d cannot write into %s\n",
- regno, reg_type_str(env, reg->type));
+ verbose(env, "%s cannot write into %s\n",
+ reg_arg_name(env, argno), reg_type_str(env, reg->type));
return -EACCES;
}
- err = check_sock_access(env, insn_idx, reg, regno, off, size, t);
+ err = check_sock_access(env, insn_idx, reg, argno, off, size, t);
if (!err && value_regno >= 0)
mark_reg_unknown(env, regs, value_regno);
} else if (reg->type == PTR_TO_TP_BUFFER) {
- err = check_tp_buffer_access(env, reg, regno, off, size);
+ err = check_tp_buffer_access(env, reg, argno, off, size);
if (!err && t == BPF_READ && value_regno >= 0)
mark_reg_unknown(env, regs, value_regno);
} else if (base_type(reg->type) == PTR_TO_BTF_ID &&
!type_may_be_null(reg->type)) {
- err = check_ptr_to_btf_access(env, regs, reg, regno, off, size, t,
+ err = check_ptr_to_btf_access(env, regs, reg, argno, off, size, t,
value_regno);
} else if (reg->type == CONST_PTR_TO_MAP) {
- err = check_ptr_to_map_access(env, regs, reg, regno, off, size, t,
+ err = check_ptr_to_map_access(env, regs, reg, argno, off, size, t,
value_regno);
} else if (base_type(reg->type) == PTR_TO_BUF &&
!type_may_be_null(reg->type)) {
@@ -6517,8 +6556,8 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
if (rdonly_mem) {
if (t == BPF_WRITE) {
- verbose(env, "R%d cannot write into %s\n",
- regno, reg_type_str(env, reg->type));
+ verbose(env, "%s cannot write into %s\n",
+ reg_arg_name(env, argno), reg_type_str(env, reg->type));
return -EACCES;
}
max_access = &env->prog->aux->max_rdonly_access;
@@ -6526,7 +6565,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
max_access = &env->prog->aux->max_rdwr_access;
}
- err = check_buffer_access(env, reg, regno, off, size, false,
+ err = check_buffer_access(env, reg, argno, off, size, false,
max_access);
if (!err && value_regno >= 0 && (rdonly_mem || t == BPF_READ))
@@ -6535,7 +6574,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
if (t == BPF_READ && value_regno >= 0)
mark_reg_unknown(env, regs, value_regno);
} else {
- verbose(env, "R%d invalid mem access '%s'\n", regno,
+ verbose(env, "%s invalid mem access '%s'\n", reg_arg_name(env, argno),
reg_type_str(env, reg->type));
return -EACCES;
}
@@ -6789,7 +6828,7 @@ static int check_atomic(struct bpf_verifier_env *env, struct bpf_insn *insn)
* read offsets are marked as read.
*/
static int check_stack_range_initialized(
- struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int off,
+ struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno, int off,
int access_size, bool zero_size_allowed,
enum bpf_access_type type, struct bpf_call_arg_meta *meta)
{
@@ -6814,7 +6853,7 @@ static int check_stack_range_initialized(
return -EACCES;
}
- err = check_stack_access_within_bounds(env, reg, regno, off, access_size, type);
+ err = check_stack_access_within_bounds(env, reg, argno, off, access_size, type);
if (err)
return err;
@@ -6831,8 +6870,8 @@ static int check_stack_range_initialized(
char tn_buf[48];
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
- verbose(env, "R%d variable offset stack access prohibited for !root, var_off=%s\n",
- regno, tn_buf);
+ verbose(env, "%s variable offset stack access prohibited for !root, var_off=%s\n",
+ reg_arg_name(env, argno), tn_buf);
return -EACCES;
}
/* Only initialized buffer on stack is allowed to be accessed
@@ -6875,7 +6914,7 @@ static int check_stack_range_initialized(
}
}
meta->access_size = access_size;
- meta->regno = regno;
+ meta->regno = argno;
return 0;
}
@@ -6915,17 +6954,17 @@ static int check_stack_range_initialized(
if (*stype == STACK_POISON) {
if (allow_poison)
goto mark;
- verbose(env, "reading from stack R%d off %d+%d size %d, slot poisoned by dead code elimination\n",
- regno, min_off, i - min_off, access_size);
+ verbose(env, "reading from stack %s off %d+%d size %d, slot poisoned by dead code elimination\n",
+ reg_arg_name(env, argno), min_off, i - min_off, access_size);
} else if (tnum_is_const(reg->var_off)) {
- verbose(env, "invalid read from stack R%d off %d+%d size %d\n",
- regno, min_off, i - min_off, access_size);
+ verbose(env, "invalid read from stack %s off %d+%d size %d\n",
+ reg_arg_name(env, argno), min_off, i - min_off, access_size);
} else {
char tn_buf[48];
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
- verbose(env, "invalid read from stack R%d var_off %s+%d size %d\n",
- regno, tn_buf, i - min_off, access_size);
+ verbose(env, "invalid read from stack %s var_off %s+%d size %d\n",
+ reg_arg_name(env, argno), tn_buf, i - min_off, access_size);
}
return -EACCES;
mark:
@@ -6934,7 +6973,7 @@ static int check_stack_range_initialized(
return 0;
}
-static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
+static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno,
int access_size, enum bpf_access_type access_type,
bool zero_size_allowed,
struct bpf_call_arg_meta *meta)
@@ -6945,37 +6984,37 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_
switch (base_type(reg->type)) {
case PTR_TO_PACKET:
case PTR_TO_PACKET_META:
- return check_packet_access(env, reg, regno, 0, access_size,
+ return check_packet_access(env, reg, argno, 0, access_size,
zero_size_allowed);
case PTR_TO_MAP_KEY:
if (access_type == BPF_WRITE) {
- verbose(env, "R%d cannot write into %s\n", regno,
- reg_type_str(env, reg->type));
+ verbose(env, "%s cannot write into %s\n",
+ reg_arg_name(env, argno), reg_type_str(env, reg->type));
return -EACCES;
}
- return check_mem_region_access(env, reg, regno, 0, access_size,
+ return check_mem_region_access(env, reg, argno, 0, access_size,
reg->map_ptr->key_size, false);
case PTR_TO_MAP_VALUE:
- if (check_map_access_type(env, reg, regno, 0, access_size, access_type))
+ if (check_map_access_type(env, reg, argno, 0, access_size, access_type))
return -EACCES;
- return check_map_access(env, reg, regno, 0, access_size,
+ return check_map_access(env, reg, argno, 0, access_size,
zero_size_allowed, ACCESS_HELPER);
case PTR_TO_MEM:
if (type_is_rdonly_mem(reg->type)) {
if (access_type == BPF_WRITE) {
- verbose(env, "R%d cannot write into %s\n", regno,
- reg_type_str(env, reg->type));
+ verbose(env, "%s cannot write into %s\n",
+ reg_arg_name(env, argno), reg_type_str(env, reg->type));
return -EACCES;
}
}
- return check_mem_region_access(env, reg, regno, 0,
+ return check_mem_region_access(env, reg, argno, 0,
access_size, reg->mem_size,
zero_size_allowed);
case PTR_TO_BUF:
if (type_is_rdonly_mem(reg->type)) {
if (access_type == BPF_WRITE) {
- verbose(env, "R%d cannot write into %s\n", regno,
- reg_type_str(env, reg->type));
+ verbose(env, "%s cannot write into %s\n",
+ reg_arg_name(env, argno), reg_type_str(env, reg->type));
return -EACCES;
}
@@ -6983,21 +7022,21 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_
} else {
max_access = &env->prog->aux->max_rdwr_access;
}
- return check_buffer_access(env, reg, regno, 0,
+ return check_buffer_access(env, reg, argno, 0,
access_size, zero_size_allowed,
max_access);
case PTR_TO_STACK:
return check_stack_range_initialized(
env, reg,
- regno, 0, access_size,
+ argno, 0, access_size,
zero_size_allowed, access_type, meta);
case PTR_TO_BTF_ID:
- return check_ptr_to_btf_access(env, regs, reg, regno, 0,
+ return check_ptr_to_btf_access(env, regs, reg, argno, 0,
access_size, BPF_READ, -1);
case PTR_TO_CTX:
/* Only permit reading or writing syscall context using helper calls. */
if (is_var_ctx_off_allowed(env->prog)) {
- int err = check_mem_region_access(env, reg, regno, 0, access_size, U16_MAX,
+ int err = check_mem_region_access(env, reg, argno, 0, access_size, U16_MAX,
zero_size_allowed);
if (err)
return err;
@@ -7012,7 +7051,7 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_
bpf_register_is_null(reg))
return 0;
- verbose(env, "R%d type=%s ", regno,
+ verbose(env, "%s type=%s ", reg_arg_name(env, argno),
reg_type_str(env, reg->type));
verbose(env, "expected=%s\n", reg_type_str(env, PTR_TO_STACK));
return -EACCES;
@@ -7026,8 +7065,8 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_
*/
static int check_mem_size_reg(struct bpf_verifier_env *env,
struct bpf_reg_state *mem_reg,
- struct bpf_reg_state *size_reg, u32 mem_regno,
- u32 size_regno, enum bpf_access_type access_type,
+ struct bpf_reg_state *size_reg, u32 mem_argno,
+ u32 size_argno, enum bpf_access_type access_type,
bool zero_size_allowed,
struct bpf_call_arg_meta *meta)
{
@@ -7052,31 +7091,31 @@ static int check_mem_size_reg(struct bpf_verifier_env *env,
meta = NULL;
if (size_reg->smin_value < 0) {
- verbose(env, "R%d min value is negative, either use unsigned or 'var &= const'\n",
- size_regno);
+ verbose(env, "%s min value is negative, either use unsigned or 'var &= const'\n",
+ reg_arg_name(env, size_argno));
return -EACCES;
}
if (size_reg->umin_value == 0 && !zero_size_allowed) {
- verbose(env, "R%d invalid zero-sized read: u64=[%lld,%lld]\n",
- size_regno, size_reg->umin_value, size_reg->umax_value);
+ verbose(env, "%s invalid zero-sized read: u64=[%lld,%lld]\n",
+ reg_arg_name(env, size_argno), size_reg->umin_value, size_reg->umax_value);
return -EACCES;
}
if (size_reg->umax_value >= BPF_MAX_VAR_SIZ) {
- verbose(env, "R%d unbounded memory access, use 'var &= const' or 'if (var < const)'\n",
- size_regno);
+ verbose(env, "%s unbounded memory access, use 'var &= const' or 'if (var < const)'\n",
+ reg_arg_name(env, size_argno));
return -EACCES;
}
- err = check_helper_mem_access(env, mem_reg, mem_regno, size_reg->umax_value,
+ err = check_helper_mem_access(env, mem_reg, mem_argno, size_reg->umax_value,
access_type, zero_size_allowed, meta);
if (!err)
- err = mark_chain_precision(env, size_regno);
+ err = mark_chain_precision(env, size_argno);
return err;
}
static int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
- u32 regno, u32 mem_size)
+ u32 argno, u32 mem_size)
{
bool may_be_null = type_may_be_null(reg->type);
struct bpf_reg_state saved_reg;
@@ -7096,8 +7135,8 @@ static int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg
int size = base_type(reg->type) == PTR_TO_STACK ? -(int)mem_size : mem_size;
- err = check_helper_mem_access(env, reg, regno, size, BPF_READ, true, NULL);
- err = err ?: check_helper_mem_access(env, reg, regno, size, BPF_WRITE, true, NULL);
+ err = check_helper_mem_access(env, reg, argno, size, BPF_READ, true, NULL);
+ err = err ?: check_helper_mem_access(env, reg, argno, size, BPF_WRITE, true, NULL);
if (may_be_null)
*reg = saved_reg;
@@ -7106,7 +7145,7 @@ static int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg
}
static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg_state *mem_reg,
- struct bpf_reg_state *size_reg, u32 mem_regno, u32 size_regno)
+ struct bpf_reg_state *size_reg, u32 mem_argno, u32 size_argno)
{
bool may_be_null = type_may_be_null(mem_reg->type);
struct bpf_reg_state saved_reg;
@@ -7120,8 +7159,8 @@ static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg
mark_ptr_not_null_reg(mem_reg);
}
- err = check_mem_size_reg(env, mem_reg, size_reg, mem_regno, size_regno, BPF_READ, true, &meta);
- err = err ?: check_mem_size_reg(env, mem_reg, size_reg, mem_regno, size_regno, BPF_WRITE, true, &meta);
+ err = check_mem_size_reg(env, mem_reg, size_reg, mem_argno, size_argno, BPF_READ, true, &meta);
+ err = err ?: check_mem_size_reg(env, mem_reg, size_reg, mem_argno, size_argno, BPF_WRITE, true, &meta);
if (may_be_null)
*mem_reg = saved_reg;
@@ -7157,7 +7196,7 @@ enum {
* env->cur_state->active_locks remembers which map value element or allocated
* object got locked and clears it after bpf_spin_unlock.
*/
-static int process_spin_lock(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int flags)
+static int process_spin_lock(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno, int flags)
{
bool is_lock = flags & PROCESS_SPIN_LOCK, is_res_lock = flags & PROCESS_RES_LOCK;
const char *lock_str = is_res_lock ? "bpf_res_spin" : "bpf_spin";
@@ -7173,8 +7212,8 @@ static int process_spin_lock(struct bpf_verifier_env *env, struct bpf_reg_state
if (!is_const) {
verbose(env,
- "R%d doesn't have constant offset. %s_lock has to be at the constant offset\n",
- regno, lock_str);
+ "%s doesn't have constant offset. %s_lock has to be at the constant offset\n",
+ reg_arg_name(env, argno), lock_str);
return -EINVAL;
}
if (reg->type == PTR_TO_MAP_VALUE) {
@@ -7273,7 +7312,7 @@ static int process_spin_lock(struct bpf_verifier_env *env, struct bpf_reg_state
}
/* Check if @regno is a pointer to a specific field in a map value */
-static int check_map_field_pointer(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
+static int check_map_field_pointer(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 argno,
enum btf_field_type field_type,
struct bpf_map_desc *map_desc)
{
@@ -7285,8 +7324,8 @@ static int check_map_field_pointer(struct bpf_verifier_env *env, struct bpf_reg_
if (!is_const) {
verbose(env,
- "R%d doesn't have constant offset. %s has to be at the constant offset\n",
- regno, struct_name);
+ "%s doesn't have constant offset. %s has to be at the constant offset\n",
+ reg_arg_name(env, argno), struct_name);
return -EINVAL;
}
if (!map->btf) {
@@ -7326,26 +7365,26 @@ static int check_map_field_pointer(struct bpf_verifier_env *env, struct bpf_reg_
return 0;
}
-static int process_timer_func(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
+static int process_timer_func(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno,
struct bpf_map_desc *map)
{
if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
verbose(env, "bpf_timer cannot be used for PREEMPT_RT.\n");
return -EOPNOTSUPP;
}
- return check_map_field_pointer(env, reg, regno, BPF_TIMER, map);
+ return check_map_field_pointer(env, reg, argno, BPF_TIMER, map);
}
-static int process_timer_helper(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
+static int process_timer_helper(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno,
struct bpf_call_arg_meta *meta)
{
- return process_timer_func(env, reg, regno, &meta->map);
+ return process_timer_func(env, reg, argno, &meta->map);
}
-static int process_timer_kfunc(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
+static int process_timer_kfunc(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno,
struct bpf_kfunc_call_arg_meta *meta)
{
- return process_timer_func(env, reg, regno, &meta->map);
+ return process_timer_func(env, reg, argno, &meta->map);
}
static int process_kptr_func(struct bpf_verifier_env *env, int regno,
@@ -7410,15 +7449,15 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
* use case. The second level is tracked using the upper bit of bpf_dynptr->size
* and checked dynamically during runtime.
*/
-static int process_dynptr_func(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int insn_idx,
+static int process_dynptr_func(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno, int insn_idx,
enum bpf_arg_type arg_type, int clone_ref_obj_id)
{
int err;
if (reg->type != PTR_TO_STACK && reg->type != CONST_PTR_TO_DYNPTR) {
verbose(env,
- "arg#%d expected pointer to stack or const struct bpf_dynptr\n",
- regno - 1);
+ "%s expected pointer to stack or const struct bpf_dynptr\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
@@ -7446,7 +7485,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, struct bpf_reg_stat
/* we write BPF_DW bits (8 bytes) at a time */
for (i = 0; i < BPF_DYNPTR_SIZE; i += 8) {
- err = check_mem_access(env, insn_idx, reg, regno,
+ err = check_mem_access(env, insn_idx, reg, argno,
i, BPF_DW, BPF_WRITE, -1, false, false);
if (err)
return err;
@@ -7461,17 +7500,17 @@ static int process_dynptr_func(struct bpf_verifier_env *env, struct bpf_reg_stat
}
if (!is_dynptr_reg_valid_init(env, reg)) {
- verbose(env,
- "Expected an initialized dynptr as arg #%d\n",
- regno - 1);
+ verbose(env, "Expected an initialized dynptr as %s\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
/* Fold modifiers (in this case, OBJ_RELEASE) when checking expected type */
if (!is_dynptr_type_expected(env, reg, arg_type & ~OBJ_RELEASE)) {
verbose(env,
- "Expected a dynptr of type %s as arg #%d\n",
- dynptr_type_str(arg_to_dynptr_type(arg_type)), regno - 1);
+ "Expected a dynptr of type %s as %s\n",
+ dynptr_type_str(arg_to_dynptr_type(arg_type)),
+ reg_arg_name(env, argno));
return -EINVAL;
}
@@ -7516,14 +7555,16 @@ static bool is_kfunc_arg_iter(struct bpf_kfunc_call_arg_meta *meta, int arg_idx,
return btf_param_match_suffix(meta->btf, arg, "__iter");
}
-static int process_iter_arg(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int insn_idx,
+static int process_iter_arg(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno, int insn_idx,
struct bpf_kfunc_call_arg_meta *meta)
{
const struct btf_type *t;
+ u32 arg_idx = arg_from_argno(argno) - 1;
int spi, err, i, nr_slots, btf_id;
if (reg->type != PTR_TO_STACK) {
- verbose(env, "arg#%d expected pointer to an iterator on stack\n", regno - 1);
+ verbose(env, "%s expected pointer to an iterator on stack\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
@@ -7533,9 +7574,10 @@ static int process_iter_arg(struct bpf_verifier_env *env, struct bpf_reg_state *
* to any kfunc, if arg has "__iter" suffix, we need to be a bit more
* conservative here.
*/
- btf_id = btf_check_iter_arg(meta->btf, meta->func_proto, regno - 1);
+ btf_id = btf_check_iter_arg(meta->btf, meta->func_proto, arg_idx);
if (btf_id < 0) {
- verbose(env, "expected valid iter pointer as arg #%d\n", regno - 1);
+ verbose(env, "expected valid iter pointer as %s\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
t = btf_type_by_id(meta->btf, btf_id);
@@ -7544,13 +7586,13 @@ static int process_iter_arg(struct bpf_verifier_env *env, struct bpf_reg_state *
if (is_iter_new_kfunc(meta)) {
/* bpf_iter_<type>_new() expects pointer to uninit iter state */
if (!is_iter_reg_valid_uninit(env, reg, nr_slots)) {
- verbose(env, "expected uninitialized iter_%s as arg #%d\n",
- iter_type_str(meta->btf, btf_id), regno - 1);
+ verbose(env, "expected uninitialized iter_%s as %s\n",
+ iter_type_str(meta->btf, btf_id), reg_arg_name(env, argno));
return -EINVAL;
}
for (i = 0; i < nr_slots * 8; i += BPF_REG_SIZE) {
- err = check_mem_access(env, insn_idx, reg, regno,
+ err = check_mem_access(env, insn_idx, reg, argno,
i, BPF_DW, BPF_WRITE, -1, false, false);
if (err)
return err;
@@ -7568,8 +7610,8 @@ static int process_iter_arg(struct bpf_verifier_env *env, struct bpf_reg_state *
case 0:
break;
case -EINVAL:
- verbose(env, "expected an initialized iter_%s as arg #%d\n",
- iter_type_str(meta->btf, btf_id), regno - 1);
+ verbose(env, "expected an initialized iter_%s as %s\n",
+ iter_type_str(meta->btf, btf_id), reg_arg_name(env, argno));
return err;
case -EPROTO:
verbose(env, "expected an RCU CS when using %s\n", meta->func_name);
@@ -7989,7 +8031,7 @@ static const struct bpf_reg_types *compatible_reg_types[__BPF_ARG_TYPE_MAX] = {
[ARG_PTR_TO_DYNPTR] = &dynptr_types,
};
-static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
+static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 argno,
enum bpf_arg_type arg_type,
const u32 *arg_btf_id,
struct bpf_call_arg_meta *meta)
@@ -8024,7 +8066,7 @@ static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *re
type &= ~DYNPTR_TYPE_FLAG_MASK;
/* Local kptr types are allowed as the source argument of bpf_kptr_xchg */
- if (meta->func_id == BPF_FUNC_kptr_xchg && type_is_alloc(type) && regno == BPF_REG_2) {
+ if (meta->func_id == BPF_FUNC_kptr_xchg && type_is_alloc(type) && argno == BPF_REG_2) {
type &= ~MEM_ALLOC;
type &= ~MEM_PERCPU;
}
@@ -8038,7 +8080,7 @@ static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *re
goto found;
}
- verbose(env, "R%d type=%s expected=", regno, reg_type_str(env, reg->type));
+ verbose(env, "%s type=%s expected=", reg_arg_name(env, argno), reg_type_str(env, reg->type));
for (j = 0; j + 1 < i; j++)
verbose(env, "%s, ", reg_type_str(env, compatible->types[j]));
verbose(env, "%s\n", reg_type_str(env, compatible->types[j]));
@@ -8051,9 +8093,9 @@ static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *re
if (compatible == &mem_types) {
if (!(arg_type & MEM_RDONLY)) {
verbose(env,
- "%s() may write into memory pointed by R%d type=%s\n",
+ "%s() may write into memory pointed by %s type=%s\n",
func_id_name(meta->func_id),
- regno, reg_type_str(env, reg->type));
+ reg_arg_name(env, argno), reg_type_str(env, reg->type));
return -EACCES;
}
return 0;
@@ -8076,7 +8118,8 @@ static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *re
if (type_may_be_null(reg->type) &&
(!type_may_be_null(arg_type) || arg_type_is_release(arg_type))) {
- verbose(env, "Possibly NULL pointer passed to helper arg%d\n", regno);
+ verbose(env, "Possibly NULL pointer passed to helper %s\n",
+ reg_arg_name(env, argno));
return -EACCES;
}
@@ -8089,25 +8132,26 @@ static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *re
}
if (meta->func_id == BPF_FUNC_kptr_xchg) {
- if (map_kptr_match_type(env, meta->kptr_field, reg, regno))
+ if (map_kptr_match_type(env, meta->kptr_field, reg, argno))
return -EACCES;
} else {
if (arg_btf_id == BPF_PTR_POISON) {
verbose(env, "verifier internal error:");
- verbose(env, "R%d has non-overwritten BPF_PTR_POISON type\n",
- regno);
+ verbose(env, "%s has non-overwritten BPF_PTR_POISON type\n",
+ reg_arg_name(env, argno));
return -EACCES;
}
- err = __check_ptr_off_reg(env, reg, regno, true);
+ err = __check_ptr_off_reg(env, reg, argno, true);
if (err)
return err;
if (!btf_struct_ids_match(&env->log, reg->btf, reg->btf_id,
reg->var_off.value, btf_vmlinux, *arg_btf_id,
strict_type_match)) {
- verbose(env, "R%d is of type %s but %s is expected\n",
- regno, btf_type_name(reg->btf, reg->btf_id),
+ verbose(env, "%s is of type %s but %s is expected\n",
+ reg_arg_name(env, argno),
+ btf_type_name(reg->btf, reg->btf_id),
btf_type_name(btf_vmlinux, *arg_btf_id));
return -EACCES;
}
@@ -8124,8 +8168,8 @@ static int check_reg_type(struct bpf_verifier_env *env, struct bpf_reg_state *re
return -EFAULT;
}
/* Check if local kptr in src arg matches kptr in dst arg */
- if (meta->func_id == BPF_FUNC_kptr_xchg && regno == BPF_REG_2) {
- if (map_kptr_match_type(env, meta->kptr_field, reg, regno))
+ if (meta->func_id == BPF_FUNC_kptr_xchg && argno == BPF_REG_2) {
+ if (map_kptr_match_type(env, meta->kptr_field, reg, argno))
return -EACCES;
}
break;
@@ -8159,7 +8203,7 @@ reg_find_field_offset(const struct bpf_reg_state *reg, s32 off, u32 fields)
}
static int check_func_arg_reg_off(struct bpf_verifier_env *env,
- const struct bpf_reg_state *reg, int regno,
+ const struct bpf_reg_state *reg, int argno,
enum bpf_arg_type arg_type)
{
u32 type = reg->type;
@@ -8185,8 +8229,8 @@ static int check_func_arg_reg_off(struct bpf_verifier_env *env,
* to give the user a better error message.
*/
if (!tnum_is_const(reg->var_off) || reg->var_off.value != 0) {
- verbose(env, "R%d must have zero offset when passed to release func or trusted arg to kfunc\n",
- regno);
+ verbose(env, "%s must have zero offset when passed to release func or trusted arg to kfunc\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
}
@@ -8222,7 +8266,7 @@ static int check_func_arg_reg_off(struct bpf_verifier_env *env,
* cases. var_off always must be 0 for PTR_TO_BTF_ID, hence we
* still need to do checks instead of returning.
*/
- return __check_ptr_off_reg(env, reg, regno, true);
+ return __check_ptr_off_reg(env, reg, argno, true);
case PTR_TO_CTX:
/*
* Allow fixed and variable offsets for syscall context, but
@@ -8234,7 +8278,7 @@ static int check_func_arg_reg_off(struct bpf_verifier_env *env,
return 0;
fallthrough;
default:
- return __check_ptr_off_reg(env, reg, regno, false);
+ return __check_ptr_off_reg(env, reg, argno, false);
}
}
@@ -8304,8 +8348,8 @@ static enum bpf_dynptr_type dynptr_get_type(struct bpf_verifier_env *env,
return state->stack[spi].spilled_ptr.dynptr.type;
}
-static int check_reg_const_str(struct bpf_verifier_env *env,
- struct bpf_reg_state *reg, u32 regno)
+static int check_arg_const_str(struct bpf_verifier_env *env,
+ struct bpf_reg_state *reg, u32 argno)
{
struct bpf_map *map = reg->map_ptr;
int err;
@@ -8317,17 +8361,18 @@ static int check_reg_const_str(struct bpf_verifier_env *env,
return -EINVAL;
if (map->map_type == BPF_MAP_TYPE_INSN_ARRAY) {
- verbose(env, "R%d points to insn_array map which cannot be used as const string\n", regno);
+ verbose(env, "%s points to insn_array map which cannot be used as const string\n",
+ reg_arg_name(env, argno));
return -EACCES;
}
if (!bpf_map_is_rdonly(map)) {
- verbose(env, "R%d does not point to a readonly map'\n", regno);
+ verbose(env, "%s does not point to a readonly map\n", reg_arg_name(env, argno));
return -EACCES;
}
if (!tnum_is_const(reg->var_off)) {
- verbose(env, "R%d is not a constant address'\n", regno);
+ verbose(env, "%s is not a constant address\n", reg_arg_name(env, argno));
return -EACCES;
}
@@ -8336,7 +8381,7 @@ static int check_reg_const_str(struct bpf_verifier_env *env,
return -EACCES;
}
- err = check_map_access(env, reg, regno, 0,
+ err = check_map_access(env, reg, argno, 0,
map->value_size - reg->var_off.value, false,
ACCESS_HELPER);
if (err)
@@ -8675,7 +8720,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
break;
case ARG_PTR_TO_CONST_STR:
{
- err = check_reg_const_str(env, reg, regno);
+ err = check_arg_const_str(env, reg, regno);
if (err)
return err;
break;
@@ -9264,13 +9309,14 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
* verifier sees.
*/
for (i = 0; i < sub->arg_cnt; i++) {
+ u32 argno = make_argno(i + 1);
u32 regno = i + 1;
struct bpf_reg_state *reg = &regs[regno];
struct bpf_subprog_arg_info *arg = &sub->args[i];
if (arg->arg_type == ARG_ANYTHING) {
if (reg->type != SCALAR_VALUE) {
- bpf_log(log, "R%d is not a scalar\n", regno);
+ bpf_log(log, "%s is not a scalar\n", reg_arg_name(env, argno));
return -EINVAL;
}
} else if (arg->arg_type & PTR_UNTRUSTED) {
@@ -9280,24 +9326,26 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
* invalid memory access.
*/
} else if (arg->arg_type == ARG_PTR_TO_CTX) {
- ret = check_func_arg_reg_off(env, reg, regno, ARG_PTR_TO_CTX);
+ ret = check_func_arg_reg_off(env, reg, argno, ARG_PTR_TO_CTX);
if (ret < 0)
return ret;
/* If function expects ctx type in BTF check that caller
* is passing PTR_TO_CTX.
*/
if (reg->type != PTR_TO_CTX) {
- bpf_log(log, "arg#%d expects pointer to ctx\n", i);
+ bpf_log(log, "%s expects pointer to ctx\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
} else if (base_type(arg->arg_type) == ARG_PTR_TO_MEM) {
- ret = check_func_arg_reg_off(env, reg, regno, ARG_DONTCARE);
+ ret = check_func_arg_reg_off(env, reg, argno, ARG_DONTCARE);
if (ret < 0)
return ret;
- if (check_mem_reg(env, reg, regno, arg->mem_size))
+ if (check_mem_reg(env, reg, argno, arg->mem_size))
return -EINVAL;
if (!(arg->arg_type & PTR_MAYBE_NULL) && (reg->type & PTR_MAYBE_NULL)) {
- bpf_log(log, "arg#%d is expected to be non-NULL\n", i);
+ bpf_log(log, "%s is expected to be non-NULL\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
} else if (base_type(arg->arg_type) == ARG_PTR_TO_ARENA) {
@@ -9309,15 +9357,16 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
* run-time debug nightmare.
*/
if (reg->type != PTR_TO_ARENA && reg->type != SCALAR_VALUE) {
- bpf_log(log, "R%d is not a pointer to arena or scalar.\n", regno);
+ bpf_log(log, "%s is not a pointer to arena or scalar.\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
} else if (arg->arg_type == ARG_PTR_TO_DYNPTR) {
- ret = check_func_arg_reg_off(env, reg, regno, ARG_PTR_TO_DYNPTR);
+ ret = check_func_arg_reg_off(env, reg, argno, ARG_PTR_TO_DYNPTR);
if (ret)
return ret;
- ret = process_dynptr_func(env, reg, regno, -1, arg->arg_type, 0);
+ ret = process_dynptr_func(env, reg, argno, -1, arg->arg_type, 0);
if (ret)
return ret;
} else if (base_type(arg->arg_type) == ARG_PTR_TO_BTF_ID) {
@@ -9328,12 +9377,13 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
continue;
memset(&meta, 0, sizeof(meta)); /* leave func_id as zero */
- err = check_reg_type(env, reg, regno, arg->arg_type, &arg->btf_id, &meta);
- err = err ?: check_func_arg_reg_off(env, reg, regno, arg->arg_type);
+ err = check_reg_type(env, reg, argno, arg->arg_type, &arg->btf_id, &meta);
+ err = err ?: check_func_arg_reg_off(env, reg, argno, arg->arg_type);
if (err)
return err;
} else {
- verifier_bug(env, "unrecognized arg#%d type %d", i, arg->arg_type);
+ verifier_bug(env, "unrecognized %s type %d",
+ reg_arg_name(env, argno), arg->arg_type);
return -EFAULT;
}
}
@@ -11301,7 +11351,7 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
struct bpf_kfunc_call_arg_meta *meta,
const struct btf_type *t, const struct btf_type *ref_t,
const char *ref_tname, const struct btf_param *args,
- int arg, int nargs, struct bpf_reg_state *reg)
+ int arg, int nargs, u32 argno, struct bpf_reg_state *reg)
{
u32 regno = arg + 1;
struct bpf_reg_state *regs = cur_regs(env);
@@ -11376,8 +11426,9 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
if ((base_type(reg->type) == PTR_TO_BTF_ID || reg2btf_ids[base_type(reg->type)])) {
if (!btf_type_is_struct(ref_t)) {
- verbose(env, "kernel function %s args#%d pointer type %s %s is not supported\n",
- meta->func_name, arg, btf_type_str(ref_t), ref_tname);
+ verbose(env, "kernel function %s %s pointer type %s %s is not supported\n",
+ meta->func_name, reg_arg_name(env, argno),
+ btf_type_str(ref_t), ref_tname);
return -EINVAL;
}
return KF_ARG_PTR_TO_BTF_ID;
@@ -11393,8 +11444,9 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
*/
if (!btf_type_is_scalar(ref_t) && !__btf_type_is_scalar_struct(env, meta->btf, ref_t, 0) &&
(arg_mem_size ? !btf_type_is_void(ref_t) : 1)) {
- verbose(env, "arg#%d pointer type %s %s must point to %sscalar, or struct with scalar\n",
- arg, btf_type_str(ref_t), ref_tname, arg_mem_size ? "void, " : "");
+ verbose(env, "%s pointer type %s %s must point to %sscalar, or struct with scalar\n",
+ reg_arg_name(env, argno),
+ btf_type_str(ref_t), ref_tname, arg_mem_size ? "void, " : "");
return -EINVAL;
}
return arg_mem_size ? KF_ARG_PTR_TO_MEM_SIZE : KF_ARG_PTR_TO_MEM;
@@ -11405,7 +11457,7 @@ static int process_kf_arg_ptr_to_btf_id(struct bpf_verifier_env *env,
const struct btf_type *ref_t,
const char *ref_tname, u32 ref_id,
struct bpf_kfunc_call_arg_meta *meta,
- int arg)
+ int arg, u32 argno)
{
const struct btf_type *reg_ref_t;
bool strict_type_match = false;
@@ -11463,15 +11515,16 @@ static int process_kf_arg_ptr_to_btf_id(struct bpf_verifier_env *env,
*/
taking_projection = btf_is_projection_of(ref_tname, reg_ref_tname);
if (!taking_projection && !struct_same) {
- verbose(env, "kernel function %s args#%d expected pointer to %s %s but R%d has a pointer to %s %s\n",
- meta->func_name, arg, btf_type_str(ref_t), ref_tname, arg + 1,
+ verbose(env, "kernel function %s %s expected pointer to %s %s but it has a pointer to %s %s\n",
+ meta->func_name, reg_arg_name(env, argno),
+ btf_type_str(ref_t), ref_tname,
btf_type_str(reg_ref_t), reg_ref_tname);
return -EINVAL;
}
return 0;
}
-static int process_irq_flag(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
+static int process_irq_flag(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno,
struct bpf_kfunc_call_arg_meta *meta)
{
int err, kfunc_class = IRQ_NATIVE_KFUNC;
@@ -11494,11 +11547,13 @@ static int process_irq_flag(struct bpf_verifier_env *env, struct bpf_reg_state *
if (irq_save) {
if (!is_irq_flag_reg_valid_uninit(env, reg)) {
- verbose(env, "expected uninitialized irq flag as arg#%d\n", regno - 1);
+ verbose(env, "expected uninitialized irq flag as %s\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
- err = check_mem_access(env, env->insn_idx, reg, regno, 0, BPF_DW, BPF_WRITE, -1, false, false);
+ err = check_mem_access(env, env->insn_idx, reg, argno, 0, BPF_DW,
+ BPF_WRITE, -1, false, false);
if (err)
return err;
@@ -11508,7 +11563,8 @@ static int process_irq_flag(struct bpf_verifier_env *env, struct bpf_reg_state *
} else {
err = is_irq_flag_reg_valid_init(env, reg);
if (err) {
- verbose(env, "expected an initialized irq flag as arg#%d\n", regno - 1);
+ verbose(env, "expected an initialized irq flag as %s\n",
+ reg_arg_name(env, argno));
return err;
}
@@ -11799,7 +11855,7 @@ static bool check_kfunc_is_graph_node_api(struct bpf_verifier_env *env,
static int
__process_kf_arg_ptr_to_graph_root(struct bpf_verifier_env *env,
- struct bpf_reg_state *reg, u32 regno,
+ struct bpf_reg_state *reg, u32 argno,
struct bpf_kfunc_call_arg_meta *meta,
enum btf_field_type head_field_type,
struct btf_field **head_field)
@@ -11820,8 +11876,8 @@ __process_kf_arg_ptr_to_graph_root(struct bpf_verifier_env *env,
head_type_name = btf_field_type_name(head_field_type);
if (!tnum_is_const(reg->var_off)) {
verbose(env,
- "R%d doesn't have constant offset. %s has to be at the constant offset\n",
- regno, head_type_name);
+ "%s doesn't have constant offset. %s has to be at the constant offset\n",
+ reg_arg_name(env, argno), head_type_name);
return -EINVAL;
}
@@ -11849,24 +11905,24 @@ __process_kf_arg_ptr_to_graph_root(struct bpf_verifier_env *env,
}
static int process_kf_arg_ptr_to_list_head(struct bpf_verifier_env *env,
- struct bpf_reg_state *reg, u32 regno,
+ struct bpf_reg_state *reg, u32 argno,
struct bpf_kfunc_call_arg_meta *meta)
{
- return __process_kf_arg_ptr_to_graph_root(env, reg, regno, meta, BPF_LIST_HEAD,
+ return __process_kf_arg_ptr_to_graph_root(env, reg, argno, meta, BPF_LIST_HEAD,
&meta->arg_list_head.field);
}
static int process_kf_arg_ptr_to_rbtree_root(struct bpf_verifier_env *env,
- struct bpf_reg_state *reg, u32 regno,
+ struct bpf_reg_state *reg, u32 argno,
struct bpf_kfunc_call_arg_meta *meta)
{
- return __process_kf_arg_ptr_to_graph_root(env, reg, regno, meta, BPF_RB_ROOT,
+ return __process_kf_arg_ptr_to_graph_root(env, reg, argno, meta, BPF_RB_ROOT,
&meta->arg_rbtree_root.field);
}
static int
__process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
- struct bpf_reg_state *reg, u32 regno,
+ struct bpf_reg_state *reg, u32 argno,
struct bpf_kfunc_call_arg_meta *meta,
enum btf_field_type head_field_type,
enum btf_field_type node_field_type,
@@ -11888,8 +11944,8 @@ __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
node_type_name = btf_field_type_name(node_field_type);
if (!tnum_is_const(reg->var_off)) {
verbose(env,
- "R%d doesn't have constant offset. %s has to be at the constant offset\n",
- regno, node_type_name);
+ "%s doesn't have constant offset. %s has to be at the constant offset\n",
+ reg_arg_name(env, argno), node_type_name);
return -EINVAL;
}
@@ -11930,19 +11986,19 @@ __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
}
static int process_kf_arg_ptr_to_list_node(struct bpf_verifier_env *env,
- struct bpf_reg_state *reg, u32 regno,
+ struct bpf_reg_state *reg, u32 argno,
struct bpf_kfunc_call_arg_meta *meta)
{
- return __process_kf_arg_ptr_to_graph_node(env, reg, regno, meta,
+ return __process_kf_arg_ptr_to_graph_node(env, reg, argno, meta,
BPF_LIST_HEAD, BPF_LIST_NODE,
&meta->arg_list_head.field);
}
static int process_kf_arg_ptr_to_rbtree_node(struct bpf_verifier_env *env,
- struct bpf_reg_state *reg, u32 regno,
+ struct bpf_reg_state *reg, u32 argno,
struct bpf_kfunc_call_arg_meta *meta)
{
- return __process_kf_arg_ptr_to_graph_node(env, reg, regno, meta,
+ return __process_kf_arg_ptr_to_graph_node(env, reg, argno, meta,
BPF_RB_ROOT, BPF_RB_NODE,
&meta->arg_rbtree_root.field);
}
@@ -11994,6 +12050,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[i + 1];
const struct btf_type *t, *ref_t, *resolve_ret;
enum bpf_arg_type arg_type = ARG_DONTCARE;
+ u32 argno = make_argno(i + 1);
u32 regno = i + 1, ref_id, type_size;
bool is_ret_buf_sz = false;
int kf_arg_type;
@@ -12016,7 +12073,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
if (btf_type_is_scalar(t)) {
if (reg->type != SCALAR_VALUE) {
- verbose(env, "R%d is not a scalar\n", regno);
+ verbose(env, "%s is not a scalar\n", reg_arg_name(env, argno));
return -EINVAL;
}
@@ -12026,7 +12083,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
return -EFAULT;
}
if (!tnum_is_const(reg->var_off)) {
- verbose(env, "R%d must be a known constant\n", regno);
+ verbose(env, "%s must be a known constant\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
ret = mark_chain_precision(env, regno);
@@ -12048,7 +12106,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
}
if (!tnum_is_const(reg->var_off)) {
- verbose(env, "R%d is not a const\n", regno);
+ verbose(env, "%s is not a const\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
@@ -12061,20 +12120,22 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
}
if (!btf_type_is_ptr(t)) {
- verbose(env, "Unrecognized arg#%d type %s\n", i, btf_type_str(t));
+ verbose(env, "Unrecognized %s type %s\n",
+ reg_arg_name(env, argno), btf_type_str(t));
return -EINVAL;
}
if ((bpf_register_is_null(reg) || type_may_be_null(reg->type)) &&
!is_kfunc_arg_nullable(meta->btf, &args[i])) {
- verbose(env, "Possibly NULL pointer passed to trusted arg%d\n", i);
+ verbose(env, "Possibly NULL pointer passed to trusted %s\n",
+ reg_arg_name(env, argno));
return -EACCES;
}
if (reg->ref_obj_id) {
if (is_kfunc_release(meta) && meta->ref_obj_id) {
- verifier_bug(env, "more than one arg with ref_obj_id R%d %u %u",
- regno, reg->ref_obj_id,
+ verifier_bug(env, "more than one arg with ref_obj_id %s %u %u",
+ reg_arg_name(env, argno), reg->ref_obj_id,
meta->ref_obj_id);
return -EFAULT;
}
@@ -12086,7 +12147,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
ref_t = btf_type_skip_modifiers(btf, t->type, &ref_id);
ref_tname = btf_name_by_offset(btf, ref_t->name_off);
- kf_arg_type = get_kfunc_ptr_arg_type(env, meta, t, ref_t, ref_tname, args, i, nargs, reg);
+ kf_arg_type = get_kfunc_ptr_arg_type(env, meta, t, ref_t, ref_tname, args, i, nargs, argno, reg);
if (kf_arg_type < 0)
return kf_arg_type;
@@ -12095,7 +12156,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
continue;
case KF_ARG_PTR_TO_MAP:
if (!reg->map_ptr) {
- verbose(env, "pointer in R%d isn't map pointer\n", regno);
+ verbose(env, "pointer in %s isn't map pointer\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
if (meta->map.ptr && (reg->map_ptr->record->wq_off >= 0 ||
@@ -12133,11 +12195,13 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
case KF_ARG_PTR_TO_BTF_ID:
if (!is_trusted_reg(reg)) {
if (!is_kfunc_rcu(meta)) {
- verbose(env, "R%d must be referenced or trusted\n", regno);
+ verbose(env, "%s must be referenced or trusted\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
if (!is_rcu_reg(reg)) {
- verbose(env, "R%d must be a rcu pointer\n", regno);
+ verbose(env, "%s must be a rcu pointer\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
}
@@ -12169,15 +12233,15 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
if (is_kfunc_release(meta) && reg->ref_obj_id)
arg_type |= OBJ_RELEASE;
- ret = check_func_arg_reg_off(env, reg, regno, arg_type);
+ ret = check_func_arg_reg_off(env, reg, argno, arg_type);
if (ret < 0)
return ret;
switch (kf_arg_type) {
case KF_ARG_PTR_TO_CTX:
if (reg->type != PTR_TO_CTX) {
- verbose(env, "arg#%d expected pointer to ctx, but got %s\n",
- i, reg_type_str(env, reg->type));
+ verbose(env, "%s expected pointer to ctx, but got %s\n",
+ reg_arg_name(env, argno), reg_type_str(env, reg->type));
return -EINVAL;
}
@@ -12191,16 +12255,19 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
case KF_ARG_PTR_TO_ALLOC_BTF_ID:
if (reg->type == (PTR_TO_BTF_ID | MEM_ALLOC)) {
if (!is_bpf_obj_drop_kfunc(meta->func_id)) {
- verbose(env, "arg#%d expected for bpf_obj_drop()\n", i);
+ verbose(env, "%s expected for bpf_obj_drop()\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
} else if (reg->type == (PTR_TO_BTF_ID | MEM_ALLOC | MEM_PERCPU)) {
if (!is_bpf_percpu_obj_drop_kfunc(meta->func_id)) {
- verbose(env, "arg#%d expected for bpf_percpu_obj_drop()\n", i);
+ verbose(env, "%s expected for bpf_percpu_obj_drop()\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
} else {
- verbose(env, "arg#%d expected pointer to allocated object\n", i);
+ verbose(env, "%s expected pointer to allocated object\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
if (!reg->ref_obj_id) {
@@ -12248,7 +12315,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
}
}
- ret = process_dynptr_func(env, reg, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id);
+ ret = process_dynptr_func(env, reg, argno, insn_idx,
+ dynptr_arg_type, clone_ref_obj_id);
if (ret < 0)
return ret;
@@ -12273,55 +12341,59 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
return -EINVAL;
}
}
- ret = process_iter_arg(env, reg, regno, insn_idx, meta);
+ ret = process_iter_arg(env, reg, argno, insn_idx, meta);
if (ret < 0)
return ret;
break;
case KF_ARG_PTR_TO_LIST_HEAD:
if (reg->type != PTR_TO_MAP_VALUE &&
reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) {
- verbose(env, "arg#%d expected pointer to map value or allocated object\n", i);
+ verbose(env, "%s expected pointer to map value or allocated object\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
if (reg->type == (PTR_TO_BTF_ID | MEM_ALLOC) && !reg->ref_obj_id) {
verbose(env, "allocated object must be referenced\n");
return -EINVAL;
}
- ret = process_kf_arg_ptr_to_list_head(env, reg, regno, meta);
+ ret = process_kf_arg_ptr_to_list_head(env, reg, argno, meta);
if (ret < 0)
return ret;
break;
case KF_ARG_PTR_TO_RB_ROOT:
if (reg->type != PTR_TO_MAP_VALUE &&
reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) {
- verbose(env, "arg#%d expected pointer to map value or allocated object\n", i);
+ verbose(env, "%s expected pointer to map value or allocated object\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
if (reg->type == (PTR_TO_BTF_ID | MEM_ALLOC) && !reg->ref_obj_id) {
verbose(env, "allocated object must be referenced\n");
return -EINVAL;
}
- ret = process_kf_arg_ptr_to_rbtree_root(env, reg, regno, meta);
+ ret = process_kf_arg_ptr_to_rbtree_root(env, reg, argno, meta);
if (ret < 0)
return ret;
break;
case KF_ARG_PTR_TO_LIST_NODE:
if (reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) {
- verbose(env, "arg#%d expected pointer to allocated object\n", i);
+ verbose(env, "%s expected pointer to allocated object\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
if (!reg->ref_obj_id) {
verbose(env, "allocated object must be referenced\n");
return -EINVAL;
}
- ret = process_kf_arg_ptr_to_list_node(env, reg, regno, meta);
+ ret = process_kf_arg_ptr_to_list_node(env, reg, argno, meta);
if (ret < 0)
return ret;
break;
case KF_ARG_PTR_TO_RB_NODE:
if (is_bpf_rbtree_add_kfunc(meta->func_id)) {
if (reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) {
- verbose(env, "arg#%d expected pointer to allocated object\n", i);
+ verbose(env, "%s expected pointer to allocated object\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
if (!reg->ref_obj_id) {
@@ -12339,7 +12411,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
}
}
- ret = process_kf_arg_ptr_to_rbtree_node(env, reg, regno, meta);
+ ret = process_kf_arg_ptr_to_rbtree_node(env, reg, argno, meta);
if (ret < 0)
return ret;
break;
@@ -12354,24 +12426,26 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
if ((base_type(reg->type) != PTR_TO_BTF_ID ||
(bpf_type_has_unsafe_modifiers(reg->type) && !is_rcu_reg(reg))) &&
!reg2btf_ids[base_type(reg->type)]) {
- verbose(env, "arg#%d is %s ", i, reg_type_str(env, reg->type));
+ verbose(env, "%s is %s ", reg_arg_name(env, argno),
+ reg_type_str(env, reg->type));
verbose(env, "expected %s or socket\n",
reg_type_str(env, base_type(reg->type) |
(type_flag(reg->type) & BPF_REG_TRUSTED_MODIFIERS)));
return -EINVAL;
}
- ret = process_kf_arg_ptr_to_btf_id(env, reg, ref_t, ref_tname, ref_id, meta, i);
+ ret = process_kf_arg_ptr_to_btf_id(env, reg, ref_t, ref_tname, ref_id, meta, i, argno);
if (ret < 0)
return ret;
break;
case KF_ARG_PTR_TO_MEM:
resolve_ret = btf_resolve_size(btf, ref_t, &type_size);
if (IS_ERR(resolve_ret)) {
- verbose(env, "arg#%d reference type('%s %s') size cannot be determined: %ld\n",
- i, btf_type_str(ref_t), ref_tname, PTR_ERR(resolve_ret));
+ verbose(env, "%s reference type('%s %s') size cannot be determined: %ld\n",
+ reg_arg_name(env, argno), btf_type_str(ref_t),
+ ref_tname, PTR_ERR(resolve_ret));
return -EINVAL;
}
- ret = check_mem_reg(env, reg, regno, type_size);
+ ret = check_mem_reg(env, reg, argno, type_size);
if (ret < 0)
return ret;
break;
@@ -12381,11 +12455,15 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
const struct btf_param *buff_arg = &args[i];
struct bpf_reg_state *size_reg = &regs[regno + 1];
const struct btf_param *size_arg = &args[i + 1];
+ u32 next_argno = make_argno(i + 2);
if (!bpf_register_is_null(buff_reg) || !is_kfunc_arg_nullable(meta->btf, buff_arg)) {
- ret = check_kfunc_mem_size_reg(env, buff_reg, size_reg, regno, regno + 1);
+ ret = check_kfunc_mem_size_reg(env, buff_reg, size_reg,
+ argno, next_argno);
if (ret < 0) {
- verbose(env, "arg#%d arg#%d memory, len pair leads to invalid memory access\n", i, i + 1);
+ verbose(env, "%s and ", reg_arg_name(env, argno));
+ verbose(env, "%s memory, len pair leads to invalid memory access\n",
+ reg_arg_name(env, next_argno));
return ret;
}
}
@@ -12396,7 +12474,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
return -EFAULT;
}
if (!tnum_is_const(size_reg->var_off)) {
- verbose(env, "R%d must be a known constant\n", regno + 1);
+ verbose(env, "%s must be a known constant\n",
+ reg_arg_name(env, next_argno));
return -EINVAL;
}
meta->arg_constant.found = true;
@@ -12409,14 +12488,16 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
}
case KF_ARG_PTR_TO_CALLBACK:
if (reg->type != PTR_TO_FUNC) {
- verbose(env, "arg%d expected pointer to func\n", i);
+ verbose(env, "%s expected pointer to func\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
meta->subprogno = reg->subprogno;
break;
case KF_ARG_PTR_TO_REFCOUNTED_KPTR:
if (!type_is_ptr_alloc_obj(reg->type)) {
- verbose(env, "arg#%d is neither owning or non-owning ref\n", i);
+ verbose(env, "%s is neither owning or non-owning ref\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
if (!type_is_non_owning_ref(reg->type))
@@ -12429,7 +12510,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
}
if (rec->refcount_off < 0) {
- verbose(env, "arg#%d doesn't point to a type with bpf_refcount field\n", i);
+ verbose(env, "%s doesn't point to a type with bpf_refcount field\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
@@ -12438,46 +12520,51 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
break;
case KF_ARG_PTR_TO_CONST_STR:
if (reg->type != PTR_TO_MAP_VALUE) {
- verbose(env, "arg#%d doesn't point to a const string\n", i);
+ verbose(env, "%s doesn't point to a const string\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
- ret = check_reg_const_str(env, reg, regno);
+ ret = check_arg_const_str(env, reg, argno);
if (ret)
return ret;
break;
case KF_ARG_PTR_TO_WORKQUEUE:
if (reg->type != PTR_TO_MAP_VALUE) {
- verbose(env, "arg#%d doesn't point to a map value\n", i);
+ verbose(env, "%s doesn't point to a map value\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
- ret = check_map_field_pointer(env, reg, regno, BPF_WORKQUEUE, &meta->map);
+ ret = check_map_field_pointer(env, reg, argno, BPF_WORKQUEUE, &meta->map);
if (ret < 0)
return ret;
break;
case KF_ARG_PTR_TO_TIMER:
if (reg->type != PTR_TO_MAP_VALUE) {
- verbose(env, "arg#%d doesn't point to a map value\n", i);
+ verbose(env, "%s doesn't point to a map value\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
- ret = process_timer_kfunc(env, reg, regno, meta);
+ ret = process_timer_kfunc(env, reg, argno, meta);
if (ret < 0)
return ret;
break;
case KF_ARG_PTR_TO_TASK_WORK:
if (reg->type != PTR_TO_MAP_VALUE) {
- verbose(env, "arg#%d doesn't point to a map value\n", i);
+ verbose(env, "%s doesn't point to a map value\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
- ret = check_map_field_pointer(env, reg, regno, BPF_TASK_WORK, &meta->map);
+ ret = check_map_field_pointer(env, reg, argno, BPF_TASK_WORK, &meta->map);
if (ret < 0)
return ret;
break;
case KF_ARG_PTR_TO_IRQ_FLAG:
if (reg->type != PTR_TO_STACK) {
- verbose(env, "arg#%d doesn't point to an irq flag on stack\n", i);
+ verbose(env, "%s doesn't point to an irq flag on stack\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
- ret = process_irq_flag(env, reg, regno, meta);
+ ret = process_irq_flag(env, reg, argno, meta);
if (ret < 0)
return ret;
break;
@@ -12486,7 +12573,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
int flags = PROCESS_RES_LOCK;
if (reg->type != PTR_TO_MAP_VALUE && reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) {
- verbose(env, "arg#%d doesn't point to map value or allocated object\n", i);
+ verbose(env, "%s doesn't point to map value or allocated object\n",
+ reg_arg_name(env, argno));
return -EINVAL;
}
@@ -12498,7 +12586,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
if (meta->func_id == special_kfunc_list[KF_bpf_res_spin_lock_irqsave] ||
meta->func_id == special_kfunc_list[KF_bpf_res_spin_unlock_irqrestore])
flags |= PROCESS_LOCK_IRQ;
- ret = process_spin_lock(env, reg, regno, flags);
+ ret = process_spin_lock(env, reg, argno, flags);
if (ret < 0)
return ret;
break;
@@ -18714,7 +18802,7 @@ static int do_check_common(struct bpf_verifier_env *env, int subprog)
mark_reg_unknown(env, regs, i);
} else {
verifier_bug(env, "unhandled arg#%d type %d",
- i - BPF_REG_1, arg->arg_type);
+ i - BPF_REG_1 + 1, arg->arg_type);
ret = -EFAULT;
goto out;
}
diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_nf.c b/tools/testing/selftests/bpf/prog_tests/bpf_nf.c
index 215878ea04de..b33dba4b126e 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_nf.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_nf.c
@@ -11,18 +11,18 @@ struct {
const char *prog_name;
const char *err_msg;
} test_bpf_nf_fail_tests[] = {
- { "alloc_release", "kernel function bpf_ct_release args#0 expected pointer to STRUCT nf_conn but" },
- { "insert_insert", "kernel function bpf_ct_insert_entry args#0 expected pointer to STRUCT nf_conn___init but" },
- { "lookup_insert", "kernel function bpf_ct_insert_entry args#0 expected pointer to STRUCT nf_conn___init but" },
- { "set_timeout_after_insert", "kernel function bpf_ct_set_timeout args#0 expected pointer to STRUCT nf_conn___init but" },
- { "set_status_after_insert", "kernel function bpf_ct_set_status args#0 expected pointer to STRUCT nf_conn___init but" },
- { "change_timeout_after_alloc", "kernel function bpf_ct_change_timeout args#0 expected pointer to STRUCT nf_conn but" },
- { "change_status_after_alloc", "kernel function bpf_ct_change_status args#0 expected pointer to STRUCT nf_conn but" },
+ { "alloc_release", "kernel function bpf_ct_release R1 expected pointer to STRUCT nf_conn but" },
+ { "insert_insert", "kernel function bpf_ct_insert_entry R1 expected pointer to STRUCT nf_conn___init but" },
+ { "lookup_insert", "kernel function bpf_ct_insert_entry R1 expected pointer to STRUCT nf_conn___init but" },
+ { "set_timeout_after_insert", "kernel function bpf_ct_set_timeout R1 expected pointer to STRUCT nf_conn___init but" },
+ { "set_status_after_insert", "kernel function bpf_ct_set_status R1 expected pointer to STRUCT nf_conn___init but" },
+ { "change_timeout_after_alloc", "kernel function bpf_ct_change_timeout R1 expected pointer to STRUCT nf_conn but" },
+ { "change_status_after_alloc", "kernel function bpf_ct_change_status R1 expected pointer to STRUCT nf_conn but" },
{ "write_not_allowlisted_field", "no write support to nf_conn at off" },
- { "lookup_null_bpf_tuple", "Possibly NULL pointer passed to trusted arg1" },
- { "lookup_null_bpf_opts", "Possibly NULL pointer passed to trusted arg3" },
- { "xdp_lookup_null_bpf_tuple", "Possibly NULL pointer passed to trusted arg1" },
- { "xdp_lookup_null_bpf_opts", "Possibly NULL pointer passed to trusted arg3" },
+ { "lookup_null_bpf_tuple", "Possibly NULL pointer passed to trusted R2" },
+ { "lookup_null_bpf_opts", "Possibly NULL pointer passed to trusted R4" },
+ { "xdp_lookup_null_bpf_tuple", "Possibly NULL pointer passed to trusted R2" },
+ { "xdp_lookup_null_bpf_opts", "Possibly NULL pointer passed to trusted R4" },
};
enum {
diff --git a/tools/testing/selftests/bpf/prog_tests/cb_refs.c b/tools/testing/selftests/bpf/prog_tests/cb_refs.c
index c40df623a8f7..6300b67a3a84 100644
--- a/tools/testing/selftests/bpf/prog_tests/cb_refs.c
+++ b/tools/testing/selftests/bpf/prog_tests/cb_refs.c
@@ -12,7 +12,7 @@ struct {
const char *err_msg;
} cb_refs_tests[] = {
{ "underflow_prog", "must point to scalar, or struct with scalar" },
- { "leak_prog", "Possibly NULL pointer passed to helper arg2" },
+ { "leak_prog", "Possibly NULL pointer passed to helper R2" },
{ "nested_cb", "Unreleased reference id=4 alloc_insn=2" }, /* alloc_insn=2{4,5} */
{ "non_cb_transfer_ref", "Unreleased reference id=4 alloc_insn=1" }, /* alloc_insn=1{1,2} */
};
diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c
index 62f3fb79f5d1..3df07680f9e0 100644
--- a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c
+++ b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c
@@ -68,7 +68,7 @@ static struct kfunc_test_params kfunc_tests[] = {
TC_FAIL(kfunc_call_test_get_mem_fail_oob, 0, "min value is outside of the allowed memory range"),
TC_FAIL(kfunc_call_test_get_mem_fail_not_const, 0, "is not a const"),
TC_FAIL(kfunc_call_test_mem_acquire_fail, 0, "acquire kernel function does not return PTR_TO_BTF_ID"),
- TC_FAIL(kfunc_call_test_pointer_arg_type_mismatch, 0, "arg#0 expected pointer to ctx, but got scalar"),
+ TC_FAIL(kfunc_call_test_pointer_arg_type_mismatch, 0, "R1 expected pointer to ctx, but got scalar"),
/* success cases */
TC_TEST(kfunc_call_test1, 12),
diff --git a/tools/testing/selftests/bpf/prog_tests/linked_list.c b/tools/testing/selftests/bpf/prog_tests/linked_list.c
index 6f25b5f39a79..dbff099860ba 100644
--- a/tools/testing/selftests/bpf/prog_tests/linked_list.c
+++ b/tools/testing/selftests/bpf/prog_tests/linked_list.c
@@ -81,8 +81,8 @@ static struct {
{ "direct_write_node", "direct access to bpf_list_node is disallowed" },
{ "use_after_unlock_push_front", "invalid mem access 'scalar'" },
{ "use_after_unlock_push_back", "invalid mem access 'scalar'" },
- { "double_push_front", "arg#1 expected pointer to allocated object" },
- { "double_push_back", "arg#1 expected pointer to allocated object" },
+ { "double_push_front", "R2 expected pointer to allocated object" },
+ { "double_push_back", "R2 expected pointer to allocated object" },
{ "no_node_value_type", "bpf_list_node not found at offset=0" },
{ "incorrect_value_type",
"operation on bpf_list_head expects arg#1 bpf_list_node at offset=48 in struct foo, "
diff --git a/tools/testing/selftests/bpf/progs/cgrp_kfunc_failure.c b/tools/testing/selftests/bpf/progs/cgrp_kfunc_failure.c
index 9fe9c4a4e8f6..a875ba8e5007 100644
--- a/tools/testing/selftests/bpf/progs/cgrp_kfunc_failure.c
+++ b/tools/testing/selftests/bpf/progs/cgrp_kfunc_failure.c
@@ -29,7 +29,7 @@ static struct __cgrps_kfunc_map_value *insert_lookup_cgrp(struct cgroup *cgrp)
}
SEC("tp_btf/cgroup_mkdir")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(cgrp_kfunc_acquire_untrusted, struct cgroup *cgrp, const char *path)
{
struct cgroup *acquired;
@@ -48,7 +48,7 @@ int BPF_PROG(cgrp_kfunc_acquire_untrusted, struct cgroup *cgrp, const char *path
}
SEC("tp_btf/cgroup_mkdir")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(cgrp_kfunc_acquire_no_null_check, struct cgroup *cgrp, const char *path)
{
struct cgroup *acquired;
@@ -64,7 +64,7 @@ int BPF_PROG(cgrp_kfunc_acquire_no_null_check, struct cgroup *cgrp, const char *
}
SEC("tp_btf/cgroup_mkdir")
-__failure __msg("arg#0 pointer type STRUCT cgroup must point")
+__failure __msg("R1 pointer type STRUCT cgroup must point")
int BPF_PROG(cgrp_kfunc_acquire_fp, struct cgroup *cgrp, const char *path)
{
struct cgroup *acquired, *stack_cgrp = (struct cgroup *)&path;
@@ -106,7 +106,7 @@ int BPF_PROG(cgrp_kfunc_acquire_trusted_walked, struct cgroup *cgrp, const char
}
SEC("tp_btf/cgroup_mkdir")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(cgrp_kfunc_acquire_null, struct cgroup *cgrp, const char *path)
{
struct cgroup *acquired;
@@ -175,7 +175,7 @@ int BPF_PROG(cgrp_kfunc_rcu_get_release, struct cgroup *cgrp, const char *path)
}
SEC("tp_btf/cgroup_mkdir")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(cgrp_kfunc_release_untrusted, struct cgroup *cgrp, const char *path)
{
struct __cgrps_kfunc_map_value *v;
@@ -191,7 +191,7 @@ int BPF_PROG(cgrp_kfunc_release_untrusted, struct cgroup *cgrp, const char *path
}
SEC("tp_btf/cgroup_mkdir")
-__failure __msg("arg#0 pointer type STRUCT cgroup must point")
+__failure __msg("R1 pointer type STRUCT cgroup must point")
int BPF_PROG(cgrp_kfunc_release_fp, struct cgroup *cgrp, const char *path)
{
struct cgroup *acquired = (struct cgroup *)&path;
@@ -203,7 +203,7 @@ int BPF_PROG(cgrp_kfunc_release_fp, struct cgroup *cgrp, const char *path)
}
SEC("tp_btf/cgroup_mkdir")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(cgrp_kfunc_release_null, struct cgroup *cgrp, const char *path)
{
struct __cgrps_kfunc_map_value local, *v;
diff --git a/tools/testing/selftests/bpf/progs/cpumask_failure.c b/tools/testing/selftests/bpf/progs/cpumask_failure.c
index 61c32e91e8c3..4c45346fe6f7 100644
--- a/tools/testing/selftests/bpf/progs/cpumask_failure.c
+++ b/tools/testing/selftests/bpf/progs/cpumask_failure.c
@@ -45,7 +45,7 @@ int BPF_PROG(test_alloc_no_release, struct task_struct *task, u64 clone_flags)
}
SEC("tp_btf/task_newtask")
-__failure __msg("NULL pointer passed to trusted arg0")
+__failure __msg("NULL pointer passed to trusted R1")
int BPF_PROG(test_alloc_double_release, struct task_struct *task, u64 clone_flags)
{
struct bpf_cpumask *cpumask;
@@ -73,7 +73,7 @@ int BPF_PROG(test_acquire_wrong_cpumask, struct task_struct *task, u64 clone_fla
}
SEC("tp_btf/task_newtask")
-__failure __msg("bpf_cpumask_set_cpu args#1 expected pointer to STRUCT bpf_cpumask")
+__failure __msg("bpf_cpumask_set_cpu R2 expected pointer to STRUCT bpf_cpumask")
int BPF_PROG(test_mutate_cpumask, struct task_struct *task, u64 clone_flags)
{
/* Can't set the CPU of a non-struct bpf_cpumask. */
@@ -107,7 +107,7 @@ int BPF_PROG(test_insert_remove_no_release, struct task_struct *task, u64 clone_
}
SEC("tp_btf/task_newtask")
-__failure __msg("NULL pointer passed to trusted arg0")
+__failure __msg("NULL pointer passed to trusted R1")
int BPF_PROG(test_cpumask_null, struct task_struct *task, u64 clone_flags)
{
/* NULL passed to kfunc. */
@@ -151,7 +151,7 @@ int BPF_PROG(test_global_mask_out_of_rcu, struct task_struct *task, u64 clone_fl
}
SEC("tp_btf/task_newtask")
-__failure __msg("NULL pointer passed to trusted arg1")
+__failure __msg("NULL pointer passed to trusted R2")
int BPF_PROG(test_global_mask_no_null_check, struct task_struct *task, u64 clone_flags)
{
struct bpf_cpumask *local, *prev;
@@ -179,7 +179,7 @@ int BPF_PROG(test_global_mask_no_null_check, struct task_struct *task, u64 clone
}
SEC("tp_btf/task_newtask")
-__failure __msg("Possibly NULL pointer passed to helper arg2")
+__failure __msg("Possibly NULL pointer passed to helper R2")
int BPF_PROG(test_global_mask_rcu_no_null_check, struct task_struct *task, u64 clone_flags)
{
struct bpf_cpumask *prev, *curr;
diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c
index b62773ce5219..dbd97add5a5a 100644
--- a/tools/testing/selftests/bpf/progs/dynptr_fail.c
+++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c
@@ -149,7 +149,7 @@ int ringbuf_release_uninit_dynptr(void *ctx)
/* A dynptr can't be used after it has been invalidated */
SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #2")
+__failure __msg("Expected an initialized dynptr as R3")
int use_after_invalid(void *ctx)
{
struct bpf_dynptr ptr;
@@ -448,7 +448,7 @@ int invalid_helper2(void *ctx)
/* A bpf_dynptr is invalidated if it's been written into */
SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #0")
+__failure __msg("Expected an initialized dynptr as R1")
int invalid_write1(void *ctx)
{
struct bpf_dynptr ptr;
@@ -1642,7 +1642,7 @@ int invalid_slice_rdwr_rdonly(struct __sk_buff *skb)
/* bpf_dynptr_adjust can only be called on initialized dynptrs */
SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #0")
+__failure __msg("Expected an initialized dynptr as R1")
int dynptr_adjust_invalid(void *ctx)
{
struct bpf_dynptr ptr = {};
@@ -1655,7 +1655,7 @@ int dynptr_adjust_invalid(void *ctx)
/* bpf_dynptr_is_null can only be called on initialized dynptrs */
SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #0")
+__failure __msg("Expected an initialized dynptr as R1")
int dynptr_is_null_invalid(void *ctx)
{
struct bpf_dynptr ptr = {};
@@ -1668,7 +1668,7 @@ int dynptr_is_null_invalid(void *ctx)
/* bpf_dynptr_is_rdonly can only be called on initialized dynptrs */
SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #0")
+__failure __msg("Expected an initialized dynptr as R1")
int dynptr_is_rdonly_invalid(void *ctx)
{
struct bpf_dynptr ptr = {};
@@ -1681,7 +1681,7 @@ int dynptr_is_rdonly_invalid(void *ctx)
/* bpf_dynptr_size can only be called on initialized dynptrs */
SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #0")
+__failure __msg("Expected an initialized dynptr as R1")
int dynptr_size_invalid(void *ctx)
{
struct bpf_dynptr ptr = {};
@@ -1694,7 +1694,7 @@ int dynptr_size_invalid(void *ctx)
/* Only initialized dynptrs can be cloned */
SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #0")
+__failure __msg("Expected an initialized dynptr as R1")
int clone_invalid1(void *ctx)
{
struct bpf_dynptr ptr1 = {};
@@ -1728,7 +1728,7 @@ int clone_invalid2(struct xdp_md *xdp)
/* Invalidating a dynptr should invalidate its clones */
SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #2")
+__failure __msg("Expected an initialized dynptr as R3")
int clone_invalidate1(void *ctx)
{
struct bpf_dynptr clone;
@@ -1749,7 +1749,7 @@ int clone_invalidate1(void *ctx)
/* Invalidating a dynptr should invalidate its parent */
SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #2")
+__failure __msg("Expected an initialized dynptr as R3")
int clone_invalidate2(void *ctx)
{
struct bpf_dynptr ptr;
@@ -1770,7 +1770,7 @@ int clone_invalidate2(void *ctx)
/* Invalidating a dynptr should invalidate its siblings */
SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #2")
+__failure __msg("Expected an initialized dynptr as R3")
int clone_invalidate3(void *ctx)
{
struct bpf_dynptr ptr;
@@ -1981,7 +1981,7 @@ __noinline long global_call_bpf_dynptr(const struct bpf_dynptr *dynptr)
}
SEC("?raw_tp")
-__failure __msg("arg#0 expected pointer to stack or const struct bpf_dynptr")
+__failure __msg("R1 expected pointer to stack or const struct bpf_dynptr")
int test_dynptr_reg_type(void *ctx)
{
struct task_struct *current = NULL;
diff --git a/tools/testing/selftests/bpf/progs/file_reader_fail.c b/tools/testing/selftests/bpf/progs/file_reader_fail.c
index 32fe28ed2439..0739620dea8a 100644
--- a/tools/testing/selftests/bpf/progs/file_reader_fail.c
+++ b/tools/testing/selftests/bpf/progs/file_reader_fail.c
@@ -30,7 +30,7 @@ int on_nanosleep_unreleased_ref(void *ctx)
SEC("xdp")
__failure
-__msg("Expected a dynptr of type file as arg #0")
+__msg("Expected a dynptr of type file as R1")
int xdp_wrong_dynptr_type(struct xdp_md *xdp)
{
struct bpf_dynptr dynptr;
@@ -42,7 +42,7 @@ int xdp_wrong_dynptr_type(struct xdp_md *xdp)
SEC("xdp")
__failure
-__msg("Expected an initialized dynptr as arg #0")
+__msg("Expected an initialized dynptr as R1")
int xdp_no_dynptr_type(struct xdp_md *xdp)
{
struct bpf_dynptr dynptr;
diff --git a/tools/testing/selftests/bpf/progs/irq.c b/tools/testing/selftests/bpf/progs/irq.c
index e11e82d98904..a4a007866a33 100644
--- a/tools/testing/selftests/bpf/progs/irq.c
+++ b/tools/testing/selftests/bpf/progs/irq.c
@@ -15,7 +15,7 @@ struct bpf_res_spin_lock lockA __hidden SEC(".data.A");
struct bpf_res_spin_lock lockB __hidden SEC(".data.B");
SEC("?tc")
-__failure __msg("arg#0 doesn't point to an irq flag on stack")
+__failure __msg("R1 doesn't point to an irq flag on stack")
int irq_save_bad_arg(struct __sk_buff *ctx)
{
bpf_local_irq_save(&global_flags);
@@ -23,7 +23,7 @@ int irq_save_bad_arg(struct __sk_buff *ctx)
}
SEC("?tc")
-__failure __msg("arg#0 doesn't point to an irq flag on stack")
+__failure __msg("R1 doesn't point to an irq flag on stack")
int irq_restore_bad_arg(struct __sk_buff *ctx)
{
bpf_local_irq_restore(&global_flags);
diff --git a/tools/testing/selftests/bpf/progs/iters.c b/tools/testing/selftests/bpf/progs/iters.c
index 86b74e3579d9..0fa70b133d93 100644
--- a/tools/testing/selftests/bpf/progs/iters.c
+++ b/tools/testing/selftests/bpf/progs/iters.c
@@ -1605,7 +1605,7 @@ int iter_subprog_check_stacksafe(const void *ctx)
struct bpf_iter_num global_it;
SEC("raw_tp")
-__failure __msg("arg#0 expected pointer to an iterator on stack")
+__failure __msg("R1 expected pointer to an iterator on stack")
int iter_new_bad_arg(const void *ctx)
{
bpf_iter_num_new(&global_it, 0, 1);
@@ -1613,7 +1613,7 @@ int iter_new_bad_arg(const void *ctx)
}
SEC("raw_tp")
-__failure __msg("arg#0 expected pointer to an iterator on stack")
+__failure __msg("R1 expected pointer to an iterator on stack")
int iter_next_bad_arg(const void *ctx)
{
bpf_iter_num_next(&global_it);
@@ -1621,7 +1621,7 @@ int iter_next_bad_arg(const void *ctx)
}
SEC("raw_tp")
-__failure __msg("arg#0 expected pointer to an iterator on stack")
+__failure __msg("R1 expected pointer to an iterator on stack")
int iter_destroy_bad_arg(const void *ctx)
{
bpf_iter_num_destroy(&global_it);
diff --git a/tools/testing/selftests/bpf/progs/iters_state_safety.c b/tools/testing/selftests/bpf/progs/iters_state_safety.c
index d273b46dfc7c..af8f9ec1ea98 100644
--- a/tools/testing/selftests/bpf/progs/iters_state_safety.c
+++ b/tools/testing/selftests/bpf/progs/iters_state_safety.c
@@ -73,7 +73,7 @@ int create_and_forget_to_destroy_fail(void *ctx)
}
SEC("?raw_tp")
-__failure __msg("expected an initialized iter_num as arg #0")
+__failure __msg("expected an initialized iter_num as R1")
int destroy_without_creating_fail(void *ctx)
{
/* init with zeros to stop verifier complaining about uninit stack */
@@ -91,7 +91,7 @@ int destroy_without_creating_fail(void *ctx)
}
SEC("?raw_tp")
-__failure __msg("expected an initialized iter_num as arg #0")
+__failure __msg("expected an initialized iter_num as R1")
int compromise_iter_w_direct_write_fail(void *ctx)
{
struct bpf_iter_num iter;
@@ -143,7 +143,7 @@ int compromise_iter_w_direct_write_and_skip_destroy_fail(void *ctx)
}
SEC("?raw_tp")
-__failure __msg("expected an initialized iter_num as arg #0")
+__failure __msg("expected an initialized iter_num as R1")
int compromise_iter_w_helper_write_fail(void *ctx)
{
struct bpf_iter_num iter;
@@ -230,7 +230,7 @@ int valid_stack_reuse(void *ctx)
}
SEC("?raw_tp")
-__failure __msg("expected uninitialized iter_num as arg #0")
+__failure __msg("expected uninitialized iter_num as R1")
int double_create_fail(void *ctx)
{
struct bpf_iter_num iter;
@@ -258,7 +258,7 @@ int double_create_fail(void *ctx)
}
SEC("?raw_tp")
-__failure __msg("expected an initialized iter_num as arg #0")
+__failure __msg("expected an initialized iter_num as R1")
int double_destroy_fail(void *ctx)
{
struct bpf_iter_num iter;
@@ -284,7 +284,7 @@ int double_destroy_fail(void *ctx)
}
SEC("?raw_tp")
-__failure __msg("expected an initialized iter_num as arg #0")
+__failure __msg("expected an initialized iter_num as R1")
int next_without_new_fail(void *ctx)
{
struct bpf_iter_num iter;
@@ -305,7 +305,7 @@ int next_without_new_fail(void *ctx)
}
SEC("?raw_tp")
-__failure __msg("expected an initialized iter_num as arg #0")
+__failure __msg("expected an initialized iter_num as R1")
int next_after_destroy_fail(void *ctx)
{
struct bpf_iter_num iter;
diff --git a/tools/testing/selftests/bpf/progs/iters_testmod.c b/tools/testing/selftests/bpf/progs/iters_testmod.c
index 5379e9960ffd..76012dbbdb41 100644
--- a/tools/testing/selftests/bpf/progs/iters_testmod.c
+++ b/tools/testing/selftests/bpf/progs/iters_testmod.c
@@ -29,7 +29,7 @@ int iter_next_trusted(const void *ctx)
}
SEC("raw_tp/sys_enter")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int iter_next_trusted_or_null(const void *ctx)
{
struct task_struct *cur_task = bpf_get_current_task_btf();
@@ -67,7 +67,7 @@ int iter_next_rcu(const void *ctx)
}
SEC("raw_tp/sys_enter")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int iter_next_rcu_or_null(const void *ctx)
{
struct task_struct *cur_task = bpf_get_current_task_btf();
diff --git a/tools/testing/selftests/bpf/progs/iters_testmod_seq.c b/tools/testing/selftests/bpf/progs/iters_testmod_seq.c
index 83791348bed5..9b760dac333e 100644
--- a/tools/testing/selftests/bpf/progs/iters_testmod_seq.c
+++ b/tools/testing/selftests/bpf/progs/iters_testmod_seq.c
@@ -79,7 +79,7 @@ int testmod_seq_truncated(const void *ctx)
SEC("?raw_tp")
__failure
-__msg("expected an initialized iter_testmod_seq as arg #1")
+__msg("expected an initialized iter_testmod_seq as R2")
int testmod_seq_getter_before_bad(const void *ctx)
{
struct bpf_iter_testmod_seq it;
@@ -89,7 +89,7 @@ int testmod_seq_getter_before_bad(const void *ctx)
SEC("?raw_tp")
__failure
-__msg("expected an initialized iter_testmod_seq as arg #1")
+__msg("expected an initialized iter_testmod_seq as R2")
int testmod_seq_getter_after_bad(const void *ctx)
{
struct bpf_iter_testmod_seq it;
diff --git a/tools/testing/selftests/bpf/progs/map_kptr_fail.c b/tools/testing/selftests/bpf/progs/map_kptr_fail.c
index ee053b24e6ca..8f36e74fd8f9 100644
--- a/tools/testing/selftests/bpf/progs/map_kptr_fail.c
+++ b/tools/testing/selftests/bpf/progs/map_kptr_fail.c
@@ -364,7 +364,7 @@ int kptr_xchg_ref_state(struct __sk_buff *ctx)
}
SEC("?tc")
-__failure __msg("Possibly NULL pointer passed to helper arg2")
+__failure __msg("Possibly NULL pointer passed to helper R2")
int kptr_xchg_possibly_null(struct __sk_buff *ctx)
{
struct prog_test_ref_kfunc *p;
diff --git a/tools/testing/selftests/bpf/progs/percpu_alloc_fail.c b/tools/testing/selftests/bpf/progs/percpu_alloc_fail.c
index 81813c724fa9..08379c3b6a03 100644
--- a/tools/testing/selftests/bpf/progs/percpu_alloc_fail.c
+++ b/tools/testing/selftests/bpf/progs/percpu_alloc_fail.c
@@ -110,7 +110,7 @@ int BPF_PROG(test_array_map_3)
}
SEC("?fentry.s/bpf_fentry_test1")
-__failure __msg("arg#0 expected for bpf_percpu_obj_drop()")
+__failure __msg("R1 expected for bpf_percpu_obj_drop()")
int BPF_PROG(test_array_map_4)
{
struct val_t __percpu_kptr *p;
@@ -124,7 +124,7 @@ int BPF_PROG(test_array_map_4)
}
SEC("?fentry.s/bpf_fentry_test1")
-__failure __msg("arg#0 expected for bpf_obj_drop()")
+__failure __msg("R1 expected for bpf_obj_drop()")
int BPF_PROG(test_array_map_5)
{
struct val_t *p;
diff --git a/tools/testing/selftests/bpf/progs/rbtree_fail.c b/tools/testing/selftests/bpf/progs/rbtree_fail.c
index 70b7baf9304b..555379952dcc 100644
--- a/tools/testing/selftests/bpf/progs/rbtree_fail.c
+++ b/tools/testing/selftests/bpf/progs/rbtree_fail.c
@@ -134,7 +134,7 @@ long rbtree_api_remove_no_drop(void *ctx)
}
SEC("?tc")
-__failure __msg("arg#1 expected pointer to allocated object")
+__failure __msg("R2 expected pointer to allocated object")
long rbtree_api_add_to_multiple_trees(void *ctx)
{
struct node_data *n;
@@ -153,7 +153,7 @@ long rbtree_api_add_to_multiple_trees(void *ctx)
}
SEC("?tc")
-__failure __msg("Possibly NULL pointer passed to trusted arg1")
+__failure __msg("Possibly NULL pointer passed to trusted R2")
long rbtree_api_use_unchecked_remove_retval(void *ctx)
{
struct bpf_rb_node *res;
@@ -281,7 +281,7 @@ long add_with_cb(bool (cb)(struct bpf_rb_node *a, const struct bpf_rb_node *b))
}
SEC("?tc")
-__failure __msg("arg#1 expected pointer to allocated object")
+__failure __msg("R2 expected pointer to allocated object")
long rbtree_api_add_bad_cb_bad_fn_call_add(void *ctx)
{
return add_with_cb(less__bad_fn_call_add);
diff --git a/tools/testing/selftests/bpf/progs/refcounted_kptr_fail.c b/tools/testing/selftests/bpf/progs/refcounted_kptr_fail.c
index b2808bfcec29..7247a20c0a3b 100644
--- a/tools/testing/selftests/bpf/progs/refcounted_kptr_fail.c
+++ b/tools/testing/selftests/bpf/progs/refcounted_kptr_fail.c
@@ -54,7 +54,7 @@ long rbtree_refcounted_node_ref_escapes(void *ctx)
}
SEC("?tc")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
long refcount_acquire_maybe_null(void *ctx)
{
struct node_acquire *n, *m;
diff --git a/tools/testing/selftests/bpf/progs/stream_fail.c b/tools/testing/selftests/bpf/progs/stream_fail.c
index 8e8249f3521c..21428bb1ee59 100644
--- a/tools/testing/selftests/bpf/progs/stream_fail.c
+++ b/tools/testing/selftests/bpf/progs/stream_fail.c
@@ -23,7 +23,7 @@ int stream_vprintk_scalar_arg(void *ctx)
}
SEC("syscall")
-__failure __msg("arg#1 doesn't point to a const string")
+__failure __msg("R2 doesn't point to a const string")
int stream_vprintk_string_arg(void *ctx)
{
bpf_stream_vprintk(BPF_STDOUT, ctx, NULL, 0);
diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_failure.c b/tools/testing/selftests/bpf/progs/task_kfunc_failure.c
index 4c07ea193f72..41047d81ec42 100644
--- a/tools/testing/selftests/bpf/progs/task_kfunc_failure.c
+++ b/tools/testing/selftests/bpf/progs/task_kfunc_failure.c
@@ -28,7 +28,7 @@ static struct __tasks_kfunc_map_value *insert_lookup_task(struct task_struct *ta
}
SEC("tp_btf/task_newtask")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(task_kfunc_acquire_untrusted, struct task_struct *task, u64 clone_flags)
{
struct task_struct *acquired;
@@ -49,7 +49,7 @@ int BPF_PROG(task_kfunc_acquire_untrusted, struct task_struct *task, u64 clone_f
}
SEC("tp_btf/task_newtask")
-__failure __msg("arg#0 pointer type STRUCT task_struct must point")
+__failure __msg("R1 pointer type STRUCT task_struct must point")
int BPF_PROG(task_kfunc_acquire_fp, struct task_struct *task, u64 clone_flags)
{
struct task_struct *acquired, *stack_task = (struct task_struct *)&clone_flags;
@@ -100,7 +100,7 @@ int BPF_PROG(task_kfunc_acquire_unsafe_kretprobe_rcu, struct task_struct *task,
}
SEC("tp_btf/task_newtask")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(task_kfunc_acquire_null, struct task_struct *task, u64 clone_flags)
{
struct task_struct *acquired;
@@ -149,7 +149,7 @@ int BPF_PROG(task_kfunc_xchg_unreleased, struct task_struct *task, u64 clone_fla
}
SEC("tp_btf/task_newtask")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(task_kfunc_acquire_release_no_null_check, struct task_struct *task, u64 clone_flags)
{
struct task_struct *acquired;
@@ -162,7 +162,7 @@ int BPF_PROG(task_kfunc_acquire_release_no_null_check, struct task_struct *task,
}
SEC("tp_btf/task_newtask")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(task_kfunc_release_untrusted, struct task_struct *task, u64 clone_flags)
{
struct __tasks_kfunc_map_value *v;
@@ -178,7 +178,7 @@ int BPF_PROG(task_kfunc_release_untrusted, struct task_struct *task, u64 clone_f
}
SEC("tp_btf/task_newtask")
-__failure __msg("arg#0 pointer type STRUCT task_struct must point")
+__failure __msg("R1 pointer type STRUCT task_struct must point")
int BPF_PROG(task_kfunc_release_fp, struct task_struct *task, u64 clone_flags)
{
struct task_struct *acquired = (struct task_struct *)&clone_flags;
@@ -190,7 +190,7 @@ int BPF_PROG(task_kfunc_release_fp, struct task_struct *task, u64 clone_flags)
}
SEC("tp_btf/task_newtask")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(task_kfunc_release_null, struct task_struct *task, u64 clone_flags)
{
struct __tasks_kfunc_map_value local, *v;
@@ -234,7 +234,7 @@ int BPF_PROG(task_kfunc_release_unacquired, struct task_struct *task, u64 clone_
}
SEC("tp_btf/task_newtask")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(task_kfunc_from_pid_no_null_check, struct task_struct *task, u64 clone_flags)
{
struct task_struct *acquired;
@@ -248,7 +248,7 @@ int BPF_PROG(task_kfunc_from_pid_no_null_check, struct task_struct *task, u64 cl
}
SEC("tp_btf/task_newtask")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(task_kfunc_from_vpid_no_null_check, struct task_struct *task, u64 clone_flags)
{
struct task_struct *acquired;
diff --git a/tools/testing/selftests/bpf/progs/task_work_fail.c b/tools/testing/selftests/bpf/progs/task_work_fail.c
index 82e4b8913333..3186e7b4b24e 100644
--- a/tools/testing/selftests/bpf/progs/task_work_fail.c
+++ b/tools/testing/selftests/bpf/progs/task_work_fail.c
@@ -58,7 +58,7 @@ int mismatch_map(struct pt_regs *args)
}
SEC("perf_event")
-__failure __msg("arg#1 doesn't point to a map value")
+__failure __msg("R2 doesn't point to a map value")
int no_map_task_work(struct pt_regs *args)
{
struct task_struct *task;
@@ -70,7 +70,7 @@ int no_map_task_work(struct pt_regs *args)
}
SEC("perf_event")
-__failure __msg("Possibly NULL pointer passed to trusted arg1")
+__failure __msg("Possibly NULL pointer passed to trusted R2")
int task_work_null(struct pt_regs *args)
{
struct task_struct *task;
@@ -81,7 +81,7 @@ int task_work_null(struct pt_regs *args)
}
SEC("perf_event")
-__failure __msg("Possibly NULL pointer passed to trusted arg2")
+__failure __msg("Possibly NULL pointer passed to trusted R3")
int map_null(struct pt_regs *args)
{
struct elem *work;
diff --git a/tools/testing/selftests/bpf/progs/test_bpf_nf_fail.c b/tools/testing/selftests/bpf/progs/test_bpf_nf_fail.c
index 2c156cd166af..332cda89caba 100644
--- a/tools/testing/selftests/bpf/progs/test_bpf_nf_fail.c
+++ b/tools/testing/selftests/bpf/progs/test_bpf_nf_fail.c
@@ -152,7 +152,7 @@ int change_status_after_alloc(struct __sk_buff *ctx)
}
SEC("?tc")
-__failure __msg("Possibly NULL pointer passed to trusted arg1")
+__failure __msg("Possibly NULL pointer passed to trusted R2")
int lookup_null_bpf_tuple(struct __sk_buff *ctx)
{
struct bpf_ct_opts___local opts = {};
@@ -165,7 +165,7 @@ int lookup_null_bpf_tuple(struct __sk_buff *ctx)
}
SEC("?tc")
-__failure __msg("Possibly NULL pointer passed to trusted arg3")
+__failure __msg("Possibly NULL pointer passed to trusted R4")
int lookup_null_bpf_opts(struct __sk_buff *ctx)
{
struct bpf_sock_tuple tup = {};
@@ -178,7 +178,7 @@ int lookup_null_bpf_opts(struct __sk_buff *ctx)
}
SEC("?xdp")
-__failure __msg("Possibly NULL pointer passed to trusted arg1")
+__failure __msg("Possibly NULL pointer passed to trusted R2")
int xdp_lookup_null_bpf_tuple(struct xdp_md *ctx)
{
struct bpf_ct_opts___local opts = {};
@@ -191,7 +191,7 @@ int xdp_lookup_null_bpf_tuple(struct xdp_md *ctx)
}
SEC("?xdp")
-__failure __msg("Possibly NULL pointer passed to trusted arg3")
+__failure __msg("Possibly NULL pointer passed to trusted R4")
int xdp_lookup_null_bpf_opts(struct xdp_md *ctx)
{
struct bpf_sock_tuple tup = {};
diff --git a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
index 1c6cfd0888ba..bf48fc43c7ab 100644
--- a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
+++ b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
@@ -40,7 +40,7 @@ int BPF_PROG(not_valid_dynptr, int cmd, union bpf_attr *attr, unsigned int size,
}
SEC("?lsm.s/bpf")
-__failure __msg("arg#0 expected pointer to stack or const struct bpf_dynptr")
+__failure __msg("R1 expected pointer to stack or const struct bpf_dynptr")
int BPF_PROG(not_ptr_to_stack, int cmd, union bpf_attr *attr, unsigned int size, bool kernel)
{
static struct bpf_dynptr val;
diff --git a/tools/testing/selftests/bpf/progs/test_kfunc_param_nullable.c b/tools/testing/selftests/bpf/progs/test_kfunc_param_nullable.c
index 967081bbcfe1..ca35b92ea095 100644
--- a/tools/testing/selftests/bpf/progs/test_kfunc_param_nullable.c
+++ b/tools/testing/selftests/bpf/progs/test_kfunc_param_nullable.c
@@ -29,7 +29,7 @@ int kfunc_dynptr_nullable_test2(struct __sk_buff *skb)
}
SEC("tc")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int kfunc_dynptr_nullable_test3(struct __sk_buff *skb)
{
struct bpf_dynptr data;
diff --git a/tools/testing/selftests/bpf/progs/verifier_bits_iter.c b/tools/testing/selftests/bpf/progs/verifier_bits_iter.c
index 8bcddadfc4da..dd97f2027505 100644
--- a/tools/testing/selftests/bpf/progs/verifier_bits_iter.c
+++ b/tools/testing/selftests/bpf/progs/verifier_bits_iter.c
@@ -32,7 +32,7 @@ int BPF_PROG(no_destroy, struct bpf_iter_meta *meta, struct cgroup *cgrp)
SEC("iter/cgroup")
__description("uninitialized iter in ->next()")
-__failure __msg("expected an initialized iter_bits as arg #0")
+__failure __msg("expected an initialized iter_bits as R1")
int BPF_PROG(next_uninit, struct bpf_iter_meta *meta, struct cgroup *cgrp)
{
struct bpf_iter_bits it = {};
@@ -43,7 +43,7 @@ int BPF_PROG(next_uninit, struct bpf_iter_meta *meta, struct cgroup *cgrp)
SEC("iter/cgroup")
__description("uninitialized iter in ->destroy()")
-__failure __msg("expected an initialized iter_bits as arg #0")
+__failure __msg("expected an initialized iter_bits as R1")
int BPF_PROG(destroy_uninit, struct bpf_iter_meta *meta, struct cgroup *cgrp)
{
struct bpf_iter_bits it = {};
diff --git a/tools/testing/selftests/bpf/progs/verifier_ref_tracking.c b/tools/testing/selftests/bpf/progs/verifier_ref_tracking.c
index 910365201f68..139f70bb3595 100644
--- a/tools/testing/selftests/bpf/progs/verifier_ref_tracking.c
+++ b/tools/testing/selftests/bpf/progs/verifier_ref_tracking.c
@@ -263,7 +263,7 @@ l0_%=: r0 = 0; \
SEC("lsm.s/bpf")
__description("reference tracking: release user key reference without check")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
__naked void user_key_reference_without_check(void)
{
asm volatile (" \
@@ -282,7 +282,7 @@ __naked void user_key_reference_without_check(void)
SEC("lsm.s/bpf")
__description("reference tracking: release system key reference without check")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
__naked void system_key_reference_without_check(void)
{
asm volatile (" \
@@ -300,7 +300,7 @@ __naked void system_key_reference_without_check(void)
SEC("lsm.s/bpf")
__description("reference tracking: release with NULL key pointer")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
__naked void release_with_null_key_pointer(void)
{
asm volatile (" \
diff --git a/tools/testing/selftests/bpf/progs/verifier_vfs_reject.c b/tools/testing/selftests/bpf/progs/verifier_vfs_reject.c
index 4b392c6c8fc4..0990de076844 100644
--- a/tools/testing/selftests/bpf/progs/verifier_vfs_reject.c
+++ b/tools/testing/selftests/bpf/progs/verifier_vfs_reject.c
@@ -13,7 +13,7 @@
static char buf[PATH_MAX];
SEC("lsm.s/file_open")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(get_task_exe_file_kfunc_null)
{
struct file *acquired;
@@ -28,7 +28,7 @@ int BPF_PROG(get_task_exe_file_kfunc_null)
}
SEC("lsm.s/inode_getxattr")
-__failure __msg("arg#0 pointer type STRUCT task_struct must point to scalar, or struct with scalar")
+__failure __msg("R1 pointer type STRUCT task_struct must point to scalar, or struct with scalar")
int BPF_PROG(get_task_exe_file_kfunc_fp)
{
u64 x;
@@ -89,7 +89,7 @@ int BPF_PROG(put_file_kfunc_unacquired, struct file *file)
}
SEC("lsm.s/file_open")
-__failure __msg("Possibly NULL pointer passed to trusted arg0")
+__failure __msg("Possibly NULL pointer passed to trusted R1")
int BPF_PROG(path_d_path_kfunc_null)
{
/* Can't pass NULL value to bpf_path_d_path() kfunc. */
@@ -128,7 +128,7 @@ int BPF_PROG(path_d_path_kfunc_untrusted_from_current)
}
SEC("lsm.s/file_open")
-__failure __msg("kernel function bpf_path_d_path args#0 expected pointer to STRUCT path but R1 has a pointer to STRUCT file")
+__failure __msg("kernel function bpf_path_d_path R1 expected pointer to STRUCT path but R1 has a pointer to STRUCT file")
int BPF_PROG(path_d_path_kfunc_type_mismatch, struct file *file)
{
bpf_path_d_path((struct path *)&file->f_task_work, buf, sizeof(buf));
diff --git a/tools/testing/selftests/bpf/progs/wq_failures.c b/tools/testing/selftests/bpf/progs/wq_failures.c
index 3767f5595bbc..32dc8827e128 100644
--- a/tools/testing/selftests/bpf/progs/wq_failures.c
+++ b/tools/testing/selftests/bpf/progs/wq_failures.c
@@ -98,7 +98,7 @@ __failure
* is a correct bpf_wq pointer.
*/
__msg(": (85) call bpf_wq_set_callback#") /* anchor message */
-__msg("arg#0 doesn't point to a map value")
+__msg("R1 doesn't point to a map value")
long test_wrong_wq_pointer(void *ctx)
{
int key = 0;
diff --git a/tools/testing/selftests/bpf/verifier/calls.c b/tools/testing/selftests/bpf/verifier/calls.c
index c3164b9b2be5..0bb4337552c8 100644
--- a/tools/testing/selftests/bpf/verifier/calls.c
+++ b/tools/testing/selftests/bpf/verifier/calls.c
@@ -31,7 +31,7 @@
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = REJECT,
- .errstr = "arg#0 pointer type STRUCT prog_test_fail1 must point to scalar",
+ .errstr = "R1 pointer type STRUCT prog_test_fail1 must point to scalar",
.fixup_kfunc_btf_id = {
{ "bpf_kfunc_call_test_fail1", 2 },
},
@@ -46,7 +46,7 @@
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = REJECT,
- .errstr = "max struct nesting depth exceeded\narg#0 pointer type STRUCT prog_test_fail2",
+ .errstr = "max struct nesting depth exceeded\nR1 pointer type STRUCT prog_test_fail2",
.fixup_kfunc_btf_id = {
{ "bpf_kfunc_call_test_fail2", 2 },
},
@@ -61,7 +61,7 @@
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = REJECT,
- .errstr = "arg#0 pointer type STRUCT prog_test_fail3 must point to scalar",
+ .errstr = "R1 pointer type STRUCT prog_test_fail3 must point to scalar",
.fixup_kfunc_btf_id = {
{ "bpf_kfunc_call_test_fail3", 2 },
},
@@ -76,7 +76,7 @@
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = REJECT,
- .errstr = "arg#0 expected pointer to ctx, but got fp",
+ .errstr = "R1 expected pointer to ctx, but got fp",
.fixup_kfunc_btf_id = {
{ "bpf_kfunc_call_test_pass_ctx", 2 },
},
@@ -91,7 +91,7 @@
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = REJECT,
- .errstr = "arg#0 pointer type UNKNOWN must point to scalar",
+ .errstr = "R1 pointer type UNKNOWN must point to scalar",
.fixup_kfunc_btf_id = {
{ "bpf_kfunc_call_test_mem_len_fail1", 2 },
},
@@ -109,7 +109,7 @@
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = REJECT,
- .errstr = "Possibly NULL pointer passed to trusted arg0",
+ .errstr = "Possibly NULL pointer passed to trusted R1",
.fixup_kfunc_btf_id = {
{ "bpf_kfunc_call_test_acquire", 3 },
{ "bpf_kfunc_call_test_release", 5 },
@@ -152,7 +152,7 @@
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = REJECT,
- .errstr = "kernel function bpf_kfunc_call_memb1_release args#0 expected pointer",
+ .errstr = "kernel function bpf_kfunc_call_memb1_release R1 expected pointer",
.fixup_kfunc_btf_id = {
{ "bpf_kfunc_call_memb_acquire", 1 },
{ "bpf_kfunc_call_memb1_release", 5 },
--
2.52.0
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH bpf-next 8/9] bpf: Introduce bpf register BPF_REG_PARAMS
2026-04-21 17:19 [PATCH bpf-next 0/9] bpf: Prepare to support stack arguments Yonghong Song
` (6 preceding siblings ...)
2026-04-21 17:20 ` [PATCH bpf-next 7/9] bpf: Prepare verifier logs for upcoming kfunc stack arguments Yonghong Song
@ 2026-04-21 17:20 ` Yonghong Song
2026-04-21 22:10 ` Alexei Starovoitov
2026-04-21 17:20 ` [PATCH bpf-next 9/9] bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments Yonghong Song
2026-04-21 19:13 ` [PATCH bpf-next 0/9] bpf: Prepare to support stack arguments Kumar Kartikeya Dwivedi
9 siblings, 1 reply; 25+ messages in thread
From: Yonghong Song @ 2026-04-21 17:20 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau, Puranjay Mohan
Introduce BPF_REG_PARAMS as a dedicated BPF register for stack
argument accesses. It occupies BPF register number 11 (R11) and
serves as the base pointer for the stack argument area, keeping it
separate from the R10-based (BPF_REG_FP) program stack.
The kernel-internal hidden register BPF_REG_AX previously occupied
slot 11 (MAX_BPF_REG). With BPF_REG_PARAMS taking that slot,
BPF_REG_AX moves to slot 12 and MAX_BPF_EXT_REG increases
accordingly.
Acked-by: Puranjay Mohan <puranjay@kernel.org>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
include/linux/filter.h | 5 +-
kernel/bpf/core.c | 4 +-
.../selftests/bpf/prog_tests/ctx_rewrite.c | 14 ++--
.../bpf/progs/verifier_bpf_fastcall.c | 24 +++----
.../selftests/bpf/progs/verifier_may_goto_1.c | 12 ++--
.../selftests/bpf/progs/verifier_sdiv.c | 64 +++++++++----------
6 files changed, 62 insertions(+), 61 deletions(-)
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 1ec6d5ba64cc..b77d0b06db6e 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -58,8 +58,9 @@ struct ctl_table_header;
#define BPF_REG_H BPF_REG_9 /* hlen, callee-saved */
/* Kernel hidden auxiliary/helper register. */
-#define BPF_REG_AX MAX_BPF_REG
-#define MAX_BPF_EXT_REG (MAX_BPF_REG + 1)
+#define BPF_REG_PARAMS MAX_BPF_REG
+#define BPF_REG_AX (MAX_BPF_REG + 1)
+#define MAX_BPF_EXT_REG (MAX_BPF_REG + 2)
#define MAX_BPF_JIT_REG MAX_BPF_EXT_REG
/* unused opcode to mark special call to bpf_tail_call() helper */
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 8b018ff48875..ae10b9ca018d 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1299,8 +1299,8 @@ static int bpf_jit_blind_insn(const struct bpf_insn *from,
u32 imm_rnd = get_random_u32();
s16 off;
- BUILD_BUG_ON(BPF_REG_AX + 1 != MAX_BPF_JIT_REG);
- BUILD_BUG_ON(MAX_BPF_REG + 1 != MAX_BPF_JIT_REG);
+ BUILD_BUG_ON(BPF_REG_PARAMS + 2 != MAX_BPF_JIT_REG);
+ BUILD_BUG_ON(BPF_REG_AX + 1 != MAX_BPF_JIT_REG);
/* Constraints on AX register:
*
diff --git a/tools/testing/selftests/bpf/prog_tests/ctx_rewrite.c b/tools/testing/selftests/bpf/prog_tests/ctx_rewrite.c
index 5064aeb8fe67..2c3124092b73 100644
--- a/tools/testing/selftests/bpf/prog_tests/ctx_rewrite.c
+++ b/tools/testing/selftests/bpf/prog_tests/ctx_rewrite.c
@@ -69,19 +69,19 @@ static struct test_case test_cases[] = {
#if defined(__x86_64__) || defined(__aarch64__)
{
N(SCHED_CLS, struct __sk_buff, tstamp),
- .read = "r11 = *(u8 *)($ctx + sk_buff::__mono_tc_offset);"
- "if w11 & 0x4 goto pc+1;"
+ .read = "r12 = *(u8 *)($ctx + sk_buff::__mono_tc_offset);"
+ "if w12 & 0x4 goto pc+1;"
"goto pc+4;"
- "if w11 & 0x3 goto pc+1;"
+ "if w12 & 0x3 goto pc+1;"
"goto pc+2;"
"$dst = 0;"
"goto pc+1;"
"$dst = *(u64 *)($ctx + sk_buff::tstamp);",
- .write = "r11 = *(u8 *)($ctx + sk_buff::__mono_tc_offset);"
- "if w11 & 0x4 goto pc+1;"
+ .write = "r12 = *(u8 *)($ctx + sk_buff::__mono_tc_offset);"
+ "if w12 & 0x4 goto pc+1;"
"goto pc+2;"
- "w11 &= -4;"
- "*(u8 *)($ctx + sk_buff::__mono_tc_offset) = r11;"
+ "w12 &= -4;"
+ "*(u8 *)($ctx + sk_buff::__mono_tc_offset) = r12;"
"*(u64 *)($ctx + sk_buff::tstamp) = $src;",
},
#endif
diff --git a/tools/testing/selftests/bpf/progs/verifier_bpf_fastcall.c b/tools/testing/selftests/bpf/progs/verifier_bpf_fastcall.c
index fb4fa465d67c..0d9e167555b5 100644
--- a/tools/testing/selftests/bpf/progs/verifier_bpf_fastcall.c
+++ b/tools/testing/selftests/bpf/progs/verifier_bpf_fastcall.c
@@ -630,13 +630,13 @@ __xlated("...")
__xlated("4: r0 = &(void __percpu *)(r0)")
__xlated("...")
/* may_goto expansion starts */
-__xlated("6: r11 = *(u64 *)(r10 -24)")
-__xlated("7: if r11 == 0x0 goto pc+6")
-__xlated("8: r11 -= 1")
-__xlated("9: if r11 != 0x0 goto pc+2")
-__xlated("10: r11 = -24")
+__xlated("6: r12 = *(u64 *)(r10 -24)")
+__xlated("7: if r12 == 0x0 goto pc+6")
+__xlated("8: r12 -= 1")
+__xlated("9: if r12 != 0x0 goto pc+2")
+__xlated("10: r12 = -24")
__xlated("11: call unknown")
-__xlated("12: *(u64 *)(r10 -24) = r11")
+__xlated("12: *(u64 *)(r10 -24) = r12")
/* may_goto expansion ends */
__xlated("13: *(u64 *)(r10 -8) = r1")
__xlated("14: exit")
@@ -668,13 +668,13 @@ __xlated("1: *(u64 *)(r10 -16) =")
__xlated("2: r1 = 1")
__xlated("3: call bpf_get_smp_processor_id")
/* may_goto expansion starts */
-__xlated("4: r11 = *(u64 *)(r10 -24)")
-__xlated("5: if r11 == 0x0 goto pc+6")
-__xlated("6: r11 -= 1")
-__xlated("7: if r11 != 0x0 goto pc+2")
-__xlated("8: r11 = -24")
+__xlated("4: r12 = *(u64 *)(r10 -24)")
+__xlated("5: if r12 == 0x0 goto pc+6")
+__xlated("6: r12 -= 1")
+__xlated("7: if r12 != 0x0 goto pc+2")
+__xlated("8: r12 = -24")
__xlated("9: call unknown")
-__xlated("10: *(u64 *)(r10 -24) = r11")
+__xlated("10: *(u64 *)(r10 -24) = r12")
/* may_goto expansion ends */
__xlated("11: *(u64 *)(r10 -8) = r1")
__xlated("12: exit")
diff --git a/tools/testing/selftests/bpf/progs/verifier_may_goto_1.c b/tools/testing/selftests/bpf/progs/verifier_may_goto_1.c
index 6d1edaef9213..4bdf4256a41e 100644
--- a/tools/testing/selftests/bpf/progs/verifier_may_goto_1.c
+++ b/tools/testing/selftests/bpf/progs/verifier_may_goto_1.c
@@ -81,13 +81,13 @@ __arch_s390x
__arch_arm64
__xlated("0: *(u64 *)(r10 -16) = 65535")
__xlated("1: *(u64 *)(r10 -8) = 0")
-__xlated("2: r11 = *(u64 *)(r10 -16)")
-__xlated("3: if r11 == 0x0 goto pc+6")
-__xlated("4: r11 -= 1")
-__xlated("5: if r11 != 0x0 goto pc+2")
-__xlated("6: r11 = -16")
+__xlated("2: r12 = *(u64 *)(r10 -16)")
+__xlated("3: if r12 == 0x0 goto pc+6")
+__xlated("4: r12 -= 1")
+__xlated("5: if r12 != 0x0 goto pc+2")
+__xlated("6: r12 = -16")
__xlated("7: call unknown")
-__xlated("8: *(u64 *)(r10 -16) = r11")
+__xlated("8: *(u64 *)(r10 -16) = r12")
__xlated("9: r0 = 1")
__xlated("10: r0 = 2")
__xlated("11: exit")
diff --git a/tools/testing/selftests/bpf/progs/verifier_sdiv.c b/tools/testing/selftests/bpf/progs/verifier_sdiv.c
index fd59d57e8e37..95f3239ce228 100644
--- a/tools/testing/selftests/bpf/progs/verifier_sdiv.c
+++ b/tools/testing/selftests/bpf/progs/verifier_sdiv.c
@@ -778,10 +778,10 @@ __arch_x86_64
__xlated("0: r2 = 0x8000000000000000")
__xlated("2: r3 = -1")
__xlated("3: r4 = r2")
-__xlated("4: r11 = r3")
-__xlated("5: r11 += 1")
-__xlated("6: if r11 > 0x1 goto pc+4")
-__xlated("7: if r11 == 0x0 goto pc+1")
+__xlated("4: r12 = r3")
+__xlated("5: r12 += 1")
+__xlated("6: if r12 > 0x1 goto pc+4")
+__xlated("7: if r12 == 0x0 goto pc+1")
__xlated("8: r2 = 0")
__xlated("9: r2 = -r2")
__xlated("10: goto pc+1")
@@ -812,10 +812,10 @@ __success __retval(-5)
__arch_x86_64
__xlated("0: r2 = 5")
__xlated("1: r3 = -1")
-__xlated("2: r11 = r3")
-__xlated("3: r11 += 1")
-__xlated("4: if r11 > 0x1 goto pc+4")
-__xlated("5: if r11 == 0x0 goto pc+1")
+__xlated("2: r12 = r3")
+__xlated("3: r12 += 1")
+__xlated("4: if r12 > 0x1 goto pc+4")
+__xlated("5: if r12 == 0x0 goto pc+1")
__xlated("6: r2 = 0")
__xlated("7: r2 = -r2")
__xlated("8: goto pc+1")
@@ -890,10 +890,10 @@ __arch_x86_64
__xlated("0: w2 = -2147483648")
__xlated("1: w3 = -1")
__xlated("2: w4 = w2")
-__xlated("3: r11 = r3")
-__xlated("4: w11 += 1")
-__xlated("5: if w11 > 0x1 goto pc+4")
-__xlated("6: if w11 == 0x0 goto pc+1")
+__xlated("3: r12 = r3")
+__xlated("4: w12 += 1")
+__xlated("5: if w12 > 0x1 goto pc+4")
+__xlated("6: if w12 == 0x0 goto pc+1")
__xlated("7: w2 = 0")
__xlated("8: w2 = -w2")
__xlated("9: goto pc+1")
@@ -925,10 +925,10 @@ __arch_x86_64
__xlated("0: w2 = -5")
__xlated("1: w3 = -1")
__xlated("2: w4 = w2")
-__xlated("3: r11 = r3")
-__xlated("4: w11 += 1")
-__xlated("5: if w11 > 0x1 goto pc+4")
-__xlated("6: if w11 == 0x0 goto pc+1")
+__xlated("3: r12 = r3")
+__xlated("4: w12 += 1")
+__xlated("5: if w12 > 0x1 goto pc+4")
+__xlated("6: if w12 == 0x0 goto pc+1")
__xlated("7: w2 = 0")
__xlated("8: w2 = -w2")
__xlated("9: goto pc+1")
@@ -1004,10 +1004,10 @@ __arch_x86_64
__xlated("0: r2 = 0x8000000000000000")
__xlated("2: r3 = -1")
__xlated("3: r4 = r2")
-__xlated("4: r11 = r3")
-__xlated("5: r11 += 1")
-__xlated("6: if r11 > 0x1 goto pc+3")
-__xlated("7: if r11 == 0x1 goto pc+3")
+__xlated("4: r12 = r3")
+__xlated("5: r12 += 1")
+__xlated("6: if r12 > 0x1 goto pc+3")
+__xlated("7: if r12 == 0x1 goto pc+3")
__xlated("8: w2 = 0")
__xlated("9: goto pc+1")
__xlated("10: r2 s%= r3")
@@ -1034,10 +1034,10 @@ __arch_x86_64
__xlated("0: r2 = 5")
__xlated("1: r3 = -1")
__xlated("2: r4 = r2")
-__xlated("3: r11 = r3")
-__xlated("4: r11 += 1")
-__xlated("5: if r11 > 0x1 goto pc+3")
-__xlated("6: if r11 == 0x1 goto pc+3")
+__xlated("3: r12 = r3")
+__xlated("4: r12 += 1")
+__xlated("5: if r12 > 0x1 goto pc+3")
+__xlated("6: if r12 == 0x1 goto pc+3")
__xlated("7: w2 = 0")
__xlated("8: goto pc+1")
__xlated("9: r2 s%= r3")
@@ -1108,10 +1108,10 @@ __arch_x86_64
__xlated("0: w2 = -2147483648")
__xlated("1: w3 = -1")
__xlated("2: w4 = w2")
-__xlated("3: r11 = r3")
-__xlated("4: w11 += 1")
-__xlated("5: if w11 > 0x1 goto pc+3")
-__xlated("6: if w11 == 0x1 goto pc+4")
+__xlated("3: r12 = r3")
+__xlated("4: w12 += 1")
+__xlated("5: if w12 > 0x1 goto pc+3")
+__xlated("6: if w12 == 0x1 goto pc+4")
__xlated("7: w2 = 0")
__xlated("8: goto pc+1")
__xlated("9: w2 s%= w3")
@@ -1140,10 +1140,10 @@ __arch_x86_64
__xlated("0: w2 = -5")
__xlated("1: w3 = -1")
__xlated("2: w4 = w2")
-__xlated("3: r11 = r3")
-__xlated("4: w11 += 1")
-__xlated("5: if w11 > 0x1 goto pc+3")
-__xlated("6: if w11 == 0x1 goto pc+4")
+__xlated("3: r12 = r3")
+__xlated("4: w12 += 1")
+__xlated("5: if w12 > 0x1 goto pc+3")
+__xlated("6: if w12 == 0x1 goto pc+4")
__xlated("7: w2 = 0")
__xlated("8: goto pc+1")
__xlated("9: w2 s%= w3")
--
2.52.0
* [PATCH bpf-next 9/9] bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments
2026-04-21 17:19 [PATCH bpf-next 0/9] bpf: Prepare to support stack arguments Yonghong Song
` (7 preceding siblings ...)
2026-04-21 17:20 ` [PATCH bpf-next 8/9] bpf: Introduce bpf register BPF_REG_PARAMS Yonghong Song
@ 2026-04-21 17:20 ` Yonghong Song
2026-04-21 17:52 ` bot+bpf-ci
2026-04-21 19:13 ` [PATCH bpf-next 0/9] bpf: Prepare to support stack arguments Kumar Kartikeya Dwivedi
9 siblings, 1 reply; 25+ messages in thread
From: Yonghong Song @ 2026-04-21 17:20 UTC (permalink / raw)
To: bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau, Puranjay Mohan
Currently, MAX_BPF_FUNC_ARGS is used for tracepoint-related progs,
where the number of parameters cannot exceed MAX_BPF_FUNC_ARGS.
Here, MAX_BPF_FUNC_ARGS is reused to set a limit on the number of
arguments for bpf functions and kfunc's. The current value of
MAX_BPF_FUNC_ARGS is 12, which should be sufficient for the majority
of bpf functions and kfunc's.
Acked-by: Puranjay Mohan <puranjay@kernel.org>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
include/linux/bpf.h | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 3cb6b9e70080..c8e387b0ce46 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1151,6 +1151,11 @@ struct bpf_prog_offload {
/* The longest tracepoint has 12 args.
* See include/trace/bpf_probe.h
+ *
+ * Also reuse this macro for maximum number of arguments a BPF function
+ * or a kfunc can have. Args 1-5 are passed in registers, args 6-12 via
+ * stack arg slots. The JIT may map some stack arg slots to registers based
+ * on the native calling convention (e.g., arg 6 to R9 on x86-64).
*/
#define MAX_BPF_FUNC_ARGS 12
--
2.52.0
* Re: [PATCH bpf-next 9/9] bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments
2026-04-21 17:20 ` [PATCH bpf-next 9/9] bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments Yonghong Song
@ 2026-04-21 17:52 ` bot+bpf-ci
0 siblings, 0 replies; 25+ messages in thread
From: bot+bpf-ci @ 2026-04-21 17:52 UTC (permalink / raw)
To: yonghong.song, bpf
Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau,
puranjay, ast, andrii, daniel, martin.lau, eddyz87, yonghong.song,
clm, ihor.solodrai
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index 3cb6b9e70080..c8e387b0ce46 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1151,6 +1151,11 @@ struct bpf_prog_offload {
>
> +/* The longest tracepoint has 12 args.
> * See include/trace/bpf_probe.h
This isn't a bug, but the multi-line comment opening doesn't follow BPF
subsystem style. Per bpf.md, multi-line comments must have the opening
/* on its own line. The rule applies even if surrounding code uses the
old style.
This was raised by bot+bpf-ci@kernel.org at
https://lore.kernel.org/bpf/3bc734dfbeffc1c45e1c640be67af1824500aa1b8e04aee814e1942c1064cc96@mail.kernel.org/
> + *
> + * Also reuse this macro for maximum number of arguments a BPF function
> + * or a kfunc can have. Args 1-5 are passed in registers, args 6-12 via
Should this say 'kfuncs' instead of 'kfunc's'? The context is plural
(referring to multiple functions), not possessive. The same issue
appears twice in the commit message.
This was raised by bot+bpf-ci@kernel.org at
https://lore.kernel.org/bpf/3bc734dfbeffc1c45e1c640be67af1824500aa1b8e04aee814e1942c1064cc96@mail.kernel.org/
> + * stack arg slots. The JIT may map some stack arg slots to registers based
> + * on the native calling convention (e.g., arg 6 to R9 on x86-64).
> */
> #define MAX_BPF_FUNC_ARGS 12
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24736686700
* Re: [PATCH bpf-next 2/9] bpf: Fix tail_call_reachable leak
2026-04-21 17:19 ` [PATCH bpf-next 2/9] bpf: Fix tail_call_reachable leak Yonghong Song
@ 2026-04-21 18:06 ` bot+bpf-ci
2026-04-22 0:29 ` Yonghong Song
0 siblings, 1 reply; 25+ messages in thread
From: bot+bpf-ci @ 2026-04-21 18:06 UTC (permalink / raw)
To: yonghong.song, bpf
Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast,
andrii, daniel, martin.lau, eddyz87, yonghong.song, clm,
ihor.solodrai
commit 697f215251be114235dcf2ec469f425797a26a51
Author: Yonghong Song <yonghong.song@linux.dev>
bpf: Fix tail_call_reachable leak
This commit fixes a leak of the local tail_call_reachable flag across
sibling subprogs during the DFS traversal in
check_max_stack_depth_subprog(), which caused the JIT to emit
unnecessary tail call counter preservation code for subprogs that
could not be reached via a tail call path.
Should this commit carry a Fixes: tag? The buggy pattern (set
tail_call_reachable to true on push, never reset on pop, then mark all
currently-on-stack frames) appears to have been introduced by commit
ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall handling in
JIT") which originally added tail_call_reachable tracking to the
stack-depth DFS.
Something like:
Fixes: ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall handling in JIT")
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24736686700
* Re: [PATCH bpf-next 0/9] bpf: Prepare to support stack arguments
2026-04-21 17:19 [PATCH bpf-next 0/9] bpf: Prepare to support stack arguments Yonghong Song
` (8 preceding siblings ...)
2026-04-21 17:20 ` [PATCH bpf-next 9/9] bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments Yonghong Song
@ 2026-04-21 19:13 ` Kumar Kartikeya Dwivedi
9 siblings, 0 replies; 25+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2026-04-21 19:13 UTC (permalink / raw)
To: Yonghong Song
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau
On Tue, 21 Apr 2026 at 19:22, Yonghong Song <yonghong.song@linux.dev> wrote:
>
> The patch set prepares to support stack arguments for bpf functions
> and kfunc's. The major changes include:
> - Avoid redundant calculation of bpf_reg_state. For stack
> arguments, there exists no corresponding register number.
> - Refactor check_kfunc_mem_size_reg() to have bpf_reg_state's
> for both mem_reg and size_reg.
> - Allow verifier logs to print stack arguments if there is no
> corresponding register.
>
> Please see individual patches for details.
For the set:
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> [...]
* Re: [PATCH bpf-next 4/9] bpf: Refactor to avoid redundant calculation of bpf_reg_state
2026-04-21 17:19 ` [PATCH bpf-next 4/9] bpf: Refactor to avoid redundant calculation of bpf_reg_state Yonghong Song
@ 2026-04-21 21:40 ` Amery Hung
2026-04-21 23:42 ` Yonghong Song
0 siblings, 1 reply; 25+ messages in thread
From: Amery Hung @ 2026-04-21 21:40 UTC (permalink / raw)
To: Yonghong Song
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau, Puranjay Mohan
On Tue, Apr 21, 2026 at 10:20 AM Yonghong Song <yonghong.song@linux.dev> wrote:
>
> In many cases, once a bpf_reg_state is defined, it can pass to
> callee's. Otherwise, callee will need to get bpf_reg_state again
> based on regno. More importantly, this is needed for later stack
> arguments for kfuncs since the register state for stack arguments does
> not have a corresponding regno. So it makes sense to pass reg state
> for callee's.
>
> The following is the only change to avoid compilation warning:
> static int sanitize_check_bounds(struct bpf_verifier_env *env,
> const struct bpf_insn *insn,
> - const struct bpf_reg_state *dst_reg)
> + struct bpf_reg_state *dst_reg)
>
> Acked-by: Puranjay Mohan <puranjay@kernel.org>
> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
> ---
> kernel/bpf/verifier.c | 213 ++++++++++++++++++------------------------
> 1 file changed, 93 insertions(+), 120 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index ed04fef49f6c..b56a11fc3856 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -3929,13 +3929,13 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
> static int check_stack_write_var_off(struct bpf_verifier_env *env,
> /* func where register points to */
> struct bpf_func_state *state,
> - int ptr_regno, int off, int size,
> + struct bpf_reg_state *ptr_reg, int off, int size,
> int value_regno, int insn_idx)
> {
The comment needs to be updated.
> struct bpf_func_state *cur; /* state of the current function */
> int min_off, max_off;
> int i, err;
> - struct bpf_reg_state *ptr_reg = NULL, *value_reg = NULL;
> + struct bpf_reg_state *value_reg = NULL;
> struct bpf_insn *insn = &env->prog->insnsi[insn_idx];
> bool writing_zero = false;
> /* set if the fact that we're writing a zero is used to let any
> @@ -3944,7 +3944,6 @@ static int check_stack_write_var_off(struct bpf_verifier_env *env,
> bool zero_used = false;
>
> cur = env->cur_state->frame[env->cur_state->curframe];
> - ptr_reg = &cur->regs[ptr_regno];
> min_off = ptr_reg->smin_value + off;
> max_off = ptr_reg->smax_value + off + size;
> if (value_regno >= 0)
> @@ -4241,7 +4240,7 @@ enum bpf_access_src {
> ACCESS_HELPER = 2, /* the access is performed by a helper */
> };
>
> -static int check_stack_range_initialized(struct bpf_verifier_env *env,
> +static int check_stack_range_initialized(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
> int regno, int off, int access_size,
> bool zero_size_allowed,
> enum bpf_access_type type,
> @@ -4265,18 +4264,16 @@ static struct bpf_reg_state *reg_state(struct bpf_verifier_env *env, int regno)
> * offset; for a fixed offset check_stack_read_fixed_off should be used
> * instead.
> */
Same here
> -static int check_stack_read_var_off(struct bpf_verifier_env *env,
> +static int check_stack_read_var_off(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
> int ptr_regno, int off, int size, int dst_regno)
> {
> - /* The state of the source register. */
> - struct bpf_reg_state *reg = reg_state(env, ptr_regno);
> struct bpf_func_state *ptr_state = bpf_func(env, reg);
> int err;
> int min_off, max_off;
>
> /* Note that we pass a NULL meta, so raw access will not be permitted.
> */
> - err = check_stack_range_initialized(env, ptr_regno, off, size,
> + err = check_stack_range_initialized(env, reg, ptr_regno, off, size,
> false, BPF_READ, NULL);
> if (err)
> return err;
> @@ -4298,10 +4295,9 @@ static int check_stack_read_var_off(struct bpf_verifier_env *env,
> * can be -1, meaning that the read value is not going to a register.
> */
> static int check_stack_read(struct bpf_verifier_env *env,
> - int ptr_regno, int off, int size,
> + struct bpf_reg_state *reg, int ptr_regno, int off, int size,
> int dst_regno)
> {
> - struct bpf_reg_state *reg = reg_state(env, ptr_regno);
> struct bpf_func_state *state = bpf_func(env, reg);
> int err;
> /* Some accesses are only permitted with a static offset. */
> @@ -4337,7 +4333,7 @@ static int check_stack_read(struct bpf_verifier_env *env,
> * than fixed offset ones. Note that dst_regno >= 0 on this
> * branch.
> */
> - err = check_stack_read_var_off(env, ptr_regno, off, size,
> + err = check_stack_read_var_off(env, reg, ptr_regno, off, size,
> dst_regno);
> }
> return err;
> @@ -4354,10 +4350,9 @@ static int check_stack_read(struct bpf_verifier_env *env,
> * The caller must ensure that the offset falls within the maximum stack size.
> */
Ditto
> static int check_stack_write(struct bpf_verifier_env *env,
> - int ptr_regno, int off, int size,
> + struct bpf_reg_state *reg, int off, int size,
> int value_regno, int insn_idx)
> {
> - struct bpf_reg_state *reg = reg_state(env, ptr_regno);
> struct bpf_func_state *state = bpf_func(env, reg);
> int err;
>
> @@ -4370,16 +4365,15 @@ static int check_stack_write(struct bpf_verifier_env *env,
> * than fixed offset ones.
> */
> err = check_stack_write_var_off(env, state,
> - ptr_regno, off, size,
> + reg, off, size,
> value_regno, insn_idx);
> }
> return err;
> }
>
Otherwise looks good to me.
Reviewed-by: Amery Hung <ameryhung@gmail.com>
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH bpf-next 7/9] bpf: Prepare verifier logs for upcoming kfunc stack arguments
2026-04-21 17:20 ` [PATCH bpf-next 7/9] bpf: Prepare verifier logs for upcoming kfunc stack arguments Yonghong Song
@ 2026-04-21 22:07 ` Alexei Starovoitov
2026-04-21 23:56 ` Yonghong Song
0 siblings, 1 reply; 25+ messages in thread
From: Alexei Starovoitov @ 2026-04-21 22:07 UTC (permalink / raw)
To: Yonghong Song, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau, Puranjay Mohan
On Tue Apr 21, 2026 at 10:20 AM PDT, Yonghong Song wrote:
> This change prepares verifier log reporting for upcoming kfunc stack
> argument support.
>
> Today verifier log code mostly assumes that an argument can be described
> directly by a register number. That works for arguments passed in `R1`
> to `R5`, but it does not work once kfunc arguments can also be
> passed on the stack.
>
> Introduce an internal `argno` representation such that register-passed
> arguments keep using their real register numbers, while stack-passed
> arguments use an encoded value above a dedicated base.
> `reg_arg_name()` converts this representation into either `R%d` or
> `*(R11-off)` when emitting verifier logs. If a particular `argno`
> corresponds to a stack argument, print `*(R11-off)`. Otherwise,
> print `R%d`. Here R11 represents the base of stack arguments.
>
> This keeps existing logs readable for register arguments and allows the
> same log sites to handle future stack arguments without open-coding
> special cases.
>
> Update selftests accordingly.
>
> Acked-by: Puranjay Mohan <puranjay@kernel.org>
> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
> ---
> include/linux/bpf_verifier.h | 1 +
> kernel/bpf/verifier.c | 640 ++++++++++--------
> .../testing/selftests/bpf/prog_tests/bpf_nf.c | 22 +-
> .../selftests/bpf/prog_tests/cb_refs.c | 2 +-
> .../selftests/bpf/prog_tests/kfunc_call.c | 2 +-
> .../selftests/bpf/prog_tests/linked_list.c | 4 +-
> .../selftests/bpf/progs/cgrp_kfunc_failure.c | 14 +-
> .../selftests/bpf/progs/cpumask_failure.c | 10 +-
> .../testing/selftests/bpf/progs/dynptr_fail.c | 22 +-
> .../selftests/bpf/progs/file_reader_fail.c | 4 +-
> tools/testing/selftests/bpf/progs/irq.c | 4 +-
> tools/testing/selftests/bpf/progs/iters.c | 6 +-
> .../selftests/bpf/progs/iters_state_safety.c | 14 +-
> .../selftests/bpf/progs/iters_testmod.c | 4 +-
> .../selftests/bpf/progs/iters_testmod_seq.c | 4 +-
> .../selftests/bpf/progs/map_kptr_fail.c | 2 +-
> .../selftests/bpf/progs/percpu_alloc_fail.c | 4 +-
> .../testing/selftests/bpf/progs/rbtree_fail.c | 6 +-
> .../bpf/progs/refcounted_kptr_fail.c | 2 +-
> .../testing/selftests/bpf/progs/stream_fail.c | 2 +-
> .../selftests/bpf/progs/task_kfunc_failure.c | 18 +-
> .../selftests/bpf/progs/task_work_fail.c | 6 +-
> .../selftests/bpf/progs/test_bpf_nf_fail.c | 8 +-
> .../bpf/progs/test_kfunc_dynptr_param.c | 2 +-
> .../bpf/progs/test_kfunc_param_nullable.c | 2 +-
> .../selftests/bpf/progs/verifier_bits_iter.c | 4 +-
> .../bpf/progs/verifier_ref_tracking.c | 6 +-
> .../selftests/bpf/progs/verifier_vfs_reject.c | 8 +-
> .../testing/selftests/bpf/progs/wq_failures.c | 2 +-
> tools/testing/selftests/bpf/verifier/calls.c | 14 +-
> 30 files changed, 464 insertions(+), 375 deletions(-)
>
> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> index b148f816f25b..d5b4303315dd 100644
> --- a/include/linux/bpf_verifier.h
> +++ b/include/linux/bpf_verifier.h
> @@ -913,6 +913,7 @@ struct bpf_verifier_env {
> * e.g., in reg_type_str() to generate reg_type string
> */
> char tmp_str_buf[TMP_STR_BUF_LEN];
> + char tmp_arg_name[32];
> struct bpf_insn insn_buf[INSN_BUF_SIZE];
> struct bpf_insn epilogue_buf[INSN_BUF_SIZE];
> struct bpf_scc_callchain callchain_buf;
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 18ab92581452..82568a427211 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -1742,6 +1742,44 @@ static struct bpf_verifier_state *push_stack(struct bpf_verifier_env *env,
> return &elem->st;
> }
>
> +#define STACK_ARGNO_BASE 100
> +
> +static bool is_stack_argno(int argno)
> +{
> + return argno > STACK_ARGNO_BASE;
> +}
> +
> +/* arg starts at 1 */
> +static u32 make_argno(u32 arg)
> +{
> + if (arg <= MAX_BPF_FUNC_REG_ARGS)
> + return arg;
> + return STACK_ARGNO_BASE + arg;
> +}
You can remove this and simplify everything further by
static bool is_stack_argno(int argno)
{
return argno > MAX_BPF_FUNC_REG_ARGS;
}
> +
> +static u32 arg_from_argno(int argno)
> +{
> + if (is_stack_argno(argno))
> + return argno - STACK_ARGNO_BASE;
> + return argno;
> +}
remove as well.
and a comment like:
/*
* switch (argno) {
* case 1: R1
* case 5: R5
* case 6: *(u64 *)(R11 +- 8)
* case 7: *(u64 *)(R11 +- 16)
*/
> +static const char *reg_arg_name(struct bpf_verifier_env *env, int argno)
> +{
> + char *buf = env->tmp_arg_name;
> + int len = sizeof(env->tmp_arg_name);
> + u32 arg;
> +
> + if (!is_stack_argno(argno)) {
> + snprintf(buf, len, "R%d", argno);
> + return buf;
> + }
> +
> + arg = arg_from_argno(argno);
gone
> + snprintf(buf, len, "*(R11-%u)", (arg - MAX_BPF_FUNC_REG_ARGS) * BPF_REG_SIZE);
> + return buf;
> +}
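For reference, the simplified scheme Alexei suggests can be sketched in
user space like this (a minimal sketch with the kernel constants redefined
locally; not the kernel code itself, and the buffer handling is simplified):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAX_BPF_FUNC_REG_ARGS 5	/* args 1..5 live in R1..R5 */
#define BPF_REG_SIZE 8

/* With no STACK_ARGNO_BASE, any argno above 5 is a stack argument. */
static bool is_stack_argno(int argno)
{
	return argno > MAX_BPF_FUNC_REG_ARGS;
}

/* argno 6 -> *(R11-8), argno 7 -> *(R11-16), ... */
static const char *reg_arg_name(char *buf, size_t len, int argno)
{
	if (!is_stack_argno(argno))
		snprintf(buf, len, "R%d", argno);
	else
		snprintf(buf, len, "*(R11-%u)",
			 (unsigned)((argno - MAX_BPF_FUNC_REG_ARGS) * BPF_REG_SIZE));
	return buf;
}
```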
* Re: [PATCH bpf-next 8/9] bpf: Introduce bpf register BPF_REG_PARAMS
2026-04-21 17:20 ` [PATCH bpf-next 8/9] bpf: Introduce bpf register BPF_REG_PARAMS Yonghong Song
@ 2026-04-21 22:10 ` Alexei Starovoitov
2026-04-22 0:09 ` Yonghong Song
0 siblings, 1 reply; 25+ messages in thread
From: Alexei Starovoitov @ 2026-04-21 22:10 UTC (permalink / raw)
To: Yonghong Song, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau, Puranjay Mohan
On Tue Apr 21, 2026 at 10:20 AM PDT, Yonghong Song wrote:
>
> /* Kernel hidden auxiliary/helper register. */
> -#define BPF_REG_AX MAX_BPF_REG
> -#define MAX_BPF_EXT_REG (MAX_BPF_REG + 1)
> +#define BPF_REG_PARAMS MAX_BPF_REG
> +#define BPF_REG_AX (MAX_BPF_REG + 1)
> +#define MAX_BPF_EXT_REG (MAX_BPF_REG + 2)
...
> --- a/tools/testing/selftests/bpf/progs/verifier_bpf_fastcall.c
> +++ b/tools/testing/selftests/bpf/progs/verifier_bpf_fastcall.c
> @@ -630,13 +630,13 @@ __xlated("...")
> __xlated("4: r0 = &(void __percpu *)(r0)")
> __xlated("...")
> /* may_goto expansion starts */
> -__xlated("6: r11 = *(u64 *)(r10 -24)")
> -__xlated("7: if r11 == 0x0 goto pc+6")
> -__xlated("8: r11 -= 1")
> -__xlated("9: if r11 != 0x0 goto pc+2")
> -__xlated("10: r11 = -24")
> +__xlated("6: r12 = *(u64 *)(r10 -24)")
> +__xlated("7: if r12 == 0x0 goto pc+6")
> +__xlated("8: r12 -= 1")
> +__xlated("9: if r12 != 0x0 goto pc+2")
> +__xlated("10: r12 = -24")
maybe shift it to r15 right away, so we don't need to touch this code
if/when true r12 is introduced?
* Re: [PATCH bpf-next 4/9] bpf: Refactor to avoid redundant calculation of bpf_reg_state
2026-04-21 21:40 ` Amery Hung
@ 2026-04-21 23:42 ` Yonghong Song
0 siblings, 0 replies; 25+ messages in thread
From: Yonghong Song @ 2026-04-21 23:42 UTC (permalink / raw)
To: Amery Hung
Cc: bpf, Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau, Puranjay Mohan
On 4/21/26 2:40 PM, Amery Hung wrote:
> On Tue, Apr 21, 2026 at 10:20 AM Yonghong Song <yonghong.song@linux.dev> wrote:
>> In many cases, once a bpf_reg_state is defined, it can be passed to
>> callees. Otherwise, a callee will need to look up the bpf_reg_state again
>> based on the regno. More importantly, this is needed for the later stack
>> arguments for kfuncs, since the register state for stack arguments does
>> not have a corresponding regno. So it makes sense to pass the reg state
>> to callees.
>>
>> The following is the only change to avoid compilation warning:
>> static int sanitize_check_bounds(struct bpf_verifier_env *env,
>> const struct bpf_insn *insn,
>> - const struct bpf_reg_state *dst_reg)
>> + struct bpf_reg_state *dst_reg)
>>
>> Acked-by: Puranjay Mohan <puranjay@kernel.org>
>> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
>> ---
>> kernel/bpf/verifier.c | 213 ++++++++++++++++++------------------------
>> 1 file changed, 93 insertions(+), 120 deletions(-)
>>
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index ed04fef49f6c..b56a11fc3856 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -3929,13 +3929,13 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
>> static int check_stack_write_var_off(struct bpf_verifier_env *env,
>> /* func where register points to */
>> struct bpf_func_state *state,
>> - int ptr_regno, int off, int size,
>> + struct bpf_reg_state *ptr_reg, int off, int size,
>> int value_regno, int insn_idx)
>> {
> The comment needs to be updated.
>
>> struct bpf_func_state *cur; /* state of the current function */
>> int min_off, max_off;
>> int i, err;
>> - struct bpf_reg_state *ptr_reg = NULL, *value_reg = NULL;
>> + struct bpf_reg_state *value_reg = NULL;
>> struct bpf_insn *insn = &env->prog->insnsi[insn_idx];
>> bool writing_zero = false;
>> /* set if the fact that we're writing a zero is used to let any
>> @@ -3944,7 +3944,6 @@ static int check_stack_write_var_off(struct bpf_verifier_env *env,
>> bool zero_used = false;
>>
>> cur = env->cur_state->frame[env->cur_state->curframe];
>> - ptr_reg = &cur->regs[ptr_regno];
>> min_off = ptr_reg->smin_value + off;
>> max_off = ptr_reg->smax_value + off + size;
>> if (value_regno >= 0)
>> @@ -4241,7 +4240,7 @@ enum bpf_access_src {
>> ACCESS_HELPER = 2, /* the access is performed by a helper */
>> };
>>
>> -static int check_stack_range_initialized(struct bpf_verifier_env *env,
>> +static int check_stack_range_initialized(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
>> int regno, int off, int access_size,
>> bool zero_size_allowed,
>> enum bpf_access_type type,
>> @@ -4265,18 +4264,16 @@ static struct bpf_reg_state *reg_state(struct bpf_verifier_env *env, int regno)
>> * offset; for a fixed offset check_stack_read_fixed_off should be used
>> * instead.
>> */
> Same here
>
>> -static int check_stack_read_var_off(struct bpf_verifier_env *env,
>> +static int check_stack_read_var_off(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
>> int ptr_regno, int off, int size, int dst_regno)
>> {
>> - /* The state of the source register. */
>> - struct bpf_reg_state *reg = reg_state(env, ptr_regno);
>> struct bpf_func_state *ptr_state = bpf_func(env, reg);
>> int err;
>> int min_off, max_off;
>>
>> /* Note that we pass a NULL meta, so raw access will not be permitted.
>> */
>> - err = check_stack_range_initialized(env, ptr_regno, off, size,
>> + err = check_stack_range_initialized(env, reg, ptr_regno, off, size,
>> false, BPF_READ, NULL);
>> if (err)
>> return err;
>> @@ -4298,10 +4295,9 @@ static int check_stack_read_var_off(struct bpf_verifier_env *env,
>> * can be -1, meaning that the read value is not going to a register.
>> */
>> static int check_stack_read(struct bpf_verifier_env *env,
>> - int ptr_regno, int off, int size,
>> + struct bpf_reg_state *reg, int ptr_regno, int off, int size,
>> int dst_regno)
>> {
>> - struct bpf_reg_state *reg = reg_state(env, ptr_regno);
>> struct bpf_func_state *state = bpf_func(env, reg);
>> int err;
>> /* Some accesses are only permitted with a static offset. */
>> @@ -4337,7 +4333,7 @@ static int check_stack_read(struct bpf_verifier_env *env,
>> * than fixed offset ones. Note that dst_regno >= 0 on this
>> * branch.
>> */
>> - err = check_stack_read_var_off(env, ptr_regno, off, size,
>> + err = check_stack_read_var_off(env, reg, ptr_regno, off, size,
>> dst_regno);
>> }
>> return err;
>> @@ -4354,10 +4350,9 @@ static int check_stack_read(struct bpf_verifier_env *env,
>> * The caller must ensure that the offset falls within the maximum stack size.
>> */
> Ditto
Ack. Will fix all the above in comments.
>
>> static int check_stack_write(struct bpf_verifier_env *env,
>> - int ptr_regno, int off, int size,
>> + struct bpf_reg_state *reg, int off, int size,
>> int value_regno, int insn_idx)
>> {
>> - struct bpf_reg_state *reg = reg_state(env, ptr_regno);
>> struct bpf_func_state *state = bpf_func(env, reg);
>> int err;
>>
>> @@ -4370,16 +4365,15 @@ static int check_stack_write(struct bpf_verifier_env *env,
>> * than fixed offset ones.
>> */
>> err = check_stack_write_var_off(env, state,
>> - ptr_regno, off, size,
>> + reg, off, size,
>> value_regno, insn_idx);
>> }
>> return err;
>> }
>>
> Otherwise looks good to me.
>
> Reviewed-by: Amery Hung <ameryhung@gmail.com>
* Re: [PATCH bpf-next 7/9] bpf: Prepare verifier logs for upcoming kfunc stack arguments
2026-04-21 22:07 ` Alexei Starovoitov
@ 2026-04-21 23:56 ` Yonghong Song
2026-04-22 0:37 ` Alexei Starovoitov
0 siblings, 1 reply; 25+ messages in thread
From: Yonghong Song @ 2026-04-21 23:56 UTC (permalink / raw)
To: Alexei Starovoitov, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau, Puranjay Mohan
On 4/21/26 3:07 PM, Alexei Starovoitov wrote:
> On Tue Apr 21, 2026 at 10:20 AM PDT, Yonghong Song wrote:
>> This change prepares verifier log reporting for upcoming kfunc stack
>> argument support.
>>
>> Today verifier log code mostly assumes that an argument can be described
>> directly by a register number. That works for arguments passed in `R1`
>> to `R5`, but it does not work once kfunc arguments can also be
>> passed on the stack.
>>
>> Introduce an internal `argno` representation such that register-passed
>> arguments keep using their real register numbers, while stack-passed
>> arguments use an encoded value above a dedicated base.
>> `reg_arg_name()` converts this representation into either `R%d` or
>> `*(R11-off)` when emitting verifier logs. If a particular `argno`
>> corresponds to a stack argument, print `*(R11-off)`. Otherwise,
>> print `R%d`. Here R11 represents the base of stack arguments.
>>
>> This keeps existing logs readable for register arguments and allows the
>> same log sites to handle future stack arguments without open-coding
>> special cases.
>>
>> Update selftests accordingly.
>>
>> Acked-by: Puranjay Mohan <puranjay@kernel.org>
>> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
>> ---
>> include/linux/bpf_verifier.h | 1 +
>> kernel/bpf/verifier.c | 640 ++++++++++--------
>> .../testing/selftests/bpf/prog_tests/bpf_nf.c | 22 +-
>> .../selftests/bpf/prog_tests/cb_refs.c | 2 +-
>> .../selftests/bpf/prog_tests/kfunc_call.c | 2 +-
>> .../selftests/bpf/prog_tests/linked_list.c | 4 +-
>> .../selftests/bpf/progs/cgrp_kfunc_failure.c | 14 +-
>> .../selftests/bpf/progs/cpumask_failure.c | 10 +-
>> .../testing/selftests/bpf/progs/dynptr_fail.c | 22 +-
>> .../selftests/bpf/progs/file_reader_fail.c | 4 +-
>> tools/testing/selftests/bpf/progs/irq.c | 4 +-
>> tools/testing/selftests/bpf/progs/iters.c | 6 +-
>> .../selftests/bpf/progs/iters_state_safety.c | 14 +-
>> .../selftests/bpf/progs/iters_testmod.c | 4 +-
>> .../selftests/bpf/progs/iters_testmod_seq.c | 4 +-
>> .../selftests/bpf/progs/map_kptr_fail.c | 2 +-
>> .../selftests/bpf/progs/percpu_alloc_fail.c | 4 +-
>> .../testing/selftests/bpf/progs/rbtree_fail.c | 6 +-
>> .../bpf/progs/refcounted_kptr_fail.c | 2 +-
>> .../testing/selftests/bpf/progs/stream_fail.c | 2 +-
>> .../selftests/bpf/progs/task_kfunc_failure.c | 18 +-
>> .../selftests/bpf/progs/task_work_fail.c | 6 +-
>> .../selftests/bpf/progs/test_bpf_nf_fail.c | 8 +-
>> .../bpf/progs/test_kfunc_dynptr_param.c | 2 +-
>> .../bpf/progs/test_kfunc_param_nullable.c | 2 +-
>> .../selftests/bpf/progs/verifier_bits_iter.c | 4 +-
>> .../bpf/progs/verifier_ref_tracking.c | 6 +-
>> .../selftests/bpf/progs/verifier_vfs_reject.c | 8 +-
>> .../testing/selftests/bpf/progs/wq_failures.c | 2 +-
>> tools/testing/selftests/bpf/verifier/calls.c | 14 +-
>> 30 files changed, 464 insertions(+), 375 deletions(-)
>>
>> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
>> index b148f816f25b..d5b4303315dd 100644
>> --- a/include/linux/bpf_verifier.h
>> +++ b/include/linux/bpf_verifier.h
>> @@ -913,6 +913,7 @@ struct bpf_verifier_env {
>> * e.g., in reg_type_str() to generate reg_type string
>> */
>> char tmp_str_buf[TMP_STR_BUF_LEN];
>> + char tmp_arg_name[32];
>> struct bpf_insn insn_buf[INSN_BUF_SIZE];
>> struct bpf_insn epilogue_buf[INSN_BUF_SIZE];
>> struct bpf_scc_callchain callchain_buf;
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index 18ab92581452..82568a427211 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -1742,6 +1742,44 @@ static struct bpf_verifier_state *push_stack(struct bpf_verifier_env *env,
>> return &elem->st;
>> }
>>
>> +#define STACK_ARGNO_BASE 100
>> +
>> +static bool is_stack_argno(int argno)
>> +{
>> + return argno > STACK_ARGNO_BASE;
>> +}
>> +
>> +/* arg starts at 1 */
>> +static u32 make_argno(u32 arg)
>> +{
>> + if (arg <= MAX_BPF_FUNC_REG_ARGS)
>> + return arg;
>> + return STACK_ARGNO_BASE + arg;
>> +}
> You can remove this and simplify everything further by
>
> static bool is_stack_argno(int argno)
> {
> return argno > MAX_BPF_FUNC_REG_ARGS;
> }
>
>> +
>> +static u32 arg_from_argno(int argno)
>> +{
>> + if (is_stack_argno(argno))
>> + return argno - STACK_ARGNO_BASE;
>> + return argno;
>> +}
> remove as well.
>
> and a comment like:
>
> /*
> * switch (argno) {
> * case 1: R1
> * case 5: R5
> * case 6: *(u64 *)(R11 +- 8)
> * case 7: *(u64 *)(R11 +- 16)
> */
This doesn't work. Let us see the following example:
check_kfunc_args
process_dynptr_func (argno)
check_mem_access (argno, 4th argument)
check_packet_access (argno)
check_mem_region_access (argno)
__check_mem_access (argno)
<== verbose log with argno
do_check
do_check_insn (env)
check_load_mem (insn)
check_mem_access (insn->src_reg, 4th argument)
check_packet_access (...)
check_mem_region_access (...)
__check_mem_access (insn->src_reg or argno)
In the above case, the function __check_mem_access() intends to issue
a verbose log with the 'argno' argument. The possible values are
- R0 to R11 when called from the do_check() call stack
- R1 to R5 or stack arguments when called from check_kfunc_args()
In such cases, the printed name will be
R0 to R11 if argno <= 100 (STACK_ARGNO_BASE)
*(R11-%u) if argno > 100
Does this make sense?
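To illustrate the point, here is a user-space sketch (constants redefined
locally; not the kernel code) of why the base must sit above the plain
register range so that the two value ranges cannot collide:

```c
#include <assert.h>
#include <stdbool.h>

#define STACK_ARGNO_BASE 100
#define MAX_BPF_FUNC_REG_ARGS 5

/* arg starts at 1; register args keep their regno, stack args are
 * encoded above the base so they cannot collide with a plain R0..R11
 * regno coming from the do_check() path. */
static int make_argno(int arg)
{
	return arg <= MAX_BPF_FUNC_REG_ARGS ? arg : STACK_ARGNO_BASE + arg;
}

static bool is_stack_argno(int argno)
{
	return argno > STACK_ARGNO_BASE;
}
```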
>> +static const char *reg_arg_name(struct bpf_verifier_env *env, int argno)
>> +{
>> + char *buf = env->tmp_arg_name;
>> + int len = sizeof(env->tmp_arg_name);
>> + u32 arg;
>> +
>> + if (!is_stack_argno(argno)) {
>> + snprintf(buf, len, "R%d", argno);
>> + return buf;
>> + }
>> +
>> + arg = arg_from_argno(argno);
> gone
>
>> + snprintf(buf, len, "*(R11-%u)", (arg - MAX_BPF_FUNC_REG_ARGS) * BPF_REG_SIZE);
>> + return buf;
>> +}
* Re: [PATCH bpf-next 8/9] bpf: Introduce bpf register BPF_REG_PARAMS
2026-04-21 22:10 ` Alexei Starovoitov
@ 2026-04-22 0:09 ` Yonghong Song
2026-04-22 0:42 ` Alexei Starovoitov
0 siblings, 1 reply; 25+ messages in thread
From: Yonghong Song @ 2026-04-22 0:09 UTC (permalink / raw)
To: Alexei Starovoitov, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau, Puranjay Mohan
On 4/21/26 3:10 PM, Alexei Starovoitov wrote:
> On Tue Apr 21, 2026 at 10:20 AM PDT, Yonghong Song wrote:
>>
>> /* Kernel hidden auxiliary/helper register. */
>> -#define BPF_REG_AX MAX_BPF_REG
>> -#define MAX_BPF_EXT_REG (MAX_BPF_REG + 1)
>> +#define BPF_REG_PARAMS MAX_BPF_REG
>> +#define BPF_REG_AX (MAX_BPF_REG + 1)
>> +#define MAX_BPF_EXT_REG (MAX_BPF_REG + 2)
> ...
>
>> --- a/tools/testing/selftests/bpf/progs/verifier_bpf_fastcall.c
>> +++ b/tools/testing/selftests/bpf/progs/verifier_bpf_fastcall.c
>> @@ -630,13 +630,13 @@ __xlated("...")
>> __xlated("4: r0 = &(void __percpu *)(r0)")
>> __xlated("...")
>> /* may_goto expansion starts */
>> -__xlated("6: r11 = *(u64 *)(r10 -24)")
>> -__xlated("7: if r11 == 0x0 goto pc+6")
>> -__xlated("8: r11 -= 1")
>> -__xlated("9: if r11 != 0x0 goto pc+2")
>> -__xlated("10: r11 = -24")
>> +__xlated("6: r12 = *(u64 *)(r10 -24)")
>> +__xlated("7: if r12 == 0x0 goto pc+6")
>> +__xlated("8: r12 -= 1")
>> +__xlated("9: if r12 != 0x0 goto pc+2")
>> +__xlated("10: r12 = -24")
> maybe shift it to r15 right away, so we don't need to touch this code
> if/when true r12 is introduced?
We can do this. Do you think the following is okay:
diff --git a/include/linux/filter.h b/include/linux/filter.h
index b77d0b06db6e..fe7b6b943ea4 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -59,9 +59,9 @@ struct ctl_table_header;
/* Kernel hidden auxiliary/helper register. */
#define BPF_REG_PARAMS MAX_BPF_REG
-#define BPF_REG_AX (MAX_BPF_REG + 1)
-#define MAX_BPF_EXT_REG (MAX_BPF_REG + 2)
+#define MAX_BPF_EXT_REG (MAX_BPF_REG + 1)
#define MAX_BPF_JIT_REG MAX_BPF_EXT_REG
+#define BPF_REG_AX (MAX_BPF_REG + 4)
/* unused opcode to mark special call to bpf_tail_call() helper */
#define BPF_TAIL_CALL 0xf0
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index ae10b9ca018d..3ad286ef3085 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1299,8 +1299,8 @@ static int bpf_jit_blind_insn(const struct bpf_insn *from,
u32 imm_rnd = get_random_u32();
s16 off;
- BUILD_BUG_ON(BPF_REG_PARAMS + 2 != MAX_BPF_JIT_REG);
- BUILD_BUG_ON(BPF_REG_AX + 1 != MAX_BPF_JIT_REG);
+ BUILD_BUG_ON(BPF_REG_PARAMS + 1 != MAX_BPF_JIT_REG);
+ BUILD_BUG_ON(BPF_REG_AX != MAX_BPF_JIT_REG + 3);
/* Constraints on AX register:
*
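A quick sanity check of the proposed numbering (assuming MAX_BPF_REG is 11
as in the kernel headers, so AX lands on r15 and r12..r14 stay free):

```c
#include <assert.h>

#define MAX_BPF_REG 11			/* R0..R10 */
#define BPF_REG_PARAMS MAX_BPF_REG	/* r11 */
#define MAX_BPF_EXT_REG (MAX_BPF_REG + 1)
#define MAX_BPF_JIT_REG MAX_BPF_EXT_REG
#define BPF_REG_AX (MAX_BPF_REG + 4)	/* r15 */

/* The two BUILD_BUG_ON conditions from the diff above, checked with
 * the proposed values: */
_Static_assert(BPF_REG_PARAMS + 1 == MAX_BPF_JIT_REG, "params is last jit reg");
_Static_assert(BPF_REG_AX == MAX_BPF_JIT_REG + 3, "ax moved to r15");
```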
* Re: [PATCH bpf-next 2/9] bpf: Fix tail_call_reachable leak
2026-04-21 18:06 ` bot+bpf-ci
@ 2026-04-22 0:29 ` Yonghong Song
0 siblings, 0 replies; 25+ messages in thread
From: Yonghong Song @ 2026-04-22 0:29 UTC (permalink / raw)
To: bot+bpf-ci, bpf
Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau,
eddyz87, clm, ihor.solodrai
On 4/21/26 11:06 AM, bot+bpf-ci@kernel.org wrote:
> commit 697f215251be114235dcf2ec469f425797a26a51
> Author: Yonghong Song <yonghong.song@linux.dev>
>
> bpf: Fix tail_call_reachable leak
>
> This commit fixes a leak of the local tail_call_reachable flag across
> sibling subprogs during the DFS traversal in
> check_max_stack_depth_subprog(), which caused the JIT to emit
> unnecessary tail call counter preservation code for subprogs that
> could not be reached via a tail call path.
>
> Should this commit carry a Fixes: tag? The buggy pattern (set
> tail_call_reachable to true on push, never reset on pop, then mark all
> currently-on-stack frames) appears to have been introduced by commit
> ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall handling in
> JIT") which originally added tail_call_reachable tracking to the
> stack-depth DFS.
>
> Something like:
>
> Fixes: ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall handling in JIT")
The above Fixes tag is from 2020, so it has been 5+ years. BTW, this is not
really a bug, although in certain cases it may cause more stack space to be
used than needed. I would skip the Fixes tag for this patch.
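As an aside, the leak pattern the bot describes can be illustrated with a
toy DFS (this is not the verifier's check_max_stack_depth_subprog(); names
and shapes here are made up for illustration). Passing the path-local flag
by value makes it "pop" automatically on return, so a sibling subtree
cannot inherit it:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy call graph: children[i][j] is the j-th callee of subprog i
 * (-1 if none); tail_call[i][j] marks that edge as a tail call. */
static void mark_reachable(int node, bool reachable,
			   int children[][2], bool tail_call[][2],
			   bool *marked)
{
	if (reachable)
		marked[node] = true;
	for (int j = 0; j < 2; j++) {
		int child = children[node][j];

		if (child >= 0)
			mark_reachable(child,
				       reachable || tail_call[node][j],
				       children, tail_call, marked);
	}
	/* 'reachable' is a by-value parameter, so the flag cannot leak
	 * from one sibling's subtree into the next; the buggy pattern
	 * set a shared flag on push and never cleared it on pop. */
}
```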
>
>
> ---
> AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
> See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
>
> CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24736686700
* Re: [PATCH bpf-next 7/9] bpf: Prepare verifier logs for upcoming kfunc stack arguments
2026-04-21 23:56 ` Yonghong Song
@ 2026-04-22 0:37 ` Alexei Starovoitov
2026-04-22 1:20 ` Yonghong Song
0 siblings, 1 reply; 25+ messages in thread
From: Alexei Starovoitov @ 2026-04-22 0:37 UTC (permalink / raw)
To: Yonghong Song, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau, Puranjay Mohan
On Tue Apr 21, 2026 at 4:56 PM PDT, Yonghong Song wrote:
>
>
> On 4/21/26 3:07 PM, Alexei Starovoitov wrote:
>> On Tue Apr 21, 2026 at 10:20 AM PDT, Yonghong Song wrote:
>>> This change prepares verifier log reporting for upcoming kfunc stack
>>> argument support.
>>>
>>> Today verifier log code mostly assumes that an argument can be described
>>> directly by a register number. That works for arguments passed in `R1`
>>> to `R5`, but it does not work once kfunc arguments can also be
>>> passed on the stack.
>>>
>>> Introduce an internal `argno` representation such that register-passed
>>> arguments keep using their real register numbers, while stack-passed
>>> arguments use an encoded value above a dedicated base.
>>> `reg_arg_name()` converts this representation into either `R%d` or
>>> `*(R11-off)` when emitting verifier logs. If a particular `argno`
>>> corresponds to a stack argument, print `*(R11-off)`. Otherwise,
>>> print `R%d`. Here R11 represents the base of stack arguments.
>>>
>>> This keeps existing logs readable for register arguments and allows the
>>> same log sites to handle future stack arguments without open-coding
>>> special cases.
>>>
>>> Update selftests accordingly.
>>>
>>> Acked-by: Puranjay Mohan <puranjay@kernel.org>
>>> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
>>> ---
>>> include/linux/bpf_verifier.h | 1 +
>>> kernel/bpf/verifier.c | 640 ++++++++++--------
>>> .../testing/selftests/bpf/prog_tests/bpf_nf.c | 22 +-
>>> .../selftests/bpf/prog_tests/cb_refs.c | 2 +-
>>> .../selftests/bpf/prog_tests/kfunc_call.c | 2 +-
>>> .../selftests/bpf/prog_tests/linked_list.c | 4 +-
>>> .../selftests/bpf/progs/cgrp_kfunc_failure.c | 14 +-
>>> .../selftests/bpf/progs/cpumask_failure.c | 10 +-
>>> .../testing/selftests/bpf/progs/dynptr_fail.c | 22 +-
>>> .../selftests/bpf/progs/file_reader_fail.c | 4 +-
>>> tools/testing/selftests/bpf/progs/irq.c | 4 +-
>>> tools/testing/selftests/bpf/progs/iters.c | 6 +-
>>> .../selftests/bpf/progs/iters_state_safety.c | 14 +-
>>> .../selftests/bpf/progs/iters_testmod.c | 4 +-
>>> .../selftests/bpf/progs/iters_testmod_seq.c | 4 +-
>>> .../selftests/bpf/progs/map_kptr_fail.c | 2 +-
>>> .../selftests/bpf/progs/percpu_alloc_fail.c | 4 +-
>>> .../testing/selftests/bpf/progs/rbtree_fail.c | 6 +-
>>> .../bpf/progs/refcounted_kptr_fail.c | 2 +-
>>> .../testing/selftests/bpf/progs/stream_fail.c | 2 +-
>>> .../selftests/bpf/progs/task_kfunc_failure.c | 18 +-
>>> .../selftests/bpf/progs/task_work_fail.c | 6 +-
>>> .../selftests/bpf/progs/test_bpf_nf_fail.c | 8 +-
>>> .../bpf/progs/test_kfunc_dynptr_param.c | 2 +-
>>> .../bpf/progs/test_kfunc_param_nullable.c | 2 +-
>>> .../selftests/bpf/progs/verifier_bits_iter.c | 4 +-
>>> .../bpf/progs/verifier_ref_tracking.c | 6 +-
>>> .../selftests/bpf/progs/verifier_vfs_reject.c | 8 +-
>>> .../testing/selftests/bpf/progs/wq_failures.c | 2 +-
>>> tools/testing/selftests/bpf/verifier/calls.c | 14 +-
>>> 30 files changed, 464 insertions(+), 375 deletions(-)
>>>
>>> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
>>> index b148f816f25b..d5b4303315dd 100644
>>> --- a/include/linux/bpf_verifier.h
>>> +++ b/include/linux/bpf_verifier.h
>>> @@ -913,6 +913,7 @@ struct bpf_verifier_env {
>>> * e.g., in reg_type_str() to generate reg_type string
>>> */
>>> char tmp_str_buf[TMP_STR_BUF_LEN];
>>> + char tmp_arg_name[32];
>>> struct bpf_insn insn_buf[INSN_BUF_SIZE];
>>> struct bpf_insn epilogue_buf[INSN_BUF_SIZE];
>>> struct bpf_scc_callchain callchain_buf;
>>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>>> index 18ab92581452..82568a427211 100644
>>> --- a/kernel/bpf/verifier.c
>>> +++ b/kernel/bpf/verifier.c
>>> @@ -1742,6 +1742,44 @@ static struct bpf_verifier_state *push_stack(struct bpf_verifier_env *env,
>>> return &elem->st;
>>> }
>>>
>>> +#define STACK_ARGNO_BASE 100
>>> +
>>> +static bool is_stack_argno(int argno)
>>> +{
>>> + return argno > STACK_ARGNO_BASE;
>>> +}
>>> +
>>> +/* arg starts at 1 */
>>> +static u32 make_argno(u32 arg)
>>> +{
>>> + if (arg <= MAX_BPF_FUNC_REG_ARGS)
>>> + return arg;
>>> + return STACK_ARGNO_BASE + arg;
>>> +}
>> You can remove this and simplify everything further by
>>
>> static bool is_stack_argno(int argno)
>> {
>> return argno > MAX_BPF_FUNC_REG_ARGS;
>> }
>>
>>> +
>>> +static u32 arg_from_argno(int argno)
>>> +{
>>> + if (is_stack_argno(argno))
>>> + return argno - STACK_ARGNO_BASE;
>>> + return argno;
>>> +}
>> remove as well.
>>
>> and a comment like:
>>
>> /*
>> * switch (argno) {
>> * case 1: R1
>> * case 5: R5
>> * case 6: *(u64 *)(R11 +- 8)
>> * case 7: *(u64 *)(R11 +- 16)
>> */
>
> This doesn't work. Let us see the following example:
>
> check_kfunc_args
> process_dynptr_func (argno)
> check_mem_access (argno, 4th argument)
> check_packet_access (argno)
> check_mem_region_access (argno)
> __check_mem_access (argno)
> <== verbose log with argno
>
> do_check
> do_check_insn (env)
> check_load_mem (insn)
> check_mem_access (insn->src_reg, 4th argument)
> check_packet_access (...)
> check_mem_region_access (...)
> __check_mem_access (insn->src_reg or argno)
Ohh. Silent conversion. That's quite error prone.
let's do
typedef struct argno {
int argno;
} argno_t;
and make sure this callchain passes argno_t unmodified:
process_dynptr_func (argno)
check_mem_access (argno, 4th argument)
check_packet_access (argno) ...
while here:
check_load_mem (insn)
check_mem_access (argno_from_reg(insn->src_reg), 4th argument)
static argno_t argno_from_reg(u32 regno)
{
return (argno_t){ .argno = regno };
}
static argno_t argno_from_arg(u32 arg)
{
return (argno_t){ .argno = -arg };
}
static const char *reg_arg_name(struct bpf_verifier_env *env, argno_t argno)
When positive vs negative is an internal implementation detail of argno_t
it's fine. It's better than the shift-by-100, but when negative was
used as a signal everywhere it leaked details to the caller.
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH bpf-next 8/9] bpf: Introduce bpf register BPF_REG_PARAMS
2026-04-22 0:09 ` Yonghong Song
@ 2026-04-22 0:42 ` Alexei Starovoitov
2026-04-22 1:10 ` Yonghong Song
0 siblings, 1 reply; 25+ messages in thread
From: Alexei Starovoitov @ 2026-04-22 0:42 UTC (permalink / raw)
To: Yonghong Song, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau, Puranjay Mohan
On Tue Apr 21, 2026 at 5:09 PM PDT, Yonghong Song wrote:
>
>
> On 4/21/26 3:10 PM, Alexei Starovoitov wrote:
>> On Tue Apr 21, 2026 at 10:20 AM PDT, Yonghong Song wrote:
>>>
>>> /* Kernel hidden auxiliary/helper register. */
>>> -#define BPF_REG_AX MAX_BPF_REG
>>> -#define MAX_BPF_EXT_REG (MAX_BPF_REG + 1)
>>> +#define BPF_REG_PARAMS MAX_BPF_REG
>>> +#define BPF_REG_AX (MAX_BPF_REG + 1)
>>> +#define MAX_BPF_EXT_REG (MAX_BPF_REG + 2)
>> ...
>>
>>> --- a/tools/testing/selftests/bpf/progs/verifier_bpf_fastcall.c
>>> +++ b/tools/testing/selftests/bpf/progs/verifier_bpf_fastcall.c
>>> @@ -630,13 +630,13 @@ __xlated("...")
>>> __xlated("4: r0 = &(void __percpu *)(r0)")
>>> __xlated("...")
>>> /* may_goto expansion starts */
>>> -__xlated("6: r11 = *(u64 *)(r10 -24)")
>>> -__xlated("7: if r11 == 0x0 goto pc+6")
>>> -__xlated("8: r11 -= 1")
>>> -__xlated("9: if r11 != 0x0 goto pc+2")
>>> -__xlated("10: r11 = -24")
>>> +__xlated("6: r12 = *(u64 *)(r10 -24)")
>>> +__xlated("7: if r12 == 0x0 goto pc+6")
>>> +__xlated("8: r12 -= 1")
>>> +__xlated("9: if r12 != 0x0 goto pc+2")
>>> +__xlated("10: r12 = -24")
>> maybe shift it to r15 right away, so we don't need to touch this code
>> if/when true r12 is introduced?
>
> We can do this. Do you think the following is okay:
>
> diff --git a/include/linux/filter.h b/include/linux/filter.h
> index b77d0b06db6e..fe7b6b943ea4 100644
> --- a/include/linux/filter.h
> +++ b/include/linux/filter.h
> @@ -59,9 +59,9 @@ struct ctl_table_header;
>
> /* Kernel hidden auxiliary/helper register. */
> #define BPF_REG_PARAMS MAX_BPF_REG
> -#define BPF_REG_AX (MAX_BPF_REG + 1)
> -#define MAX_BPF_EXT_REG (MAX_BPF_REG + 2)
> +#define MAX_BPF_EXT_REG (MAX_BPF_REG + 1)
> #define MAX_BPF_JIT_REG MAX_BPF_EXT_REG
> +#define BPF_REG_AX (MAX_BPF_REG + 4)
>
> /* unused opcode to mark special call to bpf_tail_call() helper */
> #define BPF_TAIL_CALL 0xf0
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index ae10b9ca018d..3ad286ef3085 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -1299,8 +1299,8 @@ static int bpf_jit_blind_insn(const struct bpf_insn *from,
> u32 imm_rnd = get_random_u32();
> s16 off;
>
> - BUILD_BUG_ON(BPF_REG_PARAMS + 2 != MAX_BPF_JIT_REG);
> - BUILD_BUG_ON(BPF_REG_AX + 1 != MAX_BPF_JIT_REG);
> + BUILD_BUG_ON(BPF_REG_PARAMS + 1 != MAX_BPF_JIT_REG);
> + BUILD_BUG_ON(BPF_REG_AX != MAX_BPF_JIT_REG + 3);
Ohh. We have:
static unsigned int PROG_NAME(stack_size)(const void *ctx, const struct bpf_insn *insn) \
{ \
u64 stack[stack_size / sizeof(u64)]; \
u64 regs[MAX_BPF_EXT_REG] = {}; \
Please double check that BPF_REG_AX is not used by the interpreter.
But it's starting to feel that my suggestion was premature.
Probably better to keep this patchset as-is.
* Re: [PATCH bpf-next 8/9] bpf: Introduce bpf register BPF_REG_PARAMS
2026-04-22 0:42 ` Alexei Starovoitov
@ 2026-04-22 1:10 ` Yonghong Song
0 siblings, 0 replies; 25+ messages in thread
From: Yonghong Song @ 2026-04-22 1:10 UTC (permalink / raw)
To: Alexei Starovoitov, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau, Puranjay Mohan
On 4/21/26 5:42 PM, Alexei Starovoitov wrote:
> On Tue Apr 21, 2026 at 5:09 PM PDT, Yonghong Song wrote:
>>
>> On 4/21/26 3:10 PM, Alexei Starovoitov wrote:
>>> On Tue Apr 21, 2026 at 10:20 AM PDT, Yonghong Song wrote:
>>>>
>>>> /* Kernel hidden auxiliary/helper register. */
>>>> -#define BPF_REG_AX MAX_BPF_REG
>>>> -#define MAX_BPF_EXT_REG (MAX_BPF_REG + 1)
>>>> +#define BPF_REG_PARAMS MAX_BPF_REG
>>>> +#define BPF_REG_AX (MAX_BPF_REG + 1)
>>>> +#define MAX_BPF_EXT_REG (MAX_BPF_REG + 2)
>>> ...
>>>
>>>> --- a/tools/testing/selftests/bpf/progs/verifier_bpf_fastcall.c
>>>> +++ b/tools/testing/selftests/bpf/progs/verifier_bpf_fastcall.c
>>>> @@ -630,13 +630,13 @@ __xlated("...")
>>>> __xlated("4: r0 = &(void __percpu *)(r0)")
>>>> __xlated("...")
>>>> /* may_goto expansion starts */
>>>> -__xlated("6: r11 = *(u64 *)(r10 -24)")
>>>> -__xlated("7: if r11 == 0x0 goto pc+6")
>>>> -__xlated("8: r11 -= 1")
>>>> -__xlated("9: if r11 != 0x0 goto pc+2")
>>>> -__xlated("10: r11 = -24")
>>>> +__xlated("6: r12 = *(u64 *)(r10 -24)")
>>>> +__xlated("7: if r12 == 0x0 goto pc+6")
>>>> +__xlated("8: r12 -= 1")
>>>> +__xlated("9: if r12 != 0x0 goto pc+2")
>>>> +__xlated("10: r12 = -24")
>>> maybe shift it to r15 right away, so we don't need to touch this code
>>> if/when true r12 is introduced?
>> We can do this. Do you think the following is okay:
>>
>> diff --git a/include/linux/filter.h b/include/linux/filter.h
>> index b77d0b06db6e..fe7b6b943ea4 100644
>> --- a/include/linux/filter.h
>> +++ b/include/linux/filter.h
>> @@ -59,9 +59,9 @@ struct ctl_table_header;
>>
>> /* Kernel hidden auxiliary/helper register. */
>> #define BPF_REG_PARAMS MAX_BPF_REG
>> -#define BPF_REG_AX (MAX_BPF_REG + 1)
>> -#define MAX_BPF_EXT_REG (MAX_BPF_REG + 2)
>> +#define MAX_BPF_EXT_REG (MAX_BPF_REG + 1)
>> #define MAX_BPF_JIT_REG MAX_BPF_EXT_REG
>> +#define BPF_REG_AX (MAX_BPF_REG + 4)
>>
>> /* unused opcode to mark special call to bpf_tail_call() helper */
>> #define BPF_TAIL_CALL 0xf0
>> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
>> index ae10b9ca018d..3ad286ef3085 100644
>> --- a/kernel/bpf/core.c
>> +++ b/kernel/bpf/core.c
>> @@ -1299,8 +1299,8 @@ static int bpf_jit_blind_insn(const struct bpf_insn *from,
>> u32 imm_rnd = get_random_u32();
>> s16 off;
>>
>> - BUILD_BUG_ON(BPF_REG_PARAMS + 2 != MAX_BPF_JIT_REG);
>> - BUILD_BUG_ON(BPF_REG_AX + 1 != MAX_BPF_JIT_REG);
>> + BUILD_BUG_ON(BPF_REG_PARAMS + 1 != MAX_BPF_JIT_REG);
>> + BUILD_BUG_ON(BPF_REG_AX != MAX_BPF_JIT_REG + 3);
> Ohh. We have:
> static unsigned int PROG_NAME(stack_size)(const void *ctx, const struct bpf_insn *insn) \
> { \
> u64 stack[stack_size / sizeof(u64)]; \
> u64 regs[MAX_BPF_EXT_REG] = {}; \
>
>
> Please double check that BPF_REG_AX is not used by the interpreter.
>
> but it's starting to feel that my suggestion was premature.
> Probably better to keep this patchset as-is.
Okay, let me keep the current one.
* Re: [PATCH bpf-next 7/9] bpf: Prepare verifier logs for upcoming kfunc stack arguments
2026-04-22 0:37 ` Alexei Starovoitov
@ 2026-04-22 1:20 ` Yonghong Song
2026-04-22 1:52 ` Alexei Starovoitov
0 siblings, 1 reply; 25+ messages in thread
From: Yonghong Song @ 2026-04-22 1:20 UTC (permalink / raw)
To: Alexei Starovoitov, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau, Puranjay Mohan
On 4/21/26 5:37 PM, Alexei Starovoitov wrote:
> On Tue Apr 21, 2026 at 4:56 PM PDT, Yonghong Song wrote:
>>
>> On 4/21/26 3:07 PM, Alexei Starovoitov wrote:
>>> On Tue Apr 21, 2026 at 10:20 AM PDT, Yonghong Song wrote:
>>>> This change prepares verifier log reporting for upcoming kfunc stack
>>>> argument support.
>>>>
>>>> Today verifier log code mostly assumes that an argument can be described
>>>> directly by a register number. That works for arguments passed in `R1`
>>>> to `R5`, but it does not work once kfunc arguments can also be
>>>> passed on the stack.
>>>>
>>>> Introduce an internal `argno` representation such that register-passed
>>>> arguments keep using their real register numbers, while stack-passed
>>>> arguments use an encoded value above a dedicated base.
>>>> `reg_arg_name()` converts this representation into either `R%d` or
>>>> `*(R11-off)` when emitting verifier logs. If a particular `argno`
>>>> corresponds to a stack argument, print `*(R11-off)`; otherwise,
>>>> print `R%d`. Here R11 represents the base of stack arguments.
>>>>
>>>> This keeps existing logs readable for register arguments and allows the
>>>> same log sites to handle future stack arguments without open-coding
>>>> special cases.
>>>>
>>>> Update selftests accordingly.
>>>>
>>>> Acked-by: Puranjay Mohan <puranjay@kernel.org>
>>>> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
>>>> ---
>>>> include/linux/bpf_verifier.h | 1 +
>>>> kernel/bpf/verifier.c | 640 ++++++++++--------
>>>> .../testing/selftests/bpf/prog_tests/bpf_nf.c | 22 +-
>>>> .../selftests/bpf/prog_tests/cb_refs.c | 2 +-
>>>> .../selftests/bpf/prog_tests/kfunc_call.c | 2 +-
>>>> .../selftests/bpf/prog_tests/linked_list.c | 4 +-
>>>> .../selftests/bpf/progs/cgrp_kfunc_failure.c | 14 +-
>>>> .../selftests/bpf/progs/cpumask_failure.c | 10 +-
>>>> .../testing/selftests/bpf/progs/dynptr_fail.c | 22 +-
>>>> .../selftests/bpf/progs/file_reader_fail.c | 4 +-
>>>> tools/testing/selftests/bpf/progs/irq.c | 4 +-
>>>> tools/testing/selftests/bpf/progs/iters.c | 6 +-
>>>> .../selftests/bpf/progs/iters_state_safety.c | 14 +-
>>>> .../selftests/bpf/progs/iters_testmod.c | 4 +-
>>>> .../selftests/bpf/progs/iters_testmod_seq.c | 4 +-
>>>> .../selftests/bpf/progs/map_kptr_fail.c | 2 +-
>>>> .../selftests/bpf/progs/percpu_alloc_fail.c | 4 +-
>>>> .../testing/selftests/bpf/progs/rbtree_fail.c | 6 +-
>>>> .../bpf/progs/refcounted_kptr_fail.c | 2 +-
>>>> .../testing/selftests/bpf/progs/stream_fail.c | 2 +-
>>>> .../selftests/bpf/progs/task_kfunc_failure.c | 18 +-
>>>> .../selftests/bpf/progs/task_work_fail.c | 6 +-
>>>> .../selftests/bpf/progs/test_bpf_nf_fail.c | 8 +-
>>>> .../bpf/progs/test_kfunc_dynptr_param.c | 2 +-
>>>> .../bpf/progs/test_kfunc_param_nullable.c | 2 +-
>>>> .../selftests/bpf/progs/verifier_bits_iter.c | 4 +-
>>>> .../bpf/progs/verifier_ref_tracking.c | 6 +-
>>>> .../selftests/bpf/progs/verifier_vfs_reject.c | 8 +-
>>>> .../testing/selftests/bpf/progs/wq_failures.c | 2 +-
>>>> tools/testing/selftests/bpf/verifier/calls.c | 14 +-
>>>> 30 files changed, 464 insertions(+), 375 deletions(-)
>>>>
>>>> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
>>>> index b148f816f25b..d5b4303315dd 100644
>>>> --- a/include/linux/bpf_verifier.h
>>>> +++ b/include/linux/bpf_verifier.h
>>>> @@ -913,6 +913,7 @@ struct bpf_verifier_env {
>>>> * e.g., in reg_type_str() to generate reg_type string
>>>> */
>>>> char tmp_str_buf[TMP_STR_BUF_LEN];
>>>> + char tmp_arg_name[32];
>>>> struct bpf_insn insn_buf[INSN_BUF_SIZE];
>>>> struct bpf_insn epilogue_buf[INSN_BUF_SIZE];
>>>> struct bpf_scc_callchain callchain_buf;
>>>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>>>> index 18ab92581452..82568a427211 100644
>>>> --- a/kernel/bpf/verifier.c
>>>> +++ b/kernel/bpf/verifier.c
>>>> @@ -1742,6 +1742,44 @@ static struct bpf_verifier_state *push_stack(struct bpf_verifier_env *env,
>>>> return &elem->st;
>>>> }
>>>>
>>>> +#define STACK_ARGNO_BASE 100
>>>> +
>>>> +static bool is_stack_argno(int argno)
>>>> +{
>>>> + return argno > STACK_ARGNO_BASE;
>>>> +}
>>>> +
>>>> +/* arg starts at 1 */
>>>> +static u32 make_argno(u32 arg)
>>>> +{
>>>> + if (arg <= MAX_BPF_FUNC_REG_ARGS)
>>>> + return arg;
>>>> + return STACK_ARGNO_BASE + arg;
>>>> +}
>>> You can remove this and simplify everything further by
>>>
>>> static bool is_stack_argno(int argno)
>>> {
>>> return argno > MAX_BPF_FUNC_REG_ARGS;
>>> }
>>>
>>>> +
>>>> +static u32 arg_from_argno(int argno)
>>>> +{
>>>> + if (is_stack_argno(argno))
>>>> + return argno - STACK_ARGNO_BASE;
>>>> + return argno;
>>>> +}
>>> remove as well.
>>>
>>> and a comment like:
>>>
>>> /*
>>> * switch (argno) {
>>> * case 1: R1
>>> * case 5: R5
>>> * case 6: *(u64 *)(R11 +- 8)
>>> * case 7: *(u64 *)(R11 +- 16)
>>> */
>> This doesn't work. Let us see the following example:
>>
>> check_kfunc_args
>> process_dynptr_func (argno)
>> check_mem_access (argno, 4th argument)
>> check_packet_access (argno)
>> check_mem_region_access (argno)
>> __check_mem_access (argno)
>> <== verbose log with argno
>>
>> do_check
>> do_check_insn (env)
>> check_load_mem (insn)
>> check_mem_access (insn->src_reg, 4th argument)
>> check_packet_access (...)
>> check_mem_region_access (...)
>> __check_mem_access (insn->src_reg or argno)
> Ohh. Silent conversion. That's quite error prone.
>
> let's do
> typedef struct argno {
> int argno;
> } argno_t;
>
> and make sure this callchain passes argno_t unmodified:
>
> process_dynptr_func (argno)
> check_mem_access (argno, 4th argument)
> check_packet_access (argno) ...
>
> while here:
>
> check_load_mem (insn)
> check_mem_access (argno_from_reg(insn->src_reg), 4th argument)
>
> static argno_t argno_from_reg(u32 regno)
> {
> return (argno_t){ .argno = regno };
> }
>
> static argno_t argno_from_arg(u32 arg)
> {
> return (argno_t){ .argno = -arg };
> }
>
> static const char *reg_arg_name(struct bpf_verifier_env *env, argno_t argno)
>
> When positive vs negative is an internal implementation detail of argno_t
> it's fine. It's better than the shift-by-100, but when negative was
> used as a signal everywhere it leaked details to the caller.
This approach is kind of similar to what I proposed earlier with
an int variable, non-negative for reg and negative for arg (value -1/-2/...).
https://lore.kernel.org/bpf/20260412045857.256260-1-yonghong.song@linux.dev/
But it was not explicit beyond the name 'reg_or_arg'.
Here, the argno_t type makes it more explicit and should be better.
For printing, I guess we still want to print 'R#' whenever possible,
including positive registers and negative argno (1-5), and print
'*(R11-off)' for negative argno (6, 7, ...)?
* Re: [PATCH bpf-next 7/9] bpf: Prepare verifier logs for upcoming kfunc stack arguments
2026-04-22 1:20 ` Yonghong Song
@ 2026-04-22 1:52 ` Alexei Starovoitov
0 siblings, 0 replies; 25+ messages in thread
From: Alexei Starovoitov @ 2026-04-22 1:52 UTC (permalink / raw)
To: Yonghong Song, bpf
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Jose E . Marchesi, kernel-team, Martin KaFai Lau, Puranjay Mohan
On Tue Apr 21, 2026 at 6:20 PM PDT, Yonghong Song wrote:
>
>
> On 4/21/26 5:37 PM, Alexei Starovoitov wrote:
>> On Tue Apr 21, 2026 at 4:56 PM PDT, Yonghong Song wrote:
>>>
>>> On 4/21/26 3:07 PM, Alexei Starovoitov wrote:
>>>> On Tue Apr 21, 2026 at 10:20 AM PDT, Yonghong Song wrote:
>>>>> This change prepares verifier log reporting for upcoming kfunc stack
>>>>> argument support.
>>>>>
>>>>> Today verifier log code mostly assumes that an argument can be described
>>>>> directly by a register number. That works for arguments passed in `R1`
>>>>> to `R5`, but it does not work once kfunc arguments can also be
>>>>> passed on the stack.
>>>>>
>>>>> Introduce an internal `argno` representation such that register-passed
>>>>> arguments keep using their real register numbers, while stack-passed
>>>>> arguments use an encoded value above a dedicated base.
>>>>> `reg_arg_name()` converts this representation into either `R%d` or
>>>>> `*(R11-off)` when emitting verifier logs. If a particular `argno`
>>>>> corresponds to a stack argument, print `*(R11-off)`; otherwise,
>>>>> print `R%d`. Here R11 represents the base of stack arguments.
>>>>>
>>>>> This keeps existing logs readable for register arguments and allows the
>>>>> same log sites to handle future stack arguments without open-coding
>>>>> special cases.
>>>>>
>>>>> Update selftests accordingly.
>>>>>
>>>>> Acked-by: Puranjay Mohan <puranjay@kernel.org>
>>>>> Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
>>>>> ---
>>>>> include/linux/bpf_verifier.h | 1 +
>>>>> kernel/bpf/verifier.c | 640 ++++++++++--------
>>>>> .../testing/selftests/bpf/prog_tests/bpf_nf.c | 22 +-
>>>>> .../selftests/bpf/prog_tests/cb_refs.c | 2 +-
>>>>> .../selftests/bpf/prog_tests/kfunc_call.c | 2 +-
>>>>> .../selftests/bpf/prog_tests/linked_list.c | 4 +-
>>>>> .../selftests/bpf/progs/cgrp_kfunc_failure.c | 14 +-
>>>>> .../selftests/bpf/progs/cpumask_failure.c | 10 +-
>>>>> .../testing/selftests/bpf/progs/dynptr_fail.c | 22 +-
>>>>> .../selftests/bpf/progs/file_reader_fail.c | 4 +-
>>>>> tools/testing/selftests/bpf/progs/irq.c | 4 +-
>>>>> tools/testing/selftests/bpf/progs/iters.c | 6 +-
>>>>> .../selftests/bpf/progs/iters_state_safety.c | 14 +-
>>>>> .../selftests/bpf/progs/iters_testmod.c | 4 +-
>>>>> .../selftests/bpf/progs/iters_testmod_seq.c | 4 +-
>>>>> .../selftests/bpf/progs/map_kptr_fail.c | 2 +-
>>>>> .../selftests/bpf/progs/percpu_alloc_fail.c | 4 +-
>>>>> .../testing/selftests/bpf/progs/rbtree_fail.c | 6 +-
>>>>> .../bpf/progs/refcounted_kptr_fail.c | 2 +-
>>>>> .../testing/selftests/bpf/progs/stream_fail.c | 2 +-
>>>>> .../selftests/bpf/progs/task_kfunc_failure.c | 18 +-
>>>>> .../selftests/bpf/progs/task_work_fail.c | 6 +-
>>>>> .../selftests/bpf/progs/test_bpf_nf_fail.c | 8 +-
>>>>> .../bpf/progs/test_kfunc_dynptr_param.c | 2 +-
>>>>> .../bpf/progs/test_kfunc_param_nullable.c | 2 +-
>>>>> .../selftests/bpf/progs/verifier_bits_iter.c | 4 +-
>>>>> .../bpf/progs/verifier_ref_tracking.c | 6 +-
>>>>> .../selftests/bpf/progs/verifier_vfs_reject.c | 8 +-
>>>>> .../testing/selftests/bpf/progs/wq_failures.c | 2 +-
>>>>> tools/testing/selftests/bpf/verifier/calls.c | 14 +-
>>>>> 30 files changed, 464 insertions(+), 375 deletions(-)
>>>>>
>>>>> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
>>>>> index b148f816f25b..d5b4303315dd 100644
>>>>> --- a/include/linux/bpf_verifier.h
>>>>> +++ b/include/linux/bpf_verifier.h
>>>>> @@ -913,6 +913,7 @@ struct bpf_verifier_env {
>>>>> * e.g., in reg_type_str() to generate reg_type string
>>>>> */
>>>>> char tmp_str_buf[TMP_STR_BUF_LEN];
>>>>> + char tmp_arg_name[32];
>>>>> struct bpf_insn insn_buf[INSN_BUF_SIZE];
>>>>> struct bpf_insn epilogue_buf[INSN_BUF_SIZE];
>>>>> struct bpf_scc_callchain callchain_buf;
>>>>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>>>>> index 18ab92581452..82568a427211 100644
>>>>> --- a/kernel/bpf/verifier.c
>>>>> +++ b/kernel/bpf/verifier.c
>>>>> @@ -1742,6 +1742,44 @@ static struct bpf_verifier_state *push_stack(struct bpf_verifier_env *env,
>>>>> return &elem->st;
>>>>> }
>>>>>
>>>>> +#define STACK_ARGNO_BASE 100
>>>>> +
>>>>> +static bool is_stack_argno(int argno)
>>>>> +{
>>>>> + return argno > STACK_ARGNO_BASE;
>>>>> +}
>>>>> +
>>>>> +/* arg starts at 1 */
>>>>> +static u32 make_argno(u32 arg)
>>>>> +{
>>>>> + if (arg <= MAX_BPF_FUNC_REG_ARGS)
>>>>> + return arg;
>>>>> + return STACK_ARGNO_BASE + arg;
>>>>> +}
>>>> You can remove this and simplify everything further by
>>>>
>>>> static bool is_stack_argno(int argno)
>>>> {
>>>> return argno > MAX_BPF_FUNC_REG_ARGS;
>>>> }
>>>>
>>>>> +
>>>>> +static u32 arg_from_argno(int argno)
>>>>> +{
>>>>> + if (is_stack_argno(argno))
>>>>> + return argno - STACK_ARGNO_BASE;
>>>>> + return argno;
>>>>> +}
>>>> remove as well.
>>>>
>>>> and a comment like:
>>>>
>>>> /*
>>>> * switch (argno) {
>>>> * case 1: R1
>>>> * case 5: R5
>>>> * case 6: *(u64 *)(R11 +- 8)
>>>> * case 7: *(u64 *)(R11 +- 16)
>>>> */
>>> This doesn't work. Let us see the following example:
>>>
>>> check_kfunc_args
>>> process_dynptr_func (argno)
>>> check_mem_access (argno, 4th argument)
>>> check_packet_access (argno)
>>> check_mem_region_access (argno)
>>> __check_mem_access (argno)
>>> <== verbose log with argno
>>>
>>> do_check
>>> do_check_insn (env)
>>> check_load_mem (insn)
>>> check_mem_access (insn->src_reg, 4th argument)
>>> check_packet_access (...)
>>> check_mem_region_access (...)
>>> __check_mem_access (insn->src_reg or argno)
>> Ohh. Silent conversion. That's quite error prone.
>>
>> let's do
>> typedef struct argno {
>> int argno;
>> } argno_t;
>>
>> and make sure this callchain passes argno_t unmodified:
>>
>> process_dynptr_func (argno)
>> check_mem_access (argno, 4th argument)
>> check_packet_access (argno) ...
>>
>> while here:
>>
>> check_load_mem (insn)
>> check_mem_access (argno_from_reg(insn->src_reg), 4th argument)
>>
>> static argno_t argno_from_reg(u32 regno)
>> {
>> return (argno_t){ .argno = regno };
>> }
>>
>> static argno_t argno_from_arg(u32 arg)
>> {
>> return (argno_t){ .argno = -arg };
>> }
>>
>> static const char *reg_arg_name(struct bpf_verifier_env *env, argno_t argno)
>>
>> When positive vs negative is an internal implementation detail of argno_t
>> it's fine. It's better than the shift-by-100, but when negative was
>> used as a signal everywhere it leaked details to the caller.
>
> This approach is kind of similar to what I proposed earlier with
> an int variable, non-negative for reg and negative for arg (value -1/-2/...).
> https://lore.kernel.org/bpf/20260412045857.256260-1-yonghong.song@linux.dev/
Of course. It's your proposal. I just relayed it back.
My objection to in-band signaling was that it leaks into the caller,
and you "fixed" that with make_argno() plus the "shift-by-100",
while I was objecting to the "leaks to the caller" part.
In-band signaling almost always sucks, but when it's there
to save an extra word from being passed around by value,
it's worth doing. But only if it's hidden behind an API.
> But it was not explicit beyond the name 'reg_or_arg'.
> Here, the argno_t type makes it more explicit and should be better.
"Not explicit" is the key.
When it's abstracted it can change into "shift-by-100"
or an extra bit without affecting callers or callees.
> For printing, I guess we still want to print 'R#' whenever possible,
> including positive registers and negative argno (1-5), and print
> '*(R11-off)' for negative argno (6, 7, ...)?
Yes, because that's what the verifier log just before the error
message has. 'arg#' is disconnected from the verifier output
that immediately precedes that error message. That's bad
for humans and especially bad for agents that cannot
connect the dots: r1=..; arg#0 is invalid because ...;
How is an agent supposed to infer that arg#0 is the same as r1?
It can if it thinks "effort xhigh", but it shouldn't need to do that.
Thread overview: 25+ messages
2026-04-21 17:19 [PATCH bpf-next 0/9] bpf: Prepare to support stack arguments Yonghong Song
2026-04-21 17:19 ` [PATCH bpf-next 1/9] bpf: Remove unused parameter from check_map_kptr_access() Yonghong Song
2026-04-21 17:19 ` [PATCH bpf-next 2/9] bpf: Fix tail_call_reachable leak Yonghong Song
2026-04-21 18:06 ` bot+bpf-ci
2026-04-22 0:29 ` Yonghong Song
2026-04-21 17:19 ` [PATCH bpf-next 3/9] bpf: Remove WARN_ON_ONCE in check_kfunc_mem_size_reg() Yonghong Song
2026-04-21 17:19 ` [PATCH bpf-next 4/9] bpf: Refactor to avoid redundant calculation of bpf_reg_state Yonghong Song
2026-04-21 21:40 ` Amery Hung
2026-04-21 23:42 ` Yonghong Song
2026-04-21 17:19 ` [PATCH bpf-next 5/9] bpf: Refactor to handle memory and size together Yonghong Song
2026-04-21 17:19 ` [PATCH bpf-next 6/9] bpf: Rename existing argno to arg Yonghong Song
2026-04-21 17:20 ` [PATCH bpf-next 7/9] bpf: Prepare verifier logs for upcoming kfunc stack arguments Yonghong Song
2026-04-21 22:07 ` Alexei Starovoitov
2026-04-21 23:56 ` Yonghong Song
2026-04-22 0:37 ` Alexei Starovoitov
2026-04-22 1:20 ` Yonghong Song
2026-04-22 1:52 ` Alexei Starovoitov
2026-04-21 17:20 ` [PATCH bpf-next 8/9] bpf: Introduce bpf register BPF_REG_PARAMS Yonghong Song
2026-04-21 22:10 ` Alexei Starovoitov
2026-04-22 0:09 ` Yonghong Song
2026-04-22 0:42 ` Alexei Starovoitov
2026-04-22 1:10 ` Yonghong Song
2026-04-21 17:20 ` [PATCH bpf-next 9/9] bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments Yonghong Song
2026-04-21 17:52 ` bot+bpf-ci
2026-04-21 19:13 ` [PATCH bpf-next 0/9] bpf: Prepare to support stack arguments Kumar Kartikeya Dwivedi