public inbox for bpf@vger.kernel.org
* [PATCH bpf-next v4 00/18] bpf: Support stack arguments for BPF functions and kfuncs
@ 2026-04-12  4:58 Yonghong Song
  2026-04-12  4:58 ` [PATCH bpf-next v4 01/18] bpf: Remove unused parameter from check_map_kptr_access() Yonghong Song
                   ` (17 more replies)
  0 siblings, 18 replies; 27+ messages in thread
From: Yonghong Song @ 2026-04-12  4:58 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Jose E . Marchesi, kernel-team, Martin KaFai Lau

Currently, BPF function calls and kfuncs are limited to 5 register-based
parameters. For BPF-to-BPF calls with more than 5 parameters,
developers can work around the limit by always-inlining the function,
or by packing the extra parameters into a struct and passing a pointer
to that struct. But there is no workaround for kfuncs if more than 5
parameters are needed.

This patch set lifts the 5-argument limit by introducing stack-based
argument passing for BPF functions and kfuncs, coordinated with
compiler support in LLVM [1]. The compiler emits loads/stores through
a new BPF register r12 (BPF_REG_STACK_ARG_BASE) to pass arguments beyond
the 5th, keeping the stack arg area separate from the r10-based program
stack. The maximum number of arguments is capped at MAX_BPF_FUNC_ARGS
(12), which is sufficient for the vast majority of use cases.

The x86_64 JIT translates r12-relative accesses into RBP-relative
native instructions. Each function's stack allocation is extended
by 'max_outgoing' bytes to hold the outgoing arg area below the
callee-saved registers. This makes the implementation easier since r10
can still be used for stack argument access. At both BPF-to-BPF and
kfunc call sites, outgoing args are stored directly at the locations
the calling convention expects, and incoming parameters read their
values directly from the caller's stack.

To prepare for kfunc stack arguments, existing code is first refactored
to use bpf_reg_state as much as possible instead of regno, and to pass
regno/argno in a single variable where a non-negative value means regno
and a negative value means argno.

Global subprogs with >5 args are not yet supported. Only x86_64
is supported for now.

Patch breakdown: patches 1-6 are preparatory changes for the later
kfunc stack argument support. Patches 7-10 support BPF-to-BPF stack
arguments. Patch 11 rejects stack arguments in the interpreter.
Patch 12 rejects subprogs with stack arguments if they are tail-call
reachable. Patch 13 adds stack argument support for kfuncs. Patch 14
enables stack arguments for x86_64 and patch 15 implements the x86_64
JIT support. Patches 16-18 add test cases.

  [1] https://github.com/llvm/llvm-project/pull/189060

Note:
  - The patch set is on top of the following commit:
    2ec74a053611  bpf: Simplify do_check_insn()
  - This patch set requires the latest llvm23 compiler. A build failure like
    the following may appear:
      /home/yhs/work/bpf-next/scripts/mod/modpost.c:59:13: error: variable 'extra_warn' set but not used [-Werror,-Wunused-but-set-global]
             59 | static bool extra_warn;
                |             ^
          1 error generated.
    In this case, the following hack can work around the build issue:
      --- a/Makefile
      +++ b/Makefile
      @@ -467,7 +467,7 @@ KERNELDOC       = $(srctree)/tools/docs/kernel-doc
       export KERNELDOC
 
       KBUILD_USERHOSTCFLAGS := -Wall -Wmissing-prototypes -Wstrict-prototypes \
      -                        -O2 -fomit-frame-pointer -std=gnu11
      +                        -O2 -fomit-frame-pointer -std=gnu11 -Wno-unused-but-set-global
       KBUILD_USERCFLAGS  := $(KBUILD_USERHOSTCFLAGS) $(USERCFLAGS)
       KBUILD_USERLDFLAGS := $(USERLDFLAGS)

Changelogs:
  v3 -> v4:
    - v3: https://lore.kernel.org/bpf/20260405172505.1329392-1-yonghong.song@linux.dev/
    - Refactor/modify code to make the later kfunc stack argument support easier
    - Invalidate outgoing slots immediately after the call to prevent reuse
    - Fix interaction between stack argument PTR_TO_STACK and dead slot poisoning
    - Reject stack arguments if tail call reachable
    - Disable private stack if stack argument is used
    - Allocate the outgoing stack argument region after the callee-saved
      registers, which simplifies the JITed code a lot.
  v2 -> v3:
    - v2: https://lore.kernel.org/bpf/20260405165300.826241-1-yonghong.song@linux.dev/
    - Fix selftest stack_arg_gap_at_minus8().
    - Fix a few 'UTF-8' issues.
  v1 -> v2:
    - v1: https://lore.kernel.org/bpf/20260402012727.3916819-1-yonghong.song@linux.dev/
    - Add stack_arg_safe() to do pruning for stack arguments.
    - Fix an issue with KF_ARG_PTR_TO_MEM_SIZE. Since a fake register is
      used, add verifier log messages to indicate the start and end of such
      fake register usage.
    - For the x86_64 JIT, copy incoming parameter values directly from the
      caller's stack.
    - Add test cases with stack arguments, e.g. mem, mem+size, dynptr, iter, etc.

Yonghong Song (18):
  bpf: Remove unused parameter from check_map_kptr_access()
  bpf: Change from "arg #%d" to "arg#%d" in verifier log
  bpf: Refactor to avoid redundant calculation of bpf_reg_state
  bpf: Refactor to handle memory and size together
  bpf: Change some regno type from u32 to int type
  bpf: Use argument index instead of register index in kfunc verifier
    logs
  bpf: Introduce bpf register BPF_REG_STACK_ARG_BASE
  bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments
  bpf: Support stack arguments for bpf functions
  bpf: Fix interaction between stack argument PTR_TO_STACK and dead slot
    poisoning
  bpf: Reject stack arguments in non-JITed programs
  bpf: Reject stack arguments if tail call reachable
  bpf: Support stack arguments for kfunc calls
  bpf: Enable stack argument support for x86_64
  bpf,x86: Implement JIT support for stack arguments
  selftests/bpf: Add tests for BPF function stack arguments
  selftests/bpf: Add negative test for greater-than-8-byte kfunc stack
    argument
  selftests/bpf: Add verifier tests for stack argument validation

 arch/x86/net/bpf_jit_comp.c                   |  177 ++-
 include/linux/bpf.h                           |    6 +
 include/linux/bpf_verifier.h                  |   32 +-
 include/linux/filter.h                        |    4 +-
 kernel/bpf/btf.c                              |   21 +-
 kernel/bpf/core.c                             |   12 +-
 kernel/bpf/verifier.c                         | 1038 ++++++++++++-----
 .../selftests/bpf/prog_tests/cb_refs.c        |    2 +-
 .../selftests/bpf/prog_tests/linked_list.c    |    4 +-
 .../selftests/bpf/prog_tests/stack_arg.c      |  132 +++
 .../selftests/bpf/prog_tests/stack_arg_fail.c |   24 +
 .../selftests/bpf/prog_tests/verifier.c       |    2 +
 .../selftests/bpf/progs/cpumask_failure.c     |    4 +-
 .../testing/selftests/bpf/progs/dynptr_fail.c |   26 +-
 .../selftests/bpf/progs/file_reader_fail.c    |    4 +-
 .../selftests/bpf/progs/iters_state_safety.c  |   14 +-
 .../selftests/bpf/progs/iters_testmod.c       |    6 +-
 .../selftests/bpf/progs/iters_testmod_seq.c   |    4 +-
 .../bpf/progs/local_kptr_stash_fail.c         |    2 +-
 .../selftests/bpf/progs/map_kptr_fail.c       |    4 +-
 .../bpf/progs/mem_rdonly_untrusted.c          |    2 +-
 .../bpf/progs/nested_trust_failure.c          |    2 +-
 .../selftests/bpf/progs/res_spin_lock_fail.c  |    2 +-
 tools/testing/selftests/bpf/progs/stack_arg.c |  212 ++++
 .../selftests/bpf/progs/stack_arg_fail.c      |   32 +
 .../selftests/bpf/progs/stack_arg_kfunc.c     |  164 +++
 .../testing/selftests/bpf/progs/stream_fail.c |    2 +-
 .../selftests/bpf/progs/task_kfunc_failure.c  |    4 +-
 .../selftests/bpf/progs/verifier_bits_iter.c  |    4 +-
 .../bpf/progs/verifier_cgroup_storage.c       |    4 +-
 .../selftests/bpf/progs/verifier_ctx.c        |    2 +-
 .../bpf/progs/verifier_ref_tracking.c         |    2 +-
 .../selftests/bpf/progs/verifier_sock.c       |    6 +-
 .../selftests/bpf/progs/verifier_stack_arg.c  |  316 +++++
 .../selftests/bpf/progs/verifier_unpriv.c     |    4 +-
 .../selftests/bpf/progs/verifier_vfs_reject.c |    8 +-
 .../testing/selftests/bpf/progs/wq_failures.c |    4 +-
 .../selftests/bpf/test_kmods/bpf_testmod.c    |   73 ++
 .../bpf/test_kmods/bpf_testmod_kfunc.h        |   26 +
 tools/testing/selftests/bpf/verifier/calls.c  |    6 +-
 .../testing/selftests/bpf/verifier/map_kptr.c |   10 +-
 41 files changed, 1996 insertions(+), 407 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/stack_arg.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/stack_arg_fail.c
 create mode 100644 tools/testing/selftests/bpf/progs/stack_arg.c
 create mode 100644 tools/testing/selftests/bpf/progs/stack_arg_fail.c
 create mode 100644 tools/testing/selftests/bpf/progs/stack_arg_kfunc.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_stack_arg.c

-- 
2.52.0



* [PATCH bpf-next v4 01/18] bpf: Remove unused parameter from check_map_kptr_access()
  2026-04-12  4:58 [PATCH bpf-next v4 00/18] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
@ 2026-04-12  4:58 ` Yonghong Song
  2026-04-12  4:58 ` [PATCH bpf-next v4 02/18] bpf: Change from "arg #%d" to "arg#%d" in verifier log Yonghong Song
                   ` (16 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Yonghong Song @ 2026-04-12  4:58 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Jose E . Marchesi, kernel-team, Martin KaFai Lau

The parameter 'regno' in check_map_kptr_access() is unused. Remove it.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 kernel/bpf/verifier.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 967e132f2662..3ba837e4b591 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -6264,7 +6264,7 @@ static int mark_uptr_ld_reg(struct bpf_verifier_env *env, u32 regno,
 	return 0;
 }
 
-static int check_map_kptr_access(struct bpf_verifier_env *env, u32 regno,
+static int check_map_kptr_access(struct bpf_verifier_env *env,
 				 int value_regno, int insn_idx,
 				 struct btf_field *kptr_field)
 {
@@ -7926,7 +7926,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 			kptr_field = btf_record_find(reg->map_ptr->record,
 						     off + reg->var_off.value, BPF_KPTR | BPF_UPTR);
 		if (kptr_field) {
-			err = check_map_kptr_access(env, regno, value_regno, insn_idx, kptr_field);
+			err = check_map_kptr_access(env, value_regno, insn_idx, kptr_field);
 		} else if (t == BPF_READ && value_regno >= 0) {
 			struct bpf_map *map = reg->map_ptr;
 
-- 
2.52.0



* [PATCH bpf-next v4 02/18] bpf: Change from "arg #%d" to "arg#%d" in verifier log
  2026-04-12  4:58 [PATCH bpf-next v4 00/18] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
  2026-04-12  4:58 ` [PATCH bpf-next v4 01/18] bpf: Remove unused parameter from check_map_kptr_access() Yonghong Song
@ 2026-04-12  4:58 ` Yonghong Song
  2026-04-12  4:58 ` [PATCH bpf-next v4 03/18] bpf: Refactor to avoid redundant calculation of bpf_reg_state Yonghong Song
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Yonghong Song @ 2026-04-12  4:58 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Jose E . Marchesi, kernel-team, Martin KaFai Lau

In the verifier, there are 31 log messages with "arg#%d" and 5 with
"arg #%d". Consolidate all of these to "arg#%d", since a later patch
adds a helper function that emits "arg#%d" as the defined format.
Some related selftests are adjusted as well.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 kernel/bpf/verifier.c                         | 10 +++++-----
 .../testing/selftests/bpf/progs/dynptr_fail.c | 20 +++++++++----------
 .../selftests/bpf/progs/file_reader_fail.c    |  4 ++--
 .../selftests/bpf/progs/iters_state_safety.c  | 14 ++++++-------
 .../selftests/bpf/progs/iters_testmod_seq.c   |  4 ++--
 .../selftests/bpf/progs/verifier_bits_iter.c  |  4 ++--
 6 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 3ba837e4b591..6469e71cd1fa 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -9074,7 +9074,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
 
 		if (!is_dynptr_reg_valid_init(env, reg)) {
 			verbose(env,
-				"Expected an initialized dynptr as arg #%d\n",
+				"Expected an initialized dynptr as arg#%d\n",
 				regno - 1);
 			return -EINVAL;
 		}
@@ -9082,7 +9082,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
 		/* Fold modifiers (in this case, MEM_RDONLY) when checking expected type */
 		if (!is_dynptr_type_expected(env, reg, arg_type & ~MEM_RDONLY)) {
 			verbose(env,
-				"Expected a dynptr of type %s as arg #%d\n",
+				"Expected a dynptr of type %s as arg#%d\n",
 				dynptr_type_str(arg_to_dynptr_type(arg_type)), regno - 1);
 			return -EINVAL;
 		}
@@ -9152,7 +9152,7 @@ static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_id
 	 */
 	btf_id = btf_check_iter_arg(meta->btf, meta->func_proto, regno - 1);
 	if (btf_id < 0) {
-		verbose(env, "expected valid iter pointer as arg #%d\n", regno - 1);
+		verbose(env, "expected valid iter pointer as arg#%d\n", regno - 1);
 		return -EINVAL;
 	}
 	t = btf_type_by_id(meta->btf, btf_id);
@@ -9161,7 +9161,7 @@ static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_id
 	if (is_iter_new_kfunc(meta)) {
 		/* bpf_iter_<type>_new() expects pointer to uninit iter state */
 		if (!is_iter_reg_valid_uninit(env, reg, nr_slots)) {
-			verbose(env, "expected uninitialized iter_%s as arg #%d\n",
+			verbose(env, "expected uninitialized iter_%s as arg#%d\n",
 				iter_type_str(meta->btf, btf_id), regno - 1);
 			return -EINVAL;
 		}
@@ -9185,7 +9185,7 @@ static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_id
 		case 0:
 			break;
 		case -EINVAL:
-			verbose(env, "expected an initialized iter_%s as arg #%d\n",
+			verbose(env, "expected an initialized iter_%s as arg#%d\n",
 				iter_type_str(meta->btf, btf_id), regno - 1);
 			return err;
 		case -EPROTO:
diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c
index b62773ce5219..d552117b001e 100644
--- a/tools/testing/selftests/bpf/progs/dynptr_fail.c
+++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c
@@ -149,7 +149,7 @@ int ringbuf_release_uninit_dynptr(void *ctx)
 
 /* A dynptr can't be used after it has been invalidated */
 SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #2")
+__failure __msg("Expected an initialized dynptr as arg#2")
 int use_after_invalid(void *ctx)
 {
 	struct bpf_dynptr ptr;
@@ -448,7 +448,7 @@ int invalid_helper2(void *ctx)
 
 /* A bpf_dynptr is invalidated if it's been written into */
 SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #0")
+__failure __msg("Expected an initialized dynptr as arg#0")
 int invalid_write1(void *ctx)
 {
 	struct bpf_dynptr ptr;
@@ -1642,7 +1642,7 @@ int invalid_slice_rdwr_rdonly(struct __sk_buff *skb)
 
 /* bpf_dynptr_adjust can only be called on initialized dynptrs */
 SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #0")
+__failure __msg("Expected an initialized dynptr as arg#0")
 int dynptr_adjust_invalid(void *ctx)
 {
 	struct bpf_dynptr ptr = {};
@@ -1655,7 +1655,7 @@ int dynptr_adjust_invalid(void *ctx)
 
 /* bpf_dynptr_is_null can only be called on initialized dynptrs */
 SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #0")
+__failure __msg("Expected an initialized dynptr as arg#0")
 int dynptr_is_null_invalid(void *ctx)
 {
 	struct bpf_dynptr ptr = {};
@@ -1668,7 +1668,7 @@ int dynptr_is_null_invalid(void *ctx)
 
 /* bpf_dynptr_is_rdonly can only be called on initialized dynptrs */
 SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #0")
+__failure __msg("Expected an initialized dynptr as arg#0")
 int dynptr_is_rdonly_invalid(void *ctx)
 {
 	struct bpf_dynptr ptr = {};
@@ -1681,7 +1681,7 @@ int dynptr_is_rdonly_invalid(void *ctx)
 
 /* bpf_dynptr_size can only be called on initialized dynptrs */
 SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #0")
+__failure __msg("Expected an initialized dynptr as arg#0")
 int dynptr_size_invalid(void *ctx)
 {
 	struct bpf_dynptr ptr = {};
@@ -1694,7 +1694,7 @@ int dynptr_size_invalid(void *ctx)
 
 /* Only initialized dynptrs can be cloned */
 SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #0")
+__failure __msg("Expected an initialized dynptr as arg#0")
 int clone_invalid1(void *ctx)
 {
 	struct bpf_dynptr ptr1 = {};
@@ -1728,7 +1728,7 @@ int clone_invalid2(struct xdp_md *xdp)
 
 /* Invalidating a dynptr should invalidate its clones */
 SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #2")
+__failure __msg("Expected an initialized dynptr as arg#2")
 int clone_invalidate1(void *ctx)
 {
 	struct bpf_dynptr clone;
@@ -1749,7 +1749,7 @@ int clone_invalidate1(void *ctx)
 
 /* Invalidating a dynptr should invalidate its parent */
 SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #2")
+__failure __msg("Expected an initialized dynptr as arg#2")
 int clone_invalidate2(void *ctx)
 {
 	struct bpf_dynptr ptr;
@@ -1770,7 +1770,7 @@ int clone_invalidate2(void *ctx)
 
 /* Invalidating a dynptr should invalidate its siblings */
 SEC("?raw_tp")
-__failure __msg("Expected an initialized dynptr as arg #2")
+__failure __msg("Expected an initialized dynptr as arg#2")
 int clone_invalidate3(void *ctx)
 {
 	struct bpf_dynptr ptr;
diff --git a/tools/testing/selftests/bpf/progs/file_reader_fail.c b/tools/testing/selftests/bpf/progs/file_reader_fail.c
index 32fe28ed2439..429831ca9154 100644
--- a/tools/testing/selftests/bpf/progs/file_reader_fail.c
+++ b/tools/testing/selftests/bpf/progs/file_reader_fail.c
@@ -30,7 +30,7 @@ int on_nanosleep_unreleased_ref(void *ctx)
 
 SEC("xdp")
 __failure
-__msg("Expected a dynptr of type file as arg #0")
+__msg("Expected a dynptr of type file as arg#0")
 int xdp_wrong_dynptr_type(struct xdp_md *xdp)
 {
 	struct bpf_dynptr dynptr;
@@ -42,7 +42,7 @@ int xdp_wrong_dynptr_type(struct xdp_md *xdp)
 
 SEC("xdp")
 __failure
-__msg("Expected an initialized dynptr as arg #0")
+__msg("Expected an initialized dynptr as arg#0")
 int xdp_no_dynptr_type(struct xdp_md *xdp)
 {
 	struct bpf_dynptr dynptr;
diff --git a/tools/testing/selftests/bpf/progs/iters_state_safety.c b/tools/testing/selftests/bpf/progs/iters_state_safety.c
index d273b46dfc7c..88cdd8d46373 100644
--- a/tools/testing/selftests/bpf/progs/iters_state_safety.c
+++ b/tools/testing/selftests/bpf/progs/iters_state_safety.c
@@ -73,7 +73,7 @@ int create_and_forget_to_destroy_fail(void *ctx)
 }
 
 SEC("?raw_tp")
-__failure __msg("expected an initialized iter_num as arg #0")
+__failure __msg("expected an initialized iter_num as arg#0")
 int destroy_without_creating_fail(void *ctx)
 {
 	/* init with zeros to stop verifier complaining about uninit stack */
@@ -91,7 +91,7 @@ int destroy_without_creating_fail(void *ctx)
 }
 
 SEC("?raw_tp")
-__failure __msg("expected an initialized iter_num as arg #0")
+__failure __msg("expected an initialized iter_num as arg#0")
 int compromise_iter_w_direct_write_fail(void *ctx)
 {
 	struct bpf_iter_num iter;
@@ -143,7 +143,7 @@ int compromise_iter_w_direct_write_and_skip_destroy_fail(void *ctx)
 }
 
 SEC("?raw_tp")
-__failure __msg("expected an initialized iter_num as arg #0")
+__failure __msg("expected an initialized iter_num as arg#0")
 int compromise_iter_w_helper_write_fail(void *ctx)
 {
 	struct bpf_iter_num iter;
@@ -230,7 +230,7 @@ int valid_stack_reuse(void *ctx)
 }
 
 SEC("?raw_tp")
-__failure __msg("expected uninitialized iter_num as arg #0")
+__failure __msg("expected uninitialized iter_num as arg#0")
 int double_create_fail(void *ctx)
 {
 	struct bpf_iter_num iter;
@@ -258,7 +258,7 @@ int double_create_fail(void *ctx)
 }
 
 SEC("?raw_tp")
-__failure __msg("expected an initialized iter_num as arg #0")
+__failure __msg("expected an initialized iter_num as arg#0")
 int double_destroy_fail(void *ctx)
 {
 	struct bpf_iter_num iter;
@@ -284,7 +284,7 @@ int double_destroy_fail(void *ctx)
 }
 
 SEC("?raw_tp")
-__failure __msg("expected an initialized iter_num as arg #0")
+__failure __msg("expected an initialized iter_num as arg#0")
 int next_without_new_fail(void *ctx)
 {
 	struct bpf_iter_num iter;
@@ -305,7 +305,7 @@ int next_without_new_fail(void *ctx)
 }
 
 SEC("?raw_tp")
-__failure __msg("expected an initialized iter_num as arg #0")
+__failure __msg("expected an initialized iter_num as arg#0")
 int next_after_destroy_fail(void *ctx)
 {
 	struct bpf_iter_num iter;
diff --git a/tools/testing/selftests/bpf/progs/iters_testmod_seq.c b/tools/testing/selftests/bpf/progs/iters_testmod_seq.c
index 83791348bed5..fd160c14289f 100644
--- a/tools/testing/selftests/bpf/progs/iters_testmod_seq.c
+++ b/tools/testing/selftests/bpf/progs/iters_testmod_seq.c
@@ -79,7 +79,7 @@ int testmod_seq_truncated(const void *ctx)
 
 SEC("?raw_tp")
 __failure
-__msg("expected an initialized iter_testmod_seq as arg #1")
+__msg("expected an initialized iter_testmod_seq as arg#1")
 int testmod_seq_getter_before_bad(const void *ctx)
 {
 	struct bpf_iter_testmod_seq it;
@@ -89,7 +89,7 @@ int testmod_seq_getter_before_bad(const void *ctx)
 
 SEC("?raw_tp")
 __failure
-__msg("expected an initialized iter_testmod_seq as arg #1")
+__msg("expected an initialized iter_testmod_seq as arg#1")
 int testmod_seq_getter_after_bad(const void *ctx)
 {
 	struct bpf_iter_testmod_seq it;
diff --git a/tools/testing/selftests/bpf/progs/verifier_bits_iter.c b/tools/testing/selftests/bpf/progs/verifier_bits_iter.c
index 8bcddadfc4da..86f1e5a8e87f 100644
--- a/tools/testing/selftests/bpf/progs/verifier_bits_iter.c
+++ b/tools/testing/selftests/bpf/progs/verifier_bits_iter.c
@@ -32,7 +32,7 @@ int BPF_PROG(no_destroy, struct bpf_iter_meta *meta, struct cgroup *cgrp)
 
 SEC("iter/cgroup")
 __description("uninitialized iter in ->next()")
-__failure __msg("expected an initialized iter_bits as arg #0")
+__failure __msg("expected an initialized iter_bits as arg#0")
 int BPF_PROG(next_uninit, struct bpf_iter_meta *meta, struct cgroup *cgrp)
 {
 	struct bpf_iter_bits it = {};
@@ -43,7 +43,7 @@ int BPF_PROG(next_uninit, struct bpf_iter_meta *meta, struct cgroup *cgrp)
 
 SEC("iter/cgroup")
 __description("uninitialized iter in ->destroy()")
-__failure __msg("expected an initialized iter_bits as arg #0")
+__failure __msg("expected an initialized iter_bits as arg#0")
 int BPF_PROG(destroy_uninit, struct bpf_iter_meta *meta, struct cgroup *cgrp)
 {
 	struct bpf_iter_bits it = {};
-- 
2.52.0



* [PATCH bpf-next v4 03/18] bpf: Refactor to avoid redundant calculation of bpf_reg_state
  2026-04-12  4:58 [PATCH bpf-next v4 00/18] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
  2026-04-12  4:58 ` [PATCH bpf-next v4 01/18] bpf: Remove unused parameter from check_map_kptr_access() Yonghong Song
  2026-04-12  4:58 ` [PATCH bpf-next v4 02/18] bpf: Change from "arg #%d" to "arg#%d" in verifier log Yonghong Song
@ 2026-04-12  4:58 ` Yonghong Song
  2026-04-12  5:31   ` bot+bpf-ci
  2026-04-12  4:58 ` [PATCH bpf-next v4 04/18] bpf: Refactor to handle memory and size together Yonghong Song
                   ` (14 subsequent siblings)
  17 siblings, 1 reply; 27+ messages in thread
From: Yonghong Song @ 2026-04-12  4:58 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Jose E . Marchesi, kernel-team, Martin KaFai Lau

In many cases, once a bpf_reg_state is available, it can be passed
directly to callees; otherwise, each callee needs to look up the
bpf_reg_state again based on regno. More importantly, this is needed
for the later kfunc stack argument support, since the register state
for a stack argument does not have a corresponding regno. So it makes
sense to pass the reg state to callees.

The following is the only change needed to avoid a compilation warning:
   static int sanitize_check_bounds(struct bpf_verifier_env *env,
                                   const struct bpf_insn *insn,
  -                                const struct bpf_reg_state *dst_reg)
  +                                struct bpf_reg_state *dst_reg)

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 kernel/bpf/verifier.c | 207 ++++++++++++++++++------------------------
 1 file changed, 90 insertions(+), 117 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 6469e71cd1fa..4c67a15c73e1 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5486,13 +5486,13 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
 static int check_stack_write_var_off(struct bpf_verifier_env *env,
 				     /* func where register points to */
 				     struct bpf_func_state *state,
-				     int ptr_regno, int off, int size,
+				     struct bpf_reg_state *ptr_reg, int off, int size,
 				     int value_regno, int insn_idx)
 {
 	struct bpf_func_state *cur; /* state of the current function */
 	int min_off, max_off;
 	int i, err;
-	struct bpf_reg_state *ptr_reg = NULL, *value_reg = NULL;
+	struct bpf_reg_state *value_reg = NULL;
 	struct bpf_insn *insn = &env->prog->insnsi[insn_idx];
 	bool writing_zero = false;
 	/* set if the fact that we're writing a zero is used to let any
@@ -5501,7 +5501,6 @@ static int check_stack_write_var_off(struct bpf_verifier_env *env,
 	bool zero_used = false;
 
 	cur = env->cur_state->frame[env->cur_state->curframe];
-	ptr_reg = &cur->regs[ptr_regno];
 	min_off = ptr_reg->smin_value + off;
 	max_off = ptr_reg->smax_value + off + size;
 	if (value_regno >= 0)
@@ -5798,7 +5797,7 @@ enum bpf_access_src {
 	ACCESS_HELPER = 2,  /* the access is performed by a helper */
 };
 
-static int check_stack_range_initialized(struct bpf_verifier_env *env,
+static int check_stack_range_initialized(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
 					 int regno, int off, int access_size,
 					 bool zero_size_allowed,
 					 enum bpf_access_type type,
@@ -5822,18 +5821,16 @@ static struct bpf_reg_state *reg_state(struct bpf_verifier_env *env, int regno)
  * offset; for a fixed offset check_stack_read_fixed_off should be used
  * instead.
  */
-static int check_stack_read_var_off(struct bpf_verifier_env *env,
+static int check_stack_read_var_off(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
 				    int ptr_regno, int off, int size, int dst_regno)
 {
-	/* The state of the source register. */
-	struct bpf_reg_state *reg = reg_state(env, ptr_regno);
 	struct bpf_func_state *ptr_state = func(env, reg);
 	int err;
 	int min_off, max_off;
 
 	/* Note that we pass a NULL meta, so raw access will not be permitted.
 	 */
-	err = check_stack_range_initialized(env, ptr_regno, off, size,
+	err = check_stack_range_initialized(env, reg, ptr_regno, off, size,
 					    false, BPF_READ, NULL);
 	if (err)
 		return err;
@@ -5855,10 +5852,9 @@ static int check_stack_read_var_off(struct bpf_verifier_env *env,
  * can be -1, meaning that the read value is not going to a register.
  */
 static int check_stack_read(struct bpf_verifier_env *env,
-			    int ptr_regno, int off, int size,
+			    struct bpf_reg_state *reg, int ptr_regno, int off, int size,
 			    int dst_regno)
 {
-	struct bpf_reg_state *reg = reg_state(env, ptr_regno);
 	struct bpf_func_state *state = func(env, reg);
 	int err;
 	/* Some accesses are only permitted with a static offset. */
@@ -5894,7 +5890,7 @@ static int check_stack_read(struct bpf_verifier_env *env,
 		 * than fixed offset ones. Note that dst_regno >= 0 on this
 		 * branch.
 		 */
-		err = check_stack_read_var_off(env, ptr_regno, off, size,
+		err = check_stack_read_var_off(env, reg, ptr_regno, off, size,
 					       dst_regno);
 	}
 	return err;
@@ -5911,10 +5907,9 @@ static int check_stack_read(struct bpf_verifier_env *env,
  * The caller must ensure that the offset falls within the maximum stack size.
  */
 static int check_stack_write(struct bpf_verifier_env *env,
-			     int ptr_regno, int off, int size,
+			     struct bpf_reg_state *reg, int off, int size,
 			     int value_regno, int insn_idx)
 {
-	struct bpf_reg_state *reg = reg_state(env, ptr_regno);
 	struct bpf_func_state *state = func(env, reg);
 	int err;
 
@@ -5927,16 +5922,15 @@ static int check_stack_write(struct bpf_verifier_env *env,
 		 * than fixed offset ones.
 		 */
 		err = check_stack_write_var_off(env, state,
-						ptr_regno, off, size,
+						reg, off, size,
 						value_regno, insn_idx);
 	}
 	return err;
 }
 
-static int check_map_access_type(struct bpf_verifier_env *env, u32 regno,
+static int check_map_access_type(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
 				 int off, int size, enum bpf_access_type type)
 {
-	struct bpf_reg_state *reg = reg_state(env, regno);
 	struct bpf_map *map = reg->map_ptr;
 	u32 cap = bpf_map_flags_to_cap(map);
 
@@ -5956,17 +5950,15 @@ static int check_map_access_type(struct bpf_verifier_env *env, u32 regno,
 }
 
 /* check read/write into memory region (e.g., map value, ringbuf sample, etc) */
-static int __check_mem_access(struct bpf_verifier_env *env, int regno,
+static int __check_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
 			      int off, int size, u32 mem_size,
 			      bool zero_size_allowed)
 {
 	bool size_ok = size > 0 || (size == 0 && zero_size_allowed);
-	struct bpf_reg_state *reg;
 
 	if (off >= 0 && size_ok && (u64)off + size <= mem_size)
 		return 0;
 
-	reg = &cur_regs(env)[regno];
 	switch (reg->type) {
 	case PTR_TO_MAP_KEY:
 		verbose(env, "invalid access to map key, key_size=%d off=%d size=%d\n",
@@ -5996,13 +5988,10 @@ static int __check_mem_access(struct bpf_verifier_env *env, int regno,
 }
 
 /* check read/write into a memory region with possible variable offset */
-static int check_mem_region_access(struct bpf_verifier_env *env, u32 regno,
+static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
 				   int off, int size, u32 mem_size,
 				   bool zero_size_allowed)
 {
-	struct bpf_verifier_state *vstate = env->cur_state;
-	struct bpf_func_state *state = vstate->frame[vstate->curframe];
-	struct bpf_reg_state *reg = &state->regs[regno];
 	int err;
 
 	/* We may have adjusted the register pointing to memory region, so we
@@ -6023,7 +6012,7 @@ static int check_mem_region_access(struct bpf_verifier_env *env, u32 regno,
 			regno);
 		return -EACCES;
 	}
-	err = __check_mem_access(env, regno, reg->smin_value + off, size,
+	err = __check_mem_access(env, reg, regno, reg->smin_value + off, size,
 				 mem_size, zero_size_allowed);
 	if (err) {
 		verbose(env, "R%d min value is outside of the allowed memory range\n",
@@ -6040,7 +6029,7 @@ static int check_mem_region_access(struct bpf_verifier_env *env, u32 regno,
 			regno);
 		return -EACCES;
 	}
-	err = __check_mem_access(env, regno, reg->umax_value + off, size,
+	err = __check_mem_access(env, reg, regno, reg->umax_value + off, size,
 				 mem_size, zero_size_allowed);
 	if (err) {
 		verbose(env, "R%d max value is outside of the allowed memory range\n",
@@ -6341,19 +6330,16 @@ static u32 map_mem_size(const struct bpf_map *map)
 }
 
 /* check read/write into a map element with possible variable offset */
-static int check_map_access(struct bpf_verifier_env *env, u32 regno,
+static int check_map_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
 			    int off, int size, bool zero_size_allowed,
 			    enum bpf_access_src src)
 {
-	struct bpf_verifier_state *vstate = env->cur_state;
-	struct bpf_func_state *state = vstate->frame[vstate->curframe];
-	struct bpf_reg_state *reg = &state->regs[regno];
 	struct bpf_map *map = reg->map_ptr;
 	u32 mem_size = map_mem_size(map);
 	struct btf_record *rec;
 	int err, i;
 
-	err = check_mem_region_access(env, regno, off, size, mem_size, zero_size_allowed);
+	err = check_mem_region_access(env, reg, regno, off, size, mem_size, zero_size_allowed);
 	if (err)
 		return err;
 
@@ -6451,10 +6437,9 @@ static bool may_access_direct_pkt_data(struct bpf_verifier_env *env,
 	}
 }
 
-static int check_packet_access(struct bpf_verifier_env *env, u32 regno, int off,
+static int check_packet_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno, int off,
 			       int size, bool zero_size_allowed)
 {
-	struct bpf_reg_state *reg = reg_state(env, regno);
 	int err;
 
 	if (reg->range < 0) {
@@ -6462,7 +6447,7 @@ static int check_packet_access(struct bpf_verifier_env *env, u32 regno, int off,
 		return -EINVAL;
 	}
 
-	err = check_mem_region_access(env, regno, off, size, reg->range, zero_size_allowed);
+	err = check_mem_region_access(env, reg, regno, off, size, reg->range, zero_size_allowed);
 	if (err)
 		return err;
 
@@ -6517,7 +6502,7 @@ static int __check_ctx_access(struct bpf_verifier_env *env, int insn_idx, int of
 	return -EACCES;
 }
 
-static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, u32 regno,
+static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, u32 regno,
 			    int off, int access_size, enum bpf_access_type t,
 			    struct bpf_insn_access_aux *info)
 {
@@ -6527,12 +6512,10 @@ static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 	 */
 	bool var_off_ok = is_var_ctx_off_allowed(env->prog);
 	bool fixed_off_ok = !env->ops->convert_ctx_access;
-	struct bpf_reg_state *regs = cur_regs(env);
-	struct bpf_reg_state *reg = regs + regno;
 	int err;
 
 	if (var_off_ok)
-		err = check_mem_region_access(env, regno, off, access_size, U16_MAX, false);
+		err = check_mem_region_access(env, reg, regno, off, access_size, U16_MAX, false);
 	else
 		err = __check_ptr_off_reg(env, reg, regno, fixed_off_ok);
 	if (err)
@@ -6558,10 +6541,9 @@ static int check_flow_keys_access(struct bpf_verifier_env *env, int off,
 }
 
 static int check_sock_access(struct bpf_verifier_env *env, int insn_idx,
-			     u32 regno, int off, int size,
+			     struct bpf_reg_state *reg, u32 regno, int off, int size,
 			     enum bpf_access_type t)
 {
-	struct bpf_reg_state *reg = reg_state(env, regno);
 	struct bpf_insn_access_aux info = {};
 	bool valid;
 
@@ -7537,12 +7519,11 @@ static bool type_is_trusted_or_null(struct bpf_verifier_env *env,
 }
 
 static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
-				   struct bpf_reg_state *regs,
+				   struct bpf_reg_state *regs, struct bpf_reg_state *reg,
 				   int regno, int off, int size,
 				   enum bpf_access_type atype,
 				   int value_regno)
 {
-	struct bpf_reg_state *reg = regs + regno;
 	const struct btf_type *t = btf_type_by_id(reg->btf, reg->btf_id);
 	const char *tname = btf_name_by_offset(reg->btf, t->name_off);
 	const char *field_name = NULL;
@@ -7694,12 +7675,11 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
 }
 
 static int check_ptr_to_map_access(struct bpf_verifier_env *env,
-				   struct bpf_reg_state *regs,
+				   struct bpf_reg_state *regs, struct bpf_reg_state *reg,
 				   int regno, int off, int size,
 				   enum bpf_access_type atype,
 				   int value_regno)
 {
-	struct bpf_reg_state *reg = regs + regno;
 	struct bpf_map *map = reg->map_ptr;
 	struct bpf_reg_state map_reg;
 	enum bpf_type_flag flag = 0;
@@ -7788,11 +7768,10 @@ static int check_stack_slot_within_bounds(struct bpf_verifier_env *env,
  * 'off' includes `regno->offset`, but not its dynamic part (if any).
  */
 static int check_stack_access_within_bounds(
-		struct bpf_verifier_env *env,
+		struct bpf_verifier_env *env, struct bpf_reg_state *reg,
 		int regno, int off, int access_size,
 		enum bpf_access_type type)
 {
-	struct bpf_reg_state *reg = reg_state(env, regno);
 	struct bpf_func_state *state = func(env, reg);
 	s64 min_off, max_off;
 	int err;
@@ -7880,12 +7859,11 @@ static void add_scalar_to_reg(struct bpf_reg_state *dst_reg, s64 val)
  * if t==write && value_regno==-1, some unknown value is stored into memory
  * if t==read && value_regno==-1, don't care what we read from memory
  */
-static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regno,
+static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, u32 regno,
 			    int off, int bpf_size, enum bpf_access_type t,
 			    int value_regno, bool strict_alignment_once, bool is_ldsx)
 {
 	struct bpf_reg_state *regs = cur_regs(env);
-	struct bpf_reg_state *reg = regs + regno;
 	int size, err = 0;
 
 	size = bpf_size_to_bytes(bpf_size);
@@ -7902,7 +7880,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 			return -EACCES;
 		}
 
-		err = check_mem_region_access(env, regno, off, size,
+		err = check_mem_region_access(env, reg, regno, off, size,
 					      reg->map_ptr->key_size, false);
 		if (err)
 			return err;
@@ -7916,10 +7894,10 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 			verbose(env, "R%d leaks addr into map\n", value_regno);
 			return -EACCES;
 		}
-		err = check_map_access_type(env, regno, off, size, t);
+		err = check_map_access_type(env, reg, off, size, t);
 		if (err)
 			return err;
-		err = check_map_access(env, regno, off, size, false, ACCESS_DIRECT);
+		err = check_map_access(env, reg, regno, off, size, false, ACCESS_DIRECT);
 		if (err)
 			return err;
 		if (tnum_is_const(reg->var_off))
@@ -7988,7 +7966,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 		 * instructions, hence no need to check bounds in that case.
 		 */
 		if (!rdonly_untrusted)
-			err = check_mem_region_access(env, regno, off, size,
+			err = check_mem_region_access(env, reg, regno, off, size,
 						      reg->mem_size, false);
 		if (!err && value_regno >= 0 && (t == BPF_READ || rdonly_mem))
 			mark_reg_unknown(env, regs, value_regno);
@@ -8006,7 +7984,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 			return -EACCES;
 		}
 
-		err = check_ctx_access(env, insn_idx, regno, off, size, t, &info);
+		err = check_ctx_access(env, insn_idx, reg, regno, off, size, t, &info);
 		if (!err && t == BPF_READ && value_regno >= 0) {
 			/* ctx access returns either a scalar, or a
 			 * PTR_TO_PACKET[_META,_END]. In the latter
@@ -8043,15 +8021,15 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 
 	} else if (reg->type == PTR_TO_STACK) {
 		/* Basic bounds checks. */
-		err = check_stack_access_within_bounds(env, regno, off, size, t);
+		err = check_stack_access_within_bounds(env, reg, regno, off, size, t);
 		if (err)
 			return err;
 
 		if (t == BPF_READ)
-			err = check_stack_read(env, regno, off, size,
+			err = check_stack_read(env, reg, regno, off, size,
 					       value_regno);
 		else
-			err = check_stack_write(env, regno, off, size,
+			err = check_stack_write(env, reg, off, size,
 						value_regno, insn_idx);
 	} else if (reg_is_pkt_pointer(reg)) {
 		if (t == BPF_WRITE && !may_access_direct_pkt_data(env, NULL, t)) {
@@ -8064,7 +8042,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 				value_regno);
 			return -EACCES;
 		}
-		err = check_packet_access(env, regno, off, size, false);
+		err = check_packet_access(env, reg, regno, off, size, false);
 		if (!err && t == BPF_READ && value_regno >= 0)
 			mark_reg_unknown(env, regs, value_regno);
 	} else if (reg->type == PTR_TO_FLOW_KEYS) {
@@ -8084,7 +8062,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 				regno, reg_type_str(env, reg->type));
 			return -EACCES;
 		}
-		err = check_sock_access(env, insn_idx, regno, off, size, t);
+		err = check_sock_access(env, insn_idx, reg, regno, off, size, t);
 		if (!err && value_regno >= 0)
 			mark_reg_unknown(env, regs, value_regno);
 	} else if (reg->type == PTR_TO_TP_BUFFER) {
@@ -8093,10 +8071,10 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 			mark_reg_unknown(env, regs, value_regno);
 	} else if (base_type(reg->type) == PTR_TO_BTF_ID &&
 		   !type_may_be_null(reg->type)) {
-		err = check_ptr_to_btf_access(env, regs, regno, off, size, t,
+		err = check_ptr_to_btf_access(env, regs, reg, regno, off, size, t,
 					      value_regno);
 	} else if (reg->type == CONST_PTR_TO_MAP) {
-		err = check_ptr_to_map_access(env, regs, regno, off, size, t,
+		err = check_ptr_to_map_access(env, regs, reg, regno, off, size, t,
 					      value_regno);
 	} else if (base_type(reg->type) == PTR_TO_BUF &&
 		   !type_may_be_null(reg->type)) {
@@ -8165,7 +8143,7 @@ static int check_load_mem(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	/* Check if (src_reg + off) is readable. The state of dst_reg will be
 	 * updated by this call.
 	 */
-	err = check_mem_access(env, env->insn_idx, insn->src_reg, insn->off,
+	err = check_mem_access(env, env->insn_idx, regs + insn->src_reg, insn->src_reg, insn->off,
 			       BPF_SIZE(insn->code), BPF_READ, insn->dst_reg,
 			       strict_alignment_once, is_ldsx);
 	err = err ?: save_aux_ptr_type(env, src_reg_type,
@@ -8195,7 +8173,7 @@ static int check_store_reg(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	dst_reg_type = regs[insn->dst_reg].type;
 
 	/* Check if (dst_reg + off) is writeable. */
-	err = check_mem_access(env, env->insn_idx, insn->dst_reg, insn->off,
+	err = check_mem_access(env, env->insn_idx, regs + insn->dst_reg, insn->dst_reg, insn->off,
 			       BPF_SIZE(insn->code), BPF_WRITE, insn->src_reg,
 			       strict_alignment_once, false);
 	err = err ?: save_aux_ptr_type(env, dst_reg_type, false);
@@ -8206,6 +8184,7 @@ static int check_store_reg(struct bpf_verifier_env *env, struct bpf_insn *insn,
 static int check_atomic_rmw(struct bpf_verifier_env *env,
 			    struct bpf_insn *insn)
 {
+	struct bpf_reg_state *dst_reg;
 	int load_reg;
 	int err;
 
@@ -8267,13 +8246,15 @@ static int check_atomic_rmw(struct bpf_verifier_env *env,
 		load_reg = -1;
 	}
 
+	dst_reg = cur_regs(env) + insn->dst_reg;
+
 	/* Check whether we can read the memory, with second call for fetch
 	 * case to simulate the register fill.
 	 */
-	err = check_mem_access(env, env->insn_idx, insn->dst_reg, insn->off,
+	err = check_mem_access(env, env->insn_idx, dst_reg, insn->dst_reg, insn->off,
 			       BPF_SIZE(insn->code), BPF_READ, -1, true, false);
 	if (!err && load_reg >= 0)
-		err = check_mem_access(env, env->insn_idx, insn->dst_reg,
+		err = check_mem_access(env, env->insn_idx, dst_reg, insn->dst_reg,
 				       insn->off, BPF_SIZE(insn->code),
 				       BPF_READ, load_reg, true, false);
 	if (err)
@@ -8285,7 +8266,7 @@ static int check_atomic_rmw(struct bpf_verifier_env *env,
 			return err;
 	}
 	/* Check whether we can write into the same memory. */
-	err = check_mem_access(env, env->insn_idx, insn->dst_reg, insn->off,
+	err = check_mem_access(env, env->insn_idx, dst_reg, insn->dst_reg, insn->off,
 			       BPF_SIZE(insn->code), BPF_WRITE, -1, true, false);
 	if (err)
 		return err;
@@ -8374,11 +8355,10 @@ static int check_atomic(struct bpf_verifier_env *env, struct bpf_insn *insn)
  * read offsets are marked as read.
  */
 static int check_stack_range_initialized(
-		struct bpf_verifier_env *env, int regno, int off,
+		struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int off,
 		int access_size, bool zero_size_allowed,
 		enum bpf_access_type type, struct bpf_call_arg_meta *meta)
 {
-	struct bpf_reg_state *reg = reg_state(env, regno);
 	struct bpf_func_state *state = func(env, reg);
 	int err, min_off, max_off, i, j, slot, spi;
 	/* Some accesses can write anything into the stack, others are
@@ -8400,11 +8380,10 @@ static int check_stack_range_initialized(
 		return -EACCES;
 	}
 
-	err = check_stack_access_within_bounds(env, regno, off, access_size, type);
+	err = check_stack_access_within_bounds(env, reg, regno, off, access_size, type);
 	if (err)
 		return err;
 
-
 	if (tnum_is_const(reg->var_off)) {
 		min_off = max_off = reg->var_off.value + off;
 	} else {
@@ -8531,7 +8510,7 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
 	switch (base_type(reg->type)) {
 	case PTR_TO_PACKET:
 	case PTR_TO_PACKET_META:
-		return check_packet_access(env, regno, 0, access_size,
+		return check_packet_access(env, reg, regno, 0, access_size,
 					   zero_size_allowed);
 	case PTR_TO_MAP_KEY:
 		if (access_type == BPF_WRITE) {
@@ -8539,12 +8518,12 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
 				reg_type_str(env, reg->type));
 			return -EACCES;
 		}
-		return check_mem_region_access(env, regno, 0, access_size,
+		return check_mem_region_access(env, reg, regno, 0, access_size,
 					       reg->map_ptr->key_size, false);
 	case PTR_TO_MAP_VALUE:
-		if (check_map_access_type(env, regno, 0, access_size, access_type))
+		if (check_map_access_type(env, reg, 0, access_size, access_type))
 			return -EACCES;
-		return check_map_access(env, regno, 0, access_size,
+		return check_map_access(env, reg, regno, 0, access_size,
 					zero_size_allowed, ACCESS_HELPER);
 	case PTR_TO_MEM:
 		if (type_is_rdonly_mem(reg->type)) {
@@ -8554,7 +8533,7 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
 				return -EACCES;
 			}
 		}
-		return check_mem_region_access(env, regno, 0,
+		return check_mem_region_access(env, reg, regno, 0,
 					       access_size, reg->mem_size,
 					       zero_size_allowed);
 	case PTR_TO_BUF:
@@ -8574,16 +8553,16 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
 					   max_access);
 	case PTR_TO_STACK:
 		return check_stack_range_initialized(
-				env,
+				env, reg,
 				regno, 0, access_size,
 				zero_size_allowed, access_type, meta);
 	case PTR_TO_BTF_ID:
-		return check_ptr_to_btf_access(env, regs, regno, 0,
+		return check_ptr_to_btf_access(env, regs, reg, regno, 0,
 					       access_size, BPF_READ, -1);
 	case PTR_TO_CTX:
 		/* Only permit reading or writing syscall context using helper calls. */
 		if (is_var_ctx_off_allowed(env->prog)) {
-			int err = check_mem_region_access(env, regno, 0, access_size, U16_MAX,
+			int err = check_mem_region_access(env, reg, regno, 0, access_size, U16_MAX,
 							  zero_size_allowed);
 			if (err)
 				return err;
@@ -8746,11 +8725,10 @@ enum {
  * env->cur_state->active_locks remembers which map value element or allocated
  * object got locked and clears it after bpf_spin_unlock.
  */
-static int process_spin_lock(struct bpf_verifier_env *env, int regno, int flags)
+static int process_spin_lock(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int flags)
 {
 	bool is_lock = flags & PROCESS_SPIN_LOCK, is_res_lock = flags & PROCESS_RES_LOCK;
 	const char *lock_str = is_res_lock ? "bpf_res_spin" : "bpf_spin";
-	struct bpf_reg_state *reg = reg_state(env, regno);
 	struct bpf_verifier_state *cur = env->cur_state;
 	bool is_const = tnum_is_const(reg->var_off);
 	bool is_irq = flags & PROCESS_LOCK_IRQ;
@@ -8863,11 +8841,10 @@ static int process_spin_lock(struct bpf_verifier_env *env, int regno, int flags)
 }
 
 /* Check if @regno is a pointer to a specific field in a map value */
-static int check_map_field_pointer(struct bpf_verifier_env *env, u32 regno,
+static int check_map_field_pointer(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
 				   enum btf_field_type field_type,
 				   struct bpf_map_desc *map_desc)
 {
-	struct bpf_reg_state *reg = reg_state(env, regno);
 	bool is_const = tnum_is_const(reg->var_off);
 	struct bpf_map *map = reg->map_ptr;
 	u64 val = reg->var_off.value;
@@ -8917,26 +8894,26 @@ static int check_map_field_pointer(struct bpf_verifier_env *env, u32 regno,
 	return 0;
 }
 
-static int process_timer_func(struct bpf_verifier_env *env, int regno,
+static int process_timer_func(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
 			      struct bpf_map_desc *map)
 {
 	if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
 		verbose(env, "bpf_timer cannot be used for PREEMPT_RT.\n");
 		return -EOPNOTSUPP;
 	}
-	return check_map_field_pointer(env, regno, BPF_TIMER, map);
+	return check_map_field_pointer(env, reg, regno, BPF_TIMER, map);
 }
 
-static int process_timer_helper(struct bpf_verifier_env *env, int regno,
+static int process_timer_helper(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
 				struct bpf_call_arg_meta *meta)
 {
-	return process_timer_func(env, regno, &meta->map);
+	return process_timer_func(env, reg, regno, &meta->map);
 }
 
-static int process_timer_kfunc(struct bpf_verifier_env *env, int regno,
+static int process_timer_kfunc(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
 			       struct bpf_kfunc_call_arg_meta *meta)
 {
-	return process_timer_func(env, regno, &meta->map);
+	return process_timer_func(env, reg, regno, &meta->map);
 }
 
 static int process_kptr_func(struct bpf_verifier_env *env, int regno,
@@ -9012,10 +8989,9 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
  * Helpers which do not mutate the bpf_dynptr set MEM_RDONLY in their argument
  * type, and declare it as 'const struct bpf_dynptr *' in their prototype.
  */
-static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn_idx,
+static int process_dynptr_func(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int insn_idx,
 			       enum bpf_arg_type arg_type, int clone_ref_obj_id)
 {
-	struct bpf_reg_state *reg = reg_state(env, regno);
 	int err;
 
 	if (reg->type != PTR_TO_STACK && reg->type != CONST_PTR_TO_DYNPTR) {
@@ -9058,7 +9034,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
 
 		/* we write BPF_DW bits (8 bytes) at a time */
 		for (i = 0; i < BPF_DYNPTR_SIZE; i += 8) {
-			err = check_mem_access(env, insn_idx, regno,
+			err = check_mem_access(env, insn_idx, reg, regno,
 					       i, BPF_DW, BPF_WRITE, -1, false, false);
 			if (err)
 				return err;
@@ -9132,10 +9108,9 @@ static bool is_kfunc_arg_iter(struct bpf_kfunc_call_arg_meta *meta, int arg_idx,
 	return btf_param_match_suffix(meta->btf, arg, "__iter");
 }
 
-static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_idx,
+static int process_iter_arg(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int insn_idx,
 			    struct bpf_kfunc_call_arg_meta *meta)
 {
-	struct bpf_reg_state *reg = reg_state(env, regno);
 	const struct btf_type *t;
 	int spi, err, i, nr_slots, btf_id;
 
@@ -9167,7 +9142,7 @@ static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_id
 		}
 
 		for (i = 0; i < nr_slots * 8; i += BPF_REG_SIZE) {
-			err = check_mem_access(env, insn_idx, regno,
+			err = check_mem_access(env, insn_idx, reg, regno,
 					       i, BPF_DW, BPF_WRITE, -1, false, false);
 			if (err)
 				return err;
@@ -9959,7 +9934,7 @@ static int check_reg_const_str(struct bpf_verifier_env *env,
 		return -EACCES;
 	}
 
-	err = check_map_access(env, regno, 0,
+	err = check_map_access(env, reg, regno, 0,
 			       map->value_size - reg->var_off.value, false,
 			       ACCESS_HELPER);
 	if (err)
@@ -10233,11 +10208,11 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 			return -EACCES;
 		}
 		if (meta->func_id == BPF_FUNC_spin_lock) {
-			err = process_spin_lock(env, regno, PROCESS_SPIN_LOCK);
+			err = process_spin_lock(env, reg, regno, PROCESS_SPIN_LOCK);
 			if (err)
 				return err;
 		} else if (meta->func_id == BPF_FUNC_spin_unlock) {
-			err = process_spin_lock(env, regno, 0);
+			err = process_spin_lock(env, reg, regno, 0);
 			if (err)
 				return err;
 		} else {
@@ -10246,7 +10221,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 		}
 		break;
 	case ARG_PTR_TO_TIMER:
-		err = process_timer_helper(env, regno, meta);
+		err = process_timer_helper(env, reg, regno, meta);
 		if (err)
 			return err;
 		break;
@@ -10281,7 +10256,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 					 true, meta);
 		break;
 	case ARG_PTR_TO_DYNPTR:
-		err = process_dynptr_func(env, regno, insn_idx, arg_type, 0);
+		err = process_dynptr_func(env, reg, regno, insn_idx, arg_type, 0);
 		if (err)
 			return err;
 		break;
@@ -10940,7 +10915,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
 			if (ret)
 				return ret;
 
-			ret = process_dynptr_func(env, regno, -1, arg->arg_type, 0);
+			ret = process_dynptr_func(env, reg, regno, -1, arg->arg_type, 0);
 			if (ret)
 				return ret;
 		} else if (base_type(arg->arg_type) == ARG_PTR_TO_BTF_ID) {
@@ -11909,18 +11884,18 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 	if (err)
 		return err;
 
+	regs = cur_regs(env);
+
 	/* Mark slots with STACK_MISC in case of raw mode, stack offset
 	 * is inferred from register state.
 	 */
 	for (i = 0; i < meta.access_size; i++) {
-		err = check_mem_access(env, insn_idx, meta.regno, i, BPF_B,
+		err = check_mem_access(env, insn_idx, regs + meta.regno, meta.regno, i, BPF_B,
 				       BPF_WRITE, -1, false, false);
 		if (err)
 			return err;
 	}
 
-	regs = cur_regs(env);
-
 	if (meta.release_regno) {
 		err = -EINVAL;
 		if (arg_type_is_dynptr(fn->arg_type[meta.release_regno - BPF_REG_1])) {
@@ -12928,11 +12903,10 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
 		       struct bpf_kfunc_call_arg_meta *meta,
 		       const struct btf_type *t, const struct btf_type *ref_t,
 		       const char *ref_tname, const struct btf_param *args,
-		       int argno, int nargs)
+		       int argno, int nargs, struct bpf_reg_state *reg)
 {
 	u32 regno = argno + 1;
 	struct bpf_reg_state *regs = cur_regs(env);
-	struct bpf_reg_state *reg = &regs[regno];
 	bool arg_mem_size = false;
 
 	if (meta->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx] ||
@@ -13099,10 +13073,9 @@ static int process_kf_arg_ptr_to_btf_id(struct bpf_verifier_env *env,
 	return 0;
 }
 
-static int process_irq_flag(struct bpf_verifier_env *env, int regno,
+static int process_irq_flag(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
 			     struct bpf_kfunc_call_arg_meta *meta)
 {
-	struct bpf_reg_state *reg = reg_state(env, regno);
 	int err, kfunc_class = IRQ_NATIVE_KFUNC;
 	bool irq_save;
 
@@ -13127,7 +13100,7 @@ static int process_irq_flag(struct bpf_verifier_env *env, int regno,
 			return -EINVAL;
 		}
 
-		err = check_mem_access(env, env->insn_idx, regno, 0, BPF_DW, BPF_WRITE, -1, false, false);
+		err = check_mem_access(env, env->insn_idx, reg, regno, 0, BPF_DW, BPF_WRITE, -1, false, false);
 		if (err)
 			return err;
 
@@ -13715,7 +13688,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 		ref_t = btf_type_skip_modifiers(btf, t->type, &ref_id);
 		ref_tname = btf_name_by_offset(btf, ref_t->name_off);
 
-		kf_arg_type = get_kfunc_ptr_arg_type(env, meta, t, ref_t, ref_tname, args, i, nargs);
+		kf_arg_type = get_kfunc_ptr_arg_type(env, meta, t, ref_t, ref_tname, args, i, nargs, reg);
 		if (kf_arg_type < 0)
 			return kf_arg_type;
 
@@ -13880,7 +13853,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				}
 			}
 
-			ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id);
+			ret = process_dynptr_func(env, reg, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id);
 			if (ret < 0)
 				return ret;
 
@@ -13905,7 +13878,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 					return -EINVAL;
 				}
 			}
-			ret = process_iter_arg(env, regno, insn_idx, meta);
+			ret = process_iter_arg(env, reg, regno, insn_idx, meta);
 			if (ret < 0)
 				return ret;
 			break;
@@ -14082,7 +14055,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				verbose(env, "arg#%d doesn't point to a map value\n", i);
 				return -EINVAL;
 			}
-			ret = check_map_field_pointer(env, regno, BPF_WORKQUEUE, &meta->map);
+			ret = check_map_field_pointer(env, reg, regno, BPF_WORKQUEUE, &meta->map);
 			if (ret < 0)
 				return ret;
 			break;
@@ -14091,7 +14064,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				verbose(env, "arg#%d doesn't point to a map value\n", i);
 				return -EINVAL;
 			}
-			ret = process_timer_kfunc(env, regno, meta);
+			ret = process_timer_kfunc(env, reg, regno, meta);
 			if (ret < 0)
 				return ret;
 			break;
@@ -14100,7 +14073,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				verbose(env, "arg#%d doesn't point to a map value\n", i);
 				return -EINVAL;
 			}
-			ret = check_map_field_pointer(env, regno, BPF_TASK_WORK, &meta->map);
+			ret = check_map_field_pointer(env, reg, regno, BPF_TASK_WORK, &meta->map);
 			if (ret < 0)
 				return ret;
 			break;
@@ -14109,7 +14082,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				verbose(env, "arg#%d doesn't point to an irq flag on stack\n", i);
 				return -EINVAL;
 			}
-			ret = process_irq_flag(env, regno, meta);
+			ret = process_irq_flag(env, reg, regno, meta);
 			if (ret < 0)
 				return ret;
 			break;
@@ -14130,7 +14103,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			if (meta->func_id == special_kfunc_list[KF_bpf_res_spin_lock_irqsave] ||
 			    meta->func_id == special_kfunc_list[KF_bpf_res_spin_unlock_irqrestore])
 				flags |= PROCESS_LOCK_IRQ;
-			ret = process_spin_lock(env, regno, flags);
+			ret = process_spin_lock(env, reg, regno, flags);
 			if (ret < 0)
 				return ret;
 			break;
@@ -15264,7 +15237,7 @@ static int check_stack_access_for_ptr_arithmetic(
 
 static int sanitize_check_bounds(struct bpf_verifier_env *env,
 				 const struct bpf_insn *insn,
-				 const struct bpf_reg_state *dst_reg)
+				 struct bpf_reg_state *dst_reg)
 {
 	u32 dst = insn->dst_reg;
 
@@ -15281,7 +15254,7 @@ static int sanitize_check_bounds(struct bpf_verifier_env *env,
 			return -EACCES;
 		break;
 	case PTR_TO_MAP_VALUE:
-		if (check_map_access(env, dst, 0, 1, false, ACCESS_HELPER)) {
+		if (check_map_access(env, dst_reg, dst, 0, 1, false, ACCESS_HELPER)) {
 			verbose(env, "R%d pointer arithmetic of map value goes out of range, "
 				"prohibited for !root\n", dst);
 			return -EACCES;
@@ -21560,7 +21533,7 @@ static int do_check_insn(struct bpf_verifier_env *env, bool *do_print_state)
 
 		dst_reg_type = cur_regs(env)[insn->dst_reg].type;
 
-		err = check_mem_access(env, env->insn_idx, insn->dst_reg,
+		err = check_mem_access(env, env->insn_idx, cur_regs(env) + insn->dst_reg, insn->dst_reg,
 				       insn->off, BPF_SIZE(insn->code),
 				       BPF_WRITE, -1, false, false);
 		if (err)
-- 
2.52.0



* [PATCH bpf-next v4 04/18] bpf: Refactor to handle memory and size together
  2026-04-12  4:58 [PATCH bpf-next v4 00/18] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
                   ` (2 preceding siblings ...)
  2026-04-12  4:58 ` [PATCH bpf-next v4 03/18] bpf: Refactor to avoid redundant calculation of bpf_reg_state Yonghong Song
@ 2026-04-12  4:58 ` Yonghong Song
  2026-04-12  5:31   ` bot+bpf-ci
  2026-04-12  4:58 ` [PATCH bpf-next v4 05/18] bpf: Change some regno type from u32 to int type Yonghong Song
                   ` (13 subsequent siblings)
  17 siblings, 1 reply; 27+ messages in thread
From: Yonghong Song @ 2026-04-12  4:58 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Jose E . Marchesi, kernel-team, Martin KaFai Lau

Similar to the previous patch, pass bpf_reg_state from the caller to
the callee: both mem_reg and size_reg are now handed to the helper
functions explicitly. This is important for stack arguments, which may
live beyond registers 1-5.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 kernel/bpf/verifier.c | 56 ++++++++++++++++++++++---------------------
 1 file changed, 29 insertions(+), 27 deletions(-)
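[Editor's note: a minimal standalone sketch of the calling-convention
change this patch makes, using hypothetical, heavily simplified types
and names (reg_state, check_mem_size_pair) — not the kernel's actual
definitions. The point it illustrates: the caller resolves both the
memory state and the size state and passes them explicitly, instead of
the checker deriving the memory register implicitly as regno - 1.]

```c
/* Hypothetical, heavily simplified stand-in: the real struct
 * bpf_reg_state in kernel/bpf/verifier.c carries far more state
 * (tnum var_off, type, liveness, ...). */
struct reg_state {
	unsigned long long umin_value;
	unsigned long long umax_value;
};

/* Old shape (for contrast): check_mem_size_reg(env, reg, regno, ...)
 * located the memory register implicitly as regno - 1, which assumes
 * both arguments sit in the r1-r5 register file.
 *
 * New shape sketched here: the caller passes both states, so either
 * one could instead come from a verifier-tracked stack-argument slot. */
static int check_mem_size_pair(const struct reg_state *mem_reg,
			       const struct reg_state *size_reg,
			       unsigned long long mem_size)
{
	(void)mem_reg;		/* mem_reg type/offset checks elided in this sketch */
	if (size_reg->umin_value == 0)
		return -1;	/* zero-sized access rejected in this sketch */
	if (size_reg->umax_value > mem_size)
		return -1;	/* access could run past the memory region */
	return 0;
}
```

With stack-passed arguments introduced later in this series, the size
state may live in a stack slot rather than in regs[2..5], which the old
regno - 1 convention could not express.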

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 4c67a15c73e1..cddd39ebb40b 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -8499,12 +8499,12 @@ static int check_stack_range_initialized(
 	return 0;
 }
 
-static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
+static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
 				   int access_size, enum bpf_access_type access_type,
 				   bool zero_size_allowed,
 				   struct bpf_call_arg_meta *meta)
 {
-	struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[regno];
+	struct bpf_reg_state *regs = cur_regs(env);
 	u32 *max_access;
 
 	switch (base_type(reg->type)) {
@@ -8591,11 +8591,13 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
  * containing the pointer.
  */
 static int check_mem_size_reg(struct bpf_verifier_env *env,
-			      struct bpf_reg_state *reg, u32 regno,
+			      struct bpf_reg_state *mem_reg,
+			      struct bpf_reg_state *size_reg, u32 mem_regno,
 			      enum bpf_access_type access_type,
 			      bool zero_size_allowed,
 			      struct bpf_call_arg_meta *meta)
 {
+	int size_regno = mem_regno + 1;
 	int err;
 
 	/* This is used to refine r0 return value bounds for helpers
@@ -8606,37 +8608,37 @@ static int check_mem_size_reg(struct bpf_verifier_env *env,
 	 * out. Only upper bounds can be learned because retval is an
 	 * int type and negative retvals are allowed.
 	 */
-	meta->msize_max_value = reg->umax_value;
+	meta->msize_max_value = size_reg->umax_value;
 
 	/* The register is SCALAR_VALUE; the access check happens using
 	 * its boundaries. For unprivileged variable accesses, disable
 	 * raw mode so that the program is required to initialize all
 	 * the memory that the helper could just partially fill up.
 	 */
-	if (!tnum_is_const(reg->var_off))
+	if (!tnum_is_const(size_reg->var_off))
 		meta = NULL;
 
-	if (reg->smin_value < 0) {
+	if (size_reg->smin_value < 0) {
 		verbose(env, "R%d min value is negative, either use unsigned or 'var &= const'\n",
-			regno);
+			size_regno);
 		return -EACCES;
 	}
 
-	if (reg->umin_value == 0 && !zero_size_allowed) {
+	if (size_reg->umin_value == 0 && !zero_size_allowed) {
 		verbose(env, "R%d invalid zero-sized read: u64=[%lld,%lld]\n",
-			regno, reg->umin_value, reg->umax_value);
+			size_regno, size_reg->umin_value, size_reg->umax_value);
 		return -EACCES;
 	}
 
-	if (reg->umax_value >= BPF_MAX_VAR_SIZ) {
+	if (size_reg->umax_value >= BPF_MAX_VAR_SIZ) {
 		verbose(env, "R%d unbounded memory access, use 'var &= const' or 'if (var < const)'\n",
-			regno);
+			size_regno);
 		return -EACCES;
 	}
-	err = check_helper_mem_access(env, regno - 1, reg->umax_value,
+	err = check_helper_mem_access(env, mem_reg, mem_regno, size_reg->umax_value,
 				      access_type, zero_size_allowed, meta);
 	if (!err)
-		err = mark_chain_precision(env, regno);
+		err = mark_chain_precision(env, size_regno);
 	return err;
 }
 
@@ -8661,8 +8663,8 @@ static int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg
 
 	int size = base_type(reg->type) == PTR_TO_STACK ? -(int)mem_size : mem_size;
 
-	err = check_helper_mem_access(env, regno, size, BPF_READ, true, NULL);
-	err = err ?: check_helper_mem_access(env, regno, size, BPF_WRITE, true, NULL);
+	err = check_helper_mem_access(env, reg, regno, size, BPF_READ, true, NULL);
+	err = err ?: check_helper_mem_access(env, reg, regno, size, BPF_WRITE, true, NULL);
 
 	if (may_be_null)
 		*reg = saved_reg;
@@ -8670,16 +8672,16 @@ static int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg
 	return err;
 }
 
-static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
-				    u32 regno)
+static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg_state *mem_reg,
+				    struct bpf_reg_state *size_reg,
+				    u32 mem_regno)
 {
-	struct bpf_reg_state *mem_reg = &cur_regs(env)[regno - 1];
 	bool may_be_null = type_may_be_null(mem_reg->type);
 	struct bpf_reg_state saved_reg;
 	struct bpf_call_arg_meta meta;
 	int err;
 
-	WARN_ON_ONCE(regno < BPF_REG_2 || regno > BPF_REG_5);
+	WARN_ON_ONCE(mem_regno > BPF_REG_4);
 
 	memset(&meta, 0, sizeof(meta));
 
@@ -8688,8 +8690,8 @@ static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg
 		mark_ptr_not_null_reg(mem_reg);
 	}
 
-	err = check_mem_size_reg(env, reg, regno, BPF_READ, true, &meta);
-	err = err ?: check_mem_size_reg(env, reg, regno, BPF_WRITE, true, &meta);
+	err = check_mem_size_reg(env, mem_reg, size_reg, mem_regno, BPF_READ, true, &meta);
+	err = err ?: check_mem_size_reg(env, mem_reg, size_reg, mem_regno, BPF_WRITE, true, &meta);
 
 	if (may_be_null)
 		*mem_reg = saved_reg;
@@ -10163,7 +10165,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 			return -EFAULT;
 		}
 		key_size = meta->map.ptr->key_size;
-		err = check_helper_mem_access(env, regno, key_size, BPF_READ, false, NULL);
+		err = check_helper_mem_access(env, reg, regno, key_size, BPF_READ, false, NULL);
 		if (err)
 			return err;
 		if (can_elide_value_nullness(meta->map.ptr->map_type)) {
@@ -10190,7 +10192,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 			return -EFAULT;
 		}
 		meta->raw_mode = arg_type & MEM_UNINIT;
-		err = check_helper_mem_access(env, regno, meta->map.ptr->value_size,
+		err = check_helper_mem_access(env, reg, regno, meta->map.ptr->value_size,
 					      arg_type & MEM_WRITE ? BPF_WRITE : BPF_READ,
 					      false, meta);
 		break;
@@ -10234,7 +10236,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 		 */
 		meta->raw_mode = arg_type & MEM_UNINIT;
 		if (arg_type & MEM_FIXED_SIZE) {
-			err = check_helper_mem_access(env, regno, fn->arg_size[arg],
+			err = check_helper_mem_access(env, reg, regno, fn->arg_size[arg],
 						      arg_type & MEM_WRITE ? BPF_WRITE : BPF_READ,
 						      false, meta);
 			if (err)
@@ -10244,13 +10246,13 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 		}
 		break;
 	case ARG_CONST_SIZE:
-		err = check_mem_size_reg(env, reg, regno,
+		err = check_mem_size_reg(env, reg_state(env, regno - 1), reg, regno - 1,
 					 fn->arg_type[arg - 1] & MEM_WRITE ?
 					 BPF_WRITE : BPF_READ,
 					 false, meta);
 		break;
 	case ARG_CONST_SIZE_OR_ZERO:
-		err = check_mem_size_reg(env, reg, regno,
+		err = check_mem_size_reg(env, reg_state(env, regno - 1), reg, regno - 1,
 					 fn->arg_type[arg - 1] & MEM_WRITE ?
 					 BPF_WRITE : BPF_READ,
 					 true, meta);
@@ -13988,7 +13990,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			const struct btf_param *size_arg = &args[i + 1];
 
 			if (!register_is_null(buff_reg) || !is_kfunc_arg_nullable(meta->btf, buff_arg)) {
-				ret = check_kfunc_mem_size_reg(env, size_reg, regno + 1);
+				ret = check_kfunc_mem_size_reg(env, buff_reg, size_reg, regno);
 				if (ret < 0) {
 					verbose(env, "arg#%d arg#%d memory, len pair leads to invalid memory access\n", i, i + 1);
 					return ret;
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH bpf-next v4 05/18] bpf: Change some regno type from u32 to int type
  2026-04-12  4:58 [PATCH bpf-next v4 00/18] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
                   ` (3 preceding siblings ...)
  2026-04-12  4:58 ` [PATCH bpf-next v4 04/18] bpf: Refactor to handle memory and size together Yonghong Song
@ 2026-04-12  4:58 ` Yonghong Song
  2026-04-12  4:58 ` [PATCH bpf-next v4 06/18] bpf: Use argument index instead of register index in kfunc verifier logs Yonghong Song
                   ` (12 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Yonghong Song @ 2026-04-12  4:58 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Jose E . Marchesi, kernel-team, Martin KaFai Lau

For stack arguments, a register number is not meaningful: a stack
argument has no register, so the argument number identifies it instead.
Rather than carrying both regno and argno through the parameter lists,
a single parameter encodes either one. The particular context is kfuncs,
which may have more than 5 parameters. BPF-to-BPF calls may also have
more than 5 parameters, but their stack arguments (beyond the 5th) are
handled separately, so they do not hit this issue.

Therefore, for callees reached directly or indirectly from
check_kfunc_args(), some regno parameters need to become a signed
integer type. The following is an example:
  check_kfunc_args
    process_dynptr_func
      check_mem_access(..., regno, ...)
         <=== regno is negative, representing an argno
  do_check_insn
    check_load_mem
      check_mem_access(..., regno, ...)
         <=== regno is non-negative, representing a regno

The next patch introduces the formula for representing either a regno
or an argno in a single value.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 kernel/bpf/verifier.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index cddd39ebb40b..54296d818d35 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5988,7 +5988,7 @@ static int __check_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state
 }
 
 /* check read/write into a memory region with possible variable offset */
-static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
+static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
 				   int off, int size, u32 mem_size,
 				   bool zero_size_allowed)
 {
@@ -6330,7 +6330,7 @@ static u32 map_mem_size(const struct bpf_map *map)
 }
 
 /* check read/write into a map element with possible variable offset */
-static int check_map_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
+static int check_map_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
 			    int off, int size, bool zero_size_allowed,
 			    enum bpf_access_src src)
 {
@@ -6437,7 +6437,7 @@ static bool may_access_direct_pkt_data(struct bpf_verifier_env *env,
 	}
 }
 
-static int check_packet_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno, int off,
+static int check_packet_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int off,
 			       int size, bool zero_size_allowed)
 {
 	int err;
@@ -6502,7 +6502,7 @@ static int __check_ctx_access(struct bpf_verifier_env *env, int insn_idx, int of
 	return -EACCES;
 }
 
-static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, u32 regno,
+static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, int regno,
 			    int off, int access_size, enum bpf_access_type t,
 			    struct bpf_insn_access_aux *info)
 {
@@ -6541,7 +6541,7 @@ static int check_flow_keys_access(struct bpf_verifier_env *env, int off,
 }
 
 static int check_sock_access(struct bpf_verifier_env *env, int insn_idx,
-			     struct bpf_reg_state *reg, u32 regno, int off, int size,
+			     struct bpf_reg_state *reg, int regno, int off, int size,
 			     enum bpf_access_type t)
 {
 	struct bpf_insn_access_aux info = {};
@@ -7859,7 +7859,7 @@ static void add_scalar_to_reg(struct bpf_reg_state *dst_reg, s64 val)
  * if t==write && value_regno==-1, some unknown value is stored into memory
  * if t==read && value_regno==-1, don't care what we read from memory
  */
-static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, u32 regno,
+static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, int regno,
 			    int off, int bpf_size, enum bpf_access_type t,
 			    int value_regno, bool strict_alignment_once, bool is_ldsx)
 {
@@ -8592,7 +8592,7 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_
  */
 static int check_mem_size_reg(struct bpf_verifier_env *env,
 			      struct bpf_reg_state *mem_reg,
-			      struct bpf_reg_state *size_reg, u32 mem_regno,
+			      struct bpf_reg_state *size_reg, int mem_regno,
 			      enum bpf_access_type access_type,
 			      bool zero_size_allowed,
 			      struct bpf_call_arg_meta *meta)
@@ -8643,7 +8643,7 @@ static int check_mem_size_reg(struct bpf_verifier_env *env,
 }
 
 static int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
-			 u32 regno, u32 mem_size)
+			 int regno, u32 mem_size)
 {
 	bool may_be_null = type_may_be_null(reg->type);
 	struct bpf_reg_state saved_reg;
@@ -9905,7 +9905,7 @@ static enum bpf_dynptr_type dynptr_get_type(struct bpf_verifier_env *env,
 }
 
 static int check_reg_const_str(struct bpf_verifier_env *env,
-			       struct bpf_reg_state *reg, u32 regno)
+			       struct bpf_reg_state *reg, int regno)
 {
 	struct bpf_map *map = reg->map_ptr;
 	int err;
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH bpf-next v4 06/18] bpf: Use argument index instead of register index in kfunc verifier logs
  2026-04-12  4:58 [PATCH bpf-next v4 00/18] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
                   ` (4 preceding siblings ...)
  2026-04-12  4:58 ` [PATCH bpf-next v4 05/18] bpf: Change some regno type from u32 to int type Yonghong Song
@ 2026-04-12  4:58 ` Yonghong Song
  2026-04-12  5:43   ` bot+bpf-ci
  2026-04-12  4:59 ` [PATCH bpf-next v4 07/18] bpf: Introduce bpf register BPF_REG_STACK_ARG_BASE Yonghong Song
                   ` (11 subsequent siblings)
  17 siblings, 1 reply; 27+ messages in thread
From: Yonghong Song @ 2026-04-12  4:58 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Jose E . Marchesi, kernel-team, Martin KaFai Lau

For kfunc argument checking, use the argument index (arg#0, arg#1, ...)
instead of the register index (R1, R2, ...) in verifier log messages.
This prepares for future stack-based arguments, where kfuncs can accept
more than 5 arguments. Stack arguments won't have a corresponding
register, so the argument index is the appropriate identifier.

Since some functions like check_mem_access(), check_stack_read_var_off(),
and check_stack_range_initialized() are shared between kfunc argument
checking (check_kfunc_args) and other paths (check_func_arg, do_check_insn, ...),
introduce a `reg_or_arg` encoding: a non-negative value represents a register
index, while a negative value encodes an argument index as -(argno + 1).
The helper reg_arg_name() decodes this to produce either "R%d" or
"arg#%d" for log messages.

For check_func_arg() callers, the register index is preserved in certain
cases so that existing helper-function log messages (e.g., "R1", "R2")
remain unchanged.

Update selftests to expect the new "arg#N" format in kfunc error
messages.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 include/linux/bpf_verifier.h                  |   1 +
 kernel/bpf/verifier.c                         | 466 +++++++++---------
 .../selftests/bpf/prog_tests/cb_refs.c        |   2 +-
 .../selftests/bpf/prog_tests/linked_list.c    |   4 +-
 .../selftests/bpf/progs/cpumask_failure.c     |   4 +-
 .../testing/selftests/bpf/progs/dynptr_fail.c |   6 +-
 .../selftests/bpf/progs/iters_testmod.c       |   6 +-
 .../bpf/progs/local_kptr_stash_fail.c         |   2 +-
 .../selftests/bpf/progs/map_kptr_fail.c       |   4 +-
 .../bpf/progs/mem_rdonly_untrusted.c          |   2 +-
 .../bpf/progs/nested_trust_failure.c          |   2 +-
 .../selftests/bpf/progs/res_spin_lock_fail.c  |   2 +-
 .../testing/selftests/bpf/progs/stream_fail.c |   2 +-
 .../selftests/bpf/progs/task_kfunc_failure.c  |   4 +-
 .../bpf/progs/verifier_cgroup_storage.c       |   4 +-
 .../selftests/bpf/progs/verifier_ctx.c        |   2 +-
 .../bpf/progs/verifier_ref_tracking.c         |   2 +-
 .../selftests/bpf/progs/verifier_sock.c       |   6 +-
 .../selftests/bpf/progs/verifier_unpriv.c     |   4 +-
 .../selftests/bpf/progs/verifier_vfs_reject.c |   8 +-
 .../testing/selftests/bpf/progs/wq_failures.c |   4 +-
 tools/testing/selftests/bpf/verifier/calls.c  |   6 +-
 .../testing/selftests/bpf/verifier/map_kptr.c |  10 +-
 23 files changed, 286 insertions(+), 267 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 05b9fe98b8f8..291f11ddd176 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -910,6 +910,7 @@ struct bpf_verifier_env {
 	 * e.g., in reg_type_str() to generate reg_type string
 	 */
 	char tmp_str_buf[TMP_STR_BUF_LEN];
+	char tmp_reg_arg_name_buf[16];
 	struct bpf_insn insn_buf[INSN_BUF_SIZE];
 	struct bpf_insn epilogue_buf[INSN_BUF_SIZE];
 	struct bpf_scc_callchain callchain_buf;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 54296d818d35..01df990f841a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2179,6 +2179,18 @@ static struct bpf_verifier_state *push_stack(struct bpf_verifier_env *env,
 	return &elem->st;
 }
 
+static const char *reg_arg_name(struct bpf_verifier_env *env, int reg_or_arg)
+{
+	char *buf = env->tmp_reg_arg_name_buf;
+	int len = sizeof(env->tmp_reg_arg_name_buf);
+
+	if (reg_or_arg >= 0)
+		snprintf(buf, len, "R%d", reg_or_arg);
+	else
+		snprintf(buf, len, "arg#%d", -(reg_or_arg + 1));
+	return buf;
+}
+
 #define CALLER_SAVED_REGS 6
 static const int caller_saved[CALLER_SAVED_REGS] = {
 	BPF_REG_0, BPF_REG_1, BPF_REG_2, BPF_REG_3, BPF_REG_4, BPF_REG_5
@@ -5822,7 +5834,7 @@ static struct bpf_reg_state *reg_state(struct bpf_verifier_env *env, int regno)
  * instead.
  */
 static int check_stack_read_var_off(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
-				    int ptr_regno, int off, int size, int dst_regno)
+				    int ptr_reg_or_arg, int off, int size, int dst_regno)
 {
 	struct bpf_func_state *ptr_state = func(env, reg);
 	int err;
@@ -5830,7 +5842,7 @@ static int check_stack_read_var_off(struct bpf_verifier_env *env, struct bpf_reg
 
 	/* Note that we pass a NULL meta, so raw access will not be permitted.
 	 */
-	err = check_stack_range_initialized(env, reg, ptr_regno, off, size,
+	err = check_stack_range_initialized(env, reg, ptr_reg_or_arg, off, size,
 					    false, BPF_READ, NULL);
 	if (err)
 		return err;
@@ -5852,7 +5864,7 @@ static int check_stack_read_var_off(struct bpf_verifier_env *env, struct bpf_reg
  * can be -1, meaning that the read value is not going to a register.
  */
 static int check_stack_read(struct bpf_verifier_env *env,
-			    struct bpf_reg_state *reg, int ptr_regno, int off, int size,
+			    struct bpf_reg_state *reg, int ptr_reg_or_arg, int off, int size,
 			    int dst_regno)
 {
 	struct bpf_func_state *state = func(env, reg);
@@ -5890,7 +5902,7 @@ static int check_stack_read(struct bpf_verifier_env *env,
 		 * than fixed offset ones. Note that dst_regno >= 0 on this
 		 * branch.
 		 */
-		err = check_stack_read_var_off(env, reg, ptr_regno, off, size,
+		err = check_stack_read_var_off(env, reg, ptr_reg_or_arg, off, size,
 					       dst_regno);
 	}
 	return err;
@@ -5950,7 +5962,7 @@ static int check_map_access_type(struct bpf_verifier_env *env, struct bpf_reg_st
 }
 
 /* check read/write into memory region (e.g., map value, ringbuf sample, etc) */
-static int __check_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
+static int __check_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int reg_or_arg,
 			      int off, int size, u32 mem_size,
 			      bool zero_size_allowed)
 {
@@ -5971,8 +5983,8 @@ static int __check_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state
 	case PTR_TO_PACKET:
 	case PTR_TO_PACKET_META:
 	case PTR_TO_PACKET_END:
-		verbose(env, "invalid access to packet, off=%d size=%d, R%d(id=%d,off=%d,r=%d)\n",
-			off, size, regno, reg->id, off, mem_size);
+		verbose(env, "invalid access to packet, off=%d size=%d, %s(id=%d,off=%d,r=%d)\n",
+			off, size, reg_arg_name(env, reg_or_arg), reg->id, off, mem_size);
 		break;
 	case PTR_TO_CTX:
 		verbose(env, "invalid access to context, ctx_size=%d off=%d size=%d\n",
@@ -5988,7 +6000,7 @@ static int __check_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state
 }
 
 /* check read/write into a memory region with possible variable offset */
-static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
+static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int reg_or_arg,
 				   int off, int size, u32 mem_size,
 				   bool zero_size_allowed)
 {
@@ -6008,15 +6020,15 @@ static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_
 	    (reg->smin_value == S64_MIN ||
 	     (off + reg->smin_value != (s64)(s32)(off + reg->smin_value)) ||
 	      reg->smin_value + off < 0)) {
-		verbose(env, "R%d min value is negative, either use unsigned index or do a if (index >=0) check.\n",
-			regno);
+		verbose(env, "%s min value is negative, either use unsigned index or do a if (index >=0) check.\n",
+			reg_arg_name(env, reg_or_arg));
 		return -EACCES;
 	}
-	err = __check_mem_access(env, reg, regno, reg->smin_value + off, size,
+	err = __check_mem_access(env, reg, reg_or_arg, reg->smin_value + off, size,
 				 mem_size, zero_size_allowed);
 	if (err) {
-		verbose(env, "R%d min value is outside of the allowed memory range\n",
-			regno);
+		verbose(env, "%s min value is outside of the allowed memory range\n",
+			reg_arg_name(env, reg_or_arg));
 		return err;
 	}
 
@@ -6025,15 +6037,15 @@ static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_
 	 * If reg->umax_value + off could overflow, treat that as unbounded too.
 	 */
 	if (reg->umax_value >= BPF_MAX_VAR_OFF) {
-		verbose(env, "R%d unbounded memory access, make sure to bounds check any such access\n",
-			regno);
+		verbose(env, "%s unbounded memory access, make sure to bounds check any such access\n",
+			reg_arg_name(env, reg_or_arg));
 		return -EACCES;
 	}
-	err = __check_mem_access(env, reg, regno, reg->umax_value + off, size,
+	err = __check_mem_access(env, reg, reg_or_arg, reg->umax_value + off, size,
 				 mem_size, zero_size_allowed);
 	if (err) {
-		verbose(env, "R%d max value is outside of the allowed memory range\n",
-			regno);
+		verbose(env, "%s max value is outside of the allowed memory range\n",
+			reg_arg_name(env, reg_or_arg));
 		return err;
 	}
 
@@ -6041,7 +6053,7 @@ static int check_mem_region_access(struct bpf_verifier_env *env, struct bpf_reg_
 }
 
 static int __check_ptr_off_reg(struct bpf_verifier_env *env,
-			       const struct bpf_reg_state *reg, int regno,
+			       const struct bpf_reg_state *reg, int reg_or_arg,
 			       bool fixed_off_ok)
 {
 	/* Access to this pointer-typed register or passing it to a helper
@@ -6058,14 +6070,14 @@ static int __check_ptr_off_reg(struct bpf_verifier_env *env,
 	}
 
 	if (reg->smin_value < 0) {
-		verbose(env, "negative offset %s ptr R%d off=%lld disallowed\n",
-			reg_type_str(env, reg->type), regno, reg->var_off.value);
+		verbose(env, "negative offset %s ptr %s off=%lld disallowed\n",
+			reg_type_str(env, reg->type), reg_arg_name(env, reg_or_arg), reg->var_off.value);
 		return -EACCES;
 	}
 
 	if (!fixed_off_ok && reg->var_off.value != 0) {
-		verbose(env, "dereference of modified %s ptr R%d off=%lld disallowed\n",
-			reg_type_str(env, reg->type), regno, reg->var_off.value);
+		verbose(env, "dereference of modified %s ptr %s off=%lld disallowed\n",
+			reg_type_str(env, reg->type), reg_arg_name(env, reg_or_arg), reg->var_off.value);
 		return -EACCES;
 	}
 
@@ -6330,7 +6342,7 @@ static u32 map_mem_size(const struct bpf_map *map)
 }
 
 /* check read/write into a map element with possible variable offset */
-static int check_map_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
+static int check_map_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int reg_or_arg,
 			    int off, int size, bool zero_size_allowed,
 			    enum bpf_access_src src)
 {
@@ -6339,7 +6351,7 @@ static int check_map_access(struct bpf_verifier_env *env, struct bpf_reg_state *
 	struct btf_record *rec;
 	int err, i;
 
-	err = check_mem_region_access(env, reg, regno, off, size, mem_size, zero_size_allowed);
+	err = check_mem_region_access(env, reg, reg_or_arg, off, size, mem_size, zero_size_allowed);
 	if (err)
 		return err;
 
@@ -6437,17 +6449,17 @@ static bool may_access_direct_pkt_data(struct bpf_verifier_env *env,
 	}
 }
 
-static int check_packet_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int off,
+static int check_packet_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int reg_or_arg, int off,
 			       int size, bool zero_size_allowed)
 {
 	int err;
 
 	if (reg->range < 0) {
-		verbose(env, "R%d offset is outside of the packet\n", regno);
+		verbose(env, "%s offset is outside of the packet\n", reg_arg_name(env, reg_or_arg));
 		return -EINVAL;
 	}
 
-	err = check_mem_region_access(env, reg, regno, off, size, reg->range, zero_size_allowed);
+	err = check_mem_region_access(env, reg, reg_or_arg, off, size, reg->range, zero_size_allowed);
 	if (err)
 		return err;
 
@@ -6502,7 +6514,7 @@ static int __check_ctx_access(struct bpf_verifier_env *env, int insn_idx, int of
 	return -EACCES;
 }
 
-static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, int regno,
+static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, int reg_or_arg,
 			    int off, int access_size, enum bpf_access_type t,
 			    struct bpf_insn_access_aux *info)
 {
@@ -6515,9 +6527,9 @@ static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, struct b
 	int err;
 
 	if (var_off_ok)
-		err = check_mem_region_access(env, reg, regno, off, access_size, U16_MAX, false);
+		err = check_mem_region_access(env, reg, reg_or_arg, off, access_size, U16_MAX, false);
 	else
-		err = __check_ptr_off_reg(env, reg, regno, fixed_off_ok);
+		err = __check_ptr_off_reg(env, reg, reg_or_arg, fixed_off_ok);
 	if (err)
 		return err;
 	off += reg->umax_value;
@@ -6541,15 +6553,15 @@ static int check_flow_keys_access(struct bpf_verifier_env *env, int off,
 }
 
 static int check_sock_access(struct bpf_verifier_env *env, int insn_idx,
-			     struct bpf_reg_state *reg, int regno, int off, int size,
+			     struct bpf_reg_state *reg, int reg_or_arg, int off, int size,
 			     enum bpf_access_type t)
 {
 	struct bpf_insn_access_aux info = {};
 	bool valid;
 
 	if (reg->smin_value < 0) {
-		verbose(env, "R%d min value is negative, either use unsigned index or do a if (index >=0) check.\n",
-			regno);
+		verbose(env, "%s min value is negative, either use unsigned index or do a if (index >=0) check.\n",
+			reg_arg_name(env, reg_or_arg));
 		return -EACCES;
 	}
 
@@ -6577,8 +6589,8 @@ static int check_sock_access(struct bpf_verifier_env *env, int insn_idx,
 		return 0;
 	}
 
-	verbose(env, "R%d invalid %s access off=%d size=%d\n",
-		regno, reg_type_str(env, reg->type), off, size);
+	verbose(env, "%s invalid %s access off=%d size=%d\n",
+		reg_arg_name(env, reg_or_arg), reg_type_str(env, reg->type), off, size);
 
 	return -EACCES;
 }
@@ -7101,12 +7113,12 @@ static int get_callee_stack_depth(struct bpf_verifier_env *env,
 static int __check_buffer_access(struct bpf_verifier_env *env,
 				 const char *buf_info,
 				 const struct bpf_reg_state *reg,
-				 int regno, int off, int size)
+				 int reg_or_arg, int off, int size)
 {
 	if (off < 0) {
 		verbose(env,
-			"R%d invalid %s buffer access: off=%d, size=%d\n",
-			regno, buf_info, off, size);
+			"%s invalid %s buffer access: off=%d, size=%d\n",
+			reg_arg_name(env, reg_or_arg), buf_info, off, size);
 		return -EACCES;
 	}
 	if (!tnum_is_const(reg->var_off)) {
@@ -7114,8 +7126,8 @@ static int __check_buffer_access(struct bpf_verifier_env *env,
 
 		tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
 		verbose(env,
-			"R%d invalid variable buffer offset: off=%d, var_off=%s\n",
-			regno, off, tn_buf);
+			"%s invalid variable buffer offset: off=%d, var_off=%s\n",
+			reg_arg_name(env, reg_or_arg), off, tn_buf);
 		return -EACCES;
 	}
 
@@ -7124,11 +7136,11 @@ static int __check_buffer_access(struct bpf_verifier_env *env,
 
 static int check_tp_buffer_access(struct bpf_verifier_env *env,
 				  const struct bpf_reg_state *reg,
-				  int regno, int off, int size)
+				  int reg_or_arg, int off, int size)
 {
 	int err;
 
-	err = __check_buffer_access(env, "tracepoint", reg, regno, off, size);
+	err = __check_buffer_access(env, "tracepoint", reg, reg_or_arg, off, size);
 	if (err)
 		return err;
 
@@ -7140,14 +7152,14 @@ static int check_tp_buffer_access(struct bpf_verifier_env *env,
 
 static int check_buffer_access(struct bpf_verifier_env *env,
 			       const struct bpf_reg_state *reg,
-			       int regno, int off, int size,
+			       int reg_or_arg, int off, int size,
 			       bool zero_size_allowed,
 			       u32 *max_access)
 {
 	const char *buf_info = type_is_rdonly_mem(reg->type) ? "rdonly" : "rdwr";
 	int err;
 
-	err = __check_buffer_access(env, buf_info, reg, regno, off, size);
+	err = __check_buffer_access(env, buf_info, reg, reg_or_arg, off, size);
 	if (err)
 		return err;
 
@@ -7520,7 +7532,7 @@ static bool type_is_trusted_or_null(struct bpf_verifier_env *env,
 
 static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
 				   struct bpf_reg_state *regs, struct bpf_reg_state *reg,
-				   int regno, int off, int size,
+				   int reg_or_arg, int off, int size,
 				   enum bpf_access_type atype,
 				   int value_regno)
 {
@@ -7549,8 +7561,8 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
 
 		tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
 		verbose(env,
-			"R%d is ptr_%s invalid variable offset: off=%d, var_off=%s\n",
-			regno, tname, off, tn_buf);
+			"%s is ptr_%s invalid variable offset: off=%d, var_off=%s\n",
+			reg_arg_name(env, reg_or_arg), tname, off, tn_buf);
 		return -EACCES;
 	}
 
@@ -7558,22 +7570,22 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
 
 	if (off < 0) {
 		verbose(env,
-			"R%d is ptr_%s invalid negative access: off=%d\n",
-			regno, tname, off);
+			"%s is ptr_%s invalid negative access: off=%d\n",
+			reg_arg_name(env, reg_or_arg), tname, off);
 		return -EACCES;
 	}
 
 	if (reg->type & MEM_USER) {
 		verbose(env,
-			"R%d is ptr_%s access user memory: off=%d\n",
-			regno, tname, off);
+			"%s is ptr_%s access user memory: off=%d\n",
+			reg_arg_name(env, reg_or_arg), tname, off);
 		return -EACCES;
 	}
 
 	if (reg->type & MEM_PERCPU) {
 		verbose(env,
-			"R%d is ptr_%s access percpu memory: off=%d\n",
-			regno, tname, off);
+			"%s is ptr_%s access percpu memory: off=%d\n",
+			reg_arg_name(env, reg_or_arg), tname, off);
 		return -EACCES;
 	}
 
@@ -7676,7 +7688,7 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
 
 static int check_ptr_to_map_access(struct bpf_verifier_env *env,
 				   struct bpf_reg_state *regs, struct bpf_reg_state *reg,
-				   int regno, int off, int size,
+				   int reg_or_arg, int off, int size,
 				   enum bpf_access_type atype,
 				   int value_regno)
 {
@@ -7710,8 +7722,8 @@ static int check_ptr_to_map_access(struct bpf_verifier_env *env,
 	}
 
 	if (off < 0) {
-		verbose(env, "R%d is %s invalid negative access: off=%d\n",
-			regno, tname, off);
+		verbose(env, "%s is %s invalid negative access: off=%d\n",
+			reg_arg_name(env, reg_or_arg), tname, off);
 		return -EACCES;
 	}
 
@@ -7769,7 +7781,7 @@ static int check_stack_slot_within_bounds(struct bpf_verifier_env *env,
  */
 static int check_stack_access_within_bounds(
 		struct bpf_verifier_env *env, struct bpf_reg_state *reg,
-		int regno, int off, int access_size,
+		int reg_or_arg, int off, int access_size,
 		enum bpf_access_type type)
 {
 	struct bpf_func_state *state = func(env, reg);
@@ -7788,8 +7800,8 @@ static int check_stack_access_within_bounds(
 	} else {
 		if (reg->smax_value >= BPF_MAX_VAR_OFF ||
 		    reg->smin_value <= -BPF_MAX_VAR_OFF) {
-			verbose(env, "invalid unbounded variable-offset%s stack R%d\n",
-				err_extra, regno);
+			verbose(env, "invalid unbounded variable-offset%s stack %s\n",
+				err_extra, reg_arg_name(env, reg_or_arg));
 			return -EACCES;
 		}
 		min_off = reg->smin_value + off;
@@ -7807,14 +7819,14 @@ static int check_stack_access_within_bounds(
 
 	if (err) {
 		if (tnum_is_const(reg->var_off)) {
-			verbose(env, "invalid%s stack R%d off=%lld size=%d\n",
-				err_extra, regno, min_off, access_size);
+			verbose(env, "invalid%s stack %s off=%lld size=%d\n",
+				err_extra, reg_arg_name(env, reg_or_arg), min_off, access_size);
 		} else {
 			char tn_buf[48];
 
 			tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
-			verbose(env, "invalid variable-offset%s stack R%d var_off=%s off=%d size=%d\n",
-				err_extra, regno, tn_buf, off, access_size);
+			verbose(env, "invalid variable-offset%s stack %s var_off=%s off=%d size=%d\n",
+				err_extra, reg_arg_name(env, reg_or_arg), tn_buf, off, access_size);
 		}
 		return err;
 	}
@@ -7859,7 +7871,7 @@ static void add_scalar_to_reg(struct bpf_reg_state *dst_reg, s64 val)
  * if t==write && value_regno==-1, some unknown value is stored into memory
  * if t==read && value_regno==-1, don't care what we read from memory
  */
-static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, int regno,
+static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct bpf_reg_state *reg, int reg_or_arg,
 			    int off, int bpf_size, enum bpf_access_type t,
 			    int value_regno, bool strict_alignment_once, bool is_ldsx)
 {
@@ -7876,11 +7888,11 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
 
 	if (reg->type == PTR_TO_MAP_KEY) {
 		if (t == BPF_WRITE) {
-			verbose(env, "write to change key R%d not allowed\n", regno);
+			verbose(env, "write to change key %s not allowed\n", reg_arg_name(env, reg_or_arg));
 			return -EACCES;
 		}
 
-		err = check_mem_region_access(env, reg, regno, off, size,
+		err = check_mem_region_access(env, reg, reg_or_arg, off, size,
 					      reg->map_ptr->key_size, false);
 		if (err)
 			return err;
@@ -7897,7 +7909,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
 		err = check_map_access_type(env, reg, off, size, t);
 		if (err)
 			return err;
-		err = check_map_access(env, reg, regno, off, size, false, ACCESS_DIRECT);
+		err = check_map_access(env, reg, reg_or_arg, off, size, false, ACCESS_DIRECT);
 		if (err)
 			return err;
 		if (tnum_is_const(reg->var_off))
@@ -7944,14 +7956,14 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
 		bool rdonly_untrusted = rdonly_mem && (reg->type & PTR_UNTRUSTED);
 
 		if (type_may_be_null(reg->type)) {
-			verbose(env, "R%d invalid mem access '%s'\n", regno,
+			verbose(env, "%s invalid mem access '%s'\n", reg_arg_name(env, reg_or_arg),
 				reg_type_str(env, reg->type));
 			return -EACCES;
 		}
 
 		if (t == BPF_WRITE && rdonly_mem) {
-			verbose(env, "R%d cannot write into %s\n",
-				regno, reg_type_str(env, reg->type));
+			verbose(env, "%s cannot write into %s\n",
+				reg_arg_name(env, reg_or_arg), reg_type_str(env, reg->type));
 			return -EACCES;
 		}
 
@@ -7966,7 +7978,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
 		 * instructions, hence no need to check bounds in that case.
 		 */
 		if (!rdonly_untrusted)
-			err = check_mem_region_access(env, reg, regno, off, size,
+			err = check_mem_region_access(env, reg, reg_or_arg, off, size,
 						      reg->mem_size, false);
 		if (!err && value_regno >= 0 && (t == BPF_READ || rdonly_mem))
 			mark_reg_unknown(env, regs, value_regno);
@@ -7984,7 +7996,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
 			return -EACCES;
 		}
 
-		err = check_ctx_access(env, insn_idx, reg, regno, off, size, t, &info);
+		err = check_ctx_access(env, insn_idx, reg, reg_or_arg, off, size, t, &info);
 		if (!err && t == BPF_READ && value_regno >= 0) {
 			/* ctx access returns either a scalar, or a
 			 * PTR_TO_PACKET[_META,_END]. In the latter
@@ -8021,12 +8033,12 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
 
 	} else if (reg->type == PTR_TO_STACK) {
 		/* Basic bounds checks. */
-		err = check_stack_access_within_bounds(env, reg, regno, off, size, t);
+		err = check_stack_access_within_bounds(env, reg, reg_or_arg, off, size, t);
 		if (err)
 			return err;
 
 		if (t == BPF_READ)
-			err = check_stack_read(env, reg, regno, off, size,
+			err = check_stack_read(env, reg, reg_or_arg, off, size,
 					       value_regno);
 		else
 			err = check_stack_write(env, reg, off, size,
@@ -8042,7 +8054,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
 				value_regno);
 			return -EACCES;
 		}
-		err = check_packet_access(env, reg, regno, off, size, false);
+		err = check_packet_access(env, reg, reg_or_arg, off, size, false);
 		if (!err && t == BPF_READ && value_regno >= 0)
 			mark_reg_unknown(env, regs, value_regno);
 	} else if (reg->type == PTR_TO_FLOW_KEYS) {
@@ -8058,23 +8070,23 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
 			mark_reg_unknown(env, regs, value_regno);
 	} else if (type_is_sk_pointer(reg->type)) {
 		if (t == BPF_WRITE) {
-			verbose(env, "R%d cannot write into %s\n",
-				regno, reg_type_str(env, reg->type));
+			verbose(env, "%s cannot write into %s\n",
+				reg_arg_name(env, reg_or_arg), reg_type_str(env, reg->type));
 			return -EACCES;
 		}
-		err = check_sock_access(env, insn_idx, reg, regno, off, size, t);
+		err = check_sock_access(env, insn_idx, reg, reg_or_arg, off, size, t);
 		if (!err && value_regno >= 0)
 			mark_reg_unknown(env, regs, value_regno);
 	} else if (reg->type == PTR_TO_TP_BUFFER) {
-		err = check_tp_buffer_access(env, reg, regno, off, size);
+		err = check_tp_buffer_access(env, reg, reg_or_arg, off, size);
 		if (!err && t == BPF_READ && value_regno >= 0)
 			mark_reg_unknown(env, regs, value_regno);
 	} else if (base_type(reg->type) == PTR_TO_BTF_ID &&
 		   !type_may_be_null(reg->type)) {
-		err = check_ptr_to_btf_access(env, regs, reg, regno, off, size, t,
+		err = check_ptr_to_btf_access(env, regs, reg, reg_or_arg, off, size, t,
 					      value_regno);
 	} else if (reg->type == CONST_PTR_TO_MAP) {
-		err = check_ptr_to_map_access(env, regs, reg, regno, off, size, t,
+		err = check_ptr_to_map_access(env, regs, reg, reg_or_arg, off, size, t,
 					      value_regno);
 	} else if (base_type(reg->type) == PTR_TO_BUF &&
 		   !type_may_be_null(reg->type)) {
@@ -8083,8 +8095,8 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
 
 		if (rdonly_mem) {
 			if (t == BPF_WRITE) {
-				verbose(env, "R%d cannot write into %s\n",
-					regno, reg_type_str(env, reg->type));
+				verbose(env, "%s cannot write into %s\n",
+					reg_arg_name(env, reg_or_arg), reg_type_str(env, reg->type));
 				return -EACCES;
 			}
 			max_access = &env->prog->aux->max_rdonly_access;
@@ -8092,7 +8104,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
 			max_access = &env->prog->aux->max_rdwr_access;
 		}
 
-		err = check_buffer_access(env, reg, regno, off, size, false,
+		err = check_buffer_access(env, reg, reg_or_arg, off, size, false,
 					  max_access);
 
 		if (!err && value_regno >= 0 && (rdonly_mem || t == BPF_READ))
@@ -8101,7 +8113,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, struct b
 		if (t == BPF_READ && value_regno >= 0)
 			mark_reg_unknown(env, regs, value_regno);
 	} else {
-		verbose(env, "R%d invalid mem access '%s'\n", regno,
+		verbose(env, "%s invalid mem access '%s'\n", reg_arg_name(env, reg_or_arg),
 			reg_type_str(env, reg->type));
 		return -EACCES;
 	}
@@ -8355,7 +8367,7 @@ static int check_atomic(struct bpf_verifier_env *env, struct bpf_insn *insn)
  * read offsets are marked as read.
  */
 static int check_stack_range_initialized(
-		struct bpf_verifier_env *env, struct bpf_reg_state *reg,int regno, int off,
+		struct bpf_verifier_env *env, struct bpf_reg_state *reg, int reg_or_arg, int off,
 		int access_size, bool zero_size_allowed,
 		enum bpf_access_type type, struct bpf_call_arg_meta *meta)
 {
@@ -8380,7 +8392,7 @@ static int check_stack_range_initialized(
 		return -EACCES;
 	}
 
-	err = check_stack_access_within_bounds(env, reg, regno, off, access_size, type);
+	err = check_stack_access_within_bounds(env, reg, reg_or_arg, off, access_size, type);
 	if (err)
 		return err;
 
@@ -8396,8 +8408,8 @@ static int check_stack_range_initialized(
 			char tn_buf[48];
 
 			tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
-			verbose(env, "R%d variable offset stack access prohibited for !root, var_off=%s\n",
-				regno, tn_buf);
+			verbose(env, "%s variable offset stack access prohibited for !root, var_off=%s\n",
+				reg_arg_name(env, reg_or_arg), tn_buf);
 			return -EACCES;
 		}
 		/* Only initialized buffer on stack is allowed to be accessed
@@ -8440,7 +8452,11 @@ static int check_stack_range_initialized(
 			}
 		}
 		meta->access_size = access_size;
-		meta->regno = regno;
+
+		/* reg_or_arg is always a non-negative regno here because
+		 * meta->raw_mode is only set from check_func_arg().
+		 */
+		meta->regno = reg_or_arg;
 		return 0;
 	}
 
@@ -8480,17 +8497,17 @@ static int check_stack_range_initialized(
 		if (*stype == STACK_POISON) {
 			if (allow_poison)
 				goto mark;
-			verbose(env, "reading from stack R%d off %d+%d size %d, slot poisoned by dead code elimination\n",
-				regno, min_off, i - min_off, access_size);
+			verbose(env, "reading from stack %s off %d+%d size %d, slot poisoned by dead code elimination\n",
+				reg_arg_name(env, reg_or_arg), min_off, i - min_off, access_size);
 		} else if (tnum_is_const(reg->var_off)) {
-			verbose(env, "invalid read from stack R%d off %d+%d size %d\n",
-				regno, min_off, i - min_off, access_size);
+			verbose(env, "invalid read from stack %s off %d+%d size %d\n",
+				reg_arg_name(env, reg_or_arg), min_off, i - min_off, access_size);
 		} else {
 			char tn_buf[48];
 
 			tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
-			verbose(env, "invalid read from stack R%d var_off %s+%d size %d\n",
-				regno, tn_buf, i - min_off, access_size);
+			verbose(env, "invalid read from stack %s var_off %s+%d size %d\n",
+				reg_arg_name(env, reg_or_arg), tn_buf, i - min_off, access_size);
 		}
 		return -EACCES;
 mark:
@@ -8499,7 +8516,7 @@ static int check_stack_range_initialized(
 	return 0;
 }
 
-static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
+static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int reg_or_arg,
 				   int access_size, enum bpf_access_type access_type,
 				   bool zero_size_allowed,
 				   struct bpf_call_arg_meta *meta)
@@ -8510,36 +8527,36 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_
 	switch (base_type(reg->type)) {
 	case PTR_TO_PACKET:
 	case PTR_TO_PACKET_META:
-		return check_packet_access(env, reg, regno, 0, access_size,
+		return check_packet_access(env, reg, reg_or_arg, 0, access_size,
 					   zero_size_allowed);
 	case PTR_TO_MAP_KEY:
 		if (access_type == BPF_WRITE) {
-			verbose(env, "R%d cannot write into %s\n", regno,
+			verbose(env, "%s cannot write into %s\n", reg_arg_name(env, reg_or_arg),
 				reg_type_str(env, reg->type));
 			return -EACCES;
 		}
-		return check_mem_region_access(env, reg, regno, 0, access_size,
+		return check_mem_region_access(env, reg, reg_or_arg, 0, access_size,
 					       reg->map_ptr->key_size, false);
 	case PTR_TO_MAP_VALUE:
 		if (check_map_access_type(env, reg, 0, access_size, access_type))
 			return -EACCES;
-		return check_map_access(env, reg, regno, 0, access_size,
+		return check_map_access(env, reg, reg_or_arg, 0, access_size,
 					zero_size_allowed, ACCESS_HELPER);
 	case PTR_TO_MEM:
 		if (type_is_rdonly_mem(reg->type)) {
 			if (access_type == BPF_WRITE) {
-				verbose(env, "R%d cannot write into %s\n", regno,
+				verbose(env, "%s cannot write into %s\n", reg_arg_name(env, reg_or_arg),
 					reg_type_str(env, reg->type));
 				return -EACCES;
 			}
 		}
-		return check_mem_region_access(env, reg, regno, 0,
+		return check_mem_region_access(env, reg, reg_or_arg, 0,
 					       access_size, reg->mem_size,
 					       zero_size_allowed);
 	case PTR_TO_BUF:
 		if (type_is_rdonly_mem(reg->type)) {
 			if (access_type == BPF_WRITE) {
-				verbose(env, "R%d cannot write into %s\n", regno,
+				verbose(env, "%s cannot write into %s\n", reg_arg_name(env, reg_or_arg),
 					reg_type_str(env, reg->type));
 				return -EACCES;
 			}
@@ -8548,21 +8565,21 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_
 		} else {
 			max_access = &env->prog->aux->max_rdwr_access;
 		}
-		return check_buffer_access(env, reg, regno, 0,
+		return check_buffer_access(env, reg, reg_or_arg, 0,
 					   access_size, zero_size_allowed,
 					   max_access);
 	case PTR_TO_STACK:
 		return check_stack_range_initialized(
 				env, reg,
-				regno, 0, access_size,
+				reg_or_arg, 0, access_size,
 				zero_size_allowed, access_type, meta);
 	case PTR_TO_BTF_ID:
-		return check_ptr_to_btf_access(env, regs, reg, regno, 0,
+		return check_ptr_to_btf_access(env, regs, reg, reg_or_arg, 0,
 					       access_size, BPF_READ, -1);
 	case PTR_TO_CTX:
 		/* Only permit reading or writing syscall context using helper calls. */
 		if (is_var_ctx_off_allowed(env->prog)) {
-			int err = check_mem_region_access(env, reg, regno, 0, access_size, U16_MAX,
+			int err = check_mem_region_access(env, reg, reg_or_arg, 0, access_size, U16_MAX,
 							  zero_size_allowed);
 			if (err)
 				return err;
@@ -8577,7 +8594,7 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_
 		    register_is_null(reg))
 			return 0;
 
-		verbose(env, "R%d type=%s ", regno,
+		verbose(env, "%s type=%s ", reg_arg_name(env, reg_or_arg),
 			reg_type_str(env, reg->type));
 		verbose(env, "expected=%s\n", reg_type_str(env, PTR_TO_STACK));
 		return -EACCES;
@@ -8592,12 +8609,12 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, struct bpf_reg_
  */
 static int check_mem_size_reg(struct bpf_verifier_env *env,
 			      struct bpf_reg_state *mem_reg,
-			      struct bpf_reg_state *size_reg, int mem_regno,
+			      struct bpf_reg_state *size_reg, int reg_or_arg,
 			      enum bpf_access_type access_type,
 			      bool zero_size_allowed,
 			      struct bpf_call_arg_meta *meta)
 {
-	int size_regno = mem_regno + 1;
+	int size_reg_or_arg = (reg_or_arg >= 0) ? reg_or_arg + 1 : reg_or_arg - 1;
 	int err;
 
 	/* This is used to refine r0 return value bounds for helpers
@@ -8619,31 +8636,31 @@ static int check_mem_size_reg(struct bpf_verifier_env *env,
 		meta = NULL;
 
 	if (size_reg->smin_value < 0) {
-		verbose(env, "R%d min value is negative, either use unsigned or 'var &= const'\n",
-			size_regno);
+		verbose(env, "%s min value is negative, either use unsigned or 'var &= const'\n",
+			reg_arg_name(env, size_reg_or_arg));
 		return -EACCES;
 	}
 
 	if (size_reg->umin_value == 0 && !zero_size_allowed) {
-		verbose(env, "R%d invalid zero-sized read: u64=[%lld,%lld]\n",
-			size_regno, size_reg->umin_value, size_reg->umax_value);
+		verbose(env, "%s invalid zero-sized read: u64=[%lld,%lld]\n",
+			reg_arg_name(env, size_reg_or_arg), size_reg->umin_value, size_reg->umax_value);
 		return -EACCES;
 	}
 
 	if (size_reg->umax_value >= BPF_MAX_VAR_SIZ) {
-		verbose(env, "R%d unbounded memory access, use 'var &= const' or 'if (var < const)'\n",
-			size_regno);
+		verbose(env, "%s unbounded memory access, use 'var &= const' or 'if (var < const)'\n",
+			reg_arg_name(env, size_reg_or_arg));
 		return -EACCES;
 	}
-	err = check_helper_mem_access(env, mem_reg, mem_regno, size_reg->umax_value,
+	err = check_helper_mem_access(env, mem_reg, reg_or_arg, size_reg->umax_value,
 				      access_type, zero_size_allowed, meta);
-	if (!err)
-		err = mark_chain_precision(env, size_regno);
+	if (!err && size_reg_or_arg > 0)
+		err = mark_chain_precision(env, size_reg_or_arg);
 	return err;
 }
 
 static int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
-			 int regno, u32 mem_size)
+			 int reg_or_arg, u32 mem_size)
 {
 	bool may_be_null = type_may_be_null(reg->type);
 	struct bpf_reg_state saved_reg;
@@ -8663,8 +8680,8 @@ static int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg
 
 	int size = base_type(reg->type) == PTR_TO_STACK ? -(int)mem_size : mem_size;
 
-	err = check_helper_mem_access(env, reg, regno, size, BPF_READ, true, NULL);
-	err = err ?: check_helper_mem_access(env, reg, regno, size, BPF_WRITE, true, NULL);
+	err = check_helper_mem_access(env, reg, reg_or_arg, size, BPF_READ, true, NULL);
+	err = err ?: check_helper_mem_access(env, reg, reg_or_arg, size, BPF_WRITE, true, NULL);
 
 	if (may_be_null)
 		*reg = saved_reg;
@@ -8674,14 +8691,15 @@ static int check_mem_reg(struct bpf_verifier_env *env, struct bpf_reg_state *reg
 
 static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg_state *mem_reg,
 				    struct bpf_reg_state *size_reg,
-				    u32 mem_regno)
+				    u32 mem_argno)
 {
+	int reg_or_arg = -(int)(mem_argno + 1);
 	bool may_be_null = type_may_be_null(mem_reg->type);
 	struct bpf_reg_state saved_reg;
 	struct bpf_call_arg_meta meta;
 	int err;
 
-	WARN_ON_ONCE(mem_regno > BPF_REG_4);
+	WARN_ON_ONCE(mem_argno > BPF_REG_3);
 
 	memset(&meta, 0, sizeof(meta));
 
@@ -8690,8 +8708,8 @@ static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg
 		mark_ptr_not_null_reg(mem_reg);
 	}
 
-	err = check_mem_size_reg(env, mem_reg, size_reg, mem_regno, BPF_READ, true, &meta);
-	err = err ?: check_mem_size_reg(env, mem_reg, size_reg, mem_regno, BPF_WRITE, true, &meta);
+	err = check_mem_size_reg(env, mem_reg, size_reg, reg_or_arg, BPF_READ, true, &meta);
+	err = err ?: check_mem_size_reg(env, mem_reg, size_reg, reg_or_arg, BPF_WRITE, true, &meta);
 
 	if (may_be_null)
 		*mem_reg = saved_reg;
@@ -8727,7 +8745,7 @@ enum {
  * env->cur_state->active_locks remembers which map value element or allocated
  * object got locked and clears it after bpf_spin_unlock.
  */
-static int process_spin_lock(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int flags)
+static int process_spin_lock(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno, int flags)
 {
 	bool is_lock = flags & PROCESS_SPIN_LOCK, is_res_lock = flags & PROCESS_RES_LOCK;
 	const char *lock_str = is_res_lock ? "bpf_res_spin" : "bpf_spin";
@@ -8743,8 +8761,8 @@ static int process_spin_lock(struct bpf_verifier_env *env, struct bpf_reg_state
 
 	if (!is_const) {
 		verbose(env,
-			"R%d doesn't have constant offset. %s_lock has to be at the constant offset\n",
-			regno, lock_str);
+			"arg#%d doesn't have constant offset. %s_lock has to be at the constant offset\n",
+			argno, lock_str);
 		return -EINVAL;
 	}
 	if (reg->type == PTR_TO_MAP_VALUE) {
@@ -8843,7 +8861,7 @@ static int process_spin_lock(struct bpf_verifier_env *env, struct bpf_reg_state
 }
 
-/* Check if @regno is a pointer to a specific field in a map value */
+/* Check if @argno is a pointer to a specific field in a map value */
-static int check_map_field_pointer(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 regno,
+static int check_map_field_pointer(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 argno,
 				   enum btf_field_type field_type,
 				   struct bpf_map_desc *map_desc)
 {
@@ -8855,8 +8873,8 @@ static int check_map_field_pointer(struct bpf_verifier_env *env, struct bpf_reg_
 
 	if (!is_const) {
 		verbose(env,
-			"R%d doesn't have constant offset. %s has to be at the constant offset\n",
-			regno, struct_name);
+			"arg#%d doesn't have constant offset. %s has to be at the constant offset\n",
+			argno, struct_name);
 		return -EINVAL;
 	}
 	if (!map->btf) {
@@ -8896,26 +8914,26 @@ static int check_map_field_pointer(struct bpf_verifier_env *env, struct bpf_reg_
 	return 0;
 }
 
-static int process_timer_func(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
+static int process_timer_func(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno,
 			      struct bpf_map_desc *map)
 {
 	if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
 		verbose(env, "bpf_timer cannot be used for PREEMPT_RT.\n");
 		return -EOPNOTSUPP;
 	}
-	return check_map_field_pointer(env, reg, regno, BPF_TIMER, map);
+	return check_map_field_pointer(env, reg, argno, BPF_TIMER, map);
 }
 
-static int process_timer_helper(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
+static int process_timer_helper(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno,
 				struct bpf_call_arg_meta *meta)
 {
-	return process_timer_func(env, reg, regno, &meta->map);
+	return process_timer_func(env, reg, argno, &meta->map);
 }
 
-static int process_timer_kfunc(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
+static int process_timer_kfunc(struct bpf_verifier_env *env, struct bpf_reg_state *reg, u32 argno,
 			       struct bpf_kfunc_call_arg_meta *meta)
 {
-	return process_timer_func(env, reg, regno, &meta->map);
+	return process_timer_func(env, reg, argno, &meta->map);
 }
 
 static int process_kptr_func(struct bpf_verifier_env *env, int regno,
@@ -8991,7 +9009,7 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
  * Helpers which do not mutate the bpf_dynptr set MEM_RDONLY in their argument
  * type, and declare it as 'const struct bpf_dynptr *' in their prototype.
  */
-static int process_dynptr_func(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int insn_idx,
+static int process_dynptr_func(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno, int insn_idx,
 			       enum bpf_arg_type arg_type, int clone_ref_obj_id)
 {
 	int err;
@@ -8999,7 +9017,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, struct bpf_reg_stat
 	if (reg->type != PTR_TO_STACK && reg->type != CONST_PTR_TO_DYNPTR) {
 		verbose(env,
 			"arg#%d expected pointer to stack or const struct bpf_dynptr\n",
-			regno - 1);
+			argno);
 		return -EINVAL;
 	}
 
@@ -9036,7 +9054,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, struct bpf_reg_stat
 
 		/* we write BPF_DW bits (8 bytes) at a time */
 		for (i = 0; i < BPF_DYNPTR_SIZE; i += 8) {
-			err = check_mem_access(env, insn_idx, reg, regno,
+			err = check_mem_access(env, insn_idx, reg, -(argno + 1),
 					       i, BPF_DW, BPF_WRITE, -1, false, false);
 			if (err)
 				return err;
@@ -9053,7 +9071,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, struct bpf_reg_stat
 		if (!is_dynptr_reg_valid_init(env, reg)) {
 			verbose(env,
 				"Expected an initialized dynptr as arg#%d\n",
-				regno - 1);
+				argno);
 			return -EINVAL;
 		}
 
@@ -9061,7 +9079,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, struct bpf_reg_stat
 		if (!is_dynptr_type_expected(env, reg, arg_type & ~MEM_RDONLY)) {
 			verbose(env,
 				"Expected a dynptr of type %s as arg#%d\n",
-				dynptr_type_str(arg_to_dynptr_type(arg_type)), regno - 1);
+				dynptr_type_str(arg_to_dynptr_type(arg_type)), argno);
 			return -EINVAL;
 		}
 
@@ -9110,14 +9128,14 @@ static bool is_kfunc_arg_iter(struct bpf_kfunc_call_arg_meta *meta, int arg_idx,
 	return btf_param_match_suffix(meta->btf, arg, "__iter");
 }
 
-static int process_iter_arg(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno, int insn_idx,
+static int process_iter_arg(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno, int insn_idx,
 			    struct bpf_kfunc_call_arg_meta *meta)
 {
 	const struct btf_type *t;
 	int spi, err, i, nr_slots, btf_id;
 
 	if (reg->type != PTR_TO_STACK) {
-		verbose(env, "arg#%d expected pointer to an iterator on stack\n", regno - 1);
+		verbose(env, "arg#%d expected pointer to an iterator on stack\n", argno);
 		return -EINVAL;
 	}
 
@@ -9127,9 +9145,9 @@ static int process_iter_arg(struct bpf_verifier_env *env, struct bpf_reg_state *
 	 * to any kfunc, if arg has "__iter" suffix, we need to be a bit more
 	 * conservative here.
 	 */
-	btf_id = btf_check_iter_arg(meta->btf, meta->func_proto, regno - 1);
+	btf_id = btf_check_iter_arg(meta->btf, meta->func_proto, argno);
 	if (btf_id < 0) {
-		verbose(env, "expected valid iter pointer as arg#%d\n", regno - 1);
+		verbose(env, "expected valid iter pointer as arg#%d\n", argno);
 		return -EINVAL;
 	}
 	t = btf_type_by_id(meta->btf, btf_id);
@@ -9139,12 +9157,12 @@ static int process_iter_arg(struct bpf_verifier_env *env, struct bpf_reg_state *
 		/* bpf_iter_<type>_new() expects pointer to uninit iter state */
 		if (!is_iter_reg_valid_uninit(env, reg, nr_slots)) {
 			verbose(env, "expected uninitialized iter_%s as arg#%d\n",
-				iter_type_str(meta->btf, btf_id), regno - 1);
+				iter_type_str(meta->btf, btf_id), argno);
 			return -EINVAL;
 		}
 
 		for (i = 0; i < nr_slots * 8; i += BPF_REG_SIZE) {
-			err = check_mem_access(env, insn_idx, reg, regno,
+			err = check_mem_access(env, insn_idx, reg, -(argno + 1),
 					       i, BPF_DW, BPF_WRITE, -1, false, false);
 			if (err)
 				return err;
@@ -9163,7 +9181,7 @@ static int process_iter_arg(struct bpf_verifier_env *env, struct bpf_reg_state *
 			break;
 		case -EINVAL:
 			verbose(env, "expected an initialized iter_%s as arg#%d\n",
-				iter_type_str(meta->btf, btf_id), regno - 1);
+				iter_type_str(meta->btf, btf_id), argno);
 			return err;
 		case -EPROTO:
 			verbose(env, "expected an RCU CS when using %s\n", meta->func_name);
@@ -9676,7 +9694,7 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
 
 		if (type_may_be_null(reg->type) &&
 		    (!type_may_be_null(arg_type) || arg_type_is_release(arg_type))) {
-			verbose(env, "Possibly NULL pointer passed to helper arg%d\n", regno);
+			verbose(env, "Possibly NULL pointer passed to helper R%d\n", regno);
 			return -EACCES;
 		}
 
@@ -9759,7 +9777,7 @@ reg_find_field_offset(const struct bpf_reg_state *reg, s32 off, u32 fields)
 }
 
 static int check_func_arg_reg_off(struct bpf_verifier_env *env,
-				  const struct bpf_reg_state *reg, int regno,
+				  const struct bpf_reg_state *reg, int reg_or_arg,
 				  enum bpf_arg_type arg_type)
 {
 	u32 type = reg->type;
@@ -9785,8 +9803,8 @@ static int check_func_arg_reg_off(struct bpf_verifier_env *env,
 		 * to give the user a better error message.
 		 */
 		if (!tnum_is_const(reg->var_off) || reg->var_off.value != 0) {
-			verbose(env, "R%d must have zero offset when passed to release func or trusted arg to kfunc\n",
-				regno);
+			verbose(env, "%s must have zero offset when passed to release func or trusted arg to kfunc\n",
+				reg_arg_name(env, reg_or_arg));
 			return -EINVAL;
 		}
 	}
@@ -9822,7 +9840,7 @@ static int check_func_arg_reg_off(struct bpf_verifier_env *env,
 		 * cases. var_off always must be 0 for PTR_TO_BTF_ID, hence we
 		 * still need to do checks instead of returning.
 		 */
-		return __check_ptr_off_reg(env, reg, regno, true);
+		return __check_ptr_off_reg(env, reg, reg_or_arg, true);
 	case PTR_TO_CTX:
 		/*
 		 * Allow fixed and variable offsets for syscall context, but
@@ -9834,7 +9852,7 @@ static int check_func_arg_reg_off(struct bpf_verifier_env *env,
 			return 0;
 		fallthrough;
 	default:
-		return __check_ptr_off_reg(env, reg, regno, false);
+		return __check_ptr_off_reg(env, reg, reg_or_arg, false);
 	}
 }
 
@@ -9905,7 +9923,7 @@ static enum bpf_dynptr_type dynptr_get_type(struct bpf_verifier_env *env,
 }
 
 static int check_reg_const_str(struct bpf_verifier_env *env,
-			       struct bpf_reg_state *reg, int regno)
+			       struct bpf_reg_state *reg, int reg_or_arg)
 {
 	struct bpf_map *map = reg->map_ptr;
 	int err;
@@ -9917,17 +9935,17 @@ static int check_reg_const_str(struct bpf_verifier_env *env,
 		return -EINVAL;
 
 	if (map->map_type == BPF_MAP_TYPE_INSN_ARRAY) {
-		verbose(env, "R%d points to insn_array map which cannot be used as const string\n", regno);
+		verbose(env, "%s points to insn_array map which cannot be used as const string\n", reg_arg_name(env, reg_or_arg));
 		return -EACCES;
 	}
 
 	if (!bpf_map_is_rdonly(map)) {
-		verbose(env, "R%d does not point to a readonly map'\n", regno);
+		verbose(env, "%s does not point to a readonly map\n", reg_arg_name(env, reg_or_arg));
 		return -EACCES;
 	}
 
 	if (!tnum_is_const(reg->var_off)) {
-		verbose(env, "R%d is not a constant address'\n", regno);
+		verbose(env, "%s is not a constant address\n", reg_arg_name(env, reg_or_arg));
 		return -EACCES;
 	}
 
@@ -9936,7 +9954,7 @@ static int check_reg_const_str(struct bpf_verifier_env *env,
 		return -EACCES;
 	}
 
-	err = check_map_access(env, reg, regno, 0,
+	err = check_map_access(env, reg, reg_or_arg, 0,
 			       map->value_size - reg->var_off.value, false,
 			       ACCESS_HELPER);
 	if (err)
@@ -10042,8 +10060,8 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 
 	if (arg_type == ARG_ANYTHING) {
 		if (is_pointer_value(env, regno)) {
-			verbose(env, "R%d leaks addr into helper function\n",
-				regno);
+			verbose(env, "arg#%d leaks addr into helper function\n",
+				arg);
 			return -EACCES;
 		}
 		return 0;
@@ -10094,7 +10112,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 			if (reg->type == PTR_TO_STACK) {
 				spi = dynptr_get_spi(env, reg);
 				if (spi < 0 || !state->stack[spi].spilled_ptr.ref_obj_id) {
-					verbose(env, "arg %d is an unacquired reference\n", regno);
+					verbose(env, "arg#%d is an unacquired reference\n", arg);
 					return -EINVAL;
 				}
 			} else {
@@ -10102,8 +10120,8 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 				return -EINVAL;
 			}
 		} else if (!reg->ref_obj_id && !register_is_null(reg)) {
-			verbose(env, "R%d must be referenced when passed to release function\n",
-				regno);
+			verbose(env, "arg#%d must be referenced when passed to release function\n",
+				arg);
 			return -EINVAL;
 		}
 		if (meta->release_regno) {
@@ -10115,8 +10133,8 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 
 	if (reg->ref_obj_id && base_type(arg_type) != ARG_KPTR_XCHG_DEST) {
 		if (meta->ref_obj_id) {
-			verbose(env, "more than one arg with ref_obj_id R%d %u %u",
-				regno, reg->ref_obj_id,
+			verbose(env, "more than one arg with ref_obj_id arg#%d %u %u",
+				arg, reg->ref_obj_id,
 				meta->ref_obj_id);
 			return -EACCES;
 		}
@@ -10198,7 +10216,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 		break;
 	case ARG_PTR_TO_PERCPU_BTF_ID:
 		if (!reg->btf_id) {
-			verbose(env, "Helper has invalid btf_id in R%d\n", regno);
+			verbose(env, "Helper has invalid btf_id in arg#%d\n", arg);
 			return -EACCES;
 		}
 		meta->ret_btf = reg->btf;
@@ -10210,11 +10228,11 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 			return -EACCES;
 		}
 		if (meta->func_id == BPF_FUNC_spin_lock) {
-			err = process_spin_lock(env, reg, regno, PROCESS_SPIN_LOCK);
+			err = process_spin_lock(env, reg, arg, PROCESS_SPIN_LOCK);
 			if (err)
 				return err;
 		} else if (meta->func_id == BPF_FUNC_spin_unlock) {
-			err = process_spin_lock(env, reg, regno, 0);
+			err = process_spin_lock(env, reg, arg, 0);
 			if (err)
 				return err;
 		} else {
@@ -10223,7 +10241,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 		}
 		break;
 	case ARG_PTR_TO_TIMER:
-		err = process_timer_helper(env, reg, regno, meta);
+		err = process_timer_helper(env, reg, arg, meta);
 		if (err)
 			return err;
 		break;
@@ -10258,14 +10276,14 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 					 true, meta);
 		break;
 	case ARG_PTR_TO_DYNPTR:
-		err = process_dynptr_func(env, reg, regno, insn_idx, arg_type, 0);
+		err = process_dynptr_func(env, reg, arg, insn_idx, arg_type, 0);
 		if (err)
 			return err;
 		break;
 	case ARG_CONST_ALLOC_SIZE_OR_ZERO:
 		if (!tnum_is_const(reg->var_off)) {
-			verbose(env, "R%d is not a known constant'\n",
-				regno);
+			verbose(env, "arg#%d is not a known constant\n",
+				arg);
 			return -EACCES;
 		}
 		meta->mem_size = reg->var_off.value;
@@ -10870,7 +10888,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
 
 		if (arg->arg_type == ARG_ANYTHING) {
 			if (reg->type != SCALAR_VALUE) {
-				bpf_log(log, "R%d is not a scalar\n", regno);
+				bpf_log(log, "arg#%d is not a scalar\n", i);
 				return -EINVAL;
 			}
 		} else if (arg->arg_type & PTR_UNTRUSTED) {
@@ -10909,7 +10927,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
 			 * run-time debug nightmare.
 			 */
 			if (reg->type != PTR_TO_ARENA && reg->type != SCALAR_VALUE) {
-				bpf_log(log, "R%d is not a pointer to arena or scalar.\n", regno);
+				bpf_log(log, "arg#%d is not a pointer to arena or scalar.\n", i);
 				return -EINVAL;
 			}
 		} else if (arg->arg_type == (ARG_PTR_TO_DYNPTR | MEM_RDONLY)) {
@@ -10917,7 +10935,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
 			if (ret)
 				return ret;
 
-			ret = process_dynptr_func(env, reg, regno, -1, arg->arg_type, 0);
+			ret = process_dynptr_func(env, reg, i, -1, arg->arg_type, 0);
 			if (ret)
 				return ret;
 		} else if (base_type(arg->arg_type) == ARG_PTR_TO_BTF_ID) {
@@ -13067,15 +13085,15 @@ static int process_kf_arg_ptr_to_btf_id(struct bpf_verifier_env *env,
 	 */
 	taking_projection = btf_is_projection_of(ref_tname, reg_ref_tname);
 	if (!taking_projection && !struct_same) {
-		verbose(env, "kernel function %s args#%d expected pointer to %s %s but R%d has a pointer to %s %s\n",
-			meta->func_name, argno, btf_type_str(ref_t), ref_tname, argno + 1,
+		verbose(env, "kernel function %s args#%d expected pointer to %s %s but has a pointer to %s %s\n",
+			meta->func_name, argno, btf_type_str(ref_t), ref_tname,
 			btf_type_str(reg_ref_t), reg_ref_tname);
 		return -EINVAL;
 	}
 	return 0;
 }
 
-static int process_irq_flag(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int regno,
+static int process_irq_flag(struct bpf_verifier_env *env, struct bpf_reg_state *reg, int argno,
 			     struct bpf_kfunc_call_arg_meta *meta)
 {
 	int err, kfunc_class = IRQ_NATIVE_KFUNC;
@@ -13098,11 +13116,11 @@ static int process_irq_flag(struct bpf_verifier_env *env, struct bpf_reg_state *
 
 	if (irq_save) {
 		if (!is_irq_flag_reg_valid_uninit(env, reg)) {
-			verbose(env, "expected uninitialized irq flag as arg#%d\n", regno - 1);
+			verbose(env, "expected uninitialized irq flag as arg#%d\n", argno);
 			return -EINVAL;
 		}
 
-		err = check_mem_access(env, env->insn_idx, reg, regno, 0, BPF_DW, BPF_WRITE, -1, false, false);
+		err = check_mem_access(env, env->insn_idx, reg, -(argno + 1), 0, BPF_DW, BPF_WRITE, -1, false, false);
 		if (err)
 			return err;
 
@@ -13112,7 +13130,7 @@ static int process_irq_flag(struct bpf_verifier_env *env, struct bpf_reg_state *
 	} else {
 		err = is_irq_flag_reg_valid_init(env, reg);
 		if (err) {
-			verbose(env, "expected an initialized irq flag as arg#%d\n", regno - 1);
+			verbose(env, "expected an initialized irq flag as arg#%d\n", argno);
 			return err;
 		}
 
@@ -13403,7 +13421,7 @@ static bool check_kfunc_is_graph_node_api(struct bpf_verifier_env *env,
 
 static int
 __process_kf_arg_ptr_to_graph_root(struct bpf_verifier_env *env,
-				   struct bpf_reg_state *reg, u32 regno,
+				   struct bpf_reg_state *reg, u32 argno,
 				   struct bpf_kfunc_call_arg_meta *meta,
 				   enum btf_field_type head_field_type,
 				   struct btf_field **head_field)
@@ -13424,8 +13442,8 @@ __process_kf_arg_ptr_to_graph_root(struct bpf_verifier_env *env,
 	head_type_name = btf_field_type_name(head_field_type);
 	if (!tnum_is_const(reg->var_off)) {
 		verbose(env,
-			"R%d doesn't have constant offset. %s has to be at the constant offset\n",
-			regno, head_type_name);
+			"arg#%d doesn't have constant offset. %s has to be at the constant offset\n",
+			argno, head_type_name);
 		return -EINVAL;
 	}
 
@@ -13453,24 +13471,24 @@ __process_kf_arg_ptr_to_graph_root(struct bpf_verifier_env *env,
 }
 
 static int process_kf_arg_ptr_to_list_head(struct bpf_verifier_env *env,
-					   struct bpf_reg_state *reg, u32 regno,
+					   struct bpf_reg_state *reg, u32 argno,
 					   struct bpf_kfunc_call_arg_meta *meta)
 {
-	return __process_kf_arg_ptr_to_graph_root(env, reg, regno, meta, BPF_LIST_HEAD,
+	return __process_kf_arg_ptr_to_graph_root(env, reg, argno, meta, BPF_LIST_HEAD,
 							  &meta->arg_list_head.field);
 }
 
 static int process_kf_arg_ptr_to_rbtree_root(struct bpf_verifier_env *env,
-					     struct bpf_reg_state *reg, u32 regno,
+					     struct bpf_reg_state *reg, u32 argno,
 					     struct bpf_kfunc_call_arg_meta *meta)
 {
-	return __process_kf_arg_ptr_to_graph_root(env, reg, regno, meta, BPF_RB_ROOT,
+	return __process_kf_arg_ptr_to_graph_root(env, reg, argno, meta, BPF_RB_ROOT,
 							  &meta->arg_rbtree_root.field);
 }
 
 static int
 __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
-				   struct bpf_reg_state *reg, u32 regno,
+				   struct bpf_reg_state *reg, u32 argno,
 				   struct bpf_kfunc_call_arg_meta *meta,
 				   enum btf_field_type head_field_type,
 				   enum btf_field_type node_field_type,
@@ -13492,8 +13510,8 @@ __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
 	node_type_name = btf_field_type_name(node_field_type);
 	if (!tnum_is_const(reg->var_off)) {
 		verbose(env,
-			"R%d doesn't have constant offset. %s has to be at the constant offset\n",
-			regno, node_type_name);
+			"arg#%d doesn't have constant offset. %s has to be at the constant offset\n",
+			argno, node_type_name);
 		return -EINVAL;
 	}
 
@@ -13534,19 +13552,19 @@ __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
 }
 
 static int process_kf_arg_ptr_to_list_node(struct bpf_verifier_env *env,
-					   struct bpf_reg_state *reg, u32 regno,
+					   struct bpf_reg_state *reg, u32 argno,
 					   struct bpf_kfunc_call_arg_meta *meta)
 {
-	return __process_kf_arg_ptr_to_graph_node(env, reg, regno, meta,
+	return __process_kf_arg_ptr_to_graph_node(env, reg, argno, meta,
 						  BPF_LIST_HEAD, BPF_LIST_NODE,
 						  &meta->arg_list_head.field);
 }
 
 static int process_kf_arg_ptr_to_rbtree_node(struct bpf_verifier_env *env,
-					     struct bpf_reg_state *reg, u32 regno,
+					     struct bpf_reg_state *reg, u32 argno,
 					     struct bpf_kfunc_call_arg_meta *meta)
 {
-	return __process_kf_arg_ptr_to_graph_node(env, reg, regno, meta,
+	return __process_kf_arg_ptr_to_graph_node(env, reg, argno, meta,
 						  BPF_RB_ROOT, BPF_RB_NODE,
 						  &meta->arg_rbtree_root.field);
 }
@@ -13620,7 +13638,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 
 		if (btf_type_is_scalar(t)) {
 			if (reg->type != SCALAR_VALUE) {
-				verbose(env, "R%d is not a scalar\n", regno);
+				verbose(env, "arg#%d is not a scalar\n", i);
 				return -EINVAL;
 			}
 
@@ -13630,7 +13648,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 					return -EFAULT;
 				}
 				if (!tnum_is_const(reg->var_off)) {
-					verbose(env, "R%d must be a known constant\n", regno);
+					verbose(env, "arg#%d must be a known constant\n", i);
 					return -EINVAL;
 				}
 				ret = mark_chain_precision(env, regno);
@@ -13652,7 +13670,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				}
 
 				if (!tnum_is_const(reg->var_off)) {
-					verbose(env, "R%d is not a const\n", regno);
+					verbose(env, "arg#%d is not a const\n", i);
 					return -EINVAL;
 				}
 
@@ -13677,8 +13695,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 
 		if (reg->ref_obj_id) {
 			if (is_kfunc_release(meta) && meta->ref_obj_id) {
-				verifier_bug(env, "more than one arg with ref_obj_id R%d %u %u",
-					     regno, reg->ref_obj_id,
+				verifier_bug(env, "more than one arg with ref_obj_id arg#%d %u %u",
+					     i, reg->ref_obj_id,
 					     meta->ref_obj_id);
 				return -EFAULT;
 			}
@@ -13699,7 +13717,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			continue;
 		case KF_ARG_PTR_TO_MAP:
 			if (!reg->map_ptr) {
-				verbose(env, "pointer in R%d isn't map pointer\n", regno);
+				verbose(env, "pointer in arg#%d isn't map pointer\n", i);
 				return -EINVAL;
 			}
 			if (meta->map.ptr && (reg->map_ptr->record->wq_off >= 0 ||
@@ -13737,11 +13755,11 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 		case KF_ARG_PTR_TO_BTF_ID:
 			if (!is_trusted_reg(reg)) {
 				if (!is_kfunc_rcu(meta)) {
-					verbose(env, "R%d must be referenced or trusted\n", regno);
+					verbose(env, "arg#%d must be referenced or trusted\n", i);
 					return -EINVAL;
 				}
 				if (!is_rcu_reg(reg)) {
-					verbose(env, "R%d must be a rcu pointer\n", regno);
+					verbose(env, "arg#%d must be a rcu pointer\n", i);
 					return -EINVAL;
 				}
 			}
@@ -13773,7 +13791,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 
 		if (is_kfunc_release(meta) && reg->ref_obj_id)
 			arg_type |= OBJ_RELEASE;
-		ret = check_func_arg_reg_off(env, reg, regno, arg_type);
+		ret = check_func_arg_reg_off(env, reg, -(i + 1), arg_type);
 		if (ret < 0)
 			return ret;
 
@@ -13855,7 +13873,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				}
 			}
 
-			ret = process_dynptr_func(env, reg, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id);
+			ret = process_dynptr_func(env, reg, i, insn_idx, dynptr_arg_type, clone_ref_obj_id);
 			if (ret < 0)
 				return ret;
 
@@ -13880,7 +13898,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 					return -EINVAL;
 				}
 			}
-			ret = process_iter_arg(env, reg, regno, insn_idx, meta);
+			ret = process_iter_arg(env, reg, i, insn_idx, meta);
 			if (ret < 0)
 				return ret;
 			break;
@@ -13894,7 +13912,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				verbose(env, "allocated object must be referenced\n");
 				return -EINVAL;
 			}
-			ret = process_kf_arg_ptr_to_list_head(env, reg, regno, meta);
+			ret = process_kf_arg_ptr_to_list_head(env, reg, i, meta);
 			if (ret < 0)
 				return ret;
 			break;
@@ -13908,7 +13926,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				verbose(env, "allocated object must be referenced\n");
 				return -EINVAL;
 			}
-			ret = process_kf_arg_ptr_to_rbtree_root(env, reg, regno, meta);
+			ret = process_kf_arg_ptr_to_rbtree_root(env, reg, i, meta);
 			if (ret < 0)
 				return ret;
 			break;
@@ -13921,7 +13939,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				verbose(env, "allocated object must be referenced\n");
 				return -EINVAL;
 			}
-			ret = process_kf_arg_ptr_to_list_node(env, reg, regno, meta);
+			ret = process_kf_arg_ptr_to_list_node(env, reg, i, meta);
 			if (ret < 0)
 				return ret;
 			break;
@@ -13946,7 +13964,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				}
 			}
 
-			ret = process_kf_arg_ptr_to_rbtree_node(env, reg, regno, meta);
+			ret = process_kf_arg_ptr_to_rbtree_node(env, reg, i, meta);
 			if (ret < 0)
 				return ret;
 			break;
@@ -13978,7 +13996,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 					i, btf_type_str(ref_t), ref_tname, PTR_ERR(resolve_ret));
 				return -EINVAL;
 			}
-			ret = check_mem_reg(env, reg, regno, type_size);
+			ret = check_mem_reg(env, reg, -(i + 1), type_size);
 			if (ret < 0)
 				return ret;
 			break;
@@ -13990,7 +14008,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			const struct btf_param *size_arg = &args[i + 1];
 
 			if (!register_is_null(buff_reg) || !is_kfunc_arg_nullable(meta->btf, buff_arg)) {
-				ret = check_kfunc_mem_size_reg(env, buff_reg, size_reg, regno);
+				ret = check_kfunc_mem_size_reg(env, buff_reg, size_reg, i);
 				if (ret < 0) {
 					verbose(env, "arg#%d arg#%d memory, len pair leads to invalid memory access\n", i, i + 1);
 					return ret;
@@ -14003,7 +14021,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 					return -EFAULT;
 				}
 				if (!tnum_is_const(size_reg->var_off)) {
-					verbose(env, "R%d must be a known constant\n", regno + 1);
+					verbose(env, "arg#%d must be a known constant\n", i + 1);
 					return -EINVAL;
 				}
 				meta->arg_constant.found = true;
@@ -14048,7 +14066,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				verbose(env, "arg#%d doesn't point to a const string\n", i);
 				return -EINVAL;
 			}
-			ret = check_reg_const_str(env, reg, regno);
+			ret = check_reg_const_str(env, reg, -(i + 1));
 			if (ret)
 				return ret;
 			break;
@@ -14057,7 +14075,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				verbose(env, "arg#%d doesn't point to a map value\n", i);
 				return -EINVAL;
 			}
-			ret = check_map_field_pointer(env, reg, regno, BPF_WORKQUEUE, &meta->map);
+			ret = check_map_field_pointer(env, reg, i, BPF_WORKQUEUE, &meta->map);
 			if (ret < 0)
 				return ret;
 			break;
@@ -14066,7 +14084,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				verbose(env, "arg#%d doesn't point to a map value\n", i);
 				return -EINVAL;
 			}
-			ret = process_timer_kfunc(env, reg, regno, meta);
+			ret = process_timer_kfunc(env, reg, i, meta);
 			if (ret < 0)
 				return ret;
 			break;
@@ -14075,7 +14093,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				verbose(env, "arg#%d doesn't point to a map value\n", i);
 				return -EINVAL;
 			}
-			ret = check_map_field_pointer(env, reg, regno, BPF_TASK_WORK, &meta->map);
+			ret = check_map_field_pointer(env, reg, i, BPF_TASK_WORK, &meta->map);
 			if (ret < 0)
 				return ret;
 			break;
@@ -14084,7 +14102,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				verbose(env, "arg#%d doesn't point to an irq flag on stack\n", i);
 				return -EINVAL;
 			}
-			ret = process_irq_flag(env, reg, regno, meta);
+			ret = process_irq_flag(env, reg, i, meta);
 			if (ret < 0)
 				return ret;
 			break;
@@ -14105,7 +14123,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			if (meta->func_id == special_kfunc_list[KF_bpf_res_spin_lock_irqsave] ||
 			    meta->func_id == special_kfunc_list[KF_bpf_res_spin_unlock_irqrestore])
 				flags |= PROCESS_LOCK_IRQ;
-			ret = process_spin_lock(env, reg, regno, flags);
+			ret = process_spin_lock(env, reg, i, flags);
 			if (ret < 0)
 				return ret;
 			break;
diff --git a/tools/testing/selftests/bpf/prog_tests/cb_refs.c b/tools/testing/selftests/bpf/prog_tests/cb_refs.c
index c40df623a8f7..6300b67a3a84 100644
--- a/tools/testing/selftests/bpf/prog_tests/cb_refs.c
+++ b/tools/testing/selftests/bpf/prog_tests/cb_refs.c
@@ -12,7 +12,7 @@ struct {
 	const char *err_msg;
 } cb_refs_tests[] = {
 	{ "underflow_prog", "must point to scalar, or struct with scalar" },
-	{ "leak_prog", "Possibly NULL pointer passed to helper arg2" },
+	{ "leak_prog", "Possibly NULL pointer passed to helper R2" },
 	{ "nested_cb", "Unreleased reference id=4 alloc_insn=2" }, /* alloc_insn=2{4,5} */
 	{ "non_cb_transfer_ref", "Unreleased reference id=4 alloc_insn=1" }, /* alloc_insn=1{1,2} */
 };
diff --git a/tools/testing/selftests/bpf/prog_tests/linked_list.c b/tools/testing/selftests/bpf/prog_tests/linked_list.c
index 6f25b5f39a79..f817e0968d72 100644
--- a/tools/testing/selftests/bpf/prog_tests/linked_list.c
+++ b/tools/testing/selftests/bpf/prog_tests/linked_list.c
@@ -68,7 +68,7 @@ static struct {
 	{ "obj_type_id_oor", "local type ID argument must be in range [0, U32_MAX]" },
 	{ "obj_new_no_composite", "bpf_obj_new/bpf_percpu_obj_new type ID argument must be of a struct" },
 	{ "obj_new_no_struct", "bpf_obj_new/bpf_percpu_obj_new type ID argument must be of a struct" },
-	{ "obj_drop_non_zero_off", "R1 must have zero offset when passed to release func" },
+	{ "obj_drop_non_zero_off", "arg#0 must have zero offset when passed to release func" },
 	{ "new_null_ret", "R0 invalid mem access 'ptr_or_null_'" },
 	{ "obj_new_acq", "Unreleased reference id=" },
 	{ "use_after_drop", "invalid mem access 'scalar'" },
@@ -91,7 +91,7 @@ static struct {
 	{ "incorrect_node_off1", "bpf_list_node not found at offset=49" },
 	{ "incorrect_node_off2", "arg#1 offset=0, but expected bpf_list_node at offset=48 in struct foo" },
 	{ "no_head_type", "bpf_list_head not found at offset=0" },
-	{ "incorrect_head_var_off1", "R1 doesn't have constant offset" },
+	{ "incorrect_head_var_off1", "arg#0 doesn't have constant offset" },
 	{ "incorrect_head_var_off2", "variable ptr_ access var_off=(0x0; 0x1ffffffff) disallowed" },
 	{ "incorrect_head_off1", "bpf_list_head not found at offset=25" },
 	{ "incorrect_head_off2", "bpf_list_head not found at offset=1" },
diff --git a/tools/testing/selftests/bpf/progs/cpumask_failure.c b/tools/testing/selftests/bpf/progs/cpumask_failure.c
index 61c32e91e8c3..588fa15e71ef 100644
--- a/tools/testing/selftests/bpf/progs/cpumask_failure.c
+++ b/tools/testing/selftests/bpf/progs/cpumask_failure.c
@@ -117,7 +117,7 @@ int BPF_PROG(test_cpumask_null, struct task_struct *task, u64 clone_flags)
 }
 
 SEC("tp_btf/task_newtask")
-__failure __msg("R2 must be a rcu pointer")
+__failure __msg("arg#1 must be a rcu pointer")
 int BPF_PROG(test_global_mask_out_of_rcu, struct task_struct *task, u64 clone_flags)
 {
 	struct bpf_cpumask *local, *prev;
@@ -179,7 +179,7 @@ int BPF_PROG(test_global_mask_no_null_check, struct task_struct *task, u64 clone
 }
 
 SEC("tp_btf/task_newtask")
-__failure __msg("Possibly NULL pointer passed to helper arg2")
+__failure __msg("Possibly NULL pointer passed to helper R2")
 int BPF_PROG(test_global_mask_rcu_no_null_check, struct task_struct *task, u64 clone_flags)
 {
 	struct bpf_cpumask *prev, *curr;
diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c
index d552117b001e..381072d5152f 100644
--- a/tools/testing/selftests/bpf/progs/dynptr_fail.c
+++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c
@@ -136,7 +136,7 @@ int ringbuf_missing_release_callback(void *ctx)
 
 /* Can't call bpf_ringbuf_submit/discard_dynptr on a non-initialized dynptr */
 SEC("?raw_tp")
-__failure __msg("arg 1 is an unacquired reference")
+__failure __msg("arg#0 is an unacquired reference")
 int ringbuf_release_uninit_dynptr(void *ctx)
 {
 	struct bpf_dynptr ptr;
@@ -650,7 +650,7 @@ int invalid_offset(void *ctx)
 
 /* Can't release a dynptr twice */
 SEC("?raw_tp")
-__failure __msg("arg 1 is an unacquired reference")
+__failure __msg("arg#0 is an unacquired reference")
 int release_twice(void *ctx)
 {
 	struct bpf_dynptr ptr;
@@ -677,7 +677,7 @@ static int release_twice_callback_fn(__u32 index, void *data)
  * within a callback function, fails
  */
 SEC("?raw_tp")
-__failure __msg("arg 1 is an unacquired reference")
+__failure __msg("arg#0 is an unacquired reference")
 int release_twice_callback(void *ctx)
 {
 	struct bpf_dynptr ptr;
diff --git a/tools/testing/selftests/bpf/progs/iters_testmod.c b/tools/testing/selftests/bpf/progs/iters_testmod.c
index 5379e9960ffd..60d28220df2a 100644
--- a/tools/testing/selftests/bpf/progs/iters_testmod.c
+++ b/tools/testing/selftests/bpf/progs/iters_testmod.c
@@ -85,7 +85,7 @@ int iter_next_rcu_or_null(const void *ctx)
 }
 
 SEC("raw_tp/sys_enter")
-__failure __msg("R1 must be referenced or trusted")
+__failure __msg("arg#0 must be referenced or trusted")
 int iter_next_rcu_not_trusted(const void *ctx)
 {
 	struct task_struct *cur_task = bpf_get_current_task_btf();
@@ -105,8 +105,8 @@ int iter_next_rcu_not_trusted(const void *ctx)
 }
 
 SEC("raw_tp/sys_enter")
-__failure __msg("R1 cannot write into rdonly_mem")
-/* Message should not be 'R1 cannot write into rdonly_trusted_mem' */
+__failure __msg("arg#0 cannot write into rdonly_mem")
+/* Message should not be 'arg#0 cannot write into rdonly_trusted_mem' */
 int iter_next_ptr_mem_not_trusted(const void *ctx)
 {
 	struct bpf_iter_num num_it;
diff --git a/tools/testing/selftests/bpf/progs/local_kptr_stash_fail.c b/tools/testing/selftests/bpf/progs/local_kptr_stash_fail.c
index fcf7a7567da2..9c817aca03e1 100644
--- a/tools/testing/selftests/bpf/progs/local_kptr_stash_fail.c
+++ b/tools/testing/selftests/bpf/progs/local_kptr_stash_fail.c
@@ -63,7 +63,7 @@ long stash_rb_nodes(void *ctx)
 }
 
 SEC("tc")
-__failure __msg("R1 must have zero offset when passed to release func")
+__failure __msg("arg#0 must have zero offset when passed to release func")
 long drop_rb_node_off(void *ctx)
 {
 	struct map_value *mapval;
diff --git a/tools/testing/selftests/bpf/progs/map_kptr_fail.c b/tools/testing/selftests/bpf/progs/map_kptr_fail.c
index 6443b320c732..ea765ac4fedb 100644
--- a/tools/testing/selftests/bpf/progs/map_kptr_fail.c
+++ b/tools/testing/selftests/bpf/progs/map_kptr_fail.c
@@ -252,7 +252,7 @@ int reject_untrusted_store_to_ref(struct __sk_buff *ctx)
 }
 
 SEC("?tc")
-__failure __msg("R2 must be referenced")
+__failure __msg("arg#1 must be referenced")
 int reject_untrusted_xchg(struct __sk_buff *ctx)
 {
 	struct prog_test_ref_kfunc *p;
@@ -364,7 +364,7 @@ int kptr_xchg_ref_state(struct __sk_buff *ctx)
 }
 
 SEC("?tc")
-__failure __msg("Possibly NULL pointer passed to helper arg2")
+__failure __msg("Possibly NULL pointer passed to helper R2")
 int kptr_xchg_possibly_null(struct __sk_buff *ctx)
 {
 	struct prog_test_ref_kfunc *p;
diff --git a/tools/testing/selftests/bpf/progs/mem_rdonly_untrusted.c b/tools/testing/selftests/bpf/progs/mem_rdonly_untrusted.c
index 5b4453747c23..02386da6bbc3 100644
--- a/tools/testing/selftests/bpf/progs/mem_rdonly_untrusted.c
+++ b/tools/testing/selftests/bpf/progs/mem_rdonly_untrusted.c
@@ -118,7 +118,7 @@ int atomic_rmw_not_ok(void *ctx)
 SEC("socket")
 __failure
 __msg("invalid access to memory, mem_size=0 off=0 size=4")
-__msg("R1 min value is outside of the allowed memory range")
+__msg("arg#0 min value is outside of the allowed memory range")
 int kfunc_param_not_ok(void *ctx)
 {
 	int *p;
diff --git a/tools/testing/selftests/bpf/progs/nested_trust_failure.c b/tools/testing/selftests/bpf/progs/nested_trust_failure.c
index 3568ec450100..ebfc86af31f0 100644
--- a/tools/testing/selftests/bpf/progs/nested_trust_failure.c
+++ b/tools/testing/selftests/bpf/progs/nested_trust_failure.c
@@ -24,7 +24,7 @@ struct {
  */
 
 SEC("tp_btf/task_newtask")
-__failure __msg("R2 must be")
+__failure __msg("arg#1 must be")
 int BPF_PROG(test_invalid_nested_user_cpus, struct task_struct *task, u64 clone_flags)
 {
 	bpf_cpumask_test_cpu(0, task->user_cpus_ptr);
diff --git a/tools/testing/selftests/bpf/progs/res_spin_lock_fail.c b/tools/testing/selftests/bpf/progs/res_spin_lock_fail.c
index 330682a88c16..dc26c1e52320 100644
--- a/tools/testing/selftests/bpf/progs/res_spin_lock_fail.c
+++ b/tools/testing/selftests/bpf/progs/res_spin_lock_fail.c
@@ -203,7 +203,7 @@ int res_spin_lock_bad_off(struct __sk_buff *ctx)
 }
 
 SEC("?tc")
-__failure __msg("R1 doesn't have constant offset. bpf_res_spin_lock has to be at the constant offset")
+__failure __msg("arg#0 doesn't have constant offset. bpf_res_spin_lock has to be at the constant offset")
 int res_spin_lock_var_off(struct __sk_buff *ctx)
 {
 	struct arr_elem *elem;
diff --git a/tools/testing/selftests/bpf/progs/stream_fail.c b/tools/testing/selftests/bpf/progs/stream_fail.c
index 8e8249f3521c..7a88a670dee0 100644
--- a/tools/testing/selftests/bpf/progs/stream_fail.c
+++ b/tools/testing/selftests/bpf/progs/stream_fail.c
@@ -15,7 +15,7 @@ int stream_vprintk_null_arg(void *ctx)
 }
 
 SEC("syscall")
-__failure __msg("R3 type=scalar expected=")
+__failure __msg("arg#2 type=scalar expected=")
 int stream_vprintk_scalar_arg(void *ctx)
 {
 	bpf_stream_vprintk(BPF_STDOUT, "", (void *)46, 0);
diff --git a/tools/testing/selftests/bpf/progs/task_kfunc_failure.c b/tools/testing/selftests/bpf/progs/task_kfunc_failure.c
index 4c07ea193f72..055fb1d83a75 100644
--- a/tools/testing/selftests/bpf/progs/task_kfunc_failure.c
+++ b/tools/testing/selftests/bpf/progs/task_kfunc_failure.c
@@ -262,7 +262,7 @@ int BPF_PROG(task_kfunc_from_vpid_no_null_check, struct task_struct *task, u64 c
 }
 
 SEC("lsm/task_free")
-__failure __msg("R1 must be a rcu pointer")
+__failure __msg("arg#0 must be a rcu pointer")
 int BPF_PROG(task_kfunc_from_lsm_task_free, struct task_struct *task)
 {
 	struct task_struct *acquired;
@@ -313,7 +313,7 @@ int BPF_PROG(task_access_comm4, struct task_struct *task, const char *buf, bool
 }
 
 SEC("tp_btf/task_newtask")
-__failure __msg("R1 must be referenced or trusted")
+__failure __msg("arg#0 must be referenced or trusted")
 int BPF_PROG(task_kfunc_release_in_map, struct task_struct *task, u64 clone_flags)
 {
 	struct task_struct *local;
diff --git a/tools/testing/selftests/bpf/progs/verifier_cgroup_storage.c b/tools/testing/selftests/bpf/progs/verifier_cgroup_storage.c
index 9a13f5c11ac7..e96d632fc1d8 100644
--- a/tools/testing/selftests/bpf/progs/verifier_cgroup_storage.c
+++ b/tools/testing/selftests/bpf/progs/verifier_cgroup_storage.c
@@ -149,7 +149,7 @@ __naked void invalid_cgroup_storage_access_5(void)
 SEC("cgroup/skb")
 __description("invalid cgroup storage access 6")
 __failure __msg("get_local_storage() doesn't support non-zero flags")
-__msg_unpriv("R2 leaks addr into helper function")
+__msg_unpriv("arg#1 leaks addr into helper function")
 __naked void invalid_cgroup_storage_access_6(void)
 {
 	asm volatile ("					\
@@ -288,7 +288,7 @@ __naked void cpu_cgroup_storage_access_5(void)
 SEC("cgroup/skb")
 __description("invalid per-cpu cgroup storage access 6")
 __failure __msg("get_local_storage() doesn't support non-zero flags")
-__msg_unpriv("R2 leaks addr into helper function")
+__msg_unpriv("arg#1 leaks addr into helper function")
 __naked void cpu_cgroup_storage_access_6(void)
 {
 	asm volatile ("					\
diff --git a/tools/testing/selftests/bpf/progs/verifier_ctx.c b/tools/testing/selftests/bpf/progs/verifier_ctx.c
index 7856dad3d1f3..86f0cf1f1dca 100644
--- a/tools/testing/selftests/bpf/progs/verifier_ctx.c
+++ b/tools/testing/selftests/bpf/progs/verifier_ctx.c
@@ -844,7 +844,7 @@ int syscall_ctx_kfunc_zero_sized(void *ctx)
 	}								\
 	SEC("?" type)							\
 	__description(type ": reject kfunc zero-sized ctx access")	\
-	__failure __msg("R1 type=ctx expected=fp")			\
+	__failure __msg("arg#0 type=ctx expected=fp")			\
 	int no_rewrite_##name##_kfunc_zero(void *ctx)			\
 	{								\
 		bpf_kfunc_call_test_mem_len_pass1(ctx, 0);		\
diff --git a/tools/testing/selftests/bpf/progs/verifier_ref_tracking.c b/tools/testing/selftests/bpf/progs/verifier_ref_tracking.c
index 910365201f68..630f40ac9e5a 100644
--- a/tools/testing/selftests/bpf/progs/verifier_ref_tracking.c
+++ b/tools/testing/selftests/bpf/progs/verifier_ref_tracking.c
@@ -1288,7 +1288,7 @@ l1_%=:	r1 = r6;					\
 
 SEC("tc")
 __description("reference tracking: bpf_sk_release(listen_sk)")
-__failure __msg("R1 must be referenced when passed to release function")
+__failure __msg("arg#0 must be referenced when passed to release function")
 __naked void bpf_sk_release_listen_sk(void)
 {
 	asm volatile (
diff --git a/tools/testing/selftests/bpf/progs/verifier_sock.c b/tools/testing/selftests/bpf/progs/verifier_sock.c
index a2132c72d3b8..45f44a5d9b60 100644
--- a/tools/testing/selftests/bpf/progs/verifier_sock.c
+++ b/tools/testing/selftests/bpf/progs/verifier_sock.c
@@ -603,7 +603,7 @@ l2_%=:	r0 = *(u32*)(r0 + %[bpf_tcp_sock_snd_cwnd]);	\
 
 SEC("tc")
 __description("bpf_sk_release(skb->sk)")
-__failure __msg("R1 must be referenced when passed to release function")
+__failure __msg("arg#0 must be referenced when passed to release function")
 __naked void bpf_sk_release_skb_sk(void)
 {
 	asm volatile ("					\
@@ -620,7 +620,7 @@ l0_%=:	r0 = 0;						\
 
 SEC("tc")
 __description("bpf_sk_release(bpf_sk_fullsock(skb->sk))")
-__failure __msg("R1 must be referenced when passed to release function")
+__failure __msg("arg#0 must be referenced when passed to release function")
 __naked void bpf_sk_fullsock_skb_sk(void)
 {
 	asm volatile ("					\
@@ -644,7 +644,7 @@ l1_%=:	r1 = r0;					\
 
 SEC("tc")
 __description("bpf_sk_release(bpf_tcp_sock(skb->sk))")
-__failure __msg("R1 must be referenced when passed to release function")
+__failure __msg("arg#0 must be referenced when passed to release function")
 __naked void bpf_tcp_sock_skb_sk(void)
 {
 	asm volatile ("					\
diff --git a/tools/testing/selftests/bpf/progs/verifier_unpriv.c b/tools/testing/selftests/bpf/progs/verifier_unpriv.c
index c16f8382cf17..97eab23d7480 100644
--- a/tools/testing/selftests/bpf/progs/verifier_unpriv.c
+++ b/tools/testing/selftests/bpf/progs/verifier_unpriv.c
@@ -175,7 +175,7 @@ __naked void check_that_printk_is_disallowed(void)
 
 SEC("socket")
 __description("unpriv: pass pointer to helper function")
-__success __failure_unpriv __msg_unpriv("R4 leaks addr")
+__success __failure_unpriv __msg_unpriv("arg#3 leaks addr")
 __retval(0)
 __naked void pass_pointer_to_helper_function(void)
 {
@@ -607,7 +607,7 @@ __naked void unpriv_partial_copy_of_pointer(void)
 
 SEC("socket")
 __description("unpriv: pass pointer to tail_call")
-__success __failure_unpriv __msg_unpriv("R3 leaks addr into helper")
+__success __failure_unpriv __msg_unpriv("arg#2 leaks addr into helper")
 __retval(0)
 __naked void pass_pointer_to_tail_call(void)
 {
diff --git a/tools/testing/selftests/bpf/progs/verifier_vfs_reject.c b/tools/testing/selftests/bpf/progs/verifier_vfs_reject.c
index 4b392c6c8fc4..b3e34c9c30a3 100644
--- a/tools/testing/selftests/bpf/progs/verifier_vfs_reject.c
+++ b/tools/testing/selftests/bpf/progs/verifier_vfs_reject.c
@@ -46,7 +46,7 @@ int BPF_PROG(get_task_exe_file_kfunc_fp)
 }
 
 SEC("lsm.s/file_open")
-__failure __msg("R1 must be referenced or trusted")
+__failure __msg("arg#0 must be referenced or trusted")
 int BPF_PROG(get_task_exe_file_kfunc_untrusted)
 {
 	struct file *acquired;
@@ -98,7 +98,7 @@ int BPF_PROG(path_d_path_kfunc_null)
 }
 
 SEC("lsm.s/task_alloc")
-__failure __msg("R1 must be referenced or trusted")
+__failure __msg("arg#0 must be referenced or trusted")
 int BPF_PROG(path_d_path_kfunc_untrusted_from_argument, struct task_struct *task)
 {
 	struct path *root;
@@ -112,7 +112,7 @@ int BPF_PROG(path_d_path_kfunc_untrusted_from_argument, struct task_struct *task
 }
 
 SEC("lsm.s/file_open")
-__failure __msg("R1 must be referenced or trusted")
+__failure __msg("arg#0 must be referenced or trusted")
 int BPF_PROG(path_d_path_kfunc_untrusted_from_current)
 {
 	struct path *pwd;
@@ -128,7 +128,7 @@ int BPF_PROG(path_d_path_kfunc_untrusted_from_current)
 }
 
 SEC("lsm.s/file_open")
-__failure __msg("kernel function bpf_path_d_path args#0 expected pointer to STRUCT path but R1 has a pointer to STRUCT file")
+__failure __msg("kernel function bpf_path_d_path args#0 expected pointer to STRUCT path but has a pointer to STRUCT file")
 int BPF_PROG(path_d_path_kfunc_type_mismatch, struct file *file)
 {
 	bpf_path_d_path((struct path *)&file->f_task_work, buf, sizeof(buf));
diff --git a/tools/testing/selftests/bpf/progs/wq_failures.c b/tools/testing/selftests/bpf/progs/wq_failures.c
index 3767f5595bbc..15fff10b6892 100644
--- a/tools/testing/selftests/bpf/progs/wq_failures.c
+++ b/tools/testing/selftests/bpf/progs/wq_failures.c
@@ -48,7 +48,7 @@ __log_level(2)
 __flag(BPF_F_TEST_STATE_FREQ)
 __failure
 __msg(": (85) call bpf_wq_init#") /* anchor message */
-__msg("pointer in R2 isn't map pointer")
+__msg("pointer in arg#1 isn't map pointer")
 long test_wq_init_nomap(void *ctx)
 {
 	struct bpf_wq *wq;
@@ -147,7 +147,7 @@ SEC("tc")
 __log_level(2)
 __failure
 __msg(": (85) call bpf_wq_init#")
-__msg("R1 doesn't have constant offset. bpf_wq has to be at the constant offset")
+__msg("arg#0 doesn't have constant offset. bpf_wq has to be at the constant offset")
 long test_bad_wq_off(void *ctx)
 {
 	struct elem *val;
diff --git a/tools/testing/selftests/bpf/verifier/calls.c b/tools/testing/selftests/bpf/verifier/calls.c
index c3164b9b2be5..fdcbaf3193d4 100644
--- a/tools/testing/selftests/bpf/verifier/calls.c
+++ b/tools/testing/selftests/bpf/verifier/calls.c
@@ -132,7 +132,7 @@
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
 	.result = REJECT,
-	.errstr = "R1 must have zero offset when passed to release func",
+	.errstr = "arg#0 must have zero offset when passed to release func",
 	.fixup_kfunc_btf_id = {
 		{ "bpf_kfunc_call_test_acquire", 3 },
 		{ "bpf_kfunc_call_memb_release", 8 },
@@ -220,7 +220,7 @@
 	},
 	.result_unpriv = REJECT,
 	.result = REJECT,
-	.errstr = "R1 must have zero offset when passed to release func or trusted arg to kfunc",
+	.errstr = "arg#0 must have zero offset when passed to release func or trusted arg to kfunc",
 },
 {
 	"calls: invalid kfunc call: referenced arg needs refcounted PTR_TO_BTF_ID",
@@ -247,7 +247,7 @@
 	},
 	.result_unpriv = REJECT,
 	.result = REJECT,
-	.errstr = "R1 must be",
+	.errstr = "arg#0 must be",
 },
 {
 	"calls: valid kfunc call: referenced arg needs refcounted PTR_TO_BTF_ID",
diff --git a/tools/testing/selftests/bpf/verifier/map_kptr.c b/tools/testing/selftests/bpf/verifier/map_kptr.c
index 4b39f8472f9b..bfb3835bb68b 100644
--- a/tools/testing/selftests/bpf/verifier/map_kptr.c
+++ b/tools/testing/selftests/bpf/verifier/map_kptr.c
@@ -100,7 +100,7 @@
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
 	.fixup_map_kptr = { 1 },
 	.result = REJECT,
-	.errstr = "R1 doesn't have constant offset. kptr has to be at the constant offset",
+	.errstr = "arg#0 doesn't have constant offset. kptr has to be at the constant offset",
 },
 {
 	"map_kptr: unaligned boundary load/store",
@@ -176,7 +176,7 @@
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
 	.fixup_map_kptr = { 1 },
 	.result = REJECT,
-	.errstr = "invalid kptr access, R1 type=untrusted_ptr_prog_test_ref_kfunc expected=ptr_prog_test",
+	.errstr = "invalid kptr access, arg#0 type=untrusted_ptr_prog_test_ref_kfunc expected=ptr_prog_test",
 },
 {
 	"map_kptr: unref: loaded pointer marked as untrusted",
@@ -244,7 +244,7 @@
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
 	.fixup_map_kptr = { 1 },
 	.result = REJECT,
-	.errstr = "R1 type=untrusted_ptr_ expected=percpu_ptr_",
+	.errstr = "arg#0 type=untrusted_ptr_ expected=percpu_ptr_",
 },
 {
 	"map_kptr: unref: no reference state created",
@@ -311,7 +311,7 @@
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
 	.fixup_map_kptr = { 1 },
 	.result = REJECT,
-	.errstr = "R1 type=rcu_ptr_or_null_ expected=percpu_ptr_",
+	.errstr = "arg#0 type=rcu_ptr_or_null_ expected=percpu_ptr_",
 },
 {
 	"map_kptr: ref: reject off != 0",
@@ -342,7 +342,7 @@
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
 	.fixup_map_kptr = { 1 },
 	.result = REJECT,
-	.errstr = "invalid kptr access, R2 type=ptr_prog_test_ref_kfunc expected=ptr_prog_test_member",
+	.errstr = "invalid kptr access, arg#1 type=ptr_prog_test_ref_kfunc expected=ptr_prog_test_member",
 },
 {
 	"map_kptr: ref: reference state created and released on xchg",
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH bpf-next v4 07/18] bpf: Introduce bpf register BPF_REG_STACK_ARG_BASE
  2026-04-12  4:58 [PATCH bpf-next v4 00/18] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
                   ` (5 preceding siblings ...)
  2026-04-12  4:58 ` [PATCH bpf-next v4 06/18] bpf: Use argument index instead of register index in kfunc verifier logs Yonghong Song
@ 2026-04-12  4:59 ` Yonghong Song
  2026-04-12  4:59 ` [PATCH bpf-next v4 08/18] bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments Yonghong Song
                   ` (10 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Yonghong Song @ 2026-04-12  4:59 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Jose E . Marchesi, kernel-team, Martin KaFai Lau

The newly added register BPF_REG_STACK_ARG_BASE corresponds to bpf
register R12 added by [1]. R12 is used as the base for stack arguments
so that stack arg accesses do not interfere with the R10-based stack.

  [1] https://github.com/llvm/llvm-project/pull/189060

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 include/linux/filter.h | 3 ++-
 kernel/bpf/core.c      | 4 ++--
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index e40d4071a345..68f018dd4b9c 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -59,7 +59,8 @@ struct ctl_table_header;
 
 /* Kernel hidden auxiliary/helper register. */
 #define BPF_REG_AX		MAX_BPF_REG
-#define MAX_BPF_EXT_REG		(MAX_BPF_REG + 1)
+#define BPF_REG_STACK_ARG_BASE	(MAX_BPF_REG + 1)
+#define MAX_BPF_EXT_REG		(MAX_BPF_REG + 2)
 #define MAX_BPF_JIT_REG		MAX_BPF_EXT_REG
 
 /* unused opcode to mark special call to bpf_tail_call() helper */
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 066b86e7233c..76a4e208f34e 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1299,8 +1299,8 @@ static int bpf_jit_blind_insn(const struct bpf_insn *from,
 	u32 imm_rnd = get_random_u32();
 	s16 off;
 
-	BUILD_BUG_ON(BPF_REG_AX  + 1 != MAX_BPF_JIT_REG);
-	BUILD_BUG_ON(MAX_BPF_REG + 1 != MAX_BPF_JIT_REG);
+	BUILD_BUG_ON(BPF_REG_AX + 2 != MAX_BPF_JIT_REG);
+	BUILD_BUG_ON(BPF_REG_STACK_ARG_BASE + 1 != MAX_BPF_JIT_REG);
 
 	/* Constraints on AX register:
 	 *
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH bpf-next v4 08/18] bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments
  2026-04-12  4:58 [PATCH bpf-next v4 00/18] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
                   ` (6 preceding siblings ...)
  2026-04-12  4:59 ` [PATCH bpf-next v4 07/18] bpf: Introduce bpf register BPF_REG_STACK_ARG_BASE Yonghong Song
@ 2026-04-12  4:59 ` Yonghong Song
  2026-04-12  4:59 ` [PATCH bpf-next v4 09/18] bpf: Support stack arguments for bpf functions Yonghong Song
                   ` (9 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Yonghong Song @ 2026-04-12  4:59 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Jose E . Marchesi, kernel-team, Martin KaFai Lau

Currently, MAX_BPF_FUNC_ARGS is used for tracepoint-related progs, where
the number of parameters cannot exceed MAX_BPF_FUNC_ARGS.

Here, MAX_BPF_FUNC_ARGS is reused to limit the number of arguments for
bpf functions and kfuncs. Its current value of 12 should be sufficient
for the vast majority of bpf functions and kfuncs.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 include/linux/bpf.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 0136a108d083..b0f956be73d2 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1151,6 +1151,10 @@ struct bpf_prog_offload {
 
 /* The longest tracepoint has 12 args.
  * See include/trace/bpf_probe.h
+ *
+ * Also reuse this macro for maximum number of arguments a BPF function
+ * or a kfunc can have. Args 1-5 are passed in registers, args 6-12 via
+ * stack arg slots.
  */
 #define MAX_BPF_FUNC_ARGS 12
 
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH bpf-next v4 09/18] bpf: Support stack arguments for bpf functions
  2026-04-12  4:58 [PATCH bpf-next v4 00/18] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
                   ` (7 preceding siblings ...)
  2026-04-12  4:59 ` [PATCH bpf-next v4 08/18] bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments Yonghong Song
@ 2026-04-12  4:59 ` Yonghong Song
  2026-04-12  5:43   ` bot+bpf-ci
  2026-04-12  5:00 ` [PATCH bpf-next v4 10/18] bpf: Fix interaction between stack argument PTR_TO_STACK and dead slot poisoning Yonghong Song
                   ` (8 subsequent siblings)
  17 siblings, 1 reply; 27+ messages in thread
From: Yonghong Song @ 2026-04-12  4:59 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Jose E . Marchesi, kernel-team, Martin KaFai Lau

Currently BPF functions (subprogs) are limited to 5 register arguments.
With [1], the compiler can emit code that passes additional arguments
via a dedicated stack area through bpf register
BPF_REG_STACK_ARG_BASE (r12), introduced in the previous patch.

The compiler uses positive r12 offsets for incoming (callee-side) args
and negative r12 offsets for outgoing (caller-side) args, following the
x86_64/arm64 calling convention direction. There is an 8-byte gap at
offset 0 separating the two regions:

  Incoming (callee reads):   r12+8 (arg6), r12+16 (arg7), ...
  Outgoing (caller writes):  r12-N*8 (arg6), ..., r12-8 (last arg)

The following example shows how stack arguments are saved and
transferred between caller and callee:

  int foo(int a1, int a2, int a3, int a4, int a5, int a6, int a7) {
    int a8 = ...;
    ...
    bar(a1, a2, a3, a4, a5, a6, a7, a8);
    ...
  }

   Caller (foo)                           Callee (bar)
   ============                           ============
   Incoming (positive offsets):           Incoming (positive offsets):

   r12+8:  [incoming arg 6]               r12+8:  [incoming arg 6] <-+
   r12+16: [incoming arg 7]               r12+16: [incoming arg 7] <-|+
                                          r12+24: [incoming arg 8] <-||+
   Outgoing (negative offsets):                                      |||
   r12-24: [outgoing arg 6 to bar] -------->-------------------------+||
   r12-16: [outgoing arg 7 to bar] -------->--------------------------+|
   r12-8:  [outgoing arg 8 to bar] -------->---------------------------+

Note the reversed order: the caller's most negative outgoing offset
(arg6) maps to the callee's first positive incoming offset (arg6).
The caller stores arg6 at r12-24 (= -3*8 for 3 stack args), and
the callee reads it at r12+8.

If the bpf function has more than one call:

  int foo(int a1, int a2, int a3, int a4, int a5, int a6, int a7) {
    int a8 = ..., a9 = ...;
    ...
    bar1(a1, a2, a3, a4, a5, a6, a7, a8);
    ...
    bar2(a1, a2, a3, a4, a5, a6, a7, a8, a9);
    ...
  }

   Caller (foo)                             Callee (bar2)
   ============                             ==============
   Incoming (positive offsets):             Incoming (positive offsets):

   r12+8:  [incoming arg 6]                 r12+8:  [incoming arg 6] <+
   r12+16: [incoming arg 7]                 r12+16: [incoming arg 7] <|+
                                            r12+24: [incoming arg 8] <||+
   Outgoing for bar2 (negative offsets):    r12+32: [incoming arg 9] <|||+
   r12-32: [outgoing arg 6] ---->----------->-------------------------+|||
   r12-24: [outgoing arg 7] ---->----------->--------------------------+||
   r12-16: [outgoing arg 8] ---->----------->---------------------------+|
   r12-8:  [outgoing arg 9] ---->----------->----------------------------+

The verifier tracks stack arg slots separately from the regular r10
stack. A new 'bpf_stack_arg_state' structure mirrors the existing stack
slot tracking (spilled_ptr + slot_type[]) but lives in a dedicated
'stack_arg_slots' array in bpf_func_state. This separation keeps the
stack arg area from interfering with the normal stack and frame pointer
(r10) bookkeeping. Similar to stacksafe(), introduce stack_arg_safe()
to do pruning check.

For callback functions with stack arguments, the kernel needs to set up
the parameter types (including stack parameter types) properly, so that
callback verification can retrieve this information.

Global subprogs with >5 args are not yet supported.

  [1] https://github.com/llvm/llvm-project/pull/189060

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 include/linux/bpf.h          |   2 +
 include/linux/bpf_verifier.h |  31 +++-
 kernel/bpf/btf.c             |  14 +-
 kernel/bpf/verifier.c        | 320 ++++++++++++++++++++++++++++++++++-
 4 files changed, 355 insertions(+), 12 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index b0f956be73d2..5e061ec42940 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1666,6 +1666,8 @@ struct bpf_prog_aux {
 	u32 max_pkt_offset;
 	u32 max_tp_access;
 	u32 stack_depth;
+	u16 incoming_stack_arg_depth;
+	u16 stack_arg_depth; /* both incoming and max outgoing of stack arguments */
 	u32 id;
 	u32 func_cnt; /* used by non-func prog as the number of func progs */
 	u32 real_func_cnt; /* includes hidden progs, only used for JIT and freeing progs */
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 291f11ddd176..645a4546a57f 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -319,6 +319,11 @@ struct bpf_retval_range {
 	bool return_32bit;
 };
 
+struct bpf_stack_arg_state {
+	struct bpf_reg_state spilled_ptr; /* for spilled scalar/pointer semantics */
+	u8 slot_type[BPF_REG_SIZE];
+};
+
 /* state of the program:
  * type of all registers and stack info
  */
@@ -370,6 +375,10 @@ struct bpf_func_state {
 	 * `stack`. allocated_stack is always a multiple of BPF_REG_SIZE.
 	 */
 	int allocated_stack;
+
+	u16 stack_arg_depth; /* Size of incoming + max outgoing stack args in bytes. */
+	u16 incoming_stack_arg_depth; /* Size of incoming stack args in bytes. */
+	struct bpf_stack_arg_state *stack_arg_slots;
 };
 
 #define MAX_CALL_FRAMES 8
@@ -506,6 +515,17 @@ struct bpf_verifier_state {
 	     iter < frame->allocated_stack / BPF_REG_SIZE;		\
 	     iter++, reg = bpf_get_spilled_reg(iter, frame, mask))
 
+#define bpf_get_spilled_stack_arg(slot, frame, mask)			\
+	(((slot < frame->stack_arg_depth / BPF_REG_SIZE) &&		\
+	  ((1 << frame->stack_arg_slots[slot].slot_type[BPF_REG_SIZE - 1]) & (mask))) \
+	 ? &frame->stack_arg_slots[slot].spilled_ptr : NULL)
+
+/* Iterate over 'frame', setting 'reg' to either NULL or a spilled stack arg. */
+#define bpf_for_each_spilled_stack_arg(iter, frame, reg, mask)		\
+	for (iter = 0, reg = bpf_get_spilled_stack_arg(iter, frame, mask); \
+	     iter < frame->stack_arg_depth / BPF_REG_SIZE;		\
+	     iter++, reg = bpf_get_spilled_stack_arg(iter, frame, mask))
+
 #define bpf_for_each_reg_in_vstate_mask(__vst, __state, __reg, __mask, __expr)   \
 	({                                                               \
 		struct bpf_verifier_state *___vstate = __vst;            \
@@ -523,6 +543,11 @@ struct bpf_verifier_state {
 					continue;                        \
 				(void)(__expr);                          \
 			}                                                \
+			bpf_for_each_spilled_stack_arg(___j, __state, __reg, __mask) { \
+				if (!__reg)                              \
+					continue;                        \
+				(void)(__expr);                          \
+			}                                                \
 		}                                                        \
 	})
 
@@ -736,10 +761,12 @@ struct bpf_subprog_info {
 	bool keep_fastcall_stack: 1;
 	bool changes_pkt_data: 1;
 	bool might_sleep: 1;
-	u8 arg_cnt:3;
+	u8 arg_cnt:4;
 
 	enum priv_stack_mode priv_stack_mode;
-	struct bpf_subprog_arg_info args[MAX_BPF_FUNC_REG_ARGS];
+	struct bpf_subprog_arg_info args[MAX_BPF_FUNC_ARGS];
+	u16 incoming_stack_arg_depth;
+	u16 outgoing_stack_arg_depth;
 };
 
 struct bpf_verifier_env;
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index a62d78581207..c5f3aa05d5a3 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -7887,13 +7887,19 @@ int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog)
 	}
 	args = (const struct btf_param *)(t + 1);
 	nargs = btf_type_vlen(t);
-	if (nargs > MAX_BPF_FUNC_REG_ARGS) {
-		if (!is_global)
-			return -EINVAL;
-		bpf_log(log, "Global function %s() with %d > %d args. Buggy compiler.\n",
+	if (nargs > MAX_BPF_FUNC_ARGS) {
+		bpf_log(log, "Function %s() with %d > %d args not supported.\n",
+			tname, nargs, MAX_BPF_FUNC_ARGS);
+		return -EINVAL;
+	}
+	if (is_global && nargs > MAX_BPF_FUNC_REG_ARGS) {
+		bpf_log(log, "Global function %s() with %d > %d args not supported.\n",
 			tname, nargs, MAX_BPF_FUNC_REG_ARGS);
 		return -EINVAL;
 	}
+	if (nargs > MAX_BPF_FUNC_REG_ARGS)
+		sub->incoming_stack_arg_depth = (nargs - MAX_BPF_FUNC_REG_ARGS) * BPF_REG_SIZE;
+
 	/* check that function is void or returns int, exception cb also requires this */
 	t = btf_type_by_id(btf, t->type);
 	while (btf_type_is_modifier(t))
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 01df990f841a..e664d924e8d4 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1482,6 +1482,19 @@ static int copy_stack_state(struct bpf_func_state *dst, const struct bpf_func_st
 		return -ENOMEM;
 
 	dst->allocated_stack = src->allocated_stack;
+
+	/* copy stack_arg_slots state */
+	n = src->stack_arg_depth / BPF_REG_SIZE;
+	if (n) {
+		dst->stack_arg_slots = copy_array(dst->stack_arg_slots, src->stack_arg_slots, n,
+						  sizeof(struct bpf_stack_arg_state),
+						  GFP_KERNEL_ACCOUNT);
+		if (!dst->stack_arg_slots)
+			return -ENOMEM;
+
+		dst->stack_arg_depth = src->stack_arg_depth;
+		dst->incoming_stack_arg_depth = src->incoming_stack_arg_depth;
+	}
 	return 0;
 }
 
@@ -1523,6 +1536,25 @@ static int grow_stack_state(struct bpf_verifier_env *env, struct bpf_func_state
 	return 0;
 }
 
+static int grow_stack_arg_slots(struct bpf_verifier_env *env,
+				struct bpf_func_state *state, int size)
+{
+	size_t old_n = state->stack_arg_depth / BPF_REG_SIZE, n;
+
+	size = round_up(size, BPF_REG_SIZE);
+	n = size / BPF_REG_SIZE;
+	if (old_n >= n)
+		return 0;
+
+	state->stack_arg_slots = realloc_array(state->stack_arg_slots, old_n, n,
+					       sizeof(struct bpf_stack_arg_state));
+	if (!state->stack_arg_slots)
+		return -ENOMEM;
+
+	state->stack_arg_depth = size;
+	return 0;
+}
+
 /* Acquire a pointer id from the env and update the state->refs to include
  * this new pointer reference.
  * On success, returns a valid pointer id to associate with the register
@@ -1693,6 +1725,7 @@ static void free_func_state(struct bpf_func_state *state)
 {
 	if (!state)
 		return;
+	kfree(state->stack_arg_slots);
 	kfree(state->stack);
 	kfree(state);
 }
@@ -5940,6 +5973,119 @@ static int check_stack_write(struct bpf_verifier_env *env,
 	return err;
 }
 
+/* Validate that a stack arg access is 8-byte sized and aligned. */
+static int check_stack_arg_access(struct bpf_verifier_env *env,
+				  struct bpf_insn *insn, const char *op)
+{
+	int size = bpf_size_to_bytes(BPF_SIZE(insn->code));
+
+	if (size != BPF_REG_SIZE) {
+		verbose(env, "stack arg %s must be %d bytes, got %d\n",
+			op, BPF_REG_SIZE, size);
+		return -EINVAL;
+	}
+	if (insn->off == 0 || insn->off % BPF_REG_SIZE) {
+		verbose(env, "stack arg %s offset %d not aligned to %d\n",
+			op, insn->off, BPF_REG_SIZE);
+		return -EINVAL;
+	}
+	/* Reads use positive offsets (incoming), writes use negative (outgoing) */
+	if (op[0] == 'r' && insn->off < 0) {
+		verbose(env, "stack arg read must use positive offset, got %d\n",
+			insn->off);
+		return -EINVAL;
+	}
+	if (op[0] == 'w' && insn->off > 0) {
+		verbose(env, "stack arg write must use negative offset, got %d\n",
+			insn->off);
+		return -EINVAL;
+	}
+	return 0;
+}
+
+/* Check that a stack arg slot has been properly initialized. */
+static bool is_stack_arg_slot_initialized(struct bpf_func_state *state, int spi)
+{
+	u8 type;
+
+	if (spi >= (int)(state->stack_arg_depth / BPF_REG_SIZE))
+		return false;
+	type = state->stack_arg_slots[spi].slot_type[BPF_REG_SIZE - 1];
+	return type == STACK_SPILL || type == STACK_MISC;
+}
+
+/*
+ * Write a value to the outgoing stack arg area.
+ * off is a negative offset from r12 (e.g. -8 for the last outgoing arg).
+ * Callers ensure off < 0, 8-byte aligned, and size is BPF_REG_SIZE.
+ */
+static int check_stack_arg_write(struct bpf_verifier_env *env, struct bpf_func_state *state,
+				 int off, int value_regno)
+{
+	int incoming_slots = state->incoming_stack_arg_depth / BPF_REG_SIZE;
+	int spi = incoming_slots + (-off / BPF_REG_SIZE - 1);
+	struct bpf_subprog_info *subprog;
+	struct bpf_func_state *cur;
+	struct bpf_reg_state *reg;
+	int i, err;
+	u8 type;
+
+	err = grow_stack_arg_slots(env, state, state->incoming_stack_arg_depth + (-off));
+	if (err)
+		return err;
+
+	/* Ensure the JIT allocates space for the outgoing stack arg area. */
+	subprog = &env->subprog_info[state->subprogno];
+	if (-off > subprog->outgoing_stack_arg_depth)
+		subprog->outgoing_stack_arg_depth = -off;
+
+	cur = env->cur_state->frame[env->cur_state->curframe];
+	if (value_regno >= 0) {
+		reg = &cur->regs[value_regno];
+		state->stack_arg_slots[spi].spilled_ptr = *reg;
+		type = is_spillable_regtype(reg->type) ? STACK_SPILL : STACK_MISC;
+		for (i = 0; i < BPF_REG_SIZE; i++)
+			state->stack_arg_slots[spi].slot_type[i] = type;
+	} else {
+		/* BPF_ST: store immediate, treat as scalar */
+		reg = &state->stack_arg_slots[spi].spilled_ptr;
+		reg->type = SCALAR_VALUE;
+		__mark_reg_known(reg, env->prog->insnsi[env->insn_idx].imm);
+		for (i = 0; i < BPF_REG_SIZE; i++)
+			state->stack_arg_slots[spi].slot_type[i] = STACK_MISC;
+	}
+	return 0;
+}
+
+/*
+ * Read a value from the incoming stack arg area.
+ * off is a positive offset from r12 (e.g. +8 for arg6, +16 for arg7).
+ * Callers ensure off > 0, 8-byte aligned, and size is BPF_REG_SIZE.
+ */
+static int check_stack_arg_read(struct bpf_verifier_env *env, struct bpf_func_state *state,
+				int off, int dst_regno)
+{
+	int spi = off / BPF_REG_SIZE - 1;
+	struct bpf_func_state *cur;
+	u8 *stype;
+
+	if (off > state->incoming_stack_arg_depth) {
+		verbose(env, "invalid read from stack arg off %d depth %d\n",
+			off, state->incoming_stack_arg_depth);
+		return -EACCES;
+	}
+
+	stype = state->stack_arg_slots[spi].slot_type;
+	cur = env->cur_state->frame[env->cur_state->curframe];
+
+	if (stype[BPF_REG_SIZE - 1] == STACK_SPILL)
+		copy_register_state(&cur->regs[dst_regno],
+				    &state->stack_arg_slots[spi].spilled_ptr);
+	else
+		mark_reg_unknown(env, cur->regs, dst_regno);
+	return 0;
+}
+
 static int check_map_access_type(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
 				 int off, int size, enum bpf_access_type type)
 {
@@ -8136,10 +8282,23 @@ static int check_load_mem(struct bpf_verifier_env *env, struct bpf_insn *insn,
 			  bool strict_alignment_once, bool is_ldsx,
 			  bool allow_trust_mismatch, const char *ctx)
 {
+	struct bpf_verifier_state *vstate = env->cur_state;
+	struct bpf_func_state *state = vstate->frame[vstate->curframe];
 	struct bpf_reg_state *regs = cur_regs(env);
 	enum bpf_reg_type src_reg_type;
 	int err;
 
+	/* Handle stack arg access */
+	if (insn->src_reg == BPF_REG_STACK_ARG_BASE) {
+		err = check_reg_arg(env, insn->dst_reg, DST_OP_NO_MARK);
+		if (err)
+			return err;
+		err = check_stack_arg_access(env, insn, "read");
+		if (err)
+			return err;
+		return check_stack_arg_read(env, state, insn->off, insn->dst_reg);
+	}
+
 	/* check src operand */
 	err = check_reg_arg(env, insn->src_reg, SRC_OP);
 	if (err)
@@ -8168,10 +8327,23 @@ static int check_load_mem(struct bpf_verifier_env *env, struct bpf_insn *insn,
 static int check_store_reg(struct bpf_verifier_env *env, struct bpf_insn *insn,
 			   bool strict_alignment_once)
 {
+	struct bpf_verifier_state *vstate = env->cur_state;
+	struct bpf_func_state *state = vstate->frame[vstate->curframe];
 	struct bpf_reg_state *regs = cur_regs(env);
 	enum bpf_reg_type dst_reg_type;
 	int err;
 
+	/* Handle stack arg write */
+	if (insn->dst_reg == BPF_REG_STACK_ARG_BASE) {
+		err = check_reg_arg(env, insn->src_reg, SRC_OP);
+		if (err)
+			return err;
+		err = check_stack_arg_access(env, insn, "write");
+		if (err)
+			return err;
+		return check_stack_arg_write(env, state, insn->off, insn->src_reg);
+	}
+
 	/* check src1 operand */
 	err = check_reg_arg(env, insn->src_reg, SRC_OP);
 	if (err)
@@ -10881,7 +11053,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
 	/* check that BTF function arguments match actual types that the
 	 * verifier sees.
 	 */
-	for (i = 0; i < sub->arg_cnt; i++) {
+	for (i = 0; i < min_t(u32, sub->arg_cnt, MAX_BPF_FUNC_REG_ARGS); i++) {
 		u32 regno = i + 1;
 		struct bpf_reg_state *reg = &regs[regno];
 		struct bpf_subprog_arg_info *arg = &sub->args[i];
@@ -11067,8 +11239,10 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 			   int *insn_idx)
 {
 	struct bpf_verifier_state *state = env->cur_state;
+	struct bpf_subprog_info *caller_info;
 	struct bpf_func_state *caller;
 	int err, subprog, target_insn;
+	u16 callee_incoming;
 
 	target_insn = *insn_idx + insn->imm + 1;
 	subprog = bpf_find_subprog(env, target_insn);
@@ -11120,6 +11294,15 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		return 0;
 	}
 
+	/*
+	 * Track caller's outgoing stack arg depth (max across all callees).
+	 * This is needed so the JIT knows how much stack arg space to allocate.
+	 */
+	caller_info = &env->subprog_info[caller->subprogno];
+	callee_incoming = env->subprog_info[subprog].incoming_stack_arg_depth;
+	if (callee_incoming > caller_info->outgoing_stack_arg_depth)
+		caller_info->outgoing_stack_arg_depth = callee_incoming;
+
 	/* for regular function entry setup new frame and continue
 	 * from that frame.
 	 */
@@ -11173,13 +11356,61 @@ static int set_callee_state(struct bpf_verifier_env *env,
 			    struct bpf_func_state *caller,
 			    struct bpf_func_state *callee, int insn_idx)
 {
-	int i;
+	struct bpf_subprog_info *callee_info;
+	int i, err;
 
 	/* copy r1 - r5 args that callee can access.  The copy includes parent
 	 * pointers, which connects us up to the liveness chain
 	 */
 	for (i = BPF_REG_1; i <= BPF_REG_5; i++)
 		callee->regs[i] = caller->regs[i];
+
+	/*
+	 * Transfer stack args from caller's outgoing area to callee's incoming area.
+	 *
+	 * Caller stores outgoing args at negative r12 offsets: -K*8 (arg6),
+	 * -(K-1)*8 (arg7), ..., -8 (last arg).  In the caller's slot array,
+	 * outgoing spi 0 (off=-8) is the *last* arg and spi K-1 (off=-K*8)
+	 * is arg6.
+	 *
+	 * Callee reads incoming args at positive r12 offsets: +8 (arg6),
+	 * +16 (arg7), ...  Incoming spi 0 is arg6.
+	 *
+	 * So the transfer reverses: callee spi i = caller outgoing spi (K-1-i).
+	 */
+	callee_info = &env->subprog_info[callee->subprogno];
+	if (callee_info->incoming_stack_arg_depth) {
+		int caller_incoming_slots = caller->incoming_stack_arg_depth / BPF_REG_SIZE;
+		int callee_incoming_slots = callee_info->incoming_stack_arg_depth / BPF_REG_SIZE;
+
+		callee->incoming_stack_arg_depth = callee_info->incoming_stack_arg_depth;
+		err = grow_stack_arg_slots(env, callee, callee_info->incoming_stack_arg_depth);
+		if (err)
+			return err;
+
+		for (i = 0; i < callee_incoming_slots; i++) {
+			int caller_spi = caller_incoming_slots +
+					 (callee_incoming_slots - 1 - i);
+
+			if (!is_stack_arg_slot_initialized(caller, caller_spi)) {
+				verbose(env, "stack arg#%d not properly initialized\n",
+					i + MAX_BPF_FUNC_REG_ARGS);
+				return -EINVAL;
+			}
+			callee->stack_arg_slots[i] = caller->stack_arg_slots[caller_spi];
+		}
+
+		/* Invalidate caller's outgoing slots -- they have been consumed
+		 * by the callee. This ensures the verifier requires fresh
+		 * initialization before each subsequent call.
+		 */
+		for (i = 0; i < callee_incoming_slots; i++) {
+			int caller_spi = i + caller_incoming_slots;
+
+			memset(&caller->stack_arg_slots[caller_spi], 0,
+			       sizeof(caller->stack_arg_slots[caller_spi]));
+		}
+	}
 	return 0;
 }
 
@@ -20565,6 +20796,60 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
 	return true;
 }
 
+/*
+ * Compare stack arg slots between old and current states.
+ * Both incoming and outgoing slots are compared. Outgoing slots are
+ * transient (written before each call, consumed at the call site), but
+ * their contents must still match for two states to be equivalent.
+ */
+static bool stack_arg_safe(struct bpf_verifier_env *env, struct bpf_func_state *old,
+			   struct bpf_func_state *cur, struct bpf_idmap *idmap,
+			   enum exact_level exact)
+{
+	int i, spi;
+
+	if (old->incoming_stack_arg_depth != cur->incoming_stack_arg_depth)
+		return false;
+
+	/* Compare both incoming and outgoing stack arg slots. */
+	if (old->stack_arg_depth != cur->stack_arg_depth)
+		return false;
+
+	for (i = 0; i < old->stack_arg_depth; i++) {
+		spi = i / BPF_REG_SIZE;
+
+		if (exact == EXACT &&
+		    old->stack_arg_slots[spi].slot_type[i % BPF_REG_SIZE] !=
+		    cur->stack_arg_slots[spi].slot_type[i % BPF_REG_SIZE])
+			return false;
+
+		if (old->stack_arg_slots[spi].slot_type[i % BPF_REG_SIZE] == STACK_INVALID)
+			continue;
+
+		if (old->stack_arg_slots[spi].slot_type[i % BPF_REG_SIZE] !=
+		    cur->stack_arg_slots[spi].slot_type[i % BPF_REG_SIZE])
+			return false;
+
+		if (i % BPF_REG_SIZE != BPF_REG_SIZE - 1)
+			continue;
+
+		switch (old->stack_arg_slots[spi].slot_type[BPF_REG_SIZE - 1]) {
+		case STACK_SPILL:
+			if (!regsafe(env, &old->stack_arg_slots[spi].spilled_ptr,
+				     &cur->stack_arg_slots[spi].spilled_ptr, idmap, exact))
+				return false;
+			break;
+		case STACK_MISC:
+		case STACK_ZERO:
+		case STACK_INVALID:
+			continue;
+		default:
+			return false;
+		}
+	}
+	return true;
+}
+
 static bool refsafe(struct bpf_verifier_state *old, struct bpf_verifier_state *cur,
 		    struct bpf_idmap *idmap)
 {
@@ -20656,6 +20941,9 @@ static bool func_states_equal(struct bpf_verifier_env *env, struct bpf_func_stat
 	if (!stacksafe(env, old, cur, &env->idmap_scratch, exact))
 		return false;
 
+	if (!stack_arg_safe(env, old, cur, &env->idmap_scratch, exact))
+		return false;
+
 	return true;
 }
 
@@ -21545,6 +21833,17 @@ static int do_check_insn(struct bpf_verifier_env *env, bool *do_print_state)
 		return check_store_reg(env, insn, false);
 
 	case BPF_ST: {
+		/* Handle stack arg write (store immediate) */
+		if (insn->dst_reg == BPF_REG_STACK_ARG_BASE) {
+			struct bpf_verifier_state *vstate = env->cur_state;
+			struct bpf_func_state *state = vstate->frame[vstate->curframe];
+
+			err = check_stack_arg_access(env, insn, "write");
+			if (err)
+				return err;
+			return check_stack_arg_write(env, state, insn->off, -1);
+		}
+
 		enum bpf_reg_type dst_reg_type;
 
 		err = check_reg_arg(env, insn->dst_reg, SRC_OP);
@@ -22383,11 +22682,11 @@ static int check_and_resolve_insns(struct bpf_verifier_env *env)
 		return err;
 
 	for (i = 0; i < insn_cnt; i++, insn++) {
-		if (insn->dst_reg >= MAX_BPF_REG) {
+		if (insn->dst_reg >= MAX_BPF_REG && insn->dst_reg != BPF_REG_STACK_ARG_BASE) {
 			verbose(env, "R%d is invalid\n", insn->dst_reg);
 			return -EINVAL;
 		}
-		if (insn->src_reg >= MAX_BPF_REG) {
+		if (insn->src_reg >= MAX_BPF_REG && insn->src_reg != BPF_REG_STACK_ARG_BASE) {
 			verbose(env, "R%d is invalid\n", insn->src_reg);
 			return -EINVAL;
 		}
@@ -23414,8 +23713,14 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 	int err, num_exentries;
 	int old_len, subprog_start_adjustment = 0;
 
-	if (env->subprog_cnt <= 1)
+	if (env->subprog_cnt <= 1) {
+		/*
+		 * Even without subprogs, kfunc calls with >5 args need stack arg space
+		 * allocated by the root program.
+		 */
+		prog->aux->stack_arg_depth = env->subprog_info[0].outgoing_stack_arg_depth;
 		return 0;
+	}
 
 	for (i = 0, insn = prog->insnsi; i < prog->len; i++, insn++) {
 		if (!bpf_pseudo_func(insn) && !bpf_pseudo_call(insn))
@@ -23505,6 +23810,9 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 
 		func[i]->aux->name[0] = 'F';
 		func[i]->aux->stack_depth = env->subprog_info[i].stack_depth;
+		func[i]->aux->incoming_stack_arg_depth = env->subprog_info[i].incoming_stack_arg_depth;
+		func[i]->aux->stack_arg_depth = env->subprog_info[i].incoming_stack_arg_depth +
+						env->subprog_info[i].outgoing_stack_arg_depth;
 		if (env->subprog_info[i].priv_stack_mode == PRIV_STACK_ADAPTIVE)
 			func[i]->aux->jits_use_priv_stack = true;
 
@@ -25197,7 +25505,7 @@ static int do_check_common(struct bpf_verifier_env *env, int subprog)
 				goto out;
 			}
 		}
-		for (i = BPF_REG_1; i <= sub->arg_cnt; i++) {
+		for (i = BPF_REG_1; i <= min_t(u32, sub->arg_cnt, MAX_BPF_FUNC_REG_ARGS); i++) {
 			arg = &sub->args[i - BPF_REG_1];
 			reg = &regs[i];
 
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH bpf-next v4 10/18] bpf: Fix interaction between stack argument PTR_TO_STACK and dead slot poisoning
  2026-04-12  4:58 [PATCH bpf-next v4 00/18] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
                   ` (8 preceding siblings ...)
  2026-04-12  4:59 ` [PATCH bpf-next v4 09/18] bpf: Support stack arguments for bpf functions Yonghong Song
@ 2026-04-12  5:00 ` Yonghong Song
  2026-04-12  5:43   ` bot+bpf-ci
  2026-04-12  5:00 ` [PATCH bpf-next v4 11/18] bpf: Reject stack arguments in non-JITed programs Yonghong Song
                   ` (7 subsequent siblings)
  17 siblings, 1 reply; 27+ messages in thread
From: Yonghong Song @ 2026-04-12  5:00 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Jose E . Marchesi, kernel-team, Martin KaFai Lau

The "poison dead stack slots" mechanism (commit 2cb27158adb3) uses
static liveness analysis to identify dead stack slots and poisons them
as a safety check. However, the static liveness pass cannot track
indirect stack references through pointers passed via stack arguments.

For register-passed PTR_TO_STACK (e.g., R1 = fp-8 passed to a static
subprog), the liveness abstract tracker carries frame/offset info
through registers. When the callee dereferences R1, the tracker
attributes the read to the parent frame's stack slot, correctly marking
it alive. So no poisoning issue arises.

For stack-argument-passed PTR_TO_STACK (e.g., fp-8 stored via
*(r12-8) = r1), the value goes through BPF_REG_STACK_ARG_BASE (r12)
which the liveness pass does not track. When the callee loads the
pointer from its incoming stack arg and dereferences it, the liveness
pass cannot attribute the read back to the parent frame. The parent's
stack slot is determined dead and poisoned before the callee even
starts. The callee's subsequent dereference then fails with "slot
poisoned by dead code elimination".

Fix this by allowing STACK_POISON reads in check_stack_read_fixed_off()
when the read targets a parent frame's stack (reg_state != state).
Same-frame STACK_POISON reads remain rejected to preserve the safety
check for real liveness bugs. Cross-frame reads are safe to allow
because:
  - The pointer to the parent's stack was already validated by the
    verifier.
  - The slot contained valid data before being (incorrectly) poisoned.
  - The read returns an unknown scalar, which is conservative.
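
As an illustration, here is a plain-C analogue of the failing pattern (the function names are ours, not from the selftests): a static callee with more than 5 arguments receives a pointer into the caller's frame via its 6th argument, which in BPF travels through the untracked r12 stack arg area.

```c
/*
 * Plain-C analogue of the failing BPF pattern.  In BPF, storing 'p'
 * as the 6th argument goes through r12 (*(r12 - 8) = r1), which the
 * static liveness pass cannot track, so the caller's slot holding 'v'
 * was wrongly poisoned before callee() even started.
 */
static __attribute__((noinline))
long callee(long a1, long a2, long a3, long a4, long a5, long *p)
{
	/* In BPF, this dereference failed with "slot poisoned by
	 * dead code elimination".
	 */
	return a1 + a2 + a3 + a4 + a5 + *p;
}

long caller(void)
{
	long v = 42;	/* lives in the caller's frame (fp-8 in BPF) */

	return callee(1, 2, 3, 4, 5, &v);
}
```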

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 kernel/bpf/verifier.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index e664d924e8d4..bfeecd73e66e 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5764,6 +5764,13 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
 					}
 					if (type == STACK_INVALID && env->allow_uninit_stack)
 						continue;
+					/*
+					 * Cross-frame reads may hit slots poisoned by dead code elimination.
+					 * Static liveness can't track indirect references through pointers,
+					 * so allow the read conservatively.
+					 */
+					if (type == STACK_POISON && reg_state != state)
+						continue;
 					if (type == STACK_POISON) {
 						verbose(env, "reading from stack off %d+%d size %d, slot poisoned by dead code elimination\n",
 							off, i, size);
@@ -5819,6 +5826,8 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env,
 				continue;
 			if (type == STACK_INVALID && env->allow_uninit_stack)
 				continue;
+			if (type == STACK_POISON && reg_state != state)
+				continue;
 			if (type == STACK_POISON) {
 				verbose(env, "reading from stack off %d+%d size %d, slot poisoned by dead code elimination\n",
 					off, i, size);
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH bpf-next v4 11/18] bpf: Reject stack arguments in non-JITed programs
  2026-04-12  4:58 [PATCH bpf-next v4 00/18] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
                   ` (9 preceding siblings ...)
  2026-04-12  5:00 ` [PATCH bpf-next v4 10/18] bpf: Fix interaction between stack argument PTR_TO_STACK and dead slot poisoning Yonghong Song
@ 2026-04-12  5:00 ` Yonghong Song
  2026-04-12  5:00 ` [PATCH bpf-next v4 12/18] bpf: Reject stack arguments if tail call reachable Yonghong Song
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Yonghong Song @ 2026-04-12  5:00 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Jose E . Marchesi, kernel-team, Martin KaFai Lau

The interpreter does not understand the BPF register r12
(BPF_REG_STACK_ARG_BASE) used for stack argument addressing, so
reject interpreter usage if stack arguments are used in either the
main program or any subprogram.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 kernel/bpf/core.c     | 3 ++-
 kernel/bpf/verifier.c | 6 ++++++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 76a4e208f34e..1bd8a19d7b61 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2555,7 +2555,8 @@ struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err)
 		goto finalize;
 
 	if (IS_ENABLED(CONFIG_BPF_JIT_ALWAYS_ON) ||
-	    bpf_prog_has_kfunc_call(fp))
+	    bpf_prog_has_kfunc_call(fp) ||
+	    fp->aux->stack_arg_depth)
 		jit_needed = true;
 
 	if (!bpf_prog_select_interpreter(fp))
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index bfeecd73e66e..20fb53ead728 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -24030,6 +24030,12 @@ static int fixup_call_args(struct bpf_verifier_env *env)
 		verbose(env, "calling kernel functions are not allowed in non-JITed programs\n");
 		return -EINVAL;
 	}
+	for (i = 0; i < env->subprog_cnt; i++) {
+		if (env->subprog_info[i].incoming_stack_arg_depth) {
+			verbose(env, "stack args are not supported in non-JITed programs\n");
+			return -EINVAL;
+		}
+	}
 	if (env->subprog_cnt > 1 && env->prog->aux->tail_call_reachable) {
 		/* When JIT fails the progs with bpf2bpf calls and tail_calls
 		 * have to be rejected, since interpreter doesn't support them yet.
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH bpf-next v4 12/18] bpf: Reject stack arguments if tail call reachable
  2026-04-12  4:58 [PATCH bpf-next v4 00/18] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
                   ` (10 preceding siblings ...)
  2026-04-12  5:00 ` [PATCH bpf-next v4 11/18] bpf: Reject stack arguments in non-JITed programs Yonghong Song
@ 2026-04-12  5:00 ` Yonghong Song
  2026-04-12  5:43   ` bot+bpf-ci
  2026-04-12  5:00 ` [PATCH bpf-next v4 13/18] bpf: Support stack arguments for kfunc calls Yonghong Song
                   ` (5 subsequent siblings)
  17 siblings, 1 reply; 27+ messages in thread
From: Yonghong Song @ 2026-04-12  5:00 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Jose E . Marchesi, kernel-team, Martin KaFai Lau

Tail calls are being deprecated, so reject stack arguments if a
tail call is reachable.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 kernel/bpf/verifier.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 20fb53ead728..45987041bb2a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -7175,6 +7175,11 @@ static int check_max_stack_depth_subprog(struct bpf_verifier_env *env, int idx,
 				verbose(env, "cannot tail call within exception cb\n");
 				return -EINVAL;
 			}
+			if (subprog[tmp].incoming_stack_arg_depth ||
+			    subprog[tmp].outgoing_stack_arg_depth) {
+				verbose(env, "tail_calls are not allowed in programs with stack args\n");
+				return -EINVAL;
+			}
 			subprog[tmp].tail_call_reachable = true;
 		}
 	if (subprog[0].tail_call_reachable)
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH bpf-next v4 13/18] bpf: Support stack arguments for kfunc calls
  2026-04-12  4:58 [PATCH bpf-next v4 00/18] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
                   ` (11 preceding siblings ...)
  2026-04-12  5:00 ` [PATCH bpf-next v4 12/18] bpf: Reject stack arguments if tail call reachable Yonghong Song
@ 2026-04-12  5:00 ` Yonghong Song
  2026-04-12  5:43   ` bot+bpf-ci
  2026-04-12  5:00 ` [PATCH bpf-next v4 14/18] bpf: Enable stack argument support for x86_64 Yonghong Song
                   ` (4 subsequent siblings)
  17 siblings, 1 reply; 27+ messages in thread
From: Yonghong Song @ 2026-04-12  5:00 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Jose E . Marchesi, kernel-team, Martin KaFai Lau

Extend the stack argument mechanism to kfunc calls, allowing kfuncs
with more than 5 parameters to receive additional arguments via the
r12-based stack arg area.

For kfuncs, the caller is a BPF program and the callee is a kernel
function. The BPF program writes outgoing args at negative r12
offsets, following the same convention as BPF-to-BPF calls:

  Outgoing: r12 - N*8 (arg6), ..., r12 - 8 (last arg)

The following is an example:

  int foo(int a1, int a2, int a3, int a4, int a5, int a6, int a7) {
    int a8 = ..., a9 = ...;
    ...
    kfunc1(a1, a2, a3, a4, a5, a6, a7, a8);
    ...
    kfunc2(a1, a2, a3, a4, a5, a6, a7, a8, a9);
    ...
  }

   Caller (foo)
   ============
   Incoming (positive offsets):
     r12+8:  [incoming arg 6]
     r12+16: [incoming arg 7]

   Outgoing for kfunc1 (negative offsets):
     r12-24: [outgoing arg 6]
     r12-16: [outgoing arg 7]
     r12-8:  [outgoing arg 8]

   Outgoing for kfunc2 (negative offsets):
     r12-32: [outgoing arg 6]
     r12-24: [outgoing arg 7]
     r12-16: [outgoing arg 8]
     r12-8:  [outgoing arg 9]
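
The outgoing offsets above follow one rule: the last argument always sits at r12 - 8, with earlier stack arguments at successively lower addresses down to arg 6. A minimal sketch of that rule (the helper name is ours, not from the patch):

```c
#include <assert.h>

#define MAX_BPF_FUNC_REG_ARGS	5
#define BPF_REG_SIZE		8

/*
 * r12-relative offset of an outgoing stack argument.  'argno' is
 * 1-based, so arg 6 is the first stack argument.  The last argument
 * (argno == nargs) lands at r12 - 8; each earlier stack argument sits
 * BPF_REG_SIZE lower.
 */
static int outgoing_stack_arg_off(int nargs, int argno)
{
	assert(argno > MAX_BPF_FUNC_REG_ARGS && argno <= nargs);
	return -(nargs - argno + 1) * BPF_REG_SIZE;
}
```

For the 8-argument kfunc1() above this yields -24/-16/-8 for args 6/7/8, and for the 9-argument kfunc2() it yields -32/-24/-16/-8, matching the layout shown.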

The JIT will later marshal these outgoing arguments into the native
calling convention for kfunc1() and kfunc2().

There are two places where meta->release_regno must record a regno so
the reference can be released later. Similarly,
'cur_aux(env)->arg_prog = regno' records a regno for a later fixup.
Stack arguments have no regno (they correspond to arg numbers greater
than 5), so these three cases are rejected for now when the argument
is passed on the stack. Where possible, new kfuncs should keep such
arguments within the first 5 registers so the restriction never
applies.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 kernel/bpf/verifier.c | 104 ++++++++++++++++++++++++++++++++++--------
 1 file changed, 85 insertions(+), 19 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 45987041bb2a..206ffbd9596d 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -8885,8 +8885,6 @@ static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env, struct bpf_reg
 	struct bpf_call_arg_meta meta;
 	int err;
 
-	WARN_ON_ONCE(mem_argno > BPF_REG_3);
-
 	memset(&meta, 0, sizeof(meta));
 
 	if (may_be_null) {
@@ -13163,6 +13161,20 @@ static bool is_kfunc_pkt_changing(struct bpf_kfunc_call_arg_meta *meta)
 	return meta->func_id == special_kfunc_list[KF_bpf_xdp_pull_data];
 }
 
+static struct bpf_reg_state *get_kfunc_arg_reg(struct bpf_verifier_env *env,
+					       int argno, int nargs)
+{
+	struct bpf_func_state *caller;
+	int spi;
+
+	if (argno < MAX_BPF_FUNC_REG_ARGS)
+		return &cur_regs(env)[argno + 1];
+
+	caller = cur_func(env);
+	spi = caller->incoming_stack_arg_depth / BPF_REG_SIZE + (nargs - 1 - argno);
+	return &caller->stack_arg_slots[spi].spilled_ptr;
+}
+
 static enum kfunc_ptr_arg_type
 get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
 		       struct bpf_kfunc_call_arg_meta *meta,
@@ -13170,8 +13182,6 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
 		       const char *ref_tname, const struct btf_param *args,
 		       int argno, int nargs, struct bpf_reg_state *reg)
 {
-	u32 regno = argno + 1;
-	struct bpf_reg_state *regs = cur_regs(env);
 	bool arg_mem_size = false;
 
 	if (meta->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx] ||
@@ -13180,8 +13190,8 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
 		return KF_ARG_PTR_TO_CTX;
 
 	if (argno + 1 < nargs &&
-	    (is_kfunc_arg_mem_size(meta->btf, &args[argno + 1], &regs[regno + 1]) ||
-	     is_kfunc_arg_const_mem_size(meta->btf, &args[argno + 1], &regs[regno + 1])))
+	    (is_kfunc_arg_mem_size(meta->btf, &args[argno + 1], get_kfunc_arg_reg(env, argno + 1, nargs)) ||
+	     is_kfunc_arg_const_mem_size(meta->btf, &args[argno + 1], get_kfunc_arg_reg(env, argno + 1, nargs))))
 		arg_mem_size = true;
 
 	/* In this function, we verify the kfunc's BTF as per the argument type,
@@ -13848,9 +13858,9 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 
 	args = (const struct btf_param *)(meta->func_proto + 1);
 	nargs = btf_type_vlen(meta->func_proto);
-	if (nargs > MAX_BPF_FUNC_REG_ARGS) {
+	if (nargs > MAX_BPF_FUNC_ARGS) {
 		verbose(env, "Function %s has %d > %d args\n", func_name, nargs,
-			MAX_BPF_FUNC_REG_ARGS);
+			MAX_BPF_FUNC_ARGS);
 		return -EINVAL;
 	}
 
@@ -13858,19 +13868,42 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 	 * verifier sees.
 	 */
 	for (i = 0; i < nargs; i++) {
-		struct bpf_reg_state *regs = cur_regs(env), *reg = &regs[i + 1];
+		struct bpf_reg_state *regs = cur_regs(env), *reg;
 		const struct btf_type *t, *ref_t, *resolve_ret;
 		enum bpf_arg_type arg_type = ARG_DONTCARE;
-		u32 regno = i + 1, ref_id, type_size;
+		struct bpf_reg_state tmp_reg;
+		int regno = i + 1;
+		u32 ref_id, type_size;
 		bool is_ret_buf_sz = false;
 		int kf_arg_type;
 
+		if (i < MAX_BPF_FUNC_REG_ARGS) {
+			reg = &regs[i + 1];
+		} else {
+			/* Retrieve the spilled reg state from the stack arg slot. */
+			struct bpf_func_state *caller = cur_func(env);
+			int spi = caller->incoming_stack_arg_depth / BPF_REG_SIZE + (nargs - 1 - i);
+
+			if (!is_stack_arg_slot_initialized(caller, spi)) {
+				verbose(env, "stack arg#%d not properly initialized\n", i);
+				return -EINVAL;
+			}
+
+			tmp_reg = caller->stack_arg_slots[spi].spilled_ptr;
+			reg = &tmp_reg;
+			regno = -1;
+		}
+
 		if (is_kfunc_arg_prog_aux(btf, &args[i])) {
 			/* Reject repeated use bpf_prog_aux */
 			if (meta->arg_prog) {
 				verifier_bug(env, "Only 1 prog->aux argument supported per-kfunc");
 				return -EFAULT;
 			}
+			if (regno < 0) {
+				verbose(env, "arg#%d prog->aux cannot be a stack argument\n", i);
+				return -EINVAL;
+			}
 			meta->arg_prog = true;
 			cur_aux(env)->arg_prog = regno;
 			continue;
@@ -13896,9 +13929,11 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 					verbose(env, "arg#%d must be a known constant\n", i);
 					return -EINVAL;
 				}
-				ret = mark_chain_precision(env, regno);
-				if (ret < 0)
-					return ret;
+				if (regno > 0) {
+					ret = mark_chain_precision(env, regno);
+					if (ret < 0)
+						return ret;
+				}
 				meta->arg_constant.found = true;
 				meta->arg_constant.value = reg->var_off.value;
 			} else if (is_kfunc_arg_scalar_with_name(btf, &args[i], "rdonly_buf_size")) {
@@ -13920,9 +13955,11 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				}
 
 				meta->r0_size = reg->var_off.value;
-				ret = mark_chain_precision(env, regno);
-				if (ret)
-					return ret;
+				if (regno > 0) {
+					ret = mark_chain_precision(env, regno);
+					if (ret)
+						return ret;
+				}
 			}
 			continue;
 		}
@@ -13946,8 +13983,13 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				return -EFAULT;
 			}
 			meta->ref_obj_id = reg->ref_obj_id;
-			if (is_kfunc_release(meta))
+			if (is_kfunc_release(meta)) {
+				if (regno < 0) {
+					verbose(env, "arg#%d release arg cannot be a stack argument\n", i);
+					return -EINVAL;
+				}
 				meta->release_regno = regno;
+			}
 		}
 
 		ref_t = btf_type_skip_modifiers(btf, t->type, &ref_id);
@@ -14100,6 +14142,10 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				dynptr_arg_type |= DYNPTR_TYPE_FILE;
 			} else if (meta->func_id == special_kfunc_list[KF_bpf_dynptr_file_discard]) {
 				dynptr_arg_type |= DYNPTR_TYPE_FILE;
+				if (regno < 0) {
+					verbose(env, "arg#%d release arg cannot be a stack argument\n", i);
+					return -EINVAL;
+				}
 				meta->release_regno = regno;
 			} else if (meta->func_id == special_kfunc_list[KF_bpf_dynptr_clone] &&
 				   (dynptr_arg_type & MEM_UNINIT)) {
@@ -14247,9 +14293,9 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			break;
 		case KF_ARG_PTR_TO_MEM_SIZE:
 		{
-			struct bpf_reg_state *buff_reg = &regs[regno];
+			struct bpf_reg_state *buff_reg = reg;
 			const struct btf_param *buff_arg = &args[i];
-			struct bpf_reg_state *size_reg = &regs[regno + 1];
+			struct bpf_reg_state *size_reg = get_kfunc_arg_reg(env, i + 1, nargs);
 			const struct btf_param *size_arg = &args[i + 1];
 
 			if (!register_is_null(buff_reg) || !is_kfunc_arg_nullable(meta->btf, buff_arg)) {
@@ -15152,6 +15198,16 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 			mark_btf_func_reg_size(env, regno, t->size);
 	}
 
+	/* Track outgoing stack arg depth for kfuncs with >5 args */
+	if (nargs > MAX_BPF_FUNC_REG_ARGS) {
+		struct bpf_func_state *caller = cur_func(env);
+		struct bpf_subprog_info *caller_info = &env->subprog_info[caller->subprogno];
+		u16 kfunc_stack_arg_depth = (nargs - MAX_BPF_FUNC_REG_ARGS) * BPF_REG_SIZE;
+
+		if (kfunc_stack_arg_depth > caller_info->outgoing_stack_arg_depth)
+			caller_info->outgoing_stack_arg_depth = kfunc_stack_arg_depth;
+	}
+
 	if (is_iter_next_kfunc(&meta)) {
 		err = process_iter_next_call(env, insn_idx, &meta);
 		if (err)
@@ -24167,6 +24223,16 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	if (!bpf_jit_supports_far_kfunc_call())
 		insn->imm = BPF_CALL_IMM(desc->addr);
 
+	/*
+	 * After resolving the kfunc address, insn->off is no longer needed
+	 * for BTF fd index. Repurpose it to store the number of stack args
+	 * so the JIT can marshal them.
+	 */
+	if (desc->func_model.nr_args > MAX_BPF_FUNC_REG_ARGS)
+		insn->off = desc->func_model.nr_args - MAX_BPF_FUNC_REG_ARGS;
+	else
+		insn->off = 0;
+
 	if (is_bpf_obj_new_kfunc(desc->func_id) || is_bpf_percpu_obj_new_kfunc(desc->func_id)) {
 		struct btf_struct_meta *kptr_struct_meta = env->insn_aux_data[insn_idx].kptr_struct_meta;
 		struct bpf_insn addr[2] = { BPF_LD_IMM64(BPF_REG_2, (long)kptr_struct_meta) };
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH bpf-next v4 14/18] bpf: Enable stack argument support for x86_64
  2026-04-12  4:58 [PATCH bpf-next v4 00/18] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
                   ` (12 preceding siblings ...)
  2026-04-12  5:00 ` [PATCH bpf-next v4 13/18] bpf: Support stack arguments for kfunc calls Yonghong Song
@ 2026-04-12  5:00 ` Yonghong Song
  2026-04-12  5:00 ` [PATCH bpf-next v4 15/18] bpf,x86: Implement JIT support for stack arguments Yonghong Song
                   ` (3 subsequent siblings)
  17 siblings, 0 replies; 27+ messages in thread
From: Yonghong Song @ 2026-04-12  5:00 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Jose E . Marchesi, kernel-team, Martin KaFai Lau

Implement bpf_jit_supports_stack_args() for x86_64, and reject BPF
functions and kfuncs that need stack arguments when the JIT does not
support them.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 arch/x86/net/bpf_jit_comp.c | 5 +++++
 include/linux/filter.h      | 1 +
 kernel/bpf/btf.c            | 9 ++++++++-
 kernel/bpf/core.c           | 5 +++++
 kernel/bpf/verifier.c       | 5 +++++
 5 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index e9b78040d703..32864dbc2c4e 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -3937,6 +3937,11 @@ bool bpf_jit_supports_kfunc_call(void)
 	return true;
 }
 
+bool bpf_jit_supports_stack_args(void)
+{
+	return true;
+}
+
 void *bpf_arch_text_copy(void *dst, void *src, size_t len)
 {
 	if (text_poke_copy(dst, src, len) == NULL)
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 68f018dd4b9c..a5035fb80a6b 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -1160,6 +1160,7 @@ bool bpf_jit_inlines_helper_call(s32 imm);
 bool bpf_jit_supports_subprog_tailcalls(void);
 bool bpf_jit_supports_percpu_insn(void);
 bool bpf_jit_supports_kfunc_call(void);
+bool bpf_jit_supports_stack_args(void);
 bool bpf_jit_supports_far_kfunc_call(void);
 bool bpf_jit_supports_exceptions(void);
 bool bpf_jit_supports_ptr_xchg(void);
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index c5f3aa05d5a3..1cbe0f2b0e41 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -20,6 +20,7 @@
 #include <linux/btf.h>
 #include <linux/btf_ids.h>
 #include <linux/bpf.h>
+#include <linux/filter.h>
 #include <linux/bpf_lsm.h>
 #include <linux/skmsg.h>
 #include <linux/perf_event.h>
@@ -7897,8 +7898,14 @@ int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog)
 			tname, nargs, MAX_BPF_FUNC_REG_ARGS);
 		return -EINVAL;
 	}
-	if (nargs > MAX_BPF_FUNC_REG_ARGS)
+	if (nargs > MAX_BPF_FUNC_REG_ARGS) {
+		if (!bpf_jit_supports_stack_args()) {
+			bpf_log(log, "JIT does not support function %s() with %d args\n",
+				tname, nargs);
+			return -ENOTSUPP;
+		}
 		sub->incoming_stack_arg_depth = (nargs - MAX_BPF_FUNC_REG_ARGS) * BPF_REG_SIZE;
+	}
 
 	/* check that function is void or returns int, exception cb also requires this */
 	t = btf_type_by_id(btf, t->type);
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 1bd8a19d7b61..124ebeb9baf9 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -3158,6 +3158,11 @@ bool __weak bpf_jit_supports_kfunc_call(void)
 	return false;
 }
 
+bool __weak bpf_jit_supports_stack_args(void)
+{
+	return false;
+}
+
 bool __weak bpf_jit_supports_far_kfunc_call(void)
 {
 	return false;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 206ffbd9596d..91c5f2942194 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -13863,6 +13863,11 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			MAX_BPF_FUNC_ARGS);
 		return -EINVAL;
 	}
+	if (nargs > MAX_BPF_FUNC_REG_ARGS && !bpf_jit_supports_stack_args()) {
+		verbose(env, "JIT does not support kfunc %s() with %d args\n",
+			func_name, nargs);
+		return -ENOTSUPP;
+	}
 
 	/* Check that BTF function arguments match actual types that the
 	 * verifier sees.
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH bpf-next v4 15/18] bpf,x86: Implement JIT support for stack arguments
  2026-04-12  4:58 [PATCH bpf-next v4 00/18] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
                   ` (13 preceding siblings ...)
  2026-04-12  5:00 ` [PATCH bpf-next v4 14/18] bpf: Enable stack argument support for x86_64 Yonghong Song
@ 2026-04-12  5:00 ` Yonghong Song
  2026-04-12  5:43   ` bot+bpf-ci
  2026-04-12  5:00 ` [PATCH bpf-next v4 16/18] selftests/bpf: Add tests for BPF function " Yonghong Song
                   ` (2 subsequent siblings)
  17 siblings, 1 reply; 27+ messages in thread
From: Yonghong Song @ 2026-04-12  5:00 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Jose E . Marchesi, kernel-team, Martin KaFai Lau

Add x86_64 JIT support for BPF functions and kfuncs with more than
5 arguments. The extra arguments are passed through a stack area
addressed by register r12 (BPF_REG_STACK_ARG_BASE) in BPF bytecode,
which the JIT translates to native code.

The JIT follows the x86-64 calling convention for both BPF-to-BPF
and kfunc calls:
  - Arg 6 is passed in the R9 register
  - Args 7+ are passed on the stack

Incoming arg 6 (BPF r12+8) is translated to a MOV from R9 rather
than a memory load. Incoming args 7+ (BPF r12+16, r12+24, ...) map
directly to [rbp + 16], [rbp + 24], ..., matching the x86-64 stack
layout after CALL + PUSH RBP, so no offset adjustment is needed.

The verifier guarantees that neither tail_call_reachable nor
priv_stack is set when outgoing stack args exist, so R9 is always
available. When BPF bytecode writes to the arg-6 stack slot
(the most negative outgoing offset), the JIT emits a MOV into R9
instead of a memory store. Outgoing args 7+ are placed at [rsp]
in a pre-allocated area below callee-saved registers, using:
  native_off = outgoing_arg_base + bpf_off

The native x86_64 stack layout:

  high address
  +-------------------------+
  | incoming stack arg N    |  [rbp + 16 + (N-2)*8]  (from caller)
  | ...                     |
  | incoming stack arg 7    |  [rbp + 16]
  +-------------------------+
  | return address          |  [rbp + 8]
  | saved rbp               |  [rbp]
  +-------------------------+
  | BPF program stack       |  (round_up(stack_depth, 8) bytes)
  +-------------------------+
  | callee-saved regs       |  (r12, rbx, r13, r14, r15 as needed)
  +-------------------------+
  | outgoing arg M          |  [rsp + (M-7)*8]
  | ...                     |
  | outgoing arg 7          |  [rsp]
  +-------------------------+  rsp
  low address

  (Arg 6 is in R9, not on the stack)
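
The translation rules above can be sketched as follows (a simplified model, not the JIT's actual code: helper names are ours, and the JIT open-codes this logic per instruction class). Loads use positive r12 offsets and stores negative ones, so the two R9 cases cannot collide.

```c
/* round up to a multiple of 8, as the JIT does for stack_depth */
static int round_up8(int x)
{
	return (x + 7) & ~7;
}

/*
 * Translate an r12-relative BPF offset to its native location.
 * Returns 0 when the access maps to the R9 register (arg 6),
 * otherwise returns 1 and sets *rbp_off to the rbp-relative offset.
 */
static int stack_arg_to_native(int bpf_off, int outgoing_depth,
			       int stack_depth, int callee_saved_size,
			       int *rbp_off)
{
	if (bpf_off == 8 || bpf_off == -outgoing_depth)
		return 0;		/* arg 6 lives in R9 */
	if (bpf_off > 0) {
		*rbp_off = bpf_off;	/* incoming 7+: [rbp + off] as-is */
		return 1;
	}
	/* outgoing 7+: below program stack and callee-saved regs */
	*rbp_off = -(round_up8(stack_depth) + callee_saved_size) + bpf_off;
	return 1;
}
```

With stack_depth = 16, two callee-saved registers (16 bytes), and three outgoing stack args (depth 24): r12+8 and r12-24 both map to R9, r12+16 stays [rbp+16], and r12-16/r12-8 become [rbp-48]/[rbp-40], i.e. [rsp] and [rsp+8].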

  [1] https://github.com/llvm/llvm-project/pull/189060

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 arch/x86/net/bpf_jit_comp.c | 172 ++++++++++++++++++++++++++++++++++--
 1 file changed, 164 insertions(+), 8 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 32864dbc2c4e..ec57b9a6b417 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -390,6 +390,34 @@ static void pop_callee_regs(u8 **pprog, bool *callee_regs_used)
 	*pprog = prog;
 }
 
+/* add rsp, depth */
+static void emit_add_rsp(u8 **pprog, u16 depth)
+{
+	u8 *prog = *pprog;
+
+	if (!depth)
+		return;
+	if (is_imm8(depth))
+		EMIT4(0x48, 0x83, 0xC4, depth); /* add rsp, imm8 */
+	else
+		EMIT3_off32(0x48, 0x81, 0xC4, depth); /* add rsp, imm32 */
+	*pprog = prog;
+}
+
+/* sub rsp, depth */
+static void emit_sub_rsp(u8 **pprog, u16 depth)
+{
+	u8 *prog = *pprog;
+
+	if (!depth)
+		return;
+	if (is_imm8(depth))
+		EMIT4(0x48, 0x83, 0xEC, depth); /* sub rsp, imm8 */
+	else
+		EMIT3_off32(0x48, 0x81, 0xEC, depth); /* sub rsp, imm32 */
+	*pprog = prog;
+}
+
 static void emit_nops(u8 **pprog, int len)
 {
 	u8 *prog = *pprog;
@@ -725,8 +753,8 @@ static void emit_return(u8 **pprog, u8 *ip)
  */
 static void emit_bpf_tail_call_indirect(struct bpf_prog *bpf_prog,
 					u8 **pprog, bool *callee_regs_used,
-					u32 stack_depth, u8 *ip,
-					struct jit_context *ctx)
+					u32 stack_depth, u16 outgoing_depth,
+					u8 *ip, struct jit_context *ctx)
 {
 	int tcc_ptr_off = BPF_TAIL_CALL_CNT_PTR_STACK_OFF(stack_depth);
 	u8 *prog = *pprog, *start = *pprog;
@@ -775,6 +803,9 @@ static void emit_bpf_tail_call_indirect(struct bpf_prog *bpf_prog,
 	/* Inc tail_call_cnt if the slot is populated. */
 	EMIT4(0x48, 0x83, 0x00, 0x01);            /* add qword ptr [rax], 1 */
 
+	/* Deallocate outgoing stack arg area. */
+	emit_add_rsp(&prog, outgoing_depth);
+
 	if (bpf_prog->aux->exception_boundary) {
 		pop_callee_regs(&prog, all_callee_regs_used);
 		pop_r12(&prog);
@@ -815,6 +846,7 @@ static void emit_bpf_tail_call_direct(struct bpf_prog *bpf_prog,
 				      struct bpf_jit_poke_descriptor *poke,
 				      u8 **pprog, u8 *ip,
 				      bool *callee_regs_used, u32 stack_depth,
+				      u16 outgoing_depth,
 				      struct jit_context *ctx)
 {
 	int tcc_ptr_off = BPF_TAIL_CALL_CNT_PTR_STACK_OFF(stack_depth);
@@ -842,6 +874,9 @@ static void emit_bpf_tail_call_direct(struct bpf_prog *bpf_prog,
 	/* Inc tail_call_cnt if the slot is populated. */
 	EMIT4(0x48, 0x83, 0x00, 0x01);                /* add qword ptr [rax], 1 */
 
+	/* Deallocate outgoing stack arg area. */
+	emit_add_rsp(&prog, outgoing_depth);
+
 	if (bpf_prog->aux->exception_boundary) {
 		pop_callee_regs(&prog, all_callee_regs_used);
 		pop_r12(&prog);
@@ -1664,16 +1699,48 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 	int i, excnt = 0;
 	int ilen, proglen = 0;
 	u8 *prog = temp;
+	u16 stack_arg_depth, incoming_stack_arg_depth, outgoing_stack_arg_depth;
+	u16 outgoing_rsp;
 	u32 stack_depth;
+	int callee_saved_size;
+	s32 outgoing_arg_base;
+	bool has_stack_args;
 	int err;
 
 	stack_depth = bpf_prog->aux->stack_depth;
+	stack_arg_depth = bpf_prog->aux->stack_arg_depth;
+	incoming_stack_arg_depth = bpf_prog->aux->incoming_stack_arg_depth;
+	outgoing_stack_arg_depth = stack_arg_depth - incoming_stack_arg_depth;
 	priv_stack_ptr = bpf_prog->aux->priv_stack_ptr;
 	if (priv_stack_ptr) {
 		priv_frame_ptr = priv_stack_ptr + PRIV_STACK_GUARD_SZ + round_up(stack_depth, 8);
 		stack_depth = 0;
 	}
 
+	/*
+	 * Follow x86-64 calling convention for both BPF-to-BPF and
+	 * kfunc calls:
+	 *   - Arg 6 is passed in R9 register
+	 *   - Args 7+ are passed on the stack at [rsp]
+	 *
+	 * Incoming arg 6 is read from R9 (BPF r12+8 → MOV from R9).
+	 * Incoming args 7+ are read from [rbp + 16], [rbp + 24], ...
+	 * (BPF r12+16, r12+24, ... map directly with no offset change).
+	 *
+	 * The verifier guarantees that neither tail_call_reachable nor
+	 * priv_stack is set when outgoing stack args exist, so R9 is
+	 * always available.
+	 *
+	 * Stack layout (high to low):
+	 *   [rbp + 16 + ...]    incoming stack args 7+ (from caller)
+	 *   [rbp + 8]           return address
+	 *   [rbp]               saved rbp
+	 *   [rbp - prog_stack]  program stack
+	 *   [below]             callee-saved regs
+	 *   [below]             outgoing args 7+ (= rsp)
+	 */
+	has_stack_args = stack_arg_depth > 0;
+
 	arena_vm_start = bpf_arena_get_kern_vm_start(bpf_prog->aux->arena);
 	user_vm_start = bpf_arena_get_user_vm_start(bpf_prog->aux->arena);
 
@@ -1700,6 +1767,41 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 			push_r12(&prog);
 		push_callee_regs(&prog, callee_regs_used);
 	}
+
+	/* Compute callee-saved register area size. */
+	callee_saved_size = 0;
+	if (bpf_prog->aux->exception_boundary || arena_vm_start)
+		callee_saved_size += 8; /* r12 */
+	if (bpf_prog->aux->exception_boundary) {
+		callee_saved_size += 4 * 8; /* rbx, r13, r14, r15 */
+	} else {
+		int j;
+
+		for (j = 0; j < 4; j++)
+			if (callee_regs_used[j])
+				callee_saved_size += 8;
+	}
+	/*
+	 * Base offset from rbp for translating BPF outgoing args 7+
+	 * to native offsets:
+	 *   native_off = outgoing_arg_base + bpf_off
+	 *
+	 * BPF outgoing offsets are negative (r12 - N*8 for arg6,
+	 * ..., r12 - 8 for last arg). Arg 6 goes to R9 directly,
+	 * so only args 7+ occupy the outgoing stack area.
+	 *
+	 * Note that tail_call_reachable is guaranteed to be false when
+	 * stack args exist, so tcc pushes need not be accounted for.
+	 */
+	outgoing_arg_base = -(round_up(stack_depth, 8) + callee_saved_size);
+
+	/*
+	 * Allocate outgoing stack arg area for args 7+ only.
+	 * Arg 6 goes into r9 register, not on stack.
+	 */
+	outgoing_rsp = outgoing_stack_arg_depth > 8 ?  outgoing_stack_arg_depth - 8 : 0;
+	emit_sub_rsp(&prog, outgoing_rsp);
+
 	if (arena_vm_start)
 		emit_mov_imm64(&prog, X86_REG_R12,
 			       arena_vm_start >> 32, (u32) arena_vm_start);
@@ -1715,13 +1817,14 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 	prog = temp;
 
 	for (i = 1; i <= insn_cnt; i++, insn++) {
+		bool adjust_stack_arg_off = false;
 		const s32 imm32 = insn->imm;
 		u32 dst_reg = insn->dst_reg;
 		u32 src_reg = insn->src_reg;
 		u8 b2 = 0, b3 = 0;
 		u8 *start_of_ldx;
 		s64 jmp_offset;
-		s16 insn_off;
+		s32 insn_off;
 		u8 jmp_cond;
 		u8 *func;
 		int nops;
@@ -1734,6 +1837,21 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 				dst_reg = X86_REG_R9;
 		}
 
+		if (has_stack_args) {
+			u8 class = BPF_CLASS(insn->code);
+
+			if (class == BPF_LDX &&
+			    src_reg == BPF_REG_STACK_ARG_BASE) {
+				src_reg = BPF_REG_FP;
+				adjust_stack_arg_off = true;
+			}
+			if ((class == BPF_STX || class == BPF_ST) &&
+			    dst_reg == BPF_REG_STACK_ARG_BASE) {
+				dst_reg = BPF_REG_FP;
+				adjust_stack_arg_off = true;
+			}
+		}
+
 		switch (insn->code) {
 			/* ALU */
 		case BPF_ALU | BPF_ADD | BPF_X:
@@ -2129,12 +2247,20 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 				EMIT1(0xC7);
 			goto st;
 		case BPF_ST | BPF_MEM | BPF_DW:
+			if (adjust_stack_arg_off && insn->off == -outgoing_stack_arg_depth) {
+				/* Arg 6: store immediate in r9 register */
+				emit_mov_imm64(&prog, X86_REG_R9, imm32 >> 31, (u32)imm32);
+				break;
+			}
 			EMIT2(add_1mod(0x48, dst_reg), 0xC7);
 
-st:			if (is_imm8(insn->off))
-				EMIT2(add_1reg(0x40, dst_reg), insn->off);
+st:			insn_off = insn->off;
+			if (adjust_stack_arg_off)
+				insn_off = outgoing_arg_base + insn_off;
+			if (is_imm8(insn_off))
+				EMIT2(add_1reg(0x40, dst_reg), insn_off);
 			else
-				EMIT1_off32(add_1reg(0x80, dst_reg), insn->off);
+				EMIT1_off32(add_1reg(0x80, dst_reg), insn_off);
 
 			EMIT(imm32, bpf_size_to_x86_bytes(BPF_SIZE(insn->code)));
 			break;
@@ -2144,7 +2270,15 @@ st:			if (is_imm8(insn->off))
 		case BPF_STX | BPF_MEM | BPF_H:
 		case BPF_STX | BPF_MEM | BPF_W:
 		case BPF_STX | BPF_MEM | BPF_DW:
-			emit_stx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn->off);
+			if (adjust_stack_arg_off && insn->off == -outgoing_stack_arg_depth) {
+				/* Arg 6: store register value in r9 */
+				EMIT_mov(X86_REG_R9, src_reg);
+				break;
+			}
+			insn_off = insn->off;
+			if (adjust_stack_arg_off)
+				insn_off = outgoing_arg_base + insn_off;
+			emit_stx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn_off);
 			break;
 
 		case BPF_ST | BPF_PROBE_MEM32 | BPF_B:
@@ -2243,6 +2377,18 @@ st:			if (is_imm8(insn->off))
 		case BPF_LDX | BPF_PROBE_MEMSX | BPF_H:
 		case BPF_LDX | BPF_PROBE_MEMSX | BPF_W:
 			insn_off = insn->off;
+			if (adjust_stack_arg_off) {
+				if (insn_off == 8) {
+					/* Incoming arg 6: read from r9 */
+					EMIT_mov(dst_reg, X86_REG_R9);
+					break;
+				}
+				/*
+				 * Incoming args 7+: native_off == bpf_off
+				 * (r12+16 -> [rbp+16], r12+24 -> [rbp+24], ...)
+				 * No offset adjustment needed.
+				 */
+			}
 
 			if (BPF_MODE(insn->code) == BPF_PROBE_MEM ||
 			    BPF_MODE(insn->code) == BPF_PROBE_MEMSX) {
@@ -2468,12 +2614,14 @@ st:			if (is_imm8(insn->off))
 							  &prog, image + addrs[i - 1],
 							  callee_regs_used,
 							  stack_depth,
+							  outgoing_rsp,
 							  ctx);
 			else
 				emit_bpf_tail_call_indirect(bpf_prog,
 							    &prog,
 							    callee_regs_used,
 							    stack_depth,
+							    outgoing_rsp,
 							    image + addrs[i - 1],
 							    ctx);
 			break;
@@ -2734,6 +2882,8 @@ st:			if (is_imm8(insn->off))
 				if (emit_spectre_bhb_barrier(&prog, ip, bpf_prog))
 					return -EINVAL;
 			}
+			/* Deallocate outgoing args 7+ area. */
+			emit_add_rsp(&prog, outgoing_rsp);
 			if (bpf_prog->aux->exception_boundary) {
 				pop_callee_regs(&prog, all_callee_regs_used);
 				pop_r12(&prog);
@@ -3757,7 +3907,13 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
 		prog->aux->jit_data = jit_data;
 	}
 	priv_stack_ptr = prog->aux->priv_stack_ptr;
-	if (!priv_stack_ptr && prog->aux->jits_use_priv_stack) {
+	/*
+	 * x86-64 uses R9 for both private stack frame pointer and
+	 * outgoing arg 6, so disable private stack when outgoing
+	 * stack args are present.
+	 */
+	if (!priv_stack_ptr && prog->aux->jits_use_priv_stack &&
+	    prog->aux->stack_arg_depth == prog->aux->incoming_stack_arg_depth) {
 		/* Allocate actual private stack size with verifier-calculated
 		 * stack size plus two memory guards to protect overflow and
 		 * underflow.
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH bpf-next v4 16/18] selftests/bpf: Add tests for BPF function stack arguments
  2026-04-12  4:58 [PATCH bpf-next v4 00/18] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
                   ` (14 preceding siblings ...)
  2026-04-12  5:00 ` [PATCH bpf-next v4 15/18] bpf,x86: Implement JIT support for stack arguments Yonghong Song
@ 2026-04-12  5:00 ` Yonghong Song
  2026-04-12  5:00 ` [PATCH bpf-next v4 17/18] selftests/bpf: Add negative test for greater-than-8-byte kfunc stack argument Yonghong Song
  2026-04-12  5:00 ` [PATCH bpf-next v4 18/18] selftests/bpf: Add verifier tests for stack argument validation Yonghong Song
  17 siblings, 0 replies; 27+ messages in thread
From: Yonghong Song @ 2026-04-12  5:00 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Jose E . Marchesi, kernel-team, Martin KaFai Lau

Add selftests covering stack argument passing for both BPF-to-BPF
subprog calls and kfunc calls with more than 5 arguments. All tests
are guarded by __BPF_FEATURE_STACK_ARGUMENT and __TARGET_ARCH_x86.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 .../selftests/bpf/prog_tests/stack_arg.c      | 132 +++++++++++
 tools/testing/selftests/bpf/progs/stack_arg.c | 212 ++++++++++++++++++
 .../selftests/bpf/progs/stack_arg_kfunc.c     | 164 ++++++++++++++
 .../selftests/bpf/test_kmods/bpf_testmod.c    |  66 ++++++
 .../bpf/test_kmods/bpf_testmod_kfunc.h        |  20 +-
 5 files changed, 593 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/stack_arg.c
 create mode 100644 tools/testing/selftests/bpf/progs/stack_arg.c
 create mode 100644 tools/testing/selftests/bpf/progs/stack_arg_kfunc.c

diff --git a/tools/testing/selftests/bpf/prog_tests/stack_arg.c b/tools/testing/selftests/bpf/prog_tests/stack_arg.c
new file mode 100644
index 000000000000..1af5e5c91e0d
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/stack_arg.c
@@ -0,0 +1,132 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+
+#include <test_progs.h>
+#include <network_helpers.h>
+#include "stack_arg.skel.h"
+#include "stack_arg_kfunc.skel.h"
+
+static void run_subtest(struct bpf_program *prog, int expected)
+{
+	int err, prog_fd;
+	LIBBPF_OPTS(bpf_test_run_opts, topts,
+		.data_in = &pkt_v4,
+		.data_size_in = sizeof(pkt_v4),
+		.repeat = 1,
+	);
+
+	prog_fd = bpf_program__fd(prog);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(topts.retval, expected, "retval");
+}
+
+static void test_global_many(void)
+{
+	struct stack_arg *skel;
+
+	skel = stack_arg__open();
+	if (!ASSERT_OK_PTR(skel, "open"))
+		return;
+
+	if (!skel->rodata->has_stack_arg) {
+		test__skip();
+		goto out;
+	}
+
+	if (!ASSERT_OK(stack_arg__load(skel), "load"))
+		goto out;
+
+	run_subtest(skel->progs.test_global_many_args, 36);
+
+out:
+	stack_arg__destroy(skel);
+}
+
+static void test_async_cb_many(void)
+{
+	struct stack_arg *skel;
+
+	skel = stack_arg__open();
+	if (!ASSERT_OK_PTR(skel, "open"))
+		return;
+
+	if (!skel->rodata->has_stack_arg) {
+		test__skip();
+		goto out;
+	}
+
+	if (!ASSERT_OK(stack_arg__load(skel), "load"))
+		goto out;
+
+	run_subtest(skel->progs.test_async_cb_many_args, 0);
+
+out:
+	stack_arg__destroy(skel);
+}
+
+static void test_bpf2bpf(void)
+{
+	struct stack_arg *skel;
+
+	skel = stack_arg__open();
+	if (!ASSERT_OK_PTR(skel, "open"))
+		return;
+
+	if (!skel->rodata->has_stack_arg) {
+		test__skip();
+		goto out;
+	}
+
+	if (!ASSERT_OK(stack_arg__load(skel), "load"))
+		goto out;
+
+	run_subtest(skel->progs.test_bpf2bpf_ptr_stack_arg, 45);
+	run_subtest(skel->progs.test_bpf2bpf_mix_stack_args, 51);
+	run_subtest(skel->progs.test_bpf2bpf_nesting_stack_arg, 50);
+	run_subtest(skel->progs.test_bpf2bpf_dynptr_stack_arg, 69);
+
+out:
+	stack_arg__destroy(skel);
+}
+
+static void test_kfunc(void)
+{
+	struct stack_arg_kfunc *skel;
+
+	skel = stack_arg_kfunc__open();
+	if (!ASSERT_OK_PTR(skel, "open"))
+		return;
+
+	if (!skel->rodata->has_stack_arg) {
+		test__skip();
+		goto out;
+	}
+
+	if (!ASSERT_OK(stack_arg_kfunc__load(skel), "load"))
+		goto out;
+
+	run_subtest(skel->progs.test_stack_arg_scalar, 36);
+	run_subtest(skel->progs.test_stack_arg_ptr, 45);
+	run_subtest(skel->progs.test_stack_arg_mix, 51);
+	run_subtest(skel->progs.test_stack_arg_dynptr, 69);
+	run_subtest(skel->progs.test_stack_arg_mem, 151);
+	run_subtest(skel->progs.test_stack_arg_iter, 115);
+	run_subtest(skel->progs.test_stack_arg_const_str, 15);
+	run_subtest(skel->progs.test_stack_arg_timer, 15);
+
+out:
+	stack_arg_kfunc__destroy(skel);
+}
+
+void test_stack_arg(void)
+{
+	if (test__start_subtest("global_many_args"))
+		test_global_many();
+	if (test__start_subtest("async_cb_many_args"))
+		test_async_cb_many();
+	if (test__start_subtest("bpf2bpf"))
+		test_bpf2bpf();
+	if (test__start_subtest("kfunc"))
+		test_kfunc();
+}
diff --git a/tools/testing/selftests/bpf/progs/stack_arg.c b/tools/testing/selftests/bpf/progs/stack_arg.c
new file mode 100644
index 000000000000..c6bf89c3a0db
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/stack_arg.c
@@ -0,0 +1,212 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+
+#include <vmlinux.h>
+#include <stdbool.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_kfuncs.h"
+
+#define CLOCK_MONOTONIC 1
+
+long a, b, c, d, e, f, g, h;
+
+struct timer_elem {
+	struct bpf_timer timer;
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, 1);
+	__type(key, int);
+	__type(value, struct timer_elem);
+} timer_map SEC(".maps");
+
+int timer_result;
+
+#if defined(__TARGET_ARCH_x86) && defined(__BPF_FEATURE_STACK_ARGUMENT)
+
+const volatile bool has_stack_arg = true;
+
+__noinline static int static_func_many_args(int a, int b, int c, int d,
+					    int e, int f, int g, int h)
+{
+	return a + b + c + d + e + f + g + h;
+}
+
+__noinline int global_calls_many_args(int a, int b, int c)
+{
+	return static_func_many_args(a, b, c, 4, 5, 6, 7, 8);
+}
+
+SEC("tc")
+int test_global_many_args(void)
+{
+	return global_calls_many_args(1, 2, 3);
+}
+
+struct test_data {
+	long x;
+	long y;
+};
+
+/* 1 + 2 + 3 + 4 + 5 + 10 + 20 = 45 */
+__noinline static long func_with_ptr_stack_arg(long a, long b, long c, long d,
+					       long e, struct test_data *p)
+{
+	return a + b + c + d + e + p->x + p->y;
+}
+
+__noinline long global_ptr_stack_arg(long a, long b, long c, long d, long e)
+{
+	struct test_data data = { .x = 10, .y = 20 };
+
+	return func_with_ptr_stack_arg(a, b, c, d, e, &data);
+}
+
+SEC("tc")
+int test_bpf2bpf_ptr_stack_arg(void)
+{
+	return global_ptr_stack_arg(1, 2, 3, 4, 5);
+}
+
+/* 1 + 2 + 3 + 4 + 5 + 10 + 6 + 20 = 51 */
+__noinline static long func_with_mix_stack_args(long a, long b, long c, long d,
+						long e, struct test_data *p,
+						long f, struct test_data *q)
+{
+	return a + b + c + d + e + p->x + f + q->y;
+}
+
+__noinline long global_mix_stack_args(long a, long b, long c, long d, long e)
+{
+	struct test_data p = { .x = 10 };
+	struct test_data q = { .y = 20 };
+
+	return func_with_mix_stack_args(a, b, c, d, e, &p, e + 1, &q);
+}
+
+SEC("tc")
+int test_bpf2bpf_mix_stack_args(void)
+{
+	return global_mix_stack_args(1, 2, 3, 4, 5);
+}
+
+/*
+ * Nesting test: func_outer calls func_inner, both with struct pointer
+ * as stack arg.
+ *
+ * func_inner: (a+1) + (b+1) + (c+1) + (d+1) + (e+1) + p->x + p->y
+ *           = 2 + 3 + 4 + 5 + 6 + 10 + 20 = 50
+ */
+__noinline static long func_inner_ptr(long a, long b, long c, long d,
+				      long e, struct test_data *p)
+{
+	return a + b + c + d + e + p->x + p->y;
+}
+
+__noinline static long func_outer_ptr(long a, long b, long c, long d,
+				      long e, struct test_data *p)
+{
+	return func_inner_ptr(a + 1, b + 1, c + 1, d + 1, e + 1, p);
+}
+
+__noinline long global_nesting_ptr(long a, long b, long c, long d, long e)
+{
+	struct test_data data = { .x = 10, .y = 20 };
+
+	return func_outer_ptr(a, b, c, d, e, &data);
+}
+
+SEC("tc")
+int test_bpf2bpf_nesting_stack_arg(void)
+{
+	return global_nesting_ptr(1, 2, 3, 4, 5);
+}
+
+/* 1 + 2 + 3 + 4 + 5 + sizeof(pkt_v4) = 15 + 54 = 69 */
+__noinline static long func_with_dynptr(long a, long b, long c, long d,
+					long e, struct bpf_dynptr *ptr)
+{
+	return a + b + c + d + e + bpf_dynptr_size(ptr);
+}
+
+__noinline long global_dynptr_stack_arg(void *ctx __arg_ctx, long a, long b,
+					long c, long d)
+{
+	struct bpf_dynptr ptr;
+
+	bpf_dynptr_from_skb(ctx, 0, &ptr);
+	return func_with_dynptr(a, b, c, d, d + 1, &ptr);
+}
+
+SEC("tc")
+int test_bpf2bpf_dynptr_stack_arg(struct __sk_buff *skb)
+{
+	return global_dynptr_stack_arg(skb, 1, 2, 3, 4);
+}
+
+static int timer_cb_many_args(void *map, int *key, struct bpf_timer *timer)
+{
+	timer_result = static_func_many_args(10, 20, 30, 40, 50, 60, 70, 80);
+	return 0;
+}
+
+SEC("tc")
+int test_async_cb_many_args(void)
+{
+	struct timer_elem *elem;
+	int key = 0;
+
+	elem = bpf_map_lookup_elem(&timer_map, &key);
+	if (!elem)
+		return -1;
+
+	bpf_timer_init(&elem->timer, &timer_map, CLOCK_MONOTONIC);
+	bpf_timer_set_callback(&elem->timer, timer_cb_many_args);
+	bpf_timer_start(&elem->timer, 1, 0);
+	return 0;
+}
+
+#else
+
+const volatile bool has_stack_arg = false;
+
+SEC("tc")
+int test_global_many_args(void)
+{
+	return 0;
+}
+
+SEC("tc")
+int test_bpf2bpf_ptr_stack_arg(void)
+{
+	return 0;
+}
+
+SEC("tc")
+int test_bpf2bpf_mix_stack_args(void)
+{
+	return 0;
+}
+
+SEC("tc")
+int test_bpf2bpf_nesting_stack_arg(void)
+{
+	return 0;
+}
+
+SEC("tc")
+int test_bpf2bpf_dynptr_stack_arg(struct __sk_buff *skb)
+{
+	return 0;
+}
+
+SEC("tc")
+int test_async_cb_many_args(void)
+{
+	return 0;
+}
+
+#endif
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/stack_arg_kfunc.c b/tools/testing/selftests/bpf/progs/stack_arg_kfunc.c
new file mode 100644
index 000000000000..6cc404d57863
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/stack_arg_kfunc.c
@@ -0,0 +1,164 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_kfuncs.h"
+#include "../test_kmods/bpf_testmod_kfunc.h"
+
+#if defined(__TARGET_ARCH_x86) && defined(__BPF_FEATURE_STACK_ARGUMENT)
+
+const volatile bool has_stack_arg = true;
+
+struct bpf_iter_testmod_seq {
+	u64 :64;
+	u64 :64;
+};
+
+extern int bpf_iter_testmod_seq_new(struct bpf_iter_testmod_seq *it, s64 value, int cnt) __ksym;
+extern int *bpf_iter_testmod_seq_next(struct bpf_iter_testmod_seq *it) __ksym;
+extern void bpf_iter_testmod_seq_destroy(struct bpf_iter_testmod_seq *it) __ksym;
+
+struct timer_map_value {
+	struct bpf_timer timer;
+};
+
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, 1);
+	__type(key, int);
+	__type(value, struct timer_map_value);
+} kfunc_timer_map SEC(".maps");
+
+SEC("tc")
+int test_stack_arg_scalar(struct __sk_buff *skb)
+{
+	return bpf_kfunc_call_stack_arg(1, 2, 3, 4, 5, 6, 7, 8);
+}
+
+SEC("tc")
+int test_stack_arg_ptr(struct __sk_buff *skb)
+{
+	struct prog_test_pass1 p = { .x0 = 10, .x1 = 20 };
+
+	return bpf_kfunc_call_stack_arg_ptr(1, 2, 3, 4, 5, &p);
+}
+
+SEC("tc")
+int test_stack_arg_mix(struct __sk_buff *skb)
+{
+	struct prog_test_pass1 p = { .x0 = 10 };
+	struct prog_test_pass1 q = { .x1 = 20 };
+
+	return bpf_kfunc_call_stack_arg_mix(1, 2, 3, 4, 5, &p, 6, &q);
+}
+
+/* 1 + 2 + 3 + 4 + 5 + sizeof(pkt_v4) = 15 + 54 = 69 */
+SEC("tc")
+int test_stack_arg_dynptr(struct __sk_buff *skb)
+{
+	struct bpf_dynptr ptr;
+
+	bpf_dynptr_from_skb(skb, 0, &ptr);
+	return bpf_kfunc_call_stack_arg_dynptr(1, 2, 3, 4, 5, &ptr);
+}
+
+/* 1 + 2 + 3 + 4 + 5 + (1 + 2 + ... + 16) = 15 + 136 = 151 */
+SEC("tc")
+int test_stack_arg_mem(struct __sk_buff *skb)
+{
+	char buf[16] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16};
+
+	return bpf_kfunc_call_stack_arg_mem(1, 2, 3, 4, 5, buf, sizeof(buf));
+}
+
+/* 1 + 2 + 3 + 4 + 5 + 100 = 115 */
+SEC("tc")
+int test_stack_arg_iter(struct __sk_buff *skb)
+{
+	struct bpf_iter_testmod_seq it;
+	u64 ret;
+
+	bpf_iter_testmod_seq_new(&it, 100, 10);
+	ret = bpf_kfunc_call_stack_arg_iter(1, 2, 3, 4, 5, &it);
+	bpf_iter_testmod_seq_destroy(&it);
+	return ret;
+}
+
+const char cstr[] = "hello";
+
+/* 1 + 2 + 3 + 4 + 5 = 15 */
+SEC("tc")
+int test_stack_arg_const_str(struct __sk_buff *skb)
+{
+	return bpf_kfunc_call_stack_arg_const_str(1, 2, 3, 4, 5, cstr);
+}
+
+/* 1 + 2 + 3 + 4 + 5 = 15 */
+SEC("tc")
+int test_stack_arg_timer(struct __sk_buff *skb)
+{
+	struct timer_map_value *val;
+	int key = 0;
+
+	val = bpf_map_lookup_elem(&kfunc_timer_map, &key);
+	if (!val)
+		return 0;
+	return bpf_kfunc_call_stack_arg_timer(1, 2, 3, 4, 5, &val->timer);
+}
+
+#else
+
+const volatile bool has_stack_arg = false;
+
+SEC("tc")
+int test_stack_arg_scalar(struct __sk_buff *skb)
+{
+	return 0;
+}
+
+SEC("tc")
+int test_stack_arg_ptr(struct __sk_buff *skb)
+{
+	return 0;
+}
+
+SEC("tc")
+int test_stack_arg_mix(struct __sk_buff *skb)
+{
+	return 0;
+}
+
+SEC("tc")
+int test_stack_arg_dynptr(struct __sk_buff *skb)
+{
+	return 0;
+}
+
+SEC("tc")
+int test_stack_arg_mem(struct __sk_buff *skb)
+{
+	return 0;
+}
+
+SEC("tc")
+int test_stack_arg_iter(struct __sk_buff *skb)
+{
+	return 0;
+}
+
+SEC("tc")
+int test_stack_arg_const_str(struct __sk_buff *skb)
+{
+	return 0;
+}
+
+SEC("tc")
+int test_stack_arg_timer(struct __sk_buff *skb)
+{
+	return 0;
+}
+
+#endif
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
index d876314a4d67..ea82a6d32d9f 100644
--- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
@@ -825,6 +825,63 @@ __bpf_kfunc int bpf_kfunc_call_test5(u8 a, u16 b, u32 c)
 	return 0;
 }
 
+__bpf_kfunc u64 bpf_kfunc_call_stack_arg(u64 a, u64 b, u64 c, u64 d,
+					 u64 e, u64 f, u64 g, u64 h)
+{
+	return a + b + c + d + e + f + g + h;
+}
+
+__bpf_kfunc u64 bpf_kfunc_call_stack_arg_ptr(u64 a, u64 b, u64 c, u64 d, u64 e,
+					     struct prog_test_pass1 *p)
+{
+	return a + b + c + d + e + p->x0 + p->x1;
+}
+
+__bpf_kfunc u64 bpf_kfunc_call_stack_arg_mix(u64 a, u64 b, u64 c, u64 d, u64 e,
+					     struct prog_test_pass1 *p, u64 f,
+					     struct prog_test_pass1 *q)
+{
+	return a + b + c + d + e + p->x0 + f + q->x1;
+}
+
+__bpf_kfunc u64 bpf_kfunc_call_stack_arg_dynptr(u64 a, u64 b, u64 c, u64 d, u64 e,
+					       struct bpf_dynptr *ptr)
+{
+	const struct bpf_dynptr_kern *kern_ptr = (void *)ptr;
+
+	return a + b + c + d + e + (kern_ptr->size & 0xFFFFFF);
+}
+
+__bpf_kfunc u64 bpf_kfunc_call_stack_arg_mem(u64 a, u64 b, u64 c, u64 d, u64 e,
+					     void *mem, int mem__sz)
+{
+	const unsigned char *p = mem;
+	u64 sum = a + b + c + d + e;
+	int i;
+
+	for (i = 0; i < mem__sz; i++)
+		sum += p[i];
+	return sum;
+}
+
+__bpf_kfunc u64 bpf_kfunc_call_stack_arg_iter(u64 a, u64 b, u64 c, u64 d, u64 e,
+					      struct bpf_iter_testmod_seq *it__iter)
+{
+	return a + b + c + d + e + it__iter->value;
+}
+
+__bpf_kfunc u64 bpf_kfunc_call_stack_arg_const_str(u64 a, u64 b, u64 c, u64 d, u64 e,
+						   const char *str__str)
+{
+	return a + b + c + d + e;
+}
+
+__bpf_kfunc u64 bpf_kfunc_call_stack_arg_timer(u64 a, u64 b, u64 c, u64 d, u64 e,
+					       struct bpf_timer *timer)
+{
+	return a + b + c + d + e;
+}
+
 static struct prog_test_ref_kfunc prog_test_struct = {
 	.a = 42,
 	.b = 108,
@@ -1288,6 +1345,15 @@ BTF_ID_FLAGS(func, bpf_kfunc_call_test2)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test3)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test4)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test5)
+BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg)
+BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_ptr)
+BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_mix)
+BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_dynptr)
+BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_mem)
+BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_iter)
+BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_const_str)
+BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_timer)
+BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_pass1)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail1)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail2)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test_acquire, KF_ACQUIRE | KF_RET_NULL)
diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
index aa0b8d41e71b..2c1cb118f886 100644
--- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
@@ -26,6 +26,8 @@ struct prog_test_ref_kfunc {
 };
 #endif
 
+struct bpf_iter_testmod_seq;
+
 struct prog_test_pass1 {
 	int x0;
 	struct {
@@ -111,7 +113,23 @@ int bpf_kfunc_call_test2(struct sock *sk, __u32 a, __u32 b) __ksym;
 struct sock *bpf_kfunc_call_test3(struct sock *sk) __ksym;
 long bpf_kfunc_call_test4(signed char a, short b, int c, long d) __ksym;
 int bpf_kfunc_call_test5(__u8 a, __u16 b, __u32 c) __ksym;
-
+__u64 bpf_kfunc_call_stack_arg(__u64 a, __u64 b, __u64 c, __u64 d,
+			       __u64 e, __u64 f, __u64 g, __u64 h) __ksym;
+__u64 bpf_kfunc_call_stack_arg_ptr(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e,
+				   struct prog_test_pass1 *p) __ksym;
+__u64 bpf_kfunc_call_stack_arg_mix(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e,
+				   struct prog_test_pass1 *p, __u64 f,
+				   struct prog_test_pass1 *q) __ksym;
+__u64 bpf_kfunc_call_stack_arg_dynptr(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e,
+				      struct bpf_dynptr *ptr) __ksym;
+__u64 bpf_kfunc_call_stack_arg_mem(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e,
+				   void *mem, int mem__sz) __ksym;
+__u64 bpf_kfunc_call_stack_arg_iter(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e,
+				    struct bpf_iter_testmod_seq *it__iter) __ksym;
+__u64 bpf_kfunc_call_stack_arg_const_str(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e,
+					 const char *str__str) __ksym;
+__u64 bpf_kfunc_call_stack_arg_timer(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e,
+				     struct bpf_timer *timer) __ksym;
 void bpf_kfunc_call_test_pass_ctx(struct __sk_buff *skb) __ksym;
 void bpf_kfunc_call_test_pass1(struct prog_test_pass1 *p) __ksym;
 void bpf_kfunc_call_test_pass2(struct prog_test_pass2 *p) __ksym;
-- 
2.52.0



* [PATCH bpf-next v4 17/18] selftests/bpf: Add negative test for greater-than-8-byte kfunc stack argument
  2026-04-12  4:58 [PATCH bpf-next v4 00/18] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
                   ` (15 preceding siblings ...)
  2026-04-12  5:00 ` [PATCH bpf-next v4 16/18] selftests/bpf: Add tests for BPF function " Yonghong Song
@ 2026-04-12  5:00 ` Yonghong Song
  2026-04-12  5:00 ` [PATCH bpf-next v4 18/18] selftests/bpf: Add verifier tests for stack argument validation Yonghong Song
  17 siblings, 0 replies; 27+ messages in thread
From: Yonghong Song @ 2026-04-12  5:00 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Jose E . Marchesi, kernel-team, Martin KaFai Lau

Add a test that the verifier rejects kfunc calls where a stack argument
exceeds 8 bytes (the register-sized slot limit).

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 .../selftests/bpf/prog_tests/stack_arg_fail.c | 24 ++++++++++++++
 .../selftests/bpf/progs/stack_arg_fail.c      | 32 +++++++++++++++++++
 .../selftests/bpf/test_kmods/bpf_testmod.c    |  7 ++++
 .../bpf/test_kmods/bpf_testmod_kfunc.h        |  8 +++++
 4 files changed, 71 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/stack_arg_fail.c
 create mode 100644 tools/testing/selftests/bpf/progs/stack_arg_fail.c

diff --git a/tools/testing/selftests/bpf/prog_tests/stack_arg_fail.c b/tools/testing/selftests/bpf/prog_tests/stack_arg_fail.c
new file mode 100644
index 000000000000..328a79edee45
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/stack_arg_fail.c
@@ -0,0 +1,24 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+
+#include <test_progs.h>
+#include "stack_arg_fail.skel.h"
+
+void test_stack_arg_fail(void)
+{
+	struct stack_arg_fail *skel;
+
+	skel = stack_arg_fail__open();
+	if (!ASSERT_OK_PTR(skel, "open"))
+		return;
+
+	if (!skel->rodata->has_stack_arg) {
+		test__skip();
+		goto out;
+	}
+
+	ASSERT_ERR(stack_arg_fail__load(skel), "load_should_fail");
+
+out:
+	stack_arg_fail__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/stack_arg_fail.c b/tools/testing/selftests/bpf/progs/stack_arg_fail.c
new file mode 100644
index 000000000000..caa63b6f6a80
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/stack_arg_fail.c
@@ -0,0 +1,32 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include "../test_kmods/bpf_testmod_kfunc.h"
+
+#if defined(__BPF_FEATURE_STACK_ARGUMENT)
+
+const volatile bool has_stack_arg = true;
+
+SEC("tc")
+int test_stack_arg_big(struct __sk_buff *skb)
+{
+	struct prog_test_big_arg s = { .a = 1, .b = 2 };
+
+	return bpf_kfunc_call_stack_arg_big(1, 2, 3, 4, 5, s);
+}
+
+#else
+
+const volatile bool has_stack_arg = false;
+
+SEC("tc")
+int test_stack_arg_big(struct __sk_buff *skb)
+{
+	return 0;
+}
+
+#endif
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
index ea82a6d32d9f..bd467560787e 100644
--- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod.c
@@ -882,6 +882,12 @@ __bpf_kfunc u64 bpf_kfunc_call_stack_arg_timer(u64 a, u64 b, u64 c, u64 d, u64 e
 	return a + b + c + d + e;
 }
 
+__bpf_kfunc u64 bpf_kfunc_call_stack_arg_big(u64 a, u64 b, u64 c, u64 d, u64 e,
+					     struct prog_test_big_arg s)
+{
+	return a + b + c + d + e + s.a + s.b;
+}
+
 static struct prog_test_ref_kfunc prog_test_struct = {
 	.a = 42,
 	.b = 108,
@@ -1353,6 +1359,7 @@ BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_mem)
 BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_iter)
 BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_const_str)
 BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_timer)
+BTF_ID_FLAGS(func, bpf_kfunc_call_stack_arg_big)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_pass1)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail1)
 BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail2)
diff --git a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
index 2c1cb118f886..2a40f80b074a 100644
--- a/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
+++ b/tools/testing/selftests/bpf/test_kmods/bpf_testmod_kfunc.h
@@ -50,6 +50,11 @@ struct prog_test_pass2 {
 	} x;
 };
 
+struct prog_test_big_arg {
+	long a;
+	long b;
+};
+
 struct prog_test_fail1 {
 	void *p;
 	int x;
@@ -130,6 +135,9 @@ __u64 bpf_kfunc_call_stack_arg_const_str(__u64 a, __u64 b, __u64 c, __u64 d, __u
 					 const char *str__str) __ksym;
 __u64 bpf_kfunc_call_stack_arg_timer(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e,
 				     struct bpf_timer *timer) __ksym;
+__u64 bpf_kfunc_call_stack_arg_big(__u64 a, __u64 b, __u64 c, __u64 d, __u64 e,
+				   struct prog_test_big_arg s) __ksym;
+
 void bpf_kfunc_call_test_pass_ctx(struct __sk_buff *skb) __ksym;
 void bpf_kfunc_call_test_pass1(struct prog_test_pass1 *p) __ksym;
 void bpf_kfunc_call_test_pass2(struct prog_test_pass2 *p) __ksym;
-- 
2.52.0



* [PATCH bpf-next v4 18/18] selftests/bpf: Add verifier tests for stack argument validation
  2026-04-12  4:58 [PATCH bpf-next v4 00/18] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
                   ` (16 preceding siblings ...)
  2026-04-12  5:00 ` [PATCH bpf-next v4 17/18] selftests/bpf: Add negative test for greater-than-8-byte kfunc stack argument Yonghong Song
@ 2026-04-12  5:00 ` Yonghong Song
  17 siblings, 0 replies; 27+ messages in thread
From: Yonghong Song @ 2026-04-12  5:00 UTC (permalink / raw)
  To: bpf
  Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
	Jose E . Marchesi, kernel-team, Martin KaFai Lau

Add inline-asm-based verifier tests that exercise the stack argument
validation logic directly.

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../selftests/bpf/progs/verifier_stack_arg.c  | 316 ++++++++++++++++++
 2 files changed, 318 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_stack_arg.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index a96b25ebff23..aef21cf2987b 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -91,6 +91,7 @@
 #include "verifier_sockmap_mutate.skel.h"
 #include "verifier_spill_fill.skel.h"
 #include "verifier_spin_lock.skel.h"
+#include "verifier_stack_arg.skel.h"
 #include "verifier_stack_ptr.skel.h"
 #include "verifier_store_release.skel.h"
 #include "verifier_subprog_precision.skel.h"
@@ -238,6 +239,7 @@ void test_verifier_sock_addr(void)            { RUN(verifier_sock_addr); }
 void test_verifier_sockmap_mutate(void)       { RUN(verifier_sockmap_mutate); }
 void test_verifier_spill_fill(void)           { RUN(verifier_spill_fill); }
 void test_verifier_spin_lock(void)            { RUN(verifier_spin_lock); }
+void test_verifier_stack_arg(void)            { RUN(verifier_stack_arg); }
 void test_verifier_stack_ptr(void)            { RUN(verifier_stack_ptr); }
 void test_verifier_store_release(void)        { RUN(verifier_store_release); }
 void test_verifier_subprog_precision(void)    { RUN(verifier_subprog_precision); }
diff --git a/tools/testing/selftests/bpf/progs/verifier_stack_arg.c b/tools/testing/selftests/bpf/progs/verifier_stack_arg.c
new file mode 100644
index 000000000000..35b1bc869691
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_stack_arg.c
@@ -0,0 +1,316 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(max_entries, 1);
+	__type(key, long long);
+	__type(value, long long);
+} map_hash_8b SEC(".maps");
+
+#if defined(__TARGET_ARCH_x86) && defined(__BPF_FEATURE_STACK_ARGUMENT)
+
+__noinline __used
+static int subprog_6args(int a, int b, int c, int d, int e, int f)
+{
+	return a + b + c + d + e + f;
+}
+
+__noinline __used
+static int subprog_7args(int a, int b, int c, int d, int e, int f, int g)
+{
+	return a + b + c + d + e + f + g;
+}
+
+__noinline __used
+static long subprog_deref_arg6(long a, long b, long c, long d, long e, long *f)
+{
+	return *f;
+}
+
+SEC("tc")
+__description("stack_arg: subprog with 6 args")
+__success
+__arch_x86_64
+__naked void stack_arg_6args(void)
+{
+	asm volatile (
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		"*(u64 *)(r12 - 8) = 6;"
+		"call subprog_6args;"
+		"exit;"
+		::: __clobber_all
+	);
+}
+
+SEC("tc")
+__description("stack_arg: two subprogs with >5 args")
+__success
+__arch_x86_64
+__naked void stack_arg_two_subprogs(void)
+{
+	asm volatile (
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		"*(u64 *)(r12 - 8) = 10;"
+		"call subprog_6args;"
+		"r6 = r0;"
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		"*(u64 *)(r12 - 16) = 30;"
+		"*(u64 *)(r12 - 8) = 20;"
+		"call subprog_7args;"
+		"r0 += r6;"
+		"exit;"
+		::: __clobber_all
+	);
+}
+
+SEC("tc")
+__description("stack_arg: read from uninitialized stack arg slot")
+__failure
+__arch_x86_64
+__msg("invalid read from stack arg")
+__naked void stack_arg_read_uninitialized(void)
+{
+	asm volatile (
+		"r0 = *(u64 *)(r12 + 8);"
+		"r0 = 0;"
+		"exit;"
+		::: __clobber_all
+	);
+}
+
+SEC("tc")
+__description("stack_arg: gap at offset -8, only wrote -16")
+__failure
+__arch_x86_64
+__msg("stack arg#6 not properly initialized")
+__naked void stack_arg_gap_at_minus8(void)
+{
+	asm volatile (
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		"*(u64 *)(r12 - 16) = 30;"
+		"call subprog_7args;"
+		"exit;"
+		::: __clobber_all
+	);
+}
+
+SEC("tc")
+__description("stack_arg: incorrect size of stack arg write")
+__failure
+__arch_x86_64
+__msg("stack arg write must be 8 bytes, got 4")
+__naked void stack_arg_bad_write_size(void)
+{
+	asm volatile (
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		"*(u32 *)(r12 - 8) = 30;"
+		"call subprog_6args;"
+		"exit;"
+		::: __clobber_all
+	);
+}
+
+SEC("tc")
+__description("stack_arg: pruning with different stack arg types")
+__failure
+__flag(BPF_F_TEST_STATE_FREQ)
+__arch_x86_64
+__msg("invalid mem access 'scalar'")
+__naked void stack_arg_pruning_type_mismatch(void)
+{
+	asm volatile (
+		"call %[bpf_get_prandom_u32];"
+		"r6 = r0;"
+		/* local = 0 on program stack */
+		"r7 = 0;"
+		"*(u64 *)(r10 - 8) = r7;"
+		/* Branch based on random value */
+		"if r6 s> 3 goto l0_%=;"
+		/* Path 1: store stack pointer to outgoing arg6 */
+		"r1 = r10;"
+		"r1 += -8;"
+		"*(u64 *)(r12 - 8) = r1;"
+		"goto l1_%=;"
+	"l0_%=:"
+		/* Path 2: store scalar to outgoing arg6 */
+		"*(u64 *)(r12 - 8) = 42;"
+	"l1_%=:"
+		/* Call subprog that dereferences arg6 */
+		"r1 = r6;"
+		"r2 = 0;"
+		"r3 = 0;"
+		"r4 = 0;"
+		"r5 = 0;"
+		"call subprog_deref_arg6;"
+		"exit;"
+		:: __imm(bpf_get_prandom_u32)
+		: __clobber_all
+	);
+}
+
+SEC("tc")
+__description("stack_arg: release_reference invalidates stack arg slot")
+__failure
+__arch_x86_64
+__msg("invalid mem access 'scalar'")
+__naked void stack_arg_release_ref(void)
+{
+	asm volatile (
+		"r6 = r1;"
+		/* struct bpf_sock_tuple tuple = {} */
+		"r2 = 0;"
+		"*(u32 *)(r10 - 8) = r2;"
+		"*(u64 *)(r10 - 16) = r2;"
+		"*(u64 *)(r10 - 24) = r2;"
+		"*(u64 *)(r10 - 32) = r2;"
+		"*(u64 *)(r10 - 40) = r2;"
+		"*(u64 *)(r10 - 48) = r2;"
+		/* sk = bpf_sk_lookup_tcp(ctx, &tuple, sizeof(tuple), 0, 0) */
+		"r1 = r6;"
+		"r2 = r10;"
+		"r2 += -48;"
+		"r3 = %[sizeof_bpf_sock_tuple];"
+		"r4 = 0;"
+		"r5 = 0;"
+		"call %[bpf_sk_lookup_tcp];"
+		/* r0 = sk (PTR_TO_SOCK_OR_NULL) */
+		"if r0 == 0 goto l0_%=;"
+		/* Store sock ref to outgoing arg6 slot */
+		"*(u64 *)(r12 - 8) = r0;"
+		/* Release the reference; this invalidates the stack arg slot */
+		"r1 = r0;"
+		"call %[bpf_sk_release];"
+		/* Call subprog that dereferences arg6; should fail */
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		"call subprog_deref_arg6;"
+	"l0_%=:"
+		"r0 = 0;"
+		"exit;"
+		:
+		: __imm(bpf_sk_lookup_tcp),
+		  __imm(bpf_sk_release),
+		  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+		: __clobber_all
+	);
+}
+
+SEC("tc")
+__description("stack_arg: pkt pointer in stack arg slot invalidated after pull_data")
+__failure
+__arch_x86_64
+__msg("invalid mem access 'scalar'")
+__naked void stack_arg_stale_pkt_ptr(void)
+{
+	asm volatile (
+		"r6 = r1;"
+		"r7 = *(u32 *)(r6 + %[__sk_buff_data]);"
+		"r8 = *(u32 *)(r6 + %[__sk_buff_data_end]);"
+		/* check pkt has at least 1 byte */
+		"r0 = r7;"
+		"r0 += 1;"
+		"if r0 > r8 goto l0_%=;"
+		/* Store valid pkt pointer to outgoing arg6 slot */
+		"*(u64 *)(r12 - 8) = r7;"
+		/* bpf_skb_pull_data invalidates all pkt pointers */
+		"r1 = r6;"
+		"r2 = 0;"
+		"call %[bpf_skb_pull_data];"
+		/* Call subprog that dereferences arg6; should fail */
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		"call subprog_deref_arg6;"
+	"l0_%=:"
+		"r0 = 0;"
+		"exit;"
+		:
+		: __imm(bpf_skb_pull_data),
+		  __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+		  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+		: __clobber_all
+	);
+}
+
+SEC("tc")
+__description("stack_arg: null propagation rejects deref on null branch")
+__failure
+__arch_x86_64
+__msg("invalid mem access 'scalar'")
+__naked void stack_arg_null_propagation_fail(void)
+{
+	asm volatile (
+		"r1 = 0;"
+		"*(u64 *)(r10 - 8) = r1;"
+		/* r0 = bpf_map_lookup_elem(&map_hash_8b, &key) */
+		"r2 = r10;"
+		"r2 += -8;"
+		"r1 = %[map_hash_8b] ll;"
+		"call %[bpf_map_lookup_elem];"
+		/* Store PTR_TO_MAP_VALUE_OR_NULL to outgoing arg6 slot */
+		"*(u64 *)(r12 - 8) = r0;"
+		/* null check on r0 */
+		"if r0 != 0 goto l0_%=;"
+		/*
+		 * On null branch, outgoing slot is SCALAR(0).
+		 * Call subprog that dereferences arg6; should fail.
+		 */
+		"r1 = 0;"
+		"r2 = 0;"
+		"r3 = 0;"
+		"r4 = 0;"
+		"r5 = 0;"
+		"call subprog_deref_arg6;"
+	"l0_%=:"
+		"r0 = 0;"
+		"exit;"
+		:
+		: __imm(bpf_map_lookup_elem),
+		  __imm_addr(map_hash_8b)
+		: __clobber_all
+	);
+}
+
+#else
+
+SEC("socket")
+__description("stack_arg is not supported by compiler or jit, use a dummy test")
+__success
+int dummy_test(void)
+{
+	return 0;
+}
+
+#endif
+
+char _license[] SEC("license") = "GPL";
-- 
2.52.0


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH bpf-next v4 03/18] bpf: Refactor to avoid redundant calculation of bpf_reg_state
  2026-04-12  4:58 ` [PATCH bpf-next v4 03/18] bpf: Refactor to avoid redundant calculation of bpf_reg_state Yonghong Song
@ 2026-04-12  5:31   ` bot+bpf-ci
  0 siblings, 0 replies; 27+ messages in thread
From: bot+bpf-ci @ 2026-04-12  5:31 UTC (permalink / raw)
  To: yonghong.song, bpf
  Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast,
	andrii, daniel, martin.lau, eddyz87, yonghong.song, clm,
	ihor.solodrai

[-- Attachment #1: Type: text/plain, Size: 977 bytes --]

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 6469e71cd1fa..4c67a15c73e1 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c

[ ... ]

> @@ -8374,11 +8355,10 @@ static int check_atomic(struct bpf_verifier_env *env, struct bpf_insn *insn)
>   * read offsets are marked as read.
>   */
>  static int check_stack_range_initialized(
> -		struct bpf_verifier_env *env, int regno, int off,
> +		struct bpf_verifier_env *env, struct bpf_reg_state *reg,int regno, int off,
                                                                     ^

There's a missing space after the comma between *reg and int.  The
forward declaration earlier in the file has proper spacing, but the
definition here does not.


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24299298635


* Re: [PATCH bpf-next v4 04/18] bpf: Refactor to handle memory and size together
  2026-04-12  4:58 ` [PATCH bpf-next v4 04/18] bpf: Refactor to handle memory and size together Yonghong Song
@ 2026-04-12  5:31   ` bot+bpf-ci
  0 siblings, 0 replies; 27+ messages in thread
From: bot+bpf-ci @ 2026-04-12  5:31 UTC (permalink / raw)
  To: yonghong.song, bpf
  Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast,
	andrii, daniel, martin.lau, eddyz87, yonghong.song, clm,
	ihor.solodrai

[-- Attachment #1: Type: text/plain, Size: 1248 bytes --]

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 4c67a15c73e1..cddd39ebb40b 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c

[ ... ]

> @@ -8591,11 +8591,13 @@ static int check_helper_mem_access(struct bpf_verifier_env *env, int regno,
>   * containing the pointer.
>   */
>  static int check_mem_size_reg(struct bpf_verifier_env *env,
> -			      struct bpf_reg_state *reg, u32 regno,
> +			      struct bpf_reg_state *mem_reg,
> +			      struct bpf_reg_state *size_reg, u32 mem_regno,

The function comment above still says "@regno is the register
containing the access size. regno-1 is the register containing
the pointer."

After this refactor there is no @regno parameter.  The parameter is
now mem_regno, which is the memory (pointer) register, not the size
register.  Should the comment be updated to match the new signature?
Something like "@mem_regno is the register containing the pointer,
size register is at mem_regno+1" would match the actual interface.


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24299298635


* Re: [PATCH bpf-next v4 10/18] bpf: Fix interaction between stack argument PTR_TO_STACK and dead slot poisoning
  2026-04-12  5:00 ` [PATCH bpf-next v4 10/18] bpf: Fix interaction between stack argument PTR_TO_STACK and dead slot poisoning Yonghong Song
@ 2026-04-12  5:43   ` bot+bpf-ci
  0 siblings, 0 replies; 27+ messages in thread
From: bot+bpf-ci @ 2026-04-12  5:43 UTC (permalink / raw)
  To: yonghong.song, bpf
  Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast,
	andrii, daniel, martin.lau, eddyz87, yonghong.song, clm,
	ihor.solodrai

[-- Attachment #1: Type: text/plain, Size: 1133 bytes --]


commit 7570556100592966981ed4784bb775307405e5f0
Author: Yonghong Song <yonghong.song@linux.dev>

bpf: Fix interaction between stack argument PTR_TO_STACK and dead slot poisoning

This commit fixes a false rejection where the verifier rejects valid
cross-frame stack reads because the static liveness pass cannot track
indirect references through stack-argument-passed pointers, causing
parent frame slots to be incorrectly poisoned.

This patch fixes a bug introduced by an earlier commit in the same
series but does not include a Fixes: tag. The issue is introduced
when stack argument support is added, because BPF_REG_STACK_ARG_BASE
(r12) is not tracked by the liveness pass. Without attribution, it
is harder to identify the scope of the fix during bisection or
stable backporting.

Would it make sense to add:

    Fixes: 442d7da1edaf ("bpf: Support stack arguments for bpf functions")


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24299298635


* Re: [PATCH bpf-next v4 13/18] bpf: Support stack arguments for kfunc calls
  2026-04-12  5:00 ` [PATCH bpf-next v4 13/18] bpf: Support stack arguments for kfunc calls Yonghong Song
@ 2026-04-12  5:43   ` bot+bpf-ci
  0 siblings, 0 replies; 27+ messages in thread
From: bot+bpf-ci @ 2026-04-12  5:43 UTC (permalink / raw)
  To: yonghong.song, bpf
  Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast,
	andrii, daniel, martin.lau, eddyz87, yonghong.song, clm,
	ihor.solodrai

[-- Attachment #1: Type: text/plain, Size: 2562 bytes --]

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 45987041bb2a..206ffbd9596d 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c

[ ... ]

> @@ -13848,9 +13858,9 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
>
>  	args = (const struct btf_param *)(meta->func_proto + 1);
>  	nargs = btf_type_vlen(meta->func_proto);
> -	if (nargs > MAX_BPF_FUNC_REG_ARGS) {
> +	if (nargs > MAX_BPF_FUNC_ARGS) {
>  		verbose(env, "Function %s has %d > %d args\n", func_name, nargs,
> -			MAX_BPF_FUNC_REG_ARGS);
> +			MAX_BPF_FUNC_ARGS);
>  		return -EINVAL;
>  	}

[ ... ]

> @@ -15152,6 +15198,16 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>  			mark_btf_func_reg_size(env, regno, t->size);
>  	}

With nargs now allowed up to MAX_BPF_FUNC_ARGS (12), the existing
mark_btf_func_reg_size loop in check_kfunc_call() iterates beyond
register arguments:

    for (i = 0; i < nargs; i++) {
        u32 regno = i + 1;
        ...
        mark_btf_func_reg_size(env, regno, t->size);
    }

This calls __mark_btf_func_reg_size() with &regs[regno] where
regs is bpf_func_state.regs[MAX_BPF_REG] and MAX_BPF_REG is 11.

For nargs 6-10, the loop incorrectly calls mark_insn_zext() on
callee-saved registers R6-R9 or the frame pointer R10. These
registers are not cleared by the caller-saved register loop above,
so if the BPF program wrote a 32-bit value to R6 before the kfunc
call, mark_insn_zext() would set zext_dst on that unrelated
instruction and clear the register's subreg_def.

For nargs >= 11 (MAX_BPF_FUNC_ARGS is 12), regs[11] is an
out-of-bounds access that overlaps bpf_func_state.callsite.

Should this loop be bounded to register args only, similar to what
v1 of this patch had?

    for (i = 0; i < nargs && i < MAX_BPF_FUNC_REG_ARGS; i++) {

> +	/* Track outgoing stack arg depth for kfuncs with >5 args */
> +	if (nargs > MAX_BPF_FUNC_REG_ARGS) {
> +		struct bpf_func_state *caller = cur_func(env);
> +		struct bpf_subprog_info *caller_info = &env->subprog_info[caller->subprogno];
> +		u16 kfunc_stack_arg_depth = (nargs - MAX_BPF_FUNC_REG_ARGS) * BPF_REG_SIZE;
> +
> +		if (kfunc_stack_arg_depth > caller_info->outgoing_stack_arg_depth)
> +			caller_info->outgoing_stack_arg_depth = kfunc_stack_arg_depth;
> +	}


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24299298635


* Re: [PATCH bpf-next v4 12/18] bpf: Reject stack arguments if tail call reachable
  2026-04-12  5:00 ` [PATCH bpf-next v4 12/18] bpf: Reject stack arguments if tail call reachable Yonghong Song
@ 2026-04-12  5:43   ` bot+bpf-ci
  0 siblings, 0 replies; 27+ messages in thread
From: bot+bpf-ci @ 2026-04-12  5:43 UTC (permalink / raw)
  To: yonghong.song, bpf
  Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast,
	andrii, daniel, martin.lau, eddyz87, yonghong.song, clm,
	ihor.solodrai

[-- Attachment #1: Type: text/plain, Size: 986 bytes --]

> Tailcalls are been deprecated. So reject stack arguments
> if tail call is in the way.

The commit message has a grammar issue ("are been" should probably
be "have been" or "are being"), but more importantly, is the claim
that tail calls are deprecated accurate?

Looking at the mailing list discussion on the v1 series, Alexei's
guidance was "let's disallow mixing 6+ args and tailcalls" due to
stack frame layout incompatibility, not because tail calls are being
deprecated.

Could the commit message more precisely describe the actual
reason for the restriction?  Something along the lines of "tail
calls and stack arguments require incompatible stack frame
handling" would better explain to future readers why the
combination is rejected.


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24299298635


* Re: [PATCH bpf-next v4 06/18] bpf: Use argument index instead of register index in kfunc verifier logs
  2026-04-12  4:58 ` [PATCH bpf-next v4 06/18] bpf: Use argument index instead of register index in kfunc verifier logs Yonghong Song
@ 2026-04-12  5:43   ` bot+bpf-ci
  0 siblings, 0 replies; 27+ messages in thread
From: bot+bpf-ci @ 2026-04-12  5:43 UTC (permalink / raw)
  To: yonghong.song, bpf
  Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast,
	andrii, daniel, martin.lau, eddyz87, yonghong.song, clm,
	ihor.solodrai

[-- Attachment #1: Type: text/plain, Size: 3085 bytes --]

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 54296d818d35..01df990f841a 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c

[ ... ]

> @@ -8592,12 +8609,12 @@ static int check_helper_mem_access(...)
>   */
>  static int check_mem_size_reg(struct bpf_verifier_env *env,
>  			      struct bpf_reg_state *mem_reg,
> -			      struct bpf_reg_state *size_reg, int mem_regno,
> +			      struct bpf_reg_state *size_reg, int reg_or_arg,
>  			      enum bpf_access_type access_type,
>  			      bool zero_size_allowed,
>  			      struct bpf_call_arg_meta *meta)
>  {
> -	int size_regno = mem_regno + 1;
> +	int size_reg_or_arg = (reg_or_arg >= 0) ? reg_or_arg + 1 : reg_or_arg - 1;

[ ... ]

> -	err = check_helper_mem_access(env, mem_reg, mem_regno, size_reg->umax_value,
> +	err = check_helper_mem_access(env, mem_reg, reg_or_arg, size_reg->umax_value,
>  				      access_type, zero_size_allowed, meta);
> -	if (!err)
> -		err = mark_chain_precision(env, size_regno);
> +	if (!err && size_reg_or_arg > 0)
> +		err = mark_chain_precision(env, size_reg_or_arg);
>  	return err;
>  }

When called from the kfunc path via check_kfunc_mem_size_reg(),
reg_or_arg is always negative (-(mem_argno + 1)), which makes
size_reg_or_arg also negative (reg_or_arg - 1). The condition
"size_reg_or_arg > 0" is then always false, so mark_chain_precision()
is never called for kfunc mem+size argument pairs.

The old code unconditionally called mark_chain_precision(env,
size_regno) with the actual register number. With the new encoding,
precision tracking is skipped entirely on the kfunc path.

For reference, check_kfunc_mem_size_reg() constructs the negative
encoding:

> @@ -8674,14 +8691,15 @@ static int check_mem_reg(...)
>  static int check_kfunc_mem_size_reg(struct bpf_verifier_env *env,
>                                     struct bpf_reg_state *mem_reg,
> 				    struct bpf_reg_state *size_reg,
> -				    u32 mem_regno)
> +				    u32 mem_argno)
>  {
> +	int reg_or_arg = -(int)(mem_argno + 1);

[ ... ]

> -	err = check_mem_size_reg(env, mem_reg, size_reg, mem_regno, BPF_READ, true, &meta);
> -	err = err ?: check_mem_size_reg(env, mem_reg, size_reg, mem_regno, BPF_WRITE, true, &meta);
> +	err = check_mem_size_reg(env, mem_reg, size_reg, reg_or_arg, BPF_READ, true, &meta);
> +	err = err ?: check_mem_size_reg(env, mem_reg, size_reg, reg_or_arg, BPF_WRITE, true, &meta);

Should check_mem_size_reg() still call mark_chain_precision() with
the actual register number for the kfunc path? It looks like the
precision marking was unintentionally dropped rather than being
intentionally skipped. The later commit 2d66a554f1b5 ("bpf: Support
stack arguments for kfunc calls") adds "if (regno > 0)" guards for
mark_chain_precision in check_kfunc_args() but does not address
this case in check_mem_size_reg().


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24299298635


* Re: [PATCH bpf-next v4 15/18] bpf,x86: Implement JIT support for stack arguments
  2026-04-12  5:00 ` [PATCH bpf-next v4 15/18] bpf,x86: Implement JIT support for stack arguments Yonghong Song
@ 2026-04-12  5:43   ` bot+bpf-ci
  0 siblings, 0 replies; 27+ messages in thread
From: bot+bpf-ci @ 2026-04-12  5:43 UTC (permalink / raw)
  To: yonghong.song, bpf
  Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast,
	andrii, daniel, martin.lau, eddyz87, yonghong.song, clm,
	ihor.solodrai

[-- Attachment #1: Type: text/plain, Size: 2892 bytes --]

> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index 32864dbc2c4e..ec57b9a6b417 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c

[ ... ]

> @@ -1664,16 +1699,48 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image

[ ... ]

> +	/*
> +	 * The verifier guarantees that neither tail_call_reachable nor
> +	 * priv_stack is set when outgoing stack args exist, so R9 is
> +	 * always available.

[ ... ]

> @@ -2243,6 +2377,18 @@ st:			if (is_imm8(insn->off))
>  		case BPF_LDX | BPF_PROBE_MEMSX | BPF_H:
>  		case BPF_LDX | BPF_PROBE_MEMSX | BPF_W:
>  			insn_off = insn->off;
> +			if (adjust_stack_arg_off) {
> +				if (insn_off == 8) {
> +					/* Incoming arg 6: read from r9 */
> +					EMIT_mov(dst_reg, X86_REG_R9);
> +					break;
> +				}

[ ... ]

> @@ -3757,7 +3907,13 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
>  		prog->aux->jit_data = jit_data;
>  	}
>  	priv_stack_ptr = prog->aux->priv_stack_ptr;
> -	if (!priv_stack_ptr && prog->aux->jits_use_priv_stack) {
> +	/*
> +	 * x86-64 uses R9 for both private stack frame pointer and
> +	 * outgoing arg 6, so disable private stack when outgoing
> +	 * stack args are present.
> +	 */
> +	if (!priv_stack_ptr && prog->aux->jits_use_priv_stack &&
> +	    prog->aux->stack_arg_depth == prog->aux->incoming_stack_arg_depth) {

This condition disables priv_stack when outgoing stack args are present,
but R9 is also used for incoming arg 6. Could a subprog with incoming
stack args (incoming_stack_arg_depth > 0) and no outgoing stack args
still get priv_stack allocated here?

The condition stack_arg_depth == incoming_stack_arg_depth is true when
outgoing is zero, regardless of the incoming depth. When priv_stack is
allocated, emit_priv_frame_ptr() in the prologue overwrites R9:

  emit_mov_imm64(&prog, X86_REG_R9, ...priv_frame_ptr...);

Later, the LDX handler for incoming arg 6 reads R9 expecting the
caller's 6th argument:

  if (insn_off == 8) {
      EMIT_mov(dst_reg, X86_REG_R9);
      break;
  }

For a subprog with 6+ args and >= 64 bytes of stack (triggering
PRIV_STACK_ADAPTIVE in check_max_stack_depth_subprog()), R9 would
contain the private stack frame pointer instead of the incoming arg 6
value.

The comment also says "outgoing arg 6" but should this also mention
incoming arg 6, since both use R9?

Would changing the condition to check that stack_arg_depth is zero
(no stack args at all) be the right fix here?

>  		/* Allocate actual private stack size with verifier-calculated
>  		 * stack size plus two memory guards to protect overflow and
>  		 * underflow.


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24299298635


* Re: [PATCH bpf-next v4 09/18] bpf: Support stack arguments for bpf functions
  2026-04-12  4:59 ` [PATCH bpf-next v4 09/18] bpf: Support stack arguments for bpf functions Yonghong Song
@ 2026-04-12  5:43   ` bot+bpf-ci
  0 siblings, 0 replies; 27+ messages in thread
From: bot+bpf-ci @ 2026-04-12  5:43 UTC (permalink / raw)
  To: yonghong.song, bpf
  Cc: ast, andrii, daniel, jose.marchesi, kernel-team, martin.lau, ast,
	andrii, daniel, martin.lau, eddyz87, yonghong.song, clm,
	ihor.solodrai

[-- Attachment #1: Type: text/plain, Size: 1813 bytes --]

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 01df990f841a..e664d924e8d4 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c

[ ... ]

> @@ -22383,11 +22682,11 @@ static int check_and_resolve_insns(struct bpf_verifier_env *env)
>  		return err;
>
>  	for (i = 0; i < insn_cnt; i++, insn++) {
> -		if (insn->dst_reg >= MAX_BPF_REG) {
> +		if (insn->dst_reg >= MAX_BPF_REG && insn->dst_reg != BPF_REG_STACK_ARG_BASE) {
>  			verbose(env, "R%d is invalid\n", insn->dst_reg);
>  			return -EINVAL;
>  		}
> -		if (insn->src_reg >= MAX_BPF_REG) {
> +		if (insn->src_reg >= MAX_BPF_REG && insn->src_reg != BPF_REG_STACK_ARG_BASE) {
>  			verbose(env, "R%d is invalid\n", insn->src_reg);
>  			return -EINVAL;
>  		}

This allows r12 (BPF_REG_STACK_ARG_BASE) through the register
validation for all instruction classes, but only BPF_LDX, BPF_STX,
and BPF_ST have explicit early-return handling for r12. What happens
if a BPF program uses r12 as src_reg or dst_reg in an ALU or JMP
instruction?

As far as I can tell, the path would be:

  check_alu_op() or check_cond_jmp_op()
    check_reg_arg(env, 12, ...)
      __check_reg_arg(env, regs, 12, ...)
        reg = &regs[12];

The regs array is struct bpf_reg_state regs[MAX_BPF_REG] with
MAX_BPF_REG = 11 (indices 0-10), so regs[12] is an out-of-bounds
access into whatever follows the regs array in bpf_func_state.

Would it make sense to restrict the r12 exemption here to only the
instruction classes that actually handle it (BPF_LDX, BPF_STX,
BPF_ST), rather than allowing it globally?


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24299298635


end of thread, other threads:[~2026-04-12  5:43 UTC | newest]

Thread overview: 27+ messages
2026-04-12  4:58 [PATCH bpf-next v4 00/18] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
2026-04-12  4:58 ` [PATCH bpf-next v4 01/18] bpf: Remove unused parameter from check_map_kptr_access() Yonghong Song
2026-04-12  4:58 ` [PATCH bpf-next v4 02/18] bpf: Change from "arg #%d" to "arg#%d" in verifier log Yonghong Song
2026-04-12  4:58 ` [PATCH bpf-next v4 03/18] bpf: Refactor to avoid redundant calculation of bpf_reg_state Yonghong Song
2026-04-12  5:31   ` bot+bpf-ci
2026-04-12  4:58 ` [PATCH bpf-next v4 04/18] bpf: Refactor to handle memory and size together Yonghong Song
2026-04-12  5:31   ` bot+bpf-ci
2026-04-12  4:58 ` [PATCH bpf-next v4 05/18] bpf: Change some regno type from u32 to int type Yonghong Song
2026-04-12  4:58 ` [PATCH bpf-next v4 06/18] bpf: Use argument index instead of register index in kfunc verifier logs Yonghong Song
2026-04-12  5:43   ` bot+bpf-ci
2026-04-12  4:59 ` [PATCH bpf-next v4 07/18] bpf: Introduce bpf register BPF_REG_STACK_ARG_BASE Yonghong Song
2026-04-12  4:59 ` [PATCH bpf-next v4 08/18] bpf: Reuse MAX_BPF_FUNC_ARGS for maximum number of arguments Yonghong Song
2026-04-12  4:59 ` [PATCH bpf-next v4 09/18] bpf: Support stack arguments for bpf functions Yonghong Song
2026-04-12  5:43   ` bot+bpf-ci
2026-04-12  5:00 ` [PATCH bpf-next v4 10/18] bpf: Fix interaction between stack argument PTR_TO_STACK and dead slot poisoning Yonghong Song
2026-04-12  5:43   ` bot+bpf-ci
2026-04-12  5:00 ` [PATCH bpf-next v4 11/18] bpf: Reject stack arguments in non-JITed programs Yonghong Song
2026-04-12  5:00 ` [PATCH bpf-next v4 12/18] bpf: Reject stack arguments if tail call reachable Yonghong Song
2026-04-12  5:43   ` bot+bpf-ci
2026-04-12  5:00 ` [PATCH bpf-next v4 13/18] bpf: Support stack arguments for kfunc calls Yonghong Song
2026-04-12  5:43   ` bot+bpf-ci
2026-04-12  5:00 ` [PATCH bpf-next v4 14/18] bpf: Enable stack argument support for x86_64 Yonghong Song
2026-04-12  5:00 ` [PATCH bpf-next v4 15/18] bpf,x86: Implement JIT support for stack arguments Yonghong Song
2026-04-12  5:43   ` bot+bpf-ci
2026-04-12  5:00 ` [PATCH bpf-next v4 16/18] selftests/bpf: Add tests for BPF function " Yonghong Song
2026-04-12  5:00 ` [PATCH bpf-next v4 17/18] selftests/bpf: Add negative test for greater-than-8-byte kfunc stack argument Yonghong Song
2026-04-12  5:00 ` [PATCH bpf-next v4 18/18] selftests/bpf: Add verifier tests for stack argument validation Yonghong Song
