From: Yonghong Song <yonghong.song@linux.dev>
To: bpf@vger.kernel.org
Cc: Alexei Starovoitov <ast@kernel.org>,
	Andrii Nakryiko <andrii@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	"Jose E . Marchesi" <jose.marchesi@oracle.com>,
	kernel-team@fb.com, Martin KaFai Lau <martin.lau@kernel.org>
Subject: [PATCH bpf-next v4 21/25] selftests/bpf: Add verifier tests for stack argument validation
Date: Tue, 12 May 2026 21:51:43 -0700	[thread overview]
Message-ID: <20260513045143.2399278-1-yonghong.song@linux.dev> (raw)
In-Reply-To: <20260513044949.2382019-1-yonghong.song@linux.dev>

Add inline-asm based verifier tests that exercise stack argument
validation logic directly.
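
For reference, the convention these tests exercise passes args 1-5 in
r1-r5 and spills arg N (N >= 6) to the outgoing slot at r11 - 8*(N-5),
so arg6 lands at r11-8 and arg7 at r11-16. A minimal C sketch of that
mapping (stack_arg_offset is a hypothetical helper for illustration,
not part of this series):

```c
#include <assert.h>

/* Hypothetical helper: args 1-5 travel in r1-r5; the caller stores
 * arg N (N >= 6) at r11 - 8 * (N - 5), matching the r11-8/r11-16
 * stores in the tests below.
 */
static int stack_arg_offset(int arg_idx)
{
	assert(arg_idx >= 6);
	return -8 * (arg_idx - 5);
}
```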

Positive tests:
  - Subprog call with 6 args
  - Two sequential calls to different subprogs (6-arg and 7-arg)
  - Shared r11 store serving both branches of a conditional

Negative tests (verifier rejection):
  - Read from an uninitialized incoming stack arg slot
  - Gap in outgoing slots: only r11-16 written, r11-8 missing
  - Write at r11-80, exceeding the maximum of 7 stack args
  - Write to an unused slot (r11-40) that no callee requires
  - Missing store on one branch when a store is shared
  - First call passes proper stack arguments; the second call
    tries to inherit them, which must be rejected
  - r11 load after an r11 store or call (ordering violation)

Negative tests (pointer/ref tracking):
  - Pruning type mismatch: one branch stores PTR_TO_STACK, the
    other a scalar, and the callee dereferences; states must not
    be pruned
  - Release invalidation: bpf_sk_release invalidates a socket
    pointer stored in a stack arg slot
  - Packet pointer invalidation: bpf_skb_pull_data invalidates
    a packet pointer stored in a stack arg slot
  - Null propagation: PTR_TO_MAP_VALUE_OR_NULL stored in a stack
    arg slot; the null branch attempts a dereference via the callee
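
The pointer/ref-tracking cases above all reduce to one rule: a stack
arg slot that held a pointer must be invalidated once the object it
refers to goes away. A toy model of that bookkeeping (the enum and
struct names are illustrative only, not the verifier's actual data
structures):

```c
#include <assert.h>

/* Toy model of per-slot verifier state; names are illustrative. */
enum slot_type { SLOT_SCALAR, SLOT_PTR, SLOT_INVALID };

struct arg_slot { enum slot_type type; };

/* Releasing the referenced object invalidates any slot that still
 * holds a pointer to it.
 */
static void release_ref(struct arg_slot *s)
{
	if (s->type == SLOT_PTR)
		s->type = SLOT_INVALID;
}

/* A callee may only dereference a slot that still holds a valid
 * pointer.
 */
static int deref_ok(const struct arg_slot *s)
{
	return s->type == SLOT_PTR;
}
```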

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
---
 .../selftests/bpf/prog_tests/verifier.c       |   4 +
 .../bpf/progs/btf__verifier_stack_arg_order.c |  40 ++
 .../selftests/bpf/progs/verifier_stack_arg.c  | 444 ++++++++++++++++++
 .../bpf/progs/verifier_stack_arg_order.c      | 126 +++++
 4 files changed, 614 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/btf__verifier_stack_arg_order.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_stack_arg.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_stack_arg_order.c

diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index a96b25ebff23..ee3d929fac8a 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -91,6 +91,8 @@
 #include "verifier_sockmap_mutate.skel.h"
 #include "verifier_spill_fill.skel.h"
 #include "verifier_spin_lock.skel.h"
+#include "verifier_stack_arg.skel.h"
+#include "verifier_stack_arg_order.skel.h"
 #include "verifier_stack_ptr.skel.h"
 #include "verifier_store_release.skel.h"
 #include "verifier_subprog_precision.skel.h"
@@ -238,6 +240,8 @@ void test_verifier_sock_addr(void)            { RUN(verifier_sock_addr); }
 void test_verifier_sockmap_mutate(void)       { RUN(verifier_sockmap_mutate); }
 void test_verifier_spill_fill(void)           { RUN(verifier_spill_fill); }
 void test_verifier_spin_lock(void)            { RUN(verifier_spin_lock); }
+void test_verifier_stack_arg(void)            { RUN(verifier_stack_arg); }
+void test_verifier_stack_arg_order(void)      { RUN(verifier_stack_arg_order); }
 void test_verifier_stack_ptr(void)            { RUN(verifier_stack_ptr); }
 void test_verifier_store_release(void)        { RUN(verifier_store_release); }
 void test_verifier_subprog_precision(void)    { RUN(verifier_subprog_precision); }
diff --git a/tools/testing/selftests/bpf/progs/btf__verifier_stack_arg_order.c b/tools/testing/selftests/bpf/progs/btf__verifier_stack_arg_order.c
new file mode 100644
index 000000000000..83692570d5bc
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/btf__verifier_stack_arg_order.c
@@ -0,0 +1,40 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+
+#if defined(__TARGET_ARCH_x86) && defined(__BPF_FEATURE_STACK_ARGUMENT)
+
+int subprog_bad_order_6args(int a, int b, int c, int d, int e, int f)
+{
+	return a + b + c + d + e + f;
+}
+
+int subprog_call_before_load_6args(int a, int b, int c, int d, int e, int f)
+{
+	return a + b + c + d + e + f;
+}
+
+int subprog_pruning_call_before_load_6args(int a, int b, int c, int d, int e, int f)
+{
+	return a + b + c + d + e + f;
+}
+
+#else
+
+int subprog_bad_order_6args(void)
+{
+	return 0;
+}
+
+int subprog_call_before_load_6args(void)
+{
+	return 0;
+}
+
+int subprog_pruning_call_before_load_6args(void)
+{
+	return 0;
+}
+
+#endif
diff --git a/tools/testing/selftests/bpf/progs/verifier_stack_arg.c b/tools/testing/selftests/bpf/progs/verifier_stack_arg.c
new file mode 100644
index 000000000000..6587bf912bc0
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_stack_arg.c
@@ -0,0 +1,444 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+struct {
+	__uint(type, BPF_MAP_TYPE_HASH);
+	__uint(max_entries, 1);
+	__type(key, long long);
+	__type(value, long long);
+} map_hash_8b SEC(".maps");
+
+#if defined(__TARGET_ARCH_x86) && defined(__BPF_FEATURE_STACK_ARGUMENT)
+
+__noinline __used
+static int subprog_6args(int a, int b, int c, int d, int e, int f)
+{
+	return a + b + c + d + e + f;
+}
+
+__noinline __used
+static int subprog_7args(int a, int b, int c, int d, int e, int f, int g)
+{
+	return a + b + c + d + e + f + g;
+}
+
+__noinline __used
+static long subprog_deref_arg6(long a, long b, long c, long d, long e, long *f)
+{
+	return *f;
+}
+
+SEC("tc")
+__description("stack_arg: subprog with 6 args")
+__success __retval(21)
+__naked void stack_arg_6args(void)
+{
+	asm volatile (
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		"*(u64 *)(r11 - 8) = 6;"
+		"call subprog_6args;"
+		"exit;"
+		::: __clobber_all
+	);
+}
+
+SEC("tc")
+__description("stack_arg: two subprogs with >5 args")
+__success __retval(90)
+__naked void stack_arg_two_subprogs(void)
+{
+	asm volatile (
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		"*(u64 *)(r11 - 8) = 10;"
+		"call subprog_6args;"
+		"r6 = r0;"
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		"*(u64 *)(r11 - 16) = 30;"
+		"*(u64 *)(r11 - 8) = 20;"
+		"call subprog_7args;"
+		"r0 += r6;"
+		"exit;"
+		::: __clobber_all
+	);
+}
+
+SEC("tc")
+__description("stack_arg: read from uninitialized stack arg slot")
+__failure
+__msg("invalid read from stack arg off 8 depth 0")
+__naked void stack_arg_read_uninitialized(void)
+{
+	asm volatile (
+		"r0 = *(u64 *)(r11 + 8);"
+		"r0 = 0;"
+		"exit;"
+		::: __clobber_all
+	);
+}
+
+SEC("tc")
+__description("stack_arg: gap at offset -8, only wrote -16")
+__failure
+__msg("callee expects 7 args, stack arg1 is not initialized")
+__naked void stack_arg_gap_at_minus8(void)
+{
+	asm volatile (
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		"*(u64 *)(r11 - 16) = 30;"
+		"call subprog_7args;"
+		"exit;"
+		::: __clobber_all
+	);
+}
+
+SEC("tc")
+__description("stack_arg: pruning with different stack arg types")
+__failure
+__flag(BPF_F_TEST_STATE_FREQ)
+__msg("R{{[0-9]}} invalid mem access 'scalar'")
+__naked void stack_arg_pruning_type_mismatch(void)
+{
+	asm volatile (
+		"call %[bpf_get_prandom_u32];"
+		"r6 = r0;"
+		/* local = 0 on program stack */
+		"r7 = 0;"
+		"*(u64 *)(r10 - 8) = r7;"
+		/* Branch based on random value */
+		"if r6 s> 3 goto l0_%=;"
+		/* Path 1: store stack pointer to outgoing arg6 */
+		"r1 = r10;"
+		"r1 += -8;"
+		"*(u64 *)(r11 - 8) = r1;"
+		"goto l1_%=;"
+	"l0_%=:"
+		/* Path 2: store scalar to outgoing arg6 */
+		"*(u64 *)(r11 - 8) = 42;"
+	"l1_%=:"
+		/* Call subprog that dereferences arg6 */
+		"r1 = r6;"
+		"r2 = 0;"
+		"r3 = 0;"
+		"r4 = 0;"
+		"r5 = 0;"
+		"call subprog_deref_arg6;"
+		"exit;"
+		:: __imm(bpf_get_prandom_u32)
+		: __clobber_all
+	);
+}
+
+SEC("tc")
+__description("stack_arg: release_reference invalidates stack arg slot")
+__failure
+__msg("R{{[0-9]}} !read_ok")
+__naked void stack_arg_release_ref(void)
+{
+	asm volatile (
+		"r6 = r1;"
+		/* struct bpf_sock_tuple tuple = {} */
+		"r2 = 0;"
+		"*(u32 *)(r10 - 8) = r2;"
+		"*(u64 *)(r10 - 16) = r2;"
+		"*(u64 *)(r10 - 24) = r2;"
+		"*(u64 *)(r10 - 32) = r2;"
+		"*(u64 *)(r10 - 40) = r2;"
+		"*(u64 *)(r10 - 48) = r2;"
+		/* sk = bpf_sk_lookup_tcp(ctx, &tuple, sizeof(tuple), 0, 0) */
+		"r1 = r6;"
+		"r2 = r10;"
+		"r2 += -48;"
+		"r3 = %[sizeof_bpf_sock_tuple];"
+		"r4 = 0;"
+		"r5 = 0;"
+		"call %[bpf_sk_lookup_tcp];"
+		/* r0 = sk (PTR_TO_SOCKET_OR_NULL) */
+		"if r0 == 0 goto l0_%=;"
+		/* Store sock ref to outgoing arg6 slot */
+		"*(u64 *)(r11 - 8) = r0;"
+		/* Release the reference; this invalidates the stack arg slot */
+		"r1 = r0;"
+		"call %[bpf_sk_release];"
+		/* Call subprog that dereferences arg6, which should fail */
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		"call subprog_deref_arg6;"
+	"l0_%=:"
+		"r0 = 0;"
+		"exit;"
+		:
+		: __imm(bpf_sk_lookup_tcp),
+		  __imm(bpf_sk_release),
+		  __imm_const(sizeof_bpf_sock_tuple, sizeof(struct bpf_sock_tuple))
+		: __clobber_all
+	);
+}
+
+SEC("tc")
+__description("stack_arg: pkt pointer in stack arg slot invalidated after pull_data")
+__failure
+__msg("R{{[0-9]}} !read_ok")
+__naked void stack_arg_stale_pkt_ptr(void)
+{
+	asm volatile (
+		"r6 = r1;"
+		"r7 = *(u32 *)(r6 + %[__sk_buff_data]);"
+		"r8 = *(u32 *)(r6 + %[__sk_buff_data_end]);"
+		/* check pkt has at least 8 bytes */
+		"r0 = r7;"
+		"r0 += 8;"
+		"if r0 > r8 goto l0_%=;"
+		/* Store valid pkt pointer to outgoing arg6 slot */
+		"*(u64 *)(r11 - 8) = r7;"
+		/* bpf_skb_pull_data invalidates all pkt pointers */
+		"r1 = r6;"
+		"r2 = 0;"
+		"call %[bpf_skb_pull_data];"
+		/* Call subprog that dereferences arg6, which should fail */
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		"call subprog_deref_arg6;"
+	"l0_%=:"
+		"r0 = 0;"
+		"exit;"
+		:
+		: __imm(bpf_skb_pull_data),
+		  __imm_const(__sk_buff_data, offsetof(struct __sk_buff, data)),
+		  __imm_const(__sk_buff_data_end, offsetof(struct __sk_buff, data_end))
+		: __clobber_all
+	);
+}
+
+SEC("tc")
+__description("stack_arg: null propagation rejects deref on null branch")
+__failure
+__msg("R{{[0-9]}} invalid mem access 'scalar'")
+__naked void stack_arg_null_propagation_fail(void)
+{
+	asm volatile (
+		"r1 = 0;"
+		"*(u64 *)(r10 - 8) = r1;"
+		/* r0 = bpf_map_lookup_elem(&map_hash_8b, &key) */
+		"r2 = r10;"
+		"r2 += -8;"
+		"r1 = %[map_hash_8b] ll;"
+		"call %[bpf_map_lookup_elem];"
+		/* Store PTR_TO_MAP_VALUE_OR_NULL to outgoing arg6 slot */
+		"*(u64 *)(r11 - 8) = r0;"
+		/* null check on r0 */
+		"if r0 != 0 goto l0_%=;"
+		/*
+		 * On null branch, outgoing slot is SCALAR(0).
+		 * Call subprog that dereferences arg6, which should fail.
+		 */
+		"r1 = 0;"
+		"r2 = 0;"
+		"r3 = 0;"
+		"r4 = 0;"
+		"r5 = 0;"
+		"call subprog_deref_arg6;"
+	"l0_%=:"
+		"r0 = 0;"
+		"exit;"
+		:
+		: __imm(bpf_map_lookup_elem),
+		  __imm_addr(map_hash_8b)
+		: __clobber_all
+	);
+}
+
+SEC("tc")
+__description("stack_arg: missing store on one branch")
+__failure
+__msg("callee expects 7 args, stack arg1 is not initialized")
+__naked void stack_arg_missing_store_one_branch(void)
+{
+	asm volatile (
+		"call %[bpf_get_prandom_u32];"
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		/* Write arg7 (r11-16) before branch */
+		"*(u64 *)(r11 - 16) = 20;"
+		"if r0 > 0 goto l0_%=;"
+		/* Path 1: write arg6 and call */
+		"*(u64 *)(r11 - 8) = 10;"
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		"call subprog_7args;"
+		"goto l1_%=;"
+	"l0_%=:"
+		/* Path 2: missing arg6 store, call should fail */
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		"call subprog_7args;"
+	"l1_%=:"
+		"r0 = 0;"
+		"exit;"
+		:: __imm(bpf_get_prandom_u32)
+		: __clobber_all
+	);
+}
+
+SEC("tc")
+__description("stack_arg: share a store for both branches")
+__success __retval(0)
+__naked void stack_arg_shared_store(void)
+{
+	asm volatile (
+		"call %[bpf_get_prandom_u32];"
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		/* Write arg7 (r11-16) before branch */
+		"*(u64 *)(r11 - 16) = 20;"
+		"if r0 > 0 goto l0_%=;"
+		/* Path 1: write arg6 and call */
+		"*(u64 *)(r11 - 8) = 10;"
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		"call subprog_7args;"
+		"goto l1_%=;"
+	"l0_%=:"
+		/* Path 2: also write arg6 and call */
+		"*(u64 *)(r11 - 8) = 30;"
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		"call subprog_7args;"
+	"l1_%=:"
+		"r0 = 0;"
+		"exit;"
+		:: __imm(bpf_get_prandom_u32)
+		: __clobber_all
+	);
+}
+
+SEC("tc")
+__description("stack_arg: write beyond max outgoing depth")
+__failure
+__msg("stack arg write offset -80 exceeds max 7 stack args")
+__naked void stack_arg_write_beyond_max(void)
+{
+	asm volatile (
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		/* Write to offset -80, way beyond any callee's needs */
+		"*(u64 *)(r11 - 80) = 99;"
+		"*(u64 *)(r11 - 16) = 20;"
+		"*(u64 *)(r11 - 8) = 10;"
+		"call subprog_7args;"
+		"r0 = 0;"
+		"exit;"
+		::: __clobber_all
+	);
+}
+
+SEC("tc")
+__description("stack_arg: write unused stack arg slot")
+__failure
+__msg("func#0 writes 5 stack arg slots, but calls only require 2")
+__naked void stack_arg_write_unused_slot(void)
+{
+	asm volatile (
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		/* Write to offset -40, unused for the callee */
+		"*(u64 *)(r11 - 40) = 99;"
+		"*(u64 *)(r11 - 16) = 20;"
+		"*(u64 *)(r11 - 8) = 10;"
+		"call subprog_7args;"
+		"r0 = 0;"
+		"exit;"
+		::: __clobber_all
+	);
+}
+
+SEC("tc")
+__description("stack_arg: sequential calls reuse slots")
+__failure
+__msg("callee expects 7 args, stack arg1 is not initialized")
+__naked void stack_arg_sequential_calls(void)
+{
+	asm volatile (
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		"*(u64 *)(r11 - 8) = 6;"
+		"*(u64 *)(r11 - 16) = 7;"
+		"call subprog_7args;"
+		"r6 = r0;"
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		"call subprog_7args;"
+		"r0 += r6;"
+		"exit;"
+		::: __clobber_all
+	);
+}
+
+#else
+
+SEC("socket")
+__description("stack_arg is not supported by compiler or jit, use a dummy test")
+__success
+int dummy_test(void)
+{
+	return 0;
+}
+
+#endif
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/verifier_stack_arg_order.c b/tools/testing/selftests/bpf/progs/verifier_stack_arg_order.c
new file mode 100644
index 000000000000..938f4a2f5482
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_stack_arg_order.c
@@ -0,0 +1,126 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 Meta Platforms, Inc. and affiliates. */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+#if defined(__TARGET_ARCH_x86) && defined(__BPF_FEATURE_STACK_ARGUMENT)
+
+__noinline __used __naked
+static int subprog_bad_order_6args(int a, int b, int c, int d, int e, int f)
+{
+	asm volatile (
+		"*(u64 *)(r11 - 8) = r1;"
+		"r0 = *(u64 *)(r11 + 8);"
+		"exit;"
+		::: __clobber_all
+	);
+}
+
+SEC("tc")
+__description("stack_arg: r11 load after r11 store")
+__failure
+__msg("r11 load must be before any r11 store or call insn")
+__btf_func_path("btf__verifier_stack_arg_order.bpf.o")
+__naked void stack_arg_load_after_store(void)
+{
+	asm volatile (
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		"*(u64 *)(r11 - 8) = 6;"
+		"call subprog_bad_order_6args;"
+		"exit;"
+		::: __clobber_all
+	);
+}
+
+__noinline __used __naked
+static int subprog_call_before_load_6args(int a, int b, int c, int d, int e,
+					  int f)
+{
+	asm volatile (
+		"call %[bpf_get_prandom_u32];"
+		"r0 = *(u64 *)(r11 + 8);"
+		"exit;"
+		:: __imm(bpf_get_prandom_u32)
+		: __clobber_all
+	);
+}
+
+SEC("tc")
+__description("stack_arg: r11 load after a call")
+__failure
+__msg("r11 load must be before any r11 store or call insn")
+__btf_func_path("btf__verifier_stack_arg_order.bpf.o")
+__naked void stack_arg_load_after_call(void)
+{
+	asm volatile (
+		"r1 = 1;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		"*(u64 *)(r11 - 8) = 6;"
+		"call subprog_call_before_load_6args;"
+		"exit;"
+		::: __clobber_all
+	);
+}
+
+__noinline __used __naked
+static int subprog_pruning_call_before_load_6args(int a, int b, int c, int d,
+						  int e, int f)
+{
+	asm volatile (
+		"if r1 s> 0 goto l0_%=;"
+		"goto l1_%=;"
+	"l0_%=:"
+		"call %[bpf_get_prandom_u32];"
+	"l1_%=:"
+		"r0 = *(u64 *)(r11 + 8);"
+		"exit;"
+		:: __imm(bpf_get_prandom_u32)
+		: __clobber_all
+	);
+}
+
+SEC("tc")
+__description("stack_arg: pruning keeps r11 load ordering")
+__failure
+__flag(BPF_F_TEST_STATE_FREQ)
+__msg("r11 load must be before any r11 store or call insn")
+__btf_func_path("btf__verifier_stack_arg_order.bpf.o")
+__naked void stack_arg_pruning_load_after_call(void)
+{
+	asm volatile (
+		"call %[bpf_get_prandom_u32];"
+		"r1 = r0;"
+		"r2 = 2;"
+		"r3 = 3;"
+		"r4 = 4;"
+		"r5 = 5;"
+		"*(u64 *)(r11 - 8) = 6;"
+		"call subprog_pruning_call_before_load_6args;"
+		"exit;"
+		:: __imm(bpf_get_prandom_u32)
+		: __clobber_all
+	);
+}
+
+#else
+
+SEC("socket")
+__description("stack_arg order is not supported by compiler or jit, use a dummy test")
+__success
+int dummy_test(void)
+{
+	return 0;
+}
+
+#endif
+
+char _license[] SEC("license") = "GPL";
-- 
2.53.0-Meta



Thread overview: 43+ messages
2026-05-13  4:49 [PATCH bpf-next v4 00/25] bpf: Support stack arguments for BPF functions and kfuncs Yonghong Song
2026-05-13  4:49 ` [PATCH bpf-next v4 01/25] bpf: Convert bpf_get_spilled_reg macro to static inline function Yonghong Song
2026-05-13  4:50 ` [PATCH bpf-next v4 02/25] bpf: Remove copy_register_state wrapper function Yonghong Song
2026-05-13  4:50 ` [PATCH bpf-next v4 03/25] bpf: Add helper functions for r11-based stack argument insns Yonghong Song
2026-05-13  4:50 ` [PATCH bpf-next v4 04/25] bpf: Set sub->arg_cnt earlier in btf_prepare_func_args() Yonghong Song
2026-05-13  4:50 ` [PATCH bpf-next v4 05/25] bpf: Support stack arguments for bpf functions Yonghong Song
2026-05-14 10:46   ` sashiko-bot
2026-05-14 16:07     ` Yonghong Song
2026-05-13  4:50 ` [PATCH bpf-next v4 06/25] bpf: Refactor jmp history to use dedicated spi/frame fields Yonghong Song
2026-05-13  4:50 ` [PATCH bpf-next v4 07/25] bpf: Add precision marking and backtracking for stack argument slots Yonghong Song
2026-05-13  5:44   ` bot+bpf-ci
2026-05-13  4:50 ` [PATCH bpf-next v4 08/25] bpf: Refactor record_call_access() to extract per-arg logic Yonghong Song
2026-05-13  4:50 ` [PATCH bpf-next v4 09/25] bpf: Use arg_is_fp() in has_fp_args() Yonghong Song
2026-05-13  4:50 ` [PATCH bpf-next v4 10/25] bpf: Extend liveness analysis to track stack argument slots Yonghong Song
2026-05-13  5:44   ` bot+bpf-ci
2026-05-14 22:53   ` sashiko-bot
2026-05-13  4:50 ` [PATCH bpf-next v4 11/25] bpf: Reject stack arguments in non-JITed programs Yonghong Song
2026-05-13  5:33   ` bot+bpf-ci
2026-05-14 23:59   ` sashiko-bot
2026-05-13  4:50 ` [PATCH bpf-next v4 12/25] bpf: Prepare architecture JIT support for stack arguments Yonghong Song
2026-05-13  5:33   ` bot+bpf-ci
2026-05-15  0:30   ` sashiko-bot
2026-05-13  4:50 ` [PATCH bpf-next v4 13/25] bpf: Enable r11 based insns Yonghong Song
2026-05-13  4:51 ` [PATCH bpf-next v4 14/25] bpf: Support stack arguments for kfunc calls Yonghong Song
2026-05-13  4:51 ` [PATCH bpf-next v4 15/25] bpf: Reject stack arguments if tail call reachable Yonghong Song
2026-05-13  5:33   ` bot+bpf-ci
2026-05-15  3:23   ` sashiko-bot
2026-05-13  4:51 ` [PATCH bpf-next v4 16/25] bpf: Disable private stack for x86_64 if stack arguments used Yonghong Song
2026-05-13  5:33   ` bot+bpf-ci
2026-05-13  4:51 ` [PATCH bpf-next v4 17/25] bpf,x86: Implement JIT support for stack arguments Yonghong Song
2026-05-13  4:51 ` [PATCH bpf-next v4 18/25] selftests/bpf: Add tests for BPF function " Yonghong Song
2026-05-13  4:51 ` [PATCH bpf-next v4 19/25] selftests/bpf: Add tests for stack argument validation Yonghong Song
2026-05-13  4:51 ` [PATCH bpf-next v4 20/25] selftests/bpf: Add BTF fixup for __naked subprog parameter names Yonghong Song
2026-05-13  4:51 ` Yonghong Song [this message]
2026-05-13  4:51 ` [PATCH bpf-next v4 22/25] selftests/bpf: Add precision backtracking test for stack arguments Yonghong Song
2026-05-13  4:51 ` [PATCH bpf-next v4 23/25] bpf, arm64: Map BPF_REG_0 to x8 instead of x7 Yonghong Song
2026-05-13  4:51 ` [PATCH bpf-next v4 24/25] bpf, arm64: Add JIT support for stack arguments Yonghong Song
2026-05-13  4:52 ` [PATCH bpf-next v4 25/25] selftests/bpf: Enable stack argument tests for arm64 Yonghong Song
2026-05-13 16:33 ` [PATCH bpf-next v4 00/25] bpf: Support stack arguments for BPF functions and kfuncs Alexei Starovoitov
2026-05-13 17:41   ` Yonghong Song
2026-05-13 17:51     ` Alexei Starovoitov
2026-05-13 18:11       ` Yonghong Song
2026-05-13 16:40 ` patchwork-bot+netdevbpf
