* [PATCH bpf-next v3 0/2] bpf: Add multi-level pointer parameter support for trampolines
@ 2026-02-23 8:31 Slava Imameev
2026-02-23 8:31 ` [PATCH bpf-next v3 1/2] bpf: Support multi-level pointer params via SCALAR_VALUE " Slava Imameev
2026-02-23 8:31 ` [PATCH bpf-next v3 2/2] selftests/bpf: Add trampolines multi-level pointer params test coverage Slava Imameev
0 siblings, 2 replies; 8+ messages in thread
From: Slava Imameev @ 2026-02-23 8:31 UTC (permalink / raw)
To: ast, daniel, andrii
Cc: martin.lau, eddyz87, song, yonghong.song, john.fastabend, kpsingh,
sdf, haoluo, jolsa, davem, edumazet, kuba, pabeni, horms, shuah,
linux-kernel, bpf, netdev, linux-kselftest, linux-open-source,
Slava Imameev
This patch series adds BPF verifier support for multi-level pointer
parameters and return values in BPF trampolines. The implementation
treats such parameters as SCALAR_VALUE.
Background:
Prior to these changes, accessing multi-level pointer parameters or
return values through BPF trampoline context arrays resulted in
verification failures in btf_ctx_access, producing errors such as:
func '%s' arg%d type %s is not a struct
For example, consider a BPF program that logs an input parameter of type
struct posix_acl **:
SEC("fentry/__posix_acl_chmod")
int BPF_PROG(trace_posix_acl_chmod, struct posix_acl **ppacl, gfp_t gfp,
umode_t mode)
{
bpf_printk("__posix_acl_chmod ppacl = %px\n", ppacl);
return 0;
}
This program failed BPF verification with the following error:
libbpf: prog 'trace_posix_acl_chmod': -- BEGIN PROG LOAD LOG --
0: R1=ctx() R10=fp0
; int BPF_PROG(trace_posix_acl_chmod, struct posix_acl **ppacl,
gfp_t gfp, umode_t mode) @ posix_acl_monitor.bpf.c:23
0: (79) r6 = *(u64 *)(r1 +16) ; R1=ctx() R6_w=scalar()
1: (79) r1 = *(u64 *)(r1 +0)
func '__posix_acl_chmod' arg0 type PTR is not a struct
invalid bpf_context access off=0 size=8
processed 2 insns (limit 1000000) max_states_per_insn 0 total_states 0
peak_states 0 mark_read 0
-- END PROG LOAD LOG --
The common workaround involved using helper functions to fetch parameter
values by passing the address of the context array entry:
SEC("fentry/__posix_acl_chmod")
int BPF_PROG(trace_posix_acl_chmod, struct posix_acl **ppacl, gfp_t gfp,
umode_t mode)
{
struct posix_acl **pp;
bpf_probe_read_kernel(&pp, sizeof(pp), &ctx[0]);
bpf_printk("__posix_acl_chmod %px\n", pp);
return 0;
}
This approach introduced helper-call overhead and made multi-level
pointer parameters inconsistent with how other parameters are accessed.
Improvements:
With this patch, trampoline programs can directly access multi-level
pointer parameters, eliminating helper call overhead and explicit ctx
access while ensuring consistent parameter handling. For example, the
following ctx access with a helper call:
SEC("fentry/__posix_acl_chmod")
int BPF_PROG(trace_posix_acl_chmod, struct posix_acl **ppacl, gfp_t gfp,
umode_t mode)
{
struct posix_acl **pp;
bpf_probe_read_kernel(&pp, sizeof(pp), &ctx[0]);
bpf_printk("__posix_acl_chmod %px\n", pp);
...
}
is replaced by a load instruction:
SEC("fentry/__posix_acl_chmod")
int BPF_PROG(trace_posix_acl_chmod, struct posix_acl **ppacl, gfp_t gfp,
umode_t mode)
{
bpf_printk("__posix_acl_chmod %px\n", ppacl);
...
}
The bpf_core_cast macro can be used for deeper level dereferences,
as illustrated in the tests added by this patch.
v1 -> v2:
* corrected maintainer's email
v2 -> v3:
* Addressed reviewers' feedback:
* Changed the register type from PTR_TO_MEM to SCALAR_VALUE.
* Modified tests to accommodate SCALAR_VALUE handling.
* Fixed a compilation error for LoongArch
* https://lore.kernel.org/oe-kbuild-all/202602181710.tEK6nOl6-lkp@intel.com/
* Addressed AI bot review
* Added a comment to address a NULL pointer case
* Removed WARN_ON
* Fixed a comment
Slava Imameev (2):
bpf: Support multi-level pointer params via SCALAR_VALUE for
trampolines
selftests/bpf: Add trampolines multi-level pointer params test
coverage
kernel/bpf/btf.c | 20 +-
net/bpf/test_run.c | 130 ++++++
.../prog_tests/fentry_fexit_multi_level_ptr.c | 206 +++++++++
.../selftests/bpf/prog_tests/verifier.c | 2 +
.../progs/fentry_fexit_pptr_nullable_test.c | 56 +++
.../bpf/progs/fentry_fexit_pptr_test.c | 67 +++
.../bpf/progs/fentry_fexit_void_ppptr_test.c | 38 ++
.../bpf/progs/fentry_fexit_void_pptr_test.c | 71 +++
.../bpf/progs/verifier_ctx_multilevel_ptr.c | 435 ++++++++++++++++++
9 files changed, 1024 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/fentry_fexit_multi_level_ptr.c
create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_pptr_nullable_test.c
create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_pptr_test.c
create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_void_ppptr_test.c
create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_void_pptr_test.c
create mode 100644 tools/testing/selftests/bpf/progs/verifier_ctx_multilevel_ptr.c
--
2.34.1
^ permalink raw reply [flat|nested] 8+ messages in thread
* [PATCH bpf-next v3 1/2] bpf: Support multi-level pointer params via SCALAR_VALUE for trampolines
2026-02-23 8:31 [PATCH bpf-next v3 0/2] bpf: Add multi-level pointer parameter support for trampolines Slava Imameev
@ 2026-02-23 8:31 ` Slava Imameev
2026-02-23 9:06 ` bot+bpf-ci
2026-02-23 8:31 ` [PATCH bpf-next v3 2/2] selftests/bpf: Add trampolines multi-level pointer params test coverage Slava Imameev
1 sibling, 1 reply; 8+ messages in thread
From: Slava Imameev @ 2026-02-23 8:31 UTC (permalink / raw)
To: ast, daniel, andrii
Cc: martin.lau, eddyz87, song, yonghong.song, john.fastabend, kpsingh,
sdf, haoluo, jolsa, davem, edumazet, kuba, pabeni, horms, shuah,
linux-kernel, bpf, netdev, linux-kselftest, linux-open-source,
Slava Imameev
Add BPF verifier support for multi-level pointer parameters and return
values in BPF trampolines. The implementation treats these parameters as
SCALAR_VALUE.
Signed-off-by: Slava Imameev <slava.imameev@crowdstrike.com>
---
kernel/bpf/btf.c | 20 +++++++++++++++++++-
1 file changed, 19 insertions(+), 1 deletion(-)
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 7708958e3fb8..ebb1b0c3f993 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -760,6 +760,21 @@ const struct btf_type *btf_type_resolve_func_ptr(const struct btf *btf,
return NULL;
}
+static bool is_multilevel_ptr(const struct btf *btf, const struct btf_type *t)
+{
+ u32 depth = 0;
+
+ if (!btf_type_is_ptr(t))
+ return false;
+
+ do {
+ depth += 1;
+ t = btf_type_skip_modifiers(btf, t->type, NULL);
+ } while (btf_type_is_ptr(t) && depth < 2);
+
+ return depth > 1;
+}
+
/* Types that act only as a source, not sink or intermediate
* type when resolving.
*/
@@ -6905,8 +6920,11 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
/*
* If it's a pointer to void, it's the same as scalar from the verifier
* safety POV. Either way, no futher pointer walking is allowed.
+ * Multilevel pointers (e.g., int**, struct foo**, char***) of any type
+ * are treated as scalars because the verifier lacks the context to infer
+ * the size of their target memory regions.
*/
- if (is_void_or_int_ptr(btf, t))
+ if (is_void_or_int_ptr(btf, t) || is_multilevel_ptr(btf, t))
return true;
/* this is a pointer to another type */
--
2.34.1
^ permalink raw reply related [flat|nested] 8+ messages in thread
* [PATCH bpf-next v3 2/2] selftests/bpf: Add trampolines multi-level pointer params test coverage
2026-02-23 8:31 [PATCH bpf-next v3 0/2] bpf: Add multi-level pointer parameter support for trampolines Slava Imameev
2026-02-23 8:31 ` [PATCH bpf-next v3 1/2] bpf: Support multi-level pointer params via SCALAR_VALUE " Slava Imameev
@ 2026-02-23 8:31 ` Slava Imameev
2026-02-23 16:38 ` Alexei Starovoitov
1 sibling, 1 reply; 8+ messages in thread
From: Slava Imameev @ 2026-02-23 8:31 UTC (permalink / raw)
To: ast, daniel, andrii
Cc: martin.lau, eddyz87, song, yonghong.song, john.fastabend, kpsingh,
sdf, haoluo, jolsa, davem, edumazet, kuba, pabeni, horms, shuah,
linux-kernel, bpf, netdev, linux-kselftest, linux-open-source,
Slava Imameev
Multi-level pointer params and return value test coverage for BPF
trampolines:
- fentry/fexit programs covering struct and void double/triple
pointer parameters and returned values
- verifier context tests covering multi-level pointers as parameters
- verifier context tests covering multi-level pointers as returned
values
- verifier context tests for lsm to check trusted parameter handling
- verifier context tests covering out-of-bound access after cast
- verifier BPF helper tests to validate no change in verifier
behaviour
Signed-off-by: Slava Imameev <slava.imameev@crowdstrike.com>
---
net/bpf/test_run.c | 130 ++++++
.../prog_tests/fentry_fexit_multi_level_ptr.c | 206 +++++++++
.../selftests/bpf/prog_tests/verifier.c | 2 +
.../progs/fentry_fexit_pptr_nullable_test.c | 56 +++
.../bpf/progs/fentry_fexit_pptr_test.c | 67 +++
.../bpf/progs/fentry_fexit_void_ppptr_test.c | 38 ++
.../bpf/progs/fentry_fexit_void_pptr_test.c | 71 +++
.../bpf/progs/verifier_ctx_multilevel_ptr.c | 435 ++++++++++++++++++
8 files changed, 1005 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/fentry_fexit_multi_level_ptr.c
create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_pptr_nullable_test.c
create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_pptr_test.c
create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_void_ppptr_test.c
create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_void_pptr_test.c
create mode 100644 tools/testing/selftests/bpf/progs/verifier_ctx_multilevel_ptr.c
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 178c4738e63b..9f6ee2eb01cd 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -24,6 +24,9 @@
#include <net/netdev_rx_queue.h>
#include <net/xdp.h>
#include <net/netfilter/nf_bpf_link.h>
+#include <linux/set_memory.h>
+#include <linux/string.h>
+#include <asm/tlbflush.h>
#define CREATE_TRACE_POINTS
#include <trace/events/bpf_test_run.h>
@@ -563,6 +566,42 @@ noinline int bpf_fentry_test10(const void *a)
return (long)a;
}
+struct bpf_fentry_test_pptr_t {
+ u32 value1;
+ u32 value2;
+};
+
+noinline int bpf_fentry_test11_pptr_nullable(struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+ if (!pptr__nullable)
+ return -1;
+
+ return (*pptr__nullable)->value1;
+}
+
+noinline u32 **bpf_fentry_test12_pptr(u32 id, u32 **pptr)
+{
+ /* prevent DCE */
+ asm volatile("" : "+r"(id));
+ asm volatile("" : "+r"(pptr));
+ return pptr;
+}
+
+noinline u8 bpf_fentry_test13_pptr(void **pptr)
+{
+ void *ptr;
+
+ return copy_from_kernel_nofault(&ptr, pptr, sizeof(pptr)) == 0;
+}
+
+/* Test the verifier can handle multi-level pointer types with qualifiers. */
+noinline void ***bpf_fentry_test14_ppptr(void **volatile *const ppptr)
+{
+ /* prevent DCE */
+ asm volatile("" :: "r"(ppptr) : "memory");
+ return (void ***)ppptr;
+}
+
noinline void bpf_fentry_test_sinfo(struct skb_shared_info *sinfo)
{
}
@@ -670,20 +709,110 @@ static void *bpf_test_init(const union bpf_attr *kattr, u32 user_size,
return data;
}
+static void *create_bad_kaddr(void)
+{
+ /*
+ * Try to get an address that passes kernel range checks but causes
+ * a page fault handler invocation if accessed from a BPF program.
+ */
+#if defined(CONFIG_ARCH_HAS_SET_MEMORY) && defined(CONFIG_X86)
+ void *addr = vmalloc(PAGE_SIZE);
+
+ if (!addr)
+ return NULL;
+ /* Make it non-present - any access will fault */
+ if (set_memory_np((unsigned long)addr, 1)) {
+ vfree(addr);
+ return NULL;
+ }
+ return addr;
+#elif defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
+ struct page *page = alloc_page(GFP_KERNEL);
+
+ if (!page)
+ return NULL;
+ /* Remove from direct map - any access will fault */
+ if (set_direct_map_invalid_noflush(page)) {
+ __free_page(page);
+ return NULL;
+ }
+ flush_tlb_kernel_range((unsigned long)page_address(page),
+ (unsigned long)page_address(page) + PAGE_SIZE);
+ return page_address(page);
+#endif
+ return NULL;
+}
+
+static void free_bad_kaddr(void *addr)
+{
+ if (!addr)
+ return;
+
+ /*
+ * Free an invalid test address created by create_bad_kaddr().
+ * Restores the page to present state before freeing.
+ */
+#if defined(CONFIG_ARCH_HAS_SET_MEMORY) && defined(CONFIG_X86)
+ set_memory_p((unsigned long)addr, 1);
+ vfree(addr);
+#elif defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
+ struct page *page = virt_to_page(addr);
+
+ set_direct_map_default_noflush(page);
+ flush_tlb_kernel_range((unsigned long)addr,
+ (unsigned long)addr + PAGE_SIZE);
+ __free_page(page);
+#endif
+}
+
+#define CONSUME(val) do { \
+ typeof(val) __var = (val); \
+ __asm__ __volatile__("" : "+r" (__var)); \
+ (void)__var; \
+} while (0)
+
int bpf_prog_test_run_tracing(struct bpf_prog *prog,
const union bpf_attr *kattr,
union bpf_attr __user *uattr)
{
struct bpf_fentry_test_t arg = {};
+ struct bpf_fentry_test_pptr_t ts = { .value1 = 1979, .value2 = 2026 };
+ struct bpf_fentry_test_pptr_t *ptr = &ts;
+ void *kaddr = NULL;
+ u32 *u32_ptr = (u32 *)29;
u16 side_effect = 0, ret = 0;
int b = 2, err = -EFAULT;
u32 retval = 0;
+ const char *attach_name;
if (kattr->test.flags || kattr->test.cpu || kattr->test.batch_size)
return -EINVAL;
+ attach_name = prog->aux->attach_func_name;
+ if (!attach_name)
+ attach_name = "!";
+
switch (prog->expected_attach_type) {
case BPF_TRACE_FENTRY:
+ if (!strcmp(attach_name, "bpf_fentry_test11_pptr_nullable")) {
+ CONSUME(bpf_fentry_test11_pptr_nullable(&ptr));
+ break;
+ } else if (!strcmp(attach_name, "bpf_fentry_test12_pptr")) {
+ CONSUME(bpf_fentry_test12_pptr(0, &u32_ptr));
+ CONSUME(bpf_fentry_test12_pptr(1, (u32 **)17));
+ break;
+ } else if (!strcmp(attach_name, "bpf_fentry_test13_pptr")) {
+ /* If kaddr is NULL, the test handles this gracefully. */
+ kaddr = create_bad_kaddr();
+ CONSUME(bpf_fentry_test13_pptr(kaddr));
+ CONSUME(bpf_fentry_test13_pptr((void **)19));
+ CONSUME(bpf_fentry_test13_pptr(ERR_PTR(-ENOMEM)));
+ break;
+ } else if (!strcmp(attach_name, "bpf_fentry_test14_ppptr")) {
+ CONSUME(bpf_fentry_test14_ppptr(ERR_PTR(-ENOMEM)));
+ break;
+ }
+ fallthrough;
case BPF_TRACE_FEXIT:
case BPF_TRACE_FSESSION:
if (bpf_fentry_test1(1) != 2 ||
@@ -717,6 +846,7 @@ int bpf_prog_test_run_tracing(struct bpf_prog *prog,
err = 0;
out:
+ free_bad_kaddr(kaddr);
trace_bpf_test_finish(&err);
return err;
}
diff --git a/tools/testing/selftests/bpf/prog_tests/fentry_fexit_multi_level_ptr.c b/tools/testing/selftests/bpf/prog_tests/fentry_fexit_multi_level_ptr.c
new file mode 100644
index 000000000000..54a4d2720ba4
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/fentry_fexit_multi_level_ptr.c
@@ -0,0 +1,206 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 CrowdStrike, Inc. */
+#include <test_progs.h>
+#include "fentry_fexit_pptr_nullable_test.skel.h"
+#include "fentry_fexit_pptr_test.skel.h"
+#include "fentry_fexit_void_pptr_test.skel.h"
+#include "fentry_fexit_void_ppptr_test.skel.h"
+
+static void test_fentry_fexit_pptr_nullable(void)
+{
+ struct fentry_fexit_pptr_nullable_test *skel = NULL;
+ int err, prog_fd;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+
+ skel = fentry_fexit_pptr_nullable_test__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "fentry_fexit_pptr_nullable_test__open_and_load"))
+ return;
+
+ err = fentry_fexit_pptr_nullable_test__attach(skel);
+ if (!ASSERT_OK(err, "fentry_fexit_pptr_nullable_test__attach"))
+ goto cleanup;
+
+ /* Trigger fentry/fexit programs. */
+ prog_fd = bpf_program__fd(skel->progs.test_fentry_pptr_nullable);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+ ASSERT_EQ(topts.retval, 0, "test_run retval");
+
+ /* Verify fentry was called and captured the correct value. */
+ ASSERT_EQ(skel->bss->fentry_called, 1, "fentry_called");
+ ASSERT_EQ(skel->bss->fentry_ptr_field_value1, 1979, "fentry_ptr_field_value1");
+ ASSERT_EQ(skel->bss->fentry_ptr_field_value2, 2026, "fentry_ptr_field_value2");
+
+ /* Verify fexit captured correct values and return code. */
+ ASSERT_EQ(skel->bss->fexit_called, 1, "fexit_called");
+ ASSERT_EQ(skel->bss->fexit_ptr_field_value1, 1979, "fexit_ptr_field_value1");
+ ASSERT_EQ(skel->bss->fexit_ptr_field_value2, 2026, "fexit_ptr_field_value2");
+ ASSERT_EQ(skel->bss->fexit_retval, 1979, "fexit_retval");
+
+cleanup:
+ fentry_fexit_pptr_nullable_test__destroy(skel);
+}
+
+static void test_fentry_fexit_pptr(void)
+{
+ struct fentry_fexit_pptr_test *skel = NULL;
+ int err, prog_fd, i;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+
+ skel = fentry_fexit_pptr_test__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "fentry_fexit_pptr_test__open_and_load"))
+ return;
+
+ /* Poison some values which should be modified by BPF programs. */
+ for (i = 0; i < ARRAY_SIZE(skel->bss->telemetry); ++i) {
+ skel->bss->telemetry[i].id = 30;
+ skel->bss->telemetry[i].fentry_pptr = 31;
+ skel->bss->telemetry[i].fentry_ptr = 32;
+ skel->bss->telemetry[i].fexit_pptr = 33;
+ skel->bss->telemetry[i].fexit_ptr = 34;
+ skel->bss->telemetry[i].fexit_ret_pptr = 35;
+ skel->bss->telemetry[i].fexit_ret_ptr = 36;
+ }
+
+ err = fentry_fexit_pptr_test__attach(skel);
+ if (!ASSERT_OK(err, "fentry_fexit_pptr_test__attach"))
+ goto cleanup;
+
+ /* Trigger fentry/fexit programs */
+ prog_fd = bpf_program__fd(skel->progs.test_fentry_pptr);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+ ASSERT_EQ(topts.retval, 0, "test_run retval");
+
+ for (i = 0; i < ARRAY_SIZE(skel->bss->telemetry); ++i) {
+ ASSERT_TRUE(skel->bss->telemetry[i].id == 0 ||
+ skel->bss->telemetry[i].id == 1, "id");
+ if (skel->bss->telemetry[i].id == 0) {
+ /* Verify fentry captured the correct value. */
+ ASSERT_EQ(skel->bss->telemetry[i].fentry_called, 1, "fentry_called");
+ ASSERT_EQ(skel->bss->telemetry[i].fentry_ptr, (u64)29, "fentry_ptr");
+
+ /* Verify fexit captured correct values and return address. */
+ ASSERT_EQ(skel->bss->telemetry[i].fexit_called, 1, "fexit_called");
+ ASSERT_EQ(skel->bss->telemetry[i].fexit_pptr,
+ skel->bss->telemetry[i].fentry_pptr, "fexit_pptr");
+ ASSERT_EQ(skel->bss->telemetry[i].fexit_ptr, (u64)29, "fexit_ptr");
+ ASSERT_EQ(skel->bss->telemetry[i].fexit_ret_pptr,
+ skel->bss->telemetry[i].fentry_pptr, "fexit_ret_pptr");
+ ASSERT_EQ(skel->bss->telemetry[i].fexit_ret_ptr, (u64)29, "fexit_ret_ptr");
+ } else if (skel->bss->telemetry[i].id == 1) {
+ /* Verify fentry captured the correct value */
+ ASSERT_EQ(skel->bss->telemetry[i].fentry_called, 1, "fentry_called");
+ ASSERT_EQ(skel->bss->telemetry[i].fentry_pptr, 17, "fentry_pptr");
+
+ /*
+ * Verify fexit captured correct values and return address,
+ * fentry_ptr value depends on kernel address space layout
+ * and a mapped page presence at NULL.
+ */
+ ASSERT_EQ(skel->bss->telemetry[i].fexit_called, 1, "fexit_called");
+ ASSERT_EQ(skel->bss->telemetry[i].fexit_pptr, 17, "fexit_pptr");
+ ASSERT_EQ(skel->bss->telemetry[i].fexit_ptr,
+ skel->bss->telemetry[i].fentry_ptr, "fexit_ptr");
+ ASSERT_EQ(skel->bss->telemetry[i].fexit_ret_pptr, 17, "fexit_ret_pptr");
+ ASSERT_EQ(skel->bss->telemetry[i].fexit_ret_ptr,
+ skel->bss->telemetry[i].fentry_ptr, "fexit_ret_ptr");
+ }
+ }
+
+cleanup:
+ fentry_fexit_pptr_test__destroy(skel);
+}
+
+static void test_fentry_fexit_void_pptr(void)
+{
+ struct fentry_fexit_void_pptr_test *skel = NULL;
+ int err, prog_fd, i;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+
+ skel = fentry_fexit_void_pptr_test__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "fentry_fexit_void_pptr_test__open_and_load"))
+ return;
+
+ /* Poison some values which should be modified by BPF programs. */
+ for (i = 0; i < ARRAY_SIZE(skel->bss->telemetry); ++i) {
+ skel->bss->telemetry[i].fentry_pptr = 30;
+ skel->bss->telemetry[i].fentry_ptr = 31;
+ skel->bss->telemetry[i].fexit_pptr = 32;
+ skel->bss->telemetry[i].fexit_ptr = 33;
+ }
+
+ err = fentry_fexit_void_pptr_test__attach(skel);
+ if (!ASSERT_OK(err, "fentry_fexit_void_pptr_test__attach"))
+ goto cleanup;
+
+ /* Trigger fentry/fexit programs. */
+ prog_fd = bpf_program__fd(skel->progs.test_fentry_void_pptr);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+ ASSERT_EQ(topts.retval, 0, "test_run retval");
+ for (i = 0; i < ARRAY_SIZE(skel->bss->telemetry); ++i) {
+ ASSERT_EQ(skel->bss->telemetry[i].fentry_called, 1, "fentry_called");
+ ASSERT_EQ(skel->bss->telemetry[i].fexit_called, 1, "fexit_called");
+ ASSERT_EQ(skel->bss->telemetry[i].fentry_pptr, skel->bss->telemetry[i].fexit_pptr,
+ "fentry_pptr == fexit_pptr");
+ ASSERT_EQ(skel->bss->telemetry[i].fexit_ptr, skel->bss->telemetry[i].fentry_ptr,
+ "fexit_ptr");
+ ASSERT_EQ(skel->bss->telemetry[i].fentry_pptr_addr_valid,
+ skel->bss->telemetry[i].fexit_pptr_addr_valid, "fexit_pptr_addr_valid");
+ if (!skel->bss->telemetry[i].fentry_pptr_addr_valid) {
+ /* Should be set to 0 by kernel address boundaries check or an exception handler. */
+ ASSERT_EQ(skel->bss->telemetry[i].fentry_ptr, 0, "fentry_ptr");
+ ASSERT_EQ(skel->bss->telemetry[i].fexit_ptr, 0, "fexit_ptr");
+ }
+ }
+cleanup:
+ fentry_fexit_void_pptr_test__destroy(skel);
+}
+
+static void test_fentry_fexit_void_ppptr(void)
+{
+ struct fentry_fexit_void_ppptr_test *skel = NULL;
+ int err, prog_fd;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+
+ skel = fentry_fexit_void_ppptr_test__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "fentry_fexit_void_ppptr_test__open_and_load"))
+ return;
+
+ /* Poison some values which should be modified by BPF programs */
+ skel->bss->fentry_pptr = 31;
+
+ err = fentry_fexit_void_ppptr_test__attach(skel);
+ if (!ASSERT_OK(err, "fentry_fexit_void_ppptr_test__attach"))
+ goto cleanup;
+
+ /* Trigger fentry/fexit programs */
+ prog_fd = bpf_program__fd(skel->progs.test_fentry_void_ppptr);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+ ASSERT_EQ(topts.retval, 0, "test_run retval");
+
+ /* Verify invalid memory access results in zeroed register */
+ ASSERT_EQ(skel->bss->fentry_called, 1, "fentry_called");
+ ASSERT_EQ(skel->bss->fentry_pptr, 0, "fentry_pptr");
+
+ /* Verify fexit captured correct values and return value */
+ ASSERT_EQ(skel->bss->fexit_called, 1, "fexit_called");
+ ASSERT_EQ(skel->bss->fexit_retval, (u64)ERR_PTR(-ENOMEM), "fexit_retval");
+
+cleanup:
+ fentry_fexit_void_ppptr_test__destroy(skel);
+}
+
+void test_fentry_fexit_multi_level_ptr(void)
+{
+ if (test__start_subtest("pptr_nullable"))
+ test_fentry_fexit_pptr_nullable();
+ if (test__start_subtest("pptr"))
+ test_fentry_fexit_pptr();
+ if (test__start_subtest("void_pptr"))
+ test_fentry_fexit_void_pptr();
+ if (test__start_subtest("void_ppptr"))
+ test_fentry_fexit_void_ppptr();
+}
diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 8cdfd74c95d7..5bcc6406c0b2 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -115,6 +115,7 @@
#include "verifier_lsm.skel.h"
#include "verifier_jit_inline.skel.h"
#include "irq.skel.h"
+#include "verifier_ctx_multilevel_ptr.skel.h"
#define MAX_ENTRIES 11
@@ -259,6 +260,7 @@ void test_verifier_lsm(void) { RUN(verifier_lsm); }
void test_irq(void) { RUN(irq); }
void test_verifier_mtu(void) { RUN(verifier_mtu); }
void test_verifier_jit_inline(void) { RUN(verifier_jit_inline); }
+void test_verifier_ctx_multilevel_ptr(void) { RUN(verifier_ctx_multilevel_ptr); }
static int init_test_val_map(struct bpf_object *obj, char *map_name)
{
diff --git a/tools/testing/selftests/bpf/progs/fentry_fexit_pptr_nullable_test.c b/tools/testing/selftests/bpf/progs/fentry_fexit_pptr_nullable_test.c
new file mode 100644
index 000000000000..9cfd21a042e6
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/fentry_fexit_pptr_nullable_test.c
@@ -0,0 +1,56 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 CrowdStrike, Inc. */
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+
+char _license[] SEC("license") = "GPL";
+
+struct bpf_fentry_test_pptr_t {
+ __u32 value1;
+ __u32 value2;
+};
+
+/*
+ * Workaround for a bug in LLVM:
+ * fatal error: error in backend: Empty type name for BTF_TYPE_ID_REMOTE reloc
+ */
+typedef struct bpf_fentry_test_pptr_t *bpf_fentry_test_pptr_p;
+
+__u32 fentry_called = 0;
+__u32 fentry_ptr_field_value1 = 0;
+__u32 fentry_ptr_field_value2 = 0;
+__u32 fexit_called = 0;
+__u32 fexit_ptr_field_value1 = 0;
+__u32 fexit_ptr_field_value2 = 0;
+__u32 fexit_retval = 0;
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+int BPF_PROG(test_fentry_pptr_nullable, struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+ struct bpf_fentry_test_pptr_t *ptr;
+
+ fentry_called = 1;
+ /* For scalars, the verifier does not enforce NULL pointer checks. */
+ ptr = *bpf_core_cast(pptr__nullable, bpf_fentry_test_pptr_p);
+ bpf_probe_read_kernel(&fentry_ptr_field_value1, sizeof(fentry_ptr_field_value1), &ptr->value1);
+ bpf_probe_read_kernel(&fentry_ptr_field_value2, sizeof(fentry_ptr_field_value2), &ptr->value2);
+ return 0;
+}
+
+SEC("fexit/bpf_fentry_test11_pptr_nullable")
+int BPF_PROG(test_fexit_pptr_nullable, struct bpf_fentry_test_pptr_t **pptr__nullable, int ret)
+{
+ struct bpf_fentry_test_pptr_t **pptr;
+ struct bpf_fentry_test_pptr_t *ptr;
+
+ fexit_called = 1;
+ fexit_retval = ret;
+ /* For scalars, the verifier does not enforce NULL pointer checks. */
+ pptr = bpf_core_cast(pptr__nullable, bpf_fentry_test_pptr_p);
+ ptr = bpf_core_cast((*pptr), struct bpf_fentry_test_pptr_t);
+ fexit_ptr_field_value1 = ptr->value1;
+ fexit_ptr_field_value2 = ptr->value2;
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/fentry_fexit_pptr_test.c b/tools/testing/selftests/bpf/progs/fentry_fexit_pptr_test.c
new file mode 100644
index 000000000000..77c5c09d7117
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/fentry_fexit_pptr_test.c
@@ -0,0 +1,67 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 CrowdStrike, Inc. */
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+
+char _license[] SEC("license") = "GPL";
+
+#define TELEMETRY_COUNT 2
+
+struct {
+ __u32 id;
+ __u32 fentry_called;
+ __u32 fexit_called;
+ __u64 fentry_pptr;
+ __u64 fentry_ptr;
+ __u64 fexit_pptr;
+ __u64 fexit_ptr;
+ __u64 fexit_ret_pptr;
+ __u64 fexit_ret_ptr;
+} telemetry[TELEMETRY_COUNT];
+
+volatile unsigned int current_index = 0;
+
+/*
+ * Workaround for a bug in LLVM:
+ * fatal error: error in backend: Empty type name for BTF_TYPE_ID_REMOTE reloc
+ */
+typedef __u32 *__u32_p;
+
+SEC("fentry/bpf_fentry_test12_pptr")
+int BPF_PROG(test_fentry_pptr, __u32 id, __u32 **pptr)
+{
+ void *ptr;
+ unsigned int i = current_index;
+
+ if (i >= TELEMETRY_COUNT)
+ return 0;
+
+ if (bpf_probe_read_kernel(&ptr, sizeof(ptr), pptr) != 0)
+ ptr = NULL;
+
+ telemetry[i].id = id;
+ telemetry[i].fentry_called = 1;
+ telemetry[i].fentry_pptr = (__u64)pptr;
+ telemetry[i].fentry_ptr = (__u64)ptr;
+ return 0;
+}
+
+SEC("fexit/bpf_fentry_test12_pptr")
+int BPF_PROG(test_fexit_pptr, __u32 id, __u32 **pptr, __u32 **ret)
+{
+ unsigned int i = current_index;
+
+ if (i >= TELEMETRY_COUNT)
+ return 0;
+
+ telemetry[i].fexit_called = 1;
+ telemetry[i].fexit_pptr = (__u64)pptr;
+ telemetry[i].fexit_ptr = (__u64)*bpf_core_cast(pptr, __u32_p);
+ telemetry[i].fexit_ret_pptr = (__u64)ret;
+ telemetry[i].fexit_ret_ptr = ret ? (__u64)*bpf_core_cast(ret, __u32_p) : 0;
+
+ current_index = i + 1;
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/fentry_fexit_void_ppptr_test.c b/tools/testing/selftests/bpf/progs/fentry_fexit_void_ppptr_test.c
new file mode 100644
index 000000000000..26cb7285f808
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/fentry_fexit_void_ppptr_test.c
@@ -0,0 +1,38 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 CrowdStrike, Inc. */
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+
+char _license[] SEC("license") = "GPL";
+
+__u32 fentry_called = 0;
+__u32 fexit_called = 0;
+__u64 fentry_pptr = 0;
+__u64 fexit_retval = 0;
+
+typedef void **volatile *const ppvpc_t;
+
+/*
+ * Workaround for a bug in LLVM:
+ * fatal error: error in backend: Empty type name for BTF_TYPE_ID_REMOTE reloc
+ */
+typedef void **void_pp;
+
+SEC("fentry/bpf_fentry_test14_ppptr")
+int BPF_PROG(test_fentry_void_ppptr, ppvpc_t ppptr)
+{
+ fentry_called = 1;
+ /* Invalid memory access is fixed by boundaries check or exception handler */
+ fentry_pptr = (__u64)*bpf_core_cast((void ***)ppptr, void_pp);
+ return 0;
+}
+
+SEC("fexit/bpf_fentry_test14_ppptr")
+int BPF_PROG(test_fexit_void_ppptr, ppvpc_t ppptr, void ***ret)
+{
+ fexit_called = 1;
+ fexit_retval = ret ? (__u64)ret : 0;
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/fentry_fexit_void_pptr_test.c b/tools/testing/selftests/bpf/progs/fentry_fexit_void_pptr_test.c
new file mode 100644
index 000000000000..588050b9607d
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/fentry_fexit_void_pptr_test.c
@@ -0,0 +1,71 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 CrowdStrike, Inc. */
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+
+char _license[] SEC("license") = "GPL";
+
+#define TELEMETRY_COUNT 3
+
+struct {
+ __u32 fentry_called;
+ __u32 fexit_called;
+ __u32 fentry_pptr_addr_valid;
+ __u32 fexit_pptr_addr_valid;
+ __u64 fentry_pptr;
+ __u64 fentry_ptr;
+ __u64 fexit_pptr;
+ __u64 fexit_ptr;
+} telemetry[TELEMETRY_COUNT];
+
+volatile unsigned int current_index = 0;
+
+/*
+ * Workaround for a bug in LLVM:
+ * fatal error: error in backend: Empty type name for BTF_TYPE_ID_REMOTE reloc
+ */
+typedef void *void_p;
+
+SEC("fentry/bpf_fentry_test13_pptr")
+int BPF_PROG(test_fentry_void_pptr, void **pptr)
+{
+ void *ptr;
+ unsigned int i = current_index;
+
+ if (i >= TELEMETRY_COUNT)
+ return 0;
+
+ telemetry[i].fentry_pptr_addr_valid =
+ (bpf_probe_read_kernel(&ptr, sizeof(ptr), pptr) == 0);
+ if (!telemetry[i].fentry_pptr_addr_valid)
+ ptr = NULL;
+
+ telemetry[i].fentry_called = 1;
+ telemetry[i].fentry_pptr = (__u64)pptr;
+ telemetry[i].fentry_ptr = (__u64)ptr;
+ return 0;
+}
+
+SEC("fexit/bpf_fentry_test13_pptr")
+int BPF_PROG(test_fexit_void_pptr, void **pptr, __u8 ret)
+{
+ unsigned int i = current_index;
+
+ if (i >= TELEMETRY_COUNT)
+ return 0;
+
+ telemetry[i].fexit_called = 1;
+ telemetry[i].fexit_pptr = (__u64)pptr;
+ telemetry[i].fexit_pptr_addr_valid = ret;
+
+ /*
+ * For invalid addresses, the destination register for *dptr is set
+ * to 0 by the BPF exception handler, JIT address range check, or
+ * the BPF interpreter.
+ */
+ telemetry[i].fexit_ptr = (__u64)*bpf_core_cast(pptr, void_p);
+ current_index = i + 1;
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/verifier_ctx_multilevel_ptr.c b/tools/testing/selftests/bpf/progs/verifier_ctx_multilevel_ptr.c
new file mode 100644
index 000000000000..e508a6fbf430
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_ctx_multilevel_ptr.c
@@ -0,0 +1,435 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Verifier tests for double and triple pointer parameter handling
+ * Copyright (c) 2026 CrowdStrike, Inc.
+ */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+#include "bpf_misc.h"
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter - valid ctx access")
+__success __retval(0)
+__naked void ctx_double_ptr_fentry_valid_ctx_access(void)
+{
+ asm volatile (" \
+ /* load double pointer - SCALAR_VALUE */\
+ r2 = *(u64 *)(r1 + 0); \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("fexit/bpf_fentry_test11_pptr_nullable")
+__description("fexit/double pointer parameter - valid ctx access")
+__success __retval(0)
+__naked void ctx_double_ptr_fexit_valid_ctx_access(void)
+{
+ asm volatile (" \
+ /* load double pointer - SCALAR_VALUE */\
+ r2 = *(u64 *)(r1 + 0); \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer nullable parameter - valid ctx access")
+__success __retval(0)
+__naked void ctx_double_ptr_valid_ctx_access_nullable(void)
+{
+ asm volatile (" \
+ /* load double pointer - SCALAR_VALUE */\
+ r2 = *(u64 *)(r1 + 0); \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer nullable parameter - invalid load with scalar")
+__failure __msg("R2 invalid mem access 'scalar'")
+__naked void ctx_double_ptr_invalid_load(void)
+{
+ asm volatile (" \
+ /* load double pointer - SCALAR_VALUE */\
+ r2 = *(u64 *)(r1 + 0); \
+ r3 = *(u64 *)(r2 + 0); \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer nullable parameter - invalid load with scalar by offset")
+__failure __msg("R2 invalid mem access 'scalar'")
+__naked void ctx_double_ptr_invalid_load_with_offset(void)
+{
+ asm volatile (" \
+ /* load double pointer - SCALAR_VALUE */\
+ r2 = *(u64 *)(r1 + 0); \
+ r3 = *(u64 *)(r2 + 0x80); \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer nullable parameter - invalid store by scalar")
+__failure __msg("R2 invalid mem access 'scalar'")
+__naked void ctx_double_ptr_store_with_scalar(void)
+{
+ asm volatile (" \
+ /* load double pointer - SCALAR_VALUE */\
+ r2 = *(u64 *)(r1 + 0); \
+ *(u64 *)(r2 + 0x0) = 1; \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("fentry/bpf_fentry_test14_ppptr")
+__description("fentry/triple pointer parameter - valid ctx access")
+__success __retval(0)
+__naked void ctx_triple_ptr_valid_ctx_access(void)
+{
+ asm volatile (" \
+ /* load triple pointer - SCALAR_VALUE */\
+ r2 = *(u64 *)(r1 + 0); \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("fentry/bpf_fentry_test14_ppptr")
+__description("fentry/triple pointer parameter - invalid load with scalar")
+__failure __msg("R2 invalid mem access 'scalar'")
+__naked void ctx_triple_ptr_load_with_scalar(void)
+{
+ asm volatile (" \
+ /* load triple pointer - SCALAR_VALUE */\
+ r2 = *(u64 *)(r1 + 0); \
+ r3 = *(u64 *)(r2 + 0); \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("fentry/bpf_fentry_test14_ppptr")
+__description("fentry/triple pointer parameter - invalid store with scalar")
+__failure __msg("R2 invalid mem access 'scalar'")
+__naked void ctx_triple_ptr_store_with_scalar(void)
+{
+ asm volatile (" \
+ /* load triple pointer - SCALAR_VALUE */\
+ r2 = *(u64 *)(r1 + 0); \
+ *(u64 *)(r2 + 0) = 1; \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("lsm/sb_eat_lsm_opts")
+__description("lsm/double pointer parameter trusted - valid ctx access")
+__success
+__naked void sb_eat_lsm_opts_trusted_valid_ctx_access(void)
+{
+ asm volatile (" \
+ /* load double pointer - SCALAR_VALUE */\
+ r2 = *(u64 *)(r1 + 8); \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("lsm/sb_eat_lsm_opts")
+__description("lsm/double pointer parameter trusted - invalid load with scalar")
+__failure __msg("R2 invalid mem access 'scalar'")
+__naked void sb_eat_lsm_opts_trusted_load_with_scalar(void)
+{
+ asm volatile (" \
+ /* load double pointer - SCALAR_VALUE */\
+ r2 = *(u64 *)(r1 + 8); \
+ r3 = *(u64 *)(r2 + 0); \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("lsm/sb_eat_lsm_opts")
+__description("lsm/double pointer parameter trusted - invalid store with scalar")
+__failure __msg("R2 invalid mem access 'scalar'")
+__naked void sb_eat_lsm_opts_trusted_store_with_scalar(void)
+{
+ asm volatile (" \
+ /* load double pointer - SCALAR_VALUE */\
+ r2 = *(u64 *)(r1 + 8); \
+ *(u64 *)(r2 + 0) = 1; \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+struct bpf_fentry_test_pptr_t;
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter - bpf helpers with nullable var")
+__success __retval(0)
+int BPF_PROG(ctx_double_ptr_nullable_var_access_bpf_helpers,
+ struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+ /* Check compatibility with BPF helpers; NULL checks should not be required. */
+ void *ptr;
+
+ bpf_probe_read_kernel(&ptr, sizeof(ptr), pptr__nullable);
+ return 0;
+}
+
+/*
+ * Workaround for a bug in LLVM:
+ * fatal error: error in backend: Empty type name for BTF_TYPE_ID_REMOTE reloc
+ */
+typedef __u32 *__u32_p;
+
+/*
+ * Workaround for:
+ * kfunc bpf_rdonly_cast type ID argument must be of a struct or void
+ */
+struct __u32_wrap {
+ __u32 v;
+};
+
+SEC("fexit/bpf_fentry_test12_pptr")
+__description("fexit/double pointer return value - valid dereference of return val")
+__success __retval(0)
+int BPF_PROG(ctx_double_ptr_return_access, __u32 id,
+ __u32 **pptr, __u32 **ret)
+{
+ __u32 **ppu32;
+ struct __u32_wrap *pu32;
+
+ ppu32 = bpf_core_cast(ret, __u32_p);
+ pu32 = bpf_core_cast(*ppu32, struct __u32_wrap);
+ bpf_printk("%d", pu32->v);
+ return 0;
+}
+
+SEC("fexit/bpf_fentry_test12_pptr")
+__description("fexit/double pointer parameter - bpf helpers with return val")
+__success __retval(0)
+int BPF_PROG(ctx_double_ptr_return_access_bpf_helpers, __u32 id,
+ __u32 **pptr, __u32 **ret)
+{
+ /* Check compatibility with BPF helpers; NULL checks should not be required. */
+ void *ptr;
+
+ bpf_probe_read_kernel(&ptr, sizeof(ptr), pptr);
+ bpf_probe_read_kernel(&ptr, sizeof(ptr), ret);
+ return 0;
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter - bpf helpers with nullable var, direct ctx pointer")
+__success __retval(0)
+int BPF_PROG(ctx_double_ptr_nullable_var_access_bpf_helpers_ctx,
+ struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+ /* Check compatibility with BPF helpers; NULL checks should not be required. */
+ void *ptr;
+
+ bpf_probe_read_kernel(&ptr, sizeof(ptr), &ctx[0] /*pptr__nullable*/);
+ return 0;
+}
+
+SEC("fexit/bpf_fentry_test12_pptr")
+__description("fexit/double pointer parameter - bpf helpers with return val, direct ctx pointer")
+__success __retval(0)
+int BPF_PROG(ctx_double_ptr_return_access_bpf_helpers_ctx, __u32 id,
+ __u32 **pptr, __u32 **ret)
+{
+ /* Check compatibility with BPF helpers; NULL checks should not be required. */
+ void *ptr;
+
+ bpf_probe_read_kernel(&ptr, sizeof(ptr), &ctx[1] /*pptr*/);
+ bpf_probe_read_kernel(&ptr, sizeof(ptr), &ctx[2] /*ret*/);
+ return 0;
+}
+
+struct bpf_fentry_test_pptr_t {
+ __u32 value1;
+ __u32 value2;
+};
+
+/*
+ * Workaround for a bug in LLVM:
+ * fatal error: error in backend: Empty type name for BTF_TYPE_ID_REMOTE reloc
+ */
+typedef struct bpf_fentry_test_pptr_t *bpf_fentry_test_pptr_p;
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter - dereference followed by valid load of field 1")
+__success __retval(0)
+int BPF_PROG(ctx_double_ptr_deref_with_field_1_load,
+ struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+ struct bpf_fentry_test_pptr_t **pptr;
+ struct bpf_fentry_test_pptr_t *ptr;
+
+ pptr = bpf_core_cast(pptr__nullable, bpf_fentry_test_pptr_p);
+ ptr = bpf_core_cast((*pptr), struct bpf_fentry_test_pptr_t);
+ bpf_printk("%d", ptr->value1);
+ return 0;
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter - dereference followed by valid load of field 2")
+__success __retval(0)
+int BPF_PROG(ctx_double_ptr_deref_with_field_2_load, struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+ struct bpf_fentry_test_pptr_t **pptr;
+ struct bpf_fentry_test_pptr_t *ptr;
+
+ pptr = bpf_core_cast(pptr__nullable, bpf_fentry_test_pptr_p);
+ ptr = bpf_core_cast((*pptr), struct bpf_fentry_test_pptr_t);
+ bpf_printk("%d", ptr->value2);
+ return 0;
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter - dereference followed by invalid positive out-of-bounds offset load")
+__failure __msg("access beyond struct bpf_fentry_test_pptr_t at off 128 size 4")
+int BPF_PROG(ctx_double_ptr_deref_with_load_by_positive_out_of_bound_offset,
+ struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+ struct bpf_fentry_test_pptr_t **pptr;
+ struct bpf_fentry_test_pptr_t *ptr;
+ __u32 value;
+
+ pptr = bpf_core_cast(pptr__nullable, bpf_fentry_test_pptr_p);
+ ptr = bpf_core_cast((*pptr), struct bpf_fentry_test_pptr_t);
+
+ asm volatile (" \
+ r2 = %1; \
+ /* Load with out-of-bounds offset */\
+ %0 = *(u32 *)(r2 + 0x80); \
+ " : "=r" (value) : "r" (ptr) : "r2");
+
+ bpf_printk("%d", value);
+ return 0;
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter - dereference followed by invalid negative out-of-bounds offset load")
+__failure __msg("R2 is ptr_bpf_fentry_test_pptr_t invalid negative access: off=-128")
+int BPF_PROG(ctx_double_ptr_deref_with_load_by_negative_out_of_bound_offset,
+ struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+ struct bpf_fentry_test_pptr_t **pptr;
+ struct bpf_fentry_test_pptr_t *ptr;
+ __u32 value;
+
+ pptr = bpf_core_cast(pptr__nullable, bpf_fentry_test_pptr_p);
+ ptr = bpf_core_cast((*pptr), struct bpf_fentry_test_pptr_t);
+
+ asm volatile (" \
+ r2 = %1; \
+ /* Load with out-of-bounds offset */\
+ %0 = *(u32 *)(r2 - 0x80); \
+ " : "=r" (value) : "r" (ptr) : "r2");
+
+ bpf_printk("%d", value);
+ return 0;
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter - dereference followed by invalid store to field 1")
+__failure __msg("only read is supported")
+int BPF_PROG(ctx_double_ptr_deref_with_field_1_modification, struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+ struct bpf_fentry_test_pptr_t **pptr;
+ struct bpf_fentry_test_pptr_t *ptr;
+
+ pptr = bpf_core_cast(pptr__nullable, bpf_fentry_test_pptr_p);
+ ptr = bpf_core_cast((*pptr), struct bpf_fentry_test_pptr_t);
+
+ asm volatile (" \
+ /* Load immediate 1 into w2 */\
+ w2 = 1; \
+ /* Store to ptr->value1 */ \
+ *(u32 *)(%0 + 0) = r2; \
+ " :: "r" (ptr) : "r2");
+
+ return 0;
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter - dereference followed by invalid store to field 2")
+__failure __msg("only read is supported")
+int BPF_PROG(ctx_double_ptr_deref_with_field_2_modification,
+ struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+ struct bpf_fentry_test_pptr_t **pptr;
+ struct bpf_fentry_test_pptr_t *ptr;
+
+ pptr = bpf_core_cast(pptr__nullable, bpf_fentry_test_pptr_p);
+ ptr = bpf_core_cast((*pptr), struct bpf_fentry_test_pptr_t);
+
+ asm volatile (" \
+ /* Load immediate 2 into w2 */\
+ w2 = 2; \
+ /* Store to ptr->value2 */ \
+ *(u32 *)(%0 + 4) = r2; \
+ " :: "r" (ptr) : "r2");
+
+ return 0;
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter - dereference followed by invalid store to positive offset beyond struct boundaries")
+__failure __msg("only read is supported")
+int BPF_PROG(ctx_double_ptr_deref_with_store_by_positive_invalid_offset,
+ struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+ struct bpf_fentry_test_pptr_t **pptr;
+ struct bpf_fentry_test_pptr_t *ptr;
+
+ pptr = bpf_core_cast(pptr__nullable, bpf_fentry_test_pptr_p);
+ ptr = bpf_core_cast((*pptr), struct bpf_fentry_test_pptr_t);
+
+ asm volatile (" \
+ r3 = %0; \
+ /* Load immediate 3 into w2 */\
+ w2 = 3; \
+ /* Store with offset outside struct size */ \
+ *(u32 *)(r3 + 0x80) = r2; \
+ " :: "r" (ptr) : "r2", "r3");
+
+ return 0;
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter - dereference followed by invalid store to negative offset beyond struct boundaries")
+__failure __msg("R3 is ptr_bpf_fentry_test_pptr_t invalid negative access: off=-128")
+int BPF_PROG(ctx_double_ptr_deref_with_store_by_negative_invalid_offset,
+ struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+ struct bpf_fentry_test_pptr_t **pptr;
+ struct bpf_fentry_test_pptr_t *ptr;
+
+ pptr = bpf_core_cast(pptr__nullable, bpf_fentry_test_pptr_p);
+ ptr = bpf_core_cast((*pptr), struct bpf_fentry_test_pptr_t);
+
+ asm volatile (" \
+ r3 = %0; \
+ /* Load immediate 3 into w2 */\
+ w2 = 3; \
+ /* Store with offset outside struct size */ \
+ *(u32 *)(r3 - 0x80) = r2; \
+ " :: "r" (ptr) : "r2", "r3");
+
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
--
2.34.1
* Re: [PATCH bpf-next v3 1/2] bpf: Support multi-level pointer params via SCALAR_VALUE for trampolines
2026-02-23 8:31 ` [PATCH bpf-next v3 1/2] bpf: Support multi-level pointer params via SCALAR_VALUE " Slava Imameev
@ 2026-02-23 9:06 ` bot+bpf-ci
2026-02-23 10:47 ` Slava Imameev
0 siblings, 1 reply; 8+ messages in thread
From: bot+bpf-ci @ 2026-02-23 9:06 UTC (permalink / raw)
To: slava.imameev, ast, daniel, andrii
Cc: martin.lau, eddyz87, song, yonghong.song, john.fastabend, kpsingh,
sdf, haoluo, jolsa, davem, edumazet, kuba, pabeni, horms, shuah,
linux-kernel, bpf, netdev, linux-kselftest, linux-open-source,
slava.imameev, ast, andrii, daniel, martin.lau, eddyz87,
yonghong.song, clm, ihor.solodrai
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> index 7708958e3fb8..ebb1b0c3f993 100644
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
[ ... ]
> @@ -6905,8 +6920,11 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
> /*
> * If it's a pointer to void, it's the same as scalar from the verifier
> * safety POV. Either way, no further pointer walking is allowed.
> + * Multilevel pointers (e.g., int**, struct foo**, char***) of any type
> + * are treated as scalars because the verifier lacks the context to infer
> + * the size of their target memory regions.
> */
> - if (is_void_or_int_ptr(btf, t))
> + if (is_void_or_int_ptr(btf, t) || is_multilevel_ptr(btf, t))
> return true;
In the v2 review thread, Alexei Starovoitov wrote:
"I'm less certain about 'char **'. If we make it scalar too
it will be harder to make it a pointer to nul terminated
string later.
So I would do 'void **' -> scalar for now only."
https://lore.kernel.org/bpf/CAADnVQ+xwC+J1hZcHPEUjBoKcUKmnucG4AFQ-kkDmqx+C-GcoA@mail.gmail.com/
The comment says "of any type" and is_multilevel_ptr() matches all
multi-level pointers (int**, struct foo**, char***), which is broader
than the requested scope. Was the decision to treat all multi-level
pointer types as scalar discussed with Alexei after the v2 feedback?
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/22298698870
AI-authorship-score: low
AI-authorship-explanation: Code exhibits human-authored characteristics including idiomatic kernel C patterns, iterative revision based on reviewer feedback across three versions, and deep domain knowledge of BPF verifier internals.
issues-found: 1
issue-severity-score: low
issue-severity-explanation: Unaddressed maintainer feedback about limiting scope to void** pointers only; the broader approach is functionally safe but contradicts the requested design direction.
* Re: Re: [PATCH bpf-next v3 1/2] bpf: Support multi-level pointer params via SCALAR_VALUE for trampolines
2026-02-23 9:06 ` bot+bpf-ci
@ 2026-02-23 10:47 ` Slava Imameev
2026-02-23 16:58 ` Alexei Starovoitov
0 siblings, 1 reply; 8+ messages in thread
From: Slava Imameev @ 2026-02-23 10:47 UTC (permalink / raw)
To: bot+bpf-ci
Cc: andrii, ast, bpf, clm, daniel, davem, eddyz87, edumazet, haoluo,
horms, ihor.solodrai, john.fastabend, jolsa, kpsingh, kuba,
linux-kernel, linux-kselftest, linux-open-source, martin.lau,
martin.lau, netdev, pabeni, sdf, shuah, slava.imameev, song,
yonghong.song
I think support for broader types can be provided in a way that stays
compatible with future annotation support, as I explained in my reply
to the v2 review:
"
> so I suggest treating 'void **' as a scalar as Eduard suggested.
> This particular sb_eat_lsm_opts() hook
> doesn't have a useful type behind it anyway.
> I'm less certain about 'char **'. If we make it scalar too
> it will be harder to make it a pointer to nul terminated string later.
> So I would do 'void **' -> scalar for now only.
I changed to scalar in v3, keeping broader scope for pointer types.
We encountered double pointers of various types that required
workarounds, such as:
int __posix_acl_chmod(struct posix_acl **acl, gfp_t gfp, umode_t mode)
Adding support for void** alone doesn't address the broader issue
with other double pointer types.
When annotated array support (including char**) is added in the
future, it should remain compatible with the scalar approach for
legacy (unannotated) parameters. Unannotated parameters will
continue using scalar handling.
"
* Re: [PATCH bpf-next v3 2/2] selftests/bpf: Add trampolines multi-level pointer params test coverage
2026-02-23 8:31 ` [PATCH bpf-next v3 2/2] selftests/bpf: Add trampolines multi-level pointer params test coverage Slava Imameev
@ 2026-02-23 16:38 ` Alexei Starovoitov
0 siblings, 0 replies; 8+ messages in thread
From: Alexei Starovoitov @ 2026-02-23 16:38 UTC (permalink / raw)
To: Slava Imameev
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
Martin KaFai Lau, Eduard, Song Liu, Yonghong Song, John Fastabend,
KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa, David S. Miller,
Eric Dumazet, Jakub Kicinski, Paolo Abeni, Simon Horman,
Shuah Khan, LKML, bpf, Network Development,
open list:KERNEL SELFTEST FRAMEWORK, DL Linux Open Source Team
On Mon, Feb 23, 2026 at 12:32 AM Slava Imameev
<slava.imameev@crowdstrike.com> wrote:
>
> Multi-level pointer params and return value test coverage for BPF
> trampolines:
> - fentry/fexit programs covering struct and void double/triple
> pointer parameters and returned values
> - verifier context tests covering multi-level pointers as parameters
> - verifier context tests covering multi-level pointers as returned
> values
> - verifier context tests for lsm to check trusted parameters handling
> - verifier context tests covering out-of-bound access after cast
> - verifier BPF helper tests to validate no change in verifier
> behaviour
>
> Signed-off-by: Slava Imameev <slava.imameev@crowdstrike.com>
> ---
> net/bpf/test_run.c | 130 ++++++
> .../prog_tests/fentry_fexit_multi_level_ptr.c | 206 +++++++++
> .../selftests/bpf/prog_tests/verifier.c | 2 +
> .../progs/fentry_fexit_pptr_nullable_test.c | 56 +++
> .../bpf/progs/fentry_fexit_pptr_test.c | 67 +++
> .../bpf/progs/fentry_fexit_void_ppptr_test.c | 38 ++
> .../bpf/progs/fentry_fexit_void_pptr_test.c | 71 +++
> .../bpf/progs/verifier_ctx_multilevel_ptr.c | 435 ++++++++++++++++++
> 8 files changed, 1005 insertions(+)
> create mode 100644 tools/testing/selftests/bpf/prog_tests/fentry_fexit_multi_level_ptr.c
> create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_pptr_nullable_test.c
> create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_pptr_test.c
> create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_void_ppptr_test.c
> create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_void_pptr_test.c
> create mode 100644 tools/testing/selftests/bpf/progs/verifier_ctx_multilevel_ptr.c
>
> diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
> index 178c4738e63b..9f6ee2eb01cd 100644
> --- a/net/bpf/test_run.c
> +++ b/net/bpf/test_run.c
> @@ -24,6 +24,9 @@
> #include <net/netdev_rx_queue.h>
> #include <net/xdp.h>
> #include <net/netfilter/nf_bpf_link.h>
> +#include <linux/set_memory.h>
> +#include <linux/string.h>
> +#include <asm/tlbflush.h>
>
> #define CREATE_TRACE_POINTS
> #include <trace/events/bpf_test_run.h>
> @@ -563,6 +566,42 @@ noinline int bpf_fentry_test10(const void *a)
> return (long)a;
> }
>
> +struct bpf_fentry_test_pptr_t {
> + u32 value1;
> + u32 value2;
> +};
> +
> +noinline int bpf_fentry_test11_pptr_nullable(struct bpf_fentry_test_pptr_t **pptr__nullable)
> +{
> + if (!pptr__nullable)
> + return -1;
> +
> + return (*pptr__nullable)->value1;
> +}
> +
> +noinline u32 **bpf_fentry_test12_pptr(u32 id, u32 **pptr)
> +{
> + /* prevent DCE */
> + asm volatile("" : "+r"(id));
> + asm volatile("" : "+r"(pptr));
> + return pptr;
> +}
> +
> +noinline u8 bpf_fentry_test13_pptr(void **pptr)
> +{
> + void *ptr;
> +
> + return copy_from_kernel_nofault(&ptr, pptr, sizeof(pptr)) == 0;
> +}
> +
> +/* Test the verifier can handle multi-level pointer types with qualifiers. */
> +noinline void ***bpf_fentry_test14_ppptr(void **volatile *const ppptr)
> +{
> + /* prevent DCE */
> + asm volatile("" :: "r"(ppptr) : "memory");
> + return (void ***)ppptr;
> +}
> +
> noinline void bpf_fentry_test_sinfo(struct skb_shared_info *sinfo)
> {
> }
> @@ -670,20 +709,110 @@ static void *bpf_test_init(const union bpf_attr *kattr, u32 user_size,
> return data;
> }
>
> +static void *create_bad_kaddr(void)
> +{
> + /*
> + * Try to get an address that passes kernel range checks but causes
> + * a page fault handler invocation if accessed from a BPF program.
> + */
> +#if defined(CONFIG_ARCH_HAS_SET_MEMORY) && defined(CONFIG_X86)
> + void *addr = vmalloc(PAGE_SIZE);
> +
> + if (!addr)
> + return NULL;
> + /* Make it non-present - any access will fault */
> + if (set_memory_np((unsigned long)addr, 1)) {
> + vfree(addr);
> + return NULL;
> + }
> + return addr;
> +#elif defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
> + struct page *page = alloc_page(GFP_KERNEL);
> +
> + if (!page)
> + return NULL;
> + /* Remove from direct map - any access will fault */
> + if (set_direct_map_invalid_noflush(page)) {
> + __free_page(page);
> + return NULL;
> + }
> + flush_tlb_kernel_range((unsigned long)page_address(page),
> + (unsigned long)page_address(page) + PAGE_SIZE);
> + return page_address(page);
> +#endif
This is serious overkill for a test.
See how bpf_testmod_return_ptr() does it.
pw-bot: cr
* Re: Re: [PATCH bpf-next v3 1/2] bpf: Support multi-level pointer params via SCALAR_VALUE for trampolines
2026-02-23 10:47 ` Slava Imameev
@ 2026-02-23 16:58 ` Alexei Starovoitov
2026-02-23 21:23 ` Slava Imameev
0 siblings, 1 reply; 8+ messages in thread
From: Alexei Starovoitov @ 2026-02-23 16:58 UTC (permalink / raw)
To: Slava Imameev
Cc: bot+bpf-ci, Andrii Nakryiko, Alexei Starovoitov, bpf, Chris Mason,
Daniel Borkmann, David S. Miller, Eduard, Eric Dumazet, Hao Luo,
Simon Horman, Ihor Solodrai, John Fastabend, Jiri Olsa, KP Singh,
Jakub Kicinski, LKML, open list:KERNEL SELFTEST FRAMEWORK,
DL Linux Open Source Team, Martin KaFai Lau, Martin KaFai Lau,
Network Development, Paolo Abeni, Stanislav Fomichev, Shuah Khan,
Song Liu, Yonghong Song
On Mon, Feb 23, 2026 at 2:48 AM Slava Imameev
<slava.imameev@crowdstrike.com> wrote:
>
> I think support for broader types can be provided in a compatible way
> with future annotated support, as I explained in my reply to the v2
> review:
>
> "
> > so I suggest treating 'void **' as a scalar as Eduard suggested.
> > This particular sb_eat_lsm_opts() hook
> > doesn't have a useful type behind it anyway.
> > I'm less certain about 'char **'. If we make it scalar too
> > it will be harder to make it a pointer to nul terminated string later.
>
> > So I would do 'void **' -> scalar for now only.
>
> I changed to scalar in v3, keeping broader scope for pointer types.
>
> We encountered double pointers of various types that required
> workarounds, such as:
>
> int __posix_acl_chmod(struct posix_acl **acl, gfp_t gfp, umode_t mode)
>
> Adding support for void** alone doesn't address the broader issue
> with other double pointer types.
>
> When annotated array support (including char**) is added in the
> future, it should remain compatible with the scalar approach for
> legacy (unannotated) parameters. Unannotated parameters will
> continue using scalar handling.
> "
but then there is no need to relax it for double pointers only.
- if (is_void_or_int_ptr(btf, t))
+ if (is_void_or_int_ptr(btf, t) || !btf_type_is_struct_ptr(btf, t))
would make 'scalar' a fallback for anything unrecognized,
and we can argue that making it smarter in the future
maybe hopefully won't break progs.
* Re: Re: [PATCH bpf-next v3 1/2] bpf: Support multi-level pointer params via SCALAR_VALUE for trampolines
2026-02-23 16:58 ` Alexei Starovoitov
@ 2026-02-23 21:23 ` Slava Imameev
0 siblings, 0 replies; 8+ messages in thread
From: Slava Imameev @ 2026-02-23 21:23 UTC (permalink / raw)
To: alexei.starovoitov
Cc: andrii, ast, bot+bpf-ci, bpf, clm, daniel, davem, eddyz87,
edumazet, haoluo, horms, ihor.solodrai, john.fastabend, jolsa,
kpsingh, kuba, linux-kernel, linux-kselftest, linux-open-source,
martin.lau, martin.lau, netdev, pabeni, sdf, shuah, slava.imameev,
song, yonghong.song
Alexei Starovoitov <ast@kernel.org> wrote:
> On Mon, Feb 23, 2026 at 2:48 AM Slava Imameev
> <slava.imameev@crowdstrike.com> wrote:
> >
> > I think support for broader types can be provided in a compatible way
> > with future annotated support, as I explained in my reply to the v2
> > review:
> >
> > "
> > > so I suggest treating 'void **' as a scalar as Eduard suggested.
> > > This particular sb_eat_lsm_opts() hook
> > > doesn't have a useful type behind it anyway.
> > > I'm less certain about 'char **'. If we make it scalar too
> > > it will be harder to make it a pointer to nul terminated string later.
> >
> > > So I would do 'void **' -> scalar for now only.
> >
> > I changed to scalar in v3, keeping broader scope for pointer types.
> >
> > We encountered double pointers of various types that required
> > workarounds, such as:
> >
> > int __posix_acl_chmod(struct posix_acl **acl, gfp_t gfp, umode_t mode)
> >
> > Adding support for void** alone doesn't address the broader issue
> > with other double pointer types.
> >
> > When annotated array support (including char**) is added in the
> > future, it should remain compatible with the scalar approach for
> > legacy (unannotated) parameters. Unannotated parameters will
> > continue using scalar handling.
> > "
>
> but then there is no need to relax it for double pointers only.
>
> - if (is_void_or_int_ptr(btf, t))
> + if (is_void_or_int_ptr(btf, t) || !btf_type_is_struct_ptr(btf, t))
>
> would make 'scalar' a fallback for anything unrecognized,
> and we can argue that making it smarter in the future
> maybe hopefully won't break progs.
I can increase coverage to anything that's not a pointer to structure.
I added support only for multi-level pointers, of any pointee type,
because we needed workarounds for them in our development; these are
real cases we encountered in practice.
Multi-level pointer support appears to be a safe extension, since any
future support for arrays and output values would require annotation
(similar to Microsoft SAL), which differentiates the current
unannotated scalar cases from new annotated cases.