* [PATCH bpf-next v2 0/2] bpf: Add multi-level pointer parameter support for trampolines
@ 2026-02-17 22:13 Slava Imameev
2026-02-17 22:13 ` [PATCH bpf-next v2 1/2] bpf: Support multi-level pointer params via PTR_TO_MEM " Slava Imameev
` (2 more replies)
0 siblings, 3 replies; 11+ messages in thread
From: Slava Imameev @ 2026-02-17 22:13 UTC (permalink / raw)
To: ast, daniel, andrii
Cc: martin.lau, eddyz87, song, yonghong.song, john.fastabend, kpsingh,
sdf, haoluo, jolsa, davem, edumazet, kuba, pabeni, horms, shuah,
linux-kernel, bpf, netdev, linux-kselftest, linux-open-source,
Slava Imameev
This patch series adds BPF verifier support for multi-level pointer parameters
and return values in BPF trampolines. The implementation treats these
parameters as PTR_TO_MEM with read-only semantics, applying either
untrusted or trusted access patterns while honoring __nullable
annotations. Runtime safety is ensured through existing exception
handling mechanisms for untrusted memory reads, with the verifier
enforcing bounds checking and null validation. The series includes
selftests covering double and triple pointer arguments across
fentry/fexit/lsm programs and verifier context validation.
Background:
Prior to these changes, accessing multi-level pointer parameters or
return values through BPF trampoline context arrays resulted in
verification failures in btf_ctx_access, producing errors such as:
func '%s' arg%d type %s is not a struct
For example, consider a BPF program that logs an input parameter of type
struct posix_acl **:
SEC("fentry/__posix_acl_chmod")
int BPF_PROG(trace_posix_acl_chmod, struct posix_acl **ppacl, gfp_t gfp,
	     umode_t mode)
{
	bpf_printk("__posix_acl_chmod ppacl = %px\n", ppacl);
	return 0;
}
This program failed BPF verification with the following error:
libbpf: prog 'trace_posix_acl_chmod': -- BEGIN PROG LOAD LOG --
0: R1=ctx() R10=fp0
; int BPF_PROG(trace_posix_acl_chmod, struct posix_acl **ppacl,
gfp_t gfp, umode_t mode) @ posix_acl_monitor.bpf.c:23
0: (79) r6 = *(u64 *)(r1 +16) ; R1=ctx() R6_w=scalar()
1: (79) r1 = *(u64 *)(r1 +0)
func '__posix_acl_chmod' arg0 type PTR is not a struct
invalid bpf_context access off=0 size=8
processed 2 insns (limit 1000000) max_states_per_insn 0 total_states 0
peak_states 0 mark_read 0
-- END PROG LOAD LOG --
The common workaround involved using helper functions to fetch parameter
values by passing the address of the context array entry:
SEC("fentry/__posix_acl_chmod")
int BPF_PROG(trace_posix_acl_chmod, struct posix_acl **ppacl, gfp_t gfp,
	     umode_t mode)
{
	struct posix_acl **p;

	bpf_probe_read_kernel(&p, sizeof(p), &ctx[0]);
	bpf_printk("__posix_acl_chmod before %px\n", p);
	return 0;
}
This approach introduced helper call overhead and made multi-level pointer
parameters inconsistent with how other parameters are accessed.
Improvements:
With this patch, trampoline programs can directly read parameters and
dereference memory using load instructions, eliminating helper call
overhead and ensuring consistent parameter handling. For example, the
following helper call sequence:
{
	struct posix_acl **pp;
	struct posix_acl *p;

	bpf_probe_read_kernel(&pp, sizeof(pp), &ctx[0]);
	bpf_probe_read_kernel(&p, sizeof(p), pp);
	...
}
can be replaced by two load instructions implementing a single C
statement:
{
	struct posix_acl *p = *ppacl;
	...
}
Design Rationale: PTR_TO_MEM vs SCALAR
The verifier assigns SCALAR type to single-level non-struct pointers
(void *, int *). For multi-level pointers, I selected PTR_TO_MEM so that
the first level of dereference is a single load instruction, with the
loaded value (and any deeper dereference level) treated as SCALAR. This
eliminates the helper call otherwise needed to dereference a parameter,
replacing it with a plain load (e.g., void *ptr = *pptr).
Access safety is maintained through existing verify-time checks,
exception handling, and kernel virtual address range boundary checks:
- User-mode memory address access is prevented by runtime virtual
address range checks for untrusted PTR_TO_MEM
- Invalid kernel address space accesses are intercepted by the
exception handler for untrusted PTR_TO_MEM
- Trusted PTR_TO_MEM access safety is maintained at verify time
v1 -> v2: corrected maintainer's email
Slava Imameev (2):
bpf: Support multi-level pointer params via PTR_TO_MEM for trampolines
selftests/bpf: Add trampolines multi-level pointer params test
coverage
include/linux/bpf.h | 3 +-
kernel/bpf/btf.c | 54 ++-
kernel/bpf/verifier.c | 4 +-
net/bpf/test_run.c | 128 ++++++
.../prog_tests/fentry_fexit_multi_level_ptr.c | 204 +++++++++
.../selftests/bpf/prog_tests/verifier.c | 2 +
.../progs/fentry_fexit_pptr_nullable_test.c | 52 +++
.../bpf/progs/fentry_fexit_pptr_test.c | 60 +++
.../bpf/progs/fentry_fexit_void_ppptr_test.c | 31 ++
.../bpf/progs/fentry_fexit_void_pptr_test.c | 64 +++
.../bpf/progs/verifier_ctx_multilevel_ptr.c | 429 ++++++++++++++++++
11 files changed, 1021 insertions(+), 10 deletions(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/fentry_fexit_multi_level_ptr.c
create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_pptr_nullable_test.c
create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_pptr_test.c
create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_void_ppptr_test.c
create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_void_pptr_test.c
create mode 100644 tools/testing/selftests/bpf/progs/verifier_ctx_multilevel_ptr.c
--
2.50.1 (Apple Git-155)
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH bpf-next v2 1/2] bpf: Support multi-level pointer params via PTR_TO_MEM for trampolines
2026-02-17 22:13 [PATCH bpf-next v2 0/2] bpf: Add multi-level pointer parameter support for trampolines Slava Imameev
@ 2026-02-17 22:13 ` Slava Imameev
2026-02-17 22:13 ` [PATCH bpf-next v2 2/2] selftests/bpf: Add trampolines multi-level pointer params test coverage Slava Imameev
2026-02-18 1:48 ` [PATCH bpf-next v2 0/2] bpf: Add multi-level pointer parameter support for trampolines Eduard Zingerman
2 siblings, 0 replies; 11+ messages in thread
From: Slava Imameev @ 2026-02-17 22:13 UTC (permalink / raw)
To: ast, daniel, andrii
Cc: martin.lau, eddyz87, song, yonghong.song, john.fastabend, kpsingh,
sdf, haoluo, jolsa, davem, edumazet, kuba, pabeni, horms, shuah,
linux-kernel, bpf, netdev, linux-kselftest, linux-open-source,
Slava Imameev
Add BPF verifier support for multi-level pointer parameters and return
values in BPF trampolines. The implementation treats these parameters as
PTR_TO_MEM with read-only semantics, applying either untrusted or trusted
access patterns while honoring __nullable annotations. Runtime safety is
ensured through existing exception handling mechanisms for untrusted
memory reads, with the verifier enforcing bounds checking and null
validation.
Signed-off-by: Slava Imameev <slava.imameev@crowdstrike.com>
---
include/linux/bpf.h | 3 ++-
kernel/bpf/btf.c | 54 ++++++++++++++++++++++++++++++++++++-------
kernel/bpf/verifier.c | 4 +++-
3 files changed, 51 insertions(+), 10 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index cd9b96434904..6dd6a85cf13a 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1052,7 +1052,8 @@ struct bpf_insn_access_aux {
 			struct btf *btf;
 			u32 btf_id;
 			u32 ref_obj_id;
-		};
+		}; /* base type PTR_TO_BTF_ID */
+		u32 mem_size; /* base type PTR_TO_MEM */
 	};
 	struct bpf_verifier_log *log; /* for verbose logs */
 	bool is_retval; /* is accessing function return value ? */
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 7708958e3fb8..7b7cb30cdc98 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -760,6 +760,21 @@ const struct btf_type *btf_type_resolve_func_ptr(const struct btf *btf,
 	return NULL;
 }
 
+static bool is_multilevel_ptr(const struct btf *btf, const struct btf_type *t)
+{
+	u32 depth = 0;
+
+	if (!btf_type_is_ptr(t))
+		return false;
+
+	do {
+		depth += 1;
+		t = btf_type_skip_modifiers(btf, t->type, NULL);
+	} while (btf_type_is_ptr(t) && depth < 2);
+
+	return depth > 1;
+}
+
 /* Types that act only as a source, not sink or intermediate
  * type when resolving.
  */
@@ -6790,6 +6805,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 	const char *tag_value;
 	u32 nr_args, arg;
 	int i, ret;
+	bool trusted, nullable;
 
 	if (off % 8) {
 		bpf_log(log, "func '%s' offset %d is not multiple of 8\n",
@@ -6927,12 +6943,8 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 		}
 	}
 
-	info->reg_type = PTR_TO_BTF_ID;
-	if (prog_args_trusted(prog))
-		info->reg_type |= PTR_TRUSTED;
-
-	if (btf_param_match_suffix(btf, &args[arg], "__nullable"))
-		info->reg_type |= PTR_MAYBE_NULL;
 
+	trusted = prog_args_trusted(prog);
+	nullable = btf_param_match_suffix(btf, &args[arg], "__nullable");
 	if (prog->expected_attach_type == BPF_TRACE_RAW_TP) {
 		struct btf *btf = prog->aux->attach_btf;
@@ -6953,7 +6965,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 			if (strcmp(tname, raw_tp_null_args[i].func))
 				continue;
 			if (raw_tp_null_args[i].mask & (0x1ULL << (arg * 4)))
-				info->reg_type |= PTR_MAYBE_NULL;
+				nullable = true;
 			/* Is the current arg IS_ERR? */
 			if (raw_tp_null_args[i].mask & (0x2ULL << (arg * 4)))
 				ptr_err_raw_tp = true;
@@ -6964,9 +6976,35 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 		 * argument as PTR_MAYBE_NULL.
 		 */
 		if (i == ARRAY_SIZE(raw_tp_null_args) && btf_is_module(btf))
-			info->reg_type |= PTR_MAYBE_NULL;
+			nullable = true;
 	}
 
+	if (is_multilevel_ptr(btf, t)) {
+		/* If it can be IS_ERR at runtime, mark as scalar. */
+		if (ptr_err_raw_tp) {
+			bpf_log(log, "marking func '%s' pointer arg%d as scalar as it may encode error",
+				tname, arg);
+			info->reg_type = SCALAR_VALUE;
+		} else {
+			info->reg_type = PTR_TO_MEM | MEM_RDONLY;
+			if (!trusted)
+				info->reg_type |= PTR_UNTRUSTED;
+			/* For the return value, be conservative and mark it nullable. */
+			if (nullable || arg == nr_args)
+				info->reg_type |= PTR_MAYBE_NULL;
+			/* This is a pointer to another pointer. */
+			info->mem_size = sizeof(void *);
+		}
+		return true;
+	}
+
+	info->reg_type = PTR_TO_BTF_ID;
+	if (trusted)
+		info->reg_type |= PTR_TRUSTED;
+
+	if (nullable)
+		info->reg_type |= PTR_MAYBE_NULL;
+
 	if (tgt_prog) {
 		enum bpf_prog_type tgt_type;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 0162f946032f..5de56336e169 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -6311,7 +6311,7 @@ static int check_ctx_access(struct bpf_verifier_env *env, int insn_idx, int off,
 				off);
 			return -EACCES;
 		}
-	} else {
+	} else if (base_type(info->reg_type) != PTR_TO_MEM) {
 		env->insn_aux_data[insn_idx].ctx_field_size = info->ctx_field_size;
 	}
 	/* remember the offset of last byte accessed in ctx */
@@ -7771,6 +7771,8 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 				regs[value_regno].btf = info.btf;
 				regs[value_regno].btf_id = info.btf_id;
 				regs[value_regno].ref_obj_id = info.ref_obj_id;
+			} else if (base_type(info.reg_type) == PTR_TO_MEM) {
+				regs[value_regno].mem_size = info.mem_size;
 			}
 		}
 	regs[value_regno].type = info.reg_type;
--
2.50.1 (Apple Git-155)
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH bpf-next v2 2/2] selftests/bpf: Add trampolines multi-level pointer params test coverage
2026-02-17 22:13 [PATCH bpf-next v2 0/2] bpf: Add multi-level pointer parameter support for trampolines Slava Imameev
2026-02-17 22:13 ` [PATCH bpf-next v2 1/2] bpf: Support multi-level pointer params via PTR_TO_MEM " Slava Imameev
@ 2026-02-17 22:13 ` Slava Imameev
2026-02-17 22:47 ` bot+bpf-ci
2026-02-18 9:25 ` kernel test robot
2026-02-18 1:48 ` [PATCH bpf-next v2 0/2] bpf: Add multi-level pointer parameter support for trampolines Eduard Zingerman
2 siblings, 2 replies; 11+ messages in thread
From: Slava Imameev @ 2026-02-17 22:13 UTC (permalink / raw)
To: ast, daniel, andrii
Cc: martin.lau, eddyz87, song, yonghong.song, john.fastabend, kpsingh,
sdf, haoluo, jolsa, davem, edumazet, kuba, pabeni, horms, shuah,
linux-kernel, bpf, netdev, linux-kselftest, linux-open-source,
Slava Imameev
Multi-level pointer params and return value test coverage for BPF
trampolines:
- fentry/fexit programs covering struct and void double/triple pointer
parameters
- nullable pointer cases to validate required NULL checks
- verifier context tests for lsm to check trusted parameter handling
- verifier context tests to exercise PTR_TO_MEM sizing and read-only
  behavior
- verifier BPF helper tests to validate that verifier behavior is unchanged
Signed-off-by: Slava Imameev <slava.imameev@crowdstrike.com>
---
net/bpf/test_run.c | 128 ++++++
.../prog_tests/fentry_fexit_multi_level_ptr.c | 204 +++++++++
.../selftests/bpf/prog_tests/verifier.c | 2 +
.../progs/fentry_fexit_pptr_nullable_test.c | 52 +++
.../bpf/progs/fentry_fexit_pptr_test.c | 60 +++
.../bpf/progs/fentry_fexit_void_ppptr_test.c | 31 ++
.../bpf/progs/fentry_fexit_void_pptr_test.c | 64 +++
.../bpf/progs/verifier_ctx_multilevel_ptr.c | 429 ++++++++++++++++++
8 files changed, 970 insertions(+)
create mode 100644 tools/testing/selftests/bpf/prog_tests/fentry_fexit_multi_level_ptr.c
create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_pptr_nullable_test.c
create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_pptr_test.c
create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_void_ppptr_test.c
create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_void_pptr_test.c
create mode 100644 tools/testing/selftests/bpf/progs/verifier_ctx_multilevel_ptr.c
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 178c4738e63b..19c82ae9bfe6 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -24,6 +24,8 @@
#include <net/netdev_rx_queue.h>
#include <net/xdp.h>
#include <net/netfilter/nf_bpf_link.h>
+#include <linux/set_memory.h>
+#include <linux/string.h>
#define CREATE_TRACE_POINTS
#include <trace/events/bpf_test_run.h>
@@ -563,6 +565,41 @@ noinline int bpf_fentry_test10(const void *a)
 	return (long)a;
 }
 
+struct bpf_fentry_test_pptr_t {
+	int value;
+};
+
+noinline int bpf_fentry_test11_pptr_nullable(struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+	if (!pptr__nullable)
+		return -1;
+
+	return (*pptr__nullable)->value;
+}
+
+noinline u32 **bpf_fentry_test12_pptr(u32 id, u32 **pptr)
+{
+	/* prevent DCE */
+	asm volatile("" : "+r"(id));
+	asm volatile("" : "+r"(pptr));
+	return pptr;
+}
+
+noinline u8 bpf_fentry_test13_pptr(void **pptr)
+{
+	void *ptr;
+
+	return copy_from_kernel_nofault(&ptr, pptr, sizeof(pptr)) == 0;
+}
+
+/* Test the verifier can handle multi-level pointer types with qualifiers. */
+noinline void ***bpf_fentry_test14_ppptr(void **volatile *const ppptr)
+{
+	/* prevent DCE */
+	asm volatile("" :: "r"(ppptr) : "memory");
+	return (void ***)ppptr;
+}
+
 noinline void bpf_fentry_test_sinfo(struct skb_shared_info *sinfo)
 {
 }
@@ -670,20 +707,110 @@ static void *bpf_test_init(const union bpf_attr *kattr, u32 user_size,
 	return data;
 }
 
+static void *create_bad_kaddr(void)
+{
+	/*
+	 * Try to get an address that passes kernel range checks but causes
+	 * a page fault handler invocation if accessed from a BPF program.
+	 */
+#if defined(CONFIG_ARCH_HAS_SET_MEMORY) && defined(CONFIG_X86)
+	void *addr = vmalloc(PAGE_SIZE);
+
+	if (!addr)
+		return NULL;
+	/* Make it non-present - any access will fault */
+	if (set_memory_np((unsigned long)addr, 1)) {
+		vfree(addr);
+		return NULL;
+	}
+	return addr;
+#elif defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
+	struct page *page = alloc_page(GFP_KERNEL);
+
+	if (!page)
+		return NULL;
+	/* Remove from direct map - any access will fault */
+	if (set_direct_map_invalid_noflush(page)) {
+		__free_page(page);
+		return NULL;
+	}
+	flush_tlb_kernel_range((unsigned long)page_address(page),
+			       (unsigned long)page_address(page) + PAGE_SIZE);
+	return page_address(page);
+#endif
+	return NULL;
+}
+
+static void free_bad_kaddr(void *addr)
+{
+	if (!addr)
+		return;
+
+	/*
+	 * Free an invalid test address created by create_bad_kaddr().
+	 * Restores the page to present state before freeing.
+	 */
+#if defined(CONFIG_ARCH_HAS_SET_MEMORY) && defined(CONFIG_X86)
+	set_memory_p((unsigned long)addr, 1);
+	vfree(addr);
+#elif defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
+	struct page *page = virt_to_page(addr);
+
+	set_direct_map_default_noflush(page);
+	flush_tlb_kernel_range((unsigned long)addr,
+			       (unsigned long)addr + PAGE_SIZE);
+	__free_page(page);
+#endif
+}
+
+#define CONSUME(val) do {				\
+	typeof(val) __var = (val);			\
+	__asm__ __volatile__("" : "+r" (__var));	\
+	(void)__var;					\
+} while (0)
+
 int bpf_prog_test_run_tracing(struct bpf_prog *prog,
 			      const union bpf_attr *kattr,
 			      union bpf_attr __user *uattr)
 {
 	struct bpf_fentry_test_t arg = {};
+	struct bpf_fentry_test_pptr_t ts = { .value = 1979 };
+	struct bpf_fentry_test_pptr_t *ptr = &ts;
+	void *kaddr = NULL;
+	u32 *u32_ptr = (u32 *)29;
 	u16 side_effect = 0, ret = 0;
 	int b = 2, err = -EFAULT;
 	u32 retval = 0;
+	const char *attach_name;
 
 	if (kattr->test.flags || kattr->test.cpu || kattr->test.batch_size)
 		return -EINVAL;
 
+	attach_name = prog->aux->attach_func_name;
+	if (!attach_name)
+		attach_name = "!";
+
 	switch (prog->expected_attach_type) {
 	case BPF_TRACE_FENTRY:
+		if (!strcmp(attach_name, "bpf_fentry_test11_pptr_nullable")) {
+			CONSUME(bpf_fentry_test11_pptr_nullable(&ptr));
+			break;
+		} else if (!strcmp(attach_name, "bpf_fentry_test12_pptr")) {
+			CONSUME(bpf_fentry_test12_pptr(0, &u32_ptr));
+			CONSUME(bpf_fentry_test12_pptr(1, (u32 **)17));
+			break;
+		} else if (!strcmp(attach_name, "bpf_fentry_test13_pptr")) {
+			kaddr = create_bad_kaddr();
+			WARN_ON(!kaddr);
+			CONSUME(bpf_fentry_test13_pptr(kaddr));
+			CONSUME(bpf_fentry_test13_pptr((void **)19));
+			CONSUME(bpf_fentry_test13_pptr(ERR_PTR(-ENOMEM)));
+			break;
+		} else if (!strcmp(attach_name, "bpf_fentry_test14_ppptr")) {
+			CONSUME(bpf_fentry_test14_ppptr(ERR_PTR(-ENOMEM)));
+			break;
+		}
+		fallthrough;
 	case BPF_TRACE_FEXIT:
 	case BPF_TRACE_FSESSION:
 		if (bpf_fentry_test1(1) != 2 ||
@@ -717,6 +844,7 @@ int bpf_prog_test_run_tracing(struct bpf_prog *prog,
 	err = 0;
 out:
+	free_bad_kaddr(kaddr);
 	trace_bpf_test_finish(&err);
 	return err;
 }
diff --git a/tools/testing/selftests/bpf/prog_tests/fentry_fexit_multi_level_ptr.c b/tools/testing/selftests/bpf/prog_tests/fentry_fexit_multi_level_ptr.c
new file mode 100644
index 000000000000..48cb8a3d3967
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/fentry_fexit_multi_level_ptr.c
@@ -0,0 +1,204 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 CrowdStrike, Inc. */
+#include <test_progs.h>
+#include "fentry_fexit_pptr_nullable_test.skel.h"
+#include "fentry_fexit_pptr_test.skel.h"
+#include "fentry_fexit_void_pptr_test.skel.h"
+#include "fentry_fexit_void_ppptr_test.skel.h"
+
+static void test_fentry_fexit_pptr_nullable(void)
+{
+	struct fentry_fexit_pptr_nullable_test *skel = NULL;
+	int err, prog_fd;
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+
+	skel = fentry_fexit_pptr_nullable_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "fentry_fexit_pptr_nullable_test__open_and_load"))
+		return;
+
+	err = fentry_fexit_pptr_nullable_test__attach(skel);
+	if (!ASSERT_OK(err, "fentry_fexit_pptr_nullable_test__attach"))
+		goto cleanup;
+
+	/* Trigger fentry/fexit programs. */
+	prog_fd = bpf_program__fd(skel->progs.test_fentry_pptr_nullable);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(topts.retval, 0, "test_run retval");
+
+	/* Verify fentry was called and captured the correct value. */
+	ASSERT_EQ(skel->bss->fentry_called, 1, "fentry_called");
+	ASSERT_EQ(skel->bss->fentry_ptr_field_value, 1979, "fentry_ptr_field_value");
+
+	/* Verify fexit captured correct values and return code. */
+	ASSERT_EQ(skel->bss->fexit_called, 1, "fexit_called");
+	ASSERT_EQ(skel->bss->fexit_ptr_field_value, 1979, "fexit_ptr_field_value");
+	ASSERT_EQ(skel->bss->fexit_retval, 1979, "fexit_retval");
+
+cleanup:
+	fentry_fexit_pptr_nullable_test__destroy(skel);
+}
+
+static void test_fentry_fexit_pptr(void)
+{
+	struct fentry_fexit_pptr_test *skel = NULL;
+	int err, prog_fd, i;
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+
+	skel = fentry_fexit_pptr_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "fentry_fexit_pptr_test__open_and_load"))
+		return;
+
+	/* Poison some values which should be modified by BPF programs. */
+	for (i = 0; i < ARRAY_SIZE(skel->bss->telemetry); ++i) {
+		skel->bss->telemetry[i].id = 30;
+		skel->bss->telemetry[i].fentry_pptr = 31;
+		skel->bss->telemetry[i].fentry_ptr = 32;
+		skel->bss->telemetry[i].fexit_pptr = 33;
+		skel->bss->telemetry[i].fexit_ptr = 34;
+		skel->bss->telemetry[i].fexit_ret_pptr = 35;
+		skel->bss->telemetry[i].fexit_ret_ptr = 36;
+	}
+
+	err = fentry_fexit_pptr_test__attach(skel);
+	if (!ASSERT_OK(err, "fentry_fexit_pptr_test__attach"))
+		goto cleanup;
+
+	/* Trigger fentry/fexit programs. */
+	prog_fd = bpf_program__fd(skel->progs.test_fentry_pptr);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(topts.retval, 0, "test_run retval");
+
+	for (i = 0; i < ARRAY_SIZE(skel->bss->telemetry); ++i) {
+		ASSERT_TRUE(skel->bss->telemetry[i].id == 0 ||
+			    skel->bss->telemetry[i].id == 1, "id");
+		if (skel->bss->telemetry[i].id == 0) {
+			/* Verify fentry captured the correct value. */
+			ASSERT_EQ(skel->bss->telemetry[i].fentry_called, 1, "fentry_called");
+			ASSERT_EQ(skel->bss->telemetry[i].fentry_ptr, (u64)29, "fentry_ptr");
+
+			/* Verify fexit captured correct values and return address. */
+			ASSERT_EQ(skel->bss->telemetry[i].fexit_called, 1, "fexit_called");
+			ASSERT_EQ(skel->bss->telemetry[i].fexit_pptr,
+				  skel->bss->telemetry[i].fentry_pptr, "fexit_pptr");
+			ASSERT_EQ(skel->bss->telemetry[i].fexit_ptr, (u64)29, "fexit_ptr");
+			ASSERT_EQ(skel->bss->telemetry[i].fexit_ret_pptr,
+				  skel->bss->telemetry[i].fentry_pptr, "fexit_ret_pptr");
+			ASSERT_EQ(skel->bss->telemetry[i].fexit_ret_ptr, (u64)29, "fexit_ret_ptr");
+		} else if (skel->bss->telemetry[i].id == 1) {
+			/* Verify fentry captured the correct value. */
+			ASSERT_EQ(skel->bss->telemetry[i].fentry_called, 1, "fentry_called");
+			ASSERT_EQ(skel->bss->telemetry[i].fentry_pptr, 17, "fentry_pptr");
+
+			/*
+			 * Verify fexit captured correct values and return address;
+			 * the fentry_ptr value depends on kernel address space layout
+			 * and whether a page is mapped at NULL.
+			 */
+			ASSERT_EQ(skel->bss->telemetry[i].fexit_called, 1, "fexit_called");
+			ASSERT_EQ(skel->bss->telemetry[i].fexit_pptr, 17, "fexit_pptr");
+			ASSERT_EQ(skel->bss->telemetry[i].fexit_ptr,
+				  skel->bss->telemetry[i].fentry_ptr, "fexit_ptr");
+			ASSERT_EQ(skel->bss->telemetry[i].fexit_ret_pptr, 17, "fexit_ret_pptr");
+			ASSERT_EQ(skel->bss->telemetry[i].fexit_ret_ptr,
+				  skel->bss->telemetry[i].fentry_ptr, "fexit_ret_ptr");
+		}
+	}
+
+cleanup:
+	fentry_fexit_pptr_test__destroy(skel);
+}
+
+static void test_fentry_fexit_void_pptr(void)
+{
+	struct fentry_fexit_void_pptr_test *skel = NULL;
+	int err, prog_fd, i;
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+
+	skel = fentry_fexit_void_pptr_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "fentry_fexit_void_pptr_test__open_and_load"))
+		return;
+
+	/* Poison some values which should be modified by BPF programs. */
+	for (i = 0; i < ARRAY_SIZE(skel->bss->telemetry); ++i) {
+		skel->bss->telemetry[i].fentry_pptr = 30;
+		skel->bss->telemetry[i].fentry_ptr = 31;
+		skel->bss->telemetry[i].fexit_pptr = 32;
+		skel->bss->telemetry[i].fexit_ptr = 33;
+	}
+
+	err = fentry_fexit_void_pptr_test__attach(skel);
+	if (!ASSERT_OK(err, "fentry_fexit_void_pptr_test__attach"))
+		goto cleanup;
+
+	/* Trigger fentry/fexit programs. */
+	prog_fd = bpf_program__fd(skel->progs.test_fentry_void_pptr);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(topts.retval, 0, "test_run retval");
+	for (i = 0; i < ARRAY_SIZE(skel->bss->telemetry); ++i) {
+		ASSERT_EQ(skel->bss->telemetry[i].fentry_called, 1, "fentry_called");
+		ASSERT_EQ(skel->bss->telemetry[i].fexit_called, 1, "fexit_called");
+		ASSERT_EQ(skel->bss->telemetry[i].fentry_pptr, skel->bss->telemetry[i].fexit_pptr,
+			  "fentry_pptr == fexit_pptr");
+		ASSERT_EQ(skel->bss->telemetry[i].fexit_ptr, skel->bss->telemetry[i].fentry_ptr,
+			  "fexit_ptr");
+		ASSERT_EQ(skel->bss->telemetry[i].fentry_pptr_addr_valid,
+			  skel->bss->telemetry[i].fexit_pptr_addr_valid, "fexit_pptr_addr_valid");
+		if (!skel->bss->telemetry[i].fentry_pptr_addr_valid) {
+			/* Should be set to 0 by kernel address boundaries check or an exception handler. */
+			ASSERT_EQ(skel->bss->telemetry[i].fentry_ptr, 0, "fentry_ptr");
+			ASSERT_EQ(skel->bss->telemetry[i].fexit_ptr, 0, "fexit_ptr");
+		}
+	}
+cleanup:
+	fentry_fexit_void_pptr_test__destroy(skel);
+}
+
+static void test_fentry_fexit_void_ppptr(void)
+{
+	struct fentry_fexit_void_ppptr_test *skel = NULL;
+	int err, prog_fd;
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+
+	skel = fentry_fexit_void_ppptr_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "fentry_fexit_void_ppptr_test__open_and_load"))
+		return;
+
+	/* Poison some values which should be modified by BPF programs */
+	skel->bss->fentry_pptr = 31;
+
+	err = fentry_fexit_void_ppptr_test__attach(skel);
+	if (!ASSERT_OK(err, "fentry_fexit_void_ppptr_test__attach"))
+		goto cleanup;
+
+	/* Trigger fentry/fexit programs */
+	prog_fd = bpf_program__fd(skel->progs.test_fentry_void_ppptr);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(topts.retval, 0, "test_run retval");
+
+	/* Verify invalid memory access results in zeroed register */
+	ASSERT_EQ(skel->bss->fentry_called, 1, "fentry_called");
+	ASSERT_EQ(skel->bss->fentry_pptr, 0, "fentry_pptr");
+
+	/* Verify fexit captured correct values and return value */
+	ASSERT_EQ(skel->bss->fexit_called, 1, "fexit_called");
+	ASSERT_EQ(skel->bss->fexit_retval, (u64)ERR_PTR(-ENOMEM), "fexit_retval");
+
+cleanup:
+	fentry_fexit_void_ppptr_test__destroy(skel);
+}
+
+void test_fentry_fexit_multi_level_ptr(void)
+{
+	if (test__start_subtest("pptr_nullable"))
+		test_fentry_fexit_pptr_nullable();
+	if (test__start_subtest("pptr"))
+		test_fentry_fexit_pptr();
+	if (test__start_subtest("void_pptr"))
+		test_fentry_fexit_void_pptr();
+	if (test__start_subtest("void_ppptr"))
+		test_fentry_fexit_void_ppptr();
+}
diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 8cdfd74c95d7..5bcc6406c0b2 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -115,6 +115,7 @@
#include "verifier_lsm.skel.h"
#include "verifier_jit_inline.skel.h"
#include "irq.skel.h"
+#include "verifier_ctx_multilevel_ptr.skel.h"
#define MAX_ENTRIES 11
@@ -259,6 +260,7 @@ void test_verifier_lsm(void) { RUN(verifier_lsm); }
void test_irq(void) { RUN(irq); }
void test_verifier_mtu(void) { RUN(verifier_mtu); }
void test_verifier_jit_inline(void) { RUN(verifier_jit_inline); }
+void test_verifier_ctx_multilevel_ptr(void) { RUN(verifier_ctx_multilevel_ptr); }
static int init_test_val_map(struct bpf_object *obj, char *map_name)
{
diff --git a/tools/testing/selftests/bpf/progs/fentry_fexit_pptr_nullable_test.c b/tools/testing/selftests/bpf/progs/fentry_fexit_pptr_nullable_test.c
new file mode 100644
index 000000000000..b88d4a1ebba2
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/fentry_fexit_pptr_nullable_test.c
@@ -0,0 +1,52 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 CrowdStrike, Inc. */
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+struct bpf_fentry_test_pptr_t {
+	__u32 value;
+};
+
+__u32 fentry_called = 0;
+__u32 fentry_ptr_field_value = 0;
+__u32 fexit_called = 0;
+__u32 fexit_ptr_field_value = 0;
+__u32 fexit_retval = 0;
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+int BPF_PROG(test_fentry_pptr_nullable, struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+	struct bpf_fentry_test_pptr_t *ptr;
+
+	fentry_called = 1;
+	if (!pptr__nullable)
+		return 0;
+
+	ptr = *pptr__nullable;
+	if (!ptr)
+		return 0;
+
+	bpf_probe_read_kernel(&fentry_ptr_field_value, sizeof(fentry_ptr_field_value), &ptr->value);
+	return 0;
+}
+
+SEC("fexit/bpf_fentry_test11_pptr_nullable")
+int BPF_PROG(test_fexit_pptr_nullable, struct bpf_fentry_test_pptr_t **pptr__nullable, int ret)
+{
+	struct bpf_fentry_test_pptr_t *ptr;
+
+	fexit_called = 1;
+	fexit_retval = ret;
+	if (!pptr__nullable)
+		return 0;
+
+	ptr = *pptr__nullable;
+	if (!ptr)
+		return 0;
+
+	bpf_probe_read_kernel(&fexit_ptr_field_value, sizeof(fexit_ptr_field_value), &ptr->value);
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/fentry_fexit_pptr_test.c b/tools/testing/selftests/bpf/progs/fentry_fexit_pptr_test.c
new file mode 100644
index 000000000000..37764b030669
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/fentry_fexit_pptr_test.c
@@ -0,0 +1,60 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 CrowdStrike, Inc. */
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+#define TELEMETRY_COUNT 2
+
+struct {
+	__u32 id;
+	__u32 fentry_called;
+	__u32 fexit_called;
+	__u64 fentry_pptr;
+	__u64 fentry_ptr;
+	__u64 fexit_pptr;
+	__u64 fexit_ptr;
+	__u64 fexit_ret_pptr;
+	__u64 fexit_ret_ptr;
+} telemetry[TELEMETRY_COUNT];
+
+volatile unsigned int current_index = 0;
+
+SEC("fentry/bpf_fentry_test12_pptr")
+int BPF_PROG(test_fentry_pptr, __u32 id, __u32 **pptr)
+{
+	void *ptr;
+	unsigned int i = current_index;
+
+	if (i >= TELEMETRY_COUNT)
+		return 0;
+
+	if (bpf_probe_read_kernel(&ptr, sizeof(ptr), pptr) != 0)
+		ptr = NULL;
+
+	telemetry[i].id = id;
+	telemetry[i].fentry_called = 1;
+	telemetry[i].fentry_pptr = (__u64)pptr;
+	telemetry[i].fentry_ptr = (__u64)ptr;
+	return 0;
+}
+
+SEC("fexit/bpf_fentry_test12_pptr")
+int BPF_PROG(test_fexit_pptr, __u32 id, __u32 **pptr, __u32 **ret)
+{
+	unsigned int i = current_index;
+
+	if (i >= TELEMETRY_COUNT)
+		return 0;
+
+	telemetry[i].fexit_called = 1;
+	telemetry[i].fexit_pptr = (__u64)pptr;
+	telemetry[i].fexit_ptr = (__u64)*pptr;
+	telemetry[i].fexit_ret_pptr = (__u64)ret;
+	telemetry[i].fexit_ret_ptr = ret ? (__u64)*ret : 0;
+
+	current_index = i + 1;
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/fentry_fexit_void_ppptr_test.c b/tools/testing/selftests/bpf/progs/fentry_fexit_void_ppptr_test.c
new file mode 100644
index 000000000000..3e0e908f6eda
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/fentry_fexit_void_ppptr_test.c
@@ -0,0 +1,31 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 CrowdStrike, Inc. */
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+__u32 fentry_called = 0;
+__u32 fexit_called = 0;
+__u64 fentry_pptr = 0;
+__u64 fexit_retval = 0;
+
+typedef void **volatile *const ppvpc_t;
+
+SEC("fentry/bpf_fentry_test14_ppptr")
+int BPF_PROG(test_fentry_void_ppptr, ppvpc_t ppptr)
+{
+	fentry_called = 1;
+	/* Invalid memory access is fixed by boundaries check or exception handler */
+	fentry_pptr = (unsigned long)*ppptr;
+	return 0;
+}
+
+SEC("fexit/bpf_fentry_test14_ppptr")
+int BPF_PROG(test_fexit_void_ppptr, ppvpc_t ppptr, void ***ret)
+{
+	fexit_called = 1;
+	fexit_retval = ret ? (__u64)ret : 0;
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/fentry_fexit_void_pptr_test.c b/tools/testing/selftests/bpf/progs/fentry_fexit_void_pptr_test.c
new file mode 100644
index 000000000000..0ec86da97ec5
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/fentry_fexit_void_pptr_test.c
@@ -0,0 +1,64 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 CrowdStrike, Inc. */
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+#define TELEMETRY_COUNT 3
+
+struct {
+ __u32 fentry_called;
+ __u32 fexit_called;
+ __u32 fentry_pptr_addr_valid;
+ __u32 fexit_pptr_addr_valid;
+ __u64 fentry_pptr;
+ __u64 fentry_ptr;
+ __u64 fexit_pptr;
+ __u64 fexit_ptr;
+} telemetry[TELEMETRY_COUNT];
+
+volatile unsigned int current_index = 0;
+
+SEC("fentry/bpf_fentry_test13_pptr")
+int BPF_PROG(test_fentry_void_pptr, void **pptr)
+{
+ void *ptr;
+ unsigned int i = current_index;
+
+ if (i >= TELEMETRY_COUNT)
+ return 0;
+
+ telemetry[i].fentry_pptr_addr_valid =
+ (bpf_probe_read_kernel(&ptr, sizeof(ptr), pptr) == 0);
+ if (!telemetry[i].fentry_pptr_addr_valid)
+ ptr = NULL;
+
+ telemetry[i].fentry_called = 1;
+ telemetry[i].fentry_pptr = (__u64)pptr;
+ telemetry[i].fentry_ptr = (__u64)ptr;
+ return 0;
+}
+
+SEC("fexit/bpf_fentry_test13_pptr")
+int BPF_PROG(test_fexit_void_pptr, void **pptr, __u8 ret)
+{
+ unsigned int i = current_index;
+
+ if (i >= TELEMETRY_COUNT)
+ return 0;
+
+ telemetry[i].fexit_called = 1;
+ telemetry[i].fexit_pptr = (__u64)pptr;
+ telemetry[i].fexit_pptr_addr_valid = ret;
+
+ /*
+ * For invalid addresses, the destination register for *pptr is set
+ * to 0 by the BPF exception handler, JIT address range check, or
+ * the BPF interpreter.
+ */
+ telemetry[i].fexit_ptr = (__u64)*pptr;
+ current_index = i + 1;
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/verifier_ctx_multilevel_ptr.c b/tools/testing/selftests/bpf/progs/verifier_ctx_multilevel_ptr.c
new file mode 100644
index 000000000000..9635aed66ba4
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_ctx_multilevel_ptr.c
@@ -0,0 +1,429 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Verifier tests for double and triple pointer parameter handling
+ * Copyright (c) 2026 CrowdStrike, Inc.
+ */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include "bpf_misc.h"
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter (rdonly, untrusted, nullable) - valid ctx access")
+__success __retval(0)
+__naked void ctx_double_ptr_valid_load(void)
+{
+ asm volatile (" \
+ /* load double pointer - PTR_TO_MEM | MEM_RDONLY | PTR_UNTRUSTED | PTR_MAYBE_NULL */\
+ r2 = *(u64 *)(r1 + 0); \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter (rdonly, untrusted, nullable) - invalid load without null")
+__failure __msg("R2 invalid mem access 'rdonly_untrusted_mem_or_null'")
+__naked void ctx_double_ptr_load_no_check_nullable(void)
+{
+ asm volatile (" \
+ /* load double pointer - PTR_TO_MEM | MEM_RDONLY | PTR_UNTRUSTED | PTR_MAYBE_NULL */\
+ r2 = *(u64 *)(r1 + 0); \
+ /* \
+ * invalid dereference without check for NULL when a parameter \
+ * is marked nullable (PTR_MAYBE_NULL) \
+ */ \
+ r3 = *(u64 *)(r2 + 0); \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("fentry/bpf_fentry_test12_pptr")
+__description("fentry/double pointer parameter (rdonly, untrusted) - valid load without null")
+__success __retval(0)
+__naked void ctx_double_ptr_load_no_check(void)
+{
+ asm volatile (" \
+ /* load double pointer - PTR_TO_MEM | MEM_RDONLY | PTR_UNTRUSTED */\
+ r2 = *(u64 *)(r1 + 8); \
+ /* valid dereference without check for NULL as the parameter is not marked as nullable */\
+ r3 = *(u64 *)(r2 + 0); \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter (rdonly, untrusted, nullable) - valid load with null")
+__success __retval(0)
+__naked void ctx_double_ptr_readonly(void)
+{
+ asm volatile (" \
+ /* load double pointer - PTR_TO_MEM | MEM_RDONLY | PTR_UNTRUSTED | PTR_MAYBE_NULL */\
+ r2 = *(u64 *)(r1 + 0); \
+ if r2 == 0 goto l0_%=; /* check for null */\
+ r3 = *(u64 *)(r2 + 0); \
+l0_%=: \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter (rdonly, untrusted, nullable) - valid load with arbitrary offset")
+__success __retval(0)
+__naked void ctx_double_ptr_valid_load_with_offset(void)
+{
+ asm volatile (" \
+ /* load double pointer - PTR_TO_MEM | MEM_RDONLY | PTR_UNTRUSTED */\
+ r2 = *(u64 *)(r1 + 0); \
+ if r2 == 0 goto l0_%=; /* check for null (PTR_MAYBE_NULL) */\
+ /* load with arbitrary offset is protected by an exception handler */\
+ r3 = *(u64 *)(r2 + 0x1000); \
+l0_%=: \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter (rdonly, untrusted, nullable) - invalid load with double dereference with offset")
+__failure __msg("R3 invalid mem access 'scalar'")
+__naked void ctx_double_ptr_invalid_load_with_offset(void)
+{
+ asm volatile (" \
+ /* load double pointer - PTR_TO_MEM | MEM_RDONLY | PTR_UNTRUSTED */\
+ r2 = *(u64 *)(r1 + 0); \
+ if r2 == 0 goto l0_%=; /* check for null (PTR_MAYBE_NULL) */\
+ r3 = *(u64 *)(r2 + 0); \
+ r4 = *(u64 *)(r3 + 0x1000); \
+l0_%=: \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter (rdonly, untrusted, nullable) - invalid narrow load")
+__failure __msg("size 4 must be 8")
+__naked void ctx_double_ptr_size_check(void)
+{
+ asm volatile (" \
+ r2 = *(u32 *)(r1 + 0); /* invalid narrow load */\
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter (rdonly, untrusted, nullable) - invalid store to read only memory")
+__failure __msg("R2 cannot write into rdonly_untrusted_mem")
+__naked void ctx_double_ptr_write_readonly(void)
+{
+ asm volatile (" \
+ /* load double pointer - PTR_TO_MEM | MEM_RDONLY | PTR_UNTRUSTED | PTR_MAYBE_NULL */\
+ r2 = *(u64 *)(r1 + 0); \
+ if r2 == 0 goto l0_%=; /* check for null */\
+ *(u64 *)(r2 + 0x0) = 1; /* read only */ \
+l0_%=: \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter (rdonly, untrusted, nullable) - invalid store with offset")
+__failure __msg("R2 cannot write into rdonly_untrusted_mem")
+__naked void ctx_double_ptr_write_offset_readonly(void)
+{
+ asm volatile (" \
+ /* load double pointer - PTR_TO_MEM | MEM_RDONLY | PTR_UNTRUSTED | PTR_MAYBE_NULL */\
+ r2 = *(u64 *)(r1 + 0); \
+ if r2 == 0 goto l0_%=; /* check for null */\
+ *(u64 *)(r2 + 0x1000) = 1; /* read only */ \
+l0_%=: \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter (rdonly, untrusted, nullable) - invalid store with offset, scalar type")
+__failure __msg("R3 invalid mem access 'scalar'")
+__naked void ctx_double_ptr_write2_readonly(void)
+{
+ asm volatile (" \
+ /* load double pointer - PTR_TO_MEM | MEM_RDONLY | PTR_UNTRUSTED | PTR_MAYBE_NULL */\
+ r2 = *(u64 *)(r1 + 0); \
+ if r2 == 0 goto l0_%=; /* check for null */\
+ r3 = *(u64 *)(r2 + 0); /* R3 is a scalar */ \
+ *(u64 *)(r3 + 0) = 1; /* scalar */ \
+l0_%=: \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("fentry/bpf_fentry_test14_ppptr")
+__description("fentry/triple pointer parameter (rdonly, untrusted, nullable) - invalid store to read only memory")
+__failure __msg("R2 cannot write into rdonly_untrusted_mem")
+__naked void ctx_double_ptr_write3_readonly(void)
+{
+ asm volatile (" \
+ /* load triple pointer - PTR_TO_MEM | MEM_RDONLY | PTR_UNTRUSTED | PTR_MAYBE_NULL */\
+ r2 = *(u64 *)(r1 + 0); \
+ if r2 == 0 goto l0_%=; /* check for null */\
+ *(u64 *)(r2 + 0) = 1; /* read only */ \
+l0_%=: \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("fentry/bpf_fentry_test14_ppptr")
+__description("fentry/triple pointer parameter (rdonly, untrusted, nullable) - invalid mem access (scalar)")
+__failure __msg("R3 invalid mem access 'scalar'")
+__naked void ctx_double_ptr_write4_readonly(void)
+{
+ asm volatile (" \
+ /* load triple pointer - PTR_TO_MEM | MEM_RDONLY | PTR_UNTRUSTED | PTR_MAYBE_NULL */\
+ r2 = *(u64 *)(r1 + 0); \
+ if r2 == 0 goto l0_%=; /* check for null (PTR_MAYBE_NULL) */\
+ r3 = *(u64 *)(r2 + 0); /* R3 type is scalar */ \
+ *(u64 *)(r3 + 0) = 1; /* mem access for scalar */ \
+l0_%=: \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("lsm/sb_eat_lsm_opts")
+__description("lsm/double pointer parameter (rdonly, trusted) - invalid load outside boundaries")
+__failure __msg("R2 min value is outside of the allowed memory range")
+__naked void sb_eat_lsm_opts_trusted_offset_outside_boundaries(void)
+{
+ asm volatile (" \
+ /* load double pointer - PTR_TO_MEM | MEM_RDONLY, PTR_UNTRUSTED is not set */\
+ r2 = *(u64 *)(r1 + 8); \
+ if r2 == 0 goto l0_%=; /* check for null */\
+ /* should fail as for a trusted parameter verifier checks boundaries */\
+ r3 = *(u64 *)(r2 + 0x1000); \
+l0_%=: \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("lsm/sb_eat_lsm_opts")
+__description("lsm/double pointer parameter (rdonly, trusted) - load within boundaries")
+__success
+__naked void sb_eat_lsm_opts_trusted_offset_within_boundaries(void)
+{
+ asm volatile (" \
+ /* load double pointer - PTR_TO_MEM | MEM_RDONLY , PTR_UNTRUSTED is not set */\
+ r2 = *(u64 *)(r1 + 8); \
+ if r2 == 0 goto l0_%=; /* check for null */\
+ /* \
+ * should pass as for a trusted parameter verifier checks boundaries \
+ * and access is within boundaries \
+ */ \
+ r3 = *(u64 *)(r2 + 0x0); \
+l0_%=: \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("lsm/sb_eat_lsm_opts")
+__description("lsm/double pointer parameter (rdonly, trusted) - load within boundaries, no check for null")
+__success
+__naked void sb_eat_lsm_opts_trusted_offset_within_boundaries_no_null_check(void)
+{
+ asm volatile (" \
+ /* load double pointer - PTR_TO_MEM | MEM_RDONLY , PTR_UNTRUSTED is not set */\
+ r2 = *(u64 *)(r1 + 8); \
+ /* \
+ * should pass as for a trusted parameter verifier checks boundaries \
+ * and PTR_MAYBE_NULL is not set \
+ */ \
+ r3 = *(u64 *)(r2 + 0x0); \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("lsm/sb_eat_lsm_opts")
+__description("lsm/double pointer parameter (rdonly, trusted) - invalid store within boundaries to read only mem")
+__failure __msg("R2 cannot write into rdonly_mem")
+__naked void sb_eat_lsm_opts_trusted_modification_within_boundaries(void)
+{
+ asm volatile (" \
+ /* load double pointer - should be PTR_TO_MEM | MEM_RDONLY , PTR_UNTRUSTED is not set */\
+ r2 = *(u64 *)(r1 + 8); \
+ if r2 == 0 goto l0_%=; /* check for null */\
+ *(u64 *)(r2 + 0x0) = 1; /* read only */ \
+l0_%=: \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("lsm/sb_eat_lsm_opts")
+__description("lsm/double pointer parameter (rdonly, trusted) - invalid store outside boundaries to read only mem")
+__failure __msg("R2 cannot write into rdonly_mem")
+__naked void sb_eat_lsm_opts_trusted_modification_outside_boundaries(void)
+{
+ asm volatile (" \
+ /* load double pointer - PTR_TO_MEM | MEM_RDONLY , PTR_UNTRUSTED is not set */\
+ r2 = *(u64 *)(r1 + 8); \
+ if r2 == 0 goto l0_%=; /* check for null */\
+ *(u64 *)(r2 + 0x1000) = 1; /* read only */ \
+l0_%=: \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("fexit/bpf_fentry_test12_pptr")
+__description("fexit/double pointer return (rdonly, untrusted, nullable) - valid load")
+__success __retval(0)
+__naked void ctx_double_ptr_return_load1(void)
+{
+ asm volatile (" \
+ /* load double pointer return value - PTR_TO_MEM | MEM_RDONLY | PTR_UNTRUSTED | PTR_MAYBE_NULL */\
+ r2 = *(u64 *)(r1 + 16); \
+ if r2 == 0 goto l0_%=; /* check for null */\
+ r3 = *(u64 *)(r2 + 0); /* R3 is a scalar */ \
+l0_%=: \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("fexit/bpf_fentry_test12_pptr")
+__description("fexit/double pointer return (rdonly, untrusted, nullable) - valid load with offset")
+__success __retval(0)
+__naked void ctx_double_ptr_return_load2(void)
+{
+ asm volatile (" \
+ /* load double pointer return value - PTR_TO_MEM | MEM_RDONLY | PTR_UNTRUSTED | PTR_MAYBE_NULL */\
+ r2 = *(u64 *)(r1 + 16); \
+ if r2 == 0 goto l0_%=; /* check for null */\
+ /* verifier doesn't check boundaries for an access protected by an exception handler */\
+ r3 = *(u64 *)(r2 - 0x100); \
+l0_%=: \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("fexit/bpf_fentry_test12_pptr")
+__description("fexit/double pointer return (rdonly, untrusted, nullable) - invalid load with double dereference")
+__failure __msg("R3 invalid mem access 'scalar'")
+__naked void ctx_double_ptr_return_load3(void)
+{
+ asm volatile (" \
+ /* load double pointer return value - PTR_TO_MEM | MEM_RDONLY | PTR_UNTRUSTED | PTR_MAYBE_NULL */\
+ r2 = *(u64 *)(r1 + 16); \
+ if r2 == 0 goto l0_%=; /* check for null */\
+ r3 = *(u64 *)(r2 + 0); /* R3 is a scalar */ \
+ r4 = *(u64 *)(r3 + 0); /* load from scalar */\
+l0_%=: \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("fexit/bpf_fentry_test12_pptr")
+__description("fexit/double pointer return (rdonly, untrusted, nullable) - invalid store to read only memory")
+__failure __msg("R2 cannot write into rdonly_untrusted_mem")
+__naked void ctx_double_ptr_return_write1(void)
+{
+ asm volatile (" \
+ /* load double pointer return value - PTR_TO_MEM | MEM_RDONLY | PTR_UNTRUSTED | PTR_MAYBE_NULL */\
+ r2 = *(u64 *)(r1 + 16); \
+ if r2 == 0 goto l0_%=; /* check for null */\
+ *(u64 *)(r2 + 0) = 1; /* R2 contains read only memory address */ \
+l0_%=: \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+SEC("fexit/bpf_fentry_test12_pptr")
+__description("fexit/double pointer return (rdonly, untrusted, nullable) - invalid store to read only memory with double dereference")
+__failure __msg("R3 invalid mem access 'scalar'")
+__naked void ctx_double_ptr_return_write2(void)
+{
+ asm volatile (" \
+ /* load double pointer return value - PTR_TO_MEM | MEM_RDONLY | PTR_UNTRUSTED | PTR_MAYBE_NULL */\
+ r2 = *(u64 *)(r1 + 16); \
+ if r2 == 0 goto l0_%=; /* check for null */\
+ r3 = *(u64 *)(r2 + 0); /* R3 is a scalar */ \
+ *(u64 *)(r3 + 0) = 1; /* mem access for scalar */ \
+l0_%=: \
+ r0 = 0; \
+ exit; \
+" ::: __clobber_all);
+}
+
+struct bpf_fentry_test_pptr_t;
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter (rdonly, untrusted, nullable) - bpf helpers with nullable var")
+__success __retval(0)
+int BPF_PROG(ctx_double_ptr_nullable_var_access_bpf_helpers,
+ struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+ /* Check compatibility with BPF helpers; NULL checks should not be required. */
+ void *ptr;
+
+ bpf_probe_read_kernel(&ptr, sizeof(ptr), pptr__nullable);
+ return 0;
+}
+
+SEC("fexit/bpf_fentry_test12_pptr")
+__description("fexit/double pointer return (rdonly, untrusted, nullable) - bpf helpers with return val")
+__success __retval(0)
+int BPF_PROG(ctx_double_ptr_return_access_bpf_helpers, __u32 id,
+ __u32 **pptr, __u32 **ret)
+{
+ /* Check compatibility with BPF helpers; NULL checks should not be required. */
+ void *ptr;
+
+ bpf_probe_read_kernel(&ptr, sizeof(ptr), pptr);
+ bpf_probe_read_kernel(&ptr, sizeof(ptr), ret);
+ return 0;
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter (rdonly, untrusted, nullable) - bpf helpers with nullable var, direct ctx pointer")
+__success __retval(0)
+int BPF_PROG(ctx_double_ptr_nullable_var_access_bpf_helpers_ctx,
+ struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+ /* Check compatibility with BPF helpers; NULL checks should not be required. */
+ void *ptr;
+
+ bpf_probe_read_kernel(&ptr, sizeof(ptr), &ctx[0] /*pptr__nullable*/);
+ return 0;
+}
+
+SEC("fexit/bpf_fentry_test12_pptr")
+__description("fexit/double pointer return (rdonly, untrusted, nullable) - bpf helpers with return val, direct ctx pointer")
+__success __retval(0)
+int BPF_PROG(ctx_double_ptr_return_access_bpf_helpers_ctx, __u32 id,
+ __u32 **pptr, __u32 **ret)
+{
+ /* Check compatibility with BPF helpers; NULL checks should not be required. */
+ void *ptr;
+
+ bpf_probe_read_kernel(&ptr, sizeof(ptr), &ctx[1] /*pptr*/);
+ bpf_probe_read_kernel(&ptr, sizeof(ptr), &ctx[2] /*ret*/);
+ return 0;
+}
+
+
+char _license[] SEC("license") = "GPL";
--
2.50.1 (Apple Git-155)
^ permalink raw reply related [flat|nested] 11+ messages in thread
* Re: [PATCH bpf-next v2 2/2] selftests/bpf: Add trampolines multi-level pointer params test coverage
2026-02-17 22:13 ` [PATCH bpf-next v2 2/2] selftests/bpf: Add trampolines multi-level pointer params test coverage Slava Imameev
@ 2026-02-17 22:47 ` bot+bpf-ci
2026-02-18 9:25 ` kernel test robot
1 sibling, 0 replies; 11+ messages in thread
From: bot+bpf-ci @ 2026-02-17 22:47 UTC (permalink / raw)
To: slava.imameev, ast, daniel, andrii
Cc: martin.lau, eddyz87, song, yonghong.song, john.fastabend, kpsingh,
sdf, haoluo, jolsa, davem, edumazet, kuba, pabeni, horms, shuah,
linux-kernel, bpf, netdev, linux-kselftest, linux-open-source,
slava.imameev, ast, andrii, daniel, martin.lau, eddyz87,
yonghong.song, clm, ihor.solodrai
[-- Attachment #1: Type: text/plain, Size: 2741 bytes --]
> diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
> index 178c4738e63b..19c82ae9bfe6 100644
> --- a/net/bpf/test_run.c
> +++ b/net/bpf/test_run.c
[ ... ]
> +static void *create_bad_kaddr(void)
> +{
> + /*
> + * Try to get an address that passes kernel range checks but causes
> + * a page fault handler invocation if accessed from a BPF program.
> + */
> +#if defined(CONFIG_ARCH_HAS_SET_MEMORY) && defined(CONFIG_X86)
The first branch requires both CONFIG_ARCH_HAS_SET_MEMORY and
CONFIG_X86. The second branch requires CONFIG_ARCH_HAS_SET_DIRECT_MAP.
PowerPC selects ARCH_HAS_SET_MEMORY (arch/powerpc/Kconfig) but not
ARCH_HAS_SET_DIRECT_MAP, so neither branch compiles in and the function
always returns NULL on that architecture.
> + void *addr = vmalloc(PAGE_SIZE);
> +
> + if (!addr)
> + return NULL;
> + /* Make it non-present - any access will fault */
> + if (set_memory_np((unsigned long)addr, 1)) {
> + vfree(addr);
> + return NULL;
> + }
> + return addr;
> +#elif defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
[ ... ]
> +#endif
> + return NULL;
> +}
[ ... ]
> +static void free_bad_kaddr(void *addr)
> +{
> + if (!addr)
> + return;
> +
> + /*
> + * Free an invalid test address created by get_invalid_address().
> + * Restores the page to present state before freeing.
> + */
The comment references get_invalid_address() but the function that
creates the address is create_bad_kaddr().
> +#if defined(CONFIG_ARCH_HAS_SET_MEMORY) && defined(CONFIG_X86)
[ ... ]
> +#endif
> +}
[ ... ]
> + } else if (!strcmp(attach_name, "bpf_fentry_test13_pptr")) {
> + kaddr = create_bad_kaddr();
> + WARN_ON(!kaddr);
Since create_bad_kaddr() returns NULL on PowerPC (which supports BPF
trampolines via HAVE_DYNAMIC_FTRACE_WITH_ARGS), would this WARN_ON fire
every time the bpf_fentry_test13_pptr test runs on that architecture?
> + CONSUME(bpf_fentry_test13_pptr(kaddr));
> + CONSUME(bpf_fentry_test13_pptr((void **)19));
> + CONSUME(bpf_fentry_test13_pptr(ERR_PTR(-ENOMEM)));
> + break;
[ ... ]
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/22118172993
AI-authorship-score: low
AI-authorship-explanation: Stale comment referencing a renamed function and an architecture coverage gap with WARN_ON are characteristic of human-authored code that underwent refactoring.
issues-found: 2
issue-severity-score: medium
issue-severity-explanation: WARN_ON(!kaddr) fires unconditionally on PowerPC where create_bad_kaddr() always returns NULL, producing kernel warnings on every test run and potential panics with panic_on_warn.
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH bpf-next v2 0/2] bpf: Add multi-level pointer parameter support for trampolines
2026-02-17 22:13 [PATCH bpf-next v2 0/2] bpf: Add multi-level pointer parameter support for trampolines Slava Imameev
2026-02-17 22:13 ` [PATCH bpf-next v2 1/2] bpf: Support multi-level pointer params via PTR_TO_MEM " Slava Imameev
2026-02-17 22:13 ` [PATCH bpf-next v2 2/2] selftests/bpf: Add trampolines multi-level pointer params test coverage Slava Imameev
@ 2026-02-18 1:48 ` Eduard Zingerman
2026-02-18 10:43 ` Re: [PATCH bpf-next v2 0/2] bpf: Add multi-level pointer parameter Slava Imameev
2 siblings, 1 reply; 11+ messages in thread
From: Eduard Zingerman @ 2026-02-18 1:48 UTC (permalink / raw)
To: Slava Imameev, ast, daniel, andrii
Cc: martin.lau, song, yonghong.song, john.fastabend, kpsingh, sdf,
haoluo, jolsa, davem, edumazet, kuba, pabeni, horms, shuah,
linux-kernel, bpf, netdev, linux-kselftest, linux-open-source
On Wed, 2026-02-18 at 09:13 +1100, Slava Imameev wrote:
[...]
> The verifier assigns SCALAR type to single-level pointers (void*, int*).
So, the simplest change for pointers to pointers would be as below, right?
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6906,7 +6906,8 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
* If it's a pointer to void, it's the same as scalar from the verifier
* safety POV. Either way, no futher pointer walking is allowed.
*/
- if (is_void_or_int_ptr(btf, t))
+ if (is_void_or_int_ptr(btf, t) || !is_ptr_to_struct(btf, t))
return true;
/* this is a pointer to another type */
Except that loaded value would be marked as scalar() and one would
need to cast it using e.g. bpf_core_cast() to obtain an untrusted
pointer.
> For multi-level pointers, I selected PTR_TO_MEM to enable memory access
> through a single load instruction for the first level of dereference,
> with subsequent dereferences becoming SCALAR. This design eliminates
> helper call for parameter dereference, replacing it with a load
> instruction (e.g., void* ptr = *pptr).
If going this route instead, is there a technical reason to limit this
logic to multi-level pointers? Applying same rules to `int *` and
alike seem more consistent.
[...]
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: [PATCH bpf-next v2 2/2] selftests/bpf: Add trampolines multi-level pointer params test coverage
2026-02-17 22:13 ` [PATCH bpf-next v2 2/2] selftests/bpf: Add trampolines multi-level pointer params test coverage Slava Imameev
2026-02-17 22:47 ` bot+bpf-ci
@ 2026-02-18 9:25 ` kernel test robot
1 sibling, 0 replies; 11+ messages in thread
From: kernel test robot @ 2026-02-18 9:25 UTC (permalink / raw)
To: Slava Imameev, ast, daniel, andrii
Cc: llvm, oe-kbuild-all, martin.lau, eddyz87, song, yonghong.song,
john.fastabend, kpsingh, sdf, haoluo, jolsa, davem, edumazet,
kuba, pabeni, horms, shuah, linux-kernel, bpf, netdev,
linux-kselftest, linux-open-source, Slava Imameev
Hi Slava,
kernel test robot noticed the following build errors:
[auto build test ERROR on bpf-next/master]
url: https://github.com/intel-lab-lkp/linux/commits/Slava-Imameev/bpf-Support-multi-level-pointer-params-via-PTR_TO_MEM-for-trampolines/20260218-062417
base: https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link: https://lore.kernel.org/r/20260217221357.18215-3-slava.imameev%40crowdstrike.com
patch subject: [PATCH bpf-next v2 2/2] selftests/bpf: Add trampolines multi-level pointer params test coverage
config: loongarch-defconfig (https://download.01.org/0day-ci/archive/20260218/202602181710.tEK6nOl6-lkp@intel.com/config)
compiler: clang version 19.1.7 (https://github.com/llvm/llvm-project cd708029e0b2869e80abe31ddb175f7c35361f90)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260218/202602181710.tEK6nOl6-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202602181710.tEK6nOl6-lkp@intel.com/
All errors (new ones prefixed by >>):
>> net/bpf/test_run.c:737:2: error: call to undeclared function 'flush_tlb_kernel_range'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
737 | flush_tlb_kernel_range((unsigned long)page_address(page),
| ^
net/bpf/test_run.c:760:2: error: call to undeclared function 'flush_tlb_kernel_range'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
760 | flush_tlb_kernel_range((unsigned long)addr,
| ^
2 errors generated.
vim +/flush_tlb_kernel_range +737 net/bpf/test_run.c
709
710 static void *create_bad_kaddr(void)
711 {
712 /*
713 * Try to get an address that passes kernel range checks but causes
714 * a page fault handler invocation if accessed from a BPF program.
715 */
716 #if defined(CONFIG_ARCH_HAS_SET_MEMORY) && defined(CONFIG_X86)
717 void *addr = vmalloc(PAGE_SIZE);
718
719 if (!addr)
720 return NULL;
721 /* Make it non-present - any access will fault */
722 if (set_memory_np((unsigned long)addr, 1)) {
723 vfree(addr);
724 return NULL;
725 }
726 return addr;
727 #elif defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP)
728 struct page *page = alloc_page(GFP_KERNEL);
729
730 if (!page)
731 return NULL;
732 /* Remove from direct map - any access will fault */
733 if (set_direct_map_invalid_noflush(page)) {
734 __free_page(page);
735 return NULL;
736 }
> 737 flush_tlb_kernel_range((unsigned long)page_address(page),
738 (unsigned long)page_address(page) + PAGE_SIZE);
739 return page_address(page);
740 #endif
741 return NULL;
742 }
743
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: Re: [PATCH bpf-next v2 0/2] bpf: Add multi-level pointer parameter
2026-02-18 1:48 ` [PATCH bpf-next v2 0/2] bpf: Add multi-level pointer parameter support for trampolines Eduard Zingerman
@ 2026-02-18 10:43 ` Slava Imameev
2026-02-18 16:16 ` David Windsor
2026-02-19 3:15 ` Alexei Starovoitov
0 siblings, 2 replies; 11+ messages in thread
From: Slava Imameev @ 2026-02-18 10:43 UTC (permalink / raw)
To: eddyz87
Cc: andrii, ast, bpf, daniel, davem, edumazet, haoluo, horms,
john.fastabend, jolsa, kpsingh, kuba, linux-kernel,
linux-kselftest, linux-open-source, martin.lau, netdev, pabeni,
sdf, shuah, slava.imameev, song, yonghong.song
> > The verifier assigns SCALAR type to single-level pointers (void*, int*).
>
> So, the simplest change for pointers to pointers would be as below, right?
>
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
> @@ -6906,7 +6906,8 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
> * If it's a pointer to void, it's the same as scalar from the verifier
> * safety POV. Either way, no futher pointer walking is allowed.
> */
> - if (is_void_or_int_ptr(btf, t))
> + if (is_void_or_int_ptr(btf, t) || !is_ptr_to_struct(btf, t))
> return true;
>
> /* this is a pointer to another type */
>
> Except that loaded value would be marked as scalar() and one would
> need to cast it using e.g. bpf_core_cast() to obtain an untrusted
> pointer.
I considered using a scalar as a simpler solution, but casting to a
scalar and using bpf_core_cast has some disadvantages:
- Casting to scalar removes nullable and trusted properties
- bpf_core_cast cannot cast to multi-level pointers without
introducing a new typedef or a wrapper for a pointer
Let's consider the following LSM program, which has trusted parameters
and logs the value of *mnt_opts:
SEC("lsm/sb_eat_lsm_opts")
int BPF_PROG(sb_eat_lsm_opts_1, char *options, void **mnt_opts)
With this patch:
- This program is valid:
SEC("lsm/sb_eat_lsm_opts")
int BPF_PROG(sb_eat_lsm_opts_1, char *options, void **mnt_opts)
{
bpf_printk("%p\n", *mnt_opts);
return 0;
}
- This program is semantically invalid as mnt_opts is a trusted
parameter, so there are no run-time checks and the verifier rejects
out-of-bounds access:
SEC("lsm/sb_eat_lsm_opts")
int BPF_PROG(sb_eat_lsm_opts_1, char *options, void **mnt_opts)
{
bpf_printk("%p\n", *(mnt_opts+10));
return 0;
}
With casting to a scalar and following bpf_core_cast:
- This program cannot be compiled, as bpf_core_cast cannot cast to a
multi-level pointer:
SEC("lsm/sb_eat_lsm_opts")
int BPF_PROG(sb_eat_lsm_opts_1, char *options, void **mnt_opts)
{
void** ppt = bpf_core_cast(mnt_opts, void*);
bpf_printk("%p\t", *ppt);
return 0;
}
- There is a workaround, which requires introducing a wrapper struct
or a typedef for the pointer:
struct pvoid {
void* v;
};
typedef void* pvoid;
SEC("lsm/sb_eat_lsm_opts")
int BPF_PROG(sb_eat_lsm_opts_1, char *options, void **mnt_opts)
{
struct pvoid* ppt = bpf_core_cast(mnt_opts, struct pvoid);
bpf_printk("%p\t", ppt->v);
return 0;
}
SEC("lsm/sb_eat_lsm_opts")
int BPF_PROG(sb_eat_lsm_opts_2, char *options, void **mnt_opts)
{
pvoid* ppt = bpf_core_cast(mnt_opts, pvoid);
bpf_printk("%p\t", *ppt);
return 0;
}
- This program passes the verifier, though it is semantically invalid
as it logs invalid data via a trusted parameter:
SEC("lsm/sb_eat_lsm_opts")
int BPF_PROG(sb_eat_lsm_opts_1, char *options, void **mnt_opts)
{
struct pvoid* ppt = bpf_core_cast(mnt_opts + 10, struct pvoid);
bpf_printk("%p\t", ppt->v);
return 0;
}
Similar examples can be constructed for the nullable annotation, which
is ignored for a scalar, allowing semantically invalid BPF programs to
pass the verifier.
> > For multi-level pointers, I selected PTR_TO_MEM to enable memory access
> > through a single load instruction for the first level of dereference,
> > with subsequent dereferences becoming SCALAR. This design eliminates
> > helper call for parameter dereference, replacing it with a load
> instruction (e.g., void* ptr = *pptr).
>
> If going this route instead, is there a technical reason to limit this
> logic to multi-level pointers? Applying same rules to `int *` and
> alike seem more consistent.
I decided to address only multi-level pointers, as this is what we
encountered in practice and had to work around with BPF helpers.
I think there are no technical restrictions on treating single-level
pointers as PTR_TO_MEM.
However, there is some consistency between multi-level pointers being
PTR_TO_MEM and single-level pointers being scalars, as the verifier
infers a scalar for a PTR_TO_MEM dereference:
foo(void *ptr1, void **pptr)
{
void* ptr2 = *pptr; /* verifier infers a scalar for ptr2 */
/* both ptr1 and ptr2 are scalars */
}
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: Re: [PATCH bpf-next v2 0/2] bpf: Add multi-level pointer parameter
2026-02-18 10:43 ` Re: [PATCH bpf-next v2 0/2] bpf: Add multi-level pointer parameter Slava Imameev
@ 2026-02-18 16:16 ` David Windsor
2026-02-19 3:15 ` Alexei Starovoitov
1 sibling, 0 replies; 11+ messages in thread
From: David Windsor @ 2026-02-18 16:16 UTC (permalink / raw)
To: Slava Imameev
Cc: eddyz87, andrii, ast, bpf, daniel, davem, edumazet, haoluo, horms,
john.fastabend, jolsa, kpsingh, kuba, linux-kernel,
linux-kselftest, linux-open-source, martin.lau, netdev, pabeni,
sdf, shuah, song, yonghong.song
> I decided to address only multilevel pointers as this is what we
> encountered in practice and have to use BPF helper workarounds.
> I think there are no technical restrictions for treating single
> level pointers as PTR_TO_MEM.
Hi Slava and Eduard,
If we add support for writable single-level int pointers, we could
trivially implement bpf_inode_set_xattr in the way that Alexei
originally suggested[1] when support for it was first attempted.
One note: for this particular case, the kfunc would need to be able to
write to the xattr int* param, as lsm_get_xattr_slot[2] increments the
LSM-internal xattr_count. Other hooks would be possible as well
(cred_getsecid).
[1] https://kernsec.org/pipermail/linux-security-module-archive/2022-October/034878.html
[2] https://elixir.bootlin.com/linux/v6.19-rc5/source/include/linux/lsm_hooks.h#L215
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: Re: [PATCH bpf-next v2 0/2] bpf: Add multi-level pointer parameter
2026-02-18 10:43 ` Re: [PATCH bpf-next v2 0/2] bpf: Add multi-level pointer parameter Slava Imameev
2026-02-18 16:16 ` David Windsor
@ 2026-02-19 3:15 ` Alexei Starovoitov
2026-02-19 5:17 ` Yonghong Song
2026-02-23 9:44 ` Re: " Slava Imameev
1 sibling, 2 replies; 11+ messages in thread
From: Alexei Starovoitov @ 2026-02-19 3:15 UTC (permalink / raw)
To: Slava Imameev
Cc: Eduard, Andrii Nakryiko, Alexei Starovoitov, bpf, Daniel Borkmann,
David S. Miller, Eric Dumazet, Hao Luo, Simon Horman,
John Fastabend, Jiri Olsa, KP Singh, Jakub Kicinski, LKML,
open list:KERNEL SELFTEST FRAMEWORK, DL Linux Open Source Team,
Martin KaFai Lau, Network Development, Paolo Abeni,
Stanislav Fomichev, Shuah Khan, Song Liu, Yonghong Song
On Wed, Feb 18, 2026 at 2:44 AM Slava Imameev
<slava.imameev@crowdstrike.com> wrote:
>
>
> - This program cannot be compiled, as bpf_core_cast cannot cast to a
> multi-level pointer:
>
> SEC("lsm/sb_eat_lsm_opts")
> int BPF_PROG(sb_eat_lsm_opts_1, char *options, void **mnt_opts)
> {
>         void **ppt = bpf_core_cast(mnt_opts, void *);
>         bpf_printk("%p\t", *ppt);
>         return 0;
> }
Looks like there is a bug in llvm, since it crashes on the above.
But the following works:
void** ppt = bpf_rdonly_cast(mnt_opts, 0);
bpf_printk("%lx\t", *ppt);
Plus Eduard's diff:
- if (is_void_or_int_ptr(btf, t))
+ if (is_void_or_int_ptr(btf, t) || !btf_type_is_struct_ptr(btf, t))
> - There is a workaround, which requires introducing a wrapper
> struct for the pointer, or a typedef:
>
> struct pvoid {
>         void *v;
> };
llvm should have handled it without the workaround. It's a bug
that should be fixed.
> I think there are no technical restrictions for treating single
> level pointers as PTR_TO_MEM.
I think it will be a missed opportunity and a potential footgun.
We didn't support access to 'char *' initially.
Later relaxed it to mean that it's a valid pointer to a single byte,
but since the code is generic it also became the case
that 'char *' is allowed in kfunc and the verifier checks
that one valid byte is there.
That was a bad footgun, since we saw several cases of broken
kfunc implementations that assume that 'char *' means a string.
There are only two lsm hooks that pass 'struct foo **'.
If we make it ptr_to_mem of 8 bytes we lose the ability to do
something smarter in the future.
I think we better add support to annotate such '**' as an actual
array with given size and track types completely, so
'struct foo * var[N]' will become array_to_ptr_to_btf_id.
That's probably more work than you signed up to do,
so I suggest treating 'void **' as a scalar as Eduard suggested.
This particular sb_eat_lsm_opts() hook
doesn't have a useful type behind it anyway.
I'm less certain about 'char **'. If we make it scalar too
it will be harder to make it a pointer to nul terminated string later.
So I would do 'void **' -> scalar for now only.
* Re: [PATCH bpf-next v2 0/2] bpf: Add multi-level pointer parameter
2026-02-19 3:15 ` Alexei Starovoitov
@ 2026-02-19 5:17 ` Yonghong Song
2026-02-23 9:44 ` Re: " Slava Imameev
1 sibling, 0 replies; 11+ messages in thread
From: Yonghong Song @ 2026-02-19 5:17 UTC (permalink / raw)
To: Alexei Starovoitov, Slava Imameev
Cc: Eduard, Andrii Nakryiko, Alexei Starovoitov, bpf, Daniel Borkmann,
David S. Miller, Eric Dumazet, Hao Luo, Simon Horman,
John Fastabend, Jiri Olsa, KP Singh, Jakub Kicinski, LKML,
open list:KERNEL SELFTEST FRAMEWORK, DL Linux Open Source Team,
Martin KaFai Lau, Network Development, Paolo Abeni,
Stanislav Fomichev, Shuah Khan, Song Liu
On 2/18/26 7:15 PM, Alexei Starovoitov wrote:
> On Wed, Feb 18, 2026 at 2:44 AM Slava Imameev
> <slava.imameev@crowdstrike.com> wrote:
>>
>> - This program cannot be compiled, as bpf_core_cast cannot cast to a
>> multi-level pointer:
>>
>> SEC("lsm/sb_eat_lsm_opts")
>> int BPF_PROG(sb_eat_lsm_opts_1, char *options, void **mnt_opts)
>> {
>> void** ppt = bpf_core_cast(mnt_opts, void*);
>> bpf_printk("%p\t", *ppt);
>> return 0;
>> }
> Looks like there is a bug in llvm, since it crashes on the above.
> But the following works:
>
> void** ppt = bpf_rdonly_cast(mnt_opts, 0);
> bpf_printk("%lx\t", *ppt);
>
> Plus Eduard's diff:
> - if (is_void_or_int_ptr(btf, t))
> + if (is_void_or_int_ptr(btf, t) || !btf_type_is_struct_ptr(btf, t))
>
>
>> - There is a workaround, which requires introducing a wrapper
>> struct for the pointer, or a typedef:
>>
>> struct pvoid {
>> void* v;
>> };
> llvm should have handled it without the workaround. It's a bug
> that should be fixed.
Okay, I will take a look.
>
>> I think there are no technical restrictions for treating single
>> level pointers as PTR_TO_MEM.
> I think it will be a missed opportunity and a potential foot gun.
>
> We didn't support access to 'char *' initially.
> Later relaxed it to mean that it's a valid pointer to a single byte,
> but since the code is generic it also became the case
> that 'char *' is allowed in kfunc and the verifier checks
> that one valid byte is there.
> That was a bad footgun, since we saw several cases of broken
> kfunc implementations that assume that 'char *' means a string.
>
> There are only two lsm hooks that pass 'struct foo **'.
> If we make it ptr_to_mem of 8 bytes we lose the ability to do
> something smarter in the future.
> I think we better add support to annotate such '**' as an actual
> array with given size and track types completely, so
> 'struct foo * var[N]' will become array_to_ptr_to_btf_id.
> That's probably more work than you signed up to do,
> so I suggest treating 'void **' as a scalar as Eduard suggested.
> This particular sb_eat_lsm_opts() hook
> doesn't have a useful type behind it anyway.
> I'm less certain about 'char **'. If we make it scalar too
> it will be harder to make it a pointer to nul terminated string later.
>
> So I would do 'void **' -> scalar for now only.
* Re: Re: Re: [PATCH bpf-next v2 0/2] bpf: Add multi-level pointer parameter
2026-02-19 3:15 ` Alexei Starovoitov
2026-02-19 5:17 ` Yonghong Song
@ 2026-02-23 9:44 ` Slava Imameev
1 sibling, 0 replies; 11+ messages in thread
From: Slava Imameev @ 2026-02-23 9:44 UTC (permalink / raw)
To: alexei.starovoitov
Cc: andrii, ast, bpf, daniel, davem, eddyz87, edumazet, haoluo, horms,
john.fastabend, jolsa, kpsingh, kuba, linux-kernel,
linux-kselftest, linux-open-source, martin.lau, netdev, pabeni,
sdf, shuah, slava.imameev, song, yonghong.song
On Wed, 18 Feb 2026 19:15:47 Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
> so I suggest treating 'void **' as a scalar as Eduard suggested.
> This particular sb_eat_lsm_opts() hook
> doesn't have a useful type behind it anyway.
> I'm less certain about 'char **'. If we make it scalar too
> it will be harder to make it a pointer to nul terminated string later.
> So I would do 'void **' -> scalar for now only.
I changed this to a scalar in v3, keeping the broader scope for
pointer types. We encountered double pointers of various types that
required workarounds, for example:
int __posix_acl_chmod(struct posix_acl **acl, gfp_t gfp, umode_t mode)
Adding support for void ** alone doesn't address the broader issue
with other double-pointer types.
When annotated array support (including char **) is added in the
future, it should remain compatible with the scalar approach: legacy
(unannotated) parameters will continue to use scalar handling.
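As an illustration of the v3 scalar behavior (my sketch, not code from the
patch), a program can recover a usable pointer from the scalar with
bpf_rdonly_cast(), as Alexei showed for the void ** case; standard
vmlinux.h/libbpf setup is assumed:

```c
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

SEC("fentry/__posix_acl_chmod")
int BPF_PROG(trace_posix_acl_chmod, struct posix_acl **ppacl, gfp_t gfp,
	     umode_t mode)
{
	/* ppacl arrives as a scalar; cast it to untrusted read-only memory
	 * so the dereference goes through the exception-handled path
	 */
	struct posix_acl **pp = bpf_rdonly_cast(ppacl, 0);

	bpf_printk("__posix_acl_chmod acl = %px", *pp);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
```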
end of thread, other threads:[~2026-02-23 9:46 UTC | newest]
Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-17 22:13 [PATCH bpf-next v2 0/2] bpf: Add multi-level pointer parameter support for trampolines Slava Imameev
2026-02-17 22:13 ` [PATCH bpf-next v2 1/2] bpf: Support multi-level pointer params via PTR_TO_MEM " Slava Imameev
2026-02-17 22:13 ` [PATCH bpf-next v2 2/2] selftests/bpf: Add trampolines multi-level pointer params test coverage Slava Imameev
2026-02-17 22:47 ` bot+bpf-ci
2026-02-18 9:25 ` kernel test robot
2026-02-18 1:48 ` [PATCH bpf-next v2 0/2] bpf: Add multi-level pointer parameter support for trampolines Eduard Zingerman
2026-02-18 10:43 ` Re: [PATCH bpf-next v2 0/2] bpf: Add multi-level pointer parameter Slava Imameev
2026-02-18 16:16 ` David Windsor
2026-02-19 3:15 ` Alexei Starovoitov
2026-02-19 5:17 ` Yonghong Song
2026-02-23 9:44 ` Re: " Slava Imameev
This is a public inbox; see mirroring instructions
for how to clone and mirror all data and code used for this inbox.