public inbox for netdev@vger.kernel.org
* [PATCH bpf-next v4 0/2] bpf: Add multi-level pointer parameter support for trampolines
@ 2026-03-03  9:54 Slava Imameev
  2026-03-03  9:54 ` [PATCH bpf-next v4 1/2] bpf: Support new pointer param types via SCALAR_VALUE " Slava Imameev
  2026-03-03  9:54 ` [PATCH bpf-next v4 2/2] selftests/bpf: Add trampolines single and multi-level pointer params test coverage Slava Imameev
  0 siblings, 2 replies; 15+ messages in thread
From: Slava Imameev @ 2026-03-03  9:54 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: martin.lau, eddyz87, song, yonghong.song, john.fastabend, kpsingh,
	sdf, haoluo, jolsa, davem, edumazet, kuba, pabeni, horms, shuah,
	linux-kernel, bpf, netdev, linux-kselftest, linux-open-source,
	Slava Imameev

This is v4 of a series adding support for new pointer types for
trampoline parameters.

Originally, only support for multi-level pointers was proposed.
As suggested during review, it was extended to cover some
single-level pointers as well. During discussion, it was proposed
to add support for any pointer type that is not a pointer to a
structure, using a condition like if (!btf_type_is_struct(t)),
but I found this was not compatible with some existing cases,
e.g., some tests failed. The outcome depended on exactly where
the check was done: if the check was moved before the exit from
btf_ctx_access, the tests passed. Either way, such an extension
might conflict with future changes for types falling into the
!btf_type_is_struct(t) case. Instead, I used explicit type checks
to add support only for the single- and multi-level pointer types
that are currently unsupported but can safely be treated as
scalars. This is a cautious approach that can be verified with
explicit tests for each type.

These changes appear to be a safe extension: any future support
for arrays and output values would require annotations (similar
to Microsoft SAL), which differentiate the current unannotated
scalar cases from new annotated cases.

This series adds BPF verifier support for single- and multi-level
pointer parameters and return values in BPF trampolines. The
implementation treats these parameters as SCALAR_VALUE.

The following new single-level pointer support is added:
- pointers to enums (32- and 64-bit)
- pointers to functions

The following multi-level pointer support is added:
- multi-level pointers to int
- multi-level pointers to void
- multi-level pointers to enums (32- and 64-bit)
- multi-level pointers to function
- multi-level pointers to structure

This is consistent with the existing handling of pointers to int
and void, which are already treated as SCALAR.

This provides consistent logic for single- and multi-level
pointers: if a type is treated as SCALAR for a single-level
pointer, the same applies for multi-level pointers. The one
exception is the pointer to struct, which remains PTR_TO_BTF_ID
at a single level but is treated as scalar at multiple levels,
since the verifier lacks the context to infer the size of the
target memory region.

Background:

Prior to these changes, accessing multi-level pointer parameters or
return values through BPF trampoline context arrays resulted in
verification failures in btf_ctx_access, producing errors such as:

func '%s' arg%d type %s is not a struct

For example, consider a BPF program that logs an input parameter of
type struct posix_acl **:

SEC("fentry/__posix_acl_chmod")
int BPF_PROG(trace_posix_acl_chmod, struct posix_acl **ppacl, gfp_t gfp,
             umode_t mode)
{
    bpf_printk("__posix_acl_chmod ppacl = %px\n", ppacl);
    return 0;
}

This program failed BPF verification with the following error:

libbpf: prog 'trace_posix_acl_chmod': -- BEGIN PROG LOAD LOG --
0: R1=ctx() R10=fp0
; int BPF_PROG(trace_posix_acl_chmod, struct posix_acl **ppacl,
gfp_t gfp, umode_t mode) @ posix_acl_monitor.bpf.c:23
0: (79) r6 = *(u64 *)(r1 +16)         ; R1=ctx() R6_w=scalar()
1: (79) r1 = *(u64 *)(r1 +0)
func '__posix_acl_chmod' arg0 type PTR is not a struct
invalid bpf_context access off=0 size=8
processed 2 insns (limit 1000000) max_states_per_insn 0 total_states 0
peak_states 0 mark_read 0
-- END PROG LOAD LOG --

The common workaround involved using helper functions to fetch
parameter values by passing the address of the context array entry:

SEC("fentry/__posix_acl_chmod")
int BPF_PROG(trace_posix_acl_chmod, struct posix_acl **ppacl, gfp_t gfp,
             umode_t mode)
{
    struct posix_acl **pp;
    bpf_probe_read_kernel(&pp, sizeof(pp), &ctx[0]);
    bpf_printk("__posix_acl_chmod %px\n", pp);
    return 0;
}

This approach introduced helper-call overhead and made parameter
access inconsistent with the direct access pattern used for other
parameter types.

Improvements:

With this patch, trampoline programs can directly access multi-level
pointer parameters, eliminating helper call overhead and explicit ctx
access while ensuring consistent parameter handling. For example, the
following ctx access with a helper call:

SEC("fentry/__posix_acl_chmod")
int BPF_PROG(trace_posix_acl_chmod, struct posix_acl **ppacl, gfp_t gfp,
             umode_t mode)
{
    struct posix_acl **pp;
    bpf_probe_read_kernel(&pp, sizeof(pp), &ctx[0]);
    bpf_printk("__posix_acl_chmod %px\n", pp);
    ...
}

is replaced by a load instruction:

SEC("fentry/__posix_acl_chmod")
int BPF_PROG(trace_posix_acl_chmod, struct posix_acl **ppacl, gfp_t gfp,
             umode_t mode)
{
    bpf_printk("__posix_acl_chmod %px\n", ppacl);
    ...
}

The bpf_core_cast macro can be used for deeper level dereferences,
as illustrated in the tests added by this patch.
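For illustration, a second-level dereference can be done with a
direct load of the parameter plus bpf_core_cast. This is a sketch
only: the typedef mirrors the workaround used in the tests for an
LLVM backend error with pointer types in BTF type id relocations,
and includes and error handling are elided:

```c
/* Workaround for the LLVM backend error with pointer types
 * in BTF type id relocations (see the tests in patch 2). */
typedef struct posix_acl *posix_acl_p;

SEC("fentry/__posix_acl_chmod")
int BPF_PROG(trace_posix_acl_chmod, struct posix_acl **ppacl, gfp_t gfp,
             umode_t mode)
{
    struct posix_acl *pacl;

    /* ppacl itself is loaded directly as a scalar; the cast turns it
     * back into a pointer the verifier allows to be dereferenced.
     * The load is exception-protected, so an invalid address reads
     * back as zero instead of faulting. */
    pacl = *bpf_core_cast(ppacl, posix_acl_p);
    bpf_printk("__posix_acl_chmod acl = %px\n", pacl);
    return 0;
}
```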

v1 -> v2:
* corrected maintainer's email
v2 -> v3:
* Addressed reviewers' feedback:
	* Changed the register type from PTR_TO_MEM to SCALAR_VALUE.
	* Modified tests to accommodate SCALAR_VALUE handling.
* Fixed a compilation error for loongarch
	* https://lore.kernel.org/oe-kbuild-all/202602181710.tEK6nOl6-lkp@intel.com/
* Addressed AI bot review feedback
	* Added a comment to address a NULL pointer case
	* Removed WARN_ON
	* Fixed a comment
v3 -> v4:
* Added more consistent support for single and multi-level pointers
as suggested by reviewers.
	* added single-level pointers to 32- and 64-bit enums
	* added single-level pointers to functions
	* harmonized support for single and multi-level pointer types
	* added new tests to support the above changes
* Removed create_bad_kaddr that allocated and invalidated kernel VA
for tests, and replaced it with hardcoded values similar to
bpf_testmod_return_ptr as suggested by reviewers.

Slava Imameev (2):
  bpf: Support new pointer param types via SCALAR_VALUE for trampolines
  selftests/bpf: Add trampolines single and multi-level pointer params
    test coverage

 kernel/bpf/btf.c                              |  31 +-
 net/bpf/test_run.c                            | 130 +++++
 .../prog_tests/fentry_fexit_multi_level_ptr.c | 206 +++++++
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../progs/fentry_fexit_pptr_nullable_test.c   |  60 ++
 .../bpf/progs/fentry_fexit_pptr_test.c        |  67 +++
 .../bpf/progs/fentry_fexit_void_ppptr_test.c  |  38 ++
 .../bpf/progs/fentry_fexit_void_pptr_test.c   |  71 +++
 .../bpf/progs/verifier_ctx_ptr_param.c        | 523 ++++++++++++++++++
 9 files changed, 1120 insertions(+), 8 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/fentry_fexit_multi_level_ptr.c
 create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_pptr_nullable_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_pptr_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_void_ppptr_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_void_pptr_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_ctx_ptr_param.c

-- 
2.34.1



* [PATCH bpf-next v4 1/2] bpf: Support new pointer param types via SCALAR_VALUE for trampolines
  2026-03-03  9:54 [PATCH bpf-next v4 0/2] bpf: Add multi-level pointer parameter support for trampolines Slava Imameev
@ 2026-03-03  9:54 ` Slava Imameev
  2026-03-03 20:05   ` Eduard Zingerman
  2026-03-03  9:54 ` [PATCH bpf-next v4 2/2] selftests/bpf: Add trampolines single and multi-level pointer params test coverage Slava Imameev
  1 sibling, 1 reply; 15+ messages in thread
From: Slava Imameev @ 2026-03-03  9:54 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: martin.lau, eddyz87, song, yonghong.song, john.fastabend, kpsingh,
	sdf, haoluo, jolsa, davem, edumazet, kuba, pabeni, horms, shuah,
	linux-kernel, bpf, netdev, linux-kselftest, linux-open-source,
	Slava Imameev

Add BPF verifier support for single- and multi-level pointer
parameters and return values in BPF trampolines. The implementation
treats these parameters as SCALAR_VALUE.

The following new single-level pointer support is added:
- pointers to enums (32- and 64-bit)
- pointers to functions

The following multi-level pointer support is added:
- multi-level pointers to int
- multi-level pointers to void
- multi-level pointers to enums (32- and 64-bit)
- multi-level pointers to function
- multi-level pointers to structure

This is consistent with the existing handling of pointers to int
and void, which are already treated as SCALAR.

This provides consistent logic for single- and multi-level
pointers: if a type is treated as SCALAR for a single-level
pointer, the same applies for multi-level pointers. The one
exception is the pointer to struct, which remains PTR_TO_BTF_ID
at a single level but is treated as scalar at multiple levels,
since the verifier lacks the context to infer the size of the
target memory region.

Signed-off-by: Slava Imameev <slava.imameev@crowdstrike.com>
---
 kernel/bpf/btf.c | 31 +++++++++++++++++++++++--------
 1 file changed, 23 insertions(+), 8 deletions(-)

diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 4872d2a6c42d..c2d06d2597d6 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6508,11 +6508,30 @@ struct btf *bpf_prog_get_target_btf(const struct bpf_prog *prog)
 		return prog->aux->attach_btf;
 }
 
-static bool is_void_or_int_ptr(struct btf *btf, const struct btf_type *t)
+static bool is_ptr_treated_as_scalar(const struct btf *btf,
+	const struct btf_type *t)
 {
-	/* skip modifiers */
+	int depth = 1;
+
+	WARN_ON(!btf_type_is_ptr(t));
+
 	t = btf_type_skip_modifiers(btf, t->type, NULL);
-	return btf_type_is_void(t) || btf_type_is_int(t);
+	while (btf_type_is_ptr(t) && depth < MAX_RESOLVE_DEPTH) {
+		depth += 1;
+		t = btf_type_skip_modifiers(btf, t->type, NULL);
+	}
+
+	/*
+	 * If it's a single or multilevel pointer to void, int, enum,
+	 * or function, it's the same as scalar from the verifier
+	 * safety POV. Multilevel pointers to structures are treated as
+	 * scalars. The verifier lacks the context to infer the size of
+	 * their target memory regions. Either way, no further pointer
+	 * walking is allowed.
+	 */
+	return btf_type_is_void(t) || btf_type_is_int(t) ||
+		   btf_is_any_enum(t) || btf_type_is_func_proto(t) ||
+		   (btf_type_is_struct(t) && depth > 1);
 }
 
 u32 btf_ctx_arg_idx(struct btf *btf, const struct btf_type *func_proto,
@@ -6902,11 +6921,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 		}
 	}
 
-	/*
-	 * If it's a pointer to void, it's the same as scalar from the verifier
-	 * safety POV. Either way, no futher pointer walking is allowed.
-	 */
-	if (is_void_or_int_ptr(btf, t))
+	if (is_ptr_treated_as_scalar(btf, t))
 		return true;
 
 	/* this is a pointer to another type */
-- 
2.34.1



* [PATCH bpf-next v4 2/2] selftests/bpf: Add trampolines single and multi-level pointer params test coverage
  2026-03-03  9:54 [PATCH bpf-next v4 0/2] bpf: Add multi-level pointer parameter support for trampolines Slava Imameev
  2026-03-03  9:54 ` [PATCH bpf-next v4 1/2] bpf: Support new pointer param types via SCALAR_VALUE " Slava Imameev
@ 2026-03-03  9:54 ` Slava Imameev
  2026-03-03 20:08   ` Eduard Zingerman
  1 sibling, 1 reply; 15+ messages in thread
From: Slava Imameev @ 2026-03-03  9:54 UTC (permalink / raw)
  To: ast, daniel, andrii
  Cc: martin.lau, eddyz87, song, yonghong.song, john.fastabend, kpsingh,
	sdf, haoluo, jolsa, davem, edumazet, kuba, pabeni, horms, shuah,
	linux-kernel, bpf, netdev, linux-kselftest, linux-open-source,
	Slava Imameev

Add single- and multi-level pointer parameter and return value
test coverage for BPF trampolines:
- fentry/fexit programs covering struct and void double/triple
  pointer parameters and return values
- verifier context tests covering pointers as parameters: single
  and double pointers to int, enum (32- and 64-bit), void, and
  function, double pointers to struct, and triple pointers to
  void
- verifier context tests covering single and double pointers to
  float, to check that the proper error is returned, as pointers
  to float are not supported
- verifier context tests covering pointers as return values
- verifier context tests for lsm to check trusted parameters
  handling
- verifier context tests covering out-of-bound access after cast
- verifier BPF helper tests to validate no change in verifier
  behavior

Signed-off-by: Slava Imameev <slava.imameev@crowdstrike.com>
---
 net/bpf/test_run.c                            | 130 +++++
 .../prog_tests/fentry_fexit_multi_level_ptr.c | 206 +++++++
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../progs/fentry_fexit_pptr_nullable_test.c   |  60 ++
 .../bpf/progs/fentry_fexit_pptr_test.c        |  67 +++
 .../bpf/progs/fentry_fexit_void_ppptr_test.c  |  38 ++
 .../bpf/progs/fentry_fexit_void_pptr_test.c   |  71 +++
 .../bpf/progs/verifier_ctx_ptr_param.c        | 523 ++++++++++++++++++
 8 files changed, 1097 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/fentry_fexit_multi_level_ptr.c
 create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_pptr_nullable_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_pptr_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_void_ppptr_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/fentry_fexit_void_pptr_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_ctx_ptr_param.c

diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index 178c4738e63b..73191c4a586e 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -24,6 +24,8 @@
 #include <net/netdev_rx_queue.h>
 #include <net/xdp.h>
 #include <net/netfilter/nf_bpf_link.h>
+#include <linux/set_memory.h>
+#include <linux/string.h>
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/bpf_test_run.h>
@@ -563,6 +565,96 @@ noinline int bpf_fentry_test10(const void *a)
 	return (long)a;
 }
 
+struct bpf_fentry_test_pptr_t {
+	u32 value1;
+	u32 value2;
+};
+
+noinline int bpf_fentry_test11_pptr_nullable(struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+	if (!pptr__nullable)
+		return -1;
+
+	return (*pptr__nullable)->value1;
+}
+
+noinline u32 **bpf_fentry_test12_pptr(u32 id, u32 **pptr)
+{
+	barrier_data(&id);
+	barrier_data(&pptr);
+	return pptr;
+}
+
+noinline u8 bpf_fentry_test13_pptr(void **pptr)
+{
+	void *ptr;
+
+	return copy_from_kernel_nofault(&ptr, pptr, sizeof(pptr)) == 0;
+}
+
+/* Test the verifier can handle multi-level pointer types with qualifiers. */
+noinline void ***bpf_fentry_test14_ppptr(void **volatile *const ppptr)
+{
+	barrier_data(&ppptr);
+	return (void ***)ppptr;
+}
+
+enum fentry_test_enum32;
+
+noinline void bpf_fentry_test15_penum32(enum fentry_test_enum32 *pe)
+{
+}
+
+enum fentry_test_enum64 {
+	TEST_ENUM64 = 0xffffffffFFFFFFFFULL
+};
+
+noinline void bpf_fentry_test15_penum64(enum fentry_test_enum64 *pe)
+{
+}
+
+noinline void bpf_fentry_test16_ppenum32(enum fentry_test_enum32 **ppe)
+{
+}
+
+noinline void bpf_fentry_test16_ppenum64(enum fentry_test_enum64 **ppe)
+{
+}
+
+noinline void bpf_fentry_test17_pfunc(void (*pf)(void))
+{
+}
+
+noinline void bpf_fentry_test18_ppfunc(void (**ppf)(void))
+{
+}
+
+noinline void bpf_fentry_test19_pfloat(float *pff)
+{
+}
+
+noinline void bpf_fentry_test20_ppfloat(float **ppff)
+{
+}
+
+noinline void bpf_fentry_test21_pchar(char *pc)
+{
+}
+
+noinline void bpf_fentry_test22_ppchar(char **ppc)
+{
+}
+
+noinline char **bpf_fentry_test23_ret_ppchar(void)
+{
+	return (char **)NULL;
+}
+
+noinline struct file **bpf_fentry_test24_ret_ppfile(void **a)
+{
+	return (struct file **)NULL;
+}
+
 noinline void bpf_fentry_test_sinfo(struct skb_shared_info *sinfo)
 {
 }
@@ -670,20 +762,58 @@ static void *bpf_test_init(const union bpf_attr *kattr, u32 user_size,
 	return data;
 }
 
+#define CONSUME(val) do { \
+	typeof(val) __var = (val); \
+	__asm__ __volatile__("" : "+r" (__var)); \
+	(void)__var; \
+} while (0)
+
 int bpf_prog_test_run_tracing(struct bpf_prog *prog,
 			      const union bpf_attr *kattr,
 			      union bpf_attr __user *uattr)
 {
 	struct bpf_fentry_test_t arg = {};
+	struct bpf_fentry_test_pptr_t ts = { .value1 = 1979, .value2 = 2026 };
+	struct bpf_fentry_test_pptr_t *ptr = &ts;
+	u32 *u32_ptr = (u32 *)29;
 	u16 side_effect = 0, ret = 0;
 	int b = 2, err = -EFAULT;
 	u32 retval = 0;
+	const char *attach_name;
 
 	if (kattr->test.flags || kattr->test.cpu || kattr->test.batch_size)
 		return -EINVAL;
 
+	attach_name = prog->aux->attach_func_name;
+	if (!attach_name)
+		attach_name = "!";
+
 	switch (prog->expected_attach_type) {
 	case BPF_TRACE_FENTRY:
+		if (!strcmp(attach_name, "bpf_fentry_test11_pptr_nullable")) {
+			/* valid kernel pointer, valid pointer after dereference */
+			CONSUME(bpf_fentry_test11_pptr_nullable(&ptr));
+			break;
+		} else if (!strcmp(attach_name, "bpf_fentry_test12_pptr")) {
+			/* valid kernel pointer, user pointer after dereference */
+			CONSUME(bpf_fentry_test12_pptr(0, &u32_ptr));
+			/* user address on most systems */
+			CONSUME(bpf_fentry_test12_pptr(1, (u32 **)17));
+			break;
+		} else if (!strcmp(attach_name, "bpf_fentry_test13_pptr")) {
+			/* should trigger extable on most systems */
+			CONSUME(bpf_fentry_test13_pptr((void **)~(1ull << 30)));
+			/* user address on most systems */
+			CONSUME(bpf_fentry_test13_pptr((void **)19));
+			/* kernel address at top 4KB, invalid */
+			CONSUME(bpf_fentry_test13_pptr(ERR_PTR(-ENOMEM)));
+			break;
+		} else if (!strcmp(attach_name, "bpf_fentry_test14_ppptr")) {
+			/* kernel address at top 4KB, invalid */
+			CONSUME(bpf_fentry_test14_ppptr(ERR_PTR(-ENOMEM)));
+			break;
+		}
+		fallthrough;
 	case BPF_TRACE_FEXIT:
 	case BPF_TRACE_FSESSION:
 		if (bpf_fentry_test1(1) != 2 ||
diff --git a/tools/testing/selftests/bpf/prog_tests/fentry_fexit_multi_level_ptr.c b/tools/testing/selftests/bpf/prog_tests/fentry_fexit_multi_level_ptr.c
new file mode 100644
index 000000000000..07e8b142dd87
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/fentry_fexit_multi_level_ptr.c
@@ -0,0 +1,206 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 CrowdStrike, Inc. */
+#include <test_progs.h>
+#include "fentry_fexit_pptr_nullable_test.skel.h"
+#include "fentry_fexit_pptr_test.skel.h"
+#include "fentry_fexit_void_pptr_test.skel.h"
+#include "fentry_fexit_void_ppptr_test.skel.h"
+
+static void test_fentry_fexit_pptr_nullable(void)
+{
+	struct fentry_fexit_pptr_nullable_test *skel = NULL;
+	int err, prog_fd;
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+
+	skel = fentry_fexit_pptr_nullable_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "fentry_fexit_pptr_nullable_test__open_and_load"))
+		return;
+
+	err = fentry_fexit_pptr_nullable_test__attach(skel);
+	if (!ASSERT_OK(err, "fentry_fexit_pptr_nullable_test__attach"))
+		goto cleanup;
+
+	/* Trigger fentry/fexit programs. */
+	prog_fd = bpf_program__fd(skel->progs.test_fentry_pptr_nullable);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(topts.retval, 0, "test_run retval");
+
+	/* Verify fentry was called and captured the correct value. */
+	ASSERT_EQ(skel->bss->fentry_called, 1, "fentry_called");
+	ASSERT_EQ(skel->bss->fentry_ptr_field_value1, 1979, "fentry_ptr_field_value1");
+	ASSERT_EQ(skel->bss->fentry_ptr_field_value2, 2026, "fentry_ptr_field_value2");
+
+	/* Verify fexit captured correct values and return code. */
+	ASSERT_EQ(skel->bss->fexit_called, 1, "fexit_called");
+	ASSERT_EQ(skel->bss->fexit_ptr_field_value1, 1979, "fexit_ptr_field_value1");
+	ASSERT_EQ(skel->bss->fexit_ptr_field_value2, 2026, "fexit_ptr_field_value2");
+	ASSERT_EQ(skel->bss->fexit_retval, 1979, "fexit_retval");
+
+cleanup:
+	fentry_fexit_pptr_nullable_test__destroy(skel);
+}
+
+static void test_fentry_fexit_pptr(void)
+{
+	struct fentry_fexit_pptr_test *skel = NULL;
+	int err, prog_fd, i;
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+
+	skel = fentry_fexit_pptr_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "fentry_fexit_pptr_test__open_and_load"))
+		return;
+
+	/* Poison some values which should be modified by BPF programs. */
+	for (i = 0; i < ARRAY_SIZE(skel->bss->telemetry); ++i) {
+		skel->bss->telemetry[i].id = 30;
+		skel->bss->telemetry[i].fentry_pptr = 31;
+		skel->bss->telemetry[i].fentry_ptr = 32;
+		skel->bss->telemetry[i].fexit_pptr = 33;
+		skel->bss->telemetry[i].fexit_ptr = 34;
+		skel->bss->telemetry[i].fexit_ret_pptr = 35;
+		skel->bss->telemetry[i].fexit_ret_ptr = 36;
+	}
+
+	err = fentry_fexit_pptr_test__attach(skel);
+	if (!ASSERT_OK(err, "fentry_fexit_pptr_test__attach"))
+		goto cleanup;
+
+	/* Trigger fentry/fexit programs */
+	prog_fd = bpf_program__fd(skel->progs.test_fentry_pptr);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(topts.retval, 0, "test_run retval");
+
+	for (i = 0; i < ARRAY_SIZE(skel->bss->telemetry); ++i) {
+		ASSERT_TRUE(skel->bss->telemetry[i].id == 0 ||
+			skel->bss->telemetry[i].id == 1, "id");
+		if (skel->bss->telemetry[i].id == 0) {
+			/* Verify fentry captured the correct value. */
+			ASSERT_EQ(skel->bss->telemetry[i].fentry_called, 1, "fentry_called");
+			ASSERT_EQ(skel->bss->telemetry[i].fentry_ptr, (u64)29, "fentry_ptr");
+
+			/* Verify fexit captured correct values and return address. */
+			ASSERT_EQ(skel->bss->telemetry[i].fexit_called, 1, "fexit_called");
+			ASSERT_EQ(skel->bss->telemetry[i].fexit_pptr,
+				skel->bss->telemetry[i].fentry_pptr, "fexit_pptr");
+			ASSERT_EQ(skel->bss->telemetry[i].fexit_ptr, (u64)29, "fexit_ptr");
+			ASSERT_EQ(skel->bss->telemetry[i].fexit_ret_pptr,
+				skel->bss->telemetry[i].fentry_pptr, "fexit_ret_pptr");
+			ASSERT_EQ(skel->bss->telemetry[i].fexit_ret_ptr, (u64)29, "fexit_ret_ptr");
+		} else if (skel->bss->telemetry[i].id == 1) {
+			/* Verify fentry captured the correct value */
+			ASSERT_EQ(skel->bss->telemetry[i].fentry_called, 1, "fentry_called");
+			ASSERT_EQ(skel->bss->telemetry[i].fentry_pptr, 17, "fentry_pptr");
+
+			/*
+			 * Verify fexit captured correct values and return address,
+			 * fentry_ptr value depends on kernel address space layout
+			 * and a mapped page presence at NULL.
+			 */
+			ASSERT_EQ(skel->bss->telemetry[i].fexit_called, 1, "fexit_called");
+			ASSERT_EQ(skel->bss->telemetry[i].fexit_pptr, 17, "fexit_pptr");
+			ASSERT_EQ(skel->bss->telemetry[i].fexit_ptr,
+				skel->bss->telemetry[i].fentry_ptr, "fexit_ptr");
+			ASSERT_EQ(skel->bss->telemetry[i].fexit_ret_pptr, 17, "fexit_ret_pptr");
+			ASSERT_EQ(skel->bss->telemetry[i].fexit_ret_ptr,
+				skel->bss->telemetry[i].fentry_ptr, "fexit_ret_ptr");
+		}
+	}
+
+cleanup:
+	fentry_fexit_pptr_test__destroy(skel);
+}
+
+static void test_fentry_fexit_void_pptr(void)
+{
+	struct fentry_fexit_void_pptr_test *skel = NULL;
+	int err, prog_fd, i;
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+
+	skel = fentry_fexit_void_pptr_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "fentry_fexit_void_pptr_test__open_and_load"))
+		return;
+
+	/* Poison some values which should be modified by BPF programs. */
+	for (i = 0; i < ARRAY_SIZE(skel->bss->telemetry); ++i) {
+		skel->bss->telemetry[i].fentry_pptr = 30;
+		skel->bss->telemetry[i].fentry_ptr = 31;
+		skel->bss->telemetry[i].fexit_pptr = 32;
+		skel->bss->telemetry[i].fexit_ptr = 33;
+	}
+
+	err = fentry_fexit_void_pptr_test__attach(skel);
+	if (!ASSERT_OK(err, "fentry_fexit_void_pptr_test__attach"))
+		goto cleanup;
+
+	/* Trigger fentry/fexit programs. */
+	prog_fd = bpf_program__fd(skel->progs.test_fentry_void_pptr);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(topts.retval, 0, "test_run retval");
+	for (i = 0; i < ARRAY_SIZE(skel->bss->telemetry); ++i) {
+		ASSERT_EQ(skel->bss->telemetry[i].fentry_called, 1, "fentry_called");
+		ASSERT_EQ(skel->bss->telemetry[i].fexit_called, 1, "fexit_called");
+		ASSERT_EQ(skel->bss->telemetry[i].fentry_pptr, skel->bss->telemetry[i].fexit_pptr,
+			"fentry_pptr == fexit_pptr");
+		ASSERT_EQ(skel->bss->telemetry[i].fexit_ptr, skel->bss->telemetry[i].fentry_ptr,
+			"fexit_ptr");
+		ASSERT_EQ(skel->bss->telemetry[i].fentry_pptr_addr_valid,
+			skel->bss->telemetry[i].fexit_pptr_addr_valid, "fexit_pptr_addr_valid");
+		if (!skel->bss->telemetry[i].fentry_pptr_addr_valid) {
+			/* Should be set to 0 by kernel address boundary checks or an exception handler. */
+			ASSERT_EQ(skel->bss->telemetry[i].fentry_ptr, 0, "fentry_ptr");
+			ASSERT_EQ(skel->bss->telemetry[i].fexit_ptr, 0, "fexit_ptr");
+		}
+	}
+cleanup:
+	fentry_fexit_void_pptr_test__destroy(skel);
+}
+
+static void test_fentry_fexit_void_ppptr(void)
+{
+	struct fentry_fexit_void_ppptr_test *skel = NULL;
+	int err, prog_fd;
+	LIBBPF_OPTS(bpf_test_run_opts, topts);
+
+	skel = fentry_fexit_void_ppptr_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "fentry_fexit_void_ppptr_test__open_and_load"))
+		return;
+
+	/* Poison some values which should be modified by BPF programs */
+	skel->bss->fentry_pptr = 31;
+
+	err = fentry_fexit_void_ppptr_test__attach(skel);
+	if (!ASSERT_OK(err, "fentry_fexit_void_ppptr_test__attach"))
+		goto cleanup;
+
+	/* Trigger fentry/fexit programs */
+	prog_fd = bpf_program__fd(skel->progs.test_fentry_void_ppptr);
+	err = bpf_prog_test_run_opts(prog_fd, &topts);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(topts.retval, 0, "test_run retval");
+
+	/* Verify invalid memory access results in zeroed register */
+	ASSERT_EQ(skel->bss->fentry_called, 1, "fentry_called");
+	ASSERT_EQ(skel->bss->fentry_pptr, 0, "fentry_pptr");
+
+	/* Verify fexit captured correct values and return value */
+	ASSERT_EQ(skel->bss->fexit_called, 1, "fexit_called");
+	ASSERT_EQ(skel->bss->fexit_retval, (u64)ERR_PTR(-ENOMEM), "fexit_retval");
+
+cleanup:
+	fentry_fexit_void_ppptr_test__destroy(skel);
+}
+
+void test_fentry_fexit_multi_level_ptr(void)
+{
+	if (test__start_subtest("pptr_nullable"))
+		test_fentry_fexit_pptr_nullable();
+	if (test__start_subtest("pptr"))
+		test_fentry_fexit_pptr();
+	if (test__start_subtest("void_pptr"))
+		test_fentry_fexit_void_pptr();
+	if (test__start_subtest("void_ppptr"))
+		test_fentry_fexit_void_ppptr();
+}
diff --git a/tools/testing/selftests/bpf/prog_tests/verifier.c b/tools/testing/selftests/bpf/prog_tests/verifier.c
index 8cdfd74c95d7..bcf01cb4cfe4 100644
--- a/tools/testing/selftests/bpf/prog_tests/verifier.c
+++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
@@ -115,6 +115,7 @@
 #include "verifier_lsm.skel.h"
 #include "verifier_jit_inline.skel.h"
 #include "irq.skel.h"
+#include "verifier_ctx_ptr_param.skel.h"
 
 #define MAX_ENTRIES 11
 
@@ -259,6 +260,7 @@ void test_verifier_lsm(void)                  { RUN(verifier_lsm); }
 void test_irq(void)			      { RUN(irq); }
 void test_verifier_mtu(void)		      { RUN(verifier_mtu); }
 void test_verifier_jit_inline(void)               { RUN(verifier_jit_inline); }
+void test_verifier_ctx_ptr_param(void)       { RUN(verifier_ctx_ptr_param); }
 
 static int init_test_val_map(struct bpf_object *obj, char *map_name)
 {
diff --git a/tools/testing/selftests/bpf/progs/fentry_fexit_pptr_nullable_test.c b/tools/testing/selftests/bpf/progs/fentry_fexit_pptr_nullable_test.c
new file mode 100644
index 000000000000..03c8e30d5303
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/fentry_fexit_pptr_nullable_test.c
@@ -0,0 +1,60 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 CrowdStrike, Inc. */
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+
+char _license[] SEC("license") = "GPL";
+
+struct bpf_fentry_test_pptr_t {
+	__u32 value1;
+	__u32 value2;
+};
+
+/*
+ * Workaround for a bug in LLVM:
+ * fatal error: error in backend: Empty type name for BTF_TYPE_ID_REMOTE reloc
+ */
+typedef struct bpf_fentry_test_pptr_t *bpf_fentry_test_pptr_p;
+
+__u32 fentry_called = 0;
+__u32 fentry_ptr_field_value1 = 0;
+__u32 fentry_ptr_field_value2 = 0;
+__u32 fexit_called = 0;
+__u32 fexit_ptr_field_value1 = 0;
+__u32 fexit_ptr_field_value2 = 0;
+__u32 fexit_retval = 0;
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+int BPF_PROG(test_fentry_pptr_nullable,
+	struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+	struct bpf_fentry_test_pptr_t *ptr;
+
+	fentry_called = 1;
+	/* For scalars, the verifier does not enforce NULL pointer checks. */
+	ptr = *bpf_core_cast(pptr__nullable, bpf_fentry_test_pptr_p);
+	bpf_probe_read_kernel(&fentry_ptr_field_value1,
+		sizeof(fentry_ptr_field_value1), &ptr->value1);
+	bpf_probe_read_kernel(&fentry_ptr_field_value2,
+		sizeof(fentry_ptr_field_value2), &ptr->value2);
+	return 0;
+}
+
+SEC("fexit/bpf_fentry_test11_pptr_nullable")
+int BPF_PROG(test_fexit_pptr_nullable,
+	struct bpf_fentry_test_pptr_t **pptr__nullable, int ret)
+{
+	struct bpf_fentry_test_pptr_t **pptr;
+	struct bpf_fentry_test_pptr_t *ptr;
+
+	fexit_called = 1;
+	fexit_retval = ret;
+	/* For scalars, the verifier does not enforce NULL pointer checks. */
+	pptr = bpf_core_cast(pptr__nullable, bpf_fentry_test_pptr_p);
+	ptr = bpf_core_cast((*pptr), struct bpf_fentry_test_pptr_t);
+	fexit_ptr_field_value1 = ptr->value1;
+	fexit_ptr_field_value2 = ptr->value2;
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/fentry_fexit_pptr_test.c b/tools/testing/selftests/bpf/progs/fentry_fexit_pptr_test.c
new file mode 100644
index 000000000000..77c5c09d7117
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/fentry_fexit_pptr_test.c
@@ -0,0 +1,67 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 CrowdStrike, Inc. */
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+
+char _license[] SEC("license") = "GPL";
+
+#define TELEMETRY_COUNT 2
+
+struct {
+	__u32 id;
+	__u32 fentry_called;
+	__u32 fexit_called;
+	__u64 fentry_pptr;
+	__u64 fentry_ptr;
+	__u64 fexit_pptr;
+	__u64 fexit_ptr;
+	__u64 fexit_ret_pptr;
+	__u64 fexit_ret_ptr;
+} telemetry[TELEMETRY_COUNT];
+
+volatile unsigned int current_index = 0;
+
+/*
+ * Workaround for a bug in LLVM:
+ * fatal error: error in backend: Empty type name for BTF_TYPE_ID_REMOTE reloc
+ */
+typedef __u32 *__u32_p;
+
+SEC("fentry/bpf_fentry_test12_pptr")
+int BPF_PROG(test_fentry_pptr, __u32 id, __u32 **pptr)
+{
+	void *ptr;
+	unsigned int i = current_index;
+
+	if (i >= TELEMETRY_COUNT)
+		return 0;
+
+	if (bpf_probe_read_kernel(&ptr, sizeof(ptr), pptr) != 0)
+		ptr = NULL;
+
+	telemetry[i].id = id;
+	telemetry[i].fentry_called = 1;
+	telemetry[i].fentry_pptr = (__u64)pptr;
+	telemetry[i].fentry_ptr = (__u64)ptr;
+	return 0;
+}
+
+SEC("fexit/bpf_fentry_test12_pptr")
+int BPF_PROG(test_fexit_pptr, __u32 id, __u32 **pptr, __u32 **ret)
+{
+	unsigned int i = current_index;
+
+	if (i >= TELEMETRY_COUNT)
+		return 0;
+
+	telemetry[i].fexit_called = 1;
+	telemetry[i].fexit_pptr = (__u64)pptr;
+	telemetry[i].fexit_ptr = (__u64)*bpf_core_cast(pptr, __u32_p);
+	telemetry[i].fexit_ret_pptr = (__u64)ret;
+	telemetry[i].fexit_ret_ptr = ret ? (__u64)*bpf_core_cast(ret, __u32_p) : 0;
+
+	current_index = i + 1;
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/fentry_fexit_void_ppptr_test.c b/tools/testing/selftests/bpf/progs/fentry_fexit_void_ppptr_test.c
new file mode 100644
index 000000000000..15e908f0a1ed
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/fentry_fexit_void_ppptr_test.c
@@ -0,0 +1,38 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 CrowdStrike, Inc. */
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+
+char _license[] SEC("license") = "GPL";
+
+__u32 fentry_called = 0;
+__u32 fexit_called = 0;
+__u64 fentry_pptr = 0;
+__u64 fexit_retval = 0;
+
+typedef void **volatile *const ppvpc_t;
+
+/*
+ * Workaround for a bug in LLVM:
+ * fatal error: error in backend: Empty type name for BTF_TYPE_ID_REMOTE reloc
+ */
+typedef void **void_pp;
+
+SEC("fentry/bpf_fentry_test14_ppptr")
+int BPF_PROG(test_fentry_void_ppptr, ppvpc_t ppptr)
+{
+	fentry_called = 1;
+	/* Invalid memory access is handled by boundary checks or the exception handler */
+	fentry_pptr = (__u64)*bpf_core_cast((void ***)ppptr, void_pp);
+	return 0;
+}
+
+SEC("fexit/bpf_fentry_test14_ppptr")
+int BPF_PROG(test_fexit_void_ppptr, ppvpc_t ppptr, void ***ret)
+{
+	fexit_called = 1;
+	fexit_retval = ret ? (__u64)ret : 0;
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/fentry_fexit_void_pptr_test.c b/tools/testing/selftests/bpf/progs/fentry_fexit_void_pptr_test.c
new file mode 100644
index 000000000000..588050b9607d
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/fentry_fexit_void_pptr_test.c
@@ -0,0 +1,71 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2026 CrowdStrike, Inc. */
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+
+char _license[] SEC("license") = "GPL";
+
+#define TELEMETRY_COUNT 3
+
+struct {
+	__u32 fentry_called;
+	__u32 fexit_called;
+	__u32 fentry_pptr_addr_valid;
+	__u32 fexit_pptr_addr_valid;
+	__u64 fentry_pptr;
+	__u64 fentry_ptr;
+	__u64 fexit_pptr;
+	__u64 fexit_ptr;
+} telemetry[TELEMETRY_COUNT];
+
+volatile unsigned int current_index = 0;
+
+/*
+ * Workaround for a bug in LLVM:
+ * fatal error: error in backend: Empty type name for BTF_TYPE_ID_REMOTE reloc
+ */
+typedef void *void_p;
+
+SEC("fentry/bpf_fentry_test13_pptr")
+int BPF_PROG(test_fentry_void_pptr, void **pptr)
+{
+	void *ptr;
+	unsigned int i = current_index;
+
+	if (i >= TELEMETRY_COUNT)
+		return 0;
+
+	telemetry[i].fentry_pptr_addr_valid =
+		(bpf_probe_read_kernel(&ptr, sizeof(ptr), pptr) == 0);
+	if (!telemetry[i].fentry_pptr_addr_valid)
+		ptr = NULL;
+
+	telemetry[i].fentry_called = 1;
+	telemetry[i].fentry_pptr = (__u64)pptr;
+	telemetry[i].fentry_ptr = (__u64)ptr;
+	return 0;
+}
+
+SEC("fexit/bpf_fentry_test13_pptr")
+int BPF_PROG(test_fexit_void_pptr, void **pptr, __u8 ret)
+{
+	unsigned int i = current_index;
+
+	if (i >= TELEMETRY_COUNT)
+		return 0;
+
+	telemetry[i].fexit_called = 1;
+	telemetry[i].fexit_pptr = (__u64)pptr;
+	telemetry[i].fexit_pptr_addr_valid = ret;
+
+	/*
+	 * For invalid addresses, the destination register for *dptr is set
+	 * to 0 by the BPF exception handler, JIT address range check, or
+	 * the BPF interpreter.
+	 */
+	telemetry[i].fexit_ptr = (__u64)*bpf_core_cast(pptr, void_p);
+	current_index = i + 1;
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/verifier_ctx_ptr_param.c b/tools/testing/selftests/bpf/progs/verifier_ctx_ptr_param.c
new file mode 100644
index 000000000000..5465b8a406c0
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/verifier_ctx_ptr_param.c
@@ -0,0 +1,523 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Verifier tests for single- and multi-level pointer parameter handling
+ * Copyright (c) 2026 CrowdStrike, Inc.
+ */
+
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_core_read.h>
+#include "bpf_misc.h"
+
+#define VALID_CTX_ACCESS(section, name, ctx_offset) \
+SEC(section) \
+__description(section " - valid ctx access at offset " #ctx_offset) \
+__success __retval(0) \
+__naked void name##_ctx_at_##ctx_offset##_valid(void) \
+{ \
+	asm volatile ("				\
+	r2 = *(u64 *)(r1 + " #ctx_offset ");		\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all); \
+}
+
+#define INVALID_CTX_ACCESS(section, name, desc, errmsg, ctx_offset) \
+SEC(section) \
+__description(desc) \
+__failure __msg(errmsg) \
+__naked void name##_ctx_at_##ctx_offset##_invalid(void) \
+{ \
+	asm volatile ("				\
+	r2 = *(u64 *)(r1 + " #ctx_offset ");		\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all); \
+}
+
+#define INVALID_LOAD_OFFSET(section, name, size, offset, ctx_offset) \
+SEC(section) \
+__description(section " - ctx offset " #ctx_offset ", invalid load at offset " #offset " with scalar") \
+__failure __msg("R2 invalid mem access 'scalar'") \
+__naked void name##_load_at_##offset##_with_scalar(void) \
+{ \
+	asm volatile ("				\
+	r2 = *(u64 *)(r1 + " #ctx_offset ");		\
+	r3 = *(u" #size "*)(r2 + " #offset ");		\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all); \
+}
+
+#define INVALID_LOAD(section, name, size, ctx_offset) \
+	INVALID_LOAD_OFFSET(section, name, size, 0, ctx_offset)
+
+#define INVALID_LOAD_NEG_OFFSET(section, name, size, offset, ctx_offset) \
+SEC(section) \
+__description(section " - ctx offset " #ctx_offset ", invalid load at negative offset " #offset " with scalar") \
+__failure __msg("R2 invalid mem access 'scalar'") \
+__naked void name##_load_at_neg_##offset##_with_scalar(void) \
+{ \
+	asm volatile ("				\
+	r2 = *(u64 *)(r1 + " #ctx_offset ");		\
+	r3 = *(u" #size "*)(r2 - " #offset ");		\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all); \
+}
+
+#define INVALID_STORE_OFFSET(section, name, size, offset, ctx_offset) \
+SEC(section) \
+__description(section " - ctx offset " #ctx_offset ", invalid store " #size " at offset " #offset " with scalar") \
+__failure __msg("R2 invalid mem access 'scalar'") \
+__naked void name##_store##size##_at_##offset##_with_scalar(void) \
+{ \
+	asm volatile ("				\
+	r2 = *(u64 *)(r1 + " #ctx_offset ");		\
+	*(u" #size "*)(r2 + " #offset ") = 1;		\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all); \
+}
+
+#define INVALID_STORE(section, name, size, ctx_offset) \
+	INVALID_STORE_OFFSET(section, name, size, 0, ctx_offset)
+
+#define INVALID_STORE_NEG_OFFSET(section, name, size, offset, ctx_offset) \
+SEC(section) \
+__description(section " - ctx offset " #ctx_offset ", invalid store " #size " at negative offset " #offset " with scalar") \
+__failure __msg("R2 invalid mem access 'scalar'") \
+__naked void name##_store##size##_at_neg_##offset##_with_scalar(void) \
+{ \
+	asm volatile ("				\
+	r2 = *(u64 *)(r1 + " #ctx_offset ");		\
+	*(u" #size "*)(r2 - " #offset ") = 1;		\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all); \
+}
+
+/* Double nullable pointer to struct */
+VALID_CTX_ACCESS("fentry/bpf_fentry_test11_pptr_nullable", bpf_fentry_pptr_nullable, 0)
+VALID_CTX_ACCESS("fexit/bpf_fentry_test11_pptr_nullable", bpf_fexit_pptr_nullable, 0)
+INVALID_LOAD("fentry/bpf_fentry_test11_pptr_nullable", bpf_fentry_pptr_nullable, 64, 0)
+INVALID_LOAD_OFFSET("fentry/bpf_fentry_test11_pptr_nullable", bpf_fentry_pptr_nullable, 64, 128, 0)
+INVALID_LOAD_NEG_OFFSET("fentry/bpf_fentry_test11_pptr_nullable", bpf_fentry_pptr_nullable, 64, 128, 0)
+INVALID_STORE("fentry/bpf_fentry_test11_pptr_nullable", bpf_fentry_pptr_nullable, 8, 0)
+INVALID_STORE("fentry/bpf_fentry_test11_pptr_nullable", bpf_fentry_pptr_nullable, 16, 0)
+INVALID_STORE("fentry/bpf_fentry_test11_pptr_nullable", bpf_fentry_pptr_nullable, 32, 0)
+INVALID_STORE("fentry/bpf_fentry_test11_pptr_nullable", bpf_fentry_pptr_nullable, 64, 0)
+INVALID_STORE_OFFSET("fentry/bpf_fentry_test11_pptr_nullable", bpf_fentry_pptr_nullable, 64, 128, 0)
+INVALID_STORE_NEG_OFFSET("fentry/bpf_fentry_test11_pptr_nullable", bpf_fentry_pptr_nullable, 64, 128, 0)
+
+/* Double pointer parameter to u32 at offset 8 in ctx */
+VALID_CTX_ACCESS("fentry/bpf_fentry_test12_pptr", bpf_fentry_test12_pptr, 8)
+VALID_CTX_ACCESS("fexit/bpf_fentry_test12_pptr", bpf_fexit_test12_pptr, 8)
+INVALID_LOAD("fentry/bpf_fentry_test12_pptr", bpf_fentry_test12_pptr, 64, 8)
+INVALID_LOAD_OFFSET("fentry/bpf_fentry_test12_pptr", bpf_fentry_test12_pptr, 64, 64, 8)
+INVALID_LOAD_NEG_OFFSET("fentry/bpf_fentry_test12_pptr", bpf_fentry_test12_pptr, 64, 64, 8)
+INVALID_STORE("fentry/bpf_fentry_test12_pptr", bpf_fentry_test12_pptr, 64, 8)
+INVALID_STORE_OFFSET("fentry/bpf_fentry_test12_pptr", bpf_fentry_test12_pptr, 64, 128, 8)
+INVALID_STORE_NEG_OFFSET("fentry/bpf_fentry_test12_pptr", bpf_fentry_test12_pptr, 64, 128, 8)
+
+/* Triple pointer to void with modifiers */
+VALID_CTX_ACCESS("fentry/bpf_fentry_test14_ppptr", bpf_fentry_ppptr, 0)
+VALID_CTX_ACCESS("fexit/bpf_fentry_test14_ppptr", bpf_fexit_ppptr, 0)
+INVALID_LOAD("fentry/bpf_fentry_test14_ppptr", bpf_fentry_ppptr, 64, 0)
+INVALID_STORE("fentry/bpf_fentry_test14_ppptr", bpf_fentry_ppptr, 64, 0)
+
+/* Trusted double pointer to void */
+SEC("lsm/sb_eat_lsm_opts")
+__description("lsm/sb_eat_lsm_opts double pointer parameter trusted - valid ctx access")
+__success
+__naked void sb_eat_lsm_opts_trusted_valid_ctx_access(void)
+{
+	asm volatile ("				\
+	/* load double pointer - SCALAR_VALUE */\
+	r2 = *(u64 *)(r1 + 8);		\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("lsm/sb_eat_lsm_opts")
+__description("lsm/sb_eat_lsm_opts double pointer parameter trusted - invalid load with scalar")
+__failure __msg("R2 invalid mem access 'scalar'")
+__naked void sb_eat_lsm_opts_trusted_load_with_scalar(void)
+{
+	asm volatile ("				\
+	/* load double pointer - SCALAR_VALUE */\
+	r2 = *(u64 *)(r1 + 8);		\
+	r3 = *(u64 *)(r2 + 0);		\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+SEC("lsm/sb_eat_lsm_opts")
+__description("lsm/sb_eat_lsm_opts double pointer parameter trusted - invalid store with scalar")
+__failure __msg("R2 invalid mem access 'scalar'")
+__naked void sb_eat_lsm_opts_trusted_store_with_scalar(void)
+{
+	asm volatile ("				\
+	/* load double pointer - SCALAR_VALUE */\
+	r2 = *(u64 *)(r1 + 8);		\
+	*(u64 *)(r2 + 0) = 1;		\
+	r0 = 0;						\
+	exit;						\
+"	::: __clobber_all);
+}
+
+struct bpf_fentry_test_pptr_t;
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter - bpf helpers with nullable var")
+__success __retval(0)
+int BPF_PROG(ctx_double_ptr_nullable_var_access_bpf_helpers,
+	struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+	/* Check compatibility with BPF helpers; NULL checks should not be required. */
+	void *ptr;
+
+	bpf_probe_read_kernel(&ptr, sizeof(ptr), pptr__nullable);
+	return 0;
+}
+
+/*
+ * Workaround for a bug in LLVM:
+ * fatal error: error in backend: Empty type name for BTF_TYPE_ID_REMOTE reloc
+ */
+typedef __u32 *__u32_p;
+
+/*
+ * Workaround for:
+ * kfunc bpf_rdonly_cast type ID argument must be of a struct or void
+ */
+struct __u32_wrap {
+	__u32 v;
+};
+
+SEC("fexit/bpf_fentry_test12_pptr")
+__description("fexit/double pointer return value - valid dereference of return val")
+__success __retval(0)
+int BPF_PROG(ctx_double_ptr_return_access, __u32 id,
+	__u32 **pptr, __u32 **ret)
+{
+	__u32 **ppu32;
+	struct __u32_wrap *pu32;
+	ppu32 = bpf_core_cast(ret, __u32_p);
+	pu32 = bpf_core_cast(ppu32, struct __u32_wrap);
+	bpf_printk("%d", pu32->v);
+	return 0;
+}
+
+SEC("fexit/bpf_fentry_test12_pptr")
+__description("fexit/double pointer parameter - bpf helpers with return val")
+__success __retval(0)
+int BPF_PROG(ctx_double_ptr_return_access_bpf_helpers, __u32 id,
+	__u32 **pptr, __u32 **ret)
+{
+	/* Check compatibility with BPF helpers */
+	void *ptr;
+
+	bpf_probe_read_kernel(&ptr, sizeof(ptr), pptr);
+	bpf_probe_read_kernel(&ptr, sizeof(ptr), ret);
+	return 0;
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter - bpf helpers with nullable var, direct ctx pointer")
+__success __retval(0)
+int BPF_PROG(ctx_double_ptr_nullable_var_access_bpf_helpers_ctx,
+	struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+	/*
+	 * Check compatibility with BPF helpers
+	 * NULL checks should not be required.
+	 */
+	void *ptr;
+
+	bpf_probe_read_kernel(&ptr, sizeof(ptr), &ctx[0] /*pptr__nullable*/);
+	return 0;
+}
+
+SEC("fexit/bpf_fentry_test12_pptr")
+__description("fexit/double pointer parameter - bpf helpers with return val, direct ctx pointer")
+__success __retval(0)
+int BPF_PROG(ctx_double_ptr_return_access_bpf_helpers_ctx, __u32 id,
+	__u32 **pptr, __u32 **ret)
+{
+	/* Check compatibility with BPF helpers */
+	void *ptr;
+
+	bpf_probe_read_kernel(&ptr, sizeof(ptr), &ctx[1] /*pptr*/);
+	bpf_probe_read_kernel(&ptr, sizeof(ptr), &ctx[2] /*ret*/);
+	return 0;
+}
+
+struct bpf_fentry_test_pptr_t {
+	__u32 value1;
+	__u32 value2;
+};
+
+/*
+ * Workaround for a bug in LLVM:
+ * fatal error: error in backend: Empty type name for BTF_TYPE_ID_REMOTE reloc
+ */
+typedef struct bpf_fentry_test_pptr_t *bpf_fentry_test_pptr_p;
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter - dereference followed by valid load of field 1")
+__success __retval(0)
+int BPF_PROG(ctx_double_ptr_deref_with_field_1_load,
+	struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+	struct bpf_fentry_test_pptr_t **pptr;
+	struct bpf_fentry_test_pptr_t *ptr;
+
+	pptr = bpf_core_cast(pptr__nullable, bpf_fentry_test_pptr_p);
+	ptr = bpf_core_cast((*pptr), struct bpf_fentry_test_pptr_t);
+	bpf_printk("%d", ptr->value1);
+	return 0;
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter - dereference followed by valid load of field 2")
+__success __retval(0)
+int BPF_PROG(ctx_double_ptr_deref_with_field_2_load,
+	struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+	struct bpf_fentry_test_pptr_t **pptr;
+	struct bpf_fentry_test_pptr_t *ptr;
+
+	pptr = bpf_core_cast(pptr__nullable, bpf_fentry_test_pptr_p);
+	ptr = bpf_core_cast((*pptr), struct bpf_fentry_test_pptr_t);
+	bpf_printk("%d", ptr->value2);
+	return 0;
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter - dereference followed by invalid out-of-bounds offset load")
+__failure __msg("access beyond struct bpf_fentry_test_pptr_t at off 128 size 4")
+int BPF_PROG(ctx_double_ptr_deref_with_load_by_positive_out_of_bound_offset,
+	struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+	struct bpf_fentry_test_pptr_t **pptr;
+	struct bpf_fentry_test_pptr_t *ptr;
+	__u32 value;
+
+	pptr = bpf_core_cast(pptr__nullable, bpf_fentry_test_pptr_p);
+	ptr = bpf_core_cast((*pptr), struct bpf_fentry_test_pptr_t);
+
+	asm volatile ("					\
+		r2 = %1;					\
+		/* Load with out-of-bounds offset */\
+		%0 = *(u32 *)(r2 + 0x80)	\
+		" : "=r" (value) : "r" (ptr) : "r2");
+
+	bpf_printk("%d", value);
+	return 0;
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter - dereference followed by invalid out-of-bounds offset load")
+__failure __msg("R2 is ptr_bpf_fentry_test_pptr_t invalid negative access: off=-128")
+int BPF_PROG(ctx_double_ptr_deref_with_load_by_negative_out_of_bound_offset,
+	struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+	struct bpf_fentry_test_pptr_t **pptr;
+	struct bpf_fentry_test_pptr_t *ptr;
+	__u32 value;
+
+	pptr = bpf_core_cast(pptr__nullable, bpf_fentry_test_pptr_p);
+	ptr = bpf_core_cast((*pptr), struct bpf_fentry_test_pptr_t);
+
+	asm volatile ("					\
+		r2 = %1;					\
+		/* Load with out-of-bounds offset */\
+		%0 = *(u32 *)(r2 - 0x80);	\
+		" : "=r" (value) : "r" (ptr) : "r2");
+
+	bpf_printk("%d", value);
+	return 0;
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter - dereference followed by invalid store to field 1")
+__failure __msg("only read is supported")
+int BPF_PROG(ctx_double_ptr_deref_with_field_1_modification,
+	struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+	struct bpf_fentry_test_pptr_t **pptr;
+	struct bpf_fentry_test_pptr_t *ptr;
+
+	pptr = bpf_core_cast(pptr__nullable, bpf_fentry_test_pptr_p);
+	ptr = bpf_core_cast((*pptr), struct bpf_fentry_test_pptr_t);
+
+	asm volatile ("					\
+		/* Load immediate 1 into w2 */\
+		w2 = 1;						\
+		/* Store to ptr->value1 */	\
+		*(u32 *)(%0 + 0) = r2;		\
+		" :: "r" (ptr) : "r2");
+
+	return 0;
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter - dereference followed by invalid store to field 2")
+__failure __msg("only read is supported")
+int BPF_PROG(ctx_double_ptr_deref_with_field_2_modification,
+	struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+	struct bpf_fentry_test_pptr_t **pptr;
+	struct bpf_fentry_test_pptr_t *ptr;
+
+	pptr = bpf_core_cast(pptr__nullable, bpf_fentry_test_pptr_p);
+	ptr = bpf_core_cast((*pptr), struct bpf_fentry_test_pptr_t);
+
+	asm volatile ("					\
+		/* Load immediate 2 into w2 */\
+		w2 = 2;						\
+		/* Store to ptr->value2 */	\
+		*(u32 *)(%0 + 4) = r2;		\
+		" :: "r" (ptr) : "r2");
+
+	return 0;
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter - dereference followed by invalid store to positive offset beyond struct boundaries")
+__failure __msg("only read is supported")
+int BPF_PROG(ctx_double_ptr_deref_with_store_by_positive_invalid_offset,
+	struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+	struct bpf_fentry_test_pptr_t **pptr;
+	struct bpf_fentry_test_pptr_t *ptr;
+
+	pptr = bpf_core_cast(pptr__nullable, bpf_fentry_test_pptr_p);
+	ptr = bpf_core_cast((*pptr), struct bpf_fentry_test_pptr_t);
+
+	asm volatile ("					\
+		r3 = %0;					\
+		/* Load immediate 3 into w2 */\
+		w2 = 3;						\
+		/* Store with offset outside struct size */	\
+		*(u32 *)(r3 + 0x80) = r2;		\
+		" :: "r" (ptr) : "r2", "r3");
+
+	return 0;
+}
+
+SEC("fentry/bpf_fentry_test11_pptr_nullable")
+__description("fentry/double pointer parameter - dereference followed by invalid store to negative offset beyond struct boundaries")
+__failure __msg("R3 is ptr_bpf_fentry_test_pptr_t invalid negative access: off=-128")
+int BPF_PROG(ctx_double_ptr_deref_with_store_by_negative_invalid_offset,
+	struct bpf_fentry_test_pptr_t **pptr__nullable)
+{
+	struct bpf_fentry_test_pptr_t **pptr;
+	struct bpf_fentry_test_pptr_t *ptr;
+
+	pptr = bpf_core_cast(pptr__nullable, bpf_fentry_test_pptr_p);
+	ptr = bpf_core_cast((*pptr), struct bpf_fentry_test_pptr_t);
+
+	asm volatile ("					\
+		r3 = %0;					\
+		/* Load immediate 3 into w2 */\
+		w2 = 3;						\
+		/* Store with offset outside struct size */	\
+		*(u32 *)(r3 - 0x80) = r2;		\
+		" :: "r" (ptr) : "r2", "r3");
+
+	return 0;
+}
+
+/* Pointer to enum 32 */
+VALID_CTX_ACCESS("fentry/bpf_fentry_test15_penum32", bpf_fentry_penum32, 0)
+INVALID_LOAD("fentry/bpf_fentry_test15_penum32", bpf_fentry_penum32, 32, 0)
+VALID_CTX_ACCESS("fexit/bpf_fentry_test15_penum32", bpf_fexit_penum32, 0)
+INVALID_LOAD("fexit/bpf_fentry_test15_penum32", bpf_fexit_penum32, 32, 0)
+INVALID_LOAD_OFFSET("fentry/bpf_fentry_test15_penum32", bpf_fentry_penum32, 8, 1, 0)
+INVALID_STORE("fentry/bpf_fentry_test15_penum32", bpf_fentry_penum32, 8, 0)
+INVALID_STORE("fentry/bpf_fentry_test15_penum32", bpf_fentry_penum32, 32, 0)
+INVALID_STORE("fentry/bpf_fentry_test15_penum32", bpf_fentry_penum32, 64, 0)
+INVALID_STORE_OFFSET("fentry/bpf_fentry_test15_penum32", bpf_fentry_penum32, 8, 1, 0)
+INVALID_STORE_NEG_OFFSET("fentry/bpf_fentry_test15_penum32", bpf_fentry_penum32, 8, 1, 0)
+
+/* Pointer to enum 64 */
+VALID_CTX_ACCESS("fentry/bpf_fentry_test15_penum64", bpf_fentry_penum64, 0)
+INVALID_LOAD("fentry/bpf_fentry_test15_penum64", bpf_fentry_penum64, 64, 0)
+VALID_CTX_ACCESS("fexit/bpf_fentry_test15_penum64", bpf_fexit_penum64, 0)
+INVALID_LOAD("fexit/bpf_fentry_test15_penum64", bpf_fexit_penum64, 64, 0)
+
+/* Double pointer to enum 32 */
+VALID_CTX_ACCESS("fentry/bpf_fentry_test16_ppenum32", bpf_fentry_ppenum32, 0)
+INVALID_LOAD("fentry/bpf_fentry_test16_ppenum32", bpf_fentry_ppenum32, 8, 0)
+
+/* Double pointer to enum 64 */
+VALID_CTX_ACCESS("fentry/bpf_fentry_test16_ppenum64", bpf_fentry_ppenum64, 0)
+INVALID_LOAD("fentry/bpf_fentry_test16_ppenum64", bpf_fentry_ppenum64, 64, 0)
+
+/* Pointer to function */
+VALID_CTX_ACCESS("fentry/bpf_fentry_test17_pfunc", bpf_fentry_pfunc, 0)
+INVALID_LOAD("fentry/bpf_fentry_test17_pfunc", bpf_fentry_pfunc, 8, 0)
+
+/* Double pointer to function */
+VALID_CTX_ACCESS("fentry/bpf_fentry_test18_ppfunc", bpf_fentry_ppfunc, 0)
+INVALID_LOAD("fentry/bpf_fentry_test18_ppfunc", bpf_fentry_ppfunc, 8, 0)
+
+/* Pointer to float */
+INVALID_CTX_ACCESS("fentry/bpf_fentry_test19_pfloat", bpf_fentry_float,
+	"fentry/pointer to float - invalid ctx access",
+	"func 'bpf_fentry_test19_pfloat' arg0 type FLOAT is not a struct", 0)
+
+/* Double pointer to float */
+INVALID_CTX_ACCESS("fentry/bpf_fentry_test20_ppfloat", bpf_fentry_pfloat,
+	"fentry/double pointer to float - invalid ctx access",
+	"func 'bpf_fentry_test20_ppfloat' arg0 type PTR is not a struct", 0)
+
+/* Pointer to char */
+VALID_CTX_ACCESS("fentry/bpf_fentry_test21_pchar", bpf_fentry_pchar, 0)
+INVALID_LOAD("fentry/bpf_fentry_test21_pchar", bpf_fentry_pchar, 64, 0)
+INVALID_STORE("fentry/bpf_fentry_test21_pchar", bpf_fentry_pchar, 8, 0)
+INVALID_STORE("fentry/bpf_fentry_test21_pchar", bpf_fentry_pchar, 16, 0)
+INVALID_STORE("fentry/bpf_fentry_test21_pchar", bpf_fentry_pchar, 32, 0)
+INVALID_STORE("fentry/bpf_fentry_test21_pchar", bpf_fentry_pchar, 64, 0)
+
+/* Double pointer to char */
+VALID_CTX_ACCESS("fentry/bpf_fentry_test22_ppchar", bpf_fentry_ppchar, 0)
+INVALID_LOAD("fentry/bpf_fentry_test22_ppchar", bpf_fentry_ppchar, 64, 0)
+INVALID_STORE("fentry/bpf_fentry_test22_ppchar", bpf_fentry_ppchar, 8, 0)
+INVALID_STORE_OFFSET("fentry/bpf_fentry_test22_ppchar", bpf_fentry_ppchar, 8, 1, 0)
+INVALID_STORE("fentry/bpf_fentry_test22_ppchar", bpf_fentry_ppchar, 16, 0)
+INVALID_STORE("fentry/bpf_fentry_test22_ppchar", bpf_fentry_ppchar, 32, 0)
+INVALID_STORE("fentry/bpf_fentry_test22_ppchar", bpf_fentry_ppchar, 64, 0)
+
+/* Double pointer to char as return value */
+INVALID_CTX_ACCESS("fentry/bpf_fentry_test23_ret_ppchar", bpf_fentry_ret_ppchar,
+	"fentry/bpf_fentry_test23_ret_ppchar - invalid ctx access for nonexistent parameter",
+	"func 'bpf_fentry_test23_ret_ppchar' doesn't have 1-th argument", 0)
+VALID_CTX_ACCESS("fexit/bpf_fentry_test23_ret_ppchar", bpf_fexit_ret_ppchar, 0)
+INVALID_LOAD("fexit/bpf_fentry_test23_ret_ppchar", bpf_fexit_ret_ppchar, 8, 0)
+INVALID_LOAD_OFFSET("fexit/bpf_fentry_test23_ret_ppchar", bpf_fexit_ret_ppchar, 8, 1, 0)
+INVALID_LOAD_NEG_OFFSET("fexit/bpf_fentry_test23_ret_ppchar", bpf_fexit_ret_ppchar, 8, 1, 0)
+INVALID_STORE("fexit/bpf_fentry_test23_ret_ppchar", bpf_fexit_ret_ppchar, 8, 0)
+INVALID_STORE_OFFSET("fexit/bpf_fentry_test23_ret_ppchar", bpf_fexit_ret_ppchar, 8, 1, 0)
+INVALID_STORE_NEG_OFFSET("fexit/bpf_fentry_test23_ret_ppchar", bpf_fexit_ret_ppchar, 8, 1, 0)
+
+/* Double pointer to struct file as return value, double pointer to void as input */
+VALID_CTX_ACCESS("fentry/bpf_fentry_test24_ret_ppfile", bpf_fentry_ret_ppfile, 0)
+INVALID_CTX_ACCESS("fentry/bpf_fentry_test24_ret_ppfile", bpf_fentry_ret_ppfile,
+	"fentry/bpf_fentry_test24_ret_ppfile - invalid ctx access for nonexistent parameter",
+	"func 'bpf_fentry_test24_ret_ppfile' doesn't have 2-th argument", 8)
+VALID_CTX_ACCESS("fexit/bpf_fentry_test24_ret_ppfile", bpf_fexit_ret_ppfile, 0)
+VALID_CTX_ACCESS("fexit/bpf_fentry_test24_ret_ppfile", bpf_fexit_ret_ppfile, 8)
+INVALID_LOAD("fexit/bpf_fentry_test24_ret_ppfile", bpf_fexit_ret_ppfile, 8, 8)
+INVALID_LOAD_OFFSET("fexit/bpf_fentry_test24_ret_ppfile", bpf_fexit_ret_ppfile, 8, 1, 8)
+INVALID_LOAD_NEG_OFFSET("fexit/bpf_fentry_test24_ret_ppfile", bpf_fexit_ret_ppfile, 8, 1, 8)
+INVALID_STORE("fexit/bpf_fentry_test24_ret_ppfile", bpf_fexit_ret_ppfile, 8, 8)
+INVALID_STORE_OFFSET("fexit/bpf_fentry_test24_ret_ppfile", bpf_fexit_ret_ppfile, 8, 1, 8)
+INVALID_STORE_NEG_OFFSET("fexit/bpf_fentry_test24_ret_ppfile", bpf_fexit_ret_ppfile, 8, 1, 8)
+
+char _license[] SEC("license") = "GPL";
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [PATCH bpf-next v4 1/2] bpf: Support new pointer param types via SCALAR_VALUE for trampolines
  2026-03-03  9:54 ` [PATCH bpf-next v4 1/2] bpf: Support new pointer param types via SCALAR_VALUE " Slava Imameev
@ 2026-03-03 20:05   ` Eduard Zingerman
  2026-03-03 21:49     ` Slava Imameev
  0 siblings, 1 reply; 15+ messages in thread
From: Eduard Zingerman @ 2026-03-03 20:05 UTC (permalink / raw)
  To: Slava Imameev, ast, daniel, andrii
  Cc: martin.lau, song, yonghong.song, john.fastabend, kpsingh, sdf,
	haoluo, jolsa, davem, edumazet, kuba, pabeni, horms, shuah,
	linux-kernel, bpf, netdev, linux-kselftest, linux-open-source

On Tue, 2026-03-03 at 20:54 +1100, Slava Imameev wrote:

[...]

> @@ -6902,11 +6921,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
>  		}
>  	}
>  
> -	/*
> -	 * If it's a pointer to void, it's the same as scalar from the verifier
> -	 * safety POV. Either way, no futher pointer walking is allowed.
> -	 */
> -	if (is_void_or_int_ptr(btf, t))
> +	if (is_ptr_treated_as_scalar(btf, t))
>  		return true;

I'm probably missing a point here, but what's wrong with Alexei's
suggestion to do this instead:

	if (is_ptr_treated_as_scalar(btf, t))
		 return true;
?

Only two new tests fail:
- #554/62  verifier_ctx_ptr_param/fentry/pointer to float - invalid ctx access:FAIL
- #554/63  verifier_ctx_ptr_param/fentry/double pointer to float - invalid ctx access:FAIL

But I'd say this shouldn't matter.
This will also make selftests much simpler.

>  
>  	/* this is a pointer to another type */


* Re: [PATCH bpf-next v4 2/2] selftests/bpf: Add trampolines single and multi-level pointer params test coverage
  2026-03-03  9:54 ` [PATCH bpf-next v4 2/2] selftests/bpf: Add trampolines single and multi-level pointer params test coverage Slava Imameev
@ 2026-03-03 20:08   ` Eduard Zingerman
  2026-03-03 22:14     ` Slava Imameev
  0 siblings, 1 reply; 15+ messages in thread
From: Eduard Zingerman @ 2026-03-03 20:08 UTC (permalink / raw)
  To: Slava Imameev, ast, daniel, andrii
  Cc: martin.lau, song, yonghong.song, john.fastabend, kpsingh, sdf,
	haoluo, jolsa, davem, edumazet, kuba, pabeni, horms, shuah,
	linux-kernel, bpf, netdev, linux-kselftest, linux-open-source

On Tue, 2026-03-03 at 20:54 +1100, Slava Imameev wrote:
> Single and multi-level pointer params and return value test coverage
> for BPF trampolines:
> - fentry/fexit programs covering struct and void double/triple
>   pointer parameters and return values
> - verifier context tests covering pointers as parameters, these
>   tests cover single and double pointers to int, enum 32 and 64,
>   void, function, and double pointers to struct, triple pointers
>   for void
> - verifier context tests covering single and double pointers to
>   float, to check proper error is returned as pointers to float
>   are not supported
> - verifier context tests covering pointers as return values
> - verifier context tests for lsm to check trusted parameters
>   handling
> - verifier context tests covering out-of-bound access after cast
> - verifier BPF helper tests to validate no change in verifier
>   behavior
> 
> Signed-off-by: Slava Imameev <slava.imameev@crowdstrike.com>
> ---

Again, I'm probably missing a point, but with the current implementation
it seems sufficient in verifier_ctx_ptr_param() to add one or two
tests accessing void** or similar and checking the verification log
to validate that the parameter has the expected scalar() type.
Why are so many tests necessary?

[...]


* Re: [PATCH bpf-next v4 1/2] bpf: Support new pointer param types via SCALAR_VALUE for trampolines
  2026-03-03 20:05   ` Eduard Zingerman
@ 2026-03-03 21:49     ` Slava Imameev
  2026-03-03 22:43       ` Eduard Zingerman
  0 siblings, 1 reply; 15+ messages in thread
From: Slava Imameev @ 2026-03-03 21:49 UTC (permalink / raw)
  To: eddyz87
  Cc: andrii, ast, bpf, daniel, davem, edumazet, haoluo, horms,
	john.fastabend, jolsa, kpsingh, kuba, linux-kernel,
	linux-kselftest, linux-open-source, martin.lau, netdev, pabeni,
	sdf, shuah, slava.imameev, song, yonghong.song

On 2026-03-03 20:05 UTC, Eduard Zingerman wrote:

> > @@ -6902,11 +6921,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
> >               }
> >       }
> > 
> > -     /*
> > -      * If it's a pointer to void, it's the same as scalar from the verifier
> > -      * safety POV. Either way, no futher pointer walking is allowed.
> > -      */
> > -     if (is_void_or_int_ptr(btf, t))
> > +     if (is_ptr_treated_as_scalar(btf, t))
> >               return true;
> 
> I'm probably missing a point here, but what's wrong with Alexei's
> suggestion to do this instead:
> 
>         if (is_ptr_treated_as_scalar(btf, t))
>                  return true;
> ?

This reflects my preference for a cautious approach: adding support
only for selected types, with tests for each new type. That said,
I can add the suggested broader condition and make it pass the tests,
but I cannot be sure it will be future-proof against conflicting
changes.

I think the broader check like

	/* skip modifiers */
	tt = t;
	while (btf_type_is_modifier(tt))
		tt = btf_type_by_id(btf, tt->type);
	if (!btf_type_is_struct(tt))
		return true;

might have some incompatibility with future changes, compared to
explicit type checks for selected types. This condition is
open-ended, including anything instead of selecting specific types.

This broader check also needs to be moved down, closer to the exit
from btf_ctx_access; otherwise, btf_ctx_access can exit early
without executing the code that follows. In my case, existing tests
failed when the above !btf_type_is_struct(tt) check replaced the
condition currently in the master branch:

	if (is_void_or_int_ptr(btf, t))
		return true;

The result for: 

./vmtest.sh -- ./test_progs

was:

	Summary: 617/5770 PASSED, 80 SKIPPED, 82 FAILED

with a lot of:

	unexpected_load_success

Compared to:

	Summary: 692/6045 PASSED, 80 SKIPPED, 7 FAILED

for the master branch.

As I noted, this diff, applied closer to the exit from
btf_ctx_access, makes the tests pass:

        if (!btf_type_is_struct(t)) {
-               bpf_log(log,
-                       "func '%s' arg%d type %s is not a struct\n",
-                       tname, arg, btf_type_str(t));
-               return false;
+               info->reg_type = SCALAR_VALUE;
+               return true;
        }


> Only two new tests fail:
> - #554/62  verifier_ctx_ptr_param/fentry/pointer to float - invalid ctx access:FAIL
> - #554/63  verifier_ctx_ptr_param/fentry/double pointer to float - invalid ctx access:FAIL

> But I'd say this shouldn't matter.
> This will also make selftests much simpler.

Yes, I decided not to add support for pointers to float.



* Re: [PATCH bpf-next v4 2/2] selftests/bpf: Add trampolines single and multi-level pointer params test coverage
  2026-03-03 20:08   ` Eduard Zingerman
@ 2026-03-03 22:14     ` Slava Imameev
  0 siblings, 0 replies; 15+ messages in thread
From: Slava Imameev @ 2026-03-03 22:14 UTC (permalink / raw)
  To: eddyz87
  Cc: andrii, ast, bpf, daniel, davem, edumazet, haoluo, horms,
	john.fastabend, jolsa, kpsingh, kuba, linux-kernel,
	linux-kselftest, linux-open-source, martin.lau, netdev, pabeni,
	sdf, shuah, slava.imameev, song, yonghong.song

On Tue, 03 Mar 2026 12:08:44, Eduard Zingerman wrote:
> On Tue, 2026-03-03 at 20:54 +1100, Slava Imameev wrote:
> > Single and multi-level pointer params and return value test coverage
> > for BPF trampolines:
> > - fentry/fexit programs covering struct and void double/triple
> >   pointer parameters and return values
> > - verifier context tests covering pointers as parameters, these
> >   tests cover single and double pointers to int, enum 32 and 64,
> >   void, function, and double pointers to struct, triple pointers
> >   for void
> > - verifier context tests covering single and double pointers to
> >   float, to check proper error is returned as pointers to float
> >   are not supported
> > - verifier context tests covering pointers as return values
> > - verifier context tests for lsm to check trusted parameters
> >   handling
> > - verifier context tests covering out-of-bound access after cast
> > - verifier BPF helper tests to validate no change in verifier
> >   behavior
> > 
> > Signed-off-by: Slava Imameev <slava.imameev@crowdstrike.com>
> > ---
> 
> Again, I probably miss a point, but with current implementation it
> seems sufficient in verifier_ctx_ptr_param() to add one or two
> tests accessing void** or similar and checking verification log
> to validate that parameter has expected type scalar().
> Why so many tests are necessary?

This reflects my belief in more comprehensive test coverage.

I can certainly reduce the number of tests if this seems excessive,
but I made 90% of the added tests one-liners to keep them maintainable.

These changes add support for multilevel pointers, so double and
triple pointers need to be checked at minimum. I think adding checks
for any newly supported type is beneficial, so I tried to verify a
broader set of conditions that might be broken by future changes,
keeping most tests as one-liners to facilitate future modification.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH bpf-next v4 1/2] bpf: Support new pointer param types via SCALAR_VALUE for trampolines
  2026-03-03 21:49     ` Slava Imameev
@ 2026-03-03 22:43       ` Eduard Zingerman
  2026-03-04  0:22         ` Slava Imameev
  0 siblings, 1 reply; 15+ messages in thread
From: Eduard Zingerman @ 2026-03-03 22:43 UTC (permalink / raw)
  To: Slava Imameev
  Cc: andrii, ast, bpf, daniel, davem, edumazet, haoluo, horms,
	john.fastabend, jolsa, kpsingh, kuba, linux-kernel,
	linux-kselftest, linux-open-source, martin.lau, netdev, pabeni,
	sdf, shuah, song, yonghong.song

On Wed, 2026-03-04 at 08:49 +1100, Slava Imameev wrote:
> On 2026-03-03 20:05 UTC, Eduard Zingerman wrote:
> 
> > > @@ -6902,11 +6921,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
> > >               }
> > >       }
> > > 
> > > -     /*
> > > -      * If it's a pointer to void, it's the same as scalar from the verifier
> > > -      * safety POV. Either way, no futher pointer walking is allowed.
> > > -      */
> > > -     if (is_void_or_int_ptr(btf, t))
> > > +     if (is_ptr_treated_as_scalar(btf, t))
> > >               return true;
> > 
> > I'm probably missing a point here, but what's wrong with Alexei's
> > suggestion to do this instead:
> > 
> >         if (is_ptr_treated_as_scalar(btf, t))
> >                  return true;
> > ?

Uh-oh, I copy-pasted the wrong snippet, sorry.
The correct snippet is:

         if (btf_type_is_struct_ptr(btf, t))
                  return true;

With it the selftests pass (except for `float` tests noted earlier).
And regardless of selftests, the code below this point will
error out if `t` is not a pointer to struct.

> This reflects my belief in a cautious approach: adding support
> only for selected types with tests added for each new type. That said,
> I can add the suggested broader condition and make it pass the tests,
> but I cannot be sure it will be future-proof against conflicts.
> 
> I think the broader check like
> 
> 	/* skip modifiers */
> 	tt = t;
> 	while (btf_type_is_modifier(tt))
> 		tt = btf_type_by_id(btf, tt->type);
> 	if (!btf_type_is_struct(tt))
> 		return true;

btf_type_is_struct_ptr() is almost identical to the snippet above.

> might have some incompatibility with future changes, compared to
> explicit type checks for selected types. This condition is
> open-ended, including anything instead of selecting specific types.

What potential incompatibility do you expect?
Two things change:
> - types other than `struct foo *` or `int` can be read:
  - do you expect we would want to deny reading some ctx
    fields in the future?
- the value read is marked as scalar:
  - not much can be done with a scalar, except for leaking it to
    e.g. some map or ring buffer. Do you expect this to be problematic?

Note that the above are selected based on type, not on the
function/parameter combination, which is already not a very effective
filter if some parameters need to be hidden.

> This broader check also needs to be moved down closer to the exit
> from btf_ctx_access; otherwise, btf_ctx_access can exit early
> without executing the following code. In my case, this resulted in
> existing test failures if the above !btf_type_is_struct(tt) replaces
> current master's branch condition
> 
> 	if (is_void_or_int_ptr(btf, t))
> 		return true;
> 
> The result for: 
> 
> ./vmtest.sh -- ./test_progs
> 
> was:
> 
> 	Summary: 617/5770 PASSED, 80 SKIPPED, 82 FAILED
> 
> with a lot of:
> 
> 	unexpected_load_success
> 
> Compared to:
> 
> 	Summary: 692/6045 PASSED, 80 SKIPPED, 7 FAILED
> 
> for the master branch.
> 
> As I noted this diff, closer to the exit from btf_ctx_access,
> makes tests to pass:
> 
>         if (!btf_type_is_struct(t)) {
> -               bpf_log(log,
> -                       "func '%s' arg%d type %s is not a struct\n",
> -                       tname, arg, btf_type_str(t));
> -               return false;
> +               info->reg_type = SCALAR_VALUE;
> +               return true;
>         }
> 
> 
> > Only two new tests fail:
> > - #554/62  verifier_ctx_ptr_param/fentry/pointer to float - invalid ctx access:FAIL
> > - #554/63  verifier_ctx_ptr_param/fentry/double pointer to float - invalid ctx access:FAIL
> 
> > But I'd say this shouldn't matter.
> > This will also make selftests much simpler.
> 
> Yes, I decided not to add support for pointers to float.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH bpf-next v4 1/2] bpf: Support new pointer param types via SCALAR_VALUE for trampolines
  2026-03-03 22:43       ` Eduard Zingerman
@ 2026-03-04  0:22         ` Slava Imameev
  2026-03-04  0:36           ` Alexei Starovoitov
  2026-03-04  0:38           ` Eduard Zingerman
  0 siblings, 2 replies; 15+ messages in thread
From: Slava Imameev @ 2026-03-04  0:22 UTC (permalink / raw)
  To: eddyz87
  Cc: andrii, ast, bpf, daniel, davem, edumazet, haoluo, horms,
	john.fastabend, jolsa, kpsingh, kuba, linux-kernel,
	linux-kselftest, linux-open-source, martin.lau, netdev, pabeni,
	sdf, shuah, slava.imameev, song, yonghong.song

On Tue, 03 Mar 2026 14:43:01, Eduard Zingerman wrote:
> On Wed, 2026-03-04 at 08:49 +1100, Slava Imameev wrote:
> > On 2026-03-03 20:05 UTC, Eduard Zingerman wrote:
> >
> > > > @@ -6902,11 +6921,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
> > > >               }
> > > >       }
> > > >
> > > > -     /*
> > > > -      * If it's a pointer to void, it's the same as scalar from the verifier
> > > > -      * safety POV. Either way, no futher pointer walking is allowed.
> > > > -      */
> > > > -     if (is_void_or_int_ptr(btf, t))
> > > > +     if (is_ptr_treated_as_scalar(btf, t))
> > > >               return true;
> > >
> > > I'm probably missing a point here, but what's wrong with Alexei's
> > > suggestion to do this instead:
> > >
> > >         if (is_ptr_treated_as_scalar(btf, t))
> > >                  return true;
> > > ?
> 
> Uh-oh, I copy-pasted the wrong snippet, sorry.
> The correct snippet is:
> 
>          if (btf_type_is_struct_ptr(btf, t))
>                   return true;
> 
> With it the selftests pass (except for `float` tests noted earlier).
> And regardless of selftests, the code below this point will
> error out if `t` is not a pointer to struct.

I think you tested with

	if (!btf_type_is_struct_ptr(btf, t))
		return true;

I decided on a narrower condition, as

	if (!btf_type_is_struct_ptr(btf, t))

changes the existing selection condition from "treat only these types
as scalar" to "treat as scalar any type that is not a pointer to a
structure". Technically both approaches cover the problem I'm trying
to solve (multilevel pointer support for structures), but the latter is
open-ended and changes the current approach, which checks for pointers
to int and void. So I'm extending this to int, void, enum 32/64,
function, the corresponding multilevel pointers to these types, and
multilevel pointers to structures.

It seems if (!btf_type_is_struct_ptr(btf, t)) works, but it's
challenging to strictly prove it's sufficiently future-proof.

> > This reflects my belief in a cautious approach: adding support
> > only for selected types with tests added for each new type. That said,
> > I can add the suggested broader condition and make it pass the tests,
> but I cannot be sure it will be future-proof against conflicts.
> >
> > I think the broader check like
> >
> >       /* skip modifiers */
> >       tt = t;
> >       while (btf_type_is_modifier(tt))
> >               tt = btf_type_by_id(btf, tt->type);
> >       if (!btf_type_is_struct(tt))
> >               return true;
> 
> btf_type_is_struct_ptr() is almost identical to the snippet above.
> 
> > might have some incompatibility with future changes, compared to
> > explicit type checks for selected types. This condition is
> > open-ended, including anything instead of selecting specific types.
> 
> What potential incompatibility do you expect?
> Two things change:
> - types other then `struct foo *` or `int` can be read:
>   - do you expect we would want to deny reading some ctx
>     fields in the future?
> - the value read is marked as scalar:
>   - not much can be done with a scalar, except for leaking it to
>     e.g. some map or ring buffer. Do you expect this to problematic?
> 
> Note that the above are selected based on type, not on the
> function/parameter combination, which is already not a very effective
> filter if some parameters need to be hidden.

I do not think any of these represent a real problem. As I said,
my approach is based mostly on narrowing the supported types to
reduce potential conflicts.

I do not have a good example of such conflicts.
The added tests for pointer to float, which failed with
if (!btf_type_is_struct_ptr(btf, t)), might be an example of how a
new type could silently pass this check because of missing tests.

I was not able to convince myself a conflict will not happen.

That said, changing

	if (is_ptr_treated_as_scalar(btf, t))
		return true;

	to

	if (!btf_type_is_struct_ptr(btf, t))
		return true;

just makes the scope of these changes wider. This was
my initial approach to this problem, but I was concerned
about its wide scope.


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH bpf-next v4 1/2] bpf: Support new pointer param types via SCALAR_VALUE for trampolines
  2026-03-04  0:22         ` Slava Imameev
@ 2026-03-04  0:36           ` Alexei Starovoitov
  2026-03-04  0:38           ` Eduard Zingerman
  1 sibling, 0 replies; 15+ messages in thread
From: Alexei Starovoitov @ 2026-03-04  0:36 UTC (permalink / raw)
  To: Slava Imameev
  Cc: Eduard, Andrii Nakryiko, Alexei Starovoitov, bpf, Daniel Borkmann,
	David S. Miller, Eric Dumazet, Hao Luo, Simon Horman,
	John Fastabend, Jiri Olsa, KP Singh, Jakub Kicinski, LKML,
	open list:KERNEL SELFTEST FRAMEWORK, DL Linux Open Source Team,
	Martin KaFai Lau, Network Development, Paolo Abeni,
	Stanislav Fomichev, Shuah Khan, Song Liu, Yonghong Song

On Tue, Mar 3, 2026 at 4:22 PM Slava Imameev
<slava.imameev@crowdstrike.com> wrote:
>
> On Tue, 03 Mar 2026 14:43:01, Eduard Zingerman wrote:
> > On Wed, 2026-03-04 at 08:49 +1100, Slava Imameev wrote:
> > > On 2026-03-03 20:05 UTC, Eduard Zingerman wrote:
> > >
> > > > > @@ -6902,11 +6921,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
> > > > >               }
> > > > >       }
> > > > >
> > > > > -     /*
> > > > > -      * If it's a pointer to void, it's the same as scalar from the verifier
> > > > > -      * safety POV. Either way, no futher pointer walking is allowed.
> > > > > -      */
> > > > > -     if (is_void_or_int_ptr(btf, t))
> > > > > +     if (is_ptr_treated_as_scalar(btf, t))
> > > > >               return true;
> > > >
> > > > I'm probably missing a point here, but what's wrong with Alexei's
> > > > suggestion to do this instead:
> > > >
> > > >         if (is_ptr_treated_as_scalar(btf, t))
> > > >                  return true;
> > > > ?
> >
> > Uh-oh, I copy-pasted the wrong snippet, sorry.
> > The correct snippet is:
> >
> >          if (btf_type_is_struct_ptr(btf, t))
> >                   return true;
> >
> > With it the selftests pass (except for `float` tests noted earlier).
> > And regardless of selftests, the code below this point will
> > error out if `t` is not a pointer to struct.
>
> I think you tested with
>
>         if (!btf_type_is_struct_ptr(btf, t))
>                 return true;
>
> I decided on a narrower condition, as
>
> - if (!btf_type_is_struct_ptr(btf, t)) -
>
> changes the existing selection condition from "treat only these types
> as scalar" to "treat as scalar any type that is not a pointer to
> structure". Technically both approaches cover the problem I'm trying
> to solve - multilevel pointer support for structures, but the latter is
> open-ended and changes the current approach, which checks for pointers
> to int and void. So I'm extending this to int, void, enum 32/64,
> function, and corresponding multilevel pointers to these types and
> multilevel pointers to structures.
>
> It seems - if (!btf_type_is_struct_ptr(btf, t)) - works, but it's
> challenging to strictly prove it's sufficiently future-proof.
>
> > > This reflects my belief in a cautious approach: adding support
> > > only for selected types with tests added for each new type. That said,
> > > I can add the suggested broader condition and make it pass the tests,
> > but I cannot be sure it will be future-proof against conflicts.
> > >
> > > I think the broader check like
> > >
> > >       /* skip modifiers */
> > >       tt = t;
> > >       while (btf_type_is_modifier(tt))
> > >               tt = btf_type_by_id(btf, tt->type);
> > >       if (!btf_type_is_struct(tt))
> > >               return true;
> >
> > btf_type_is_struct_ptr() is almost identical to the snippet above.
> >
> > > might have some incompatibility with future changes, compared to
> > > explicit type checks for selected types. This condition is
> > > open-ended, including anything instead of selecting specific types.
> >
> > What potential incompatibility do you expect?
> > Two things change:
> > - types other then `struct foo *` or `int` can be read:
> >   - do you expect we would want to deny reading some ctx
> >     fields in the future?
> > - the value read is marked as scalar:
> >   - not much can be done with a scalar, except for leaking it to
> >     e.g. some map or ring buffer. Do you expect this to problematic?
> >
> > Note that the above are selected based on type, not on the
> > function/parameter combination, which is already not a very effective
> > filter if some parameters need to be hidden.
>
> I do not think any of these represent a real problem. As I said,
> my approach is based mostly on narrowing the supported types to
> reduce potential conflicts.
>
> I do not have a good example of such conflicts.
> The added tests for pointer to float, which failed with -
> if (!btf_type_is_struct_ptr(btf, t)) - might be an example when adding
> a new type might silently pass this check because of missing tests.
>
> I  was not able to convince myself a conflict will not  happen.
>
> That said, changing
>
>         if (is_ptr_treated_as_scalar(btf, t))
>                 return true;
>
>         to
>
>         if (!btf_type_is_struct_ptr(btf, t))
>                 return true;
>
> just makes the scope of these changes wider. This was
> my initial approach to this problem, but I was worried
> by its wide scope.

You contradict yourself. If returning scalar today for pointer-to-pointer
is backward compatible in case we want to make it smarter,
then returning scalar for all is backward compat as well.
So no reason to introduce is_ptr_treated_as_scalar().

Also simplify tests as Eduard suggested.

pw-bot: cr

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH bpf-next v4 1/2] bpf: Support new pointer param types via SCALAR_VALUE for trampolines
  2026-03-04  0:22         ` Slava Imameev
  2026-03-04  0:36           ` Alexei Starovoitov
@ 2026-03-04  0:38           ` Eduard Zingerman
  2026-03-10 12:16             ` Slava Imameev
  1 sibling, 1 reply; 15+ messages in thread
From: Eduard Zingerman @ 2026-03-04  0:38 UTC (permalink / raw)
  To: Slava Imameev
  Cc: andrii, ast, bpf, daniel, davem, edumazet, haoluo, horms,
	john.fastabend, jolsa, kpsingh, kuba, linux-kernel,
	linux-kselftest, linux-open-source, martin.lau, netdev, pabeni,
	sdf, shuah, song, yonghong.song

On Wed, 2026-03-04 at 11:22 +1100, Slava Imameev wrote:
> On Tue, 03 Mar 2026 14:43:01, Eduard Zingerman wrote:
> > On Wed, 2026-03-04 at 08:49 +1100, Slava Imameev wrote:
> > > On 2026-03-03 20:05 UTC, Eduard Zingerman wrote:
> > > 
> > > > > @@ -6902,11 +6921,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
> > > > >               }
> > > > >       }
> > > > > 
> > > > > -     /*
> > > > > -      * If it's a pointer to void, it's the same as scalar from the verifier
> > > > > -      * safety POV. Either way, no futher pointer walking is allowed.
> > > > > -      */
> > > > > -     if (is_void_or_int_ptr(btf, t))
> > > > > +     if (is_ptr_treated_as_scalar(btf, t))
> > > > >               return true;
> > > > 
> > > > I'm probably missing a point here, but what's wrong with Alexei's
> > > > suggestion to do this instead:
> > > > 
> > > >         if (is_ptr_treated_as_scalar(btf, t))
> > > >                  return true;
> > > > ?
> > 
> > Uh-oh, I copy-pasted the wrong snippet, sorry.
> > The correct snippet is:
> > 
> >          if (btf_type_is_struct_ptr(btf, t))
> >                   return true;
> > 
> > With it the selftests pass (except for `float` tests noted earlier).
> > And regardless of selftests, the code below this point will
> > error out if `t` is not a pointer to struct.
> 
> I think you tested with
> 
> 	if (!btf_type_is_struct_ptr(btf, t))
> 		return true;
> 
> I decided on a narrower condition, as
> 
> - if (!btf_type_is_struct_ptr(btf, t)) -

Yes, sorry again.

> changes the existing selection condition from "treat only these types
> as scalar" to "treat as scalar any type that is not a pointer to
> structure". Technically both approaches cover the problem I'm trying
> to solve - multilevel pointer support for structures, but the latter is
> open-ended and changes the current approach, which checks for pointers
> to int and void. So I'm extending this to int, void, enum 32/64,
> function, and corresponding multilevel pointers to these types and
> multilevel pointers to structures.

BTF is defined for the following non-modifier types:
- void        [allowed already]
- int         [allowed already]
- ptr         [multi-level pointers allowed by your patch]
- array       [disallowed?]
- struct      [single-level pointers allowed already,
- union        multi-level allowed by your patch]
- enum/enum64 [allowed by your patch]
- func_proto  [allowed by your patch]
- float       [disallowed]

And a few not reachable from function fields (I think BTF validation
checks that these can't be met, but would be good to double-check.
If it doesn't, it should):
- func
- var
- datasec

So, effectively you disallow reading from tracing context fields of
type: struct (non-pointer), array, float and a few types that can't be
specified for struct fields.

Does not seem necessary, tbh.

> It seems - if (!btf_type_is_struct_ptr(btf, t)) - works, but it's
> challenging to strictly prove it's sufficiently future-proof.
> 
> > > This reflects my belief in a cautious approach: adding support
> > > only for selected types with tests added for each new type. That said,
> > > I can add the suggested broader condition and make it pass the tests,
> > but I cannot be sure it will be future-proof against conflicts.
> > > 
> > > I think the broader check like
> > > 
> > >       /* skip modifiers */
> > >       tt = t;
> > >       while (btf_type_is_modifier(tt))
> > >               tt = btf_type_by_id(btf, tt->type);
> > >       if (!btf_type_is_struct(tt))
> > >               return true;
> > 
> > btf_type_is_struct_ptr() is almost identical to the snippet above.
> > 
> > > might have some incompatibility with future changes, compared to
> > > explicit type checks for selected types. This condition is
> > > open-ended, including anything instead of selecting specific types.
> > 
> > What potential incompatibility do you expect?
> > Two things change:
> > - types other then `struct foo *` or `int` can be read:
> >   - do you expect we would want to deny reading some ctx
> >     fields in the future?
> > - the value read is marked as scalar:
> >   - not much can be done with a scalar, except for leaking it to
> >     e.g. some map or ring buffer. Do you expect this to problematic?
> > 
> > Note that the above are selected based on type, not on the
> > function/parameter combination, which is already not a very effective
> > filter if some parameters need to be hidden.
> 
> I do not think any of these represent a real problem. As I said,
> my approach is based mostly on narrowing the supported types to
> reduce potential conflicts.
> 
> I do not have a good example of such conflicts.
> The added tests for pointer to float, which failed with -
> if (!btf_type_is_struct_ptr(btf, t)) - might be an example when adding
> a new type might silently pass this check because of missing tests.

Yes, but that does not really matter if the verifier treats floats as
unbound scalars.

> I  was not able to convince myself a conflict will not  happen.
> 
> That said, changing
> 
> 	if (is_ptr_treated_as_scalar(btf, t))
> 		return true;
> 
> 	to
> 
> 	if (!btf_type_is_struct_ptr(btf, t))
> 		return true;
> 
> just makes the scope of these changes wider. This was
> my initial approach to this problem, but I was worried
> by its wide scope.

Let's see what Alexei would say, but I'd say there is no need to
complicate things w/o clear necessity.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH bpf-next v4 1/2] bpf: Support new pointer param types via SCALAR_VALUE for trampolines
  2026-03-04  0:38           ` Eduard Zingerman
@ 2026-03-10 12:16             ` Slava Imameev
  2026-03-10 18:52               ` Eduard Zingerman
  0 siblings, 1 reply; 15+ messages in thread
From: Slava Imameev @ 2026-03-10 12:16 UTC (permalink / raw)
  To: eddyz87
  Cc: andrii, ast, bpf, daniel, davem, edumazet, haoluo, horms,
	john.fastabend, jolsa, kpsingh, kuba, linux-kernel,
	linux-kselftest, linux-open-source, martin.lau, netdev, pabeni,
	sdf, shuah, slava.imameev, song, yonghong.song

On Tue, 03 Mar 2026 16:38:57 -0800, Eduard Zingerman wrote:
> On Wed, 2026-03-04 at 11:22 +1100, Slava Imameev wrote:
> > On Tue, 03 Mar 2026 14:43:01, Eduard Zingerman wrote:
> > > On Wed, 2026-03-04 at 08:49 +1100, Slava Imameev wrote:
> > > > On 2026-03-03 20:05 UTC, Eduard Zingerman wrote:
> > > > 
> > > > > > @@ -6902,11 +6921,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
> > > > > >               }
> > > > > >       }
> > > > > > 
> > > > > > -     /*
> > > > > > -      * If it's a pointer to void, it's the same as scalar from the verifier
> > > > > > -      * safety POV. Either way, no futher pointer walking is allowed.
> > > > > > -      */
> > > > > > -     if (is_void_or_int_ptr(btf, t))
> > > > > > +     if (is_ptr_treated_as_scalar(btf, t))
> > > > > >               return true;
> > > > > 
> > > > > I'm probably missing a point here, but what's wrong with Alexei's
> > > > > suggestion to do this instead:
> > > > > 
> > > > >         if (is_ptr_treated_as_scalar(btf, t))
> > > > >                  return true;
> > > > > ?
> > > 
> > > Uh-oh, I copy-pasted the wrong snippet, sorry.
> > > The correct snippet is:
> > > 
> > >          if (btf_type_is_struct_ptr(btf, t))
> > >                   return true;
> > > 
> > > With it the selftests pass (except for `float` tests noted earlier).
> > > And regardless of selftests, the code below this point will
> > > error out if `t` is not a pointer to struct.
> > 
> > I think you tested with
> > 
> > 	if (!btf_type_is_struct_ptr(btf, t))
> > 		return true;
> > 
> > I decided on a narrower condition, as
> > 
> > - if (!btf_type_is_struct_ptr(btf, t)) -
> 
> Yes, sorry again.
> 
> > changes the existing selection condition from "treat only these types
> > as scalar" to "treat as scalar any type that is not a pointer to
> > structure". Technically both approaches cover the problem I'm trying
> > to solve - multilevel pointer support for structures, but the latter is
> > open-ended and changes the current approach, which checks for pointers
> > to int and void. So I'm extending this to int, void, enum 32/64,
> > function, and corresponding multilevel pointers to these types and
> > multilevel pointers to structures.
> 
> BTF is defined for the following non-modifier types:
> - void        [allowed already]
> - int         [allowed already]
> - ptr         [multi-level pointers allowed by your patch]
> - array       [disallowed?]
> - struct      [single level pointers allowed already,
> - union		   multi-level allowed by your patch]
> - enum/enum64 [allowed by your patch]
> - func_proto  [allowed by your patch]
> - float       [disallowed]
> 
> And a few not reachable from function fields (I think BTF validation
> checks that these can't be met, but would be good to double-check.
> If it doesn't, it should):
> - func
> - var
> - datasec
> 
> So, effectively you disallow reading from tracing context fields of
> type: struct (non-pointer), array, float and a few types that can't be
> specified for struct fields.
> 
> Does not seem necessary, tbh.

I verified whether PTR->FUNC, PTR->DATASEC, PTR->VAR can be passed to
btf_ctx_access() in the current mainline.

I added helpers that inject PTR->FUNC, PTR->DATASEC, and PTR->VAR as
pre and post calls to btf_check_meta(). In all cases, the BPF program
load failed with errors "arg0 type FUNC / DATASEC / VAR is not a
struct", which indicates that btf_ctx_access() can indeed be called
with PTR->FUNC, PTR->DATASEC, and PTR->VAR.

If the condition for pointer check is changed to
`if (!btf_type_is_struct_ptr(btf, t))`, these BPF programs will load
successfully with arguments set to scalar().

Do we accept this change in behavior?

Test case with invalid BTF types injection:
https://github.com/slava-at-cs/bpf/commit/c49af6500ace4e4aceee01c570e3b067aae7e48c

Branch:
https://github.com/slava-at-cs/bpf/commits/inject-invalid-btf/

To run test:
./vmtest.sh -- ./test_progs -t verifier_btf_ctx_access

The verifier log:
=============
0: R1=ctx() R10=fp0
; asm volatile ("					\ @ verifier_btf_ctx_access.c:85
0: (79) r2 = *(u64 *)(r1 +0)
func 'bpf_fentry_test_invalid_ptr_func' arg0 type FUNC is not a struct
invalid bpf_context access off=0 size=8
processed 1 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
=============

=============
0: R1=ctx() R10=fp0
; asm volatile ("					\ @ verifier_btf_ctx_access.c:85
0: (79) r2 = *(u64 *)(r1 +0)
func 'bpf_fentry_test_invalid_ptr_func' arg0 type DATASEC is not a struct
invalid bpf_context access off=0 size=8
processed 1 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
=============

=============
0: R1=ctx() R10=fp0
; asm volatile ("					\ @ verifier_btf_ctx_access.c:85
0: (79) r2 = *(u64 *)(r1 +0)
func 'bpf_fentry_test_invalid_ptr_func' arg0 type VAR is not a struct
invalid bpf_context access off=0 size=8
processed 1 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
=============

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH bpf-next v4 1/2] bpf: Support new pointer param types via SCALAR_VALUE for trampolines
  2026-03-10 12:16             ` Slava Imameev
@ 2026-03-10 18:52               ` Eduard Zingerman
  2026-03-11 13:07                 ` Slava Imameev
  0 siblings, 1 reply; 15+ messages in thread
From: Eduard Zingerman @ 2026-03-10 18:52 UTC (permalink / raw)
  To: Slava Imameev
  Cc: andrii, ast, bpf, daniel, davem, edumazet, haoluo, horms,
	john.fastabend, jolsa, kpsingh, kuba, linux-kernel,
	linux-kselftest, linux-open-source, martin.lau, netdev, pabeni,
	sdf, shuah, song, yonghong.song

On Tue, 2026-03-10 at 23:16 +1100, Slava Imameev wrote:

[...]

> I verified whether PTR->FUNC, PTR->DATASEC, PTR->VAR can be passed to
> btf_ctx_access() in the current mainline.
> 
> I added helpers that inject PTR->FUNC, PTR->DATASEC, PTR->VAR as pre or
> post calls to btf_check_meta(). In all cases, the BPF program load
> failed with errors "arg0 type FUNC / DATASEC / VAR is not a struct",
> which indicates that btf_check_meta() can indeed be called with
> PTR->FUNC, PTR->DATASEC, PTR->VAR.
> 
> If the condition for pointer check is changed to
> `if (!btf_type_is_struct_ptr(btf, t))`, these BPF programs will load
> successfully with arguments set to scalar().
> 
> Do we accept this change in behavior?

Kernel validates BTF before loading, see kernel/bpf/btf.c:btf_resolve().
Validation is applied to kernel, module and program-level BTF.
Does BTF containing PTR->DATASEC and PTR->VAR pass validation?
If it does, validation should be updated to reject such cases.
For PTR->FUNC, which one is legit PTR->FUNC or PTR->FUNC_PROTO?
The legit one should be allowed and invalid should be rejected
at validation phase.

You can craft invalid BTF as in the following selftest:
tools/testing/selftests/bpf/prog_tests/btf.c.

[...]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH bpf-next v4 1/2] bpf: Support new pointer param types via SCALAR_VALUE for trampolines
  2026-03-10 18:52               ` Eduard Zingerman
@ 2026-03-11 13:07                 ` Slava Imameev
  2026-03-11 16:31                   ` Eduard Zingerman
  0 siblings, 1 reply; 15+ messages in thread
From: Slava Imameev @ 2026-03-11 13:07 UTC (permalink / raw)
  To: eddyz87
  Cc: andrii, ast, bpf, daniel, davem, edumazet, haoluo, horms,
	john.fastabend, jolsa, kpsingh, kuba, linux-kernel,
	linux-kselftest, linux-open-source, martin.lau, netdev, pabeni,
	sdf, shuah, slava.imameev, song, yonghong.song

Tue, 10 Mar 2026 11:52:10 -0700, Eduard Zingerman wrote:
> [...]
> 
> > I verified whether PTR->FUNC, PTR->DATASEC, PTR->VAR can be passed to
> > btf_ctx_access() in the current mainline.
> > 
> > I added helpers that inject PTR->FUNC, PTR->DATASEC, PTR->VAR as pre or
> > post calls to btf_check_meta(). In all cases, the BPF program load
> > failed with errors "arg0 type FUNC / DATASEC / VAR is not a struct",
> > which indicates that btf_check_meta() can indeed be called with
> > PTR->FUNC, PTR->DATASEC, PTR->VAR.
> > 
> > If the condition for pointer check is changed to
> > `if (!btf_type_is_struct_ptr(btf, t))`, these BPF programs will load
> > successfully with arguments set to scalar().
> > 
> > Do we accept this change in behavior?
> 
> Kernel validates BTF before loading, see kernel/bpf/btf.c:btf_resolve().
> Validation is applied to kernel, module and program-level BTF.
> Does BTF containing PTR->DATASEC and PTR->VAR pass validation?
> If it does, validation should be updated to reject such cases.
> For PTR->FUNC, which one is legit PTR->FUNC or PTR->FUNC_PROTO?
> The legit one should be allowed and invalid should be rejected
> at validation phase.
> 
> You can craft invalid BTF as in the following selftest:
> tools/testing/selftests/bpf/prog_tests/btf.c.
> 
> [...]

Invalid BTF pointer types PTR->DATASEC, PTR->FUNC, PTR->VAR are
rejected by btf_ptr_resolve(), which is called through the sequence
btf_check_all_types()->btf_resolve()->btf_ptr_resolve().

PTR->FUNC_PROTO is a valid type.

vmlinux BTF is processed by btf_parse_vmlinux(), which calls
btf_parse_base(). Since btf_parse_base() doesn't call
btf_check_all_types(), it is possible for btf_ctx_access() to be
invoked with PTR->DATASEC, PTR->FUNC, PTR->VAR in the case of
vmlinux BTF.

Module and program BTF are processed by btf_parse(), which calls
btf_check_all_types() and detects invalid pointer types, so
btf_ctx_access() cannot see PTR->DATASEC, PTR->FUNC, PTR->VAR in
these cases.

If btf_check_all_types() is added to btf_parse_base(), invalid
pointer types get detected inside btf_parse_vmlinux(), resulting in
failure to process vmlinux BTF and effectively disabling BPF. libbpf
then reports:

  libbpf: Error in bpf_object__probe_loading(): -EINVAL. Couldn't load
  trivial BPF program. Make sure your kernel supports BPF
  (CONFIG_BPF_SYSCALL=y) and/or that RLIMIT_MEMLOCK is set to big
  enough value.

If vmlinux BTF is trusted not to contain invalid types like
PTR->DATASEC, PTR->FUNC, PTR->VAR, which seems reasonable, we can
conclude that btf_ctx_access will never observe these types.

Adopting the view that vmlinux BTF is consistent, we can change
btf_ctx_access()'s condition for inferring scalar() for pointers
from "if (is_void_or_int_ptr(btf, t))" to
"if (!btf_type_is_struct_ptr(btf, t))".

Otherwise, we would need either to validate types in vmlinux BTF as
well, incurring additional cost, or to use explicit checks for the
pointer types that btf_ctx_access() may infer as scalar, thereby
excluding invalid types. The latter would reject invalid pointer
types with an error like "func 'bpf_fentry_test_invalid_ptr_func'
arg0 type FUNC is not a struct".


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH bpf-next v4 1/2] bpf: Support new pointer param types via SCALAR_VALUE for trampolines
  2026-03-11 13:07                 ` Slava Imameev
@ 2026-03-11 16:31                   ` Eduard Zingerman
  0 siblings, 0 replies; 15+ messages in thread
From: Eduard Zingerman @ 2026-03-11 16:31 UTC (permalink / raw)
  To: Slava Imameev
  Cc: andrii, ast, bpf, daniel, davem, edumazet, haoluo, horms,
	john.fastabend, jolsa, kpsingh, kuba, linux-kernel,
	linux-kselftest, linux-open-source, martin.lau, netdev, pabeni,
	sdf, shuah, song, yonghong.song

On Thu, 2026-03-12 at 00:07 +1100, Slava Imameev wrote:

[...]

> Adopting the view that vmlinux BTF is consistent, we can replace
> btf_ctx_access's condition for inferring scalar() for pointers
> from "if (is_void_or_int_ptr(btf, t))" to
> "if (!btf_type_is_struct_ptr(btf, t))".

I think it should be fine to adopt this view.

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2026-03-11 16:31 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-03  9:54 [PATCH bpf-next v4 0/2] bpf: Add multi-level pointer parameter support for trampolines Slava Imameev
2026-03-03  9:54 ` [PATCH bpf-next v4 1/2] bpf: Support new pointer param types via SCALAR_VALUE " Slava Imameev
2026-03-03 20:05   ` Eduard Zingerman
2026-03-03 21:49     ` Slava Imameev
2026-03-03 22:43       ` Eduard Zingerman
2026-03-04  0:22         ` Slava Imameev
2026-03-04  0:36           ` Alexei Starovoitov
2026-03-04  0:38           ` Eduard Zingerman
2026-03-10 12:16             ` Slava Imameev
2026-03-10 18:52               ` Eduard Zingerman
2026-03-11 13:07                 ` Slava Imameev
2026-03-11 16:31                   ` Eduard Zingerman
2026-03-03  9:54 ` [PATCH bpf-next v4 2/2] selftests/bpf: Add trampolines single and multi-level pointer params test coverage Slava Imameev
2026-03-03 20:08   ` Eduard Zingerman
2026-03-03 22:14     ` Slava Imameev

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox