public inbox for netdev@vger.kernel.org
* [RFC PATCH bpf-next v2 00/11] Dynptr cleanup and bugfixes
@ 2026-03-07  6:44 Amery Hung
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 01/11] bpf: Set kfunc dynptr arg type flag based on prototype Amery Hung
                   ` (11 more replies)
  0 siblings, 12 replies; 46+ messages in thread
From: Amery Hung @ 2026-03-07  6:44 UTC (permalink / raw)
  To: bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
	ameryhung, kernel-team

This patchset (1) cleans up dynptr handling, (2) refactors object parent-
child relationship tracking to make it more precise, and (3) fixes a
dynptr UAF bug caused by a missing link in the verifier between a dynptr
and its parent referenced object.

This patchset makes dynptrs track their parent objects. In bpf qdisc
programs, an skb may be freed through kfuncs. However, since a dynptr
currently does not track its parent referenced object (e.g., the skb),
the verifier will not invalidate the dynptr after the skb is freed,
resulting in a use-after-free. A similar issue also affects file
dynptrs. To solve the issue, we need to track the parent skb in the
derived dynptr and slices.

However, we first need to refactor the verifier's object tracking
mechanism, because id and ref_obj_id cannot easily express more than a
simple object relationship. To illustrate this, consider the example
shown in the figure below.

Before: object (id,ref_obj_id,dynptr_id)
  id         = id of the object (used for nullness tracking)
  ref_obj_id = id of the underlying referenced object (used for lifetime
               tracking)
  dynptr_id  = id of the parent dynptr of the slice (used for tracking
               parent dynptr, only for PTR_TO_MEM)

                      skb (0,1,0)
                             ^ (try to link dynptr to parent ref_obj_id)
                             +-------------------------------+
                             |           bpf_dynptr_clone    |
                 dynptr A (2,1,0)                dynptr C (4,1,0)
                           ^                               ^
        bpf_dynptr_slice   |                               |
                           |                               |
              slice B (3,1,2)                 slice D (5,1,4)
                         ^
    bpf_dynptr_from_mem  |
    (NOT allowed yet)    |
             dynptr E (6,1,0)

Let's first try to fix the bug by letting the dynptr track the parent
skb using ref_obj_id and propagating the ref_obj_id to slices, so that
when the skb goes away the derived dynptrs and slices are also
invalidated. However, if dynptr A is destroyed by overwriting the stack
slot, release_reference(ref_obj_id=1) would be called and all nodes
would be invalidated. The correct handling should leave the skb,
dynptr C, and slice D intact, since a non-referenced dynptr clone's
lifetime does not need to be tied to the original dynptr. This was not
a problem before, since a dynptr created from an skb has ref_obj_id = 0.
If we start allowing dynptrs to be created from slices in the future,
the current design also cannot correctly handle the removal of
dynptr E: all objects would be incorrectly invalidated instead of only
the children of dynptr E. While it is possible to solve the issue by
adding more specialized handling in the dynptr paths [0], it creates
more complexity.

To track precise object relationships in a simpler way, a u32 parent_id
is added to bpf_reg_state to track the parent object. This replaces the
PTR_TO_MEM-specific dynptr_id. Therefore, for dynptr A, since it is a
non-referenced dynptr, its ref_obj_id is set to 0 and its parent_id is
set to 1 to track the id of the skb. Note that this does not grow
bpf_reg_state on 64-bit machines, as there is a 7-byte padding.

After: object (id,ref_obj_id,parent_id)
  id         = id of the object (used for nullness tracking)
  ref_obj_id = id of the referenced object; objects with same ref_obj_id
               have the same lifetime (used for lifetime tracking)
  parent_id  = id of the parent object; points to id (used for object
               relationship tracking)

(1) Non-referenced dynptr with referenced parent (e.g., skb in Qdisc):

                          skb (1,1,0)
                               ^
          bpf_dynptr_from_skb  +-------------------------------+
                               |      bpf_dynptr_clone(A, C)   |
                 dynptr A (2,0,1)               dynptr C (4,0,1)
                           ^                              ^
        bpf_dynptr_slice   |                              |
                           |                              |
              slice B (3,0,2)                slice D (5,0,4)
                       ^
  bpf_dynptr_from_mem  |
  (NOT allowed yet)    |
         dynptr E (6,0,3)

The figures below show how the new design works for the different
referenced/non-referenced dynptr + referenced/non-referenced parent
combinations. The relationship between slices and dynptrs is omitted
as it stays the same. The main difference is how cloned dynptrs are
represented. Since bpf_dynptr_clone() does not initialize a new
dynptr, a clone of a referenced dynptr cannot function once the
original or any of the clones is invalidated. To represent this, they
share the same ref_obj_id. For a non-referenced dynptr, the original
and the clones can live independently.


(2) Non-referenced dynptr with non-referenced parent (e.g., skb in TC,
    always valid):

      bpf_dynptr_from_skb
                                  bpf_dynptr_clone(A, C)
             dynptr A (1,0,0)                  dynptr C (2,0,0)

                         dynptr A and C live independently

(3) Referenced dynptr with referenced parent:

                     file (1,1,0)
                           ^ ^
     bpf_dynptr_from_file  | +-------------------------------+
                           |       bpf_dynptr_clone(A, C)    |
             dynptr A (2,3,1)                  dynptr C (4,3,1)
                         ^                                 ^
                         |                                 |
                         dynptr A and C have the same lifetime


(4) Referenced dynptr with non-referenced parent:

 bpf_ringbuf_reserve_dynptr  
                                  bpf_dynptr_clone(A, C)
             dynptr A (1,1,0)                  dynptr C (2,1,0)
                         ^                                 ^
                         |                                 |
                         dynptr A and C have the same lifetime


I also tried folding id and ref_obj_id into a single id and using
ref_obj_id to track the parent [1]. This design was not able to express
the relationship between a referenced sk pointer and a cast referenced
sk pointer. The two objects need two ids to express that they have the
same lifetime but different nullness.

Referenced socket pointer:

                                C = ptr_casting_function(A)
                ptr A (1,1,0)                     ptr C (2,1,0)
                         ^                                 ^
                         |                                 |
                        ptr C may be NULL even if ptr A is valid
			but they have the same lifetime


To avoid the recursive call chain of release_reference() ->
unmark_stack_slots_dynptr(), release_reference() now uses a
stack-based DFS to find and invalidate registers and stack slots
containing the to-be-released id/ref_obj_id and all dependent ids
whose parent_id matches the id. Currently, it skips id == 0, which
may however be a valid id (e.g., a pkt pointer obtained by reading
ctx). Future work may start giving these a nonzero id. This does not
affect the current use case, where skb and file are both given an
id > 0.

[0] https://lore.kernel.org/bpf/20250414161443.1146103-2-memxor@gmail.com/
[1] https://github.com/ameryhung/bpf/commits/obj_relationship_v2_no_parent_id/ 


Changelog:

v1 -> v2
  - Redesign: Use object (id, ref_obj_id, parent_id) instead of
    (id, ref_obj_id), as the latter cannot express pointer casting
    without introducing specialized code to handle the case
  - Use stack-based DFS to release objects to avoid recursion (Andrii)
  - Keep reg->id after null check
  - Add dynptr cleanup
  - Fix dynptr kfunc arg type determination
  - Add a file dynptr UAF selftest
  Link: https://lore.kernel.org/bpf/20260202214817.2853236-1-ameryhung@gmail.com/

---

Amery Hung (11):
  bpf: Set kfunc dynptr arg type flag based on prototype
  selftests/bpf: Test passing CONST_PTR_TO_DYNPTR to kfunc that may
    mutate dynptr
  bpf: Unify dynptr handling in the verifier
  bpf: Assign reg->id when getting referenced kptr from ctx
  bpf: Preserve reg->id of pointer objects after null-check
  bpf: Refactor object relationship tracking and fix dynptr UAF bug
  bpf: Remove redundant dynptr arg check for helper
  selftests/bpf: Test creating dynptr from dynptr data and slice
  selftests/bpf: Test using dynptr after freeing the underlying object
  selftests/bpf: Test using slice after invalidating dynptr clone
  selftests/bpf: Test using file dynptr after the reference on file is
    dropped

 fs/verity/measure.c                           |   2 +-
 include/linux/bpf.h                           |   8 +-
 include/linux/bpf_verifier.h                  |  14 +-
 kernel/bpf/helpers.c                          |  10 +-
 kernel/bpf/log.c                              |   4 +-
 kernel/bpf/verifier.c                         | 496 +++++++-----------
 kernel/trace/bpf_trace.c                      |  18 +-
 tools/testing/selftests/bpf/bpf_kfuncs.h      |   6 +-
 .../selftests/bpf/prog_tests/bpf_qdisc.c      |  50 ++
 .../bpf/progs/bpf_qdisc_dynptr_clone.c        |  69 +++
 .../progs/bpf_qdisc_fail__invalid_dynptr.c    |  62 +++
 ...f_qdisc_fail__invalid_dynptr_cross_frame.c |  68 +++
 .../bpf_qdisc_fail__invalid_dynptr_slice.c    |  64 +++
 .../testing/selftests/bpf/progs/dynptr_fail.c |  85 ++-
 .../selftests/bpf/progs/dynptr_success.c      |   6 +-
 .../selftests/bpf/progs/file_reader_fail.c    |  60 +++
 .../bpf/progs/test_kfunc_dynptr_param.c       |   9 +-
 .../selftests/bpf/progs/user_ringbuf_fail.c   |   4 +-
 18 files changed, 684 insertions(+), 351 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/progs/bpf_qdisc_dynptr_clone.c
 create mode 100644 tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr.c
 create mode 100644 tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_cross_frame.c
 create mode 100644 tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_slice.c

-- 
2.47.3



* [RFC PATCH bpf-next v2 01/11] bpf: Set kfunc dynptr arg type flag based on prototype
  2026-03-07  6:44 [RFC PATCH bpf-next v2 00/11] Dynptr cleanup and bugfixes Amery Hung
@ 2026-03-07  6:44 ` Amery Hung
  2026-03-11 14:47   ` Mykyta Yatsenko
                     ` (2 more replies)
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 02/11] selftests/bpf: Test passing CONST_PTR_TO_DYNPTR to kfunc that may mutate dynptr Amery Hung
                   ` (10 subsequent siblings)
  11 siblings, 3 replies; 46+ messages in thread
From: Amery Hung @ 2026-03-07  6:44 UTC (permalink / raw)
  To: bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
	ameryhung, kernel-team

The verifier should decide whether a dynptr argument is read-only
based on whether the argument type is "const struct bpf_dynptr *", not
on the type of the register passed to the kfunc. This currently does
not cause issues because the existing kfuncs that mutate struct
bpf_dynptr are constructors (e.g., bpf_dynptr_from_xxx and
bpf_dynptr_clone). These kfuncs have an additional check in
process_dynptr_func() to make sure the stack slot does not contain an
initialized dynptr. Nonetheless, this should still be fixed to avoid
future issues when a non-constructor dynptr kfunc that mutates a
dynptr is added. It is also a small step toward unifying kfunc and
helper handling in the verifier, where the first step is to generate a
kfunc prototype similar to bpf_func_proto before the main verification
loop.

We also need to correctly mark some kfunc arguments as "const struct
bpf_dynptr *" to align with other kfuncs that take a non-mutable
dynptr argument and to not break their usage. Adding the const
qualifier does not break backward compatibility.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
 fs/verity/measure.c                            |  2 +-
 include/linux/bpf.h                            |  8 ++++----
 kernel/bpf/helpers.c                           | 10 +++++-----
 kernel/bpf/verifier.c                          | 18 +++++++++++++++++-
 kernel/trace/bpf_trace.c                       | 18 +++++++++---------
 tools/testing/selftests/bpf/bpf_kfuncs.h       |  6 +++---
 .../selftests/bpf/progs/dynptr_success.c       |  6 +++---
 .../bpf/progs/test_kfunc_dynptr_param.c        |  7 +------
 8 files changed, 43 insertions(+), 32 deletions(-)

diff --git a/fs/verity/measure.c b/fs/verity/measure.c
index 6a35623ebdf0..3840436e4510 100644
--- a/fs/verity/measure.c
+++ b/fs/verity/measure.c
@@ -118,7 +118,7 @@ __bpf_kfunc_start_defs();
  *
  * Return: 0 on success, a negative value on error.
  */
-__bpf_kfunc int bpf_get_fsverity_digest(struct file *file, struct bpf_dynptr *digest_p)
+__bpf_kfunc int bpf_get_fsverity_digest(struct file *file, const struct bpf_dynptr *digest_p)
 {
 	struct bpf_dynptr_kern *digest_ptr = (struct bpf_dynptr_kern *)digest_p;
 	const struct inode *inode = file_inode(file);
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index b78b53198a2e..946a37b951f7 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -3621,8 +3621,8 @@ static inline int bpf_fd_reuseport_array_update_elem(struct bpf_map *map,
 struct bpf_key *bpf_lookup_user_key(s32 serial, u64 flags);
 struct bpf_key *bpf_lookup_system_key(u64 id);
 void bpf_key_put(struct bpf_key *bkey);
-int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_p,
-			       struct bpf_dynptr *sig_p,
+int bpf_verify_pkcs7_signature(const struct bpf_dynptr *data_p,
+			       const struct bpf_dynptr *sig_p,
 			       struct bpf_key *trusted_keyring);
 
 #else
@@ -3640,8 +3640,8 @@ static inline void bpf_key_put(struct bpf_key *bkey)
 {
 }
 
-static inline int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_p,
-					     struct bpf_dynptr *sig_p,
+static inline int bpf_verify_pkcs7_signature(const struct bpf_dynptr *data_p,
+					     const struct bpf_dynptr *sig_p,
 					     struct bpf_key *trusted_keyring)
 {
 	return -EOPNOTSUPP;
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 6eb6c82ed2ee..3d44896587ac 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -3000,8 +3000,8 @@ __bpf_kfunc int bpf_dynptr_clone(const struct bpf_dynptr *p,
  * Copies data from source dynptr to destination dynptr.
  * Returns 0 on success; negative error, otherwise.
  */
-__bpf_kfunc int bpf_dynptr_copy(struct bpf_dynptr *dst_ptr, u64 dst_off,
-				struct bpf_dynptr *src_ptr, u64 src_off, u64 size)
+__bpf_kfunc int bpf_dynptr_copy(const struct bpf_dynptr *dst_ptr, u64 dst_off,
+				const struct bpf_dynptr *src_ptr, u64 src_off, u64 size)
 {
 	struct bpf_dynptr_kern *dst = (struct bpf_dynptr_kern *)dst_ptr;
 	struct bpf_dynptr_kern *src = (struct bpf_dynptr_kern *)src_ptr;
@@ -3055,7 +3055,7 @@ __bpf_kfunc int bpf_dynptr_copy(struct bpf_dynptr *dst_ptr, u64 dst_off,
  * at @offset with the constant byte @val.
  * Returns 0 on success; negative error, otherwise.
  */
-__bpf_kfunc int bpf_dynptr_memset(struct bpf_dynptr *p, u64 offset, u64 size, u8 val)
+__bpf_kfunc int bpf_dynptr_memset(const struct bpf_dynptr *p, u64 offset, u64 size, u8 val)
 {
 	struct bpf_dynptr_kern *ptr = (struct bpf_dynptr_kern *)p;
 	u64 chunk_sz, write_off;
@@ -4069,8 +4069,8 @@ __bpf_kfunc void bpf_key_put(struct bpf_key *bkey)
  *
  * Return: 0 on success, a negative value on error.
  */
-__bpf_kfunc int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_p,
-			       struct bpf_dynptr *sig_p,
+__bpf_kfunc int bpf_verify_pkcs7_signature(const struct bpf_dynptr *data_p,
+			       const struct bpf_dynptr *sig_p,
 			       struct bpf_key *trusted_keyring)
 {
 #ifdef CONFIG_SYSTEM_DATA_VERIFICATION
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1153a828ce8d..0f77c4c5b510 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12276,6 +12276,22 @@ static bool is_kfunc_arg_dynptr(const struct btf *btf, const struct btf_param *a
 	return __is_kfunc_ptr_arg_type(btf, arg, KF_ARG_DYNPTR_ID);
 }
 
+static bool is_kfunc_arg_const_ptr(const struct btf *btf, const struct btf_param *arg)
+{
+	const struct btf_type *t, *resolved_t;
+
+	t = btf_type_skip_modifiers(btf, arg->type, NULL);
+	if (!t || !btf_type_is_ptr(t))
+		return false;
+
+	resolved_t = btf_type_skip_modifiers(btf, t->type, NULL);
+	for (; t != resolved_t; t = btf_type_by_id(btf, t->type))
+		if (BTF_INFO_KIND(t->info) == BTF_KIND_CONST)
+			return true;
+
+	return false;
+}
+
 static bool is_kfunc_arg_list_head(const struct btf *btf, const struct btf_param *arg)
 {
 	return __is_kfunc_ptr_arg_type(btf, arg, KF_ARG_LIST_HEAD_ID);
@@ -13509,7 +13525,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 			enum bpf_arg_type dynptr_arg_type = ARG_PTR_TO_DYNPTR;
 			int clone_ref_obj_id = 0;
 
-			if (reg->type == CONST_PTR_TO_DYNPTR)
+			if (is_kfunc_arg_const_ptr(btf, &args[i]))
 				dynptr_arg_type |= MEM_RDONLY;
 
 			if (is_kfunc_arg_uninit(btf, &args[i]))
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 9bc0dfd235af..127c317376be 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -3391,7 +3391,7 @@ typedef int (*copy_fn_t)(void *dst, const void *src, u32 size, struct task_struc
  * direct calls into all the specific callback implementations
  * (copy_user_data_sleepable, copy_user_data_nofault, and so on)
  */
-static __always_inline int __bpf_dynptr_copy_str(struct bpf_dynptr *dptr, u64 doff, u64 size,
+static __always_inline int __bpf_dynptr_copy_str(const struct bpf_dynptr *dptr, u64 doff, u64 size,
 						 const void *unsafe_src,
 						 copy_fn_t str_copy_fn,
 						 struct task_struct *tsk)
@@ -3533,49 +3533,49 @@ __bpf_kfunc int bpf_send_signal_task(struct task_struct *task, int sig, enum pid
 	return bpf_send_signal_common(sig, type, task, value);
 }
 
-__bpf_kfunc int bpf_probe_read_user_dynptr(struct bpf_dynptr *dptr, u64 off,
+__bpf_kfunc int bpf_probe_read_user_dynptr(const struct bpf_dynptr *dptr, u64 off,
 					   u64 size, const void __user *unsafe_ptr__ign)
 {
 	return __bpf_dynptr_copy(dptr, off, size, (const void __force *)unsafe_ptr__ign,
 				 copy_user_data_nofault, NULL);
 }
 
-__bpf_kfunc int bpf_probe_read_kernel_dynptr(struct bpf_dynptr *dptr, u64 off,
+__bpf_kfunc int bpf_probe_read_kernel_dynptr(const struct bpf_dynptr *dptr, u64 off,
 					     u64 size, const void *unsafe_ptr__ign)
 {
 	return __bpf_dynptr_copy(dptr, off, size, unsafe_ptr__ign,
 				 copy_kernel_data_nofault, NULL);
 }
 
-__bpf_kfunc int bpf_probe_read_user_str_dynptr(struct bpf_dynptr *dptr, u64 off,
+__bpf_kfunc int bpf_probe_read_user_str_dynptr(const struct bpf_dynptr *dptr, u64 off,
 					       u64 size, const void __user *unsafe_ptr__ign)
 {
 	return __bpf_dynptr_copy_str(dptr, off, size, (const void __force *)unsafe_ptr__ign,
 				     copy_user_str_nofault, NULL);
 }
 
-__bpf_kfunc int bpf_probe_read_kernel_str_dynptr(struct bpf_dynptr *dptr, u64 off,
+__bpf_kfunc int bpf_probe_read_kernel_str_dynptr(const struct bpf_dynptr *dptr, u64 off,
 						 u64 size, const void *unsafe_ptr__ign)
 {
 	return __bpf_dynptr_copy_str(dptr, off, size, unsafe_ptr__ign,
 				     copy_kernel_str_nofault, NULL);
 }
 
-__bpf_kfunc int bpf_copy_from_user_dynptr(struct bpf_dynptr *dptr, u64 off,
+__bpf_kfunc int bpf_copy_from_user_dynptr(const struct bpf_dynptr *dptr, u64 off,
 					  u64 size, const void __user *unsafe_ptr__ign)
 {
 	return __bpf_dynptr_copy(dptr, off, size, (const void __force *)unsafe_ptr__ign,
 				 copy_user_data_sleepable, NULL);
 }
 
-__bpf_kfunc int bpf_copy_from_user_str_dynptr(struct bpf_dynptr *dptr, u64 off,
+__bpf_kfunc int bpf_copy_from_user_str_dynptr(const struct bpf_dynptr *dptr, u64 off,
 					      u64 size, const void __user *unsafe_ptr__ign)
 {
 	return __bpf_dynptr_copy_str(dptr, off, size, (const void __force *)unsafe_ptr__ign,
 				     copy_user_str_sleepable, NULL);
 }
 
-__bpf_kfunc int bpf_copy_from_user_task_dynptr(struct bpf_dynptr *dptr, u64 off,
+__bpf_kfunc int bpf_copy_from_user_task_dynptr(const struct bpf_dynptr *dptr, u64 off,
 					       u64 size, const void __user *unsafe_ptr__ign,
 					       struct task_struct *tsk)
 {
@@ -3583,7 +3583,7 @@ __bpf_kfunc int bpf_copy_from_user_task_dynptr(struct bpf_dynptr *dptr, u64 off,
 				 copy_user_data_sleepable, tsk);
 }
 
-__bpf_kfunc int bpf_copy_from_user_task_str_dynptr(struct bpf_dynptr *dptr, u64 off,
+__bpf_kfunc int bpf_copy_from_user_task_str_dynptr(const struct bpf_dynptr *dptr, u64 off,
 						   u64 size, const void __user *unsafe_ptr__ign,
 						   struct task_struct *tsk)
 {
diff --git a/tools/testing/selftests/bpf/bpf_kfuncs.h b/tools/testing/selftests/bpf/bpf_kfuncs.h
index 7dad01439391..ffb9bc1cace0 100644
--- a/tools/testing/selftests/bpf/bpf_kfuncs.h
+++ b/tools/testing/selftests/bpf/bpf_kfuncs.h
@@ -70,13 +70,13 @@ extern void *bpf_rdonly_cast(const void *obj, __u32 btf_id) __ksym __weak;
 
 extern int bpf_get_file_xattr(struct file *file, const char *name,
 			      struct bpf_dynptr *value_ptr) __ksym;
-extern int bpf_get_fsverity_digest(struct file *file, struct bpf_dynptr *digest_ptr) __ksym;
+extern int bpf_get_fsverity_digest(struct file *file, const struct bpf_dynptr *digest_ptr) __ksym;
 
 extern struct bpf_key *bpf_lookup_user_key(__s32 serial, __u64 flags) __ksym;
 extern struct bpf_key *bpf_lookup_system_key(__u64 id) __ksym;
 extern void bpf_key_put(struct bpf_key *key) __ksym;
-extern int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_ptr,
-				      struct bpf_dynptr *sig_ptr,
+extern int bpf_verify_pkcs7_signature(const struct bpf_dynptr *data_ptr,
+				      const struct bpf_dynptr *sig_ptr,
 				      struct bpf_key *trusted_keyring) __ksym;
 
 struct dentry;
diff --git a/tools/testing/selftests/bpf/progs/dynptr_success.c b/tools/testing/selftests/bpf/progs/dynptr_success.c
index e0d672d93adf..e0745b6e467e 100644
--- a/tools/testing/selftests/bpf/progs/dynptr_success.c
+++ b/tools/testing/selftests/bpf/progs/dynptr_success.c
@@ -914,7 +914,7 @@ void *user_ptr;
 char expected_str[384];
 __u32 test_len[7] = {0/* placeholder */, 0, 1, 2, 255, 256, 257};
 
-typedef int (*bpf_read_dynptr_fn_t)(struct bpf_dynptr *dptr, u64 off,
+typedef int (*bpf_read_dynptr_fn_t)(const struct bpf_dynptr *dptr, u64 off,
 				    u64 size, const void *unsafe_ptr);
 
 /* Returns the offset just before the end of the maximum sized xdp fragment.
@@ -1106,7 +1106,7 @@ int test_copy_from_user_str_dynptr(void *ctx)
 	return 0;
 }
 
-static int bpf_copy_data_from_user_task(struct bpf_dynptr *dptr, u64 off,
+static int bpf_copy_data_from_user_task(const struct bpf_dynptr *dptr, u64 off,
 					u64 size, const void *unsafe_ptr)
 {
 	struct task_struct *task = bpf_get_current_task_btf();
@@ -1114,7 +1114,7 @@ static int bpf_copy_data_from_user_task(struct bpf_dynptr *dptr, u64 off,
 	return bpf_copy_from_user_task_dynptr(dptr, off, size, unsafe_ptr, task);
 }
 
-static int bpf_copy_data_from_user_task_str(struct bpf_dynptr *dptr, u64 off,
+static int bpf_copy_data_from_user_task_str(const struct bpf_dynptr *dptr, u64 off,
 					    u64 size, const void *unsafe_ptr)
 {
 	struct task_struct *task = bpf_get_current_task_btf();
diff --git a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
index d249113ed657..c3631fd41977 100644
--- a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
+++ b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
@@ -11,12 +11,7 @@
 #include <bpf/bpf_helpers.h>
 #include <bpf/bpf_tracing.h>
 #include "bpf_misc.h"
-
-extern struct bpf_key *bpf_lookup_system_key(__u64 id) __ksym;
-extern void bpf_key_put(struct bpf_key *key) __ksym;
-extern int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_ptr,
-				      struct bpf_dynptr *sig_ptr,
-				      struct bpf_key *trusted_keyring) __ksym;
+#include "bpf_kfuncs.h"
 
 struct {
 	__uint(type, BPF_MAP_TYPE_RINGBUF);
-- 
2.47.3



* [RFC PATCH bpf-next v2 02/11] selftests/bpf: Test passing CONST_PTR_TO_DYNPTR to kfunc that may mutate dynptr
  2026-03-07  6:44 [RFC PATCH bpf-next v2 00/11] Dynptr cleanup and bugfixes Amery Hung
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 01/11] bpf: Set kfunc dynptr arg type flag based on prototype Amery Hung
@ 2026-03-07  6:44 ` Amery Hung
  2026-03-11 15:26   ` Mykyta Yatsenko
  2026-03-16 21:35   ` Eduard Zingerman
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 03/11] bpf: Unify dynptr handling in the verifier Amery Hung
                   ` (9 subsequent siblings)
  11 siblings, 2 replies; 46+ messages in thread
From: Amery Hung @ 2026-03-07  6:44 UTC (permalink / raw)
  To: bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
	ameryhung, kernel-team

Make sure that, for a kfunc that takes a mutable dynptr argument, the
verifier rejects passing CONST_PTR_TO_DYNPTR to it.

Rename struct sample to test_sample to avoid a conflict with the
definition in vmlinux.h.

In test_kfunc_dynptr_param.c, initialize the dynptr to 0 to avoid a
-Wuninitialized-const-pointer warning.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
 .../testing/selftests/bpf/progs/dynptr_fail.c | 37 +++++++++++++++----
 .../bpf/progs/test_kfunc_dynptr_param.c       |  2 +-
 2 files changed, 30 insertions(+), 9 deletions(-)

diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c
index 8f2ae9640886..5e1b1cf4ea8e 100644
--- a/tools/testing/selftests/bpf/progs/dynptr_fail.c
+++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c
@@ -1,15 +1,14 @@
 // SPDX-License-Identifier: GPL-2.0
 /* Copyright (c) 2022 Facebook */
 
+#include <vmlinux.h>
 #include <errno.h>
 #include <string.h>
-#include <stdbool.h>
-#include <linux/bpf.h>
 #include <bpf/bpf_helpers.h>
 #include <bpf/bpf_tracing.h>
-#include <linux/if_ether.h>
 #include "bpf_misc.h"
 #include "bpf_kfuncs.h"
+#include "../test_kmods/bpf_testmod_kfunc.h"
 
 char _license[] SEC("license") = "GPL";
 
@@ -46,7 +45,7 @@ struct {
 	__type(value, __u64);
 } array_map4 SEC(".maps");
 
-struct sample {
+struct test_sample {
 	int pid;
 	long value;
 	char comm[16];
@@ -95,7 +94,7 @@ __failure __msg("Unreleased reference id=4")
 int ringbuf_missing_release2(void *ctx)
 {
 	struct bpf_dynptr ptr1, ptr2;
-	struct sample *sample;
+	struct test_sample *sample;
 
 	bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(*sample), 0, &ptr1);
 	bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(*sample), 0, &ptr2);
@@ -173,7 +172,7 @@ __failure __msg("type=mem expected=ringbuf_mem")
 int ringbuf_invalid_api(void *ctx)
 {
 	struct bpf_dynptr ptr;
-	struct sample *sample;
+	struct test_sample *sample;
 
 	bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(*sample), 0, &ptr);
 	sample = bpf_dynptr_data(&ptr, 0, sizeof(*sample));
@@ -315,7 +314,7 @@ __failure __msg("invalid mem access 'scalar'")
 int data_slice_use_after_release1(void *ctx)
 {
 	struct bpf_dynptr ptr;
-	struct sample *sample;
+	struct test_sample *sample;
 
 	bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(*sample), 0, &ptr);
 	sample = bpf_dynptr_data(&ptr, 0, sizeof(*sample));
@@ -347,7 +346,7 @@ __failure __msg("invalid mem access 'scalar'")
 int data_slice_use_after_release2(void *ctx)
 {
 	struct bpf_dynptr ptr1, ptr2;
-	struct sample *sample;
+	struct test_sample *sample;
 
 	bpf_ringbuf_reserve_dynptr(&ringbuf, 64, 0, &ptr1);
 	bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(*sample), 0, &ptr2);
@@ -1993,3 +1992,25 @@ int test_dynptr_reg_type(void *ctx)
 	global_call_bpf_dynptr((const struct bpf_dynptr *)current);
 	return 0;
 }
+
+/* Cannot pass CONST_PTR_TO_DYNPTR to bpf_kfunc_dynptr_test() that may mutate the dynptr */
+__noinline int global_subprog_dynptr_mutable(const struct bpf_dynptr *dynptr)
+{
+	long ret = 0;
+
+	/* this should fail */
+	bpf_kfunc_dynptr_test((struct bpf_dynptr *)dynptr, NULL);
+	__sink(ret);
+	return ret;
+}
+
+SEC("tc")
+__failure __msg("cannot pass pointer to const bpf_dynptr, the helper mutates it")
+int kfunc_dynptr_const_to_mutable(struct __sk_buff *skb)
+{
+	struct bpf_dynptr data;
+
+	bpf_dynptr_from_skb(skb, 0, &data);
+	global_subprog_dynptr_mutable(&data);
+	return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
index c3631fd41977..1c6cfd0888ba 100644
--- a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
+++ b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
@@ -33,7 +33,7 @@ SEC("?lsm.s/bpf")
 __failure __msg("cannot pass in dynptr at an offset=-8")
 int BPF_PROG(not_valid_dynptr, int cmd, union bpf_attr *attr, unsigned int size, bool kernel)
 {
-	unsigned long val;
+	unsigned long val = 0;
 
 	return bpf_verify_pkcs7_signature((struct bpf_dynptr *)&val,
 					  (struct bpf_dynptr *)&val, NULL);
-- 
2.47.3



* [RFC PATCH bpf-next v2 03/11] bpf: Unify dynptr handling in the verifier
  2026-03-07  6:44 [RFC PATCH bpf-next v2 00/11] Dynptr cleanup and bugfixes Amery Hung
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 01/11] bpf: Set kfunc dynptr arg type flag based on prototype Amery Hung
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 02/11] selftests/bpf: Test passing CONST_PTR_TO_DYNPTR to kfunc that may mutate dynptr Amery Hung
@ 2026-03-07  6:44 ` Amery Hung
  2026-03-11 16:03   ` Mykyta Yatsenko
                     ` (2 more replies)
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 04/11] bpf: Assign reg->id when getting referenced kptr from ctx Amery Hung
                   ` (8 subsequent siblings)
  11 siblings, 3 replies; 46+ messages in thread
From: Amery Hung @ 2026-03-07  6:44 UTC (permalink / raw)
  To: bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
	ameryhung, kernel-team

Simplify dynptr checking for helpers and kfuncs by unifying it.
Remember the initialized dynptr in process_dynptr_func() so that the
information can easily be retrieved during later verification.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
 kernel/bpf/verifier.c | 179 +++++++++---------------------------------
 1 file changed, 36 insertions(+), 143 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 0f77c4c5b510..d52780962adb 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -277,8 +277,15 @@ struct bpf_map_desc {
 	int uid;
 };
 
+struct bpf_dynptr_desc {
+	enum bpf_dynptr_type type;
+	u32 id;
+	u32 ref_obj_id;
+};
+
 struct bpf_call_arg_meta {
 	struct bpf_map_desc map;
+	struct bpf_dynptr_desc initialized_dynptr;
 	bool raw_mode;
 	bool pkt_access;
 	u8 release_regno;
@@ -287,7 +294,6 @@ struct bpf_call_arg_meta {
 	int mem_size;
 	u64 msize_max_value;
 	int ref_obj_id;
-	int dynptr_id;
 	int func_id;
 	struct btf *btf;
 	u32 btf_id;
@@ -346,16 +352,12 @@ struct bpf_kfunc_call_arg_meta {
 	struct {
 		struct btf_field *field;
 	} arg_rbtree_root;
-	struct {
-		enum bpf_dynptr_type type;
-		u32 id;
-		u32 ref_obj_id;
-	} initialized_dynptr;
 	struct {
 		u8 spi;
 		u8 frameno;
 	} iter;
 	struct bpf_map_desc map;
+	struct bpf_dynptr_desc initialized_dynptr;
 	u64 mem_size;
 };
 
@@ -511,11 +513,6 @@ static bool is_ptr_cast_function(enum bpf_func_id func_id)
 		func_id == BPF_FUNC_skc_to_tcp_request_sock;
 }
 
-static bool is_dynptr_ref_function(enum bpf_func_id func_id)
-{
-	return func_id == BPF_FUNC_dynptr_data;
-}
-
 static bool is_sync_callback_calling_kfunc(u32 btf_id);
 static bool is_async_callback_calling_kfunc(u32 btf_id);
 static bool is_callback_calling_kfunc(u32 btf_id);
@@ -597,8 +594,6 @@ static bool helper_multiple_ref_obj_use(enum bpf_func_id func_id,
 		ref_obj_uses++;
 	if (is_acquire_function(func_id, map))
 		ref_obj_uses++;
-	if (is_dynptr_ref_function(func_id))
-		ref_obj_uses++;
 
 	return ref_obj_uses > 1;
 }
@@ -8750,7 +8745,8 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
  * type, and declare it as 'const struct bpf_dynptr *' in their prototype.
  */
 static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn_idx,
-			       enum bpf_arg_type arg_type, int clone_ref_obj_id)
+			       enum bpf_arg_type arg_type, int clone_ref_obj_id,
+			       struct bpf_dynptr_desc *initialized_dynptr)
 {
 	struct bpf_reg_state *reg = reg_state(env, regno);
 	int err;
@@ -8825,6 +8821,20 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
 		}
 
 		err = mark_dynptr_read(env, reg);
+
+		if (initialized_dynptr) {
+			struct bpf_func_state *state = func(env, reg);
+			int spi;
+
+			if (reg->type != CONST_PTR_TO_DYNPTR) {
+				spi = dynptr_get_spi(env, reg);
+				reg = &state->stack[spi].spilled_ptr;
+			}
+
+			initialized_dynptr->id = reg->id;
+			initialized_dynptr->type = reg->dynptr.type;
+			initialized_dynptr->ref_obj_id = reg->ref_obj_id;
+		}
 	}
 	return err;
 }
@@ -9587,72 +9597,6 @@ static int check_func_arg_reg_off(struct bpf_verifier_env *env,
 	}
 }
 
-static struct bpf_reg_state *get_dynptr_arg_reg(struct bpf_verifier_env *env,
-						const struct bpf_func_proto *fn,
-						struct bpf_reg_state *regs)
-{
-	struct bpf_reg_state *state = NULL;
-	int i;
-
-	for (i = 0; i < MAX_BPF_FUNC_REG_ARGS; i++)
-		if (arg_type_is_dynptr(fn->arg_type[i])) {
-			if (state) {
-				verbose(env, "verifier internal error: multiple dynptr args\n");
-				return NULL;
-			}
-			state = &regs[BPF_REG_1 + i];
-		}
-
-	if (!state)
-		verbose(env, "verifier internal error: no dynptr arg found\n");
-
-	return state;
-}
-
-static int dynptr_id(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
-{
-	struct bpf_func_state *state = func(env, reg);
-	int spi;
-
-	if (reg->type == CONST_PTR_TO_DYNPTR)
-		return reg->id;
-	spi = dynptr_get_spi(env, reg);
-	if (spi < 0)
-		return spi;
-	return state->stack[spi].spilled_ptr.id;
-}
-
-static int dynptr_ref_obj_id(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
-{
-	struct bpf_func_state *state = func(env, reg);
-	int spi;
-
-	if (reg->type == CONST_PTR_TO_DYNPTR)
-		return reg->ref_obj_id;
-	spi = dynptr_get_spi(env, reg);
-	if (spi < 0)
-		return spi;
-	return state->stack[spi].spilled_ptr.ref_obj_id;
-}
-
-static enum bpf_dynptr_type dynptr_get_type(struct bpf_verifier_env *env,
-					    struct bpf_reg_state *reg)
-{
-	struct bpf_func_state *state = func(env, reg);
-	int spi;
-
-	if (reg->type == CONST_PTR_TO_DYNPTR)
-		return reg->dynptr.type;
-
-	spi = __get_spi(reg->var_off.value);
-	if (spi < 0) {
-		verbose(env, "verifier internal error: invalid spi when querying dynptr type\n");
-		return BPF_DYNPTR_TYPE_INVALID;
-	}
-
-	return state->stack[spi].spilled_ptr.dynptr.type;
-}
-
 static int check_reg_const_str(struct bpf_verifier_env *env,
 			       struct bpf_reg_state *reg, u32 regno)
 {
@@ -10007,7 +9951,8 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 					 true, meta);
 		break;
 	case ARG_PTR_TO_DYNPTR:
-		err = process_dynptr_func(env, regno, insn_idx, arg_type, 0);
+		err = process_dynptr_func(env, regno, insn_idx, arg_type, 0,
+					  &meta->initialized_dynptr);
 		if (err)
 			return err;
 		break;
@@ -10666,7 +10611,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
 			if (ret)
 				return ret;
 
-			ret = process_dynptr_func(env, regno, -1, arg->arg_type, 0);
+			ret = process_dynptr_func(env, regno, -1, arg->arg_type, 0, NULL);
 			if (ret)
 				return ret;
 		} else if (base_type(arg->arg_type) == ARG_PTR_TO_BTF_ID) {
@@ -11771,52 +11716,10 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 			}
 		}
 		break;
-	case BPF_FUNC_dynptr_data:
-	{
-		struct bpf_reg_state *reg;
-		int id, ref_obj_id;
-
-		reg = get_dynptr_arg_reg(env, fn, regs);
-		if (!reg)
-			return -EFAULT;
-
-
-		if (meta.dynptr_id) {
-			verifier_bug(env, "meta.dynptr_id already set");
-			return -EFAULT;
-		}
-		if (meta.ref_obj_id) {
-			verifier_bug(env, "meta.ref_obj_id already set");
-			return -EFAULT;
-		}
-
-		id = dynptr_id(env, reg);
-		if (id < 0) {
-			verifier_bug(env, "failed to obtain dynptr id");
-			return id;
-		}
-
-		ref_obj_id = dynptr_ref_obj_id(env, reg);
-		if (ref_obj_id < 0) {
-			verifier_bug(env, "failed to obtain dynptr ref_obj_id");
-			return ref_obj_id;
-		}
-
-		meta.dynptr_id = id;
-		meta.ref_obj_id = ref_obj_id;
-
-		break;
-	}
 	case BPF_FUNC_dynptr_write:
 	{
-		enum bpf_dynptr_type dynptr_type;
-		struct bpf_reg_state *reg;
-
-		reg = get_dynptr_arg_reg(env, fn, regs);
-		if (!reg)
-			return -EFAULT;
+		enum bpf_dynptr_type dynptr_type = meta.initialized_dynptr.type;
 
-		dynptr_type = dynptr_get_type(env, reg);
 		if (dynptr_type == BPF_DYNPTR_TYPE_INVALID)
 			return -EFAULT;
 
@@ -12007,10 +11910,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 		return -EFAULT;
 	}
 
-	if (is_dynptr_ref_function(func_id))
-		regs[BPF_REG_0].dynptr_id = meta.dynptr_id;
-
-	if (is_ptr_cast_function(func_id) || is_dynptr_ref_function(func_id)) {
+	if (is_ptr_cast_function(func_id)) {
 		/* For release_reference() */
 		regs[BPF_REG_0].ref_obj_id = meta.ref_obj_id;
 	} else if (is_acquire_function(func_id, meta.map.ptr)) {
@@ -12024,6 +11924,11 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 		regs[BPF_REG_0].ref_obj_id = id;
 	}
 
+	if (func_id == BPF_FUNC_dynptr_data) {
+		regs[BPF_REG_0].dynptr_id = meta.initialized_dynptr.id;
+		regs[BPF_REG_0].ref_obj_id = meta.initialized_dynptr.ref_obj_id;
+	}
+
 	err = do_refine_retval_range(env, regs, fn->ret_type, func_id, &meta);
 	if (err)
 		return err;
@@ -13559,22 +13464,10 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				}
 			}
 
-			ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id);
+			ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id,
+						  &meta->initialized_dynptr);
 			if (ret < 0)
 				return ret;
-
-			if (!(dynptr_arg_type & MEM_UNINIT)) {
-				int id = dynptr_id(env, reg);
-
-				if (id < 0) {
-					verifier_bug(env, "failed to obtain dynptr id");
-					return id;
-				}
-				meta->initialized_dynptr.id = id;
-				meta->initialized_dynptr.type = dynptr_get_type(env, reg);
-				meta->initialized_dynptr.ref_obj_id = dynptr_ref_obj_id(env, reg);
-			}
-
 			break;
 		}
 		case KF_ARG_PTR_TO_ITER:
-- 
2.47.3


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [RFC PATCH bpf-next v2 04/11] bpf: Assign reg->id when getting referenced kptr from ctx
  2026-03-07  6:44 [RFC PATCH bpf-next v2 00/11] Dynptr cleanup and bugfixes Amery Hung
                   ` (2 preceding siblings ...)
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 03/11] bpf: Unify dynptr handling in the verifier Amery Hung
@ 2026-03-07  6:44 ` Amery Hung
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 05/11] bpf: Preserve reg->id of pointer objects after null-check Amery Hung
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 46+ messages in thread
From: Amery Hung @ 2026-03-07  6:44 UTC (permalink / raw)
  To: bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
	ameryhung, kernel-team

Assign reg->id when getting a referenced kptr from reading the program
context, to be consistent with R0 of KF_ACQUIRE kfuncs. In a later
patch, skb dynptrs will track the referenced skb in qdisc programs
using a new field, reg->parent_id.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
 kernel/bpf/verifier.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index d52780962adb..ea10dd611df2 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -7754,8 +7754,6 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 			} else {
 				mark_reg_known_zero(env, regs,
 						    value_regno);
-				if (type_may_be_null(info.reg_type))
-					regs[value_regno].id = ++env->id_gen;
 				/* A load of ctx field could have different
 				 * actual load size with the one encoded in the
 				 * insn. When the dst is PTR, it is for sure not
@@ -7765,8 +7763,11 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
 				if (base_type(info.reg_type) == PTR_TO_BTF_ID) {
 					regs[value_regno].btf = info.btf;
 					regs[value_regno].btf_id = info.btf_id;
+					regs[value_regno].id = info.ref_obj_id;
 					regs[value_regno].ref_obj_id = info.ref_obj_id;
 				}
+				if (type_may_be_null(info.reg_type) && !regs[value_regno].id)
+					regs[value_regno].id = ++env->id_gen;
 			}
 			regs[value_regno].type = info.reg_type;
 		}
-- 
2.47.3


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [RFC PATCH bpf-next v2 05/11] bpf: Preserve reg->id of pointer objects after null-check
  2026-03-07  6:44 [RFC PATCH bpf-next v2 00/11] Dynptr cleanup and bugfixes Amery Hung
                   ` (3 preceding siblings ...)
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 04/11] bpf: Assign reg->id when getting referenced kptr from ctx Amery Hung
@ 2026-03-07  6:44 ` Amery Hung
  2026-03-11 21:55   ` Andrii Nakryiko
  2026-03-11 22:26   ` Alexei Starovoitov
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 06/11] bpf: Refactor object relationship tracking and fix dynptr UAF bug Amery Hung
                   ` (6 subsequent siblings)
  11 siblings, 2 replies; 46+ messages in thread
From: Amery Hung @ 2026-03-07  6:44 UTC (permalink / raw)
  To: bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
	ameryhung, kernel-team

Preserve reg->id of pointer objects after null-checking the register so
that child objects derived from it can still refer to it in the new
object relationship tracking mechanism introduced in a later patch. This
change incurs a slight increase in the number of states in one selftest
BPF object, rbtree_search.bpf.o. For Meta BPF objects, the increase in
states is also negligible.

Selftest BPF objects with insns_diff > 0

Insns (A)  Insns (B)  Insns  (DIFF)  States (A)  States (B)  States (DIFF)
---------  ---------  -------------  ----------  ----------  -------------
     7309       7814  +505 (+6.91%)         394         413   +19 (+4.82%)

Meta BPF objects with insns_diff > 0

Insns (A)  Insns (B)  Insns   (DIFF)  States (A)  States (B)  States (DIFF)
---------  ---------  --------------  ----------  ----------  -------------
       52         57     +5 (+9.62%)           5           6   +1 (+20.00%)
       52         57     +5 (+9.62%)           5           6   +1 (+20.00%)
      676        679     +3 (+0.44%)          54          54    +0 (+0.00%)
      289        292     +3 (+1.04%)          20          20    +0 (+0.00%)
       78         82     +4 (+5.13%)           8           8    +0 (+0.00%)
      252        320   +68 (+26.98%)          21          27   +6 (+28.57%)
      252        320   +68 (+26.98%)          21          27   +6 (+28.57%)
      119        126     +7 (+5.88%)           6           7   +1 (+16.67%)
     1119       1128     +9 (+0.80%)          95          96    +1 (+1.05%)
     1128       1137     +9 (+0.80%)          95          96    +1 (+1.05%)
     4380       4465    +85 (+1.94%)         114         118    +4 (+3.51%)
     3093       3170    +77 (+2.49%)          83          88    +5 (+6.02%)
    30181      31224  +1043 (+3.46%)         832         863   +31 (+3.73%)
   237608     237619    +11 (+0.00%)       11670       11671    +1 (+0.01%)
    94832      94836     +4 (+0.00%)        4787        4788    +1 (+0.02%)
   237387     237407    +20 (+0.01%)       11651       11652    +1 (+0.01%)
    94832      94836     +4 (+0.00%)        4787        4788    +1 (+0.02%)
     8103       8108     +5 (+0.06%)         459         459    +0 (+0.00%)
     8076       8079     +3 (+0.04%)         457         457    +0 (+0.00%)
     8177       8197    +20 (+0.24%)         459         460    +1 (+0.22%)
     8083       8086     +3 (+0.04%)         458         458    +0 (+0.00%)
   237608     237619    +11 (+0.00%)       11670       11671    +1 (+0.01%)
    94832      94836     +4 (+0.00%)        4787        4788    +1 (+0.02%)
   237387     237407    +20 (+0.01%)       11651       11652    +1 (+0.01%)
    94832      94836     +4 (+0.00%)        4787        4788    +1 (+0.02%)
     8103       8108     +5 (+0.06%)         459         459    +0 (+0.00%)
     8076       8079     +3 (+0.04%)         457         457    +0 (+0.00%)
     8177       8197    +20 (+0.24%)         459         460    +1 (+0.22%)
     8083       8086     +3 (+0.04%)         458         458    +0 (+0.00%)

Looking into rbtree_search, the reason for the increase is that the
verifier has to explore the main loop shown below for one more iteration
until state pruning decides the current state is safe.

long rbtree_search(void *ctx)
{
	...
	bpf_spin_lock(&glock0);
	rb_n = bpf_rbtree_root(&groot0);
	while (can_loop) {
		if (!rb_n) {
			bpf_spin_unlock(&glock0);
			return __LINE__;
		}

		n = rb_entry(rb_n, struct node_data, r0);
		if (lookup_key == n->key0)
			break;
		if (nr_gc < NR_NODES)
			gc_ns[nr_gc++] = rb_n;
		if (lookup_key < n->key0)
			rb_n = bpf_rbtree_left(&groot0, rb_n);
		else
			rb_n = bpf_rbtree_right(&groot0, rb_n);
	}
	...
}

Below is what the verifier sees at the start of each iteration
(65: may_goto) after preserving the id of rb_n. Without the id of rb_n,
the verifier stops exploring the loop at iter 16.

           rb_n  gc_ns[15]
iter 15    257   257

iter 16    290   257    rb_n: idmap add 257->290
                        gc_ns[15]: check 257 != 290 --> state not equal

iter 17    325   257    rb_n: idmap add 290->325
                        gc_ns[15]: idmap add 257->257 --> state safe
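
The idmap behavior in the table above can be modeled with a small,
self-contained sketch (a simplified, hypothetical version of the verifier's
check_ids(): two states compare equal only if each old-state id maps
consistently to a single current-state id):

```c
#include <assert.h>

#define IDMAP_SIZE 16

/* Records old-state -> current-state id mappings seen so far. */
struct idmap {
	unsigned int old[IDMAP_SIZE];
	unsigned int cur[IDMAP_SIZE];
	int cnt;
};

/* Returns 1 if (old_id, cur_id) is consistent with the mappings recorded
 * so far (adding it if new), 0 on conflict. Unallocated ids (0) always
 * match. */
static int check_ids(struct idmap *m, unsigned int old_id, unsigned int cur_id)
{
	int i;

	if (!old_id && !cur_id)
		return 1;
	for (i = 0; i < m->cnt; i++)
		if (m->old[i] == old_id)
			return m->cur[i] == cur_id;
	m->old[m->cnt] = old_id;
	m->cur[m->cnt] = cur_id;
	m->cnt++;
	return 1;
}
```

In iter 16, rb_n adds the mapping 257->290, so gc_ns[15] requiring 257->257
conflicts and the states are not equal; in iter 17, 290->325 and 257->257 are
consistent, so the state is safe.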

Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
 kernel/bpf/verifier.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index ea10dd611df2..8f9e28901bc4 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -17014,15 +17014,10 @@ static void mark_ptr_or_null_reg(struct bpf_func_state *state,
 
 		mark_ptr_not_null_reg(reg);
 
-		if (!reg_may_point_to_spin_lock(reg)) {
-			/* For not-NULL ptr, reg->ref_obj_id will be reset
-			 * in release_reference().
-			 *
-			 * reg->id is still used by spin_lock ptr. Other
-			 * than spin_lock ptr type, reg->id can be reset.
-			 */
-			reg->id = 0;
-		}
+		/*
+		 * reg->id is preserved for object relationship tracking
+		 * and spin_lock lock state tracking
+		 */
 	}
 }
 
-- 
2.47.3


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [RFC PATCH bpf-next v2 06/11] bpf: Refactor object relationship tracking and fix dynptr UAF bug
  2026-03-07  6:44 [RFC PATCH bpf-next v2 00/11] Dynptr cleanup and bugfixes Amery Hung
                   ` (4 preceding siblings ...)
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 05/11] bpf: Preserve reg->id of pointer objects after null-check Amery Hung
@ 2026-03-07  6:44 ` Amery Hung
  2026-03-11 22:32   ` Andrii Nakryiko
  2026-03-12 23:33   ` Mykyta Yatsenko
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 07/11] bpf: Remove redundant dynptr arg check for helper Amery Hung
                   ` (5 subsequent siblings)
  11 siblings, 2 replies; 46+ messages in thread
From: Amery Hung @ 2026-03-07  6:44 UTC (permalink / raw)
  To: bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
	ameryhung, kernel-team

Refactor object relationship tracking in the verifier by removing
dynptr_id and using parent_id to track the parent object. Then, track
the referenced parent object of a dynptr when calling a dynptr
constructor. This fixes a use-after-free bug: for a dynptr that has a
referenced parent object (an skb dynptr in BPF qdisc or a file dynptr),
the dynptr and derived slices need to be invalidated when the parent
object is released.

First, add parent_id to bpf_reg_state to precisely track objects'
child-parent relationships. A child object will use parent_id to track
the parent object's id. This replaces the dynptr-slice-specific
dynptr_id.

Then, when calling dynptr constructors (i.e., process_dynptr_func() with
a MEM_UNINIT argument), track the parent's id if the parent is a
referenced object. This only applies to file dynptrs and skb dynptrs, so
only pass the parent reg->id to kfunc constructors.

For release_reference(), this means that when invalidating an object, it
needs to also invalidate all dependent objects by traversing the
subtree. This is done using a stack-based DFS to avoid the recursive
call chain of release_reference() -> unmark_stack_slots_dynptr() ->
release_reference(). Note that referenced objects cannot be released
while traversing the tree unless their id is the one initially passed to
release_reference(), as releasing them would actually require a helper
call to release the acquired resources.
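
The cascading invalidation can be sketched as an iterative, stack-based walk
over parent_id edges. The flat object array and names below are a hypothetical
model, not the verifier's real register/stack state:

```c
#include <assert.h>

/* Model objects with an id and a parent_id; parent_id == 0 means no
 * parent. valid mirrors whether the verifier still considers the object
 * usable. */
struct obj {
	unsigned int id;
	unsigned int parent_id;
	int valid;
};

/* Invalidate root_id and everything transitively derived from it,
 * using an explicit stack instead of recursion. Each object is
 * invalidated and pushed at most once, so the walk terminates. */
static void invalidate_subtree(struct obj *objs, int nr, unsigned int root_id)
{
	unsigned int stack[32];
	int top = 0, i;

	stack[top++] = root_id;
	while (top) {
		unsigned int id = stack[--top];

		for (i = 0; i < nr; i++) {
			if (!objs[i].valid)
				continue;
			if (objs[i].id == id || objs[i].parent_id == id) {
				objs[i].valid = 0;
				/* visit this object's children next */
				stack[top++] = objs[i].id;
			}
		}
	}
}
```

For example, invalidating an skb (id 1) invalidates a dynptr derived from it
(parent_id 1) and, transitively, any slice derived from that dynptr, while
unrelated objects are untouched.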

While the new design changes how object relationships are tracked in
the verifier, it does NOT change the verifier's behavior. Below are the
implications of the new design for dynptrs, pointer casting, and
owning/non-owning references.

Dynptr:

When initializing a dynptr, a referenced dynptr will acquire a reference
tracked by ref_obj_id. If the dynptr has a referenced parent, parent_id
will be used to track the parent's id. When cloning a dynptr, the
ref_obj_id and parent_id of the clone are copied directly from the
original dynptr. This means that, when releasing a referenced dynptr,
release_reference(ref_obj_id) will release all clones, the original, and
any derived slices. For a non-referenced dynptr, only the specific
dynptr being released and its child slices will be invalidated.

Pointer casting:

A referenced socket pointer and the casted pointers should share the
same lifetime, while having different nullness. Therefore, they will
have different ids but the same ref_obj_id.

When converting owning references to non-owning:

After converting a reference from owning to non-owning by clearing the
object's ref_obj_id (e.g., object(id=1, ref_obj_id=1) -> object(id=1,
ref_obj_id=0)), the verifier only needs to release the reference state
instead of releasing the registers that carry the id, so call
release_reference_nomark() instead of release_reference().

CC: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Fixes: 870c28588afa ("bpf: net_sched: Add basic bpf qdisc kfuncs")
Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
 include/linux/bpf_verifier.h |  14 +-
 kernel/bpf/log.c             |   4 +-
 kernel/bpf/verifier.c        | 274 ++++++++++++++++++-----------------
 3 files changed, 154 insertions(+), 138 deletions(-)

diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index c1e30096ea7b..e987a48f511a 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -65,7 +65,6 @@ struct bpf_reg_state {
 
 		struct { /* for PTR_TO_MEM | PTR_TO_MEM_OR_NULL */
 			u32 mem_size;
-			u32 dynptr_id; /* for dynptr slices */
 		};
 
 		/* For dynptr stack slots */
@@ -193,6 +192,13 @@ struct bpf_reg_state {
 	 * allowed and has the same effect as bpf_sk_release(sk).
 	 */
 	u32 ref_obj_id;
+	/* Tracks the parent object this register was derived from.
+	 * Used for cascading invalidation: when the parent object is
+	 * released or invalidated, all registers with matching parent_id
+	 * are also invalidated. For example, a slice from bpf_dynptr_data()
+	 * gets parent_id set to the dynptr's id.
+	 */
+	u32 parent_id;
 	/* Inside the callee two registers can be both PTR_TO_STACK like
 	 * R1=fp-8 and R2=fp-8, but one of them points to this function stack
 	 * while another to the caller's stack. To differentiate them 'frameno'
@@ -707,6 +713,11 @@ struct bpf_idset {
 	} entries[BPF_ID_MAP_SIZE];
 };
 
+struct bpf_idstack {
+	int cnt;
+	u32 ids[BPF_ID_MAP_SIZE];
+};
+
 /* see verifier.c:compute_scc_callchain() */
 struct bpf_scc_callchain {
 	/* call sites from bpf_verifier_state->frame[*]->callsite leading to this SCC */
@@ -789,6 +800,7 @@ struct bpf_verifier_env {
 	union {
 		struct bpf_idmap idmap_scratch;
 		struct bpf_idset idset_scratch;
+		struct bpf_idstack idstack_scratch;
 	};
 	struct {
 		int *insn_state;
diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c
index 37d72b052192..cb4129b8b2a1 100644
--- a/kernel/bpf/log.c
+++ b/kernel/bpf/log.c
@@ -707,6 +707,8 @@ static void print_reg_state(struct bpf_verifier_env *env,
 		verbose(env, "%+d", reg->delta);
 	if (reg->ref_obj_id)
 		verbose_a("ref_obj_id=%d", reg->ref_obj_id);
+	if (reg->parent_id)
+		verbose_a("parent_id=%d", reg->parent_id);
 	if (type_is_non_owning_ref(reg->type))
 		verbose_a("%s", "non_own_ref");
 	if (type_is_map_ptr(t)) {
@@ -810,8 +812,6 @@ void print_verifier_state(struct bpf_verifier_env *env, const struct bpf_verifie
 				verbose_a("id=%d", reg->id);
 			if (reg->ref_obj_id)
 				verbose_a("ref_id=%d", reg->ref_obj_id);
-			if (reg->dynptr_id)
-				verbose_a("dynptr_id=%d", reg->dynptr_id);
 			verbose(env, ")");
 			break;
 		case STACK_ITER:
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 8f9e28901bc4..0436fc4d9107 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -204,7 +204,7 @@ struct bpf_verifier_stack_elem {
 
 static int acquire_reference(struct bpf_verifier_env *env, int insn_idx);
 static int release_reference_nomark(struct bpf_verifier_state *state, int ref_obj_id);
-static int release_reference(struct bpf_verifier_env *env, int ref_obj_id);
+static int release_reference(struct bpf_verifier_env *env, int id);
 static void invalidate_non_owning_refs(struct bpf_verifier_env *env);
 static bool in_rbtree_lock_required_cb(struct bpf_verifier_env *env);
 static int ref_set_non_owning(struct bpf_verifier_env *env,
@@ -281,6 +281,7 @@ struct bpf_dynptr_desc {
 	enum bpf_dynptr_type type;
 	u32 id;
 	u32 ref_obj_id;
+	u32 parent_id;
 };
 
 struct bpf_call_arg_meta {
@@ -294,6 +295,7 @@ struct bpf_call_arg_meta {
 	int mem_size;
 	u64 msize_max_value;
 	int ref_obj_id;
+	u32 id;
 	int func_id;
 	struct btf *btf;
 	u32 btf_id;
@@ -321,6 +323,7 @@ struct bpf_kfunc_call_arg_meta {
 	const char *func_name;
 	/* Out parameters */
 	u32 ref_obj_id;
+	u32 id;
 	u8 release_regno;
 	bool r0_rdonly;
 	u32 ret_btf_id;
@@ -721,14 +724,14 @@ static enum bpf_type_flag get_dynptr_type_flag(enum bpf_dynptr_type type)
 	}
 }
 
-static bool dynptr_type_refcounted(enum bpf_dynptr_type type)
+static bool dynptr_type_referenced(enum bpf_dynptr_type type)
 {
 	return type == BPF_DYNPTR_TYPE_RINGBUF || type == BPF_DYNPTR_TYPE_FILE;
 }
 
 static void __mark_dynptr_reg(struct bpf_reg_state *reg,
 			      enum bpf_dynptr_type type,
-			      bool first_slot, int dynptr_id);
+			      bool first_slot, int id);
 
 static void __mark_reg_not_init(const struct bpf_verifier_env *env,
 				struct bpf_reg_state *reg);
@@ -755,11 +758,12 @@ static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
 				        struct bpf_func_state *state, int spi);
 
 static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
-				   enum bpf_arg_type arg_type, int insn_idx, int clone_ref_obj_id)
+				   enum bpf_arg_type arg_type, int insn_idx, int parent_id,
+				   struct bpf_dynptr_desc *initialized_dynptr)
 {
 	struct bpf_func_state *state = func(env, reg);
+	int spi, i, err, ref_obj_id = 0;
 	enum bpf_dynptr_type type;
-	int spi, i, err;
 
 	spi = dynptr_get_spi(env, reg);
 	if (spi < 0)
@@ -793,22 +797,28 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
 	mark_dynptr_stack_regs(env, &state->stack[spi].spilled_ptr,
 			       &state->stack[spi - 1].spilled_ptr, type);
 
-	if (dynptr_type_refcounted(type)) {
-		/* The id is used to track proper releasing */
-		int id;
-
-		if (clone_ref_obj_id)
-			id = clone_ref_obj_id;
-		else
-			id = acquire_reference(env, insn_idx);
-
-		if (id < 0)
-			return id;
-
-		state->stack[spi].spilled_ptr.ref_obj_id = id;
-		state->stack[spi - 1].spilled_ptr.ref_obj_id = id;
+	if (initialized_dynptr->type == BPF_DYNPTR_TYPE_INVALID) {
+		if (dynptr_type_referenced(type)) {
+			ref_obj_id = acquire_reference(env, insn_idx);
+			if (ref_obj_id < 0)
+				return ref_obj_id;
+		}
+	} else {
+		/*
+		 * Referenced dynptr clones have the same lifetime as the original dynptr
+		 * since bpf_dynptr_clone() does not initialize the clones like the
+		 * constructor does. If any of the dynptrs is invalidated, the rest will
+		 * also need to invalidated. Thus, they all share the same non-zero ref_obj_id.
+		 */
+		ref_obj_id = initialized_dynptr->ref_obj_id;
+		parent_id = initialized_dynptr->parent_id;
 	}
 
+	state->stack[spi].spilled_ptr.ref_obj_id = ref_obj_id;
+	state->stack[spi - 1].spilled_ptr.ref_obj_id = ref_obj_id;
+	state->stack[spi].spilled_ptr.parent_id = parent_id;
+	state->stack[spi - 1].spilled_ptr.parent_id = parent_id;
+
 	bpf_mark_stack_write(env, state->frameno, BIT(spi - 1) | BIT(spi));
 
 	return 0;
@@ -832,7 +842,7 @@ static void invalidate_dynptr(struct bpf_verifier_env *env, struct bpf_func_stat
 static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
 {
 	struct bpf_func_state *state = func(env, reg);
-	int spi, ref_obj_id, i;
+	int spi;
 
 	/*
 	 * This can only be set for PTR_TO_STACK, as CONST_PTR_TO_DYNPTR cannot
@@ -843,45 +853,19 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
 		verifier_bug(env, "CONST_PTR_TO_DYNPTR cannot be released");
 		return -EFAULT;
 	}
+
 	spi = dynptr_get_spi(env, reg);
 	if (spi < 0)
 		return spi;
 
-	if (!dynptr_type_refcounted(state->stack[spi].spilled_ptr.dynptr.type)) {
-		invalidate_dynptr(env, state, spi);
-		return 0;
-	}
-
-	ref_obj_id = state->stack[spi].spilled_ptr.ref_obj_id;
-
-	/* If the dynptr has a ref_obj_id, then we need to invalidate
-	 * two things:
-	 *
-	 * 1) Any dynptrs with a matching ref_obj_id (clones)
-	 * 2) Any slices derived from this dynptr.
+	/*
+	 * For referenced dynptr, the clones share the same ref_obj_id and will be
+	 * invalidated too. For non-referenced dynptr, only the dynptr and slices
+	 * derived from it will be invalidated.
 	 */
-
-	/* Invalidate any slices associated with this dynptr */
-	WARN_ON_ONCE(release_reference(env, ref_obj_id));
-
-	/* Invalidate any dynptr clones */
-	for (i = 1; i < state->allocated_stack / BPF_REG_SIZE; i++) {
-		if (state->stack[i].spilled_ptr.ref_obj_id != ref_obj_id)
-			continue;
-
-		/* it should always be the case that if the ref obj id
-		 * matches then the stack slot also belongs to a
-		 * dynptr
-		 */
-		if (state->stack[i].slot_type[0] != STACK_DYNPTR) {
-			verifier_bug(env, "misconfigured ref_obj_id");
-			return -EFAULT;
-		}
-		if (state->stack[i].spilled_ptr.dynptr.first_slot)
-			invalidate_dynptr(env, state, i);
-	}
-
-	return 0;
+	reg = &state->stack[spi].spilled_ptr;
+	return release_reference(env, dynptr_type_referenced(reg->dynptr.type) ?
+				      reg->ref_obj_id : reg->id);
 }
 
 static void __mark_reg_unknown(const struct bpf_verifier_env *env,
@@ -898,10 +882,6 @@ static void mark_reg_invalid(const struct bpf_verifier_env *env, struct bpf_reg_
 static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
 				        struct bpf_func_state *state, int spi)
 {
-	struct bpf_func_state *fstate;
-	struct bpf_reg_state *dreg;
-	int i, dynptr_id;
-
 	/* We always ensure that STACK_DYNPTR is never set partially,
 	 * hence just checking for slot_type[0] is enough. This is
 	 * different for STACK_SPILL, where it may be only set for
@@ -914,7 +894,7 @@ static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
 	if (!state->stack[spi].spilled_ptr.dynptr.first_slot)
 		spi = spi + 1;
 
-	if (dynptr_type_refcounted(state->stack[spi].spilled_ptr.dynptr.type)) {
+	if (dynptr_type_referenced(state->stack[spi].spilled_ptr.dynptr.type)) {
 		verbose(env, "cannot overwrite referenced dynptr\n");
 		return -EINVAL;
 	}
@@ -922,31 +902,8 @@ static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
 	mark_stack_slot_scratched(env, spi);
 	mark_stack_slot_scratched(env, spi - 1);
 
-	/* Writing partially to one dynptr stack slot destroys both. */
-	for (i = 0; i < BPF_REG_SIZE; i++) {
-		state->stack[spi].slot_type[i] = STACK_INVALID;
-		state->stack[spi - 1].slot_type[i] = STACK_INVALID;
-	}
-
-	dynptr_id = state->stack[spi].spilled_ptr.id;
-	/* Invalidate any slices associated with this dynptr */
-	bpf_for_each_reg_in_vstate(env->cur_state, fstate, dreg, ({
-		/* Dynptr slices are only PTR_TO_MEM_OR_NULL and PTR_TO_MEM */
-		if (dreg->type != (PTR_TO_MEM | PTR_MAYBE_NULL) && dreg->type != PTR_TO_MEM)
-			continue;
-		if (dreg->dynptr_id == dynptr_id)
-			mark_reg_invalid(env, dreg);
-	}));
-
-	/* Do not release reference state, we are destroying dynptr on stack,
-	 * not using some helper to release it. Just reset register.
-	 */
-	__mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
-	__mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
-
-	bpf_mark_stack_write(env, state->frameno, BIT(spi - 1) | BIT(spi));
-
-	return 0;
+	/* Invalidate the dynptr and any derived slices */
+	return release_reference(env, state->stack[spi].spilled_ptr.id);
 }
 
 static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
@@ -1583,15 +1540,15 @@ static void release_reference_state(struct bpf_verifier_state *state, int idx)
 	return;
 }
 
-static bool find_reference_state(struct bpf_verifier_state *state, int ptr_id)
+static struct bpf_reference_state *find_reference_state(struct bpf_verifier_state *state, int ptr_id)
 {
 	int i;
 
 	for (i = 0; i < state->acquired_refs; i++)
 		if (state->refs[i].id == ptr_id)
-			return true;
+			return &state->refs[i];
 
-	return false;
+	return NULL;
 }
 
 static int release_lock_state(struct bpf_verifier_state *state, int type, int id, void *ptr)
@@ -2186,6 +2143,7 @@ static void __mark_reg_known(struct bpf_reg_state *reg, u64 imm)
 	       offsetof(struct bpf_reg_state, var_off) - sizeof(reg->type));
 	reg->id = 0;
 	reg->ref_obj_id = 0;
+	reg->parent_id = 0;
 	___mark_reg_known(reg, imm);
 }
 
@@ -2230,7 +2188,7 @@ static void mark_reg_known_zero(struct bpf_verifier_env *env,
 }
 
 static void __mark_dynptr_reg(struct bpf_reg_state *reg, enum bpf_dynptr_type type,
-			      bool first_slot, int dynptr_id)
+			      bool first_slot, int id)
 {
 	/* reg->type has no meaning for STACK_DYNPTR, but when we set reg for
 	 * callback arguments, it does need to be CONST_PTR_TO_DYNPTR, so simply
@@ -2239,7 +2197,7 @@ static void __mark_dynptr_reg(struct bpf_reg_state *reg, enum bpf_dynptr_type ty
 	__mark_reg_known_zero(reg);
 	reg->type = CONST_PTR_TO_DYNPTR;
 	/* Give each dynptr a unique id to uniquely associate slices to it. */
-	reg->id = dynptr_id;
+	reg->id = id;
 	reg->dynptr.type = type;
 	reg->dynptr.first_slot = first_slot;
 }
@@ -2801,6 +2759,7 @@ static void __mark_reg_unknown_imprecise(struct bpf_reg_state *reg)
 	reg->type = SCALAR_VALUE;
 	reg->id = 0;
 	reg->ref_obj_id = 0;
+	reg->parent_id = 0;
 	reg->var_off = tnum_unknown;
 	reg->frameno = 0;
 	reg->precise = false;
@@ -8746,7 +8705,7 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
  * type, and declare it as 'const struct bpf_dynptr *' in their prototype.
  */
 static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn_idx,
-			       enum bpf_arg_type arg_type, int clone_ref_obj_id,
+			       enum bpf_arg_type arg_type, int parent_id,
 			       struct bpf_dynptr_desc *initialized_dynptr)
 {
 	struct bpf_reg_state *reg = reg_state(env, regno);
@@ -8798,7 +8757,8 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
 				return err;
 		}
 
-		err = mark_stack_slots_dynptr(env, reg, arg_type, insn_idx, clone_ref_obj_id);
+		err = mark_stack_slots_dynptr(env, reg, arg_type, insn_idx, parent_id,
+					      initialized_dynptr);
 	} else /* MEM_RDONLY and None case from above */ {
 		/* For the reg->type == PTR_TO_STACK case, bpf_dynptr is never const */
 		if (reg->type == CONST_PTR_TO_DYNPTR && !(arg_type & MEM_RDONLY)) {
@@ -8835,6 +8795,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
 			initialized_dynptr->id = reg->id;
 			initialized_dynptr->type = reg->dynptr.type;
 			initialized_dynptr->ref_obj_id = reg->ref_obj_id;
+			initialized_dynptr->parent_id = reg->parent_id;
 		}
 	}
 	return err;
@@ -9787,7 +9748,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 			 */
 			if (reg->type == PTR_TO_STACK) {
 				spi = dynptr_get_spi(env, reg);
-				if (spi < 0 || !state->stack[spi].spilled_ptr.ref_obj_id) {
+				if (spi < 0 || !state->stack[spi].spilled_ptr.id) {
 					verbose(env, "arg %d is an unacquired reference\n", regno);
 					return -EINVAL;
 				}
@@ -9815,6 +9776,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 			return -EACCES;
 		}
 		meta->ref_obj_id = reg->ref_obj_id;
+		meta->id = reg->id;
 	}
 
 	switch (base_type(arg_type)) {
@@ -10438,26 +10400,82 @@ static int release_reference_nomark(struct bpf_verifier_state *state, int ref_ob
 	return -EINVAL;
 }
 
-/* The pointer with the specified id has released its reference to kernel
- * resources. Identify all copies of the same pointer and clear the reference.
- *
- * This is the release function corresponding to acquire_reference(). Idempotent.
- */
-static int release_reference(struct bpf_verifier_env *env, int ref_obj_id)
+static void idstack_reset(struct bpf_idstack *idstack)
+{
+	idstack->cnt = 0;
+}
+
+static void idstack_push(struct bpf_idstack *idstack, u32 id)
+{
+	if (WARN_ON_ONCE(idstack->cnt >= BPF_ID_MAP_SIZE))
+		return;
+
+	idstack->ids[idstack->cnt++] = id;
+}
+
+static u32 idstack_pop(struct bpf_idstack *idstack)
+{
+	return idstack->cnt > 0 ? idstack->ids[--idstack->cnt] : 0;
+}
+
+static int release_reg_check(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
+			     int id, int root_id, struct bpf_idstack *idstack)
 {
+	struct bpf_reference_state *ref_state;
+
+	if (reg->id == id || reg->parent_id == id || reg->ref_obj_id == id) {
+		/* Cannot indirectly release a referenced id */
+		if (reg->ref_obj_id && id != root_id) {
+			ref_state = find_reference_state(env->cur_state, reg->ref_obj_id);
+			verbose(env, "Unreleased reference id=%d alloc_insn=%d when releasing id=%d\n",
+				ref_state->id, ref_state->insn_idx, root_id);
+			return -EINVAL;
+		}
+
+		if (reg->id && reg->id != id)
+			idstack_push(idstack, reg->id);
+		return 1;
+	}
+
+	return 0;
+}
+
+static int release_reference(struct bpf_verifier_env *env, int id)
+{
+	struct bpf_idstack *idstack = &env->idstack_scratch;
 	struct bpf_verifier_state *vstate = env->cur_state;
+	int spi, fi, root_id = id, err = 0;
 	struct bpf_func_state *state;
 	struct bpf_reg_state *reg;
-	int err;
 
-	err = release_reference_nomark(vstate, ref_obj_id);
-	if (err)
-		return err;
+	idstack_reset(idstack);
+	idstack_push(idstack, id);
 
-	bpf_for_each_reg_in_vstate(vstate, state, reg, ({
-		if (reg->ref_obj_id == ref_obj_id)
-			mark_reg_invalid(env, reg);
-	}));
+	if (find_reference_state(vstate, id))
+		WARN_ON_ONCE(release_reference_nomark(vstate, id));
+
+	while ((id = idstack_pop(idstack))) {
+		bpf_for_each_reg_in_vstate(vstate, state, reg, ({
+			err = release_reg_check(env, reg, id, root_id, idstack);
+			if (err < 0)
+				return err;
+			if (err == 1)
+				mark_reg_invalid(env, reg);
+		}));
+
+		for (fi = 0; fi <= vstate->curframe; fi++) {
+			state = vstate->frame[fi];
+			bpf_for_each_spilled_reg(spi, state, reg, (1 << STACK_DYNPTR)) {
+				if (!reg || !reg->dynptr.first_slot)
+					continue;
+				err = release_reg_check(env, reg, id, root_id, idstack);
+				if (err < 0)
+					return err;
+				if (err == 1)
+					invalidate_dynptr(env, state, spi);
+			}
+		}
+	}
 
 	return 0;
 }
@@ -11643,11 +11661,8 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 			 */
 			err = 0;
 		}
-		if (err) {
-			verbose(env, "func %s#%d reference has not been acquired before\n",
-				func_id_name(func_id), func_id);
+		if (err)
 			return err;
-		}
 	}
 
 	switch (func_id) {
@@ -11925,10 +11940,8 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 		regs[BPF_REG_0].ref_obj_id = id;
 	}
 
-	if (func_id == BPF_FUNC_dynptr_data) {
-		regs[BPF_REG_0].dynptr_id = meta.initialized_dynptr.id;
-		regs[BPF_REG_0].ref_obj_id = meta.initialized_dynptr.ref_obj_id;
-	}
+	if (func_id == BPF_FUNC_dynptr_data)
+		regs[BPF_REG_0].parent_id = meta.initialized_dynptr.id;
 
 	err = do_refine_retval_range(env, regs, fn->ret_type, func_id, &meta);
 	if (err)
@@ -13295,6 +13308,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				return -EFAULT;
 			}
 			meta->ref_obj_id = reg->ref_obj_id;
+			meta->id = reg->id;
 			if (is_kfunc_release(meta))
 				meta->release_regno = regno;
 		}
@@ -13429,7 +13443,6 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 		case KF_ARG_PTR_TO_DYNPTR:
 		{
 			enum bpf_arg_type dynptr_arg_type = ARG_PTR_TO_DYNPTR;
-			int clone_ref_obj_id = 0;
 
 			if (is_kfunc_arg_const_ptr(btf, &args[i]))
 				dynptr_arg_type |= MEM_RDONLY;
@@ -13458,14 +13471,10 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
 				}
 
 				dynptr_arg_type |= (unsigned int)get_dynptr_type_flag(parent_type);
-				clone_ref_obj_id = meta->initialized_dynptr.ref_obj_id;
-				if (dynptr_type_refcounted(parent_type) && !clone_ref_obj_id) {
-					verifier_bug(env, "missing ref obj id for parent of clone");
-					return -EFAULT;
-				}
 			}
 
-			ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id,
+			ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type,
+						  meta->ref_obj_id ? meta->id : 0,
 						  &meta->initialized_dynptr);
 			if (ret < 0)
 				return ret;
@@ -13913,12 +13922,7 @@ static int check_special_kfunc(struct bpf_verifier_env *env, struct bpf_kfunc_ca
 			verifier_bug(env, "no dynptr id");
 			return -EFAULT;
 		}
-		regs[BPF_REG_0].dynptr_id = meta->initialized_dynptr.id;
-
-		/* we don't need to set BPF_REG_0's ref obj id
-		 * because packet slices are not refcounted (see
-		 * dynptr_type_refcounted)
-		 */
+		regs[BPF_REG_0].parent_id = meta->initialized_dynptr.id;
 	} else {
 		return 0;
 	}
@@ -14113,9 +14117,6 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 			err = unmark_stack_slots_dynptr(env, reg);
 		} else {
 			err = release_reference(env, reg->ref_obj_id);
-			if (err)
-				verbose(env, "kfunc %s#%d reference has not been acquired before\n",
-					func_name, meta.func_id);
 		}
 		if (err)
 			return err;
@@ -14134,7 +14135,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 			return err;
 		}
 
-		err = release_reference(env, release_ref_obj_id);
+		err = release_reference_nomark(env->cur_state, release_ref_obj_id);
 		if (err) {
 			verbose(env, "kfunc %s#%d reference has not been acquired before\n",
 				func_name, meta.func_id);
@@ -14225,7 +14226,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 
 			/* Ensures we don't access the memory after a release_reference() */
 			if (meta.ref_obj_id)
-				regs[BPF_REG_0].ref_obj_id = meta.ref_obj_id;
+				regs[BPF_REG_0].parent_id = meta.ref_obj_id;
 
 			if (is_kfunc_rcu_protected(&meta))
 				regs[BPF_REG_0].type |= MEM_RCU;
@@ -19575,7 +19576,8 @@ static bool regs_exact(const struct bpf_reg_state *rold,
 {
 	return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
 	       check_ids(rold->id, rcur->id, idmap) &&
-	       check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap);
+	       check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap) &&
+	       check_ids(rold->parent_id, rcur->parent_id, idmap);
 }
 
 enum exact_level {
@@ -19697,7 +19699,8 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
 		       range_within(rold, rcur) &&
 		       tnum_in(rold->var_off, rcur->var_off) &&
 		       check_ids(rold->id, rcur->id, idmap) &&
-		       check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap);
+		       check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap) &&
+		       check_ids(rold->parent_id, rcur->parent_id, idmap);
 	case PTR_TO_PACKET_META:
 	case PTR_TO_PACKET:
 		/* We must have at least as much range as the old ptr
@@ -19852,7 +19855,8 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
 			cur_reg = &cur->stack[spi].spilled_ptr;
 			if (old_reg->dynptr.type != cur_reg->dynptr.type ||
 			    old_reg->dynptr.first_slot != cur_reg->dynptr.first_slot ||
-			    !check_ids(old_reg->ref_obj_id, cur_reg->ref_obj_id, idmap))
+			    !check_ids(old_reg->ref_obj_id, cur_reg->ref_obj_id, idmap) ||
+			    !check_ids(old_reg->parent_id, cur_reg->parent_id, idmap))
 				return false;
 			break;
 		case STACK_ITER:
-- 
2.47.3


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [RFC PATCH bpf-next v2 07/11] bpf: Remove redundant dynptr arg check for helper
  2026-03-07  6:44 [RFC PATCH bpf-next v2 00/11] Dynptr cleanup and bugfixes Amery Hung
                   ` (5 preceding siblings ...)
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 06/11] bpf: Refactor object relationship tracking and fix dynptr UAF bug Amery Hung
@ 2026-03-07  6:44 ` Amery Hung
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 08/11] selftests/bpf: Test creating dynptr from dynptr data and slice Amery Hung
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 46+ messages in thread
From: Amery Hung @ 2026-03-07  6:44 UTC (permalink / raw)
  To: bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
	ameryhung, kernel-team

unmark_stack_slots_dynptr() already makes sure that a CONST_PTR_TO_DYNPTR
cannot be released. process_dynptr_func() also prevents passing an
uninitialized dynptr to helpers expecting an initialized dynptr. Now that
unmark_stack_slots_dynptr() also handles the error returned from
release_reference(), there is no reason to keep these redundant checks.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
 kernel/bpf/verifier.c                         | 21 +------------------
 .../testing/selftests/bpf/progs/dynptr_fail.c |  6 +++---
 .../selftests/bpf/progs/user_ringbuf_fail.c   |  4 ++--
 3 files changed, 6 insertions(+), 25 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 0436fc4d9107..80b9ef6f329f 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -9737,26 +9737,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
 
 skip_type_check:
 	if (arg_type_is_release(arg_type)) {
-		if (arg_type_is_dynptr(arg_type)) {
-			struct bpf_func_state *state = func(env, reg);
-			int spi;
-
-			/* Only dynptr created on stack can be released, thus
-			 * the get_spi and stack state checks for spilled_ptr
-			 * should only be done before process_dynptr_func for
-			 * PTR_TO_STACK.
-			 */
-			if (reg->type == PTR_TO_STACK) {
-				spi = dynptr_get_spi(env, reg);
-				if (spi < 0 || !state->stack[spi].spilled_ptr.id) {
-					verbose(env, "arg %d is an unacquired reference\n", regno);
-					return -EINVAL;
-				}
-			} else {
-				verbose(env, "cannot release unowned const bpf_dynptr\n");
-				return -EINVAL;
-			}
-		} else if (!reg->ref_obj_id && !register_is_null(reg)) {
+		if (!arg_type_is_dynptr(arg_type) && !reg->ref_obj_id && !register_is_null(reg)) {
 			verbose(env, "R%d must be referenced when passed to release function\n",
 				regno);
 			return -EINVAL;
diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c
index 5e1b1cf4ea8e..631e37500ec6 100644
--- a/tools/testing/selftests/bpf/progs/dynptr_fail.c
+++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c
@@ -135,7 +135,7 @@ int ringbuf_missing_release_callback(void *ctx)
 
 /* Can't call bpf_ringbuf_submit/discard_dynptr on a non-initialized dynptr */
 SEC("?raw_tp")
-__failure __msg("arg 1 is an unacquired reference")
+__failure __msg("Expected an initialized dynptr as arg #0")
 int ringbuf_release_uninit_dynptr(void *ctx)
 {
 	struct bpf_dynptr ptr;
@@ -649,7 +649,7 @@ int invalid_offset(void *ctx)
 
 /* Can't release a dynptr twice */
 SEC("?raw_tp")
-__failure __msg("arg 1 is an unacquired reference")
+__failure __msg("Expected an initialized dynptr as arg #0")
 int release_twice(void *ctx)
 {
 	struct bpf_dynptr ptr;
@@ -676,7 +676,7 @@ static int release_twice_callback_fn(__u32 index, void *data)
  * within a callback function, fails
  */
 SEC("?raw_tp")
-__failure __msg("arg 1 is an unacquired reference")
+__failure __msg("Expected an initialized dynptr as arg #0")
 int release_twice_callback(void *ctx)
 {
 	struct bpf_dynptr ptr;
diff --git a/tools/testing/selftests/bpf/progs/user_ringbuf_fail.c b/tools/testing/selftests/bpf/progs/user_ringbuf_fail.c
index 54de0389f878..e8f4ae86470f 100644
--- a/tools/testing/selftests/bpf/progs/user_ringbuf_fail.c
+++ b/tools/testing/selftests/bpf/progs/user_ringbuf_fail.c
@@ -146,7 +146,7 @@ try_discard_dynptr(struct bpf_dynptr *dynptr, void *context)
  * not be able to read past the end of the pointer.
  */
 SEC("?raw_tp")
-__failure __msg("cannot release unowned const bpf_dynptr")
+__failure __msg("cannot pass pointer to const bpf_dynptr, the helper mutates it")
 int user_ringbuf_callback_discard_dynptr(void *ctx)
 {
 	bpf_user_ringbuf_drain(&user_ringbuf, try_discard_dynptr, NULL, 0);
@@ -166,7 +166,7 @@ try_submit_dynptr(struct bpf_dynptr *dynptr, void *context)
  * not be able to read past the end of the pointer.
  */
 SEC("?raw_tp")
-__failure __msg("cannot release unowned const bpf_dynptr")
+__failure __msg("cannot pass pointer to const bpf_dynptr, the helper mutates it")
 int user_ringbuf_callback_submit_dynptr(void *ctx)
 {
 	bpf_user_ringbuf_drain(&user_ringbuf, try_submit_dynptr, NULL, 0);
-- 
2.47.3


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [RFC PATCH bpf-next v2 08/11] selftests/bpf: Test creating dynptr from dynptr data and slice
  2026-03-07  6:44 [RFC PATCH bpf-next v2 00/11] Dynptr cleanup and bugfixes Amery Hung
                   ` (6 preceding siblings ...)
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 07/11] bpf: Remove redundant dynptr arg check for helper Amery Hung
@ 2026-03-07  6:44 ` Amery Hung
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 09/11] selftests/bpf: Test using dynptr after freeing the underlying object Amery Hung
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 46+ messages in thread
From: Amery Hung @ 2026-03-07  6:44 UTC (permalink / raw)
  To: bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
	ameryhung, kernel-team

The verifier currently does not allow creating a dynptr from dynptr data
or from a dynptr slice. Add selftests covering both cases explicitly.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
 .../testing/selftests/bpf/progs/dynptr_fail.c | 42 +++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c
index 631e37500ec6..6b162512b0c9 100644
--- a/tools/testing/selftests/bpf/progs/dynptr_fail.c
+++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c
@@ -704,6 +704,48 @@ int dynptr_from_mem_invalid_api(void *ctx)
 	return 0;
 }
 
+/* Cannot create dynptr from dynptr data */
+SEC("?raw_tp")
+__failure __msg("Unsupported reg type mem for bpf_dynptr_from_mem data")
+int dynptr_from_dynptr_data(void *ctx)
+{
+	struct bpf_dynptr ptr, ptr2;
+	__u8 *data;
+
+	if (get_map_val_dynptr(&ptr))
+		return 0;
+
+	data = bpf_dynptr_data(&ptr, 0, sizeof(__u32));
+	if (!data)
+		return 0;
+
+	/* this should fail */
+	bpf_dynptr_from_mem(data, sizeof(__u32), 0, &ptr2);
+
+	return 0;
+}
+
+/* Cannot create dynptr from dynptr slice */
+SEC("?tc")
+__failure __msg("Unsupported reg type mem for bpf_dynptr_from_mem data")
+int dynptr_from_dynptr_slice(struct __sk_buff *skb)
+{
+	struct bpf_dynptr ptr, ptr2;
+	struct ethhdr *hdr;
+	char buffer[sizeof(*hdr)] = {};
+
+	bpf_dynptr_from_skb(skb, 0, &ptr);
+
+	hdr = bpf_dynptr_slice_rdwr(&ptr, 0, buffer, sizeof(buffer));
+	if (!hdr)
+		return SK_DROP;
+
+	/* this should fail */
+	bpf_dynptr_from_mem(hdr, sizeof(*hdr), 0, &ptr2);
+
+	return SK_PASS;
+}
+
 SEC("?tc")
 __failure __msg("cannot overwrite referenced dynptr") __log_level(2)
 int dynptr_pruning_overwrite(struct __sk_buff *ctx)
-- 
2.47.3


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [RFC PATCH bpf-next v2 09/11] selftests/bpf: Test using dynptr after freeing the underlying object
  2026-03-07  6:44 [RFC PATCH bpf-next v2 00/11] Dynptr cleanup and bugfixes Amery Hung
                   ` (7 preceding siblings ...)
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 08/11] selftests/bpf: Test creating dynptr from dynptr data and slice Amery Hung
@ 2026-03-07  6:44 ` Amery Hung
  2026-03-16 19:25   ` Eduard Zingerman
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 10/11] selftests/bpf: Test using slice after invalidating dynptr clone Amery Hung
                   ` (2 subsequent siblings)
  11 siblings, 1 reply; 46+ messages in thread
From: Amery Hung @ 2026-03-07  6:44 UTC (permalink / raw)
  To: bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
	ameryhung, kernel-team

Make sure the verifier invalidates the dynptr and dynptr slice derived
from an skb after the skb is freed.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
 .../selftests/bpf/prog_tests/bpf_qdisc.c      | 36 ++++++++++
 .../progs/bpf_qdisc_fail__invalid_dynptr.c    | 62 +++++++++++++++++
 ...f_qdisc_fail__invalid_dynptr_cross_frame.c | 68 +++++++++++++++++++
 .../bpf_qdisc_fail__invalid_dynptr_slice.c    | 64 +++++++++++++++++
 4 files changed, 230 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr.c
 create mode 100644 tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_cross_frame.c
 create mode 100644 tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_slice.c

diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c b/tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c
index 730357cd0c9a..ec5b346138c5 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c
@@ -8,6 +8,9 @@
 #include "bpf_qdisc_fifo.skel.h"
 #include "bpf_qdisc_fq.skel.h"
 #include "bpf_qdisc_fail__incompl_ops.skel.h"
+#include "bpf_qdisc_fail__invalid_dynptr.skel.h"
+#include "bpf_qdisc_fail__invalid_dynptr_slice.skel.h"
+#include "bpf_qdisc_fail__invalid_dynptr_cross_frame.skel.h"
 
 #define LO_IFINDEX 1
 
@@ -156,6 +159,33 @@ static void test_incompl_ops(void)
 	bpf_qdisc_fail__incompl_ops__destroy(skel);
 }
 
+static void test_invalid_dynptr(void)
+{
+	struct bpf_qdisc_fail__invalid_dynptr *skel;
+
+	skel = bpf_qdisc_fail__invalid_dynptr__open_and_load();
+	if (!ASSERT_ERR_PTR(skel, "bpf_qdisc_fail__invalid_dynptr__open_and_load"))
+		bpf_qdisc_fail__invalid_dynptr__destroy(skel);
+}
+
+static void test_invalid_dynptr_slice(void)
+{
+	struct bpf_qdisc_fail__invalid_dynptr_slice *skel;
+
+	skel = bpf_qdisc_fail__invalid_dynptr_slice__open_and_load();
+	if (!ASSERT_ERR_PTR(skel, "bpf_qdisc_fail__invalid_dynptr_slice__open_and_load"))
+		bpf_qdisc_fail__invalid_dynptr_slice__destroy(skel);
+}
+
+static void test_invalid_dynptr_cross_frame(void)
+{
+	struct bpf_qdisc_fail__invalid_dynptr_cross_frame *skel;
+
+	skel = bpf_qdisc_fail__invalid_dynptr_cross_frame__open_and_load();
+	if (!ASSERT_ERR_PTR(skel, "bpf_qdisc_fail__invalid_dynptr_cross_frame__open_and_load"))
+		bpf_qdisc_fail__invalid_dynptr_cross_frame__destroy(skel);
+}
+
 static int get_default_qdisc(char *qdisc_name)
 {
 	FILE *f;
@@ -223,6 +253,12 @@ void test_ns_bpf_qdisc(void)
 		test_qdisc_attach_to_non_root();
 	if (test__start_subtest("incompl_ops"))
 		test_incompl_ops();
+	if (test__start_subtest("invalid_dynptr"))
+		test_invalid_dynptr();
+	if (test__start_subtest("invalid_dynptr_slice"))
+		test_invalid_dynptr_slice();
+	if (test__start_subtest("invalid_dynptr_cross_frame"))
+		test_invalid_dynptr_cross_frame();
 }
 
 void serial_test_bpf_qdisc_default(void)
diff --git a/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr.c b/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr.c
new file mode 100644
index 000000000000..2e76470bc261
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr.c
@@ -0,0 +1,62 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <vmlinux.h>
+#include "bpf_experimental.h"
+#include "bpf_qdisc_common.h"
+
+char _license[] SEC("license") = "GPL";
+
+int proto;
+
+SEC("struct_ops")
+int BPF_PROG(bpf_qdisc_test_enqueue, struct sk_buff *skb, struct Qdisc *sch,
+	     struct bpf_sk_buff_ptr *to_free)
+{
+	struct bpf_dynptr ptr;
+	struct ethhdr *hdr;
+
+	bpf_dynptr_from_skb((struct __sk_buff *)skb, 0, &ptr);
+
+	bpf_qdisc_skb_drop(skb, to_free);
+
+	hdr = bpf_dynptr_slice(&ptr, 0, NULL, sizeof(*hdr));
+	if (!hdr)
+		return NET_XMIT_DROP;
+
+	proto = hdr->h_proto;
+
+	return NET_XMIT_DROP;
+}
+
+SEC("struct_ops")
+struct sk_buff *BPF_PROG(bpf_qdisc_test_dequeue, struct Qdisc *sch)
+{
+	return NULL;
+}
+
+SEC("struct_ops")
+int BPF_PROG(bpf_qdisc_test_init, struct Qdisc *sch, struct nlattr *opt,
+	     struct netlink_ext_ack *extack)
+{
+	return 0;
+}
+
+SEC("struct_ops")
+void BPF_PROG(bpf_qdisc_test_reset, struct Qdisc *sch)
+{
+}
+
+SEC("struct_ops")
+void BPF_PROG(bpf_qdisc_test_destroy, struct Qdisc *sch)
+{
+}
+
+SEC(".struct_ops")
+struct Qdisc_ops test = {
+	.enqueue   = (void *)bpf_qdisc_test_enqueue,
+	.dequeue   = (void *)bpf_qdisc_test_dequeue,
+	.init      = (void *)bpf_qdisc_test_init,
+	.reset     = (void *)bpf_qdisc_test_reset,
+	.destroy   = (void *)bpf_qdisc_test_destroy,
+	.id        = "bpf_qdisc_test",
+};
diff --git a/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_cross_frame.c b/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_cross_frame.c
new file mode 100644
index 000000000000..565dea13bde8
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_cross_frame.c
@@ -0,0 +1,68 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <vmlinux.h>
+#include "bpf_experimental.h"
+#include "bpf_qdisc_common.h"
+
+char _license[] SEC("license") = "GPL";
+
+int proto;
+
+static __noinline int free_skb(struct sk_buff *skb)
+{
+	bpf_kfree_skb(skb);
+	return 0;
+}
+
+SEC("struct_ops")
+int BPF_PROG(bpf_qdisc_test_enqueue, struct sk_buff *skb, struct Qdisc *sch,
+	     struct bpf_sk_buff_ptr *to_free)
+{
+	struct bpf_dynptr ptr;
+	struct ethhdr *hdr;
+
+	bpf_dynptr_from_skb((struct __sk_buff *)skb, 0, &ptr);
+
+	hdr = bpf_dynptr_slice(&ptr, 0, NULL, sizeof(*hdr));
+	if (!hdr)
+		return NET_XMIT_DROP;
+
+	free_skb(skb);
+
+	proto = hdr->h_proto;
+
+	return NET_XMIT_DROP;
+}
+
+SEC("struct_ops")
+struct sk_buff *BPF_PROG(bpf_qdisc_test_dequeue, struct Qdisc *sch)
+{
+	return NULL;
+}
+
+SEC("struct_ops")
+int BPF_PROG(bpf_qdisc_test_init, struct Qdisc *sch, struct nlattr *opt,
+	     struct netlink_ext_ack *extack)
+{
+	return 0;
+}
+
+SEC("struct_ops")
+void BPF_PROG(bpf_qdisc_test_reset, struct Qdisc *sch)
+{
+}
+
+SEC("struct_ops")
+void BPF_PROG(bpf_qdisc_test_destroy, struct Qdisc *sch)
+{
+}
+
+SEC(".struct_ops")
+struct Qdisc_ops test = {
+	.enqueue   = (void *)bpf_qdisc_test_enqueue,
+	.dequeue   = (void *)bpf_qdisc_test_dequeue,
+	.init      = (void *)bpf_qdisc_test_init,
+	.reset     = (void *)bpf_qdisc_test_reset,
+	.destroy   = (void *)bpf_qdisc_test_destroy,
+	.id        = "bpf_qdisc_test",
+};
diff --git a/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_slice.c b/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_slice.c
new file mode 100644
index 000000000000..95e8c070a37d
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_slice.c
@@ -0,0 +1,64 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <vmlinux.h>
+#include "bpf_experimental.h"
+#include "bpf_qdisc_common.h"
+
+char _license[] SEC("license") = "GPL";
+
+int proto;
+
+SEC("struct_ops")
+int BPF_PROG(bpf_qdisc_test_enqueue, struct sk_buff *skb, struct Qdisc *sch,
+	     struct bpf_sk_buff_ptr *to_free)
+{
+	struct bpf_dynptr ptr;
+	struct ethhdr *hdr;
+
+	bpf_dynptr_from_skb((struct __sk_buff *)skb, 0, &ptr);
+
+	hdr = bpf_dynptr_slice(&ptr, 0, NULL, sizeof(*hdr));
+	if (!hdr) {
+		bpf_qdisc_skb_drop(skb, to_free);
+		return NET_XMIT_DROP;
+	}
+
+	bpf_qdisc_skb_drop(skb, to_free);
+
+	proto = hdr->h_proto;
+
+	return NET_XMIT_DROP;
+}
+
+SEC("struct_ops")
+struct sk_buff *BPF_PROG(bpf_qdisc_test_dequeue, struct Qdisc *sch)
+{
+	return NULL;
+}
+
+SEC("struct_ops")
+int BPF_PROG(bpf_qdisc_test_init, struct Qdisc *sch, struct nlattr *opt,
+	     struct netlink_ext_ack *extack)
+{
+	return 0;
+}
+
+SEC("struct_ops")
+void BPF_PROG(bpf_qdisc_test_reset, struct Qdisc *sch)
+{
+}
+
+SEC("struct_ops")
+void BPF_PROG(bpf_qdisc_test_destroy, struct Qdisc *sch)
+{
+}
+
+SEC(".struct_ops")
+struct Qdisc_ops test = {
+	.enqueue   = (void *)bpf_qdisc_test_enqueue,
+	.dequeue   = (void *)bpf_qdisc_test_dequeue,
+	.init      = (void *)bpf_qdisc_test_init,
+	.reset     = (void *)bpf_qdisc_test_reset,
+	.destroy   = (void *)bpf_qdisc_test_destroy,
+	.id        = "bpf_qdisc_test",
+};
-- 
2.47.3


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [RFC PATCH bpf-next v2 10/11] selftests/bpf: Test using slice after invalidating dynptr clone
  2026-03-07  6:44 [RFC PATCH bpf-next v2 00/11] Dynptr cleanup and bugfixes Amery Hung
                   ` (8 preceding siblings ...)
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 09/11] selftests/bpf: Test using dynptr after freeing the underlying object Amery Hung
@ 2026-03-07  6:44 ` Amery Hung
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 11/11] selftests/bpf: Test using file dynptr after the reference on file is dropped Amery Hung
  2026-03-11 19:38 ` [RFC PATCH bpf-next v2 00/11] Dynptr cleanup and bugfixes Andrii Nakryiko
  11 siblings, 0 replies; 46+ messages in thread
From: Amery Hung @ 2026-03-07  6:44 UTC (permalink / raw)
  To: bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
	ameryhung, kernel-team

The parent object of a cloned dynptr is the skb, not the original
dynptr. Invalidating the original dynptr should therefore not prevent
the program from using a slice derived from the clone.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
 .../selftests/bpf/prog_tests/bpf_qdisc.c      | 14 ++++
 .../bpf/progs/bpf_qdisc_dynptr_clone.c        | 69 +++++++++++++++++++
 2 files changed, 83 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/progs/bpf_qdisc_dynptr_clone.c

diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c b/tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c
index ec5b346138c5..ba14738c509b 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c
@@ -11,6 +11,7 @@
 #include "bpf_qdisc_fail__invalid_dynptr.skel.h"
 #include "bpf_qdisc_fail__invalid_dynptr_slice.skel.h"
 #include "bpf_qdisc_fail__invalid_dynptr_cross_frame.skel.h"
+#include "bpf_qdisc_dynptr_clone.skel.h"
 
 #define LO_IFINDEX 1
 
@@ -186,6 +187,17 @@ static void test_invalid_dynptr_cross_frame(void)
 		bpf_qdisc_fail__invalid_dynptr_cross_frame__destroy(skel);
 }
 
+static void test_dynptr_clone(void)
+{
+	struct bpf_qdisc_dynptr_clone *skel;
+
+	skel = bpf_qdisc_dynptr_clone__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "bpf_qdisc_dynptr_clone__open_and_load"))
+		return;
+
+	bpf_qdisc_dynptr_clone__destroy(skel);
+}
+
 static int get_default_qdisc(char *qdisc_name)
 {
 	FILE *f;
@@ -259,6 +271,8 @@ void test_ns_bpf_qdisc(void)
 		test_invalid_dynptr_slice();
 	if (test__start_subtest("invalid_dynptr_cross_frame"))
 		test_invalid_dynptr_cross_frame();
+	if (test__start_subtest("dynptr_clone"))
+		test_dynptr_clone();
 }
 
 void serial_test_bpf_qdisc_default(void)
diff --git a/tools/testing/selftests/bpf/progs/bpf_qdisc_dynptr_clone.c b/tools/testing/selftests/bpf/progs/bpf_qdisc_dynptr_clone.c
new file mode 100644
index 000000000000..f23581e19da1
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/bpf_qdisc_dynptr_clone.c
@@ -0,0 +1,69 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <vmlinux.h>
+#include "bpf_experimental.h"
+#include "bpf_qdisc_common.h"
+
+char _license[] SEC("license") = "GPL";
+
+int proto;
+
+SEC("struct_ops")
+int BPF_PROG(bpf_qdisc_test_enqueue, struct sk_buff *skb, struct Qdisc *sch,
+	     struct bpf_sk_buff_ptr *to_free)
+{
+	struct bpf_dynptr ptr, ptr_clone;
+	struct ethhdr *hdr;
+
+	bpf_dynptr_from_skb((struct __sk_buff *)skb, 0, &ptr);
+
+	bpf_dynptr_clone(&ptr, &ptr_clone);
+
+	hdr = bpf_dynptr_slice(&ptr_clone, 0, NULL, sizeof(*hdr));
+	if (!hdr) {
+		bpf_qdisc_skb_drop(skb, to_free);
+		return NET_XMIT_DROP;
+	}
+
+	*(int *)&ptr = 0;
+
+	proto = hdr->h_proto;
+
+	bpf_qdisc_skb_drop(skb, to_free);
+
+	return NET_XMIT_DROP;
+}
+
+SEC("struct_ops")
+struct sk_buff *BPF_PROG(bpf_qdisc_test_dequeue, struct Qdisc *sch)
+{
+	return NULL;
+}
+
+SEC("struct_ops")
+int BPF_PROG(bpf_qdisc_test_init, struct Qdisc *sch, struct nlattr *opt,
+	     struct netlink_ext_ack *extack)
+{
+	return 0;
+}
+
+SEC("struct_ops")
+void BPF_PROG(bpf_qdisc_test_reset, struct Qdisc *sch)
+{
+}
+
+SEC("struct_ops")
+void BPF_PROG(bpf_qdisc_test_destroy, struct Qdisc *sch)
+{
+}
+
+SEC(".struct_ops")
+struct Qdisc_ops test = {
+	.enqueue   = (void *)bpf_qdisc_test_enqueue,
+	.dequeue   = (void *)bpf_qdisc_test_dequeue,
+	.init      = (void *)bpf_qdisc_test_init,
+	.reset     = (void *)bpf_qdisc_test_reset,
+	.destroy   = (void *)bpf_qdisc_test_destroy,
+	.id        = "bpf_qdisc_test",
+};
+
-- 
2.47.3


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [RFC PATCH bpf-next v2 11/11] selftests/bpf: Test using file dynptr after the reference on file is dropped
  2026-03-07  6:44 [RFC PATCH bpf-next v2 00/11] Dynptr cleanup and bugfixes Amery Hung
                   ` (9 preceding siblings ...)
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 10/11] selftests/bpf: Test using slice after invalidating dynptr clone Amery Hung
@ 2026-03-07  6:44 ` Amery Hung
  2026-03-11 19:38 ` [RFC PATCH bpf-next v2 00/11] Dynptr cleanup and bugfixes Andrii Nakryiko
  11 siblings, 0 replies; 46+ messages in thread
From: Amery Hung @ 2026-03-07  6:44 UTC (permalink / raw)
  To: bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
	ameryhung, kernel-team

A file dynptr and its slices should be invalidated when the parent
file's reference is dropped in the program. Without the verifier
tracking the dynptr's parent referenced object, the dynptr could still
be used even after the underlying file has been torn down or freed.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
 .../selftests/bpf/progs/file_reader_fail.c    | 60 +++++++++++++++++++
 1 file changed, 60 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/file_reader_fail.c b/tools/testing/selftests/bpf/progs/file_reader_fail.c
index 32fe28ed2439..a7102737abfe 100644
--- a/tools/testing/selftests/bpf/progs/file_reader_fail.c
+++ b/tools/testing/selftests/bpf/progs/file_reader_fail.c
@@ -50,3 +50,63 @@ int xdp_no_dynptr_type(struct xdp_md *xdp)
 	bpf_dynptr_file_discard(&dynptr);
 	return 0;
 }
+
+SEC("lsm/file_open")
+__failure
+__msg("Expected an initialized dynptr as arg #2")
+int use_file_dynptr_after_put_file(void *ctx)
+{
+	struct task_struct *task = bpf_get_current_task_btf();
+	struct file *file = bpf_get_task_exe_file(task);
+	struct bpf_dynptr dynptr;
+	char buf[64];
+
+	if (!file)
+		return 0;
+
+	if (bpf_dynptr_from_file(file, 0, &dynptr))
+		goto out;
+
+	bpf_put_file(file);
+
+	/* this should fail - dynptr is invalid after file ref is dropped */
+	bpf_dynptr_read(buf, sizeof(buf), &dynptr, 0, 0);
+	return 0;
+
+out:
+	bpf_dynptr_file_discard(&dynptr);
+	bpf_put_file(file);
+	return 0;
+}
+
+SEC("lsm/file_open")
+__failure
+__msg("invalid mem access 'scalar'")
+int use_file_dynptr_slice_after_put_file(void *ctx)
+{
+	struct task_struct *task = bpf_get_current_task_btf();
+	struct file *file = bpf_get_task_exe_file(task);
+	struct bpf_dynptr dynptr;
+	char *data;
+
+	if (!file)
+		return 0;
+
+	if (bpf_dynptr_from_file(file, 0, &dynptr))
+		goto out;
+
+	data = bpf_dynptr_data(&dynptr, 0, 1);
+	if (!data)
+		goto out;
+
+	bpf_put_file(file);
+
+	/* this should fail - data slice is invalid after file ref is dropped */
+	*data = 'x';
+	return 0;
+
+out:
+	bpf_dynptr_file_discard(&dynptr);
+	bpf_put_file(file);
+	return 0;
+}
-- 
2.47.3


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH bpf-next v2 01/11] bpf: Set kfunc dynptr arg type flag based on prototype
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 01/11] bpf: Set kfunc dynptr arg type flag based on prototype Amery Hung
@ 2026-03-11 14:47   ` Mykyta Yatsenko
  2026-03-11 16:34     ` Amery Hung
  2026-03-11 19:43   ` Andrii Nakryiko
  2026-03-16 20:57   ` Eduard Zingerman
  2 siblings, 1 reply; 46+ messages in thread
From: Mykyta Yatsenko @ 2026-03-11 14:47 UTC (permalink / raw)
  To: Amery Hung, bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
	ameryhung, kernel-team

Amery Hung <ameryhung@gmail.com> writes:

> The verifier should decide whether a dynptr argument is read-only
> based on whether the type is "const struct bpf_dynptr *", not the type
> of the register passed to the kfunc. This currently does not cause
> issues because existing kfuncs that mutate struct bpf_dynptr are
> constructors (e.g., bpf_dynptr_from_xxx and bpf_dynptr_clone). These
> kfuncs have an additional check in process_dynptr_func() to make sure
> the stack slot does not contain an initialized dynptr. Nonetheless,
> this should still be fixed to avoid future issues when there is a
> non-constructor dynptr kfunc that can mutate a dynptr. This is also a
> small step toward unifying kfunc and helper handling in the verifier,
> where the first step is to generate a kfunc prototype similar to
> bpf_func_proto before the main verification loop.
>
> We also need to correctly mark some kfunc arguments as "const struct
> bpf_dynptr *" to align with other kfuncs that take a non-mutable dynptr
> argument and to not break their usage. Adding the const qualifier does
> not break backward compatibility.
>
> Signed-off-by: Amery Hung <ameryhung@gmail.com>
> ---
>  fs/verity/measure.c                            |  2 +-
>  include/linux/bpf.h                            |  8 ++++----
>  kernel/bpf/helpers.c                           | 10 +++++-----
>  kernel/bpf/verifier.c                          | 18 +++++++++++++++++-
>  kernel/trace/bpf_trace.c                       | 18 +++++++++---------
>  tools/testing/selftests/bpf/bpf_kfuncs.h       |  6 +++---
>  .../selftests/bpf/progs/dynptr_success.c       |  6 +++---
>  .../bpf/progs/test_kfunc_dynptr_param.c        |  7 +------
>  8 files changed, 43 insertions(+), 32 deletions(-)
>
> diff --git a/fs/verity/measure.c b/fs/verity/measure.c
> index 6a35623ebdf0..3840436e4510 100644
> --- a/fs/verity/measure.c
> +++ b/fs/verity/measure.c
> @@ -118,7 +118,7 @@ __bpf_kfunc_start_defs();
>   *
>   * Return: 0 on success, a negative value on error.
>   */
> -__bpf_kfunc int bpf_get_fsverity_digest(struct file *file, struct bpf_dynptr *digest_p)
> +__bpf_kfunc int bpf_get_fsverity_digest(struct file *file, const struct bpf_dynptr *digest_p)
>  {
>  	struct bpf_dynptr_kern *digest_ptr = (struct bpf_dynptr_kern *)digest_p;
>  	const struct inode *inode = file_inode(file);
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index b78b53198a2e..946a37b951f7 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -3621,8 +3621,8 @@ static inline int bpf_fd_reuseport_array_update_elem(struct bpf_map *map,
>  struct bpf_key *bpf_lookup_user_key(s32 serial, u64 flags);
>  struct bpf_key *bpf_lookup_system_key(u64 id);
>  void bpf_key_put(struct bpf_key *bkey);
> -int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_p,
> -			       struct bpf_dynptr *sig_p,
> +int bpf_verify_pkcs7_signature(const struct bpf_dynptr *data_p,
> +			       const struct bpf_dynptr *sig_p,
>  			       struct bpf_key *trusted_keyring);
>  
>  #else
> @@ -3640,8 +3640,8 @@ static inline void bpf_key_put(struct bpf_key *bkey)
>  {
>  }
>  
> -static inline int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_p,
> -					     struct bpf_dynptr *sig_p,
> +static inline int bpf_verify_pkcs7_signature(const struct bpf_dynptr *data_p,
> +					     const struct bpf_dynptr *sig_p,
>  					     struct bpf_key *trusted_keyring)
>  {
>  	return -EOPNOTSUPP;
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 6eb6c82ed2ee..3d44896587ac 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -3000,8 +3000,8 @@ __bpf_kfunc int bpf_dynptr_clone(const struct bpf_dynptr *p,
>   * Copies data from source dynptr to destination dynptr.
>   * Returns 0 on success; negative error, otherwise.
>   */
> -__bpf_kfunc int bpf_dynptr_copy(struct bpf_dynptr *dst_ptr, u64 dst_off,
> -				struct bpf_dynptr *src_ptr, u64 src_off, u64 size)
> +__bpf_kfunc int bpf_dynptr_copy(const struct bpf_dynptr *dst_ptr, u64 dst_off,
> +				const struct bpf_dynptr *src_ptr, u64 src_off, u64 size)
>  {
>  	struct bpf_dynptr_kern *dst = (struct bpf_dynptr_kern *)dst_ptr;
>  	struct bpf_dynptr_kern *src = (struct bpf_dynptr_kern *)src_ptr;
> @@ -3055,7 +3055,7 @@ __bpf_kfunc int bpf_dynptr_copy(struct bpf_dynptr *dst_ptr, u64 dst_off,
>   * at @offset with the constant byte @val.
>   * Returns 0 on success; negative error, otherwise.
>   */
> -__bpf_kfunc int bpf_dynptr_memset(struct bpf_dynptr *p, u64 offset, u64 size, u8 val)
> +__bpf_kfunc int bpf_dynptr_memset(const struct bpf_dynptr *p, u64 offset, u64 size, u8 val)
>  {
>  	struct bpf_dynptr_kern *ptr = (struct bpf_dynptr_kern *)p;
>  	u64 chunk_sz, write_off;
> @@ -4069,8 +4069,8 @@ __bpf_kfunc void bpf_key_put(struct bpf_key *bkey)
>   *
>   * Return: 0 on success, a negative value on error.
>   */
> -__bpf_kfunc int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_p,
> -			       struct bpf_dynptr *sig_p,
> +__bpf_kfunc int bpf_verify_pkcs7_signature(const struct bpf_dynptr *data_p,
> +			       const struct bpf_dynptr *sig_p,
>  			       struct bpf_key *trusted_keyring)
>  {
>  #ifdef CONFIG_SYSTEM_DATA_VERIFICATION
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 1153a828ce8d..0f77c4c5b510 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -12276,6 +12276,22 @@ static bool is_kfunc_arg_dynptr(const struct btf *btf, const struct btf_param *a
>  	return __is_kfunc_ptr_arg_type(btf, arg, KF_ARG_DYNPTR_ID);
>  }
>  
> +static bool is_kfunc_arg_const_ptr(const struct btf *btf, const struct btf_param *arg)
> +{
> +	const struct btf_type *t, *resolved_t;
> +
> +	t = btf_type_skip_modifiers(btf, arg->type, NULL);
> +	if (!t || !btf_type_is_ptr(t))
> +		return false;
> +
> +	resolved_t = btf_type_skip_modifiers(btf, t->type, NULL);
nit: t is ptr type, maybe we can do t = btf_type_by_id(btf, t->type)
before the loop starts, as we know the result of the first iteration.
> +	for (; t != resolved_t; t = btf_type_by_id(btf, t->type))
> +		if (BTF_INFO_KIND(t->info) == BTF_KIND_CONST)
nit: btf_kind() is a bit shorter than BTF_INFO_KIND()
> +			return true;
> +
> +	return false;
> +}
The logic in this function looks correct to me. The refactoring makes
sense as well (although I'm not 100% sure how it is relevant to this
patch series).
Acked-by: Mykyta Yatsenko <yatsenko@meta.com>
> +
>  static bool is_kfunc_arg_list_head(const struct btf *btf, const struct btf_param *arg)
>  {
>  	return __is_kfunc_ptr_arg_type(btf, arg, KF_ARG_LIST_HEAD_ID);
> @@ -13509,7 +13525,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
>  			enum bpf_arg_type dynptr_arg_type = ARG_PTR_TO_DYNPTR;
>  			int clone_ref_obj_id = 0;
>  
> -			if (reg->type == CONST_PTR_TO_DYNPTR)
> +			if (is_kfunc_arg_const_ptr(btf, &args[i]))
>  				dynptr_arg_type |= MEM_RDONLY;
>  
>  			if (is_kfunc_arg_uninit(btf, &args[i]))
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 9bc0dfd235af..127c317376be 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -3391,7 +3391,7 @@ typedef int (*copy_fn_t)(void *dst, const void *src, u32 size, struct task_struc
>   * direct calls into all the specific callback implementations
>   * (copy_user_data_sleepable, copy_user_data_nofault, and so on)
>   */
> -static __always_inline int __bpf_dynptr_copy_str(struct bpf_dynptr *dptr, u64 doff, u64 size,
> +static __always_inline int __bpf_dynptr_copy_str(const struct bpf_dynptr *dptr, u64 doff, u64 size,
>  						 const void *unsafe_src,
>  						 copy_fn_t str_copy_fn,
>  						 struct task_struct *tsk)
> @@ -3533,49 +3533,49 @@ __bpf_kfunc int bpf_send_signal_task(struct task_struct *task, int sig, enum pid
>  	return bpf_send_signal_common(sig, type, task, value);
>  }
>  
> -__bpf_kfunc int bpf_probe_read_user_dynptr(struct bpf_dynptr *dptr, u64 off,
> +__bpf_kfunc int bpf_probe_read_user_dynptr(const struct bpf_dynptr *dptr, u64 off,
>  					   u64 size, const void __user *unsafe_ptr__ign)
>  {
>  	return __bpf_dynptr_copy(dptr, off, size, (const void __force *)unsafe_ptr__ign,
>  				 copy_user_data_nofault, NULL);
>  }
>  
> -__bpf_kfunc int bpf_probe_read_kernel_dynptr(struct bpf_dynptr *dptr, u64 off,
> +__bpf_kfunc int bpf_probe_read_kernel_dynptr(const struct bpf_dynptr *dptr, u64 off,
>  					     u64 size, const void *unsafe_ptr__ign)
>  {
>  	return __bpf_dynptr_copy(dptr, off, size, unsafe_ptr__ign,
>  				 copy_kernel_data_nofault, NULL);
>  }
>  
> -__bpf_kfunc int bpf_probe_read_user_str_dynptr(struct bpf_dynptr *dptr, u64 off,
> +__bpf_kfunc int bpf_probe_read_user_str_dynptr(const struct bpf_dynptr *dptr, u64 off,
>  					       u64 size, const void __user *unsafe_ptr__ign)
>  {
>  	return __bpf_dynptr_copy_str(dptr, off, size, (const void __force *)unsafe_ptr__ign,
>  				     copy_user_str_nofault, NULL);
>  }
>  
> -__bpf_kfunc int bpf_probe_read_kernel_str_dynptr(struct bpf_dynptr *dptr, u64 off,
> +__bpf_kfunc int bpf_probe_read_kernel_str_dynptr(const struct bpf_dynptr *dptr, u64 off,
>  						 u64 size, const void *unsafe_ptr__ign)
>  {
>  	return __bpf_dynptr_copy_str(dptr, off, size, unsafe_ptr__ign,
>  				     copy_kernel_str_nofault, NULL);
>  }
>  
> -__bpf_kfunc int bpf_copy_from_user_dynptr(struct bpf_dynptr *dptr, u64 off,
> +__bpf_kfunc int bpf_copy_from_user_dynptr(const struct bpf_dynptr *dptr, u64 off,
>  					  u64 size, const void __user *unsafe_ptr__ign)
>  {
>  	return __bpf_dynptr_copy(dptr, off, size, (const void __force *)unsafe_ptr__ign,
>  				 copy_user_data_sleepable, NULL);
>  }
>  
> -__bpf_kfunc int bpf_copy_from_user_str_dynptr(struct bpf_dynptr *dptr, u64 off,
> +__bpf_kfunc int bpf_copy_from_user_str_dynptr(const struct bpf_dynptr *dptr, u64 off,
>  					      u64 size, const void __user *unsafe_ptr__ign)
>  {
>  	return __bpf_dynptr_copy_str(dptr, off, size, (const void __force *)unsafe_ptr__ign,
>  				     copy_user_str_sleepable, NULL);
>  }
>  
> -__bpf_kfunc int bpf_copy_from_user_task_dynptr(struct bpf_dynptr *dptr, u64 off,
> +__bpf_kfunc int bpf_copy_from_user_task_dynptr(const struct bpf_dynptr *dptr, u64 off,
>  					       u64 size, const void __user *unsafe_ptr__ign,
>  					       struct task_struct *tsk)
>  {
> @@ -3583,7 +3583,7 @@ __bpf_kfunc int bpf_copy_from_user_task_dynptr(struct bpf_dynptr *dptr, u64 off,
>  				 copy_user_data_sleepable, tsk);
>  }
>  
> -__bpf_kfunc int bpf_copy_from_user_task_str_dynptr(struct bpf_dynptr *dptr, u64 off,
> +__bpf_kfunc int bpf_copy_from_user_task_str_dynptr(const struct bpf_dynptr *dptr, u64 off,
>  						   u64 size, const void __user *unsafe_ptr__ign,
>  						   struct task_struct *tsk)
>  {
> diff --git a/tools/testing/selftests/bpf/bpf_kfuncs.h b/tools/testing/selftests/bpf/bpf_kfuncs.h
> index 7dad01439391..ffb9bc1cace0 100644
> --- a/tools/testing/selftests/bpf/bpf_kfuncs.h
> +++ b/tools/testing/selftests/bpf/bpf_kfuncs.h
> @@ -70,13 +70,13 @@ extern void *bpf_rdonly_cast(const void *obj, __u32 btf_id) __ksym __weak;
>  
>  extern int bpf_get_file_xattr(struct file *file, const char *name,
>  			      struct bpf_dynptr *value_ptr) __ksym;
> -extern int bpf_get_fsverity_digest(struct file *file, struct bpf_dynptr *digest_ptr) __ksym;
> +extern int bpf_get_fsverity_digest(struct file *file, const struct bpf_dynptr *digest_ptr) __ksym;
>  
>  extern struct bpf_key *bpf_lookup_user_key(__s32 serial, __u64 flags) __ksym;
>  extern struct bpf_key *bpf_lookup_system_key(__u64 id) __ksym;
>  extern void bpf_key_put(struct bpf_key *key) __ksym;
> -extern int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_ptr,
> -				      struct bpf_dynptr *sig_ptr,
> +extern int bpf_verify_pkcs7_signature(const struct bpf_dynptr *data_ptr,
> +				      const struct bpf_dynptr *sig_ptr,
>  				      struct bpf_key *trusted_keyring) __ksym;
>  
>  struct dentry;
> diff --git a/tools/testing/selftests/bpf/progs/dynptr_success.c b/tools/testing/selftests/bpf/progs/dynptr_success.c
> index e0d672d93adf..e0745b6e467e 100644
> --- a/tools/testing/selftests/bpf/progs/dynptr_success.c
> +++ b/tools/testing/selftests/bpf/progs/dynptr_success.c
> @@ -914,7 +914,7 @@ void *user_ptr;
>  char expected_str[384];
>  __u32 test_len[7] = {0/* placeholder */, 0, 1, 2, 255, 256, 257};
>  
> -typedef int (*bpf_read_dynptr_fn_t)(struct bpf_dynptr *dptr, u64 off,
> +typedef int (*bpf_read_dynptr_fn_t)(const struct bpf_dynptr *dptr, u64 off,
>  				    u64 size, const void *unsafe_ptr);
>  
>  /* Returns the offset just before the end of the maximum sized xdp fragment.
> @@ -1106,7 +1106,7 @@ int test_copy_from_user_str_dynptr(void *ctx)
>  	return 0;
>  }
>  
> -static int bpf_copy_data_from_user_task(struct bpf_dynptr *dptr, u64 off,
> +static int bpf_copy_data_from_user_task(const struct bpf_dynptr *dptr, u64 off,
>  					u64 size, const void *unsafe_ptr)
>  {
>  	struct task_struct *task = bpf_get_current_task_btf();
> @@ -1114,7 +1114,7 @@ static int bpf_copy_data_from_user_task(struct bpf_dynptr *dptr, u64 off,
>  	return bpf_copy_from_user_task_dynptr(dptr, off, size, unsafe_ptr, task);
>  }
>  
> -static int bpf_copy_data_from_user_task_str(struct bpf_dynptr *dptr, u64 off,
> +static int bpf_copy_data_from_user_task_str(const struct bpf_dynptr *dptr, u64 off,
>  					    u64 size, const void *unsafe_ptr)
>  {
>  	struct task_struct *task = bpf_get_current_task_btf();
> diff --git a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
> index d249113ed657..c3631fd41977 100644
> --- a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
> +++ b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
> @@ -11,12 +11,7 @@
>  #include <bpf/bpf_helpers.h>
>  #include <bpf/bpf_tracing.h>
>  #include "bpf_misc.h"
> -
> -extern struct bpf_key *bpf_lookup_system_key(__u64 id) __ksym;
> -extern void bpf_key_put(struct bpf_key *key) __ksym;
> -extern int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_ptr,
> -				      struct bpf_dynptr *sig_ptr,
> -				      struct bpf_key *trusted_keyring) __ksym;
> +#include "bpf_kfuncs.h"
>  
>  struct {
>  	__uint(type, BPF_MAP_TYPE_RINGBUF);
> -- 
> 2.47.3

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH bpf-next v2 02/11] selftests/bpf: Test passing CONST_PTR_TO_DYNPTR to kfunc that may mutate dynptr
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 02/11] selftests/bpf: Test passing CONST_PTR_TO_DYNPTR to kfunc that may mutate dynptr Amery Hung
@ 2026-03-11 15:26   ` Mykyta Yatsenko
  2026-03-11 16:38     ` Amery Hung
  2026-03-16 21:35   ` Eduard Zingerman
  1 sibling, 1 reply; 46+ messages in thread
From: Mykyta Yatsenko @ 2026-03-11 15:26 UTC (permalink / raw)
  To: Amery Hung, bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
	ameryhung, kernel-team

Amery Hung <ameryhung@gmail.com> writes:

> Make sure that, for a kfunc that takes a mutable dynptr argument, the
> verifier rejects passing CONST_PTR_TO_DYNPTR to it.
>
> Rename struct sample to test_sample to avoid a conflict with the
> definition in vmlinux.h.
>
> In test_kfunc_dynptr_param.c, initialize dynptr to 0 to avoid the
> -Wuninitialized-const-pointer warning.
>
> Signed-off-by: Amery Hung <ameryhung@gmail.com>
> ---
>  .../testing/selftests/bpf/progs/dynptr_fail.c | 37 +++++++++++++++----
>  .../bpf/progs/test_kfunc_dynptr_param.c       |  2 +-
>  2 files changed, 30 insertions(+), 9 deletions(-)
>
> diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c
> index 8f2ae9640886..5e1b1cf4ea8e 100644
> --- a/tools/testing/selftests/bpf/progs/dynptr_fail.c
> +++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c
> @@ -1,15 +1,14 @@
>  // SPDX-License-Identifier: GPL-2.0
>  /* Copyright (c) 2022 Facebook */
>  
> +#include <vmlinux.h>
>  #include <errno.h>
>  #include <string.h>
> -#include <stdbool.h>
> -#include <linux/bpf.h>
>  #include <bpf/bpf_helpers.h>
>  #include <bpf/bpf_tracing.h>
> -#include <linux/if_ether.h>
>  #include "bpf_misc.h"
>  #include "bpf_kfuncs.h"
> +#include "../test_kmods/bpf_testmod_kfunc.h"
>  
>  char _license[] SEC("license") = "GPL";
>  
> @@ -46,7 +45,7 @@ struct {
>  	__type(value, __u64);
>  } array_map4 SEC(".maps");
>  
> -struct sample {
> +struct test_sample {
>  	int pid;
>  	long value;
>  	char comm[16];
> @@ -95,7 +94,7 @@ __failure __msg("Unreleased reference id=4")
>  int ringbuf_missing_release2(void *ctx)
>  {
>  	struct bpf_dynptr ptr1, ptr2;
> -	struct sample *sample;
> +	struct test_sample *sample;
>  
>  	bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(*sample), 0, &ptr1);
>  	bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(*sample), 0, &ptr2);
> @@ -173,7 +172,7 @@ __failure __msg("type=mem expected=ringbuf_mem")
>  int ringbuf_invalid_api(void *ctx)
>  {
>  	struct bpf_dynptr ptr;
> -	struct sample *sample;
> +	struct test_sample *sample;
>  
>  	bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(*sample), 0, &ptr);
>  	sample = bpf_dynptr_data(&ptr, 0, sizeof(*sample));
> @@ -315,7 +314,7 @@ __failure __msg("invalid mem access 'scalar'")
>  int data_slice_use_after_release1(void *ctx)
>  {
>  	struct bpf_dynptr ptr;
> -	struct sample *sample;
> +	struct test_sample *sample;
>  
>  	bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(*sample), 0, &ptr);
>  	sample = bpf_dynptr_data(&ptr, 0, sizeof(*sample));
> @@ -347,7 +346,7 @@ __failure __msg("invalid mem access 'scalar'")
>  int data_slice_use_after_release2(void *ctx)
>  {
>  	struct bpf_dynptr ptr1, ptr2;
> -	struct sample *sample;
> +	struct test_sample *sample;
>  
>  	bpf_ringbuf_reserve_dynptr(&ringbuf, 64, 0, &ptr1);
>  	bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(*sample), 0, &ptr2);
> @@ -1993,3 +1992,25 @@ int test_dynptr_reg_type(void *ctx)
>  	global_call_bpf_dynptr((const struct bpf_dynptr *)current);
>  	return 0;
>  }
> +
> +/* Cannot pass CONST_PTR_TO_DYNPTR to bpf_kfunc_dynptr_test() that may mutate the dynptr */
> +__noinline int global_subprog_dynptr_mutable(const struct bpf_dynptr *dynptr)
> +{
> +	long ret = 0;
Why do we need this long ret? Do we even need this function at all?
Why not call bpf_kfunc_dynptr_test() directly from
kfunc_dynptr_const_to_mutable()?
> +
> +	/* this should fail */
> +	bpf_kfunc_dynptr_test((struct bpf_dynptr *)dynptr, NULL);
> +	__sink(ret);
> +	return ret;
> +}
> +
> +SEC("tc")
nit: it looks like most of the programs in this file are optional:
SEC("?tc").
> +__failure __msg("cannot pass pointer to const bpf_dynptr, the helper mutates it")
> +int kfunc_dynptr_const_to_mutable(struct __sk_buff *skb)
> +{
> +	struct bpf_dynptr data;
> +
> +	bpf_dynptr_from_skb(skb, 0, &data);
> +	global_subprog_dynptr_mutable(&data);
> +	return 0;
> +}
> diff --git a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
> index c3631fd41977..1c6cfd0888ba 100644
> --- a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
> +++ b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
> @@ -33,7 +33,7 @@ SEC("?lsm.s/bpf")
>  __failure __msg("cannot pass in dynptr at an offset=-8")
>  int BPF_PROG(not_valid_dynptr, int cmd, union bpf_attr *attr, unsigned int size, bool kernel)
>  {
> -	unsigned long val;
> +	unsigned long val = 0;
>  
>  	return bpf_verify_pkcs7_signature((struct bpf_dynptr *)&val,
>  					  (struct bpf_dynptr *)&val, NULL);
> -- 
> 2.47.3

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH bpf-next v2 03/11] bpf: Unify dynptr handling in the verifier
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 03/11] bpf: Unify dynptr handling in the verifier Amery Hung
@ 2026-03-11 16:03   ` Mykyta Yatsenko
  2026-03-11 17:23     ` Amery Hung
  2026-03-11 19:57   ` Andrii Nakryiko
  2026-03-16 22:52   ` Eduard Zingerman
  2 siblings, 1 reply; 46+ messages in thread
From: Mykyta Yatsenko @ 2026-03-11 16:03 UTC (permalink / raw)
  To: Amery Hung, bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
	ameryhung, kernel-team

Amery Hung <ameryhung@gmail.com> writes:

> Simplify dynptr checking for helpers and kfuncs by unifying it.
> Remember the initialized dynptr in process_dynptr_func() so that we can
> easily retrieve the information for verification later.
>
> Signed-off-by: Amery Hung <ameryhung@gmail.com>
> ---
>  kernel/bpf/verifier.c | 179 +++++++++---------------------------------
>  1 file changed, 36 insertions(+), 143 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 0f77c4c5b510..d52780962adb 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -277,8 +277,15 @@ struct bpf_map_desc {
>  	int uid;
>  };
>  
> +struct bpf_dynptr_desc {
> +	enum bpf_dynptr_type type;
> +	u32 id;
> +	u32 ref_obj_id;
nit: let's add a comment here explaining what this field is for.
> +};
> +
>  struct bpf_call_arg_meta {
>  	struct bpf_map_desc map;
> +	struct bpf_dynptr_desc initialized_dynptr;
>  	bool raw_mode;
>  	bool pkt_access;
>  	u8 release_regno;
> @@ -287,7 +294,6 @@ struct bpf_call_arg_meta {
>  	int mem_size;
>  	u64 msize_max_value;
>  	int ref_obj_id;
> -	int dynptr_id;
>  	int func_id;
>  	struct btf *btf;
>  	u32 btf_id;
> @@ -346,16 +352,12 @@ struct bpf_kfunc_call_arg_meta {
>  	struct {
>  		struct btf_field *field;
>  	} arg_rbtree_root;
> -	struct {
> -		enum bpf_dynptr_type type;
> -		u32 id;
> -		u32 ref_obj_id;
> -	} initialized_dynptr;
>  	struct {
>  		u8 spi;
>  		u8 frameno;
>  	} iter;
>  	struct bpf_map_desc map;
> +	struct bpf_dynptr_desc initialized_dynptr;
>  	u64 mem_size;
>  };
>  
> @@ -511,11 +513,6 @@ static bool is_ptr_cast_function(enum bpf_func_id func_id)
>  		func_id == BPF_FUNC_skc_to_tcp_request_sock;
>  }
>  
> -static bool is_dynptr_ref_function(enum bpf_func_id func_id)
> -{
> -	return func_id == BPF_FUNC_dynptr_data;
> -}
> -
>  static bool is_sync_callback_calling_kfunc(u32 btf_id);
>  static bool is_async_callback_calling_kfunc(u32 btf_id);
>  static bool is_callback_calling_kfunc(u32 btf_id);
> @@ -597,8 +594,6 @@ static bool helper_multiple_ref_obj_use(enum bpf_func_id func_id,
>  		ref_obj_uses++;
>  	if (is_acquire_function(func_id, map))
>  		ref_obj_uses++;
> -	if (is_dynptr_ref_function(func_id))
> -		ref_obj_uses++;
>  
>  	return ref_obj_uses > 1;
>  }
> @@ -8750,7 +8745,8 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
>   * type, and declare it as 'const struct bpf_dynptr *' in their prototype.
>   */
>  static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn_idx,
> -			       enum bpf_arg_type arg_type, int clone_ref_obj_id)
> +			       enum bpf_arg_type arg_type, int clone_ref_obj_id,
> +			       struct bpf_dynptr_desc *initialized_dynptr)
>  {
>  	struct bpf_reg_state *reg = reg_state(env, regno);
>  	int err;
> @@ -8825,6 +8821,20 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
>  		}
>  
>  		err = mark_dynptr_read(env, reg);
> +
> +		if (initialized_dynptr) {
> +			struct bpf_func_state *state = func(env, reg);
state is only used if reg->type != CONST_PTR_TO_DYNPTR; does it make
sense to move state = func(env, reg); into the corresponding if block?
> +			int spi;
> +
> +			if (reg->type != CONST_PTR_TO_DYNPTR) {
> +				spi = dynptr_get_spi(env, reg);
Looking at the deleted dynptr_id() and dynptr_ref_obj_id(), spi can be
negative. What changed here so that we no longer need this check?
> +				reg = &state->stack[spi].spilled_ptr;
> +			}
> +
> +			initialized_dynptr->id = reg->id;
> +			initialized_dynptr->type = reg->dynptr.type;
> +			initialized_dynptr->ref_obj_id = reg->ref_obj_id;
> +		}
>  	}
>  	return err;
>  }
> @@ -9587,72 +9597,6 @@ static int check_func_arg_reg_off(struct bpf_verifier_env *env,
>  	}
>  }
>  
> -static struct bpf_reg_state *get_dynptr_arg_reg(struct bpf_verifier_env *env,
> -						const struct bpf_func_proto *fn,
> -						struct bpf_reg_state *regs)
> -{
> -	struct bpf_reg_state *state = NULL;
> -	int i;
> -
> -	for (i = 0; i < MAX_BPF_FUNC_REG_ARGS; i++)
> -		if (arg_type_is_dynptr(fn->arg_type[i])) {
> -			if (state) {
> -				verbose(env, "verifier internal error: multiple dynptr args\n");
> -				return NULL;
> -			}
> -			state = &regs[BPF_REG_1 + i];
> -		}
> -
> -	if (!state)
> -		verbose(env, "verifier internal error: no dynptr arg found\n");
> -
> -	return state;
> -}
> -
> -static int dynptr_id(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> -{
> -	struct bpf_func_state *state = func(env, reg);
> -	int spi;
> -
> -	if (reg->type == CONST_PTR_TO_DYNPTR)
> -		return reg->id;
> -	spi = dynptr_get_spi(env, reg);
> -	if (spi < 0)
> -		return spi;
> -	return state->stack[spi].spilled_ptr.id;
> -}
> -
> -static int dynptr_ref_obj_id(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> -{
> -	struct bpf_func_state *state = func(env, reg);
> -	int spi;
> -
> -	if (reg->type == CONST_PTR_TO_DYNPTR)
> -		return reg->ref_obj_id;
> -	spi = dynptr_get_spi(env, reg);
> -	if (spi < 0)
> -		return spi;
> -	return state->stack[spi].spilled_ptr.ref_obj_id;
> -}
> -
> -static enum bpf_dynptr_type dynptr_get_type(struct bpf_verifier_env *env,
> -					    struct bpf_reg_state *reg)
> -{
> -	struct bpf_func_state *state = func(env, reg);
> -	int spi;
> -
> -	if (reg->type == CONST_PTR_TO_DYNPTR)
> -		return reg->dynptr.type;
> -
> -	spi = __get_spi(reg->var_off.value);
> -	if (spi < 0) {
> -		verbose(env, "verifier internal error: invalid spi when querying dynptr type\n");
> -		return BPF_DYNPTR_TYPE_INVALID;
> -	}
> -
> -	return state->stack[spi].spilled_ptr.dynptr.type;
> -}
> -
>  static int check_reg_const_str(struct bpf_verifier_env *env,
>  			       struct bpf_reg_state *reg, u32 regno)
>  {
> @@ -10007,7 +9951,8 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
>  					 true, meta);
>  		break;
>  	case ARG_PTR_TO_DYNPTR:
> -		err = process_dynptr_func(env, regno, insn_idx, arg_type, 0);
> +		err = process_dynptr_func(env, regno, insn_idx, arg_type, 0,
> +					  &meta->initialized_dynptr);
>  		if (err)
>  			return err;
>  		break;
> @@ -10666,7 +10611,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
>  			if (ret)
>  				return ret;
>  
> -			ret = process_dynptr_func(env, regno, -1, arg->arg_type, 0);
> +			ret = process_dynptr_func(env, regno, -1, arg->arg_type, 0, NULL);
>  			if (ret)
>  				return ret;
>  		} else if (base_type(arg->arg_type) == ARG_PTR_TO_BTF_ID) {
> @@ -11771,52 +11716,10 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
>  			}
>  		}
>  		break;
> -	case BPF_FUNC_dynptr_data:
> -	{
> -		struct bpf_reg_state *reg;
> -		int id, ref_obj_id;
> -
> -		reg = get_dynptr_arg_reg(env, fn, regs);
> -		if (!reg)
> -			return -EFAULT;
> -
> -
> -		if (meta.dynptr_id) {
> -			verifier_bug(env, "meta.dynptr_id already set");
> -			return -EFAULT;
> -		}
> -		if (meta.ref_obj_id) {
> -			verifier_bug(env, "meta.ref_obj_id already set");
> -			return -EFAULT;
> -		}
> -
> -		id = dynptr_id(env, reg);
> -		if (id < 0) {
> -			verifier_bug(env, "failed to obtain dynptr id");
> -			return id;
> -		}
> -
> -		ref_obj_id = dynptr_ref_obj_id(env, reg);
> -		if (ref_obj_id < 0) {
> -			verifier_bug(env, "failed to obtain dynptr ref_obj_id");
> -			return ref_obj_id;
> -		}
> -
> -		meta.dynptr_id = id;
> -		meta.ref_obj_id = ref_obj_id;
> -
> -		break;
> -	}
>  	case BPF_FUNC_dynptr_write:
>  	{
> -		enum bpf_dynptr_type dynptr_type;
> -		struct bpf_reg_state *reg;
> -
> -		reg = get_dynptr_arg_reg(env, fn, regs);
> -		if (!reg)
> -			return -EFAULT;
> +		enum bpf_dynptr_type dynptr_type = meta.initialized_dynptr.type;
>  
> -		dynptr_type = dynptr_get_type(env, reg);
>  		if (dynptr_type == BPF_DYNPTR_TYPE_INVALID)
>  			return -EFAULT;
>  
> @@ -12007,10 +11910,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
>  		return -EFAULT;
>  	}
>  
> -	if (is_dynptr_ref_function(func_id))
> -		regs[BPF_REG_0].dynptr_id = meta.dynptr_id;
> -
> -	if (is_ptr_cast_function(func_id) || is_dynptr_ref_function(func_id)) {
> +	if (is_ptr_cast_function(func_id)) {
>  		/* For release_reference() */
>  		regs[BPF_REG_0].ref_obj_id = meta.ref_obj_id;
>  	} else if (is_acquire_function(func_id, meta.map.ptr)) {
> @@ -12024,6 +11924,11 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
>  		regs[BPF_REG_0].ref_obj_id = id;
>  	}
>  
> +	if (func_id == BPF_FUNC_dynptr_data) {
> +		regs[BPF_REG_0].dynptr_id = meta.initialized_dynptr.id;
> +		regs[BPF_REG_0].ref_obj_id = meta.initialized_dynptr.ref_obj_id;
> +	}
> +
>  	err = do_refine_retval_range(env, regs, fn->ret_type, func_id, &meta);
>  	if (err)
>  		return err;
> @@ -13559,22 +13464,10 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
>  				}
>  			}
>  
> -			ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id);
> +			ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id,
> +						  &meta->initialized_dynptr);
>  			if (ret < 0)
>  				return ret;
> -
> -			if (!(dynptr_arg_type & MEM_UNINIT)) {
> -				int id = dynptr_id(env, reg);
> -
> -				if (id < 0) {
> -					verifier_bug(env, "failed to obtain dynptr id");
> -					return id;
> -				}
> -				meta->initialized_dynptr.id = id;
> -				meta->initialized_dynptr.type = dynptr_get_type(env, reg);
> -				meta->initialized_dynptr.ref_obj_id = dynptr_ref_obj_id(env, reg);
> -			}
> -
>  			break;
>  		}
>  		case KF_ARG_PTR_TO_ITER:
> -- 
> 2.47.3

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH bpf-next v2 01/11] bpf: Set kfunc dynptr arg type flag based on prototype
  2026-03-11 14:47   ` Mykyta Yatsenko
@ 2026-03-11 16:34     ` Amery Hung
  0 siblings, 0 replies; 46+ messages in thread
From: Amery Hung @ 2026-03-11 16:34 UTC (permalink / raw)
  To: Mykyta Yatsenko
  Cc: bpf, netdev, alexei.starovoitov, andrii, daniel, memxor,
	martin.lau, kernel-team

On Wed, Mar 11, 2026 at 7:47 AM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> Amery Hung <ameryhung@gmail.com> writes:
>
> > The verifier should decide whether a dynptr argument is read-only
> > based on if the type is "const struct bpf_dynptr *", not the type of
> > the register passed to the kfunc. This currently does not cause issues
> > because existing kfuncs that mutate struct bpf_dynptr are constructors
> > (e.g., bpf_dynptr_from_xxx and bpf_dynptr_clone). These kfuncs have
> > additional check in process_dynptr_func() to make sure the stack slot
> > does not contain initialized dynptr. Nonetheless, this should still be
> > fixed to avoid future issues when there is a non-constructor dynptr
> > kfunc that can mutate dynptr. This is also a small step toward unifying
> > kfunc and helper handling in the verifier, where the first step is to
> > generate kfunc prototype similar to bpf_func_proto before the main
> > verification loop.
> >
> > We also need to correctly mark some kfunc arguments as "const struct
> > bpf_dynptr *" to align with other kfuncs that take non-mutable dynptr
> > argument and to not break their usage. Adding const qualifier does
> > not break backward compatibility.
> >
> > Signed-off-by: Amery Hung <ameryhung@gmail.com>
> > ---
> >  fs/verity/measure.c                            |  2 +-
> >  include/linux/bpf.h                            |  8 ++++----
> >  kernel/bpf/helpers.c                           | 10 +++++-----
> >  kernel/bpf/verifier.c                          | 18 +++++++++++++++++-
> >  kernel/trace/bpf_trace.c                       | 18 +++++++++---------
> >  tools/testing/selftests/bpf/bpf_kfuncs.h       |  6 +++---
> >  .../selftests/bpf/progs/dynptr_success.c       |  6 +++---
> >  .../bpf/progs/test_kfunc_dynptr_param.c        |  7 +------
> >  8 files changed, 43 insertions(+), 32 deletions(-)
> >
> > diff --git a/fs/verity/measure.c b/fs/verity/measure.c
> > index 6a35623ebdf0..3840436e4510 100644
> > --- a/fs/verity/measure.c
> > +++ b/fs/verity/measure.c
> > @@ -118,7 +118,7 @@ __bpf_kfunc_start_defs();
> >   *
> >   * Return: 0 on success, a negative value on error.
> >   */
> > -__bpf_kfunc int bpf_get_fsverity_digest(struct file *file, struct bpf_dynptr *digest_p)
> > +__bpf_kfunc int bpf_get_fsverity_digest(struct file *file, const struct bpf_dynptr *digest_p)
> >  {
> >       struct bpf_dynptr_kern *digest_ptr = (struct bpf_dynptr_kern *)digest_p;
> >       const struct inode *inode = file_inode(file);
> > diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> > index b78b53198a2e..946a37b951f7 100644
> > --- a/include/linux/bpf.h
> > +++ b/include/linux/bpf.h
> > @@ -3621,8 +3621,8 @@ static inline int bpf_fd_reuseport_array_update_elem(struct bpf_map *map,
> >  struct bpf_key *bpf_lookup_user_key(s32 serial, u64 flags);
> >  struct bpf_key *bpf_lookup_system_key(u64 id);
> >  void bpf_key_put(struct bpf_key *bkey);
> > -int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_p,
> > -                            struct bpf_dynptr *sig_p,
> > +int bpf_verify_pkcs7_signature(const struct bpf_dynptr *data_p,
> > +                            const struct bpf_dynptr *sig_p,
> >                              struct bpf_key *trusted_keyring);
> >
> >  #else
> > @@ -3640,8 +3640,8 @@ static inline void bpf_key_put(struct bpf_key *bkey)
> >  {
> >  }
> >
> > -static inline int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_p,
> > -                                          struct bpf_dynptr *sig_p,
> > +static inline int bpf_verify_pkcs7_signature(const struct bpf_dynptr *data_p,
> > +                                          const struct bpf_dynptr *sig_p,
> >                                            struct bpf_key *trusted_keyring)
> >  {
> >       return -EOPNOTSUPP;
> > diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> > index 6eb6c82ed2ee..3d44896587ac 100644
> > --- a/kernel/bpf/helpers.c
> > +++ b/kernel/bpf/helpers.c
> > @@ -3000,8 +3000,8 @@ __bpf_kfunc int bpf_dynptr_clone(const struct bpf_dynptr *p,
> >   * Copies data from source dynptr to destination dynptr.
> >   * Returns 0 on success; negative error, otherwise.
> >   */
> > -__bpf_kfunc int bpf_dynptr_copy(struct bpf_dynptr *dst_ptr, u64 dst_off,
> > -                             struct bpf_dynptr *src_ptr, u64 src_off, u64 size)
> > +__bpf_kfunc int bpf_dynptr_copy(const struct bpf_dynptr *dst_ptr, u64 dst_off,
> > +                             const struct bpf_dynptr *src_ptr, u64 src_off, u64 size)
> >  {
> >       struct bpf_dynptr_kern *dst = (struct bpf_dynptr_kern *)dst_ptr;
> >       struct bpf_dynptr_kern *src = (struct bpf_dynptr_kern *)src_ptr;
> > @@ -3055,7 +3055,7 @@ __bpf_kfunc int bpf_dynptr_copy(struct bpf_dynptr *dst_ptr, u64 dst_off,
> >   * at @offset with the constant byte @val.
> >   * Returns 0 on success; negative error, otherwise.
> >   */
> > -__bpf_kfunc int bpf_dynptr_memset(struct bpf_dynptr *p, u64 offset, u64 size, u8 val)
> > +__bpf_kfunc int bpf_dynptr_memset(const struct bpf_dynptr *p, u64 offset, u64 size, u8 val)
> >  {
> >       struct bpf_dynptr_kern *ptr = (struct bpf_dynptr_kern *)p;
> >       u64 chunk_sz, write_off;
> > @@ -4069,8 +4069,8 @@ __bpf_kfunc void bpf_key_put(struct bpf_key *bkey)
> >   *
> >   * Return: 0 on success, a negative value on error.
> >   */
> > -__bpf_kfunc int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_p,
> > -                            struct bpf_dynptr *sig_p,
> > +__bpf_kfunc int bpf_verify_pkcs7_signature(const struct bpf_dynptr *data_p,
> > +                            const struct bpf_dynptr *sig_p,
> >                              struct bpf_key *trusted_keyring)
> >  {
> >  #ifdef CONFIG_SYSTEM_DATA_VERIFICATION
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 1153a828ce8d..0f77c4c5b510 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -12276,6 +12276,22 @@ static bool is_kfunc_arg_dynptr(const struct btf *btf, const struct btf_param *a
> >       return __is_kfunc_ptr_arg_type(btf, arg, KF_ARG_DYNPTR_ID);
> >  }
> >
> > +static bool is_kfunc_arg_const_ptr(const struct btf *btf, const struct btf_param *arg)
> > +{
> > +     const struct btf_type *t, *resolved_t;
> > +
> > +     t = btf_type_skip_modifiers(btf, arg->type, NULL);
> > +     if (!t || !btf_type_is_ptr(t))
> > +             return false;
> > +
> > +     resolved_t = btf_type_skip_modifiers(btf, t->type, NULL);
> nit: t is ptr type, maybe we can do t = btf_type_by_id(btf, t->type)
> before the loop starts, as we know the result of the first iteration.

Will add it before calling btf_type_skip_modifiers.

> > +     for (; t != resolved_t; t = btf_type_by_id(btf, t->type))
> > +             if (BTF_INFO_KIND(t->info) == BTF_KIND_CONST)
> nit: btf_kind() is a bit shorter than BTF_INFO_KIND()

Will replace it with btf_kind().

> > +                     return true;
> > +
> > +     return false;
> > +}
> The logic in this function looks correct to me. The refactoring makes
> sense as well (although I'm not 100% sure how this is relevant to this
> patch series)

Thanks for taking a look at the set!

Indeed, it is an independent change. I will split these two patches out
if the rest of the set is going to take more time.

> Acked-by: Mykyta Yatsenko <yatsenko@meta.com>
> > +
> >  static bool is_kfunc_arg_list_head(const struct btf *btf, const struct btf_param *arg)
> >  {
> >       return __is_kfunc_ptr_arg_type(btf, arg, KF_ARG_LIST_HEAD_ID);
> > @@ -13509,7 +13525,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
> >                       enum bpf_arg_type dynptr_arg_type = ARG_PTR_TO_DYNPTR;
> >                       int clone_ref_obj_id = 0;
> >
> > -                     if (reg->type == CONST_PTR_TO_DYNPTR)
> > +                     if (is_kfunc_arg_const_ptr(btf, &args[i]))
> >                               dynptr_arg_type |= MEM_RDONLY;
> >
> >                       if (is_kfunc_arg_uninit(btf, &args[i]))
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index 9bc0dfd235af..127c317376be 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -3391,7 +3391,7 @@ typedef int (*copy_fn_t)(void *dst, const void *src, u32 size, struct task_struc
> >   * direct calls into all the specific callback implementations
> >   * (copy_user_data_sleepable, copy_user_data_nofault, and so on)
> >   */
> > -static __always_inline int __bpf_dynptr_copy_str(struct bpf_dynptr *dptr, u64 doff, u64 size,
> > +static __always_inline int __bpf_dynptr_copy_str(const struct bpf_dynptr *dptr, u64 doff, u64 size,
> >                                                const void *unsafe_src,
> >                                                copy_fn_t str_copy_fn,
> >                                                struct task_struct *tsk)
> > @@ -3533,49 +3533,49 @@ __bpf_kfunc int bpf_send_signal_task(struct task_struct *task, int sig, enum pid
> >       return bpf_send_signal_common(sig, type, task, value);
> >  }
> >
> > -__bpf_kfunc int bpf_probe_read_user_dynptr(struct bpf_dynptr *dptr, u64 off,
> > +__bpf_kfunc int bpf_probe_read_user_dynptr(const struct bpf_dynptr *dptr, u64 off,
> >                                          u64 size, const void __user *unsafe_ptr__ign)
> >  {
> >       return __bpf_dynptr_copy(dptr, off, size, (const void __force *)unsafe_ptr__ign,
> >                                copy_user_data_nofault, NULL);
> >  }
> >
> > -__bpf_kfunc int bpf_probe_read_kernel_dynptr(struct bpf_dynptr *dptr, u64 off,
> > +__bpf_kfunc int bpf_probe_read_kernel_dynptr(const struct bpf_dynptr *dptr, u64 off,
> >                                            u64 size, const void *unsafe_ptr__ign)
> >  {
> >       return __bpf_dynptr_copy(dptr, off, size, unsafe_ptr__ign,
> >                                copy_kernel_data_nofault, NULL);
> >  }
> >
> > -__bpf_kfunc int bpf_probe_read_user_str_dynptr(struct bpf_dynptr *dptr, u64 off,
> > +__bpf_kfunc int bpf_probe_read_user_str_dynptr(const struct bpf_dynptr *dptr, u64 off,
> >                                              u64 size, const void __user *unsafe_ptr__ign)
> >  {
> >       return __bpf_dynptr_copy_str(dptr, off, size, (const void __force *)unsafe_ptr__ign,
> >                                    copy_user_str_nofault, NULL);
> >  }
> >
> > -__bpf_kfunc int bpf_probe_read_kernel_str_dynptr(struct bpf_dynptr *dptr, u64 off,
> > +__bpf_kfunc int bpf_probe_read_kernel_str_dynptr(const struct bpf_dynptr *dptr, u64 off,
> >                                                u64 size, const void *unsafe_ptr__ign)
> >  {
> >       return __bpf_dynptr_copy_str(dptr, off, size, unsafe_ptr__ign,
> >                                    copy_kernel_str_nofault, NULL);
> >  }
> >
> > -__bpf_kfunc int bpf_copy_from_user_dynptr(struct bpf_dynptr *dptr, u64 off,
> > +__bpf_kfunc int bpf_copy_from_user_dynptr(const struct bpf_dynptr *dptr, u64 off,
> >                                         u64 size, const void __user *unsafe_ptr__ign)
> >  {
> >       return __bpf_dynptr_copy(dptr, off, size, (const void __force *)unsafe_ptr__ign,
> >                                copy_user_data_sleepable, NULL);
> >  }
> >
> > -__bpf_kfunc int bpf_copy_from_user_str_dynptr(struct bpf_dynptr *dptr, u64 off,
> > +__bpf_kfunc int bpf_copy_from_user_str_dynptr(const struct bpf_dynptr *dptr, u64 off,
> >                                             u64 size, const void __user *unsafe_ptr__ign)
> >  {
> >       return __bpf_dynptr_copy_str(dptr, off, size, (const void __force *)unsafe_ptr__ign,
> >                                    copy_user_str_sleepable, NULL);
> >  }
> >
> > -__bpf_kfunc int bpf_copy_from_user_task_dynptr(struct bpf_dynptr *dptr, u64 off,
> > +__bpf_kfunc int bpf_copy_from_user_task_dynptr(const struct bpf_dynptr *dptr, u64 off,
> >                                              u64 size, const void __user *unsafe_ptr__ign,
> >                                              struct task_struct *tsk)
> >  {
> > @@ -3583,7 +3583,7 @@ __bpf_kfunc int bpf_copy_from_user_task_dynptr(struct bpf_dynptr *dptr, u64 off,
> >                                copy_user_data_sleepable, tsk);
> >  }
> >
> > -__bpf_kfunc int bpf_copy_from_user_task_str_dynptr(struct bpf_dynptr *dptr, u64 off,
> > +__bpf_kfunc int bpf_copy_from_user_task_str_dynptr(const struct bpf_dynptr *dptr, u64 off,
> >                                                  u64 size, const void __user *unsafe_ptr__ign,
> >                                                  struct task_struct *tsk)
> >  {
> > diff --git a/tools/testing/selftests/bpf/bpf_kfuncs.h b/tools/testing/selftests/bpf/bpf_kfuncs.h
> > index 7dad01439391..ffb9bc1cace0 100644
> > --- a/tools/testing/selftests/bpf/bpf_kfuncs.h
> > +++ b/tools/testing/selftests/bpf/bpf_kfuncs.h
> > @@ -70,13 +70,13 @@ extern void *bpf_rdonly_cast(const void *obj, __u32 btf_id) __ksym __weak;
> >
> >  extern int bpf_get_file_xattr(struct file *file, const char *name,
> >                             struct bpf_dynptr *value_ptr) __ksym;
> > -extern int bpf_get_fsverity_digest(struct file *file, struct bpf_dynptr *digest_ptr) __ksym;
> > +extern int bpf_get_fsverity_digest(struct file *file, const struct bpf_dynptr *digest_ptr) __ksym;
> >
> >  extern struct bpf_key *bpf_lookup_user_key(__s32 serial, __u64 flags) __ksym;
> >  extern struct bpf_key *bpf_lookup_system_key(__u64 id) __ksym;
> >  extern void bpf_key_put(struct bpf_key *key) __ksym;
> > -extern int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_ptr,
> > -                                   struct bpf_dynptr *sig_ptr,
> > +extern int bpf_verify_pkcs7_signature(const struct bpf_dynptr *data_ptr,
> > +                                   const struct bpf_dynptr *sig_ptr,
> >                                     struct bpf_key *trusted_keyring) __ksym;
> >
> >  struct dentry;
> > diff --git a/tools/testing/selftests/bpf/progs/dynptr_success.c b/tools/testing/selftests/bpf/progs/dynptr_success.c
> > index e0d672d93adf..e0745b6e467e 100644
> > --- a/tools/testing/selftests/bpf/progs/dynptr_success.c
> > +++ b/tools/testing/selftests/bpf/progs/dynptr_success.c
> > @@ -914,7 +914,7 @@ void *user_ptr;
> >  char expected_str[384];
> >  __u32 test_len[7] = {0/* placeholder */, 0, 1, 2, 255, 256, 257};
> >
> > -typedef int (*bpf_read_dynptr_fn_t)(struct bpf_dynptr *dptr, u64 off,
> > +typedef int (*bpf_read_dynptr_fn_t)(const struct bpf_dynptr *dptr, u64 off,
> >                                   u64 size, const void *unsafe_ptr);
> >
> >  /* Returns the offset just before the end of the maximum sized xdp fragment.
> > @@ -1106,7 +1106,7 @@ int test_copy_from_user_str_dynptr(void *ctx)
> >       return 0;
> >  }
> >
> > -static int bpf_copy_data_from_user_task(struct bpf_dynptr *dptr, u64 off,
> > +static int bpf_copy_data_from_user_task(const struct bpf_dynptr *dptr, u64 off,
> >                                       u64 size, const void *unsafe_ptr)
> >  {
> >       struct task_struct *task = bpf_get_current_task_btf();
> > @@ -1114,7 +1114,7 @@ static int bpf_copy_data_from_user_task(struct bpf_dynptr *dptr, u64 off,
> >       return bpf_copy_from_user_task_dynptr(dptr, off, size, unsafe_ptr, task);
> >  }
> >
> > -static int bpf_copy_data_from_user_task_str(struct bpf_dynptr *dptr, u64 off,
> > +static int bpf_copy_data_from_user_task_str(const struct bpf_dynptr *dptr, u64 off,
> >                                           u64 size, const void *unsafe_ptr)
> >  {
> >       struct task_struct *task = bpf_get_current_task_btf();
> > diff --git a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
> > index d249113ed657..c3631fd41977 100644
> > --- a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
> > +++ b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
> > @@ -11,12 +11,7 @@
> >  #include <bpf/bpf_helpers.h>
> >  #include <bpf/bpf_tracing.h>
> >  #include "bpf_misc.h"
> > -
> > -extern struct bpf_key *bpf_lookup_system_key(__u64 id) __ksym;
> > -extern void bpf_key_put(struct bpf_key *key) __ksym;
> > -extern int bpf_verify_pkcs7_signature(struct bpf_dynptr *data_ptr,
> > -                                   struct bpf_dynptr *sig_ptr,
> > -                                   struct bpf_key *trusted_keyring) __ksym;
> > +#include "bpf_kfuncs.h"
> >
> >  struct {
> >       __uint(type, BPF_MAP_TYPE_RINGBUF);
> > --
> > 2.47.3

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH bpf-next v2 02/11] selftests/bpf: Test passing CONST_PTR_TO_DYNPTR to kfunc that may mutate dynptr
  2026-03-11 15:26   ` Mykyta Yatsenko
@ 2026-03-11 16:38     ` Amery Hung
  2026-03-11 16:56       ` Amery Hung
  0 siblings, 1 reply; 46+ messages in thread
From: Amery Hung @ 2026-03-11 16:38 UTC (permalink / raw)
  To: Mykyta Yatsenko
  Cc: bpf, netdev, alexei.starovoitov, andrii, daniel, memxor,
	martin.lau, kernel-team

On Wed, Mar 11, 2026 at 8:26 AM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> Amery Hung <ameryhung@gmail.com> writes:
>
> > Make sure for kfunc that takes mutable dynptr argument, verifier rejects
> > passing CONST_PTR_TO_DYNPTR to it.
> >
> > Rename struct sample to test_sample to avoid a conflict with the
> > definition in vmlinux.h
> >
> > In test_kfunc_dynptr_param.c, initialize dynptr to 0 to avoid
> > -Wuninitialized-const-pointer warning.
> >
> > Signed-off-by: Amery Hung <ameryhung@gmail.com>
> > ---
> >  .../testing/selftests/bpf/progs/dynptr_fail.c | 37 +++++++++++++++----
> >  .../bpf/progs/test_kfunc_dynptr_param.c       |  2 +-
> >  2 files changed, 30 insertions(+), 9 deletions(-)
> >
> > diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c
> > index 8f2ae9640886..5e1b1cf4ea8e 100644
> > --- a/tools/testing/selftests/bpf/progs/dynptr_fail.c
> > +++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c
> > @@ -1,15 +1,14 @@
> >  // SPDX-License-Identifier: GPL-2.0
> >  /* Copyright (c) 2022 Facebook */
> >
> > +#include <vmlinux.h>
> >  #include <errno.h>
> >  #include <string.h>
> > -#include <stdbool.h>
> > -#include <linux/bpf.h>
> >  #include <bpf/bpf_helpers.h>
> >  #include <bpf/bpf_tracing.h>
> > -#include <linux/if_ether.h>
> >  #include "bpf_misc.h"
> >  #include "bpf_kfuncs.h"
> > +#include "../test_kmods/bpf_testmod_kfunc.h"
> >
> >  char _license[] SEC("license") = "GPL";
> >
> > @@ -46,7 +45,7 @@ struct {
> >       __type(value, __u64);
> >  } array_map4 SEC(".maps");
> >
> > -struct sample {
> > +struct test_sample {
> >       int pid;
> >       long value;
> >       char comm[16];
> > @@ -95,7 +94,7 @@ __failure __msg("Unreleased reference id=4")
> >  int ringbuf_missing_release2(void *ctx)
> >  {
> >       struct bpf_dynptr ptr1, ptr2;
> > -     struct sample *sample;
> > +     struct test_sample *sample;
> >
> >       bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(*sample), 0, &ptr1);
> >       bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(*sample), 0, &ptr2);
> > @@ -173,7 +172,7 @@ __failure __msg("type=mem expected=ringbuf_mem")
> >  int ringbuf_invalid_api(void *ctx)
> >  {
> >       struct bpf_dynptr ptr;
> > -     struct sample *sample;
> > +     struct test_sample *sample;
> >
> >       bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(*sample), 0, &ptr);
> >       sample = bpf_dynptr_data(&ptr, 0, sizeof(*sample));
> > @@ -315,7 +314,7 @@ __failure __msg("invalid mem access 'scalar'")
> >  int data_slice_use_after_release1(void *ctx)
> >  {
> >       struct bpf_dynptr ptr;
> > -     struct sample *sample;
> > +     struct test_sample *sample;
> >
> >       bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(*sample), 0, &ptr);
> >       sample = bpf_dynptr_data(&ptr, 0, sizeof(*sample));
> > @@ -347,7 +346,7 @@ __failure __msg("invalid mem access 'scalar'")
> >  int data_slice_use_after_release2(void *ctx)
> >  {
> >       struct bpf_dynptr ptr1, ptr2;
> > -     struct sample *sample;
> > +     struct test_sample *sample;
> >
> >       bpf_ringbuf_reserve_dynptr(&ringbuf, 64, 0, &ptr1);
> >       bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(*sample), 0, &ptr2);
> > @@ -1993,3 +1992,25 @@ int test_dynptr_reg_type(void *ctx)
> >       global_call_bpf_dynptr((const struct bpf_dynptr *)current);
> >       return 0;
> >  }
> > +
> > +/* Cannot pass CONST_PTR_TO_DYNPTR to bpf_kfunc_dynptr_test() that may mutate the dynptr */
> > +__noinline int global_subprog_dynptr_mutable(const struct bpf_dynptr *dynptr)
> > +{
> > +     long ret = 0;
> Why do we need this long ret? Do we even need this function at all, why
> not calling bpf_kfunc_dynptr_test() directly from the
> kfunc_dynptr_const_to_mutable()?
> > +
> > +     /* this should fail */
> > +     bpf_kfunc_dynptr_test((struct bpf_dynptr *)dynptr, NULL);
> > +     __sink(ret);
> > +     return ret;
> > +}
> > +
> > +SEC("tc")
> nit: it looks like most of the programs in this file are optional:
> SEC("?tc").

I will make it SEC("?tc").

> > +__failure __msg("cannot pass pointer to const bpf_dynptr, the helper mutates it")
> > +int kfunc_dynptr_const_to_mutable(struct __sk_buff *skb)
> > +{
> > +     struct bpf_dynptr data;
> > +
> > +     bpf_dynptr_from_skb(skb, 0, &data);
> > +     global_subprog_dynptr_mutable(&data);
> > +     return 0;
> > +}
> > diff --git a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
> > index c3631fd41977..1c6cfd0888ba 100644
> > --- a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
> > +++ b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
> > @@ -33,7 +33,7 @@ SEC("?lsm.s/bpf")
> >  __failure __msg("cannot pass in dynptr at an offset=-8")
> >  int BPF_PROG(not_valid_dynptr, int cmd, union bpf_attr *attr, unsigned int size, bool kernel)
> >  {
> > -     unsigned long val;
> > +     unsigned long val = 0;
> >
> >       return bpf_verify_pkcs7_signature((struct bpf_dynptr *)&val,
> >                                         (struct bpf_dynptr *)&val, NULL);
> > --
> > 2.47.3

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH bpf-next v2 02/11] selftests/bpf: Test passing CONST_PTR_TO_DYNPTR to kfunc that may mutate dynptr
  2026-03-11 16:38     ` Amery Hung
@ 2026-03-11 16:56       ` Amery Hung
  0 siblings, 0 replies; 46+ messages in thread
From: Amery Hung @ 2026-03-11 16:56 UTC (permalink / raw)
  To: Mykyta Yatsenko
  Cc: bpf, netdev, alexei.starovoitov, andrii, daniel, memxor,
	martin.lau, kernel-team

On Wed, Mar 11, 2026 at 9:38 AM Amery Hung <ameryhung@gmail.com> wrote:
>
> On Wed, Mar 11, 2026 at 8:26 AM Mykyta Yatsenko
> <mykyta.yatsenko5@gmail.com> wrote:
> >
> > Amery Hung <ameryhung@gmail.com> writes:
> >
> > > Make sure for kfunc that takes mutable dynptr argument, verifier rejects
> > > passing CONST_PTR_TO_DYNPTR to it.
> > >
> > > Rename struct sample to test_sample to avoid a conflict with the
> > > definition in vmlinux.h
> > >
> > > In test_kfunc_dynptr_param.c, initialize dynptr to 0 to avoid
> > > -Wuninitialized-const-pointer warning.
> > >
> > > Signed-off-by: Amery Hung <ameryhung@gmail.com>
> > > ---
> > >  .../testing/selftests/bpf/progs/dynptr_fail.c | 37 +++++++++++++++----
> > >  .../bpf/progs/test_kfunc_dynptr_param.c       |  2 +-
> > >  2 files changed, 30 insertions(+), 9 deletions(-)
> > >
> > > diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c
> > > index 8f2ae9640886..5e1b1cf4ea8e 100644
> > > --- a/tools/testing/selftests/bpf/progs/dynptr_fail.c
> > > +++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c
> > > @@ -1,15 +1,14 @@
> > >  // SPDX-License-Identifier: GPL-2.0
> > >  /* Copyright (c) 2022 Facebook */
> > >
> > > +#include <vmlinux.h>
> > >  #include <errno.h>
> > >  #include <string.h>
> > > -#include <stdbool.h>
> > > -#include <linux/bpf.h>
> > >  #include <bpf/bpf_helpers.h>
> > >  #include <bpf/bpf_tracing.h>
> > > -#include <linux/if_ether.h>
> > >  #include "bpf_misc.h"
> > >  #include "bpf_kfuncs.h"
> > > +#include "../test_kmods/bpf_testmod_kfunc.h"
> > >
> > >  char _license[] SEC("license") = "GPL";
> > >
> > > @@ -46,7 +45,7 @@ struct {
> > >       __type(value, __u64);
> > >  } array_map4 SEC(".maps");
> > >
> > > -struct sample {
> > > +struct test_sample {
> > >       int pid;
> > >       long value;
> > >       char comm[16];
> > > @@ -95,7 +94,7 @@ __failure __msg("Unreleased reference id=4")
> > >  int ringbuf_missing_release2(void *ctx)
> > >  {
> > >       struct bpf_dynptr ptr1, ptr2;
> > > -     struct sample *sample;
> > > +     struct test_sample *sample;
> > >
> > >       bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(*sample), 0, &ptr1);
> > >       bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(*sample), 0, &ptr2);
> > > @@ -173,7 +172,7 @@ __failure __msg("type=mem expected=ringbuf_mem")
> > >  int ringbuf_invalid_api(void *ctx)
> > >  {
> > >       struct bpf_dynptr ptr;
> > > -     struct sample *sample;
> > > +     struct test_sample *sample;
> > >
> > >       bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(*sample), 0, &ptr);
> > >       sample = bpf_dynptr_data(&ptr, 0, sizeof(*sample));
> > > @@ -315,7 +314,7 @@ __failure __msg("invalid mem access 'scalar'")
> > >  int data_slice_use_after_release1(void *ctx)
> > >  {
> > >       struct bpf_dynptr ptr;
> > > -     struct sample *sample;
> > > +     struct test_sample *sample;
> > >
> > >       bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(*sample), 0, &ptr);
> > >       sample = bpf_dynptr_data(&ptr, 0, sizeof(*sample));
> > > @@ -347,7 +346,7 @@ __failure __msg("invalid mem access 'scalar'")
> > >  int data_slice_use_after_release2(void *ctx)
> > >  {
> > >       struct bpf_dynptr ptr1, ptr2;
> > > -     struct sample *sample;
> > > +     struct test_sample *sample;
> > >
> > >       bpf_ringbuf_reserve_dynptr(&ringbuf, 64, 0, &ptr1);
> > >       bpf_ringbuf_reserve_dynptr(&ringbuf, sizeof(*sample), 0, &ptr2);
> > > @@ -1993,3 +1992,25 @@ int test_dynptr_reg_type(void *ctx)
> > >       global_call_bpf_dynptr((const struct bpf_dynptr *)current);
> > >       return 0;
> > >  }
> > > +
> > > +/* Cannot pass CONST_PTR_TO_DYNPTR to bpf_kfunc_dynptr_test() that may mutate the dynptr */
> > > +__noinline int global_subprog_dynptr_mutable(const struct bpf_dynptr *dynptr)
> > > +{
> > > +     long ret = 0;
> > Why do we need this long ret? Do we even need this function at all, why
> > not calling bpf_kfunc_dynptr_test() directly from the
> > kfunc_dynptr_const_to_mutable()?

Oops. Will remove ret.

IIUC, this global subprog is needed so that the argument will be
CONST_PTR_TO_DYNPTR. The verifier will see PTR_TO_STACK if &data is
passed directly to bpf_kfunc_dynptr_test().

> > > +
> > > +     /* this should fail */
> > > +     bpf_kfunc_dynptr_test((struct bpf_dynptr *)dynptr, NULL);
> > > +     __sink(ret);
> > > +     return ret;
> > > +}
> > > +
> > > +SEC("tc")
> > nit: it looks like most of the programs in this file are optional:
> > SEC("?tc").
>
> I will make it SEC("?tc").
>
> > > +__failure __msg("cannot pass pointer to const bpf_dynptr, the helper mutates it")
> > > +int kfunc_dynptr_const_to_mutable(struct __sk_buff *skb)
> > > +{
> > > +     struct bpf_dynptr data;
> > > +
> > > +     bpf_dynptr_from_skb(skb, 0, &data);
> > > +     global_subprog_dynptr_mutable(&data);
> > > +     return 0;
> > > +}
> > > diff --git a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
> > > index c3631fd41977..1c6cfd0888ba 100644
> > > --- a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
> > > +++ b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
> > > @@ -33,7 +33,7 @@ SEC("?lsm.s/bpf")
> > >  __failure __msg("cannot pass in dynptr at an offset=-8")
> > >  int BPF_PROG(not_valid_dynptr, int cmd, union bpf_attr *attr, unsigned int size, bool kernel)
> > >  {
> > > -     unsigned long val;
> > > +     unsigned long val = 0;
> > >
> > >       return bpf_verify_pkcs7_signature((struct bpf_dynptr *)&val,
> > >                                         (struct bpf_dynptr *)&val, NULL);
> > > --
> > > 2.47.3

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH bpf-next v2 03/11] bpf: Unify dynptr handling in the verifier
  2026-03-11 16:03   ` Mykyta Yatsenko
@ 2026-03-11 17:23     ` Amery Hung
  2026-03-11 22:22       ` Mykyta Yatsenko
  0 siblings, 1 reply; 46+ messages in thread
From: Amery Hung @ 2026-03-11 17:23 UTC (permalink / raw)
  To: Mykyta Yatsenko
  Cc: bpf, netdev, alexei.starovoitov, andrii, daniel, memxor,
	martin.lau, kernel-team

On Wed, Mar 11, 2026 at 9:03 AM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> Amery Hung <ameryhung@gmail.com> writes:
>
> > Simplify dynptr checking for helper and kfunc by unifying it. Remember
> > initialized dynptr in process_dynptr_func() so that we can easily
> > retrieve the information for verification later.
> >
> > Signed-off-by: Amery Hung <ameryhung@gmail.com>
> > ---
> >  kernel/bpf/verifier.c | 179 +++++++++---------------------------------
> >  1 file changed, 36 insertions(+), 143 deletions(-)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 0f77c4c5b510..d52780962adb 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -277,8 +277,15 @@ struct bpf_map_desc {
> >       int uid;
> >  };
> >
> > +struct bpf_dynptr_desc {
> > +     enum bpf_dynptr_type type;
> > +     u32 id;
> > +     u32 ref_obj_id;
> nit: let's add a comment here explaining what this field is for.

We are about to change the meaning of id and ref_obj_id. I can add
comments explaining id, ref_obj_id and parent_id in the refactor patch
(#6). That said, the meaning of these fields applies to all objects
tracked by the verifier, not just dynptrs, and is already documented
where bpf_reg_state is defined in include/linux/bpf_verifier.h. Can
you share a bit more about what info you are looking for?

> > +};
> > +
> >  struct bpf_call_arg_meta {
> >       struct bpf_map_desc map;
> > +     struct bpf_dynptr_desc initialized_dynptr;
> >       bool raw_mode;
> >       bool pkt_access;
> >       u8 release_regno;
> > @@ -287,7 +294,6 @@ struct bpf_call_arg_meta {
> >       int mem_size;
> >       u64 msize_max_value;
> >       int ref_obj_id;
> > -     int dynptr_id;
> >       int func_id;
> >       struct btf *btf;
> >       u32 btf_id;
> > @@ -346,16 +352,12 @@ struct bpf_kfunc_call_arg_meta {
> >       struct {
> >               struct btf_field *field;
> >       } arg_rbtree_root;
> > -     struct {
> > -             enum bpf_dynptr_type type;
> > -             u32 id;
> > -             u32 ref_obj_id;
> > -     } initialized_dynptr;
> >       struct {
> >               u8 spi;
> >               u8 frameno;
> >       } iter;
> >       struct bpf_map_desc map;
> > +     struct bpf_dynptr_desc initialized_dynptr;
> >       u64 mem_size;
> >  };
> >
> > @@ -511,11 +513,6 @@ static bool is_ptr_cast_function(enum bpf_func_id func_id)
> >               func_id == BPF_FUNC_skc_to_tcp_request_sock;
> >  }
> >
> > -static bool is_dynptr_ref_function(enum bpf_func_id func_id)
> > -{
> > -     return func_id == BPF_FUNC_dynptr_data;
> > -}
> > -
> >  static bool is_sync_callback_calling_kfunc(u32 btf_id);
> >  static bool is_async_callback_calling_kfunc(u32 btf_id);
> >  static bool is_callback_calling_kfunc(u32 btf_id);
> > @@ -597,8 +594,6 @@ static bool helper_multiple_ref_obj_use(enum bpf_func_id func_id,
> >               ref_obj_uses++;
> >       if (is_acquire_function(func_id, map))
> >               ref_obj_uses++;
> > -     if (is_dynptr_ref_function(func_id))
> > -             ref_obj_uses++;
> >
> >       return ref_obj_uses > 1;
> >  }
> > @@ -8750,7 +8745,8 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
> >   * type, and declare it as 'const struct bpf_dynptr *' in their prototype.
> >   */
> >  static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn_idx,
> > -                            enum bpf_arg_type arg_type, int clone_ref_obj_id)
> > +                            enum bpf_arg_type arg_type, int clone_ref_obj_id,
> > +                            struct bpf_dynptr_desc *initialized_dynptr)
> >  {
> >       struct bpf_reg_state *reg = reg_state(env, regno);
> >       int err;
> > @@ -8825,6 +8821,20 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
> >               }
> >
> >               err = mark_dynptr_read(env, reg);
> > +
> > +             if (initialized_dynptr) {
> > +                     struct bpf_func_state *state = func(env, reg);
> state is only used if reg->type != CONST_PTR_TO_DYNPTR; does it make
> sense to move state = func(env, reg); into the corresponding if block?

I think this is fine. It looks less cluttered this way.

> > +                     int spi;
> > +
> > +                     if (reg->type != CONST_PTR_TO_DYNPTR) {
> > +                             spi = dynptr_get_spi(env, reg);
> looking at the deleted dynptr_id() and dynptr_ref_obj_id(), spi can be
> negative; what changed here so that we no longer need this check?

is_dynptr_reg_valid_init() above already makes sure reg points to a
valid dynptr, so we don't need to check it again.

> > +                             reg = &state->stack[spi].spilled_ptr;
> > +                     }
> > +
> > +                     initialized_dynptr->id = reg->id;
> > +                     initialized_dynptr->type = reg->dynptr.type;
> > +                     initialized_dynptr->ref_obj_id = reg->ref_obj_id;
> > +             }
> >       }
> >       return err;
> >  }
> > @@ -9587,72 +9597,6 @@ static int check_func_arg_reg_off(struct bpf_verifier_env *env,
> >       }
> >  }
> >
> > -static struct bpf_reg_state *get_dynptr_arg_reg(struct bpf_verifier_env *env,
> > -                                             const struct bpf_func_proto *fn,
> > -                                             struct bpf_reg_state *regs)
> > -{
> > -     struct bpf_reg_state *state = NULL;
> > -     int i;
> > -
> > -     for (i = 0; i < MAX_BPF_FUNC_REG_ARGS; i++)
> > -             if (arg_type_is_dynptr(fn->arg_type[i])) {
> > -                     if (state) {
> > -                             verbose(env, "verifier internal error: multiple dynptr args\n");
> > -                             return NULL;
> > -                     }
> > -                     state = &regs[BPF_REG_1 + i];
> > -             }
> > -
> > -     if (!state)
> > -             verbose(env, "verifier internal error: no dynptr arg found\n");
> > -
> > -     return state;
> > -}
> > -
> > -static int dynptr_id(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> > -{
> > -     struct bpf_func_state *state = func(env, reg);
> > -     int spi;
> > -
> > -     if (reg->type == CONST_PTR_TO_DYNPTR)
> > -             return reg->id;
> > -     spi = dynptr_get_spi(env, reg);
> > -     if (spi < 0)
> > -             return spi;
> > -     return state->stack[spi].spilled_ptr.id;
> > -}
> > -
> > -static int dynptr_ref_obj_id(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> > -{
> > -     struct bpf_func_state *state = func(env, reg);
> > -     int spi;
> > -
> > -     if (reg->type == CONST_PTR_TO_DYNPTR)
> > -             return reg->ref_obj_id;
> > -     spi = dynptr_get_spi(env, reg);
> > -     if (spi < 0)
> > -             return spi;
> > -     return state->stack[spi].spilled_ptr.ref_obj_id;
> > -}
> > -
> > -static enum bpf_dynptr_type dynptr_get_type(struct bpf_verifier_env *env,
> > -                                         struct bpf_reg_state *reg)
> > -{
> > -     struct bpf_func_state *state = func(env, reg);
> > -     int spi;
> > -
> > -     if (reg->type == CONST_PTR_TO_DYNPTR)
> > -             return reg->dynptr.type;
> > -
> > -     spi = __get_spi(reg->var_off.value);
> > -     if (spi < 0) {
> > -             verbose(env, "verifier internal error: invalid spi when querying dynptr type\n");
> > -             return BPF_DYNPTR_TYPE_INVALID;
> > -     }
> > -
> > -     return state->stack[spi].spilled_ptr.dynptr.type;
> > -}
> > -
> >  static int check_reg_const_str(struct bpf_verifier_env *env,
> >                              struct bpf_reg_state *reg, u32 regno)
> >  {
> > @@ -10007,7 +9951,8 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
> >                                        true, meta);
> >               break;
> >       case ARG_PTR_TO_DYNPTR:
> > -             err = process_dynptr_func(env, regno, insn_idx, arg_type, 0);
> > +             err = process_dynptr_func(env, regno, insn_idx, arg_type, 0,
> > +                                       &meta->initialized_dynptr);
> >               if (err)
> >                       return err;
> >               break;
> > @@ -10666,7 +10611,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
> >                       if (ret)
> >                               return ret;
> >
> > -                     ret = process_dynptr_func(env, regno, -1, arg->arg_type, 0);
> > +                     ret = process_dynptr_func(env, regno, -1, arg->arg_type, 0, NULL);
> >                       if (ret)
> >                               return ret;
> >               } else if (base_type(arg->arg_type) == ARG_PTR_TO_BTF_ID) {
> > @@ -11771,52 +11716,10 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
> >                       }
> >               }
> >               break;
> > -     case BPF_FUNC_dynptr_data:
> > -     {
> > -             struct bpf_reg_state *reg;
> > -             int id, ref_obj_id;
> > -
> > -             reg = get_dynptr_arg_reg(env, fn, regs);
> > -             if (!reg)
> > -                     return -EFAULT;
> > -
> > -
> > -             if (meta.dynptr_id) {
> > -                     verifier_bug(env, "meta.dynptr_id already set");
> > -                     return -EFAULT;
> > -             }
> > -             if (meta.ref_obj_id) {
> > -                     verifier_bug(env, "meta.ref_obj_id already set");
> > -                     return -EFAULT;
> > -             }
> > -
> > -             id = dynptr_id(env, reg);
> > -             if (id < 0) {
> > -                     verifier_bug(env, "failed to obtain dynptr id");
> > -                     return id;
> > -             }
> > -
> > -             ref_obj_id = dynptr_ref_obj_id(env, reg);
> > -             if (ref_obj_id < 0) {
> > -                     verifier_bug(env, "failed to obtain dynptr ref_obj_id");
> > -                     return ref_obj_id;
> > -             }
> > -
> > -             meta.dynptr_id = id;
> > -             meta.ref_obj_id = ref_obj_id;
> > -
> > -             break;
> > -     }
> >       case BPF_FUNC_dynptr_write:
> >       {
> > -             enum bpf_dynptr_type dynptr_type;
> > -             struct bpf_reg_state *reg;
> > -
> > -             reg = get_dynptr_arg_reg(env, fn, regs);
> > -             if (!reg)
> > -                     return -EFAULT;
> > +             enum bpf_dynptr_type dynptr_type = meta.initialized_dynptr.type;
> >
> > -             dynptr_type = dynptr_get_type(env, reg);
> >               if (dynptr_type == BPF_DYNPTR_TYPE_INVALID)
> >                       return -EFAULT;
> >
> > @@ -12007,10 +11910,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
> >               return -EFAULT;
> >       }
> >
> > -     if (is_dynptr_ref_function(func_id))
> > -             regs[BPF_REG_0].dynptr_id = meta.dynptr_id;
> > -
> > -     if (is_ptr_cast_function(func_id) || is_dynptr_ref_function(func_id)) {
> > +     if (is_ptr_cast_function(func_id)) {
> >               /* For release_reference() */
> >               regs[BPF_REG_0].ref_obj_id = meta.ref_obj_id;
> >       } else if (is_acquire_function(func_id, meta.map.ptr)) {
> > @@ -12024,6 +11924,11 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
> >               regs[BPF_REG_0].ref_obj_id = id;
> >       }
> >
> > +     if (func_id == BPF_FUNC_dynptr_data) {
> > +             regs[BPF_REG_0].dynptr_id = meta.initialized_dynptr.id;
> > +             regs[BPF_REG_0].ref_obj_id = meta.initialized_dynptr.ref_obj_id;
> > +     }
> > +
> >       err = do_refine_retval_range(env, regs, fn->ret_type, func_id, &meta);
> >       if (err)
> >               return err;
> > @@ -13559,22 +13464,10 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
> >                               }
> >                       }
> >
> > -                     ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id);
> > +                     ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id,
> > +                                               &meta->initialized_dynptr);
> >                       if (ret < 0)
> >                               return ret;
> > -
> > -                     if (!(dynptr_arg_type & MEM_UNINIT)) {
> > -                             int id = dynptr_id(env, reg);
> > -
> > -                             if (id < 0) {
> > -                                     verifier_bug(env, "failed to obtain dynptr id");
> > -                                     return id;
> > -                             }
> > -                             meta->initialized_dynptr.id = id;
> > -                             meta->initialized_dynptr.type = dynptr_get_type(env, reg);
> > -                             meta->initialized_dynptr.ref_obj_id = dynptr_ref_obj_id(env, reg);
> > -                     }
> > -
> >                       break;
> >               }
> >               case KF_ARG_PTR_TO_ITER:
> > --
> > 2.47.3

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH bpf-next v2 00/11] Dynptr cleanup and bugfixes
  2026-03-07  6:44 [RFC PATCH bpf-next v2 00/11] Dynptr cleanup and bugfixes Amery Hung
                   ` (10 preceding siblings ...)
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 11/11] selftests/bpf: Test using file dynptr after the reference on file is dropped Amery Hung
@ 2026-03-11 19:38 ` Andrii Nakryiko
  2026-03-13 20:49   ` Amery Hung
  11 siblings, 1 reply; 46+ messages in thread
From: Andrii Nakryiko @ 2026-03-11 19:38 UTC (permalink / raw)
  To: Amery Hung
  Cc: bpf, netdev, alexei.starovoitov, andrii, daniel, memxor,
	martin.lau, kernel-team

On Fri, Mar 6, 2026 at 10:44 PM Amery Hung <ameryhung@gmail.com> wrote:
>
> This patchset (1) cleans up dynptr handling (2) refactors object parent-
> child relationship tracking to make it more precise and (3) fixes dynptr
> UAF bug due to a missing link between dynptr and the parent referenced
> object in the verifier.
>
> This patchset will make dynptr track its parent object. In bpf qdisc
> programs, an skb may be freed through kfuncs. However, since dynptr
> currently does not track the parent referenced object (e.g., skb), the
> verifier will not invalidate the dynptr after the skb is freed,
> resulting in use-after-free. A similar issue also affects file dynptr.
> To solve the issue, we need to track the parent skb in the derived
> dynptr and slices.
>
> However, we need to refactor the verifier's object tracking mechanism
> first because id and ref_obj_id cannot easily express more than a simple
> object relationship. To illustrate this, we use the example shown in
> the figure below.
>
> Before: object (id,ref_obj_id,dynptr_id)
>   id         = id of the object (used for nullness tracking)
>   ref_obj_id = id of the underlying referenced object (used for lifetime
>                tracking)
>   dynptr_id  = id of the parent dynptr of the slice (used for tracking
>                parent dynptr, only for PTR_TO_MEM)
>
>                       skb (0,1,0)
>                              ^ (try to link dynptr to parent ref_obj_id)
>                              +-------------------------------+
>                              |           bpf_dynptr_clone    |
>                  dynptr A (2,1,0)                dynptr C (4,1,0)
>                            ^                               ^
>         bpf_dynptr_slice   |                               |
>                            |                               |
>               slice B (3,1,2)                 slice D (5,1,4)
>                          ^
>     bpf_dynptr_from_mem  |
>     (NOT allowed yet)    |
>              dynptr E (6,1,0)

Ugh... This cover letter is... intimidating. It's good to have all
this information, but for someone who didn't whiteboard this with you,
I think it's a bit too hard and overwhelming to comprehend. You are
also intermingling both problem statements and possible/actual
solution (and problems with earlier possible solutions) all in the
same go.

May I suggest a bit of restructuring? This diagram you have here is a
great start. I'd use it to "set the scene" and explain an example we
are going to look at first. (Keep all the IDs, but mention that they
will be more relevant a bit later and reader shouldn't concentrate on
them just yet), and just explain that in BPF we have this potential
hierarchy of interdependent things that have related lifetimes. When
skb is released, all dynptrs and slices derived from those should be
released. But also mention that it can't be all-or-nothing, in the
sense that if dynptr A is "released", skb and dynptr C should still be
valid.

And it's currently not the case. That's the problem we are trying to
solve. At this point you might use those IDs to explain why we can't
solve the release problems with the way we use id and ref_obj_id.

Then explain the idea of parent_id and how it fixes this hierarchy
reconstruction problem.

But then, mention that socket casting problem which introduces shared
lifetime while objects are actually semi-independent (from verifier
POV) due to independent NULL-ness. Which makes parent_id not enough
and we still need ref_obj_id (maybe we should rename it to
lifetime_id, don't know).

In short, I think there is a clear logical story here, but your cover
letter hides it a bit behind the wall of dense text, which is hard to
get through initially.

>
> Let's first try to fix the bug by letting dynptr track the parent skb
> using ref_obj_id and propagating the ref_obj_id to slices so that when
> the skb goes away the derived dynptrs and slices will also be
> invalidated. However, if dynptr A is destroyed by overwriting the stack
> slot, release_reference(ref_obj_id=1) would be called and all nodes would
> be invalidated. The correct handling should leave skb, dynptr C, and
> slice D intact, since a non-referenced dynptr clone's lifetime does not
> need to be tied to the original dynptr. This was not a problem before,
> since a dynptr created from an skb has ref_obj_id = 0. In the future, if
> we start allowing creating a dynptr from a slice, the current design also
> cannot correctly handle the removal of dynptr E. All objects would be
> incorrectly invalidated instead of only invalidating the children of
> dynptr E. While it is possible to solve the issue by adding more
> specialized handling in the dynptr paths [0], it creates more complexity.
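The two release semantics being contrasted above can be sketched as a tiny runnable model in plain C. Everything here (struct obj, the two release functions, parent_id as an explicit field) is hypothetical scaffolding for illustration only, not the verifier's actual representation:

```c
#include <assert.h>	/* used by the usage example below */
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical toy model of the objects in the figure:
 * skb -> dynptr A -> slice B, and clone dynptr C -> slice D. */
struct obj {
	int id;          /* object id */
	int ref_obj_id;  /* lifetime id of the referenced object (skb) */
	int parent_id;   /* id of the direct parent, 0 if none */
	bool valid;
};

/* Strawman: release by shared ref_obj_id. Destroying dynptr A this way
 * also invalidates skb, dynptr C and slice D -- the over-invalidation
 * problem described above. */
static void release_by_ref_obj_id(struct obj *o, size_t n, int ref_obj_id)
{
	for (size_t i = 0; i < n; i++)
		if (o[i].ref_obj_id == ref_obj_id)
			o[i].valid = false;
}

/* Parent-based release: invalidate the object and its transitive
 * children only, leaving siblings (the clone and its slice) intact. */
static void release_subtree(struct obj *o, size_t n, int id)
{
	bool changed = true;

	for (size_t i = 0; i < n; i++)
		if (o[i].id == id)
			o[i].valid = false;

	while (changed) {	/* propagate invalidation down the tree */
		changed = false;
		for (size_t i = 0; i < n; i++) {
			if (!o[i].valid || !o[i].parent_id)
				continue;
			for (size_t j = 0; j < n; j++)
				if (o[j].id == o[i].parent_id && !o[j].valid) {
					o[i].valid = false;
					changed = true;
				}
		}
	}
}
```

With the five objects from the figure, destroying dynptr A via release_subtree() leaves skb, dynptr C and slice D valid, whereas release_by_ref_obj_id() wipes out all five.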
>

[...]


* Re: [RFC PATCH bpf-next v2 01/11] bpf: Set kfunc dynptr arg type flag based on prototype
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 01/11] bpf: Set kfunc dynptr arg type flag based on prototype Amery Hung
  2026-03-11 14:47   ` Mykyta Yatsenko
@ 2026-03-11 19:43   ` Andrii Nakryiko
  2026-03-11 20:01     ` Amery Hung
  2026-03-16 20:57   ` Eduard Zingerman
  2 siblings, 1 reply; 46+ messages in thread
From: Andrii Nakryiko @ 2026-03-11 19:43 UTC (permalink / raw)
  To: Amery Hung
  Cc: bpf, netdev, alexei.starovoitov, andrii, daniel, memxor,
	martin.lau, kernel-team

On Fri, Mar 6, 2026 at 10:44 PM Amery Hung <ameryhung@gmail.com> wrote:
>
> The verifier should decide whether a dynptr argument is read-only
> based on if the type is "const struct bpf_dynptr *", not the type of
> the register passed to the kfunc. This currently does not cause issues
> because existing kfuncs that mutate struct bpf_dynptr are constructors
> (e.g., bpf_dynptr_from_xxx and bpf_dynptr_clone). These kfuncs have an
> additional check in process_dynptr_func() to make sure the stack slot
> does not contain an initialized dynptr. Nonetheless, this should still be
> fixed to avoid future issues when there is a non-constructor dynptr
> kfunc that can mutate a dynptr. This is also a small step toward unifying
> kfunc and helper handling in the verifier, where the first step is to
> generate a kfunc prototype similar to bpf_func_proto before the main
> verification loop.
>
> We also need to correctly mark some kfunc arguments as "const struct
> bpf_dynptr *" to align with other kfuncs that take a non-mutable dynptr
> argument and to not break their usage. Adding the const qualifier does
> not break backward compatibility.
>
> Signed-off-by: Amery Hung <ameryhung@gmail.com>
> ---
>  fs/verity/measure.c                            |  2 +-
>  include/linux/bpf.h                            |  8 ++++----
>  kernel/bpf/helpers.c                           | 10 +++++-----
>  kernel/bpf/verifier.c                          | 18 +++++++++++++++++-
>  kernel/trace/bpf_trace.c                       | 18 +++++++++---------
>  tools/testing/selftests/bpf/bpf_kfuncs.h       |  6 +++---
>  .../selftests/bpf/progs/dynptr_success.c       |  6 +++---
>  .../bpf/progs/test_kfunc_dynptr_param.c        |  7 +------
>  8 files changed, 43 insertions(+), 32 deletions(-)
>
> diff --git a/fs/verity/measure.c b/fs/verity/measure.c
> index 6a35623ebdf0..3840436e4510 100644
> --- a/fs/verity/measure.c
> +++ b/fs/verity/measure.c
> @@ -118,7 +118,7 @@ __bpf_kfunc_start_defs();
>   *
>   * Return: 0 on success, a negative value on error.
>   */
> -__bpf_kfunc int bpf_get_fsverity_digest(struct file *file, struct bpf_dynptr *digest_p)
> +__bpf_kfunc int bpf_get_fsverity_digest(struct file *file, const struct bpf_dynptr *digest_p)

but kfunc is writing into digest_p, so that const is wrong?...

>  {
>         struct bpf_dynptr_kern *digest_ptr = (struct bpf_dynptr_kern *)digest_p;
>         const struct inode *inode = file_inode(file);

[...]

> index 6eb6c82ed2ee..3d44896587ac 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -3000,8 +3000,8 @@ __bpf_kfunc int bpf_dynptr_clone(const struct bpf_dynptr *p,
>   * Copies data from source dynptr to destination dynptr.
>   * Returns 0 on success; negative error, otherwise.
>   */
> -__bpf_kfunc int bpf_dynptr_copy(struct bpf_dynptr *dst_ptr, u64 dst_off,
> -                               struct bpf_dynptr *src_ptr, u64 src_off, u64 size)
> +__bpf_kfunc int bpf_dynptr_copy(const struct bpf_dynptr *dst_ptr, u64 dst_off,

again, dst_ptr clearly is modifiable because we are copying data into it.

What am I missing, why is this logically correct?

(I understand that from purely C type system POV this is fine, because
we don't modify bpf_dynptr struct itself on the stack, but bpf_dynptr
is a representation of some memory, and if we are modifying this
memory, then I think it should be not marked as const)

> +                               const struct bpf_dynptr *src_ptr, u64 src_off, u64 size)
>  {
>         struct bpf_dynptr_kern *dst = (struct bpf_dynptr_kern *)dst_ptr;
>         struct bpf_dynptr_kern *src = (struct bpf_dynptr_kern *)src_ptr;

[...]


* Re: [RFC PATCH bpf-next v2 03/11] bpf: Unify dynptr handling in the verifier
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 03/11] bpf: Unify dynptr handling in the verifier Amery Hung
  2026-03-11 16:03   ` Mykyta Yatsenko
@ 2026-03-11 19:57   ` Andrii Nakryiko
  2026-03-11 20:16     ` Amery Hung
  2026-03-16 22:52   ` Eduard Zingerman
  2 siblings, 1 reply; 46+ messages in thread
From: Andrii Nakryiko @ 2026-03-11 19:57 UTC (permalink / raw)
  To: Amery Hung
  Cc: bpf, netdev, alexei.starovoitov, andrii, daniel, memxor,
	martin.lau, kernel-team

On Fri, Mar 6, 2026 at 10:44 PM Amery Hung <ameryhung@gmail.com> wrote:
>
> Simplify dynptr checking for helper and kfunc by unifying it. Remember
> initialized dynptr in process_dynptr_func() so that we can easily
> retrieve the information for verification later.

it would help to call out why all those checks you are removing are
not needed anymore

>
> Signed-off-by: Amery Hung <ameryhung@gmail.com>
> ---
>  kernel/bpf/verifier.c | 179 +++++++++---------------------------------
>  1 file changed, 36 insertions(+), 143 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 0f77c4c5b510..d52780962adb 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -277,8 +277,15 @@ struct bpf_map_desc {
>         int uid;
>  };
>
> +struct bpf_dynptr_desc {
> +       enum bpf_dynptr_type type;
> +       u32 id;
> +       u32 ref_obj_id;
> +};
> +
>  struct bpf_call_arg_meta {
>         struct bpf_map_desc map;
> +       struct bpf_dynptr_desc initialized_dynptr;

nit: let's drop "initialized_" prefix? so verbose

[...]

> @@ -511,11 +513,6 @@ static bool is_ptr_cast_function(enum bpf_func_id func_id)
>                 func_id == BPF_FUNC_skc_to_tcp_request_sock;
>  }
>
> -static bool is_dynptr_ref_function(enum bpf_func_id func_id)
> -{
> -       return func_id == BPF_FUNC_dynptr_data;
> -}
> -
>  static bool is_sync_callback_calling_kfunc(u32 btf_id);
>  static bool is_async_callback_calling_kfunc(u32 btf_id);
>  static bool is_callback_calling_kfunc(u32 btf_id);
> @@ -597,8 +594,6 @@ static bool helper_multiple_ref_obj_use(enum bpf_func_id func_id,
>                 ref_obj_uses++;
>         if (is_acquire_function(func_id, map))
>                 ref_obj_uses++;
> -       if (is_dynptr_ref_function(func_id))
> -               ref_obj_uses++;

e.g., why this is fine? (because we don't use ref_obj_id for tracking
dynptrs anymore, right? would be good to call this out in the commit
message)

>
>         return ref_obj_uses > 1;
>  }

[...]

> @@ -13559,22 +13464,10 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
>                                 }
>                         }
>
> -                       ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id);
> +                       ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id,
> +                                                 &meta->initialized_dynptr);
>                         if (ret < 0)
>                                 return ret;
> -
> -                       if (!(dynptr_arg_type & MEM_UNINIT)) {

I can't fully connect MEM_UNINIT and CONST_PTR_TO_DYNPTR, this is
something that should be called out in commit message, IMO

> -                               int id = dynptr_id(env, reg);
> -
> -                               if (id < 0) {
> -                                       verifier_bug(env, "failed to obtain dynptr id");
> -                                       return id;
> -                               }
> -                               meta->initialized_dynptr.id = id;
> -                               meta->initialized_dynptr.type = dynptr_get_type(env, reg);
> -                               meta->initialized_dynptr.ref_obj_id = dynptr_ref_obj_id(env, reg);
> -                       }
> -
>                         break;
>                 }
>                 case KF_ARG_PTR_TO_ITER:
> --
> 2.47.3
>


* Re: [RFC PATCH bpf-next v2 01/11] bpf: Set kfunc dynptr arg type flag based on prototype
  2026-03-11 19:43   ` Andrii Nakryiko
@ 2026-03-11 20:01     ` Amery Hung
  2026-03-11 22:37       ` Andrii Nakryiko
  0 siblings, 1 reply; 46+ messages in thread
From: Amery Hung @ 2026-03-11 20:01 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, netdev, alexei.starovoitov, andrii, daniel, memxor,
	martin.lau, kernel-team

On Wed, Mar 11, 2026 at 12:44 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Fri, Mar 6, 2026 at 10:44 PM Amery Hung <ameryhung@gmail.com> wrote:
> >
> > The verifier should decide whether a dynptr argument is read-only
> > based on if the type is "const struct bpf_dynptr *", not the type of
> > the register passed to the kfunc. This currently does not cause issues
> > because existing kfuncs that mutate struct bpf_dynptr are constructors
> > (e.g., bpf_dynptr_from_xxx and bpf_dynptr_clone). These kfuncs have an
> > additional check in process_dynptr_func() to make sure the stack slot
> > does not contain an initialized dynptr. Nonetheless, this should still be
> > fixed to avoid future issues when there is a non-constructor dynptr
> > kfunc that can mutate a dynptr. This is also a small step toward unifying
> > kfunc and helper handling in the verifier, where the first step is to
> > generate a kfunc prototype similar to bpf_func_proto before the main
> > verification loop.
> >
> > We also need to correctly mark some kfunc arguments as "const struct
> > bpf_dynptr *" to align with other kfuncs that take a non-mutable dynptr
> > argument and to not break their usage. Adding the const qualifier does
> > not break backward compatibility.
> >
> > Signed-off-by: Amery Hung <ameryhung@gmail.com>
> > ---
> >  fs/verity/measure.c                            |  2 +-
> >  include/linux/bpf.h                            |  8 ++++----
> >  kernel/bpf/helpers.c                           | 10 +++++-----
> >  kernel/bpf/verifier.c                          | 18 +++++++++++++++++-
> >  kernel/trace/bpf_trace.c                       | 18 +++++++++---------
> >  tools/testing/selftests/bpf/bpf_kfuncs.h       |  6 +++---
> >  .../selftests/bpf/progs/dynptr_success.c       |  6 +++---
> >  .../bpf/progs/test_kfunc_dynptr_param.c        |  7 +------
> >  8 files changed, 43 insertions(+), 32 deletions(-)
> >
> > diff --git a/fs/verity/measure.c b/fs/verity/measure.c
> > index 6a35623ebdf0..3840436e4510 100644
> > --- a/fs/verity/measure.c
> > +++ b/fs/verity/measure.c
> > @@ -118,7 +118,7 @@ __bpf_kfunc_start_defs();
> >   *
> >   * Return: 0 on success, a negative value on error.
> >   */
> > -__bpf_kfunc int bpf_get_fsverity_digest(struct file *file, struct bpf_dynptr *digest_p)
> > +__bpf_kfunc int bpf_get_fsverity_digest(struct file *file, const struct bpf_dynptr *digest_p)
>
> but kfunc is writing into digest_p, so that const is wrong?...
>
> >  {
> >         struct bpf_dynptr_kern *digest_ptr = (struct bpf_dynptr_kern *)digest_p;
> >         const struct inode *inode = file_inode(file);
>
> [...]
>
> > index 6eb6c82ed2ee..3d44896587ac 100644
> > --- a/kernel/bpf/helpers.c
> > +++ b/kernel/bpf/helpers.c
> > @@ -3000,8 +3000,8 @@ __bpf_kfunc int bpf_dynptr_clone(const struct bpf_dynptr *p,
> >   * Copies data from source dynptr to destination dynptr.
> >   * Returns 0 on success; negative error, otherwise.
> >   */
> > -__bpf_kfunc int bpf_dynptr_copy(struct bpf_dynptr *dst_ptr, u64 dst_off,
> > -                               struct bpf_dynptr *src_ptr, u64 src_off, u64 size)
> > +__bpf_kfunc int bpf_dynptr_copy(const struct bpf_dynptr *dst_ptr, u64 dst_off,
>
> again, dst_ptr clearly is modifiable because we are copying data into it.
>
> What am I missing, why is this logically correct?
>
> (I understand that from purely C type system POV this is fine, because
> we don't modify bpf_dynptr struct itself on the stack, but bpf_dynptr
> is a representation of some memory, and if we are modifying this
> memory, then I think it should be not marked as const)

The patch is just a first step to make the arg type determination
independent of bpf_reg_state and make kfunc signatures consistent with
what commit 52f37c4e0f11 ("bpf: Rework process_dynptr_func") laid out.

Perhaps MEM_RDONLY is a bit misleading. In process_dynptr_func(), the
flag means the dynptr struct on the stack is immutable. Currently,
there is no way (and maybe no need?) to specify a read-only dynptr.
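As a plain-C aside (using an illustrative struct, not the kernel's real bpf_dynptr), the distinction can be made concrete: const on the descriptor pointer freezes the descriptor's own fields, while the buffer the descriptor points at remains writable:

```c
#include <assert.h>	/* used by the usage example below */
#include <string.h>

/* Hypothetical stand-in for a dynptr: a small descriptor over a buffer. */
struct fake_dynptr {
	void *data;
	unsigned int size;
};

/* "const struct fake_dynptr *" means the descriptor itself (data, size)
 * cannot be modified here, but p->data is a plain void *, so the memory
 * it points at is still writable -- analogous to bpf_dynptr_copy()'s
 * const dst argument at the C type-system level. */
static void fill(const struct fake_dynptr *p, char byte)
{
	memset(p->data, byte, p->size);
}
```

So the const qualifier constrains the dynptr struct on the stack, not the underlying memory it describes.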

>
> > +                               const struct bpf_dynptr *src_ptr, u64 src_off, u64 size)
> >  {
> >         struct bpf_dynptr_kern *dst = (struct bpf_dynptr_kern *)dst_ptr;
> >         struct bpf_dynptr_kern *src = (struct bpf_dynptr_kern *)src_ptr;
>
> [...]


* Re: [RFC PATCH bpf-next v2 03/11] bpf: Unify dynptr handling in the verifier
  2026-03-11 19:57   ` Andrii Nakryiko
@ 2026-03-11 20:16     ` Amery Hung
  0 siblings, 0 replies; 46+ messages in thread
From: Amery Hung @ 2026-03-11 20:16 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, netdev, alexei.starovoitov, andrii, daniel, memxor,
	martin.lau, kernel-team

On Wed, Mar 11, 2026 at 12:57 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Fri, Mar 6, 2026 at 10:44 PM Amery Hung <ameryhung@gmail.com> wrote:
> >
> > Simplify dynptr checking for helper and kfunc by unifying it. Remember
> > initialized dynptr in process_dynptr_func() so that we can easily
> > retrieve the information for verification later.
>
> it would help to call out why all those checks you are removing are
> not needed anymore

Mykyta also raised a similar question in another place. I will explain
any dropped checks in the commit message in the next iteration.

>
> >
> > Signed-off-by: Amery Hung <ameryhung@gmail.com>
> > ---
> >  kernel/bpf/verifier.c | 179 +++++++++---------------------------------
> >  1 file changed, 36 insertions(+), 143 deletions(-)
> >
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 0f77c4c5b510..d52780962adb 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -277,8 +277,15 @@ struct bpf_map_desc {
> >         int uid;
> >  };
> >
> > +struct bpf_dynptr_desc {
> > +       enum bpf_dynptr_type type;
> > +       u32 id;
> > +       u32 ref_obj_id;
> > +};
> > +
> >  struct bpf_call_arg_meta {
> >         struct bpf_map_desc map;
> > +       struct bpf_dynptr_desc initialized_dynptr;
>
> nit: let's drop "initialized_" prefix? so verbose

Ack.

>
> [...]
>
> > @@ -511,11 +513,6 @@ static bool is_ptr_cast_function(enum bpf_func_id func_id)
> >                 func_id == BPF_FUNC_skc_to_tcp_request_sock;
> >  }
> >
> > -static bool is_dynptr_ref_function(enum bpf_func_id func_id)
> > -{
> > -       return func_id == BPF_FUNC_dynptr_data;
> > -}
> > -
> >  static bool is_sync_callback_calling_kfunc(u32 btf_id);
> >  static bool is_async_callback_calling_kfunc(u32 btf_id);
> >  static bool is_callback_calling_kfunc(u32 btf_id);
> > @@ -597,8 +594,6 @@ static bool helper_multiple_ref_obj_use(enum bpf_func_id func_id,
> >                 ref_obj_uses++;
> >         if (is_acquire_function(func_id, map))
> >                 ref_obj_uses++;
> > -       if (is_dynptr_ref_function(func_id))
> > -               ref_obj_uses++;
>
> e.g., why this is fine? (because we don't use ref_obj_id for tracking
> dynptrs anymore, right? would be good to call this out in the commit
> message)

Thanks for the example.

>
> >
> >         return ref_obj_uses > 1;
> >  }
>
> [...]
>
> > @@ -13559,22 +13464,10 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
> >                                 }
> >                         }
> >
> > -                       ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id);
> > +                       ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id,
> > +                                                 &meta->initialized_dynptr);
> >                         if (ret < 0)
> >                                 return ret;
> > -
> > -                       if (!(dynptr_arg_type & MEM_UNINIT)) {
>
> I can't fully connect MEM_UNINIT and CONST_PTR_TO_DYNPTR, this is
> something that should be called out in commit message, IMO

Will explain in the commit message that !(dynptr_arg_type &
MEM_UNINIT) means the argument expects an initialized dynptr.

>
> > -                               int id = dynptr_id(env, reg);
> > -
> > -                               if (id < 0) {
> > -                                       verifier_bug(env, "failed to obtain dynptr id");
> > -                                       return id;
> > -                               }
> > -                               meta->initialized_dynptr.id = id;
> > -                               meta->initialized_dynptr.type = dynptr_get_type(env, reg);
> > -                               meta->initialized_dynptr.ref_obj_id = dynptr_ref_obj_id(env, reg);
> > -                       }
> > -
> >                         break;
> >                 }
> >                 case KF_ARG_PTR_TO_ITER:
> > --
> > 2.47.3
> >

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH bpf-next v2 05/11] bpf: Preserve reg->id of pointer objects after null-check
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 05/11] bpf: Preserve reg->id of pointer objects after null-check Amery Hung
@ 2026-03-11 21:55   ` Andrii Nakryiko
  2026-03-11 22:26   ` Alexei Starovoitov
  1 sibling, 0 replies; 46+ messages in thread
From: Andrii Nakryiko @ 2026-03-11 21:55 UTC (permalink / raw)
  To: Amery Hung
  Cc: bpf, netdev, alexei.starovoitov, andrii, daniel, memxor,
	martin.lau, kernel-team

On Fri, Mar 6, 2026 at 10:44 PM Amery Hung <ameryhung@gmail.com> wrote:
>
> Preserve reg->id of pointer objects after null-checking the register so
> that children objects derived from it can still refer to it in the new
> object relationship tracking mechanism introduced in a later patch. This
> change incurs a slight increase in the number of states in one selftest
> bpf object, rbtree_search.bpf.o. For Meta bpf objects, the increase of
> states is also negligible.
>
> Selftest BPF objects with insns_diff > 0
>
> Insns (A)  Insns (B)  Insns  (DIFF)  States (A)  States (B)  States (DIFF)
> ---------  ---------  -------------  ----------  ----------  -------------
>      7309       7814  +505 (+6.91%)         394         413   +19 (+4.82%)
>
> Meta BPF objects with insns_diff > 0
>
> Insns (A)  Insns (B)  Insns   (DIFF)  States (A)  States (B)  States (DIFF)
> ---------  ---------  --------------  ----------  ----------  -------------
>        52         57     +5 (+9.62%)           5           6   +1 (+20.00%)
>        52         57     +5 (+9.62%)           5           6   +1 (+20.00%)
>       676        679     +3 (+0.44%)          54          54    +0 (+0.00%)
>       289        292     +3 (+1.04%)          20          20    +0 (+0.00%)
>        78         82     +4 (+5.13%)           8           8    +0 (+0.00%)
>       252        320   +68 (+26.98%)          21          27   +6 (+28.57%)
>       252        320   +68 (+26.98%)          21          27   +6 (+28.57%)
>       119        126     +7 (+5.88%)           6           7   +1 (+16.67%)
>      1119       1128     +9 (+0.80%)          95          96    +1 (+1.05%)
>      1128       1137     +9 (+0.80%)          95          96    +1 (+1.05%)
>      4380       4465    +85 (+1.94%)         114         118    +4 (+3.51%)
>      3093       3170    +77 (+2.49%)          83          88    +5 (+6.02%)
>     30181      31224  +1043 (+3.46%)         832         863   +31 (+3.73%)
>    237608     237619    +11 (+0.00%)       11670       11671    +1 (+0.01%)
>     94832      94836     +4 (+0.00%)        4787        4788    +1 (+0.02%)
>    237387     237407    +20 (+0.01%)       11651       11652    +1 (+0.01%)
>     94832      94836     +4 (+0.00%)        4787        4788    +1 (+0.02%)
>      8103       8108     +5 (+0.06%)         459         459    +0 (+0.00%)
>      8076       8079     +3 (+0.04%)         457         457    +0 (+0.00%)
>      8177       8197    +20 (+0.24%)         459         460    +1 (+0.22%)
>      8083       8086     +3 (+0.04%)         458         458    +0 (+0.00%)
>    237608     237619    +11 (+0.00%)       11670       11671    +1 (+0.01%)
>     94832      94836     +4 (+0.00%)        4787        4788    +1 (+0.02%)
>    237387     237407    +20 (+0.01%)       11651       11652    +1 (+0.01%)
>     94832      94836     +4 (+0.00%)        4787        4788    +1 (+0.02%)
>      8103       8108     +5 (+0.06%)         459         459    +0 (+0.00%)
>      8076       8079     +3 (+0.04%)         457         457    +0 (+0.00%)
>      8177       8197    +20 (+0.24%)         459         460    +1 (+0.22%)
>      8083       8086     +3 (+0.04%)         458         458    +0 (+0.00%)
>
> Looking into rbtree_search, the reason for such increase is that the
> verifier has to explore the main loop shown below for one more iteration
> until state pruning decides the current state is safe.
>
> long rbtree_search(void *ctx)
> {
>         ...
>         bpf_spin_lock(&glock0);
>         rb_n = bpf_rbtree_root(&groot0);
>         while (can_loop) {
>                 if (!rb_n) {
>                         bpf_spin_unlock(&glock0);
>                         return __LINE__;
>                 }
>
>                 n = rb_entry(rb_n, struct node_data, r0);
>                 if (lookup_key == n->key0)
>                         break;
>                 if (nr_gc < NR_NODES)
>                         gc_ns[nr_gc++] = rb_n;
>                 if (lookup_key < n->key0)
>                         rb_n = bpf_rbtree_left(&groot0, rb_n);
>                 else
>                         rb_n = bpf_rbtree_right(&groot0, rb_n);
>         }
>         ...
> }
>
> Below is what the verifier sees at the start of each iteration
> (65: may_goto) after preserving id of rb_n. Without id of rb_n, the
> verifier stops exploring the loop at iter 16.
>
>            rb_n  gc_ns[15]
> iter 15    257   257
>
> iter 16    290   257    rb_n: idmap add 257->290
>                         gc_ns[15]: check 257 != 290 --> state not equal
>
> iter 17    325   257    rb_n: idmap add 290->325
>                         gc_ns[15]: idmap add 257->257 --> state safe
>
> Signed-off-by: Amery Hung <ameryhung@gmail.com>
> ---
>  kernel/bpf/verifier.c | 13 ++++---------
>  1 file changed, 4 insertions(+), 9 deletions(-)
>
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index ea10dd611df2..8f9e28901bc4 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -17014,15 +17014,10 @@ static void mark_ptr_or_null_reg(struct bpf_func_state *state,
>
>                 mark_ptr_not_null_reg(reg);
>
> -               if (!reg_may_point_to_spin_lock(reg)) {
> -                       /* For not-NULL ptr, reg->ref_obj_id will be reset
> -                        * in release_reference().
> -                        *
> -                        * reg->id is still used by spin_lock ptr. Other
> -                        * than spin_lock ptr type, reg->id can be reset.
> -                        */
> -                       reg->id = 0;
> -               }
> +               /*
> +                * reg->id is preserved for object relationship tracking
> +                * and spin_lock lock state tracking
> +                */

Acked-by: Andrii Nakryiko <andrii@kernel.org>

nice to have one less special case

>         }
>  }
>
> --
> 2.47.3
>



* Re: [RFC PATCH bpf-next v2 03/11] bpf: Unify dynptr handling in the verifier
  2026-03-11 17:23     ` Amery Hung
@ 2026-03-11 22:22       ` Mykyta Yatsenko
  2026-03-11 22:35         ` Amery Hung
  0 siblings, 1 reply; 46+ messages in thread
From: Mykyta Yatsenko @ 2026-03-11 22:22 UTC (permalink / raw)
  To: Amery Hung
  Cc: bpf, netdev, alexei.starovoitov, andrii, daniel, memxor,
	martin.lau, kernel-team

Amery Hung <ameryhung@gmail.com> writes:

> On Wed, Mar 11, 2026 at 9:03 AM Mykyta Yatsenko
> <mykyta.yatsenko5@gmail.com> wrote:
>>
>> Amery Hung <ameryhung@gmail.com> writes:
>>
>> > Simplify dynptr checking for helper and kfunc by unifying it. Remember
>> > initialized dynptr in process_dynptr_func() so that we can easily
>> > retrieve the information for verification later.
>> >
>> > Signed-off-by: Amery Hung <ameryhung@gmail.com>
>> > ---
>> >  kernel/bpf/verifier.c | 179 +++++++++---------------------------------
>> >  1 file changed, 36 insertions(+), 143 deletions(-)
>> >
>> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> > index 0f77c4c5b510..d52780962adb 100644
>> > --- a/kernel/bpf/verifier.c
>> > +++ b/kernel/bpf/verifier.c
>> > @@ -277,8 +277,15 @@ struct bpf_map_desc {
>> >       int uid;
>> >  };
>> >
>> > +struct bpf_dynptr_desc {
>> > +     enum bpf_dynptr_type type;
>> > +     u32 id;
>> > +     u32 ref_obj_id;
>> nit: let's add a comment here explaining what this field is for.
>
> We are about to change the meaning of id and ref_obj_id. I can add
> comments explaining id, ref_obj_id and parent_id in the refactor patch
> (#6). That said, the meaning of these fields will apply to all objects
> tracked by the verifier, not just limited to dynptr, and is already
> documented when we define bpf_reg_state in
> include/linux/bpf_verifier.h. Can you share a bit what info you are
> looking for?
>
>
The description from the commit message would help:
/* id of the referenced object; objects with same ref_obj_id have the same lifetime */

Oftentimes when I work on the verifier, it's difficult to understand
what some data field is for. It's easier now with AI, but I still see a
lot of value in having that inline. Essentially, ref_obj_id does not
have an obvious meaning (at least to me).
>> > +};
>> > +
>> >  struct bpf_call_arg_meta {
>> >       struct bpf_map_desc map;
>> > +     struct bpf_dynptr_desc initialized_dynptr;
>> >       bool raw_mode;
>> >       bool pkt_access;
>> >       u8 release_regno;
>> > @@ -287,7 +294,6 @@ struct bpf_call_arg_meta {
>> >       int mem_size;
>> >       u64 msize_max_value;
>> >       int ref_obj_id;
>> > -     int dynptr_id;
>> >       int func_id;
>> >       struct btf *btf;
>> >       u32 btf_id;
>> > @@ -346,16 +352,12 @@ struct bpf_kfunc_call_arg_meta {
>> >       struct {
>> >               struct btf_field *field;
>> >       } arg_rbtree_root;
>> > -     struct {
>> > -             enum bpf_dynptr_type type;
>> > -             u32 id;
>> > -             u32 ref_obj_id;
>> > -     } initialized_dynptr;
>> >       struct {
>> >               u8 spi;
>> >               u8 frameno;
>> >       } iter;
>> >       struct bpf_map_desc map;
>> > +     struct bpf_dynptr_desc initialized_dynptr;
>> >       u64 mem_size;
>> >  };
>> >
>> > @@ -511,11 +513,6 @@ static bool is_ptr_cast_function(enum bpf_func_id func_id)
>> >               func_id == BPF_FUNC_skc_to_tcp_request_sock;
>> >  }
>> >
>> > -static bool is_dynptr_ref_function(enum bpf_func_id func_id)
>> > -{
>> > -     return func_id == BPF_FUNC_dynptr_data;
>> > -}
>> > -
>> >  static bool is_sync_callback_calling_kfunc(u32 btf_id);
>> >  static bool is_async_callback_calling_kfunc(u32 btf_id);
>> >  static bool is_callback_calling_kfunc(u32 btf_id);
>> > @@ -597,8 +594,6 @@ static bool helper_multiple_ref_obj_use(enum bpf_func_id func_id,
>> >               ref_obj_uses++;
>> >       if (is_acquire_function(func_id, map))
>> >               ref_obj_uses++;
>> > -     if (is_dynptr_ref_function(func_id))
>> > -             ref_obj_uses++;
>> >
>> >       return ref_obj_uses > 1;
>> >  }
>> > @@ -8750,7 +8745,8 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
>> >   * type, and declare it as 'const struct bpf_dynptr *' in their prototype.
>> >   */
>> >  static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn_idx,
>> > -                            enum bpf_arg_type arg_type, int clone_ref_obj_id)
>> > +                            enum bpf_arg_type arg_type, int clone_ref_obj_id,
>> > +                            struct bpf_dynptr_desc *initialized_dynptr)
>> >  {
>> >       struct bpf_reg_state *reg = reg_state(env, regno);
>> >       int err;
>> > @@ -8825,6 +8821,20 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
>> >               }
>> >
>> >               err = mark_dynptr_read(env, reg);
>> > +
>> > +             if (initialized_dynptr) {
>> > +                     struct bpf_func_state *state = func(env, reg);
>> state is only used if reg->type != CONST_PTR_TO_DYNPTR, does it make
>> sense to move state = func(env, reg); to the corresponding if block?
>
> I think this is fine. It looks less cluttered this way.
>
>> > +                     int spi;
>> > +
>> > +                     if (reg->type != CONST_PTR_TO_DYNPTR) {
>> > +                             spi = dynptr_get_spi(env, reg);
>> looking at the deleted dynptr_id() and dynptr_ref_obj_id() spi can be
>> negative, what changed here that we no longer need this check?
>
> is_dynptr_reg_valid_init() above already makes sure reg points to a
> valid dynptr so we don't need to check it again.
>
>> > +                             reg = &state->stack[spi].spilled_ptr;
>> > +                     }
>> > +
>> > +                     initialized_dynptr->id = reg->id;
>> > +                     initialized_dynptr->type = reg->dynptr.type;
>> > +                     initialized_dynptr->ref_obj_id = reg->ref_obj_id;
>> > +             }
>> >       }
>> >       return err;
>> >  }
>> > @@ -9587,72 +9597,6 @@ static int check_func_arg_reg_off(struct bpf_verifier_env *env,
>> >       }
>> >  }
>> >
>> > -static struct bpf_reg_state *get_dynptr_arg_reg(struct bpf_verifier_env *env,
>> > -                                             const struct bpf_func_proto *fn,
>> > -                                             struct bpf_reg_state *regs)
>> > -{
>> > -     struct bpf_reg_state *state = NULL;
>> > -     int i;
>> > -
>> > -     for (i = 0; i < MAX_BPF_FUNC_REG_ARGS; i++)
>> > -             if (arg_type_is_dynptr(fn->arg_type[i])) {
>> > -                     if (state) {
>> > -                             verbose(env, "verifier internal error: multiple dynptr args\n");
>> > -                             return NULL;
>> > -                     }
>> > -                     state = &regs[BPF_REG_1 + i];
>> > -             }
>> > -
>> > -     if (!state)
>> > -             verbose(env, "verifier internal error: no dynptr arg found\n");
>> > -
>> > -     return state;
>> > -}
>> > -
>> > -static int dynptr_id(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
>> > -{
>> > -     struct bpf_func_state *state = func(env, reg);
>> > -     int spi;
>> > -
>> > -     if (reg->type == CONST_PTR_TO_DYNPTR)
>> > -             return reg->id;
>> > -     spi = dynptr_get_spi(env, reg);
>> > -     if (spi < 0)
>> > -             return spi;
>> > -     return state->stack[spi].spilled_ptr.id;
>> > -}
>> > -
>> > -static int dynptr_ref_obj_id(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
>> > -{
>> > -     struct bpf_func_state *state = func(env, reg);
>> > -     int spi;
>> > -
>> > -     if (reg->type == CONST_PTR_TO_DYNPTR)
>> > -             return reg->ref_obj_id;
>> > -     spi = dynptr_get_spi(env, reg);
>> > -     if (spi < 0)
>> > -             return spi;
>> > -     return state->stack[spi].spilled_ptr.ref_obj_id;
>> > -}
>> > -
>> > -static enum bpf_dynptr_type dynptr_get_type(struct bpf_verifier_env *env,
>> > -                                         struct bpf_reg_state *reg)
>> > -{
>> > -     struct bpf_func_state *state = func(env, reg);
>> > -     int spi;
>> > -
>> > -     if (reg->type == CONST_PTR_TO_DYNPTR)
>> > -             return reg->dynptr.type;
>> > -
>> > -     spi = __get_spi(reg->var_off.value);
>> > -     if (spi < 0) {
>> > -             verbose(env, "verifier internal error: invalid spi when querying dynptr type\n");
>> > -             return BPF_DYNPTR_TYPE_INVALID;
>> > -     }
>> > -
>> > -     return state->stack[spi].spilled_ptr.dynptr.type;
>> > -}
>> > -
>> >  static int check_reg_const_str(struct bpf_verifier_env *env,
>> >                              struct bpf_reg_state *reg, u32 regno)
>> >  {
>> > @@ -10007,7 +9951,8 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
>> >                                        true, meta);
>> >               break;
>> >       case ARG_PTR_TO_DYNPTR:
>> > -             err = process_dynptr_func(env, regno, insn_idx, arg_type, 0);
>> > +             err = process_dynptr_func(env, regno, insn_idx, arg_type, 0,
>> > +                                       &meta->initialized_dynptr);
>> >               if (err)
>> >                       return err;
>> >               break;
>> > @@ -10666,7 +10611,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
>> >                       if (ret)
>> >                               return ret;
>> >
>> > -                     ret = process_dynptr_func(env, regno, -1, arg->arg_type, 0);
>> > +                     ret = process_dynptr_func(env, regno, -1, arg->arg_type, 0, NULL);
>> >                       if (ret)
>> >                               return ret;
>> >               } else if (base_type(arg->arg_type) == ARG_PTR_TO_BTF_ID) {
>> > @@ -11771,52 +11716,10 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
>> >                       }
>> >               }
>> >               break;
>> > -     case BPF_FUNC_dynptr_data:
>> > -     {
>> > -             struct bpf_reg_state *reg;
>> > -             int id, ref_obj_id;
>> > -
>> > -             reg = get_dynptr_arg_reg(env, fn, regs);
>> > -             if (!reg)
>> > -                     return -EFAULT;
>> > -
>> > -
>> > -             if (meta.dynptr_id) {
>> > -                     verifier_bug(env, "meta.dynptr_id already set");
>> > -                     return -EFAULT;
>> > -             }
>> > -             if (meta.ref_obj_id) {
>> > -                     verifier_bug(env, "meta.ref_obj_id already set");
>> > -                     return -EFAULT;
>> > -             }
>> > -
>> > -             id = dynptr_id(env, reg);
>> > -             if (id < 0) {
>> > -                     verifier_bug(env, "failed to obtain dynptr id");
>> > -                     return id;
>> > -             }
>> > -
>> > -             ref_obj_id = dynptr_ref_obj_id(env, reg);
>> > -             if (ref_obj_id < 0) {
>> > -                     verifier_bug(env, "failed to obtain dynptr ref_obj_id");
>> > -                     return ref_obj_id;
>> > -             }
>> > -
>> > -             meta.dynptr_id = id;
>> > -             meta.ref_obj_id = ref_obj_id;
>> > -
>> > -             break;
>> > -     }
>> >       case BPF_FUNC_dynptr_write:
>> >       {
>> > -             enum bpf_dynptr_type dynptr_type;
>> > -             struct bpf_reg_state *reg;
>> > -
>> > -             reg = get_dynptr_arg_reg(env, fn, regs);
>> > -             if (!reg)
>> > -                     return -EFAULT;
>> > +             enum bpf_dynptr_type dynptr_type = meta.initialized_dynptr.type;
>> >
>> > -             dynptr_type = dynptr_get_type(env, reg);
>> >               if (dynptr_type == BPF_DYNPTR_TYPE_INVALID)
>> >                       return -EFAULT;
>> >
>> > @@ -12007,10 +11910,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
>> >               return -EFAULT;
>> >       }
>> >
>> > -     if (is_dynptr_ref_function(func_id))
>> > -             regs[BPF_REG_0].dynptr_id = meta.dynptr_id;
>> > -
>> > -     if (is_ptr_cast_function(func_id) || is_dynptr_ref_function(func_id)) {
>> > +     if (is_ptr_cast_function(func_id)) {
>> >               /* For release_reference() */
>> >               regs[BPF_REG_0].ref_obj_id = meta.ref_obj_id;
>> >       } else if (is_acquire_function(func_id, meta.map.ptr)) {
>> > @@ -12024,6 +11924,11 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
>> >               regs[BPF_REG_0].ref_obj_id = id;
>> >       }
>> >
>> > +     if (func_id == BPF_FUNC_dynptr_data) {
>> > +             regs[BPF_REG_0].dynptr_id = meta.initialized_dynptr.id;
>> > +             regs[BPF_REG_0].ref_obj_id = meta.initialized_dynptr.ref_obj_id;
>> > +     }
>> > +
>> >       err = do_refine_retval_range(env, regs, fn->ret_type, func_id, &meta);
>> >       if (err)
>> >               return err;
>> > @@ -13559,22 +13464,10 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
>> >                               }
>> >                       }
>> >
>> > -                     ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id);
>> > +                     ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id,
>> > +                                               &meta->initialized_dynptr);
>> >                       if (ret < 0)
>> >                               return ret;
>> > -
>> > -                     if (!(dynptr_arg_type & MEM_UNINIT)) {
>> > -                             int id = dynptr_id(env, reg);
>> > -
>> > -                             if (id < 0) {
>> > -                                     verifier_bug(env, "failed to obtain dynptr id");
>> > -                                     return id;
>> > -                             }
>> > -                             meta->initialized_dynptr.id = id;
>> > -                             meta->initialized_dynptr.type = dynptr_get_type(env, reg);
>> > -                             meta->initialized_dynptr.ref_obj_id = dynptr_ref_obj_id(env, reg);
>> > -                     }
>> > -
>> >                       break;
>> >               }
>> >               case KF_ARG_PTR_TO_ITER:
>> > --
>> > 2.47.3


* Re: [RFC PATCH bpf-next v2 05/11] bpf: Preserve reg->id of pointer objects after null-check
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 05/11] bpf: Preserve reg->id of pointer objects after null-check Amery Hung
  2026-03-11 21:55   ` Andrii Nakryiko
@ 2026-03-11 22:26   ` Alexei Starovoitov
  2026-03-11 22:29     ` Alexei Starovoitov
  1 sibling, 1 reply; 46+ messages in thread
From: Alexei Starovoitov @ 2026-03-11 22:26 UTC (permalink / raw)
  To: Amery Hung
  Cc: bpf, Network Development, Andrii Nakryiko, Daniel Borkmann,
	Kumar Kartikeya Dwivedi, Martin KaFai Lau, Kernel Team

On Fri, Mar 6, 2026 at 10:44 PM Amery Hung <ameryhung@gmail.com> wrote:
>
> Preserve reg->id of pointer objects after null-checking the register so
> that children objects derived from it can still refer to it in the new
> object relationship tracking mechanism introduced in a later patch. This
> change incurs a slight increase in the number of states in one selftest
> bpf object, rbtree_search.bpf.o. For Meta bpf objects, the increase of
> states is also negligible.
>
> Selftest BPF objects with insns_diff > 0
>
> Insns (A)  Insns (B)  Insns  (DIFF)  States (A)  States (B)  States (DIFF)
> ---------  ---------  -------------  ----------  ----------  -------------
>      7309       7814  +505 (+6.91%)         394         413   +19 (+4.82%)
>
> Meta BPF objects with insns_diff > 0
>
> Insns (A)  Insns (B)  Insns   (DIFF)  States (A)  States (B)  States (DIFF)
> ---------  ---------  --------------  ----------  ----------  -------------
>        52         57     +5 (+9.62%)           5           6   +1 (+20.00%)
>        52         57     +5 (+9.62%)           5           6   +1 (+20.00%)
>       676        679     +3 (+0.44%)          54          54    +0 (+0.00%)
>       289        292     +3 (+1.04%)          20          20    +0 (+0.00%)
>        78         82     +4 (+5.13%)           8           8    +0 (+0.00%)
>       252        320   +68 (+26.98%)          21          27   +6 (+28.57%)
>       252        320   +68 (+26.98%)          21          27   +6 (+28.57%)
>       119        126     +7 (+5.88%)           6           7   +1 (+16.67%)
>      1119       1128     +9 (+0.80%)          95          96    +1 (+1.05%)
>      1128       1137     +9 (+0.80%)          95          96    +1 (+1.05%)
>      4380       4465    +85 (+1.94%)         114         118    +4 (+3.51%)
>      3093       3170    +77 (+2.49%)          83          88    +5 (+6.02%)
>     30181      31224  +1043 (+3.46%)         832         863   +31 (+3.73%)
>    237608     237619    +11 (+0.00%)       11670       11671    +1 (+0.01%)
>     94832      94836     +4 (+0.00%)        4787        4788    +1 (+0.02%)
>    237387     237407    +20 (+0.01%)       11651       11652    +1 (+0.01%)
>     94832      94836     +4 (+0.00%)        4787        4788    +1 (+0.02%)
>      8103       8108     +5 (+0.06%)         459         459    +0 (+0.00%)
>      8076       8079     +3 (+0.04%)         457         457    +0 (+0.00%)
>      8177       8197    +20 (+0.24%)         459         460    +1 (+0.22%)
>      8083       8086     +3 (+0.04%)         458         458    +0 (+0.00%)
>    237608     237619    +11 (+0.00%)       11670       11671    +1 (+0.01%)
>     94832      94836     +4 (+0.00%)        4787        4788    +1 (+0.02%)
>    237387     237407    +20 (+0.01%)       11651       11652    +1 (+0.01%)
>     94832      94836     +4 (+0.00%)        4787        4788    +1 (+0.02%)
>      8103       8108     +5 (+0.06%)         459         459    +0 (+0.00%)
>      8076       8079     +3 (+0.04%)         457         457    +0 (+0.00%)
>      8177       8197    +20 (+0.24%)         459         460    +1 (+0.22%)
>      8083       8086     +3 (+0.04%)         458         458    +0 (+0.00%)

The table is missing names.
State columns can be dropped instead.

> Looking into rbtree_search, the reason for such increase is that the
> verifier has to explore the main loop shown below for one more iteration
> until state pruning decides the current state is safe.
>
> long rbtree_search(void *ctx)
> {
>         ...
>         bpf_spin_lock(&glock0);
>         rb_n = bpf_rbtree_root(&groot0);
>         while (can_loop) {
>                 if (!rb_n) {
>                         bpf_spin_unlock(&glock0);
>                         return __LINE__;
>                 }
>
>                 n = rb_entry(rb_n, struct node_data, r0);
>                 if (lookup_key == n->key0)
>                         break;
>                 if (nr_gc < NR_NODES)
>                         gc_ns[nr_gc++] = rb_n;
>                 if (lookup_key < n->key0)
>                         rb_n = bpf_rbtree_left(&groot0, rb_n);
>                 else
>                         rb_n = bpf_rbtree_right(&groot0, rb_n);
>         }
>         ...
> }
>
> Below is what the verifier sees at the start of each iteration
> (65: may_goto) after preserving id of rb_n. Without id of rb_n, the
> verifier stops exploring the loop at iter 16.
>
>            rb_n  gc_ns[15]
> iter 15    257   257
>
> iter 16    290   257    rb_n: idmap add 257->290
>                         gc_ns[15]: check 257 != 290 --> state not equal
>
> iter 17    325   257    rb_n: idmap add 290->325
>                         gc_ns[15]: idmap add 257->257 --> state safe

I'm not following. The verifier processes the above as a bounded loop.
All 16 (NR_NODES) iterations.

Why presence of id on 'rb_n' makes a difference?
It will still process 16 loops.

Which insn is safe vs not in the above ?
One after gc_ns[nr_gc++] = rb_n ?


* Re: [RFC PATCH bpf-next v2 05/11] bpf: Preserve reg->id of pointer objects after null-check
  2026-03-11 22:26   ` Alexei Starovoitov
@ 2026-03-11 22:29     ` Alexei Starovoitov
  2026-03-11 23:46       ` Amery Hung
  0 siblings, 1 reply; 46+ messages in thread
From: Alexei Starovoitov @ 2026-03-11 22:29 UTC (permalink / raw)
  To: Amery Hung
  Cc: bpf, Network Development, Andrii Nakryiko, Daniel Borkmann,
	Kumar Kartikeya Dwivedi, Martin KaFai Lau, Kernel Team

On Wed, Mar 11, 2026 at 3:26 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Fri, Mar 6, 2026 at 10:44 PM Amery Hung <ameryhung@gmail.com> wrote:
> >
> > Preserve reg->id of pointer objects after null-checking the register so
> > that children objects derived from it can still refer to it in the new
> > object relationship tracking mechanism introduced in a later patch. This
> > change incurs a slight increase in the number of states in one selftest
> > bpf object, rbtree_search.bpf.o. For Meta bpf objects, the increase of
> > states is also negligible.
> >
> > Selftest BPF objects with insns_diff > 0
> >
> > [...]
>
> The table is missing names.
> State columns can be dropped instead.
>
> > Looking into rbtree_search, the reason for such increase is that the
> > verifier has to explore the main loop shown below for one more iteration
> > until state pruning decides the current state is safe.
> >
> > long rbtree_search(void *ctx)
> > {
> >         ...
> >         bpf_spin_lock(&glock0);
> >         rb_n = bpf_rbtree_root(&groot0);
> >         while (can_loop) {
> >                 if (!rb_n) {
> >                         bpf_spin_unlock(&glock0);
> >                         return __LINE__;
> >                 }
> >
> >                 n = rb_entry(rb_n, struct node_data, r0);
> >                 if (lookup_key == n->key0)
> >                         break;
> >                 if (nr_gc < NR_NODES)
> >                         gc_ns[nr_gc++] = rb_n;
> >                 if (lookup_key < n->key0)
> >                         rb_n = bpf_rbtree_left(&groot0, rb_n);
> >                 else
> >                         rb_n = bpf_rbtree_right(&groot0, rb_n);
> >         }
> >         ...
> > }
> >
> > Below is what the verifier sees at the start of each iteration
> > (65: may_goto) after preserving the id of rb_n. Without the id of
> > rb_n, the verifier stops exploring the loop at iter 16.
> >
> >            rb_n  gc_ns[15]
> > iter 15    257   257
> >
> > iter 16    290   257    rb_n: idmap add 257->290
> >                         gc_ns[15]: check 257 != 290 --> state not equal
> >
> > iter 17    325   257    rb_n: idmap add 290->325
> >                         gc_ns[15]: idmap add 257->257 --> state safe
>
> I'm not following. The verifier processes the above as a bounded loop.
> All 16 (NR_NODES) iterations.
>
> Why does the presence of an id on 'rb_n' make a difference?
> It will still process 16 loop iterations.
>
> Which insn is safe vs not safe in the above?
> The one after gc_ns[nr_gc++] = rb_n ?

One more thing...

How does it interact with reg_is_init_pkt_pointer() ?

That pointer has to have id == 0.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH bpf-next v2 06/11] bpf: Refactor object relationship tracking and fix dynptr UAF bug
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 06/11] bpf: Refactor object relationship tracking and fix dynptr UAF bug Amery Hung
@ 2026-03-11 22:32   ` Andrii Nakryiko
  2026-03-13 20:32     ` Amery Hung
  2026-03-12 23:33   ` Mykyta Yatsenko
  1 sibling, 1 reply; 46+ messages in thread
From: Andrii Nakryiko @ 2026-03-11 22:32 UTC (permalink / raw)
  To: Amery Hung
  Cc: bpf, netdev, alexei.starovoitov, andrii, daniel, memxor,
	martin.lau, kernel-team

On Fri, Mar 6, 2026 at 10:44 PM Amery Hung <ameryhung@gmail.com> wrote:
>
> Refactor object relationship tracking in the verifier by removing
> dynptr_id and using parent_id to track the parent object. Then, track
> the referenced parent object of the dynptr when calling a dynptr
> constructor. This fixes a use-after-free bug: for a dynptr that has a
> referenced parent object (an skb dynptr in a BPF qdisc or a file
> dynptr), the dynptr and any derived slices need to be invalidated when
> the parent object is released.
>
> First, add parent_id to bpf_reg_state to precisely track objects'
> parent-child relationships. A child object uses parent_id to record the
> parent object's id. This replaces the dynptr-slice-specific dynptr_id.
>
> Then, when calling dynptr constructors (i.e., process_dynptr_func() with
> a MEM_UNINIT argument), track the parent's id if the parent is a
> referenced object. This only applies to file dynptr and skb dynptr, so
> the parent reg->id is only passed to kfunc constructors.
>
> For release_reference(), this means that when invalidating an object,
> it also needs to invalidate all dependent objects by traversing the
> subtree. This is done with a stack-based DFS to avoid the recursive
> call chain of release_reference() -> unmark_stack_slots_dynptr() ->
> release_reference(). Note that referenced objects other than the one
> whose id was initially passed to release_reference() cannot be released
> while traversing the tree, as they would actually require a helper call
> to release the acquired resources.
>
> While the new design changes how object relationships are tracked in
> the verifier, it does NOT change the verifier's behavior. Here are the
> implications of the new design for dynptrs, pointer casting, and
> owning/non-owning references.
>
> Dynptr:
>
> When initializing a dynptr, a referenced dynptr acquires a reference
> tracked by ref_obj_id. If the dynptr has a referenced parent, parent_id
> is used to track that parent's id. When cloning a dynptr, the clone's
> ref_obj_id and parent_id are copied directly from the original dynptr.
> This means that, when releasing a referenced dynptr,
> release_reference(ref_obj_id) will release the original, all clones,
> and all derived slices. For a non-referenced dynptr, only the specific
> dynptr being released and its child slices will be invalidated.
>
> Pointer casting:
>
> A referenced socket pointer and the pointers casted from it should
> share the same lifetime while having different nullness. Therefore,
> they have different ids but the same ref_obj_id.
>
> When converting owning references to non-owning:
>
> After converting a reference from owning to non-owning by clearing the
> object's ref_obj_id (e.g., object(id=1, ref_obj_id=1) -> object(id=1,
> ref_obj_id=0)), the verifier only needs to release the reference state
> instead of invalidating the registers that carry the id, so call
> release_reference_nomark() instead of release_reference().
>
> CC: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> Fixes: 870c28588afa ("bpf: net_sched: Add basic bpf qdisc kfuncs")
> Signed-off-by: Amery Hung <ameryhung@gmail.com>
> ---
>  include/linux/bpf_verifier.h |  14 +-
>  kernel/bpf/log.c             |   4 +-
>  kernel/bpf/verifier.c        | 274 ++++++++++++++++++-----------------
>  3 files changed, 154 insertions(+), 138 deletions(-)
>

[...]


> -       ref_obj_id = state->stack[spi].spilled_ptr.ref_obj_id;
> -
> -       /* If the dynptr has a ref_obj_id, then we need to invalidate
> -        * two things:
> -        *
> -        * 1) Any dynptrs with a matching ref_obj_id (clones)
> -        * 2) Any slices derived from this dynptr.
> +       /*
> +        * For referenced dynptr, the clones share the same ref_obj_id and will be
> +        * invalidated too. For non-referenced dynptr, only the dynptr and slices
> +        * derived from it will be invalidated.
>          */

this is confusing to me. Why should the nature of the dynptr change
anything about the scope of invalidation? This should be controlled
from outside. E.g., if someone invalidates a clone by overwriting it on
the stack, we shouldn't just go and invalidate all the other clones. We
just invalidate that particular clone (regardless of whether it's a
clone of a file dynptr or just some mem dynptr).

But if someone is calling bpf_dynptr_file_discard() on one of the
clones, then yes, all the clones need to be invalidated. But that
should be handled as more generic "this file lifetime is ending", no?

Maybe I'm missing something, but it feels wrong to make decisions like
this inside a low-level (and thus intentionally dumb)
unmark_stack_slots_dynptr() helper.

> -
> -       /* Invalidate any slices associated with this dynptr */
> -       WARN_ON_ONCE(release_reference(env, ref_obj_id));
> -
> -       /* Invalidate any dynptr clones */
> -       for (i = 1; i < state->allocated_stack / BPF_REG_SIZE; i++) {
> -               if (state->stack[i].spilled_ptr.ref_obj_id != ref_obj_id)
> -                       continue;

[...]

> +static u32 idstack_pop(struct bpf_idstack *idstack)
> +{
> +       return idstack->cnt > 0 ? idstack->ids[--idstack->cnt] : 0;
> +}
> +
> +static int release_reg_check(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
> +                            int id, int root_id, struct bpf_idstack *idstack)

tbh, I feel like release_reg_check is doing too much; it both enqueues
children and checks unreleased references. And the <0, 0, and 1 return
values (where 1 is completely unobvious) are an indicator of that. I
think the id/parent_id/ref_obj_id check can be done inline in
release_reg_check() just fine (yes, two places, no big deal, but if so,
make it a small helper) and then you'll have a more obvious logic
breakdown into a) check if the reg should be enqueued, b) if so, check
for a ref leak, and c) enqueue the new id

>  {
> +       struct bpf_reference_state *ref_state;
> +
> +       if (reg->id == id || reg->parent_id == id || reg->ref_obj_id == id) {
> +               /* Cannot indirectly release a referenced id */
> +               if (reg->ref_obj_id && id != root_id) {
> +                       ref_state = find_reference_state(env->cur_state, reg->ref_obj_id);
> +                       verbose(env, "Unreleased reference id=%d alloc_insn=%d when releasing id=%d\n",
> +                               ref_state->id, ref_state->insn_idx, root_id);
> +                       return -EINVAL;
> +               }
> +
> +               if (reg->id && reg->id != id)
> +                       idstack_push(idstack, reg->id);

can't you push the same id multiple times onto the stack this way?
your idstack is actually a set, no? so an idmap serves you better (just
map the id to 1 for "to be checked")? And then you don't need to
introduce a new idstack_scratch data structure

> +               return 1;
> +       }
> +
> +       return 0;
> +}
> +

[...]

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH bpf-next v2 03/11] bpf: Unify dynptr handling in the verifier
  2026-03-11 22:22       ` Mykyta Yatsenko
@ 2026-03-11 22:35         ` Amery Hung
  0 siblings, 0 replies; 46+ messages in thread
From: Amery Hung @ 2026-03-11 22:35 UTC (permalink / raw)
  To: Mykyta Yatsenko
  Cc: bpf, netdev, alexei.starovoitov, andrii, daniel, memxor,
	martin.lau, kernel-team

On Wed, Mar 11, 2026 at 3:22 PM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> Amery Hung <ameryhung@gmail.com> writes:
>
> > On Wed, Mar 11, 2026 at 9:03 AM Mykyta Yatsenko
> > <mykyta.yatsenko5@gmail.com> wrote:
> >>
> >> Amery Hung <ameryhung@gmail.com> writes:
> >>
> >> > Simplify dynptr checking for helpers and kfuncs by unifying it.
> >> > Remember the initialized dynptr in process_dynptr_func() so that we
> >> > can easily retrieve the information for verification later.
> >> >
> >> > Signed-off-by: Amery Hung <ameryhung@gmail.com>
> >> > ---
> >> >  kernel/bpf/verifier.c | 179 +++++++++---------------------------------
> >> >  1 file changed, 36 insertions(+), 143 deletions(-)
> >> >
> >> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> >> > index 0f77c4c5b510..d52780962adb 100644
> >> > --- a/kernel/bpf/verifier.c
> >> > +++ b/kernel/bpf/verifier.c
> >> > @@ -277,8 +277,15 @@ struct bpf_map_desc {
> >> >       int uid;
> >> >  };
> >> >
> >> > +struct bpf_dynptr_desc {
> >> > +     enum bpf_dynptr_type type;
> >> > +     u32 id;
> >> > +     u32 ref_obj_id;
> >> nit: let's add a comment here explaining what this field is for.
> >
> > We are about to change the meaning of id and ref_obj_id. I can add
> > comments explaining id, ref_obj_id and parent_id in the refactor patch
> > (#6). That said, the meaning of these fields will apply to all objects
> > tracked by the verifier, not just limited to dynptr, and is already
> > documented when we define bpf_reg_state in
> > include/linux/bpf_verifier.h. Can you share a bit what info you are
> > looking for?
> >
> >
> the description from the commit message would help:
> /* id of the referenced object; objects with the same ref_obj_id have the same lifetime */
>
> Oftentimes when I work on the verifier, it's difficult to understand
> what some data field is for. It's easier now with AI, but I still see a
> lot of value in having that inline. Essentially, ref_obj_id does not
> have an obvious meaning (at least to me).

I see. bpf_reg_state->ref_obj_id is already explained in
bpf_verifier.h, but the comment only mentions how it is used for
tracking referenced sockets and casted socket pointers. I will update
the comment in patch 6 in the next respin.

> >> > +};
> >> > +
> >> >  struct bpf_call_arg_meta {
> >> >       struct bpf_map_desc map;
> >> > +     struct bpf_dynptr_desc initialized_dynptr;
> >> >       bool raw_mode;
> >> >       bool pkt_access;
> >> >       u8 release_regno;
> >> > @@ -287,7 +294,6 @@ struct bpf_call_arg_meta {
> >> >       int mem_size;
> >> >       u64 msize_max_value;
> >> >       int ref_obj_id;
> >> > -     int dynptr_id;
> >> >       int func_id;
> >> >       struct btf *btf;
> >> >       u32 btf_id;
> >> > @@ -346,16 +352,12 @@ struct bpf_kfunc_call_arg_meta {
> >> >       struct {
> >> >               struct btf_field *field;
> >> >       } arg_rbtree_root;
> >> > -     struct {
> >> > -             enum bpf_dynptr_type type;
> >> > -             u32 id;
> >> > -             u32 ref_obj_id;
> >> > -     } initialized_dynptr;
> >> >       struct {
> >> >               u8 spi;
> >> >               u8 frameno;
> >> >       } iter;
> >> >       struct bpf_map_desc map;
> >> > +     struct bpf_dynptr_desc initialized_dynptr;
> >> >       u64 mem_size;
> >> >  };
> >> >
> >> > @@ -511,11 +513,6 @@ static bool is_ptr_cast_function(enum bpf_func_id func_id)
> >> >               func_id == BPF_FUNC_skc_to_tcp_request_sock;
> >> >  }
> >> >
> >> > -static bool is_dynptr_ref_function(enum bpf_func_id func_id)
> >> > -{
> >> > -     return func_id == BPF_FUNC_dynptr_data;
> >> > -}
> >> > -
> >> >  static bool is_sync_callback_calling_kfunc(u32 btf_id);
> >> >  static bool is_async_callback_calling_kfunc(u32 btf_id);
> >> >  static bool is_callback_calling_kfunc(u32 btf_id);
> >> > @@ -597,8 +594,6 @@ static bool helper_multiple_ref_obj_use(enum bpf_func_id func_id,
> >> >               ref_obj_uses++;
> >> >       if (is_acquire_function(func_id, map))
> >> >               ref_obj_uses++;
> >> > -     if (is_dynptr_ref_function(func_id))
> >> > -             ref_obj_uses++;
> >> >
> >> >       return ref_obj_uses > 1;
> >> >  }
> >> > @@ -8750,7 +8745,8 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
> >> >   * type, and declare it as 'const struct bpf_dynptr *' in their prototype.
> >> >   */
> >> >  static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn_idx,
> >> > -                            enum bpf_arg_type arg_type, int clone_ref_obj_id)
> >> > +                            enum bpf_arg_type arg_type, int clone_ref_obj_id,
> >> > +                            struct bpf_dynptr_desc *initialized_dynptr)
> >> >  {
> >> >       struct bpf_reg_state *reg = reg_state(env, regno);
> >> >       int err;
> >> > @@ -8825,6 +8821,20 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
> >> >               }
> >> >
> >> >               err = mark_dynptr_read(env, reg);
> >> > +
> >> > +             if (initialized_dynptr) {
> >> > +                     struct bpf_func_state *state = func(env, reg);
> >> state is only used if reg->type != CONST_PTR_TO_DYNPTR, does it make
> >> sense to move state = func(env, reg); to the corresponding if block?
> >
> > I think this is fine. It looks less cluttered this way.
> >
> >> > +                     int spi;
> >> > +
> >> > +                     if (reg->type != CONST_PTR_TO_DYNPTR) {
> >> > +                             spi = dynptr_get_spi(env, reg);
> >> looking at the deleted dynptr_id() and dynptr_ref_obj_id() spi can be
> >> negative, what changed here that we no longer need this check?
> >
> > is_dynptr_reg_valid_init() above already makes sure reg points to a
> > valid dynptr so we don't need to check it again.
> >
> >> > +                             reg = &state->stack[spi].spilled_ptr;
> >> > +                     }
> >> > +
> >> > +                     initialized_dynptr->id = reg->id;
> >> > +                     initialized_dynptr->type = reg->dynptr.type;
> >> > +                     initialized_dynptr->ref_obj_id = reg->ref_obj_id;
> >> > +             }
> >> >       }
> >> >       return err;
> >> >  }
> >> > @@ -9587,72 +9597,6 @@ static int check_func_arg_reg_off(struct bpf_verifier_env *env,
> >> >       }
> >> >  }
> >> >
> >> > -static struct bpf_reg_state *get_dynptr_arg_reg(struct bpf_verifier_env *env,
> >> > -                                             const struct bpf_func_proto *fn,
> >> > -                                             struct bpf_reg_state *regs)
> >> > -{
> >> > -     struct bpf_reg_state *state = NULL;
> >> > -     int i;
> >> > -
> >> > -     for (i = 0; i < MAX_BPF_FUNC_REG_ARGS; i++)
> >> > -             if (arg_type_is_dynptr(fn->arg_type[i])) {
> >> > -                     if (state) {
> >> > -                             verbose(env, "verifier internal error: multiple dynptr args\n");
> >> > -                             return NULL;
> >> > -                     }
> >> > -                     state = &regs[BPF_REG_1 + i];
> >> > -             }
> >> > -
> >> > -     if (!state)
> >> > -             verbose(env, "verifier internal error: no dynptr arg found\n");
> >> > -
> >> > -     return state;
> >> > -}
> >> > -
> >> > -static int dynptr_id(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> >> > -{
> >> > -     struct bpf_func_state *state = func(env, reg);
> >> > -     int spi;
> >> > -
> >> > -     if (reg->type == CONST_PTR_TO_DYNPTR)
> >> > -             return reg->id;
> >> > -     spi = dynptr_get_spi(env, reg);
> >> > -     if (spi < 0)
> >> > -             return spi;
> >> > -     return state->stack[spi].spilled_ptr.id;
> >> > -}
> >> > -
> >> > -static int dynptr_ref_obj_id(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> >> > -{
> >> > -     struct bpf_func_state *state = func(env, reg);
> >> > -     int spi;
> >> > -
> >> > -     if (reg->type == CONST_PTR_TO_DYNPTR)
> >> > -             return reg->ref_obj_id;
> >> > -     spi = dynptr_get_spi(env, reg);
> >> > -     if (spi < 0)
> >> > -             return spi;
> >> > -     return state->stack[spi].spilled_ptr.ref_obj_id;
> >> > -}
> >> > -
> >> > -static enum bpf_dynptr_type dynptr_get_type(struct bpf_verifier_env *env,
> >> > -                                         struct bpf_reg_state *reg)
> >> > -{
> >> > -     struct bpf_func_state *state = func(env, reg);
> >> > -     int spi;
> >> > -
> >> > -     if (reg->type == CONST_PTR_TO_DYNPTR)
> >> > -             return reg->dynptr.type;
> >> > -
> >> > -     spi = __get_spi(reg->var_off.value);
> >> > -     if (spi < 0) {
> >> > -             verbose(env, "verifier internal error: invalid spi when querying dynptr type\n");
> >> > -             return BPF_DYNPTR_TYPE_INVALID;
> >> > -     }
> >> > -
> >> > -     return state->stack[spi].spilled_ptr.dynptr.type;
> >> > -}
> >> > -
> >> >  static int check_reg_const_str(struct bpf_verifier_env *env,
> >> >                              struct bpf_reg_state *reg, u32 regno)
> >> >  {
> >> > @@ -10007,7 +9951,8 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
> >> >                                        true, meta);
> >> >               break;
> >> >       case ARG_PTR_TO_DYNPTR:
> >> > -             err = process_dynptr_func(env, regno, insn_idx, arg_type, 0);
> >> > +             err = process_dynptr_func(env, regno, insn_idx, arg_type, 0,
> >> > +                                       &meta->initialized_dynptr);
> >> >               if (err)
> >> >                       return err;
> >> >               break;
> >> > @@ -10666,7 +10611,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
> >> >                       if (ret)
> >> >                               return ret;
> >> >
> >> > -                     ret = process_dynptr_func(env, regno, -1, arg->arg_type, 0);
> >> > +                     ret = process_dynptr_func(env, regno, -1, arg->arg_type, 0, NULL);
> >> >                       if (ret)
> >> >                               return ret;
> >> >               } else if (base_type(arg->arg_type) == ARG_PTR_TO_BTF_ID) {
> >> > @@ -11771,52 +11716,10 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
> >> >                       }
> >> >               }
> >> >               break;
> >> > -     case BPF_FUNC_dynptr_data:
> >> > -     {
> >> > -             struct bpf_reg_state *reg;
> >> > -             int id, ref_obj_id;
> >> > -
> >> > -             reg = get_dynptr_arg_reg(env, fn, regs);
> >> > -             if (!reg)
> >> > -                     return -EFAULT;
> >> > -
> >> > -
> >> > -             if (meta.dynptr_id) {
> >> > -                     verifier_bug(env, "meta.dynptr_id already set");
> >> > -                     return -EFAULT;
> >> > -             }
> >> > -             if (meta.ref_obj_id) {
> >> > -                     verifier_bug(env, "meta.ref_obj_id already set");
> >> > -                     return -EFAULT;
> >> > -             }
> >> > -
> >> > -             id = dynptr_id(env, reg);
> >> > -             if (id < 0) {
> >> > -                     verifier_bug(env, "failed to obtain dynptr id");
> >> > -                     return id;
> >> > -             }
> >> > -
> >> > -             ref_obj_id = dynptr_ref_obj_id(env, reg);
> >> > -             if (ref_obj_id < 0) {
> >> > -                     verifier_bug(env, "failed to obtain dynptr ref_obj_id");
> >> > -                     return ref_obj_id;
> >> > -             }
> >> > -
> >> > -             meta.dynptr_id = id;
> >> > -             meta.ref_obj_id = ref_obj_id;
> >> > -
> >> > -             break;
> >> > -     }
> >> >       case BPF_FUNC_dynptr_write:
> >> >       {
> >> > -             enum bpf_dynptr_type dynptr_type;
> >> > -             struct bpf_reg_state *reg;
> >> > -
> >> > -             reg = get_dynptr_arg_reg(env, fn, regs);
> >> > -             if (!reg)
> >> > -                     return -EFAULT;
> >> > +             enum bpf_dynptr_type dynptr_type = meta.initialized_dynptr.type;
> >> >
> >> > -             dynptr_type = dynptr_get_type(env, reg);
> >> >               if (dynptr_type == BPF_DYNPTR_TYPE_INVALID)
> >> >                       return -EFAULT;
> >> >
> >> > @@ -12007,10 +11910,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
> >> >               return -EFAULT;
> >> >       }
> >> >
> >> > -     if (is_dynptr_ref_function(func_id))
> >> > -             regs[BPF_REG_0].dynptr_id = meta.dynptr_id;
> >> > -
> >> > -     if (is_ptr_cast_function(func_id) || is_dynptr_ref_function(func_id)) {
> >> > +     if (is_ptr_cast_function(func_id)) {
> >> >               /* For release_reference() */
> >> >               regs[BPF_REG_0].ref_obj_id = meta.ref_obj_id;
> >> >       } else if (is_acquire_function(func_id, meta.map.ptr)) {
> >> > @@ -12024,6 +11924,11 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
> >> >               regs[BPF_REG_0].ref_obj_id = id;
> >> >       }
> >> >
> >> > +     if (func_id == BPF_FUNC_dynptr_data) {
> >> > +             regs[BPF_REG_0].dynptr_id = meta.initialized_dynptr.id;
> >> > +             regs[BPF_REG_0].ref_obj_id = meta.initialized_dynptr.ref_obj_id;
> >> > +     }
> >> > +
> >> >       err = do_refine_retval_range(env, regs, fn->ret_type, func_id, &meta);
> >> >       if (err)
> >> >               return err;
> >> > @@ -13559,22 +13464,10 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
> >> >                               }
> >> >                       }
> >> >
> >> > -                     ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id);
> >> > +                     ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id,
> >> > +                                               &meta->initialized_dynptr);
> >> >                       if (ret < 0)
> >> >                               return ret;
> >> > -
> >> > -                     if (!(dynptr_arg_type & MEM_UNINIT)) {
> >> > -                             int id = dynptr_id(env, reg);
> >> > -
> >> > -                             if (id < 0) {
> >> > -                                     verifier_bug(env, "failed to obtain dynptr id");
> >> > -                                     return id;
> >> > -                             }
> >> > -                             meta->initialized_dynptr.id = id;
> >> > -                             meta->initialized_dynptr.type = dynptr_get_type(env, reg);
> >> > -                             meta->initialized_dynptr.ref_obj_id = dynptr_ref_obj_id(env, reg);
> >> > -                     }
> >> > -
> >> >                       break;
> >> >               }
> >> >               case KF_ARG_PTR_TO_ITER:
> >> > --
> >> > 2.47.3

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH bpf-next v2 01/11] bpf: Set kfunc dynptr arg type flag based on prototype
  2026-03-11 20:01     ` Amery Hung
@ 2026-03-11 22:37       ` Andrii Nakryiko
  2026-03-11 23:03         ` Amery Hung
  0 siblings, 1 reply; 46+ messages in thread
From: Andrii Nakryiko @ 2026-03-11 22:37 UTC (permalink / raw)
  To: Amery Hung
  Cc: bpf, netdev, alexei.starovoitov, andrii, daniel, memxor,
	martin.lau, kernel-team

On Wed, Mar 11, 2026 at 1:01 PM Amery Hung <ameryhung@gmail.com> wrote:
>
> On Wed, Mar 11, 2026 at 12:44 PM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> >
> > On Fri, Mar 6, 2026 at 10:44 PM Amery Hung <ameryhung@gmail.com> wrote:
> > >
> > > The verifier should decide whether a dynptr argument is read-only
> > > based on whether the type is "const struct bpf_dynptr *", not on the
> > > type of the register passed to the kfunc. This currently does not
> > > cause issues because the existing kfuncs that mutate struct
> > > bpf_dynptr are constructors (e.g., bpf_dynptr_from_xxx and
> > > bpf_dynptr_clone). These kfuncs have an additional check in
> > > process_dynptr_func() to make sure the stack slot does not contain
> > > an initialized dynptr. Nonetheless, this should still be fixed to
> > > avoid future issues when there is a non-constructor dynptr kfunc
> > > that can mutate a dynptr. This is also a small step toward unifying
> > > kfunc and helper handling in the verifier, where the first step is
> > > to generate a kfunc prototype similar to bpf_func_proto before the
> > > main verification loop.
> > >
> > > We also need to correctly mark some kfunc arguments as "const struct
> > > bpf_dynptr *" to align with other kfuncs that take a non-mutable
> > > dynptr argument and to not break their usage. Adding a const
> > > qualifier does not break backward compatibility.
> > >
> > > Signed-off-by: Amery Hung <ameryhung@gmail.com>
> > > ---
> > >  fs/verity/measure.c                            |  2 +-
> > >  include/linux/bpf.h                            |  8 ++++----
> > >  kernel/bpf/helpers.c                           | 10 +++++-----
> > >  kernel/bpf/verifier.c                          | 18 +++++++++++++++++-
> > >  kernel/trace/bpf_trace.c                       | 18 +++++++++---------
> > >  tools/testing/selftests/bpf/bpf_kfuncs.h       |  6 +++---
> > >  .../selftests/bpf/progs/dynptr_success.c       |  6 +++---
> > >  .../bpf/progs/test_kfunc_dynptr_param.c        |  7 +------
> > >  8 files changed, 43 insertions(+), 32 deletions(-)
> > >
> > > diff --git a/fs/verity/measure.c b/fs/verity/measure.c
> > > index 6a35623ebdf0..3840436e4510 100644
> > > --- a/fs/verity/measure.c
> > > +++ b/fs/verity/measure.c
> > > @@ -118,7 +118,7 @@ __bpf_kfunc_start_defs();
> > >   *
> > >   * Return: 0 on success, a negative value on error.
> > >   */
> > > -__bpf_kfunc int bpf_get_fsverity_digest(struct file *file, struct bpf_dynptr *digest_p)
> > > +__bpf_kfunc int bpf_get_fsverity_digest(struct file *file, const struct bpf_dynptr *digest_p)
> >
> > but kfunc is writing into digest_p, so that const is wrong?...
> >
> > >  {
> > >         struct bpf_dynptr_kern *digest_ptr = (struct bpf_dynptr_kern *)digest_p;
> > >         const struct inode *inode = file_inode(file);
> >
> > [...]
> >
> > > index 6eb6c82ed2ee..3d44896587ac 100644
> > > --- a/kernel/bpf/helpers.c
> > > +++ b/kernel/bpf/helpers.c
> > > @@ -3000,8 +3000,8 @@ __bpf_kfunc int bpf_dynptr_clone(const struct bpf_dynptr *p,
> > >   * Copies data from source dynptr to destination dynptr.
> > >   * Returns 0 on success; negative error, otherwise.
> > >   */
> > > -__bpf_kfunc int bpf_dynptr_copy(struct bpf_dynptr *dst_ptr, u64 dst_off,
> > > -                               struct bpf_dynptr *src_ptr, u64 src_off, u64 size)
> > > +__bpf_kfunc int bpf_dynptr_copy(const struct bpf_dynptr *dst_ptr, u64 dst_off,
> >
> > again, dst_ptr clearly is modifiable because we are copying data into it.
> >
> > What am I missing, why is this logically correct?
> >
> > (I understand that from purely C type system POV this is fine, because
> > we don't modify bpf_dynptr struct itself on the stack, but bpf_dynptr
> > is a representation of some memory, and if we are modifying this
> > memory, then I think it should be not marked as const)
>
> The patch is just a first step to make the arg type determination
> independent of bpf_reg_state and to make kfunc signatures consistent
> with what commit 52f37c4e0f11 ("bpf: Rework process_dynptr_func") laid
> out.
>
> Perhaps MEM_RDONLY is a bit misleading. In process_dynptr_func(), the
> flag means the dynptr struct on the stack is immutable. Currently,
> there is no way (and maybe no need?) to specify a read-only dynptr.
>

are you basically trying to determine whether CONST_PTR_TO_DYNPTR is allowed or not?

> > > The verifier should decide whether a dynptr argument is read-only

can you please remind us what "read-only dynptr argument" means for verifier?

> >
> > > +                               const struct bpf_dynptr *src_ptr, u64 src_off, u64 size)
> > >  {
> > >         struct bpf_dynptr_kern *dst = (struct bpf_dynptr_kern *)dst_ptr;
> > >         struct bpf_dynptr_kern *src = (struct bpf_dynptr_kern *)src_ptr;
> >
> > [...]

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH bpf-next v2 01/11] bpf: Set kfunc dynptr arg type flag based on prototype
  2026-03-11 22:37       ` Andrii Nakryiko
@ 2026-03-11 23:03         ` Amery Hung
  2026-03-11 23:15           ` Andrii Nakryiko
  0 siblings, 1 reply; 46+ messages in thread
From: Amery Hung @ 2026-03-11 23:03 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, netdev, alexei.starovoitov, andrii, daniel, memxor,
	martin.lau, kernel-team

On Wed, Mar 11, 2026 at 3:37 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Wed, Mar 11, 2026 at 1:01 PM Amery Hung <ameryhung@gmail.com> wrote:
> >
> > On Wed, Mar 11, 2026 at 12:44 PM Andrii Nakryiko
> > <andrii.nakryiko@gmail.com> wrote:
> > >
> > > On Fri, Mar 6, 2026 at 10:44 PM Amery Hung <ameryhung@gmail.com> wrote:
> > > >
> > > > The verifier should decide whether a dynptr argument is read-only
> > > > based on if the type is "const struct bpf_dynptr *", not the type of
> > > > the register passed to the kfunc. This currently does not cause issues
> > > > because existing kfuncs that mutate struct bpf_dynptr are constructors
> > > > (e.g., bpf_dynptr_from_xxx and bpf_dynptr_clone). These kfuncs have
> > > > additional check in process_dynptr_func() to make sure the stack slot
> > > > does not contain initialized dynptr. Nonetheless, this should still be
> > > > fixed to avoid future issues when there is a non-constructor dynptr
> > > > kfunc that can mutate dynptr. This is also a small step toward unifying
> > > > kfunc and helper handling in the verifier, where the first step is to
> > > > generate kfunc prototype similar to bpf_func_proto before the main
> > > > verification loop.
> > > >
> > > > We also need to correctly mark some kfunc arguments as "const struct
> > > > bpf_dynptr *" to align with other kfuncs that take non-mutable dynptr
> > > > argument and to not break their usage. Adding const qualifier does
> > > > not break backward compatibility.
> > > >
> > > > Signed-off-by: Amery Hung <ameryhung@gmail.com>
> > > > ---
> > > >  fs/verity/measure.c                            |  2 +-
> > > >  include/linux/bpf.h                            |  8 ++++----
> > > >  kernel/bpf/helpers.c                           | 10 +++++-----
> > > >  kernel/bpf/verifier.c                          | 18 +++++++++++++++++-
> > > >  kernel/trace/bpf_trace.c                       | 18 +++++++++---------
> > > >  tools/testing/selftests/bpf/bpf_kfuncs.h       |  6 +++---
> > > >  .../selftests/bpf/progs/dynptr_success.c       |  6 +++---
> > > >  .../bpf/progs/test_kfunc_dynptr_param.c        |  7 +------
> > > >  8 files changed, 43 insertions(+), 32 deletions(-)
> > > >
> > > > diff --git a/fs/verity/measure.c b/fs/verity/measure.c
> > > > index 6a35623ebdf0..3840436e4510 100644
> > > > --- a/fs/verity/measure.c
> > > > +++ b/fs/verity/measure.c
> > > > @@ -118,7 +118,7 @@ __bpf_kfunc_start_defs();
> > > >   *
> > > >   * Return: 0 on success, a negative value on error.
> > > >   */
> > > > -__bpf_kfunc int bpf_get_fsverity_digest(struct file *file, struct bpf_dynptr *digest_p)
> > > > +__bpf_kfunc int bpf_get_fsverity_digest(struct file *file, const struct bpf_dynptr *digest_p)
> > >
> > > but kfunc is writing into digest_p, so that const is wrong?...
> > >
> > > >  {
> > > >         struct bpf_dynptr_kern *digest_ptr = (struct bpf_dynptr_kern *)digest_p;
> > > >         const struct inode *inode = file_inode(file);
> > >
> > > [...]
> > >
> > > > index 6eb6c82ed2ee..3d44896587ac 100644
> > > > --- a/kernel/bpf/helpers.c
> > > > +++ b/kernel/bpf/helpers.c
> > > > @@ -3000,8 +3000,8 @@ __bpf_kfunc int bpf_dynptr_clone(const struct bpf_dynptr *p,
> > > >   * Copies data from source dynptr to destination dynptr.
> > > >   * Returns 0 on success; negative error, otherwise.
> > > >   */
> > > > -__bpf_kfunc int bpf_dynptr_copy(struct bpf_dynptr *dst_ptr, u64 dst_off,
> > > > -                               struct bpf_dynptr *src_ptr, u64 src_off, u64 size)
> > > > +__bpf_kfunc int bpf_dynptr_copy(const struct bpf_dynptr *dst_ptr, u64 dst_off,
> > >
> > > again, dst_ptr clearly is modifiable because we are copying data into it.
> > >
> > > What am I missing, why is this logically correct?
> > >
> > > (I understand that from purely C type system POV this is fine, because
> > > we don't modify bpf_dynptr struct itself on the stack, but bpf_dynptr
> > > is a representation of some memory, and if we are modifying this
> > > memory, then I think it should be not marked as const)
> >
> > The patch is just to first make the arg type determination independent
> > from bpf_reg_state and make kfunc signature consistent based on what
> > commit 52f37c4e0f11 (bpf: Rework process_dynptr_func) has laid out.
> >
> > Perhaps MEM_RDONLY is a bit misleading. In process_dynptr_func(), the
> > flag means the dynptr struct on the stack is immutable. Currently,
> > there is no way (and maybe no need?) to specify read-only dynptr.
> >
>
> are you basically trying to determine if CONST_DYNPTR_PTR is allowed or not?

Yes.

>
> > > > The verifier should decide whether a dynptr argument is read-only
>
> can you please remind us what "read-only dynptr argument" means for verifier?

Oh well... my apologies for the inconsistency in commit message and
comments. It should be "immutable" instead of "read-only" here.

>
> > >
> > > > +                               const struct bpf_dynptr *src_ptr, u64 src_off, u64 size)
> > > >  {
> > > >         struct bpf_dynptr_kern *dst = (struct bpf_dynptr_kern *)dst_ptr;
> > > >         struct bpf_dynptr_kern *src = (struct bpf_dynptr_kern *)src_ptr;
> > >
> > > [...]

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH bpf-next v2 01/11] bpf: Set kfunc dynptr arg type flag based on prototype
  2026-03-11 23:03         ` Amery Hung
@ 2026-03-11 23:15           ` Andrii Nakryiko
  2026-03-12 16:59             ` Amery Hung
  0 siblings, 1 reply; 46+ messages in thread
From: Andrii Nakryiko @ 2026-03-11 23:15 UTC (permalink / raw)
  To: Amery Hung
  Cc: bpf, netdev, alexei.starovoitov, andrii, daniel, memxor,
	martin.lau, kernel-team

On Wed, Mar 11, 2026 at 4:03 PM Amery Hung <ameryhung@gmail.com> wrote:
>
> On Wed, Mar 11, 2026 at 3:37 PM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> >
> > On Wed, Mar 11, 2026 at 1:01 PM Amery Hung <ameryhung@gmail.com> wrote:
> > >
> > > On Wed, Mar 11, 2026 at 12:44 PM Andrii Nakryiko
> > > <andrii.nakryiko@gmail.com> wrote:
> > > >
> > > > On Fri, Mar 6, 2026 at 10:44 PM Amery Hung <ameryhung@gmail.com> wrote:
> > > > >
> > > > > The verifier should decide whether a dynptr argument is read-only
> > > > > based on if the type is "const struct bpf_dynptr *", not the type of
> > > > > the register passed to the kfunc. This currently does not cause issues
> > > > > because existing kfuncs that mutate struct bpf_dynptr are constructors
> > > > > (e.g., bpf_dynptr_from_xxx and bpf_dynptr_clone). These kfuncs have
> > > > > additional check in process_dynptr_func() to make sure the stack slot
> > > > > does not contain initialized dynptr. Nonetheless, this should still be
> > > > > fixed to avoid future issues when there is a non-constructor dynptr
> > > > > kfunc that can mutate dynptr. This is also a small step toward unifying
> > > > > kfunc and helper handling in the verifier, where the first step is to
> > > > > generate kfunc prototype similar to bpf_func_proto before the main
> > > > > verification loop.
> > > > >
> > > > > We also need to correctly mark some kfunc arguments as "const struct
> > > > > bpf_dynptr *" to align with other kfuncs that take non-mutable dynptr
> > > > > argument and to not break their usage. Adding const qualifier does
> > > > > not break backward compatibility.
> > > > >
> > > > > Signed-off-by: Amery Hung <ameryhung@gmail.com>
> > > > > ---
> > > > >  fs/verity/measure.c                            |  2 +-
> > > > >  include/linux/bpf.h                            |  8 ++++----
> > > > >  kernel/bpf/helpers.c                           | 10 +++++-----
> > > > >  kernel/bpf/verifier.c                          | 18 +++++++++++++++++-
> > > > >  kernel/trace/bpf_trace.c                       | 18 +++++++++---------
> > > > >  tools/testing/selftests/bpf/bpf_kfuncs.h       |  6 +++---
> > > > >  .../selftests/bpf/progs/dynptr_success.c       |  6 +++---
> > > > >  .../bpf/progs/test_kfunc_dynptr_param.c        |  7 +------
> > > > >  8 files changed, 43 insertions(+), 32 deletions(-)
> > > > >
> > > > > diff --git a/fs/verity/measure.c b/fs/verity/measure.c
> > > > > index 6a35623ebdf0..3840436e4510 100644
> > > > > --- a/fs/verity/measure.c
> > > > > +++ b/fs/verity/measure.c
> > > > > @@ -118,7 +118,7 @@ __bpf_kfunc_start_defs();
> > > > >   *
> > > > >   * Return: 0 on success, a negative value on error.
> > > > >   */
> > > > > -__bpf_kfunc int bpf_get_fsverity_digest(struct file *file, struct bpf_dynptr *digest_p)
> > > > > +__bpf_kfunc int bpf_get_fsverity_digest(struct file *file, const struct bpf_dynptr *digest_p)
> > > >
> > > > but kfunc is writing into digest_p, so that const is wrong?...
> > > >
> > > > >  {
> > > > >         struct bpf_dynptr_kern *digest_ptr = (struct bpf_dynptr_kern *)digest_p;
> > > > >         const struct inode *inode = file_inode(file);
> > > >
> > > > [...]
> > > >
> > > > > index 6eb6c82ed2ee..3d44896587ac 100644
> > > > > --- a/kernel/bpf/helpers.c
> > > > > +++ b/kernel/bpf/helpers.c
> > > > > @@ -3000,8 +3000,8 @@ __bpf_kfunc int bpf_dynptr_clone(const struct bpf_dynptr *p,
> > > > >   * Copies data from source dynptr to destination dynptr.
> > > > >   * Returns 0 on success; negative error, otherwise.
> > > > >   */
> > > > > -__bpf_kfunc int bpf_dynptr_copy(struct bpf_dynptr *dst_ptr, u64 dst_off,
> > > > > -                               struct bpf_dynptr *src_ptr, u64 src_off, u64 size)
> > > > > +__bpf_kfunc int bpf_dynptr_copy(const struct bpf_dynptr *dst_ptr, u64 dst_off,
> > > >
> > > > again, dst_ptr clearly is modifiable because we are copying data into it.
> > > >
> > > > What am I missing, why is this logically correct?
> > > >
> > > > (I understand that from purely C type system POV this is fine, because
> > > > we don't modify bpf_dynptr struct itself on the stack, but bpf_dynptr
> > > > is a representation of some memory, and if we are modifying this
> > > > memory, then I think it should be not marked as const)
> > >
> > > The patch is just to first make the arg type determination independent
> > > from bpf_reg_state and make kfunc signature consistent based on what
> > > commit 52f37c4e0f11 (bpf: Rework process_dynptr_func) has laid out.
> > >
> > > Perhaps MEM_RDONLY is a bit misleading. In process_dynptr_func(), the
> > > flag means the dynptr struct on the stack is immutable. Currently,
> > > there is no way (and maybe no need?) to specify read-only dynptr.
> > >
> >
> > are you basically trying to determine if CONST_DYNPTR_PTR is allowed or not?
>
> Yes.
>
> >
> > > > > The verifier should decide whether a dynptr argument is read-only
> >
> > can you please remind us what "read-only dynptr argument" means for verifier?
>
> Oh well... my apologies for the inconsistency in commit message and
> comments. It should be "immutable" instead of "read-only" here.

I guess I'd suggest

struct bpf_dynptr *const dptr

to mark this through the type system? That is, a *constant pointer* to
dynptr, not a pointer to a *constant dynptr*.

WDYT?

>
> >
> > > >
> > > > > +                               const struct bpf_dynptr *src_ptr, u64 src_off, u64 size)
> > > > >  {
> > > > >         struct bpf_dynptr_kern *dst = (struct bpf_dynptr_kern *)dst_ptr;
> > > > >         struct bpf_dynptr_kern *src = (struct bpf_dynptr_kern *)src_ptr;
> > > >
> > > > [...]

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH bpf-next v2 05/11] bpf: Preserve reg->id of pointer objects after null-check
  2026-03-11 22:29     ` Alexei Starovoitov
@ 2026-03-11 23:46       ` Amery Hung
  2026-03-17 18:49         ` Eduard Zingerman
  0 siblings, 1 reply; 46+ messages in thread
From: Amery Hung @ 2026-03-11 23:46 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: bpf, Network Development, Andrii Nakryiko, Daniel Borkmann,
	Kumar Kartikeya Dwivedi, Martin KaFai Lau, Kernel Team

On Wed, Mar 11, 2026 at 3:30 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Wed, Mar 11, 2026 at 3:26 PM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > On Fri, Mar 6, 2026 at 10:44 PM Amery Hung <ameryhung@gmail.com> wrote:
> > >
> > > Preserve reg->id of pointer objects after null-checking the register so
> > > that children objects derived from it can still refer to it in the new
> > > object relationship tracking mechanism introduced in a later patch. This
> > > change incurs a slight increase in the number of states in one selftest
> > > bpf object, rbtree_search.bpf.o. For Meta bpf objects, the increase of
> > > states is also negligible.
> > >
> > > Selftest BPF objects with insns_diff > 0
> > >
> > > Insns (A)  Insns (B)  Insns  (DIFF)  States (A)  States (B)  States (DIFF)
> > > ---------  ---------  -------------  ----------  ----------  -------------
> > >      7309       7814  +505 (+6.91%)         394         413   +19 (+4.82%)
> > >
> > > Meta BPF objects with insns_diff > 0
> > >
> > > Insns (A)  Insns (B)  Insns   (DIFF)  States (A)  States (B)  States (DIFF)
> > > ---------  ---------  --------------  ----------  ----------  -------------
> > >        52         57     +5 (+9.62%)           5           6   +1 (+20.00%)
> > >        52         57     +5 (+9.62%)           5           6   +1 (+20.00%)
> > >       676        679     +3 (+0.44%)          54          54    +0 (+0.00%)
> > >       289        292     +3 (+1.04%)          20          20    +0 (+0.00%)
> > >        78         82     +4 (+5.13%)           8           8    +0 (+0.00%)
> > >       252        320   +68 (+26.98%)          21          27   +6 (+28.57%)
> > >       252        320   +68 (+26.98%)          21          27   +6 (+28.57%)
> > >       119        126     +7 (+5.88%)           6           7   +1 (+16.67%)
> > >      1119       1128     +9 (+0.80%)          95          96    +1 (+1.05%)
> > >      1128       1137     +9 (+0.80%)          95          96    +1 (+1.05%)
> > >      4380       4465    +85 (+1.94%)         114         118    +4 (+3.51%)
> > >      3093       3170    +77 (+2.49%)          83          88    +5 (+6.02%)
> > >     30181      31224  +1043 (+3.46%)         832         863   +31 (+3.73%)
> > >    237608     237619    +11 (+0.00%)       11670       11671    +1 (+0.01%)
> > >     94832      94836     +4 (+0.00%)        4787        4788    +1 (+0.02%)
> > >    237387     237407    +20 (+0.01%)       11651       11652    +1 (+0.01%)
> > >     94832      94836     +4 (+0.00%)        4787        4788    +1 (+0.02%)
> > >      8103       8108     +5 (+0.06%)         459         459    +0 (+0.00%)
> > >      8076       8079     +3 (+0.04%)         457         457    +0 (+0.00%)
> > >      8177       8197    +20 (+0.24%)         459         460    +1 (+0.22%)
> > >      8083       8086     +3 (+0.04%)         458         458    +0 (+0.00%)
> > >    237608     237619    +11 (+0.00%)       11670       11671    +1 (+0.01%)
> > >     94832      94836     +4 (+0.00%)        4787        4788    +1 (+0.02%)
> > >    237387     237407    +20 (+0.01%)       11651       11652    +1 (+0.01%)
> > >     94832      94836     +4 (+0.00%)        4787        4788    +1 (+0.02%)
> > >      8103       8108     +5 (+0.06%)         459         459    +0 (+0.00%)
> > >      8076       8079     +3 (+0.04%)         457         457    +0 (+0.00%)
> > >      8177       8197    +20 (+0.24%)         459         460    +1 (+0.22%)
> > >      8083       8086     +3 (+0.04%)         458         458    +0 (+0.00%)
> >
> > The table is missing names.
> > State columns can be dropped instead.

Program                   Insns (A)  Insns (B)  Insns   (DIFF)  States (A)  States (B)  States (DIFF)
------------------------  ---------  ---------  --------------  ----------  ----------  -------------
ned_imex_be_tclass               52         57     +5 (+9.62%)           5           6   +1 (+20.00%)
ned_imex_be_tclass               52         57     +5 (+9.62%)           5           6   +1 (+20.00%)
ned_skop_auto_flowlabel         676        679     +3 (+0.44%)          54          54    +0 (+0.00%)
ned_skop_mss                    289        292     +3 (+1.04%)          20          20    +0 (+0.00%)
ned_skopt_bet_classifier         78         82     +4 (+5.13%)           8           8    +0 (+0.00%)
dctcp_update_alpha              252        320   +68 (+26.98%)          21          27   +6 (+28.57%)
dctcp_update_alpha              252        320   +68 (+26.98%)          21          27   +6 (+28.57%)
ned_ts_func                     119        126     +7 (+5.88%)           6           7   +1 (+16.67%)
tw_egress                      1119       1128     +9 (+0.80%)          95          96    +1 (+1.05%)
tw_ingress                     1128       1137     +9 (+0.80%)          95          96    +1 (+1.05%)
tw_tproxy_router               4380       4465    +85 (+1.94%)         114         118    +4 (+3.51%)
tw_tproxy_router4              3093       3170    +77 (+2.49%)          83          88    +5 (+6.02%)
ttls_tc_ingress               30181      31224  +1043 (+3.46%)         832         863   +31 (+3.73%)
tw_twfw_egress               237608     237619    +11 (+0.00%)       11670       11671    +1 (+0.01%)
tw_twfw_ingress               94832      94836     +4 (+0.00%)        4787        4788    +1 (+0.02%)
tw_twfw_tc_eg                237387     237407    +20 (+0.01%)       11651       11652    +1 (+0.01%)
tw_twfw_tc_in                 94832      94836     +4 (+0.00%)        4787        4788    +1 (+0.02%)
tw_twfw_egress                 8103       8108     +5 (+0.06%)         459         459    +0 (+0.00%)
tw_twfw_ingress                8076       8079     +3 (+0.04%)         457         457    +0 (+0.00%)
tw_twfw_tc_eg                  8177       8197    +20 (+0.24%)         459         460    +1 (+0.22%)
tw_twfw_tc_in                  8083       8086     +3 (+0.04%)         458         458    +0 (+0.00%)
tw_twfw_egress               237608     237619    +11 (+0.00%)       11670       11671    +1 (+0.01%)
tw_twfw_ingress               94832      94836     +4 (+0.00%)        4787        4788    +1 (+0.02%)
tw_twfw_tc_eg                237387     237407    +20 (+0.01%)       11651       11652    +1 (+0.01%)
tw_twfw_tc_in                 94832      94836     +4 (+0.00%)        4787        4788    +1 (+0.02%)
tw_twfw_egress                 8103       8108     +5 (+0.06%)         459         459    +0 (+0.00%)
tw_twfw_ingress                8076       8079     +3 (+0.04%)         457         457    +0 (+0.00%)
tw_twfw_tc_eg                  8177       8197    +20 (+0.24%)         459         460    +1 (+0.22%)
tw_twfw_tc_in                  8083       8086     +3 (+0.04%)         458         458    +0 (+0.00%)

> >
> > > Looking into rbtree_search, the reason for such increase is that the
> > > verifier has to explore the main loop shown below for one more iteration
> > > until state pruning decides the current state is safe.
> > >
> > > long rbtree_search(void *ctx)
> > > {
> > >         ...
> > >         bpf_spin_lock(&glock0);
> > >         rb_n = bpf_rbtree_root(&groot0);
> > >         while (can_loop) {
> > >                 if (!rb_n) {
> > >                         bpf_spin_unlock(&glock0);
> > >                         return __LINE__;
> > >                 }
> > >
> > >                 n = rb_entry(rb_n, struct node_data, r0);
> > >                 if (lookup_key == n->key0)
> > >                         break;
> > >                 if (nr_gc < NR_NODES)
> > >                         gc_ns[nr_gc++] = rb_n;
> > >                 if (lookup_key < n->key0)
> > >                         rb_n = bpf_rbtree_left(&groot0, rb_n);
> > >                 else
> > >                         rb_n = bpf_rbtree_right(&groot0, rb_n);
> > >         }
> > >         ...
> > > }
> > >
> > > Below is what the verifier sees at the start of each iteration
> > > (65: may_goto) after preserving id of rb_n. Without id of rb_n, the
> > > verifier stops exploring the loop at iter 16.
> > >
> > >            rb_n  gc_ns[15]
> > > iter 15    257   257
> > >
> > > iter 16    290   257    rb_n: idmap add 257->290
> > >                         gc_ns[15]: check 257 != 290 --> state not equal
> > >
> > > iter 17    325   257    rb_n: idmap add 290->325
> > >                         gc_ns[15]: idmap add 257->257 --> state safe
> >
> > I'm not following. The verifier processes above as a bounded loop.
> > All 16 (NR_NODES) iterations.

This is not a bounded loop IIUC. Note that there is no else branch on
"if (nr_gc < NR_NODES)" to break out of the loop, so the iteration count
is not bounded by NR_NODES. Therefore the verifier processes 17
iterations after preserving the id.

> >
> > Why presence of id on 'rb_n' makes a difference?
> > It will still process 16 loops.
> >
> > Which insn is safe vs not in the above ?
> > One after gc_ns[nr_gc++] = rb_n ?

It is the while (can_loop) check.

>
> One more thing...
>
> How does it interact with reg_is_init_pkt_pointer() ?
>
> That pointer has to have id == 0.

I haven't looked deeply into that case. Currently, skb is non-referenced
for non-qdisc programs, so the skb dynptr won't need to track it.

If there is ever a need to track it, we can assign a reserved non-zero
id to the unmodified pkt pointer. reg_is_init_pkt_pointer() already
checks tnum_equals_const(reg->var_off, 0), so maybe it is fine to drop
the id check (not sure).

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH bpf-next v2 01/11] bpf: Set kfunc dynptr arg type flag based on prototype
  2026-03-11 23:15           ` Andrii Nakryiko
@ 2026-03-12 16:59             ` Amery Hung
  2026-03-12 20:09               ` Andrii Nakryiko
  0 siblings, 1 reply; 46+ messages in thread
From: Amery Hung @ 2026-03-12 16:59 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, netdev, alexei.starovoitov, andrii, daniel, memxor,
	martin.lau, kernel-team

On Wed, Mar 11, 2026 at 4:15 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Wed, Mar 11, 2026 at 4:03 PM Amery Hung <ameryhung@gmail.com> wrote:
> >
> > On Wed, Mar 11, 2026 at 3:37 PM Andrii Nakryiko
> > <andrii.nakryiko@gmail.com> wrote:
> > >
> > > On Wed, Mar 11, 2026 at 1:01 PM Amery Hung <ameryhung@gmail.com> wrote:
> > > >
> > > > On Wed, Mar 11, 2026 at 12:44 PM Andrii Nakryiko
> > > > <andrii.nakryiko@gmail.com> wrote:
> > > > >
> > > > > On Fri, Mar 6, 2026 at 10:44 PM Amery Hung <ameryhung@gmail.com> wrote:
> > > > > >
> > > > > > The verifier should decide whether a dynptr argument is read-only
> > > > > > based on if the type is "const struct bpf_dynptr *", not the type of
> > > > > > the register passed to the kfunc. This currently does not cause issues
> > > > > > because existing kfuncs that mutate struct bpf_dynptr are constructors
> > > > > > (e.g., bpf_dynptr_from_xxx and bpf_dynptr_clone). These kfuncs have
> > > > > > additional check in process_dynptr_func() to make sure the stack slot
> > > > > > does not contain initialized dynptr. Nonetheless, this should still be
> > > > > > fixed to avoid future issues when there is a non-constructor dynptr
> > > > > > kfunc that can mutate dynptr. This is also a small step toward unifying
> > > > > > kfunc and helper handling in the verifier, where the first step is to
> > > > > > generate kfunc prototype similar to bpf_func_proto before the main
> > > > > > verification loop.
> > > > > >
> > > > > > We also need to correctly mark some kfunc arguments as "const struct
> > > > > > bpf_dynptr *" to align with other kfuncs that take non-mutable dynptr
> > > > > > argument and to not break their usage. Adding const qualifier does
> > > > > > not break backward compatibility.
> > > > > >
> > > > > > Signed-off-by: Amery Hung <ameryhung@gmail.com>
> > > > > > ---
> > > > > >  fs/verity/measure.c                            |  2 +-
> > > > > >  include/linux/bpf.h                            |  8 ++++----
> > > > > >  kernel/bpf/helpers.c                           | 10 +++++-----
> > > > > >  kernel/bpf/verifier.c                          | 18 +++++++++++++++++-
> > > > > >  kernel/trace/bpf_trace.c                       | 18 +++++++++---------
> > > > > >  tools/testing/selftests/bpf/bpf_kfuncs.h       |  6 +++---
> > > > > >  .../selftests/bpf/progs/dynptr_success.c       |  6 +++---
> > > > > >  .../bpf/progs/test_kfunc_dynptr_param.c        |  7 +------
> > > > > >  8 files changed, 43 insertions(+), 32 deletions(-)
> > > > > >
> > > > > > diff --git a/fs/verity/measure.c b/fs/verity/measure.c
> > > > > > index 6a35623ebdf0..3840436e4510 100644
> > > > > > --- a/fs/verity/measure.c
> > > > > > +++ b/fs/verity/measure.c
> > > > > > @@ -118,7 +118,7 @@ __bpf_kfunc_start_defs();
> > > > > >   *
> > > > > >   * Return: 0 on success, a negative value on error.
> > > > > >   */
> > > > > > -__bpf_kfunc int bpf_get_fsverity_digest(struct file *file, struct bpf_dynptr *digest_p)
> > > > > > +__bpf_kfunc int bpf_get_fsverity_digest(struct file *file, const struct bpf_dynptr *digest_p)
> > > > >
> > > > > but kfunc is writing into digest_p, so that const is wrong?...
> > > > >
> > > > > >  {
> > > > > >         struct bpf_dynptr_kern *digest_ptr = (struct bpf_dynptr_kern *)digest_p;
> > > > > >         const struct inode *inode = file_inode(file);
> > > > >
> > > > > [...]
> > > > >
> > > > > > index 6eb6c82ed2ee..3d44896587ac 100644
> > > > > > --- a/kernel/bpf/helpers.c
> > > > > > +++ b/kernel/bpf/helpers.c
> > > > > > @@ -3000,8 +3000,8 @@ __bpf_kfunc int bpf_dynptr_clone(const struct bpf_dynptr *p,
> > > > > >   * Copies data from source dynptr to destination dynptr.
> > > > > >   * Returns 0 on success; negative error, otherwise.
> > > > > >   */
> > > > > > -__bpf_kfunc int bpf_dynptr_copy(struct bpf_dynptr *dst_ptr, u64 dst_off,
> > > > > > -                               struct bpf_dynptr *src_ptr, u64 src_off, u64 size)
> > > > > > +__bpf_kfunc int bpf_dynptr_copy(const struct bpf_dynptr *dst_ptr, u64 dst_off,
> > > > >
> > > > > again, dst_ptr clearly is modifiable because we are copying data into it.
> > > > >
> > > > > What am I missing, why is this logically correct?
> > > > >
> > > > > (I understand that from purely C type system POV this is fine, because
> > > > > we don't modify bpf_dynptr struct itself on the stack, but bpf_dynptr
> > > > > is a representation of some memory, and if we are modifying this
> > > > > memory, then I think it should be not marked as const)
> > > >
> > > > The patch is just to first make the arg type determination independent
> > > > from bpf_reg_state and make kfunc signature consistent based on what
> > > > commit 52f37c4e0f11 (bpf: Rework process_dynptr_func) has laid out.
> > > >
> > > > Perhaps MEM_RDONLY is a bit misleading. In process_dynptr_func(), the
> > > > flag means the dynptr struct on the stack is immutable. Currently,
> > > > there is no way (and maybe no need?) to specify read-only dynptr.
> > > >
> > >
> > > are you basically trying to determine if CONST_DYNPTR_PTR is allowed or not?
> >
> > Yes.
> >
> > >
> > > > > > The verifier should decide whether a dynptr argument is read-only
> > >
> > > can you please remind us what "read-only dynptr argument" means for verifier?
> >
> > Oh well... my apologies for the inconsistency in commit message and
> > comments. It should be "immutable" instead of "read-only" here.
>
> I guess I'd suggest
>
> struct bpf_dynptr *const dptr
>
> to mark this through type system? that is, *constant pointer* to
> dynptr, not a pointer to *constant dynptr*
>
> WDYT?

I think they both make sense from different angles, but I am not sure
which one is better.

From a purely C point of view, the pointer points to a struct bpf_dynptr
that should not be mutated, so "const struct bpf_dynptr *p" is correct.

From the BPF dynptr abstraction's point of view, the pointer points to
the underlying memory (e.g., skb, file, ringbuf, etc.), so "struct
bpf_dynptr *const p".

Whichever way we choose, I'd suggest making that a separate patch. This
patch at least makes things consistent and fixes a logical bug.

>
> >
> > >
> > > > >
> > > > > > +                               const struct bpf_dynptr *src_ptr, u64 src_off, u64 size)
> > > > > >  {
> > > > > >         struct bpf_dynptr_kern *dst = (struct bpf_dynptr_kern *)dst_ptr;
> > > > > >         struct bpf_dynptr_kern *src = (struct bpf_dynptr_kern *)src_ptr;
> > > > >
> > > > > [...]

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH bpf-next v2 01/11] bpf: Set kfunc dynptr arg type flag based on prototype
  2026-03-12 16:59             ` Amery Hung
@ 2026-03-12 20:09               ` Andrii Nakryiko
  2026-03-13  3:25                 ` Alexei Starovoitov
  0 siblings, 1 reply; 46+ messages in thread
From: Andrii Nakryiko @ 2026-03-12 20:09 UTC (permalink / raw)
  To: Amery Hung
  Cc: bpf, netdev, alexei.starovoitov, andrii, daniel, memxor,
	martin.lau, kernel-team

On Thu, Mar 12, 2026 at 9:59 AM Amery Hung <ameryhung@gmail.com> wrote:
>
> On Wed, Mar 11, 2026 at 4:15 PM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> >
> > On Wed, Mar 11, 2026 at 4:03 PM Amery Hung <ameryhung@gmail.com> wrote:
> > >
> > > On Wed, Mar 11, 2026 at 3:37 PM Andrii Nakryiko
> > > <andrii.nakryiko@gmail.com> wrote:
> > > >
> > > > On Wed, Mar 11, 2026 at 1:01 PM Amery Hung <ameryhung@gmail.com> wrote:
> > > > >
> > > > > On Wed, Mar 11, 2026 at 12:44 PM Andrii Nakryiko
> > > > > <andrii.nakryiko@gmail.com> wrote:
> > > > > >
> > > > > > On Fri, Mar 6, 2026 at 10:44 PM Amery Hung <ameryhung@gmail.com> wrote:
> > > > > > >
> > > > > > > The verifier should decide whether a dynptr argument is read-only
> > > > > > > based on if the type is "const struct bpf_dynptr *", not the type of
> > > > > > > the register passed to the kfunc. This currently does not cause issues
> > > > > > > because existing kfuncs that mutate struct bpf_dynptr are constructors
> > > > > > > (e.g., bpf_dynptr_from_xxx and bpf_dynptr_clone). These kfuncs have
> > > > > > > additional check in process_dynptr_func() to make sure the stack slot
> > > > > > > does not contain initialized dynptr. Nonetheless, this should still be
> > > > > > > fixed to avoid future issues when there is a non-constructor dynptr
> > > > > > > kfunc that can mutate dynptr. This is also a small step toward unifying
> > > > > > > kfunc and helper handling in the verifier, where the first step is to
> > > > > > > generate kfunc prototype similar to bpf_func_proto before the main
> > > > > > > verification loop.
> > > > > > >
> > > > > > > We also need to correctly mark some kfunc arguments as "const struct
> > > > > > > bpf_dynptr *" to align with other kfuncs that take non-mutable dynptr
> > > > > > > argument and to not break their usage. Adding const qualifier does
> > > > > > > not break backward compatibility.
> > > > > > >
> > > > > > > Signed-off-by: Amery Hung <ameryhung@gmail.com>
> > > > > > > ---
> > > > > > >  fs/verity/measure.c                            |  2 +-
> > > > > > >  include/linux/bpf.h                            |  8 ++++----
> > > > > > >  kernel/bpf/helpers.c                           | 10 +++++-----
> > > > > > >  kernel/bpf/verifier.c                          | 18 +++++++++++++++++-
> > > > > > >  kernel/trace/bpf_trace.c                       | 18 +++++++++---------
> > > > > > >  tools/testing/selftests/bpf/bpf_kfuncs.h       |  6 +++---
> > > > > > >  .../selftests/bpf/progs/dynptr_success.c       |  6 +++---
> > > > > > >  .../bpf/progs/test_kfunc_dynptr_param.c        |  7 +------
> > > > > > >  8 files changed, 43 insertions(+), 32 deletions(-)
> > > > > > >
> > > > > > > diff --git a/fs/verity/measure.c b/fs/verity/measure.c
> > > > > > > index 6a35623ebdf0..3840436e4510 100644
> > > > > > > --- a/fs/verity/measure.c
> > > > > > > +++ b/fs/verity/measure.c
> > > > > > > @@ -118,7 +118,7 @@ __bpf_kfunc_start_defs();
> > > > > > >   *
> > > > > > >   * Return: 0 on success, a negative value on error.
> > > > > > >   */
> > > > > > > -__bpf_kfunc int bpf_get_fsverity_digest(struct file *file, struct bpf_dynptr *digest_p)
> > > > > > > +__bpf_kfunc int bpf_get_fsverity_digest(struct file *file, const struct bpf_dynptr *digest_p)
> > > > > >
> > > > > > but kfunc is writing into digest_p, so that const is wrong?...
> > > > > >
> > > > > > >  {
> > > > > > >         struct bpf_dynptr_kern *digest_ptr = (struct bpf_dynptr_kern *)digest_p;
> > > > > > >         const struct inode *inode = file_inode(file);
> > > > > >
> > > > > > [...]
> > > > > >
> > > > > > > index 6eb6c82ed2ee..3d44896587ac 100644
> > > > > > > --- a/kernel/bpf/helpers.c
> > > > > > > +++ b/kernel/bpf/helpers.c
> > > > > > > @@ -3000,8 +3000,8 @@ __bpf_kfunc int bpf_dynptr_clone(const struct bpf_dynptr *p,
> > > > > > >   * Copies data from source dynptr to destination dynptr.
> > > > > > >   * Returns 0 on success; negative error, otherwise.
> > > > > > >   */
> > > > > > > -__bpf_kfunc int bpf_dynptr_copy(struct bpf_dynptr *dst_ptr, u64 dst_off,
> > > > > > > -                               struct bpf_dynptr *src_ptr, u64 src_off, u64 size)
> > > > > > > +__bpf_kfunc int bpf_dynptr_copy(const struct bpf_dynptr *dst_ptr, u64 dst_off,
> > > > > >
> > > > > > again, dst_ptr clearly is modifiable because we are copying data into it.
> > > > > >
> > > > > > What am I missing, why is this logically correct?
> > > > > >
> > > > > > (I understand that from purely C type system POV this is fine, because
> > > > > > we don't modify bpf_dynptr struct itself on the stack, but bpf_dynptr
> > > > > > is a representation of some memory, and if we are modifying this
> > > > > > memory, then I think it should be not marked as const)
> > > > >
> > > > > The patch is just to first make the arg type determination independent
> > > > > from bpf_reg_state and make kfunc signature consistent based on what
> > > > > commit 52f37c4e0f11 (bpf: Rework process_dynptr_func) has laid out.
> > > > >
> > > > > Perhaps MEM_RDONLY is a bit misleading. In process_dynptr_func(), the
> > > > > flag means the dynptr struct on the stack is immutable. Currently,
> > > > > there is no way (and maybe no need?) to specify read-only dynptr.
> > > > >
> > > >
> > > > are you basically trying to determine if CONST_DYNPTR_PTR is allowed or not?
> > >
> > > Yes.
> > >
> > > >
> > > > > > > The verifier should decide whether a dynptr argument is read-only
> > > >
> > > > can you please remind us what "read-only dynptr argument" means for verifier?
> > >
> > > Oh well... my apologies for the inconsistency in commit message and
> > > comments. It should be "immutable" instead of "read-only" here.
> >
> > I guess I'd suggest
> >
> > struct bpf_dyntpr *const dptr
> >
> > to mark this through type system? that is, *constant pointer* to
> > dynptr, not a pointer to *constant dynptr*
> >
> > WDYT?
>
> I think they both make sense from different angles, but I am not sure
> which one is better.
>
> From purely C's point of view, the pointer points to a struct bpf_dynptr
> that should not be mutated, so "const struct bpf_dynptr *p" is correct.
>
> From the BPF dynptr abstraction's point of view, the pointer points to
> the underlying memory (e.g., skb, file, ringbuf, etc.), so "struct
> bpf_dynptr * const p".
>
> Any way we choose, I'd suggest that to be a separate patch. This patch
> at least makes things consistent and fixes a logical bug.

if we could rely on a decl tag, I'd go with that. But *const is unusual
and stands out, so I'd go with *const. I wonder if anyone else has any
thoughts.
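For concreteness, the two candidate spellings behave like this (a
standalone sketch; the struct layout here is a stand-in, not the real
opaque bpf_dynptr):

```c
#include <assert.h>

/* Stand-in struct; the real bpf_dynptr has opaque fields. */
struct bpf_dynptr { unsigned long data[2]; };

/* "pointer to const dynptr": the pointee cannot be written through p. */
static unsigned long read_only(const struct bpf_dynptr *p)
{
	/* p->data[0] = 1;   <-- would be a compile error */
	return p->data[0];
}

/* "const pointer to dynptr": p itself cannot be reassigned,
 * but the pointed-to dynptr may be written.
 */
static void mutable_pointee(struct bpf_dynptr *const p)
{
	p->data[0] = 42;	/* fine */
	/* p = 0;            <-- would be a compile error */
}
```

Neither spelling says anything about the memory the dynptr wraps; both
only constrain the on-stack struct or the pointer to it.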
>
> >
> > >
> > > >
> > > > > >
> > > > > > > +                               const struct bpf_dynptr *src_ptr, u64 src_off, u64 size)
> > > > > > >  {
> > > > > > >         struct bpf_dynptr_kern *dst = (struct bpf_dynptr_kern *)dst_ptr;
> > > > > > >         struct bpf_dynptr_kern *src = (struct bpf_dynptr_kern *)src_ptr;
> > > > > >
> > > > > > [...]

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH bpf-next v2 06/11] bpf: Refactor object relationship tracking and fix dynptr UAF bug
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 06/11] bpf: Refactor object relationship tracking and fix dynptr UAF bug Amery Hung
  2026-03-11 22:32   ` Andrii Nakryiko
@ 2026-03-12 23:33   ` Mykyta Yatsenko
  2026-03-13 20:33     ` Amery Hung
  1 sibling, 1 reply; 46+ messages in thread
From: Mykyta Yatsenko @ 2026-03-12 23:33 UTC (permalink / raw)
  To: Amery Hung, bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
	ameryhung, kernel-team

Amery Hung <ameryhung@gmail.com> writes:

> Refactor object relationship tracking in the verifier by removing
> dynptr_id and using parent_id to track the parent object. Then, track
> the referenced parent object for the dynptr when calling a dynptr
> constructor. This fixes a use-after-free bug: for a dynptr that has a
> referenced parent object (e.g., an skb dynptr in a BPF qdisc, or a file
> dynptr), the dynptr and any derived slices need to be invalidated when
> the parent object is released.
>
> First, add parent_id to bpf_reg_state to be able to precisely track
> objects' child-parent relationship. A child object will use parent_id
> to track the parent object's id. This replaces the dynptr-slice-specific
> dynptr_id.
>
> Then, when calling dynptr constructors (i.e., process_dynptr_func() with
> MEM_UNINIT argument), track the parent's id if the parent is a referenced
> object. This only applies to file dynptr and skb dynptr, so only pass
> parent reg->id to kfunc constructors.
>
> For release_reference(), this means that when invalidating an object, it
> also needs to invalidate all dependent objects by traversing the
> subtree. This is done using a stack-based DFS to avoid the recursive
> call chain release_reference() -> unmark_stack_slots_dynptr() ->
> release_reference(). Note that referenced objects encountered during the
> traversal cannot be released unless their id is the one initially passed
> to release_reference(), as they would require a helper call to release
> the acquired resources.
>
> While the new design changes how object relationships are being tracked
> in the verifier, it does NOT change the verifier's behavior. Here are
> the implications of the new design for dynptrs, pointer casting, and
> owning/non-owning references.
>
> Dynptr:
>
> When initializing a dynptr, a referenced dynptr acquires a reference and
> records it in ref_obj_id. If the dynptr has a referenced parent,
> parent_id tracks the parent's id. When cloning a dynptr, the clone's
> ref_obj_id and parent_id are copied directly from the original dynptr.
> This means that when releasing a referenced dynptr,
> release_reference(ref_obj_id) will release all clones, the original, and
> any derived slices. For a non-referenced dynptr, only the specific
> dynptr being released and its child slices will be invalidated.
>
> Pointer casting:
>
> A referenced socket pointer and the pointers cast from it share the same
> lifetime, but have different nullness. Therefore, they will have
> different ids but the same ref_obj_id.
>
> When converting owning references to non-owning:
>
> After converting a reference from owning to non-owning by clearing the
> object's ref_obj_id (e.g., object(id=1, ref_obj_id=1) -> object(id=1,
> ref_obj_id=0)), the verifier only needs to release the reference state
> instead of invalidating registers that carry the id, so call
> release_reference_nomark() instead of release_reference().
>
> CC: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> Fixes: 870c28588afa ("bpf: net_sched: Add basic bpf qdisc kfuncs")
> Signed-off-by: Amery Hung <ameryhung@gmail.com>
> ---
>  include/linux/bpf_verifier.h |  14 +-
>  kernel/bpf/log.c             |   4 +-
>  kernel/bpf/verifier.c        | 274 ++++++++++++++++++-----------------
>  3 files changed, 154 insertions(+), 138 deletions(-)
>
> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> index c1e30096ea7b..e987a48f511a 100644
> --- a/include/linux/bpf_verifier.h
> +++ b/include/linux/bpf_verifier.h
> @@ -65,7 +65,6 @@ struct bpf_reg_state {
>  
>  		struct { /* for PTR_TO_MEM | PTR_TO_MEM_OR_NULL */
>  			u32 mem_size;
> -			u32 dynptr_id; /* for dynptr slices */
>  		};
>  
>  		/* For dynptr stack slots */
> @@ -193,6 +192,13 @@ struct bpf_reg_state {
>  	 * allowed and has the same effect as bpf_sk_release(sk).
>  	 */
>  	u32 ref_obj_id;
> +	/* Tracks the parent object this register was derived from.
> +	 * Used for cascading invalidation: when the parent object is
> +	 * released or invalidated, all registers with matching parent_id
> +	 * are also invalidated. For example, a slice from bpf_dynptr_data()
> +	 * gets parent_id set to the dynptr's id.
> +	 */
> +	u32 parent_id;
>  	/* Inside the callee two registers can be both PTR_TO_STACK like
>  	 * R1=fp-8 and R2=fp-8, but one of them points to this function stack
>  	 * while another to the caller's stack. To differentiate them 'frameno'
> @@ -707,6 +713,11 @@ struct bpf_idset {
>  	} entries[BPF_ID_MAP_SIZE];
>  };
>  
> +struct bpf_idstack {
> +	int cnt;
> +	u32 ids[BPF_ID_MAP_SIZE];
> +};
> +
>  /* see verifier.c:compute_scc_callchain() */
>  struct bpf_scc_callchain {
>  	/* call sites from bpf_verifier_state->frame[*]->callsite leading to this SCC */
> @@ -789,6 +800,7 @@ struct bpf_verifier_env {
>  	union {
>  		struct bpf_idmap idmap_scratch;
>  		struct bpf_idset idset_scratch;
> +		struct bpf_idstack idstack_scratch;
>  	};
>  	struct {
>  		int *insn_state;
> diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c
> index 37d72b052192..cb4129b8b2a1 100644
> --- a/kernel/bpf/log.c
> +++ b/kernel/bpf/log.c
> @@ -707,6 +707,8 @@ static void print_reg_state(struct bpf_verifier_env *env,
>  		verbose(env, "%+d", reg->delta);
>  	if (reg->ref_obj_id)
>  		verbose_a("ref_obj_id=%d", reg->ref_obj_id);
> +	if (reg->parent_id)
> +		verbose_a("parent_id=%d", reg->parent_id);
>  	if (type_is_non_owning_ref(reg->type))
>  		verbose_a("%s", "non_own_ref");
>  	if (type_is_map_ptr(t)) {
> @@ -810,8 +812,6 @@ void print_verifier_state(struct bpf_verifier_env *env, const struct bpf_verifie
>  				verbose_a("id=%d", reg->id);
>  			if (reg->ref_obj_id)
>  				verbose_a("ref_id=%d", reg->ref_obj_id);
> -			if (reg->dynptr_id)
> -				verbose_a("dynptr_id=%d", reg->dynptr_id);
>  			verbose(env, ")");
>  			break;
>  		case STACK_ITER:
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 8f9e28901bc4..0436fc4d9107 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -204,7 +204,7 @@ struct bpf_verifier_stack_elem {
>  
>  static int acquire_reference(struct bpf_verifier_env *env, int insn_idx);
>  static int release_reference_nomark(struct bpf_verifier_state *state, int ref_obj_id);
> -static int release_reference(struct bpf_verifier_env *env, int ref_obj_id);
> +static int release_reference(struct bpf_verifier_env *env, int id);
>  static void invalidate_non_owning_refs(struct bpf_verifier_env *env);
>  static bool in_rbtree_lock_required_cb(struct bpf_verifier_env *env);
>  static int ref_set_non_owning(struct bpf_verifier_env *env,
> @@ -281,6 +281,7 @@ struct bpf_dynptr_desc {
>  	enum bpf_dynptr_type type;
>  	u32 id;
>  	u32 ref_obj_id;
> +	u32 parent_id;
>  };
>  
>  struct bpf_call_arg_meta {
> @@ -294,6 +295,7 @@ struct bpf_call_arg_meta {
>  	int mem_size;
>  	u64 msize_max_value;
>  	int ref_obj_id;
> +	u32 id;
>  	int func_id;
>  	struct btf *btf;
>  	u32 btf_id;
> @@ -321,6 +323,7 @@ struct bpf_kfunc_call_arg_meta {
>  	const char *func_name;
>  	/* Out parameters */
>  	u32 ref_obj_id;
> +	u32 id;
>  	u8 release_regno;
>  	bool r0_rdonly;
>  	u32 ret_btf_id;
> @@ -721,14 +724,14 @@ static enum bpf_type_flag get_dynptr_type_flag(enum bpf_dynptr_type type)
>  	}
>  }
>  
> -static bool dynptr_type_refcounted(enum bpf_dynptr_type type)
> +static bool dynptr_type_referenced(enum bpf_dynptr_type type)
>  {
>  	return type == BPF_DYNPTR_TYPE_RINGBUF || type == BPF_DYNPTR_TYPE_FILE;
>  }
>  
>  static void __mark_dynptr_reg(struct bpf_reg_state *reg,
>  			      enum bpf_dynptr_type type,
> -			      bool first_slot, int dynptr_id);
> +			      bool first_slot, int id);
>  
>  static void __mark_reg_not_init(const struct bpf_verifier_env *env,
>  				struct bpf_reg_state *reg);
> @@ -755,11 +758,12 @@ static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
>  				        struct bpf_func_state *state, int spi);
>  
>  static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
> -				   enum bpf_arg_type arg_type, int insn_idx, int clone_ref_obj_id)
> +				   enum bpf_arg_type arg_type, int insn_idx, int parent_id,
> +				   struct bpf_dynptr_desc *initialized_dynptr)
>  {
>  	struct bpf_func_state *state = func(env, reg);
> +	int spi, i, err, ref_obj_id = 0;
>  	enum bpf_dynptr_type type;
> -	int spi, i, err;
>  
>  	spi = dynptr_get_spi(env, reg);
>  	if (spi < 0)
> @@ -793,22 +797,28 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
>  	mark_dynptr_stack_regs(env, &state->stack[spi].spilled_ptr,
>  			       &state->stack[spi - 1].spilled_ptr, type);
>  
> -	if (dynptr_type_refcounted(type)) {
> -		/* The id is used to track proper releasing */
> -		int id;
> -
> -		if (clone_ref_obj_id)
> -			id = clone_ref_obj_id;
> -		else
> -			id = acquire_reference(env, insn_idx);
> -
> -		if (id < 0)
> -			return id;
> -
> -		state->stack[spi].spilled_ptr.ref_obj_id = id;
> -		state->stack[spi - 1].spilled_ptr.ref_obj_id = id;
> +	if (initialized_dynptr->type == BPF_DYNPTR_TYPE_INVALID) {
> +		if (dynptr_type_referenced(type)) {
> +			ref_obj_id = acquire_reference(env, insn_idx);
> +			if (ref_obj_id < 0)
> +				return ref_obj_id;
> +		}
> +	} else {
> +		/*
> +		 * Referenced dynptr clones have the same lifetime as the original dynptr
> +		 * since bpf_dynptr_clone() does not initialize the clones like the
> +		 * constructor does. If any of the dynptrs is invalidated, the rest will
> +		 * also need to be invalidated. Thus, they all share the same non-zero ref_obj_id.
> +		 */
> +		ref_obj_id = initialized_dynptr->ref_obj_id;
> +		parent_id = initialized_dynptr->parent_id;
>  	}
>  
> +	state->stack[spi].spilled_ptr.ref_obj_id = ref_obj_id;
> +	state->stack[spi - 1].spilled_ptr.ref_obj_id = ref_obj_id;
> +	state->stack[spi].spilled_ptr.parent_id = parent_id;
> +	state->stack[spi - 1].spilled_ptr.parent_id = parent_id;
> +
>  	bpf_mark_stack_write(env, state->frameno, BIT(spi - 1) | BIT(spi));
>  
>  	return 0;
> @@ -832,7 +842,7 @@ static void invalidate_dynptr(struct bpf_verifier_env *env, struct bpf_func_stat
>  static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
>  {
>  	struct bpf_func_state *state = func(env, reg);
> -	int spi, ref_obj_id, i;
> +	int spi;
>  
>  	/*
>  	 * This can only be set for PTR_TO_STACK, as CONST_PTR_TO_DYNPTR cannot
> @@ -843,45 +853,19 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
>  		verifier_bug(env, "CONST_PTR_TO_DYNPTR cannot be released");
>  		return -EFAULT;
>  	}
> +
>  	spi = dynptr_get_spi(env, reg);
>  	if (spi < 0)
>  		return spi;
>  
> -	if (!dynptr_type_refcounted(state->stack[spi].spilled_ptr.dynptr.type)) {
> -		invalidate_dynptr(env, state, spi);
> -		return 0;
> -	}
> -
> -	ref_obj_id = state->stack[spi].spilled_ptr.ref_obj_id;
> -
> -	/* If the dynptr has a ref_obj_id, then we need to invalidate
> -	 * two things:
> -	 *
> -	 * 1) Any dynptrs with a matching ref_obj_id (clones)
> -	 * 2) Any slices derived from this dynptr.
> +	/*
> +	 * For referenced dynptr, the clones share the same ref_obj_id and will be
> +	 * invalidated too. For non-referenced dynptr, only the dynptr and slices
> +	 * derived from it will be invalidated.
>  	 */
> -
> -	/* Invalidate any slices associated with this dynptr */
> -	WARN_ON_ONCE(release_reference(env, ref_obj_id));
> -
> -	/* Invalidate any dynptr clones */
> -	for (i = 1; i < state->allocated_stack / BPF_REG_SIZE; i++) {
> -		if (state->stack[i].spilled_ptr.ref_obj_id != ref_obj_id)
> -			continue;
> -
> -		/* it should always be the case that if the ref obj id
> -		 * matches then the stack slot also belongs to a
> -		 * dynptr
> -		 */
> -		if (state->stack[i].slot_type[0] != STACK_DYNPTR) {
> -			verifier_bug(env, "misconfigured ref_obj_id");
> -			return -EFAULT;
> -		}
> -		if (state->stack[i].spilled_ptr.dynptr.first_slot)
> -			invalidate_dynptr(env, state, i);
> -	}
> -
> -	return 0;
> +	reg = &state->stack[spi].spilled_ptr;
> +	return release_reference(env, dynptr_type_referenced(reg->dynptr.type) ?
> +				      reg->ref_obj_id : reg->id);
>  }
>  
>  static void __mark_reg_unknown(const struct bpf_verifier_env *env,
> @@ -898,10 +882,6 @@ static void mark_reg_invalid(const struct bpf_verifier_env *env, struct bpf_reg_
>  static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
>  				        struct bpf_func_state *state, int spi)
>  {
> -	struct bpf_func_state *fstate;
> -	struct bpf_reg_state *dreg;
> -	int i, dynptr_id;
> -
>  	/* We always ensure that STACK_DYNPTR is never set partially,
>  	 * hence just checking for slot_type[0] is enough. This is
>  	 * different for STACK_SPILL, where it may be only set for
> @@ -914,7 +894,7 @@ static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
>  	if (!state->stack[spi].spilled_ptr.dynptr.first_slot)
>  		spi = spi + 1;
>  
> -	if (dynptr_type_refcounted(state->stack[spi].spilled_ptr.dynptr.type)) {
> +	if (dynptr_type_referenced(state->stack[spi].spilled_ptr.dynptr.type)) {
>  		verbose(env, "cannot overwrite referenced dynptr\n");
>  		return -EINVAL;
>  	}
> @@ -922,31 +902,8 @@ static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
>  	mark_stack_slot_scratched(env, spi);
>  	mark_stack_slot_scratched(env, spi - 1);
>  
> -	/* Writing partially to one dynptr stack slot destroys both. */
> -	for (i = 0; i < BPF_REG_SIZE; i++) {
> -		state->stack[spi].slot_type[i] = STACK_INVALID;
> -		state->stack[spi - 1].slot_type[i] = STACK_INVALID;
> -	}
> -
> -	dynptr_id = state->stack[spi].spilled_ptr.id;
> -	/* Invalidate any slices associated with this dynptr */
> -	bpf_for_each_reg_in_vstate(env->cur_state, fstate, dreg, ({
> -		/* Dynptr slices are only PTR_TO_MEM_OR_NULL and PTR_TO_MEM */
> -		if (dreg->type != (PTR_TO_MEM | PTR_MAYBE_NULL) && dreg->type != PTR_TO_MEM)
> -			continue;
> -		if (dreg->dynptr_id == dynptr_id)
> -			mark_reg_invalid(env, dreg);
> -	}));
> -
> -	/* Do not release reference state, we are destroying dynptr on stack,
> -	 * not using some helper to release it. Just reset register.
> -	 */
> -	__mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
> -	__mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
> -
> -	bpf_mark_stack_write(env, state->frameno, BIT(spi - 1) | BIT(spi));
> -
> -	return 0;
> +	/* Invalidate the dynptr and any derived slices */
> +	return release_reference(env, state->stack[spi].spilled_ptr.id);
>  }
>  
>  static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> @@ -1583,15 +1540,15 @@ static void release_reference_state(struct bpf_verifier_state *state, int idx)
>  	return;
>  }
>  
> -static bool find_reference_state(struct bpf_verifier_state *state, int ptr_id)
> +static struct bpf_reference_state *find_reference_state(struct bpf_verifier_state *state, int ptr_id)
>  {
>  	int i;
>  
>  	for (i = 0; i < state->acquired_refs; i++)
>  		if (state->refs[i].id == ptr_id)
> -			return true;
> +			return &state->refs[i];
>  
> -	return false;
> +	return NULL;
>  }
>  
>  static int release_lock_state(struct bpf_verifier_state *state, int type, int id, void *ptr)
> @@ -2186,6 +2143,7 @@ static void __mark_reg_known(struct bpf_reg_state *reg, u64 imm)
>  	       offsetof(struct bpf_reg_state, var_off) - sizeof(reg->type));
>  	reg->id = 0;
>  	reg->ref_obj_id = 0;
> +	reg->parent_id = 0;
>  	___mark_reg_known(reg, imm);
>  }
>  
> @@ -2230,7 +2188,7 @@ static void mark_reg_known_zero(struct bpf_verifier_env *env,
>  }
>  
>  static void __mark_dynptr_reg(struct bpf_reg_state *reg, enum bpf_dynptr_type type,
> -			      bool first_slot, int dynptr_id)
> +			      bool first_slot, int id)
>  {
>  	/* reg->type has no meaning for STACK_DYNPTR, but when we set reg for
>  	 * callback arguments, it does need to be CONST_PTR_TO_DYNPTR, so simply
> @@ -2239,7 +2197,7 @@ static void __mark_dynptr_reg(struct bpf_reg_state *reg, enum bpf_dynptr_type ty
>  	__mark_reg_known_zero(reg);
>  	reg->type = CONST_PTR_TO_DYNPTR;
>  	/* Give each dynptr a unique id to uniquely associate slices to it. */
> -	reg->id = dynptr_id;
> +	reg->id = id;
>  	reg->dynptr.type = type;
>  	reg->dynptr.first_slot = first_slot;
>  }
> @@ -2801,6 +2759,7 @@ static void __mark_reg_unknown_imprecise(struct bpf_reg_state *reg)
>  	reg->type = SCALAR_VALUE;
>  	reg->id = 0;
>  	reg->ref_obj_id = 0;
> +	reg->parent_id = 0;
>  	reg->var_off = tnum_unknown;
>  	reg->frameno = 0;
>  	reg->precise = false;
> @@ -8746,7 +8705,7 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
>   * type, and declare it as 'const struct bpf_dynptr *' in their prototype.
>   */
>  static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn_idx,
> -			       enum bpf_arg_type arg_type, int clone_ref_obj_id,
> +			       enum bpf_arg_type arg_type, int parent_id,
>  			       struct bpf_dynptr_desc *initialized_dynptr)
>  {
>  	struct bpf_reg_state *reg = reg_state(env, regno);
> @@ -8798,7 +8757,8 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
>  				return err;
>  		}
>  
> -		err = mark_stack_slots_dynptr(env, reg, arg_type, insn_idx, clone_ref_obj_id);
> +		err = mark_stack_slots_dynptr(env, reg, arg_type, insn_idx, parent_id,
> +					      initialized_dynptr);
>  	} else /* MEM_RDONLY and None case from above */ {
>  		/* For the reg->type == PTR_TO_STACK case, bpf_dynptr is never const */
>  		if (reg->type == CONST_PTR_TO_DYNPTR && !(arg_type & MEM_RDONLY)) {
> @@ -8835,6 +8795,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
>  			initialized_dynptr->id = reg->id;
>  			initialized_dynptr->type = reg->dynptr.type;
>  			initialized_dynptr->ref_obj_id = reg->ref_obj_id;
> +			initialized_dynptr->parent_id = reg->parent_id;
>  		}
>  	}
>  	return err;
> @@ -9787,7 +9748,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
>  			 */
>  			if (reg->type == PTR_TO_STACK) {
>  				spi = dynptr_get_spi(env, reg);
> -				if (spi < 0 || !state->stack[spi].spilled_ptr.ref_obj_id) {
> +				if (spi < 0 || !state->stack[spi].spilled_ptr.id) {
>  					verbose(env, "arg %d is an unacquired reference\n", regno);
>  					return -EINVAL;
>  				}
> @@ -9815,6 +9776,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
>  			return -EACCES;
>  		}
>  		meta->ref_obj_id = reg->ref_obj_id;
> +		meta->id = reg->id;
>  	}
>  
>  	switch (base_type(arg_type)) {
> @@ -10438,26 +10400,82 @@ static int release_reference_nomark(struct bpf_verifier_state *state, int ref_ob
>  	return -EINVAL;
>  }
>  
> -/* The pointer with the specified id has released its reference to kernel
> - * resources. Identify all copies of the same pointer and clear the reference.
> - *
> - * This is the release function corresponding to acquire_reference(). Idempotent.
> - */
> -static int release_reference(struct bpf_verifier_env *env, int ref_obj_id)
> +static void idstack_reset(struct bpf_idstack *idstack)
> +{
> +	idstack->cnt = 0;
> +}
> +
I agree with Andrii: maybe the new bpf_idstack is not really worth adding.
Since the total number of reg_states is bounded, the idstack could be
replaced with a simpler approach using bpf_idset: store discovered IDs
in a flat array and keep a cursor to the first unprocessed entry. IDs
before the cursor are already visited; IDs at and after it are pending.
"Popping" becomes just advancing the cursor, and deduplication comes
naturally by searching the full array (both visited and pending) before
inserting a new ID.

This avoids the possibility of pushing the same child ID multiple times.
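A rough sketch of what I mean (struct and function names hypothetical,
not the actual verifier types):

```c
#include <assert.h>

#define ID_SET_SIZE 64

/* Flat array of discovered ids plus a cursor. Entries before 'cur' are
 * visited; entries at and after 'cur' are pending. Dedup scans the whole
 * array, so a child id can never be queued twice.
 */
struct id_set {
	int cnt;			/* total ids discovered */
	int cur;			/* index of first unprocessed id */
	unsigned int ids[ID_SET_SIZE];
};

static void id_set_reset(struct id_set *s)
{
	s->cnt = 0;
	s->cur = 0;
}

static void id_set_push(struct id_set *s, unsigned int id)
{
	int i;

	/* search both visited and pending entries before inserting */
	for (i = 0; i < s->cnt; i++)
		if (s->ids[i] == id)
			return;
	if (s->cnt < ID_SET_SIZE)
		s->ids[s->cnt++] = id;
}

/* "popping" is just advancing the cursor; 0 means empty */
static unsigned int id_set_pop(struct id_set *s)
{
	return s->cur < s->cnt ? s->ids[s->cur++] : 0;
}
```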
> +static void idstack_push(struct bpf_idstack *idstack, u32 id)
> +{
> +	if (WARN_ON_ONCE(idstack->cnt >= BPF_ID_MAP_SIZE))
> +		return;
> +
> +	idstack->ids[idstack->cnt++] = id;
> +}
> +
> +static u32 idstack_pop(struct bpf_idstack *idstack)
> +{
> +	return idstack->cnt > 0 ? idstack->ids[--idstack->cnt] : 0;
> +}
> +
> +static int release_reg_check(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
> +			     int id, int root_id, struct bpf_idstack *idstack)
>  {
> +	struct bpf_reference_state *ref_state;
> +
> +	if (reg->id == id || reg->parent_id == id || reg->ref_obj_id == id) {
> +		/* Cannot indirectly release a referenced id */
> +		if (reg->ref_obj_id && id != root_id) {
> +			ref_state = find_reference_state(env->cur_state, reg->ref_obj_id);
> +			verbose(env, "Unreleased reference id=%d alloc_insn=%d when releasing id=%d\n",
> +				ref_state->id, ref_state->insn_idx, root_id);
> +			return -EINVAL;
> +		}
> +
> +		if (reg->id && reg->id != id)
> +			idstack_push(idstack, reg->id);
> +		return 1;
> +	}
> +
> +	return 0;
> +}
> +
> +static int release_reference(struct bpf_verifier_env *env, int id)
> +{
> +	struct bpf_idstack *idstack = &env->idstack_scratch;
>  	struct bpf_verifier_state *vstate = env->cur_state;
> +	int spi, fi, root_id = id, err = 0;
>  	struct bpf_func_state *state;
>  	struct bpf_reg_state *reg;
> -	int err;
>  
> -	err = release_reference_nomark(vstate, ref_obj_id);
> -	if (err)
> -		return err;
> +	idstack_reset(idstack);
> +	idstack_push(idstack, id);
>  
> -	bpf_for_each_reg_in_vstate(vstate, state, reg, ({
> -		if (reg->ref_obj_id == ref_obj_id)
> -			mark_reg_invalid(env, reg);
> -	}));
> +	if (find_reference_state(vstate, id))
> +		WARN_ON_ONCE(release_reference_nomark(vstate, id));
> +
> +	while ((id = idstack_pop(idstack))) {
> +		bpf_for_each_reg_in_vstate(vstate, state, reg, ({
> +			err = release_reg_check(env, reg, id, root_id, idstack);
> +			if (err < 0)
> +				return err;
> +			if (err == 1)
> +				mark_reg_invalid(env, reg);
> +		}));
> +
> +		for (fi = 0; fi <= vstate->curframe; fi++) {
> +			state = vstate->frame[fi];
> +			bpf_for_each_spilled_reg(spi, state, reg, (1 << STACK_DYNPTR)) {
> +				if (!reg || !reg->dynptr.first_slot)
> +					continue;
> +				err = release_reg_check(env, reg, id, root_id, idstack);
> +				if (err < 0)
> +					return err;
> +				if (err == 1)
> +					invalidate_dynptr(env, state, spi);
> +			}
> +		}
> +	}
>  
>  	return 0;
>  }
> @@ -11643,11 +11661,8 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
>  			 */
>  			err = 0;
>  		}
> -		if (err) {
> -			verbose(env, "func %s#%d reference has not been acquired before\n",
> -				func_id_name(func_id), func_id);
> +		if (err)
>  			return err;
> -		}
>  	}
>  
>  	switch (func_id) {
> @@ -11925,10 +11940,8 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
>  		regs[BPF_REG_0].ref_obj_id = id;
>  	}
>  
> -	if (func_id == BPF_FUNC_dynptr_data) {
> -		regs[BPF_REG_0].dynptr_id = meta.initialized_dynptr.id;
> -		regs[BPF_REG_0].ref_obj_id = meta.initialized_dynptr.ref_obj_id;
> -	}
> +	if (func_id == BPF_FUNC_dynptr_data)
> +		regs[BPF_REG_0].parent_id = meta.initialized_dynptr.id;
>  
>  	err = do_refine_retval_range(env, regs, fn->ret_type, func_id, &meta);
>  	if (err)
> @@ -13295,6 +13308,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
>  				return -EFAULT;
>  			}
>  			meta->ref_obj_id = reg->ref_obj_id;
> +			meta->id = reg->id;
>  			if (is_kfunc_release(meta))
>  				meta->release_regno = regno;
>  		}
> @@ -13429,7 +13443,6 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
>  		case KF_ARG_PTR_TO_DYNPTR:
>  		{
>  			enum bpf_arg_type dynptr_arg_type = ARG_PTR_TO_DYNPTR;
> -			int clone_ref_obj_id = 0;
>  
>  			if (is_kfunc_arg_const_ptr(btf, &args[i]))
>  				dynptr_arg_type |= MEM_RDONLY;
> @@ -13458,14 +13471,10 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
>  				}
>  
>  				dynptr_arg_type |= (unsigned int)get_dynptr_type_flag(parent_type);
> -				clone_ref_obj_id = meta->initialized_dynptr.ref_obj_id;
> -				if (dynptr_type_refcounted(parent_type) && !clone_ref_obj_id) {
> -					verifier_bug(env, "missing ref obj id for parent of clone");
> -					return -EFAULT;
> -				}
>  			}
>  
> -			ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id,
> +			ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type,
> +						  meta->ref_obj_id ? meta->id : 0,
>  						  &meta->initialized_dynptr);
>  			if (ret < 0)
>  				return ret;
> @@ -13913,12 +13922,7 @@ static int check_special_kfunc(struct bpf_verifier_env *env, struct bpf_kfunc_ca
>  			verifier_bug(env, "no dynptr id");
>  			return -EFAULT;
>  		}
> -		regs[BPF_REG_0].dynptr_id = meta->initialized_dynptr.id;
> -
> -		/* we don't need to set BPF_REG_0's ref obj id
> -		 * because packet slices are not refcounted (see
> -		 * dynptr_type_refcounted)
> -		 */
> +		regs[BPF_REG_0].parent_id = meta->initialized_dynptr.id;
>  	} else {
>  		return 0;
>  	}
> @@ -14113,9 +14117,6 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>  			err = unmark_stack_slots_dynptr(env, reg);
>  		} else {
>  			err = release_reference(env, reg->ref_obj_id);
> -			if (err)
> -				verbose(env, "kfunc %s#%d reference has not been acquired before\n",
> -					func_name, meta.func_id);
>  		}
>  		if (err)
>  			return err;
> @@ -14134,7 +14135,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>  			return err;
>  		}
>  
> -		err = release_reference(env, release_ref_obj_id);
> +		err = release_reference_nomark(env->cur_state, release_ref_obj_id);
>  		if (err) {
>  			verbose(env, "kfunc %s#%d reference has not been acquired before\n",
>  				func_name, meta.func_id);
> @@ -14225,7 +14226,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>  
>  			/* Ensures we don't access the memory after a release_reference() */
>  			if (meta.ref_obj_id)
> -				regs[BPF_REG_0].ref_obj_id = meta.ref_obj_id;
> +				regs[BPF_REG_0].parent_id = meta.ref_obj_id;
>  
>  			if (is_kfunc_rcu_protected(&meta))
>  				regs[BPF_REG_0].type |= MEM_RCU;
> @@ -19575,7 +19576,8 @@ static bool regs_exact(const struct bpf_reg_state *rold,
>  {
>  	return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
>  	       check_ids(rold->id, rcur->id, idmap) &&
> -	       check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap);
> +	       check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap) &&
> +	       check_ids(rold->parent_id, rcur->parent_id, idmap);
>  }
>  
>  enum exact_level {
> @@ -19697,7 +19699,8 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
>  		       range_within(rold, rcur) &&
>  		       tnum_in(rold->var_off, rcur->var_off) &&
>  		       check_ids(rold->id, rcur->id, idmap) &&
> -		       check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap);
> +		       check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap) &&
> +		       check_ids(rold->parent_id, rcur->parent_id, idmap);
>  	case PTR_TO_PACKET_META:
>  	case PTR_TO_PACKET:
>  		/* We must have at least as much range as the old ptr
> @@ -19852,7 +19855,8 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
>  			cur_reg = &cur->stack[spi].spilled_ptr;
>  			if (old_reg->dynptr.type != cur_reg->dynptr.type ||
>  			    old_reg->dynptr.first_slot != cur_reg->dynptr.first_slot ||
> -			    !check_ids(old_reg->ref_obj_id, cur_reg->ref_obj_id, idmap))
> +			    !check_ids(old_reg->ref_obj_id, cur_reg->ref_obj_id, idmap) ||
> +			    !check_ids(old_reg->parent_id, cur_reg->parent_id, idmap))
>  				return false;
>  			break;
>  		case STACK_ITER:
> -- 
> 2.47.3

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH bpf-next v2 01/11] bpf: Set kfunc dynptr arg type flag based on prototype
  2026-03-12 20:09               ` Andrii Nakryiko
@ 2026-03-13  3:25                 ` Alexei Starovoitov
  0 siblings, 0 replies; 46+ messages in thread
From: Alexei Starovoitov @ 2026-03-13  3:25 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Amery Hung, bpf, Network Development, Andrii Nakryiko,
	Daniel Borkmann, Kumar Kartikeya Dwivedi, Martin KaFai Lau,
	Kernel Team

On Thu, Mar 12, 2026 at 1:10 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> > > I guess I'd suggest
> > >
> > > struct bpf_dyntpr *const dptr
> > >
> > > to mark this through type system? that is, *constant pointer* to
> > > dynptr, not a pointer to *constant dynptr*
> > >
> > > WDYT?
> >
> > I think they both make sense from different angles, but I am not sure
> > which one is better.
> >
> > From purely C's point of view. the pointer points to a struct dynptr
> > that should not be mutated so "const struct dynptr *p" is correct.
> >
> > From the BPF dynptr abstraction's point of view, the pointer points to
> > the underlying memory (e.gg., skb, file, ringbuf, etc.), so "struct
> > dynptr * const p".
> >
> > Any way we choose, I'd suggest that to be a separate patch. This patch
> > at least makes things consistent and fixes a logical bug.
>
> if we could rely on a decl tag, I'd go with that. But *const is unusual
> and stands out, so I'd go with *const. I wonder if anyone else has any
> thoughts.

I think C is inconsistent in how it propagates constness.

struct foo {
  struct bar b;
  struct bar *bp;
};

const struct foo *p;

all fields of p->b are read only, but p->bp are not.

So both:
const struct bpf_dyntpr *dptr
struct bpf_dyntpr *const dptr

cannot really express that a memory that dynptr observes is
read only.
So, imo,  btf tag is the only option, because we can define
what it means precisely.
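To make the inconsistency concrete (standalone sketch; field names are
made up for illustration):

```c
#include <assert.h>

struct bar { int x; };

struct foo {
	struct bar b;	/* embedded member: const propagates to b.x */
	struct bar *bp;	/* pointed-to object: const stops at the pointer */
};

/* Through a pointer-to-const foo, the embedded member is read-only,
 * but the memory behind f->bp is still writable.
 */
static void poke(const struct foo *f, int v)
{
	/* f->b.x = v;   <-- compile error: f->b is const */
	f->bp->x = v;	/* legal */
}
```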

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH bpf-next v2 06/11] bpf: Refactor object relationship tracking and fix dynptr UAF bug
  2026-03-11 22:32   ` Andrii Nakryiko
@ 2026-03-13 20:32     ` Amery Hung
  0 siblings, 0 replies; 46+ messages in thread
From: Amery Hung @ 2026-03-13 20:32 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, netdev, alexei.starovoitov, andrii, daniel, memxor,
	martin.lau, kernel-team

On Wed, Mar 11, 2026 at 3:32 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Fri, Mar 6, 2026 at 10:44 PM Amery Hung <ameryhung@gmail.com> wrote:
> >
> > Refactor object relationship tracking in the verifier by removing
> > dynptr_id and using parent_id to track the parent object. Then, track
> > the referenced parent object for the dynptr when calling a dynptr
> > constructor. This fixes a use-after-free bug. For a dynptr that has a
> > referenced parent object (an skb dynptr in a BPF qdisc or a file
> > dynptr), the dynptr and derived slices need to be invalidated when the
> > parent object is released.
> >
> > First, add parent_id to bpf_reg_state to precisely track objects'
> > child-parent relationships. A child object will use parent_id to
> > track the parent object's id. This replaces the dynptr-slice-specific
> > dynptr_id.
> >
> > Then, when calling dynptr constructors (i.e., process_dynptr_func() with
> > a MEM_UNINIT argument), track the parent's id if the parent is a
> > referenced object. This only applies to file dynptr and skb dynptr, so
> > only pass the parent reg->id to kfunc constructors.
> >
> > For release_reference(), this means that when invalidating an object,
> > it also needs to invalidate all dependent objects by traversing the
> > subtree. This is done with a stack-based DFS to avoid the recursive
> > call chain release_reference() -> unmark_stack_slots_dynptr() ->
> > release_reference(). Note that referenced objects cannot be released
> > while traversing the tree if their id is not the one initially passed
> > to release_reference(), as they would actually require a helper call to
> > release the acquired resources.
> >
> > While the new design changes how object relationships are tracked in
> > the verifier, it does NOT change the verifier's behavior. Here are the
> > implications of the new design for dynptr, pointer casting and
> > owning/non-owning references.
> >
> > Dynptr:
> >
> > When initializing a dynptr, a referenced dynptr will acquire a
> > reference for ref_obj_id. If the dynptr has a referenced parent,
> > parent_id will be used to track its id. When cloning a dynptr, the
> > ref_obj_id and parent_id of the clone are copied directly from the
> > original dynptr. This means that when releasing a referenced dynptr,
> > release_reference(ref_obj_id) will release all clones, the original,
> > and the derived slices. For a non-referenced dynptr, only the specific
> > dynptr being released and its child slices will be invalidated.
> >
> > Pointer casting:
> >
> > A referenced socket pointer and the casted pointers should share the
> > same lifetime, while having different nullness. Therefore, they will
> > have different ids but the same ref_obj_id.
> >
> > When converting owning references to non-owning:
> >
> > After converting a reference from owning to non-owning by clearing the
> > object's ref_obj_id (e.g., object(id=1, ref_obj_id=1) -> object(id=1,
> > ref_obj_id=0)), the verifier only needs to release the reference state
> > instead of invalidating the registers that carry the id, so call
> > release_reference_nomark() instead of release_reference().
> >
> > CC: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> > Fixes: 870c28588afa ("bpf: net_sched: Add basic bpf qdisc kfuncs")
> > Signed-off-by: Amery Hung <ameryhung@gmail.com>
> > ---
> >  include/linux/bpf_verifier.h |  14 +-
> >  kernel/bpf/log.c             |   4 +-
> >  kernel/bpf/verifier.c        | 274 ++++++++++++++++++-----------------
> >  3 files changed, 154 insertions(+), 138 deletions(-)
> >
>
> [...]
>
>
> > -       ref_obj_id = state->stack[spi].spilled_ptr.ref_obj_id;
> > -
> > -       /* If the dynptr has a ref_obj_id, then we need to invalidate
> > -        * two things:
> > -        *
> > -        * 1) Any dynptrs with a matching ref_obj_id (clones)
> > -        * 2) Any slices derived from this dynptr.
> > +       /*
> > +        * For referenced dynptr, the clones share the same ref_obj_id and will be
> > +        * invalidated too. For non-referenced dynptr, only the dynptr and slices
> > +        * derived from it will be invalidated.
> >          */
>
> this is confusing to me. Why should the nature of the dynptr change
> anything about the scope of invalidation? This should be controlled
> from outside. E.g., if someone invalidates a clone by overwriting it on
> the stack, we shouldn't just go and invalidate all the other clones. We
> should just invalidate that particular clone (regardless of whether it's
> a clone of a file dynptr or just some mem dynptr).

This is a nuance specific to dynptr.

When invalidating a referenced dynptr clone, all clones and the
original must be invalidated. There is no option to invalidate just
one. The reason is that bpf_dynptr_clone() just copies the
bpf_dynptr_kern without calling the dynptr ctor to set up what is
necessary to make a dynptr valid. (Also worth mentioning: a referenced
dynptr cannot be destroyed by overwriting its stack slot.)

This is not the case for an unreferenced dynptr. The entire state of
the dynptr is saved in bpf_dynptr_kern, so its clones can live
independently.

To summarize, a referenced dynptr can only be released via a helper,
and all clones must be invalidated together. A non-referenced dynptr
does not need to be released, and invalidating one by overwriting its
stack slot does not affect other clones.

>
> But if someone is calling bpf_dynptr_file_discard() on one of the
> clones, then yes, all the clones need to be invalidated. But that
> should be handled as more generic "this file lifetime is ending", no?
>
> Maybe I'm missing something, but it feels wrong to make decisions like
> this inside a low-level (and thus intentionally dumb)
> unmark_stack_slots_dynptr() helper.
>
> > -
> > -       /* Invalidate any slices associated with this dynptr */
> > -       WARN_ON_ONCE(release_reference(env, ref_obj_id));
> > -
> > -       /* Invalidate any dynptr clones */
> > -       for (i = 1; i < state->allocated_stack / BPF_REG_SIZE; i++) {
> > -               if (state->stack[i].spilled_ptr.ref_obj_id != ref_obj_id)
> > -                       continue;
>
> [...]
>
> > +static u32 idstack_pop(struct bpf_idstack *idstack)
> > +{
> > +       return idstack->cnt > 0 ? idstack->ids[--idstack->cnt] : 0;
> > +}
> > +
> > +static int release_reg_check(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
> > +                            int id, int root_id, struct bpf_idstack *idstack)
>
> tbh, I feel like release_reg_check is doing too much, it both enqueues
> children and checks unreleased references. And this <0, 0, and 1 as
> return values (where 1 is completely unobvious) is an indicator of
> that. I think the id/parent_id/ref_obj_id check can be done inline in
> release_reference() just fine (yes, in two places, no big deal, but if
> so, make it a small helper) and then you'll have a more obvious logic
> breakdown into a) check if the reg should be enqueued, b) if so, check
> for a ref leak, and c) enqueue the new id

Makes sense to break it down. I also found it hard to name this
function, and maybe that already indicates that it is trying to do
too much.

>
> >  {
> > +       struct bpf_reference_state *ref_state;
> > +
> > +       if (reg->id == id || reg->parent_id == id || reg->ref_obj_id == id) {
> > +               /* Cannot indirectly release a referenced id */
> > +               if (reg->ref_obj_id && id != root_id) {
> > +                       ref_state = find_reference_state(env->cur_state, reg->ref_obj_id);
> > +                       verbose(env, "Unreleased reference id=%d alloc_insn=%d when releasing id=%d\n",
> > +                               ref_state->id, ref_state->insn_idx, root_id);
> > +                       return -EINVAL;
> > +               }
> > +
> > +               if (reg->id && reg->id != id)
> > +                       idstack_push(idstack, reg->id);
>
> can't you push the same id multiple times into the stack this way? Your
> idstack is actually a set, no? So idmap serves you better (just map the
> id to 1 for "to be checked")? And then you don't need to introduce a new
> idstack_scratch data structure.

Ack.

>
> > +               return 1;
> > +       }
> > +
> > +       return 0;
> > +}
> > +
>
> [...]

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH bpf-next v2 06/11] bpf: Refactor object relationship tracking and fix dynptr UAF bug
  2026-03-12 23:33   ` Mykyta Yatsenko
@ 2026-03-13 20:33     ` Amery Hung
  0 siblings, 0 replies; 46+ messages in thread
From: Amery Hung @ 2026-03-13 20:33 UTC (permalink / raw)
  To: Mykyta Yatsenko
  Cc: bpf, netdev, alexei.starovoitov, andrii, daniel, memxor,
	martin.lau, kernel-team

On Thu, Mar 12, 2026 at 4:33 PM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> Amery Hung <ameryhung@gmail.com> writes:
>
> > Refactor object relationship tracking in the verifier by removing
> > dynptr_id and using parent_id to track the parent object. Then, track
> > the referenced parent object for the dynptr when calling a dynptr
> > constructor. This fixes a use-after-free bug. For a dynptr that has a
> > referenced parent object (an skb dynptr in a BPF qdisc or a file
> > dynptr), the dynptr and derived slices need to be invalidated when the
> > parent object is released.
> >
> > First, add parent_id to bpf_reg_state to precisely track objects'
> > child-parent relationships. A child object will use parent_id to
> > track the parent object's id. This replaces the dynptr-slice-specific
> > dynptr_id.
> >
> > Then, when calling dynptr constructors (i.e., process_dynptr_func() with
> > a MEM_UNINIT argument), track the parent's id if the parent is a
> > referenced object. This only applies to file dynptr and skb dynptr, so
> > only pass the parent reg->id to kfunc constructors.
> >
> > For release_reference(), this means that when invalidating an object,
> > it also needs to invalidate all dependent objects by traversing the
> > subtree. This is done with a stack-based DFS to avoid the recursive
> > call chain release_reference() -> unmark_stack_slots_dynptr() ->
> > release_reference(). Note that referenced objects cannot be released
> > while traversing the tree if their id is not the one initially passed
> > to release_reference(), as they would actually require a helper call to
> > release the acquired resources.
> >
> > While the new design changes how object relationships are tracked in
> > the verifier, it does NOT change the verifier's behavior. Here are the
> > implications of the new design for dynptr, pointer casting and
> > owning/non-owning references.
> >
> > Dynptr:
> >
> > When initializing a dynptr, a referenced dynptr will acquire a
> > reference for ref_obj_id. If the dynptr has a referenced parent,
> > parent_id will be used to track its id. When cloning a dynptr, the
> > ref_obj_id and parent_id of the clone are copied directly from the
> > original dynptr. This means that when releasing a referenced dynptr,
> > release_reference(ref_obj_id) will release all clones, the original,
> > and the derived slices. For a non-referenced dynptr, only the specific
> > dynptr being released and its child slices will be invalidated.
> >
> > Pointer casting:
> >
> > A referenced socket pointer and the casted pointers should share the
> > same lifetime, while having different nullness. Therefore, they will
> > have different ids but the same ref_obj_id.
> >
> > When converting owning references to non-owning:
> >
> > After converting a reference from owning to non-owning by clearing the
> > object's ref_obj_id (e.g., object(id=1, ref_obj_id=1) -> object(id=1,
> > ref_obj_id=0)), the verifier only needs to release the reference state
> > instead of invalidating the registers that carry the id, so call
> > release_reference_nomark() instead of release_reference().
> >
> > CC: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> > Fixes: 870c28588afa ("bpf: net_sched: Add basic bpf qdisc kfuncs")
> > Signed-off-by: Amery Hung <ameryhung@gmail.com>
> > ---
> >  include/linux/bpf_verifier.h |  14 +-
> >  kernel/bpf/log.c             |   4 +-
> >  kernel/bpf/verifier.c        | 274 ++++++++++++++++++-----------------
> >  3 files changed, 154 insertions(+), 138 deletions(-)
> >
> > diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> > index c1e30096ea7b..e987a48f511a 100644
> > --- a/include/linux/bpf_verifier.h
> > +++ b/include/linux/bpf_verifier.h
> > @@ -65,7 +65,6 @@ struct bpf_reg_state {
> >
> >               struct { /* for PTR_TO_MEM | PTR_TO_MEM_OR_NULL */
> >                       u32 mem_size;
> > -                     u32 dynptr_id; /* for dynptr slices */
> >               };
> >
> >               /* For dynptr stack slots */
> > @@ -193,6 +192,13 @@ struct bpf_reg_state {
> >        * allowed and has the same effect as bpf_sk_release(sk).
> >        */
> >       u32 ref_obj_id;
> > +     /* Tracks the parent object this register was derived from.
> > +      * Used for cascading invalidation: when the parent object is
> > +      * released or invalidated, all registers with matching parent_id
> > +      * are also invalidated. For example, a slice from bpf_dynptr_data()
> > +      * gets parent_id set to the dynptr's id.
> > +      */
> > +     u32 parent_id;
> >       /* Inside the callee two registers can be both PTR_TO_STACK like
> >        * R1=fp-8 and R2=fp-8, but one of them points to this function stack
> >        * while another to the caller's stack. To differentiate them 'frameno'
> > @@ -707,6 +713,11 @@ struct bpf_idset {
> >       } entries[BPF_ID_MAP_SIZE];
> >  };
> >
> > +struct bpf_idstack {
> > +     int cnt;
> > +     u32 ids[BPF_ID_MAP_SIZE];
> > +};
> > +
> >  /* see verifier.c:compute_scc_callchain() */
> >  struct bpf_scc_callchain {
> >       /* call sites from bpf_verifier_state->frame[*]->callsite leading to this SCC */
> > @@ -789,6 +800,7 @@ struct bpf_verifier_env {
> >       union {
> >               struct bpf_idmap idmap_scratch;
> >               struct bpf_idset idset_scratch;
> > +             struct bpf_idstack idstack_scratch;
> >       };
> >       struct {
> >               int *insn_state;
> > diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c
> > index 37d72b052192..cb4129b8b2a1 100644
> > --- a/kernel/bpf/log.c
> > +++ b/kernel/bpf/log.c
> > @@ -707,6 +707,8 @@ static void print_reg_state(struct bpf_verifier_env *env,
> >               verbose(env, "%+d", reg->delta);
> >       if (reg->ref_obj_id)
> >               verbose_a("ref_obj_id=%d", reg->ref_obj_id);
> > +     if (reg->parent_id)
> > +             verbose_a("parent_id=%d", reg->parent_id);
> >       if (type_is_non_owning_ref(reg->type))
> >               verbose_a("%s", "non_own_ref");
> >       if (type_is_map_ptr(t)) {
> > @@ -810,8 +812,6 @@ void print_verifier_state(struct bpf_verifier_env *env, const struct bpf_verifie
> >                               verbose_a("id=%d", reg->id);
> >                       if (reg->ref_obj_id)
> >                               verbose_a("ref_id=%d", reg->ref_obj_id);
> > -                     if (reg->dynptr_id)
> > -                             verbose_a("dynptr_id=%d", reg->dynptr_id);
> >                       verbose(env, ")");
> >                       break;
> >               case STACK_ITER:
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 8f9e28901bc4..0436fc4d9107 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -204,7 +204,7 @@ struct bpf_verifier_stack_elem {
> >
> >  static int acquire_reference(struct bpf_verifier_env *env, int insn_idx);
> >  static int release_reference_nomark(struct bpf_verifier_state *state, int ref_obj_id);
> > -static int release_reference(struct bpf_verifier_env *env, int ref_obj_id);
> > +static int release_reference(struct bpf_verifier_env *env, int id);
> >  static void invalidate_non_owning_refs(struct bpf_verifier_env *env);
> >  static bool in_rbtree_lock_required_cb(struct bpf_verifier_env *env);
> >  static int ref_set_non_owning(struct bpf_verifier_env *env,
> > @@ -281,6 +281,7 @@ struct bpf_dynptr_desc {
> >       enum bpf_dynptr_type type;
> >       u32 id;
> >       u32 ref_obj_id;
> > +     u32 parent_id;
> >  };
> >
> >  struct bpf_call_arg_meta {
> > @@ -294,6 +295,7 @@ struct bpf_call_arg_meta {
> >       int mem_size;
> >       u64 msize_max_value;
> >       int ref_obj_id;
> > +     u32 id;
> >       int func_id;
> >       struct btf *btf;
> >       u32 btf_id;
> > @@ -321,6 +323,7 @@ struct bpf_kfunc_call_arg_meta {
> >       const char *func_name;
> >       /* Out parameters */
> >       u32 ref_obj_id;
> > +     u32 id;
> >       u8 release_regno;
> >       bool r0_rdonly;
> >       u32 ret_btf_id;
> > @@ -721,14 +724,14 @@ static enum bpf_type_flag get_dynptr_type_flag(enum bpf_dynptr_type type)
> >       }
> >  }
> >
> > -static bool dynptr_type_refcounted(enum bpf_dynptr_type type)
> > +static bool dynptr_type_referenced(enum bpf_dynptr_type type)
> >  {
> >       return type == BPF_DYNPTR_TYPE_RINGBUF || type == BPF_DYNPTR_TYPE_FILE;
> >  }
> >
> >  static void __mark_dynptr_reg(struct bpf_reg_state *reg,
> >                             enum bpf_dynptr_type type,
> > -                           bool first_slot, int dynptr_id);
> > +                           bool first_slot, int id);
> >
> >  static void __mark_reg_not_init(const struct bpf_verifier_env *env,
> >                               struct bpf_reg_state *reg);
> > @@ -755,11 +758,12 @@ static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
> >                                       struct bpf_func_state *state, int spi);
> >
> >  static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
> > -                                enum bpf_arg_type arg_type, int insn_idx, int clone_ref_obj_id)
> > +                                enum bpf_arg_type arg_type, int insn_idx, int parent_id,
> > +                                struct bpf_dynptr_desc *initialized_dynptr)
> >  {
> >       struct bpf_func_state *state = func(env, reg);
> > +     int spi, i, err, ref_obj_id = 0;
> >       enum bpf_dynptr_type type;
> > -     int spi, i, err;
> >
> >       spi = dynptr_get_spi(env, reg);
> >       if (spi < 0)
> > @@ -793,22 +797,28 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
> >       mark_dynptr_stack_regs(env, &state->stack[spi].spilled_ptr,
> >                              &state->stack[spi - 1].spilled_ptr, type);
> >
> > -     if (dynptr_type_refcounted(type)) {
> > -             /* The id is used to track proper releasing */
> > -             int id;
> > -
> > -             if (clone_ref_obj_id)
> > -                     id = clone_ref_obj_id;
> > -             else
> > -                     id = acquire_reference(env, insn_idx);
> > -
> > -             if (id < 0)
> > -                     return id;
> > -
> > -             state->stack[spi].spilled_ptr.ref_obj_id = id;
> > -             state->stack[spi - 1].spilled_ptr.ref_obj_id = id;
> > +     if (initialized_dynptr->type == BPF_DYNPTR_TYPE_INVALID) {
> > +             if (dynptr_type_referenced(type)) {
> > +                     ref_obj_id = acquire_reference(env, insn_idx);
> > +                     if (ref_obj_id < 0)
> > +                             return ref_obj_id;
> > +             }
> > +     } else {
> > +             /*
> > +              * Referenced dynptr clones have the same lifetime as the original dynptr
> > +              * since bpf_dynptr_clone() does not initialize the clones like the
> > +              * constructor does. If any of the dynptrs is invalidated, the rest will
> > +              * also need to be invalidated. Thus, they all share the same non-zero ref_obj_id.
> > +              */
> > +             ref_obj_id = initialized_dynptr->ref_obj_id;
> > +             parent_id = initialized_dynptr->parent_id;
> >       }
> >
> > +     state->stack[spi].spilled_ptr.ref_obj_id = ref_obj_id;
> > +     state->stack[spi - 1].spilled_ptr.ref_obj_id = ref_obj_id;
> > +     state->stack[spi].spilled_ptr.parent_id = parent_id;
> > +     state->stack[spi - 1].spilled_ptr.parent_id = parent_id;
> > +
> >       bpf_mark_stack_write(env, state->frameno, BIT(spi - 1) | BIT(spi));
> >
> >       return 0;
> > @@ -832,7 +842,7 @@ static void invalidate_dynptr(struct bpf_verifier_env *env, struct bpf_func_stat
> >  static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> >  {
> >       struct bpf_func_state *state = func(env, reg);
> > -     int spi, ref_obj_id, i;
> > +     int spi;
> >
> >       /*
> >        * This can only be set for PTR_TO_STACK, as CONST_PTR_TO_DYNPTR cannot
> > @@ -843,45 +853,19 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
> >               verifier_bug(env, "CONST_PTR_TO_DYNPTR cannot be released");
> >               return -EFAULT;
> >       }
> > +
> >       spi = dynptr_get_spi(env, reg);
> >       if (spi < 0)
> >               return spi;
> >
> > -     if (!dynptr_type_refcounted(state->stack[spi].spilled_ptr.dynptr.type)) {
> > -             invalidate_dynptr(env, state, spi);
> > -             return 0;
> > -     }
> > -
> > -     ref_obj_id = state->stack[spi].spilled_ptr.ref_obj_id;
> > -
> > -     /* If the dynptr has a ref_obj_id, then we need to invalidate
> > -      * two things:
> > -      *
> > -      * 1) Any dynptrs with a matching ref_obj_id (clones)
> > -      * 2) Any slices derived from this dynptr.
> > +     /*
> > +      * For referenced dynptr, the clones share the same ref_obj_id and will be
> > +      * invalidated too. For non-referenced dynptr, only the dynptr and slices
> > +      * derived from it will be invalidated.
> >        */
> > -
> > -     /* Invalidate any slices associated with this dynptr */
> > -     WARN_ON_ONCE(release_reference(env, ref_obj_id));
> > -
> > -     /* Invalidate any dynptr clones */
> > -     for (i = 1; i < state->allocated_stack / BPF_REG_SIZE; i++) {
> > -             if (state->stack[i].spilled_ptr.ref_obj_id != ref_obj_id)
> > -                     continue;
> > -
> > -             /* it should always be the case that if the ref obj id
> > -              * matches then the stack slot also belongs to a
> > -              * dynptr
> > -              */
> > -             if (state->stack[i].slot_type[0] != STACK_DYNPTR) {
> > -                     verifier_bug(env, "misconfigured ref_obj_id");
> > -                     return -EFAULT;
> > -             }
> > -             if (state->stack[i].spilled_ptr.dynptr.first_slot)
> > -                     invalidate_dynptr(env, state, i);
> > -     }
> > -
> > -     return 0;
> > +     reg = &state->stack[spi].spilled_ptr;
> > +     return release_reference(env, dynptr_type_referenced(reg->dynptr.type) ?
> > +                                   reg->ref_obj_id : reg->id);
> >  }
> >
> >  static void __mark_reg_unknown(const struct bpf_verifier_env *env,
> > @@ -898,10 +882,6 @@ static void mark_reg_invalid(const struct bpf_verifier_env *env, struct bpf_reg_
> >  static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
> >                                       struct bpf_func_state *state, int spi)
> >  {
> > -     struct bpf_func_state *fstate;
> > -     struct bpf_reg_state *dreg;
> > -     int i, dynptr_id;
> > -
> >       /* We always ensure that STACK_DYNPTR is never set partially,
> >        * hence just checking for slot_type[0] is enough. This is
> >        * different for STACK_SPILL, where it may be only set for
> > @@ -914,7 +894,7 @@ static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
> >       if (!state->stack[spi].spilled_ptr.dynptr.first_slot)
> >               spi = spi + 1;
> >
> > -     if (dynptr_type_refcounted(state->stack[spi].spilled_ptr.dynptr.type)) {
> > +     if (dynptr_type_referenced(state->stack[spi].spilled_ptr.dynptr.type)) {
> >               verbose(env, "cannot overwrite referenced dynptr\n");
> >               return -EINVAL;
> >       }
> > @@ -922,31 +902,8 @@ static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
> >       mark_stack_slot_scratched(env, spi);
> >       mark_stack_slot_scratched(env, spi - 1);
> >
> > -     /* Writing partially to one dynptr stack slot destroys both. */
> > -     for (i = 0; i < BPF_REG_SIZE; i++) {
> > -             state->stack[spi].slot_type[i] = STACK_INVALID;
> > -             state->stack[spi - 1].slot_type[i] = STACK_INVALID;
> > -     }
> > -
> > -     dynptr_id = state->stack[spi].spilled_ptr.id;
> > -     /* Invalidate any slices associated with this dynptr */
> > -     bpf_for_each_reg_in_vstate(env->cur_state, fstate, dreg, ({
> > -             /* Dynptr slices are only PTR_TO_MEM_OR_NULL and PTR_TO_MEM */
> > -             if (dreg->type != (PTR_TO_MEM | PTR_MAYBE_NULL) && dreg->type != PTR_TO_MEM)
> > -                     continue;
> > -             if (dreg->dynptr_id == dynptr_id)
> > -                     mark_reg_invalid(env, dreg);
> > -     }));
> > -
> > -     /* Do not release reference state, we are destroying dynptr on stack,
> > -      * not using some helper to release it. Just reset register.
> > -      */
> > -     __mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
> > -     __mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
> > -
> > -     bpf_mark_stack_write(env, state->frameno, BIT(spi - 1) | BIT(spi));
> > -
> > -     return 0;
> > +     /* Invalidate the dynptr and any derived slices */
> > +     return release_reference(env, state->stack[spi].spilled_ptr.id);
> >  }
> >
> >  static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> > @@ -1583,15 +1540,15 @@ static void release_reference_state(struct bpf_verifier_state *state, int idx)
> >       return;
> >  }
> >
> > -static bool find_reference_state(struct bpf_verifier_state *state, int ptr_id)
> > +static struct bpf_reference_state *find_reference_state(struct bpf_verifier_state *state, int ptr_id)
> >  {
> >       int i;
> >
> >       for (i = 0; i < state->acquired_refs; i++)
> >               if (state->refs[i].id == ptr_id)
> > -                     return true;
> > +                     return &state->refs[i];
> >
> > -     return false;
> > +     return NULL;
> >  }
> >
> >  static int release_lock_state(struct bpf_verifier_state *state, int type, int id, void *ptr)
> > @@ -2186,6 +2143,7 @@ static void __mark_reg_known(struct bpf_reg_state *reg, u64 imm)
> >              offsetof(struct bpf_reg_state, var_off) - sizeof(reg->type));
> >       reg->id = 0;
> >       reg->ref_obj_id = 0;
> > +     reg->parent_id = 0;
> >       ___mark_reg_known(reg, imm);
> >  }
> >
> > @@ -2230,7 +2188,7 @@ static void mark_reg_known_zero(struct bpf_verifier_env *env,
> >  }
> >
> >  static void __mark_dynptr_reg(struct bpf_reg_state *reg, enum bpf_dynptr_type type,
> > -                           bool first_slot, int dynptr_id)
> > +                           bool first_slot, int id)
> >  {
> >       /* reg->type has no meaning for STACK_DYNPTR, but when we set reg for
> >        * callback arguments, it does need to be CONST_PTR_TO_DYNPTR, so simply
> > @@ -2239,7 +2197,7 @@ static void __mark_dynptr_reg(struct bpf_reg_state *reg, enum bpf_dynptr_type ty
> >       __mark_reg_known_zero(reg);
> >       reg->type = CONST_PTR_TO_DYNPTR;
> >       /* Give each dynptr a unique id to uniquely associate slices to it. */
> > -     reg->id = dynptr_id;
> > +     reg->id = id;
> >       reg->dynptr.type = type;
> >       reg->dynptr.first_slot = first_slot;
> >  }
> > @@ -2801,6 +2759,7 @@ static void __mark_reg_unknown_imprecise(struct bpf_reg_state *reg)
> >       reg->type = SCALAR_VALUE;
> >       reg->id = 0;
> >       reg->ref_obj_id = 0;
> > +     reg->parent_id = 0;
> >       reg->var_off = tnum_unknown;
> >       reg->frameno = 0;
> >       reg->precise = false;
> > @@ -8746,7 +8705,7 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
> >   * type, and declare it as 'const struct bpf_dynptr *' in their prototype.
> >   */
> >  static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn_idx,
> > -                            enum bpf_arg_type arg_type, int clone_ref_obj_id,
> > +                            enum bpf_arg_type arg_type, int parent_id,
> >                              struct bpf_dynptr_desc *initialized_dynptr)
> >  {
> >       struct bpf_reg_state *reg = reg_state(env, regno);
> > @@ -8798,7 +8757,8 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
> >                               return err;
> >               }
> >
> > -             err = mark_stack_slots_dynptr(env, reg, arg_type, insn_idx, clone_ref_obj_id);
> > +             err = mark_stack_slots_dynptr(env, reg, arg_type, insn_idx, parent_id,
> > +                                           initialized_dynptr);
> >       } else /* MEM_RDONLY and None case from above */ {
> >               /* For the reg->type == PTR_TO_STACK case, bpf_dynptr is never const */
> >               if (reg->type == CONST_PTR_TO_DYNPTR && !(arg_type & MEM_RDONLY)) {
> > @@ -8835,6 +8795,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
> >                       initialized_dynptr->id = reg->id;
> >                       initialized_dynptr->type = reg->dynptr.type;
> >                       initialized_dynptr->ref_obj_id = reg->ref_obj_id;
> > +                     initialized_dynptr->parent_id = reg->parent_id;
> >               }
> >       }
> >       return err;
> > @@ -9787,7 +9748,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
> >                        */
> >                       if (reg->type == PTR_TO_STACK) {
> >                               spi = dynptr_get_spi(env, reg);
> > -                             if (spi < 0 || !state->stack[spi].spilled_ptr.ref_obj_id) {
> > +                             if (spi < 0 || !state->stack[spi].spilled_ptr.id) {
> >                                       verbose(env, "arg %d is an unacquired reference\n", regno);
> >                                       return -EINVAL;
> >                               }
> > @@ -9815,6 +9776,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
> >                       return -EACCES;
> >               }
> >               meta->ref_obj_id = reg->ref_obj_id;
> > +             meta->id = reg->id;
> >       }
> >
> >       switch (base_type(arg_type)) {
> > @@ -10438,26 +10400,82 @@ static int release_reference_nomark(struct bpf_verifier_state *state, int ref_ob
> >       return -EINVAL;
> >  }
> >
> > -/* The pointer with the specified id has released its reference to kernel
> > - * resources. Identify all copies of the same pointer and clear the reference.
> > - *
> > - * This is the release function corresponding to acquire_reference(). Idempotent.
> > - */
> > -static int release_reference(struct bpf_verifier_env *env, int ref_obj_id)
> > +static void idstack_reset(struct bpf_idstack *idstack)
> > +{
> > +     idstack->cnt = 0;
> > +}
> > +
> I agree with Andrii; maybe the new bpf_idstack is not really worth
> adding. Since the total number of reg_states is bounded, the idstack
> could be replaced with a simpler approach using bpf_idset: store
> discovered IDs in a flat array and keep a cursor to the first
> unprocessed entry. IDs before the cursor are already visited, IDs at
> and after it are pending. "Popping" becomes just advancing the cursor,
> and deduplication comes naturally by searching the full array (both
> visited and pending) before inserting a new ID.
>
> This avoids the possibility of pushing the same child ID multiple times.

Agree. I will use id_set for this purpose and not introduce id_stack.
Thanks for the suggestion.

> > +static void idstack_push(struct bpf_idstack *idstack, u32 id)
> > +{
> > +     if (WARN_ON_ONCE(idstack->cnt >= BPF_ID_MAP_SIZE))
> > +             return;
> > +
> > +     idstack->ids[idstack->cnt++] = id;
> > +}
> > +
> > +static u32 idstack_pop(struct bpf_idstack *idstack)
> > +{
> > +     return idstack->cnt > 0 ? idstack->ids[--idstack->cnt] : 0;
> > +}
> > +
> > +static int release_reg_check(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
> > +                          int id, int root_id, struct bpf_idstack *idstack)
> >  {
> > +     struct bpf_reference_state *ref_state;
> > +
> > +     if (reg->id == id || reg->parent_id == id || reg->ref_obj_id == id) {
> > +             /* Cannot indirectly release a referenced id */
> > +             if (reg->ref_obj_id && id != root_id) {
> > +                     ref_state = find_reference_state(env->cur_state, reg->ref_obj_id);
> > +                     verbose(env, "Unreleased reference id=%d alloc_insn=%d when releasing id=%d\n",
> > +                             ref_state->id, ref_state->insn_idx, root_id);
> > +                     return -EINVAL;
> > +             }
> > +
> > +             if (reg->id && reg->id != id)
> > +                     idstack_push(idstack, reg->id);
> > +             return 1;
> > +     }
> > +
> > +     return 0;
> > +}
> > +
> > +static int release_reference(struct bpf_verifier_env *env, int id)
> > +{
> > +     struct bpf_idstack *idstack = &env->idstack_scratch;
> >       struct bpf_verifier_state *vstate = env->cur_state;
> > +     int spi, fi, root_id = id, err = 0;
> >       struct bpf_func_state *state;
> >       struct bpf_reg_state *reg;
> > -     int err;
> >
> > -     err = release_reference_nomark(vstate, ref_obj_id);
> > -     if (err)
> > -             return err;
> > +     idstack_reset(idstack);
> > +     idstack_push(idstack, id);
> >
> > -     bpf_for_each_reg_in_vstate(vstate, state, reg, ({
> > -             if (reg->ref_obj_id == ref_obj_id)
> > -                     mark_reg_invalid(env, reg);
> > -     }));
> > +     if (find_reference_state(vstate, id))
> > +             WARN_ON_ONCE(release_reference_nomark(vstate, id));
> > +
> > +     while ((id = idstack_pop(idstack))) {
> > +             bpf_for_each_reg_in_vstate(vstate, state, reg, ({
> > +                     err = release_reg_check(env, reg, id, root_id, idstack);
> > +                     if (err < 0)
> > +                             return err;
> > +                     if (err == 1)
> > +                             mark_reg_invalid(env, reg);
> > +             }));
> > +
> > +             for (fi = 0; fi <= vstate->curframe; fi++) {
> > +                     state = vstate->frame[fi];
> > +                     bpf_for_each_spilled_reg(spi, state, reg, (1 << STACK_DYNPTR)) {
> > +                             if (!reg || !reg->dynptr.first_slot)
> > +                                     continue;
> > +                             err = release_reg_check(env, reg, id, root_id, idstack);
> > +                             if (err < 0)
> > +                                     return err;
> > +                             if (err == 1)
> > +                                     invalidate_dynptr(env, state, spi);
> > +                     }
> > +             }
> > +     }
> >
> >       return 0;
> >  }
> > @@ -11643,11 +11661,8 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
> >                        */
> >                       err = 0;
> >               }
> > -             if (err) {
> > -                     verbose(env, "func %s#%d reference has not been acquired before\n",
> > -                             func_id_name(func_id), func_id);
> > +             if (err)
> >                       return err;
> > -             }
> >       }
> >
> >       switch (func_id) {
> > @@ -11925,10 +11940,8 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
> >               regs[BPF_REG_0].ref_obj_id = id;
> >       }
> >
> > -     if (func_id == BPF_FUNC_dynptr_data) {
> > -             regs[BPF_REG_0].dynptr_id = meta.initialized_dynptr.id;
> > -             regs[BPF_REG_0].ref_obj_id = meta.initialized_dynptr.ref_obj_id;
> > -     }
> > +     if (func_id == BPF_FUNC_dynptr_data)
> > +             regs[BPF_REG_0].parent_id = meta.initialized_dynptr.id;
> >
> >       err = do_refine_retval_range(env, regs, fn->ret_type, func_id, &meta);
> >       if (err)
> > @@ -13295,6 +13308,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
> >                               return -EFAULT;
> >                       }
> >                       meta->ref_obj_id = reg->ref_obj_id;
> > +                     meta->id = reg->id;
> >                       if (is_kfunc_release(meta))
> >                               meta->release_regno = regno;
> >               }
> > @@ -13429,7 +13443,6 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
> >               case KF_ARG_PTR_TO_DYNPTR:
> >               {
> >                       enum bpf_arg_type dynptr_arg_type = ARG_PTR_TO_DYNPTR;
> > -                     int clone_ref_obj_id = 0;
> >
> >                       if (is_kfunc_arg_const_ptr(btf, &args[i]))
> >                               dynptr_arg_type |= MEM_RDONLY;
> > @@ -13458,14 +13471,10 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
> >                               }
> >
> >                               dynptr_arg_type |= (unsigned int)get_dynptr_type_flag(parent_type);
> > -                             clone_ref_obj_id = meta->initialized_dynptr.ref_obj_id;
> > -                             if (dynptr_type_refcounted(parent_type) && !clone_ref_obj_id) {
> > -                                     verifier_bug(env, "missing ref obj id for parent of clone");
> > -                                     return -EFAULT;
> > -                             }
> >                       }
> >
> > -                     ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id,
> > +                     ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type,
> > +                                               meta->ref_obj_id ? meta->id : 0,
> >                                                 &meta->initialized_dynptr);
> >                       if (ret < 0)
> >                               return ret;
> > @@ -13913,12 +13922,7 @@ static int check_special_kfunc(struct bpf_verifier_env *env, struct bpf_kfunc_ca
> >                       verifier_bug(env, "no dynptr id");
> >                       return -EFAULT;
> >               }
> > -             regs[BPF_REG_0].dynptr_id = meta->initialized_dynptr.id;
> > -
> > -             /* we don't need to set BPF_REG_0's ref obj id
> > -              * because packet slices are not refcounted (see
> > -              * dynptr_type_refcounted)
> > -              */
> > +             regs[BPF_REG_0].parent_id = meta->initialized_dynptr.id;
> >       } else {
> >               return 0;
> >       }
> > @@ -14113,9 +14117,6 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
> >                       err = unmark_stack_slots_dynptr(env, reg);
> >               } else {
> >                       err = release_reference(env, reg->ref_obj_id);
> > -                     if (err)
> > -                             verbose(env, "kfunc %s#%d reference has not been acquired before\n",
> > -                                     func_name, meta.func_id);
> >               }
> >               if (err)
> >                       return err;
> > @@ -14134,7 +14135,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
> >                       return err;
> >               }
> >
> > -             err = release_reference(env, release_ref_obj_id);
> > +             err = release_reference_nomark(env->cur_state, release_ref_obj_id);
> >               if (err) {
> >                       verbose(env, "kfunc %s#%d reference has not been acquired before\n",
> >                               func_name, meta.func_id);
> > @@ -14225,7 +14226,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
> >
> >                       /* Ensures we don't access the memory after a release_reference() */
> >                       if (meta.ref_obj_id)
> > -                             regs[BPF_REG_0].ref_obj_id = meta.ref_obj_id;
> > +                             regs[BPF_REG_0].parent_id = meta.ref_obj_id;
> >
> >                       if (is_kfunc_rcu_protected(&meta))
> >                               regs[BPF_REG_0].type |= MEM_RCU;
> > @@ -19575,7 +19576,8 @@ static bool regs_exact(const struct bpf_reg_state *rold,
> >  {
> >       return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
> >              check_ids(rold->id, rcur->id, idmap) &&
> > -            check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap);
> > +            check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap) &&
> > +            check_ids(rold->parent_id, rcur->parent_id, idmap);
> >  }
> >
> >  enum exact_level {
> > @@ -19697,7 +19699,8 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
> >                      range_within(rold, rcur) &&
> >                      tnum_in(rold->var_off, rcur->var_off) &&
> >                      check_ids(rold->id, rcur->id, idmap) &&
> > -                    check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap);
> > +                    check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap) &&
> > +                    check_ids(rold->parent_id, rcur->parent_id, idmap);
> >       case PTR_TO_PACKET_META:
> >       case PTR_TO_PACKET:
> >               /* We must have at least as much range as the old ptr
> > @@ -19852,7 +19855,8 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
> >                       cur_reg = &cur->stack[spi].spilled_ptr;
> >                       if (old_reg->dynptr.type != cur_reg->dynptr.type ||
> >                           old_reg->dynptr.first_slot != cur_reg->dynptr.first_slot ||
> > -                         !check_ids(old_reg->ref_obj_id, cur_reg->ref_obj_id, idmap))
> > +                         !check_ids(old_reg->ref_obj_id, cur_reg->ref_obj_id, idmap) ||
> > +                         !check_ids(old_reg->parent_id, cur_reg->parent_id, idmap))
> >                               return false;
> >                       break;
> >               case STACK_ITER:
> > --
> > 2.47.3

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH bpf-next v2 00/11] Dynptr cleanup and bugfixes
  2026-03-11 19:38 ` [RFC PATCH bpf-next v2 00/11] Dynptr cleanup and bugfixes Andrii Nakryiko
@ 2026-03-13 20:49   ` Amery Hung
  0 siblings, 0 replies; 46+ messages in thread
From: Amery Hung @ 2026-03-13 20:49 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, netdev, alexei.starovoitov, andrii, daniel, memxor,
	martin.lau, kernel-team

On Wed, Mar 11, 2026 at 12:39 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Fri, Mar 6, 2026 at 10:44 PM Amery Hung <ameryhung@gmail.com> wrote:
> >
> > This patchset (1) cleans up dynptr handling (2) refactors object parent-
> > child relationship tracking to make it more precise and (3) fixes dynptr
> > UAF bug due to a missing link between dynptr and the parent referenced
> > object in the verifier.
> >
> > This patchset will make dynptr track its parent object. In bpf qdisc
> > programs, an skb may be freed through kfuncs. However, since dynptr
> > currently does not track the parent referenced object (e.g., skb), the
> > verifier will not invalidate the dynptr after the skb is freed,
> > resulting in use-after-free. A similar issue also affects file dynptr.
> > To solve the issue, we need to track the parent skb in the derived
> > dynptr and slices.
> >
> > However, we need to refactor the verifier's object tracking mechanism
> > first because id and ref_obj_id cannot easily express more than simple
> > object relationships. To illustrate this, we use the example shown in
> > the figure below.
> >
> > Before: object (id,ref_obj_id,dynptr_id)
> >   id         = id of the object (used for nullness tracking)
> >   ref_obj_id = id of the underlying referenced object (used for lifetime
> >                tracking)
> >   dynptr_id  = id of the parent dynptr of the slice (used for tracking
> >                parent dynptr, only for PTR_TO_MEM)
> >
> >                       skb (0,1,0)
> >                              ^ (try to link dynptr to parent ref_obj_id)
> >                              +-------------------------------+
> >                              |           bpf_dynptr_clone    |
> >                  dynptr A (2,1,0)                dynptr C (4,1,0)
> >                            ^                               ^
> >         bpf_dynptr_slice   |                               |
> >                            |                               |
> >               slice B (3,1,2)                 slice D (5,1,4)
> >                          ^
> >     bpf_dynptr_from_mem  |
> >     (NOT allowed yet)    |
> >              dynptr E (6,1,0)
>
> Ugh... This cover letter is... intimidating. It's good to have all
> this information, but for someone who didn't whiteboard this with you,
> I think it's a bit too hard and overwhelming to comprehend. You are
> also intermingling both problem statements and possible/actual
> solution (and problems with earlier possible solutions) all in the
> same go.
>
> May I suggest a bit of restructuring? This diagram you have here is a
> great start. I'd use it to "set the scene" and explain an example we
> are going to look at first. (Keep all the IDs, but mention that they
> will be more relevant a bit later and the reader shouldn't concentrate on
> them just yet), and just explain that in BPF we have this potential
> hierarchy of interdependent things that have related lifetimes. When
> skb is released, all dynptrs and slices derived from those should be
> released. But also mention that it can't be all-or-nothing, in the
> sense that if dynptr A is "released", skb and dynptr C should still be
> valid.
>
> And it's currently not the case. That's the problem we are trying to
> solve. At this point you might use those IDs to explain why we can't
> solve the release problems with the way we use id and ref_obj_id.
>
> Then explain the idea of parent_id and how it fixes this hierarchy
> reconstruction problem.
>
> But then, mention the socket casting problem, which introduces a shared
> lifetime while the objects are actually semi-independent (from verifier
> POV) due to independent NULL-ness. Which makes parent_id not enough
> and we still need ref_obj_id (maybe we should rename it to
> lifetime_id, don't know).
>
> In short, I think there is a clear logical story here, but your cover
> letter hides it a bit behind the wall of dense text, which is hard to
> get through initially.
>

Thanks for the writing suggestions. I will try to make it easier to
follow in the next respin. Really appreciate you sharing your thoughts
on how to structure the cover letter.

> >
> > Let's first try to fix the bug by letting dynptr track the parent skb
> > using ref_obj_id and propagating the ref_obj_id to slices so that when
> > the skb goes away the derived dynptrs and slices will also be
> > invalidated. However, if dynptr A is destroyed by overwriting the stack
> > slot, release_reference(ref_obj_id=1) would be called and all nodes
> > would be invalidated. The correct handling should leave skb, dynptr C,
> > and slice D intact, since a non-referenced dynptr clone's lifetime does
> > not need to be tied to the original dynptr. This was not a problem
> > before since a dynptr created from an skb has ref_obj_id = 0. In the
> > future, if we start allowing creating a dynptr from a slice, the
> > current design also cannot correctly handle the removal of dynptr E.
> > All objects would be incorrectly invalidated instead of only the
> > children of dynptr E. While it is possible to solve the issue by adding
> > more specialized handling in the dynptr paths [0], it creates more
> > complexity.
> >
>
> [...]


* Re: [RFC PATCH bpf-next v2 09/11] selftests/bpf: Test using dynptr after freeing the underlying object
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 09/11] selftests/bpf: Test using dynptr after freeing the underlying object Amery Hung
@ 2026-03-16 19:25   ` Eduard Zingerman
  0 siblings, 0 replies; 46+ messages in thread
From: Eduard Zingerman @ 2026-03-16 19:25 UTC (permalink / raw)
  To: Amery Hung, bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
	kernel-team

On Fri, 2026-03-06 at 22:44 -0800, Amery Hung wrote:

[...]

> @@ -223,6 +253,12 @@ void test_ns_bpf_qdisc(void)
>  		test_qdisc_attach_to_non_root();
>  	if (test__start_subtest("incompl_ops"))
>  		test_incompl_ops();
> +	if (test__start_subtest("invalid_dynptr"))
> +		test_invalid_dynptr();
> +	if (test__start_subtest("invalid_dynptr_slice"))
> +		test_invalid_dynptr_slice();
> +	if (test__start_subtest("invalid_dynptr_cross_frame"))
> +		test_invalid_dynptr_cross_frame();
>  }

Nit:

maybe consider using test_loader.c based infrastructure for failure tests?
E.g. like below:

    +++ b/tools/testing/selftests/bpf/prog_tests/verifier.c
    @@ -115,6 +115,7 @@
     #include "verifier_lsm.skel.h"
     #include "verifier_jit_inline.skel.h"
     #include "irq.skel.h"
    +#include "bpf_qdisc_fail__invalid_dynptr.skel.h"
     
     #define MAX_ENTRIES 11
     
    @@ -259,6 +260,7 @@ void test_verifier_lsm(void)                  { RUN(verifier_lsm); }
     void test_irq(void)			      { RUN(irq); }
     void test_verifier_mtu(void)		      { RUN(verifier_mtu); }
     void test_verifier_jit_inline(void)               { RUN(verifier_jit_inline); }
    +void test_bpf_qdisc_fail__invalid_dynptr(void) { RUN(bpf_qdisc_fail__invalid_dynptr); }
     
     static int init_test_val_map(struct bpf_object *obj, char *map_name)
     {
    diff --git a/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr.c b/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr.c
    index 2e76470bc261..f085872c3900 100644
    --- a/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr.c
    +++ b/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr.c
    @@ -3,12 +3,14 @@
     #include <vmlinux.h>
     #include "bpf_experimental.h"
     #include "bpf_qdisc_common.h"
    +#include "bpf_misc.h"
     
     char _license[] SEC("license") = "GPL";
     
     int proto;
     
     SEC("struct_ops")
    +__failure
     int BPF_PROG(bpf_qdisc_test_enqueue, struct sk_buff *skb, struct Qdisc *sch,
     	     struct bpf_sk_buff_ptr *to_free)
     {

For tests that exercise verifier failure messages this has some
benefits. E.g. the following command would reliably produce log
output even if program load succeeds:

  ./test_progs -vvv -a bpf_qdisc_fail__invalid_dynptr/bpf_qdisc_test_enqueue

And __msg annotations can be used to force-check the failure reason.

[...]


* Re: [RFC PATCH bpf-next v2 01/11] bpf: Set kfunc dynptr arg type flag based on prototype
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 01/11] bpf: Set kfunc dynptr arg type flag based on prototype Amery Hung
  2026-03-11 14:47   ` Mykyta Yatsenko
  2026-03-11 19:43   ` Andrii Nakryiko
@ 2026-03-16 20:57   ` Eduard Zingerman
  2 siblings, 0 replies; 46+ messages in thread
From: Eduard Zingerman @ 2026-03-16 20:57 UTC (permalink / raw)
  To: Amery Hung, bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
	kernel-team

On Fri, 2026-03-06 at 22:44 -0800, Amery Hung wrote:
> The verifier should decide whether a dynptr argument is read-only
> based on if the type is "const struct bpf_dynptr *", not the type of
> the register passed to the kfunc. This currently does not cause issues
> because existing kfuncs that mutate struct bpf_dynptr are constructors
> (e.g., bpf_dynptr_from_xxx and bpf_dynptr_clone). These kfuncs have
> additional check in process_dynptr_func() to make sure the stack slot
> does not contain initialized dynptr. Nonetheless, this should still be
> fixed to avoid future issues when there is a non-constructor dynptr
> kfunc that can mutate dynptr. This is also a small step toward unifying
> kfunc and helper handling in the verifier, where the first step is to
> > generate a kfunc prototype similar to bpf_func_proto before the main
> verification loop.
> 
> We also need to correctly mark some kfunc arguments as "const struct
> bpf_dynptr *" to align with other kfuncs that take non-mutable dynptr
> argument and to not break their usage. Adding const qualifier does
> not break backward compatibility.
> 
> Signed-off-by: Amery Hung <ameryhung@gmail.com>
> ---

Acked-by: Eduard Zingerman <eddyz87@gmail.com>

[...]


* Re: [RFC PATCH bpf-next v2 02/11] selftests/bpf: Test passing CONST_PTR_TO_DYNPTR to kfunc that may mutate dynptr
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 02/11] selftests/bpf: Test passing CONST_PTR_TO_DYNPTR to kfunc that may mutate dynptr Amery Hung
  2026-03-11 15:26   ` Mykyta Yatsenko
@ 2026-03-16 21:35   ` Eduard Zingerman
  1 sibling, 0 replies; 46+ messages in thread
From: Eduard Zingerman @ 2026-03-16 21:35 UTC (permalink / raw)
  To: Amery Hung, bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
	kernel-team

On Fri, 2026-03-06 at 22:44 -0800, Amery Hung wrote:
> Make sure that, for a kfunc that takes a mutable dynptr argument, the
> verifier rejects passing CONST_PTR_TO_DYNPTR to it.
> 
> Rename struct sample to test_sample to avoid a conflict with the
> definition in vmlinux.h
> 
> In test_kfunc_dynptr_param.c, initialize dynptr to 0 to avoid
> -Wuninitialized-const-pointer warning.
> 
> Signed-off-by: Amery Hung <ameryhung@gmail.com>
> ---

Acked-by: Eduard Zingerman <eddyz87@gmail.com>

[...]


* Re: [RFC PATCH bpf-next v2 03/11] bpf: Unify dynptr handling in the verifier
  2026-03-07  6:44 ` [RFC PATCH bpf-next v2 03/11] bpf: Unify dynptr handling in the verifier Amery Hung
  2026-03-11 16:03   ` Mykyta Yatsenko
  2026-03-11 19:57   ` Andrii Nakryiko
@ 2026-03-16 22:52   ` Eduard Zingerman
  2 siblings, 0 replies; 46+ messages in thread
From: Eduard Zingerman @ 2026-03-16 22:52 UTC (permalink / raw)
  To: Amery Hung, bpf
  Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
	kernel-team

On Fri, 2026-03-06 at 22:44 -0800, Amery Hung wrote:
> Simplify dynptr checking for helpers and kfuncs by unifying it. Remember
> the initialized dynptr in process_dynptr_func() so that we can easily
> retrieve the information for verification later.
> 
> Signed-off-by: Amery Hung <ameryhung@gmail.com>
> ---

Acked-by: Eduard Zingerman <eddyz87@gmail.com>

[...]

> @@ -8825,6 +8821,20 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
>  		}
>  
>  		err = mark_dynptr_read(env, reg);
> +
> +		if (initialized_dynptr) {
> +			struct bpf_func_state *state = func(env, reg);
> +			int spi;
> +
> +			if (reg->type != CONST_PTR_TO_DYNPTR) {
> +				spi = dynptr_get_spi(env, reg);
> +				reg = &state->stack[spi].spilled_ptr;
> +			}
> +
> +			initialized_dynptr->id = reg->id;
> +			initialized_dynptr->type = reg->dynptr.type;
> +			initialized_dynptr->ref_obj_id = reg->ref_obj_id;
> +		}

Nit: Maybe fill `initialized_dynptr` unconditionally (regardless of
     the MEM_UNINIT check above, I mean)?
     Remembering under which conditions it is filled might be a bit
     cumbersome when working with this function in the future.

[...]


* Re: [RFC PATCH bpf-next v2 05/11] bpf: Preserve reg->id of pointer objects after null-check
  2026-03-11 23:46       ` Amery Hung
@ 2026-03-17 18:49         ` Eduard Zingerman
  0 siblings, 0 replies; 46+ messages in thread
From: Eduard Zingerman @ 2026-03-17 18:49 UTC (permalink / raw)
  To: Amery Hung, Alexei Starovoitov
  Cc: bpf, Network Development, Andrii Nakryiko, Daniel Borkmann,
	Kumar Kartikeya Dwivedi, Martin KaFai Lau, Kernel Team

On Wed, 2026-03-11 at 16:46 -0700, Amery Hung wrote:
> On Wed, Mar 11, 2026 at 3:30 PM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> > 
> > On Wed, Mar 11, 2026 at 3:26 PM Alexei Starovoitov
> > <alexei.starovoitov@gmail.com> wrote:

[...]

> > One more thing...
> > 
> > How does it interact with reg_is_init_pkt_pointer() ?
> > 
> > That pointer has to have id == 0.
> 
> I haven't looked deeply into this case. Currently, skb is non-referenced
> for non-qdisc programs, so skb dynptr won't need to track it.
> 
> If there is ever a need to track it, we can assign a reserved non-zero
> id to the unmodified pkt pointer. For reg_is_init_pkt_pointer(), it is
> already checking tnum_equals_const(reg->var_off, 0), so maybe it is
> fine to drop the id check (not sure).

Looks like dropping id == 0 check in reg_is_init_pkt_pointer() should be fine.


end of thread, other threads:[~2026-03-17 18:49 UTC | newest]

Thread overview: 46+ messages
2026-03-07  6:44 [RFC PATCH bpf-next v2 00/11] Dynptr cleanup and bugfixes Amery Hung
2026-03-07  6:44 ` [RFC PATCH bpf-next v2 01/11] bpf: Set kfunc dynptr arg type flag based on prototype Amery Hung
2026-03-11 14:47   ` Mykyta Yatsenko
2026-03-11 16:34     ` Amery Hung
2026-03-11 19:43   ` Andrii Nakryiko
2026-03-11 20:01     ` Amery Hung
2026-03-11 22:37       ` Andrii Nakryiko
2026-03-11 23:03         ` Amery Hung
2026-03-11 23:15           ` Andrii Nakryiko
2026-03-12 16:59             ` Amery Hung
2026-03-12 20:09               ` Andrii Nakryiko
2026-03-13  3:25                 ` Alexei Starovoitov
2026-03-16 20:57   ` Eduard Zingerman
2026-03-07  6:44 ` [RFC PATCH bpf-next v2 02/11] selftests/bpf: Test passing CONST_PTR_TO_DYNPTR to kfunc that may mutate dynptr Amery Hung
2026-03-11 15:26   ` Mykyta Yatsenko
2026-03-11 16:38     ` Amery Hung
2026-03-11 16:56       ` Amery Hung
2026-03-16 21:35   ` Eduard Zingerman
2026-03-07  6:44 ` [RFC PATCH bpf-next v2 03/11] bpf: Unify dynptr handling in the verifier Amery Hung
2026-03-11 16:03   ` Mykyta Yatsenko
2026-03-11 17:23     ` Amery Hung
2026-03-11 22:22       ` Mykyta Yatsenko
2026-03-11 22:35         ` Amery Hung
2026-03-11 19:57   ` Andrii Nakryiko
2026-03-11 20:16     ` Amery Hung
2026-03-16 22:52   ` Eduard Zingerman
2026-03-07  6:44 ` [RFC PATCH bpf-next v2 04/11] bpf: Assign reg->id when getting referenced kptr from ctx Amery Hung
2026-03-07  6:44 ` [RFC PATCH bpf-next v2 05/11] bpf: Preserve reg->id of pointer objects after null-check Amery Hung
2026-03-11 21:55   ` Andrii Nakryiko
2026-03-11 22:26   ` Alexei Starovoitov
2026-03-11 22:29     ` Alexei Starovoitov
2026-03-11 23:46       ` Amery Hung
2026-03-17 18:49         ` Eduard Zingerman
2026-03-07  6:44 ` [RFC PATCH bpf-next v2 06/11] bpf: Refactor object relationship tracking and fix dynptr UAF bug Amery Hung
2026-03-11 22:32   ` Andrii Nakryiko
2026-03-13 20:32     ` Amery Hung
2026-03-12 23:33   ` Mykyta Yatsenko
2026-03-13 20:33     ` Amery Hung
2026-03-07  6:44 ` [RFC PATCH bpf-next v2 07/11] bpf: Remove redundant dynptr arg check for helper Amery Hung
2026-03-07  6:44 ` [RFC PATCH bpf-next v2 08/11] selftests/bpf: Test creating dynptr from dynptr data and slice Amery Hung
2026-03-07  6:44 ` [RFC PATCH bpf-next v2 09/11] selftests/bpf: Test using dynptr after freeing the underlying object Amery Hung
2026-03-16 19:25   ` Eduard Zingerman
2026-03-07  6:44 ` [RFC PATCH bpf-next v2 10/11] selftests/bpf: Test using slice after invalidating dynptr clone Amery Hung
2026-03-07  6:44 ` [RFC PATCH bpf-next v2 11/11] selftests/bpf: Test using file dynptr after the reference on file is dropped Amery Hung
2026-03-11 19:38 ` [RFC PATCH bpf-next v2 00/11] Dynptr cleanup and bugfixes Andrii Nakryiko
2026-03-13 20:49   ` Amery Hung
