* [PATCH bpf-next v3 0/9] Refactor verifier object relationship tracking
@ 2026-04-21 22:10 Amery Hung
2026-04-21 22:10 ` [PATCH bpf-next v3 1/9] bpf: Unify dynptr handling in the verifier Amery Hung
` (8 more replies)
0 siblings, 9 replies; 12+ messages in thread
From: Amery Hung @ 2026-04-21 22:10 UTC (permalink / raw)
To: bpf
Cc: netdev, alexei.starovoitov, andrii, daniel, eddyz87, memxor,
martin.lau, mykyta.yatsenko5, ameryhung, kernel-team
Hi all,
This patchset cleans up dynptr handling, refactors object relationship
tracking in the verifier by introducing parent_id, and fixes dynptr
use-after-free bugs where file/skb dynptrs are not invalidated when
the parent referenced object is freed.
* Motivation *
In BPF qdisc programs, an skb can be freed through kfuncs. However,
since dynptr does not track the parent referenced object (e.g., skb),
the verifier does not invalidate the dynptr after the skb is freed,
resulting in use-after-free. The same issue also affects file dynptr.
The figure below shows the current state of object tracking. The
verifier tracks objects using three fields: id for nullness tracking,
ref_obj_id for lifetime tracking, and dynptr_id for tracking the parent
dynptr of a slice (PTR_TO_MEM only). While dynptr_id links slices to
their parent dynptr, there is no field that links a dynptr back to its
parent skb. When the skb is freed via release_reference(ref_obj_id=1),
only objects with ref_obj_id=1 are invalidated. Since skb dynptr is
non-referenced (ref_obj_id=0), the dynptr and its derived slices remain
accessible.
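The missed invalidation can be modeled with a few lines of user-space C.
This is a toy sketch, not verifier code: the struct and function below
only mimic the (id, ref_obj_id) bookkeeping described above, where
releasing a reference invalidates only objects with a matching
ref_obj_id.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model of the current scheme: each object carries
 * (id, ref_obj_id); releasing a reference only invalidates
 * objects whose ref_obj_id matches. */
struct obj {
	unsigned int id;         /* nullness tracking */
	unsigned int ref_obj_id; /* lifetime tracking, 0 = non-referenced */
	bool valid;
};

static void release_reference(struct obj *objs, size_t n,
			      unsigned int ref_obj_id)
{
	for (size_t i = 0; i < n; i++)
		if (objs[i].ref_obj_id == ref_obj_id)
			objs[i].valid = false;
}
```

Running release_reference() with ref_obj_id=1 on { skb (0,1),
skb dynptr (2,0) } invalidates the skb but leaves the non-referenced
dynptr valid, which is exactly the use-after-free window.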
Current: object (id, ref_obj_id, dynptr_id)
id = unique id of the object (for nullness tracking)
ref_obj_id = id of the referenced object (for lifetime tracking)
dynptr_id = id of the parent dynptr (only for PTR_TO_MEM slices)
skb (0,1,0)
^
! No link from dynptr to skb
+-------------------------------+
| bpf_dynptr_clone |
dynptr A (2,0,0) dynptr C (4,0,0)
^ ^
bpf_dynptr_slice | |
| |
slice B (3,0,2) slice D (5,0,4)
* Why not simply use ref_obj_id to track the parent? *
A natural first approach is to link a dynptr to its parent by sharing
the parent's ref_obj_id and propagating it to slices. Then, releasing
the skb via release_reference(ref_obj_id=1) correctly invalidates all
derived objects.
Attempted fix: share parent's ref_obj_id
skb (0,1,0)
^
+-------------------------------+
| bpf_dynptr_clone |
dynptr A (2,1,0) dynptr C (4,1,0)
^ ^
bpf_dynptr_slice | |
| |
slice B (3,1,2) slice D (5,1,4)
However, this approach does not generalize to all dynptr types.
Referenced dynptrs such as file dynptr acquire their own ref_obj_id to
track the dynptr's lifetime. Since ref_obj_id is already
used for the dynptr's own reference, it cannot also be used to point to
the parent file object. While it is possible to add specialized handling
for individual dynptr types [0], it adds complexity and does not
generalize.
An alternative approach is to avoid introducing a new field and instead
repurpose ref_obj_id as parent_id by folding lifetime tracking into id
[1]. In this design, each object is represented as (id, ref_obj_id)
where id is used for both nullness and lifetime tracking, and ref_obj_id
tracks the parent object's id.
Attempted: object (id, ref_obj_id)
id = id of the object (for nullness and lifetime tracking)
ref_obj_id = id of the parent object
' = id is referenced
skb (1',0)
^
bpf_dynptr_from_skb +-------------------------------+
| bpf_dynptr_clone(A, C) |
dynptr A (2,1') dynptr C (4,1')
^ ^
bpf_dynptr_slice | |
| |
slice B (3,2) slice D (5,4)
However, this design cannot express the relationship between referenced
socket pointers and their casted counterparts. After pointer casting,
the original and casted pointers need the same lifetime (same ref_obj_id
in the current design) but different nullness (different id). The casted
pointer may be NULL even if the original is valid. With id serving as
the only field for both nullness and lifetime, and ref_obj_id repurposed
as parent, there is no way to express "different identity, same
lifetime."
Referenced socket pointer (expressed using current design):
C = ptr_casting_function(A)
ptr A (1,1,0) ptr C (2,1,0)
^ ^
| |
ptr C may be NULL even if ptr A is valid
but they have the same lifetime
* New Design: parent_id *
To track precise object relationships, u32 parent_id is added to
bpf_reg_state. A child object's parent_id points to the parent
object's id. This replaces the PTR_TO_MEM-specific dynptr_id, and
does not increase the size of bpf_reg_state on 64-bit machines as
there is existing padding.
After: object (id, ref_obj_id, parent_id)
id = unique id of the object (for nullness tracking)
ref_obj_id = id of the referenced object; objects with the same
ref_obj_id share the same lifetime
parent_id = id of the parent object; points to parent's id
(for object relationship tracking)
skb (1,1,0)
^
bpf_dynptr_from_skb +-------------------------------+
| bpf_dynptr_clone(A, C) |
dynptr A (2,0,1) dynptr C (4,0,1)
^ ^
bpf_dynptr_slice | |
| |
slice B (3,0,2) slice D (5,0,4)
^
bpf_dynptr_from_mem |
(NOT allowed yet) |
dynptr E (6,0,3)
With parent_id, the verifier can precisely track object trees. When the
skb is freed, the verifier traverses the tree rooted at skb (id=1) and
invalidates all descendants — dynptr A, dynptr C, and their slices.
When dynptr A is destroyed by overwriting the stack slot, only dynptr A
and its children (slice B, dynptr E) are invalidated; skb, dynptr C,
and slice D remain valid.
For referenced dynptr (e.g., file dynptr), the original and its clones
share the same ref_obj_id so they are all invalidated together when any
one of them is released. For non-referenced dynptr (e.g., skb dynptr),
clones live independently since they have ref_obj_id=0.
To avoid recursive call chains when releasing objects (e.g.,
release_reference() -> unmark_stack_slots_dynptr() ->
release_reference()), release_reference() now uses stack-based DFS to
find and invalidate all registers and stack slots with matching id or
ref_obj_id and all descendants whose parent_id matches. Currently, the
traversal skips id == 0, even though 0 may belong to a valid object
(e.g., a pkt pointer obtained by reading ctx). Future work may start
assigning ids > 0 to such objects. This does not affect the current use
cases, where skb and file parents are both given id > 0.
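The stack-based DFS can be sketched in user-space C. This is a toy
model, not verifier code: the fixed-size stack and the array of objects
stand in for the verifier's register/stack-slot walk, but the matching
rule (invalidate on id or ref_obj_id, then all descendants via
parent_id) follows the description above.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model of the new scheme: (id, ref_obj_id, parent_id) per
 * object, with an iterative DFS instead of recursion. */
struct obj {
	unsigned int id;         /* unique id, 0 = none assigned */
	unsigned int ref_obj_id; /* lifetime group, 0 = non-referenced */
	unsigned int parent_id;  /* id of parent object, 0 = no parent */
	bool valid;
};

static void release_reference(struct obj *objs, size_t n,
			      unsigned int id, unsigned int ref_obj_id)
{
	unsigned int stack[32];
	size_t top = 0;

	/* Invalidate direct matches and seed the DFS stack. */
	for (size_t i = 0; i < n; i++) {
		if (!objs[i].valid)
			continue;
		if ((id && objs[i].id == id) ||
		    (ref_obj_id && objs[i].ref_obj_id == ref_obj_id)) {
			objs[i].valid = false;
			if (objs[i].id && top < 32)
				stack[top++] = objs[i].id;
		}
	}

	/* Walk the object tree iteratively, invalidating children. */
	while (top) {
		unsigned int parent = stack[--top];

		for (size_t i = 0; i < n; i++) {
			if (objs[i].valid && objs[i].parent_id == parent) {
				objs[i].valid = false;
				if (objs[i].id && top < 32)
					stack[top++] = objs[i].id;
			}
		}
	}
}
```

With the figure's objects { skb (1,1,0), dynptr A (2,0,1),
slice B (3,0,2), dynptr C (4,0,1), slice D (5,0,4) }, releasing the skb
invalidates all five, while destroying only dynptr A leaves skb,
dynptr C, and slice D valid.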
* Preserving reg->id after null-check *
For parent_id tracking to work, child objects need to refer to the
parent's id. This requires two preparatory changes: assigning reg->id
when reading referenced kptrs from program context (patch 2), and
preserving reg->id of pointer objects after null-check (patch 3).
Previously, null-check would clear reg->id, making it impossible for
children to reference the parent afterward. The latter causes a slight
increase in verified states for some programs. One selftest object
sees +19 states (+5.01%). For Meta BPF objects, the increase is
also minor, with the largest being +34 states (+3.63%).
* Object relationship in different scenarios (for reference) *
The figures below show how the new design handles all four combinations
of referenced/non-referenced dynptr with referenced/non-referenced
parent. The relationship between slices and dynptrs is omitted as it
is the same across all cases. The main difference is how cloned dynptrs
are represented. Since bpf_dynptr_clone() does not initialize a new
reference, clones of referenced dynptrs share the same ref_obj_id and
must be invalidated together. For non-referenced dynptrs, the original
and clones live independently.
(1) Non-referenced dynptr with referenced parent (e.g., skb in Qdisc):
skb (1,1,0)
^
bpf_dynptr_from_skb +-------------------------------+
| bpf_dynptr_clone(A, C) |
dynptr A (2,0,1) dynptr C (4,0,1)
(2) Non-referenced dynptr with non-referenced parent (e.g., skb in TC,
always valid):
bpf_dynptr_from_skb
bpf_dynptr_clone(A, C)
dynptr A (1,0,0) dynptr C (2,0,0)
dynptr A and C live independently
(3) Referenced dynptr with referenced parent:
file (1,1,0)
^ ^
bpf_dynptr_from_file | +-------------------------------+
| bpf_dynptr_clone(A, C) |
dynptr A (2,3,1) dynptr C (4,3,1)
^ ^
| |
dynptr A and C have the same lifetime
(4) Referenced dynptr with non-referenced parent:
bpf_ringbuf_reserve_dynptr
bpf_dynptr_clone(A, C)
dynptr A (1,1,0) dynptr C (2,1,0)
^ ^
| |
dynptr A and C have the same lifetime
[0] https://lore.kernel.org/bpf/20250414161443.1146103-2-memxor@gmail.com/
[1] https://github.com/ameryhung/bpf/commits/obj_relationship_v2_no_parent_id/
Changelog:
v2 -> v3
- Rebase to bpf-next/master
- Update veristat numbers
- Update commit msg to explain multiple dropped checks (Mykyta, Andrii)
- Reuse idmap as idstack in release_reference() and check for
duplicate id (Mykyta, Andrii)
- Change to use RUN_TEST for qdisc dynptr selftest (Eduard)
Link: https://lore.kernel.org/bpf/20260307064439.3247440-1-ameryhung@gmail.com/
v1 -> v2
- Redesign: Use object (id, ref_obj_id, parent_id) instead of
(id, ref_obj_id) as it cannot express ptr casting without
introducing specialized code to handle the case
- Use stack-based DFS to release objects to avoid recursion (Andrii)
- Keep reg->id after null check
- Add dynptr cleanup
- Fix dynptr kfunc arg type determination
- Add a file dynptr UAF selftest
Link: https://lore.kernel.org/bpf/20260202214817.2853236-1-ameryhung@gmail.com/
---
Amery Hung (9):
bpf: Unify dynptr handling in the verifier
bpf: Assign reg->id when getting referenced kptr from ctx
bpf: Preserve reg->id of pointer objects after null-check
bpf: Refactor object relationship tracking and fix dynptr UAF bug
bpf: Remove redundant dynptr arg check for helper
selftests/bpf: Test creating dynptr from dynptr data and slice
selftests/bpf: Test using dynptr after freeing the underlying object
selftests/bpf: Test using slice after invalidating dynptr clone
selftests/bpf: Test using file dynptr after the reference on file is
dropped
include/linux/bpf_verifier.h | 34 +-
kernel/bpf/log.c | 4 +-
kernel/bpf/states.c | 9 +-
kernel/bpf/verifier.c | 461 ++++++------------
.../selftests/bpf/prog_tests/bpf_qdisc.c | 8 +
..._qdisc_dynptr_use_after_invalidate_clone.c | 75 +++
.../progs/bpf_qdisc_fail__invalid_dynptr.c | 68 +++
...f_qdisc_fail__invalid_dynptr_cross_frame.c | 74 +++
.../bpf_qdisc_fail__invalid_dynptr_slice.c | 70 +++
.../testing/selftests/bpf/progs/dynptr_fail.c | 48 +-
.../selftests/bpf/progs/file_reader_fail.c | 60 +++
.../selftests/bpf/progs/user_ringbuf_fail.c | 4 +-
12 files changed, 593 insertions(+), 322 deletions(-)
create mode 100644 tools/testing/selftests/bpf/progs/bpf_qdisc_dynptr_use_after_invalidate_clone.c
create mode 100644 tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr.c
create mode 100644 tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_cross_frame.c
create mode 100644 tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_slice.c
--
2.52.0
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH bpf-next v3 1/9] bpf: Unify dynptr handling in the verifier
2026-04-21 22:10 [PATCH bpf-next v3 0/9] Refactor verifier object relationship tracking Amery Hung
@ 2026-04-21 22:10 ` Amery Hung
2026-04-21 22:52 ` bot+bpf-ci
2026-04-21 22:10 ` [PATCH bpf-next v3 2/9] bpf: Assign reg->id when getting referenced kptr from ctx Amery Hung
` (7 subsequent siblings)
8 siblings, 1 reply; 12+ messages in thread
From: Amery Hung @ 2026-04-21 22:10 UTC (permalink / raw)
To: bpf
Cc: netdev, alexei.starovoitov, andrii, daniel, eddyz87, memxor,
martin.lau, mykyta.yatsenko5, ameryhung, kernel-team
Simplify dynptr checking for helpers and kfuncs by unifying it. Remember
the initialized dynptr (i.e., !(arg_type & MEM_UNINIT)) passed to a
dynptr kfunc during process_dynptr_func() so that the information can
easily be retrieved for verification later. By saving it in
meta->dynptr, there is no need to call dynptr helpers such as
dynptr_id(), dynptr_ref_obj_id() and dynptr_get_type() in check_func_arg().
Remove those helpers and open code them in process_dynptr_func() when
saving id, ref_obj_id, and type. It is okay to drop the spi < 0 check as
is_dynptr_reg_valid_init() has already made sure the dynptr is valid.
Besides, since the dynptr ref_obj_id is now passed around in
meta->dynptr (struct bpf_dynptr_desc), drop the check in
helper_multiple_ref_obj_use().
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
include/linux/bpf_verifier.h | 12 ++-
kernel/bpf/verifier.c | 178 +++++++----------------------------
2 files changed, 41 insertions(+), 149 deletions(-)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index b148f816f25b..dc0cff59246d 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -1319,6 +1319,12 @@ struct bpf_map_desc {
int uid;
};
+struct bpf_dynptr_desc {
+ enum bpf_dynptr_type type;
+ u32 id;
+ u32 ref_obj_id;
+};
+
struct bpf_kfunc_call_arg_meta {
/* In parameters */
struct btf *btf;
@@ -1359,16 +1365,12 @@ struct bpf_kfunc_call_arg_meta {
struct {
struct btf_field *field;
} arg_rbtree_root;
- struct {
- enum bpf_dynptr_type type;
- u32 id;
- u32 ref_obj_id;
- } initialized_dynptr;
struct {
u8 spi;
u8 frameno;
} iter;
struct bpf_map_desc map;
+ struct bpf_dynptr_desc dynptr;
u64 mem_size;
};
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 185210b73385..41e4ea41c72e 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -232,6 +232,7 @@ static void bpf_map_key_store(struct bpf_insn_aux_data *aux, u64 state)
struct bpf_call_arg_meta {
struct bpf_map_desc map;
+ struct bpf_dynptr_desc dynptr;
bool raw_mode;
bool pkt_access;
u8 release_regno;
@@ -240,7 +241,6 @@ struct bpf_call_arg_meta {
int mem_size;
u64 msize_max_value;
int ref_obj_id;
- int dynptr_id;
int func_id;
struct btf *btf;
u32 btf_id;
@@ -434,11 +434,6 @@ static bool is_ptr_cast_function(enum bpf_func_id func_id)
func_id == BPF_FUNC_skc_to_tcp_request_sock;
}
-static bool is_dynptr_ref_function(enum bpf_func_id func_id)
-{
- return func_id == BPF_FUNC_dynptr_data;
-}
-
static bool is_sync_callback_calling_kfunc(u32 btf_id);
static bool is_async_callback_calling_kfunc(u32 btf_id);
static bool is_callback_calling_kfunc(u32 btf_id);
@@ -507,8 +502,6 @@ static bool helper_multiple_ref_obj_use(enum bpf_func_id func_id,
ref_obj_uses++;
if (is_acquire_function(func_id, map))
ref_obj_uses++;
- if (is_dynptr_ref_function(func_id))
- ref_obj_uses++;
return ref_obj_uses > 1;
}
@@ -7433,7 +7426,8 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
* and checked dynamically during runtime.
*/
static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn_idx,
- enum bpf_arg_type arg_type, int clone_ref_obj_id)
+ enum bpf_arg_type arg_type, int clone_ref_obj_id,
+ struct bpf_dynptr_desc *dynptr)
{
struct bpf_reg_state *reg = reg_state(env, regno);
int err;
@@ -7499,6 +7493,20 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
}
err = mark_dynptr_read(env, reg);
+
+ if (dynptr) {
+ struct bpf_func_state *state = bpf_func(env, reg);
+ int spi;
+
+ if (reg->type != CONST_PTR_TO_DYNPTR) {
+ spi = dynptr_get_spi(env, reg);
+ reg = &state->stack[spi].spilled_ptr;
+ }
+
+ dynptr->id = reg->id;
+ dynptr->type = reg->dynptr.type;
+ dynptr->ref_obj_id = reg->ref_obj_id;
+ }
}
return err;
}
@@ -8263,72 +8271,6 @@ static int check_func_arg_reg_off(struct bpf_verifier_env *env,
}
}
-static struct bpf_reg_state *get_dynptr_arg_reg(struct bpf_verifier_env *env,
- const struct bpf_func_proto *fn,
- struct bpf_reg_state *regs)
-{
- struct bpf_reg_state *state = NULL;
- int i;
-
- for (i = 0; i < MAX_BPF_FUNC_REG_ARGS; i++)
- if (arg_type_is_dynptr(fn->arg_type[i])) {
- if (state) {
- verbose(env, "verifier internal error: multiple dynptr args\n");
- return NULL;
- }
- state = &regs[BPF_REG_1 + i];
- }
-
- if (!state)
- verbose(env, "verifier internal error: no dynptr arg found\n");
-
- return state;
-}
-
-static int dynptr_id(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
-{
- struct bpf_func_state *state = bpf_func(env, reg);
- int spi;
-
- if (reg->type == CONST_PTR_TO_DYNPTR)
- return reg->id;
- spi = dynptr_get_spi(env, reg);
- if (spi < 0)
- return spi;
- return state->stack[spi].spilled_ptr.id;
-}
-
-static int dynptr_ref_obj_id(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
-{
- struct bpf_func_state *state = bpf_func(env, reg);
- int spi;
-
- if (reg->type == CONST_PTR_TO_DYNPTR)
- return reg->ref_obj_id;
- spi = dynptr_get_spi(env, reg);
- if (spi < 0)
- return spi;
- return state->stack[spi].spilled_ptr.ref_obj_id;
-}
-
-static enum bpf_dynptr_type dynptr_get_type(struct bpf_verifier_env *env,
- struct bpf_reg_state *reg)
-{
- struct bpf_func_state *state = bpf_func(env, reg);
- int spi;
-
- if (reg->type == CONST_PTR_TO_DYNPTR)
- return reg->dynptr.type;
-
- spi = bpf_get_spi(reg->var_off.value);
- if (spi < 0) {
- verbose(env, "verifier internal error: invalid spi when querying dynptr type\n");
- return BPF_DYNPTR_TYPE_INVALID;
- }
-
- return state->stack[spi].spilled_ptr.dynptr.type;
-}
-
static int check_reg_const_str(struct bpf_verifier_env *env,
struct bpf_reg_state *reg, u32 regno)
{
@@ -8683,7 +8625,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
true, meta);
break;
case ARG_PTR_TO_DYNPTR:
- err = process_dynptr_func(env, regno, insn_idx, arg_type, 0);
+ err = process_dynptr_func(env, regno, insn_idx, arg_type, 0, &meta->dynptr);
if (err)
return err;
break;
@@ -9342,7 +9284,7 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
if (ret)
return ret;
- ret = process_dynptr_func(env, regno, -1, arg->arg_type, 0);
+ ret = process_dynptr_func(env, regno, -1, arg->arg_type, 0, NULL);
if (ret)
return ret;
} else if (base_type(arg->arg_type) == ARG_PTR_TO_BTF_ID) {
@@ -10429,52 +10371,10 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
}
}
break;
- case BPF_FUNC_dynptr_data:
- {
- struct bpf_reg_state *reg;
- int id, ref_obj_id;
-
- reg = get_dynptr_arg_reg(env, fn, regs);
- if (!reg)
- return -EFAULT;
-
-
- if (meta.dynptr_id) {
- verifier_bug(env, "meta.dynptr_id already set");
- return -EFAULT;
- }
- if (meta.ref_obj_id) {
- verifier_bug(env, "meta.ref_obj_id already set");
- return -EFAULT;
- }
-
- id = dynptr_id(env, reg);
- if (id < 0) {
- verifier_bug(env, "failed to obtain dynptr id");
- return id;
- }
-
- ref_obj_id = dynptr_ref_obj_id(env, reg);
- if (ref_obj_id < 0) {
- verifier_bug(env, "failed to obtain dynptr ref_obj_id");
- return ref_obj_id;
- }
-
- meta.dynptr_id = id;
- meta.ref_obj_id = ref_obj_id;
-
- break;
- }
case BPF_FUNC_dynptr_write:
{
- enum bpf_dynptr_type dynptr_type;
- struct bpf_reg_state *reg;
+ enum bpf_dynptr_type dynptr_type = meta.dynptr.type;
- reg = get_dynptr_arg_reg(env, fn, regs);
- if (!reg)
- return -EFAULT;
-
- dynptr_type = dynptr_get_type(env, reg);
if (dynptr_type == BPF_DYNPTR_TYPE_INVALID)
return -EFAULT;
@@ -10665,10 +10565,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
return -EFAULT;
}
- if (is_dynptr_ref_function(func_id))
- regs[BPF_REG_0].dynptr_id = meta.dynptr_id;
-
- if (is_ptr_cast_function(func_id) || is_dynptr_ref_function(func_id)) {
+ if (is_ptr_cast_function(func_id)) {
/* For release_reference() */
regs[BPF_REG_0].ref_obj_id = meta.ref_obj_id;
} else if (is_acquire_function(func_id, meta.map.ptr)) {
@@ -10682,6 +10579,11 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
regs[BPF_REG_0].ref_obj_id = id;
}
+ if (func_id == BPF_FUNC_dynptr_data) {
+ regs[BPF_REG_0].dynptr_id = meta.dynptr.id;
+ regs[BPF_REG_0].ref_obj_id = meta.dynptr.ref_obj_id;
+ }
+
err = do_refine_retval_range(env, regs, fn->ret_type, func_id, &meta);
if (err)
return err;
@@ -12260,7 +12162,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
meta->release_regno = regno;
} else if (meta->func_id == special_kfunc_list[KF_bpf_dynptr_clone] &&
(dynptr_arg_type & MEM_UNINIT)) {
- enum bpf_dynptr_type parent_type = meta->initialized_dynptr.type;
+ enum bpf_dynptr_type parent_type = meta->dynptr.type;
if (parent_type == BPF_DYNPTR_TYPE_INVALID) {
verifier_bug(env, "no dynptr type for parent of clone");
@@ -12268,29 +12170,17 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
}
dynptr_arg_type |= (unsigned int)get_dynptr_type_flag(parent_type);
- clone_ref_obj_id = meta->initialized_dynptr.ref_obj_id;
+ clone_ref_obj_id = meta->dynptr.ref_obj_id;
if (dynptr_type_refcounted(parent_type) && !clone_ref_obj_id) {
verifier_bug(env, "missing ref obj id for parent of clone");
return -EFAULT;
}
}
- ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id);
+ ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id,
+ &meta->dynptr);
if (ret < 0)
return ret;
-
- if (!(dynptr_arg_type & MEM_UNINIT)) {
- int id = dynptr_id(env, reg);
-
- if (id < 0) {
- verifier_bug(env, "failed to obtain dynptr id");
- return id;
- }
- meta->initialized_dynptr.id = id;
- meta->initialized_dynptr.type = dynptr_get_type(env, reg);
- meta->initialized_dynptr.ref_obj_id = dynptr_ref_obj_id(env, reg);
- }
-
break;
}
case KF_ARG_PTR_TO_ITER:
@@ -12894,7 +12784,7 @@ static int check_special_kfunc(struct bpf_verifier_env *env, struct bpf_kfunc_ca
}
} else if (meta->func_id == special_kfunc_list[KF_bpf_dynptr_slice] ||
meta->func_id == special_kfunc_list[KF_bpf_dynptr_slice_rdwr]) {
- enum bpf_type_flag type_flag = get_dynptr_type_flag(meta->initialized_dynptr.type);
+ enum bpf_type_flag type_flag = get_dynptr_type_flag(meta->dynptr.type);
mark_reg_known_zero(env, regs, BPF_REG_0);
@@ -12918,11 +12808,11 @@ static int check_special_kfunc(struct bpf_verifier_env *env, struct bpf_kfunc_ca
}
}
- if (!meta->initialized_dynptr.id) {
+ if (!meta->dynptr.id) {
verifier_bug(env, "no dynptr id");
return -EFAULT;
}
- regs[BPF_REG_0].dynptr_id = meta->initialized_dynptr.id;
+ regs[BPF_REG_0].dynptr_id = meta->dynptr.id;
/* we don't need to set BPF_REG_0's ref obj id
* because packet slices are not refcounted (see
@@ -13110,7 +13000,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
if (meta.release_regno) {
struct bpf_reg_state *reg = &regs[meta.release_regno];
- if (meta.initialized_dynptr.ref_obj_id) {
+ if (meta.dynptr.ref_obj_id) {
err = unmark_stack_slots_dynptr(env, reg);
} else {
err = release_reference(env, reg->ref_obj_id);
--
2.52.0
* [PATCH bpf-next v3 2/9] bpf: Assign reg->id when getting referenced kptr from ctx
2026-04-21 22:10 [PATCH bpf-next v3 0/9] Refactor verifier object relationship tracking Amery Hung
2026-04-21 22:10 ` [PATCH bpf-next v3 1/9] bpf: Unify dynptr handling in the verifier Amery Hung
@ 2026-04-21 22:10 ` Amery Hung
2026-04-21 22:10 ` [PATCH bpf-next v3 3/9] bpf: Preserve reg->id of pointer objects after null-check Amery Hung
` (6 subsequent siblings)
8 siblings, 0 replies; 12+ messages in thread
From: Amery Hung @ 2026-04-21 22:10 UTC (permalink / raw)
To: bpf
Cc: netdev, alexei.starovoitov, andrii, daniel, eddyz87, memxor,
martin.lau, mykyta.yatsenko5, ameryhung, kernel-team
Assign reg->id when reading a referenced kptr from the program context,
to be consistent with R0 of KF_ACQUIRE kfuncs. The skb dynptr will track
the referenced skb in qdisc programs using the new reg->parent_id field
introduced in a later patch.
Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
kernel/bpf/verifier.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 41e4ea41c72e..93003a2a96b0 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -6448,8 +6448,6 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
} else {
mark_reg_known_zero(env, regs,
value_regno);
- if (type_may_be_null(info.reg_type))
- regs[value_regno].id = ++env->id_gen;
/* A load of ctx field could have different
* actual load size with the one encoded in the
* insn. When the dst is PTR, it is for sure not
@@ -6459,8 +6457,11 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
if (base_type(info.reg_type) == PTR_TO_BTF_ID) {
regs[value_regno].btf = info.btf;
regs[value_regno].btf_id = info.btf_id;
+ regs[value_regno].id = info.ref_obj_id;
regs[value_regno].ref_obj_id = info.ref_obj_id;
}
+ if (type_may_be_null(info.reg_type) && !regs[value_regno].id)
+ regs[value_regno].id = ++env->id_gen;
}
regs[value_regno].type = info.reg_type;
}
--
2.52.0
* [PATCH bpf-next v3 3/9] bpf: Preserve reg->id of pointer objects after null-check
2026-04-21 22:10 [PATCH bpf-next v3 0/9] Refactor verifier object relationship tracking Amery Hung
2026-04-21 22:10 ` [PATCH bpf-next v3 1/9] bpf: Unify dynptr handling in the verifier Amery Hung
2026-04-21 22:10 ` [PATCH bpf-next v3 2/9] bpf: Assign reg->id when getting referenced kptr from ctx Amery Hung
@ 2026-04-21 22:10 ` Amery Hung
2026-04-21 22:52 ` bot+bpf-ci
2026-04-21 22:10 ` [PATCH bpf-next v3 4/9] bpf: Refactor object relationship tracking and fix dynptr UAF bug Amery Hung
` (5 subsequent siblings)
8 siblings, 1 reply; 12+ messages in thread
From: Amery Hung @ 2026-04-21 22:10 UTC (permalink / raw)
To: bpf
Cc: netdev, alexei.starovoitov, andrii, daniel, eddyz87, memxor,
martin.lau, mykyta.yatsenko5, ameryhung, kernel-team
Preserve reg->id of pointer objects after null-checking the register so
that child objects derived from it can still refer to it in the new
object relationship tracking mechanism introduced in a later patch. This
change incurs a slight increase in the number of states in one selftest
BPF object, rbtree_search.bpf.o. For Meta BPF objects, the increase in
states is also negligible.
Selftest BPF objects with insns_diff > 0
Program Insns (A) Insns (B) Insns (DIFF) States (A) States (B) States (DIFF)
------------------------ --------- --------- -------------- ---------- ---------- -------------
rbtree_search 6820 7326 +506 (+7.42%) 379 398 +19 (+5.01%)
Meta BPF objects with insns_diff > 0
Program Insns (A) Insns (B) Insns (DIFF) States (A) States (B) States (DIFF)
------------------------ --------- --------- -------------- ---------- ---------- -------------
ned_imex_be_tclass 52 57 +5 (+9.62%) 5 6 +1 (+20.00%)
ned_imex_be_tclass 52 57 +5 (+9.62%) 5 6 +1 (+20.00%)
ned_skop_auto_flowlabel 523 526 +3 (+0.57%) 39 40 +1 (+2.56%)
ned_skop_mss 289 292 +3 (+1.04%) 20 20 +0 (+0.00%)
ned_skopt_bet_classifier 78 82 +4 (+5.13%) 8 8 +0 (+0.00%)
dctcp_update_alpha 252 320 +68 (+26.98%) 21 27 +6 (+28.57%)
dctcp_update_alpha 252 320 +68 (+26.98%) 21 27 +6 (+28.57%)
ned_ts_func 119 126 +7 (+5.88%) 6 7 +1 (+16.67%)
tw_egress 1119 1128 +9 (+0.80%) 95 96 +1 (+1.05%)
tw_ingress 1128 1137 +9 (+0.80%) 95 96 +1 (+1.05%)
tw_tproxy_router 4380 4465 +85 (+1.94%) 114 118 +4 (+3.51%)
tw_tproxy_router4 3093 3170 +77 (+2.49%) 83 88 +5 (+6.02%)
ttls_tc_ingress 34656 35717 +1061 (+3.06%) 936 970 +34 (+3.63%)
tw_twfw_egress 222327 222338 +11 (+0.00%) 10563 10564 +1 (+0.01%)
tw_twfw_ingress 78295 78299 +4 (+0.01%) 3825 3826 +1 (+0.03%)
tw_twfw_tc_eg 222839 222859 +20 (+0.01%) 10584 10585 +1 (+0.01%)
tw_twfw_tc_in 78295 78299 +4 (+0.01%) 3825 3826 +1 (+0.03%)
tw_twfw_egress 8080 8085 +5 (+0.06%) 456 456 +0 (+0.00%)
tw_twfw_ingress 8053 8056 +3 (+0.04%) 454 454 +0 (+0.00%)
tw_twfw_tc_eg 8154 8174 +20 (+0.25%) 456 457 +1 (+0.22%)
tw_twfw_tc_in 8060 8063 +3 (+0.04%) 455 455 +0 (+0.00%)
tw_twfw_egress 222327 222338 +11 (+0.00%) 10563 10564 +1 (+0.01%)
tw_twfw_ingress 78295 78299 +4 (+0.01%) 3825 3826 +1 (+0.03%)
tw_twfw_tc_eg 222839 222859 +20 (+0.01%) 10584 10585 +1 (+0.01%)
tw_twfw_tc_in 78295 78299 +4 (+0.01%) 3825 3826 +1 (+0.03%)
tw_twfw_egress 8080 8085 +5 (+0.06%) 456 456 +0 (+0.00%)
tw_twfw_ingress 8053 8056 +3 (+0.04%) 454 454 +0 (+0.00%)
tw_twfw_tc_eg 8154 8174 +20 (+0.25%) 456 457 +1 (+0.22%)
tw_twfw_tc_in 8060 8063 +3 (+0.04%) 455 455 +0 (+0.00%)
Looking into rbtree_search, the reason for the increase is that the
verifier has to explore the main loop shown below for one more iteration
before state pruning decides the current state is safe.
long rbtree_search(void *ctx)
{
...
bpf_spin_lock(&glock0);
rb_n = bpf_rbtree_root(&groot0);
while (can_loop) {
if (!rb_n) {
bpf_spin_unlock(&glock0);
return __LINE__;
}
n = rb_entry(rb_n, struct node_data, r0);
if (lookup_key == n->key0)
break;
if (nr_gc < NR_NODES)
gc_ns[nr_gc++] = rb_n;
if (lookup_key < n->key0)
rb_n = bpf_rbtree_left(&groot0, rb_n);
else
rb_n = bpf_rbtree_right(&groot0, rb_n);
}
...
}
Below is what the verifier sees at the start of each iteration
(65: may_goto) after preserving id of rb_n. Without id of rb_n, the
verifier stops exploring the loop at iter 16.
rb_n gc_ns[15]
iter 15 257 257
iter 16 290 257 rb_n: idmap add 257->290
gc_ns[15]: check 257 != 290 --> state not equal
iter 17 325 257 rb_n: idmap add 290->325
gc_ns[15]: idmap add 257->257 --> state safe
Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
kernel/bpf/verifier.c | 13 ++++---------
1 file changed, 4 insertions(+), 9 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 93003a2a96b0..0313b7d5f6c9 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -15886,15 +15886,10 @@ static void mark_ptr_or_null_reg(struct bpf_func_state *state,
mark_ptr_not_null_reg(reg);
- if (!reg_may_point_to_spin_lock(reg)) {
- /* For not-NULL ptr, reg->ref_obj_id will be reset
- * in release_reference().
- *
- * reg->id is still used by spin_lock ptr. Other
- * than spin_lock ptr type, reg->id can be reset.
- */
- reg->id = 0;
- }
+ /*
+ * reg->id is preserved for object relationship tracking
+ * and spin_lock lock state tracking
+ */
}
}
--
2.52.0
* [PATCH bpf-next v3 4/9] bpf: Refactor object relationship tracking and fix dynptr UAF bug
2026-04-21 22:10 [PATCH bpf-next v3 0/9] Refactor verifier object relationship tracking Amery Hung
` (2 preceding siblings ...)
2026-04-21 22:10 ` [PATCH bpf-next v3 3/9] bpf: Preserve reg->id of pointer objects after null-check Amery Hung
@ 2026-04-21 22:10 ` Amery Hung
2026-04-21 22:10 ` [PATCH bpf-next v3 5/9] bpf: Remove redundant dynptr arg check for helper Amery Hung
` (4 subsequent siblings)
8 siblings, 0 replies; 12+ messages in thread
From: Amery Hung @ 2026-04-21 22:10 UTC (permalink / raw)
To: bpf
Cc: netdev, alexei.starovoitov, andrii, daniel, eddyz87, memxor,
martin.lau, mykyta.yatsenko5, ameryhung, kernel-team
Refactor object relationship tracking in the verifier and fix a dynptr
use-after-free bug where file/skb dynptrs are not invalidated when the
parent referenced object is freed.
Add parent_id to bpf_reg_state to precisely track child-parent
relationships. A child object's parent_id points to the parent object's
id. This replaces the PTR_TO_MEM-specific dynptr_id and does not
increase the size of bpf_reg_state on 64-bit machines as there is
existing padding.
When calling dynptr constructors (i.e., process_dynptr_func() with
MEM_UNINIT argument), track the parent's id if the parent is a
referenced object. This only applies to file dynptr and skb dynptr,
so only pass parent reg->id to kfunc constructors.
For release_reference(), invalidating an object now also invalidates
all descendants by traversing the object tree. This is done using
stack-based DFS to avoid recursive call chains of release_reference() ->
unmark_stack_slots_dynptr() -> release_reference(). Referenced objects
encountered during tree traversal cannot be indirectly released. They
require an explicit helper/kfunc call to release the acquired resources.
While the new design changes how object relationships are tracked in
the verifier, it does not change the verifier's behavior. Here is the
implication for dynptr, pointer casting, and owning/non-owning
references:
Dynptr:
When initializing a dynptr, referenced dynptrs acquire a reference for
ref_obj_id. If the dynptr has a referenced parent, parent_id tracks the
parent's id. When cloning, ref_obj_id and parent_id are copied from the
original. Releasing a referenced dynptr via release_reference(ref_obj_id)
invalidates all clones and derived slices. For non-referenced dynptrs,
only the specific dynptr and its children are invalidated.
Pointer casting:
Referenced socket pointers and their casted counterparts share the same
lifetime but have different nullness: they have different ids but the
same ref_obj_id.
Owning to non-owning reference conversion:
After converting owning to non-owning by clearing ref_obj_id (e.g.,
object(id=1, ref_obj_id=1) -> object(id=1, ref_obj_id=0)), the
verifier only needs to release the reference state, so it calls
release_reference_nomark() instead of release_reference().
Note that the error message "reference has not been acquired before" in
the helper and kfunc release paths is removed. This message was already
unreachable: the verifier only calls release_reference() after
confirming meta.ref_obj_id is valid, so the condition could never
trigger in practice (no selftest exercises it either). With the
refactor, release_reference() can now be called with non-acquired ids
and has different error conditions, so errors are reported directly in
release_reference() instead.
Fixes: 870c28588afa ("bpf: net_sched: Add basic bpf qdisc kfuncs")
Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
include/linux/bpf_verifier.h | 22 ++-
kernel/bpf/log.c | 4 +-
kernel/bpf/states.c | 9 +-
kernel/bpf/verifier.c | 264 +++++++++++++++++------------------
4 files changed, 152 insertions(+), 147 deletions(-)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index dc0cff59246d..1314299c3763 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -65,7 +65,6 @@ struct bpf_reg_state {
struct { /* for PTR_TO_MEM | PTR_TO_MEM_OR_NULL */
u32 mem_size;
- u32 dynptr_id; /* for dynptr slices */
};
/* For dynptr stack slots */
@@ -193,6 +192,13 @@ struct bpf_reg_state {
* allowed and has the same effect as bpf_sk_release(sk).
*/
u32 ref_obj_id;
+ /* Tracks the parent object this register was derived from.
+ * Used for cascading invalidation: when the parent object is
+ * released or invalidated, all registers with matching parent_id
+ * are also invalidated. For example, a slice from bpf_dynptr_data()
+ * gets parent_id set to the dynptr's id.
+ */
+ u32 parent_id;
/* Inside the callee two registers can be both PTR_TO_STACK like
* R1=fp-8 and R2=fp-8, but one of them points to this function stack
* while another to the caller's stack. To differentiate them 'frameno'
@@ -508,7 +514,7 @@ struct bpf_verifier_state {
iter < frame->allocated_stack / BPF_REG_SIZE; \
iter++, reg = bpf_get_spilled_reg(iter, frame, mask))
-#define bpf_for_each_reg_in_vstate_mask(__vst, __state, __reg, __mask, __expr) \
+#define bpf_for_each_reg_in_vstate_mask(__vst, __state, __reg, __stack, __mask, __expr) \
({ \
struct bpf_verifier_state *___vstate = __vst; \
int ___i, ___j; \
@@ -516,6 +522,7 @@ struct bpf_verifier_state {
struct bpf_reg_state *___regs; \
__state = ___vstate->frame[___i]; \
___regs = __state->regs; \
+ __stack = NULL; \
for (___j = 0; ___j < MAX_BPF_REG; ___j++) { \
__reg = &___regs[___j]; \
(void)(__expr); \
@@ -523,14 +530,19 @@ struct bpf_verifier_state {
bpf_for_each_spilled_reg(___j, __state, __reg, __mask) { \
if (!__reg) \
continue; \
+ __stack = &__state->stack[___j]; \
(void)(__expr); \
} \
} \
})
/* Invoke __expr over regsiters in __vst, setting __state and __reg */
-#define bpf_for_each_reg_in_vstate(__vst, __state, __reg, __expr) \
- bpf_for_each_reg_in_vstate_mask(__vst, __state, __reg, 1 << STACK_SPILL, __expr)
+#define bpf_for_each_reg_in_vstate(__vst, __state, __reg, __expr) \
+ ({ \
+ struct bpf_stack_state * ___stack; \
+ bpf_for_each_reg_in_vstate_mask(__vst, __state, __reg, ___stack,\
+ 1 << STACK_SPILL, __expr); \
+ })
/* linked list of verifier states used to prune search */
struct bpf_verifier_state_list {
@@ -1323,6 +1335,7 @@ struct bpf_dynptr_desc {
enum bpf_dynptr_type type;
u32 id;
u32 ref_obj_id;
+ u32 parent_id;
};
struct bpf_kfunc_call_arg_meta {
@@ -1334,6 +1347,7 @@ struct bpf_kfunc_call_arg_meta {
const char *func_name;
/* Out parameters */
u32 ref_obj_id;
+ u32 id;
u8 release_regno;
bool r0_rdonly;
u32 ret_btf_id;
diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c
index 011e4ec25acd..d8dd372e45cd 100644
--- a/kernel/bpf/log.c
+++ b/kernel/bpf/log.c
@@ -667,6 +667,8 @@ static void print_reg_state(struct bpf_verifier_env *env,
verbose(env, "%+d", reg->delta);
if (reg->ref_obj_id)
verbose_a("ref_obj_id=%d", reg->ref_obj_id);
+ if (reg->parent_id)
+ verbose_a("parent_id=%d", reg->parent_id);
if (type_is_non_owning_ref(reg->type))
verbose_a("%s", "non_own_ref");
if (type_is_map_ptr(t)) {
@@ -770,8 +772,6 @@ void print_verifier_state(struct bpf_verifier_env *env, const struct bpf_verifie
verbose_a("id=%d", reg->id);
if (reg->ref_obj_id)
verbose_a("ref_id=%d", reg->ref_obj_id);
- if (reg->dynptr_id)
- verbose_a("dynptr_id=%d", reg->dynptr_id);
verbose(env, ")");
break;
case STACK_ITER:
diff --git a/kernel/bpf/states.c b/kernel/bpf/states.c
index 8478d2c6ed5b..72bd3bcda5fb 100644
--- a/kernel/bpf/states.c
+++ b/kernel/bpf/states.c
@@ -494,7 +494,8 @@ static bool regs_exact(const struct bpf_reg_state *rold,
{
return memcmp(rold, rcur, offsetof(struct bpf_reg_state, id)) == 0 &&
check_ids(rold->id, rcur->id, idmap) &&
- check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap);
+ check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap) &&
+ check_ids(rold->parent_id, rcur->parent_id, idmap);
}
enum exact_level {
@@ -619,7 +620,8 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
range_within(rold, rcur) &&
tnum_in(rold->var_off, rcur->var_off) &&
check_ids(rold->id, rcur->id, idmap) &&
- check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap);
+ check_ids(rold->ref_obj_id, rcur->ref_obj_id, idmap) &&
+ check_ids(rold->parent_id, rcur->parent_id, idmap);
case PTR_TO_PACKET_META:
case PTR_TO_PACKET:
/* We must have at least as much range as the old ptr
@@ -799,7 +801,8 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
cur_reg = &cur->stack[spi].spilled_ptr;
if (old_reg->dynptr.type != cur_reg->dynptr.type ||
old_reg->dynptr.first_slot != cur_reg->dynptr.first_slot ||
- !check_ids(old_reg->ref_obj_id, cur_reg->ref_obj_id, idmap))
+ !check_ids(old_reg->ref_obj_id, cur_reg->ref_obj_id, idmap) ||
+ !check_ids(old_reg->parent_id, cur_reg->parent_id, idmap))
return false;
break;
case STACK_ITER:
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 0313b7d5f6c9..908a3af0e7c4 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -201,7 +201,7 @@ struct bpf_verifier_stack_elem {
static int acquire_reference(struct bpf_verifier_env *env, int insn_idx);
static int release_reference_nomark(struct bpf_verifier_state *state, int ref_obj_id);
-static int release_reference(struct bpf_verifier_env *env, int ref_obj_id);
+static int release_reference(struct bpf_verifier_env *env, int id);
static void invalidate_non_owning_refs(struct bpf_verifier_env *env);
static bool in_rbtree_lock_required_cb(struct bpf_verifier_env *env);
static int ref_set_non_owning(struct bpf_verifier_env *env,
@@ -241,6 +241,7 @@ struct bpf_call_arg_meta {
int mem_size;
u64 msize_max_value;
int ref_obj_id;
+ u32 id;
int func_id;
struct btf *btf;
u32 btf_id;
@@ -603,14 +604,14 @@ static enum bpf_type_flag get_dynptr_type_flag(enum bpf_dynptr_type type)
}
}
-static bool dynptr_type_refcounted(enum bpf_dynptr_type type)
+static bool dynptr_type_referenced(enum bpf_dynptr_type type)
{
return type == BPF_DYNPTR_TYPE_RINGBUF || type == BPF_DYNPTR_TYPE_FILE;
}
static void __mark_dynptr_reg(struct bpf_reg_state *reg,
enum bpf_dynptr_type type,
- bool first_slot, int dynptr_id);
+ bool first_slot, int id);
static void mark_dynptr_stack_regs(struct bpf_verifier_env *env,
@@ -635,11 +636,12 @@ static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
struct bpf_func_state *state, int spi);
static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
- enum bpf_arg_type arg_type, int insn_idx, int clone_ref_obj_id)
+ enum bpf_arg_type arg_type, int insn_idx, int parent_id,
+ struct bpf_dynptr_desc *dynptr)
{
struct bpf_func_state *state = bpf_func(env, reg);
+ int spi, i, err, ref_obj_id = 0;
enum bpf_dynptr_type type;
- int spi, i, err;
spi = dynptr_get_spi(env, reg);
if (spi < 0)
@@ -673,82 +675,56 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
mark_dynptr_stack_regs(env, &state->stack[spi].spilled_ptr,
&state->stack[spi - 1].spilled_ptr, type);
- if (dynptr_type_refcounted(type)) {
- /* The id is used to track proper releasing */
- int id;
-
- if (clone_ref_obj_id)
- id = clone_ref_obj_id;
- else
- id = acquire_reference(env, insn_idx);
-
- if (id < 0)
- return id;
-
- state->stack[spi].spilled_ptr.ref_obj_id = id;
- state->stack[spi - 1].spilled_ptr.ref_obj_id = id;
+ if (dynptr->type == BPF_DYNPTR_TYPE_INVALID) { /* dynptr constructors */
+ if (dynptr_type_referenced(type)) {
+ ref_obj_id = acquire_reference(env, insn_idx);
+ if (ref_obj_id < 0)
+ return ref_obj_id;
+ }
+ } else { /* bpf_dynptr_clone() */
+ ref_obj_id = dynptr->ref_obj_id;
+ parent_id = dynptr->parent_id;
}
+ state->stack[spi].spilled_ptr.ref_obj_id = ref_obj_id;
+ state->stack[spi - 1].spilled_ptr.ref_obj_id = ref_obj_id;
+ state->stack[spi].spilled_ptr.parent_id = parent_id;
+ state->stack[spi - 1].spilled_ptr.parent_id = parent_id;
+
return 0;
}
-static void invalidate_dynptr(struct bpf_verifier_env *env, struct bpf_func_state *state, int spi)
+static void invalidate_dynptr(struct bpf_verifier_env *env, struct bpf_func_state *state,
+ struct bpf_stack_state *stack)
{
int i;
for (i = 0; i < BPF_REG_SIZE; i++) {
- state->stack[spi].slot_type[i] = STACK_INVALID;
- state->stack[spi - 1].slot_type[i] = STACK_INVALID;
+ stack[0].slot_type[i] = STACK_INVALID;
+ stack[1].slot_type[i] = STACK_INVALID;
}
- bpf_mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
- bpf_mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
+ bpf_mark_reg_not_init(env, &stack[0].spilled_ptr);
+ bpf_mark_reg_not_init(env, &stack[1].spilled_ptr);
}
static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
{
struct bpf_func_state *state = bpf_func(env, reg);
- int spi, ref_obj_id, i;
+ int spi;
spi = dynptr_get_spi(env, reg);
if (spi < 0)
return spi;
- if (!dynptr_type_refcounted(state->stack[spi].spilled_ptr.dynptr.type)) {
- invalidate_dynptr(env, state, spi);
- return 0;
- }
-
- ref_obj_id = state->stack[spi].spilled_ptr.ref_obj_id;
-
- /* If the dynptr has a ref_obj_id, then we need to invalidate
- * two things:
- *
- * 1) Any dynptrs with a matching ref_obj_id (clones)
- * 2) Any slices derived from this dynptr.
+ /*
+ * For referenced dynptr, the clones share the same ref_obj_id and will be
+ * invalidated too. For non-referenced dynptr, only the dynptr and slices
+ * derived from it will be invalidated.
*/
-
- /* Invalidate any slices associated with this dynptr */
- WARN_ON_ONCE(release_reference(env, ref_obj_id));
-
- /* Invalidate any dynptr clones */
- for (i = 1; i < state->allocated_stack / BPF_REG_SIZE; i++) {
- if (state->stack[i].spilled_ptr.ref_obj_id != ref_obj_id)
- continue;
-
- /* it should always be the case that if the ref obj id
- * matches then the stack slot also belongs to a
- * dynptr
- */
- if (state->stack[i].slot_type[0] != STACK_DYNPTR) {
- verifier_bug(env, "misconfigured ref_obj_id");
- return -EFAULT;
- }
- if (state->stack[i].spilled_ptr.dynptr.first_slot)
- invalidate_dynptr(env, state, i);
- }
-
- return 0;
+ reg = &state->stack[spi].spilled_ptr;
+ return release_reference(env, dynptr_type_referenced(reg->dynptr.type) ?
+ reg->ref_obj_id : reg->id);
}
static void __mark_reg_unknown(const struct bpf_verifier_env *env,
@@ -765,10 +741,6 @@ static void mark_reg_invalid(const struct bpf_verifier_env *env, struct bpf_reg_
static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
struct bpf_func_state *state, int spi)
{
- struct bpf_func_state *fstate;
- struct bpf_reg_state *dreg;
- int i, dynptr_id;
-
/* We always ensure that STACK_DYNPTR is never set partially,
* hence just checking for slot_type[0] is enough. This is
* different for STACK_SPILL, where it may be only set for
@@ -781,9 +753,9 @@ static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
if (!state->stack[spi].spilled_ptr.dynptr.first_slot)
spi = spi + 1;
- if (dynptr_type_refcounted(state->stack[spi].spilled_ptr.dynptr.type)) {
+ if (dynptr_type_referenced(state->stack[spi].spilled_ptr.dynptr.type)) {
int ref_obj_id = state->stack[spi].spilled_ptr.ref_obj_id;
- int ref_cnt = 0;
+ int i, ref_cnt = 0;
/*
* A referenced dynptr can be overwritten only if there is at
@@ -808,29 +780,8 @@ static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
mark_stack_slot_scratched(env, spi);
mark_stack_slot_scratched(env, spi - 1);
- /* Writing partially to one dynptr stack slot destroys both. */
- for (i = 0; i < BPF_REG_SIZE; i++) {
- state->stack[spi].slot_type[i] = STACK_INVALID;
- state->stack[spi - 1].slot_type[i] = STACK_INVALID;
- }
-
- dynptr_id = state->stack[spi].spilled_ptr.id;
- /* Invalidate any slices associated with this dynptr */
- bpf_for_each_reg_in_vstate(env->cur_state, fstate, dreg, ({
- /* Dynptr slices are only PTR_TO_MEM_OR_NULL and PTR_TO_MEM */
- if (dreg->type != (PTR_TO_MEM | PTR_MAYBE_NULL) && dreg->type != PTR_TO_MEM)
- continue;
- if (dreg->dynptr_id == dynptr_id)
- mark_reg_invalid(env, dreg);
- }));
-
- /* Do not release reference state, we are destroying dynptr on stack,
- * not using some helper to release it. Just reset register.
- */
- bpf_mark_reg_not_init(env, &state->stack[spi].spilled_ptr);
- bpf_mark_reg_not_init(env, &state->stack[spi - 1].spilled_ptr);
-
- return 0;
+ /* Invalidate the dynptr and any derived slices */
+ return release_reference(env, state->stack[spi].spilled_ptr.id);
}
static bool is_dynptr_reg_valid_uninit(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
@@ -1449,15 +1400,15 @@ static void release_reference_state(struct bpf_verifier_state *state, int idx)
return;
}
-static bool find_reference_state(struct bpf_verifier_state *state, int ptr_id)
+static struct bpf_reference_state *find_reference_state(struct bpf_verifier_state *state, int ptr_id)
{
int i;
for (i = 0; i < state->acquired_refs; i++)
if (state->refs[i].id == ptr_id)
- return true;
+ return &state->refs[i];
- return false;
+ return NULL;
}
static int release_lock_state(struct bpf_verifier_state *state, int type, int id, void *ptr)
@@ -1764,6 +1715,7 @@ static void __mark_reg_known(struct bpf_reg_state *reg, u64 imm)
offsetof(struct bpf_reg_state, var_off) - sizeof(reg->type));
reg->id = 0;
reg->ref_obj_id = 0;
+ reg->parent_id = 0;
___mark_reg_known(reg, imm);
}
@@ -1801,7 +1753,7 @@ static void mark_reg_known_zero(struct bpf_verifier_env *env,
}
static void __mark_dynptr_reg(struct bpf_reg_state *reg, enum bpf_dynptr_type type,
- bool first_slot, int dynptr_id)
+ bool first_slot, int id)
{
/* reg->type has no meaning for STACK_DYNPTR, but when we set reg for
* callback arguments, it does need to be CONST_PTR_TO_DYNPTR, so simply
@@ -1810,7 +1762,7 @@ static void __mark_dynptr_reg(struct bpf_reg_state *reg, enum bpf_dynptr_type ty
__mark_reg_known_zero(reg);
reg->type = CONST_PTR_TO_DYNPTR;
/* Give each dynptr a unique id to uniquely associate slices to it. */
- reg->id = dynptr_id;
+ reg->id = id;
reg->dynptr.type = type;
reg->dynptr.first_slot = first_slot;
}
@@ -2451,6 +2403,7 @@ void bpf_mark_reg_unknown_imprecise(struct bpf_reg_state *reg)
reg->type = SCALAR_VALUE;
reg->id = 0;
reg->ref_obj_id = 0;
+ reg->parent_id = 0;
reg->var_off = tnum_unknown;
reg->frameno = 0;
reg->precise = false;
@@ -7427,7 +7380,7 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
* and checked dynamically during runtime.
*/
static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn_idx,
- enum bpf_arg_type arg_type, int clone_ref_obj_id,
+ enum bpf_arg_type arg_type, int parent_id,
struct bpf_dynptr_desc *dynptr)
{
struct bpf_reg_state *reg = reg_state(env, regno);
@@ -7470,7 +7423,8 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
return err;
}
- err = mark_stack_slots_dynptr(env, reg, arg_type, insn_idx, clone_ref_obj_id);
+ err = mark_stack_slots_dynptr(env, reg, arg_type, insn_idx, parent_id,
+ dynptr);
} else /* OBJ_RELEASE and None case from above */ {
/* For the reg->type == PTR_TO_STACK case, bpf_dynptr is never const */
if (reg->type == CONST_PTR_TO_DYNPTR && (arg_type & OBJ_RELEASE)) {
@@ -7507,6 +7461,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
dynptr->id = reg->id;
dynptr->type = reg->dynptr.type;
dynptr->ref_obj_id = reg->ref_obj_id;
+ dynptr->parent_id = reg->parent_id;
}
}
return err;
@@ -8461,7 +8416,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
*/
if (reg->type == PTR_TO_STACK) {
spi = dynptr_get_spi(env, reg);
- if (spi < 0 || !state->stack[spi].spilled_ptr.ref_obj_id) {
+ if (spi < 0 || !state->stack[spi].spilled_ptr.id) {
verbose(env, "arg %d is an unacquired reference\n", regno);
return -EINVAL;
}
@@ -8489,6 +8444,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
return -EACCES;
}
meta->ref_obj_id = reg->ref_obj_id;
+ meta->id = reg->id;
}
switch (base_type(arg_type)) {
@@ -9111,26 +9067,75 @@ static int release_reference_nomark(struct bpf_verifier_state *state, int ref_ob
return -EINVAL;
}
-/* The pointer with the specified id has released its reference to kernel
- * resources. Identify all copies of the same pointer and clear the reference.
- *
- * This is the release function corresponding to acquire_reference(). Idempotent.
- */
-static int release_reference(struct bpf_verifier_env *env, int ref_obj_id)
+static int idstack_push(struct bpf_idmap *idmap, u32 id)
+{
+ int i;
+
+ if (!id)
+ return 0;
+
+ for (i = 0; i < idmap->cnt; i++)
+ if (idmap->map[i].old == id)
+ return 0;
+
+ if (WARN_ON_ONCE(idmap->cnt >= BPF_ID_MAP_SIZE))
+ return -EFAULT;
+
+ idmap->map[idmap->cnt++].old = id;
+ return 0;
+}
+
+static int idstack_pop(struct bpf_idmap *idmap)
{
+ if (!idmap->cnt)
+ return 0;
+
+ return idmap->map[--idmap->cnt].old;
+}
+
+/* Release id and objects referencing the id iteratively in a DFS manner */
+static int release_reference(struct bpf_verifier_env *env, int id)
+{
+ u32 mask = (1 << STACK_SPILL) | (1 << STACK_DYNPTR);
struct bpf_verifier_state *vstate = env->cur_state;
+ struct bpf_idmap *idstack = &env->idmap_scratch;
+ struct bpf_stack_state *stack;
struct bpf_func_state *state;
struct bpf_reg_state *reg;
- int err;
+ int root_id = id, err;
- err = release_reference_nomark(vstate, ref_obj_id);
- if (err)
- return err;
+ idstack->cnt = 0;
+ idstack_push(idstack, id);
- bpf_for_each_reg_in_vstate(vstate, state, reg, ({
- if (reg->ref_obj_id == ref_obj_id)
- mark_reg_invalid(env, reg);
- }));
+ if (find_reference_state(vstate, id))
+ WARN_ON_ONCE(release_reference_nomark(vstate, id));
+
+ while ((id = idstack_pop(idstack))) {
+ bpf_for_each_reg_in_vstate_mask(vstate, state, reg, stack, mask, ({
+ if (reg->id != id && reg->parent_id != id && reg->ref_obj_id != id)
+ continue;
+
+ if (reg->ref_obj_id && id != root_id) {
+ struct bpf_reference_state *ref_state;
+
+ ref_state = find_reference_state(env->cur_state, reg->ref_obj_id);
+ verbose(env, "Unreleased reference id=%d alloc_insn=%d when releasing id=%d\n",
+ ref_state->id, ref_state->insn_idx, root_id);
+ return -EINVAL;
+ }
+
+ if (reg->id != id) {
+ err = idstack_push(idstack, reg->id);
+ if (err)
+ return err;
+ }
+
+ if (!stack || stack->slot_type[BPF_REG_SIZE - 1] == STACK_SPILL)
+ mark_reg_invalid(env, reg);
+ else if (stack->slot_type[BPF_REG_SIZE - 1] == STACK_DYNPTR)
+ invalidate_dynptr(env, state, stack);
+ }));
+ }
return 0;
}
@@ -10298,11 +10303,8 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
*/
err = 0;
}
- if (err) {
- verbose(env, "func %s#%d reference has not been acquired before\n",
- func_id_name(func_id), func_id);
+ if (err)
return err;
- }
}
switch (func_id) {
@@ -10580,10 +10582,8 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
regs[BPF_REG_0].ref_obj_id = id;
}
- if (func_id == BPF_FUNC_dynptr_data) {
- regs[BPF_REG_0].dynptr_id = meta.dynptr.id;
- regs[BPF_REG_0].ref_obj_id = meta.dynptr.ref_obj_id;
- }
+ if (func_id == BPF_FUNC_dynptr_data)
+ regs[BPF_REG_0].parent_id = meta.dynptr.id;
err = do_refine_retval_range(env, regs, fn->ret_type, func_id, &meta);
if (err)
@@ -12009,6 +12009,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
return -EFAULT;
}
meta->ref_obj_id = reg->ref_obj_id;
+ meta->id = reg->id;
if (is_kfunc_release(meta))
meta->release_regno = regno;
}
@@ -12145,7 +12146,6 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
case KF_ARG_PTR_TO_DYNPTR:
{
enum bpf_arg_type dynptr_arg_type = ARG_PTR_TO_DYNPTR;
- int clone_ref_obj_id = 0;
if (is_kfunc_arg_uninit(btf, &args[i]))
dynptr_arg_type |= MEM_UNINIT;
@@ -12171,15 +12171,10 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
}
dynptr_arg_type |= (unsigned int)get_dynptr_type_flag(parent_type);
- clone_ref_obj_id = meta->dynptr.ref_obj_id;
- if (dynptr_type_refcounted(parent_type) && !clone_ref_obj_id) {
- verifier_bug(env, "missing ref obj id for parent of clone");
- return -EFAULT;
- }
}
- ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type, clone_ref_obj_id,
- &meta->dynptr);
+ ret = process_dynptr_func(env, regno, insn_idx, dynptr_arg_type,
+ meta->ref_obj_id ? meta->id : 0, &meta->dynptr);
if (ret < 0)
return ret;
break;
@@ -12813,12 +12808,7 @@ static int check_special_kfunc(struct bpf_verifier_env *env, struct bpf_kfunc_ca
verifier_bug(env, "no dynptr id");
return -EFAULT;
}
- regs[BPF_REG_0].dynptr_id = meta->dynptr.id;
-
- /* we don't need to set BPF_REG_0's ref obj id
- * because packet slices are not refcounted (see
- * dynptr_type_refcounted)
- */
+ regs[BPF_REG_0].parent_id = meta->dynptr.id;
} else {
return 0;
}
@@ -12953,6 +12943,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
if (rcu_lock) {
env->cur_state->active_rcu_locks++;
} else if (rcu_unlock) {
+ struct bpf_stack_state *stack;
struct bpf_func_state *state;
struct bpf_reg_state *reg;
u32 clear_mask = (1 << STACK_SPILL) | (1 << STACK_ITER);
@@ -12962,7 +12953,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
return -EINVAL;
}
if (--env->cur_state->active_rcu_locks == 0) {
- bpf_for_each_reg_in_vstate_mask(env->cur_state, state, reg, clear_mask, ({
+ bpf_for_each_reg_in_vstate_mask(env->cur_state, state, reg, stack, clear_mask, ({
if (reg->type & MEM_RCU) {
reg->type &= ~(MEM_RCU | PTR_MAYBE_NULL);
reg->type |= PTR_UNTRUSTED;
@@ -13005,9 +12996,6 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
err = unmark_stack_slots_dynptr(env, reg);
} else {
err = release_reference(env, reg->ref_obj_id);
- if (err)
- verbose(env, "kfunc %s#%d reference has not been acquired before\n",
- func_name, meta.func_id);
}
if (err)
return err;
@@ -13024,7 +13012,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
return err;
}
- err = release_reference(env, release_ref_obj_id);
+ err = release_reference_nomark(env->cur_state, release_ref_obj_id);
if (err) {
verbose(env, "kfunc %s#%d reference has not been acquired before\n",
func_name, meta.func_id);
@@ -13114,7 +13102,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
/* Ensures we don't access the memory after a release_reference() */
if (meta.ref_obj_id)
- regs[BPF_REG_0].ref_obj_id = meta.ref_obj_id;
+ regs[BPF_REG_0].parent_id = meta.ref_obj_id;
if (is_kfunc_rcu_protected(&meta))
regs[BPF_REG_0].type |= MEM_RCU;
--
2.52.0
* [PATCH bpf-next v3 5/9] bpf: Remove redundant dynptr arg check for helper
2026-04-21 22:10 [PATCH bpf-next v3 0/9] Refactor verifier object relationship tracking Amery Hung
` (3 preceding siblings ...)
2026-04-21 22:10 ` [PATCH bpf-next v3 4/9] bpf: Refactor object relationship tracking and fix dynptr UAF bug Amery Hung
@ 2026-04-21 22:10 ` Amery Hung
2026-04-21 22:10 ` [PATCH bpf-next v3 6/9] selftests/bpf: Test creating dynptr from dynptr data and slice Amery Hung
` (3 subsequent siblings)
8 siblings, 0 replies; 12+ messages in thread
From: Amery Hung @ 2026-04-21 22:10 UTC (permalink / raw)
To: bpf
Cc: netdev, alexei.starovoitov, andrii, daniel, eddyz87, memxor,
martin.lau, mykyta.yatsenko5, ameryhung, kernel-team
unmark_stack_slots_dynptr() already makes sure that CONST_PTR_TO_DYNPTR
cannot be released. process_dynptr_func() also prevents passing an
uninitialized dynptr to helpers expecting an initialized dynptr. Now
that unmark_stack_slots_dynptr() also returns errors from
release_reference(), there is no reason to keep these redundant checks.
Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
kernel/bpf/verifier.c | 21 +------------------
.../testing/selftests/bpf/progs/dynptr_fail.c | 6 +++---
.../selftests/bpf/progs/user_ringbuf_fail.c | 4 ++--
3 files changed, 6 insertions(+), 25 deletions(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 908a3af0e7c4..3ab9bc2fe0e3 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -8405,26 +8405,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
skip_type_check:
if (arg_type_is_release(arg_type)) {
- if (arg_type_is_dynptr(arg_type)) {
- struct bpf_func_state *state = bpf_func(env, reg);
- int spi;
-
- /* Only dynptr created on stack can be released, thus
- * the get_spi and stack state checks for spilled_ptr
- * should only be done before process_dynptr_func for
- * PTR_TO_STACK.
- */
- if (reg->type == PTR_TO_STACK) {
- spi = dynptr_get_spi(env, reg);
- if (spi < 0 || !state->stack[spi].spilled_ptr.id) {
- verbose(env, "arg %d is an unacquired reference\n", regno);
- return -EINVAL;
- }
- } else {
- verbose(env, "cannot release unowned const bpf_dynptr\n");
- return -EINVAL;
- }
- } else if (!reg->ref_obj_id && !bpf_register_is_null(reg)) {
+ if (!arg_type_is_dynptr(arg_type) && !reg->ref_obj_id && !bpf_register_is_null(reg)) {
verbose(env, "R%d must be referenced when passed to release function\n",
regno);
return -EINVAL;
diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c
index b62773ce5219..b5fbc9b5c484 100644
--- a/tools/testing/selftests/bpf/progs/dynptr_fail.c
+++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c
@@ -136,7 +136,7 @@ int ringbuf_missing_release_callback(void *ctx)
/* Can't call bpf_ringbuf_submit/discard_dynptr on a non-initialized dynptr */
SEC("?raw_tp")
-__failure __msg("arg 1 is an unacquired reference")
+__failure __msg("Expected an initialized dynptr as arg #0")
int ringbuf_release_uninit_dynptr(void *ctx)
{
struct bpf_dynptr ptr;
@@ -650,7 +650,7 @@ int invalid_offset(void *ctx)
/* Can't release a dynptr twice */
SEC("?raw_tp")
-__failure __msg("arg 1 is an unacquired reference")
+__failure __msg("Expected an initialized dynptr as arg #0")
int release_twice(void *ctx)
{
struct bpf_dynptr ptr;
@@ -677,7 +677,7 @@ static int release_twice_callback_fn(__u32 index, void *data)
* within a callback function, fails
*/
SEC("?raw_tp")
-__failure __msg("arg 1 is an unacquired reference")
+__failure __msg("Expected an initialized dynptr as arg #0")
int release_twice_callback(void *ctx)
{
struct bpf_dynptr ptr;
diff --git a/tools/testing/selftests/bpf/progs/user_ringbuf_fail.c b/tools/testing/selftests/bpf/progs/user_ringbuf_fail.c
index 54de0389f878..c0d0422b8030 100644
--- a/tools/testing/selftests/bpf/progs/user_ringbuf_fail.c
+++ b/tools/testing/selftests/bpf/progs/user_ringbuf_fail.c
@@ -146,7 +146,7 @@ try_discard_dynptr(struct bpf_dynptr *dynptr, void *context)
* not be able to read past the end of the pointer.
*/
SEC("?raw_tp")
-__failure __msg("cannot release unowned const bpf_dynptr")
+__failure __msg("CONST_PTR_TO_DYNPTR cannot be released")
int user_ringbuf_callback_discard_dynptr(void *ctx)
{
bpf_user_ringbuf_drain(&user_ringbuf, try_discard_dynptr, NULL, 0);
@@ -166,7 +166,7 @@ try_submit_dynptr(struct bpf_dynptr *dynptr, void *context)
* not be able to read past the end of the pointer.
*/
SEC("?raw_tp")
-__failure __msg("cannot release unowned const bpf_dynptr")
+__failure __msg("CONST_PTR_TO_DYNPTR cannot be released")
int user_ringbuf_callback_submit_dynptr(void *ctx)
{
bpf_user_ringbuf_drain(&user_ringbuf, try_submit_dynptr, NULL, 0);
--
2.52.0
* [PATCH bpf-next v3 6/9] selftests/bpf: Test creating dynptr from dynptr data and slice
2026-04-21 22:10 [PATCH bpf-next v3 0/9] Refactor verifier object relationship tracking Amery Hung
` (4 preceding siblings ...)
2026-04-21 22:10 ` [PATCH bpf-next v3 5/9] bpf: Remove redundant dynptr arg check for helper Amery Hung
@ 2026-04-21 22:10 ` Amery Hung
2026-04-21 22:10 ` [PATCH bpf-next v3 7/9] selftests/bpf: Test using dynptr after freeing the underlying object Amery Hung
` (2 subsequent siblings)
8 siblings, 0 replies; 12+ messages in thread
From: Amery Hung @ 2026-04-21 22:10 UTC (permalink / raw)
To: bpf
Cc: netdev, alexei.starovoitov, andrii, daniel, eddyz87, memxor,
martin.lau, mykyta.yatsenko5, ameryhung, kernel-team
The verifier currently does not allow creating a dynptr from dynptr
data or a dynptr slice. Add selftests to check this explicitly.
Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
.../testing/selftests/bpf/progs/dynptr_fail.c | 42 +++++++++++++++++++
1 file changed, 42 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c
index b5fbc9b5c484..43beb70f50ee 100644
--- a/tools/testing/selftests/bpf/progs/dynptr_fail.c
+++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c
@@ -705,6 +705,48 @@ int dynptr_from_mem_invalid_api(void *ctx)
return 0;
}
+/* Cannot create dynptr from dynptr data */
+SEC("?raw_tp")
+__failure __msg("Unsupported reg type mem for bpf_dynptr_from_mem data")
+int dynptr_from_dynptr_data(void *ctx)
+{
+ struct bpf_dynptr ptr, ptr2;
+ __u8 *data;
+
+ if (get_map_val_dynptr(&ptr))
+ return 0;
+
+ data = bpf_dynptr_data(&ptr, 0, sizeof(__u32));
+ if (!data)
+ return 0;
+
+ /* this should fail */
+ bpf_dynptr_from_mem(data, sizeof(__u32), 0, &ptr2);
+
+ return 0;
+}
+
+/* Cannot create dynptr from dynptr slice */
+SEC("?tc")
+__failure __msg("Unsupported reg type mem for bpf_dynptr_from_mem data")
+int dynptr_from_dynptr_slice(struct __sk_buff *skb)
+{
+ struct bpf_dynptr ptr, ptr2;
+ struct ethhdr *hdr;
+ char buffer[sizeof(*hdr)] = {};
+
+ bpf_dynptr_from_skb(skb, 0, &ptr);
+
+ hdr = bpf_dynptr_slice_rdwr(&ptr, 0, buffer, sizeof(buffer));
+ if (!hdr)
+ return SK_DROP;
+
+ /* this should fail */
+ bpf_dynptr_from_mem(hdr, sizeof(*hdr), 0, &ptr2);
+
+ return SK_PASS;
+}
+
SEC("?tc")
__failure __msg("cannot overwrite referenced dynptr") __log_level(2)
int dynptr_pruning_overwrite(struct __sk_buff *ctx)
--
2.52.0
* [PATCH bpf-next v3 7/9] selftests/bpf: Test using dynptr after freeing the underlying object
2026-04-21 22:10 [PATCH bpf-next v3 0/9] Refactor verifier object relationship tracking Amery Hung
` (5 preceding siblings ...)
2026-04-21 22:10 ` [PATCH bpf-next v3 6/9] selftests/bpf: Test creating dynptr from dynptr data and slice Amery Hung
@ 2026-04-21 22:10 ` Amery Hung
2026-04-21 22:10 ` [PATCH bpf-next v3 8/9] selftests/bpf: Test using slice after invalidating dynptr clone Amery Hung
2026-04-21 22:10 ` [PATCH bpf-next v3 9/9] selftests/bpf: Test using file dynptr after the reference on file is dropped Amery Hung
8 siblings, 0 replies; 12+ messages in thread
From: Amery Hung @ 2026-04-21 22:10 UTC (permalink / raw)
To: bpf
Cc: netdev, alexei.starovoitov, andrii, daniel, eddyz87, memxor,
martin.lau, mykyta.yatsenko5, ameryhung, kernel-team
Make sure the verifier invalidates the dynptr and dynptr slice derived
from an skb after the skb is freed.
Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
.../selftests/bpf/prog_tests/bpf_qdisc.c | 6 ++
.../progs/bpf_qdisc_fail__invalid_dynptr.c | 68 +++++++++++++++++
...f_qdisc_fail__invalid_dynptr_cross_frame.c | 74 +++++++++++++++++++
.../bpf_qdisc_fail__invalid_dynptr_slice.c | 70 ++++++++++++++++++
4 files changed, 218 insertions(+)
create mode 100644 tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr.c
create mode 100644 tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_cross_frame.c
create mode 100644 tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_slice.c
diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c b/tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c
index 730357cd0c9a..65277c8fc887 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c
@@ -8,6 +8,9 @@
#include "bpf_qdisc_fifo.skel.h"
#include "bpf_qdisc_fq.skel.h"
#include "bpf_qdisc_fail__incompl_ops.skel.h"
+#include "bpf_qdisc_fail__invalid_dynptr.skel.h"
+#include "bpf_qdisc_fail__invalid_dynptr_slice.skel.h"
+#include "bpf_qdisc_fail__invalid_dynptr_cross_frame.skel.h"
#define LO_IFINDEX 1
@@ -223,6 +226,9 @@ void test_ns_bpf_qdisc(void)
test_qdisc_attach_to_non_root();
if (test__start_subtest("incompl_ops"))
test_incompl_ops();
+ RUN_TESTS(bpf_qdisc_fail__invalid_dynptr);
+ RUN_TESTS(bpf_qdisc_fail__invalid_dynptr_cross_frame);
+ RUN_TESTS(bpf_qdisc_fail__invalid_dynptr_slice);
}
void serial_test_bpf_qdisc_default(void)
diff --git a/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr.c b/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr.c
new file mode 100644
index 000000000000..3a20811e3feb
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr.c
@@ -0,0 +1,68 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <vmlinux.h>
+#include "bpf_experimental.h"
+#include "bpf_qdisc_common.h"
+#include "bpf_misc.h"
+
+char _license[] SEC("license") = "GPL";
+
+int proto;
+
+SEC("struct_ops")
+__failure __msg("Expected an initialized dynptr as arg #")
+int BPF_PROG(invalid_dynptr, struct sk_buff *skb, struct Qdisc *sch,
+ struct bpf_sk_buff_ptr *to_free)
+{
+ struct bpf_dynptr ptr;
+ struct ethhdr *hdr;
+
+ bpf_dynptr_from_skb((struct __sk_buff *)skb, 0, &ptr);
+
+ bpf_qdisc_skb_drop(skb, to_free);
+
+ hdr = bpf_dynptr_slice(&ptr, 0, NULL, sizeof(*hdr));
+ if (!hdr)
+ return NET_XMIT_DROP;
+
+ proto = hdr->h_proto;
+
+ return NET_XMIT_DROP;
+}
+
+SEC("struct_ops")
+__auxiliary
+struct sk_buff *BPF_PROG(bpf_qdisc_test_dequeue, struct Qdisc *sch)
+{
+ return NULL;
+}
+
+SEC("struct_ops")
+__auxiliary
+int BPF_PROG(bpf_qdisc_test_init, struct Qdisc *sch, struct nlattr *opt,
+ struct netlink_ext_ack *extack)
+{
+ return 0;
+}
+
+SEC("struct_ops")
+__auxiliary
+void BPF_PROG(bpf_qdisc_test_reset, struct Qdisc *sch)
+{
+}
+
+SEC("struct_ops")
+__auxiliary
+void BPF_PROG(bpf_qdisc_test_destroy, struct Qdisc *sch)
+{
+}
+
+SEC(".struct_ops")
+struct Qdisc_ops test = {
+ .enqueue = (void *)invalid_dynptr,
+ .dequeue = (void *)bpf_qdisc_test_dequeue,
+ .init = (void *)bpf_qdisc_test_init,
+ .reset = (void *)bpf_qdisc_test_reset,
+ .destroy = (void *)bpf_qdisc_test_destroy,
+ .id = "bpf_qdisc_test",
+};
diff --git a/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_cross_frame.c b/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_cross_frame.c
new file mode 100644
index 000000000000..2e23b8593af9
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_cross_frame.c
@@ -0,0 +1,74 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <vmlinux.h>
+#include "bpf_experimental.h"
+#include "bpf_qdisc_common.h"
+#include "bpf_misc.h"
+
+char _license[] SEC("license") = "GPL";
+
+int proto;
+
+static __noinline int free_skb(struct sk_buff *skb)
+{
+ bpf_kfree_skb(skb);
+ return 0;
+}
+
+SEC("struct_ops")
+__failure __msg("invalid mem access 'scalar'")
+int BPF_PROG(invalid_dynptr_cross_frame, struct sk_buff *skb, struct Qdisc *sch,
+ struct bpf_sk_buff_ptr *to_free)
+{
+ struct bpf_dynptr ptr;
+ struct ethhdr *hdr;
+
+ bpf_dynptr_from_skb((struct __sk_buff *)skb, 0, &ptr);
+
+ hdr = bpf_dynptr_slice(&ptr, 0, NULL, sizeof(*hdr));
+ if (!hdr)
+ return NET_XMIT_DROP;
+
+ free_skb(skb);
+
+ proto = hdr->h_proto;
+
+ return NET_XMIT_DROP;
+}
+
+SEC("struct_ops")
+__auxiliary
+struct sk_buff *BPF_PROG(bpf_qdisc_test_dequeue, struct Qdisc *sch)
+{
+ return NULL;
+}
+
+SEC("struct_ops")
+__auxiliary
+int BPF_PROG(bpf_qdisc_test_init, struct Qdisc *sch, struct nlattr *opt,
+ struct netlink_ext_ack *extack)
+{
+ return 0;
+}
+
+SEC("struct_ops")
+__auxiliary
+void BPF_PROG(bpf_qdisc_test_reset, struct Qdisc *sch)
+{
+}
+
+SEC("struct_ops")
+__auxiliary
+void BPF_PROG(bpf_qdisc_test_destroy, struct Qdisc *sch)
+{
+}
+
+SEC(".struct_ops")
+struct Qdisc_ops test = {
+ .enqueue = (void *)invalid_dynptr_cross_frame,
+ .dequeue = (void *)bpf_qdisc_test_dequeue,
+ .init = (void *)bpf_qdisc_test_init,
+ .reset = (void *)bpf_qdisc_test_reset,
+ .destroy = (void *)bpf_qdisc_test_destroy,
+ .id = "bpf_qdisc_test",
+};
diff --git a/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_slice.c b/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_slice.c
new file mode 100644
index 000000000000..731216c4e45a
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_slice.c
@@ -0,0 +1,70 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <vmlinux.h>
+#include "bpf_experimental.h"
+#include "bpf_qdisc_common.h"
+#include "bpf_misc.h"
+
+char _license[] SEC("license") = "GPL";
+
+int proto;
+
+SEC("struct_ops")
+__failure __msg("invalid mem access 'scalar'")
+int BPF_PROG(invalid_dynptr_slice, struct sk_buff *skb, struct Qdisc *sch,
+ struct bpf_sk_buff_ptr *to_free)
+{
+ struct bpf_dynptr ptr;
+ struct ethhdr *hdr;
+
+ bpf_dynptr_from_skb((struct __sk_buff *)skb, 0, &ptr);
+
+ hdr = bpf_dynptr_slice(&ptr, 0, NULL, sizeof(*hdr));
+ if (!hdr) {
+ bpf_qdisc_skb_drop(skb, to_free);
+ return NET_XMIT_DROP;
+ }
+
+ bpf_qdisc_skb_drop(skb, to_free);
+
+ proto = hdr->h_proto;
+
+ return NET_XMIT_DROP;
+}
+
+SEC("struct_ops")
+__auxiliary
+struct sk_buff *BPF_PROG(bpf_qdisc_test_dequeue, struct Qdisc *sch)
+{
+ return NULL;
+}
+
+SEC("struct_ops")
+__auxiliary
+int BPF_PROG(bpf_qdisc_test_init, struct Qdisc *sch, struct nlattr *opt,
+ struct netlink_ext_ack *extack)
+{
+ return 0;
+}
+
+SEC("struct_ops")
+__auxiliary
+void BPF_PROG(bpf_qdisc_test_reset, struct Qdisc *sch)
+{
+}
+
+SEC("struct_ops")
+__auxiliary
+void BPF_PROG(bpf_qdisc_test_destroy, struct Qdisc *sch)
+{
+}
+
+SEC(".struct_ops")
+struct Qdisc_ops test = {
+ .enqueue = (void *)invalid_dynptr_slice,
+ .dequeue = (void *)bpf_qdisc_test_dequeue,
+ .init = (void *)bpf_qdisc_test_init,
+ .reset = (void *)bpf_qdisc_test_reset,
+ .destroy = (void *)bpf_qdisc_test_destroy,
+ .id = "bpf_qdisc_test",
+};
--
2.52.0
* [PATCH bpf-next v3 8/9] selftests/bpf: Test using slice after invalidating dynptr clone
2026-04-21 22:10 [PATCH bpf-next v3 0/9] Refactor verifier object relationship tracking Amery Hung
` (6 preceding siblings ...)
2026-04-21 22:10 ` [PATCH bpf-next v3 7/9] selftests/bpf: Test using dynptr after freeing the underlying object Amery Hung
@ 2026-04-21 22:10 ` Amery Hung
2026-04-21 22:10 ` [PATCH bpf-next v3 9/9] selftests/bpf: Test using file dynptr after the reference on file is dropped Amery Hung
8 siblings, 0 replies; 12+ messages in thread
From: Amery Hung @ 2026-04-21 22:10 UTC (permalink / raw)
To: bpf
Cc: netdev, alexei.starovoitov, andrii, daniel, eddyz87, memxor,
martin.lau, mykyta.yatsenko5, ameryhung, kernel-team
The parent object of a cloned dynptr is the skb, not the original dynptr.
Invalidating the original dynptr should not prevent the program from
using the slice derived from the cloned dynptr.
Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
.../selftests/bpf/prog_tests/bpf_qdisc.c | 2 +
..._qdisc_dynptr_use_after_invalidate_clone.c | 75 +++++++++++++++++++
2 files changed, 77 insertions(+)
create mode 100644 tools/testing/selftests/bpf/progs/bpf_qdisc_dynptr_use_after_invalidate_clone.c
diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c b/tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c
index 65277c8fc887..77f1c0550c9b 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c
@@ -11,6 +11,7 @@
#include "bpf_qdisc_fail__invalid_dynptr.skel.h"
#include "bpf_qdisc_fail__invalid_dynptr_slice.skel.h"
#include "bpf_qdisc_fail__invalid_dynptr_cross_frame.skel.h"
+#include "bpf_qdisc_dynptr_use_after_invalidate_clone.skel.h"
#define LO_IFINDEX 1
@@ -229,6 +230,7 @@ void test_ns_bpf_qdisc(void)
RUN_TESTS(bpf_qdisc_fail__invalid_dynptr);
RUN_TESTS(bpf_qdisc_fail__invalid_dynptr_cross_frame);
RUN_TESTS(bpf_qdisc_fail__invalid_dynptr_slice);
+ RUN_TESTS(bpf_qdisc_dynptr_use_after_invalidate_clone);
}
void serial_test_bpf_qdisc_default(void)
diff --git a/tools/testing/selftests/bpf/progs/bpf_qdisc_dynptr_use_after_invalidate_clone.c b/tools/testing/selftests/bpf/progs/bpf_qdisc_dynptr_use_after_invalidate_clone.c
new file mode 100644
index 000000000000..cca2accf081d
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/bpf_qdisc_dynptr_use_after_invalidate_clone.c
@@ -0,0 +1,75 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <vmlinux.h>
+#include "bpf_experimental.h"
+#include "bpf_qdisc_common.h"
+#include "bpf_misc.h"
+
+char _license[] SEC("license") = "GPL";
+
+int proto;
+
+SEC("struct_ops")
+__success
+int BPF_PROG(dynptr_use_after_invalidate_clone, struct sk_buff *skb, struct Qdisc *sch,
+ struct bpf_sk_buff_ptr *to_free)
+{
+ struct bpf_dynptr ptr, ptr_clone;
+ struct ethhdr *hdr;
+
+ bpf_dynptr_from_skb((struct __sk_buff *)skb, 0, &ptr);
+
+ bpf_dynptr_clone(&ptr, &ptr_clone);
+
+ hdr = bpf_dynptr_slice(&ptr_clone, 0, NULL, sizeof(*hdr));
+ if (!hdr) {
+ bpf_qdisc_skb_drop(skb, to_free);
+ return NET_XMIT_DROP;
+ }
+
+ *(int *)&ptr = 0;
+
+ proto = hdr->h_proto;
+
+ bpf_qdisc_skb_drop(skb, to_free);
+
+ return NET_XMIT_DROP;
+}
+
+SEC("struct_ops")
+__auxiliary
+struct sk_buff *BPF_PROG(bpf_qdisc_test_dequeue, struct Qdisc *sch)
+{
+ return NULL;
+}
+
+SEC("struct_ops")
+__auxiliary
+int BPF_PROG(bpf_qdisc_test_init, struct Qdisc *sch, struct nlattr *opt,
+ struct netlink_ext_ack *extack)
+{
+ return 0;
+}
+
+SEC("struct_ops")
+__auxiliary
+void BPF_PROG(bpf_qdisc_test_reset, struct Qdisc *sch)
+{
+}
+
+SEC("struct_ops")
+__auxiliary
+void BPF_PROG(bpf_qdisc_test_destroy, struct Qdisc *sch)
+{
+}
+
+SEC(".struct_ops")
+struct Qdisc_ops test = {
+ .enqueue = (void *)dynptr_use_after_invalidate_clone,
+ .dequeue = (void *)bpf_qdisc_test_dequeue,
+ .init = (void *)bpf_qdisc_test_init,
+ .reset = (void *)bpf_qdisc_test_reset,
+ .destroy = (void *)bpf_qdisc_test_destroy,
+ .id = "bpf_qdisc_test",
+};
+
--
2.52.0
* [PATCH bpf-next v3 9/9] selftests/bpf: Test using file dynptr after the reference on file is dropped
2026-04-21 22:10 [PATCH bpf-next v3 0/9] Refactor verifier object relationship tracking Amery Hung
` (7 preceding siblings ...)
2026-04-21 22:10 ` [PATCH bpf-next v3 8/9] selftests/bpf: Test using slice after invalidating dynptr clone Amery Hung
@ 2026-04-21 22:10 ` Amery Hung
8 siblings, 0 replies; 12+ messages in thread
From: Amery Hung @ 2026-04-21 22:10 UTC (permalink / raw)
To: bpf
Cc: netdev, alexei.starovoitov, andrii, daniel, eddyz87, memxor,
martin.lau, mykyta.yatsenko5, ameryhung, kernel-team
A file dynptr and its slices should be invalidated when the reference on
the parent file is dropped in the program. Without the verifier tracking
the dynptr's parent referenced object, the dynptr could continue to be
used incorrectly even after the underlying file has been torn down or
freed.
Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
.../selftests/bpf/progs/file_reader_fail.c | 60 +++++++++++++++++++
1 file changed, 60 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/file_reader_fail.c b/tools/testing/selftests/bpf/progs/file_reader_fail.c
index 32fe28ed2439..a7102737abfe 100644
--- a/tools/testing/selftests/bpf/progs/file_reader_fail.c
+++ b/tools/testing/selftests/bpf/progs/file_reader_fail.c
@@ -50,3 +50,63 @@ int xdp_no_dynptr_type(struct xdp_md *xdp)
bpf_dynptr_file_discard(&dynptr);
return 0;
}
+
+SEC("lsm/file_open")
+__failure
+__msg("Expected an initialized dynptr as arg #2")
+int use_file_dynptr_after_put_file(void *ctx)
+{
+ struct task_struct *task = bpf_get_current_task_btf();
+ struct file *file = bpf_get_task_exe_file(task);
+ struct bpf_dynptr dynptr;
+ char buf[64];
+
+ if (!file)
+ return 0;
+
+ if (bpf_dynptr_from_file(file, 0, &dynptr))
+ goto out;
+
+ bpf_put_file(file);
+
+ /* this should fail - dynptr is invalid after file ref is dropped */
+ bpf_dynptr_read(buf, sizeof(buf), &dynptr, 0, 0);
+ return 0;
+
+out:
+ bpf_dynptr_file_discard(&dynptr);
+ bpf_put_file(file);
+ return 0;
+}
+
+SEC("lsm/file_open")
+__failure
+__msg("invalid mem access 'scalar'")
+int use_file_dynptr_slice_after_put_file(void *ctx)
+{
+ struct task_struct *task = bpf_get_current_task_btf();
+ struct file *file = bpf_get_task_exe_file(task);
+ struct bpf_dynptr dynptr;
+ char *data;
+
+ if (!file)
+ return 0;
+
+ if (bpf_dynptr_from_file(file, 0, &dynptr))
+ goto out;
+
+ data = bpf_dynptr_data(&dynptr, 0, 1);
+ if (!data)
+ goto out;
+
+ bpf_put_file(file);
+
+ /* this should fail - data slice is invalid after file ref is dropped */
+ *data = 'x';
+ return 0;
+
+out:
+ bpf_dynptr_file_discard(&dynptr);
+ bpf_put_file(file);
+ return 0;
+}
--
2.52.0
* Re: [PATCH bpf-next v3 3/9] bpf: Preserve reg->id of pointer objects after null-check
2026-04-21 22:10 ` [PATCH bpf-next v3 3/9] bpf: Preserve reg->id of pointer objects after null-check Amery Hung
@ 2026-04-21 22:52 ` bot+bpf-ci
0 siblings, 0 replies; 12+ messages in thread
From: bot+bpf-ci @ 2026-04-21 22:52 UTC (permalink / raw)
To: ameryhung, bpf
Cc: netdev, alexei.starovoitov, andrii, daniel, eddyz87, memxor,
martin.lau, mykyta.yatsenko5, ameryhung, kernel-team, ast, andrii,
daniel, martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai
[-- Attachment #1: Type: text/plain, Size: 1752 bytes --]
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 93003a2a96b0..0313b7d5f6c9 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -15886,15 +15886,10 @@ static void mark_ptr_or_null_reg(struct bpf_func_state *state,
>
> mark_ptr_not_null_reg(reg);
>
> - if (!reg_may_point_to_spin_lock(reg)) {
> - /* For not-NULL ptr, reg->ref_obj_id will be reset
> - * in release_reference().
> - *
> - * reg->id is still used by spin_lock ptr. Other
> - * than spin_lock ptr type, reg->id can be reset.
> - */
> - reg->id = 0;
> - }
> + /*
> + * reg->id is preserved for object relationship tracking
> + * and spin_lock lock state tracking
> + */
How does preserving reg->id interact with reg_is_init_pkt_pointer()?
In v2 review, Alexei Starovoitov raised this concern: "That pointer has
to have id == 0". Eduard Zingerman responded that dropping the id == 0
check should be fine, but this change was not made in v3.
The function reg_is_init_pkt_pointer() still checks reg->id == 0:
return reg->type == which &&
reg->id == 0 &&
reg->off == 0 &&
With this patch, if a PTR_TO_PACKET register goes through null-checking
in mark_ptr_or_null_reg(), it will now preserve its reg->id instead of
resetting it to 0. Would this cause reg_is_init_pkt_pointer() to
incorrectly return false for init packet pointers?
Reference: https://lore.kernel.org/bpf/CAADnVQLCKi_QL1FeybW+6MH6FAOAoRJyW06UvtX3my3yNSH5eQ@mail.gmail.com/
> }
> }
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24749741787
* Re: [PATCH bpf-next v3 1/9] bpf: Unify dynptr handling in the verifier
2026-04-21 22:10 ` [PATCH bpf-next v3 1/9] bpf: Unify dynptr handling in the verifier Amery Hung
@ 2026-04-21 22:52 ` bot+bpf-ci
0 siblings, 0 replies; 12+ messages in thread
From: bot+bpf-ci @ 2026-04-21 22:52 UTC (permalink / raw)
To: ameryhung, bpf
Cc: netdev, alexei.starovoitov, andrii, daniel, eddyz87, memxor,
martin.lau, mykyta.yatsenko5, ameryhung, kernel-team, ast, andrii,
daniel, martin.lau, eddyz87, yonghong.song, clm, ihor.solodrai
[-- Attachment #1: Type: text/plain, Size: 1182 bytes --]
> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> index b148f816f25b..dc0cff59246d 100644
> --- a/include/linux/bpf_verifier.h
> +++ b/include/linux/bpf_verifier.h
> @@ -1319,6 +1319,12 @@ struct bpf_map_desc {
> int uid;
> };
>
> +struct bpf_dynptr_desc {
> + enum bpf_dynptr_type type;
> + u32 id;
> + u32 ref_obj_id;
> +};
> +
This isn't a bug, but as noted by Mykyta Yatsenko in v2 review, the new struct
bpf_dynptr_desc could benefit from a comment explaining what the ref_obj_id
field is for.
While you mentioned in your response that comments for id, ref_obj_id, and
parent_id will be added in a later refactor patch (#6) since these fields
apply to all objects tracked by the verifier, the struct currently lacks this
inline documentation. Could this documentation be added now to help readers
understand the field's purpose?
Reference: https://lore.kernel.org/bpf/87ikb2tkta.fsf@gmail.com/
[ ... ]
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/24749741787
end of thread, other threads:[~2026-04-21 22:52 UTC | newest]
Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-21 22:10 [PATCH bpf-next v3 0/9] Refactor verifier object relationship tracking Amery Hung
2026-04-21 22:10 ` [PATCH bpf-next v3 1/9] bpf: Unify dynptr handling in the verifier Amery Hung
2026-04-21 22:52 ` bot+bpf-ci
2026-04-21 22:10 ` [PATCH bpf-next v3 2/9] bpf: Assign reg->id when getting referenced kptr from ctx Amery Hung
2026-04-21 22:10 ` [PATCH bpf-next v3 3/9] bpf: Preserve reg->id of pointer objects after null-check Amery Hung
2026-04-21 22:52 ` bot+bpf-ci
2026-04-21 22:10 ` [PATCH bpf-next v3 4/9] bpf: Refactor object relationship tracking and fix dynptr UAF bug Amery Hung
2026-04-21 22:10 ` [PATCH bpf-next v3 5/9] bpf: Remove redundant dynptr arg check for helper Amery Hung
2026-04-21 22:10 ` [PATCH bpf-next v3 6/9] selftests/bpf: Test creating dynptr from dynptr data and slice Amery Hung
2026-04-21 22:10 ` [PATCH bpf-next v3 7/9] selftests/bpf: Test using dynptr after freeing the underlying object Amery Hung
2026-04-21 22:10 ` [PATCH bpf-next v3 8/9] selftests/bpf: Test using slice after invalidating dynptr clone Amery Hung
2026-04-21 22:10 ` [PATCH bpf-next v3 9/9] selftests/bpf: Test using file dynptr after the reference on file is dropped Amery Hung
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox