* [RFC PATCH bpf-next v1 1/4] bpf: Fix and refactor object relationship tracking
@ 2026-02-02 21:48 Amery Hung
2026-02-02 21:48 ` [RFC PATCH bpf-next v1 2/4] selftests/bpf: Test creating dynptr from dynptr data and slice Amery Hung
` (4 more replies)
0 siblings, 5 replies; 7+ messages in thread
From: Amery Hung @ 2026-02-02 21:48 UTC (permalink / raw)
To: bpf
Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
ameryhung, kernel-team
This patch (1) fixes the missing link between a dynptr and its parent
referenced object and (2) simplifies object relationship tracking to
allow building more complex object relationships in the future.
This patch makes a dynptr track its parent object. Since bpf qdisc was
introduced, it is possible for an skb to be freed in a qdisc program.
However, because there is no link between the skb dynptr and the skb,
the dynptr is not invalidated after the skb is freed, resulting in a
use-after-free.
In addition, take the chance to refactor object relationship tracking.
Currently, an object shares the ref_obj_id of the object it references.
For a dynptr slice, a separate dynptr_id is used to track its parent.
This design works for now, but after adding the link between skb and
dynptr, it becomes problematic: it cannot correctly handle the removal
of some nodes in the object relationship tree. For example, if dynptr a
is destroyed by overwriting its stack slot, release_reference(1) would
be called and all nodes would be removed. The correct handling should
leave skb, dynptr c, and slice d intact. This was not a problem before
because a dynptr created from an skb had ref_obj_id = 0.
In the future, if we start allowing creating a dynptr from a slice, the
current design also cannot correctly handle the removal of dynptr e:
again, all nodes would be removed instead of only the children of
dynptr e.
Before: object (id,ref_obj_id,dynptr_id)

                        skb (1,1,0)
                         ^ (Link added in this patch)
                         +-------------------------------+
                         |        bpf_dynptr_clone       |
                 dynptr a (2,1,0)               dynptr c (4,1,0)
                         ^                              ^
       bpf_dynptr_slice  |                              |
                         |                              |
                 slice b (3,1,2)                slice d (5,1,4)
                         ^
    bpf_dynptr_from_mem  |
      (NOT allowed yet)  |
                 dynptr e (6,1,0)
The new design removes dynptr_id and uses ref_obj_id to track the id of
the parent object, instead of sharing the ref_obj_id of the root. For
release_reference(), this means the ref_obj_id argument may now also
contain an id that was not acquired through acquire_reference(). Add an
argument "acquired_ref" to indicate this; if false, there is no need to
call release_reference_nomark(). In addition, when iterating the
register state and a reg->ref_obj_id matches the to-be-released
ref_obj_id, other objects referencing reg->id also need to be released.
Finally, since dynptrs reside on the stack, release_reference() also
needs to scan stack slots.
After: object (id,ref_obj_id)

Qdisc:
                        skb (1,1)
                         ^
    bpf_dynptr_from_skb  +-------------------------------+
                         |        bpf_dynptr_clone       |
                 dynptr a (2,1)                 dynptr c (4,1)
                         ^                              ^
       bpf_dynptr_slice  |                              |
                         |                              |
                 slice b (3,2)                  slice d (5,4)
                         ^
    bpf_dynptr_from_mem  |
      (NOT allowed yet)  |
                 dynptr e (6,3)

Referenced dynptr:
   bpf_dynptr_from_file  +---------------------------------+
                         |         bpf_dynptr_clone        |
                 dynptr a (1,1)                 dynptr c (2,1)

Other non-referenced dynptr:
    bpf_dynptr_from_skb  +---------------------------------+
                         |         bpf_dynptr_clone        |
                 dynptr a (1,0)                 dynptr c (2,1)
While this change introduces a potential circular call chain of
release_reference() -> __unmark_stack_slots_dynptr(), we currently do
not allow creating a dynptr from a slice, so the depth of the tree is
capped at three. In the future, when this is allowed, the recursion
depth will still be bounded, as the number of slices is limited by the
number of registers. Besides, mark_ptr_or_null_reg() also needs to
preserve the reg->id of slices.
Note that the new design does not support intermediate nodes acquiring
references. If this is needed in the future, we might need to split
ref_obj_id into ref_obj_id and parent_id to handle it.
CC: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Fixes: 870c28588afa ("bpf: net_sched: Add basic bpf qdisc kfuncs")
Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
This refactor is based on Kumar's work in [0].
[0] https://lore.kernel.org/bpf/20250414161443.1146103-2-memxor@gmail.com/
---
include/linux/bpf_verifier.h | 1 -
kernel/bpf/log.c | 2 -
kernel/bpf/verifier.c | 173 ++++++++++++++++-------------------
3 files changed, 81 insertions(+), 95 deletions(-)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 8355b585cd18..0cb14648a269 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -66,7 +66,6 @@ struct bpf_reg_state {
struct { /* for PTR_TO_MEM | PTR_TO_MEM_OR_NULL */
u32 mem_size;
- u32 dynptr_id; /* for dynptr slices */
};
/* For dynptr stack slots */
diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c
index a0c3b35de2ce..8851adfdb581 100644
--- a/kernel/bpf/log.c
+++ b/kernel/bpf/log.c
@@ -808,8 +808,6 @@ void print_verifier_state(struct bpf_verifier_env *env, const struct bpf_verifie
verbose_a("id=%d", reg->id);
if (reg->ref_obj_id)
verbose_a("ref_id=%d", reg->ref_obj_id);
- if (reg->dynptr_id)
- verbose_a("dynptr_id=%d", reg->dynptr_id);
verbose(env, ")");
break;
case STACK_ITER:
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 6b62b6d57175..a357e0305139 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -204,7 +204,7 @@ struct bpf_verifier_stack_elem {
static int acquire_reference(struct bpf_verifier_env *env, int insn_idx);
static int release_reference_nomark(struct bpf_verifier_state *state, int ref_obj_id);
-static int release_reference(struct bpf_verifier_env *env, int ref_obj_id);
+static int release_reference(struct bpf_verifier_env *env, int ref_obj_id, bool acquired_ref);
static void invalidate_non_owning_refs(struct bpf_verifier_env *env);
static bool in_rbtree_lock_required_cb(struct bpf_verifier_env *env);
static int ref_set_non_owning(struct bpf_verifier_env *env,
@@ -287,7 +287,7 @@ struct bpf_call_arg_meta {
int mem_size;
u64 msize_max_value;
int ref_obj_id;
- int dynptr_id;
+ int id;
int func_id;
struct btf *btf;
u32 btf_id;
@@ -733,7 +733,7 @@ static bool dynptr_type_refcounted(enum bpf_dynptr_type type)
static void __mark_dynptr_reg(struct bpf_reg_state *reg,
enum bpf_dynptr_type type,
- bool first_slot, int dynptr_id);
+ bool first_slot, int id);
static void __mark_reg_not_init(const struct bpf_verifier_env *env,
struct bpf_reg_state *reg);
@@ -764,7 +764,7 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
{
struct bpf_func_state *state = func(env, reg);
enum bpf_dynptr_type type;
- int spi, i, err;
+ int spi, i, err, id;
spi = dynptr_get_spi(env, reg);
if (spi < 0)
@@ -798,22 +798,17 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
mark_dynptr_stack_regs(env, &state->stack[spi].spilled_ptr,
&state->stack[spi - 1].spilled_ptr, type);
- if (dynptr_type_refcounted(type)) {
- /* The id is used to track proper releasing */
- int id;
-
- if (clone_ref_obj_id)
- id = clone_ref_obj_id;
- else
- id = acquire_reference(env, insn_idx);
+ id = clone_ref_obj_id;
+ if (dynptr_type_refcounted(type) && !clone_ref_obj_id) {
+ id = acquire_reference(env, insn_idx);
if (id < 0)
return id;
-
- state->stack[spi].spilled_ptr.ref_obj_id = id;
- state->stack[spi - 1].spilled_ptr.ref_obj_id = id;
}
+ state->stack[spi].spilled_ptr.ref_obj_id = id;
+ state->stack[spi - 1].spilled_ptr.ref_obj_id = id;
+
bpf_mark_stack_write(env, state->frameno, BIT(spi - 1) | BIT(spi));
return 0;
@@ -834,10 +829,30 @@ static void invalidate_dynptr(struct bpf_verifier_env *env, struct bpf_func_stat
bpf_mark_stack_write(env, state->frameno, BIT(spi - 1) | BIT(spi));
}
+static void __unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_func_state *state,
+ int spi)
+{
+ enum bpf_dynptr_type type;
+ int id, ref_obj_id;
+
+ id = state->stack[spi].spilled_ptr.id;
+ ref_obj_id = state->stack[spi].spilled_ptr.ref_obj_id;
+ type = state->stack[spi].spilled_ptr.dynptr.type;
+
+ invalidate_dynptr(env, state, spi);
+
+ /* Release the reference acquired on the dynptr */
+ if (dynptr_type_refcounted(type))
+ WARN_ON_ONCE(release_reference(env, ref_obj_id, true));
+
+ /* Invalidate any slices associated with this dynptr */
+ WARN_ON_ONCE(release_reference(env, id, false));
+}
+
static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
{
struct bpf_func_state *state = func(env, reg);
- int spi, ref_obj_id, i;
+ int spi;
/*
* This can only be set for PTR_TO_STACK, as CONST_PTR_TO_DYNPTR cannot
@@ -848,44 +863,12 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
verifier_bug(env, "CONST_PTR_TO_DYNPTR cannot be released");
return -EFAULT;
}
+
spi = dynptr_get_spi(env, reg);
if (spi < 0)
return spi;
- if (!dynptr_type_refcounted(state->stack[spi].spilled_ptr.dynptr.type)) {
- invalidate_dynptr(env, state, spi);
- return 0;
- }
-
- ref_obj_id = state->stack[spi].spilled_ptr.ref_obj_id;
-
- /* If the dynptr has a ref_obj_id, then we need to invalidate
- * two things:
- *
- * 1) Any dynptrs with a matching ref_obj_id (clones)
- * 2) Any slices derived from this dynptr.
- */
-
- /* Invalidate any slices associated with this dynptr */
- WARN_ON_ONCE(release_reference(env, ref_obj_id));
-
- /* Invalidate any dynptr clones */
- for (i = 1; i < state->allocated_stack / BPF_REG_SIZE; i++) {
- if (state->stack[i].spilled_ptr.ref_obj_id != ref_obj_id)
- continue;
-
- /* it should always be the case that if the ref obj id
- * matches then the stack slot also belongs to a
- * dynptr
- */
- if (state->stack[i].slot_type[0] != STACK_DYNPTR) {
- verifier_bug(env, "misconfigured ref_obj_id");
- return -EFAULT;
- }
- if (state->stack[i].spilled_ptr.dynptr.first_slot)
- invalidate_dynptr(env, state, i);
- }
-
+ __unmark_stack_slots_dynptr(env, state, spi);
return 0;
}
@@ -905,7 +888,7 @@ static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
{
struct bpf_func_state *fstate;
struct bpf_reg_state *dreg;
- int i, dynptr_id;
+ int i, id;
/* We always ensure that STACK_DYNPTR is never set partially,
* hence just checking for slot_type[0] is enough. This is
@@ -933,13 +916,13 @@ static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
state->stack[spi - 1].slot_type[i] = STACK_INVALID;
}
- dynptr_id = state->stack[spi].spilled_ptr.id;
+ id = state->stack[spi].spilled_ptr.id;
/* Invalidate any slices associated with this dynptr */
bpf_for_each_reg_in_vstate(env->cur_state, fstate, dreg, ({
/* Dynptr slices are only PTR_TO_MEM_OR_NULL and PTR_TO_MEM */
if (dreg->type != (PTR_TO_MEM | PTR_MAYBE_NULL) && dreg->type != PTR_TO_MEM)
continue;
- if (dreg->dynptr_id == dynptr_id)
+ if (dreg->ref_obj_id == id)
mark_reg_invalid(env, dreg);
}));
@@ -1098,7 +1081,7 @@ static int unmark_stack_slots_iter(struct bpf_verifier_env *env,
struct bpf_reg_state *st = &slot->spilled_ptr;
if (i == 0)
- WARN_ON_ONCE(release_reference(env, st->ref_obj_id));
+ WARN_ON_ONCE(release_reference(env, st->ref_obj_id, true));
__mark_reg_not_init(env, st);
@@ -2235,7 +2218,7 @@ static void mark_reg_known_zero(struct bpf_verifier_env *env,
}
static void __mark_dynptr_reg(struct bpf_reg_state *reg, enum bpf_dynptr_type type,
- bool first_slot, int dynptr_id)
+ bool first_slot, int id)
{
/* reg->type has no meaning for STACK_DYNPTR, but when we set reg for
* callback arguments, it does need to be CONST_PTR_TO_DYNPTR, so simply
@@ -2244,7 +2227,7 @@ static void __mark_dynptr_reg(struct bpf_reg_state *reg, enum bpf_dynptr_type ty
__mark_reg_known_zero(reg);
reg->type = CONST_PTR_TO_DYNPTR;
/* Give each dynptr a unique id to uniquely associate slices to it. */
- reg->id = dynptr_id;
+ reg->id = id;
reg->dynptr.type = type;
reg->dynptr.first_slot = first_slot;
}
@@ -10481,22 +10464,43 @@ static int release_reference_nomark(struct bpf_verifier_state *state, int ref_ob
*
* This is the release function corresponding to acquire_reference(). Idempotent.
*/
-static int release_reference(struct bpf_verifier_env *env, int ref_obj_id)
+static int release_reference(struct bpf_verifier_env *env, int ref_obj_id, bool acquired_ref)
{
struct bpf_verifier_state *vstate = env->cur_state;
struct bpf_func_state *state;
struct bpf_reg_state *reg;
- int err;
+ int err, spi;
- err = release_reference_nomark(vstate, ref_obj_id);
- if (err)
- return err;
+ if (acquired_ref) {
+ err = release_reference_nomark(vstate, ref_obj_id);
+ if (err)
+ return err;
+ }
bpf_for_each_reg_in_vstate(vstate, state, reg, ({
- if (reg->ref_obj_id == ref_obj_id)
+ if (reg->ref_obj_id == ref_obj_id) {
mark_reg_invalid(env, reg);
+ if (reg->id && reg->id != ref_obj_id) {
+ err = release_reference(env, reg->id, false);
+ if (err)
+ return err;
+ }
+ }
}));
+ bpf_for_each_spilled_reg(spi, state, reg, (1 << STACK_DYNPTR)) {
+ if (!reg || !reg->dynptr.first_slot)
+ continue;
+ if (reg->ref_obj_id == ref_obj_id) {
+ __unmark_stack_slots_dynptr(env, state, spi);
+ if (reg->id && reg->id != ref_obj_id) {
+ err = release_reference(env, reg->id, false);
+ if (err)
+ return err;
+ }
+ }
+ }
+
return 0;
}
@@ -11673,7 +11677,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
}));
}
} else if (meta.ref_obj_id) {
- err = release_reference(env, meta.ref_obj_id);
+ err = release_reference(env, meta.ref_obj_id, true);
} else if (register_is_null(&regs[meta.release_regno])) {
/* meta.ref_obj_id can only be 0 if register that is meant to be
* released is NULL, which must be > R0.
@@ -11757,19 +11761,14 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
case BPF_FUNC_dynptr_data:
{
struct bpf_reg_state *reg;
- int id, ref_obj_id;
+ int id;
reg = get_dynptr_arg_reg(env, fn, regs);
if (!reg)
return -EFAULT;
-
- if (meta.dynptr_id) {
- verifier_bug(env, "meta.dynptr_id already set");
- return -EFAULT;
- }
- if (meta.ref_obj_id) {
- verifier_bug(env, "meta.ref_obj_id already set");
+ if (meta.id) {
+ verifier_bug(env, "meta.id already set");
return -EFAULT;
}
@@ -11779,14 +11778,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
return id;
}
- ref_obj_id = dynptr_ref_obj_id(env, reg);
- if (ref_obj_id < 0) {
- verifier_bug(env, "failed to obtain dynptr ref_obj_id");
- return ref_obj_id;
- }
-
- meta.dynptr_id = id;
- meta.ref_obj_id = ref_obj_id;
+ meta.id = id;
break;
}
@@ -11990,10 +11982,9 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
return -EFAULT;
}
- if (is_dynptr_ref_function(func_id))
- regs[BPF_REG_0].dynptr_id = meta.dynptr_id;
-
- if (is_ptr_cast_function(func_id) || is_dynptr_ref_function(func_id)) {
+ if (is_dynptr_ref_function(func_id)) {
+ regs[BPF_REG_0].ref_obj_id = meta.id;
+ } else if (is_ptr_cast_function(func_id)) {
/* For release_reference() */
regs[BPF_REG_0].ref_obj_id = meta.ref_obj_id;
} else if (is_acquire_function(func_id, meta.map.ptr)) {
@@ -13328,7 +13319,9 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
}
if (reg->ref_obj_id) {
- if (is_kfunc_release(meta) && meta->ref_obj_id) {
+ if (meta->ref_obj_id &&
+ (is_kfunc_release(meta) ||
+ meta->func_id == special_kfunc_list[KF_bpf_dynptr_from_skb])) {
verifier_bug(env, "more than one arg with ref_obj_id R%d %u %u",
regno, reg->ref_obj_id,
meta->ref_obj_id);
@@ -13478,6 +13471,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
if (meta->func_id == special_kfunc_list[KF_bpf_dynptr_from_skb]) {
dynptr_arg_type |= DYNPTR_TYPE_SKB;
+ clone_ref_obj_id = meta->ref_obj_id;
} else if (meta->func_id == special_kfunc_list[KF_bpf_dynptr_from_xdp]) {
dynptr_arg_type |= DYNPTR_TYPE_XDP;
} else if (meta->func_id == special_kfunc_list[KF_bpf_dynptr_from_skb_meta]) {
@@ -13955,12 +13949,7 @@ static int check_special_kfunc(struct bpf_verifier_env *env, struct bpf_kfunc_ca
verifier_bug(env, "no dynptr id");
return -EFAULT;
}
- regs[BPF_REG_0].dynptr_id = meta->initialized_dynptr.id;
-
- /* we don't need to set BPF_REG_0's ref obj id
- * because packet slices are not refcounted (see
- * dynptr_type_refcounted)
- */
+ regs[BPF_REG_0].ref_obj_id = meta->initialized_dynptr.id;
} else {
return 0;
}
@@ -14154,7 +14143,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
if (meta.initialized_dynptr.ref_obj_id) {
err = unmark_stack_slots_dynptr(env, reg);
} else {
- err = release_reference(env, reg->ref_obj_id);
+ err = release_reference(env, reg->ref_obj_id, true);
if (err)
verbose(env, "kfunc %s#%d reference has not been acquired before\n",
func_name, meta.func_id);
@@ -14176,7 +14165,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
return err;
}
- err = release_reference(env, release_ref_obj_id);
+ err = release_reference(env, release_ref_obj_id, true);
if (err) {
verbose(env, "kfunc %s#%d reference has not been acquired before\n",
func_name, meta.func_id);
--
2.47.3
* [RFC PATCH bpf-next v1 2/4] selftests/bpf: Test creating dynptr from dynptr data and slice
2026-02-02 21:48 [RFC PATCH bpf-next v1 1/4] bpf: Fix and refactor object relationship tracking Amery Hung
@ 2026-02-02 21:48 ` Amery Hung
2026-02-02 21:48 ` [RFC PATCH bpf-next v1 3/4] selftests/bpf: Test using dynptr after freeing the underlying object Amery Hung
` (3 subsequent siblings)
4 siblings, 0 replies; 7+ messages in thread
From: Amery Hung @ 2026-02-02 21:48 UTC (permalink / raw)
To: bpf
Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
ameryhung, kernel-team
The verifier currently does not allow creating a dynptr from dynptr
data or from a dynptr slice. Add selftests exercising both cases
explicitly.
Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
.../testing/selftests/bpf/progs/dynptr_fail.c | 42 +++++++++++++++++++
1 file changed, 42 insertions(+)
diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c
index 8f2ae9640886..b44372cac40b 100644
--- a/tools/testing/selftests/bpf/progs/dynptr_fail.c
+++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c
@@ -705,6 +705,48 @@ int dynptr_from_mem_invalid_api(void *ctx)
return 0;
}
+/* Cannot create dynptr from dynptr data */
+SEC("?raw_tp")
+__failure __msg("Unsupported reg type mem for bpf_dynptr_from_mem data")
+int dynptr_from_dynptr_data(void *ctx)
+{
+ struct bpf_dynptr ptr, ptr2;
+ __u8 *data;
+
+ if (get_map_val_dynptr(&ptr))
+ return 0;
+
+ data = bpf_dynptr_data(&ptr, 0, sizeof(__u32));
+ if (!data)
+ return 0;
+
+ /* this should fail */
+ bpf_dynptr_from_mem(data, sizeof(__u32), 0, &ptr2);
+
+ return 0;
+}
+
+/* Cannot create dynptr from dynptr slice */
+SEC("?tc")
+__failure __msg("Unsupported reg type mem for bpf_dynptr_from_mem data")
+int dynptr_from_dynptr_slice(struct __sk_buff *skb)
+{
+ struct bpf_dynptr ptr, ptr2;
+ struct ethhdr *hdr;
+ char buffer[sizeof(*hdr)] = {};
+
+ bpf_dynptr_from_skb(skb, 0, &ptr);
+
+ hdr = bpf_dynptr_slice_rdwr(&ptr, 0, buffer, sizeof(buffer));
+ if (!hdr)
+ return SK_DROP;
+
+ /* this should fail */
+ bpf_dynptr_from_mem(hdr, sizeof(*hdr), 0, &ptr2);
+
+ return SK_PASS;
+}
+
SEC("?tc")
__failure __msg("cannot overwrite referenced dynptr") __log_level(2)
int dynptr_pruning_overwrite(struct __sk_buff *ctx)
--
2.47.3
* [RFC PATCH bpf-next v1 3/4] selftests/bpf: Test using dynptr after freeing the underlying object
2026-02-02 21:48 [RFC PATCH bpf-next v1 1/4] bpf: Fix and refactor object relationship tracking Amery Hung
2026-02-02 21:48 ` [RFC PATCH bpf-next v1 2/4] selftests/bpf: Test creating dynptr from dynptr data and slice Amery Hung
@ 2026-02-02 21:48 ` Amery Hung
2026-02-02 21:48 ` [RFC PATCH bpf-next v1 4/4] selftests/bpf: Test using slice after invalidating dynptr clone Amery Hung
` (2 subsequent siblings)
4 siblings, 0 replies; 7+ messages in thread
From: Amery Hung @ 2026-02-02 21:48 UTC (permalink / raw)
To: bpf
Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
ameryhung, kernel-team
Make sure the verifier invalidates the dynptr and dynptr slice derived
from an skb after the skb is freed.
Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
.../selftests/bpf/prog_tests/bpf_qdisc.c | 36 ++++++++++
.../progs/bpf_qdisc_fail__invalid_dynptr.c | 62 +++++++++++++++++
...f_qdisc_fail__invalid_dynptr_cross_frame.c | 68 +++++++++++++++++++
.../bpf_qdisc_fail__invalid_dynptr_slice.c | 64 +++++++++++++++++
4 files changed, 230 insertions(+)
create mode 100644 tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr.c
create mode 100644 tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_cross_frame.c
create mode 100644 tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_slice.c
diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c b/tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c
index 730357cd0c9a..ec5b346138c5 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c
@@ -8,6 +8,9 @@
#include "bpf_qdisc_fifo.skel.h"
#include "bpf_qdisc_fq.skel.h"
#include "bpf_qdisc_fail__incompl_ops.skel.h"
+#include "bpf_qdisc_fail__invalid_dynptr.skel.h"
+#include "bpf_qdisc_fail__invalid_dynptr_slice.skel.h"
+#include "bpf_qdisc_fail__invalid_dynptr_cross_frame.skel.h"
#define LO_IFINDEX 1
@@ -156,6 +159,33 @@ static void test_incompl_ops(void)
bpf_qdisc_fail__incompl_ops__destroy(skel);
}
+static void test_invalid_dynptr(void)
+{
+ struct bpf_qdisc_fail__invalid_dynptr *skel;
+
+ skel = bpf_qdisc_fail__invalid_dynptr__open_and_load();
+ if (!ASSERT_ERR_PTR(skel, "bpf_qdisc_fail__invalid_dynptr__open_and_load"))
+ bpf_qdisc_fail__invalid_dynptr__destroy(skel);
+}
+
+static void test_invalid_dynptr_slice(void)
+{
+ struct bpf_qdisc_fail__invalid_dynptr_slice *skel;
+
+ skel = bpf_qdisc_fail__invalid_dynptr_slice__open_and_load();
+ if (!ASSERT_ERR_PTR(skel, "bpf_qdisc_fail__invalid_dynptr_slice__open_and_load"))
+ bpf_qdisc_fail__invalid_dynptr_slice__destroy(skel);
+}
+
+static void test_invalid_dynptr_cross_frame(void)
+{
+ struct bpf_qdisc_fail__invalid_dynptr_cross_frame *skel;
+
+ skel = bpf_qdisc_fail__invalid_dynptr_cross_frame__open_and_load();
+ if (!ASSERT_ERR_PTR(skel, "bpf_qdisc_fail__invalid_dynptr_cross_frame__open_and_load"))
+ bpf_qdisc_fail__invalid_dynptr_cross_frame__destroy(skel);
+}
+
static int get_default_qdisc(char *qdisc_name)
{
FILE *f;
@@ -223,6 +253,12 @@ void test_ns_bpf_qdisc(void)
test_qdisc_attach_to_non_root();
if (test__start_subtest("incompl_ops"))
test_incompl_ops();
+ if (test__start_subtest("invalid_dynptr"))
+ test_invalid_dynptr();
+ if (test__start_subtest("invalid_dynptr_slice"))
+ test_invalid_dynptr_slice();
+ if (test__start_subtest("invalid_dynptr_cross_frame"))
+ test_invalid_dynptr_cross_frame();
}
void serial_test_bpf_qdisc_default(void)
diff --git a/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr.c b/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr.c
new file mode 100644
index 000000000000..2e76470bc261
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr.c
@@ -0,0 +1,62 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <vmlinux.h>
+#include "bpf_experimental.h"
+#include "bpf_qdisc_common.h"
+
+char _license[] SEC("license") = "GPL";
+
+int proto;
+
+SEC("struct_ops")
+int BPF_PROG(bpf_qdisc_test_enqueue, struct sk_buff *skb, struct Qdisc *sch,
+ struct bpf_sk_buff_ptr *to_free)
+{
+ struct bpf_dynptr ptr;
+ struct ethhdr *hdr;
+
+ bpf_dynptr_from_skb((struct __sk_buff *)skb, 0, &ptr);
+
+ bpf_qdisc_skb_drop(skb, to_free);
+
+ hdr = bpf_dynptr_slice(&ptr, 0, NULL, sizeof(*hdr));
+ if (!hdr)
+ return NET_XMIT_DROP;
+
+ proto = hdr->h_proto;
+
+ return NET_XMIT_DROP;
+}
+
+SEC("struct_ops")
+struct sk_buff *BPF_PROG(bpf_qdisc_test_dequeue, struct Qdisc *sch)
+{
+ return NULL;
+}
+
+SEC("struct_ops")
+int BPF_PROG(bpf_qdisc_test_init, struct Qdisc *sch, struct nlattr *opt,
+ struct netlink_ext_ack *extack)
+{
+ return 0;
+}
+
+SEC("struct_ops")
+void BPF_PROG(bpf_qdisc_test_reset, struct Qdisc *sch)
+{
+}
+
+SEC("struct_ops")
+void BPF_PROG(bpf_qdisc_test_destroy, struct Qdisc *sch)
+{
+}
+
+SEC(".struct_ops")
+struct Qdisc_ops test = {
+ .enqueue = (void *)bpf_qdisc_test_enqueue,
+ .dequeue = (void *)bpf_qdisc_test_dequeue,
+ .init = (void *)bpf_qdisc_test_init,
+ .reset = (void *)bpf_qdisc_test_reset,
+ .destroy = (void *)bpf_qdisc_test_destroy,
+ .id = "bpf_qdisc_test",
+};
diff --git a/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_cross_frame.c b/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_cross_frame.c
new file mode 100644
index 000000000000..936bf4bc79bf
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_cross_frame.c
@@ -0,0 +1,68 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <vmlinux.h>
+#include "bpf_experimental.h"
+#include "bpf_qdisc_common.h"
+
+char _license[] SEC("license") = "GPL";
+
+int proto;
+
+static int free_skb(struct sk_buff *skb)
+{
+ bpf_kfree_skb(skb);
+ return 0;
+}
+
+SEC("struct_ops")
+int BPF_PROG(bpf_qdisc_test_enqueue, struct sk_buff *skb, struct Qdisc *sch,
+ struct bpf_sk_buff_ptr *to_free)
+{
+ struct bpf_dynptr ptr;
+ struct ethhdr *hdr;
+
+ bpf_dynptr_from_skb((struct __sk_buff *)skb, 0, &ptr);
+
+ hdr = bpf_dynptr_slice(&ptr, 0, NULL, sizeof(*hdr));
+ if (!hdr)
+ return NET_XMIT_DROP;
+
+ free_skb(skb);
+
+ proto = hdr->h_proto;
+
+ return NET_XMIT_DROP;
+}
+
+SEC("struct_ops")
+struct sk_buff *BPF_PROG(bpf_qdisc_test_dequeue, struct Qdisc *sch)
+{
+ return NULL;
+}
+
+SEC("struct_ops")
+int BPF_PROG(bpf_qdisc_test_init, struct Qdisc *sch, struct nlattr *opt,
+ struct netlink_ext_ack *extack)
+{
+ return 0;
+}
+
+SEC("struct_ops")
+void BPF_PROG(bpf_qdisc_test_reset, struct Qdisc *sch)
+{
+}
+
+SEC("struct_ops")
+void BPF_PROG(bpf_qdisc_test_destroy, struct Qdisc *sch)
+{
+}
+
+SEC(".struct_ops")
+struct Qdisc_ops test = {
+ .enqueue = (void *)bpf_qdisc_test_enqueue,
+ .dequeue = (void *)bpf_qdisc_test_dequeue,
+ .init = (void *)bpf_qdisc_test_init,
+ .reset = (void *)bpf_qdisc_test_reset,
+ .destroy = (void *)bpf_qdisc_test_destroy,
+ .id = "bpf_qdisc_test",
+};
diff --git a/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_slice.c b/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_slice.c
new file mode 100644
index 000000000000..95e8c070a37d
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/bpf_qdisc_fail__invalid_dynptr_slice.c
@@ -0,0 +1,64 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <vmlinux.h>
+#include "bpf_experimental.h"
+#include "bpf_qdisc_common.h"
+
+char _license[] SEC("license") = "GPL";
+
+int proto;
+
+SEC("struct_ops")
+int BPF_PROG(bpf_qdisc_test_enqueue, struct sk_buff *skb, struct Qdisc *sch,
+ struct bpf_sk_buff_ptr *to_free)
+{
+ struct bpf_dynptr ptr;
+ struct ethhdr *hdr;
+
+ bpf_dynptr_from_skb((struct __sk_buff *)skb, 0, &ptr);
+
+ hdr = bpf_dynptr_slice(&ptr, 0, NULL, sizeof(*hdr));
+ if (!hdr) {
+ bpf_qdisc_skb_drop(skb, to_free);
+ return NET_XMIT_DROP;
+ }
+
+ bpf_qdisc_skb_drop(skb, to_free);
+
+ proto = hdr->h_proto;
+
+ return NET_XMIT_DROP;
+}
+
+SEC("struct_ops")
+struct sk_buff *BPF_PROG(bpf_qdisc_test_dequeue, struct Qdisc *sch)
+{
+ return NULL;
+}
+
+SEC("struct_ops")
+int BPF_PROG(bpf_qdisc_test_init, struct Qdisc *sch, struct nlattr *opt,
+ struct netlink_ext_ack *extack)
+{
+ return 0;
+}
+
+SEC("struct_ops")
+void BPF_PROG(bpf_qdisc_test_reset, struct Qdisc *sch)
+{
+}
+
+SEC("struct_ops")
+void BPF_PROG(bpf_qdisc_test_destroy, struct Qdisc *sch)
+{
+}
+
+SEC(".struct_ops")
+struct Qdisc_ops test = {
+ .enqueue = (void *)bpf_qdisc_test_enqueue,
+ .dequeue = (void *)bpf_qdisc_test_dequeue,
+ .init = (void *)bpf_qdisc_test_init,
+ .reset = (void *)bpf_qdisc_test_reset,
+ .destroy = (void *)bpf_qdisc_test_destroy,
+ .id = "bpf_qdisc_test",
+};
--
2.47.3
* [RFC PATCH bpf-next v1 4/4] selftests/bpf: Test using slice after invalidating dynptr clone
2026-02-02 21:48 [RFC PATCH bpf-next v1 1/4] bpf: Fix and refactor object relationship tracking Amery Hung
2026-02-02 21:48 ` [RFC PATCH bpf-next v1 2/4] selftests/bpf: Test creating dynptr from dynptr data and slice Amery Hung
2026-02-02 21:48 ` [RFC PATCH bpf-next v1 3/4] selftests/bpf: Test using dynptr after freeing the underlying object Amery Hung
@ 2026-02-02 21:48 ` Amery Hung
2026-02-02 22:23 ` [RFC PATCH bpf-next v1 1/4] bpf: Fix and refactor object relationship tracking bot+bpf-ci
2026-02-04 23:58 ` Amery Hung
4 siblings, 0 replies; 7+ messages in thread
From: Amery Hung @ 2026-02-02 21:48 UTC (permalink / raw)
To: bpf
Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
ameryhung, kernel-team
The parent object of a cloned dynptr is the skb, not the original
dynptr. Invalidating the original dynptr should therefore not prevent
the program from using a slice derived from the clone.
Signed-off-by: Amery Hung <ameryhung@gmail.com>
---
.../selftests/bpf/prog_tests/bpf_qdisc.c | 14 ++++
.../bpf/progs/bpf_qdisc_dynptr_clone.c | 69 +++++++++++++++++++
2 files changed, 83 insertions(+)
create mode 100644 tools/testing/selftests/bpf/progs/bpf_qdisc_dynptr_clone.c
diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c b/tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c
index ec5b346138c5..ba14738c509b 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_qdisc.c
@@ -11,6 +11,7 @@
#include "bpf_qdisc_fail__invalid_dynptr.skel.h"
#include "bpf_qdisc_fail__invalid_dynptr_slice.skel.h"
#include "bpf_qdisc_fail__invalid_dynptr_cross_frame.skel.h"
+#include "bpf_qdisc_dynptr_clone.skel.h"
#define LO_IFINDEX 1
@@ -186,6 +187,17 @@ static void test_invalid_dynptr_cross_frame(void)
bpf_qdisc_fail__invalid_dynptr_cross_frame__destroy(skel);
}
+static void test_dynptr_clone(void)
+{
+ struct bpf_qdisc_dynptr_clone *skel;
+
+ skel = bpf_qdisc_dynptr_clone__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "bpf_qdisc_dynptr_clone__open_and_load"))
+ return;
+
+ bpf_qdisc_dynptr_clone__destroy(skel);
+}
+
static int get_default_qdisc(char *qdisc_name)
{
FILE *f;
@@ -259,6 +271,8 @@ void test_ns_bpf_qdisc(void)
test_invalid_dynptr_slice();
if (test__start_subtest("invalid_dynptr_cross_frame"))
test_invalid_dynptr_cross_frame();
+ if (test__start_subtest("dynptr_clone"))
+ test_dynptr_clone();
}
void serial_test_bpf_qdisc_default(void)
diff --git a/tools/testing/selftests/bpf/progs/bpf_qdisc_dynptr_clone.c b/tools/testing/selftests/bpf/progs/bpf_qdisc_dynptr_clone.c
new file mode 100644
index 000000000000..f23581e19da1
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/bpf_qdisc_dynptr_clone.c
@@ -0,0 +1,69 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <vmlinux.h>
+#include "bpf_experimental.h"
+#include "bpf_qdisc_common.h"
+
+char _license[] SEC("license") = "GPL";
+
+int proto;
+
+SEC("struct_ops")
+int BPF_PROG(bpf_qdisc_test_enqueue, struct sk_buff *skb, struct Qdisc *sch,
+ struct bpf_sk_buff_ptr *to_free)
+{
+ struct bpf_dynptr ptr, ptr_clone;
+ struct ethhdr *hdr;
+
+ bpf_dynptr_from_skb((struct __sk_buff *)skb, 0, &ptr);
+
+ bpf_dynptr_clone(&ptr, &ptr_clone);
+
+ hdr = bpf_dynptr_slice(&ptr_clone, 0, NULL, sizeof(*hdr));
+ if (!hdr) {
+ bpf_qdisc_skb_drop(skb, to_free);
+ return NET_XMIT_DROP;
+ }
+
+ *(int *)&ptr = 0;
+
+ proto = hdr->h_proto;
+
+ bpf_qdisc_skb_drop(skb, to_free);
+
+ return NET_XMIT_DROP;
+}
+
+SEC("struct_ops")
+struct sk_buff *BPF_PROG(bpf_qdisc_test_dequeue, struct Qdisc *sch)
+{
+ return NULL;
+}
+
+SEC("struct_ops")
+int BPF_PROG(bpf_qdisc_test_init, struct Qdisc *sch, struct nlattr *opt,
+ struct netlink_ext_ack *extack)
+{
+ return 0;
+}
+
+SEC("struct_ops")
+void BPF_PROG(bpf_qdisc_test_reset, struct Qdisc *sch)
+{
+}
+
+SEC("struct_ops")
+void BPF_PROG(bpf_qdisc_test_destroy, struct Qdisc *sch)
+{
+}
+
+SEC(".struct_ops")
+struct Qdisc_ops test = {
+ .enqueue = (void *)bpf_qdisc_test_enqueue,
+ .dequeue = (void *)bpf_qdisc_test_dequeue,
+ .init = (void *)bpf_qdisc_test_init,
+ .reset = (void *)bpf_qdisc_test_reset,
+ .destroy = (void *)bpf_qdisc_test_destroy,
+ .id = "bpf_qdisc_test",
+};
+
--
2.47.3
* Re: [RFC PATCH bpf-next v1 1/4] bpf: Fix and refactor object relationship tracking
2026-02-02 21:48 [RFC PATCH bpf-next v1 1/4] bpf: Fix and refactor object relationship tracking Amery Hung
` (2 preceding siblings ...)
2026-02-02 21:48 ` [RFC PATCH bpf-next v1 4/4] selftests/bpf: Test using slice after invalidating dynptr clone Amery Hung
@ 2026-02-02 22:23 ` bot+bpf-ci
2026-02-02 23:39 ` Amery Hung
2026-02-04 23:58 ` Amery Hung
4 siblings, 1 reply; 7+ messages in thread
From: bot+bpf-ci @ 2026-02-02 22:23 UTC (permalink / raw)
To: ameryhung, bpf
Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
ameryhung, kernel-team, ast, andrii, daniel, martin.lau, eddyz87,
yonghong.song, clm, ihor.solodrai
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 6b62b6d571..a357e03051 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
[ ... ]
> @@ -10481,22 +10464,43 @@ static int release_reference_nomark(...)
> *
> * This is the release function corresponding to acquire_reference(). Idempotent.
> */
> -static int release_reference(struct bpf_verifier_env *env, int ref_obj_id)
> +static int release_reference(struct bpf_verifier_env *env, int ref_obj_id, bool acquired_ref)
> {
> struct bpf_verifier_state *vstate = env->cur_state;
> struct bpf_func_state *state;
> struct bpf_reg_state *reg;
> - int err;
> + int err, spi;
>
> - err = release_reference_nomark(vstate, ref_obj_id);
> - if (err)
> - return err;
> + if (acquired_ref) {
> + err = release_reference_nomark(vstate, ref_obj_id);
> + if (err)
> + return err;
> + }
>
> bpf_for_each_reg_in_vstate(vstate, state, reg, ({
> - if (reg->ref_obj_id == ref_obj_id)
> + if (reg->ref_obj_id == ref_obj_id) {
> mark_reg_invalid(env, reg);
> + if (reg->id && reg->id != ref_obj_id) {
> + err = release_reference(env, reg->id, false);
> + if (err)
> + return err;
> + }
> + }
> }));
>
> + bpf_for_each_spilled_reg(spi, state, reg, (1 << STACK_DYNPTR)) {
> + if (!reg || !reg->dynptr.first_slot)
> + continue;
> + if (reg->ref_obj_id == ref_obj_id) {
> + __unmark_stack_slots_dynptr(env, state, spi);
> + if (reg->id && reg->id != ref_obj_id) {
> + err = release_reference(env, reg->id, false);
> + if (err)
> + return err;
> + }
> + }
> + }
Does this loop only scan dynptrs in the current frame?
The bpf_for_each_reg_in_vstate macro sets 'state' to each frame in sequence:
for (___i = 0; ___i <= ___vstate->curframe; ___i++) {
__state = ___vstate->frame[___i];
...
}
After the macro completes, 'state' holds the last assigned value, which is
vstate->frame[vstate->curframe] - only the top frame.
The subsequent bpf_for_each_spilled_reg loop uses this 'state' variable,
so it only scans STACK_DYNPTR slots in the top frame. Dynptrs residing in
caller frames (frames 0 through curframe-1) would not be found.
When a BPF program creates a dynptr in frame 0 and then calls a subprog
(frame 1) that releases the parent object, the dynptr in frame 0 would not
be invalidated. The selftest bpf_qdisc_fail__invalid_dynptr_cross_frame.c
appears to test this exact scenario.
Should the bpf_for_each_spilled_reg loop be wrapped in a loop over all
frames, similar to how bpf_for_each_reg_in_vstate_mask handles it?
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/21608154622
AI-authorship-score: low
AI-authorship-explanation: The commit demonstrates deep domain expertise with detailed ASCII diagrams, uses standard kernel conventions, and the bug is a typical human oversight from misunderstanding macro variable scoping.
issues-found: 1
issue-severity-score: medium
issue-severity-explanation: The dynptr invalidation loop only scans the top frame, potentially missing dynptrs in caller frames when parent objects are released, which could lead to use-after-free in the BPF verifier.
* Re: [RFC PATCH bpf-next v1 1/4] bpf: Fix and refactor object relationship tracking
2026-02-02 22:23 ` [RFC PATCH bpf-next v1 1/4] bpf: Fix and refactor object relationship tracking bot+bpf-ci
@ 2026-02-02 23:39 ` Amery Hung
0 siblings, 0 replies; 7+ messages in thread
From: Amery Hung @ 2026-02-02 23:39 UTC (permalink / raw)
To: bot+bpf-ci
Cc: bpf, netdev, alexei.starovoitov, andrii, daniel, memxor,
martin.lau, kernel-team, ast, eddyz87, yonghong.song, clm,
ihor.solodrai
On Mon, Feb 2, 2026 at 2:23 PM <bot+bpf-ci@kernel.org> wrote:
>
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 6b62b6d571..a357e03051 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
>
> [ ... ]
>
> > @@ -10481,22 +10464,43 @@ static int release_reference_nomark(...)
> > *
> > * This is the release function corresponding to acquire_reference(). Idempotent.
> > */
> > -static int release_reference(struct bpf_verifier_env *env, int ref_obj_id)
> > +static int release_reference(struct bpf_verifier_env *env, int ref_obj_id, bool acquired_ref)
> > {
> > struct bpf_verifier_state *vstate = env->cur_state;
> > struct bpf_func_state *state;
> > struct bpf_reg_state *reg;
> > - int err;
> > + int err, spi;
> >
> > - err = release_reference_nomark(vstate, ref_obj_id);
> > - if (err)
> > - return err;
> > + if (acquired_ref) {
> > + err = release_reference_nomark(vstate, ref_obj_id);
> > + if (err)
> > + return err;
> > + }
> >
> > bpf_for_each_reg_in_vstate(vstate, state, reg, ({
> > - if (reg->ref_obj_id == ref_obj_id)
> > + if (reg->ref_obj_id == ref_obj_id) {
> > mark_reg_invalid(env, reg);
> > + if (reg->id && reg->id != ref_obj_id) {
> > + err = release_reference(env, reg->id, false);
> > + if (err)
> > + return err;
> > + }
> > + }
> > }));
> >
> > + bpf_for_each_spilled_reg(spi, state, reg, (1 << STACK_DYNPTR)) {
> > + if (!reg || !reg->dynptr.first_slot)
> > + continue;
> > + if (reg->ref_obj_id == ref_obj_id) {
> > + __unmark_stack_slots_dynptr(env, state, spi);
> > + if (reg->id && reg->id != ref_obj_id) {
> > + err = release_reference(env, reg->id, false);
> > + if (err)
> > + return err;
> > + }
> > + }
> > + }
>
> Does this loop only scan dynptrs in the current frame?
>
> The bpf_for_each_reg_in_vstate macro sets 'state' to each frame in sequence:
>
> for (___i = 0; ___i <= ___vstate->curframe; ___i++) {
> __state = ___vstate->frame[___i];
> ...
> }
>
> After the macro completes, 'state' holds the last assigned value, which is
> vstate->frame[vstate->curframe] - only the top frame.
>
> The subsequent bpf_for_each_spilled_reg loop uses this 'state' variable,
> so it only scans STACK_DYNPTR slots in the top frame. Dynptrs residing in
> caller frames (frames 0 through curframe-1) would not be found.
>
> When a BPF program creates a dynptr in frame 0 and then calls a subprog
> (frame 1) that releases the parent object, the dynptr in frame 0 would not
> be invalidated. The selftest bpf_qdisc_fail__invalid_dynptr_cross_frame.c
> appears to test this exact scenario.
>
> Should the bpf_for_each_spilled_reg loop be wrapped in a loop over all
> frames, similar to how bpf_for_each_reg_in_vstate_mask handles it?
The AI is right; it should loop through all frames. The selftest fails to
catch this because free_skb() is inlined. Will fix the bug and add
__noinline to free_skb().
>
>
> ---
> AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
> See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
>
> CI run summary: https://github.com/kernel-patches/bpf/actions/runs/21608154622
>
> AI-authorship-score: low
> AI-authorship-explanation: The commit demonstrates deep domain expertise with detailed ASCII diagrams, uses standard kernel conventions, and the bug is a typical human oversight from misunderstanding macro variable scoping.
> issues-found: 1
> issue-severity-score: medium
> issue-severity-explanation: The dynptr invalidation loop only scans the top frame, potentially missing dynptrs in caller frames when parent objects are released, which could lead to use-after-free in the BPF verifier.
* Re: [RFC PATCH bpf-next v1 1/4] bpf: Fix and refactor object relationship tracking
2026-02-02 21:48 [RFC PATCH bpf-next v1 1/4] bpf: Fix and refactor object relationship tracking Amery Hung
` (3 preceding siblings ...)
2026-02-02 22:23 ` [RFC PATCH bpf-next v1 1/4] bpf: Fix and refactor object relationship tracking bot+bpf-ci
@ 2026-02-04 23:58 ` Amery Hung
4 siblings, 0 replies; 7+ messages in thread
From: Amery Hung @ 2026-02-04 23:58 UTC (permalink / raw)
To: bpf
Cc: netdev, alexei.starovoitov, andrii, daniel, memxor, martin.lau,
kernel-team
Brief note of an offline discussion with Andrii:
On Mon, Feb 2, 2026 at 1:48 PM Amery Hung <ameryhung@gmail.com> wrote:
>
> This patch (1) fixes the missing link between dynptr and the parent
> referenced object and (2) simplifies the object relationship tracking
> to allow building more complex object relationship in the future.
>
> This patch makes dynptr track its parent object. Now that bpf qdisc
> has been introduced, it is possible for an skb to be freed in a qdisc
> program. However, since there is no link between the skb dynptr and
> the skb, the dynptr will not be invalidated after the skb is freed,
> resulting in a use-after-free.
>
> In addition, take the chance to refactor object relationship
> tracking. Currently, an object shares the ref_obj_id of the object it
> references. For dynptr slices, a separate dynptr_id is used to track
> the parent.
>
> This design works for now, but after adding link between skb and dynptr,
> it becomes problematic. It cannot correctly handle the removal of some
> nodes in the object relationship tree. For example, if dynptr a is
> destroyed by overwriting the stack slot, release_reference(1) would be
> called and all nodes will be removed. The correct handling should leave
> skb, dynptr c, and slice d intact. This was not a problem before
> since a dynptr created from an skb has ref_obj_id = 0.
>
> In the future, if we start allowing creating a dynptr from a slice,
> the current design also cannot correctly handle the removal of dynptr
> e. Again, all nodes will be removed instead of only the children of
> dynptr e.
>
> Before: object (id,ref_obj_id,dynptr_id)
>
> skb (1,1,0)
> ^ (Link added in this patch)
> +-------------------------------+
> | bpf_dynptr_clone |
> dynptr a (2,1,0) dynptr c (4,1,0)
> ^ ^
> bpf_dynptr_slice | |
> | |
> slice b (3,1,2) slice d (5,1,4)
> ^
> bpf_dynptr_from_mem |
> (NOT allowed yet) |
> dynptr e (6,1,0)
>
> The new design removes dynptr_id and uses ref_obj_id to track the id
> of the parent object instead of sharing the ref_obj_id of the root.
> For release_reference(), this means the ref_obj_id argument can now
> also contain an id not acquired through acquire_reference(). Add an
> argument "acquired_ref" to indicate whether it was acquired. If
> false, there is no need to call release_reference_nomark(). In
> addition, when iterating the register state, when a reg->ref_obj_id
> matches the to-be-released ref_obj_id, other objects referencing
> reg->id also need to be released. Finally, since dynptrs reside on
> the stack, release_reference() will also need to scan stack slots.
>
> After: object (id,ref_obj_id)
>
> Qdisc:
> skb (1,1)
> ^
> bpf_dynptr_from_skb +-------------------------------+
> | bpf_dynptr_clone |
> dynptr a (2,1) dynptr c (4,1)
> ^ ^
> bpf_dynptr_slice | |
> | |
> slice b (3,2) slice d (5,4)
> ^
> bpf_dynptr_from_mem |
> (NOT allowed yet) |
> dynptr e (6,3)
>
> Referenced dynptr:
>
> bpf_dynptr_from_file +---------------------------------+
> | bpf_dynptr_clone |
> dynptr a (1,1) dynptr c (2,1)
A file dynptr should also have its ref_obj_id point to the file's id.
>
> Other non-referenced dynptr
>
> bpf_dynptr_from_skb +---------------------------------+
> | bpf_dynptr_clone |
> dynptr a (1,0) dynptr c (2,1)
This should be dynptr c (2,0)
>
> While this change introduces a potential recursive call chain of
> release_reference() -> __unmark_stack_slots_dynptr(), we currently do
> not allow creating a dynptr from a slice, so the depth of the tree is
> capped at three. In the future, when this is allowed, the recursion
> depth will still be bounded since the number of slices is limited by
> the number of registers. Besides, mark_ptr_or_null_reg() also needs
> to preserve the reg->id of slices.
Use BFS to avoid the potential unsafe recursive calls.
>
> Note that the new design does not support intermediate nodes
> acquiring references. If this is needed in the future, we might need
> to split ref_obj_id into ref_obj_id and parent_id to handle it.
>
For historical reasons, a pointer object can have an id for MAYBE_NULL
tracking and a different ref_obj_id for referenced object tracking.
However, in the new scheme, while we might still keep the same names,
"id" really means the unique id of an object, which may or may not be
referenced (i.e., the return of acquire_reference()), while ref_obj_id
stores the parent object's "id". Therefore, there should be no need for
the (id, ref_obj_id, parent_id) triple mentioned in the last paragraph.
> CC: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> Fixes: 870c28588afa ("bpf: net_sched: Add basic bpf qdisc kfuncs")
> Signed-off-by: Amery Hung <ameryhung@gmail.com>
>
> ---
>
> This refactor is based on Kumar work in [0]
>
> [0] https://lore.kernel.org/bpf/20250414161443.1146103-2-memxor@gmail.com/
> ---
> include/linux/bpf_verifier.h | 1 -
> kernel/bpf/log.c | 2 -
> kernel/bpf/verifier.c | 173 ++++++++++++++++-------------------
> 3 files changed, 81 insertions(+), 95 deletions(-)
>
> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> index 8355b585cd18..0cb14648a269 100644
> --- a/include/linux/bpf_verifier.h
> +++ b/include/linux/bpf_verifier.h
> @@ -66,7 +66,6 @@ struct bpf_reg_state {
>
> struct { /* for PTR_TO_MEM | PTR_TO_MEM_OR_NULL */
> u32 mem_size;
> - u32 dynptr_id; /* for dynptr slices */
> };
>
> /* For dynptr stack slots */
> diff --git a/kernel/bpf/log.c b/kernel/bpf/log.c
> index a0c3b35de2ce..8851adfdb581 100644
> --- a/kernel/bpf/log.c
> +++ b/kernel/bpf/log.c
> @@ -808,8 +808,6 @@ void print_verifier_state(struct bpf_verifier_env *env, const struct bpf_verifie
> verbose_a("id=%d", reg->id);
> if (reg->ref_obj_id)
> verbose_a("ref_id=%d", reg->ref_obj_id);
> - if (reg->dynptr_id)
> - verbose_a("dynptr_id=%d", reg->dynptr_id);
> verbose(env, ")");
> break;
> case STACK_ITER:
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 6b62b6d57175..a357e0305139 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -204,7 +204,7 @@ struct bpf_verifier_stack_elem {
>
> static int acquire_reference(struct bpf_verifier_env *env, int insn_idx);
> static int release_reference_nomark(struct bpf_verifier_state *state, int ref_obj_id);
> -static int release_reference(struct bpf_verifier_env *env, int ref_obj_id);
> +static int release_reference(struct bpf_verifier_env *env, int ref_obj_id, bool acquired_ref);
> static void invalidate_non_owning_refs(struct bpf_verifier_env *env);
> static bool in_rbtree_lock_required_cb(struct bpf_verifier_env *env);
> static int ref_set_non_owning(struct bpf_verifier_env *env,
> @@ -287,7 +287,7 @@ struct bpf_call_arg_meta {
> int mem_size;
> u64 msize_max_value;
> int ref_obj_id;
> - int dynptr_id;
> + int id;
> int func_id;
> struct btf *btf;
> u32 btf_id;
> @@ -733,7 +733,7 @@ static bool dynptr_type_refcounted(enum bpf_dynptr_type type)
>
> static void __mark_dynptr_reg(struct bpf_reg_state *reg,
> enum bpf_dynptr_type type,
> - bool first_slot, int dynptr_id);
> + bool first_slot, int id);
>
> static void __mark_reg_not_init(const struct bpf_verifier_env *env,
> struct bpf_reg_state *reg);
> @@ -764,7 +764,7 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
> {
> struct bpf_func_state *state = func(env, reg);
> enum bpf_dynptr_type type;
> - int spi, i, err;
> + int spi, i, err, id;
>
> spi = dynptr_get_spi(env, reg);
> if (spi < 0)
> @@ -798,22 +798,17 @@ static int mark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_
> mark_dynptr_stack_regs(env, &state->stack[spi].spilled_ptr,
> &state->stack[spi - 1].spilled_ptr, type);
>
> - if (dynptr_type_refcounted(type)) {
> - /* The id is used to track proper releasing */
> - int id;
> -
> - if (clone_ref_obj_id)
> - id = clone_ref_obj_id;
> - else
> - id = acquire_reference(env, insn_idx);
> + id = clone_ref_obj_id;
>
> + if (dynptr_type_refcounted(type) && !clone_ref_obj_id) {
> + id = acquire_reference(env, insn_idx);
> if (id < 0)
> return id;
> -
> - state->stack[spi].spilled_ptr.ref_obj_id = id;
> - state->stack[spi - 1].spilled_ptr.ref_obj_id = id;
> }
>
> + state->stack[spi].spilled_ptr.ref_obj_id = id;
> + state->stack[spi - 1].spilled_ptr.ref_obj_id = id;
> +
> bpf_mark_stack_write(env, state->frameno, BIT(spi - 1) | BIT(spi));
>
> return 0;
> @@ -834,10 +829,30 @@ static void invalidate_dynptr(struct bpf_verifier_env *env, struct bpf_func_stat
> bpf_mark_stack_write(env, state->frameno, BIT(spi - 1) | BIT(spi));
> }
>
> +static void __unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_func_state *state,
> + int spi)
> +{
> + enum bpf_dynptr_type type;
> + int id, ref_obj_id;
> +
> + id = state->stack[spi].spilled_ptr.id;
> + ref_obj_id = state->stack[spi].spilled_ptr.ref_obj_id;
> + type = state->stack[spi].spilled_ptr.dynptr.type;
> +
> + invalidate_dynptr(env, state, spi);
> +
> + /* Release the reference acquired on the dynptr */
> + if (dynptr_type_refcounted(type))
> + WARN_ON_ONCE(release_reference(env, ref_obj_id, true));
> +
> + /* Invalidate any slices associated with this dynptr */
> + WARN_ON_ONCE(release_reference(env, id, false));
> +}
> +
> static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_reg_state *reg)
> {
> struct bpf_func_state *state = func(env, reg);
> - int spi, ref_obj_id, i;
> + int spi;
>
> /*
> * This can only be set for PTR_TO_STACK, as CONST_PTR_TO_DYNPTR cannot
> @@ -848,44 +863,12 @@ static int unmark_stack_slots_dynptr(struct bpf_verifier_env *env, struct bpf_re
> verifier_bug(env, "CONST_PTR_TO_DYNPTR cannot be released");
> return -EFAULT;
> }
> +
> spi = dynptr_get_spi(env, reg);
> if (spi < 0)
> return spi;
>
> - if (!dynptr_type_refcounted(state->stack[spi].spilled_ptr.dynptr.type)) {
> - invalidate_dynptr(env, state, spi);
> - return 0;
> - }
> -
> - ref_obj_id = state->stack[spi].spilled_ptr.ref_obj_id;
> -
> - /* If the dynptr has a ref_obj_id, then we need to invalidate
> - * two things:
> - *
> - * 1) Any dynptrs with a matching ref_obj_id (clones)
> - * 2) Any slices derived from this dynptr.
> - */
> -
> - /* Invalidate any slices associated with this dynptr */
> - WARN_ON_ONCE(release_reference(env, ref_obj_id));
> -
> - /* Invalidate any dynptr clones */
> - for (i = 1; i < state->allocated_stack / BPF_REG_SIZE; i++) {
> - if (state->stack[i].spilled_ptr.ref_obj_id != ref_obj_id)
> - continue;
> -
> - /* it should always be the case that if the ref obj id
> - * matches then the stack slot also belongs to a
> - * dynptr
> - */
> - if (state->stack[i].slot_type[0] != STACK_DYNPTR) {
> - verifier_bug(env, "misconfigured ref_obj_id");
> - return -EFAULT;
> - }
> - if (state->stack[i].spilled_ptr.dynptr.first_slot)
> - invalidate_dynptr(env, state, i);
> - }
> -
> + __unmark_stack_slots_dynptr(env, state, spi);
> return 0;
> }
>
> @@ -905,7 +888,7 @@ static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
> {
> struct bpf_func_state *fstate;
> struct bpf_reg_state *dreg;
> - int i, dynptr_id;
> + int i, id;
>
> /* We always ensure that STACK_DYNPTR is never set partially,
> * hence just checking for slot_type[0] is enough. This is
> @@ -933,13 +916,13 @@ static int destroy_if_dynptr_stack_slot(struct bpf_verifier_env *env,
> state->stack[spi - 1].slot_type[i] = STACK_INVALID;
> }
>
> - dynptr_id = state->stack[spi].spilled_ptr.id;
> + id = state->stack[spi].spilled_ptr.id;
> /* Invalidate any slices associated with this dynptr */
> bpf_for_each_reg_in_vstate(env->cur_state, fstate, dreg, ({
> /* Dynptr slices are only PTR_TO_MEM_OR_NULL and PTR_TO_MEM */
> if (dreg->type != (PTR_TO_MEM | PTR_MAYBE_NULL) && dreg->type != PTR_TO_MEM)
> continue;
> - if (dreg->dynptr_id == dynptr_id)
> + if (dreg->ref_obj_id == id)
> mark_reg_invalid(env, dreg);
> }));
>
> @@ -1098,7 +1081,7 @@ static int unmark_stack_slots_iter(struct bpf_verifier_env *env,
> struct bpf_reg_state *st = &slot->spilled_ptr;
>
> if (i == 0)
> - WARN_ON_ONCE(release_reference(env, st->ref_obj_id));
> + WARN_ON_ONCE(release_reference(env, st->ref_obj_id, true));
>
> __mark_reg_not_init(env, st);
>
> @@ -2235,7 +2218,7 @@ static void mark_reg_known_zero(struct bpf_verifier_env *env,
> }
>
> static void __mark_dynptr_reg(struct bpf_reg_state *reg, enum bpf_dynptr_type type,
> - bool first_slot, int dynptr_id)
> + bool first_slot, int id)
> {
> /* reg->type has no meaning for STACK_DYNPTR, but when we set reg for
> * callback arguments, it does need to be CONST_PTR_TO_DYNPTR, so simply
> @@ -2244,7 +2227,7 @@ static void __mark_dynptr_reg(struct bpf_reg_state *reg, enum bpf_dynptr_type ty
> __mark_reg_known_zero(reg);
> reg->type = CONST_PTR_TO_DYNPTR;
> /* Give each dynptr a unique id to uniquely associate slices to it. */
> - reg->id = dynptr_id;
> + reg->id = id;
> reg->dynptr.type = type;
> reg->dynptr.first_slot = first_slot;
> }
> @@ -10481,22 +10464,43 @@ static int release_reference_nomark(struct bpf_verifier_state *state, int ref_ob
> *
> * This is the release function corresponding to acquire_reference(). Idempotent.
> */
> -static int release_reference(struct bpf_verifier_env *env, int ref_obj_id)
> +static int release_reference(struct bpf_verifier_env *env, int ref_obj_id, bool acquired_ref)
> {
> struct bpf_verifier_state *vstate = env->cur_state;
> struct bpf_func_state *state;
> struct bpf_reg_state *reg;
> - int err;
> + int err, spi;
>
> - err = release_reference_nomark(vstate, ref_obj_id);
> - if (err)
> - return err;
> + if (acquired_ref) {
> + err = release_reference_nomark(vstate, ref_obj_id);
> + if (err)
> + return err;
> + }
>
> bpf_for_each_reg_in_vstate(vstate, state, reg, ({
> - if (reg->ref_obj_id == ref_obj_id)
> + if (reg->ref_obj_id == ref_obj_id) {
> mark_reg_invalid(env, reg);
> + if (reg->id && reg->id != ref_obj_id) {
> + err = release_reference(env, reg->id, false);
> + if (err)
> + return err;
> + }
> + }
> }));
>
> + bpf_for_each_spilled_reg(spi, state, reg, (1 << STACK_DYNPTR)) {
> + if (!reg || !reg->dynptr.first_slot)
> + continue;
> + if (reg->ref_obj_id == ref_obj_id) {
> + __unmark_stack_slots_dynptr(env, state, spi);
> + if (reg->id && reg->id != ref_obj_id) {
> + err = release_reference(env, reg->id, false);
> + if (err)
> + return err;
> + }
> + }
> + }
> +
> return 0;
> }
>
> @@ -11673,7 +11677,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
> }));
> }
> } else if (meta.ref_obj_id) {
> - err = release_reference(env, meta.ref_obj_id);
> + err = release_reference(env, meta.ref_obj_id, true);
> } else if (register_is_null(®s[meta.release_regno])) {
> /* meta.ref_obj_id can only be 0 if register that is meant to be
> * released is NULL, which must be > R0.
> @@ -11757,19 +11761,14 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
> case BPF_FUNC_dynptr_data:
> {
> struct bpf_reg_state *reg;
> - int id, ref_obj_id;
> + int id;
>
> reg = get_dynptr_arg_reg(env, fn, regs);
> if (!reg)
> return -EFAULT;
>
> -
> - if (meta.dynptr_id) {
> - verifier_bug(env, "meta.dynptr_id already set");
> - return -EFAULT;
> - }
> - if (meta.ref_obj_id) {
> - verifier_bug(env, "meta.ref_obj_id already set");
> + if (meta.id) {
> + verifier_bug(env, "meta.id already set");
> return -EFAULT;
> }
>
> @@ -11779,14 +11778,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
> return id;
> }
>
> - ref_obj_id = dynptr_ref_obj_id(env, reg);
> - if (ref_obj_id < 0) {
> - verifier_bug(env, "failed to obtain dynptr ref_obj_id");
> - return ref_obj_id;
> - }
> -
> - meta.dynptr_id = id;
> - meta.ref_obj_id = ref_obj_id;
> + meta.id = id;
>
> break;
> }
> @@ -11990,10 +11982,9 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
> return -EFAULT;
> }
>
> - if (is_dynptr_ref_function(func_id))
> - regs[BPF_REG_0].dynptr_id = meta.dynptr_id;
> -
> - if (is_ptr_cast_function(func_id) || is_dynptr_ref_function(func_id)) {
> + if (is_dynptr_ref_function(func_id)) {
> + regs[BPF_REG_0].ref_obj_id = meta.id;
> + } else if (is_ptr_cast_function(func_id)) {
> /* For release_reference() */
> regs[BPF_REG_0].ref_obj_id = meta.ref_obj_id;
> } else if (is_acquire_function(func_id, meta.map.ptr)) {
> @@ -13328,7 +13319,9 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
> }
>
> if (reg->ref_obj_id) {
> - if (is_kfunc_release(meta) && meta->ref_obj_id) {
> + if (meta->ref_obj_id &&
> + (is_kfunc_release(meta) ||
> + meta->func_id == special_kfunc_list[KF_bpf_dynptr_from_skb])) {
> verifier_bug(env, "more than one arg with ref_obj_id R%d %u %u",
> regno, reg->ref_obj_id,
> meta->ref_obj_id);
> @@ -13478,6 +13471,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
>
> if (meta->func_id == special_kfunc_list[KF_bpf_dynptr_from_skb]) {
> dynptr_arg_type |= DYNPTR_TYPE_SKB;
> + clone_ref_obj_id = meta->ref_obj_id;
> } else if (meta->func_id == special_kfunc_list[KF_bpf_dynptr_from_xdp]) {
> dynptr_arg_type |= DYNPTR_TYPE_XDP;
> } else if (meta->func_id == special_kfunc_list[KF_bpf_dynptr_from_skb_meta]) {
> @@ -13955,12 +13949,7 @@ static int check_special_kfunc(struct bpf_verifier_env *env, struct bpf_kfunc_ca
> verifier_bug(env, "no dynptr id");
> return -EFAULT;
> }
> - regs[BPF_REG_0].dynptr_id = meta->initialized_dynptr.id;
> -
> - /* we don't need to set BPF_REG_0's ref obj id
> - * because packet slices are not refcounted (see
> - * dynptr_type_refcounted)
> - */
> + regs[BPF_REG_0].ref_obj_id = meta->initialized_dynptr.id;
> } else {
> return 0;
> }
> @@ -14154,7 +14143,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
> if (meta.initialized_dynptr.ref_obj_id) {
> err = unmark_stack_slots_dynptr(env, reg);
> } else {
> - err = release_reference(env, reg->ref_obj_id);
> + err = release_reference(env, reg->ref_obj_id, true);
> if (err)
> verbose(env, "kfunc %s#%d reference has not been acquired before\n",
> func_name, meta.func_id);
> @@ -14176,7 +14165,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
> return err;
> }
>
> - err = release_reference(env, release_ref_obj_id);
> + err = release_reference(env, release_ref_obj_id, true);
> if (err) {
> verbose(env, "kfunc %s#%d reference has not been acquired before\n",
> func_name, meta.func_id);
> --
> 2.47.3
>