* [PATCH v3 0/6] bpf: Extend the bpf_list family of APIs
@ 2026-03-02 12:40 Chengkaitao
2026-03-02 12:40 ` [PATCH v3 1/6] bpf: Introduce the bpf_list_del kfunc Chengkaitao
` (5 more replies)
0 siblings, 6 replies; 11+ messages in thread
From: Chengkaitao @ 2026-03-02 12:40 UTC (permalink / raw)
To: martin.lau, ast, daniel, andrii, eddyz87, song, yonghong.song,
john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, chengkaitao,
linux-kselftest
Cc: bpf, linux-kernel
From: Kaitao Cheng <chengkaitao@kylinos.cn>
In BPF, the linked list API currently supports only push/pop operations
at the head or tail of a list, so a bpf_list can only be used as a
stack or a queue (LIFO or FIFO). This series extends the bpf_list API
to make it behave more like a general linked list.
Five new kfuncs have been added:
bpf_list_del: remove a node from the list
bpf_list_add_impl: insert a node after a given list node
bpf_list_is_first: check if a node is the first in the list
bpf_list_is_last: check if a node is the last in the list
bpf_list_empty: check if the list is empty
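A rough usage sketch follows (not taken from the series; it assumes a
struct node_data with a bpf_list_node member 'l', plus 'head' and 'lock'
declared as in the refcounted_kptr selftest, and that 'n' is an acquired
reference to a node already in the list):

```c
/* Sketch only: node_data/head/lock as in the refcounted_kptr selftest;
 * n is an acquired reference to a node already in the list. */
bpf_spin_lock(&lock);
if (!bpf_list_is_last(&head, &n->l)) {
        /* detach n from wherever it sits in the list */
        struct bpf_list_node *l = bpf_list_del(&n->l);

        bpf_spin_unlock(&lock);
        if (l) /* drop the reference returned by bpf_list_del */
                bpf_obj_drop(container_of(l, struct node_data, l));
} else {
        bpf_spin_unlock(&lock);
}
bpf_obj_drop(n); /* drop the caller's own reference */
```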
Changes in v3:
- Add a new lock_rec member to struct bpf_reference_state for lock
holding detection.
- Add test cases to verify that the verifier correctly restricts calls
to bpf_list_del when the spin_lock is not held.
Changes in v2:
- Remove the head parameter from bpf_list_del
- Add bpf_list_add/is_first/is_last/empty to API and test cases
Link to v2:
https://lore.kernel.org/all/20260225092651.94689-1-pilgrimtao@gmail.com/
Link to v1:
https://lore.kernel.org/all/20260209025250.55750-1-pilgrimtao@gmail.com/
Kaitao Cheng (6):
bpf: Introduce the bpf_list_del kfunc.
selftests/bpf: Add test cases for bpf_list_del
bpf: add bpf_list_add_impl to insert node after a given list node
selftests/bpf: Add test case for bpf_list_add_impl
bpf: add bpf_list_is_first/last/empty kfuncs
selftests/bpf: Add test cases for bpf_list_is_first/is_last/empty
include/linux/bpf_verifier.h | 4 +
kernel/bpf/btf.c | 33 +++-
kernel/bpf/helpers.c | 92 +++++++++++
kernel/bpf/verifier.c | 76 ++++++++-
.../testing/selftests/bpf/bpf_experimental.h | 39 +++++
.../selftests/bpf/progs/refcounted_kptr.c | 149 ++++++++++++++++++
6 files changed, 385 insertions(+), 8 deletions(-)
--
2.50.1 (Apple Git-155)
* [PATCH v3 1/6] bpf: Introduce the bpf_list_del kfunc.
2026-03-02 12:40 [PATCH v3 0/6] bpf: Extend the bpf_list family of APIs Chengkaitao
@ 2026-03-02 12:40 ` Chengkaitao
2026-03-02 13:32 ` bot+bpf-ci
2026-03-02 15:19 ` Mykyta Yatsenko
2026-03-02 12:40 ` [PATCH v3 2/6] selftests/bpf: Add test cases for bpf_list_del Chengkaitao
` (4 subsequent siblings)
5 siblings, 2 replies; 11+ messages in thread
From: Chengkaitao @ 2026-03-02 12:40 UTC (permalink / raw)
To: martin.lau, ast, daniel, andrii, eddyz87, song, yonghong.song,
john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, chengkaitao,
linux-kselftest
Cc: bpf, linux-kernel
From: Kaitao Cheng <chengkaitao@kylinos.cn>
Introduce bpf_list_del, which lets a program that owns a reference to
a node in the middle of a list remove that node directly, instead of
being restricted to deleting from the head or tail.
Because the kfunc takes only a bpf_list_node parameter (no
bpf_list_head), extend the initialization of the corresponding
btf_field so the verifier can still resolve the node's container type.
Also add a new lock_rec member to struct bpf_reference_state so the
verifier can detect whether the matching bpf_spin_lock is held.
bpf_list_del is typically paired with bpf_refcount: after calling it,
the program generally needs to drop the node's reference twice (its
own reference plus the one returned by bpf_list_del) to avoid a
reference count leak.
Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
---
include/linux/bpf_verifier.h | 4 +++
kernel/bpf/btf.c | 33 +++++++++++++++++++---
kernel/bpf/helpers.c | 17 ++++++++++++
kernel/bpf/verifier.c | 54 ++++++++++++++++++++++++++++++++++--
4 files changed, 101 insertions(+), 7 deletions(-)
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index ef8e45a362d9..e1358b62d6cc 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -261,6 +261,10 @@ struct bpf_reference_state {
* it matches on unlock.
*/
void *ptr;
+ /* For REF_TYPE_LOCK_*: btf_record of the locked object, used for lock
+ * checking in kfuncs such as bpf_list_del.
+ */
+ struct btf_record *lock_rec;
};
struct bpf_retval_range {
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 4872d2a6c42d..8a977c793d56 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3785,7 +3785,6 @@ static int btf_find_field_one(const struct btf *btf,
case BPF_RES_SPIN_LOCK:
case BPF_TIMER:
case BPF_WORKQUEUE:
- case BPF_LIST_NODE:
case BPF_RB_NODE:
case BPF_REFCOUNT:
case BPF_TASK_WORK:
@@ -3794,6 +3793,27 @@ static int btf_find_field_one(const struct btf *btf,
if (ret < 0)
return ret;
break;
+ case BPF_LIST_NODE:
+ ret = btf_find_struct(btf, var_type, off, sz, field_type,
+ info_cnt ? &info[0] : &tmp);
+ if (ret < 0)
+ return ret;
+ /* graph_root for verifier: container type and node member name */
+ if (info_cnt && var_idx >= 0 && (u32)var_idx < btf_type_vlen(var)) {
+ u32 id;
+ const struct btf_member *member;
+
+ for (id = 1; id < btf_nr_types(btf); id++) {
+ if (btf_type_by_id(btf, id) == var) {
+ info[0].graph_root.value_btf_id = id;
+ member = btf_type_member(var) + var_idx;
+ info[0].graph_root.node_name =
+ __btf_name_by_offset(btf, member->name_off);
+ break;
+ }
+ }
+ }
+ break;
case BPF_KPTR_UNREF:
case BPF_KPTR_REF:
case BPF_KPTR_PERCPU:
@@ -4138,6 +4158,7 @@ struct btf_record *btf_parse_fields(const struct btf *btf, const struct btf_type
if (ret < 0)
goto end;
break;
+ case BPF_LIST_NODE:
case BPF_LIST_HEAD:
ret = btf_parse_list_head(btf, &rec->fields[i], &info_arr[i]);
if (ret < 0)
@@ -4148,7 +4169,6 @@ struct btf_record *btf_parse_fields(const struct btf *btf, const struct btf_type
if (ret < 0)
goto end;
break;
- case BPF_LIST_NODE:
case BPF_RB_NODE:
break;
default:
@@ -4192,20 +4212,25 @@ int btf_check_and_fixup_fields(const struct btf *btf, struct btf_record *rec)
int i;
/* There are three types that signify ownership of some other type:
- * kptr_ref, bpf_list_head, bpf_rb_root.
+ * kptr_ref, bpf_list_head/node, bpf_rb_root.
* kptr_ref only supports storing kernel types, which can't store
* references to program allocated local types.
*
* Hence we only need to ensure that bpf_{list_head,rb_root} ownership
* does not form cycles.
*/
- if (IS_ERR_OR_NULL(rec) || !(rec->field_mask & (BPF_GRAPH_ROOT | BPF_UPTR)))
+ if (IS_ERR_OR_NULL(rec) || !(rec->field_mask &
+ (BPF_GRAPH_ROOT | BPF_GRAPH_NODE | BPF_UPTR)))
return 0;
+
for (i = 0; i < rec->cnt; i++) {
struct btf_struct_meta *meta;
const struct btf_type *t;
u32 btf_id;
+ if (rec->fields[i].type & BPF_GRAPH_NODE)
+ rec->fields[i].graph_root.value_rec = rec;
+
if (rec->fields[i].type == BPF_UPTR) {
/* The uptr only supports pinning one page and cannot
* point to a kernel struct
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 6eb6c82ed2ee..577af62a9f7a 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -2459,6 +2459,22 @@ __bpf_kfunc struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head)
return __bpf_list_del(head, true);
}
+__bpf_kfunc struct bpf_list_node *bpf_list_del(struct bpf_list_node *node)
+{
+ struct bpf_list_node_kern *knode = (struct bpf_list_node_kern *)node;
+
+ if (unlikely(!knode))
+ return NULL;
+
+ if (WARN_ON_ONCE(!READ_ONCE(knode->owner)))
+ return NULL;
+
+ list_del_init(&knode->list_head);
+ WRITE_ONCE(knode->owner, NULL);
+
+ return node;
+}
+
__bpf_kfunc struct bpf_list_node *bpf_list_front(struct bpf_list_head *head)
{
struct list_head *h = (struct list_head *)head;
@@ -4545,6 +4561,7 @@ BTF_ID_FLAGS(func, bpf_list_push_front_impl)
BTF_ID_FLAGS(func, bpf_list_push_back_impl)
BTF_ID_FLAGS(func, bpf_list_pop_front, KF_ACQUIRE | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_list_pop_back, KF_ACQUIRE | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_list_del, KF_ACQUIRE | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_list_front, KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_list_back, KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index a3390190c26e..8a782772dd36 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1536,7 +1536,7 @@ static int acquire_reference(struct bpf_verifier_env *env, int insn_idx)
}
static int acquire_lock_state(struct bpf_verifier_env *env, int insn_idx, enum ref_state_type type,
- int id, void *ptr)
+ int id, void *ptr, struct btf_record *lock_rec)
{
struct bpf_verifier_state *state = env->cur_state;
struct bpf_reference_state *s;
@@ -1547,6 +1547,7 @@ static int acquire_lock_state(struct bpf_verifier_env *env, int insn_idx, enum r
s->type = type;
s->id = id;
s->ptr = ptr;
+ s->lock_rec = lock_rec;
state->active_locks++;
state->active_lock_id = id;
@@ -1662,6 +1663,23 @@ static struct bpf_reference_state *find_lock_state(struct bpf_verifier_state *st
return NULL;
}
+static bool rec_has_list_matching_node_type(struct bpf_verifier_env *env,
+ const struct btf_record *rec,
+ const struct btf *node_btf, u32 node_btf_id)
+{
+ u32 i;
+
+ for (i = 0; i < rec->cnt; i++) {
+ if (!(rec->fields[i].type & BPF_LIST_HEAD))
+ continue;
+ if (btf_struct_ids_match(&env->log, node_btf, node_btf_id, 0,
+ rec->fields[i].graph_root.btf,
+ rec->fields[i].graph_root.value_btf_id, true))
+ return true;
+ }
+ return false;
+}
+
static void update_peak_states(struct bpf_verifier_env *env)
{
u32 cur_states;
@@ -8576,7 +8594,8 @@ static int process_spin_lock(struct bpf_verifier_env *env, int regno, int flags)
type = REF_TYPE_RES_LOCK;
else
type = REF_TYPE_LOCK;
- err = acquire_lock_state(env, env->insn_idx, type, reg->id, ptr);
+ err = acquire_lock_state(env, env->insn_idx, type, reg->id, ptr,
+ reg_btf_record(reg));
if (err < 0) {
verbose(env, "Failed to acquire lock state\n");
return err;
@@ -12431,6 +12450,7 @@ enum special_kfunc_type {
KF_bpf_list_push_back_impl,
KF_bpf_list_pop_front,
KF_bpf_list_pop_back,
+ KF_bpf_list_del,
KF_bpf_list_front,
KF_bpf_list_back,
KF_bpf_cast_to_kern_ctx,
@@ -12491,6 +12511,7 @@ BTF_ID(func, bpf_list_push_front_impl)
BTF_ID(func, bpf_list_push_back_impl)
BTF_ID(func, bpf_list_pop_front)
BTF_ID(func, bpf_list_pop_back)
+BTF_ID(func, bpf_list_del)
BTF_ID(func, bpf_list_front)
BTF_ID(func, bpf_list_back)
BTF_ID(func, bpf_cast_to_kern_ctx)
@@ -12966,6 +12987,7 @@ static bool is_bpf_list_api_kfunc(u32 btf_id)
btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
btf_id == special_kfunc_list[KF_bpf_list_pop_front] ||
btf_id == special_kfunc_list[KF_bpf_list_pop_back] ||
+ btf_id == special_kfunc_list[KF_bpf_list_del] ||
btf_id == special_kfunc_list[KF_bpf_list_front] ||
btf_id == special_kfunc_list[KF_bpf_list_back];
}
@@ -13088,7 +13110,8 @@ static bool check_kfunc_is_graph_node_api(struct bpf_verifier_env *env,
switch (node_field_type) {
case BPF_LIST_NODE:
ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
- kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl]);
+ kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
+ kfunc_btf_id == special_kfunc_list[KF_bpf_list_del]);
break;
case BPF_RB_NODE:
ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_remove] ||
@@ -13211,6 +13234,9 @@ __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
return -EINVAL;
}
+ if (!*node_field)
+ *node_field = field;
+
field = *node_field;
et = btf_type_by_id(field->graph_root.btf, field->graph_root.value_btf_id);
@@ -13237,6 +13263,28 @@ __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
return -EINVAL;
}
+ /* bpf_list_del: require list head's lock. Use refs[] REF_TYPE_LOCK_MASK
+ * only. At lock time we stored the locked object's btf_record in ref->
+ * lock_rec, so we can get the list value type from the ref directly.
+ */
+ if (node_field_type == BPF_LIST_NODE &&
+ meta->func_id == special_kfunc_list[KF_bpf_list_del]) {
+ struct bpf_verifier_state *cur = env->cur_state;
+
+ for (int i = 0; i < cur->acquired_refs; i++) {
+ struct bpf_reference_state *s = &cur->refs[i];
+
+ if (!(s->type & REF_TYPE_LOCK_MASK) || !s->lock_rec)
+ continue;
+
+ if (rec_has_list_matching_node_type(env, s->lock_rec,
+ reg->btf, reg->btf_id))
+ return 0;
+ }
+ verbose(env, "bpf_spin_lock must be held for bpf_list_del\n");
+ return -EINVAL;
+ }
+
return 0;
}
--
2.50.1 (Apple Git-155)
* [PATCH v3 2/6] selftests/bpf: Add test cases for bpf_list_del
2026-03-02 12:40 [PATCH v3 0/6] bpf: Extend the bpf_list family of APIs Chengkaitao
2026-03-02 12:40 ` [PATCH v3 1/6] bpf: Introduce the bpf_list_del kfunc Chengkaitao
@ 2026-03-02 12:40 ` Chengkaitao
2026-03-02 12:40 ` [PATCH v3 3/6] bpf: add bpf_list_add_impl to insert node after a given list node Chengkaitao
` (3 subsequent siblings)
5 siblings, 0 replies; 11+ messages in thread
From: Chengkaitao @ 2026-03-02 12:40 UTC (permalink / raw)
To: martin.lau, ast, daniel, andrii, eddyz87, song, yonghong.song,
john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, chengkaitao,
linux-kselftest
Cc: bpf, linux-kernel
From: Kaitao Cheng <chengkaitao@kylinos.cn>
Add a test that inserts a node into both an rbtree and a list,
retrieves the node from the rbtree, removes it from the list via
bpf_list_del using the obtained node pointer, and finally frees the
node.
To exercise the locking check for bpf_list_del, also add a negative
test that expects the verifier to reject a bpf_list_del call made
without holding the spin_lock.
Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
---
.../testing/selftests/bpf/bpf_experimental.h | 10 +++
.../selftests/bpf/progs/refcounted_kptr.c | 71 +++++++++++++++++++
2 files changed, 81 insertions(+)
diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
index 4b7210c318dd..b4fb0459f11f 100644
--- a/tools/testing/selftests/bpf/bpf_experimental.h
+++ b/tools/testing/selftests/bpf/bpf_experimental.h
@@ -99,6 +99,16 @@ extern struct bpf_list_node *bpf_list_pop_front(struct bpf_list_head *head) __ks
*/
extern struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head) __ksym;
+/* Description
+ * Remove 'node' from its BPF linked list.
+ * The node must be in the list. Caller receives ownership of the
+ * removed node and must release it with bpf_obj_drop.
+ * Returns
+ * Pointer to the removed bpf_list_node, or NULL if 'node' is NULL
+ * or not in the list.
+ */
+extern struct bpf_list_node *bpf_list_del(struct bpf_list_node *node) __ksym;
+
/* Description
* Remove 'node' from rbtree with root 'root'
* Returns
diff --git a/tools/testing/selftests/bpf/progs/refcounted_kptr.c b/tools/testing/selftests/bpf/progs/refcounted_kptr.c
index 1aca85d86aeb..c4fb5615d08b 100644
--- a/tools/testing/selftests/bpf/progs/refcounted_kptr.c
+++ b/tools/testing/selftests/bpf/progs/refcounted_kptr.c
@@ -367,6 +367,77 @@ long insert_rbtree_and_stash__del_tree_##rem_tree(void *ctx) \
INSERT_STASH_READ(true, "insert_stash_read: remove from tree");
INSERT_STASH_READ(false, "insert_stash_read: don't remove from tree");
+/* Insert node_data into both rbtree and list, remove from tree, then remove
+ * from list via bpf_list_del using the node obtained from the tree.
+ */
+SEC("tc")
+__description("test_bpf_list_del: remove an arbitrary node from the list")
+__success __retval(0)
+long test_bpf_list_del(void *ctx)
+{
+ long err;
+ struct bpf_rb_node *rb;
+ struct bpf_list_node *l;
+ struct node_data *n;
+
+ err = __insert_in_tree_and_list(&head, &root, &lock);
+ if (err)
+ return err;
+
+ bpf_spin_lock(&lock);
+ rb = bpf_rbtree_first(&root);
+ if (!rb) {
+ bpf_spin_unlock(&lock);
+ return -4;
+ }
+
+ rb = bpf_rbtree_remove(&root, rb);
+ if (!rb) {
+ bpf_spin_unlock(&lock);
+ return -5;
+ }
+
+ n = container_of(rb, struct node_data, r);
+ l = bpf_list_del(&n->l);
+ bpf_spin_unlock(&lock);
+ bpf_obj_drop(n);
+ if (!l)
+ return -6;
+
+ bpf_obj_drop(container_of(l, struct node_data, l));
+ return 0;
+}
+
+SEC("?tc")
+__failure __msg("bpf_spin_lock must be held for bpf_list_del")
+long list_del_without_lock_fail(void *ctx)
+{
+ struct bpf_rb_node *rb;
+ struct bpf_list_node *l;
+ struct node_data *n;
+
+ bpf_spin_lock(&lock);
+ rb = bpf_rbtree_first(&root);
+ if (!rb) {
+ bpf_spin_unlock(&lock);
+ return -4;
+ }
+
+ rb = bpf_rbtree_remove(&root, rb);
+ bpf_spin_unlock(&lock);
+ if (!rb)
+ return -5;
+
+ n = container_of(rb, struct node_data, r);
+ l = bpf_list_del(&n->l);
+ bpf_obj_drop(n);
+ if (!l)
+ return -6;
+
+ bpf_obj_drop(container_of(l, struct node_data, l));
+ return 0;
+}
+
SEC("tc")
__success
long rbtree_refcounted_node_ref_escapes(void *ctx)
--
2.50.1 (Apple Git-155)
* [PATCH v3 3/6] bpf: add bpf_list_add_impl to insert node after a given list node
2026-03-02 12:40 [PATCH v3 0/6] bpf: Extend the bpf_list family of APIs Chengkaitao
2026-03-02 12:40 ` [PATCH v3 1/6] bpf: Introduce the bpf_list_del kfunc Chengkaitao
2026-03-02 12:40 ` [PATCH v3 2/6] selftests/bpf: Add test cases for bpf_list_del Chengkaitao
@ 2026-03-02 12:40 ` Chengkaitao
2026-03-02 12:40 ` [PATCH v3 4/6] selftests/bpf: Add test case for bpf_list_add_impl Chengkaitao
` (2 subsequent siblings)
5 siblings, 0 replies; 11+ messages in thread
From: Chengkaitao @ 2026-03-02 12:40 UTC (permalink / raw)
To: martin.lau, ast, daniel, andrii, eddyz87, song, yonghong.song,
john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, chengkaitao,
linux-kselftest
Cc: bpf, linux-kernel
From: Kaitao Cheng <chengkaitao@kylinos.cn>
Add a new kfunc bpf_list_add_impl(prev, node, meta, off) that inserts
'node' after 'prev' in a BPF linked list. 'prev' must already be in a
list; 'node' must not be in any list. The new node must be an owning
reference (e.g. from bpf_obj_new); the kfunc consumes that reference
and the node becomes non-owning once inserted.
Returns 0 on success, or -EINVAL if 'prev' is no longer in a list or
'node' is already in one (duplicate insertion). On failure, the kernel
drops the passed-in node.
Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
---
kernel/bpf/helpers.c | 34 ++++++++++++++++++++++++++++++++++
kernel/bpf/verifier.c | 23 ++++++++++++++++-------
2 files changed, 50 insertions(+), 7 deletions(-)
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 577af62a9f7a..d212962d4ed6 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -2495,6 +2495,39 @@ __bpf_kfunc struct bpf_list_node *bpf_list_back(struct bpf_list_head *head)
return (struct bpf_list_node *)h->prev;
}
+static int __bpf_list_add_after(struct bpf_list_node_kern *prev,
+ struct bpf_list_node_kern *node,
+ struct btf_record *rec, u64 off)
+{
+ struct bpf_list_head *head;
+ struct list_head *n = &node->list_head, *p = &prev->list_head;
+
+ head = READ_ONCE(prev->owner);
+ if (unlikely(!head))
+ goto fail;
+
+ if (cmpxchg(&node->owner, NULL, BPF_PTR_POISON))
+ goto fail;
+
+ list_add(n, p);
+ WRITE_ONCE(node->owner, head);
+ return 0;
+
+fail:
+ __bpf_obj_drop_impl((void *)n - off, rec, false);
+ return -EINVAL;
+}
+
+__bpf_kfunc int bpf_list_add_impl(struct bpf_list_node *prev,
+ struct bpf_list_node *node,
+ void *meta__ign, u64 off)
+{
+ struct bpf_list_node_kern *n = (void *)node, *p = (void *)prev;
+ struct btf_struct_meta *meta = meta__ign;
+
+ return __bpf_list_add_after(p, n, meta ? meta->record : NULL, off);
+}
+
__bpf_kfunc struct bpf_rb_node *bpf_rbtree_remove(struct bpf_rb_root *root,
struct bpf_rb_node *node)
{
@@ -4564,6 +4597,7 @@ BTF_ID_FLAGS(func, bpf_list_pop_back, KF_ACQUIRE | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_list_del, KF_ACQUIRE | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_list_front, KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_list_back, KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_list_add_impl)
BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_task_release, KF_RELEASE)
BTF_ID_FLAGS(func, bpf_rbtree_remove, KF_ACQUIRE | KF_RET_NULL)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 8a782772dd36..f5ee11779a5c 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12453,6 +12453,7 @@ enum special_kfunc_type {
KF_bpf_list_del,
KF_bpf_list_front,
KF_bpf_list_back,
+ KF_bpf_list_add_impl,
KF_bpf_cast_to_kern_ctx,
KF_bpf_rdonly_cast,
KF_bpf_rcu_read_lock,
@@ -12514,6 +12515,7 @@ BTF_ID(func, bpf_list_pop_back)
BTF_ID(func, bpf_list_del)
BTF_ID(func, bpf_list_front)
BTF_ID(func, bpf_list_back)
+BTF_ID(func, bpf_list_add_impl)
BTF_ID(func, bpf_cast_to_kern_ctx)
BTF_ID(func, bpf_rdonly_cast)
BTF_ID(func, bpf_rcu_read_lock)
@@ -12989,7 +12991,8 @@ static bool is_bpf_list_api_kfunc(u32 btf_id)
btf_id == special_kfunc_list[KF_bpf_list_pop_back] ||
btf_id == special_kfunc_list[KF_bpf_list_del] ||
btf_id == special_kfunc_list[KF_bpf_list_front] ||
- btf_id == special_kfunc_list[KF_bpf_list_back];
+ btf_id == special_kfunc_list[KF_bpf_list_back] ||
+ btf_id == special_kfunc_list[KF_bpf_list_add_impl];
}
static bool is_bpf_rbtree_api_kfunc(u32 btf_id)
@@ -13111,7 +13114,8 @@ static bool check_kfunc_is_graph_node_api(struct bpf_verifier_env *env,
case BPF_LIST_NODE:
ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
- kfunc_btf_id == special_kfunc_list[KF_bpf_list_del]);
+ kfunc_btf_id == special_kfunc_list[KF_bpf_list_del] ||
+ kfunc_btf_id == special_kfunc_list[KF_bpf_list_add_impl]);
break;
case BPF_RB_NODE:
ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_remove] ||
@@ -13263,12 +13267,15 @@ __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
return -EINVAL;
}
- /* bpf_list_del: require list head's lock. Use refs[] REF_TYPE_LOCK_MASK
- * only. At lock time we stored the locked object's btf_record in ref->
- * lock_rec, so we can get the list value type from the ref directly.
+ /* When there is no bpf_list_head in the parameter list, to prevent BPF
+ * programs from calling bpf_list APIs without holding the spinlock,
+ * we need to acquire the list head's lock. At lock time we stored the
+ * locked object's btf_record in ref->lock_rec, so we can get the list
+ * value type from the ref directly.
*/
if (node_field_type == BPF_LIST_NODE &&
- meta->func_id == special_kfunc_list[KF_bpf_list_del]) {
+ (meta->func_id == special_kfunc_list[KF_bpf_list_del] ||
+ meta->func_id == special_kfunc_list[KF_bpf_list_add_impl])) {
struct bpf_verifier_state *cur = env->cur_state;
for (int i = 0; i < cur->acquired_refs; i++) {
@@ -13281,7 +13288,7 @@ __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
reg->btf, reg->btf_id))
return 0;
}
- verbose(env, "bpf_spin_lock must be held for bpf_list_del\n");
+ verbose(env, "bpf_spin_lock must be held for bpf_list api\n");
return -EINVAL;
}
@@ -14278,6 +14285,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
if (meta.func_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
meta.func_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
+ meta.func_id == special_kfunc_list[KF_bpf_list_add_impl] ||
meta.func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
release_ref_obj_id = regs[BPF_REG_2].ref_obj_id;
insn_aux->insert_off = regs[BPF_REG_2].off;
@@ -23244,6 +23252,7 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
*cnt = 3;
} else if (desc->func_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
desc->func_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
+ desc->func_id == special_kfunc_list[KF_bpf_list_add_impl] ||
desc->func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
struct btf_struct_meta *kptr_struct_meta = env->insn_aux_data[insn_idx].kptr_struct_meta;
int struct_meta_reg = BPF_REG_3;
--
2.50.1 (Apple Git-155)
* [PATCH v3 4/6] selftests/bpf: Add test case for bpf_list_add_impl
2026-03-02 12:40 [PATCH v3 0/6] bpf: Extend the bpf_list family of APIs Chengkaitao
` (2 preceding siblings ...)
2026-03-02 12:40 ` [PATCH v3 3/6] bpf: add bpf_list_add_impl to insert node after a given list node Chengkaitao
@ 2026-03-02 12:40 ` Chengkaitao
2026-03-02 12:40 ` [PATCH v3 5/6] bpf: add bpf_list_is_first/last/empty kfuncs Chengkaitao
2026-03-02 12:40 ` [PATCH v3 6/6] selftests/bpf: Add test cases for bpf_list_is_first/is_last/empty Chengkaitao
5 siblings, 0 replies; 11+ messages in thread
From: Chengkaitao @ 2026-03-02 12:40 UTC (permalink / raw)
To: martin.lau, ast, daniel, andrii, eddyz87, song, yonghong.song,
john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, chengkaitao,
linux-kselftest
Cc: bpf, linux-kernel
From: Kaitao Cheng <chengkaitao@kylinos.cn>
Extend refcounted_kptr test (test_list_add_del) to exercise bpf_list_add:
add a second node after the first, then bpf_list_del both nodes.
Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
---
.../testing/selftests/bpf/bpf_experimental.h | 14 +++
.../selftests/bpf/progs/refcounted_kptr.c | 105 ++++++++++++++----
2 files changed, 97 insertions(+), 22 deletions(-)
diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
index b4fb0459f11f..48106ea5dda8 100644
--- a/tools/testing/selftests/bpf/bpf_experimental.h
+++ b/tools/testing/selftests/bpf/bpf_experimental.h
@@ -109,6 +109,20 @@ extern struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head) __ksy
*/
extern struct bpf_list_node *bpf_list_del(struct bpf_list_node *node) __ksym;
+/* Description
+ * Insert 'node' after 'prev' in the BPF linked list. 'prev' must already
+ * be in a list; 'node' must not be in any list. The 'meta' and 'off'
+ * parameters are rewritten by the verifier, no need for BPF programs to
+ * set them.
+ * Returns
+ * 0 on success, -EINVAL if prev is not in a list or node is already in a list.
+ */
+extern int bpf_list_add_impl(struct bpf_list_node *prev, struct bpf_list_node *node,
+ void *meta, __u64 off) __ksym;
+
+/* Convenience macro to wrap over bpf_list_add_impl */
+#define bpf_list_add(prev, node) bpf_list_add_impl(prev, node, NULL, 0)
+
/* Description
* Remove 'node' from rbtree with root 'root'
* Returns
diff --git a/tools/testing/selftests/bpf/progs/refcounted_kptr.c b/tools/testing/selftests/bpf/progs/refcounted_kptr.c
index c4fb5615d08b..4d979f5ad9e8 100644
--- a/tools/testing/selftests/bpf/progs/refcounted_kptr.c
+++ b/tools/testing/selftests/bpf/progs/refcounted_kptr.c
@@ -367,18 +367,19 @@ long insert_rbtree_and_stash__del_tree_##rem_tree(void *ctx) \
INSERT_STASH_READ(true, "insert_stash_read: remove from tree");
INSERT_STASH_READ(false, "insert_stash_read: don't remove from tree");
-/* Insert node_data into both rbtree and list, remove from tree, then remove
- * from list via bpf_list_del using the node obtained from the tree.
+/* Insert one node in tree and list, remove it from tree, add a second
+ * node after it in list with bpf_list_add, then remove both nodes from
+ * list via bpf_list_del.
*/
SEC("tc")
-__description("test_bpf_list_del: remove an arbitrary node from the list")
+__description("test_list_add_del: test bpf_list_add/del")
__success __retval(0)
-long test_bpf_list_del(void *ctx)
+long test_list_add_del(void *ctx)
{
- long err;
+ long err = 0;
struct bpf_rb_node *rb;
- struct bpf_list_node *l;
- struct node_data *n;
+ struct bpf_list_node *l, *l_1;
+ struct node_data *n, *n_1, *m_1;
err = __insert_in_tree_and_list(&head, &root, &lock);
if (err)
@@ -392,29 +393,62 @@ long test_bpf_list_del(void *ctx)
}
rb = bpf_rbtree_remove(&root, rb);
- if (!rb) {
- bpf_spin_unlock(&lock);
+ bpf_spin_unlock(&lock);
+ if (!rb)
return -5;
- }
n = container_of(rb, struct node_data, r);
+ n_1 = bpf_obj_new(typeof(*n_1));
+ if (!n_1) {
+ bpf_obj_drop(n);
+ return -1;
+ }
+ m_1 = bpf_refcount_acquire(n_1);
+ if (!m_1) {
+ bpf_obj_drop(n);
+ bpf_obj_drop(n_1);
+ return -1;
+ }
+
+ bpf_spin_lock(&lock);
+ if (bpf_list_add(&n->l, &n_1->l)) {
+ bpf_spin_unlock(&lock);
+ bpf_obj_drop(n);
+ bpf_obj_drop(m_1);
+ return -8;
+ }
+
l = bpf_list_del(&n->l);
+ l_1 = bpf_list_del(&m_1->l);
bpf_spin_unlock(&lock);
bpf_obj_drop(n);
- if (!l)
- return -6;
+ bpf_obj_drop(m_1);
- bpf_obj_drop(container_of(l, struct node_data, l));
- return 0;
+ if (l)
+ bpf_obj_drop(container_of(l, struct node_data, l));
+ else
+ err = -6;
+
+ if (l_1)
+ bpf_obj_drop(container_of(l_1, struct node_data, l));
+ else
+ err = -6;
+
+ return err;
}
SEC("?tc")
-__failure __msg("bpf_spin_lock must be held for bpf_list_del")
-long list_del_without_lock_fail(void *ctx)
+__failure __msg("bpf_spin_lock must be held for bpf_list api")
+long list_add_del_without_lock_fail(void *ctx)
{
+ long err = 0;
struct bpf_rb_node *rb;
- struct bpf_list_node *l;
- struct node_data *n;
+ struct bpf_list_node *l, *l_1;
+ struct node_data *n, *n_1, *m_1;
+
+ err = __insert_in_tree_and_list(&head, &root, &lock);
+ if (err)
+ return err;
bpf_spin_lock(&lock);
rb = bpf_rbtree_first(&root);
@@ -429,13 +463,40 @@ long list_del_without_lock_fail(void *ctx)
return -5;
n = container_of(rb, struct node_data, r);
+ n_1 = bpf_obj_new(typeof(*n_1));
+ if (!n_1) {
+ bpf_obj_drop(n);
+ return -1;
+ }
+ m_1 = bpf_refcount_acquire(n_1);
+ if (!m_1) {
+ bpf_obj_drop(n);
+ bpf_obj_drop(n_1);
+ return -1;
+ }
+
+ if (bpf_list_add(&n->l, &n_1->l)) {
+ bpf_obj_drop(n);
+ bpf_obj_drop(m_1);
+ return -8;
+ }
+
l = bpf_list_del(&n->l);
+ l_1 = bpf_list_del(&m_1->l);
bpf_obj_drop(n);
- if (!l)
- return -6;
+ bpf_obj_drop(m_1);
- bpf_obj_drop(container_of(l, struct node_data, l));
- return 0;
+ if (l)
+ bpf_obj_drop(container_of(l, struct node_data, l));
+ else
+ err = -6;
+
+ if (l_1)
+ bpf_obj_drop(container_of(l_1, struct node_data, l));
+ else
+ err = -6;
+
+ return err;
}
SEC("tc")
--
2.50.1 (Apple Git-155)
* [PATCH v3 5/6] bpf: add bpf_list_is_first/last/empty kfuncs
2026-03-02 12:40 [PATCH v3 0/6] bpf: Extend the bpf_list family of APIs Chengkaitao
` (3 preceding siblings ...)
2026-03-02 12:40 ` [PATCH v3 4/6] selftests/bpf: Add test case for bpf_list_add_impl Chengkaitao
@ 2026-03-02 12:40 ` Chengkaitao
2026-03-02 12:40 ` [PATCH v3 6/6] selftests/bpf: Add test cases for bpf_list_is_first/is_last/empty Chengkaitao
5 siblings, 0 replies; 11+ messages in thread
From: Chengkaitao @ 2026-03-02 12:40 UTC (permalink / raw)
To: martin.lau, ast, daniel, andrii, eddyz87, song, yonghong.song,
john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, chengkaitao,
linux-kselftest
Cc: bpf, linux-kernel
From: Kaitao Cheng <chengkaitao@kylinos.cn>
Add three kfuncs for BPF linked list queries:
- bpf_list_is_first(head, node): true if node is the first in the list.
- bpf_list_is_last(head, node): true if node is the last in the list.
- bpf_list_empty(head): true if the list has no entries.
Previously, implementing these checks required popping the first or
last node with bpf_list_pop_front/back, comparing it against the node
in question, and then pushing it back with bpf_list_push_front/back,
which was inefficient. With bpf_list_is_first/is_last/empty, a program
can perform these checks directly, without removing and re-inserting
any node.
Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
---
kernel/bpf/helpers.c | 41 +++++++++++++++++++++++++++++++++++++++++
kernel/bpf/verifier.c | 15 +++++++++++++--
2 files changed, 54 insertions(+), 2 deletions(-)
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index d212962d4ed6..ada14eca58ab 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -2528,6 +2528,44 @@ __bpf_kfunc int bpf_list_add_impl(struct bpf_list_node *prev,
return __bpf_list_add_after(p, n, meta ? meta->record : NULL, off);
}
+__bpf_kfunc bool bpf_list_is_first(struct bpf_list_head *head, struct bpf_list_node *node)
+{
+ struct list_head *h = (struct list_head *)head;
+ struct bpf_list_node_kern *n = (struct bpf_list_node_kern *)node;
+
+ if (unlikely(!h->next) || list_empty(h))
+ return false;
+
+ if (READ_ONCE(n->owner) != head)
+ return false;
+
+ return h->next == &n->list_head;
+}
+
+__bpf_kfunc bool bpf_list_is_last(struct bpf_list_head *head, struct bpf_list_node *node)
+{
+ struct list_head *h = (struct list_head *)head;
+ struct bpf_list_node_kern *n = (struct bpf_list_node_kern *)node;
+
+ if (unlikely(!h->next) || list_empty(h))
+ return false;
+
+ if (READ_ONCE(n->owner) != head)
+ return false;
+
+ return h->prev == &n->list_head;
+}
+
+__bpf_kfunc bool bpf_list_empty(struct bpf_list_head *head)
+{
+ struct list_head *h = (struct list_head *)head;
+
+ if (unlikely(!h->next))
+ return true;
+
+ return list_empty(h);
+}
+
__bpf_kfunc struct bpf_rb_node *bpf_rbtree_remove(struct bpf_rb_root *root,
struct bpf_rb_node *node)
{
@@ -4598,6 +4636,9 @@ BTF_ID_FLAGS(func, bpf_list_del, KF_ACQUIRE | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_list_front, KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_list_back, KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_list_add_impl)
+BTF_ID_FLAGS(func, bpf_list_is_first)
+BTF_ID_FLAGS(func, bpf_list_is_last)
+BTF_ID_FLAGS(func, bpf_list_empty)
BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_task_release, KF_RELEASE)
BTF_ID_FLAGS(func, bpf_rbtree_remove, KF_ACQUIRE | KF_RET_NULL)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index f5ee11779a5c..1c36b0938da7 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12454,6 +12454,9 @@ enum special_kfunc_type {
KF_bpf_list_front,
KF_bpf_list_back,
KF_bpf_list_add_impl,
+ KF_bpf_list_is_first,
+ KF_bpf_list_is_last,
+ KF_bpf_list_empty,
KF_bpf_cast_to_kern_ctx,
KF_bpf_rdonly_cast,
KF_bpf_rcu_read_lock,
@@ -12516,6 +12519,9 @@ BTF_ID(func, bpf_list_del)
BTF_ID(func, bpf_list_front)
BTF_ID(func, bpf_list_back)
BTF_ID(func, bpf_list_add_impl)
+BTF_ID(func, bpf_list_is_first)
+BTF_ID(func, bpf_list_is_last)
+BTF_ID(func, bpf_list_empty)
BTF_ID(func, bpf_cast_to_kern_ctx)
BTF_ID(func, bpf_rdonly_cast)
BTF_ID(func, bpf_rcu_read_lock)
@@ -12992,7 +12998,10 @@ static bool is_bpf_list_api_kfunc(u32 btf_id)
btf_id == special_kfunc_list[KF_bpf_list_del] ||
btf_id == special_kfunc_list[KF_bpf_list_front] ||
btf_id == special_kfunc_list[KF_bpf_list_back] ||
- btf_id == special_kfunc_list[KF_bpf_list_add_impl];
+ btf_id == special_kfunc_list[KF_bpf_list_add_impl] ||
+ btf_id == special_kfunc_list[KF_bpf_list_is_first] ||
+ btf_id == special_kfunc_list[KF_bpf_list_is_last] ||
+ btf_id == special_kfunc_list[KF_bpf_list_empty];
}
static bool is_bpf_rbtree_api_kfunc(u32 btf_id)
@@ -13115,7 +13124,9 @@ static bool check_kfunc_is_graph_node_api(struct bpf_verifier_env *env,
ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
kfunc_btf_id == special_kfunc_list[KF_bpf_list_del] ||
- kfunc_btf_id == special_kfunc_list[KF_bpf_list_add_impl]);
+ kfunc_btf_id == special_kfunc_list[KF_bpf_list_add_impl] ||
+ kfunc_btf_id == special_kfunc_list[KF_bpf_list_is_first] ||
+ kfunc_btf_id == special_kfunc_list[KF_bpf_list_is_last]);
break;
case BPF_RB_NODE:
ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_remove] ||
--
2.50.1 (Apple Git-155)
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH v3 6/6] selftests/bpf: Add test cases for bpf_list_is_first/is_last/empty
2026-03-02 12:40 [PATCH v3 0/6] bpf: Extend the bpf_list family of APIs Chengkaitao
` (4 preceding siblings ...)
2026-03-02 12:40 ` [PATCH v3 5/6] bpf: add bpf_list_is_first/last/empty kfuncs Chengkaitao
@ 2026-03-02 12:40 ` Chengkaitao
5 siblings, 0 replies; 11+ messages in thread
From: Chengkaitao @ 2026-03-02 12:40 UTC (permalink / raw)
To: martin.lau, ast, daniel, andrii, eddyz87, song, yonghong.song,
john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, chengkaitao,
linux-kselftest
Cc: bpf, linux-kernel
From: Kaitao Cheng <chengkaitao@kylinos.cn>
Rename test_list_add_del to list_add_del_and_check and extend it to
cover the new kfuncs: assert that the list is non-empty after the
first insert, that is_first(n) and is_last(m_1) hold after
bpf_list_add, and that the list is empty after both nodes are removed.
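The shape of the extended test can be mirrored in a standalone
userspace sketch (illustrative only; the helper names model the
bpf_list_add/bpf_list_del/bpf_list_empty kfuncs, this is not the BPF
program itself):

```c
#include <assert.h>
#include <stdbool.h>

struct node {
	struct node *prev, *next;
};

static void list_init(struct node *head)
{
	head->prev = head->next = head;
}

static bool is_empty(const struct node *head)
{
	return head->next == head;
}

/* Models bpf_list_add(prev, node): link n right after prev. */
static void add_after(struct node *prev, struct node *n)
{
	n->prev = prev;
	n->next = prev->next;
	prev->next->prev = n;
	prev->next = n;
}

/* Models bpf_list_del(node): unlink n from whatever list holds it. */
static void del(struct node *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	n->prev = n->next = n;
}
```

The test follows the same sequence: insert, insert-after, check
first/last, delete both, then confirm the list is empty again.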
Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
---
.../testing/selftests/bpf/bpf_experimental.h | 15 +++++++++++
.../selftests/bpf/progs/refcounted_kptr.c | 27 +++++++++++++++----
2 files changed, 37 insertions(+), 5 deletions(-)
diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
index 48106ea5dda8..2eb107e771ad 100644
--- a/tools/testing/selftests/bpf/bpf_experimental.h
+++ b/tools/testing/selftests/bpf/bpf_experimental.h
@@ -123,6 +123,21 @@ extern int bpf_list_add_impl(struct bpf_list_node *prev, struct bpf_list_node *n
/* Convenience macro to wrap over bpf_list_add_impl */
#define bpf_list_add(prev, node) bpf_list_add_impl(prev, node, NULL, 0)
+/* Description
+ * Return true if 'node' is the first node in the list with head 'head'.
+ */
+extern bool bpf_list_is_first(struct bpf_list_head *head, struct bpf_list_node *node) __ksym;
+
+/* Description
+ * Return true if 'node' is the last node in the list with head 'head'.
+ */
+extern bool bpf_list_is_last(struct bpf_list_head *head, struct bpf_list_node *node) __ksym;
+
+/* Description
+ * Return true if the list with head 'head' has no entries.
+ */
+extern bool bpf_list_empty(struct bpf_list_head *head) __ksym;
+
/* Description
* Remove 'node' from rbtree with root 'root'
* Returns
diff --git a/tools/testing/selftests/bpf/progs/refcounted_kptr.c b/tools/testing/selftests/bpf/progs/refcounted_kptr.c
index 4d979f5ad9e8..6797c53f550e 100644
--- a/tools/testing/selftests/bpf/progs/refcounted_kptr.c
+++ b/tools/testing/selftests/bpf/progs/refcounted_kptr.c
@@ -367,14 +367,14 @@ long insert_rbtree_and_stash__del_tree_##rem_tree(void *ctx) \
INSERT_STASH_READ(true, "insert_stash_read: remove from tree");
INSERT_STASH_READ(false, "insert_stash_read: don't remove from tree");
-/* Insert one node in tree and list, remove it from tree, add a second
- * node after it in list with bpf_list_add, then remove both nodes from
- * list via bpf_list_del.
+/* Insert one node in tree and list, remove it from tree, add a second node
+ * after it with bpf_list_add, check bpf_list_is_first/is_last/empty, then
+ * remove both nodes from list via bpf_list_del.
*/
SEC("tc")
-__description("test_list_add_del: test bpf_list_add/del")
+__description("list_add_del_and_check: test bpf_list_add/del/is_first/is_last/empty")
__success __retval(0)
-long test_list_add_del(void *ctx)
+long list_add_del_and_check(void *ctx)
{
long err = 0;
struct bpf_rb_node *rb;
@@ -386,6 +386,11 @@ long test_list_add_del(void *ctx)
return err;
bpf_spin_lock(&lock);
+ if (bpf_list_empty(&head)) {
+ bpf_spin_unlock(&lock);
+ return -7;
+ }
+
rb = bpf_rbtree_first(&root);
if (!rb) {
bpf_spin_unlock(&lock);
@@ -418,6 +423,14 @@ long test_list_add_del(void *ctx)
return -8;
}
+ if (!bpf_list_is_first(&head, &n->l) ||
+ !bpf_list_is_last(&head, &m_1->l)) {
+ bpf_spin_unlock(&lock);
+ bpf_obj_drop(n);
+ bpf_obj_drop(m_1);
+ return -9;
+ }
+
l = bpf_list_del(&n->l);
l_1 = bpf_list_del(&m_1->l);
bpf_spin_unlock(&lock);
@@ -434,6 +447,10 @@ long test_list_add_del(void *ctx)
else
err = -6;
+ bpf_spin_lock(&lock);
+ if (!bpf_list_empty(&head))
+ err = -7;
+ bpf_spin_unlock(&lock);
return err;
}
--
2.50.1 (Apple Git-155)
* Re: [PATCH v3 1/6] bpf: Introduce the bpf_list_del kfunc.
2026-03-02 12:40 ` [PATCH v3 1/6] bpf: Introduce the bpf_list_del kfunc Chengkaitao
@ 2026-03-02 13:32 ` bot+bpf-ci
2026-03-02 15:19 ` Mykyta Yatsenko
1 sibling, 0 replies; 11+ messages in thread
From: bot+bpf-ci @ 2026-03-02 13:32 UTC (permalink / raw)
To: pilgrimtao, martin.lau, ast, daniel, andrii, eddyz87, song,
yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
chengkaitao, linux-kselftest
Cc: bpf, linux-kernel, ast, andrii, daniel, martin.lau, eddyz87,
yonghong.song, clm, ihor.solodrai
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
[ ... ]
> @@ -4080,20 +4100,25 @@ int btf_check_and_fixup_fields(const struct btf *btf, struct btf_record *rec)
> int i;
>
> /* There are three types that signify ownership of some other type:
> - * kptr_ref, bpf_list_head, bpf_rb_root.
> + * kptr_ref, bpf_list_head/node, bpf_rb_root.
Should "bpf_list_head/node" be just "bpf_list_head" here? bpf_list_node
doesn't signify ownership of another type -- it is the type that is
owned.
The unmodified paragraph just below still says:
"Hence we only need to ensure that bpf_{list_head,rb_root} ownership
does not form cycles."
And further down in this same function:
"A type can only _own_ another type in user BTF if it has a
bpf_{list_head,rb_root}. Let's call these root types."
Both correctly exclude nodes from the ownership types.
> * kptr_ref only supports storing kernel types, which can't store
> * references to program allocated local types.
> *
> * Hence we only need to ensure that bpf_{list_head,rb_root} ownership
> * does not form cycles.
> */
[ ... ]
---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/22576595943
AI-authorship-score: low
AI-authorship-explanation: Slightly formal commit message phrasing is consistent with non-native English; code and multi-version iteration strongly suggest human authorship.
issues-found: 1
issue-severity-score: low
issue-severity-explanation: Inaccurate comment in btf_check_and_fixup_fields incorrectly describes bpf_list_node as signifying ownership, contradicting other comments within the same function; no runtime impact.
* Re: [PATCH v3 1/6] bpf: Introduce the bpf_list_del kfunc.
2026-03-02 12:40 ` [PATCH v3 1/6] bpf: Introduce the bpf_list_del kfunc Chengkaitao
2026-03-02 13:32 ` bot+bpf-ci
@ 2026-03-02 15:19 ` Mykyta Yatsenko
2026-03-03 1:15 ` Chengkaitao
1 sibling, 1 reply; 11+ messages in thread
From: Mykyta Yatsenko @ 2026-03-02 15:19 UTC (permalink / raw)
To: Chengkaitao, martin.lau, ast, daniel, andrii, eddyz87, song,
yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
chengkaitao, linux-kselftest
Cc: bpf, linux-kernel
Chengkaitao <pilgrimtao@gmail.com> writes:
> From: Kaitao Cheng <chengkaitao@kylinos.cn>
>
> If a user holds ownership of a node in the middle of a list, they
> can directly remove it from the list without strictly adhering to
> deletion rules from the head or tail.
>
> When a kfunc has only one bpf_list_node parameter, supplement the
> initialization of the corresponding btf_field. Add a new lock_rec
> member to struct bpf_reference_state for lock holding detection.
>
> This is typically paired with bpf_refcount. After calling
> bpf_list_del, it is generally necessary to drop the reference to
> the list node twice to prevent reference count leaks.
>
> Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
> ---
> include/linux/bpf_verifier.h | 4 +++
> kernel/bpf/btf.c | 33 +++++++++++++++++++---
> kernel/bpf/helpers.c | 17 ++++++++++++
> kernel/bpf/verifier.c | 54 ++++++++++++++++++++++++++++++++++--
> 4 files changed, 101 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> index ef8e45a362d9..e1358b62d6cc 100644
> --- a/include/linux/bpf_verifier.h
> +++ b/include/linux/bpf_verifier.h
> @@ -261,6 +261,10 @@ struct bpf_reference_state {
> * it matches on unlock.
> */
> void *ptr;
> + /* For REF_TYPE_LOCK_*: btf_record of the locked object, used for lock
> + * checking in kfuncs such as bpf_list_del.
> + */
> + struct btf_record *lock_rec;
As far as I understand this patch introduces a weaker type of
verification: we only check that there is some lock held by the
object of the same type as this node's head, but there is no guarantee
it's the same instance. Please confirm if I'm right.
What would it take to implement instance validation instead of
type-based lock check?
> };
>
> struct bpf_retval_range {
> diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
> index 4872d2a6c42d..8a977c793d56 100644
> --- a/kernel/bpf/btf.c
> +++ b/kernel/bpf/btf.c
> @@ -3785,7 +3785,6 @@ static int btf_find_field_one(const struct btf *btf,
> case BPF_RES_SPIN_LOCK:
> case BPF_TIMER:
> case BPF_WORKQUEUE:
> - case BPF_LIST_NODE:
> case BPF_RB_NODE:
> case BPF_REFCOUNT:
> case BPF_TASK_WORK:
> @@ -3794,6 +3793,27 @@ static int btf_find_field_one(const struct btf *btf,
> if (ret < 0)
> return ret;
> break;
> + case BPF_LIST_NODE:
> + ret = btf_find_struct(btf, var_type, off, sz, field_type,
> + info_cnt ? &info[0] : &tmp);
> + if (ret < 0)
> + return ret;
> + /* graph_root for verifier: container type and node member name */
> + if (info_cnt && var_idx >= 0 && (u32)var_idx < btf_type_vlen(var)) {
> + u32 id;
> + const struct btf_member *member;
> +
> + for (id = 1; id < btf_nr_types(btf); id++) {
> + if (btf_type_by_id(btf, id) == var) {
> + info[0].graph_root.value_btf_id = id;
> + member = btf_type_member(var) + var_idx;
> + info[0].graph_root.node_name =
> + __btf_name_by_offset(btf, member->name_off);
> + break;
> + }
> + }
> + }
> + break;
> case BPF_KPTR_UNREF:
> case BPF_KPTR_REF:
> case BPF_KPTR_PERCPU:
> @@ -4138,6 +4158,7 @@ struct btf_record *btf_parse_fields(const struct btf *btf, const struct btf_type
> if (ret < 0)
> goto end;
> break;
> + case BPF_LIST_NODE:
> case BPF_LIST_HEAD:
> ret = btf_parse_list_head(btf, &rec->fields[i], &info_arr[i]);
> if (ret < 0)
> @@ -4148,7 +4169,6 @@ struct btf_record *btf_parse_fields(const struct btf *btf, const struct btf_type
> if (ret < 0)
> goto end;
> break;
> - case BPF_LIST_NODE:
> case BPF_RB_NODE:
> break;
> default:
> @@ -4192,20 +4212,25 @@ int btf_check_and_fixup_fields(const struct btf *btf, struct btf_record *rec)
> int i;
>
> /* There are three types that signify ownership of some other type:
> - * kptr_ref, bpf_list_head, bpf_rb_root.
> + * kptr_ref, bpf_list_head/node, bpf_rb_root.
> * kptr_ref only supports storing kernel types, which can't store
> * references to program allocated local types.
> *
> * Hence we only need to ensure that bpf_{list_head,rb_root} ownership
> * does not form cycles.
> */
> - if (IS_ERR_OR_NULL(rec) || !(rec->field_mask & (BPF_GRAPH_ROOT | BPF_UPTR)))
> + if (IS_ERR_OR_NULL(rec) || !(rec->field_mask &
> + (BPF_GRAPH_ROOT | BPF_GRAPH_NODE | BPF_UPTR)))
> return 0;
> +
> for (i = 0; i < rec->cnt; i++) {
> struct btf_struct_meta *meta;
> const struct btf_type *t;
> u32 btf_id;
>
> + if (rec->fields[i].type & BPF_GRAPH_NODE)
> + rec->fields[i].graph_root.value_rec = rec;
> +
> if (rec->fields[i].type == BPF_UPTR) {
> /* The uptr only supports pinning one page and cannot
> * point to a kernel struct
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 6eb6c82ed2ee..577af62a9f7a 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -2459,6 +2459,22 @@ __bpf_kfunc struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head)
> return __bpf_list_del(head, true);
> }
>
> +__bpf_kfunc struct bpf_list_node *bpf_list_del(struct bpf_list_node *node)
> +{
> + struct bpf_list_node_kern *knode = (struct bpf_list_node_kern *)node;
> +
> + if (unlikely(!knode))
> + return NULL;
> +
> + if (WARN_ON_ONCE(!READ_ONCE(knode->owner)))
> + return NULL;
> +
> + list_del_init(&knode->list_head);
> + WRITE_ONCE(knode->owner, NULL);
> +
> + return node;
> +}
> +
> __bpf_kfunc struct bpf_list_node *bpf_list_front(struct bpf_list_head *head)
> {
> struct list_head *h = (struct list_head *)head;
> @@ -4545,6 +4561,7 @@ BTF_ID_FLAGS(func, bpf_list_push_front_impl)
> BTF_ID_FLAGS(func, bpf_list_push_back_impl)
> BTF_ID_FLAGS(func, bpf_list_pop_front, KF_ACQUIRE | KF_RET_NULL)
> BTF_ID_FLAGS(func, bpf_list_pop_back, KF_ACQUIRE | KF_RET_NULL)
> +BTF_ID_FLAGS(func, bpf_list_del, KF_ACQUIRE | KF_RET_NULL)
> BTF_ID_FLAGS(func, bpf_list_front, KF_RET_NULL)
> BTF_ID_FLAGS(func, bpf_list_back, KF_RET_NULL)
> BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index a3390190c26e..8a782772dd36 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -1536,7 +1536,7 @@ static int acquire_reference(struct bpf_verifier_env *env, int insn_idx)
> }
>
> static int acquire_lock_state(struct bpf_verifier_env *env, int insn_idx, enum ref_state_type type,
> - int id, void *ptr)
> + int id, void *ptr, struct btf_record *lock_rec)
> {
> struct bpf_verifier_state *state = env->cur_state;
> struct bpf_reference_state *s;
> @@ -1547,6 +1547,7 @@ static int acquire_lock_state(struct bpf_verifier_env *env, int insn_idx, enum r
> s->type = type;
> s->id = id;
> s->ptr = ptr;
> + s->lock_rec = lock_rec;
>
> state->active_locks++;
> state->active_lock_id = id;
> @@ -1662,6 +1663,23 @@ static struct bpf_reference_state *find_lock_state(struct bpf_verifier_state *st
> return NULL;
> }
>
> +static bool rec_has_list_matching_node_type(struct bpf_verifier_env *env,
> + const struct btf_record *rec,
> + const struct btf *node_btf, u32 node_btf_id)
> +{
> + u32 i;
> +
> + for (i = 0; i < rec->cnt; i++) {
> + if (!(rec->fields[i].type & BPF_LIST_HEAD))
> + continue;
> + if (btf_struct_ids_match(&env->log, node_btf, node_btf_id, 0,
> + rec->fields[i].graph_root.btf,
> + rec->fields[i].graph_root.value_btf_id, true))
> + return true;
> + }
> + return false;
> +}
> +
> static void update_peak_states(struct bpf_verifier_env *env)
> {
> u32 cur_states;
> @@ -8576,7 +8594,8 @@ static int process_spin_lock(struct bpf_verifier_env *env, int regno, int flags)
> type = REF_TYPE_RES_LOCK;
> else
> type = REF_TYPE_LOCK;
> - err = acquire_lock_state(env, env->insn_idx, type, reg->id, ptr);
> + err = acquire_lock_state(env, env->insn_idx, type, reg->id, ptr,
> + reg_btf_record(reg));
> if (err < 0) {
> verbose(env, "Failed to acquire lock state\n");
> return err;
> @@ -12431,6 +12450,7 @@ enum special_kfunc_type {
> KF_bpf_list_push_back_impl,
> KF_bpf_list_pop_front,
> KF_bpf_list_pop_back,
> + KF_bpf_list_del,
> KF_bpf_list_front,
> KF_bpf_list_back,
> KF_bpf_cast_to_kern_ctx,
> @@ -12491,6 +12511,7 @@ BTF_ID(func, bpf_list_push_front_impl)
> BTF_ID(func, bpf_list_push_back_impl)
> BTF_ID(func, bpf_list_pop_front)
> BTF_ID(func, bpf_list_pop_back)
> +BTF_ID(func, bpf_list_del)
> BTF_ID(func, bpf_list_front)
> BTF_ID(func, bpf_list_back)
> BTF_ID(func, bpf_cast_to_kern_ctx)
> @@ -12966,6 +12987,7 @@ static bool is_bpf_list_api_kfunc(u32 btf_id)
> btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
> btf_id == special_kfunc_list[KF_bpf_list_pop_front] ||
> btf_id == special_kfunc_list[KF_bpf_list_pop_back] ||
> + btf_id == special_kfunc_list[KF_bpf_list_del] ||
> btf_id == special_kfunc_list[KF_bpf_list_front] ||
> btf_id == special_kfunc_list[KF_bpf_list_back];
> }
> @@ -13088,7 +13110,8 @@ static bool check_kfunc_is_graph_node_api(struct bpf_verifier_env *env,
> switch (node_field_type) {
> case BPF_LIST_NODE:
> ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
> - kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl]);
> + kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
> + kfunc_btf_id == special_kfunc_list[KF_bpf_list_del]);
> break;
> case BPF_RB_NODE:
> ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_remove] ||
> @@ -13211,6 +13234,9 @@ __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
> return -EINVAL;
> }
>
> + if (!*node_field)
> + *node_field = field;
> +
> field = *node_field;
>
> et = btf_type_by_id(field->graph_root.btf, field->graph_root.value_btf_id);
> @@ -13237,6 +13263,28 @@ __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
> return -EINVAL;
> }
>
> + /* bpf_list_del: require list head's lock. Use refs[] REF_TYPE_LOCK_MASK
> + * only. At lock time we stored the locked object's btf_record in ref->
> + * lock_rec, so we can get the list value type from the ref directly.
> + */
> + if (node_field_type == BPF_LIST_NODE &&
> + meta->func_id == special_kfunc_list[KF_bpf_list_del]) {
> + struct bpf_verifier_state *cur = env->cur_state;
> +
> + for (int i = 0; i < cur->acquired_refs; i++) {
> + struct bpf_reference_state *s = &cur->refs[i];
> +
> + if (!(s->type & REF_TYPE_LOCK_MASK) || !s->lock_rec)
> + continue;
> +
> + if (rec_has_list_matching_node_type(env, s->lock_rec,
> + reg->btf, reg->btf_id))
> + return 0;
> + }
> + verbose(env, "bpf_spin_lock must be held for bpf_list_del\n");
> + return -EINVAL;
> + }
> +
> return 0;
> }
>
> --
> 2.50.1 (Apple Git-155)
* Re: [PATCH v3 1/6] bpf: Introduce the bpf_list_del kfunc.
2026-03-02 15:19 ` Mykyta Yatsenko
@ 2026-03-03 1:15 ` Chengkaitao
2026-03-03 1:22 ` Alexei Starovoitov
0 siblings, 1 reply; 11+ messages in thread
From: Chengkaitao @ 2026-03-03 1:15 UTC (permalink / raw)
To: Mykyta Yatsenko
Cc: martin.lau, ast, daniel, andrii, eddyz87, song, yonghong.song,
john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, chengkaitao,
linux-kselftest, bpf, linux-kernel
On Mon, Mar 2, 2026 at 11:19 PM Mykyta Yatsenko
<mykyta.yatsenko5@gmail.com> wrote:
>
> Chengkaitao <pilgrimtao@gmail.com> writes:
>
> > From: Kaitao Cheng <chengkaitao@kylinos.cn>
> >
> > If a user holds ownership of a node in the middle of a list, they
> > can directly remove it from the list without strictly adhering to
> > deletion rules from the head or tail.
> >
> > When a kfunc has only one bpf_list_node parameter, supplement the
> > initialization of the corresponding btf_field. Add a new lock_rec
> > member to struct bpf_reference_state for lock holding detection.
> >
> > This is typically paired with bpf_refcount. After calling
> > bpf_list_del, it is generally necessary to drop the reference to
> > the list node twice to prevent reference count leaks.
> >
> > Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
> > ---
> > include/linux/bpf_verifier.h | 4 +++
> > kernel/bpf/btf.c | 33 +++++++++++++++++++---
> > kernel/bpf/helpers.c | 17 ++++++++++++
> > kernel/bpf/verifier.c | 54 ++++++++++++++++++++++++++++++++++--
> > 4 files changed, 101 insertions(+), 7 deletions(-)
> >
> > diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> > index ef8e45a362d9..e1358b62d6cc 100644
> > --- a/include/linux/bpf_verifier.h
> > +++ b/include/linux/bpf_verifier.h
> > @@ -261,6 +261,10 @@ struct bpf_reference_state {
> > * it matches on unlock.
> > */
> > void *ptr;
> > + /* For REF_TYPE_LOCK_*: btf_record of the locked object, used for lock
> > + * checking in kfuncs such as bpf_list_del.
> > + */
> > + struct btf_record *lock_rec;
> As far as I understand this patch introduces a weaker type of
> verification: we only check that there is some lock held by the
> object of the same type as this node's head, but there is no guarantee
> it's the same instance. Please confirm if I'm right.
> What would it take to implement instance validation instead of
> type-based lock check?
Your understanding is correct. However, I haven’t come up
with a better solution to obtain this node's head. Do you have
any suggestions? Alternatively, shall we revert to version v1?
https://lore.kernel.org/all/20260209025250.55750-2-pilgrimtao@gmail.com/
* Re: [PATCH v3 1/6] bpf: Introduce the bpf_list_del kfunc.
2026-03-03 1:15 ` Chengkaitao
@ 2026-03-03 1:22 ` Alexei Starovoitov
0 siblings, 0 replies; 11+ messages in thread
From: Alexei Starovoitov @ 2026-03-03 1:22 UTC (permalink / raw)
To: Chengkaitao
Cc: Mykyta Yatsenko, Martin KaFai Lau, Alexei Starovoitov,
Daniel Borkmann, Andrii Nakryiko, Eduard, Song Liu, Yonghong Song,
John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
Shuah Khan, Chengkaitao, open list:KERNEL SELFTEST FRAMEWORK, bpf,
LKML
On Mon, Mar 2, 2026 at 5:15 PM Chengkaitao <pilgrimtao@gmail.com> wrote:
>
> On Mon, Mar 2, 2026 at 11:19 PM Mykyta Yatsenko
> <mykyta.yatsenko5@gmail.com> wrote:
> >
> > Chengkaitao <pilgrimtao@gmail.com> writes:
> >
> > > From: Kaitao Cheng <chengkaitao@kylinos.cn>
> > >
> > > If a user holds ownership of a node in the middle of a list, they
> > > can directly remove it from the list without strictly adhering to
> > > deletion rules from the head or tail.
> > >
> > > When a kfunc has only one bpf_list_node parameter, supplement the
> > > initialization of the corresponding btf_field. Add a new lock_rec
> > > member to struct bpf_reference_state for lock holding detection.
> > >
> > > This is typically paired with bpf_refcount. After calling
> > > bpf_list_del, it is generally necessary to drop the reference to
> > > the list node twice to prevent reference count leaks.
> > >
> > > Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
> > > ---
> > > include/linux/bpf_verifier.h | 4 +++
> > > kernel/bpf/btf.c | 33 +++++++++++++++++++---
> > > kernel/bpf/helpers.c | 17 ++++++++++++
> > > kernel/bpf/verifier.c | 54 ++++++++++++++++++++++++++++++++++--
> > > 4 files changed, 101 insertions(+), 7 deletions(-)
> > >
> > > diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> > > index ef8e45a362d9..e1358b62d6cc 100644
> > > --- a/include/linux/bpf_verifier.h
> > > +++ b/include/linux/bpf_verifier.h
> > > @@ -261,6 +261,10 @@ struct bpf_reference_state {
> > > * it matches on unlock.
> > > */
> > > void *ptr;
> > > + /* For REF_TYPE_LOCK_*: btf_record of the locked object, used for lock
> > > + * checking in kfuncs such as bpf_list_del.
> > > + */
> > > + struct btf_record *lock_rec;
> > As far as I understand this patch introduces a weaker type of
> > verification: we only check that there is some lock held by the
> > object of the same type as this node's head, but there is no guarantee
> > it's the same instance. Please confirm if I'm right.
> > What would it take to implement instance validation instead of
> > type-based lock check?
>
> Your understanding is correct. However, I haven’t come up
> with a better solution to obtain this node's head. Do you have
> any suggestions? Alternatively, shall we revert to version v1?
>
> https://lore.kernel.org/all/20260209025250.55750-2-pilgrimtao@gmail.com/
Let's revert to v1. Passing head just to avoid messing with the verifier
is an ok trade-off.
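The difference between the two checks can be sketched in userspace C:
with the head passed explicitly (v1-style), the callee can validate
that the node belongs to this specific list instance via its owner
field, which a type-based lock check cannot guarantee (the struct and
helper names below are illustrative models, not the kernel
definitions):

```c
#include <assert.h>
#include <stddef.h>

struct list_head {
	struct list_head *prev, *next;
};

/* Models struct bpf_list_node_kern: a node remembers its owning head. */
struct node_kern {
	struct list_head list;
	void *owner;
};

static void list_init(struct list_head *h)
{
	h->prev = h->next = h;
}

static void push_back(struct list_head *h, struct node_kern *n)
{
	n->list.prev = h->prev;
	n->list.next = h;
	h->prev->next = &n->list;
	h->prev = &n->list;
	n->owner = h;
}

/* v1-style delete: the explicit head enables a per-instance check. */
static struct node_kern *del_checked(struct list_head *h, struct node_kern *n)
{
	if (n->owner != h)
		return NULL;	/* node is on a different (or no) list */
	n->list.prev->next = n->list.next;
	n->list.next->prev = n->list.prev;
	n->list.prev = n->list.next = &n->list;
	n->owner = NULL;
	return n;
}
```

A head-less bpf_list_del(node) cannot make this comparison, so the
verifier can only fall back to checking that some lock of a matching
type is held.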
end of thread, other threads: [~2026-03-03 1:22 UTC | newest]
Thread overview: 11+ messages
2026-03-02 12:40 [PATCH v3 0/6] bpf: Extend the bpf_list family of APIs Chengkaitao
2026-03-02 12:40 ` [PATCH v3 1/6] bpf: Introduce the bpf_list_del kfunc Chengkaitao
2026-03-02 13:32 ` bot+bpf-ci
2026-03-02 15:19 ` Mykyta Yatsenko
2026-03-03 1:15 ` Chengkaitao
2026-03-03 1:22 ` Alexei Starovoitov
2026-03-02 12:40 ` [PATCH v3 2/6] selftests/bpf: Add test cases for bpf_list_del Chengkaitao
2026-03-02 12:40 ` [PATCH v3 3/6] bpf: add bpf_list_add_impl to insert node after a given list node Chengkaitao
2026-03-02 12:40 ` [PATCH v3 4/6] selftests/bpf: Add test case for bpf_list_add_impl Chengkaitao
2026-03-02 12:40 ` [PATCH v3 5/6] bpf: add bpf_list_is_first/last/empty kfuncs Chengkaitao
2026-03-02 12:40 ` [PATCH v3 6/6] selftests/bpf: Add test cases for bpf_list_is_first/is_last/empty Chengkaitao