Linux Kernel Selftest development
* [PATCH v2 0/6] bpf: Extend the bpf_list family of APIs
@ 2026-02-25  9:26 Chengkaitao
  2026-02-25  9:26 ` [PATCH v2 1/6] bpf: Introduce the bpf_list_del kfunc Chengkaitao
                   ` (5 more replies)
  0 siblings, 6 replies; 11+ messages in thread
From: Chengkaitao @ 2026-02-25  9:26 UTC (permalink / raw)
  To: martin.lau, ast, daniel, andrii, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, chengkaitao,
	leon.hwang, linux-kselftest
  Cc: bpf, linux-kernel

From: Kaitao Cheng <chengkaitao@kylinos.cn>

The BPF list API set is incomplete: nodes can only be pushed or
popped at the head or tail, so a list can only serve as a stack
or a queue (LIFO/FIFO). This series extends the BPF list API,
making lists behave more like ordinary linked lists.

Five new kfuncs are added:
- bpf_list_del: remove a node from the list
- bpf_list_add_impl: insert a node after a given list node
- bpf_list_is_first: check whether a node is the first in the list
- bpf_list_is_last: check whether a node is the last in the list
- bpf_list_empty: check whether the list is empty

Changes in v2:
- Remove the head parameter from bpf_list_del
- Add bpf_list_add/is_first/is_last/empty to API and test cases

Link to v1:
https://lore.kernel.org/all/20260209025250.55750-1-pilgrimtao@gmail.com/

Kaitao Cheng (6):
  bpf: Introduce the bpf_list_del kfunc
  selftests/bpf: Add test cases for bpf_list_del
  bpf: add bpf_list_add_impl to insert node after a given list node
  selftests/bpf: Add test case for bpf_list_add_impl
  bpf: add bpf_list_is_first/last/empty kfuncs
  selftests/bpf: Add test cases for bpf_list_is_first/is_last/empty

 kernel/bpf/btf.c                              | 33 ++++++-
 kernel/bpf/helpers.c                          | 92 +++++++++++++++++++
 kernel/bpf/verifier.c                         | 28 +++++-
 .../testing/selftests/bpf/bpf_experimental.h  | 38 ++++++++
 .../selftests/bpf/progs/refcounted_kptr.c     | 87 ++++++++++++++++++
 5 files changed, 272 insertions(+), 6 deletions(-)

-- 
2.50.1 (Apple Git-155)


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v2 1/6] bpf: Introduce the bpf_list_del kfunc
  2026-02-25  9:26 [PATCH v2 0/6] bpf: Extend the bpf_list family of APIs Chengkaitao
@ 2026-02-25  9:26 ` Chengkaitao
  2026-02-25 10:22   ` bot+bpf-ci
  2026-02-25  9:26 ` [PATCH v2 2/6] selftests/bpf: Add test cases for bpf_list_del Chengkaitao
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 11+ messages in thread
From: Chengkaitao @ 2026-02-25  9:26 UTC (permalink / raw)
  To: martin.lau, ast, daniel, andrii, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, chengkaitao,
	leon.hwang, linux-kselftest
  Cc: bpf, linux-kernel

From: Kaitao Cheng <chengkaitao@kylinos.cn>

Allow a user who holds ownership of a node in the middle of a list
to remove it directly, instead of being restricted to deleting from
the head or tail.

When a kfunc takes only a bpf_list_node parameter (without a
bpf_list_head), also initialize the corresponding btf_field so the
verifier can resolve the node's container type.

bpf_list_del is typically paired with bpf_refcount: after calling
it, the program generally must drop the node's reference twice
(once for its own reference and once for the reference the list
held) to prevent reference count leaks.

Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
---
 kernel/bpf/btf.c      | 33 +++++++++++++++++++++++++++++----
 kernel/bpf/helpers.c  | 17 +++++++++++++++++
 kernel/bpf/verifier.c |  9 ++++++++-
 3 files changed, 54 insertions(+), 5 deletions(-)

diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 4872d2a6c42d..8a977c793d56 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3785,7 +3785,6 @@ static int btf_find_field_one(const struct btf *btf,
 	case BPF_RES_SPIN_LOCK:
 	case BPF_TIMER:
 	case BPF_WORKQUEUE:
-	case BPF_LIST_NODE:
 	case BPF_RB_NODE:
 	case BPF_REFCOUNT:
 	case BPF_TASK_WORK:
@@ -3794,6 +3793,27 @@ static int btf_find_field_one(const struct btf *btf,
 		if (ret < 0)
 			return ret;
 		break;
+	case BPF_LIST_NODE:
+		ret = btf_find_struct(btf, var_type, off, sz, field_type,
+				      info_cnt ? &info[0] : &tmp);
+		if (ret < 0)
+			return ret;
+		/* graph_root for verifier: container type and node member name */
+		if (info_cnt && var_idx >= 0 && (u32)var_idx < btf_type_vlen(var)) {
+			u32 id;
+			const struct btf_member *member;
+
+			for (id = 1; id < btf_nr_types(btf); id++) {
+				if (btf_type_by_id(btf, id) == var) {
+					info[0].graph_root.value_btf_id = id;
+					member = btf_type_member(var) + var_idx;
+					info[0].graph_root.node_name =
+						__btf_name_by_offset(btf, member->name_off);
+					break;
+				}
+			}
+		}
+		break;
 	case BPF_KPTR_UNREF:
 	case BPF_KPTR_REF:
 	case BPF_KPTR_PERCPU:
@@ -4138,6 +4158,7 @@ struct btf_record *btf_parse_fields(const struct btf *btf, const struct btf_type
 			if (ret < 0)
 				goto end;
 			break;
+		case BPF_LIST_NODE:
 		case BPF_LIST_HEAD:
 			ret = btf_parse_list_head(btf, &rec->fields[i], &info_arr[i]);
 			if (ret < 0)
@@ -4148,7 +4169,6 @@ struct btf_record *btf_parse_fields(const struct btf *btf, const struct btf_type
 			if (ret < 0)
 				goto end;
 			break;
-		case BPF_LIST_NODE:
 		case BPF_RB_NODE:
 			break;
 		default:
@@ -4192,20 +4212,25 @@ int btf_check_and_fixup_fields(const struct btf *btf, struct btf_record *rec)
 	int i;
 
 	/* There are three types that signify ownership of some other type:
-	 *  kptr_ref, bpf_list_head, bpf_rb_root.
+	 *  kptr_ref, bpf_list_head/node, bpf_rb_root.
 	 * kptr_ref only supports storing kernel types, which can't store
 	 * references to program allocated local types.
 	 *
 	 * Hence we only need to ensure that bpf_{list_head,rb_root} ownership
 	 * does not form cycles.
 	 */
-	if (IS_ERR_OR_NULL(rec) || !(rec->field_mask & (BPF_GRAPH_ROOT | BPF_UPTR)))
+	if (IS_ERR_OR_NULL(rec) || !(rec->field_mask &
+	   (BPF_GRAPH_ROOT | BPF_GRAPH_NODE | BPF_UPTR)))
 		return 0;
+
 	for (i = 0; i < rec->cnt; i++) {
 		struct btf_struct_meta *meta;
 		const struct btf_type *t;
 		u32 btf_id;
 
+		if (rec->fields[i].type & BPF_GRAPH_NODE)
+			rec->fields[i].graph_root.value_rec = rec;
+
 		if (rec->fields[i].type == BPF_UPTR) {
 			/* The uptr only supports pinning one page and cannot
 			 * point to a kernel struct
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 6eb6c82ed2ee..577af62a9f7a 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -2459,6 +2459,22 @@ __bpf_kfunc struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head)
 	return __bpf_list_del(head, true);
 }
 
+__bpf_kfunc struct bpf_list_node *bpf_list_del(struct bpf_list_node *node)
+{
+	struct bpf_list_node_kern *knode = (struct bpf_list_node_kern *)node;
+
+	if (unlikely(!knode))
+		return NULL;
+
+	if (WARN_ON_ONCE(!READ_ONCE(knode->owner)))
+		return NULL;
+
+	list_del_init(&knode->list_head);
+	WRITE_ONCE(knode->owner, NULL);
+
+	return node;
+}
+
 __bpf_kfunc struct bpf_list_node *bpf_list_front(struct bpf_list_head *head)
 {
 	struct list_head *h = (struct list_head *)head;
@@ -4545,6 +4561,7 @@ BTF_ID_FLAGS(func, bpf_list_push_front_impl)
 BTF_ID_FLAGS(func, bpf_list_push_back_impl)
 BTF_ID_FLAGS(func, bpf_list_pop_front, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_list_pop_back, KF_ACQUIRE | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_list_del, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_list_front, KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_list_back, KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index a3390190c26e..1e9857177a7a 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12431,6 +12431,7 @@ enum special_kfunc_type {
 	KF_bpf_list_push_back_impl,
 	KF_bpf_list_pop_front,
 	KF_bpf_list_pop_back,
+	KF_bpf_list_del,
 	KF_bpf_list_front,
 	KF_bpf_list_back,
 	KF_bpf_cast_to_kern_ctx,
@@ -12491,6 +12492,7 @@ BTF_ID(func, bpf_list_push_front_impl)
 BTF_ID(func, bpf_list_push_back_impl)
 BTF_ID(func, bpf_list_pop_front)
 BTF_ID(func, bpf_list_pop_back)
+BTF_ID(func, bpf_list_del)
 BTF_ID(func, bpf_list_front)
 BTF_ID(func, bpf_list_back)
 BTF_ID(func, bpf_cast_to_kern_ctx)
@@ -12966,6 +12968,7 @@ static bool is_bpf_list_api_kfunc(u32 btf_id)
 	       btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
 	       btf_id == special_kfunc_list[KF_bpf_list_pop_front] ||
 	       btf_id == special_kfunc_list[KF_bpf_list_pop_back] ||
+	       btf_id == special_kfunc_list[KF_bpf_list_del] ||
 	       btf_id == special_kfunc_list[KF_bpf_list_front] ||
 	       btf_id == special_kfunc_list[KF_bpf_list_back];
 }
@@ -13088,7 +13091,8 @@ static bool check_kfunc_is_graph_node_api(struct bpf_verifier_env *env,
 	switch (node_field_type) {
 	case BPF_LIST_NODE:
 		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
-		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl]);
+		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
+		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_del]);
 		break;
 	case BPF_RB_NODE:
 		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_remove] ||
@@ -13211,6 +13215,9 @@ __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
 		return -EINVAL;
 	}
 
+	if (!*node_field)
+		*node_field = field;
+
 	field = *node_field;
 
 	et = btf_type_by_id(field->graph_root.btf, field->graph_root.value_btf_id);
-- 
2.50.1 (Apple Git-155)



* [PATCH v2 2/6] selftests/bpf: Add test cases for bpf_list_del
  2026-02-25  9:26 [PATCH v2 0/6] bpf: Extend the bpf_list family of APIs Chengkaitao
  2026-02-25  9:26 ` [PATCH v2 1/6] bpf: Introduce the bpf_list_del kfunc Chengkaitao
@ 2026-02-25  9:26 ` Chengkaitao
  2026-02-25 10:10   ` bot+bpf-ci
  2026-02-25  9:26 ` [PATCH v2 3/6] bpf: add bpf_list_add_impl to insert node after a given list node Chengkaitao
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 11+ messages in thread
From: Chengkaitao @ 2026-02-25  9:26 UTC (permalink / raw)
  To: martin.lau, ast, daniel, andrii, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, chengkaitao,
	leon.hwang, linux-kselftest
  Cc: bpf, linux-kernel

From: Kaitao Cheng <chengkaitao@kylinos.cn>

Add a node to both an rbtree and a list, retrieve it from the
rbtree, remove it from the list with bpf_list_del using the
retrieved pointer, and finally free the node.

Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
---
 .../testing/selftests/bpf/bpf_experimental.h  | 10 +++++
 .../selftests/bpf/progs/refcounted_kptr.c     | 41 +++++++++++++++++++
 2 files changed, 51 insertions(+)

diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
index 4b7210c318dd..c7c950e10501 100644
--- a/tools/testing/selftests/bpf/bpf_experimental.h
+++ b/tools/testing/selftests/bpf/bpf_experimental.h
@@ -99,6 +99,16 @@ extern struct bpf_list_node *bpf_list_pop_front(struct bpf_list_head *head) __ks
  */
 extern struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head) __ksym;
 
+/* Description
+ *	Remove 'node' from the BPF linked list it is currently on.
+ *	The node must be in the list. Caller receives ownership of the
+ *	removed node and must release it with bpf_obj_drop.
+ * Returns
+ *	Pointer to the removed bpf_list_node, or NULL if 'node' is NULL
+ *	or not in the list.
+ */
+extern struct bpf_list_node *bpf_list_del(struct bpf_list_node *node) __ksym;
+
 /* Description
  *	Remove 'node' from rbtree with root 'root'
  * Returns
diff --git a/tools/testing/selftests/bpf/progs/refcounted_kptr.c b/tools/testing/selftests/bpf/progs/refcounted_kptr.c
index 1aca85d86aeb..86f7f5f8e4c8 100644
--- a/tools/testing/selftests/bpf/progs/refcounted_kptr.c
+++ b/tools/testing/selftests/bpf/progs/refcounted_kptr.c
@@ -367,6 +367,47 @@ long insert_rbtree_and_stash__del_tree_##rem_tree(void *ctx)		\
 INSERT_STASH_READ(true, "insert_stash_read: remove from tree");
 INSERT_STASH_READ(false, "insert_stash_read: don't remove from tree");
 
+/* Insert node_data into both rbtree and list, remove from tree, then remove
+ * from list via bpf_list_del using the node obtained from the tree.
+ */
+SEC("tc")
+__description("test_bpf_list_del: remove an arbitrary node from the list")
+__success __retval(0)
+long test_bpf_list_del(void *ctx)
+{
+	long err;
+	struct bpf_rb_node *rb;
+	struct bpf_list_node *l;
+	struct node_data *n;
+
+	err = __insert_in_tree_and_list(&head, &root, &lock);
+	if (err)
+		return err;
+
+	bpf_spin_lock(&lock);
+	rb = bpf_rbtree_first(&root);
+	if (!rb) {
+		bpf_spin_unlock(&lock);
+		return -4;
+	}
+
+	rb = bpf_rbtree_remove(&root, rb);
+	bpf_spin_unlock(&lock);
+	if (!rb)
+		return -5;
+	n = container_of(rb, struct node_data, r);
+
+	bpf_spin_lock(&lock);
+	l = bpf_list_del(&n->l);
+	bpf_spin_unlock(&lock);
+	bpf_obj_drop(n);
+	if (!l)
+		return -6;
+
+	bpf_obj_drop(container_of(l, struct node_data, l));
+	return 0;
+}
+
 SEC("tc")
 __success
 long rbtree_refcounted_node_ref_escapes(void *ctx)
-- 
2.50.1 (Apple Git-155)



* [PATCH v2 3/6] bpf: add bpf_list_add_impl to insert node after a given list node
  2026-02-25  9:26 [PATCH v2 0/6] bpf: Extend the bpf_list family of APIs Chengkaitao
  2026-02-25  9:26 ` [PATCH v2 1/6] bpf: Introduce the bpf_list_del kfunc Chengkaitao
  2026-02-25  9:26 ` [PATCH v2 2/6] selftests/bpf: Add test cases for bpf_list_del Chengkaitao
@ 2026-02-25  9:26 ` Chengkaitao
  2026-02-25 10:22   ` bot+bpf-ci
  2026-02-25  9:26 ` [PATCH v2 4/6] selftests/bpf: Add test case for bpf_list_add_impl Chengkaitao
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 11+ messages in thread
From: Chengkaitao @ 2026-02-25  9:26 UTC (permalink / raw)
  To: martin.lau, ast, daniel, andrii, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, chengkaitao,
	leon.hwang, linux-kselftest
  Cc: bpf, linux-kernel

From: Kaitao Cheng <chengkaitao@kylinos.cn>

Add a new kfunc bpf_list_add_impl(prev, node, meta, off) that inserts
'node' after 'prev' in a BPF linked list. 'prev' must already be in a
list; 'node' must be an owning reference (e.g. from bpf_obj_new) that
is not yet in any list. The kfunc consumes that reference, and the
node becomes non-owning once inserted.

Returns 0 on success, -EINVAL if prev is not in a list or node is
already in a list (or duplicate insertion). On failure, the kernel
drops the passed-in node.

Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
---
 kernel/bpf/helpers.c  | 34 ++++++++++++++++++++++++++++++++++
 kernel/bpf/verifier.c | 10 ++++++++--
 2 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 577af62a9f7a..d212962d4ed6 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -2495,6 +2495,39 @@ __bpf_kfunc struct bpf_list_node *bpf_list_back(struct bpf_list_head *head)
 	return (struct bpf_list_node *)h->prev;
 }
 
+static int __bpf_list_add_after(struct bpf_list_node_kern *prev,
+				struct bpf_list_node_kern *node,
+				struct btf_record *rec, u64 off)
+{
+	struct bpf_list_head *head;
+	struct list_head *n = &node->list_head, *p = &prev->list_head;
+
+	head = READ_ONCE(prev->owner);
+	if (unlikely(!head))
+		goto fail;
+
+	if (cmpxchg(&node->owner, NULL, BPF_PTR_POISON))
+		goto fail;
+
+	list_add(n, p);
+	WRITE_ONCE(node->owner, head);
+	return 0;
+
+fail:
+	__bpf_obj_drop_impl((void *)n - off, rec, false);
+	return -EINVAL;
+}
+
+__bpf_kfunc int bpf_list_add_impl(struct bpf_list_node *prev,
+				  struct bpf_list_node *node,
+				  void *meta__ign, u64 off)
+{
+	struct bpf_list_node_kern *n = (void *)node, *p = (void *)prev;
+	struct btf_struct_meta *meta = meta__ign;
+
+	return __bpf_list_add_after(p, n, meta ? meta->record : NULL, off);
+}
+
 __bpf_kfunc struct bpf_rb_node *bpf_rbtree_remove(struct bpf_rb_root *root,
 						  struct bpf_rb_node *node)
 {
@@ -4564,6 +4597,7 @@ BTF_ID_FLAGS(func, bpf_list_pop_back, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_list_del, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_list_front, KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_list_back, KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_list_add_impl)
 BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_task_release, KF_RELEASE)
 BTF_ID_FLAGS(func, bpf_rbtree_remove, KF_ACQUIRE | KF_RET_NULL)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 1e9857177a7a..923deb73683b 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12434,6 +12434,7 @@ enum special_kfunc_type {
 	KF_bpf_list_del,
 	KF_bpf_list_front,
 	KF_bpf_list_back,
+	KF_bpf_list_add_impl,
 	KF_bpf_cast_to_kern_ctx,
 	KF_bpf_rdonly_cast,
 	KF_bpf_rcu_read_lock,
@@ -12495,6 +12496,7 @@ BTF_ID(func, bpf_list_pop_back)
 BTF_ID(func, bpf_list_del)
 BTF_ID(func, bpf_list_front)
 BTF_ID(func, bpf_list_back)
+BTF_ID(func, bpf_list_add_impl)
 BTF_ID(func, bpf_cast_to_kern_ctx)
 BTF_ID(func, bpf_rdonly_cast)
 BTF_ID(func, bpf_rcu_read_lock)
@@ -12970,7 +12972,8 @@ static bool is_bpf_list_api_kfunc(u32 btf_id)
 	       btf_id == special_kfunc_list[KF_bpf_list_pop_back] ||
 	       btf_id == special_kfunc_list[KF_bpf_list_del] ||
 	       btf_id == special_kfunc_list[KF_bpf_list_front] ||
-	       btf_id == special_kfunc_list[KF_bpf_list_back];
+	       btf_id == special_kfunc_list[KF_bpf_list_back] ||
+	       btf_id == special_kfunc_list[KF_bpf_list_add_impl];
 }
 
 static bool is_bpf_rbtree_api_kfunc(u32 btf_id)
@@ -13092,7 +13095,8 @@ static bool check_kfunc_is_graph_node_api(struct bpf_verifier_env *env,
 	case BPF_LIST_NODE:
 		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
 		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
-		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_del]);
+		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_del] ||
+		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_add_impl]);
 		break;
 	case BPF_RB_NODE:
 		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_remove] ||
@@ -14237,6 +14241,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 
 	if (meta.func_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
 	    meta.func_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
+	    meta.func_id == special_kfunc_list[KF_bpf_list_add_impl] ||
 	    meta.func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
 		release_ref_obj_id = regs[BPF_REG_2].ref_obj_id;
 		insn_aux->insert_off = regs[BPF_REG_2].off;
@@ -23203,6 +23208,7 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		*cnt = 3;
 	} else if (desc->func_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
 		   desc->func_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
+		   desc->func_id == special_kfunc_list[KF_bpf_list_add_impl] ||
 		   desc->func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
 		struct btf_struct_meta *kptr_struct_meta = env->insn_aux_data[insn_idx].kptr_struct_meta;
 		int struct_meta_reg = BPF_REG_3;
-- 
2.50.1 (Apple Git-155)



* [PATCH v2 4/6] selftests/bpf: Add test case for bpf_list_add_impl
  2026-02-25  9:26 [PATCH v2 0/6] bpf: Extend the bpf_list family of APIs Chengkaitao
                   ` (2 preceding siblings ...)
  2026-02-25  9:26 ` [PATCH v2 3/6] bpf: add bpf_list_add_impl to insert node after a given list node Chengkaitao
@ 2026-02-25  9:26 ` Chengkaitao
  2026-02-25 10:10   ` bot+bpf-ci
  2026-02-25  9:26 ` [PATCH v2 5/6] bpf: add bpf_list_is_first/last/empty kfuncs Chengkaitao
  2026-02-25  9:26 ` [PATCH v2 6/6] selftests/bpf: Add test cases for bpf_list_is_first/is_last/empty Chengkaitao
  5 siblings, 1 reply; 11+ messages in thread
From: Chengkaitao @ 2026-02-25  9:26 UTC (permalink / raw)
  To: martin.lau, ast, daniel, andrii, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, chengkaitao,
	leon.hwang, linux-kselftest
  Cc: bpf, linux-kernel

From: Kaitao Cheng <chengkaitao@kylinos.cn>

Extend the refcounted_kptr test: rename test_bpf_list_del to
test_list_add_del and exercise bpf_list_add by adding a second node
after the first, then removing both nodes with bpf_list_del.

Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
---
 .../testing/selftests/bpf/bpf_experimental.h  | 13 +++++
 .../selftests/bpf/progs/refcounted_kptr.c     | 51 +++++++++++++++----
 2 files changed, 53 insertions(+), 11 deletions(-)

diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
index c7c950e10501..dc3919deab89 100644
--- a/tools/testing/selftests/bpf/bpf_experimental.h
+++ b/tools/testing/selftests/bpf/bpf_experimental.h
@@ -109,6 +109,19 @@ extern struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head) __ksy
  */
 extern struct bpf_list_node *bpf_list_del(struct bpf_list_node *node) __ksym;
 
+/* Description
+ *	Insert 'node' after 'prev' in the BPF linked list. 'prev' must already
+ *	be in a list; 'node' must not be. The 'meta' and 'off' parameters
+ *	are rewritten by the verifier, no need for BPF programs to set them.
+ * Returns
+ *	0 on success, -EINVAL if prev is not in a list or node is already in a list.
+ */
+extern int bpf_list_add_impl(struct bpf_list_node *prev, struct bpf_list_node *node,
+			     void *meta, __u64 off) __ksym;
+
+/* Convenience macro to wrap over bpf_list_add_impl */
+#define bpf_list_add(prev, node) bpf_list_add_impl(prev, node, NULL, 0)
+
 /* Description
  *	Remove 'node' from rbtree with root 'root'
  * Returns
diff --git a/tools/testing/selftests/bpf/progs/refcounted_kptr.c b/tools/testing/selftests/bpf/progs/refcounted_kptr.c
index 86f7f5f8e4c8..a3d5995d4302 100644
--- a/tools/testing/selftests/bpf/progs/refcounted_kptr.c
+++ b/tools/testing/selftests/bpf/progs/refcounted_kptr.c
@@ -367,18 +367,19 @@ long insert_rbtree_and_stash__del_tree_##rem_tree(void *ctx)		\
 INSERT_STASH_READ(true, "insert_stash_read: remove from tree");
 INSERT_STASH_READ(false, "insert_stash_read: don't remove from tree");
 
-/* Insert node_data into both rbtree and list, remove from tree, then remove
- * from list via bpf_list_del using the node obtained from the tree.
+/* Insert one node in tree and list, remove it from tree, add a second
+ * node after it in list with bpf_list_add, then remove both nodes from
+ * list via bpf_list_del.
  */
 SEC("tc")
-__description("test_bpf_list_del: remove an arbitrary node from the list")
+__description("test_list_add_del: test bpf_list_add/del")
 __success __retval(0)
-long test_bpf_list_del(void *ctx)
+long test_list_add_del(void *ctx)
 {
-	long err;
+	long err = 0;
 	struct bpf_rb_node *rb;
-	struct bpf_list_node *l;
-	struct node_data *n;
+	struct bpf_list_node *l, *l_1;
+	struct node_data *n, *n_1, *m_1;
 
 	err = __insert_in_tree_and_list(&head, &root, &lock);
 	if (err)
@@ -397,15 +398,43 @@ long test_bpf_list_del(void *ctx)
 		return -5;
 	n = container_of(rb, struct node_data, r);
 
+	n_1 = bpf_obj_new(typeof(*n_1));
+	if (!n_1) {
+		bpf_obj_drop(n);
+		return -1;
+	}
+	m_1 = bpf_refcount_acquire(n_1);
+	if (!m_1) {
+		bpf_obj_drop(n);
+		bpf_obj_drop(n_1);
+		return -1;
+	}
+
 	bpf_spin_lock(&lock);
+	if (bpf_list_add(&n->l, &n_1->l)) {
+		bpf_spin_unlock(&lock);
+		bpf_obj_drop(n);
+		bpf_obj_drop(m_1);
+		return -8;
+	}
+
 	l = bpf_list_del(&n->l);
+	l_1 = bpf_list_del(&m_1->l);
 	bpf_spin_unlock(&lock);
 	bpf_obj_drop(n);
-	if (!l)
-		return -6;
+	bpf_obj_drop(m_1);
 
-	bpf_obj_drop(container_of(l, struct node_data, l));
-	return 0;
+	if (l)
+		bpf_obj_drop(container_of(l, struct node_data, l));
+	else
+		err = -6;
+
+	if (l_1)
+		bpf_obj_drop(container_of(l_1, struct node_data, l));
+	else
+		err = -6;
+
+	return err;
 }
 
 SEC("tc")
-- 
2.50.1 (Apple Git-155)



* [PATCH v2 5/6] bpf: add bpf_list_is_first/last/empty kfuncs
  2026-02-25  9:26 [PATCH v2 0/6] bpf: Extend the bpf_list family of APIs Chengkaitao
                   ` (3 preceding siblings ...)
  2026-02-25  9:26 ` [PATCH v2 4/6] selftests/bpf: Add test case for bpf_list_add_impl Chengkaitao
@ 2026-02-25  9:26 ` Chengkaitao
  2026-02-25  9:26 ` [PATCH v2 6/6] selftests/bpf: Add test cases for bpf_list_is_first/is_last/empty Chengkaitao
  5 siblings, 0 replies; 11+ messages in thread
From: Chengkaitao @ 2026-02-25  9:26 UTC (permalink / raw)
  To: martin.lau, ast, daniel, andrii, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, chengkaitao,
	leon.hwang, linux-kselftest
  Cc: bpf, linux-kernel

From: Kaitao Cheng <chengkaitao@kylinos.cn>

Add three kfuncs for BPF linked list queries:
- bpf_list_is_first(head, node): true if node is the first in the list.
- bpf_list_is_last(head, node): true if node is the last in the list.
- bpf_list_empty(head): true if the list has no entries.

Previously, implementing these checks required popping the first or
last node with bpf_list_pop_front/back, comparing it against the
node of interest, and then pushing it back with
bpf_list_push_front/back. That round trip was inefficient.

With the bpf_list_is_first/last/empty kfuncs, a program can check
whether a node is first or last, or whether the list is empty,
without removing any node.

Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
---
 kernel/bpf/helpers.c  | 41 +++++++++++++++++++++++++++++++++++++++++
 kernel/bpf/verifier.c | 15 +++++++++++++--
 2 files changed, 54 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index d212962d4ed6..ada14eca58ab 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -2528,6 +2528,44 @@ __bpf_kfunc int bpf_list_add_impl(struct bpf_list_node *prev,
 	return __bpf_list_add_after(p, n, meta ? meta->record : NULL, off);
 }
 
+__bpf_kfunc bool bpf_list_is_first(struct bpf_list_head *head, struct bpf_list_node *node)
+{
+	struct list_head *h = (struct list_head *)head;
+	struct bpf_list_node_kern *n = (struct bpf_list_node_kern *)node;
+
+	if (unlikely(!h->next) || list_empty(h))
+		return false;
+
+	if (READ_ONCE(n->owner) != head)
+		return false;
+
+	return h->next == &n->list_head;
+}
+
+__bpf_kfunc bool bpf_list_is_last(struct bpf_list_head *head, struct bpf_list_node *node)
+{
+	struct list_head *h = (struct list_head *)head;
+	struct bpf_list_node_kern *n = (struct bpf_list_node_kern *)node;
+
+	if (unlikely(!h->next) || list_empty(h))
+		return false;
+
+	if (READ_ONCE(n->owner) != head)
+		return false;
+
+	return h->prev == &n->list_head;
+}
+
+__bpf_kfunc bool bpf_list_empty(struct bpf_list_head *head)
+{
+	struct list_head *h = (struct list_head *)head;
+
+	if (unlikely(!h->next))
+		return true;
+
+	return list_empty(h);
+}
+
 __bpf_kfunc struct bpf_rb_node *bpf_rbtree_remove(struct bpf_rb_root *root,
 						  struct bpf_rb_node *node)
 {
@@ -4598,6 +4636,9 @@ BTF_ID_FLAGS(func, bpf_list_del, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_list_front, KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_list_back, KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_list_add_impl)
+BTF_ID_FLAGS(func, bpf_list_is_first)
+BTF_ID_FLAGS(func, bpf_list_is_last)
+BTF_ID_FLAGS(func, bpf_list_empty)
 BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_task_release, KF_RELEASE)
 BTF_ID_FLAGS(func, bpf_rbtree_remove, KF_ACQUIRE | KF_RET_NULL)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 923deb73683b..149eac5f70e3 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12435,6 +12435,9 @@ enum special_kfunc_type {
 	KF_bpf_list_front,
 	KF_bpf_list_back,
 	KF_bpf_list_add_impl,
+	KF_bpf_list_is_first,
+	KF_bpf_list_is_last,
+	KF_bpf_list_empty,
 	KF_bpf_cast_to_kern_ctx,
 	KF_bpf_rdonly_cast,
 	KF_bpf_rcu_read_lock,
@@ -12497,6 +12500,9 @@ BTF_ID(func, bpf_list_del)
 BTF_ID(func, bpf_list_front)
 BTF_ID(func, bpf_list_back)
 BTF_ID(func, bpf_list_add_impl)
+BTF_ID(func, bpf_list_is_first)
+BTF_ID(func, bpf_list_is_last)
+BTF_ID(func, bpf_list_empty)
 BTF_ID(func, bpf_cast_to_kern_ctx)
 BTF_ID(func, bpf_rdonly_cast)
 BTF_ID(func, bpf_rcu_read_lock)
@@ -12973,7 +12979,10 @@ static bool is_bpf_list_api_kfunc(u32 btf_id)
 	       btf_id == special_kfunc_list[KF_bpf_list_del] ||
 	       btf_id == special_kfunc_list[KF_bpf_list_front] ||
 	       btf_id == special_kfunc_list[KF_bpf_list_back] ||
-	       btf_id == special_kfunc_list[KF_bpf_list_add_impl];
+	       btf_id == special_kfunc_list[KF_bpf_list_add_impl] ||
+	       btf_id == special_kfunc_list[KF_bpf_list_is_first] ||
+	       btf_id == special_kfunc_list[KF_bpf_list_is_last] ||
+	       btf_id == special_kfunc_list[KF_bpf_list_empty];
 }
 
 static bool is_bpf_rbtree_api_kfunc(u32 btf_id)
@@ -13096,7 +13105,9 @@ static bool check_kfunc_is_graph_node_api(struct bpf_verifier_env *env,
 		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
 		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
 		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_del] ||
-		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_add_impl]);
+		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_add_impl] ||
+		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_is_first] ||
+		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_is_last]);
 		break;
 	case BPF_RB_NODE:
 		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_remove] ||
-- 
2.50.1 (Apple Git-155)



* [PATCH v2 6/6] selftests/bpf: Add test cases for bpf_list_is_first/is_last/empty
  2026-02-25  9:26 [PATCH v2 0/6] bpf: Extend the bpf_list family of APIs Chengkaitao
                   ` (4 preceding siblings ...)
  2026-02-25  9:26 ` [PATCH v2 5/6] bpf: add bpf_list_is_first/last/empty kfuncs Chengkaitao
@ 2026-02-25  9:26 ` Chengkaitao
  5 siblings, 0 replies; 11+ messages in thread
From: Chengkaitao @ 2026-02-25  9:26 UTC (permalink / raw)
  To: martin.lau, ast, daniel, andrii, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, chengkaitao,
	leon.hwang, linux-kselftest
  Cc: bpf, linux-kernel

From: Kaitao Cheng <chengkaitao@kylinos.cn>

Rename test_list_add_del to list_add_del_and_check and extend it to
cover the new kfuncs: assert list non-empty after insert, assert
is_first(n) and is_last(m_1) after bpf_list_add, and assert list
empty after removing both nodes.

Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
---
 .../testing/selftests/bpf/bpf_experimental.h  | 15 +++++++++++
 .../selftests/bpf/progs/refcounted_kptr.c     | 27 +++++++++++++++----
 2 files changed, 37 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
index dc3919deab89..fc73a276c268 100644
--- a/tools/testing/selftests/bpf/bpf_experimental.h
+++ b/tools/testing/selftests/bpf/bpf_experimental.h
@@ -122,6 +122,21 @@ extern int bpf_list_add_impl(struct bpf_list_node *prev, struct bpf_list_node *n
 /* Convenience macro to wrap over bpf_list_add_impl */
 #define bpf_list_add(prev, node) bpf_list_add_impl(prev, node, NULL, 0)
 
+/* Description
+ *	Return true if 'node' is the first node in the list with head 'head'.
+ */
+extern bool bpf_list_is_first(struct bpf_list_head *head, struct bpf_list_node *node) __ksym;
+
+/* Description
+ *	Return true if 'node' is the last node in the list with head 'head'.
+ */
+extern bool bpf_list_is_last(struct bpf_list_head *head, struct bpf_list_node *node) __ksym;
+
+/* Description
+ *	Return true if the list with head 'head' has no entries.
+ */
+extern bool bpf_list_empty(struct bpf_list_head *head) __ksym;
+
 /* Description
  *	Remove 'node' from rbtree with root 'root'
  * Returns
diff --git a/tools/testing/selftests/bpf/progs/refcounted_kptr.c b/tools/testing/selftests/bpf/progs/refcounted_kptr.c
index a3d5995d4302..aa4ee68c8baf 100644
--- a/tools/testing/selftests/bpf/progs/refcounted_kptr.c
+++ b/tools/testing/selftests/bpf/progs/refcounted_kptr.c
@@ -367,14 +367,14 @@ long insert_rbtree_and_stash__del_tree_##rem_tree(void *ctx)		\
 INSERT_STASH_READ(true, "insert_stash_read: remove from tree");
 INSERT_STASH_READ(false, "insert_stash_read: don't remove from tree");
 
-/* Insert one node in tree and list, remove it from tree, add a second
- * node after it in list with bpf_list_add, then remove both nodes from
- * list via bpf_list_del.
+/* Insert one node in tree and list, remove it from tree, add a second node
+ * after it with bpf_list_add, check bpf_list_is_first/is_last/empty, then
+ * remove both nodes from list via bpf_list_del.
  */
 SEC("tc")
-__description("test_list_add_del: test bpf_list_add/del")
+__description("list_add_del_and_check: test bpf_list_add/del/is_first/is_last/empty")
 __success __retval(0)
-long test_list_add_del(void *ctx)
+long list_add_del_and_check(void *ctx)
 {
 	long err = 0;
 	struct bpf_rb_node *rb;
@@ -386,6 +386,11 @@ long test_list_add_del(void *ctx)
 		return err;
 
 	bpf_spin_lock(&lock);
+	if (bpf_list_empty(&head)) {
+		bpf_spin_unlock(&lock);
+		return -7;
+	}
+
 	rb = bpf_rbtree_first(&root);
 	if (!rb) {
 		bpf_spin_unlock(&lock);
@@ -418,6 +423,14 @@ long test_list_add_del(void *ctx)
 		return -8;
 	}
 
+	if (!bpf_list_is_first(&head, &n->l) ||
+	    !bpf_list_is_last(&head, &m_1->l)) {
+		bpf_spin_unlock(&lock);
+		bpf_obj_drop(n);
+		bpf_obj_drop(m_1);
+		return -9;
+	}
+
 	l = bpf_list_del(&n->l);
 	l_1 = bpf_list_del(&m_1->l);
 	bpf_spin_unlock(&lock);
@@ -434,6 +447,10 @@ long test_list_add_del(void *ctx)
 	else
 		err = -6;
 
+	bpf_spin_lock(&lock);
+	if (!bpf_list_empty(&head))
+		err = -7;
+	bpf_spin_unlock(&lock);
 	return err;
 }
 
-- 
2.50.1 (Apple Git-155)


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH v2 2/6] selftests/bpf: Add test cases for bpf_list_del
  2026-02-25  9:26 ` [PATCH v2 2/6] selftests/bpf: Add test cases for bpf_list_del Chengkaitao
@ 2026-02-25 10:10   ` bot+bpf-ci
  0 siblings, 0 replies; 11+ messages in thread
From: bot+bpf-ci @ 2026-02-25 10:10 UTC (permalink / raw)
  To: pilgrimtao, martin.lau, ast, daniel, andrii, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	chengkaitao, leon.hwang, linux-kselftest
  Cc: bpf, linux-kernel, ast, andrii, daniel, martin.lau, eddyz87,
	yonghong.song, clm, ihor.solodrai

[-- Attachment #1: Type: text/plain, Size: 1609 bytes --]

> diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
> --- a/tools/testing/selftests/bpf/bpf_experimental.h
> +++ b/tools/testing/selftests/bpf/bpf_experimental.h

[ ... ]

> +/* Description
> + *	Remove 'node' from the BPF linked list with head 'head'.
                                                          ^^^^^^

The comment references a 'head' parameter, but the function signature
only takes 'node':

> + */
> +extern struct bpf_list_node *bpf_list_del(struct bpf_list_node *node) __ksym;

This looks like a leftover from v1 where bpf_list_del() took both
head and node. Alexei asked to drop the head parameter in v1 review:

https://lore.kernel.org/bpf/CAADnVQKjBdfL17d0H6eHdiHYbXOr0Xz6BC=AbwMctQ6860-P5w@mail.gmail.com/

The implementation was updated but the comment still references the
removed parameter. Should this say something like "Remove 'node'
from its BPF linked list" instead?


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/22391130139

AI-authorship-score: low
AI-authorship-explanation: Follows established BPF selftest patterns with idiomatic kernel C; the stale v1 comment is a typical human oversight when iterating on review feedback.
issues-found: 1
issue-severity-score: low
issue-severity-explanation: Stale documentation comment referencing a removed parameter; no runtime impact but incomplete response to maintainer review feedback.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v2 4/6] selftests/bpf: Add test case for bpf_list_add_impl
  2026-02-25  9:26 ` [PATCH v2 4/6] selftests/bpf: Add test case for bpf_list_add_impl Chengkaitao
@ 2026-02-25 10:10   ` bot+bpf-ci
  0 siblings, 0 replies; 11+ messages in thread
From: bot+bpf-ci @ 2026-02-25 10:10 UTC (permalink / raw)
  To: pilgrimtao, martin.lau, ast, daniel, andrii, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	chengkaitao, leon.hwang, linux-kselftest
  Cc: bpf, linux-kernel, ast, andrii, daniel, martin.lau, eddyz87,
	yonghong.song, clm, ihor.solodrai

[-- Attachment #1: Type: text/plain, Size: 1905 bytes --]

> diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
> --- a/tools/testing/selftests/bpf/bpf_experimental.h
> +++ b/tools/testing/selftests/bpf/bpf_experimental.h

[ ... ]

> +/* Description
> + *	Insert 'node' after 'prev' in the BPF linked list. Both must be in the
> + *	same list; 'prev' must be in the list. The 'meta' and 'off' parameters
                   ^^^^

The description says "Both must be in the same list" but looking at
__bpf_list_add_after() in kernel/bpf/helpers.c, the implementation
requires node->owner to be NULL via cmpxchg:

    if (cmpxchg(&node->owner, NULL, BPF_PTR_POISON))
        goto fail;

This means 'node' must not already be in any list.  The Returns
section below correctly says "-EINVAL if ... node is already in a
list," which contradicts the Description.

Should this instead say something like "'prev' must already be in a
list; 'node' must not be in any list"?

> + *	are rewritten by the verifier, no need for BPF programs to set them.
> + * Returns
> + *	0 on success, -EINVAL if prev is not in a list or node is already in a list.
> + */
> +extern int bpf_list_add_impl(struct bpf_list_node *prev, struct bpf_list_node *node,
> +			     void *meta, __u64 off) __ksym;

[ ... ]


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/22391130139

AI-authorship-score: low
AI-authorship-explanation: Code follows established BPF selftest patterns and the doc comment error is characteristic of human imprecision rather than AI generation.
issues-found: 1
issue-severity-score: low
issue-severity-explanation: Inaccurate API doc comment in selftest header contradicts implementation semantics but does not cause runtime failures.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v2 1/6] bpf: Introduce the bpf_list_del kfunc.
  2026-02-25  9:26 ` [PATCH v2 1/6] bpf: Introduce the bpf_list_del kfunc Chengkaitao
@ 2026-02-25 10:22   ` bot+bpf-ci
  0 siblings, 0 replies; 11+ messages in thread
From: bot+bpf-ci @ 2026-02-25 10:22 UTC (permalink / raw)
  To: pilgrimtao, martin.lau, ast, daniel, andrii, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	chengkaitao, leon.hwang, linux-kselftest
  Cc: bpf, linux-kernel, ast, andrii, daniel, martin.lau, eddyz87,
	yonghong.song, clm, ihor.solodrai

[-- Attachment #1: Type: text/plain, Size: 3651 bytes --]

> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c

[ ... ]

> +__bpf_kfunc struct bpf_list_node *bpf_list_del(struct bpf_list_node *node)
> +{
> +	struct bpf_list_node_kern *knode = (struct bpf_list_node_kern *)node;
> +
> +	if (unlikely(!knode))
> +		return NULL;
> +
> +	if (WARN_ON_ONCE(!READ_ONCE(knode->owner)))
> +		return NULL;
> +
> +	list_del_init(&knode->list_head);
> +	WRITE_ONCE(knode->owner, NULL);
> +
> +	return node;
> +}

Unlike bpf_list_pop_front/bpf_list_pop_back, which take a
bpf_list_head argument, bpf_list_del takes only a bpf_list_node.
The list_head argument is what triggers the verifier's lock check
in __process_kf_arg_ptr_to_graph_root() via
check_reg_allocation_locked().

Since bpf_list_del has no list_head argument, the verifier path
goes through __process_kf_arg_ptr_to_graph_node() which has no
lock check. Can a BPF program call bpf_list_del without holding
the bpf_spin_lock?

Consider the following sequence:

    bpf_spin_lock(&obj->lock);
    ref = bpf_list_front(&obj->head);
    owned = bpf_refcount_acquire(ref);
    bpf_spin_unlock(&obj->lock);
    /* owned is PTR_TO_BTF_ID | MEM_ALLOC, still valid */
    bpf_list_del(&owned->node);  /* no lock held */

bpf_refcount_acquire returns an owning reference that survives
lock release. The verifier allows passing the node field of this
owning reference to bpf_list_del without any lock enforcement.

If another CPU concurrently operates on the same list under the
lock, the unprotected list_del_init here would race with those
operations and could corrupt the list.

For reference, bpf_rbtree_remove avoids this by taking both
root and node arguments, where the root triggers the lock check:

kernel/bpf/verifier.c:__process_kf_arg_ptr_to_graph_root() {
    ...
    if (check_reg_allocation_locked(env, reg)) {
        verbose(env, "bpf_spin_lock at off=%d must be held ...\n",
            rec->spin_lock_off, head_type_name);
        return -EINVAL;
    }
    ...
}

Would it make sense to add a similar lock check in
__process_kf_arg_ptr_to_graph_node() for kfuncs that only
take a node argument?

[ ... ]

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c

[ ... ]

> @@ -13089,6 +13093,9 @@ __process_kf_arg_ptr_to_graph_node(struct bpf_verifier_env *env,
>  		return -EINVAL;
>  	}
>
> +	if (!*node_field)
> +		*node_field = field;
> +
>  	field = *node_field;
>
>  	et = btf_type_by_id(field->graph_root.btf, field->graph_root.value_btf_id);

This is where the node_field is set for bpf_list_del. For push
operations, *node_field was already set by the list_head argument
processing in __process_kf_arg_ptr_to_graph_root (which includes
the lock check). For bpf_list_del, *node_field starts as NULL and
is set here from the node's own field, bypassing the root path
entirely and with it the lock enforcement.


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/22391130139

AI-authorship-score: medium
AI-authorship-explanation: Formulaic commit message style and brute-force linear scan suggest possible AI generation, but coherent cross-file BPF verifier integration shows domain understanding.
issues-found: 1
issue-severity-score: high
issue-severity-explanation: Missing bpf_spin_lock enforcement in the verifier for bpf_list_del allows concurrent unprotected list_del_init, which can corrupt the linked list and crash the kernel.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v2 3/6] bpf: add bpf_list_add_impl to insert node after a given list node
  2026-02-25  9:26 ` [PATCH v2 3/6] bpf: add bpf_list_add_impl to insert node after a given list node Chengkaitao
@ 2026-02-25 10:22   ` bot+bpf-ci
  0 siblings, 0 replies; 11+ messages in thread
From: bot+bpf-ci @ 2026-02-25 10:22 UTC (permalink / raw)
  To: pilgrimtao, martin.lau, ast, daniel, andrii, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	chengkaitao, leon.hwang, linux-kselftest
  Cc: bpf, linux-kernel, ast, andrii, daniel, martin.lau, eddyz87,
	yonghong.song, clm, ihor.solodrai

[-- Attachment #1: Type: text/plain, Size: 2997 bytes --]

> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 093600363564..0d7581786a14 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c

[ ... ]

> +__bpf_kfunc int bpf_list_add_impl(struct bpf_list_node *prev,
> +				  struct bpf_list_node *node,
> +				  void *meta__ign, u64 off)

> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 7a54cd749aec..9558b7020b89 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c

[ ... ]

> @@ -14124,6 +14128,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>
>  	if (meta.func_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
>  	    meta.func_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
> +	    meta.func_id == special_kfunc_list[KF_bpf_list_add_impl] ||
>  	    meta.func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
>  		release_ref_obj_id = regs[BPF_REG_2].ref_obj_id;
>  		insn_aux->insert_off = regs[BPF_REG_2].off;

Is the verifier enforcing that the bpf_spin_lock is held when
bpf_list_add_impl is called?

The existing bpf_list_push_front_impl and bpf_list_push_back_impl get
lock enforcement because they take a bpf_list_head* as their first
argument. That argument type triggers __process_kf_arg_ptr_to_graph_root
in the verifier, which calls check_reg_allocation_locked():

kernel/bpf/verifier.c:__process_kf_arg_ptr_to_graph_root() {
    ...
    if (check_reg_allocation_locked(env, reg)) {
        verbose(env, "bpf_spin_lock at off=%d must be held for %s\n",
            rec->spin_lock_off, head_type_name);
        return -EINVAL;
    }
    ...
}

bpf_list_add_impl takes two bpf_list_node* arguments and no
bpf_list_head*. The verifier processes both arguments through
__process_kf_arg_ptr_to_graph_node, which does not call
check_reg_allocation_locked. This means a BPF program could call
bpf_list_add_impl without holding the spin lock.

With bpf_refcount, a program can hold an owning reference to an
in-list node that survives lock release:

    bpf_spin_lock(&lock);
    bpf_list_push_front(&head, &node->node);
    extra = bpf_refcount_acquire(node);
    bpf_spin_unlock(&lock);
    /* extra is still an owning ref, call without lock */
    bpf_list_add_impl(&extra->node, &new_node->node, ...);

This would execute list_add() without synchronization, which could
corrupt the list.


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/22391130139

AI-authorship-score: low
AI-authorship-explanation: Code closely follows existing bpf_list_push_front/back patterns with natural kernel commit style and precise BPF-specific terminology.
issues-found: 1
issue-severity-score: high
issue-severity-explanation: Missing verifier spin lock enforcement for bpf_list_add_impl allows unsynchronized list_add() calls that can corrupt the linked list.

^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2026-02-25 10:22 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-02-25  9:26 [PATCH v2 0/6] bpf: Extend the bpf_list family of APIs Chengkaitao
2026-02-25  9:26 ` [PATCH v2 1/6] bpf: Introduce the bpf_list_del kfunc Chengkaitao
2026-02-25 10:22   ` bot+bpf-ci
2026-02-25  9:26 ` [PATCH v2 2/6] selftests/bpf: Add test cases for bpf_list_del Chengkaitao
2026-02-25 10:10   ` bot+bpf-ci
2026-02-25  9:26 ` [PATCH v2 3/6] bpf: add bpf_list_add_impl to insert node after a given list node Chengkaitao
2026-02-25 10:22   ` bot+bpf-ci
2026-02-25  9:26 ` [PATCH v2 4/6] selftests/bpf: Add test case for bpf_list_add_impl Chengkaitao
2026-02-25 10:10   ` bot+bpf-ci
2026-02-25  9:26 ` [PATCH v2 5/6] bpf: add bpf_list_is_first/last/empty kfuncs Chengkaitao
2026-02-25  9:26 ` [PATCH v2 6/6] selftests/bpf: Add test cases for bpf_list_is_first/is_last/empty Chengkaitao

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox