public inbox for linux-kernel@vger.kernel.org
* [PATCH bpf-next v7 0/5] bpf: Extend the bpf_list family of APIs
@ 2026-03-08 13:46 Chengkaitao
  2026-03-08 13:46 ` [PATCH bpf-next v7 1/5] bpf: Introduce the bpf_list_del kfunc Chengkaitao
                   ` (4 more replies)
  0 siblings, 5 replies; 19+ messages in thread
From: Chengkaitao @ 2026-03-08 13:46 UTC (permalink / raw)
  To: martin.lau, ast, daniel, andrii, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, chengkaitao,
	linux-kselftest
  Cc: bpf, linux-kernel

From: Kaitao Cheng <chengkaitao@kylinos.cn>

In BPF, a linked list can currently only be used as a stack or a queue:
the API set is incomplete, so only LIFO and FIFO operations (push/pop
at the head or tail) are supported. This series extends the BPF list
API to make it behave more like a real linked list.

Five new kfuncs have been added:
bpf_list_del: remove a node from the list
bpf_list_add_impl: insert a node after a given list node
bpf_list_is_first: check if a node is the first in the list
bpf_list_is_last: check if a node is the last in the list
bpf_list_empty: check if the list is empty

The series also refactors the verifier's kfunc checks into a
table-driven approach and adds test cases for the new kfuncs.
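For reference, the combined semantics of these list operations can be modelled in a few lines of user-space C (the `model_*` names and structs below are illustrative stand-ins, not the kernel API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal user-space stand-in for the kernel's struct list_head. */
struct list_head {
	struct list_head *next, *prev;
};

static void list_init(struct list_head *h)
{
	h->next = h->prev = h;
}

/* Semantics of bpf_list_add_impl(head, new, prev, ...): link 'new'
 * immediately after 'prev' ('prev' may be the head itself). */
static void model_add_after(struct list_head *new, struct list_head *prev)
{
	new->next = prev->next;
	new->prev = prev;
	prev->next->prev = new;
	prev->next = new;
}

/* Semantics of bpf_list_del(head, node): unlink 'node' wherever it
 * sits in the list, not just at the head or tail. */
static void model_del(struct list_head *node)
{
	node->prev->next = node->next;
	node->next->prev = node->prev;
	node->next = node->prev = node;
}

static bool model_empty(const struct list_head *h)
{
	return h->next == h;
}

static bool model_is_first(const struct list_head *n, const struct list_head *h)
{
	return n->prev == h;
}

static bool model_is_last(const struct list_head *n, const struct list_head *h)
{
	return n->next == h;
}
```

The real kfuncs additionally enforce ownership tracking and locking, which the individual patches describe.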

Changes in v7:
- Replace bpf_list_node_is_edge with bpf_list_is_first/is_last
- Reimplement __bpf_list_del and __bpf_list_add
- Simplify test cases

Changes in v6:
- Merge [patch v5 (2,4,6)/6] into [patch v6 4/5]
- If list_head was 0-initialized, init it
- Refactor kfunc checks into a table-driven approach

Changes in v5:
- Fix bpf_obj leak on bpf_list_add_impl error

Changes in v4:
- [patch v3 1/6] Revert to version v1
- Change the parameters of bpf_list_add_impl to (head, new, prev, ...)

Changes in v3:
- Add a new lock_rec member to struct bpf_reference_state for lock
  holding detection.
- Add test cases to verify that the verifier correctly restricts calls
  to bpf_list_del when the spin_lock is not held.

Changes in v2:
- Remove the head parameter from bpf_list_del
- Add bpf_list_add/is_first/is_last/empty to API and test cases

Link to v6:
https://lore.kernel.org/all/20260304143459.78059-1-pilgrimtao@gmail.com/

Link to v5:
https://lore.kernel.org/all/20260304031606.43884-1-pilgrimtao@gmail.com/

Link to v4:
https://lore.kernel.org/all/20260303135219.33726-1-pilgrimtao@gmail.com/

Link to v3:
https://lore.kernel.org/all/20260302124028.82420-1-pilgrimtao@gmail.com/

Link to v2:
https://lore.kernel.org/all/20260225092651.94689-1-pilgrimtao@gmail.com/

Link to v1:
https://lore.kernel.org/all/20260209025250.55750-1-pilgrimtao@gmail.com/

Kaitao Cheng (5):
  bpf: Introduce the bpf_list_del kfunc.
  bpf: Add bpf_list_add_impl to insert node after a given list node
  bpf: add bpf_list_is_first/last/empty kfuncs
  selftests/bpf: Add test cases for
    bpf_list_del/add/is_first/is_last/empty
  bpf: refactor kfunc checks using table-driven approach in verifier

 kernel/bpf/helpers.c                          | 124 +++++++++++++---
 kernel/bpf/verifier.c                         | 107 ++++++++++---
 .../testing/selftests/bpf/bpf_experimental.h  |  16 ++
 .../selftests/bpf/progs/refcounted_kptr.c     | 140 ++++++++++++++++++
 4 files changed, 339 insertions(+), 48 deletions(-)

-- 
2.50.1 (Apple Git-155)


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH bpf-next v7 1/5] bpf: Introduce the bpf_list_del kfunc.
  2026-03-08 13:46 [PATCH bpf-next v7 0/5] bpf: Extend the bpf_list family of APIs Chengkaitao
@ 2026-03-08 13:46 ` Chengkaitao
  2026-03-09  6:33   ` Leon Hwang
  2026-03-08 13:46 ` [PATCH bpf-next v7 2/5] bpf: Add bpf_list_add_impl to insert node after a given list node Chengkaitao
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 19+ messages in thread
From: Chengkaitao @ 2026-03-08 13:46 UTC (permalink / raw)
  To: martin.lau, ast, daniel, andrii, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, chengkaitao,
	linux-kselftest
  Cc: bpf, linux-kernel

From: Kaitao Cheng <chengkaitao@kylinos.cn>

If a user holds a reference to a node in the middle of a list,
bpf_list_del lets them remove it directly, instead of being
limited to popping from the head or the tail.

bpf_list_del takes a bpf_list_head *head parameter because the
verifier needs the head to check that the spin_lock protecting
the list is held.

bpf_list_del is typically paired with bpf_refcount: the kfunc
returns an owning reference in addition to the reference the
caller already holds, so after the call the program generally
needs to drop the node's reference twice to avoid a reference
count leak.
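The ownership check that makes mid-list deletion safe can be sketched in user-space C (the `owner` field mirrors struct bpf_list_node_kern; the `model_*` names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct list_head {
	struct list_head *next, *prev;
};

/* Stand-in for struct bpf_list_node_kern: alongside the links, the
 * node records which list head currently owns it. */
struct node_kern {
	struct list_head list_head;
	void *owner;
};

static void model_push_front(struct list_head *head, struct node_kern *n)
{
	n->list_head.next = head->next;
	n->list_head.prev = head;
	head->next->prev = &n->list_head;
	head->next = &n->list_head;
	n->owner = head;
}

/* Semantics of bpf_list_del(): a node whose owner is not 'head'
 * (never inserted, already removed, or on another list) is rejected
 * instead of being unlinked through stale pointers. */
static struct node_kern *model_list_del(struct list_head *head,
					struct node_kern *n)
{
	if (n->owner != head)
		return NULL;
	n->list_head.prev->next = n->list_head.next;
	n->list_head.next->prev = n->list_head.prev;
	n->list_head.next = n->list_head.prev = &n->list_head;
	n->owner = NULL;
	return n;
}
```

In the kernel the owner is read with READ_ONCE() and a mismatch triggers WARN_ON_ONCE(); the model only captures the reject-vs-unlink decision.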

Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
---
 kernel/bpf/helpers.c  | 30 +++++++++++++++++++++++-------
 kernel/bpf/verifier.c |  6 +++++-
 2 files changed, 28 insertions(+), 8 deletions(-)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 6eb6c82ed2ee..01b74c4ac00d 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -2426,20 +2426,23 @@ __bpf_kfunc int bpf_list_push_back_impl(struct bpf_list_head *head,
 	return __bpf_list_add(n, head, true, meta ? meta->record : NULL, off);
 }
 
-static struct bpf_list_node *__bpf_list_del(struct bpf_list_head *head, bool tail)
+static struct bpf_list_node *__bpf_list_del(struct bpf_list_head *head,
+					    struct list_head *n)
 {
-	struct list_head *n, *h = (void *)head;
+	struct list_head *h = (void *)head;
 	struct bpf_list_node_kern *node;
 
 	/* If list_head was 0-initialized by map, bpf_obj_init_field wasn't
 	 * called on its fields, so init here
 	 */
-	if (unlikely(!h->next))
+	if (unlikely(!h->next)) {
 		INIT_LIST_HEAD(h);
-	if (list_empty(h))
+		return NULL;
+	}
+
+	if (n == h)
 		return NULL;
 
-	n = tail ? h->prev : h->next;
 	node = container_of(n, struct bpf_list_node_kern, list_head);
 	if (WARN_ON_ONCE(READ_ONCE(node->owner) != head))
 		return NULL;
@@ -2451,12 +2454,24 @@ static struct bpf_list_node *__bpf_list_del(struct bpf_list_head *head, bool tai
 
 __bpf_kfunc struct bpf_list_node *bpf_list_pop_front(struct bpf_list_head *head)
 {
-	return __bpf_list_del(head, false);
+	struct list_head *h = (void *)head;
+
+	return __bpf_list_del(head, h->next);
 }
 
 __bpf_kfunc struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head)
 {
-	return __bpf_list_del(head, true);
+	struct list_head *h = (void *)head;
+
+	return __bpf_list_del(head, h->prev);
+}
+
+__bpf_kfunc struct bpf_list_node *bpf_list_del(struct bpf_list_head *head,
+					       struct bpf_list_node *node)
+{
+	struct bpf_list_node_kern *kn = (void *)node;
+
+	return __bpf_list_del(head, &kn->list_head);
 }
 
 __bpf_kfunc struct bpf_list_node *bpf_list_front(struct bpf_list_head *head)
@@ -4545,6 +4560,7 @@ BTF_ID_FLAGS(func, bpf_list_push_front_impl)
 BTF_ID_FLAGS(func, bpf_list_push_back_impl)
 BTF_ID_FLAGS(func, bpf_list_pop_front, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_list_pop_back, KF_ACQUIRE | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_list_del, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_list_front, KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_list_back, KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 67c09b43a497..c9557d3fb8dd 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12461,6 +12461,7 @@ enum special_kfunc_type {
 	KF_bpf_list_push_back_impl,
 	KF_bpf_list_pop_front,
 	KF_bpf_list_pop_back,
+	KF_bpf_list_del,
 	KF_bpf_list_front,
 	KF_bpf_list_back,
 	KF_bpf_cast_to_kern_ctx,
@@ -12521,6 +12522,7 @@ BTF_ID(func, bpf_list_push_front_impl)
 BTF_ID(func, bpf_list_push_back_impl)
 BTF_ID(func, bpf_list_pop_front)
 BTF_ID(func, bpf_list_pop_back)
+BTF_ID(func, bpf_list_del)
 BTF_ID(func, bpf_list_front)
 BTF_ID(func, bpf_list_back)
 BTF_ID(func, bpf_cast_to_kern_ctx)
@@ -12996,6 +12998,7 @@ static bool is_bpf_list_api_kfunc(u32 btf_id)
 	       btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
 	       btf_id == special_kfunc_list[KF_bpf_list_pop_front] ||
 	       btf_id == special_kfunc_list[KF_bpf_list_pop_back] ||
+	       btf_id == special_kfunc_list[KF_bpf_list_del] ||
 	       btf_id == special_kfunc_list[KF_bpf_list_front] ||
 	       btf_id == special_kfunc_list[KF_bpf_list_back];
 }
@@ -13118,7 +13121,8 @@ static bool check_kfunc_is_graph_node_api(struct bpf_verifier_env *env,
 	switch (node_field_type) {
 	case BPF_LIST_NODE:
 		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
-		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl]);
+		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
+		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_del]);
 		break;
 	case BPF_RB_NODE:
 		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_remove] ||
-- 
2.50.1 (Apple Git-155)



* [PATCH bpf-next v7 2/5] bpf: Add bpf_list_add_impl to insert node after a given list node
  2026-03-08 13:46 [PATCH bpf-next v7 0/5] bpf: Extend the bpf_list family of APIs Chengkaitao
  2026-03-08 13:46 ` [PATCH bpf-next v7 1/5] bpf: Introduce the bpf_list_del kfunc Chengkaitao
@ 2026-03-08 13:46 ` Chengkaitao
  2026-03-08 14:25   ` bot+bpf-ci
                     ` (2 more replies)
  2026-03-08 13:46 ` [PATCH bpf-next v7 3/5] bpf: add bpf_list_is_first/last/empty kfuncs Chengkaitao
                   ` (2 subsequent siblings)
  4 siblings, 3 replies; 19+ messages in thread
From: Chengkaitao @ 2026-03-08 13:46 UTC (permalink / raw)
  To: martin.lau, ast, daniel, andrii, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, chengkaitao,
	linux-kselftest
  Cc: bpf, linux-kernel

From: Kaitao Cheng <chengkaitao@kylinos.cn>

Add a new kfunc bpf_list_add_impl(head, new, prev, meta, off) that
inserts 'new' after 'prev' in the BPF linked list headed by 'head'.
'prev' must already be in that list. 'new' must be an owning
reference (e.g. from bpf_obj_new); the kfunc consumes that reference,
and the node becomes non-owning once inserted.

bpf_list_add_impl takes a bpf_list_head *head parameter because the
verifier needs the head to check that the spin_lock protecting the
list is held.

Returns 0 on success, or -EINVAL if 'prev' is not in the list or
'new' is already in a list (including duplicate insertion). On
failure, the kernel drops the passed-in node.
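The success and error paths can be sketched in user-space C (struct and `model_*` names are illustrative; the kernel additionally frees 'new' via __bpf_obj_drop_impl() on the error path, which this model cannot show):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

struct list_head {
	struct list_head *next, *prev;
};

/* list_head is the first member, so a list_head pointer into a node
 * can be cast back to the node (a poor man's container_of). */
struct node_kern {
	struct list_head list_head;
	void *owner;
};

/* Semantics of bpf_list_add_impl(head, new, prev, ...): 'prev' must
 * be the head itself or a node owned by this list, and 'new' must not
 * already be on any list; otherwise the insertion fails with -EINVAL. */
static int model_list_add(struct list_head *head, struct node_kern *new,
			  struct list_head *prev)
{
	if (prev != head && ((struct node_kern *)prev)->owner != head)
		return -EINVAL;	/* prev is not in this list */
	if (new->owner != NULL)
		return -EINVAL;	/* new is already in a list */

	new->list_head.next = prev->next;
	new->list_head.prev = prev;
	prev->next->prev = &new->list_head;
	prev->next = &new->list_head;
	new->owner = head;
	return 0;
}
```

Passing the head itself as 'prev' yields push-front, matching how bpf_list_push_front_impl() is rewritten on top of __bpf_list_add() in this patch.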

Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
---
 kernel/bpf/helpers.c  | 56 ++++++++++++++++++++++++++++++-------------
 kernel/bpf/verifier.c | 13 ++++++++--
 2 files changed, 50 insertions(+), 19 deletions(-)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 01b74c4ac00d..407520fde668 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -2379,11 +2379,12 @@ __bpf_kfunc void *bpf_refcount_acquire_impl(void *p__refcounted_kptr, void *meta
 	return (void *)p__refcounted_kptr;
 }
 
-static int __bpf_list_add(struct bpf_list_node_kern *node,
-			  struct bpf_list_head *head,
-			  bool tail, struct btf_record *rec, u64 off)
+static int __bpf_list_add(struct bpf_list_head *head,
+			  struct bpf_list_node_kern *new,
+			  struct list_head *prev,
+			  struct btf_record *rec, u64 off)
 {
-	struct list_head *n = &node->list_head, *h = (void *)head;
+	struct list_head *n = &new->list_head, *h = (void *)head;
 
 	/* If list_head was 0-initialized by map, bpf_obj_init_field wasn't
 	 * called on its fields, so init here
@@ -2391,39 +2392,59 @@ static int __bpf_list_add(struct bpf_list_node_kern *node,
 	if (unlikely(!h->next))
 		INIT_LIST_HEAD(h);
 
-	/* node->owner != NULL implies !list_empty(n), no need to separately
+	/* When prev is not the list head, it must be a node in this list. */
+	if (prev != h && WARN_ON_ONCE(READ_ONCE(container_of(
+	    prev, struct bpf_list_node_kern, list_head)->owner) != head))
+		goto fail;
+
+	/* new->owner != NULL implies !list_empty(n), no need to separately
 	 * check the latter
 	 */
-	if (cmpxchg(&node->owner, NULL, BPF_PTR_POISON)) {
-		/* Only called from BPF prog, no need to migrate_disable */
-		__bpf_obj_drop_impl((void *)n - off, rec, false);
-		return -EINVAL;
-	}
-
-	tail ? list_add_tail(n, h) : list_add(n, h);
-	WRITE_ONCE(node->owner, head);
+	if (cmpxchg(&new->owner, NULL, BPF_PTR_POISON))
+		goto fail;
 
+	list_add(n, prev);
+	WRITE_ONCE(new->owner, head);
 	return 0;
+
+fail:
+	/* Only called from BPF prog, no need to migrate_disable */
+	__bpf_obj_drop_impl((void *)n - off, rec, false);
+	return -EINVAL;
 }
 
 __bpf_kfunc int bpf_list_push_front_impl(struct bpf_list_head *head,
 					 struct bpf_list_node *node,
 					 void *meta__ign, u64 off)
 {
-	struct bpf_list_node_kern *n = (void *)node;
+	struct bpf_list_node_kern *new = (void *)node;
 	struct btf_struct_meta *meta = meta__ign;
+	struct list_head *h = (void *)head;
 
-	return __bpf_list_add(n, head, false, meta ? meta->record : NULL, off);
+	return __bpf_list_add(head, new, h, meta ? meta->record : NULL, off);
 }
 
 __bpf_kfunc int bpf_list_push_back_impl(struct bpf_list_head *head,
 					struct bpf_list_node *node,
 					void *meta__ign, u64 off)
 {
-	struct bpf_list_node_kern *n = (void *)node;
+	struct bpf_list_node_kern *new = (void *)node;
+	struct btf_struct_meta *meta = meta__ign;
+	struct list_head *h = (void *)head;
+
+	return __bpf_list_add(head, new, h->prev, meta ? meta->record : NULL, off);
+}
+
+__bpf_kfunc int bpf_list_add_impl(struct bpf_list_head *head,
+				  struct bpf_list_node *new,
+				  struct bpf_list_node *prev,
+				  void *meta__ign, u64 off)
+{
+	struct bpf_list_node_kern *kn = (void *)new, *kp = (void *)prev;
 	struct btf_struct_meta *meta = meta__ign;
 
-	return __bpf_list_add(n, head, true, meta ? meta->record : NULL, off);
+	return __bpf_list_add(head, kn, &kp->list_head,
+			      meta ? meta->record : NULL, off);
 }
 
 static struct bpf_list_node *__bpf_list_del(struct bpf_list_head *head,
@@ -4563,6 +4584,7 @@ BTF_ID_FLAGS(func, bpf_list_pop_back, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_list_del, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_list_front, KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_list_back, KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_list_add_impl)
 BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_task_release, KF_RELEASE)
 BTF_ID_FLAGS(func, bpf_rbtree_remove, KF_ACQUIRE | KF_RET_NULL)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index c9557d3fb8dd..5f55b68ed935 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12459,6 +12459,7 @@ enum special_kfunc_type {
 	KF_bpf_refcount_acquire_impl,
 	KF_bpf_list_push_front_impl,
 	KF_bpf_list_push_back_impl,
+	KF_bpf_list_add_impl,
 	KF_bpf_list_pop_front,
 	KF_bpf_list_pop_back,
 	KF_bpf_list_del,
@@ -12520,6 +12521,7 @@ BTF_ID(func, bpf_obj_drop_impl)
 BTF_ID(func, bpf_refcount_acquire_impl)
 BTF_ID(func, bpf_list_push_front_impl)
 BTF_ID(func, bpf_list_push_back_impl)
+BTF_ID(func, bpf_list_add_impl)
 BTF_ID(func, bpf_list_pop_front)
 BTF_ID(func, bpf_list_pop_back)
 BTF_ID(func, bpf_list_del)
@@ -12996,6 +12998,7 @@ static bool is_bpf_list_api_kfunc(u32 btf_id)
 {
 	return btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
 	       btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
+	       btf_id == special_kfunc_list[KF_bpf_list_add_impl] ||
 	       btf_id == special_kfunc_list[KF_bpf_list_pop_front] ||
 	       btf_id == special_kfunc_list[KF_bpf_list_pop_back] ||
 	       btf_id == special_kfunc_list[KF_bpf_list_del] ||
@@ -13122,6 +13125,7 @@ static bool check_kfunc_is_graph_node_api(struct bpf_verifier_env *env,
 	case BPF_LIST_NODE:
 		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
 		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
+		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_add_impl] ||
 		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_del]);
 		break;
 	case BPF_RB_NODE:
@@ -14264,6 +14268,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 
 	if (meta.func_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
 	    meta.func_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
+	    meta.func_id == special_kfunc_list[KF_bpf_list_add_impl] ||
 	    meta.func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
 		release_ref_obj_id = regs[BPF_REG_2].ref_obj_id;
 		insn_aux->insert_off = regs[BPF_REG_2].off;
@@ -23230,13 +23235,17 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		*cnt = 3;
 	} else if (desc->func_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
 		   desc->func_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
+		   desc->func_id == special_kfunc_list[KF_bpf_list_add_impl] ||
 		   desc->func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
 		struct btf_struct_meta *kptr_struct_meta = env->insn_aux_data[insn_idx].kptr_struct_meta;
 		int struct_meta_reg = BPF_REG_3;
 		int node_offset_reg = BPF_REG_4;
 
-		/* rbtree_add has extra 'less' arg, so args-to-fixup are in diff regs */
-		if (desc->func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
+		/* list/rbtree_add_impl have an extra arg (prev/less),
+		 * so args-to-fixup are in different regs.
+		 */
+		if (desc->func_id == special_kfunc_list[KF_bpf_list_add_impl] ||
+		    desc->func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
 			struct_meta_reg = BPF_REG_4;
 			node_offset_reg = BPF_REG_5;
 		}
-- 
2.50.1 (Apple Git-155)



* [PATCH bpf-next v7 3/5] bpf: add bpf_list_is_first/last/empty kfuncs
  2026-03-08 13:46 [PATCH bpf-next v7 0/5] bpf: Extend the bpf_list family of APIs Chengkaitao
  2026-03-08 13:46 ` [PATCH bpf-next v7 1/5] bpf: Introduce the bpf_list_del kfunc Chengkaitao
  2026-03-08 13:46 ` [PATCH bpf-next v7 2/5] bpf: Add bpf_list_add_impl to insert node after a given list node Chengkaitao
@ 2026-03-08 13:46 ` Chengkaitao
  2026-03-09  6:42   ` Leon Hwang
  2026-03-08 13:46 ` [PATCH bpf-next v7 4/5] selftests/bpf: Add test cases for bpf_list_del/add/is_first/is_last/empty Chengkaitao
  2026-03-08 13:46 ` [PATCH bpf-next v7 5/5] bpf: refactor kfunc checks using table-driven approach in verifier Chengkaitao
  4 siblings, 1 reply; 19+ messages in thread
From: Chengkaitao @ 2026-03-08 13:46 UTC (permalink / raw)
  To: martin.lau, ast, daniel, andrii, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, chengkaitao,
	linux-kselftest
  Cc: bpf, linux-kernel

From: Kaitao Cheng <chengkaitao@kylinos.cn>

Add three kfuncs for BPF linked list queries:
- bpf_list_is_first(head, node): true if node is the first in the list.
- bpf_list_is_last(head, node): true if node is the last in the list.
- bpf_list_empty(head): true if the list has no entries.

Previously, implementing these checks required popping the first or
last node with bpf_list_pop_front/back, comparing it against the node
in question, and then pushing it back with bpf_list_push_front/back,
which was very inefficient.

With the bpf_list_is_first/last/empty kfuncs, a program can check
directly whether a node is the first or the last, or whether the list
is empty, without removing and re-inserting any node.
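The query semantics, including the owner guard that makes a stray node answer false rather than dereference stale links, can be modelled in user-space C (illustrative `model_*` names, not the kernel API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct list_head {
	struct list_head *next, *prev;
};

struct node_kern {
	struct list_head list_head;
	void *owner;	/* which list head currently owns this node */
};

/* Semantics of bpf_list_is_first/is_last: only a node actually owned
 * by 'head' can be first or last; anything else is simply false. */
static bool model_is_first(struct list_head *head, struct node_kern *n)
{
	return n->owner == head && n->list_head.prev == head;
}

static bool model_is_last(struct list_head *head, struct node_kern *n)
{
	return n->owner == head && n->list_head.next == head;
}

/* Semantics of bpf_list_empty: the head points back at itself. */
static bool model_empty(struct list_head *head)
{
	return head->next == head;
}

static void model_push_front(struct list_head *head, struct node_kern *n)
{
	n->list_head.next = head->next;
	n->list_head.prev = head;
	head->next->prev = &n->list_head;
	head->next = &n->list_head;
	n->owner = head;
}
```

Note that a single-element list makes the same node both first and last, which the selftests in patch 4 rely on.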

Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
---
 kernel/bpf/helpers.c  | 38 ++++++++++++++++++++++++++++++++++++++
 kernel/bpf/verifier.c | 15 +++++++++++++--
 2 files changed, 51 insertions(+), 2 deletions(-)

diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 407520fde668..476f5ad319e2 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -2515,6 +2515,41 @@ __bpf_kfunc struct bpf_list_node *bpf_list_back(struct bpf_list_head *head)
 	return (struct bpf_list_node *)h->prev;
 }
 
+__bpf_kfunc bool bpf_list_is_first(struct bpf_list_head *head, struct bpf_list_node *node)
+{
+	struct list_head *h = (struct list_head *)head;
+	struct bpf_list_node_kern *kn = (struct bpf_list_node_kern *)node;
+
+	if (READ_ONCE(kn->owner) != head)
+		return false;
+
+	return list_is_first(&kn->list_head, h);
+}
+
+__bpf_kfunc bool bpf_list_is_last(struct bpf_list_head *head, struct bpf_list_node *node)
+{
+	struct list_head *h = (struct list_head *)head;
+	struct bpf_list_node_kern *kn = (struct bpf_list_node_kern *)node;
+
+	if (READ_ONCE(kn->owner) != head)
+		return false;
+
+	return list_is_last(&kn->list_head, h);
+}
+
+__bpf_kfunc bool bpf_list_empty(struct bpf_list_head *head)
+{
+	struct list_head *h = (struct list_head *)head;
+
+	/* If list_head was 0-initialized by map, bpf_obj_init_field wasn't
+	 * called on its fields, so init here
+	 */
+	if (unlikely(!h->next))
+		INIT_LIST_HEAD(h);
+
+	return list_empty(h);
+}
+
 __bpf_kfunc struct bpf_rb_node *bpf_rbtree_remove(struct bpf_rb_root *root,
 						  struct bpf_rb_node *node)
 {
@@ -4585,6 +4620,9 @@ BTF_ID_FLAGS(func, bpf_list_del, KF_ACQUIRE | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_list_front, KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_list_back, KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_list_add_impl)
+BTF_ID_FLAGS(func, bpf_list_is_first)
+BTF_ID_FLAGS(func, bpf_list_is_last)
+BTF_ID_FLAGS(func, bpf_list_empty)
 BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
 BTF_ID_FLAGS(func, bpf_task_release, KF_RELEASE)
 BTF_ID_FLAGS(func, bpf_rbtree_remove, KF_ACQUIRE | KF_RET_NULL)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 5f55b68ed935..5e32e02429c4 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12465,6 +12465,9 @@ enum special_kfunc_type {
 	KF_bpf_list_del,
 	KF_bpf_list_front,
 	KF_bpf_list_back,
+	KF_bpf_list_is_first,
+	KF_bpf_list_is_last,
+	KF_bpf_list_empty,
 	KF_bpf_cast_to_kern_ctx,
 	KF_bpf_rdonly_cast,
 	KF_bpf_rcu_read_lock,
@@ -12527,6 +12530,9 @@ BTF_ID(func, bpf_list_pop_back)
 BTF_ID(func, bpf_list_del)
 BTF_ID(func, bpf_list_front)
 BTF_ID(func, bpf_list_back)
+BTF_ID(func, bpf_list_is_first)
+BTF_ID(func, bpf_list_is_last)
+BTF_ID(func, bpf_list_empty)
 BTF_ID(func, bpf_cast_to_kern_ctx)
 BTF_ID(func, bpf_rdonly_cast)
 BTF_ID(func, bpf_rcu_read_lock)
@@ -13003,7 +13009,10 @@ static bool is_bpf_list_api_kfunc(u32 btf_id)
 	       btf_id == special_kfunc_list[KF_bpf_list_pop_back] ||
 	       btf_id == special_kfunc_list[KF_bpf_list_del] ||
 	       btf_id == special_kfunc_list[KF_bpf_list_front] ||
-	       btf_id == special_kfunc_list[KF_bpf_list_back];
+	       btf_id == special_kfunc_list[KF_bpf_list_back] ||
+	       btf_id == special_kfunc_list[KF_bpf_list_is_first] ||
+	       btf_id == special_kfunc_list[KF_bpf_list_is_last] ||
+	       btf_id == special_kfunc_list[KF_bpf_list_empty];
 }
 
 static bool is_bpf_rbtree_api_kfunc(u32 btf_id)
@@ -13126,7 +13135,9 @@ static bool check_kfunc_is_graph_node_api(struct bpf_verifier_env *env,
 		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
 		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
 		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_add_impl] ||
-		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_del]);
+		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_del] ||
+		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_is_first] ||
+		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_is_last]);
 		break;
 	case BPF_RB_NODE:
 		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_remove] ||
-- 
2.50.1 (Apple Git-155)



* [PATCH bpf-next v7 4/5] selftests/bpf: Add test cases for bpf_list_del/add/is_first/is_last/empty
  2026-03-08 13:46 [PATCH bpf-next v7 0/5] bpf: Extend the bpf_list family of APIs Chengkaitao
                   ` (2 preceding siblings ...)
  2026-03-08 13:46 ` [PATCH bpf-next v7 3/5] bpf: add bpf_list_is_first/last/empty kfuncs Chengkaitao
@ 2026-03-08 13:46 ` Chengkaitao
  2026-03-08 14:25   ` bot+bpf-ci
  2026-03-09  6:43   ` Leon Hwang
  2026-03-08 13:46 ` [PATCH bpf-next v7 5/5] bpf: refactor kfunc checks using table-driven approach in verifier Chengkaitao
  4 siblings, 2 replies; 19+ messages in thread
From: Chengkaitao @ 2026-03-08 13:46 UTC (permalink / raw)
  To: martin.lau, ast, daniel, andrii, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, chengkaitao,
	linux-kselftest
  Cc: bpf, linux-kernel

From: Kaitao Cheng <chengkaitao@kylinos.cn>

Extend the refcounted_kptr test: add a node to both an rbtree and a
list, remove it from the rbtree to obtain an owning node pointer,
insert a second node after it in the list with bpf_list_add, and
finally remove both nodes with bpf_list_del.

The test asserts that the list is non-empty after insert, asserts the
first and last nodes after bpf_list_add, and asserts that the list is
empty after removing both nodes.

The test also includes negative cases that expect the verifier to
reject calls to bpf_list_del/add made without holding the spin_lock.

Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
---
 .../testing/selftests/bpf/bpf_experimental.h  |  16 ++
 .../selftests/bpf/progs/refcounted_kptr.c     | 140 ++++++++++++++++++
 2 files changed, 156 insertions(+)

diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
index 4b7210c318dd..005ca9d84677 100644
--- a/tools/testing/selftests/bpf/bpf_experimental.h
+++ b/tools/testing/selftests/bpf/bpf_experimental.h
@@ -85,6 +85,22 @@ extern int bpf_list_push_back_impl(struct bpf_list_head *head,
 /* Convenience macro to wrap over bpf_list_push_back_impl */
 #define bpf_list_push_back(head, node) bpf_list_push_back_impl(head, node, NULL, 0)
 
+/* Description
+ *	Insert 'new' after 'prev' in the BPF linked list with head 'head'.
+ *	The bpf_spin_lock protecting the list must be held. 'prev' must already
+ *	be in that list; 'new' must not be in any list. The 'meta' and 'off'
+ *	parameters are rewritten by the verifier, no need for BPF programs to
+ *	set them.
+ * Returns
+ *	0 on success, -EINVAL if head is NULL, prev is not in the list with head,
+ *	or new is already in a list.
+ */
+extern int bpf_list_add_impl(struct bpf_list_head *head, struct bpf_list_node *new,
+			     struct bpf_list_node *prev, void *meta, __u64 off) __ksym;
+
+/* Convenience macro to wrap over bpf_list_add_impl */
+#define bpf_list_add(head, new, prev) bpf_list_add_impl(head, new, prev, NULL, 0)
+
 /* Description
  *	Remove the entry at the beginning of the BPF linked list.
  * Returns
diff --git a/tools/testing/selftests/bpf/progs/refcounted_kptr.c b/tools/testing/selftests/bpf/progs/refcounted_kptr.c
index 1aca85d86aeb..c2defa991acd 100644
--- a/tools/testing/selftests/bpf/progs/refcounted_kptr.c
+++ b/tools/testing/selftests/bpf/progs/refcounted_kptr.c
@@ -367,6 +367,146 @@ long insert_rbtree_and_stash__del_tree_##rem_tree(void *ctx)		\
 INSERT_STASH_READ(true, "insert_stash_read: remove from tree");
 INSERT_STASH_READ(false, "insert_stash_read: don't remove from tree");
 
+/*
+ * Insert one node in tree and list, remove it from tree, add a second node
+ * after it with bpf_list_add, check bpf_list_is_first/is_last/empty, then
+ * remove both nodes from list via bpf_list_del.
+ */
+SEC("tc")
+__description("list_add_del_and_check: test bpf_list_add/del/is_first/is_last/empty")
+__success __retval(0)
+long list_add_del_and_check(void *ctx)
+{
+	long err = 0;
+	struct bpf_rb_node *rb;
+	struct bpf_list_node *l_node, *l_node_ref;
+	struct node_data *n_rb, *n_new, *n_new_ref;
+
+	err = __insert_in_tree_and_list(&head, &root, &lock);
+	if (err)
+		return err;
+
+	bpf_spin_lock(&lock);
+	/* Test1: bpf_list_empty */
+	if (bpf_list_empty(&head)) {
+		bpf_spin_unlock(&lock);
+		return -4;
+	}
+
+	rb = bpf_rbtree_first(&root);
+	if (!rb) {
+		bpf_spin_unlock(&lock);
+		return -5;
+	}
+
+	rb = bpf_rbtree_remove(&root, rb);
+	bpf_spin_unlock(&lock);
+	if (!rb)
+		return -6;
+
+	n_rb = container_of(rb, struct node_data, r);
+	n_new = bpf_obj_new(typeof(*n_new));
+	if (!n_new) {
+		bpf_obj_drop(n_rb);
+		return -7;
+	}
+	n_new_ref = bpf_refcount_acquire(n_new);
+	if (!n_new_ref) {
+		bpf_obj_drop(n_rb);
+		bpf_obj_drop(n_new);
+		return -8;
+	}
+
+	bpf_spin_lock(&lock);
+	/* Test2: bpf_list_add */
+	if (bpf_list_add(&head, &n_new->l, &n_rb->l)) {
+		bpf_spin_unlock(&lock);
+		bpf_obj_drop(n_rb);
+		bpf_obj_drop(n_new_ref);
+		return -9;
+	}
+
+	/* Test3: bpf_list_is_first/is_last */
+	if (!bpf_list_is_first(&head, &n_rb->l) ||
+	    !bpf_list_is_last(&head, &n_new_ref->l)) {
+		bpf_spin_unlock(&lock);
+		bpf_obj_drop(n_rb);
+		bpf_obj_drop(n_new_ref);
+		return -10;
+	}
+
+	/* Test4: bpf_list_del */
+	l_node = bpf_list_del(&head, &n_rb->l);
+	l_node_ref = bpf_list_del(&head, &n_new_ref->l);
+	bpf_spin_unlock(&lock);
+	bpf_obj_drop(n_rb);
+	bpf_obj_drop(n_new_ref);
+
+	if (l_node)
+		bpf_obj_drop(container_of(l_node, struct node_data, l));
+	else
+		err = -11;
+
+	if (l_node_ref)
+		bpf_obj_drop(container_of(l_node_ref, struct node_data, l));
+	else
+		err = -12;
+
+	bpf_spin_lock(&lock);
+	/* Test5: bpf_list_empty */
+	if (!bpf_list_empty(&head))
+		err = -13;
+	bpf_spin_unlock(&lock);
+	return err;
+}
+
+SEC("?tc")
+__failure __msg("bpf_spin_lock at off=32 must be held for bpf_list_head")
+long list_del_without_lock_fail(void *ctx)
+{
+	struct bpf_rb_node *rb;
+	struct bpf_list_node *l;
+	struct node_data *n;
+
+	bpf_spin_lock(&lock);
+	rb = bpf_rbtree_first(&root);
+	bpf_spin_unlock(&lock);
+	if (!rb)
+		return -1;
+
+	n = container_of(rb, struct node_data, r);
+	/* Error case: delete list node without holding lock */
+	l = bpf_list_del(&head, &n->l);
+	if (!l)
+		return -2;
+	bpf_obj_drop(container_of(l, struct node_data, l));
+
+	return 0;
+}
+
+SEC("?tc")
+__failure __msg("bpf_spin_lock at off=32 must be held for bpf_list_head")
+long list_add_without_lock_fail(void *ctx)
+{
+	struct bpf_rb_node *rb;
+	struct bpf_list_node *l;
+	struct node_data *n;
+
+	bpf_spin_lock(&lock);
+	rb = bpf_rbtree_first(&root);
+	l = bpf_list_front(&head);
+	bpf_spin_unlock(&lock);
+	if (!rb || !l)
+		return -1;
+
+	n = container_of(l, struct node_data, l);
+	/* Error case: add list node without holding lock */
+	if (bpf_list_add(&head, &n->l, l))
+		return -2;
+
+	return 0;
+}
+
 SEC("tc")
 __success
 long rbtree_refcounted_node_ref_escapes(void *ctx)
-- 
2.50.1 (Apple Git-155)



* [PATCH bpf-next v7 5/5] bpf: refactor kfunc checks using table-driven approach in verifier
  2026-03-08 13:46 [PATCH bpf-next v7 0/5] bpf: Extend the bpf_list family of APIs Chengkaitao
                   ` (3 preceding siblings ...)
  2026-03-08 13:46 ` [PATCH bpf-next v7 4/5] selftests/bpf: Add test cases for bpf_list_del/add/is_first/is_last/empty Chengkaitao
@ 2026-03-08 13:46 ` Chengkaitao
  2026-03-09  6:45   ` Leon Hwang
  4 siblings, 1 reply; 19+ messages in thread
From: Chengkaitao @ 2026-03-08 13:46 UTC (permalink / raw)
  To: martin.lau, ast, daniel, andrii, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, chengkaitao,
	linux-kselftest
  Cc: bpf, linux-kernel

From: Kaitao Cheng <chengkaitao@kylinos.cn>

Replace the per-kfunc btf_id comparison chains in the
list/rbtree/res_spin_lock and graph-node checks with static kfunc
tables queried via a new btf_id_in_kfunc_table() helper, for easier
maintenance.

Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
---
 kernel/bpf/verifier.c | 97 +++++++++++++++++++++++++++++--------------
 1 file changed, 66 insertions(+), 31 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 5e32e02429c4..853716f599ce 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12602,6 +12602,62 @@ BTF_ID(func, bpf_session_is_return)
 BTF_ID(func, bpf_stream_vprintk)
 BTF_ID(func, bpf_stream_print_stack)
 
+static const enum special_kfunc_type bpf_list_api_kfuncs[] = {
+	KF_bpf_list_push_front_impl,
+	KF_bpf_list_push_back_impl,
+	KF_bpf_list_add_impl,
+	KF_bpf_list_pop_front,
+	KF_bpf_list_pop_back,
+	KF_bpf_list_del,
+	KF_bpf_list_front,
+	KF_bpf_list_back,
+	KF_bpf_list_is_first,
+	KF_bpf_list_is_last,
+	KF_bpf_list_empty,
+};
+
+/* Kfuncs that take a list node argument (bpf_list_node *). */
+static const enum special_kfunc_type bpf_list_node_api_kfuncs[] = {
+	KF_bpf_list_push_front_impl,
+	KF_bpf_list_push_back_impl,
+	KF_bpf_list_add_impl,
+	KF_bpf_list_del,
+	KF_bpf_list_is_first,
+	KF_bpf_list_is_last,
+};
+
+/* Kfuncs that take an rbtree node argument (bpf_rb_node *). */
+static const enum special_kfunc_type bpf_rbtree_node_api_kfuncs[] = {
+	KF_bpf_rbtree_remove,
+	KF_bpf_rbtree_add_impl,
+	KF_bpf_rbtree_left,
+	KF_bpf_rbtree_right,
+};
+
+static const enum special_kfunc_type bpf_rbtree_api_kfuncs[] = {
+	KF_bpf_rbtree_add_impl,
+	KF_bpf_rbtree_remove,
+	KF_bpf_rbtree_first,
+	KF_bpf_rbtree_root,
+	KF_bpf_rbtree_left,
+	KF_bpf_rbtree_right,
+};
+
+static const enum special_kfunc_type bpf_res_spin_lock_kfuncs[] = {
+	KF_bpf_res_spin_lock,
+	KF_bpf_res_spin_unlock,
+	KF_bpf_res_spin_lock_irqsave,
+	KF_bpf_res_spin_unlock_irqrestore,
+};
+
+static bool btf_id_in_kfunc_table(u32 btf_id, const enum special_kfunc_type *kfuncs, int n)
+{
+	for (int i = 0; i < n; i++)
+		if (btf_id == special_kfunc_list[kfuncs[i]])
+			return true;
+	return false;
+}
+
 static bool is_task_work_add_kfunc(u32 func_id)
 {
 	return func_id == special_kfunc_list[KF_bpf_task_work_schedule_signal] ||
@@ -13002,27 +13058,14 @@ static int check_reg_allocation_locked(struct bpf_verifier_env *env, struct bpf_
 
 static bool is_bpf_list_api_kfunc(u32 btf_id)
 {
-	return btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
-	       btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
-	       btf_id == special_kfunc_list[KF_bpf_list_add_impl] ||
-	       btf_id == special_kfunc_list[KF_bpf_list_pop_front] ||
-	       btf_id == special_kfunc_list[KF_bpf_list_pop_back] ||
-	       btf_id == special_kfunc_list[KF_bpf_list_del] ||
-	       btf_id == special_kfunc_list[KF_bpf_list_front] ||
-	       btf_id == special_kfunc_list[KF_bpf_list_back] ||
-	       btf_id == special_kfunc_list[KF_bpf_list_is_first] ||
-	       btf_id == special_kfunc_list[KF_bpf_list_is_last] ||
-	       btf_id == special_kfunc_list[KF_bpf_list_empty];
+	return btf_id_in_kfunc_table(btf_id, bpf_list_api_kfuncs,
+				     ARRAY_SIZE(bpf_list_api_kfuncs));
 }
 
 static bool is_bpf_rbtree_api_kfunc(u32 btf_id)
 {
-	return btf_id == special_kfunc_list[KF_bpf_rbtree_add_impl] ||
-	       btf_id == special_kfunc_list[KF_bpf_rbtree_remove] ||
-	       btf_id == special_kfunc_list[KF_bpf_rbtree_first] ||
-	       btf_id == special_kfunc_list[KF_bpf_rbtree_root] ||
-	       btf_id == special_kfunc_list[KF_bpf_rbtree_left] ||
-	       btf_id == special_kfunc_list[KF_bpf_rbtree_right];
+	return btf_id_in_kfunc_table(btf_id, bpf_rbtree_api_kfuncs,
+				     ARRAY_SIZE(bpf_rbtree_api_kfuncs));
 }
 
 static bool is_bpf_iter_num_api_kfunc(u32 btf_id)
@@ -13040,10 +13083,8 @@ static bool is_bpf_graph_api_kfunc(u32 btf_id)
 
 static bool is_bpf_res_spin_lock_kfunc(u32 btf_id)
 {
-	return btf_id == special_kfunc_list[KF_bpf_res_spin_lock] ||
-	       btf_id == special_kfunc_list[KF_bpf_res_spin_unlock] ||
-	       btf_id == special_kfunc_list[KF_bpf_res_spin_lock_irqsave] ||
-	       btf_id == special_kfunc_list[KF_bpf_res_spin_unlock_irqrestore];
+	return btf_id_in_kfunc_table(btf_id, bpf_res_spin_lock_kfuncs,
+				     ARRAY_SIZE(bpf_res_spin_lock_kfuncs));
 }
 
 static bool is_bpf_arena_kfunc(u32 btf_id)
@@ -13132,18 +13173,12 @@ static bool check_kfunc_is_graph_node_api(struct bpf_verifier_env *env,
 
 	switch (node_field_type) {
 	case BPF_LIST_NODE:
-		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
-		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
-		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_add_impl] ||
-		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_del] ||
-		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_is_first] ||
-		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_is_last]);
+		ret = btf_id_in_kfunc_table(kfunc_btf_id, bpf_list_node_api_kfuncs,
+					    ARRAY_SIZE(bpf_list_node_api_kfuncs));
 		break;
 	case BPF_RB_NODE:
-		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_remove] ||
-		       kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_add_impl] ||
-		       kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_left] ||
-		       kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_right]);
+		ret = btf_id_in_kfunc_table(kfunc_btf_id, bpf_rbtree_node_api_kfuncs,
+					    ARRAY_SIZE(bpf_rbtree_node_api_kfuncs));
 		break;
 	default:
 		verbose(env, "verifier internal error: unexpected graph node argument type %s\n",
-- 
2.50.1 (Apple Git-155)



* Re: [PATCH bpf-next v7 4/5] selftests/bpf: Add test cases for bpf_list_del/add/is_first/is_last/empty
  2026-03-08 13:46 ` [PATCH bpf-next v7 4/5] selftests/bpf: Add test cases for bpf_list_del/add/is_first/is_last/empty Chengkaitao
@ 2026-03-08 14:25   ` bot+bpf-ci
  2026-03-09  6:43   ` Leon Hwang
  1 sibling, 0 replies; 19+ messages in thread
From: bot+bpf-ci @ 2026-03-08 14:25 UTC (permalink / raw)
  To: pilgrimtao, martin.lau, ast, daniel, andrii, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	chengkaitao, linux-kselftest
  Cc: bpf, linux-kernel, ast, andrii, daniel, martin.lau, eddyz87,
	yonghong.song, clm, ihor.solodrai


> diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
> --- a/tools/testing/selftests/bpf/bpf_experimental.h
> +++ b/tools/testing/selftests/bpf/bpf_experimental.h
> @@ -86,6 +86,22 @@ #define bpf_list_push_front(head, node) bpf_list_push_front_impl(head, node, NULL, 0)
>  #define bpf_list_push_back(head, node) bpf_list_push_back_impl(head, node, NULL, 0)
>
> +/* Description
> + *	Insert 'new' after 'prev' in the BPF linked list with head 'head'.

[ ... ]

> +extern int bpf_list_add_impl(struct bpf_list_head *head, struct bpf_list_node *new,
> +			     struct bpf_list_node *prev, void *meta, __u64 off) __ksym;
> +
> +/* Convenience macro to wrap over bpf_list_add_impl */
> +#define bpf_list_add(head, new, prev) bpf_list_add_impl(head, new, prev, NULL, 0)

This adds the extern declaration for bpf_list_add_impl, but the test
also calls bpf_list_del(), bpf_list_empty(), bpf_list_is_first(),
bpf_list_is_last(), and bpf_list_front(). Are the extern __ksym
declarations for those five kfuncs missing from this header?

The v6 of this series added all the declarations here (39 lines), but
v7 reduced it to just bpf_list_add_impl. The btf_dump code in libbpf
(btf_dump.c) treats BTF_KIND_FUNC as unsupported data and does not emit
function prototypes into vmlinux.h:

tools/lib/bpf/btf_dump.c:btf_dump_emit_type_decl_via_under() {
    ...
    case BTF_KIND_FUNC:
    case BTF_KIND_FUNC_PROTO:
        err = btf_dump_unsupported_data(d, t, id);
    ...
}

Without declarations, the BPF program will not compile. The existing
kfuncs in this file (bpf_list_push_front_impl, bpf_list_pop_front,
bpf_list_pop_back, etc.) all have explicit extern __ksym declarations
here. Would something like the following be needed?

    extern struct bpf_list_node *bpf_list_del(struct bpf_list_head *head,
                                              struct bpf_list_node *node) __ksym;
    extern bool bpf_list_is_first(struct bpf_list_head *head,
                                  struct bpf_list_node *node) __ksym;
    extern bool bpf_list_is_last(struct bpf_list_head *head,
                                 struct bpf_list_node *node) __ksym;
    extern bool bpf_list_empty(struct bpf_list_head *head) __ksym;
    extern struct bpf_list_node *bpf_list_front(struct bpf_list_head *head) __ksym;


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/22822524869


* Re: [PATCH bpf-next v7 2/5] bpf: Add bpf_list_add_impl to insert node after a given list node
  2026-03-08 13:46 ` [PATCH bpf-next v7 2/5] bpf: Add bpf_list_add_impl to insert node after a given list node Chengkaitao
@ 2026-03-08 14:25   ` bot+bpf-ci
  2026-03-09  6:34   ` Leon Hwang
  2026-03-10 20:10   ` Kumar Kartikeya Dwivedi
  2 siblings, 0 replies; 19+ messages in thread
From: bot+bpf-ci @ 2026-03-08 14:25 UTC (permalink / raw)
  To: pilgrimtao, martin.lau, ast, daniel, andrii, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	chengkaitao, linux-kselftest
  Cc: bpf, linux-kernel, ast, andrii, daniel, martin.lau, eddyz87,
	yonghong.song, clm, ihor.solodrai


> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c

[ ... ]

> +__bpf_kfunc int bpf_list_add_impl(struct bpf_list_head *head,
> +				  struct bpf_list_node *new,
> +				  struct bpf_list_node *prev,
> +				  void *meta__ign, u64 off)
> +{
> +	struct bpf_list_node_kern *kn = (void *)new, *kp = (void *)prev;
> +	struct btf_struct_meta *meta = meta__ign;
> +
> +	return __bpf_list_add(head, kn, &kp->list_head,
> +			      meta ? meta->record : NULL, off);
> +}

The prev argument here is typed as struct bpf_list_node *, so the
verifier classifies it as KF_ARG_PTR_TO_LIST_NODE.  In
check_kfunc_args(), the KF_ARG_PTR_TO_LIST_NODE case requires:

    if (reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) {
        ...
        return -EINVAL;
    }
    if (!reg->ref_obj_id) {
        ...
        return -EINVAL;
    }

This means prev must be an owning reference.  However, the natural
way to obtain a reference to a node already in the list is via
bpf_list_front() or bpf_list_back(), which return non-owning
references (PTR_TO_BTF_ID with NON_OWN_REF set via
ref_set_non_owning).  These will not pass the check above.

Compare with KF_ARG_PTR_TO_RB_NODE, which differentiates between
insertion (requires owning ref) and other operations (allows
non-owning refs):

    case KF_ARG_PTR_TO_RB_NODE:
        if (meta->func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
            /* owning ref required */
        } else {
            /* non-owning or refcounted allowed */
        }

Should KF_ARG_PTR_TO_LIST_NODE have a similar distinction, so that
for bpf_list_add_impl only the new argument (R2) requires an owning
reference while prev (R3) can accept non-owning references?

Without this, the API only works when prev is an owning reference to
a refcounted object still in the list (e.g. obtained from
bpf_rbtree_remove on a node that is in both a tree and a list).
This is the pattern used in the selftests, but it limits the kfunc
to a narrow use case that the commit message does not describe.

This is not addressed in any later commit in the series (checked
through 4a7e9776ee29 "bpf: refactor kfunc checks using table-driven
approach in verifier").


---
AI reviewed your patch. Please fix the bug or email reply why it's not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

CI run summary: https://github.com/kernel-patches/bpf/actions/runs/22822524869


* Re: [PATCH bpf-next v7 1/5] bpf: Introduce the bpf_list_del kfunc.
  2026-03-08 13:46 ` [PATCH bpf-next v7 1/5] bpf: Introduce the bpf_list_del kfunc Chengkaitao
@ 2026-03-09  6:33   ` Leon Hwang
  2026-03-10 20:10     ` Kumar Kartikeya Dwivedi
  0 siblings, 1 reply; 19+ messages in thread
From: Leon Hwang @ 2026-03-09  6:33 UTC (permalink / raw)
  To: Chengkaitao, martin.lau, ast, daniel, andrii, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	chengkaitao, linux-kselftest
  Cc: bpf, linux-kernel

On 8/3/26 21:46, Chengkaitao wrote:
> From: Kaitao Cheng <chengkaitao@kylinos.cn>
> 
> If a user holds ownership of a node in the middle of a list, they
> can directly remove it from the list without strictly adhering to
> deletion rules from the head or tail.
> 
> We have added an additional parameter bpf_list_head *head to
> bpf_list_del, as the verifier requires the head parameter to
> check whether the lock is being held.
> 
> This is typically paired with bpf_refcount. After calling
> bpf_list_del, it is generally necessary to drop the reference to
> the list node twice to prevent reference count leaks.
> 
> Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
> ---
>  kernel/bpf/helpers.c  | 30 +++++++++++++++++++++++-------
>  kernel/bpf/verifier.c |  6 +++++-
>  2 files changed, 28 insertions(+), 8 deletions(-)
> 
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 6eb6c82ed2ee..01b74c4ac00d 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -2426,20 +2426,23 @@ __bpf_kfunc int bpf_list_push_back_impl(struct bpf_list_head *head,
>  	return __bpf_list_add(n, head, true, meta ? meta->record : NULL, off);
>  }
>  
> -static struct bpf_list_node *__bpf_list_del(struct bpf_list_head *head, bool tail)
> +static struct bpf_list_node *__bpf_list_del(struct bpf_list_head *head,
> +					    struct list_head *n)
>  {
> -	struct list_head *n, *h = (void *)head;
> +	struct list_head *h = (void *)head;
>  	struct bpf_list_node_kern *node;
>  
>  	/* If list_head was 0-initialized by map, bpf_obj_init_field wasn't
>  	 * called on its fields, so init here
>  	 */
> -	if (unlikely(!h->next))
> +	if (unlikely(!h->next)) {
>  		INIT_LIST_HEAD(h);
> -	if (list_empty(h))
> +		return NULL;
> +	}
> +
> +	if (n == h)
>  		return NULL;
>  
> -	n = tail ? h->prev : h->next;
>  	node = container_of(n, struct bpf_list_node_kern, list_head);
>  	if (WARN_ON_ONCE(READ_ONCE(node->owner) != head))
>  		return NULL;

This refactoring is worthwhile, because "struct list_head *n" reads
better than "bool tail".

However, such a refactoring should be a separate preparatory patch.
Importantly, a refactoring should not introduce functional changes.

Thanks,
Leon

> @@ -2451,12 +2454,24 @@ static struct bpf_list_node *__bpf_list_del(struct bpf_list_head *head, bool tai
>  
>  __bpf_kfunc struct bpf_list_node *bpf_list_pop_front(struct bpf_list_head *head)
>  {
> -	return __bpf_list_del(head, false);
> +	struct list_head *h = (void *)head;
> +
> +	return __bpf_list_del(head, h->next);
>  }
>  
>  __bpf_kfunc struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head)
>  {
> -	return __bpf_list_del(head, true);
> +	struct list_head *h = (void *)head;
> +
> +	return __bpf_list_del(head, h->prev);
> +}
> +
> +__bpf_kfunc struct bpf_list_node *bpf_list_del(struct bpf_list_head *head,
> +					       struct bpf_list_node *node)
> +{
> +	struct bpf_list_node_kern *kn = (void *)node;
> +
> +	return __bpf_list_del(head, &kn->list_head);
>  }
>  
>  __bpf_kfunc struct bpf_list_node *bpf_list_front(struct bpf_list_head *head)
> @@ -4545,6 +4560,7 @@ BTF_ID_FLAGS(func, bpf_list_push_front_impl)
>  BTF_ID_FLAGS(func, bpf_list_push_back_impl)
>  BTF_ID_FLAGS(func, bpf_list_pop_front, KF_ACQUIRE | KF_RET_NULL)
>  BTF_ID_FLAGS(func, bpf_list_pop_back, KF_ACQUIRE | KF_RET_NULL)
> +BTF_ID_FLAGS(func, bpf_list_del, KF_ACQUIRE | KF_RET_NULL)
>  BTF_ID_FLAGS(func, bpf_list_front, KF_RET_NULL)
>  BTF_ID_FLAGS(func, bpf_list_back, KF_RET_NULL)
>  BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 67c09b43a497..c9557d3fb8dd 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -12461,6 +12461,7 @@ enum special_kfunc_type {
>  	KF_bpf_list_push_back_impl,
>  	KF_bpf_list_pop_front,
>  	KF_bpf_list_pop_back,
> +	KF_bpf_list_del,
>  	KF_bpf_list_front,
>  	KF_bpf_list_back,
>  	KF_bpf_cast_to_kern_ctx,
> @@ -12521,6 +12522,7 @@ BTF_ID(func, bpf_list_push_front_impl)
>  BTF_ID(func, bpf_list_push_back_impl)
>  BTF_ID(func, bpf_list_pop_front)
>  BTF_ID(func, bpf_list_pop_back)
> +BTF_ID(func, bpf_list_del)
>  BTF_ID(func, bpf_list_front)
>  BTF_ID(func, bpf_list_back)
>  BTF_ID(func, bpf_cast_to_kern_ctx)
> @@ -12996,6 +12998,7 @@ static bool is_bpf_list_api_kfunc(u32 btf_id)
>  	       btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
>  	       btf_id == special_kfunc_list[KF_bpf_list_pop_front] ||
>  	       btf_id == special_kfunc_list[KF_bpf_list_pop_back] ||
> +	       btf_id == special_kfunc_list[KF_bpf_list_del] ||
>  	       btf_id == special_kfunc_list[KF_bpf_list_front] ||
>  	       btf_id == special_kfunc_list[KF_bpf_list_back];
>  }
> @@ -13118,7 +13121,8 @@ static bool check_kfunc_is_graph_node_api(struct bpf_verifier_env *env,
>  	switch (node_field_type) {
>  	case BPF_LIST_NODE:
>  		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
> -		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl]);
> +		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
> +		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_del]);
>  		break;
>  	case BPF_RB_NODE:
>  		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_remove] ||



* Re: [PATCH bpf-next v7 2/5] bpf: Add bpf_list_add_impl to insert node after a given list node
  2026-03-08 13:46 ` [PATCH bpf-next v7 2/5] bpf: Add bpf_list_add_impl to insert node after a given list node Chengkaitao
  2026-03-08 14:25   ` bot+bpf-ci
@ 2026-03-09  6:34   ` Leon Hwang
  2026-03-10 20:10   ` Kumar Kartikeya Dwivedi
  2 siblings, 0 replies; 19+ messages in thread
From: Leon Hwang @ 2026-03-09  6:34 UTC (permalink / raw)
  To: Chengkaitao, martin.lau, ast, daniel, andrii, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	chengkaitao, linux-kselftest
  Cc: bpf, linux-kernel

On 8/3/26 21:46, Chengkaitao wrote:
> From: Kaitao Cheng <chengkaitao@kylinos.cn>
> 
> Add a new kfunc bpf_list_add_impl(head, new, prev, meta, off) that
> inserts 'new' after 'prev' in the BPF linked list. Both must be in
> the same list; 'prev' must already be in the list. The new node must
> be an owning reference (e.g. from bpf_obj_new); the kfunc consumes
> that reference and the node becomes non-owning once inserted.
> 
> We have added an additional parameter bpf_list_head *head to
> bpf_list_add_impl, as the verifier requires the head parameter to
> check whether the lock is being held.
> 
> Returns 0 on success, -EINVAL if 'prev' is not in a list or 'new'
> is already in a list (or duplicate insertion). On failure, the
> kernel drops the passed-in node.
> 
> Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
> ---
>  kernel/bpf/helpers.c  | 56 ++++++++++++++++++++++++++++++-------------
>  kernel/bpf/verifier.c | 13 ++++++++--
>  2 files changed, 50 insertions(+), 19 deletions(-)
> 
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 01b74c4ac00d..407520fde668 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -2379,11 +2379,12 @@ __bpf_kfunc void *bpf_refcount_acquire_impl(void *p__refcounted_kptr, void *meta
>  	return (void *)p__refcounted_kptr;
>  }
>  
> -static int __bpf_list_add(struct bpf_list_node_kern *node,
> -			  struct bpf_list_head *head,
> -			  bool tail, struct btf_record *rec, u64 off)
> +static int __bpf_list_add(struct bpf_list_head *head,
> +			  struct bpf_list_node_kern *new,
> +			  struct list_head *prev,
> +			  struct btf_record *rec, u64 off)
>  {
> -	struct list_head *n = &node->list_head, *h = (void *)head;
> +	struct list_head *n = &new->list_head, *h = (void *)head;
>  
>  	/* If list_head was 0-initialized by map, bpf_obj_init_field wasn't
>  	 * called on its fields, so init here
> @@ -2391,39 +2392,59 @@ static int __bpf_list_add(struct bpf_list_node_kern *node,
>  	if (unlikely(!h->next))
>  		INIT_LIST_HEAD(h);
>  
> -	/* node->owner != NULL implies !list_empty(n), no need to separately
> +	/* When prev is not the list head, it must be a node in this list. */
> +	if (prev != h && WARN_ON_ONCE(READ_ONCE(container_of(
> +	    prev, struct bpf_list_node_kern, list_head)->owner) != head))
> +		goto fail;
> +
> +	/* new->owner != NULL implies !list_empty(n), no need to separately
>  	 * check the latter
>  	 */
> -	if (cmpxchg(&node->owner, NULL, BPF_PTR_POISON)) {
> -		/* Only called from BPF prog, no need to migrate_disable */
> -		__bpf_obj_drop_impl((void *)n - off, rec, false);
> -		return -EINVAL;
> -	}
> -
> -	tail ? list_add_tail(n, h) : list_add(n, h);
> -	WRITE_ONCE(node->owner, head);
> +	if (cmpxchg(&new->owner, NULL, BPF_PTR_POISON))
> +		goto fail;
>  
> +	list_add(n, prev);
> +	WRITE_ONCE(new->owner, head);
>  	return 0;
> +
> +fail:
> +	/* Only called from BPF prog, no need to migrate_disable */
> +	__bpf_obj_drop_impl((void *)n - off, rec, false);
> +	return -EINVAL;
>  }
>  

This refactoring is worthwhile, but it should be a separate
preparatory patch.

>  __bpf_kfunc int bpf_list_push_front_impl(struct bpf_list_head *head,
>  					 struct bpf_list_node *node,
>  					 void *meta__ign, u64 off)
>  {
> -	struct bpf_list_node_kern *n = (void *)node;
> +	struct bpf_list_node_kern *new = (void *)node;
>  	struct btf_struct_meta *meta = meta__ign;
> +	struct list_head *h = (void *)head;
>  
> -	return __bpf_list_add(n, head, false, meta ? meta->record : NULL, off);
> +	return __bpf_list_add(head, new, h, meta ? meta->record : NULL, off);

It seems strange to swap the positions of the first two args.

Thanks,
Leon

>  }
>  
>  __bpf_kfunc int bpf_list_push_back_impl(struct bpf_list_head *head,
>  					struct bpf_list_node *node,
>  					void *meta__ign, u64 off)
>  {
> -	struct bpf_list_node_kern *n = (void *)node;
> +	struct bpf_list_node_kern *new = (void *)node;
> +	struct btf_struct_meta *meta = meta__ign;
> +	struct list_head *h = (void *)head;
> +
> +	return __bpf_list_add(head, new, h->prev, meta ? meta->record : NULL, off);
> +}
> +
> +__bpf_kfunc int bpf_list_add_impl(struct bpf_list_head *head,
> +				  struct bpf_list_node *new,
> +				  struct bpf_list_node *prev,
> +				  void *meta__ign, u64 off)
> +{
> +	struct bpf_list_node_kern *kn = (void *)new, *kp = (void *)prev;
>  	struct btf_struct_meta *meta = meta__ign;
>  
> -	return __bpf_list_add(n, head, true, meta ? meta->record : NULL, off);
> +	return __bpf_list_add(head, kn, &kp->list_head,
> +			      meta ? meta->record : NULL, off);
>  }
>  
>  static struct bpf_list_node *__bpf_list_del(struct bpf_list_head *head,
> @@ -4563,6 +4584,7 @@ BTF_ID_FLAGS(func, bpf_list_pop_back, KF_ACQUIRE | KF_RET_NULL)
>  BTF_ID_FLAGS(func, bpf_list_del, KF_ACQUIRE | KF_RET_NULL)
>  BTF_ID_FLAGS(func, bpf_list_front, KF_RET_NULL)
>  BTF_ID_FLAGS(func, bpf_list_back, KF_RET_NULL)
> +BTF_ID_FLAGS(func, bpf_list_add_impl)
>  BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
>  BTF_ID_FLAGS(func, bpf_task_release, KF_RELEASE)
>  BTF_ID_FLAGS(func, bpf_rbtree_remove, KF_ACQUIRE | KF_RET_NULL)
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index c9557d3fb8dd..5f55b68ed935 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -12459,6 +12459,7 @@ enum special_kfunc_type {
>  	KF_bpf_refcount_acquire_impl,
>  	KF_bpf_list_push_front_impl,
>  	KF_bpf_list_push_back_impl,
> +	KF_bpf_list_add_impl,
>  	KF_bpf_list_pop_front,
>  	KF_bpf_list_pop_back,
>  	KF_bpf_list_del,
> @@ -12520,6 +12521,7 @@ BTF_ID(func, bpf_obj_drop_impl)
>  BTF_ID(func, bpf_refcount_acquire_impl)
>  BTF_ID(func, bpf_list_push_front_impl)
>  BTF_ID(func, bpf_list_push_back_impl)
> +BTF_ID(func, bpf_list_add_impl)
>  BTF_ID(func, bpf_list_pop_front)
>  BTF_ID(func, bpf_list_pop_back)
>  BTF_ID(func, bpf_list_del)
> @@ -12996,6 +12998,7 @@ static bool is_bpf_list_api_kfunc(u32 btf_id)
>  {
>  	return btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
>  	       btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
> +	       btf_id == special_kfunc_list[KF_bpf_list_add_impl] ||
>  	       btf_id == special_kfunc_list[KF_bpf_list_pop_front] ||
>  	       btf_id == special_kfunc_list[KF_bpf_list_pop_back] ||
>  	       btf_id == special_kfunc_list[KF_bpf_list_del] ||
> @@ -13122,6 +13125,7 @@ static bool check_kfunc_is_graph_node_api(struct bpf_verifier_env *env,
>  	case BPF_LIST_NODE:
>  		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
>  		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
> +		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_add_impl] ||
>  		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_del]);
>  		break;
>  	case BPF_RB_NODE:
> @@ -14264,6 +14268,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>  
>  	if (meta.func_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
>  	    meta.func_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
> +	    meta.func_id == special_kfunc_list[KF_bpf_list_add_impl] ||
>  	    meta.func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
>  		release_ref_obj_id = regs[BPF_REG_2].ref_obj_id;
>  		insn_aux->insert_off = regs[BPF_REG_2].off;
> @@ -23230,13 +23235,17 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>  		*cnt = 3;
>  	} else if (desc->func_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
>  		   desc->func_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
> +		   desc->func_id == special_kfunc_list[KF_bpf_list_add_impl] ||
>  		   desc->func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
>  		struct btf_struct_meta *kptr_struct_meta = env->insn_aux_data[insn_idx].kptr_struct_meta;
>  		int struct_meta_reg = BPF_REG_3;
>  		int node_offset_reg = BPF_REG_4;
>  
> -		/* rbtree_add has extra 'less' arg, so args-to-fixup are in diff regs */
> -		if (desc->func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
> +		/* list/rbtree_add_impl have an extra arg (prev/less),
> +		 * so args-to-fixup are in different regs.
> +		 */
> +		if (desc->func_id == special_kfunc_list[KF_bpf_list_add_impl] ||
> +		    desc->func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
>  			struct_meta_reg = BPF_REG_4;
>  			node_offset_reg = BPF_REG_5;
>  		}



* Re: [PATCH bpf-next v7 3/5] bpf: add bpf_list_is_first/last/empty kfuncs
  2026-03-08 13:46 ` [PATCH bpf-next v7 3/5] bpf: add bpf_list_is_first/last/empty kfuncs Chengkaitao
@ 2026-03-09  6:42   ` Leon Hwang
  0 siblings, 0 replies; 19+ messages in thread
From: Leon Hwang @ 2026-03-09  6:42 UTC (permalink / raw)
  To: Chengkaitao, martin.lau, ast, daniel, andrii, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	chengkaitao, linux-kselftest
  Cc: bpf, linux-kernel

On 8/3/26 21:46, Chengkaitao wrote:
> From: Kaitao Cheng <chengkaitao@kylinos.cn>
> 
> Add three kfuncs for BPF linked list queries:
> - bpf_list_is_first(head, node): true if node is the first in the list.
> - bpf_list_is_last(head, node): true if node is the last in the list.
> - bpf_list_empty(head): true if the list has no entries.
> 
> In previous versions, to implement the above functionality, it was
  ^
  Currently, ..., it is

"In previous versions" would mislead readers into thinking you are
referring to the previous versions of this series.

Thanks,
Leon

> necessary to first call bpf_list_pop_front/back to retrieve the first
> or last node before checking whether the passed-in node was the first
> or last one. After the check, the node had to be pushed back into the
> list using bpf_list_push_front/back, which was very inefficient.
> 
> Now, with the bpf_list_is_first/last/empty kfuncs, we can directly
> check whether a node is the first, last, or whether the list is empty,
> without having to first retrieve the node.
> 
> Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
> ---
>  kernel/bpf/helpers.c  | 38 ++++++++++++++++++++++++++++++++++++++
>  kernel/bpf/verifier.c | 15 +++++++++++++--
>  2 files changed, 51 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 407520fde668..476f5ad319e2 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -2515,6 +2515,41 @@ __bpf_kfunc struct bpf_list_node *bpf_list_back(struct bpf_list_head *head)
>  	return (struct bpf_list_node *)h->prev;
>  }
>  
> +__bpf_kfunc bool bpf_list_is_first(struct bpf_list_head *head, struct bpf_list_node *node)
> +{
> +	struct list_head *h = (struct list_head *)head;
> +	struct bpf_list_node_kern *kn = (struct bpf_list_node_kern *)node;
> +
> +	if (READ_ONCE(kn->owner) != head)
> +		return false;
> +
> +	return list_is_first(&kn->list_head, h);
> +}
> +
> +__bpf_kfunc bool bpf_list_is_last(struct bpf_list_head *head, struct bpf_list_node *node)
> +{
> +	struct list_head *h = (struct list_head *)head;
> +	struct bpf_list_node_kern *kn = (struct bpf_list_node_kern *)node;
> +
> +	if (READ_ONCE(kn->owner) != head)
> +		return false;
> +
> +	return list_is_last(&kn->list_head, h);
> +}
> +
> +__bpf_kfunc bool bpf_list_empty(struct bpf_list_head *head)
> +{
> +	struct list_head *h = (struct list_head *)head;
> +
> +	/* If list_head was 0-initialized by map, bpf_obj_init_field wasn't
> +	 * called on its fields, so init here
> +	 */
> +	if (unlikely(!h->next))
> +		INIT_LIST_HEAD(h);
> +
> +	return list_empty(h);
> +}
> +
>  __bpf_kfunc struct bpf_rb_node *bpf_rbtree_remove(struct bpf_rb_root *root,
>  						  struct bpf_rb_node *node)
>  {
> @@ -4585,6 +4620,9 @@ BTF_ID_FLAGS(func, bpf_list_del, KF_ACQUIRE | KF_RET_NULL)
>  BTF_ID_FLAGS(func, bpf_list_front, KF_RET_NULL)
>  BTF_ID_FLAGS(func, bpf_list_back, KF_RET_NULL)
>  BTF_ID_FLAGS(func, bpf_list_add_impl)
> +BTF_ID_FLAGS(func, bpf_list_is_first)
> +BTF_ID_FLAGS(func, bpf_list_is_last)
> +BTF_ID_FLAGS(func, bpf_list_empty)
>  BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
>  BTF_ID_FLAGS(func, bpf_task_release, KF_RELEASE)
>  BTF_ID_FLAGS(func, bpf_rbtree_remove, KF_ACQUIRE | KF_RET_NULL)
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 5f55b68ed935..5e32e02429c4 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -12465,6 +12465,9 @@ enum special_kfunc_type {
>  	KF_bpf_list_del,
>  	KF_bpf_list_front,
>  	KF_bpf_list_back,
> +	KF_bpf_list_is_first,
> +	KF_bpf_list_is_last,
> +	KF_bpf_list_empty,
>  	KF_bpf_cast_to_kern_ctx,
>  	KF_bpf_rdonly_cast,
>  	KF_bpf_rcu_read_lock,
> @@ -12527,6 +12530,9 @@ BTF_ID(func, bpf_list_pop_back)
>  BTF_ID(func, bpf_list_del)
>  BTF_ID(func, bpf_list_front)
>  BTF_ID(func, bpf_list_back)
> +BTF_ID(func, bpf_list_is_first)
> +BTF_ID(func, bpf_list_is_last)
> +BTF_ID(func, bpf_list_empty)
>  BTF_ID(func, bpf_cast_to_kern_ctx)
>  BTF_ID(func, bpf_rdonly_cast)
>  BTF_ID(func, bpf_rcu_read_lock)
> @@ -13003,7 +13009,10 @@ static bool is_bpf_list_api_kfunc(u32 btf_id)
>  	       btf_id == special_kfunc_list[KF_bpf_list_pop_back] ||
>  	       btf_id == special_kfunc_list[KF_bpf_list_del] ||
>  	       btf_id == special_kfunc_list[KF_bpf_list_front] ||
> -	       btf_id == special_kfunc_list[KF_bpf_list_back];
> +	       btf_id == special_kfunc_list[KF_bpf_list_back] ||
> +	       btf_id == special_kfunc_list[KF_bpf_list_is_first] ||
> +	       btf_id == special_kfunc_list[KF_bpf_list_is_last] ||
> +	       btf_id == special_kfunc_list[KF_bpf_list_empty];
>  }
>  
>  static bool is_bpf_rbtree_api_kfunc(u32 btf_id)
> @@ -13126,7 +13135,9 @@ static bool check_kfunc_is_graph_node_api(struct bpf_verifier_env *env,
>  		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
>  		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
>  		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_add_impl] ||
> -		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_del]);
> +		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_del] ||
> +		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_is_first] ||
> +		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_is_last]);
>  		break;
>  	case BPF_RB_NODE:
>  		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_remove] ||


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH bpf-next v7 4/5] selftests/bpf: Add test cases for bpf_list_del/add/is_first/is_last/empty
  2026-03-08 13:46 ` [PATCH bpf-next v7 4/5] selftests/bpf: Add test cases for bpf_list_del/add/is_first/is_last/empty Chengkaitao
  2026-03-08 14:25   ` bot+bpf-ci
@ 2026-03-09  6:43   ` Leon Hwang
  2026-03-10  2:05     ` Alexei Starovoitov
  1 sibling, 1 reply; 19+ messages in thread
From: Leon Hwang @ 2026-03-09  6:43 UTC (permalink / raw)
  To: Chengkaitao, martin.lau, ast, daniel, andrii, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	chengkaitao, linux-kselftest
  Cc: bpf, linux-kernel

On 8/3/26 21:46, Chengkaitao wrote:
> From: Kaitao Cheng <chengkaitao@kylinos.cn>
> 
> Extend the refcounted_kptr test: add a node to both an rbtree and a
> list, retrieve the node from the rbtree to obtain the node pointer,
> then add a new node after the first in the list, and finally use
> bpf_list_del to remove both nodes.
> 
> The test asserts that the list is non-empty after insert, asserts the
> first and last nodes after bpf_list_add, and asserts that the list is
> empty after removing both nodes.
> 
> To verify the validity of bpf_list_del/add, the test also expects the
> verifier to reject calls to bpf_list_del/add made without holding the
> spin_lock.
> 
> Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
> ---
>  .../testing/selftests/bpf/bpf_experimental.h  |  16 ++
>  .../selftests/bpf/progs/refcounted_kptr.c     | 140 ++++++++++++++++++
>  2 files changed, 156 insertions(+)
> 
> diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
> index 4b7210c318dd..005ca9d84677 100644
> --- a/tools/testing/selftests/bpf/bpf_experimental.h
> +++ b/tools/testing/selftests/bpf/bpf_experimental.h
> @@ -85,6 +85,22 @@ extern int bpf_list_push_back_impl(struct bpf_list_head *head,
>  /* Convenience macro to wrap over bpf_list_push_back_impl */
>  #define bpf_list_push_back(head, node) bpf_list_push_back_impl(head, node, NULL, 0)
>  
> +/* Description
> + *	Insert 'new' after 'prev' in the BPF linked list with head 'head'.
> + *	The bpf_spin_lock protecting the list must be held. 'prev' must already
> + *	be in that list; 'new' must not be in any list. The 'meta' and 'off'
> + *	parameters are rewritten by the verifier, no need for BPF programs to
> + *	set them.
> + * Returns
> + *	0 on success, -EINVAL if head is NULL, prev is not in the list with head,
> + *	or new is already in a list.
> + */
> +extern int bpf_list_add_impl(struct bpf_list_head *head, struct bpf_list_node *new,
> +			     struct bpf_list_node *prev, void *meta, __u64 off) __ksym;
> +
> +/* Convenience macro to wrap over bpf_list_add_impl */
> +#define bpf_list_add(head, new, prev) bpf_list_add_impl(head, new, prev, NULL, 0)
> +
>  /* Description
>   *	Remove the entry at the beginning of the BPF linked list.
>   * Returns
> diff --git a/tools/testing/selftests/bpf/progs/refcounted_kptr.c b/tools/testing/selftests/bpf/progs/refcounted_kptr.c
> index 1aca85d86aeb..c2defa991acd 100644
> --- a/tools/testing/selftests/bpf/progs/refcounted_kptr.c
> +++ b/tools/testing/selftests/bpf/progs/refcounted_kptr.c
> @@ -367,6 +367,146 @@ long insert_rbtree_and_stash__del_tree_##rem_tree(void *ctx)		\
>  INSERT_STASH_READ(true, "insert_stash_read: remove from tree");
>  INSERT_STASH_READ(false, "insert_stash_read: don't remove from tree");
>  
> +/*
> + * Insert one node in tree and list, remove it from tree, add a second node
> + * after it with bpf_list_add, check bpf_list_is_first/is_last/empty, then
> + * remove both nodes from list via bpf_list_del.
> + */
> +SEC("tc")
> +__description("list_add_del_and_check: test bpf_list_add/del/is_first/is_last/empty")
> +__success __retval(0)
> +long list_add_del_and_check(void *ctx)
> +{
> +	long err = 0;
> +	struct bpf_rb_node *rb;
> +	struct bpf_list_node *l_node, *l_node_ref;
> +	struct node_data *n_rb, *n_new, *n_new_ref;
> +

Prefer inverted Christmas tree style.

> +	err = __insert_in_tree_and_list(&head, &root, &lock);
> +	if (err)
> +		return err;
> +
> +	bpf_spin_lock(&lock);
> +	/* Test1: bpf_list_empty */
> +	if (bpf_list_empty(&head)) {
> +		bpf_spin_unlock(&lock);
> +		return -4;
> +	}
> +
> +	rb = bpf_rbtree_first(&root);
> +	if (!rb) {
> +		bpf_spin_unlock(&lock);
> +		return -5;
> +	}
> +
> +	rb = bpf_rbtree_remove(&root, rb);
> +	bpf_spin_unlock(&lock);
> +	if (!rb)
> +		return -6;
> +
> +	n_rb = container_of(rb, struct node_data, r);
> +	n_new = bpf_obj_new(typeof(*n_new));
> +	if (!n_new) {
> +		bpf_obj_drop(n_rb);
> +		return -7;
> +	}
> +	n_new_ref = bpf_refcount_acquire(n_new);
> +	if (!n_new_ref) {
> +		bpf_obj_drop(n_rb);
> +		bpf_obj_drop(n_new);
> +		return -8;
> +	}
> +
> +	bpf_spin_lock(&lock);
> +	/* Test2: bpf_list_add */
> +	if (bpf_list_add(&head, &n_new->l, &n_rb->l)) {
> +		bpf_spin_unlock(&lock);
> +		bpf_obj_drop(n_rb);
> +		bpf_obj_drop(n_new_ref);
> +		return -9;
> +	}
> +
> +	/* Test3: bpf_list_is_first/is_last */
> +	if (!bpf_list_is_first(&head, &n_rb->l) ||
> +	    !bpf_list_is_last(&head, &n_new_ref->l)) {
> +		bpf_spin_unlock(&lock);
> +		bpf_obj_drop(n_rb);
> +		bpf_obj_drop(n_new_ref);
> +		return -10;
> +	}
> +
> +	/* Test4: bpf_list_del */
> +	l_node = bpf_list_del(&head, &n_rb->l);
> +	l_node_ref = bpf_list_del(&head, &n_new_ref->l);
> +	bpf_spin_unlock(&lock);
> +	bpf_obj_drop(n_rb);
> +	bpf_obj_drop(n_new_ref);
> +
> +	if (l_node)
> +		bpf_obj_drop(container_of(l_node, struct node_data, l));
> +	else
> +		err = -11;
> +
> +	if (l_node_ref)
> +		bpf_obj_drop(container_of(l_node_ref, struct node_data, l));
> +	else
> +		err = -12;
> +
> +	bpf_spin_lock(&lock);
> +	/* Test5: bpf_list_empty */
> +	if (!bpf_list_empty(&head))
> +		err = -13;
> +	bpf_spin_unlock(&lock);
> +	return err;
> +}
> +

Could you split this test into 5 tests?

It would be easier to understand the purpose of each test if it had
fewer lines.

Thanks,
Leon

> +SEC("?tc")
> +__failure __msg("bpf_spin_lock at off=32 must be held for bpf_list_head")
> +long list_del_without_lock_fail(void *ctx)
> +{
> +	struct bpf_rb_node *rb;
> +	struct bpf_list_node *l;
> +	struct node_data *n;
> +
> +	bpf_spin_lock(&lock);
> +	rb = bpf_rbtree_first(&root);
> +	bpf_spin_unlock(&lock);
> +	if (!rb)
> +		return -1;
> +
> +	n = container_of(rb, struct node_data, r);
> +	/* Error case: delete list node without holding lock */
> +	l = bpf_list_del(&head, &n->l);
> +	if (!l)
> +		return -2;
> +	bpf_obj_drop(container_of(l, struct node_data, l));
> +
> +	return 0;
> +}
> +
> +SEC("?tc")
> +__failure __msg("bpf_spin_lock at off=32 must be held for bpf_list_head")
> +long list_add_without_lock_fail(void *ctx)
> +{
> +	struct bpf_rb_node *rb;
> +	struct bpf_list_node *l;
> +	struct node_data *n;
> +
> +	bpf_spin_lock(&lock);
> +	rb = bpf_rbtree_first(&root);
> +	l = bpf_list_front(&head);
> +	bpf_spin_unlock(&lock);
> +	if (!rb || !l)
> +		return -1;
> +
> +	n = container_of(l, struct node_data, l);
> +	/* Error case: add list node without holding lock */
> +	if (bpf_list_add(&head, &n->l, l))
> +		return -2;
> +
> +	return 0;
> +}
> +
>  SEC("tc")
>  __success
>  long rbtree_refcounted_node_ref_escapes(void *ctx)



* Re: [PATCH bpf-next v7 5/5] bpf: refactor kfunc checks using table-driven approach in verifier
  2026-03-08 13:46 ` [PATCH bpf-next v7 5/5] bpf: refactor kfunc checks using table-driven approach in verifier Chengkaitao
@ 2026-03-09  6:45   ` Leon Hwang
  2026-03-10 20:10     ` Kumar Kartikeya Dwivedi
  0 siblings, 1 reply; 19+ messages in thread
From: Leon Hwang @ 2026-03-09  6:45 UTC (permalink / raw)
  To: Chengkaitao, martin.lau, ast, daniel, andrii, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	chengkaitao, linux-kselftest
  Cc: bpf, linux-kernel

On 8/3/26 21:46, Chengkaitao wrote:
> From: Kaitao Cheng <chengkaitao@kylinos.cn>
> 
> Replace per-kfunc btf_id chains in list/rbtree/res_lock and graph node
> checks with btf_id_in_kfunc_table() and static kfunc tables for easier
> maintenance.
> 

Shouldn't such refactoring be the first patch? That means less churn;
later patches would then only need to update the tables.

However, is_bpf_rbtree_api_kfunc(), is_bpf_res_spin_lock_kfunc(), and
BPF_RB_NODE should be excluded, because you didn't touch them in this
series.

Thanks,
Leon

> Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
> ---
>  kernel/bpf/verifier.c | 97 +++++++++++++++++++++++++++++--------------
>  1 file changed, 66 insertions(+), 31 deletions(-)
> 
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 5e32e02429c4..853716f599ce 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -12602,6 +12602,62 @@ BTF_ID(func, bpf_session_is_return)
>  BTF_ID(func, bpf_stream_vprintk)
>  BTF_ID(func, bpf_stream_print_stack)
>  
> +static const enum special_kfunc_type bpf_list_api_kfuncs[] = {
> +	KF_bpf_list_push_front_impl,
> +	KF_bpf_list_push_back_impl,
> +	KF_bpf_list_add_impl,
> +	KF_bpf_list_pop_front,
> +	KF_bpf_list_pop_back,
> +	KF_bpf_list_del,
> +	KF_bpf_list_front,
> +	KF_bpf_list_back,
> +	KF_bpf_list_is_first,
> +	KF_bpf_list_is_last,
> +	KF_bpf_list_empty,
> +};
> +
> +/* Kfuncs that take a list node argument (bpf_list_node *). */
> +static const enum special_kfunc_type bpf_list_node_api_kfuncs[] = {
> +	KF_bpf_list_push_front_impl,
> +	KF_bpf_list_push_back_impl,
> +	KF_bpf_list_add_impl,
> +	KF_bpf_list_del,
> +	KF_bpf_list_is_first,
> +	KF_bpf_list_is_last,
> +};
> +
> +/* Kfuncs that take an rbtree node argument (bpf_rb_node *). */
> +static const enum special_kfunc_type bpf_rbtree_node_api_kfuncs[] = {
> +	KF_bpf_rbtree_remove,
> +	KF_bpf_rbtree_add_impl,
> +	KF_bpf_rbtree_left,
> +	KF_bpf_rbtree_right,
> +};
> +
> +static const enum special_kfunc_type bpf_rbtree_api_kfuncs[] = {
> +	KF_bpf_rbtree_add_impl,
> +	KF_bpf_rbtree_remove,
> +	KF_bpf_rbtree_first,
> +	KF_bpf_rbtree_root,
> +	KF_bpf_rbtree_left,
> +	KF_bpf_rbtree_right,
> +};
> +
> +static const enum special_kfunc_type bpf_res_spin_lock_kfuncs[] = {
> +	KF_bpf_res_spin_lock,
> +	KF_bpf_res_spin_unlock,
> +	KF_bpf_res_spin_lock_irqsave,
> +	KF_bpf_res_spin_unlock_irqrestore,
> +};
> +
> +static bool btf_id_in_kfunc_table(u32 btf_id, const enum special_kfunc_type *kfuncs, int n)
> +{
> +	for (int i = 0; i < n; i++)
> +		if (btf_id == special_kfunc_list[kfuncs[i]])
> +			return true;
> +	return false;
> +}
> +
>  static bool is_task_work_add_kfunc(u32 func_id)
>  {
>  	return func_id == special_kfunc_list[KF_bpf_task_work_schedule_signal] ||
> @@ -13002,27 +13058,14 @@ static int check_reg_allocation_locked(struct bpf_verifier_env *env, struct bpf_
>  
>  static bool is_bpf_list_api_kfunc(u32 btf_id)
>  {
> -	return btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
> -	       btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
> -	       btf_id == special_kfunc_list[KF_bpf_list_add_impl] ||
> -	       btf_id == special_kfunc_list[KF_bpf_list_pop_front] ||
> -	       btf_id == special_kfunc_list[KF_bpf_list_pop_back] ||
> -	       btf_id == special_kfunc_list[KF_bpf_list_del] ||
> -	       btf_id == special_kfunc_list[KF_bpf_list_front] ||
> -	       btf_id == special_kfunc_list[KF_bpf_list_back] ||
> -	       btf_id == special_kfunc_list[KF_bpf_list_is_first] ||
> -	       btf_id == special_kfunc_list[KF_bpf_list_is_last] ||
> -	       btf_id == special_kfunc_list[KF_bpf_list_empty];
> +	return btf_id_in_kfunc_table(btf_id, bpf_list_api_kfuncs,
> +				     ARRAY_SIZE(bpf_list_api_kfuncs));
>  }
>  
>  static bool is_bpf_rbtree_api_kfunc(u32 btf_id)
>  {
> -	return btf_id == special_kfunc_list[KF_bpf_rbtree_add_impl] ||
> -	       btf_id == special_kfunc_list[KF_bpf_rbtree_remove] ||
> -	       btf_id == special_kfunc_list[KF_bpf_rbtree_first] ||
> -	       btf_id == special_kfunc_list[KF_bpf_rbtree_root] ||
> -	       btf_id == special_kfunc_list[KF_bpf_rbtree_left] ||
> -	       btf_id == special_kfunc_list[KF_bpf_rbtree_right];
> +	return btf_id_in_kfunc_table(btf_id, bpf_rbtree_api_kfuncs,
> +				     ARRAY_SIZE(bpf_rbtree_api_kfuncs));
>  }
>  
>  static bool is_bpf_iter_num_api_kfunc(u32 btf_id)
> @@ -13040,10 +13083,8 @@ static bool is_bpf_graph_api_kfunc(u32 btf_id)
>  
>  static bool is_bpf_res_spin_lock_kfunc(u32 btf_id)
>  {
> -	return btf_id == special_kfunc_list[KF_bpf_res_spin_lock] ||
> -	       btf_id == special_kfunc_list[KF_bpf_res_spin_unlock] ||
> -	       btf_id == special_kfunc_list[KF_bpf_res_spin_lock_irqsave] ||
> -	       btf_id == special_kfunc_list[KF_bpf_res_spin_unlock_irqrestore];
> +	return btf_id_in_kfunc_table(btf_id, bpf_res_spin_lock_kfuncs,
> +				     ARRAY_SIZE(bpf_res_spin_lock_kfuncs));
>  }
>  
>  static bool is_bpf_arena_kfunc(u32 btf_id)
> @@ -13132,18 +13173,12 @@ static bool check_kfunc_is_graph_node_api(struct bpf_verifier_env *env,
>  
>  	switch (node_field_type) {
>  	case BPF_LIST_NODE:
> -		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
> -		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
> -		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_add_impl] ||
> -		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_del] ||
> -		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_is_first] ||
> -		       kfunc_btf_id == special_kfunc_list[KF_bpf_list_is_last]);
> +		ret = btf_id_in_kfunc_table(kfunc_btf_id, bpf_list_node_api_kfuncs,
> +					    ARRAY_SIZE(bpf_list_node_api_kfuncs));
>  		break;
>  	case BPF_RB_NODE:
> -		ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_remove] ||
> -		       kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_add_impl] ||
> -		       kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_left] ||
> -		       kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_right]);
> +		ret = btf_id_in_kfunc_table(kfunc_btf_id, bpf_rbtree_node_api_kfuncs,
> +					    ARRAY_SIZE(bpf_rbtree_node_api_kfuncs));
>  		break;
>  	default:
>  		verbose(env, "verifier internal error: unexpected graph node argument type %s\n",



* Re: [PATCH bpf-next v7 4/5] selftests/bpf: Add test cases for bpf_list_del/add/is_first/is_last/empty
  2026-03-09  6:43   ` Leon Hwang
@ 2026-03-10  2:05     ` Alexei Starovoitov
  0 siblings, 0 replies; 19+ messages in thread
From: Alexei Starovoitov @ 2026-03-10  2:05 UTC (permalink / raw)
  To: Leon Hwang
  Cc: Chengkaitao, Martin KaFai Lau, Alexei Starovoitov,
	Daniel Borkmann, Andrii Nakryiko, Eduard, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo, Jiri Olsa,
	Shuah Khan, Chengkaitao, open list:KERNEL SELFTEST FRAMEWORK, bpf,
	LKML

On Sun, Mar 8, 2026 at 11:44 PM Leon Hwang <leon.hwang@linux.dev> wrote:
>
> > +long list_add_del_and_check(void *ctx)
> > +{
> > +     long err = 0;
> > +     struct bpf_rb_node *rb;
> > +     struct bpf_list_node *l_node, *l_node_ref;
> > +     struct node_data *n_rb, *n_new, *n_new_ref;
> > +
>
> Prefer inverted Christmas tree style.

Just to clarify.

This is not a requirement, but a "nice to have" in new code,
as long as it doesn't interfere with the logical grouping of variable
declarations.
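
For readers unfamiliar with the convention: "inverted Christmas tree"
means ordering local declarations from the longest line to the
shortest. A minimal, hypothetical C sketch (none of these names appear
in the patch):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical example (names invented, not from the patch): locals
 * are declared longest line first, shortest last, so the left edge of
 * the declaration block forms an inverted Christmas tree.
 */
static unsigned long total_name_len(const char *names[], int n)
{
	const char *cursor = NULL;
	unsigned long total = 0;
	int i;

	for (i = 0; i < n; i++) {
		cursor = names[i];
		total += strlen(cursor);
	}
	return total;
}
```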


* Re: [PATCH bpf-next v7 1/5] bpf: Introduce the bpf_list_del kfunc.
  2026-03-09  6:33   ` Leon Hwang
@ 2026-03-10 20:10     ` Kumar Kartikeya Dwivedi
  2026-03-10 20:28       ` Kumar Kartikeya Dwivedi
  0 siblings, 1 reply; 19+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2026-03-10 20:10 UTC (permalink / raw)
  To: Leon Hwang
  Cc: Chengkaitao, martin.lau, ast, daniel, andrii, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	chengkaitao, linux-kselftest, bpf, linux-kernel

On Mon, 9 Mar 2026 at 07:34, Leon Hwang <leon.hwang@linux.dev> wrote:
>
> On 8/3/26 21:46, Chengkaitao wrote:
> > From: Kaitao Cheng <chengkaitao@kylinos.cn>
> >
> > If a user holds ownership of a node in the middle of a list, they
> > can directly remove it from the list without strictly adhering to
> > deletion rules from the head or tail.
> >
> > We have added an additional parameter bpf_list_head *head to
> > bpf_list_del, as the verifier requires the head parameter to
> > check whether the lock is being held.
> >
> > This is typically paired with bpf_refcount. After calling
> > bpf_list_del, it is generally necessary to drop the reference to
> > the list node twice to prevent reference count leaks.
> >
> > Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
> > ---
> >  kernel/bpf/helpers.c  | 30 +++++++++++++++++++++++-------
> >  kernel/bpf/verifier.c |  6 +++++-
> >  2 files changed, 28 insertions(+), 8 deletions(-)
> >
> > diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> > index 6eb6c82ed2ee..01b74c4ac00d 100644
> > --- a/kernel/bpf/helpers.c
> > +++ b/kernel/bpf/helpers.c
> > @@ -2426,20 +2426,23 @@ __bpf_kfunc int bpf_list_push_back_impl(struct bpf_list_head *head,
> >       return __bpf_list_add(n, head, true, meta ? meta->record : NULL, off);
> >  }
> >
> > -static struct bpf_list_node *__bpf_list_del(struct bpf_list_head *head, bool tail)
> > +static struct bpf_list_node *__bpf_list_del(struct bpf_list_head *head,
> > +                                         struct list_head *n)
> >  {
> > -     struct list_head *n, *h = (void *)head;
> > +     struct list_head *h = (void *)head;
> >       struct bpf_list_node_kern *node;
> >
> >       /* If list_head was 0-initialized by map, bpf_obj_init_field wasn't
> >        * called on its fields, so init here
> >        */
> > -     if (unlikely(!h->next))
> > +     if (unlikely(!h->next)) {
> >               INIT_LIST_HEAD(h);
> > -     if (list_empty(h))
> > +             return NULL;
> > +     }
> > +
> > +     if (n == h)
> >               return NULL;

Couldn't you keep the list_empty(h) check after INIT_LIST_HEAD(h) as before?
And we don't need the n == h check either. You could add a comment
that n will never match h.
The verifier should ensure it, since it distinguishes the head and node types.

Say the head is uninitialized: list_empty(h) will catch that case
before n (which might be NULL there) is ever dereferenced.

Once list_empty(h) returns false, we definitely have a non-NULL n.
We then just need to check list membership via the owner check, and
we're good. It also makes for a less noisy diff.
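
The suggested control flow can be sketched in plain userspace C, with
the kernel types stubbed out (the harness below is an invented
illustration that only mirrors the shape of __bpf_list_del, not its
exact code):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct list_head { struct list_head *next, *prev; };

struct node_kern {
	struct list_head list_head;
	void *owner;	/* which list head currently owns this node */
};

static void init_list_head(struct list_head *h) { h->next = h->prev = h; }
static int list_empty(const struct list_head *h) { return h->next == h; }

static struct node_kern *list_del_sketch(struct list_head *h,
					 struct list_head *n)
{
	struct node_kern *node;

	/* A map-allocated head may be all-zero: init it lazily. */
	if (!h->next)
		init_list_head(h);
	/* Catches both the just-initialized and the genuinely empty
	 * case before n (possibly NULL for a pop on an uninit head)
	 * is dereferenced.
	 */
	if (list_empty(h))
		return NULL;
	/* No n == h check: in the kernel the verifier distinguishes
	 * head and node types, so n can never alias h.
	 */
	node = (struct node_kern *)((char *)n -
				    offsetof(struct node_kern, list_head));
	if (node->owner != (void *)h)
		return NULL;	/* n is not on this list */
	n->prev->next = n->next;
	n->next->prev = n->prev;
	node->owner = NULL;
	return node;
}
```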

> >
> > -     n = tail ? h->prev : h->next;
> >       node = container_of(n, struct bpf_list_node_kern, list_head);
> >       if (WARN_ON_ONCE(READ_ONCE(node->owner) != head))
> >               return NULL;
>
> This refactoring is worthwhile, because "struct list_head *n" seems
> better than "bool tail".
>
> But such refactoring should be a preparatory patch. Importantly,
> refactoring should not introduce functional changes.
>

I think it's fine. Let's address this and avoid too many respins now.
It isn't a lot of code anyway. You could mention in the commit log
that you chose to refactor __bpf_list_del though.



> Thanks,
> Leon
>
> > @@ -2451,12 +2454,24 @@ static struct bpf_list_node *__bpf_list_del(struct bpf_list_head *head, bool tai
> >
> >  __bpf_kfunc struct bpf_list_node *bpf_list_pop_front(struct bpf_list_head *head)
> >  {
> > -     return __bpf_list_del(head, false);
> > +     struct list_head *h = (void *)head;
> > +
> > +     return __bpf_list_del(head, h->next);
> >  }
> >
> >  __bpf_kfunc struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head)
> >  {
> > -     return __bpf_list_del(head, true);
> > +     struct list_head *h = (void *)head;
> > +
> > +     return __bpf_list_del(head, h->prev);
> > +}
> > +
> > +__bpf_kfunc struct bpf_list_node *bpf_list_del(struct bpf_list_head *head,
> > +                                            struct bpf_list_node *node)
> > +{
> > +     struct bpf_list_node_kern *kn = (void *)node;
> > +
> > +     return __bpf_list_del(head, &kn->list_head);
> >  }
> >
> >  __bpf_kfunc struct bpf_list_node *bpf_list_front(struct bpf_list_head *head)
> > @@ -4545,6 +4560,7 @@ BTF_ID_FLAGS(func, bpf_list_push_front_impl)
> >  BTF_ID_FLAGS(func, bpf_list_push_back_impl)
> >  BTF_ID_FLAGS(func, bpf_list_pop_front, KF_ACQUIRE | KF_RET_NULL)
> >  BTF_ID_FLAGS(func, bpf_list_pop_back, KF_ACQUIRE | KF_RET_NULL)
> > +BTF_ID_FLAGS(func, bpf_list_del, KF_ACQUIRE | KF_RET_NULL)
> >  BTF_ID_FLAGS(func, bpf_list_front, KF_RET_NULL)
> >  BTF_ID_FLAGS(func, bpf_list_back, KF_RET_NULL)
> >  BTF_ID_FLAGS(func, bpf_task_acquire, KF_ACQUIRE | KF_RCU | KF_RET_NULL)
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 67c09b43a497..c9557d3fb8dd 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -12461,6 +12461,7 @@ enum special_kfunc_type {
> >       KF_bpf_list_push_back_impl,
> >       KF_bpf_list_pop_front,
> >       KF_bpf_list_pop_back,
> > +     KF_bpf_list_del,
> >       KF_bpf_list_front,
> >       KF_bpf_list_back,
> >       KF_bpf_cast_to_kern_ctx,
> > @@ -12521,6 +12522,7 @@ BTF_ID(func, bpf_list_push_front_impl)
> >  BTF_ID(func, bpf_list_push_back_impl)
> >  BTF_ID(func, bpf_list_pop_front)
> >  BTF_ID(func, bpf_list_pop_back)
> > +BTF_ID(func, bpf_list_del)
> >  BTF_ID(func, bpf_list_front)
> >  BTF_ID(func, bpf_list_back)
> >  BTF_ID(func, bpf_cast_to_kern_ctx)
> > @@ -12996,6 +12998,7 @@ static bool is_bpf_list_api_kfunc(u32 btf_id)
> >              btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
> >              btf_id == special_kfunc_list[KF_bpf_list_pop_front] ||
> >              btf_id == special_kfunc_list[KF_bpf_list_pop_back] ||
> > +            btf_id == special_kfunc_list[KF_bpf_list_del] ||
> >              btf_id == special_kfunc_list[KF_bpf_list_front] ||
> >              btf_id == special_kfunc_list[KF_bpf_list_back];
> >  }
> > @@ -13118,7 +13121,8 @@ static bool check_kfunc_is_graph_node_api(struct bpf_verifier_env *env,
> >       switch (node_field_type) {
> >       case BPF_LIST_NODE:
> >               ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
> > -                    kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl]);
> > +                    kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
> > +                    kfunc_btf_id == special_kfunc_list[KF_bpf_list_del]);
> >               break;
> >       case BPF_RB_NODE:
> >               ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_rbtree_remove] ||
>
>


* Re: [PATCH bpf-next v7 2/5] bpf: Add bpf_list_add_impl to insert node after a given list node
  2026-03-08 13:46 ` [PATCH bpf-next v7 2/5] bpf: Add bpf_list_add_impl to insert node after a given list node Chengkaitao
  2026-03-08 14:25   ` bot+bpf-ci
  2026-03-09  6:34   ` Leon Hwang
@ 2026-03-10 20:10   ` Kumar Kartikeya Dwivedi
  2 siblings, 0 replies; 19+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2026-03-10 20:10 UTC (permalink / raw)
  To: Chengkaitao
  Cc: martin.lau, ast, daniel, andrii, eddyz87, song, yonghong.song,
	john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah, chengkaitao,
	linux-kselftest, bpf, linux-kernel

On Sun, 8 Mar 2026 at 14:47, Chengkaitao <pilgrimtao@gmail.com> wrote:
>
> From: Kaitao Cheng <chengkaitao@kylinos.cn>
>
> Add a new kfunc bpf_list_add_impl(head, new, prev, meta, off) that
> inserts 'new' after 'prev' in the BPF linked list. Both must be in
> the same list; 'prev' must already be in the list. The new node must
> be an owning reference (e.g. from bpf_obj_new); the kfunc consumes
> that reference and the node becomes non-owning once inserted.
>
> We have added an additional parameter bpf_list_head *head to
> bpf_list_add_impl, as the verifier requires the head parameter to
> check whether the lock is being held.
>
> Returns 0 on success, -EINVAL if 'prev' is not in a list or 'new'
> is already in a list (or duplicate insertion). On failure, the
> kernel drops the passed-in node.
>
> Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
> ---
>  kernel/bpf/helpers.c  | 56 ++++++++++++++++++++++++++++++-------------
>  kernel/bpf/verifier.c | 13 ++++++++--
>  2 files changed, 50 insertions(+), 19 deletions(-)
>
> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> index 01b74c4ac00d..407520fde668 100644
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c
> @@ -2379,11 +2379,12 @@ __bpf_kfunc void *bpf_refcount_acquire_impl(void *p__refcounted_kptr, void *meta
>         return (void *)p__refcounted_kptr;
>  }
>
> -static int __bpf_list_add(struct bpf_list_node_kern *node,
> -                         struct bpf_list_head *head,
> -                         bool tail, struct btf_record *rec, u64 off)
> +static int __bpf_list_add(struct bpf_list_head *head,
> +                         struct bpf_list_node_kern *new,
> +                         struct list_head *prev,
> +                         struct btf_record *rec, u64 off)
>  {
> -       struct list_head *n = &node->list_head, *h = (void *)head;
> +       struct list_head *n = &new->list_head, *h = (void *)head;
>
>         /* If list_head was 0-initialized by map, bpf_obj_init_field wasn't
>          * called on its fields, so init here
> @@ -2391,39 +2392,59 @@ static int __bpf_list_add(struct bpf_list_node_kern *node,
>         if (unlikely(!h->next))
>                 INIT_LIST_HEAD(h);
>
> -       /* node->owner != NULL implies !list_empty(n), no need to separately
> +       /* When prev is not the list head, it must be a node in this list. */
> +       if (prev != h && WARN_ON_ONCE(READ_ONCE(container_of(
> +           prev, struct bpf_list_node_kern, list_head)->owner) != head))
> +               goto fail;

There is a slight issue here: if head is not initialized, prev will be
NULL, since the caller passes h->prev, so we'll do a bad deref. I
think we should pass a pointer to prev (list_head **) and load it only
after INIT_LIST_HEAD(h) is done. Otherwise the prev != h check itself
looks ok (since we want to establish whether prev is the head or a
node).

Please also add a test for such a case to catch this sort of bug: you
can confirm that it crashes with the patch unchanged, and no longer
does with the fix.
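
The hazard and the proposed list_head ** fix can be sketched in
userspace C (stubbed types; the function below is an invented
illustration, not the patch code):

```c
#include <assert.h>
#include <string.h>

struct list_head { struct list_head *next, *prev; };

static void init_list_head(struct list_head *h) { h->next = h->prev = h; }

/* Fixed shape: take a pointer to the prev slot and load it only after
 * the lazy init, so a push_back on a zeroed head never dereferences a
 * NULL h->prev computed by the caller.
 */
static int list_add_sketch(struct list_head *h, struct list_head *n,
			   struct list_head **pprev)
{
	struct list_head *prev;

	if (!h->next)
		init_list_head(h);
	prev = *pprev;		/* safe: h->prev/h->next now valid */
	n->next = prev->next;
	n->prev = prev;
	prev->next->prev = n;
	prev->next = n;
	return 0;
}
```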

> +
> +       /* new->owner != NULL implies !list_empty(n), no need to separately
>          * check the latter
>          */
> -       if (cmpxchg(&node->owner, NULL, BPF_PTR_POISON)) {
> -               /* Only called from BPF prog, no need to migrate_disable */
> -               __bpf_obj_drop_impl((void *)n - off, rec, false);
> -               return -EINVAL;
> -       }
> -
> -       tail ? list_add_tail(n, h) : list_add(n, h);
> -       WRITE_ONCE(node->owner, head);
> +       if (cmpxchg(&new->owner, NULL, BPF_PTR_POISON))
> +               goto fail;
>
> +       list_add(n, prev);
> +       WRITE_ONCE(new->owner, head);
>         return 0;
> +
> +fail:
> +       /* Only called from BPF prog, no need to migrate_disable */
> +       __bpf_obj_drop_impl((void *)n - off, rec, false);
> +       return -EINVAL;
>  }
>
> [...]
> @@ -12996,6 +12998,7 @@ static bool is_bpf_list_api_kfunc(u32 btf_id)
>  {
>         return btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
>                btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
> +              btf_id == special_kfunc_list[KF_bpf_list_add_impl] ||
>                btf_id == special_kfunc_list[KF_bpf_list_pop_front] ||
>                btf_id == special_kfunc_list[KF_bpf_list_pop_back] ||
>                btf_id == special_kfunc_list[KF_bpf_list_del] ||
> @@ -13122,6 +13125,7 @@ static bool check_kfunc_is_graph_node_api(struct bpf_verifier_env *env,
>         case BPF_LIST_NODE:
>                 ret = (kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
>                        kfunc_btf_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
> +                      kfunc_btf_id == special_kfunc_list[KF_bpf_list_add_impl] ||
>                        kfunc_btf_id == special_kfunc_list[KF_bpf_list_del]);
>                 break;
>         case BPF_RB_NODE:
> @@ -14264,6 +14268,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>
>         if (meta.func_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
>             meta.func_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
> +           meta.func_id == special_kfunc_list[KF_bpf_list_add_impl] ||
>             meta.func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
>                 release_ref_obj_id = regs[BPF_REG_2].ref_obj_id;
>                 insn_aux->insert_off = regs[BPF_REG_2].off;

Please always rebase patches on bpf-next/master before sending; this
one didn't apply cleanly.


> @@ -23230,13 +23235,17 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
>                 *cnt = 3;
>         } else if (desc->func_id == special_kfunc_list[KF_bpf_list_push_back_impl] ||
>                    desc->func_id == special_kfunc_list[KF_bpf_list_push_front_impl] ||
> +                  desc->func_id == special_kfunc_list[KF_bpf_list_add_impl] ||
>                    desc->func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
>                 struct btf_struct_meta *kptr_struct_meta = env->insn_aux_data[insn_idx].kptr_struct_meta;
>                 int struct_meta_reg = BPF_REG_3;
>                 int node_offset_reg = BPF_REG_4;
>
> -               /* rbtree_add has extra 'less' arg, so args-to-fixup are in diff regs */
> -               if (desc->func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
> +               /* list/rbtree_add_impl have an extra arg (prev/less),
> +                * so args-to-fixup are in different regs.
> +                */
> +               if (desc->func_id == special_kfunc_list[KF_bpf_list_add_impl] ||
> +                   desc->func_id == special_kfunc_list[KF_bpf_rbtree_add_impl]) {
>                         struct_meta_reg = BPF_REG_4;
>                         node_offset_reg = BPF_REG_5;
>                 }
> --
> 2.50.1 (Apple Git-155)
>
>


* Re: [PATCH bpf-next v7 5/5] bpf: refactor kfunc checks using table-driven approach in verifier
  2026-03-09  6:45   ` Leon Hwang
@ 2026-03-10 20:10     ` Kumar Kartikeya Dwivedi
  2026-03-11  5:36       ` Leon Hwang
  0 siblings, 1 reply; 19+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2026-03-10 20:10 UTC (permalink / raw)
  To: Leon Hwang
  Cc: Chengkaitao, martin.lau, ast, daniel, andrii, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	chengkaitao, linux-kselftest, bpf, linux-kernel

On Mon, 9 Mar 2026 at 07:45, Leon Hwang <leon.hwang@linux.dev> wrote:
>
> On 8/3/26 21:46, Chengkaitao wrote:
> > From: Kaitao Cheng <chengkaitao@kylinos.cn>
> >
> > Replace per-kfunc btf_id chains in list/rbtree/res_lock and graph node
> > checks with btf_id_in_kfunc_table() and static kfunc tables for easier
> > maintenance.
> >
>
> Such refactoring should be the first patch? Less churn. Then, update the
> list only.
>
> However, is_bpf_rbtree_api_kfunc(), is_bpf_res_spin_lock_kfunc(), and
> BPF_RB_NODE should be excluded, because you didn't touch them in this
> series.

I think moving the cleanup earlier makes some sense, but why exclude
the rbtree and res_spin_lock kfuncs?
It looks better to me to convert them all.

>
> Thanks,
> Leon
>
> [...]


* Re: [PATCH bpf-next v7 1/5] bpf: Introduce the bpf_list_del kfunc.
  2026-03-10 20:10     ` Kumar Kartikeya Dwivedi
@ 2026-03-10 20:28       ` Kumar Kartikeya Dwivedi
  0 siblings, 0 replies; 19+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2026-03-10 20:28 UTC (permalink / raw)
  To: Leon Hwang
  Cc: Chengkaitao, martin.lau, ast, daniel, andrii, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	chengkaitao, linux-kselftest, bpf, linux-kernel

On Tue, 10 Mar 2026 at 21:10, Kumar Kartikeya Dwivedi <memxor@gmail.com> wrote:
>
> On Mon, 9 Mar 2026 at 07:34, Leon Hwang <leon.hwang@linux.dev> wrote:
> >
> > On 8/3/26 21:46, Chengkaitao wrote:
> > > From: Kaitao Cheng <chengkaitao@kylinos.cn>
> > >
> > > If a user holds ownership of a node in the middle of a list, they
> > > can directly remove it from the list without strictly adhering to
> > > deletion rules from the head or tail.
> > >
> > > We have added an additional parameter bpf_list_head *head to
> > > bpf_list_del, as the verifier requires the head parameter to
> > > check whether the lock is being held.
> > >
> > > This is typically paired with bpf_refcount. After calling
> > > bpf_list_del, it is generally necessary to drop the reference to
> > > the list node twice to prevent reference count leaks.
> > >
> > > Signed-off-by: Kaitao Cheng <chengkaitao@kylinos.cn>
> > > ---
> > >  kernel/bpf/helpers.c  | 30 +++++++++++++++++++++++-------
> > >  kernel/bpf/verifier.c |  6 +++++-
> > >  2 files changed, 28 insertions(+), 8 deletions(-)
> > >
> > > diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> > > index 6eb6c82ed2ee..01b74c4ac00d 100644
> > > --- a/kernel/bpf/helpers.c
> > > +++ b/kernel/bpf/helpers.c
> > > @@ -2426,20 +2426,23 @@ __bpf_kfunc int bpf_list_push_back_impl(struct bpf_list_head *head,
> > >       return __bpf_list_add(n, head, true, meta ? meta->record : NULL, off);
> > >  }
> > >
> > > -static struct bpf_list_node *__bpf_list_del(struct bpf_list_head *head, bool tail)
> > > +static struct bpf_list_node *__bpf_list_del(struct bpf_list_head *head,
> > > +                                         struct list_head *n)
> > >  {
> > > -     struct list_head *n, *h = (void *)head;
> > > +     struct list_head *h = (void *)head;
> > >       struct bpf_list_node_kern *node;
> > >
> > >       /* If list_head was 0-initialized by map, bpf_obj_init_field wasn't
> > >        * called on its fields, so init here
> > >        */
> > > -     if (unlikely(!h->next))
> > > +     if (unlikely(!h->next)) {
> > >               INIT_LIST_HEAD(h);
> > > -     if (list_empty(h))
> > > +             return NULL;
> > > +     }
> > > +
> > > +     if (n == h)
> > >               return NULL;
>
> Couldn't you keep the list_empty(h) check after INIT_LIST_HEAD(h) as before?
> And we don't need the n == h check either. You could add a comment that n
> will never match h.
> The verifier should ensure it, since it distinguishes the head and node types.
>
> Let's say the head is uninit. Then list_empty(h) will catch that case.
> Otherwise n might be NULL.
>
> After list_empty(h) says false, we definitely have non-null n.
> We just need to check list membership using the owner check, and then
> we're good.
> It will be a less noisy diff.
>
> > >
> > > -     n = tail ? h->prev : h->next;
> > >       node = container_of(n, struct bpf_list_node_kern, list_head);
> > >       if (WARN_ON_ONCE(READ_ONCE(node->owner) != head))
> > >               return NULL;
> >
> > This refactoring is worthwhile, because "struct list_head *n" seems
> > better than "bool tail".
> >
> > But, such refactoring should be a preparatory patch. Importantly,
> > refactoring should not introduce functional changes.
> >
>
> I think it's fine. Let's address this and avoid too many respins now.
> It isn't a lot of code anyway. You could mention in the commit log
> that you chose to refactor __bpf_list_del though.
>

Just to make it clearer, since I feel the language above might be a
bit confusing:
Let's not add more churn and just fix the issues in the existing set,
and try to move towards landing this.
We are quite close, and it's been 7 iterations already.

As for the non-owning refs issue pointed out by the AI bot, you can
address it as a follow-up on top once this is accepted.

> [...]


* Re: [PATCH bpf-next v7 5/5] bpf: refactor kfunc checks using table-driven approach in verifier
  2026-03-10 20:10     ` Kumar Kartikeya Dwivedi
@ 2026-03-11  5:36       ` Leon Hwang
  0 siblings, 0 replies; 19+ messages in thread
From: Leon Hwang @ 2026-03-11  5:36 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: Chengkaitao, martin.lau, ast, daniel, andrii, eddyz87, song,
	yonghong.song, john.fastabend, kpsingh, sdf, haoluo, jolsa, shuah,
	chengkaitao, linux-kselftest, bpf, linux-kernel

On 11/3/26 04:10, Kumar Kartikeya Dwivedi wrote:
> On Mon, 9 Mar 2026 at 07:45, Leon Hwang <leon.hwang@linux.dev> wrote:
>>
>> On 8/3/26 21:46, Chengkaitao wrote:
>>> From: Kaitao Cheng <chengkaitao@kylinos.cn>
>>>
>>> Replace per-kfunc btf_id chains in list/rbtree/res_lock and graph node
>>> checks with btf_id_in_kfunc_table() and static kfunc tables for easier
>>> maintenance.
>>>
>>
>> Shouldn't such refactoring be the first patch? That would mean less
>> churn; the later patches would then only need to update the tables.
>>
>> However, is_bpf_rbtree_api_kfunc(), is_bpf_res_spin_lock_kfunc(), and
>> BPF_RB_NODE should be excluded, because you didn't touch them in this
>> series.
> 
> I think moving the cleanup earlier makes some sense, but why exclude
> the rbtree and res_spin_lock cases?
> It looks better to me to do them all.
> 

Okay, keeping them is acceptable to me.

Thanks,
Leon



end of thread, other threads:[~2026-03-11  5:36 UTC | newest]

Thread overview: 19+ messages
2026-03-08 13:46 [PATCH bpf-next v7 0/5] bpf: Extend the bpf_list family of APIs Chengkaitao
2026-03-08 13:46 ` [PATCH bpf-next v7 1/5] bpf: Introduce the bpf_list_del kfunc Chengkaitao
2026-03-09  6:33   ` Leon Hwang
2026-03-10 20:10     ` Kumar Kartikeya Dwivedi
2026-03-10 20:28       ` Kumar Kartikeya Dwivedi
2026-03-08 13:46 ` [PATCH bpf-next v7 2/5] bpf: Add bpf_list_add_impl to insert node after a given list node Chengkaitao
2026-03-08 14:25   ` bot+bpf-ci
2026-03-09  6:34   ` Leon Hwang
2026-03-10 20:10   ` Kumar Kartikeya Dwivedi
2026-03-08 13:46 ` [PATCH bpf-next v7 3/5] bpf: add bpf_list_is_first/last/empty kfuncs Chengkaitao
2026-03-09  6:42   ` Leon Hwang
2026-03-08 13:46 ` [PATCH bpf-next v7 4/5] selftests/bpf: Add test cases for bpf_list_del/add/is_first/is_last/empty Chengkaitao
2026-03-08 14:25   ` bot+bpf-ci
2026-03-09  6:43   ` Leon Hwang
2026-03-10  2:05     ` Alexei Starovoitov
2026-03-08 13:46 ` [PATCH bpf-next v7 5/5] bpf: refactor kfunc checks using table-driven approach in verifier Chengkaitao
2026-03-09  6:45   ` Leon Hwang
2026-03-10 20:10     ` Kumar Kartikeya Dwivedi
2026-03-11  5:36       ` Leon Hwang

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox