From: sashiko-bot@kernel.org
To: "Kaitao cheng" <kaitao.cheng@linux.dev>
Cc: bpf@vger.kernel.org
Subject: Re: [PATCH RESEND bpf-next v10 2/8] bpf: clear list node owner and unlink before drop
Date: Wed, 13 May 2026 06:02:21 +0000	[thread overview]
Message-ID: <20260513060221.DB2C7C2BCB7@smtp.kernel.org> (raw)
In-Reply-To: <20260512055919.95716-3-kaitao.cheng@linux.dev>

Thank you for your contribution! Sashiko AI review found 3 potential issue(s) to consider:
- [Critical] Race condition causing list corruption, infinite loop, and Use-After-Free in `bpf_list_head_free`.
- [Critical] Incomplete fix leaving `bpf_rb_root_free` vulnerable to Use-After-Free via dangling `owner` pointers.
- [High] Unbounded O(N) latency under IRQ-disabled spinlock in `bpf_list_head_free`.
--

commit 1bc93a76541e332d30d80430964ef29571e93654
Author: Kaitao Cheng <chengkaitao@kylinos.cn>

bpf: clear list node owner and unlink before drop

This commit updates bpf_list_head_free to clear the owner pointer of each
node and use a local drain list before dropping the objects, aiming to
prevent concurrent readers from seeing an invalid next node.

> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> --- a/kernel/bpf/helpers.c
> +++ b/kernel/bpf/helpers.c

[ ... ]

> @@ -2261,20 +2262,23 @@ void bpf_list_head_free(const struct btf_field *field, void *list_head,
>  	__bpf_spin_lock_irqsave(spin_lock);
>  	if (!head->next || list_empty(head))
>  		goto unlock;
> -	head = head->next;
> +	list_for_each_safe(pos, n, head) {
> +		WRITE_ONCE(container_of(pos,
> +			struct bpf_list_node_kern, list_head)->owner, NULL);
> +		list_move_tail(pos, &drain);
> +	}
>  unlock:

Does this introduce an unbounded delay with interrupts disabled?

The original code detached the list head in O(1). This patch replaces that
with an O(N) loop that runs inside __bpf_spin_lock_irqsave(), i.e. with the
lock held and interrupts disabled. Since BPF lists can grow quite large,
could this trigger soft lockups?

This also appears to contradict the comment just above the lock, which
explicitly says the actual list draining should happen outside the lock so
it is not held for too long.

> -	INIT_LIST_HEAD(orig_head);
> +	INIT_LIST_HEAD(head);
>  	__bpf_spin_unlock_irqrestore(spin_lock);
>  
> -	while (head != orig_head) {
> -		void *obj = head;
> -
> -		obj -= field->graph_root.node_offset;
> -		head = head->next;
> +	while (!list_empty(&drain)) {
> +		pos = drain.next;
> +		list_del_init(pos);

Can this lead to list corruption and an infinite loop?

By setting the node's owner to NULL in the previous loop under the lock,
the node appears unlinked to the BPF subsystem. Once the lock is released
at the unlock label, a concurrent BPF program holding a reference to this
node might successfully pass it to bpf_list_push_front() or similar,
since the ownership check will now succeed.

If that happens, list_add() will overwrite the node's next and prev
pointers. However, the local drain list header isn't updated. The
drain.next pointer will still point to this hijacked node, and
list_empty(&drain) will remain false forever, causing an infinite loop
that repeatedly unlinks and drops the exact same node.

[ ... ]

On a related note, does bpf_rb_root_free() suffer from a similar issue
regarding dangling owner pointers?

While not introduced by this patch, if the memory backing a surviving RB
node's original bpf_rb_root is freed and then reallocated as another
bpf_rb_root, the node's stale owner pointer may compare equal to the new
root.

A concurrent BPF program could then pass this node to bpf_rbtree_remove(),
bypassing the ownership check and accessing freed memory during
rb_erase_cached().

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260512055919.95716-1-kaitao.cheng@linux.dev?part=2

Thread overview: 18+ messages
2026-05-12  5:59 [PATCH RESEND bpf-next v10 0/8] bpf: Extend the bpf_list family of APIs Kaitao cheng
2026-05-12  5:59 ` [PATCH RESEND bpf-next v10 1/8] bpf: refactor __bpf_list_del to take list node pointer Kaitao cheng
2026-05-12  6:41   ` bot+bpf-ci
2026-05-12  8:55     ` Kaitao Cheng
2026-05-12  5:59 ` [PATCH RESEND bpf-next v10 2/8] bpf: clear list node owner and unlink before drop Kaitao cheng
2026-05-12  6:41   ` bot+bpf-ci
2026-05-13  6:02   ` sashiko-bot [this message]
2026-05-12  5:59 ` [PATCH RESEND bpf-next v10 3/8] bpf: Introduce the bpf_list_del kfunc Kaitao cheng
2026-05-12  6:41   ` bot+bpf-ci
2026-05-12  9:36     ` Kaitao Cheng
2026-05-12  5:59 ` [PATCH RESEND bpf-next v10 4/8] bpf: refactor __bpf_list_add to take insertion point via **prev_ptr Kaitao cheng
2026-05-12  5:59 ` [PATCH RESEND bpf-next v10 5/8] bpf: Add bpf_list_add to insert node after a given list node Kaitao cheng
2026-05-12  6:41   ` bot+bpf-ci
2026-05-12 12:05     ` Kaitao Cheng
2026-05-12  5:59 ` [PATCH RESEND bpf-next v10 6/8] bpf: add bpf_list_is_first/last/empty kfuncs Kaitao cheng
2026-05-12  5:59 ` [PATCH RESEND bpf-next v10 7/8] bpf: allow non-owning list-node args via __nonown_allowed Kaitao cheng
2026-05-12  6:41   ` bot+bpf-ci
2026-05-12  5:59 ` [PATCH RESEND bpf-next v10 8/8] selftests/bpf: Add test cases for bpf_list_del/add/is_first/is_last/empty Kaitao cheng
