From: Jiayuan Chen <jiayuan.chen@linux.dev>
To: Michal Luczaj <mhal@rbox.co>,
John Fastabend <john.fastabend@gmail.com>,
Jakub Sitnicki <jakub@cloudflare.com>,
Eric Dumazet <edumazet@google.com>,
Kuniyuki Iwashima <kuniyu@google.com>,
Paolo Abeni <pabeni@redhat.com>,
Willem de Bruijn <willemb@google.com>,
"David S. Miller" <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>, Simon Horman <horms@kernel.org>,
Yonghong Song <yhs@fb.com>, Andrii Nakryiko <andrii@kernel.org>,
Alexei Starovoitov <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
Martin KaFai Lau <martin.lau@linux.dev>,
Eduard Zingerman <eddyz87@gmail.com>, Song Liu <song@kernel.org>,
Yonghong Song <yonghong.song@linux.dev>,
KP Singh <kpsingh@kernel.org>,
Stanislav Fomichev <sdf@fomichev.me>, Hao Luo <haoluo@google.com>,
Jiri Olsa <jolsa@kernel.org>, Shuah Khan <shuah@kernel.org>,
Cong Wang <cong.wang@bytedance.com>
Cc: netdev@vger.kernel.org, bpf@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org
Subject: Re: [PATCH bpf v3 3/5] bpf, sockmap: Fix af_unix iter deadlock
Date: Fri, 6 Mar 2026 14:04:09 +0800 [thread overview]
Message-ID: <b2218c30-a704-4395-a67e-95c62b75586f@linux.dev> (raw)
In-Reply-To: <20260306-unix-proto-update-null-ptr-deref-v3-3-2f0c7410c523@rbox.co>
On 3/6/26 7:30 AM, Michal Luczaj wrote:
> bpf_iter_unix_seq_show() may deadlock when lock_sock_fast() takes the fast
> path and the iter prog attempts to update a sockmap. Which ends up spinning
> at sock_map_update_elem()'s bh_lock_sock():
>
> WARNING: possible recursive locking detected
> test_progs/1393 is trying to acquire lock:
> ffff88811ec25f58 (slock-AF_UNIX){+...}-{3:3}, at: sock_map_update_elem+0xdb/0x1f0
>
> but task is already holding lock:
> ffff88811ec25f58 (slock-AF_UNIX){+...}-{3:3}, at: __lock_sock_fast+0x37/0xe0
>
> other info that might help us debug this:
> Possible unsafe locking scenario:
>
> CPU0
> ----
> lock(slock-AF_UNIX);
> lock(slock-AF_UNIX);
>
> *** DEADLOCK ***
>
> May be due to missing lock nesting notation
>
> 4 locks held by test_progs/1393:
> #0: ffff88814b59c790 (&p->lock){+.+.}-{4:4}, at: bpf_seq_read+0x59/0x10d0
> #1: ffff88811ec25fd8 (sk_lock-AF_UNIX){+.+.}-{0:0}, at: bpf_seq_read+0x42c/0x10d0
> #2: ffff88811ec25f58 (slock-AF_UNIX){+...}-{3:3}, at: __lock_sock_fast+0x37/0xe0
> #3: ffffffff85a6a7c0 (rcu_read_lock){....}-{1:3}, at: bpf_iter_run_prog+0x51d/0xb00
>
> Call Trace:
> dump_stack_lvl+0x5d/0x80
> print_deadlock_bug.cold+0xc0/0xce
> __lock_acquire+0x130f/0x2590
> lock_acquire+0x14e/0x2b0
> _raw_spin_lock+0x30/0x40
> sock_map_update_elem+0xdb/0x1f0
> bpf_prog_2d0075e5d9b721cd_dump_unix+0x55/0x4f4
> bpf_iter_run_prog+0x5b9/0xb00
> bpf_iter_unix_seq_show+0x1f7/0x2e0
> bpf_seq_read+0x42c/0x10d0
> vfs_read+0x171/0xb20
> ksys_read+0xff/0x200
> do_syscall_64+0x6b/0x3a0
> entry_SYSCALL_64_after_hwframe+0x76/0x7e
>
> Suggested-by: Kuniyuki Iwashima <kuniyu@google.com>
> Suggested-by: Martin KaFai Lau <martin.lau@linux.dev>
> Fixes: 2c860a43dd77 ("bpf: af_unix: Implement BPF iterator for UNIX domain socket.")
> Signed-off-by: Michal Luczaj <mhal@rbox.co>
> ---
> net/unix/af_unix.c | 7 +++----
> 1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
> index 3756a93dc63a..3d2cfb4ecbcd 100644
> --- a/net/unix/af_unix.c
> +++ b/net/unix/af_unix.c
> @@ -3729,15 +3729,14 @@ static int bpf_iter_unix_seq_show(struct seq_file *seq, void *v)
> struct bpf_prog *prog;
> struct sock *sk = v;
> uid_t uid;
> - bool slow;
> int ret;
>
> if (v == SEQ_START_TOKEN)
> return 0;
>
> - slow = lock_sock_fast(sk);
> + lock_sock(sk);
>
> - if (unlikely(sk_unhashed(sk))) {
> + if (unlikely(sock_flag(sk, SOCK_DEAD))) {
> ret = SEQ_SKIP;
> goto unlock;
> }
Switching to lock_sock() fixes the deadlock, but it does not provide mutual
exclusion with unix_release_sock(), which serializes only on unix_state_lock()
and never takes lock_sock(). So the iterator can still run the BPF prog on a
dying socket while unix_release_sock() is tearing it down on another CPU.
Both setting SOCK_DEAD and clearing unix_peer(sk) happen under
unix_state_lock() in unix_release_sock(). Without taking unix_state_lock()
around the SOCK_DEAD check, there is a window:
    iter                                unix_release_sock()
    ----                                -------------------
    lock_sock(sk)
    SOCK_DEAD == 0 (check passes)
                                        unix_state_lock(sk)
                                        unix_peer(sk) = NULL
                                        sock_set_flag(sk, SOCK_DEAD)
                                        unix_state_unlock(sk)
    BPF prog runs
      → accesses unix_peer(sk) == NULL → crash
This was not raised in the v2 discussion.
The natural fix is to check SOCK_DEAD under unix_state_lock(). However,
holding unix_state_lock() throughout BPF prog execution would conflict with
patch 5: sock_map_sk_acquire_fast() also takes unix_state_lock() for AF_UNIX
sockets, resulting in a recursive spinlock deadlock.
Kuniyuki, Martin — what is the right approach here?
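For concreteness, an untested sketch of the shape I mean (not a proposed
patch; it only illustrates where the check would move and where the conflict
with patch 5 shows up):

```c
	lock_sock(sk);

	/* Order the liveness check against unix_release_sock(), which
	 * sets SOCK_DEAD and clears unix_peer(sk) under unix_state_lock().
	 */
	unix_state_lock(sk);
	if (unlikely(sock_flag(sk, SOCK_DEAD))) {
		unix_state_unlock(sk);
		ret = SEQ_SKIP;
		goto unlock;
	}
	/* Either keep unix_state_lock() held across the prog, which
	 * recurses on the same spinlock once sock_map_sk_acquire_fast()
	 * from patch 5 takes it again, or drop it here, which reopens
	 * the window for unix_peer(sk) to go NULL under the prog.
	 */
	unix_state_unlock(sk);
```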
> @@ -3747,7 +3746,7 @@ static int bpf_iter_unix_seq_show(struct seq_file *seq, void *v)
> prog = bpf_iter_get_info(&meta, false);
> ret = unix_prog_seq_show(prog, &meta, v, uid);
> unlock:
> - unlock_sock_fast(sk, slow);
> + release_sock(sk);
> return ret;
> }
>
>