* [PATCH bpf-next v6] bpf, sockmap: avoid using sk_socket after free when sending
@ 2025-05-16 14:17 Jiayuan Chen
2025-05-19 19:52 ` John Fastabend
` (2 more replies)
0 siblings, 3 replies; 6+ messages in thread
From: Jiayuan Chen @ 2025-05-16 14:17 UTC (permalink / raw)
To: bpf
Cc: Jiayuan Chen, Michal Luczaj, John Fastabend, Jakub Sitnicki,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Simon Horman, Thadeu Lima de Souza Cascardo, netdev, linux-kernel
The sk->sk_socket is not locked or referenced in the backlog thread, and
during the call to skb_send_sock() there is a race condition with the
release of sk_socket. All types of sockets (tcp/udp/unix/vsock) are
affected.

Race conditions:
'''
CPU0                                 CPU1

backlog::skb_send_sock
  sendmsg_unlocked
    sock_sendmsg
      sock_sendmsg_nosec
                                     close(fd):
                                       ...
                                       ops->release() -> sock_map_close()
                                       sk_socket->ops = NULL
                                       free(socket)
      sock->ops->sendmsg
            ^
            panic here
'''

The ref of psock becomes 0 after sock_map_close() is executed.
'''
void sock_map_close()
{
	...
	if (likely(psock)) {
		...
		// !! here we remove psock and the ref of psock becomes 0
		sock_map_remove_links(sk, psock)
		psock = sk_psock_get(sk);
		if (unlikely(!psock))
			goto no_psock;                   <=== control jumps here via goto
		...
		cancel_delayed_work_sync(&psock->work);  <=== not executed
		sk_psock_put(sk, psock);
		...
	}
'''

Based on the fact that we already wait for the workqueue to finish in
sock_map_close() if psock is held, we simply increase the psock
reference count to avoid the race.

With this patch, if the backlog thread is running, sock_map_close() will
wait for the backlog thread to complete and cancel all pending work.

If no backlog thread is running, any pending work that hasn't started by
then will bail out early because its sk_psock_get() fails, as the psock
reference count has already been zeroed, and sk_psock_drop() will cancel
all queued work via cancel_delayed_work_sync().

In summary, we require synchronization to coordinate the backlog thread
and the close() thread.
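
The interplay between the two paths, as a rough illustrative sketch
(condensed from net/core/skmsg.c and net/core/sock_map.c, not verbatim
kernel code):
'''
/* Backlog side (this patch): pin the psock for the duration of the send. */
static void sk_psock_backlog(struct work_struct *work)
{
	struct sk_psock *psock = container_of(to_delayed_work(work),
					      struct sk_psock, work);

	if (!sk_psock_get(psock->sk))	/* refcnt already zero: close() won */
		return;

	/* ... drain psock->ingress_skb via skb_send_sock() ... */

	sk_psock_put(psock->sk, psock);	/* last ref gone -> destroy is scheduled */
}

/* Close side (existing code): while the backlog still holds a reference,
 * sk_psock_get() succeeds and cancel_delayed_work_sync() waits for the
 * backlog to finish before the socket can go away.
 */
void sock_map_close(struct sock *sk, long timeout)
{
	/* ... sock_map_remove_links(sk, psock) drops the map's reference ... */
	psock = sk_psock_get(sk);
	if (psock) {
		cancel_delayed_work_sync(&psock->work);
		sk_psock_put(sk, psock);
	}
	/* ... */
}
'''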
The panic I caught:
'''
Workqueue: events sk_psock_backlog
RIP: 0010:sock_sendmsg+0x21d/0x440
RAX: 0000000000000000 RBX: ffffc9000521fad8 RCX: 0000000000000001
...
Call Trace:
<TASK>
? die_addr+0x40/0xa0
? exc_general_protection+0x14c/0x230
? asm_exc_general_protection+0x26/0x30
? sock_sendmsg+0x21d/0x440
? sock_sendmsg+0x3e0/0x440
? __pfx_sock_sendmsg+0x10/0x10
__skb_send_sock+0x543/0xb70
sk_psock_backlog+0x247/0xb80
...
'''
Reported-by: Michal Luczaj <mhal@rbox.co>
Fixes: 4b4647add7d3 ("sock_map: avoid race between sock_map_close and sk_psock_put")
Signed-off-by: Jiayuan Chen <jiayuan.chen@linux.dev>
---
V5 -> V6: Use correct "Fixes" tag.
V4 -> V5:
This patch is extracted from my previous v4 patchset that contained
multiple fixes, and it remains unchanged. Since this fix is relatively
simple and easy to review, we want to separate it from other fixes to
avoid any potential interference.
---
net/core/skmsg.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index 276934673066..34c51eb1a14f 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -656,6 +656,13 @@ static void sk_psock_backlog(struct work_struct *work)
 	bool ingress;
 	int ret;
 
+	/* Increment the psock refcnt to synchronize with close(fd) path in
+	 * sock_map_close(), ensuring we wait for backlog thread completion
+	 * before sk_socket freed. If refcnt increment fails, it indicates
+	 * sock_map_close() completed with sk_socket potentially already freed.
+	 */
+	if (!sk_psock_get(psock->sk))
+		return;
 	mutex_lock(&psock->work_mutex);
 	while ((skb = skb_peek(&psock->ingress_skb))) {
 		len = skb->len;
@@ -708,6 +715,7 @@ static void sk_psock_backlog(struct work_struct *work)
 	}
 end:
 	mutex_unlock(&psock->work_mutex);
+	sk_psock_put(psock->sk, psock);
 }
 
 struct sk_psock *sk_psock_init(struct sock *sk, int node)
--
2.47.1
* Re: [PATCH bpf-next v6] bpf, sockmap: avoid using sk_socket after free when sending
2025-05-16 14:17 [PATCH bpf-next v6] bpf, sockmap: avoid using sk_socket after free when sending Jiayuan Chen
@ 2025-05-19 19:52 ` John Fastabend
2025-05-22 19:25 ` Martin KaFai Lau
2025-05-22 23:30 ` patchwork-bot+netdevbpf
2 siblings, 0 replies; 6+ messages in thread
From: John Fastabend @ 2025-05-19 19:52 UTC (permalink / raw)
To: Jiayuan Chen
Cc: bpf, Michal Luczaj, Jakub Sitnicki, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Simon Horman,
Thadeu Lima de Souza Cascardo, netdev, linux-kernel
On 2025-05-16 22:17:12, Jiayuan Chen wrote:
> The sk->sk_socket is not locked or referenced in backlog thread, and
> during the call to skb_send_sock(), there is a race condition with
> the release of sk_socket. All types of sockets(tcp/udp/unix/vsock)
> will be affected.
[...]
>
> Reported-by: Michal Luczaj <mhal@rbox.co>
> Fixes: 4b4647add7d3 ("sock_map: avoid race between sock_map_close and sk_psock_put")
> Signed-off-by: Jiayuan Chen <jiayuan.chen@linux.dev>
LGTM thanks for fixing the tag.
Reviewed-by: John Fastabend <john.fastabend@gmail.com>
* Re: [PATCH bpf-next v6] bpf, sockmap: avoid using sk_socket after free when sending
2025-05-16 14:17 [PATCH bpf-next v6] bpf, sockmap: avoid using sk_socket after free when sending Jiayuan Chen
2025-05-19 19:52 ` John Fastabend
@ 2025-05-22 19:25 ` Martin KaFai Lau
2025-05-22 22:56 ` Jiayuan Chen
2025-05-22 23:30 ` patchwork-bot+netdevbpf
2 siblings, 1 reply; 6+ messages in thread
From: Martin KaFai Lau @ 2025-05-22 19:25 UTC (permalink / raw)
To: Jiayuan Chen
Cc: bpf, Michal Luczaj, John Fastabend, Jakub Sitnicki,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Simon Horman, Thadeu Lima de Souza Cascardo, netdev, linux-kernel
On 5/16/25 7:17 AM, Jiayuan Chen wrote:
> The sk->sk_socket is not locked or referenced in backlog thread, and
> during the call to skb_send_sock(), there is a race condition with
> the release of sk_socket. All types of sockets(tcp/udp/unix/vsock)
> will be affected.
>
> Race conditions:
> '''
> CPU0 CPU1
>
> backlog::skb_send_sock
> sendmsg_unlocked
> sock_sendmsg
> sock_sendmsg_nosec
> close(fd):
> ...
> ops->release() -> sock_map_close()
> sk_socket->ops = NULL
> free(socket)
> sock->ops->sendmsg
> ^
> panic here
> '''
>
> The ref of psock become 0 after sock_map_close() executed.
> '''
> void sock_map_close()
> {
> ...
> if (likely(psock)) {
> ...
> // !! here we remove psock and the ref of psock become 0
> sock_map_remove_links(sk, psock)
> psock = sk_psock_get(sk);
> if (unlikely(!psock))
> goto no_psock; <=== Control jumps here via goto
> ...
> cancel_delayed_work_sync(&psock->work); <=== not executed
> sk_psock_put(sk, psock);
> ...
> }
> '''
>
> Based on the fact that we already wait for the workqueue to finish in
> sock_map_close() if psock is held, we simply increase the psock
> reference count to avoid race conditions.
>
> With this patch, if the backlog thread is running, sock_map_close() will
> wait for the backlog thread to complete and cancel all pending work.
>
> If no backlog running, any pending work that hasn't started by then will
> fail when invoked by sk_psock_get(), as the psock reference count have
> been zeroed, and sk_psock_drop() will cancel all jobs via
> cancel_delayed_work_sync().
>
> In summary, we require synchronization to coordinate the backlog thread
> and close() thread.
>
> The panic I catched:
> '''
> Workqueue: events sk_psock_backlog
> RIP: 0010:sock_sendmsg+0x21d/0x440
> RAX: 0000000000000000 RBX: ffffc9000521fad8 RCX: 0000000000000001
> ...
> Call Trace:
> <TASK>
> ? die_addr+0x40/0xa0
> ? exc_general_protection+0x14c/0x230
> ? asm_exc_general_protection+0x26/0x30
> ? sock_sendmsg+0x21d/0x440
> ? sock_sendmsg+0x3e0/0x440
> ? __pfx_sock_sendmsg+0x10/0x10
> __skb_send_sock+0x543/0xb70
> sk_psock_backlog+0x247/0xb80
> ...
> '''
>
> Reported-by: Michal Luczaj <mhal@rbox.co>
> Fixes: 4b4647add7d3 ("sock_map: avoid race between sock_map_close and sk_psock_put")
> Signed-off-by: Jiayuan Chen <jiayuan.chen@linux.dev>
>
> ---
> V5 -> V6: Use correct "Fixes" tag.
> V4 -> V5:
> This patch is extracted from my previous v4 patchset that contained
> multiple fixes, and it remains unchanged. Since this fix is relatively
> simple and easy to review, we want to separate it from other fixes to
> avoid any potential interference.
> ---
> net/core/skmsg.c | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/net/core/skmsg.c b/net/core/skmsg.c
> index 276934673066..34c51eb1a14f 100644
> --- a/net/core/skmsg.c
> +++ b/net/core/skmsg.c
> @@ -656,6 +656,13 @@ static void sk_psock_backlog(struct work_struct *work)
> bool ingress;
> int ret;
>
> + /* Increment the psock refcnt to synchronize with close(fd) path in
> + * sock_map_close(), ensuring we wait for backlog thread completion
> + * before sk_socket freed. If refcnt increment fails, it indicates
> + * sock_map_close() completed with sk_socket potentially already freed.
> + */
> + if (!sk_psock_get(psock->sk))
This seems to be the first use case to pass "psock->sk" to "sk_psock_get()".
I could have missed the sock_map details here. Considering it is racing with
sock_map_close() which should also do a sock_put(sk) [?],
could you help to explain what makes it safe to access the psock->sk here?
> + return;
> mutex_lock(&psock->work_mutex);
> while ((skb = skb_peek(&psock->ingress_skb))) {
> len = skb->len;
> @@ -708,6 +715,7 @@ static void sk_psock_backlog(struct work_struct *work)
> }
> end:
> mutex_unlock(&psock->work_mutex);
> + sk_psock_put(psock->sk, psock);
> }
>
> struct sk_psock *sk_psock_init(struct sock *sk, int node)
* Re: [PATCH bpf-next v6] bpf, sockmap: avoid using sk_socket after free when sending
2025-05-22 19:25 ` Martin KaFai Lau
@ 2025-05-22 22:56 ` Jiayuan Chen
2025-05-22 23:23 ` Martin KaFai Lau
0 siblings, 1 reply; 6+ messages in thread
From: Jiayuan Chen @ 2025-05-22 22:56 UTC (permalink / raw)
To: Martin KaFai Lau
Cc: bpf, Michal Luczaj, John Fastabend, Jakub Sitnicki,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Simon Horman, Thadeu Lima de Souza Cascardo, netdev, linux-kernel
2025/5/23 03:25, "Martin KaFai Lau" <martin.lau@linux.dev> wrote:
>
> On 5/16/25 7:17 AM, Jiayuan Chen wrote:
>
> >
> > The sk->sk_socket is not locked or referenced in backlog thread, and
> >
> > during the call to skb_send_sock(), there is a race condition with
> >
> > the release of sk_socket. All types of sockets(tcp/udp/unix/vsock)
> >
> > will be affected.
> >
> > Race conditions:
> >
> > '''
> >
> > CPU0 CPU1
> >
> > backlog::skb_send_sock
> >
> > sendmsg_unlocked
> >
> > sock_sendmsg
> >
> > sock_sendmsg_nosec
> >
> > close(fd):
> >
> > ...
> >
> > ops->release() -> sock_map_close()
> >
> > sk_socket->ops = NULL
> >
> > free(socket)
> >
> > sock->ops->sendmsg
> >
> > ^
> >
> > panic here
> >
> > '''
> >
> > The ref of psock become 0 after sock_map_close() executed.
> >
> > '''
> >
> > void sock_map_close()
> >
> > {
> >
> > ...
> >
> > if (likely(psock)) {
> >
> > ...
> >
> > // !! here we remove psock and the ref of psock become 0
> >
> > sock_map_remove_links(sk, psock)
> >
> > psock = sk_psock_get(sk);
> >
> > if (unlikely(!psock))
> >
> > goto no_psock; <=== Control jumps here via goto
> >
> > ...
> >
> > cancel_delayed_work_sync(&psock->work); <=== not executed
> >
> > sk_psock_put(sk, psock);
> >
> > ...
> >
> > }
> >
> > '''
> >
> > Based on the fact that we already wait for the workqueue to finish in
> >
> > sock_map_close() if psock is held, we simply increase the psock
> >
> > reference count to avoid race conditions.
> >
> > With this patch, if the backlog thread is running, sock_map_close() will
> >
> > wait for the backlog thread to complete and cancel all pending work.
> >
> > If no backlog running, any pending work that hasn't started by then will
> >
> > fail when invoked by sk_psock_get(), as the psock reference count have
> >
> > been zeroed, and sk_psock_drop() will cancel all jobs via
> >
> > cancel_delayed_work_sync().
> >
> > In summary, we require synchronization to coordinate the backlog thread
> >
> > and close() thread.
> >
> > The panic I catched:
> >
> > '''
> >
> > Workqueue: events sk_psock_backlog
> >
> > RIP: 0010:sock_sendmsg+0x21d/0x440
> >
> > RAX: 0000000000000000 RBX: ffffc9000521fad8 RCX: 0000000000000001
> >
> > ...
> >
> > Call Trace:
> >
> > <TASK>
> >
> > ? die_addr+0x40/0xa0
> >
> > ? exc_general_protection+0x14c/0x230
> >
> > ? asm_exc_general_protection+0x26/0x30
> >
> > ? sock_sendmsg+0x21d/0x440
> >
> > ? sock_sendmsg+0x3e0/0x440
> >
> > ? __pfx_sock_sendmsg+0x10/0x10
> >
> > __skb_send_sock+0x543/0xb70
> >
> > sk_psock_backlog+0x247/0xb80
> >
> > ...
> >
> > '''
> >
> > Reported-by: Michal Luczaj <mhal@rbox.co>
> >
> > Fixes: 4b4647add7d3 ("sock_map: avoid race between sock_map_close and sk_psock_put")
> >
> > Signed-off-by: Jiayuan Chen <jiayuan.chen@linux.dev>
> >
> > ---
> >
> > V5 -> V6: Use correct "Fixes" tag.
> >
> > V4 -> V5:
> >
> > This patch is extracted from my previous v4 patchset that contained
> >
> > multiple fixes, and it remains unchanged. Since this fix is relatively
> >
> > simple and easy to review, we want to separate it from other fixes to
> >
> > avoid any potential interference.
> >
> > ---
> >
> > net/core/skmsg.c | 8 ++++++++
> >
> > 1 file changed, 8 insertions(+)
> >
> > diff --git a/net/core/skmsg.c b/net/core/skmsg.c
> >
> > index 276934673066..34c51eb1a14f 100644
> >
> > --- a/net/core/skmsg.c
> >
> > +++ b/net/core/skmsg.c
> >
>> @@ -656,6 +656,13 @@ static void sk_psock_backlog(struct work_struct *work)
>> 	bool ingress;
>> 	int ret;
>>
>> +	/* Increment the psock refcnt to synchronize with close(fd) path in
>> +	 * sock_map_close(), ensuring we wait for backlog thread completion
>> +	 * before sk_socket freed. If refcnt increment fails, it indicates
>> +	 * sock_map_close() completed with sk_socket potentially already freed.
>> +	 */
>> +	if (!sk_psock_get(psock->sk))
>
> This seems to be the first use case to pass "psock->sk" to "sk_psock_get()".
>
> I could have missed the sock_map details here. Considering it is racing with
> sock_map_close() which should also do a sock_put(sk) [?], could you help to
> explain what makes it safe to access the psock->sk here?
>
>> +	return;
>> 	mutex_lock(&psock->work_mutex);
>> 	while ((skb = skb_peek(&psock->ingress_skb))) {
>> 		len = skb->len;
>> @@ -708,6 +715,7 @@ static void sk_psock_backlog(struct work_struct *work)
>> 	}
>> end:
>> 	mutex_unlock(&psock->work_mutex);
>> +	sk_psock_put(psock->sk, psock);
>> }
>>
>> struct sk_psock *sk_psock_init(struct sock *sk, int node)
>
Hi Martin,
Using 'sk_psock_get(psock->sk)' in the workqueue is safe because
sock_map_close() only reduces the reference count of psock to zero, while
the actual memory release is fully handled by the RCU callback: sk_psock_destroy().
In sk_psock_destroy(), we first call cancel_delayed_work_sync() to wait for
the workqueue to complete, and only then perform sock_put(psock->sk). This
means we already have an explicit synchronization mechanism in place that
guarantees safe access to both psock and psock->sk in the workqueue context.
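
For reference, the ordering in question looks roughly like this (paraphrased
and heavily trimmed from my reading of net/core/skmsg.c, not a verbatim copy):
'''
static void sk_psock_destroy(struct work_struct *work)
{
	struct sk_psock *psock = container_of(to_rcu_work(work),
					      struct sk_psock, rwork);

	/* Wait for any running or queued backlog work first ... */
	cancel_delayed_work_sync(&psock->work);
	/* ... */
	/* ... and only afterwards drop the sock reference held by the psock. */
	sock_put(psock->sk);
	kfree(psock);
}

void sk_psock_drop(struct sock *sk, struct sk_psock *psock)
{
	/* ... reached once the psock refcnt hits zero ... */
	INIT_RCU_WORK(&psock->rwork, sk_psock_destroy);
	queue_rcu_work(system_wq, &psock->rwork);
}
'''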
Thanks.
* Re: [PATCH bpf-next v6] bpf, sockmap: avoid using sk_socket after free when sending
2025-05-22 22:56 ` Jiayuan Chen
@ 2025-05-22 23:23 ` Martin KaFai Lau
0 siblings, 0 replies; 6+ messages in thread
From: Martin KaFai Lau @ 2025-05-22 23:23 UTC (permalink / raw)
To: Jiayuan Chen
Cc: bpf, Michal Luczaj, John Fastabend, Jakub Sitnicki,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Simon Horman, Thadeu Lima de Souza Cascardo, netdev, linux-kernel
On 5/22/25 3:56 PM, Jiayuan Chen wrote:
>>> @@ -656,6 +656,13 @@ static void sk_psock_backlog(struct work_struct *work)
>>> bool ingress;
>>> int ret;
>>> > + /* Increment the psock refcnt to synchronize with close(fd) path in
>>> + * sock_map_close(), ensuring we wait for backlog thread completion
>>> + * before sk_socket freed. If refcnt increment fails, it indicates
>>> + * sock_map_close() completed with sk_socket potentially already freed.
>>> + */
>>> + if (!sk_psock_get(psock->sk))
>>
>> This seems to be the first use case to pass "psock->sk" to "sk_psock_get()".
>>
>> I could have missed the sock_map details here. Considering it is racing with sock_map_close() which should also do a sock_put(sk) [?],
>>
>> could you help to explain what makes it safe to access the psock->sk here?
>>
>
> Using 'sk_psock_get(psock->sk)' in the workqueue is safe because
> sock_map_close() only reduces the reference count of psock to zero, while
> the actual memory release is fully handled by the RCU callback: sk_psock_destroy().
>
> In sk_psock_destroy(), we first cancel_delayed_work_sync() to wait for the
> workqueue to complete, and then perform sock_put(psock->sk). This means we
Got it. The sock_put(psock->sk) done after an RCU grace period is the part that I was missing.
Applied. Thanks.
* Re: [PATCH bpf-next v6] bpf, sockmap: avoid using sk_socket after free when sending
2025-05-16 14:17 [PATCH bpf-next v6] bpf, sockmap: avoid using sk_socket after free when sending Jiayuan Chen
2025-05-19 19:52 ` John Fastabend
2025-05-22 19:25 ` Martin KaFai Lau
@ 2025-05-22 23:30 ` patchwork-bot+netdevbpf
2 siblings, 0 replies; 6+ messages in thread
From: patchwork-bot+netdevbpf @ 2025-05-22 23:30 UTC (permalink / raw)
To: Jiayuan Chen
Cc: bpf, mhal, john.fastabend, jakub, davem, edumazet, kuba, pabeni,
horms, cascardo, netdev, linux-kernel
Hello:
This patch was applied to bpf/bpf-next.git (master)
by Martin KaFai Lau <martin.lau@kernel.org>:
On Fri, 16 May 2025 22:17:12 +0800 you wrote:
> The sk->sk_socket is not locked or referenced in backlog thread, and
> during the call to skb_send_sock(), there is a race condition with
> the release of sk_socket. All types of sockets(tcp/udp/unix/vsock)
> will be affected.
>
> Race conditions:
> '''
> CPU0 CPU1
>
> [...]
Here is the summary with links:
- [bpf-next,v6] bpf, sockmap: avoid using sk_socket after free when sending
https://git.kernel.org/bpf/bpf-next/c/8259eb0e06d8
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html