Netdev List
* [PATCH net 1/1] rxrpc: serialize kernel accept preallocation with socket teardown
       [not found] <cover.1778230563.git.d4n.for.sec@gmail.com>
@ 2026-05-08 15:58 ` Ren Wei
  2026-05-12 13:25   ` Simon Horman
  0 siblings, 1 reply; 2+ messages in thread
From: Ren Wei @ 2026-05-08 15:58 UTC (permalink / raw)
  To: linux-afs, netdev
  Cc: dhowells, marc.dionne, yuantan098, yifanwucs, tomapufckgml, bird,
	d4n.for.sec, n05ec

From: Li Daming <d4n.for.sec@gmail.com>

rxrpc_kernel_charge_accept() reads rx->backlog without any
socket/backlog synchronization and passes that raw pointer into
rxrpc_service_prealloc_one(). A concurrent rxrpc_discard_prealloc()
sets rx->backlog = NULL and frees the backlog rings, so a kernel
preallocation worker can keep using a freed struct rxrpc_backlog
while updating *_backlog_head/tail and array slots.

Serialize the state check and backlog lookup with the socket lock,
and reject kernel preallocation once teardown has disabled
listening or discarded the service backlog.
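
The race window, reconstructed from the description above, looks like
this:

```
CPU 0                                    CPU 1
rxrpc_kernel_charge_accept()
  b = rx->backlog
                                         rxrpc_discard_prealloc()
                                           rx->backlog = NULL
                                           kfree(b)
  rxrpc_service_prealloc_one(rx, b)
    <-- writes into freed backlog (UAF)
```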

Fixes: 00e907127e6f ("rxrpc: Preallocate peers, conns and calls for incoming service requests")
Cc: stable@kernel.org
Reported-by: Yuan Tan <yuantan098@gmail.com>
Reported-by: Yifan Wu <yifanwucs@gmail.com>
Reported-by: Juefei Pu <tomapufckgml@gmail.com>
Reported-by: Xin Liu <bird@lzu.edu.cn>
Signed-off-by: Li Daming <d4n.for.sec@gmail.com>
Signed-off-by: Ren Wei <n05ec@lzu.edu.cn>
---
 net/rxrpc/call_accept.c | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c
index ee2d131..4782412 100644
--- a/net/rxrpc/call_accept.c
+++ b/net/rxrpc/call_accept.c
@@ -471,13 +471,26 @@ int rxrpc_kernel_charge_accept(struct socket *sock, rxrpc_notify_rx_t notify_rx,
 			       unsigned long user_call_ID, gfp_t gfp,
 			       unsigned int debug_id)
 {
-	struct rxrpc_sock *rx = rxrpc_sk(sock->sk);
-	struct rxrpc_backlog *b = rx->backlog;
+	struct rxrpc_backlog *b;
+	struct rxrpc_sock *rx;
+	struct sock *sk;
+	int ret;
 
-	if (sock->sk->sk_state == RXRPC_CLOSE)
-		return -ESHUTDOWN;
+	sk = sock->sk;
+	rx = rxrpc_sk(sk);
+
+	lock_sock(sk);
+	if (sk->sk_state != RXRPC_SERVER_LISTENING || !rx->backlog) {
+		ret = -ESHUTDOWN;
+		goto out;
+	}
+
+	b = rx->backlog;
+	ret = rxrpc_service_prealloc_one(rx, b, notify_rx, user_call_ID,
+					 gfp, debug_id);
 
-	return rxrpc_service_prealloc_one(rx, b, notify_rx, user_call_ID,
-					  gfp, debug_id);
+out:
+	release_sock(sk);
+	return ret;
 }
 EXPORT_SYMBOL(rxrpc_kernel_charge_accept);
-- 
2.43.0



* Re: [PATCH net 1/1] rxrpc: serialize kernel accept preallocation with socket teardown
  2026-05-08 15:58 ` [PATCH net 1/1] rxrpc: serialize kernel accept preallocation with socket teardown Ren Wei
@ 2026-05-12 13:25   ` Simon Horman
  0 siblings, 0 replies; 2+ messages in thread
From: Simon Horman @ 2026-05-12 13:25 UTC (permalink / raw)
  To: n05ec
  Cc: 'Simon Horman', linux-afs, netdev, dhowells, marc.dionne,
	yuantan098, yifanwucs, tomapufckgml, bird, d4n.for.sec

From: 'Simon Horman' <horms@kernel.org>

This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.
Full review at: https://sashiko.dev
---
rxrpc: serialize kernel accept preallocation with socket teardown

This commit attempts to fix a use-after-free in the rxrpc accept
preallocation path by serializing the state check and backlog lookup
with the socket lock.

Reported-by: Yuan Tan <yuantan098@gmail.com>
Reported-by: Yifan Wu <yifanwucs@gmail.com>
Reported-by: Juefei Pu <tomapufckgml@gmail.com>
Reported-by: Xin Liu <bird@lzu.edu.cn>

> diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c
> index ee2d1319e69a8..47824120f1da1 100644
> --- a/net/rxrpc/call_accept.c
> +++ b/net/rxrpc/call_accept.c
> @@ -471,13 +471,26 @@ int rxrpc_kernel_charge_accept(struct socket *sock, rxrpc_notify_rx_t notify_rx,
>  			       unsigned long user_call_ID, gfp_t gfp,
>  			       unsigned int debug_id)
>  {
> -	struct rxrpc_sock *rx = rxrpc_sk(sock->sk);
> -	struct rxrpc_backlog *b = rx->backlog;
> +	struct rxrpc_backlog *b;
> +	struct rxrpc_sock *rx;
> +	struct sock *sk;
> +	int ret;
>  
> -	if (sock->sk->sk_state == RXRPC_CLOSE)
> -		return -ESHUTDOWN;
> +	sk = sock->sk;

Could the sock pointer passed to this function already be freed by the time it
executes?

In the AFS subsystem, afs_rx_new_call() queues charge_preallocation_work
on afs_wq. However, during socket teardown, afs_close_socket() appears
to flush afs_async_calls rather than afs_wq, and does not cancel the
work item with cancel_work_sync().

If the stray work executes asynchronously after sock_release() has destroyed
the socket, wouldn't dereferencing sock->sk result in a use-after-free?
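
If that analysis holds, one way to close that window would be to cancel
the worker before the socket goes away. Untested sketch; the field name
is assumed from the queue_work() call in afs_rx_new_call():

```c
/* Untested sketch for afs_close_socket() in fs/afs/rxrpc.c, before
 * sock_release(): make sure a queued or running preallocation worker
 * cannot outlive the socket it charges.
 */
cancel_work_sync(&net->charge_preallocation_work);
```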

> +	rx = rxrpc_sk(sk);
> +
> +	lock_sock(sk);

Does lock_sock() adequately protect against concurrent socket release?

Looking at the socket release path, rxrpc_release_sock() modifies
sk->sk_state and calls rxrpc_discard_prealloc() (which frees the
backlog) without holding lock_sock().

If rxrpc_kernel_charge_accept() runs concurrently with socket release,
could rxrpc_release_sock() locklessly free the backlog from under it while
this function holds the socket lock, leading to a use-after-free when
accessing the backlog below?
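
If the release path really is lockless as described above, taking
lock_sock() in the charge path only narrows the window; the teardown
side would need to publish the state change and discard the backlog
under the same lock. Untested sketch, assuming rxrpc_release_sock() is
the lockless path in question:

```c
/* Untested sketch for rxrpc_release_sock(): pair with the lock_sock()
 * now taken in rxrpc_kernel_charge_accept() so the state check and the
 * backlog teardown cannot interleave.
 */
lock_sock(sk);
sk->sk_state = RXRPC_CLOSE;
rxrpc_discard_prealloc(rx);
release_sock(sk);
```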

> +	if (sk->sk_state != RXRPC_SERVER_LISTENING || !rx->backlog) {
> +		ret = -ESHUTDOWN;
> +		goto out;
> +	}
> +
> +	b = rx->backlog;
> +	ret = rxrpc_service_prealloc_one(rx, b, notify_rx, user_call_ID,
> +					 gfp, debug_id);

[ ... ]

