The Linux Kernel Mailing List
* [PATCH] mptcp: serialize subflow->closing with RX path
@ 2026-05-07  7:28 Kalpan Jani
  2026-05-07 16:12 ` Matthieu Baerts
  2026-05-07 17:08 ` Paolo Abeni
  0 siblings, 2 replies; 3+ messages in thread
From: Kalpan Jani @ 2026-05-07  7:28 UTC (permalink / raw)
  To: matttbe, martineau, mptcp, netdev, linux-kernel
  Cc: shardul.b, janak, kalpanjani009, shardulsb08, Kalpan Jani

There is a race between mptcp_data_ready() (RX path) and
mptcp_close_ssk() (teardown path) when accessing subflow->closing.

Currently, mptcp_data_ready() checks subflow->closing before acquiring
mptcp_data_lock(), while mptcp_close_ssk() may concurrently set
subflow->closing and purge backlog entries. This creates a classic
time-of-check vs time-of-use (TOCTOU) race:

  CPU A (close path)              CPU B (RX path)
  ----------------------         -------------------------
  set closing = 1
                                 read closing == 0
  purge backlog
                                 enqueue skb to backlog

As a result, skb entries referencing the subflow socket (ssk) may be
enqueued after the subflow is marked closing and scheduled for cleanup.
This can lead to:

  - WARN in inet_sock_destruct() due to non-zero sk_rmem_alloc
  - potential use-after-free via stale skb->sk references

Fix this by serializing both the closing check and backlog enqueue
under mptcp_data_lock(). This ensures that subflow->closing state and
backlog operations are observed atomically, preventing new skbs from
being enqueued once teardown begins.

Also protect backlog cleanup in mptcp_close_ssk() with the same lock
to guarantee mutual exclusion with the RX path.

This restores proper synchronization between RX and teardown paths
and prevents stale skb references to closing subflows.

Signed-off-by: Kalpan Jani <kalpan.jani@mpiricsoftware.com>
---
 net/mptcp/protocol.c | 31 ++++++++++++++++++++++++++++---
 1 file changed, 28 insertions(+), 3 deletions(-)

diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 718e910ff..295f8e1c0 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -910,14 +910,34 @@ void mptcp_data_ready(struct sock *sk, struct sock *ssk)
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
 	struct mptcp_sock *msk = mptcp_sk(sk);
 
+	/*
+	 * The close path can set subflow->closing while we are racing
+	 * from BH context here. The old check was done before taking
+	 * mptcp_data_lock(), leaving a TOCTOU window:
+	 *
+	 *   CPU A: close path sets closing = 1 and purges backlog
+	 *   CPU B: already observed closing == 0 and later enqueues skb
+	 *
+	 * That skb keeps skb->sk == ssk and can later trigger:
+	 * - WARN in inet_sock_destruct() (ssk->sk_rmem_alloc != 0)
+	 * - UAF in backlog purge via stale skb->sk
+	 */
+
 	/* The peer can send data while we are shutting down this
 	 * subflow at subflow destruction time, but we must avoid enqueuing
 	 * more data to the msk receive queue
 	 */
-	if (unlikely(subflow->closing))
-		return;
 
 	mptcp_data_lock(sk);
+
+	/* Serialize closing check with backlog enqueue */
+	if (unlikely(subflow->closing)) {
+		mptcp_data_unlock(sk);
+		return;
+	}
+
 	mptcp_rcv_rtt_update(msk, subflow);
 	if (!sock_owned_by_user(sk)) {
 		/* Wake-up the reader only for in-sequence data */
@@ -2653,9 +2673,12 @@ void mptcp_close_ssk(struct sock *sk, struct sock *ssk,
 	if (sk->sk_state == TCP_ESTABLISHED)
 		mptcp_event(MPTCP_EVENT_SUB_CLOSED, mptcp_sk(sk), ssk, GFP_KERNEL);
 
-	/* Remove any reference from the backlog to this ssk; backlog skbs consume
+	/* Remove any reference from the backlog to this ssk.
+	 * Serialize cleanup with RX-side enqueue using mptcp_data_lock().
+	 * Backlog skbs consume
 	 * space in the msk receive queue, no need to touch sk->sk_rmem_alloc
 	 */
+	mptcp_data_lock(sk);
 	list_for_each_entry(skb, &msk->backlog_list, list) {
 		if (skb->sk != ssk)
 			continue;
@@ -2663,6 +2686,8 @@ void mptcp_close_ssk(struct sock *sk, struct sock *ssk,
 		atomic_sub(skb->truesize, &skb->sk->sk_rmem_alloc);
 		skb->sk = NULL;
 	}
+	mptcp_data_unlock(sk);
+
 
 	/* subflow aborted before reaching the fully_established status
 	 * attempt the creation of the next subflow
-- 
2.43.0

^ permalink raw reply related	[flat|nested] 3+ messages in thread

* Re: [PATCH] mptcp: serialize subflow->closing with RX path
  2026-05-07  7:28 [PATCH] mptcp: serialize subflow->closing with RX path Kalpan Jani
@ 2026-05-07 16:12 ` Matthieu Baerts
  2026-05-07 17:08 ` Paolo Abeni
  1 sibling, 0 replies; 3+ messages in thread
From: Matthieu Baerts @ 2026-05-07 16:12 UTC (permalink / raw)
  To: Kalpan Jani, martineau, mptcp, netdev, linux-kernel
  Cc: shardul.b, janak, kalpanjani009, shardulsb08

Hi Kalpan,

On 07/05/2026 09:28, Kalpan Jani wrote:
> There is a race between mptcp_data_ready() (RX path) and
> mptcp_close_ssk() (teardown path) when accessing subflow->closing.

Thank you for sharing this patch!

Sadly, this patch doesn't apply and looks corrupted:

  Applying: mptcp: serialize subflow->closing with RX path
  error: corrupt patch at line 44
  error: could not build fake ancestor

Did you manually edit it without updating the line references?

While at it, please follow the rules from:

  https://docs.kernel.org/process/maintainer-netdev.html

=> designate your patch to a tree: [PATCH net]

It might be easier if you send new versions only to the MPTCP ML (not
ccing netdev).

> Currently, mptcp_data_ready() checks subflow->closing before acquiring
> mptcp_data_lock(), while mptcp_close_ssk() may concurrently set
> subflow->closing and purge backlog entries. This creates a classic
> time-of-check vs time-of-use (TOCTOU) race:
> 
>   CPU A (close path)              CPU B (RX path)
>   ----------------------         -------------------------
>   set closing = 1
>                                  read closing == 0
>   purge backlog
>                                  enqueue skb to backlog
> 
> As a result, skb entries referencing the subflow socket (ssk) may be
> enqueued after the subflow is marked closing and scheduled for cleanup.
> This can lead to:
> 
>   - WARN in inet_sock_destruct() due to non-zero sk_rmem_alloc
>   - potential use-after-free via stale skb->sk references

By chance, do you have (decoded) calltraces to share in the commit
message? And even better: a reproducer? Or explaining how you found this
issue, and eventually which tool helped you find it.

> Fix this by serializing both the closing check and backlog enqueue
> under mptcp_data_lock(). This ensures that subflow->closing state and
> backlog operations are observed atomically, preventing new skb from
> being enqueued once teardown begins.
> 
> Also protect backlog cleanup in mptcp_close_ssk() with the same lock
> to guarantee mutual exclusion with the RX path.
> 
> This restores proper synchronization between RX and teardown paths
> and prevents stale skb references to closing subflows.

Also, for fixes, the "Fixes:" tag is required.

> Signed-off-by: Kalpan Jani <kalpan.jani@mpiricsoftware.com>
> ---
>  net/mptcp/protocol.c | 31 ++++++++++++++++++++++++++++---
>  1 file changed, 28 insertions(+), 3 deletions(-)
> 
> diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
> index 718e910ff..295f8e1c0 100644
> --- a/net/mptcp/protocol.c
> +++ b/net/mptcp/protocol.c
> @@ -910,14 +910,34 @@ void mptcp_data_ready(struct sock *sk, struct sock *ssk)
>  	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
>  	struct mptcp_sock *msk = mptcp_sk(sk);
>  
> +	/*
> +	 * The close path can set subflow->closing while we are racing
> +	 * from BH context here. The old check was done before taking
> +	 * mptcp_data_lock(), leaving a TOCTOU window:
> +	 *
> +	 *   CPU A: close path sets closing = 1 and purges backlog
> +	 *   CPU B: already observed closing == 0 and later enqueues skb
> +	 *
> +	 * That skb keeps skb->sk == ssk and can later trigger:
> +	 * - WARN in inet_sock_destruct() (ssk->sk_rmem_alloc != 0)
> +	 * - UAF in backlog purge via stale skb->sk
> +	 */

I don't think it's useful to add a comment referring to the old behaviour.

> +
>  	/* The peer can send data while we are shutting down this
>  	 * subflow at subflow destruction time, but we must avoid enqueuing
>  	 * more data to the msk receive queue
>  	 */

Instead, I suggest moving this comment below as well, and merging it
with the new one you added.
> -	if (unlikely(subflow->closing))
> -		return;
>  
>  	mptcp_data_lock(sk);
> +
> +	/* Serialize closing check with backlog enqueue */
> +	if (unlikely(subflow->closing)) {
> +		mptcp_data_unlock(sk);

When locks are used, we usually prefer having one exit path: please add
a new label above mptcp_data_unlock() below, and a goto here.

> +		return;
> +	}
> +
>  	mptcp_rcv_rtt_update(msk, subflow);
>  	if (!sock_owned_by_user(sk)) {
>  		/* Wake-up the reader only for in-sequence data */
> @@ -2653,9 +2673,12 @@ void mptcp_close_ssk(struct sock *sk, struct sock *ssk,
>  	if (sk->sk_state == TCP_ESTABLISHED)
>  		mptcp_event(MPTCP_EVENT_SUB_CLOSED, mptcp_sk(sk), ssk, GFP_KERNEL);
>  
> -	/* Remove any reference from the backlog to this ssk; backlog skbs consume
> +	/* Remove any reference from the backlog to this ssk.
> +	 * Serialize cleanup with RX-side enqueue using mptcp_data_lock().

Easier to add this new line at the end of the comment to reduce the diff.

> +	 * Backlog skbs consume
>  	 * space in the msk receive queue, no need to touch sk->sk_rmem_alloc
>  	 */
> +	mptcp_data_lock(sk);
>  	list_for_each_entry(skb, &msk->backlog_list, list) {
>  		if (skb->sk != ssk)
>  			continue;
> @@ -2663,6 +2686,8 @@ void mptcp_close_ssk(struct sock *sk, struct sock *ssk,
>  		atomic_sub(skb->truesize, &skb->sk->sk_rmem_alloc);
>  		skb->sk = NULL;
>  	}
> +	mptcp_data_unlock(sk);
> +
>  

No double empty lines. I think 'checkpatch' would tell you that.

>  	/* subflow aborted before reaching the fully_established status
>  	 * attempt the creation of the next subflow

Cheers,
Matt
-- 
Sponsored by the NGI0 Core fund.


^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [PATCH] mptcp: serialize subflow->closing with RX path
  2026-05-07  7:28 [PATCH] mptcp: serialize subflow->closing with RX path Kalpan Jani
  2026-05-07 16:12 ` Matthieu Baerts
@ 2026-05-07 17:08 ` Paolo Abeni
  1 sibling, 0 replies; 3+ messages in thread
From: Paolo Abeni @ 2026-05-07 17:08 UTC (permalink / raw)
  To: Kalpan Jani, matttbe, martineau, mptcp, netdev, linux-kernel
  Cc: shardul.b, janak, kalpanjani009, shardulsb08

On 5/7/26 9:28 AM, Kalpan Jani wrote:
> There is a race between mptcp_data_ready() (RX path) and
> mptcp_close_ssk() (teardown path) when accessing subflow->closing.
> 
> Currently, mptcp_data_ready() checks subflow->closing before acquiring
> mptcp_data_lock(), while mptcp_close_ssk() may concurrently set
> subflow->closing and 

Are you sure this race can really happen? Both the relevant part of
__mptcp_close_ssk() and mptcp_data_ready() run under the ssk socket
lock.

> @@ -2653,9 +2673,12 @@ void mptcp_close_ssk(struct sock *sk, struct sock *ssk,
>  	if (sk->sk_state == TCP_ESTABLISHED)
>  		mptcp_event(MPTCP_EVENT_SUB_CLOSED, mptcp_sk(sk), ssk, GFP_KERNEL);
>  
> -	/* Remove any reference from the backlog to this ssk; backlog skbs consume
> +	/* Remove any reference from the backlog to this ssk.
> +	 * Serialize cleanup with RX-side enqueue using mptcp_data_lock().
> +	 * Backlog skbs consume
>  	 * space in the msk receive queue, no need to touch sk->sk_rmem_alloc
>  	 */
> +	mptcp_data_lock(sk);
>  	list_for_each_entry(skb, &msk->backlog_list, list) {
>  		if (skb->sk != ssk)
>  			continue;

The real problem is here: the backlog is currently traversed without
the data lock, while the ssk is still possibly open, unlocked and can
keep receiving packets and adding them to the backlog.

A better solution would be something like the following patch (completely
untested):
---
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 718e910ff23f..68d97926cb81 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2550,6 +2550,21 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
 	lock_sock_nested(ssk, SINGLE_DEPTH_NESTING);
 	subflow->closing = 1;
 
+	/* Remove any reference from the backlog to this ssk; backlog skbs consume
+	 * space in the msk receive queue, no need to touch sk->sk_rmem_alloc
+	 */
+	if (flags & MPTCP_CF_PUSH) {
+		mptcp_data_lock(sk);
+		list_for_each_entry(skb, &msk->backlog_list, list) {
+			if (skb->sk != ssk)
+				continue;
+
+			atomic_sub(skb->truesize, &skb->sk->sk_rmem_alloc);
+			skb->sk = NULL;
+		}
+		mptcp_data_unlock(sk);
+	}
+
 	/* Borrow the fwd allocated page left-over; fwd memory for the subflow
 	 * could be negative at this point, but will be reach zero soon - when
 	 * the data allocated using such fragment will be freed.
@@ -2653,17 +2668,6 @@ void mptcp_close_ssk(struct sock *sk, struct sock *ssk,
 	if (sk->sk_state == TCP_ESTABLISHED)
 		mptcp_event(MPTCP_EVENT_SUB_CLOSED, mptcp_sk(sk), ssk, GFP_KERNEL);
 
-	/* Remove any reference from the backlog to this ssk; backlog skbs consume
-	 * space in the msk receive queue, no need to touch sk->sk_rmem_alloc
-	 */
-	list_for_each_entry(skb, &msk->backlog_list, list) {
-		if (skb->sk != ssk)
-			continue;
-
-		atomic_sub(skb->truesize, &skb->sk->sk_rmem_alloc);
-		skb->sk = NULL;
-	}
-
 	/* subflow aborted before reaching the fully_established status
 	 * attempt the creation of the next subflow
 	 */


^ permalink raw reply related	[flat|nested] 3+ messages in thread

end of thread, other threads:[~2026-05-07 17:08 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-05-07  7:28 [PATCH] mptcp: serialize subflow->closing with RX path Kalpan Jani
2026-05-07 16:12 ` Matthieu Baerts
2026-05-07 17:08 ` Paolo Abeni
