* [PATCH mptcp-next v5 00/12] mptcp: address stall under memory pressure
@ 2026-05-10 21:03 Paolo Abeni
2026-05-10 21:03 ` [PATCH mptcp-next v5 01/12] mptcp: do not drop partial packets Paolo Abeni
` (12 more replies)
0 siblings, 13 replies; 17+ messages in thread
From: Paolo Abeni @ 2026-05-10 21:03 UTC (permalink / raw)
To: mptcp; +Cc: Shardul Bankar
This is an attempt to fix the data transfer stall reported by Geliang
and Gang, by more carefully enforcing memory constraints at the MPTCP
level.
The first patch is from Shardul; it is included here because the
condition it addresses is frequently hit in the relevant test cases.
Patches 2 and 3 make the admission check much stricter for incoming
packets exceeding the memory limits, with some exceptions for fallback
sockets.
Patches 4, 5, 6 and 7 are cleanups/refactors aimed at safely re-using
TCP helpers on MPTCP skbs.
Patch 8 makes the TCP pruning-related helpers available to MPTCP, and
patch 9 makes use of them. Patch 10 addresses an edge scenario that
could still lead to a transfer stall under memory pressure.
Finally, patches 11 and 12 improve the MPTCP-level retransmission
scheme to make recovery from memory pressure significantly faster.
Tested successfully against the test cases proposed by Geliang and
Gang and against the selftests.
---
Just a few minor updates over v4 (which was misnamed, lacking a
revision tag) to deal with sashiko's feedback (some of it was
irrelevant/wrong, as noted on the previous revision). The changes are
described in the individual patches.
Patch 1 should go via net, included here for my own sake :-P
Paolo Abeni (12):
mptcp: do not drop partial packets
mptcp: explicitly drop over memory limits
mptcp: enforce hard limit on backlog flushing
mptcp: drop the mptcp_ooo_try_coalesce() helper
mptcp: drop the cant_coalesce CB field
mptcp: remove CB offset field
mptcp: sync mptcp skb cb layout with tcp one
tcp: expose the tcp_collapse_ofo_queue() helper to mptcp usage, too
mptcp: implemented OoO queue pruning
mptcp: track prune recovery status
mptcp: move the retrans loop to a separate helper
mptcp: let the retrans scheduler do its job.
include/net/tcp.h | 8 +
net/ipv4/tcp_input.c | 54 +++--
net/mptcp/fastopen.c | 17 +-
net/mptcp/mib.c | 3 +
net/mptcp/mib.h | 3 +
net/mptcp/options.c | 74 ++++++-
net/mptcp/protocol.c | 487 +++++++++++++++++++++++++++++--------------
net/mptcp/protocol.h | 24 ++-
net/mptcp/subflow.c | 11 +
9 files changed, 492 insertions(+), 189 deletions(-)
--
2.54.0
* [PATCH mptcp-next v5 01/12] mptcp: do not drop partial packets
2026-05-10 21:03 [PATCH mptcp-next v5 00/12] mptcp: address stall under memory pressure Paolo Abeni
@ 2026-05-10 21:03 ` Paolo Abeni
2026-05-11 10:14 ` Matthieu Baerts
2026-05-10 21:03 ` [PATCH mptcp-next v5 02/12] mptcp: explicitly drop over memory limits Paolo Abeni
` (11 subsequent siblings)
12 siblings, 1 reply; 17+ messages in thread
From: Paolo Abeni @ 2026-05-10 21:03 UTC (permalink / raw)
To: mptcp; +Cc: Shardul Bankar
From: Shardul Bankar <shardul.b@mpiricsoftware.com>
When a packet arrives with map_seq < ack_seq < end_seq, the beginning
of the packet has already been acknowledged but the end contains new
data. Currently the entire packet is dropped as "old data," forcing
the sender to retransmit.
Instead, skip the already-acked bytes by adjusting the skb offset and
enqueue only the new portion. Update bytes_received and ack_seq to
reflect the new data consumed.
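Concretely, with hypothetical example numbers (`cb` below stands for
MPTCP_SKB_CB(skb)), the adjustment amounts to this sketch:

	/* map_seq = 1000, ack_seq = 1400, end_seq = 2000: bytes
	 * [1000, 1400) are already acked, [1400, 2000) are new.
	 */
	u64 copy_len = cb->end_seq - msk->ack_seq;	/* 600 new bytes */

	cb->offset += msk->ack_seq - cb->map_seq;	/* skip 400 old bytes */
	cb->map_seq = msk->ack_seq;			/* map now starts at 1400 */
	msk->bytes_received += copy_len;
	msk->ack_seq += copy_len;			/* advances to 2000 */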
A previous attempt at this fix (commit 1d2ce718811a ("mptcp: do not
drop partial packets"), reverted in commit bf39160c4218 ("Revert
"mptcp: do not drop partial packets"")) also added a zero-window
check and changed rcv_wnd_sent initialization, which caused test
regressions. This version addresses only the partial packet handling
without modifying receive window accounting.
Fixes: ab174ad8ef76 ("mptcp: move ooo skbs into msk out of order queue.")
Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/600
Signed-off-by: Shardul Bankar <shardul.b@mpiricsoftware.com>
[pabeni@redhat.com: update map]
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
v2 -> v3:
- update map_seq, too
v2: https://lore.kernel.org/mptcp/20260422143931.43281-1-shardul.b@mpiricsoftware.com/
Note:
- this introduces some code duplication; it will be cleaned up in
  later patches
---
net/mptcp/protocol.c | 24 +++++++++++++++++++-----
1 file changed, 19 insertions(+), 5 deletions(-)
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 93e7a42fc65c..ce8372fb3c6a 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -397,12 +397,26 @@ static bool __mptcp_move_skb(struct sock *sk, struct sk_buff *skb)
return false;
}
- /* old data, keep it simple and drop the whole pkt, sender
- * will retransmit as needed, if needed.
+ /* Completely old data? */
+ if (!after64(MPTCP_SKB_CB(skb)->end_seq, msk->ack_seq)) {
+ MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_DUPDATA);
+ mptcp_drop(sk, skb);
+ return false;
+ }
+
+ /* Partial packet: map_seq < ack_seq < end_seq.
+ * Skip the already-acked bytes and enqueue the new data.
*/
- MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_DUPDATA);
- mptcp_drop(sk, skb);
- return false;
+ copy_len = MPTCP_SKB_CB(skb)->end_seq - msk->ack_seq;
+ MPTCP_SKB_CB(skb)->offset += msk->ack_seq - MPTCP_SKB_CB(skb)->map_seq;
+ MPTCP_SKB_CB(skb)->map_seq += msk->ack_seq -
+ MPTCP_SKB_CB(skb)->map_seq;
+ msk->bytes_received += copy_len;
+ WRITE_ONCE(msk->ack_seq, msk->ack_seq + copy_len);
+
+ skb_set_owner_r(skb, sk);
+ __skb_queue_tail(&sk->sk_receive_queue, skb);
+ return true;
}
static void mptcp_stop_rtx_timer(struct sock *sk)
--
2.54.0
* [PATCH mptcp-next v5 02/12] mptcp: explicitly drop over memory limits
2026-05-10 21:03 [PATCH mptcp-next v5 00/12] mptcp: address stall under memory pressure Paolo Abeni
2026-05-10 21:03 ` [PATCH mptcp-next v5 01/12] mptcp: do not drop partial packets Paolo Abeni
@ 2026-05-10 21:03 ` Paolo Abeni
2026-05-10 21:03 ` [PATCH mptcp-next v5 03/12] mptcp: enforce hard limit on backlog flushing Paolo Abeni
` (10 subsequent siblings)
12 siblings, 0 replies; 17+ messages in thread
From: Paolo Abeni @ 2026-05-10 21:03 UTC (permalink / raw)
To: mptcp; +Cc: Shardul Bankar
Currently the rcvbuf constraint is enforced when moving skbs into the
msk receive or OoO queue, keeping the incoming skbs in the subflow
queue when over the limit.
Under significant memory pressure the above can cause permanent data
transfer stalls. Enforce the memory limits as early as possible,
before the skb even lands in the subflow queues, and refine the check
when owning the msk socket lock.
Note that fallback sockets must not drop packets at the later checks:
the incoming skb is already acked, and such a drop would break the
stream.
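Schematically, the early admission check accounts both the memory
already charged to the msk and the not yet spooled backlog (a sketch
using the names from the diff below; the sum is computed in a u64 as
the two terms together can exceed what a u32 holds, the overflow fixed
in v5):

	u64 mem = sk_rmem_alloc_get(sk);

	mem += READ_ONCE(msk->backlog_len);
	if (mem <= READ_ONCE(sk->sk_rcvbuf))
		return false;		/* within budget: accept the skb */

	/* Over budget: drop, unless the skb is a pure ack, carries a
	 * FIN or is a zero window probe; schedule an immediate ack so
	 * the peer retries instead of stalling.
	 */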
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
v4 -> v5:
- fix possible u32 overflow in mptcp_over_limit
v3 -> v4:
- schedule TCP ack on drop
- enforce limits in __mptcp_move_skb() and __mptcp_add_backlog(), too
but only if not fallback.
v1 -> v2:
- deal correctly with tcp fin and zero win probe
RFC -> v1:
- limit vs actual buffer size
- use CB info instead of skb->len
Note that:
- this needs the follow-up patches to really fix the stall
- sashiko assumes that a ZWP carries unacked data and can be silently
  dropped. AFAIK that is false.
- sashiko apparently can't grasp that MPTCP subflows never hit the TCP
  rx fast path; once mptcp_incoming_options() in
  tcp_rcv_state_process() is hit, the peer can't transmit any more
  data.
- the memory comparison is intentionally very rough, as the msk socket
  lock is not currently held where the condition is now enforced. This
  will likely require some refinement; shared as-is to avoid more
  latency on my side
---
net/mptcp/options.c | 32 ++++++++++++++++++++++++++++++--
net/mptcp/protocol.c | 29 +++++++++++++++++++++--------
2 files changed, 51 insertions(+), 10 deletions(-)
diff --git a/net/mptcp/options.c b/net/mptcp/options.c
index 4cc583fdc7a9..36f12e5dfa92 100644
--- a/net/mptcp/options.c
+++ b/net/mptcp/options.c
@@ -1158,8 +1158,30 @@ static bool add_addr_hmac_valid(struct mptcp_sock *msk,
return hmac == mp_opt->ahmac;
}
-/* Return false in case of error (or subflow has been reset),
- * else return true.
+static bool mptcp_over_limit(struct sock *sk, struct sock *ssk,
+ const struct sk_buff *skb)
+{
+ struct mptcp_sock *msk = mptcp_sk(sk);
+ u64 mem = sk_rmem_alloc_get(sk);
+
+ mem += READ_ONCE(msk->backlog_len);
+ if (likely(mem <= READ_ONCE(sk->sk_rcvbuf)))
+ return false;
+
+ /* Avoid silently dropping pure acks, fin or zero win probes. */
+ if (TCP_SKB_CB(skb)->seq == TCP_SKB_CB(skb)->end_seq ||
+ TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN ||
+ !after(TCP_SKB_CB(skb)->end_seq, tcp_sk(ssk)->rcv_nxt))
+ return false;
+
+ /* Dropped due to memory constraints, schedule an ack. */
+ inet_csk(ssk)->icsk_ack.pending |= ICSK_ACK_NOMEM | ICSK_ACK_NOW;
+ inet_csk_schedule_ack(ssk);
+ return true;
+}
+
+/* Return false when the caller must drop the packet, i.e. in case of error,
+ * subflow has been reset, or over memory limits.
*/
bool mptcp_incoming_options(struct sock *sk, struct sk_buff *skb)
{
@@ -1185,6 +1207,9 @@ bool mptcp_incoming_options(struct sock *sk, struct sk_buff *skb)
__mptcp_data_acked(subflow->conn);
mptcp_data_unlock(subflow->conn);
+
+ if (mptcp_over_limit(subflow->conn, sk, skb))
+ return false;
return true;
}
@@ -1263,6 +1288,9 @@ bool mptcp_incoming_options(struct sock *sk, struct sk_buff *skb)
return true;
}
+ if (mptcp_over_limit(subflow->conn, sk, skb))
+ return false;
+
mpext = skb_ext_add(skb, SKB_EXT_MPTCP);
if (!mpext)
return false;
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index ce8372fb3c6a..492ee9ca9c77 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -381,6 +381,15 @@ static bool __mptcp_move_skb(struct sock *sk, struct sk_buff *skb)
mptcp_borrow_fwdmem(sk, skb);
+ /* Can't drop packets for fallback socket this late, or the stream
+ * will break.
+ */
+ if (unlikely(sk_rmem_alloc_get(sk) > READ_ONCE(sk->sk_rcvbuf)) &&
+ !__mptcp_check_fallback(msk)) {
+ mptcp_drop(sk, skb);
+ return false;
+ }
+
if (MPTCP_SKB_CB(skb)->map_seq == msk->ack_seq) {
/* in sequence */
msk->bytes_received += copy_len;
@@ -675,6 +684,7 @@ static void __mptcp_add_backlog(struct sock *sk,
struct sk_buff *tail = NULL;
struct sock *ssk = skb->sk;
bool fragstolen;
+ u64 limit;
int delta;
if (unlikely(sk->sk_state == TCP_CLOSE)) {
@@ -682,6 +692,15 @@ static void __mptcp_add_backlog(struct sock *sk,
return;
}
+ /* Similar additional allowance as plain TCP. */
+ limit = READ_ONCE(sk->sk_rcvbuf);
+ limit += (limit >> 1) + 64 * 1024;
+ limit = min_t(u64, limit, UINT_MAX);
+ if (msk->backlog_len > limit && !__mptcp_check_fallback(msk)) {
+ kfree_skb_reason(skb, SKB_DROP_REASON_SOCKET_RCVBUFF);
+ return;
+ }
+
/* Try to coalesce with the last skb in our backlog */
if (!list_empty(&msk->backlog_list))
tail = list_last_entry(&msk->backlog_list, struct sk_buff, list);
@@ -753,7 +772,7 @@ static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
mptcp_init_skb(ssk, skb, offset, len);
- if (own_msk && sk_rmem_alloc_get(sk) < sk->sk_rcvbuf) {
+ if (own_msk) {
mptcp_subflow_lend_fwdmem(subflow, skb);
ret |= __mptcp_move_skb(sk, skb);
} else {
@@ -2211,10 +2230,6 @@ static bool __mptcp_move_skbs(struct sock *sk, struct list_head *skbs, u32 *delt
*delta = 0;
while (1) {
- /* If the msk recvbuf is full stop, don't drop */
- if (sk_rmem_alloc_get(sk) > sk->sk_rcvbuf)
- break;
-
prefetch(skb->next);
list_del(&skb->list);
*delta += skb->truesize;
@@ -2242,9 +2257,7 @@ static bool mptcp_can_spool_backlog(struct sock *sk, struct list_head *skbs)
DEBUG_NET_WARN_ON_ONCE(msk->backlog_unaccounted && sk->sk_socket &&
mem_cgroup_from_sk(sk));
- /* Don't spool the backlog if the rcvbuf is full. */
- if (list_empty(&msk->backlog_list) ||
- sk_rmem_alloc_get(sk) > sk->sk_rcvbuf)
+ if (list_empty(&msk->backlog_list))
return false;
INIT_LIST_HEAD(skbs);
--
2.54.0
* [PATCH mptcp-next v5 03/12] mptcp: enforce hard limit on backlog flushing
2026-05-10 21:03 [PATCH mptcp-next v5 00/12] mptcp: address stall under memory pressure Paolo Abeni
2026-05-10 21:03 ` [PATCH mptcp-next v5 01/12] mptcp: do not drop partial packets Paolo Abeni
2026-05-10 21:03 ` [PATCH mptcp-next v5 02/12] mptcp: explicitly drop over memory limits Paolo Abeni
@ 2026-05-10 21:03 ` Paolo Abeni
2026-05-10 21:03 ` [PATCH mptcp-next v5 04/12] mptcp: drop the mptcp_ooo_try_coalesce() helper Paolo Abeni
` (9 subsequent siblings)
12 siblings, 0 replies; 17+ messages in thread
From: Paolo Abeni @ 2026-05-10 21:03 UTC (permalink / raw)
To: mptcp; +Cc: Shardul Bankar
Currently a wild producer could keep the backlog flushing operation
spinning for an unbounded time.
Since the previous patch the amount of data present in the backlog is
hard-limited. Move the backlog length update to the end of the flush
loop to prevent the loop from spinning forever: a concurrent producer
keeps observing a full backlog until the flush completes, so the hard
limit bounds the amount of data each flush can process.
Also, there is no need to splice the remaining skb list back into the
backlog, as such a list is always empty after each backlog processing
loop.
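Schematically, the flush loop becomes (a sketch of the resulting code,
see the diff below):

	u32 moved = 0;

	mptcp_data_lock(sk);
	while (mptcp_can_spool_backlog(sk, &skbs)) {
		mptcp_data_unlock(sk);
		enqueued |= __mptcp_move_skbs(sk, &skbs, &moved);
		mptcp_data_lock(sk);
	}
	/* Single deferred update: a concurrent producer keeps observing
	 * a full backlog while the flush runs, so the hard limit from
	 * the previous patch bounds the work done by a single flush.
	 */
	WRITE_ONCE(msk->backlog_len, msk->backlog_len - moved);
	mptcp_data_unlock(sk);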
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
net/mptcp/protocol.c | 21 ++++++---------------
1 file changed, 6 insertions(+), 15 deletions(-)
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 492ee9ca9c77..b329d1dc4161 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2228,7 +2228,6 @@ static bool __mptcp_move_skbs(struct sock *sk, struct list_head *skbs, u32 *delt
struct mptcp_sock *msk = mptcp_sk(sk);
bool moved = false;
- *delta = 0;
while (1) {
prefetch(skb->next);
list_del(&skb->list);
@@ -2265,20 +2264,12 @@ static bool mptcp_can_spool_backlog(struct sock *sk, struct list_head *skbs)
return true;
}
-static void mptcp_backlog_spooled(struct sock *sk, u32 moved,
- struct list_head *skbs)
-{
- struct mptcp_sock *msk = mptcp_sk(sk);
-
- WRITE_ONCE(msk->backlog_len, msk->backlog_len - moved);
- list_splice(skbs, &msk->backlog_list);
-}
-
static bool mptcp_move_skbs(struct sock *sk)
{
+ struct mptcp_sock *msk = mptcp_sk(sk);
struct list_head skbs;
bool enqueued = false;
- u32 moved;
+ u32 moved = 0;
mptcp_data_lock(sk);
while (mptcp_can_spool_backlog(sk, &skbs)) {
@@ -2286,8 +2277,8 @@ static bool mptcp_move_skbs(struct sock *sk)
enqueued |= __mptcp_move_skbs(sk, &skbs, &moved);
mptcp_data_lock(sk);
- mptcp_backlog_spooled(sk, moved, &skbs);
}
+ WRITE_ONCE(msk->backlog_len, msk->backlog_len - moved);
mptcp_data_unlock(sk);
return enqueued;
}
@@ -3670,12 +3661,12 @@ static void mptcp_release_cb(struct sock *sk)
__must_hold(&sk->sk_lock.slock)
{
struct mptcp_sock *msk = mptcp_sk(sk);
+ u32 moved = 0;
for (;;) {
unsigned long flags = (msk->cb_flags & MPTCP_FLAGS_PROCESS_CTX_NEED);
struct list_head join_list, skbs;
bool spool_bl;
- u32 moved;
spool_bl = mptcp_can_spool_backlog(sk, &skbs);
if (!flags && !spool_bl)
@@ -3708,9 +3699,9 @@ static void mptcp_release_cb(struct sock *sk)
cond_resched();
spin_lock_bh(&sk->sk_lock.slock);
- if (spool_bl)
- mptcp_backlog_spooled(sk, moved, &skbs);
}
+ if (moved)
+ WRITE_ONCE(msk->backlog_len, msk->backlog_len - moved);
if (__test_and_clear_bit(MPTCP_CLEAN_UNA, &msk->cb_flags))
__mptcp_clean_una_wakeup(sk);
--
2.54.0
* [PATCH mptcp-next v5 04/12] mptcp: drop the mptcp_ooo_try_coalesce() helper
2026-05-10 21:03 [PATCH mptcp-next v5 00/12] mptcp: address stall under memory pressure Paolo Abeni
` (2 preceding siblings ...)
2026-05-10 21:03 ` [PATCH mptcp-next v5 03/12] mptcp: enforce hard limit on backlog flushing Paolo Abeni
@ 2026-05-10 21:03 ` Paolo Abeni
2026-05-10 21:03 ` [PATCH mptcp-next v5 05/12] mptcp: drop the cant_coalesce CB field Paolo Abeni
` (8 subsequent siblings)
12 siblings, 0 replies; 17+ messages in thread
From: Paolo Abeni @ 2026-05-10 21:03 UTC (permalink / raw)
To: mptcp; +Cc: Shardul Bankar
The helper saves an additional comparison for in-order skbs, but it is
also a barrier to removing the CB offset field. Remove the helper, let
__mptcp_try_coalesce() always perform the sequence check, and drop the
now-duplicate checks from the callers.
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
net/mptcp/protocol.c | 21 ++++++---------------
1 file changed, 6 insertions(+), 15 deletions(-)
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index b329d1dc4161..479e653ddc0d 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -159,7 +159,8 @@ static bool __mptcp_try_coalesce(struct sock *sk, struct sk_buff *to,
{
int limit = READ_ONCE(sk->sk_rcvbuf);
- if (unlikely(MPTCP_SKB_CB(to)->cant_coalesce) ||
+ if (MPTCP_SKB_CB(from)->map_seq != MPTCP_SKB_CB(to)->end_seq ||
+ unlikely(MPTCP_SKB_CB(to)->cant_coalesce) ||
MPTCP_SKB_CB(from)->offset ||
((to->len + from->len) > (limit >> 3)) ||
!skb_try_coalesce(to, from, fragstolen, delta))
@@ -192,15 +193,6 @@ static bool mptcp_try_coalesce(struct sock *sk, struct sk_buff *to,
return true;
}
-static bool mptcp_ooo_try_coalesce(struct mptcp_sock *msk, struct sk_buff *to,
- struct sk_buff *from)
-{
- if (MPTCP_SKB_CB(from)->map_seq != MPTCP_SKB_CB(to)->end_seq)
- return false;
-
- return mptcp_try_coalesce((struct sock *)msk, to, from);
-}
-
/* "inspired" by tcp_rcvbuf_grow(), main difference:
* - mptcp does not maintain a msk-level window clamp
* - returns true when the receive buffer is actually updated
@@ -275,7 +267,7 @@ static void mptcp_data_queue_ofo(struct mptcp_sock *msk, struct sk_buff *skb)
/* with 2 subflows, adding at end of ooo queue is quite likely
* Use of ooo_last_skb avoids the O(Log(N)) rbtree lookup.
*/
- if (mptcp_ooo_try_coalesce(msk, msk->ooo_last_skb, skb)) {
+ if (mptcp_try_coalesce(sk, msk->ooo_last_skb, skb)) {
MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_OFOMERGE);
MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_OFOQUEUETAIL);
return;
@@ -321,7 +313,7 @@ static void mptcp_data_queue_ofo(struct mptcp_sock *msk, struct sk_buff *skb)
MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_DUPDATA);
goto merge_right;
}
- } else if (mptcp_ooo_try_coalesce(msk, skb1, skb)) {
+ } else if (mptcp_try_coalesce(sk, skb1, skb)) {
MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_OFOMERGE);
return;
}
@@ -705,8 +697,7 @@ static void __mptcp_add_backlog(struct sock *sk,
if (!list_empty(&msk->backlog_list))
tail = list_last_entry(&msk->backlog_list, struct sk_buff, list);
- if (tail && MPTCP_SKB_CB(skb)->map_seq == MPTCP_SKB_CB(tail)->end_seq &&
- ssk == tail->sk &&
+ if (tail && ssk == tail->sk &&
__mptcp_try_coalesce(sk, tail, skb, &fragstolen, &delta)) {
skb->truesize -= delta;
kfree_skb_partial(skb, fragstolen);
@@ -830,7 +821,7 @@ static bool __mptcp_ofo_queue(struct mptcp_sock *msk)
end_seq = MPTCP_SKB_CB(skb)->end_seq;
tail = skb_peek_tail(&sk->sk_receive_queue);
- if (!tail || !mptcp_ooo_try_coalesce(msk, tail, skb)) {
+ if (!tail || !mptcp_try_coalesce(sk, tail, skb)) {
int delta = msk->ack_seq - MPTCP_SKB_CB(skb)->map_seq;
/* skip overlapping data, if any */
--
2.54.0
* [PATCH mptcp-next v5 05/12] mptcp: drop the cant_coalesce CB field
2026-05-10 21:03 [PATCH mptcp-next v5 00/12] mptcp: address stall under memory pressure Paolo Abeni
` (3 preceding siblings ...)
2026-05-10 21:03 ` [PATCH mptcp-next v5 04/12] mptcp: drop the mptcp_ooo_try_coalesce() helper Paolo Abeni
@ 2026-05-10 21:03 ` Paolo Abeni
2026-05-10 21:03 ` [PATCH mptcp-next v5 06/12] mptcp: remove CB offset field Paolo Abeni
` (7 subsequent siblings)
12 siblings, 0 replies; 17+ messages in thread
From: Paolo Abeni @ 2026-05-10 21:03 UTC (permalink / raw)
To: mptcp; +Cc: Shardul Bankar
Such a field is used to ensure in-sequence processing in the fastopen
case. Instead, let's synchronize the fastopen skb sequence numbers
when the IASN becomes available with the 3rd ack.
When the `cant_coalesce` field was introduced, commit f03afb3aeb9d
("mptcp: drop __mptcp_fastopen_gen_msk_ackseq()") noted that updating
the already queued skb for a passive fastopen socket at 3rd-ack time
would be difficult and race prone. The main point is that such an
update doesn't need to be performed synchronously at 3rd-ack time; it
is sufficient to perform it before the next segment is inserted into
the msk.
To such an extent, add an explicit test in __mptcp_move_skb().
Performance wise this trades a conditional in the fast path - in
__mptcp_try_coalesce() - for a similar one in __mptcp_move_skb(), plus
a couple more in slow paths.
After this change, user-space will always observe consistent sequence
numbers in the receive queue, even in the TFO dummy mapping case.
There is still a potential race in mptcp_inq_hint() that will be addressed
by a later patch in the series.
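A hypothetical worked example of the deferred resync:

	u64 iasn = 0x1000;	/* IASN, learned with the remote key */
	u32 len = 100;		/* TFO payload length */

	/* The TFO skb is initially queued with the dummy mapping
	 * [-len, 0). Once the 3rd ack sets ack_seq to the IASN,
	 * __mptcp_sync_rcv_sequence() rewrites the mapping to:
	 */
	u64 map_seq = iasn - len;	/* 0xf9c */
	u64 end_seq = iasn;		/* 0x1000 */

The rewrite is guaranteed to happen before any other skb can land in
the msk receive queue.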
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
net/mptcp/fastopen.c | 2 +-
net/mptcp/protocol.c | 28 ++++++++++++++++++++++++++--
net/mptcp/protocol.h | 4 +++-
net/mptcp/subflow.c | 7 +++++++
4 files changed, 37 insertions(+), 4 deletions(-)
diff --git a/net/mptcp/fastopen.c b/net/mptcp/fastopen.c
index 082c46c0f50e..c7d5bee8088e 100644
--- a/net/mptcp/fastopen.c
+++ b/net/mptcp/fastopen.c
@@ -48,11 +48,11 @@ void mptcp_fastopen_subflow_synack_set_params(struct mptcp_subflow_context *subf
MPTCP_SKB_CB(skb)->end_seq = 0;
MPTCP_SKB_CB(skb)->offset = 0;
MPTCP_SKB_CB(skb)->has_rxtstamp = has_rxtstamp;
- MPTCP_SKB_CB(skb)->cant_coalesce = 1;
mptcp_data_lock(sk);
DEBUG_NET_WARN_ON_ONCE(sock_owned_by_user_nocheck(sk));
+ mptcp_sk(sk)->rcvd_dummy_seq = true;
mptcp_borrow_fwdmem(sk, skb);
skb_set_owner_r(skb, sk);
__skb_queue_tail(&sk->sk_receive_queue, skb);
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 479e653ddc0d..6909586a3090 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -160,7 +160,6 @@ static bool __mptcp_try_coalesce(struct sock *sk, struct sk_buff *to,
int limit = READ_ONCE(sk->sk_rcvbuf);
if (MPTCP_SKB_CB(from)->map_seq != MPTCP_SKB_CB(to)->end_seq ||
- unlikely(MPTCP_SKB_CB(to)->cant_coalesce) ||
MPTCP_SKB_CB(from)->offset ||
((to->len + from->len) > (limit >> 3)) ||
!skb_try_coalesce(to, from, fragstolen, delta))
@@ -357,7 +356,6 @@ static void mptcp_init_skb(struct sock *ssk, struct sk_buff *skb, int offset,
MPTCP_SKB_CB(skb)->end_seq = MPTCP_SKB_CB(skb)->map_seq + copy_len;
MPTCP_SKB_CB(skb)->offset = offset;
MPTCP_SKB_CB(skb)->has_rxtstamp = has_rxtstamp;
- MPTCP_SKB_CB(skb)->cant_coalesce = 0;
__skb_unlink(skb, &ssk->sk_receive_queue);
@@ -365,6 +363,24 @@ static void mptcp_init_skb(struct sock *ssk, struct sk_buff *skb, int offset,
skb_dst_drop(skb);
}
+void __mptcp_sync_rcv_sequence(struct sock *sk)
+{
+ struct mptcp_sock *msk = mptcp_sk(sk);
+ struct sk_buff *skb;
+
+ if (likely(!msk->rcvd_dummy_seq))
+ return;
+
+ /* User space can have already received the TFO skb. */
+ msk->rcvd_dummy_seq = false;
+ skb = skb_peek_tail(&sk->sk_receive_queue);
+ if (!skb)
+ return;
+
+ MPTCP_SKB_CB(skb)->map_seq = msk->ack_seq - skb->len;
+ MPTCP_SKB_CB(skb)->end_seq = msk->ack_seq;
+}
+
static bool __mptcp_move_skb(struct sock *sk, struct sk_buff *skb)
{
u64 copy_len = MPTCP_SKB_CB(skb)->end_seq - MPTCP_SKB_CB(skb)->map_seq;
@@ -373,6 +389,12 @@ static bool __mptcp_move_skb(struct sock *sk, struct sk_buff *skb)
mptcp_borrow_fwdmem(sk, skb);
+ /* Be sure to sync the eventual fastopen dummy mapping before any other
+ * skb lands into the msk.
+ */
+ if (unlikely(msk->rcvd_dummy_seq))
+ __mptcp_sync_rcv_sequence(sk);
+
/* Can't drop packets for fallback socket this late, or the stream
* will break.
*/
@@ -3707,6 +3729,8 @@ static void mptcp_release_cb(struct sock *sk)
__mptcp_error_report(sk);
if (__test_and_clear_bit(MPTCP_SYNC_SNDBUF, &msk->cb_flags))
__mptcp_sync_sndbuf(sk);
+ if (__test_and_clear_bit(MPTCP_SYNC_SEQ, &msk->cb_flags))
+ __mptcp_sync_rcv_sequence(sk);
}
}
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index 661600f8b573..16a1f4531dad 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -124,13 +124,13 @@
#define MPTCP_FLUSH_JOIN_LIST 5
#define MPTCP_SYNC_STATE 6
#define MPTCP_SYNC_SNDBUF 7
+#define MPTCP_SYNC_SEQ 8
struct mptcp_skb_cb {
u64 map_seq;
u64 end_seq;
u32 offset;
u8 has_rxtstamp;
- u8 cant_coalesce;
};
#define MPTCP_SKB_CB(__skb) ((struct mptcp_skb_cb *)&((__skb)->cb[0]))
@@ -310,6 +310,7 @@ struct mptcp_sock {
u32 token;
unsigned long flags;
unsigned long cb_flags;
+ bool rcvd_dummy_seq;
bool recovery; /* closing subflow write queue reinjected */
bool can_ack;
bool fully_established;
@@ -1172,6 +1173,7 @@ void mptcp_event_pm_listener(const struct sock *ssk,
enum mptcp_event_type event);
bool mptcp_userspace_pm_active(const struct mptcp_sock *msk);
+void __mptcp_sync_rcv_sequence(struct sock *sk);
void mptcp_fastopen_subflow_synack_set_params(struct mptcp_subflow_context *subflow,
struct request_sock *req);
int mptcp_pm_genl_fill_addr(struct sk_buff *msg,
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index d562e149606f..5f371bf773f8 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -478,6 +478,8 @@ static void subflow_set_remote_key(struct mptcp_sock *msk,
struct mptcp_subflow_context *subflow,
const struct mptcp_options_received *mp_opt)
{
+ struct sock *sk = (struct sock *)msk;
+
/* active MPC subflow will reach here multiple times:
* at subflow_finish_connect() time and at 4th ack time
*/
@@ -496,6 +498,11 @@ static void subflow_set_remote_key(struct mptcp_sock *msk,
WRITE_ONCE(msk->ack_seq, subflow->iasn);
WRITE_ONCE(msk->can_ack, true);
atomic64_set(&msk->rcv_wnd_sent, subflow->iasn);
+
+ if (!sock_owned_by_user(sk))
+ __mptcp_sync_rcv_sequence(sk);
+ else
+ __set_bit(MPTCP_SYNC_SEQ, &msk->cb_flags);
}
static void mptcp_propagate_state(struct sock *sk, struct sock *ssk,
--
2.54.0
* [PATCH mptcp-next v5 06/12] mptcp: remove CB offset field
2026-05-10 21:03 [PATCH mptcp-next v5 00/12] mptcp: address stall under memory pressure Paolo Abeni
` (4 preceding siblings ...)
2026-05-10 21:03 ` [PATCH mptcp-next v5 05/12] mptcp: drop the cant_coalesce CB field Paolo Abeni
@ 2026-05-10 21:03 ` Paolo Abeni
2026-05-10 21:03 ` [PATCH mptcp-next v5 07/12] mptcp: sync mptcp skb cb layout with tcp one Paolo Abeni
` (6 subsequent siblings)
12 siblings, 0 replies; 17+ messages in thread
From: Paolo Abeni @ 2026-05-10 21:03 UTC (permalink / raw)
To: mptcp; +Cc: Shardul Bankar
Instead, use a new msk-level field to track the bytes already consumed
inside each skb, i.e. the amount of bytes already copied to
user-space, like TCP already does.
The newly introduced `copied_seq` field is always accessed under the
msk socket lock, delegating the synchronization with the IASN to the
msk release CB when the socket is owned by user-space at remote key
reception time. Such synchronization preserves any partial progress
(copy) made on the TFO packet.
Note that the explicit synchronization in __mptcp_move_skb() is needed
to ensure that the TFO skb in the receive queue gets its map_seq
synced before the next skb lands in the receive queue when spooling
the backlog at mptcp_release_cb() time, as the release CB
synchronization will happen later.
Prior to this patch the TFO skb dummy mapping was always ignored; now
it affects the initial `copied_seq` update: be sure to correctly
sign-extend such a mapping at initialization time.
Overall this simplifies a bit __mptcp_recvmsg_mskq(), mptcp_inq_hint()
and __mptcp_move_skb(), and will also make the next patch possible.
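The resulting read-side bookkeeping, as a sketch (names from the diff
below): the per-skb read offset is no longer stored in the CB but
derived from the msk-level sequence, mirroring what TCP does with
tp->copied_seq:

	u64 offset = *seq - MPTCP_SKB_CB(skb)->map_seq;

	if (offset >= skb->len)
		continue;	/* skb fully consumed or already peeked */

	count = min_t(size_t, len - copied, skb->len - offset);
	/* ... copy `count` bytes to user-space ... */
	*seq += count;		/* msk->copied_seq, or peek_seq for MSG_PEEK */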
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
v4 -> v5:
- fix transient build issue, restoring a __mptcp_move_skb() chunk that
leaked in the next patch.
v3 -> v4:
- fix peek seq race
v2 -> v3:
- do not use msk->first in release_cb to deal with MPTCP_SYNC_SEQ:
  subflow->iasn access is (data) racy and msk->first can be NULL;
  instead recompute the IASN from the msk bytes_received and the TFO
  skb len
- when updating copied_seq after remote key reception, add the IASN to
  it instead of overwriting it, to avoid discarding any partial
  progress.
v1 -> v2:
- deal correctly with peek, as usual "inspired" by the corresponding
  TCP code
- update mptcp_inq_hint(), too
Notes:
- this explicitly relies on "mptcp: do not drop partial packets" to
  avoid dropping partially consumed packets
- sashiko may confuse the 'offset' in mptcp_init_skb() for an
  MPTCP-level one, but it refers to the TCP sequence space.
  Conclusions drawn from that assumption are wrong.
- the data race in mptcp_inq_hint() is real, but pre-existing, and can
  impact only the sockopt() output - the other call sites are race
  free, as ack_seq updates are serialized by the RX path.
  Fixing the race for good without sashiko tripping on other similar
  minor races would require another largish series. Postponed.
- sashiko may see a race with `copied_seq` in mptcp_recv_skb(); that
  is not real: subflow_set_remote_key()/__mptcp_sync_rcv_sequence()
  has seen the msk owned; if mptcp_data_ready() has again seen the msk
  owned, the only skb in the receive queue can be the (unsynced) TFO
  one, with the dummy sequence. If mptcp_data_ready() observed the msk
  not owned and queued more skbs, the release_cb() has run and synced
  `copied_seq` and the TFO skb map_seq.
---
net/mptcp/fastopen.c | 15 +++--
net/mptcp/protocol.c | 129 ++++++++++++++++++-------------------------
net/mptcp/protocol.h | 8 ++-
net/mptcp/subflow.c | 7 ++-
4 files changed, 77 insertions(+), 82 deletions(-)
diff --git a/net/mptcp/fastopen.c b/net/mptcp/fastopen.c
index c7d5bee8088e..03e605b050f8 100644
--- a/net/mptcp/fastopen.c
+++ b/net/mptcp/fastopen.c
@@ -9,6 +9,7 @@
void mptcp_fastopen_subflow_synack_set_params(struct mptcp_subflow_context *subflow,
struct request_sock *req)
{
+ struct mptcp_sock *msk;
struct sock *sk, *ssk;
struct sk_buff *skb;
struct tcp_sock *tp;
@@ -43,20 +44,24 @@ void mptcp_fastopen_subflow_synack_set_params(struct mptcp_subflow_context *subf
subflow->ssn_offset += skb->len;
has_rxtstamp = TCP_SKB_CB(skb)->has_rxtstamp;
- /* Only the sequence delta is relevant */
- MPTCP_SKB_CB(skb)->map_seq = -skb->len;
+ /* The TFO segment data sits before the IASN; before receiving
+ * the remote key, IASN is assumed being 0.
+ */
+ MPTCP_SKB_CB(skb)->map_seq = -(u64)skb->len;
MPTCP_SKB_CB(skb)->end_seq = 0;
- MPTCP_SKB_CB(skb)->offset = 0;
MPTCP_SKB_CB(skb)->has_rxtstamp = has_rxtstamp;
mptcp_data_lock(sk);
DEBUG_NET_WARN_ON_ONCE(sock_owned_by_user_nocheck(sk));
- mptcp_sk(sk)->rcvd_dummy_seq = true;
+ msk = mptcp_sk(sk);
+ msk->rcvd_dummy_seq = true;
+ msk->copied_seq = MPTCP_SKB_CB(skb)->map_seq;
+ msk->tfo_skb_len = skb->len;
mptcp_borrow_fwdmem(sk, skb);
skb_set_owner_r(skb, sk);
__skb_queue_tail(&sk->sk_receive_queue, skb);
- mptcp_sk(sk)->bytes_received += skb->len;
+ msk->bytes_received += skb->len;
sk->sk_data_ready(sk);
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 6909586a3090..47df6d4a26a7 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -28,7 +28,7 @@
#include "protocol.h"
#include "mib.h"
-static unsigned int mptcp_inq_hint(const struct sock *sk);
+static unsigned int mptcp_inq_hint(struct sock *sk);
#define CREATE_TRACE_POINTS
#include <trace/events/mptcp.h>
@@ -160,7 +160,6 @@ static bool __mptcp_try_coalesce(struct sock *sk, struct sk_buff *to,
int limit = READ_ONCE(sk->sk_rcvbuf);
if (MPTCP_SKB_CB(from)->map_seq != MPTCP_SKB_CB(to)->end_seq ||
- MPTCP_SKB_CB(from)->offset ||
((to->len + from->len) > (limit >> 3)) ||
!skb_try_coalesce(to, from, fragstolen, delta))
return false;
@@ -342,8 +341,7 @@ static void mptcp_data_queue_ofo(struct mptcp_sock *msk, struct sk_buff *skb)
skb_set_owner_r(skb, sk);
}
-static void mptcp_init_skb(struct sock *ssk, struct sk_buff *skb, int offset,
- int copy_len)
+static void mptcp_init_skb(struct sock *ssk, struct sk_buff *skb, int offset)
{
struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
bool has_rxtstamp = TCP_SKB_CB(skb)->has_rxtstamp;
@@ -352,9 +350,9 @@ static void mptcp_init_skb(struct sock *ssk, struct sk_buff *skb, int offset,
* mptcp_subflow_get_mapped_dsn() is based on the current tp->copied_seq
* value
*/
- MPTCP_SKB_CB(skb)->map_seq = mptcp_subflow_get_mapped_dsn(subflow);
- MPTCP_SKB_CB(skb)->end_seq = MPTCP_SKB_CB(skb)->map_seq + copy_len;
- MPTCP_SKB_CB(skb)->offset = offset;
+ MPTCP_SKB_CB(skb)->map_seq = mptcp_subflow_get_mapped_dsn(subflow) -
+ offset;
+ MPTCP_SKB_CB(skb)->end_seq = MPTCP_SKB_CB(skb)->map_seq + skb->len;
MPTCP_SKB_CB(skb)->has_rxtstamp = has_rxtstamp;
__skb_unlink(skb, &ssk->sk_receive_queue);
@@ -377,8 +375,8 @@ void __mptcp_sync_rcv_sequence(struct sock *sk)
if (!skb)
return;
- MPTCP_SKB_CB(skb)->map_seq = msk->ack_seq - skb->len;
- MPTCP_SKB_CB(skb)->end_seq = msk->ack_seq;
+ MPTCP_SKB_CB(skb)->map_seq = mptcp_iasn(msk) - skb->len;
+ MPTCP_SKB_CB(skb)->end_seq = MPTCP_SKB_CB(skb)->map_seq + skb->len;
}
static bool __mptcp_move_skb(struct sock *sk, struct sk_buff *skb)
@@ -405,6 +403,7 @@ static bool __mptcp_move_skb(struct sock *sk, struct sk_buff *skb)
}
if (MPTCP_SKB_CB(skb)->map_seq == msk->ack_seq) {
+add_queue:
/* in sequence */
msk->bytes_received += copy_len;
WRITE_ONCE(msk->ack_seq, msk->ack_seq + copy_len);
@@ -418,28 +417,16 @@ static bool __mptcp_move_skb(struct sock *sk, struct sk_buff *skb)
} else if (after64(MPTCP_SKB_CB(skb)->map_seq, msk->ack_seq)) {
mptcp_data_queue_ofo(msk, skb);
return false;
+ } else if (after64(MPTCP_SKB_CB(skb)->end_seq, msk->ack_seq)) {
+ /* Partial packet: map_seq < ack_seq < end_seq.*/
+ copy_len -= msk->ack_seq - MPTCP_SKB_CB(skb)->map_seq;
+ goto add_queue;
}
- /* Completely old data? */
- if (!after64(MPTCP_SKB_CB(skb)->end_seq, msk->ack_seq)) {
- MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_DUPDATA);
- mptcp_drop(sk, skb);
- return false;
- }
-
- /* Partial packet: map_seq < ack_seq < end_seq.
- * Skip the already-acked bytes and enqueue the new data.
- */
- copy_len = MPTCP_SKB_CB(skb)->end_seq - msk->ack_seq;
- MPTCP_SKB_CB(skb)->offset += msk->ack_seq - MPTCP_SKB_CB(skb)->map_seq;
- MPTCP_SKB_CB(skb)->map_seq += msk->ack_seq -
- MPTCP_SKB_CB(skb)->map_seq;
- msk->bytes_received += copy_len;
- WRITE_ONCE(msk->ack_seq, msk->ack_seq + copy_len);
-
- skb_set_owner_r(skb, sk);
- __skb_queue_tail(&sk->sk_receive_queue, skb);
- return true;
+ /* Completely old data. */
+ MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_DUPDATA);
+ mptcp_drop(sk, skb);
+ return false;
}
static void mptcp_stop_rtx_timer(struct sock *sk)
@@ -783,7 +770,7 @@ static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
if (offset < skb->len) {
size_t len = skb->len - offset;
- mptcp_init_skb(ssk, skb, offset, len);
+ mptcp_init_skb(ssk, skb, offset);
if (own_msk) {
mptcp_subflow_lend_fwdmem(subflow, skb);
@@ -850,8 +837,6 @@ static bool __mptcp_ofo_queue(struct mptcp_sock *msk)
pr_debug("uncoalesced seq=%llx ack seq=%llx delta=%d\n",
MPTCP_SKB_CB(skb)->map_seq, msk->ack_seq,
delta);
- MPTCP_SKB_CB(skb)->offset += delta;
- MPTCP_SKB_CB(skb)->map_seq += delta;
__skb_queue_tail(&sk->sk_receive_queue, skb);
}
msk->bytes_received += end_seq - msk->ack_seq;
@@ -2095,34 +2080,22 @@ static void mptcp_eat_recv_skb(struct sock *sk, struct sk_buff *skb)
}
static int __mptcp_recvmsg_mskq(struct sock *sk, struct msghdr *msg,
- size_t len, int flags, int copied_total,
+ size_t len, int flags, u64 *seq,
struct scm_timestamping_internal *tss,
int *cmsg_flags, struct sk_buff **last)
{
struct mptcp_sock *msk = mptcp_sk(sk);
struct sk_buff *skb, *tmp;
- int total_data_len = 0;
int copied = 0;
skb_queue_walk_safe(&sk->sk_receive_queue, skb, tmp) {
- u32 delta, offset = MPTCP_SKB_CB(skb)->offset;
- u32 data_len = skb->len - offset;
- u32 count;
+ u64 offset = *seq - MPTCP_SKB_CB(skb)->map_seq;
+ u32 count, data_len = skb->len - offset;
int err;
- if (flags & MSG_PEEK) {
- /* skip already peeked skbs */
- if (total_data_len + data_len <= copied_total) {
- total_data_len += data_len;
- *last = skb;
- continue;
- }
-
- /* skip the already peeked data in the current skb */
- delta = copied_total - total_data_len;
- offset += delta;
- data_len -= delta;
- }
+ /* Skip the already peeked data. */
+ if (offset >= skb->len)
+ continue;
count = min_t(size_t, len - copied, data_len);
if (!(flags & MSG_TRUNC)) {
@@ -2140,14 +2113,12 @@ static int __mptcp_recvmsg_mskq(struct sock *sk, struct msghdr *msg,
}
copied += count;
+ *seq += count;
if (!(flags & MSG_PEEK)) {
msk->bytes_consumed += count;
- if (count < data_len) {
- MPTCP_SKB_CB(skb)->offset += count;
- MPTCP_SKB_CB(skb)->map_seq += count;
+ if (count < data_len)
break;
- }
mptcp_eat_recv_skb(sk, skb);
} else {
@@ -2296,25 +2267,23 @@ static bool mptcp_move_skbs(struct sock *sk)
return enqueued;
}
-static unsigned int mptcp_inq_hint(const struct sock *sk)
+static unsigned int mptcp_inq_hint(struct sock *sk)
{
const struct mptcp_sock *msk = mptcp_sk(sk);
- const struct sk_buff *skb;
-
- skb = skb_peek(&sk->sk_receive_queue);
- if (skb) {
- u64 hint_val = READ_ONCE(msk->ack_seq) - MPTCP_SKB_CB(skb)->map_seq;
+ u64 hint_val;
- if (hint_val >= INT_MAX)
- return INT_MAX;
-
- return (unsigned int)hint_val;
- }
+ /* Avoid races vs ack_seq updates. */
+ mptcp_data_lock(sk);
+ hint_val = msk->ack_seq - msk->copied_seq;
+ mptcp_data_unlock(sk);
+ if (hint_val >= INT_MAX)
+ return INT_MAX;
- if (sk->sk_state == TCP_CLOSE || (sk->sk_shutdown & RCV_SHUTDOWN))
+ if (!hint_val &&
+ (sk->sk_state == TCP_CLOSE || (sk->sk_shutdown & RCV_SHUTDOWN)))
return 1;
- return 0;
+ return (unsigned int)hint_val;
}
static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
@@ -2323,6 +2292,7 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
struct mptcp_sock *msk = mptcp_sk(sk);
struct scm_timestamping_internal tss;
int copied = 0, cmsg_flags = 0;
+ u64 peek_seq, *seq;
int target;
long timeo;
@@ -2342,6 +2312,11 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
len = min_t(size_t, len, INT_MAX);
target = sock_rcvlowat(sk, flags & MSG_WAITALL, len);
+ seq = &msk->copied_seq;
+ if (flags & MSG_PEEK) {
+ peek_seq = msk->copied_seq;
+ seq = &peek_seq;
+ }
if (unlikely(msk->recvmsg_inq))
cmsg_flags = MPTCP_CMSG_INQ;
@@ -2351,7 +2326,7 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
int err, bytes_read;
bytes_read = __mptcp_recvmsg_mskq(sk, msg, len - copied, flags,
- copied, &tss, &cmsg_flags,
+ seq, &tss, &cmsg_flags,
&last);
if (unlikely(bytes_read < 0)) {
if (!copied)
@@ -2406,6 +2381,10 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
err = copied ? : err;
goto out_err;
}
+
+ /* Recompute peek offset after eventual seq resync. */
+ if (flags & MSG_PEEK)
+ peek_seq = msk->copied_seq + copied;
}
mptcp_cleanup_rbuf(msk, copied);
@@ -3500,11 +3479,13 @@ static int mptcp_disconnect(struct sock *sk, int flags)
msk->bytes_retrans = 0;
msk->rcvspace_init = 0;
msk->fastclosing = 0;
+ msk->tfo_skb_len = 0;
mptcp_init_rtt_est(msk);
/* for fallback's sake */
WRITE_ONCE(msk->ack_seq, 0);
atomic64_set(&msk->rcv_wnd_sent, 0);
+ msk->copied_seq = 0;
WRITE_ONCE(sk->sk_shutdown, 0);
sk_error_report(sk);
@@ -3729,8 +3710,10 @@ static void mptcp_release_cb(struct sock *sk)
__mptcp_error_report(sk);
if (__test_and_clear_bit(MPTCP_SYNC_SNDBUF, &msk->cb_flags))
__mptcp_sync_sndbuf(sk);
- if (__test_and_clear_bit(MPTCP_SYNC_SEQ, &msk->cb_flags))
+ if (__test_and_clear_bit(MPTCP_SYNC_SEQ, &msk->cb_flags)) {
+ msk->copied_seq += mptcp_iasn(msk);
__mptcp_sync_rcv_sequence(sk);
+ }
}
}
@@ -4390,7 +4373,7 @@ static struct sk_buff *mptcp_recv_skb(struct sock *sk, u32 *off)
mptcp_move_skbs(sk);
while ((skb = skb_peek(&sk->sk_receive_queue)) != NULL) {
- offset = MPTCP_SKB_CB(skb)->offset;
+ offset = msk->copied_seq - MPTCP_SKB_CB(skb)->map_seq;
if (offset < skb->len) {
*off = offset;
return skb;
@@ -4432,11 +4415,9 @@ static int __mptcp_read_sock(struct sock *sk, read_descriptor_t *desc,
copied += count;
msk->bytes_consumed += count;
- if (count < data_len) {
- MPTCP_SKB_CB(skb)->offset += count;
- MPTCP_SKB_CB(skb)->map_seq += count;
+ msk->copied_seq += count;
+ if (count < data_len)
break;
- }
mptcp_eat_recv_skb(sk, skb);
}
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index 16a1f4531dad..f3d852e52982 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -129,7 +129,6 @@
struct mptcp_skb_cb {
u64 map_seq;
u64 end_seq;
- u32 offset;
u8 has_rxtstamp;
};
@@ -289,6 +288,7 @@ struct mptcp_sock {
u64 bytes_sent;
u64 snd_nxt;
u64 bytes_received;
+ u64 copied_seq;
u64 ack_seq;
atomic64_t rcv_wnd_sent;
u64 rcv_data_fin_seq;
@@ -308,6 +308,7 @@ struct mptcp_sock {
u32 last_ack_recv;
unsigned long timer_ival;
u32 token;
+ u32 tfo_skb_len;
unsigned long flags;
unsigned long cb_flags;
bool rcvd_dummy_seq;
@@ -859,6 +860,11 @@ struct sock *mptcp_subflow_get_retrans(struct mptcp_sock *msk);
int mptcp_sched_get_send(struct mptcp_sock *msk);
int mptcp_sched_get_retrans(struct mptcp_sock *msk);
+static inline u64 mptcp_iasn(const struct mptcp_sock *msk)
+{
+ return msk->ack_seq - msk->bytes_received + msk->tfo_skb_len;
+}
+
static inline u64 mptcp_data_avail(const struct mptcp_sock *msk)
{
return READ_ONCE(msk->bytes_received) - READ_ONCE(msk->bytes_consumed);
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index 5f371bf773f8..c8ea876bdd03 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -499,10 +499,13 @@ static void subflow_set_remote_key(struct mptcp_sock *msk,
WRITE_ONCE(msk->can_ack, true);
atomic64_set(&msk->rcv_wnd_sent, subflow->iasn);
- if (!sock_owned_by_user(sk))
+ if (!sock_owned_by_user(sk)) {
+ /* User space could have already read partially the TFO skb */
+ msk->copied_seq += subflow->iasn;
__mptcp_sync_rcv_sequence(sk);
- else
+ } else {
__set_bit(MPTCP_SYNC_SEQ, &msk->cb_flags);
+ }
}
static void mptcp_propagate_state(struct sock *sk, struct sock *ssk,
--
2.54.0
* [PATCH mptcp-next v5 07/12] mptcp: sync mptcp skb cb layout with tcp one
2026-05-10 21:03 [PATCH mptcp-next v5 00/12] mptcp: address stall under memory pressure Paolo Abeni
` (5 preceding siblings ...)
2026-05-10 21:03 ` [PATCH mptcp-next v5 06/12] mptcp: remove CB offset field Paolo Abeni
@ 2026-05-10 21:03 ` Paolo Abeni
2026-05-10 21:03 ` [PATCH mptcp-next v5 08/12] tcp: expose the tcp_collapse_ofo_queue() helper to mptcp usage, too Paolo Abeni
` (5 subsequent siblings)
12 siblings, 0 replies; 17+ messages in thread
From: Paolo Abeni @ 2026-05-10 21:03 UTC (permalink / raw)
To: mptcp; +Cc: Shardul Bankar
The MPTCP protocol uses a significantly different CB layout WRT TCP,
as it includes different information and uses 64 bits for the sequence
numbers.
As the msk-level rcvbuf size is limited by the core socket code to
INT_MAX, after validating the incoming skb vs the current receive
window we can safely use 32 bits for the MPTCP-level sequence numbers.
This allows updating the MPTCP CB layout so that fields with a
corresponding TCP-level counterpart use the same area inside the CB
itself.
Add a build-time check to ensure the latter invariant.
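The 32-bit comparisons rely on TCP's standard serial number helpers,
which are unambiguous as long as the window never exceeds 2^31, a
property guaranteed here by the INT_MAX rcvbuf cap:

	/* From include/net/tcp.h: wrap-safe 32-bit sequence compare. */
	static inline bool before(__u32 seq1, __u32 seq2)
	{
		return (__s32)(seq1 - seq2) < 0;
	}

	#define after(seq2, seq1)	before(seq1, seq2)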
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
v4 -> v5:
- the main __mptcp_move_skb() chunk belonged to the previous patch
v3 -> v4:
- fix wrong check for OoO skb in mptcp_move_skb().
v1 -> v2:
- use u64 for admission checks
rfc -> v1:
- keep `ack_seq` up2date
---
net/mptcp/fastopen.c | 6 ++--
net/mptcp/protocol.c | 82 +++++++++++++++++++++++++++-----------------
net/mptcp/protocol.h | 7 ++--
3 files changed, 60 insertions(+), 35 deletions(-)
diff --git a/net/mptcp/fastopen.c b/net/mptcp/fastopen.c
index 03e605b050f8..a9e90dc14f77 100644
--- a/net/mptcp/fastopen.c
+++ b/net/mptcp/fastopen.c
@@ -47,8 +47,10 @@ void mptcp_fastopen_subflow_synack_set_params(struct mptcp_subflow_context *subf
/* The TFO segment data sits before the IASN; before receiving
* the remote key, IASN is assumed being 0.
*/
- MPTCP_SKB_CB(skb)->map_seq = -(u64)skb->len;
+ MPTCP_SKB_CB(skb)->map_seq64 = -(u64)skb->len;
+ MPTCP_SKB_CB(skb)->map_seq = MPTCP_SKB_CB(skb)->map_seq64;
MPTCP_SKB_CB(skb)->end_seq = 0;
+ MPTCP_SKB_CB(skb)->flags = 0;
MPTCP_SKB_CB(skb)->has_rxtstamp = has_rxtstamp;
mptcp_data_lock(sk);
@@ -56,7 +58,7 @@ void mptcp_fastopen_subflow_synack_set_params(struct mptcp_subflow_context *subf
msk = mptcp_sk(sk);
msk->rcvd_dummy_seq = true;
- msk->copied_seq = MPTCP_SKB_CB(skb)->map_seq;
+ msk->copied_seq = MPTCP_SKB_CB(skb)->map_seq64;
msk->tfo_skb_len = skb->len;
mptcp_borrow_fwdmem(sk, skb);
skb_set_owner_r(skb, sk);
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 47df6d4a26a7..9e0267e71a5a 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -164,7 +164,7 @@ static bool __mptcp_try_coalesce(struct sock *sk, struct sk_buff *to,
!skb_try_coalesce(to, from, fragstolen, delta))
return false;
- pr_debug("colesced seq %llx into %llx new len %d new end seq %llx\n",
+ pr_debug("colesced seq %x into %x new len %d new end seq %x\n",
MPTCP_SKB_CB(from)->map_seq, MPTCP_SKB_CB(to)->map_seq,
to->len, MPTCP_SKB_CB(from)->end_seq);
MPTCP_SKB_CB(to)->end_seq = MPTCP_SKB_CB(from)->end_seq;
@@ -234,14 +234,18 @@ static void mptcp_data_queue_ofo(struct mptcp_sock *msk, struct sk_buff *skb)
{
struct sock *sk = (struct sock *)msk;
struct rb_node **p, *parent;
- u64 seq, end_seq, max_seq;
+ u64 end_seq, max_seq;
struct sk_buff *skb1;
+ u32 seq;
seq = MPTCP_SKB_CB(skb)->map_seq;
- end_seq = MPTCP_SKB_CB(skb)->end_seq;
+ end_seq = MPTCP_SKB_CB(skb)->map_seq64 + skb->len;
max_seq = atomic64_read(&msk->rcv_wnd_sent);
- pr_debug("msk=%p seq=%llx limit=%llx empty=%d\n", msk, seq, max_seq,
+ /* Use the full sequence space to perform the admission checks, to
+ * protect vs possible wrap-arounds.
+ */
+ pr_debug("msk=%p seq=%x limit=%llx empty=%d\n", msk, seq, max_seq,
RB_EMPTY_ROOT(&msk->out_of_order_queue));
if (after64(end_seq, max_seq)) {
/* out of window */
@@ -272,7 +276,7 @@ static void mptcp_data_queue_ofo(struct mptcp_sock *msk, struct sk_buff *skb)
}
/* Can avoid an rbtree lookup if we are adding skb after ooo_last_skb */
- if (!before64(seq, MPTCP_SKB_CB(msk->ooo_last_skb)->end_seq)) {
+ if (!before(seq, MPTCP_SKB_CB(msk->ooo_last_skb)->end_seq)) {
MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_OFOQUEUETAIL);
parent = &msk->ooo_last_skb->rbnode;
p = &parent->rb_right;
@@ -284,18 +288,18 @@ static void mptcp_data_queue_ofo(struct mptcp_sock *msk, struct sk_buff *skb)
while (*p) {
parent = *p;
skb1 = rb_to_skb(parent);
- if (before64(seq, MPTCP_SKB_CB(skb1)->map_seq)) {
+ if (before(seq, MPTCP_SKB_CB(skb1)->map_seq)) {
p = &parent->rb_left;
continue;
}
- if (before64(seq, MPTCP_SKB_CB(skb1)->end_seq)) {
- if (!after64(end_seq, MPTCP_SKB_CB(skb1)->end_seq)) {
+ if (before(seq, MPTCP_SKB_CB(skb1)->end_seq)) {
+ if (!after(end_seq, MPTCP_SKB_CB(skb1)->end_seq)) {
/* All the bits are present. Drop. */
mptcp_drop(sk, skb);
MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_DUPDATA);
return;
}
- if (after64(seq, MPTCP_SKB_CB(skb1)->map_seq)) {
+ if (after(seq, MPTCP_SKB_CB(skb1)->map_seq)) {
/* partial overlap:
* | skb |
* | skb1 |
@@ -326,7 +330,7 @@ static void mptcp_data_queue_ofo(struct mptcp_sock *msk, struct sk_buff *skb)
merge_right:
/* Remove other segments covered by skb. */
while ((skb1 = skb_rb_next(skb)) != NULL) {
- if (before64(end_seq, MPTCP_SKB_CB(skb1)->end_seq))
+ if (before((u32)end_seq, MPTCP_SKB_CB(skb1)->end_seq))
break;
rb_erase(&skb1->rbnode, &msk->out_of_order_queue);
mptcp_drop(sk, skb1);
@@ -346,13 +350,15 @@ static void mptcp_init_skb(struct sock *ssk, struct sk_buff *skb, int offset)
struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
bool has_rxtstamp = TCP_SKB_CB(skb)->has_rxtstamp;
- /* the skb map_seq accounts for the skb offset:
+ /* The skb map_seq accounts for the skb offset:
* mptcp_subflow_get_mapped_dsn() is based on the current tp->copied_seq
- * value
+ * value; note that end seq number is only available in 32bits format.
*/
- MPTCP_SKB_CB(skb)->map_seq = mptcp_subflow_get_mapped_dsn(subflow) -
- offset;
+ MPTCP_SKB_CB(skb)->map_seq64 = mptcp_subflow_get_mapped_dsn(subflow) -
+ offset;
+ MPTCP_SKB_CB(skb)->map_seq = (u32)MPTCP_SKB_CB(skb)->map_seq64;
MPTCP_SKB_CB(skb)->end_seq = MPTCP_SKB_CB(skb)->map_seq + skb->len;
+ MPTCP_SKB_CB(skb)->flags = 0;
MPTCP_SKB_CB(skb)->has_rxtstamp = has_rxtstamp;
__skb_unlink(skb, &ssk->sk_receive_queue);
@@ -375,13 +381,14 @@ void __mptcp_sync_rcv_sequence(struct sock *sk)
if (!skb)
return;
- MPTCP_SKB_CB(skb)->map_seq = mptcp_iasn(msk) - skb->len;
+ MPTCP_SKB_CB(skb)->map_seq64 = mptcp_iasn(msk) - skb->len;
+ MPTCP_SKB_CB(skb)->map_seq = (u32)MPTCP_SKB_CB(skb)->map_seq64;
MPTCP_SKB_CB(skb)->end_seq = MPTCP_SKB_CB(skb)->map_seq + skb->len;
}
static bool __mptcp_move_skb(struct sock *sk, struct sk_buff *skb)
{
- u64 copy_len = MPTCP_SKB_CB(skb)->end_seq - MPTCP_SKB_CB(skb)->map_seq;
+ u32 copy_len = MPTCP_SKB_CB(skb)->end_seq - MPTCP_SKB_CB(skb)->map_seq;
struct mptcp_sock *msk = mptcp_sk(sk);
struct sk_buff *tail;
@@ -402,7 +409,7 @@ static bool __mptcp_move_skb(struct sock *sk, struct sk_buff *skb)
return false;
}
- if (MPTCP_SKB_CB(skb)->map_seq == msk->ack_seq) {
+ if (MPTCP_SKB_CB(skb)->map_seq64 == msk->ack_seq) {
add_queue:
/* in sequence */
msk->bytes_received += copy_len;
@@ -414,12 +421,13 @@ static bool __mptcp_move_skb(struct sock *sk, struct sk_buff *skb)
skb_set_owner_r(skb, sk);
__skb_queue_tail(&sk->sk_receive_queue, skb);
return true;
- } else if (after64(MPTCP_SKB_CB(skb)->map_seq, msk->ack_seq)) {
+ } else if (after64(MPTCP_SKB_CB(skb)->map_seq64, msk->ack_seq)) {
mptcp_data_queue_ofo(msk, skb);
return false;
- } else if (after64(MPTCP_SKB_CB(skb)->end_seq, msk->ack_seq)) {
+ } else if (after64(MPTCP_SKB_CB(skb)->map_seq64 + skb->len,
+ msk->ack_seq)) {
/* Partial packet: map_seq < ack_seq < end_seq.*/
- copy_len -= msk->ack_seq - MPTCP_SKB_CB(skb)->map_seq;
+ copy_len -= (u32)msk->ack_seq - MPTCP_SKB_CB(skb)->map_seq;
goto add_queue;
}
@@ -807,40 +815,40 @@ static bool __mptcp_ofo_queue(struct mptcp_sock *msk)
{
struct sock *sk = (struct sock *)msk;
struct sk_buff *skb, *tail;
+ u32 seq_delta, ack_seq;
bool moved = false;
struct rb_node *p;
- u64 end_seq;
p = rb_first(&msk->out_of_order_queue);
pr_debug("msk=%p empty=%d\n", msk, RB_EMPTY_ROOT(&msk->out_of_order_queue));
while (p) {
+ ack_seq = msk->ack_seq;
skb = rb_to_skb(p);
- if (after64(MPTCP_SKB_CB(skb)->map_seq, msk->ack_seq))
+ if (after(MPTCP_SKB_CB(skb)->map_seq, ack_seq))
break;
p = rb_next(p);
rb_erase(&skb->rbnode, &msk->out_of_order_queue);
- if (unlikely(!after64(MPTCP_SKB_CB(skb)->end_seq,
- msk->ack_seq))) {
+ if (unlikely(!after(MPTCP_SKB_CB(skb)->end_seq, ack_seq))) {
mptcp_drop(sk, skb);
MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_DUPDATA);
continue;
}
- end_seq = MPTCP_SKB_CB(skb)->end_seq;
+ seq_delta = MPTCP_SKB_CB(skb)->end_seq - ack_seq;
tail = skb_peek_tail(&sk->sk_receive_queue);
if (!tail || !mptcp_try_coalesce(sk, tail, skb)) {
- int delta = msk->ack_seq - MPTCP_SKB_CB(skb)->map_seq;
+ int delta = ack_seq - MPTCP_SKB_CB(skb)->map_seq;
/* skip overlapping data, if any */
- pr_debug("uncoalesced seq=%llx ack seq=%llx delta=%d\n",
- MPTCP_SKB_CB(skb)->map_seq, msk->ack_seq,
+ pr_debug("uncoalesced seq=%x ack seq=%x delta=%d\n",
+ MPTCP_SKB_CB(skb)->map_seq, ack_seq,
delta);
__skb_queue_tail(&sk->sk_receive_queue, skb);
}
- msk->bytes_received += end_seq - msk->ack_seq;
- WRITE_ONCE(msk->ack_seq, end_seq);
+ msk->bytes_received += seq_delta;
+ WRITE_ONCE(msk->ack_seq, msk->ack_seq + seq_delta);
moved = true;
}
return moved;
@@ -2089,7 +2097,7 @@ static int __mptcp_recvmsg_mskq(struct sock *sk, struct msghdr *msg,
int copied = 0;
skb_queue_walk_safe(&sk->sk_receive_queue, skb, tmp) {
- u64 offset = *seq - MPTCP_SKB_CB(skb)->map_seq;
+ u32 offset = (u32)(*seq) - MPTCP_SKB_CB(skb)->map_seq;
u32 count, data_len = skb->len - offset;
int err;
@@ -4622,11 +4630,23 @@ static int mptcp_napi_poll(struct napi_struct *napi, int budget)
return work_done;
}
+#define CHK_CB_FIELD(mptcp_field, tcp_field) \
+ ({ \
+ BUILD_BUG_ON(offsetof(struct mptcp_skb_cb, mptcp_field) != \
+ offsetof(struct tcp_skb_cb, tcp_field)); \
+ BUILD_BUG_ON(offsetofend(struct mptcp_skb_cb, mptcp_field) != \
+ offsetofend(struct tcp_skb_cb, tcp_field)); \
+ })
+
void __init mptcp_proto_init(void)
{
struct mptcp_delegated_action *delegated;
int cpu;
+ CHK_CB_FIELD(map_seq, seq);
+ CHK_CB_FIELD(end_seq, end_seq);
+ CHK_CB_FIELD(flags, tcp_flags);
+
mptcp_prot.h.hashinfo = tcp_prot.h.hashinfo;
if (percpu_counter_init(&mptcp_sockets_allocated, 0, GFP_KERNEL))
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index f3d852e52982..6786da97bbc8 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -127,9 +127,12 @@
#define MPTCP_SYNC_SEQ 8
struct mptcp_skb_cb {
- u64 map_seq;
- u64 end_seq;
+ u32 map_seq;
+ u32 end_seq;
+ u32 unused;
+ u16 flags;
u8 has_rxtstamp;
+ u64 map_seq64;
};
#define MPTCP_SKB_CB(__skb) ((struct mptcp_skb_cb *)&((__skb)->cb[0]))
--
2.54.0
* [PATCH mptcp-next v5 08/12] tcp: expose the tcp_collapse_ofo_queue() helper to mptcp usage, too
2026-05-10 21:03 [PATCH mptcp-next v5 00/12] mptcp: address stall under memory pressure Paolo Abeni
` (6 preceding siblings ...)
2026-05-10 21:03 ` [PATCH mptcp-next v5 07/12] mptcp: sync mptcp skb cb layout with tcp one Paolo Abeni
@ 2026-05-10 21:03 ` Paolo Abeni
2026-05-10 21:03 ` [PATCH mptcp-next v5 09/12] mptcp: implemented OoO queue pruning Paolo Abeni
` (4 subsequent siblings)
12 siblings, 0 replies; 17+ messages in thread
From: Paolo Abeni @ 2026-05-10 21:03 UTC (permalink / raw)
To: mptcp; +Cc: Shardul Bankar
The end goal is to avoid duplicating the quite non-trivial pruning
strategy at the MPTCP level.
After the previous patch, the mentioned helpers can process skbs
sitting in MPTCP-level queues without any CB-related adaptation.
The only additional adjustment needed is explicitly providing the OoO
queue reference, to cope with the different sk layout.
Additionally, rename the helpers to clearly document their hybrid
nature and let them return the number of collapsed skbs, to allow
proper accounting by the future MPTCP caller.
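For reference, the anticipated MPTCP-side usage would look roughly
like the following sketch (the actual wiring lands in a later patch;
treat the exact call site and the scaling ratio source as
assumptions):

	struct mptcp_sock *msk = mptcp_sk(sk);
	unsigned int collapsed;

	collapsed = xtcp_collapse_ofo_queue(sk, &msk->out_of_order_queue,
					    &msk->ooo_last_skb,
					    msk->scaling_ratio);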
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
rfc -> v1:
- fix arg typo
Note:
- this will need a significant amount of testing at the TCP level, and
  explicit approval from Eric, which I can't guess whether we can hope
  for.
---
include/net/tcp.h | 8 +++++++
net/ipv4/tcp_input.c | 54 ++++++++++++++++++++++++++++----------------
2 files changed, 43 insertions(+), 19 deletions(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 3c4e6adb0dbd..b21189858d66 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1838,6 +1838,14 @@ extern void tcp_openreq_init_rwin(struct request_sock *req,
void tcp_enter_memory_pressure(struct sock *sk);
void tcp_leave_memory_pressure(struct sock *sk);
+unsigned int xtcp_collapse(struct sock *sk, struct sk_buff_head *list,
+ struct rb_root *root, struct sk_buff *head,
+ struct sk_buff *tail, u32 start, u32 end,
+ u8 scaling_ratio);
+unsigned int xtcp_collapse_ofo_queue(struct sock *sk,
+ struct rb_root *out_of_order_queue,
+ struct sk_buff **ooo_last_skb,
+ u8 scaling_ratio);
static inline int keepalive_intvl_when(const struct tcp_sock *tp)
{
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 7995a89bafc9..f3a9cf0a1e6c 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -5731,16 +5731,22 @@ static struct sk_buff *tcp_collapse_one(struct sock *sk, struct sk_buff *skb,
/* Collapse contiguous sequence of skbs head..tail with
* sequence numbers start..end.
*
+ * sk can be either a TCP or an MPTCP socket.
+ *
* If tail is NULL, this means until the end of the queue.
*
* Segments with FIN/SYN are not collapsed (only because this
* simplifies code)
+ *
+ * Returns the number of collapsed skbs.
*/
-static void
-tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root,
- struct sk_buff *head, struct sk_buff *tail, u32 start, u32 end)
+unsigned int
+xtcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root,
+ struct sk_buff *head, struct sk_buff *tail, u32 start, u32 end,
+ u8 scaling_ratio)
{
struct sk_buff *skb = head, *n;
+ unsigned int collapsed = 0;
struct sk_buff_head tmp;
bool end_of_skbs;
@@ -5756,6 +5762,7 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root,
/* No new bits? It is possible on ofo queue. */
if (!before(start, TCP_SKB_CB(skb)->end_seq)) {
+ collapsed++;
skb = tcp_collapse_one(sk, skb, list, root);
if (!skb)
break;
@@ -5768,7 +5775,7 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root,
* overlaps to the next one and mptcp allow collapsing.
*/
if (!(TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN)) &&
- (tcp_win_from_space(sk, skb->truesize) > skb->len ||
+ (__tcp_win_from_space(scaling_ratio, skb->truesize) > skb->len ||
before(TCP_SKB_CB(skb)->seq, start))) {
end_of_skbs = false;
break;
@@ -5788,7 +5795,7 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root,
if (end_of_skbs ||
(TCP_SKB_CB(skb)->tcp_flags & (TCPHDR_SYN | TCPHDR_FIN)) ||
!skb_frags_readable(skb))
- return;
+ return collapsed;
__skb_queue_head_init(&tmp);
@@ -5825,6 +5832,7 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root,
start += size;
}
if (!before(start, TCP_SKB_CB(skb)->end_seq)) {
+ collapsed++;
skb = tcp_collapse_one(sk, skb, list, root);
if (!skb ||
skb == tail ||
@@ -5838,23 +5846,27 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list, struct rb_root *root,
end:
skb_queue_walk_safe(&tmp, skb, n)
tcp_rbtree_insert(root, skb);
+ return collapsed;
}
/* Collapse ofo queue. Algorithm: select contiguous sequence of skbs
- * and tcp_collapse() them until all the queue is collapsed.
+ * and xtcp_collapse() them until all the queue is collapsed.
*/
-static void tcp_collapse_ofo_queue(struct sock *sk)
+unsigned int xtcp_collapse_ofo_queue(struct sock *sk,
+ struct rb_root *ooo_queue,
+ struct sk_buff **ooo_last_skb,
+ u8 scaling_ratio)
{
- struct tcp_sock *tp = tcp_sk(sk);
u32 range_truesize, sum_tiny = 0;
struct sk_buff *skb, *head;
+ unsigned int collapsed = 0;
u32 start, end;
- skb = skb_rb_first(&tp->out_of_order_queue);
+ skb = skb_rb_first(ooo_queue);
new_range:
if (!skb) {
- tp->ooo_last_skb = skb_rb_last(&tp->out_of_order_queue);
- return;
+ *ooo_last_skb = skb_rb_last(ooo_queue);
+ return collapsed;
}
start = TCP_SKB_CB(skb)->seq;
end = TCP_SKB_CB(skb)->end_seq;
@@ -5872,12 +5884,13 @@ static void tcp_collapse_ofo_queue(struct sock *sk)
/* Do not attempt collapsing tiny skbs */
if (range_truesize != head->truesize ||
end - start >= SKB_WITH_OVERHEAD(PAGE_SIZE)) {
- tcp_collapse(sk, NULL, &tp->out_of_order_queue,
- head, skb, start, end);
+ collapsed += xtcp_collapse(sk, NULL, ooo_queue,
+ head, skb, start, end,
+ scaling_ratio);
} else {
sum_tiny += range_truesize;
if (sum_tiny > sk->sk_rcvbuf >> 3)
- return;
+ return collapsed;
}
goto new_range;
}
@@ -5888,6 +5901,7 @@ static void tcp_collapse_ofo_queue(struct sock *sk)
if (after(TCP_SKB_CB(skb)->end_seq, end))
end = TCP_SKB_CB(skb)->end_seq;
}
+ return collapsed;
}
/*
@@ -5975,12 +5989,14 @@ static int tcp_prune_queue(struct sock *sk, const struct sk_buff *in_skb)
if (tcp_can_ingest(sk, in_skb))
return 0;
- tcp_collapse_ofo_queue(sk);
+ xtcp_collapse_ofo_queue(sk, &tp->out_of_order_queue,
+ &tp->ooo_last_skb, tp->scaling_ratio);
if (!skb_queue_empty(&sk->sk_receive_queue))
- tcp_collapse(sk, &sk->sk_receive_queue, NULL,
- skb_peek(&sk->sk_receive_queue),
- NULL,
- tp->copied_seq, tp->rcv_nxt);
+ xtcp_collapse(sk, &sk->sk_receive_queue, NULL,
+ skb_peek(&sk->sk_receive_queue),
+ NULL,
+ tp->copied_seq, tp->rcv_nxt,
+ tp->scaling_ratio);
if (tcp_can_ingest(sk, in_skb))
return 0;
--
2.54.0
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH mptcp-next v5 09/12] mptcp: implemented OoO queue pruning
2026-05-10 21:03 [PATCH mptcp-next v5 00/12] mptcp: address stall under memory pressure Paolo Abeni
` (7 preceding siblings ...)
2026-05-10 21:03 ` [PATCH mptcp-next v5 08/12] tcp: expose the tcp_collapse_ofo_queue() helper to mptcp usage, too Paolo Abeni
@ 2026-05-10 21:03 ` Paolo Abeni
2026-05-10 21:03 ` [PATCH mptcp-next v5 10/12] mptcp: track prune recovery status Paolo Abeni
` (3 subsequent siblings)
12 siblings, 0 replies; 17+ messages in thread
From: Paolo Abeni @ 2026-05-10 21:03 UTC (permalink / raw)
To: mptcp; +Cc: Shardul Bankar
Leverage the hybrid helpers to implement receive queue and OoO queue
collapsing at ingress time when hitting the memory bounds.
If the msk is owned by user-space at incoming skb time, defer the
pruning to the release_cb. The prune check is additionally performed
when the skb reaches the msk-level queues.
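Schematically, the admission path follows the usual owned-by-user
deferral pattern; a condensed sketch of the mptcp_over_limit() hunk
below, with seq computed from the received options and the TFO re-sync
and limit handling omitted:

	mptcp_data_lock(sk);
	if (!sock_owned_by_user(sk)) {
		/* socket not owned by user-space: prune right away */
		__mptcp_check_prune(sk, seq);
	} else {
		/* defer to mptcp_release_cb(), allow extra slack until then */
		__set_bit(MPTCP_PRUNE, &msk->cb_flags);
	}
	mptcp_data_unlock(sk);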
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
v2 -> v3:
- deal with unsynced TFO skb at prune time - only possible when pruning
in mptcp_over_limit()
v1 -> v2:
- collapse rcv queue, too
- deal with MPC map, too
- drop left-over sentence in the commit message
RFC -> v1:
- use data_seq only when available
- avoid ack_seq lockless access
- drop limit on fallback
- collapse rcvqueue, too
- drop only when pruning is not possible and over rcvbuf * 2
Note:
- sashiko can be confused about the fwd memory lifecycle (I can
understand that :). Any excess fwd-allocated memory is always
released by the next sk_mem_uncharge() - i.e. fwd memory
is not tied to the current skb.
- sashiko is also fooled by the main xtcp_collapse_ofo_queue()
loop: ooo_last_skb is always kept up to date with the current tree
status
- AFAICS KASAN handles bitmap variables in a sane way, and sashiko
doesn't know about that
---
net/mptcp/mib.c | 3 +++
net/mptcp/mib.h | 3 +++
net/mptcp/options.c | 49 +++++++++++++++++++++++++++++-----
net/mptcp/protocol.c | 62 ++++++++++++++++++++++++++++++++++++++++++++
net/mptcp/protocol.h | 2 ++
5 files changed, 112 insertions(+), 7 deletions(-)
diff --git a/net/mptcp/mib.c b/net/mptcp/mib.c
index f23fda0c55a7..5128feec942c 100644
--- a/net/mptcp/mib.c
+++ b/net/mptcp/mib.c
@@ -85,6 +85,9 @@ static const struct snmp_mib mptcp_snmp_list[] = {
SNMP_MIB_ITEM("SimultConnectFallback", MPTCP_MIB_SIMULTCONNFALLBACK),
SNMP_MIB_ITEM("FallbackFailed", MPTCP_MIB_FALLBACKFAILED),
SNMP_MIB_ITEM("WinProbe", MPTCP_MIB_WINPROBE),
+ SNMP_MIB_ITEM("OfoPruned", MPTCP_MIB_OFO_PRUNED),
+ SNMP_MIB_ITEM("RcvPruned", MPTCP_MIB_RCVPRUNED),
+ SNMP_MIB_ITEM("RcvCollapsed", MPTCP_MIB_RCVCOLLAPSED),
};
/* mptcp_mib_alloc - allocate percpu mib counters
diff --git a/net/mptcp/mib.h b/net/mptcp/mib.h
index 812218b5ed2b..2f8f68e33ac5 100644
--- a/net/mptcp/mib.h
+++ b/net/mptcp/mib.h
@@ -88,6 +88,9 @@ enum linux_mptcp_mib_field {
MPTCP_MIB_SIMULTCONNFALLBACK, /* Simultaneous connect */
MPTCP_MIB_FALLBACKFAILED, /* Can't fallback due to msk status */
MPTCP_MIB_WINPROBE, /* MPTCP-level zero window probe */
+ MPTCP_MIB_OFO_PRUNED, /* MPTCP-level OoO queue pruned */
+ MPTCP_MIB_RCVPRUNED, /* Dropped due to memory constraints */
+ MPTCP_MIB_RCVCOLLAPSED, /* Collapsed due to memory pressure */
__MPTCP_MIB_MAX
};
diff --git a/net/mptcp/options.c b/net/mptcp/options.c
index 36f12e5dfa92..5669d0a395c6 100644
--- a/net/mptcp/options.c
+++ b/net/mptcp/options.c
@@ -1159,10 +1159,13 @@ static bool add_addr_hmac_valid(struct mptcp_sock *msk,
}
static bool mptcp_over_limit(struct sock *sk, struct sock *ssk,
- const struct sk_buff *skb)
+ const struct sk_buff *skb,
+ const struct mptcp_options_received *mp_opt)
{
struct mptcp_sock *msk = mptcp_sk(sk);
u64 mem = sk_rmem_alloc_get(sk);
+ u64 limit;
+ bool ret;
mem += READ_ONCE(msk->backlog_len);
if (likely(mem <= READ_ONCE(sk->sk_rcvbuf)))
@@ -1174,10 +1177,38 @@ static bool mptcp_over_limit(struct sock *sk, struct sock *ssk,
!after(TCP_SKB_CB(skb)->end_seq, tcp_sk(ssk)->rcv_nxt))
return false;
- /* Dropped due to memory constraints, schedule an ack. */
- inet_csk(ssk)->icsk_ack.pending |= ICSK_ACK_NOMEM | ICSK_ACK_NOW;
- inet_csk_schedule_ack(ssk);
- return true;
+ mptcp_data_lock(sk);
+ if (!sock_owned_by_user(sk)) {
+ /* When the data sequence is not (yet) available for the
+ * incoming skb, allow pruning the whole OoO queue.
+ */
+ u32 seq = (u32)((!mp_opt->use_map || mp_opt->mpc_map) ?
+ msk->ack_seq : mp_opt->data_seq);
+
+ /* Be sure TFO skb sequence number is in-sync, as the
+ * TCP pruning helper will be badly fouled otherwise.
+ */
+ if (unlikely(msk->rcvd_dummy_seq))
+ __mptcp_sync_rcv_sequence(sk);
+
+ limit = sk->sk_rcvbuf;
+ __mptcp_check_prune(sk, seq);
+ } else {
+ /* Pruning will take place later in the RX path, allow
+ * some extra slack.
+ */
+ limit = ((u64)READ_ONCE(sk->sk_rcvbuf)) << 1;
+ __set_bit(MPTCP_PRUNE, &msk->cb_flags);
+ }
+ ret = sk_rmem_alloc_get(sk) + msk->backlog_len > limit;
+ mptcp_data_unlock(sk);
+
+ if (ret) {
+ /* Dropped due to memory constraints, schedule an ack. */
+ inet_csk(ssk)->icsk_ack.pending |= ICSK_ACK_NOMEM | ICSK_ACK_NOW;
+ inet_csk_schedule_ack(ssk);
+ }
+ return ret;
}
/* Return false when the caller must drop the packet, i.e. in case of error,
@@ -1208,7 +1239,11 @@ bool mptcp_incoming_options(struct sock *sk, struct sk_buff *skb)
__mptcp_data_acked(subflow->conn);
mptcp_data_unlock(subflow->conn);
- if (mptcp_over_limit(subflow->conn, sk, skb))
+ /* Will use ack_seq as limit for OoO pruning; any value would do
+ * as OoO queue must be empty.
+ */
+ mp_opt.use_map = 0;
+ if (mptcp_over_limit(subflow->conn, sk, skb, &mp_opt))
return false;
return true;
}
@@ -1288,7 +1323,7 @@ bool mptcp_incoming_options(struct sock *sk, struct sk_buff *skb)
return true;
}
- if (mptcp_over_limit(subflow->conn, sk, skb))
+ if (mptcp_over_limit(subflow->conn, sk, skb, &mp_opt))
return false;
mpext = skb_ext_add(skb, SKB_EXT_MPTCP);
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 9e0267e71a5a..6c309c73c128 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -386,6 +386,64 @@ void __mptcp_sync_rcv_sequence(struct sock *sk)
MPTCP_SKB_CB(skb)->end_seq = MPTCP_SKB_CB(skb)->map_seq + skb->len;
}
+/* "Inspired" from the TCP version */
+static void mptcp_prune_ofo_queue(struct sock *sk, u32 seq)
+{
+ struct mptcp_sock *msk = mptcp_sk(sk);
+ struct rb_node *node, *prev;
+ bool pruned = false;
+
+ if (RB_EMPTY_ROOT(&msk->out_of_order_queue))
+ return;
+
+ node = &msk->ooo_last_skb->rbnode;
+
+ do {
+ struct sk_buff *skb = rb_to_skb(node);
+
+ /* Stop pruning if the incoming skb would land in OoO tail. */
+ if (after(seq, MPTCP_SKB_CB(skb)->map_seq))
+ break;
+
+ pruned = true;
+ prev = rb_prev(node);
+ rb_erase(node, &msk->out_of_order_queue);
+ mptcp_drop(sk, skb);
+ msk->ooo_last_skb = rb_to_skb(prev);
+ if (atomic_read(&sk->sk_rmem_alloc) < sk->sk_rcvbuf)
+ break;
+
+ node = prev;
+ } while (node);
+
+ if (pruned)
+ MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_OFO_PRUNED);
+}
+
+bool __mptcp_check_prune(struct sock *sk, u32 seq)
+{
+ struct mptcp_sock *msk = mptcp_sk(sk);
+ unsigned int dropped;
+
+ dropped = xtcp_collapse_ofo_queue(sk, &msk->out_of_order_queue,
+ &msk->ooo_last_skb,
+ msk->scaling_ratio);
+ if (!skb_queue_empty(&sk->sk_receive_queue))
+ dropped += xtcp_collapse(sk, &sk->sk_receive_queue, NULL,
+ skb_peek(&sk->sk_receive_queue),
+ NULL,
+ msk->copied_seq, msk->ack_seq,
+ msk->scaling_ratio);
+
+ if (dropped)
+ MPTCP_ADD_STATS(sock_net(sk), MPTCP_MIB_RCVCOLLAPSED, dropped);
+ if (likely(atomic_read(&sk->sk_rmem_alloc) < sk->sk_rcvbuf))
+ return false;
+
+ mptcp_prune_ofo_queue(sk, seq);
+ return atomic_read(&sk->sk_rmem_alloc) >= sk->sk_rcvbuf;
+}
+
static bool __mptcp_move_skb(struct sock *sk, struct sk_buff *skb)
{
u32 copy_len = MPTCP_SKB_CB(skb)->end_seq - MPTCP_SKB_CB(skb)->map_seq;
@@ -404,7 +462,9 @@ static bool __mptcp_move_skb(struct sock *sk, struct sk_buff *skb)
* will break.
*/
if (unlikely(sk_rmem_alloc_get(sk) > READ_ONCE(sk->sk_rcvbuf)) &&
+ __mptcp_check_prune(sk, MPTCP_SKB_CB(skb)->map_seq) &&
!__mptcp_check_fallback(msk)) {
+ MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RCVPRUNED);
mptcp_drop(sk, skb);
return false;
}
@@ -3722,6 +3782,8 @@ static void mptcp_release_cb(struct sock *sk)
msk->copied_seq += mptcp_iasn(msk);
__mptcp_sync_rcv_sequence(sk);
}
+ if (__test_and_clear_bit(MPTCP_PRUNE, &msk->cb_flags))
+ __mptcp_check_prune(sk, (u32)(msk->ack_seq - 1));
}
}
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index 6786da97bbc8..ae019a10e1c8 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -125,6 +125,7 @@
#define MPTCP_SYNC_STATE 6
#define MPTCP_SYNC_SNDBUF 7
#define MPTCP_SYNC_SEQ 8
+#define MPTCP_PRUNE 9
struct mptcp_skb_cb {
u32 map_seq;
@@ -832,6 +833,7 @@ bool __mptcp_close(struct sock *sk, long timeout);
void mptcp_cancel_work(struct sock *sk);
void __mptcp_unaccepted_force_close(struct sock *sk);
void mptcp_set_state(struct sock *sk, int state);
+bool __mptcp_check_prune(struct sock *sk, u32 seq);
bool mptcp_addresses_equal(const struct mptcp_addr_info *a,
const struct mptcp_addr_info *b, bool use_port);
--
2.54.0
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH mptcp-next v5 10/12] mptcp: track prune recovery status
2026-05-10 21:03 [PATCH mptcp-next v5 00/12] mptcp: address stall under memory pressure Paolo Abeni
` (8 preceding siblings ...)
2026-05-10 21:03 ` [PATCH mptcp-next v5 09/12] mptcp: implemented OoO queue pruning Paolo Abeni
@ 2026-05-10 21:03 ` Paolo Abeni
2026-05-10 21:03 ` [PATCH mptcp-next v5 11/12] mptcp: move the retrans loop to a separate helper Paolo Abeni
` (2 subsequent siblings)
12 siblings, 0 replies; 17+ messages in thread
From: Paolo Abeni @ 2026-05-10 21:03 UTC (permalink / raw)
To: mptcp; +Cc: Shardul Bankar
After dropping any data already acked at the TCP level, MPTCP must
avoid inducing TCP-level retransmissions until the pruned data has been
successfully acked at the MPTCP level. Otherwise the subflows could keep
retransmitting skbs carrying OoO MPTCP data, preventing reinjections and
completely stalling the data transfer.
Explicitly keep track of the highest pruned MPTCP-level sequence number
and stop dropping at the TCP level until such sequence has been acked.
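The resulting admission-time check is a single guard; a sketch of the
relevant hunk below - e.g. with pruned_seq == 1000 and ack_seq == 800,
incoming data is admitted regardless of the memory limits until the
MPTCP-level ack catches up:

	/* still recovering from a prune: do not drop at the TCP level */
	if (before((u32)msk->ack_seq, READ_ONCE(msk->pruned_seq)))
		ret = false;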
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
Notes:
- sashiko may miss that msk->ack_seq access in mptcp_over_limit()
happens under the msk data lock and is thus race-free.
---
net/mptcp/options.c | 7 +++++++
net/mptcp/protocol.c | 16 ++++++++++++++--
net/mptcp/protocol.h | 3 +++
net/mptcp/subflow.c | 1 +
4 files changed, 25 insertions(+), 2 deletions(-)
diff --git a/net/mptcp/options.c b/net/mptcp/options.c
index 5669d0a395c6..bc987ab5bdfc 100644
--- a/net/mptcp/options.c
+++ b/net/mptcp/options.c
@@ -1201,6 +1201,13 @@ static bool mptcp_over_limit(struct sock *sk, struct sock *ssk,
__set_bit(MPTCP_PRUNE, &msk->cb_flags);
}
ret = sk_rmem_alloc_get(sk) + msk->backlog_len > limit;
+
+ /* After pruning any packets, ensure that MPTCP-driven drops do not
+ * cause TCP-level retransmission.
+ */
+ if (before((u32)(msk->ack_seq), READ_ONCE(msk->pruned_seq)))
+ ret = false;
+
mptcp_data_unlock(sk);
if (ret) {
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 6c309c73c128..baf3457d37a7 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -392,12 +392,14 @@ static void mptcp_prune_ofo_queue(struct sock *sk, u32 seq)
struct mptcp_sock *msk = mptcp_sk(sk);
struct rb_node *node, *prev;
bool pruned = false;
+ u32 pruned_seq;
if (RB_EMPTY_ROOT(&msk->out_of_order_queue))
return;
node = &msk->ooo_last_skb->rbnode;
+ pruned_seq = msk->pruned_seq;
do {
struct sk_buff *skb = rb_to_skb(node);
@@ -408,16 +410,21 @@ static void mptcp_prune_ofo_queue(struct sock *sk, u32 seq)
pruned = true;
prev = rb_prev(node);
rb_erase(node, &msk->out_of_order_queue);
+ if (after(MPTCP_SKB_CB(skb)->end_seq, pruned_seq))
+ pruned_seq = MPTCP_SKB_CB(skb)->end_seq;
mptcp_drop(sk, skb);
msk->ooo_last_skb = rb_to_skb(prev);
+
if (atomic_read(&sk->sk_rmem_alloc) < sk->sk_rcvbuf)
break;
node = prev;
} while (node);
- if (pruned)
+ if (pruned) {
+ WRITE_ONCE(msk->pruned_seq, pruned_seq);
MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_OFO_PRUNED);
+ }
}
bool __mptcp_check_prune(struct sock *sk, u32 seq)
@@ -464,6 +471,8 @@ static bool __mptcp_move_skb(struct sock *sk, struct sk_buff *skb)
if (unlikely(sk_rmem_alloc_get(sk) > READ_ONCE(sk->sk_rcvbuf)) &&
__mptcp_check_prune(sk, MPTCP_SKB_CB(skb)->map_seq) &&
!__mptcp_check_fallback(msk)) {
+ if (after(MPTCP_SKB_CB(skb)->end_seq, msk->pruned_seq))
+ WRITE_ONCE(msk->pruned_seq, MPTCP_SKB_CB(skb)->end_seq);
MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RCVPRUNED);
mptcp_drop(sk, skb);
return false;
@@ -882,7 +891,7 @@ static bool __mptcp_ofo_queue(struct mptcp_sock *msk)
p = rb_first(&msk->out_of_order_queue);
pr_debug("msk=%p empty=%d\n", msk, RB_EMPTY_ROOT(&msk->out_of_order_queue));
while (p) {
- ack_seq = msk->ack_seq;
+ ack_seq = (u32)(msk->ack_seq);
skb = rb_to_skb(p);
if (after(MPTCP_SKB_CB(skb)->map_seq, ack_seq))
break;
@@ -911,6 +920,8 @@ static bool __mptcp_ofo_queue(struct mptcp_sock *msk)
WRITE_ONCE(msk->ack_seq, msk->ack_seq + seq_delta);
moved = true;
}
+ if (after(msk->ack_seq, msk->pruned_seq))
+ WRITE_ONCE(msk->pruned_seq, (u32)msk->ack_seq);
return moved;
}
@@ -3554,6 +3565,7 @@ static int mptcp_disconnect(struct sock *sk, int flags)
WRITE_ONCE(msk->ack_seq, 0);
atomic64_set(&msk->rcv_wnd_sent, 0);
msk->copied_seq = 0;
+ WRITE_ONCE(msk->pruned_seq, 0);
WRITE_ONCE(sk->sk_shutdown, 0);
sk_error_report(sk);
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index ae019a10e1c8..70431cde1757 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -307,6 +307,9 @@ struct mptcp_sock {
u64 bytes_acked;
u64 snd_una;
u64 wnd_end;
+ u32 pruned_seq; /* If strictly above ack_seq,
+ * the highest seq pruned.
+ */
u32 last_data_sent;
u32 last_data_recv;
u32 last_ack_recv;
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index c8ea876bdd03..f35afe64d9cd 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -496,6 +496,7 @@ static void subflow_set_remote_key(struct mptcp_sock *msk,
WRITE_ONCE(msk->remote_key, subflow->remote_key);
WRITE_ONCE(msk->ack_seq, subflow->iasn);
+ WRITE_ONCE(msk->pruned_seq, subflow->iasn);
WRITE_ONCE(msk->can_ack, true);
atomic64_set(&msk->rcv_wnd_sent, subflow->iasn);
--
2.54.0
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH mptcp-next v5 11/12] mptcp: move the retrans loop to a separate helper
2026-05-10 21:03 [PATCH mptcp-next v5 00/12] mptcp: address stall under memory pressure Paolo Abeni
` (9 preceding siblings ...)
2026-05-10 21:03 ` [PATCH mptcp-next v5 10/12] mptcp: track prune recovery status Paolo Abeni
@ 2026-05-10 21:03 ` Paolo Abeni
2026-05-10 21:03 ` [PATCH mptcp-next v5 12/12] mptcp: let the retrans scheduler do its job Paolo Abeni
2026-05-10 22:16 ` [PATCH mptcp-next v5 00/12] mptcp: address stall under memory pressure MPTCP CI
12 siblings, 0 replies; 17+ messages in thread
From: Paolo Abeni @ 2026-05-10 21:03 UTC (permalink / raw)
To: mptcp; +Cc: Shardul Bankar
This is a cleanup to make the next patch simpler.
No functional change intended.
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
net/mptcp/protocol.c | 74 +++++++++++++++++++++++++-------------------
1 file changed, 43 insertions(+), 31 deletions(-)
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index baf3457d37a7..6dc7899c6fc7 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2851,41 +2851,14 @@ static void mptcp_check_fastclose(struct mptcp_sock *msk)
sk_error_report(sk);
}
-static void __mptcp_retrans(struct sock *sk)
+/* Retransmit the specified data fragment on all the selected subflows. */
+static int __mptcp_push_retrans(struct sock *sk, struct mptcp_data_frag *dfrag)
{
struct mptcp_sendmsg_info info = { .data_lock_held = true, };
struct mptcp_sock *msk = mptcp_sk(sk);
struct mptcp_subflow_context *subflow;
- struct mptcp_data_frag *dfrag;
struct sock *ssk;
- int ret, err;
- u16 len = 0;
-
- mptcp_clean_una_wakeup(sk);
-
- /* first check ssk: need to kick "stale" logic */
- err = mptcp_sched_get_retrans(msk);
- dfrag = mptcp_rtx_head(sk);
- if (!dfrag) {
- if (mptcp_data_fin_enabled(msk)) {
- struct inet_connection_sock *icsk = inet_csk(sk);
-
- WRITE_ONCE(icsk->icsk_retransmits,
- icsk->icsk_retransmits + 1);
- mptcp_set_datafin_timeout(sk);
- mptcp_send_ack(msk);
-
- goto reset_timer;
- }
-
- if (!mptcp_send_head(sk))
- goto clear_scheduled;
-
- goto reset_timer;
- }
-
- if (err)
- goto reset_timer;
+ int ret, len = 0;
mptcp_for_each_subflow(msk, subflow) {
if (READ_ONCE(subflow->scheduled)) {
@@ -2913,7 +2886,7 @@ static void __mptcp_retrans(struct sock *sk)
!msk->allow_subflows) {
spin_unlock_bh(&msk->fallback_lock);
release_sock(ssk);
- goto clear_scheduled;
+ return -1;
}
while (info.sent < info.limit) {
@@ -2936,6 +2909,45 @@ static void __mptcp_retrans(struct sock *sk)
release_sock(ssk);
}
}
+ return len;
+}
+
+static void __mptcp_retrans(struct sock *sk)
+{
+ struct mptcp_sock *msk = mptcp_sk(sk);
+ struct mptcp_subflow_context *subflow;
+ struct mptcp_data_frag *dfrag;
+ int err, len;
+
+ mptcp_clean_una_wakeup(sk);
+
+ /* first check ssk: need to kick "stale" logic */
+ err = mptcp_sched_get_retrans(msk);
+ dfrag = mptcp_rtx_head(sk);
+ if (!dfrag) {
+ if (mptcp_data_fin_enabled(msk)) {
+ struct inet_connection_sock *icsk = inet_csk(sk);
+
+ WRITE_ONCE(icsk->icsk_retransmits,
+ icsk->icsk_retransmits + 1);
+ mptcp_set_datafin_timeout(sk);
+ mptcp_send_ack(msk);
+
+ goto reset_timer;
+ }
+
+ if (!mptcp_send_head(sk))
+ goto clear_scheduled;
+
+ goto reset_timer;
+ }
+
+ if (err)
+ goto reset_timer;
+
+ len = __mptcp_push_retrans(sk, dfrag);
+ if (len < 0)
+ goto clear_scheduled;
msk->bytes_retrans += len;
dfrag->already_sent = max(dfrag->already_sent, len);
--
2.54.0
^ permalink raw reply related [flat|nested] 17+ messages in thread
* [PATCH mptcp-next v5 12/12] mptcp: let the retrans scheduler do its job.
2026-05-10 21:03 [PATCH mptcp-next v5 00/12] mptcp: address stall under memory pressure Paolo Abeni
` (10 preceding siblings ...)
2026-05-10 21:03 ` [PATCH mptcp-next v5 11/12] mptcp: move the retrans loop to a separate helper Paolo Abeni
@ 2026-05-10 21:03 ` Paolo Abeni
2026-05-10 22:16 ` [PATCH mptcp-next v5 00/12] mptcp: address stall under memory pressure MPTCP CI
12 siblings, 0 replies; 17+ messages in thread
From: Paolo Abeni @ 2026-05-10 21:03 UTC (permalink / raw)
To: mptcp; +Cc: Shardul Bankar
Currently the MPTCP core enforces that, when the MPTCP-level retrans
timer fires, at most a single dfrag is retransmitted. In some corner
cases it may be necessary to retransmit multiple dfrags, and the MPTCP
socket will need to wait for multiple retrans timeouts to accomplish
that.
Remove the mentioned constraint, allowing the transmission of multiple
dfrags per retrans period, as long as the scheduler keeps selecting
subflows for retransmission and pending data is available in the rtx
queue.
The default scheduler will retransmit a dfrag per available subflow.
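Schematically, the retransmit worker now loops over the rtx queue; a
condensed sketch of the loop added below (locking, the snd_una re-sync
and the dfrag page refcounting are omitted):

	for (;;) {
		/* the default scheduler also kicks the "stale" logic */
		if (mptcp_sched_get_retrans(msk))
			break;

		len = __mptcp_push_retrans(sk, dfrag, retrans_seq);
		if (len < 0)
			break;

		retrans_seq += len;

		/* move to the next dfrag only once the current one has
		 * been fully retransmitted
		 */
		if (before64(retrans_seq, dfrag->data_seq + dfrag->data_len))
			break;

		dfrag = list_is_last(&dfrag->list, &msk->rtx_queue) ?
			NULL : list_next_entry(dfrag, list);
		if (!dfrag || !dfrag->already_sent)
			break;
	}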
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
v4 -> v5:
- fixed already_sent update
v3 -> v4:
- avoid quadratic behavior, fix retrans_seq update
- fix rtx timer re-schedule miss
v2 -> v3:
- fix infinite loop issue (should address tls tests failures)
v1 -> v2:
- fix retrans sequence update (sashiko)
---
net/mptcp/protocol.c | 105 +++++++++++++++++++++++++++++++------------
1 file changed, 77 insertions(+), 28 deletions(-)
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 6dc7899c6fc7..8204d29a844b 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -1232,13 +1232,6 @@ static void __mptcp_clean_una_wakeup(struct sock *sk)
mptcp_write_space(sk);
}
-static void mptcp_clean_una_wakeup(struct sock *sk)
-{
- mptcp_data_lock(sk);
- __mptcp_clean_una_wakeup(sk);
- mptcp_data_unlock(sk);
-}
-
static void mptcp_enter_memory_pressure(struct sock *sk)
{
struct mptcp_subflow_context *subflow;
@@ -2851,8 +2844,12 @@ static void mptcp_check_fastclose(struct mptcp_sock *msk)
sk_error_report(sk);
}
-/* Retransmit the specified data fragment on all the selected subflows. */
-static int __mptcp_push_retrans(struct sock *sk, struct mptcp_data_frag *dfrag)
+/*
+ * Retransmit the specified data fragment on all the selected subflows,
+ * starting from the specified sequence
+ */
+static int __mptcp_push_retrans(struct sock *sk, struct mptcp_data_frag *dfrag,
+ u64 sent_seq)
{
struct mptcp_sendmsg_info info = { .data_lock_held = true, };
struct mptcp_sock *msk = mptcp_sk(sk);
@@ -2862,6 +2859,7 @@ static int __mptcp_push_retrans(struct sock *sk, struct mptcp_data_frag *dfrag)
mptcp_for_each_subflow(msk, subflow) {
if (READ_ONCE(subflow->scheduled)) {
+ u16 offset = sent_seq - dfrag->data_seq;
u16 copied = 0;
mptcp_subflow_set_scheduled(subflow, false);
@@ -2871,9 +2869,12 @@ static int __mptcp_push_retrans(struct sock *sk, struct mptcp_data_frag *dfrag)
lock_sock(ssk);
/* limit retransmission to the bytes already sent on some subflows */
- info.sent = 0;
+ info.sent = offset;
info.limit = READ_ONCE(msk->csum_enabled) ? dfrag->data_len :
dfrag->already_sent;
+ DEBUG_NET_WARN_ON_ONCE(!before64(sent_seq,
+ dfrag->data_seq +
+ info.limit));
/*
* make the whole retrans decision, xmit, disallow
@@ -2917,41 +2918,89 @@ static void __mptcp_retrans(struct sock *sk)
struct mptcp_sock *msk = mptcp_sk(sk);
struct mptcp_subflow_context *subflow;
struct mptcp_data_frag *dfrag;
+ bool retransmitted = false;
+ u64 retrans_seq;
int err, len;
- mptcp_clean_una_wakeup(sk);
-
- /* first check ssk: need to kick "stale" logic */
- err = mptcp_sched_get_retrans(msk);
+ mptcp_data_lock(sk);
+ __mptcp_clean_una_wakeup(sk);
+ retrans_seq = msk->snd_una;
dfrag = mptcp_rtx_head(sk);
+ mptcp_data_unlock(sk);
+ if (!dfrag)
+ goto check_data_fin;
+
+ for (;;) {
+ bool already_retrans;
+
+ /* The scheduler may clean the RTX queue. */
+ get_page(dfrag->page);
+
+ /* The default scheduler will kick "stale" logic. */
+ err = mptcp_sched_get_retrans(msk);
+ if (err) {
+ put_page(dfrag->page);
+ break;
+ }
+
+ /* Incoming acks can have moved the retrans sequence past the
+ * current dfrag; if so, try to start again from the RTX head.
+ */
+ mptcp_data_lock(sk);
+ already_retrans = !dfrag->already_sent ||
+ !before64(msk->snd_una, dfrag->data_seq +
+ dfrag->already_sent);
+ put_page(dfrag->page);
+ if (already_retrans) {
+ __mptcp_clean_una_wakeup(sk);
+ retrans_seq = msk->snd_una;
+ dfrag = mptcp_rtx_head(sk);
+ }
+ mptcp_data_unlock(sk);
+ if (!dfrag)
+ break;
+
+ len = __mptcp_push_retrans(sk, dfrag, retrans_seq);
+ if (len < 0)
+ goto clear_scheduled;
+
+ retransmitted = true;
+ retrans_seq += len;
+ msk->bytes_retrans += len;
+ dfrag->already_sent = max_t(u16, dfrag->already_sent,
+ retrans_seq - dfrag->data_seq);
+
+ /* Attempt the next fragment only if the current one is
+ * completely retransmitted.
+ */
+ if (before64(retrans_seq, dfrag->data_seq + dfrag->data_len))
+ break;
+
+ dfrag = list_is_last(&dfrag->list, &msk->rtx_queue) ?
+ NULL : list_next_entry(dfrag, list);
+ if (!dfrag || !dfrag->already_sent)
+ break;
+ }
+
+ /* Data fin retransmission needed only if no data retransmission took
+ * place, and RTX queue is empty.
+ */
+check_data_fin:
if (!dfrag) {
- if (mptcp_data_fin_enabled(msk)) {
+ if (!retransmitted && mptcp_data_fin_enabled(msk)) {
struct inet_connection_sock *icsk = inet_csk(sk);
WRITE_ONCE(icsk->icsk_retransmits,
icsk->icsk_retransmits + 1);
mptcp_set_datafin_timeout(sk);
mptcp_send_ack(msk);
-
goto reset_timer;
}
if (!mptcp_send_head(sk))
goto clear_scheduled;
-
- goto reset_timer;
}
- if (err)
- goto reset_timer;
-
- len = __mptcp_push_retrans(sk, dfrag);
- if (len < 0)
- goto clear_scheduled;
-
- msk->bytes_retrans += len;
- dfrag->already_sent = max(dfrag->already_sent, len);
-
reset_timer:
mptcp_check_and_set_pending(sk);
--
2.54.0
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH mptcp-next v5 00/12] mptcp: address stall under memory pressure
2026-05-10 21:03 [PATCH mptcp-next v5 00/12] mptcp: address stall under memory pressure Paolo Abeni
` (11 preceding siblings ...)
2026-05-10 21:03 ` [PATCH mptcp-next v5 12/12] mptcp: let the retrans scheduler do its job Paolo Abeni
@ 2026-05-10 22:16 ` MPTCP CI
12 siblings, 0 replies; 17+ messages in thread
From: MPTCP CI @ 2026-05-10 22:16 UTC (permalink / raw)
To: Paolo Abeni; +Cc: mptcp
Hi Paolo,
Thank you for your modifications, that's great!
Our CI did some validations and here is its report:
- KVM Validation: normal (except selftest_mptcp_join): Success! ✅
- KVM Validation: normal (only selftest_mptcp_join): Success! ✅
- KVM Validation: debug (except selftest_mptcp_join): Unstable: 2 failed test(s): packetdrill_fastopen packetdrill_sockopts ⚠️
- KVM Validation: debug (only selftest_mptcp_join): Success! ✅
- KVM Validation: btf-normal (only bpftest_all): Success! ✅
- KVM Validation: btf-debug (only bpftest_all): Success! ✅
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/25640183335
Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/49dce7f2f3dd
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=1092385
If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:
$ cd [kernel source code]
$ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
--pull always mptcp/mptcp-upstream-virtme-docker:latest \
auto-normal
For more details:
https://github.com/multipath-tcp/mptcp-upstream-virtme-docker
Please note that despite all the efforts already made to have a
stable test suite when executed on a public CI like this one, it is
possible that some reported issues are not due to your modifications.
Still, do not hesitate to help us improve that ;-)
Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH mptcp-next v5 01/12] mptcp: do not drop partial packets
2026-05-10 21:03 ` [PATCH mptcp-next v5 01/12] mptcp: do not drop partial packets Paolo Abeni
@ 2026-05-11 10:14 ` Matthieu Baerts
2026-05-11 10:29 ` Matthieu Baerts
0 siblings, 1 reply; 17+ messages in thread
From: Matthieu Baerts @ 2026-05-11 10:14 UTC (permalink / raw)
To: Paolo Abeni, mptcp; +Cc: Shardul Bankar
Hi Paolo, Shardul,
On 10/05/2026 23:03, Paolo Abeni wrote:
> From: Shardul Bankar <shardul.b@mpiricsoftware.com>
>
> When a packet arrives with map_seq < ack_seq < end_seq, the beginning
> of the packet has already been acknowledged but the end contains new
> data. Currently the entire packet is dropped as "old data," forcing
> the sender to retransmit.
>
> Instead, skip the already-acked bytes by adjusting the skb offset and
> enqueue only the new portion. Update bytes_received and ack_seq to
> reflect the new data consumed.
>
> A previous attempt at this fix (commit 1d2ce718811a ("mptcp: do not
> drop partial packets"), reverted in commit bf39160c4218 ("Revert
> "mptcp: do not drop partial packets"")) also added a zero-window
> check and changed rcv_wnd_sent initialization, which caused test
> regressions. This version addresses only the partial packet handling
> without modifying receive window accounting.
>
> Fixes: ab174ad8ef76 ("mptcp: move ooo skbs into msk out of order queue.")
> Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/600
> Signed-off-by: Shardul Bankar <shardul.b@mpiricsoftware.com>
> [pabeni@redhat.com: update map]
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> ---
> v2 -> v3:
> - update map_seq, too
> v2: https://lore.kernel.org/mptcp/20260422143931.43281-1-shardul.b@mpiricsoftware.com/
Thank you for this new version!
I suggest already applying this patch to our tree, so I can send it soon
to netdev.
Cheers,
Matt
--
Sponsored by the NGI0 Core fund.
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH mptcp-next v5 01/12] mptcp: do not drop partial packets
2026-05-11 10:14 ` Matthieu Baerts
@ 2026-05-11 10:29 ` Matthieu Baerts
2026-05-11 10:52 ` Matthieu Baerts
0 siblings, 1 reply; 17+ messages in thread
From: Matthieu Baerts @ 2026-05-11 10:29 UTC (permalink / raw)
To: Paolo Abeni, mptcp; +Cc: Shardul Bankar
On 11/05/2026 12:14, Matthieu Baerts wrote:
> Hi Paolo, Shardul,
>
> On 10/05/2026 23:03, Paolo Abeni wrote:
>> From: Shardul Bankar <shardul.b@mpiricsoftware.com>
>>
>> When a packet arrives with map_seq < ack_seq < end_seq, the beginning
>> of the packet has already been acknowledged but the end contains new
>> data. Currently the entire packet is dropped as "old data," forcing
>> the sender to retransmit.
>>
>> Instead, skip the already-acked bytes by adjusting the skb offset and
>> enqueue only the new portion. Update bytes_received and ack_seq to
>> reflect the new data consumed.
>>
>> A previous attempt at this fix (commit 1d2ce718811a ("mptcp: do not
>> drop partial packets"), reverted in commit bf39160c4218 ("Revert
>> "mptcp: do not drop partial packets"")) also added a zero-window
>> check and changed rcv_wnd_sent initialization, which caused test
>> regressions. This version addresses only the partial packet handling
>> without modifying receive window accounting.
>>
>> Fixes: ab174ad8ef76 ("mptcp: move ooo skbs into msk out of order queue.")
>> Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/600
>> Signed-off-by: Shardul Bankar <shardul.b@mpiricsoftware.com>
>> [pabeni@redhat.com: update map]
>> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
>> ---
>> v2 -> v3:
>> - update map_seq, too
>> v2: https://lore.kernel.org/mptcp/20260422143931.43281-1-shardul.b@mpiricsoftware.com/
>
> Thank you for this new version!
>
> I suggest already applying this patch to our tree, so I can send it soon
> to netdev.
I forgot:
Reviewed-by: Matthieu Baerts (NGI0) <matttbe@kernel.org>
Now in our tree:
New patches for t/upstream-net and t/upstream:
- 7c4bbb8f4f28: mptcp: do not drop partial packets
- Results: 43f725eb6425..c7817ca73dd1 (export-net)
- Results: 31aef5b7fcb7..335465cfffb4 (export)
Tests are now in progress:
- export-net:
https://github.com/multipath-tcp/mptcp_net-next/commit/fe2e6c368bb7e0eaf46b51f4f7fa03a8223b9ea5/checks
- export:
https://github.com/multipath-tcp/mptcp_net-next/commit/52f68f13d42a1c3752de7a776b6f873988f48401/checks
Cheers,
Matt
--
Sponsored by the NGI0 Core fund.
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH mptcp-next v5 01/12] mptcp: do not drop partial packets
2026-05-11 10:29 ` Matthieu Baerts
@ 2026-05-11 10:52 ` Matthieu Baerts
0 siblings, 0 replies; 17+ messages in thread
From: Matthieu Baerts @ 2026-05-11 10:52 UTC (permalink / raw)
To: Paolo Abeni, mptcp; +Cc: Shardul Bankar
On 11/05/2026 12:29, Matthieu Baerts wrote:
> On 11/05/2026 12:14, Matthieu Baerts wrote:
>> Hi Paolo, Shardul,
>>
>> On 10/05/2026 23:03, Paolo Abeni wrote:
>>> From: Shardul Bankar <shardul.b@mpiricsoftware.com>
>>>
>>> When a packet arrives with map_seq < ack_seq < end_seq, the beginning
>>> of the packet has already been acknowledged but the end contains new
>>> data. Currently the entire packet is dropped as "old data," forcing
>>> the sender to retransmit.
>>>
>>> Instead, skip the already-acked bytes by adjusting the skb offset and
>>> enqueue only the new portion. Update bytes_received and ack_seq to
>>> reflect the new data consumed.
>>>
>>> A previous attempt at this fix (commit 1d2ce718811a ("mptcp: do not
>>> drop partial packets"), reverted in commit bf39160c4218 ("Revert
>>> "mptcp: do not drop partial packets"")) also added a zero-window
Note that I also modified this paragraph, to avoid referencing commits
that have not been upstreamed, and to mention that the original code was
from Paolo.
Cheers,
Matt
--
Sponsored by the NGI0 Core fund.
^ permalink raw reply [flat|nested] 17+ messages in thread