* [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing
@ 2025-10-06 8:11 Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 01/10] mptcp: borrow forward memory from subflow Paolo Abeni
` (11 more replies)
0 siblings, 12 replies; 33+ messages in thread
From: Paolo Abeni @ 2025-10-06 8:11 UTC (permalink / raw)
To: mptcp
This series includes RX path improvements built around backlog processing.
The main goals are improving RX performance _and_ increasing
long-term maintainability.
Patches 1-3 prepare the stack for backlog processing, removing
assumptions that will not hold true anymore after backlog introduction.
Patch 4 fixes a long-standing issue which is quite hard to reproduce
with the current implementation but will become very apparent with
backlog usage.
Patches 5, 6 and 8 are more cleanups that will make the backlog patch a
little less huge.
Patch 7 is a somewhat unrelated cleanup, included here before I forget
about it.
The real work is done by patches 9 and 10. Patch 9 introduces the helpers
needed to manipulate the msk-level backlog, and the data structure itself,
without any actual functional change. Patch 10 finally uses the backlog
for RX skb processing. Note that MPTCP can't use the sk_backlog, as
the mptcp release callback can also release and re-acquire the msk-level
spinlock, and core backlog processing works under the assumption that
such an event is not possible.
Other relevant points are:
- skbs in the backlog are _not_ accounted. TCP does the same, and we
can't update the fwd mem while enqueuing to the backlog, as the caller
does not own the msk-level socket lock nor can it acquire it.
- skbs in the backlog still use the incoming ssk rmem. This allows
backpressure and implicitly prevents excessive memory usage for the
backlog itself.
- [this is possibly the most critical point]: when the msk rx buf is
full, we don't add more packets there even when the caller owns the
msk socket lock. Instead packets are added to the backlog. Note that
the amount of memory used there is still limited by the above. Also
note that this implicitly means that such packets could sit in the
backlog until the receiver flushes the rx buffer - an unbounded amount
of time. That is not supposed to happen for a backlog, hence the
criticality here.
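The interplay between the backlog and the receive buffer described in the
last point can be sketched as a small userspace model. All names below are
illustrative, not the kernel helpers; the real __mptcp_space() (touched in
patch 9) additionally scales the byte count via mptcp_win_from_space() and
does not clamp to zero:

```c
#include <assert.h>

/* Userspace sketch: the free receive space must account both for skbs
 * already charged to the msk (rmem_alloc) and for the unaccounted
 * truesize parked in the mptcp-level backlog. Clamped to zero here for
 * simplicity.
 */
static int free_rx_space(int rcvbuf, int rmem_alloc, int backlog_len)
{
	int space = rcvbuf - rmem_alloc - backlog_len;

	return space > 0 ? space : 0;
}
```

Since backlogged skbs keep charging the incoming ssk rmem, a full backlog
shrinks the advertised window and throttles the sender, which is the
back-pressure mechanism the series relies on.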
---
This should address the issues reported by the CI on the previous
iteration (at least here), and features some more patch splits to make
the last one less big. See the individual patches' changelogs for the
details.
Side note: local testing hinted that we have some unrelated/pre-existing
issues with mptcp-level rcvwin management that I think deserve a closer
investigation. Specifically, I observe, especially in the peek tests,
RCVWNDSHARED events even with a single flow - and that is quite
unexpected.
Paolo Abeni (10):
mptcp: borrow forward memory from subflow
mptcp: cleanup fallback data fin reception
mptcp: cleanup fallback dummy mapping generation
mptcp: fix MSG_PEEK stream corruption
mptcp: ensure the kernel PM does not take action too late
mptcp: do not miss early first subflow close event notification.
mptcp: make mptcp_destroy_common() static
mptcp: drop the __mptcp_data_ready() helper
mptcp: introduce mptcp-level backlog
mptcp: leverage the backlog for RX packet processing
net/mptcp/pm.c | 4 +-
net/mptcp/pm_kernel.c | 2 +
net/mptcp/protocol.c | 323 ++++++++++++++++++++++++++++--------------
net/mptcp/protocol.h | 8 +-
net/mptcp/subflow.c | 12 +-
5 files changed, 233 insertions(+), 116 deletions(-)
--
2.51.0
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v5 mptcp-next 01/10] mptcp: borrow forward memory from subflow
2025-10-06 8:11 [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing Paolo Abeni
@ 2025-10-06 8:12 ` Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 02/10] mptcp: cleanup fallback data fin reception Paolo Abeni
` (10 subsequent siblings)
11 siblings, 0 replies; 33+ messages in thread
From: Paolo Abeni @ 2025-10-06 8:12 UTC (permalink / raw)
To: mptcp
In the MPTCP receive path, we release the subflow-allocated
fwd memory just to allocate it again shortly after for the msk.
That could increase the failure chances, especially during
backlog processing, when other actions could consume the just
released memory before the msk socket has a chance to do the
rcv allocation.
Replace the skb_orphan() call with an open-coded variant that
explicitly borrows, with a PAGE_SIZE granularity, the fwd memory
from the subflow socket instead of releasing it. During backlog
processing the borrowed memory is accounted at release_cb time.
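The borrow arithmetic described above can be modeled in userspace as
follows. The helper name is hypothetical and PAGE_SIZE is fixed at 4 KiB
for the sketch; the real code in the hunk below operates on struct sock
fields via sk_forward_alloc_add():

```c
#include <assert.h>

#define PAGE_SIZE 4096

/* Model of the borrow step: round the subflow's spare forward-allocated
 * memory down to a page boundary, hand that amount to the msk, and
 * charge the skb truesize back to the subflow.
 */
static int borrow_fwd_mem(int *ssk_fwd_alloc, int unused_reserved, int truesize)
{
	int borrowed = (*ssk_fwd_alloc - unused_reserved) & ~(PAGE_SIZE - 1);

	/* the ssk keeps the sub-page remainder plus the charged truesize */
	*ssk_fwd_alloc += truesize - borrowed;
	return borrowed; /* to be added to the msk forward allocation */
}
```

With 10000 bytes of spare fwd memory on the ssk, 8192 bytes (two pages)
move to the msk while the remainder, plus the skb truesize, stays charged
to the subflow; if less than a page is spare, nothing is borrowed.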
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
v1 -> v2:
- rebased
- explain why skb_orphan is removed
---
net/mptcp/protocol.c | 19 +++++++++++++++----
1 file changed, 15 insertions(+), 4 deletions(-)
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 574a1e222d9cf..34661ab979158 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -337,11 +337,12 @@ static void mptcp_data_queue_ofo(struct mptcp_sock *msk, struct sk_buff *skb)
mptcp_rcvbuf_grow(sk);
}
-static void mptcp_init_skb(struct sock *ssk,
- struct sk_buff *skb, int offset, int copy_len)
+static int mptcp_init_skb(struct sock *ssk,
+ struct sk_buff *skb, int offset, int copy_len)
{
const struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
bool has_rxtstamp = TCP_SKB_CB(skb)->has_rxtstamp;
+ int borrowed;
/* the skb map_seq accounts for the skb offset:
* mptcp_subflow_get_mapped_dsn() is based on the current tp->copied_seq
@@ -357,6 +358,13 @@ static void mptcp_init_skb(struct sock *ssk,
skb_ext_reset(skb);
skb_dst_drop(skb);
+
+ /* "borrow" the fwd memory from the subflow, instead of reclaiming it */
+ skb->destructor = NULL;
+ borrowed = ssk->sk_forward_alloc - sk_unused_reserved_mem(ssk);
+ borrowed &= ~(PAGE_SIZE - 1);
+ sk_forward_alloc_add(ssk, skb->truesize - borrowed);
+ return borrowed;
}
static bool __mptcp_move_skb(struct sock *sk, struct sk_buff *skb)
@@ -690,9 +698,12 @@ static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
if (offset < skb->len) {
size_t len = skb->len - offset;
+ int bmem;
- mptcp_init_skb(ssk, skb, offset, len);
- skb_orphan(skb);
+ bmem = mptcp_init_skb(ssk, skb, offset, len);
+ skb->sk = NULL;
+ sk_forward_alloc_add(sk, bmem);
+ atomic_sub(skb->truesize, &ssk->sk_rmem_alloc);
ret = __mptcp_move_skb(sk, skb) || ret;
seq += len;
--
2.51.0
* [PATCH v5 mptcp-next 02/10] mptcp: cleanup fallback data fin reception
2025-10-06 8:11 [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 01/10] mptcp: borrow forward memory from subflow Paolo Abeni
@ 2025-10-06 8:12 ` Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 03/10] mptcp: cleanup fallback dummy mapping generation Paolo Abeni
` (9 subsequent siblings)
11 siblings, 0 replies; 33+ messages in thread
From: Paolo Abeni @ 2025-10-06 8:12 UTC (permalink / raw)
To: mptcp
MPTCP currently generates a dummy data_fin for fallback sockets
when the fallback subflow has completed data reception, using
the current ack_seq.
We are going to introduce backlog usage for the msk soon, even
for fallback sockets: the ack_seq value will not match the most recent
sequence number seen by the fallback subflow socket, as it will ignore
data_seq sitting in the backlog.
Instead use the last map sequence number to set the data_fin,
as fallback (dummy) map sequences are always in sequence.
Reviewed-by: Geliang Tang <geliang@kernel.org>
Tested-by: Geliang Tang <geliang@kernel.org>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
v2 -> v3:
- keep the close check in subflow_sched_work_if_closed, fix
CI failures
---
net/mptcp/subflow.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index e8325890a3223..b9455c04e8a46 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -1285,6 +1285,7 @@ static bool subflow_is_done(const struct sock *sk)
/* sched mptcp worker for subflow cleanup if no more data is pending */
static void subflow_sched_work_if_closed(struct mptcp_sock *msk, struct sock *ssk)
{
+ const struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
struct sock *sk = (struct sock *)msk;
if (likely(ssk->sk_state != TCP_CLOSE &&
@@ -1303,7 +1304,8 @@ static void subflow_sched_work_if_closed(struct mptcp_sock *msk, struct sock *ss
*/
if (__mptcp_check_fallback(msk) && subflow_is_done(ssk) &&
msk->first == ssk &&
- mptcp_update_rcv_data_fin(msk, READ_ONCE(msk->ack_seq), true))
+ mptcp_update_rcv_data_fin(msk, subflow->map_seq +
+ subflow->map_data_len, true))
mptcp_schedule_work(sk);
}
--
2.51.0
* [PATCH v5 mptcp-next 03/10] mptcp: cleanup fallback dummy mapping generation
2025-10-06 8:11 [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 01/10] mptcp: borrow forward memory from subflow Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 02/10] mptcp: cleanup fallback data fin reception Paolo Abeni
@ 2025-10-06 8:12 ` Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 04/10] mptcp: fix MSG_PEEK stream corruption Paolo Abeni
` (8 subsequent siblings)
11 siblings, 0 replies; 33+ messages in thread
From: Paolo Abeni @ 2025-10-06 8:12 UTC (permalink / raw)
To: mptcp
MPTCP currently accesses ack_seq outside the msk socket lock scope to
generate the dummy mapping for fallback sockets. Soon we are going
to introduce backlog usage, and even for fallback sockets the ack_seq
value will be significantly off outside of the msk socket lock scope.
Avoid relying on ack_seq for dummy mapping generation, using instead
the subflow sequence number. Note that in case of disconnect() and
(re)connect() we must ensure that any previous state is reset.
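The 32-bit to 64-bit sequence expansion that __mptcp_expand_seq() performs
in the hunk below can be modeled in userspace like this (a sketch in the
spirit of the kernel helper, not its source; `expand_seq` and `before32`
are illustrative names):

```c
#include <assert.h>
#include <stdint.h>

/* Classic TCP-style signed-distance comparison on 32-bit sequences */
static int before32(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b) < 0;
}

/* Expand a 32-bit sequence into 64 bits: keep the high bits of the last
 * known 64-bit sequence, and bump the epoch when the 32-bit counter is
 * numerically smaller but logically newer (i.e. it wrapped).
 */
static uint64_t expand_seq(uint64_t old_seq, uint32_t cur_seq32)
{
	uint32_t old_seq32 = (uint32_t)old_seq;
	uint64_t cur_seq = (old_seq & 0xffffffff00000000ULL) + cur_seq32;

	if (cur_seq32 < old_seq32 && before32(old_seq32, cur_seq32))
		return cur_seq + (1ULL << 32);
	return cur_seq;
}
```

This is what lets the dummy mapping derive a monotonic 64-bit map_seq from
the subflow's 32-bit TCP sequence numbers without consulting ack_seq.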
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
v2 -> v3:
- reordered before the backlog introduction to avoid transiently
breaking the fallback
- explicitly reset ack_seq
---
net/mptcp/protocol.c | 3 +++
net/mptcp/subflow.c | 8 +++++++-
2 files changed, 10 insertions(+), 1 deletion(-)
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 34661ab979158..12f201aa81f43 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -3234,6 +3234,9 @@ static int mptcp_disconnect(struct sock *sk, int flags)
msk->bytes_retrans = 0;
msk->rcvspace_init = 0;
+ /* for fallback's sake */
+ WRITE_ONCE(msk->ack_seq, 0);
+
WRITE_ONCE(sk->sk_shutdown, 0);
sk_error_report(sk);
return 0;
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index b9455c04e8a46..ac8616e7521e8 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -491,6 +491,9 @@ static void subflow_set_remote_key(struct mptcp_sock *msk,
mptcp_crypto_key_sha(subflow->remote_key, NULL, &subflow->iasn);
subflow->iasn++;
+ /* for fallback's sake */
+ subflow->map_seq = subflow->iasn;
+
WRITE_ONCE(msk->remote_key, subflow->remote_key);
WRITE_ONCE(msk->ack_seq, subflow->iasn);
WRITE_ONCE(msk->can_ack, true);
@@ -1435,9 +1438,12 @@ static bool subflow_check_data_avail(struct sock *ssk)
skb = skb_peek(&ssk->sk_receive_queue);
subflow->map_valid = 1;
- subflow->map_seq = READ_ONCE(msk->ack_seq);
subflow->map_data_len = skb->len;
subflow->map_subflow_seq = tcp_sk(ssk)->copied_seq - subflow->ssn_offset;
+ subflow->map_seq = __mptcp_expand_seq(subflow->map_seq,
+ subflow->iasn +
+ TCP_SKB_CB(skb)->seq -
+ subflow->ssn_offset - 1);
WRITE_ONCE(subflow->data_avail, true);
return true;
}
--
2.51.0
* [PATCH v5 mptcp-next 04/10] mptcp: fix MSG_PEEK stream corruption
2025-10-06 8:11 [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing Paolo Abeni
` (2 preceding siblings ...)
2025-10-06 8:12 ` [PATCH v5 mptcp-next 03/10] mptcp: cleanup fallback dummy mapping generation Paolo Abeni
@ 2025-10-06 8:12 ` Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 05/10] mptcp: ensure the kernel PM does not take action too late Paolo Abeni
` (7 subsequent siblings)
11 siblings, 0 replies; 33+ messages in thread
From: Paolo Abeni @ 2025-10-06 8:12 UTC (permalink / raw)
To: mptcp
If a MSG_PEEK | MSG_WAITALL read operation consumes all the bytes in the
receive queue and recvmsg() needs to wait for more data - i.e. it's a
blocking one - upon arrival of the next packet the MPTCP protocol will
start again copying the oldest data present in the receive queue,
corrupting the data stream.
Address the issue by explicitly tracking the peeked sequence number,
restarting from the last peeked byte.
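The skip logic the fix adds to __mptcp_recvmsg_mskq() can be modeled in
userspace: given the per-skb data lengths and the number of bytes already
peeked, find the first skb with unpeeked data and the offset of its peeked
prefix (illustrative names, not the kernel code):

```c
#include <assert.h>

/* Return the index of the first skb holding not-yet-peeked data and
 * store the in-skb offset where peeking must resume. Whole skbs covered
 * by copied_total are skipped; a partially peeked skb contributes its
 * peeked prefix as the resume offset.
 */
static int peek_resume(const int *skb_len, int nr_skb, int copied_total,
		       int *offset)
{
	int i, total = 0;

	for (i = 0; i < nr_skb; i++) {
		if (total + skb_len[i] > copied_total) {
			*offset = copied_total - total;
			return i;
		}
		total += skb_len[i];
	}
	*offset = 0;
	return nr_skb; /* everything already peeked */
}
```

Without this, a second blocking peek would restart from index 0 / offset 0
and re-deliver the oldest queued bytes, which is exactly the corruption
the patch describes.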
Fixes: ca4fb892579f ("mptcp: add MSG_PEEK support")
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
This may sound quite esoteric, but it will soon become very easy to
reproduce with mptcp_connect, thanks to the backlog.
---
net/mptcp/protocol.c | 38 +++++++++++++++++++++++++-------------
1 file changed, 25 insertions(+), 13 deletions(-)
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 12f201aa81f43..ce1238f620c33 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -1947,22 +1947,36 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
static void mptcp_rcv_space_adjust(struct mptcp_sock *msk, int copied);
-static int __mptcp_recvmsg_mskq(struct sock *sk,
- struct msghdr *msg,
- size_t len, int flags,
+static int __mptcp_recvmsg_mskq(struct sock *sk, struct msghdr *msg,
+ size_t len, int flags, int copied_total,
struct scm_timestamping_internal *tss,
int *cmsg_flags)
{
struct mptcp_sock *msk = mptcp_sk(sk);
struct sk_buff *skb, *tmp;
+ int total_data_len = 0;
int copied = 0;
skb_queue_walk_safe(&sk->sk_receive_queue, skb, tmp) {
- u32 offset = MPTCP_SKB_CB(skb)->offset;
+ u32 delta, offset = MPTCP_SKB_CB(skb)->offset;
u32 data_len = skb->len - offset;
- u32 count = min_t(size_t, len - copied, data_len);
+ u32 count;
int err;
+ if (flags & MSG_PEEK) {
+ /* skip already peeked skbs*/
+ if (total_data_len + data_len <= copied_total) {
+ total_data_len += data_len;
+ continue;
+ }
+
+ /* skip the already peeked data in the current skb */
+ delta = copied_total - total_data_len;
+ offset += delta;
+ data_len -= delta;
+ }
+
+ count = min_t(size_t, len - copied, data_len);
if (!(flags & MSG_TRUNC)) {
err = skb_copy_datagram_msg(skb, offset, msg, count);
if (unlikely(err < 0)) {
@@ -1979,16 +1993,14 @@ static int __mptcp_recvmsg_mskq(struct sock *sk,
copied += count;
- if (count < data_len) {
- if (!(flags & MSG_PEEK)) {
+ if (!(flags & MSG_PEEK)) {
+ msk->bytes_consumed += count;
+ if (count < data_len) {
MPTCP_SKB_CB(skb)->offset += count;
MPTCP_SKB_CB(skb)->map_seq += count;
- msk->bytes_consumed += count;
+ break;
}
- break;
- }
- if (!(flags & MSG_PEEK)) {
/* avoid the indirect call, we know the destructor is sock_rfree */
skb->destructor = NULL;
skb->sk = NULL;
@@ -1996,7 +2008,6 @@ static int __mptcp_recvmsg_mskq(struct sock *sk,
sk_mem_uncharge(sk, skb->truesize);
__skb_unlink(skb, &sk->sk_receive_queue);
skb_attempt_defer_free(skb);
- msk->bytes_consumed += count;
}
if (copied >= len)
@@ -2194,7 +2205,8 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
while (copied < len) {
int err, bytes_read;
- bytes_read = __mptcp_recvmsg_mskq(sk, msg, len - copied, flags, &tss, &cmsg_flags);
+ bytes_read = __mptcp_recvmsg_mskq(sk, msg, len - copied, flags,
+ copied, &tss, &cmsg_flags);
if (unlikely(bytes_read < 0)) {
if (!copied)
copied = bytes_read;
--
2.51.0
* [PATCH v5 mptcp-next 05/10] mptcp: ensure the kernel PM does not take action too late
2025-10-06 8:11 [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing Paolo Abeni
` (3 preceding siblings ...)
2025-10-06 8:12 ` [PATCH v5 mptcp-next 04/10] mptcp: fix MSG_PEEK stream corruption Paolo Abeni
@ 2025-10-06 8:12 ` Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 06/10] mptcp: do not miss early first subflow close event notification Paolo Abeni
` (6 subsequent siblings)
11 siblings, 0 replies; 33+ messages in thread
From: Paolo Abeni @ 2025-10-06 8:12 UTC (permalink / raw)
To: mptcp
The PM hooks can currently take place when the msk is already
shutting down. Subflow creation will fail, thanks to the existing
check at join time, but we can entirely avoid starting the
doomed-to-fail operations.
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
net/mptcp/pm.c | 4 +++-
net/mptcp/pm_kernel.c | 2 ++
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/net/mptcp/pm.c b/net/mptcp/pm.c
index daf6dcb806843..eade530d38e01 100644
--- a/net/mptcp/pm.c
+++ b/net/mptcp/pm.c
@@ -588,6 +588,7 @@ void mptcp_pm_subflow_established(struct mptcp_sock *msk)
void mptcp_pm_subflow_check_next(struct mptcp_sock *msk,
const struct mptcp_subflow_context *subflow)
{
+ struct sock *sk = (struct sock *)msk;
struct mptcp_pm_data *pm = &msk->pm;
bool update_subflows;
@@ -611,7 +612,8 @@ void mptcp_pm_subflow_check_next(struct mptcp_sock *msk,
/* Even if this subflow is not really established, tell the PM to try
* to pick the next ones, if possible.
*/
- if (mptcp_pm_nl_check_work_pending(msk))
+ if (mptcp_is_fully_established(sk) &&
+ mptcp_pm_nl_check_work_pending(msk))
mptcp_pm_schedule_work(msk, MPTCP_PM_SUBFLOW_ESTABLISHED);
spin_unlock_bh(&pm->lock);
diff --git a/net/mptcp/pm_kernel.c b/net/mptcp/pm_kernel.c
index da431da16ae04..07b5142004e73 100644
--- a/net/mptcp/pm_kernel.c
+++ b/net/mptcp/pm_kernel.c
@@ -328,6 +328,8 @@ static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
struct mptcp_pm_local local;
mptcp_mpc_endpoint_setup(msk);
+ if (!mptcp_is_fully_established(sk))
+ return;
pr_debug("local %d:%d signal %d:%d subflows %d:%d\n",
msk->pm.local_addr_used, endp_subflow_max,
--
2.51.0
* [PATCH v5 mptcp-next 06/10] mptcp: do not miss early first subflow close event notification.
2025-10-06 8:11 [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing Paolo Abeni
` (4 preceding siblings ...)
2025-10-06 8:12 ` [PATCH v5 mptcp-next 05/10] mptcp: ensure the kernel PM does not take action too late Paolo Abeni
@ 2025-10-06 8:12 ` Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 07/10] mptcp: make mptcp_destroy_common() static Paolo Abeni
` (5 subsequent siblings)
11 siblings, 0 replies; 33+ messages in thread
From: Paolo Abeni @ 2025-10-06 8:12 UTC (permalink / raw)
To: mptcp
The MPTCP protocol currently does not emit the NL event when the first
subflow is closed before msk accept() time.
By replacing the close helper in use in such a scenario, implicitly
introduce the missing notification. Note that in such a scenario we
want to be sure that mptcp_close_ssk() will not trigger any PM work;
move the msk state update earlier, so that the previous patch will
offer such a guarantee.
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
net/mptcp/protocol.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index ce1238f620c33..6ae5ab7595272 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -3988,10 +3988,10 @@ static int mptcp_stream_accept(struct socket *sock, struct socket *newsock,
* deal with bad peers not doing a complete shutdown.
*/
if (unlikely(inet_sk_state_load(msk->first) == TCP_CLOSE)) {
- __mptcp_close_ssk(newsk, msk->first,
- mptcp_subflow_ctx(msk->first), 0);
if (unlikely(list_is_singular(&msk->conn_list)))
mptcp_set_state(newsk, TCP_CLOSE);
+ mptcp_close_ssk(newsk, msk->first,
+ mptcp_subflow_ctx(msk->first));
}
} else {
tcpfallback:
--
2.51.0
* [PATCH v5 mptcp-next 07/10] mptcp: make mptcp_destroy_common() static
2025-10-06 8:11 [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing Paolo Abeni
` (5 preceding siblings ...)
2025-10-06 8:12 ` [PATCH v5 mptcp-next 06/10] mptcp: do not miss early first subflow close event notification Paolo Abeni
@ 2025-10-06 8:12 ` Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 08/10] mptcp: drop the __mptcp_data_ready() helper Paolo Abeni
` (4 subsequent siblings)
11 siblings, 0 replies; 33+ messages in thread
From: Paolo Abeni @ 2025-10-06 8:12 UTC (permalink / raw)
To: mptcp
This function is only used inside protocol.c; there is no need
to expose it to the whole stack.
Note that the function definition must be moved earlier to avoid
a forward declaration.
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
net/mptcp/protocol.c | 42 +++++++++++++++++++++---------------------
net/mptcp/protocol.h | 2 --
2 files changed, 21 insertions(+), 23 deletions(-)
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 6ae5ab7595272..e354f16f4a79f 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -3195,6 +3195,27 @@ static void mptcp_copy_inaddrs(struct sock *msk, const struct sock *ssk)
inet_sk(msk)->inet_rcv_saddr = inet_sk(ssk)->inet_rcv_saddr;
}
+static void mptcp_destroy_common(struct mptcp_sock *msk, unsigned int flags)
+{
+ struct mptcp_subflow_context *subflow, *tmp;
+ struct sock *sk = (struct sock *)msk;
+
+ __mptcp_clear_xmit(sk);
+
+ /* join list will be eventually flushed (with rst) at sock lock release time */
+ mptcp_for_each_subflow_safe(msk, subflow, tmp)
+ __mptcp_close_ssk(sk, mptcp_subflow_tcp_sock(subflow), subflow, flags);
+
+ __skb_queue_purge(&sk->sk_receive_queue);
+ skb_rbtree_purge(&msk->out_of_order_queue);
+
+ /* move all the rx fwd alloc into the sk_mem_reclaim_final in
+ * inet_sock_destruct() will dispose it
+ */
+ mptcp_token_destroy(msk);
+ mptcp_pm_destroy(msk);
+}
+
static int mptcp_disconnect(struct sock *sk, int flags)
{
struct mptcp_sock *msk = mptcp_sk(sk);
@@ -3399,27 +3420,6 @@ void mptcp_rcv_space_init(struct mptcp_sock *msk, const struct sock *ssk)
msk->rcvq_space.space = TCP_INIT_CWND * TCP_MSS_DEFAULT;
}
-void mptcp_destroy_common(struct mptcp_sock *msk, unsigned int flags)
-{
- struct mptcp_subflow_context *subflow, *tmp;
- struct sock *sk = (struct sock *)msk;
-
- __mptcp_clear_xmit(sk);
-
- /* join list will be eventually flushed (with rst) at sock lock release time */
- mptcp_for_each_subflow_safe(msk, subflow, tmp)
- __mptcp_close_ssk(sk, mptcp_subflow_tcp_sock(subflow), subflow, flags);
-
- __skb_queue_purge(&sk->sk_receive_queue);
- skb_rbtree_purge(&msk->out_of_order_queue);
-
- /* move all the rx fwd alloc into the sk_mem_reclaim_final in
- * inet_sock_destruct() will dispose it
- */
- mptcp_token_destroy(msk);
- mptcp_pm_destroy(msk);
-}
-
static void mptcp_destroy(struct sock *sk)
{
struct mptcp_sock *msk = mptcp_sk(sk);
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index 0545eab231250..46d8432c72ee7 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -979,8 +979,6 @@ static inline void mptcp_propagate_sndbuf(struct sock *sk, struct sock *ssk)
local_bh_enable();
}
-void mptcp_destroy_common(struct mptcp_sock *msk, unsigned int flags);
-
#define MPTCP_TOKEN_MAX_RETRIES 4
void __init mptcp_token_init(void);
--
2.51.0
* [PATCH v5 mptcp-next 08/10] mptcp: drop the __mptcp_data_ready() helper
2025-10-06 8:11 [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing Paolo Abeni
` (6 preceding siblings ...)
2025-10-06 8:12 ` [PATCH v5 mptcp-next 07/10] mptcp: make mptcp_destroy_common() static Paolo Abeni
@ 2025-10-06 8:12 ` Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 09/10] mptcp: introduce mptcp-level backlog Paolo Abeni
` (3 subsequent siblings)
11 siblings, 0 replies; 33+ messages in thread
From: Paolo Abeni @ 2025-10-06 8:12 UTC (permalink / raw)
To: mptcp
It adds little clarity and there is a single user of this helper;
just inline it into the caller.
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
- v4 -> v5:
split out of main backlog patch, to make the latter smaller
---
net/mptcp/protocol.c | 20 ++++++++------------
1 file changed, 8 insertions(+), 12 deletions(-)
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index e354f16f4a79f..05ee6bd26b7fa 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -838,18 +838,10 @@ static bool move_skbs_to_msk(struct mptcp_sock *msk, struct sock *ssk)
return moved;
}
-static void __mptcp_data_ready(struct sock *sk, struct sock *ssk)
-{
- struct mptcp_sock *msk = mptcp_sk(sk);
-
- /* Wake-up the reader only for in-sequence data */
- if (move_skbs_to_msk(msk, ssk) && mptcp_epollin_ready(sk))
- sk->sk_data_ready(sk);
-}
-
void mptcp_data_ready(struct sock *sk, struct sock *ssk)
{
struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
+ struct mptcp_sock *msk = mptcp_sk(sk);
/* The peer can send data while we are shutting down this
* subflow at msk destruction time, but we must avoid enqueuing
@@ -859,10 +851,14 @@ void mptcp_data_ready(struct sock *sk, struct sock *ssk)
return;
mptcp_data_lock(sk);
- if (!sock_owned_by_user(sk))
- __mptcp_data_ready(sk, ssk);
- else
+ if (!sock_owned_by_user(sk)) {
+ /* Wake-up the reader only for in-sequence data */
+ if (move_skbs_to_msk(msk, ssk) && mptcp_epollin_ready(sk))
+ sk->sk_data_ready(sk);
+
+ } else {
__set_bit(MPTCP_DEQUEUE, &mptcp_sk(sk)->cb_flags);
+ }
mptcp_data_unlock(sk);
}
--
2.51.0
* [PATCH v5 mptcp-next 09/10] mptcp: introduce mptcp-level backlog
2025-10-06 8:11 [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing Paolo Abeni
` (7 preceding siblings ...)
2025-10-06 8:12 ` [PATCH v5 mptcp-next 08/10] mptcp: drop the __mptcp_data_ready() helper Paolo Abeni
@ 2025-10-06 8:12 ` Paolo Abeni
2025-10-08 3:09 ` Geliang Tang
2025-10-20 19:45 ` Mat Martineau
2025-10-06 8:12 ` [PATCH v5 mptcp-next 10/10] mptcp: leverage the backlog for RX packet processing Paolo Abeni
` (2 subsequent siblings)
11 siblings, 2 replies; 33+ messages in thread
From: Paolo Abeni @ 2025-10-06 8:12 UTC (permalink / raw)
To: mptcp
We will soon use it for incoming data processing.
MPTCP can't leverage the sk_backlog, as the latter is processed
before the release callback, and such a callback for MPTCP releases
and re-acquires the socket spinlock, breaking the sk_backlog
processing assumption.
Add an skb backlog list inside the mptcp sock struct, and implement
basic helpers to transfer packets to, and purge, such a list.
Packets in the backlog are not memory accounted, but still use the
incoming subflow receive memory, to allow back-pressure.
No packet is currently added to the backlog, so no functional changes
are intended here.
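The coalescing precondition used when appending to the backlog (see
__mptcp_add_backlog() in the hunk below) can be sketched with hypothetical
userspace types: the new skb must start exactly where the tail ends and
come from the same subflow socket, since backlogged skbs still hold a
reference to their incoming ssk rmem:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-in for the fields the backlog coalesce check looks at */
struct bl_skb {
	unsigned long long map_seq; /* first mapped byte */
	unsigned long long end_seq; /* one past the last mapped byte */
	const void *ssk;            /* owning subflow socket */
};

/* A new skb can be merged into the backlog tail only when it is
 * contiguous in the data sequence space and charged to the same ssk.
 */
static int can_coalesce(const struct bl_skb *tail, const struct bl_skb *skb)
{
	return tail && skb->map_seq == tail->end_seq && skb->ssk == tail->ssk;
}
```

When the check passes, the patch merges the payload and only grows
backlog_len by the truesize delta; otherwise the skb is linked at the
tail and accounted in full.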
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
v4 -> v5:
- split out of the next patch, to make the latter smaller
- set a custom destructor for skbs in the backlog; this avoids
duplicate code, and fixes a few places where the needed ssk cleanup
was not performed.
- factor out the backlog purge in a new helper,
use spinlock protection, clear the backlog list and zero the
backlog len
- explicitly init the backlog_len at mptcp_init_sock() time
---
net/mptcp/protocol.c | 70 +++++++++++++++++++++++++++++++++++++++++---
net/mptcp/protocol.h | 4 +++
2 files changed, 70 insertions(+), 4 deletions(-)
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 05ee6bd26b7fa..2d5d3da67d1ac 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -337,6 +337,11 @@ static void mptcp_data_queue_ofo(struct mptcp_sock *msk, struct sk_buff *skb)
mptcp_rcvbuf_grow(sk);
}
+static void mptcp_bl_free(struct sk_buff *skb)
+{
+ atomic_sub(skb->truesize, &skb->sk->sk_rmem_alloc);
+}
+
static int mptcp_init_skb(struct sock *ssk,
struct sk_buff *skb, int offset, int copy_len)
{
@@ -360,7 +365,7 @@ static int mptcp_init_skb(struct sock *ssk,
skb_dst_drop(skb);
/* "borrow" the fwd memory from the subflow, instead of reclaiming it */
- skb->destructor = NULL;
+ skb->destructor = mptcp_bl_free;
borrowed = ssk->sk_forward_alloc - sk_unused_reserved_mem(ssk);
borrowed &= ~(PAGE_SIZE - 1);
sk_forward_alloc_add(ssk, skb->truesize - borrowed);
@@ -373,6 +378,13 @@ static bool __mptcp_move_skb(struct sock *sk, struct sk_buff *skb)
struct mptcp_sock *msk = mptcp_sk(sk);
struct sk_buff *tail;
+ /* Avoid the indirect call overhead, we know destructor is
+ * mptcp_bl_free at this point.
+ */
+ atomic_sub(skb->truesize, &skb->sk->sk_rmem_alloc);
+ skb->sk = NULL;
+ skb->destructor = NULL;
+
/* try to fetch required memory from subflow */
if (!sk_rmem_schedule(sk, skb, skb->truesize)) {
MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RCVPRUNED);
@@ -654,6 +666,35 @@ static void mptcp_dss_corruption(struct mptcp_sock *msk, struct sock *ssk)
}
}
+static void __mptcp_add_backlog(struct sock *sk, struct sk_buff *skb)
+{
+ struct mptcp_sock *msk = mptcp_sk(sk);
+ struct sk_buff *tail = NULL;
+ bool fragstolen;
+ int delta;
+
+ if (unlikely(sk->sk_state == TCP_CLOSE)) {
+ kfree_skb_reason(skb, SKB_DROP_REASON_SOCKET_CLOSE);
+ return;
+ }
+
+ /* Try to coalesce with the last skb in our backlog */
+ if (!list_empty(&msk->backlog_list))
+ tail = list_last_entry(&msk->backlog_list, struct sk_buff, list);
+
+ if (tail && MPTCP_SKB_CB(skb)->map_seq == MPTCP_SKB_CB(tail)->end_seq &&
+ skb->sk == tail->sk &&
+ __mptcp_try_coalesce(sk, tail, skb, &fragstolen, &delta)) {
+ skb->truesize -= delta;
+ kfree_skb_partial(skb, fragstolen);
+ WRITE_ONCE(msk->backlog_len, msk->backlog_len + delta);
+ return;
+ }
+
+ list_add_tail(&skb->list, &msk->backlog_list);
+ WRITE_ONCE(msk->backlog_len, msk->backlog_len + skb->truesize);
+}
+
static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
struct sock *ssk)
{
@@ -701,10 +742,12 @@ static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
int bmem;
bmem = mptcp_init_skb(ssk, skb, offset, len);
- skb->sk = NULL;
sk_forward_alloc_add(sk, bmem);
- atomic_sub(skb->truesize, &ssk->sk_rmem_alloc);
- ret = __mptcp_move_skb(sk, skb) || ret;
+
+ if (true)
+ ret |= __mptcp_move_skb(sk, skb);
+ else
+ __mptcp_add_backlog(sk, skb);
seq += len;
if (unlikely(map_remaining < len)) {
@@ -2753,12 +2796,28 @@ static void mptcp_mp_fail_no_response(struct mptcp_sock *msk)
unlock_sock_fast(ssk, slow);
}
+static void mptcp_backlog_purge(struct sock *sk)
+{
+ struct mptcp_sock *msk = mptcp_sk(sk);
+ struct sk_buff *tmp, *skb;
+ LIST_HEAD(backlog);
+
+ mptcp_data_lock(sk);
+ list_splice_init(&msk->backlog_list, &backlog);
+ msk->backlog_len = 0;
+ mptcp_data_unlock(sk);
+
+ list_for_each_entry_safe(skb, tmp, &backlog, list)
+ kfree_skb_reason(skb, SKB_DROP_REASON_SOCKET_CLOSE);
+}
+
static void mptcp_do_fastclose(struct sock *sk)
{
struct mptcp_subflow_context *subflow, *tmp;
struct mptcp_sock *msk = mptcp_sk(sk);
mptcp_set_state(sk, TCP_CLOSE);
+ mptcp_backlog_purge(sk);
mptcp_for_each_subflow_safe(msk, subflow, tmp)
__mptcp_close_ssk(sk, mptcp_subflow_tcp_sock(subflow),
subflow, MPTCP_CF_FASTCLOSE);
@@ -2816,11 +2875,13 @@ static void __mptcp_init_sock(struct sock *sk)
INIT_LIST_HEAD(&msk->conn_list);
INIT_LIST_HEAD(&msk->join_list);
INIT_LIST_HEAD(&msk->rtx_queue);
+ INIT_LIST_HEAD(&msk->backlog_list);
INIT_WORK(&msk->work, mptcp_worker);
msk->out_of_order_queue = RB_ROOT;
msk->first_pending = NULL;
msk->timer_ival = TCP_RTO_MIN;
msk->scaling_ratio = TCP_DEFAULT_SCALING_RATIO;
+ msk->backlog_len = 0;
WRITE_ONCE(msk->first, NULL);
inet_csk(sk)->icsk_sync_mss = mptcp_sync_mss;
@@ -3197,6 +3258,7 @@ static void mptcp_destroy_common(struct mptcp_sock *msk, unsigned int flags)
struct sock *sk = (struct sock *)msk;
__mptcp_clear_xmit(sk);
+ mptcp_backlog_purge(sk);
/* join list will be eventually flushed (with rst) at sock lock release time */
mptcp_for_each_subflow_safe(msk, subflow, tmp)
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index 46d8432c72ee7..a21c4955f4cfb 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -358,6 +358,9 @@ struct mptcp_sock {
* allow_infinite_fallback and
* allow_join
*/
+
+ struct list_head backlog_list; /*protected by the data lock */
+ u32 backlog_len;
};
#define mptcp_data_lock(sk) spin_lock_bh(&(sk)->sk_lock.slock)
@@ -408,6 +411,7 @@ static inline int mptcp_space_from_win(const struct sock *sk, int win)
static inline int __mptcp_space(const struct sock *sk)
{
return mptcp_win_from_space(sk, READ_ONCE(sk->sk_rcvbuf) -
+ READ_ONCE(mptcp_sk(sk)->backlog_len) -
sk_rmem_alloc_get(sk));
}
--
2.51.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v5 mptcp-next 10/10] mptcp: leverage the backlog for RX packet processing
2025-10-06 8:11 [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing Paolo Abeni
` (8 preceding siblings ...)
2025-10-06 8:12 ` [PATCH v5 mptcp-next 09/10] mptcp: introduce mptcp-level backlog Paolo Abeni
@ 2025-10-06 8:12 ` Paolo Abeni
2025-10-20 23:32 ` Mat Martineau
2025-10-06 17:07 ` [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing Matthieu Baerts
2025-10-06 17:43 ` MPTCP CI
11 siblings, 1 reply; 33+ messages in thread
From: Paolo Abeni @ 2025-10-06 8:12 UTC (permalink / raw)
To: mptcp
When the msk socket is owned or the msk receive buffer is full,
move the incoming skbs to an msk-level backlog list. This avoids
traversing the joined subflows and acquiring the subflow-level
socket lock at reception time, improving RX performance.
When processing the backlog, use the fwd alloc memory borrowed from
the incoming subflow. skbs exceeding the msk receive space are
not dropped; instead they are kept in the backlog until the receive
buffer is freed. Dropping packets already acked at the TCP level is
explicitly discouraged by the RFC and would corrupt the data stream
for fallback sockets.
Special care is needed to avoid adding skbs to the backlog of a closed
msk, and to avoid leaving dangling references in the backlog
at subflow close time.
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
---
v4 -> v5:
- consolidate ssk rcvbuf accounting in __mptcp_move_skb(), removing
some code duplication
- return early in __mptcp_add_backlog() when dropping skbs because
the msk is closed. This avoids a later use-after-free
---
net/mptcp/protocol.c | 137 ++++++++++++++++++++++++-------------------
net/mptcp/protocol.h | 2 +-
2 files changed, 79 insertions(+), 60 deletions(-)
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 2d5d3da67d1ac..a97a92eccc502 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -696,7 +696,7 @@ static void __mptcp_add_backlog(struct sock *sk, struct sk_buff *skb)
}
static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
- struct sock *ssk)
+ struct sock *ssk, bool own_msk)
{
struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
struct sock *sk = (struct sock *)msk;
@@ -712,9 +712,6 @@ static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
struct sk_buff *skb;
bool fin;
- if (sk_rmem_alloc_get(sk) > sk->sk_rcvbuf)
- break;
-
/* try to move as much data as available */
map_remaining = subflow->map_data_len -
mptcp_subflow_get_map_offset(subflow);
@@ -742,9 +739,12 @@ static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
int bmem;
bmem = mptcp_init_skb(ssk, skb, offset, len);
- sk_forward_alloc_add(sk, bmem);
+ if (own_msk)
+ sk_forward_alloc_add(sk, bmem);
+ else
+ msk->borrowed_mem += bmem;
- if (true)
+ if (own_msk && sk_rmem_alloc_get(sk) < sk->sk_rcvbuf)
ret |= __mptcp_move_skb(sk, skb);
else
__mptcp_add_backlog(sk, skb);
@@ -866,7 +866,7 @@ static bool move_skbs_to_msk(struct mptcp_sock *msk, struct sock *ssk)
struct sock *sk = (struct sock *)msk;
bool moved;
- moved = __mptcp_move_skbs_from_subflow(msk, ssk);
+ moved = __mptcp_move_skbs_from_subflow(msk, ssk, true);
__mptcp_ofo_queue(msk);
if (unlikely(ssk->sk_err))
__mptcp_subflow_error_report(sk, ssk);
@@ -898,9 +898,8 @@ void mptcp_data_ready(struct sock *sk, struct sock *ssk)
/* Wake-up the reader only for in-sequence data */
if (move_skbs_to_msk(msk, ssk) && mptcp_epollin_ready(sk))
sk->sk_data_ready(sk);
-
} else {
- __set_bit(MPTCP_DEQUEUE, &mptcp_sk(sk)->cb_flags);
+ __mptcp_move_skbs_from_subflow(msk, ssk, false);
}
mptcp_data_unlock(sk);
}
@@ -2135,60 +2134,56 @@ static void mptcp_rcv_space_adjust(struct mptcp_sock *msk, int copied)
msk->rcvq_space.time = mstamp;
}
-static struct mptcp_subflow_context *
-__mptcp_first_ready_from(struct mptcp_sock *msk,
- struct mptcp_subflow_context *subflow)
-{
- struct mptcp_subflow_context *start_subflow = subflow;
-
- while (!READ_ONCE(subflow->data_avail)) {
- subflow = mptcp_next_subflow(msk, subflow);
- if (subflow == start_subflow)
- return NULL;
- }
- return subflow;
-}
-
-static bool __mptcp_move_skbs(struct sock *sk)
+static bool __mptcp_move_skbs(struct sock *sk, struct list_head *skbs, u32 *delta)
{
- struct mptcp_subflow_context *subflow;
+ struct sk_buff *skb = list_first_entry(skbs, struct sk_buff, list);
struct mptcp_sock *msk = mptcp_sk(sk);
- bool ret = false;
-
- if (list_empty(&msk->conn_list))
- return false;
-
- subflow = list_first_entry(&msk->conn_list,
- struct mptcp_subflow_context, node);
- for (;;) {
- struct sock *ssk;
- bool slowpath;
+ bool moved = false;
- /*
- * As an optimization avoid traversing the subflows list
- * and ev. acquiring the subflow socket lock before baling out
- */
+ while (1) {
+ /* If the msk rcvbuf is full, stop; don't drop */
if (sk_rmem_alloc_get(sk) > sk->sk_rcvbuf)
break;
- subflow = __mptcp_first_ready_from(msk, subflow);
- if (!subflow)
- break;
+ prefetch(skb->next);
+ list_del(&skb->list);
+ *delta += skb->truesize;
- ssk = mptcp_subflow_tcp_sock(subflow);
- slowpath = lock_sock_fast(ssk);
- ret = __mptcp_move_skbs_from_subflow(msk, ssk) || ret;
- if (unlikely(ssk->sk_err))
- __mptcp_error_report(sk);
- unlock_sock_fast(ssk, slowpath);
+ moved |= __mptcp_move_skb(sk, skb);
+ if (list_empty(skbs))
+ break;
- subflow = mptcp_next_subflow(msk, subflow);
+ skb = list_first_entry(skbs, struct sk_buff, list);
}
__mptcp_ofo_queue(msk);
- if (ret)
+ if (moved)
mptcp_check_data_fin((struct sock *)msk);
- return ret;
+ return moved;
+}
+
+static bool mptcp_move_skbs(struct sock *sk)
+{
+ struct mptcp_sock *msk = mptcp_sk(sk);
+ bool moved = false;
+ LIST_HEAD(skbs);
+ u32 delta = 0;
+
+ mptcp_data_lock(sk);
+ while (!list_empty(&msk->backlog_list)) {
+ list_splice_init(&msk->backlog_list, &skbs);
+ mptcp_data_unlock(sk);
+ moved |= __mptcp_move_skbs(sk, &skbs, &delta);
+
+ mptcp_data_lock(sk);
+ if (!list_empty(&skbs)) {
+ list_splice(&skbs, &msk->backlog_list);
+ break;
+ }
+ }
+ WRITE_ONCE(msk->backlog_len, msk->backlog_len - delta);
+ mptcp_data_unlock(sk);
+ return moved;
}
static unsigned int mptcp_inq_hint(const struct sock *sk)
@@ -2254,7 +2249,7 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
copied += bytes_read;
- if (skb_queue_empty(&sk->sk_receive_queue) && __mptcp_move_skbs(sk))
+ if (!list_empty(&msk->backlog_list) && mptcp_move_skbs(sk))
continue;
/* only the MPTCP socket status is relevant here. The exit
@@ -2559,6 +2554,9 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
void mptcp_close_ssk(struct sock *sk, struct sock *ssk,
struct mptcp_subflow_context *subflow)
{
+ struct mptcp_sock *msk = mptcp_sk(sk);
+ struct sk_buff *skb;
+
/* The first subflow can already be closed and still in the list */
if (subflow->close_event_done)
return;
@@ -2568,6 +2566,18 @@ void mptcp_close_ssk(struct sock *sk, struct sock *ssk,
if (sk->sk_state == TCP_ESTABLISHED)
mptcp_event(MPTCP_EVENT_SUB_CLOSED, mptcp_sk(sk), ssk, GFP_KERNEL);
+ /* Remove any reference from the backlog to this ssk, accounting the
+ * related skb directly to the main socket
+ */
+ list_for_each_entry(skb, &msk->backlog_list, list) {
+ if (skb->sk != ssk)
+ continue;
+
+ atomic_sub(skb->truesize, &skb->sk->sk_rmem_alloc);
+ atomic_add(skb->truesize, &sk->sk_rmem_alloc);
+ skb->sk = sk;
+ }
+
/* subflow aborted before reaching the fully_established status
* attempt the creation of the next subflow
*/
@@ -3509,23 +3519,29 @@ void __mptcp_check_push(struct sock *sk, struct sock *ssk)
#define MPTCP_FLAGS_PROCESS_CTX_NEED (BIT(MPTCP_PUSH_PENDING) | \
BIT(MPTCP_RETRANSMIT) | \
- BIT(MPTCP_FLUSH_JOIN_LIST) | \
- BIT(MPTCP_DEQUEUE))
+ BIT(MPTCP_FLUSH_JOIN_LIST))
/* processes deferred events and flush wmem */
static void mptcp_release_cb(struct sock *sk)
__must_hold(&sk->sk_lock.slock)
{
struct mptcp_sock *msk = mptcp_sk(sk);
+ u32 delta = 0;
for (;;) {
unsigned long flags = (msk->cb_flags & MPTCP_FLAGS_PROCESS_CTX_NEED);
- struct list_head join_list;
+ LIST_HEAD(join_list);
+ LIST_HEAD(skbs);
+
+ sk_forward_alloc_add(sk, msk->borrowed_mem);
+ msk->borrowed_mem = 0;
+
+ if (sk_rmem_alloc_get(sk) < sk->sk_rcvbuf)
+ list_splice_init(&msk->backlog_list, &skbs);
- if (!flags)
+ if (!flags && list_empty(&skbs))
break;
- INIT_LIST_HEAD(&join_list);
list_splice_init(&msk->join_list, &join_list);
/* the following actions acquire the subflow socket lock
@@ -3544,7 +3560,8 @@ static void mptcp_release_cb(struct sock *sk)
__mptcp_push_pending(sk, 0);
if (flags & BIT(MPTCP_RETRANSMIT))
__mptcp_retrans(sk);
- if ((flags & BIT(MPTCP_DEQUEUE)) && __mptcp_move_skbs(sk)) {
+ if (!list_empty(&skbs) &&
+ __mptcp_move_skbs(sk, &skbs, &delta)) {
/* notify ack seq update */
mptcp_cleanup_rbuf(msk, 0);
sk->sk_data_ready(sk);
@@ -3552,7 +3569,9 @@ static void mptcp_release_cb(struct sock *sk)
cond_resched();
spin_lock_bh(&sk->sk_lock.slock);
+ list_splice(&skbs, &msk->backlog_list);
}
+ WRITE_ONCE(msk->backlog_len, msk->backlog_len - delta);
if (__test_and_clear_bit(MPTCP_CLEAN_UNA, &msk->cb_flags))
__mptcp_clean_una_wakeup(sk);
@@ -3784,7 +3803,7 @@ static int mptcp_ioctl(struct sock *sk, int cmd, int *karg)
return -EINVAL;
lock_sock(sk);
- if (__mptcp_move_skbs(sk))
+ if (mptcp_move_skbs(sk))
mptcp_cleanup_rbuf(msk, 0);
*karg = mptcp_inq_hint(sk);
release_sock(sk);
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index a21c4955f4cfb..cfabda66e7ac4 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -124,7 +124,6 @@
#define MPTCP_FLUSH_JOIN_LIST 5
#define MPTCP_SYNC_STATE 6
#define MPTCP_SYNC_SNDBUF 7
-#define MPTCP_DEQUEUE 8
struct mptcp_skb_cb {
u64 map_seq;
@@ -301,6 +300,7 @@ struct mptcp_sock {
u32 last_ack_recv;
unsigned long timer_ival;
u32 token;
+ u32 borrowed_mem;
unsigned long flags;
unsigned long cb_flags;
bool recovery; /* closing subflow write queue reinjected */
--
2.51.0
* Re: [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing
2025-10-06 8:11 [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing Paolo Abeni
` (9 preceding siblings ...)
2025-10-06 8:12 ` [PATCH v5 mptcp-next 10/10] mptcp: leverage the backlog for RX packet processing Paolo Abeni
@ 2025-10-06 17:07 ` Matthieu Baerts
2025-10-08 3:07 ` Geliang Tang
2025-10-06 17:43 ` MPTCP CI
11 siblings, 1 reply; 33+ messages in thread
From: Matthieu Baerts @ 2025-10-06 17:07 UTC (permalink / raw)
To: Paolo Abeni, mptcp
Hi Paolo,
On 06/10/2025 10:11, Paolo Abeni wrote:
> This series includes RX path improvement built around backlog processing
Thank you for the new version! This is not a review, just a note to
tell you patchew didn't manage to apply the patches due to the same
conflict that was already there with v4 (the mptcp_init_skb() parameters
have been moved to the previous line). I applied the patches manually.
While at it, I also used this test branch for syzkaller to validate
them.
(Also, on patch "mptcp: drop the __mptcp_data_ready() helper", git
complained that there is a trailing whitespace.)
Cheers,
Matt
--
Sponsored by the NGI0 Core fund.
* Re: [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing
2025-10-06 8:11 [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing Paolo Abeni
` (10 preceding siblings ...)
2025-10-06 17:07 ` [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing Matthieu Baerts
@ 2025-10-06 17:43 ` MPTCP CI
11 siblings, 0 replies; 33+ messages in thread
From: MPTCP CI @ 2025-10-06 17:43 UTC (permalink / raw)
To: Paolo Abeni; +Cc: mptcp
Hi Paolo,
Thank you for your modifications, that's great!
Our CI did some validations and here is its report:
- KVM Validation: normal (except selftest_mptcp_join): Success! ✅
- KVM Validation: normal (only selftest_mptcp_join): Success! ✅
- KVM Validation: debug (except selftest_mptcp_join): Success! ✅
- KVM Validation: debug (only selftest_mptcp_join): Success! ✅
- KVM Validation: btf-normal (only bpftest_all): Success! ✅
- KVM Validation: btf-debug (only bpftest_all): Success! ✅
- Task: https://github.com/multipath-tcp/mptcp_net-next/actions/runs/18288523358
Initiator: Matthieu Baerts (NGI0)
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/5641b16abf48
Patchwork: https://patchwork.kernel.org/project/mptcp/list/?series=1008615
If there are some issues, you can reproduce them using the same environment as
the one used by the CI thanks to a docker image, e.g.:
$ cd [kernel source code]
$ docker run -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
--pull always mptcp/mptcp-upstream-virtme-docker:latest \
auto-normal
For more details:
https://github.com/multipath-tcp/mptcp-upstream-virtme-docker
Please note that despite all the efforts that have been already done to have a
stable tests suite when executed on a public CI like here, it is possible some
reported issues are not due to your modifications. Still, do not hesitate to
help us improve that ;-)
Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (NGI0 Core)
* Re: [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing
2025-10-06 17:07 ` [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing Matthieu Baerts
@ 2025-10-08 3:07 ` Geliang Tang
2025-10-08 7:30 ` Paolo Abeni
0 siblings, 1 reply; 33+ messages in thread
From: Geliang Tang @ 2025-10-08 3:07 UTC (permalink / raw)
To: Matthieu Baerts, Paolo Abeni, mptcp
Hi Paolo, Matt,
On Mon, 2025-10-06 at 19:07 +0200, Matthieu Baerts wrote:
> Hi Paolo,
>
> On 06/10/2025 10:11, Paolo Abeni wrote:
> > This series includes RX path improvement built around backlog
> > processing
> Thank you for the new version! This is not a review, but just a note
> to
> tell you patchew didn't manage to apply the patches due to the same
> conflict that was already there with the v4 (mptcp_init_skb()
> parameters
> have been moved to the previous line). I just applied the patches
> manually. While at it, I also used this test branch for syzkaller to
> validate them.
>
> (Also, on patch "mptcp: drop the __mptcp_data_ready() helper", git
> complained that there is a trailing whitespace.)
Sorry, patches 9-10 break my "implement mptcp read_sock" v12 series. I
rebased that series on top of patches 1-8 and it works well. But after
applying patches 9-10, I changed mptcp_recv_skb() in [1] from
static struct sk_buff *mptcp_recv_skb(struct sock *sk, u32 *off)
{
	struct mptcp_sock *msk = mptcp_sk(sk);
	struct sk_buff *skb;
	u32 offset;

	if (skb_queue_empty(&sk->sk_receive_queue))
		__mptcp_move_skbs(sk);

	while ((skb = skb_peek(&sk->sk_receive_queue)) != NULL) {
		offset = MPTCP_SKB_CB(skb)->offset;
		if (offset < skb->len) {
			*off = offset;
			return skb;
		}
		mptcp_eat_recv_skb(sk, skb);
	}
	return NULL;
}
to
static struct sk_buff *mptcp_recv_skb(struct sock *sk, u32 *off)
{
	struct mptcp_sock *msk = mptcp_sk(sk);
	struct sk_buff *skb;
	u32 offset;

	if (!list_empty(&msk->backlog_list))
		mptcp_move_skbs(sk);

	while ((skb = skb_peek(&sk->sk_receive_queue)) != NULL) {
		offset = MPTCP_SKB_CB(skb)->offset;
		if (offset < skb->len) {
			*off = offset;
			return skb;
		}
		mptcp_eat_recv_skb(sk, skb);
	}
	return NULL;
}
The splice tests (mptcp_connect_splice.sh) have a low probability
(approximately 1 in 100) of reporting timeout failures:
=== Attempt: 158 (Wed, 08 Oct 2025 02:35:45 +0000) ===
Selftest Test: ./mptcp_connect_splice.sh
TAP version 13
1..1
# INFO: set ns3-0wY081 dev ns3eth2: ethtool -K gso off gro off
# INFO: set ns4-MjBWza dev ns4eth3: ethtool -K tso off gro off
# Created /tmp/tmp.rxe4DwYW9E (size 5136 B) containing data sent by client
# Created /tmp/tmp.0H0GbllUo9 (size 7193203 B) containing data sent by server
# 01 New MPTCP socket can be blocked via sysctl [ OK ]
# 02 Validating network environment with pings [ OK ]
# INFO: Using loss of 0.07% delay 21 ms reorder 99% 66% with delay 5ms on ns3eth4
# INFO: extra options: -m splice
# 03 ns1 MPTCP -> ns1 (10.0.1.1:10000 ) MPTCP (duration 152ms) [ OK ]
# 04 ns1 MPTCP -> ns1 (10.0.1.1:10001 ) TCP (duration 152ms) [ OK ]
# 05 ns1 TCP -> ns1 (10.0.1.1:10002 ) MPTCP (duration 149ms) [ OK ]
# 06 ns1 MPTCP -> ns1 (dead:beef:1::1:10003) MPTCP (duration 151ms) [ OK ]
# 07 ns1 MPTCP -> ns1 (dead:beef:1::1:10004) TCP (duration 169ms) [ OK ]
# 08 ns1 TCP -> ns1 (dead:beef:1::1:10005) MPTCP (duration 152ms) [ OK ]
# 09 ns1 MPTCP -> ns2 (10.0.1.2:10006 ) MPTCP (duration 172ms) [ OK ]
# 10 ns1 MPTCP -> ns2 (dead:beef:1::2:10007) MPTCP (duration 172ms) [ OK ]
# 11 ns1 MPTCP -> ns2 (10.0.2.1:10008 ) MPTCP (duration 157ms) [ OK ]
# 12 ns1 MPTCP -> ns2 (dead:beef:2::1:10009) MPTCP (duration 157ms) [ OK ]
# 13 ns1 MPTCP -> ns3 (10.0.2.2:10010 ) MPTCP (duration 497ms) [ OK ]
# 14 ns1 MPTCP -> ns3 (dead:beef:2::2:10011) MPTCP (duration 500ms) [ OK ]
# 15 ns1 MPTCP -> ns3 (10.0.3.2:10012 ) MPTCP (duration 602ms) [ OK ]
# 16 ns1 MPTCP -> ns3 (dead:beef:3::2:10013) MPTCP (duration 571ms) [ OK ]
# 17 ns1 MPTCP -> ns4 (10.0.3.1:10014 ) MPTCP (duration 544ms) [ OK ]
# 18 ns1 MPTCP -> ns4 (dead:beef:3::1:10015) MPTCP (duration 627ms) [ OK ]
# 19 ns2 MPTCP -> ns1 (10.0.1.1:10016 ) MPTCP (duration 136ms) [ OK ]
# 20 ns2 MPTCP -> ns1 (dead:beef:1::1:10017) MPTCP (duration 181ms) [ OK ]
# 21 ns2 MPTCP -> ns3 (10.0.2.2:10018 ) MPTCP (duration 415ms) [ OK ]
# 22 ns2 MPTCP -> ns3 (dead:beef:2::2:10019) MPTCP (duration 490ms) [ OK ]
# 23 ns2 MPTCP -> ns3 (10.0.3.2:10020 ) MPTCP (duration 438ms) [ OK ]
# 24 ns2 MPTCP -> ns3 (dead:beef:3::2:10021) MPTCP (duration 498ms) [ OK ]
# 25 ns2 MPTCP -> ns4 (10.0.3.1:10022 ) MPTCP (duration 602ms) [ OK ]
# 26 ns2 MPTCP -> ns4 (dead:beef:3::1:10023) MPTCP (duration 559ms) [ OK ]
# 27 ns3 MPTCP -> ns1 (10.0.1.1:10024 ) MPTCP (duration 580ms) [ OK ]
# 28 ns3 MPTCP -> ns1 (dead:beef:1::1:10025) MPTCP (duration 603ms) [ OK ]
# 29 ns3 MPTCP -> ns2 (10.0.1.2:10026 ) MPTCP (duration 628ms) [ OK ]
# 30 ns3 MPTCP -> ns2 (dead:beef:1::2:10027) MPTCP (duration 451ms) [ OK ]
# 31 ns3 MPTCP -> ns2 (10.0.2.1:10028 ) MPTCP (duration 416ms) [ OK ]
# 32 ns3 MPTCP -> ns2 (dead:beef:2::1:10029) MPTCP (duration 497ms) [ OK ]
# 33 ns3 MPTCP -> ns4 (10.0.3.1:10030 ) MPTCP (duration 159ms) [ OK ]
# 34 ns3 MPTCP -> ns4 (dead:beef:3::1:10031) MPTCP (duration 156ms) [ OK ]
# 35 ns4 MPTCP -> ns1 (10.0.1.1:10032 ) MPTCP (duration 574ms) [ OK ]
# 36 ns4 MPTCP -> ns1 (dead:beef:1::1:10033) MPTCP (duration 863ms) [ OK ]
# 37 ns4 MPTCP -> ns2 (10.0.1.2:10034 ) MPTCP (duration 471ms) [ OK ]
# 38 ns4 MPTCP -> ns2 (dead:beef:1::2:10035) MPTCP (duration 538ms) [ OK ]
# 39 ns4 MPTCP -> ns2 (10.0.2.1:10036 ) MPTCP (duration 520ms) [ OK ]
# 40 ns4 MPTCP -> ns2 (dead:beef:2::1:10037) MPTCP (duration 511ms) [ OK ]
# 41 ns4 MPTCP -> ns3 (10.0.2.2:10038 ) MPTCP (duration 137ms) [ OK ]
# 42 ns4 MPTCP -> ns3 (dead:beef:2::2:10039) MPTCP (duration 155ms) [ OK ]
# 43 ns4 MPTCP -> ns3 (10.0.3.2:10040 ) MPTCP (duration 563ms) [ OK ]
# 44 ns4 MPTCP -> ns3 (dead:beef:3::2:10041) MPTCP (duration 152ms) [ OK ]
# INFO: with peek mode: saveWithPeek
# 45 ns1 MPTCP -> ns1 (10.0.1.1:10042 ) MPTCP (duration 150ms) [ OK ]
# 46 ns1 MPTCP -> ns1 (10.0.1.1:10043 ) TCP (duration 184ms) [ OK ]
# 47 ns1 TCP -> ns1 (10.0.1.1:10044 ) MPTCP (duration 153ms) [ OK ]
# 48 ns1 MPTCP -> ns1 (dead:beef:1::1:10045) MPTCP (duration 154ms) [ OK ]
# 49 ns1 MPTCP -> ns1 (dead:beef:1::1:10046) TCP (duration 148ms) [ OK ]
# 50 ns1 TCP -> ns1 (dead:beef:1::1:10047) MPTCP (duration 175ms) [ OK ]
# INFO: with peek mode: saveAfterPeek
# 51 ns1 MPTCP -> ns1 (10.0.1.1:10048 ) MPTCP (duration 175ms) [ OK ]
# 52 ns1 MPTCP -> ns1 (10.0.1.1:10049 ) TCP (duration 155ms) [ OK ]
# 53 ns1 TCP -> ns1 (10.0.1.1:10050 ) MPTCP (duration 146ms) [ OK ]
# 54 ns1 MPTCP -> ns1 (dead:beef:1::1:10051) MPTCP (duration 153ms) [ OK ]
# 55 ns1 MPTCP -> ns1 (dead:beef:1::1:10052) TCP (duration 153ms) [ OK ]
# 56 ns1 TCP -> ns1 (dead:beef:1::1:10053) MPTCP (duration 151ms) [ OK ]
# INFO: with MPTFO start
# 57 ns2 MPTCP -> ns1 (10.0.1.1:10054 ) MPTCP (duration 60989ms) [FAIL] client exit code 0, server 124
#
# netns ns1-RqXF2p (listener) socket stat for 10054:
# Failed to find cgroup2 mount
# Failed to find cgroup2 mount
# Failed to find cgroup2 mount
# Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
# tcp ESTAB 0 0 10.0.1.1:10054 10.0.1.2:55516
ino:2064372 sk:1 cgroup:unreachable:1 <->
# skmem:(r0,rb131072,t0,tb340992,f0,w0,o0,bl0,d0) sack cubic
wscale:8,8 rto:206 rtt:5.026/10.034 ato:40 mss:1460 pmtu:1500
rcvmss:1436 advmss:1460 cwnd:10 bytes_sent:115312 bytes_retrans:1560
bytes_acked:113752 bytes_received:5136 segs_out:85 segs_in:16
data_segs_out:83 data_segs_in:4 send 23239156bps lastsnd:60939
lastrcv:61035 lastack:60912 pacing_rate 343879640bps delivery_rate
1994680bps delivered:84 busy:123ms sndbuf_limited:41ms(33.3%)
retrans:0/2 dsack_dups:2 rcv_space:14600 rcv_ssthresh:75432
minrtt:0.003 rcv_wnd:75520 tcp-ulp-mptcp flags:Mec
token:0000(id:0)/32ed0950(id:0) seq:2946228641406205031 sfseq:1
ssnoff:1349223625 maplen:5136
# mptcp LAST-ACK 0 0 10.0.1.1:10054 10.0.1.2:55516
timer:(keepalive,59sec,0) ino:0 sk:2 cgroup:unreachable:1 ---
# skmem:(r0,rb131072,t0,tb345088,f4088,w352264,o0,bl0,d0)
subflows_max:2 remote_key token:32ed0950 write_seq:6317574787800720824
snd_una:6317574787800376423 rcv_nxt:2946228641406210168
bytes_sent:113752 bytes_received:5136 bytes_acked:113752
subflows_total:1 last_data_sent:60954 last_data_recv:61036
last_ack_recv:60913
# TcpPassiveOpens 1 0.0
# TcpInSegs 13 0.0
# TcpOutSegs 84 0.0
# TcpRetransSegs 2 0.0
# TcpExtTCPPureAcks 11 0.0
# TcpExtTCPLossProbes 3 0.0
# TcpExtTCPDSACKRecv 2 0.0
# TcpExtTCPDSACKIgnoredNoUndo 2 0.0
# TcpExtTCPFastOpenCookieReqd 1 0.0
# TcpExtTCPOrigDataSent 81 0.0
# TcpExtTCPDelivered 83 0.0
# TcpExtTCPDSACKRecvSegs 2 0.0
# MPTcpExtMPCapableSYNRX 1 0.0
# MPTcpExtMPCapableACKRX 1 0.0
#
# netns ns2-xZI1rh (connector) socket stat for 10054:
# Failed to find cgroup2 mount
# Failed to find cgroup2 mount
# Failed to find cgroup2 mount
# Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
# tcp ESTAB 0 0 10.0.1.2:55516 10.0.1.1:10054
ino:2065678 sk:3 cgroup:unreachable:1 <->
# skmem:(r0,rb131072,t0,tb46080,f12288,w0,o0,bl0,d2) sack cubic
wscale:8,8 rto:201 rtt:0.029/0.016 ato:80 mss:1460 pmtu:1500
rcvmss:1432 advmss:1460 cwnd:10 bytes_sent:5136 bytes_acked:5137
bytes_received:113752 segs_out:16 segs_in:86 data_segs_out:4
data_segs_in:83 send 4027586207bps lastsnd:61068 lastrcv:60986
lastack:60972 pacing_rate 7852100840bps delivery_rate 6674285712bps
delivered:5 rcv_rtt:0.043 rcv_space:14600 rcv_ssthresh:114691
minrtt:0.007 snd_wnd:75520 tcp-ulp-mptcp flags:Mmec
token:0000(id:0)/73d713b3(id:0) seq:6317574787800368999 sfseq:106329
ssnoff:821551077 maplen:7424
# mptcp FIN-WAIT-2 124504 0 10.0.1.2:55516 10.0.1.1:10054
timer:(keepalive,,0) ino:0 sk:4 cgroup:unreachable:1 ---
# skmem:(r124504,rb131072,t0,tb50176,f6568,w0,o0,bl0,d0)
subflows_max:2 remote_key token:73d713b3 write_seq:2946228641406210168
snd_una:2946228641406210168 rcv_nxt:6317574787800376423 bytes_sent:5136
bytes_received:113752 bytes_acked:5137 subflows_total:1
last_data_sent:61068 last_data_recv:60986 last_ack_recv:60972
# TcpActiveOpens 1 0.0
# TcpInSegs 17 0.0
# TcpOutSegs 16 0.0
# TcpExtDelayedACKs 3 0.0
# TcpExtDelayedACKLost 2 0.0
# TcpExtTCPPureAcks 2 0.0
# TcpExtTCPDSACKOldSent 2 0.0
# TcpExtTCPToZeroWindowAdv 1 0.0
# TcpExtTCPOrigDataSent 4 0.0
# TcpExtTCPDelivered 5 0.0
# MPTcpExtMPCapableSYNTX 1 0.0
# MPTcpExtMPCapableSYNACKRX 1 0.0
#
# 58 ns2 MPTCP -> ns1 (10.0.1.1:10055 ) MPTCP (duration 60992ms) [FAIL] client exit code 0, server 124
#
# netns ns1-RqXF2p (listener) socket stat for 10055:
# Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
# TcpPassiveOpens 1 0.0
# TcpEstabResets 2 0.0
# TcpInSegs 28 0.0
# TcpOutSegs 142 0.0
# TcpRetransSegs 22 0.0
# TcpExtTCPPureAcks 23 0.0
# TcpExtTCPLostRetransmit 8 0.0
# TcpExtTCPSlowStartRetrans 13 0.0
# TcpExtTCPTimeouts 1 0.0
# TcpExtTCPLossProbes 1 0.0
# TcpExtTCPBacklogCoalesce 1 0.0
# TcpExtTCPFastOpenPassive 1 0.0
# TcpExtTCPOrigDataSent 138 0.0
# TcpExtTCPDelivered 83 0.0
# TcpExtTcpTimeoutRehash 1 0.0
# MPTcpExtMPCapableSYNRX 1 0.0
# MPTcpExtMPCapableACKRX 1 0.0
# MPTcpExtMPFastcloseRx 2 0.0
# MPTcpExtMPRstRx 2 0.0
# MPTcpExtSndWndShared 5 0.0
#
# netns ns2-xZI1rh (connector) socket stat for 10055:
# Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
# TcpActiveOpens 1 0.0
# TcpEstabResets 2 0.0
# TcpInSegs 32 0.0
# TcpOutSegs 30 0.0
# TcpOutRsts 2 0.0
# TcpExtBeyondWindow 4 0.0
# TcpExtDelayedACKs 2 0.0
# TcpExtTCPPureAcks 3 0.0
# TcpExtTCPFastOpenActive 1 0.0
# TcpExtTCPToZeroWindowAdv 1 0.0
# TcpExtTCPOrigDataSent 4 0.0
# TcpExtTCPDelivered 5 0.0
# TcpExtTCPZeroWindowDrop 10 0.0
# MPTcpExtMPCapableSYNTX 1 0.0
# MPTcpExtMPCapableSYNACKRX 1 0.0
# MPTcpExtMPFastcloseTx 2 0.0
# MPTcpExtMPRstTx 2 0.0
#
# 59 ns2 MPTCP -> ns1 (dead:beef:1::1:10056) MPTCP (duration 60983ms) [FAIL] client exit code 0, server 124
#
# netns ns1-RqXF2p (listener) socket stat for 10056:
# Failed to find cgroup2 mount
# Failed to find cgroup2 mount
# Failed to find cgroup2 mount
# Netid State Recv-Q Send-Q Local Address:Port Peer
Address:Port
# tcp ESTAB 0 0 [dead:beef:1::1]:10056
[dead:beef:1::2]:51008 ino:2066517 sk:5 cgroup:unreachable:1 <->
# skmem:(r0,rb131072,t0,tb354816,f0,w0,o0,bl0,d0) sack cubic
wscale:8,8 rto:206 rtt:5.142/10.26 ato:40 mss:1440 pmtu:1500
rcvmss:1416 advmss:1440 cwnd:10 bytes_sent:116192 bytes_retrans:1860
bytes_acked:114332 bytes_received:5136 segs_out:88 segs_in:16
data_segs_out:86 data_segs_in:4 send 22403734bps lastsnd:60928
lastrcv:61025 lastack:60901 pacing_rate 345009112bps delivery_rate
1967640bps delivered:87 busy:123ms sndbuf_limited:41ms(33.3%)
retrans:0/2 dsack_dups:2 rcv_space:14400 rcv_ssthresh:74532
minrtt:0.003 rcv_wnd:74752 tcp-ulp-mptcp flags:Mec
token:0000(id:0)/dfc0f4f3(id:0) seq:4063451370598395855 sfseq:1
ssnoff:3788096358 maplen:5136
# mptcp LAST-ACK 0 0 [dead:beef:1::1]:10056
[dead:beef:1::2]:51008 timer:(keepalive,59sec,0) ino:0 sk:6
cgroup:unreachable:1 ---
# skmem:(r0,rb131072,t0,tb358912,f316,w351940,o0,bl0,d0)
subflows_max:2 remote_key token:dfc0f4f3 write_seq:2127521061748173342
snd_una:2127521061747829521 rcv_nxt:4063451370598400992
bytes_sent:114332 bytes_received:5136 bytes_acked:114332
subflows_total:1 last_data_sent:60942 last_data_recv:61025
last_ack_recv:60901
# TcpPassiveOpens 1 0.0
# TcpInSegs 13 0.0
# TcpOutSegs 87 0.0
# TcpRetransSegs 2 0.0
# TcpExtTCPPureAcks 11 0.0
# TcpExtTCPLossProbes 3 0.0
# TcpExtTCPDSACKRecv 2 0.0
# TcpExtTCPDSACKIgnoredNoUndo 2 0.0
# TcpExtTCPFastOpenCookieReqd 1 0.0
# TcpExtTCPOrigDataSent 84 0.0
# TcpExtTCPDelivered 86 0.0
# TcpExtTCPDSACKRecvSegs 2 0.0
# MPTcpExtMPCapableSYNRX 1 0.0
# MPTcpExtMPCapableACKRX 1 0.0
#
# netns ns2-xZI1rh (connector) socket stat for 10056:
# Failed to find cgroup2 mount
# Failed to find cgroup2 mount
# Failed to find cgroup2 mount
# Netid State Recv-Q Send-Q Local Address:Port Peer
Address:Port
# tcp ESTAB 0 0 [dead:beef:1::2]:51008
[dead:beef:1::1]:10056 ino:2065857 sk:7 cgroup:unreachable:1 <->
# skmem:(r0,rb131072,t0,tb46080,f12288,w0,o0,bl0,d2) sack cubic
wscale:8,8 rto:201 rtt:0.032/0.018 ato:80 mss:1440 pmtu:1500
rcvmss:1412 advmss:1440 cwnd:10 bytes_sent:5136 bytes_acked:5137
bytes_received:114332 segs_out:16 segs_in:89 data_segs_out:4
data_segs_in:86 send 3600000000bps lastsnd:61060 lastrcv:60977
lastack:60963 pacing_rate 7116602312bps delivery_rate 6582857136bps
delivered:5 rcv_rtt:0.051 rcv_space:14400 rcv_ssthresh:115128
minrtt:0.007 snd_wnd:74752 tcp-ulp-mptcp flags:Mmec
token:0000(id:0)/45f63d89(id:0) seq:2127521061747821841 sfseq:106653
ssnoff:320893875 maplen:7680
# mptcp FIN-WAIT-2 124188 0 [dead:beef:1::2]:51008
[dead:beef:1::1]:10056 timer:(keepalive,,0) ino:0 sk:8
cgroup:unreachable:1 ---
# skmem:(r124188,rb131072,t0,tb50176,f6884,w0,o0,bl0,d0)
subflows_max:2 remote_key token:45f63d89 write_seq:4063451370598400992
snd_una:4063451370598400992 rcv_nxt:2127521061747829521 bytes_sent:5136
bytes_received:114332 bytes_acked:5137 subflows_total:1
last_data_sent:61060 last_data_recv:60977 last_ack_recv:60963
# TcpActiveOpens 1 0.0
# TcpInSegs 17 0.0
# TcpOutSegs 16 0.0
# TcpExtDelayedACKs 3 0.0
# TcpExtDelayedACKLost 2 0.0
# TcpExtTCPPureAcks 2 0.0
# TcpExtTCPDSACKOldSent 2 0.0
# TcpExtTCPToZeroWindowAdv 1 0.0
# TcpExtTCPOrigDataSent 4 0.0
# TcpExtTCPDelivered 5 0.0
# MPTcpExtMPCapableSYNTX 1 0.0
# MPTcpExtMPCapableSYNACKRX 1 0.0
#
# 60 ns2 MPTCP -> ns1 (dead:beef:1::1:10057) MPTCP (duration 60988ms) [FAIL] client exit code 0, server 124
#
# netns ns1-RqXF2p (listener) socket stat for 10057:
# Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
# TcpPassiveOpens 1 0.0
# TcpEstabResets 2 0.0
# TcpInSegs 29 0.0
# TcpOutSegs 144 0.0
# TcpRetransSegs 22 0.0
# TcpExtTCPPureAcks 23 0.0
# TcpExtTCPLostRetransmit 8 0.0
# TcpExtTCPSlowStartRetrans 13 0.0
# TcpExtTCPTimeouts 1 0.0
# TcpExtTCPLossProbes 1 0.0
# TcpExtTCPBacklogCoalesce 2 0.0
# TcpExtTCPFastOpenPassive 1 0.0
# TcpExtTCPOrigDataSent 140 0.0
# TcpExtTCPDelivered 84 0.0
# TcpExtTcpTimeoutRehash 1 0.0
# MPTcpExtMPCapableSYNRX 1 0.0
# MPTcpExtMPCapableACKRX 1 0.0
# MPTcpExtMPFastcloseRx 2 0.0
# MPTcpExtMPRstRx 2 0.0
# MPTcpExtSndWndShared 5 0.0
#
# netns ns2-xZI1rh (connector) socket stat for 10057:
# Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
# TcpActiveOpens 1 0.0
# TcpEstabResets 2 0.0
# TcpInSegs 32 0.0
# TcpOutSegs 31 0.0
# TcpOutRsts 2 0.0
# TcpExtBeyondWindow 4 0.0
# TcpExtDelayedACKs 3 0.0
# TcpExtTCPPureAcks 3 0.0
# TcpExtTCPFastOpenActive 1 0.0
# TcpExtTCPToZeroWindowAdv 1 0.0
# TcpExtTCPOrigDataSent 4 0.0
# TcpExtTCPDelivered 5 0.0
# TcpExtTCPZeroWindowDrop 10 0.0
# MPTcpExtMPCapableSYNTX 1 0.0
# MPTcpExtMPCapableSYNACKRX 1 0.0
# MPTcpExtMPFastcloseTx 2 0.0
# MPTcpExtMPRstTx 2 0.0
#
# INFO: with MPTFO end
# [FAIL] Tests with MPTFO have failed
# INFO: test tproxy ipv4
# 61 ns1 MPTCP -> ns2 (10.0.3.1:20000 ) MPTCP (duration 161ms) [ OK ]
# INFO: tproxy ipv4 pass
# INFO: test tproxy ipv6
# 62 ns1 MPTCP -> ns2 (dead:beef:3::1:20000) MPTCP (duration 163ms) [ OK ]
# INFO: tproxy ipv6 pass
# INFO: disconnect
# 63 ns1 MPTCP -> ns1 (10.0.1.1:20001 ) MPTCP (duration 54ms) [ OK ]
# 64 ns1 MPTCP -> ns1 (10.0.1.1:20002 ) TCP (duration 56ms) [ OK ]
# 65 ns1 TCP -> ns1 (10.0.1.1:20003 ) MPTCP (duration 59ms) [ OK ]
# 66 ns1 MPTCP -> ns1 (dead:beef:1::1:20004) MPTCP (duration 60ms) [ OK ]
# 67 ns1 MPTCP -> ns1 (dead:beef:1::1:20005) TCP (duration 56ms) [ OK ]
# 68 ns1 TCP -> ns1 (dead:beef:1::1:20006) MPTCP (duration 55ms) [ OK ]
# Time: 288 seconds
not ok 1 test: selftest_mptcp_connect_splice # FAIL
# time=288
=== ERROR after 158 attempts (Wed, 08 Oct 2025 02:40:34 +0000) ===
Stopped after 158 attempts
I'm not sure whether this failure indicates a bug in patches 9-10 or an
issue in my mptcp_recv_skb() implementation, and I'm still unsure how
to resolve it. Could you please give me some suggestions?
But patches 1-8 look good to me indeed:
Reviewed-by: Geliang Tang <geliang@kernel.org>
I'm wondering if we can merge patches 1-8 into the export branch first.
I changed their status to "Queued" on patchwork.
Besides, I have one minor comment on patch 9, which I'll reply directly
on patch 9.
Thanks,
-Geliang
[1]
https://patchwork.kernel.org/project/mptcp/patch/2f159972f4aac7002a46ebc03b9d3898ece4c081.1758975929.git.tanggeliang@kylinos.cn/
>
> Cheers,
> Matt
* Re: [PATCH v5 mptcp-next 09/10] mptcp: introduce mptcp-level backlog
2025-10-06 8:12 ` [PATCH v5 mptcp-next 09/10] mptcp: introduce mptcp-level backlog Paolo Abeni
@ 2025-10-08 3:09 ` Geliang Tang
2025-10-20 19:45 ` Mat Martineau
1 sibling, 0 replies; 33+ messages in thread
From: Geliang Tang @ 2025-10-08 3:09 UTC (permalink / raw)
To: Paolo Abeni, mptcp
Hi Paolo,
On Mon, 2025-10-06 at 10:12 +0200, Paolo Abeni wrote:
> We will soon use it for incoming data processing.
> MPTCP can't leverage the sk_backlog, as the latter is processed
> before the release callback, and such callback for MPTCP releases
> and re-acquires the socket spinlock, breaking the sk_backlog
> processing assumption.
>
> Add an skb backlog list inside the mptcp sock struct, and implement
> basic helpers to transfer packets to and purge such a list.
>
> Packets in the backlog are not memory accounted, but still use the
> incoming subflow receive memory, to allow back-pressure.
>
> No packet is currently added to the backlog, so no functional changes
> intended here.
>
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> --
> v4 -> v5:
> - split out of the next patch, to make the latter smaller
> - set a custom destructor for skbs in the backlog; this avoids
> duplicate code, and fixes a few places where the needed ssk cleanup
> was not performed.
> - factor out the backlog purge in a new helper,
> use spinlock protection, clear the backlog list and zero the
> backlog len
> - explicitly init the backlog_len at mptcp_init_sock() time
> ---
> net/mptcp/protocol.c | 70 +++++++++++++++++++++++++++++++++++++++++-
> --
> net/mptcp/protocol.h | 4 +++
> 2 files changed, 70 insertions(+), 4 deletions(-)
>
> diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
> index 05ee6bd26b7fa..2d5d3da67d1ac 100644
> --- a/net/mptcp/protocol.c
> +++ b/net/mptcp/protocol.c
> @@ -337,6 +337,11 @@ static void mptcp_data_queue_ofo(struct
> mptcp_sock *msk, struct sk_buff *skb)
> mptcp_rcvbuf_grow(sk);
> }
>
> +static void mptcp_bl_free(struct sk_buff *skb)
> +{
> + atomic_sub(skb->truesize, &skb->sk->sk_rmem_alloc);
> +}
> +
> static int mptcp_init_skb(struct sock *ssk,
> struct sk_buff *skb, int offset, int copy_len)
> {
> @@ -360,7 +365,7 @@ static int mptcp_init_skb(struct sock *ssk,
> skb_dst_drop(skb);
>
> /* "borrow" the fwd memory from the subflow, instead of reclaiming
> it */
> - skb->destructor = NULL;
> + skb->destructor = mptcp_bl_free;
> borrowed = ssk->sk_forward_alloc - sk_unused_reserved_mem(ssk);
> borrowed &= ~(PAGE_SIZE - 1);
> sk_forward_alloc_add(ssk, skb->truesize - borrowed);
> @@ -373,6 +378,13 @@ static bool __mptcp_move_skb(struct sock *sk,
> struct sk_buff *skb)
> struct mptcp_sock *msk = mptcp_sk(sk);
> struct sk_buff *tail;
>
> + /* Avoid the indirect call overhead, we know destructor is
> + * mptcp_bl_free at this point.
> + */
> + atomic_sub(skb->truesize, &skb->sk->sk_rmem_alloc);
> + skb->sk = NULL;
> + skb->destructor = NULL;
> +
> /* try to fetch required memory from subflow */
> if (!sk_rmem_schedule(sk, skb, skb->truesize)) {
> MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RCVPRUNED);
> @@ -654,6 +666,35 @@ static void mptcp_dss_corruption(struct
> mptcp_sock *msk, struct sock *ssk)
> }
> }
>
> +static void __mptcp_add_backlog(struct sock *sk, struct sk_buff
> *skb)
> +{
> + struct mptcp_sock *msk = mptcp_sk(sk);
> + struct sk_buff *tail = NULL;
> + bool fragstolen;
> + int delta;
> +
> + if (unlikely(sk->sk_state == TCP_CLOSE)) {
> + kfree_skb_reason(skb, SKB_DROP_REASON_SOCKET_CLOSE);
> + return;
> + }
> +
> + /* Try to coalesce with the last skb in our backlog */
> + if (!list_empty(&msk->backlog_list))
> + tail = list_last_entry(&msk->backlog_list, struct sk_buff, list);
> +
> + if (tail && MPTCP_SKB_CB(skb)->map_seq == MPTCP_SKB_CB(tail)-
> >end_seq &&
> + skb->sk == tail->sk &&
> + __mptcp_try_coalesce(sk, tail, skb, &fragstolen, &delta)) {
> + skb->truesize -= delta;
> + kfree_skb_partial(skb, fragstolen);
> + WRITE_ONCE(msk->backlog_len, msk->backlog_len + delta);
> + return;
> + }
> +
> + list_add_tail(&skb->list, &msk->backlog_list);
> + WRITE_ONCE(msk->backlog_len, msk->backlog_len + skb->truesize);
> +}
> +
> static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
> struct sock *ssk)
> {
> @@ -701,10 +742,12 @@ static bool
> __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
> int bmem;
>
> bmem = mptcp_init_skb(ssk, skb, offset, len);
> - skb->sk = NULL;
> sk_forward_alloc_add(sk, bmem);
> - atomic_sub(skb->truesize, &ssk->sk_rmem_alloc);
> - ret = __mptcp_move_skb(sk, skb) || ret;
> +
> + if (true)
nit: How about adding the own_msk parameter to
__mptcp_move_skbs_from_subflow() in this patch? This would allow us to
avoid using 'if (true)' here by using 'if (own_msk)'.
Thanks,
-Geliang
> + ret |= __mptcp_move_skb(sk, skb);
> + else
> + __mptcp_add_backlog(sk, skb);
> seq += len;
>
> if (unlikely(map_remaining < len)) {
> @@ -2753,12 +2796,28 @@ static void mptcp_mp_fail_no_response(struct
> mptcp_sock *msk)
> unlock_sock_fast(ssk, slow);
> }
>
> +static void mptcp_backlog_purge(struct sock *sk)
> +{
> + struct mptcp_sock *msk = mptcp_sk(sk);
> + struct sk_buff *tmp, *skb;
> + LIST_HEAD(backlog);
> +
> + mptcp_data_lock(sk);
> + list_splice_init(&msk->backlog_list, &backlog);
> + msk->backlog_len = 0;
> + mptcp_data_unlock(sk);
> +
> + list_for_each_entry_safe(skb, tmp, &backlog, list)
> + kfree_skb_reason(skb, SKB_DROP_REASON_SOCKET_CLOSE);
> +}
> +
> static void mptcp_do_fastclose(struct sock *sk)
> {
> struct mptcp_subflow_context *subflow, *tmp;
> struct mptcp_sock *msk = mptcp_sk(sk);
>
> mptcp_set_state(sk, TCP_CLOSE);
> + mptcp_backlog_purge(sk);
> mptcp_for_each_subflow_safe(msk, subflow, tmp)
> __mptcp_close_ssk(sk, mptcp_subflow_tcp_sock(subflow),
> subflow, MPTCP_CF_FASTCLOSE);
> @@ -2816,11 +2875,13 @@ static void __mptcp_init_sock(struct sock
> *sk)
> INIT_LIST_HEAD(&msk->conn_list);
> INIT_LIST_HEAD(&msk->join_list);
> INIT_LIST_HEAD(&msk->rtx_queue);
> + INIT_LIST_HEAD(&msk->backlog_list);
> INIT_WORK(&msk->work, mptcp_worker);
> msk->out_of_order_queue = RB_ROOT;
> msk->first_pending = NULL;
> msk->timer_ival = TCP_RTO_MIN;
> msk->scaling_ratio = TCP_DEFAULT_SCALING_RATIO;
> + msk->backlog_len = 0;
>
> WRITE_ONCE(msk->first, NULL);
> inet_csk(sk)->icsk_sync_mss = mptcp_sync_mss;
> @@ -3197,6 +3258,7 @@ static void mptcp_destroy_common(struct
> mptcp_sock *msk, unsigned int flags)
> struct sock *sk = (struct sock *)msk;
>
> __mptcp_clear_xmit(sk);
> + mptcp_backlog_purge(sk);
>
> /* join list will be eventually flushed (with rst) at sock lock
> release time */
> mptcp_for_each_subflow_safe(msk, subflow, tmp)
> diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
> index 46d8432c72ee7..a21c4955f4cfb 100644
> --- a/net/mptcp/protocol.h
> +++ b/net/mptcp/protocol.h
> @@ -358,6 +358,9 @@ struct mptcp_sock {
> * allow_infinite_fallback and
> * allow_join
> */
> +
> + struct list_head backlog_list; /*protected by the data lock */
> + u32 backlog_len;
> };
>
> #define mptcp_data_lock(sk) spin_lock_bh(&(sk)->sk_lock.slock)
> @@ -408,6 +411,7 @@ static inline int mptcp_space_from_win(const
> struct sock *sk, int win)
> static inline int __mptcp_space(const struct sock *sk)
> {
> return mptcp_win_from_space(sk, READ_ONCE(sk->sk_rcvbuf) -
> + READ_ONCE(mptcp_sk(sk)->backlog_len) -
> sk_rmem_alloc_get(sk));
> }
>
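The enqueue logic in the `__mptcp_add_backlog()` hunk above (coalesce with the backlog tail when the mapping is contiguous and comes from the same subflow, then charge the truesize into `backlog_len`) can be modeled in userspace. The sketch below uses illustrative names (`fake_skb`, `backlog_add`) and simplifies the coalesce delta to the full truesize, whereas the kernel computes it via `__mptcp_try_coalesce()`; it is a sketch of the accounting, not the kernel API:

```c
/* Userspace sketch of the msk backlog accounting from patch 9:
 * enqueued skbs are not fwd-memory accounted at the msk level,
 * contiguous skbs from the same subflow are coalesced with the tail,
 * and backlog_len tracks the total truesize charged against the
 * receive window. All names are illustrative.
 */
#include <assert.h>
#include <stdlib.h>

struct fake_skb {
	struct fake_skb *next;
	void *ssk;                   /* incoming subflow, stands in for skb->sk */
	unsigned long long map_seq;  /* MPTCP-level sequence of first byte */
	unsigned long long end_seq;  /* MPTCP-level sequence past last byte */
	unsigned int truesize;
};

struct fake_backlog {
	struct fake_skb *head, *tail;
	unsigned int backlog_len;    /* total truesize charged to the window */
};

/* Append @skb, coalescing with the tail when the mapping is contiguous
 * and the data comes from the same subflow.
 */
static void backlog_add(struct fake_backlog *bl, struct fake_skb *skb)
{
	struct fake_skb *tail = bl->tail;

	if (tail && skb->map_seq == tail->end_seq && skb->ssk == tail->ssk) {
		/* simplified: the kernel derives delta from the coalesce */
		unsigned int delta = skb->truesize;

		tail->end_seq = skb->end_seq;
		tail->truesize += delta;
		bl->backlog_len += delta;
		free(skb);
		return;
	}

	skb->next = NULL;
	if (tail)
		tail->next = skb;
	else
		bl->head = skb;
	bl->tail = skb;
	bl->backlog_len += skb->truesize;
}
```

Keeping `backlog_len` as a plain counter updated under the data lock is what lets the `__mptcp_space()` change later in the diff subtract it without walking the list.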
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing
2025-10-08 3:07 ` Geliang Tang
@ 2025-10-08 7:30 ` Paolo Abeni
2025-10-09 6:54 ` Geliang Tang
0 siblings, 1 reply; 33+ messages in thread
From: Paolo Abeni @ 2025-10-08 7:30 UTC (permalink / raw)
To: Geliang Tang, Matthieu Baerts, mptcp
On 10/8/25 5:07 AM, Geliang Tang wrote:
> On Mon, 2025-10-06 at 19:07 +0200, Matthieu Baerts wrote:
>> Hi Paolo,
>>
>> On 06/10/2025 10:11, Paolo Abeni wrote:
>>> This series includes RX path improvement built around backlog
>>> processing
>> Thank you for the new version! This is not a review, but just a note
>> to
>> tell you patchew didn't manage to apply the patches due to the same
>> conflict that was already there with the v4 (mptcp_init_skb()
>> parameters
>> have been moved to the previous line). I just applied the patches
>> manually. While at it, I also used this test branch for syzkaller to
>> validate them.
>>
>> (Also, on patch "mptcp: drop the __mptcp_data_ready() helper", git
>> complained that there is a trailing whitespace.)
>
> Sorry, patches 9-10 break my "implement mptcp read_sock" v12 series. I
> rebased this series on patches 1-8, and it works well. But after applying
> patches 9-10, I changed mptcp_recv_skb() in [1] from
Thanks for the feedback, the applied delta looks good to me.
> # INFO: with MPTFO start
> # 57 ns2 MPTCP -> ns1 (10.0.1.1:10054 ) MPTCP (duration 60989ms) [FAIL] client exit code 0, server 124
> #
> # netns ns1-RqXF2p (listener) socket stat for 10054:
> # Failed to find cgroup2 mount
> # Failed to find cgroup2 mount
> # Failed to find cgroup2 mount
> # Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
> # tcp ESTAB 0 0 10.0.1.1:10054 10.0.1.2:55516
> ino:2064372 sk:1 cgroup:unreachable:1 <->
> # skmem:(r0,rb131072,t0,tb340992,f0,w0,o0,bl0,d0) sack cubic
> wscale:8,8 rto:206 rtt:5.026/10.034 ato:40 mss:1460 pmtu:1500
> rcvmss:1436 advmss:1460 cwnd:10 bytes_sent:115312 bytes_retrans:1560
> bytes_acked:113752 bytes_received:5136 segs_out:85 segs_in:16
> data_segs_out:83 data_segs_in:4 send 23239156bps lastsnd:60939
> lastrcv:61035 lastack:60912 pacing_rate 343879640bps delivery_rate
> 1994680bps delivered:84 busy:123ms sndbuf_limited:41ms(33.3%)
> retrans:0/2 dsack_dups:2 rcv_space:14600 rcv_ssthresh:75432
> minrtt:0.003 rcv_wnd:75520 tcp-ulp-mptcp flags:Mec
> token:0000(id:0)/32ed0950(id:0) seq:2946228641406205031 sfseq:1
> ssnoff:1349223625 maplen:5136
> # mptcp LAST-ACK 0 0 10.0.1.1:10054 10.0.1.2:55516
> timer:(keepalive,59sec,0) ino:0 sk:2 cgroup:unreachable:1 ---
> # skmem:(r0,rb131072,t0,tb345088,f4088,w352264,o0,bl0,d0)
> subflows_max:2 remote_key token:32ed0950 write_seq:6317574787800720824
> snd_una:6317574787800376423 rcv_nxt:2946228641406210168
> bytes_sent:113752 bytes_received:5136 bytes_acked:113752
> subflows_total:1 last_data_sent:60954 last_data_recv:61036
> last_ack_recv:60913
bytes_sent == bytes_acked, possibly we are missing a window-open event,
which in turn should be triggered by mptcp_cleanup_rbuf(), which AFAICS
is correctly invoked in the splice code. TL;DR: I can't find anything
obviously wrong :-P
Also the default rx buf size is suspect.
Can you reproduce the issue while capturing the traffic with tcpdump? If
so, could you please share the capture?
Are the TFO cases the only ones failing?
Thanks,
Paolo
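The zero-window symptom discussed here hinges on the `__mptcp_space()` change from patch 9, where the backlog truesize is subtracted from the receive buffer alongside the rmem allocation. A minimal userspace sketch (illustrative names, window scaling omitted for clarity) shows how a grown backlog can pin the advertised space at zero until the backlog drains:

```c
/* Sketch of the patch-9 __mptcp_space() computation: bytes sitting in
 * the backlog count against the receive buffer even though they are
 * not yet rmem-accounted, so a large backlog shrinks (and can zero)
 * the advertised window. Illustrative names, no window scaling.
 */
#include <assert.h>

static int fake_mptcp_space(int rcvbuf, int backlog_len, int rmem_alloc)
{
	int space = rcvbuf - backlog_len - rmem_alloc;

	return space > 0 ? space : 0; /* never advertise negative space */
}
```

Once the reader drains the backlog and `backlog_len` drops, the space computation recovers, which is the window-open event the sender is expected to react to.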
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing
2025-10-08 7:30 ` Paolo Abeni
@ 2025-10-09 6:54 ` Geliang Tang
2025-10-09 7:52 ` Paolo Abeni
0 siblings, 1 reply; 33+ messages in thread
From: Geliang Tang @ 2025-10-09 6:54 UTC (permalink / raw)
To: Paolo Abeni, Matthieu Baerts, mptcp
[-- Attachment #1: Type: text/plain, Size: 3687 bytes --]
Hi Paolo,
On Wed, 2025-10-08 at 09:30 +0200, Paolo Abeni wrote:
> On 10/8/25 5:07 AM, Geliang Tang wrote:
> > On Mon, 2025-10-06 at 19:07 +0200, Matthieu Baerts wrote:
> > > Hi Paolo,
> > >
> > > On 06/10/2025 10:11, Paolo Abeni wrote:
> > > > This series includes RX path improvement built around backlog
> > > > processing
> > > Thank you for the new version! This is not a review, but just a
> > > note
> > > to
> > > tell you patchew didn't manage to apply the patches due to the
> > > same
> > > conflict that was already there with the v4 (mptcp_init_skb()
> > > parameters
> > > have been moved to the previous line). I just applied the patches
> > > manually. While at it, I also used this test branch for syzkaller
> > > to
> > > validate them.
> > >
> > > (Also, on patch "mptcp: drop the __mptcp_data_ready() helper",
> > > git
> > > complained that there is a trailing whitespace.)
> >
> > Sorry, patches 9-10 break my "implement mptcp read_sock" v12
> > series. I
> > rebased this series on patches 1-8, it works well. But after
> > applying
> > patches 9-10, I changed mptcp_recv_skb() in [1] from
>
> Thanks for the feedback, the applied delta looks good to me.
>
> > # INFO: with MPTFO start
> > # 57 ns2 MPTCP -> ns1 (10.0.1.1:10054 ) MPTCP (duration
> > 60989ms) [FAIL] client exit code 0, server 124
> > #
> > # netns ns1-RqXF2p (listener) socket stat for 10054:
> > # Failed to find cgroup2 mount
> > # Failed to find cgroup2 mount
> > # Failed to find cgroup2 mount
> > # Netid State Recv-Q Send-Q Local Address:Port Peer
> > Address:Port
> > # tcp ESTAB 0 0 10.0.1.1:10054
> > 10.0.1.2:55516
> > ino:2064372 sk:1 cgroup:unreachable:1 <->
> > # skmem:(r0,rb131072,t0,tb340992,f0,w0,o0,bl0,d0) sack
> > cubic
> > wscale:8,8 rto:206 rtt:5.026/10.034 ato:40 mss:1460 pmtu:1500
> > rcvmss:1436 advmss:1460 cwnd:10 bytes_sent:115312
> > bytes_retrans:1560
> > bytes_acked:113752 bytes_received:5136 segs_out:85 segs_in:16
> > data_segs_out:83 data_segs_in:4 send 23239156bps lastsnd:60939
> > lastrcv:61035 lastack:60912 pacing_rate 343879640bps delivery_rate
> > 1994680bps delivered:84 busy:123ms sndbuf_limited:41ms(33.3%)
> > retrans:0/2 dsack_dups:2 rcv_space:14600 rcv_ssthresh:75432
> > minrtt:0.003 rcv_wnd:75520 tcp-ulp-mptcp flags:Mec
> > token:0000(id:0)/32ed0950(id:0) seq:2946228641406205031 sfseq:1
> > ssnoff:1349223625 maplen:5136
> > # mptcp LAST-ACK 0 0 10.0.1.1:10054
> > 10.0.1.2:55516
> > timer:(keepalive,59sec,0) ino:0 sk:2 cgroup:unreachable:1 ---
> > # skmem:(r0,rb131072,t0,tb345088,f4088,w352264,o0,bl0,d0)
> > subflows_max:2 remote_key token:32ed0950
> > write_seq:6317574787800720824
> > snd_una:6317574787800376423 rcv_nxt:2946228641406210168
> > bytes_sent:113752 bytes_received:5136 bytes_acked:113752
> > subflows_total:1 last_data_sent:60954 last_data_recv:61036
> > last_ack_recv:60913
>
> bytes_sent == bytes_sent, possibly we are missing a window-open
> event,
> which in turn should be triggered by a mptcp_cleanp_rbuf(), which
> AFAICS
> are correctly invoked in the splice code. TL;DR: I can't find
> anything
> obviously wrong :-P
>
> Also the default rx buf size is suspect.
>
> Can you reproduce the issue while capturing the traffic with tcpdump?
> if
> so, could you please share the capture?
Thank you for your suggestion. I've attached several tcpdump logs from
when the tests failed.
>
> Are TFO cases the only one failing?
Not all failures occurred in TFO cases.
Thanks,
-Geliang
>
> Thanks,
>
> Paolo
>
[-- Attachment #2: h37CDP-ns1-ns3-MPTCP-MPTCP-dead:beef:2::2-10011-connector.pcap --]
[-- Type: application/vnd.tcpdump.pcap, Size: 9321384 bytes --]
[-- Attachment #3: h37CDP-ns1-ns3-MPTCP-MPTCP-dead:beef:2::2-10011-listener.pcap --]
[-- Type: application/vnd.tcpdump.pcap, Size: 9321884 bytes --]
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing
2025-10-09 6:54 ` Geliang Tang
@ 2025-10-09 7:52 ` Paolo Abeni
2025-10-09 9:02 ` Geliang Tang
0 siblings, 1 reply; 33+ messages in thread
From: Paolo Abeni @ 2025-10-09 7:52 UTC (permalink / raw)
To: Geliang Tang, Matthieu Baerts, mptcp
[-- Attachment #1: Type: text/plain, Size: 4188 bytes --]
On 10/9/25 8:54 AM, Geliang Tang wrote:
> On Wed, 2025-10-08 at 09:30 +0200, Paolo Abeni wrote:
>> On 10/8/25 5:07 AM, Geliang Tang wrote:
>>> On Mon, 2025-10-06 at 19:07 +0200, Matthieu Baerts wrote:
>>>> Hi Paolo,
>>>>
>>>> On 06/10/2025 10:11, Paolo Abeni wrote:
>>>>> This series includes RX path improvement built around backlog
>>>>> processing
>>>> Thank you for the new version! This is not a review, but just a
>>>> note
>>>> to
>>>> tell you patchew didn't manage to apply the patches due to the
>>>> same
>>>> conflict that was already there with the v4 (mptcp_init_skb()
>>>> parameters
>>>> have been moved to the previous line). I just applied the patches
>>>> manually. While at it, I also used this test branch for syzkaller
>>>> to
>>>> validate them.
>>>>
>>>> (Also, on patch "mptcp: drop the __mptcp_data_ready() helper",
>>>> git
>>>> complained that there is a trailing whitespace.)
>>>
>>> Sorry, patches 9-10 break my "implement mptcp read_sock" v12
>>> series. I
>>> rebased this series on patches 1-8, it works well. But after
>>> applying
>>> patches 9-10, I changed mptcp_recv_skb() in [1] from
>>
>> Thanks for the feedback, the applied delta looks good to me.
>>
>>> # INFO: with MPTFO start
>>> # 57 ns2 MPTCP -> ns1 (10.0.1.1:10054 ) MPTCP (duration
>>> 60989ms) [FAIL] client exit code 0, server 124
>>> #
>>> # netns ns1-RqXF2p (listener) socket stat for 10054:
>>> # Failed to find cgroup2 mount
>>> # Failed to find cgroup2 mount
>>> # Failed to find cgroup2 mount
>>> # Netid State Recv-Q Send-Q Local Address:Port Peer
>>> Address:Port
>>> # tcp ESTAB 0 0 10.0.1.1:10054
>>> 10.0.1.2:55516
>>> ino:2064372 sk:1 cgroup:unreachable:1 <->
>>> # skmem:(r0,rb131072,t0,tb340992,f0,w0,o0,bl0,d0) sack
>>> cubic
>>> wscale:8,8 rto:206 rtt:5.026/10.034 ato:40 mss:1460 pmtu:1500
>>> rcvmss:1436 advmss:1460 cwnd:10 bytes_sent:115312
>>> bytes_retrans:1560
>>> bytes_acked:113752 bytes_received:5136 segs_out:85 segs_in:16
>>> data_segs_out:83 data_segs_in:4 send 23239156bps lastsnd:60939
>>> lastrcv:61035 lastack:60912 pacing_rate 343879640bps delivery_rate
>>> 1994680bps delivered:84 busy:123ms sndbuf_limited:41ms(33.3%)
>>> retrans:0/2 dsack_dups:2 rcv_space:14600 rcv_ssthresh:75432
>>> minrtt:0.003 rcv_wnd:75520 tcp-ulp-mptcp flags:Mec
>>> token:0000(id:0)/32ed0950(id:0) seq:2946228641406205031 sfseq:1
>>> ssnoff:1349223625 maplen:5136
>>> # mptcp LAST-ACK 0 0 10.0.1.1:10054
>>> 10.0.1.2:55516
>>> timer:(keepalive,59sec,0) ino:0 sk:2 cgroup:unreachable:1 ---
>>> # skmem:(r0,rb131072,t0,tb345088,f4088,w352264,o0,bl0,d0)
>>> subflows_max:2 remote_key token:32ed0950
>>> write_seq:6317574787800720824
>>> snd_una:6317574787800376423 rcv_nxt:2946228641406210168
>>> bytes_sent:113752 bytes_received:5136 bytes_acked:113752
>>> subflows_total:1 last_data_sent:60954 last_data_recv:61036
>>> last_ack_recv:60913
>>
>> bytes_sent == bytes_sent, possibly we are missing a window-open
>> event,
>> which in turn should be triggered by a mptcp_cleanp_rbuf(), which
>> AFAICS
>> are correctly invoked in the splice code. TL;DR: I can't find
>> anything
>> obviously wrong :-P
>>
>> Also the default rx buf size is suspect.
>>
>> Can you reproduce the issue while capturing the traffic with tcpdump?
>> if
>> so, could you please share the capture?
>
> Thank you for your suggestion. I've attached several tcpdump logs from
> when the tests failed.
Oh wow! The receiver actually sends the window-open notification
(packets 527 and 528 in the trace), but the sender does not react at all.
I haven't dug yet into why the sender did not try a zero
window probe (it should!), but it looks like we have some old bug in
sender wakeup since the MPTCP_DEQUEUE introduction (which is very
surprising, why did we not catch/observe this earlier ?!?). That could
also explain sporadic mptcp_join failures.
Could you please try the attached patch?
/P
p.s. AFAICS the backlog introduction should just increase the frequency
of an already possible event...
[-- Attachment #2: always_wakeup_snd_nxt_increase.patch --]
[-- Type: text/x-patch, Size: 2138 bytes --]
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index a92ecec1beb3b3..268ec752ffc01b 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -1038,12 +1038,14 @@ static void dfrag_clear(struct sock *sk, struct mptcp_data_frag *dfrag)
}
/* called under both the msk socket lock and the data lock */
-static void __mptcp_clean_una(struct sock *sk)
+static void __mptcp_clean_una_wakeup(struct sock *sk)
{
struct mptcp_sock *msk = mptcp_sk(sk);
struct mptcp_data_frag *dtmp, *dfrag;
u64 snd_una;
+ lockdep_assert_held_once(&sk->sk_lock.slock);
+
snd_una = msk->snd_una;
list_for_each_entry_safe(dfrag, dtmp, &msk->rtx_queue, list) {
if (after64(dfrag->data_seq + dfrag->data_len, snd_una))
@@ -1095,13 +1097,6 @@ static void __mptcp_clean_una(struct sock *sk)
if (mptcp_pending_data_fin_ack(sk))
mptcp_schedule_work(sk);
-}
-
-static void __mptcp_clean_una_wakeup(struct sock *sk)
-{
- lockdep_assert_held_once(&sk->sk_lock.slock);
-
- __mptcp_clean_una(sk);
mptcp_write_space(sk);
}
@@ -3512,7 +3507,7 @@ static void mptcp_destroy(struct sock *sk)
void __mptcp_data_acked(struct sock *sk)
{
if (!sock_owned_by_user(sk))
- __mptcp_clean_una(sk);
+ __mptcp_clean_una_wakeup(sk);
else
__set_bit(MPTCP_CLEAN_UNA, &mptcp_sk(sk)->cb_flags);
}
diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.sh b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
index 61ae6762f5b601..a185abe13b95c4 100755
--- a/tools/testing/selftests/net/mptcp/mptcp_connect.sh
+++ b/tools/testing/selftests/net/mptcp/mptcp_connect.sh
@@ -447,8 +447,8 @@ do_transfer()
local duration
duration=$((stop-start))
printf "(duration %05sms) " "${duration}"
- mptcp_lib_pr_err_stats "${listener_ns}" "${connector_ns}" "${port}" \
- "/tmp/${listener_ns}.out" "/tmp/${connector_ns}.out"
+ # mptcp_lib_pr_err_stats "${listener_ns}" "${connector_ns}" "${port}" \
+ # "/tmp/${listener_ns}.out" "/tmp/${connector_ns}.out"
if [ ${rets} -ne 0 ] || [ ${retc} -ne 0 ]; then
mptcp_lib_pr_fail "client exit code $retc, server $rets"
mptcp_lib_pr_err_stats "${listener_ns}" "${connector_ns}" "${port}" \
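The attached patch folds the writer wakeup into the common clean-una path, so every ack processed in "fast" context (socket not owned by the user) also wakes senders blocked on send space, even when only the window moved and snd_una did not advance. A userspace model of that control flow (illustrative names; the real code defers via the MPTCP_CLEAN_UNA cb_flags bit):

```c
/* Minimal model of the attached fix: merging __mptcp_clean_una() into
 * __mptcp_clean_una_wakeup() means __mptcp_data_acked() always ends
 * in a write-space wakeup when it runs directly, instead of only when
 * some acked data was dropped from the rtx queue. Illustrative names.
 */
#include <assert.h>
#include <stdbool.h>

struct fake_msk {
	unsigned long long snd_una;
	bool owned_by_user;       /* models sock_owned_by_user() */
	bool clean_una_deferred;  /* stands in for the MPTCP_CLEAN_UNA bit */
	int write_space_wakeups;
};

static void clean_una_wakeup(struct fake_msk *msk, unsigned long long ack)
{
	if (ack > msk->snd_una)
		msk->snd_una = ack; /* the kernel also frees acked dfrags here */
	/* always wake writers, even for a pure window update */
	msk->write_space_wakeups++;
}

/* __mptcp_data_acked() equivalent: run now, or defer to release_sock()
 * when the user owns the socket lock.
 */
static void data_acked(struct fake_msk *msk, unsigned long long ack)
{
	if (!msk->owned_by_user)
		clean_una_wakeup(msk, ack);
	else
		msk->clean_una_deferred = true;
}
```

In the failing trace the ack opened the window without moving the msk-level ack seq, which is exactly the case where the pre-fix code could skip the wakeup.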
^ permalink raw reply related [flat|nested] 33+ messages in thread
* Re: [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing
2025-10-09 7:52 ` Paolo Abeni
@ 2025-10-09 9:02 ` Geliang Tang
2025-10-09 10:23 ` Paolo Abeni
0 siblings, 1 reply; 33+ messages in thread
From: Geliang Tang @ 2025-10-09 9:02 UTC (permalink / raw)
To: Paolo Abeni, Matthieu Baerts, mptcp
Hi Paolo,
On Thu, 2025-10-09 at 09:52 +0200, Paolo Abeni wrote:
> On 10/9/25 8:54 AM, Geliang Tang wrote:
> > On Wed, 2025-10-08 at 09:30 +0200, Paolo Abeni wrote:
> > > On 10/8/25 5:07 AM, Geliang Tang wrote:
> > > > On Mon, 2025-10-06 at 19:07 +0200, Matthieu Baerts wrote:
> > > > > Hi Paolo,
> > > > >
> > > > > On 06/10/2025 10:11, Paolo Abeni wrote:
> > > > > > This series includes RX path improvement built around
> > > > > > backlog
> > > > > > processing
> > > > > Thank you for the new version! This is not a review, but just
> > > > > a
> > > > > note
> > > > > to
> > > > > tell you patchew didn't manage to apply the patches due to
> > > > > the
> > > > > same
> > > > > conflict that was already there with the v4 (mptcp_init_skb()
> > > > > parameters
> > > > > have been moved to the previous line). I just applied the
> > > > > patches
> > > > > manually. While at it, I also used this test branch for
> > > > > syzkaller
> > > > > to
> > > > > validate them.
> > > > >
> > > > > (Also, on patch "mptcp: drop the __mptcp_data_ready()
> > > > > helper",
> > > > > git
> > > > > complained that there is a trailing whitespace.)
> > > >
> > > > Sorry, patches 9-10 break my "implement mptcp read_sock" v12
> > > > series. I
> > > > rebased this series on patches 1-8, it works well. But after
> > > > applying
> > > > patches 9-10, I changed mptcp_recv_skb() in [1] from
> > >
> > > Thanks for the feedback, the applied delta looks good to me.
> > >
> > > > # INFO: with MPTFO start
> > > > # 57 ns2 MPTCP -> ns1 (10.0.1.1:10054 ) MPTCP
> > > > (duration
> > > > 60989ms) [FAIL] client exit code 0, server 124
> > > > #
> > > > # netns ns1-RqXF2p (listener) socket stat for 10054:
> > > > # Failed to find cgroup2 mount
> > > > # Failed to find cgroup2 mount
> > > > # Failed to find cgroup2 mount
> > > > # Netid State Recv-Q Send-Q Local Address:Port Peer
> > > > Address:Port
> > > > # tcp ESTAB 0 0 10.0.1.1:10054
> > > > 10.0.1.2:55516
> > > > ino:2064372 sk:1 cgroup:unreachable:1 <->
> > > > # skmem:(r0,rb131072,t0,tb340992,f0,w0,o0,bl0,d0) sack
> > > > cubic
> > > > wscale:8,8 rto:206 rtt:5.026/10.034 ato:40 mss:1460 pmtu:1500
> > > > rcvmss:1436 advmss:1460 cwnd:10 bytes_sent:115312
> > > > bytes_retrans:1560
> > > > bytes_acked:113752 bytes_received:5136 segs_out:85 segs_in:16
> > > > data_segs_out:83 data_segs_in:4 send 23239156bps lastsnd:60939
> > > > lastrcv:61035 lastack:60912 pacing_rate 343879640bps
> > > > delivery_rate
> > > > 1994680bps delivered:84 busy:123ms sndbuf_limited:41ms(33.3%)
> > > > retrans:0/2 dsack_dups:2 rcv_space:14600 rcv_ssthresh:75432
> > > > minrtt:0.003 rcv_wnd:75520 tcp-ulp-mptcp flags:Mec
> > > > token:0000(id:0)/32ed0950(id:0) seq:2946228641406205031 sfseq:1
> > > > ssnoff:1349223625 maplen:5136
> > > > # mptcp LAST-ACK 0 0 10.0.1.1:10054
> > > > 10.0.1.2:55516
> > > > timer:(keepalive,59sec,0) ino:0 sk:2 cgroup:unreachable:1 ---
> > > > #
> > > > skmem:(r0,rb131072,t0,tb345088,f4088,w352264,o0,bl0,d0)
> > > > subflows_max:2 remote_key token:32ed0950
> > > > write_seq:6317574787800720824
> > > > snd_una:6317574787800376423 rcv_nxt:2946228641406210168
> > > > bytes_sent:113752 bytes_received:5136 bytes_acked:113752
> > > > subflows_total:1 last_data_sent:60954 last_data_recv:61036
> > > > last_ack_recv:60913
> > >
> > > bytes_sent == bytes_sent, possibly we are missing a window-open
> > > event,
> > > which in turn should be triggered by a mptcp_cleanp_rbuf(), which
> > > AFAICS
> > > are correctly invoked in the splice code. TL;DR: I can't find
> > > anything
> > > obviously wrong :-P
> > >
> > > Also the default rx buf size is suspect.
> > >
> > > Can you reproduce the issue while capturing the traffic with
> > > tcpdump?
> > > if
> > > so, could you please share the capture?
> >
> > Thank you for your suggestion. I've attached several tcpdump logs
> > from
> > when the tests failed.
>
> Oh wow! the receiver actually sends the window open notification
> (packets 527 and 528 in the trace), but the sender does not react at
> all.
>
> I have no idea/I haven't digged yet why the sender did not try a zero
> window probe (it should!), but it looks like we have some old bug in
> sender wakeup since MPTCP_DEQUEUE introduction (which is very
> surprising, why we did not catch/observe this earlier ?!?). That
> could
> explain also sporadic mptcp_join failures.
>
> Could you please try the attached patch?
Thank you very much. I just tested this patch, but it doesn't work. The
splice test still fails and reports the same error.
-Geliang
>
> /P
>
> p.s. AFAICS the backlog introduction should just increase the
> frequency
> of an already possible event...
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing
2025-10-09 9:02 ` Geliang Tang
@ 2025-10-09 10:23 ` Paolo Abeni
2025-10-09 13:58 ` Paolo Abeni
0 siblings, 1 reply; 33+ messages in thread
From: Paolo Abeni @ 2025-10-09 10:23 UTC (permalink / raw)
To: Geliang Tang, Matthieu Baerts, mptcp
On 10/9/25 11:02 AM, Geliang Tang wrote:
> On Thu, 2025-10-09 at 09:52 +0200, Paolo Abeni wrote:
>> On 10/9/25 8:54 AM, Geliang Tang wrote:
>>> On Wed, 2025-10-08 at 09:30 +0200, Paolo Abeni wrote:
>>>> On 10/8/25 5:07 AM, Geliang Tang wrote:
>>>>> On Mon, 2025-10-06 at 19:07 +0200, Matthieu Baerts wrote:
>>>>>> Hi Paolo,
>>>>>>
>>>>>> On 06/10/2025 10:11, Paolo Abeni wrote:
>>>>>>> This series includes RX path improvement built around
>>>>>>> backlog
>>>>>>> processing
>>>>>> Thank you for the new version! This is not a review, but just
>>>>>> a
>>>>>> note
>>>>>> to
>>>>>> tell you patchew didn't manage to apply the patches due to
>>>>>> the
>>>>>> same
>>>>>> conflict that was already there with the v4 (mptcp_init_skb()
>>>>>> parameters
>>>>>> have been moved to the previous line). I just applied the
>>>>>> patches
>>>>>> manually. While at it, I also used this test branch for
>>>>>> syzkaller
>>>>>> to
>>>>>> validate them.
>>>>>>
>>>>>> (Also, on patch "mptcp: drop the __mptcp_data_ready()
>>>>>> helper",
>>>>>> git
>>>>>> complained that there is a trailing whitespace.)
>>>>>
>>>>> Sorry, patches 9-10 break my "implement mptcp read_sock" v12
>>>>> series. I
>>>>> rebased this series on patches 1-8, it works well. But after
>>>>> applying
>>>>> patches 9-10, I changed mptcp_recv_skb() in [1] from
>>>>
>>>> Thanks for the feedback, the applied delta looks good to me.
>>>>
>>>>> # INFO: with MPTFO start
>>>>> # 57 ns2 MPTCP -> ns1 (10.0.1.1:10054 ) MPTCP
>>>>> (duration
>>>>> 60989ms) [FAIL] client exit code 0, server 124
>>>>> #
>>>>> # netns ns1-RqXF2p (listener) socket stat for 10054:
>>>>> # Failed to find cgroup2 mount
>>>>> # Failed to find cgroup2 mount
>>>>> # Failed to find cgroup2 mount
>>>>> # Netid State Recv-Q Send-Q Local Address:Port Peer
>>>>> Address:Port
>>>>> # tcp ESTAB 0 0 10.0.1.1:10054
>>>>> 10.0.1.2:55516
>>>>> ino:2064372 sk:1 cgroup:unreachable:1 <->
>>>>> # skmem:(r0,rb131072,t0,tb340992,f0,w0,o0,bl0,d0) sack
>>>>> cubic
>>>>> wscale:8,8 rto:206 rtt:5.026/10.034 ato:40 mss:1460 pmtu:1500
>>>>> rcvmss:1436 advmss:1460 cwnd:10 bytes_sent:115312
>>>>> bytes_retrans:1560
>>>>> bytes_acked:113752 bytes_received:5136 segs_out:85 segs_in:16
>>>>> data_segs_out:83 data_segs_in:4 send 23239156bps lastsnd:60939
>>>>> lastrcv:61035 lastack:60912 pacing_rate 343879640bps
>>>>> delivery_rate
>>>>> 1994680bps delivered:84 busy:123ms sndbuf_limited:41ms(33.3%)
>>>>> retrans:0/2 dsack_dups:2 rcv_space:14600 rcv_ssthresh:75432
>>>>> minrtt:0.003 rcv_wnd:75520 tcp-ulp-mptcp flags:Mec
>>>>> token:0000(id:0)/32ed0950(id:0) seq:2946228641406205031 sfseq:1
>>>>> ssnoff:1349223625 maplen:5136
>>>>> # mptcp LAST-ACK 0 0 10.0.1.1:10054
>>>>> 10.0.1.2:55516
>>>>> timer:(keepalive,59sec,0) ino:0 sk:2 cgroup:unreachable:1 ---
>>>>> #
>>>>> skmem:(r0,rb131072,t0,tb345088,f4088,w352264,o0,bl0,d0)
>>>>> subflows_max:2 remote_key token:32ed0950
>>>>> write_seq:6317574787800720824
>>>>> snd_una:6317574787800376423 rcv_nxt:2946228641406210168
>>>>> bytes_sent:113752 bytes_received:5136 bytes_acked:113752
>>>>> subflows_total:1 last_data_sent:60954 last_data_recv:61036
>>>>> last_ack_recv:60913
>>>>
>>>> bytes_sent == bytes_sent, possibly we are missing a window-open
>>>> event,
>>>> which in turn should be triggered by a mptcp_cleanp_rbuf(), which
>>>> AFAICS
>>>> are correctly invoked in the splice code. TL;DR: I can't find
>>>> anything
>>>> obviously wrong :-P
>>>>
>>>> Also the default rx buf size is suspect.
>>>>
>>>> Can you reproduce the issue while capturing the traffic with
>>>> tcpdump?
>>>> if
>>>> so, could you please share the capture?
>>>
>>> Thank you for your suggestion. I've attached several tcpdump logs
>>> from
>>> when the tests failed.
>>
>> Oh wow! the receiver actually sends the window open notification
>> (packets 527 and 528 in the trace), but the sender does not react at
>> all.
>>
>> I have no idea/I haven't digged yet why the sender did not try a zero
>> window probe (it should!), but it looks like we have some old bug in
>> sender wakeup since MPTCP_DEQUEUE introduction (which is very
>> surprising, why we did not catch/observe this earlier ?!?). That
>> could
>> explain also sporadic mptcp_join failures.
>>
>> Could you please try the attached patch?
>
> Thank you very much. I just tested this patch, but it doesn't work. The
> splice test still fails and reports the same error.
Uhmmm... right, in the pcap trace you shared the relevant ack opened the
(mptcp-level) window, without changing the msk-level ack seq.
So we need something similar for __mptcp_check_push(). I can't do it
right now. Could you please have a look?
Otherwise I'll try to share a v2 patch later/tomorrow.
Cheers,
Paolo
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing
2025-10-09 10:23 ` Paolo Abeni
@ 2025-10-09 13:58 ` Paolo Abeni
2025-10-10 8:21 ` Paolo Abeni
0 siblings, 1 reply; 33+ messages in thread
From: Paolo Abeni @ 2025-10-09 13:58 UTC (permalink / raw)
To: Geliang Tang, Matthieu Baerts, mptcp
On 10/9/25 12:23 PM, Paolo Abeni wrote:
> On 10/9/25 11:02 AM, Geliang Tang wrote:
>> On Thu, 2025-10-09 at 09:52 +0200, Paolo Abeni wrote:
>>> On 10/9/25 8:54 AM, Geliang Tang wrote:
>>>> On Wed, 2025-10-08 at 09:30 +0200, Paolo Abeni wrote:
>>>>> On 10/8/25 5:07 AM, Geliang Tang wrote:
>>>>>> On Mon, 2025-10-06 at 19:07 +0200, Matthieu Baerts wrote:
>>>>>>> Hi Paolo,
>>>>>>>
>>>>>>> On 06/10/2025 10:11, Paolo Abeni wrote:
>>>>>>>> This series includes RX path improvement built around
>>>>>>>> backlog
>>>>>>>> processing
>>>>>>> Thank you for the new version! This is not a review, but just
>>>>>>> a
>>>>>>> note
>>>>>>> to
>>>>>>> tell you patchew didn't manage to apply the patches due to
>>>>>>> the
>>>>>>> same
>>>>>>> conflict that was already there with the v4 (mptcp_init_skb()
>>>>>>> parameters
>>>>>>> have been moved to the previous line). I just applied the
>>>>>>> patches
>>>>>>> manually. While at it, I also used this test branch for
>>>>>>> syzkaller
>>>>>>> to
>>>>>>> validate them.
>>>>>>>
>>>>>>> (Also, on patch "mptcp: drop the __mptcp_data_ready()
>>>>>>> helper",
>>>>>>> git
>>>>>>> complained that there is a trailing whitespace.)
>>>>>>
>>>>>> Sorry, patches 9-10 break my "implement mptcp read_sock" v12
>>>>>> series. I
>>>>>> rebased this series on patches 1-8, it works well. But after
>>>>>> applying
>>>>>> patches 9-10, I changed mptcp_recv_skb() in [1] from
>>>>>
>>>>> Thanks for the feedback, the applied delta looks good to me.
>>>>>
>>>>>> # INFO: with MPTFO start
>>>>>> # 57 ns2 MPTCP -> ns1 (10.0.1.1:10054 ) MPTCP
>>>>>> (duration
>>>>>> 60989ms) [FAIL] client exit code 0, server 124
>>>>>> #
>>>>>> # netns ns1-RqXF2p (listener) socket stat for 10054:
>>>>>> # Failed to find cgroup2 mount
>>>>>> # Failed to find cgroup2 mount
>>>>>> # Failed to find cgroup2 mount
>>>>>> # Netid State Recv-Q Send-Q Local Address:Port Peer
>>>>>> Address:Port
>>>>>> # tcp ESTAB 0 0 10.0.1.1:10054
>>>>>> 10.0.1.2:55516
>>>>>> ino:2064372 sk:1 cgroup:unreachable:1 <->
>>>>>> # skmem:(r0,rb131072,t0,tb340992,f0,w0,o0,bl0,d0) sack
>>>>>> cubic
>>>>>> wscale:8,8 rto:206 rtt:5.026/10.034 ato:40 mss:1460 pmtu:1500
>>>>>> rcvmss:1436 advmss:1460 cwnd:10 bytes_sent:115312
>>>>>> bytes_retrans:1560
>>>>>> bytes_acked:113752 bytes_received:5136 segs_out:85 segs_in:16
>>>>>> data_segs_out:83 data_segs_in:4 send 23239156bps lastsnd:60939
>>>>>> lastrcv:61035 lastack:60912 pacing_rate 343879640bps
>>>>>> delivery_rate
>>>>>> 1994680bps delivered:84 busy:123ms sndbuf_limited:41ms(33.3%)
>>>>>> retrans:0/2 dsack_dups:2 rcv_space:14600 rcv_ssthresh:75432
>>>>>> minrtt:0.003 rcv_wnd:75520 tcp-ulp-mptcp flags:Mec
>>>>>> token:0000(id:0)/32ed0950(id:0) seq:2946228641406205031 sfseq:1
>>>>>> ssnoff:1349223625 maplen:5136
>>>>>> # mptcp LAST-ACK 0 0 10.0.1.1:10054
>>>>>> 10.0.1.2:55516
>>>>>> timer:(keepalive,59sec,0) ino:0 sk:2 cgroup:unreachable:1 ---
>>>>>> #
>>>>>> skmem:(r0,rb131072,t0,tb345088,f4088,w352264,o0,bl0,d0)
>>>>>> subflows_max:2 remote_key token:32ed0950
>>>>>> write_seq:6317574787800720824
>>>>>> snd_una:6317574787800376423 rcv_nxt:2946228641406210168
>>>>>> bytes_sent:113752 bytes_received:5136 bytes_acked:113752
>>>>>> subflows_total:1 last_data_sent:60954 last_data_recv:61036
>>>>>> last_ack_recv:60913
>>>>>
>>>>> bytes_sent == bytes_acked, possibly we are missing a window-open
>>>>> event,
>>>>> which in turn should be triggered by mptcp_cleanup_rbuf(), which
>>>>> AFAICS
>>>>> is correctly invoked in the splice code. TL;DR: I can't find
>>>>> anything
>>>>> obviously wrong :-P
>>>>>
>>>>> Also the default rx buf size is suspect.
>>>>>
>>>>> Can you reproduce the issue while capturing the traffic with
>>>>> tcpdump? If
>>>>> so, could you please share the capture?
>>>>
>>>> Thank you for your suggestion. I've attached several tcpdump logs
>>>> from
>>>> when the tests failed.
>>>
>>> Oh wow! The receiver actually sends the window open notification
>>> (packets 527 and 528 in the trace), but the sender does not react at
>>> all.
>>>
>>> I have no idea why the sender did not try a zero window probe (it
>>> should!); I haven't dug into it yet. It looks like we have some old
>>> bug in the sender wakeup since the MPTCP_DEQUEUE introduction (which
>>> is very surprising: why did we not catch/observe this earlier?!).
>>> That could also explain the sporadic mptcp_join failures.
>>>
>>> Could you please try the attached patch?
>>
>> Thank you very much. I just tested this patch, but it doesn't work. The
>> splice test still fails and reports the same error.
>
> Uhmmm... right, in the pcap trace you shared the relevant ack opened the
> (mptcp-level) window, without changing the msk-level ack seq.
>
> So we need something similar for __mptcp_check_push(). I can't do it
> right now. Could you please have a look?
I reviewed the relevant code again and my initial assessment was wrong,
i.e. there is no need for additional wake-ups.
@Geliang: if you reproduce the issue multiple times, are there any
common patterns? E.g. sender files considerably larger than the client
ones, or only a specific subset of all the test cases failing, or ...
Thanks,
Paolo
* Re: [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing
2025-10-09 13:58 ` Paolo Abeni
@ 2025-10-10 8:21 ` Paolo Abeni
2025-10-10 12:22 ` Geliang Tang
0 siblings, 1 reply; 33+ messages in thread
From: Paolo Abeni @ 2025-10-10 8:21 UTC (permalink / raw)
To: Geliang Tang, Matthieu Baerts, mptcp
On 10/9/25 3:58 PM, Paolo Abeni wrote:
> @Geliang: if you reproduce the issue multiple times, are there any
> common patterns? E.g. sender files considerably larger than the client
> ones, or only a specific subset of all the test cases failing, or ...
Other questions:
- Can you please share your setup details (VM vs baremetal, debug config
vs non-debug, vng vs plain qemu, number of [v]cores...)? I can't repro
the issue locally.
- Can you please share a pcap capture _and_ the selftest text output for
the same failing test?
In the log shared previously the sender had data queued at the
mptcp-level, but not at TCP-level. In the shared pcap capture the
receiver sends a couple of acks opening the tcp-level and mptcp-level
window, but the sender never replies.
In such a scenario the incoming ack should reach ack_update_msk() ->
__mptcp_check_push() -> __mptcp_subflow_push_pending() (or
mptcp_release_cb -> __mptcp_push_pending() ) -> mptcp_sendmsg_frag(),
but that chain is apparently broken somewhere in the failing scenario.
Could you please add probe points to the mentioned functions and perf
record the test, to see where the chain is interrupted?
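The suggested probing could be sketched like this (a hypothetical workflow: the function list comes from the chain above, but whether each symbol is probeable depends on the kernel build, as some may be inlined; the script only prints the perf commands so they can be reviewed and run as root):

```shell
#!/bin/sh
# Hypothetical sketch (commands not verified against this kernel build):
# emit the perf invocations needed to trace the MPTCP push chain.
set -eu

cmds=""
for fn in ack_update_msk __mptcp_check_push \
          __mptcp_subflow_push_pending __mptcp_push_pending \
          mptcp_sendmsg_frag; do
	# One kprobe per function in the suspected chain.
	cmds="${cmds}perf probe --add ${fn}
"
done

# Record system-wide while re-running the failing selftest, then dump
# the per-probe event sequence to see where the chain stops.
cmds="${cmds}perf record -e 'probe:*' -aR -- ./mptcp_connect_splice.sh
perf script
"
printf '%s' "$cmds"
```

Comparing the `perf script` output against the probe list should show the last function reached before the chain breaks.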
Thanks,
Paolo
* Re: [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing
2025-10-10 8:21 ` Paolo Abeni
@ 2025-10-10 12:22 ` Geliang Tang
2025-10-13 9:07 ` Geliang Tang
0 siblings, 1 reply; 33+ messages in thread
From: Geliang Tang @ 2025-10-10 12:22 UTC (permalink / raw)
To: Paolo Abeni, Matthieu Baerts, mptcp
Hi Paolo,
On Fri, 2025-10-10 at 10:21 +0200, Paolo Abeni wrote:
> On 10/9/25 3:58 PM, Paolo Abeni wrote:
> > @Geliang: if you reproduce the issue multiple times, are there any
> > common patterns? E.g. sender files considerably larger than the
> > client
> > ones, or only a specific subset of all the test cases failing, or
> > ...
>
> Other questions:
> - Can you please share your setup details (VM vs baremetal, debug
> config
> vs non-debug, vng vs plain qemu, number of [v]cores...)? I can't
> repro
> the issue locally.
Here are my modifications:
https://git.kernel.org/pub/scm/linux/kernel/git/geliang/mptcp_net-next.git/log/?h=splice_new
I used mptcp-upstream-virtme-docker normal config to reproduce it:
docker run \
-e INPUT_NO_BLOCK=1 \
-e INPUT_PACKETDRILL_NO_SYNC=1 \
-v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
--pull always ghcr.io/multipath-tcp/mptcp-upstream-virtme-docker:latest \
auto-normal
$ cat .virtme-exec-run
run_loop run_selftest_one ./mptcp_connect_splice.sh
Running mptcp_connect_splice.sh in a loop dozens of times should
reproduce the test failure.
> - Can you please share a pcap capture _and_ the selftest text output
> for
> the same failing test?
>
> In the log shared previously the sender had data queued at the
> mptcp-level, but not at TCP-level. In the shared pcap capture the
> receiver sends a couple of acks opening the tcp-level and mptcp-level
> window, but the sender never replies.
>
> In such a scenario the incoming ack should reach ack_update_msk() ->
> __mptcp_check_push() -> __mptcp_subflow_push_pending() (or
> mptcp_release_cb -> __mptcp_push_pending() ) -> mptcp_sendmsg_frag(),
> but
> that chain is apparently broken somewhere in the failing scenario.
> Could
> you please add probe points to the mentioned functions and perf
> record
> the test, to see where the chain is interrupted?
Thank you for your suggestion. I will proceed with testing accordingly.
-Geliang
>
> Thanks,
>
> Paolo
>
* Re: [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing
2025-10-10 12:22 ` Geliang Tang
@ 2025-10-13 9:07 ` Geliang Tang
2025-10-13 13:29 ` Paolo Abeni
2025-10-15 9:00 ` Paolo Abeni
0 siblings, 2 replies; 33+ messages in thread
From: Geliang Tang @ 2025-10-13 9:07 UTC (permalink / raw)
To: Paolo Abeni, Matthieu Baerts, mptcp
[-- Attachment #1: Type: text/plain, Size: 2471 bytes --]
Hi Paolo,
On Fri, 2025-10-10 at 20:22 +0800, Geliang Tang wrote:
> Hi Paolo,
>
> On Fri, 2025-10-10 at 10:21 +0200, Paolo Abeni wrote:
> > On 10/9/25 3:58 PM, Paolo Abeni wrote:
> > > @Geliang: if you reproduce the issue multiple times, are there
> > > any
> > > common patterns? E.g. sender files considerably larger than the
> > > client
> > > ones, or only a specific subset of all the test cases failing, or
> > > ...
> >
> > Other questions:
> > - Can you please share your setup details (VM vs baremetal, debug
> > config
> > vs non-debug, vng vs plain qemu, number of [v]cores...)? I can't
> > repro
> > the issue locally.
>
> Here are my modifications:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/geliang/mptcp_net-next.git/log/?h=splice_new
>
> I used mptcp-upstream-virtme-docker normal config to reproduce it:
>
> docker run \
> -e INPUT_NO_BLOCK=1 \
> -e INPUT_PACKETDRILL_NO_SYNC=1 \
> -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
> --pull always ghcr.io/multipath-tcp/mptcp-upstream-virtme-docker:latest \
> auto-normal
>
> $ cat .virtme-exec-run
> run_loop run_selftest_one ./mptcp_connect_splice.sh
>
> Running mptcp_connect_splice.sh in a loop dozens of times should
> reproduce the test failure.
>
> > - Can you please share a pcap capture _and_ the selftest text
> > output
> > for
> > the same failing test?
The pcap captures (gQQ13x-ns1-ns3-MPTCP-MPTCP-dead:beef:3::2-10013-
connector.pcap, gQQ13x-ns1-ns3-MPTCP-MPTCP-dead:beef:3::2-10013-
listener.pcap) and the selftest text output (selftest_output) are
attached.
Thanks,
-Geliang
> >
> > In the log shared previously the sender had data queued at the
> > mptcp-level, but not at TCP-level. In the shared pcap capture the
> > receiver sends a couple of acks opening the tcp-level and mptcp-
> > level
> > window, but the sender never replies.
> >
> > In such a scenario the incoming ack should reach ack_update_msk() ->
> > __mptcp_check_push() -> __mptcp_subflow_push_pending() (or
> > mptcp_release_cb -> __mptcp_push_pending() ) ->
> > mptcp_sendmsg_frag(),
> > but
> > that chain is apparently broken somewhere in the failing scenario.
> > Could
> > you please add probe points to the mentioned functions and perf
> > record
> > the test, to see where the chain is interrupted?
>
> Thank you for your suggestion. I will proceed with testing
> accordingly.
>
> -Geliang
>
> >
> > Thanks,
> >
> > Paolo
> >
>
>
[-- Attachment #2: gQQ13x-ns1-ns3-MPTCP-MPTCP-dead:beef:3::2-10013-connector.pcap --]
[-- Type: application/vnd.tcpdump.pcap, Size: 3524784 bytes --]
[-- Attachment #3: gQQ13x-ns1-ns3-MPTCP-MPTCP-dead:beef:3::2-10013-listener.pcap --]
[-- Type: application/vnd.tcpdump.pcap, Size: 3459828 bytes --]
[-- Attachment #4: selftest_output --]
[-- Type: text/plain, Size: 59209 bytes --]
Selftest Test: ./mptcp_connect_splice.sh
TAP version 13
1..1
# INFO: Packet capture files will have this prefix: gQQ13x-
# INFO: set ns3-CLrvi9 dev ns3eth2: ethtool -K tso off gso off gro off
# INFO: set ns4-wl139Z dev ns4eth3: ethtool -K gro off
# Created /tmp/tmp.v2zbC0ZvuB (size 1881388 B) containing data sent by client
# Created /tmp/tmp.eJVAi3xEfQ (size 4318832 B) containing data sent by server
# 01 New MPTCP socket can be blocked via sysctl [ OK ]
# 02 Validating network environment with pings [ OK ]
# INFO: Using loss of 0.54% on ns3eth4
# INFO: extra options: -m splice
# 03 ns1 MPTCP -> ns1 (10.0.1.1:10000 ) MPTCP (duration 60ms) [ OK ]
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: data link type LINUX_SLL2
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: data link type LINUX_SLL2
# tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# 319 packets captured319 packets captured
# 637 packets received by filter
# 0 packets dropped by kernel
#
# 637 packets received by filter
# 0 packets dropped by kernel
# 04 ns1 MPTCP -> ns1 (10.0.1.1:10001 ) TCP (duration 51ms) [ OK ]
# tcpdump: WARNING: tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)tcpdump: data link type LINUX_SLL2
#
# tcpdump: data link type LINUX_SLL2
# tcpdump: tcpdump: listening on anylistening on any, link-type LINUX_SLL2 (Linux cooked v2), link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# , snapshot length 65535 bytes
# 209 packets captured
# 209 packets captured
# 414 packets received by filter
# 0 packets dropped by kernel
# 414 packets received by filter
# 0 packets dropped by kernel
# 05 ns1 TCP -> ns1 (10.0.1.1:10002 ) MPTCP (duration 49ms) [ OK ]
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: data link type LINUX_SLL2
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: data link type LINUX_SLL2
# tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2)tcpdump: , snapshot length 65535 bytes
# listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# 251 packets captured
# 502 packets received by filter251 packets captured
# 502 packets received by filter
# 0 packets dropped by kernel
# 0 packets dropped by kernel
#
# 06 ns1 MPTCP -> ns1 (dead:beef:1::1:10003) MPTCP (duration 266ms) [ OK ]
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: data link type LINUX_SLL2
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: data link type LINUX_SLL2
# tcpdump: tcpdump: listening on anylistening on any, link-type LINUX_SLL2 (Linux cooked v2), link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# , snapshot length 65535 bytes
# 289 packets captured
# 289 packets captured576 packets received by filter
# 576 packets received by filter
# 0 packets dropped by kernel
#
# 0 packets dropped by kernel
# 07 ns1 MPTCP -> ns1 (dead:beef:1::1:10004) TCP (duration 57ms) [ OK ]
# tcpdump: WARNING: tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
#
# tcpdump: data link type LINUX_SLL2
# tcpdump: data link type LINUX_SLL2
# tcpdump: tcpdump: listening on anylistening on any, link-type LINUX_SLL2 (Linux cooked v2), link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# , snapshot length 65535 bytes
# 216 packets captured216 packets captured
# 432 packets received by filter
# 432 packets received by filter
# 0 packets dropped by kernel
#
# 0 packets dropped by kernel
# 08 ns1 TCP -> ns1 (dead:beef:1::1:10005) MPTCP (duration 55ms) [ OK ]
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: WARNING: tcpdump: data link type LINUX_SLL2
# any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: data link type LINUX_SLL2
# tcpdump: tcpdump: listening on anylistening on any, link-type LINUX_SLL2 (Linux cooked v2), link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# , snapshot length 65535 bytes
# 246 packets captured246 packets captured
# 490 packets received by filter
# 0 packets dropped by kernel
#
# 490 packets received by filter
# 0 packets dropped by kernel
# 09 ns1 MPTCP -> ns2 (10.0.1.2:10006 ) MPTCP (duration 51ms) [ OK ]
# tcpdump: WARNING: tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
#
# tcpdump: data link type LINUX_SLL2
# tcpdump: data link type LINUX_SLL2
# tcpdump: tcpdump: listening on anylistening on any, link-type LINUX_SLL2 (Linux cooked v2), link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# , snapshot length 65535 bytes
# 389 packets captured389 packets captured
# 389 packets received by filter
#
# 0 packets dropped by kernel
# 389 packets received by filter
# 0 packets dropped by kernel
# 10 ns1 MPTCP -> ns2 (dead:beef:1::2:10007) MPTCP (duration 49ms) [ OK ]
# tcpdump: WARNING: tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
#
# tcpdump: data link type LINUX_SLL2
# tcpdump: data link type LINUX_SLL2
# tcpdump: tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2)listening on any, snapshot length 65535 bytes
# , link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# 399 packets captured399 packets captured
# 399 packets received by filter
# 0 packets dropped by kernel
#
# 399 packets received by filter
# 0 packets dropped by kernel
# 11 ns1 MPTCP -> ns2 (10.0.2.1:10008 ) MPTCP (duration 54ms) [ OK ]
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: data link type LINUX_SLL2
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: data link type LINUX_SLL2
# tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# 400 packets captured400 packets captured
# 400 packets received by filter
# 400 packets received by filter
#
# 0 packets dropped by kernel
# 0 packets dropped by kernel
# 12 ns1 MPTCP -> ns2 (dead:beef:2::1:10009) MPTCP (duration 54ms) [ OK ]
# tcpdump: WARNING: tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)tcpdump: data link type LINUX_SLL2
#
# tcpdump: data link type LINUX_SLL2
# tcpdump: tcpdump: listening on anylistening on any, link-type LINUX_SLL2 (Linux cooked v2), link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# , snapshot length 65535 bytes
# 412 packets captured
# 412 packets received by filter
# 0 packets dropped by kernel
# 412 packets captured
# 412 packets received by filter
# 0 packets dropped by kernel
# 13 ns1 MPTCP -> ns3 (10.0.2.2:10010 ) MPTCP (duration 51ms) [ OK ]
# tcpdump: WARNING: tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
#
# tcpdump: data link type LINUX_SLL2
# tcpdump: data link type LINUX_SLL2
# tcpdump: tcpdump: listening on anylistening on any, link-type LINUX_SLL2 (Linux cooked v2), link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# , snapshot length 65535 bytes
# 3405 packets captured
# 3405 packets received by filter
# 0 packets dropped by kernel
# 3409 packets captured
# 3409 packets received by filter
# 0 packets dropped by kernel
# 14 ns1 MPTCP -> ns3 (dead:beef:2::2:10011) MPTCP (duration 53ms) [ OK ]
# tcpdump: WARNING: tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
#
# tcpdump: data link type LINUX_SLL2
# tcpdump: data link type LINUX_SLL2
# tcpdump: tcpdump: listening on anylistening on any, link-type LINUX_SLL2 (Linux cooked v2), link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# , snapshot length 65535 bytes
# 3471 packets captured
# 3471 packets received by filter
# 0 packets dropped by kernel
# 3472 packets captured
# 3472 packets received by filter
# 0 packets dropped by kernel
# 15 ns1 MPTCP -> ns3 (10.0.3.2:10012 ) MPTCP (duration 55ms) [ OK ]
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: data link type LINUX_SLL2
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: data link type LINUX_SLL2
# tcpdump: tcpdump: listening on anylistening on any, link-type LINUX_SLL2 (Linux cooked v2), link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# , snapshot length 65535 bytes
# 3382 packets captured3382 packets captured
#
# 3382 packets received by filter3382 packets received by filter
#
# 0 packets dropped by kernel
# 0 packets dropped by kernel
# 16 ns1 MPTCP -> ns3 (dead:beef:3::2:10013) MPTCP (duration 61020ms) [FAIL] client exit code 124, server 124
#
# netns ns3-CLrvi9 (listener) socket stat for 10013:
# Failed to find cgroup2 mount
# Failed to find cgroup2 mount
# Failed to find cgroup2 mount
# Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
# tcp ESTAB 0 0 [dead:beef:3::2]:10013 [dead:beef:1::1]:40880 ino:1784823 sk:1 cgroup:unreachable:1 <->
# skmem:(r0,rb412270,t0,tb921600,f0,w0,o0,bl0,d0) ts sack cubic wscale:9,9 rto:201 rtt:0.274/0.017 ato:40 mss:1428 pmtu:1500 rcvmss:1404 advmss:1428 cwnd:10 bytes_sent:1405104 bytes_acked:1405104 bytes_received:1881388 segs_out:1079 segs_in:1520 data_segs_out:1016 data_segs_in:1354 send 416934307bps lastsnd:61848 lastrcv:62059 lastack:61848 pacing_rate 16669767440bps delivery_rate 4553622376bps delivered:1017 busy:2ms sndbuf_limited:1ms(50.0%) rcv_rtt:0.052 rcv_space:131072 rcv_ssthresh:354973 minrtt:0.005 rcv_ooopack:28 rcv_wnd:262144 tcp-ulp-mptcp flags:Mec token:0000(id:0)/b0c8040a(id:0) seq:16590065159408846425 sfseq:1870337 ssnoff:4151079956 maplen:11052
# mptcp LAST-ACK 0 0 [dead:beef:3::2]:10013 [dead:beef:1::1]:40880 timer:(keepalive,58sec,0) ino:0 sk:2 cgroup:unreachable:1 ---
# skmem:(r0,rb295633,t0,tb925696,f0,w950272,o0,bl0,d0) subflows_max:2 remote_key token:b0c8040a write_seq:3988646186081754782 snd_una:3988646186080814893 rcv_nxt:16590065159408857478 bytes_retrans:21384 bytes_sent:1383720 bytes_received:1881388 bytes_acked:1383720 subflows_total:1 last_data_sent:61849 last_data_recv:62067 last_ack_recv:61849
# TcpPassiveOpens 1 0.0
# TcpInSegs 222 0.0
# TcpOutSegs 1080 0.0
# TcpExtTCPPureAcks 147 0.0
# TcpExtTCPBacklogCoalesce 18 0.0
# TcpExtTCPOFOQueue 1 0.0
# TcpExtTCPFromZeroWindowAdv 16 0.0
# TcpExtTCPToZeroWindowAdv 17 0.0
# TcpExtTCPWantZeroWindowAdv 5 0.0
# TcpExtTCPOrigDataSent 1016 0.0
# TcpExtTCPDelivered 1016 0.0
# MPTcpExtMPCapableSYNRX 1 0.0
# MPTcpExtMPCapableACKRX 1 0.0
# MPTcpExtMPTCPRetrans 1 0.0
# MPTcpExtRcvWndShared 5 0.0
#
# netns ns1-gQQ13x (connector) socket stat for 10013:
# Failed to find cgroup2 mount
# Failed to find cgroup2 mount
# Failed to find cgroup2 mount
# Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
# tcp ESTAB 0 0 [dead:beef:1::1]:40880 [dead:beef:3::2]:10013 ino:1786090 sk:3 cgroup:unreachable:1 <->
# skmem:(r0,rb228877,t0,tb967680,f4096,w0,o0,bl0,d0) ts sack cubic wscale:9,9 rto:201 rtt:0.071/0.069 ato:40 mss:1428 pmtu:1500 rcvmss:1400 advmss:1428 cwnd:148 ssthresh:147 bytes_sent:1945972 bytes_retrans:64584 bytes_acked:1881389 bytes_received:1405104 segs_out:1568 segs_in:1080 data_segs_out:1400 data_segs_in:1016 send 23813408451bps lastsnd:62090 lastrcv:61879 lastack:61879 pacing_rate 28228207304bps delivery_rate 4154181816bps delivered:1355 busy:2ms rwnd_limited:1ms(50.0%) retrans:0/46 rcv_rtt:0.377 rcv_space:69736 rcv_ssthresh:138774 minrtt:0.003 snd_wnd:262144 rcv_wnd:71680 tcp-ulp-mptcp flags:Mmec token:0000(id:0)/4ef426b2(id:0) seq:3988646186080799933 sfseq:1390145 ssnoff:2685584681 maplen:14960
# mptcp FIN-WAIT-2 0 0 [dead:beef:1::1]:40880 [dead:beef:3::2]:10013 timer:(keepalive,58sec,0) ino:0 sk:4 cgroup:unreachable:1 ---
# skmem:(r0,rb228877,t0,tb971776,f0,w0,o0,bl0,d0) subflows_max:2 remote_key token:4ef426b2 write_seq:16590065159408857478 snd_una:16590065159408857478 rcv_nxt:3988646186080814893 bytes_sent:1881388 bytes_received:1383720 bytes_acked:1881389 subflows_total:1 last_data_sent:62091 last_data_recv:61880 last_ack_recv:61880
# TcpActiveOpens 1 0.0
# TcpInSegs 1080 0.0
# TcpOutSegs 1522 0.0
# TcpRetransSegs 46 0.0
# TcpExtTCPPureAcks 63 0.0
# TcpExtTCPSackRecovery 1 0.0
# TcpExtTCPFastRetrans 46 0.0
# TcpExtTCPSackShiftFallback 1 0.0
# TcpExtTCPFromZeroWindowAdv 11 0.0
# TcpExtTCPToZeroWindowAdv 11 0.0
# TcpExtTCPWantZeroWindowAdv 4 0.0
# TcpExtTCPOrigDataSent 1354 0.0
# TcpExtTCPHystartTrainDetect 1 0.0
# TcpExtTCPHystartTrainCwnd 210 0.0
# TcpExtTCPDelivered 1355 0.0
# MPTcpExtMPCapableSYNTX 1 0.0
# MPTcpExtMPCapableSYNACKRX 1 0.0
# MPTcpExtDuplicateData 16 0.0
# MPTcpExtRcvWndShared 91 0.0
#
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: data link type LINUX_SLL2
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: data link type LINUX_SLL2
# tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# 1306 packets captured1303 packets captured
# 1303 packets received by filter
# 0 packets dropped by kernel
#
# 1306 packets received by filter
# 0 packets dropped by kernel
# 17 ns1 MPTCP -> ns4 (10.0.3.1:10014 ) MPTCP (duration 60ms) [ OK ]
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: data link type LINUX_SLL2
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: data link type LINUX_SLL2
# tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# 465 packets captured
# 465 packets received by filter
# 0 packets dropped by kernel
# 3419 packets captured
# 3419 packets received by filter
# 0 packets dropped by kernel
# 18 ns1 MPTCP -> ns4 (dead:beef:3::1:10015) MPTCP (duration 56ms) [ OK ]
# tcpdump: WARNING: tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: data link type LINUX_SLL2
# tcpdump: data link type LINUX_SLL2
# tcpdump: tcpdump: listening on anylistening on any, link-type LINUX_SLL2 (Linux cooked v2), link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# , snapshot length 65535 bytes
# 3570 packets captured
# 3570 packets received by filter
# 0 packets dropped by kernel
# 545 packets captured
# 545 packets received by filter
# 0 packets dropped by kernel
# [FAIL] Tests with ns1-gQQ13x as a sender have failed
# 19 ns2 MPTCP -> ns1 (10.0.1.1:10016 ) MPTCP (duration 51ms) [ OK ]
# tcpdump: WARNING: tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)tcpdump: data link type LINUX_SLL2
#
# tcpdump: data link type LINUX_SLL2
# tcpdump: tcpdump: listening on anylistening on any, link-type LINUX_SLL2 (Linux cooked v2), link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# , snapshot length 65535 bytes
# 381 packets captured385 packets captured
# 385 packets received by filter
# 0 packets dropped by kernel
#
# 381 packets received by filter
# 0 packets dropped by kernel
# 20 ns2 MPTCP -> ns1 (dead:beef:1::1:10017) MPTCP (duration 52ms) [ OK ]
# tcpdump: WARNING: tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)tcpdump: data link type LINUX_SLL2
#
# tcpdump: data link type LINUX_SLL2
# tcpdump: listening on anytcpdump: , link-type LINUX_SLL2 (Linux cooked v2)listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# , snapshot length 65535 bytes
# 414 packets captured410 packets captured
# 410 packets received by filter
# 0 packets dropped by kernel
#
# 414 packets received by filter
# 0 packets dropped by kernel
# 21 ns2 MPTCP -> ns3 (10.0.2.2:10018 ) MPTCP (duration 65ms) [ OK ]
# tcpdump: WARNING: tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
#
# tcpdump: data link type LINUX_SLL2
# tcpdump: data link type LINUX_SLL2
# tcpdump: tcpdump: listening on anylistening on any, link-type LINUX_SLL2 (Linux cooked v2), link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# , snapshot length 65535 bytes
# 3398 packets captured3398 packets captured
# 3398 packets received by filter
# 3398 packets received by filter
# 0 packets dropped by kernel
#
# 0 packets dropped by kernel
# 22 ns2 MPTCP -> ns3 (dead:beef:2::2:10019) MPTCP (duration 59ms) [ OK ]
# 23 ns2 MPTCP -> ns3 (10.0.3.2:10020 ) MPTCP (duration 58ms) [ OK ]
# 24 ns2 MPTCP -> ns3 (dead:beef:3::2:10021) MPTCP (duration 51ms) [ OK ]
# 25 ns2 MPTCP -> ns4 (10.0.3.1:10022 ) MPTCP (duration 56ms) [ OK ]
# 26 ns2 MPTCP -> ns4 (dead:beef:3::1:10023) MPTCP (duration 264ms) [ OK ]
# 27 ns3 MPTCP -> ns1 (10.0.1.1:10024 ) MPTCP (duration 49ms) [ OK ]
# 28 ns3 MPTCP -> ns1 (dead:beef:1::1:10025) MPTCP (duration 54ms) [ OK ]
# 29 ns3 MPTCP -> ns2 (10.0.1.2:10026 ) MPTCP (duration 262ms) [ OK ]
# 30 ns3 MPTCP -> ns2 (dead:beef:1::2:10027) MPTCP (duration 49ms) [ OK ]
# 31 ns3 MPTCP -> ns2 (10.0.2.1:10028 ) MPTCP (duration 258ms) [ OK ]
# 32 ns3 MPTCP -> ns2 (dead:beef:2::1:10029) MPTCP (duration 49ms) [ OK ]
# 33 ns3 MPTCP -> ns4 (10.0.3.1:10030 ) MPTCP (duration 50ms) [ OK ]
# 34 ns3 MPTCP -> ns4 (dead:beef:3::1:10031) MPTCP (duration 51ms) [ OK ]
# 35 ns4 MPTCP -> ns1 (10.0.1.1:10032 ) MPTCP (duration 266ms) [ OK ]
# 36 ns4 MPTCP -> ns1 (dead:beef:1::1:10033) MPTCP (duration 61ms) [ OK ]
# 37 ns4 MPTCP -> ns2 (10.0.1.2:10034 ) MPTCP (duration 52ms) [ OK ]
# 38 ns4 MPTCP -> ns2 (dead:beef:1::2:10035) MPTCP (duration 54ms) [ OK ]
# 39 ns4 MPTCP -> ns2 (10.0.2.1:10036 ) MPTCP (duration 53ms) [ OK ]
# 40 ns4 MPTCP -> ns2 (dead:beef:2::1:10037) MPTCP (duration 59ms) [ OK ]
# 41 ns4 MPTCP -> ns3 (10.0.2.2:10038 ) MPTCP (duration 47ms) [ OK ]
# 42 ns4 MPTCP -> ns3 (dead:beef:2::2:10039) MPTCP (duration 51ms) [ OK ]
# 43 ns4 MPTCP -> ns3 (10.0.3.2:10040 ) MPTCP (duration 54ms) [ OK ]
# 44 ns4 MPTCP -> ns3 (dead:beef:3::2:10041) MPTCP (duration 51ms) [ OK ]
# INFO: with peek mode: saveWithPeek
# 45 ns1 MPTCP -> ns1 (10.0.1.1:10042 ) MPTCP (duration 53ms) [ OK ]
# 46 ns1 MPTCP -> ns1 (10.0.1.1:10043 ) TCP (duration 52ms) [ OK ]
# 47 ns1 TCP -> ns1 (10.0.1.1:10044 ) MPTCP (duration 51ms) [ OK ]
# 48 ns1 MPTCP -> ns1 (dead:beef:1::1:10045) MPTCP (duration 261ms) [ OK ]
# 49 ns1 MPTCP -> ns1 (dead:beef:1::1:10046) TCP (duration 52ms) [ OK ]
# 50 ns1 TCP -> ns1 (dead:beef:1::1:10047) MPTCP (duration 47ms) [ OK ]
# INFO: with peek mode: saveAfterPeek
# 51 ns1 MPTCP -> ns1 (10.0.1.1:10048 ) MPTCP (duration 50ms) [ OK ]
# 52 ns1 MPTCP -> ns1 (10.0.1.1:10049 ) TCP (duration 46ms) [ OK ]
# 53 ns1 TCP -> ns1 (10.0.1.1:10050 ) MPTCP (duration 48ms) [ OK ]
# 54 ns1 MPTCP -> ns1 (dead:beef:1::1:10051) MPTCP (duration 54ms) [ OK ]
# 55 ns1 MPTCP -> ns1 (dead:beef:1::1:10052) TCP (duration 57ms) [ OK ]
# 56 ns1 TCP -> ns1 (dead:beef:1::1:10053) MPTCP (duration 49ms) [ OK ]
# INFO: with MPTFO start
# 57 ns2 MPTCP -> ns1 (10.0.1.1:10054 ) MPTCP (duration 49ms) [ OK ]
# 58 ns2 MPTCP -> ns1 (10.0.1.1:10055 ) MPTCP (duration 677ms) [ OK ]
# 59 ns2 MPTCP -> ns1 (dead:beef:1::1:10056) MPTCP (duration 52ms) [ OK ]
# 60 ns2 MPTCP -> ns1 (dead:beef:1::1:10057) MPTCP (duration 473ms) [ OK ]
# INFO: with MPTFO end
# INFO: test tproxy ipv4
# 61 ns1 MPTCP -> ns2 (10.0.3.1:20000 ) MPTCP (duration 49ms) [ OK ]
# INFO: tproxy ipv4 pass
# INFO: test tproxy ipv6
# 62 ns1 MPTCP -> ns2 (dead:beef:3::1:20000) MPTCP (duration 50ms) [ OK ]
# INFO: tproxy ipv6 pass
# INFO: disconnect
# 63 ns1 MPTCP -> ns1 (10.0.1.1:20001 ) MPTCP (duration 317ms) [ OK ]
# 64 ns1 MPTCP -> ns1 (10.0.1.1:20002 ) TCP (duration 781ms) [ OK ]
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: data link type LINUX_SLL2
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: data link type LINUX_SLL2
# tcpdump: tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2)listening on any, snapshot length 65535 bytes
# , link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# 409 packets captured
# 409 packets captured
# 818 packets received by filter
# 0 packets dropped by kernel
#
# 818 packets received by filter
# 0 packets dropped by kernel
# 65 ns1 TCP -> ns1 (10.0.1.1:20003 ) MPTCP (duration 99ms) [ OK ]
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: data link type LINUX_SLL2
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: data link type LINUX_SLL2
# tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# 440 packets captured
# 880 packets received by filter
# 440 packets captured
# 0 packets dropped by kernel
# 880 packets received by filter
#
# 0 packets dropped by kernel
# 66 ns1 MPTCP -> ns1 (dead:beef:1::1:20004) MPTCP (duration 104ms) [ OK ]
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
#
# tcpdump: data link type LINUX_SLL2
# tcpdump: data link type LINUX_SLL2
# tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# 526 packets captured
# 526 packets captured
# 1052 packets received by filter
# 0 packets dropped by kernel
#
# 1052 packets received by filter
# 0 packets dropped by kernel
# 67 ns1 MPTCP -> ns1 (dead:beef:1::1:20005) TCP (duration 774ms) [ OK ]
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: data link type LINUX_SLL2
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: data link type LINUX_SLL2
# tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# 393 packets captured
# 393 packets captured
# 786 packets received by filter
# 786 packets received by filter
# 0 packets dropped by kernel
# 0 packets dropped by kernel
# 68 ns1 TCP -> ns1 (dead:beef:1::1:20006) MPTCP (duration 104ms) [ OK ]
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: data link type LINUX_SLL2
# tcpdump: WARNING: any: That device doesn't support promiscuous mode
# (Promiscuous mode not supported on the "any" device)
# tcpdump: data link type LINUX_SLL2
# tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# tcpdump: listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 65535 bytes
# 439 packets captured
# 439 packets captured
# 878 packets received by filter
# 0 packets dropped by kernel
#
# 878 packets received by filter
# 0 packets dropped by kernel
# Time: 230 seconds
not ok 1 test: selftest_mptcp_connect_splice # FAIL
# time=231
=== ERROR after 111 attempts (Mon, 13 Oct 2025 08:06:00 +0000) ===
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing
2025-10-13 9:07 ` Geliang Tang
@ 2025-10-13 13:29 ` Paolo Abeni
2025-10-13 17:07 ` Paolo Abeni
2025-10-15 9:00 ` Paolo Abeni
1 sibling, 1 reply; 33+ messages in thread
From: Paolo Abeni @ 2025-10-13 13:29 UTC (permalink / raw)
To: Geliang Tang, Matthieu Baerts, mptcp
Hi,
On 10/13/25 11:07 AM, Geliang Tang wrote:
> On Fri, 2025-10-10 at 20:22 +0800, Geliang Tang wrote:
>> On Fri, 2025-10-10 at 10:21 +0200, Paolo Abeni wrote:
>>> - Can you please share a pcap capture _and_ the selftest text
>>> output
>>> for
>>> the same failing test?
>
> The pcap captures (gQQ13x-ns1-ns3-MPTCP-MPTCP-dead:beef:3::2-10013-
> connector.pcap, gQQ13x-ns1-ns3-MPTCP-MPTCP-dead:beef:3::2-10013-
> listener.pcap) and the selftest text output (selftest_output) are
> attached.
I'm possibly low on coffee, but I see a single attachment here?
/P
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing
2025-10-13 13:29 ` Paolo Abeni
@ 2025-10-13 17:07 ` Paolo Abeni
0 siblings, 0 replies; 33+ messages in thread
From: Paolo Abeni @ 2025-10-13 17:07 UTC (permalink / raw)
To: Geliang Tang, Matthieu Baerts, mptcp
On 10/13/25 3:29 PM, Paolo Abeni wrote:
> On 10/13/25 11:07 AM, Geliang Tang wrote:
>> On Fri, 2025-10-10 at 20:22 +0800, Geliang Tang wrote:
>>> On Fri, 2025-10-10 at 10:21 +0200, Paolo Abeni wrote:
>>>> - Can you please share a pcap capture _and_ the selftest text
>>>> output
>>>> for
>>>> the same failing test?
>>
>> The pcap captures (gQQ13x-ns1-ns3-MPTCP-MPTCP-dead:beef:3::2-10013-
>> connector.pcap, gQQ13x-ns1-ns3-MPTCP-MPTCP-dead:beef:3::2-10013-
>> listener.pcap) and the selftest text output (selftest_output) are
>> attached.
>
> I'm possibly low on coffee, but I see a single attachment here?
Please ignore my previous message; PEBKAC here somehow.
/P
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing
2025-10-13 9:07 ` Geliang Tang
2025-10-13 13:29 ` Paolo Abeni
@ 2025-10-15 9:00 ` Paolo Abeni
2025-10-17 6:38 ` Geliang Tang
1 sibling, 1 reply; 33+ messages in thread
From: Paolo Abeni @ 2025-10-15 9:00 UTC (permalink / raw)
To: Geliang Tang, Matthieu Baerts, mptcp
On 10/13/25 11:07 AM, Geliang Tang wrote:
> On Fri, 2025-10-10 at 20:22 +0800, Geliang Tang wrote:
>> Hi Paolo,
>>
>> On Fri, 2025-10-10 at 10:21 +0200, Paolo Abeni wrote:
>>> On 10/9/25 3:58 PM, Paolo Abeni wrote:
>>>> @Geliang: if you reproduce the issue multiple times, are there
>>>> any
>>>> common patterns ? i.e. sender files considerably larger than the
>>>> client
>>>> one, or only a specific subsets of all the test-cases failing, or
>>>> ...
>>>
>>> Other questions:
>>> - Can you please share your setup details (VM vs baremetal, debug
>>> config
>>> vs non debug, vmg vs plain qemu, number of [v]cores...)? I can't
>>> repro
>>> the issue locally.
>>
>> Here are my modifications:
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/geliang/mptcp_net-next.git/log/?h=splice_new
>>
>> I used mptcp-upstream-virtme-docker normal config to reproduce it:
>>
>> docker run \
>> -e INPUT_NO_BLOCK=1 \
>> -e INPUT_PACKETDRILL_NO_SYNC=1 \
>> -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
>> --pull always ghcr.io/multipath-tcp/mptcp-upstream-virtme-
>> docker:latest \
>> auto-normal
>>
>> $ cat .virtme-exec-run
>> run_loop run_selftest_one ./mptcp_connect_splice.sh
>>
>> Running mptcp_connect_splice.sh in a loop dozens of times should
>> reproduce the test failure.
>>
>>> - Can you please share a pcap capture _and_ the selftest text
>>> output
>>> for
>>> the same failing test?
>
> The pcap captures (gQQ13x-ns1-ns3-MPTCP-MPTCP-dead:beef:3::2-10013-
> connector.pcap, gQQ13x-ns1-ns3-MPTCP-MPTCP-dead:beef:3::2-10013-
> listener.pcap) and the selftest text output (selftest_output) are
> attached.
Looks like the 'stuck' scenario is quite consistent. The receiver filled
its receive window and sent an ack shortly after when re-opening, but
the sender did not react to that ack.
The perf instrumentation I mentioned would be very useful. I tried to
capture it myself, but so far I failed: the repro ran for several
hundred iterations without issues and then podman got stuck (a podman bug
apparently, or local resources exhausted).
Did you have better luck collecting the perf trace?
/P
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing
2025-10-15 9:00 ` Paolo Abeni
@ 2025-10-17 6:38 ` Geliang Tang
2025-10-18 0:16 ` Mat Martineau
0 siblings, 1 reply; 33+ messages in thread
From: Geliang Tang @ 2025-10-17 6:38 UTC (permalink / raw)
To: Paolo Abeni, Matthieu Baerts, Mat Martineau, mptcp
Hi Paolo, Matt, Mat,
On Wed, 2025-10-15 at 11:00 +0200, Paolo Abeni wrote:
> On 10/13/25 11:07 AM, Geliang Tang wrote:
> > On Fri, 2025-10-10 at 20:22 +0800, Geliang Tang wrote:
> > > Hi Paolo,
> > >
> > > On Fri, 2025-10-10 at 10:21 +0200, Paolo Abeni wrote:
> > > > On 10/9/25 3:58 PM, Paolo Abeni wrote:
> > > > > @Geliang: if you reproduce the issue multiple times, are
> > > > > there
> > > > > any
> > > > > common patterns ? i.e. sender files considerably larger than
> > > > > the
> > > > > client
> > > > > one, or only a specific subsets of all the test-cases
> > > > > failing, or
> > > > > ...
> > > >
> > > > Other questions:
> > > > - Can you please share your setup details (VM vs baremetal,
> > > > debug
> > > > config
> > > > vs non debug, vmg vs plain qemu, number of [v]cores...)? I
> > > > can't
> > > > repro
> > > > the issue locally.
> > >
> > > Here are my modifications:
> > >
> > > https://git.kernel.org/pub/scm/linux/kernel/git/geliang/mptcp_net-next.git/log/?h=splice_new
> > >
> > > I used mptcp-upstream-virtme-docker normal config to reproduce
> > > it:
> > >
> > > docker run \
> > > -e INPUT_NO_BLOCK=1 \
> > > -e INPUT_PACKETDRILL_NO_SYNC=1 \
> > > -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it
> > > \
> > > --pull always ghcr.io/multipath-tcp/mptcp-upstream-
> > > virtme-
> > > docker:latest \
> > > auto-normal
> > >
> > > $ cat .virtme-exec-run
> > > run_loop run_selftest_one ./mptcp_connect_splice.sh
> > >
> > > Running mptcp_connect_splice.sh in a loop dozens of times should
> > > reproduce the test failure.
> > >
> > > > - Can you please share a pcap capture _and_ the selftest text
> > > > output
> > > > for
> > > > the same failing test?
> >
> > The pcap captures (gQQ13x-ns1-ns3-MPTCP-MPTCP-dead:beef:3::2-10013-
> > connector.pcap, gQQ13x-ns1-ns3-MPTCP-MPTCP-dead:beef:3::2-10013-
> > listener.pcap) and the selftest text output (selftest_output) are
> > attached.
>
> Looks like the 'stuck' scenario is quite consistent. The receiver
> filled its receive window and sent an ack shortly after when
> re-opening, but the sender did not react to that ack.
>
> The perf instrumentation I mentioned would be very useful. I tried to
> capture it myself, but so far I failed: the repro ran for several
> hundred iterations without issues and then podman got stuck (a podman
> bug apparently, or local resources exhausted).
>
> Did you have better luck collecting the perf trace?
Sorry, I haven't made any progress yet. Please give me some more time.
I was thinking, since this issue only occurs during the splice test,
let's move the discussion to the future "implement mptcp read_sock and
splice" series. We shouldn't let it block the merging of this current
series.
I don't have any further constructive review comments on patches 9 and
10. I'm wondering if we should get input from Matt and Mat.
Thanks,
-Geliang
>
> /P
>
>
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing
2025-10-17 6:38 ` Geliang Tang
@ 2025-10-18 0:16 ` Mat Martineau
0 siblings, 0 replies; 33+ messages in thread
From: Mat Martineau @ 2025-10-18 0:16 UTC (permalink / raw)
To: Geliang Tang; +Cc: Paolo Abeni, Matthieu Baerts, mptcp
[-- Attachment #1: Type: text/plain, Size: 3030 bytes --]
On Fri, 17 Oct 2025, Geliang Tang wrote:
> Hi Paolo, Matt, Mat,
>
> On Wed, 2025-10-15 at 11:00 +0200, Paolo Abeni wrote:
>> On 10/13/25 11:07 AM, Geliang Tang wrote:
>>> On Fri, 2025-10-10 at 20:22 +0800, Geliang Tang wrote:
>>>> Hi Paolo,
>>>>
>>>> On Fri, 2025-10-10 at 10:21 +0200, Paolo Abeni wrote:
>>>>> On 10/9/25 3:58 PM, Paolo Abeni wrote:
>>>>>> @Geliang: if you reproduce the issue multiple times, are
>>>>>> there
>>>>>> any
>>>>>> common patterns ? i.e. sender files considerably larger than
>>>>>> the
>>>>>> client
>>>>>> one, or only a specific subsets of all the test-cases
>>>>>> failing, or
>>>>>> ...
>>>>>
>>>>> Other questions:
>>>>> - Can you please share your setup details (VM vs baremetal,
>>>>> debug
>>>>> config
>>>>> vs non debug, vmg vs plain qemu, number of [v]cores...)? I
>>>>> can't
>>>>> repro
>>>>> the issue locally.
>>>>
>>>> Here are my modifications:
>>>>
>>>> https://git.kernel.org/pub/scm/linux/kernel/git/geliang/mptcp_net-next.git/log/?h=splice_new
>>>>
>>>> I used mptcp-upstream-virtme-docker normal config to reproduce
>>>> it:
>>>>
>>>> docker run \
>>>> -e INPUT_NO_BLOCK=1 \
>>>> -e INPUT_PACKETDRILL_NO_SYNC=1 \
>>>> -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it
>>>> \
>>>> --pull always ghcr.io/multipath-tcp/mptcp-upstream-
>>>> virtme-
>>>> docker:latest \
>>>> auto-normal
>>>>
>>>> $ cat .virtme-exec-run
>>>> run_loop run_selftest_one ./mptcp_connect_splice.sh
>>>>
>>>> Running mptcp_connect_splice.sh in a loop dozens of times should
>>>> reproduce the test failure.
>>>>
>>>>> - Can you please share a pcap capture _and_ the selftest text
>>>>> output
>>>>> for
>>>>> the same failing test?
>>>
>>> The pcap captures (gQQ13x-ns1-ns3-MPTCP-MPTCP-dead:beef:3::2-10013-
>>> connector.pcap, gQQ13x-ns1-ns3-MPTCP-MPTCP-dead:beef:3::2-10013-
>>> listener.pcap) and the selftest text output (selftest_output) are
>>> attached.
>>
>> Looks like the 'stuck' scenario is quite consistent. The receiver
>> filled its receive window and sent an ack shortly after when
>> re-opening, but the sender did not react to that ack.
>>
>> The perf instrumentation I mentioned would be very useful. I tried to
>> capture it myself, but so far I failed: the repro ran for several
>> hundred iterations without issues and then podman got stuck (a podman
>> bug apparently, or local resources exhausted).
>>
>> Did you have better luck collecting the perf trace?
>
> Sorry, I haven't made any progress yet. Please give me some more time.
>
>
> I was thinking, since this issue only occurs during the splice test,
> let's move the discussion to the future "implement mptcp read_sock and
> splice" series. We shouldn't let it block the merging of this current
> series.
>
> I don't have any further constructive review comments on patches 9 and
> 10. I'm wondering if we should get input from Matt and Mat.
>
I am planning to take a close look at 9 & 10 early next week, would like
to understand the new backlog rx path. Sorry for the delay!
- Mat
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v5 mptcp-next 09/10] mptcp: introduce mptcp-level backlog
2025-10-06 8:12 ` [PATCH v5 mptcp-next 09/10] mptcp: introduce mptcp-level backlog Paolo Abeni
2025-10-08 3:09 ` Geliang Tang
@ 2025-10-20 19:45 ` Mat Martineau
1 sibling, 0 replies; 33+ messages in thread
From: Mat Martineau @ 2025-10-20 19:45 UTC (permalink / raw)
To: Paolo Abeni; +Cc: mptcp
On Mon, 6 Oct 2025, Paolo Abeni wrote:
> We will soon use the mptcp-level backlog for incoming data processing.
> MPTCP can't leverage the sk_backlog, as the latter is processed
> before the release callback, and the MPTCP release callback releases
> and re-acquires the socket spinlock, breaking the sk_backlog processing
> assumption.
>
> Add a skb backlog list inside the mptcp sock struct, and implement
> basic helpers to transfer packets to and purge such a list.
>
> Packets in the backlog are not memory accounted, but still use the
> incoming subflow receive memory, to allow back-pressure.
>
> No packet is currently added to the backlog, so no functional changes
> intended here.
>
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> --
> v4 -> v5:
> - split out of the next path, to make the latter smaller
> - set a custom destructor for skbs in the backlog, this avoid
> duplicate code, and fix a few places where the need ssk cleanup
> was not performed.
> - factor out the backlog purge in a new helper,
> use spinlock protection, clear the backlog list and zero the
> backlog len
> - explicitly init the backlog_len at mptcp_init_sock() time
> ---
> net/mptcp/protocol.c | 70 +++++++++++++++++++++++++++++++++++++++++---
> net/mptcp/protocol.h | 4 +++
> 2 files changed, 70 insertions(+), 4 deletions(-)
>
> diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
> index 05ee6bd26b7fa..2d5d3da67d1ac 100644
> --- a/net/mptcp/protocol.c
> +++ b/net/mptcp/protocol.c
> @@ -337,6 +337,11 @@ static void mptcp_data_queue_ofo(struct mptcp_sock *msk, struct sk_buff *skb)
> mptcp_rcvbuf_grow(sk);
> }
>
> +static void mptcp_bl_free(struct sk_buff *skb)
> +{
> + atomic_sub(skb->truesize, &skb->sk->sk_rmem_alloc);
> +}
> +
> static int mptcp_init_skb(struct sock *ssk,
> struct sk_buff *skb, int offset, int copy_len)
> {
> @@ -360,7 +365,7 @@ static int mptcp_init_skb(struct sock *ssk,
> skb_dst_drop(skb);
>
> /* "borrow" the fwd memory from the subflow, instead of reclaiming it */
> - skb->destructor = NULL;
> + skb->destructor = mptcp_bl_free;
> borrowed = ssk->sk_forward_alloc - sk_unused_reserved_mem(ssk);
> borrowed &= ~(PAGE_SIZE - 1);
> sk_forward_alloc_add(ssk, skb->truesize - borrowed);
> @@ -373,6 +378,13 @@ static bool __mptcp_move_skb(struct sock *sk, struct sk_buff *skb)
> struct mptcp_sock *msk = mptcp_sk(sk);
> struct sk_buff *tail;
>
> + /* Avoid the indirect call overhead, we know destructor is
> + * mptcp_bl_free at this point.
> + */
> + atomic_sub(skb->truesize, &skb->sk->sk_rmem_alloc);
Hi Paolo -
Better (very slightly :) ) to make the direct call to mptcp_bl_free()
then? The optimizer would inline it.
> + skb->sk = NULL;
> + skb->destructor = NULL;
> +
> /* try to fetch required memory from subflow */
> if (!sk_rmem_schedule(sk, skb, skb->truesize)) {
> MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RCVPRUNED);
> @@ -654,6 +666,35 @@ static void mptcp_dss_corruption(struct mptcp_sock *msk, struct sock *ssk)
> }
> }
>
> +static void __mptcp_add_backlog(struct sock *sk, struct sk_buff *skb)
> +{
> + struct mptcp_sock *msk = mptcp_sk(sk);
> + struct sk_buff *tail = NULL;
> + bool fragstolen;
> + int delta;
> +
> + if (unlikely(sk->sk_state == TCP_CLOSE)) {
> + kfree_skb_reason(skb, SKB_DROP_REASON_SOCKET_CLOSE);
> + return;
> + }
> +
> + /* Try to coalesce with the last skb in our backlog */
> + if (!list_empty(&msk->backlog_list))
> + tail = list_last_entry(&msk->backlog_list, struct sk_buff, list);
> +
> + if (tail && MPTCP_SKB_CB(skb)->map_seq == MPTCP_SKB_CB(tail)->end_seq &&
> + skb->sk == tail->sk &&
> + __mptcp_try_coalesce(sk, tail, skb, &fragstolen, &delta)) {
> + skb->truesize -= delta;
> + kfree_skb_partial(skb, fragstolen);
> + WRITE_ONCE(msk->backlog_len, msk->backlog_len + delta);
> + return;
> + }
> +
> + list_add_tail(&skb->list, &msk->backlog_list);
> + WRITE_ONCE(msk->backlog_len, msk->backlog_len + skb->truesize);
> +}
> +
> static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
> struct sock *ssk)
> {
> @@ -701,10 +742,12 @@ static bool __mptcp_move_skbs_from_subflow(struct mptcp_sock *msk,
> int bmem;
>
> bmem = mptcp_init_skb(ssk, skb, offset, len);
> - skb->sk = NULL;
> sk_forward_alloc_add(sk, bmem);
> - atomic_sub(skb->truesize, &ssk->sk_rmem_alloc);
> - ret = __mptcp_move_skb(sk, skb) || ret;
> +
> + if (true)
> + ret |= __mptcp_move_skb(sk, skb);
> + else
> + __mptcp_add_backlog(sk, skb);
> seq += len;
>
> if (unlikely(map_remaining < len)) {
> @@ -2753,12 +2796,28 @@ static void mptcp_mp_fail_no_response(struct mptcp_sock *msk)
> unlock_sock_fast(ssk, slow);
> }
>
> +static void mptcp_backlog_purge(struct sock *sk)
> +{
> + struct mptcp_sock *msk = mptcp_sk(sk);
> + struct sk_buff *tmp, *skb;
> + LIST_HEAD(backlog);
> +
> + mptcp_data_lock(sk);
> + list_splice_init(&msk->backlog_list, &backlog);
> + msk->backlog_len = 0;
> + mptcp_data_unlock(sk);
> +
> + list_for_each_entry_safe(skb, tmp, &backlog, list)
> + kfree_skb_reason(skb, SKB_DROP_REASON_SOCKET_CLOSE);
> +}
> +
> static void mptcp_do_fastclose(struct sock *sk)
> {
> struct mptcp_subflow_context *subflow, *tmp;
> struct mptcp_sock *msk = mptcp_sk(sk);
>
> mptcp_set_state(sk, TCP_CLOSE);
> + mptcp_backlog_purge(sk);
Should mptcp_backlog_purge() also be called in mptcp_check_fastclose()?
- Mat
> mptcp_for_each_subflow_safe(msk, subflow, tmp)
> __mptcp_close_ssk(sk, mptcp_subflow_tcp_sock(subflow),
> subflow, MPTCP_CF_FASTCLOSE);
> @@ -2816,11 +2875,13 @@ static void __mptcp_init_sock(struct sock *sk)
> INIT_LIST_HEAD(&msk->conn_list);
> INIT_LIST_HEAD(&msk->join_list);
> INIT_LIST_HEAD(&msk->rtx_queue);
> + INIT_LIST_HEAD(&msk->backlog_list);
> INIT_WORK(&msk->work, mptcp_worker);
> msk->out_of_order_queue = RB_ROOT;
> msk->first_pending = NULL;
> msk->timer_ival = TCP_RTO_MIN;
> msk->scaling_ratio = TCP_DEFAULT_SCALING_RATIO;
> + msk->backlog_len = 0;
>
> WRITE_ONCE(msk->first, NULL);
> inet_csk(sk)->icsk_sync_mss = mptcp_sync_mss;
> @@ -3197,6 +3258,7 @@ static void mptcp_destroy_common(struct mptcp_sock *msk, unsigned int flags)
> struct sock *sk = (struct sock *)msk;
>
> __mptcp_clear_xmit(sk);
> + mptcp_backlog_purge(sk);
>
> /* join list will be eventually flushed (with rst) at sock lock release time */
> mptcp_for_each_subflow_safe(msk, subflow, tmp)
> diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
> index 46d8432c72ee7..a21c4955f4cfb 100644
> --- a/net/mptcp/protocol.h
> +++ b/net/mptcp/protocol.h
> @@ -358,6 +358,9 @@ struct mptcp_sock {
> * allow_infinite_fallback and
> * allow_join
> */
> +
> + struct list_head backlog_list; /*protected by the data lock */
> + u32 backlog_len;
> };
>
> #define mptcp_data_lock(sk) spin_lock_bh(&(sk)->sk_lock.slock)
> @@ -408,6 +411,7 @@ static inline int mptcp_space_from_win(const struct sock *sk, int win)
> static inline int __mptcp_space(const struct sock *sk)
> {
> return mptcp_win_from_space(sk, READ_ONCE(sk->sk_rcvbuf) -
> + READ_ONCE(mptcp_sk(sk)->backlog_len) -
> sk_rmem_alloc_get(sk));
> }
>
> --
> 2.51.0
>
>
>
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v5 mptcp-next 10/10] mptcp: leverage the backlog for RX packet processing
2025-10-06 8:12 ` [PATCH v5 mptcp-next 10/10] mptcp: leverage the backlog for RX packet processing Paolo Abeni
@ 2025-10-20 23:32 ` Mat Martineau
2025-10-21 17:21 ` Paolo Abeni
0 siblings, 1 reply; 33+ messages in thread
From: Mat Martineau @ 2025-10-20 23:32 UTC (permalink / raw)
To: Paolo Abeni; +Cc: mptcp
On Mon, 6 Oct 2025, Paolo Abeni wrote:
> When the msk socket is owned or the msk receive buffer is full,
> move the incoming skbs to a msk-level backlog list. This avoids
> traversing the joined subflows and acquiring the subflow-level
> socket lock at reception time, improving the RX performance.
>
> When processing the backlog, use the fwd alloc memory borrowed from
> the incoming subflow. skbs exceeding the msk receive space are
> not dropped; instead they are kept in the backlog until the receive
> buffer is freed. Dropping packets already acked at the TCP level is
> explicitly discouraged by the RFC and would corrupt the data stream
> for fallback sockets.
>
> Special care is needed to avoid adding skbs to the backlog of a closed
> msk, and to avoid leaving dangling references into the backlog
> at subflow closing time.
>
> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
> ---
> v4 -> v5:
> - consolidate ssk rcvbuf accounting in __mptcp_move_skb(), removing
> some code duplication
> - return early in __mptcp_add_backlog() when dropping skbs due to
> the msk being closed. This avoids a later UaF
> ---
> net/mptcp/protocol.c | 137 ++++++++++++++++++++++++-------------------
> net/mptcp/protocol.h | 2 +-
> 2 files changed, 79 insertions(+), 60 deletions(-)
>
> diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
> index 2d5d3da67d1ac..a97a92eccc502 100644
> --- a/net/mptcp/protocol.c
> +++ b/net/mptcp/protocol.c
...
> @@ -3509,23 +3519,29 @@ void __mptcp_check_push(struct sock *sk, struct sock *ssk)
>
> #define MPTCP_FLAGS_PROCESS_CTX_NEED (BIT(MPTCP_PUSH_PENDING) | \
> BIT(MPTCP_RETRANSMIT) | \
> - BIT(MPTCP_FLUSH_JOIN_LIST) | \
> - BIT(MPTCP_DEQUEUE))
> + BIT(MPTCP_FLUSH_JOIN_LIST))
>
> /* processes deferred events and flush wmem */
> static void mptcp_release_cb(struct sock *sk)
> __must_hold(&sk->sk_lock.slock)
> {
> struct mptcp_sock *msk = mptcp_sk(sk);
> + u32 delta = 0;
>
> for (;;) {
> unsigned long flags = (msk->cb_flags & MPTCP_FLAGS_PROCESS_CTX_NEED);
> - struct list_head join_list;
> + LIST_HEAD(join_list);
> + LIST_HEAD(skbs);
> +
> + sk_forward_alloc_add(sk, msk->borrowed_mem);
> + msk->borrowed_mem = 0;
> +
> + if (sk_rmem_alloc_get(sk) < sk->sk_rcvbuf)
> + list_splice_init(&msk->backlog_list, &skbs);
>
> - if (!flags)
> + if (!flags && list_empty(&skbs))
> break;
>
> - INIT_LIST_HEAD(&join_list);
> list_splice_init(&msk->join_list, &join_list);
>
> /* the following actions acquire the subflow socket lock
> @@ -3544,7 +3560,8 @@ static void mptcp_release_cb(struct sock *sk)
> __mptcp_push_pending(sk, 0);
> if (flags & BIT(MPTCP_RETRANSMIT))
> __mptcp_retrans(sk);
> - if ((flags & BIT(MPTCP_DEQUEUE)) && __mptcp_move_skbs(sk)) {
> + if (!list_empty(&skbs) &&
> + __mptcp_move_skbs(sk, &skbs, &delta)) {
> /* notify ack seq update */
> mptcp_cleanup_rbuf(msk, 0);
> sk->sk_data_ready(sk);
> @@ -3552,7 +3569,9 @@ static void mptcp_release_cb(struct sock *sk)
>
> cond_resched();
> spin_lock_bh(&sk->sk_lock.slock);
> + list_splice(&skbs, &msk->backlog_list);
> }
> + WRITE_ONCE(msk->backlog_len, msk->backlog_len - delta);
Hi Paolo -
Given the possible multiple calls to __mptcp_move_skbs() and that the
spinlock is released/reacquired (and the cond_resched) in the middle,
would it make sense to update msk->backlog_len for each iteration of the
loop so __mptcp_space() and mptcp_space() don't under-report available
space and mptcp_cleanup_rbuf() can make incremental progress?
I know we don't want to WRITE_ONCE() more than necessary, but it seems
like there won't typically be more than one loop iteration. In the cases
where it does repeat the loop that means data is arriving quickly and
reporting mptcp_space accurately will be important.
- Mat
>
> if (__test_and_clear_bit(MPTCP_CLEAN_UNA, &msk->cb_flags))
> __mptcp_clean_una_wakeup(sk);
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v5 mptcp-next 10/10] mptcp: leverage the backlog for RX packet processing
2025-10-20 23:32 ` Mat Martineau
@ 2025-10-21 17:21 ` Paolo Abeni
2025-10-21 23:53 ` Mat Martineau
0 siblings, 1 reply; 33+ messages in thread
From: Paolo Abeni @ 2025-10-21 17:21 UTC (permalink / raw)
To: Mat Martineau; +Cc: mptcp
On 10/21/25 1:32 AM, Mat Martineau wrote:
> On Mon, 6 Oct 2025, Paolo Abeni wrote:
>> diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
>> index 2d5d3da67d1ac..a97a92eccc502 100644
>> --- a/net/mptcp/protocol.c
>> +++ b/net/mptcp/protocol.c
>
> ...
>
>> @@ -3509,23 +3519,29 @@ void __mptcp_check_push(struct sock *sk, struct sock *ssk)
>>
>> #define MPTCP_FLAGS_PROCESS_CTX_NEED (BIT(MPTCP_PUSH_PENDING) | \
>> BIT(MPTCP_RETRANSMIT) | \
>> - BIT(MPTCP_FLUSH_JOIN_LIST) | \
>> - BIT(MPTCP_DEQUEUE))
>> + BIT(MPTCP_FLUSH_JOIN_LIST))
>>
>> /* processes deferred events and flush wmem */
>> static void mptcp_release_cb(struct sock *sk)
>> __must_hold(&sk->sk_lock.slock)
>> {
>> struct mptcp_sock *msk = mptcp_sk(sk);
>> + u32 delta = 0;
>>
>> for (;;) {
>> unsigned long flags = (msk->cb_flags & MPTCP_FLAGS_PROCESS_CTX_NEED);
>> - struct list_head join_list;
>> + LIST_HEAD(join_list);
>> + LIST_HEAD(skbs);
>> +
>> + sk_forward_alloc_add(sk, msk->borrowed_mem);
>> + msk->borrowed_mem = 0;
>> +
>> + if (sk_rmem_alloc_get(sk) < sk->sk_rcvbuf)
>> + list_splice_init(&msk->backlog_list, &skbs);
>>
>> - if (!flags)
>> + if (!flags && list_empty(&skbs))
>> break;
>>
>> - INIT_LIST_HEAD(&join_list);
>> list_splice_init(&msk->join_list, &join_list);
>>
>> /* the following actions acquire the subflow socket lock
>> @@ -3544,7 +3560,8 @@ static void mptcp_release_cb(struct sock *sk)
>> __mptcp_push_pending(sk, 0);
>> if (flags & BIT(MPTCP_RETRANSMIT))
>> __mptcp_retrans(sk);
>> - if ((flags & BIT(MPTCP_DEQUEUE)) && __mptcp_move_skbs(sk)) {
>> + if (!list_empty(&skbs) &&
>> + __mptcp_move_skbs(sk, &skbs, &delta)) {
>> /* notify ack seq update */
>> mptcp_cleanup_rbuf(msk, 0);
>> sk->sk_data_ready(sk);
>> @@ -3552,7 +3569,9 @@ static void mptcp_release_cb(struct sock *sk)
>>
>> cond_resched();
>> spin_lock_bh(&sk->sk_lock.slock);
>> + list_splice(&skbs, &msk->backlog_list);
>> }
>> + WRITE_ONCE(msk->backlog_len, msk->backlog_len - delta);
>
> Hi Paolo -
>
> Given the possible multiple calls to __mptcp_move_skbs() and that the
> spinlock is released/reacquired (and the cond_resched) in the middle,
> would it make sense to update msk->backlog_len for each iteration of the
> loop so __mptcp_space() and mptcp_space() don't under-report available
> space and mptcp_cleanup_rbuf() can make incremental progress?
>
> I know we don't want to WRITE_ONCE() more than necessary, but it seems
> like there won't typically be more than one loop iteration. In the cases
> where it does repeat the loop that means data is arriving quickly and
> reporting mptcp_space accurately will be important.
That WRITE_ONCE() is intentionally out of the loop, not as an
optimization but for a functional goal, similar to:
https://elixir.bootlin.com/linux/v6.17.4/source/net/core/sock.c#L3190
Without it, in exceptional situations, the loop could run for an
unbounded amount of time.
Given this is MPTCP-level, and the packets already went through the TCP
subflow, such a scenario is possibly even more unlikely, but I think it's
still possible and serious enough that we want to avoid it.
WRT the receive buffer utilization, it should not change much: whether
on the backlog or in the receive buffer, the skb is accounted for its
whole truesize.
Cheers,
Paolo
^ permalink raw reply [flat|nested] 33+ messages in thread
* Re: [PATCH v5 mptcp-next 10/10] mptcp: leverage the backlog for RX packet processing
2025-10-21 17:21 ` Paolo Abeni
@ 2025-10-21 23:53 ` Mat Martineau
0 siblings, 0 replies; 33+ messages in thread
From: Mat Martineau @ 2025-10-21 23:53 UTC (permalink / raw)
To: Paolo Abeni; +Cc: mptcp
On Tue, 21 Oct 2025, Paolo Abeni wrote:
> On 10/21/25 1:32 AM, Mat Martineau wrote:
>> On Mon, 6 Oct 2025, Paolo Abeni wrote:
>>> diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
>>> index 2d5d3da67d1ac..a97a92eccc502 100644
>>> --- a/net/mptcp/protocol.c
>>> +++ b/net/mptcp/protocol.c
>>
>> ...
>>
>>> @@ -3509,23 +3519,29 @@ void __mptcp_check_push(struct sock *sk, struct sock *ssk)
>>>
>>> #define MPTCP_FLAGS_PROCESS_CTX_NEED (BIT(MPTCP_PUSH_PENDING) | \
>>> BIT(MPTCP_RETRANSMIT) | \
>>> - BIT(MPTCP_FLUSH_JOIN_LIST) | \
>>> - BIT(MPTCP_DEQUEUE))
>>> + BIT(MPTCP_FLUSH_JOIN_LIST))
>>>
>>> /* processes deferred events and flush wmem */
>>> static void mptcp_release_cb(struct sock *sk)
>>> __must_hold(&sk->sk_lock.slock)
>>> {
>>> struct mptcp_sock *msk = mptcp_sk(sk);
>>> + u32 delta = 0;
>>>
>>> for (;;) {
>>> unsigned long flags = (msk->cb_flags & MPTCP_FLAGS_PROCESS_CTX_NEED);
>>> - struct list_head join_list;
>>> + LIST_HEAD(join_list);
>>> + LIST_HEAD(skbs);
>>> +
>>> + sk_forward_alloc_add(sk, msk->borrowed_mem);
>>> + msk->borrowed_mem = 0;
>>> +
>>> + if (sk_rmem_alloc_get(sk) < sk->sk_rcvbuf)
>>> + list_splice_init(&msk->backlog_list, &skbs);
>>>
>>> - if (!flags)
>>> + if (!flags && list_empty(&skbs))
>>> break;
>>>
>>> - INIT_LIST_HEAD(&join_list);
>>> list_splice_init(&msk->join_list, &join_list);
>>>
>>> /* the following actions acquire the subflow socket lock
>>> @@ -3544,7 +3560,8 @@ static void mptcp_release_cb(struct sock *sk)
>>> __mptcp_push_pending(sk, 0);
>>> if (flags & BIT(MPTCP_RETRANSMIT))
>>> __mptcp_retrans(sk);
>>> - if ((flags & BIT(MPTCP_DEQUEUE)) && __mptcp_move_skbs(sk)) {
>>> + if (!list_empty(&skbs) &&
>>> + __mptcp_move_skbs(sk, &skbs, &delta)) {
>>> /* notify ack seq update */
>>> mptcp_cleanup_rbuf(msk, 0);
>>> sk->sk_data_ready(sk);
>>> @@ -3552,7 +3569,9 @@ static void mptcp_release_cb(struct sock *sk)
>>>
>>> cond_resched();
>>> spin_lock_bh(&sk->sk_lock.slock);
>>> + list_splice(&skbs, &msk->backlog_list);
>>> }
>>> + WRITE_ONCE(msk->backlog_len, msk->backlog_len - delta);
>>
>> Hi Paolo -
>>
>> Given the possible multiple calls to __mptcp_move_skbs() and that the
>> spinlock is released/reacquired (and the cond_resched) in the middle,
>> would it make sense to update msk->backlog_len for each iteration of the
>> loop so __mptcp_space() and mptcp_space() don't under-report available
>> space and mptcp_cleanup_rbuf() can make incremental progress?
>>
>> I know we don't want to WRITE_ONCE() more than necessary, but it seems
>> like there won't typically be more than one loop iteration. In the cases
>> where it does repeat the loop that means data is arriving quickly and
>> reporting mptcp_space accurately will be important.
>
> That WRITE_ONCE() is intentionally outside the loop, not as an
> optimization but for a functional reason, similar to:
>
> https://elixir.bootlin.com/linux/v6.17.4/source/net/core/sock.c#L3190
>
> Without it, in exceptional situations, the loop could run for an
> unbounded amount of time.
>
> Given this is MPTCP-level, and the packets already went through the TCP
> subflow, such a scenario is possibly even more unlikely, but I think
> it's still possible and serious enough that we want to avoid it.
>
Ah, I think I see where that could happen if the received packets get
discarded and don't get counted against the rcv buffer limits. "Double
counting" the most recently moved packets using backlog_len creates the
temporary appearance of no buffer space in that "wild producer" scenario.
Still applies at the MPTCP level, I agree.
> WRT the receive buffer utilization, it should not change much, as
> either on the backlog or in the receive buffer, the skb should be
> accounted for its whole truesize.
My concern is mostly that the moved skbs are accounted twice until the
loop is finally exited, so in the non-adversarial case where packets are
accepted there would be an impact on the behavior of the rx window during
high speed transfers.
If the overall delta were used as the loop-exit condition, backlog_len
could be updated on each loop iteration while still keeping the loop
bounded.
Taking into consideration the expected case of high-throughput behavior
vs. an unexpected/infrequent "wild producer" scenario, does it still seem
best to keep the above code as-is? I'll see what you decide in v6 :)
- Mat
Thread overview: 33+ messages
2025-10-06 8:11 [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 01/10] mptcp: borrow forward memory from subflow Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 02/10] mptcp: cleanup fallback data fin reception Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 03/10] mptcp: cleanup fallback dummy mapping generation Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 04/10] mptcp: fix MSG_PEEK stream corruption Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 05/10] mptcp: ensure the kernel PM does not take action too late Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 06/10] mptcp: do not miss early first subflow close event notification Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 07/10] mptcp: make mptcp_destroy_common() static Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 08/10] mptcp: drop the __mptcp_data_ready() helper Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 09/10] mptcp: introduce mptcp-level backlog Paolo Abeni
2025-10-08 3:09 ` Geliang Tang
2025-10-20 19:45 ` Mat Martineau
2025-10-06 8:12 ` [PATCH v5 mptcp-next 10/10] mptcp: leverage the backlog for RX packet processing Paolo Abeni
2025-10-20 23:32 ` Mat Martineau
2025-10-21 17:21 ` Paolo Abeni
2025-10-21 23:53 ` Mat Martineau
2025-10-06 17:07 ` [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing Matthieu Baerts
2025-10-08 3:07 ` Geliang Tang
2025-10-08 7:30 ` Paolo Abeni
2025-10-09 6:54 ` Geliang Tang
2025-10-09 7:52 ` Paolo Abeni
2025-10-09 9:02 ` Geliang Tang
2025-10-09 10:23 ` Paolo Abeni
2025-10-09 13:58 ` Paolo Abeni
2025-10-10 8:21 ` Paolo Abeni
2025-10-10 12:22 ` Geliang Tang
2025-10-13 9:07 ` Geliang Tang
2025-10-13 13:29 ` Paolo Abeni
2025-10-13 17:07 ` Paolo Abeni
2025-10-15 9:00 ` Paolo Abeni
2025-10-17 6:38 ` Geliang Tang
2025-10-18 0:16 ` Mat Martineau
2025-10-06 17:43 ` MPTCP CI