From: Geliang Tang <geliang@kernel.org>
To: Paolo Abeni <pabeni@redhat.com>,
Matthieu Baerts <matttbe@kernel.org>,
mptcp@lists.linux.dev
Subject: Re: [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing
Date: Thu, 09 Oct 2025 17:02:00 +0800 [thread overview]
Message-ID: <c53540268cb422a14d38745c6d0bdb440deb7c11.camel@kernel.org> (raw)
In-Reply-To: <8a8feb1d-ad10-4ba4-a448-db8a0e45c7c3@redhat.com>

Hi Paolo,
On Thu, 2025-10-09 at 09:52 +0200, Paolo Abeni wrote:
> On 10/9/25 8:54 AM, Geliang Tang wrote:
> > On Wed, 2025-10-08 at 09:30 +0200, Paolo Abeni wrote:
> > > On 10/8/25 5:07 AM, Geliang Tang wrote:
> > > > On Mon, 2025-10-06 at 19:07 +0200, Matthieu Baerts wrote:
> > > > > Hi Paolo,
> > > > >
> > > > > On 06/10/2025 10:11, Paolo Abeni wrote:
> > > > > > This series includes RX path improvement built around
> > > > > > backlog processing
> > > > > Thank you for the new version! This is not a review, but just
> > > > > a note to tell you patchew didn't manage to apply the patches
> > > > > due to the same conflict that was already there with the v4
> > > > > (mptcp_init_skb() parameters have been moved to the previous
> > > > > line). I just applied the patches manually. While at it, I
> > > > > also used this test branch for syzkaller to validate them.
> > > > >
> > > > > (Also, on patch "mptcp: drop the __mptcp_data_ready() helper",
> > > > > git complained that there is a trailing whitespace.)
> > > >
> > > > Sorry, patches 9-10 break my "implement mptcp read_sock" v12
> > > > series. I rebased that series on top of patches 1-8 and it
> > > > works well. But after applying patches 9-10, I changed
> > > > mptcp_recv_skb() in [1] from
> > >
> > > Thanks for the feedback, the applied delta looks good to me.
> > >
> > > > # INFO: with MPTFO start
> > > > # 57 ns2 MPTCP -> ns1 (10.0.1.1:10054 ) MPTCP (duration 60989ms) [FAIL] client exit code 0, server 124
> > > > #
> > > > # netns ns1-RqXF2p (listener) socket stat for 10054:
> > > > # Failed to find cgroup2 mount
> > > > # Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
> > > > # tcp ESTAB 0 0 10.0.1.1:10054 10.0.1.2:55516 ino:2064372 sk:1 cgroup:unreachable:1 <->
> > > > # skmem:(r0,rb131072,t0,tb340992,f0,w0,o0,bl0,d0) sack cubic wscale:8,8 rto:206 rtt:5.026/10.034 ato:40 mss:1460 pmtu:1500 rcvmss:1436 advmss:1460 cwnd:10 bytes_sent:115312 bytes_retrans:1560 bytes_acked:113752 bytes_received:5136 segs_out:85 segs_in:16 data_segs_out:83 data_segs_in:4 send 23239156bps lastsnd:60939 lastrcv:61035 lastack:60912 pacing_rate 343879640bps delivery_rate 1994680bps delivered:84 busy:123ms sndbuf_limited:41ms(33.3%) retrans:0/2 dsack_dups:2 rcv_space:14600 rcv_ssthresh:75432 minrtt:0.003 rcv_wnd:75520 tcp-ulp-mptcp flags:Mec token:0000(id:0)/32ed0950(id:0) seq:2946228641406205031 sfseq:1 ssnoff:1349223625 maplen:5136
> > > > # mptcp LAST-ACK 0 0 10.0.1.1:10054 10.0.1.2:55516 timer:(keepalive,59sec,0) ino:0 sk:2 cgroup:unreachable:1 ---
> > > > # skmem:(r0,rb131072,t0,tb345088,f4088,w352264,o0,bl0,d0) subflows_max:2 remote_key token:32ed0950 write_seq:6317574787800720824 snd_una:6317574787800376423 rcv_nxt:2946228641406210168 bytes_sent:113752 bytes_received:5136 bytes_acked:113752 subflows_total:1 last_data_sent:60954 last_data_recv:61036 last_ack_recv:60913
> > >
> > > bytes_sent == bytes_acked, possibly we are missing a window-open
> > > event, which in turn should be triggered by mptcp_cleanup_rbuf(),
> > > which AFAICS is correctly invoked in the splice code. TL;DR: I
> > > can't find anything obviously wrong :-P
> > >
> > > Also the default rx buf size is suspect.
> > >
> > > Can you reproduce the issue while capturing the traffic with
> > > tcpdump? If so, could you please share the capture?
> >
> > Thank you for your suggestion. I've attached several tcpdump logs
> > from when the tests failed.
>
> Oh wow! The receiver actually sends the window-open notification
> (packets 527 and 528 in the trace), but the sender does not react at
> all.
>
> I have no idea/I haven't dug into why the sender did not try a
> zero-window probe (it should!), but it looks like we have some old
> bug in the sender wakeup since the MPTCP_DEQUEUE introduction (which
> is very surprising; why did we not catch/observe this earlier?!?).
> That could also explain sporadic mptcp_join failures.
>
> Could you please try the attached patch?

Thank you very much. I just tested this patch, but it doesn't work. The
splice test still fails and reports the same error.

-Geliang
>
> /P
>
> p.s. AFAICS the backlog introduction should just increase the
> frequency of an already possible event...
Thread overview: 33+ messages
2025-10-06 8:11 [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 01/10] mptcp: borrow forward memory from subflow Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 02/10] mptcp: cleanup fallback data fin reception Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 03/10] mptcp: cleanup fallback dummy mapping generation Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 04/10] mptcp: fix MSG_PEEK stream corruption Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 05/10] mptcp: ensure the kernel PM does not take action too late Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 06/10] mptcp: do not miss early first subflow close event notification Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 07/10] mptcp: make mptcp_destroy_common() static Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 08/10] mptcp: drop the __mptcp_data_ready() helper Paolo Abeni
2025-10-06 8:12 ` [PATCH v5 mptcp-next 09/10] mptcp: introduce mptcp-level backlog Paolo Abeni
2025-10-08 3:09 ` Geliang Tang
2025-10-20 19:45 ` Mat Martineau
2025-10-06 8:12 ` [PATCH v5 mptcp-next 10/10] mptcp: leverage the backlog for RX packet processing Paolo Abeni
2025-10-20 23:32 ` Mat Martineau
2025-10-21 17:21 ` Paolo Abeni
2025-10-21 23:53 ` Mat Martineau
2025-10-06 17:07 ` [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing Matthieu Baerts
2025-10-08 3:07 ` Geliang Tang
2025-10-08 7:30 ` Paolo Abeni
2025-10-09 6:54 ` Geliang Tang
2025-10-09 7:52 ` Paolo Abeni
2025-10-09 9:02 ` Geliang Tang [this message]
2025-10-09 10:23 ` Paolo Abeni
2025-10-09 13:58 ` Paolo Abeni
2025-10-10 8:21 ` Paolo Abeni
2025-10-10 12:22 ` Geliang Tang
2025-10-13 9:07 ` Geliang Tang
2025-10-13 13:29 ` Paolo Abeni
2025-10-13 17:07 ` Paolo Abeni
2025-10-15 9:00 ` Paolo Abeni
2025-10-17 6:38 ` Geliang Tang
2025-10-18 0:16 ` Mat Martineau
2025-10-06 17:43 ` MPTCP CI