From: Geliang Tang <geliang@kernel.org>
To: Paolo Abeni <pabeni@redhat.com>,
Matthieu Baerts <matttbe@kernel.org>,
mptcp@lists.linux.dev
Subject: Re: [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing
Date: Fri, 10 Oct 2025 20:22:46 +0800
Message-ID: <20a3df573803203df6b672d1ecd606e242e84b20.camel@kernel.org>
In-Reply-To: <53ed629a-d364-470f-8a52-5a34692f0da7@redhat.com>
Hi Paolo,
On Fri, 2025-10-10 at 10:21 +0200, Paolo Abeni wrote:
> On 10/9/25 3:58 PM, Paolo Abeni wrote:
> > @Geliang: if you reproduce the issue multiple times, are there any
> > common patterns? E.g. sender files considerably larger than the
> > client one, only a specific subset of all the test-cases failing,
> > or ...
>
> Other questions:
> - Can you please share your setup details (VM vs bare metal, debug
> config vs non-debug, vng vs plain qemu, number of [v]cores...)? I
> can't repro the issue locally.
Here are my modifications:
https://git.kernel.org/pub/scm/linux/kernel/git/geliang/mptcp_net-next.git/log/?h=splice_new
I used mptcp-upstream-virtme-docker with the normal config to reproduce it:
docker run \
    -e INPUT_NO_BLOCK=1 \
    -e INPUT_PACKETDRILL_NO_SYNC=1 \
    -v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
    --pull always ghcr.io/multipath-tcp/mptcp-upstream-virtme-docker:latest \
    auto-normal
$ cat .virtme-exec-run
run_loop run_selftest_one ./mptcp_connect_splice.sh
Running mptcp_connect_splice.sh in a loop dozens of times should
reproduce the test failure.
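In case it is useful, this is roughly equivalent to the following
plain-shell loop (just a sketch with assumed paths; inside the VM the
run_loop helper above takes care of it):

    # Hypothetical wrapper: re-run the splice selftest until it fails,
    # keeping the text output of each run for later analysis.
    i=0
    while ./mptcp_connect_splice.sh > "run-$i.log" 2>&1; do
            i=$((i + 1))
    done
    echo "failure reproduced at iteration $i, see run-$i.log"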
> - Can you please share a pcap capture _and_ the selftest text output
> for the same failing test?
>
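Sure. For the capture I plan to run tcpdump in the test namespaces
while the selftest is looping, along these lines (only a sketch: the
actual namespace names are created by the selftest with random
suffixes and must be taken from 'ip netns list'):

    # ns1/ns2 are placeholders for the selftest's sender/receiver netns.
    ip netns exec ns1 tcpdump -i any -w /tmp/sender.pcap &
    CAP1=$!
    ip netns exec ns2 tcpdump -i any -w /tmp/receiver.pcap &
    CAP2=$!
    ./mptcp_connect_splice.sh > /tmp/selftest.log 2>&1
    kill "$CAP1" "$CAP2"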
> In the log shared previously the sender had data queued at the
> mptcp level, but not at the TCP level. In the shared pcap capture the
> receiver sends a couple of acks opening the tcp-level and mptcp-level
> window, but the sender never replies.
>
> In such a scenario the incoming ack should reach ack_update_msk() ->
> __mptcp_check_push() -> __mptcp_subflow_push_pending() (or
> mptcp_release_cb -> __mptcp_push_pending()) -> mptcp_sendmsg_frag(),
> but that chain is apparently broken somewhere in the failing
> scenario. Could you please add probe points to the mentioned
> functions and perf record the test, to try to see where the chain is
> interrupted?
Thank you for your suggestion. I will proceed with testing accordingly.
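A minimal sketch of the probing I plan to try inside the test VM
(function names taken from the chain you listed; if some of them turn
out to be inlined or not visible without debuginfo, perf probe will
refuse and I will fall back to a caller or a source-line probe):

    # Add kprobe-based events on the suspected push chain.
    perf probe -a ack_update_msk
    perf probe -a __mptcp_check_push
    perf probe -a __mptcp_subflow_push_pending
    perf probe -a __mptcp_push_pending
    perf probe -a mptcp_sendmsg_frag

    # Record all probe events system-wide while the selftest runs, then
    # check with 'perf script' where the chain stops.
    perf record -e 'probe:*' -a -- ./mptcp_connect_splice.sh
    perf script | less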
-Geliang
>
> Thanks,
>
> Paolo
>