MPTCP Linux Development
From: Geliang Tang <geliang@kernel.org>
To: Paolo Abeni <pabeni@redhat.com>,
	Matthieu Baerts <matttbe@kernel.org>,
	 Mat Martineau <martineau@kernel.org>,
	mptcp@lists.linux.dev
Subject: Re: [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing
Date: Fri, 17 Oct 2025 14:38:42 +0800	[thread overview]
Message-ID: <fc39dd4b9b36334ad52ba3e0940e642db4a560ee.camel@kernel.org> (raw)
In-Reply-To: <281e24d1-da6a-493e-9d12-66bd3cdd7ed4@redhat.com>

Hi Paolo, Matt, Mat,

On Wed, 2025-10-15 at 11:00 +0200, Paolo Abeni wrote:
> On 10/13/25 11:07 AM, Geliang Tang wrote:
> > On Fri, 2025-10-10 at 20:22 +0800, Geliang Tang wrote:
> > > Hi Paolo,
> > > 
> > > On Fri, 2025-10-10 at 10:21 +0200, Paolo Abeni wrote:
> > > > On 10/9/25 3:58 PM, Paolo Abeni wrote:
> > > > > @Geliang: if you reproduce the issue multiple times, are there
> > > > > any common patterns? e.g. sender files considerably larger than
> > > > > the client one, or only a specific subset of all the test-cases
> > > > > failing, or ...
> > > > 
> > > > Other questions:
> > > > - Can you please share your setup details (VM vs baremetal, debug
> > > > config vs non debug, vmg vs plain qemu, number of [v]cores...)? I
> > > > can't repro the issue locally.
> > > 
> > > Here are my modifications:
> > > 
> > > https://git.kernel.org/pub/scm/linux/kernel/git/geliang/mptcp_net-next.git/log/?h=splice_new
> > > 
> > > I used mptcp-upstream-virtme-docker normal config to reproduce
> > > it:
> > > 
> > > docker run \
> > > 	-e INPUT_NO_BLOCK=1 \
> > > 	-e INPUT_PACKETDRILL_NO_SYNC=1 \
> > > 	-v "${PWD}:${PWD}:rw" -w "${PWD}" --privileged --rm -it \
> > > 	--pull always ghcr.io/multipath-tcp/mptcp-upstream-virtme-docker:latest \
> > > 	auto-normal
> > > 
> > > $ cat .virtme-exec-run 
> > > run_loop run_selftest_one ./mptcp_connect_splice.sh
> > > 
> > > Running mptcp_connect_splice.sh in a loop dozens of times should
> > > reproduce the test failure.
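
[Sketch, not from the thread: the repro above boils down to re-running the
selftest until it fails. A self-contained outer loop could look like the
following; TEST_CMD is a stand-in that defaults to 'true' so the sketch runs
anywhere, whereas in the virtme setup it would be ./mptcp_connect_splice.sh.]

```shell
# Re-run the selftest up to 50 times, stopping at the first failure.
# TEST_CMD is a placeholder command, not part of the original setup.
TEST_CMD="${TEST_CMD:-true}"
i=1
while [ "$i" -le 50 ]; do
    if ! $TEST_CMD; then
        echo "failed at iteration $i"
        break
    fi
    i=$((i + 1))
done
echo "ran $((i - 1)) successful iterations"
# prints: ran 50 successful iterations
```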
> > > 
> > > > - Can you please share a pcap capture _and_ the selftest text
> > > > output for the same failing test?
> > 
> > The pcap captures (gQQ13x-ns1-ns3-MPTCP-MPTCP-dead:beef:3::2-10013-connector.pcap,
> > gQQ13x-ns1-ns3-MPTCP-MPTCP-dead:beef:3::2-10013-listener.pcap) and the
> > selftest text output (selftest_output) are attached.
> 
> Looks like the 'stuck' scenario is quite consistent. The receiver filled
> its receive window, and shortly after sent an ack when re-opening it,
> but the sender did not react to that ack.
> 
> The perf instrumentation I mentioned would be very useful. I tried to
> capture it myself, but so far I failed - the repro ran for several
> hundred iterations without issues and finally podman got stuck (a
> podman bug, apparently, or local resources exhausted).
> 
> Did you have better luck collecting the perf trace?

Sorry, I haven't made any progress yet. Please give me some more time.


Since this issue only occurs during the splice test, I suggest we move
the discussion to the future "implement mptcp read_sock and splice"
series, and not let it block the merging of this current series.

I don't have any further constructive review comments on patches 9 and
10. I'm wondering if we should get input from Matt and Mat.

Thanks,
-Geliang

> 
> /P
> 
> 



Thread overview: 33+ messages
2025-10-06  8:11 [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing Paolo Abeni
2025-10-06  8:12 ` [PATCH v5 mptcp-next 01/10] mptcp: borrow forward memory from subflow Paolo Abeni
2025-10-06  8:12 ` [PATCH v5 mptcp-next 02/10] mptcp: cleanup fallback data fin reception Paolo Abeni
2025-10-06  8:12 ` [PATCH v5 mptcp-next 03/10] mptcp: cleanup fallback dummy mapping generation Paolo Abeni
2025-10-06  8:12 ` [PATCH v5 mptcp-next 04/10] mptcp: fix MSG_PEEK stream corruption Paolo Abeni
2025-10-06  8:12 ` [PATCH v5 mptcp-next 05/10] mptcp: ensure the kernel PM does not take action too late Paolo Abeni
2025-10-06  8:12 ` [PATCH v5 mptcp-next 06/10] mptcp: do not miss early first subflow close event notification Paolo Abeni
2025-10-06  8:12 ` [PATCH v5 mptcp-next 07/10] mptcp: make mptcp_destroy_common() static Paolo Abeni
2025-10-06  8:12 ` [PATCH v5 mptcp-next 08/10] mptcp: drop the __mptcp_data_ready() helper Paolo Abeni
2025-10-06  8:12 ` [PATCH v5 mptcp-next 09/10] mptcp: introduce mptcp-level backlog Paolo Abeni
2025-10-08  3:09   ` Geliang Tang
2025-10-20 19:45   ` Mat Martineau
2025-10-06  8:12 ` [PATCH v5 mptcp-next 10/10] mptcp: leverage the backlog for RX packet processing Paolo Abeni
2025-10-20 23:32   ` Mat Martineau
2025-10-21 17:21     ` Paolo Abeni
2025-10-21 23:53       ` Mat Martineau
2025-10-06 17:07 ` [PATCH v5 mptcp-next 00/10] mptcp: introduce backlog processing Matthieu Baerts
2025-10-08  3:07   ` Geliang Tang
2025-10-08  7:30     ` Paolo Abeni
2025-10-09  6:54       ` Geliang Tang
2025-10-09  7:52         ` Paolo Abeni
2025-10-09  9:02           ` Geliang Tang
2025-10-09 10:23             ` Paolo Abeni
2025-10-09 13:58               ` Paolo Abeni
2025-10-10  8:21                 ` Paolo Abeni
2025-10-10 12:22                   ` Geliang Tang
2025-10-13  9:07                     ` Geliang Tang
2025-10-13 13:29                       ` Paolo Abeni
2025-10-13 17:07                         ` Paolo Abeni
2025-10-15  9:00                       ` Paolo Abeni
2025-10-17  6:38                         ` Geliang Tang [this message]
2025-10-18  0:16                           ` Mat Martineau
2025-10-06 17:43 ` MPTCP CI
