From: Jakub Kicinski <kuba@kernel.org>
To: Paolo Abeni <pabeni@redhat.com>
Cc: Eric Dumazet <edumazet@google.com>,
netdev@vger.kernel.org, Neal Cardwell <ncardwell@google.com>,
Kuniyuki Iwashima <kuniyu@google.com>,
"David S. Miller" <davem@davemloft.net>,
David Ahern <dsahern@kernel.org>, Simon Horman <horms@kernel.org>,
Matthieu Baerts <matttbe@kernel.org>
Subject: Re: [PATCH net-next 1/2] tcp: do not set a zero size receive buffer
Date: Mon, 21 Jul 2025 08:27:28 -0700
Message-ID: <20250721082728.355745f2@kernel.org>
In-Reply-To: <1d78b781-5cca-440c-b9d0-bdf40a410a3d@redhat.com>
On Mon, 21 Jul 2025 16:56:06 +0200 Paolo Abeni wrote:
> >> I *think* that catching only the !sk_rmem_alloc case would avoid the
> >> stall, but it seems a bit 'late'.
> >
> > A packetdrill test here would help me understand your concern.
>
> I fear a complete working script would take a lot of time, so let me
> try to sketch just the relevant part:
>
> # receiver state is:
> # rmem=110592 rcvbuf=174650 scaling_ratio=253 rwin=63232
> # no OoO data, no memory pressure
>
> # the incoming packet is in sequence
> +0 < P. 109297:172528(63232) ack 1
>
> With just the zero-rmem check in tcp_prune_queue(), that function
> will still invoke tcp_clamp_window(), which shrinks the receive
> buffer to 110592.
> tcp_collapse() can't make enough room, so the incoming packet will be
> dropped. I think we should accept such a packet instead.
>
> Side note: the above data are taken from an actual reproduction of the issue.
>
> Please let me know if the above clarifies my doubt a bit, or if a
> full packetdrill script is needed.
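
To put numbers on the sketch above: a tiny stand-alone calculation,
with one assumption baked in, namely that the segment's truesize can
be recovered from the payload size via the scaling ratio,
truesize ~= len * 256 / ratio (the inverse of how the kernel derives
scaling_ratio in tcp_measure_rcv_mss()):

/* Does the in-sequence segment fit before and after
 * tcp_clamp_window() shrinks rcvbuf?  Numbers taken from the
 * scenario quoted above; truesize is an estimate, see lead-in.
 */
#include <stdio.h>

int main(void)
{
	unsigned long rmem   = 110592;	/* sk_rmem_alloc */
	unsigned long rcvbuf = 174650;	/* sk_rcvbuf */
	unsigned long ratio  = 253;	/* scaling_ratio */
	unsigned long len    = 63232;	/* in-sequence payload */
	unsigned long truesize = len * 256 / ratio;	/* ~63981 */

	printf("before clamp: %lu + %lu = %lu vs rcvbuf %lu -> %s\n",
	       rmem, truesize, rmem + truesize, rcvbuf,
	       rmem + truesize <= rcvbuf ? "fits" : "dropped");

	rcvbuf = rmem;	/* tcp_clamp_window() shrinks rcvbuf to rmem */
	printf("after clamp:  %lu vs rcvbuf %lu -> %s\n",
	       rmem + truesize, rcvbuf,
	       rmem + truesize <= rcvbuf ? "fits" : "dropped");
	return 0;
}

With these numbers the segment fits the original rcvbuf (174573 <=
174650) but not the clamped one, which is exactly the drop described
above.
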
Not the first time we stumble on packetdrill when the scaling ratio
is involved. Solving that is probably outside the scope of this
discussion, but I wonder what the best way to do it would be. My
go-to would be to integrate packetdrill with netdevsim and add an
option for netdevsim to inflate truesize on demand. But perhaps
there's a clever way to force something like tap to give us the
ratio we desire. Other ideas?
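
For reference, since the receiver derives the ratio from incoming
skbs as roughly (skb->len << 8) / skb->truesize (see
tcp_measure_rcv_mss() in net/ipv4/tcp_input.c), such a netdevsim
option would only need to pad skb->truesize up to
len * 256 / target_ratio. A rough sketch of the idea; the knob name
and hook point are made up:

/* Hypothetical netdevsim rx-path helper: inflate skb->truesize so
 * the TCP receiver computes a requested scaling_ratio.
 * ns->target_scaling_ratio is an invented test knob (e.g. debugfs).
 */
static void nsim_inflate_truesize(struct netdevsim *ns, struct sk_buff *skb)
{
	u32 ratio = READ_ONCE(ns->target_scaling_ratio);	/* e.g. 253 */
	u32 wanted;

	if (!ratio)
		return;

	/* invert scaling_ratio = (len << 8) / truesize; len is at most
	 * ~64K here so the shift cannot overflow u32
	 */
	wanted = (skb->len << 8) / ratio;
	if (wanted > skb->truesize)
		skb->truesize = wanted;	/* deliberately unbalanced vs the
					 * memory actually backing the skb;
					 * acceptable for a test-only device
					 */
}
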