* [PATCH net-next] selftests: hw-net: tso: set a TCP window clamp to avoid spurious drops
@ 2026-02-23 20:40 Jakub Kicinski
From: Jakub Kicinski @ 2026-02-23 20:40 UTC (permalink / raw)
  To: davem
  Cc: netdev, edumazet, pabeni, andrew+netdev, horms, shuah,
	daniel.zahka, willemb, linux-kselftest, Jakub Kicinski

The TSO test wants to make sure that there aren't a lot of retransmits,
because retransmits could indicate that the device has a buggy TSO
implementation. On debug kernels, however, we're likely to see
significant packet loss because we simply overwhelm the receiver.

In a QEMU loop with virtio devices we see a ~10% false positive rate,
with occasional runs hitting the threshold of 25% packet loss.

Since we're only sending 4MB of data, set TCP_WINDOW_CLAMP to 200kB.
This seems to make virtio happy while having little impact, since we're
primarily interested in testing the sender, and the test doesn't
currently enable BIG TCP.
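
For illustration only, a minimal sketch of the equivalent
setsockopt() on the receiving socket (the numeric fallback is from
linux/tcp.h; the patch itself relies on socat's tcp-window-clamp
listen option instead):

  import socket

  # TCP_WINDOW_CLAMP is 10 in linux/tcp.h; not every Python version
  # exposes it as a socket module constant
  TCP_WINDOW_CLAMP = getattr(socket, "TCP_WINDOW_CLAMP", 10)

  sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
  # bound the advertised receive window to ~200kB, mirroring the
  # tcp-window-clamp=200000 option added below
  sock.setsockopt(socket.IPPROTO_TCP, TCP_WINDOW_CLAMP, 200000)
  sock.bind(("", 0))
  sock.listen()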

Running socat over the virtio loop for 2 seconds on a debug kernel
shows:

  TcpOutSegs                      27327              0.0
  TcpRetransSegs                  83                 0.0

  TcpOutSegs                      30012              0.0
  TcpRetransSegs                  80                 0.0

  TcpOutSegs                      28767              0.0
  TcpRetransSegs                  77                 0.0

But with the clamp the 3 attempts show no retransmits:

  TcpOutSegs                      31537              0.0
  TcpRetransSegs                  0                  0.0

  TcpOutSegs                      30323              0.0
  TcpRetransSegs                  0                  0.0

  TcpOutSegs                      28700              0.0
  TcpRetransSegs                  0                  0.0
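
The counters above appear to be nstat deltas; assuming iproute2's
nstat is installed, they can be sampled around a run roughly like so:

  import subprocess

  # zero nstat's history so the next plain invocation prints deltas
  subprocess.run(["nstat", "-n"], check=True)
  # ... run the 2 second socat transfer here ...
  out = subprocess.run(["nstat"], capture_output=True, text=True,
                       check=True).stdout
  for line in out.splitlines():
      if line.startswith(("TcpOutSegs", "TcpRetransSegs")):
          print(line)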

Since we expect no receiver-related drops now, we can significantly
increase the test's sensitivity to drops (the allowed retransmit
ratio is tightened from 1/4 to 1/16 of the LSO wire packets).

All the testing we do in NIPA uses cubic.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
 tools/testing/selftests/drivers/net/hw/tso.py | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/drivers/net/hw/tso.py b/tools/testing/selftests/drivers/net/hw/tso.py
index 0998e68ebaf0..bb675e3dac88 100755
--- a/tools/testing/selftests/drivers/net/hw/tso.py
+++ b/tools/testing/selftests/drivers/net/hw/tso.py
@@ -36,8 +36,11 @@ from lib.py import bkg, cmd, defer, ethtool, ip, rand_port, wait_port_listen
 def run_one_stream(cfg, ipver, remote_v4, remote_v6, should_lso):
     cfg.require_cmd("socat", local=False, remote=True)
 
+    # Set recv window clamp to avoid overwhelming receiver on debug kernels
+    # the 200k clamp should still let us reach > 15Gbps on real HW
     port = rand_port()
-    listen_cmd = f"socat -{ipver} -t 2 -u TCP-LISTEN:{port},reuseport /dev/null,ignoreeof"
+    listen_opts = f"{port},reuseport,tcp-window-clamp=200000"
+    listen_cmd = f"socat -{ipver} -t 2 -u TCP-LISTEN:{listen_opts} /dev/null,ignoreeof"
 
     with bkg(listen_cmd, host=cfg.remote, exit_wait=True) as nc:
         wait_port_listen(port, host=cfg.remote)
@@ -68,7 +71,7 @@ from lib.py import bkg, cmd, defer, ethtool, ip, rand_port, wait_port_listen
 
         # Make sure we have order of magnitude more LSO packets than
         # retransmits, in case TCP retransmitted all the LSO packets.
-        ksft_lt(tcp_sock_get_retrans(sock), total_lso_wire / 4)
+        ksft_lt(tcp_sock_get_retrans(sock), total_lso_wire / 16)
         sock.close()
 
         if should_lso:
-- 
2.53.0

