From: Stanislav Fomichev <stfomichev@gmail.com>
To: Jakub Kicinski <kuba@kernel.org>
Cc: davem@davemloft.net, netdev@vger.kernel.org, edumazet@google.com,
pabeni@redhat.com, andrew+netdev@lunn.ch, horms@kernel.org,
shuah@kernel.org, ast@kernel.org, hawk@kernel.org,
john.fastabend@gmail.com, sdf@fomichev.me,
linux-kselftest@vger.kernel.org
Subject: Re: [PATCH net-next] selftests: drv-net: xdp: make the XDP qstats tests less flaky
Date: Fri, 14 Nov 2025 08:23:32 -0800
Message-ID: <aRdXhE0a-P3Ep1YE@mini-arch>
In-Reply-To: <20251113152703.3819756-1-kuba@kernel.org>
On 11/13, Jakub Kicinski wrote:
> The XDP qstats tests send 2k packets over a single socket.
> Looks like when the netdev CI is busy, running those tests in QEMU
> occasionally flakes: the target doesn't get to run at all
> before all 2000 packets are sent.
>
> Lower the number of packets to 1000 and reopen the socket
> every 50 packets, to give RSS a chance to spread the packets
> to multiple queues.
>
> For the netdev CI testing, either lowering the count or using
> multiple sockets is enough, but let's do both for extra resiliency.
>
> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
> ---
> CC: shuah@kernel.org
> CC: ast@kernel.org
> CC: hawk@kernel.org
> CC: john.fastabend@gmail.com
> CC: sdf@fomichev.me
> CC: linux-kselftest@vger.kernel.org
> ---
> tools/testing/selftests/drivers/net/xdp.py | 15 +++++++++------
> 1 file changed, 9 insertions(+), 6 deletions(-)
>
> diff --git a/tools/testing/selftests/drivers/net/xdp.py b/tools/testing/selftests/drivers/net/xdp.py
> index a148004e1c36..834a37ae7d0d 100755
> --- a/tools/testing/selftests/drivers/net/xdp.py
> +++ b/tools/testing/selftests/drivers/net/xdp.py
> @@ -687,9 +687,12 @@ from lib.py import ip, bpftool, defer
> "/dev/null"
> # Listener runs on "remote" in case of XDP_TX
> rx_host = cfg.remote if act == XDPAction.TX else None
> - # We want to spew 2000 packets quickly, bash seems to do a good enough job
> - tx_udp = f"exec 5<>/dev/udp/{cfg.addr}/{port}; " \
> - "for i in `seq 2000`; do echo a >&5; done; exec 5>&-"
> + # We want to spew 1000 packets quickly, bash seems to do a good enough job
> + # Each reopening of the socket gives us a different local port (for RSS)
> + tx_udp = "for _ in `seq 20`; do " \
> + f"exec 5<>/dev/udp/{cfg.addr}/{port}; " \
> + "for i in `seq 50`; do echo a >&5; done; " \
> + "exec 5>&-; done"
TIL about bash's /dev/udp, interesting..
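For anyone else who hasn't run into it: /dev/udp/<host>/<port> is a bash
redirection feature handled by the shell itself, not a real device node.
A minimal sketch of what the test is doing (the address, port and the nc
listener below are just placeholders, not part of the patch):

  # in another terminal, run something like: nc -l -u 127.0.0.1 9000
  exec 5<>/dev/udp/127.0.0.1/9000   # bash opens a UDP socket as fd 5
  echo a >&5                        # each write is sent as one datagram
  exec 5>&-                         # close fd 5, releasing the socket

Reopening fd 5 in a loop, as the patch does, picks a new ephemeral source
port each time, which is what lets RSS spread the flow across queues.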
Acked-by: Stanislav Fomichev <sdf@fomichev.me>