From: Jason Xing <kerneljasonxing@gmail.com>
To: davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
pabeni@redhat.com, bjorn@kernel.org, magnus.karlsson@intel.com,
maciej.fijalkowski@intel.com, jonathan.lemon@gmail.com,
sdf@fomichev.me, ast@kernel.org, daniel@iogearbox.net,
hawk@kernel.org, john.fastabend@gmail.com, joe@dama.to,
willemdebruijn.kernel@gmail.com
Cc: bpf@vger.kernel.org, netdev@vger.kernel.org,
Jason Xing <kernelxing@tencent.com>
Subject: [PATCH net-next v3 2/2] selftests/bpf: check if the global consumer of tx queue updates after send call
Date: Wed, 25 Jun 2025 18:10:14 +0800
Message-ID: <20250625101014.45066-3-kerneljasonxing@gmail.com>
In-Reply-To: <20250625101014.45066-1-kerneljasonxing@gmail.com>
From: Jason Xing <kernelxing@tencent.com>
The subtest deliberately sends 33 packets at once to check whether xsk
updates the global consumer of the tx queue when it exits
__xsk_generic_xmit() after hitting the loop limit (max_tx_budget, 32 by
default). Using 33 packets keeps xskq_cons_peek_desc() from updating the
consumer itself, so the test can accurately verify whether the issue
fixed by the first patch is still present.

As for the selftest implementation, the normal validation_func cannot be
used to detect the issue: the packet sending logic calls sendto()
multiple times, so the consumer cannot be observed at the right moment.
Signed-off-by: Jason Xing <kernelxing@tencent.com>
---
tools/testing/selftests/bpf/xskxceiver.c | 30 ++++++++++++++++++++++--
1 file changed, 28 insertions(+), 2 deletions(-)
diff --git a/tools/testing/selftests/bpf/xskxceiver.c b/tools/testing/selftests/bpf/xskxceiver.c
index 0ced4026ee44..f7aa83706bc7 100644
--- a/tools/testing/selftests/bpf/xskxceiver.c
+++ b/tools/testing/selftests/bpf/xskxceiver.c
@@ -109,6 +109,8 @@
#include <network_helpers.h>
+#define MAX_TX_BUDGET_DEFAULT 32
+
static bool opt_verbose;
static bool opt_print_tests;
static enum test_mode opt_mode = TEST_MODE_ALL;
@@ -1323,7 +1325,8 @@ static int receive_pkts(struct test_spec *test)
return TEST_PASS;
}
-static int __send_pkts(struct ifobject *ifobject, struct xsk_socket_info *xsk, bool timeout)
+static int __send_pkts(struct test_spec *test, struct ifobject *ifobject,
+ struct xsk_socket_info *xsk, bool timeout)
{
u32 i, idx = 0, valid_pkts = 0, valid_frags = 0, buffer_len;
struct pkt_stream *pkt_stream = xsk->pkt_stream;
@@ -1437,9 +1440,21 @@ static int __send_pkts(struct ifobject *ifobject, struct xsk_socket_info *xsk, b
}
if (!timeout) {
+ int prev_tx_consumer;
+
+ if (!strncmp("TX_QUEUE_CONSUMER", test->name, MAX_TEST_NAME_SIZE))
+ prev_tx_consumer = *xsk->tx.consumer;
+
if (complete_pkts(xsk, i))
return TEST_FAILURE;
+ if (!strncmp("TX_QUEUE_CONSUMER", test->name, MAX_TEST_NAME_SIZE)) {
+ int delta = *xsk->tx.consumer - prev_tx_consumer;
+
+ if (delta != MAX_TX_BUDGET_DEFAULT)
+ return TEST_FAILURE;
+ }
+
usleep(10);
return TEST_PASS;
}
@@ -1492,7 +1507,7 @@ static int send_pkts(struct test_spec *test, struct ifobject *ifobject)
__set_bit(i, bitmap);
continue;
}
- ret = __send_pkts(ifobject, &ifobject->xsk_arr[i], timeout);
+ ret = __send_pkts(test, ifobject, &ifobject->xsk_arr[i], timeout);
if (ret == TEST_CONTINUE && !test->fail)
continue;
@@ -2613,6 +2628,16 @@ static int testapp_adjust_tail_grow_mb(struct test_spec *test)
XSK_UMEM__LARGE_FRAME_SIZE * 2);
}
+static int testapp_tx_queue_consumer(struct test_spec *test)
+{
+ int nr_packets = MAX_TX_BUDGET_DEFAULT + 1;
+
+ pkt_stream_replace(test, nr_packets, MIN_PKT_SIZE);
+ test->ifobj_tx->xsk->batch_size = nr_packets;
+
+ return testapp_validate_traffic(test);
+}
+
static void run_pkt_test(struct test_spec *test)
{
int ret;
@@ -2723,6 +2748,7 @@ static const struct test_spec tests[] = {
{.name = "XDP_ADJUST_TAIL_SHRINK_MULTI_BUFF", .test_func = testapp_adjust_tail_shrink_mb},
{.name = "XDP_ADJUST_TAIL_GROW", .test_func = testapp_adjust_tail_grow},
{.name = "XDP_ADJUST_TAIL_GROW_MULTI_BUFF", .test_func = testapp_adjust_tail_grow_mb},
+ {.name = "TX_QUEUE_CONSUMER", .test_func = testapp_tx_queue_consumer},
};
static void print_tests(void)
--
2.41.3