* [PATCH 0/2] app/testpmd: assign share group dynamically
@ 2026-03-24 12:37 Dariusz Sosnowski
2026-03-24 12:37 ` [PATCH 1/2] " Dariusz Sosnowski
` (2 more replies)
0 siblings, 3 replies; 22+ messages in thread
From: Dariusz Sosnowski @ 2026-03-24 12:37 UTC (permalink / raw)
To: Aman Singh
Cc: dev, Thomas Monjalon, Raslan Darawsheh, Stephen Hemminger,
Adrian Schollmeyer
Our internal regression tests have flagged issues with shared Rx queues.
Specifically, issues with domain mismatches:
Invalid shared rxq config: switch domain mismatch ports 0 and 3
These started appearing after commit [1], which added checks
for globally unique share group indexes.
This could be worked around with the --rxq-share=N option,
but it does not allow proper testing of all use cases [2].

This patchset addresses that by changing the behavior of the
--rxq-share parameter. Instead of relying on the user to select
a proper parameter value, testpmd will dynamically assign
a globally unique share group index to each unique switch and Rx domain.
[1]: 8ebba91086f4 ("app/testpmd: fail on shared Rx queue switch mismatch")
[2]: https://inbox.dpdk.org/dev/yotjxacqrodttraqrr3r6ftut4cty66g6cjnr5ughswtatapgh@gqqkftskp3qq/
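The dynamic assignment described above can be modeled in a few lines of standalone C. This is a hedged sketch, not the testpmd code itself: the `switch_info` struct and `MAX_PORTS` constant below are stand-ins for the `rte_eth_dev_info` switch info and `RTE_MAX_ETHPORTS` used by the real patch.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define MAX_PORTS 32 /* stand-in for RTE_MAX_ETHPORTS */

/* Toy stand-in for the switch info carried in struct rte_eth_dev_info. */
struct switch_info {
	uint16_t domain_id;
	uint16_t rx_domain;
};

struct share_group_slot {
	uint16_t domain_id;
	uint16_t rx_domain;
	uint16_t share_group; /* 0 means the slot is free */
};

static struct share_group_slot slots[MAX_PORTS];

/*
 * Return the share group for a (domain_id, rx_domain) pair,
 * allocating a new slot (group index = slot index + 1) when the
 * pair is seen for the first time. Returns 0 if the table is full.
 */
static uint16_t
assign_share_group(const struct switch_info *info)
{
	size_t first_free = MAX_PORTS;
	size_t i;

	for (i = 0; i < MAX_PORTS; i++) {
		if (slots[i].share_group > 0) {
			/* Used slot: reuse its group on an exact domain match. */
			if (slots[i].domain_id == info->domain_id &&
			    slots[i].rx_domain == info->rx_domain)
				return slots[i].share_group;
		} else if (first_free == MAX_PORTS) {
			first_free = i; /* remember the first free slot */
		}
	}
	if (first_free == MAX_PORTS)
		return 0; /* table full */
	slots[first_free].domain_id = info->domain_id;
	slots[first_free].rx_domain = info->rx_domain;
	slots[first_free].share_group = (uint16_t)(first_free + 1);
	return slots[first_free].share_group;
}
```

A PF and its representors report the same switch and Rx domain, so they land in the same share group regardless of probing order, which is the property the old `pid / rxq_share + 1` scheme could not guarantee with dynamic representor probing.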
Dariusz Sosnowski (2):
app/testpmd: assign share group dynamically
app/testpmd: revert switch domain mismatch check
 app/test-pmd/parameters.c             | 12 +---
 app/test-pmd/testpmd.c                | 88 ++++++++++++++-------------
 app/test-pmd/testpmd.h                |  2 +-
 doc/guides/testpmd_app_ug/run_app.rst | 10 +--
 4 files changed, 54 insertions(+), 58 deletions(-)
--
2.47.3
^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH 1/2] app/testpmd: assign share group dynamically
  2026-03-24 12:37 [PATCH 0/2] app/testpmd: assign share group dynamically Dariusz Sosnowski
@ 2026-03-24 12:37 ` Dariusz Sosnowski
  2026-03-24 15:15   ` Stephen Hemminger
  2026-03-25 16:45   ` Stephen Hemminger
  2026-03-24 12:37 ` [PATCH 2/2] app/testpmd: revert switch domain mismatch check Dariusz Sosnowski
  2026-03-24 16:56 ` [PATCH v2 0/2] app/testpmd: assign share group dynamically Dariusz Sosnowski
  2 siblings, 2 replies; 22+ messages in thread
From: Dariusz Sosnowski @ 2026-03-24 12:37 UTC (permalink / raw)
  To: Aman Singh
  Cc: dev, Thomas Monjalon, Raslan Darawsheh, Stephen Hemminger,
	Adrian Schollmeyer

Testpmd exposes "--rxq-share=[N]" parameter which controls
sharing Rx queues. Before this patch logic was that either:

- all queues were assigned to the same share group
  (when N was not passed),
- or ports were grouped in subsets of N ports,
  each subset got different share group index.

2nd option did not work well with dynamic representor probing,
where new representors would be assigned to new share group.

This patch changes the logic in testpmd to dynamically
assign share group index. Each unique switch and Rx domain
will get different share group.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 app/test-pmd/parameters.c             | 12 ++-----
 app/test-pmd/testpmd.c                | 49 +++++++++++++++++++++++++--
 app/test-pmd/testpmd.h                |  2 +-
 doc/guides/testpmd_app_ug/run_app.rst | 10 +++---
 4 files changed, 54 insertions(+), 19 deletions(-)

diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 3617860830..5d9a5f2501 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -507,7 +507,7 @@ usage(char* progname)
 	printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
 	printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n");
 	printf("  --eth-link-speed: force link speed.\n");
-	printf("  --rxq-share=X: number of ports per shared Rx queue groups, defaults to UINT32_MAX (1 group)\n");
+	printf("  --rxq-share: enable Rx queue sharing per switch and Rx domain\n");
 	printf("  --disable-link-check: disable check on link status when "
 	       "starting/stopping ports.\n");
 	printf("  --disable-device-start: do not automatically start port\n");
@@ -1579,15 +1579,7 @@ launch_args_parse(int argc, char** argv)
 				rte_exit(EXIT_FAILURE, "txonly-flows must be >= 1 and <= 64\n");
 			break;
 		case TESTPMD_OPT_RXQ_SHARE_NUM:
-			if (optarg == NULL) {
-				rxq_share = UINT32_MAX;
-			} else {
-				n = atoi(optarg);
-				if (n >= 0)
-					rxq_share = (uint32_t)n;
-				else
-					rte_exit(EXIT_FAILURE, "rxq-share must be >= 0\n");
-			}
+			rxq_share = 1;
 			break;
 		case TESTPMD_OPT_NO_FLUSH_RX_NUM:
 			no_flush_rx = 1;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index aad880aa34..be8e8299e3 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -545,9 +545,17 @@ uint8_t record_core_cycles;
 uint8_t record_burst_stats;
 
 /*
- * Number of ports per shared Rx queue group, 0 disable.
+ * Enable Rx queue sharing between ports in the same switch and Rx domain.
  */
-uint32_t rxq_share;
+uint8_t rxq_share;
+
+struct share_group_slot {
+	uint16_t domain_id;
+	uint16_t rx_domain;
+	uint16_t share_group;
+};
+
+struct share_group_slot share_group_slots[RTE_MAX_ETHPORTS];
 
 unsigned int num_sockets = 0;
 unsigned int socket_ids[RTE_MAX_NUMA_NODES];
@@ -586,6 +594,41 @@ int proc_id;
  */
 unsigned int num_procs = 1;
 
+static uint16_t
+assign_share_group(struct rte_eth_dev_info *dev_info)
+{
+	unsigned int first_free = RTE_DIM(share_group_slots);
+	bool found = false;
+	unsigned int i;
+
+	for (i = 0; i < RTE_DIM(share_group_slots); i++) {
+		if (share_group_slots[i].share_group > 0) {
+			if (dev_info->switch_info.domain_id == share_group_slots[i].domain_id &&
+			    dev_info->switch_info.rx_domain == share_group_slots[i].rx_domain) {
+				found = true;
+				break;
+			}
+		} else if (first_free == RTE_DIM(share_group_slots)) {
+			first_free = i;
+		}
+	}
+
+	if (found)
+		return share_group_slots[i].share_group;
+
+	/*
+	 * testpmd assigns all queues on a given port to single share group.
+	 * There are RTE_MAX_ETHPORTS share group slots,
+	 * so at least one should always be available.
+	 */
+	RTE_ASSERT(first_free < RTE_DIM(share_group_slots));
+
+	share_group_slots[first_free].domain_id = dev_info->switch_info.domain_id;
+	share_group_slots[first_free].rx_domain = dev_info->switch_info.rx_domain;
+	share_group_slots[first_free].share_group = first_free + 1;
+	return share_group_slots[first_free].share_group;
+}
+
 static void
 eth_rx_metadata_negotiate_mp(uint16_t port_id)
 {
@@ -4097,7 +4140,7 @@ rxtx_port_config(portid_t pid)
 		if (rxq_share > 0 &&
 		    (port->dev_info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE)) {
 			/* Non-zero share group to enable RxQ share. */
-			port->rxq[qid].conf.share_group = pid / rxq_share + 1;
+			port->rxq[qid].conf.share_group = assign_share_group(&port->dev_info);
 			port->rxq[qid].conf.share_qid = qid; /* Equal mapping. */
 		}
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index af185540c3..9b60ebd7fc 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -675,7 +675,7 @@ extern enum tx_pkt_split tx_pkt_split;
 extern uint8_t txonly_multi_flow;
 extern uint16_t txonly_flows;
 
-extern uint32_t rxq_share;
+extern uint8_t rxq_share;
 
 extern uint16_t nb_pkt_per_burst;
 extern uint16_t nb_pkt_flowgen_clones;
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index ae3ef8cdf8..f4a30e5da9 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -393,13 +393,13 @@ The command line options are:
   Valid range is 1 to 64. Default is 64.
   Reducing this value limits the number of unique UDP source ports generated.
 
-* ``--rxq-share=[X]``
+* ``--rxq-share``
 
   Create queues in shared Rx queue mode if device supports.
-  Shared Rx queues are grouped per X ports. X defaults to UINT32_MAX,
-  implies all ports join share group 1. Forwarding engine "shared-rxq"
-  should be used for shared Rx queues. This engine does Rx only and
-  update stream statistics accordingly.
+  Testpmd will assign unique share group index per each
+  unique switch and Rx domain.
+  Forwarding engine "shared-rxq" should be used for shared Rx queues.
+  This engine does Rx only and update stream statistics accordingly.
 
 * ``--eth-link-speed``
-- 
2.47.3

^ permalink raw reply related	[flat|nested] 22+ messages in thread
* Re: [PATCH 1/2] app/testpmd: assign share group dynamically
  2026-03-24 12:37 ` [PATCH 1/2] " Dariusz Sosnowski
@ 2026-03-24 15:15   ` Stephen Hemminger
  2026-03-25 16:45   ` Stephen Hemminger
  1 sibling, 0 replies; 22+ messages in thread
From: Stephen Hemminger @ 2026-03-24 15:15 UTC (permalink / raw)
  To: Dariusz Sosnowski
  Cc: Aman Singh, dev, Thomas Monjalon, Raslan Darawsheh,
	Adrian Schollmeyer

On Tue, 24 Mar 2026 13:37:08 +0100
Dariusz Sosnowski <dsosnowski@nvidia.com> wrote:

> Testpmd exposes "--rxq-share=[N]" parameter which controls
> sharing Rx queues. Before this patch logic was that either:
> 
> - all queues were assigned to the same share group
>   (when N was not passed),
> - or ports were grouped in subsets of N ports,
>   each subset got different share group index.
> 
> 2nd option did not work well with dynamic representor probing,
> where new representors would be assigned to new share group.
> 
> This patch changes the logic in testpmd to dynamically
> assign share group index. Each unique switch and Rx domain
> will get different share group.
> 
> Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
> ---

AI review feedback:

The logic is sound — assign_share_group() maps each unique
(domain_id, rx_domain) pair to a share group index via a simple
linear scan of a fixed-size slot table. No correctness bugs found.

Two warnings:

Warning: share_group_slots[] entries are never freed on port removal.
With dynamic representor probing and hot-unplug cycles, the slot array
(sized RTE_MAX_ETHPORTS, default 32) could fill up. The RTE_ASSERT
will fire in debug builds, but in release builds the behavior would be
reading an uninitialized slot. Consider adding cleanup when a port is
removed, or document the limitation.

Warning: share_group_slots[] should be declared static.
It's a file-scope global without the rte_ prefix and only accessed
within testpmd.c. Making it static gives it proper scoping.

^ permalink raw reply	[flat|nested] 22+ messages in thread
* Re: [PATCH 1/2] app/testpmd: assign share group dynamically
  2026-03-24 12:37 ` [PATCH 1/2] " Dariusz Sosnowski
  2026-03-24 15:15   ` Stephen Hemminger
@ 2026-03-25 16:45   ` Stephen Hemminger
  1 sibling, 0 replies; 22+ messages in thread
From: Stephen Hemminger @ 2026-03-25 16:45 UTC (permalink / raw)
  To: Dariusz Sosnowski
  Cc: Aman Singh, dev, Thomas Monjalon, Raslan Darawsheh,
	Adrian Schollmeyer

On Tue, 24 Mar 2026 13:37:08 +0100
Dariusz Sosnowski <dsosnowski@nvidia.com> wrote:

> +static uint16_t
> +assign_share_group(struct rte_eth_dev_info *dev_info)
> +{
> +	unsigned int first_free = RTE_DIM(share_group_slots);
> +	bool found = false;
> +	unsigned int i;
> +
> +	for (i = 0; i < RTE_DIM(share_group_slots); i++) {
> +		if (share_group_slots[i].share_group > 0) {
> +			if (dev_info->switch_info.domain_id == share_group_slots[i].domain_id &&
> +			    dev_info->switch_info.rx_domain == share_group_slots[i].rx_domain) {
> +				found = true;
> +				break;
> +			}
> +		} else if (first_free == RTE_DIM(share_group_slots)) {
> +			first_free = i;
> +		}
> +	}
> +
> +	if (found)
> +		return share_group_slots[i].share_group;

Please use a short circuit return, that would be simpler and code
would be shorter. Same thing below, avoid unnecessary bools.

	for (i = 0; i < RTE_DIM(share_group_slots); i++) {
		if (share_group_slots[i].share_group > 0) {
			if (dev_info->switch_info.domain_id == share_group_slots[i].domain_id &&
			    dev_info->switch_info.rx_domain == share_group_slots[i].rx_domain)
				return share_group_slots[i].share_group;
		} else if (first_free == RTE_DIM(share_group_slots)) {
			first_free = i;
		}
	}

^ permalink raw reply	[flat|nested] 22+ messages in thread
* [PATCH 2/2] app/testpmd: revert switch domain mismatch check
  2026-03-24 12:37 [PATCH 0/2] app/testpmd: assign share group dynamically Dariusz Sosnowski
  2026-03-24 12:37 ` [PATCH 1/2] " Dariusz Sosnowski
@ 2026-03-24 12:37 ` Dariusz Sosnowski
  2026-03-24 15:17   ` Stephen Hemminger
  2026-03-24 16:56 ` [PATCH v2 0/2] app/testpmd: assign share group dynamically Dariusz Sosnowski
  2 siblings, 1 reply; 22+ messages in thread
From: Dariusz Sosnowski @ 2026-03-24 12:37 UTC (permalink / raw)
  To: Aman Singh
  Cc: dev, Thomas Monjalon, Raslan Darawsheh, Stephen Hemminger,
	Adrian Schollmeyer

This reverts commit 8ebba91086f47c90e398d7775921e05659c0d62f.

Previous patch changed --rxq-share parameter logic.
If this parameter is passed, then unique share group index
per switch and Rx domain will be assigned to each shared Rx queue.
As a result the check for domain mismatch is not needed.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 app/test-pmd/testpmd.c | 39 ---------------------------------------
 1 file changed, 39 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index be8e8299e3..5542ac9855 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2721,45 +2721,6 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	uint32_t prev_hdrs = 0;
 	int ret;
 
-	if (rx_conf->share_group > 0) {
-		/* Check required switch info for Rx queue sharing */
-		const uint16_t dom_id = ports[port_id].dev_info.switch_info.domain_id;
-		const uint16_t rx_dom = ports[port_id].dev_info.switch_info.rx_domain;
-
-		uint16_t pid;
-		const char *mismatch = NULL;
-		uint16_t mismatch_pid = (uint16_t)RTE_PORT_ALL;
-
-		RTE_ETH_FOREACH_DEV(pid) {
-			struct rte_port *o_port = &ports[pid];
-			const uint16_t o_dom_id = o_port->dev_info.switch_info.domain_id;
-			const uint16_t o_rx_dom = o_port->dev_info.switch_info.rx_domain;
-
-			for (uint16_t q = 0; q < nb_rxq; ++q) {
-				struct port_rxqueue *rxq = &o_port->rxq[q];
-				if (rxq->conf.share_group != rx_conf->share_group ||
-				    rxq->conf.share_qid != rx_conf->share_qid)
-					continue;
-				if (o_dom_id == dom_id && o_rx_dom == rx_dom)
-					continue;
-
-				if (o_dom_id != dom_id)
-					mismatch = "switch domain";
-				else if (o_rx_dom != rx_dom)
-					mismatch = "rx domain";
-
-				mismatch_pid = pid;
-				break;
-			}
-		}
-
-		if (mismatch) {
-			fprintf(stderr,
-				"Invalid shared rxq config: %s mismatch between ports %u and %u\n",
-				mismatch, port_id, mismatch_pid);
-			return -EINVAL;
-		}
-	}
 
 	if ((rx_pkt_nb_segs > 1) &&
 	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
-- 
2.47.3

^ permalink raw reply related	[flat|nested] 22+ messages in thread
* Re: [PATCH 2/2] app/testpmd: revert switch domain mismatch check
  2026-03-24 12:37 ` [PATCH 2/2] app/testpmd: revert switch domain mismatch check Dariusz Sosnowski
@ 2026-03-24 15:17   ` Stephen Hemminger
  0 siblings, 0 replies; 22+ messages in thread
From: Stephen Hemminger @ 2026-03-24 15:17 UTC (permalink / raw)
  To: Dariusz Sosnowski
  Cc: Aman Singh, dev, Thomas Monjalon, Raslan Darawsheh,
	Adrian Schollmeyer

On Tue, 24 Mar 2026 13:37:09 +0100
Dariusz Sosnowski <dsosnowski@nvidia.com> wrote:

> This reverts commit 8ebba91086f47c90e398d7775921e05659c0d62f.
> 
> Previous patch changed --rxq-share parameter logic.
> If this parameter is passed, then unique share group index
> per switch and Rx domain will be assigned to each shared Rx queue.
> As a result the check for domain mismatch is not needed.
> 
> Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
> ---

One nit from AI review

Clean revert, correctness is fine — the mismatch check is now
redundant since patch 1 guarantees ports in the same share group have
matching domain/rx_domain by construction.

Warning: Double blank line after deletion.
The removal leaves two consecutive blank lines between the variable
declarations (int ret;) and the if ((rx_pkt_nb_segs > 1) block.
One should be removed.

^ permalink raw reply	[flat|nested] 22+ messages in thread
* [PATCH v2 0/2] app/testpmd: assign share group dynamically
  2026-03-24 12:37 [PATCH 0/2] app/testpmd: assign share group dynamically Dariusz Sosnowski
  2026-03-24 12:37 ` [PATCH 1/2] " Dariusz Sosnowski
  2026-03-24 12:37 ` [PATCH 2/2] app/testpmd: revert switch domain mismatch check Dariusz Sosnowski
@ 2026-03-24 16:56 ` Dariusz Sosnowski
  2026-03-24 16:56   ` [PATCH v2 1/2] " Dariusz Sosnowski
  ` (2 more replies)
  2 siblings, 3 replies; 22+ messages in thread
From: Dariusz Sosnowski @ 2026-03-24 16:56 UTC (permalink / raw)
  To: Aman Singh
  Cc: dev, Thomas Monjalon, Raslan Darawsheh, Stephen Hemminger,
	Adrian Schollmeyer

Our internal regression tests have flagged issues with shared Rx queues.
Specifically, issues with domain mismatches:

  Invalid shared rxq config: switch domain mismatch ports 0 and 3

These started appearing after commit [1], which added checks
for globally unique share group indexes.
This could be worked around with the --rxq-share=N option,
but it does not allow proper testing of all use cases [2].

This patchset addresses that by changing the behavior of the
--rxq-share parameter. Instead of relying on the user to select
a proper parameter value, testpmd will dynamically assign
a globally unique share group index to each unique switch and Rx domain.

[1]: 8ebba91086f4 ("app/testpmd: fail on shared Rx queue switch mismatch")
[2]: https://inbox.dpdk.org/dev/yotjxacqrodttraqrr3r6ftut4cty66g6cjnr5ughswtatapgh@gqqkftskp3qq/

v2:
- Add releasing share groups when ports are closed.
- Add static to share_group_slots array definition.
- Remove double empty line in revert commit.

Dariusz Sosnowski (2):
  app/testpmd: assign share group dynamically
  app/testpmd: revert switch domain mismatch check

 app/test-pmd/parameters.c             |  12 +--
 app/test-pmd/testpmd.c                | 122 +++++++++++++++++---------
 app/test-pmd/testpmd.h                |   2 +-
 doc/guides/testpmd_app_ug/run_app.rst |  10 +--
 4 files changed, 87 insertions(+), 59 deletions(-)

-- 
2.47.3

^ permalink raw reply	[flat|nested] 22+ messages in thread
* [PATCH v2 1/2] app/testpmd: assign share group dynamically
  2026-03-24 16:56 ` [PATCH v2 0/2] app/testpmd: assign share group dynamically Dariusz Sosnowski
@ 2026-03-24 16:56   ` Dariusz Sosnowski
  2026-03-25 16:49     ` Stephen Hemminger
  2026-03-25 16:50     ` Stephen Hemminger
  2026-03-24 16:56   ` [PATCH v2 2/2] app/testpmd: revert switch domain mismatch check Dariusz Sosnowski
  2026-03-25 18:02   ` [PATCH v3 0/2] app/testpmd: assign share group dynamically Dariusz Sosnowski
  2 siblings, 2 replies; 22+ messages in thread
From: Dariusz Sosnowski @ 2026-03-24 16:56 UTC (permalink / raw)
  To: Aman Singh
  Cc: dev, Thomas Monjalon, Raslan Darawsheh, Stephen Hemminger,
	Adrian Schollmeyer

Testpmd exposes "--rxq-share=[N]" parameter which controls
sharing Rx queues. Before this patch logic was that either:

- all queues were assigned to the same share group
  (when N was not passed),
- or ports were grouped in subsets of N ports,
  each subset got different share group index.

2nd option did not work well with dynamic representor probing,
where new representors would be assigned to new share group.

This patch changes the logic in testpmd to dynamically
assign share group index. Each unique switch and Rx domain
will get different share group.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 app/test-pmd/parameters.c             | 12 +---
 app/test-pmd/testpmd.c                | 82 ++++++++++++++++++++++++++-
 app/test-pmd/testpmd.h                |  2 +-
 doc/guides/testpmd_app_ug/run_app.rst | 10 ++--
 4 files changed, 87 insertions(+), 19 deletions(-)

diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 3617860830..5d9a5f2501 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -507,7 +507,7 @@ usage(char* progname)
 	printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
 	printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n");
 	printf("  --eth-link-speed: force link speed.\n");
-	printf("  --rxq-share=X: number of ports per shared Rx queue groups, defaults to UINT32_MAX (1 group)\n");
+	printf("  --rxq-share: enable Rx queue sharing per switch and Rx domain\n");
 	printf("  --disable-link-check: disable check on link status when "
 	       "starting/stopping ports.\n");
 	printf("  --disable-device-start: do not automatically start port\n");
@@ -1579,15 +1579,7 @@ launch_args_parse(int argc, char** argv)
 				rte_exit(EXIT_FAILURE, "txonly-flows must be >= 1 and <= 64\n");
 			break;
 		case TESTPMD_OPT_RXQ_SHARE_NUM:
-			if (optarg == NULL) {
-				rxq_share = UINT32_MAX;
-			} else {
-				n = atoi(optarg);
-				if (n >= 0)
-					rxq_share = (uint32_t)n;
-				else
-					rte_exit(EXIT_FAILURE, "rxq-share must be >= 0\n");
-			}
+			rxq_share = 1;
 			break;
 		case TESTPMD_OPT_NO_FLUSH_RX_NUM:
 			no_flush_rx = 1;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index aad880aa34..81b220466f 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -545,9 +545,17 @@ uint8_t record_core_cycles;
 uint8_t record_burst_stats;
 
 /*
- * Number of ports per shared Rx queue group, 0 disable.
+ * Enable Rx queue sharing between ports in the same switch and Rx domain.
  */
-uint32_t rxq_share;
+uint8_t rxq_share;
+
+struct share_group_slot {
+	uint16_t domain_id;
+	uint16_t rx_domain;
+	uint16_t share_group;
+};
+
+static struct share_group_slot share_group_slots[RTE_MAX_ETHPORTS];
 
 unsigned int num_sockets = 0;
 unsigned int socket_ids[RTE_MAX_NUMA_NODES];
@@ -586,6 +594,73 @@ int proc_id;
  */
 unsigned int num_procs = 1;
 
+static uint16_t
+assign_share_group(struct rte_eth_dev_info *dev_info)
+{
+	unsigned int first_free = RTE_DIM(share_group_slots);
+	bool found = false;
+	unsigned int i;
+
+	for (i = 0; i < RTE_DIM(share_group_slots); i++) {
+		if (share_group_slots[i].share_group > 0) {
+			if (dev_info->switch_info.domain_id == share_group_slots[i].domain_id &&
+			    dev_info->switch_info.rx_domain == share_group_slots[i].rx_domain) {
+				found = true;
+				break;
+			}
+		} else if (first_free == RTE_DIM(share_group_slots)) {
+			first_free = i;
+		}
+	}
+
+	if (found)
+		return share_group_slots[i].share_group;
+
+	/*
+	 * testpmd assigns all queues on a given port to single share group.
+	 * There are RTE_MAX_ETHPORTS share group slots,
+	 * so at least one should always be available.
+	 */
+	RTE_ASSERT(first_free < RTE_DIM(share_group_slots));
+
+	share_group_slots[first_free].domain_id = dev_info->switch_info.domain_id;
+	share_group_slots[first_free].rx_domain = dev_info->switch_info.rx_domain;
+	share_group_slots[first_free].share_group = first_free + 1;
+	return share_group_slots[first_free].share_group;
+}
+
+static void
+try_release_share_group(struct share_group_slot *slot)
+{
+	uint16_t pi;
+	bool group_not_used = true;
+
+	/* Check if any port still uses this share group. */
+	RTE_ETH_FOREACH_DEV(pi) {
+		if (ports[pi].dev_info.switch_info.domain_id == slot->domain_id &&
+		    ports[pi].dev_info.switch_info.rx_domain == slot->rx_domain) {
+			group_not_used = false;
+			break;
+		}
+	}
+	if (group_not_used) {
+		slot->share_group = 0;
+		slot->domain_id = 0;
+		slot->rx_domain = 0;
+	}
+}
+
+static void
+try_release_share_groups(void)
+{
+	unsigned int i;
+
+	/* Try release each used share group. */
+	for (i = 0; i < RTE_DIM(share_group_slots); i++)
+		if (share_group_slots[i].share_group > 0)
+			try_release_share_group(&share_group_slots[i]);
+}
+
 static void
 eth_rx_metadata_negotiate_mp(uint16_t port_id)
 {
@@ -3315,6 +3390,7 @@ remove_invalid_ports(void)
 	remove_invalid_ports_in(ports_ids, &nb_ports);
 	remove_invalid_ports_in(fwd_ports_ids, &nb_fwd_ports);
 	nb_cfg_ports = nb_fwd_ports;
+	try_release_share_groups();
 }
 
 static void
@@ -4097,7 +4173,7 @@ rxtx_port_config(portid_t pid)
 		if (rxq_share > 0 &&
 		    (port->dev_info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE)) {
 			/* Non-zero share group to enable RxQ share. */
-			port->rxq[qid].conf.share_group = pid / rxq_share + 1;
+			port->rxq[qid].conf.share_group = assign_share_group(&port->dev_info);
 			port->rxq[qid].conf.share_qid = qid; /* Equal mapping. */
 		}
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index af185540c3..9b60ebd7fc 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -675,7 +675,7 @@ extern enum tx_pkt_split tx_pkt_split;
 extern uint8_t txonly_multi_flow;
 extern uint16_t txonly_flows;
 
-extern uint32_t rxq_share;
+extern uint8_t rxq_share;
 
 extern uint16_t nb_pkt_per_burst;
 extern uint16_t nb_pkt_flowgen_clones;
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index ae3ef8cdf8..f4a30e5da9 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -393,13 +393,13 @@ The command line options are:
   Valid range is 1 to 64. Default is 64.
   Reducing this value limits the number of unique UDP source ports generated.
 
-* ``--rxq-share=[X]``
+* ``--rxq-share``
 
   Create queues in shared Rx queue mode if device supports.
-  Shared Rx queues are grouped per X ports. X defaults to UINT32_MAX,
-  implies all ports join share group 1. Forwarding engine "shared-rxq"
-  should be used for shared Rx queues. This engine does Rx only and
-  update stream statistics accordingly.
+  Testpmd will assign unique share group index per each
+  unique switch and Rx domain.
+  Forwarding engine "shared-rxq" should be used for shared Rx queues.
+  This engine does Rx only and update stream statistics accordingly.
 
 * ``--eth-link-speed``
-- 
2.47.3

^ permalink raw reply related	[flat|nested] 22+ messages in thread
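The release path added in v2 can also be modeled as standalone C. This is a hedged sketch under assumed names: `switch_info` and `MAX_PORTS` are stand-ins for the rte_eth_dev_info switch info and `RTE_MAX_ETHPORTS`, and instead of walking live ports with `RTE_ETH_FOREACH_DEV` as the real patch does, the model receives the list of still-present ports as an argument. A slot is freed only when no remaining port matches its (domain_id, rx_domain) pair.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define MAX_PORTS 4 /* stand-in for RTE_MAX_ETHPORTS */

/* Toy stand-in for the switch info carried in struct rte_eth_dev_info. */
struct switch_info {
	uint16_t domain_id;
	uint16_t rx_domain;
};

struct share_group_slot {
	uint16_t domain_id;
	uint16_t rx_domain;
	uint16_t share_group; /* 0 means the slot is free */
};

static struct share_group_slot slots[MAX_PORTS];

/*
 * Free one slot if no remaining port uses its (domain_id, rx_domain)
 * pair; a model of v2's try_release_share_group().
 */
static void
try_release_share_group(struct share_group_slot *slot,
			const struct switch_info *active, size_t n_active)
{
	size_t i;

	for (i = 0; i < n_active; i++) {
		if (active[i].domain_id == slot->domain_id &&
		    active[i].rx_domain == slot->rx_domain)
			return; /* group still in use, keep the slot */
	}
	slot->share_group = 0;
	slot->domain_id = 0;
	slot->rx_domain = 0;
}

/* Sweep all used slots, releasing any whose group lost its last port. */
static void
try_release_share_groups(const struct switch_info *active, size_t n_active)
{
	size_t i;

	for (i = 0; i < MAX_PORTS; i++)
		if (slots[i].share_group > 0)
			try_release_share_group(&slots[i], active, n_active);
}
```

Because a freed slot becomes the first free slot again, its group index can be handed out to the next new domain pair, which keeps the table bounded across hotplug cycles.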
* Re: [PATCH v2 1/2] app/testpmd: assign share group dynamically
  2026-03-24 16:56 ` [PATCH v2 1/2] " Dariusz Sosnowski
@ 2026-03-25 16:49   ` Stephen Hemminger
  2026-03-25 18:06     ` Dariusz Sosnowski
  2026-03-25 16:50   ` Stephen Hemminger
  1 sibling, 1 reply; 22+ messages in thread
From: Stephen Hemminger @ 2026-03-25 16:49 UTC (permalink / raw)
  To: Dariusz Sosnowski
  Cc: Aman Singh, dev, Thomas Monjalon, Raslan Darawsheh,
	Adrian Schollmeyer

On Tue, 24 Mar 2026 17:56:56 +0100
Dariusz Sosnowski <dsosnowski@nvidia.com> wrote:

> Testpmd exposes "--rxq-share=[N]" parameter which controls
> sharing Rx queues. Before this patch logic was that either:
> 
> - all queues were assigned to the same share group
>   (when N was not passed),
> - or ports were grouped in subsets of N ports,
>   each subset got different share group index.
> 
> 2nd option did not work well with dynamic representor probing,
> where new representors would be assigned to new share group.
> 
> This patch changes the logic in testpmd to dynamically
> assign share group index. Each unique switch and Rx domain
> will get different share group.
> 
> Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
> ---

AI review found that the option definition still expects arg.

Warning: Option definition still uses OPTIONAL_ARG but argument is now
ignored.
The --rxq-share parameter definition at line 354 in parameters.c still
uses OPTIONAL_ARG(TESTPMD_OPT_RXQ_SHARE) (which maps to
optional_argument in getopt), but the parsing now unconditionally sets
rxq_share = 1 and ignores optarg. If a user passes --rxq-share=5, the
value is silently discarded. The option definition should be changed to
NO_ARG(TESTPMD_OPT_RXQ_SHARE) to match the new behavior, and the
documentation now correctly documents it as a bare flag.

Warning: share_group_slots entries are never cleared on port removal.
The share_group_slots[] array grows as new (domain_id, rx_domain) pairs
are seen, but entries are never removed when ports are hot-unplugged or
closed. Over many hotplug cycles with distinct domain IDs, slots could
fill up. In practice this is bounded by RTE_MAX_ETHPORTS (default 32)
so it is unlikely to be a real problem, but an RTE_ASSERT will fire if
it does overflow — worth a comment noting this limitation.

Info: Documentation grammar nit.
In run_app.rst, "This engine does Rx only and update stream statistics
accordingly" — "update" should be "updates".

^ permalink raw reply	[flat|nested] 22+ messages in thread
* RE: [PATCH v2 1/2] app/testpmd: assign share group dynamically
  2026-03-25 16:49     ` Stephen Hemminger
@ 2026-03-25 18:06       ` Dariusz Sosnowski
  0 siblings, 0 replies; 22+ messages in thread
From: Dariusz Sosnowski @ 2026-03-25 18:06 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: Aman Singh, dev@dpdk.org, NBU-Contact-Thomas Monjalon (EXTERNAL),
	Raslan Darawsheh, Adrian Schollmeyer

> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Wednesday, March 25, 2026 5:49 PM
> To: Dariusz Sosnowski <dsosnowski@nvidia.com>
> Cc: Aman Singh <aman.deep.singh@intel.com>; dev@dpdk.org; NBU-
> Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>; Raslan
> Darawsheh <rasland@nvidia.com>; Adrian Schollmeyer
> <a.schollmeyer@syseleven.de>
> Subject: Re: [PATCH v2 1/2] app/testpmd: assign share group dynamically
> 
> On Tue, 24 Mar 2026 17:56:56 +0100
> Dariusz Sosnowski <dsosnowski@nvidia.com> wrote:
> 
> > Testpmd exposes "--rxq-share=[N]" parameter which controls sharing Rx
> > queues. Before this patch logic was that either:
> >
> > - all queues were assigned to the same share group
> >   (when N was not passed),
> > - or ports were grouped in subsets of N ports,
> >   each subset got different share group index.
> >
> > 2nd option did not work well with dynamic representor probing, where
> > new representors would be assigned to new share group.
> >
> > This patch changes the logic in testpmd to dynamically assign share
> > group index. Each unique switch and Rx domain will get different share
> > group.
> >
> > Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
> > ---
> 
> AI review found that the option definition still expects arg.
> 
> 
> Warning: Option definition still uses OPTIONAL_ARG but argument is now
> ignored.
> The --rxq-share parameter definition at line 354 in parameters.c still uses
> OPTIONAL_ARG(TESTPMD_OPT_RXQ_SHARE) (which maps to
> optional_argument in getopt), but the parsing now unconditionally sets
> rxq_share = 1 and ignores optarg. If a user passes --rxq-share=5, the value is
> silently discarded. The option definition should be changed to
> NO_ARG(TESTPMD_OPT_RXQ_SHARE) to match the new behavior, and the
> documentation now correctly documents it as a bare flag.

Fixed in v3.

> 
> Warning: share_group_slots entries are never cleared on port removal.
> The share_group_slots[] array grows as new (domain_id, rx_domain) pairs
> are seen, but entries are never removed when ports are hot-unplugged or
> closed. Over many hotplug cycles with distinct domain IDs, slots could fill up.
> In practice this is bounded by RTE_MAX_ETHPORTS (default 32) so it is
> unlikely to be a real problem, but an RTE_ASSERT will fire if it does overflow
> — worth a comment noting this limitation.

False positive: This was already handled in v2.
Whenever ports are removed, any unused share group is cleared
through try_release_share_groups().

> 
> Info: Documentation grammar nit.
> In run_app.rst, "This engine does Rx only and update stream statistics
> accordingly" — "update" should be "updates".

Fixed in v3.

Best regards,
Dariusz Sosnowski

^ permalink raw reply	[flat|nested] 22+ messages in thread
* Re: [PATCH v2 1/2] app/testpmd: assign share group dynamically
  2026-03-24 16:56 ` [PATCH v2 1/2] " Dariusz Sosnowski
  2026-03-25 16:49   ` Stephen Hemminger
@ 2026-03-25 16:50   ` Stephen Hemminger
  2026-03-25 18:12     ` Dariusz Sosnowski
  1 sibling, 1 reply; 22+ messages in thread
From: Stephen Hemminger @ 2026-03-25 16:50 UTC (permalink / raw)
  To: Dariusz Sosnowski
  Cc: Aman Singh, dev, Thomas Monjalon, Raslan Darawsheh,
	Adrian Schollmeyer

On Tue, 24 Mar 2026 17:56:56 +0100
Dariusz Sosnowski <dsosnowski@nvidia.com> wrote:

> +	/*
> +	 * testpmd assigns all queues on a given port to single share group.
> +	 * There are RTE_MAX_ETHPORTS share group slots,
> +	 * so at least one should always be available.
> +	 */
> +	RTE_ASSERT(first_free < RTE_DIM(share_group_slots));
> +

Since RTE_ASSERT is compiled away in normal builds, this is a noop.
Please use a regular if statement and error handling.

^ permalink raw reply	[flat|nested] 22+ messages in thread
* RE: [PATCH v2 1/2] app/testpmd: assign share group dynamically
  2026-03-25 16:50   ` Stephen Hemminger
@ 2026-03-25 18:12     ` Dariusz Sosnowski
  0 siblings, 0 replies; 22+ messages in thread
From: Dariusz Sosnowski @ 2026-03-25 18:12 UTC (permalink / raw)
  To: Stephen Hemminger
  Cc: Aman Singh, dev@dpdk.org, NBU-Contact-Thomas Monjalon (EXTERNAL),
	Raslan Darawsheh, Adrian Schollmeyer

> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Wednesday, March 25, 2026 5:51 PM
> To: Dariusz Sosnowski <dsosnowski@nvidia.com>
> Cc: Aman Singh <aman.deep.singh@intel.com>; dev@dpdk.org; NBU-
> Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>; Raslan
> Darawsheh <rasland@nvidia.com>; Adrian Schollmeyer
> <a.schollmeyer@syseleven.de>
> Subject: Re: [PATCH v2 1/2] app/testpmd: assign share group dynamically
> 
> External email: Use caution opening links or attachments
> 
> 
> On Tue, 24 Mar 2026 17:56:56 +0100
> Dariusz Sosnowski <dsosnowski@nvidia.com> wrote:
> 
> > +	/*
> > +	 * testpmd assigns all queues on a given port to single share group.
> > +	 * There are RTE_MAX_ETHPORTS share group slots,
> > +	 * so at least one should always be available.
> > +	 */
> > +	RTE_ASSERT(first_free < RTE_DIM(share_group_slots));
> > +
> 
> Since RTE_ASSERT is compiled away in normal builds, this is a noop.
> Please use a regular if statement and error handling.

Fixed in v3.

Best regards,
Dariusz Sosnowski

^ permalink raw reply	[flat|nested] 22+ messages in thread
* [PATCH v2 2/2] app/testpmd: revert switch domain mismatch check
  2026-03-24 16:56 ` [PATCH v2 0/2] app/testpmd: assign share group dynamically Dariusz Sosnowski
  2026-03-24 16:56 ` [PATCH v2 1/2] " Dariusz Sosnowski
@ 2026-03-24 16:56 ` Dariusz Sosnowski
  2026-03-25 18:02 ` [PATCH v3 0/2] app/testpmd: assign share group dynamically Dariusz Sosnowski
  2 siblings, 0 replies; 22+ messages in thread
From: Dariusz Sosnowski @ 2026-03-24 16:56 UTC (permalink / raw)
To: Aman Singh
Cc: dev, Thomas Monjalon, Raslan Darawsheh, Stephen Hemminger,
	Adrian Schollmeyer

This reverts commit 8ebba91086f47c90e398d7775921e05659c0d62f.

Previous patch changed --rxq-share parameter logic.
If this parameter is passed, then unique share group index
per switch and Rx domain will be assigned to each shared Rx queue.
As a result the check for domain mismatch is not needed.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 app/test-pmd/testpmd.c | 40 ----------------------------------------
 1 file changed, 40 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 81b220466f..980f41d25c 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2753,46 +2753,6 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	uint32_t prev_hdrs = 0;
 	int ret;
 
-	if (rx_conf->share_group > 0) {
-		/* Check required switch info for Rx queue sharing */
-		const uint16_t dom_id = ports[port_id].dev_info.switch_info.domain_id;
-		const uint16_t rx_dom = ports[port_id].dev_info.switch_info.rx_domain;
-
-		uint16_t pid;
-		const char *mismatch = NULL;
-		uint16_t mismatch_pid = (uint16_t)RTE_PORT_ALL;
-
-		RTE_ETH_FOREACH_DEV(pid) {
-			struct rte_port *o_port = &ports[pid];
-			const uint16_t o_dom_id = o_port->dev_info.switch_info.domain_id;
-			const uint16_t o_rx_dom = o_port->dev_info.switch_info.rx_domain;
-
-			for (uint16_t q = 0; q < nb_rxq; ++q) {
-				struct port_rxqueue *rxq = &o_port->rxq[q];
-				if (rxq->conf.share_group != rx_conf->share_group ||
-				    rxq->conf.share_qid != rx_conf->share_qid)
-					continue;
-				if (o_dom_id == dom_id && o_rx_dom == rx_dom)
-					continue;
-
-				if (o_dom_id != dom_id)
-					mismatch = "switch domain";
-				else if (o_rx_dom != rx_dom)
-					mismatch = "rx domain";
-
-				mismatch_pid = pid;
-				break;
-			}
-		}
-
-		if (mismatch) {
-			fprintf(stderr,
-				"Invalid shared rxq config: %s mismatch between ports %u and %u\n",
-				mismatch, port_id, mismatch_pid);
-			return -EINVAL;
-		}
-	}
-
 	if ((rx_pkt_nb_segs > 1) &&
 	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
 		/* multi-segment configuration */
-- 
2.47.3

^ permalink raw reply related	[flat|nested] 22+ messages in thread
* [PATCH v3 0/2] app/testpmd: assign share group dynamically
  2026-03-24 16:56 ` [PATCH v2 0/2] app/testpmd: assign share group dynamically Dariusz Sosnowski
  2026-03-24 16:56 ` [PATCH v2 1/2] " Dariusz Sosnowski
  2026-03-24 16:56 ` [PATCH v2 2/2] app/testpmd: revert switch domain mismatch check Dariusz Sosnowski
@ 2026-03-25 18:02 ` Dariusz Sosnowski
  2026-03-25 18:02   ` [PATCH v3 1/2] " Dariusz Sosnowski
  ` (2 more replies)
  2 siblings, 3 replies; 22+ messages in thread
From: Dariusz Sosnowski @ 2026-03-25 18:02 UTC (permalink / raw)
To: Aman Singh
Cc: dev, Thomas Monjalon, Raslan Darawsheh, Stephen Hemminger,
	Adrian Schollmeyer

Our internal regression tests have flagged issues with shared Rx queues.
Specifically, issues with domain mismatches:

    Invalid shared rxq config: switch domain mismatch ports 0 and 3

These started appearing after commit [1] which added checks
for globally unique share group indexes.
This could be worked around with --rxq-share=N option,
but it does not allow proper testing of all use cases [2].

This patchset addresses that by changing behavior of --rxq-share parameter.
Instead of relying on user to select proper parameter value,
testpmd will dynamically assign globally unique share group index
to each unique switch and Rx domain.

[1]: 8ebba91086f4 ("app/testpmd: fail on shared Rx queue switch mismatch")
[2]: https://inbox.dpdk.org/dev/yotjxacqrodttraqrr3r6ftut4cty66g6cjnr5ughswtatapgh@gqqkftskp3qq/

v3:
- Use short circuit return in assign_share_group().
- Replace assert in assign_share_group() with error checking.
- Do not require optional argument with --rxq-share option.
- Fix typo in docs: update -> updates

v2:
- Add releasing share groups when ports are closed.
- Add static to share_group_slots array definition.
- Remove double empty line in revert commit.

Dariusz Sosnowski (2):
  app/testpmd: assign share group dynamically
  app/testpmd: revert switch domain mismatch check

 app/test-pmd/parameters.c             |  14 +--
 app/test-pmd/testpmd.c                | 124 +++++++++++++++++---------
 app/test-pmd/testpmd.h                |   2 +-
 doc/guides/testpmd_app_ug/run_app.rst |  10 +--
 4 files changed, 89 insertions(+), 61 deletions(-)

-- 
2.47.3

^ permalink raw reply	[flat|nested] 22+ messages in thread
* [PATCH v3 1/2] app/testpmd: assign share group dynamically
  2026-03-25 18:02 ` [PATCH v3 0/2] app/testpmd: assign share group dynamically Dariusz Sosnowski
@ 2026-03-25 18:02 ` Dariusz Sosnowski
  2026-03-25 18:51   ` Stephen Hemminger
  2026-03-25 18:02 ` [PATCH v3 2/2] app/testpmd: revert switch domain mismatch check Dariusz Sosnowski
  2026-03-25 19:09 ` [PATCH v4 0/2] app/testpmd: assign share group dynamically Dariusz Sosnowski
  2 siblings, 1 reply; 22+ messages in thread
From: Dariusz Sosnowski @ 2026-03-25 18:02 UTC (permalink / raw)
To: Aman Singh
Cc: dev, Thomas Monjalon, Raslan Darawsheh, Stephen Hemminger,
	Adrian Schollmeyer

Testpmd exposes "--rxq-share=[N]" parameter which controls sharing
Rx queues. Before this patch logic was that either:

- all queues were assigned to the same share group
  (when N was not passed),
- or ports were grouped in subsets of N ports,
  each subset got different share group index.

2nd option did not work well with dynamic representor probing,
where new representors would be assigned to new share group.

This patch changes the logic in testpmd to dynamically assign
share group index. Each unique switch and Rx domain will get
different share group.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 app/test-pmd/parameters.c             | 14 +----
 app/test-pmd/testpmd.c                | 84 +++++++++++++++++++++++++--
 app/test-pmd/testpmd.h                |  2 +-
 doc/guides/testpmd_app_ug/run_app.rst | 10 ++--
 4 files changed, 89 insertions(+), 21 deletions(-)

diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 3617860830..ecbd618f00 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -351,7 +351,7 @@ static const struct option long_options[] = {
 	NO_ARG(TESTPMD_OPT_MULTI_RX_MEMPOOL),
 	NO_ARG(TESTPMD_OPT_TXONLY_MULTI_FLOW),
 	REQUIRED_ARG(TESTPMD_OPT_TXONLY_FLOWS),
-	OPTIONAL_ARG(TESTPMD_OPT_RXQ_SHARE),
+	NO_ARG(TESTPMD_OPT_RXQ_SHARE),
 	REQUIRED_ARG(TESTPMD_OPT_ETH_LINK_SPEED),
 	NO_ARG(TESTPMD_OPT_DISABLE_LINK_CHECK),
 	NO_ARG(TESTPMD_OPT_DISABLE_DEVICE_START),
@@ -507,7 +507,7 @@ usage(char* progname)
 	printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
 	printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n");
 	printf("  --eth-link-speed: force link speed.\n");
-	printf("  --rxq-share=X: number of ports per shared Rx queue groups, defaults to UINT32_MAX (1 group)\n");
+	printf("  --rxq-share: enable Rx queue sharing per switch and Rx domain\n");
 	printf("  --disable-link-check: disable check on link status when "
 	       "starting/stopping ports.\n");
 	printf("  --disable-device-start: do not automatically start port\n");
@@ -1579,15 +1579,7 @@ launch_args_parse(int argc, char** argv)
 				rte_exit(EXIT_FAILURE, "txonly-flows must be >= 1 and <= 64\n");
 			break;
 		case TESTPMD_OPT_RXQ_SHARE_NUM:
-			if (optarg == NULL) {
-				rxq_share = UINT32_MAX;
-			} else {
-				n = atoi(optarg);
-				if (n >= 0)
-					rxq_share = (uint32_t)n;
-				else
-					rte_exit(EXIT_FAILURE, "rxq-share must be >= 0\n");
-			}
+			rxq_share = 1;
 			break;
 		case TESTPMD_OPT_NO_FLUSH_RX_NUM:
 			no_flush_rx = 1;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index aad880aa34..a70efbb03f 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -545,9 +545,17 @@ uint8_t record_core_cycles;
 uint8_t record_burst_stats;
 
 /*
- * Number of ports per shared Rx queue group, 0 disable.
+ * Enable Rx queue sharing between ports in the same switch and Rx domain.
  */
-uint32_t rxq_share;
+uint8_t rxq_share;
+
+struct share_group_slot {
+	uint16_t domain_id;
+	uint16_t rx_domain;
+	uint16_t share_group;
+};
+
+static struct share_group_slot share_group_slots[RTE_MAX_ETHPORTS];
 
 unsigned int num_sockets = 0;
 unsigned int socket_ids[RTE_MAX_NUMA_NODES];
@@ -586,6 +594,67 @@ int proc_id;
  */
 unsigned int num_procs = 1;
 
+static int
+assign_share_group(struct rte_eth_dev_info *dev_info, uint16_t *share_group)
+{
+	unsigned int first_free = RTE_DIM(share_group_slots);
+	unsigned int i;
+
+	for (i = 0; i < RTE_DIM(share_group_slots); i++) {
+		if (share_group_slots[i].share_group > 0) {
+			if (dev_info->switch_info.domain_id == share_group_slots[i].domain_id &&
+			    dev_info->switch_info.rx_domain == share_group_slots[i].rx_domain) {
+				*share_group = share_group_slots[i].share_group;
+				return 0;
+			}
+		} else if (first_free == RTE_DIM(share_group_slots)) {
+			first_free = i;
+		}
+	}
+
+	if (first_free == RTE_DIM(share_group_slots))
+		return -ENOSPC;
+
+	share_group_slots[first_free].domain_id = dev_info->switch_info.domain_id;
+	share_group_slots[first_free].rx_domain = dev_info->switch_info.rx_domain;
+	share_group_slots[first_free].share_group = first_free + 1;
+	*share_group = share_group_slots[first_free].share_group;
+
+	return 0;
+}
+
+static void
+try_release_share_group(struct share_group_slot *slot)
+{
+	uint16_t pi;
+	bool group_not_used = true;
+
+	/* Check if any port still uses this share group. */
+	RTE_ETH_FOREACH_DEV(pi) {
+		if (ports[pi].dev_info.switch_info.domain_id == slot->domain_id &&
+		    ports[pi].dev_info.switch_info.rx_domain == slot->rx_domain) {
+			group_not_used = false;
+			break;
+		}
+	}
+	if (group_not_used) {
+		slot->share_group = 0;
+		slot->domain_id = 0;
+		slot->rx_domain = 0;
+	}
+}
+
+static void
+try_release_share_groups(void)
+{
+	unsigned int i;
+
+	/* Try release each used share group. */
+	for (i = 0; i < RTE_DIM(share_group_slots); i++)
+		if (share_group_slots[i].share_group > 0)
+			try_release_share_group(&share_group_slots[i]);
+}
+
 static void
 eth_rx_metadata_negotiate_mp(uint16_t port_id)
 {
@@ -3315,6 +3384,7 @@ remove_invalid_ports(void)
 	remove_invalid_ports_in(ports_ids, &nb_ports);
 	remove_invalid_ports_in(fwd_ports_ids, &nb_fwd_ports);
 	nb_cfg_ports = nb_fwd_ports;
+	try_release_share_groups();
 }
 
 static void
@@ -4097,8 +4167,14 @@ rxtx_port_config(portid_t pid)
 		if (rxq_share > 0 &&
 		    (port->dev_info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE)) {
 			/* Non-zero share group to enable RxQ share. */
-			port->rxq[qid].conf.share_group = pid / rxq_share + 1;
-			port->rxq[qid].conf.share_qid = qid; /* Equal mapping. */
+			uint16_t share_group;
+
+			if (assign_share_group(&port->dev_info, &share_group) == 0) {
+				port->rxq[qid].conf.share_group = share_group;
+				port->rxq[qid].conf.share_qid = qid; /* Equal mapping. */
+			} else {
+				TESTPMD_LOG(INFO, "port %u: failed assigning share group\n", pid);
+			}
 		}
 
 		if (offloads != 0)
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index af185540c3..9b60ebd7fc 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -675,7 +675,7 @@ extern enum tx_pkt_split tx_pkt_split;
 extern uint8_t txonly_multi_flow;
 extern uint16_t txonly_flows;
 
-extern uint32_t rxq_share;
+extern uint8_t rxq_share;
 
 extern uint16_t nb_pkt_per_burst;
 extern uint16_t nb_pkt_flowgen_clones;
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index ae3ef8cdf8..d0a05d6311 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -393,13 +393,13 @@ The command line options are:
    Valid range is 1 to 64. Default is 64.
    Reducing this value limits the number of unique UDP source ports generated.
 
-*   ``--rxq-share=[X]``
+*   ``--rxq-share``
 
     Create queues in shared Rx queue mode if device supports.
-    Shared Rx queues are grouped per X ports. X defaults to UINT32_MAX,
-    implies all ports join share group 1. Forwarding engine "shared-rxq"
-    should be used for shared Rx queues. This engine does Rx only and
-    update stream statistics accordingly.
+    Testpmd will assign unique share group index per each
+    unique switch and Rx domain.
+    Forwarding engine "shared-rxq" should be used for shared Rx queues.
+    This engine does Rx only and updates stream statistics accordingly.
 
 *   ``--eth-link-speed``
-- 
2.47.3

^ permalink raw reply related	[flat|nested] 22+ messages in thread
* Re: [PATCH v3 1/2] app/testpmd: assign share group dynamically
  2026-03-25 18:02 ` [PATCH v3 1/2] " Dariusz Sosnowski
@ 2026-03-25 18:51   ` Stephen Hemminger
  2026-03-25 19:11     ` Dariusz Sosnowski
  0 siblings, 1 reply; 22+ messages in thread
From: Stephen Hemminger @ 2026-03-25 18:51 UTC (permalink / raw)
To: Dariusz Sosnowski
Cc: Aman Singh, dev, Thomas Monjalon, Raslan Darawsheh,
	Adrian Schollmeyer

On Wed, 25 Mar 2026 19:02:53 +0100
Dariusz Sosnowski <dsosnowski@nvidia.com> wrote:

> +static void
> +try_release_share_group(struct share_group_slot *slot)
> +{
> +	uint16_t pi;
> +	bool group_not_used = true;
> +
> +	/* Check if any port still uses this share group. */
> +	RTE_ETH_FOREACH_DEV(pi) {
> +		if (ports[pi].dev_info.switch_info.domain_id == slot->domain_id &&
> +		    ports[pi].dev_info.switch_info.rx_domain == slot->rx_domain) {
> +			group_not_used = false;
> +			break;
> +		}
> +	}
> +	if (group_not_used) {
> +		slot->share_group = 0;
> +		slot->domain_id = 0;
> +		slot->rx_domain = 0;
> +	}
> +}

Just add a return and skip the group_not_used boolean.

^ permalink raw reply	[flat|nested] 22+ messages in thread
* RE: [PATCH v3 1/2] app/testpmd: assign share group dynamically
  2026-03-25 18:51   ` Stephen Hemminger
@ 2026-03-25 19:11     ` Dariusz Sosnowski
  0 siblings, 0 replies; 22+ messages in thread
From: Dariusz Sosnowski @ 2026-03-25 19:11 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Aman Singh, dev@dpdk.org, NBU-Contact-Thomas Monjalon (EXTERNAL),
	Raslan Darawsheh, Adrian Schollmeyer

> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Wednesday, March 25, 2026 7:51 PM
> To: Dariusz Sosnowski <dsosnowski@nvidia.com>
> Cc: Aman Singh <aman.deep.singh@intel.com>; dev@dpdk.org; NBU-
> Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>; Raslan
> Darawsheh <rasland@nvidia.com>; Adrian Schollmeyer
> <a.schollmeyer@syseleven.de>
> Subject: Re: [PATCH v3 1/2] app/testpmd: assign share group dynamically
>
> On Wed, 25 Mar 2026 19:02:53 +0100
> Dariusz Sosnowski <dsosnowski@nvidia.com> wrote:
>
> > +static void
> > +try_release_share_group(struct share_group_slot *slot) {
> > +	uint16_t pi;
> > +	bool group_not_used = true;
> > +
> > +	/* Check if any port still uses this share group. */
> > +	RTE_ETH_FOREACH_DEV(pi) {
> > +		if (ports[pi].dev_info.switch_info.domain_id == slot->domain_id
> &&
> > +		    ports[pi].dev_info.switch_info.rx_domain == slot->rx_domain) {
> > +			group_not_used = false;
> > +			break;
> > +		}
> > +	}
> > +	if (group_not_used) {
> > +		slot->share_group = 0;
> > +		slot->domain_id = 0;
> > +		slot->rx_domain = 0;
> > +	}
> > +}
>
> Just add a return and skip the group_not_used boolean.

Fixed in v4

Best regards,
Dariusz Sosnowski

^ permalink raw reply	[flat|nested] 22+ messages in thread
* [PATCH v3 2/2] app/testpmd: revert switch domain mismatch check
  2026-03-25 18:02 ` [PATCH v3 0/2] app/testpmd: assign share group dynamically Dariusz Sosnowski
  2026-03-25 18:02 ` [PATCH v3 1/2] " Dariusz Sosnowski
@ 2026-03-25 18:02 ` Dariusz Sosnowski
  2026-03-25 19:09 ` [PATCH v4 0/2] app/testpmd: assign share group dynamically Dariusz Sosnowski
  2 siblings, 0 replies; 22+ messages in thread
From: Dariusz Sosnowski @ 2026-03-25 18:02 UTC (permalink / raw)
To: Aman Singh
Cc: dev, Thomas Monjalon, Raslan Darawsheh, Stephen Hemminger,
	Adrian Schollmeyer

This reverts commit 8ebba91086f47c90e398d7775921e05659c0d62f.

Previous patch changed --rxq-share parameter logic.
If this parameter is passed, then unique share group index
per switch and Rx domain will be assigned to each shared Rx queue.
As a result the check for domain mismatch is not needed.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 app/test-pmd/testpmd.c | 40 ----------------------------------------
 1 file changed, 40 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index a70efbb03f..20501acf9e 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2747,46 +2747,6 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	uint32_t prev_hdrs = 0;
 	int ret;
 
-	if (rx_conf->share_group > 0) {
-		/* Check required switch info for Rx queue sharing */
-		const uint16_t dom_id = ports[port_id].dev_info.switch_info.domain_id;
-		const uint16_t rx_dom = ports[port_id].dev_info.switch_info.rx_domain;
-
-		uint16_t pid;
-		const char *mismatch = NULL;
-		uint16_t mismatch_pid = (uint16_t)RTE_PORT_ALL;
-
-		RTE_ETH_FOREACH_DEV(pid) {
-			struct rte_port *o_port = &ports[pid];
-			const uint16_t o_dom_id = o_port->dev_info.switch_info.domain_id;
-			const uint16_t o_rx_dom = o_port->dev_info.switch_info.rx_domain;
-
-			for (uint16_t q = 0; q < nb_rxq; ++q) {
-				struct port_rxqueue *rxq = &o_port->rxq[q];
-				if (rxq->conf.share_group != rx_conf->share_group ||
-				    rxq->conf.share_qid != rx_conf->share_qid)
-					continue;
-				if (o_dom_id == dom_id && o_rx_dom == rx_dom)
-					continue;
-
-				if (o_dom_id != dom_id)
-					mismatch = "switch domain";
-				else if (o_rx_dom != rx_dom)
-					mismatch = "rx domain";
-
-				mismatch_pid = pid;
-				break;
-			}
-		}
-
-		if (mismatch) {
-			fprintf(stderr,
-				"Invalid shared rxq config: %s mismatch between ports %u and %u\n",
-				mismatch, port_id, mismatch_pid);
-			return -EINVAL;
-		}
-	}
-
 	if ((rx_pkt_nb_segs > 1) &&
 	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
 		/* multi-segment configuration */
-- 
2.47.3

^ permalink raw reply related	[flat|nested] 22+ messages in thread
* [PATCH v4 0/2] app/testpmd: assign share group dynamically
  2026-03-25 18:02 ` [PATCH v3 0/2] app/testpmd: assign share group dynamically Dariusz Sosnowski
  2026-03-25 18:02 ` [PATCH v3 1/2] " Dariusz Sosnowski
  2026-03-25 18:02 ` [PATCH v3 2/2] app/testpmd: revert switch domain mismatch check Dariusz Sosnowski
@ 2026-03-25 19:09 ` Dariusz Sosnowski
  2026-03-25 19:09   ` [PATCH v4 1/2] " Dariusz Sosnowski
  ` (2 more replies)
  2 siblings, 3 replies; 22+ messages in thread
From: Dariusz Sosnowski @ 2026-03-25 19:09 UTC (permalink / raw)
To: Aman Singh
Cc: dev, Thomas Monjalon, Raslan Darawsheh, Stephen Hemminger,
	Adrian Schollmeyer

Our internal regression tests have flagged issues with shared Rx queues.
Specifically, issues with domain mismatches:

    Invalid shared rxq config: switch domain mismatch ports 0 and 3

These started appearing after commit [1] which added checks
for globally unique share group indexes.
This could be worked around with --rxq-share=N option,
but it does not allow proper testing of all use cases [2].

This patchset addresses that by changing behavior of --rxq-share parameter.
Instead of relying on user to select proper parameter value,
testpmd will dynamically assign globally unique share group index
to each unique switch and Rx domain.

[1]: 8ebba91086f4 ("app/testpmd: fail on shared Rx queue switch mismatch")
[2]: https://inbox.dpdk.org/dev/yotjxacqrodttraqrr3r6ftut4cty66g6cjnr5ughswtatapgh@gqqkftskp3qq/

v4:
- Use short circuit return try_release_share_group().

v3:
- Use short circuit return in assign_share_group().
- Replace assert in assign_share_group() with error checking.
- Do not require optional argument with --rxq-share option.
- Fix typo in docs: update -> updates

v2:
- Add releasing share groups when ports are closed.
- Add static to share_group_slots array definition.
- Remove double empty line in revert commit.

Dariusz Sosnowski (2):
  app/testpmd: assign share group dynamically
  app/testpmd: revert switch domain mismatch check

 app/test-pmd/parameters.c             |  14 +--
 app/test-pmd/testpmd.c                | 121 ++++++++++++++++----------
 app/test-pmd/testpmd.h                |   2 +-
 doc/guides/testpmd_app_ug/run_app.rst |  10 +--
 4 files changed, 86 insertions(+), 61 deletions(-)

-- 
2.47.3

^ permalink raw reply	[flat|nested] 22+ messages in thread
* [PATCH v4 1/2] app/testpmd: assign share group dynamically
  2026-03-25 19:09 ` [PATCH v4 0/2] app/testpmd: assign share group dynamically Dariusz Sosnowski
@ 2026-03-25 19:09 ` Dariusz Sosnowski
  2026-03-25 19:09 ` [PATCH v4 2/2] app/testpmd: revert switch domain mismatch check Dariusz Sosnowski
  2026-03-25 20:16 ` [PATCH v4 0/2] app/testpmd: assign share group dynamically Stephen Hemminger
  2 siblings, 0 replies; 22+ messages in thread
From: Dariusz Sosnowski @ 2026-03-25 19:09 UTC (permalink / raw)
To: Aman Singh
Cc: dev, Thomas Monjalon, Raslan Darawsheh, Stephen Hemminger,
	Adrian Schollmeyer

Testpmd exposes "--rxq-share=[N]" parameter which controls sharing
Rx queues. Before this patch logic was that either:

- all queues were assigned to the same share group
  (when N was not passed),
- or ports were grouped in subsets of N ports,
  each subset got different share group index.

2nd option did not work well with dynamic representor probing,
where new representors would be assigned to new share group.

This patch changes the logic in testpmd to dynamically assign
share group index. Each unique switch and Rx domain will get
different share group.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 app/test-pmd/parameters.c             | 14 +----
 app/test-pmd/testpmd.c                | 81 +++++++++++++++++++++++++--
 app/test-pmd/testpmd.h                |  2 +-
 doc/guides/testpmd_app_ug/run_app.rst | 10 ++--
 4 files changed, 86 insertions(+), 21 deletions(-)

diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 3617860830..ecbd618f00 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -351,7 +351,7 @@ static const struct option long_options[] = {
 	NO_ARG(TESTPMD_OPT_MULTI_RX_MEMPOOL),
 	NO_ARG(TESTPMD_OPT_TXONLY_MULTI_FLOW),
 	REQUIRED_ARG(TESTPMD_OPT_TXONLY_FLOWS),
-	OPTIONAL_ARG(TESTPMD_OPT_RXQ_SHARE),
+	NO_ARG(TESTPMD_OPT_RXQ_SHARE),
 	REQUIRED_ARG(TESTPMD_OPT_ETH_LINK_SPEED),
 	NO_ARG(TESTPMD_OPT_DISABLE_LINK_CHECK),
 	NO_ARG(TESTPMD_OPT_DISABLE_DEVICE_START),
@@ -507,7 +507,7 @@ usage(char* progname)
 	printf("  --tx-ip=src,dst: IP addresses in Tx-only mode\n");
 	printf("  --tx-udp=src[,dst]: UDP ports in Tx-only mode\n");
 	printf("  --eth-link-speed: force link speed.\n");
-	printf("  --rxq-share=X: number of ports per shared Rx queue groups, defaults to UINT32_MAX (1 group)\n");
+	printf("  --rxq-share: enable Rx queue sharing per switch and Rx domain\n");
 	printf("  --disable-link-check: disable check on link status when "
 	       "starting/stopping ports.\n");
 	printf("  --disable-device-start: do not automatically start port\n");
@@ -1579,15 +1579,7 @@ launch_args_parse(int argc, char** argv)
 				rte_exit(EXIT_FAILURE, "txonly-flows must be >= 1 and <= 64\n");
 			break;
 		case TESTPMD_OPT_RXQ_SHARE_NUM:
-			if (optarg == NULL) {
-				rxq_share = UINT32_MAX;
-			} else {
-				n = atoi(optarg);
-				if (n >= 0)
-					rxq_share = (uint32_t)n;
-				else
-					rte_exit(EXIT_FAILURE, "rxq-share must be >= 0\n");
-			}
+			rxq_share = 1;
 			break;
 		case TESTPMD_OPT_NO_FLUSH_RX_NUM:
 			no_flush_rx = 1;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index aad880aa34..e655ddd247 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -545,9 +545,17 @@ uint8_t record_core_cycles;
 uint8_t record_burst_stats;
 
 /*
- * Number of ports per shared Rx queue group, 0 disable.
+ * Enable Rx queue sharing between ports in the same switch and Rx domain.
  */
-uint32_t rxq_share;
+uint8_t rxq_share;
+
+struct share_group_slot {
+	uint16_t domain_id;
+	uint16_t rx_domain;
+	uint16_t share_group;
+};
+
+static struct share_group_slot share_group_slots[RTE_MAX_ETHPORTS];
 
 unsigned int num_sockets = 0;
 unsigned int socket_ids[RTE_MAX_NUMA_NODES];
@@ -586,6 +594,64 @@ int proc_id;
  */
 unsigned int num_procs = 1;
 
+static int
+assign_share_group(struct rte_eth_dev_info *dev_info, uint16_t *share_group)
+{
+	unsigned int first_free = RTE_DIM(share_group_slots);
+	unsigned int i;
+
+	for (i = 0; i < RTE_DIM(share_group_slots); i++) {
+		if (share_group_slots[i].share_group > 0) {
+			if (dev_info->switch_info.domain_id == share_group_slots[i].domain_id &&
+			    dev_info->switch_info.rx_domain == share_group_slots[i].rx_domain) {
+				*share_group = share_group_slots[i].share_group;
+				return 0;
+			}
+		} else if (first_free == RTE_DIM(share_group_slots)) {
+			first_free = i;
+		}
+	}
+
+	if (first_free == RTE_DIM(share_group_slots))
+		return -ENOSPC;
+
+	share_group_slots[first_free].domain_id = dev_info->switch_info.domain_id;
+	share_group_slots[first_free].rx_domain = dev_info->switch_info.rx_domain;
+	share_group_slots[first_free].share_group = first_free + 1;
+	*share_group = share_group_slots[first_free].share_group;
+
+	return 0;
+}
+
+static void
+try_release_share_group(struct share_group_slot *slot)
+{
+	uint16_t pi;
+
+	/* Check if any port still uses this share group. */
+	RTE_ETH_FOREACH_DEV(pi) {
+		if (ports[pi].dev_info.switch_info.domain_id == slot->domain_id &&
+		    ports[pi].dev_info.switch_info.rx_domain == slot->rx_domain) {
+			return;
+		}
+	}
+
+	slot->share_group = 0;
+	slot->domain_id = 0;
+	slot->rx_domain = 0;
+}
+
+static void
+try_release_share_groups(void)
+{
+	unsigned int i;
+
+	/* Try release each used share group. */
+	for (i = 0; i < RTE_DIM(share_group_slots); i++)
+		if (share_group_slots[i].share_group > 0)
+			try_release_share_group(&share_group_slots[i]);
+}
+
 static void
 eth_rx_metadata_negotiate_mp(uint16_t port_id)
 {
@@ -3315,6 +3381,7 @@ remove_invalid_ports(void)
 	remove_invalid_ports_in(ports_ids, &nb_ports);
 	remove_invalid_ports_in(fwd_ports_ids, &nb_fwd_ports);
 	nb_cfg_ports = nb_fwd_ports;
+	try_release_share_groups();
 }
 
 static void
@@ -4097,8 +4164,14 @@ rxtx_port_config(portid_t pid)
 		if (rxq_share > 0 &&
 		    (port->dev_info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE)) {
 			/* Non-zero share group to enable RxQ share. */
-			port->rxq[qid].conf.share_group = pid / rxq_share + 1;
-			port->rxq[qid].conf.share_qid = qid; /* Equal mapping. */
+			uint16_t share_group;
+
+			if (assign_share_group(&port->dev_info, &share_group) == 0) {
+				port->rxq[qid].conf.share_group = share_group;
+				port->rxq[qid].conf.share_qid = qid; /* Equal mapping. */
+			} else {
+				TESTPMD_LOG(INFO, "port %u: failed assigning share group\n", pid);
+			}
 		}
 
 		if (offloads != 0)
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index af185540c3..9b60ebd7fc 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -675,7 +675,7 @@ extern enum tx_pkt_split tx_pkt_split;
 extern uint8_t txonly_multi_flow;
 extern uint16_t txonly_flows;
 
-extern uint32_t rxq_share;
+extern uint8_t rxq_share;
 
 extern uint16_t nb_pkt_per_burst;
 extern uint16_t nb_pkt_flowgen_clones;
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index ae3ef8cdf8..d0a05d6311 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -393,13 +393,13 @@ The command line options are:
    Valid range is 1 to 64. Default is 64.
    Reducing this value limits the number of unique UDP source ports generated.
 
-*   ``--rxq-share=[X]``
+*   ``--rxq-share``
 
    Create queues in shared Rx queue mode if device supports.
-   Shared Rx queues are grouped per X ports. X defaults to UINT32_MAX,
-   implies all ports join share group 1. Forwarding engine "shared-rxq"
-   should be used for shared Rx queues. This engine does Rx only and
-   update stream statistics accordingly.
+   Testpmd will assign unique share group index per each
+   unique switch and Rx domain.
+   Forwarding engine "shared-rxq" should be used for shared Rx queues.
+   This engine does Rx only and updates stream statistics accordingly.
 
 *   ``--eth-link-speed``
-- 
2.47.3

^ permalink raw reply related	[flat|nested] 22+ messages in thread
* [PATCH v4 2/2] app/testpmd: revert switch domain mismatch check
  2026-03-25 19:09 ` [PATCH v4 0/2] app/testpmd: assign share group dynamically Dariusz Sosnowski
  2026-03-25 19:09 ` [PATCH v4 1/2] " Dariusz Sosnowski
@ 2026-03-25 19:09 ` Dariusz Sosnowski
  2026-03-25 20:16 ` [PATCH v4 0/2] app/testpmd: assign share group dynamically Stephen Hemminger
  2 siblings, 0 replies; 22+ messages in thread
From: Dariusz Sosnowski @ 2026-03-25 19:09 UTC (permalink / raw)
To: Aman Singh
Cc: dev, Thomas Monjalon, Raslan Darawsheh, Stephen Hemminger,
	Adrian Schollmeyer

This reverts commit 8ebba91086f47c90e398d7775921e05659c0d62f.

Previous patch changed --rxq-share parameter logic.
If this parameter is passed, then unique share group index
per switch and Rx domain will be assigned to each shared Rx queue.
As a result the check for domain mismatch is not needed.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 app/test-pmd/testpmd.c | 40 ----------------------------------------
 1 file changed, 40 deletions(-)

diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index e655ddd247..e2569d9e30 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2744,46 +2744,6 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	uint32_t prev_hdrs = 0;
 	int ret;
 
-	if (rx_conf->share_group > 0) {
-		/* Check required switch info for Rx queue sharing */
-		const uint16_t dom_id = ports[port_id].dev_info.switch_info.domain_id;
-		const uint16_t rx_dom = ports[port_id].dev_info.switch_info.rx_domain;
-
-		uint16_t pid;
-		const char *mismatch = NULL;
-		uint16_t mismatch_pid = (uint16_t)RTE_PORT_ALL;
-
-		RTE_ETH_FOREACH_DEV(pid) {
-			struct rte_port *o_port = &ports[pid];
-			const uint16_t o_dom_id = o_port->dev_info.switch_info.domain_id;
-			const uint16_t o_rx_dom = o_port->dev_info.switch_info.rx_domain;
-
-			for (uint16_t q = 0; q < nb_rxq; ++q) {
-				struct port_rxqueue *rxq = &o_port->rxq[q];
-				if (rxq->conf.share_group != rx_conf->share_group ||
-				    rxq->conf.share_qid != rx_conf->share_qid)
-					continue;
-				if (o_dom_id == dom_id && o_rx_dom == rx_dom)
-					continue;
-
-				if (o_dom_id != dom_id)
-					mismatch = "switch domain";
-				else if (o_rx_dom != rx_dom)
-					mismatch = "rx domain";
-
-				mismatch_pid = pid;
-				break;
-			}
-		}
-
-		if (mismatch) {
-			fprintf(stderr,
-				"Invalid shared rxq config: %s mismatch between ports %u and %u\n",
-				mismatch, port_id, mismatch_pid);
-			return -EINVAL;
-		}
-	}
-
 	if ((rx_pkt_nb_segs > 1) &&
 	    (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
 		/* multi-segment configuration */
-- 
2.47.3

^ permalink raw reply related	[flat|nested] 22+ messages in thread
* Re: [PATCH v4 0/2] app/testpmd: assign share group dynamically 2026-03-25 19:09 ` [PATCH v4 0/2] app/testpmd: assign share group dynamically Dariusz Sosnowski 2026-03-25 19:09 ` [PATCH v4 1/2] " Dariusz Sosnowski 2026-03-25 19:09 ` [PATCH v4 2/2] app/testpmd: revert switch domain mismatch check Dariusz Sosnowski @ 2026-03-25 20:16 ` Stephen Hemminger 2 siblings, 0 replies; 22+ messages in thread From: Stephen Hemminger @ 2026-03-25 20:16 UTC (permalink / raw) To: Dariusz Sosnowski Cc: Aman Singh, dev, Thomas Monjalon, Raslan Darawsheh, Adrian Schollmeyer On Wed, 25 Mar 2026 20:09:04 +0100 Dariusz Sosnowski <dsosnowski@nvidia.com> wrote: > Our internal regression tests have flagged issues with shared Rx queues. > Specifically, issues with domain mismatches: > > Invalid shared rxq config: switch domain mismatch ports 0 and 3 > > These started appearing after commit [1] which added checks > for globally unique share group indexes. > This could be worked around with --rxq-share=N option, > but it does not allow proper testing of all use cases [2]. > > This patchset addresses that by changing behavior of --rxq-share parameter. > Instead of relying on user to select proper parameter value, > testpmd will dynamically assign globally unique share group index > to each unique switch and Rx domain. > > [1]: 8ebba91086f4 ("app/testpmd: fail on shared Rx queue switch mismatch") > [2]: https://inbox.dpdk.org/dev/yotjxacqrodttraqrr3r6ftut4cty66g6cjnr5ughswtatapgh@gqqkftskp3qq/ > > v4: > - Use short circuit return try_release_share_group(). > > v3: > - Use short circuit return in assign_share_group(). > - Replace assert in assign_share_group() with error checking. > - Do not require optional argument with --rxq-share option. > - Fix typo in docs: update -> updates > > v2: > - Add releasing share groups when ports are closed. > - Add static to share_group_slots array definition. > - Remove double empty line in revert commit. 
>
> Dariusz Sosnowski (2):
>   app/testpmd: assign share group dynamically
>   app/testpmd: revert switch domain mismatch check
>
>  app/test-pmd/parameters.c             |  14 +--
>  app/test-pmd/testpmd.c                | 121 ++++++++++++++++----------
>  app/test-pmd/testpmd.h                |   2 +-
>  doc/guides/testpmd_app_ug/run_app.rst |  10 +--
>  4 files changed, 86 insertions(+), 61 deletions(-)
>
> -- 
> 2.47.3
>

Applied to next-net thanks

^ permalink raw reply	[flat|nested] 22+ messages in thread
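[Editor's note] The v2–v4 changelog quoted above mentions releasing share groups when ports are closed, with a short-circuit return in try_release_share_group(). A hedged sketch of that pattern, using hypothetical names and a simple per-group reference count (not the merged testpmd code): each port using a group takes a reference, and the slot is freed only when the last reference drops.

```c
#include <stdint.h>

/* Hypothetical per-group reference count: how many ports currently use
 * each share group index. Index 0 ("no sharing") is never tracked.
 */
#define MAX_SHARE_GROUPS 64

static uint16_t share_group_refs[MAX_SHARE_GROUPS];

/* A port starts using the given share group. */
static void
take_share_group(uint16_t group)
{
	if (group == 0)
		return; /* short circuit: group 0 means sharing disabled */
	share_group_refs[group]++;
}

/* Drop one reference on port close.
 * Returns 1 when the group slot became free, 0 otherwise.
 */
static int
try_release_share_group(uint16_t group)
{
	if (group == 0 || share_group_refs[group] == 0)
		return 0; /* short circuit: nothing to release */
	return --share_group_refs[group] == 0;
}
```

The short-circuit returns keep the common cases (sharing disabled, group still in use) out of the release path, which mirrors the style change requested during review.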
end of thread, other threads:[~2026-03-25 20:16 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-24 12:37 [PATCH 0/2] app/testpmd: assign share group dynamically Dariusz Sosnowski
2026-03-24 12:37 ` [PATCH 1/2] " Dariusz Sosnowski
2026-03-24 15:15   ` Stephen Hemminger
2026-03-25 16:45   ` Stephen Hemminger
2026-03-24 12:37 ` [PATCH 2/2] app/testpmd: revert switch domain mismatch check Dariusz Sosnowski
2026-03-24 15:17   ` Stephen Hemminger
2026-03-24 16:56 ` [PATCH v2 0/2] app/testpmd: assign share group dynamically Dariusz Sosnowski
2026-03-24 16:56   ` [PATCH v2 1/2] " Dariusz Sosnowski
2026-03-25 16:49     ` Stephen Hemminger
2026-03-25 18:06       ` Dariusz Sosnowski
2026-03-25 16:50     ` Stephen Hemminger
2026-03-25 18:12       ` Dariusz Sosnowski
2026-03-24 16:56   ` [PATCH v2 2/2] app/testpmd: revert switch domain mismatch check Dariusz Sosnowski
2026-03-25 18:02   ` [PATCH v3 0/2] app/testpmd: assign share group dynamically Dariusz Sosnowski
2026-03-25 18:02     ` [PATCH v3 1/2] " Dariusz Sosnowski
2026-03-25 18:51       ` Stephen Hemminger
2026-03-25 19:11         ` Dariusz Sosnowski
2026-03-25 18:02     ` [PATCH v3 2/2] app/testpmd: revert switch domain mismatch check Dariusz Sosnowski
2026-03-25 19:09     ` [PATCH v4 0/2] app/testpmd: assign share group dynamically Dariusz Sosnowski
2026-03-25 19:09       ` [PATCH v4 1/2] " Dariusz Sosnowski
2026-03-25 19:09       ` [PATCH v4 2/2] app/testpmd: revert switch domain mismatch check Dariusz Sosnowski
2026-03-25 20:16       ` [PATCH v4 0/2] app/testpmd: assign share group dynamically Stephen Hemminger