* Re: [PATCH] app/testpmd: fix DCB forwarding TC mask and queue guard
2026-03-11 8:37 [PATCH] app/testpmd: fix DCB forwarding TC mask and queue guard Talluri Chaitanyababu
@ 2026-03-11 15:56 ` Stephen Hemminger
2026-03-12 10:36 ` [PATCH v2] " Talluri Chaitanyababu
` (3 subsequent siblings)
4 siblings, 0 replies; 18+ messages in thread
From: Stephen Hemminger @ 2026-03-11 15:56 UTC (permalink / raw)
To: Talluri Chaitanyababu
Cc: dev, bruce.richardson, aman.deep.singh, shaiq.wani, stable
On Wed, 11 Mar 2026 08:37:51 +0000
Talluri Chaitanyababu <chaitanyababux.talluri@intel.com> wrote:
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index c33c66f327..3fb9b940eb 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -3682,6 +3682,19 @@ cmd_config_dcb_parsed(void *parsed_result,
> return;
> }
>
> + /*
> + * Update forwarding TC mask to match actual configured TCs.
> + * Must query after init_port_dcb_config() to get updated nb_tcs.
> + */
> + ret = rte_eth_dev_get_dcb_info(port_id, &dcb_info);
> + if (ret == 0 && dcb_info.nb_tcs > 0) {
> + dcb_fwd_tc_mask = (1u << dcb_info.nb_tcs) - 1;
> + } else if (ret != 0) {
> + fprintf(stderr, "Failed to get DCB info for port %u: %s\n",
> + port_id, rte_strerror(-ret));
> + return;
> + }
> +
Doesn't that function already call rte_eth_dev_get_dcb_info() only a few lines earlier?
^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH v2] app/testpmd: fix DCB forwarding TC mask and queue guard
2026-03-11 8:37 [PATCH] app/testpmd: fix DCB forwarding TC mask and queue guard Talluri Chaitanyababu
2026-03-11 15:56 ` Stephen Hemminger
@ 2026-03-12 10:36 ` Talluri Chaitanyababu
2026-03-12 18:44 ` Stephen Hemminger
2026-03-13 0:19 ` fengchengwen
2026-03-16 6:21 ` [PATCH v3] " Talluri Chaitanyababu
` (2 subsequent siblings)
4 siblings, 2 replies; 18+ messages in thread
From: Talluri Chaitanyababu @ 2026-03-12 10:36 UTC (permalink / raw)
To: dev, bruce.richardson, stephen, aman.deep.singh
Cc: shaiq.wani, Talluri Chaitanyababu, stable
Update forwarding TC mask based on configured traffic classes to properly
handle both 4 TC and 8 TC modes. The bitmask calculation (1u << nb_tcs) - 1
correctly creates masks for all available traffic classes (0xF for 4 TCs,
0xFF for 8 TCs).
When the mask is not updated after a TC configuration change, it stays at
the default 0xFF, which causes dcb_fwd_tc_update_dcb_info() to skip the
compress logic entirely (early return when mask ==
DEFAULT_DCB_FWD_TC_MASK).
This can lead to inconsistent queue allocations.
Additionally, the existing VMDQ pool guard in dcb_fwd_config_setup() only
checks RX queue counts, missing the case where the TX port has zero queues
for a given pool/TC combination. When nb_tx_queue is 0, the expression
"j % nb_tx_queue" triggers a SIGFPE (integer division by zero).
Fix this by:
1. Updating dcb_fwd_tc_mask after port DCB reconfiguration using the
user requested num_tcs value, so fwd_config_setup() sees the correct
mask.
2. Extending the existing pool guard to also check TX queue counts.
3. Adding a defensive break after the division by dcb_fwd_tc_cores to
catch integer truncation to zero.
Fixes: 0ecbf93f5001 ("app/testpmd: add command to disable DCB")
Cc: stable@dpdk.org
Signed-off-by: Talluri Chaitanyababu <chaitanyababux.talluri@intel.com>
Signed-off-by: Shaiq Wani <shaiq.wani@intel.com>
---
v2:
* Used res->num_tcs to derive dcb_fwd_tc_mask.
* Removed redundant rte_eth_dev_get_dcb_info().
---
app/test-pmd/cmdline.c | 3 +++
app/test-pmd/config.c | 9 ++++++++-
2 files changed, 11 insertions(+), 1 deletion(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index c33c66f327..19766cc633 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -3682,6 +3682,9 @@ cmd_config_dcb_parsed(void *parsed_result,
return;
}
+ /* Update forwarding TC mask to match the configured number of TCs. */
+ dcb_fwd_tc_mask = (1u << res->num_tcs) - 1;
+
fwd_config_setup();
cmd_reconfig_device_queue(port_id, 1, 1);
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index f9f3c542a6..9b201ac241 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -5450,7 +5450,8 @@ dcb_fwd_config_setup(void)
/* if the nb_queue is zero, means this tc is
* not enabled on the POOL
*/
- if (rxp_dcb_info.tc_queue.tc_rxq[i][tc].nb_queue == 0)
+ if (rxp_dcb_info.tc_queue.tc_rxq[i][tc].nb_queue == 0 ||
+ txp_dcb_info.tc_queue.tc_txq[i][tc].nb_queue == 0)
break;
k = fwd_lcores[lc_id]->stream_nb +
fwd_lcores[lc_id]->stream_idx;
@@ -5458,6 +5459,12 @@ dcb_fwd_config_setup(void)
dcb_fwd_tc_cores;
nb_tx_queue = txp_dcb_info.tc_queue.tc_txq[i][tc].nb_queue /
dcb_fwd_tc_cores;
+ /* guard against integer truncation to zero (e.g.
+ * nb_queue=1, dcb_fwd_tc_cores=2) to prevent SIGFPE
+ * from "j % nb_tx_queue" below.
+ */
+ if (nb_rx_queue == 0 || nb_tx_queue == 0)
+ break;
rxq = rxp_dcb_info.tc_queue.tc_rxq[i][tc].base + nb_rx_queue * sub_core_idx;
txq = txp_dcb_info.tc_queue.tc_txq[i][tc].base + nb_tx_queue * sub_core_idx;
for (j = 0; j < nb_rx_queue; j++) {
--
2.43.0
^ permalink raw reply related	[flat|nested] 18+ messages in thread

* Re: [PATCH v2] app/testpmd: fix DCB forwarding TC mask and queue guard
2026-03-12 10:36 ` [PATCH v2] " Talluri Chaitanyababu
@ 2026-03-12 18:44 ` Stephen Hemminger
2026-03-13 0:19 ` fengchengwen
1 sibling, 0 replies; 18+ messages in thread
From: Stephen Hemminger @ 2026-03-12 18:44 UTC (permalink / raw)
To: Talluri Chaitanyababu
Cc: dev, bruce.richardson, aman.deep.singh, shaiq.wani, stable
On Thu, 12 Mar 2026 10:36:15 +0000
Talluri Chaitanyababu <chaitanyababux.talluri@intel.com> wrote:
> Update forwarding TC mask based on configured traffic classes to properly
> handle both 4 TC and 8 TC modes. The bitmask calculation (1u << nb_tcs) - 1
> correctly creates masks for all available traffic classes (0xF for 4 TCs,
> 0xFF for 8 TCs).
>
> When the mask is not updated after a TC configuration change, it stays at
> the default 0xFF, which causes dcb_fwd_tc_update_dcb_info() to skip the
> compress logic entirely (early return when mask ==
> DEFAULT_DCB_FWD_TC_MASK).
> This can lead to inconsistent queue allocations.
>
> Additionally, the existing VMDQ pool guard in dcb_fwd_config_setup() only
> checks RX queue counts, missing the case where the TX port has zero queues
> for a given pool/TC combination. When nb_tx_queue is 0, the expression
> "j % nb_tx_queue" triggers a SIGFPE (integer division by zero).
>
> Fix this by:
> 1. Updating dcb_fwd_tc_mask after port DCB reconfiguration using the
> user requested num_tcs value, so fwd_config_setup() sees the correct
> mask.
> 2. Extending the existing pool guard to also check TX queue counts.
> 3. Adding a defensive break after the division by dcb_fwd_tc_cores to
> catch integer truncation to zero.
>
> Fixes: 0ecbf93f5001 ("app/testpmd: add command to disable DCB")
> Cc: stable@dpdk.org
>
> Signed-off-by: Talluri Chaitanyababu <chaitanyababux.talluri@intel.com>
> Signed-off-by: Shaiq Wani <shaiq.wani@intel.com>
> ---
Is this the same person as the one already in .mailmap?
Chaitanya Babu Talluri <tallurix.chaitanya.babu@intel.com>
^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2] app/testpmd: fix DCB forwarding TC mask and queue guard
2026-03-12 10:36 ` [PATCH v2] " Talluri Chaitanyababu
2026-03-12 18:44 ` Stephen Hemminger
@ 2026-03-13 0:19 ` fengchengwen
2026-03-16 6:05 ` Talluri, ChaitanyababuX
1 sibling, 1 reply; 18+ messages in thread
From: fengchengwen @ 2026-03-13 0:19 UTC (permalink / raw)
To: Talluri Chaitanyababu, dev, bruce.richardson, stephen,
aman.deep.singh
Cc: shaiq.wani, stable
On 3/12/2026 6:36 PM, Talluri Chaitanyababu wrote:
> Update forwarding TC mask based on configured traffic classes to properly
> handle both 4 TC and 8 TC modes. The bitmask calculation (1u << nb_tcs) - 1
> correctly creates masks for all available traffic classes (0xF for 4 TCs,
> 0xFF for 8 TCs).
>
> When the mask is not updated after a TC configuration change, it stays at
> the default 0xFF, which causes dcb_fwd_tc_update_dcb_info() to skip the
> compress logic entirely (early return when mask ==
> DEFAULT_DCB_FWD_TC_MASK).
> This can lead to inconsistent queue allocations.
Sorry, I cannot understand the issue you describe. Could you please provide some steps
to reproduce it, along with the observed behaviour?
>
> Additionally, the existing VMDQ pool guard in dcb_fwd_config_setup() only
> checks RX queue counts, missing the case where the TX port has zero queues
> for a given pool/TC combination. When nb_tx_queue is 0, the expression
> "j % nb_tx_queue" triggers a SIGFPE (integer division by zero).
dcb_fwd_check_cores_per_tc() already checks this case, so please provide the reproduction steps.
>
> Fix this by:
> 1. Updating dcb_fwd_tc_mask after port DCB reconfiguration using the
> user requested num_tcs value, so fwd_config_setup() sees the correct
> mask.
> 2. Extending the existing pool guard to also check TX queue counts.
> 3. Adding a defensive break after the division by dcb_fwd_tc_cores to
> catch integer truncation to zero.
>
> Fixes: 0ecbf93f5001 ("app/testpmd: add command to disable DCB")
> Cc: stable@dpdk.org
>
> Signed-off-by: Talluri Chaitanyababu <chaitanyababux.talluri@intel.com>
> Signed-off-by: Shaiq Wani <shaiq.wani@intel.com>
> ---
^ permalink raw reply	[flat|nested] 18+ messages in thread

* RE: [PATCH v2] app/testpmd: fix DCB forwarding TC mask and queue guard
2026-03-13 0:19 ` fengchengwen
@ 2026-03-16 6:05 ` Talluri, ChaitanyababuX
2026-03-17 1:07 ` fengchengwen
0 siblings, 1 reply; 18+ messages in thread
From: Talluri, ChaitanyababuX @ 2026-03-16 6:05 UTC (permalink / raw)
To: fengchengwen, dev@dpdk.org, Richardson, Bruce,
stephen@networkplumber.org, Singh, Aman Deep
Cc: Wani, Shaiq, stable@dpdk.org
-----Original Message-----
From: fengchengwen <fengchengwen@huawei.com>
Sent: 13 March 2026 05:49
To: Talluri, ChaitanyababuX <chaitanyababux.talluri@intel.com>; dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>; stephen@networkplumber.org; Singh, Aman Deep <aman.deep.singh@intel.com>
Cc: Wani, Shaiq <shaiq.wani@intel.com>; stable@dpdk.org
Subject: Re: [PATCH v2] app/testpmd: fix DCB forwarding TC mask and queue guard
On 3/12/2026 6:36 PM, Talluri Chaitanyababu wrote:
> Update forwarding TC mask based on configured traffic classes to
> properly handle both 4 TC and 8 TC modes. The bitmask calculation (1u
> << nb_tcs) - 1 correctly creates masks for all available traffic
> classes (0xF for 4 TCs, 0xFF for 8 TCs).
>
> When the mask is not updated after a TC configuration change, it stays
> at the default 0xFF, which causes dcb_fwd_tc_update_dcb_info() to skip
> the compress logic entirely (early return when mask ==
> DEFAULT_DCB_FWD_TC_MASK).
> This can lead to inconsistent queue allocations.
Sorry, I cannot understand your question. Could you please provide some steps to reproduce the issue and the problem phenomenon?
Please find the reproduction steps and problem description below.
1. Bind 2 ports to vfio-pci
./usertools/dpdk-devbind.py -b vfio-pci 0000:af:00.0 0000:af:00.1
2. Start testpmd and reconfigure DCB PFC
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-20 -n 4 -a 0000:af:00.0 -a 0000:af:00.1 --file-prefix=testpmd1 -- -i --rxq=256 --txq=256 --nb-cores=16 --total-num-mbufs=600000
testpmd> port stop all
testpmd> port config 0 dcb vt off 8 pfc on
testpmd> port config 1 dcb vt off 8 pfc on
testpmd> port start all
testpmd> port stop all
testpmd> port config 0 dcb vt off 4 pfc on
Test Log:
root@srv13:~/test-1/dpdk# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-20 -n 4 -a 0000:31:00.0 -a 0000:4b:00.0 --file-prefix=testpmd1 -- -i --rxq=256 --txq=256 --nb-cores=16 --total-num-mbufs=600000
EAL: Detected CPU lcores: 96
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/testpmd1/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
ICE_INIT: ice_load_pkg_type(): Active package is: 1.3.50.0, ICE OS Default Package (single VLAN mode)
ICE_INIT: ice_load_pkg_type(): Active package is: 1.3.50.0, ICE OS Default Package (single VLAN mode)
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=600000, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 0).
ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 0).
Port 0: B4:96:91:9F:5E:B0
Configuring Port 1 (socket 0)
ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 1).
ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 1).
Port 1: 68:05:CA:A3:13:4C
Checking link statuses...
Done
testpmd> port stop all
Stopping ports...
Port 0: link state change event
Checking link statuses...
Port 1: link state change event
Done
testpmd> port config 0 dcb vt off 8 pfc on
In DCB mode, all forwarding ports must be configured in this mode.
testpmd> port config 1 dcb vt off 8 pfc on
testpmd> port start all
Configuring Port 0 (socket 0)
ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 0).
ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 0).
Port 0: link state change event
Port 0: B4:96:91:9F:5E:B0
Configuring Port 1 (socket 0)
ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 1).
ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 1).
Port 1: 68:05:CA:A3:13:4C
Checking link statuses...
Done
testpmd> port stop all
Stopping ports...
Port 0: link state change event
Checking link statuses...
Port 1: link state change event
Done
testpmd> port config 0 dcb vt off 4 pfc on
Floating point exception
Expected behaviour:
After reconfiguring PFC from 8 to 4 TCs, the forwarding TC mask should reflect
the configured number of TCs (mask = 0xF).
>
> Additionally, the existing VMDQ pool guard in dcb_fwd_config_setup()
> only checks RX queue counts, missing the case where the TX port has
> zero queues for a given pool/TC combination. When nb_tx_queue is 0,
> the expression "j % nb_tx_queue" triggers a SIGFPE (integer division by zero).
The dcb_fwd_check_cores_per_tc() check this case. So please provide the steps.
>
> Fix this by:
> 1. Updating dcb_fwd_tc_mask after port DCB reconfiguration using the
> user requested num_tcs value, so fwd_config_setup() sees the correct
> mask.
> 2. Extending the existing pool guard to also check TX queue counts.
> 3. Adding a defensive break after the division by dcb_fwd_tc_cores to
> catch integer truncation to zero.
>
> Fixes: 0ecbf93f5001 ("app/testpmd: add command to disable DCB")
> Cc: stable@dpdk.org
>
> Signed-off-by: Talluri Chaitanyababu
> <chaitanyababux.talluri@intel.com>
> Signed-off-by: Shaiq Wani <shaiq.wani@intel.com>
> ---
^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2] app/testpmd: fix DCB forwarding TC mask and queue guard
2026-03-16 6:05 ` Talluri, ChaitanyababuX
@ 2026-03-17 1:07 ` fengchengwen
2026-03-18 7:21 ` Talluri, ChaitanyababuX
0 siblings, 1 reply; 18+ messages in thread
From: fengchengwen @ 2026-03-17 1:07 UTC (permalink / raw)
To: Talluri, ChaitanyababuX, dev@dpdk.org, Richardson, Bruce,
stephen@networkplumber.org, Singh, Aman Deep
Cc: Wani, Shaiq, stable@dpdk.org
On 3/16/2026 2:05 PM, Talluri, ChaitanyababuX wrote:
>
>
> -----Original Message-----
> From: fengchengwen <fengchengwen@huawei.com>
> Sent: 13 March 2026 05:49
> To: Talluri, ChaitanyababuX <chaitanyababux.talluri@intel.com>; dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>; stephen@networkplumber.org; Singh, Aman Deep <aman.deep.singh@intel.com>
> Cc: Wani, Shaiq <shaiq.wani@intel.com>; stable@dpdk.org
> Subject: Re: [PATCH v2] app/testpmd: fix DCB forwarding TC mask and queue guard
>
> On 3/12/2026 6:36 PM, Talluri Chaitanyababu wrote:
>> Update forwarding TC mask based on configured traffic classes to
>> properly handle both 4 TC and 8 TC modes. The bitmask calculation (1u
>> << nb_tcs) - 1 correctly creates masks for all available traffic
>> classes (0xF for 4 TCs, 0xFF for 8 TCs).
>>
>> When the mask is not updated after a TC configuration change, it stays
>> at the default 0xFF, which causes dcb_fwd_tc_update_dcb_info() to skip
>> the compress logic entirely (early return when mask ==
>> DEFAULT_DCB_FWD_TC_MASK).
>> This can lead to inconsistent queue allocations.
>
> Sorry, I cannot understand your question. Could you please provide some steps to reproduce the issue and the problem phenomenon?
>
> Please find the reproduction steps and problem description below.
>
> 1.bind 2 port to vfio-pci
> ./usertools/dpdk-devbind.py -b vfio-pci 0000:af:00.0 0000:af:00.1
> 2. start testpmd and reset DCB PFC
> ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-20 -n 4 -a 0000:af:00.0 -a 0000:af:00.1 --file-prefix=testpmd1 -- -i --rxq=256 --txq=256 --nb-cores=16 --total-num-mbufs=600000
>
> testpmd> port stop all
> testpmd> port config 0 dcb vt off 8 pfc on
> testpmd> port config 1 dcb vt off 8 pfc on
> testpmd> port start all
> testpmd> port stop all
> testpmd> port config 0 dcb vt off 4 pfc on
>
> Test Log:
> root@srv13:~/test-1/dpdk# ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-20 -n 4 -a 0000:31:00.0 -a 0000:4b:00.0 --file-prefix=testpmd1 -- -i --rxq=256 --txq=256 --nb-cores=16 --total-num-mbufs=600000
> EAL: Detected CPU lcores: 96
> EAL: Detected NUMA nodes: 2
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/testpmd1/mp_socket
> EAL: Selected IOVA mode 'VA'
> EAL: VFIO support initialized
> EAL: Using IOMMU type 1 (Type 1)
> ICE_INIT: ice_load_pkg_type(): Active package is: 1.3.50.0, ICE OS Default Package (single VLAN mode)
> ICE_INIT: ice_load_pkg_type(): Active package is: 1.3.50.0, ICE OS Default Package (single VLAN mode)
> Interactive-mode selected
> testpmd: create a new mbuf pool <mb_pool_0>: n=600000, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> Configuring Port 0 (socket 0)
> ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 0).
> ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 0).
> Port 0: B4:96:91:9F:5E:B0
> Configuring Port 1 (socket 0)
> ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 1).
> ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 1).
> Port 1: 68:05:CA:A3:13:4C
> Checking link statuses...
> Done
> testpmd> port stop all
> Stopping ports...
>
> Port 0: link state change event
> Checking link statuses...
>
> Port 1: link state change event
> Done
> testpmd> port config 0 dcb vt off 8 pfc on
> In DCB mode, all forwarding ports must be configured in this mode.
> testpmd> port config 1 dcb vt off 8 pfc on
> testpmd> port start all
> Configuring Port 0 (socket 0)
> ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 0).
> ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 0).
>
> Port 0: link state change event
> Port 0: B4:96:91:9F:5E:B0
> Configuring Port 1 (socket 0)
> ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 1).
> ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 1).
> Port 1: 68:05:CA:A3:13:4C
> Checking link statuses...
> Done
> testpmd> port stop all
> Stopping ports...
>
> Port 0: link state change event
> Checking link statuses...
>
> Port 1: link state change event
> Done
> testpmd> port config 0 dcb vt off 4 pfc on
> Floating point exception
I just tried to reproduce this on a Kunpeng platform, but there seems to be no error:
dpdk-testpmd -a 0000:7d:00.0 -a 0000:7d:00.2 --file-prefix=feng -l 0-20 -- -i --rxq=64 --txq=64 --nb-cores=16
PS: this NIC only supports a maximum of 64 queues
So could you show the gdb output?
The detailed output:
./dpdk-testpmd -a 0000:7d:00.0 -a 0000:7d:00.2 --file-prefix=feng -l 0-20 -- -i --rxq=64 --txq=64 --nb-cores=16
EAL: Detected CPU lcores: 96
EAL: Detected NUMA nodes: 4
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/feng/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: DPDK is running on a NUMA system, but is compiled without NUMA support.
EAL: This will have adverse consequences for performance and usability.
EAL: Please use --legacy-mem option, or recompile with NUMA support.
EAL: Using IOMMU type 1 (Type 1)
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=307456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
HNS3_DRIVER: 0000:7d:00.0 hns3_set_fiber_port_link_speed(): auto-negotiation is not supported, use default fixed speed!
Port 0: 00:18:2D:00:00:79
Configuring Port 1 (socket 0)
HNS3_DRIVER: 0000:7d:00.2 hns3_set_fiber_port_link_speed(): auto-negotiation is not supported, use default fixed speed!
Port 1: 00:18:2D:02:00:79
Checking link statuses...
Done
testpmd>
testpmd>
testpmd> HNS3_DRIVER: 0000:7d:00.0 hns3_update_link_status(): Link status change to up!
Port 0: link state change event
testpmd>
testpmd> port stop all
Stopping ports...
Checking link statuses...
Done
testpmd> port config 0 dcb vt off 8 pfc on
In DCB mode, all forwarding ports must be configured in this mode.
testpmd> port config 1 dcb vt off 8 pfc on
testpmd> port start all
Configuring Port 0 (socket 0)
HNS3_DRIVER: 0000:7d:00.0 hns3_set_fiber_port_link_speed(): auto-negotiation is not supported, use default fixed speed!
Port 0: 00:18:2D:00:00:79
Configuring Port 1 (socket 0)
HNS3_DRIVER: 0000:7d:00.2 hns3_set_fiber_port_link_speed(): auto-negotiation is not supported, use default fixed speed!
Port 1: 00:18:2D:02:00:79
Checking link statuses...
Done
testpmd> port stop all
Stopping ports...
Checking link statuses...
Done
testpmd> port config 0 dcb vt off 4 pfc on
testpmd>
testpmd> start
Not all ports were started
testpmd> port start all
Configuring Port 0 (socket 0)
HNS3_DRIVER: 0000:7d:00.0 hns3_set_fiber_port_link_speed(): auto-negotiation is not supported, use default fixed speed!
Port 0: 00:18:2D:00:00:79
HNS3_DRIVER: 0000:7d:00.2 hns3_set_fiber_port_link_speed(): auto-negotiation is not supported, use default fixed speed!
Port 1: 00:18:2D:02:00:79
Checking link statuses...
Done
testpmd> HNS3_DRIVER: 0000:7d:00.0 hns3_update_link_status(): Link status change to up!
Port 0: link state change event
testpmd> start
io packet forwarding - ports=2 - cores=12 - streams=128 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 16 streams:
RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=1 (socket 0) -> TX P=1/Q=1 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=2 (socket 0) -> TX P=1/Q=2 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=3 (socket 0) -> TX P=1/Q=3 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=4 (socket 0) -> TX P=1/Q=4 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=5 (socket 0) -> TX P=1/Q=5 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=6 (socket 0) -> TX P=1/Q=6 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=7 (socket 0) -> TX P=1/Q=7 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=8 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=9 (socket 0) -> TX P=1/Q=1 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=10 (socket 0) -> TX P=1/Q=2 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=11 (socket 0) -> TX P=1/Q=3 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=12 (socket 0) -> TX P=1/Q=4 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=13 (socket 0) -> TX P=1/Q=5 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=14 (socket 0) -> TX P=1/Q=6 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=15 (socket 0) -> TX P=1/Q=7 (socket 0) peer=02:00:00:00:00:01
Logical Core 2 (socket 0) forwards packets on 16 streams:
RX P=0/Q=16 (socket 0) -> TX P=1/Q=8 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=17 (socket 0) -> TX P=1/Q=9 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=18 (socket 0) -> TX P=1/Q=10 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=19 (socket 0) -> TX P=1/Q=11 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=20 (socket 0) -> TX P=1/Q=12 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=21 (socket 0) -> TX P=1/Q=13 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=22 (socket 0) -> TX P=1/Q=14 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=23 (socket 0) -> TX P=1/Q=15 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=24 (socket 0) -> TX P=1/Q=8 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=25 (socket 0) -> TX P=1/Q=9 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=26 (socket 0) -> TX P=1/Q=10 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=27 (socket 0) -> TX P=1/Q=11 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=28 (socket 0) -> TX P=1/Q=12 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=29 (socket 0) -> TX P=1/Q=13 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=30 (socket 0) -> TX P=1/Q=14 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=31 (socket 0) -> TX P=1/Q=15 (socket 0) peer=02:00:00:00:00:01
Logical Core 3 (socket 0) forwards packets on 16 streams:
RX P=0/Q=32 (socket 0) -> TX P=1/Q=16 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=33 (socket 0) -> TX P=1/Q=17 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=34 (socket 0) -> TX P=1/Q=18 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=35 (socket 0) -> TX P=1/Q=19 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=36 (socket 0) -> TX P=1/Q=20 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=37 (socket 0) -> TX P=1/Q=21 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=38 (socket 0) -> TX P=1/Q=22 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=39 (socket 0) -> TX P=1/Q=23 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=40 (socket 0) -> TX P=1/Q=16 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=41 (socket 0) -> TX P=1/Q=17 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=42 (socket 0) -> TX P=1/Q=18 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=43 (socket 0) -> TX P=1/Q=19 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=44 (socket 0) -> TX P=1/Q=20 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=45 (socket 0) -> TX P=1/Q=21 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=46 (socket 0) -> TX P=1/Q=22 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=47 (socket 0) -> TX P=1/Q=23 (socket 0) peer=02:00:00:00:00:01
Logical Core 4 (socket 0) forwards packets on 16 streams:
RX P=0/Q=48 (socket 0) -> TX P=1/Q=24 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=49 (socket 0) -> TX P=1/Q=25 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=50 (socket 0) -> TX P=1/Q=26 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=51 (socket 0) -> TX P=1/Q=27 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=52 (socket 0) -> TX P=1/Q=28 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=53 (socket 0) -> TX P=1/Q=29 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=54 (socket 0) -> TX P=1/Q=30 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=55 (socket 0) -> TX P=1/Q=31 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=56 (socket 0) -> TX P=1/Q=24 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=57 (socket 0) -> TX P=1/Q=25 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=58 (socket 0) -> TX P=1/Q=26 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=59 (socket 0) -> TX P=1/Q=27 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=60 (socket 0) -> TX P=1/Q=28 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=61 (socket 0) -> TX P=1/Q=29 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=62 (socket 0) -> TX P=1/Q=30 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=63 (socket 0) -> TX P=1/Q=31 (socket 0) peer=02:00:00:00:00:01
Logical Core 5 (socket 0) forwards packets on 8 streams:
RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=2 (socket 0) -> TX P=0/Q=2 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=3 (socket 0) -> TX P=0/Q=3 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=4 (socket 0) -> TX P=0/Q=4 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=5 (socket 0) -> TX P=0/Q=5 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=6 (socket 0) -> TX P=0/Q=6 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=7 (socket 0) -> TX P=0/Q=7 (socket 0) peer=02:00:00:00:00:00
Logical Core 6 (socket 0) forwards packets on 8 streams:
RX P=1/Q=8 (socket 0) -> TX P=0/Q=16 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=9 (socket 0) -> TX P=0/Q=17 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=10 (socket 0) -> TX P=0/Q=18 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=11 (socket 0) -> TX P=0/Q=19 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=12 (socket 0) -> TX P=0/Q=20 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=13 (socket 0) -> TX P=0/Q=21 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=14 (socket 0) -> TX P=0/Q=22 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=15 (socket 0) -> TX P=0/Q=23 (socket 0) peer=02:00:00:00:00:00
Logical Core 7 (socket 0) forwards packets on 8 streams:
RX P=1/Q=16 (socket 0) -> TX P=0/Q=32 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=17 (socket 0) -> TX P=0/Q=33 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=18 (socket 0) -> TX P=0/Q=34 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=19 (socket 0) -> TX P=0/Q=35 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=20 (socket 0) -> TX P=0/Q=36 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=21 (socket 0) -> TX P=0/Q=37 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=22 (socket 0) -> TX P=0/Q=38 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=23 (socket 0) -> TX P=0/Q=39 (socket 0) peer=02:00:00:00:00:00
Logical Core 8 (socket 0) forwards packets on 8 streams:
RX P=1/Q=24 (socket 0) -> TX P=0/Q=48 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=25 (socket 0) -> TX P=0/Q=49 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=26 (socket 0) -> TX P=0/Q=50 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=27 (socket 0) -> TX P=0/Q=51 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=28 (socket 0) -> TX P=0/Q=52 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=29 (socket 0) -> TX P=0/Q=53 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=30 (socket 0) -> TX P=0/Q=54 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=31 (socket 0) -> TX P=0/Q=55 (socket 0) peer=02:00:00:00:00:00
Logical Core 9 (socket 0) forwards packets on 8 streams:
RX P=1/Q=32 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=33 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=34 (socket 0) -> TX P=0/Q=2 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=35 (socket 0) -> TX P=0/Q=3 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=36 (socket 0) -> TX P=0/Q=4 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=37 (socket 0) -> TX P=0/Q=5 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=38 (socket 0) -> TX P=0/Q=6 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=39 (socket 0) -> TX P=0/Q=7 (socket 0) peer=02:00:00:00:00:00
Logical Core 10 (socket 0) forwards packets on 8 streams:
RX P=1/Q=40 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=41 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=42 (socket 0) -> TX P=0/Q=2 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=43 (socket 0) -> TX P=0/Q=3 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=44 (socket 0) -> TX P=0/Q=4 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=45 (socket 0) -> TX P=0/Q=5 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=46 (socket 0) -> TX P=0/Q=6 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=47 (socket 0) -> TX P=0/Q=7 (socket 0) peer=02:00:00:00:00:00
Logical Core 11 (socket 0) forwards packets on 8 streams:
RX P=1/Q=48 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=49 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=50 (socket 0) -> TX P=0/Q=2 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=51 (socket 0) -> TX P=0/Q=3 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=52 (socket 0) -> TX P=0/Q=4 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=53 (socket 0) -> TX P=0/Q=5 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=54 (socket 0) -> TX P=0/Q=6 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=55 (socket 0) -> TX P=0/Q=7 (socket 0) peer=02:00:00:00:00:00
Logical Core 12 (socket 0) forwards packets on 8 streams:
RX P=1/Q=56 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=57 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=58 (socket 0) -> TX P=0/Q=2 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=59 (socket 0) -> TX P=0/Q=3 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=60 (socket 0) -> TX P=0/Q=4 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=61 (socket 0) -> TX P=0/Q=5 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=62 (socket 0) -> TX P=0/Q=6 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=63 (socket 0) -> TX P=0/Q=7 (socket 0) peer=02:00:00:00:00:00
io packet forwarding packets/burst=32
nb forwarding cores=16 - nb forwarding ports=2
port 0: RX queue number: 64 Tx queue number: 64
Rx offloads=0x80200 Tx offloads=0x10000
RX queue: 0
RX desc=1024 - RX free threshold=64
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x80200
TX queue: 0
TX desc=1024 - TX free threshold=928
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
port 1: RX queue number: 64 Tx queue number: 64
Rx offloads=0x80200 Tx offloads=0x10000
RX queue: 0
RX desc=1024 - RX free threshold=64
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x80200
TX queue: 0
TX desc=1024 - TX free threshold=928
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
testpmd>
>
> Expected behaviour:
>
> After reconfiguring PFC from 8 to 4 TCs, the forwarding TC mask should reflect
> the configured number of TCs (mask = 0xF).
>>
>> Additionally, the existing VMDQ pool guard in dcb_fwd_config_setup()
>> only checks RX queue counts, missing the case where the TX port has
>> zero queues for a given pool/TC combination. When nb_tx_queue is 0,
>> the expression "j % nb_tx_queue" triggers a SIGFPE (integer division by zero).
>
> The dcb_fwd_check_cores_per_tc() function checks this case. So please provide the steps to reproduce it.
>
>>
>> Fix this by:
>> 1. Updating dcb_fwd_tc_mask after port DCB reconfiguration using the
>> user requested num_tcs value, so fwd_config_setup() sees the correct
>> mask.
>> 2. Extending the existing pool guard to also check TX queue counts.
>> 3. Adding a defensive break after the division by dcb_fwd_tc_cores to
>> catch integer truncation to zero.
>>
>> Fixes: 0ecbf93f5001 ("app/testpmd: add command to disable DCB")
>> Cc: stable@dpdk.org
>>
>> Signed-off-by: Talluri Chaitanyababu
>> <chaitanyababux.talluri@intel.com>
>> Signed-off-by: Shaiq Wani <shaiq.wani@intel.com>
>> ---
^ permalink raw reply [flat|nested] 18+ messages in thread
* RE: [PATCH v2] app/testpmd: fix DCB forwarding TC mask and queue guard
2026-03-17 1:07 ` fengchengwen
@ 2026-03-18 7:21 ` Talluri, ChaitanyababuX
2026-03-19 1:19 ` fengchengwen
0 siblings, 1 reply; 18+ messages in thread
From: Talluri, ChaitanyababuX @ 2026-03-18 7:21 UTC (permalink / raw)
To: fengchengwen, dev@dpdk.org, Richardson, Bruce,
stephen@networkplumber.org, Singh, Aman Deep
Cc: Wani, Shaiq, stable@dpdk.org
-----Original Message-----
From: fengchengwen <fengchengwen@huawei.com>
Sent: 17 March 2026 06:38
To: Talluri, ChaitanyababuX <chaitanyababux.talluri@intel.com>; dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>; stephen@networkplumber.org; Singh, Aman Deep <aman.deep.singh@intel.com>
Cc: Wani, Shaiq <shaiq.wani@intel.com>; stable@dpdk.org
Subject: Re: [PATCH v2] app/testpmd: fix DCB forwarding TC mask and queue guard
On 3/16/2026 2:05 PM, Talluri, ChaitanyababuX wrote:
>
>
> -----Original Message-----
> From: fengchengwen <fengchengwen@huawei.com>
> Sent: 13 March 2026 05:49
> To: Talluri, ChaitanyababuX <chaitanyababux.talluri@intel.com>;
> dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>;
> stephen@networkplumber.org; Singh, Aman Deep
> <aman.deep.singh@intel.com>
> Cc: Wani, Shaiq <shaiq.wani@intel.com>; stable@dpdk.org
> Subject: Re: [PATCH v2] app/testpmd: fix DCB forwarding TC mask and
> queue guard
>
> On 3/12/2026 6:36 PM, Talluri Chaitanyababu wrote:
>> Update forwarding TC mask based on configured traffic classes to
>> properly handle both 4 TC and 8 TC modes. The bitmask calculation
>> (1u << nb_tcs) - 1 correctly creates masks for all available traffic
>> classes (0xF for 4 TCs, 0xFF for 8 TCs).
>>
>> When the mask is not updated after a TC configuration change, it
>> stays at the default 0xFF, which causes dcb_fwd_tc_update_dcb_info()
>> to skip the compress logic entirely (early return when mask ==
>> DEFAULT_DCB_FWD_TC_MASK).
>> This can lead to inconsistent queue allocations.
>
> Sorry, I cannot understand your question. Could you please provide some steps to reproduce the issue and the problem phenomenon?
>
> Please find the reproduction steps and problem description below.
>
> 1. Bind 2 ports to vfio-pci:
> ./usertools/dpdk-devbind.py -b vfio-pci 0000:af:00.0 0000:af:00.1
> 2. Start testpmd and reset DCB PFC:
> ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-20 -n 4 -a 0000:af:00.0 -a 0000:af:00.1 --file-prefix=testpmd1 -- -i --rxq=256 --txq=256 --nb-cores=16 --total-num-mbufs=600000
>
> testpmd> port stop all
> testpmd> port config 0 dcb vt off 8 pfc on
> testpmd> port config 1 dcb vt off 8 pfc on
> testpmd> port start all
> testpmd> port stop all
> testpmd> port config 0 dcb vt off 4 pfc on
>
> Test Log:
> root@srv13:~/test-1/dpdk#
> ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-20 -n 4 -a
> 0000:31:00.0 -a 0000:4b:00.0 --file-prefix=testpmd1 -- -i --rxq=256
> --txq=256 --nb-cores=16 --total-num-mbufs=600000
> EAL: Detected CPU lcores: 96
> EAL: Detected NUMA nodes: 2
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/testpmd1/mp_socket
> EAL: Selected IOVA mode 'VA'
> EAL: VFIO support initialized
> EAL: Using IOMMU type 1 (Type 1)
> ICE_INIT: ice_load_pkg_type(): Active package is: 1.3.50.0, ICE OS Default Package (single VLAN mode)
> ICE_INIT: ice_load_pkg_type(): Active package is: 1.3.50.0, ICE OS Default Package (single VLAN mode)
> Interactive-mode selected
> testpmd: create a new mbuf pool <mb_pool_0>: n=600000, size=2176,
> socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> Configuring Port 0 (socket 0)
> ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 0).
> ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 0).
> Port 0: B4:96:91:9F:5E:B0
> Configuring Port 1 (socket 0)
> ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 1).
> ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 1).
> Port 1: 68:05:CA:A3:13:4C
> Checking link statuses...
> Done
> testpmd> port stop all
> Stopping ports...
>
> Port 0: link state change event
> Checking link statuses...
>
> Port 1: link state change event
> Done
> testpmd> port config 0 dcb vt off 8 pfc on
> In DCB mode, all forwarding ports must be configured in this mode.
> testpmd> port config 1 dcb vt off 8 pfc on
> testpmd> port start all
> Configuring Port 0 (socket 0)
> ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 0).
> ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 0).
>
> Port 0: link state change event
> Port 0: B4:96:91:9F:5E:B0
> Configuring Port 1 (socket 0)
> ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 1).
> ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 1).
> Port 1: 68:05:CA:A3:13:4C
> Checking link statuses...
> Done
> testpmd> port stop all
> Stopping ports...
>
> Port 0: link state change event
> Checking link statuses...
>
> Port 1: link state change event
> Done
> testpmd> port config 0 dcb vt off 4 pfc on
> Floating point exception
I just tried to reproduce this on the Kunpeng platform but it seems there is no error:
dpdk-testpmd -a 0000:7d:00.0 -a 0000:7d:00.2 --file-prefix=feng -l 0-20 -- -i --rxq=64 --txq=64 --nb-cores=16
PS: this NIC only supports a maximum of 64 queues
So could you show the GDB output?
As requested, please find the GDB output below.
gdb --args ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd \
-l 1-20 -n 4 \
-a 0000:31:00.0 -a 0000:4b:00.0 \
--file-prefix=testpmd1 \
-- -i --rxq=256 --txq=256 --nb-cores=16 --total-num-mbufs=600000
GNU gdb (Ubuntu 15.0.50.20240403-0ubuntu1) 15.0.50.20240403-git
Copyright (C) 2024 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd...
(gdb) run
Starting program: /home/intel/withoutfix/dpdk/x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-20 -n 4 -a 0000:31:00.0 -a 0000:4b:00.0 --file-prefix=testpmd1 -- -i --rxq=256 --txq=256 --nb-cores=16 --total-num-mbufs=600000
This GDB supports auto-downloading debuginfo from the following URLs:
<https://debuginfod.ubuntu.com>
Enable debuginfod for this session? (y or [n]) y
Debuginfod has been enabled.
To make this setting permanent, add 'set debuginfod enabled on' to .gdbinit.
Downloading separate debug info for system-supplied DSO at 0x7ffff7fc3000
Downloading separate debug info for /lib/x86_64-linux-gnu/libelf.so.1
Downloading separate debug info for /lib/x86_64-linux-gnu/libpcap.so.0.8
Downloading separate debug info for /lib/x86_64-linux-gnu/libmlx5.so.1
warning: could not find '.gnu_debugaltlink' file for /lib/x86_64-linux-gnu/libmlx5.so.1
Downloading separate debug info for /lib/x86_64-linux-gnu/libmlx5.so.1
Downloading separate debug info for /lib/x86_64-linux-gnu/libibverbs.so.1
warning: could not find '.gnu_debugaltlink' file for /lib/x86_64-linux-gnu/libmana.so.1
Downloading separate debug info for /lib/x86_64-linux-gnu/libmana.so.1
warning: could not find '.gnu_debugaltlink' file for /lib/x86_64-linux-gnu/libmlx4.so.1
Downloading separate debug info for /lib/x86_64-linux-gnu/libmlx4.so.1
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Downloading separate debug info for /lib/x86_64-linux-gnu/libsystemd.so.0
Downloading separate debug info for /lib/x86_64-linux-gnu/libcap.so.2
warning: could not find '.gnu_debugaltlink' file for /lib/x86_64-linux-gnu/libcap.so.2
Downloading separate debug info for /lib/x86_64-linux-gnu/libcap.so.2
Downloading separate debug info for /lib/x86_64-linux-gnu/libgcrypt.so.20
Downloading separate debug info for /lib/x86_64-linux-gnu/liblzma.so.5
Downloading separate debug info for /lib/x86_64-linux-gnu/libgpg-error.so.0
EAL: Detected CPU lcores: 96
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
[New Thread 0x7ffff765e400 (LWP 1886998)]
EAL: Multi-process socket /var/run/dpdk/testpmd1/mp_socket
[New Thread 0x7ffff6e5d400 (LWP 1886999)]
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
[New Thread 0x7ffff565b400 (LWP 1887000)]
[New Thread 0x7ffff4e5a400 (LWP 1887001)]
[New Thread 0x7fffeffff400 (LWP 1887002)]
[New Thread 0x7fffef7fe400 (LWP 1887003)]
[New Thread 0x7fffeeffd400 (LWP 1887004)]
[New Thread 0x7fffee7fc400 (LWP 1887005)]
[New Thread 0x7fffedffb400 (LWP 1887006)]
[New Thread 0x7fffed7fa400 (LWP 1887007)]
[New Thread 0x7fffecff9400 (LWP 1887008)]
[New Thread 0x7fffcbfff400 (LWP 1887009)]
[New Thread 0x7fffcb7fe400 (LWP 1887010)]
[New Thread 0x7fffcaffd400 (LWP 1887011)]
[New Thread 0x7fffca7fc400 (LWP 1887012)]
[New Thread 0x7fffc9ffb400 (LWP 1887013)]
[New Thread 0x7fffc97fa400 (LWP 1887014)]
[New Thread 0x7fffc8ff9400 (LWP 1887015)]
[New Thread 0x7fffabfff400 (LWP 1887016)]
[New Thread 0x7fffab7fe400 (LWP 1887017)]
[New Thread 0x7fffaaffd400 (LWP 1887018)]
EAL: Using IOMMU type 1 (Type 1)
ICE_INIT: ice_load_pkg_type(): Active package is: 1.3.50.0, ICE OS Default Package (single VLAN mode)
ICE_INIT: ice_load_pkg_type(): Active package is: 1.3.50.0, ICE OS Default Package (single VLAN mode)
[New Thread 0x7fffaa7fc400 (LWP 1887025)]
[New Thread 0x7fffa9ffb400 (LWP 1887026)]
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=600000, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 0).
ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 0).
Port 0: link state change event
Port 0: B4:96:91:9F:5E:B0
Configuring Port 1 (socket 0)
ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 1).
ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 1).
Port 1: 68:05:CA:A3:13:4C
Checking link statuses...
Done
testpmd>
testpmd>
testpmd> port stop all
Stopping ports...
Port 0: link state change event
Checking link statuses...
Port 1: link state change event
Done
testpmd> port config 0 dcb vt off 8 pfc on
In DCB mode, all forwarding ports must be configured in this mode.
testpmd> port config 1 dcb vt off 8 pfc on
testpmd> port start all
Configuring Port 0 (socket 0)
ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 0).
ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 0).
Port 0: B4:96:91:9F:5E:B0
Configuring Port 1 (socket 0)
ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 1).
ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 1).
Port 1: 68:05:CA:A3:13:4C
Checking link statuses...
Done
testpmd> port stop all
Stopping ports...
Port 0: link state change event
Checking link statuses...
Port 1: link state change event
Done
testpmd> port config 0 dcb vt off 4 pfc on
Thread 1 "dpdk-testpmd" received signal SIGFPE, Arithmetic exception.
0x0000555555795011 in dcb_fwd_config_setup () at ../app/test-pmd/config.c:5470
5470 fs->tx_queue = txq + j % nb_tx_queue;
(gdb) n
Couldn't get registers: No such process.
(gdb) [Thread 0x7fffaa7fc400 (LWP 1887025) exited]
[Thread 0x7fffa9ffb400 (LWP 1887026) exited]
[Thread 0x7fffaaffd400 (LWP 1887018) exited]
[Thread 0x7fffab7fe400 (LWP 1887017) exited]
[Thread 0x7fffabfff400 (LWP 1887016) exited]
[Thread 0x7fffc8ff9400 (LWP 1887015) exited]
[Thread 0x7fffc97fa400 (LWP 1887014) exited]
[Thread 0x7fffc9ffb400 (LWP 1887013) exited]
[Thread 0x7fffcaffd400 (LWP 1887011) exited]
[Thread 0x7fffcb7fe400 (LWP 1887010) exited]
[Thread 0x7fffcbfff400 (LWP 1887009) exited]
[Thread 0x7fffecff9400 (LWP 1887008) exited]
[Thread 0x7fffed7fa400 (LWP 1887007) exited]
[Thread 0x7fffedffb400 (LWP 1887006) exited]
[Thread 0x7fffee7fc400 (LWP 1887005) exited]
[Thread 0x7fffeeffd400 (LWP 1887004) exited]
[Thread 0x7fffef7fe400 (LWP 1887003) exited]
[Thread 0x7fffeffff400 (LWP 1887002) exited]
[Thread 0x7ffff4e5a400 (LWP 1887001) exited]
[Thread 0x7ffff565b400 (LWP 1887000) exited]
[Thread 0x7ffff6e5d400 (LWP 1886999) exited]
[Thread 0x7ffff765e400 (LWP 1886998) exited]
[Thread 0x7ffff7c1ac00 (LWP 1886964) exited]
[Thread 0x7fffca7fc400 (LWP 1887012) exited]
[New process 1886964]
Program terminated with signal SIGFPE, Arithmetic exception.
The program no longer exists.
The detailed output:
./dpdk-testpmd -a 0000:7d:00.0 -a 0000:7d:00.2 --file-prefix=feng -l 0-20 -- -i --rxq=64 --txq=64 --nb-cores=16
EAL: Detected CPU lcores: 96
EAL: Detected NUMA nodes: 4
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/feng/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: DPDK is running on a NUMA system, but is compiled without NUMA support.
EAL: This will have adverse consequences for performance and usability.
EAL: Please use --legacy-mem option, or recompile with NUMA support.
EAL: Using IOMMU type 1 (Type 1)
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=307456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
HNS3_DRIVER: 0000:7d:00.0 hns3_set_fiber_port_link_speed(): auto-negotiation is not supported, use default fixed speed!
Port 0: 00:18:2D:00:00:79
Configuring Port 1 (socket 0)
HNS3_DRIVER: 0000:7d:00.2 hns3_set_fiber_port_link_speed(): auto-negotiation is not supported, use default fixed speed!
Port 1: 00:18:2D:02:00:79
Checking link statuses...
Done
testpmd>
testpmd>
testpmd> HNS3_DRIVER: 0000:7d:00.0 hns3_update_link_status(): Link status change to up!
Port 0: link state change event
testpmd>
testpmd> port stop all
Stopping ports...
Checking link statuses...
Done
testpmd> port config 0 dcb vt off 8 pfc on
In DCB mode, all forwarding ports must be configured in this mode.
testpmd> port config 1 dcb vt off 8 pfc on
testpmd> port start all
Configuring Port 0 (socket 0)
HNS3_DRIVER: 0000:7d:00.0 hns3_set_fiber_port_link_speed(): auto-negotiation is not supported, use default fixed speed!
Port 0: 00:18:2D:00:00:79
Configuring Port 1 (socket 0)
HNS3_DRIVER: 0000:7d:00.2 hns3_set_fiber_port_link_speed(): auto-negotiation is not supported, use default fixed speed!
Port 1: 00:18:2D:02:00:79
Checking link statuses...
Done
testpmd> port stop all
Stopping ports...
Checking link statuses...
Done
testpmd> port config 0 dcb vt off 4 pfc on
testpmd>
testpmd> start
Not all ports were started
testpmd> port start all
Configuring Port 0 (socket 0)
HNS3_DRIVER: 0000:7d:00.0 hns3_set_fiber_port_link_speed(): auto-negotiation is not supported, use default fixed speed!
Port 0: 00:18:2D:00:00:79
HNS3_DRIVER: 0000:7d:00.2 hns3_set_fiber_port_link_speed(): auto-negotiation is not supported, use default fixed speed!
Port 1: 00:18:2D:02:00:79
Checking link statuses...
Done
testpmd> HNS3_DRIVER: 0000:7d:00.0 hns3_update_link_status(): Link status change to up!
Port 0: link state change event
testpmd> start
io packet forwarding - ports=2 - cores=12 - streams=128 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 16 streams:
RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=1 (socket 0) -> TX P=1/Q=1 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=2 (socket 0) -> TX P=1/Q=2 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=3 (socket 0) -> TX P=1/Q=3 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=4 (socket 0) -> TX P=1/Q=4 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=5 (socket 0) -> TX P=1/Q=5 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=6 (socket 0) -> TX P=1/Q=6 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=7 (socket 0) -> TX P=1/Q=7 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=8 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=9 (socket 0) -> TX P=1/Q=1 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=10 (socket 0) -> TX P=1/Q=2 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=11 (socket 0) -> TX P=1/Q=3 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=12 (socket 0) -> TX P=1/Q=4 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=13 (socket 0) -> TX P=1/Q=5 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=14 (socket 0) -> TX P=1/Q=6 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=15 (socket 0) -> TX P=1/Q=7 (socket 0) peer=02:00:00:00:00:01
Logical Core 2 (socket 0) forwards packets on 16 streams:
RX P=0/Q=16 (socket 0) -> TX P=1/Q=8 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=17 (socket 0) -> TX P=1/Q=9 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=18 (socket 0) -> TX P=1/Q=10 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=19 (socket 0) -> TX P=1/Q=11 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=20 (socket 0) -> TX P=1/Q=12 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=21 (socket 0) -> TX P=1/Q=13 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=22 (socket 0) -> TX P=1/Q=14 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=23 (socket 0) -> TX P=1/Q=15 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=24 (socket 0) -> TX P=1/Q=8 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=25 (socket 0) -> TX P=1/Q=9 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=26 (socket 0) -> TX P=1/Q=10 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=27 (socket 0) -> TX P=1/Q=11 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=28 (socket 0) -> TX P=1/Q=12 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=29 (socket 0) -> TX P=1/Q=13 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=30 (socket 0) -> TX P=1/Q=14 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=31 (socket 0) -> TX P=1/Q=15 (socket 0) peer=02:00:00:00:00:01
Logical Core 3 (socket 0) forwards packets on 16 streams:
RX P=0/Q=32 (socket 0) -> TX P=1/Q=16 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=33 (socket 0) -> TX P=1/Q=17 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=34 (socket 0) -> TX P=1/Q=18 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=35 (socket 0) -> TX P=1/Q=19 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=36 (socket 0) -> TX P=1/Q=20 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=37 (socket 0) -> TX P=1/Q=21 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=38 (socket 0) -> TX P=1/Q=22 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=39 (socket 0) -> TX P=1/Q=23 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=40 (socket 0) -> TX P=1/Q=16 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=41 (socket 0) -> TX P=1/Q=17 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=42 (socket 0) -> TX P=1/Q=18 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=43 (socket 0) -> TX P=1/Q=19 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=44 (socket 0) -> TX P=1/Q=20 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=45 (socket 0) -> TX P=1/Q=21 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=46 (socket 0) -> TX P=1/Q=22 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=47 (socket 0) -> TX P=1/Q=23 (socket 0) peer=02:00:00:00:00:01
Logical Core 4 (socket 0) forwards packets on 16 streams:
RX P=0/Q=48 (socket 0) -> TX P=1/Q=24 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=49 (socket 0) -> TX P=1/Q=25 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=50 (socket 0) -> TX P=1/Q=26 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=51 (socket 0) -> TX P=1/Q=27 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=52 (socket 0) -> TX P=1/Q=28 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=53 (socket 0) -> TX P=1/Q=29 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=54 (socket 0) -> TX P=1/Q=30 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=55 (socket 0) -> TX P=1/Q=31 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=56 (socket 0) -> TX P=1/Q=24 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=57 (socket 0) -> TX P=1/Q=25 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=58 (socket 0) -> TX P=1/Q=26 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=59 (socket 0) -> TX P=1/Q=27 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=60 (socket 0) -> TX P=1/Q=28 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=61 (socket 0) -> TX P=1/Q=29 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=62 (socket 0) -> TX P=1/Q=30 (socket 0) peer=02:00:00:00:00:01
RX P=0/Q=63 (socket 0) -> TX P=1/Q=31 (socket 0) peer=02:00:00:00:00:01
Logical Core 5 (socket 0) forwards packets on 8 streams:
RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=2 (socket 0) -> TX P=0/Q=2 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=3 (socket 0) -> TX P=0/Q=3 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=4 (socket 0) -> TX P=0/Q=4 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=5 (socket 0) -> TX P=0/Q=5 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=6 (socket 0) -> TX P=0/Q=6 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=7 (socket 0) -> TX P=0/Q=7 (socket 0) peer=02:00:00:00:00:00
Logical Core 6 (socket 0) forwards packets on 8 streams:
RX P=1/Q=8 (socket 0) -> TX P=0/Q=16 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=9 (socket 0) -> TX P=0/Q=17 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=10 (socket 0) -> TX P=0/Q=18 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=11 (socket 0) -> TX P=0/Q=19 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=12 (socket 0) -> TX P=0/Q=20 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=13 (socket 0) -> TX P=0/Q=21 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=14 (socket 0) -> TX P=0/Q=22 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=15 (socket 0) -> TX P=0/Q=23 (socket 0) peer=02:00:00:00:00:00
Logical Core 7 (socket 0) forwards packets on 8 streams:
RX P=1/Q=16 (socket 0) -> TX P=0/Q=32 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=17 (socket 0) -> TX P=0/Q=33 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=18 (socket 0) -> TX P=0/Q=34 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=19 (socket 0) -> TX P=0/Q=35 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=20 (socket 0) -> TX P=0/Q=36 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=21 (socket 0) -> TX P=0/Q=37 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=22 (socket 0) -> TX P=0/Q=38 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=23 (socket 0) -> TX P=0/Q=39 (socket 0) peer=02:00:00:00:00:00
Logical Core 8 (socket 0) forwards packets on 8 streams:
RX P=1/Q=24 (socket 0) -> TX P=0/Q=48 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=25 (socket 0) -> TX P=0/Q=49 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=26 (socket 0) -> TX P=0/Q=50 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=27 (socket 0) -> TX P=0/Q=51 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=28 (socket 0) -> TX P=0/Q=52 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=29 (socket 0) -> TX P=0/Q=53 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=30 (socket 0) -> TX P=0/Q=54 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=31 (socket 0) -> TX P=0/Q=55 (socket 0) peer=02:00:00:00:00:00
Logical Core 9 (socket 0) forwards packets on 8 streams:
RX P=1/Q=32 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=33 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=34 (socket 0) -> TX P=0/Q=2 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=35 (socket 0) -> TX P=0/Q=3 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=36 (socket 0) -> TX P=0/Q=4 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=37 (socket 0) -> TX P=0/Q=5 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=38 (socket 0) -> TX P=0/Q=6 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=39 (socket 0) -> TX P=0/Q=7 (socket 0) peer=02:00:00:00:00:00
Logical Core 10 (socket 0) forwards packets on 8 streams:
RX P=1/Q=40 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=41 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=42 (socket 0) -> TX P=0/Q=2 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=43 (socket 0) -> TX P=0/Q=3 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=44 (socket 0) -> TX P=0/Q=4 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=45 (socket 0) -> TX P=0/Q=5 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=46 (socket 0) -> TX P=0/Q=6 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=47 (socket 0) -> TX P=0/Q=7 (socket 0) peer=02:00:00:00:00:00
Logical Core 11 (socket 0) forwards packets on 8 streams:
RX P=1/Q=48 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=49 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=50 (socket 0) -> TX P=0/Q=2 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=51 (socket 0) -> TX P=0/Q=3 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=52 (socket 0) -> TX P=0/Q=4 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=53 (socket 0) -> TX P=0/Q=5 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=54 (socket 0) -> TX P=0/Q=6 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=55 (socket 0) -> TX P=0/Q=7 (socket 0) peer=02:00:00:00:00:00
Logical Core 12 (socket 0) forwards packets on 8 streams:
RX P=1/Q=56 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=57 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=58 (socket 0) -> TX P=0/Q=2 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=59 (socket 0) -> TX P=0/Q=3 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=60 (socket 0) -> TX P=0/Q=4 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=61 (socket 0) -> TX P=0/Q=5 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=62 (socket 0) -> TX P=0/Q=6 (socket 0) peer=02:00:00:00:00:00
RX P=1/Q=63 (socket 0) -> TX P=0/Q=7 (socket 0) peer=02:00:00:00:00:00
io packet forwarding packets/burst=32
nb forwarding cores=16 - nb forwarding ports=2
port 0: RX queue number: 64 Tx queue number: 64
Rx offloads=0x80200 Tx offloads=0x10000
RX queue: 0
RX desc=1024 - RX free threshold=64
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x80200
TX queue: 0
TX desc=1024 - TX free threshold=928
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
port 1: RX queue number: 64 Tx queue number: 64
Rx offloads=0x80200 Tx offloads=0x10000
RX queue: 0
RX desc=1024 - RX free threshold=64
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x80200
TX queue: 0
TX desc=1024 - TX free threshold=928
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x10000 - TX RS bit threshold=32
testpmd>
>
> Expected behaviour:
>
> After reconfiguring PFC from 8 to 4 TCs, the forwarding TC mask should
> reflect the configured number of TCs (mask = 0xF).
>>
>> Additionally, the existing VMDQ pool guard in dcb_fwd_config_setup()
>> only checks RX queue counts, missing the case where the TX port has
>> zero queues for a given pool/TC combination. When nb_tx_queue is 0,
>> the expression "j % nb_tx_queue" triggers a SIGFPE (integer division by zero).
>
> The dcb_fwd_check_cores_per_tc() function checks this case. So please provide the steps to reproduce it.
>
>>
>> Fix this by:
>> 1. Updating dcb_fwd_tc_mask after port DCB reconfiguration using the
>> user requested num_tcs value, so fwd_config_setup() sees the correct
>> mask.
>> 2. Extending the existing pool guard to also check TX queue counts.
>> 3. Adding a defensive break after the division by dcb_fwd_tc_cores to
>> catch integer truncation to zero.
>>
>> Fixes: 0ecbf93f5001 ("app/testpmd: add command to disable DCB")
>> Cc: stable@dpdk.org
>>
>> Signed-off-by: Talluri Chaitanyababu
>> <chaitanyababux.talluri@intel.com>
>> Signed-off-by: Shaiq Wani <shaiq.wani@intel.com>
>> ---
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v2] app/testpmd: fix DCB forwarding TC mask and queue guard
2026-03-18 7:21 ` Talluri, ChaitanyababuX
@ 2026-03-19 1:19 ` fengchengwen
0 siblings, 0 replies; 18+ messages in thread
From: fengchengwen @ 2026-03-19 1:19 UTC (permalink / raw)
To: Talluri, ChaitanyababuX, dev@dpdk.org, Richardson, Bruce,
stephen@networkplumber.org, Singh, Aman Deep
Cc: Wani, Shaiq, stable@dpdk.org
On 3/18/2026 3:21 PM, Talluri, ChaitanyababuX wrote:
>
>
> -----Original Message-----
> From: fengchengwen <fengchengwen@huawei.com>
> Sent: 17 March 2026 06:38
> To: Talluri, ChaitanyababuX <chaitanyababux.talluri@intel.com>; dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>; stephen@networkplumber.org; Singh, Aman Deep <aman.deep.singh@intel.com>
> Cc: Wani, Shaiq <shaiq.wani@intel.com>; stable@dpdk.org
> Subject: Re: [PATCH v2] app/testpmd: fix DCB forwarding TC mask and queue guard
>
> On 3/16/2026 2:05 PM, Talluri, ChaitanyababuX wrote:
>>
>>
>> -----Original Message-----
>> From: fengchengwen <fengchengwen@huawei.com>
>> Sent: 13 March 2026 05:49
>> To: Talluri, ChaitanyababuX <chaitanyababux.talluri@intel.com>;
>> dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>;
>> stephen@networkplumber.org; Singh, Aman Deep
>> <aman.deep.singh@intel.com>
>> Cc: Wani, Shaiq <shaiq.wani@intel.com>; stable@dpdk.org
>> Subject: Re: [PATCH v2] app/testpmd: fix DCB forwarding TC mask and
>> queue guard
>>
>> On 3/12/2026 6:36 PM, Talluri Chaitanyababu wrote:
>>> Update forwarding TC mask based on configured traffic classes to
>>> properly handle both 4 TC and 8 TC modes. The bitmask calculation
>>> (1u << nb_tcs) - 1 correctly creates masks for all available traffic
>>> classes (0xF for 4 TCs, 0xFF for 8 TCs).
>>>
>>> When the mask is not updated after a TC configuration change, it
>>> stays at the default 0xFF, which causes dcb_fwd_tc_update_dcb_info()
>>> to skip the compress logic entirely (early return when mask ==
>>> DEFAULT_DCB_FWD_TC_MASK).
>>> This can lead to inconsistent queue allocations.
>>
>> Sorry, I cannot understand your question. Could you please provide some steps to reproduce the issue and the problem phenomenon?
>>
>> Please find the reproduction steps and problem description below.
>>
>> 1. Bind 2 ports to vfio-pci:
>> ./usertools/dpdk-devbind.py -b vfio-pci 0000:af:00.0 0000:af:00.1
>> 2. Start testpmd and reset DCB PFC:
>> ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-20 -n 4 -a 0000:af:00.0 -a 0000:af:00.1 --file-prefix=testpmd1 -- -i --rxq=256 --txq=256 --nb-cores=16 --total-num-mbufs=600000
>>
>> testpmd> port stop all
>> testpmd> port config 0 dcb vt off 8 pfc on
>> testpmd> port config 1 dcb vt off 8 pfc on
>> testpmd> port start all
>> testpmd> port stop all
>> testpmd> port config 0 dcb vt off 4 pfc on
>>
>> Test Log:
>> root@srv13:~/test-1/dpdk#
>> ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-20 -n 4 -a
>> 0000:31:00.0 -a 0000:4b:00.0 --file-prefix=testpmd1 -- -i --rxq=256
>> --txq=256 --nb-cores=16 --total-num-mbufs=600000
>> EAL: Detected CPU lcores: 96
>> EAL: Detected NUMA nodes: 2
>> EAL: Detected static linkage of DPDK
>> EAL: Multi-process socket /var/run/dpdk/testpmd1/mp_socket
>> EAL: Selected IOVA mode 'VA'
>> EAL: VFIO support initialized
>> EAL: Using IOMMU type 1 (Type 1)
>> ICE_INIT: ice_load_pkg_type(): Active package is: 1.3.50.0, ICE OS
>> Default Package (single VLAN mode)
>> ICE_INIT: ice_load_pkg_type(): Active package is: 1.3.50.0, ICE OS
>> Default Package (single VLAN mode)
>> Interactive-mode selected
>> testpmd: create a new mbuf pool <mb_pool_0>: n=600000, size=2176, socket=0
>> testpmd: preferred mempool ops selected: ring_mp_mc
>> Configuring Port 0 (socket 0)
>> ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 0).
>> ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 0).
>> Port 0: B4:96:91:9F:5E:B0
>> Configuring Port 1 (socket 0)
>> ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 1).
>> ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 1).
>> Port 1: 68:05:CA:A3:13:4C
>> Checking link statuses...
>> Done
>> testpmd> port stop all
>> Stopping ports...
>>
>> Port 0: link state change event
>> Checking link statuses...
>>
>> Port 1: link state change event
>> Done
>> testpmd> port config 0 dcb vt off 8 pfc on
>> In DCB mode, all forwarding ports must be configured in this mode.
>> testpmd> port config 1 dcb vt off 8 pfc on
>> testpmd> port start all
>> Configuring Port 0 (socket 0)
>> ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 0).
>> ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 0).
>>
>> Port 0: link state change event
>> Port 0: B4:96:91:9F:5E:B0
>> Configuring Port 1 (socket 0)
>> ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 1).
>> ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 1).
>> Port 1: 68:05:CA:A3:13:4C
>> Checking link statuses...
>> Done
>> testpmd> port stop all
>> Stopping ports...
>>
>> Port 0: link state change event
>> Checking link statuses...
>>
>> Port 1: link state change event
>> Done
>> testpmd> port config 0 dcb vt off 4 pfc on
>> Floating point exception
>
> I just tried to reproduce this on the Kunpeng platform but saw no error:
> dpdk-testpmd -a 0000:7d:00.0 -a 0000:7d:00.2 --file-prefix=feng -l 0-20 -- -i --rxq=64 --txq=64 --nb-cores=16
> PS: this NIC only supports a maximum of 64 queues
>
> So could you show the gdb output?
>
> As requested, please find the GDB output below.
>
> gdb --args ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd \
> -l 1-20 -n 4 \
> -a 0000:31:00.0 -a 0000:4b:00.0 \
> --file-prefix=testpmd1 \
> -- -i --rxq=256 --txq=256 --nb-cores=16 --total-num-mbufs=600000
> GNU gdb (Ubuntu 15.0.50.20240403-0ubuntu1) 15.0.50.20240403-git
> Copyright (C) 2024 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.
> Type "show copying" and "show warranty" for details.
> This GDB was configured as "x86_64-linux-gnu".
> Type "show configuration" for configuration details.
> For bug reporting instructions, please see:
> <https://www.gnu.org/software/gdb/bugs/>.
> Find the GDB manual and other documentation resources online at:
> <http://www.gnu.org/software/gdb/documentation/>.
>
> For help, type "help".
> Type "apropos word" to search for commands related to "word"...
> Reading symbols from ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd...
> (gdb) run
> Starting program: /home/intel/withoutfix/dpdk/x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-20 -n 4 -a 0000:31:00.0 -a 0000:4b:00.0 --file-prefix=testpmd1 -- -i --rxq=256 --txq=256 --nb-cores=16 --total-num-mbufs=600000
>
> This GDB supports auto-downloading debuginfo from the following URLs:
> <https://debuginfod.ubuntu.com>
> Enable debuginfod for this session? (y or [n]) y
> Debuginfod has been enabled.
> To make this setting permanent, add 'set debuginfod enabled on' to .gdbinit.
> Downloading separate debug info for system-supplied DSO at 0x7ffff7fc3000
> Downloading separate debug info for /lib/x86_64-linux-gnu/libelf.so.1
> Downloading separate debug info for /lib/x86_64-linux-gnu/libpcap.so.0.8
> Downloading separate debug info for /lib/x86_64-linux-gnu/libmlx5.so.1
> warning: could not find '.gnu_debugaltlink' file for /lib/x86_64-linux-gnu/libmlx5.so.1
> Downloading separate debug info for /lib/x86_64-linux-gnu/libmlx5.so.1
> Downloading separate debug info for /lib/x86_64-linux-gnu/libibverbs.so.1
> warning: could not find '.gnu_debugaltlink' file for /lib/x86_64-linux-gnu/libmana.so.1
> Downloading separate debug info for /lib/x86_64-linux-gnu/libmana.so.1
> warning: could not find '.gnu_debugaltlink' file for /lib/x86_64-linux-gnu/libmlx4.so.1
> Downloading separate debug info for /lib/x86_64-linux-gnu/libmlx4.so.1
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
> Downloading separate debug info for /lib/x86_64-linux-gnu/libsystemd.so.0
> Downloading separate debug info for /lib/x86_64-linux-gnu/libcap.so.2
> warning: could not find '.gnu_debugaltlink' file for /lib/x86_64-linux-gnu/libcap.so.2
> Downloading separate debug info for /lib/x86_64-linux-gnu/libcap.so.2
> Downloading separate debug info for /lib/x86_64-linux-gnu/libgcrypt.so.20
> Downloading separate debug info for /lib/x86_64-linux-gnu/liblzma.so.5
> Downloading separate debug info for /lib/x86_64-linux-gnu/libgpg-error.so.0
> EAL: Detected CPU lcores: 96
> EAL: Detected NUMA nodes: 2
> EAL: Detected static linkage of DPDK
> [New Thread 0x7ffff765e400 (LWP 1886998)]
> EAL: Multi-process socket /var/run/dpdk/testpmd1/mp_socket
> [New Thread 0x7ffff6e5d400 (LWP 1886999)]
> EAL: Selected IOVA mode 'VA'
> EAL: VFIO support initialized
> [New Thread 0x7ffff565b400 (LWP 1887000)]
> [New Thread 0x7ffff4e5a400 (LWP 1887001)]
> [New Thread 0x7fffeffff400 (LWP 1887002)]
> [New Thread 0x7fffef7fe400 (LWP 1887003)]
> [New Thread 0x7fffeeffd400 (LWP 1887004)]
> [New Thread 0x7fffee7fc400 (LWP 1887005)]
> [New Thread 0x7fffedffb400 (LWP 1887006)]
> [New Thread 0x7fffed7fa400 (LWP 1887007)]
> [New Thread 0x7fffecff9400 (LWP 1887008)]
> [New Thread 0x7fffcbfff400 (LWP 1887009)]
> [New Thread 0x7fffcb7fe400 (LWP 1887010)]
> [New Thread 0x7fffcaffd400 (LWP 1887011)]
> [New Thread 0x7fffca7fc400 (LWP 1887012)]
> [New Thread 0x7fffc9ffb400 (LWP 1887013)]
> [New Thread 0x7fffc97fa400 (LWP 1887014)]
> [New Thread 0x7fffc8ff9400 (LWP 1887015)]
> [New Thread 0x7fffabfff400 (LWP 1887016)]
> [New Thread 0x7fffab7fe400 (LWP 1887017)]
> [New Thread 0x7fffaaffd400 (LWP 1887018)]
> EAL: Using IOMMU type 1 (Type 1)
> ICE_INIT: ice_load_pkg_type(): Active package is: 1.3.50.0, ICE OS Default Package (single VLAN mode)
> ICE_INIT: ice_load_pkg_type(): Active package is: 1.3.50.0, ICE OS Default Package (single VLAN mode)
> [New Thread 0x7fffaa7fc400 (LWP 1887025)]
> [New Thread 0x7fffa9ffb400 (LWP 1887026)]
> Interactive-mode selected
> testpmd: create a new mbuf pool <mb_pool_0>: n=600000, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> Configuring Port 0 (socket 0)
> ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 0).
> ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 0).
>
> Port 0: link state change event
> Port 0: B4:96:91:9F:5E:B0
> Configuring Port 1 (socket 0)
> ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 1).
> ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 1).
> Port 1: 68:05:CA:A3:13:4C
> Checking link statuses...
> Done
> testpmd>
> testpmd>
> testpmd> port stop all
> Stopping ports...
>
> Port 0: link state change event
> Checking link statuses...
>
> Port 1: link state change event
> Done
> testpmd> port config 0 dcb vt off 8 pfc on
> In DCB mode, all forwarding ports must be configured in this mode.
> testpmd> port config 1 dcb vt off 8 pfc on
> testpmd> port start all
> Configuring Port 0 (socket 0)
> ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 0).
> ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 0).
> Port 0: B4:96:91:9F:5E:B0
> Configuring Port 1 (socket 0)
> ICE_DRIVER: ice_set_tx_function(): Using Vector AVX2 (port 1).
> ICE_DRIVER: ice_set_rx_function(): Using Offload Vector AVX2 (port 1).
> Port 1: 68:05:CA:A3:13:4C
> Checking link statuses...
> Done
> testpmd> port stop all
> Stopping ports...
>
> Port 0: link state change event
> Checking link statuses...
>
> Port 1: link state change event
> Done
> testpmd> port config 0 dcb vt off 4 pfc on
>
> Thread 1 "dpdk-testpmd" received signal SIGFPE, Arithmetic exception.
> 0x0000555555795011 in dcb_fwd_config_setup () at ../app/test-pmd/config.c:5470
> 5470 fs->tx_queue = txq + j % nb_tx_queue;
Thanks
It seems nb_tx_queue is zero.
I added a log to print the value on the Kunpeng platform, and it reproduces:
testpmd> port config 0 dcb vt off 4 pfc on
nb_tx_queue is zero !!!
nb_tx_queue is zero !!!
nb_tx_queue is zero !!!
nb_tx_queue is zero !!!
...
The divide-by-zero does not trigger an exception on ARM platforms by default.
So the root cause is clear:
1\ Port 1 has 8 TCs, and each TC has its corresponding queues.
2\ Port 0 only has 4 TCs; the queues of TC[0~3] are valid, but TC[4~7] are invalid (nb_tx_queue is zero).
3\ The above command makes port 1's TC[4~7] forward to port 0's TC[4~7], but because port 0's TC[4~7] are invalid, this leads to the exception.
BTW: I just rebased to f87fa31a9304, which doesn't include the dcb fwd-tc/fwd-tc-cores commands, and the divide-by-zero problem still exists there.
So this commit's Fixes tag is wrong.
> (gdb) n
> Couldn't get registers: No such process.
> (gdb) [Thread 0x7fffaa7fc400 (LWP 1887025) exited]
> [Thread 0x7fffa9ffb400 (LWP 1887026) exited]
> [Thread 0x7fffaaffd400 (LWP 1887018) exited]
> [Thread 0x7fffab7fe400 (LWP 1887017) exited]
> [Thread 0x7fffabfff400 (LWP 1887016) exited]
> [Thread 0x7fffc8ff9400 (LWP 1887015) exited]
> [Thread 0x7fffc97fa400 (LWP 1887014) exited]
> [Thread 0x7fffc9ffb400 (LWP 1887013) exited]
> [Thread 0x7fffcaffd400 (LWP 1887011) exited]
> [Thread 0x7fffcb7fe400 (LWP 1887010) exited]
> [Thread 0x7fffcbfff400 (LWP 1887009) exited]
> [Thread 0x7fffecff9400 (LWP 1887008) exited]
> [Thread 0x7fffed7fa400 (LWP 1887007) exited]
> [Thread 0x7fffedffb400 (LWP 1887006) exited]
> [Thread 0x7fffee7fc400 (LWP 1887005) exited]
> [Thread 0x7fffeeffd400 (LWP 1887004) exited]
> [Thread 0x7fffef7fe400 (LWP 1887003) exited]
> [Thread 0x7fffeffff400 (LWP 1887002) exited]
> [Thread 0x7ffff4e5a400 (LWP 1887001) exited]
> [Thread 0x7ffff565b400 (LWP 1887000) exited]
> [Thread 0x7ffff6e5d400 (LWP 1886999) exited]
> [Thread 0x7ffff765e400 (LWP 1886998) exited]
> [Thread 0x7ffff7c1ac00 (LWP 1886964) exited]
> [Thread 0x7fffca7fc400 (LWP 1887012) exited]
> [New process 1886964]
>
> Program terminated with signal SIGFPE, Arithmetic exception.
> The program no longer exists.
>
> The detailed output:
> ./dpdk-testpmd -a 0000:7d:00.0 -a 0000:7d:00.2 --file-prefix=feng -l 0-20 -- -i --rxq=64 --txq=64 --nb-cores=16
> EAL: Detected CPU lcores: 96
> EAL: Detected NUMA nodes: 4
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/feng/mp_socket
> EAL: Selected IOVA mode 'VA'
> EAL: VFIO support initialized
> EAL: DPDK is running on a NUMA system, but is compiled without NUMA support.
> EAL: This will have adverse consequences for performance and usability.
> EAL: Please use --legacy-mem option, or recompile with NUMA support.
> EAL: Using IOMMU type 1 (Type 1)
> Interactive-mode selected
> testpmd: create a new mbuf pool <mb_pool_0>: n=307456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> Configuring Port 0 (socket 0)
> HNS3_DRIVER: 0000:7d:00.0 hns3_set_fiber_port_link_speed(): auto-negotiation is not supported, use default fixed speed!
> Port 0: 00:18:2D:00:00:79
> Configuring Port 1 (socket 0)
> HNS3_DRIVER: 0000:7d:00.2 hns3_set_fiber_port_link_speed(): auto-negotiation is not supported, use default fixed speed!
> Port 1: 00:18:2D:02:00:79
> Checking link statuses...
> Done
> testpmd>
> testpmd>
> testpmd> HNS3_DRIVER: 0000:7d:00.0 hns3_update_link_status(): Link status change to up!
>
> Port 0: link state change event
>
> testpmd>
> testpmd> port stop all
> Stopping ports...
> Checking link statuses...
> Done
> testpmd> port config 0 dcb vt off 8 pfc on
> In DCB mode, all forwarding ports must be configured in this mode.
> testpmd> port config 1 dcb vt off 8 pfc on
> testpmd> port start all
> Configuring Port 0 (socket 0)
> HNS3_DRIVER: 0000:7d:00.0 hns3_set_fiber_port_link_speed(): auto-negotiation is not supported, use default fixed speed!
> Port 0: 00:18:2D:00:00:79
> Configuring Port 1 (socket 0)
> HNS3_DRIVER: 0000:7d:00.2 hns3_set_fiber_port_link_speed(): auto-negotiation is not supported, use default fixed speed!
> Port 1: 00:18:2D:02:00:79
> Checking link statuses...
> Done
> testpmd> port stop all
> Stopping ports...
> Checking link statuses...
> Done
> testpmd> port config 0 dcb vt off 4 pfc on
> testpmd>
> testpmd> start
> Not all ports were started
> testpmd> port start all
> Configuring Port 0 (socket 0)
> HNS3_DRIVER: 0000:7d:00.0 hns3_set_fiber_port_link_speed(): auto-negotiation is not supported, use default fixed speed!
> Port 0: 00:18:2D:00:00:79
> HNS3_DRIVER: 0000:7d:00.2 hns3_set_fiber_port_link_speed(): auto-negotiation is not supported, use default fixed speed!
> Port 1: 00:18:2D:02:00:79
> Checking link statuses...
> Done
> testpmd> HNS3_DRIVER: 0000:7d:00.0 hns3_update_link_status(): Link status change to up!
>
> Port 0: link state change event
>
> testpmd> start
> io packet forwarding - ports=2 - cores=12 - streams=128 - NUMA support enabled, MP allocation mode: native
> Logical Core 1 (socket 0) forwards packets on 16 streams:
> RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=1 (socket 0) -> TX P=1/Q=1 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=2 (socket 0) -> TX P=1/Q=2 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=3 (socket 0) -> TX P=1/Q=3 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=4 (socket 0) -> TX P=1/Q=4 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=5 (socket 0) -> TX P=1/Q=5 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=6 (socket 0) -> TX P=1/Q=6 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=7 (socket 0) -> TX P=1/Q=7 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=8 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=9 (socket 0) -> TX P=1/Q=1 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=10 (socket 0) -> TX P=1/Q=2 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=11 (socket 0) -> TX P=1/Q=3 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=12 (socket 0) -> TX P=1/Q=4 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=13 (socket 0) -> TX P=1/Q=5 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=14 (socket 0) -> TX P=1/Q=6 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=15 (socket 0) -> TX P=1/Q=7 (socket 0) peer=02:00:00:00:00:01
> Logical Core 2 (socket 0) forwards packets on 16 streams:
> RX P=0/Q=16 (socket 0) -> TX P=1/Q=8 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=17 (socket 0) -> TX P=1/Q=9 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=18 (socket 0) -> TX P=1/Q=10 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=19 (socket 0) -> TX P=1/Q=11 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=20 (socket 0) -> TX P=1/Q=12 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=21 (socket 0) -> TX P=1/Q=13 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=22 (socket 0) -> TX P=1/Q=14 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=23 (socket 0) -> TX P=1/Q=15 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=24 (socket 0) -> TX P=1/Q=8 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=25 (socket 0) -> TX P=1/Q=9 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=26 (socket 0) -> TX P=1/Q=10 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=27 (socket 0) -> TX P=1/Q=11 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=28 (socket 0) -> TX P=1/Q=12 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=29 (socket 0) -> TX P=1/Q=13 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=30 (socket 0) -> TX P=1/Q=14 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=31 (socket 0) -> TX P=1/Q=15 (socket 0) peer=02:00:00:00:00:01
> Logical Core 3 (socket 0) forwards packets on 16 streams:
> RX P=0/Q=32 (socket 0) -> TX P=1/Q=16 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=33 (socket 0) -> TX P=1/Q=17 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=34 (socket 0) -> TX P=1/Q=18 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=35 (socket 0) -> TX P=1/Q=19 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=36 (socket 0) -> TX P=1/Q=20 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=37 (socket 0) -> TX P=1/Q=21 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=38 (socket 0) -> TX P=1/Q=22 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=39 (socket 0) -> TX P=1/Q=23 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=40 (socket 0) -> TX P=1/Q=16 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=41 (socket 0) -> TX P=1/Q=17 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=42 (socket 0) -> TX P=1/Q=18 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=43 (socket 0) -> TX P=1/Q=19 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=44 (socket 0) -> TX P=1/Q=20 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=45 (socket 0) -> TX P=1/Q=21 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=46 (socket 0) -> TX P=1/Q=22 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=47 (socket 0) -> TX P=1/Q=23 (socket 0) peer=02:00:00:00:00:01
> Logical Core 4 (socket 0) forwards packets on 16 streams:
> RX P=0/Q=48 (socket 0) -> TX P=1/Q=24 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=49 (socket 0) -> TX P=1/Q=25 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=50 (socket 0) -> TX P=1/Q=26 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=51 (socket 0) -> TX P=1/Q=27 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=52 (socket 0) -> TX P=1/Q=28 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=53 (socket 0) -> TX P=1/Q=29 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=54 (socket 0) -> TX P=1/Q=30 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=55 (socket 0) -> TX P=1/Q=31 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=56 (socket 0) -> TX P=1/Q=24 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=57 (socket 0) -> TX P=1/Q=25 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=58 (socket 0) -> TX P=1/Q=26 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=59 (socket 0) -> TX P=1/Q=27 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=60 (socket 0) -> TX P=1/Q=28 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=61 (socket 0) -> TX P=1/Q=29 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=62 (socket 0) -> TX P=1/Q=30 (socket 0) peer=02:00:00:00:00:01
> RX P=0/Q=63 (socket 0) -> TX P=1/Q=31 (socket 0) peer=02:00:00:00:00:01
> Logical Core 5 (socket 0) forwards packets on 8 streams:
> RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=2 (socket 0) -> TX P=0/Q=2 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=3 (socket 0) -> TX P=0/Q=3 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=4 (socket 0) -> TX P=0/Q=4 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=5 (socket 0) -> TX P=0/Q=5 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=6 (socket 0) -> TX P=0/Q=6 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=7 (socket 0) -> TX P=0/Q=7 (socket 0) peer=02:00:00:00:00:00
> Logical Core 6 (socket 0) forwards packets on 8 streams:
> RX P=1/Q=8 (socket 0) -> TX P=0/Q=16 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=9 (socket 0) -> TX P=0/Q=17 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=10 (socket 0) -> TX P=0/Q=18 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=11 (socket 0) -> TX P=0/Q=19 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=12 (socket 0) -> TX P=0/Q=20 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=13 (socket 0) -> TX P=0/Q=21 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=14 (socket 0) -> TX P=0/Q=22 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=15 (socket 0) -> TX P=0/Q=23 (socket 0) peer=02:00:00:00:00:00
> Logical Core 7 (socket 0) forwards packets on 8 streams:
> RX P=1/Q=16 (socket 0) -> TX P=0/Q=32 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=17 (socket 0) -> TX P=0/Q=33 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=18 (socket 0) -> TX P=0/Q=34 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=19 (socket 0) -> TX P=0/Q=35 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=20 (socket 0) -> TX P=0/Q=36 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=21 (socket 0) -> TX P=0/Q=37 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=22 (socket 0) -> TX P=0/Q=38 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=23 (socket 0) -> TX P=0/Q=39 (socket 0) peer=02:00:00:00:00:00
> Logical Core 8 (socket 0) forwards packets on 8 streams:
> RX P=1/Q=24 (socket 0) -> TX P=0/Q=48 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=25 (socket 0) -> TX P=0/Q=49 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=26 (socket 0) -> TX P=0/Q=50 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=27 (socket 0) -> TX P=0/Q=51 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=28 (socket 0) -> TX P=0/Q=52 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=29 (socket 0) -> TX P=0/Q=53 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=30 (socket 0) -> TX P=0/Q=54 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=31 (socket 0) -> TX P=0/Q=55 (socket 0) peer=02:00:00:00:00:00
> Logical Core 9 (socket 0) forwards packets on 8 streams:
> RX P=1/Q=32 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=33 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=34 (socket 0) -> TX P=0/Q=2 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=35 (socket 0) -> TX P=0/Q=3 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=36 (socket 0) -> TX P=0/Q=4 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=37 (socket 0) -> TX P=0/Q=5 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=38 (socket 0) -> TX P=0/Q=6 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=39 (socket 0) -> TX P=0/Q=7 (socket 0) peer=02:00:00:00:00:00
> Logical Core 10 (socket 0) forwards packets on 8 streams:
> RX P=1/Q=40 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=41 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=42 (socket 0) -> TX P=0/Q=2 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=43 (socket 0) -> TX P=0/Q=3 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=44 (socket 0) -> TX P=0/Q=4 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=45 (socket 0) -> TX P=0/Q=5 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=46 (socket 0) -> TX P=0/Q=6 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=47 (socket 0) -> TX P=0/Q=7 (socket 0) peer=02:00:00:00:00:00
> Logical Core 11 (socket 0) forwards packets on 8 streams:
> RX P=1/Q=48 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=49 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=50 (socket 0) -> TX P=0/Q=2 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=51 (socket 0) -> TX P=0/Q=3 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=52 (socket 0) -> TX P=0/Q=4 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=53 (socket 0) -> TX P=0/Q=5 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=54 (socket 0) -> TX P=0/Q=6 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=55 (socket 0) -> TX P=0/Q=7 (socket 0) peer=02:00:00:00:00:00
> Logical Core 12 (socket 0) forwards packets on 8 streams:
> RX P=1/Q=56 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=57 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=58 (socket 0) -> TX P=0/Q=2 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=59 (socket 0) -> TX P=0/Q=3 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=60 (socket 0) -> TX P=0/Q=4 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=61 (socket 0) -> TX P=0/Q=5 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=62 (socket 0) -> TX P=0/Q=6 (socket 0) peer=02:00:00:00:00:00
> RX P=1/Q=63 (socket 0) -> TX P=0/Q=7 (socket 0) peer=02:00:00:00:00:00
>
> io packet forwarding packets/burst=32
> nb forwarding cores=16 - nb forwarding ports=2
> port 0: RX queue number: 64 Tx queue number: 64
> Rx offloads=0x80200 Tx offloads=0x10000
> RX queue: 0
> RX desc=1024 - RX free threshold=64
> RX threshold registers: pthresh=0 hthresh=0 wthresh=0
> RX Offloads=0x80200
> TX queue: 0
> TX desc=1024 - TX free threshold=928
> TX threshold registers: pthresh=0 hthresh=0 wthresh=0
> TX offloads=0x10000 - TX RS bit threshold=32
> port 1: RX queue number: 64 Tx queue number: 64
> Rx offloads=0x80200 Tx offloads=0x10000
> RX queue: 0
> RX desc=1024 - RX free threshold=64
> RX threshold registers: pthresh=0 hthresh=0 wthresh=0
> RX Offloads=0x80200
> TX queue: 0
> TX desc=1024 - TX free threshold=928
> TX threshold registers: pthresh=0 hthresh=0 wthresh=0
> TX offloads=0x10000 - TX RS bit threshold=32
> testpmd>
>
>
>>
>> Expected behaviour:
>>
>> After reconfiguring PFC from 8 to 4 TCs, the forwarding TC mask should
>> reflect the configured number of TCs (mask = 0xF).
>>>
>>> Additionally, the existing VMDQ pool guard in dcb_fwd_config_setup()
>>> only checks RX queue counts, missing the case where the TX port has
>>> zero queues for a given pool/TC combination. When nb_tx_queue is 0,
>>> the expression "j % nb_tx_queue" triggers a SIGFPE (integer division by zero).
>>
>> The dcb_fwd_check_cores_per_tc() checks this case, so please provide the reproduction steps.
>>
>>>
>>> Fix this by:
>>> 1. Updating dcb_fwd_tc_mask after port DCB reconfiguration using the
>>> user requested num_tcs value, so fwd_config_setup() sees the correct
>>> mask.
>>> 2. Extending the existing pool guard to also check TX queue counts.
>>> 3. Adding a defensive break after the division by dcb_fwd_tc_cores to
>>> catch integer truncation to zero.
>>>
>>> Fixes: 0ecbf93f5001 ("app/testpmd: add command to disable DCB")
>>> Cc: stable@dpdk.org
>>>
>>> Signed-off-by: Talluri Chaitanyababu
>>> <chaitanyababux.talluri@intel.com>
>>> Signed-off-by: Shaiq Wani <shaiq.wani@intel.com>
>>> ---
>
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH v3] app/testpmd: fix DCB forwarding TC mask and queue guard
2026-03-11 8:37 [PATCH] app/testpmd: fix DCB forwarding TC mask and queue guard Talluri Chaitanyababu
2026-03-11 15:56 ` Stephen Hemminger
2026-03-12 10:36 ` [PATCH v2] " Talluri Chaitanyababu
@ 2026-03-16 6:21 ` Talluri Chaitanyababu
2026-03-17 1:23 ` fengchengwen
2026-03-17 8:57 ` Thomas Monjalon
2026-03-18 6:17 ` [PATCH v4] app/testpmd: fix DCB forwarding TC mismatch handling Talluri Chaitanyababu
2026-03-20 6:29 ` [PATCH v5] " Talluri Chaitanyababu
4 siblings, 2 replies; 18+ messages in thread
From: Talluri Chaitanyababu @ 2026-03-16 6:21 UTC (permalink / raw)
To: dev, bruce.richardson, stephen, aman.deep.singh
Cc: shaiq.wani, Talluri Chaitanyababu, stable
Update forwarding TC mask based on configured traffic classes to properly
handle both 4 TC and 8 TC modes. The bitmask calculation (1u << nb_tcs) - 1
correctly creates masks for all available traffic classes (0xF for 4 TCs,
0xFF for 8 TCs).
When the mask is not updated after a TC configuration change, it stays at
the default 0xFF, which causes dcb_fwd_tc_update_dcb_info() to skip the
compress logic entirely (early return when mask ==
DEFAULT_DCB_FWD_TC_MASK).
This can lead to inconsistent queue allocations.
Additionally, the existing VMDQ pool guard in dcb_fwd_config_setup() only
checks RX queue counts, missing the case where the TX port has zero queues
for a given pool/TC combination. When nb_tx_queue is 0, the expression
"j % nb_tx_queue" triggers a SIGFPE (integer division by zero).
Fix this by:
1. Updating dcb_fwd_tc_mask after port DCB reconfiguration using the
user requested num_tcs value, so fwd_config_setup() sees the correct
mask.
2. Extending the existing pool guard to also check TX queue counts.
3. Adding a defensive break after the division by dcb_fwd_tc_cores to
catch integer truncation to zero.
Fixes: 0ecbf93f5001 ("app/testpmd: add command to disable DCB")
Cc: stable@dpdk.org
Signed-off-by: Talluri Chaitanyababu <chaitanyababux.talluri@intel.com>
Signed-off-by: Shaiq Wani <shaiq.wani@intel.com>
---
v3: Removed old email address.
v2:
* Used res->num_tcs to derive dcb_fwd_tc_mask.
* Removed redundant rte_eth_dev_get_dcb_info().
---
app/test-pmd/cmdline.c | 3 +++
app/test-pmd/config.c | 9 ++++++++-
2 files changed, 11 insertions(+), 1 deletion(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index e9a1331071..a53af7e72b 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -3682,6 +3682,9 @@ cmd_config_dcb_parsed(void *parsed_result,
return;
}
+ /* Update forwarding TC mask to match the configured number of TCs. */
+ dcb_fwd_tc_mask = (1u << res->num_tcs) - 1;
+
fwd_config_setup();
cmd_reconfig_device_queue(port_id, 1, 1);
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index f9f3c542a6..9b201ac241 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -5450,7 +5450,8 @@ dcb_fwd_config_setup(void)
/* if the nb_queue is zero, means this tc is
* not enabled on the POOL
*/
- if (rxp_dcb_info.tc_queue.tc_rxq[i][tc].nb_queue == 0)
+ if (rxp_dcb_info.tc_queue.tc_rxq[i][tc].nb_queue == 0 ||
+ txp_dcb_info.tc_queue.tc_txq[i][tc].nb_queue == 0)
break;
k = fwd_lcores[lc_id]->stream_nb +
fwd_lcores[lc_id]->stream_idx;
@@ -5458,6 +5459,12 @@ dcb_fwd_config_setup(void)
dcb_fwd_tc_cores;
nb_tx_queue = txp_dcb_info.tc_queue.tc_txq[i][tc].nb_queue /
dcb_fwd_tc_cores;
+ /* guard against integer truncation to zero (e.g.
+ * nb_queue=1, dcb_fwd_tc_cores=2) to prevent SIGFPE
+ * from "j % nb_tx_queue" below.
+ */
+ if (nb_rx_queue == 0 || nb_tx_queue == 0)
+ break;
rxq = rxp_dcb_info.tc_queue.tc_rxq[i][tc].base + nb_rx_queue * sub_core_idx;
txq = txp_dcb_info.tc_queue.tc_txq[i][tc].base + nb_tx_queue * sub_core_idx;
for (j = 0; j < nb_rx_queue; j++) {
--
2.43.0
^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH v3] app/testpmd: fix DCB forwarding TC mask and queue guard
2026-03-16 6:21 ` [PATCH v3] " Talluri Chaitanyababu
@ 2026-03-17 1:23 ` fengchengwen
2026-03-17 8:57 ` Thomas Monjalon
1 sibling, 0 replies; 18+ messages in thread
From: fengchengwen @ 2026-03-17 1:23 UTC (permalink / raw)
To: Talluri Chaitanyababu, dev, bruce.richardson, stephen,
aman.deep.singh
Cc: shaiq.wani, stable
On 3/16/2026 2:21 PM, Talluri Chaitanyababu wrote:
> Update forwarding TC mask based on configured traffic classes to properly
> handle both 4 TC and 8 TC modes. The bitmask calculation (1u << nb_tcs) - 1
> correctly creates masks for all available traffic classes (0xF for 4 TCs,
> 0xFF for 8 TCs).
>
> When the mask is not updated after a TC configuration change, it stays at
> the default 0xFF, which causes dcb_fwd_tc_update_dcb_info() to skip the
> compress logic entirely (early return when mask ==
> DEFAULT_DCB_FWD_TC_MASK).
> This can lead to inconsistent queue allocations.
>
> Additionally, the existing VMDQ pool guard in dcb_fwd_config_setup() only
> checks RX queue counts, missing the case where the TX port has zero queues
> for a given pool/TC combination. When nb_tx_queue is 0, the expression
> "j % nb_tx_queue" triggers a SIGFPE (integer division by zero).
>
> Fix this by:
> 1. Updating dcb_fwd_tc_mask after port DCB reconfiguration using the
> user requested num_tcs value, so fwd_config_setup() sees the correct
> mask.
> 2. Extending the existing pool guard to also check TX queue counts.
> 3. Adding a defensive break after the division by dcb_fwd_tc_cores to
> catch integer truncation to zero.
>
> Fixes: 0ecbf93f5001 ("app/testpmd: add command to disable DCB")
Why this commit?
> Cc: stable@dpdk.org
>
> Signed-off-by: Talluri Chaitanyababu <chaitanyababux.talluri@intel.com>
> Signed-off-by: Shaiq Wani <shaiq.wani@intel.com>
> ---
>
> v3: Removed old email address.
>
> v2:
> * Used res->num_tcs to derive dcb_fwd_tc_mask.
> * Removed redundant rte_eth_dev_get_dcb_info().
> ---
> app/test-pmd/cmdline.c | 3 +++
> app/test-pmd/config.c | 9 ++++++++-
> 2 files changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
> index e9a1331071..a53af7e72b 100644
> --- a/app/test-pmd/cmdline.c
> +++ b/app/test-pmd/cmdline.c
> @@ -3682,6 +3682,9 @@ cmd_config_dcb_parsed(void *parsed_result,
> return;
> }
>
> + /* Update forwarding TC mask to match the configured number of TCs. */
> + dcb_fwd_tc_mask = (1u << res->num_tcs) - 1;
This is just configuration; please don't modify it when running the command.
Combined with your detailed steps in the last email, I can guess your problem:
1. port stop all
2. port config 0 dcb vt off 8 pfc on
3. port config 1 dcb vt off 8 pfc on
4. port start all
5. port stop all
6. port config 0 dcb vt off 4 pfc on
When step 6 is executed, port 0 has 4 TCs, but port 1 still has the 8 TCs configured in step 3.
If forwarding starts after step 6, the TCs are mismatched, which may lead to multiple threads operating on the same Tx queues.
So you want to make sure only 4 TCs are forwarded on both port 0 and port 1 after step 6?
> +
> fwd_config_setup();
>
> cmd_reconfig_device_queue(port_id, 1, 1);
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index f9f3c542a6..9b201ac241 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -5450,7 +5450,8 @@ dcb_fwd_config_setup(void)
> /* if the nb_queue is zero, means this tc is
> * not enabled on the POOL
> */
> - if (rxp_dcb_info.tc_queue.tc_rxq[i][tc].nb_queue == 0)
> + if (rxp_dcb_info.tc_queue.tc_rxq[i][tc].nb_queue == 0 ||
> + txp_dcb_info.tc_queue.tc_txq[i][tc].nb_queue == 0)
> break;
> k = fwd_lcores[lc_id]->stream_nb +
> fwd_lcores[lc_id]->stream_idx;
> @@ -5458,6 +5459,12 @@ dcb_fwd_config_setup(void)
> dcb_fwd_tc_cores;
> nb_tx_queue = txp_dcb_info.tc_queue.tc_txq[i][tc].nb_queue /
> dcb_fwd_tc_cores;
> + /* guard against integer truncation to zero (e.g.
> + * nb_queue=1, dcb_fwd_tc_cores=2) to prevent SIGFPE
> + * from "j % nb_tx_queue" below.
> + */
> + if (nb_rx_queue == 0 || nb_tx_queue == 0)
> + break;
This could be added in dcb_fwd_check_cores_per_tc():
for (port = 0; port < nb_fwd_ports; port++) {
	(void)rte_eth_dev_get_dcb_info(fwd_ports_ids[port], &dcb_info);
	for (tc = 0; tc < dcb_info.nb_tcs; tc++) {
		for (vmdq_idx = 0; vmdq_idx < RTE_ETH_MAX_VMDQ_POOL; vmdq_idx++) {
			if (dcb_info.tc_queue.tc_rxq[vmdq_idx][tc].nb_queue == 0)
				break;
			/* make sure nb_rx_queue can be divisible. */
			if (dcb_info.tc_queue.tc_rxq[vmdq_idx][tc].nb_queue %
					dcb_fwd_tc_cores)
				return -1;
			/* make sure nb_tx_queue can be divisible. */
			if (dcb_info.tc_queue.tc_txq[vmdq_idx][tc].nb_queue %
					dcb_fwd_tc_cores)
				return -1;
			--------/// please add here!
		}
	}
}
> rxq = rxp_dcb_info.tc_queue.tc_rxq[i][tc].base + nb_rx_queue * sub_core_idx;
> txq = txp_dcb_info.tc_queue.tc_txq[i][tc].base + nb_tx_queue * sub_core_idx;
> for (j = 0; j < nb_rx_queue; j++) {
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v3] app/testpmd: fix DCB forwarding TC mask and queue guard
2026-03-16 6:21 ` [PATCH v3] " Talluri Chaitanyababu
2026-03-17 1:23 ` fengchengwen
@ 2026-03-17 8:57 ` Thomas Monjalon
2026-03-17 9:02 ` Thomas Monjalon
1 sibling, 1 reply; 18+ messages in thread
From: Thomas Monjalon @ 2026-03-17 8:57 UTC (permalink / raw)
To: stephen, Talluri Chaitanyababu, Chengwen Feng
Cc: dev, bruce.richardson, aman.deep.singh, shaiq.wani, stable
16/03/2026 07:21, Talluri Chaitanyababu:
> Update forwarding TC mask based on configured traffic classes to properly
> handle both 4 TC and 8 TC modes. The bitmask calculation (1u << nb_tcs) - 1
> correctly creates masks for all available traffic classes (0xF for 4 TCs,
> 0xFF for 8 TCs).
I don't know why this patch was merged in next-net without any review.
It has been pulled in main yesterday, sorry for that.
You'll need to make another patch to fix after the complete review.
Stephen, please don't merge too fast if no review.
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v3] app/testpmd: fix DCB forwarding TC mask and queue guard
2026-03-17 8:57 ` Thomas Monjalon
@ 2026-03-17 9:02 ` Thomas Monjalon
0 siblings, 0 replies; 18+ messages in thread
From: Thomas Monjalon @ 2026-03-17 9:02 UTC (permalink / raw)
To: stephen, Talluri Chaitanyababu, Chengwen Feng
Cc: stable, dev, bruce.richardson, aman.deep.singh, shaiq.wani
17/03/2026 09:57, Thomas Monjalon:
> 16/03/2026 07:21, Talluri Chaitanyababu:
> > Update forwarding TC mask based on configured traffic classes to properly
> > handle both 4 TC and 8 TC modes. The bitmask calculation (1u << nb_tcs) - 1
> > correctly creates masks for all available traffic classes (0xF for 4 TCs,
> > 0xFF for 8 TCs).
>
> I don't know why this patch was merged in next-net without any review.
> It has been pulled in main yesterday, sorry for that.
> You'll need to make another patch to fix after the complete review.
>
> Stephen, please don't merge too fast if no review.
After another thought, I've dropped it from main,
so you can continue with a v4 as usual.
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH v4] app/testpmd: fix DCB forwarding TC mismatch handling
2026-03-11 8:37 [PATCH] app/testpmd: fix DCB forwarding TC mask and queue guard Talluri Chaitanyababu
` (2 preceding siblings ...)
2026-03-16 6:21 ` [PATCH v3] " Talluri Chaitanyababu
@ 2026-03-18 6:17 ` Talluri Chaitanyababu
2026-03-19 1:35 ` fengchengwen
2026-03-20 6:29 ` [PATCH v5] " Talluri Chaitanyababu
4 siblings, 1 reply; 18+ messages in thread
From: Talluri Chaitanyababu @ 2026-03-18 6:17 UTC (permalink / raw)
To: dev, bruce.richardson, aman.deep.singh, fengchengwen
Cc: shaiq.wani, Talluri Chaitanyababu, stable
Fix DCB forwarding issues when RX and TX ports are configured with
different numbers of traffic classes (TCs).
When ports have asymmetric TC configurations (e.g. 4 TCs on RX and 8 TCs
on TX), the forwarding logic iterates based only on the RX port TC count.
This can lead to accessing invalid TX TC entries and incorrect queue
mapping, potentially causing multiple threads to operate on the same
queues.
Additionally, the existing VMDq pool guard in dcb_fwd_config_setup()
only checks RX queue counts and does not consider the case where the TX
port has no queues for a given pool/TC combination.
Fix this by:
1. Introducing an effective TC count using RTE_MIN() of RX and TX TC
values, ensuring forwarding only operates on valid TCs supported by
both ports.
2. Updating the loop condition to use the effective TC count instead of
only the RX TC count.
3. Extending validation in dcb_fwd_check_cores_per_tc() to ensure both RX
and TX queue counts are divisible by dcb_fwd_tc_cores, preventing
invalid configurations.
This ensures correct queue mapping and avoids issues when switching
between different DCB configurations across ports.
Fixes: 945e9be0a803 ("app/testpmd: support multi-cores process one TC")
Cc: stable@dpdk.org
Signed-off-by: Talluri Chaitanyababu <chaitanyababux.talluri@intel.com>
Signed-off-by: Shaiq Wani <shaiq.wani@intel.com>
---
v4:
* Removed runtime update of dcb_fwd_tc_mask as per review comments.
* Used effective TC count (RTE_MIN of RX/TX) to handle asymmetric configs.
* Moved queue validation to dcb_fwd_check_cores_per_tc().
v3: Removed old email address.
v2:
* Used res->num_tcs to derive dcb_fwd_tc_mask.
* Removed redundant rte_eth_dev_get_dcb_info().
---
app/test-pmd/config.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index f9f3c542a6..4caa1b1237 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -5286,7 +5286,8 @@ dcb_fwd_check_cores_per_tc(void)
(void)rte_eth_dev_get_dcb_info(fwd_ports_ids[port], &dcb_info);
for (tc = 0; tc < dcb_info.nb_tcs; tc++) {
for (vmdq_idx = 0; vmdq_idx < RTE_ETH_MAX_VMDQ_POOL; vmdq_idx++) {
- if (dcb_info.tc_queue.tc_rxq[vmdq_idx][tc].nb_queue == 0)
+ if (dcb_info.tc_queue.tc_rxq[vmdq_idx][tc].nb_queue == 0 ||
+ dcb_info.tc_queue.tc_txq[vmdq_idx][tc].nb_queue == 0)
break;
/* make sure nb_rx_queue can be divisible. */
if (dcb_info.tc_queue.tc_rxq[vmdq_idx][tc].nb_queue %
@@ -5377,6 +5378,7 @@ dcb_fwd_config_setup(void)
uint16_t nb_rx_queue, nb_tx_queue;
uint16_t i, j, k, sm_id = 0;
uint16_t sub_core_idx = 0;
+ uint8_t effective_nb_tcs;
uint16_t total_tc_num;
struct rte_port *port;
uint8_t tc = 0;
@@ -5442,6 +5444,7 @@ dcb_fwd_config_setup(void)
dcb_fwd_tc_update_dcb_info(&rxp_dcb_info);
(void)rte_eth_dev_get_dcb_info(fwd_ports_ids[txp], &txp_dcb_info);
dcb_fwd_tc_update_dcb_info(&txp_dcb_info);
+ effective_nb_tcs = RTE_MIN(rxp_dcb_info.nb_tcs, txp_dcb_info.nb_tcs);
for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
fwd_lcores[lc_id]->stream_nb = 0;
@@ -5450,7 +5453,8 @@ dcb_fwd_config_setup(void)
/* if the nb_queue is zero, means this tc is
* not enabled on the POOL
*/
- if (rxp_dcb_info.tc_queue.tc_rxq[i][tc].nb_queue == 0)
+ if (rxp_dcb_info.tc_queue.tc_rxq[i][tc].nb_queue == 0 ||
+ txp_dcb_info.tc_queue.tc_txq[i][tc].nb_queue == 0)
break;
k = fwd_lcores[lc_id]->stream_nb +
fwd_lcores[lc_id]->stream_idx;
@@ -5480,7 +5484,7 @@ dcb_fwd_config_setup(void)
sub_core_idx = 0;
tc++;
- if (tc < rxp_dcb_info.nb_tcs)
+ if (tc < effective_nb_tcs)
continue;
/* Restart from TC 0 on next RX port */
tc = 0;
@@ -5497,6 +5501,8 @@ dcb_fwd_config_setup(void)
dcb_fwd_tc_update_dcb_info(&rxp_dcb_info);
rte_eth_dev_get_dcb_info(fwd_ports_ids[txp], &txp_dcb_info);
dcb_fwd_tc_update_dcb_info(&txp_dcb_info);
+
+ effective_nb_tcs = RTE_MIN(rxp_dcb_info.nb_tcs, txp_dcb_info.nb_tcs);
}
}
--
2.43.0
^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH v4] app/testpmd: fix DCB forwarding TC mismatch handling
2026-03-18 6:17 ` [PATCH v4] app/testpmd: fix DCB forwarding TC mismatch handling Talluri Chaitanyababu
@ 2026-03-19 1:35 ` fengchengwen
0 siblings, 0 replies; 18+ messages in thread
From: fengchengwen @ 2026-03-19 1:35 UTC (permalink / raw)
To: Talluri Chaitanyababu, dev, bruce.richardson, aman.deep.singh
Cc: shaiq.wani, stable
Hi,
On 3/18/2026 2:17 PM, Talluri Chaitanyababu wrote:
> Fix DCB forwarding issues when RX and TX ports are configured with
> different numbers of traffic classes (TCs).
Suggest: Fix DCB forwarding failure when the number of TCs on ports is inconsistent.
And RX/TX -> Rx/Tx
>
> When ports have asymmetric TC configurations (e.g. 4 TCs on RX and 8 TCs
> on TX), the forwarding logic iterates based only on the RX port TC count.
e.g. 4 TCs on RX and 8 TCs on TX
suggest replaced with: e.g. 2 ports, port0 has 4 TCs and port1 has 8 TCs
And I suggest adding the testpmd commands to the commit log, like below, so that others can reproduce it:
x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-20 -n 4 -a 0000:31:00.0 -a 0000:4b:00.0 --file-prefix=testpmd1 -- -i --rxq=256 --txq=256 --nb-cores=16 --total-num-mbufs=600000
port stop all
port config 0 dcb vt off 8 pfc on
port config 1 dcb vt off 8 pfc on
port start all
port stop all
port config 0 dcb vt off 4 pfc on
> This can lead to accessing invalid TX TC entries and incorrect queue
> mapping, potentially causing multiple threads to operate on the same
> queues.
Suggest: This can lead to accessing invalid Tx TC entries and incorrect queue
mapping, which will result in a SIGFPE exception.
>
> Additionally, the existing VMDq pool guard in dcb_fwd_config_setup()
> only checks RX queue counts and does not consider the case where the TX
> port has no queues for a given pool/TC combination.
>
> Fix this by:
> 1. Introducing an effective TC count using RTE_MIN() of RX and TX TC
> values, ensuring forwarding only operates on valid TCs supported by
> both ports.
> 2. Updating the loop condition to use the effective TC count instead of
> only the RX TC count.
> 3. Extending validation in dcb_fwd_check_cores_per_tc() to ensure both RX
> and TX queue counts are divisible by dcb_fwd_tc_cores, preventing
> invalid configurations.
Suggest 1&2 as one commit, and 3 as another commit.
>
> This ensures correct queue mapping and avoids issues when switching
> between different DCB configurations across ports.
>
> Fixes: 945e9be0a803 ("app/testpmd: support multi-cores process one TC")
The SIGFPE error was not introduced by this commit, as I pointed out in my earlier reply to v2.
I suggest you split this into two commits: the second commit could carry this Fixes tag, but for
the first commit please find the real commit which introduced the bug.
> Cc: stable@dpdk.org
>
> Signed-off-by: Talluri Chaitanyababu <chaitanyababux.talluri@intel.com>
> Signed-off-by: Shaiq Wani <shaiq.wani@intel.com>
> ---
...
^ permalink raw reply [flat|nested] 18+ messages in thread
* [PATCH v5] app/testpmd: fix DCB forwarding TC mismatch handling
2026-03-11 8:37 [PATCH] app/testpmd: fix DCB forwarding TC mask and queue guard Talluri Chaitanyababu
` (3 preceding siblings ...)
2026-03-18 6:17 ` [PATCH v4] app/testpmd: fix DCB forwarding TC mismatch handling Talluri Chaitanyababu
@ 2026-03-20 6:29 ` Talluri Chaitanyababu
2026-03-20 9:38 ` fengchengwen
2026-03-24 0:06 ` Stephen Hemminger
4 siblings, 2 replies; 18+ messages in thread
From: Talluri Chaitanyababu @ 2026-03-20 6:29 UTC (permalink / raw)
To: dev, bruce.richardson, stephen, fengchengwen, aman.deep.singh
Cc: shaiq.wani, Talluri Chaitanyababu, stable
Fix DCB forwarding failure when the number of TCs on ports is inconsistent.
When ports have asymmetric TC configurations (e.g. 2 ports, port0 has
4 TCs and port1 has 8 TCs), the forwarding logic iterates based only
on the Rx port TC count.
This can lead to accessing invalid Tx TC entries and incorrect queue
mapping, which will result in a SIGFPE exception.
Additionally, the existing VMDq pool guard in dcb_fwd_config_setup()
only checks RX queue counts and does not consider the case where the TX
port has no queues for a given pool/TC combination.
Fix this by:
1. Introducing an effective TC count using RTE_MIN() of Rx and Tx TC
values, ensuring forwarding only operates on valid TCs supported by
both ports.
2. Updating the loop condition to use the effective TC count instead of
only the Rx TC count.
3. Extending the queue validation in dcb_fwd_config_setup() to ensure
both Rx and Tx queues are valid for a given TC.
Testpmd command to reproduce:
x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-20 -n 4 \
-a 0000:31:00.0 -a 0000:4b:00.0 \
--file-prefix=testpmd1 -- -i --rxq=256 --txq=256 \
--nb-cores=16 --total-num-mbufs=600000
port stop all
port config 0 dcb vt off 8 pfc on
port config 1 dcb vt off 8 pfc on
port start all
port stop all
port config 0 dcb vt off 4 pfc on
This ensures correct queue mapping and avoids issues when switching
between different DCB configurations across ports.
Fixes: 1a572499beb6 ("app/testpmd: setup DCB forwarding based on traffic class")
Cc: stable@dpdk.org
Signed-off-by: Talluri Chaitanyababu <chaitanyababux.talluri@intel.com>
Signed-off-by: Shaiq Wani <shaiq.wani@intel.com>
---
v5:
* Updated commit message as per review comments.
* Added reproduction steps and SIGFPE explanation.
* Updated Fixes tag.
v4:
* Removed runtime update of dcb_fwd_tc_mask as per review comments.
* Used effective TC count (RTE_MIN of RX/TX) to handle asymmetric configs.
* Moved queue validation to dcb_fwd_check_cores_per_tc().
v3: Removed old email address.
v2:
* Used res->num_tcs to derive dcb_fwd_tc_mask.
* Removed redundant rte_eth_dev_get_dcb_info().
---
app/test-pmd/config.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index f9f3c542a6..052e8b7c24 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -5377,6 +5377,7 @@ dcb_fwd_config_setup(void)
uint16_t nb_rx_queue, nb_tx_queue;
uint16_t i, j, k, sm_id = 0;
uint16_t sub_core_idx = 0;
+ uint8_t effective_nb_tcs;
uint16_t total_tc_num;
struct rte_port *port;
uint8_t tc = 0;
@@ -5442,6 +5443,7 @@ dcb_fwd_config_setup(void)
dcb_fwd_tc_update_dcb_info(&rxp_dcb_info);
(void)rte_eth_dev_get_dcb_info(fwd_ports_ids[txp], &txp_dcb_info);
dcb_fwd_tc_update_dcb_info(&txp_dcb_info);
+ effective_nb_tcs = RTE_MIN(rxp_dcb_info.nb_tcs, txp_dcb_info.nb_tcs);
for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
fwd_lcores[lc_id]->stream_nb = 0;
@@ -5450,7 +5452,8 @@ dcb_fwd_config_setup(void)
/* if the nb_queue is zero, means this tc is
* not enabled on the POOL
*/
- if (rxp_dcb_info.tc_queue.tc_rxq[i][tc].nb_queue == 0)
+ if (rxp_dcb_info.tc_queue.tc_rxq[i][tc].nb_queue == 0 ||
+ txp_dcb_info.tc_queue.tc_txq[i][tc].nb_queue == 0)
break;
k = fwd_lcores[lc_id]->stream_nb +
fwd_lcores[lc_id]->stream_idx;
@@ -5480,7 +5483,7 @@ dcb_fwd_config_setup(void)
sub_core_idx = 0;
tc++;
- if (tc < rxp_dcb_info.nb_tcs)
+ if (tc < effective_nb_tcs)
continue;
/* Restart from TC 0 on next RX port */
tc = 0;
@@ -5497,6 +5500,8 @@ dcb_fwd_config_setup(void)
dcb_fwd_tc_update_dcb_info(&rxp_dcb_info);
rte_eth_dev_get_dcb_info(fwd_ports_ids[txp], &txp_dcb_info);
dcb_fwd_tc_update_dcb_info(&txp_dcb_info);
+
+ effective_nb_tcs = RTE_MIN(rxp_dcb_info.nb_tcs, txp_dcb_info.nb_tcs);
}
}
--
2.43.0
^ permalink raw reply related [flat|nested] 18+ messages in thread
* Re: [PATCH v5] app/testpmd: fix DCB forwarding TC mismatch handling
2026-03-20 6:29 ` [PATCH v5] " Talluri Chaitanyababu
@ 2026-03-20 9:38 ` fengchengwen
2026-03-24 0:06 ` Stephen Hemminger
1 sibling, 0 replies; 18+ messages in thread
From: fengchengwen @ 2026-03-20 9:38 UTC (permalink / raw)
To: Talluri Chaitanyababu, dev, bruce.richardson, stephen,
aman.deep.singh
Cc: shaiq.wani, stable
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
On 3/20/2026 2:29 PM, Talluri Chaitanyababu wrote:
> Fix DCB forwarding failure when the number of TCs on ports is inconsistent.
>
> When ports have asymmetric TC configurations (e.g. 2 ports, port0 has
> 4 TCs and port1 has 8 TCs), the forwarding logic iterates based only
> on the Rx port TC count.
> This can lead to accessing invalid Tx TC entries and incorrect queue
> mapping, which will result in a SIGFPE exception.
>
> Additionally, the existing VMDq pool guard in dcb_fwd_config_setup()
> only checks RX queue counts and does not consider the case where the TX
> port has no queues for a given pool/TC combination.
>
> Fix this by:
> 1. Introducing an effective TC count using RTE_MIN() of Rx and Tx TC
> values, ensuring forwarding only operates on valid TCs supported by
> both ports.
> 2. Updating the loop condition to use the effective TC count instead of
> only the Rx TC count.
> 3. Extending the queue validation in dcb_fwd_config_setup() to ensure
> both Rx and Tx queues are valid for a given TC.
>
> Testpmd command to reproduce:
>
> x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-20 -n 4 \
> -a 0000:31:00.0 -a 0000:4b:00.0 \
> --file-prefix=testpmd1 -- -i --rxq=256 --txq=256 \
> --nb-cores=16 --total-num-mbufs=600000
>
> port stop all
> port config 0 dcb vt off 8 pfc on
> port config 1 dcb vt off 8 pfc on
> port start all
> port stop all
> port config 0 dcb vt off 4 pfc on
>
> This ensures correct queue mapping and avoids issues when switching
> between different DCB configurations across ports.
>
> Fixes: 1a572499beb6 ("app/testpmd: setup DCB forwarding based on traffic class")
> Cc: stable@dpdk.org
>
> Signed-off-by: Talluri Chaitanyababu <chaitanyababux.talluri@intel.com>
> Signed-off-by: Shaiq Wani <shaiq.wani@intel.com>
> ---
>
> v5:
> * Updated commit message as per review comments.
> * Added reproduction steps and SIGFPE explanation.
> * Updated Fixes tag.
>
> v4:
> * Removed runtime update of dcb_fwd_tc_mask as per review comments.
> * Used effective TC count (RTE_MIN of RX/TX) to handle asymmetric configs.
> * Moved queue validation to dcb_fwd_check_cores_per_tc().
>
> v3: Removed old email address.
>
> v2:
> * Used res->num_tcs to derive dcb_fwd_tc_mask.
> * Removed redundant rte_eth_dev_get_dcb_info().
> ---
> app/test-pmd/config.c | 9 +++++++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index f9f3c542a6..052e8b7c24 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -5377,6 +5377,7 @@ dcb_fwd_config_setup(void)
> uint16_t nb_rx_queue, nb_tx_queue;
> uint16_t i, j, k, sm_id = 0;
> uint16_t sub_core_idx = 0;
> + uint8_t effective_nb_tcs;
> uint16_t total_tc_num;
> struct rte_port *port;
> uint8_t tc = 0;
> @@ -5442,6 +5443,7 @@ dcb_fwd_config_setup(void)
> dcb_fwd_tc_update_dcb_info(&rxp_dcb_info);
> (void)rte_eth_dev_get_dcb_info(fwd_ports_ids[txp], &txp_dcb_info);
> dcb_fwd_tc_update_dcb_info(&txp_dcb_info);
> + effective_nb_tcs = RTE_MIN(rxp_dcb_info.nb_tcs, txp_dcb_info.nb_tcs);
>
> for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
> fwd_lcores[lc_id]->stream_nb = 0;
> @@ -5450,7 +5452,8 @@ dcb_fwd_config_setup(void)
> /* if the nb_queue is zero, means this tc is
> * not enabled on the POOL
> */
> - if (rxp_dcb_info.tc_queue.tc_rxq[i][tc].nb_queue == 0)
> + if (rxp_dcb_info.tc_queue.tc_rxq[i][tc].nb_queue == 0 ||
> + txp_dcb_info.tc_queue.tc_txq[i][tc].nb_queue == 0)
> break;
> k = fwd_lcores[lc_id]->stream_nb +
> fwd_lcores[lc_id]->stream_idx;
> @@ -5480,7 +5483,7 @@ dcb_fwd_config_setup(void)
>
> sub_core_idx = 0;
> tc++;
> - if (tc < rxp_dcb_info.nb_tcs)
> + if (tc < effective_nb_tcs)
> continue;
> /* Restart from TC 0 on next RX port */
> tc = 0;
> @@ -5497,6 +5500,8 @@ dcb_fwd_config_setup(void)
> dcb_fwd_tc_update_dcb_info(&rxp_dcb_info);
> rte_eth_dev_get_dcb_info(fwd_ports_ids[txp], &txp_dcb_info);
> dcb_fwd_tc_update_dcb_info(&txp_dcb_info);
> +
> + effective_nb_tcs = RTE_MIN(rxp_dcb_info.nb_tcs, txp_dcb_info.nb_tcs);
> }
> }
>
^ permalink raw reply [flat|nested] 18+ messages in thread
* Re: [PATCH v5] app/testpmd: fix DCB forwarding TC mismatch handling
2026-03-20 6:29 ` [PATCH v5] " Talluri Chaitanyababu
2026-03-20 9:38 ` fengchengwen
@ 2026-03-24 0:06 ` Stephen Hemminger
1 sibling, 0 replies; 18+ messages in thread
From: Stephen Hemminger @ 2026-03-24 0:06 UTC (permalink / raw)
To: Talluri Chaitanyababu
Cc: dev, bruce.richardson, fengchengwen, aman.deep.singh, shaiq.wani,
stable
On Fri, 20 Mar 2026 06:29:54 +0000
Talluri Chaitanyababu <chaitanyababux.talluri@intel.com> wrote:
> Fix DCB forwarding failure when the number of TCs on ports is inconsistent.
>
> When ports have asymmetric TC configurations (e.g. 2 ports, port0 has
> 4 TCs and port1 has 8 TCs), the forwarding logic iterates based only
> on the Rx port TC count.
> This can lead to accessing invalid Tx TC entries and incorrect queue
> mapping, which will result in a SIGFPE exception.
>
> Additionally, the existing VMDq pool guard in dcb_fwd_config_setup()
> only checks RX queue counts and does not consider the case where the TX
> port has no queues for a given pool/TC combination.
>
> Fix this by:
> 1. Introducing an effective TC count using RTE_MIN() of Rx and Tx TC
> values, ensuring forwarding only operates on valid TCs supported by
> both ports.
> 2. Updating the loop condition to use the effective TC count instead of
> only the Rx TC count.
> 3. Extending the queue validation in dcb_fwd_config_setup() to ensure
> both Rx and Tx queues are valid for a given TC.
>
> Testpmd command to reproduce:
>
> x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-20 -n 4 \
> -a 0000:31:00.0 -a 0000:4b:00.0 \
> --file-prefix=testpmd1 -- -i --rxq=256 --txq=256 \
> --nb-cores=16 --total-num-mbufs=600000
>
> port stop all
> port config 0 dcb vt off 8 pfc on
> port config 1 dcb vt off 8 pfc on
> port start all
> port stop all
> port config 0 dcb vt off 4 pfc on
>
> This ensures correct queue mapping and avoids issues when switching
> between different DCB configurations across ports.
>
> Fixes: 1a572499beb6 ("app/testpmd: setup DCB forwarding based on traffic class")
> Cc: stable@dpdk.org
>
> Signed-off-by: Talluri Chaitanyababu <chaitanyababux.talluri@intel.com>
> Signed-off-by: Shaiq Wani <shaiq.wani@intel.com>
> ---
Applied to next-net
^ permalink raw reply [flat|nested] 18+ messages in thread