Subject: 8023ad bond tx crashed if one port has 2 or more tx queues
Date: Tue, 19 Mar 2019 17:41:25 +0800
To: dev@dpdk.org

Hi Guys,

I found a bug in the DPDK bonding code: when one port has 2 or more tx
queues, the 802.3ad bond port crashes in tx burst.

Analyzing the code below, if 2 or more CPU cores send packets on the
port through different tx queues, arrays like slave_port_ids /
dist_slave_port_ids / slave_tx_fail_count / slave_bufs are shared by
all of the cores, and the function crashes. Is there any better
solution for this? For now, I just added a lock around rte_eth_tx_burst.

static uint16_t
bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
		uint16_t nb_bufs)
{
	struct bond_tx_queue *bd_tx_q = (struct bond_tx_queue *)queue;
	struct bond_dev_private *internals = bd_tx_q->dev_private;

	uint16_t slave_port_ids[RTE_MAX_ETHPORTS];
	uint16_t slave_count;

	uint16_t dist_slave_port_ids[RTE_MAX_ETHPORTS];
	uint16_t dist_slave_count;

	/* 2-D array to sort mbufs for transmission on each slave into */
	struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS][nb_bufs];

	/* Number of mbufs for transmission on each slave */
	uint16_t slave_nb_bufs[RTE_MAX_ETHPORTS] = { 0 };

	/* Mapping array generated by hash function to map mbufs to slaves */
	uint16_t bufs_slave_port_idxs[RTE_MAX_ETHPORTS] = { 0 };

	uint16_t slave_tx_count, slave_tx_fail_count[RTE_MAX_ETHPORTS] = { 0 };

	uint16_t total_tx_count = 0, total_tx_fail_count = 0;

Thanks and Regards,
Haifeng
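
P.S. For illustration, a minimal sketch of the workaround I mean,
locking around the call into the bond port's tx path so only one lcore
at a time runs bond_ethdev_tx_burst_8023ad. The global lock and the
helper name locked_tx_burst are just for the sketch, not the exact
change; a real patch would more likely keep one lock per bonded device,
e.g. in struct bond_dev_private:

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_spinlock.h>

/* Sketch only: one global lock; per-bond-device placement would be
 * better, but this is enough to show the idea. */
static rte_spinlock_t bond_tx_lock = RTE_SPINLOCK_INITIALIZER;

/* Wrapper the application calls instead of rte_eth_tx_burst() on the
 * bond port. The spinlock serializes all tx queues/lcores, so the
 * shared slave arrays are only touched by one core at a time. */
static inline uint16_t
locked_tx_burst(uint16_t port_id, uint16_t queue_id,
		struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
	uint16_t nb_sent;

	rte_spinlock_lock(&bond_tx_lock);
	nb_sent = rte_eth_tx_burst(port_id, queue_id, tx_pkts, nb_pkts);
	rte_spinlock_unlock(&bond_tx_lock);

	return nb_sent;
}

The obvious trade-off is that the lock serializes every tx lcore on the
bond, so it avoids the crash at the cost of multi-queue tx throughput.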