From: Ilya Maximets <i.maximets@samsung.com>
To: dev@dpdk.org, Helin Zhang, Konstantin Ananyev, Jingjing Wu
Cc: Dyasly Sergey, Heetae Ahn, Bruce Richardson, Ilya Maximets
Subject: [PATCH RFC 1/2] net/i40e: allow bulk alloc for the max size desc ring
Date: Wed, 19 Oct 2016 17:07:16 +0300
Message-ID: <1476886037-4586-2-git-send-email-i.maximets@samsung.com>
In-Reply-To: <1476886037-4586-1-git-send-email-i.maximets@samsung.com>
References: <1476886037-4586-1-git-send-email-i.maximets@samsung.com>

The only reason bulk alloc is disabled for rings with more than
(I40E_MAX_RING_DESC - RTE_PMD_I40E_RX_MAX_BURST) descriptors is the
possibility of an out-of-bound access to the DMA memory. But this is an
artificial limit that can easily be avoided by allocating
RTE_PMD_I40E_RX_MAX_BURST additional descriptors in memory. This does
not interfere with the HW and, because all of the ring memory is
zeroized, the Rx functions work correctly.

This change allows the vectorized Rx functions to be used with 4096
descriptors in the Rx ring, which is important for achieving a zero
packet drop rate in high-load installations.

Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
---
 drivers/net/i40e/i40e_rxtx.c | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 7ae7d9f..1f76691 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -409,15 +409,6 @@ check_rx_burst_bulk_alloc_preconditions(__rte_unused struct i40e_rx_queue *rxq)
 			     "rxq->rx_free_thresh=%d",
 			     rxq->nb_rx_desc, rxq->rx_free_thresh);
 		ret = -EINVAL;
-	} else if (!(rxq->nb_rx_desc < (I40E_MAX_RING_DESC -
-				RTE_PMD_I40E_RX_MAX_BURST))) {
-		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions: "
-			     "rxq->nb_rx_desc=%d, "
-			     "I40E_MAX_RING_DESC=%d, "
-			     "RTE_PMD_I40E_RX_MAX_BURST=%d",
-			     rxq->nb_rx_desc, I40E_MAX_RING_DESC,
-			     RTE_PMD_I40E_RX_MAX_BURST);
-		ret = -EINVAL;
 	}
 #else
 	ret = -EINVAL;
@@ -1698,8 +1689,19 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->rx_deferred_start = rx_conf->rx_deferred_start;
 
 	/* Allocate the maximun number of RX ring hardware descriptor. */
-	ring_size = sizeof(union i40e_rx_desc) * I40E_MAX_RING_DESC;
-	ring_size = RTE_ALIGN(ring_size, I40E_DMA_MEM_ALIGN);
+	len = I40E_MAX_RING_DESC;
+
+#ifdef RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC
+	/*
+	 * Allocate a little more memory because the vectorized/bulk_alloc
+	 * Rx functions don't check ring boundaries on every access.
+	 */
+	len += RTE_PMD_I40E_RX_MAX_BURST;
+#endif
+
+	ring_size = RTE_ALIGN(len * sizeof(union i40e_rx_desc),
+			      I40E_DMA_MEM_ALIGN);
+
 	rz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx, ring_size,
 			      I40E_RING_BASE_ALIGN, socket_id);
 	if (!rz) {
-- 
2.7.4
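
P.S. (not part of the patch): for anyone unfamiliar with the bulk-alloc
path, below is a minimal standalone sketch of why the extra zeroized
descriptors keep the unchecked descriptor scan in bounds. All names and
the descriptor layout here are simplified stand-ins, not the driver
code, and it assumes I40E_MAX_RING_DESC = 4096 and
RTE_PMD_I40E_RX_MAX_BURST = 32, the driver's values at the time of
writing.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_RING_DESC 4096	/* stands in for I40E_MAX_RING_DESC */
#define RX_MAX_BURST    32	/* stands in for RTE_PMD_I40E_RX_MAX_BURST */

/* Stand-in for a hardware Rx descriptor with a "descriptor done" flag. */
struct fake_desc {
	uint64_t qword1;	/* bit 0 plays the role of the DD bit */
};

/*
 * Scan up to 'burst' descriptors starting at 'rx_id' without bounds
 * checks, the way the bulk-alloc/vectorized paths do.  Returns the
 * number of completed descriptors found.
 */
static int
scan_hw_ring(const struct fake_desc *ring, uint16_t rx_id, int burst)
{
	int nb_done = 0;

	while (nb_done < burst && (ring[rx_id + nb_done].qword1 & 1))
		nb_done++;
	return nb_done;
}

int
main(void)
{
	/*
	 * Allocate RX_MAX_BURST descriptors beyond the ring end, all
	 * zeroized, mirroring what the patched queue setup arranges.
	 */
	static struct fake_desc ring[MAX_RING_DESC + RX_MAX_BURST];
	uint16_t rx_id = MAX_RING_DESC - 4;	/* near the ring end */
	int i;

	memset(ring, 0, sizeof(ring));
	for (i = 0; i < 4; i++)			/* HW completed 4 descs */
		ring[rx_id + i].qword1 |= 1;

	/*
	 * The scan reads past index MAX_RING_DESC - 1, but only into the
	 * zeroized slack area, where it sees DD == 0 and stops: no
	 * out-of-bound access and no phantom packets.
	 */
	assert(scan_hw_ring(ring, rx_id, RX_MAX_BURST) == 4);
	printf("scan stopped safely in the zeroized slack area\n");
	return 0;
}

In the driver, the same guarantee comes from the extra
RTE_PMD_I40E_RX_MAX_BURST descriptors allocated above combined with the
ring memory being zeroized, as the commit message notes.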