From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ting Xu <ting.xu@intel.com>
To: dev@dpdk.org
Cc: beilei.xing@intel.com, jingjing.wu@intel.com, xiaolong.ye@intel.com,
	stable@dpdk.org
Date: Mon, 11 May 2020 15:27:48 +0000
Message-Id: <20200511152748.21144-1-ting.xu@intel.com>
X-Mailer: git-send-email 2.17.1
Subject: [dpdk-dev] [PATCH v1] net/iavf: fix setting wrong RXDID value for Rx queue

The CVL kernel PF configures all reserved queues for a VF, including
the RXDID of each Rx queue. The number of reserved queues is the
maximum of the Tx and Rx queue counts. If fewer Rx queues are enabled
than are reserved, the required RXDID is set only for the enabled
queues, while the remaining queues keep the default value (0). However,
RXDID 0 (the legacy 16-byte descriptor) is not currently supported, so
the PF returns an error when configuring those disabled VF queues.

This patch sets the required RXDID for all reserved Rx queues, whether
they are enabled or not, so that the PF configures the Rx queues
correctly without reporting an error.
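To make the intent easier to see, below is a small stand-alone sketch
(illustrative only, not the driver code; the queue counts and the RXDID
value are invented for the example): the Rx ring fields are still filled
only for enabled queues, while a valid RXDID is now requested for every
reserved queue pair.

    /* Illustrative sketch only -- names and values are made up. */
    #include <stdio.h>

    #define NUM_RESERVED_QPS 4  /* max(nb_tx_queues, nb_rx_queues) */
    #define NB_RX_QUEUES     2  /* actually enabled Rx queues */
    #define RXDID_FLEX       22 /* stand-in for a supported descriptor ID */

    struct qp_cfg {
        int ring_len; /* meaningful only for enabled Rx queues */
        int rxdid;    /* must be valid for every reserved queue */
    };

    int main(void)
    {
        struct qp_cfg qps[NUM_RESERVED_QPS] = {0};
        int i;

        for (i = 0; i < NUM_RESERVED_QPS; i++) {
            /* ring setup is still done for enabled Rx queues only */
            if (i < NB_RX_QUEUES)
                qps[i].ring_len = 1024;

            /* the fix: request a valid RXDID for all reserved queues */
            qps[i].rxdid = RXDID_FLEX;
            printf("queue %d: ring_len=%d rxdid=%d\n",
                   i, qps[i].ring_len, qps[i].rxdid);
        }
        return 0;
    }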
Fixes: b8b4c54ef9b0 ("net/iavf: support flexible Rx descriptor in normal path")
Cc: stable@dpdk.org
Signed-off-by: Ting Xu <ting.xu@intel.com>
---
 drivers/net/iavf/iavf_vchnl.c | 44 +++++++++++++++++------------------
 1 file changed, 22 insertions(+), 22 deletions(-)

diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 2a0cdd927..328cfdf01 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -593,32 +593,32 @@ iavf_configure_queues(struct iavf_adapter *adapter)
 			vc_qp->rxq.ring_len = rxq[i]->nb_rx_desc;
 			vc_qp->rxq.dma_ring_addr = rxq[i]->rx_ring_phys_addr;
 			vc_qp->rxq.databuffer_size = rxq[i]->rx_buf_len;
+		}
 
 #ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
-			if (vf->vf_res->vf_cap_flags &
-			    VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
-			    vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
-				vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_OVS_1;
-				PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
-					    "Queue[%d]", vc_qp->rxq.rxdid, i);
-			} else {
-				vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
-				PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
-					    "Queue[%d]", vc_qp->rxq.rxdid, i);
-			}
+		if (vf->vf_res->vf_cap_flags &
+		    VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
+		    vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
+			vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_OVS_1;
+			PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
+				    "Queue[%d]", vc_qp->rxq.rxdid, i);
+		} else {
+			vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
+			PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
+				    "Queue[%d]", vc_qp->rxq.rxdid, i);
+		}
 #else
-			if (vf->vf_res->vf_cap_flags &
-			    VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
-			    vf->supported_rxdid & BIT(IAVF_RXDID_LEGACY_0)) {
-				vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_0;
-				PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
-					    "Queue[%d]", vc_qp->rxq.rxdid, i);
-			} else {
-				PMD_DRV_LOG(ERR, "RXDID == 0 is not supported");
-				return -1;
-			}
-#endif
+		if (vf->vf_res->vf_cap_flags &
+		    VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
+		    vf->supported_rxdid & BIT(IAVF_RXDID_LEGACY_0)) {
+			vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_0;
+			PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
+				    "Queue[%d]", vc_qp->rxq.rxdid, i);
+		} else {
+			PMD_DRV_LOG(ERR, "RXDID == 0 is not supported");
+			return -1;
 		}
+#endif
 	}
 
 	memset(&args, 0, sizeof(args));
-- 
2.17.1