From: Ferruh Yigit
Subject: Re: [PATCH] nfp: handle packets with length 0 as usual ones
Date: Fri, 18 Aug 2017 16:10:12 +0100
Message-ID: <51e4be70-4fa4-fbab-6e7a-5f8e9c94ee3c@intel.com>
In-Reply-To: <1502445950-44582-1-git-send-email-alejandro.lucero@netronome.com>
To: Alejandro Lucero, dev@dpdk.org
Cc: stable@dpdk.org

On 8/11/2017 11:05 AM, Alejandro Lucero wrote:
> A DPDK app could, for whatever reason, send packets with size 0.
> The PMD does not send those packets, which makes sense, but the
> problem is that the mbuf is not released either. That leads to
> mbufs not being available, because the app trusts that the PMD
> will release them.
>
> Although this problem comes from wrong app behaviour, we should
> harden the PMD in this regard. Not sending a packet with size 0
> could be problematic, needing special handling inside the PMD
> xmit function. It could be a whole burst of such packets, which
> is easy to handle, but it could also be a single such packet
> inside a burst, which is harder to handle.
>
> It would be simpler to just send such packets, which will likely
> be dropped by the hw at some point. The main concern is how the
> fw/hw handles the DMA, because a DMA read from a hypothetical 0x0
> address could trigger an IOMMU error. It turns out it is safe to
> send a descriptor with packet size 0 to the hardware: from the
> PCIe point of view, the DMA never happens.
>
> Signed-off-by: Alejandro Lucero
> ---
>  drivers/net/nfp/nfp_net.c | 17 ++++++++++++-----
>  1 file changed, 12 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
> index 92b03c4..679a91b 100644
> --- a/drivers/net/nfp/nfp_net.c
> +++ b/drivers/net/nfp/nfp_net.c
> @@ -2094,7 +2094,7 @@ uint32_t nfp_net_txq_full(struct nfp_net_txq *txq)
>  		 */
>  		pkt_size = pkt->pkt_len;
>
> -		while (pkt_size) {
> +		while (pkt) {
>  			/* Copying TSO, VLAN and cksum info */
>  			*txds = txd;
>
> @@ -2126,17 +2126,24 @@ uint32_t nfp_net_txq_full(struct nfp_net_txq *txq)
>  				txq->wr_p = 0;
>
>  			pkt_size -= dma_size;
> -			if (!pkt_size) {
> +			if (!pkt_size)
>  				/* End of packet */
>  				txds->offset_eop |= PCIE_DESC_TX_EOP;
> -			} else {
> +			else
>  				txds->offset_eop &= PCIE_DESC_TX_OFFSET_MASK;
> -				pkt = pkt->next;
> -			}
> +
> +			pkt = pkt->next;
>  			/* Referencing next free TX descriptor */
>  			txds = &txq->txds[txq->wr_p];
>  			lmbuf = &txq->txbufs[txq->wr_p].mbuf;
>  			issued_descs++;
> +
> +			/* Double-checking if we have to use chained mbuf.
> +			 * It seems there are some apps which could wrongly
> +			 * have zeroed mbufs chained leading to send null
> +			 * descriptors to the hw. */
> +			if (!pkt_size)
> +				break;

For the case of chained mbufs where all segments have zero size [1],
won't this cause the remaining mbufs in the chain not to be freed,
since rte_pktmbuf_free_seg(*lmbuf) is used? (See the small model at
the end of this mail.)

[1] As you mentioned in the commit log, this is not a correct thing
for an app to do, but since the patch is trying to harden the PMD
against this wrong application behaviour...

>  		}
>  		i++;
>  	}
>
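To make the concern concrete, here is a small stand-alone model of
the per-segment accounting in the xmit loop. It is plain C, not the
actual nfp code: "seg" and seg_free() are illustrative stand-ins for
struct rte_mbuf and rte_pktmbuf_free_seg(), and xmit_chain_model() is
a hypothetical reduction of the loop above, so treat it as a sketch
rather than the driver's real logic. With a chain of zero-size mbufs,
the early break leaves segments that never got attached to a
descriptor slot, so nothing ever frees them:

/* Stand-alone model; "seg" stands in for struct rte_mbuf and
 * seg_free() for rte_pktmbuf_free_seg(). Illustrative only. */
#include <stdlib.h>

struct seg {
	struct seg *next;
	unsigned int data_len;
};

static void seg_free(struct seg *s)	/* frees one segment only */
{
	free(s);
}

static void xmit_chain_model(struct seg *pkt, unsigned int pkt_len)
{
	unsigned int pkt_size = pkt_len;

	while (pkt) {
		struct seg *cur = pkt;

		pkt_size -= cur->data_len;
		pkt = pkt->next;
		/* The driver frees a segment later, via the *lmbuf
		 * descriptor slot it was attached to; modelled here
		 * as an immediate per-segment free. */
		seg_free(cur);

		if (!pkt_size)
			break;	/* with zero-size segments, pkt can
				 * still be non-NULL here: the trailing
				 * segments would be leaked */
	}

	/* Possible hardening: release whatever is left of the chain.
	 * With real mbufs, a single rte_pktmbuf_free(pkt) would free
	 * all the remaining segments at once. */
	while (pkt) {
		struct seg *cur = pkt;

		pkt = pkt->next;
		seg_free(cur);
	}
}

int main(void)
{
	/* Chain of three zero-length segments, as in [1] above. */
	struct seg *c = calloc(1, sizeof(*c));
	struct seg *b = calloc(1, sizeof(*b));
	struct seg *a = calloc(1, sizeof(*a));

	a->next = b;
	b->next = c;
	xmit_chain_model(a, 0);	/* pkt_len == 0: breaks right after "a" */
	return 0;
}

Without the second loop, "b" and "c" above are unreachable from any
descriptor slot once the break is taken, which is exactly the leak
the question is about.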