From: Pavan Nikhilesh Bhagavatula
To: Jerin Jacob Kollanukkaran, "thomas@monjalon.net",
 "arybchenko@solarflare.com", "ferruh.yigit@intel.com",
 "bernard.iremonger@intel.com", "alialnu@mellanox.com"
Cc: "dev@dpdk.org", Pavan Nikhilesh Bhagavatula, Yingya Han
Subject: [PATCH v6 4/4] app/testpmd: add mempool bulk get for txonly mode
Date: Tue, 2 Apr 2019 09:53:36 +0000
Message-ID: <20190402095255.848-4-pbhagavatula@marvell.com>
In-Reply-To: <20190402095255.848-1-pbhagavatula@marvell.com>
References: <20190228194128.14236-1-pbhagavatula@marvell.com>
 <20190402095255.848-1-pbhagavatula@marvell.com>

From: Pavan Nikhilesh

Use mempool bulk get ops to allocate a burst of packets and process them.
If the bulk get fails, fall back to rte_mbuf_raw_alloc.

Tested-by: Yingya Han
Suggested-by: Andrew Rybchenko
Signed-off-by: Pavan Nikhilesh
---
 app/test-pmd/txonly.c | 35 ++++++++++++++++++++++++++---------
 1 file changed, 26 insertions(+), 9 deletions(-)

diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index 56ca0ad24..66e63788a 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -268,16 +268,33 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	ether_addr_copy(&ports[fs->tx_port].eth_addr, &eth_hdr.s_addr);
 	eth_hdr.ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
 
-	for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
-		pkt = rte_mbuf_raw_alloc(mbp);
-		if (pkt == NULL)
-			break;
-		if (unlikely(!pkt_burst_prepare(pkt, mbp, &eth_hdr, vlan_tci,
-						vlan_tci_outer, ol_flags))) {
-			rte_pktmbuf_free(pkt);
-			break;
+	if (rte_mempool_get_bulk(mbp, (void **)pkts_burst,
+				nb_pkt_per_burst) == 0) {
+		for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
+			if (unlikely(!pkt_burst_prepare(pkts_burst[nb_pkt], mbp,
+							&eth_hdr, vlan_tci,
+							vlan_tci_outer,
+							ol_flags))) {
+				rte_mempool_put_bulk(mbp,
+						(void **)&pkts_burst[nb_pkt],
+						nb_pkt_per_burst - nb_pkt);
+				break;
+			}
+		}
+	} else {
+		for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
+			pkt = rte_mbuf_raw_alloc(mbp);
+			if (pkt == NULL)
+				break;
+			if (unlikely(!pkt_burst_prepare(pkt, mbp, &eth_hdr,
+							vlan_tci,
+							vlan_tci_outer,
+							ol_flags))) {
+				rte_pktmbuf_free(pkt);
+				break;
+			}
+			pkts_burst[nb_pkt] = pkt;
 		}
-		pkts_burst[nb_pkt] = pkt;
 	}
 
 	if (nb_pkt == 0)
-- 
2.21.0
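
Below, for illustration only, is a minimal self-contained sketch of the
allocation pattern the patch applies: try rte_mempool_get_bulk() for the
whole burst, return the unused tail with rte_mempool_put_bulk() if per-packet
setup fails part-way, and fall back to per-packet rte_mbuf_raw_alloc() when
the bulk get itself fails. The alloc_pkt_burst() wrapper and the prepare_pkt()
callback are hypothetical stand-ins for testpmd's pkt_burst_prepare() path,
not code from the patch.

/*
 * Illustrative sketch (not part of the patch): bulk-allocate a burst of
 * mbufs, falling back to per-packet allocation if the bulk get fails.
 * prepare_pkt() is a hypothetical per-packet setup callback standing in
 * for testpmd's pkt_burst_prepare(); it returns non-zero on success.
 */
#include <stdint.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

static inline uint16_t
alloc_pkt_burst(struct rte_mempool *mbp, struct rte_mbuf **pkts,
		uint16_t burst, int (*prepare_pkt)(struct rte_mbuf *pkt))
{
	uint16_t nb_pkt;
	struct rte_mbuf *pkt;

	if (rte_mempool_get_bulk(mbp, (void **)pkts, burst) == 0) {
		/*
		 * Fast path: the whole burst came from a single mempool
		 * operation.  The mbufs are raw pool objects, so
		 * prepare_pkt() must fully initialize each one (as
		 * pkt_burst_prepare() does in testpmd).
		 */
		for (nb_pkt = 0; nb_pkt < burst; nb_pkt++) {
			if (!prepare_pkt(pkts[nb_pkt])) {
				/*
				 * Return the failing mbuf and the unused
				 * tail of the burst to the pool.
				 */
				rte_mempool_put_bulk(mbp,
						(void **)&pkts[nb_pkt],
						burst - nb_pkt);
				break;
			}
		}
		return nb_pkt;
	}

	/* Fallback: the pool could not serve a full burst at once. */
	for (nb_pkt = 0; nb_pkt < burst; nb_pkt++) {
		pkt = rte_mbuf_raw_alloc(mbp);
		if (pkt == NULL)
			break;
		if (!prepare_pkt(pkt)) {
			rte_pktmbuf_free(pkt);
			break;
		}
		pkts[nb_pkt] = pkt;
	}
	return nb_pkt;
}

The point of the split is that the common case costs a single mempool
get per burst instead of one per mbuf, while behaviour when the pool
cannot satisfy a full burst stays the same as before the patch.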