From mboxrd@z Thu Jan 1 00:00:00 1970
From: Olivier Deme
Subject: Re: [PATCH] kni:optimization of rte_kni_rx_burst
Date: Wed, 25 Feb 2015 12:28:53 +0000
Message-ID: <54EDC005.10008@druidsoftware.com>
References: <14248648813214-git-send-email-Hemant@freescale.com>
 <54EDBC76.2050507@druidsoftware.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=windows-1252; format=flowed
To: "Hemant-KZfg59tc24xl57MIdRCFDg@public.gmane.org",
 "dev-VfR2kkLFssw@public.gmane.org"
List-Id: patches and discussions about DPDK
Errors-To: dev-bounces-VfR2kkLFssw@public.gmane.org
Sender: "dev"

I guess it would be unusual but possible for the kernel to enqueue
faster to tx_q than the application dequeues.
But that would also be possible with a real NIC, so I think it is
acceptable for the kernel to have to drop egress packets in that case.

On 25/02/15 12:24, Hemant-KZfg59tc24xl57MIdRCFDg@public.gmane.org wrote:
> Hi Olivier,
> Comments inline.
> Regards,
> Hemant
>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces-VfR2kkLFssw@public.gmane.org] On Behalf Of Olivier Deme
>> Sent: 25/Feb/2015 5:44 PM
>> To: dev-VfR2kkLFssw@public.gmane.org
>> Subject: Re: [dpdk-dev] [PATCH] kni:optimization of rte_kni_rx_burst
>>
>> Thank you, Hemant. I think there might be one issue left with the patch, though.
>> The alloc_q must initially be filled with mbufs before getting mbufs back on the
>> tx_q.
>>
>> So the patch should allow rte_kni_rx_burst to check if alloc_q is empty.
>> If so, it should invoke kni_allocate_mbufs(kni, 0) (to fill the alloc_q with
>> MAX_MBUF_BURST_NUM mbufs).
>>
>> The patch for rte_kni_rx_burst would then look like:
>>
>> @@ -575,7 +575,7 @@ rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf
>> **mbufs, unsigned num)
>>
>>  	/* If buffers removed, allocate mbufs and then put them into alloc_q */
>>  	if (ret)
>> -		kni_allocate_mbufs(kni);
>> +		kni_allocate_mbufs(kni, ret);
>> +	else if (unlikely(kni->alloc_q->write == kni->alloc_q->read))
>> +		kni_allocate_mbufs(kni, 0);
>>
> [hemant] This will introduce a run-time check.
>
> I missed including the other change in the patch.
> I am doing it in kni_alloc, i.e. initialising the alloc_q with the default burst size:
> 	kni_allocate_mbufs(ctx, 0);
>
> In a way, we are now suggesting to reduce the size of alloc_q to only the default burst size.
>
> Can we reach a situation where the kernel is adding packets faster to tx_q than the application is able to dequeue?
> alloc_q can be empty in this case and the kernel will be starving.
>
>> Olivier.
>>
>> On 25/02/15 11:48, Hemant Agrawal wrote:
>>> From: Hemant Agrawal
>>>
>>> If any buffer is read from the tx_q, MAX_BURST buffers will be allocated and
>>> attempted to be added to the alloc_q.
>>> This seems terribly inefficient and it also looks like the alloc_q will quickly fill
>>> to its maximum capacity. If the system buffers are low in number, it will reach
>>> an "out of memory" situation.
>>> This patch allocates only as many buffers as were dequeued from tx_q.
>>>
>>> Signed-off-by: Hemant Agrawal
>>> ---
>>>   lib/librte_kni/rte_kni.c | 13 ++++++++-----
>>>   1 file changed, 8 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/lib/librte_kni/rte_kni.c b/lib/librte_kni/rte_kni.c
>>> index 4e70fa0..4cf8e30 100644
>>> --- a/lib/librte_kni/rte_kni.c
>>> +++ b/lib/librte_kni/rte_kni.c
>>> @@ -128,7 +128,7 @@ struct rte_kni_memzone_pool {
>>>
>>>
>>>   static void kni_free_mbufs(struct rte_kni *kni);
>>> -static void kni_allocate_mbufs(struct rte_kni *kni);
>>> +static void kni_allocate_mbufs(struct rte_kni *kni, int num);
>>>
>>>   static volatile int kni_fd = -1;
>>>   static struct rte_kni_memzone_pool kni_memzone_pool = {
>>> @@ -575,7 +575,7 @@ rte_kni_rx_burst(struct rte_kni *kni, struct rte_mbuf
>>> **mbufs, unsigned num)
>>>
>>>   	/* If buffers removed, allocate mbufs and then put them into alloc_q */
>>>   	if (ret)
>>> -		kni_allocate_mbufs(kni);
>>> +		kni_allocate_mbufs(kni, ret);
>>>
>>>   	return ret;
>>>   }
>>> @@ -594,7 +594,7 @@ kni_free_mbufs(struct rte_kni *kni)
>>>   }
>>>
>>>   static void
>>> -kni_allocate_mbufs(struct rte_kni *kni)
>>> +kni_allocate_mbufs(struct rte_kni *kni, int num)
>>>   {
>>>   	int i, ret;
>>>   	struct rte_mbuf *pkts[MAX_MBUF_BURST_NUM];
>>> @@ -620,7 +620,10 @@ kni_allocate_mbufs(struct rte_kni *kni)
>>>   		return;
>>>   	}
>>>
>>> -	for (i = 0; i < MAX_MBUF_BURST_NUM; i++) {
>>> +	if (num == 0 || num > MAX_MBUF_BURST_NUM)
>>> +		num = MAX_MBUF_BURST_NUM;
>>> +
>>> +	for (i = 0; i < num; i++) {
>>>   		pkts[i] = rte_pktmbuf_alloc(kni->pktmbuf_pool);
>>>   		if (unlikely(pkts[i] == NULL)) {
>>>   			/* Out of memory */
>>> @@ -636,7 +639,7 @@ kni_allocate_mbufs(struct rte_kni *kni)
>>>   	ret = kni_fifo_put(kni->alloc_q, (void **)pkts, i);
>>>
>>>   	/* Check if any mbufs not put into alloc_q, and then free them */
>>> -	if (ret >= 0 && ret < i && ret < MAX_MBUF_BURST_NUM) {
>>> +	if (ret >= 0 && ret < i && ret < num) {
>>>   		int j;
>>>
>>>   		for (j = ret; j < i; j++)
>> --
>> *Olivier Demé*
>> *Druid Software Ltd.*
>> *Tel: +353 1 202 1831*
>> *Email: odeme-UsMDwKmwmRBx67MzidHQgQC/G2K4zDHf@public.gmane.org*
>> *URL: http://www.druidsoftware.com*
>> *Hall 7, stand 7F70.*
>> Druid Software: Monetising enterprise small cells solutions.