From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bruce Richardson
Subject: Re: [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
Date: Mon, 29 Sep 2014 13:06:13 +0100
Message-ID: <20140929120613.GG12072@BRICHA3-MOBL>
In-Reply-To: <4F9CE4A3-600B-42E0-B5C0-71D3AF7F0CF5-CWA4WttNNZF54TAoqtyWWQ@public.gmane.org>
To: "Wiles, Roger Keith"
List-Id: patches and discussions about DPDK

On Sun, Sep 28, 2014 at 11:17:34PM +0000, Wiles, Roger Keith wrote:
> 
> On Sep 28, 2014, at 5:41 PM, Ananyev, Konstantin wrote:
> 
> > 
> > 
> >> -----Original Message-----
> >> From: dev [mailto:dev-bounces-VfR2kkLFssw@public.gmane.org] On Behalf Of Wiles, Roger Keith
> >> Sent: Sunday, September 28, 2014 6:52 PM
> >> To:
> >> Subject: [dpdk-dev] [RFC] More changes for rte_mempool.h:__mempool_get_bulk()
> >> 
> >> Here is a Request for Comment on the __mempool_get_bulk() routine. I believe I am seeing a few more issues in this routine; please look at the code below and see if these changes address some concerns in how the ring is handled.
> >> 
> >> The first issue, I believe, is that cache->len should be increased by ret and not req, as we do not know whether ret == req. This also means cache->len may still not satisfy the request from the cache.
> >> 
> >> The second issue is that, if the above is correct, we also have to account for it in the stats.
> >> 
> >> Let me know what you think?
> >> ++Keith
> >> ---
> >> 
> >> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> >> index 199a493..b1b1f7a 100644
> >> --- a/lib/librte_mempool/rte_mempool.h
> >> +++ b/lib/librte_mempool/rte_mempool.h
> >> @@ -945,9 +945,7 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
> >>                    unsigned n, int is_mc)
> >>  {
> >>         int ret;
> >> -#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> >> -       unsigned n_orig = n;
> >> -#endif
> > 
> > Yep, as I said in my previous mail, n_orig could be removed entirely.
> > Though, on the other hand, it is harmless.
> > 
> >> +
> >>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
> >>         struct rte_mempool_cache *cache;
> >>         uint32_t index, len;
> >> @@ -979,7 +977,21 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
> >>                         goto ring_dequeue;
> >>                 }
> >> 
> >> -               cache->len += req;
> >> +               cache->len += ret;      // Need to adjust len by ret, not req, as (ret != req) is possible
> >> +
> > 
> > rte_ring_mc_dequeue_bulk(.., req) at line 971 would either get all req objects from the ring and return 0 (success), or get no entries from the ring and return a negative value (failure).
> > So this change is erroneous.
> 
> Sorry, I combined my thoughts on changing the get_bulk behavior, and you would be correct for the current design. This is why I decided to make it an RFC :-)
> > 
> >> +               if ( cache->len < n ) {
> > 
> > If n > cache_size, then we go straight to 'ring_dequeue', see line 959.
> > So there is no need for that check here.
> 
> My thinking (at the time) was that get_bulk should return 'n' instead of zero, which I feel is the better coding. You are correct; it does not make sense unless you factor in my thinking at the time :-(
> > 
> >> +                       /*
> >> +                        * The number (ret + cache->len) may not be >= n, as
> >> +                        * the 'ret' value may be zero or less than 'req'.
> >> +                        *
> >> +                        * Note:
> >> +                        * Ordering between the cache and the common pool could
> >> +                        * be an issue if (cache->len != 0 and less than n), but in the
> >> +                        * normal case it should be OK. If the user needs to preserve
> >> +                        * the order of packets, then he must set cache_size == 0.
> >> +                        */
> >> +                       goto ring_dequeue;
> >> +               }
> >>         }
> >> 
> >>         /* Now fill in the response ... */
> >> @@ -1002,9 +1014,12 @@ ring_dequeue:
> >>                 ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
> >> 
> >>                 if (ret < 0)
> >> -                       __MEMPOOL_STAT_ADD(mp, get_fail, n_orig);
> >> -               else
> >> +                       __MEMPOOL_STAT_ADD(mp, get_fail, n);
> >> +               else {
> >>                         __MEMPOOL_STAT_ADD(mp, get_success, ret);
> >> +                       // Catch the case when ret != n; adding zero should not be a problem.
> >> +                       __MEMPOOL_STAT_ADD(mp, get_fail, n - ret);
> > 
> > As I said above, ret == 0 on success, so no need for that change.
> > Just n (or n_orig) is OK here.
> > 
> >> +               }
> >> 
> >>         return ret;
> >> }
> >> 
> >> Keith Wiles, Principal Technologist with CTO office, Wind River mobile 972-213-5533
> 
> Do we think it is worth it to change the behavior of get_bulk to return 'n' instead of zero on success? It would remove a few tests, IMO, in a couple of places. We could also return <0 in the zero case as well, just to make sure code did not try to follow the success path by mistake.

If you want to have such a function, I think it should align with the 
functions on the rings. In this case, this would mean having a get_burst 
function, which returns less than or equal to the number of elements 
requested. I would not change the behaviour of the existing function 
without also changing the rings' "bulk" function to match.

/Bruce

> > 
> > NACK in summary.
> > Konstantin
> 
> Keith Wiles, Principal Technologist with CTO office, Wind River mobile 972-213-5533
> 