From: Jerin Jacob
Subject: Re: [PATCH] eventdev: add producer enqueue hint
Date: Tue, 27 Jun 2017 15:47:37 +0530
Message-ID: <20170627101736.GA21161@jerin>
References: <20170612114627.18893-1-jerin.jacob@caviumnetworks.com>
 <9184057F7FC11744A2107296B6B8EB1E01ED7263@FMSMSX108.amr.corp.intel.com>
 <20170627080820.GA14276@jerin>
To: "Van Haaren, Harry"
Cc: "Eads, Gage", dev@dpdk.org, "Richardson, Bruce",
 hemant.agrawal@nxp.com, nipun.gupta@nxp.com,
 "Vangati, Narender", "Rao, Nikhil"

-----Original Message-----
> Date: Tue, 27 Jun 2017 08:44:34 +0000
> From: "Van Haaren, Harry"
> To: Jerin Jacob, "Eads, Gage"
> CC: dev@dpdk.org, "Richardson, Bruce", hemant.agrawal@nxp.com,
>  nipun.gupta@nxp.com, "Vangati, Narender", "Rao, Nikhil"
> Subject: RE: [dpdk-dev] [PATCH] eventdev: add producer enqueue hint
>
> > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > Sent: Tuesday, June 27, 2017 9:08 AM
> > To: Eads, Gage
> > Cc: dev@dpdk.org; Richardson, Bruce; Van Haaren, Harry;
> > hemant.agrawal@nxp.com; nipun.gupta@nxp.com; Vangati, Narender;
> > Rao, Nikhil
> > Subject: Re: [dpdk-dev] [PATCH] eventdev: add producer enqueue hint
> >
> > > > void
> > > > diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
> > > > index a248fe90e..1c1a46593 100644
> > > > --- a/lib/librte_eventdev/rte_eventdev.h
> > > > +++ b/lib/librte_eventdev/rte_eventdev.h
> > > > @@ -933,7 +933,15 @@ struct rte_event {
> > > >  	 * and is undefined on dequeue.
> > > >  	 * @see RTE_EVENT_OP_NEW, (RTE_EVENT_OP_*)
> > > >  	 */
> > > > -	uint8_t rsvd:4;
> > > > +	uint8_t all_op_new:1;
> > > > +	/**< Valid only with event enqueue operation - This hint
> > > > +	 * indicates that the enqueue request has only the
> > > > +	 * events with op == RTE_EVENT_OP_NEW.
> > > > +	 * The event producer, typically use this pattern to
> > > > +	 * inject the events to eventdev.
> > > > +	 * @see RTE_EVENT_OP_NEW rte_event_enqueue_burst()
> > > > +	 */
> > > > +	uint8_t rsvd:3;
> > > >  	/**< Reserved for future use */
> > > >  	uint8_t sched_type:2;
> > > >  	/**< Scheduler synchronization type (RTE_SCHED_TYPE_*)
> > > > --
> > > > 2.13.1
> > >
> > > I slightly prefer the parallel enqueue API -- I can see folks making
> > > the mistake of setting all_op_new without setting the op to
> > > RTE_EVENT_OP_NEW, and later adding a "forward-only" enqueue API
> > > could be interesting for the sw PMD -- but this looks fine to me.
> > > Curious if others have any thoughts.
> >
> > If a forward-only parallel enqueue API is interesting for the SW PMD,
> > then I can drop this one and introduce the forward-only API. Let me
> > know if others have any thoughts.
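To illustrate what a producer would do with the hint being discussed, a
minimal sketch follows. The all_op_new bit exists only in the patch quoted
above (it was not merged as-is), and the function name producer_inject, the
queue setup, and the retry loop are invented for this example, not eventdev
API:

    #include <stdint.h>
    #include <rte_eventdev.h>

    /* Sketch of a producer using the proposed all_op_new hint. The hint
     * bit exists only in the patch quoted above; the names and the
     * ATOMIC queue setup here are illustrative.
     */
    static void
    producer_inject(uint8_t dev_id, uint8_t port_id, uint8_t queue_id,
                    struct rte_event ev[], uint16_t nb_events)
    {
        uint16_t i, sent = 0;

        for (i = 0; i < nb_events; i++) {
            ev[i].queue_id = queue_id;
            ev[i].sched_type = RTE_SCHED_TYPE_ATOMIC;
            ev[i].op = RTE_EVENT_OP_NEW; /* every event is OP_NEW ...  */
            ev[i].all_op_new = 1;        /* ... so the hint can be set */
        }

        /* A PMD honouring the hint can skip per-event op checks and
         * take a pure "new event" fast path for the whole burst.
         */
        while (sent < nb_events)
            sent += rte_event_enqueue_burst(dev_id, port_id,
                                            ev + sent, nb_events - sent);
    }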
> To make sure I understand correctly, the "parallel API" idea is to add
> a new function pointer per-PMD, and dedicate it to enqueueing a burst
> of packets with the same OP? So the end result would be function(s) in
> the public API like this:
>
> rte_event_enqueue_burst_new(port, new_events, n_events);
> rte_event_enqueue_burst_forward(port, new_events, n_events);
>
> Given these are a "specialization" of the generic enqueue_burst()
> function, the PMD is not obliged to implement them. If they are NULL,
> the eventdev.c infrastructure can just point the burst_new() and
> burst_forward() to the generic enqueue without any performance delta?
>
> The cost is some added code in the public header and infrastructure.
> The gain is that we don't overload the current API with new behavior.
>
> Assuming my description of the parallel proposal above is correct, +1
> for the parallel function approach. I like APIs that "do what they say
> on the tin" :)

Yes. We are on the same page. I will send the v2.
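For reference, the NULL-fallback wiring described above could look roughly
like the sketch below. All names here (sketch_eventdev, enqueue_burst_fn,
sketch_fixup_enqueue_ops) are hypothetical, not the real eventdev
structures; only the idea of defaulting the specialized pointers to the
generic enqueue comes from the mail:

    #include <stddef.h>
    #include <stdint.h>

    struct rte_event;

    /* Per-port burst enqueue entry point, mirroring the shape of the
     * eventdev enqueue. Hypothetical names throughout this sketch.
     */
    typedef uint16_t (*enqueue_burst_fn)(void *port,
            const struct rte_event ev[], uint16_t nb_events);

    struct sketch_eventdev {
        enqueue_burst_fn enqueue_burst;         /* generic, mixed ops */
        enqueue_burst_fn enqueue_new_burst;     /* OP_NEW-only path   */
        enqueue_burst_fn enqueue_forward_burst; /* OP_FORWARD-only    */
    };

    /* Run after the PMD registers its ops: any specialized entry point
     * the PMD left NULL falls back to the generic enqueue, so a PMD
     * without a fast path needs no changes and pays no penalty.
     */
    static void
    sketch_fixup_enqueue_ops(struct sketch_eventdev *dev)
    {
        if (dev->enqueue_new_burst == NULL)
            dev->enqueue_new_burst = dev->enqueue_burst;
        if (dev->enqueue_forward_burst == NULL)
            dev->enqueue_forward_burst = dev->enqueue_burst;
    }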