From mboxrd@z Thu Jan  1 00:00:00 1970
From: Jerin Jacob
Subject: Re: [PATCH v4 1/6] eventdev: introduce event driven programming model
Date: Tue, 7 Feb 2017 10:29:57 +0530
Message-ID: <20170207045956.GA5438@localhost.localdomain>
References: <1480996340-29871-1-git-send-email-jerin.jacob@caviumnetworks.com>
 <1482312326-2589-1-git-send-email-jerin.jacob@caviumnetworks.com>
 <1482312326-2589-2-git-send-email-jerin.jacob@caviumnetworks.com>
 <20170202140911.GA26986@localhost.localdomain>
 <6032c262-5cfc-db97-f41e-8704c367bee3@nxp.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Cc: Nipun Gupta, "bruce.richardson@intel.com", "gage.eads@intel.com",
 "dev@dpdk.org", "thomas.monjalon@6wind.com", "harry.van.haaren@intel.com"
To: Hemant Agrawal
Content-Disposition: inline
In-Reply-To: <6032c262-5cfc-db97-f41e-8704c367bee3@nxp.com>
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

On Fri, Feb 03, 2017 at 04:28:15PM +0530, Hemant Agrawal wrote:
> On 2/3/2017 12:08 PM, Nipun Gupta wrote:
> > > > > -----Original Message-----
> > > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> > > > > Sent: Wednesday, December 21, 2016 14:55
> > > > > To: dev@dpdk.org
> > > > > Cc: thomas.monjalon@6wind.com; bruce.richardson@intel.com;
> > > > > Hemant Agrawal; gage.eads@intel.com;
> > > > > harry.van.haaren@intel.com; Jerin Jacob
> > > > > Subject: [dpdk-dev] [PATCH v4 1/6] eventdev: introduce event driven
> > > > > programming model
> > > > >
> > > > > In a polling model, lcores poll ethdev ports and associated
> > > > > rx queues directly to look for packets.
In an event driven model,
> > > > > by contrast, lcores call the scheduler, which selects packets for
> > > > > them based on programmer-specified criteria. The eventdev library
> > > > > adds support for the event driven programming model, which offers
> > > > > applications automatic multicore scaling, dynamic load balancing,
> > > > > pipelining, packet ingress order maintenance and
> > > > > synchronization services to simplify application packet processing.
> > > > >
> > > > > By introducing the event driven programming model, DPDK can support
> > > > > both polling and event driven programming models for packet
> > > > > processing, and applications are free to choose whichever model
> > > > > (or combination of the two) best suits their needs.
> > > > >
> > > > > This patch adds the eventdev specification header file.
> > > > >
> > > > > Signed-off-by: Jerin Jacob
> > > > > Acked-by: Bruce Richardson
> > > > > ---
> > > > >  MAINTAINERS                        |    3 +
> > > > >  doc/api/doxy-api-index.md          |    1 +
> > > > >  doc/api/doxy-api.conf              |    1 +
> > > > >  lib/librte_eventdev/rte_eventdev.h | 1275 ++++++++++++++++++++++++++++++++++++
> > > > >  4 files changed, 1280 insertions(+)
> > > > >  create mode 100644 lib/librte_eventdev/rte_eventdev.h
> > > > >
> > > > > +
> > > > > +/**
> > > > > + * Event device information
> > > > > + */
> > > > > +struct rte_event_dev_info {
> > > > > +	const char *driver_name;	/**< Event driver name */
> > > > > +	struct rte_pci_device *pci_dev;	/**< PCI information */
> > > >
> > > > With 'rte_device' in place (rte_dev.h), should we not have 'rte_device'
> > > > instead of 'rte_pci_device' here?
> > >
> > > Yes. Please post a patch to fix this. At the time of merging to the
> > > next-eventdev tree it was not the case.
> >
> > Sure. I'll send a patch regarding this.
>
> > > > > + * The number of events dequeued is the number of scheduler contexts
> > > > > + * held by this port.
These contexts are automatically released in the next
> > > > > + * rte_event_dequeue_burst() invocation, or invoking
> > > > > + * rte_event_enqueue_burst() with the RTE_EVENT_OP_RELEASE operation
> > > > > + * can be used to release the contexts early.
> > > > > + *
> > > > > + * @param dev_id
> > > > > + *   The identifier of the device.
> > > > > + * @param port_id
> > > > > + *   The identifier of the event port.
> > > > > + * @param[out] ev
> > > > > + *   Points to an array of *nb_events* objects of type *rte_event*
> > > > > + *   structure, to be populated with the dequeued event objects.
> > > > > + * @param nb_events
> > > > > + *   The maximum number of event objects to dequeue, typically the
> > > > > + *   value returned by rte_event_port_dequeue_depth() for this port.
> > > > > + *
> > > > > + * @param timeout_ticks
> > > > > + *   - 0 no-wait, returns immediately if there is no event.
> > > > > + *   - >0 wait for the event, if the device is configured with
> > > > > + *     RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT then this function will
> > > > > + *     wait until an event is available or *timeout_ticks* time has
> > > > > + *     elapsed.
> > > >
> > > > Just for understanding - Is the expectation that
> > > > rte_event_dequeue_burst() will wait until the timeout unless the
> > > > requested number of events (nb_events) has been received on the
> > > > event port?
> > >
> > > Yes. If you need any change then send an RFC patch for the header file
> > > change.

> "at least one event available"

Looks good to me. If there are no objections then you can send a patch to
update the header file.

> The API should not wait if at least one event is available; i.e. discard
> the timeout value.
>
> The *timeout* is valid only until the first event is received (even when
> multiple events are requested); the driver will only check for further
> event availability and return as many events as it is able to get in its
> processing loop.
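To make sure we are on the same page, here is a small self-contained mock
(plain C with invented names; this is not DPDK code, only an illustration of
the semantics described above): the timeout applies only while the port is
empty, and once at least one event is available the call returns whatever is
ready without waiting further.

```c
#include <stdint.h>

/* Hypothetical stand-in for an event port: an array-backed queue. */
struct mock_port {
	int events[64];	/* pending events */
	int count;	/* number of pending events */
};

/*
 * Mock of the discussed dequeue semantics:
 * - wait (here: spin on a simulated tick counter) only while the port
 *   is empty and timeout ticks remain;
 * - as soon as at least one event is present, return up to nb_events
 *   of what is already there, without waiting for more.
 */
static uint16_t
mock_dequeue_burst(struct mock_port *p, int ev[], uint16_t nb_events,
		   uint64_t timeout_ticks)
{
	uint64_t waited = 0;

	while (p->count == 0 && waited < timeout_ticks)
		waited++;	/* stand-in for the driver's wait loop */

	/* Take what is ready; may be fewer than nb_events. */
	uint16_t n = (p->count < (int)nb_events) ? (uint16_t)p->count
						 : nb_events;
	for (uint16_t i = 0; i < n; i++)
		ev[i] = p->events[i];
	p->count -= n;
	return n;
}
```

So with three events pending and nb_events == 8, the call returns 3
immediately; with an empty port it returns 0 only after the timeout expires.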
> > > > > + *     If the device is not configured with
> > > > > + *     RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT, then this function
> > > > > + *     will wait until an event is available or until the
> > > > > + *     *dequeue_timeout_ns* ns which was previously supplied to
> > > > > + *     rte_event_dev_configure() has elapsed.
> > > > > + *
> > > > > + * @return
> > > > > + *   The number of event objects actually dequeued from the port.
> > > > > + *   The return value can be less than the value of the *nb_events*
> > > > > + *   parameter when the event port's queue is not full.
> > > > > + *
> > > > > + * @see rte_event_port_dequeue_depth()
> > > > > + */
> > > > > +uint16_t
> > > > > +rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id,
> > > > > +			struct rte_event ev[],
> > > > > +			uint16_t nb_events, uint64_t timeout_ticks);
> > > > > +
> > > >
> > > > Regards,
> > > > Nipun
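On the scheduler context accounting mentioned earlier in the thread, here is
a rough self-contained sketch (again invented names, not the real
implementation) of implicit release on the next dequeue versus an explicit
RTE_EVENT_OP_RELEASE-style early release.

```c
#include <stdint.h>

/* Hypothetical per-port state tracking held scheduler contexts. */
struct mock_sched_port {
	uint16_t held;	/* contexts held since the last dequeue */
};

/* Dequeue: contexts held from the previous call are implicitly
 * released, then one context is held per event handed out. */
static uint16_t
mock_sched_dequeue(struct mock_sched_port *p, uint16_t nb_events)
{
	p->held = nb_events;	/* prior contexts implicitly released */
	return nb_events;
}

/* Explicit early release of n contexts, in the spirit of
 * RTE_EVENT_OP_RELEASE via the enqueue path. */
static void
mock_sched_release(struct mock_sched_port *p, uint16_t n)
{
	p->held = (n < p->held) ? (uint16_t)(p->held - n) : 0;
}
```

The point is only that the number of held contexts equals the number of
events dequeued, and shrinks either on the next dequeue or on an explicit
release.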