From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jerin Jacob
Subject: Re: [PATCH 2/4] eventdev: implement the northbound APIs
Date: Tue, 29 Nov 2016 07:31:48 +0530
Message-ID: <20161129020147.GA9930@svelivela-lt.caveonetworks.com>
References: <9184057F7FC11744A2107296B6B8EB1E01E31739@FMSMSX108.amr.corp.intel.com>
 <20161121191358.GA9044@svelivela-lt.caveonetworks.com>
 <20161121193133.GA9895@svelivela-lt.caveonetworks.com>
 <9184057F7FC11744A2107296B6B8EB1E01E31C40@FMSMSX108.amr.corp.intel.com>
 <20161122181913.GA9456@svelivela-lt.caveonetworks.com>
 <9184057F7FC11744A2107296B6B8EB1E01E32F3E@FMSMSX108.amr.corp.intel.com>
 <20161122200022.GA12168@svelivela-lt.caveonetworks.com>
 <9184057F7FC11744A2107296B6B8EB1E01E331A3@FMSMSX108.amr.corp.intel.com>
 <20161122234331.GA20501@svelivela-lt.caveonetworks.com>
 <9184057F7FC11744A2107296B6B8EB1E01E33E96@FMSMSX108.amr.corp.intel.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Cc: "dev@dpdk.org" , "Richardson, Bruce" , "Van Haaren, Harry" , "hemant.agrawal@nxp.com"
To: "Eads, Gage"
Received: from NAM02-CY1-obe.outbound.protection.outlook.com (mail-cys01nam02on0064.outbound.protection.outlook.com [104.47.37.64]) by dpdk.org (Postfix) with ESMTP id EB1823B5 for ; Tue, 29 Nov 2016 03:01:56 +0100 (CET)
Content-Disposition: inline
In-Reply-To: <9184057F7FC11744A2107296B6B8EB1E01E33E96@FMSMSX108.amr.corp.intel.com>
List-Id: patches and discussions about DPDK
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

On Mon, Nov 28, 2016 at 03:53:08PM +0000, Eads, Gage wrote:
> (Bruce's advice heeded :))
>
> > > > >
> > > > > How would this check work? Wouldn't it prevent any core from
> > > > > running the software scheduler in the centralized case?
> > > >
> > > > I guess you may not need RTE_EVENT_DEV_CAP here; instead, you need a flag
> > > > for device configure here:
> > > >
> > > > #define RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED (1ULL << 1)
> > > >
> > > > struct rte_event_dev_config config;
> > > > config.event_dev_cfg = RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED;
> > > > rte_event_dev_configure(.., &config);
> > > >
> > > > On the driver side, on configure:
> > > > if (config.event_dev_cfg & RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED)
> > > > 	eventdev->schedule = NULL;
> > > > else // centralized case
> > > > 	eventdev->schedule = your_centralized_schedule_function;
> > > >
> > > > Does that work?
> > >
> > > Hm, I fear the API would give users the impression that they can select the
> > > scheduling behavior of a given eventdev, when a software scheduler is more
> > > likely to be either distributed or centralized -- not both.
> >
> > Even if it is a capability flag, it is still per "device", right? A
> > capability flag is also more of a read-only thing. Am I missing something here?
>
> Correct, the capability flag I'm envisioning is per-device and read-only.
>
> > > What if we use the capability flag, and define rte_event_schedule() as the
> > > scheduling function for centralized schedulers and rte_event_dequeue() as the
> > > scheduling function for distributed schedulers? That way, the datapath could be
> > > the simple dequeue -> process -> enqueue. Applications would check the
> > > capability flag at configuration time to decide whether or not to launch an
> > > lcore that calls rte_event_schedule().
> >
> > I am all for the simple "dequeue -> process -> enqueue". However,
> > rte_event_schedule() is added for the SW scheduler only, so it may not make
> > sense to add one more check on top of rte_event_schedule() in the fastpath to
> > see whether it is really needed or not.
>
> Yes, the additional check shouldn't be needed.
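[Editor's note] The configure-time decision discussed above could be sketched as below. This is a minimal, self-contained simulation, not the real DPDK API: the bit position of RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED, the event_dev_info stand-in struct, and needs_scheduler_lcore() are all assumptions, since the real flag is only proposed for v2 of the patch.

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for the proposed eventdev definitions; the real names would
 * live in rte_eventdev.h, and the bit position here is an assumption. */
#define RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED (1ULL << 0)

struct event_dev_info {         /* sketch of an rte_event_dev_info-like struct */
	uint64_t event_dev_cap; /* read-only, per-device capability flags */
};

/* Configuration-time check: returns nonzero when the device is a
 * centralized scheduler and the application must dedicate an lcore
 * to repeatedly calling rte_event_schedule(). */
static int
needs_scheduler_lcore(const struct event_dev_info *info)
{
	return !(info->event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED);
}
```

With this shape, a distributed device lets every worker run the plain dequeue -> process -> enqueue loop, while a centralized device additionally gets one dedicated lcore spinning on rte_event_schedule().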
> In terms of the 'typical workflow' description, this is what I have in mind:
>
> *
> * An event-driven application has the following typical workflow on fastpath:
> * \code{.c}
> *	while (1) {
> *
> *		rte_event_dequeue(...);
> *
> *		(event processing)
> *
> *		rte_event_enqueue(...);
> *	}
> * \endcode
> *
> * The point at which events are scheduled to ports depends on the device. For
> * hardware devices, scheduling occurs asynchronously. Software schedulers can
> * either be distributed (each worker thread schedules events to its own port)
> * or centralized (a dedicated thread schedules to all ports). Distributed
> * software schedulers perform the scheduling in rte_event_dequeue(), whereas
> * centralized scheduler logic is located in rte_event_schedule(). A device
> * without the RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag is
> * centralized and thus needs a dedicated scheduling thread that repeatedly
> * calls rte_event_schedule().

Makes sense. I will change the existing schedule description to the proposed
one and add the RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag in v2.

Thanks Gage.

> *
> */
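[Editor's note] The "dequeue -> process -> enqueue" workflow in the comment block above can be exercised with a toy, single-threaded stand-in. The ring, dequeue(), enqueue(), and worker_poll() names below are illustrative placeholders for the rte_event_* calls, not the eventdev implementation.

```c
#include <assert.h>
#include <stdint.h>

/* Toy stand-ins so the "dequeue -> process -> enqueue" loop can run
 * without an eventdev PMD; all names here are illustrative only. */
struct event {
	uint32_t flow_id;
	uint64_t payload;
};

#define RING_SZ 8u              /* power of two so unsigned wrap is safe */
static struct event ring[RING_SZ];
static unsigned int head, tail;

static int dequeue(struct event *ev)          /* plays rte_event_dequeue() */
{
	if (head == tail)
		return 0;                     /* nothing scheduled to this port */
	*ev = ring[head++ % RING_SZ];
	return 1;
}

static void enqueue(const struct event *ev)   /* plays rte_event_enqueue() */
{
	ring[tail++ % RING_SZ] = *ev;
}

static uint64_t processed_sum;                /* stands in for real work */

/* One poll iteration of the typical workflow: drain the port,
 * "process" each event, and account for it. */
static void worker_poll(void)
{
	struct event ev;

	while (dequeue(&ev)) {
		ev.payload *= 2;              /* (event processing) */
		processed_sum += ev.payload;
	}
}
```

With a distributed software scheduler, the scheduling work would happen inside the dequeue call itself; with a centralized one, a separate thread spinning on rte_event_schedule() would fill the port that this loop drains.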