From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jerin Jacob
Subject: Re: [PATCH 1/3] examples/eventdev_pipeline: added sample app
Date: Wed, 17 May 2017 23:33:16 +0530
Message-ID: <20170517180314.GA26402@jerin>
References: <1492768299-84016-1-git-send-email-harry.van.haaren@intel.com>
 <1492768299-84016-2-git-send-email-harry.van.haaren@intel.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: dev@dpdk.org, Gage Eads, Bruce Richardson
To: Harry van Haaren
Content-Disposition: inline
In-Reply-To: <1492768299-84016-2-git-send-email-harry.van.haaren@intel.com>
List-Id: DPDK patches and discussions
Sender: "dev"

-----Original Message-----
> Date: Fri, 21 Apr 2017 10:51:37 +0100
> From: Harry van Haaren
> To: dev@dpdk.org
> CC: jerin.jacob@caviumnetworks.com, Harry van Haaren, Gage Eads, Bruce Richardson
> Subject: [PATCH 1/3] examples/eventdev_pipeline: added sample app
> X-Mailer: git-send-email 2.7.4
>
> This commit adds a sample app for the eventdev library.
> The app has been tested with DPDK 17.05-rc2, hence this
> release (or later) is recommended.
>
> The sample app showcases a pipeline processing use-case,
> with event scheduling and processing defined per stage.
> The application receives traffic as normal, with each
> packet traversing the pipeline. Once the packet has
> been processed by each of the pipeline stages, it is
> transmitted again.
>
> The app provides a framework to utilize cores for a single
> role or multiple roles. Examples of roles are the RX core,
> TX core, Scheduling core (in the case of the event/sw PMD),
> and worker cores.
>
> Various flags are available to configure the number of stages,
> cycles of work at each stage, type of scheduling, number of
> worker cores, queue depths etc. For a full explanation,
> please refer to the documentation.
>
> Signed-off-by: Gage Eads
> Signed-off-by: Bruce Richardson
> Signed-off-by: Harry van Haaren
> ---
> +
> +static inline void
> +schedule_devices(uint8_t dev_id, unsigned lcore_id)
> +{
> +	if (rx_core[lcore_id] && (rx_single ||
> +	    rte_atomic32_cmpset(&rx_lock, 0, 1))) {
> +		producer();
> +		rte_atomic32_clear((rte_atomic32_t *)&rx_lock);
> +	}
> +
> +	if (sched_core[lcore_id] && (sched_single ||
> +	    rte_atomic32_cmpset(&sched_lock, 0, 1))) {
> +		rte_event_schedule(dev_id);

One question here: is the SW PMD implementation of rte_event_schedule()
capable of running concurrently on multiple cores?

Context: I am currently writing a testpmd-like test framework to exercise
different use cases, along with performance test cases such as throughput
and latency, and to make sure it works on both the SW and HW drivers.

Currently I see the following segfault when rte_event_schedule() is invoked
on multiple cores. Is it expected?
#0  0x000000000043e945 in __pull_port_lb (allow_reorder=0, port_id=2, sw=0x7ff93f3cb540)
    at /export/dpdk-thunderx/drivers/event/sw/sw_evdev_scheduler.c:406
/export/dpdk-thunderx/drivers/event/sw/sw_evdev_scheduler.c:406:11647:beg:0x43e945
[Current thread is 1 (Thread 0x7ff9fbd34700 (LWP 796))]
(gdb) bt
#0  0x000000000043e945 in __pull_port_lb (allow_reorder=0, port_id=2, sw=0x7ff93f3cb540)
    at /export/dpdk-thunderx/drivers/event/sw/sw_evdev_scheduler.c:406
#1  sw_schedule_pull_port_no_reorder (port_id=2, sw=0x7ff93f3cb540)
    at /export/dpdk-thunderx/drivers/event/sw/sw_evdev_scheduler.c:495
#2  sw_event_schedule (dev=)
    at /export/dpdk-thunderx/drivers/event/sw/sw_evdev_scheduler.c:566
#3  0x000000000040b4af in rte_event_schedule (dev_id=)
    at /export/dpdk-thunderx/build/include/rte_eventdev.h:1092
#4  worker (arg=)
    at /export/dpdk-thunderx/app/test-eventdev/test_queue_order.c:200
#5  0x000000000042d14b in eal_thread_loop (arg=)
    at /export/dpdk-thunderx/lib/librte_eal/linuxapp/eal/eal_thread.c:184
#6  0x00007ff9fd8e32e7 in start_thread () from /usr/lib/libpthread.so.0
#7  0x00007ff9fd62454f in clone () from /usr/lib/libc.so.6
(gdb) list
401              */
402             uint32_t iq_num = PRIO_TO_IQ(qe->priority);
403             struct sw_qid *qid = &sw->qids[qe->queue_id];
404
405             if ((flags & QE_FLAG_VALID) &&
406                             iq_ring_free_count(qid->iq[iq_num]) == 0)
407                     break;
408
409             /* now process based on flags. Note that for directed
410              * queues, the enqueue_flush masks off all but the
(gdb)

> +		if (dump_dev_signal) {
> +			rte_event_dev_dump(0, stdout);
> +			dump_dev_signal = 0;
> +		}
> +		rte_atomic32_clear((rte_atomic32_t *)&sched_lock);
> +	}
> +
> +	if (tx_core[lcore_id] && (tx_single ||
> +	    rte_atomic32_cmpset(&tx_lock, 0, 1))) {
> +		consumer();
> +		rte_atomic32_clear((rte_atomic32_t *)&tx_lock);
> +	}
> +}
> +
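
If concurrent calls are not supported, here is a minimal sketch (not
verified) of how I could serialize the scheduler call on my side in the
test framework, mirroring the sched_lock pattern from schedule_devices()
above. test_sched_lock and test_schedule_once are hypothetical names in my
test code, not part of this patch:

    #include <stdint.h>

    #include <rte_atomic.h>
    #include <rte_eventdev.h>

    /* Hypothetical per-process lock on the test framework side;
     * 0 = free, 1 = taken.
     */
    static uint32_t test_sched_lock;

    /* Only the lcore that wins the cmpset race calls
     * rte_event_schedule(); the others skip scheduling for this
     * iteration, as in the sample app's sched_lock handling.
     */
    static inline void
    test_schedule_once(uint8_t dev_id)
    {
            if (rte_atomic32_cmpset(&test_sched_lock, 0, 1)) {
                    rte_event_schedule(dev_id);
                    rte_atomic32_clear((rte_atomic32_t *)&test_sched_lock);
            }
    }

Lcores that lose the cmpset race would simply continue with their
dequeue/enqueue work and retry on the next loop iteration. But if the SW
PMD is expected to support concurrent scheduling, then the above should
not be needed and the segfault looks like a bug.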