From: "Wiles, Keith"
Subject: Re: [PATCH 0/3] *** timer library enhancements ***
Date: Wed, 23 Aug 2017 15:02:02 +0000
Message-ID: <3F9B5E47-8083-443E-96EE-CBC41695BE43@intel.com>
References: <1503499644-29432-1-git-send-email-erik.g.carrillo@intel.com>
In-Reply-To: <1503499644-29432-1-git-send-email-erik.g.carrillo@intel.com>
To: "Carrillo, Erik G"
Cc: "rsanford@akamai.com", "dev@dpdk.org"
List-Id: DPDK patches and discussions

> On Aug 23, 2017, at 9:47 AM, Gabriel Carrillo wrote:
>
> In the current implementation of the DPDK timer library, timers can be
> created and set to be handled by a target lcore by adding them to a
> skiplist that corresponds to that lcore. However, if an application
> enables multiple lcores, and each of these lcores repeatedly attempts
> to install timers on the same target lcore, overall application
> throughput is reduced as all lcores contend to acquire the lock
> guarding the single skiplist of pending timers.
>
> This patchset addresses this scenario by adding an array of skiplists
> to each lcore's priv_timer struct, such that when lcore i installs a
> timer on lcore k, the timer will be added to the ith skiplist for
> lcore k.
> If lcore j installs a timer on lcore k simultaneously,
> lcores i and j can both proceed, since they will be acquiring different
> locks for different lists.
>
> When lcore k processes its pending timers, it will traverse each skiplist
> in its array and acquire a skiplist's lock while a run list is broken
> out; meanwhile, all other lists can continue to be modified. Then, all
> run lists for lcore k are collected and traversed together so timers are
> executed in their global order.

What is the performance and/or latency added to the timeout now?

I worry about the case when just about all of the cores are enabled, which
could be as high as 128 or more now.

One option is to have the lcore j that wants to install a timer on lcore k
pass a message via a ring to lcore k to add that timer. We could even add
that logic into setting a timer on a different lcore than the caller in the
current API. The ring would be multi-producer and single-consumer; we still
have the lock. What am I missing here?

>
> Gabriel Carrillo (3):
>   timer: add per-installer pending lists for each lcore
>   timer: handle timers installed from non-EAL threads
>   doc: update timer lib docs
>
>  doc/guides/prog_guide/timer_lib.rst |  19 ++-
>  lib/librte_timer/rte_timer.c        | 329 +++++++++++++++++++++++-------------
>  lib/librte_timer/rte_timer.h        |   9 +-
>  3 files changed, 231 insertions(+), 126 deletions(-)
>
> --
> 2.6.4

Regards,
Keith