From: Jianbo Liu
To: Jerin Jacob
Cc: Jia He, "Ananyev, Konstantin", Olivier MATZ, "dev@dpdk.org",
 "jia.he@hxt-semitech.com", "jie2.liu@hxt-semitech.com",
 "bing.zhao@hxt-semitech.com"
Subject: Re: [PATCH] ring: guarantee ordering of cons/prod loading when doing enqueue/dequeue
Date: Fri, 13 Oct 2017 15:33:37 +0800
Message-ID: <20171013073336.GB10844@arm.com>
In-Reply-To: <20171013014914.GA2067@jerin>
References: <20171010095636.4507-1-hejianet@gmail.com>
 <20171012155350.j34ddtivxzd27pag@platinum>
 <2601191342CEEE43887BDE71AB9772585FAA859F@IRSMSX103.ger.corp.intel.com>
 <20171012172311.GA8524@jerin>
 <20171013014914.GA2067@jerin>

The 10/13/2017 07:19, Jerin Jacob wrote:
> -----Original Message-----
> > Date: Fri, 13 Oct 2017 09:16:31 +0800
> > From: Jia He
> > To: Jerin Jacob, "Ananyev, Konstantin"
> > Cc: Olivier MATZ, "dev@dpdk.org", "jia.he@hxt-semitech.com",
> >  "jie2.liu@hxt-semitech.com", "bing.zhao@hxt-semitech.com"
> > Subject: Re: [PATCH] ring: guarantee ordering of cons/prod loading when
> >  doing enqueue/dequeue
> > User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101
> >  Thunderbird/52.3.0
> >
> > Hi
> >
> > On 10/13/2017 9:02 AM, Jia He Wrote:
> > > Hi Jerin
> > >
> > > On 10/13/2017 1:23 AM, Jerin Jacob Wrote:
> > > > -----Original Message-----
> > > > > Date: Thu, 12 Oct 2017 17:05:50 +0000
> > > [...]
> > > > On the same lines,
> > > >
> > > > Jia He, jie2.liu, bing.zhao,
> > > >
> > > > Is this patch based on code review, or did you see this issue on any
> > > > of the arm/ppc targets? arm64 will take a performance hit with this change.
> > Sorry, I missed one important piece of information:
> > our platform is an aarch64 server with 46 CPUs.
>
> Is this an OOO (out-of-order execution) aarch64 CPU implementation?
>
> > If we reduce the number of CPUs involved, the bug occurs less frequently.
> >
> > Yes, the mb barrier impacts performance, but correctness is more important,
> > isn't it ;-)
>
> Yes.
>
> > Maybe we can find some other lightweight barrier here?
>
> Yes. Regarding a lightweight barrier, arm64 has native support for acquire
> and release semantics, which is exposed through gcc as architecture-agnostic
> functions:
> https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html
> http://preshing.com/20130922/acquire-and-release-fences/
>
> Good to know.
> 1) How much overhead does this patch add on your platform? Just relative
>    numbers are enough.
> 2) As a prototype, does changing to acquire and release semantics
>    reduce the overhead on your platform?

+1, can you try what ODP does in the link mentioned below?

> Reference FreeBSD ring/DPDK style ring implementation through acquire
> and release semantics:
> https://github.com/Linaro/odp/blob/master/platform/linux-generic/pktio/ring.c
>
> I will also spend some cycles on this.
>
> >
> > Cheers,
> > Jia
> > > Based on mbuf_autotest, the rte_panic will be invoked within seconds.
> > >
> > > PANIC in test_refcnt_iter():
> > > (lcore=0, iter=0): after 10s only 61 of 64 mbufs left free
> > > 1: [./test(rte_dump_stack+0x38) [0x58d868]]
> > > Aborted (core dumped)
> > >
> > > Cheers,
> > > Jia
> > > >
> > > > > Konstantin
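
As a concrete illustration of the acquire/release idea discussed above, here is a
minimal single-producer/single-consumer sketch built on the GCC __atomic builtins
Jerin links to. It is only a sketch: the ring layout and the names (toy_ring,
prod_tail, cons_tail, sp_enqueue, sc_dequeue, RING_SZ) are invented for this
example and are not the rte_ring code nor the patch under discussion. What it
shows is the pairing: the producer's release store of prod_tail makes the slot
write visible before the new index, and the consumer's acquire load of prod_tail
keeps the slot read after it, so no full rte_smp_rmb()/rte_smp_wmb() pair is
needed on a weakly ordered CPU such as arm64.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define RING_SZ 256	/* power of two so index masking works */

struct toy_ring {
	uint32_t prod_tail;	/* written by the producer, read by the consumer */
	uint32_t cons_tail;	/* written by the consumer, read by the producer */
	void *slots[RING_SZ];
};

static struct toy_ring r;

/* Single-producer enqueue: fill the slot, then publish it with a release store. */
static int
sp_enqueue(void *obj)
{
	uint32_t pt = __atomic_load_n(&r.prod_tail, __ATOMIC_RELAXED);
	/* Acquire pairs with the consumer's release store of cons_tail,
	 * so the slot we may reuse is no longer being read. */
	uint32_t ct = __atomic_load_n(&r.cons_tail, __ATOMIC_ACQUIRE);

	if (pt - ct >= RING_SZ)		/* ring full */
		return -1;

	r.slots[pt & (RING_SZ - 1)] = obj;
	/* Release: the slot store above becomes visible before the new tail. */
	__atomic_store_n(&r.prod_tail, pt + 1, __ATOMIC_RELEASE);
	return 0;
}

/* Single-consumer dequeue: acquire-load the producer tail before touching slots. */
static void *
sc_dequeue(void)
{
	uint32_t ct = __atomic_load_n(&r.cons_tail, __ATOMIC_RELAXED);
	/* Acquire pairs with the producer's release store of prod_tail,
	 * so the slot contents are guaranteed to be visible. */
	uint32_t pt = __atomic_load_n(&r.prod_tail, __ATOMIC_ACQUIRE);
	void *obj;

	if (ct == pt)			/* ring empty */
		return NULL;

	obj = r.slots[ct & (RING_SZ - 1)];
	__atomic_store_n(&r.cons_tail, ct + 1, __ATOMIC_RELEASE);
	return obj;
}

static int values[1000];

static void *
producer(void *arg)
{
	int i;

	for (i = 0; i < 1000; i++) {
		values[i] = i;
		while (sp_enqueue(&values[i]) != 0)
			;	/* spin until the consumer makes room */
	}
	return arg;
}

int
main(void)
{
	pthread_t t;
	long sum = 0;
	int n = 0;

	pthread_create(&t, NULL, producer, NULL);
	while (n < 1000) {
		void *obj = sc_dequeue();
		if (obj != NULL) {
			sum += *(int *)obj;
			n++;
		}
	}
	pthread_join(t, NULL);
	printf("consumed %d items, sum=%ld (expected %d)\n",
	       n, sum, 999 * 1000 / 2);
	return 0;
}

Built with something like "cc -O2 -pthread toy_ring.c", this should print the
expected sum. On arm64, gcc lowers the acquire/release accesses to ldar/stlr
instructions rather than full dmb barriers, which is the lighter-weight option
referred to above; the multi-producer/multi-consumer case additionally needs a
CAS loop on the head indices, which this sketch deliberately leaves out.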