From: Thomas Monjalon
Subject: Re: [PATCH] eal/ppc: remove fix of memory barrier for IBM POWER
Date: Tue, 19 Mar 2019 12:14:59 +0100
Message-ID: <1789153.zrlSK8XYcq@xps>
References: <1552913893-43407-1-git-send-email-dekelp@mellanox.com> <001d01d4de03$378f18a0$a6ad49e0$@linux.vnet.ibm.com>
To: Dekel Peled, Chao Zhu
Cc: Yongseok Koh, Shahaf Shuler, dev@dpdk.org, Ori Kam, stable@dpdk.org
List-Id: DPDK patches and discussions

Guys, please let's avoid top-posting.
You are not really replying to each other:

1/ Dekel mentioned the IBM doc,
but Chao did not argue about the lack of IO protection with lwsync.
We assume that rte_mb() should protect any access, including IO.

2/ Chao asked about the semantics of the barrier used in the mlx5 code,
but Dekel did not reply about the semantics:
are we protecting IO or general memory access?
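To make the disagreement concrete, this is the ppc_64 mapping being
debated, paraphrased from Dekel's description and from the patch below
(not a verbatim copy of rte_atomic.h):

    /* Before the patch, on RTE_ARCH_64 (paraphrased): */
    #define rte_mb()    asm volatile("sync"   : : : "memory") /* all accesses, incl. device memory */
    #define rte_wmb()   asm volatile("lwsync" : : : "memory") /* stores to system memory only */
    #define rte_rmb()   asm volatile("lwsync" : : : "memory") /* loads from system memory only */
    #define rte_io_mb() rte_mb()

The patch switches rte_wmb() and rte_rmb() to "sync" so that they also
order accesses to device memory.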
19/03/2019 11:05, Dekel Peled:
> Hi,
>
> For ppc, rte_io_mb() is defined as rte_mb(), which is defined as asm sync.
> According to the comments in arch/ppc_64/rte_atomic.h, rte_wmb() and
> rte_rmb() are the same as rte_mb(), for store and load respectively.
> My patch proposes to define rte_wmb() and rte_rmb() as asm sync, like
> rte_mb(), since using lwsync is incorrect for them.
>
> Regards,
> Dekel
>
> > -----Original Message-----
> > From: Chao Zhu
> > Sent: Tuesday, March 19, 2019 5:24 AM
> > To: Dekel Peled
> > Cc: Yongseok Koh; Shahaf Shuler; dev@dpdk.org; Ori Kam;
> > Thomas Monjalon; stable@dpdk.org
> > Subject: RE: [PATCH] eal/ppc: remove fix of memory barrier for IBM POWER
> >
> > Dekel,
> >
> > To control the memory order for device memory, I think you should use
> > rte_io_mb() instead of rte_mb(). This will generate the correct result.
> > rte_wmb() is used for system memory.
> >
> > > -----Original Message-----
> > > From: Dekel Peled
> > > Sent: Monday, March 18, 2019 8:58 PM
> > > To: chaozhu@linux.vnet.ibm.com
> > > Cc: yskoh@mellanox.com; shahafs@mellanox.com; dev@dpdk.org;
> > > orika@mellanox.com; thomas@monjalon.net; dekelp@mellanox.com;
> > > stable@dpdk.org
> > > Subject: [PATCH] eal/ppc: remove fix of memory barrier for IBM POWER
> > >
> > > From the previous patch description: "To improve performance on PPC64,
> > > use light weight sync instruction instead of sync instruction."
> > >
> > > Excerpt from IBM doc [1], section "Memory barrier instructions":
> > > "The second form of the sync instruction is light-weight sync, or lwsync.
> > > This form is used to control ordering for storage accesses to system
> > > memory only. It does not create a memory barrier for accesses to device
> > > memory."
> > >
> > > This patch removes the use of lwsync, so calls to rte_wmb() and
> > > rte_rmb() will provide a correct memory barrier to ensure the order of
> > > accesses to system memory and device memory.
> > >
> > > [1] https://www.ibm.com/developerworks/systems/articles/powerpc.html
> > >
> > > Fixes: d23a6bd04d72 ("eal/ppc: fix memory barrier for IBM POWER")
> > > Cc: stable@dpdk.org
> > >
> > > Signed-off-by: Dekel Peled
> > > ---
> > >  lib/librte_eal/common/include/arch/ppc_64/rte_atomic.h | 8 --------
> > >  1 file changed, 8 deletions(-)
> > >
> > > diff --git a/lib/librte_eal/common/include/arch/ppc_64/rte_atomic.h
> > > b/lib/librte_eal/common/include/arch/ppc_64/rte_atomic.h
> > > index ce38350..797381c 100644
> > > --- a/lib/librte_eal/common/include/arch/ppc_64/rte_atomic.h
> > > +++ b/lib/librte_eal/common/include/arch/ppc_64/rte_atomic.h
> > > @@ -63,11 +63,7 @@
> > >   * Guarantees that the STORE operations generated before the barrier
> > >   * occur before the STORE operations generated after.
> > >   */
> > > -#ifdef RTE_ARCH_64
> > > -#define rte_wmb() asm volatile("lwsync" : : : "memory")
> > > -#else
> > >  #define rte_wmb() asm volatile("sync" : : : "memory")
> > > -#endif
> > >
> > >  /**
> > >   * Read memory barrier.
> > > @@ -75,11 +71,7 @@
> > >   * Guarantees that the LOAD operations generated before the barrier
> > >   * occur before the LOAD operations generated after.
> > >   */
> > > -#ifdef RTE_ARCH_64
> > > -#define rte_rmb() asm volatile("lwsync" : : : "memory")
> > > -#else
> > >  #define rte_rmb() asm volatile("sync" : : : "memory")
> > > -#endif
> > >
> > >  #define rte_smp_mb() rte_mb()
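As background to the IO-vs-system-memory question above, the pattern at
stake is the classic doorbell sequence: a store to a descriptor in system
memory must be visible to the device before a store to a device register.
Below is a minimal sketch of that sequence; struct txq, desc and db are
hypothetical names for illustration, not mlx5 code, while rte_wmb() and
the MMIO write helper rte_write32() from rte_io.h are the real DPDK APIs:

    #include <stdint.h>
    #include <rte_atomic.h>
    #include <rte_io.h>

    /* Hypothetical queue layout, for illustration only. */
    struct txq {
            volatile uint64_t *desc; /* descriptor ring in system memory */
            volatile uint32_t *db;   /* doorbell register in device memory */
    };

    static void
    ring_doorbell(struct txq *q, uint32_t idx, uint64_t d)
    {
            q->desc[idx] = d;        /* store to system memory */
            rte_wmb();               /* must also order the MMIO store below;
                                      * on POWER this requires "sync", since
                                      * "lwsync" orders system memory only */
            rte_write32(idx, q->db); /* store to device memory */
    }

With lwsync as rte_wmb(), the device could observe the doorbell before the
descriptor store, which is exactly the failure the patch addresses.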