Date: Tue, 26 Apr 2016 18:15:43 +0100
From: Will Deacon
To: Peter Zijlstra
Cc: linux-kernel@vger.kernel.org, waiman.long@hpe.com, mingo@redhat.com,
	paulmck@linux.vnet.ibm.com, boqun.feng@gmail.com,
	torvalds@linux-foundation.org, dave@stgolabs.net
Subject: Re: [RFC][PATCH 3/3] locking,arm64: Introduce cmpwait()
Message-ID: <20160426171543.GG1793@arm.com>
In-Reply-To: <20160426163344.GE1793@arm.com>
References: <20160404122250.340636238@infradead.org>
 <20160404123633.484451002@infradead.org>
 <20160412165941.GG26124@arm.com>
 <20160413125243.GA6810@worktop.ger.corp.intel.com>
 <20160426163344.GE1793@arm.com>

On Tue, Apr 26, 2016 at 05:33:44PM +0100, Will Deacon wrote:
> From 5aa5750d5eeb6e3a42f5547f094dc803f89793bb Mon Sep 17 00:00:00 2001
> From: Will Deacon
> Date: Tue, 26 Apr 2016 17:31:53 +0100
> Subject: [PATCH] fixup! locking,arm64: Introduce cmpwait()
>
> Signed-off-by: Will Deacon
> ---
>  arch/arm64/include/asm/cmpxchg.h | 15 +++++++++------
>  1 file changed, 9 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm64/include/asm/cmpxchg.h b/arch/arm64/include/asm/cmpxchg.h
> index cd7bff6ddedc..9b7113a3f0d7 100644
> --- a/arch/arm64/include/asm/cmpxchg.h
> +++ b/arch/arm64/include/asm/cmpxchg.h
> @@ -225,18 +225,19 @@ __CMPXCHG_GEN(_mb)
>  })
>
>  #define __CMPWAIT_GEN(w, sz, name)					\
> -void __cmpwait_case_##name(volatile void *ptr, unsigned long val)	\
> +static inline void __cmpwait_case_##name(volatile void *ptr,		\
> +					 unsigned long val)		\
>  {									\
>  	unsigned long tmp;						\
>  									\
>  	asm volatile(							\
>  	"	ldxr" #sz "\t%" #w "[tmp], %[v]\n"			\
>  	"	eor	%" #w "[tmp], %" #w "[tmp], %" #w "[val]\n"	\
> -	"	cbnz	%" #w "[tmp], 1f\n"				\
> +	"	cbz	%" #w "[tmp], 1f\n"				\

Actually, you're right with cbnz. I only noticed when I came to
implement my own version of smp_cond_load_acquire(). *sigh*

I have fixups applied locally, so maybe the best thing is for me to
send you an arm64 series on top of whatever you post next?

Will