From: "Paul E. McKenney"
Subject: Re: [RFC PATCH] lib: Introduce generic __cmpxchg_u64() and use it where needed
Date: Thu, 1 Nov 2018 15:26:41 -0700
Message-ID: <20181101222641.GI4170@linux.ibm.com>
References: <4e2438a23d2edf03368950a72ec058d1d299c32e.camel@hammerspace.com>
 <20181101131846.biyilr2msonljmij@lakrids.cambridge.arm.com>
 <20181101145926.GE3178@hirez.programming.kicks-ass.net>
 <20181101163212.GF3159@hirez.programming.kicks-ass.net>
 <20181101171432.GH3178@hirez.programming.kicks-ass.net>
 <20181101172739.GA3196@hirez.programming.kicks-ass.net>
 <20181101202910.GB4170@linux.ibm.com>
 <20181101213834.GA3339@worktop.programming.kicks-ass.net>
In-Reply-To: <20181101213834.GA3339@worktop.programming.kicks-ass.net>
Reply-To: paulmck@linux.ibm.com
Cc: Eric Dumazet, Trond Myklebust, "mark.rutland@arm.com",
 "linux-kernel@vger.kernel.org", "ralf@linux-mips.org", "jlayton@kernel.org",
 "linuxppc-dev@lists.ozlabs.org", "bfields@fieldses.org",
 "linux-mips@linux-mips.org", "linux@roeck-us.net",
 "linux-nfs@vger.kernel.org", "akpm@linux-foundation.org",
 "will.deacon@arm.com", "boqun.feng@gmail.com", "paul.burton@mips.com",
 "anna.schumaker@netapp.com"

On Thu, Nov 01, 2018 at 10:38:34PM +0100, Peter Zijlstra wrote:
> On Thu, Nov 01, 2018 at 01:29:10PM -0700, Paul E. McKenney wrote:
> > On Thu, Nov 01, 2018 at 06:27:39PM +0100, Peter Zijlstra wrote:
> > > On Thu, Nov 01, 2018 at 06:14:32PM +0100, Peter Zijlstra wrote:
> > > > > This reminds me of this sooooo silly patch :/
> > > > >
> > > > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=adb03115f4590baa280ddc440a8eff08a6be0cb7
> > >
> > > You'd probably want to write it like so; +- some ordering stuff, that
> > > code didn't look like it really needs the memory barriers implied by
> > > these, but I didn't look too hard.
> >
> > The atomic_fetch_add() API would need to be propagated out to the other
> > architectures, correct?
>
> Like these commits I did like 2 years ago ? :-)

Color me blind and stupid!  ;-)

							Thanx, Paul

> $ git log --oneline 6dc25876cdb1...1f51dee7ca74
> 6dc25876cdb1 locking/atomic, arch/xtensa: Implement atomic_fetch_{add,sub,and,or,xor}()
> a8bcccaba162 locking/atomic, arch/x86: Implement atomic{,64}_fetch_{add,sub,and,or,xor}()
> 1af5de9af138 locking/atomic, arch/tile: Implement atomic{,64}_fetch_{add,sub,and,or,xor}()
> 3a1adb23a52c locking/atomic, arch/sparc: Implement atomic{,64}_fetch_{add,sub,and,or,xor}()
> 7d9794e75237 locking/atomic, arch/sh: Implement atomic_fetch_{add,sub,and,or,xor}()
> 56fefbbc3f13 locking/atomic, arch/s390: Implement atomic{,64}_fetch_{add,sub,and,or,xor}()
> a28cc7bbe8e3 locking/atomic, arch/powerpc: Implement atomic{,64}_fetch_{add,sub,and,or,xor}{,_relaxed,_acquire,_release}()
> e5857a6ed600 locking/atomic, arch/parisc: Implement atomic{,64}_fetch_{add,sub,and,or,xor}()
> f8d638e28d7c locking/atomic, arch/mn10300: Implement atomic_fetch_{add,sub,and,or,xor}()
> 4edac529eb62 locking/atomic, arch/mips: Implement atomic{,64}_fetch_{add,sub,and,or,xor}()
> e898eb27ffd8 locking/atomic, arch/metag: Implement atomic_fetch_{add,sub,and,or,xor}()
> e39d88ea3ce4 locking/atomic, arch/m68k: Implement atomic_fetch_{add,sub,and,or,xor}()
> f64937052303 locking/atomic, arch/m32r: Implement atomic_fetch_{add,sub,and,or,xor}()
> cc102507fac7 locking/atomic, arch/ia64: Implement atomic{,64}_fetch_{add,sub,and,or,xor}()
> 4be7dd393515 locking/atomic, arch/hexagon: Implement atomic_fetch_{add,sub,and,or,xor}()
> 0c074cbc3309 locking/atomic, arch/h8300: Implement atomic_fetch_{add,sub,and,or,xor}()
> d9c730281617 locking/atomic, arch/frv: Implement atomic{,64}_fetch_{add,sub,and,or,xor}()
> e87fc0ec0705 locking/atomic, arch/blackfin: Implement atomic_fetch_{add,sub,and,or,xor}()
> 1a6eafacd481 locking/atomic, arch/avr32: Implement atomic_fetch_{add,sub,and,or,xor}()
> 2efe95fe6952 locking/atomic, arch/arm64: Implement atomic{,64}_fetch_{add,sub,and,andnot,or,xor}{,_relaxed,_acquire,_release}() for LSE instructions
> 6822a84dd4e3 locking/atomic, arch/arm64: Generate LSE non-return cases using common macros
> e490f9b1d3b4 locking/atomic, arch/arm64: Implement atomic{,64}_fetch_{add,sub,and,andnot,or,xor}{,_relaxed,_acquire,_release}()
> 6da068c1beba locking/atomic, arch/arm: Implement atomic{,64}_fetch_{add,sub,and,andnot,or,xor}{,_relaxed,_acquire,_release}()
> fbffe892e525 locking/atomic, arch/arc: Implement atomic_fetch_{add,sub,and,andnot,or,xor}()
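
For readers following along: the pattern at issue in the commit linked above is
an open-coded cmpxchg loop where a single atomic_fetch_add() would do. Below is
a minimal sketch of the two forms using kernel-style atomics; the helper names
are hypothetical, and this is not the actual code from that commit.

/*
 * Sketch only: open-coded fetch-and-add versus the atomic_fetch_add() API
 * discussed in this thread.  Helper names are illustrative.
 */
#include <linux/atomic.h>

/* Fetch-and-add open-coded as a cmpxchg retry loop. */
static int sketch_fetch_add_cmpxchg(atomic_t *v, int i)
{
	int old, new;

	do {
		old = atomic_read(v);
		new = old + i;
	} while (atomic_cmpxchg(v, old, new) != old);

	return old;		/* value observed before the addition */
}

/*
 * The same operation via atomic_fetch_add(), which returns the pre-addition
 * value directly.  The plain form is fully ordered; ordering variants
 * (_relaxed/_acquire/_release) also exist, as the arm/arm64/powerpc commits
 * above show.
 */
static int sketch_fetch_add_api(atomic_t *v, int i)
{
	return atomic_fetch_add(i, v);
}

Whether the full ordering of the plain atomic_fetch_add() call is actually
needed is the "+- some ordering stuff" caveat quoted above.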