From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Paul E. McKenney"
Subject: Re: [PATCH RFC 08/26] locking: Remove spin_unlock_wait() generic definitions
Date: Mon, 3 Jul 2017 14:10:35 -0700
Message-ID: <20170703211035.GH2393@linux.vnet.ibm.com>
References: <20170629235918.GA6445@linux.vnet.ibm.com>
 <1498780894-8253-8-git-send-email-paulmck@linux.vnet.ibm.com>
 <20170630091928.GC9726@arm.com>
 <20170630123815.GT2393@linux.vnet.ibm.com>
 <20170630131339.GA14118@arm.com>
 <20170630221840.GI2393@linux.vnet.ibm.com>
 <20170703131514.GE1573@arm.com>
 <20170703161851.GY2393@linux.vnet.ibm.com>
Reply-To: paulmck@linux.vnet.ibm.com
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Will Deacon, Linux Kernel Mailing List, NetFilter, Network Development,
 Oleg Nesterov, Andrew Morton, Ingo Molnar, Davidlohr Bueso, Manfred Spraul,
 Tejun Heo, Arnd Bergmann, "linux-arch@vger.kernel.org", Peter Zijlstra,
 Alan Stern, Andrea Parri
To: Linus Torvalds
Return-path:
Content-Disposition: inline
In-Reply-To:
Sender: netdev-owner@vger.kernel.org
List-Id: netfilter-devel.vger.kernel.org

On Mon, Jul 03, 2017 at 09:40:22AM -0700, Linus Torvalds wrote:
> On Mon, Jul 3, 2017 at 9:18 AM, Paul E. McKenney wrote:
> >
> > Agreed, and my next step is to look at spin_lock() followed by
> > spin_is_locked(), not necessarily the same lock.
>
> Hmm.  Most (all?) "spin_is_locked()" really should be about the same
> thread that took the lock (ie it's about asserts and lock debugging).

Good to know, that does make things easier.  ;-)

I am not certain that it is feasible to automatically recognize
non-assert/non-debugging use cases of spin_is_locked(), but there is
always manual inspection.

> The optimistic ABBA avoidance pattern for spinlocks *should* be
>
>     spin_lock(inner)
>     ...
>     if (!try_lock(outer)) {
>         spin_unlock(inner);
>         .. do them in the right order ..
>
> so I don't think spin_is_locked() should have any memory barriers.
>
> In fact, the core function for spin_is_locked() is arguably
> arch_spin_value_unlocked() which doesn't even do the access itself.

OK, so we should rework any cases where people are relying on acquisition
of one spin_lock() being ordered with a later spin_is_locked() on some
other lock by that same thread.

							Thanx, Paul