From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S933853AbcIALvs (ORCPT );
	Thu, 1 Sep 2016 07:51:48 -0400
Received: from merlin.infradead.org ([205.233.59.134]:48492 "EHLO
	merlin.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S933830AbcIALvp (ORCPT );
	Thu, 1 Sep 2016 07:51:45 -0400
Date: Thu, 1 Sep 2016 13:51:34 +0200
From: Peter Zijlstra
To: Manfred Spraul
Cc: Will Deacon, benh@kernel.crashing.org, paulmck@linux.vnet.ibm.com,
	Ingo Molnar, Boqun Feng, Andrew Morton, LKML, 1vier1@web.de,
	Davidlohr Bueso
Subject: Re: [PATCH 1/4] spinlock: Document memory barrier rules
Message-ID: <20160901115134.GS10153@twins.programming.kicks-ass.net>
References: <1472385376-8801-2-git-send-email-manfred@colorfullife.com>
 <20160829104815.GI10153@twins.programming.kicks-ass.net>
 <968e4c62-4486-a6aa-8fdf-67ff9b05a330@colorfullife.com>
 <20160829134424.GS10153@twins.programming.kicks-ass.net>
 <4859166f-ff39-e998-638b-6bf6912422a3@colorfullife.com>
 <20160831154049.GY10121@twins.programming.kicks-ass.net>
 <20160831164020.GG29505@arm.com>
 <80de24e3-fa01-a6d6-99e9-afd1e831e07b@colorfullife.com>
 <20160901084435.GN10153@twins.programming.kicks-ass.net>
 <3f7c39e5-4c46-0641-d29e-36c9439ad6dc@colorfullife.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <3f7c39e5-4c46-0641-d29e-36c9439ad6dc@colorfullife.com>
User-Agent: Mutt/1.5.23.1 (2014-03-12)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Sep 01, 2016 at 01:04:26PM +0200, Manfred Spraul wrote:
> >So for both power and arm64, you can in fact model spin_unlock_wait()
> >as LOCK+UNLOCK.
> Is this consensus?

Dunno, but it was done to fix your earlier locking scheme, and both
architectures where it matters have done so. So I suppose that could be
taken as consensus ;-)

> If I understand it right, the rules are:
> 1. spin_unlock_wait() must behave like spin_lock();spin_unlock();

From a barrier perspective, yes, I think so. Ideally the implementation
would avoid stores (which was the entire point of introducing that
primitive, IIRC) if at all possible (not possible on arm64/Power).

> 2. spin_is_locked() must behave like spin_trylock() ? spin_unlock(),TRUE :
> FALSE

Not sure on this one. That might be consistent, but I don't see the
ll/sc-nop in there. Will?

> 3. the ACQUIRE during spin_lock applies to the lock load, not to the store.

I think we can state that ACQUIRE on _any_ atomic only applies to the
LOAD, not the STORE. And we're waiting for that to bite us again before
trying to deal with it in a more generic manner; for now only the
spinlock implementations (specifically spin_unlock_wait()) deal with it.

Will, Boqun, did I get that right?