Date: Mon, 16 Nov 2015 13:58:11 +0000
From: Will Deacon
To: "Paul E. McKenney"
Cc: Oleg Nesterov, Peter Zijlstra, Boqun Feng, mingo@kernel.org,
	linux-kernel@vger.kernel.org, corbet@lwn.net, mhocko@kernel.org,
	dhowells@redhat.com, torvalds@linux-foundation.org,
	Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras
Subject: Re: [PATCH 4/4] locking: Introduce smp_cond_acquire()
Message-ID: <20151116135811.GB807@arm.com>
In-Reply-To: <20151112234351.GO3972@linux.vnet.ibm.com>
References: <20151111193953.GA23515@redhat.com>
 <20151112070915.GC6314@fixme-laptop.cn.ibm.com>
 <20151112150058.GA30321@redhat.com>
 <20151112151839.GE6314@fixme-laptop.cn.ibm.com>
 <20151112183807.GA7538@redhat.com>
 <20151112180203.GF17308@twins.programming.kicks-ass.net>
 <20151112193302.GA9988@redhat.com>
 <20151112185906.GK3972@linux.vnet.ibm.com>
 <20151112213339.GC23979@arm.com>
 <20151112234351.GO3972@linux.vnet.ibm.com>

Hi Paul,

On Thu, Nov 12, 2015 at 03:43:51PM -0800, Paul E. McKenney wrote:
> On Thu, Nov 12, 2015 at 09:33:39PM +0000, Will Deacon wrote:
> > I think we ended up concluding that smp_mb__after_unlock_lock is
> > indeed required, but I don't think we should just resurrect the old
> > definition, which doesn't keep UNLOCK -> LOCK distinct from
> > RELEASE -> ACQUIRE. I'm still working on documenting the different
> > types of transitivity that we identified in that thread, but it's
> > slow going.
> >
> > Also, as far as spin_unlock_wait is concerned, it is neither a LOCK
> > nor an UNLOCK, and this barrier doesn't offer us anything. Sure, it
> > might work because PPC defines it as smp_mb(), but it doesn't help
> > on arm64, and defining the macro is overkill for us in most places
> > (i.e. RCU).
> >
> > If we decide that the current usage of spin_unlock_wait is valid,
> > then I would much rather implement a version of it in the arm64
> > backend that does something like:
> >
> >  1:  ldrex   r1, [&lock]
> >      if r1 indicates that lock is taken, branch back to 1b
> >      strex   r1, [&lock]
> >      if store failed, branch back to 1b
> >
> > i.e. we don't just test the lock, but we also write it back
> > atomically if we discover that it's free. That would then clear the
> > exclusive monitor on any cores in the process of taking the lock
> > and restore the ordering that we need.
>
> We could clearly do something similar in PowerPC, but I suspect that
> this would hurt really badly on large systems, given that there are
> PowerPC systems with more than a thousand hardware threads. So one
> approach is that ARM makes spin_unlock_wait() do the write, similar
> to spin_lock(); spin_unlock(), but PowerPC relies on
> smp_mb__after_unlock_lock().

Sure, I'm certainly not trying to tell you how to do this for PPC, but
the above would be better for arm64 (any huge system should be using
the 8.1 atomics anyway).

> Or does someone have a better proposal?

I don't think I'm completely clear on the current proposal wrt
smp_mb__after_unlock_lock.
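(Aside: in case it helps the discussion, the ldrex/strex loop quoted
above can be modelled in plain C11 along the lines below, with a
compare-and-swap standing in for the exclusives. This is only a sketch
of the idea, with invented names, not the actual arm64 implementation:

	#include <stdatomic.h>

	/* Simplified lock word: 0 means free, non-zero means held. */
	typedef struct {
		atomic_int val;
	} sketch_lock_t;

	/*
	 * Spin until the lock is observed free, then atomically write
	 * the "free" value straight back.  The write-back is the whole
	 * point: on arm64 it would be the strex, which clears the
	 * exclusive monitor of any core that has loaded the lock word
	 * but not yet completed its store, forcing that core to retry
	 * its lock acquisition.
	 */
	static inline void sketch_spin_unlock_wait(sketch_lock_t *lock)
	{
		int expected;

		do {
			expected = 0;	/* proceed only if lock is free */
		} while (!atomic_compare_exchange_weak(&lock->val,
						       &expected, 0));
	}

The CAS of 0 -> 0 looks like a no-op, but it is still a store as far
as the exclusive monitor is concerned, which is what distinguishes it
from a plain read-and-test loop.)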
To summarise my understanding (in the context of Boqun's original
example, which I've duplicated at the end of this mail):

  * Putting smp_mb__after_unlock_lock() after the LOCK on CPU2 creates
    global order, by upgrading the UNLOCK -> LOCK to a full barrier.
    If we extend this to include the accesses made by the UNLOCK and
    LOCK as happening *before* the notional full barrier, then a
    from-read edge from CPU2 to CPU1 on `object' implies that the LOCK
    operation is observed by CPU1 before it writes object = NULL. So
    we can add this barrier and fix the test for PPC.

  * Upgrading spin_unlock_wait to a LOCK operation (which is basically
    what I'm proposing for arm64) means that we now rely on LOCK ->
    LOCK being part of a single, total order (i.e. all CPUs agree on
    the order in which a lock was taken). Assuming PPC has this global
    LOCK -> LOCK ordering, we end up in a sticky situation when
    defining the kernel's memory model, because we don't have an
    architecture-agnostic semantic. The only way I can see to fix this
    is by adding something like smp_mb__between_lock_unlock_wait, but
    that's grim.

Do you see a way forward?

Will

--->8

CPU 1                     CPU 2                     CPU 3
========================  ========================  ===================
                                                    spin_unlock(&lock);
                          spin_lock(&lock):
                            r1 = *lock; // r1 == 0;
                          o = READ_ONCE(object); // reordered here
object = NULL;
smp_mb();
spin_unlock_wait(&lock);
                            *lock = 1;
smp_mb();
o->dead = true;
                          if (o) // true
                            BUG_ON(o->dead); // true!!
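For illustration, the first bullet amounts to CPU 2 doing something
like the following with today's kernel primitives. The struct and
variable names are invented for the example, and it assumes the strong
(full-barrier) definition of smp_mb__after_unlock_lock():

	#include <linux/spinlock.h>
	#include <linux/bug.h>

	struct foo {
		bool dead;
	};

	static struct foo *object;
	static DEFINE_SPINLOCK(lock);

	/* CPU 2's side of the example above, with the barrier added. */
	static void cpu2(void)
	{
		struct foo *o;

		spin_lock(&lock);
		smp_mb__after_unlock_lock();

		/*
		 * The read of object can no longer be speculated before
		 * the store that takes the lock.  So if we see a
		 * non-NULL object here, CPU 1 must observe the lock as
		 * held and will sit in spin_unlock_wait() until we
		 * unlock, meaning o->dead cannot yet be true.
		 */
		o = READ_ONCE(object);
		if (o)
			BUG_ON(o->dead);
		spin_unlock(&lock);
	}

With that in place, the from-read edge on object implies that CPU 1
sees *lock == 1, so its spin_unlock_wait()/smp_mb() sequence cannot
race ahead of CPU 2's critical section.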