Date: Mon, 1 Feb 2016 16:58:13 +0000
From: Will Deacon
To: Peter Zijlstra
Cc: Paul McKenney, linux-kernel@vger.kernel.org, Davidlohr Bueso,
	Ingo Molnar, parri.andrea@gmail.com
Subject: Re: [RFC][PATCH] locking/mcs: Fix ordering for mcs_spin_lock()
Message-ID: <20160201165813.GH6828@arm.com>
In-Reply-To: <20160201143724.GW6357@twins.programming.kicks-ass.net>
References: <20160201143724.GW6357@twins.programming.kicks-ass.net>

Hi Peter,

On Mon, Feb 01, 2016 at 03:37:24PM +0100, Peter Zijlstra wrote:
> Given the below patch, we've now got an unconditional full global
> barrier in; does this make the MCS spinlock RCsc?
>
> The 'problem' is that this barrier can happen before we actually acquire
> the lock. That is, if we hit arch_mcs_spin_lock_contended(), _that_ will
> be the acquire barrier, and we end up with a SYNC in between unlock and
> lock -- i.e. not an smp_mb__after_unlock_lock() equivalent.

In which case, I don't think the lock will be RCsc with this change;
you'd need an smp_mb__after_unlock_lock() after
arch_mcs_spin_lock_contended(...) if you wanted the thing to be RCsc.

> Subject: locking/mcs: Fix ordering for mcs_spin_lock()
> From: Peter Zijlstra
> Date: Mon Feb 1 15:11:28 CET 2016
>
> Similar to commit b4b29f94856a ("locking/osq: Fix ordering of node
> initialisation in osq_lock"), the use of xchg_acquire() is
> fundamentally broken with MCS-like constructs.
>
> Furthermore, it turns out we rely on the global transitivity of this
> operation because the unlock path observes the pointer with a
> READ_ONCE(), not an smp_load_acquire().
>
> This is non-critical because the MCS code isn't actually used and
> mostly serves as documentation, a stepping stone to the more complex
> things we've built on top of the idea.
>
> Cc: Will Deacon
> Cc: "Paul E. McKenney"
> Reported-by: Andrea Parri
> Fixes: 3552a07a9c4a ("locking/mcs: Use acquire/release semantics")
> Signed-off-by: Peter Zijlstra (Intel)
> ---
>  kernel/locking/mcs_spinlock.h | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)

Acked-by: Will Deacon

Although I wonder how useful this is as a documentation aid now that we
have the osq_lock.

Will
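To make the ordering under discussion concrete, the handoff can be sketched as a userspace MCS-style lock built on C11 atomics. This is my own simplified analogue, not the kernel code: the names `mcs_node`, `mcs_lock`, and `mcs_unlock` are invented for illustration. The `seq_cst` exchange models the full-barrier `xchg()` the patch restores, the spin on `->locked` models `arch_mcs_spin_lock_contended()` (acquire only), and the relaxed load of `->next` models the `READ_ONCE()` in the unlock path that the changelog mentions. Note how, on the contended path, the full barrier happens *before* the lock is actually acquired -- Peter's point about the SYNC landing between unlock and lock.

```c
/* Userspace sketch of an MCS-style spinlock using C11 atomics.
 * Illustrative analogue of kernel/locking/mcs_spinlock.h; names and
 * structure are simplified assumptions, not the kernel's code. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_node {
	_Atomic(struct mcs_node *) next;
	atomic_bool locked;
};

static void mcs_lock(_Atomic(struct mcs_node *) *tail, struct mcs_node *node)
{
	atomic_store_explicit(&node->next, NULL, memory_order_relaxed);
	atomic_store_explicit(&node->locked, false, memory_order_relaxed);

	/* Full barrier on enqueue: models the seq_cst xchg() the patch
	 * restores (previously a too-weak xchg_acquire()). */
	struct mcs_node *prev =
		atomic_exchange_explicit(tail, node, memory_order_seq_cst);
	if (prev == NULL)
		return;		/* uncontended: lock acquired */

	/* Contended: publish ourselves, then spin with acquire only.
	 * The full barrier above already happened, *before* we hold
	 * the lock. */
	atomic_store_explicit(&prev->next, node, memory_order_release);
	while (!atomic_load_explicit(&node->locked, memory_order_acquire))
		;
}

static void mcs_unlock(_Atomic(struct mcs_node *) *tail, struct mcs_node *node)
{
	/* Relaxed load of ->next, analogous to the READ_ONCE() (not
	 * smp_load_acquire()) in the unlock path. */
	struct mcs_node *next =
		atomic_load_explicit(&node->next, memory_order_relaxed);

	if (next == NULL) {
		struct mcs_node *expected = node;
		if (atomic_compare_exchange_strong_explicit(
			    tail, &expected, NULL,
			    memory_order_seq_cst, memory_order_relaxed))
			return;	/* no successor: lock released */
		/* A successor enqueued but hasn't linked in yet; wait. */
		while (!(next = atomic_load_explicit(&node->next,
						     memory_order_relaxed)))
			;
	}
	atomic_store_explicit(&next->locked, true, memory_order_release);
}
```

With this shape it's easy to see why the lock is not RCsc as-is: a contended acquirer's only ordering at the point of acquisition is the acquire load of `->locked`, so making it RCsc would need a full barrier after that spin, i.e. the `smp_mb__after_unlock_lock()` suggested above.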