Message-ID: <527A76C9.1030208@hp.com>
Date: Wed, 06 Nov 2013 12:05:13 -0500
From: Waiman Long
Subject: Re: [PATCH v2 3/4] MCS Lock: Barrier corrections
References: <1383673356.11046.279.camel@schen9-DESK>
 <20131105183744.GJ26895@mudshark.cambridge.arm.com>
 <1383679317.11046.293.camel@schen9-DESK>
 <20131106122019.GG21074@mudshark.cambridge.arm.com>
In-Reply-To: <20131106122019.GG21074@mudshark.cambridge.arm.com>
To: Will Deacon
Cc: "Figo.zhang", Tim Chen, Ingo Molnar, Andrew Morton, Thomas Gleixner,
 "linux-kernel@vger.kernel.org", linux-mm, "linux-arch@vger.kernel.org",
 Linus Torvalds, Andrea Arcangeli, Alex Shi, Andi Kleen, Michel Lespinasse,
 Davidlohr Bueso, Matthew R Wilcox, Dave Hansen, Peter Zijlstra,
 Rik van Riel, Peter Hurley, "Paul E.McKenney", Raghavendra K T,
 George Spelvin, "H. Peter Anvin", Arnd Bergmann,
 Aswin Chandramouleeswaran, Scott J Norton

On 11/06/2013 07:20 AM, Will Deacon wrote:
> On Wed, Nov 06, 2013 at 05:44:42AM +0000, Figo.zhang wrote:
>> 2013/11/6 Tim Chen:
>>
>> On Tue, 2013-11-05 at 18:37 +0000, Will Deacon wrote:
>>> On Tue, Nov 05, 2013 at 05:42:36PM +0000, Tim Chen wrote:
>>>> diff --git a/include/linux/mcs_spinlock.h b/include/linux/mcs_spinlock.h
>>>> index 96f14299..93d445d 100644
>>>> --- a/include/linux/mcs_spinlock.h
>>>> +++ b/include/linux/mcs_spinlock.h
>>>> @@ -36,16 +36,19 @@ void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
>>>>  	node->locked = 0;
>>>>  	node->next = NULL;
>>>>
>>>> +	/* xchg() provides a memory barrier */
>>>>  	prev = xchg(lock, node);
>>>>  	if (likely(prev == NULL)) {
>>>>  		/* Lock acquired */
>>>>  		return;
>>>>  	}
>>>>  	ACCESS_ONCE(prev->next) = node;
>>>> -	smp_wmb();
>>>>  	/* Wait until the lock holder passes the lock down */
>>>>  	while (!ACCESS_ONCE(node->locked))
>>>>  		arch_mutex_cpu_relax();
>>>> +
>>>> +	/* Make sure subsequent operations happen after the lock is acquired */
>>>> +	smp_rmb();
>>> Ok, so this is an smp_rmb() because we assume that stores aren't speculated,
>>> right? (i.e. the control dependency above is enough for stores to be ordered
>>> with respect to taking the lock)...
>>>
>>>>  }
>>>>
>>>>  /*
>>>> @@ -58,6 +61,7 @@ static void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *nod
>>>>
>>>>  	if (likely(!next)) {
>>>>  		/*
>>>> +		 * cmpxchg() provides a memory barrier.
>>>>  		 * Release the lock by setting it to NULL
>>>>  		 */
>>>>  		if (likely(cmpxchg(lock, node, NULL) == node))
>>>> @@ -65,9 +69,14 @@ static void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *nod
>>>>  		/* Wait until the next pointer is set */
>>>>  		while (!(next = ACCESS_ONCE(node->next)))
>>>>  			arch_mutex_cpu_relax();
>>>> +	} else {
>>>> +		/*
>>>> +		 * Make sure all operations within the critical section
>>>> +		 * happen before the lock is released.
>>>> +		 */
>>>> +		smp_wmb();
>>> ...but I don't see what prevents reads inside the critical section from
>>> moving across the smp_wmb() here.
>> This is to prevent any read in next critical section from
>> creeping up before write in the previous critical section
>> has completed
> Understood, but an smp_wmb() doesn't provide any ordering guarantees with
> respect to reads, hence why I think you need an smp_mb() here.

A major reason for the current design is to avoid the overhead of a full
memory barrier on x86, which doesn't need one. I do agree that the current
code may not be sufficient for other architectures.

I would like to propose the following changes:

1) Move the lock/unlock functions to mcs_spinlock.c.
2) Define a pair of primitives - smp_mb__before_critical_section() and
   smp_mb_after_critical_section() - that fall back to smp_mb() if they
   are not defined by the architecture, for example in asm/processor.h.
3) Use the new primitives instead of the current smp_rmb() and smp_wmb()
   memory barriers.

That will allow each architecture to tailor what sort of memory barrier
it wants to use.
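Roughly, the fallback in 2) could look like the following. This is just a
sketch to illustrate the idea, not code from any posted patch; the exact
names (harmonized here to a double underscore) and the override point are
still open:

/*
 * Sketch only: generic fallbacks for the proposed primitives.  An
 * architecture that can use a weaker barrier defines its own versions
 * (e.g. in its asm/processor.h) before this point; everyone else gets
 * a full smp_mb().
 */
#ifndef smp_mb__before_critical_section
#define smp_mb__before_critical_section()	smp_mb()
#endif

#ifndef smp_mb__after_critical_section
#define smp_mb__after_critical_section()	smp_mb()
#endif

mcs_spin_lock() would then call smp_mb__before_critical_section() where the
patch above uses smp_rmb(), and mcs_spin_unlock() would call
smp_mb__after_critical_section() where it uses smp_wmb(), so x86 could keep
its lighter barriers while other architectures get the full barrier Will is
asking for.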
Regards,
Longman