From mboxrd@z Thu Jan 1 00:00:00 1970
From: Michael Ellerman
To: stable@vger.kernel.org
Subject: [PATCH 2/2] powerpc: Add smp_mb()s to arch_spin_unlock_wait()
Date: Thu, 11 Sep 2014 12:49:22 +1000
Message-Id: <1410403762-14067-2-git-send-email-mpe@ellerman.id.au>
In-Reply-To: <1410403762-14067-1-git-send-email-mpe@ellerman.id.au>
References: <1410403762-14067-1-git-send-email-mpe@ellerman.id.au>
Cc: linuxppc-dev@ozlabs.org
List-Id: Linux on PowerPC Developers Mail List

Backported from 78e05b1421fa upstream, for stable 3.14 and 3.16.

Similar to the previous commit which described why we need to add a
barrier to arch_spin_is_locked(), we have a similar problem with
spin_unlock_wait(). We need a barrier on entry to ensure any spinlock
we have previously taken is visibly locked prior to the load of
lock->slock.

It's also not clear if spin_unlock_wait() is intended to have ACQUIRE
semantics. For now be conservative and add a barrier on exit to give it
ACQUIRE semantics.

Signed-off-by: Michael Ellerman
Signed-off-by: Benjamin Herrenschmidt
---
 arch/powerpc/lib/locks.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c
index 0c9c8d7d0734..170a0346f756 100644
--- a/arch/powerpc/lib/locks.c
+++ b/arch/powerpc/lib/locks.c
@@ -70,12 +70,16 @@ void __rw_yield(arch_rwlock_t *rw)
 
 void arch_spin_unlock_wait(arch_spinlock_t *lock)
 {
+	smp_mb();
+
 	while (lock->slock) {
 		HMT_low();
 		if (SHARED_PROCESSOR)
 			__spin_yield(lock);
 	}
 	HMT_medium();
+
+	smp_mb();
 }
 EXPORT_SYMBOL(arch_spin_unlock_wait);
-- 
1.9.1
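
The entry barrier matters because, without it, the store that took a lock the
caller already holds can still be sitting in the CPU's store buffer when
lock->slock is loaded, so two CPUs can each observe the other's lock as free.
Below is an illustrative userspace analogue of that store->load reordering
(the classic store-buffering pattern), not part of the patch: the names
lock_a, lock_b and USE_FENCE are made up for the sketch, and the C11 seq_cst
fence stands in for smp_mb().

/*
 * Userspace sketch (not kernel code) of the reordering that the smp_mb()
 * on entry to arch_spin_unlock_wait() guards against.
 *
 * Each thread "takes" its own lock (stores 1) and then checks whether the
 * other lock is held.  Without a full barrier between its own store and the
 * subsequent load, both threads may read the other lock as 0, even though
 * both stores have been executed.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int lock_a, lock_b;
static atomic_int saw_free;	/* how many threads saw the other lock as free */

#ifndef USE_FENCE
#define USE_FENCE 0		/* build with -DUSE_FENCE=1 to add the barrier */
#endif

static void *thread_a(void *arg)
{
	(void)arg;
	atomic_store_explicit(&lock_a, 1, memory_order_relaxed);	/* "take lock A" */
#if USE_FENCE
	atomic_thread_fence(memory_order_seq_cst);			/* analogue of smp_mb() */
#endif
	if (atomic_load_explicit(&lock_b, memory_order_relaxed) == 0)
		atomic_fetch_add(&saw_free, 1);				/* saw B's lock as free */
	return NULL;
}

static void *thread_b(void *arg)
{
	(void)arg;
	atomic_store_explicit(&lock_b, 1, memory_order_relaxed);	/* "take lock B" */
#if USE_FENCE
	atomic_thread_fence(memory_order_seq_cst);
#endif
	if (atomic_load_explicit(&lock_a, memory_order_relaxed) == 0)
		atomic_fetch_add(&saw_free, 1);				/* saw A's lock as free */
	return NULL;
}

int main(void)
{
	int bad = 0;

	for (int i = 0; i < 100000; i++) {
		pthread_t a, b;

		atomic_store(&lock_a, 0);
		atomic_store(&lock_b, 0);
		atomic_store(&saw_free, 0);

		pthread_create(&a, NULL, thread_a, NULL);
		pthread_create(&b, NULL, thread_b, NULL);
		pthread_join(a, NULL);
		pthread_join(b, NULL);

		if (atomic_load(&saw_free) == 2)
			bad++;		/* both threads thought the other lock was free */
	}
	printf("mutual exclusion violated in %d runs\n", bad);
	return 0;
}

Built with something like "gcc -O2 -pthread sb.c", the unfenced build can
(timing permitting) report a non-zero count, since both Power and x86 allow a
store to be reordered after a later load; with -DUSE_FENCE=1 the both-zero
outcome is forbidden, which is the role smp_mb() plays on entry to
arch_spin_unlock_wait().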