From: Manfred Spraul <manfred@colorfullife.com>
To: benh@kernel.crashing.org, paulmck@linux.vnet.ibm.com, Ingo Molnar, Boqun Feng, Peter Zijlstra, Andrew Morton
Cc: LKML, 1vier1@web.de, Davidlohr Bueso, Manfred Spraul
Subject: [PATCH 4/4 V4] qspinlock for x86: smp_mb__after_spin_lock() is free
Date: Mon, 29 Aug 2016 15:34:29 +0200
Message-Id: <1472477669-27508-5-git-send-email-manfred@colorfullife.com>
X-Mailer: git-send-email 2.5.5
In-Reply-To: <1472477669-27508-1-git-send-email-manfred@colorfullife.com>
References: <1472477669-27508-1-git-send-email-manfred@colorfullife.com>

For x86 qspinlocks, no additional memory barrier is required in
smp_mb__after_spin_lock():

Theoretically, for qspinlock we could define two barriers:
- smp_mb__after_spin_lock(): free for x86, not free for powerpc
- smp_mb__between_spin_lock_and_spin_unlock_wait(): free for all
  archs, see queued_spin_unlock_wait() for details.

As smp_mb__between_spin_lock_and_spin_unlock_wait() is not used in
any hotpath, the patch does not introduce that define yet.

Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
---
 arch/x86/include/asm/qspinlock.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index eaba080..04d26ed 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -61,6 +61,17 @@ static inline bool virt_spin_lock(struct qspinlock *lock)
 }
 #endif /* CONFIG_PARAVIRT */
 
+#ifndef smp_mb__after_spin_lock
+/**
+ * smp_mb__after_spin_lock() - Provide smp_mb() after spin_lock
+ *
+ * queued_spin_lock() provides full memory barrier semantics,
+ * thus no further memory barrier is required. See
+ * queued_spin_unlock_wait() for further details.
+ */
+#define smp_mb__after_spin_lock()	do { } while (0)
+#endif
+
 #include <asm-generic/qspinlock.h>
 
 #endif /* _ASM_X86_QSPINLOCK_H */
-- 
2.5.5
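
For illustration, here is a minimal sketch (not part of this patch) of the
kind of nested-locking fast path that needs spin_lock() to act as a full
memory barrier, loosely modelled on the ipc/sem.c use case driving this
series. All identifiers (fine_lock, coarse_lock, coarse_mode, fast_path)
are hypothetical:

#include <linux/spinlock.h>

/*
 * Sketch only, not from the patch; names are made up.
 * A writer sets coarse_mode and then waits for all fine-grained lock
 * holders, so the fast path must make its lock acquisition globally
 * visible before it reads coarse_mode.
 */
static DEFINE_SPINLOCK(fine_lock);
static DEFINE_SPINLOCK(coarse_lock);
static bool coarse_mode;

static void fast_path(void)
{
	spin_lock(&fine_lock);
	/*
	 * spin_lock() is only an ACQUIRE.  Upgrade it to a full
	 * barrier so the acquisition cannot be reordered after the
	 * load of coarse_mode.  On x86 qspinlocks this expands to
	 * nothing, because queued_spin_lock() already acts as a
	 * full barrier.
	 */
	smp_mb__after_spin_lock();

	if (READ_ONCE(coarse_mode)) {
		/* Coarse-grained mode is active: fall back to it. */
		spin_unlock(&fine_lock);
		spin_lock(&coarse_lock);
		/* ... slow path ... */
		spin_unlock(&coarse_lock);
		return;
	}
	/* ... fast path, protected by fine_lock only ... */
	spin_unlock(&fine_lock);
}

The point of making the define per-architecture is visible here: on x86 the
smp_mb__after_spin_lock() above compiles to nothing, while an architecture
whose spin_lock() is only an ACQUIRE would map it to smp_mb().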