From: Boqun Feng <boqun.feng@gmail.com>
To: linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org
Cc: Peter Zijlstra, Ingo Molnar, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman, Thomas Gleixner, Will Deacon, "Paul E. McKenney",
	Waiman Long, Davidlohr Bueso, Boqun Feng
Subject: [PATCH v3 3/6] atomics: Allow architectures to define their own __atomic_op_* helpers
Date: Mon, 12 Oct 2015 22:14:03 +0800
Message-Id: <1444659246-24769-4-git-send-email-boqun.feng@gmail.com>
In-Reply-To: <1444659246-24769-1-git-send-email-boqun.feng@gmail.com>
References: <1444659246-24769-1-git-send-email-boqun.feng@gmail.com>
List-Id: Linux on PowerPC Developers Mail List

Some architectures have their own special barriers for acquire, release
and fence semantics, for which the general memory barriers
(smp_mb__*_atomic()) used in the default __atomic_op_*() helpers may be
too strong. Allow architectures to define their own __atomic_op_*()
helpers, which override the defaults.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
---
 include/linux/atomic.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 27e580d..947c1dc 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -43,20 +43,29 @@ static inline int atomic_read_ctrl(const atomic_t *v)
  * The idea here is to build acquire/release variants by adding explicit
  * barriers on top of the relaxed variant. In the case where the relaxed
  * variant is already fully ordered, no additional barriers are needed.
+ *
+ * Besides, if an arch has a special barrier for acquire/release, it could
+ * implement its own __atomic_op_* and use the same framework for building
+ * variants.
  */
+#ifndef __atomic_op_acquire
 #define __atomic_op_acquire(op, args...)				\
 ({									\
 	typeof(op##_relaxed(args)) __ret = op##_relaxed(args);		\
 	smp_mb__after_atomic();						\
 	__ret;								\
 })
+#endif

+#ifndef __atomic_op_release
 #define __atomic_op_release(op, args...)				\
 ({									\
 	smp_mb__before_atomic();					\
 	op##_relaxed(args);						\
 })
+#endif

+#ifndef __atomic_op_fence
 #define __atomic_op_fence(op, args...)					\
 ({									\
 	typeof(op##_relaxed(args)) __ret;				\
@@ -65,6 +74,7 @@ static inline int atomic_read_ctrl(const atomic_t *v)
 	smp_mb__after_atomic();						\
 	__ret;								\
 })
+#endif

 /* atomic_add_return_relaxed */
 #ifndef atomic_add_return_relaxed
-- 
2.5.3
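
For readers wondering what an arch-side override looks like in practice,
below is a minimal sketch (not part of this patch) of an architecture
supplying its own helpers before linux/atomic.h is processed. It is
modeled loosely on PowerPC, where acquire/release ordering needs only a
lightweight barrier such as "lwsync" rather than the full barrier implied
by smp_mb__before_atomic()/smp_mb__after_atomic(); the file path and the
exact barrier choice here are assumptions for illustration only.

/*
 * Hypothetical arch header (e.g. arch/foo/include/asm/atomic.h).
 * Because __atomic_op_acquire/__atomic_op_release are already defined
 * when linux/atomic.h is included, the new #ifndef guards there skip
 * the default smp_mb__*_atomic()-based versions. __atomic_op_fence is
 * deliberately left undefined, so the fully ordered default still
 * applies for the *_fence variants.
 */
#define __atomic_op_acquire(op, args...)				\
({									\
	typeof(op##_relaxed(args)) __ret = op##_relaxed(args);		\
	/* lightweight acquire barrier after the op (assumed) */	\
	__asm__ __volatile__("lwsync" : : : "memory");			\
	__ret;								\
})

#define __atomic_op_release(op, args...)				\
({									\
	/* lightweight release barrier before the op (assumed) */	\
	__asm__ __volatile__("lwsync" : : : "memory");			\
	op##_relaxed(args);						\
})

With these in place, a generic definition built by this framework, such as
atomic_add_return_acquire(), expands to the arch's relaxed op followed by
the arch's own acquire barrier instead of a full memory barrier.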