From mboxrd@z Thu Jan  1 00:00:00 1970
From: Peter Zijlstra <peterz@infradead.org>
Subject: [PATCH 12/31] arch,frv: Convert smp_mb__*
Date: Wed, 19 Mar 2014 07:47:41 +0100
Message-ID: <20140319065204.422709709@infradead.org>
References: <20140319064729.660482086@infradead.org>
Return-path: <linux-arch-owner@vger.kernel.org>
Received: from merlin.infradead.org ([205.233.59.134]:52913 "EHLO
	merlin.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1753380AbaCSG40 (ORCPT );
	Wed, 19 Mar 2014 02:56:26 -0400
Content-Disposition: inline; filename=peterz-frv-smp_mb__atomic.patch
Sender: linux-arch-owner@vger.kernel.org
List-ID: <linux-arch.vger.kernel.org>
To: linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: torvalds@linux-foundation.org, akpm@linux-foundation.org,
	mingo@kernel.org, will.deacon@arm.com, paulmck@linux.vnet.ibm.com,
	Peter Zijlstra <peterz@infradead.org>

Because:

  arch/frv/include/asm/smp.h:#error SMP not supported

smp_mb() is barrier() and we can use the default implementation that
uses smp_mb().

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
---
 arch/frv/include/asm/atomic.h |    7 +------
 arch/frv/include/asm/bitops.h |    6 ------
 2 files changed, 1 insertion(+), 12 deletions(-)

--- a/arch/frv/include/asm/atomic.h
+++ b/arch/frv/include/asm/atomic.h
@@ -17,6 +17,7 @@
 #include <linux/types.h>
 #include <asm/spr-regs.h>
 #include <asm/cmpxchg.h>
+#include <asm/barrier.h>
 
 #ifdef CONFIG_SMP
 #error not SMP safe
@@ -29,12 +30,6 @@
  * We do not have SMP systems, so we don't have to deal with that.
  */
 
-/* Atomic operations are already serializing */
-#define smp_mb__before_atomic_dec()	barrier()
-#define smp_mb__after_atomic_dec()	barrier()
-#define smp_mb__before_atomic_inc()	barrier()
-#define smp_mb__after_atomic_inc()	barrier()
-
 #define ATOMIC_INIT(i)		{ (i) }
 #define atomic_read(v)		(*(volatile int *)&(v)->counter)
 #define atomic_set(v, i)	(((v)->counter) = (i))
--- a/arch/frv/include/asm/bitops.h
+++ b/arch/frv/include/asm/bitops.h
@@ -25,12 +25,6 @@
 
 #include <asm-generic/bitops/ffz.h>
 
-/*
- * clear_bit() doesn't provide any barrier for the compiler.
- */
-#define smp_mb__before_clear_bit()	barrier()
-#define smp_mb__after_clear_bit()	barrier()
-
 #ifndef CONFIG_FRV_OUTOFLINE_ATOMIC_OPS
 static inline
 unsigned long atomic_test_and_ANDNOT_mask(unsigned long mask, volatile unsigned long *v)
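
For reference (not part of the patch): the default implementation that
frv now picks up via <asm/barrier.h> defines the consolidated
smp_mb__{before,after}_atomic() hooks in terms of smp_mb(). A minimal
sketch of that generic fallback, assuming the #ifndef-guard style used
by asm-generic headers (exact placement in the tree may differ):

	/*
	 * Generic fallback: a full SMP memory barrier around the
	 * atomic operation.  On a UP-only arch such as frv, smp_mb()
	 * expands to barrier(), so this is equivalent to the per-arch
	 * definitions removed above.
	 */
	#ifndef smp_mb__before_atomic
	#define smp_mb__before_atomic()	smp_mb()
	#endif

	#ifndef smp_mb__after_atomic
	#define smp_mb__after_atomic()	smp_mb()
	#endif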