From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from mx1.redhat.com ([209.132.183.28]:41327 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1750904AbcAEIv3
	(ORCPT ); Tue, 5 Jan 2016 03:51:29 -0500
Date: Tue, 5 Jan 2016 10:51:17 +0200
From: "Michael S. Tsirkin"
Subject: Re: [PATCH v2 15/32] powerpc: define __smp_xxx
Message-ID: <20160105085117.GA11858@redhat.com>
References: <1451572003-2440-1-git-send-email-mst@redhat.com>
 <1451572003-2440-16-git-send-email-mst@redhat.com>
 <20160105013648.GA1256@fixme-laptop.cn.ibm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20160105013648.GA1256@fixme-laptop.cn.ibm.com>
Sender: linux-arch-owner@vger.kernel.org
List-ID: <linux-arch.vger.kernel.org>
To: Boqun Feng
Cc: linux-kernel@vger.kernel.org, Peter Zijlstra, Arnd Bergmann,
 linux-arch@vger.kernel.org, Andrew Cooper,
 virtualization@lists.linux-foundation.org, Stefano Stabellini,
 Thomas Gleixner, Ingo Molnar, "H. Peter Anvin", David Miller,
 linux-ia64@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 linux-s390@vger.kernel.org, sparclinux@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-metag@vger.kernel.org,
 linux-mips@linux-mips.org, x86@kernel.org,
 user-mode-linux-devel@lists.sourceforge.net,
 adi-buildroot-devel@lists.sourceforge.net, linux-sh@vger.kernel.org,
 linux-xtensa@linux-xtensa.org, xen-devel@lists.xenproject.org,
 Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
 Ingo Molnar, Davidlohr Bueso, Andrey Konovalov, "Paul E. McKenney"

Peter Anvin" , David Miller , linux-ia64@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-metag@vger.kernel.org, linux-mips@linux-mips.org, x86@kernel.org, user-mode-linux-devel@lists.sourceforge.net, adi-buildroot-devel@lists.sourceforge.net, linux-sh@vger.kernel.org, linux-xtensa@linux-xtensa.org, xen-devel@lists.xenproject.org, Benjamin Herrenschmidt , Paul Mackerras , Michael Ellerman , Ingo Molnar , Davidlohr Bueso , Andrey Konovalov , "Paul E. McKenney" Message-ID: <20160105085117.eoxGRMWu5gy8RfKgjU0JzFgytNcAFhrIlXiURE1pRdo@z> On Tue, Jan 05, 2016 at 09:36:55AM +0800, Boqun Feng wrote: > Hi Michael, > > On Thu, Dec 31, 2015 at 09:07:42PM +0200, Michael S. Tsirkin wrote: > > This defines __smp_xxx barriers for powerpc > > for use by virtualization. > > > > smp_xxx barriers are removed as they are > > defined correctly by asm-generic/barriers.h I think this is the part that was missed in review. > > This reduces the amount of arch-specific boiler-plate code. > > > > Signed-off-by: Michael S. Tsirkin > > Acked-by: Arnd Bergmann > > --- > > arch/powerpc/include/asm/barrier.h | 24 ++++++++---------------- > > 1 file changed, 8 insertions(+), 16 deletions(-) > > > > diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h > > index 980ad0c..c0deafc 100644 > > --- a/arch/powerpc/include/asm/barrier.h > > +++ b/arch/powerpc/include/asm/barrier.h > > @@ -44,19 +44,11 @@ > > #define dma_rmb() __lwsync() > > #define dma_wmb() __asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory") > > > > -#ifdef CONFIG_SMP > > -#define smp_lwsync() __lwsync() > > +#define __smp_lwsync() __lwsync() > > > > so __smp_lwsync() is always mapped to lwsync, right? Yes. > > -#define smp_mb() mb() > > -#define smp_rmb() __lwsync() > > -#define smp_wmb() __asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory") > > -#else > > -#define smp_lwsync() barrier() > > - > > -#define smp_mb() barrier() > > -#define smp_rmb() barrier() > > -#define smp_wmb() barrier() > > -#endif /* CONFIG_SMP */ > > +#define __smp_mb() mb() > > +#define __smp_rmb() __lwsync() > > +#define __smp_wmb() __asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory") > > > > /* > > * This is a barrier which prevents following instructions from being > > @@ -67,18 +59,18 @@ > > #define data_barrier(x) \ > > asm volatile("twi 0,%0,0; isync" : : "r" (x) : "memory"); > > > > -#define smp_store_release(p, v) \ > > +#define __smp_store_release(p, v) \ > > do { \ > > compiletime_assert_atomic_type(*p); \ > > - smp_lwsync(); \ > > + __smp_lwsync(); \ > > , therefore this will emit an lwsync no matter SMP or UP. Absolutely. But smp_store_release (without __) will not. Please note I did test this: for ppc code before and after this patch generates exactly the same binary on SMP and UP. > Another thing is that smp_lwsync() may have a third user(other than > smp_load_acquire() and smp_store_release()): > > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877 > > I'm OK to change my patch accordingly, but do we really want > smp_lwsync() get involved in this cleanup? If I understand you > correctly, this cleanup focuses on external API like smp_{r,w,}mb(), > while smp_lwsync() is internal to PPC. > > Regards, > Boqun I think you missed the leading ___ :) smp_store_release is external and it needs __smp_lwsync as defined here. I can duplicate some code and have smp_lwsync *not* call __smp_lwsync but why do this? 
Still, if you prefer it this way, please let me know.

> >  	WRITE_ONCE(*p, v);						\
> >  } while (0)
> >  
> > -#define smp_load_acquire(p)						\
> > +#define __smp_load_acquire(p)						\
> >  ({									\
> >  	typeof(*p) ___p1 = READ_ONCE(*p);				\
> >  	compiletime_assert_atomic_type(*p);				\
> > -	smp_lwsync();							\
> > +	__smp_lwsync();							\
> >  	___p1;								\
> >  })
> >  
> > -- 
> > MST
> > 
> > --
> > To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at http://vger.kernel.org/majordomo-info.html
> > Please read the FAQ at http://www.tux.org/lkml/