From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Michael S. Tsirkin"
Subject: Re: [PATCH RFC] kvm: optimize out smp_mb using srcu_read_unlock
Date: Thu, 31 Oct 2013 01:26:05 +0200
Message-ID: <20131030232605.GA28823@redhat.com>
References: <20131030190929.GA7153@redhat.com>
 <20131030201552.GP4126@linux.vnet.ibm.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: linux-kernel , kvm@vger.kernel.org, gleb@redhat.com, pbonzini@redhat.com
To: "Paul E. McKenney"
Return-path: 
Content-Disposition: inline
In-Reply-To: <20131030201552.GP4126@linux.vnet.ibm.com>
Sender: linux-kernel-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

> > Paul, could you review this patch please?
> > Documentation/memory-barriers.txt says that unlock has a weaker
> > uni-directional barrier, but in practice srcu_read_unlock calls
> > smp_mb().
> >
> > Is it OK to rely on this?  If not, can I add
> > smp_mb__after_srcu_read_unlock (making it an empty macro for now)
> > so we can avoid an actual extra smp_mb()?
> 
> Please use smp_mb__after_srcu_read_unlock().  After all, it was not
> that long ago that srcu_read_unlock() contained no memory barriers,
> and perhaps some day it won't need to once again.
> 
> 							Thanx, Paul

Thanks!
Something like this will be enough?

diff --git a/include/linux/srcu.h b/include/linux/srcu.h
index c114614..9b058ee 100644
--- a/include/linux/srcu.h
+++ b/include/linux/srcu.h
@@ -237,4 +237,18 @@ static inline void srcu_read_unlock(struct srcu_struct *sp, int idx)
 	__srcu_read_unlock(sp, idx);
 }
 
+/**
+ * smp_mb__after_srcu_read_unlock - ensure full ordering after srcu_read_unlock
+ *
+ * Converts the preceding srcu_read_unlock into a two-way memory barrier.
+ *
+ * Call this after srcu_read_unlock, to guarantee that all memory operations
+ * that occur after smp_mb__after_srcu_read_unlock will appear to happen after
+ * the preceding srcu_read_unlock.
+ */
+static inline void smp_mb__after_srcu_read_unlock(void)
+{
+	/* __srcu_read_unlock has smp_mb() internally so nothing to do here. */
+}
+
 #endif