Date: Wed, 18 Mar 2015 18:50:41 +0000 (UTC)
From: Mathieu Desnoyers
To: josh@joshtriplett.org
Cc: linux-kernel@vger.kernel.org, "Paul E. McKenney", KOSAKI Motohiro,
    Steven Rostedt, Nicholas Miell, Linus Torvalds, Ingo Molnar, Alan Cox,
    Lai Jiangshan, Stephen Hemminger, Andrew Morton, Thomas Gleixner,
    Peter Zijlstra, David Howells
Message-ID: <814426779.34655.1426704641015.JavaMail.zimbra@efficios.com>
In-Reply-To: <20150318171555.GA31509@cloud>
References: <1426695782-4742-1-git-send-email-mathieu.desnoyers@efficios.com>
 <20150318164240.GA31251@cloud>
 <1914348389.33427.1426697534805.JavaMail.zimbra@efficios.com>
 <20150318171555.GA31509@cloud>
Subject: Re: [RFC PATCH v14] sys_membarrier(): system/process-wide memory barrier (x86)

----- Original Message -----
> On Wed, Mar 18, 2015 at 04:52:14PM +0000, Mathieu Desnoyers wrote:
> > ----- Original Message -----
> > > On Wed, Mar 18, 2015 at 12:23:02PM -0400, Mathieu Desnoyers wrote:
> > > > memory barriers in reader: 1701557485 reads, 3129842 writes
> > > > signal-based scheme:       9825306874 reads,    5386 writes
> > > > sys_membarrier:            7992076602 reads,     220 writes
> > > >
> > > > The dynamic sys_membarrier availability check adds some overhead to
> > > > the read-side compared to the signal-based scheme, but besides that,
> > > > with the expedited scheme, we can see that we are close to the
> > > > read-side performance of the signal-based scheme. However, this
> > > > non-expedited sys_membarrier implementation has a much slower grace
> > > > period than the signal and memory-barrier schemes.
> > >
> > > Doesn't the query flag allow you to find out in advance rather than
> > > dynamically within the reader? What's the reader performance if you
> > > hardcode availability of membarrier?
> >
> > What I am currently doing is to use sys_membarrier with a query flag
> > within a lib constructor, and cache the result in a global variable.
> > In the reader, I just test the variable, and thus detect whether I
> > can use sys_membarrier, or if I need to fall back to barriers on
> > both the reader and writer sides.
> >
> > Are you suggesting I try removing the global variable load+test from
> > the reader fast path?
>
> Right. You said that "The dynamic sys_membarrier availability check
> adds some overhead to the read-side compared to the signal-based
> scheme"; I wondered how much.

With 8 reader threads in parallel, no writer (the workload found in
userspace RCU tests/benchmark/test_urcu*.c):

* memory barriers in read side:
  307.4 million reads/s

* sys_membarrier read side:
  with dynamic check:  1142.0 million reads/s
  hardcoded barrier(): 1453.2 million reads/s
  (a 27% speedup over the dynamic check)

* QSBR (quiescent-state based) read side:
  2276.9 million reads/s
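
For reference, the dynamic check boils down to something like the sketch
below. This is a rough illustration, not the actual liburcu code: the
MEMBARRIER_QUERY command value is a placeholder for whatever the patch's
ABI defines, __NR_membarrier is assumed to come from patched kernel
headers, and error handling is omitted.

#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

#ifndef __NR_membarrier
#error "requires kernel headers that define the membarrier syscall number"
#endif

/* Placeholder command value; a stand-in for whatever the final ABI
 * defines. The query command asks the kernel whether sys_membarrier
 * is usable. */
#define MEMBARRIER_QUERY        0

static int has_sys_membarrier;  /* cached result of the query */

/* Runs once at library load: probe for sys_membarrier and cache the
 * result in a global variable. */
static __attribute__((constructor)) void membarrier_init(void)
{
        if (syscall(__NR_membarrier, MEMBARRIER_QUERY, 0) >= 0)
                has_sys_membarrier = 1;
}

/* Read-side ordering. The load+test of has_sys_membarrier in every
 * read-side critical section is the "dynamic check" overhead measured
 * above: when sys_membarrier is available, a compiler barrier is
 * enough (the writer issues the system-wide barrier); otherwise fall
 * back to a real memory barrier. */
static inline void reader_barrier(void)
{
        if (has_sys_membarrier)
                __asm__ __volatile__ ("" : : : "memory");  /* barrier() */
        else
                __sync_synchronize();                      /* smp_mb() */
}

The "hardcoded barrier()" number above corresponds to reader_barrier()
unconditionally using the compiler barrier, which removes the global
variable load+test from the fast path.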
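
For comparison, the QSBR read side needs no check at all:
rcu_read_lock()/rcu_read_unlock() compile to (nearly) nothing, and each
reader thread instead announces quiescent states explicitly. A rough
usage sketch against the userspace RCU QSBR flavour (link with
-lurcu-qsbr; the stop flag and loop structure are illustrative only):

#include <urcu-qsbr.h>          /* QSBR flavour of userspace RCU */

static int *shared_ptr;         /* RCU-protected pointer (illustrative) */
static volatile int stop;

static void *reader_thread(void *arg)
{
        (void)arg;
        rcu_register_thread();          /* QSBR readers must register */
        while (!stop) {
                rcu_read_lock();        /* a no-op in the QSBR flavour */
                int *p = rcu_dereference(shared_ptr);
                if (p)
                        (void)*p;       /* use the protected data */
                rcu_read_unlock();      /* a no-op in the QSBR flavour */
                rcu_quiescent_state();  /* announce a quiescent state */
        }
        rcu_unregister_thread();
        return NULL;
}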
It might start being worthwhile to consider turning the memory barriers
into no-ops within lib constructors at some point. Remember that
rcu_read_lock/unlock can be inlined into applications, which may add to
the challenge.

Thanks,

Mathieu

--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com