From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752484AbbJSLYW (ORCPT );
	Mon, 19 Oct 2015 07:24:22 -0400
Received: from mail-wi0-f181.google.com ([209.85.212.181]:33337 "EHLO
	mail-wi0-f181.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1750885AbbJSLYV (ORCPT );
	Mon, 19 Oct 2015 07:24:21 -0400
Date: Mon, 19 Oct 2015 13:24:17 +0200
From: Ingo Molnar
To: Peter Zijlstra
Cc: ling.ma.program@gmail.com, mingo@redhat.com, linux-kernel@vger.kernel.org,
	Ma Ling, Arnaldo Carvalho de Melo, Jiri Olsa
Subject: Re: [RFC PATCH] qspinlock: Improve performance by reducing load instruction rollback
Message-ID: <20151019112417.GA752@gmail.com>
References: <1445221642-15319-1-git-send-email-ling.ma.program@gmail.com>
	<20151019075823.GB22488@gmail.com>
	<20151019093410.GJ3816@twins.programming.kicks-ass.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20151019093410.GJ3816@twins.programming.kicks-ass.net>
User-Agent: Mutt/1.5.23 (2014-03-12)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

* Peter Zijlstra wrote:

> On Mon, Oct 19, 2015 at 09:58:23AM +0200, Ingo Molnar wrote:
> >
> > * ling.ma.program@gmail.com wrote:
> >
> > > From: Ma Ling
> > >
> > > All load instructions can run speculatively, but they have to follow
> > > the memory ordering rule across multiple cores, as below:
> > >
> > > _x = _y = 0
> > >
> > > Processor 0			Processor 1
> > >
> > > mov r1, [_y]	// M1		mov [_x], 1	// M3
> > > mov r2, [_x]	// M2		mov [_y], 1	// M4
> > >
> > > If r1 == 1, then r2 must be 1.
> > >
> > > In order to guarantee the above rule, although Processor 0 may execute
> > > M1 and M2 out of order, they are kept in the ROB; when the load buffer
> > > entry for _x in Processor 0 receives the update message from
> > > Processor 1, Processor 0 needs to roll back from instruction M2, which
> > > flushes the whole pipeline. That latency is higher than the penalty of
> > > a branch prediction miss.
> > >
> > > In this patch we use the lock cmpxchg instruction to force load
> > > instructions to be serialized: the destination operand receives a
> > > write cycle without regard to the result of the comparison, which
> > > helps us reduce the penalty from load instruction rollback.
> > >
> > > Our experiments indicate that performance can be improved by 10%~15%
> > > for the 2 and 3 thread cases, where conflicts on the lock cache line
> > > account for most of the time.
> >
> > So it would be nice to create a new user-space spinlock testing facility,
> > via a new 'perf bench spinlock' feature or so. That way others can test
> > and validate your results on different hardware as well.
>
> So it's trivial to lift this code into userspace -- in fact, I have that
> somewhere.
>
> The trouble is going to be keeping them in sync.

So we can just try this optimistically, and if it keeps breaking, we can use
the technique perf uses to sync up the rbtree implementation: we copy the
kernel version into tooling, but run a diff against the kernel version and
warn at tool build time that there's divergence.

I.e. a non-build-fatal force that keeps things in sync.

Thanks,

	Ingo
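
P.S.: to make the above a bit more concrete, below is roughly the kind of
user-space micro-benchmark that a 'perf bench spinlock' feature could start
from: two threads ping-pong on a shared flag, and the waiting side spins
either on a plain load or on a cmpxchg, which is, as I understand it, the
substitution the patch makes in the wait loop. This is only an illustrative
sketch with made-up names -- not the qspinlock code itself:

/*
 * spin-bench.c: minimal user-space sketch, not kernel code.
 *
 * Build:  gcc -O2 -pthread -o spin-bench spin-bench.c
 * Run:    ./spin-bench            (plain-load wait loop)
 *         ./spin-bench cmpxchg    (cmpxchg-based wait loop)
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define ITERS	10000000

static atomic_int flag;
static int use_cmpxchg;

static void wait_for(int val)
{
	if (use_cmpxchg) {
		int expected;

		/*
		 * On x86 this compiles to 'lock cmpxchg', which writes the
		 * cache line even when the comparison fails - the effect
		 * the changelog relies on.
		 */
		do {
			expected = val;
		} while (!atomic_compare_exchange_weak(&flag, &expected, val));
	} else {
		/* Plain load, subject to the rollback described above. */
		while (atomic_load_explicit(&flag, memory_order_acquire) != val)
			;
	}
}

static void *pong(void *arg)
{
	(void)arg;

	for (int i = 0; i < ITERS; i++) {
		wait_for(1);
		atomic_store(&flag, 0);
	}
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t t;

	use_cmpxchg = argc > 1;		/* any argument selects cmpxchg mode */

	pthread_create(&t, NULL, pong, NULL);

	for (int i = 0; i < ITERS; i++) {
		atomic_store(&flag, 1);
		wait_for(0);
	}

	pthread_join(t, NULL);
	printf("done (%s mode)\n", use_cmpxchg ? "cmpxchg" : "plain load");

	return 0;
}

Timing both modes (e.g. 'perf stat ./spin-bench' vs. 'perf stat ./spin-bench
cmpxchg') on a few different machines would give people an easy way to check
the claimed effect on their own hardware; the real thing would of course want
the actual MCS/qspinlock wait loops and proper statistics.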