From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753476AbcALWOm (ORCPT );
	Tue, 12 Jan 2016 17:14:42 -0500
Received: from mx1.redhat.com ([209.132.183.28]:57081 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1750708AbcALWOl (ORCPT );
	Tue, 12 Jan 2016 17:14:41 -0500
Date: Wed, 13 Jan 2016 00:14:36 +0200
From: "Michael S. Tsirkin" 
To: Linus Torvalds 
Cc: Andy Lutomirski , Andy Lutomirski ,
	Davidlohr Bueso , Davidlohr Bueso ,
	Peter Zijlstra , the arch/x86 maintainers ,
	Linux Kernel Mailing List , virtualization ,
	"H. Peter Anvin" , Thomas Gleixner ,
	"Paul E. McKenney" , Ingo Molnar 
Subject: Re: [PATCH 3/4] x86,asm: Re-work smp_store_mb()
Message-ID: <20160113001127-mutt-send-email-mst@redhat.com>
References: <20151027223744.GB11242@worktop.amr.corp.intel.com>
	<20151102201535.GB1707@linux-uzut.site>
	<20160112150032-mutt-send-email-mst@redhat.com>
	<56956276.1090705@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: 
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Jan 12, 2016 at 01:37:38PM -0800, Linus Torvalds wrote:
> On Tue, Jan 12, 2016 at 12:59 PM, Andy Lutomirski wrote:
> >
> > Here's an article with numbers:
> >
> > http://shipilev.net/blog/2014/on-the-fence-with-dependencies/
> 
> Well, that's with the busy loop and one set of code generation. It
> doesn't show the "oops, deeper stack isn't even in the cache any more
> due to call chains" issue.
> 
> But yes:
> 
> > I think they're suggesting using a negative offset, which is safe as
> > long as it doesn't page fault, even though we have the redzone
> > disabled.
> 
> I think a negative offset might work very well.
> Partly exactly
> *because* we have the redzone disabled: we know that inside the
> kernel, we'll never have any live stack frame accesses under the stack
> pointer, so "-4(%rsp)" sounds good to me. There should never be any
> pending writes in the write buffer, because even if it *was* live, it
> would have been read off first.
> 
> Yeah, it potentially does extend the stack cache footprint by another
> 4 bytes, but that sounds very benign.
> 
> So perhaps it might be worth trying to switch the "mfence" to "lock ;
> addl $0,-4(%rsp)" in the kernel for x86-64, and remove the alternate
> for x86-32.
> 
> I'd still want to see somebody try to benchmark it. I doubt it's
> noticeable, but making changes because you think it might save a few
> cycles without then even measuring it is just wrong.
> 
>                 Linus

Oops, I posted v2 with just offset 0 before reading the rest of this
thread.

I did try with offset 0 and didn't measure any change on any perf bench
test, or on kernel build. I wonder which benchmark stresses smp_mb the
most.

I'll look into using a negative offset.

-- 
MST