From: "Paul E. McKenney"
Subject: Re: [PATCH v2 0/9] Remove spin_unlock_wait()
Date: Sat, 8 Jul 2017 04:39:46 -0700
Message-ID: <20170708113946.GN2393@linux.vnet.ibm.com>
Reply-To: paulmck@linux.vnet.ibm.com
In-Reply-To: <20170708083543.tnr7yyhojmyiluw4@gmail.com>
References: <20170629235918.GA6445@linux.vnet.ibm.com>
 <20170705232955.GA15992@linux.vnet.ibm.com>
 <063D6719AE5E284EB5DD2968C1650D6DD0033F01@AcuExch.aculab.com>
 <20170706160555.xc63yydk77gmttae@hirez.programming.kicks-ass.net>
 <20170706162024.GD2393@linux.vnet.ibm.com>
 <20170706165036.v4u5rbz56si4emw5@hirez.programming.kicks-ass.net>
 <20170707083128.wqk6msuuhtyykhpu@gmail.com>
 <48164d9a-f291-94f3-e0b1-98bb312bf846@colorfullife.com>
 <20170708083543.tnr7yyhojmyiluw4@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
To: Ingo Molnar
Cc: Manfred Spraul, Peter Zijlstra, David Laight,
 linux-kernel@vger.kernel.org, netfilter-devel@vger.kernel.org,
 netdev@vger.kernel.org, oleg@redhat.com, akpm@linux-foundation.org,
 mingo@redhat.com, dave@stgolabs.net, tj@kernel.org, arnd@arndb.de,
 linux-arch@vger.kernel.org, will.deacon@arm.com,
 stern@rowland.harvard.edu, parri.andrea@gmail.com,
 torvalds@linux-foundation.org

On Sat, Jul 08, 2017 at 10:35:43AM +0200, Ingo Molnar wrote:
> 
> * Manfred Spraul wrote:
> 
> > Hi Ingo,
> > 
> > On 07/07/2017 10:31 AM, Ingo Molnar wrote:
> > > 
> > > There's another, probably just as significant advantage: queued_spin_unlock_wait()
> > > is 'read-only', while spin_lock()+spin_unlock() dirties the lock cache line. On
> > > any bigger system this should make a very measurable difference - if
> > > spin_unlock_wait() is ever used in a performance critical code path.
> > 
> > At least for ipc/sem:
> > Dirtying the cacheline (in the slow path) allows to remove a smp_mb() in the
> > hot path.
> > So for sem_lock(), I either need a primitive that dirties the cacheline or
> > sem_lock() must continue to use spin_lock()/spin_unlock().
> 
> Technically you could use spin_trylock()+spin_unlock() and avoid the lock acquire
> spinning on spin_unlock() and get very close to the slow path performance of a
> pure cacheline-dirtying behavior.
> 
> But adding something like spin_barrier(), which purely dirties the lock cacheline,
> would be even faster, right?

Interestingly enough, the arm64 and powerpc implementations of
spin_unlock_wait() were very close to what it sounds like you are
describing.
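Just to make the suggestion concrete, here is a minimal sketch of such a
purely cacheline-dirtying primitive built only from existing lock
operations.  The spin_barrier() names and both bodies are hypothetical,
following the suggestion above rather than any existing implementation:

#include <linux/spinlock.h>

/*
 * Hypothetical primitive: wait for any current lock holder and dirty
 * the lock cacheline without leaving the lock held.  The simplest form
 * is a full acquire/release pair.
 */
static inline void spin_barrier(spinlock_t *lock)
{
	spin_lock(lock);
	spin_unlock(lock);
}

/*
 * Variant along the lines of the spin_trylock()+spin_unlock()
 * suggestion above: spin on the trylock instead of queueing behind
 * other waiters, then release immediately.
 */
static inline void spin_barrier_trylock(spinlock_t *lock)
{
	while (!spin_trylock(lock))
		cpu_relax();
	spin_unlock(lock);
}

Either form writes the lock word, which is exactly the cacheline-dirtying
behavior discussed above, at the cost of briefly excluding other
acquirers.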
							Thanx, Paul