From mboxrd@z Thu Jan  1 00:00:00 1970
From: Waiman Long
Subject: Re: [PATCH v5 3/3] locking/rwsem: Optimize down_read_trylock()
Date: Fri, 22 Mar 2019 13:41:05 -0400
Message-ID: <27c2fb96-daa4-ba2c-da06-e559dc5b693e@redhat.com>
References: <20190322143008.21313-1-longman@redhat.com>
 <20190322143008.21313-4-longman@redhat.com>
 <20190322172501.3nbjw6e2wqsaisgw@shell.armlinux.org.uk>
In-Reply-To: <20190322172501.3nbjw6e2wqsaisgw@shell.armlinux.org.uk>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Content-Language: en-US
List-Id: linux-arch.vger.kernel.org
To: Russell King - ARM Linux admin
Cc: linux-ia64@vger.kernel.org, linux-sh@vger.kernel.org, Peter Zijlstra,
 Will Deacon, linux-kernel@vger.kernel.org, "H. Peter Anvin",
 sparclinux@vger.kernel.org, linux-riscv@lists.infradead.org,
 linux-arch@vger.kernel.org, linux-s390@vger.kernel.org, Davidlohr Bueso,
 linux-c6x-dev@linux-c6x.org, linux-hexagon@vger.kernel.org, x86@kernel.org,
 Ingo Molnar, uclinux-h8-devel@lists.sourceforge.jp,
 linux-xtensa@linux-xtensa.org, Arnd Bergmann, linux-um@lists.infradead.org,
 linuxppc-dev@lists.ozlabs.org, linux-m68k@lists.linux-m68k.org,
 openrisc@lists.librecores.org, Borislav Petkov, Thomas Gleixner,
 linux-arm-kernel@lists.infradead.org, linux-parisc@vger.kernel.org,
 Linus Torvalds, linux-mips@vger.k

On 03/22/2019 01:25 PM, Russell King - ARM Linux admin wrote:
> On Fri, Mar 22, 2019 at 10:30:08AM -0400, Waiman Long wrote:
>> Modify __down_read_trylock() to optimize for an unlocked rwsem and make
>> it generate slightly better code.
>>
>> Before this patch, down_read_trylock:
>>
>>    0x0000000000000000 <+0>:  callq  0x5
>>    0x0000000000000005 <+5>:  jmp    0x18
>>    0x0000000000000007 <+7>:  lea    0x1(%rdx),%rcx
>>    0x000000000000000b <+11>: mov    %rdx,%rax
>>    0x000000000000000e <+14>: lock cmpxchg %rcx,(%rdi)
>>    0x0000000000000013 <+19>: cmp    %rax,%rdx
>>    0x0000000000000016 <+22>: je     0x23
>>    0x0000000000000018 <+24>: mov    (%rdi),%rdx
>>    0x000000000000001b <+27>: test   %rdx,%rdx
>>    0x000000000000001e <+30>: jns    0x7
>>    0x0000000000000020 <+32>: xor    %eax,%eax
>>    0x0000000000000022 <+34>: retq
>>    0x0000000000000023 <+35>: mov    %gs:0x0,%rax
>>    0x000000000000002c <+44>: or     $0x3,%rax
>>    0x0000000000000030 <+48>: mov    %rax,0x20(%rdi)
>>    0x0000000000000034 <+52>: mov    $0x1,%eax
>>    0x0000000000000039 <+57>: retq
>>
>> After patch, down_read_trylock:
>>
>>    0x0000000000000000 <+0>:  callq  0x5
>>    0x0000000000000005 <+5>:  xor    %eax,%eax
>>    0x0000000000000007 <+7>:  lea    0x1(%rax),%rdx
>>    0x000000000000000b <+11>: lock cmpxchg %rdx,(%rdi)
>>    0x0000000000000010 <+16>: jne    0x29
>>    0x0000000000000012 <+18>: mov    %gs:0x0,%rax
>>    0x000000000000001b <+27>: or     $0x3,%rax
>>    0x000000000000001f <+31>: mov    %rax,0x20(%rdi)
>>    0x0000000000000023 <+35>: mov    $0x1,%eax
>>    0x0000000000000028 <+40>: retq
>>    0x0000000000000029 <+41>: test   %rax,%rax
>>    0x000000000000002c <+44>: jns    0x7
>>    0x000000000000002e <+46>: xor    %eax,%eax
>>    0x0000000000000030 <+48>: retq
>>
>> Using a rwsem microbenchmark, the down_read_trylock() rate (with a
>> load of 10 to lengthen the lock critical section) on an x86-64 system
>> before and after the patch was:
>>
>>                  Before Patch    After Patch
>>    # of Threads     rlock           rlock
>>    ------------     -----           -----
>>         1           14,496          14,716
>>         2            8,644           8,453
>>         4            6,799           6,983
>>         8            5,664           7,190
>>
>> On an ARM64 system, the performance results were:
>>
>>                  Before Patch    After Patch
>>    # of Threads     rlock           rlock
>>    ------------     -----           -----
>>         1           23,676          24,488
>>         2            7,697           9,502
>>         4            4,945           3,440
>>         8            2,641           1,603
>>
>> For the uncontended case (1 thread), the new
>> down_read_trylock() is a little bit faster. For the contended cases,
>> the new down_read_trylock() performs pretty well on x86-64, but
>> performance degrades at high contention levels on ARM64.
>
> So, 70% for 4 threads, 61% for 8 threads - does this trend
> continue tailing off as the number of threads (and cores)
> increases?
>
I didn't try a higher number of contending threads. I won't worry too
much about contention, as trylock is a one-off event. The chance of
having more than one trylock happening simultaneously is very small.

Cheers,
Longman