From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1755085AbcHWUmB (ORCPT ); Tue, 23 Aug 2016 16:42:01 -0400
Received: from bombadil.infradead.org ([198.137.202.9]:44234 "EHLO
	bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1753725AbcHWUl7 (ORCPT );
	Tue, 23 Aug 2016 16:41:59 -0400
Date: Tue, 23 Aug 2016 22:41:36 +0200
From: Peter Zijlstra 
To: Waiman Long 
Cc: Jason Low , Davidlohr Bueso , Linus Torvalds , Ding Tianhong ,
	Thomas Gleixner , Will Deacon , Ingo Molnar , Imre Deak ,
	Linux Kernel Mailing List , Tim Chen , "Paul E. McKenney" ,
	jason.low2@hp.com, chris@chris-wilson.co.uk
Subject: Re: [RFC][PATCH 0/3] locking/mutex: Rewrite basic mutex
Message-ID: <20160823204136.GW10153@twins.programming.kicks-ass.net>
References: <20160823124617.015645861@infradead.org>
	<20160823161750.GD31186@linux-80c1.suse>
	<1471970103.2381.51.camel@j-VirtualBox>
	<20160823165739.GQ10153@twins.programming.kicks-ass.net>
	<57BCA5B1.1010401@hpe.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <57BCA5B1.1010401@hpe.com>
User-Agent: Mutt/1.5.23.1 (2014-03-12)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Aug 23, 2016 at 03:36:17PM -0400, Waiman Long wrote:
> I think this is the right way to go. There isn't any big change in the
> slowpath, so the contended performance should be the same. The fastpath,
> however, will get a bit slower as a single atomic op plus a jump
> instruction (a single cacheline load) is replaced by a read-and-test and
> cmpxchg (potentially 2 cacheline loads), which will be somewhat slower
> than the optimized assembly code.
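To make the fast-path change concrete, here is a minimal userspace C sketch of a trylock over a combined owner-pointer-plus-flags word: one relaxed load followed by a cmpxchg, which is the "read-and-test and cmpxchg" shape being discussed. This is illustrative only, not the actual kernel patch; `MUTEX_FLAGS`, `sketch_mutex` and `sketch_trylock` are made-up names, and a plain `unsigned long` stands in for the task pointer.

```c
#include <stdatomic.h>
#include <assert.h>

/* Illustrative sketch: the rewritten mutex keeps the owner task pointer
 * and a few low flag bits in one word, so the fast path becomes a load
 * (needed to preserve the flag bits) followed by a cmpxchg, instead of
 * a single atomic op plus a conditional jump. */

#define MUTEX_FLAGS 0x03UL              /* low bits reserved for state flags */

struct sketch_mutex {
	_Atomic unsigned long owner;    /* task pointer | flag bits */
};

/* Returns nonzero on success; 'task' stands in for current. */
static int sketch_trylock(struct sketch_mutex *m, unsigned long task)
{
	unsigned long old = atomic_load_explicit(&m->owner,
						 memory_order_relaxed);
	for (;;) {
		if (old & ~MUTEX_FLAGS)         /* already owned */
			return 0;
		/* install ourselves while preserving any flag bits;
		 * a weak CAS may fail spuriously, hence the loop */
		if (atomic_compare_exchange_weak_explicit(&m->owner, &old,
					task | (old & MUTEX_FLAGS),
					memory_order_acquire,
					memory_order_relaxed))
			return 1;
	}
}
```

The initial load is what a "blind" cmpxchg against 0 would skip, at the cost of losing any flag bits already set in the word.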
Yeah, I'll try and run some workloads tomorrow if you and Jason don't
beat me to it ;-)

> Alternatively, you can replace the __mutex_trylock() in mutex_lock() by
> just a blind cmpxchg to optimize the fastpath further.

Problem with that is that we need to preserve the flag bits, so we need
the initial load. Or were you thinking of:

	cmpxchg(&lock->owner, 0UL, (unsigned long)current)

which only works on uncontended locks?

> A cmpxchg will still be a tiny bit slower than other atomic ops, but it
> will be more acceptable, I think.

I don't think cmpxchg is much slower than, say, xadd or xchg; the
typical problem with cmpxchg is the looping part, but single-instruction
costs should be similar.

> BTW, I got the following compilation warning when I tried your patch:
>
> drivers/gpu/drm/i915/i915_gem_shrinker.c: In function ‘mutex_is_locked_by’:
> drivers/gpu/drm/i915/i915_gem_shrinker.c:44:22: error: invalid operands to
> binary == (have ‘atomic_long_t’ and ‘struct task_struct *’)
>    return mutex->owner == task;
>                        ^
>   CC [M]  drivers/gpu/drm/i915/intel_psr.o
> drivers/gpu/drm/i915/i915_gem_shrinker.c:49:1: warning: control reaches end
> of non-void function [-Wreturn-type]
>  }
>  ^
> make[4]: *** [drivers/gpu/drm/i915/i915_gem_shrinker.o] Error 1
>
> Apparently, you may need to look to see if there are other direct
> accesses of the owner field in the other code.

AArggghh.. that is horrible, horrible code. It tries to do a recursive
mutex check and pokes at the innards of the mutex. That so deserves to
break.
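For reference, a userspace sketch of why that i915 check breaks: once the owner field is an atomic word carrying flag bits in its low bits, a direct `mutex->owner == task` comparison is a type error, and even a "fixed" version would have to read the word atomically and mask the flags, while still being the same layering violation. `MUTEX_FLAGS` and the field layout below are assumptions for illustration, not the i915 or mutex code.

```c
#include <stdatomic.h>
#include <assert.h>

#define MUTEX_FLAGS 0x03UL              /* assumed low flag bits */

struct sketch_mutex {
	_Atomic unsigned long owner;    /* task pointer | flag bits */
};

/* What the i915-style ownership check would have to become after the
 * rewrite (hypothetical name; still pokes at mutex internals). */
static int sketch_is_locked_by(struct sketch_mutex *m, unsigned long task)
{
	unsigned long owner = atomic_load_explicit(&m->owner,
						   memory_order_relaxed);
	return (owner & ~MUTEX_FLAGS) == task;
}
```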