From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752992AbcAVKx3 (ORCPT );
	Fri, 22 Jan 2016 05:53:29 -0500
Received: from bombadil.infradead.org ([198.137.202.9]:34042 "EHLO
	bombadil.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751529AbcAVKxX (ORCPT );
	Fri, 22 Jan 2016 05:53:23 -0500
Date: Fri, 22 Jan 2016 11:53:12 +0100
From: Peter Zijlstra
To: Jason Low
Cc: Waiman Long, Ding Tianhong, Ingo Molnar,
	"linux-kernel@vger.kernel.org", Davidlohr Bueso, Linus Torvalds,
	"Paul E. McKenney", Thomas Gleixner, Will Deacon, Tim Chen,
	Waiman Long
Subject: Re: [PATCH RFC] locking/mutexes: don't spin on owner when wait list is not NULL.
Message-ID: <20160122105312.GQ6357@twins.programming.kicks-ass.net>
References: <56A0A4ED.3070308@huawei.com> <56A1638A.7050202@hpe.com>
	<20160122085422.GO6357@twins.programming.kicks-ass.net>
	<1453458019.9727.8.camel@j-VirtualBox>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1453458019.9727.8.camel@j-VirtualBox>
User-Agent: Mutt/1.5.21 (2012-12-30)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Jan 22, 2016 at 02:20:19AM -0800, Jason Low wrote:
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -543,6 +543,8 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
> 	lock_contended(&lock->dep_map, ip);
> 
> 	for (;;) {
> +		bool acquired = false;
> +
> 		/*
> 		 * Lets try to take the lock again - this is needed even if
> 		 * we get here for the first time (shortly after failing to
> @@ -577,7 +579,12 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
> 		/* didn't get the lock, go to sleep: */
> 		spin_unlock_mutex(&lock->wait_lock, flags);
> 		schedule_preempt_disabled();
> +
> +		if (mutex_is_locked(lock))
> +			acquired = mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx);
> 		spin_lock_mutex(&lock->wait_lock, flags);
> +		if (acquired)
> +			break;
> 	}
> 	__set_task_state(task, TASK_RUNNING);

I think the problem here is that mutex_optimistic_spin() leaves the
mutex->count == 0, even though we have waiters (us at the very least).

But this should be easily fixed, since if we acquired, we should be the
one releasing, so there's no race.

So something like so:

	if (acquired) {
		atomic_set(&mutex->count, -1);
		break;
	}

Should deal with that -- we'll set it to 0 again a little further down
if the list ends up empty.

There might be other details, but this is the one that stood out.