From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <1469213094.2344.52.camel@j-VirtualBox>
Subject: Re: [RFC] Avoid mutex starvation when optimistic spinning is disabled
From: Jason Low
To: imre.deak@intel.com
Cc: jason.low2@hpe.com, Waiman Long, Peter Zijlstra,
	linux-kernel@vger.kernel.org, Ingo Molnar, Chris Wilson,
	Daniel Vetter, Davidlohr Bueso
Date: Fri, 22 Jul 2016 11:44:54 -0700
In-Reply-To: <1469180097.30237.41.camel@intel.com>

On Fri, 2016-07-22 at 12:34 +0300, Imre Deak wrote:
> On Thu, 2016-07-21 at 15:29 -0700, Jason Low wrote:
> > On Wed, 2016-07-20 at 14:37 -0400, Waiman Long wrote:
> > > On 07/20/2016 12:39 AM, Jason Low wrote:
> > > > On Tue, 2016-07-19 at 16:04 -0700, Jason Low wrote:
> > > > > Hi Imre,
> > > > >
> > > > > Here is a patch which prevents a thread from spending too much
> > > > > "time" waiting for a mutex in the !CONFIG_MUTEX_SPIN_ON_OWNER case.
> > > > >
> > > > > Would you like to try this out and see if this addresses the
> > > > > mutex starvation issue you are seeing in your workload when
> > > > > optimistic spinning is disabled?
> > > >
> > > > It looks like it didn't take care of the 'lock stealing' case in
> > > > the slowpath, though. Here is the updated, fixed version:
> > > >
> > > > ---
> > > > Signed-off-by: Jason Low
> > > > ---
> > > >  include/linux/mutex.h  |  2 ++
> > > >  kernel/locking/mutex.c | 65 ++++++++++++++++++++++++++++++++++++++++++++------
> > > >  2 files changed, 60 insertions(+), 7 deletions(-)
> > > >
> > > > diff --git a/include/linux/mutex.h b/include/linux/mutex.h
> > > > index 2cb7531..c1ca68d 100644
> > > > --- a/include/linux/mutex.h
> > > > +++ b/include/linux/mutex.h
> > > > @@ -57,6 +57,8 @@ struct mutex {
> > > >  #endif
> > > >  #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
> > > >  	struct optimistic_spin_queue osq; /* Spinner MCS lock */
> > > > +#else
> > > > +	bool yield_to_waiter; /* Prevent starvation when spinning disabled */
> > > >  #endif
> > > >  #ifdef CONFIG_DEBUG_MUTEXES
> > > >  	void *magic;
> > >
> > > You don't need that on a non-SMP system, so maybe you should put it
> > > under an #ifdef CONFIG_SMP block.
> >
> > Right, maybe something like:
> >
> > #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
> > ...
> > ...
> > #elif defined(CONFIG_SMP) /* If optimistic spinning disabled */
> > 	bool yield_to_waiter;
> > #endif
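(For reference: the hunk quoted below uses a need_yield_to_waiter() helper
and a per-waiter 'loop' count whose definitions are not quoted in this
thread. A minimal sketch of how they might look -- the helper name and the
'loop' counter come from the patch, while the setter name and the threshold
value are hypothetical:)

/*
 * Sketch only, inferred from the discussion -- not the actual patch.
 * A waiter that has gone around the slowpath loop too many times sets
 * the flag; would-be lock stealers then back off until that waiter
 * takes the lock.
 */
#define MUTEX_YIELD_THRESHOLD	16	/* hypothetical value */

static inline bool need_yield_to_waiter(struct mutex *lock)
{
	return READ_ONCE(lock->yield_to_waiter);
}

static inline void do_yield_to_waiter(struct mutex *lock, int loop)
{
	if (loop < MUTEX_YIELD_THRESHOLD)
		return;

	/* Tell lock stealers to back off until this waiter gets the lock. */
	if (!READ_ONCE(lock->yield_to_waiter))
		WRITE_ONCE(lock->yield_to_waiter, true);
}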
> > > > @@ -556,7 +595,8 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
> > > >  		 * other waiters. We only attempt the xchg if the count is
> > > >  		 * non-negative in order to avoid unnecessary xchg operations:
> > > >  		 */
> > > > -		if (atomic_read(&lock->count) >= 0 &&
> > > > +		if ((!need_yield_to_waiter(lock) || loop > 1) &&
> > > > +		    atomic_read(&lock->count) >= 0 &&
> > > >  		    (atomic_xchg_acquire(&lock->count, -1) == 1))
> > >
> > > I think you need to reset the yield_to_waiter variable here when
> > > loop > 1 instead of at the end of the loop.
> >
> > So I think in the current state, only the top waiter would be able to
> > both set and clear the yield_to_waiter variable anyway. However, I
> > agree that this detail is not obvious, and it would be better to reset
> > the variable here when loop > 1 to make it more readable.
>
> AFAICS an interruptible waiter behind the top waiter that receives a
> signal and grabs the lock could also reset yield_to_waiter incorrectly
> in that way, increasing the top waiter's delay arbitrarily.

Okay, fair enough :) The reset will be moved so that only the waiter
being yielded to can perform it.
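Roughly something like this -- a sketch only, assuming wait_lock is held
at the point where the slowpath takes the lock, so checking the head of
wait_list is safe:

	/*
	 * Sketch of the planned change, not an actual patch: clear the
	 * flag only when the thread taking the lock is the waiter at
	 * the head of wait_list (the one being yielded to), so that a
	 * signalled waiter further back that steals the lock cannot
	 * clear it early.
	 */
	if (need_yield_to_waiter(lock) &&
	    list_first_entry(&lock->wait_list,
			     struct mutex_waiter, list) == &waiter)
		WRITE_ONCE(lock->yield_to_waiter, false);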