From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752589AbcEKLtc (ORCPT );
	Wed, 11 May 2016 07:49:32 -0400
Received: from merlin.infradead.org ([205.233.59.134]:59648 "EHLO
	merlin.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752187AbcEKLtb (ORCPT );
	Wed, 11 May 2016 07:49:31 -0400
Date: Wed, 11 May 2016 13:49:18 +0200
From: Peter Zijlstra
To: Jason Low
Cc: Ingo Molnar , linux-kernel@vger.kernel.org, Davidlohr Bueso ,
	Scott J Norton , Waiman Long , peter@hurleysoftware.com,
	Jason Low
Subject: Re: [PATCH] locking/rwsem: Optimize write lock slowpath
Message-ID: <20160511114918.GG3190@twins.programming.kicks-ass.net>
References: <1462821397.2701.16.camel@j-VirtualBox>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1462821397.2701.16.camel@j-VirtualBox>
User-Agent: Mutt/1.5.21 (2012-12-30)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, May 09, 2016 at 12:16:37PM -0700, Jason Low wrote:
> When acquiring the rwsem write lock in the slowpath, we first try
> to set count to RWSEM_WAITING_BIAS. When that is successful,
> we then atomically add the RWSEM_WAITING_BIAS in cases where
> there are other tasks on the wait list. This causes write lock
> operations to often issue multiple atomic operations.
>
> We can instead make the list_is_singular() check first, and then
> set the count accordingly, so that we issue at most 1 atomic
> operation when acquiring the write lock and reduce unnecessary
> cacheline contention.
>
> Signed-off-by: Jason Low
> ---
>  kernel/locking/rwsem-xadd.c | 20 +++++++++++++-------
>  1 file changed, 13 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
> index df4dcb8..23c33e6 100644
> --- a/kernel/locking/rwsem-xadd.c
> +++ b/kernel/locking/rwsem-xadd.c
> @@ -258,14 +258,20 @@ EXPORT_SYMBOL(rwsem_down_read_failed);
>  static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
>  {
>  	/*
> +	 * Avoid trying to acquire write lock if count isn't RWSEM_WAITING_BIAS.
>  	 */
> +	if (count != RWSEM_WAITING_BIAS)
> +		return false;
> +
> +	/*
> +	 * Acquire the lock by trying to set it to ACTIVE_WRITE_BIAS. If there
> +	 * are other tasks on the wait list, we need to add on WAITING_BIAS.
> +	 */
> +	count = list_is_singular(&sem->wait_list) ?
> +			RWSEM_ACTIVE_WRITE_BIAS :
> +			RWSEM_ACTIVE_WRITE_BIAS + RWSEM_WAITING_BIAS;
> +
> +	if (cmpxchg_acquire(&sem->count, RWSEM_WAITING_BIAS, count) == RWSEM_WAITING_BIAS) {
>  		rwsem_set_owner(sem);
>  		return true;
>  	}

Right; so that whole thing works because we're holding sem->wait_lock.
Should we clarify that someplace?

Also; should we not make rw_semaphore::count an atomic_long_t and kill
rwsem_atomic_{update,add}() ?