Date: Wed, 11 May 2016 11:33:28 -0700
From: Davidlohr Bueso
To: Peter Zijlstra
Cc: Jason Low, Ingo Molnar, linux-kernel@vger.kernel.org, Scott J Norton, Waiman Long, peter@hurleysoftware.com, Jason Low
Subject: Re: [PATCH] locking/rwsem: Optimize write lock slowpath
Message-ID: <20160511183328.GA10711@linux-uzut.site>
References: <1462821397.2701.16.camel@j-VirtualBox> <20160511114918.GG3190@twins.programming.kicks-ass.net>
In-Reply-To: <20160511114918.GG3190@twins.programming.kicks-ass.net>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, 11 May 2016, Peter Zijlstra wrote:

>On Mon, May 09, 2016 at 12:16:37PM -0700, Jason Low wrote:
>> When acquiring the rwsem write lock in the slowpath, we first try
>> to set count to RWSEM_WAITING_BIAS. When that is successful,
>> we then atomically add the RWSEM_WAITING_BIAS in cases where
>> there are other tasks on the wait list. This causes write lock
>> operations to often issue multiple atomic operations.
>>
>> We can instead make the list_is_singular() check first, and then
>> set the count accordingly, so that we issue at most 1 atomic
>> operation when acquiring the write lock and reduce unnecessary
>> cacheline contention.
>>
>> Signed-off-by: Jason Low

Acked-by: Davidlohr Bueso

(one nit: the patch title could be more informative as to which
optimization we are talking about here, ie: 'reduce atomic ops in
writer slowpath' or something.)
>> ---
>>  kernel/locking/rwsem-xadd.c | 20 +++++++++++++-------
>>  1 file changed, 13 insertions(+), 7 deletions(-)
>>
>> diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
>> index df4dcb8..23c33e6 100644
>> --- a/kernel/locking/rwsem-xadd.c
>> +++ b/kernel/locking/rwsem-xadd.c
>> @@ -258,14 +258,20 @@ EXPORT_SYMBOL(rwsem_down_read_failed);
>>  static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
>>  {
>>  	/*
>> +	 * Avoid trying to acquire write lock if count isn't RWSEM_WAITING_BIAS.
>>  	 */
>> +	if (count != RWSEM_WAITING_BIAS)
>> +		return false;
>> +
>> +	/*
>> +	 * Acquire the lock by trying to set it to ACTIVE_WRITE_BIAS. If there
>> +	 * are other tasks on the wait list, we need to add on WAITING_BIAS.
>> +	 */
>> +	count = list_is_singular(&sem->wait_list) ?
>> +			RWSEM_ACTIVE_WRITE_BIAS :
>> +			RWSEM_ACTIVE_WRITE_BIAS + RWSEM_WAITING_BIAS;
>> +
>> +	if (cmpxchg_acquire(&sem->count, RWSEM_WAITING_BIAS, count) == RWSEM_WAITING_BIAS) {
>>  		rwsem_set_owner(sem);
>>  		return true;
>>  	}
>
> Right; so that whole thing works because we're holding sem->wait_lock.
> Should we clarify that someplace?

Yes exactly, rwsem_try_write_lock() is always called with the wait_lock
held, unlike its unqueued cousin.

Thanks,
Davidlohr