From: Waiman Long <waiman.long@hpe.com>
To: Jason Low <jason.low2@hp.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>, <linux-kernel@vger.kernel.org>,
Davidlohr Bueso <dave@stgolabs.net>,
Scott J Norton <scott.norton@hpe.com>, <peter@hurleysoftware.com>,
Jason Low <jason.low2@hpe.com>
Subject: Re: [PATCH] locking/rwsem: Optimize write lock slowpath
Date: Mon, 9 May 2016 22:25:54 -0400
Message-ID: <573146B2.2010807@hpe.com>
In-Reply-To: <1462821397.2701.16.camel@j-VirtualBox>
On 05/09/2016 03:16 PM, Jason Low wrote:
> When acquiring the rwsem write lock in the slowpath, we first try
> to cmpxchg() the count from RWSEM_WAITING_BIAS to
> RWSEM_ACTIVE_WRITE_BIAS. When that succeeds, we then atomically add
> RWSEM_WAITING_BIAS back in cases where there are other tasks on the
> wait list. This causes write lock operations to often issue multiple
> atomic operations.
>
> We can instead make the list_is_singular() check first, and then
> set the count accordingly, so that we issue at most 1 atomic
> operation when acquiring the write lock and reduce unnecessary
> cacheline contention.
>
> Signed-off-by: Jason Low <jason.low2@hp.com>
> ---
>  kernel/locking/rwsem-xadd.c | 20 +++++++++++++-------
>  1 file changed, 13 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
> index df4dcb8..23c33e6 100644
> --- a/kernel/locking/rwsem-xadd.c
> +++ b/kernel/locking/rwsem-xadd.c
> @@ -258,14 +258,20 @@ EXPORT_SYMBOL(rwsem_down_read_failed);
>  static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
>  {
>  	/*
> -	 * Try acquiring the write lock. Check count first in order
> -	 * to reduce unnecessary expensive cmpxchg() operations.
> +	 * Avoid trying to acquire write lock if count isn't RWSEM_WAITING_BIAS.
>  	 */
> -	if (count == RWSEM_WAITING_BIAS &&
> -	    cmpxchg_acquire(&sem->count, RWSEM_WAITING_BIAS,
> -		    RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS) {
> -		if (!list_is_singular(&sem->wait_list))
> -			rwsem_atomic_update(RWSEM_WAITING_BIAS, sem);
> +	if (count != RWSEM_WAITING_BIAS)
> +		return false;
> +
> +	/*
> +	 * Acquire the lock by trying to set it to ACTIVE_WRITE_BIAS. If there
> +	 * are other tasks on the wait list, we need to add on WAITING_BIAS.
> +	 */
> +	count = list_is_singular(&sem->wait_list) ?
> +			RWSEM_ACTIVE_WRITE_BIAS :
> +			RWSEM_ACTIVE_WRITE_BIAS + RWSEM_WAITING_BIAS;
> +
> +	if (cmpxchg_acquire(&sem->count, RWSEM_WAITING_BIAS, count) == RWSEM_WAITING_BIAS) {
>  		rwsem_set_owner(sem);
>  		return true;
>  	}
Acked-by: Waiman Long <Waiman.Long@hpe.com>
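To make the before/after difference concrete, here is a minimal userspace
sketch of the two acquisition paths using C11 atomics. The bias values,
the single_waiter flag, and the function names are illustrative stand-ins
rather than the kernel's actual RWSEM_* constants and wait-list handling;
only the shape of the atomic operations mirrors the patch above.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative values only; the kernel's RWSEM_* biases differ. */
#define WAITING_BIAS		(-16L)
#define ACTIVE_BIAS		(1L)
#define ACTIVE_WRITE_BIAS	(WAITING_BIAS + ACTIVE_BIAS)

/* Old path: claim the lock, then a second atomic add if waiters remain. */
static bool try_write_lock_old(atomic_long *count, bool single_waiter)
{
	long expected = WAITING_BIAS;

	/* 1st atomic op: cmpxchg count from WAITING_BIAS to ACTIVE_WRITE_BIAS */
	if (!atomic_compare_exchange_strong(count, &expected, ACTIVE_WRITE_BIAS))
		return false;

	/* 2nd atomic op: restore the waiting bias if other tasks still wait */
	if (!single_waiter)
		atomic_fetch_add(count, WAITING_BIAS);
	return true;
}

/* New path: pick the final value first, then issue a single cmpxchg. */
static bool try_write_lock_new(atomic_long *count, bool single_waiter)
{
	long expected = WAITING_BIAS;
	long newval = single_waiter ? ACTIVE_WRITE_BIAS
				    : ACTIVE_WRITE_BIAS + WAITING_BIAS;

	return atomic_compare_exchange_strong(count, &expected, newval);
}

int main(void)
{
	atomic_long c_old = WAITING_BIAS, c_new = WAITING_BIAS;

	/* Both paths end with the same count; only the op count differs. */
	printf("old: acquired=%d count=%ld\n",
	       try_write_lock_old(&c_old, false), atomic_load(&c_old));
	printf("new: acquired=%d count=%ld\n",
	       try_write_lock_new(&c_new, false), atomic_load(&c_new));
	return 0;
}

With other waiters present, the old path performs a cmpxchg() followed by a
second atomic add, while the new path folds the waiting bias into the value
installed by a single cmpxchg(), leaving the same final count either way.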
Thread overview: 7+ messages
2016-05-09 19:16 [PATCH] locking/rwsem: Optimize write lock slowpath Jason Low
2016-05-10 2:25 ` Waiman Long [this message]
2016-05-11 11:49 ` Peter Zijlstra
2016-05-11 18:26 ` Jason Low
2016-05-11 18:38 ` Peter Zijlstra
2016-05-11 18:33 ` Davidlohr Bueso
2016-05-11 18:49 ` Jason Low