From: Michel Lespinasse <walken@google.com>
To: Linus Torvalds <torvalds@linux-foundation.org>,
David Howells <dhowells@redhat.com>, Ingo Molnar <mingo@elte.hu>,
Thomas Gleixner <tglx@linutronix.de>
Cc: LKML <linux-kernel@vger.kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
Mike Waychison <mikew@google.com>,
Suleiman Souhlal <suleiman@google.com>,
Ying Han <yinghan@google.com>,
Michel Lespinasse <walken@google.com>
Subject: [PATCH 03/10] rwsem: lighter active count checks when waking up readers
Date: Fri, 14 May 2010 05:39:35 -0700
Message-ID: <1273840782-5920-4-git-send-email-walken@google.com>
In-Reply-To: <1273840782-5920-1-git-send-email-walken@google.com>
In __rwsem_do_wake(), we can skip the active count check unless we come
there from up_xxxx(). Also when checking the active count, it is not
actually necessary to increment it; this allows us to get rid of the
read side undo code and simplify the calculation of the final rwsem count
adjustment once we've counted the reader threads to wake.
The basic observation is the following. When there are waiter threads
on a rwsem and the spinlock is held, other threads can only increment the
active count by trying to grab the rwsem in down_xxxx(). However down_xxxx()
will notice there are waiter threads and take the down_failed path,
blocking to acquire the spinlock on the way there. Therefore, a thread
observing an active count of zero, with waiters queued and the spinlock
held, is protected against other threads acquiring the rwsem until it
wakes the last waiter or releases the spinlock.
Signed-off-by: Michel Lespinasse <walken@google.com>
---
lib/rwsem.c | 60 ++++++++++++++++++++++++++++++++--------------------------
1 files changed, 33 insertions(+), 27 deletions(-)
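As a reviewer aid: with the undo path gone, the final count adjustment
in the diff below becomes a single multiply. A minimal user-space sketch
of the arithmetic (assuming the 32-bit x86 bias values; 64-bit uses wider
fields but the same structure):

#include <stdio.h>

/* Bias values as defined for 32-bit x86 rwsems; illustration only. */
#define RWSEM_ACTIVE_BIAS       0x00000001L
#define RWSEM_WAITING_BIAS      (-0x00010000L)

int main(void)
{
        long woken = 3; /* say, three readers at the head of the queue */

        /* Each woken reader becomes active (+RWSEM_ACTIVE_BIAS) and stops
         * waiting (-RWSEM_WAITING_BIAS); there is no longer an earlier
         * increment to subtract back out.
         */
        long adjustment = woken * (RWSEM_ACTIVE_BIAS - RWSEM_WAITING_BIAS);

        printf("adjustment = %#lx\n", adjustment);      /* prints 0x30003 */
        return 0;
}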
diff --git a/lib/rwsem.c b/lib/rwsem.c
index e3e133f..93206a3 100644
--- a/lib/rwsem.c
+++ b/lib/rwsem.c
@@ -36,6 +36,14 @@ struct rwsem_waiter {
#define RWSEM_WAITING_FOR_WRITE 0x00000002
};
+/* Wake types for __rwsem_do_wake(). Note that RWSEM_WAKE_NO_ACTIVE and
+ * RWSEM_WAKE_READ_OWNED imply that the spinlock must have been kept held
+ * since the rwsem value was observed.
+ */
+#define RWSEM_WAKE_ANY 0 /* Wake whatever's at head of wait list */
+#define RWSEM_WAKE_NO_ACTIVE 1 /* rwsem was observed with no active thread */
+#define RWSEM_WAKE_READ_OWNED 2 /* rwsem was observed to be read owned */
+
/*
* handle the lock release when processes blocked on it that can now run
* - if we come here from up_xxxx(), then:
@@ -46,8 +54,8 @@ struct rwsem_waiter {
* - woken process blocks are discarded from the list after having task zeroed
* - writers are only woken if downgrading is false
*/
-static inline struct rw_semaphore *
-__rwsem_do_wake(struct rw_semaphore *sem, int downgrading)
+static struct rw_semaphore *
+__rwsem_do_wake(struct rw_semaphore *sem, int wake_type)
{
struct rwsem_waiter *waiter;
struct task_struct *tsk;
@@ -58,7 +66,7 @@ __rwsem_do_wake(struct rw_semaphore *sem, int downgrading)
if (!(waiter->flags & RWSEM_WAITING_FOR_WRITE))
goto readers_only;
- if (downgrading)
+ if (wake_type == RWSEM_WAKE_READ_OWNED)
goto out;
/* There's a writer at the front of the queue - try to grant it the
@@ -85,19 +93,24 @@ __rwsem_do_wake(struct rw_semaphore *sem, int downgrading)
goto out;
readers_only:
- if (!downgrading) {
-
- /* if we came through an up_xxxx() call, we only only wake
- * someone up if we can transition the active part of the
- * count from 0 -> 1
- */
- try_again_read:
- oldcount = rwsem_atomic_update(RWSEM_ACTIVE_BIAS, sem)
- - RWSEM_ACTIVE_BIAS;
- if (oldcount & RWSEM_ACTIVE_MASK)
- /* Someone grabbed the sem already */
- goto undo_read;
- }
+ /* If we come here from up_xxxx(), another thread might have reached
+ * rwsem_down_failed_common() before we acquired the spinlock and
+ * woken up an active locker. We prefer to check for this first in
+ * order to not spend too much time with the spinlock held if we're
+ * not going to be able to wake up readers in the end.
+ *
+ * Note that we do not need to update the rwsem count: any writer
+ * trying to acquire rwsem will run rwsem_down_write_failed() due
+ * to the waiting threads, and block trying to acquire the spinlock.
+ *
+ * We use a dummy atomic update in order to acquire the cache line
+ * exclusively since we expect to succeed and run the final rwsem
+ * count adjustment pretty soon.
+ */
+ if (wake_type == RWSEM_WAKE_ANY &&
+ rwsem_atomic_update(0, sem) & RWSEM_ACTIVE_MASK)
+ /* Someone grabbed the sem already */
+ goto out;
/* Grant an infinite number of read locks to the readers at the front
* of the queue. Note we increment the 'active part' of the count by
@@ -117,9 +130,6 @@ __rwsem_do_wake(struct rw_semaphore *sem, int downgrading)
loop = woken;
woken *= RWSEM_ACTIVE_BIAS - RWSEM_WAITING_BIAS;
- if (!downgrading)
- /* we'd already done one increment earlier */
- woken -= RWSEM_ACTIVE_BIAS;
rwsem_atomic_add(woken, sem);
@@ -145,10 +155,6 @@ __rwsem_do_wake(struct rw_semaphore *sem, int downgrading)
if (rwsem_atomic_update(-RWSEM_ACTIVE_BIAS, sem) & RWSEM_ACTIVE_MASK)
goto out;
goto try_again;
- undo_read:
- if (rwsem_atomic_update(-RWSEM_ACTIVE_BIAS, sem) & RWSEM_ACTIVE_MASK)
- goto out;
- goto try_again_read;
}
/*
@@ -170,12 +176,12 @@ rwsem_down_failed_common(struct rw_semaphore *sem,
list_add_tail(&waiter->list, &sem->wait_list);
- /* we're now waiting on the lock, but no longer actively read-locking */
+ /* we're now waiting on the lock, but no longer actively locking */
count = rwsem_atomic_update(adjustment, sem);
/* if there are no active locks, wake the front queued process(es) up */
if (!(count & RWSEM_ACTIVE_MASK))
- sem = __rwsem_do_wake(sem, 0);
+ sem = __rwsem_do_wake(sem, RWSEM_WAKE_NO_ACTIVE);
spin_unlock_irq(&sem->wait_lock);
@@ -232,7 +238,7 @@ asmregparm struct rw_semaphore *rwsem_wake(struct rw_semaphore *sem)
/* do nothing if list empty */
if (!list_empty(&sem->wait_list))
- sem = __rwsem_do_wake(sem, 0);
+ sem = __rwsem_do_wake(sem, RWSEM_WAKE_ANY);
spin_unlock_irqrestore(&sem->wait_lock, flags);
@@ -252,7 +258,7 @@ asmregparm struct rw_semaphore *rwsem_downgrade_wake(struct rw_semaphore *sem)
/* do nothing if list empty */
if (!list_empty(&sem->wait_list))
- sem = __rwsem_do_wake(sem, 1);
+ sem = __rwsem_do_wake(sem, RWSEM_WAKE_READ_OWNED);
spin_unlock_irqrestore(&sem->wait_lock, flags);
--
1.7.0.1