From: Peter Zijlstra <peterz@infradead.org>
To: tglx@linutronix.de, boqun.feng@gmail.com
Cc: linux-kernel@vger.kernel.org, peterz@infradead.org,
Ingo Molnar <mingo@kernel.org>,
Juri Lelli <juri.lelli@redhat.com>,
Steven Rostedt <rostedt@goodmis.org>,
Davidlohr Bueso <dave@stgolabs.net>,
Will Deacon <will@kernel.org>, Waiman Long <longman@redhat.com>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Mike Galbraith <efault@gmx.de>,
Daniel Bristot de Oliveira <bristot@redhat.com>
Subject: [PATCH 3/4] locking/rwbase: Fix rwbase_write_lock() vs __rwbase_read_lock()
Date: Thu, 09 Sep 2021 12:59:18 +0200
Message-ID: <20210909110203.893845303@infradead.org>
In-Reply-To: <20210909105915.757320973@infradead.org>
Boqun noticed that the write-trylock sequence of load+set in
rwbase_write_lock()'s wait-loop is broken: the atomic_read() of
->readers and the atomic_set() to WRITER_BIAS are not both performed
under the same wait_lock instance.

Restructure the code, extracting the sequence into a
__rwbase_write_trylock() helper that asserts wait_lock is held, to make
this more obvious and correct.
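
For illustration, the extracted check-then-set boils down to the
following userspace sketch (plain C11 + pthreads, not the kernel code;
the name write_trylock_locked() and the WRITER_BIAS value here are
stand-ins for this sketch only):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

#define WRITER_BIAS	(1U << 31)	/* stand-in bias value for this sketch */

static pthread_mutex_t wait_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_uint readers;

/* Caller must hold wait_lock; that serialization lets load+set skip the CAS. */
static bool write_trylock_locked(void)
{
	if (atomic_load(&readers) == 0) {
		atomic_store(&readers, WRITER_BIAS);
		return true;
	}
	return false;
}

Because every writer attempts the 0 -> WRITER_BIAS transition with
wait_lock held, two writers can never both observe zero readers and
both store the bias.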
Reported-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
kernel/locking/rwbase_rt.c | 44 ++++++++++++++++++++++++++------------------
1 file changed, 26 insertions(+), 18 deletions(-)
--- a/kernel/locking/rwbase_rt.c
+++ b/kernel/locking/rwbase_rt.c
@@ -196,6 +196,19 @@ static inline void rwbase_write_downgrad
__rwbase_write_unlock(rwb, WRITER_BIAS - 1, flags);
}
+static inline bool __rwbase_write_trylock(struct rwbase_rt *rwb)
+{
+ /* Can do without CAS because we're serialized by wait_lock. */
+ lockdep_assert_held(&rwb->rtmutex.wait_lock);
+
+ if (!atomic_read(&rwb->readers)) {
+ atomic_set(&rwb->readers, WRITER_BIAS);
+ return true;
+ }
+
+ return false;
+}
+
static int __sched rwbase_write_lock(struct rwbase_rt *rwb,
unsigned int state)
{
@@ -210,34 +223,30 @@ static int __sched rwbase_write_lock(str
atomic_sub(READER_BIAS, &rwb->readers);
raw_spin_lock_irqsave(&rtm->wait_lock, flags);
- /*
- * set_current_state() for rw_semaphore
- * current_save_and_set_rtlock_wait_state() for rwlock
- */
- rwbase_set_and_save_current_state(state);
+ if (__rwbase_write_trylock(rwb))
+ goto out_unlock;
- /* Block until all readers have left the critical section. */
- for (; atomic_read(&rwb->readers);) {
+ rwbase_set_and_save_current_state(state);
+ for (;;) {
/* Optimized out for rwlocks */
if (rwbase_signal_pending_state(state, current)) {
rwbase_restore_current_state();
__rwbase_write_unlock(rwb, 0, flags);
return -EINTR;
}
+
+ if (__rwbase_write_trylock(rwb))
+ break;
+
raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
+ rwbase_schedule();
+ raw_spin_lock_irqsave(&rtm->wait_lock, flags);
- /*
- * Schedule and wait for the readers to leave the critical
- * section. The last reader leaving it wakes the waiter.
- */
- if (atomic_read(&rwb->readers) != 0)
- rwbase_schedule();
set_current_state(state);
- raw_spin_lock_irqsave(&rtm->wait_lock, flags);
}
-
- atomic_set(&rwb->readers, WRITER_BIAS);
rwbase_restore_current_state();
+
+out_unlock:
raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
return 0;
}
@@ -253,8 +262,7 @@ static inline int rwbase_write_trylock(s
atomic_sub(READER_BIAS, &rwb->readers);
raw_spin_lock_irqsave(&rtm->wait_lock, flags);
- if (!atomic_read(&rwb->readers)) {
- atomic_set(&rwb->readers, WRITER_BIAS);
+ if (__rwbase_write_trylock(rwb)) {
raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
return 1;
}
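
To see why re-trying under wait_lock on every wakeup closes the window,
the restructured slow path can be modelled in userspace like so
(building on the sketch above; pthread_cond_wait() stands in for the
unlock + rwbase_schedule() + lock sequence, and the signal_pending()
handling is omitted):

static pthread_cond_t writer_wake = PTHREAD_COND_INITIALIZER;

static void write_lock_slowpath(void)
{
	pthread_mutex_lock(&wait_lock);
	/* Try, block, re-try: the load+set pair always runs locked. */
	while (!write_trylock_locked()) {
		/*
		 * pthread_cond_wait() atomically drops and re-takes
		 * wait_lock; the last reader to leave would signal
		 * writer_wake, mirroring the rtmutex wakeup.
		 */
		pthread_cond_wait(&writer_wake, &wait_lock);
	}
	pthread_mutex_unlock(&wait_lock);
}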
Thread overview: 27+ messages
2021-09-09 10:59 [PATCH 0/4] locking/rwbase: Assorted fixes Peter Zijlstra
2021-09-09 10:59 ` [PATCH 1/4] sched/wakeup: Strengthen current_save_and_set_rtlock_wait_state() Peter Zijlstra
2021-09-09 13:45 ` Will Deacon
2021-09-09 14:27 ` Peter Zijlstra
2021-09-10 12:57 ` Will Deacon
2021-09-10 13:17 ` Peter Zijlstra
2021-09-10 14:01 ` Peter Zijlstra
2021-09-10 15:06 ` Will Deacon
2021-09-10 16:07 ` Waiman Long
2021-09-10 17:09 ` Peter Zijlstra
2021-09-12 3:57 ` Boqun Feng
2021-09-10 12:45 ` Sebastian Andrzej Siewior
2021-09-13 22:08 ` Thomas Gleixner
2021-09-13 22:52 ` Thomas Gleixner
2021-09-14 6:45 ` Peter Zijlstra
2021-09-09 10:59 ` [PATCH 2/4] locking/rwbase: Properly match set_and_save_state() to restore_state() Peter Zijlstra
2021-09-09 13:53 ` Will Deacon
2021-09-14 7:31 ` Thomas Gleixner
2021-09-16 11:59 ` [tip: locking/urgent] " tip-bot2 for Peter Zijlstra
2021-09-09 10:59 ` Peter Zijlstra [this message]
2021-09-14 7:45 ` [PATCH 3/4] locking/rwbase: Fix rwbase_write_lock() vs __rwbase_read_lock() Thomas Gleixner
2021-09-14 13:59 ` Peter Zijlstra
2021-09-14 15:00 ` Thomas Gleixner
2021-09-16 11:59 ` [tip: locking/urgent] locking/rwbase: Extract __rwbase_write_trylock() tip-bot2 for Peter Zijlstra
2021-09-09 10:59 ` [PATCH 4/4] locking/rwbase: Take care of ordering guarantee for fastpath reader Peter Zijlstra
2021-09-14 7:46 ` Thomas Gleixner
2021-09-16 11:59 ` [tip: locking/urgent] " tip-bot2 for Boqun Feng