From: Waiman Long <longman@redhat.com>
To: Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>, Will Deacon <will@kernel.org>,
Boqun Feng <boqun.feng@gmail.com>
Cc: linux-kernel@vger.kernel.org, john.p.donnelly@oracle.com,
"Hillf Danton" <hdanton@sina.com>,
"Mukesh Ojha" <quic_mojha@quicinc.com>,
"Ting11 Wang 王婷" <wangting11@xiaomi.com>,
"Waiman Long" <longman@redhat.com>
Subject: [PATCH v6 6/6] locking/rwsem: Update handoff lock events tracking
Date: Thu, 17 Nov 2022 21:20:16 -0500
Message-ID: <20221118022016.462070-7-longman@redhat.com>
In-Reply-To: <20221118022016.462070-1-longman@redhat.com>
With the new direct rwsem lock handoff, the corresponding handoff lock
events are updated to also track the number of secondary lock handoffs
in rwsem_down_read_slowpath() to see how prevalent those handoff
events are. The number of primary lock handoffs in the unlock paths is
(rwsem_handoff_read + rwsem_handoff_write - rwsem_handoff_rslow).
After running a 96-thread rwsem microbenchmark with equal number
of readers and writers on a 2-socket 96-thread system for 40s, the
following handoff stats were obtained:
rwsem_handoff_read=189
rwsem_handoff_rslow=1
rwsem_handoff_write=6678
rwsem_handoff_wspin=6681
The number of primary handoffs was 6866, whereas there was only one
secondary handoff for this test run.
Signed-off-by: Waiman Long <longman@redhat.com>
---
kernel/locking/lock_events_list.h | 6 ++++--
kernel/locking/rwsem.c | 9 +++++----
2 files changed, 9 insertions(+), 6 deletions(-)
diff --git a/kernel/locking/lock_events_list.h b/kernel/locking/lock_events_list.h
index 97fb6f3f840a..04d101767c2c 100644
--- a/kernel/locking/lock_events_list.h
+++ b/kernel/locking/lock_events_list.h
@@ -63,7 +63,9 @@ LOCK_EVENT(rwsem_rlock) /* # of read locks acquired */
LOCK_EVENT(rwsem_rlock_steal) /* # of read locks by lock stealing */
LOCK_EVENT(rwsem_rlock_fast) /* # of fast read locks acquired */
LOCK_EVENT(rwsem_rlock_fail) /* # of failed read lock acquisitions */
-LOCK_EVENT(rwsem_rlock_handoff) /* # of read lock handoffs */
LOCK_EVENT(rwsem_wlock) /* # of write locks acquired */
LOCK_EVENT(rwsem_wlock_fail) /* # of failed write lock acquisitions */
-LOCK_EVENT(rwsem_wlock_handoff) /* # of write lock handoffs */
+LOCK_EVENT(rwsem_handoff_read) /* # of read lock handoffs */
+LOCK_EVENT(rwsem_handoff_write) /* # of write lock handoffs */
+LOCK_EVENT(rwsem_handoff_rslow) /* # of handoffs in read slowpath */
+LOCK_EVENT(rwsem_handoff_wspin) /* # of handoff spins in write slowpath */
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 047e5fcb2457..b3135e12ca22 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -469,10 +469,8 @@ static void rwsem_mark_wake(struct rw_semaphore *sem,
* force the issue.
*/
if (time_after(jiffies, waiter->timeout)) {
- if (!(oldcount & RWSEM_FLAG_HANDOFF)) {
+ if (!(oldcount & RWSEM_FLAG_HANDOFF))
adjustment -= RWSEM_FLAG_HANDOFF;
- lockevent_inc(rwsem_rlock_handoff);
- }
WRITE_ONCE(waiter->handoff_state, HANDOFF_REQUESTED);
}
@@ -677,7 +675,6 @@ static inline bool rwsem_try_write_lock(struct rw_semaphore *sem,
*/
if (new & RWSEM_FLAG_HANDOFF) {
WRITE_ONCE(first->handoff_state, HANDOFF_REQUESTED);
- lockevent_inc(rwsem_wlock_handoff);
return false;
}
@@ -1011,10 +1008,12 @@ static void rwsem_handoff(struct rw_semaphore *sem, long adj,
wake_type = RWSEM_WAKE_ANY;
adj += RWSEM_WRITER_LOCKED;
atomic_long_set(&sem->owner, (long)waiter->task);
+ lockevent_inc(rwsem_handoff_write);
} else {
wake_type = RWSEM_WAKE_READ_OWNED;
adj += RWSEM_READER_BIAS;
__rwsem_set_reader_owned(sem, waiter->task);
+ lockevent_inc(rwsem_handoff_read);
}
atomic_long_add(adj, &sem->count);
rwsem_mark_wake(sem, wake_type, wake_q);
@@ -1123,6 +1122,7 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, long count, unsigned int stat
if (rwsem_first_waiter(sem)->type == RWSEM_WAITING_FOR_READ)
adjustment = 0;
rwsem_handoff(sem, adjustment, &wake_q);
+ lockevent_inc(rwsem_handoff_rslow);
if (!adjustment) {
raw_spin_unlock_irq(&sem->wait_lock);
@@ -1253,6 +1253,7 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
if (handoff == HANDOFF_REQUESTED) {
rwsem_spin_on_owner(sem);
handoff = READ_ONCE(waiter.handoff_state);
+ lockevent_inc(rwsem_handoff_wspin);
}
if (handoff == HANDOFF_GRANTED)
--
2.31.1