From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: stefanha@redhat.com, famz@redhat.com, kwolf@redhat.com, berto@igalia.com
Subject: [Qemu-devel] [PATCH 07/11] coroutine-lock: add limited spinning to CoMutex
Date: Fri, 15 Apr 2016 13:32:02 +0200
Message-ID: <1460719926-12950-8-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1460719926-12950-1-git-send-email-pbonzini@redhat.com>
Running a very small critical section on pthread_mutex_t and CoMutex
shows that pthread_mutex_t is much faster because it doesn't actually
go to sleep. What happens is that the critical section is shorter
than the latency of entering the kernel and thus FUTEX_WAIT always
fails. With CoMutex there is no such latency but you still want to
avoid wait and wakeup. So introduce it artificially.

This only works with one waiter; because CoMutex is fair, it will
always have more waits and wakeups than a pthread_mutex_t.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
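As a plain-C11 illustration of the fast path added below (hypothetical
toy_* names, not the QEMU code in this patch; a real slow path parks the
waiter instead of spinning):

#include <stdatomic.h>

/* Toy lock word: 0 = free, 1 = held, 2 = held and possibly contended. */
typedef struct {
    atomic_uint locked;
} ToyMutex;

/* Placeholder slow path: a real implementation queues the waiter and goes
 * to sleep; this toy version just keeps marking the lock as contended. */
static void toy_mutex_lock_slow(ToyMutex *m)
{
    while (atomic_exchange(&m->locked, 2) != 0) {
        /* park/yield here in real code */
    }
}

static void toy_mutex_lock(ToyMutex *m)
{
    unsigned expected;
    int i;

    /* Bounded spinning: when the critical section is shorter than the cost
     * of queueing and waking a waiter, retrying a few times here wins. */
    for (i = 0; i < 1000; i++) {
        expected = 0;
        if (atomic_compare_exchange_weak(&m->locked, &expected, 1)) {
            return;                 /* took the lock without blocking */
        }
        if (expected > 1) {
            break;                  /* someone is queued already: stay fair */
        }
        /* a cpu_relax()-style pause hint would go here */
    }
    toy_mutex_lock_slow(m);
}

static void toy_mutex_unlock(ToyMutex *m)
{
    atomic_store(&m->locked, 0);    /* real code also wakes one queued waiter */
}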
include/qemu/coroutine.h | 5 +++++
util/qemu-coroutine-lock.c | 34 ++++++++++++++++++++++++++++++++--
2 files changed, 37 insertions(+), 2 deletions(-)
diff --git a/include/qemu/coroutine.h b/include/qemu/coroutine.h
index 018a60d..d15a09a 100644
--- a/include/qemu/coroutine.h
+++ b/include/qemu/coroutine.h
@@ -163,6 +163,11 @@ typedef struct CoMutex {
*/
unsigned locked;
+ /* Context that is holding the lock. Useful to avoid spinning
+ * when two coroutines on the same AioContext try to get the lock.
+ */
+ AioContext *ctx;
+
/* A queue of waiters. Elements are added atomically in front of
* from_push. to_pop is only populated, and popped from, by whoever
* is in charge of the next wakeup. This can be an unlocker or,
diff --git a/util/qemu-coroutine-lock.c b/util/qemu-coroutine-lock.c
index 7ed0f37..aa59e82 100644
--- a/util/qemu-coroutine-lock.c
+++ b/util/qemu-coroutine-lock.c
@@ -177,18 +177,44 @@ void qemu_co_mutex_init(CoMutex *mutex)
void coroutine_fn qemu_co_mutex_lock(CoMutex *mutex)
{
+ AioContext *ctx = qemu_get_current_aio_context();
Coroutine *self = qemu_coroutine_self();
CoWaitRecord w;
unsigned old_handoff;
+ int waiters, i;
+
+ /* Running a very small critical section on pthread_mutex_t and CoMutex
+ * shows that pthread_mutex_t is much faster because it doesn't actually
+ * go to sleep. What happens is that the critical section is shorter
+ * than the latency of entering the kernel and thus FUTEX_WAIT always
+ * fails. With CoMutex there is no such latency but you still want to
+ * avoid wait and wakeup. So introduce it artificially.
+ */
+ i = 0;
+retry_fast_path:
+ waiters = atomic_cmpxchg(&mutex->locked, 0, 1);
+ if (waiters != 0) {
+ while (waiters == 1 && ++i < 1000) {
+ if (atomic_read(&mutex->ctx) == ctx) {
+ break;
+ }
+ if (atomic_read(&mutex->locked) == 0) {
+ goto retry_fast_path;
+ }
+ /* cpu_relax(); */
+ }
+ waiters = atomic_fetch_inc(&mutex->locked);
+ }
- if (atomic_fetch_inc(&mutex->locked) == 0) {
+ if (waiters == 0) {
/* Uncontended. */
trace_qemu_co_mutex_lock_uncontended(mutex, self);
+ mutex->ctx = ctx;
return;
}
trace_qemu_co_mutex_lock_entry(mutex, self);
- self->ctx = qemu_get_current_aio_context();
+ self->ctx = ctx;
w.co = self;
push_waiter(mutex, &w);
@@ -207,9 +233,11 @@ void coroutine_fn qemu_co_mutex_lock(CoMutex *mutex)
if (co == self) {
/* We got the lock ourselves! */
assert(to_wake == &w);
+ mutex->ctx = ctx;
return;
}
+ mutex->ctx = co->ctx;
qemu_coroutine_wake(co->ctx, co);
}
@@ -223,6 +251,7 @@ void coroutine_fn qemu_co_mutex_unlock(CoMutex *mutex)
trace_qemu_co_mutex_unlock_entry(mutex, self);
+ mutex->ctx = NULL;
assert(mutex->locked);
assert(qemu_in_coroutine());
@@ -237,6 +266,7 @@ void coroutine_fn qemu_co_mutex_unlock(CoMutex *mutex)
if (to_wake) {
Coroutine *co = to_wake->co;
+ mutex->ctx = co->ctx;
qemu_coroutine_wake(co->ctx, co);
goto out;
}
--
2.5.5
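For context, the workload this targets is a coroutine taking a CoMutex
around a critical section far shorter than a kernel round trip. A minimal,
hypothetical caller using the existing qemu_co_mutex_* API (counter_lock
assumed to be initialized elsewhere with qemu_co_mutex_init()):

#include "qemu/osdep.h"
#include "qemu/coroutine.h"

static CoMutex counter_lock;
static uint64_t counter;

static void coroutine_fn bump_counter(void *opaque)
{
    qemu_co_mutex_lock(&counter_lock);
    counter++;                      /* much shorter than a FUTEX_WAIT round trip */
    qemu_co_mutex_unlock(&counter_lock);
}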