From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: famz@redhat.com, stefanha@redhat.com
Subject: [Qemu-devel] [PATCH 03/10] aio: make ctx->list_lock a QemuLockCnt, subsuming ctx->walking_bh
Date: Thu, 12 Jan 2017 19:07:53 +0100
Message-ID: <20170112180800.21085-4-pbonzini@redhat.com>
In-Reply-To: <20170112180800.21085-1-pbonzini@redhat.com>

This will make it possible to walk the list of bottom halves without
holding the AioContext lock---and in turn to call bottom half
handlers without holding the lock.
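
Walkers take a QemuLockCnt reference instead of bumping
ctx->walking_bh, and deleted bottom halves are reaped only by the
visitor that brings the count back to zero while holding the lock.
As a minimal sketch of the resulting pattern (loop bodies elided,
not QEMU's exact code), using the QemuLockCnt API introduced in the
previous patch:

    /* Reader/dispatcher side: no lock is taken while walking.  */
    qemu_lockcnt_inc(&ctx->list_lock);
    for (bh = atomic_rcu_read(&ctx->first_bh); bh;
         bh = atomic_rcu_read(&bh->next)) {
        /* Dispatch scheduled bottom halves; nothing is freed while
         * the count is nonzero, so bh and bh->next stay valid.
         */
    }
    if (qemu_lockcnt_dec_and_lock(&ctx->list_lock)) {
        /* The count dropped to zero and the lock is now held, so it
         * is safe to unlink and g_free() entries with bh->deleted set.
         */
        qemu_lockcnt_unlock(&ctx->list_lock);
    }

    /* Writer side: insertion still serializes on the lock.  */
    qemu_lockcnt_lock(&ctx->list_lock);
    bh->next = ctx->first_bh;
    smp_wmb();          /* publish bh's fields before linking it */
    ctx->first_bh = bh;
    qemu_lockcnt_unlock(&ctx->list_lock);
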
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
async.c | 35 ++++++++++++++++-------------------
include/block/aio.h | 12 +++++-------
2 files changed, 21 insertions(+), 26 deletions(-)
diff --git a/async.c b/async.c
index 69292fa..2305e11 100644
--- a/async.c
+++ b/async.c
@@ -53,14 +53,14 @@ void aio_bh_schedule_oneshot(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
.cb = cb,
.opaque = opaque,
};
- qemu_mutex_lock(&ctx->list_lock);
+ qemu_lockcnt_lock(&ctx->list_lock);
bh->next = ctx->first_bh;
bh->scheduled = 1;
bh->deleted = 1;
/* Make sure that the members are ready before putting bh into list */
smp_wmb();
ctx->first_bh = bh;
- qemu_mutex_unlock(&ctx->list_lock);
+ qemu_lockcnt_unlock(&ctx->list_lock);
aio_notify(ctx);
}
@@ -73,12 +73,12 @@ QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
.cb = cb,
.opaque = opaque,
};
- qemu_mutex_lock(&ctx->list_lock);
+ qemu_lockcnt_lock(&ctx->list_lock);
bh->next = ctx->first_bh;
/* Make sure that the members are ready before putting bh into list */
smp_wmb();
ctx->first_bh = bh;
- qemu_mutex_unlock(&ctx->list_lock);
+ qemu_lockcnt_unlock(&ctx->list_lock);
return bh;
}
@@ -93,13 +93,11 @@ int aio_bh_poll(AioContext *ctx)
QEMUBH *bh, **bhp, *next;
int ret;
- ctx->walking_bh++;
+ qemu_lockcnt_inc(&ctx->list_lock);
ret = 0;
- for (bh = ctx->first_bh; bh; bh = next) {
- /* Make sure that fetching bh happens before accessing its members */
- smp_read_barrier_depends();
- next = bh->next;
+ for (bh = atomic_rcu_read(&ctx->first_bh); bh; bh = next) {
+ next = atomic_rcu_read(&bh->next);
/* The atomic_xchg is paired with the one in qemu_bh_schedule. The
* implicit memory barrier ensures that the callback sees all writes
* done by the scheduling thread. It also ensures that the scheduling
@@ -116,11 +114,8 @@ int aio_bh_poll(AioContext *ctx)
}
}
- ctx->walking_bh--;
-
/* remove deleted bhs */
- if (!ctx->walking_bh) {
- qemu_mutex_lock(&ctx->list_lock);
+ if (qemu_lockcnt_dec_and_lock(&ctx->list_lock)) {
bhp = &ctx->first_bh;
while (*bhp) {
bh = *bhp;
@@ -131,7 +126,7 @@ int aio_bh_poll(AioContext *ctx)
bhp = &bh->next;
}
}
- qemu_mutex_unlock(&ctx->list_lock);
+ qemu_lockcnt_unlock(&ctx->list_lock);
}
return ret;
@@ -187,7 +182,8 @@ aio_compute_timeout(AioContext *ctx)
int timeout = -1;
QEMUBH *bh;
- for (bh = ctx->first_bh; bh; bh = bh->next) {
+ for (bh = atomic_rcu_read(&ctx->first_bh); bh;
+ bh = atomic_rcu_read(&bh->next)) {
if (bh->scheduled) {
if (bh->idle) {
/* idle bottom halves will be polled at least
@@ -270,7 +266,8 @@ aio_ctx_finalize(GSource *source)
}
#endif
- qemu_mutex_lock(&ctx->list_lock);
+ qemu_lockcnt_lock(&ctx->list_lock);
+ assert(!qemu_lockcnt_count(&ctx->list_lock));
while (ctx->first_bh) {
QEMUBH *next = ctx->first_bh->next;
@@ -280,12 +277,12 @@ aio_ctx_finalize(GSource *source)
g_free(ctx->first_bh);
ctx->first_bh = next;
}
- qemu_mutex_unlock(&ctx->list_lock);
+ qemu_lockcnt_unlock(&ctx->list_lock);
aio_set_event_notifier(ctx, &ctx->notifier, false, NULL, NULL);
event_notifier_cleanup(&ctx->notifier);
qemu_rec_mutex_destroy(&ctx->lock);
- qemu_mutex_destroy(&ctx->list_lock);
+ qemu_lockcnt_destroy(&ctx->list_lock);
timerlistgroup_deinit(&ctx->tlg);
}
@@ -372,6 +369,7 @@ AioContext *aio_context_new(Error **errp)
goto fail;
}
g_source_set_can_recurse(&ctx->source, true);
+ qemu_lockcnt_init(&ctx->list_lock);
aio_set_event_notifier(ctx, &ctx->notifier,
false,
(EventNotifierHandler *)
@@ -381,7 +379,6 @@ AioContext *aio_context_new(Error **errp)
ctx->linux_aio = NULL;
#endif
ctx->thread_pool = NULL;
- qemu_mutex_init(&ctx->list_lock);
qemu_rec_mutex_init(&ctx->lock);
timerlistgroup_init(&ctx->tlg, aio_timerlist_notify, ctx);
diff --git a/include/block/aio.h b/include/block/aio.h
index 013d400..be3adfe 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -90,17 +90,15 @@ struct AioContext {
*/
uint32_t notify_me;
- /* lock to protect between bh's adders and deleter */
- QemuMutex list_lock;
+ /* A lock that protects concurrent insertion and removal of bottom
+ * halves, and ensures that no callbacks are removed while we are
+ * walking and dispatching them.
+ */
+ QemuLockCnt list_lock;
/* Anchor of the list of Bottom Halves belonging to the context */
struct QEMUBH *first_bh;
- /* A simple lock used to protect the first_bh list, and ensure that
- * no callbacks are removed while we're walking and dispatching callbacks.
- */
- int walking_bh;
-
/* Used by aio_notify.
*
* "notified" is used to avoid expensive event_notifier_test_and_clear
--
2.9.3