From: Stefan Hajnoczi <stefanha@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-devel@nongnu.org, kwolf@redhat.com, famz@redhat.com
Subject: Re: [Qemu-devel] [PATCH 1/2] async: add aio_bh_schedule_oneshot
Date: Wed, 5 Oct 2016 14:13:18 +0100
Message-ID: <20161005131318.GF863@stefanha-x1.localdomain>
In-Reply-To: <1475511256-20051-2-git-send-email-pbonzini@redhat.com>
On Mon, Oct 03, 2016 at 06:14:15PM +0200, Paolo Bonzini wrote:
> qemu_bh_delete is already clearing bh->scheduled at the same time
> as it's setting bh->deleted. Since it's not using any memory
> barriers, there is no synchronization going on for bh->deleted,
> and this makes the bh->deleted checks superfluous in aio_compute_timeout,
> aio_bh_poll and aio_ctx_check.
Yikes. On one hand this sounds scary, but in practice qemu_bh_delete()
isn't called from another thread, so the next aio_bh_poll() will indeed
clean the BH up instead of dispatching a deleted BH.
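For reference, qemu_bh_delete() in async.c currently just flips the two
flags with nothing in between (paraphrasing from memory, the comment is
mine):

    void qemu_bh_delete(QEMUBH *bh)
    {
        /* no memory barrier here, so another thread could observe
         * these two stores in either order */
        bh->scheduled = 0;
        bh->deleted = 1;
    }

i.e. once a BH has been deleted it is also no longer scheduled, which
is why the extra bh->deleted checks don't buy us anything as long as
deletion happens from the BH's own thread.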
Due to the nature of this change, I suggest making it a separate
patch.
> Just remove them, and put the (bh->scheduled && bh->deleted) combo
> to work in a new function aio_bh_schedule_oneshot. The new function
> removes the need to save the QEMUBH pointer between the creation
> and the execution of the bottom half.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> async.c | 27 +++++++++++++++++++++++----
> include/block/aio.h | 10 ++++++++++
> 2 files changed, 33 insertions(+), 4 deletions(-)
>
> diff --git a/async.c b/async.c
> index 3bca9b0..f30d011 100644
> --- a/async.c
> +++ b/async.c
> @@ -44,6 +44,25 @@ struct QEMUBH {
> bool deleted;
> };
>
> +void aio_bh_schedule_oneshot(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
> +{
> + QEMUBH *bh;
> + bh = g_new(QEMUBH, 1);
> + *bh = (QEMUBH){
> + .ctx = ctx,
> + .cb = cb,
> + .opaque = opaque,
> + };
> + qemu_mutex_lock(&ctx->bh_lock);
> + bh->next = ctx->first_bh;
> + bh->scheduled = 1;
> + bh->deleted = 1;
> + /* Make sure that the members are ready before putting bh into list */
> + smp_wmb();
> + ctx->first_bh = bh;
> + qemu_mutex_unlock(&ctx->bh_lock);
> +}
> +
> QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
> {
> QEMUBH *bh;
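Just to spell out the win for callers (my own sketch; my_cb, example
and the includes are made up for illustration, not taken from this
series):

    #include "qemu/osdep.h"
    #include "block/aio.h"

    static void my_cb(void *opaque)
    {
        /* deferred work goes here; the BH frees itself after this
         * returns, no qemu_bh_delete() required */
    }

    static void example(AioContext *ctx, void *opaque)
    {
        /* old pattern: aio_bh_new() + qemu_bh_schedule(), and the
         * QEMUBH pointer has to be stashed somewhere so the callback
         * can qemu_bh_delete() it after its single run */

        /* new pattern: fire and forget */
        aio_bh_schedule_oneshot(ctx, my_cb, opaque);
    }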
> @@ -86,7 +105,7 @@ int aio_bh_poll(AioContext *ctx)
> * thread sees the zero before bh->cb has run, and thus will call
> * aio_notify again if necessary.
> */
> - if (!bh->deleted && atomic_xchg(&bh->scheduled, 0)) {
> + if (atomic_xchg(&bh->scheduled, 0)) {
> /* Idle BHs and the notify BH don't count as progress */
> if (!bh->idle && bh != ctx->notify_dummy_bh) {
> ret = 1;
> @@ -104,7 +123,7 @@ int aio_bh_poll(AioContext *ctx)
> bhp = &ctx->first_bh;
> while (*bhp) {
> bh = *bhp;
> - if (bh->deleted) {
> + if (bh->deleted && !bh->scheduled) {
> *bhp = bh->next;
> g_free(bh);
> } else {
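So, if I'm reading the patch correctly, the full lifecycle of a oneshot
BH is:

    /* lifecycle of an aio_bh_schedule_oneshot() BH, as I read it:
     *
     *   creation:       scheduled = 1, deleted = 1
     *   aio_bh_poll():  atomic_xchg(&bh->scheduled, 0) succeeds and
     *                   bh->cb(bh->opaque) runs exactly once
     *   cleanup loop:   bh->deleted && !bh->scheduled is now true,
     *                   so the BH is unlinked and g_free()'d
     */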
> @@ -168,7 +187,7 @@ aio_compute_timeout(AioContext *ctx)
> QEMUBH *bh;
>
> for (bh = ctx->first_bh; bh; bh = bh->next) {
> - if (!bh->deleted && bh->scheduled) {
> + if (bh->scheduled) {
> if (bh->idle) {
> /* idle bottom halves will be polled at least
> * every 10ms */
> @@ -216,7 +235,7 @@ aio_ctx_check(GSource *source)
> aio_notify_accept(ctx);
>
> for (bh = ctx->first_bh; bh; bh = bh->next) {
> - if (!bh->deleted && bh->scheduled) {
> + if (bh->scheduled) {
> return true;
> }
> }
> diff --git a/include/block/aio.h b/include/block/aio.h
> index 173c1ed..be7dd3b 100644
> --- a/include/block/aio.h
> +++ b/include/block/aio.h
> @@ -181,6 +181,16 @@ void aio_context_acquire(AioContext *ctx);
> void aio_context_release(AioContext *ctx);
>
> /**
> + * aio_bh_schedule_oneshot: Allocate a new bottom half structure that will run
> + * only once and as soon as possible.
> + *
> + * Bottom halves are lightweight callbacks whose invocation is guaranteed
> + * to be wait-free, thread-safe and signal-safe. The #QEMUBH structure
> + * is opaque and must be allocated prior to its use.
I'm confused. There is no QEMUBH structure in this function
prototype. Is this comment from an earlier version of this function?
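If the intent is to keep a doc comment here, maybe something along
these lines would be clearer (just a suggestion):

    /**
     * aio_bh_schedule_oneshot: Allocate a new bottom half structure
     * that will run only once and as soon as possible.
     *
     * The bottom half is freed automatically after it runs; the caller
     * never sees a QEMUBH pointer.
     */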
> + */
> +void aio_bh_schedule_oneshot(AioContext *ctx, QEMUBHFunc *cb, void *opaque);
> +
> +/**
> * aio_bh_new: Allocate a new bottom half structure.
> *
> * Bottom halves are lightweight callbacks whose invocation is guaranteed
> --
> 2.7.4
>
>