qemu-devel.nongnu.org archive mirror
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>,
	qemu-devel@nongnu.org, Wenchao Xia <xiawenc@linux.vnet.ibm.com>
Subject: Re: [Qemu-devel] [RFC] aio: add aio_context_acquire() and aio_context_release()
Date: Fri, 30 Aug 2013 11:22:03 +0200
Message-ID: <20130830092203.GA15010@stefanha-thinkpad.redhat.com>
In-Reply-To: <521F05B7.9000203@redhat.com>

On Thu, Aug 29, 2013 at 10:26:31AM +0200, Paolo Bonzini wrote:
> On 27/08/2013 16:39, Stefan Hajnoczi wrote:
> > +void aio_context_acquire(AioContext *ctx)
> > +{
> > +    qemu_mutex_lock(&ctx->acquire_lock);
> > +    while (ctx->owner) {
> > +        assert(!qemu_thread_is_self(ctx->owner));
> > +        aio_notify(ctx); /* kick current owner */
> > +        qemu_cond_wait(&ctx->acquire_cond, &ctx->acquire_lock);
> > +    }
> > +    qemu_thread_get_self(&ctx->owner_thread);
> > +    ctx->owner = &ctx->owner_thread;
> > +    qemu_mutex_unlock(&ctx->acquire_lock);
> > +}
> > +
> > +void aio_context_release(AioContext *ctx)
> > +{
> > +    qemu_mutex_lock(&ctx->acquire_lock);
> > +    assert(ctx->owner && qemu_thread_is_self(ctx->owner));
> > +    ctx->owner = NULL;
> > +    qemu_cond_signal(&ctx->acquire_cond);
> > +    qemu_mutex_unlock(&ctx->acquire_lock);
> > +}
> 
> Thinking more about it, there is a risk of busy waiting here if one
> thread releases the AioContext and tries to acquire it again (as in the
> common case of one thread doing acquire/poll/release in a loop).  It
> would only work if mutexes guarantee some level of fairness.

You are right.  I wrote a test that showed there is no fairness.  For
some reason I thought the condvar would provide fairness.
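
For reference, the scenario is easy to reproduce with plain pthreads.
Something like the following (a simplified sketch of the kind of test,
not the exact program) has one thread doing release/re-acquire in a
tight loop while a second thread waits on the same mutex:

  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdio.h>

  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
  static atomic_int contender_done;

  /* Contender: wants the mutex once, like a second thread trying to
   * acquire the AioContext. */
  static void *contender(void *arg)
  {
      (void)arg;
      pthread_mutex_lock(&lock);
      atomic_store(&contender_done, 1);
      pthread_mutex_unlock(&lock);
      return NULL;
  }

  int main(void)
  {
      pthread_t tid;
      unsigned long cycles = 0;

      pthread_mutex_lock(&lock);
      pthread_create(&tid, NULL, contender, NULL);

      /* Owner: release and immediately re-acquire, like a thread doing
       * acquire/poll/release in a loop.  With no fairness guarantee it
       * usually re-grabs the mutex before the contender gets a chance. */
      while (!atomic_load(&contender_done)) {
          pthread_mutex_unlock(&lock);
          pthread_mutex_lock(&lock);
          cycles++;
      }
      pthread_mutex_unlock(&lock);
      pthread_join(tid, NULL);

      printf("contender waited through %lu release/re-acquire cycles\n",
             cycles);
      return 0;
  }

With a non-fair mutex the releasing thread tends to win the race again
and again, so the cycle count before the contender gets in can be
large.  The condvar-based acquire above behaves the same way.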

> If you implement recursive acquisition, however, you can make aio_poll
> acquire the context up until just before it invokes ppoll, and then
> again after it comes back from the ppoll.  The two acquire/release pairs
> will be no-ops if called during "synchronous" I/O such as
> 
>   /* Another thread */
>   aio_context_acquire(ctx);
>   bdrv_read(bs, 0x1000, buf, 1);
>   aio_context_release(ctx);
> 
> Yet they will do the right thing when called from the event loop thread.
> 
> (where the bdrv_read can actually be something more complicated such as
> a live snapshot or, in general, anything involving bdrv_drain_all).

This doesn't guarantee fairness either, right?  If ppoll(2) returns
immediately then the thread might still be scheduled and have enough of
its time slice left to acquire the AioContext again.

With your approach another thread can squeeze in when ppoll(2) is
returning, so newer fd activity can be processed *before* older
activity.  I'm not sure out-of-order callbacks are a problem, but they
can happen since we don't have fairness.

But at least this way other threads can acquire the AioContext while
ppoll(2) is blocked without racing against each other for the acquire
lock.
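
For what it's worth, I imagine the recursive acquisition you describe
looking roughly like this (only a sketch, assuming a new "depth"
counter in AioContext; untested):

  void aio_context_acquire(AioContext *ctx)
  {
      qemu_mutex_lock(&ctx->acquire_lock);
      if (ctx->owner && qemu_thread_is_self(ctx->owner)) {
          ctx->depth++;               /* we already own it, just nest */
          qemu_mutex_unlock(&ctx->acquire_lock);
          return;
      }
      while (ctx->owner) {
          aio_notify(ctx);            /* kick current owner */
          qemu_cond_wait(&ctx->acquire_cond, &ctx->acquire_lock);
      }
      qemu_thread_get_self(&ctx->owner_thread);
      ctx->owner = &ctx->owner_thread;
      ctx->depth = 1;
      qemu_mutex_unlock(&ctx->acquire_lock);
  }

  void aio_context_release(AioContext *ctx)
  {
      qemu_mutex_lock(&ctx->acquire_lock);
      assert(ctx->owner && qemu_thread_is_self(ctx->owner));
      if (--ctx->depth == 0) {
          ctx->owner = NULL;
          qemu_cond_signal(&ctx->acquire_cond);
      }
      qemu_mutex_unlock(&ctx->acquire_lock);
  }

aio_poll() would then acquire the context on entry, release it just
before the blocking ppoll(2), re-acquire after it returns, and release
again on exit.  In the event loop thread the release before ppoll(2)
really drops ownership, while in the synchronous bdrv_read case it only
decrements the counter and the context stays held, which I guess is the
"no-op" behaviour you mean.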

Stefan

