From: Stefan Hajnoczi <stefanha@gmail.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>, qemu-devel <qemu-devel@nongnu.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Wenchao Xia <xiawenc@linux.vnet.ibm.com>
Subject: Re: [Qemu-devel] [RFC] aio: add aio_context_acquire() and aio_context_release()
Date: Fri, 30 Aug 2013 16:25:50 +0200
Message-ID: <CAJSP0QVCa3-9cExo1hTsZdgPTo=O_67O04Z1aYo7Yu24FC-cnQ@mail.gmail.com>
In-Reply-To: <52209D2A.8030201@redhat.com>

On Fri, Aug 30, 2013 at 3:24 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 30/08/2013 11:22, Stefan Hajnoczi wrote:
>> On Thu, Aug 29, 2013 at 10:26:31AM +0200, Paolo Bonzini wrote:
>>> On 27/08/2013 16:39, Stefan Hajnoczi wrote:
>>>> +void aio_context_acquire(AioContext *ctx)
>>>> +{
>>>> +    qemu_mutex_lock(&ctx->acquire_lock);
>>>> +    while (ctx->owner) {
>>>> +        assert(!qemu_thread_is_self(ctx->owner));
>>>> +        aio_notify(ctx); /* kick current owner */
>>>> +        qemu_cond_wait(&ctx->acquire_cond, &ctx->acquire_lock);
>>>> +    }
>>>> +    qemu_thread_get_self(&ctx->owner_thread);
>>>> +    ctx->owner = &ctx->owner_thread;
>>>> +    qemu_mutex_unlock(&ctx->acquire_lock);
>>>> +}
>>>> +
>>>> +void aio_context_release(AioContext *ctx)
>>>> +{
>>>> +    qemu_mutex_lock(&ctx->acquire_lock);
>>>> +    assert(ctx->owner && qemu_thread_is_self(ctx->owner));
>>>> +    ctx->owner = NULL;
>>>> +    qemu_cond_signal(&ctx->acquire_cond);
>>>> +    qemu_mutex_unlock(&ctx->acquire_lock);
>>>> +}
>>>
>>> Thinking more about it, there is a risk of busy waiting here if one
>>> thread releases the AioContext and tries to acquire it again (as in the
>>> common case of one thread doing acquire/poll/release in a loop).  It
>>> would only work if mutexes guarantee some level of fairness.
>>
>> You are right.  I wrote a test that showed there is no fairness.  For
>> some reason I thought the condvar would provide fairness.
>>
>>> If you implement recursive acquisition, however, you can make aio_poll
>>> hold the context up until just before it invokes ppoll, and acquire it
>>> again after it comes back from the ppoll.  The two acquire/release
>>> pairs will be no-ops if called during "synchronous" I/O such as
>>>
>>>   /* Another thread */
>>>   aio_context_acquire(ctx);
>>>   bdrv_read(bs, 0x1000, buf, 1);
>>>   aio_context_release(ctx);
>>>
>>> Yet they will do the right thing when called from the event loop thread.
>>>
>>> (where the bdrv_read can actually be something more complicated such as
>>> a live snapshot or, in general, anything involving bdrv_drain_all).
>>
>> This doesn't guarantee fairness either, right?
>
> Yes, but the non-zero timeout of ppoll would in practice guarantee it.
> The problem happens only when the release and acquire are very close in
> time, which shouldn't happen if ppoll runs with the context released.
>
>> With your approach another thread can squeeze in while ppoll(2) is
>> returning, so newer fd activity can be processed *before* older
>> activity.  I'm not sure out-of-order callbacks are a problem, but they
>> can happen since we don't have fairness.
>
> I think this should not happen.  The other thread would rerun ppoll(2).
> Since poll/ppoll are level-triggered, you could have some flags
> processed twice.  But this is not a problem; we had the same bug with
> the iothread and qemu_aio_wait, and we should have fixed all
> occurrences.

I forgot they are level-triggered.  Releasing around the blocking
operation (ppoll) is similar to how the iothread and vcpu threads work,
so it seems like a good idea to follow that pattern here too.
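
Concretely, the shape would be something like this (a sketch only, not
the final code; pollfds/npollfd/timeout_ts are illustrative names):

  /* Event loop thread: acquire/poll/release in a loop */
  while (running) {
      aio_context_acquire(ctx);
      aio_poll(ctx, true);
      aio_context_release(ctx);
  }

and inside aio_poll(), around the blocking ppoll():

  /* Drop the context so other threads can acquire it while we sleep,
   * then take it back before dispatching handlers.  With recursive
   * acquisition (e.g. a per-context depth counter) these calls are
   * no-ops for "synchronous" callers that already hold the context.
   */
  aio_context_release(ctx);
  ret = ppoll(pollfds, npollfd, timeout_ts, NULL);
  aio_context_acquire(ctx);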

I'll implement this in the next revision.
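
For the record, a minimal standalone demonstration of the (un)fairness
looks something like this (a sketch; build with gcc -pthread, actual
numbers depend on the scheduler):

  /* One thread unlocks and immediately relocks a mutex in a tight
   * loop, mimicking back-to-back acquire/poll/release.  If mutexes
   * were fair, main() would get the lock after at most one cycle;
   * in practice it waits through many.
   */
  #include <pthread.h>
  #include <stdbool.h>
  #include <stdio.h>

  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
  static volatile bool stop;
  static volatile long cycles;

  static void *hog(void *arg)
  {
      while (!stop) {
          pthread_mutex_lock(&lock);
          cycles++;                    /* one "event loop iteration" */
          pthread_mutex_unlock(&lock);
      }
      return NULL;
  }

  int main(void)
  {
      pthread_t tid;
      pthread_create(&tid, NULL, hog, NULL);
      for (int i = 0; i < 5; i++) {
          long before = cycles;        /* racy read, fine for a demo */
          pthread_mutex_lock(&lock);
          printf("waited through %ld hog cycles\n", cycles - before);
          pthread_mutex_unlock(&lock);
      }
      stop = true;
      pthread_join(tid, NULL);
      return 0;
  }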

Stefan
