qemu-devel.nongnu.org archive mirror
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH] coroutine: use AioContext for CoQueue BH
Date: Thu, 7 Mar 2013 11:09:12 +0100	[thread overview]
Message-ID: <20130307100912.GC14726@stefanha-thinkpad.redhat.com> (raw)
In-Reply-To: <51375AD5.9000703@redhat.com>

On Wed, Mar 06, 2013 at 04:03:49PM +0100, Paolo Bonzini wrote:
> On 06/03/2013 15:53, Stefan Hajnoczi wrote:
> > CoQueue uses a BH to wake coroutines that were made ready to run again
> > using qemu_co_queue_next() or qemu_co_queue_restart_all().  The BH
> > currently runs in the iothread AioContext and would break coroutines
> > that run in a different AioContext.
> > 
> > This is a slightly tricky problem because the lifetime of the BH exceeds
> > that of the CoQueue.  This means coroutines can be awoken after CoQueue
> > itself has been freed.  Also, there is no qemu_co_queue_destroy()
> > function which we could use to handle freeing resources.
> > 
> > Introducing qemu_co_queue_destroy() has a ripple effect of requiring us
> > to also add qemu_co_mutex_destroy() and qemu_co_rwlock_destroy(), as
> > well as updating all callers.  Avoid doing that.
> > 
> > We also cannot switch from BH to GIdle function because aio_poll() does
> > not dispatch GIdle functions.  (GIdle functions make memory management
> > slightly easier because they free themselves.)
> > 
> > Finally, I don't want to move unlock_queue and unlock_bh into
> > AioContext.  That would break encapsulation - AioContext isn't supposed
> > to know about CoQueue.
> > 
> > This patch implements a different solution: each qemu_co_queue_next() or
> > qemu_co_queue_restart_all() call creates a new BH and list of coroutines
> > to wake up.  Callers tend to invoke qemu_co_queue_next() and
> > qemu_co_queue_restart_all() occasionally after blocking I/O, so creating
> > a new BH for each call shouldn't be massively inefficient.
> > 
> > Note that this patch does not add an interface for specifying the
> > AioContext.  That is left to future patches which will convert CoQueue,
> > CoMutex, and CoRwlock to expose AioContext.
> > 
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
> >  include/block/coroutine.h |  1 +
> >  qemu-coroutine-lock.c     | 59 ++++++++++++++++++++++++++++++++---------------
> >  2 files changed, 42 insertions(+), 18 deletions(-)
> > 
> > diff --git a/include/block/coroutine.h b/include/block/coroutine.h
> > index c31fae3..a978162 100644
> > --- a/include/block/coroutine.h
> > +++ b/include/block/coroutine.h
> > @@ -104,6 +104,7 @@ bool qemu_in_coroutine(void);
> >   */
> >  typedef struct CoQueue {
> >      QTAILQ_HEAD(, Coroutine) entries;
> > +    AioContext *ctx;
> >  } CoQueue;
> >  
> >  /**
> > diff --git a/qemu-coroutine-lock.c b/qemu-coroutine-lock.c
> > index 97ef01c..ae986b3 100644
> > --- a/qemu-coroutine-lock.c
> > +++ b/qemu-coroutine-lock.c
> > @@ -29,28 +29,34 @@
> >  #include "block/aio.h"
> >  #include "trace.h"
> >  
> > -static QTAILQ_HEAD(, Coroutine) unlock_bh_queue =
> > -    QTAILQ_HEAD_INITIALIZER(unlock_bh_queue);
> > -static QEMUBH* unlock_bh;
> > +/* Coroutines are awoken from a BH to allow the current coroutine to complete
> > + * its flow of execution.  The BH may run after the CoQueue has been destroyed,
> > + * so keep BH data in a separate heap-allocated struct.
> > + */
> > +typedef struct {
> > +    QEMUBH *bh;
> > +    QTAILQ_HEAD(, Coroutine) entries;
> > +} CoQueueNextData;
> >  
> >  static void qemu_co_queue_next_bh(void *opaque)
> >  {
> > +    CoQueueNextData *data = opaque;
> >      Coroutine *next;
> >  
> >      trace_qemu_co_queue_next_bh();
> > -    while ((next = QTAILQ_FIRST(&unlock_bh_queue))) {
> > -        QTAILQ_REMOVE(&unlock_bh_queue, next, co_queue_next);
> > +    while ((next = QTAILQ_FIRST(&data->entries))) {
> > +        QTAILQ_REMOVE(&data->entries, next, co_queue_next);
> >          qemu_coroutine_enter(next, NULL);
> >      }
> > +
> > +    qemu_bh_delete(data->bh);
> > +    g_slice_free(CoQueueNextData, data);
> >  }
> >  
> >  void qemu_co_queue_init(CoQueue *queue)
> >  {
> >      QTAILQ_INIT(&queue->entries);
> > -
> > -    if (!unlock_bh) {
> > -        unlock_bh = qemu_bh_new(qemu_co_queue_next_bh, NULL);
> > -    }
> > +    queue->ctx = NULL;
> 
> What about adding an accessor for qemu_aio_context and using it?  Then
> you can just use aio_bh_new in qemu_co_queue_do_restart.

Your wish is my command.  I'll add this patch to the threadpool series
where I've already introduced qemu_get_aio_context().
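
Roughly, the restart helper would then look something like this (an untested
sketch, not the code in this patch; the qemu_co_queue_do_restart() signature
and the "single" flag are assumptions, while CoQueueNextData and
qemu_co_queue_next_bh() are as in the diff above):

    static bool qemu_co_queue_do_restart(CoQueue *queue, bool single)
    {
        Coroutine *next;
        CoQueueNextData *data;

        if (QTAILQ_EMPTY(&queue->entries)) {
            return false;
        }

        /* One heap-allocated BH + wake list per call; both are freed by
         * qemu_co_queue_next_bh() after the coroutines have been entered.
         */
        data = g_slice_new(CoQueueNextData);
        data->bh = aio_bh_new(qemu_get_aio_context(),
                              qemu_co_queue_next_bh, data);
        QTAILQ_INIT(&data->entries);
        qemu_bh_schedule(data->bh);

        /* Move either one or all waiting coroutines onto the BH's list */
        while ((next = QTAILQ_FIRST(&queue->entries))) {
            QTAILQ_REMOVE(&queue->entries, next, co_queue_next);
            QTAILQ_INSERT_TAIL(&data->entries, next, co_queue_next);
            if (single) {
                break;
            }
        }
        return true;
    }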

Stefan

Thread overview: 5+ messages
2013-03-06 14:53 [Qemu-devel] [PATCH] coroutine: use AioContext for CoQueue BH Stefan Hajnoczi
2013-03-06 15:03 ` Paolo Bonzini
2013-03-07 10:09   ` Stefan Hajnoczi [this message]
2013-03-06 15:41 ` Kevin Wolf
2013-03-07 10:25   ` Stefan Hajnoczi
