From: Stefan Hajnoczi <stefanha@redhat.com>
To: Ming Lei <ming.lei@canonical.com>
Cc: "Kevin Wolf" <kwolf@redhat.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>,
	"Marcin Gibuła" <m.gibula@beyond.pl>,
	qemu-devel <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] [PATCH] linux-aio: avoid deadlock in nested aio_poll() calls
Date: Tue, 5 Aug 2014 14:43:30 +0100
Message-ID: <20140805134330.GC12251@stefanha-thinkpad.redhat.com>
In-Reply-To: <CACVXFVNgRqVW12gJ_6bBm2CO9Dp-MuQf-Ks6JjRsjFFCQ5OqPg@mail.gmail.com>

On Tue, Aug 05, 2014 at 06:44:25PM +0800, Ming Lei wrote:
> On Mon, Aug 4, 2014 at 11:56 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
> > If two Linux AIO request completions are fetched in the same
> > io_getevents() call, QEMU will deadlock if request A's callback waits
> > for request B to complete using an aio_poll() loop.  This was reported
> > to happen with the mirror blockjob.
> >
> > This patch moves completion processing into a BH and makes it resumable.
> > Nested event loops can resume completion processing so that request B
> > will complete and the deadlock will not occur.
> >
> > Cc: Kevin Wolf <kwolf@redhat.com>
> > Cc: Paolo Bonzini <pbonzini@redhat.com>
> > Cc: Ming Lei <ming.lei@canonical.com>
> > Cc: Marcin Gibuła <m.gibula@beyond.pl>
> > Reported-by: Marcin Gibuła <m.gibula@beyond.pl>
> > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> > ---
> >  block/linux-aio.c | 71 ++++++++++++++++++++++++++++++++++++++++++-------------
> >  1 file changed, 55 insertions(+), 16 deletions(-)
> >
> > diff --git a/block/linux-aio.c b/block/linux-aio.c
> > index 7ac7e8c..9aca758 100644
> > --- a/block/linux-aio.c
> > +++ b/block/linux-aio.c
> > @@ -51,6 +51,12 @@ struct qemu_laio_state {
> >
> >      /* io queue for submit at batch */
> >      LaioQueue io_q;
> > +
> > +    /* I/O completion processing */
> > +    QEMUBH *completion_bh;
> > +    struct io_event events[MAX_EVENTS];
> > +    int event_idx;
> > +    int event_max;
> >  };
> >
> >  static inline ssize_t io_event_ret(struct io_event *ev)
> > @@ -86,27 +92,58 @@ static void qemu_laio_process_completion(struct qemu_laio_state *s,
> >      qemu_aio_release(laiocb);
> >  }
> >
> > -static void qemu_laio_completion_cb(EventNotifier *e)
> > +/* The completion BH fetches completed I/O requests and invokes their
> > + * callbacks.
> > + *
> > + * The function is somewhat tricky because it supports nested event loops, for
> > + * example when a request callback invokes aio_poll().  In order to do this,
> 
> Looks like a very tricky usage; maybe it would be better to change the caller.

This comment is not about usage.  It's just for people reading the
implementation.  I can move it inside the function body, if you like.
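
To make the trickiness concrete: the deadlock happens when a request
callback runs a nested event loop to wait for another request, roughly
like this (simplified sketch with a made-up struct, not the actual
mirror code):

    static void request_a_cb(void *opaque, int ret)
    {
        struct wait_state *w = opaque;   /* made-up helper struct */

        /* Nested event loop: spin until request B has completed */
        while (!w->request_b_done) {
            aio_poll(w->ctx, true);
        }
    }

With the old code, request B's completion may already have been fetched
into the local events[] array of the outer qemu_laio_completion_cb()
call, so the nested aio_poll() has nothing left to dispatch and the
loop never terminates.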

I like the idea of eliminating nested event loops, but it requires a
huge change: making all callers either async (using callbacks) or
coroutines so they can yield.
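
For example, a coroutine-based caller could yield instead of nesting
aio_poll().  A rough sketch (made-up WaitState type, untested, assumes
everything runs in one AioContext):

    typedef struct {
        Coroutine *co;
        bool done;
    } WaitState;

    static void coroutine_fn wait_for_request(WaitState *w)
    {
        w->co = qemu_coroutine_self();
        while (!w->done) {
            qemu_coroutine_yield();    /* re-entered by the completion cb */
        }
    }

    static void request_done_cb(void *opaque, int ret)
    {
        WaitState *w = opaque;

        w->done = true;
        if (w->co) {
            qemu_coroutine_enter(w->co, NULL);
        }
    }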

There are many callers so this is a lot of work and will have
side-effects too.

BTW, here is the thread-pool.c fix which is analogous to this patch:
https://lists.nongnu.org/archive/html/qemu-devel/2014-07/msg02437.html

> > + * the completion events array and index are kept in qemu_laio_state.  The BH
> > + * reschedules itself as long as there are completions pending so it will
> > + * either be called again in a nested event loop or will be called after all
> > + * events have been completed.  When there are no events left to complete, the
> > + * BH returns without rescheduling.
> > + */
> > +static void qemu_laio_completion_bh(void *opaque)
> >  {
> > -    struct qemu_laio_state *s = container_of(e, struct qemu_laio_state, e);
> > -
> > -    while (event_notifier_test_and_clear(&s->e)) {
> > -        struct io_event events[MAX_EVENTS];
> > -        struct timespec ts = { 0 };
> > -        int nevents, i;
> > +    struct qemu_laio_state *s = opaque;
> >
> > +    /* Fetch more completion events when empty */
> > +    if (s->event_idx == s->event_max) {
> >          do {
> > -            nevents = io_getevents(s->ctx, MAX_EVENTS, MAX_EVENTS, events, &ts);
> > -        } while (nevents == -EINTR);
> > +            struct timespec ts = { 0 };
> > +            s->event_max = io_getevents(s->ctx, MAX_EVENTS, MAX_EVENTS,
> > +                                        s->events, &ts);
> > +        } while (s->event_max == -EINTR);
> > +
> > +        s->event_idx = 0;
> > +        if (s->event_max <= 0) {
> > +            s->event_max = 0;
> > +            return; /* no more events */
> > +        }
> > +    }
> >
> > -        for (i = 0; i < nevents; i++) {
> > -            struct iocb *iocb = events[i].obj;
> > -            struct qemu_laiocb *laiocb =
> > -                    container_of(iocb, struct qemu_laiocb, iocb);
> > +    /* Reschedule so nested event loops see currently pending completions */
> > +    qemu_bh_schedule(s->completion_bh);
> >
> > -            laiocb->ret = io_event_ret(&events[i]);
> > -            qemu_laio_process_completion(s, laiocb);
> > -        }
> > +    /* Process completion events */
> > +    while (s->event_idx < s->event_max) {
> > +        struct iocb *iocb = s->events[s->event_idx].obj;
> > +        struct qemu_laiocb *laiocb =
> > +                container_of(iocb, struct qemu_laiocb, iocb);
> > +
> > +        laiocb->ret = io_event_ret(&s->events[s->event_idx]);
> > +        s->event_idx++;
> > +
> > +        qemu_laio_process_completion(s, laiocb);
> 
> The implementation is just as tricky as the usage, :-)
> 
> Also, using a FIFO-style implementation should be more efficient,
> since I/O events can still be read and completed in the current BH
> handler as long as the queue isn't full, but it becomes more complicated.

That might help but should be benchmarked.

Another trick is calling qemu_laio_completion_bh() directly from
qemu_laio_completion_cb() to avoid a BH iteration.
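
Something along these lines (untested sketch on top of this patch):

    static void qemu_laio_completion_cb(EventNotifier *e)
    {
        struct qemu_laio_state *s = container_of(e, struct qemu_laio_state, e);

        if (event_notifier_test_and_clear(&s->e)) {
            qemu_laio_completion_bh(s);   /* no extra BH iteration needed */
        }
    }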

I think these are premature optimizations.  Let's first agree whether this
fix is correct or not :).

Stefan
