From: Stefan Hajnoczi <stefanha@gmail.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>,
	Anthony Liguori <aliguori@us.ibm.com>,
	Ping Fan Liu <pingfank@linux.vnet.ibm.com>,
	qemu-devel@nongnu.org, Stefan Hajnoczi <stefanha@redhat.com>
Subject: Re: [Qemu-devel] [PATCH v3 01/17] block: stop relying on io_flush() in bdrv_drain_all()
Date: Fri, 14 Jun 2013 12:58:24 +0200
Message-ID: <20130614105824.GC26780@stefanha-thinkpad.redhat.com>
In-Reply-To: <51B9D856.20901@redhat.com>

On Thu, Jun 13, 2013 at 10:33:58AM -0400, Paolo Bonzini wrote:
> > On 10/06/2013 10:38, Stefan Hajnoczi wrote:
> > On Mon, Jun 10, 2013 at 02:25:57PM +0200, Stefan Hajnoczi wrote:
> >> @@ -1427,26 +1456,18 @@ void bdrv_close_all(void)
> >>  void bdrv_drain_all(void)
> >>  {
> >>      BlockDriverState *bs;
> >> -    bool busy;
> >> -
> >> -    do {
> >> -        busy = qemu_aio_wait();
> >>  
> >> +    while (bdrv_requests_pending_all()) {
> >>          /* FIXME: We do not have timer support here, so this is effectively
> >>           * a busy wait.
> >>           */
> >>          QTAILQ_FOREACH(bs, &bdrv_states, list) {
> >>              if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
> >>                  qemu_co_queue_restart_all(&bs->throttled_reqs);
> >> -                busy = true;
> >>              }
> >>          }
> >> -    } while (busy);
> >>  
> >> -    /* If requests are still pending there is a bug somewhere */
> >> -    QTAILQ_FOREACH(bs, &bdrv_states, list) {
> >> -        assert(QLIST_EMPTY(&bs->tracked_requests));
> >> -        assert(qemu_co_queue_empty(&bs->throttled_reqs));
> >> +        qemu_aio_wait();
> >>      }
> >>  }
> > 
> > tests/ide-test found an issue here: block.c invokes callbacks from a BH
> > so we may not yet have completed the request when this loop terminates.
> > 
> > Kevin: can you fold in this patch?
> > 
> > diff --git a/block.c b/block.c
> > index 31f7231..e176215 100644
> > --- a/block.c
> > +++ b/block.c
> > @@ -1469,6 +1469,9 @@ void bdrv_drain_all(void)
> >  
> >          qemu_aio_wait();
> >      }
> > +
> > +    /* Process pending completion BHs */
> > +    aio_poll(qemu_get_aio_context(), false);
> >  }
> 
> aio_poll() could require multiple iterations; this is why the old code
> started with "busy = qemu_aio_wait()".  You can add a while loop around
> the new call, or do something more similar to the old code, along the
> lines of "do { QTAILQ_FOREACH(...) ...; busy = bdrv_requests_pending_all();
> busy |= aio_poll(qemu_get_aio_context(), busy); } while (busy);".

Good idea; the trick is to use "busy" as the blocking flag.
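
Concretely, I'm thinking of something along these lines (untested sketch,
using the bdrv_requests_pending_all() helper this patch adds):

void bdrv_drain_all(void)
{
    BlockDriverState *bs;
    bool busy;

    do {
        busy = bdrv_requests_pending_all();

        /* FIXME: We do not have timer support here, so this is effectively
         * a busy wait.
         */
        QTAILQ_FOREACH(bs, &bdrv_states, list) {
            if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
                qemu_co_queue_restart_all(&bs->throttled_reqs);
            }
        }

        /* Only block in aio_poll() while requests are still pending; the
         * final non-blocking iteration processes any completion BHs.
         */
        busy |= aio_poll(qemu_get_aio_context(), busy);
    } while (busy);
}

The last pass runs aio_poll() non-blocking even though no requests are
pending, so completion BHs get processed before we return, which should
take care of the ide-test failure.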

I will resend with the fix to avoid making this email thread too messy.

Stefan
