From: Paolo Bonzini <pbonzini@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>,
Anthony Liguori <aliguori@us.ibm.com>,
Ping Fan Liu <pingfank@linux.vnet.ibm.com>,
qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH v4 01/17] block: stop relying on io_flush() in bdrv_drain_all()
Date: Thu, 27 Jun 2013 15:13:19 +0200
Message-ID: <51CC3A6F.4090505@redhat.com>
In-Reply-To: <1371210243-6099-2-git-send-email-stefanha@redhat.com>
On 14/06/2013 13:43, Stefan Hajnoczi wrote:
> If a block driver has no file descriptors to monitor but there are still
> active requests, it can return 1 from .io_flush(). This is used to spin
> during synchronous I/O.
>
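(For context, since the rest of the series deletes these callbacks: an
.io_flush() callback was roughly the shape sketched below. This is a
minimal sketch with made-up names, not copied from any real driver.)

    /* Hypothetical per-driver state, just enough for the sketch. */
    typedef struct ExampleAIOState {
        int in_flight;      /* requests submitted but not yet completed */
    } ExampleAIOState;

    /* Old-style .io_flush() callback: return non-zero while requests are
     * still in flight, so the synchronous wait loop keeps spinning even
     * when the file descriptor has nothing to read or write right now. */
    static int example_aio_flush(void *opaque)
    {
        ExampleAIOState *s = opaque;

        return s->in_flight > 0;
    }
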
> Stop relying on .io_flush() and instead check
> QLIST_EMPTY(&bs->tracked_requests) to decide whether there are active
> requests.
>
> This is the first step in removing .io_flush() so that event loops no
> longer need to have the concept of synchronous I/O. Eventually we may
> be able to kill synchronous I/O completely by running everything in a
> coroutine, but that is future work.
>
> Note this patch moves bs->throttled_reqs initialization to bdrv_new() so
> that bdrv_requests_pending(bs) can safely access it. In practice bs is
> g_malloc0() so the memory is already zeroed but it's safer to initialize
> the queue properly.
>
> In bdrv_delete() make sure to call bdrv_make_anon() *after* bdrv_close()
> so that the device is still seen by bdrv_drain_all() when iterating
> bdrv_states.
I wonder if this last change should be separated out and CCed to
qemu-stable. It seems like a bug if you close a device that has pending
throttled operations.
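To spell out my reading of the ordering problem (a sketch only, relying
on the commit message's point that bdrv_drain_all() must still see the
device while it is being closed):

    /*
     * Old ordering in bdrv_delete(), annotated:
     *
     *   bdrv_make_anon(bs);   removes bs from the global bdrv_states list
     *   bdrv_close(bs);       drains, but bdrv_drain_all() iterates
     *                         bdrv_states and no longer sees bs, so its
     *                         throttled_reqs are never restarted
     *
     * Swapping the two calls, as this patch does, keeps the device
     * visible while it is drained.
     */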
Paolo
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
> block.c | 50 +++++++++++++++++++++++++++++++++++++-------------
> 1 file changed, 37 insertions(+), 13 deletions(-)
>
> diff --git a/block.c b/block.c
> index 79ad33d..04821d8 100644
> --- a/block.c
> +++ b/block.c
> @@ -148,7 +148,6 @@ static void bdrv_block_timer(void *opaque)
>
> void bdrv_io_limits_enable(BlockDriverState *bs)
> {
> - qemu_co_queue_init(&bs->throttled_reqs);
> bs->block_timer = qemu_new_timer_ns(vm_clock, bdrv_block_timer, bs);
> bs->io_limits_enabled = true;
> }
> @@ -305,6 +304,7 @@ BlockDriverState *bdrv_new(const char *device_name)
> }
> bdrv_iostatus_disable(bs);
> notifier_list_init(&bs->close_notifiers);
> + qemu_co_queue_init(&bs->throttled_reqs);
>
> return bs;
> }
> @@ -1412,6 +1412,35 @@ void bdrv_close_all(void)
> }
> }
>
> +/* Check if any requests are in-flight (including throttled requests) */
> +static bool bdrv_requests_pending(BlockDriverState *bs)
> +{
> + if (!QLIST_EMPTY(&bs->tracked_requests)) {
> + return true;
> + }
> + if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
> + return true;
> + }
> + if (bs->file && bdrv_requests_pending(bs->file)) {
> + return true;
> + }
> + if (bs->backing_hd && bdrv_requests_pending(bs->backing_hd)) {
> + return true;
> + }
> + return false;
> +}
> +
> +static bool bdrv_requests_pending_all(void)
> +{
> + BlockDriverState *bs;
> + QTAILQ_FOREACH(bs, &bdrv_states, list) {
> + if (bdrv_requests_pending(bs)) {
> + return true;
> + }
> + }
> + return false;
> +}
> +
> /*
> * Wait for pending requests to complete across all BlockDriverStates
> *
> @@ -1426,27 +1455,22 @@ void bdrv_close_all(void)
> */
> void bdrv_drain_all(void)
> {
> + /* Always run first iteration so any pending completion BHs run */
> + bool busy = true;
> BlockDriverState *bs;
> - bool busy;
> -
> - do {
> - busy = qemu_aio_wait();
>
> + while (busy) {
> /* FIXME: We do not have timer support here, so this is effectively
> * a busy wait.
> */
> QTAILQ_FOREACH(bs, &bdrv_states, list) {
> if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
> qemu_co_queue_restart_all(&bs->throttled_reqs);
> - busy = true;
> }
> }
> - } while (busy);
>
> - /* If requests are still pending there is a bug somewhere */
> - QTAILQ_FOREACH(bs, &bdrv_states, list) {
> - assert(QLIST_EMPTY(&bs->tracked_requests));
> - assert(qemu_co_queue_empty(&bs->throttled_reqs));
> + busy = bdrv_requests_pending_all();
> + busy |= aio_poll(qemu_get_aio_context(), busy);
> }
> }
>
> @@ -1591,11 +1615,11 @@ void bdrv_delete(BlockDriverState *bs)
> assert(!bs->job);
> assert(!bs->in_use);
>
> + bdrv_close(bs);
> +
> /* remove from list, if necessary */
> bdrv_make_anon(bs);
>
> - bdrv_close(bs);
> -
> g_free(bs);
> }
>
>