From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Kevin Wolf <kwolf@redhat.com>,
Peter Maydell <peter.maydell@linaro.org>,
Alexander Yarygin <yarygin@linux.vnet.ibm.com>,
Christian Borntraeger <borntraeger@de.ibm.com>,
Stefan Hajnoczi <stefanha@redhat.com>,
Cornelia Huck <cornelia.huck@de.ibm.com>,
Paolo Bonzini <pbonzini@redhat.com>
Subject: [Qemu-devel] [PULL 01/15] block: Let bdrv_drain_all() call aio_poll() for each AioContext
Date: Wed, 24 Jun 2015 16:27:52 +0100
Message-ID: <1435159686-14817-2-git-send-email-stefanha@redhat.com>
In-Reply-To: <1435159686-14817-1-git-send-email-stefanha@redhat.com>
From: Alexander Yarygin <yarygin@linux.vnet.ibm.com>
After commit 9b536adc ("block: acquire AioContext in
bdrv_drain_all()"), aio_poll() is called for every BlockDriverState, on
the assumption that every device may have its own AioContext. With
thousands of disks attached there are a lot of BlockDriverStates but
only a few AioContexts, leading to a large number of unnecessary
aio_poll() calls.

This patch changes bdrv_drain_all() so that it finds the shared
AioContexts and calls aio_poll() only once per unique AioContext.
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Cornelia Huck <cornelia.huck@de.ibm.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Alexander Yarygin <yarygin@linux.vnet.ibm.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-id: 1433936297-7098-4-git-send-email-yarygin@linux.vnet.ibm.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
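As a rough standalone illustration (not part of this patch), the GLib
snippet below deduplicates a list of context pointers with
g_slist_find()/g_slist_prepend(), the same pattern bdrv_drain_all() now
uses so that each AioContext is polled only once per pass. The Ctx type
and the per_disk array are hypothetical stand-ins for the AioContext
pointers that bdrv_get_aio_context() returns for each BlockDriverState.

/* Build: gcc dedup.c $(pkg-config --cflags --libs glib-2.0) */
#include <glib.h>
#include <stdio.h>

/* Hypothetical stand-in for AioContext. */
typedef struct {
    const char *name;
} Ctx;

int main(void)
{
    Ctx a = { "ctx-a" };
    Ctx b = { "ctx-b" };
    /* Five "disks", but only two distinct contexts between them. */
    Ctx *per_disk[] = { &a, &a, &b, &a, &b };
    GSList *uniq = NULL, *it;
    gsize i;

    /* Collect each context only once, as bdrv_drain_all() now does. */
    for (i = 0; i < G_N_ELEMENTS(per_disk); i++) {
        if (!g_slist_find(uniq, per_disk[i])) {
            uniq = g_slist_prepend(uniq, per_disk[i]);
        }
    }

    /* A drain loop over this list polls per context, not per disk. */
    for (it = uniq; it != NULL; it = it->next) {
        printf("poll %s\n", ((Ctx *)it->data)->name);
    }

    g_slist_free(uniq);
    return 0;
}
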
block/io.c | 42 ++++++++++++++++++++++++++----------------
1 file changed, 26 insertions(+), 16 deletions(-)
diff --git a/block/io.c b/block/io.c
index 9cc729b..43f85ab 100644
--- a/block/io.c
+++ b/block/io.c
@@ -233,17 +233,6 @@ static bool bdrv_requests_pending(BlockDriverState *bs)
return false;
}
-static bool bdrv_drain_one(BlockDriverState *bs)
-{
- bool bs_busy;
-
- bdrv_flush_io_queue(bs);
- bdrv_start_throttled_reqs(bs);
- bs_busy = bdrv_requests_pending(bs);
- bs_busy |= aio_poll(bdrv_get_aio_context(bs), bs_busy);
- return bs_busy;
-}
-
/*
* Wait for pending requests to complete on a single BlockDriverState subtree
*
@@ -256,8 +245,13 @@ static bool bdrv_drain_one(BlockDriverState *bs)
*/
void bdrv_drain(BlockDriverState *bs)
{
- while (bdrv_drain_one(bs)) {
+ bool busy = true;
+
+ while (busy) {
/* Keep iterating */
+ bdrv_flush_io_queue(bs);
+ busy = bdrv_requests_pending(bs);
+ busy |= aio_poll(bdrv_get_aio_context(bs), busy);
}
}
@@ -278,6 +272,7 @@ void bdrv_drain_all(void)
/* Always run first iteration so any pending completion BHs run */
bool busy = true;
BlockDriverState *bs = NULL;
+ GSList *aio_ctxs = NULL, *ctx;
while ((bs = bdrv_next(bs))) {
AioContext *aio_context = bdrv_get_aio_context(bs);
@@ -287,17 +282,30 @@ void bdrv_drain_all(void)
block_job_pause(bs->job);
}
aio_context_release(aio_context);
+
+ if (!aio_ctxs || !g_slist_find(aio_ctxs, aio_context)) {
+ aio_ctxs = g_slist_prepend(aio_ctxs, aio_context);
+ }
}
while (busy) {
busy = false;
- bs = NULL;
- while ((bs = bdrv_next(bs))) {
- AioContext *aio_context = bdrv_get_aio_context(bs);
+ for (ctx = aio_ctxs; ctx != NULL; ctx = ctx->next) {
+ AioContext *aio_context = ctx->data;
+ bs = NULL;
aio_context_acquire(aio_context);
- busy |= bdrv_drain_one(bs);
+ while ((bs = bdrv_next(bs))) {
+ if (aio_context == bdrv_get_aio_context(bs)) {
+ bdrv_flush_io_queue(bs);
+ if (bdrv_requests_pending(bs)) {
+ busy = true;
+ aio_poll(aio_context, busy);
+ }
+ }
+ }
+ busy |= aio_poll(aio_context, false);
aio_context_release(aio_context);
}
}
@@ -312,6 +320,7 @@ void bdrv_drain_all(void)
}
aio_context_release(aio_context);
}
+ g_slist_free(aio_ctxs);
}
/**
@@ -2562,4 +2571,5 @@ void bdrv_flush_io_queue(BlockDriverState *bs)
} else if (bs->file) {
bdrv_flush_io_queue(bs->file);
}
+ bdrv_start_throttled_reqs(bs);
}
--
2.4.3