From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Kevin Wolf <kwolf@redhat.com>,
pingfank@linux.vnet.ibm.com, alex@alex.org.uk,
jan.kiszka@siemens.com, Stefan Hajnoczi <stefanha@redhat.com>,
pbonzini@redhat.com
Subject: [Qemu-devel] [PATCH v2 2/2] aio: make aio_poll(ctx, true) block with no fds
Date: Tue, 26 Nov 2013 16:18:01 +0100
Message-ID: <1385479081-17887-3-git-send-email-stefanha@redhat.com>
In-Reply-To: <1385479081-17887-1-git-send-email-stefanha@redhat.com>

This patch drops a special case where aio_poll(ctx, true) returns false
instead of blocking if no file descriptors are waiting on I/O. Now it
is possible to block in aio_poll() to wait for aio_notify().

This change eliminates busy waiting. bdrv_drain_all() used to rely on
busy waiting to complete throttled I/O requests, but this is no longer
required, so we can simplify aio_poll().

Note that aio_poll() still returns false when it was woken only by
aio_notify(). In other words, interrupting a blocking aio_poll() wait
is not considered making progress.
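
A minimal sketch of the new semantics (not part of this patch; it
assumes the AioContext API as of this series, initialization as done
in tests/test-aio.c, and plain pthreads for the waker thread):

    #include <assert.h>
    #include <pthread.h>
    #include "qemu/timer.h"
    #include "block/aio.h"

    static void *waker(void *opaque)
    {
        /* Wake the blocked aio_poll() even though no fds are registered */
        aio_notify(opaque);
        return NULL;
    }

    int main(void)
    {
        AioContext *ctx;
        pthread_t thread;

        init_clocks();              /* as tests/test-aio.c does before use */
        ctx = aio_context_new();

        pthread_create(&thread, NULL, waker, ctx);

        /* Previously this returned false immediately because only the
         * internal aio_notify() fd was registered.  Now it blocks until
         * the waker thread calls aio_notify(), and still returns false
         * because being woken is not considered progress.
         */
        assert(!aio_poll(ctx, true));

        pthread_join(thread, NULL);
        return 0;
    }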

Adjust test-aio /aio/bh/callback-delete/one, which assumed aio_poll(ctx,
true) would immediately return false instead of blocking.
Reviewed-by: Alex Bligh <alex@alex.org.uk>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 aio-posix.c      | 5 -----
 aio-win32.c      | 5 -----
 tests/test-aio.c | 1 -
 3 files changed, 11 deletions(-)
diff --git a/aio-posix.c b/aio-posix.c
index bd06f33..f921d4f 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -217,11 +217,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     ctx->walking_handlers--;
 
-    /* early return if we only have the aio_notify() fd */
-    if (ctx->pollfds->len == 1) {
-        return progress;
-    }
-
     /* wait until next event */
     ret = qemu_poll_ns((GPollFD *)ctx->pollfds->data,
                        ctx->pollfds->len,
diff --git a/aio-win32.c b/aio-win32.c
index f9cfbb7..23f4e5b 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -161,11 +161,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     ctx->walking_handlers--;
 
-    /* early return if we only have the aio_notify() fd */
-    if (count == 1) {
-        return progress;
-    }
-
     /* wait until next event */
     while (count > 0) {
         int ret;
diff --git a/tests/test-aio.c b/tests/test-aio.c
index c4fe0fc..592721e 100644
--- a/tests/test-aio.c
+++ b/tests/test-aio.c
@@ -195,7 +195,6 @@ static void test_bh_delete_from_cb(void)
     g_assert(data1.bh == NULL);
 
     g_assert(!aio_poll(ctx, false));
-    g_assert(!aio_poll(ctx, true));
 }
 
 static void test_bh_delete_from_cb_many(void)
--
1.8.4.2