From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([208.118.235.92]:52915)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1S79th-0006Yz-4q for qemu-devel@nongnu.org;
	Mon, 12 Mar 2012 14:23:06 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1S79tH-0001hx-Hg
	for qemu-devel@nongnu.org; Mon, 12 Mar 2012 14:23:04 -0400
Received: from mail-ey0-f173.google.com ([209.85.215.173]:58373)
	by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1S79tH-0001gq-8c for qemu-devel@nongnu.org;
	Mon, 12 Mar 2012 14:22:39 -0400
Received: by mail-ey0-f173.google.com with SMTP id f11so1620797eaa.4
	for ; Mon, 12 Mar 2012 11:22:38 -0700 (PDT)
Sender: Paolo Bonzini
From: Paolo Bonzini
Date: Mon, 12 Mar 2012 19:22:26 +0100
Message-Id: <1331576548-23067-6-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1331576548-23067-1-git-send-email-pbonzini@redhat.com>
References: <1331576548-23067-1-git-send-email-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 5/7] aio: return "AIO in progress" state from
	qemu_aio_wait
List-Id: 
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
To: qemu-devel@nongnu.org

The definition of when qemu_aio_flush should loop is much simpler than
it looks.  It just has to call qemu_aio_wait until it makes no progress
and all flush callbacks return false.  qemu_aio_wait is the logical place
to tell the caller about this, and the return code will also help
implementing bdrv_drain.

Signed-off-by: Paolo Bonzini 
---
 aio.c      | 48 ++++++++++++++++++++++--------------------------
 qemu-aio.h |  6 ++++--
 2 files changed, 26 insertions(+), 28 deletions(-)

diff --git a/aio.c b/aio.c
index f19b3c6..71264fc 100644
--- a/aio.c
+++ b/aio.c
@@ -99,41 +99,30 @@ int qemu_aio_set_fd_handler(int fd,
 
 void qemu_aio_flush(void)
 {
-    AioHandler *node;
-    int ret;
-
-    do {
-        ret = 0;
-
-        /*
-         * If there are pending emulated aio start them now so flush
-         * will be able to return 1.
-         */
-        qemu_aio_wait();
-
-        QLIST_FOREACH(node, &aio_handlers, node) {
-            if (node->io_flush) {
-                ret |= node->io_flush(node->opaque);
-            }
-        }
-    } while (qemu_bh_poll() || ret > 0);
+    /* Always poll at least once; bottom halves may start new AIO so
+     * flush will be able to return 1.  However, they also might not :)
+     * so only block starting from the second call.
+     */
+    while (qemu_aio_wait());
 }
 
-void qemu_aio_wait(void)
+bool qemu_aio_wait(void)
 {
     int ret;
 
     /*
      * If there are callbacks left that have been queued, we need to call then.
-     * Return afterwards to avoid waiting needlessly in select().
+     * Do not call select in this case, because it is possible that the caller
+     * does not need a complete flush (as is the case for qemu_aio_wait loops).
      */
     if (qemu_bh_poll()) {
-        return;
+        return true;
     }
 
     do {
         AioHandler *node;
         fd_set rdfds, wrfds;
+        bool busy;
         int max_fd = -1;
 
         walking_handlers = 1;
@@ -142,14 +131,18 @@ void qemu_aio_wait(void)
         FD_ZERO(&wrfds);
 
         /* fill fd sets */
+        busy = false;
         QLIST_FOREACH(node, &aio_handlers, node) {
            /* If there aren't pending AIO operations, don't invoke callbacks.
             * Otherwise, if there are no AIO requests, qemu_aio_wait() would
             * wait indefinitely.
             */
-            if (node->io_flush && node->io_flush(node->opaque) == 0)
-                continue;
-
+            if (node->io_flush) {
+                if (node->io_flush(node->opaque) == 0) {
+                    continue;
+                }
+                busy = true;
+            }
             if (!node->deleted && node->io_read) {
                 FD_SET(node->fd, &rdfds);
                 max_fd = MAX(max_fd, node->fd + 1);
@@ -163,8 +156,9 @@ void qemu_aio_wait(void)
 
         walking_handlers = 0;
 
         /* No AIO operations?  Get us out of here */
-        if (max_fd == -1)
-            break;
+        if (!busy) {
+            return false;
+        }
 
         /* wait until next event */
         ret = select(max_fd, &rdfds, &wrfds, NULL, NULL);
@@ -204,4 +198,6 @@ void qemu_aio_wait(void)
             walking_handlers = 0;
         }
     } while (ret == 0);
+
+    return true;
 }
diff --git a/qemu-aio.h b/qemu-aio.h
index 0fc8409..bfdd35f 100644
--- a/qemu-aio.h
+++ b/qemu-aio.h
@@ -48,8 +48,10 @@ void qemu_aio_flush(void);
 /* Wait for a single AIO completion to occur.  This function will wait
  * until a single AIO event has completed and it will ensure something
  * has moved before returning.  This can issue new pending aio as
- * result of executing I/O completion or bh callbacks. */
-void qemu_aio_wait(void);
+ * result of executing I/O completion or bh callbacks.
+ *
+ * Return whether there is still any pending AIO operation. */
+bool qemu_aio_wait(void);
 
 /* Register a file descriptor and associated callbacks.  Behaves very similarly
  * to qemu_set_fd_handler2.  Unlike qemu_set_fd_handler2, these callbacks will
-- 
1.7.7.6