From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:58585)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1ZFfus-0005xN-4h for qemu-devel@nongnu.org;
	Thu, 16 Jul 2015 05:57:22 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1ZFfuq-00053p-RB for qemu-devel@nongnu.org;
	Thu, 16 Jul 2015 05:57:22 -0400
Received: from mail-wg0-x22f.google.com ([2a00:1450:400c:c00::22f]:35232)
	by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1ZFfuq-00053k-K6 for qemu-devel@nongnu.org;
	Thu, 16 Jul 2015 05:57:20 -0400
Received: by wgjx7 with SMTP id x7so53982715wgj.2 for ;
	Thu, 16 Jul 2015 02:57:19 -0700 (PDT)
Sender: Paolo Bonzini
From: Paolo Bonzini
Date: Thu, 16 Jul 2015 11:56:48 +0200
Message-Id: <1437040609-9878-3-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1437040609-9878-1-git-send-email-pbonzini@redhat.com>
References: <1437040609-9878-1-git-send-email-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH v2 2/3] aio-win32: reorganize polling loop
List-Id:
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
To: qemu-devel@nongnu.org
Cc: kwolf@redhat.com, lersek@redhat.com, rjones@redhat.com,
	stefanha@redhat.com

Preparatory bugfixes and tweaks to the loop before the next patch:

- disable dispatch optimization during aio_prepare.  This fixes a bug.

- do not modify "blocking" until after the first WaitForMultipleObjects
  call.  This is needed in the next patch.

- change the loop to do...while.  This makes it obvious that the loop
  is always entered at least once.  In the next patch this is important
  because the first iteration undoes the ctx->notify_me increment that
  happened before entering the loop.

Signed-off-by: Paolo Bonzini
---
 aio-win32.c | 21 ++++++++++++---------
 1 file changed, 12 insertions(+), 9 deletions(-)

diff --git a/aio-win32.c b/aio-win32.c
index 233d8f5..9268b5c 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -284,11 +284,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
     int timeout;
 
     aio_context_acquire(ctx);
-    have_select_revents = aio_prepare(ctx);
-    if (have_select_revents) {
-        blocking = false;
-    }
-
     was_dispatching = ctx->dispatching;
     progress = false;
 
@@ -304,6 +299,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
      */
     aio_set_dispatching(ctx, !blocking);
 
+    have_select_revents = aio_prepare(ctx);
+
     ctx->walking_handlers++;
 
     /* fill fd sets */
@@ -317,12 +314,18 @@ bool aio_poll(AioContext *ctx, bool blocking)
     ctx->walking_handlers--;
     first = true;
 
-    /* wait until next event */
-    while (count > 0) {
+    /* ctx->notifier is always registered.  */
+    assert(count > 0);
+
+    /* Multiple iterations, all of them non-blocking except the first,
+     * may be necessary to process all pending events.  After the first
+     * WaitForMultipleObjects call ctx->notify_me will be decremented.
+     */
+    do {
         HANDLE event;
         int ret;
 
-        timeout = blocking
+        timeout = blocking && !have_select_revents
             ? qemu_timeout_ns_to_ms(aio_compute_timeout(ctx)) : 0;
         if (timeout) {
             aio_context_release(ctx);
@@ -351,7 +354,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
         blocking = false;
 
         progress |= aio_dispatch_handlers(ctx, event);
-    }
+    } while (count > 0);
 
     progress |= timerlistgroup_run_timers(&ctx->tlg);
 
-- 
2.4.3
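
[Editorial sketch, not part of the patch: a simplified view of how the
aio_poll() loop in aio-win32.c is shaped once this change is applied.
Only the loop structure visible in the hunks above is reproduced; the
fd-set filling, the events[] array, the lock release/reacquire around
the wait, and the translation of the wait result into "event" lie
outside the hunks shown and are elided or paraphrased here.]

    do {
        HANDLE event;
        int ret;

        /* Block only if the caller asked for it and aio_prepare()
         * found nothing already pending; later iterations always use
         * a zero timeout because "blocking" is cleared below.
         */
        timeout = blocking && !have_select_revents
            ? qemu_timeout_ns_to_ms(aio_compute_timeout(ctx)) : 0;

        ret = WaitForMultipleObjects(count, events, FALSE, timeout);

        /* ... map ret to "event", consume select revents, update
         * count so fully drained handles are not waited on again ... */

        blocking = false;
        progress |= aio_dispatch_handlers(ctx, event);
    } while (count > 0);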