From mboxrd@z Thu Jan  1 00:00:00 1970
From: Stefan Hajnoczi
Date: Fri, 29 Aug 2014 17:29:46 +0100
Message-Id: <1409329803-20744-19-git-send-email-stefanha@redhat.com>
In-Reply-To: <1409329803-20744-1-git-send-email-stefanha@redhat.com>
References: <1409329803-20744-1-git-send-email-stefanha@redhat.com>
Subject: [Qemu-devel] [PULL 18/35] aio-win32: add aio_set_dispatching optimization
To: qemu-devel@nongnu.org
Cc: Peter Maydell, Stefan Hajnoczi, Paolo Bonzini

From: Paolo Bonzini

Signed-off-by: Paolo Bonzini
Signed-off-by: Stefan Hajnoczi
---
 aio-win32.c | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/aio-win32.c b/aio-win32.c
index 1ec434a..fd52686 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -144,12 +144,25 @@ bool aio_poll(AioContext *ctx, bool blocking)
 {
     AioHandler *node;
     HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
-    bool progress, first;
+    bool was_dispatching, progress, first;
     int count;
     int timeout;
 
+    was_dispatching = ctx->dispatching;
     progress = false;
 
+    /* aio_notify can avoid the expensive event_notifier_set if
+     * everything (file descriptors, bottom halves, timers) will
+     * be re-evaluated before the next blocking poll().  This is
+     * already true when aio_poll is called with blocking == false;
+     * if blocking == true, it is only true after poll() returns.
+     *
+     * If we're in a nested event loop, ctx->dispatching might be true.
+     * In that case we can restore it just before returning, but we
+     * have to clear it now.
+     */
+    aio_set_dispatching(ctx, !blocking);
+
     ctx->walking_handlers++;
 
     /* fill fd sets */
@@ -170,6 +183,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
         timeout = blocking
             ? qemu_timeout_ns_to_ms(aio_compute_timeout(ctx)) : 0;
         ret = WaitForMultipleObjects(count, events, FALSE, timeout);
+        aio_set_dispatching(ctx, true);
 
         if (first && aio_bh_poll(ctx)) {
             progress = true;
@@ -191,5 +205,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     progress |= timerlistgroup_run_timers(&ctx->tlg);
 
+    aio_set_dispatching(ctx, was_dispatching);
     return progress;
 }
-- 
1.9.3
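
For readers following the optimization: below is a minimal, self-contained
sketch of the pattern the patch relies on, not QEMU code.  AioContextModel
and notifier_kicks are made-up names, and a plain counter stands in for the
expensive event_notifier_set() wakeup; the real code additionally needs
memory barriers so that an aio_notify() in another thread observes the
dispatching flag in order with the work it just published.

/*
 * Illustrative sketch only (not the QEMU implementation).
 * The counter models the cost aio_notify() can skip while the
 * event loop is dispatching handlers.
 */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool dispatching;   /* handlers will run before the next blocking wait */
    int notifier_kicks; /* stand-in for event_notifier_set() wakeups */
} AioContextModel;

static void aio_set_dispatching(AioContextModel *ctx, bool dispatching)
{
    /* Real code needs a memory barrier here (assumption; the patch
     * shows only the call sites, not aio_notify itself). */
    ctx->dispatching = dispatching;
}

static void aio_notify(AioContextModel *ctx)
{
    /* The optimization: if the loop is dispatching, bottom halves and
     * timers are re-evaluated before it blocks again, so the OS-level
     * wakeup can be skipped. */
    if (!ctx->dispatching) {
        ctx->notifier_kicks++; /* would be the expensive wakeup */
    }
}

int main(void)
{
    AioContextModel ctx = { false, 0 };

    aio_notify(&ctx);                 /* loop may be blocked: must kick  */
    aio_set_dispatching(&ctx, true);  /* WaitForMultipleObjects returned */
    aio_notify(&ctx);                 /* skipped: loop re-checks anyway  */
    aio_set_dispatching(&ctx, false); /* about to block again            */

    printf("kicks: %d (expected 1)\n", ctx.notifier_kicks);
    return 0;
}

Note how the patch saves was_dispatching on entry and restores it on
return: that is what keeps the flag correct when aio_poll() runs inside
a nested event loop that was itself dispatching.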