From mboxrd@z Thu Jan  1 00:00:00 1970
Sender: Paolo Bonzini
From: Paolo Bonzini
Message-ID: <55A6E5D7.8000204@redhat.com>
Date: Thu, 16 Jul 2015 00:59:35 +0200
References: <1436980403-25898-1-git-send-email-pbonzini@redhat.com>
 <20150715201435.GF3582@noname.redhat.com>
In-Reply-To: <20150715201435.GF3582@noname.redhat.com>
Subject: Re: [Qemu-devel] [PATCH] AioContext: fix broken ctx->dispatching optimization
To: Kevin Wolf
Cc: lersek@redhat.com, qemu-devel@nongnu.org, stefanha@redhat.com, rjones@redhat.com

On 15/07/2015 22:14, Kevin Wolf wrote:
> > index 233d8f5..ae7c6cf 100644
> > --- a/aio-win32.c
> > +++ b/aio-win32.c
> > @@ -318,11 +313,11 @@ bool aio_poll(AioContext *ctx, bool blocking)
> >      first = true;
> >
> >      /* wait until next event */
> > -    while (count > 0) {
> > +    do {
> >          HANDLE event;
> >          int ret;
> >
> > -        timeout = blocking
> > +        timeout = blocking && !have_select_revents
> >              ? qemu_timeout_ns_to_ms(aio_compute_timeout(ctx)) : 0;
> >          if (timeout) {
> >              aio_context_release(ctx);
>
> Here I'm not sure why you switched to a do..while loop?
> It's not obvious to me how the change from aio_set_dispatching() to
> ctx->notify_me is related to this.

I sort of expected a reviewer to ask me about this change. I would have
explained it in the commit message, but that was already long enough. :)

The count on entry is never zero, because ctx->notifier is always
registered. I changed the "while" to "do...while" so that it is obvious
that ctx->notify_me is always decremented. In retrospect, I could simply
have added

    /* ctx->notifier is always registered. */
    assert(count > 0);

above the "do".

> With this information, I understand that what has changed is that the
> return value of g_main_context_iteration() changes from true before this
> patch (had the aio_notify() from aio_set_fd_handler() pending) to false
> after the patch (aio_notify() doesn't inject an event any more).
>
> This should mean that like above we can assert that the first iteration
> returns false, i.e. reverse the assertion (and indeed, with this
> change the test still passes for me).

I was a bit undecided about this. In the end I decided that the calls to
aio_poll()/g_main_context_iteration() were just there to put the
AioContext in a known state, and the g_assert()s on their return values
were not really important. For this reason, the while loop seemed to
express the intent best, and I made it consistent between the AioContext
and GSource cases.

Paolo