From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([208.118.235.92]:51354)
	by lists.gnu.org with esmtp (Exim 4.71)
	(envelope-from ) id 1UMwht-0003DS-H3
	for qemu-devel@nongnu.org; Tue, 02 Apr 2013 04:36:42 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1UMwhs-0002Lo-AD
	for qemu-devel@nongnu.org; Tue, 02 Apr 2013 04:36:41 -0400
Received: from mx1.redhat.com ([209.132.183.28]:56248)
	by eggs.gnu.org with esmtp (Exim 4.71)
	(envelope-from ) id 1UMwhs-0002Lf-2E
	for qemu-devel@nongnu.org; Tue, 02 Apr 2013 04:36:40 -0400
Date: Tue, 2 Apr 2013 10:34:03 +0200
From: Kevin Wolf
Message-ID: <20130402083332.GE2341@dhcp-200-207.str.redhat.com>
References: <1364507550-25093-1-git-send-email-aliguori@us.ibm.com>
 <1364507550-25093-2-git-send-email-aliguori@us.ibm.com>
 <5154D42E.70804@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <5154D42E.70804@redhat.com>
Subject: Re: [Qemu-devel] [RFC PATCH 1/3] aio-context: if io_flush isn't provided, assume "always busy"
To: Paolo Bonzini
Cc: Anthony Liguori, qemu-devel@nongnu.org, Mike Roth

On 29.03.2013 at 00:37, Paolo Bonzini wrote:
> On 28/03/2013 at 22:52, Anthony Liguori wrote:
> > Today, all callers of qemu_aio_set_fd_handler() pass a valid io_flush
> > function.
>
> Except one:
>
>     aio_set_event_notifier(ctx, &ctx->notifier,
>                            (EventNotifierHandler *)
>                            event_notifier_test_and_clear, NULL);
>
> This is the EventNotifier that is used by qemu_notify_event.
>
> It's quite surprising that this patch works and passes the tests. /me
> reads cover letter... ah, it is untested. :)
>
> But if you can eliminate the sole usage of aio_wait()'s return value
> (in bdrv_drain_all()), everything would be much simpler. There is a
> relatively convenient
>
>     assert(QLIST_EMPTY(&bs->tracked_requests));
>
> that you can use as the exit condition instead. Perhaps it's not
> trivial to do it efficiently, but it's not a fast path.

We just need to move to .bdrv_drain() for all block drivers that
register an AioHandler. I'm pretty sure that each one has its own data
structures to manage in-flight requests (basically, what is the
aio_flush handler today would become the .bdrv_drain callback). Then
bdrv_drain_all() can directly use the bdrv_drain() return value and
doesn't need to have it passed through aio_wait() any more.

Kevin