From: Paolo Bonzini
Date: Wed, 13 Mar 2013 15:14:15 +0100
Message-Id: <1363184055-8610-1-git-send-email-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH] dataplane: fix hang introduced by AioContext transition
To: qemu-devel@nongnu.org
Cc: kwolf@redhat.com, stefanha@redhat.com

The bug is that the EventNotifiers have a NULL io_flush callback.
Because _none_ of the callbacks on the dataplane AioContext have such
a callback, aio_poll will simply do nothing.

Fix this by adding the callbacks: the ioeventfd will always be polled
(this can change in the future to pause/resume the processing during
live snapshots or similar operations); the ioqueue will be polled only
if there are outstanding requests.

I must admit I screwed up my testing somehow, because commit 2c20e71
does not work even if cherry-picked on top of 1.4.0, and this patch
fixes it there as well.

Signed-off-by: Paolo Bonzini
---
 hw/dataplane/virtio-blk.c | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/hw/dataplane/virtio-blk.c b/hw/dataplane/virtio-blk.c
index aa9b040..24994fa 100644
--- a/hw/dataplane/virtio-blk.c
+++ b/hw/dataplane/virtio-blk.c
@@ -261,6 +261,11 @@ static int process_request(IOQueue *ioq, struct iovec iov[],
     }
 }
 
+static int flush_true(EventNotifier *e)
+{
+    return true;
+}
+
 static void handle_notify(EventNotifier *e)
 {
     VirtIOBlockDataPlane *s = container_of(e, VirtIOBlockDataPlane,
@@ -340,6 +345,14 @@ static void handle_notify(EventNotifier *e)
     }
 }
 
+static int flush_io(EventNotifier *e)
+{
+    VirtIOBlockDataPlane *s = container_of(e, VirtIOBlockDataPlane,
+                                           io_notifier);
+
+    return s->num_reqs > 0;
+}
+
 static void handle_io(EventNotifier *e)
 {
     VirtIOBlockDataPlane *s = container_of(e, VirtIOBlockDataPlane,
@@ -470,7 +483,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
         exit(1);
     }
     s->host_notifier = *virtio_queue_get_host_notifier(vq);
-    aio_set_event_notifier(s->ctx, &s->host_notifier, handle_notify, NULL);
+    aio_set_event_notifier(s->ctx, &s->host_notifier, handle_notify, flush_true);
 
     /* Set up ioqueue */
     ioq_init(&s->ioqueue, s->fd, REQ_MAX);
@@ -478,7 +491,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
         ioq_put_iocb(&s->ioqueue, &s->requests[i].iocb);
     }
     s->io_notifier = *ioq_get_notifier(&s->ioqueue);
-    aio_set_event_notifier(s->ctx, &s->io_notifier, handle_io, NULL);
+    aio_set_event_notifier(s->ctx, &s->io_notifier, handle_io, flush_io);
 
     s->started = true;
     trace_virtio_blk_data_plane_start(s);
-- 
1.8.1.4
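
For context, here is a minimal sketch of the io_flush contract the
commit message describes. This is not QEMU's actual source; the type
and function names (SketchHandler, any_handler_busy, aio_poll_sketch)
are simplified stand-ins. The idea it illustrates: the 1.4-era
aio_poll treats a handler as worth waiting on only when its io_flush
callback is non-NULL and returns true, so a context whose handlers all
pass a NULL io_flush never blocks at all.

    #include <stdbool.h>
    #include <stddef.h>

    /* Simplified stand-in for QEMU's per-context handler list. */
    typedef struct SketchHandler SketchHandler;
    struct SketchHandler {
        int (*io_flush)(void *opaque);  /* NULL: handler is never "busy" */
        void *opaque;
        SketchHandler *next;
    };

    /* A handler is "busy" when its io_flush reports pending work. */
    static bool any_handler_busy(SketchHandler *list)
    {
        for (SketchHandler *h = list; h != NULL; h = h->next) {
            if (h->io_flush && h->io_flush(h->opaque)) {
                return true;
            }
        }
        return false;
    }

    /* Sketch of the old aio_poll fast path: with no busy handler there
     * is nothing to wait for, so it returns immediately rather than
     * blocking.  Before the patch above, both dataplane notifiers were
     * registered with io_flush == NULL, so this early return was always
     * taken and the ioeventfd was never serviced -- the reported hang. */
    static bool aio_poll_sketch(SketchHandler *handlers, bool blocking)
    {
        (void)blocking;  /* the sketch ignores the blocking flag */
        if (!any_handler_busy(handlers)) {
            return false;
        }
        /* ... poll the handlers' file descriptors and dispatch ... */
        return true;
    }

With the patch applied, flush_true keeps the host notifier permanently
"busy", while flush_io reports busy only while s->num_reqs > 0, so the
dataplane's aio_poll blocks exactly when there is something to wait for.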