Date: Tue, 12 Jan 2016 10:26:25 +0200
From: Victor Kaplansky
Message-ID: <1452586854-21058-1-git-send-email-victork@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Subject: [Qemu-devel] [RFC PATCH] vhost: fix lost interrupts from slow reacting back-end
To: qemu-devel@nongnu.org
Cc: Didier Pallard, "Michael S. Tsirkin", Thibaut Collet,
	Jean-Mickael Guerin, Marc-André Lureau, pbonzini@redhat.com

This RFC PATCH tries to solve the problem of lost interrupts from a
slow back-end.

Didier, could you test it?
Thanks, Victor

When interrupts are unmasked, it can take an undefined amount of time
for the back-end to start routing events to the guest_notifier. Until
then, events keep flowing to the masked_notifier, and some interrupts
can be lost.

This patch handles that situation by testing and clearing both the
masked_notifier and the guest_notifier in the guest_notifier read
handler.

Signed-off-by: Victor Kaplansky
---
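
Note (outside the commit message): the window the patch addresses comes
from how masking is implemented for vhost. Unmasking only asks the
back-end to switch its call eventfd, so a slow back-end can keep
signalling the old (masked) notifier for a while. Below is a simplified
sketch of the existing unmask path in hw/virtio/vhost.c (error handling
and details elided, placement approximate), just to show where the
switch happens; the patch does not touch this path, it only drains the
masked notifier in the guest notifier read handler.

    void vhost_virtqueue_mask(struct vhost_dev *hdev, VirtIODevice *vdev,
                              int n, bool mask)
    {
        VirtQueue *vvq = virtio_get_queue(vdev, n);
        int index = n - hdev->vq_index;
        struct vhost_vring_file file;

        if (mask) {
            /* Ask the back-end to signal the masked notifier instead. */
            file.fd = event_notifier_get_fd(&hdev->vqs[index].masked_notifier);
        } else {
            /* Ask the back-end to signal the guest notifier again.  A
             * vhost-user back-end only acts on this once it processes the
             * request, so events may still land on the masked notifier in
             * the meantime; that is the window this patch covers. */
            file.fd = event_notifier_get_fd(virtio_queue_get_guest_notifier(vvq));
        }
        file.index = hdev->vhost_ops->vhost_get_vq_index(hdev, n);
        hdev->vhost_ops->vhost_set_vring_call(hdev, &file);
    }
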
 include/hw/virtio/virtio.h |  1 +
 hw/virtio/vhost.c          |  3 +++
 hw/virtio/virtio.c         | 14 ++++++++++++++
 3 files changed, 18 insertions(+)

diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index 205fadf2..f52b0b6a 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -240,6 +240,7 @@ VirtQueue *virtio_get_queue(VirtIODevice *vdev, int n);
 uint16_t virtio_get_queue_index(VirtQueue *vq);
 int virtio_queue_get_id(VirtQueue *vq);
 EventNotifier *virtio_queue_get_guest_notifier(VirtQueue *vq);
+void virtio_queue_set_masked_guest_notifier(VirtQueue *vq, EventNotifier *n);
 void virtio_queue_set_guest_notifier_fd_handler(VirtQueue *vq, bool assign,
                                                 bool with_irqfd);
 EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq);
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index de29968a..51ce1532 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -854,6 +854,9 @@ static int vhost_virtqueue_start(struct vhost_dev *dev,
     /* Clear and discard previous events if any. */
     event_notifier_test_and_clear(&vq->masked_notifier);
 
+    /* Set masked guest_notifier. */
+    virtio_queue_set_masked_guest_notifier(vvq, &vq->masked_notifier);
+
     return 0;
 
 fail_kick:
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index bd6b4df9..d9095c51 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -89,6 +89,7 @@ struct VirtQueue
     VirtIODevice *vdev;
     EventNotifier guest_notifier;
     EventNotifier host_notifier;
+    EventNotifier *masked_guest_notifier;
     QLIST_ENTRY(VirtQueue) node;
 };
 
@@ -1622,6 +1623,14 @@ static void virtio_queue_guest_notifier_read(EventNotifier *n)
     if (event_notifier_test_and_clear(n)) {
         virtio_irq(vq);
     }
+    /* It could take some time for the backend to switch to
+     * sending to the unmasked eventfd, so we have to test the
+     * masked notifier too. */
+    if (vq->masked_guest_notifier) {
+        if (event_notifier_test_and_clear(vq->masked_guest_notifier)) {
+            virtio_irq(vq);
+        }
+    }
 }
 
 void virtio_queue_set_guest_notifier_fd_handler(VirtQueue *vq, bool assign,
@@ -1645,6 +1654,11 @@ EventNotifier *virtio_queue_get_guest_notifier(VirtQueue *vq)
     return &vq->guest_notifier;
 }
 
+void virtio_queue_set_masked_guest_notifier(VirtQueue *vq, EventNotifier *n)
+{
+    vq->masked_guest_notifier = n;
+}
+
 static void virtio_queue_host_notifier_read(EventNotifier *n)
 {
     VirtQueue *vq = container_of(n, VirtQueue, host_notifier);
-- 
--Victor