* [Qemu-devel] [PATCH] Revert "virtio: move common ioeventfd handling out of virtio-pci"
@ 2012-08-03 14:58 Stefan Hajnoczi
  2012-08-03 16:16 ` [Qemu-devel] [untested PATCH] virtio: fix vhost handling Paolo Bonzini
  0 siblings, 1 reply; 4+ messages in thread

From: Stefan Hajnoczi @ 2012-08-03 14:58 UTC (permalink / raw)
To: qemu-devel; +Cc: Paolo Bonzini, Stefan Hajnoczi, Michael S. Tsirkin

This reverts commit b1f416aa8d870fab71030abc9401cfc77b948e8e.

The above commit breaks vhost_net because it always registers the
virtio_pci_host_notifier_read() handler function on the ioeventfd, even
when vhost_net.ko is using the ioeventfd.  The result is both QEMU and
vhost_net.ko polling on the same eventfd and the virtio_net.ko guest
driver seeing inconsistent results:

  # ifconfig eth0 192.168.0.1 netmask 255.255.255.0
  virtio_net virtio0: output:id 0 is not a head!

Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
---
 hw/virtio-pci.c | 36 ++++++++++++++++++++++++++++++++++--
 hw/virtio.c     | 22 ----------------------
 hw/virtio.h     |  1 -
 3 files changed, 34 insertions(+), 25 deletions(-)

diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
index 3ab9747..34262cb 100644
--- a/hw/virtio-pci.c
+++ b/hw/virtio-pci.c
@@ -173,18 +173,46 @@ static int virtio_pci_set_host_notifier_internal(VirtIOPCIProxy *proxy,
                          __func__, r);
             return r;
         }
-        virtio_queue_set_host_notifier_fd_handler(vq, true);
         memory_region_add_eventfd(&proxy->bar, VIRTIO_PCI_QUEUE_NOTIFY, 2,
                                   true, n, notifier);
     } else {
         memory_region_del_eventfd(&proxy->bar, VIRTIO_PCI_QUEUE_NOTIFY, 2,
                                   true, n, notifier);
-        virtio_queue_set_host_notifier_fd_handler(vq, false);
+        /* Handle the race condition where the guest kicked and we deassigned
+         * before we got around to handling the kick.
+         */
+        if (event_notifier_test_and_clear(notifier)) {
+            virtio_queue_notify_vq(vq);
+        }
+        event_notifier_cleanup(notifier);
     }
     return r;
 }

+static void virtio_pci_host_notifier_read(void *opaque)
+{
+    VirtQueue *vq = opaque;
+    EventNotifier *n = virtio_queue_get_host_notifier(vq);
+    if (event_notifier_test_and_clear(n)) {
+        virtio_queue_notify_vq(vq);
+    }
+}
+
+static void virtio_pci_set_host_notifier_fd_handler(VirtIOPCIProxy *proxy,
+                                                    int n, bool assign)
+{
+    VirtQueue *vq = virtio_get_queue(proxy->vdev, n);
+    EventNotifier *notifier = virtio_queue_get_host_notifier(vq);
+    if (assign) {
+        qemu_set_fd_handler(event_notifier_get_fd(notifier),
+                            virtio_pci_host_notifier_read, NULL, vq);
+    } else {
+        qemu_set_fd_handler(event_notifier_get_fd(notifier),
+                            NULL, NULL, NULL);
+    }
+}
+
 static void virtio_pci_start_ioeventfd(VirtIOPCIProxy *proxy)
 {
     int n, r;
@@ -204,6 +232,8 @@ static void virtio_pci_start_ioeventfd(VirtIOPCIProxy *proxy)
         if (r < 0) {
             goto assign_error;
         }
+
+        virtio_pci_set_host_notifier_fd_handler(proxy, n, true);
     }
     proxy->ioeventfd_started = true;
     return;
@@ -214,6 +244,7 @@ assign_error:
             continue;
         }

+        virtio_pci_set_host_notifier_fd_handler(proxy, n, false);
         r = virtio_pci_set_host_notifier_internal(proxy, n, false);
         assert(r >= 0);
     }
@@ -235,6 +266,7 @@ static void virtio_pci_stop_ioeventfd(VirtIOPCIProxy *proxy)
             continue;
         }

+        virtio_pci_set_host_notifier_fd_handler(proxy, n, false);
         r = virtio_pci_set_host_notifier_internal(proxy, n, false);
         assert(r >= 0);
     }
diff --git a/hw/virtio.c b/hw/virtio.c
index d146f86..1fab9bb 100644
--- a/hw/virtio.c
+++ b/hw/virtio.c
@@ -1012,28 +1012,6 @@ EventNotifier *virtio_queue_get_guest_notifier(VirtQueue *vq)
 {
     return &vq->guest_notifier;
 }
-
-static void virtio_queue_host_notifier_read(EventNotifier *n)
-{
-    VirtQueue *vq = container_of(n, VirtQueue, host_notifier);
-    if (event_notifier_test_and_clear(n)) {
-        virtio_queue_notify_vq(vq);
-    }
-}
-
-void virtio_queue_set_host_notifier_fd_handler(VirtQueue *vq, bool assign)
-{
-    if (assign) {
-        event_notifier_set_handler(&vq->host_notifier,
-                                   virtio_queue_host_notifier_read);
-    } else {
-        event_notifier_set_handler(&vq->host_notifier, NULL);
-        /* Test and clear notifier before after disabling event,
-         * in case poll callback didn't have time to run. */
-        virtio_queue_host_notifier_read(&vq->host_notifier);
-    }
-}
-
 EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq)
 {
     return &vq->host_notifier;
diff --git a/hw/virtio.h b/hw/virtio.h
index f8b5535..6ae5b6e 100644
--- a/hw/virtio.h
+++ b/hw/virtio.h
@@ -233,7 +233,6 @@ EventNotifier *virtio_queue_get_guest_notifier(VirtQueue *vq);
 void virtio_queue_set_guest_notifier_fd_handler(VirtQueue *vq, bool assign,
                                                 bool with_irqfd);
 EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq);
-void virtio_queue_set_host_notifier_fd_handler(VirtQueue *vq, bool assign);
 void virtio_queue_notify_vq(VirtQueue *vq);
 void virtio_irq(VirtQueue *vq);
 #endif
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 4+ messages in thread
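The failure mode described in the commit message — QEMU and vhost_net.ko both polling the same eventfd — follows directly from eventfd semantics: a successful read drains the counter, so whichever consumer wins the race steals the kick from the other. A minimal standalone sketch of that behavior (assuming Linux eventfd; the function name is illustrative, not QEMU code):

```c
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* Returns 1 if a second reader still observes a kick after the first
 * reader drained it; with real eventfd semantics the answer is 0. */
int second_reader_sees_kick(void)
{
    uint64_t val;
    int fd = eventfd(0, EFD_NONBLOCK);  /* plays the role of the ioeventfd */
    if (fd < 0) {
        return -1;
    }
    eventfd_write(fd, 1);               /* guest kicks the queue once        */
    eventfd_read(fd, &val);             /* consumer A (vhost) drains counter */
    int r = eventfd_read(fd, &val);     /* consumer B (QEMU handler): EAGAIN */
    close(fd);
    return r == 0;
}
```

With vhost attached, the in-kernel thread is consumer A and QEMU's virtio_pci_host_notifier_read() is consumer B, so each kick is processed by only one of them — which is why the guest driver sees the queue in an inconsistent state.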
* [Qemu-devel] [untested PATCH] virtio: fix vhost handling
  2012-08-03 14:58 [Qemu-devel] [PATCH] Revert "virtio: move common ioeventfd handling out of virtio-pci" Stefan Hajnoczi
@ 2012-08-03 16:16 ` Paolo Bonzini
  2012-08-06 12:48   ` Stefan Hajnoczi
  0 siblings, 1 reply; 4+ messages in thread

From: Paolo Bonzini @ 2012-08-03 16:16 UTC (permalink / raw)
To: qemu-devel; +Cc: Stefan Hajnoczi

Commit b1f416aa8d870fab71030abc9401cfc77b948e8e breaks vhost_net
because it always registers the virtio_pci_host_notifier_read() handler
function on the ioeventfd, even when vhost_net.ko is using the ioeventfd.
The result is both QEMU and vhost_net.ko polling on the same eventfd
and the virtio_net.ko guest driver seeing inconsistent results:

  # ifconfig eth0 192.168.0.1 netmask 255.255.255.0
  virtio_net virtio0: output:id 0 is not a head!

To fix this, proceed the same as we do for irqfd: add a parameter to
virtio_queue_set_host_notifier_fd_handler and in that case only set
the notifier, not the handler.

Cc: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
Interesting, I tested vhost (or thought so).  Can you try this
patch instead?

 hw/virtio-pci.c | 14 +++++++-------
 hw/virtio.c     |  7 +++++--
 hw/virtio.h     |  3 ++-
 3 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
index 3ab9747..6133626 100644
--- a/hw/virtio-pci.c
+++ b/hw/virtio-pci.c
@@ -160,7 +160,7 @@ static int virtio_pci_load_queue(void * opaque, int n, QEMUFile *f)
 }

 static int virtio_pci_set_host_notifier_internal(VirtIOPCIProxy *proxy,
-                                                 int n, bool assign)
+                                                 int n, bool assign, bool with_vhost)
 {
     VirtQueue *vq = virtio_get_queue(proxy->vdev, n);
     EventNotifier *notifier = virtio_queue_get_host_notifier(vq);
@@ -173,13 +173,13 @@ static int virtio_pci_set_host_notifier_internal(VirtIOPCIProxy *proxy,
                          __func__, r);
             return r;
         }
-        virtio_queue_set_host_notifier_fd_handler(vq, true);
+        virtio_queue_set_host_notifier_fd_handler(vq, true, with_vhost);
         memory_region_add_eventfd(&proxy->bar, VIRTIO_PCI_QUEUE_NOTIFY, 2,
                                   true, n, notifier);
     } else {
         memory_region_del_eventfd(&proxy->bar, VIRTIO_PCI_QUEUE_NOTIFY, 2,
                                   true, n, notifier);
-        virtio_queue_set_host_notifier_fd_handler(vq, false);
+        virtio_queue_set_host_notifier_fd_handler(vq, false, with_vhost);
         event_notifier_cleanup(notifier);
     }
     return r;
@@ -200,7 +200,7 @@ static void virtio_pci_start_ioeventfd(VirtIOPCIProxy *proxy)
             continue;
         }

-        r = virtio_pci_set_host_notifier_internal(proxy, n, true);
+        r = virtio_pci_set_host_notifier_internal(proxy, n, true, false);
         if (r < 0) {
             goto assign_error;
         }
@@ -214,7 +214,7 @@ assign_error:
             continue;
         }

-        r = virtio_pci_set_host_notifier_internal(proxy, n, false);
+        r = virtio_pci_set_host_notifier_internal(proxy, n, false, false);
         assert(r >= 0);
     }
     proxy->ioeventfd_started = false;
@@ -235,7 +235,7 @@ static void virtio_pci_stop_ioeventfd(VirtIOPCIProxy *proxy)
             continue;
         }

-        r = virtio_pci_set_host_notifier_internal(proxy, n, false);
+        r = virtio_pci_set_host_notifier_internal(proxy, n, false, false);
         assert(r >= 0);
     }
     proxy->ioeventfd_started = false;
@@ -683,7 +683,7 @@ static int virtio_pci_set_host_notifier(void *opaque, int n, bool assign)
      * currently only stops on status change away from ok,
      * reset, vmstop and such. If we do add code to start here,
      * need to check vmstate, device state etc. */
-    return virtio_pci_set_host_notifier_internal(proxy, n, assign);
+    return virtio_pci_set_host_notifier_internal(proxy, n, assign, assign);
 }

 static void virtio_pci_vmstate_change(void *opaque, bool running)
diff --git a/hw/virtio.c b/hw/virtio.c
index d146f86..89e6d6f 100644
--- a/hw/virtio.c
+++ b/hw/virtio.c
@@ -1021,13 +1021,16 @@ static void virtio_queue_host_notifier_read(EventNotifier *n)
     }
 }

-void virtio_queue_set_host_notifier_fd_handler(VirtQueue *vq, bool assign)
+void virtio_queue_set_host_notifier_fd_handler(VirtQueue *vq, bool assign,
+                                               bool with_vhost)
 {
-    if (assign) {
+    if (assign && !with_vhost) {
         event_notifier_set_handler(&vq->host_notifier,
                                    virtio_queue_host_notifier_read);
     } else {
         event_notifier_set_handler(&vq->host_notifier, NULL);
+    }
+    if (!assign) {
         /* Test and clear notifier before after disabling event,
          * in case poll callback didn't have time to run. */
         virtio_queue_host_notifier_read(&vq->host_notifier);
diff --git a/hw/virtio.h b/hw/virtio.h
index f8b5535..d6a8ea3 100644
--- a/hw/virtio.h
+++ b/hw/virtio.h
@@ -233,7 +233,8 @@ EventNotifier *virtio_queue_get_guest_notifier(VirtQueue *vq);
 void virtio_queue_set_guest_notifier_fd_handler(VirtQueue *vq, bool assign,
                                                 bool with_irqfd);
 EventNotifier *virtio_queue_get_host_notifier(VirtQueue *vq);
-void virtio_queue_set_host_notifier_fd_handler(VirtQueue *vq, bool assign);
+void virtio_queue_set_host_notifier_fd_handler(VirtQueue *vq, bool assign,
+                                               bool with_vhost);
 void virtio_queue_notify_vq(VirtQueue *vq);
 void virtio_irq(VirtQueue *vq);
 #endif
-- 
1.7.10.4

^ permalink raw reply related	[flat|nested] 4+ messages in thread
* Re: [Qemu-devel] [untested PATCH] virtio: fix vhost handling
  2012-08-03 16:16 ` [Qemu-devel] [untested PATCH] virtio: fix vhost handling Paolo Bonzini
@ 2012-08-06 12:48   ` Stefan Hajnoczi
  2012-08-06 13:20     ` Paolo Bonzini
  0 siblings, 1 reply; 4+ messages in thread

From: Stefan Hajnoczi @ 2012-08-06 12:48 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: qemu-devel, Stefan Hajnoczi

On Fri, Aug 3, 2012 at 5:16 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> Commit b1f416aa8d870fab71030abc9401cfc77b948e8e breaks vhost_net
> because it always registers the virtio_pci_host_notifier_read() handler
> function on the ioeventfd, even when vhost_net.ko is using the ioeventfd.
> The result is both QEMU and vhost_net.ko polling on the same eventfd
> and the virtio_net.ko guest driver seeing inconsistent results:
>
>   # ifconfig eth0 192.168.0.1 netmask 255.255.255.0
>   virtio_net virtio0: output:id 0 is not a head!
>
> To fix this, proceed the same as we do for irqfd: add a parameter to
> virtio_queue_set_host_notifier_fd_handler and in that case only set
> the notifier, not the handler.
>
> Cc: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
> Interesting, I tested vhost (or thought so).  Can you try this
> patch instead?

Does this really make the code better than just reverting the patch?

Tested-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>

>
>  hw/virtio-pci.c | 14 +++++++-------
>  hw/virtio.c     |  7 +++++--
>  hw/virtio.h     |  3 ++-
>  3 files changed, 14 insertions(+), 10 deletions(-)
>
> diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
> index 3ab9747..6133626 100644
> --- a/hw/virtio-pci.c
> +++ b/hw/virtio-pci.c
> @@ -160,7 +160,7 @@ static int virtio_pci_load_queue(void * opaque, int n, QEMUFile *f)
>  }
>
>  static int virtio_pci_set_host_notifier_internal(VirtIOPCIProxy *proxy,
> -                                                 int n, bool assign)
> +                                                 int n, bool assign, bool with_vhost)

I don't like this name because virtio-blk-data-plane also wants to use
the ioeventfd.  I suggest we call it use_handler (note logic is
reversed from with_vhost).

Stefan

^ permalink raw reply	[flat|nested] 4+ messages in thread
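Stefan's proposed rename flips the flag's polarity: QEMU should install its own read handler only when it is both assigning the notifier and expected to consume kicks in userspace. The decision reduces to a two-input predicate (a sketch of the suggested naming; `use_handler` is not an existing QEMU parameter):

```c
#include <stdbool.h>

/* Truth table for handler registration under the suggested use_handler
 * flag.  This mirrors the patch's "assign && !with_vhost" test with the
 * polarity reversed: vhost and virtio-blk-data-plane both pass
 * use_handler = false because another thread consumes the ioeventfd. */
bool should_register_qemu_handler(bool assign, bool use_handler)
{
    return assign && use_handler;
}
```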
* Re: [Qemu-devel] [untested PATCH] virtio: fix vhost handling
  2012-08-06 12:48   ` Stefan Hajnoczi
@ 2012-08-06 13:20     ` Paolo Bonzini
  0 siblings, 0 replies; 4+ messages in thread

From: Paolo Bonzini @ 2012-08-06 13:20 UTC (permalink / raw)
To: Stefan Hajnoczi; +Cc: qemu-devel, Stefan Hajnoczi

On 06/08/2012 14:48, Stefan Hajnoczi wrote:
>>> Interesting, I tested vhost (or thought so).  Can you try this
>>> patch instead?
>
> Does this really make the code better than just reverting the patch?

The main problem here is that the current code has calls to
event_notifier_get_fd.  These compile under Windows, but they will not
make sense when EventNotifier is ported to Windows because it will not
have a file descriptor.  So I want to remove event_notifier_get_fd from
public code, and reverting the patch is a step backwards.

> I don't like this name because virtio-blk-data-plane also wants to use
> the ioeventfd.  I suggest we call it use_handler (note logic is
> reversed from with_vhost).

Ok, noted.

Paolo

^ permalink raw reply	[flat|nested] 4+ messages in thread
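Paolo's portability argument is that the event loop, not device code, should own the mapping from an EventNotifier to whatever the platform's waitable object is; callers register a handler on the notifier itself and never extract a file descriptor. A rough sketch of that shape, assuming a Linux eventfd backend (all names are illustrative, not QEMU's actual API):

```c
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

typedef struct EventNotifier EventNotifier;
typedef void EventNotifierHandler(EventNotifier *);

struct EventNotifier {
    int fd;                    /* backend detail; a Windows port could hold a HANDLE */
    EventNotifierHandler *handler;
};

static int notifier_init(EventNotifier *e)
{
    e->fd = eventfd(0, EFD_NONBLOCK);
    e->handler = NULL;
    return e->fd < 0 ? -1 : 0;
}

/* Callers see only the notifier, never its fd. */
static void notifier_set_handler(EventNotifier *e, EventNotifierHandler *h)
{
    e->handler = h;            /* a real loop would also (de)register e->fd */
}

/* One event-loop iteration: dispatch if the notifier fired since last poll. */
static int notifier_poll(EventNotifier *e)
{
    uint64_t val;
    if (e->handler && eventfd_read(e->fd, &val) == 0) {
        e->handler(e);
        return 1;
    }
    return 0;
}

static int fired;
static void count_handler(EventNotifier *e) { (void)e; fired++; }

/* Returns 1 if one signal produces exactly one handler invocation. */
int event_notifier_selftest(void)
{
    EventNotifier e;
    if (notifier_init(&e) < 0) {
        return 0;
    }
    notifier_set_handler(&e, count_handler);
    eventfd_write(e.fd, 1);                 /* the "kick"                   */
    int ran = notifier_poll(&e);            /* dispatches count_handler     */
    int ran_again = notifier_poll(&e);      /* counter drained: no dispatch */
    close(e.fd);
    return ran == 1 && ran_again == 0 && fired == 1;
}
```

Because only `notifier_init` and `notifier_poll` touch `e->fd`, swapping the eventfd for another primitive is a local change — which is the reason reverting back to explicit `event_notifier_get_fd()` calls in virtio-pci is a step backwards.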
end of thread, other threads:[~2012-08-06 13:20 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed
-- links below jump to the message on this page --
2012-08-03 14:58 [Qemu-devel] [PATCH] Revert "virtio: move common ioeventfd handling out of virtio-pci" Stefan Hajnoczi
2012-08-03 16:16 ` [Qemu-devel] [untested PATCH] virtio: fix vhost handling Paolo Bonzini
2012-08-06 12:48   ` Stefan Hajnoczi
2012-08-06 13:20     ` Paolo Bonzini