From: Jason Wang <jasowang@redhat.com>
To: qemu-devel@nongnu.org
Subject: Re: [RFC v2 06/13] vhost: Route host->guest notification through shadow virtqueue
Date: Tue, 16 Mar 2021 15:21:42 +0800
Message-ID: <f8b606be-cb31-aa33-0714-f7ca59f1816c@redhat.com>
In-Reply-To: <20210315194842.277740-7-eperezma@redhat.com>
On 2021/3/16 3:48 AM, Eugenio Pérez wrote:
> On one hand, it uses a mutex to synchronize guest masking with SVQ start
> and stop, because otherwise guest masking could race with the SVQ
> stop code, sending an incorrect call notifier to the vhost device. This
> would prevent further communication.
Is this because of the OOB monitor? If yes, can we simply drop the QMP
command and introduce a CLI parameter for the vhost backend?
Thanks
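
For reference, the locking scheme described in the first paragraph above
distills to the pattern below. This is a minimal sketch built on the
patch's own primitives (QemuRecMutex, QEMU_LOCK_GUARD and the qatomic_*
helpers); the notifier plumbing and error handling are elided, and the
two function names are placeholders for the patch's
vhost_virtqueue_mask() and vhost_shadow_vq_start()/_stop():

    #include "qemu/osdep.h"
    #include "qemu/thread.h"
    #include "qemu/lockable.h"
    #include "qemu/atomic.h"

    static QemuRecMutex masked_mutex;  /* per-virtqueue in the patch */
    static bool shadow_vqs_enabled;
    static bool notifier_is_masked;    /* saved elsewhere in the series */

    /* vCPU path: guest mask/unmask vmexit (vhost_virtqueue_mask) */
    static void guest_mask_path(bool mask)
    {
        QEMU_LOCK_GUARD(&masked_mutex);
        notifier_is_masked = mask;
        if (qatomic_load_acquire(&shadow_vqs_enabled)) {
            /* Redirect only the SVQ's guest-facing notifier; the
             * vhost call fd keeps pointing at the SVQ. */
            return;
        }
        /* Legacy path: switch the vhost call fd itself. */
    }

    /* Migration path: SVQ start/stop */
    static void svq_start_stop_path(void)
    {
        /* Same lock, so this cannot interleave with a mask vmexit
         * and hand the device a stale call notifier. */
        QEMU_LOCK_GUARD(&masked_mutex);
        guest_mask_path(notifier_is_masked); /* re-apply masked state */
    }

The mutex has to be recursive because the start/stop path calls the
mask path while already holding it, as the patch does below.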
>
> On the other hand, it adds an event to synchronize guest
> unmasking with call handling. Without it, the guest could receive
> notifications after its unmask call. This could be done through the
> mutex too, but the event solution is cheaper for the buffer
> forwarding path.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
> hw/virtio/vhost-shadow-virtqueue.h | 3 +
> include/hw/virtio/vhost.h | 1 +
> hw/virtio/vhost-shadow-virtqueue.c | 127 +++++++++++++++++++++++++++++
> hw/virtio/vhost.c | 29 ++++++-
> 4 files changed, 157 insertions(+), 3 deletions(-)
>
> diff --git a/hw/virtio/vhost-shadow-virtqueue.h b/hw/virtio/vhost-shadow-virtqueue.h
> index c891c6510d..2ca4b92b12 100644
> --- a/hw/virtio/vhost-shadow-virtqueue.h
> +++ b/hw/virtio/vhost-shadow-virtqueue.h
> @@ -17,6 +17,9 @@
>
> typedef struct VhostShadowVirtqueue VhostShadowVirtqueue;
>
> +void vhost_shadow_vq_mask(VhostShadowVirtqueue *svq, EventNotifier *masked);
> +void vhost_shadow_vq_unmask(VhostShadowVirtqueue *svq);
> +
> bool vhost_shadow_vq_start(struct vhost_dev *dev,
> unsigned idx,
> VhostShadowVirtqueue *svq);
> diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
> index 7ffdf9aea0..2f556bd3d5 100644
> --- a/include/hw/virtio/vhost.h
> +++ b/include/hw/virtio/vhost.h
> @@ -29,6 +29,7 @@ struct vhost_virtqueue {
> unsigned long long used_phys;
> unsigned used_size;
> bool notifier_is_masked;
> + QemuRecMutex masked_mutex;
> EventNotifier masked_notifier;
> struct vhost_dev *dev;
> };
> diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
> index 3e43399e9c..8f6ffa729a 100644
> --- a/hw/virtio/vhost-shadow-virtqueue.c
> +++ b/hw/virtio/vhost-shadow-virtqueue.c
> @@ -32,8 +32,22 @@ typedef struct VhostShadowVirtqueue {
> */
> EventNotifier host_notifier;
>
> + /* (Possible) masked notifier */
> + struct {
> + EventNotifier *n;
> +
> + /*
> + * Event to confirm unmasking.
> + * set when the masked notifier has no uses
> + */
> + QemuEvent is_free;
> + } masked_notifier;
> +
> /* Virtio queue shadowing */
> VirtQueue *vq;
> +
> + /* Virtio device */
> + VirtIODevice *vdev;
> } VhostShadowVirtqueue;
>
> /* Forward guest notifications */
> @@ -49,6 +63,70 @@ static void vhost_handle_guest_kick(EventNotifier *n)
> event_notifier_set(&svq->kick_notifier);
> }
>
> +/* Forward vhost notifications */
> +static void vhost_shadow_vq_handle_call_no_test(EventNotifier *n)
> +{
> + VhostShadowVirtqueue *svq = container_of(n, VhostShadowVirtqueue,
> + call_notifier);
> + EventNotifier *masked_notifier;
> +
> + /* Signal start of using masked notifier */
> + qemu_event_reset(&svq->masked_notifier.is_free);
> + masked_notifier = qatomic_load_acquire(&svq->masked_notifier.n);
> + if (!masked_notifier) {
> + qemu_event_set(&svq->masked_notifier.is_free);
> + }
> +
> + if (!masked_notifier) {
> + unsigned n = virtio_get_queue_index(svq->vq);
> + virtio_queue_invalidate_signalled_used(svq->vdev, n);
> + virtio_notify_irqfd(svq->vdev, svq->vq);
> + } else {
> + event_notifier_set(svq->masked_notifier.n);
> + }
> +
> + if (masked_notifier) {
> + /* Signal not using it anymore */
> + qemu_event_set(&svq->masked_notifier.is_free);
> + }
> +}
> +
> +static void vhost_shadow_vq_handle_call(EventNotifier *n)
> +{
> +
> + if (likely(event_notifier_test_and_clear(n))) {
> + vhost_shadow_vq_handle_call_no_test(n);
> + }
> +}
> +
> +/*
> + * Mask the shadow virtqueue.
> + *
> + * It can be called from a guest masking vmexit or shadow virtqueue start
> + * through QMP.
> + *
> + * @vq Shadow virtqueue
> + * @masked Masked notifier to signal instead of guest
> + */
> +void vhost_shadow_vq_mask(VhostShadowVirtqueue *svq, EventNotifier *masked)
> +{
> + qatomic_store_release(&svq->masked_notifier.n, masked);
> +}
> +
> +/*
> + * Unmask the shadow virtqueue.
> + *
> + * It can be called from a guest unmasking vmexit or shadow virtqueue start
> + * through QMP.
> + *
> + * @vq Shadow virtqueue
> + */
> +void vhost_shadow_vq_unmask(VhostShadowVirtqueue *svq)
> +{
> + qatomic_store_release(&svq->masked_notifier.n, NULL);
> + qemu_event_wait(&svq->masked_notifier.is_free);
> +}
> +
> /*
> * Restore the vhost guest to host notifier, i.e., disables svq effect.
> */
> @@ -103,8 +181,39 @@ bool vhost_shadow_vq_start(struct vhost_dev *dev,
> goto err_set_vring_kick;
> }
>
> + /* Set vhost call */
> + file.fd = event_notifier_get_fd(&svq->call_notifier);
> + r = dev->vhost_ops->vhost_set_vring_call(dev, &file);
> + if (unlikely(r != 0)) {
> + error_report("Couldn't set call fd: %s", strerror(errno));
> + goto err_set_vring_call;
> + }
> +
> +
> + /*
> + * Lock to avoid a race condition between guest setting masked status and
> + * us.
> + */
> + QEMU_LOCK_GUARD(&dev->vqs[idx].masked_mutex);
> + /* Set shadow vq -> guest notifier */
> + assert(dev->shadow_vqs_enabled);
> + vhost_virtqueue_mask(dev, dev->vdev, dev->vq_index + idx,
> + dev->vqs[idx].notifier_is_masked);
> +
> + if (dev->vqs[idx].notifier_is_masked &&
> + event_notifier_test_and_clear(&dev->vqs[idx].masked_notifier)) {
> + /* Check for pending notifications from the device */
> + vhost_shadow_vq_handle_call_no_test(&svq->call_notifier);
> + }
> +
> return true;
>
> +err_set_vring_call:
> + r = vhost_shadow_vq_restore_vdev_host_notifier(dev, idx, svq);
> + if (unlikely(r < 0)) {
> + error_report("Couldn't restore vq kick fd: %s", strerror(-r));
> + }
> +
> err_set_vring_kick:
> event_notifier_set_handler(&svq->host_notifier, NULL);
>
> @@ -126,7 +235,19 @@ void vhost_shadow_vq_stop(struct vhost_dev *dev,
> error_report("Couldn't restore vq kick fd: %s", strerror(-r));
> }
>
> + assert(!dev->shadow_vqs_enabled);
> +
> event_notifier_set_handler(&svq->host_notifier, NULL);
> +
> + /*
> + * Lock to avoid a race condition between guest setting masked status and
> + * us.
> + */
> + QEMU_LOCK_GUARD(&dev->vqs[idx].masked_mutex);
> +
> + /* Restore vhost call */
> + vhost_virtqueue_mask(dev, dev->vdev, dev->vq_index + idx,
> + dev->vqs[idx].notifier_is_masked);
> }
>
> /*
> @@ -154,6 +275,10 @@ VhostShadowVirtqueue *vhost_shadow_vq_new(struct vhost_dev *dev, int idx)
> }
>
> svq->vq = virtio_get_queue(dev->vdev, vq_idx);
> + svq->vdev = dev->vdev;
> + event_notifier_set_handler(&svq->call_notifier,
> + vhost_shadow_vq_handle_call);
> + qemu_event_init(&svq->masked_notifier.is_free, true);
> return g_steal_pointer(&svq);
>
> err_init_call_notifier:
> @@ -168,7 +293,9 @@ err_init_kick_notifier:
> */
> void vhost_shadow_vq_free(VhostShadowVirtqueue *vq)
> {
> + qemu_event_destroy(&vq->masked_notifier.is_free);
> event_notifier_cleanup(&vq->kick_notifier);
> + event_notifier_set_handler(&vq->call_notifier, NULL);
> event_notifier_cleanup(&vq->call_notifier);
> g_free(vq);
> }
> diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> index 4858a35df6..eab3e334f2 100644
> --- a/hw/virtio/vhost.c
> +++ b/hw/virtio/vhost.c
> @@ -1224,7 +1224,8 @@ static int vhost_sw_live_migration_stop(struct vhost_dev *dev)
> {
> int idx;
>
> - dev->shadow_vqs_enabled = false;
> + /* Can be read by vhost_virtqueue_mask, from vm exit */
> + qatomic_store_release(&dev->shadow_vqs_enabled, false);
>
> for (idx = 0; idx < dev->nvqs; ++idx) {
> vhost_shadow_vq_stop(dev, idx, dev->shadow_vqs[idx]);
> @@ -1248,7 +1249,8 @@ static int vhost_sw_live_migration_start(struct vhost_dev *dev)
> }
> }
>
> - dev->shadow_vqs_enabled = true;
> + /* Can be read by vhost_virtqueue_mask, from vm exit */
> + qatomic_store_release(&dev->shadow_vqs_enabled, true);
> for (idx = 0; idx < dev->nvqs; ++idx) {
> bool ok = vhost_shadow_vq_start(dev, idx, dev->shadow_vqs[idx]);
> if (unlikely(!ok)) {
> @@ -1259,7 +1261,7 @@ static int vhost_sw_live_migration_start(struct vhost_dev *dev)
> return 0;
>
> err_start:
> - dev->shadow_vqs_enabled = false;
> + qatomic_store_release(&dev->shadow_vqs_enabled, false);
> for (stop_idx = 0; stop_idx < idx; stop_idx++) {
> vhost_shadow_vq_stop(dev, idx, dev->shadow_vqs[stop_idx]);
> }
> @@ -1343,6 +1345,7 @@ static int vhost_virtqueue_init(struct vhost_dev *dev,
> goto fail_call;
> }
>
> + qemu_rec_mutex_init(&vq->masked_mutex);
> vq->dev = dev;
>
> return 0;
> @@ -1353,6 +1356,7 @@ fail_call:
>
> static void vhost_virtqueue_cleanup(struct vhost_virtqueue *vq)
> {
> + qemu_rec_mutex_destroy(&vq->masked_mutex);
> event_notifier_cleanup(&vq->masked_notifier);
> }
>
> @@ -1591,6 +1595,25 @@ void vhost_virtqueue_mask(struct vhost_dev *hdev, VirtIODevice *vdev, int n,
> /* should only be called after backend is connected */
> assert(hdev->vhost_ops);
>
> + /* Avoid race condition with shadow virtqueue stop/start */
> + QEMU_LOCK_GUARD(&hdev->vqs[index].masked_mutex);
> +
> + /* Set by QMP thread, so using acquire semantics */
> + if (qatomic_load_acquire(&hdev->shadow_vqs_enabled)) {
> + if (mask) {
> + vhost_shadow_vq_mask(hdev->shadow_vqs[index],
> + &hdev->vqs[index].masked_notifier);
> + } else {
> + vhost_shadow_vq_unmask(hdev->shadow_vqs[index]);
> + }
> +
> + /*
> + * Vhost call fd must remain the same since shadow vq is not polling
> + * for changes
> + */
> + return;
> + }
> +
> if (mask) {
> assert(vdev->use_guest_notifier_mask);
> file.fd = event_notifier_get_fd(&hdev->vqs[index].masked_notifier);
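
Similarly, the unmask handshake between
vhost_shadow_vq_handle_call_no_test() and vhost_shadow_vq_unmask()
reduces to the pattern below. Again a minimal sketch with the patch's
primitives; the EventNotifier calls are replaced by comments, and
qemu_event_init(&is_free, true) at setup time is assumed:

    #include "qemu/osdep.h"
    #include "qemu/thread.h"
    #include "qemu/atomic.h"

    static void *masked_n;    /* masked_notifier.n in the patch */
    static QemuEvent is_free; /* set while nobody is using masked_n */

    /* Device call handler (iothread context) */
    static void handle_call(void)
    {
        void *masked;

        qemu_event_reset(&is_free);           /* "I may use masked_n" */
        masked = qatomic_load_acquire(&masked_n);
        if (!masked) {
            qemu_event_set(&is_free);         /* not using it after all */
            /* ... notify the guest directly ... */
            return;
        }
        /* ... signal the masked notifier instead of the guest ... */
        qemu_event_set(&is_free);             /* done with masked_n */
    }

    /* Unmask path (vCPU context, under masked_mutex) */
    static void unmask(void)
    {
        qatomic_store_release(&masked_n, NULL);
        /* Wait until no in-flight handler can still signal the old
         * masked notifier; from here on, calls reach the guest. */
        qemu_event_wait(&is_free);
    }

The reason for an event rather than the mutex on this path is the one
the commit message gives: the per-call handler only pays an event
reset/set, while unmask, the rare operation, pays the wait.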