* [PATCH v2 0/3] vdpa: map shadow vrings with MAP_SHARED
@ 2023-06-02 14:38 Eugenio Pérez
  2023-06-02 14:38 ` [PATCH v2 1/3] vdpa: do not block migration if device has cvq and x-svq=on Eugenio Pérez
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Eugenio Pérez @ 2023-06-02 14:38 UTC (permalink / raw)
  To: qemu-devel; +Cc: Eugenio Pérez, Jason Wang, Michael S. Tsirkin, Lei Yang

The vdpa devices that use va addresses need these maps to be shared.
Otherwise, the vhost_vdpa checks will refuse to accept the maps.

This issue was discovered while testing SVQ with the vdpa simulator,
which now defaults to use_va=on.

v2:
* Use PROT_READ|PROT_WRITE instead of O_RDWR. The latter is an open(2)
  flag and does not work for mmap.
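As a minimal standalone illustration (not QEMU code; the one-page size is
chosen purely for the example) of the mapping this series switches to, an
anonymous MAP_SHARED mapping created with mmap protection flags rather
than the open(2) O_RDWR flag:

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t len = (size_t)sysconf(_SC_PAGESIZE); /* one page, for illustration */

    /*
     * PROT_READ | PROT_WRITE are mmap protection flags; O_RDWR is an
     * open(2) flag and is not valid here.  MAP_SHARED (instead of
     * MAP_PRIVATE) is what lets a va-based vhost_vdpa backend accept
     * the mapping.
     */
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return EXIT_FAILURE;
    }

    munmap(buf, len);
    return EXIT_SUCCESS;
}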

Eugenio Pérez (3):
  vdpa: do not block migration if device has cvq and x-svq=on
  vdpa: reorder vhost_vdpa_net_cvq_cmd_page_len function
  vdpa: map shadow vrings with MAP_SHARED

 hw/virtio/vhost-shadow-virtqueue.c | 18 ++++-----
 net/vhost-vdpa.c                   | 59 ++++++++++++++++--------------
 2 files changed, 40 insertions(+), 37 deletions(-)

-- 
2.31.1





* [PATCH v2 1/3] vdpa: do not block migration if device has cvq and x-svq=on
  2023-06-02 14:38 [PATCH v2 0/3] vdpa: map shadow vrings with MAP_SHARED Eugenio Pérez
@ 2023-06-02 14:38 ` Eugenio Pérez
  2023-06-02 17:27   ` Lei Yang
  2023-06-02 14:38 ` [PATCH v2 2/3] vdpa: reorder vhost_vdpa_net_cvq_cmd_page_len function Eugenio Pérez
  2023-06-02 14:38 ` [PATCH v2 3/3] vdpa: map shadow vrings with MAP_SHARED Eugenio Pérez
  2 siblings, 1 reply; 5+ messages in thread
From: Eugenio Pérez @ 2023-06-02 14:38 UTC (permalink / raw)
  To: qemu-devel; +Cc: Eugenio Pérez, Jason Wang, Michael S. Tsirkin, Lei Yang

It was a mistake to forbid migration in all cases, as SVQ is already
able to send all the CVQ messages before it starts forwarding the data
vqs.  It actually caused a regression, making it impossible to migrate
devices that were previously migratable.

Fixes: 36e4647247f2 ("vdpa: add vhost_vdpa_net_valid_svq_features")
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 net/vhost-vdpa.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 37cdc84562..c63456ff7c 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -837,13 +837,16 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
         s->vhost_vdpa.shadow_vq_ops_opaque = s;
 
         /*
-         * TODO: We cannot migrate devices with CVQ as there is no way to set
-         * the device state (MAC, MQ, etc) before starting the datapath.
+         * TODO: We cannot migrate devices with CVQ and no x-svq enabled as
+         * there is no way to set the device state (MAC, MQ, etc) before
+         * starting the datapath.
          *
          * Migration blocker ownership now belongs to s->vhost_vdpa.
          */
-        error_setg(&s->vhost_vdpa.migration_blocker,
-                   "net vdpa cannot migrate with CVQ feature");
+        if (!svq) {
+            error_setg(&s->vhost_vdpa.migration_blocker,
+                       "net vdpa cannot migrate with CVQ feature");
+        }
     }
     ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
     if (ret) {
-- 
2.31.1




* [PATCH v2 2/3] vdpa: reorder vhost_vdpa_net_cvq_cmd_page_len function
  2023-06-02 14:38 [PATCH v2 0/3] vdpa: map shadow vrings with MAP_SHARED Eugenio Pérez
  2023-06-02 14:38 ` [PATCH v2 1/3] vdpa: do not block migration if device has cvq and x-svq=on Eugenio Pérez
@ 2023-06-02 14:38 ` Eugenio Pérez
  2023-06-02 14:38 ` [PATCH v2 3/3] vdpa: map shadow vrings with MAP_SHARED Eugenio Pérez
  2 siblings, 0 replies; 5+ messages in thread
From: Eugenio Pérez @ 2023-06-02 14:38 UTC (permalink / raw)
  To: qemu-devel; +Cc: Eugenio Pérez, Jason Wang, Michael S. Tsirkin, Lei Yang

We need to call it from the resource cleanup context, as munmap needs
the size of the mappings.
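A minimal standalone sketch of that constraint (illustrative names only;
cvq_cmd_page_len() stands in for vhost_vdpa_net_cvq_cmd_page_len() and is
not the actual QEMU code):

#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Illustrative stand-in for vhost_vdpa_net_cvq_cmd_page_len(). */
static size_t cvq_cmd_page_len(void)
{
    return (size_t)sysconf(_SC_PAGESIZE);
}

int main(void)
{
    void *status = mmap(NULL, cvq_cmd_page_len(), PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (status == MAP_FAILED) {
        return 1;
    }

    /*
     * Cleanup: qemu_vfree() needed only the pointer, but munmap() also
     * needs the mapping length, so the length helper must already be
     * visible at the cleanup site, hence moving it earlier in the file.
     */
    munmap(status, cvq_cmd_page_len());
    return 0;
}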

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
 net/vhost-vdpa.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index c63456ff7c..91451a254a 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -116,6 +116,22 @@ VHostNetState *vhost_vdpa_get_vhost_net(NetClientState *nc)
     return s->vhost_net;
 }
 
+static size_t vhost_vdpa_net_cvq_cmd_len(void)
+{
+    /*
+     * MAC_TABLE_SET is the ctrl command that produces the longer out buffer.
+     * In buffer is always 1 byte, so it should fit here
+     */
+    return sizeof(struct virtio_net_ctrl_hdr) +
+           2 * sizeof(struct virtio_net_ctrl_mac) +
+           MAC_TABLE_ENTRIES * ETH_ALEN;
+}
+
+static size_t vhost_vdpa_net_cvq_cmd_page_len(void)
+{
+    return ROUND_UP(vhost_vdpa_net_cvq_cmd_len(), qemu_real_host_page_size());
+}
+
 static bool vhost_vdpa_net_valid_svq_features(uint64_t features, Error **errp)
 {
     uint64_t invalid_dev_features =
@@ -422,22 +438,6 @@ static void vhost_vdpa_cvq_unmap_buf(struct vhost_vdpa *v, void *addr)
     vhost_iova_tree_remove(tree, *map);
 }
 
-static size_t vhost_vdpa_net_cvq_cmd_len(void)
-{
-    /*
-     * MAC_TABLE_SET is the ctrl command that produces the longer out buffer.
-     * In buffer is always 1 byte, so it should fit here
-     */
-    return sizeof(struct virtio_net_ctrl_hdr) +
-           2 * sizeof(struct virtio_net_ctrl_mac) +
-           MAC_TABLE_ENTRIES * ETH_ALEN;
-}
-
-static size_t vhost_vdpa_net_cvq_cmd_page_len(void)
-{
-    return ROUND_UP(vhost_vdpa_net_cvq_cmd_len(), qemu_real_host_page_size());
-}
-
 /** Map CVQ buffer. */
 static int vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v, void *buf, size_t size,
                                   bool write)
-- 
2.31.1




* [PATCH v2 3/3] vdpa: map shadow vrings with MAP_SHARED
  2023-06-02 14:38 [PATCH v2 0/3] vdpa: map shadow vrings with MAP_SHARED Eugenio Pérez
  2023-06-02 14:38 ` [PATCH v2 1/3] vdpa: do not block migration if device has cvq and x-svq=on Eugenio Pérez
  2023-06-02 14:38 ` [PATCH v2 2/3] vdpa: reorder vhost_vdpa_net_cvq_cmd_page_len function Eugenio Pérez
@ 2023-06-02 14:38 ` Eugenio Pérez
  2 siblings, 0 replies; 5+ messages in thread
From: Eugenio Pérez @ 2023-06-02 14:38 UTC (permalink / raw)
  To: qemu-devel; +Cc: Eugenio Pérez, Jason Wang, Michael S. Tsirkin, Lei Yang

The vdpa devices that use va addresses need these maps to be shared.
Otherwise, the vhost_vdpa checks will refuse to accept the maps.

The mmap call always returns a page-aligned address, so the
qemu_memalign call is removed.  The ROUND_UP of the size is kept, as we
still need to DMA-map the buffers in full.

No Fixes tag is applied, as this never worked with va devices.

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
v2: Use PROT_READ|PROT_WRITE instead of the incorrect O_RDWR.
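For context, a standalone sketch of the allocation pattern the commit
message describes (not the patched QEMU code; round_up_to_page() and the
example size are simplified stand-ins for ROUND_UP() with
qemu_real_host_page_size()):

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

/* Simplified stand-in for ROUND_UP(size, qemu_real_host_page_size()). */
static size_t round_up_to_page(size_t n)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    return (n + page - 1) / page * page;
}

int main(void)
{
    size_t driver_size = round_up_to_page(1234); /* arbitrary example size */

    /* mmap already returns a page-aligned address, so no qemu_memalign. */
    void *desc = mmap(NULL, driver_size, PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (desc == MAP_FAILED) {
        perror("mmap");
        return EXIT_FAILURE;
    }

    /* Anonymous mappings start zero-filled, so no memset is needed. */

    /*
     * Teardown uses the same rounded size, which is why the ROUND_UP and
     * the page-length helper are kept.
     */
    munmap(desc, driver_size);
    return EXIT_SUCCESS;
}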
---
 hw/virtio/vhost-shadow-virtqueue.c | 18 +++++++++---------
 net/vhost-vdpa.c                   | 16 ++++++++--------
 2 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index bd7c12b6d3..1b1d85306c 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -649,7 +649,7 @@ void vhost_svq_set_svq_kick_fd(VhostShadowVirtqueue *svq, int svq_kick_fd)
 void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
                      VirtQueue *vq, VhostIOVATree *iova_tree)
 {
-    size_t desc_size, driver_size, device_size;
+    size_t desc_size;
 
     event_notifier_set_handler(&svq->hdev_call, vhost_svq_handle_call);
     svq->next_guest_avail_elem = NULL;
@@ -662,14 +662,14 @@ void vhost_svq_start(VhostShadowVirtqueue *svq, VirtIODevice *vdev,
 
     svq->vring.num = virtio_queue_get_num(vdev, virtio_get_queue_index(vq));
     svq->num_free = svq->vring.num;
-    driver_size = vhost_svq_driver_area_size(svq);
-    device_size = vhost_svq_device_area_size(svq);
-    svq->vring.desc = qemu_memalign(qemu_real_host_page_size(), driver_size);
+    svq->vring.desc = mmap(NULL, vhost_svq_driver_area_size(svq),
+                           PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
+                           -1, 0);
     desc_size = sizeof(vring_desc_t) * svq->vring.num;
     svq->vring.avail = (void *)((char *)svq->vring.desc + desc_size);
-    memset(svq->vring.desc, 0, driver_size);
-    svq->vring.used = qemu_memalign(qemu_real_host_page_size(), device_size);
-    memset(svq->vring.used, 0, device_size);
+    svq->vring.used = mmap(NULL, vhost_svq_device_area_size(svq),
+                           PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
+                           -1, 0);
     svq->desc_state = g_new0(SVQDescState, svq->vring.num);
     svq->desc_next = g_new0(uint16_t, svq->vring.num);
     for (unsigned i = 0; i < svq->vring.num - 1; i++) {
@@ -712,8 +712,8 @@ void vhost_svq_stop(VhostShadowVirtqueue *svq)
     svq->vq = NULL;
     g_free(svq->desc_next);
     g_free(svq->desc_state);
-    qemu_vfree(svq->vring.desc);
-    qemu_vfree(svq->vring.used);
+    munmap(svq->vring.desc, vhost_svq_driver_area_size(svq));
+    munmap(svq->vring.used, vhost_svq_device_area_size(svq));
     event_notifier_set_handler(&svq->hdev_call, NULL);
 }
 
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 91451a254a..f365cff0bd 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -201,8 +201,8 @@ static void vhost_vdpa_cleanup(NetClientState *nc)
 {
     VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
 
-    qemu_vfree(s->cvq_cmd_out_buffer);
-    qemu_vfree(s->status);
+    munmap(s->cvq_cmd_out_buffer, vhost_vdpa_net_cvq_cmd_page_len());
+    munmap(s->status, vhost_vdpa_net_cvq_cmd_page_len());
     if (s->vhost_net) {
         vhost_net_cleanup(s->vhost_net);
         g_free(s->vhost_net);
@@ -826,12 +826,12 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
         vhost_vdpa_net_valid_svq_features(features,
                                           &s->vhost_vdpa.migration_blocker);
     } else if (!is_datapath) {
-        s->cvq_cmd_out_buffer = qemu_memalign(qemu_real_host_page_size(),
-                                            vhost_vdpa_net_cvq_cmd_page_len());
-        memset(s->cvq_cmd_out_buffer, 0, vhost_vdpa_net_cvq_cmd_page_len());
-        s->status = qemu_memalign(qemu_real_host_page_size(),
-                                  vhost_vdpa_net_cvq_cmd_page_len());
-        memset(s->status, 0, vhost_vdpa_net_cvq_cmd_page_len());
+        s->cvq_cmd_out_buffer = mmap(NULL, vhost_vdpa_net_cvq_cmd_page_len(),
+                                     PROT_READ | PROT_WRITE,
+                                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
+        s->status = mmap(NULL, vhost_vdpa_net_cvq_cmd_page_len(),
+                         PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS,
+                         -1, 0);
 
         s->vhost_vdpa.shadow_vq_ops = &vhost_vdpa_net_svq_ops;
         s->vhost_vdpa.shadow_vq_ops_opaque = s;
-- 
2.31.1




* Re: [PATCH v2 1/3] vdpa: do not block migration if device has cvq and x-svq=on
  2023-06-02 14:38 ` [PATCH v2 1/3] vdpa: do not block migration if device has cvq and x-svq=on Eugenio Pérez
@ 2023-06-02 17:27   ` Lei Yang
  0 siblings, 0 replies; 5+ messages in thread
From: Lei Yang @ 2023-06-02 17:27 UTC (permalink / raw)
  To: Eugenio Pérez; +Cc: qemu-devel, Jason Wang, Michael S. Tsirkin

QE tested this series with vhost_vdpa and x-svq=on; the guest can
migrate and everything works well.

Tested-by: Lei Yang <leiyang@redhat.com>

On Fri, Jun 2, 2023 at 10:39 PM Eugenio Pérez <eperezma@redhat.com> wrote:
>
> It was a mistake to forbid migration in all cases, as SVQ is already
> able to send all the CVQ messages before it starts forwarding the data
> vqs.  It actually caused a regression, making it impossible to migrate
> devices that were previously migratable.
>
> Fixes: 36e4647247f2 ("vdpa: add vhost_vdpa_net_valid_svq_features")
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
>  net/vhost-vdpa.c | 11 +++++++----
>  1 file changed, 7 insertions(+), 4 deletions(-)
>
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 37cdc84562..c63456ff7c 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -837,13 +837,16 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
>          s->vhost_vdpa.shadow_vq_ops_opaque = s;
>
>          /*
> -         * TODO: We cannot migrate devices with CVQ as there is no way to set
> -         * the device state (MAC, MQ, etc) before starting the datapath.
> +         * TODO: We cannot migrate devices with CVQ and no x-svq enabled as
> +         * there is no way to set the device state (MAC, MQ, etc) before
> +         * starting the datapath.
>           *
>           * Migration blocker ownership now belongs to s->vhost_vdpa.
>           */
> -        error_setg(&s->vhost_vdpa.migration_blocker,
> -                   "net vdpa cannot migrate with CVQ feature");
> +        if (!svq) {
> +            error_setg(&s->vhost_vdpa.migration_blocker,
> +                       "net vdpa cannot migrate with CVQ feature");
> +        }
>      }
>      ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
>      if (ret) {
> --
> 2.31.1
>


