From: "Eugenio Pérez" <eperezma@redhat.com>
To: qemu-devel@nongnu.org
Cc: Jason Wang <jasowang@redhat.com>,
	Gautam Dawar <gdawar@xilinx.com>,
	si-wei.liu@oracle.com, Zhu Lingshan <lingshan.zhu@intel.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	Parav Pandit <parav@mellanox.com>, Cindy Lu <lulu@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Harpreet Singh Anand <hanand@xilinx.com>,
	Laurent Vivier <lvivier@redhat.com>,
	Shannon Nelson <snelson@pensando.io>,
	Lei Yang <leiyang@redhat.com>,
	Dragos Tatulea <dtatulea@nvidia.com>
Subject: [PATCH 5/7] vdpa: delay enable of data vqs
Date: Fri, 28 Jul 2023 19:20:26 +0200
Message-ID: <20230728172028.2074052-6-eperezma@redhat.com>
In-Reply-To: <20230728172028.2074052-1-eperezma@redhat.com>

To restore the device state at the destination of a live migration we send
the commands through the control virtqueue (CVQ).  For a device to read the
CVQ it must have received the DRIVER_OK status bit.

However, this opens a window where the device could start receiving packets
on rx queue 0 before it receives the RSS configuration.  To avoid that, we
do not send vring_enable for the data virtqueues until all the configuration
has been consumed by the device.
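
For illustration only, the following is a minimal standalone sketch (not QEMU
code) of the ordering this patch aims for at the destination; every function
name in it is a hypothetical stand-in, not a real vhost-vdpa call:

    #include <stdio.h>

    /* Hypothetical stand-ins for the real vhost-vdpa operations. */
    static void set_driver_ok(void)       { puts("DRIVER_OK set, CVQ readable"); }
    static void cvq_send_rss_config(void) { puts("RSS config sent through CVQ"); }
    static void enable_data_vq(int i)     { printf("data vq %d enabled\n", i); }

    int main(void)
    {
        int num_data_vqs = 4;   /* e.g. two queue pairs */

        /* 1. The device must see DRIVER_OK before it can read the CVQ. */
        set_driver_ok();

        /* 2. Restore the net state (RSS, MQ, MAC, ...) through the CVQ while
         * the data vqs are still disabled, so no packet can reach rx queue 0
         * with a stale steering configuration. */
        cvq_send_rss_config();

        /* 3. Only then enable the data vqs. */
        for (int i = 0; i < num_data_vqs; i++) {
            enable_data_vq(i);
        }
        return 0;
    }

In the actual patch, step 3 corresponds to the new loop in
vhost_vdpa_net_load(), which enables the data vrings only after the CVQ
commands have been sent.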

Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
v2: Enable the dataplane vqs if CVQ cannot be shadowed because of device
features or ASID.
---
 net/vhost-vdpa.c | 44 +++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 43 insertions(+), 1 deletion(-)

diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 3d7dc3e5c0..2c1cfda657 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -283,6 +283,15 @@ static VhostVDPAState *vhost_vdpa_net_first_nc_vdpa(VhostVDPAState *s)
     return DO_UPCAST(VhostVDPAState, nc, nc0);
 }
 
+/** From any vdpa net client, get the netclient of the last queue pair */
+static VhostVDPAState *vhost_vdpa_net_last_nc_vdpa(VhostVDPAState *s)
+{
+    VirtIONet *n = qemu_get_nic_opaque(s->nc.peer);
+    NetClientState *nc = qemu_get_peer(n->nic->ncs, n->max_ncs - 1);
+
+    return DO_UPCAST(VhostVDPAState, nc, nc);
+}
+
 static void vhost_vdpa_net_log_global_enable(VhostVDPAState *s, bool enable)
 {
     struct vhost_vdpa *v = &s->vhost_vdpa;
@@ -996,6 +1005,13 @@ static int vhost_vdpa_net_load(NetClientState *nc)
         return r;
     }
 
+    for (int i = 0; i < v->dev->vq_index; ++i) {
+        r = vhost_vdpa_set_vring_ready(v, i);
+        if (unlikely(r)) {
+            return r;
+        }
+    }
+
     return 0;
 }
 
@@ -1255,9 +1271,35 @@ static const VhostShadowVirtqueueOps vhost_vdpa_net_svq_ops = {
     .avail_handler = vhost_vdpa_net_handle_ctrl_avail,
 };
 
+/**
+ * Check whether a vhost_vdpa device's vrings should be enabled before DRIVER_OK
+ *
+ * CVQ must always start first if we want to restore the state safely. Do not
+ * start data vqs if the device has CVQ.
+ */
 static bool vhost_vdpa_should_enable(const struct vhost_vdpa *v)
 {
-    return true;
+    struct vhost_dev *dev = v->dev;
+    VhostVDPAState *s = container_of(v, VhostVDPAState, vhost_vdpa);
+    VhostVDPAState *cvq_s = vhost_vdpa_net_last_nc_vdpa(s);
+
+    if (!(dev->vq_index_end % 2)) {
+        /* vDPA device does not have CVQ */
+        return true;
+    }
+
+    if (dev->vq_index + 1 == dev->vq_index_end) {
+        /* We're evaluating CVQ, which must always be enabled first */
+        return true;
+    }
+
+    if (!vhost_vdpa_net_valid_svq_features(v->dev->features, NULL) ||
+        !cvq_s->cvq_isolated) {
+        /* CVQ cannot be shadowed (features or ASID), so enable as usual */
+        return true;
+    }
+
+    return false;
 }
 
 static const VhostVDPAVirtIOOps vhost_vdpa_virtio_net_ops = {
-- 
2.39.3




Thread overview: 11+ messages
2023-07-28 17:20 [PATCH 0/7] Enable vdpa net migration with features depending on CVQ Eugenio Pérez
2023-07-28 17:20 ` [PATCH 1/7] vdpa: export vhost_vdpa_set_vring_ready Eugenio Pérez
2023-07-28 17:20 ` [PATCH 2/7] vdpa: add should_enable op Eugenio Pérez
2023-07-28 17:20 ` [PATCH 3/7] vdpa: use virtio_ops->should_enable at vhost_vdpa_set_vrings_ready Eugenio Pérez
2023-07-28 17:20 ` [PATCH 4/7] vdpa: add stub vhost_vdpa_should_enable Eugenio Pérez
2023-07-28 17:20 ` [PATCH 5/7] vdpa: delay enable of data vqs Eugenio Pérez [this message]
2023-07-28 17:20 ` [PATCH 6/7] vdpa: enable cvq svq if data vq are shadowed Eugenio Pérez
2023-07-28 17:20 ` [PATCH 7/7] vdpa: remove net cvq migration blocker Eugenio Pérez
2023-07-31  6:41 ` [PATCH 0/7] Enable vdpa net migration with features depending on CVQ Jason Wang
2023-07-31 10:15   ` Eugenio Perez Martin
2023-08-01  3:48     ` Jason Wang
