From: "Eugenio Pérez" <eperezma@redhat.com>
To: qemu-devel@nongnu.org
Cc: yvugenfi@redhat.com, "Eugenio Pérez" <eperezma@redhat.com>,
si-wei.liu@oracle.com, "Jason Wang" <jasowang@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
"Dragos Tatulea" <dtatulea@nvidia.com>,
"Shannon Nelson" <snelson@pensando.io>
Subject: [RFC PATCH 11/12] vdpa: use SVQ to stall dataplane while NIC state is being restored
Date: Thu, 20 Jul 2023 20:14:58 +0200
Message-ID: <20230720181459.607008-12-eperezma@redhat.com>
In-Reply-To: <20230720181459.607008-1-eperezma@redhat.com>
Some dynamic state of a virtio-net vDPA device is restored through CVQ
in the event of a live migration. However, the dataplane needs to be
disabled in the meantime so the NIC does not fetch buffers from a ring
whose state is not valid yet.

As the default method to achieve this, let's offer the device a shadow
vring whose avail idx stays at 0, so it never fetches dataplane
buffers. As a fallback method, we will enable the dataplane vqs later,
as proposed previously.
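The stall works because a split-virtqueue device only sees new buffers
when the driver area's avail idx moves. A minimal sketch of that check
follows; the struct and helper are simplified stand-ins for
illustration only, not the QEMU or virtio header definitions:

```c
/*
 * Minimal illustration, NOT the QEMU/kernel definitions: a simplified
 * split-virtqueue avail ring. A device only fetches buffers while its
 * last seen avail index differs from the ring's idx, so a shadow vring
 * whose idx is left at 0 keeps the dataplane stalled.
 */
#include <assert.h>
#include <stdint.h>

struct vring_avail {
    uint16_t flags;
    uint16_t idx;     /* next ring slot the driver will fill */
    uint16_t ring[4]; /* fixed size here purely for the sketch */
};

/* Device-side check: are there buffers it has not consumed yet? */
static int device_has_work(const struct vring_avail *avail,
                           uint16_t last_avail_idx)
{
    return avail->idx != last_avail_idx;
}
```

Since the shadow vring offered during restore keeps idx at 0,
device_has_work() stays false and the dataplane is effectively stalled
until the real vrings are brought back via RING_RESET.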
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
net/vhost-vdpa.c | 49 +++++++++++++++++++++++++++++++++++++++++++-----
1 file changed, 44 insertions(+), 5 deletions(-)
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index af83de92f8..e14ae48f23 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -338,10 +338,25 @@ static int vhost_vdpa_net_data_start(NetClientState *nc)
 {
     VhostVDPAState *s = DO_UPCAST(VhostVDPAState, nc, nc);
     struct vhost_vdpa *v = &s->vhost_vdpa;
+    bool has_cvq = v->dev->vq_index_end % 2;
 
     assert(nc->info->type == NET_CLIENT_DRIVER_VHOST_VDPA);
 
-    if (s->always_svq ||
+    if (has_cvq && virtio_has_feature(v->dev->features, VIRTIO_F_RING_RESET)) {
+        /*
+         * Offer a fake vring to the device while the state is restored
+         * through CVQ. That way, the guest will not see packets in
+         * unexpected queues.
+         *
+         * This will be undone after loading all state through CVQ, at
+         * vhost_vdpa_net_load.
+         *
+         * TODO: Future optimizations may skip some of the SVQ setup and
+         * teardown, like setting the right kick and call fds or doorbell
+         * maps directly, and reusing the iova tree.
+         */
+        v->shadow_vqs_enabled = true;
+    } else if (s->always_svq ||
         migration_is_setup_or_active(migrate_get_current()->state)) {
         v->shadow_vqs_enabled = true;
         v->shadow_data = true;
@@ -738,10 +753,34 @@ static int vhost_vdpa_net_load(NetClientState *nc)
         return r;
     }
 
-    for (int i = 0; i < v->dev->vq_index; ++i) {
-        r = vhost_vdpa_set_vring_ready(v, i);
-        if (unlikely(r)) {
-            return r;
+    if (virtio_has_feature(v->dev->features, VIRTIO_F_RING_RESET) &&
+        !s->always_svq &&
+        !migration_is_setup_or_active(migrate_get_current()->state)) {
+        NICState *nic = qemu_get_nic(s->nc.peer);
+        int queue_pairs = n->multiqueue ? n->max_queue_pairs : 1;
+
+        for (int i = 0; i < queue_pairs; ++i) {
+            NetClientState *ncs = qemu_get_peer(nic->ncs, i);
+            VhostVDPAState *s_i = DO_UPCAST(VhostVDPAState, nc, ncs);
+
+            for (int j = 0; j < 2; ++j) {
+                vhost_net_virtqueue_reset(v->dev->vdev, ncs->peer, j);
+            }
+
+            s_i->vhost_vdpa.shadow_vqs_enabled = false;
+
+            for (int j = 0; j < 2; ++j) {
+                r = vhost_net_virtqueue_restart(v->dev->vdev, ncs->peer, j);
+                if (unlikely(r < 0)) {
+                    return r;
+                }
+            }
+        }
+    } else {
+        for (int i = 0; i < v->dev->vq_index; ++i) {
+            r = vhost_vdpa_set_vring_ready(v, i);
+            if (unlikely(r)) {
+                return r;
+            }
         }
     }
--
2.39.3
Thread overview: 25+ messages
2023-07-20 18:14 [RFC PATCH 00/12] Prefer to use SVQ to stall dataplane at NIC state restore through CVQ Eugenio Pérez
2023-07-20 18:14 ` [RFC PATCH 01/12] vhost: add vhost_reset_queue_op Eugenio Pérez
2023-07-20 18:14 ` [RFC PATCH 02/12] vhost: add vhost_restart_queue_op Eugenio Pérez
2023-07-20 18:14 ` [RFC PATCH 03/12] vhost_net: Use ops->vhost_restart_queue in vhost_net_virtqueue_restart Eugenio Pérez
2023-07-25 7:07 ` Jason Wang
2023-07-20 18:14 ` [RFC PATCH 04/12] vhost_net: Use ops->vhost_reset_queue in vhost_net_virtqueue_reset Eugenio Pérez
2023-07-25 7:08 ` Jason Wang
2023-07-20 18:14 ` [RFC PATCH 05/12] vdpa: add vhost_vdpa_set_vring_ready_internal Eugenio Pérez
2023-07-20 18:14 ` [RFC PATCH 06/12] vdpa: add vhost_vdpa_svq_stop Eugenio Pérez
2023-07-20 18:14 ` [RFC PATCH 07/12] vdpa: add vhost_vdpa_reset_queue Eugenio Pérez
2023-07-21 21:56 ` Si-Wei Liu
2023-07-24 16:35 ` Eugenio Perez Martin
2023-07-20 18:14 ` [RFC PATCH 08/12] vdpa: add vhost_vdpa_svq_start Eugenio Pérez
2023-07-20 18:14 ` [RFC PATCH 09/12] vdpa: add vhost_vdpa_restart_queue Eugenio Pérez
2023-07-20 18:14 ` [RFC PATCH 10/12] vdpa: enable all vqs if the device support RING_RESET feature Eugenio Pérez
2023-07-20 18:14 ` Eugenio Pérez [this message]
2023-07-21 22:58 ` [RFC PATCH 11/12] vdpa: use SVQ to stall dataplane while NIC state is being restored Si-Wei Liu
2023-07-24 19:59 ` Eugenio Perez Martin
2023-07-25 3:48 ` Jason Wang
2023-07-20 18:14 ` [RFC PATCH 12/12] vhost: Allow _F_RING_RESET with shadow virtqueue Eugenio Pérez
2023-07-21 6:48 ` [RFC PATCH 00/12] Prefer to use SVQ to stall dataplane at NIC state restore through CVQ Eugenio Perez Martin
2023-07-25 7:15 ` Jason Wang
2023-07-27 12:53 ` Eugenio Perez Martin
2023-07-27 13:05 ` Michael S. Tsirkin
2023-07-27 15:23 ` Eugenio Perez Martin