From: "Eugenio Pérez" <eperezma@redhat.com>
To: qemu-devel@nongnu.org
Cc: Shannon <shannon.nelson@amd.com>,
Parav Pandit <parav@mellanox.com>,
Stefano Garzarella <sgarzare@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
yin31149@gmail.com, Jason Wang <jasowang@redhat.com>,
Yajun Wu <yajunw@nvidia.com>,
Zhu Lingshan <lingshan.zhu@intel.com>,
Lei Yang <leiyang@redhat.com>,
Dragos Tatulea <dtatulea@nvidia.com>,
Juan Quintela <quintela@redhat.com>,
Laurent Vivier <lvivier@redhat.com>,
si-wei.liu@oracle.com, Gautam Dawar <gdawar@xilinx.com>
Subject: [RFC PATCH 01/18] vdpa: add VhostVDPAShared
Date: Thu, 19 Oct 2023 16:34:38 +0200
Message-ID: <20231019143455.2377694-2-eperezma@redhat.com>
In-Reply-To: <20231019143455.2377694-1-eperezma@redhat.com>

It will hold the properties shared among all vhost_vdpa instances
associated with the same device. For example, we only need one iova_tree
or one memory listener for the entire device.
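
As a sketch of where the struct is headed, this is roughly the shape it
takes by the end of the series (the field list below is inferred from the
titles of the follow-up patches and is illustrative only; this patch
introduces the struct empty):

    /* Info shared by all vhost_vdpa device models */
    typedef struct vhost_vdpa_shared {
        int device_fd;            /* one fd for the whole device */
        MemoryListener listener;  /* one listener for the whole device */
        struct vhost_vdpa_iova_range iova_range;

        /* IOVA mapping used by the Shadow Virtqueue */
        VhostIOVATree *iova_tree;

        bool iotlb_batch_begin_sent;
        bool shadow_data;
    } VhostVDPAShared;
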
The next patches will register the vhost_vdpa memory listener at the
destination at the beginning of the VM migration. This allows QEMU to
map the guest memory to the device before the VM is stopped at the
source, instead of doing it while both source and destination are
stopped, thus minimizing the downtime.
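
In rough terms, the destination will end up doing something like the
sketch below at migration .load_setup time, while the source is still
running. This is only an illustration: the listener is moved into the
shared struct and the load_setup hook is added later in this series, so
the exact names are assumptions here.

    /*
     * Destination, .load_setup: register the listener early so guest
     * memory is pinned and mapped to the device while the source still
     * runs, shrinking the final stop-and-copy window.
     */
    memory_listener_register(&v->shared->listener, vdev->dma_as);
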
However, the destination QEMU does not know in advance which vhost_vdpa
struct will register its memory_listener: if the source guest has CVQ
enabled, it will be the one associated with the CVQ; otherwise, it will
be the first one.
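
The shared struct removes that ambiguity by tying ownership to the queue
pair index instead. Condensing the allocation policy from the diff
below: the first vhost_vdpa (index 0) allocates the struct (and frees it
in cleanup), and every other instance borrows the pointer.

    if (queue_pair_index == 0) {
        /* First instance: allocate the per-device shared state. */
        s->vhost_vdpa.shared = g_new0(VhostVDPAShared, 1);
    } else {
        /* Other data queues and CVQ: borrow the pointer from ncs[0]. */
        s->vhost_vdpa.shared = shared;
    }
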
Save the members related to memory operations in a common place, rather
than always in the first / last vhost_vdpa.
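
For example, once the iova_tree moves into the shared struct (next
patch), any queue pair can reach it through the same pointer; the "map"
and "needle" variables below are illustrative:

    /* Before: only valid on the vhost_vdpa that owns the tree. */
    map = vhost_iova_tree_find_iova(s->vhost_vdpa.iova_tree, &needle);

    /* After: every vhost_vdpa of the device sees the same state. */
    map = vhost_iova_tree_find_iova(s->vhost_vdpa.shared->iova_tree,
                                    &needle);
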
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 include/hw/virtio/vhost-vdpa.h |  5 +++++
 net/vhost-vdpa.c               | 24 ++++++++++++++++++++++--
 2 files changed, 27 insertions(+), 2 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 5407d54fd7..eb1a56d75a 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -30,6 +30,10 @@ typedef struct VhostVDPAHostNotifier {
     void *addr;
 } VhostVDPAHostNotifier;
 
+/* Info shared by all vhost_vdpa device models */
+typedef struct vhost_vdpa_shared {
+} VhostVDPAShared;
+
 typedef struct vhost_vdpa {
     int device_fd;
     int index;
@@ -46,6 +50,7 @@ typedef struct vhost_vdpa {
     bool suspended;
     /* IOVA mapping used by the Shadow Virtqueue */
     VhostIOVATree *iova_tree;
+    VhostVDPAShared *shared;
     GPtrArray *shadow_vqs;
     const VhostShadowVirtqueueOps *shadow_vq_ops;
     void *shadow_vq_ops_opaque;
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 939c984d5b..a2f9855288 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -238,6 +238,10 @@ static void vhost_vdpa_cleanup(NetClientState *nc)
         qemu_close(s->vhost_vdpa.device_fd);
         s->vhost_vdpa.device_fd = -1;
     }
+    if (s->vhost_vdpa.index != 0) {
+        return;
+    }
+    g_free(s->vhost_vdpa.shared);
 }
 
 static bool vhost_vdpa_has_vnet_hdr(NetClientState *nc)
@@ -1428,6 +1432,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
                                            bool svq,
                                            struct vhost_vdpa_iova_range iova_range,
                                            uint64_t features,
+                                           VhostVDPAShared *shared,
                                            Error **errp)
 {
     NetClientState *nc = NULL;
@@ -1463,6 +1468,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
     if (queue_pair_index == 0) {
         vhost_vdpa_net_valid_svq_features(features,
                                           &s->vhost_vdpa.migration_blocker);
+        s->vhost_vdpa.shared = g_new0(VhostVDPAShared, 1);
     } else if (!is_datapath) {
         s->cvq_cmd_out_buffer = mmap(NULL, vhost_vdpa_net_cvq_cmd_page_len(),
                                      PROT_READ | PROT_WRITE,
@@ -1475,11 +1481,16 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
         s->vhost_vdpa.shadow_vq_ops_opaque = s;
         s->cvq_isolated = cvq_isolated;
     }
 
+    if (queue_pair_index != 0) {
+        s->vhost_vdpa.shared = shared;
+    }
+
     ret = vhost_vdpa_add(nc, (void *)&s->vhost_vdpa, queue_pair_index, nvqs);
     if (ret) {
         qemu_del_net_client(nc);
         return NULL;
     }
+
     return nc;
 }
@@ -1591,17 +1602,26 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
     ncs = g_malloc0(sizeof(*ncs) * queue_pairs);
 
     for (i = 0; i < queue_pairs; i++) {
+        VhostVDPAShared *shared = NULL;
+
+        if (i) {
+            shared = DO_UPCAST(VhostVDPAState, nc, ncs[0])->vhost_vdpa.shared;
+        }
         ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
                                      vdpa_device_fd, i, 2, true, opts->x_svq,
-                                     iova_range, features, errp);
+                                     iova_range, features, shared, errp);
         if (!ncs[i])
             goto err;
     }
 
     if (has_cvq) {
+        VhostVDPAState *s0 = DO_UPCAST(VhostVDPAState, nc, ncs[0]);
+        VhostVDPAShared *shared = s0->vhost_vdpa.shared;
+
         nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
                                  vdpa_device_fd, i, 1, false,
-                                 opts->x_svq, iova_range, features, errp);
+                                 opts->x_svq, iova_range, features, shared,
+                                 errp);
         if (!nc)
             goto err;
     }
--
2.39.3