From: Si-Wei Liu <si-wei.liu@oracle.com>
To: "Eugenio Pérez" <eperezma@redhat.com>, qemu-devel@nongnu.org
Cc: Shannon <shannon.nelson@amd.com>,
Parav Pandit <parav@mellanox.com>,
Stefano Garzarella <sgarzare@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
yin31149@gmail.com, Jason Wang <jasowang@redhat.com>,
Yajun Wu <yajunw@nvidia.com>,
Zhu Lingshan <lingshan.zhu@intel.com>,
Lei Yang <leiyang@redhat.com>,
Dragos Tatulea <dtatulea@nvidia.com>,
Juan Quintela <quintela@redhat.com>,
Laurent Vivier <lvivier@redhat.com>,
Gautam Dawar <gdawar@xilinx.com>
Subject: Re: [RFC PATCH 15/18] vdpa: add vhost_vdpa_load_setup
Date: Thu, 2 Nov 2023 01:48:21 -0700
Message-ID: <00fe0c0b-267c-4d1f-8f0c-efdd8c166002@oracle.com>
In-Reply-To: <20231019143455.2377694-16-eperezma@redhat.com>
On 10/19/2023 7:34 AM, Eugenio Pérez wrote:
> Callers can use this function to setup the incoming migration.
>
> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
> ---
> include/hw/virtio/vhost-vdpa.h | 7 +++++++
> hw/virtio/vhost-vdpa.c | 17 ++++++++++++++++-
> 2 files changed, 23 insertions(+), 1 deletion(-)
>
> diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
> index 8f54e5edd4..edc08b7a02 100644
> --- a/include/hw/virtio/vhost-vdpa.h
> +++ b/include/hw/virtio/vhost-vdpa.h
> @@ -45,6 +45,12 @@ typedef struct vhost_vdpa_shared {
>
> bool iotlb_batch_begin_sent;
>
> + /*
> + * The memory listener has been registered, so DMA maps have been sent to
> + * the device.
> + */
> + bool listener_registered;
> +
> /* Vdpa must send shadow addresses as IOTLB key for data queues, not GPA */
> bool shadow_data;
> } VhostVDPAShared;
> @@ -73,6 +79,7 @@ int vhost_vdpa_dma_map(VhostVDPAShared *s, uint32_t asid, hwaddr iova,
> hwaddr size, void *vaddr, bool readonly);
> int vhost_vdpa_dma_unmap(VhostVDPAShared *s, uint32_t asid, hwaddr iova,
> hwaddr size);
> +int vhost_vdpa_load_setup(VhostVDPAShared *s, AddressSpace *dma_as);
>
> typedef struct vdpa_iommu {
> VhostVDPAShared *dev_shared;
> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
> index cc252fc2d8..bfbe4673af 100644
> --- a/hw/virtio/vhost-vdpa.c
> +++ b/hw/virtio/vhost-vdpa.c
> @@ -1325,7 +1325,9 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
> "IOMMU and try again");
> return -1;
> }
> - memory_listener_register(&v->shared->listener, dev->vdev->dma_as);
> + if (!v->shared->listener_registered) {
> + memory_listener_register(&v->shared->listener, dev->vdev->dma_as);
> + }
Set listener_registered to true after registration here. In addition, it
looks like memory_listener_unregister() in vhost_vdpa_reset_status()
doesn't clear the listener_registered flag after unregistration. That
code path can be called during SVQ switching; without clearing the flag,
the mappings can't be added back after a couple of rounds of SVQ
switching or live migration.

-Siwei
>
> return vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_DRIVER_OK);
> }
> @@ -1528,3 +1530,16 @@ const VhostOps vdpa_ops = {
> .vhost_set_config_call = vhost_vdpa_set_config_call,
> .vhost_reset_status = vhost_vdpa_reset_status,
> };
> +
> +int vhost_vdpa_load_setup(VhostVDPAShared *shared, AddressSpace *dma_as)
> +{
> + uint8_t s = VIRTIO_CONFIG_S_ACKNOWLEDGE | VIRTIO_CONFIG_S_DRIVER;
> + int r = ioctl(shared->device_fd, VHOST_VDPA_SET_STATUS, &s);
> + if (unlikely(r < 0)) {
> + return r;
> + }
> +
> + memory_listener_register(&shared->listener, dma_as);
> + shared->listener_registered = true;
> + return 0;
> +}