From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Alex Williamson <alex.williamson@redhat.com>
Cc: cohuck@redhat.com, cjia@nvidia.com, aik@ozlabs.ru,
Zhengxiao.zx@alibaba-inc.com, shuangtai.tst@alibaba-inc.com,
qemu-devel@nongnu.org, peterx@redhat.com,
Kirti Wankhede <kwankhede@nvidia.com>,
eauger@redhat.com, yi.l.liu@intel.com, quintela@redhat.com,
ziye.yang@intel.com, armbru@redhat.com, mlevitsk@redhat.com,
pasic@linux.ibm.com, felipe@nutanix.com, zhi.a.wang@intel.com,
kevin.tian@intel.com, yan.y.zhao@intel.com,
changpeng.liu@intel.com, eskultet@redhat.com, Ken.Xue@amd.com,
jonathan.davies@nutanix.com, pbonzini@redhat.com
Subject: Re: [PATCH QEMU v25 09/17] vfio: Add load state functions to SaveVMHandlers
Date: Fri, 26 Jun 2020 15:54:00 +0100
Message-ID: <20200626145400.GM3087@work-vm>
In-Reply-To: <20200624125437.664869ce@x1.home>

* Alex Williamson (alex.williamson@redhat.com) wrote:
> On Sun, 21 Jun 2020 01:51:18 +0530
> Kirti Wankhede <kwankhede@nvidia.com> wrote:
>
> > Sequence during the _RESUMING device state:
> > While data for this device is available, repeat the steps below:
> > a. read data_offset, which tells the user application where to write data.
> > b. write data of data_size to the migration region at data_offset.
> > c. write data_size, which indicates to the vendor driver that the data has
> >    been written to the staging buffer.
> >
> > To the user, the data is opaque. The user should write the data in the
> > same order as it was received.
> >
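
(To spell out the _RESUMING sequence above -- purely as an illustration, not
part of the patch -- a userspace-side write loop against the migration region
might look roughly like the sketch below. device_fd, region_fd_offset,
more_device_data(), next_chunk_size() and write_chunk_to_region() are all
hypothetical; struct vfio_device_migration_info comes from linux/vfio.h.)

    while (more_device_data(f)) {
        uint64_t data_offset, data_size = next_chunk_size(f);

        /* a. read data_offset: where the vendor driver wants the data */
        pread(device_fd, &data_offset, sizeof(data_offset),
              region_fd_offset +
              offsetof(struct vfio_device_migration_info, data_offset));

        /* b. write data_size bytes of opaque device data at that offset */
        write_chunk_to_region(device_fd, f, data_offset, data_size);

        /* c. write data_size to tell the vendor driver the data is staged */
        pwrite(device_fd, &data_size, sizeof(data_size),
               region_fd_offset +
               offsetof(struct vfio_device_migration_info, data_size));
    }
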
> > Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> > Reviewed-by: Neo Jia <cjia@nvidia.com>
> > Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> > ---
> > hw/vfio/migration.c | 177 +++++++++++++++++++++++++++++++++++++++++++++++++++
> > hw/vfio/trace-events | 3 +
> > 2 files changed, 180 insertions(+)
> >
> > diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
> > index ef1150c1ff02..faacea5327cb 100644
> > --- a/hw/vfio/migration.c
> > +++ b/hw/vfio/migration.c
> > @@ -302,6 +302,33 @@ static int vfio_save_device_config_state(QEMUFile *f, void *opaque)
> > return qemu_file_get_error(f);
> > }
> >
> > +static int vfio_load_device_config_state(QEMUFile *f, void *opaque)
> > +{
> > + VFIODevice *vbasedev = opaque;
> > + uint64_t data;
> > +
> > + if (vbasedev->ops && vbasedev->ops->vfio_load_config) {
> > + int ret;
> > +
> > + ret = vbasedev->ops->vfio_load_config(vbasedev, f);
> > + if (ret) {
> > + error_report("%s: Failed to load device config space",
> > + vbasedev->name);
> > + return ret;
> > + }
> > + }
> > +
> > + data = qemu_get_be64(f);
> > + if (data != VFIO_MIG_FLAG_END_OF_STATE) {
> > + error_report("%s: Failed loading device config space, "
> > + "end flag incorrect 0x%"PRIx64, vbasedev->name, data);
> > + return -EINVAL;
> > + }
> > +
> > + trace_vfio_load_device_config_state(vbasedev->name);
> > + return qemu_file_get_error(f);
> > +}
> > +
> > /* ---------------------------------------------------------------------- */
> >
> > static int vfio_save_setup(QEMUFile *f, void *opaque)
> > @@ -472,12 +499,162 @@ static int vfio_save_complete_precopy(QEMUFile *f, void *opaque)
> > return ret;
> > }
> >
> > +static int vfio_load_setup(QEMUFile *f, void *opaque)
> > +{
> > + VFIODevice *vbasedev = opaque;
> > + VFIOMigration *migration = vbasedev->migration;
> > + int ret = 0;
> > +
> > + if (migration->region.mmaps) {
> > + ret = vfio_region_mmap(&migration->region);
> > + if (ret) {
> > + error_report("%s: Failed to mmap VFIO migration region %d: %s",
> > + vbasedev->name, migration->region.nr,
> > + strerror(-ret));
> > + return ret;
>
>
> Not fatal.
>
>
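
(For illustration only -- one way to make this non-fatal, as the comment
suggests, would be to warn and fall back to read/write access to the region,
which the non-mmap branch in vfio_load_state() below already handles. A
sketch, not the patch author's code; warn_report() is QEMU's existing
warning helper:)

    if (migration->region.mmaps) {
        ret = vfio_region_mmap(&migration->region);
        if (ret) {
            /* Degrade gracefully: the pread()/pwrite() path still works. */
            warn_report("%s: Failed to mmap VFIO migration region %d: %s",
                        vbasedev->name, migration->region.nr,
                        strerror(-ret));
        }
    }
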
> > + }
> > + }
> > +
> > + ret = vfio_migration_set_state(vbasedev, ~VFIO_DEVICE_STATE_MASK,
> > + VFIO_DEVICE_STATE_RESUMING);
> > + if (ret) {
> > + error_report("%s: Failed to set state RESUMING", vbasedev->name);
> > + }
> > + return ret;
> > +}
> > +
> > +static int vfio_load_cleanup(void *opaque)
> > +{
> > + vfio_save_cleanup(opaque);
> > + return 0;
> > +}
> > +
> > +static int vfio_load_state(QEMUFile *f, void *opaque, int version_id)
> > +{
> > + VFIODevice *vbasedev = opaque;
> > + VFIOMigration *migration = vbasedev->migration;
> > + int ret = 0;
> > + uint64_t data, data_size;
> > +
> > + data = qemu_get_be64(f);
> > + while (data != VFIO_MIG_FLAG_END_OF_STATE) {
> > +
> > + trace_vfio_load_state(vbasedev->name, data);
> > +
> > + switch (data) {
> > + case VFIO_MIG_FLAG_DEV_CONFIG_STATE:
> > + {
> > + ret = vfio_load_device_config_state(f, opaque);
> > + if (ret) {
> > + return ret;
> > + }
> > + break;
> > + }
> > + case VFIO_MIG_FLAG_DEV_SETUP_STATE:
> > + {
> > + data = qemu_get_be64(f);
> > + if (data == VFIO_MIG_FLAG_END_OF_STATE) {
> > + return ret;
> > + } else {
> > + error_report("%s: SETUP STATE: EOS not found 0x%"PRIx64,
> > + vbasedev->name, data);
> > + return -EINVAL;
>
> This is essentially just a compatibility failure, right? For instance
> some future version of QEMU might include additional data between these
> markers that we don't understand and therefore we fail the migration.
Or any other screw-up in the data layout; we've found that having a canary at
the end of the state is quite useful for when we screw up for one reason or
another.
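
A minimal sketch of the canary pattern, using the flag names from this
series (the load side below is exactly this check):

    /* Save side: terminate each section with a sentinel value. */
    qemu_put_be64(f, VFIO_MIG_FLAG_END_OF_STATE);

    /* Load side: a mismatch means the stream layout diverged somewhere,
     * so fail the migration rather than misparse the rest of the stream. */
    if (qemu_get_be64(f) != VFIO_MIG_FLAG_END_OF_STATE) {
        return -EINVAL;
    }
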
Dave
> Thanks,
>
> Alex
>
> > + }
> > + break;
> > + }
> > + case VFIO_MIG_FLAG_DEV_DATA_STATE:
> > + {
> > + VFIORegion *region = &migration->region;
> > + uint64_t data_offset = 0, size;
> > +
> > + data_size = size = qemu_get_be64(f);
> > + if (data_size == 0) {
> > + break;
> > + }
> > +
> > + ret = pread(vbasedev->fd, &data_offset, sizeof(data_offset),
> > + region->fd_offset +
> > + offsetof(struct vfio_device_migration_info,
> > + data_offset));
> > + if (ret != sizeof(data_offset)) {
> > + error_report("%s:Failed to get migration buffer data offset %d",
> > + vbasedev->name, ret);
> > + return -EINVAL;
> > + }
> > +
> > + trace_vfio_load_state_device_data(vbasedev->name, data_offset,
> > + data_size);
> > +
> > + while (size) {
> > + void *buf = NULL;
> > + uint64_t sec_size;
> > + bool buffer_mmaped;
> > +
> > + buf = get_data_section_size(region, data_offset, size,
> > + &sec_size);
> > +
> > + buffer_mmaped = (buf != NULL);
> > +
> > + if (!buffer_mmaped) {
> > + buf = g_try_malloc(sec_size);
> > + if (!buf) {
> > + error_report("%s: Error allocating buffer ", __func__);
> > + return -ENOMEM;
> > + }
> > + }
> > +
> > + qemu_get_buffer(f, buf, sec_size);
> > +
> > + if (!buffer_mmaped) {
> > + ret = pwrite(vbasedev->fd, buf, sec_size,
> > + region->fd_offset + data_offset);
> > + g_free(buf);
> > +
> > + if (ret != sec_size) {
> > + error_report("%s: Failed to set migration buffer %d",
> > + vbasedev->name, ret);
> > + return -EINVAL;
> > + }
> > + }
> > + size -= sec_size;
> > + data_offset += sec_size;
> > + }
> > +
> > + ret = pwrite(vbasedev->fd, &data_size, sizeof(data_size),
> > + region->fd_offset +
> > + offsetof(struct vfio_device_migration_info, data_size));
> > + if (ret != sizeof(data_size)) {
> > + error_report("%s: Failed to set migration buffer data size %d",
> > + vbasedev->name, ret);
> > + return -EINVAL;
> > + }
> > + break;
> > + }
> > +
> > + default:
> > + error_report("%s: Unknown tag 0x%"PRIx64, vbasedev->name, data);
> > + return -EINVAL;
> > + }
> > +
> > + data = qemu_get_be64(f);
> > + ret = qemu_file_get_error(f);
> > + if (ret) {
> > + return ret;
> > + }
> > + }
> > +
> > + return ret;
> > +}
> > +
> > static SaveVMHandlers savevm_vfio_handlers = {
> > .save_setup = vfio_save_setup,
> > .save_cleanup = vfio_save_cleanup,
> > .save_live_pending = vfio_save_pending,
> > .save_live_iterate = vfio_save_iterate,
> > .save_live_complete_precopy = vfio_save_complete_precopy,
> > + .load_setup = vfio_load_setup,
> > + .load_cleanup = vfio_load_cleanup,
> > + .load_state = vfio_load_state,
> > };
> >
> > /* ---------------------------------------------------------------------- */
> > diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events
> > index 9a1c5e17d97f..4a4bd3ba9a2a 100644
> > --- a/hw/vfio/trace-events
> > +++ b/hw/vfio/trace-events
> > @@ -157,3 +157,6 @@ vfio_save_device_config_state(const char *name) " (%s)"
> > vfio_save_pending(const char *name, uint64_t precopy, uint64_t postcopy, uint64_t compatible) " (%s) precopy 0x%"PRIx64" postcopy 0x%"PRIx64" compatible 0x%"PRIx64
> > vfio_save_iterate(const char *name, int data_size) " (%s) data_size %d"
> > vfio_save_complete_precopy(const char *name) " (%s)"
> > +vfio_load_device_config_state(const char *name) " (%s)"
> > +vfio_load_state(const char *name, uint64_t data) " (%s) data 0x%"PRIx64
> > +vfio_load_state_device_data(const char *name, uint64_t data_offset, uint64_t data_size) " (%s) Offset 0x%"PRIx64" size 0x%"PRIx64
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK