From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Yan Zhao <yan.y.zhao@intel.com>
Cc: "alex.williamson@redhat.com" <alex.williamson@redhat.com>,
	"Tian, Kevin" <kevin.tian@intel.com>,
	"Liu, Yi L" <yi.l.liu@intel.com>,
	"cjia@nvidia.com" <cjia@nvidia.com>,
	"eskultet@redhat.com" <eskultet@redhat.com>,
	"Yang, Ziye" <ziye.yang@intel.com>,
	"Ken.Xue@amd.com" <Ken.Xue@amd.com>,
	"Zhengxiao.zx@Alibaba-inc.com" <Zhengxiao.zx@alibaba-inc.com>,
	"shuangtai.tst@alibaba-inc.com" <shuangtai.tst@alibaba-inc.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"mlevitsk@redhat.com" <mlevitsk@redhat.com>,
	"pasic@linux.ibm.com" <pasic@linux.ibm.com>,
	"aik@ozlabs.ru" <aik@ozlabs.ru>,
	Kirti Wankhede <kwankhede@nvidia.com>,
	"eauger@redhat.com" <eauger@redhat.com>,
	"cohuck@redhat.com" <cohuck@redhat.com>,
	"jonathan.davies@nutanix.com" <jonathan.davies@nutanix.com>,
	"felipe@nutanix.com" <felipe@nutanix.com>,
	"Liu, Changpeng" <changpeng.liu@intel.com>,
	"Wang, Zhi A" <zhi.a.wang@intel.com>
Subject: Re: [Qemu-devel] [PATCH v7 00/13] Add migration support for VFIO device
Date: Wed, 24 Jul 2019 12:32:46 +0100	[thread overview]
Message-ID: <20190724113246.GF2717@work-vm> (raw)
In-Reply-To: <20190719012328.GF8912@joy-OptiPlex-7040>

* Yan Zhao (yan.y.zhao@intel.com) wrote:
> On Fri, Jul 19, 2019 at 02:32:33AM +0800, Kirti Wankhede wrote:
> > 
> > On 7/12/2019 6:02 AM, Yan Zhao wrote:
> > > On Fri, Jul 12, 2019 at 03:08:31AM +0800, Kirti Wankhede wrote:
> > >>
> > >>
> > >> On 7/11/2019 9:53 PM, Dr. David Alan Gilbert wrote:
> > >>> * Yan Zhao (yan.y.zhao@intel.com) wrote:
> > >>>> On Thu, Jul 11, 2019 at 06:50:12PM +0800, Dr. David Alan Gilbert wrote:
> > >>>>> * Yan Zhao (yan.y.zhao@intel.com) wrote:
> > >>>>>> Hi Kirti,
> > >>>>>> There are still unaddressed comments on your v4 patches.
> > >>>>>> Would you mind addressing them?
> > >>>>>>
> > >>>>>> 1. should we register two migration interfaces simultaneously
> > >>>>>> (https://lists.gnu.org/archive/html/qemu-devel/2019-06/msg04750.html)
> > >>>>>
> > >>>>> Please don't do this.
> > >>>>> As far as I'm aware we currently only have one device that does that
> > >>>>> (vmxnet3) and a patch has just been posted that fixes/removes that.
> > >>>>>
> > >>>>> Dave
> > >>>>>
> > >>>> hi Dave,
> > >>>> Thanks for pointing this out. But if we want to support postcopy in
> > >>>> the future, after the device stops, what interface could we use to
> > >>>> transfer device state data only?
> > >>>> For postcopy, when the source device stops, we need to transfer only
> > >>>> the necessary device state to the target VM before the target VM
> > >>>> starts, and we don't want to transfer device memory, as we'll do that
> > >>>> after the target VM resumes.
> > >>>
> > >>> Hmm ok, let's see; that's got to happen in the call to:
> > >>>     qemu_savevm_state_complete_precopy(fb, false, false);
> > >>> that's made from postcopy_start.
> > >>>  (the false's are iterable_only and inactivate_disks)
> > >>>
> > >>> and at that time I believe the state is POSTCOPY_ACTIVE, so in_postcopy
> > >>> is true.
> > >>>
> > >>> If you're doing postcopy, then you'll probably define a has_postcopy()
> > >>> function, so qemu_savevm_state_complete_precopy will skip the
> > >>> save_live_complete_precopy call from its loop for at least two of the
> > >>> reasons in its big if.
> > >>>
> > >>> So you're right; you need the VMSD for this to happen in the second
> > >>> loop in qemu_savevm_state_complete_precopy.  Hmm.
> > >>>
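> > >>> For reference, the shape of that function, trimmed from
> > >>> migration/savevm.c (approximate; error handling dropped):
> > >>>
> > >>>     bool in_postcopy = migration_in_postcopy();
> > >>>
> > >>>     /* First loop: iterable device data, sent as SECTION_END */
> > >>>     QTAILQ_FOREACH(se, &savevm_state.handlers, entry) {
> > >>>         if (!se->ops ||
> > >>>             (in_postcopy && se->ops->has_postcopy &&
> > >>>              se->ops->has_postcopy(se->opaque)) ||
> > >>>             (in_postcopy && !iterable_only) ||
> > >>>             !se->ops->save_live_complete_precopy) {
> > >>>             continue;    /* postcopy-capable devices skip this */
> > >>>         }
> > >>>         save_section_header(f, se, QEMU_VM_SECTION_END);
> > >>>         se->ops->save_live_complete_precopy(f, se->opaque);
> > >>>         save_section_footer(f, se);
> > >>>     }
> > >>>
> > >>>     /* Second loop: VMSD (non-iterable) state, sent as SECTION_FULL */
> > >>>     QTAILQ_FOREACH(se, &savevm_state.handlers, entry) {
> > >>>         if (se->vmsd && !vmstate_save_needed(se->vmsd, se->opaque)) {
> > >>>             continue;
> > >>>         }
> > >>>         save_section_header(f, se, QEMU_VM_SECTION_FULL);
> > >>>         vmstate_save(f, se, vmdesc);
> > >>>         save_section_footer(f, se);
> > >>>     }
> > >>>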
> > >>> Now, what worries me, and I don't know the answer, is how the section
> > >>> header for the vmstate and the section header for an iteration look
> > >>> on the stream; how are they different?
> > >>>
> > >>
> > >> I don't have a way to test postcopy migration - that is one of the major
> > >> reasons I did not include postcopy support in this patchset, as clearly
> > >> called out in the cover letter.
> > >> This patchset is thoroughly tested for precopy migration.
> > >> If anyone has hardware that supports faulting, then I would prefer to
> > >> add postcopy support as an incremental change later, which can be
> > >> tested before submitting.
> > >>
> > >> Just a suggestion, instead of using VMSD, is it possible to have some
> > >> additional check to call save_live_complete_precopy from
> > >> qemu_savevm_state_complete_precopy?
> > >>
> > >>
> > >>>>
> > >>>>>> 2. in each save iteration, how much data is to be saved
> > >>>>>> (https://lists.gnu.org/archive/html/qemu-devel/2019-06/msg04683.html)
> > >>
> > >>> how big is data_size?
> > >>> if this size is too big, it may take too much time and block others.
> > >>
> > >> I did mention this in the comment about the structure in the vfio.h
> > >> header. data_size will be provided by the vendor driver and obviously
> > >> will not be greater than the migration region size. The vendor driver
> > >> is responsible for keeping its solution optimized.
> > >>
> > > If data_size is no bigger than the migration region size, and each
> > > iteration only saves data_size worth of data, I'm afraid it will cause
> > > prolonged downtime. After all, the migration region size cannot be very
> > > big.
> > 
> > As I mentioned above, it's the vendor driver's responsibility to keep
> > its solution optimized.
> > A well-behaved vendor driver should not cause unnecessarily prolonged
> > downtime.
> >
> I think the vendor driver can determine data_size, but QEMU has to decide
> how much data to transmit according to the actual transfer time (or
> transfer speed): when the transfer speed is high, transmit more data per
> iteration; when it is low, transmit less. This timing knowledge is not
> available to the vendor driver, and even if it had it, could it
> dynamically change the data region size?
> Maybe you can say the vendor driver can register a big enough region and
> dynamically use part of it, but what counts as big enough, and it really
> costs memory.
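>
> For comparison, QEMU's RAM migration already bounds each iteration with a
> bandwidth budget - roughly, from ram_save_iterate() in migration/ram.c
> (trimmed, details approximate):
>
>     /* QEMU, not the device, decides when an iteration stops:
>      * qemu_file_rate_limit() returns non-zero once this cycle's
>      * bandwidth budget is used up. */
>     while (qemu_file_rate_limit(f) == 0) {
>         int pages = ram_find_and_save_block(rs, false);
>         if (pages == 0) {
>             break;    /* nothing dirty left this round */
>         }
>     }
>
> Something similar at the VFIO layer could cap how much vendor data is
> pulled per iteration, independent of the migration region size.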
> 
> > > Also, if the vendor driver alone determines how much data to save in
> > > each iteration, with no checks in QEMU, it may cause other devices'
> > > migration time to be squeezed.
> > > 
> > 
> > Sorry, how will that squeeze other device's migration time?
> > 
> if a vendor driver has an extremely large amount of data to transmit,
> other devices have to wait until it finishes before transmitting their
> own data. In a given time period, that can be seen as their time slot
> being squeezed.
> I'm not sure whether that's good practice.

I don't think we have anything for fairly allocating bandwidth/time
among devices; that would be pretty hard.

I think the only thing we currently have is the final downtime
heuristic, i.e. the threshold you should be below at the point where
we decide to stop iterating.
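
For reference, that heuristic looks roughly like this (simplified from
migration/migration.c, names approximate):

    /* Periodically: estimate bandwidth, derive the stop threshold */
    bandwidth = transferred / time_spent;             /* bytes per ms */
    s->threshold_size = bandwidth * s->parameters.downtime_limit;

    /* Each cycle: ask every device how much is still pending */
    qemu_savevm_state_pending(s->to_dst_file, s->threshold_size,
                              &pend_pre, &pend_compat, &pend_post);
    pending_size = pend_pre + pend_compat + pend_post;

    if (pending_size && pending_size >= s->threshold_size) {
        qemu_savevm_state_iterate(s->to_dst_file, in_postcopy);
    } else {
        migration_completion(s);    /* stop the VM and do the final flush */
    }

so a vendor driver that reports huge pending totals just delays the
switchover; nothing rebalances time between devices within an iteration.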

Dave

> > >>
> > >>>>>> 3. do we need an extra interface to get data for device state only
> > >>>>>> (https://lists.gnu.org/archive/html/qemu-devel/2019-06/msg04812.html)
> > >>
> > >> I don't think so. Opaque device data from the vendor driver can include
> > >> both device state and device memory. The vendor driver managing the
> > >> device can decide how to lay the data out on the stream.
> > >>
> > > I know the current design uses opaque device data. Then, to support
> > > postcopy, we may have to add an extra device state like in-postcopy.
> > > But postcopy is more of a QEMU state than a device state.
> > 
> > One bit of device_state can be used to inform the vendor driver about
> > postcopy, once postcopy support is added.
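> >
> > Something like this in the state bits (values illustrative; the
> > _POSTCOPY bit is hypothetical, not part of this series):
> >
> >     #define VFIO_DEVICE_STATE_RUNNING    (1 << 0)
> >     #define VFIO_DEVICE_STATE_SAVING     (1 << 1)
> >     #define VFIO_DEVICE_STATE_RESUMING   (1 << 2)
> >     /* hypothetical future bit telling the vendor driver we're in postcopy */
> >     #define VFIO_DEVICE_STATE_POSTCOPY   (1 << 3)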
> >
> ok. if you insist on that, one bit in device_state is also fine, as long
> as everyone agrees on it :)
> 
> > > to address it elegantly, could we add an extra interface besides
> > > vfio_save_buffer() to get data for device state only?
> > > 
> > 
> > When in the postcopy state, based on the device_state flag, the vendor
> > driver can copy the device state into the migration region first. I
> > still think there is no need to separate device state and device memory.
> >
> it's the difference between more device_state flags and more interfaces.
> 
> > >>>>>> 4. definition of dirty page copied_pfn
> > >>>>>> (https://lists.gnu.org/archive/html/qemu-devel/2019-06/msg05592.html)
> > >>>>>>
> > >>
> > >> This was part of the ongoing discussion with Alex; I addressed the
> > >> concern there. Please check the current patchset, which addresses the
> > >> concerns raised.
> > >>
> > > ok. I saw you also updated the flow in that part. please check my comment
> > > in that patch for details. but as a suggestion, I think processed_pfns is
> > > a better name than copied_pfns :)
> > > 
> > 
> > The vendor driver can process total_pfns, but may copy the bitmap for
> > only some pfns to the migration region. One reason could be that the
> > migration region cannot accommodate a bitmap of total_pfns size. So it
> > could be that 0 < copied_pfns < total_pfns. That's why the name
> > 'copied_pfns'. I'll continue with 'copied_pfns'.
> >
> so it's the bitmap's pfn count, right?
> besides VFIO_DEVICE_DIRTY_PFNS_NONE and VFIO_DEVICE_DIRTY_PFNS_ALL, which
> indicate no dirty and all dirty pages in total_pfns, why not add two more
> flags, e.g.
> VFIO_DEVICE_DIRTY_PFNS_ONE_ITERATION and
> VFIO_DEVICE_DIRTY_PFNS_ALL_ONE_ITERATION, to skip copying the bitmap from
> the kernel when the bitmap copied in one iteration is all 0s or all 1s?
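>
> In pseudo-code, the consumer loop I have in mind (the helpers and the
> per-iteration flags are hypothetical; only _NONE/_ALL are in the current
> proposal):
>
>     uint64_t processed = 0;
>     while (processed < total_pfns) {
>         uint64_t copied_pfns = read_copied_pfns(region);   /* hypothetical */
>         if (copied_pfns == VFIO_DEVICE_DIRTY_PFNS_NONE) {
>             break;                      /* nothing dirty at all */
>         }
>         if (copied_pfns == VFIO_DEVICE_DIRTY_PFNS_ALL) {
>             mark_all_dirty(start_pfn, total_pfns);         /* hypothetical */
>             break;
>         }
>         /* otherwise copy a bitmap chunk covering copied_pfns pages */
>         read_bitmap_chunk(region, processed, copied_pfns); /* hypothetical */
>         processed += copied_pfns;
>     }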
> 
> Thanks
> Yan
> > 
> > >>>>>> Also, I'm glad to see that you updated the code following my comments
> > >>>>>> below, but please don't forget to reply to my comments next time :)
> > >>
> > >> I tried to reply at the top of threads and address the common concerns
> > >> raised there. Sorry if I missed any; I'll make sure to point you to my
> > >> replies going forward.
> > >>
> > > ok, let's cooperate :)
> > > 
> > > Thanks
> > > Yan
> > > 
> > >> Thanks,
> > >> Kirti
> > >>
> > >>>>>> https://lists.gnu.org/archive/html/qemu-devel/2019-06/msg05357.html
> > >>>>>> https://lists.gnu.org/archive/html/qemu-devel/2019-06/msg06454.html
> > >>>>>>
> > >>>>>> Thanks
> > >>>>>> Yan
> > >>>>>>
> > >>>>>> On Tue, Jul 09, 2019 at 05:49:07PM +0800, Kirti Wankhede wrote:
> > >>>>>>> Add migration support for VFIO device
> > >>>>>>>
> > >>>>>>> This patch set includes the following patches:
> > >>>>>>> - Define KABI for VFIO device for migration support.
> > >>>>>>> - Added save and restore functions for PCI configuration space
> > >>>>>>> - Generic migration functionality for VFIO device.
> > >>>>>>>   * This patch set adds functionality only for PCI devices, but can be
> > >>>>>>>     extended to other VFIO devices.
> > >>>>>>>   * Added all the basic functions required for pre-copy, stop-and-copy and
> > >>>>>>>     resume phases of migration.
> > >>>>>>>   * Added a state change notifier; from that notifier function, the
> > >>>>>>>     VFIO device's state change is conveyed to the VFIO device driver.
> > >>>>>>>   * During save setup phase and resume/load setup phase, migration region
> > >>>>>>>     is queried and is used to read/write VFIO device data.
> > >>>>>>>   * .save_live_pending and .save_live_iterate are implemented to use QEMU's
> > >>>>>>>     functionality of iteration during pre-copy phase.
> > >>>>>>>   * In .save_live_complete_precopy, that is, in the stop-and-copy phase,
> > >>>>>>>     iteration to read data from the VFIO device driver is implemented
> > >>>>>>>     until the pending bytes returned by the driver reach zero (see the
> > >>>>>>>     sketch below).
> > >>>>>>>   * Added function to get dirty pages bitmap for the pages which are used by
> > >>>>>>>     driver.
> > >>>>>>> - Add vfio_listerner_log_sync to mark dirty pages.
> > >>>>>>> - Make the VFIO PCI device migration capable. If a migration region is
> > >>>>>>>   not provided by the driver, migration is blocked.
> > >>>>>>>
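> > >>>>>>> As a sketch, the stop-and-copy loop in .save_live_complete_precopy looks
> > >>>>>>> roughly like this (simplified, helper names approximate; error handling
> > >>>>>>> and locking omitted):
> > >>>>>>>
> > >>>>>>>     /* vCPUs are stopped; drain the remaining device data */
> > >>>>>>>     ret = vfio_update_pending(vbasedev);     /* refresh pending_bytes */
> > >>>>>>>     while (migration->pending_bytes > 0) {
> > >>>>>>>         ret = vfio_save_buffer(f, vbasedev); /* one data_offset/data_size chunk */
> > >>>>>>>         if (ret < 0) {
> > >>>>>>>             return ret;
> > >>>>>>>         }
> > >>>>>>>         ret = vfio_update_pending(vbasedev);
> > >>>>>>>     }
> > >>>>>>>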
> > >>>>>>> Below is the flow of state change for live migration where states in brackets
> > >>>>>>> represent VM state, migration state and VFIO device state as:
> > >>>>>>>     (VM state, MIGRATION_STATUS, VFIO_DEVICE_STATE)
> > >>>>>>>
> > >>>>>>> Live migration save path:
> > >>>>>>>         QEMU normal running state
> > >>>>>>>         (RUNNING, _NONE, _RUNNING)
> > >>>>>>>                         |
> > >>>>>>>     migrate_init spawns migration_thread.
> > >>>>>>>     (RUNNING, _SETUP, _RUNNING|_SAVING)
> > >>>>>>>     Migration thread then calls each device's .save_setup()
> > >>>>>>>                         |
> > >>>>>>>     (RUNNING, _ACTIVE, _RUNNING|_SAVING)
> > >>>>>>>     If the device is active, get pending bytes by .save_live_pending()
> > >>>>>>>     If pending bytes >= threshold_size, call save_live_iterate()
> > >>>>>>>     Data of the VFIO device for the pre-copy phase is copied.
> > >>>>>>>     Iterate until pending bytes converge and are less than the threshold
> > >>>>>>>                         |
> > >>>>>>>     On migration completion, vCPUs stop and .save_live_complete_precopy
> > >>>>>>>     is called for each active device. The VFIO device is then
> > >>>>>>>     transitioned into the _SAVING state.
> > >>>>>>>     (FINISH_MIGRATE, _DEVICE, _SAVING)
> > >>>>>>>     For the VFIO device, iterate in .save_live_complete_precopy until
> > >>>>>>>     pending data is 0.
> > >>>>>>>     (FINISH_MIGRATE, _DEVICE, _STOPPED)
> > >>>>>>>                         |
> > >>>>>>>     (FINISH_MIGRATE, _COMPLETED, _STOPPED)
> > >>>>>>>     Migration thread schedules the cleanup bottom half and exits
> > >>>>>>>
> > >>>>>>> Live migration resume path:
> > >>>>>>>     Incoming migration calls .load_setup for each device
> > >>>>>>>     (RESTORE_VM, _ACTIVE, _STOPPED)
> > >>>>>>>                         |
> > >>>>>>>     For each device, .load_state is called for that device section data
> > >>>>>>>                         |
> > >>>>>>>     At the end, .load_cleanup is called for each device and vCPUs are started.
> > >>>>>>>                         |
> > >>>>>>>         (RUNNING, _NONE, _RUNNING)
> > >>>>>>>
> > >>>>>>> Note that:
> > >>>>>>> - Postcopy migration is not supported.
> > >>>>>>>
> > >>>>>>> v6 -> v7:
> > >>>>>>> - Fix build failures.
> > >>>>>>>
> > >>>>>>> v5 -> v6:
> > >>>>>>> - Fix build failure.
> > >>>>>>>
> > >>>>>>> v4 -> v5:
> > >>>>>>> - Added a descriptive comment about the sequence in which members of
> > >>>>>>>   struct vfio_device_migration_info should be accessed, based on Alex's
> > >>>>>>>   suggestion.
> > >>>>>>> - Updated get dirty pages sequence.
> > >>>>>>> - As per Cornelia Huck's suggestion, added get_object, save_config and
> > >>>>>>>   load_config callbacks to VFIODeviceOps.
> > >>>>>>> - Fixed multiple nitpicks.
> > >>>>>>> - Tested live migration with multiple VFIO devices assigned to a VM.
> > >>>>>>>
> > >>>>>>> v3 -> v4:
> > >>>>>>> - Added one more bit for the _RESUMING flag, to be set explicitly.
> > >>>>>>> - The data_offset field is read-only for the user space application.
> > >>>>>>> - data_size is read on every iteration before reading data from the
> > >>>>>>>   migration region; this removes the assumption that the data extends to
> > >>>>>>>   the end of the migration region.
> > >>>>>>> - If the vendor driver supports mappable sparse regions, map those
> > >>>>>>>   regions during the setup state of save/load, and similarly unmap them
> > >>>>>>>   in the cleanup routines.
> > >>>>>>> - Handled a race condition that caused data corruption in the migration
> > >>>>>>>   region during device state save, by adding a mutex and serializing the
> > >>>>>>>   save_buffer and get_dirty_pages routines.
> > >>>>>>> - Skip calling the get_dirty_pages routine for the mapped MMIO region of
> > >>>>>>>   the device.
> > >>>>>>> - Added trace events.
> > >>>>>>> - Split into multiple functional patches.
> > >>>>>>>
> > >>>>>>> v2 -> v3:
> > >>>>>>> - Removed enum of VFIO device states. Defined VFIO device state with 2 bits.
> > >>>>>>> - Re-structured vfio_device_migration_info to keep it minimal, and
> > >>>>>>>   defined the action taken on read and write access to its members.
> > >>>>>>>
> > >>>>>>> v1 -> v2:
> > >>>>>>> - Defined MIGRATION region type and sub-type which should be used with region
> > >>>>>>>   type capability.
> > >>>>>>> - Re-structured vfio_device_migration_info. This structure will be placed
> > >>>>>>>   at offset 0 of the migration region.
> > >>>>>>> - Replaced ioctl with read/write for the trapped part of the migration
> > >>>>>>>   region.
> > >>>>>>> - Added support for both types of access, trapped and mmapped, for the
> > >>>>>>>   data section of the region.
> > >>>>>>> - Moved the PCI device functions to the pci file.
> > >>>>>>> - Added iteration to get the dirty page bitmap until the bitmap for all
> > >>>>>>>   requested pages is copied.
> > >>>>>>>
> > >>>>>>> Thanks,
> > >>>>>>> Kirti
> > >>>>>>>
> > >>>>>>> Kirti Wankhede (13):
> > >>>>>>>   vfio: KABI for migration interface
> > >>>>>>>   vfio: Add function to unmap VFIO region
> > >>>>>>>   vfio: Add vfio_get_object callback to VFIODeviceOps
> > >>>>>>>   vfio: Add save and load functions for VFIO PCI devices
> > >>>>>>>   vfio: Add migration region initialization and finalize function
> > >>>>>>>   vfio: Add VM state change handler to know state of VM
> > >>>>>>>   vfio: Add migration state change notifier
> > >>>>>>>   vfio: Register SaveVMHandlers for VFIO device
> > >>>>>>>   vfio: Add save state functions to SaveVMHandlers
> > >>>>>>>   vfio: Add load state functions to SaveVMHandlers
> > >>>>>>>   vfio: Add function to get dirty page list
> > >>>>>>>   vfio: Add vfio_listerner_log_sync to mark dirty pages
> > >>>>>>>   vfio: Make vfio-pci device migration capable.
> > >>>>>>>
> > >>>>>>>  hw/vfio/Makefile.objs         |   2 +-
> > >>>>>>>  hw/vfio/common.c              |  55 +++
> > >>>>>>>  hw/vfio/migration.c           | 874 ++++++++++++++++++++++++++++++++++++++++++
> > >>>>>>>  hw/vfio/pci.c                 | 137 ++++++-
> > >>>>>>>  hw/vfio/trace-events          |  19 +
> > >>>>>>>  include/hw/vfio/vfio-common.h |  25 ++
> > >>>>>>>  linux-headers/linux/vfio.h    | 166 ++++++++
> > >>>>>>>  7 files changed, 1271 insertions(+), 7 deletions(-)
> > >>>>>>>  create mode 100644 hw/vfio/migration.c
> > >>>>>>>
> > >>>>>>> -- 
> > >>>>>>> 2.7.0
> > >>>>>>>
> > >>>>> --
> > >>>>> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> > >>> --
> > >>> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> > >>>
> > > 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK


