qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [RFC v2 0/2] Add live migration support in the PVRDMA device
@ 2019-07-06  4:09 Sukrit Bhatnagar
  2019-07-06  4:09 ` [Qemu-devel] [RFC v2 1/2] hw/pvrdma: make DSR mapping idempotent in load_dsr() Sukrit Bhatnagar
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Sukrit Bhatnagar @ 2019-07-06  4:09 UTC (permalink / raw)
  To: qemu-devel; +Cc: Yuval Shaia

Changes in v2:

* Modify load_dsr() so that the dsr mapping is skipped if the dsr value
  is already non-NULL. Also move free_dsr() out of load_dsr() and call it
  just before load_dsr() where needed. These two changes allow us to call
  load_dsr() again after the dsr mapping has already been done, and carry
  on with the rest of the mappings.

* Use VMStateDescription instead of SaveVMHandlers to describe migration
  state. Also add fields for parent PCI object and MSIX.

* Use a temporary structure (struct PVRDMAMigTmp) to hold some fields
  during migration. These fields, such as cmd_slot_dma and resp_slot_dma
  inside dsr, do not fit into VMSTATE macros as their container
  (dsr_info->dsr) will not be ready until it is mapped on the dest.

* Perform the mappings for the CQ and event notification rings after the
  state is loaded. This extends the mappings performed in v1, following
  the flow of load_dsr(). All the mappings are now successfully done on
  the dest on state load.

Link(s) to v1:
https://lists.gnu.org/archive/html/qemu-devel/2019-06/msg04924.html
https://lists.gnu.org/archive/html/qemu-devel/2019-06/msg04923.html


Things working now (were not working at the time of v1):

* vmxnet3 is migrating successfully. The issue was in the migration of
  its PCI configuration space, and is solved by the patch Marcel sent:
  https://lists.gnu.org/archive/html/qemu-devel/2019-07/msg01500.html

* The BounceBuffer problem, which was causing the dma mapping calls in
  the state load logic to fail earlier, is gone. I am not sure exactly
  why, but I am guessing that adding the PCI and MSIX state to the
  migration stream solved it.


What is still needed:

* A workaround to make libvirt support same-host migration. Now that
  the problems faced in v1 (mentioned above) are out of the way, we can
  move further with testing, and for that we will need this.

Sukrit Bhatnagar (2):
  hw/pvrdma: make DSR mapping idempotent in load_dsr()
  hw/pvrdma: add live migration support

 hw/rdma/vmw/pvrdma_main.c | 104 +++++++++++++++++++++++++++++++++++---
 1 file changed, 96 insertions(+), 8 deletions(-)

-- 
2.21.0





Thread overview: 8+ messages
2019-07-06  4:09 [Qemu-devel] [RFC v2 0/2] Add live migration support in the PVRDMA device Sukrit Bhatnagar
2019-07-06  4:09 ` [Qemu-devel] [RFC v2 1/2] hw/pvrdma: make DSR mapping idempotent in load_dsr() Sukrit Bhatnagar
2019-07-06  4:09 ` [Qemu-devel] [RFC v2 2/2] hw/pvrdma: add live migration support Sukrit Bhatnagar
2019-07-08  5:13   ` Yuval Shaia
2019-07-10 20:14     ` Sukrit Bhatnagar
2019-07-06 19:04 ` [Qemu-devel] [RFC v2 0/2] Add live migration support in the PVRDMA device Marcel Apfelbaum
2019-07-08  9:38   ` Daniel P. Berrangé
2019-07-08 18:58     ` Marcel Apfelbaum
