* [RFC V1 0/7] Live update: vdpa
@ 2024-07-12 14:02 Steve Sistare
  2024-07-12 14:02 ` [RFC V1 1/7] migration: cpr_needed_for_reuse Steve Sistare
                   ` (6 more replies)
  0 siblings, 7 replies; 10+ messages in thread
From: Steve Sistare @ 2024-07-12 14:02 UTC (permalink / raw)
  To: qemu-devel
  Cc: Michael S. Tsirkin, Jason Wang, Philippe Mathieu-Daude,
	Eugenio Perez Martin, Peter Xu, Fabiano Rosas, Si-Wei Liu,
	Steve Sistare

Support vdpa devices with the cpr-exec live migration mode.  
This series depends on the QEMU series:
  Live update: cpr-exec
  https://lore.kernel.org/qemu-devel/1719776434-435013-1-git-send-email-steven.sistare@oracle.com/

and depends on the kernel series:
  vdpa live update
  https://lore.kernel.org/virtualization/1720790333-456232-1-git-send-email-steven.sistare@oracle.com/

Preserve the device descriptor across exec, which in turn preserves the
locks on pages pinned in memory for DMA.  Suppress the DMA unmap calls
that are normally triggered when a vdpa device is suspended.  After exec,
call VHOST_NEW_OWNER to inform the device that a new process is in charge.

If the device advertises the VHOST_BACKEND_F_IOTLB_REMAP capability, then
send VHOST_IOTLB_REMAP messages to update the userland address for each
DMA mapping.  Devices that do not advertise this cap have already translated
the userland addresses to physical when the DMA was initially mapped,
and do not require any update.

The cpr-exec mode leverages the vdpa live migration code path for the rest 
of the update, but is faster than live migration because it does not unlock
and relock pages in memory for DMA.

This series does not add any user-visible interfaces.

Steve Sistare (7):
  migration: cpr_needed_for_reuse
  migration: skip dirty memory tracking for cpr
  vdpa/cpr: preserve device fd
  vdpa/cpr: kernel interfaces
  vdpa/cpr: use VHOST_NEW_OWNER
  vdpa/cpr: pass shadow parameter to dma functions
  vdpa/cpr: preserve dma mappings

 hw/virtio/trace-events                       |  5 +-
 hw/virtio/vhost-vdpa.c                       | 71 +++++++++++++++-----
 include/hw/virtio/vhost-vdpa.h               |  7 +-
 include/hw/virtio/vhost.h                    |  1 +
 include/migration/cpr.h                      |  1 +
 include/standard-headers/linux/vhost_types.h |  7 ++
 linux-headers/linux/vhost.h                  |  9 +++
 migration/cpr.c                              |  5 ++
 net/vhost-vdpa.c                             | 29 +++++---
 scripts/tracetool/__init__.py                |  2 +-
 system/memory.c                              | 11 +++
 11 files changed, 120 insertions(+), 28 deletions(-)

-- 
2.39.3



^ permalink raw reply	[flat|nested] 10+ messages in thread

* [RFC V1 1/7] migration: cpr_needed_for_reuse
  2024-07-12 14:02 [RFC V1 0/7] Live update: vdpa Steve Sistare
@ 2024-07-12 14:02 ` Steve Sistare
  2024-07-12 14:02 ` [RFC V1 2/7] migration: skip dirty memory tracking for cpr Steve Sistare
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Steve Sistare @ 2024-07-12 14:02 UTC (permalink / raw)
  To: qemu-devel
  Cc: Michael S. Tsirkin, Jason Wang, Philippe Mathieu-Daude,
	Eugenio Perez Martin, Peter Xu, Fabiano Rosas, Si-Wei Liu,
	Steve Sistare

Define a vmstate "needed" helper.  This will be moved to the preceding patch
series "Live update: cpr-exec" because it is needed by multiple devices.

Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
---
 include/migration/cpr.h | 1 +
 migration/cpr.c         | 5 +++++
 2 files changed, 6 insertions(+)

diff --git a/include/migration/cpr.h b/include/migration/cpr.h
index c6c60f87bc..8d20d3ec49 100644
--- a/include/migration/cpr.h
+++ b/include/migration/cpr.h
@@ -24,6 +24,7 @@ void cpr_resave_fd(const char *name, int id, int fd);
 
 int cpr_state_save(Error **errp);
 int cpr_state_load(Error **errp);
+bool cpr_needed_for_reuse(void *opaque);
 
 QEMUFile *cpr_exec_output(Error **errp);
 QEMUFile *cpr_exec_input(Error **errp);
diff --git a/migration/cpr.c b/migration/cpr.c
index f756c1552d..843241c073 100644
--- a/migration/cpr.c
+++ b/migration/cpr.c
@@ -236,3 +236,8 @@ int cpr_state_load(Error **errp)
     return ret;
 }
 
+bool cpr_needed_for_reuse(void *opaque)
+{
+    MigMode mode = migrate_mode();
+    return mode == MIG_MODE_CPR_EXEC;
+}
-- 
2.39.3




* [RFC V1 2/7] migration: skip dirty memory tracking for cpr
  2024-07-12 14:02 [RFC V1 0/7] Live update: vdpa Steve Sistare
  2024-07-12 14:02 ` [RFC V1 1/7] migration: cpr_needed_for_reuse Steve Sistare
@ 2024-07-12 14:02 ` Steve Sistare
  2024-08-12 18:57   ` Fabiano Rosas
  2024-07-12 14:02 ` [RFC V1 3/7] vdpa/cpr: preserve device fd Steve Sistare
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 10+ messages in thread
From: Steve Sistare @ 2024-07-12 14:02 UTC (permalink / raw)
  To: qemu-devel
  Cc: Michael S. Tsirkin, Jason Wang, Philippe Mathieu-Daude,
	Eugenio Perez Martin, Peter Xu, Fabiano Rosas, Si-Wei Liu,
	Steve Sistare

CPR preserves memory in place, so there is no need to track dirty memory.
By skipping it, CPR can support devices that do not support tracking.

Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
---
 system/memory.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/system/memory.c b/system/memory.c
index b7548bf112..aef584e638 100644
--- a/system/memory.c
+++ b/system/memory.c
@@ -27,6 +27,7 @@
 
 #include "exec/memory-internal.h"
 #include "exec/ram_addr.h"
+#include "migration/misc.h"
 #include "sysemu/kvm.h"
 #include "sysemu/runstate.h"
 #include "sysemu/tcg.h"
@@ -2947,6 +2948,11 @@ bool memory_global_dirty_log_start(unsigned int flags, Error **errp)
 
     assert(flags && !(flags & (~GLOBAL_DIRTY_MASK)));
 
+    /* CPR preserves memory in place, so no need to track dirty memory */
+    if (migrate_mode() != MIG_MODE_NORMAL) {
+        return true;
+    }
+
     if (vmstate_change) {
         /* If there is postponed stop(), operate on it first */
         postponed_stop_flags &= ~flags;
@@ -3021,6 +3027,11 @@ static void memory_vm_change_state_handler(void *opaque, bool running,
 
 void memory_global_dirty_log_stop(unsigned int flags)
 {
+    /* CPR preserves memory in place, so no need to track dirty memory */
+    if (migrate_mode() != MIG_MODE_NORMAL) {
+        return;
+    }
+
     if (!runstate_is_running()) {
         /* Postpone the dirty log stop, e.g., to when VM starts again */
         if (vmstate_change) {
-- 
2.39.3




* [RFC V1 3/7] vdpa/cpr: preserve device fd
  2024-07-12 14:02 [RFC V1 0/7] Live update: vdpa Steve Sistare
  2024-07-12 14:02 ` [RFC V1 1/7] migration: cpr_needed_for_reuse Steve Sistare
  2024-07-12 14:02 ` [RFC V1 2/7] migration: skip dirty memory tracking for cpr Steve Sistare
@ 2024-07-12 14:02 ` Steve Sistare
  2024-07-12 14:02 ` [RFC V1 4/7] vdpa/cpr: kernel interfaces Steve Sistare
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Steve Sistare @ 2024-07-12 14:02 UTC (permalink / raw)
  To: qemu-devel
  Cc: Michael S. Tsirkin, Jason Wang, Philippe Mathieu-Daude,
	Eugenio Perez Martin, Peter Xu, Fabiano Rosas, Si-Wei Liu,
	Steve Sistare

Save the vdpa device fd in CPR state when it is created, and fetch the fd
from that state after CPR.  Record that the fd was reused, for use by
subsequent patches.

Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
---
 include/hw/virtio/vhost-vdpa.h |  3 +++
 net/vhost-vdpa.c               | 24 ++++++++++++++++++------
 2 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 0a9575b469..427458cfed 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -54,6 +54,9 @@ typedef struct vhost_vdpa_shared {
     /* Vdpa must send shadow addresses as IOTLB key for data queues, not GPA */
     bool shadow_data;
 
+    /* Device descriptor is being reused after CPR restart */
+    bool reused;
+
     /* SVQ switching is in progress, or already completed? */
     SVQTransitionState svq_switching;
 } VhostVDPAShared;
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index daa38428c5..e6010e8900 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -12,6 +12,7 @@
 #include "qemu/osdep.h"
 #include "clients.h"
 #include "hw/virtio/virtio-net.h"
+#include "migration/cpr.h"
 #include "net/vhost_net.h"
 #include "net/vhost-vdpa.h"
 #include "hw/virtio/vhost-vdpa.h"
@@ -240,8 +241,10 @@ static void vhost_vdpa_cleanup(NetClientState *nc)
     if (s->vhost_vdpa.index != 0) {
         return;
     }
+    cpr_delete_fd(nc->name, 0);
     qemu_close(s->vhost_vdpa.shared->device_fd);
     g_free(s->vhost_vdpa.shared);
+    s->vhost_vdpa.shared = NULL;
 }
 
 /** Dummy SetSteeringEBPF to support RSS for vhost-vdpa backend  */
@@ -1675,6 +1678,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
                                        int nvqs,
                                        bool is_datapath,
                                        bool svq,
+                                       bool reused,
                                        struct vhost_vdpa_iova_range iova_range,
                                        uint64_t features,
                                        VhostVDPAShared *shared,
@@ -1712,6 +1716,7 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
                                           &s->vhost_vdpa.migration_blocker);
         s->vhost_vdpa.shared = g_new0(VhostVDPAShared, 1);
         s->vhost_vdpa.shared->device_fd = vdpa_device_fd;
+        s->vhost_vdpa.shared->reused = reused;
         s->vhost_vdpa.shared->iova_range = iova_range;
         s->vhost_vdpa.shared->shadow_data = svq;
     } else if (!is_datapath) {
@@ -1793,6 +1798,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
     struct vhost_vdpa_iova_range iova_range;
     NetClientState *nc;
     int queue_pairs, r, i = 0, has_cvq = 0;
+    bool reused;
 
     assert(netdev->type == NET_CLIENT_DRIVER_VHOST_VDPA);
     opts = &netdev->u.vhost_vdpa;
@@ -1808,13 +1814,17 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
         return -1;
     }
 
-    if (opts->vhostdev) {
+    vdpa_device_fd = cpr_find_fd(name, 0);
+    reused = (vdpa_device_fd != -1);
+
+    if (opts->vhostdev && vdpa_device_fd == -1) {
         vdpa_device_fd = qemu_open(opts->vhostdev, O_RDWR, errp);
         if (vdpa_device_fd == -1) {
             return -errno;
         }
-    } else {
-        /* has_vhostfd */
+        cpr_save_fd(name, 0, vdpa_device_fd);
+
+    } else if (opts->vhostfd) {
         vdpa_device_fd = monitor_fd_param(monitor_cur(), opts->vhostfd, errp);
         if (vdpa_device_fd == -1) {
             error_prepend(errp, "vhost-vdpa: unable to parse vhostfd: ");
@@ -1855,7 +1865,8 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
         }
         ncs[i] = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
                                      vdpa_device_fd, i, 2, true, opts->x_svq,
-                                     iova_range, features, shared, errp);
+                                     reused, iova_range, features, shared,
+                                     errp);
         if (!ncs[i])
             goto err;
     }
@@ -1866,8 +1877,8 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
 
         nc = net_vhost_vdpa_init(peer, TYPE_VHOST_VDPA, name,
                                  vdpa_device_fd, i, 1, false,
-                                 opts->x_svq, iova_range, features, shared,
-                                 errp);
+                                 opts->x_svq, reused, iova_range, features,
+                                 shared, errp);
         if (!nc)
             goto err;
     }
@@ -1882,6 +1893,7 @@ err:
     }
 
     qemu_close(vdpa_device_fd);
+    cpr_delete_fd(name, 0);
 
     return -1;
 }
-- 
2.39.3




* [RFC V1 4/7] vdpa/cpr: kernel interfaces
  2024-07-12 14:02 [RFC V1 0/7] Live update: vdpa Steve Sistare
                   ` (2 preceding siblings ...)
  2024-07-12 14:02 ` [RFC V1 3/7] vdpa/cpr: preserve device fd Steve Sistare
@ 2024-07-12 14:02 ` Steve Sistare
  2024-07-12 14:02 ` [RFC V1 5/7] vdpa/cpr: use VHOST_NEW_OWNER Steve Sistare
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 10+ messages in thread
From: Steve Sistare @ 2024-07-12 14:02 UTC (permalink / raw)
  To: qemu-devel
  Cc: Michael S. Tsirkin, Jason Wang, Philippe Mathieu-Daude,
	Eugenio Perez Martin, Peter Xu, Fabiano Rosas, Si-Wei Liu,
	Steve Sistare

Add the proposed vdpa kernel interfaces for CPR:
  VHOST_NEW_OWNER: new ioctl
  VHOST_BACKEND_F_NEW_OWNER: new capability
  VHOST_IOTLB_REMAP: new iotlb message
  VHOST_BACKEND_F_IOTLB_REMAP: new capability
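
As a rough illustration of how userland would use the proposed
VHOST_IOTLB_REMAP message, the sketch below builds a v2 iotlb message
carrying the new userland address for an existing mapping.  The struct
layouts and constant values are simplified local mirrors for illustration
only; the real definitions live in the vhost uapi headers.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified local mirror of the vhost uapi types, for illustration only.
 * Constant values follow this patch and the existing vhost_types.h. */
#define VHOST_IOTLB_MSG_V2  0x2
#define VHOST_IOTLB_REMAP   7    /* new iotlb message type, added here */

struct iotlb_msg {
    uint64_t iova;
    uint64_t size;
    uint64_t uaddr;
    uint8_t  perm;
    uint8_t  type;
};

struct msg_v2 {
    uint32_t type;
    uint32_t asid;
    struct iotlb_msg iotlb;
};

/* Build a REMAP message: same asid/iova/size as the original mapping,
 * but carrying the new process's userland address. */
struct msg_v2 make_iotlb_remap(uint32_t asid, uint64_t iova,
                               uint64_t size, uint64_t new_uaddr)
{
    struct msg_v2 m;

    memset(&m, 0, sizeof(m));
    m.type = VHOST_IOTLB_MSG_V2;
    m.asid = asid;
    m.iotlb.iova = iova;
    m.iotlb.size = size;
    m.iotlb.uaddr = new_uaddr;
    m.iotlb.type = VHOST_IOTLB_REMAP;   /* perm stays 0 for a remap */
    return m;
}
```

After exec, new QEMU would write one such message to the preserved device
fd per existing DMA mapping, after first calling VHOST_NEW_OWNER.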

Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
---
 include/standard-headers/linux/vhost_types.h | 7 +++++++
 linux-headers/linux/vhost.h                  | 9 +++++++++
 2 files changed, 16 insertions(+)

diff --git a/include/standard-headers/linux/vhost_types.h b/include/standard-headers/linux/vhost_types.h
index fd54044936..fa605ee1da 100644
--- a/include/standard-headers/linux/vhost_types.h
+++ b/include/standard-headers/linux/vhost_types.h
@@ -87,6 +87,7 @@ struct vhost_iotlb_msg {
  */
 #define VHOST_IOTLB_BATCH_BEGIN    5
 #define VHOST_IOTLB_BATCH_END      6
+#define VHOST_IOTLB_REMAP          7
 	uint8_t type;
 };
 
@@ -193,4 +194,10 @@ struct vhost_vdpa_iova_range {
 /* IOTLB don't flush memory mapping across device reset */
 #define VHOST_BACKEND_F_IOTLB_PERSIST  0x8
 
+/* Supports VHOST_NEW_OWNER */
+#define VHOST_BACKEND_F_NEW_OWNER  0x9
+
+/* Supports VHOST_IOTLB_REMAP */
+#define VHOST_BACKEND_F_IOTLB_REMAP  0xa
+
 #endif
diff --git a/linux-headers/linux/vhost.h b/linux-headers/linux/vhost.h
index b95dd84eef..6c008c956a 100644
--- a/linux-headers/linux/vhost.h
+++ b/linux-headers/linux/vhost.h
@@ -123,6 +123,15 @@
 #define VHOST_SET_BACKEND_FEATURES _IOW(VHOST_VIRTIO, 0x25, __u64)
 #define VHOST_GET_BACKEND_FEATURES _IOR(VHOST_VIRTIO, 0x26, __u64)
 
+/* Set current process as the new owner of this file descriptor.  The fd must
+ * already be owned, via a prior call to VHOST_SET_OWNER.  The pinned memory
+ * count is transferred from the previous to the new owner.
+ * Errors:
+ *   EINVAL: not owned
+ *   EBUSY:  caller is already the owner
+ *   ENOMEM: RLIMIT_MEMLOCK exceeded
+ */
+#define VHOST_NEW_OWNER                _IO(VHOST_VIRTIO, 0x27)
 /* VHOST_NET specific defines */
 
 /* Attach virtio net ring to a raw socket, or tap device.
-- 
2.39.3




* [RFC V1 5/7] vdpa/cpr: use VHOST_NEW_OWNER
  2024-07-12 14:02 [RFC V1 0/7] Live update: vdpa Steve Sistare
                   ` (3 preceding siblings ...)
  2024-07-12 14:02 ` [RFC V1 4/7] vdpa/cpr: kernel interfaces Steve Sistare
@ 2024-07-12 14:02 ` Steve Sistare
  2024-07-12 14:02 ` [RFC V1 6/7] vdpa/cpr: pass shadow parameter to dma functions Steve Sistare
  2024-07-12 14:02 ` [RFC V1 7/7] vdpa/cpr: preserve dma mappings Steve Sistare
  6 siblings, 0 replies; 10+ messages in thread
From: Steve Sistare @ 2024-07-12 14:02 UTC (permalink / raw)
  To: qemu-devel
  Cc: Michael S. Tsirkin, Jason Wang, Philippe Mathieu-Daude,
	Eugenio Perez Martin, Peter Xu, Fabiano Rosas, Si-Wei Liu,
	Steve Sistare

Block CPR if the kernel does not support VHOST_NEW_OWNER.
After CPR, call VHOST_NEW_OWNER in new QEMU.

Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
---
 hw/virtio/trace-events    |  1 +
 hw/virtio/vhost-vdpa.c    | 24 ++++++++++++++++++++++--
 include/hw/virtio/vhost.h |  1 +
 3 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index 3cf84e04a7..990c61be79 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -64,6 +64,7 @@ vhost_vdpa_set_vring_kick(void *dev, unsigned int index, int fd) "dev: %p index:
 vhost_vdpa_set_vring_call(void *dev, unsigned int index, int fd) "dev: %p index: %u fd: %d"
 vhost_vdpa_get_features(void *dev, uint64_t features) "dev: %p features: 0x%"PRIx64
 vhost_vdpa_set_owner(void *dev) "dev: %p"
+vhost_vdpa_new_owner(void *dev) "dev: %p"
 vhost_vdpa_vq_get_addr(void *dev, void *vq, uint64_t desc_user_addr, uint64_t avail_user_addr, uint64_t used_user_addr) "dev: %p vq: %p desc_user_addr: 0x%"PRIx64" avail_user_addr: 0x%"PRIx64" used_user_addr: 0x%"PRIx64
 vhost_vdpa_get_iova_range(void *dev, uint64_t first, uint64_t last) "dev: %p first: 0x%"PRIx64" last: 0x%"PRIx64
 vhost_vdpa_set_config_call(void *dev, int fd)"dev: %p fd: %d"
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 3cdaa12ed5..9e3f414ac2 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -769,6 +769,7 @@ static int vhost_vdpa_cleanup(struct vhost_dev *dev)
     vhost_vdpa_svq_cleanup(dev);
 
     dev->opaque = NULL;
+    migrate_del_blocker(&dev->cpr_blocker);
 
     return 0;
 }
@@ -848,13 +849,13 @@ static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
     uint64_t f = 0x1ULL << VHOST_BACKEND_F_IOTLB_MSG_V2 |
         0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH |
         0x1ULL << VHOST_BACKEND_F_IOTLB_ASID |
-        0x1ULL << VHOST_BACKEND_F_SUSPEND;
+        0x1ULL << VHOST_BACKEND_F_SUSPEND |
+        0x1ULL << VHOST_BACKEND_F_NEW_OWNER;
     int r;
 
     if (vhost_vdpa_call(dev, VHOST_GET_BACKEND_FEATURES, &features)) {
         return -EFAULT;
     }
-
     features &= f;
 
     if (vhost_vdpa_first_dev(dev)) {
@@ -1360,6 +1361,18 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
     }
 
     if (started) {
+        /*
+         * Register a blocker the first time device is started (when we know
+         * its capabilities).
+         */
+        if (!dev->cpr_blocker &&
+            !(dev->backend_cap & BIT_ULL(VHOST_BACKEND_F_NEW_OWNER))) {
+            error_setg(&dev->cpr_blocker, "vhost-vdpa: device does not support "
+                                          "VHOST_BACKEND_F_NEW_OWNER");
+            migrate_add_blocker_modes(&dev->cpr_blocker, &error_abort,
+                                      MIG_MODE_CPR_EXEC, -1);
+        }
+
         if (vhost_dev_has_iommu(dev) && (v->shadow_vqs_enabled)) {
             error_report("SVQ can not work while IOMMU enable, please disable"
                          "IOMMU and try again");
@@ -1518,10 +1531,17 @@ static int vhost_vdpa_get_features(struct vhost_dev *dev,
 
 static int vhost_vdpa_set_owner(struct vhost_dev *dev)
 {
+    struct vhost_vdpa *v = dev->opaque;
+
     if (!vhost_vdpa_first_dev(dev)) {
         return 0;
     }
 
+    if (v->shared->reused) {
+        trace_vhost_vdpa_new_owner(dev);
+        return vhost_vdpa_call(dev, VHOST_NEW_OWNER, NULL);
+    }
+
     trace_vhost_vdpa_set_owner(dev);
     return vhost_vdpa_call(dev, VHOST_SET_OWNER, NULL);
 }
diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
index d75faf46e9..3f1b802f85 100644
--- a/include/hw/virtio/vhost.h
+++ b/include/hw/virtio/vhost.h
@@ -133,6 +133,7 @@ struct vhost_dev {
     QLIST_HEAD(, vhost_iommu) iommu_list;
     IOMMUNotifier n;
     const VhostDevConfigOps *config_ops;
+    Error *cpr_blocker;
 };
 
 extern const VhostOps kernel_ops;
-- 
2.39.3




* [RFC V1 6/7] vdpa/cpr: pass shadow parameter to dma functions
  2024-07-12 14:02 [RFC V1 0/7] Live update: vdpa Steve Sistare
                   ` (4 preceding siblings ...)
  2024-07-12 14:02 ` [RFC V1 5/7] vdpa/cpr: use VHOST_NEW_OWNER Steve Sistare
@ 2024-07-12 14:02 ` Steve Sistare
  2024-07-12 14:02 ` [RFC V1 7/7] vdpa/cpr: preserve dma mappings Steve Sistare
  6 siblings, 0 replies; 10+ messages in thread
From: Steve Sistare @ 2024-07-12 14:02 UTC (permalink / raw)
  To: qemu-devel
  Cc: Michael S. Tsirkin, Jason Wang, Philippe Mathieu-Daude,
	Eugenio Perez Martin, Peter Xu, Fabiano Rosas, Si-Wei Liu,
	Steve Sistare

Pass a parameter to the dma mapping functions that indicates whether the
memory backs rings or buffers for shadow virtqueues (SVQs).  No functional
change.

Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
---
 hw/virtio/vhost-vdpa.c         | 19 ++++++++++---------
 include/hw/virtio/vhost-vdpa.h |  4 ++--
 net/vhost-vdpa.c               |  5 +++--
 3 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 9e3f414ac2..d9ebc396b7 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -92,7 +92,7 @@ static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section,
  * This is not an ABI break since it is set to 0 by the initializer anyway.
  */
 int vhost_vdpa_dma_map(VhostVDPAShared *s, uint32_t asid, hwaddr iova,
-                       hwaddr size, void *vaddr, bool readonly)
+                       hwaddr size, void *vaddr, bool readonly, bool shadow)
 {
     struct vhost_msg_v2 msg = {};
     int fd = s->device_fd;
@@ -124,7 +124,7 @@ int vhost_vdpa_dma_map(VhostVDPAShared *s, uint32_t asid, hwaddr iova,
  * This is not an ABI break since it is set to 0 by the initializer anyway.
  */
 int vhost_vdpa_dma_unmap(VhostVDPAShared *s, uint32_t asid, hwaddr iova,
-                         hwaddr size)
+                         hwaddr size, bool shadow)
 {
     struct vhost_msg_v2 msg = {};
     int fd = s->device_fd;
@@ -234,7 +234,7 @@ static void vhost_vdpa_iommu_map_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
             return;
         }
         ret = vhost_vdpa_dma_map(s, VHOST_VDPA_GUEST_PA_ASID, iova,
-                                 iotlb->addr_mask + 1, vaddr, read_only);
+                                 iotlb->addr_mask + 1, vaddr, read_only, false);
         if (ret) {
             error_report("vhost_vdpa_dma_map(%p, 0x%" HWADDR_PRIx ", "
                          "0x%" HWADDR_PRIx ", %p) = %d (%m)",
@@ -242,7 +242,7 @@ static void vhost_vdpa_iommu_map_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
         }
     } else {
         ret = vhost_vdpa_dma_unmap(s, VHOST_VDPA_GUEST_PA_ASID, iova,
-                                   iotlb->addr_mask + 1);
+                                   iotlb->addr_mask + 1, false);
         if (ret) {
             error_report("vhost_vdpa_dma_unmap(%p, 0x%" HWADDR_PRIx ", "
                          "0x%" HWADDR_PRIx ") = %d (%m)",
@@ -376,7 +376,8 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
 
     vhost_vdpa_iotlb_batch_begin_once(s);
     ret = vhost_vdpa_dma_map(s, VHOST_VDPA_GUEST_PA_ASID, iova,
-                             int128_get64(llsize), vaddr, section->readonly);
+                             int128_get64(llsize), vaddr, section->readonly,
+                             false);
     if (ret) {
         error_report("vhost vdpa map fail!");
         goto fail_map;
@@ -463,7 +464,7 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
     if (int128_eq(llsize, int128_2_64())) {
         llsize = int128_rshift(llsize, 1);
         ret = vhost_vdpa_dma_unmap(s, VHOST_VDPA_GUEST_PA_ASID, iova,
-                                   int128_get64(llsize));
+                                   int128_get64(llsize), false);
 
         if (ret) {
             error_report("vhost_vdpa_dma_unmap(%p, 0x%" HWADDR_PRIx ", "
@@ -473,7 +474,7 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
         iova += int128_get64(llsize);
     }
     ret = vhost_vdpa_dma_unmap(s, VHOST_VDPA_GUEST_PA_ASID, iova,
-                               int128_get64(llsize));
+                               int128_get64(llsize), false);
 
     if (ret) {
         error_report("vhost_vdpa_dma_unmap(%p, 0x%" HWADDR_PRIx ", "
@@ -1116,7 +1117,7 @@ static void vhost_vdpa_svq_unmap_ring(struct vhost_vdpa *v, hwaddr addr)
 
     size = ROUND_UP(result->size, qemu_real_host_page_size());
     r = vhost_vdpa_dma_unmap(v->shared, v->address_space_id, result->iova,
-                             size);
+                             size, true);
     if (unlikely(r < 0)) {
         error_report("Unable to unmap SVQ vring: %s (%d)", g_strerror(-r), -r);
         return;
@@ -1159,7 +1160,7 @@ static bool vhost_vdpa_svq_map_ring(struct vhost_vdpa *v, DMAMap *needle,
     r = vhost_vdpa_dma_map(v->shared, v->address_space_id, needle->iova,
                            needle->size + 1,
                            (void *)(uintptr_t)needle->translated_addr,
-                           needle->perm == IOMMU_RO);
+                           needle->perm == IOMMU_RO, true);
     if (unlikely(r != 0)) {
         error_setg_errno(errp, -r, "Cannot map region to device");
         vhost_iova_tree_remove(v->shared->iova_tree, *needle);
diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index 427458cfed..aac6ad439c 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -82,9 +82,9 @@ int vhost_vdpa_get_iova_range(int fd, struct vhost_vdpa_iova_range *iova_range);
 int vhost_vdpa_set_vring_ready(struct vhost_vdpa *v, unsigned idx);
 
 int vhost_vdpa_dma_map(VhostVDPAShared *s, uint32_t asid, hwaddr iova,
-                       hwaddr size, void *vaddr, bool readonly);
+                       hwaddr size, void *vaddr, bool readonly, bool shadow);
 int vhost_vdpa_dma_unmap(VhostVDPAShared *s, uint32_t asid, hwaddr iova,
-                         hwaddr size);
+                         hwaddr size, bool shadow);
 
 typedef struct vdpa_iommu {
     VhostVDPAShared *dev_shared;
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index e6010e8900..e3e861cfcc 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -499,7 +499,7 @@ static void vhost_vdpa_cvq_unmap_buf(struct vhost_vdpa *v, void *addr)
     }
 
     r = vhost_vdpa_dma_unmap(v->shared, v->address_space_id, map->iova,
-                             map->size + 1);
+                             map->size + 1, true);
     if (unlikely(r != 0)) {
         error_report("Device cannot unmap: %s(%d)", g_strerror(r), r);
     }
@@ -524,7 +524,8 @@ static int vhost_vdpa_cvq_map_buf(struct vhost_vdpa *v, void *buf, size_t size,
     }
 
     r = vhost_vdpa_dma_map(v->shared, v->address_space_id, map.iova,
-                           vhost_vdpa_net_cvq_cmd_page_len(), buf, !write);
+                           vhost_vdpa_net_cvq_cmd_page_len(), buf, !write,
+                           true);
     if (unlikely(r < 0)) {
         goto dma_map_err;
     }
-- 
2.39.3




* [RFC V1 7/7] vdpa/cpr: preserve dma mappings
  2024-07-12 14:02 [RFC V1 0/7] Live update: vdpa Steve Sistare
                   ` (5 preceding siblings ...)
  2024-07-12 14:02 ` [RFC V1 6/7] vdpa/cpr: pass shadow parameter to dma functions Steve Sistare
@ 2024-07-12 14:02 ` Steve Sistare
  6 siblings, 0 replies; 10+ messages in thread
From: Steve Sistare @ 2024-07-12 14:02 UTC (permalink / raw)
  To: qemu-devel
  Cc: Michael S. Tsirkin, Jason Wang, Philippe Mathieu-Daude,
	Eugenio Perez Martin, Peter Xu, Fabiano Rosas, Si-Wei Liu,
	Steve Sistare

Preserve dma mappings during CPR restart by suppressing dma_map and
dma_unmap calls.  For devices with capability VHOST_BACKEND_F_IOTLB_REMAP,
convert dma_map calls to VHOST_IOTLB_REMAP to set the new userland VA for
the existing mapping.

However, map and unmap shadow vq buffers normally.  Their pages are not
locked in memory, and they are re-created after CPR.
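
The suppress/remap decision described above can be summarized as a small
pure function.  This is a sketch with local stand-in constants, not the
actual QEMU code; the real logic is in vhost_vdpa_dma_map/unmap in this
patch.

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in names and values for this sketch; they mirror the series. */
enum mig_mode { MODE_NORMAL, MODE_CPR_EXEC };
#define F_IOTLB_REMAP   (1ULL << 0xa)   /* VHOST_BACKEND_F_IOTLB_REMAP */
#define ACTION_UPDATE    2              /* send VHOST_IOTLB_UPDATE */
#define ACTION_REMAP     7              /* send VHOST_IOTLB_REMAP */
#define ACTION_SUPPRESS (-1)            /* send nothing; mapping persists */

/* What should a dma_map call do, given the migration mode, whether the
 * memory backs SVQ rings/buffers, and the device's backend capabilities? */
int map_action(enum mig_mode mode, int shadow, uint64_t backend_cap)
{
    if (mode == MODE_CPR_EXEC && !shadow) {
        /* Guest memory stayed pinned across exec: either fix up the
         * userland address, or send nothing at all. */
        return (backend_cap & F_IOTLB_REMAP) ? ACTION_REMAP
                                             : ACTION_SUPPRESS;
    }
    /* Normal mode, and SVQ memory, which is re-created after CPR. */
    return ACTION_UPDATE;
}
```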

Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
---
 hw/virtio/trace-events        |  4 ++--
 hw/virtio/vhost-vdpa.c        | 30 +++++++++++++++++++++++++-----
 scripts/tracetool/__init__.py |  2 +-
 3 files changed, 28 insertions(+), 8 deletions(-)

diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index 990c61be79..30d7f5ec69 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -31,8 +31,8 @@ vhost_user_create_notifier(int idx, void *n) "idx:%d n:%p"
 
 # vhost-vdpa.c
 vhost_vdpa_skipped_memory_section(int is_ram, int is_iommu, int is_protected, int is_ram_device, uint64_t first, uint64_t last, int page_mask) "is_ram=%d, is_iommu=%d, is_protected=%d, is_ram_device=%d iova_min=0x%"PRIx64" iova_last=0x%"PRIx64" page_mask=0x%x"
-vhost_vdpa_dma_map(void *vdpa, int fd, uint32_t msg_type, uint32_t asid, uint64_t iova, uint64_t size, uint64_t uaddr, uint8_t perm, uint8_t type) "vdpa_shared:%p fd: %d msg_type: %"PRIu32" asid: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" uaddr: 0x%"PRIx64" perm: 0x%"PRIx8" type: %"PRIu8
-vhost_vdpa_dma_unmap(void *vdpa, int fd, uint32_t msg_type, uint32_t asid, uint64_t iova, uint64_t size, uint8_t type) "vdpa_shared:%p fd: %d msg_type: %"PRIu32" asid: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" type: %"PRIu8
+vhost_vdpa_dma_map(void *vdpa, int fd, uint32_t msg_type, uint32_t asid, uint64_t iova, uint64_t size, uint64_t uaddr, uint8_t perm, uint8_t type, bool shadow, const char *override) "vdpa_shared:%p fd: %d msg_type: %"PRIu32" asid: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" uaddr: 0x%"PRIx64" perm: 0x%"PRIx8" type: %"PRIu8" shadow: %d %s"
+vhost_vdpa_dma_unmap(void *vdpa, int fd, uint32_t msg_type, uint32_t asid, uint64_t iova, uint64_t size, uint8_t type, bool shadow, const char *override) "vdpa_shared:%p fd: %d msg_type: %"PRIu32" asid: %"PRIu32" iova: 0x%"PRIx64" size: 0x%"PRIx64" type: %"PRIu8" shadow: %d %s"
 vhost_vdpa_listener_begin_batch(void *v, int fd, uint32_t msg_type, uint8_t type)  "vdpa_shared:%p fd: %d msg_type: %"PRIu32" type: %"PRIu8
 vhost_vdpa_listener_commit(void *v, int fd, uint32_t msg_type, uint8_t type)  "vdpa_shared:%p fd: %d msg_type: %"PRIu32" type: %"PRIu8
 vhost_vdpa_listener_region_add_unaligned(void *v, const char *name, uint64_t offset_as, uint64_t offset_page) "vdpa_shared: %p region %s offset_within_address_space %"PRIu64" offset_within_region %"PRIu64
diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index d9ebc396b7..3ee809abfe 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -22,6 +22,8 @@
 #include "hw/virtio/vhost-vdpa.h"
 #include "exec/address-spaces.h"
 #include "migration/blocker.h"
+#include "migration/cpr.h"
+#include "migration/options.h"
 #include "qemu/cutils.h"
 #include "qemu/main-loop.h"
 #include "trace.h"
@@ -97,18 +99,29 @@ int vhost_vdpa_dma_map(VhostVDPAShared *s, uint32_t asid, hwaddr iova,
     struct vhost_msg_v2 msg = {};
     int fd = s->device_fd;
     int ret = 0;
+    bool remap = false, suppress = false;
+
+    if (migrate_mode() == MIG_MODE_CPR_EXEC && !shadow) {
+        remap = !!(s->backend_cap & BIT_ULL(VHOST_BACKEND_F_IOTLB_REMAP));
+        suppress = !remap;
+    }
 
     msg.type = VHOST_IOTLB_MSG_V2;
     msg.asid = asid;
     msg.iotlb.iova = iova;
     msg.iotlb.size = size;
     msg.iotlb.uaddr = (uint64_t)(uintptr_t)vaddr;
-    msg.iotlb.perm = readonly ? VHOST_ACCESS_RO : VHOST_ACCESS_RW;
-    msg.iotlb.type = VHOST_IOTLB_UPDATE;
+    msg.iotlb.perm = remap ? 0 : readonly ? VHOST_ACCESS_RO : VHOST_ACCESS_RW;
+    msg.iotlb.type = remap ? VHOST_IOTLB_REMAP : VHOST_IOTLB_UPDATE;
 
     trace_vhost_vdpa_dma_map(s, fd, msg.type, msg.asid, msg.iotlb.iova,
                              msg.iotlb.size, msg.iotlb.uaddr, msg.iotlb.perm,
-                             msg.iotlb.type);
+                             msg.iotlb.type, shadow,
+                             remap ? "(remap)" : suppress ? "(suppress)" : "");
+
+    if (suppress) {
+        return 0;
+    }
 
     if (write(fd, &msg, sizeof(msg)) != sizeof(msg)) {
         error_report("failed to write, fd=%d, errno=%d (%s)",
@@ -129,6 +142,7 @@ int vhost_vdpa_dma_unmap(VhostVDPAShared *s, uint32_t asid, hwaddr iova,
     struct vhost_msg_v2 msg = {};
     int fd = s->device_fd;
     int ret = 0;
+    bool suppress = migrate_mode() == MIG_MODE_CPR_EXEC && !shadow;
 
     msg.type = VHOST_IOTLB_MSG_V2;
     msg.asid = asid;
@@ -137,7 +151,12 @@ int vhost_vdpa_dma_unmap(VhostVDPAShared *s, uint32_t asid, hwaddr iova,
     msg.iotlb.type = VHOST_IOTLB_INVALIDATE;
 
     trace_vhost_vdpa_dma_unmap(s, fd, msg.type, msg.asid, msg.iotlb.iova,
-                               msg.iotlb.size, msg.iotlb.type);
+                               msg.iotlb.size, msg.iotlb.type, shadow,
+                               suppress ? "(suppressed)" : "");
+
+    if (suppress) {
+        return 0;
+    }
 
     if (write(fd, &msg, sizeof(msg)) != sizeof(msg)) {
         error_report("failed to write, fd=%d, errno=%d (%s)",
@@ -851,7 +870,8 @@ static int vhost_vdpa_set_backend_cap(struct vhost_dev *dev)
         0x1ULL << VHOST_BACKEND_F_IOTLB_BATCH |
         0x1ULL << VHOST_BACKEND_F_IOTLB_ASID |
         0x1ULL << VHOST_BACKEND_F_SUSPEND |
-        0x1ULL << VHOST_BACKEND_F_NEW_OWNER;
+        0x1ULL << VHOST_BACKEND_F_NEW_OWNER |
+        0x1ULL << VHOST_BACKEND_F_IOTLB_REMAP;
     int r;
 
     if (vhost_vdpa_call(dev, VHOST_GET_BACKEND_FEATURES, &features)) {
diff --git a/scripts/tracetool/__init__.py b/scripts/tracetool/__init__.py
index bc03238c0f..bfb181cb81 100644
--- a/scripts/tracetool/__init__.py
+++ b/scripts/tracetool/__init__.py
@@ -253,7 +253,7 @@ def __init__(self, name, props, fmt, args, lineno, filename, orig=None,
         self.event_trans = event_trans
         self.event_exec = event_exec
 
-        if len(args) > 10:
+        if len(args) > 11:
             raise ValueError("Event '%s' has more than maximum permitted "
                              "argument count" % name)
 
-- 
2.39.3




* Re: [RFC V1 2/7] migration: skip dirty memory tracking for cpr
  2024-07-12 14:02 ` [RFC V1 2/7] migration: skip dirty memory tracking for cpr Steve Sistare
@ 2024-08-12 18:57   ` Fabiano Rosas
  2024-08-14 19:54     ` Steven Sistare
  0 siblings, 1 reply; 10+ messages in thread
From: Fabiano Rosas @ 2024-08-12 18:57 UTC (permalink / raw)
  To: Steve Sistare, qemu-devel
  Cc: Michael S. Tsirkin, Jason Wang, Philippe Mathieu-Daude,
	Eugenio Perez Martin, Peter Xu, Si-Wei Liu, Steve Sistare

Steve Sistare <steven.sistare@oracle.com> writes:

> CPR preserves memory in place, so there is no need to track dirty memory.
> By skipping it, CPR can support devices that do not support tracking.
>
> Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
> ---
>  system/memory.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
>
> diff --git a/system/memory.c b/system/memory.c
> index b7548bf112..aef584e638 100644
> --- a/system/memory.c
> +++ b/system/memory.c
> @@ -27,6 +27,7 @@
>  
>  #include "exec/memory-internal.h"
>  #include "exec/ram_addr.h"
> +#include "migration/misc.h"
>  #include "sysemu/kvm.h"
>  #include "sysemu/runstate.h"
>  #include "sysemu/tcg.h"
> @@ -2947,6 +2948,11 @@ bool memory_global_dirty_log_start(unsigned int flags, Error **errp)
>  
>      assert(flags && !(flags & (~GLOBAL_DIRTY_MASK)));
>  
> +    /* CPR preserves memory in place, so no need to track dirty memory */
> +    if (migrate_mode() != MIG_MODE_NORMAL) {
> +        return true;
> +    }

How does this interact with DIRTY_RATE and DIRTY_LIMIT? The former, at
least, seems never to overlap with CPR, right? I'm wondering whether this
check would be more appropriate up in ram.c, along with the similar
migrate_background_snapshot() check.

(I wish we had made the global_dirty_log_change() function a bit more
flexible. It would have been a nice place to put this and the snapshot
check. Not worth the risk of changing it now...)

Also, doesn't skipping dirty tracking imply also skipping the bitmap sync?
We skip it for bg_snapshot, but not for CPR.

> +
>      if (vmstate_change) {
>          /* If there is postponed stop(), operate on it first */
>          postponed_stop_flags &= ~flags;
> @@ -3021,6 +3027,11 @@ static void memory_vm_change_state_handler(void *opaque, bool running,
>  
>  void memory_global_dirty_log_stop(unsigned int flags)
>  {
> +    /* CPR preserves memory in place, so no need to track dirty memory */
> +    if (migrate_mode() != MIG_MODE_NORMAL) {
> +        return;
> +    }
> +
>      if (!runstate_is_running()) {
>          /* Postpone the dirty log stop, e.g., to when VM starts again */
>          if (vmstate_change) {



* Re: [RFC V1 2/7] migration: skip dirty memory tracking for cpr
  2024-08-12 18:57   ` Fabiano Rosas
@ 2024-08-14 19:54     ` Steven Sistare
  0 siblings, 0 replies; 10+ messages in thread
From: Steven Sistare @ 2024-08-14 19:54 UTC (permalink / raw)
  To: Fabiano Rosas, qemu-devel
  Cc: Michael S. Tsirkin, Jason Wang, Philippe Mathieu-Daude,
	Eugenio Perez Martin, Peter Xu, Si-Wei Liu

On 8/12/2024 2:57 PM, Fabiano Rosas wrote:
> Steve Sistare <steven.sistare@oracle.com> writes:
> 
>> CPR preserves memory in place, so there is no need to track dirty memory.
>> By skipping it, CPR can support devices that do not support tracking.
>>
>> Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
>> ---
>>   system/memory.c | 11 +++++++++++
>>   1 file changed, 11 insertions(+)
>>
>> diff --git a/system/memory.c b/system/memory.c
>> index b7548bf112..aef584e638 100644
>> --- a/system/memory.c
>> +++ b/system/memory.c
>> @@ -27,6 +27,7 @@
>>   
>>   #include "exec/memory-internal.h"
>>   #include "exec/ram_addr.h"
>> +#include "migration/misc.h"
>>   #include "sysemu/kvm.h"
>>   #include "sysemu/runstate.h"
>>   #include "sysemu/tcg.h"
>> @@ -2947,6 +2948,11 @@ bool memory_global_dirty_log_start(unsigned int flags, Error **errp)
>>   
>>       assert(flags && !(flags & (~GLOBAL_DIRTY_MASK)));
>>   
>> +    /* CPR preserves memory in place, so no need to track dirty memory */
>> +    if (migrate_mode() != MIG_MODE_NORMAL) {
>> +        return true;
>> +    }
> 
> How does this interact with DIRTY_RATE and DIRTY_LIMIT? The former, at
> least, seems never to overlap with CPR, right? I'm wondering whether this
> check would be more appropriate up in ram.c, along with the similar
> migrate_background_snapshot() check.
Agreed.  I previously pushed the CPR check down to memory_global_dirty_log_start
to catch all callers, but the only callers that matter are ram_init_bitmaps and
ram_save_cleanup, so I will hoist the check to those callers.

> (I wish we had made the global_dirty_log_change() function a bit more
> flexible. It would have been a nice place to put this and the snapshot
> check. Not worth the risk of changing it now...)
> 
> Also, doesn't skipping dirty tracking imply also skipping the bitmap sync?
> We skip it for bg_snapshot, but not for CPR.

Good catch, I must skip it for CPR also:

--------------------------
@@ -3250,7 +3261,8 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
      rs->last_stage = !migration_in_colo_state();

      WITH_RCU_READ_LOCK_GUARD() {
-        if (!migration_in_postcopy()) {
+        /* We don't use dirty log with CPR. */
+        if (!migration_in_postcopy() && migrate_mode() == MIG_MODE_NORMAL) {
              migration_bitmap_sync_precopy(rs, true);
          }
---------------------------

- Steve

>> +
>>       if (vmstate_change) {
>>           /* If there is postponed stop(), operate on it first */
>>           postponed_stop_flags &= ~flags;
>> @@ -3021,6 +3027,11 @@ static void memory_vm_change_state_handler(void *opaque, bool running,
>>   
>>   void memory_global_dirty_log_stop(unsigned int flags)
>>   {
>> +    /* CPR preserves memory in place, so no need to track dirty memory */
>> +    if (migrate_mode() != MIG_MODE_NORMAL) {
>> +        return;
>> +    }
>> +
>>       if (!runstate_is_running()) {
>>           /* Postpone the dirty log stop, e.g., to when VM starts again */
>>           if (vmstate_change) {



end of thread, other threads:[~2024-08-14 19:55 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-07-12 14:02 [RFC V1 0/7] Live update: vdpa Steve Sistare
2024-07-12 14:02 ` [RFC V1 1/7] migration: cpr_needed_for_reuse Steve Sistare
2024-07-12 14:02 ` [RFC V1 2/7] migration: skip dirty memory tracking for cpr Steve Sistare
2024-08-12 18:57   ` Fabiano Rosas
2024-08-14 19:54     ` Steven Sistare
2024-07-12 14:02 ` [RFC V1 3/7] vdpa/cpr: preserve device fd Steve Sistare
2024-07-12 14:02 ` [RFC V1 4/7] vdpa/cpr: kernel interfaces Steve Sistare
2024-07-12 14:02 ` [RFC V1 5/7] vdpa/cpr: use VHOST_NEW_OWNER Steve Sistare
2024-07-12 14:02 ` [RFC V1 6/7] vdpa/cpr: pass shadow parameter to dma functions Steve Sistare
2024-07-12 14:02 ` [RFC V1 7/7] vdpa/cpr: preserve dma mappings Steve Sistare
