qemu-devel.nongnu.org archive mirror
* [PATCH v1 0/4] virtio-mem: Support "x-ignore-shared" migration
@ 2023-06-20 13:03 David Hildenbrand
  2023-06-20 13:03 ` [PATCH v1 1/4] softmmu/physmem: Warn with ram_block_discard_range() on MAP_PRIVATE file mapping David Hildenbrand
                   ` (4 more replies)
  0 siblings, 5 replies; 15+ messages in thread
From: David Hildenbrand @ 2023-06-20 13:03 UTC (permalink / raw)
  To: qemu-devel
  Cc: David Hildenbrand, Michael S. Tsirkin, Juan Quintela, Peter Xu,
	Leonardo Bras, Paolo Bonzini, Philippe Mathieu-Daudé,
	Peng Tao

Stumbling over "x-ignore-shared" migration support for virtio-mem on
my todo list, I remember talking to Dave G. a while ago about how
ram_block_discard_range() in MAP_PRIVATE file mappings is possibly
harmful when the file is used somewhere else -- for example, with VM
templating in multiple VMs.

This series adds a warning to ram_block_discard_range() in that problematic
case and adds "x-ignore-shared" migration support for virtio-mem, which
is pretty straight-forward. The last patch also documents how VM templating
interacts with virtio-mem.

Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Juan Quintela <quintela@redhat.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Leonardo Bras <leobras@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Philippe Mathieu-Daudé" <philmd@linaro.org>
Cc: Peng Tao <tao.peng@linux.alibaba.com>

David Hildenbrand (4):
  softmmu/physmem: Warn with ram_block_discard_range() on MAP_PRIVATE
    file mapping
  virtio-mem: Skip most of virtio_mem_unplug_all() without plugged
    memory
  migration/ram: Expose ramblock_is_ignored() as
    migrate_ram_is_ignored()
  virtio-mem: Support "x-ignore-shared" migration

 hw/virtio/virtio-mem.c   | 67 ++++++++++++++++++++++++++++------------
 include/migration/misc.h |  1 +
 migration/postcopy-ram.c |  2 +-
 migration/ram.c          | 14 ++++-----
 migration/ram.h          |  3 +-
 softmmu/physmem.c        | 18 +++++++++++
 6 files changed, 76 insertions(+), 29 deletions(-)

-- 
2.40.1



^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH v1 1/4] softmmu/physmem: Warn with ram_block_discard_range() on MAP_PRIVATE file mapping
  2023-06-20 13:03 [PATCH v1 0/4] virtio-mem: Support "x-ignore-shared" migration David Hildenbrand
@ 2023-06-20 13:03 ` David Hildenbrand
  2023-06-21 15:55   ` Peter Xu
  2023-06-20 13:03 ` [PATCH v1 2/4] virtio-mem: Skip most of virtio_mem_unplug_all() without plugged memory David Hildenbrand
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 15+ messages in thread
From: David Hildenbrand @ 2023-06-20 13:03 UTC (permalink / raw)
  To: qemu-devel
  Cc: David Hildenbrand, Michael S. Tsirkin, Juan Quintela, Peter Xu,
	Leonardo Bras, Paolo Bonzini, Philippe Mathieu-Daudé,
	Peng Tao

ram_block_discard_range() cannot possibly do the right thing in
MAP_PRIVATE file mappings in the general case.

To achieve the documented semantics, we also have to punch a hole into
the file, possibly messing with other MAP_PRIVATE/MAP_SHARED mappings
of such a file.

For example, using VM templating -- see commit b17fbbe55cba ("migration:
allow private destination ram with x-ignore-shared") -- in combination with
any mechanism that relies on discarding of RAM is problematic. This
includes:
* Postcopy live migration
* virtio-balloon inflation/deflation or free-page-reporting
* virtio-mem

So at least warn that something possibly dangerous is going on
when using ram_block_discard_range() in these cases.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 softmmu/physmem.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index 6bdd944fe8..27c7219c82 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -3451,6 +3451,24 @@ int ram_block_discard_range(RAMBlock *rb, uint64_t start, size_t length)
              * so a userfault will trigger.
              */
 #ifdef CONFIG_FALLOCATE_PUNCH_HOLE
+            /*
+             * We'll discard data from the actual file, even though we only
+             * have a MAP_PRIVATE mapping, possibly messing with other
+             * MAP_PRIVATE/MAP_SHARED mappings. There is no easy way to
+             * change that behavior without violating the promised
+             * semantics of ram_block_discard_range().
+             *
+             * Only warn, because it works as long as nobody else uses that
+             * file.
+             */
+            if (!qemu_ram_is_shared(rb)) {
+                warn_report_once("ram_block_discard_range: Discarding RAM"
+                                 " in private file mappings is possibly"
+                                 " dangerous, because it will modify the"
+                                 " underlying file and will affect other"
+                                 " users of the file");
+            }
+
             ret = fallocate(rb->fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                             start, length);
             if (ret) {
-- 
2.40.1




* [PATCH v1 2/4] virtio-mem: Skip most of virtio_mem_unplug_all() without plugged memory
  2023-06-20 13:03 [PATCH v1 0/4] virtio-mem: Support "x-ignore-shared" migration David Hildenbrand
  2023-06-20 13:03 ` [PATCH v1 1/4] softmmu/physmem: Warn with ram_block_discard_range() on MAP_PRIVATE file mapping David Hildenbrand
@ 2023-06-20 13:03 ` David Hildenbrand
  2023-06-20 13:03 ` [PATCH v1 3/4] migration/ram: Expose ramblock_is_ignored() as migrate_ram_is_ignored() David Hildenbrand
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 15+ messages in thread
From: David Hildenbrand @ 2023-06-20 13:03 UTC (permalink / raw)
  To: qemu-devel
  Cc: David Hildenbrand, Michael S. Tsirkin, Juan Quintela, Peter Xu,
	Leonardo Bras, Paolo Bonzini, Philippe Mathieu-Daudé,
	Peng Tao

Already when starting QEMU we perform one system reset that ends up
triggering virtio_mem_unplug_all() with no actual memory plugged yet.
That, in turn, will trigger ram_block_discard_range() and perform some
other actions that are not required in that case.

Let's optimize virtio_mem_unplug_all() for the case that no memory is
plugged. This will be beneficial for x-ignore-shared support as well.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 hw/virtio/virtio-mem.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
index 538b695c29..9f6169af32 100644
--- a/hw/virtio/virtio-mem.c
+++ b/hw/virtio/virtio-mem.c
@@ -606,20 +606,20 @@ static int virtio_mem_unplug_all(VirtIOMEM *vmem)
 {
     RAMBlock *rb = vmem->memdev->mr.ram_block;
 
-    if (virtio_mem_is_busy()) {
-        return -EBUSY;
-    }
-
-    if (ram_block_discard_range(rb, 0, qemu_ram_get_used_length(rb))) {
-        return -EBUSY;
-    }
-    virtio_mem_notify_unplug_all(vmem);
-
-    bitmap_clear(vmem->bitmap, 0, vmem->bitmap_size);
     if (vmem->size) {
+        if (virtio_mem_is_busy()) {
+            return -EBUSY;
+        }
+        if (ram_block_discard_range(rb, 0, qemu_ram_get_used_length(rb))) {
+            return -EBUSY;
+        }
+        virtio_mem_notify_unplug_all(vmem);
+
+        bitmap_clear(vmem->bitmap, 0, vmem->bitmap_size);
         vmem->size = 0;
         notifier_list_notify(&vmem->size_change_notifiers, &vmem->size);
     }
+
     trace_virtio_mem_unplugged_all();
     virtio_mem_resize_usable_region(vmem, vmem->requested_size, true);
     return 0;
-- 
2.40.1




* [PATCH v1 3/4] migration/ram: Expose ramblock_is_ignored() as migrate_ram_is_ignored()
  2023-06-20 13:03 [PATCH v1 0/4] virtio-mem: Support "x-ignore-shared" migration David Hildenbrand
  2023-06-20 13:03 ` [PATCH v1 1/4] softmmu/physmem: Warn with ram_block_discard_range() on MAP_PRIVATE file mapping David Hildenbrand
  2023-06-20 13:03 ` [PATCH v1 2/4] virtio-mem: Skip most of virtio_mem_unplug_all() without plugged memory David Hildenbrand
@ 2023-06-20 13:03 ` David Hildenbrand
  2023-06-21 15:56   ` Peter Xu
  2023-06-20 13:03 ` [PATCH v1 4/4] virtio-mem: Support "x-ignore-shared" migration David Hildenbrand
  2023-07-06  5:59 ` [PATCH v1 0/4] " Mario Casquero
  4 siblings, 1 reply; 15+ messages in thread
From: David Hildenbrand @ 2023-06-20 13:03 UTC (permalink / raw)
  To: qemu-devel
  Cc: David Hildenbrand, Michael S. Tsirkin, Juan Quintela, Peter Xu,
	Leonardo Bras, Paolo Bonzini, Philippe Mathieu-Daudé,
	Peng Tao

virtio-mem wants to know whether it should not mess with the RAMBlock
content (e.g., discard RAM, preallocate memory) on incoming migration.

So let's expose that function as migrate_ram_is_ignored() in
migration/misc.h.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/migration/misc.h |  1 +
 migration/postcopy-ram.c |  2 +-
 migration/ram.c          | 14 +++++++-------
 migration/ram.h          |  3 +--
 4 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/include/migration/misc.h b/include/migration/misc.h
index 5ebe13b4b9..7dcc0b5c2c 100644
--- a/include/migration/misc.h
+++ b/include/migration/misc.h
@@ -40,6 +40,7 @@ int precopy_notify(PrecopyNotifyReason reason, Error **errp);
 
 void ram_mig_init(void);
 void qemu_guest_free_page_hint(void *addr, size_t len);
+bool migrate_ram_is_ignored(RAMBlock *block);
 
 /* migration/block.c */
 
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index 5615ec29eb..29aea9456d 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -408,7 +408,7 @@ bool postcopy_ram_supported_by_host(MigrationIncomingState *mis, Error **errp)
     /*
      * We don't support postcopy with some type of ramblocks.
      *
-     * NOTE: we explicitly ignored ramblock_is_ignored() instead we checked
+     * NOTE: we explicitly ignored migrate_ram_is_ignored() instead we checked
      * all possible ramblocks.  This is because this function can be called
      * when creating the migration object, during the phase RAM_MIGRATABLE
      * is not even properly set for all the ramblocks.
diff --git a/migration/ram.c b/migration/ram.c
index 5283a75f02..0ada6477e8 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -194,7 +194,7 @@ static bool postcopy_preempt_active(void)
     return migrate_postcopy_preempt() && migration_in_postcopy();
 }
 
-bool ramblock_is_ignored(RAMBlock *block)
+bool migrate_ram_is_ignored(RAMBlock *block)
 {
     return !qemu_ram_is_migratable(block) ||
            (migrate_ignore_shared() && qemu_ram_is_shared(block)
@@ -696,7 +696,7 @@ static void pss_find_next_dirty(PageSearchStatus *pss)
     unsigned long size = rb->used_length >> TARGET_PAGE_BITS;
     unsigned long *bitmap = rb->bmap;
 
-    if (ramblock_is_ignored(rb)) {
+    if (migrate_ram_is_ignored(rb)) {
         /* Points directly to the end, so we know no dirty page */
         pss->page = size;
         return;
@@ -780,7 +780,7 @@ unsigned long colo_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
 
     *num = 0;
 
-    if (ramblock_is_ignored(rb)) {
+    if (migrate_ram_is_ignored(rb)) {
         return size;
     }
 
@@ -2260,7 +2260,7 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
     unsigned long start_page = pss->page;
     int res;
 
-    if (ramblock_is_ignored(pss->block)) {
+    if (migrate_ram_is_ignored(pss->block)) {
         error_report("block %s should not be migrated !", pss->block->idstr);
         return 0;
     }
@@ -3347,7 +3347,7 @@ static inline RAMBlock *ram_block_from_stream(MigrationIncomingState *mis,
         return NULL;
     }
 
-    if (ramblock_is_ignored(block)) {
+    if (migrate_ram_is_ignored(block)) {
         error_report("block %s should not be migrated !", id);
         return NULL;
     }
@@ -3958,7 +3958,7 @@ static int ram_load_precopy(QEMUFile *f)
                     }
                     if (migrate_ignore_shared()) {
                         hwaddr addr = qemu_get_be64(f);
-                        if (ramblock_is_ignored(block) &&
+                        if (migrate_ram_is_ignored(block) &&
                             block->mr->addr != addr) {
                             error_report("Mismatched GPAs for block %s "
                                          "%" PRId64 "!= %" PRId64,
@@ -4254,7 +4254,7 @@ static void ram_mig_ram_block_resized(RAMBlockNotifier *n, void *host,
     RAMBlock *rb = qemu_ram_block_from_host(host, false, &offset);
     Error *err = NULL;
 
-    if (ramblock_is_ignored(rb)) {
+    if (migrate_ram_is_ignored(rb)) {
         return;
     }
 
diff --git a/migration/ram.h b/migration/ram.h
index ea1f3c25b5..145c915ca7 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -36,11 +36,10 @@
 extern XBZRLECacheStats xbzrle_counters;
 extern CompressionStats compression_counters;
 
-bool ramblock_is_ignored(RAMBlock *block);
 /* Should be holding either ram_list.mutex, or the RCU lock. */
 #define RAMBLOCK_FOREACH_NOT_IGNORED(block)            \
     INTERNAL_RAMBLOCK_FOREACH(block)                   \
-        if (ramblock_is_ignored(block)) {} else
+        if (migrate_ram_is_ignored(block)) {} else
 
 #define RAMBLOCK_FOREACH_MIGRATABLE(block)             \
     INTERNAL_RAMBLOCK_FOREACH(block)                   \
-- 
2.40.1




* [PATCH v1 4/4] virtio-mem: Support "x-ignore-shared" migration
  2023-06-20 13:03 [PATCH v1 0/4] virtio-mem: Support "x-ignore-shared" migration David Hildenbrand
                   ` (2 preceding siblings ...)
  2023-06-20 13:03 ` [PATCH v1 3/4] migration/ram: Expose ramblock_is_ignored() as migrate_ram_is_ignored() David Hildenbrand
@ 2023-06-20 13:03 ` David Hildenbrand
  2023-06-20 13:06   ` Michael S. Tsirkin
  2023-07-06  5:59 ` [PATCH v1 0/4] " Mario Casquero
  4 siblings, 1 reply; 15+ messages in thread
From: David Hildenbrand @ 2023-06-20 13:03 UTC (permalink / raw)
  To: qemu-devel
  Cc: David Hildenbrand, Michael S. Tsirkin, Juan Quintela, Peter Xu,
	Leonardo Bras, Paolo Bonzini, Philippe Mathieu-Daudé,
	Peng Tao

To achieve the desired "x-ignore-shared" functionality, we should neither
discard all RAM when realizing the device nor mess with
preallocation/postcopy when loading device state. In essence, we should
not touch RAM content.

As "x-ignore-shared" gets set after realizing the device, we cannot
rely on that. Let's simply skip discarding of RAM on incoming migration.
Note that virtio_mem_post_load() will call
virtio_mem_restore_unplugged() -- unless "x-ignore-shared" is set. So
once migration finished we'll have a consistent state.

The initial system reset will also not discard any RAM, because
virtio_mem_unplug_all() will not call ram_block_discard_range() when no
memory is plugged (which is the case before loading the device state).

Note that something like VM templating -- see commit b17fbbe55cba
("migration: allow private destination ram with x-ignore-shared") -- is
currently incompatible with virtio-mem and ram_block_discard_range() will
warn in case a private file mapping is supplied by virtio-mem.

For VM templating with virtio-mem, it makes more sense to either
(a) Create the template without the virtio-mem device and hotplug a
    virtio-mem device to the new VM instances using its own dedicated
    memory backend.
(b) Use a virtio-mem device that doesn't provide any memory in the
    template (requested-size=0) and use private anonymous memory.
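Option (b) could look roughly like the following command line. This is an illustrative sketch only: the exact flags, sizes, and IDs are placeholders, not taken from this series.

```shell
# Hypothetical invocation for option (b): the template VM carries a
# virtio-mem device that exposes no memory yet (requested-size=0) and is
# backed by private anonymous memory (share=off, no file backing).
qemu-system-x86_64 \
    -m 4G,maxmem=12G \
    -object memory-backend-ram,id=vmem0,size=8G,share=off \
    -device virtio-mem-pci,id=vm0,memdev=vmem0,requested-size=0 \
    ...
```

After instantiating a VM from the template, the guest can then request memory from the virtio-mem device without touching any shared template file.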

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 hw/virtio/virtio-mem.c | 47 ++++++++++++++++++++++++++++++++++--------
 1 file changed, 38 insertions(+), 9 deletions(-)

diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
index 9f6169af32..b013dfbaf0 100644
--- a/hw/virtio/virtio-mem.c
+++ b/hw/virtio/virtio-mem.c
@@ -18,6 +18,7 @@
 #include "sysemu/numa.h"
 #include "sysemu/sysemu.h"
 #include "sysemu/reset.h"
+#include "sysemu/runstate.h"
 #include "hw/virtio/virtio.h"
 #include "hw/virtio/virtio-bus.h"
 #include "hw/virtio/virtio-access.h"
@@ -886,11 +887,23 @@ static void virtio_mem_device_realize(DeviceState *dev, Error **errp)
         return;
     }
 
-    ret = ram_block_discard_range(rb, 0, qemu_ram_get_used_length(rb));
-    if (ret) {
-        error_setg_errno(errp, -ret, "Unexpected error discarding RAM");
-        ram_block_coordinated_discard_require(false);
-        return;
+    /*
+     * We don't know at this point whether shared RAM is migrated using
+     * QEMU or migrated using the file content. "x-ignore-shared" will be
+     * configurated after realizing the device. So in case we have an
+     * incoming migration, simply always skip the discard step.
+     *
+     * Otherwise, make sure that we start with a clean slate: either the
+     * memory backend might get reused or the shared file might still have
+     * memory allocated.
+     */
+    if (!runstate_check(RUN_STATE_INMIGRATE)) {
+        ret = ram_block_discard_range(rb, 0, qemu_ram_get_used_length(rb));
+        if (ret) {
+            error_setg_errno(errp, -ret, "Unexpected error discarding RAM");
+            ram_block_coordinated_discard_require(false);
+            return;
+        }
     }
 
     virtio_mem_resize_usable_region(vmem, vmem->requested_size, true);
@@ -962,10 +975,6 @@ static int virtio_mem_post_load(void *opaque, int version_id)
     RamDiscardListener *rdl;
     int ret;
 
-    if (vmem->prealloc && !vmem->early_migration) {
-        warn_report("Proper preallocation with migration requires a newer QEMU machine");
-    }
-
     /*
      * We started out with all memory discarded and our memory region is mapped
      * into an address space. Replay, now that we updated the bitmap.
@@ -978,6 +987,18 @@ static int virtio_mem_post_load(void *opaque, int version_id)
         }
     }
 
+    /*
+     * If shared RAM is migrated using the file content and not using QEMU,
+     * don't mess with preallocation and postcopy.
+     */
+    if (migrate_ram_is_ignored(vmem->memdev->mr.ram_block)) {
+        return 0;
+    }
+
+    if (vmem->prealloc && !vmem->early_migration) {
+        warn_report("Proper preallocation with migration requires a newer QEMU machine");
+    }
+
     if (migration_in_incoming_postcopy()) {
         return 0;
     }
@@ -1010,6 +1031,14 @@ static int virtio_mem_post_load_early(void *opaque, int version_id)
         return 0;
     }
 
+    /*
+     * If shared RAM is migrated using the file content and not using QEMU,
+     * don't mess with preallocation and postcopy.
+     */
+    if (migrate_ram_is_ignored(rb)) {
+        return 0;
+    }
+
     /*
      * We restored the bitmap and verified that the basic properties
      * match on source and destination, so we can go ahead and preallocate
-- 
2.40.1




* Re: [PATCH v1 4/4] virtio-mem: Support "x-ignore-shared" migration
  2023-06-20 13:03 ` [PATCH v1 4/4] virtio-mem: Support "x-ignore-shared" migration David Hildenbrand
@ 2023-06-20 13:06   ` Michael S. Tsirkin
  2023-06-20 13:40     ` David Hildenbrand
  0 siblings, 1 reply; 15+ messages in thread
From: Michael S. Tsirkin @ 2023-06-20 13:06 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: qemu-devel, Juan Quintela, Peter Xu, Leonardo Bras, Paolo Bonzini,
	Philippe Mathieu-Daudé, Peng Tao

On Tue, Jun 20, 2023 at 03:03:54PM +0200, David Hildenbrand wrote:
> To achieve desired "x-ignore-shared" functionality, we should not
> discard all RAM when realizing the device and not mess with
> preallocation/postcopy when loading device state. In essence, we should
> not touch RAM content.
> 
> As "x-ignore-shared" gets set after realizing the device, we cannot
> rely on that. Let's simply skip discarding of RAM on incoming migration.
> Note that virtio_mem_post_load() will call
> virtio_mem_restore_unplugged() -- unless "x-ignore-shared" is set. So
> once migration finished we'll have a consistent state.
> 
> The initial system reset will also not discard any RAM, because
> virtio_mem_unplug_all() will not call virtio_mem_unplug_all() when no
> memory is plugged (which is the case before loading the device state).
> 
> Note that something like VM templating -- see commit b17fbbe55cba
> ("migration: allow private destination ram with x-ignore-shared") -- is
> currently incompatible with virtio-mem and ram_block_discard_range() will
> warn in case a private file mapping is supplied by virtio-mem.
> 
> For VM templating with virtio-mem, it makes more sense to either
> (a) Create the template without the virtio-mem device and hotplug a
>     virtio-mem device to the new VM instances using proper own memory
>     backend.
> (b) Use a virtio-mem device that doesn't provide any memory in the
>     template (requested-size=0) and use private anonymous memory.
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  hw/virtio/virtio-mem.c | 47 ++++++++++++++++++++++++++++++++++--------
>  1 file changed, 38 insertions(+), 9 deletions(-)
> 
> diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
> index 9f6169af32..b013dfbaf0 100644
> --- a/hw/virtio/virtio-mem.c
> +++ b/hw/virtio/virtio-mem.c
> @@ -18,6 +18,7 @@
>  #include "sysemu/numa.h"
>  #include "sysemu/sysemu.h"
>  #include "sysemu/reset.h"
> +#include "sysemu/runstate.h"
>  #include "hw/virtio/virtio.h"
>  #include "hw/virtio/virtio-bus.h"
>  #include "hw/virtio/virtio-access.h"
> @@ -886,11 +887,23 @@ static void virtio_mem_device_realize(DeviceState *dev, Error **errp)
>          return;
>      }
>  
> -    ret = ram_block_discard_range(rb, 0, qemu_ram_get_used_length(rb));
> -    if (ret) {
> -        error_setg_errno(errp, -ret, "Unexpected error discarding RAM");
> -        ram_block_coordinated_discard_require(false);
> -        return;
> +    /*
> +     * We don't know at this point whether shared RAM is migrated using
> +     * QEMU or migrated using the file content. "x-ignore-shared" will be
> +     * configurated

configurated == configured?

> after realizing the device. So in case we have an
> +     * incoming migration, simply always skip the discard step.
> +     *
> +     * Otherwise, make sure that we start with a clean slate: either the
> +     * memory backend might get reused or the shared file might still have
> +     * memory allocated.
> +     */
> +    if (!runstate_check(RUN_STATE_INMIGRATE)) {
> +        ret = ram_block_discard_range(rb, 0, qemu_ram_get_used_length(rb));
> +        if (ret) {
> +            error_setg_errno(errp, -ret, "Unexpected error discarding RAM");
> +            ram_block_coordinated_discard_require(false);
> +            return;
> +        }
>      }
>  
>      virtio_mem_resize_usable_region(vmem, vmem->requested_size, true);
> @@ -962,10 +975,6 @@ static int virtio_mem_post_load(void *opaque, int version_id)
>      RamDiscardListener *rdl;
>      int ret;
>  
> -    if (vmem->prealloc && !vmem->early_migration) {
> -        warn_report("Proper preallocation with migration requires a newer QEMU machine");
> -    }
> -
>      /*
>       * We started out with all memory discarded and our memory region is mapped
>       * into an address space. Replay, now that we updated the bitmap.
> @@ -978,6 +987,18 @@ static int virtio_mem_post_load(void *opaque, int version_id)
>          }
>      }
>  
> +    /*
> +     * If shared RAM is migrated using the file content and not using QEMU,
> +     * don't mess with preallocation and postcopy.
> +     */
> +    if (migrate_ram_is_ignored(vmem->memdev->mr.ram_block)) {
> +        return 0;
> +    }
> +
> +    if (vmem->prealloc && !vmem->early_migration) {
> +        warn_report("Proper preallocation with migration requires a newer QEMU machine");
> +    }
> +
>      if (migration_in_incoming_postcopy()) {
>          return 0;
>      }
> @@ -1010,6 +1031,14 @@ static int virtio_mem_post_load_early(void *opaque, int version_id)
>          return 0;
>      }
>  
> +    /*
> +     * If shared RAM is migrated using the file content and not using QEMU,
> +     * don't mess with preallocation and postcopy.
> +     */
> +    if (migrate_ram_is_ignored(rb)) {
> +        return 0;
> +    }
> +
>      /*
>       * We restored the bitmap and verified that the basic properties
>       * match on source and destination, so we can go ahead and preallocate
> -- 
> 2.40.1




* Re: [PATCH v1 4/4] virtio-mem: Support "x-ignore-shared" migration
  2023-06-20 13:06   ` Michael S. Tsirkin
@ 2023-06-20 13:40     ` David Hildenbrand
  0 siblings, 0 replies; 15+ messages in thread
From: David Hildenbrand @ 2023-06-20 13:40 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: qemu-devel, Juan Quintela, Peter Xu, Leonardo Bras, Paolo Bonzini,
	Philippe Mathieu-Daudé, Peng Tao

On 20.06.23 15:06, Michael S. Tsirkin wrote:
> On Tue, Jun 20, 2023 at 03:03:54PM +0200, David Hildenbrand wrote:
>> To achieve desired "x-ignore-shared" functionality, we should not
>> discard all RAM when realizing the device and not mess with
>> preallocation/postcopy when loading device state. In essence, we should
>> not touch RAM content.
>>
>> As "x-ignore-shared" gets set after realizing the device, we cannot
>> rely on that. Let's simply skip discarding of RAM on incoming migration.
>> Note that virtio_mem_post_load() will call
>> virtio_mem_restore_unplugged() -- unless "x-ignore-shared" is set. So
>> once migration finished we'll have a consistent state.
>>
>> The initial system reset will also not discard any RAM, because
>> virtio_mem_unplug_all() will not call virtio_mem_unplug_all() when no
>> memory is plugged (which is the case before loading the device state).
>>
>> Note that something like VM templating -- see commit b17fbbe55cba
>> ("migration: allow private destination ram with x-ignore-shared") -- is
>> currently incompatible with virtio-mem and ram_block_discard_range() will
>> warn in case a private file mapping is supplied by virtio-mem.
>>
>> For VM templating with virtio-mem, it makes more sense to either
>> (a) Create the template without the virtio-mem device and hotplug a
>>      virtio-mem device to the new VM instances using proper own memory
>>      backend.
>> (b) Use a virtio-mem device that doesn't provide any memory in the
>>      template (requested-size=0) and use private anonymous memory.
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>>   hw/virtio/virtio-mem.c | 47 ++++++++++++++++++++++++++++++++++--------
>>   1 file changed, 38 insertions(+), 9 deletions(-)
>>
>> diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
>> index 9f6169af32..b013dfbaf0 100644
>> --- a/hw/virtio/virtio-mem.c
>> +++ b/hw/virtio/virtio-mem.c
>> @@ -18,6 +18,7 @@
>>   #include "sysemu/numa.h"
>>   #include "sysemu/sysemu.h"
>>   #include "sysemu/reset.h"
>> +#include "sysemu/runstate.h"
>>   #include "hw/virtio/virtio.h"
>>   #include "hw/virtio/virtio-bus.h"
>>   #include "hw/virtio/virtio-access.h"
>> @@ -886,11 +887,23 @@ static void virtio_mem_device_realize(DeviceState *dev, Error **errp)
>>           return;
>>       }
>>   
>> -    ret = ram_block_discard_range(rb, 0, qemu_ram_get_used_length(rb));
>> -    if (ret) {
>> -        error_setg_errno(errp, -ret, "Unexpected error discarding RAM");
>> -        ram_block_coordinated_discard_require(false);
>> -        return;
>> +    /*
>> +     * We don't know at this point whether shared RAM is migrated using
>> +     * QEMU or migrated using the file content. "x-ignore-shared" will be
>> +     * configurated
> 
> configurated == configured?

Thanks!

-- 
Cheers,

David / dhildenb




* Re: [PATCH v1 1/4] softmmu/physmem: Warn with ram_block_discard_range() on MAP_PRIVATE file mapping
  2023-06-20 13:03 ` [PATCH v1 1/4] softmmu/physmem: Warn with ram_block_discard_range() on MAP_PRIVATE file mapping David Hildenbrand
@ 2023-06-21 15:55   ` Peter Xu
  2023-06-21 16:17     ` David Hildenbrand
  0 siblings, 1 reply; 15+ messages in thread
From: Peter Xu @ 2023-06-21 15:55 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: qemu-devel, Michael S. Tsirkin, Juan Quintela, Leonardo Bras,
	Paolo Bonzini, Philippe Mathieu-Daudé, Peng Tao

On Tue, Jun 20, 2023 at 03:03:51PM +0200, David Hildenbrand wrote:
> ram_block_discard_range() cannot possibly do the right thing in
> MAP_PRIVATE file mappings in the general case.
> 
> To achieve the documented semantics, we also have to punch a hole into
> the file, possibly messing with other MAP_PRIVATE/MAP_SHARED mappings
> of such a file.
> 
> For example, using VM templating -- see commit b17fbbe55cba ("migration:
> allow private destination ram with x-ignore-shared") -- in combination with
> any mechanism that relies on discarding of RAM is problematic. This
> includes:
> * Postcopy live migration
> * virtio-balloon inflation/deflation or free-page-reporting
> * virtio-mem
> 
> So at least warn that there is something possibly dangerous is going on
> when using ram_block_discard_range() in these cases.

The issue is probably valid.

One thing I worry about is the case where the user (or, qemu instance)
exclusively owns the file but just forgot to attach share=on: it used to
work perfectly, and now it'll show this warning.  But I agree maybe it's
good to remind them just to attach share=on.

For real private mem users, the warning can be of real help; one should
probably leverage things like file snapshots provided by modern file
systems, so each VM should just have its own snapshot ram file to use,
then map it share=on, I suppose.

For the long term, maybe we should simply support private mem here by a
MADV_DONTNEED.  I assume that's the right semantics for postcopy (we'd
just need to support MINOR faults, though; MISSING faults definitely will
stop working.. but for all the rest the framework shouldn't need much
change), and I hope that's also the semantics that balloon/virtio-mem
wants here.  Not sure whether/when that's strongly needed, assuming the
corner case above can still be worked around properly by other means.

For now, a warning looks all sane.

> 
> Signed-off-by: David Hildenbrand <david@redhat.com>

Acked-by: Peter Xu <peterx@redhat.com>

> ---
>  softmmu/physmem.c | 18 ++++++++++++++++++
>  1 file changed, 18 insertions(+)
> 
> diff --git a/softmmu/physmem.c b/softmmu/physmem.c
> index 6bdd944fe8..27c7219c82 100644
> --- a/softmmu/physmem.c
> +++ b/softmmu/physmem.c
> @@ -3451,6 +3451,24 @@ int ram_block_discard_range(RAMBlock *rb, uint64_t start, size_t length)
>               * so a userfault will trigger.
>               */
>  #ifdef CONFIG_FALLOCATE_PUNCH_HOLE
> +            /*
> +             * We'll discard data from the actual file, even though we only
> +             * have a MAP_PRIVATE mapping, possibly messing with other
> +             * MAP_PRIVATE/MAP_SHARED mappings. There is no easy way to
> +             * change that behavior whithout violating the promised
> +             * semantics of ram_block_discard_range().
> +             *
> +             * Only warn, because it work as long as nobody else uses that
> +             * file.
> +             */
> +            if (!qemu_ram_is_shared(rb)) {
> +                warn_report_once("ram_block_discard_range: Discarding RAM"
> +                                 " in private file mappings is possibly"
> +                                 " dangerous, because it will modify the"
> +                                 " underlying file and will affect other"
> +                                 " users of the file");
> +            }
> +
>              ret = fallocate(rb->fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
>                              start, length);
>              if (ret) {
> -- 
> 2.40.1
> 

-- 
Peter Xu




* Re: [PATCH v1 3/4] migration/ram: Expose ramblock_is_ignored() as migrate_ram_is_ignored()
  2023-06-20 13:03 ` [PATCH v1 3/4] migration/ram: Expose ramblock_is_ignored() as migrate_ram_is_ignored() David Hildenbrand
@ 2023-06-21 15:56   ` Peter Xu
  0 siblings, 0 replies; 15+ messages in thread
From: Peter Xu @ 2023-06-21 15:56 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: qemu-devel, Michael S. Tsirkin, Juan Quintela, Leonardo Bras,
	Paolo Bonzini, Philippe Mathieu-Daudé, Peng Tao

On Tue, Jun 20, 2023 at 03:03:53PM +0200, David Hildenbrand wrote:
> virtio-mem wants to know whether it should not mess with the RAMBlock
> content (e.g., discard RAM, preallocate memory) on incoming migration.
> 
> So let's expose that function as migrate_ram_is_ignored() in
> migration/misc.h
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>

Acked-by: Peter Xu <peterx@redhat.com>

-- 
Peter Xu




* Re: [PATCH v1 1/4] softmmu/physmem: Warn with ram_block_discard_range() on MAP_PRIVATE file mapping
  2023-06-21 15:55   ` Peter Xu
@ 2023-06-21 16:17     ` David Hildenbrand
  2023-06-21 16:55       ` Peter Xu
  0 siblings, 1 reply; 15+ messages in thread
From: David Hildenbrand @ 2023-06-21 16:17 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Michael S. Tsirkin, Juan Quintela, Leonardo Bras,
	Paolo Bonzini, Philippe Mathieu-Daudé, Peng Tao

On 21.06.23 17:55, Peter Xu wrote:
> On Tue, Jun 20, 2023 at 03:03:51PM +0200, David Hildenbrand wrote:
>> ram_block_discard_range() cannot possibly do the right thing in
>> MAP_PRIVATE file mappings in the general case.
>>
>> To achieve the documented semantics, we also have to punch a hole into
>> the file, possibly messing with other MAP_PRIVATE/MAP_SHARED mappings
>> of such a file.
>>
>> For example, using VM templating -- see commit b17fbbe55cba ("migration:
>> allow private destination ram with x-ignore-shared") -- in combination with
>> any mechanism that relies on discarding of RAM is problematic. This
>> includes:
>> * Postcopy live migration
>> * virtio-balloon inflation/deflation or free-page-reporting
>> * virtio-mem
>>
>> So at least warn that something possibly dangerous is going on
>> when using ram_block_discard_range() in these cases.
> 
> The issue is probably valid.
> 
> One thing I worry about is the case where the user (or QEMU instance)
> exclusively owns the file but just forgot to attach share=on: it used to
> work perfectly, and now it'll show this warning.  But I agree it's
> probably good to remind them to just attach share=on.

For memory-backend-memfd "share=on" is fortunately the default. For 
memory-backend-file it isn't (and in most cases you do want share=on, 
like for hugetlbfs or tmpfs).

Missing the "share=on" for memory-backend-file can have sane use cases, 
but for the common /dev/shm/ case it even results in an undesired 
double-memory consumption (just like memory-backend-memfd,share=off).


> 
> For real private mem users, the warning can be of real help; one should
> probably leverage things like file snapshots provided by modern file
> systems, so each VM would just have its own snapshot ram file to use and
> then map it share=on, I suppose.

Yes, I agree. Although we recently learned that fs-backed VM RAM (SSD) 
performs poorly and will severely wear your SSD :(

> 
> For the long term, maybe we should simply support private mem here simply
> by a MADV_DONTNEED.  I assume that's the right semantics for postcopy (just
> need to support MINOR faults, though; MISSING faults definitely will stop
> working.. but for all the rest framework shouldn't need much change), and I
> hope that's also the semantics that balloon/virtio-mem wants here.  Not
> sure whether/when that's strongly needed, assuming the corner case above
> can still be worked around properly by other means.

I briefly thought about that but came to the conclusion that fixing it
is not that easy. So I went with the warning.

As documented, ram_block_discard_range() guarantees two things

a) Read 0 after discarding succeeded
b) Make postcopy work by triggering a fault on next access

And if we'd simply want to drop the FALLOC_FL_PUNCH_HOLE:

1) For hugetlb, only newer kernels support MADV_DONTNEED. So there is no 
way to just discard in a private mapping here that works for kernels we 
still care about.

2) free-page-reporting wants to read 0's when re-accessing discarded 
memory. If there is still something there in the file, that won't work.

3) Regarding postcopy on MAP_PRIVATE shmem, I am not sure if it will 
actually do what you want if the pagecache holds a page. Maybe it works, 
but I am not so sure. Needs investigation.


> 
> For now, a warning looks all sane.
> 
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
> 
> Acked-by: Peter Xu <peterx@redhat.com>

Thanks!

-- 
Cheers,

David / dhildenb




* Re: [PATCH v1 1/4] softmmu/physmem: Warn with ram_block_discard_range() on MAP_PRIVATE file mapping
  2023-06-21 16:17     ` David Hildenbrand
@ 2023-06-21 16:55       ` Peter Xu
  2023-06-22 13:10         ` David Hildenbrand
  0 siblings, 1 reply; 15+ messages in thread
From: Peter Xu @ 2023-06-21 16:55 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: qemu-devel, Michael S. Tsirkin, Juan Quintela, Leonardo Bras,
	Paolo Bonzini, Philippe Mathieu-Daudé, Peng Tao

On Wed, Jun 21, 2023 at 06:17:37PM +0200, David Hildenbrand wrote:
> As documented, ram_block_discard_range() guarantees two things
> 
> a) Read 0 after discarding succeeded
> b) Make postcopy work by triggering a fault on next access
> 
> And if we'd simply want to drop the FALLOC_FL_PUNCH_HOLE:
> 
> 1) For hugetlb, only newer kernels support MADV_DONTNEED. So there is no way
> to just discard in a private mapping here that works for kernels we still
> care about.
> 
> 2) free-page-reporting wants to read 0's when re-accessing discarded memory.
> If there is still something there in the file, that won't work.

Ah right.  The semantics are indeed slightly different..

IMHO, ideally here we need a zero page installed as private, ignoring the
page cache underneath, freeing any possible private page.  But I just don't
> know how to do that easily with the current default mm infrastructure;
> otherwise free-page-reporting over private mem just won't really work at
> all, it seems to me.

Maybe.. UFFDIO_ZEROPAGE would work? We need uffd registered by default, but
that's slightly tricky.

> 
> 3) Regarding postcopy on MAP_PRIVATE shmem, I am not sure if it will
> actually do what you want if the pagecache holds a page. Maybe it works, but
> I am not so sure. Needs investigation.

For MINOR I think it will.  I actually already implemented some of that (I
think, all of that is required) in the HGM qemu rfc series, and smoked it a
bit without any known issue yet with the HGM kernel.

IIUC we can work on MINOR support without HGM; I can separate it out.  It's
really a matter of whether it'll be worth the effort and time.

Thanks,

-- 
Peter Xu




* Re: [PATCH v1 1/4] softmmu/physmem: Warn with ram_block_discard_range() on MAP_PRIVATE file mapping
  2023-06-21 16:55       ` Peter Xu
@ 2023-06-22 13:10         ` David Hildenbrand
  2023-06-22 14:54           ` Peter Xu
  0 siblings, 1 reply; 15+ messages in thread
From: David Hildenbrand @ 2023-06-22 13:10 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Michael S. Tsirkin, Juan Quintela, Leonardo Bras,
	Paolo Bonzini, Philippe Mathieu-Daudé, Peng Tao

On 21.06.23 18:55, Peter Xu wrote:
> On Wed, Jun 21, 2023 at 06:17:37PM +0200, David Hildenbrand wrote:
>> As documented, ram_block_discard_range() guarantees two things
>>
>> a) Read 0 after discarding succeeded
>> b) Make postcopy work by triggering a fault on next access
>>
>> And if we'd simply want to drop the FALLOC_FL_PUNCH_HOLE:
>>
>> 1) For hugetlb, only newer kernels support MADV_DONTNEED. So there is no way
>> to just discard in a private mapping here that works for kernels we still
>> care about.
>>
>> 2) free-page-reporting wants to read 0's when re-accessing discarded memory.
>> If there is still something there in the file, that won't work.
> 
> Ah right.  The semantics are indeed slightly different..
> 
> IMHO, ideally here we need a zero page installed as private, ignoring the
> page cache underneath, freeing any possible private page.  But I just don't
> know how to do that easily with the current default mm infrastructure;
> otherwise free-page-reporting over private mem just won't really work at
> all, it seems to me.
> 
> Maybe.. UFFDIO_ZEROPAGE would work? We need uffd registered by default, but
> that's slightly tricky.

Maybe ... depends also on the uffd semantics as in 3).

> 
>>
>> 3) Regarding postcopy on MAP_PRIVATE shmem, I am not sure if it will
>> actually do what you want if the pagecache holds a page. Maybe it works, but
>> I am not so sure. Needs investigation.
> 
> For MINOR I think it will.  I actually already implemented some of that (I
> think, all of that is required) in the HGM qemu rfc series, and smoked it a
> bit without any known issue yet with the HGM kernel.
> 
> IIUC we can work on MINOR support without HGM; I can separate it out.  It's
> really a matter of whether it'll be worth the effort and time.

Yes, MINOR might work. But especially postcopy doesn't make too much 
sense targeting a private mapping that has some other pages in there 
already ... so it might not be worth the trouble I guess.

-- 
Cheers,

David / dhildenb




* Re: [PATCH v1 1/4] softmmu/physmem: Warn with ram_block_discard_range() on MAP_PRIVATE file mapping
  2023-06-22 13:10         ` David Hildenbrand
@ 2023-06-22 14:54           ` Peter Xu
  0 siblings, 0 replies; 15+ messages in thread
From: Peter Xu @ 2023-06-22 14:54 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: qemu-devel, Michael S. Tsirkin, Juan Quintela, Leonardo Bras,
	Paolo Bonzini, Philippe Mathieu-Daudé, Peng Tao

On Thu, Jun 22, 2023 at 03:10:47PM +0200, David Hildenbrand wrote:
> Maybe ... depends also on the uffd semantics as in 3).

UFFDIO_COPY|ZEROPAGE bypasses page cache for private file mappings, afaict.
We've still got a limit at the inode size (so we can't COPY|ZEROPAGE
beyond that offset of the vma), but the rest should be all fine.

Feel free to have a quick skim over 5b51072e97d5 ("userfaultfd: shmem:
allocate anonymous memory for MAP_PRIVATE shmem").

Thanks,

-- 
Peter Xu




* Re: [PATCH v1 0/4] virtio-mem: Support "x-ignore-shared" migration
  2023-06-20 13:03 [PATCH v1 0/4] virtio-mem: Support "x-ignore-shared" migration David Hildenbrand
                   ` (3 preceding siblings ...)
  2023-06-20 13:03 ` [PATCH v1 4/4] virtio-mem: Support "x-ignore-shared" migration David Hildenbrand
@ 2023-07-06  5:59 ` Mario Casquero
  2023-07-06  7:19   ` David Hildenbrand
  4 siblings, 1 reply; 15+ messages in thread
From: Mario Casquero @ 2023-07-06  5:59 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: qemu-devel, Michael S. Tsirkin, Juan Quintela, Peter Xu,
	Leonardo Bras, Paolo Bonzini, Philippe Mathieu-Daudé,
	Peng Tao

This series has been tested successfully by QE. Start a VM with an 8G
virtio-mem device and run memtester on it. Enable the x-ignore-shared
capability and then do migration. Migration was successful and
virtio-mem can be resized as usual.

Tested-by: Mario Casquero <mcasquer@redhat.com>

BR,
Mario




On Tue, Jun 20, 2023 at 3:05 PM David Hildenbrand <david@redhat.com> wrote:
>
> Stumbling over "x-ignore-shared" migration support for virtio-mem on
> my todo list, I remember talking to Dave G. a while ago about how
> ram_block_discard_range() in MAP_PRIVATE file mappings is possibly
> harmful when the file is used somewhere else -- for example, with VM
> templating in multiple VMs.
>
> This series adds a warning to ram_block_discard_range() in that problematic
> case and adds "x-ignore-shared" migration support for virtio-mem, which
> is pretty straight-forward. The last patch also documents how VM templating
> interacts with virtio-mem.
>
> Cc: "Michael S. Tsirkin" <mst@redhat.com>
> Cc: Juan Quintela <quintela@redhat.com>
> Cc: Peter Xu <peterx@redhat.com>
> Cc: Leonardo Bras <leobras@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Philippe Mathieu-Daudé" <philmd@linaro.org>
> Cc: Peng Tao <tao.peng@linux.alibaba.com>
>
> David Hildenbrand (4):
>   softmmu/physmem: Warn with ram_block_discard_range() on MAP_PRIVATE
>     file mapping
>   virtio-mem: Skip most of virtio_mem_unplug_all() without plugged
>     memory
>   migration/ram: Expose ramblock_is_ignored() as
>     migrate_ram_is_ignored()
>   virtio-mem: Support "x-ignore-shared" migration
>
>  hw/virtio/virtio-mem.c   | 67 ++++++++++++++++++++++++++++------------
>  include/migration/misc.h |  1 +
>  migration/postcopy-ram.c |  2 +-
>  migration/ram.c          | 14 ++++-----
>  migration/ram.h          |  3 +-
>  softmmu/physmem.c        | 18 +++++++++++
>  6 files changed, 76 insertions(+), 29 deletions(-)
>
> --
> 2.40.1
>
>




* Re: [PATCH v1 0/4] virtio-mem: Support "x-ignore-shared" migration
  2023-07-06  5:59 ` [PATCH v1 0/4] " Mario Casquero
@ 2023-07-06  7:19   ` David Hildenbrand
  0 siblings, 0 replies; 15+ messages in thread
From: David Hildenbrand @ 2023-07-06  7:19 UTC (permalink / raw)
  To: Mario Casquero
  Cc: qemu-devel, Michael S. Tsirkin, Juan Quintela, Peter Xu,
	Leonardo Bras, Paolo Bonzini, Philippe Mathieu-Daudé,
	Peng Tao

On 06.07.23 07:59, Mario Casquero wrote:
> This series has been tested successfully by QE. Start a VM with a 8G
> virtio-mem device and start memtester on it. Enable x-ignore-shared
> capability and then do migration. Migration was successful and
> virtio-mem can be resized as usual.
> 
> Tested-by: Mario Casquero <mcasquer@redhat.com>
> 

Thanks a lot Mario!

-- 
Cheers,

David / dhildenb




end of thread, other threads:[~2023-07-06  7:20 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-06-20 13:03 [PATCH v1 0/4] virtio-mem: Support "x-ignore-shared" migration David Hildenbrand
2023-06-20 13:03 ` [PATCH v1 1/4] softmmu/physmem: Warn with ram_block_discard_range() on MAP_PRIVATE file mapping David Hildenbrand
2023-06-21 15:55   ` Peter Xu
2023-06-21 16:17     ` David Hildenbrand
2023-06-21 16:55       ` Peter Xu
2023-06-22 13:10         ` David Hildenbrand
2023-06-22 14:54           ` Peter Xu
2023-06-20 13:03 ` [PATCH v1 2/4] virtio-mem: Skip most of virtio_mem_unplug_all() without plugged memory David Hildenbrand
2023-06-20 13:03 ` [PATCH v1 3/4] migration/ram: Expose ramblock_is_ignored() as migrate_ram_is_ignored() David Hildenbrand
2023-06-21 15:56   ` Peter Xu
2023-06-20 13:03 ` [PATCH v1 4/4] virtio-mem: Support "x-ignore-shared" migration David Hildenbrand
2023-06-20 13:06   ` Michael S. Tsirkin
2023-06-20 13:40     ` David Hildenbrand
2023-07-06  5:59 ` [PATCH v1 0/4] " Mario Casquero
2023-07-06  7:19   ` David Hildenbrand

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).