qemu-devel.nongnu.org archive mirror
* [PATCH] kvm: Reallocate dirty_bmap when we change a slot
@ 2019-11-21 16:56 Dr. David Alan Gilbert (git)
From: Dr. David Alan Gilbert (git) @ 2019-11-21 16:56 UTC
  To: qemu-devel, pbonzini, peterx

From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>

kvm_set_phys_mem can be called to reallocate a slot by something the
guest does (e.g. writing to PAM and other chipset registers).
This can happen in the middle of a migration, and if we're unlucky
it can now happen between the split 'sync' and 'clear'; the clear
asserts if there's no bmap to clear.  Recreate the bmap whenever
we change the slot, keeping the clear path happy.

Typically this is triggered by the guest rebooting during a migrate.
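
[Illustration, not part of the patch: a self-contained sketch of the
failure mode for readers who don't have the split dirty-log path in
their head.  The names (toy_slot, toy_sync, toy_recreate, toy_clear)
are made up for the example; only the shape matches QEMU's problem --
a per-slot bitmap allocated lazily on the first sync, a slot
teardown/re-add in between, and a clear step that asserts if the
bitmap is gone.]

#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    size_t nbits;                /* one bit per guest page */
    unsigned char *dirty_bmap;   /* lazily allocated, like KVMSlot's */
} toy_slot;

static size_t toy_bmap_bytes(const toy_slot *s)
{
    return (s->nbits + 7) / 8;
}

static void toy_sync(toy_slot *s)
{
    /* The first log_sync allocates the bitmap on demand. */
    if (!s->dirty_bmap) {
        s->dirty_bmap = calloc(toy_bmap_bytes(s), 1);
    }
    /* ...would now fetch dirty bits from the kernel... */
}

static void toy_recreate(toy_slot *s, int realloc_bmap)
{
    /* Guest write to PAM/chipset registers: slot torn down, re-added. */
    free(s->dirty_bmap);
    s->dirty_bmap = NULL;
    if (realloc_bmap) {
        /* What this patch adds: give the new slot a bitmap right away. */
        s->dirty_bmap = calloc(toy_bmap_bytes(s), 1);
    }
}

static void toy_clear(toy_slot *s)
{
    /* The 'clear' half of the split path assumes the bitmap exists. */
    assert(s->dirty_bmap);
    memset(s->dirty_bmap, 0, toy_bmap_bytes(s));
}

int main(void)
{
    toy_slot s = { .nbits = 4096, .dirty_bmap = NULL };
    toy_sync(&s);
    toy_recreate(&s, 1);   /* pass 0 to reproduce the assertion */
    toy_clear(&s);
    free(s.dirty_bmap);
    return 0;
}

Calling toy_recreate(&s, 0) reproduces the assertion; passing 1 mirrors
what this patch does in kvm_set_phys_mem().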

Corresponds to:
https://bugzilla.redhat.com/show_bug.cgi?id=1772774
https://bugzilla.redhat.com/show_bug.cgi?id=1771032

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 accel/kvm/kvm-all.c | 44 +++++++++++++++++++++++++++++---------------
 1 file changed, 29 insertions(+), 15 deletions(-)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index 140b0bd8f6..dd56f61420 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -515,6 +515,27 @@ static int kvm_get_dirty_pages_log_range(MemoryRegionSection *section,
 
 #define ALIGN(x, y)  (((x)+(y)-1) & ~((y)-1))
 
+/* Allocate the dirty bitmap for a slot  */
+static void kvm_memslot_init_dirty_bitmap(KVMSlot *mem)
+{
+    /*
+     * XXX bad kernel interface alert
+     * For dirty bitmap, kernel allocates array of size aligned to
+     * bits-per-long.  But for case when the kernel is 64bits and
+     * the userspace is 32bits, userspace can't align to the same
+     * bits-per-long, since sizeof(long) is different between kernel
+     * and user space.  This way, userspace will provide buffer which
+     * may be 4 bytes less than the kernel will use, resulting in
+     * userspace memory corruption (which is not detectable by valgrind
+     * too, in most cases).
+     * So for now, let's align to 64 instead of HOST_LONG_BITS here, in
+     * a hope that sizeof(long) won't become >8 any time soon.
+     */
+    hwaddr bitmap_size = ALIGN(((mem->memory_size) >> TARGET_PAGE_BITS),
+                                        /*HOST_LONG_BITS*/ 64) / 8;
+    mem->dirty_bmap = g_malloc0(bitmap_size);
+}
+
 /**
  * kvm_physical_sync_dirty_bitmap - Sync dirty bitmap from kernel space
  *
@@ -547,23 +568,9 @@ static int kvm_physical_sync_dirty_bitmap(KVMMemoryListener *kml,
             goto out;
         }
 
-        /* XXX bad kernel interface alert
-         * For dirty bitmap, kernel allocates array of size aligned to
-         * bits-per-long.  But for case when the kernel is 64bits and
-         * the userspace is 32bits, userspace can't align to the same
-         * bits-per-long, since sizeof(long) is different between kernel
-         * and user space.  This way, userspace will provide buffer which
-         * may be 4 bytes less than the kernel will use, resulting in
-         * userspace memory corruption (which is not detectable by valgrind
-         * too, in most cases).
-         * So for now, let's align to 64 instead of HOST_LONG_BITS here, in
-         * a hope that sizeof(long) won't become >8 any time soon.
-         */
         if (!mem->dirty_bmap) {
-            hwaddr bitmap_size = ALIGN(((mem->memory_size) >> TARGET_PAGE_BITS),
-                                        /*HOST_LONG_BITS*/ 64) / 8;
             /* Allocate on the first log_sync, once and for all */
-            mem->dirty_bmap = g_malloc0(bitmap_size);
+            kvm_memslot_init_dirty_bitmap(mem);
         }
 
         d.dirty_bitmap = mem->dirty_bmap;
@@ -1064,6 +1071,13 @@ static void kvm_set_phys_mem(KVMMemoryListener *kml,
         mem->ram = ram;
         mem->flags = kvm_mem_flags(mr);
 
+        if (mem->flags & KVM_MEM_LOG_DIRTY_PAGES) {
+            /*
+             * Reallocate the bmap so that it doesn't disappear in the
+             * middle of a migrate.
+             */
+            kvm_memslot_init_dirty_bitmap(mem);
+        }
         err = kvm_set_user_memory_region(kml, mem, true);
         if (err) {
             fprintf(stderr, "%s: error registering slot: %s\n", __func__,
-- 
2.23.0
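
[Illustration, not part of the patch: the sizing math behind the XXX
"bad kernel interface" comment in kvm_memslot_init_dirty_bitmap()
above, written out as a standalone program.  TARGET_PAGE_BITS is
assumed to be 12 (4 KiB pages) and the 1 GiB slot size is just an
example.]

#include <stdint.h>
#include <stdio.h>

#define ALIGN(x, y)  (((x) + (y) - 1) & ~((y) - 1))

int main(void)
{
    uint64_t memory_size = 1ULL << 30;           /* example: 1 GiB slot */
    uint64_t pages       = memory_size >> 12;    /* 262144 4 KiB pages  */

    /* Round the bit count up to a multiple of 64 (not HOST_LONG_BITS),
     * so a 32-bit QEMU never hands a 64-bit kernel a buffer 4 bytes
     * shorter than the kernel expects, then convert bits to bytes. */
    uint64_t bitmap_size = ALIGN(pages, 64) / 8;

    printf("%llu pages -> %llu-byte dirty bitmap\n",
           (unsigned long long)pages, (unsigned long long)bitmap_size);
    return 0;
}

For the example slot this prints a 32768-byte bitmap; the rounding to
64 bits only changes the result when the page count is not already a
multiple of 64, but it guarantees a 32-bit QEMU never passes a 64-bit
kernel a buffer shorter than the kernel's own long-aligned size.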




* Re: [PATCH] kvm: Reallocate dirty_bmap when we change a slot
From: Peter Xu @ 2019-11-21 17:17 UTC
  To: Dr. David Alan Gilbert (git); +Cc: pbonzini, qemu-devel

On Thu, Nov 21, 2019 at 04:56:45PM +0000, Dr. David Alan Gilbert (git) wrote:
> From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> 
> kvm_set_phys_mem can be called to reallocate a slot by something the
> guest does (e.g. writing to PAM and other chipset registers).
> This can happen in the middle of a migration, and if we're unlucky
> it can now happen between the split 'sync' and 'clear'; the clear
> asserts if there's no bmap to clear.  Recreate the bmap whenever
> we change the slot, keeping the clear path happy.
> 
> Typically this is triggered by the guest rebooting during a migrate.
> 
> Corresponds to:
> https://bugzilla.redhat.com/show_bug.cgi?id=1772774
> https://bugzilla.redhat.com/show_bug.cgi?id=1771032
> 
> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Reviewed-by: Peter Xu <peterx@redhat.com>

Thanks!

-- 
Peter Xu



