* [PATCH v2 1/4] migration: Add last stage indicator to global dirty log synchronization
2023-02-27 4:26 [PATCH v2 0/4] hw/arm/virt: Support dirty ring Gavin Shan
@ 2023-02-27 4:26 ` Gavin Shan
2023-02-27 4:26 ` [PATCH v2 2/4] kvm: Synchronize the backup bitmap in the last stage Gavin Shan
` (3 subsequent siblings)
4 siblings, 0 replies; 7+ messages in thread
From: Gavin Shan @ 2023-02-27 4:26 UTC (permalink / raw)
To: qemu-arm
Cc: qemu-devel, pbonzini, peter.maydell, peterx, david, philmd, mst,
cohuck, quintela, dgilbert, maz, zhenyzha, shan.gavin
The global dirty log synchronization is used when KVM and dirty ring
are enabled. There is a particularity for ARM64, where the backup
bitmap is used to track dirty pages in non-running-vcpu situations.
It means the dirty ring works with the combination of the ring buffer
and the backup bitmap. The dirty bits in the backup bitmap need to be
collected in the last stage of live migration.
In order to identify the last stage of live migration and pass it
down, an extra parameter is added to the relevant functions and
callbacks. This last stage indicator isn't used until the dirty
ring is enabled in the subsequent patches.
No functional change intended.
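The shape of the change can be sketched outside QEMU with a tiny stand-in listener; all names here (DemoListener, demo_log_sync_global, demo_global_dirty_log_sync) are illustrative placeholders, not the real QEMU symbols:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Minimal stand-in for the MemoryListener change: the global sync
 * callback gains a last_stage flag, so a backend can run its extra
 * final-pass work (e.g. collecting the backup bitmap) exactly once.
 */
typedef struct DemoListener DemoListener;
struct DemoListener {
    void (*log_sync_global)(DemoListener *l, bool last_stage);
    int syncs;            /* total synchronizations      */
    int last_stage_syncs; /* final-pass synchronizations */
};

static void demo_log_sync_global(DemoListener *l, bool last_stage)
{
    l->syncs++;
    if (last_stage) {
        l->last_stage_syncs++;  /* backup-bitmap work would go here */
    }
}

/* Counterpart of memory_global_dirty_log_sync(bool last_stage). */
static void demo_global_dirty_log_sync(DemoListener *l, bool last_stage)
{
    l->log_sync_global(l, last_stage);
}
```

Iterative syncs pass false; only the completion path (ram_save_complete() in the diff below) passes true.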
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Tested-by: Zhenyu Zhang <zhenyzha@redhat.com>
---
accel/kvm/kvm-all.c | 2 +-
include/exec/memory.h | 7 +++++--
migration/dirtyrate.c | 4 ++--
migration/ram.c | 20 ++++++++++----------
softmmu/memory.c | 10 +++++-----
5 files changed, 23 insertions(+), 20 deletions(-)
diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index 9b26582655..01a6a026af 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -1554,7 +1554,7 @@ static void kvm_log_sync(MemoryListener *listener,
kvm_slots_unlock();
}
-static void kvm_log_sync_global(MemoryListener *l)
+static void kvm_log_sync_global(MemoryListener *l, bool last_stage)
{
KVMMemoryListener *kml = container_of(l, KVMMemoryListener, listener);
KVMState *s = kvm_state;
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 2e602a2fad..b5463b3a7a 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -929,8 +929,11 @@ struct MemoryListener {
* its @log_sync must be NULL. Vice versa.
*
* @listener: The #MemoryListener.
+ * @last_stage: The last stage to synchronize the log during migration.
+ * The caller should guarantee that the synchronization with true for
+ * @last_stage is triggered once after all VCPUs have been stopped.
*/
- void (*log_sync_global)(MemoryListener *listener);
+ void (*log_sync_global)(MemoryListener *listener, bool last_stage);
/**
* @log_clear:
@@ -2408,7 +2411,7 @@ MemoryRegionSection memory_region_find(MemoryRegion *mr,
*
* Synchronizes the dirty page log for all address spaces.
*/
-void memory_global_dirty_log_sync(void);
+void memory_global_dirty_log_sync(bool last_stage);
/**
* memory_global_dirty_log_sync: synchronize the dirty log for all memory
diff --git a/migration/dirtyrate.c b/migration/dirtyrate.c
index 575d48c397..da9b4a1f8d 100644
--- a/migration/dirtyrate.c
+++ b/migration/dirtyrate.c
@@ -101,7 +101,7 @@ void global_dirty_log_change(unsigned int flag, bool start)
static void global_dirty_log_sync(unsigned int flag, bool one_shot)
{
qemu_mutex_lock_iothread();
- memory_global_dirty_log_sync();
+ memory_global_dirty_log_sync(false);
if (one_shot) {
memory_global_dirty_log_stop(flag);
}
@@ -553,7 +553,7 @@ static void calculate_dirtyrate_dirty_bitmap(struct DirtyRateConfig config)
* skip it unconditionally and start dirty tracking
* from 2'round of log sync
*/
- memory_global_dirty_log_sync();
+ memory_global_dirty_log_sync(false);
/*
* reset page protect manually and unconditionally.
diff --git a/migration/ram.c b/migration/ram.c
index 96e8a19a58..22e712c8b9 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1217,7 +1217,7 @@ static void migration_trigger_throttle(RAMState *rs)
}
}
-static void migration_bitmap_sync(RAMState *rs)
+static void migration_bitmap_sync(RAMState *rs, bool last_stage)
{
RAMBlock *block;
int64_t end_time;
@@ -1229,7 +1229,7 @@ static void migration_bitmap_sync(RAMState *rs)
}
trace_migration_bitmap_sync_start();
- memory_global_dirty_log_sync();
+ memory_global_dirty_log_sync(last_stage);
qemu_mutex_lock(&rs->bitmap_mutex);
WITH_RCU_READ_LOCK_GUARD() {
@@ -1263,7 +1263,7 @@ static void migration_bitmap_sync(RAMState *rs)
}
}
-static void migration_bitmap_sync_precopy(RAMState *rs)
+static void migration_bitmap_sync_precopy(RAMState *rs, bool last_stage)
{
Error *local_err = NULL;
@@ -1276,7 +1276,7 @@ static void migration_bitmap_sync_precopy(RAMState *rs)
local_err = NULL;
}
- migration_bitmap_sync(rs);
+ migration_bitmap_sync(rs, last_stage);
if (precopy_notify(PRECOPY_NOTIFY_AFTER_BITMAP_SYNC, &local_err)) {
error_report_err(local_err);
@@ -2937,7 +2937,7 @@ void ram_postcopy_send_discard_bitmap(MigrationState *ms)
RCU_READ_LOCK_GUARD();
/* This should be our last sync, the src is now paused */
- migration_bitmap_sync(rs);
+ migration_bitmap_sync(rs, false);
/* Easiest way to make sure we don't resume in the middle of a host-page */
rs->pss[RAM_CHANNEL_PRECOPY].last_sent_block = NULL;
@@ -3128,7 +3128,7 @@ static void ram_init_bitmaps(RAMState *rs)
/* We don't use dirty log with background snapshots */
if (!migrate_background_snapshot()) {
memory_global_dirty_log_start(GLOBAL_DIRTY_MIGRATION);
- migration_bitmap_sync_precopy(rs);
+ migration_bitmap_sync_precopy(rs, false);
}
}
qemu_mutex_unlock_ramlist();
@@ -3446,7 +3446,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
WITH_RCU_READ_LOCK_GUARD() {
if (!migration_in_postcopy()) {
- migration_bitmap_sync_precopy(rs);
+ migration_bitmap_sync_precopy(rs, true);
}
ram_control_before_iterate(f, RAM_CONTROL_FINISH);
@@ -3516,7 +3516,7 @@ static void ram_state_pending_exact(void *opaque, uint64_t *must_precopy,
if (!migration_in_postcopy()) {
qemu_mutex_lock_iothread();
WITH_RCU_READ_LOCK_GUARD() {
- migration_bitmap_sync_precopy(rs);
+ migration_bitmap_sync_precopy(rs, false);
}
qemu_mutex_unlock_iothread();
remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
@@ -3926,7 +3926,7 @@ void colo_incoming_start_dirty_log(void)
qemu_mutex_lock_iothread();
qemu_mutex_lock_ramlist();
- memory_global_dirty_log_sync();
+ memory_global_dirty_log_sync(false);
WITH_RCU_READ_LOCK_GUARD() {
RAMBLOCK_FOREACH_NOT_IGNORED(block) {
ramblock_sync_dirty_bitmap(ram_state, block);
@@ -4217,7 +4217,7 @@ void colo_flush_ram_cache(void)
void *src_host;
unsigned long offset = 0;
- memory_global_dirty_log_sync();
+ memory_global_dirty_log_sync(false);
WITH_RCU_READ_LOCK_GUARD() {
RAMBLOCK_FOREACH_NOT_IGNORED(block) {
ramblock_sync_dirty_bitmap(ram_state, block);
diff --git a/softmmu/memory.c b/softmmu/memory.c
index 9d64efca26..1cc36ef028 100644
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -2224,7 +2224,7 @@ void memory_region_set_dirty(MemoryRegion *mr, hwaddr addr,
* If memory region `mr' is NULL, do global sync. Otherwise, sync
* dirty bitmap for the specified memory region.
*/
-static void memory_region_sync_dirty_bitmap(MemoryRegion *mr)
+static void memory_region_sync_dirty_bitmap(MemoryRegion *mr, bool last_stage)
{
MemoryListener *listener;
AddressSpace *as;
@@ -2254,7 +2254,7 @@ static void memory_region_sync_dirty_bitmap(MemoryRegion *mr)
* is to do a global sync, because we are not capable to
* sync in a finer granularity.
*/
- listener->log_sync_global(listener);
+ listener->log_sync_global(listener, last_stage);
trace_memory_region_sync_dirty(mr ? mr->name : "(all)", listener->name, 1);
}
}
@@ -2318,7 +2318,7 @@ DirtyBitmapSnapshot *memory_region_snapshot_and_clear_dirty(MemoryRegion *mr,
{
DirtyBitmapSnapshot *snapshot;
assert(mr->ram_block);
- memory_region_sync_dirty_bitmap(mr);
+ memory_region_sync_dirty_bitmap(mr, false);
snapshot = cpu_physical_memory_snapshot_and_clear_dirty(mr, addr, size, client);
memory_global_after_dirty_log_sync();
return snapshot;
@@ -2844,9 +2844,9 @@ bool memory_region_present(MemoryRegion *container, hwaddr addr)
return mr && mr != container;
}
-void memory_global_dirty_log_sync(void)
+void memory_global_dirty_log_sync(bool last_stage)
{
- memory_region_sync_dirty_bitmap(NULL);
+ memory_region_sync_dirty_bitmap(NULL, last_stage);
}
void memory_global_after_dirty_log_sync(void)
--
2.23.0
^ permalink raw reply related [flat|nested] 7+ messages in thread
* [PATCH v2 2/4] kvm: Synchronize the backup bitmap in the last stage
2023-02-27 4:26 [PATCH v2 0/4] hw/arm/virt: Support dirty ring Gavin Shan
2023-02-27 4:26 ` [PATCH v2 1/4] migration: Add last stage indicator to global dirty log synchronization Gavin Shan
@ 2023-02-27 4:26 ` Gavin Shan
2023-02-27 4:26 ` [PATCH v2 3/4] kvm: Add helper kvm_dirty_ring_init() Gavin Shan
` (2 subsequent siblings)
4 siblings, 0 replies; 7+ messages in thread
From: Gavin Shan @ 2023-02-27 4:26 UTC (permalink / raw)
To: qemu-arm
Cc: qemu-devel, pbonzini, peter.maydell, peterx, david, philmd, mst,
cohuck, quintela, dgilbert, maz, zhenyzha, shan.gavin
In the last stage of live migration or memory slot removal, the
backup bitmap needs to be synchronized when it has been enabled.
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Tested-by: Zhenyu Zhang <zhenyzha@redhat.com>
---
accel/kvm/kvm-all.c | 11 +++++++++++
include/sysemu/kvm_int.h | 1 +
2 files changed, 12 insertions(+)
diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index 01a6a026af..b5e12de522 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -1352,6 +1352,10 @@ static void kvm_set_phys_mem(KVMMemoryListener *kml,
*/
if (kvm_state->kvm_dirty_ring_size) {
kvm_dirty_ring_reap_locked(kvm_state, NULL);
+ if (kvm_state->kvm_dirty_ring_with_bitmap) {
+ kvm_slot_sync_dirty_pages(mem);
+ kvm_slot_get_dirty_log(kvm_state, mem);
+ }
} else {
kvm_slot_get_dirty_log(kvm_state, mem);
}
@@ -1573,6 +1577,12 @@ static void kvm_log_sync_global(MemoryListener *l, bool last_stage)
mem = &kml->slots[i];
if (mem->memory_size && mem->flags & KVM_MEM_LOG_DIRTY_PAGES) {
kvm_slot_sync_dirty_pages(mem);
+
+ if (s->kvm_dirty_ring_with_bitmap && last_stage &&
+ kvm_slot_get_dirty_log(s, mem)) {
+ kvm_slot_sync_dirty_pages(mem);
+ }
+
/*
* This is not needed by KVM_GET_DIRTY_LOG because the
* ioctl will unconditionally overwrite the whole region.
@@ -3701,6 +3711,7 @@ static void kvm_accel_instance_init(Object *obj)
s->kernel_irqchip_split = ON_OFF_AUTO_AUTO;
/* KVM dirty ring is by default off */
s->kvm_dirty_ring_size = 0;
+ s->kvm_dirty_ring_with_bitmap = false;
s->notify_vmexit = NOTIFY_VMEXIT_OPTION_RUN;
s->notify_window = 0;
}
diff --git a/include/sysemu/kvm_int.h b/include/sysemu/kvm_int.h
index 60b520a13e..fdd5b1bde0 100644
--- a/include/sysemu/kvm_int.h
+++ b/include/sysemu/kvm_int.h
@@ -115,6 +115,7 @@ struct KVMState
} *as;
uint64_t kvm_dirty_ring_bytes; /* Size of the per-vcpu dirty ring */
uint32_t kvm_dirty_ring_size; /* Number of dirty GFNs per ring */
+ bool kvm_dirty_ring_with_bitmap;
struct KVMDirtyRingReaper reaper;
NotifyVmexitOption notify_vmexit;
uint32_t notify_window;
--
2.23.0
* [PATCH v2 3/4] kvm: Add helper kvm_dirty_ring_init()
2023-02-27 4:26 [PATCH v2 0/4] hw/arm/virt: Support dirty ring Gavin Shan
2023-02-27 4:26 ` [PATCH v2 1/4] migration: Add last stage indicator to global dirty log synchronization Gavin Shan
2023-02-27 4:26 ` [PATCH v2 2/4] kvm: Synchronize the backup bitmap in the last stage Gavin Shan
@ 2023-02-27 4:26 ` Gavin Shan
2023-02-27 4:26 ` [PATCH v2 4/4] kvm: Enable dirty ring for arm64 Gavin Shan
2023-03-13 7:13 ` [PATCH v2 0/4] hw/arm/virt: Support dirty ring Gavin Shan
4 siblings, 0 replies; 7+ messages in thread
From: Gavin Shan @ 2023-02-27 4:26 UTC (permalink / raw)
To: qemu-arm
Cc: qemu-devel, pbonzini, peter.maydell, peterx, david, philmd, mst,
cohuck, quintela, dgilbert, maz, zhenyzha, shan.gavin
There are multiple capabilities associated with the dirty ring for different
architectures: KVM_CAP_DIRTY_{LOG_RING, LOG_RING_ACQ_REL} for x86 and
arm64 respectively, and more work is needed to support the dirty ring
for arm64.
Let's add the helper kvm_dirty_ring_init() to enable the dirty ring. With
this, the code looks a bit cleaner.
No functional change intended.
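The helper's control flow can be sketched as a pure size check; names and values are illustrative, and the real helper additionally issues the enable-cap ioctl:

```c
#include <assert.h>
#include <stdint.h>

/* Outcomes mirroring kvm_dirty_ring_init(): an unsupported capability
 * falls back to dirty logging (not an error); an oversized ring is a
 * hard error. */
enum demo_ring_result { RING_FALLBACK, RING_ENABLED, RING_TOO_BIG };

/*
 * max_bytes plays the role of kvm_vm_check_extension()'s return value:
 * <= 0 means the capability is absent, > 0 is the supported maximum.
 */
static enum demo_ring_result demo_check_ring(uint64_t ring_bytes,
                                             int max_bytes)
{
    if (max_bytes <= 0) {
        return RING_FALLBACK;   /* warn and use the bitmap method */
    }
    if (ring_bytes > (uint64_t)max_bytes) {
        return RING_TOO_BIG;    /* -EINVAL in the patch */
    }
    return RING_ENABLED;
}
```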
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Tested-by: Zhenyu Zhang <zhenyzha@redhat.com>
---
accel/kvm/kvm-all.c | 76 ++++++++++++++++++++++++++++-----------------
1 file changed, 47 insertions(+), 29 deletions(-)
diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index b5e12de522..e5035026c9 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -1453,6 +1453,50 @@ static int kvm_dirty_ring_reaper_init(KVMState *s)
return 0;
}
+static int kvm_dirty_ring_init(KVMState *s)
+{
+ uint32_t ring_size = s->kvm_dirty_ring_size;
+ uint64_t ring_bytes = ring_size * sizeof(struct kvm_dirty_gfn);
+ int ret;
+
+ s->kvm_dirty_ring_size = 0;
+ s->kvm_dirty_ring_bytes = 0;
+
+ /* Bail if the dirty ring size isn't specified */
+ if (!ring_size) {
+ return 0;
+ }
+
+ /*
+ * Read the max supported pages. Fall back to dirty logging mode
+ * if the dirty ring isn't supported.
+ */
+ ret = kvm_vm_check_extension(s, KVM_CAP_DIRTY_LOG_RING);
+ if (ret <= 0) {
+ warn_report("KVM dirty ring not available, using bitmap method");
+ return 0;
+ }
+
+ if (ring_bytes > ret) {
+ error_report("KVM dirty ring size %" PRIu32 " too big "
+ "(maximum is %ld). Please use a smaller value.",
+ ring_size, (long)ret / sizeof(struct kvm_dirty_gfn));
+ return -EINVAL;
+ }
+
+ ret = kvm_vm_enable_cap(s, KVM_CAP_DIRTY_LOG_RING, 0, ring_bytes);
+ if (ret) {
+ error_report("Enabling of KVM dirty ring failed: %s. "
+ "Suggested minimum value is 1024.", strerror(-ret));
+ return -EIO;
+ }
+
+ s->kvm_dirty_ring_size = ring_size;
+ s->kvm_dirty_ring_bytes = ring_bytes;
+
+ return 0;
+}
+
static void kvm_region_add(MemoryListener *listener,
MemoryRegionSection *section)
{
@@ -2522,35 +2566,9 @@ static int kvm_init(MachineState *ms)
* Enable KVM dirty ring if supported, otherwise fall back to
* dirty logging mode
*/
- if (s->kvm_dirty_ring_size > 0) {
- uint64_t ring_bytes;
-
- ring_bytes = s->kvm_dirty_ring_size * sizeof(struct kvm_dirty_gfn);
-
- /* Read the max supported pages */
- ret = kvm_vm_check_extension(s, KVM_CAP_DIRTY_LOG_RING);
- if (ret > 0) {
- if (ring_bytes > ret) {
- error_report("KVM dirty ring size %" PRIu32 " too big "
- "(maximum is %ld). Please use a smaller value.",
- s->kvm_dirty_ring_size,
- (long)ret / sizeof(struct kvm_dirty_gfn));
- ret = -EINVAL;
- goto err;
- }
-
- ret = kvm_vm_enable_cap(s, KVM_CAP_DIRTY_LOG_RING, 0, ring_bytes);
- if (ret) {
- error_report("Enabling of KVM dirty ring failed: %s. "
- "Suggested minimum value is 1024.", strerror(-ret));
- goto err;
- }
-
- s->kvm_dirty_ring_bytes = ring_bytes;
- } else {
- warn_report("KVM dirty ring not available, using bitmap method");
- s->kvm_dirty_ring_size = 0;
- }
+ ret = kvm_dirty_ring_init(s);
+ if (ret < 0) {
+ goto err;
}
/*
--
2.23.0
* [PATCH v2 4/4] kvm: Enable dirty ring for arm64
2023-02-27 4:26 [PATCH v2 0/4] hw/arm/virt: Support dirty ring Gavin Shan
` (2 preceding siblings ...)
2023-02-27 4:26 ` [PATCH v2 3/4] kvm: Add helper kvm_dirty_ring_init() Gavin Shan
@ 2023-02-27 4:26 ` Gavin Shan
2023-03-13 7:13 ` [PATCH v2 0/4] hw/arm/virt: Support dirty ring Gavin Shan
4 siblings, 0 replies; 7+ messages in thread
From: Gavin Shan @ 2023-02-27 4:26 UTC (permalink / raw)
To: qemu-arm
Cc: qemu-devel, pbonzini, peter.maydell, peterx, david, philmd, mst,
cohuck, quintela, dgilbert, maz, zhenyzha, shan.gavin
arm64 has a different capability from x86 to enable the dirty ring:
KVM_CAP_DIRTY_LOG_RING_ACQ_REL. Besides, arm64 also needs the backup
bitmap extension (KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP) when the 'kvm-arm-gicv3'
or 'arm-its-kvm' device is enabled. Here the extension is always enabled,
which introduces unnecessary overhead in the last stage of dirty log
synchronization when those two devices aren't used, but the overhead should
be very small and acceptable. The benefit is to cover future cases where those
two devices are used, without modifying the code.
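The probe order this patch introduces can be sketched as a pure function; the cap IDs are placeholders, and the two arguments stand in for kvm_vm_check_extension()'s return values for the two capabilities:

```c
#include <assert.h>

/* Placeholder IDs, not the real KVM_CAP_* numbers. */
enum { DEMO_CAP_RING = 1, DEMO_CAP_RING_ACQ_REL = 2 };

/*
 * Pick the capability to enable: try the x86-style cap first and fall
 * back to arm64's acquire/release variant. Returns the chosen cap and
 * stores the kernel-reported maximum ring size in bytes, or returns 0
 * when neither is supported (the caller then falls back to the bitmap
 * method).
 */
static int demo_probe_ring_cap(int ring_ret, int acq_rel_ret,
                               int *max_bytes)
{
    if (ring_ret > 0) {
        *max_bytes = ring_ret;
        return DEMO_CAP_RING;
    }
    if (acq_rel_ret > 0) {
        *max_bytes = acq_rel_ret;
        return DEMO_CAP_RING_ACQ_REL;
    }
    return 0;
}
```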
Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Tested-by: Zhenyu Zhang <zhenyzha@redhat.com>
---
accel/kvm/kvm-all.c | 23 +++++++++++++++++++++--
1 file changed, 21 insertions(+), 2 deletions(-)
diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index e5035026c9..d96bca618b 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -1457,6 +1457,7 @@ static int kvm_dirty_ring_init(KVMState *s)
{
uint32_t ring_size = s->kvm_dirty_ring_size;
uint64_t ring_bytes = ring_size * sizeof(struct kvm_dirty_gfn);
+ unsigned int capability = KVM_CAP_DIRTY_LOG_RING;
int ret;
s->kvm_dirty_ring_size = 0;
@@ -1471,7 +1472,12 @@ static int kvm_dirty_ring_init(KVMState *s)
* Read the max supported pages. Fall back to dirty logging mode
* if the dirty ring isn't supported.
*/
- ret = kvm_vm_check_extension(s, KVM_CAP_DIRTY_LOG_RING);
+ ret = kvm_vm_check_extension(s, capability);
+ if (ret <= 0) {
+ capability = KVM_CAP_DIRTY_LOG_RING_ACQ_REL;
+ ret = kvm_vm_check_extension(s, capability);
+ }
+
if (ret <= 0) {
warn_report("KVM dirty ring not available, using bitmap method");
return 0;
@@ -1484,13 +1490,26 @@ static int kvm_dirty_ring_init(KVMState *s)
return -EINVAL;
}
- ret = kvm_vm_enable_cap(s, KVM_CAP_DIRTY_LOG_RING, 0, ring_bytes);
+ ret = kvm_vm_enable_cap(s, capability, 0, ring_bytes);
if (ret) {
error_report("Enabling of KVM dirty ring failed: %s. "
"Suggested minimum value is 1024.", strerror(-ret));
return -EIO;
}
+ /* Enable the backup bitmap if it is supported */
+ ret = kvm_vm_check_extension(s, KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP);
+ if (ret > 0) {
+ ret = kvm_vm_enable_cap(s, KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP, 0);
+ if (ret) {
+ error_report("Enabling of KVM dirty ring's backup bitmap failed: "
+ "%s. ", strerror(-ret));
+ return -EIO;
+ }
+
+ s->kvm_dirty_ring_with_bitmap = true;
+ }
+
s->kvm_dirty_ring_size = ring_size;
s->kvm_dirty_ring_bytes = ring_bytes;
--
2.23.0
* Re: [PATCH v2 0/4] hw/arm/virt: Support dirty ring
2023-02-27 4:26 [PATCH v2 0/4] hw/arm/virt: Support dirty ring Gavin Shan
` (3 preceding siblings ...)
2023-02-27 4:26 ` [PATCH v2 4/4] kvm: Enable dirty ring for arm64 Gavin Shan
@ 2023-03-13 7:13 ` Gavin Shan
2023-03-14 17:05 ` Peter Maydell
4 siblings, 1 reply; 7+ messages in thread
From: Gavin Shan @ 2023-03-13 7:13 UTC (permalink / raw)
To: qemu-arm
Cc: qemu-devel, pbonzini, peter.maydell, peterx, david, philmd, mst,
cohuck, quintela, dgilbert, maz, zhenyzha, shan.gavin
On 2/27/23 12:26 PM, Gavin Shan wrote:
> This series intends to support the dirty ring for live migration on arm64. The
> dirty ring uses a discrete buffer to track dirty pages. For arm64, the speciality
> is to use a backup bitmap to track dirty pages when there is no running-vcpu
> context. It's known that the backup bitmap needs to be synchronized when the
> KVM device "kvm-arm-gicv3" or "arm-its-kvm" has been enabled. The backup
> bitmap is collected in the last stage of migration. The policy here is to
> always enable the backup bitmap extension. This introduces overhead to
> synchronize the backup bitmap in the last stage of migration even when those
> two devices aren't used. However, the overhead should be very small and
> acceptable. The benefit is to support future cases where those two devices
> are used without modifying the code.
>
> PATCH[1] add migration last stage indicator
> PATCH[2] synchronize the backup bitmap in the last stage of migration
> PATCH[3] add helper kvm_dirty_ring_init() to enable dirty ring
> PATCH[4] enable dirty ring for arm64
>
> v1: https://lists.nongnu.org/archive/html/qemu-arm/2023-02/msg00434.html
> RFCv1: https://lists.nongnu.org/archive/html/qemu-arm/2023-02/msg00171.html
>
> Testing
> =======
> (1) kvm-unit-tests/its-pending-migration and kvm-unit-tests/its-migration with
> dirty ring or normal dirty page tracking mechanism. All test cases passed.
>
> QEMU=./qemu.main/build/qemu-system-aarch64 ACCEL=kvm \
> ./its-pending-migration
>
> QEMU=./qemu.main/build/qemu-system-aarch64 ACCEL=kvm \
> ./its-migration
>
> QEMU=./qemu.main/build/qemu-system-aarch64 ACCEL=kvm,dirty-ring-size=65536 \
> ./its-pending-migration
>
> QEMU=./qemu.main/build/qemu-system-aarch64 ACCEL=kvm,dirty-ring-size=65536 \
> ./its-migration
>
> (2) Combinations of migration, post-copy migration, e1000e and virtio-net
> devices. All test cases passed.
>
> -netdev tap,id=net0,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown \
> -device e1000e,bus=pcie.5,netdev=net0,mac=52:54:00:f1:26:a0
>
> -netdev tap,id=vnet0,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown \
> -device virtio-net-pci,bus=pcie.6,netdev=vnet0,mac=52:54:00:f1:26:b0
>
> Changelog
> =========
> v2:
> * Drop PATCH[v1 1/6] to synchronize linux-headers (Gavin)
> * More restrictive comments about struct MemoryListener::log_sync_global (PeterX)
> * Always enable the backup bitmap extension (PeterM)
> v1:
> * Combine two patches into one PATCH[v1 2/6] for the last stage indicator (PeterX)
> * Drop the secondary bitmap and use the original one directly (Juan)
> * Avoid "goto out" in helper kvm_dirty_ring_init() (Juan)
>
Ping, Paolo and Peter Maydell. Please take a look to see if it can be
merged, thanks!
>
> Gavin Shan (4):
> migration: Add last stage indicator to global dirty log
> synchronization
> kvm: Synchronize the backup bitmap in the last stage
> kvm: Add helper kvm_dirty_ring_init()
> kvm: Enable dirty ring for arm64
>
> accel/kvm/kvm-all.c | 108 ++++++++++++++++++++++++++++-----------
> include/exec/memory.h | 7 ++-
> include/sysemu/kvm_int.h | 1 +
> migration/dirtyrate.c | 4 +-
> migration/ram.c | 20 ++++----
> softmmu/memory.c | 10 ++--
> 6 files changed, 101 insertions(+), 49 deletions(-)
>
* Re: [PATCH v2 0/4] hw/arm/virt: Support dirty ring
2023-03-13 7:13 ` [PATCH v2 0/4] hw/arm/virt: Support dirty ring Gavin Shan
@ 2023-03-14 17:05 ` Peter Maydell
0 siblings, 0 replies; 7+ messages in thread
From: Peter Maydell @ 2023-03-14 17:05 UTC (permalink / raw)
To: Gavin Shan
Cc: qemu-arm, qemu-devel, pbonzini, peterx, david, philmd, mst,
cohuck, quintela, dgilbert, maz, zhenyzha, shan.gavin
On Mon, 13 Mar 2023 at 07:13, Gavin Shan <gshan@redhat.com> wrote:
>
> On 2/27/23 12:26 PM, Gavin Shan wrote:
> > This series intends to support the dirty ring for live migration on arm64. The
> > dirty ring uses a discrete buffer to track dirty pages. For arm64, the speciality
> > is to use a backup bitmap to track dirty pages when there is no running-vcpu
> > context. It's known that the backup bitmap needs to be synchronized when the
> > KVM device "kvm-arm-gicv3" or "arm-its-kvm" has been enabled. The backup
> > bitmap is collected in the last stage of migration. The policy here is to
> > always enable the backup bitmap extension. This introduces overhead to
> > synchronize the backup bitmap in the last stage of migration even when those
> > two devices aren't used. However, the overhead should be very small and
> > acceptable. The benefit is to support future cases where those two devices
> > are used without modifying the code.
> >
> > PATCH[1] add migration last stage indicator
> > PATCH[2] synchronize the backup bitmap in the last stage of migration
> > PATCH[3] add helper kvm_dirty_ring_init() to enable dirty ring
> > PATCH[4] enable dirty ring for arm64
> >
> > v1: https://lists.nongnu.org/archive/html/qemu-arm/2023-02/msg00434.html
> > RFCv1: https://lists.nongnu.org/archive/html/qemu-arm/2023-02/msg00171.html
> >
> > Testing
> > =======
> > (1) kvm-unit-tests/its-pending-migration and kvm-unit-tests/its-migration with
> > dirty ring or normal dirty page tracking mechanism. All test cases passed.
> >
> > QEMU=./qemu.main/build/qemu-system-aarch64 ACCEL=kvm \
> > ./its-pending-migration
> >
> > QEMU=./qemu.main/build/qemu-system-aarch64 ACCEL=kvm \
> > ./its-migration
> >
> > QEMU=./qemu.main/build/qemu-system-aarch64 ACCEL=kvm,dirty-ring-size=65536 \
> > ./its-pending-migration
> >
> > QEMU=./qemu.main/build/qemu-system-aarch64 ACCEL=kvm,dirty-ring-size=65536 \
> > ./its-migration
> >
> > (2) Combinations of migration, post-copy migration, e1000e and virtio-net
> > devices. All test cases passed.
> >
> > -netdev tap,id=net0,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown \
> > -device e1000e,bus=pcie.5,netdev=net0,mac=52:54:00:f1:26:a0
> >
> > -netdev tap,id=vnet0,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown \
> > -device virtio-net-pci,bus=pcie.6,netdev=vnet0,mac=52:54:00:f1:26:b0
> >
> > Changelog
> > =========
> > v2:
> > * Drop PATCH[v1 1/6] to synchronize linux-headers (Gavin)
> > * More restrictive comments about struct MemoryListener::log_sync_global (PeterX)
> > * Always enable the backup bitmap extension (PeterM)
> > v1:
> > * Combine two patches into one PATCH[v1 2/6] for the last stage indicator (PeterX)
> > * Drop the secondary bitmap and use the original one directly (Juan)
> > * Avoid "goto out" in helper kvm_dirty_ring_init() (Juan)
> >
>
> Ping, Paolo and Peter Maydell. Please take a look to see if it can be
> merged, thanks!
No objections here; I'm assuming that since it's only touching
KVM core code that it would go via Paolo. However, as a new
feature it has missed softfreeze for 8.0, so it'll have to
wait until the tree re-opens for 8.1.
thanks
-- PMM