* [Qemu-devel] [RFC 0/4] migration.experimental queue
@ 2013-01-18 11:53 Juan Quintela
2013-01-18 11:53 ` [Qemu-devel] [PATCH 1/4] ram: add free_space parameter to save_live functions Juan Quintela
` (3 more replies)
0 siblings, 4 replies; 10+ messages in thread
From: Juan Quintela @ 2013-01-18 11:53 UTC (permalink / raw)
To: qemu-devel
Hi
This is the other half of the series, for people who want to test
the migration latency improvements.
The patches are on top of the pull request submitted yesterday.
The last patch, as usual, prints where the time is spent in the
completion stage; it will be removed before proper submission.
Paolo's patches are still being merged on top of this.
Later, Juan.
The following changes since commit 6522773f88a2e37800f0bf7dc3632a14649f53c6:
migration: remove argument to qemu_savevm_state_cancel (2013-01-17 13:54:52 +0100)
are available in the git repository at:
git://repo.or.cz/qemu/quintela.git migration.experimental.next
for you to fetch changes up to 86a00ab592330a264f6a26fa84da1a64f5ae7800:
migration: print times for end phase (2013-01-18 12:36:58 +0100)
----------------------------------------------------------------
Juan Quintela (4):
ram: add free_space parameter to save_live functions
ram: remove xbzrle last_stage optimization
ram: reuse ram_save_iterate() for the complete stage
migration: print times for end phase
arch_init.c | 56 +++++++++++++--------------------------------
block-migration.c | 2 +-
block.c | 6 +++++
cpus.c | 17 ++++++++++++++
include/migration/vmstate.h | 2 +-
include/sysemu/sysemu.h | 2 +-
migration.c | 28 ++++++++++++++++++++++-
savevm.c | 23 ++++++++++++++++---
8 files changed, 89 insertions(+), 47 deletions(-)
* [Qemu-devel] [PATCH 1/4] ram: add free_space parameter to save_live functions
2013-01-18 11:53 [Qemu-devel] [RFC 0/4] migration.experimental queue Juan Quintela
@ 2013-01-18 11:53 ` Juan Quintela
2013-01-21 9:59 ` Orit Wasserman
2013-01-18 11:53 ` [Qemu-devel] [PATCH 2/4] ram: remove xbzrle last_stage optimization Juan Quintela
` (2 subsequent siblings)
3 siblings, 1 reply; 10+ messages in thread
From: Juan Quintela @ 2013-01-18 11:53 UTC (permalink / raw)
To: qemu-devel
Since we know exactly how much space we have free in the buffers, we can
send that information instead of guessing how much we can send each time.
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
arch_init.c | 20 +++++++++-----------
block-migration.c | 2 +-
include/migration/vmstate.h | 2 +-
include/sysemu/sysemu.h | 2 +-
migration.c | 3 ++-
savevm.c | 10 +++++++---
6 files changed, 21 insertions(+), 18 deletions(-)
diff --git a/arch_init.c b/arch_init.c
index dada6de..2792b76 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -601,9 +601,12 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
return 0;
}
-static int ram_save_iterate(QEMUFile *f, void *opaque)
+/* Maximum size for a transmitted page
+ header + len + idstr + page size */
+#define MAX_PAGE_SIZE (8 + 1 + 256 + TARGET_PAGE_SIZE)
+
+static int ram_save_iterate(QEMUFile *f, void *opaque, uint64_t free_space)
{
- int ret;
int i;
int64_t t0;
int total_sent = 0;
@@ -616,15 +619,15 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
t0 = qemu_get_clock_ns(rt_clock);
i = 0;
- while ((ret = qemu_file_rate_limit(f)) == 0) {
- int bytes_sent;
-
- bytes_sent = ram_save_block(f, false);
+ /* We need space for at least one page and end of section marker */
+ while (free_space > MAX_PAGE_SIZE + 8) {
+ int bytes_sent = ram_save_block(f, false);
/* no more blocks to sent */
if (bytes_sent == 0) {
break;
}
total_sent += bytes_sent;
+ free_space -= bytes_sent;
acct_info.iterations++;
/* we want to check in the 1st loop, just in case it was the 1st time
and we had to sync the dirty bitmap.
@@ -644,11 +647,6 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
qemu_mutex_unlock_ramlist();
- if (ret < 0) {
- bytes_transferred += total_sent;
- return ret;
- }
-
qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
total_sent += 8;
bytes_transferred += total_sent;
diff --git a/block-migration.c b/block-migration.c
index 6acf3e1..0c3157a 100644
--- a/block-migration.c
+++ b/block-migration.c
@@ -535,7 +535,7 @@ static int block_save_setup(QEMUFile *f, void *opaque)
return 0;
}
-static int block_save_iterate(QEMUFile *f, void *opaque)
+static int block_save_iterate(QEMUFile *f, void *opaque, uint64_t free_space)
{
int ret;
diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
index f27276c..0b55cf4 100644
--- a/include/migration/vmstate.h
+++ b/include/migration/vmstate.h
@@ -33,7 +33,7 @@ typedef struct SaveVMHandlers {
void (*set_params)(const MigrationParams *params, void * opaque);
SaveStateHandler *save_state;
int (*save_live_setup)(QEMUFile *f, void *opaque);
- int (*save_live_iterate)(QEMUFile *f, void *opaque);
+ int (*save_live_iterate)(QEMUFile *f, void *opaque, uint64_t free_space);
int (*save_live_complete)(QEMUFile *f, void *opaque);
uint64_t (*save_live_pending)(QEMUFile *f, void *opaque, uint64_t max_size);
void (*cancel)(void *opaque);
diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
index d65a9f1..3ff043c 100644
--- a/include/sysemu/sysemu.h
+++ b/include/sysemu/sysemu.h
@@ -75,7 +75,7 @@ void qemu_announce_self(void);
bool qemu_savevm_state_blocked(Error **errp);
int qemu_savevm_state_begin(QEMUFile *f,
const MigrationParams *params);
-int qemu_savevm_state_iterate(QEMUFile *f);
+int qemu_savevm_state_iterate(QEMUFile *f, uint64_t free_space);
int qemu_savevm_state_complete(QEMUFile *f);
void qemu_savevm_state_cancel(void);
uint64_t qemu_savevm_state_pending(QEMUFile *f, uint64_t max_size);
diff --git a/migration.c b/migration.c
index 77c1971..e74ce49 100644
--- a/migration.c
+++ b/migration.c
@@ -683,6 +683,7 @@ static void *buffered_file_thread(void *opaque)
while (true) {
int64_t current_time = qemu_get_clock_ms(rt_clock);
uint64_t pending_size;
+ size_t free_space = s->buffer_capacity - s->buffer_size;
qemu_mutex_lock_iothread();
if (s->state != MIG_STATE_ACTIVE) {
@@ -699,7 +700,7 @@ static void *buffered_file_thread(void *opaque)
pending_size = qemu_savevm_state_pending(s->file, max_size);
DPRINTF("pending size %lu max %lu\n", pending_size, max_size);
if (pending_size && pending_size >= max_size) {
- ret = qemu_savevm_state_iterate(s->file);
+ ret = qemu_savevm_state_iterate(s->file, free_space);
if (ret < 0) {
qemu_mutex_unlock_iothread();
break;
diff --git a/savevm.c b/savevm.c
index 913a623..3447f91 100644
--- a/savevm.c
+++ b/savevm.c
@@ -1609,10 +1609,11 @@ int qemu_savevm_state_begin(QEMUFile *f,
* 0 : We haven't finished, caller have to go again
* 1 : We have finished, we can go to complete phase
*/
-int qemu_savevm_state_iterate(QEMUFile *f)
+int qemu_savevm_state_iterate(QEMUFile *f, uint64_t free_space)
{
SaveStateEntry *se;
int ret = 1;
+ size_t remaining_space = free_space;
QTAILQ_FOREACH(se, &savevm_handlers, entry) {
if (!se->ops || !se->ops->save_live_iterate) {
@@ -1629,9 +1630,11 @@ int qemu_savevm_state_iterate(QEMUFile *f)
trace_savevm_section_start();
/* Section type */
qemu_put_byte(f, QEMU_VM_SECTION_PART);
+ remaining_space -= 1;
qemu_put_be32(f, se->section_id);
+ remaining_space -= 4;
- ret = se->ops->save_live_iterate(f, se->opaque);
+ ret = se->ops->save_live_iterate(f, se->opaque, remaining_space);
trace_savevm_section_end(se->section_id);
if (ret <= 0) {
@@ -1641,6 +1644,7 @@ int qemu_savevm_state_iterate(QEMUFile *f)
synchronized over and over again. */
break;
}
+ remaining_space -= ret;
}
if (ret != 0) {
return ret;
@@ -1756,7 +1760,7 @@ static int qemu_savevm_state(QEMUFile *f)
goto out;
do {
- ret = qemu_savevm_state_iterate(f);
+ ret = qemu_savevm_state_iterate(f, SIZE_MAX);
if (ret < 0)
goto out;
} while (ret == 0);
--
1.8.1
* [Qemu-devel] [PATCH 2/4] ram: remove xbzrle last_stage optimization
2013-01-18 11:53 [Qemu-devel] [RFC 0/4] migration.experimental queue Juan Quintela
2013-01-18 11:53 ` [Qemu-devel] [PATCH 1/4] ram: add free_space parameter to save_live functions Juan Quintela
@ 2013-01-18 11:53 ` Juan Quintela
2013-01-21 10:11 ` Orit Wasserman
2013-01-18 11:53 ` [Qemu-devel] [PATCH 3/4] ram: reuse ram_save_iterate() for the complete stage Juan Quintela
2013-01-18 11:53 ` [Qemu-devel] [PATCH 4/4] migration: print times for end phase Juan Quintela
3 siblings, 1 reply; 10+ messages in thread
From: Juan Quintela @ 2013-01-18 11:53 UTC (permalink / raw)
To: qemu-devel
We need to remove it to be able to return from complete to iterative
phases of migration.
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
arch_init.c | 24 +++++++++---------------
1 file changed, 9 insertions(+), 15 deletions(-)
diff --git a/arch_init.c b/arch_init.c
index 2792b76..9f7d44d 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -286,16 +286,14 @@ static size_t save_block_hdr(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
static int save_xbzrle_page(QEMUFile *f, uint8_t *current_data,
ram_addr_t current_addr, RAMBlock *block,
- ram_addr_t offset, int cont, bool last_stage)
+ ram_addr_t offset, int cont)
{
int encoded_len = 0, bytes_sent = -1;
uint8_t *prev_cached_page;
if (!cache_is_cached(XBZRLE.cache, current_addr)) {
- if (!last_stage) {
- cache_insert(XBZRLE.cache, current_addr,
- g_memdup(current_data, TARGET_PAGE_SIZE));
- }
+ cache_insert(XBZRLE.cache, current_addr,
+ g_memdup(current_data, TARGET_PAGE_SIZE));
acct_info.xbzrle_cache_miss++;
return -1;
}
@@ -321,9 +319,7 @@ static int save_xbzrle_page(QEMUFile *f, uint8_t *current_data,
}
/* we need to update the data in the cache, in order to get the same data */
- if (!last_stage) {
- memcpy(prev_cached_page, XBZRLE.current_buf, TARGET_PAGE_SIZE);
- }
+ memcpy(prev_cached_page, XBZRLE.current_buf, TARGET_PAGE_SIZE);
/* Send XBZRLE based compressed page */
bytes_sent = save_block_hdr(f, block, offset, cont, RAM_SAVE_FLAG_XBZRLE);
@@ -426,7 +422,7 @@ static void migration_bitmap_sync(void)
* 0 means no dirty pages
*/
-static int ram_save_block(QEMUFile *f, bool last_stage)
+static int ram_save_block(QEMUFile *f)
{
RAMBlock *block = last_seen_block;
ram_addr_t offset = last_offset;
@@ -470,10 +466,8 @@ static int ram_save_block(QEMUFile *f, bool last_stage)
} else if (migrate_use_xbzrle()) {
current_addr = block->offset + offset;
bytes_sent = save_xbzrle_page(f, p, current_addr, block,
- offset, cont, last_stage);
- if (!last_stage) {
- p = get_cached_data(XBZRLE.cache, current_addr);
- }
+ offset, cont);
+ p = get_cached_data(XBZRLE.cache, current_addr);
}
/* XBZRLE overflow or normal page */
@@ -621,7 +615,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque, uint64_t free_space)
i = 0;
/* We need space for at least one page and end of section marker */
while (free_space > MAX_PAGE_SIZE + 8) {
- int bytes_sent = ram_save_block(f, false);
+ int bytes_sent = ram_save_block(f);
/* no more blocks to sent */
if (bytes_sent == 0) {
break;
@@ -665,7 +659,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
while (true) {
int bytes_sent;
- bytes_sent = ram_save_block(f, true);
+ bytes_sent = ram_save_block(f);
/* no more blocks to sent */
if (bytes_sent == 0) {
break;
--
1.8.1
* [Qemu-devel] [PATCH 3/4] ram: reuse ram_save_iterate() for the complete stage
2013-01-18 11:53 [Qemu-devel] [RFC 0/4] migration.experimental queue Juan Quintela
2013-01-18 11:53 ` [Qemu-devel] [PATCH 1/4] ram: add free_space parameter to save_live functions Juan Quintela
2013-01-18 11:53 ` [Qemu-devel] [PATCH 2/4] ram: remove xbzrle last_stage optimization Juan Quintela
@ 2013-01-18 11:53 ` Juan Quintela
2013-01-21 10:17 ` Orit Wasserman
2013-01-21 10:31 ` Paolo Bonzini
2013-01-18 11:53 ` [Qemu-devel] [PATCH 4/4] migration: print times for end phase Juan Quintela
3 siblings, 2 replies; 10+ messages in thread
From: Juan Quintela @ 2013-01-18 11:53 UTC (permalink / raw)
To: qemu-devel
This means that we only have one memory loop for the iterate and
complete phase.
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
arch_init.c | 16 ----------------
migration.c | 12 ++++++++++++
2 files changed, 12 insertions(+), 16 deletions(-)
diff --git a/arch_init.c b/arch_init.c
index 9f7d44d..9eef10a 100644
--- a/arch_init.c
+++ b/arch_init.c
@@ -651,23 +651,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque, uint64_t free_space)
static int ram_save_complete(QEMUFile *f, void *opaque)
{
qemu_mutex_lock_ramlist();
- migration_bitmap_sync();
-
- /* try transferring iterative blocks of memory */
-
- /* flush all remaining blocks regardless of rate limiting */
- while (true) {
- int bytes_sent;
-
- bytes_sent = ram_save_block(f);
- /* no more blocks to sent */
- if (bytes_sent == 0) {
- break;
- }
- bytes_transferred += bytes_sent;
- }
migration_end();
-
qemu_mutex_unlock_ramlist();
qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
diff --git a/migration.c b/migration.c
index e74ce49..de665f7 100644
--- a/migration.c
+++ b/migration.c
@@ -717,6 +717,18 @@ static void *buffered_file_thread(void *opaque)
} else {
vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
}
+
+ /* 8 is the size of an end of section mark, so empty section */
+ while ((ret = qemu_savevm_state_iterate(s->file, free_space))
+ > 8) {
+ ret = buffered_flush(s);
+ if (ret < 0) {
+ qemu_mutex_unlock_iothread();
+ break;
+ }
+ free_space = s->buffer_capacity - s->buffer_size;
+ }
+
ret = qemu_savevm_state_complete(s->file);
if (ret < 0) {
qemu_mutex_unlock_iothread();
--
1.8.1
* [Qemu-devel] [PATCH 4/4] migration: print times for end phase
2013-01-18 11:53 [Qemu-devel] [RFC 0/4] migration.experimental queue Juan Quintela
` (2 preceding siblings ...)
2013-01-18 11:53 ` [Qemu-devel] [PATCH 3/4] ram: reuse ram_save_iterate() for the complete stage Juan Quintela
@ 2013-01-18 11:53 ` Juan Quintela
2013-01-21 10:19 ` Orit Wasserman
3 siblings, 1 reply; 10+ messages in thread
From: Juan Quintela @ 2013-01-18 11:53 UTC (permalink / raw)
To: qemu-devel
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
block.c | 6 ++++++
cpus.c | 17 +++++++++++++++++
migration.c | 13 +++++++++++++
savevm.c | 13 +++++++++++++
4 files changed, 49 insertions(+)
diff --git a/block.c b/block.c
index 6fa7c90..c121db3 100644
--- a/block.c
+++ b/block.c
@@ -2693,9 +2693,15 @@ int bdrv_get_flags(BlockDriverState *bs)
void bdrv_flush_all(void)
{
BlockDriverState *bs;
+ int64_t start_time, end_time;
+
+ start_time = qemu_get_clock_ms(rt_clock);
QTAILQ_FOREACH(bs, &bdrv_states, list) {
bdrv_flush(bs);
+ end_time = qemu_get_clock_ms(rt_clock);
+ printf("time flush device %s: %ld\n", bs->filename,
+ end_time - start_time);
}
}
diff --git a/cpus.c b/cpus.c
index a4390c3..15534ba 100644
--- a/cpus.c
+++ b/cpus.c
@@ -439,14 +439,31 @@ bool cpu_is_stopped(CPUState *cpu)
static void do_vm_stop(RunState state)
{
+ int64_t start_time, end_time;
+
if (runstate_is_running()) {
+ start_time = qemu_get_clock_ms(rt_clock);
cpu_disable_ticks();
+ end_time = qemu_get_clock_ms(rt_clock);
+ printf("time cpu_disable_ticks %ld\n", end_time - start_time);
pause_all_vcpus();
+ end_time = qemu_get_clock_ms(rt_clock);
+ printf("time pause_all_vcpus %ld\n", end_time - start_time);
runstate_set(state);
+ end_time = qemu_get_clock_ms(rt_clock);
+ printf("time runstate_set %ld\n", end_time - start_time);
vm_state_notify(0, state);
+ end_time = qemu_get_clock_ms(rt_clock);
+ printf("time vmstate_notify %ld\n", end_time - start_time);
bdrv_drain_all();
+ end_time = qemu_get_clock_ms(rt_clock);
+ printf("time bdrv_drain_all %ld\n", end_time - start_time);
bdrv_flush_all();
+ end_time = qemu_get_clock_ms(rt_clock);
+ printf("time bdrv_flush_all %ld\n", end_time - start_time);
monitor_protocol_event(QEVENT_STOP, NULL);
+ end_time = qemu_get_clock_ms(rt_clock);
+ printf("time monitor_protocol_event %ld\n", end_time - start_time);
}
}
diff --git a/migration.c b/migration.c
index de665f7..5e965cc 100644
--- a/migration.c
+++ b/migration.c
@@ -712,12 +712,17 @@ static void *buffered_file_thread(void *opaque)
DPRINTF("done iterating\n");
start_time = qemu_get_clock_ms(rt_clock);
qemu_system_wakeup_request(QEMU_WAKEUP_REASON_OTHER);
+ end_time = qemu_get_clock_ms(rt_clock);
+ printf("wakeup_request %ld\n", end_time - start_time);
if (old_vm_running) {
vm_stop(RUN_STATE_FINISH_MIGRATE);
} else {
vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
}
+ end_time = qemu_get_clock_ms(rt_clock);
+ printf("vm_stop %ld\n", end_time - start_time);
+
/* 8 is the size of an end of section mark, so empty section */
while ((ret = qemu_savevm_state_iterate(s->file, free_space))
> 8) {
@@ -728,15 +733,21 @@ static void *buffered_file_thread(void *opaque)
}
free_space = s->buffer_capacity - s->buffer_size;
}
+ end_time = qemu_get_clock_ms(rt_clock);
+ printf("iterate phase %ld\n", end_time - start_time);
ret = qemu_savevm_state_complete(s->file);
if (ret < 0) {
qemu_mutex_unlock_iothread();
break;
} else {
+ end_time = qemu_get_clock_ms(rt_clock);
+ printf("complete without error 3a %ld\n",
+ end_time - start_time);
migrate_fd_completed(s);
}
end_time = qemu_get_clock_ms(rt_clock);
+ printf("completed %ld\n", end_time - start_time);
s->total_time = end_time - s->total_time;
s->downtime = end_time - start_time;
if (s->state != MIG_STATE_COMPLETED) {
@@ -744,6 +755,8 @@ static void *buffered_file_thread(void *opaque)
vm_start();
}
}
+ end_time = qemu_get_clock_ms(rt_clock);
+ printf("end completed stage %ld\n", end_time - start_time);
last_round = true;
}
}
diff --git a/savevm.c b/savevm.c
index 3447f91..113c1dd 100644
--- a/savevm.c
+++ b/savevm.c
@@ -1660,9 +1660,14 @@ int qemu_savevm_state_complete(QEMUFile *f)
{
SaveStateEntry *se;
int ret;
+ int64_t t1;
+ int64_t t0 = qemu_get_clock_ms(rt_clock);
cpu_synchronize_all_states();
+ t1 = qemu_get_clock_ms(rt_clock);
+ printf("synchronize_all_states %ld\n", t1 - t0);
+ t0 = t1;
QTAILQ_FOREACH(se, &savevm_handlers, entry) {
if (!se->ops || !se->ops->save_live_complete) {
continue;
@@ -1683,6 +1688,11 @@ int qemu_savevm_state_complete(QEMUFile *f)
return ret;
}
}
+ t1 = qemu_get_clock_ms(rt_clock);
+
+ printf("migrate save live complete %ld\n", t1 - t0);
+
+ t0 = t1;
QTAILQ_FOREACH(se, &savevm_handlers, entry) {
int len;
@@ -1707,6 +1717,9 @@ int qemu_savevm_state_complete(QEMUFile *f)
trace_savevm_section_end(se->section_id);
}
+ t1 = qemu_get_clock_ms(rt_clock);
+
+ printf("migrate rest devices %ld\n", t1 - t0);
qemu_put_byte(f, QEMU_VM_EOF);
return qemu_file_get_error(f);
--
1.8.1
* Re: [Qemu-devel] [PATCH 1/4] ram: add free_space parameter to save_live functions
2013-01-18 11:53 ` [Qemu-devel] [PATCH 1/4] ram: add free_space parameter to save_live functions Juan Quintela
@ 2013-01-21 9:59 ` Orit Wasserman
0 siblings, 0 replies; 10+ messages in thread
From: Orit Wasserman @ 2013-01-21 9:59 UTC (permalink / raw)
To: Juan Quintela; +Cc: qemu-devel
On 01/18/2013 01:53 PM, Juan Quintela wrote:
> Since we know exactly how much space we have free in the buffers, we can
> send that information instead of guessing how much we can send each time.
>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
> arch_init.c | 20 +++++++++-----------
> block-migration.c | 2 +-
> include/migration/vmstate.h | 2 +-
> include/sysemu/sysemu.h | 2 +-
> migration.c | 3 ++-
> savevm.c | 10 +++++++---
> 6 files changed, 21 insertions(+), 18 deletions(-)
>
> diff --git a/arch_init.c b/arch_init.c
> index dada6de..2792b76 100644
> --- a/arch_init.c
> +++ b/arch_init.c
> @@ -601,9 +601,12 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
> return 0;
> }
>
> -static int ram_save_iterate(QEMUFile *f, void *opaque)
> +/* Maximum size for a transmitted page
> + header + len + idstr + page size */
> +#define MAX_PAGE_SIZE (8 + 1 + 256 + TARGET_PAGE_SIZE)
> +
> +static int ram_save_iterate(QEMUFile *f, void *opaque, uint64_t free_space)
> {
> - int ret;
> int i;
> int64_t t0;
> int total_sent = 0;
> @@ -616,15 +619,15 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>
> t0 = qemu_get_clock_ns(rt_clock);
> i = 0;
> - while ((ret = qemu_file_rate_limit(f)) == 0) {
> - int bytes_sent;
> -
> - bytes_sent = ram_save_block(f, false);
> + /* We need space for at least one page and end of section marker */
> + while (free_space > MAX_PAGE_SIZE + 8) {
Actually we may need more: if we move to a new memory block we will need to add the block idstr,
so we may run out of space (not talking about compression, which requires less space, so we may have enough)....
Why not move this logic into ram_save_block?
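A minimal sketch of that idea (untested; the 256-byte idstr is the worst
case when a page crosses into a new RAMBlock, and -ENOSPC as a distinct
return value is just an assumption):

    /* Worst case for one page: section header (8) + idstr length (1)
       + idstr (256) when we enter a new RAMBlock, plus the page itself.
       Keep 8 spare bytes so RAM_SAVE_FLAG_EOS always fits afterwards. */
    static bool ram_save_block_fits(uint64_t free_space)
    {
        return free_space > 8 + 1 + 256 + TARGET_PAGE_SIZE + 8;
    }

ram_save_block() could then check ram_save_block_fits(free_space) on entry
and return -ENOSPC, so callers can tell "buffer full" apart from "no dirty
pages left".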
> + int bytes_sent = ram_save_block(f, false);
> /* no more blocks to sent */
> if (bytes_sent == 0) {
> break;
> }
> total_sent += bytes_sent;
> + free_space -= bytes_sent;
> acct_info.iterations++;
> /* we want to check in the 1st loop, just in case it was the 1st time
> and we had to sync the dirty bitmap.
> @@ -644,11 +647,6 @@ static int ram_save_iterate(QEMUFile *f, void *opaque)
>
> qemu_mutex_unlock_ramlist();
>
> - if (ret < 0) {
> - bytes_transferred += total_sent;
> - return ret;
> - }
> -
Don't we sometimes need to return a negative value so that the lock is released?
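One way to keep an error path without the rate-limit loop would be the
existing qemu_file_get_error() helper, e.g. (sketch):

    /* after the send loop, before writing RAM_SAVE_FLAG_EOS */
    int ret = qemu_file_get_error(f);
    if (ret < 0) {
        bytes_transferred += total_sent;
        return ret;
    }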
> qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
> total_sent += 8;
> bytes_transferred += total_sent;
> diff --git a/block-migration.c b/block-migration.c
> index 6acf3e1..0c3157a 100644
> --- a/block-migration.c
> +++ b/block-migration.c
> @@ -535,7 +535,7 @@ static int block_save_setup(QEMUFile *f, void *opaque)
> return 0;
> }
>
> -static int block_save_iterate(QEMUFile *f, void *opaque)
> +static int block_save_iterate(QEMUFile *f, void *opaque, uint64_t free_space)
> {
> int ret;
>
> diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
> index f27276c..0b55cf4 100644
> --- a/include/migration/vmstate.h
> +++ b/include/migration/vmstate.h
> @@ -33,7 +33,7 @@ typedef struct SaveVMHandlers {
> void (*set_params)(const MigrationParams *params, void * opaque);
> SaveStateHandler *save_state;
> int (*save_live_setup)(QEMUFile *f, void *opaque);
> - int (*save_live_iterate)(QEMUFile *f, void *opaque);
> + int (*save_live_iterate)(QEMUFile *f, void *opaque, uint64_t free_space);
> int (*save_live_complete)(QEMUFile *f, void *opaque);
> uint64_t (*save_live_pending)(QEMUFile *f, void *opaque, uint64_t max_size);
> void (*cancel)(void *opaque);
> diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
> index d65a9f1..3ff043c 100644
> --- a/include/sysemu/sysemu.h
> +++ b/include/sysemu/sysemu.h
> @@ -75,7 +75,7 @@ void qemu_announce_self(void);
> bool qemu_savevm_state_blocked(Error **errp);
> int qemu_savevm_state_begin(QEMUFile *f,
> const MigrationParams *params);
> -int qemu_savevm_state_iterate(QEMUFile *f);
> +int qemu_savevm_state_iterate(QEMUFile *f, uint64_t free_space);
> int qemu_savevm_state_complete(QEMUFile *f);
> void qemu_savevm_state_cancel(void);
> uint64_t qemu_savevm_state_pending(QEMUFile *f, uint64_t max_size);
> diff --git a/migration.c b/migration.c
> index 77c1971..e74ce49 100644
> --- a/migration.c
> +++ b/migration.c
> @@ -683,6 +683,7 @@ static void *buffered_file_thread(void *opaque)
> while (true) {
> int64_t current_time = qemu_get_clock_ms(rt_clock);
> uint64_t pending_size;
> + size_t free_space = s->buffer_capacity - s->buffer_size;
Don't we need to take the rate limit (xfer_limit) into consideration?
Otherwise we may send too much.
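Something like this, perhaps (sketch; s->xfer_limit is the existing rate
limit, while the s->bytes_xfer counter is an assumption about this tree):

    size_t free_space = s->buffer_capacity - s->buffer_size;

    /* also honour whatever is left of the bandwidth budget */
    if (s->bytes_xfer >= s->xfer_limit) {
        free_space = 0;
    } else if (free_space > s->xfer_limit - s->bytes_xfer) {
        free_space = s->xfer_limit - s->bytes_xfer;
    }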
>
> qemu_mutex_lock_iothread();
> if (s->state != MIG_STATE_ACTIVE) {
> @@ -699,7 +700,7 @@ static void *buffered_file_thread(void *opaque)
> pending_size = qemu_savevm_state_pending(s->file, max_size);
> DPRINTF("pending size %lu max %lu\n", pending_size, max_size);
> if (pending_size && pending_size >= max_size) {
> - ret = qemu_savevm_state_iterate(s->file);
> + ret = qemu_savevm_state_iterate(s->file, free_space);
ret is never negative here, so the lock won't be released.
> if (ret < 0) {
> qemu_mutex_unlock_iothread();
> break;
> diff --git a/savevm.c b/savevm.c
> index 913a623..3447f91 100644
> --- a/savevm.c
> +++ b/savevm.c
> @@ -1609,10 +1609,11 @@ int qemu_savevm_state_begin(QEMUFile *f,
> * 0 : We haven't finished, caller have to go again
> * 1 : We have finished, we can go to complete phase
> */
> -int qemu_savevm_state_iterate(QEMUFile *f)
> +int qemu_savevm_state_iterate(QEMUFile *f, uint64_t free_space)
> {
> SaveStateEntry *se;
> int ret = 1;
> + size_t remaining_space = free_space;
Can't we add a free_space variable to the QEMUFile and make
all the qemu_put_byte .. qemu_put_buffer functions update it?
That way we won't need to pass it to the iterate functions ...
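A rough sketch of that alternative (the field and accessor names are made
up):

    struct QEMUFile {
        /* ... existing fields ... */
        uint64_t free_space;    /* bytes still allowed into the buffer */
    };

    void qemu_put_byte(QEMUFile *f, int v)
    {
        /* ... existing buffering code ... */
        if (f->free_space > 0) {
            f->free_space--;
        }
    }

    uint64_t qemu_file_free_space(const QEMUFile *f)
    {
        return f->free_space;
    }

Every qemu_put_* writer would decrement the budget the same way, and the
iterate handlers would just query qemu_file_free_space(f).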
Regards,
Orit
>
> QTAILQ_FOREACH(se, &savevm_handlers, entry) {
> if (!se->ops || !se->ops->save_live_iterate) {
> @@ -1629,9 +1630,11 @@ int qemu_savevm_state_iterate(QEMUFile *f)
> trace_savevm_section_start();
> /* Section type */
> qemu_put_byte(f, QEMU_VM_SECTION_PART);
> + remaining_space -= 1;
> qemu_put_be32(f, se->section_id);
> + remaining_space -= 4;
>
> - ret = se->ops->save_live_iterate(f, se->opaque);
> + ret = se->ops->save_live_iterate(f, se->opaque, remaining_space);
> trace_savevm_section_end(se->section_id);
>
> if (ret <= 0) {
> @@ -1641,6 +1644,7 @@ int qemu_savevm_state_iterate(QEMUFile *f)
> synchronized over and over again. */
> break;
> }
> + remaining_space -= ret;
> }
> if (ret != 0) {
> return ret;
> @@ -1756,7 +1760,7 @@ static int qemu_savevm_state(QEMUFile *f)
> goto out;
>
> do {
> - ret = qemu_savevm_state_iterate(f);
> + ret = qemu_savevm_state_iterate(f, SIZE_MAX);
> if (ret < 0)
> goto out;
> } while (ret == 0);
>
* Re: [Qemu-devel] [PATCH 2/4] ram: remove xbzrle last_stage optimization
2013-01-18 11:53 ` [Qemu-devel] [PATCH 2/4] ram: remove xbzrle last_stage optimization Juan Quintela
@ 2013-01-21 10:11 ` Orit Wasserman
0 siblings, 0 replies; 10+ messages in thread
From: Orit Wasserman @ 2013-01-21 10:11 UTC (permalink / raw)
To: Juan Quintela; +Cc: qemu-devel
Juan,
Why not add a migration_is_last_stage() function (similar to migrate_use_xbzrle()) and keep the optimization?
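A sketch of such a helper (the flag itself is hypothetical; it would be
set by ram_save_complete() before it walks the blocks):

    static bool migration_last_stage;

    bool migration_is_last_stage(void)
    {
        return migration_last_stage;
    }

save_xbzrle_page() could then keep skipping the cache updates by testing
migration_is_last_stage() instead of taking a last_stage parameter.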
Regards,
Orit
On 01/18/2013 01:53 PM, Juan Quintela wrote:
> We need to remove it to be able to return from complete to iterative
> phases of migration.
>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
> arch_init.c | 24 +++++++++---------------
> 1 file changed, 9 insertions(+), 15 deletions(-)
>
> diff --git a/arch_init.c b/arch_init.c
> index 2792b76..9f7d44d 100644
> --- a/arch_init.c
> +++ b/arch_init.c
> @@ -286,16 +286,14 @@ static size_t save_block_hdr(QEMUFile *f, RAMBlock *block, ram_addr_t offset,
>
> static int save_xbzrle_page(QEMUFile *f, uint8_t *current_data,
> ram_addr_t current_addr, RAMBlock *block,
> - ram_addr_t offset, int cont, bool last_stage)
> + ram_addr_t offset, int cont)
> {
> int encoded_len = 0, bytes_sent = -1;
> uint8_t *prev_cached_page;
>
> if (!cache_is_cached(XBZRLE.cache, current_addr)) {
> - if (!last_stage) {
> - cache_insert(XBZRLE.cache, current_addr,
> - g_memdup(current_data, TARGET_PAGE_SIZE));
> - }
> + cache_insert(XBZRLE.cache, current_addr,
> + g_memdup(current_data, TARGET_PAGE_SIZE));
> acct_info.xbzrle_cache_miss++;
> return -1;
> }
> @@ -321,9 +319,7 @@ static int save_xbzrle_page(QEMUFile *f, uint8_t *current_data,
> }
>
> /* we need to update the data in the cache, in order to get the same data */
> - if (!last_stage) {
> - memcpy(prev_cached_page, XBZRLE.current_buf, TARGET_PAGE_SIZE);
> - }
> + memcpy(prev_cached_page, XBZRLE.current_buf, TARGET_PAGE_SIZE);
>
> /* Send XBZRLE based compressed page */
> bytes_sent = save_block_hdr(f, block, offset, cont, RAM_SAVE_FLAG_XBZRLE);
> @@ -426,7 +422,7 @@ static void migration_bitmap_sync(void)
> * 0 means no dirty pages
> */
>
> -static int ram_save_block(QEMUFile *f, bool last_stage)
> +static int ram_save_block(QEMUFile *f)
> {
> RAMBlock *block = last_seen_block;
> ram_addr_t offset = last_offset;
> @@ -470,10 +466,8 @@ static int ram_save_block(QEMUFile *f, bool last_stage)
> } else if (migrate_use_xbzrle()) {
> current_addr = block->offset + offset;
> bytes_sent = save_xbzrle_page(f, p, current_addr, block,
> - offset, cont, last_stage);
> - if (!last_stage) {
> - p = get_cached_data(XBZRLE.cache, current_addr);
> - }
> + offset, cont);
> + p = get_cached_data(XBZRLE.cache, current_addr);
> }
>
> /* XBZRLE overflow or normal page */
> @@ -621,7 +615,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque, uint64_t free_space)
> i = 0;
> /* We need space for at least one page and end of section marker */
> while (free_space > MAX_PAGE_SIZE + 8) {
> - int bytes_sent = ram_save_block(f, false);
> + int bytes_sent = ram_save_block(f);
> /* no more blocks to sent */
> if (bytes_sent == 0) {
> break;
> @@ -665,7 +659,7 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
> while (true) {
> int bytes_sent;
>
> - bytes_sent = ram_save_block(f, true);
> + bytes_sent = ram_save_block(f);
> /* no more blocks to sent */
> if (bytes_sent == 0) {
> break;
>
* Re: [Qemu-devel] [PATCH 3/4] ram: reuse ram_save_iterate() for the complete stage
2013-01-18 11:53 ` [Qemu-devel] [PATCH 3/4] ram: reuse ram_save_iterate() for the complete stage Juan Quintela
@ 2013-01-21 10:17 ` Orit Wasserman
2013-01-21 10:31 ` Paolo Bonzini
1 sibling, 0 replies; 10+ messages in thread
From: Orit Wasserman @ 2013-01-21 10:17 UTC (permalink / raw)
To: Juan Quintela; +Cc: qemu-devel
On 01/18/2013 01:53 PM, Juan Quintela wrote:
> This means that we only have one memory loop for the iterate and
> complete phase.
>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
> arch_init.c | 16 ----------------
> migration.c | 12 ++++++++++++
> 2 files changed, 12 insertions(+), 16 deletions(-)
>
> diff --git a/arch_init.c b/arch_init.c
> index 9f7d44d..9eef10a 100644
> --- a/arch_init.c
> +++ b/arch_init.c
> @@ -651,23 +651,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque, uint64_t free_space)
> static int ram_save_complete(QEMUFile *f, void *opaque)
> {
> qemu_mutex_lock_ramlist();
Do we still need to lock the ramlist here?
> - migration_bitmap_sync();
> -
> - /* try transferring iterative blocks of memory */
> -
> - /* flush all remaining blocks regardless of rate limiting */
> - while (true) {
> - int bytes_sent;
> -
> - bytes_sent = ram_save_block(f);
> - /* no more blocks to sent */
> - if (bytes_sent == 0) {
> - break;
> - }
> - bytes_transferred += bytes_sent;
> - }
> migration_end();
> -
> qemu_mutex_unlock_ramlist();
> qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
>
> diff --git a/migration.c b/migration.c
> index e74ce49..de665f7 100644
> --- a/migration.c
> +++ b/migration.c
> @@ -717,6 +717,18 @@ static void *buffered_file_thread(void *opaque)
> } else {
> vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
> }
> +
> + /* 8 is the size of an end of section mark, so empty section */
> + while ((ret = qemu_savevm_state_iterate(s->file, free_space))
> + > 8) {
Sorry, I don't understand this condition; can you explain?
Regards,
Orit
> + ret = buffered_flush(s);
> + if (ret < 0) {
> + qemu_mutex_unlock_iothread();
> + break;
> + }
> + free_space = s->buffer_capacity - s->buffer_size;
> + }
> +
> ret = qemu_savevm_state_complete(s->file);
> if (ret < 0) {
> qemu_mutex_unlock_iothread();
>
* Re: [Qemu-devel] [PATCH 4/4] migration: print times for end phase
2013-01-18 11:53 ` [Qemu-devel] [PATCH 4/4] migration: print times for end phase Juan Quintela
@ 2013-01-21 10:19 ` Orit Wasserman
0 siblings, 0 replies; 10+ messages in thread
From: Orit Wasserman @ 2013-01-21 10:19 UTC (permalink / raw)
To: Juan Quintela; +Cc: qemu-devel
Is this for debugging?
Why not use trace events?
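For example, with a made-up event name (using QEMU's trace-events syntax):

    # trace-events
    migration_downtime_phase(const char *phase, int64_t ms) "%s %" PRId64 " ms"

    /* call site, replacing the printf */
    trace_migration_downtime_phase("bdrv_flush_all", end_time - start_time);

That way the output can be enabled per event instead of always going to
stdout.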
Regards,
Orit
On 01/18/2013 01:53 PM, Juan Quintela wrote:
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
> block.c | 6 ++++++
> cpus.c | 17 +++++++++++++++++
> migration.c | 13 +++++++++++++
> savevm.c | 13 +++++++++++++
> 4 files changed, 49 insertions(+)
>
> diff --git a/block.c b/block.c
> index 6fa7c90..c121db3 100644
> --- a/block.c
> +++ b/block.c
> @@ -2693,9 +2693,15 @@ int bdrv_get_flags(BlockDriverState *bs)
> void bdrv_flush_all(void)
> {
> BlockDriverState *bs;
> + int64_t start_time, end_time;
> +
> + start_time = qemu_get_clock_ms(rt_clock);
>
> QTAILQ_FOREACH(bs, &bdrv_states, list) {
> bdrv_flush(bs);
> + end_time = qemu_get_clock_ms(rt_clock);
> + printf("time flush device %s: %ld\n", bs->filename,
> + end_time - start_time);
> }
> }
>
> diff --git a/cpus.c b/cpus.c
> index a4390c3..15534ba 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -439,14 +439,31 @@ bool cpu_is_stopped(CPUState *cpu)
>
> static void do_vm_stop(RunState state)
> {
> + int64_t start_time, end_time;
> +
> if (runstate_is_running()) {
> + start_time = qemu_get_clock_ms(rt_clock);
> cpu_disable_ticks();
> + end_time = qemu_get_clock_ms(rt_clock);
> + printf("time cpu_disable_ticks %ld\n", end_time - start_time);
> pause_all_vcpus();
> + end_time = qemu_get_clock_ms(rt_clock);
> + printf("time pause_all_vcpus %ld\n", end_time - start_time);
> runstate_set(state);
> + end_time = qemu_get_clock_ms(rt_clock);
> + printf("time runstate_set %ld\n", end_time - start_time);
> vm_state_notify(0, state);
> + end_time = qemu_get_clock_ms(rt_clock);
> + printf("time vmstate_notify %ld\n", end_time - start_time);
> bdrv_drain_all();
> + end_time = qemu_get_clock_ms(rt_clock);
> + printf("time bdrv_drain_all %ld\n", end_time - start_time);
> bdrv_flush_all();
> + end_time = qemu_get_clock_ms(rt_clock);
> + printf("time bdrv_flush_all %ld\n", end_time - start_time);
> monitor_protocol_event(QEVENT_STOP, NULL);
> + end_time = qemu_get_clock_ms(rt_clock);
> + printf("time monitor_protocol_event %ld\n", end_time - start_time);
> }
> }
>
> diff --git a/migration.c b/migration.c
> index de665f7..5e965cc 100644
> --- a/migration.c
> +++ b/migration.c
> @@ -712,12 +712,17 @@ static void *buffered_file_thread(void *opaque)
> DPRINTF("done iterating\n");
> start_time = qemu_get_clock_ms(rt_clock);
> qemu_system_wakeup_request(QEMU_WAKEUP_REASON_OTHER);
> + end_time = qemu_get_clock_ms(rt_clock);
> + printf("wakeup_request %ld\n", end_time - start_time);
> if (old_vm_running) {
> vm_stop(RUN_STATE_FINISH_MIGRATE);
> } else {
> vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
> }
>
> + end_time = qemu_get_clock_ms(rt_clock);
> + printf("vm_stop %ld\n", end_time - start_time);
> +
> /* 8 is the size of an end of section mark, so empty section */
> while ((ret = qemu_savevm_state_iterate(s->file, free_space))
> > 8) {
> @@ -728,15 +733,21 @@ static void *buffered_file_thread(void *opaque)
> }
> free_space = s->buffer_capacity - s->buffer_size;
> }
> + end_time = qemu_get_clock_ms(rt_clock);
> + printf("iterate phase %ld\n", end_time - start_time);
>
> ret = qemu_savevm_state_complete(s->file);
> if (ret < 0) {
> qemu_mutex_unlock_iothread();
> break;
> } else {
> + end_time = qemu_get_clock_ms(rt_clock);
> + printf("complete without error 3a %ld\n",
> + end_time - start_time);
> migrate_fd_completed(s);
> }
> end_time = qemu_get_clock_ms(rt_clock);
> + printf("completed %ld\n", end_time - start_time);
> s->total_time = end_time - s->total_time;
> s->downtime = end_time - start_time;
> if (s->state != MIG_STATE_COMPLETED) {
> @@ -744,6 +755,8 @@ static void *buffered_file_thread(void *opaque)
> vm_start();
> }
> }
> + end_time = qemu_get_clock_ms(rt_clock);
> + printf("end completed stage %ld\n", end_time - start_time);
> last_round = true;
> }
> }
> diff --git a/savevm.c b/savevm.c
> index 3447f91..113c1dd 100644
> --- a/savevm.c
> +++ b/savevm.c
> @@ -1660,9 +1660,14 @@ int qemu_savevm_state_complete(QEMUFile *f)
> {
> SaveStateEntry *se;
> int ret;
> + int64_t t1;
> + int64_t t0 = qemu_get_clock_ms(rt_clock);
>
> cpu_synchronize_all_states();
> + t1 = qemu_get_clock_ms(rt_clock);
> + printf("synchronize_all_states %ld\n", t1 - t0);
>
> + t0 = t1;
> QTAILQ_FOREACH(se, &savevm_handlers, entry) {
> if (!se->ops || !se->ops->save_live_complete) {
> continue;
> @@ -1683,6 +1688,11 @@ int qemu_savevm_state_complete(QEMUFile *f)
> return ret;
> }
> }
> + t1 = qemu_get_clock_ms(rt_clock);
> +
> + printf("migrate save live complete %ld\n", t1 - t0);
> +
> + t0 = t1;
>
> QTAILQ_FOREACH(se, &savevm_handlers, entry) {
> int len;
> @@ -1707,6 +1717,9 @@ int qemu_savevm_state_complete(QEMUFile *f)
> trace_savevm_section_end(se->section_id);
> }
>
> + t1 = qemu_get_clock_ms(rt_clock);
> +
> + printf("migrate rest devices %ld\n", t1 - t0);
> qemu_put_byte(f, QEMU_VM_EOF);
>
> return qemu_file_get_error(f);
>
* Re: [Qemu-devel] [PATCH 3/4] ram: reuse ram_save_iterate() for the complete stage
2013-01-18 11:53 ` [Qemu-devel] [PATCH 3/4] ram: reuse ram_save_iterate() for the complete stage Juan Quintela
2013-01-21 10:17 ` Orit Wasserman
@ 2013-01-21 10:31 ` Paolo Bonzini
1 sibling, 0 replies; 10+ messages in thread
From: Paolo Bonzini @ 2013-01-21 10:31 UTC (permalink / raw)
To: Juan Quintela; +Cc: qemu-devel
On 18/01/2013 12:53, Juan Quintela wrote:
> This means that we only have one memory loop for the iterate and
> complete phase.
I think this is premature. One important difference between iterate and
complete is that ultimately iterate will run without the BQL, while
that's not necessarily true of complete. So we may end up reverting
this patch.
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
> arch_init.c | 16 ----------------
> migration.c | 12 ++++++++++++
> 2 files changed, 12 insertions(+), 16 deletions(-)
>
> diff --git a/arch_init.c b/arch_init.c
> index 9f7d44d..9eef10a 100644
> --- a/arch_init.c
> +++ b/arch_init.c
> @@ -651,23 +651,7 @@ static int ram_save_iterate(QEMUFile *f, void *opaque, uint64_t free_space)
> static int ram_save_complete(QEMUFile *f, void *opaque)
> {
> qemu_mutex_lock_ramlist();
> - migration_bitmap_sync();
> -
> - /* try transferring iterative blocks of memory */
> -
> - /* flush all remaining blocks regardless of rate limiting */
> - while (true) {
> - int bytes_sent;
> -
> - bytes_sent = ram_save_block(f);
> - /* no more blocks to sent */
> - if (bytes_sent == 0) {
> - break;
> - }
> - bytes_transferred += bytes_sent;
> - }
> migration_end();
> -
> qemu_mutex_unlock_ramlist();
> qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
>
> diff --git a/migration.c b/migration.c
> index e74ce49..de665f7 100644
> --- a/migration.c
> +++ b/migration.c
> @@ -717,6 +717,18 @@ static void *buffered_file_thread(void *opaque)
> } else {
> vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
> }
> +
> + /* 8 is the size of an end of section mark, so empty section */
> + while ((ret = qemu_savevm_state_iterate(s->file, free_space))
> + > 8) {
> + ret = buffered_flush(s);
> + if (ret < 0) {
> + qemu_mutex_unlock_iothread();
> + break;
> + }
> + free_space = s->buffer_capacity - s->buffer_size;
> + }
> +
If you really want to apply this patch, however, move this loop to
qemu_savevm_state_complete. do_savevm has a similar loop:
do {
ret = qemu_savevm_state_iterate(f);
if (ret < 0)
goto out;
} while (ret == 0);
and then you can unify buffered_file_thread and do_savevm's code.
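A sketch of the unified shape (savevm_drain and the flush callback are
made-up names, and it assumes the return value of iterate is the byte
count that the '> 8' test implies):

    static int savevm_drain(QEMUFile *f, uint64_t free_space,
                            int (*flush)(MigrationState *), MigrationState *s)
    {
        int ret;

        /* 8 bytes == a bare end-of-section mark: nothing left to send.
           A real version would refresh free_space after each flush. */
        while ((ret = qemu_savevm_state_iterate(f, free_space)) > 8) {
            if (flush && (ret = flush(s)) < 0) {
                return ret;
            }
        }
        return ret < 0 ? ret : 0;
    }

do_savevm would pass flush == NULL and free_space == SIZE_MAX.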
Paolo
> ret = qemu_savevm_state_complete(s->file);
> if (ret < 0) {
> qemu_mutex_unlock_iothread();
>