* [PULL 00/18] Migration 20211214 patches
@ 2021-12-15 10:32 Juan Quintela
2021-12-15 10:32 ` [PULL 01/18] migration/ram.c: Remove the qemu_mutex_lock in colo_flush_ram_cache Juan Quintela
` (18 more replies)
0 siblings, 19 replies; 20+ messages in thread
From: Juan Quintela @ 2021-12-15 10:32 UTC (permalink / raw)
To: qemu-devel
Cc: Marc-André Lureau, Hailiang Zhang, Dr. David Alan Gilbert,
Juan Quintela
The following changes since commit 76b56fdfc9fa43ec6e5986aee33f108c6c6a511e:
Merge tag 'block-pull-request' of https://gitlab.com/stefanha/qemu into staging (2021-12-14 12:46:18 -0800)
are available in the Git repository at:
https://gitlab.com/juan.quintela/qemu.git tags/migration-20211214-pull-request
for you to fetch changes up to a5ed22948873b50fcf1415d1ce15c71d61a9388d:
multifd: Make zlib compression method not use iovs (2021-12-15 10:38:34 +0100)
----------------------------------------------------------------
Migration Pull request
Hi
These are the reviewed patches for the freeze period:
- colo: fix/optimize several things (rao, chen)
- shutdown qio channels correctly when an error happens (li)
- several multifd patches for the zero series (me)
Please apply.
Thanks, Juan.
----------------------------------------------------------------
Juan Quintela (12):
migration: Remove is_zero_range()
dump: Remove is_zero_page()
multifd: Delete useless operation
migration: Never call twice qemu_target_page_size()
multifd: Rename used field to num
multifd: Add missing documentation
multifd: The variable is only used inside the loop
multifd: remove used parameter from send_prepare() method
multifd: remove used parameter from send_recv_pages() method
multifd: Fill offset and block for reception
multifd: Make zstd compression method not use iovs
multifd: Make zlib compression method not use iovs
Li Zhang (1):
multifd: Shut down the QIO channels to avoid blocking the send threads
when they are terminated.
Rao, Lei (3):
migration/ram.c: Remove the qemu_mutex_lock in colo_flush_ram_cache.
Fixed a QEMU hang when guest poweroff in COLO mode
COLO: Move some trace code behind qemu_mutex_unlock_iothread()
Zhang Chen (2):
migration/colo: More accurate update checkpoint time
migration/colo: Optimize COLO primary node start code path
include/migration/colo.h | 1 +
migration/multifd.h | 6 ++--
dump/dump.c | 10 +-----
migration/colo.c | 33 ++++++++++++++-----
migration/migration.c | 26 +++++++++------
migration/multifd-zlib.c | 48 +++++++++++++--------------
migration/multifd-zstd.c | 47 ++++++++++++---------------
migration/multifd.c | 70 +++++++++++++++++++++-------------------
migration/ram.c | 11 ++-----
migration/savevm.c | 5 +--
10 files changed, 131 insertions(+), 126 deletions(-)
--
2.33.1
^ permalink raw reply [flat|nested] 20+ messages in thread
* [PULL 01/18] migration/ram.c: Remove the qemu_mutex_lock in colo_flush_ram_cache.
2021-12-15 10:32 [PULL 00/18] Migration 20211214 patches Juan Quintela
@ 2021-12-15 10:32 ` Juan Quintela
2021-12-15 10:32 ` [PULL 02/18] migration/colo: More accurate update checkpoint time Juan Quintela
` (17 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Juan Quintela @ 2021-12-15 10:32 UTC (permalink / raw)
To: qemu-devel
Cc: Hailiang Zhang, Juan Quintela, Rao, Lei, Dr. David Alan Gilbert,
Zhang Chen, Marc-André Lureau
From: "Rao, Lei" <lei.rao@intel.com>
The code to acquire bitmap_mutex was added in commit
63268c4970a5f126cc9af75f3ccb8057abef5ec0. There is no need to
acquire bitmap_mutex in colo_flush_ram_cache(), because
colo_flush_ram_cache() is only called on the COLO secondary VM,
which is the destination side.
On the COLO secondary VM, only the COLO thread touches
the bitmap of the RAM cache.
Signed-off-by: Lei Rao <lei.rao@intel.com>
Reviewed-by: Zhang Chen <chen.zhang@intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/ram.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 863035d235..2c688f5bbb 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3918,7 +3918,6 @@ void colo_flush_ram_cache(void)
unsigned long offset = 0;
memory_global_dirty_log_sync();
- qemu_mutex_lock(&ram_state->bitmap_mutex);
WITH_RCU_READ_LOCK_GUARD() {
RAMBLOCK_FOREACH_NOT_IGNORED(block) {
ramblock_sync_dirty_bitmap(ram_state, block);
@@ -3954,7 +3953,6 @@ void colo_flush_ram_cache(void)
}
}
trace_colo_flush_ram_cache_end();
- qemu_mutex_unlock(&ram_state->bitmap_mutex);
}
/**
--
2.33.1
* [PULL 02/18] migration/colo: More accurate update checkpoint time
2021-12-15 10:32 [PULL 00/18] Migration 20211214 patches Juan Quintela
2021-12-15 10:32 ` [PULL 01/18] migration/ram.c: Remove the qemu_mutex_lock in colo_flush_ram_cache Juan Quintela
@ 2021-12-15 10:32 ` Juan Quintela
2021-12-15 10:32 ` [PULL 03/18] Fixed a QEMU hang when guest poweroff in COLO mode Juan Quintela
` (16 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Juan Quintela @ 2021-12-15 10:32 UTC (permalink / raw)
To: qemu-devel
Cc: Marc-André Lureau, Hailiang Zhang, Dr. David Alan Gilbert,
Zhang Chen, Juan Quintela
From: Zhang Chen <chen.zhang@intel.com>
Previous operations (like vm_start and replication_start_all) consume
extra time before the timer is updated; sample the clock immediately
before arming the timer so that slack is not counted against the
checkpoint delay.
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/colo.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/migration/colo.c b/migration/colo.c
index 2415325262..c8fadae956 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -530,7 +530,6 @@ static void colo_process_checkpoint(MigrationState *s)
{
QIOChannelBuffer *bioc;
QEMUFile *fb = NULL;
- int64_t current_time = qemu_clock_get_ms(QEMU_CLOCK_HOST);
Error *local_err = NULL;
int ret;
@@ -578,8 +577,8 @@ static void colo_process_checkpoint(MigrationState *s)
qemu_mutex_unlock_iothread();
trace_colo_vm_state_change("stop", "run");
- timer_mod(s->colo_delay_timer,
- current_time + s->parameters.x_checkpoint_delay);
+ timer_mod(s->colo_delay_timer, qemu_clock_get_ms(QEMU_CLOCK_HOST) +
+ s->parameters.x_checkpoint_delay);
while (s->state == MIGRATION_STATUS_COLO) {
if (failover_get_state() != FAILOVER_STATUS_NONE) {
--
2.33.1
* [PULL 03/18] Fixed a QEMU hang when guest poweroff in COLO mode
2021-12-15 10:32 [PULL 00/18] Migration 20211214 patches Juan Quintela
2021-12-15 10:32 ` [PULL 01/18] migration/ram.c: Remove the qemu_mutex_lock in colo_flush_ram_cache Juan Quintela
2021-12-15 10:32 ` [PULL 02/18] migration/colo: More accurate update checkpoint time Juan Quintela
@ 2021-12-15 10:32 ` Juan Quintela
2021-12-15 10:32 ` [PULL 04/18] migration/colo: Optimize COLO primary node start code path Juan Quintela
` (15 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Juan Quintela @ 2021-12-15 10:32 UTC (permalink / raw)
To: qemu-devel
Cc: Hailiang Zhang, Juan Quintela, Rao, Lei, Dr. David Alan Gilbert,
Zhang Chen, Marc-André Lureau
From: "Rao, Lei" <lei.rao@intel.com>
When the PVM guest powers off, the COLO thread may be waiting on a
semaphore in colo_process_checkpoint(). So, we should wake up the
COLO thread before migration shutdown.
Signed-off-by: Lei Rao <lei.rao@intel.com>
Reviewed-by: Zhang Chen <chen.zhang@intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
include/migration/colo.h | 1 +
migration/colo.c | 20 ++++++++++++++++++++
migration/migration.c | 6 ++++++
3 files changed, 27 insertions(+)
diff --git a/include/migration/colo.h b/include/migration/colo.h
index 768e1f04c3..5fbe1a6d5d 100644
--- a/include/migration/colo.h
+++ b/include/migration/colo.h
@@ -37,4 +37,5 @@ COLOMode get_colo_mode(void);
void colo_do_failover(void);
void colo_checkpoint_notify(void *opaque);
+void colo_shutdown(void);
#endif
diff --git a/migration/colo.c b/migration/colo.c
index c8fadae956..2a85504966 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -819,6 +819,26 @@ static void colo_wait_handle_message(MigrationIncomingState *mis,
}
}
+void colo_shutdown(void)
+{
+ MigrationIncomingState *mis = NULL;
+ MigrationState *s = NULL;
+
+ switch (get_colo_mode()) {
+ case COLO_MODE_PRIMARY:
+ s = migrate_get_current();
+ qemu_event_set(&s->colo_checkpoint_event);
+ qemu_sem_post(&s->colo_exit_sem);
+ break;
+ case COLO_MODE_SECONDARY:
+ mis = migration_incoming_get_current();
+ qemu_sem_post(&mis->colo_incoming_sem);
+ break;
+ default:
+ break;
+ }
+}
+
void *colo_process_incoming_thread(void *opaque)
{
MigrationIncomingState *mis = opaque;
diff --git a/migration/migration.c b/migration/migration.c
index abaf6f9e3d..c0ab86e9a5 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -225,6 +225,12 @@ void migration_cancel(const Error *error)
void migration_shutdown(void)
{
+ /*
+ * When the QEMU main thread exits, the COLO thread
+ * may be waiting on a semaphore. So, we should wake up
+ * the COLO thread before migration shutdown.
+ */
+ colo_shutdown();
/*
* Cancel the current migration - that will (eventually)
* stop the migration using this structure
--
2.33.1
* [PULL 04/18] migration/colo: Optimize COLO primary node start code path
2021-12-15 10:32 [PULL 00/18] Migration 20211214 patches Juan Quintela
` (2 preceding siblings ...)
2021-12-15 10:32 ` [PULL 03/18] Fixed a QEMU hang when guest poweroff in COLO mode Juan Quintela
@ 2021-12-15 10:32 ` Juan Quintela
2021-12-15 10:32 ` [PULL 05/18] migration: Remove is_zero_range() Juan Quintela
` (14 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Juan Quintela @ 2021-12-15 10:32 UTC (permalink / raw)
To: qemu-devel
Cc: Marc-André Lureau, Hailiang Zhang, Dr. David Alan Gilbert,
Zhang Chen, Juan Quintela
From: Zhang Chen <chen.zhang@intel.com>
Optimize COLO primary start path from:
MIGRATION_STATUS_XXX --> MIGRATION_STATUS_ACTIVE --> MIGRATION_STATUS_COLO --> MIGRATION_STATUS_COMPLETED
To:
MIGRATION_STATUS_XXX --> MIGRATION_STATUS_COLO --> MIGRATION_STATUS_COMPLETED
No need to start primary COLO through "MIGRATION_STATUS_ACTIVE".
Signed-off-by: Zhang Chen <chen.zhang@intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/colo.c | 2 --
migration/migration.c | 13 +++++++------
2 files changed, 7 insertions(+), 8 deletions(-)
diff --git a/migration/colo.c b/migration/colo.c
index 2a85504966..4a772afe78 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -666,8 +666,6 @@ void migrate_start_colo_process(MigrationState *s)
colo_checkpoint_notify, s);
qemu_sem_init(&s->colo_exit_sem, 0);
- migrate_set_state(&s->state, MIGRATION_STATUS_ACTIVE,
- MIGRATION_STATUS_COLO);
colo_process_checkpoint(s);
qemu_mutex_lock_iothread();
}
diff --git a/migration/migration.c b/migration/migration.c
index c0ab86e9a5..2c1edb2cb9 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -3613,12 +3613,7 @@ static void migration_iteration_finish(MigrationState *s)
migration_calculate_complete(s);
runstate_set(RUN_STATE_POSTMIGRATE);
break;
-
- case MIGRATION_STATUS_ACTIVE:
- /*
- * We should really assert here, but since it's during
- * migration, let's try to reduce the usage of assertions.
- */
+ case MIGRATION_STATUS_COLO:
if (!migrate_colo_enabled()) {
error_report("%s: critical error: calling COLO code without "
"COLO enabled", __func__);
@@ -3628,6 +3623,12 @@ static void migration_iteration_finish(MigrationState *s)
* Fixme: we will run VM in COLO no matter its old running state.
* After exited COLO, we will keep running.
*/
+ /* Fallthrough */
+ case MIGRATION_STATUS_ACTIVE:
+ /*
+ * We should really assert here, but since it's during
+ * migration, let's try to reduce the usage of assertions.
+ */
s->vm_was_running = true;
/* Fallthrough */
case MIGRATION_STATUS_FAILED:
--
2.33.1
* [PULL 05/18] migration: Remove is_zero_range()
2021-12-15 10:32 [PULL 00/18] Migration 20211214 patches Juan Quintela
` (3 preceding siblings ...)
2021-12-15 10:32 ` [PULL 04/18] migration/colo: Optimize COLO primary node start code path Juan Quintela
@ 2021-12-15 10:32 ` Juan Quintela
2021-12-15 10:32 ` [PULL 06/18] dump: Remove is_zero_page() Juan Quintela
` (13 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Juan Quintela @ 2021-12-15 10:32 UTC (permalink / raw)
To: qemu-devel
Cc: Marc-André Lureau, Hailiang Zhang, Richard Henderson,
Dr. David Alan Gilbert, Juan Quintela
It is a trivial wrapper around buffer_is_zero(), so change the callers to use that directly.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
---
migration/ram.c | 9 ++-------
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 2c688f5bbb..57efa67f20 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -81,11 +81,6 @@
/* 0x80 is reserved in migration.h start with 0x100 next */
#define RAM_SAVE_FLAG_COMPRESS_PAGE 0x100
-static inline bool is_zero_range(uint8_t *p, uint64_t size)
-{
- return buffer_is_zero(p, size);
-}
-
XBZRLECacheStats xbzrle_counters;
/* struct contains XBZRLE cache and a static page
@@ -1180,7 +1175,7 @@ static int save_zero_page_to_file(RAMState *rs, QEMUFile *file,
uint8_t *p = block->host + offset;
int len = 0;
- if (is_zero_range(p, TARGET_PAGE_SIZE)) {
+ if (buffer_is_zero(p, TARGET_PAGE_SIZE)) {
len += save_page_header(rs, file, block, offset | RAM_SAVE_FLAG_ZERO);
qemu_put_byte(file, 0);
len += 1;
@@ -3367,7 +3362,7 @@ static inline void *colo_cache_from_block_offset(RAMBlock *block,
*/
void ram_handle_compressed(void *host, uint8_t ch, uint64_t size)
{
- if (ch != 0 || !is_zero_range(host, size)) {
+ if (ch != 0 || !buffer_is_zero(host, size)) {
memset(host, ch, size);
}
}
--
2.33.1
* [PULL 06/18] dump: Remove is_zero_page()
2021-12-15 10:32 [PULL 00/18] Migration 20211214 patches Juan Quintela
` (4 preceding siblings ...)
2021-12-15 10:32 ` [PULL 05/18] migration: Remove is_zero_range() Juan Quintela
@ 2021-12-15 10:32 ` Juan Quintela
2021-12-15 10:32 ` [PULL 07/18] multifd: Delete useless operation Juan Quintela
` (12 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Juan Quintela @ 2021-12-15 10:32 UTC (permalink / raw)
To: qemu-devel
Cc: Marc-André Lureau, Hailiang Zhang, Richard Henderson,
Dr. David Alan Gilbert, Juan Quintela
It is a trivial wrapper around buffer_is_zero(), so change the callers to use that directly.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
---
dump/dump.c | 10 +---------
1 file changed, 1 insertion(+), 9 deletions(-)
diff --git a/dump/dump.c b/dump/dump.c
index 662d0a62cd..a84d8b1598 100644
--- a/dump/dump.c
+++ b/dump/dump.c
@@ -1293,14 +1293,6 @@ static size_t get_len_buf_out(size_t page_size, uint32_t flag_compress)
return 0;
}
-/*
- * check if the page is all 0
- */
-static inline bool is_zero_page(const uint8_t *buf, size_t page_size)
-{
- return buffer_is_zero(buf, page_size);
-}
-
static void write_dump_pages(DumpState *s, Error **errp)
{
int ret = 0;
@@ -1357,7 +1349,7 @@ static void write_dump_pages(DumpState *s, Error **errp)
*/
while (get_next_page(&block_iter, &pfn_iter, &buf, s)) {
/* check zero page */
- if (is_zero_page(buf, s->dump_info.page_size)) {
+ if (buffer_is_zero(buf, s->dump_info.page_size)) {
ret = write_cache(&page_desc, &pd_zero, sizeof(PageDescriptor),
false);
if (ret < 0) {
--
2.33.1
* [PULL 07/18] multifd: Delete useless operation
2021-12-15 10:32 [PULL 00/18] Migration 20211214 patches Juan Quintela
` (5 preceding siblings ...)
2021-12-15 10:32 ` [PULL 06/18] dump: Remove is_zero_page() Juan Quintela
@ 2021-12-15 10:32 ` Juan Quintela
2021-12-15 10:32 ` [PULL 08/18] migration: Never call twice qemu_target_page_size() Juan Quintela
` (11 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Juan Quintela @ 2021-12-15 10:32 UTC (permalink / raw)
To: qemu-devel
Cc: Marc-André Lureau, Hailiang Zhang, Dr. David Alan Gilbert,
Juan Quintela
We were dividing by page_size only to multiply by it again at the sole
use site. While there, improve the comments.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
migration/multifd-zlib.c | 13 ++++---------
migration/multifd-zstd.c | 13 ++++---------
2 files changed, 8 insertions(+), 18 deletions(-)
diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index ab4ba75d75..3fc7813b44 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -42,7 +42,6 @@ struct zlib_data {
*/
static int zlib_send_setup(MultiFDSendParams *p, Error **errp)
{
- uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
struct zlib_data *z = g_malloc0(sizeof(struct zlib_data));
z_stream *zs = &z->zs;
@@ -54,9 +53,8 @@ static int zlib_send_setup(MultiFDSendParams *p, Error **errp)
error_setg(errp, "multifd %d: deflate init failed", p->id);
return -1;
}
- /* We will never have more than page_count pages */
- z->zbuff_len = page_count * qemu_target_page_size();
- z->zbuff_len *= 2;
+ /* To be safe, we reserve twice the size of the packet */
+ z->zbuff_len = MULTIFD_PACKET_SIZE * 2;
z->zbuff = g_try_malloc(z->zbuff_len);
if (!z->zbuff) {
deflateEnd(&z->zs);
@@ -180,7 +178,6 @@ static int zlib_send_write(MultiFDSendParams *p, uint32_t used, Error **errp)
*/
static int zlib_recv_setup(MultiFDRecvParams *p, Error **errp)
{
- uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
struct zlib_data *z = g_malloc0(sizeof(struct zlib_data));
z_stream *zs = &z->zs;
@@ -194,10 +191,8 @@ static int zlib_recv_setup(MultiFDRecvParams *p, Error **errp)
error_setg(errp, "multifd %d: inflate init failed", p->id);
return -1;
}
- /* We will never have more than page_count pages */
- z->zbuff_len = page_count * qemu_target_page_size();
- /* We know compression "could" use more space */
- z->zbuff_len *= 2;
+ /* To be safe, we reserve twice the size of the packet */
+ z->zbuff_len = MULTIFD_PACKET_SIZE * 2;
z->zbuff = g_try_malloc(z->zbuff_len);
if (!z->zbuff) {
inflateEnd(zs);
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index 693bddf8c9..cc3b8869c0 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -47,7 +47,6 @@ struct zstd_data {
*/
static int zstd_send_setup(MultiFDSendParams *p, Error **errp)
{
- uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
struct zstd_data *z = g_new0(struct zstd_data, 1);
int res;
@@ -67,9 +66,8 @@ static int zstd_send_setup(MultiFDSendParams *p, Error **errp)
p->id, ZSTD_getErrorName(res));
return -1;
}
- /* We will never have more than page_count pages */
- z->zbuff_len = page_count * qemu_target_page_size();
- z->zbuff_len *= 2;
+ /* To be safe, we reserve twice the size of the packet */
+ z->zbuff_len = MULTIFD_PACKET_SIZE * 2;
z->zbuff = g_try_malloc(z->zbuff_len);
if (!z->zbuff) {
ZSTD_freeCStream(z->zcs);
@@ -191,7 +189,6 @@ static int zstd_send_write(MultiFDSendParams *p, uint32_t used, Error **errp)
*/
static int zstd_recv_setup(MultiFDRecvParams *p, Error **errp)
{
- uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
struct zstd_data *z = g_new0(struct zstd_data, 1);
int ret;
@@ -212,10 +209,8 @@ static int zstd_recv_setup(MultiFDRecvParams *p, Error **errp)
return -1;
}
- /* We will never have more than page_count pages */
- z->zbuff_len = page_count * qemu_target_page_size();
- /* We know compression "could" use more space */
- z->zbuff_len *= 2;
+ /* To be safe, we reserve twice the size of the packet */
+ z->zbuff_len = MULTIFD_PACKET_SIZE * 2;
z->zbuff = g_try_malloc(z->zbuff_len);
if (!z->zbuff) {
ZSTD_freeDStream(z->zds);
--
2.33.1
* [PULL 08/18] migration: Never call twice qemu_target_page_size()
2021-12-15 10:32 [PULL 00/18] Migration 20211214 patches Juan Quintela
` (6 preceding siblings ...)
2021-12-15 10:32 ` [PULL 07/18] multifd: Delete useless operation Juan Quintela
@ 2021-12-15 10:32 ` Juan Quintela
2021-12-15 10:32 ` [PULL 09/18] multifd: Rename used field to num Juan Quintela
` (10 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Juan Quintela @ 2021-12-15 10:32 UTC (permalink / raw)
To: qemu-devel
Cc: Marc-André Lureau, Hailiang Zhang, Dr. David Alan Gilbert,
Juan Quintela
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
migration/migration.c | 7 ++++---
migration/multifd.c | 7 ++++---
migration/savevm.c | 5 +++--
3 files changed, 11 insertions(+), 8 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index 2c1edb2cb9..3de11ae921 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -996,6 +996,8 @@ static void populate_time_info(MigrationInfo *info, MigrationState *s)
static void populate_ram_info(MigrationInfo *info, MigrationState *s)
{
+ size_t page_size = qemu_target_page_size();
+
info->has_ram = true;
info->ram = g_malloc0(sizeof(*info->ram));
info->ram->transferred = ram_counters.transferred;
@@ -1004,12 +1006,11 @@ static void populate_ram_info(MigrationInfo *info, MigrationState *s)
/* legacy value. It is not used anymore */
info->ram->skipped = 0;
info->ram->normal = ram_counters.normal;
- info->ram->normal_bytes = ram_counters.normal *
- qemu_target_page_size();
+ info->ram->normal_bytes = ram_counters.normal * page_size;
info->ram->mbps = s->mbps;
info->ram->dirty_sync_count = ram_counters.dirty_sync_count;
info->ram->postcopy_requests = ram_counters.postcopy_requests;
- info->ram->page_size = qemu_target_page_size();
+ info->ram->page_size = page_size;
info->ram->multifd_bytes = ram_counters.multifd_bytes;
info->ram->pages_per_second = s->pages_per_second;
diff --git a/migration/multifd.c b/migration/multifd.c
index 7c9deb1921..8125d0015c 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -289,7 +289,8 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
{
MultiFDPacket_t *packet = p->packet;
- uint32_t pages_max = MULTIFD_PACKET_SIZE / qemu_target_page_size();
+ size_t page_size = qemu_target_page_size();
+ uint32_t pages_max = MULTIFD_PACKET_SIZE / page_size;
RAMBlock *block;
int i;
@@ -358,14 +359,14 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
for (i = 0; i < p->pages->used; i++) {
uint64_t offset = be64_to_cpu(packet->offset[i]);
- if (offset > (block->used_length - qemu_target_page_size())) {
+ if (offset > (block->used_length - page_size)) {
error_setg(errp, "multifd: offset too long %" PRIu64
" (max " RAM_ADDR_FMT ")",
offset, block->used_length);
return -1;
}
p->pages->iov[i].iov_base = block->host + offset;
- p->pages->iov[i].iov_len = qemu_target_page_size();
+ p->pages->iov[i].iov_len = page_size;
}
return 0;
diff --git a/migration/savevm.c b/migration/savevm.c
index d59e976d50..0bef031acb 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1685,6 +1685,7 @@ static int loadvm_postcopy_handle_advise(MigrationIncomingState *mis,
{
PostcopyState ps = postcopy_state_set(POSTCOPY_INCOMING_ADVISE);
uint64_t remote_pagesize_summary, local_pagesize_summary, remote_tps;
+ size_t page_size = qemu_target_page_size();
Error *local_err = NULL;
trace_loadvm_postcopy_handle_advise();
@@ -1741,13 +1742,13 @@ static int loadvm_postcopy_handle_advise(MigrationIncomingState *mis,
}
remote_tps = qemu_get_be64(mis->from_src_file);
- if (remote_tps != qemu_target_page_size()) {
+ if (remote_tps != page_size) {
/*
* Again, some differences could be dealt with, but for now keep it
* simple.
*/
error_report("Postcopy needs matching target page sizes (s=%d d=%zd)",
- (int)remote_tps, qemu_target_page_size());
+ (int)remote_tps, page_size);
return -1;
}
--
2.33.1
* [PULL 09/18] multifd: Rename used field to num
2021-12-15 10:32 [PULL 00/18] Migration 20211214 patches Juan Quintela
` (7 preceding siblings ...)
2021-12-15 10:32 ` [PULL 08/18] migration: Never call twice qemu_target_page_size() Juan Quintela
@ 2021-12-15 10:32 ` Juan Quintela
2021-12-15 10:32 ` [PULL 10/18] multifd: Add missing documentation Juan Quintela
` (9 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Juan Quintela @ 2021-12-15 10:32 UTC (permalink / raw)
To: qemu-devel
Cc: Marc-André Lureau, Hailiang Zhang, Dr. David Alan Gilbert,
Juan Quintela
We will later need to split it into zero_num (number of zero pages) and
normal_num (number of normal pages). This name is a better fit.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
migration/multifd.h | 2 +-
migration/multifd.c | 38 +++++++++++++++++++-------------------
2 files changed, 20 insertions(+), 20 deletions(-)
diff --git a/migration/multifd.h b/migration/multifd.h
index 15c50ca0b2..86820dd028 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -55,7 +55,7 @@ typedef struct {
typedef struct {
/* number of used pages */
- uint32_t used;
+ uint32_t num;
/* number of allocated pages */
uint32_t allocated;
/* global number of generated multifd packets */
diff --git a/migration/multifd.c b/migration/multifd.c
index 8125d0015c..8ea86d81dc 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -252,7 +252,7 @@ static MultiFDPages_t *multifd_pages_init(size_t size)
static void multifd_pages_clear(MultiFDPages_t *pages)
{
- pages->used = 0;
+ pages->num = 0;
pages->allocated = 0;
pages->packet_num = 0;
pages->block = NULL;
@@ -270,7 +270,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
packet->flags = cpu_to_be32(p->flags);
packet->pages_alloc = cpu_to_be32(p->pages->allocated);
- packet->pages_used = cpu_to_be32(p->pages->used);
+ packet->pages_used = cpu_to_be32(p->pages->num);
packet->next_packet_size = cpu_to_be32(p->next_packet_size);
packet->packet_num = cpu_to_be64(p->packet_num);
@@ -278,7 +278,7 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
strncpy(packet->ramblock, p->pages->block->idstr, 256);
}
- for (i = 0; i < p->pages->used; i++) {
+ for (i = 0; i < p->pages->num; i++) {
/* there are architectures where ram_addr_t is 32 bit */
uint64_t temp = p->pages->offset[i];
@@ -332,18 +332,18 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
p->pages = multifd_pages_init(packet->pages_alloc);
}
- p->pages->used = be32_to_cpu(packet->pages_used);
- if (p->pages->used > packet->pages_alloc) {
+ p->pages->num = be32_to_cpu(packet->pages_used);
+ if (p->pages->num > packet->pages_alloc) {
error_setg(errp, "multifd: received packet "
"with %d pages and expected maximum pages are %d",
- p->pages->used, packet->pages_alloc) ;
+ p->pages->num, packet->pages_alloc) ;
return -1;
}
p->next_packet_size = be32_to_cpu(packet->next_packet_size);
p->packet_num = be64_to_cpu(packet->packet_num);
- if (p->pages->used == 0) {
+ if (p->pages->num == 0) {
return 0;
}
@@ -356,7 +356,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
return -1;
}
- for (i = 0; i < p->pages->used; i++) {
+ for (i = 0; i < p->pages->num; i++) {
uint64_t offset = be64_to_cpu(packet->offset[i]);
if (offset > (block->used_length - page_size)) {
@@ -443,13 +443,13 @@ static int multifd_send_pages(QEMUFile *f)
}
qemu_mutex_unlock(&p->mutex);
}
- assert(!p->pages->used);
+ assert(!p->pages->num);
assert(!p->pages->block);
p->packet_num = multifd_send_state->packet_num++;
multifd_send_state->pages = p->pages;
p->pages = pages;
- transferred = ((uint64_t) pages->used) * qemu_target_page_size()
+ transferred = ((uint64_t) pages->num) * qemu_target_page_size()
+ p->packet_len;
qemu_file_update_transfer(f, transferred);
ram_counters.multifd_bytes += transferred;
@@ -469,12 +469,12 @@ int multifd_queue_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset)
}
if (pages->block == block) {
- pages->offset[pages->used] = offset;
- pages->iov[pages->used].iov_base = block->host + offset;
- pages->iov[pages->used].iov_len = qemu_target_page_size();
- pages->used++;
+ pages->offset[pages->num] = offset;
+ pages->iov[pages->num].iov_base = block->host + offset;
+ pages->iov[pages->num].iov_len = qemu_target_page_size();
+ pages->num++;
- if (pages->used < pages->allocated) {
+ if (pages->num < pages->allocated) {
return 1;
}
}
@@ -586,7 +586,7 @@ void multifd_send_sync_main(QEMUFile *f)
if (!migrate_use_multifd()) {
return;
}
- if (multifd_send_state->pages->used) {
+ if (multifd_send_state->pages->num) {
if (multifd_send_pages(f) < 0) {
error_report("%s: multifd_send_pages fail", __func__);
return;
@@ -649,7 +649,7 @@ static void *multifd_send_thread(void *opaque)
qemu_mutex_lock(&p->mutex);
if (p->pending_job) {
- uint32_t used = p->pages->used;
+ uint32_t used = p->pages->num;
uint64_t packet_num = p->packet_num;
flags = p->flags;
@@ -665,7 +665,7 @@ static void *multifd_send_thread(void *opaque)
p->flags = 0;
p->num_packets++;
p->num_pages += used;
- p->pages->used = 0;
+ p->pages->num = 0;
p->pages->block = NULL;
qemu_mutex_unlock(&p->mutex);
@@ -1091,7 +1091,7 @@ static void *multifd_recv_thread(void *opaque)
break;
}
- used = p->pages->used;
+ used = p->pages->num;
flags = p->flags;
/* recv methods don't know how to handle the SYNC flag */
p->flags &= ~MULTIFD_FLAG_SYNC;
--
2.33.1
* [PULL 10/18] multifd: Add missing documentation
2021-12-15 10:32 [PULL 00/18] Migration 20211214 patches Juan Quintela
` (8 preceding siblings ...)
2021-12-15 10:32 ` [PULL 09/18] multifd: Rename used field to num Juan Quintela
@ 2021-12-15 10:32 ` Juan Quintela
2021-12-15 10:32 ` [PULL 11/18] multifd: The variable is only used inside the loop Juan Quintela
` (8 subsequent siblings)
18 siblings, 0 replies; 20+ messages in thread
From: Juan Quintela @ 2021-12-15 10:32 UTC (permalink / raw)
To: qemu-devel
Cc: Marc-André Lureau, Hailiang Zhang, Dr. David Alan Gilbert,
Juan Quintela
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
migration/multifd-zlib.c | 2 ++
migration/multifd-zstd.c | 2 ++
migration/multifd.c | 1 +
3 files changed, 5 insertions(+)
diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index 3fc7813b44..d0437cce2a 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -72,6 +72,7 @@ static int zlib_send_setup(MultiFDSendParams *p, Error **errp)
* Close the channel and return memory.
*
* @p: Params for the channel that we are using
+ * @errp: pointer to an error
*/
static void zlib_send_cleanup(MultiFDSendParams *p, Error **errp)
{
@@ -94,6 +95,7 @@ static void zlib_send_cleanup(MultiFDSendParams *p, Error **errp)
*
* @p: Params for the channel that we are using
* @used: number of pages used
+ * @errp: pointer to an error
*/
static int zlib_send_prepare(MultiFDSendParams *p, uint32_t used, Error **errp)
{
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index cc3b8869c0..09ae1cf91a 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -84,6 +84,7 @@ static int zstd_send_setup(MultiFDSendParams *p, Error **errp)
* Close the channel and return memory.
*
* @p: Params for the channel that we are using
+ * @errp: pointer to an error
*/
static void zstd_send_cleanup(MultiFDSendParams *p, Error **errp)
{
@@ -107,6 +108,7 @@ static void zstd_send_cleanup(MultiFDSendParams *p, Error **errp)
*
* @p: Params for the channel that we are using
* @used: number of pages used
+ * @errp: pointer to an error
*/
static int zstd_send_prepare(MultiFDSendParams *p, uint32_t used, Error **errp)
{
diff --git a/migration/multifd.c b/migration/multifd.c
index 8ea86d81dc..cdeffdc4c5 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -66,6 +66,7 @@ static int nocomp_send_setup(MultiFDSendParams *p, Error **errp)
* For no compression this function does nothing.
*
* @p: Params for the channel that we are using
+ * @errp: pointer to an error
*/
static void nocomp_send_cleanup(MultiFDSendParams *p, Error **errp)
{
--
2.33.1
* [PULL 11/18] multifd: The variable is only used inside the loop
From: Juan Quintela @ 2021-12-15 10:32 UTC (permalink / raw)
To: qemu-devel
Cc: Marc-André Lureau, Hailiang Zhang, Dr. David Alan Gilbert,
Juan Quintela
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
migration/multifd.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/migration/multifd.c b/migration/multifd.c
index cdeffdc4c5..ce7101cf9d 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -629,7 +629,6 @@ static void *multifd_send_thread(void *opaque)
MultiFDSendParams *p = opaque;
Error *local_err = NULL;
int ret = 0;
- uint32_t flags = 0;
trace_multifd_send_thread_start(p->id);
rcu_register_thread();
@@ -652,7 +651,7 @@ static void *multifd_send_thread(void *opaque)
if (p->pending_job) {
uint32_t used = p->pages->num;
uint64_t packet_num = p->packet_num;
- flags = p->flags;
+ uint32_t flags = p->flags;
if (used) {
ret = multifd_send_state->ops->send_prepare(p, used,
--
2.33.1
* [PULL 12/18] multifd: remove used parameter from send_prepare() method
From: Juan Quintela @ 2021-12-15 10:32 UTC (permalink / raw)
To: qemu-devel
Cc: Marc-André Lureau, Hailiang Zhang, Dr. David Alan Gilbert,
Juan Quintela
The page count is already available as p->pages->num.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
migration/multifd.h | 2 +-
migration/multifd-zlib.c | 7 +++----
migration/multifd-zstd.c | 7 +++----
migration/multifd.c | 9 +++------
4 files changed, 10 insertions(+), 15 deletions(-)
diff --git a/migration/multifd.h b/migration/multifd.h
index 86820dd028..7968cc5c20 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -159,7 +159,7 @@ typedef struct {
/* Cleanup for sending side */
void (*send_cleanup)(MultiFDSendParams *p, Error **errp);
/* Prepare the send packet */
- int (*send_prepare)(MultiFDSendParams *p, uint32_t used, Error **errp);
+ int (*send_prepare)(MultiFDSendParams *p, Error **errp);
/* Write the send packet */
int (*send_write)(MultiFDSendParams *p, uint32_t used, Error **errp);
/* Setup for receiving side */
diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index d0437cce2a..28f0ed933b 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -94,10 +94,9 @@ static void zlib_send_cleanup(MultiFDSendParams *p, Error **errp)
* Returns 0 for success or -1 for error
*
* @p: Params for the channel that we are using
- * @used: number of pages used
* @errp: pointer to an error
*/
-static int zlib_send_prepare(MultiFDSendParams *p, uint32_t used, Error **errp)
+static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
{
struct iovec *iov = p->pages->iov;
struct zlib_data *z = p->data;
@@ -106,11 +105,11 @@ static int zlib_send_prepare(MultiFDSendParams *p, uint32_t used, Error **errp)
int ret;
uint32_t i;
- for (i = 0; i < used; i++) {
+ for (i = 0; i < p->pages->num; i++) {
uint32_t available = z->zbuff_len - out_size;
int flush = Z_NO_FLUSH;
- if (i == used - 1) {
+ if (i == p->pages->num - 1) {
flush = Z_SYNC_FLUSH;
}
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index 09ae1cf91a..4a71e96e06 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -107,10 +107,9 @@ static void zstd_send_cleanup(MultiFDSendParams *p, Error **errp)
* Returns 0 for success or -1 for error
*
* @p: Params for the channel that we are using
- * @used: number of pages used
* @errp: pointer to an error
*/
-static int zstd_send_prepare(MultiFDSendParams *p, uint32_t used, Error **errp)
+static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
{
struct iovec *iov = p->pages->iov;
struct zstd_data *z = p->data;
@@ -121,10 +120,10 @@ static int zstd_send_prepare(MultiFDSendParams *p, uint32_t used, Error **errp)
z->out.size = z->zbuff_len;
z->out.pos = 0;
- for (i = 0; i < used; i++) {
+ for (i = 0; i < p->pages->num; i++) {
ZSTD_EndDirective flush = ZSTD_e_continue;
- if (i == used - 1) {
+ if (i == p->pages->num - 1) {
flush = ZSTD_e_flush;
}
z->in.src = iov[i].iov_base;
diff --git a/migration/multifd.c b/migration/multifd.c
index ce7101cf9d..098ef8842c 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -82,13 +82,11 @@ static void nocomp_send_cleanup(MultiFDSendParams *p, Error **errp)
* Returns 0 for success or -1 for error
*
* @p: Params for the channel that we are using
- * @used: number of pages used
* @errp: pointer to an error
*/
-static int nocomp_send_prepare(MultiFDSendParams *p, uint32_t used,
- Error **errp)
+static int nocomp_send_prepare(MultiFDSendParams *p, Error **errp)
{
- p->next_packet_size = used * qemu_target_page_size();
+ p->next_packet_size = p->pages->num * qemu_target_page_size();
p->flags |= MULTIFD_FLAG_NOCOMP;
return 0;
}
@@ -654,8 +652,7 @@ static void *multifd_send_thread(void *opaque)
uint32_t flags = p->flags;
if (used) {
- ret = multifd_send_state->ops->send_prepare(p, used,
- &local_err);
+ ret = multifd_send_state->ops->send_prepare(p, &local_err);
if (ret != 0) {
qemu_mutex_unlock(&p->mutex);
break;
--
2.33.1
* [PULL 13/18] multifd: remove used parameter from send_recv_pages() method
From: Juan Quintela @ 2021-12-15 10:32 UTC (permalink / raw)
To: qemu-devel
Cc: Marc-André Lureau, Hailiang Zhang, Dr. David Alan Gilbert,
Juan Quintela
The page count is already available as p->pages->num.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
migration/multifd.h | 2 +-
migration/multifd-zlib.c | 9 ++++-----
migration/multifd-zstd.c | 7 +++----
migration/multifd.c | 7 +++----
4 files changed, 11 insertions(+), 14 deletions(-)
diff --git a/migration/multifd.h b/migration/multifd.h
index 7968cc5c20..e57adc783b 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -167,7 +167,7 @@ typedef struct {
/* Cleanup for receiving side */
void (*recv_cleanup)(MultiFDRecvParams *p);
/* Read all pages */
- int (*recv_pages)(MultiFDRecvParams *p, uint32_t used, Error **errp);
+ int (*recv_pages)(MultiFDRecvParams *p, Error **errp);
} MultiFDMethods;
void multifd_register_ops(int method, MultiFDMethods *ops);
diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index 28f0ed933b..e85ef8824d 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -230,17 +230,16 @@ static void zlib_recv_cleanup(MultiFDRecvParams *p)
* Returns 0 for success or -1 for error
*
* @p: Params for the channel that we are using
- * @used: number of pages used
* @errp: pointer to an error
*/
-static int zlib_recv_pages(MultiFDRecvParams *p, uint32_t used, Error **errp)
+static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp)
{
struct zlib_data *z = p->data;
z_stream *zs = &z->zs;
uint32_t in_size = p->next_packet_size;
/* we measure the change of total_out */
uint32_t out_size = zs->total_out;
- uint32_t expected_size = used * qemu_target_page_size();
+ uint32_t expected_size = p->pages->num * qemu_target_page_size();
uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
int ret;
int i;
@@ -259,12 +258,12 @@ static int zlib_recv_pages(MultiFDRecvParams *p, uint32_t used, Error **errp)
zs->avail_in = in_size;
zs->next_in = z->zbuff;
- for (i = 0; i < used; i++) {
+ for (i = 0; i < p->pages->num; i++) {
struct iovec *iov = &p->pages->iov[i];
int flush = Z_NO_FLUSH;
unsigned long start = zs->total_out;
- if (i == used - 1) {
+ if (i == p->pages->num - 1) {
flush = Z_SYNC_FLUSH;
}
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index 4a71e96e06..a8b104f4ee 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -250,14 +250,13 @@ static void zstd_recv_cleanup(MultiFDRecvParams *p)
* Returns 0 for success or -1 for error
*
* @p: Params for the channel that we are using
- * @used: number of pages used
* @errp: pointer to an error
*/
-static int zstd_recv_pages(MultiFDRecvParams *p, uint32_t used, Error **errp)
+static int zstd_recv_pages(MultiFDRecvParams *p, Error **errp)
{
uint32_t in_size = p->next_packet_size;
uint32_t out_size = 0;
- uint32_t expected_size = used * qemu_target_page_size();
+ uint32_t expected_size = p->pages->num * qemu_target_page_size();
uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
struct zstd_data *z = p->data;
int ret;
@@ -278,7 +277,7 @@ static int zstd_recv_pages(MultiFDRecvParams *p, uint32_t used, Error **errp)
z->in.size = in_size;
z->in.pos = 0;
- for (i = 0; i < used; i++) {
+ for (i = 0; i < p->pages->num; i++) {
struct iovec *iov = &p->pages->iov[i];
z->out.dst = iov->iov_base;
diff --git a/migration/multifd.c b/migration/multifd.c
index 098ef8842c..55d99a8232 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -141,10 +141,9 @@ static void nocomp_recv_cleanup(MultiFDRecvParams *p)
* Returns 0 for success or -1 for error
*
* @p: Params for the channel that we are using
- * @used: number of pages used
* @errp: pointer to an error
*/
-static int nocomp_recv_pages(MultiFDRecvParams *p, uint32_t used, Error **errp)
+static int nocomp_recv_pages(MultiFDRecvParams *p, Error **errp)
{
uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
@@ -153,7 +152,7 @@ static int nocomp_recv_pages(MultiFDRecvParams *p, uint32_t used, Error **errp)
p->id, flags, MULTIFD_FLAG_NOCOMP);
return -1;
}
- return qio_channel_readv_all(p->c, p->pages->iov, used, errp);
+ return qio_channel_readv_all(p->c, p->pages->iov, p->pages->num, errp);
}
static MultiFDMethods multifd_nocomp_ops = {
@@ -1099,7 +1098,7 @@ static void *multifd_recv_thread(void *opaque)
qemu_mutex_unlock(&p->mutex);
if (used) {
- ret = multifd_recv_state->ops->recv_pages(p, used, &local_err);
+ ret = multifd_recv_state->ops->recv_pages(p, &local_err);
if (ret != 0) {
break;
}
--
2.33.1
* [PULL 14/18] multifd: Fill offset and block for reception
From: Juan Quintela @ 2021-12-15 10:32 UTC (permalink / raw)
To: qemu-devel
Cc: Marc-André Lureau, Hailiang Zhang, Dr. David Alan Gilbert,
Juan Quintela
We were using the iov directly, but we will need this information in the
following patches.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
migration/multifd.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/migration/multifd.c b/migration/multifd.c
index 55d99a8232..0533da154a 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -354,6 +354,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
return -1;
}
+ p->pages->block = block;
for (i = 0; i < p->pages->num; i++) {
uint64_t offset = be64_to_cpu(packet->offset[i]);
@@ -363,6 +364,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
offset, block->used_length);
return -1;
}
+ p->pages->offset[i] = offset;
p->pages->iov[i].iov_base = block->host + offset;
p->pages->iov[i].iov_len = page_size;
}
--
2.33.1
* [PULL 15/18] multifd: Shut down the QIO channels to avoid blocking the send threads when they are terminated.
From: Juan Quintela @ 2021-12-15 10:32 UTC (permalink / raw)
To: qemu-devel
Cc: Li Zhang, Daniel P . Berrangé, Hailiang Zhang, Juan Quintela,
Dr. David Alan Gilbert, Marc-André Lureau
From: Li Zhang <lizhang@suse.de>
When doing live migration with 8, 16 or more multifd channels, the guest
hangs in the presence of network errors such as missing TCP ACKs.
At the sender's side:
The main thread is blocked on qemu_thread_join: migration_fd_cleanup is
called because one send thread failed in qio_channel_write_all when the
network problem happened, while the other send threads are blocked in
sendmsg and cannot be terminated. The main thread therefore blocks on
qemu_thread_join, waiting for those threads to finish.
(gdb) bt
0 0x00007f30c8dcffc0 in __pthread_clockjoin_ex () at /lib64/libpthread.so.0
1 0x000055cbb716084b in qemu_thread_join (thread=0x55cbb881f418) at ../util/qemu-thread-posix.c:627
2 0x000055cbb6b54e40 in multifd_save_cleanup () at ../migration/multifd.c:542
3 0x000055cbb6b4de06 in migrate_fd_cleanup (s=0x55cbb8024000) at ../migration/migration.c:1808
4 0x000055cbb6b4dfb4 in migrate_fd_cleanup_bh (opaque=0x55cbb8024000) at ../migration/migration.c:1850
5 0x000055cbb7173ac1 in aio_bh_call (bh=0x55cbb7eb98e0) at ../util/async.c:141
6 0x000055cbb7173bcb in aio_bh_poll (ctx=0x55cbb7ebba80) at ../util/async.c:169
7 0x000055cbb715ba4b in aio_dispatch (ctx=0x55cbb7ebba80) at ../util/aio-posix.c:381
8 0x000055cbb7173ffe in aio_ctx_dispatch (source=0x55cbb7ebba80, callback=0x0, user_data=0x0) at ../util/async.c:311
9 0x00007f30c9c8cdf4 in g_main_context_dispatch () at /usr/lib64/libglib-2.0.so.0
10 0x000055cbb71851a2 in glib_pollfds_poll () at ../util/main-loop.c:232
11 0x000055cbb718521c in os_host_main_loop_wait (timeout=42251070366) at ../util/main-loop.c:255
12 0x000055cbb7185321 in main_loop_wait (nonblocking=0) at ../util/main-loop.c:531
13 0x000055cbb6e6ba27 in qemu_main_loop () at ../softmmu/runstate.c:726
14 0x000055cbb6ad6fd7 in main (argc=68, argv=0x7ffc0c578888, envp=0x7ffc0c578ab0) at ../softmmu/main.c:50
To make sure that the send threads can be terminated, the IO channels
should be shut down so that the threads stop blocking on I/O.
Signed-off-by: Li Zhang <lizhang@suse.de>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/multifd.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/migration/multifd.c b/migration/multifd.c
index 0533da154a..3242f688e5 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -524,6 +524,9 @@ static void multifd_send_terminate_threads(Error *err)
qemu_mutex_lock(&p->mutex);
p->quit = true;
qemu_sem_post(&p->sem);
+ if (p->c) {
+ qio_channel_shutdown(p->c, QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
+ }
qemu_mutex_unlock(&p->mutex);
}
}
--
2.33.1
* [PULL 16/18] COLO: Move some trace code behind qemu_mutex_unlock_iothread()
From: Juan Quintela @ 2021-12-15 10:32 UTC (permalink / raw)
To: qemu-devel
Cc: Marc-André Lureau, Hailiang Zhang, Dr. David Alan Gilbert,
Rao, Lei, Juan Quintela
From: "Rao, Lei" <lei.rao@intel.com>
There is no need to keep the trace calls inside the critical section, so
moving them behind qemu_mutex_unlock_iothread() reduces the time the
lock is held.
Signed-off-by: Lei Rao <lei.rao@intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
migration/colo.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/migration/colo.c b/migration/colo.c
index 4a772afe78..5f7071b3cd 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -680,8 +680,8 @@ static void colo_incoming_process_checkpoint(MigrationIncomingState *mis,
qemu_mutex_lock_iothread();
vm_stop_force_state(RUN_STATE_COLO);
- trace_colo_vm_state_change("run", "stop");
qemu_mutex_unlock_iothread();
+ trace_colo_vm_state_change("run", "stop");
/* FIXME: This is unnecessary for periodic checkpoint mode */
colo_send_message(mis->to_src_file, COLO_MESSAGE_CHECKPOINT_REPLY,
@@ -783,8 +783,8 @@ static void colo_incoming_process_checkpoint(MigrationIncomingState *mis,
vmstate_loading = false;
vm_start();
- trace_colo_vm_state_change("stop", "run");
qemu_mutex_unlock_iothread();
+ trace_colo_vm_state_change("stop", "run");
if (failover_get_state() == FAILOVER_STATUS_RELAUNCH) {
return;
@@ -887,8 +887,8 @@ void *colo_process_incoming_thread(void *opaque)
abort();
#endif
vm_start();
- trace_colo_vm_state_change("stop", "run");
qemu_mutex_unlock_iothread();
+ trace_colo_vm_state_change("stop", "run");
colo_send_message(mis->to_src_file, COLO_MESSAGE_CHECKPOINT_READY,
&local_err);
--
2.33.1
* [PULL 17/18] multifd: Make zstd compression method not use iovs
From: Juan Quintela @ 2021-12-15 10:32 UTC (permalink / raw)
To: qemu-devel
Cc: Marc-André Lureau, Hailiang Zhang, Dr. David Alan Gilbert,
Juan Quintela
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
migration/multifd-zstd.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/migration/multifd-zstd.c b/migration/multifd-zstd.c
index a8b104f4ee..2d5b61106c 100644
--- a/migration/multifd-zstd.c
+++ b/migration/multifd-zstd.c
@@ -13,6 +13,7 @@
#include "qemu/osdep.h"
#include <zstd.h>
#include "qemu/rcu.h"
+#include "exec/ramblock.h"
#include "exec/target_page.h"
#include "qapi/error.h"
#include "migration.h"
@@ -111,8 +112,8 @@ static void zstd_send_cleanup(MultiFDSendParams *p, Error **errp)
*/
static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
{
- struct iovec *iov = p->pages->iov;
struct zstd_data *z = p->data;
+ size_t page_size = qemu_target_page_size();
int ret;
uint32_t i;
@@ -126,8 +127,8 @@ static int zstd_send_prepare(MultiFDSendParams *p, Error **errp)
if (i == p->pages->num - 1) {
flush = ZSTD_e_flush;
}
- z->in.src = iov[i].iov_base;
- z->in.size = iov[i].iov_len;
+ z->in.src = p->pages->block->host + p->pages->offset[i];
+ z->in.size = page_size;
z->in.pos = 0;
/*
@@ -256,7 +257,8 @@ static int zstd_recv_pages(MultiFDRecvParams *p, Error **errp)
{
uint32_t in_size = p->next_packet_size;
uint32_t out_size = 0;
- uint32_t expected_size = p->pages->num * qemu_target_page_size();
+ size_t page_size = qemu_target_page_size();
+ uint32_t expected_size = p->pages->num * page_size;
uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
struct zstd_data *z = p->data;
int ret;
@@ -278,10 +280,8 @@ static int zstd_recv_pages(MultiFDRecvParams *p, Error **errp)
z->in.pos = 0;
for (i = 0; i < p->pages->num; i++) {
- struct iovec *iov = &p->pages->iov[i];
-
- z->out.dst = iov->iov_base;
- z->out.size = iov->iov_len;
+ z->out.dst = p->pages->block->host + p->pages->offset[i];
+ z->out.size = page_size;
z->out.pos = 0;
/*
@@ -295,8 +295,8 @@ static int zstd_recv_pages(MultiFDRecvParams *p, Error **errp)
do {
ret = ZSTD_decompressStream(z->zds, &z->out, &z->in);
} while (ret > 0 && (z->in.size - z->in.pos > 0)
- && (z->out.pos < iov->iov_len));
- if (ret > 0 && (z->out.pos < iov->iov_len)) {
+ && (z->out.pos < page_size));
+ if (ret > 0 && (z->out.pos < page_size)) {
error_setg(errp, "multifd %d: decompressStream buffer too small",
p->id);
return -1;
--
2.33.1
* [PULL 18/18] multifd: Make zlib compression method not use iovs
From: Juan Quintela @ 2021-12-15 10:32 UTC (permalink / raw)
To: qemu-devel
Cc: Marc-André Lureau, Hailiang Zhang, Dr. David Alan Gilbert,
Juan Quintela
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
migration/multifd-zlib.c | 17 +++++++++--------
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/migration/multifd-zlib.c b/migration/multifd-zlib.c
index e85ef8824d..da6201704c 100644
--- a/migration/multifd-zlib.c
+++ b/migration/multifd-zlib.c
@@ -13,6 +13,7 @@
#include "qemu/osdep.h"
#include <zlib.h>
#include "qemu/rcu.h"
+#include "exec/ramblock.h"
#include "exec/target_page.h"
#include "qapi/error.h"
#include "migration.h"
@@ -98,8 +99,8 @@ static void zlib_send_cleanup(MultiFDSendParams *p, Error **errp)
*/
static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
{
- struct iovec *iov = p->pages->iov;
struct zlib_data *z = p->data;
+ size_t page_size = qemu_target_page_size();
z_stream *zs = &z->zs;
uint32_t out_size = 0;
int ret;
@@ -113,8 +114,8 @@ static int zlib_send_prepare(MultiFDSendParams *p, Error **errp)
flush = Z_SYNC_FLUSH;
}
- zs->avail_in = iov[i].iov_len;
- zs->next_in = iov[i].iov_base;
+ zs->avail_in = page_size;
+ zs->next_in = p->pages->block->host + p->pages->offset[i];
zs->avail_out = available;
zs->next_out = z->zbuff + out_size;
@@ -235,6 +236,7 @@ static void zlib_recv_cleanup(MultiFDRecvParams *p)
static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp)
{
struct zlib_data *z = p->data;
+ size_t page_size = qemu_target_page_size();
z_stream *zs = &z->zs;
uint32_t in_size = p->next_packet_size;
/* we measure the change of total_out */
@@ -259,7 +261,6 @@ static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp)
zs->next_in = z->zbuff;
for (i = 0; i < p->pages->num; i++) {
- struct iovec *iov = &p->pages->iov[i];
int flush = Z_NO_FLUSH;
unsigned long start = zs->total_out;
@@ -267,8 +268,8 @@ static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp)
flush = Z_SYNC_FLUSH;
}
- zs->avail_out = iov->iov_len;
- zs->next_out = iov->iov_base;
+ zs->avail_out = page_size;
+ zs->next_out = p->pages->block->host + p->pages->offset[i];
/*
* Welcome to inflate semantics
@@ -281,8 +282,8 @@ static int zlib_recv_pages(MultiFDRecvParams *p, Error **errp)
do {
ret = inflate(zs, flush);
} while (ret == Z_OK && zs->avail_in
- && (zs->total_out - start) < iov->iov_len);
- if (ret == Z_OK && (zs->total_out - start) < iov->iov_len) {
+ && (zs->total_out - start) < page_size);
+ if (ret == Z_OK && (zs->total_out - start) < page_size) {
error_setg(errp, "multifd %d: inflate generated too few output",
p->id);
return -1;
--
2.33.1
* Re: [PULL 00/18] Migration 20211214 patches
From: Richard Henderson @ 2021-12-15 18:33 UTC (permalink / raw)
To: Juan Quintela, qemu-devel
Cc: Marc-André Lureau, Hailiang Zhang, Dr. David Alan Gilbert
On 12/15/21 2:32 AM, Juan Quintela wrote:
> The following changes since commit 76b56fdfc9fa43ec6e5986aee33f108c6c6a511e:
>
> Merge tag 'block-pull-request' of https://gitlab.com/stefanha/qemu into staging (2021-12-14 12:46:18 -0800)
>
> are available in the Git repository at:
>
> https://gitlab.com/juan.quintela/qemu.git tags/migration-20211214-pull-request
>
> for you to fetch changes up to a5ed22948873b50fcf1415d1ce15c71d61a9388d:
>
> multifd: Make zlib compression method not use iovs (2021-12-15 10:38:34 +0100)
>
> ----------------------------------------------------------------
> Migration Pull request
>
> Hi
>
> These are the reviewed patches for the freeze period:
>
> - colo: fix/optimize several things (rao, chen)
> - shutdown qio channels correctly when an error happens (li)
> - several multifd patches for the zero series (me)
>
> Please apply.
>
> Thanks, Juan.
>
> ----------------------------------------------------------------
>
> Juan Quintela (12):
> migration: Remove is_zero_range()
> dump: Remove is_zero_page()
> multifd: Delete useless operation
> migration: Never call twice qemu_target_page_size()
> multifd: Rename used field to num
> multifd: Add missing documention
> multifd: The variable is only used inside the loop
> multifd: remove used parameter from send_prepare() method
> multifd: remove used parameter from send_recv_pages() method
> multifd: Fill offset and block for reception
> multifd: Make zstd compression method not use iovs
> multifd: Make zlib compression method not use iovs
>
> Li Zhang (1):
> multifd: Shut down the QIO channels to avoid blocking the send threads
> when they are terminated.
>
> Rao, Lei (3):
> migration/ram.c: Remove the qemu_mutex_lock in colo_flush_ram_cache.
> Fixed a QEMU hang when guest poweroff in COLO mode
> COLO: Move some trace code behind qemu_mutex_unlock_iothread()
>
> Zhang Chen (2):
> migration/colo: More accurate update checkpoint time
> migration/colo: Optimize COLO primary node start code path
>
> include/migration/colo.h | 1 +
> migration/multifd.h | 6 ++--
> dump/dump.c | 10 +-----
> migration/colo.c | 33 ++++++++++++++-----
> migration/migration.c | 26 +++++++++------
> migration/multifd-zlib.c | 48 +++++++++++++--------------
> migration/multifd-zstd.c | 47 ++++++++++++---------------
> migration/multifd.c | 70 +++++++++++++++++++++-------------------
> migration/ram.c | 11 ++-----
> migration/savevm.c | 5 +--
> 10 files changed, 131 insertions(+), 126 deletions(-)
Applied, thanks.
r~