From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: zhanghailiang <zhang.zhanghailiang@huawei.com>
Cc: danielcho@qnap.com, qemu-devel@nongnu.org, quintela@redhat.com
Subject: Re: [PATCH V2 4/8] COLO: Optimize memory back-up process
Date: Thu, 12 Mar 2020 18:44:37 +0000 [thread overview]
Message-ID: <20200312184437.GM3211@work-vm> (raw)
In-Reply-To: <20200224065414.36524-5-zhang.zhanghailiang@huawei.com>
* zhanghailiang (zhang.zhanghailiang@huawei.com) wrote:
> This patch reduces the VM's downtime during the initial COLO process.
> Previously, we copied all of this memory in the COLO preparing stage,
> during which the VM had to be stopped, which is a time-consuming
> process. Here we optimize it with a trick: back up every page during
> the migration process while COLO is enabled. Although this slows the
> migration down somewhat, it clearly reduces the downtime of backing up
> all of the SVM's memory in the COLO preparing stage.
>
> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
I'll queue this as well; I'm going to clean up some minor things:
> ---
> migration/colo.c | 3 +++
> migration/ram.c | 68 +++++++++++++++++++++++++++++++++++-------------
> migration/ram.h | 1 +
> 3 files changed, 54 insertions(+), 18 deletions(-)
>
> diff --git a/migration/colo.c b/migration/colo.c
> index 93c5a452fb..44942c4e23 100644
> --- a/migration/colo.c
> +++ b/migration/colo.c
> @@ -26,6 +26,7 @@
> #include "qemu/main-loop.h"
> #include "qemu/rcu.h"
> #include "migration/failover.h"
> +#include "migration/ram.h"
> #ifdef CONFIG_REPLICATION
> #include "replication.h"
> #endif
> @@ -845,6 +846,8 @@ void *colo_process_incoming_thread(void *opaque)
> */
> qemu_file_set_blocking(mis->from_src_file, true);
>
> + colo_incoming_start_dirty_log();
> +
> bioc = qio_channel_buffer_new(COLO_BUFFER_BASE_SIZE);
> fb = qemu_fopen_channel_input(QIO_CHANNEL(bioc));
> object_unref(OBJECT(bioc));
> diff --git a/migration/ram.c b/migration/ram.c
> index ed23ed1c7c..ebf9e6ba51 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -2277,6 +2277,7 @@ static void ram_list_init_bitmaps(void)
> * dirty_memory[DIRTY_MEMORY_MIGRATION] don't include the whole
> * guest memory.
> */
> +
That change is nice, but shouldn't really be here.
> block->bmap = bitmap_new(pages);
> bitmap_set(block->bmap, 0, pages);
> block->clear_bmap_shift = shift;
> @@ -2986,7 +2987,6 @@ int colo_init_ram_cache(void)
> }
> return -errno;
> }
> - memcpy(block->colo_cache, block->host, block->used_length);
> }
> }
>
> @@ -3000,19 +3000,36 @@ int colo_init_ram_cache(void)
>
> RAMBLOCK_FOREACH_NOT_IGNORED(block) {
> unsigned long pages = block->max_length >> TARGET_PAGE_BITS;
> -
> block->bmap = bitmap_new(pages);
> - bitmap_set(block->bmap, 0, pages);
> }
> }
> - ram_state = g_new0(RAMState, 1);
> - ram_state->migration_dirty_pages = 0;
> - qemu_mutex_init(&ram_state->bitmap_mutex);
> - memory_global_dirty_log_start();
>
> + ram_state_init(&ram_state);
> return 0;
> }
>
> +/* TODO: duplicated with ram_init_bitmaps */
> +void colo_incoming_start_dirty_log(void)
> +{
> + RAMBlock *block = NULL;
> + /* For memory_global_dirty_log_start below. */
> + qemu_mutex_lock_iothread();
> + qemu_mutex_lock_ramlist();
> +
> + memory_global_dirty_log_sync();
> + WITH_RCU_READ_LOCK_GUARD() {
> + RAMBLOCK_FOREACH_NOT_IGNORED(block) {
> + ramblock_sync_dirty_bitmap(ram_state, block);
> + /* Discard this dirty bitmap record */
> + bitmap_zero(block->bmap, block->max_length >> TARGET_PAGE_BITS);
> + }
> + memory_global_dirty_log_start();
> + }
> + ram_state->migration_dirty_pages = 0;
> + qemu_mutex_unlock_ramlist();
> + qemu_mutex_unlock_iothread();
> +}
> +
> /* It is need to hold the global lock to call this helper */
> void colo_release_ram_cache(void)
> {
> @@ -3032,9 +3049,7 @@ void colo_release_ram_cache(void)
> }
> }
> }
> - qemu_mutex_destroy(&ram_state->bitmap_mutex);
> - g_free(ram_state);
> - ram_state = NULL;
> + ram_state_cleanup(&ram_state);
> }
>
> /**
> @@ -3302,7 +3317,6 @@ static void colo_flush_ram_cache(void)
> ramblock_sync_dirty_bitmap(ram_state, block);
> }
> }
> -
I'll remove that
> trace_colo_flush_ram_cache_begin(ram_state->migration_dirty_pages);
> WITH_RCU_READ_LOCK_GUARD() {
> block = QLIST_FIRST_RCU(&ram_list.blocks);
> @@ -3348,7 +3362,7 @@ static int ram_load_precopy(QEMUFile *f)
>
> while (!ret && !(flags & RAM_SAVE_FLAG_EOS)) {
> ram_addr_t addr, total_ram_bytes;
> - void *host = NULL;
> + void *host = NULL, *host_bak = NULL;
> uint8_t ch;
>
> /*
> @@ -3379,20 +3393,35 @@ static int ram_load_precopy(QEMUFile *f)
> RAM_SAVE_FLAG_COMPRESS_PAGE | RAM_SAVE_FLAG_XBZRLE)) {
> RAMBlock *block = ram_block_from_stream(f, flags);
>
> + host = host_from_ram_block_offset(block, addr);
> /*
> - * After going into COLO, we should load the Page into colo_cache.
> + * After going into COLO stage, we should not load the page
> + * into SVM's memory diretly, we put them into colo_cache firstly.
^ typo - c
> + * NOTE: We need to keep a copy of SVM's ram in colo_cache.
> + * Privously, we copied all these memory in preparing stage of COLO
^ typo - e
> + * while we need to stop VM, which is a time-consuming process.
> + * Here we optimize it by a trick, back-up every page while in
> + * migration process while COLO is enabled, though it affects the
> + * speed of the migration, but it obviously reduce the downtime of
> + * back-up all SVM'S memory in COLO preparing stage.
> */
> - if (migration_incoming_in_colo_state()) {
> - host = colo_cache_from_block_offset(block, addr);
> - } else {
> - host = host_from_ram_block_offset(block, addr);
> + if (migration_incoming_colo_enabled()) {
> + if (migration_incoming_in_colo_state()) {
> + /* In COLO stage, put all pages into cache temporarily */
> + host = colo_cache_from_block_offset(block, addr);
> + } else {
> + /*
> + * In migration stage but before COLO stage,
> + * Put all pages into both cache and SVM's memory.
> + */
> + host_bak = colo_cache_from_block_offset(block, addr);
> + }
> }
> if (!host) {
> error_report("Illegal RAM offset " RAM_ADDR_FMT, addr);
> ret = -EINVAL;
> break;
> }
> -
> if (!migration_incoming_in_colo_state()) {
> ramblock_recv_bitmap_set(block, host);
> }
> @@ -3506,6 +3535,9 @@ static int ram_load_precopy(QEMUFile *f)
> if (!ret) {
> ret = qemu_file_get_error(f);
> }
> + if (!ret && host_bak) {
> + memcpy(host_bak, host, TARGET_PAGE_SIZE);
> + }
> }
>
> ret |= wait_for_decompress_done();
> diff --git a/migration/ram.h b/migration/ram.h
> index a553d40751..5ceaff7cb4 100644
> --- a/migration/ram.h
> +++ b/migration/ram.h
> @@ -66,5 +66,6 @@ int ram_dirty_bitmap_reload(MigrationState *s, RAMBlock *rb);
> /* ram cache */
> int colo_init_ram_cache(void);
> void colo_release_ram_cache(void);
> +void colo_incoming_start_dirty_log(void);
>
> #endif
> --
> 2.21.0
>
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK