From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Zhang Chen <zhangckid@gmail.com>
Cc: qemu-devel@nongnu.org, Eric Blake <eblake@redhat.com>,
Markus Armbruster <armbru@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Jason Wang <jasowang@redhat.com>,
zhanghailiang <zhang.zhanghailiang@huawei.com>,
Li Zhijian <lizhijian@cn.fujitsu.com>
Subject: Re: [Qemu-devel] [PATCH V7 RESEND 07/17] COLO: Load dirty pages into SVM's RAM cache firstly
Date: Tue, 15 May 2018 17:55:18 +0100 [thread overview]
Message-ID: <20180515165517.GE2749@work-vm> (raw)
In-Reply-To: <20180514165424.12884-8-zhangckid@gmail.com>
* Zhang Chen (zhangckid@gmail.com) wrote:
> We should not load PVM's state directly into SVM, because there maybe some
> errors happen when SVM is receving data, which will break SVM.
>
> We need to ensure receving all data before load the state into SVM. We use
> an extra memory to cache these data (PVM's ram). The ram cache in secondary side
> is initially the same as SVM/PVM's memory. And in the process of checkpoint,
> we cache the dirty pages of PVM into this ram cache firstly, so this ram cache
> always the same as PVM's memory at every checkpoint, then we flush this cached ram
> to SVM after we receive all PVM's state.
>
> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
> Signed-off-by: Zhang Chen <zhangckid@gmail.com>
> ---
> include/exec/ram_addr.h | 1 +
> migration/migration.c | 2 +
> migration/ram.c | 99 +++++++++++++++++++++++++++++++++++++++--
> migration/ram.h | 4 ++
> migration/savevm.c | 2 +-
> 5 files changed, 104 insertions(+), 4 deletions(-)
>
> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
> index cf2446a176..51ec153a57 100644
> --- a/include/exec/ram_addr.h
> +++ b/include/exec/ram_addr.h
> @@ -27,6 +27,7 @@ struct RAMBlock {
> struct rcu_head rcu;
> struct MemoryRegion *mr;
> uint8_t *host;
> + uint8_t *colo_cache; /* For colo, VM's ram cache */
> ram_addr_t offset;
> ram_addr_t used_length;
> ram_addr_t max_length;
> diff --git a/migration/migration.c b/migration/migration.c
> index 8dee7dd309..cfc1b958b9 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -421,6 +421,8 @@ static void process_incoming_migration_co(void *opaque)
>
> /* Wait checkpoint incoming thread exit before free resource */
> qemu_thread_join(&mis->colo_incoming_thread);
> + /* We hold the global iothread lock, so it is safe here */
> + colo_release_ram_cache();
> }
>
> if (ret < 0) {
> diff --git a/migration/ram.c b/migration/ram.c
> index 912810c18e..7ca845f8a9 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -2520,6 +2520,20 @@ static inline void *host_from_ram_block_offset(RAMBlock *block,
> return block->host + offset;
> }
>
> +static inline void *colo_cache_from_block_offset(RAMBlock *block,
> + ram_addr_t offset)
> +{
> + if (!offset_in_ramblock(block, offset)) {
> + return NULL;
> + }
> + if (!block->colo_cache) {
> + error_report("%s: colo_cache is NULL in block :%s",
> + __func__, block->idstr);
> + return NULL;
> + }
> + return block->colo_cache + offset;
> +}
> +
> /**
> * ram_handle_compressed: handle the zero page case
> *
> @@ -2724,6 +2738,57 @@ static void decompress_data_with_multi_threads(QEMUFile *f,
> qemu_mutex_unlock(&decomp_done_lock);
> }
>
> +/*
> + * colo cache: this is for secondary VM, we cache the whole
> + * memory of the secondary VM, it is need to hold the global lock
> + * to call this helper.
> + */
> +int colo_init_ram_cache(void)
> +{
> + RAMBlock *block;
> +
> + rcu_read_lock();
> + QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
> + block->colo_cache = qemu_anon_ram_alloc(block->used_length,
> + NULL,
> + false);
> + if (!block->colo_cache) {
> + error_report("%s: Can't alloc memory for COLO cache of block %s,"
> + "size 0x" RAM_ADDR_FMT, __func__, block->idstr,
> + block->used_length);
> + goto out_locked;
> + }
> + }
> + rcu_read_unlock();
> + return 0;
> +
> +out_locked:
> + QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
> + if (block->colo_cache) {
> + qemu_anon_ram_free(block->colo_cache, block->used_length);
> + block->colo_cache = NULL;
> + }
> + }
> +
> + rcu_read_unlock();
> + return -errno;
> +}
> +
> +/* It is need to hold the global lock to call this helper */
> +void colo_release_ram_cache(void)
> +{
> + RAMBlock *block;
> +
> + rcu_read_lock();
> + QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
> + if (block->colo_cache) {
> + qemu_anon_ram_free(block->colo_cache, block->used_length);
> + block->colo_cache = NULL;
> + }
> + }
> + rcu_read_unlock();
> +}
> +
> /**
> * ram_load_setup: Setup RAM for migration incoming side
> *
> @@ -2740,6 +2805,7 @@ static int ram_load_setup(QEMUFile *f, void *opaque)
>
> xbzrle_load_setup();
> ramblock_recv_map_init();
> +
> return 0;
> }
>
> @@ -2753,6 +2819,7 @@ static int ram_load_cleanup(void *opaque)
> g_free(rb->receivedmap);
> rb->receivedmap = NULL;
> }
> +
> return 0;
> }
>
> @@ -2966,7 +3033,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
>
> while (!postcopy_running && !ret && !(flags & RAM_SAVE_FLAG_EOS)) {
> ram_addr_t addr, total_ram_bytes;
> - void *host = NULL;
> + void *host = NULL, *host_bak = NULL;
> uint8_t ch;
>
> addr = qemu_get_be64(f);
> @@ -2986,13 +3053,36 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
> RAM_SAVE_FLAG_COMPRESS_PAGE | RAM_SAVE_FLAG_XBZRLE)) {
> RAMBlock *block = ram_block_from_stream(f, flags);
>
> - host = host_from_ram_block_offset(block, addr);
> + /*
> + * After going into COLO, we should load the Page into colo_cache
> + * NOTE: We need to keep a copy of SVM's ram in colo_cache.
> + * Privously, we copied all these memory in preparing stage of COLO
> + * while we need to stop VM, which is a time-consuming process.
> + * Here we optimize it by a trick, back-up every page while in
> + * migration process while COLO is enabled, though it affects the
> + * speed of the migration, but it obviously reduce the downtime of
> + * back-up all SVM'S memory in COLO preparing stage.
> + */
> + if (migration_incoming_in_colo_state()) {
> + host = colo_cache_from_block_offset(block, addr);
> + /* After goes into COLO state, don't backup it any more */
> + if (!migration_incoming_in_colo_state()) {
I don't understand how we can reach this nested 'if';
colo_cache_from_block_offset is short and simple; so how can
migration_incoming_in_colo_state() be both true and false?
I think this is trying to do it for when COLO is enabled but when
receiving the first checkpoint you want to take a copy; but I don't
think that's what the 'if' is doing.
Dave
> + host_bak = host;
> + }
> + }
> + if (!migration_incoming_in_colo_state()) {
> + host = host_from_ram_block_offset(block, addr);
> + }
> if (!host) {
> error_report("Illegal RAM offset " RAM_ADDR_FMT, addr);
> ret = -EINVAL;
> break;
> }
> - ramblock_recv_bitmap_set(block, host);
> +
> + if (!migration_incoming_in_colo_state()) {
> + ramblock_recv_bitmap_set(block, host);
> + }
> +
> trace_ram_load_loop(block->idstr, (uint64_t)addr, flags, host);
> }
>
> @@ -3087,6 +3177,9 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
> if (!ret) {
> ret = qemu_file_get_error(f);
> }
> + if (!ret && host_bak && host) {
> + memcpy(host_bak, host, TARGET_PAGE_SIZE);
> + }
> }
>
> ret |= wait_for_decompress_done();
> diff --git a/migration/ram.h b/migration/ram.h
> index 5030be110a..66e9b86ff0 100644
> --- a/migration/ram.h
> +++ b/migration/ram.h
> @@ -64,4 +64,8 @@ bool ramblock_recv_bitmap_test_byte_offset(RAMBlock *rb, uint64_t byte_offset);
> void ramblock_recv_bitmap_set(RAMBlock *rb, void *host_addr);
> void ramblock_recv_bitmap_set_range(RAMBlock *rb, void *host_addr, size_t nr);
>
> +/* ram cache */
> +int colo_init_ram_cache(void);
> +void colo_release_ram_cache(void);
> +
> #endif
> diff --git a/migration/savevm.c b/migration/savevm.c
> index c43d220220..ec0bff09ce 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -1807,7 +1807,7 @@ static int loadvm_handle_cmd_packaged(MigrationIncomingState *mis)
> static int loadvm_process_enable_colo(MigrationIncomingState *mis)
> {
> migration_incoming_enable_colo();
> - return 0;
> + return colo_init_ram_cache();
> }
>
> /*
> --
> 2.17.0
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
next prev parent reply other threads:[~2018-05-15 16:55 UTC|newest]
Thread overview: 36+ messages / expand[flat|nested] mbox.gz Atom feed top
2018-05-14 16:54 [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 01/17] filter-rewriter: fix memory leak for connection in connection_track_table Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 02/17] colo-compare: implement the process of checkpoint Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 03/17] colo-compare: use notifier to notify packets comparing result Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 04/17] COLO: integrate colo compare with colo frame Zhang Chen
2018-05-16 11:12 ` Dr. David Alan Gilbert
2018-05-16 13:55 ` Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 05/17] COLO: Add block replication into colo process Zhang Chen
2018-05-16 15:54 ` Dr. David Alan Gilbert
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 06/17] COLO: Remove colo_state migration struct Zhang Chen
2018-05-15 16:02 ` Dr. David Alan Gilbert
2018-05-16 13:58 ` Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 07/17] COLO: Load dirty pages into SVM's RAM cache firstly Zhang Chen
2018-05-15 16:55 ` Dr. David Alan Gilbert [this message]
2018-05-20 18:30 ` Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 08/17] ram/COLO: Record the dirty pages that SVM received Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 09/17] COLO: Flush memory data from ram cache Zhang Chen
2018-05-15 14:44 ` Dr. David Alan Gilbert
2018-05-20 16:09 ` Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 10/17] qmp event: Add COLO_EXIT event to notify users while exited COLO Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 11/17] qapi: Add new command to query colo status Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 12/17] savevm: split the process of different stages for loadvm/savevm Zhang Chen
2018-05-15 18:56 ` Dr. David Alan Gilbert
2018-06-03 5:10 ` Zhang Chen
2018-06-19 19:00 ` Dr. David Alan Gilbert
2018-06-22 3:45 ` Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 13/17] COLO: flush host dirty ram from cache Zhang Chen
2018-05-15 15:32 ` Dr. David Alan Gilbert
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 14/17] filter: Add handle_event method for NetFilterClass Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 15/17] filter-rewriter: handle checkpoint and failover event Zhang Chen
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 16/17] COLO: notify net filters about checkpoint/failover event Zhang Chen
2018-05-17 9:48 ` Dr. David Alan Gilbert
2018-05-14 16:54 ` [Qemu-devel] [PATCH V7 RESEND 17/17] COLO: quick failover process by kick COLO thread Zhang Chen
2018-05-17 9:53 ` Dr. David Alan Gilbert
2018-05-16 11:18 ` [Qemu-devel] [PATCH V7 RESEND 00/17] COLO: integrate colo frame with block replication and COLO proxy Dr. David Alan Gilbert
2018-05-16 12:21 ` Jason Wang
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=20180515165517.GE2749@work-vm \
--to=dgilbert@redhat.com \
--cc=armbru@redhat.com \
--cc=eblake@redhat.com \
--cc=jasowang@redhat.com \
--cc=lizhijian@cn.fujitsu.com \
--cc=pbonzini@redhat.com \
--cc=qemu-devel@nongnu.org \
--cc=zhang.zhanghailiang@huawei.com \
--cc=zhangckid@gmail.com \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).