From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: zhanghailiang <zhang.zhanghailiang@huawei.com>
Cc: qemu-devel@nongnu.org, zhangchen.fnst@cn.fujitsu.com,
	lizhijian@cn.fujitsu.com, xiecl.fnst@cn.fujitsu.com,
	Juan Quintela <quintela@redhat.com>
Subject: Re: [Qemu-devel] [PATCH 07/15] COLO: Load PVM's dirty pages into SVM's RAM cache temporarily
Date: Fri, 7 Apr 2017 18:06:04 +0100
Message-ID: <20170407170600.GD2623@work-vm>
In-Reply-To: <1487734936-43472-8-git-send-email-zhang.zhanghailiang@huawei.com>

* zhanghailiang (zhang.zhanghailiang@huawei.com) wrote:
> We should not load the PVM's state directly into the SVM, because errors
> may occur while the SVM is receiving data, and a partial load would break
> the SVM.
> 
> We need to be sure all the data has been received before loading the state
> into the SVM, so we use extra memory to cache the incoming data (the PVM's
> RAM). The RAM cache on the secondary side starts out identical to the
> SVM/PVM's memory. During each checkpoint we first write the PVM's dirty
> pages into this RAM cache, so the cache matches the PVM's memory at every
> checkpoint; once all of the PVM's state has been received, we flush the
> cached RAM into the SVM.
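> 
> The call flow on the secondary side is roughly as follows (a simplified
> sketch only; the flush step lands in a later patch of this series,
> "COLO: Flush PVM's cached RAM into SVM's memory"):
> 
>     /* Once, after the initial full migration has completed: */
>     colo_init_ram_cache();     /* allocate the caches and copy RAM in */
> 
>     /*
>      * At every checkpoint, with ram_cache_enable set, ram_load()
>      * writes the incoming pages into block->colo_cache instead of
>      * block->host; after the whole checkpoint has been received,
>      * the cached pages are flushed into the SVM's memory.
>      */
> 
>     /* On failover, or when leaving COLO: */
>     colo_release_ram_cache();  /* free the caches */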

You'll probably find it interesting to merge this with Juan's recent RAM block series.
It's probably not too hard, but he's touching a lot of the same code and rearranging things.

Dave


> Cc: Juan Quintela <quintela@redhat.com>
> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
> Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> ---
>  include/exec/ram_addr.h       |  1 +
>  include/migration/migration.h |  4 +++
>  migration/colo.c              | 14 +++++++++
>  migration/ram.c               | 73 ++++++++++++++++++++++++++++++++++++++++++-
>  4 files changed, 91 insertions(+), 1 deletion(-)
> 
> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
> index 3e79466..44e1190 100644
> --- a/include/exec/ram_addr.h
> +++ b/include/exec/ram_addr.h
> @@ -27,6 +27,7 @@ struct RAMBlock {
>      struct rcu_head rcu;
>      struct MemoryRegion *mr;
>      uint8_t *host;
> +    uint8_t *colo_cache; /* For colo, VM's ram cache */
>      ram_addr_t offset;
>      ram_addr_t used_length;
>      ram_addr_t max_length;
> diff --git a/include/migration/migration.h b/include/migration/migration.h
> index 1735d66..93c6148 100644
> --- a/include/migration/migration.h
> +++ b/include/migration/migration.h
> @@ -379,4 +379,8 @@ int ram_save_queue_pages(MigrationState *ms, const char *rbname,
>  PostcopyState postcopy_state_get(void);
>  /* Set the state and return the old state */
>  PostcopyState postcopy_state_set(PostcopyState new_state);
> +
> +/* ram cache */
> +int colo_init_ram_cache(void);
> +void colo_release_ram_cache(void);
>  #endif
> diff --git a/migration/colo.c b/migration/colo.c
> index 1e3e975..edb7f00 100644
> --- a/migration/colo.c
> +++ b/migration/colo.c
> @@ -551,6 +551,7 @@ void *colo_process_incoming_thread(void *opaque)
>      uint64_t total_size;
>      uint64_t value;
>      Error *local_err = NULL;
> +    int ret;
>  
>      qemu_sem_init(&mis->colo_incoming_sem, 0);
>  
> @@ -572,6 +573,12 @@ void *colo_process_incoming_thread(void *opaque)
>       */
>      qemu_file_set_blocking(mis->from_src_file, true);
>  
> +    ret = colo_init_ram_cache();
> +    if (ret < 0) {
> +        error_report("Failed to initialize ram cache");
> +        goto out;
> +    }
> +
>      bioc = qio_channel_buffer_new(COLO_BUFFER_BASE_SIZE);
>      fb = qemu_fopen_channel_input(QIO_CHANNEL(bioc));
>      object_unref(OBJECT(bioc));
> @@ -705,11 +712,18 @@ out:
>      if (fb) {
>          qemu_fclose(fb);
>      }
> +    /*
> +     * The failover BH runs with the global lock held and will join the
> +     * COLO incoming thread, so we must not take the lock again here,
> +     * or we would deadlock.
> +     */
> +    colo_release_ram_cache();
>  
>      /* Hope this not to be too long to loop here */
>      qemu_sem_wait(&mis->colo_incoming_sem);
>      qemu_sem_destroy(&mis->colo_incoming_sem);
>      /* Must be called after failover BH is completed */
> +
>      if (mis->to_src_file) {
>          qemu_fclose(mis->to_src_file);
>      }
> diff --git a/migration/ram.c b/migration/ram.c
> index f289fcd..b588990 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -219,6 +219,7 @@ static RAMBlock *last_sent_block;
>  static ram_addr_t last_offset;
>  static QemuMutex migration_bitmap_mutex;
>  static uint64_t migration_dirty_pages;
> +static bool ram_cache_enable;
>  static uint32_t last_version;
>  static bool ram_bulk_stage;
>  
> @@ -2227,6 +2228,20 @@ static inline void *host_from_ram_block_offset(RAMBlock *block,
>      return block->host + offset;
>  }
>  
> +static inline void *colo_cache_from_block_offset(RAMBlock *block,
> +                                                 ram_addr_t offset)
> +{
> +    if (!offset_in_ramblock(block, offset)) {
> +        return NULL;
> +    }
> +    if (!block->colo_cache) {
> +        error_report("%s: colo_cache is NULL in block: %s",
> +                     __func__, block->idstr);
> +        return NULL;
> +    }
> +    return block->colo_cache + offset;
> +}
> +
>  /*
>   * If a page (or a whole RDMA chunk) has been
>   * determined to be zero, then zap it.
> @@ -2542,7 +2557,12 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
>                       RAM_SAVE_FLAG_COMPRESS_PAGE | RAM_SAVE_FLAG_XBZRLE)) {
>              RAMBlock *block = ram_block_from_stream(f, flags);
>  
> -            host = host_from_ram_block_offset(block, addr);
> +            /* After entering COLO, incoming pages are loaded into colo_cache */
> +            if (ram_cache_enable) {
> +                host = colo_cache_from_block_offset(block, addr);
> +            } else {
> +                host = host_from_ram_block_offset(block, addr);
> +            }
>              if (!host) {
>                  error_report("Illegal RAM offset " RAM_ADDR_FMT, addr);
>                  ret = -EINVAL;
> @@ -2637,6 +2657,57 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
>      return ret;
>  }
>  
> +/*
> + * colo cache: used by the secondary VM to stage incoming checkpoint
> + * data; it holds a copy of the SVM's whole memory and is set up after
> + * the initial migration completes.
> + */
> +int colo_init_ram_cache(void)
> +{
> +    RAMBlock *block;
> +
> +    rcu_read_lock();
> +    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
> +        block->colo_cache = qemu_anon_ram_alloc(block->used_length, NULL);
> +        if (!block->colo_cache) {
> +            error_report("%s: Can't alloc memory for COLO cache of block %s, "
> +                         "size 0x" RAM_ADDR_FMT, __func__, block->idstr,
> +                         block->used_length);
> +            goto out_locked;
> +        }
> +        memcpy(block->colo_cache, block->host, block->used_length);
> +    }
> +    rcu_read_unlock();
> +    ram_cache_enable = true;
> +    return 0;
> +
> +out_locked:
> +    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
> +        if (block->colo_cache) {
> +            qemu_anon_ram_free(block->colo_cache, block->used_length);
> +            block->colo_cache = NULL;
> +        }
> +    }
> +
> +    rcu_read_unlock();
> +    return -errno;
> +}
> +
> +void colo_release_ram_cache(void)
> +{
> +    RAMBlock *block;
> +
> +    ram_cache_enable = false;
> +
> +    rcu_read_lock();
> +    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
> +        if (block->colo_cache) {
> +            qemu_anon_ram_free(block->colo_cache, block->used_length);
> +            block->colo_cache = NULL;
> +        }
> +    }
> +    rcu_read_unlock();
> +}
> +
>  static SaveVMHandlers savevm_ram_handlers = {
>      .save_live_setup = ram_save_setup,
>      .save_live_iterate = ram_save_iterate,
> -- 
> 1.8.3.1
> 
> 
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

Thread overview: 42+ messages
2017-02-22  3:42 [Qemu-devel] [PATCH 00/15] COLO: integrate colo frame with block replication and net compare zhanghailiang
2017-02-22  3:42 ` [Qemu-devel] [PATCH 01/15] net/colo: Add notifier/callback related helpers for filter zhanghailiang
2017-04-07 15:46   ` Dr. David Alan Gilbert
2017-04-10  7:26     ` Hailiang Zhang
2017-02-22  3:42 ` [Qemu-devel] [PATCH 02/15] colo-compare: implement the process of checkpoint zhanghailiang
2017-02-22  9:31   ` Zhang Chen
2017-02-23  1:02     ` Hailiang Zhang
2017-02-23  5:49       ` Zhang Chen
2017-04-14  5:57     ` Jason Wang
2017-04-14  6:22       ` Hailiang Zhang
2017-04-14  6:38         ` Jason Wang
2017-04-17 11:04           ` Hailiang Zhang
2017-04-18  1:32             ` Zhang Chen
2017-04-18  3:55             ` Jason Wang
2017-04-18  6:58               ` Hailiang Zhang
2017-04-20  5:15                 ` Jason Wang
2017-04-21  8:10                   ` Hailiang Zhang
2017-02-22  3:42 ` [Qemu-devel] [PATCH 03/15] colo-compare: use notifier to notify packets comparing result zhanghailiang
2017-02-22  3:42 ` [Qemu-devel] [PATCH 04/15] COLO: integrate colo compare with colo frame zhanghailiang
2017-04-07 15:59   ` Dr. David Alan Gilbert
2017-02-22  3:42 ` [Qemu-devel] [PATCH 05/15] COLO: Handle shutdown command for VM in COLO state zhanghailiang
2017-02-22 15:35   ` Eric Blake
2017-02-23  1:15     ` Hailiang Zhang
2017-02-22  3:42 ` [Qemu-devel] [PATCH 06/15] COLO: Add block replication into colo process zhanghailiang
2017-02-22  3:42 ` [Qemu-devel] [PATCH 07/15] COLO: Load PVM's dirty pages into SVM's RAM cache temporarily zhanghailiang
2017-04-07 17:06   ` Dr. David Alan Gilbert [this message]
2017-04-10  7:31     ` Hailiang Zhang
2017-02-22  3:42 ` [Qemu-devel] [PATCH 08/15] ram/COLO: Record the dirty pages that SVM received zhanghailiang
2017-02-23 18:44   ` Dr. David Alan Gilbert
2017-02-22  3:42 ` [Qemu-devel] [PATCH 09/15] COLO: Flush PVM's cached RAM into SVM's memory zhanghailiang
2017-02-22  3:42 ` [Qemu-devel] [PATCH 10/15] qmp event: Add COLO_EXIT event to notify users while exited from COLO zhanghailiang
2017-02-22  3:42 ` [Qemu-devel] [PATCH 11/15] savevm: split save/find loadvm_handlers entry into two helper functions zhanghailiang
2017-02-22  3:42 ` [Qemu-devel] [PATCH 12/15] savevm: split the process of different stages for loadvm/savevm zhanghailiang
2017-04-07 17:18   ` Dr. David Alan Gilbert
2017-04-10  8:26     ` Hailiang Zhang
2017-04-20  9:09       ` Dr. David Alan Gilbert
2017-04-21  6:50         ` Hailiang Zhang
2017-02-22  3:42 ` [Qemu-devel] [PATCH 13/15] COLO: Separate the process of saving/loading ram and device state zhanghailiang
2017-02-22  3:42 ` [Qemu-devel] [PATCH 14/15] COLO: Split qemu_savevm_state_begin out of checkpoint process zhanghailiang
2017-02-22  3:42 ` [Qemu-devel] [PATCH 15/15] COLO: flush host dirty ram from cache zhanghailiang
2017-04-07 17:39   ` Dr. David Alan Gilbert
2017-04-10  7:13     ` Hailiang Zhang
