From: zhanghailiang <zhang.zhanghailiang@huawei.com>
To: qemu-devel@nongnu.org
Cc: xiecl.fnst@cn.fujitsu.com, lizhijian@cn.fujitsu.com,
quintela@redhat.com, armbru@redhat.com, yunhong.jiang@intel.com,
eddie.dong@intel.com, peter.huangpeng@huawei.com,
dgilbert@redhat.com,
zhanghailiang <zhang.zhanghailiang@huawei.com>,
arei.gonglei@huawei.com, stefanha@redhat.com,
amit.shah@redhat.com, zhangchen.fnst@cn.fujitsu.com,
hongyang.yang@easystack.cn
Subject: [Qemu-devel] [PATCH COLO-Frame v15 14/38] COLO: Flush PVM's cached RAM into SVM's memory
Date: Mon, 22 Feb 2016 10:40:08 +0800
Message-ID: <1456108832-24212-15-git-send-email-zhang.zhanghailiang@huawei.com>
In-Reply-To: <1456108832-24212-1-git-send-email-zhang.zhanghailiang@huawei.com>
While the VM is running, the PVM may dirty some pages; we transfer those
dirty pages to the SVM and store them in the SVM's RAM cache at the next
checkpoint. So, after each checkpoint, the content of the SVM's RAM cache
is always the same as the PVM's memory.
Instead of flushing the entire RAM cache into the SVM's memory, we do it
in a more efficient way:
Only flush the pages that the PVM has dirtied since the last checkpoint.
In this way, we keep the SVM's memory identical to the PVM's.
Besides, we must make sure the RAM cache is flushed before the device state is loaded.
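As a standalone illustration only (not part of this patch, and using
simplified names rather than QEMU's internal API), the per-page flush
pattern looks roughly like this:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Copy only the pages marked dirty in a bitmap from the cache into
     * live RAM, clearing each bit after its page has been flushed. */
    static void flush_dirty_pages(uint8_t *ram, const uint8_t *cache,
                                  unsigned long *dirty_bitmap,
                                  size_t nr_pages, size_t page_size)
    {
        for (size_t i = 0; i < nr_pages; i++) {
            size_t word = i / (8 * sizeof(unsigned long));
            unsigned long mask = 1UL << (i % (8 * sizeof(unsigned long)));
            if (dirty_bitmap[word] & mask) {
                memcpy(ram + i * page_size, cache + i * page_size, page_size);
                dirty_bitmap[word] &= ~mask;
            }
        }
    }

The real colo_flush_ram_cache() below walks every RAMBlock with
migration_bitmap_find_dirty() instead of a plain loop, but the idea is
the same: copy only the dirtied pages and leave the rest of the SVM's
RAM untouched.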
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
v12:
- Add a trace point at the end of colo_flush_ram_cache() (Dave's suggestion)
- Add Reviewed-by tag
v11:
- Move the place of 'need_flush' (Dave's suggestion)
- Remove unused 'DPRINTF("Flush ram_cache\n")'
v10:
- Trace the number of dirty pages that have been received.
---
include/migration/migration.h | 1 +
migration/colo.c | 2 --
migration/ram.c | 38 ++++++++++++++++++++++++++++++++++++++
trace-events | 2 ++
4 files changed, 41 insertions(+), 2 deletions(-)
diff --git a/include/migration/migration.h b/include/migration/migration.h
index 6907986..14b9f3d 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -336,4 +336,5 @@ PostcopyState postcopy_state_set(PostcopyState new_state);
/* ram cache */
int colo_init_ram_cache(void);
void colo_release_ram_cache(void);
+void colo_flush_ram_cache(void);
#endif
diff --git a/migration/colo.c b/migration/colo.c
index b9f60c7..473fb14 100644
--- a/migration/colo.c
+++ b/migration/colo.c
@@ -417,8 +417,6 @@ void *colo_process_incoming_thread(void *opaque)
}
qemu_mutex_unlock_iothread();
- /* TODO: flush vm state */
-
colo_put_cmd(mis->to_src_file, COLO_MESSAGE_VMSTATE_LOADED,
&local_err);
if (local_err) {
diff --git a/migration/ram.c b/migration/ram.c
index 7373df3..891f3b2 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -2465,6 +2465,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
* be atomic
*/
bool postcopy_running = postcopy_state_get() >= POSTCOPY_INCOMING_LISTENING;
+ bool need_flush = false;
seq_iter++;
@@ -2499,6 +2500,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
/* After going into COLO, we should load the Page into colo_cache */
if (ram_cache_enable) {
host = colo_cache_from_block_offset(block, addr);
+ need_flush = true;
} else {
host = host_from_ram_block_offset(block, addr);
}
@@ -2591,6 +2593,10 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
}
rcu_read_unlock();
+
+ if (!ret && ram_cache_enable && need_flush) {
+ colo_flush_ram_cache();
+ }
DPRINTF("Completed load of VM with exit code %d seq iteration "
"%" PRIu64 "\n", ret, seq_iter);
return ret;
@@ -2663,6 +2669,38 @@ void colo_release_ram_cache(void)
rcu_read_unlock();
}
+/*
+ * Flush the contents of the RAM cache into the SVM's memory.
+ * Only flush the pages that have been dirtied by the PVM, the SVM, or both.
+ */
+void colo_flush_ram_cache(void)
+{
+ RAMBlock *block = NULL;
+ void *dst_host;
+ void *src_host;
+ ram_addr_t offset = 0;
+
+ trace_colo_flush_ram_cache_begin(migration_dirty_pages);
+ rcu_read_lock();
+ block = QLIST_FIRST_RCU(&ram_list.blocks);
+ while (block) {
+ ram_addr_t ram_addr_abs;
+ offset = migration_bitmap_find_dirty(block, offset, &ram_addr_abs);
+ migration_bitmap_clear_dirty(ram_addr_abs);
+ if (offset >= block->used_length) {
+ offset = 0;
+ block = QLIST_NEXT_RCU(block, next);
+ } else {
+ dst_host = block->host + offset;
+ src_host = block->colo_cache + offset;
+ memcpy(dst_host, src_host, TARGET_PAGE_SIZE);
+ }
+ }
+ rcu_read_unlock();
+ trace_colo_flush_ram_cache_end();
+ assert(migration_dirty_pages == 0);
+}
+
static SaveVMHandlers savevm_ram_handlers = {
.save_live_setup = ram_save_setup,
.save_live_iterate = ram_save_iterate,
diff --git a/trace-events b/trace-events
index 97807cd..ee4a2fb 100644
--- a/trace-events
+++ b/trace-events
@@ -1290,6 +1290,8 @@ migration_throttle(void) ""
ram_load_postcopy_loop(uint64_t addr, int flags) "@%" PRIx64 " %x"
ram_postcopy_send_discard_bitmap(void) ""
ram_save_queue_pages(const char *rbname, size_t start, size_t len) "%s: start: %zx len: %zx"
+colo_flush_ram_cache_begin(uint64_t dirty_pages) "dirty_pages %" PRIu64
+colo_flush_ram_cache_end(void) ""
# hw/display/qxl.c
disable qxl_interface_set_mm_time(int qid, uint32_t mm_time) "%d %d"
--
1.8.3.1
Thread overview: 52+ messages
2016-02-22 2:39 [Qemu-devel] [PATCH COLO-Frame v15 00/38] COarse-grain LOck-stepping(COLO) Virtual Machines for Non-stop Service (FT) zhanghailiang
2016-02-22 2:39 ` [Qemu-devel] [PATCH COLO-Frame v15 01/38] configure: Add parameter for configure to enable/disable COLO support zhanghailiang
2016-02-22 2:39 ` [Qemu-devel] [PATCH COLO-Frame v15 02/38] migration: Introduce capability 'x-colo' to migration zhanghailiang
2016-02-22 2:39 ` [Qemu-devel] [PATCH COLO-Frame v15 03/38] COLO: migrate colo related info to secondary node zhanghailiang
2016-02-22 2:39 ` [Qemu-devel] [PATCH COLO-Frame v15 04/38] migration: Integrate COLO checkpoint process into migration zhanghailiang
2016-02-22 2:39 ` [Qemu-devel] [PATCH COLO-Frame v15 05/38] migration: Integrate COLO checkpoint process into loadvm zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 06/38] COLO/migration: Create a new communication path from destination to source zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 07/38] COLO: Implement colo checkpoint protocol zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 08/38] COLO: Add a new RunState RUN_STATE_COLO zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 09/38] QEMUSizedBuffer: Introduce two help functions for qsb zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 10/38] COLO: Save PVM state to secondary side when do checkpoint zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 11/38] COLO: Load PVM's dirty pages into SVM's RAM cache temporarily zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 12/38] ram/COLO: Record the dirty pages that SVM received zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 13/38] COLO: Load VMState into qsb before restore it zhanghailiang
2016-02-22 2:40 ` zhanghailiang [this message]
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 15/38] COLO: Add checkpoint-delay parameter for migrate-set-parameters zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 16/38] COLO: synchronize PVM's state to SVM periodically zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 17/38] COLO failover: Introduce a new command to trigger a failover zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 18/38] COLO failover: Introduce state to record failover process zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 19/38] COLO: Implement failover work for Primary VM zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 20/38] COLO: Implement failover work for Secondary VM zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 21/38] qmp event: Add COLO_EXIT event to notify users while exited from COLO zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 22/38] COLO failover: Shutdown related socket fd when do failover zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 23/38] COLO failover: Don't do failover during loading VM's state zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 24/38] COLO: Process shutdown command for VM in COLO state zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 25/38] COLO: Update the global runstate after going into colo state zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 26/38] savevm: Introduce two helper functions for save/find loadvm_handlers entry zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 27/38] migration/savevm: Add new helpers to process the different stages of loadvm zhanghailiang
2016-02-26 12:52 ` Dr. David Alan Gilbert
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 28/38] migration/savevm: Export two helper functions for savevm process zhanghailiang
2016-02-26 13:00 ` Dr. David Alan Gilbert
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 29/38] COLO: Separate the process of saving/loading ram and device state zhanghailiang
2016-02-26 13:16 ` Dr. David Alan Gilbert
2016-02-27 10:03 ` Hailiang Zhang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 30/38] COLO: Split qemu_savevm_state_begin out of checkpoint process zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 31/38] net/filter: Add a 'status' property for filter object zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 32/38] filter-buffer: Accept zero interval zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 33/38] net: Add notifier/callback for netdev init zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 34/38] COLO/filter: add each netdev a buffer filter zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 35/38] COLO: manage the status of buffer filters for PVM zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 36/38] filter-buffer: make filter_buffer_flush() public zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 37/38] COLO: flush buffered packets in checkpoint process or exit COLO zhanghailiang
2016-02-22 2:40 ` [Qemu-devel] [PATCH COLO-Frame v15 38/38] COLO: Add block replication into colo process zhanghailiang
2016-02-25 19:52 ` [Qemu-devel] [PATCH COLO-Frame v15 00/38] COarse-grain LOck-stepping(COLO) Virtual Machines for Non-stop Service (FT) Dr. David Alan Gilbert
2016-02-26 16:36 ` Dr. David Alan Gilbert
2016-02-27 7:54 ` Hailiang Zhang
2016-02-29 9:47 ` Dr. David Alan Gilbert
2016-02-29 12:16 ` Hailiang Zhang
2016-02-29 13:04 ` Dr. David Alan Gilbert
2016-03-01 12:25 ` Dr. David Alan Gilbert
2016-03-02 13:01 ` Hailiang Zhang
2016-03-03 20:13 ` Dr. David Alan Gilbert