From: Jay Zhou <jianjay.zhou@huawei.com>
To: qemu-devel@nongnu.org
Cc: pbonzini@redhat.com, dgilbert@redhat.com,
arei.gonglei@huawei.com, zhang.zhanghailiang@huawei.com,
wangxinxin.wang@huawei.com, weidong.huang@huawei.com,
xiaoguangrong@tencent.com, jdenemar@redhat.com,
huangzhichao@huawei.com, Jay Zhou <jianjay.zhou@huawei.com>
Subject: [Qemu-devel] [PATCH v3] migration: optimize the downtime
Date: Fri, 28 Jul 2017 18:28:53 +0800
Message-ID: <1501237733-2736-1-git-send-email-jianjay.zhou@huawei.com>
qemu_savevm_state_cleanup takes about 300ms in my RAM migration tests
with an 8U24G VM (about 20G actually occupied). The main cost comes
from the KVM_SET_USER_MEMORY_REGION ioctl when mem.memory_size = 0 in
kvm_set_user_memory_region. On the kernel module side, the main cost
is kvm_zap_obsolete_pages, which traverses the active_mmu_pages list
to zap the unsync sptes.
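For reference, the userspace side of that teardown is a plain
KVM_SET_USER_MEMORY_REGION ioctl with memory_size = 0 on the VM fd,
which tells KVM to drop the slot (and is where the kernel ends up in
kvm_zap_obsolete_pages). A minimal standalone sketch of that call;
vm_fd and slot are placeholders for illustration, not values taken
from QEMU:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Delete a memslot: KVM treats memory_size == 0 as "remove this slot".
 * In QEMU this happens inside kvm_set_user_memory_region() when the
 * dirty-log memslots are torn down at the end of migration.
 */
static int kvm_delete_memslot(int vm_fd, unsigned int slot)
{
    struct kvm_userspace_memory_region mem;

    memset(&mem, 0, sizeof(mem));
    mem.slot = slot;        /* slot to drop */
    mem.memory_size = 0;    /* zero size => delete the slot */
    return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &mem);
}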
This can be optimized by delaying memory_global_dirty_log_stop until
the next vm_start.
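The patch below does this by hooking QEMU's VM change state notifier:
while the guest is stopped, memory_global_dirty_log_stop only registers
a handler, and the expensive teardown runs once the guest resumes. A
rough standalone sketch of the same pattern (hypothetical names; the
real wiring into memory_global_dirty_log_stop is in the diff):

#include "qemu/osdep.h"
#include "sysemu/sysemu.h"

static VMChangeStateEntry *deferred_work;   /* illustration only */

static void run_deferred_stop(void *opaque, int running, RunState state)
{
    if (running) {
        /* the costly teardown would run here, after the guest resumed */
        qemu_del_vm_change_state_handler(deferred_work);
        deferred_work = NULL;
    }
}

/* Called while the VM is stopped: postpone the work instead of paying
 * the ~300ms during downtime. */
static void defer_until_next_vm_start(void)
{
    if (!deferred_work) {
        deferred_work = qemu_add_vm_change_state_handler(run_deferred_stop,
                                                          NULL);
    }
}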
Changes v2->v3:
- NULL the VMChangeStateHandler entry after it is deleted, and protect
against nested invocations of memory_global_dirty_log_start/stop [Paolo]
Changes v1->v2:
- create a VMChangeStateHandler in memory.c to reduce the coupling [Paolo]
Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
---
memory.c | 36 +++++++++++++++++++++++++++++++++++-
1 file changed, 35 insertions(+), 1 deletion(-)
diff --git a/memory.c b/memory.c
index a7bc70a..c0adc35 100644
--- a/memory.c
+++ b/memory.c
@@ -2357,8 +2357,15 @@ void memory_global_dirty_log_sync(void)
     }
 }
 
+static VMChangeStateEntry *vmstate_change;
+
 void memory_global_dirty_log_start(void)
 {
+    if (vmstate_change) {
+        qemu_del_vm_change_state_handler(vmstate_change);
+        vmstate_change = NULL;
+    }
+
     global_dirty_log = true;
 
     MEMORY_LISTENER_CALL_GLOBAL(log_global_start, Forward);
@@ -2369,7 +2376,7 @@ void memory_global_dirty_log_start(void)
     memory_region_transaction_commit();
 }
 
-void memory_global_dirty_log_stop(void)
+static void memory_global_dirty_log_do_stop(void)
 {
     global_dirty_log = false;
 
@@ -2381,6 +2388,33 @@ void memory_global_dirty_log_stop(void)
     MEMORY_LISTENER_CALL_GLOBAL(log_global_stop, Reverse);
 }
 
+static void memory_vm_change_state_handler(void *opaque, int running,
+                                           RunState state)
+{
+    if (running) {
+        memory_global_dirty_log_do_stop();
+
+        if (vmstate_change) {
+            qemu_del_vm_change_state_handler(vmstate_change);
+            vmstate_change = NULL;
+        }
+    }
+}
+
+void memory_global_dirty_log_stop(void)
+{
+    if (!runstate_is_running()) {
+        if (vmstate_change) {
+            return;
+        }
+        vmstate_change = qemu_add_vm_change_state_handler(
+                                memory_vm_change_state_handler, NULL);
+        return;
+    }
+
+    memory_global_dirty_log_do_stop();
+}
+
 static void listener_add_address_space(MemoryListener *listener,
                                        AddressSpace *as)
 {
--
1.8.3.1