From mboxrd@z Thu Jan  1 00:00:00 1970
From: Jay Zhou <jianjay.zhou@huawei.com>
Date: Fri, 28 Jul 2017 18:28:53 +0800
Message-ID: <1501237733-2736-1-git-send-email-jianjay.zhou@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [Qemu-devel] [PATCH v3] migration: optimize the downtime
To: qemu-devel@nongnu.org
Cc: pbonzini@redhat.com, dgilbert@redhat.com, arei.gonglei@huawei.com,
	zhang.zhanghailiang@huawei.com, wangxinxin.wang@huawei.com,
	weidong.huang@huawei.com, xiaoguangrong@tencent.com,
	jdenemar@redhat.com, huangzhichao@huawei.com,
	Jay Zhou <jianjay.zhou@huawei.com>

qemu_savevm_state_cleanup() takes about 300ms in my RAM migration tests
with an 8U24G VM (20G of it actually in use). The main cost comes from
the KVM_SET_USER_MEMORY_REGION ioctl issued with mem.memory_size = 0 in
kvm_set_user_memory_region(). In the kernel module, the main cost is
kvm_zap_obsolete_pages(), which traverses the active_mmu_pages list to
zap the unsync sptes.

This can be optimized by delaying memory_global_dirty_log_stop() until
the next vm_start, so that the expensive teardown happens outside the
downtime window.

Changes v2->v3:
 - set the VMChangeStateEntry pointer to NULL after it is deleted, and
   protect against nested invocations of
   memory_global_dirty_log_start/stop [Paolo]

Changes v1->v2:
 - create a VMChangeStateHandler in memory.c to reduce the coupling [Paolo]

Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
---
 memory.c | 36 +++++++++++++++++++++++++++++++++++-
 1 file changed, 35 insertions(+), 1 deletion(-)

diff --git a/memory.c b/memory.c
index a7bc70a..c0adc35 100644
--- a/memory.c
+++ b/memory.c
@@ -2357,8 +2357,15 @@ void memory_global_dirty_log_sync(void)
     }
 }
 
+static VMChangeStateEntry *vmstate_change;
+
 void memory_global_dirty_log_start(void)
 {
+    if (vmstate_change) {
+        qemu_del_vm_change_state_handler(vmstate_change);
+        vmstate_change = NULL;
+    }
+
     global_dirty_log = true;
 
     MEMORY_LISTENER_CALL_GLOBAL(log_global_start, Forward);
@@ -2369,7 +2376,7 @@ void memory_global_dirty_log_start(void)
     memory_region_transaction_commit();
 }
 
-void memory_global_dirty_log_stop(void)
+static void memory_global_dirty_log_do_stop(void)
 {
     global_dirty_log = false;
 
@@ -2381,6 +2388,33 @@ void memory_global_dirty_log_stop(void)
     MEMORY_LISTENER_CALL_GLOBAL(log_global_stop, Reverse);
 }
 
+static void memory_vm_change_state_handler(void *opaque, int running,
+                                           RunState state)
+{
+    if (running) {
+        memory_global_dirty_log_do_stop();
+
+        if (vmstate_change) {
+            qemu_del_vm_change_state_handler(vmstate_change);
+            vmstate_change = NULL;
+        }
+    }
+}
+
+void memory_global_dirty_log_stop(void)
+{
+    if (!runstate_is_running()) {
+        if (vmstate_change) {
+            return;
+        }
+        vmstate_change = qemu_add_vm_change_state_handler(
+                             memory_vm_change_state_handler, NULL);
+        return;
+    }
+
+    memory_global_dirty_log_do_stop();
+}
+
 static void listener_add_address_space(MemoryListener *listener,
                                        AddressSpace *as)
 {
-- 
1.8.3.1
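
For readers who want to see the deferral idea in isolation: the sketch
below is a self-contained toy model of the pattern the patch implements,
not QEMU code. All names in it (pending_handler, dirty_log_*, vm_start)
are hypothetical stand-ins for QEMU's vm-change-state API and the memory
listeners' dirty-bitmap teardown.

/* Toy model (not QEMU code): defer an expensive "stop" action while the
 * VM is paused and run it on the next start instead. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef void (*ChangeStateHandler)(void *opaque, bool running);

static ChangeStateHandler pending_handler;  /* single-slot handler list */
static bool vm_running;
static bool dirty_log_enabled;

static void dirty_log_do_stop(void)
{
    /* In QEMU this is the costly part: each memory slot is updated via
     * KVM_SET_USER_MEMORY_REGION and the kernel zaps obsolete pages. */
    dirty_log_enabled = false;
    printf("dirty log stopped (expensive work done here)\n");
}

static void deferred_stop_handler(void *opaque, bool running)
{
    (void)opaque;
    if (running) {
        dirty_log_do_stop();
        pending_handler = NULL;  /* one-shot: unregister after firing */
    }
}

void dirty_log_stop(void)
{
    if (!vm_running) {
        /* VM paused (e.g. the migration downtime window): defer.  The
         * NULL check keeps nested stop calls from double-registering. */
        if (pending_handler == NULL) {
            pending_handler = deferred_stop_handler;
        }
        return;
    }
    dirty_log_do_stop();
}

void dirty_log_start(void)
{
    /* Cancel any pending deferred stop so start/stop pairs nest safely. */
    pending_handler = NULL;
    dirty_log_enabled = true;
    printf("dirty log started (enabled=%d)\n", dirty_log_enabled);
}

void vm_start(void)
{
    vm_running = true;
    if (pending_handler) {
        pending_handler(NULL, true);  /* deferred work, outside downtime */
    }
}

int main(void)
{
    vm_running = true;
    dirty_log_start();  /* migration begins: dirty tracking on */
    vm_running = false; /* final pause: downtime starts */
    dirty_log_stop();   /* returns at once; costly teardown is deferred */
    vm_start();         /* e.g. source resumes after a cancelled
                           migration; the deferred stop fires here */
    return 0;
}

Note the asymmetry this buys: if migration succeeds, the source never
restarts and the deferred teardown is never paid at all; if it fails or
is cancelled, the cost moves to vm_start, outside the downtime window.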