From: Radim Krčmář
Subject: Re: [PATCH 6/6] KVM: Dirty memory tracking for performant checkpointing and improved live migration
Date: Mon, 2 May 2016 18:23:44 +0200
Message-ID: <20160502162343.GC30059@potion>
References: <201604261856.u3QIuJMs025122@dev1.sn.stratus.com> <20160428180847.GB15747@potion>
To: "Cao, Lei"
Cc: Paolo Bonzini, "kvm@vger.kernel.org"

2016-04-29 18:47+0000, Cao, Lei:
> On 4/28/2016 2:08 PM, Radim Krčmář wrote:
>> 2016-04-26 19:26+0000, Cao, Lei:
>> * Is there a reason to call KVM_ENABLE_MT often?
>
> KVM_ENABLE_MT can be called multiple times during a protected
> VM's lifecycle in a checkpointing system.  A protected VM has two
> instances, primary and secondary.  Memory tracking is only enabled on
> the primary.  When we do a polite failover, memory tracking is
> disabled on the old primary and enabled on the new primary.  Memory
> tracking is also disabled when the secondary goes away, in which case
> the checkpoint cycle stops and there is no need for memory tracking.
> When the secondary comes back, memory tracking is re-enabled, the two
> instances sync up, and the checkpoint cycle starts again.

Makes sense.

>> * How significant is the benefit of MT_FETCH_WAIT?
>
> It allows the user thread that harvests dirty pages to park instead
> of busy-waiting when there are no, or very few, dirty pages.

True, mandatory polling could be ugly.

>> * When would you disable MT_FETCH_REARM?
>
> In a checkpointing system, dirty pages are harvested after the VM is
> paused.  Userspace can choose to rearm the write traps all at once,
> after all the dirty pages have been fetched, using
> KVM_REARM_DIRTY_PAGES, in which case the traps don't need to be
> rearmed during each fetch.

Ah, it makes a difference when you don't plan to run the VM again.

I guess all three of them are worth it.
(Might change my mind when I gain better understanding.)

>> * What drawbacks would an interface without explicit checkpointing
>>   cycles have?
>
> A checkpointing cycle has to be implemented in userspace to use this
> interface.

But isn't the explicit cycle necessary only in userspace?

The dirty list could be implemented as a circular buffer, so KVM
wouldn't need an explicit notification about the new cycle -- userspace
would just drain all dirty pages and unpause the vcpus.
(Quiesce can be a stateless one-time kick of waiters instead.)

Thanks.