From: "Radim Krčmář" <rkrcmar@redhat.com>
To: "Cao, Lei" <Lei.Cao@stratus.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>
Subject: Re: [PATCH 6/6] KVM: Dirty memory tracking for performant checkpointing and improved live migration
Date: Mon, 2 May 2016 18:23:44 +0200 [thread overview]
Message-ID: <20160502162343.GC30059@potion> (raw)
In-Reply-To: <BL2PR08MB48134FFE8FDC29565BB81BBF0660@BL2PR08MB481.namprd08.prod.outlook.com>
2016-04-29 18:47+0000, Cao, Lei:
> On 4/28/2016 2:08 PM, Radim Krčmář wrote:
>> 2016-04-26 19:26+0000, Cao, Lei:
>> * Is there a reason to call KVM_ENABLE_MT often?
>
> KVM_ENABLE_MT can be called multiple times during a protected
> VM's lifecycle in a checkpointing system. A protected VM has two
> instances, primary and secondary. Memory tracking is only enabled on
> the primary. When we do a polite failover, memory tracking is
> disabled on the old primary and enabled on the new primary. Memory
> tracking is also disabled when the secondary goes away, in which case
> the checkpoint cycle stops and there is no need for memory tracking.
> When the secondary comes back, memory tracking is re-enabled, the two
> instances sync up, and the checkpoint cycle starts again.
Makes sense.
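(For my own notes, the enable/disable rule you describe boils down to a
simple predicate -- modeled below in userspace with illustrative names,
not anything from the patch:)

```c
/* Hypothetical model of the KVM_ENABLE_MT lifecycle described above:
 * tracking is worth the write-fault cost only while this instance is the
 * primary AND a secondary exists to receive checkpoints. */
#include <stdbool.h>

struct mt_state {
    bool is_primary;        /* this instance currently runs the workload */
    bool secondary_alive;   /* a secondary exists to checkpoint against */
};

/* On a polite failover or a secondary loss/return, userspace re-evaluates
 * this and issues the corresponding enable/disable call. */
static bool mt_should_track(const struct mt_state *s)
{
    return s->is_primary && s->secondary_alive;
}
```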
>> * How significant is the benefit of MT_FETCH_WAIT?
>
> This allows the user thread that harvests dirty pages to park instead
> of busy-waiting when there are no, or very few, dirty pages.
True, mandatory polling could be ugly.
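(The park-vs-poll distinction, sketched in userspace with a mutex/condvar
standing in for the kernel wait queue; all names are illustrative:)

```c
#include <pthread.h>
#include <stddef.h>

struct dirty_list {
    pthread_mutex_t lock;
    pthread_cond_t nonempty;
    size_t count;               /* dirty pages queued for harvesting */
};

/* Producer side: the write-fault path queues a page and wakes any waiter. */
static void mark_dirty(struct dirty_list *dl)
{
    pthread_mutex_lock(&dl->lock);
    dl->count++;
    pthread_cond_signal(&dl->nonempty);
    pthread_mutex_unlock(&dl->lock);
}

/* Consumer side: with `wait` set (the MT_FETCH_WAIT analogue) the harvest
 * thread parks until pages arrive; without it, an empty list returns 0
 * immediately and the caller is forced to busy-poll. */
static size_t fetch_dirty(struct dirty_list *dl, int wait)
{
    size_t n;
    pthread_mutex_lock(&dl->lock);
    while (wait && dl->count == 0)
        pthread_cond_wait(&dl->nonempty, &dl->lock);
    n = dl->count;
    dl->count = 0;
    pthread_mutex_unlock(&dl->lock);
    return n;
}

/* Tiny single-threaded exercise: queue three pages, then harvest them. */
static size_t demo_fetch(void)
{
    struct dirty_list dl = {
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0
    };
    mark_dirty(&dl);
    mark_dirty(&dl);
    mark_dirty(&dl);
    return fetch_dirty(&dl, 1);     /* won't block: list is non-empty */
}
```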
>> * When would you disable MT_FETCH_REARM?
>
> In a checkpointing system, dirty pages are harvested after the VM is
> paused. Userspace can choose to rearm the write traps all at once, after
> all the dirty pages have been fetched, using KVM_REARM_DIRTY_PAGES, in
> which case the traps don't need to be armed during each fetch.
Ah, it makes a difference when you don't plan to run the VM again.
I guess all three of them are worth it.
(Might change my mind when I gain better understanding.)
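(To convince myself the two re-arm strategies are equivalent when the VM
is paused -- both end with nothing dirty and every trap armed -- here is
a toy model; `trapped`/`fetch_all`/`rearm_all` are invented names, not
the patch's interface:)

```c
#include <stdbool.h>
#include <stddef.h>

#define NPAGES 8

struct tracker {
    bool dirty[NPAGES];     /* page written since the last checkpoint */
    bool trapped[NPAGES];   /* write trap currently armed on the page */
};

/* Harvest every dirty page, optionally re-arming its trap on the spot
 * (the MT_FETCH_REARM analogue). Returns the number of pages harvested. */
static size_t fetch_all(struct tracker *t, bool rearm)
{
    size_t i, n = 0;
    for (i = 0; i < NPAGES; i++) {
        if (t->dirty[i]) {
            t->dirty[i] = false;
            if (rearm)
                t->trapped[i] = true;
            n++;
        }
    }
    return n;
}

/* Batched variant (the KVM_REARM_DIRTY_PAGES analogue): one pass arming
 * every trap after the whole list has been drained. */
static void rearm_all(struct tracker *t)
{
    for (size_t i = 0; i < NPAGES; i++)
        t->trapped[i] = true;
}

/* Fetch without re-arming, then one bulk re-arm: same end state as
 * re-arming during each fetch, but without per-fetch trap work. */
static bool demo_batched(void)
{
    struct tracker t = { {false}, {false} };
    t.dirty[1] = t.dirty[5] = true;
    size_t n = fetch_all(&t, false);
    rearm_all(&t);
    return n == 2 && t.trapped[1] && t.trapped[5]
        && !t.dirty[1] && !t.dirty[5];
}
```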
>> * What drawbacks had an interface without explicit checkpointing cycles?
>
> The checkpointing cycle has to be implemented in userspace in order to
> use this interface.
But isn't the explicit cycle necessary only in userspace?
The dirty list could be implemented as a circular buffer, so KVM
wouldn't need an explicit notification about the new cycle -- userspace
would just drain all dirty pages and unpause vcpus.
(Quiesced can be stateless one-time kick of waiters instead.)
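(Roughly what I have in mind -- free-running producer/consumer indices
into a power-of-two ring, so consecutive cycles need no reset call;
structure layout and names are only a sketch, not a proposal:)

```c
#include <stddef.h>
#include <stdint.h>

#define RING_SIZE 16                /* must be a power of two */

struct dirty_ring {
    uint64_t slots[RING_SIZE];      /* dirty gfns, oldest first */
    uint32_t prod;                  /* advanced by KVM on write fault */
    uint32_t cons;                  /* advanced by userspace on drain */
};

/* Producer: record a dirty gfn. Returns 0 when the ring is full; a real
 * implementation would exit to userspace here so the ring gets drained. */
static int ring_push(struct dirty_ring *r, uint64_t gfn)
{
    if (r->prod - r->cons == RING_SIZE)
        return 0;
    r->slots[r->prod++ & (RING_SIZE - 1)] = gfn;
    return 1;
}

/* Consumer: drain everything queued so far. No cycle-reset call is
 * needed, because both indices free-run and wrap via the mask. */
static size_t ring_drain(struct dirty_ring *r, uint64_t *out, size_t max)
{
    size_t n = 0;
    while (r->cons != r->prod && n < max)
        out[n++] = r->slots[r->cons++ & (RING_SIZE - 1)];
    return n;
}

/* Two back-to-back "cycles" with no notification in between: the total
 * drained across both is simply everything that was pushed. */
static size_t demo_cycles(void)
{
    struct dirty_ring r = { {0}, 0, 0 };
    uint64_t out[RING_SIZE];
    ring_push(&r, 0x10);
    ring_push(&r, 0x20);
    size_t total = ring_drain(&r, out, RING_SIZE);  /* first cycle */
    ring_push(&r, 0x30);
    total += ring_drain(&r, out, RING_SIZE);        /* second cycle */
    return total;
}
```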
Thanks.
Thread overview: 6+ messages
[not found] <201604261856.u3QIuJMs025122@dev1.sn.stratus.com>
2016-04-26 19:26 ` [PATCH 6/6] KVM: Dirty memory tracking for performant checkpointing and improved live migration Cao, Lei
2016-04-28 18:08 ` Radim Krčmář
2016-04-29 18:47 ` Cao, Lei
2016-05-02 16:23 ` Radim Krčmář [this message]
2016-05-03 13:34 ` Cao, Lei
2016-05-03 7:10 ` Huang, Kai