qemu-devel.nongnu.org archive mirror
From: Wenchao Xia <xiawenc@linux.vnet.ibm.com>
To: Stefan Hajnoczi <stefanha@gmail.com>
Cc: Anthony Liguori <aliguori@us.ibm.com>, kvm <kvm@vger.kernel.org>,
	Marcelo Tosatti <mtosatti@redhat.com>,
	qemu-devel <qemu-devel@nongnu.org>,
	Chijianchun <chijianchun@huawei.com>,
	Paul Brook <paul@codesourcery.com>, Alex Bligh <alex@alex.org.uk>,
	fred.konrad@greensocs.com, Avi Kivity <avi@redhat.com>
Subject: Re: [Qemu-devel] Are there plans to achieve ram live Snapshot feature?
Date: Thu, 15 Aug 2013 16:03:47 +0800	[thread overview]
Message-ID: <520C8B63.2060304@linux.vnet.ibm.com> (raw)
In-Reply-To: <20130815074919.GA22521@stefanha-thinkpad.redhat.com>

On 2013-8-15 15:49, Stefan Hajnoczi wrote:
> On Thu, Aug 15, 2013 at 10:26:36AM +0800, Wenchao Xia wrote:
>> On 2013-8-14 15:53, Stefan Hajnoczi wrote:
>>> On Wed, Aug 14, 2013 at 3:54 AM, Wenchao Xia <xiawenc@linux.vnet.ibm.com> wrote:
>>>> On 2013-8-13 16:21, Stefan Hajnoczi wrote:
>>>>
>>>>> On Tue, Aug 13, 2013 at 4:53 AM, Wenchao Xia <xiawenc@linux.vnet.ibm.com>
>>>>> wrote:
>>>>>>
>>>>>> On 2013-8-12 19:33, Stefan Hajnoczi wrote:
>>>>>>
>>>>>>> On Mon, Aug 12, 2013 at 12:26 PM, Alex Bligh <alex@alex.org.uk> wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> --On 12 August 2013 11:59:03 +0200 Stefan Hajnoczi <stefanha@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> The idea that was discussed on qemu-devel@nongnu.org uses fork(2) to
>>>>>>>>> capture the state of guest RAM and then send it back to the parent
>>>>>>>>> process.  The guest is only paused for a brief instant during fork(2)
>>>>>>>>> and can continue to run afterwards.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> How would you capture the state of emulated hardware which might not
>>>>>>>> be in the guest RAM?
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Exactly the same way vmsave works today.  It calls the device's save
>>>>>>> functions which serialize state to file.
>>>>>>>
>>>>>>> The difference between today's vmsave and the fork(2) approach is that
>>>>>>> QEMU does not need to wait for guest RAM to be written to file before
>>>>>>> resuming the guest.
>>>>>>>
>>>>>>> Stefan
>>>>>>>
>>>>>>      I have a worry about what glib says:
>>>>>>
>>>>>> "On Unix, the GLib mainloop is incompatible with fork(). Any program
>>>>>> using the mainloop must either exec() or exit() from the child without
>>>>>> returning to the mainloop. "
>>>>>
>>>>>
>>>>> This is fine, the child just writes out the memory pages and exits.
>>>>> It never returns to the glib mainloop.
>>>>>
>>>>>>      There is another way to do it: intercept writes in kvm.ko (or
>>>>>> other kernel code). Since the key is intercepting memory changes,
>>>>>> and we can already do that in userspace in TCG mode, this would add
>>>>>> the missing part for KVM mode. Another benefit of this approach is
>>>>>> that memory usage can be controlled: for example, use ioctl() to
>>>>>> set up a fixed-size buffer in which the kernel code keeps the
>>>>>> intercepted write data, which avoids frequent switches back to
>>>>>> userspace QEMU code. When the buffer is full, always return to
>>>>>> userspace QEMU code and let it save the data to disk. I haven't
>>>>>> checked exactly how Intel guest mode handles page faults, so I
>>>>>> can't estimate the cost of switching between guest mode and root
>>>>>> mode, but it should not be worse than fork().
>>>>>
>>>>>
>>>>> The fork(2) approach is portable, covers both KVM and TCG, and doesn't
>>>>> require kernel changes.  A kvm.ko kernel change also won't be
>>>>> supported on existing KVM hosts.  These are big drawbacks and the
>>>>> kernel approach would need to be significantly better than plain old
>>>>> fork(2) to make it worthwhile.
>>>>>
>>>>> Stefan
>>>>>
>>>>     I think the advantage is that memory usage is predictable, so a
>>>> memory usage peak can be avoided by always saving the changed pages
>>>> first. fork() does not know which pages have changed. I am not sure
>>>> whether this would be a serious issue when the server's memory is
>>>> heavily committed, for example, a 24G host running two 11G guests to
>>>> provide powerful virtual servers.
>>>
>>> Memory usage is predictable but guest uptime is unpredictable because
>>> it waits until memory is written out.  This defeats the point of
>>> "live" savevm.  The guest may be stalled arbitrarily.
>>>
>>    I think it is adjustable. There is not much difference from
>> fork(), except that it gives more precise control over the changed
>> pages. The kernel intercepts the change and stores the changed page in
>> another page, similar to fork(). When the userspace QEMU code runs, it
>> saves some pages to disk. The buffer acts like a lubricant: when the
>> buffer size is MAX, it behaves like fork() and the guest runs more
>> lively; when the buffer size is 0, the guest runs less lively. I think
>> a tunable parameter would let the user find a good balance point.
>>    It is harder to implement; I just want to show the idea.
>
> You are right.  You could set a bigger buffer size to increase guest
> uptime.
>
>>> The fork child can minimize the chance of out-of-memory by using
>>> madvise(MADV_DONTNEED) after pages have been written out.
>>    It seems there is no way to make sure the written-out pages are the
>> changed pages, so there is a good chance that a written-out page is
>> unchanged and still in use by the other QEMU process.
>
> The KVM dirty log tells you which pages were touched.  The fork child
> process could give priority to the pages which have been touched by the
> guest.  They must be written out and marked madvise(MADV_DONTNEED) as
> soon as possible.
   Hmm, if the dirty log still works normally in the child process and
reflects the memory status of the parent rather than the child, then the
problem could be solved: when there are too many dirty pages, the child
tells the parent to wait for some time. But I haven't checked whether
kvm.ko behaves like that.

>
> I haven't looked at the vmsave data format yet to see if memory pages
> can be saved in random order, but this might work.  It reduces the
> likelihood of copy-on-write memory growth.
>
> Stefan
>


-- 
Best Regards

Wenchao Xia


Thread overview: 15+ messages
2013-08-09 10:20 [Qemu-devel] Are there plans to achieve ram live Snapshot feature? Chijianchun
2013-08-09 15:38 ` Paolo Bonzini
2013-08-09 15:45 ` Anthony Liguori
2013-08-09 15:51   ` Eric Blake
2013-08-12  9:59 ` Stefan Hajnoczi
2013-08-12 10:26   ` Alex Bligh
2013-08-12 11:33     ` Stefan Hajnoczi
2013-08-13  2:53       ` Wenchao Xia
2013-08-13  8:21         ` Stefan Hajnoczi
2013-08-14  1:54           ` Wenchao Xia
2013-08-14  7:53             ` Stefan Hajnoczi
2013-08-14  8:13               ` Alex Bligh
2013-08-15  2:26               ` Wenchao Xia
2013-08-15  7:49                 ` Stefan Hajnoczi
2013-08-15  8:03                   ` Wenchao Xia [this message]
