qemu-devel.nongnu.org archive mirror
 help / color / mirror / Atom feed
From: Peter Lieven <pl@dlh.net>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Shu Ming <shuming@linux.vnet.ibm.com>,
	qemu-devel@nongnu.org, kvm@vger.kernel.org
Subject: Re: [Qemu-devel] Stalls on Live Migration of VMs with a lot of memory
Date: Wed, 04 Jan 2012 12:42:10 +0100	[thread overview]
Message-ID: <4F043B12.60501@dlh.net> (raw)
In-Reply-To: <4F0437DA.8080600@redhat.com>

On 04.01.2012 12:28, Paolo Bonzini wrote:
> On 01/04/2012 12:22 PM, Peter Lieven wrote:
>>> There were patches to move RAM migration to a separate thread. The
>>> problem is that they broke block migration.
>>>
>>> However, asynchronous NBD is in and streaming will follow suit soon.
>>> As soon as we have those two features, we might as well remove the
>>> block migration code.
>>
>> ok, so its a matter of time, right?
>
> Well, there are other solutions of varying complexity in the works, 
> that might remove the need for the migration thread or at least reduce 
> the problem (post-copy migration, XBRLE, vectorized hot loops).  But 
> yes, we are aware of the problem and we should solve it in one way or 
> the other.
i have read all these approached and they seem all promising.
>
>> would it make sense to patch ram_save_block to always process a full ram
>> block?
>
> If I understand the proposal, then migration would hardly be live 
> anymore.  The biggest RAM block in a 32G machine is, well, 32G big. 
> Other RAM blocks are for the VRAM and for some BIOS data, but they are 
> very small in proportion.
ok, then i misunderstood the ram blocks thing. i thought the guest ram 
would consist of a collection of ram blocks.
then let me describe it differntly. would it make sense to process 
bigger portions of memory (e.g. 1M) in stage 2 to reduce the number of 
calls to cpu_physical_memory_reset_dirty and instead run it on bigger 
portions of memory. we might loose a few dirty pages but they will be 
tracked in the next iteration in stage 2 or in stage 3 at least. what 
would be necessary is that nobody marks a page dirty
while i copy the dirty information for the portion of memory i want to 
process.
>
>> - in stage 3 the vm is stopped, right? so there can't be any more dirty
>> blocks after scanning the whole memory once?
>
> No, stage 3 is entered when there are very few dirty memory pages 
> remaining.  This may happen after scanning the whole memory many 
> times.  It may even never happen if migration does not converge 
> because of low bandwidth or too strict downtime requirements.
ok, is there a chance that i lose one final page if it is modified just 
after i walked over it and i found no other page dirty (so bytes_sent = 0).

Peter
>
> Paolo

  reply	other threads:[~2012-01-04 11:42 UTC|newest]

Thread overview: 13+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2012-01-03 18:04 [Qemu-devel] Stalls on Live Migration of VMs with a lot of memory Peter Lieven
2012-01-04  1:38 ` Shu Ming
2012-01-04  9:11   ` Peter Lieven
2012-01-04 10:53   ` Peter Lieven
2012-01-04 11:05     ` Paolo Bonzini
2012-01-04 11:22       ` Peter Lieven
2012-01-04 11:28         ` Paolo Bonzini
2012-01-04 11:42           ` Peter Lieven [this message]
2012-01-04 12:28             ` Paolo Bonzini
2012-01-04 13:08               ` Peter Lieven
2012-01-04 14:14                 ` Paolo Bonzini
2012-01-04 14:17                   ` Peter Lieven
2012-01-04 14:21                   ` Peter Lieven

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=4F043B12.60501@dlh.net \
    --to=pl@dlh.net \
    --cc=kvm@vger.kernel.org \
    --cc=pbonzini@redhat.com \
    --cc=qemu-devel@nongnu.org \
    --cc=shuming@linux.vnet.ibm.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).