From: Peter Lieven <pl@dlh.net>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Shu Ming <shuming@linux.vnet.ibm.com>,
qemu-devel@nongnu.org, kvm@vger.kernel.org
Subject: Re: [Qemu-devel] Stalls on Live Migration of VMs with a lot of memory
Date: Wed, 04 Jan 2012 12:22:49 +0100 [thread overview]
Message-ID: <4F043689.2000604@dlh.net> (raw)
In-Reply-To: <4F04326F.8080808@redhat.com>
On 04.01.2012 12:05, Paolo Bonzini wrote:
> On 01/04/2012 11:53 AM, Peter Lieven wrote:
>> On 04.01.2012 02:38, Shu Ming wrote:
>>> On 2012-1-4 2:04, Peter Lieven wrote:
>>>> Hi all,
>>>>
>>>> is there any known issue when migrating VMs with a lot of memory
>>>> (e.g. 32GB)? It seems there is some portion of the migration code
>>>> that takes too much time when the number of memory pages is large.
>>>>
>>>> Symptoms are: unresponsive VNC connection, VM stalls, and an
>>>> unresponsive QEMU monitor (via TCP).
>>>>
>>>> The problem seems to be worse on 10G connections between 2 nodes (I
>>>> already tried limiting the bandwidth with the migrate_set_speed
>>>> command) than on 1G connections.
>>> Does the migration complete eventually? How long does it take? I did
>>> a test on a VM with 4G and it took about two seconds.
>> it seems that the majority of the time (90%) is lost in:
>>
>>     cpu_physical_memory_reset_dirty(current_addr,
>>                                     current_addr + TARGET_PAGE_SIZE,
>>                                     MIGRATION_DIRTY_FLAG);
>>
>> does anyone have an idea how to improve this?
>
> There were patches to move RAM migration to a separate thread. The
> problem is that they broke block migration.
>
> However, asynchronous NBD is in and streaming will follow suit soon.
> As soon as we have those two features, we might as well remove the
> block migration code.
ok, so it's a matter of time, right?
would it make sense to patch ram_save_block to always process a full ram
block? i.e., copy the dirty information for the whole block, then reset
the dirty information for the complete block, and then process the pages
that were dirty before the reset.
questions:
- how big can ram blocks be?
- is it possible for ram blocks to differ in size?
- in stage 3 the vm is stopped, right? so there can't be any more
  dirty pages after scanning the whole memory once?
peter
>
> Paolo
Thread overview: 13+ messages
2012-01-03 18:04 [Qemu-devel] Stalls on Live Migration of VMs with a lot of memory Peter Lieven
2012-01-04 1:38 ` Shu Ming
2012-01-04 9:11 ` Peter Lieven
2012-01-04 10:53 ` Peter Lieven
2012-01-04 11:05 ` Paolo Bonzini
2012-01-04 11:22 ` Peter Lieven [this message]
2012-01-04 11:28 ` Paolo Bonzini
2012-01-04 11:42 ` Peter Lieven
2012-01-04 12:28 ` Paolo Bonzini
2012-01-04 13:08 ` Peter Lieven
2012-01-04 14:14 ` Paolo Bonzini
2012-01-04 14:17 ` Peter Lieven
2012-01-04 14:21 ` Peter Lieven