qemu-devel.nongnu.org archive mirror
From: Stefan Priebe <s.priebe@profihost.ag>
To: Stefan Hajnoczi <stefanha@gmail.com>
Cc: Orit Wasserman <owasserm@redhat.com>, Peter Lieven <pl@kamp.de>,
	qemu-devel <qemu-devel@nongnu.org>,
	Dave Gilbert <dgilbert@redhat.com>,
	Juan Quintela <quintela@redhat.com>
Subject: Re: [Qemu-devel] memory allocation of migration changed?
Date: Fri, 14 Feb 2014 19:15:17 +0100	[thread overview]
Message-ID: <52FE5D35.4070209@profihost.ag> (raw)
In-Reply-To: <20140214145900.GK17391@stefanha-thinkpad.redhat.com>

On 14.02.2014 at 15:59, Stefan Hajnoczi wrote:
> On Tue, Feb 11, 2014 at 07:32:46PM +0100, Stefan Priebe wrote:
>> On 11.02.2014 at 17:22, Peter Lieven wrote:
>>>
>>>
>>>> On 11.02.2014 at 16:44, Stefan Hajnoczi <stefanha@gmail.com> wrote:
>>>>
>>>> On Tue, Feb 11, 2014 at 3:54 PM, Stefan Priebe - Profihost AG
>>>> <s.priebe@profihost.ag> wrote:
>>>>> in the past (QEMU 1.5) a migration failed if there was not enough memory
>>>>> available on the target host right at the beginning.
>>>>>
>>>>> Now with QEMU 1.7 I've seen migrations succeed, but then the kernel OOM
>>>>> killer kills qemu processes. So the migration seems to take place
>>>>> without there being enough memory on the target machine?
>>>>
>>>> How much memory is the guest configured with?  How much memory does
>>>> the host have?
>>>>
>>>> I wonder if there are zero pages that can be migrated almost "for
>>>> free" and the destination host doesn't touch.  When they are touched
>>>> for the first time after migration handover, they need to be allocated
>>>> on the destination host.  This can lead to OOM if you overcommitted
>>>> memory.
>>>>
>>>> Can you reproduce the OOM reliably?  It should be possible to debug it
>>>> and figure out whether it's just bad luck or a true regression.
>>>>
>>>> Stefan
>>>
>>> The kernel version would also be interesting, as well as the THP and KSM settings.
>>
>> Kernel Host: 3.10.26
>>
>> What are THP / KSM? How do I get those settings?
>
> Transparent Huge Pages
>
> # cat /sys/kernel/mm/transparent_hugepage/enabled
>
> Kernel Samepage Merging
>
> # cat /sys/kernel/mm/ksm/run


# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never


# cat /sys/kernel/mm/ksm/run
1


>
> Stefan
>
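Stefan Hajnoczi's zero-page explanation above implies that the destination only allocates those pages when the guest first touches them after handover, so an overcommitted host can OOM well after the migration "succeeds". As a rough illustration (the guest size below is a placeholder, not a value from this thread), a pre-migration sanity check on the destination host could look like:

```shell
# Hedged sketch: compare the guest's RAM size against MemFree on the
# destination before migrating. guest_kb is an illustrative placeholder;
# read the real value from the VM configuration.
guest_kb=$((4 * 1024 * 1024))   # example: 4 GiB guest
free_kb=$(awk '/^MemFree:/ {print $2}' /proc/meminfo 2>/dev/null || echo 0)
if [ "${free_kb:-0}" -ge "$guest_kb" ]; then
    echo "destination has enough free memory for the guest"
else
    echo "guest would rely on overcommit; OOM possible after handover"
fi
```

MemFree is conservative (it ignores reclaimable caches), so this errs on the safe side; it is only a sketch, not the check QEMU itself performs.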


Thread overview: 12+ messages
2014-02-11 14:54 [Qemu-devel] memory allocation of migration changed? Stefan Priebe - Profihost AG
2014-02-11 15:44 ` Stefan Hajnoczi
2014-02-11 16:22   ` Peter Lieven
2014-02-11 18:32     ` Stefan Priebe
2014-02-14 14:59       ` Stefan Hajnoczi
2014-02-14 18:15         ` Stefan Priebe [this message]
2014-02-11 18:30   ` Stefan Priebe
2014-02-14 15:03     ` Stefan Hajnoczi
2014-02-14 18:16       ` Stefan Priebe
2014-02-24 15:00         ` Stefan Hajnoczi
2014-02-24 16:13           ` Eric Blake
2014-03-12 19:15             ` Stefan Priebe
