From: Juergen Gross <jgross@suse.com>
To: "Roger Pau Monné" <roger.pau@citrix.com>,
"George Dunlap" <dunlapg@umich.edu>
Cc: Wei Liu <wei.liu2@citrix.com>,
Andrew Cooper <andrew.cooper3@citrix.com>,
Jan Beulich <JBeulich@suse.com>,
abelgun@amazon.com, xen-devel <xen-devel@lists.xenproject.org>,
Boris Ostrovsky <boris.ostrovsky@oracle.com>,
David Woodhouse <dwmw2@infradead.org>,
bercarug@amazon.com
Subject: Re: [Memory Accounting] was: Re: PVH dom0 creation fails - the system freezes
Date: Thu, 26 Jul 2018 13:22:33 +0200 [thread overview]
Message-ID: <b66e5e1c-5c56-9eda-562e-768763d0df78@suse.com> (raw)
In-Reply-To: <20180726111145.za7enqdukb6kq4iz@mac.bytemobile.com>
On 26/07/18 13:11, Roger Pau Monné wrote:
> On Thu, Jul 26, 2018 at 10:45:08AM +0100, George Dunlap wrote:
>> On Thu, Jul 26, 2018 at 12:07 AM, Boris Ostrovsky
>> <boris.ostrovsky@oracle.com> wrote:
>>> On 07/25/2018 02:56 PM, Andrew Cooper wrote:
>>>> On 25/07/18 17:29, Juergen Gross wrote:
>>>>> On 25/07/18 18:12, Roger Pau Monné wrote:
>>>>>> On Wed, Jul 25, 2018 at 05:05:35PM +0300, bercarug@amazon.com wrote:
>>>>>>> On 07/25/2018 05:02 PM, Wei Liu wrote:
>>>>>>>> On Wed, Jul 25, 2018 at 03:41:11PM +0200, Juergen Gross wrote:
>>>>>>>>> On 25/07/18 15:35, Roger Pau Monné wrote:
>>>>>>>>>>> What could be causing the available memory loss problem?
>>>>>>>>>> That seems to be Linux aggressively ballooning out memory, you go from
>>>>>>>>>> 7129M total memory to 246M. Are you creating a lot of domains?
>>>>>>>>> This might be related to the tools thinking dom0 is a PV domain.
>>>>>>>> Good point.
>>>>>>>>
>>>>>>>> In that case, xenstore-ls -fp would also be useful. The output should
>>>>>>>> show the balloon target for Dom0.
>>>>>>>>
>>>>>>>> You can also try setting autoballoon to off in /etc/xen/xl.conf to
>>>>>>>> see if it makes any difference.
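>>>>>>>>
>>>>>>>> For reference, the relevant xl.conf(5) setting would look
>>>>>>>> something like:
>>>>>>>>
>>>>>>>>     autoballoon="off"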
>>>>>>>>
>>>>>>>> Wei.
>>>>>>> Also tried setting autoballooning off, but it had no effect.
>>>>>> This is a Linux/libxl issue, and I'm not sure what the best way
>>>>>> to solve it is. Linux has the following 'workaround' in the
>>>>>> balloon driver:
>>>>>>
>>>>>>     err = xenbus_scanf(XBT_NIL, "memory", "static-max", "%llu",
>>>>>>                        &static_max);
>>>>>>     if (err != 1)
>>>>>>             static_max = new_target;
>>>>>>     else
>>>>>>             static_max >>= PAGE_SHIFT - 10;
>>>>>>
>>>>>>     target_diff = xen_pv_domain() ? 0
>>>>>>             : static_max - balloon_stats.target_pages;
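>>>>>>
>>>>>> To put illustrative numbers on this (made up, not from this
>>>>>> report): with 'static-max' = 4194304 kB and PAGE_SHIFT = 12,
>>>>>> static_max becomes 4194304 >> 2 = 1048576 pages; if
>>>>>> balloon_stats.target_pages is 1046528, an HVM/PVH guest keeps a
>>>>>> target_diff of 2048 pages (8 MiB of firmware/special pages),
>>>>>> while a PV guest uses 0.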
>>>>> Hmm, shouldn't PVH behave the same way as PV here? I don't think
>>>>> there is memory missing for PVH, as opposed to HVM's firmware
>>>>> memory.
>>>>>
>>>>> Adding Boris for a second opinion.
>>>
>>> (Notwithstanding Andrew's rant below ;-))
>>>
>>> I am trying to remember --- what memory were we trying not to online for
>>> HVM here?
>>
>> My general memory of the situation is this:
>>
>> * Balloon drivers are told to reach a "target" value for max_pages.
>> * max_pages includes all memory assigned to the guest, including video
>> ram, "special" pages, ipxe ROMs, bios ROMs from passed-through
>> devices, and so on.
>> * Unfortunately, the balloon driver doesn't know what the guest's
>> max_pages value is and can't read it.
>> * So what the balloon drivers do at the moment (as I understand it) is
>> look at the memory *reported as RAM*, and do a calculation:
>> visible_ram - target_max_pages = pages_in_balloon
>>
>> You can probably see why this won't work -- the result is that the
>> guest balloons down to (target_max_pages + non_ram_pages). This is
>> kind of messy for normal guests, but when you have a
>> populate-on-demand guest, that leaves non_ram_pages amount of PoD ram
>> in the guest. The hypervisor then spends a huge amount of work
>> swapping the PoD pages around under the guest's feet, until it can't
>> find any more zeroed guest pages to use, and it crashes the guest.
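>>
>> To make the failure mode concrete, a sketch with made-up numbers
>> (the variable names are illustrative, not the driver's):
>>
>>     /* 4 GiB assigned in total, of which 16 MiB is not RAM
>>      * (video RAM, ROMs, special pages). */
>>     unsigned long max_pages        = 1048576; /* what Xen accounts */
>>     unsigned long non_ram_pages    = 4096;
>>     unsigned long visible_ram      = max_pages - non_ram_pages;
>>     unsigned long target_max_pages = 524288;  /* 2 GiB target */
>>
>>     /* What the guest computes: */
>>     unsigned long pages_in_balloon = visible_ram - target_max_pages;
>>
>>     /* After ballooning it holds visible_ram - pages_in_balloon
>>      * = target_max_pages pages of RAM, yet Xen still accounts
>>      * target_max_pages + non_ram_pages pages to the domain. */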
>>
>> The kludge we have right now is to make up a number for HVM guests
>> which is slightly larger than non_ram_pages, and tell the guest to aim
>> for *that* instead.
>>
>> I think what we need is for the *toolstack* to calculate the size of
>> the balloon rather than the guest, and tell the balloon driver how big
>> to make its balloon, rather than the balloon driver trying to figure
>> that out on its own.
>
> Maybe the best option would be for the toolstack to fetch the e820
> memory map and set the target based on the size of the RAM regions in
> there for PVH Dom0? That would certainly match the expectations of the
> guest.
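>
> Something like the following, as a sketch (the e820 structures are
> roughly the public-interface ones; the libxl plumbing around them is
> hypothetical):
>
>     uint64_t ram_kb = 0;
>     unsigned int i;
>
>     for ( i = 0; i < map.nr_entries; i++ )
>         if ( map.entry[i].type == E820_RAM )
>             ram_kb += map.entry[i].size >> 10;
>
>     /* Write ram_kb, adjusted by the wanted balloon size, to the
>      * domain's xenstore 'target' node. */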
>
> Note that for DomUs if hvmloader (or any other component) inside of
> the guest changes the memory map it would also have to adjust the
> value in the xenstore 'target' node.
How would it do that later when the guest is already running?
I believe the right way would be to design a proper ballooning interface
suitable for all kinds of guests from scratch. This should include how
to deal with hotplug of memory or booting with mem < mem_max. Whether
PoD should be included should be discussed, too.
After defining that interface we can look for a proper way to select
the correct interface (old or new) in the guest and how to communicate
that selection to the host.
Juergen