From: George Dunlap <george.dunlap@eu.citrix.com>
To: Jan Beulich <JBeulich@novell.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: PoD issue
Date: Fri, 29 Jan 2010 18:30:49 +0000
Message-ID: <4B632959.4070202@eu.citrix.com>
In-Reply-To: <4B632202020000780002CC72@vpn.id2.novell.com>
PoD is not required in order to balloon out guest memory. You can boot with mem
== maxmem and then balloon down afterwards just as you could before,
without involving PoD. (Or at least, you should be able to; if you
can't, that's a bug.) It's just that with PoD you can do something
you've always wanted to do but perhaps never knew you could: boot with only
1GiB populated, with the option of expanding up to 2GiB later. :-)
On the 54 megabyte difference: it's not a GiB vs GB mix-up, is
it? (i.e., 2^30 vs 10^9 bytes?) The difference between 1 GiB (2^30 bytes) and
1 GB (10^9 bytes) is about 74 MB, or roughly 18,000 4KiB pages.
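As a quick back-of-the-envelope check (a throwaway user-space sketch of
my own, assuming 4 KiB pages):

    #include <stdio.h>

    int main(void)
    {
        unsigned long gib  = 1UL << 30;      /* 1 GiB = 2^30 bytes */
        unsigned long gb   = 1000000000UL;   /* 1 GB  = 10^9 bytes */
        unsigned long diff = gib - gb;       /* 73,741,824 bytes */

        /* prints roughly 73 MB and 18,003 pages */
        printf("%lu bytes, %lu MB, %lu pages\n",
               diff, diff / 1000000UL, diff / 4096UL);
        return 0;
    }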
I guess that is a weakness of PoD in general: we can't control the guest
balloon driver, yet we rely on it to use the same model as the PoD code
for translating "target" into the number of pages in the balloon.
-George
Jan Beulich wrote:
>>>> George Dunlap <george.dunlap@eu.citrix.com> 29.01.10 17:01 >>>
>>>>
>> What seems likely to me is that Xen (setting the PoD target) and the
>> balloon driver (allocating memory) have a different way of calculating
>> the amount of guest memory. So the balloon driver thinks it's done
>> handing memory back to Xen when there are still more outstanding PoD
>> entries than there are entries in the PoD memory pool. What balloon
>> driver are you using?
>>
>
> The one from our forward-ported 2.6.32.x tree. I would suppose there
> are no significant differences here from the one in 2.6.18, but I wonder
> how precise the totalram_pages value is that the driver (also in 2.6.18)
> uses to initialize bs.current_pages. Given that with PoD it is now crucial
> for the guest to balloon out enough memory, using an imprecise start
> value is not acceptable anymore. The question, however, is what more
> reliable data source one could use (given that any non-exported
> kernel object is out of the question). And I wonder how this works
> reliably for others...
>
>
>> Can you let me know max_mem, target, and what the
>> balloon driver has reached before calling it quits? (Although 13,000
>> pages is an awful lot to be off by: 54 MB...)
>>
>
> The balloon driver reports the expected state: target and allocation
> are 1G. But yes - how did I not pay attention to this - the balloon is
> *far* from being 1G in size (and in fact the difference probably
> matches those 54M quite closely).
>
> Thanks a lot!
>
> Jan
>
>