From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Jan Beulich <JBeulich@novell.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: Re: PoD issue
Date: Thu, 4 Feb 2010 11:12:18 -0800 [thread overview]
Message-ID: <de76405a1002041112q3d36b847p294869f6d5f760f1@mail.gmail.com> (raw)
In-Reply-To: <4B6A90B1020000780002DA3B@vpn.id2.novell.com>
Yeah, the OSS tree doesn't get the kind of regression testing it
really needs at the moment. I was using the OSS balloon drivers when
I implemented and submitted the PoD code last year; I had no trouble
then, and I was definitely using up all of the memory. But I haven't
done any testing on OSS since then.
-George
On Thu, Feb 4, 2010 at 12:17 AM, Jan Beulich <JBeulich@novell.com> wrote:
> It was in the balloon driver's interaction with xenstore - see 2.6.18 c/s
> 989.
>
> I have to admit that I cannot see how this issue escaped attention
> when the PoD code was introduced - any guest with PoD in use and
> an unfixed balloon driver is bound to crash sooner or later (with the
> unfortunate consequence of requiring an update of the PV drivers in
> HVM guests when upgrading Xen from a PoD-incapable to a PoD-capable
> version).
>
> Jan
>
>>>> George Dunlap <george.dunlap@eu.citrix.com> 03.02.10 19:42 >>>
> So did you track down where the math error is? Do we have a plan to fix
> this going forward?
> -George
>
> Jan Beulich wrote:
>>>>> George Dunlap 01/29/10 7:30 PM >>>
>>>>>
>>> PoD is not needed to balloon out guest memory. You can boot with mem
>>> == maxmem and then balloon down afterwards just as you could before,
>>> without involving PoD. (Or at least, you should be able to; if you
>>> can't, that's a bug.) It's just that with PoD you can do something
>>> you've always wanted to do but never knew you could: boot with 1GiB,
>>> with the option of expanding up to 2GiB later. :-)
>>>
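The scenario George describes would correspond to a guest configuration along these lines (a hypothetical example in xm config-file style; the exact field names depend on the toolstack version):

```python
# Hypothetical HVM guest config: boot with 1 GiB actually populated
# (on demand), while reserving p2m space to balloon up to 2 GiB later.
memory = 1024   # MiB backed at boot - the PoD target
maxmem = 2048   # MiB the guest may grow into via ballooning
```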
>>
>> Oh, no, that's not what I meant. What I really wanted to say is that
>> with PoD, a properly functioning balloon driver in the guest is crucial
>> for the guest to survive at all.
>>
>>
>>> With the 54 megabyte difference: It's not like a GiB vs GB thing, is
>>> it? (i.e., 2^30 vs 10^9?) The difference between 1GiB (2^30) and 1 GB
>>> (10^9) is about 74 megs, or 18,000 pages.
>>>
>>
>> No, that's not the problem. As I understand it now, the problem is
>> that totalram_pages (which the balloon driver bases its calculations
>> on) reflects only the memory left after all bootmem allocations were
>> done (i.e. it includes neither the static kernel image nor any memory
>> allocated before or from the bootmem allocator).
>>
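The mismatch Jan describes can be put in numbers. A minimal sketch, assuming 4 KiB pages and the ~54 MiB (13824-page) discrepancy mentioned earlier in the thread; all figures are illustrative, not actual kernel values:

```c
/* Pages actually backed in the guest: everything the kernel accounts
 * for in totalram_pages, plus the kernel image and bootmem allocations
 * that totalram_pages excludes. */
unsigned long populated_pages(unsigned long totalram_pages,
                              unsigned long overhead_pages)
{
    return totalram_pages + overhead_pages;
}

/* A naive balloon driver releases (totalram_pages - target) pages,
 * believing totalram_pages equals the full memory given to the guest.
 * The guest therefore stays overhead_pages above the target - exactly
 * the amount by which the PoD pool is eventually overrun. */
unsigned long pages_left_populated(unsigned long totalram_pages,
                                   unsigned long overhead_pages,
                                   unsigned long target_pages)
{
    unsigned long released = totalram_pages - target_pages;
    return populated_pages(totalram_pages, overhead_pages) - released;
}
```

For a 1 GiB target (262144 pages) this leaves the guest 13824 pages above target regardless of how much it started with, so the PoD pool runs dry once the guest touches enough memory.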
>>
>>> I guess that is a weakness of PoD in general: we can't control the guest
>>> balloon driver, but we rely on it having the same model as the PoD code
>>> for translating "target" into a number of pages in the balloon.
>>>
>>
>> I think this isn't a weakness of PoD, but a design issue in the balloon
>> driver's xenstore interface: While a target value shown in or obtained
>> from the /proc and /sys interfaces naturally can be based on (and
>> reflect) any internal kernel state, the xenstore interface should only
>> use numbers in terms of the full amount of memory given to the guest.
>> Hence a target value read from the memory/target node should be
>> adjusted before being put in relation to totalram_pages. And I think
>> this is a general misconception in the current implementation (i.e. it
>> should be corrected not only for the HVM case, but for the pv one
>> as well).
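The adjustment Jan argues for can be sketched as follows (names and values are assumed for illustration; the actual fix is in 2.6.18 c/s 989): record the gap between the full memory given to the guest and the kernel's own accounting once at boot, then apply it to every target read from xenstore, for PV and HVM guests alike.

```c
/* Stand-in for the kernel's totalram_pages: 1 GiB minus ~54 MiB of
 * boot-time overhead, in 4 KiB pages. Illustrative value only. */
static unsigned long totalram_pages = 248320;

/* Pages given to the guest but not counted in totalram_pages;
 * computed once, before any ballooning happens. */
static unsigned long totalram_bias;

void balloon_init(unsigned long pages_given_to_guest)
{
    totalram_bias = pages_given_to_guest - totalram_pages;
}

/* memory/target is expressed in full-memory terms; convert it into
 * the kernel's terms before comparing against totalram_pages. */
unsigned long adjusted_target(unsigned long xenstore_target_pages)
{
    return xenstore_target_pages - totalram_bias;
}
```

With this in place, a target equal to the full memory given to the guest maps onto totalram_pages exactly, so the driver correctly concludes there is nothing to balloon out.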
>>
>> The bad aspect of this is that it will require a fixed balloon driver
>> in any HVM guest that has maxmem > mem when the underlying Xen
>> gets updated to a version that supports PoD. I cannot, however,
>> see an OS- and OS-version-independent alternative (i.e. something
>> that could be done in the PoD code or the tools instead).
>>
>> Jan
>>
>>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xensource.com
> http://lists.xensource.com/xen-devel
>
Thread overview: 12+ messages
2010-01-31 17:48 PoD issue Jan Beulich
2010-02-03 18:42 ` George Dunlap
2010-02-04 8:17 ` Jan Beulich
2010-02-04 19:12 ` George Dunlap [this message]
2010-02-19 0:03 ` Keith Coleman
2010-02-19 6:53 ` Ian Pratt
2010-02-19 21:28 ` Keith Coleman
2010-02-19 8:19 ` Jan Beulich
2010-06-04 15:03 ` Pasi Kärkkäinen
-- strict thread matches above, loose matches on Subject: below --
2010-06-05 16:15 Jan Beulich
2010-06-07 9:28 ` George Dunlap
2010-06-07 9:51 ` Pasi Kärkkäinen