xen-devel.lists.xenproject.org archive mirror
* Re: PoD issue
@ 2010-01-31 17:48 Jan Beulich
  2010-02-03 18:42 ` George Dunlap
  0 siblings, 1 reply; 12+ messages in thread
From: Jan Beulich @ 2010-01-31 17:48 UTC (permalink / raw)
  To: george.dunlap; +Cc: xen-devel

>>> George Dunlap  01/29/10 7:30 PM >>>
>PoD is not critical to balloon out guest memory.  You can boot with mem 
>== maxmem and then balloon down afterwards just as you could before, 
>without involving PoD.  (Or at least, you should be able to; if you 
>can't then it's a bug.)  It's just that with PoD you can do something 
>you've always wanted to do but never knew it: boot with 1GiB with the 
>option of expanding up to 2GiB later. :-)

Oh, no, that's not what I meant. What I really wanted to say is that
with PoD, a properly functioning balloon driver in the guest is crucial
for the guest to stay alive: if the driver doesn't balloon down to the
target in time, the guest will exhaust the PoD cache.

>With the 54 megabyte difference: It's not like a GiB vs GB thing, is 
>it?  (i.e., 2^30 vs 10^9?)  The difference between 1GiB (2^30) and 1 GB 
>(10^9) is about 74 megs, or 18,000 pages.

No, that's not the problem. As I understand it now, the problem is
that totalram_pages (which the balloon driver bases its calculations
on) reflects only the memory available after all bootmem allocations
are done (i.e. it includes neither the static kernel image nor any
memory allocated before or by the bootmem allocator).

>I guess that is a weakness of PoD in general: we can't control the guest 
>balloon driver, but we rely on it to have the same model of how to 
>translate "target" into # pages in the balloon as the PoD code.

I think this isn't a weakness of PoD, but a design issue in the balloon
driver's xenstore interface: while a target value shown in or obtained
from the /proc and /sys interfaces can naturally be based on (and
reflect) any internal kernel state, the xenstore interface should only
use numbers expressed in terms of the full memory amount given to the
guest. Hence a target value read from the memory/target node should be
adjusted before being put in relation to totalram_pages. And I think
this is a general misconception in the current implementation (i.e. it
should be corrected not only for the HVM case, but for the pv one as
well).

The bad aspect of this is that it will require a fixed balloon driver
in any HVM guest that has maxmem > mem when the underlying Xen gets
updated to a version that supports PoD. I cannot, however, see an OS-
and OS-version-independent alternative (i.e. something that could be
done in the PoD code or the tools instead).

Jan

* Re: Re: PoD issue
@ 2010-06-05 16:15 Jan Beulich
  2010-06-07  9:28 ` George Dunlap
  0 siblings, 1 reply; 12+ messages in thread
From: Jan Beulich @ 2010-06-05 16:15 UTC (permalink / raw)
  To: pasik; +Cc: George.Dunlap, xen-devel, Keir.Fraser, list.keith

>>> Pasi Kärkkäinen 06/04/10 5:03 PM >>>
>On Fri, Feb 19, 2010 at 08:19:15AM +0000, Jan Beulich wrote:
>> >>> Keith Coleman  19.02.10 01:03 >>>
>> >On Thu, Feb 4, 2010 at 2:12 PM, George Dunlap
>> > wrote:
>> >> Yeah, the OSS tree doesn't get the kind of regression testing it
>> >> really needs at the moment.  I was using the OSS balloon drivers when
>> >> I implemented and submitted the PoD code last year.  I didn't have any
>> >> trouble then, and I was definitely using up all of the memory.  But I
>> >> haven't done any testing on OSS since then, basically.
>> >>
>> >
>> >Is it expected that booting HVM guests with maxmem > memory is
>> >unstable? In testing 3.4.3-rc2 (kernel 2.6.18 c/s 993) I can easily
>> >crash the guest and occasionally the entire server.
>> 
>> Crashing the guest is expected if the guest doesn't have a fixed
>> balloon driver (i.e. the mentioned c/s would need to be in the
>> sources the pv drivers for the guest were built from).
>> 
>> Crashing the host is certainly unacceptable - please provide logs
>> thereof.
>> 
>
>Was this resolved? Someone was complaining recently that maxmem != memory
>crashes his Xen host..

I don't recall ever having seen logs of a host crash of this sort,
so if this ever was the case and no-one else fixed it, I would
believe it still to be an issue.

Jan



Thread overview: 12+ messages
2010-01-31 17:48 PoD issue Jan Beulich
2010-02-03 18:42 ` George Dunlap
2010-02-04  8:17   ` Jan Beulich
2010-02-04 19:12     ` George Dunlap
2010-02-19  0:03       ` Keith Coleman
2010-02-19  6:53         ` Ian Pratt
2010-02-19 21:28           ` Keith Coleman
2010-02-19  8:19         ` Jan Beulich
2010-06-04 15:03           ` Pasi Kärkkäinen
  -- strict thread matches above, loose matches on Subject: below --
2010-06-05 16:15 Jan Beulich
2010-06-07  9:28 ` George Dunlap
2010-06-07  9:51   ` Pasi Kärkkäinen
