From: George Dunlap
Subject: Re: PoD issue
Date: Wed, 3 Feb 2010 10:42:09 -0800
Message-ID: <4B69C381.10005@eu.citrix.com>
In-Reply-To: <4B65C25E02000078000584AB@vpn.id2.novell.com>
To: Jan Beulich
Cc: "xen-devel@lists.xensource.com"
List-Id: xen-devel@lists.xenproject.org

So did you track down where the math error is? Do we have a plan to
fix this going forward?

 -George

Jan Beulich wrote:
>>>> George Dunlap 01/29/10 7:30 PM >>>
>> PoD is not critical to balloon out guest memory. You can boot with
>> mem == maxmem and then balloon down afterwards just as you could
>> before, without involving PoD. (Or at least, you should be able to;
>> if you can't, then it's a bug.) It's just that with PoD you can do
>> something you've always wanted to do but never knew it: boot with
>> 1 GiB with the option of expanding up to 2 GiB later. :-)
>
> Oh, no, that's not what I meant. What I really wanted to say is that
> with PoD, a properly functioning balloon driver in the guest is
> crucial for it to stay alive long enough.
>
>> As for the 54-megabyte difference: it's not a GiB vs. GB thing, is
>> it (i.e. 2^30 vs. 10^9)? The difference between 1 GiB (2^30 bytes)
>> and 1 GB (10^9 bytes) is about 74 megs, or 18,000 pages.
>
> No, that's not the problem. As I understand it now, the problem is
> that totalram_pages (which the balloon driver bases its calculations
> on) reflects all memory available after all bootmem allocations were
> done (i.e. it includes neither the static kernel image nor any
> memory allocated before or from the bootmem allocator).
>
>> I guess that is a weakness of PoD in general: we can't control the
>> guest balloon driver, but we rely on it to have the same model as
>> the PoD code of how to translate "target" into the number of pages
>> in the balloon.
>
> I think this isn't a weakness of PoD, but a design issue in the
> balloon driver's xenstore interface: while a target value shown in
> or obtained from the /proc and /sys interfaces can naturally be
> based on (and reflect) any internal kernel state, the xenstore
> interface should only use numbers in terms of the full memory
> amount given to the guest. Hence a target value read from the
> memory/target node should be adjusted before being put in relation
> to totalram_pages. And I think this is a general misconception in
> the current implementation (i.e. it should be corrected not only
> for the HVM case, but for the pv one as well).
>
> The bad aspect of this is that it will require a fixed balloon
> driver in any HVM guest that has maxmem > mem when the underlying
> Xen gets updated to a version that supports PoD. I cannot, however,
> see an OS- and OS-version-independent alternative (i.e. something
> to be done in the PoD code or the tools).
>
> Jan
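
[To make the suggested adjustment concrete, here is a minimal sketch
of what a fixed balloon driver might do. All names here
(totalram_bias, balloon_init_bias, balloon_adjust_target) are
hypothetical, not the real driver's API; the idea is only to record,
once at init time, the fixed gap between the full memory given to the
guest and totalram_pages, and to subtract that gap from any target
read from the memory/target node before comparing it with
totalram_pages:]

```c
/* Pages given to the guest but invisible to totalram_pages
 * (static kernel image plus early/bootmem allocations). */
static unsigned long totalram_bias;

/* Called once at balloon-driver init.
 * full_pages: total pages given to the guest (the hypervisor's view);
 * totalram:   the kernel's totalram_pages at that moment. */
void balloon_init_bias(unsigned long full_pages, unsigned long totalram)
{
    totalram_bias = full_pages - totalram;
}

/* Translate a target read from the memory/target xenstore node
 * (expressed in full-memory terms, in pages) into the value the
 * driver should drive totalram_pages toward. */
unsigned long balloon_adjust_target(unsigned long xenstore_target_pages)
{
    if (xenstore_target_pages < totalram_bias)
        return 0;
    return xenstore_target_pages - totalram_bias;
}
```

[For example, in a 1 GiB guest (262144 pages) whose totalram_pages is
250000 at boot, a xenstore target of 262144 would be translated to
250000 rather than being compared against totalram_pages directly.]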