From mboxrd@z Thu Jan 1 00:00:00 1970
From: George Dunlap
Subject: Re: PoD issue
Date: Fri, 29 Jan 2010 18:30:49 +0000
Message-ID: <4B632959.4070202@eu.citrix.com>
References: <4B630C63020000780002CC11@vpn.id2.novell.com>
 <4B630643.2000904@eu.citrix.com>
 <4B632202020000780002CC72@vpn.id2.novell.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
Content-Transfer-Encoding: 7bit
In-Reply-To: <4B632202020000780002CC72@vpn.id2.novell.com>
To: Jan Beulich
Cc: "xen-devel@lists.xensource.com"
List-Id: xen-devel@lists.xenproject.org

PoD is not needed to balloon out guest memory.  You can boot with
mem == maxmem and then balloon down afterwards, just as you could
before, without involving PoD.  (Or at least, you should be able to;
if you can't, that's a bug.)  It's just that PoD lets you do something
you couldn't do before: boot with 1GiB, with the option of expanding
up to 2GiB later. :-)

Regarding the 54 megabyte difference: it's not a GiB vs GB thing, is
it?  (i.e., 2^30 vs 10^9?)  The difference between 1 GiB (2^30 bytes)
and 1 GB (10^9 bytes) is about 74 megabytes, or 18,000 pages.

I guess that is a weakness of PoD in general: we can't control the
guest balloon driver, but we rely on it to translate "target" into a
number of pages in the balloon using the same model as the PoD code.

 -George

Jan Beulich wrote:
>>>> George Dunlap 29.01.10 17:01 >>>
>
>> What seems likely to me is that Xen (setting the PoD target) and the
>> balloon driver (allocating memory) have a different way of calculating
>> the amount of guest memory.  So the balloon driver thinks it's done
>> handing memory back to Xen when there are still more outstanding PoD
>> entries than there are entries in the PoD memory pool.  What balloon
>> driver are you using?
>
> The one from our forward-ported 2.6.32.x tree.  I would suppose there
> are no significant differences here from the one in 2.6.18, but I
> wonder how precise the totalram_pages value is that the driver (also
> in 2.6.18) uses to initialize bs.current_pages.  Given that with PoD
> it is now crucial for the guest to balloon out enough memory, using
> an imprecise start value is not acceptable anymore.  The question,
> however, is what more reliable data source one could use (given that
> any non-exported kernel object is out of the question).  And I wonder
> how this works reliably for others...
>
>> Can you let me know max_mem, target, and what the balloon driver has
>> reached before calling it quits?  (Although 13,000 pages is an awful
>> lot to be off by: 54 MB...)
>
> The balloon driver reports the expected state: target and allocation
> are 1G.  But yes - how did I not pay attention to this - the balloon
> is *far* from being 1G in size (and in fact the difference probably
> matches those 54M quite closely).
>
> Thanks a lot!
>
> Jan
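
For reference, the GiB-vs-GB arithmetic above as a standalone C
sketch (assuming 4KiB x86 pages; this is illustrative only, not code
taken from the balloon driver or the PoD pool):

#include <stdio.h>

int main(void)
{
    unsigned long long gib  = 1ULL << 30;    /* 1 GiB = 2^30 bytes */
    unsigned long long gb   = 1000000000ULL; /* 1 GB  = 10^9 bytes */
    unsigned long long page = 4096;          /* assumed page size   */

    printf("1 GiB = %llu pages\n", gib / page);  /* 262144 pages */
    printf("1 GB  = %llu pages\n", gb / page);   /* 244140 pages */
    printf("delta = %llu pages, ~%llu MB\n",
           gib / page - gb / page,               /* 18004 pages  */
           (gib - gb) / 1000000);                /* 73, i.e. ~74 MB */
    return 0;
}

Note that the ~18,000-page (~74 MB) delta a pure unit confusion would
produce is noticeably larger than the ~13,000-page (54 MB) gap
reported in this thread, so GiB-vs-GB confusion alone does not account
for it.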