From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Jan Beulich"
Subject: Re: Xen 4.0.1 "xc_map_foreign_batch: mmap failed: Cannot allocate memory"
Date: Wed, 05 Jan 2011 16:33:50 +0000
Message-ID: <4D24AB7E020000780002A861@vpn.id2.novell.com>
References: <4D249CBD020000780002A7E4@vpn.id2.novell.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Return-path:
In-Reply-To:
Content-Disposition: inline
Sender: xen-devel-bounces@lists.xensource.com
Errors-To: xen-devel-bounces@lists.xensource.com
To: Stefano Stabellini
Cc: Charles Arnold, "xen-devel@lists.xensource.com", Keir Fraser
List-Id: xen-devel@lists.xenproject.org

>>> On 05.01.11 at 17:22, Stefano Stabellini wrote:
> On Wed, 5 Jan 2011, Jan Beulich wrote:
>> >>> On 05.01.11 at 15:37, Stefano Stabellini wrote:
>> > On Thu, 16 Dec 2010, Keir Fraser wrote:
>> >> On 16/12/2010 20:44, "Charles Arnold" wrote:
>> >>
>> >> >>> On 12/16/2010 at 01:33 PM, in message, Keir
>> >> > Fraser wrote:
>> >> >> On 16/12/2010 19:23, "Charles Arnold" wrote:
>> >> >>
>> >> >>> The bug is that qemu-dm seems to make the assumption that it can mmap from
>> >> >>> dom0 all the memory with which the guest has been defined, instead of the
>> >> >>> memory that is actually available on the host.
>> >> >>
>> >> >> 32-bit dom0? Hm, I thought the qemu mapcache was supposed to limit the total
>> >> >> amount of guest memory mapped at one time, for a 32-bit qemu. For 64-bit
>> >> >> qemu I wouldn't expect to find a limit as low as 3.25G.
>> >> >
>> >> > Sorry, I should have specified that it is a 64-bit dom0 / hypervisor.
>> >>
>> >> Okay, well I'm not sure what limit qemu-dm is hitting then. Mapping 3.25G of
>> >> guest memory will only require a few megabytes of page tables for the qemu
>> >> process in dom0. Perhaps there is a ulimit or something set on the qemu
>> >> process?
>> >>
>> >> If we can work out and detect this limit, perhaps 64-bit qemu-dm could have
>> >> a mapping cache similar to 32-bit qemu-dm, limited to some fraction of the
>> >> detected mapping limit. And/or, on mapping failure, we could reclaim
>> >> resources by simply zapping the existing cached mappings. Seems there are a
>> >> few options. I don't really maintain qemu-dm myself -- you might get some
>> >> help from Ian Jackson, Stefano, or Anthony Perard if you need more advice.
>> >
>> > The mapcache size limit should be 64GB on a 64-bit qemu-dm.
>> > Any interesting error messages in the qemu logs?
>>
>> Despite knowing next to nothing about qemu, I'm not certain the
>> mapcache alone matters here: one would expect it to only
>> consume memory for page table construction, but then you
>> wouldn't need Dom0 to have more memory than the guest for the
>> latter to do heavy I/O. There ought to be something that
>> allocates memory in amounts roughly equivalent to what the
>> guest has under I/O.
>
> Qemu-dm allocates a bounce buffer for each in-flight DMA
> request, because the aio API used in qemu-dm cannot handle sg lists (this
> is probably the main reason to switch to the new qemu).
> However, the bounce buffer is freed as soon as the DMA
> request completes.

But this means that the bounce buffers alone can tie up very close to
the total amount of memory the guest has under I/O at any one time.
Clearly this should be throttled based on available memory (just
consider having multiple such I/O-hungry guests).

Jan