From: "Jan Beulich"
Subject: Re: Xen 4.0.1 "xc_map_foreign_batch: mmap failed: Cannot allocate memory"
Date: Fri, 07 Jan 2011 12:37:10 +0000
Message-ID: <4D271706020000780002AFB2@vpn.id2.novell.com>
References: <4D26EC59020000780002AF40@vpn.id2.novell.com>
To: Stefano Stabellini
Cc: Charles Arnold, "xen-devel@lists.xensource.com"
List-Id: xen-devel@lists.xenproject.org

>>> On 07.01.11 at 12:18, Stefano Stabellini wrote:
> On Fri, 7 Jan 2011, Jan Beulich wrote:
>> >>> On 06.01.11 at 21:49, Charles Arnold wrote:
>> > >>> On 1/6/2011 at 10:14 AM, in message
>> > <4D25C782.5B74.0091.0@novell.com>, Charles Arnold wrote:
>> > Attached is the messages file with the printk output.
>>
>> Hmm, a failure due to may_expand_vm() is really odd. Something
>> must be explicitly setting a non-infinite RLIMIT_AS on qemu-dm (or
>> one of its parents), as the default is "infinite" (as reaching "infinity"
>> - being ~0UL - is simply impossible, and unduly large lengths should
>> be caught by get_unmapped_area() already).
>>
>> /proc/<pid>/limits would at least tell us what the limit is.
>>
>
> Knowing this would be very interesting.

I just found that on SLE11 this gets set to 80% (admin controllable)
of the sum of physical (not accounting for the balloon) and swap
memory. That's (at least on large systems using a relatively low
dom0_mem= value) likely awfully low for qemu-dm serving large
guests.

However, it's only rlim_cur that gets set this low by default, and
hence it would seem reasonable to me to have qemu-dm bump it to
whatever getrlimit() returns in rlim_max.

>> And certainly qemu-dm needs to be prepared to have a
>> non-infinite address space limit set on it.
>>
>
> Currently the number of buckets and the bucket size in the mapcache are
> statically defined depending on x86_32/x86_64.
> It shouldn't be difficult to make them dynamic depending on RLIMIT_AS.

That still wouldn't help if RLIMIT_AS gets changed when qemu-dm is
already running. The only proper way to deal with the situation as a
whole (including, but not limited to, rlim_max being relatively low) is
to get proper error handling implemented, either causing guest I/O to
be throttled when mmap() fails, or no-longer-used mappings to be
cleared (if that isn't being done already).

Jan
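
For illustration, a minimal sketch of the rlim_cur bump suggested above:
raise the soft RLIMIT_AS to whatever hard limit getrlimit() reports. The
helper name raise_as_limit() is hypothetical and this is not actual
qemu-dm code, just the plain getrlimit()/setrlimit() pattern.

/* Hypothetical helper (not actual qemu-dm code): raise the soft
 * address-space limit to the hard limit the process was started with. */
#include <stdio.h>
#include <sys/resource.h>

static void raise_as_limit(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_AS, &rl) != 0) {
        perror("getrlimit(RLIMIT_AS)");
        return;
    }

    /* Only rlim_cur is set low by default; a process may raise it
     * up to rlim_max without needing extra privileges. */
    if (rl.rlim_cur < rl.rlim_max) {
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_AS, &rl) != 0)
            perror("setrlimit(RLIMIT_AS)");
    }
}

Even with such a bump in place, mmap() can still fail (rlim_max itself may
be relatively low, or the limit may be changed at run time), so the
mapcache would still need the error handling or I/O throttling discussed
at the end of the message.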