From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dan Magenheimer
Subject: Re: [RFC/PATCH v2] XENMEM_claim_pages (subop of existing) hypercall
Date: Thu, 15 Nov 2012 11:19:57 -0800 (PST)
Message-ID:
References: <6f7c28bb-ffa9-4b39-929b-2a05d99a77e7@default> <1352982337.3499.119.camel@zakaz.uk.xensource.com> <50A4F07F02000078000A8DBD@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
In-Reply-To: <50A4F07F02000078000A8DBD@nat28.tlf.novell.com>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Jan Beulich , Ian Campbell
Cc: xen-devel@lists.xen.org, "Keir (Xen.org)" , Konrad Wilk , "Tim (Xen.org)" , Dave McCracken , Zhigang Wang
List-Id: xen-devel@lists.xenproject.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Thursday, November 15, 2012 5:39 AM
> To: Ian Campbell; Dan Magenheimer
> Cc: xen-devel@lists.xen.org; Dave McCracken; Konrad Wilk; Zhigang Wang; Keir (Xen.org); Tim (Xen.org)
> Subject: Re: [Xen-devel] [RFC/PATCH v2] XENMEM_claim_pages (subop of existing) hypercall
>
> >>> On 15.11.12 at 13:25, Ian Campbell wrote:
> > Also doesn't this fail to make any sort of guarantee if you are building
> > a 32 bit PV guest, since they require memory under a certain host
> > address limit (160GB IIRC)?
>
> This case is unreliable already, and has always been (I think we
> have a tools side hack in some of our trees in an attempt to deal
> with that), when ballooning is used to get at the memory, or
> when trying to start a 32-bit guest after having run 64-bit ones
> exhausting most of memory, and having terminated an early
> created one (as allocation is top down, ones created close to
> exhaustion, i.e. later, would eat up that "special" memory at
> lower addresses).
> So this new functionality "only" makes a bad situation worse
> (which isn't meant to say I wouldn't prefer to see it get fixed).

Hmmm... I guess I don't see how claim makes the situation worse.
Well, maybe a few microseconds worse.

Old model:
(1) Allocate a huge number of pages.

New model:
(1) Claim a huge number of pages. If successful...
(2) Allocate that huge number of pages.

In either case, the failure conditions are the same, except that the
claim mechanism checks one of the failure conditions sooner.

Or am I misunderstanding?