From: Dan Magenheimer
Subject: Re: domain creation vs querying free memory (xend and xl)
Date: Thu, 4 Oct 2012 09:59:22 -0700 (PDT)
Message-ID: <48a08581-faa9-40a0-8afd-dc334ab82e43@default>
References: <53b8c758-2675-42a7-b63f-4f9ad0006d84@default> <20581.55931.246130.308384@mariner.uk.xensource.com> <8ba2021c-1095-4fd1-98a5-f6eec8a3498b@default> <20121002091017.GA95926@ocelot.phlegethon.org> <66cc0085-1216-40f7-8059-eaf615202c12@default> <20121002201624.GA98445@ocelot.phlegethon.org> <20121004100645.GC38243@ocelot.phlegethon.org> <51CA094C-870A-4772-A22E-4CB151E854F2@gridcentric.ca>
In-Reply-To: <51CA094C-870A-4772-A22E-4CB151E854F2@gridcentric.ca>
To: Andres Lagar-Cavilla, Tim Deegan
Cc: Olaf Hering, Keir Fraser, Konrad Wilk, George Dunlap, Kurt Hackel, Ian Jackson, xen-devel@lists.xen.org, George Shuklin, Dario Faggioli
List-Id: xen-devel@lists.xenproject.org

> From: Andres Lagar-Cavilla [mailto:andreslc@gridcentric.ca]
> Subject: Re: [Xen-devel] domain creation vs querying free memory (xend and xl)
>
> On Oct 4, 2012, at 6:06 AM, Tim Deegan wrote:
>
> > At 14:56 -0700 on 02 Oct (1349189817), Dan Magenheimer wrote:
> >> Tmem argues that doing "memory capacity transfers" at a page granularity
> >> can only be done efficiently in the hypervisor. This is true for
> >> page-sharing when it breaks a "share" also... it can't go ask the
> >> toolstack to approve allocation of a new page every time a write to a
> >> shared page occurs.
> >>
> >> Does that make sense?
> >
> > Yes.
> > The page-sharing version can be handled by having a pool of
> > dedicated memory for breaking shares, and the toolstack asynchronously
> > replenish that, rather than allowing CoW to use up all memory in the
> > system.
>
> That is doable. One benefit is that it would minimize the chance of a VM
> hitting a CoW ENOMEM. I don't see how it would altogether avoid it.

Agreed, so it doesn't really solve the problem. (See longer reply to Tim.)

> If the objective is trying to put a cap on the unpredictable growth of
> memory allocations via CoW unsharing, two observations: (1) it will
> never grow past the nominal VM footprint; (2) one can put a cap today by
> tweaking d->max_pages -- CoW will fail, the faulting vcpu will sleep,
> and things can be kicked back into action at a later point.

But IIRC, isn't it (2) that has given VMware memory overcommit a bad name?
Any significant memory pressure due to overcommit leads to double-swapping,
which leads to horrible performance?