From: Don Slutz
Message-ID: <54997DA2.6000408@terremark.com>
Date: Tue, 23 Dec 2014 09:35:14 -0500
References: <1417612519-6931-1-git-send-email-dslutz@verizon.com>
In-Reply-To: <1417612519-6931-1-git-send-email-dslutz@verizon.com>
Subject: Re: [Qemu-devel] [PATCH for 2.3 v2 1/1] xen-hvm: increase maxmem before calling xc_domain_populate_physmap
To: Don Slutz
Cc: xen-devel@lists.xensource.com, qemu-devel@nongnu.org, Stefano Stabellini

Ping.

On 12/03/14 08:15, Don Slutz wrote:
> From: Stefano Stabellini
>
> Increase maxmem before calling xc_domain_populate_physmap_exact to
> avoid the risk of running out of guest memory. This way we can also
> avoid complex memory calculations in libxl at domain construction
> time.
>
> This patch fixes an abort() when assigning more than 4 NICs to a VM.
>
> Signed-off-by: Stefano Stabellini
> Signed-off-by: Don Slutz
> ---
> v2: Changes by Don Slutz
>     Switch from xc_domain_getinfo to xc_domain_getinfolist
>     Fix error check for xc_domain_getinfolist
>     Limit increase of maxmem to only do when needed:
>       Add QEMU_SPARE_PAGES (How many pages to leave free)
>       Add free_pages calculation
>
>  xen-hvm.c | 19 +++++++++++++++++++
>  1 file changed, 19 insertions(+)
>
> diff --git a/xen-hvm.c b/xen-hvm.c
> index 7548794..d30e77e 100644
> --- a/xen-hvm.c
> +++ b/xen-hvm.c
> @@ -90,6 +90,7 @@ static inline ioreq_t *xen_vcpu_ioreq(shared_iopage_t *shared_page, int vcpu)
>  #endif
>
>  #define BUFFER_IO_MAX_DELAY 100
> +#define QEMU_SPARE_PAGES 16
>
>  typedef struct XenPhysmap {
>      hwaddr start_addr;
> @@ -244,6 +245,8 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr)
>      unsigned long nr_pfn;
>      xen_pfn_t *pfn_list;
>      int i;
> +    xc_domaininfo_t info;
> +    unsigned long free_pages;
>
>      if (runstate_check(RUN_STATE_INMIGRATE)) {
>          /* RAM already populated in Xen */
> @@ -266,6 +269,22 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr)
>          pfn_list[i] = (ram_addr >> TARGET_PAGE_BITS) + i;
>      }
>
> +    if ((xc_domain_getinfolist(xen_xc, xen_domid, 1, &info) != 1) ||
> +        (info.domain != xen_domid)) {
> +        hw_error("xc_domain_getinfolist failed");
> +    }
> +    free_pages = info.max_pages - info.tot_pages;
> +    if (free_pages > QEMU_SPARE_PAGES) {
> +        free_pages -= QEMU_SPARE_PAGES;
> +    } else {
> +        free_pages = 0;
> +    }
> +    if ((free_pages < nr_pfn) &&
> +        (xc_domain_setmaxmem(xen_xc, xen_domid,
> +                             ((info.max_pages + nr_pfn - free_pages)
> +                              << (XC_PAGE_SHIFT - 10))) < 0)) {
> +        hw_error("xc_domain_setmaxmem failed");
> +    }
>      if (xc_domain_populate_physmap_exact(xen_xc, xen_domid, nr_pfn, 0, 0, pfn_list)) {
>          hw_error("xen: failed to populate ram at " RAM_ADDR_FMT, ram_addr);
>      }
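
For readers skimming the thread, the headroom computation above boils down
to the following standalone restatement (a minimal sketch, not part of the
patch; the function name new_maxmem_kb is mine, and I am assuming 4 KiB
pages, i.e. XC_PAGE_SHIFT == 12, and that xc_domain_setmaxmem() takes its
limit in KiB, as libxenctrl's max_memkb parameter does):

  #include <stdint.h>

  #define QEMU_SPARE_PAGES 16
  #define XC_PAGE_SHIFT    12   /* assumes 4 KiB pages */

  /* Value (in KiB) that would be passed to xc_domain_setmaxmem(),
   * or 0 if the domain already has enough headroom. */
  static uint64_t new_maxmem_kb(uint64_t max_pages, uint64_t tot_pages,
                                uint64_t nr_pfn)
  {
      uint64_t free_pages = max_pages - tot_pages;

      /* Keep QEMU_SPARE_PAGES in reserve for other allocations. */
      free_pages = free_pages > QEMU_SPARE_PAGES
                 ? free_pages - QEMU_SPARE_PAGES : 0;

      if (free_pages >= nr_pfn) {
          return 0;   /* enough headroom already, no maxmem raise */
      }
      /* Raise maxmem by exactly the shortfall; XC_PAGE_SHIFT - 10
       * converts pages to KiB (<< 2, i.e. x4, for 4 KiB pages). */
      return (max_pages + nr_pfn - free_pages) << (XC_PAGE_SHIFT - 10);
  }

For example, with max_pages = 1024, tot_pages = 1020 and nr_pfn = 64:
free_pages starts at 4, the spare-page reserve drops it to 0, and the
result is (1024 + 64 - 0) << 2 = 4352 KiB, i.e. maxmem grows by the full
64 pages requested.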