From: Don Slutz <dslutz@verizon.com>
To: qemu-devel@nongnu.org, xen-devel@lists.xensource.com,
Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Don Slutz <dslutz@verizon.com>
Subject: [Qemu-devel] [PATCH for 2.3 v2 1/1] xen-hvm: increase maxmem before calling xc_domain_populate_physmap
Date: Wed, 3 Dec 2014 08:15:19 -0500
Message-ID: <1417612519-6931-1-git-send-email-dslutz@verizon.com>

From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Increase maxmem before calling xc_domain_populate_physmap_exact to
avoid the risk of running out of guest memory. This way we can also
avoid complex memory calculations in libxl at domain construction
time.

This patch fixes an abort() when assigning more than 4 NICs to a VM.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Don Slutz <dslutz@verizon.com>
---
v2: Changes by Don Slutz

    Switch from xc_domain_getinfo to xc_domain_getinfolist
    Fix the error check for xc_domain_getinfolist
    Only increase maxmem when it is actually needed:
        Add QEMU_SPARE_PAGES (the number of pages to leave free)
        Add the free_pages calculation
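
For reference, the headroom calculation the last two bullets describe, as a
minimal standalone sketch (the helper name and the sample numbers are
illustrative only; in the patch the values come from xc_domain_getinfolist):

    #define QEMU_SPARE_PAGES 16

    /* Pages that may be populated without raising maxmem, keeping a
     * small reserve so unrelated allocations do not hit the limit. */
    static unsigned long usable_free_pages(unsigned long max_pages,
                                           unsigned long tot_pages)
    {
        unsigned long free_pages = max_pages - tot_pages;

        return (free_pages > QEMU_SPARE_PAGES) ?
               free_pages - QEMU_SPARE_PAGES : 0;
    }

    /* Example: max_pages = 1048576 and tot_pages = 1048570 leave a raw
     * headroom of 6 pages.  That is below the 16-page reserve, so the
     * usable headroom is 0 and maxmem must be raised before any new
     * pfns are populated. */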
 xen-hvm.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/xen-hvm.c b/xen-hvm.c
index 7548794..d30e77e 100644
--- a/xen-hvm.c
+++ b/xen-hvm.c
@@ -90,6 +90,7 @@ static inline ioreq_t *xen_vcpu_ioreq(shared_iopage_t *shared_page, int vcpu)
 #endif
 
 #define BUFFER_IO_MAX_DELAY 100
+#define QEMU_SPARE_PAGES 16
 
 typedef struct XenPhysmap {
     hwaddr start_addr;
@@ -244,6 +245,8 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr)
     unsigned long nr_pfn;
     xen_pfn_t *pfn_list;
     int i;
+    xc_domaininfo_t info;
+    unsigned long free_pages;
 
     if (runstate_check(RUN_STATE_INMIGRATE)) {
         /* RAM already populated in Xen */
@@ -266,6 +269,22 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr)
         pfn_list[i] = (ram_addr >> TARGET_PAGE_BITS) + i;
     }
 
+    if ((xc_domain_getinfolist(xen_xc, xen_domid, 1, &info) != 1) ||
+        (info.domain != xen_domid)) {
+        hw_error("xc_domain_getinfolist failed");
+    }
+    free_pages = info.max_pages - info.tot_pages;
+    if (free_pages > QEMU_SPARE_PAGES) {
+        free_pages -= QEMU_SPARE_PAGES;
+    } else {
+        free_pages = 0;
+    }
+    if ((free_pages < nr_pfn) &&
+        (xc_domain_setmaxmem(xen_xc, xen_domid,
+                             ((info.max_pages + nr_pfn - free_pages)
+                              << (XC_PAGE_SHIFT - 10))) < 0)) {
+        hw_error("xc_domain_setmaxmem failed");
+    }
     if (xc_domain_populate_physmap_exact(xen_xc, xen_domid, nr_pfn, 0, 0, pfn_list)) {
         hw_error("xen: failed to populate ram at " RAM_ADDR_FMT, ram_addr);
     }
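
For reference, xc_domain_setmaxmem takes its limit in KiB, while max_pages,
tot_pages and nr_pfn count pages, hence the shift by (XC_PAGE_SHIFT - 10)
above to convert pages to KiB.  With 4 KiB pages (XC_PAGE_SHIFT == 12) that
is a shift by 2, i.e. a multiply by 4; the figures below are made up:

    new_max_kb = (max_pages + nr_pfn - free_pages) << (12 - 10);
    /* e.g. a new ceiling of 262144 pages: 262144 << 2 = 1048576 KiB = 1 GiB */
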
--
1.8.4