From: Tiejun Chen <tiejun.chen@intel.com>
To: JBeulich@suse.com, ian.jackson@eu.citrix.com,
stefano.stabellini@eu.citrix.com, ian.campbell@citrix.com,
yang.z.zhang@intel.com, kevin.tian@intel.com
Cc: xen-devel@lists.xen.org
Subject: [RFC][v3][PATCH 5/6] tools:libxc: check if mmio BAR is out of RMRR mappings
Date: Fri, 15 Aug 2014 16:27:17 +0800
Message-ID: <1408091238-18364-6-git-send-email-tiejun.chen@intel.com>
In-Reply-To: <1408091238-18364-1-git-send-email-tiejun.chen@intel.com>

We need to avoid allocating an MMIO BAR that conflicts with any RMRR range.
Signed-off-by: Tiejun Chen <tiejun.chen@intel.com>
---
 tools/libxc/xc_domain.c        | 26 ++++++++++++++++++++++++++
 tools/libxc/xc_hvm_build_x86.c | 23 +++++++++++++++++++++++
 tools/libxc/xenctrl.h          |  4 ++++
 3 files changed, 53 insertions(+)
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index c67ac9a..8d011ef 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -649,6 +649,32 @@ int xc_domain_set_memory_map(xc_interface *xch,
     return rc;
 }
 
+
+int xc_get_rmrr_map(xc_interface *xch,
+                    struct e820entry entries[],
+                    uint32_t max_entries)
+{
+    int rc;
+    struct xen_memory_map memmap = {
+        .nr_entries = max_entries
+    };
+    DECLARE_HYPERCALL_BOUNCE(entries, sizeof(struct e820entry) * max_entries,
+                             XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+
+    if ( !entries || max_entries <= 1 ||
+         xc_hypercall_bounce_pre(xch, entries) )
+        return -1;
+
+    set_xen_guest_handle(memmap.buffer, entries);
+
+    rc = do_memory_op(xch, XENMEM_reserved_device_memory_map,
+                      &memmap, sizeof(memmap));
+
+    xc_hypercall_bounce_post(xch, entries);
+
+    return rc ? rc : memmap.nr_entries;
+}
+
 int xc_get_machine_memory_map(xc_interface *xch,
                               struct e820entry entries[],
                               uint32_t max_entries)
diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
index c81a25b..2196cdb 100644
--- a/tools/libxc/xc_hvm_build_x86.c
+++ b/tools/libxc/xc_hvm_build_x86.c
@@ -262,6 +262,8 @@ static int setup_guest(xc_interface *xch,
     int claim_enabled = args->claim_enabled;
     xen_pfn_t special_array[NR_SPECIAL_PAGES];
     xen_pfn_t ioreq_server_array[NR_IOREQ_SERVER_PAGES];
+    struct e820entry map[E820MAX];
+    uint64_t rmrr_start = 0, rmrr_end = 0;
 
     if ( nr_pages > target_pages )
         pod_mode = XENMEMF_populate_on_demand;
@@ -300,6 +302,27 @@ static int setup_guest(xc_interface *xch,
         goto error_out;
     }
 
+    /* Check that the guest's MMIO range does not overlap any RMRR mapping. */
+    rc = xc_get_rmrr_map(xch, map, E820MAX);
+    if ( rc < 0 )
+    {
+        PERROR("Could not get RMRR info on domain");
+    }
+    else if ( rc )
+    {
+        for ( i = 0; i < rc; i++ )
+        {
+            rmrr_start = map[i].addr;
+            rmrr_end = map[i].addr + map[i].size + 1;
+            if ( check_mmio_hole(rmrr_start, map[i].size + 1, mmio_start, mmio_size) )
+            {
+                PERROR("MMIO [%"PRIx64"]<->[%"PRIx64"] overlaps RMRR [%"PRIx64"]<->[%"PRIx64"]",
+                       mmio_start, mmio_start + mmio_size, rmrr_start, rmrr_end);
+                goto error_out;
+            }
+        }
+    }
+
     for ( i = 0; i < nr_pages; i++ )
         page_array[i] = i;
     for ( i = mmio_start >> PAGE_SHIFT; i < nr_pages; i++ )
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 1c5d0db..6d3b135 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -1270,6 +1270,10 @@ int xc_domain_set_memory_map(xc_interface *xch,
 int xc_get_machine_memory_map(xc_interface *xch,
                               struct e820entry entries[],
                               uint32_t max_entries);
+
+int xc_get_rmrr_map(xc_interface *xch,
+                    struct e820entry entries[],
+                    uint32_t max_entries);
 #endif
 int xc_domain_set_time_offset(xc_interface *xch,
                               uint32_t domid,
--
1.9.1