From: Tiejun Chen <tiejun.chen@intel.com>
To: JBeulich@suse.com, ian.campbell@citrix.com,
ian.jackson@eu.citrix.com, stefano.stabellini@eu.citrix.com,
kevin.tian@intel.com, yang.z.zhang@intel.com
Cc: xen-devel@lists.xen.org
Subject: [v6][PATCH 4/7] libxc/hvm_info_table: introduce a new field nr_reserved_device_memory_map
Date: Wed, 10 Sep 2014 13:49:47 +0800 [thread overview]
Message-ID: <1410328190-6372-5-git-send-email-tiejun.chen@intel.com> (raw)
In-Reply-To: <1410328190-6372-1-git-send-email-tiejun.chen@intel.com>
In hvm_info_table this new field represents the number of reserved device
memory maps. It is convenient to expose this information to the VM. While
building the HVM info table, libxc is responsible for filling in this number
after check_rdm_overlap().
Signed-off-by: Tiejun Chen <tiejun.chen@intel.com>
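
A minimal sketch (not part of this patch) of how a guest-side consumer such as
hvmloader could read the new field; it assumes hvmloader's existing
get_hvm_info_table() accessor, and the helper name below is illustrative only:

    /*
     * Illustrative sketch: read the count that libxc fills in via
     * build_hvm_info().  Nothing here is introduced by this patch.
     */
    #include "util.h"                     /* get_hvm_info_table() (hvmloader) */
    #include <xen/hvm/hvm_info_table.h>   /* struct hvm_info_table */

    static uint32_t nr_rdm_entries(void)
    {
        const struct hvm_info_table *hvm_info = get_hvm_info_table();

        /* Filled in after check_rdm_overlap() in xc_hvm_build_x86.c. */
        return hvm_info->nr_reserved_device_memory_map;
    }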
diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
index 299e33a..8c61422 100644
--- a/tools/libxc/xc_hvm_build_x86.c
+++ b/tools/libxc/xc_hvm_build_x86.c
@@ -89,7 +89,8 @@ static int modules_init(struct xc_hvm_build_args *args,
}
static void build_hvm_info(void *hvm_info_page, uint64_t mem_size,
- uint64_t mmio_start, uint64_t mmio_size)
+ uint64_t mmio_start, uint64_t mmio_size,
+ unsigned int num)
{
struct hvm_info_table *hvm_info = (struct hvm_info_table *)
(((unsigned char *)hvm_info_page) + HVM_INFO_OFFSET);
@@ -119,6 +120,9 @@ static void build_hvm_info(void *hvm_info_page, uint64_t mem_size,
hvm_info->high_mem_pgend = highmem_end >> PAGE_SHIFT;
hvm_info->reserved_mem_pgstart = ioreq_server_pfn(0);
+ /* Number of reserved device memory maps. */
+ hvm_info->nr_reserved_device_memory_map = num;
+
/* Finish with the checksum. */
for ( i = 0, sum = 0; i < hvm_info->length; i++ )
sum += ((uint8_t *)hvm_info)[i];
@@ -329,6 +333,7 @@ static int setup_guest(xc_interface *xch,
int claim_enabled = args->claim_enabled;
xen_pfn_t special_array[NR_SPECIAL_PAGES];
xen_pfn_t ioreq_server_array[NR_IOREQ_SERVER_PAGES];
+ unsigned int num_reserved = 0;
if ( nr_pages > target_pages )
pod_mode = XENMEMF_populate_on_demand;
@@ -371,6 +376,8 @@ static int setup_guest(xc_interface *xch,
if ( rc < 0 )
goto error_out;
+ num_reserved = rc;
+
for ( i = 0; i < nr_pages; i++ )
page_array[i] = i;
for ( i = mmio_start >> PAGE_SHIFT; i < nr_pages; i++ )
@@ -540,7 +547,7 @@ static int setup_guest(xc_interface *xch,
xch, dom, PAGE_SIZE, PROT_READ | PROT_WRITE,
HVM_INFO_PFN)) == NULL )
goto error_out;
- build_hvm_info(hvm_info_page, v_end, mmio_start, mmio_size);
+ build_hvm_info(hvm_info_page, v_end, mmio_start, mmio_size, num_reserved);
munmap(hvm_info_page, PAGE_SIZE);
/* Allocate and clear special pages. */
diff --git a/xen/include/public/hvm/hvm_info_table.h b/xen/include/public/hvm/hvm_info_table.h
index 36085fa..bf401d5 100644
--- a/xen/include/public/hvm/hvm_info_table.h
+++ b/xen/include/public/hvm/hvm_info_table.h
@@ -65,6 +65,9 @@ struct hvm_info_table {
*/
uint32_t high_mem_pgend;
+ /* How many reserved device memory maps do we have? */
+ uint32_t nr_reserved_device_memory_map;
+
/* Bitmap of which CPUs are online at boot time. */
uint8_t vcpu_online[(HVM_MAX_VCPUS + 7)/8];
};
--
1.9.1
Thread overview: 37+ messages
2014-09-10 5:49 [v6][PATCH 0/7] xen: reserve RMRR to avoid conflicting MMIO/RAM Tiejun Chen
2014-09-10 5:49 ` [v6][PATCH 1/7] introduce XENMEM_reserved_device_memory_map Tiejun Chen
2014-09-10 21:34 ` Tian, Kevin
2014-09-10 5:49 ` [v6][PATCH 2/7] tools/libxc: introduce hypercall for xc_reserved_device_memory_map Tiejun Chen
2014-09-11 15:21 ` Jan Beulich
2014-09-11 15:23 ` Ian Campbell
2014-09-11 15:55 ` Andrew Cooper
2014-09-12 2:43 ` Chen, Tiejun
2014-09-12 6:20 ` Jan Beulich
2014-09-10 5:49 ` [v6][PATCH 3/7] tools/libxc: check if mmio BAR is out of reserved device memory maps Tiejun Chen
2014-09-10 21:37 ` Tian, Kevin
2014-09-11 1:14 ` Chen, Tiejun
2014-09-11 22:55 ` Tian, Kevin
2014-09-11 15:38 ` Jan Beulich
2014-09-12 2:56 ` Chen, Tiejun
2014-09-12 6:19 ` Jan Beulich
2014-09-10 5:49 ` Tiejun Chen [this message]
2014-09-10 21:39 ` [v6][PATCH 4/7] libxc/hvm_info_table: introduce a new field nr_reserved_device_memory_map Tian, Kevin
2014-09-11 1:16 ` Chen, Tiejun
2014-09-10 5:49 ` [v6][PATCH 5/7] hvmloader: introduce hypercall for xc_reserved_device_memory_map Tiejun Chen
2014-09-10 21:41 ` Tian, Kevin
2014-09-11 1:32 ` Chen, Tiejun
2014-09-11 7:52 ` Jan Beulich
2014-09-11 15:45 ` Jan Beulich
2014-09-12 4:52 ` Chen, Tiejun
2014-09-10 5:49 ` [v6][PATCH 6/7] hvmloader: check to reserved device memory maps in e820 Tiejun Chen
2014-09-11 15:57 ` Jan Beulich
2014-09-12 6:08 ` Jan Beulich
2014-09-12 6:28 ` Chen, Tiejun
2014-09-12 6:44 ` Jan Beulich
2014-09-10 5:49 ` [v6][PATCH 7/7] xen/vtd: make USB RMRR mapping safe Tiejun Chen
2014-09-18 9:11 ` Jan Beulich
2014-09-10 21:44 ` [v6][PATCH 0/7] xen: reserve RMRR to avoid conflicting MMIO/RAM Tian, Kevin
2014-09-11 1:38 ` Chen, Tiejun
2014-09-11 7:48 ` Jan Beulich
2014-09-11 9:39 ` Chen, Tiejun
2014-09-11 10:01 ` Jan Beulich