* [PATCH 1/5] xen/balloon: account for pages released during memory setup
2011-09-28 16:46 [PATCH 0/5] xen: memory initialization/balloon fixes (#4) David Vrabel
@ 2011-09-28 16:46 ` David Vrabel
2011-09-28 16:46 ` [PATCH 2/5] xen/balloon: simplify test for the end of usable RAM David Vrabel
` (3 subsequent siblings)
4 siblings, 0 replies; 15+ messages in thread
From: David Vrabel @ 2011-09-28 16:46 UTC (permalink / raw)
To: xen-devel; +Cc: David Vrabel, Konrad Rzeszutek Wilk
From: David Vrabel <david.vrabel@citrix.com>
In xen_memory_setup() pages that occur in gaps in the memory map are
released back to Xen. This reduces the domain's current page count in
the hypervisor. The Xen balloon driver does not correctly decrease
its initial current_pages count to reflect this. If 'delta' pages are
released and the target is adjusted, the resulting reservation is
always 'delta' less than the requested target.
This affects dom0 if the initial allocation of pages overlaps the PCI
memory region, but won't affect most domU guests that have been set up
with pseudo-physical memory maps that don't have gaps.
Fix this by accounting for the released pages when starting the balloon
driver.
If the domain's targets are managed by xapi, the domain may eventually
run out of memory and die because xapi currently gets its target
calculations wrong and, whenever it is restarted, it always reduces the
target by 'delta'.
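(Editorial illustration, not part of the patch: a minimal standalone
model of the failure mode, using made-up numbers.)

#include <stdio.h>

int main(void)
{
	unsigned long nr_pages = 1000;	/* xen_start_info->nr_pages */
	unsigned long delta = 100;	/* pages released in e820 gaps */
	unsigned long actual = nr_pages - delta;	/* hypervisor's count: 900 */

	unsigned long current_pages = nr_pages;	/* stale count, without this fix */
	unsigned long target = 950;

	/* The driver balloons out (current_pages - target) pages. */
	actual -= current_pages - target;	/* 900 - 50 = 850 */

	printf("reached %lu, wanted %lu: short by %lu (== delta)\n",
	       actual, target, target - actual);
	return 0;
}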
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
arch/x86/xen/setup.c | 7 ++++++-
drivers/xen/balloon.c | 4 +++-
include/xen/page.h | 2 ++
3 files changed, 11 insertions(+), 2 deletions(-)
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 46d6d21..c983717 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -39,6 +39,9 @@ extern void xen_syscall32_target(void);
/* Amount of extra memory space we add to the e820 ranges */
phys_addr_t xen_extra_mem_start, xen_extra_mem_size;
+/* Number of pages released from the initial allocation. */
+unsigned long xen_released_pages;
+
/*
* The maximum amount of extra memory compared to the base size. The
* main scaling factor is the size of struct page. At extreme ratios
@@ -313,7 +316,9 @@ char * __init xen_memory_setup(void)
extra_pages = 0;
}
- extra_pages += xen_return_unused_memory(xen_start_info->nr_pages, &e820);
+ xen_released_pages = xen_return_unused_memory(xen_start_info->nr_pages,
+ &e820);
+ extra_pages += xen_released_pages;
/*
* Clamp the amount of extra memory to a EXTRA_MEM_RATIO
diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 5dfd8f8..4f59fb3 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -565,7 +565,9 @@ static int __init balloon_init(void)
pr_info("xen/balloon: Initialising balloon driver.\n");
- balloon_stats.current_pages = xen_pv_domain() ? min(xen_start_info->nr_pages, max_pfn) : max_pfn;
+ balloon_stats.current_pages = xen_pv_domain()
+ ? min(xen_start_info->nr_pages - xen_released_pages, max_pfn)
+ : max_pfn;
balloon_stats.target_pages = balloon_stats.current_pages;
balloon_stats.balloon_low = 0;
balloon_stats.balloon_high = 0;
diff --git a/include/xen/page.h b/include/xen/page.h
index 0be36b9..92b61f8 100644
--- a/include/xen/page.h
+++ b/include/xen/page.h
@@ -5,4 +5,6 @@
extern phys_addr_t xen_extra_mem_start, xen_extra_mem_size;
+extern unsigned long xen_released_pages;
+
#endif /* _XEN_PAGE_H */
--
1.7.2.5
* [PATCH 2/5] xen/balloon: simplify test for the end of usable RAM
2011-09-28 16:46 [PATCH 0/5] xen: memory initialization/balloon fixes (#4) David Vrabel
2011-09-28 16:46 ` [PATCH 1/5] xen/balloon: account for pages released during memory setup David Vrabel
@ 2011-09-28 16:46 ` David Vrabel
2011-09-28 16:46 ` [PATCH 3/5] xen: allow balloon driver to use more than one memory region David Vrabel
` (2 subsequent siblings)
4 siblings, 0 replies; 15+ messages in thread
From: David Vrabel @ 2011-09-28 16:46 UTC (permalink / raw)
To: xen-devel; +Cc: David Vrabel, Konrad Rzeszutek Wilk
From: David Vrabel <david.vrabel@citrix.com>
When initializing the balloon, only max_pfn needs to be checked
(max_pfn will always be <= e820_end_of_ram_pfn()). Also improve the
confusing comment.
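For reference, the PFN rounding helpers used by this code are defined
in include/linux/pfn.h as:

#define PFN_UP(x)	(((x) + PAGE_SIZE-1) >> PAGE_SHIFT)	/* round up */
#define PFN_DOWN(x)	((x) >> PAGE_SHIFT)			/* round down */

so the loop starts at the first whole page of the extra region and
the new bound stops it at the lower of max_pfn and the region's end.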
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
drivers/xen/balloon.c | 18 +++++++++---------
1 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 4f59fb3..9efb993 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -586,16 +586,16 @@ static int __init balloon_init(void)
#endif
/*
- * Initialise the balloon with excess memory space. We need
- * to make sure we don't add memory which doesn't exist or
- * logically exist. The E820 map can be trimmed to be smaller
- * than the amount of physical memory due to the mem= command
- * line parameter. And if this is a 32-bit non-HIGHMEM kernel
- * on a system with memory which requires highmem to access,
- * don't try to use it.
+ * Initialize the balloon with pages from the extra memory
+ * region (see arch/x86/xen/setup.c).
+ *
+ * If the amount of usable memory has been limited (e.g., with
+ * the 'mem' command line parameter), don't add pages beyond
+ * this limit.
*/
- extra_pfn_end = min(min(max_pfn, e820_end_of_ram_pfn()),
- (unsigned long)PFN_DOWN(xen_extra_mem_start + xen_extra_mem_size));
+ extra_pfn_end = min(max_pfn,
+ (unsigned long)PFN_DOWN(xen_extra_mem_start
+ + xen_extra_mem_size));
for (pfn = PFN_UP(xen_extra_mem_start);
pfn < extra_pfn_end;
pfn++) {
--
1.7.2.5
* [PATCH 3/5] xen: allow balloon driver to use more than one memory region
2011-09-28 16:46 [PATCH 0/5] xen: memory initialization/balloon fixes (#4) David Vrabel
2011-09-28 16:46 ` [PATCH 1/5] xen/balloon: account for pages released during memory setup David Vrabel
2011-09-28 16:46 ` [PATCH 2/5] xen/balloon: simplify test for the end of usable RAM David Vrabel
@ 2011-09-28 16:46 ` David Vrabel
2011-09-28 16:46 ` [PATCH 4/5] xen: allow extra memory to be in multiple regions David Vrabel
2011-09-28 16:46 ` [PATCH 5/5] " David Vrabel
4 siblings, 0 replies; 15+ messages in thread
From: David Vrabel @ 2011-09-28 16:46 UTC (permalink / raw)
To: xen-devel; +Cc: David Vrabel, Konrad Rzeszutek Wilk
From: David Vrabel <david.vrabel@citrix.com>
Allow the xen balloon driver to populate its list of extra pages from
more than one region of memory. This will allow platforms to provide
(for example) a region of low memory and a region of high memory.
The maximum possible number of extra regions is 128 (== E820MAX),
which makes the xen_extra_mem array quite large (2 KiB with a 64-bit
phys_addr_t), so it is placed in __initdata. This is safe as both
xen_memory_setup() and balloon_init() are in __init.
The extra memory regions themselves are not altered (i.e., setup.c
still populates only the one region).
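(Editorial sketch: a standalone model of how a fixed array of
{start, size} regions ends up populated and coalesced once the
follow-up setup.c changes start adding ranges; addresses are made
up.)

#include <stdio.h>
#include <stdint.h>

#define MAX_REGIONS 128	/* == E820MAX */

struct region { uint64_t start, size; };
static struct region regions[MAX_REGIONS];

static void add_region(uint64_t start, uint64_t size)
{
	int i;

	for (i = 0; i < MAX_REGIONS; i++) {
		if (regions[i].size == 0) {	/* free slot: new region */
			regions[i].start = start;
			regions[i].size = size;
			return;
		}
		/* Range adjoins an existing region: extend it. */
		if (regions[i].start + regions[i].size == start) {
			regions[i].size += size;
			return;
		}
	}
	fprintf(stderr, "not enough extra memory regions\n");
}

int main(void)
{
	int i;

	add_region(0x20000000ULL, 0x1000000);	/* low region */
	add_region(0x21000000ULL, 0x1000000);	/* coalesces with it */
	add_region(0x100000000ULL, 0x2000000);	/* separate high region */

	for (i = 0; i < MAX_REGIONS && regions[i].size != 0; i++)
		printf("region %d: start %#llx, size %#llx\n", i,
		       (unsigned long long)regions[i].start,
		       (unsigned long long)regions[i].size);
	return 0;
}

This prints two regions, one low and one high, which is the typical
layout the balloon driver then consumes.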
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
arch/x86/xen/setup.c | 20 ++++++++++----------
drivers/xen/balloon.c | 44 +++++++++++++++++++++++++++-----------------
include/xen/page.h | 10 +++++++++-
3 files changed, 46 insertions(+), 28 deletions(-)
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index c983717..0c8e974 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -37,7 +37,7 @@ extern void xen_syscall_target(void);
extern void xen_syscall32_target(void);
/* Amount of extra memory space we add to the e820 ranges */
-phys_addr_t xen_extra_mem_start, xen_extra_mem_size;
+struct xen_memory_region xen_extra_mem[XEN_EXTRA_MEM_MAX_REGIONS] __initdata;
/* Number of pages released from the initial allocation. */
unsigned long xen_released_pages;
@@ -59,7 +59,7 @@ static void __init xen_add_extra_mem(unsigned long pages)
unsigned long pfn;
u64 size = (u64)pages * PAGE_SIZE;
- u64 extra_start = xen_extra_mem_start + xen_extra_mem_size;
+ u64 extra_start = xen_extra_mem[0].start + xen_extra_mem[0].size;
if (!pages)
return;
@@ -69,7 +69,7 @@ static void __init xen_add_extra_mem(unsigned long pages)
memblock_x86_reserve_range(extra_start, extra_start + size, "XEN EXTRA");
- xen_extra_mem_size += size;
+ xen_extra_mem[0].size += size;
xen_max_p2m_pfn = PFN_DOWN(extra_start + size);
@@ -242,7 +242,7 @@ char * __init xen_memory_setup(void)
memcpy(map_raw, map, sizeof(map));
e820.nr_map = 0;
- xen_extra_mem_start = mem_end;
+ xen_extra_mem[0].start = mem_end;
for (i = 0; i < memmap.nr_entries; i++) {
unsigned long long end;
@@ -270,8 +270,8 @@ char * __init xen_memory_setup(void)
e820_add_region(end, delta, E820_UNUSABLE);
}
- if (map[i].size > 0 && end > xen_extra_mem_start)
- xen_extra_mem_start = end;
+ if (map[i].size > 0 && end > xen_extra_mem[0].start)
+ xen_extra_mem[0].start = end;
/* Add region if any remains */
if (map[i].size > 0)
@@ -279,10 +279,10 @@ char * __init xen_memory_setup(void)
}
/* Align the balloon area so that max_low_pfn does not get set
* to be at the _end_ of the PCI gap at the far end (fee01000).
- * Note that xen_extra_mem_start gets set in the loop above to be
- * past the last E820 region. */
- if (xen_initial_domain() && (xen_extra_mem_start < (1ULL<<32)))
- xen_extra_mem_start = (1ULL<<32);
+ * Note that the start of balloon area gets set in the loop above
+ * to be past the last E820 region. */
+ if (xen_initial_domain() && (xen_extra_mem[0].start < (1ULL<<32)))
+ xen_extra_mem[0].start = (1ULL<<32);
/*
* In domU, the ISA region is normal, usable memory, but we
diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 9efb993..fc43b53 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -555,11 +555,32 @@ void free_xenballooned_pages(int nr_pages, struct page** pages)
}
EXPORT_SYMBOL(free_xenballooned_pages);
-static int __init balloon_init(void)
+static void __init balloon_add_region(unsigned long start_pfn,
+ unsigned long pages)
{
unsigned long pfn, extra_pfn_end;
struct page *page;
+ /*
+ * If the amount of usable memory has been limited (e.g., with
+ * the 'mem' command line parameter), don't add pages beyond
+ * this limit.
+ */
+ extra_pfn_end = min(max_pfn, start_pfn + pages);
+
+ for (pfn = start_pfn; pfn < extra_pfn_end; pfn++) {
+ page = pfn_to_page(pfn);
+ /* totalram_pages and totalhigh_pages do not
+ include the boot-time balloon extension, so
+ don't subtract from it. */
+ __balloon_append(page);
+ }
+}
+
+static int __init balloon_init(void)
+{
+ int i;
+
if (!xen_domain())
return -ENODEV;
@@ -587,23 +608,12 @@ static int __init balloon_init(void)
/*
* Initialize the balloon with pages from the extra memory
- * region (see arch/x86/xen/setup.c).
- *
- * If the amount of usable memory has been limited (e.g., with
- * the 'mem' command line parameter), don't add pages beyond
- * this limit.
+ * regions (see arch/x86/xen/setup.c).
*/
- extra_pfn_end = min(max_pfn,
- (unsigned long)PFN_DOWN(xen_extra_mem_start
- + xen_extra_mem_size));
- for (pfn = PFN_UP(xen_extra_mem_start);
- pfn < extra_pfn_end;
- pfn++) {
- page = pfn_to_page(pfn);
- /* totalram_pages and totalhigh_pages do not include the boot-time
- balloon extension, so don't subtract from it. */
- __balloon_append(page);
- }
+ for (i = 0; i < XEN_EXTRA_MEM_MAX_REGIONS; i++)
+ if (xen_extra_mem[i].size)
+ balloon_add_region(PFN_UP(xen_extra_mem[i].start),
+ PFN_DOWN(xen_extra_mem[i].size));
return 0;
}
diff --git a/include/xen/page.h b/include/xen/page.h
index 92b61f8..12765b6 100644
--- a/include/xen/page.h
+++ b/include/xen/page.h
@@ -3,7 +3,15 @@
#include <asm/xen/page.h>
-extern phys_addr_t xen_extra_mem_start, xen_extra_mem_size;
+struct xen_memory_region {
+ phys_addr_t start;
+ phys_addr_t size;
+};
+
+#define XEN_EXTRA_MEM_MAX_REGIONS 128 /* == E820MAX */
+
+extern __initdata
+struct xen_memory_region xen_extra_mem[XEN_EXTRA_MEM_MAX_REGIONS];
extern unsigned long xen_released_pages;
--
1.7.2.5
* [PATCH 4/5] xen: allow extra memory to be in multiple regions
2011-09-28 16:46 [PATCH 0/5] xen: memory initialization/balloon fixes (#4) David Vrabel
` (2 preceding siblings ...)
2011-09-28 16:46 ` [PATCH 3/5] xen: allow balloon driver to use more than one memory region David Vrabel
@ 2011-09-28 16:46 ` David Vrabel
2011-09-29 11:08 ` [PATCH] xen: release all pages within 1-1 p2m mappings David Vrabel
` (2 more replies)
2011-09-28 16:46 ` [PATCH 5/5] " David Vrabel
4 siblings, 3 replies; 15+ messages in thread
From: David Vrabel @ 2011-09-28 16:46 UTC (permalink / raw)
To: xen-devel; +Cc: David Vrabel, Konrad Rzeszutek Wilk
From: David Vrabel <david.vrabel@citrix.com>
Allow the extra memory (used by the balloon driver) to be in multiple
regions (typically two regions, one for low memory and one for high
memory). This allows the balloon driver to increase the number of
available low pages (if the initial number of pages is small).
As a side effect, the algorithm for building the e820 memory map is
simpler and more obviously correct as the map supplied by the
hypervisor is (almost) used as is (in particular, all reserved regions
and gaps are preserved). Only RAM regions are altered, and RAM regions
above max_pfn + extra_pages are marked as unusable (the region is
split in two if necessary).
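(Editorial sketch: a standalone model of the new map-building loop.
Addresses and limits are made up, and the page alignment and
xen_extra_mem bookkeeping of the real code are omitted.)

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL

struct entry { uint64_t addr, size; int ram; };

int main(void)
{
	struct entry map[] = {
		{ 0x00000000, 0x0009f000, 1 },	/* low RAM */
		{ 0x00100000, 0x3ff00000, 1 },	/* main RAM */
	};
	uint64_t mem_end = 0x20000000;	/* end of the initial allocation */
	uint64_t extra_pages = 0x1000;	/* pages granted to the balloon */
	size_t i = 0, n = sizeof(map) / sizeof(map[0]);

	while (i < n) {
		uint64_t addr = map[i].addr, size = map[i].size;
		const char *type = map[i].ram ? "RAM" : "non-RAM";

		if (map[i].ram) {
			if (addr < mem_end) {
				if (size > mem_end - addr)
					size = mem_end - addr;
			} else if (extra_pages) {
				if (size > extra_pages * PAGE_SIZE)
					size = extra_pages * PAGE_SIZE;
				extra_pages -= size / PAGE_SIZE;
				type = "RAM (extra, for the balloon)";
			} else {
				type = "UNUSABLE";
			}
		}
		printf("%#011llx-%#011llx %s\n",
		       (unsigned long long)addr,
		       (unsigned long long)(addr + size), type);

		/* Consume the handled part; re-process any remainder. */
		map[i].addr += size;
		map[i].size -= size;
		if (map[i].size == 0)
			i++;
	}
	return 0;
}

The main RAM entry is emitted in three pieces: usable RAM up to
mem_end, a stretch handed to the balloon, and the rest marked
unusable.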
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
arch/x86/xen/setup.c | 182 ++++++++++++++++++++++++--------------------------
1 files changed, 86 insertions(+), 96 deletions(-)
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 0c8e974..03eda3c 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -54,26 +54,32 @@ unsigned long xen_released_pages;
*/
#define EXTRA_MEM_RATIO (10)
-static void __init xen_add_extra_mem(unsigned long pages)
+static void __init xen_add_extra_mem(u64 start, u64 size)
{
unsigned long pfn;
+ int i;
- u64 size = (u64)pages * PAGE_SIZE;
- u64 extra_start = xen_extra_mem[0].start + xen_extra_mem[0].size;
-
- if (!pages)
- return;
-
- e820_add_region(extra_start, size, E820_RAM);
- sanitize_e820_map(e820.map, ARRAY_SIZE(e820.map), &e820.nr_map);
-
- memblock_x86_reserve_range(extra_start, extra_start + size, "XEN EXTRA");
+ for (i = 0; i < XEN_EXTRA_MEM_MAX_REGIONS; i++) {
+ /* Add new region. */
+ if (xen_extra_mem[i].size == 0) {
+ xen_extra_mem[i].start = start;
+ xen_extra_mem[i].size = size;
+ break;
+ }
+ /* Append to existing region. */
+ if (xen_extra_mem[i].start + xen_extra_mem[i].size == start) {
+ xen_extra_mem[i].size += size;
+ break;
+ }
+ }
+ if (i == XEN_EXTRA_MEM_MAX_REGIONS)
+ printk(KERN_WARNING "Warning: not enough extra memory regions\n");
- xen_extra_mem[0].size += size;
+ memblock_x86_reserve_range(start, start + size, "XEN EXTRA");
- xen_max_p2m_pfn = PFN_DOWN(extra_start + size);
+ xen_max_p2m_pfn = PFN_DOWN(start + size);
- for (pfn = PFN_DOWN(extra_start); pfn <= xen_max_p2m_pfn; pfn++)
+ for (pfn = PFN_DOWN(start); pfn <= xen_max_p2m_pfn; pfn++)
__set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
}
@@ -120,8 +126,8 @@ static unsigned long __init xen_release_chunk(phys_addr_t start_addr,
return len;
}
-static unsigned long __init xen_return_unused_memory(unsigned long max_pfn,
- const struct e820map *e820)
+static unsigned long __init xen_return_unused_memory(
+ unsigned long max_pfn, const struct e820entry *map, int nr_map)
{
phys_addr_t max_addr = PFN_PHYS(max_pfn);
phys_addr_t last_end = ISA_END_ADDRESS;
@@ -129,13 +135,13 @@ static unsigned long __init xen_return_unused_memory(unsigned long max_pfn,
int i;
/* Free any unused memory above the low 1Mbyte. */
- for (i = 0; i < e820->nr_map && last_end < max_addr; i++) {
- phys_addr_t end = e820->map[i].addr;
+ for (i = 0; i < nr_map && last_end < max_addr; i++) {
+ phys_addr_t end = map[i].addr;
end = min(max_addr, end);
if (last_end < end)
released += xen_release_chunk(last_end, end);
- last_end = max(last_end, e820->map[i].addr + e820->map[i].size);
+ last_end = max(last_end, map[i].addr + map[i].size);
}
if (last_end < max_addr)
@@ -200,20 +206,32 @@ static unsigned long __init xen_get_max_pages(void)
return min(max_pages, MAX_DOMAIN_PAGES);
}
+static void xen_align_and_add_e820_region(u64 start, u64 size, int type)
+{
+ u64 end = start + size;
+
+ /* Align RAM regions to page boundaries. */
+ if (type == E820_RAM || type == E820_UNUSABLE) {
+ start = PAGE_ALIGN(start);
+ end &= ~((u64)PAGE_SIZE - 1);
+ }
+
+ e820_add_region(start, end - start, type);
+}
+
/**
* machine_specific_memory_setup - Hook for machine specific memory setup.
**/
char * __init xen_memory_setup(void)
{
static struct e820entry map[E820MAX] __initdata;
- static struct e820entry map_raw[E820MAX] __initdata;
unsigned long max_pfn = xen_start_info->nr_pages;
unsigned long long mem_end;
int rc;
struct xen_memory_map memmap;
+ unsigned long max_pages;
unsigned long extra_pages = 0;
- unsigned long extra_limit;
unsigned long identity_pages = 0;
int i;
int op;
@@ -240,49 +258,55 @@ char * __init xen_memory_setup(void)
}
BUG_ON(rc);
- memcpy(map_raw, map, sizeof(map));
- e820.nr_map = 0;
- xen_extra_mem[0].start = mem_end;
- for (i = 0; i < memmap.nr_entries; i++) {
- unsigned long long end;
-
- /* Guard against non-page aligned E820 entries. */
- if (map[i].type == E820_RAM)
- map[i].size -= (map[i].size + map[i].addr) % PAGE_SIZE;
-
- end = map[i].addr + map[i].size;
- if (map[i].type == E820_RAM && end > mem_end) {
- /* RAM off the end - may be partially included */
- u64 delta = min(map[i].size, end - mem_end);
-
- map[i].size -= delta;
- end -= delta;
-
- extra_pages += PFN_DOWN(delta);
- /*
- * Set RAM below 4GB that is not for us to be unusable.
- * This prevents "System RAM" address space from being
- * used as potential resource for I/O address (happens
- * when 'allocate_resource' is called).
- */
- if (delta &&
- (xen_initial_domain() && end < 0x100000000ULL))
- e820_add_region(end, delta, E820_UNUSABLE);
+ /* Make sure the Xen-supplied memory map is well-ordered. */
+ sanitize_e820_map(map, memmap.nr_entries, &memmap.nr_entries);
+
+ max_pages = xen_get_max_pages();
+ if (max_pages > max_pfn)
+ extra_pages += max_pages - max_pfn;
+
+ xen_released_pages = xen_return_unused_memory(max_pfn, map,
+ memmap.nr_entries);
+ extra_pages += xen_released_pages;
+
+ /*
+ * Clamp the amount of extra memory to a EXTRA_MEM_RATIO
+ * factor the base size. On non-highmem systems, the base
+ * size is the full initial memory allocation; on highmem it
+ * is limited to the max size of lowmem, so that it doesn't
+ * get completely filled.
+ *
+ * In principle there could be a problem in lowmem systems if
+ * the initial memory is also very large with respect to
+ * lowmem, but we won't try to deal with that here.
+ */
+ extra_pages = min(EXTRA_MEM_RATIO * min(max_pfn, PFN_DOWN(MAXMEM)),
+ extra_pages);
+
+ i = 0;
+ while (i < memmap.nr_entries) {
+ u64 addr = map[i].addr;
+ u64 size = map[i].size;
+ u32 type = map[i].type;
+
+ if (type == E820_RAM) {
+ if (addr < mem_end) {
+ size = min(size, mem_end - addr);
+ } else if (extra_pages) {
+ size = min(size, (u64)extra_pages * PAGE_SIZE);
+ extra_pages -= size / PAGE_SIZE;
+ xen_add_extra_mem(addr, size);
+ } else
+ type = E820_UNUSABLE;
}
- if (map[i].size > 0 && end > xen_extra_mem[0].start)
- xen_extra_mem[0].start = end;
+ xen_align_and_add_e820_region(addr, size, type);
- /* Add region if any remains */
- if (map[i].size > 0)
- e820_add_region(map[i].addr, map[i].size, map[i].type);
+ map[i].addr += size;
+ map[i].size -= size;
+ if (map[i].size == 0)
+ i++;
}
- /* Align the balloon area so that max_low_pfn does not get set
- * to be at the _end_ of the PCI gap at the far end (fee01000).
- * Note that the start of balloon area gets set in the loop above
- * to be past the last E820 region. */
- if (xen_initial_domain() && (xen_extra_mem[0].start < (1ULL<<32)))
- xen_extra_mem[0].start = (1ULL<<32);
/*
* In domU, the ISA region is normal, usable memory, but we
@@ -308,45 +332,11 @@ char * __init xen_memory_setup(void)
sanitize_e820_map(e820.map, ARRAY_SIZE(e820.map), &e820.nr_map);
- extra_limit = xen_get_max_pages();
- if (max_pfn + extra_pages > extra_limit) {
- if (extra_limit > max_pfn)
- extra_pages = extra_limit - max_pfn;
- else
- extra_pages = 0;
- }
-
- xen_released_pages = xen_return_unused_memory(xen_start_info->nr_pages,
- &e820);
- extra_pages += xen_released_pages;
-
- /*
- * Clamp the amount of extra memory to a EXTRA_MEM_RATIO
- * factor the base size. On non-highmem systems, the base
- * size is the full initial memory allocation; on highmem it
- * is limited to the max size of lowmem, so that it doesn't
- * get completely filled.
- *
- * In principle there could be a problem in lowmem systems if
- * the initial memory is also very large with respect to
- * lowmem, but we won't try to deal with that here.
- */
- extra_limit = min(EXTRA_MEM_RATIO * min(max_pfn, PFN_DOWN(MAXMEM)),
- max_pfn + extra_pages);
-
- if (extra_limit >= max_pfn)
- extra_pages = extra_limit - max_pfn;
- else
- extra_pages = 0;
-
- xen_add_extra_mem(extra_pages);
-
/*
* Set P2M for all non-RAM pages and E820 gaps to be identity
- * type PFNs. We supply it with the non-sanitized version
- * of the E820.
+ * type PFNs.
*/
- identity_pages = xen_set_identity(map_raw, memmap.nr_entries);
+ identity_pages = xen_set_identity(e820.map, e820.nr_map);
printk(KERN_INFO "Set %ld page(s) to 1-1 mapping.\n", identity_pages);
return "Xen";
}
--
1.7.2.5
* [PATCH] xen: release all pages within 1-1 p2m mappings
2011-09-28 16:46 ` [PATCH 4/5] xen: allow extra memory to be in multiple regions David Vrabel
@ 2011-09-29 11:08 ` David Vrabel
2011-09-29 11:29 ` David Vrabel
2011-09-29 11:26 ` [PATCH 1/2] xen: allow extra memory to be in multiple regions David Vrabel
2011-09-29 11:26 ` [PATCH 2/2] xen: release all pages within 1-1 p2m mappings David Vrabel
2 siblings, 1 reply; 15+ messages in thread
From: David Vrabel @ 2011-09-29 11:08 UTC (permalink / raw)
To: xen-devel; +Cc: David Vrabel, Konrad Rzeszutek Wilk
From: David Vrabel <david.vrabel@citrix.com>
In xen_memory_setup() all reserved regions and gaps are set to an
identity (1-1) p2m mapping. If an available page has a PFN within one
of these 1-1 mappings it will become inaccessible (as its MFN is
lost), so release such pages before setting up the mapping.
This can make an additional 256 MiB or more of RAM available
(depending on the size of the reserved regions in the memory map) if
the initial pages overlap with reserved regions.
The 1:1 p2m mappings are also extended to cover partial pages. This
fixes an issue with (for example) systems with a BIOS that puts the
DMI tables in a reserved region that begins on a non-page boundary.
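(Editorial sketch of the partial-page rounding, with a hypothetical
reserved region.)

#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)
#define PFN_UP(x)   (((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)

int main(void)
{
	/* Hypothetical reserved region holding the DMI tables; it
	 * starts in the middle of a page. */
	unsigned long start = 0x9fc00, end = 0xa0000;

	/* Round outwards so partial pages are covered. */
	printf("identity-map pfns %#lx..%#lx\n",
	       PFN_DOWN(start), PFN_UP(end));
	/* -> 0x9f..0xa0: page 0x9f is only partially inside the
	 * region but is still identity-mapped, so the DMI tables
	 * at 0x9fc00 remain accessible. */
	return 0;
}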
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
arch/x86/xen/setup.c | 117 ++++++++++++++++++--------------------------------
1 files changed, 42 insertions(+), 75 deletions(-)
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 2ad2fd5..38d0af4 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -83,25 +83,18 @@ static void __init xen_add_extra_mem(u64 start, u64 size)
__set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
}
-static unsigned long __init xen_release_chunk(phys_addr_t start_addr,
- phys_addr_t end_addr)
+static unsigned long __init xen_release_chunk(unsigned long start,
+ unsigned long end)
{
struct xen_memory_reservation reservation = {
.address_bits = 0,
.extent_order = 0,
.domid = DOMID_SELF
};
- unsigned long start, end;
unsigned long len = 0;
unsigned long pfn;
int ret;
- start = PFN_UP(start_addr);
- end = PFN_DOWN(end_addr);
-
- if (end <= start)
- return 0;
-
for(pfn = start; pfn < end; pfn++) {
unsigned long mfn = pfn_to_mfn(pfn);
@@ -126,72 +119,52 @@ static unsigned long __init xen_release_chunk(phys_addr_t start_addr,
return len;
}
-static unsigned long __init xen_return_unused_memory(
- unsigned long max_pfn, const struct e820entry *map, int nr_map)
+static unsigned long __init xen_set_identity_and_release(
+ const struct e820entry *list, size_t map_size, unsigned long nr_pages)
{
- phys_addr_t max_addr = PFN_PHYS(max_pfn);
- phys_addr_t last_end = ISA_END_ADDRESS;
+ phys_addr_t start = 0;
unsigned long released = 0;
- int i;
-
- /* Free any unused memory above the low 1Mbyte. */
- for (i = 0; i < nr_map && last_end < max_addr; i++) {
- phys_addr_t end = map[i].addr;
- end = min(max_addr, end);
-
- if (last_end < end)
- released += xen_release_chunk(last_end, end);
- last_end = max(last_end, map[i].addr + map[i].size);
- }
-
- if (last_end < max_addr)
- released += xen_release_chunk(last_end, max_addr);
-
- printk(KERN_INFO "released %lu pages of unused memory\n", released);
- return released;
-}
-
-static unsigned long __init xen_set_identity(const struct e820entry *list,
- ssize_t map_size)
-{
- phys_addr_t last = xen_initial_domain() ? 0 : ISA_END_ADDRESS;
- phys_addr_t start_pci = last;
- const struct e820entry *entry;
unsigned long identity = 0;
+ const struct e820entry *entry;
int i;
+ /*
+ * Combine non-RAM regions and gaps until a RAM region (or the
+ * end of the map) is reached, then set the 1:1 map and
+ * release the pages (if available) in those non-RAM regions.
+ *
+ * The combined non-RAM regions are rounded to a whole number
+ * of pages so any partial pages are accessible via the 1:1
+ * mapping. This is needed for some BIOSes that put (for
+ * example) the DMI tables in a reserved region that begins on
+ * a non-page boundary.
+ */
for (i = 0, entry = list; i < map_size; i++, entry++) {
- phys_addr_t start = entry->addr;
- phys_addr_t end = start + entry->size;
+ phys_addr_t end = entry->addr + entry->size;
- if (start < last)
- start = last;
+ if (entry->type == E820_RAM || i == map_size - 1) {
+ unsigned long start_pfn = PFN_DOWN(start);
+ unsigned long end_pfn = PFN_UP(end);
- if (end <= start)
- continue;
+ if (entry->type == E820_RAM)
+ end_pfn = PFN_UP(entry->addr);
- /* Skip over the 1MB region. */
- if (last > end)
- continue;
+ if (start_pfn < end_pfn) {
+ if (start_pfn < nr_pages)
+ released += xen_release_chunk(
+ start_pfn, min(end_pfn, nr_pages));
- if ((entry->type == E820_RAM) || (entry->type == E820_UNUSABLE)) {
- if (start > start_pci)
identity += set_phys_range_identity(
- PFN_UP(start_pci), PFN_DOWN(start));
-
- /* Without saving 'last' we would gooble RAM too
- * at the end of the loop. */
- last = end;
- start_pci = end;
- continue;
+ start_pfn, end_pfn);
+ }
+ start = end;
}
- start_pci = min(start, start_pci);
- last = end;
}
- if (last > start_pci)
- identity += set_phys_range_identity(
- PFN_UP(start_pci), PFN_DOWN(last));
- return identity;
+
+ printk(KERN_INFO "Released %lu pages of unused memory\n", released);
+ printk(KERN_INFO "Set %ld page(s) to 1-1 mapping\n", identity);
+
+ return released;
}
static unsigned long __init xen_get_max_pages(void)
@@ -232,7 +205,6 @@ char * __init xen_memory_setup(void)
struct xen_memory_map memmap;
unsigned long max_pages;
unsigned long extra_pages = 0;
- unsigned long identity_pages = 0;
int i;
int op;
@@ -265,8 +237,13 @@ char * __init xen_memory_setup(void)
if (max_pages > max_pfn)
extra_pages += max_pages - max_pfn;
- xen_released_pages = xen_return_unused_memory(max_pfn, map,
- memmap.nr_entries);
+ /*
+ * Set P2M for all non-RAM pages and E820 gaps to be identity
+ * type PFNs. Any RAM pages that would be made inaccesible by
+ * this are first released.
+ */
+ xen_released_pages = xen_set_identity_and_release(
+ map, memmap.nr_entries, max_pfn);
extra_pages += xen_released_pages;
/*
@@ -312,10 +289,6 @@ char * __init xen_memory_setup(void)
* In domU, the ISA region is normal, usable memory, but we
* reserve ISA memory anyway because too many things poke
* about in there.
- *
- * In Dom0, the host E820 information can leave gaps in the
- * ISA range, which would cause us to release those pages. To
- * avoid this, we unconditionally reserve them here.
*/
e820_add_region(ISA_START_ADDRESS, ISA_END_ADDRESS - ISA_START_ADDRESS,
E820_RESERVED);
@@ -332,12 +305,6 @@ char * __init xen_memory_setup(void)
sanitize_e820_map(e820.map, ARRAY_SIZE(e820.map), &e820.nr_map);
- /*
- * Set P2M for all non-RAM pages and E820 gaps to be identity
- * type PFNs.
- */
- identity_pages = xen_set_identity(e820.map, e820.nr_map);
- printk(KERN_INFO "Set %ld page(s) to 1-1 mapping.\n", identity_pages);
return "Xen";
}
--
1.7.2.5
* Re: [PATCH] xen: release all pages within 1-1 p2m mappings
2011-09-29 11:08 ` [PATCH] xen: release all pages within 1-1 p2m mappings David Vrabel
@ 2011-09-29 11:29 ` David Vrabel
0 siblings, 0 replies; 15+ messages in thread
From: David Vrabel @ 2011-09-29 11:29 UTC (permalink / raw)
To: David Vrabel; +Cc: xen-devel, Konrad Rzeszutek Wilk
Er. This was the wrong patch. I meant to repost "xen: allow extra
memory to be in multiple regions" to remove the unnecessary test of
E820_UNUSABLE in xen_align_and_add_e820_region().
Sorry.
David
On 29/09/11 12:08, David Vrabel wrote:
> From: David Vrabel <david.vrabel@citrix.com>
>
> In xen_memory_setup() all reserved regions and gaps are set to an
> identity (1-1) p2m mapping. If an available page has a PFN within one
> of these 1-1 mappings it will become inaccessible (as its MFN is
> lost), so release such pages before setting up the mapping.
>
> [...]
* [PATCH 1/2] xen: allow extra memory to be in multiple regions
2011-09-28 16:46 ` [PATCH 4/5] xen: allow extra memory to be in multiple regions David Vrabel
2011-09-29 11:08 ` [PATCH] xen: release all pages within 1-1 p2m mappings David Vrabel
@ 2011-09-29 11:26 ` David Vrabel
2011-09-29 11:26 ` [PATCH 2/2] xen: release all pages within 1-1 p2m mappings David Vrabel
2 siblings, 0 replies; 15+ messages in thread
From: David Vrabel @ 2011-09-29 11:26 UTC (permalink / raw)
To: xen-devel; +Cc: David Vrabel, Konrad Rzeszutek Wilk
From: David Vrabel <david.vrabel@citrix.com>
Allow the extra memory (used by the balloon driver) to be in multiple
regions (typically two regions, one for low memory and one for high
memory). This allows the balloon driver to increase the number of
available low pages (if the initial number of pages is small).
As a side effect, the algorithm for building the e820 memory map is
simpler and more obviously correct as the map supplied by the
hypervisor is (almost) used as is (in particular, all reserved regions
and gaps are preserved). Only RAM regions are altered, and RAM regions
above max_pfn + extra_pages are marked as unusable (the region is
split in two if necessary).
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
arch/x86/xen/setup.c | 182 ++++++++++++++++++++++++--------------------------
1 files changed, 86 insertions(+), 96 deletions(-)
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 0c8e974..2ad2fd5 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -54,26 +54,32 @@ unsigned long xen_released_pages;
*/
#define EXTRA_MEM_RATIO (10)
-static void __init xen_add_extra_mem(unsigned long pages)
+static void __init xen_add_extra_mem(u64 start, u64 size)
{
unsigned long pfn;
+ int i;
- u64 size = (u64)pages * PAGE_SIZE;
- u64 extra_start = xen_extra_mem[0].start + xen_extra_mem[0].size;
-
- if (!pages)
- return;
-
- e820_add_region(extra_start, size, E820_RAM);
- sanitize_e820_map(e820.map, ARRAY_SIZE(e820.map), &e820.nr_map);
-
- memblock_x86_reserve_range(extra_start, extra_start + size, "XEN EXTRA");
+ for (i = 0; i < XEN_EXTRA_MEM_MAX_REGIONS; i++) {
+ /* Add new region. */
+ if (xen_extra_mem[i].size == 0) {
+ xen_extra_mem[i].start = start;
+ xen_extra_mem[i].size = size;
+ break;
+ }
+ /* Append to existing region. */
+ if (xen_extra_mem[i].start + xen_extra_mem[i].size == start) {
+ xen_extra_mem[i].size += size;
+ break;
+ }
+ }
+ if (i == XEN_EXTRA_MEM_MAX_REGIONS)
+ printk(KERN_WARNING "Warning: not enough extra memory regions\n");
- xen_extra_mem[0].size += size;
+ memblock_x86_reserve_range(start, start + size, "XEN EXTRA");
- xen_max_p2m_pfn = PFN_DOWN(extra_start + size);
+ xen_max_p2m_pfn = PFN_DOWN(start + size);
- for (pfn = PFN_DOWN(extra_start); pfn <= xen_max_p2m_pfn; pfn++)
+ for (pfn = PFN_DOWN(start); pfn <= xen_max_p2m_pfn; pfn++)
__set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
}
@@ -120,8 +126,8 @@ static unsigned long __init xen_release_chunk(phys_addr_t start_addr,
return len;
}
-static unsigned long __init xen_return_unused_memory(unsigned long max_pfn,
- const struct e820map *e820)
+static unsigned long __init xen_return_unused_memory(
+ unsigned long max_pfn, const struct e820entry *map, int nr_map)
{
phys_addr_t max_addr = PFN_PHYS(max_pfn);
phys_addr_t last_end = ISA_END_ADDRESS;
@@ -129,13 +135,13 @@ static unsigned long __init xen_return_unused_memory(unsigned long max_pfn,
int i;
/* Free any unused memory above the low 1Mbyte. */
- for (i = 0; i < e820->nr_map && last_end < max_addr; i++) {
- phys_addr_t end = e820->map[i].addr;
+ for (i = 0; i < nr_map && last_end < max_addr; i++) {
+ phys_addr_t end = map[i].addr;
end = min(max_addr, end);
if (last_end < end)
released += xen_release_chunk(last_end, end);
- last_end = max(last_end, e820->map[i].addr + e820->map[i].size);
+ last_end = max(last_end, map[i].addr + map[i].size);
}
if (last_end < max_addr)
@@ -200,20 +206,32 @@ static unsigned long __init xen_get_max_pages(void)
return min(max_pages, MAX_DOMAIN_PAGES);
}
+static void xen_align_and_add_e820_region(u64 start, u64 size, int type)
+{
+ u64 end = start + size;
+
+ /* Align RAM regions to page boundaries. */
+ if (type == E820_RAM) {
+ start = PAGE_ALIGN(start);
+ end &= ~((u64)PAGE_SIZE - 1);
+ }
+
+ e820_add_region(start, end - start, type);
+}
+
/**
* machine_specific_memory_setup - Hook for machine specific memory setup.
**/
char * __init xen_memory_setup(void)
{
static struct e820entry map[E820MAX] __initdata;
- static struct e820entry map_raw[E820MAX] __initdata;
unsigned long max_pfn = xen_start_info->nr_pages;
unsigned long long mem_end;
int rc;
struct xen_memory_map memmap;
+ unsigned long max_pages;
unsigned long extra_pages = 0;
- unsigned long extra_limit;
unsigned long identity_pages = 0;
int i;
int op;
@@ -240,49 +258,55 @@ char * __init xen_memory_setup(void)
}
BUG_ON(rc);
- memcpy(map_raw, map, sizeof(map));
- e820.nr_map = 0;
- xen_extra_mem[0].start = mem_end;
- for (i = 0; i < memmap.nr_entries; i++) {
- unsigned long long end;
-
- /* Guard against non-page aligned E820 entries. */
- if (map[i].type == E820_RAM)
- map[i].size -= (map[i].size + map[i].addr) % PAGE_SIZE;
-
- end = map[i].addr + map[i].size;
- if (map[i].type == E820_RAM && end > mem_end) {
- /* RAM off the end - may be partially included */
- u64 delta = min(map[i].size, end - mem_end);
-
- map[i].size -= delta;
- end -= delta;
-
- extra_pages += PFN_DOWN(delta);
- /*
- * Set RAM below 4GB that is not for us to be unusable.
- * This prevents "System RAM" address space from being
- * used as potential resource for I/O address (happens
- * when 'allocate_resource' is called).
- */
- if (delta &&
- (xen_initial_domain() && end < 0x100000000ULL))
- e820_add_region(end, delta, E820_UNUSABLE);
+ /* Make sure the Xen-supplied memory map is well-ordered. */
+ sanitize_e820_map(map, memmap.nr_entries, &memmap.nr_entries);
+
+ max_pages = xen_get_max_pages();
+ if (max_pages > max_pfn)
+ extra_pages += max_pages - max_pfn;
+
+ xen_released_pages = xen_return_unused_memory(max_pfn, map,
+ memmap.nr_entries);
+ extra_pages += xen_released_pages;
+
+ /*
+ * Clamp the amount of extra memory to a EXTRA_MEM_RATIO
+ * factor the base size. On non-highmem systems, the base
+ * size is the full initial memory allocation; on highmem it
+ * is limited to the max size of lowmem, so that it doesn't
+ * get completely filled.
+ *
+ * In principle there could be a problem in lowmem systems if
+ * the initial memory is also very large with respect to
+ * lowmem, but we won't try to deal with that here.
+ */
+ extra_pages = min(EXTRA_MEM_RATIO * min(max_pfn, PFN_DOWN(MAXMEM)),
+ extra_pages);
+
+ i = 0;
+ while (i < memmap.nr_entries) {
+ u64 addr = map[i].addr;
+ u64 size = map[i].size;
+ u32 type = map[i].type;
+
+ if (type == E820_RAM) {
+ if (addr < mem_end) {
+ size = min(size, mem_end - addr);
+ } else if (extra_pages) {
+ size = min(size, (u64)extra_pages * PAGE_SIZE);
+ extra_pages -= size / PAGE_SIZE;
+ xen_add_extra_mem(addr, size);
+ } else
+ type = E820_UNUSABLE;
}
- if (map[i].size > 0 && end > xen_extra_mem[0].start)
- xen_extra_mem[0].start = end;
+ xen_align_and_add_e820_region(addr, size, type);
- /* Add region if any remains */
- if (map[i].size > 0)
- e820_add_region(map[i].addr, map[i].size, map[i].type);
+ map[i].addr += size;
+ map[i].size -= size;
+ if (map[i].size == 0)
+ i++;
}
- /* Align the balloon area so that max_low_pfn does not get set
- * to be at the _end_ of the PCI gap at the far end (fee01000).
- * Note that the start of balloon area gets set in the loop above
- * to be past the last E820 region. */
- if (xen_initial_domain() && (xen_extra_mem[0].start < (1ULL<<32)))
- xen_extra_mem[0].start = (1ULL<<32);
/*
* In domU, the ISA region is normal, usable memory, but we
@@ -308,45 +332,11 @@ char * __init xen_memory_setup(void)
sanitize_e820_map(e820.map, ARRAY_SIZE(e820.map), &e820.nr_map);
- extra_limit = xen_get_max_pages();
- if (max_pfn + extra_pages > extra_limit) {
- if (extra_limit > max_pfn)
- extra_pages = extra_limit - max_pfn;
- else
- extra_pages = 0;
- }
-
- xen_released_pages = xen_return_unused_memory(xen_start_info->nr_pages,
- &e820);
- extra_pages += xen_released_pages;
-
- /*
- * Clamp the amount of extra memory to a EXTRA_MEM_RATIO
- * factor the base size. On non-highmem systems, the base
- * size is the full initial memory allocation; on highmem it
- * is limited to the max size of lowmem, so that it doesn't
- * get completely filled.
- *
- * In principle there could be a problem in lowmem systems if
- * the initial memory is also very large with respect to
- * lowmem, but we won't try to deal with that here.
- */
- extra_limit = min(EXTRA_MEM_RATIO * min(max_pfn, PFN_DOWN(MAXMEM)),
- max_pfn + extra_pages);
-
- if (extra_limit >= max_pfn)
- extra_pages = extra_limit - max_pfn;
- else
- extra_pages = 0;
-
- xen_add_extra_mem(extra_pages);
-
/*
* Set P2M for all non-RAM pages and E820 gaps to be identity
- * type PFNs. We supply it with the non-sanitized version
- * of the E820.
+ * type PFNs.
*/
- identity_pages = xen_set_identity(map_raw, memmap.nr_entries);
+ identity_pages = xen_set_identity(e820.map, e820.nr_map);
printk(KERN_INFO "Set %ld page(s) to 1-1 mapping.\n", identity_pages);
return "Xen";
}
--
1.7.2.5
* [PATCH 2/2] xen: release all pages within 1-1 p2m mappings
2011-09-28 16:46 ` [PATCH 4/5] xen: allow extra memory to be in multiple regions David Vrabel
2011-09-29 11:08 ` [PATCH] xen: release all pages within 1-1 p2m mappings David Vrabel
2011-09-29 11:26 ` [PATCH 1/2] xen: allow extra memory to be in multiple regions David Vrabel
@ 2011-09-29 11:26 ` David Vrabel
2 siblings, 0 replies; 15+ messages in thread
From: David Vrabel @ 2011-09-29 11:26 UTC (permalink / raw)
To: xen-devel; +Cc: David Vrabel, Konrad Rzeszutek Wilk
From: David Vrabel <david.vrabel@citrix.com>
In xen_memory_setup() all reserved regions and gaps are set to an
identity (1-1) p2m mapping. If an available page has a PFN within one
of these 1-1 mappings it will become inaccessible (as its MFN is
lost), so release such pages before setting up the mapping.
This can make an additional 256 MiB or more of RAM available
(depending on the size of the reserved regions in the memory map) if
the initial pages overlap with reserved regions.
The 1:1 p2m mappings are also extended to cover partial pages. This
fixes an issue with (for example) systems with a BIOS that puts the
DMI tables in a reserved region that begins on a non-page boundary.
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
arch/x86/xen/setup.c | 117 ++++++++++++++++++--------------------------------
1 files changed, 42 insertions(+), 75 deletions(-)
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 2ad2fd5..38d0af4 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -83,25 +83,18 @@ static void __init xen_add_extra_mem(u64 start, u64 size)
__set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
}
-static unsigned long __init xen_release_chunk(phys_addr_t start_addr,
- phys_addr_t end_addr)
+static unsigned long __init xen_release_chunk(unsigned long start,
+ unsigned long end)
{
struct xen_memory_reservation reservation = {
.address_bits = 0,
.extent_order = 0,
.domid = DOMID_SELF
};
- unsigned long start, end;
unsigned long len = 0;
unsigned long pfn;
int ret;
- start = PFN_UP(start_addr);
- end = PFN_DOWN(end_addr);
-
- if (end <= start)
- return 0;
-
for(pfn = start; pfn < end; pfn++) {
unsigned long mfn = pfn_to_mfn(pfn);
@@ -126,72 +119,52 @@ static unsigned long __init xen_release_chunk(phys_addr_t start_addr,
return len;
}
-static unsigned long __init xen_return_unused_memory(
- unsigned long max_pfn, const struct e820entry *map, int nr_map)
+static unsigned long __init xen_set_identity_and_release(
+ const struct e820entry *list, size_t map_size, unsigned long nr_pages)
{
- phys_addr_t max_addr = PFN_PHYS(max_pfn);
- phys_addr_t last_end = ISA_END_ADDRESS;
+ phys_addr_t start = 0;
unsigned long released = 0;
- int i;
-
- /* Free any unused memory above the low 1Mbyte. */
- for (i = 0; i < nr_map && last_end < max_addr; i++) {
- phys_addr_t end = map[i].addr;
- end = min(max_addr, end);
-
- if (last_end < end)
- released += xen_release_chunk(last_end, end);
- last_end = max(last_end, map[i].addr + map[i].size);
- }
-
- if (last_end < max_addr)
- released += xen_release_chunk(last_end, max_addr);
-
- printk(KERN_INFO "released %lu pages of unused memory\n", released);
- return released;
-}
-
-static unsigned long __init xen_set_identity(const struct e820entry *list,
- ssize_t map_size)
-{
- phys_addr_t last = xen_initial_domain() ? 0 : ISA_END_ADDRESS;
- phys_addr_t start_pci = last;
- const struct e820entry *entry;
unsigned long identity = 0;
+ const struct e820entry *entry;
int i;
+ /*
+ * Combine non-RAM regions and gaps until a RAM region (or the
+ * end of the map) is reached, then set the 1:1 map and
+ * release the pages (if available) in those non-RAM regions.
+ *
+ * The combined non-RAM regions are rounded to a whole number
+ * of pages so any partial pages are accessible via the 1:1
+ * mapping. This is needed for some BIOSes that put (for
+ * example) the DMI tables in a reserved region that begins on
+ * a non-page boundary.
+ */
for (i = 0, entry = list; i < map_size; i++, entry++) {
- phys_addr_t start = entry->addr;
- phys_addr_t end = start + entry->size;
+ phys_addr_t end = entry->addr + entry->size;
- if (start < last)
- start = last;
+ if (entry->type == E820_RAM || i == map_size - 1) {
+ unsigned long start_pfn = PFN_DOWN(start);
+ unsigned long end_pfn = PFN_UP(end);
- if (end <= start)
- continue;
+ if (entry->type == E820_RAM)
+ end_pfn = PFN_UP(entry->addr);
- /* Skip over the 1MB region. */
- if (last > end)
- continue;
+ if (start_pfn < end_pfn) {
+ if (start_pfn < nr_pages)
+ released += xen_release_chunk(
+ start_pfn, min(end_pfn, nr_pages));
- if ((entry->type == E820_RAM) || (entry->type == E820_UNUSABLE)) {
- if (start > start_pci)
identity += set_phys_range_identity(
- PFN_UP(start_pci), PFN_DOWN(start));
-
- /* Without saving 'last' we would gooble RAM too
- * at the end of the loop. */
- last = end;
- start_pci = end;
- continue;
+ start_pfn, end_pfn);
+ }
+ start = end;
}
- start_pci = min(start, start_pci);
- last = end;
}
- if (last > start_pci)
- identity += set_phys_range_identity(
- PFN_UP(start_pci), PFN_DOWN(last));
- return identity;
+
+ printk(KERN_INFO "Released %lu pages of unused memory\n", released);
+ printk(KERN_INFO "Set %ld page(s) to 1-1 mapping\n", identity);
+
+ return released;
}
static unsigned long __init xen_get_max_pages(void)
@@ -232,7 +205,6 @@ char * __init xen_memory_setup(void)
struct xen_memory_map memmap;
unsigned long max_pages;
unsigned long extra_pages = 0;
- unsigned long identity_pages = 0;
int i;
int op;
@@ -265,8 +237,13 @@ char * __init xen_memory_setup(void)
if (max_pages > max_pfn)
extra_pages += max_pages - max_pfn;
- xen_released_pages = xen_return_unused_memory(max_pfn, map,
- memmap.nr_entries);
+ /*
+ * Set P2M for all non-RAM pages and E820 gaps to be identity
+ * type PFNs. Any RAM pages that would be made inaccesible by
+ * this are first released.
+ */
+ xen_released_pages = xen_set_identity_and_release(
+ map, memmap.nr_entries, max_pfn);
extra_pages += xen_released_pages;
/*
@@ -312,10 +289,6 @@ char * __init xen_memory_setup(void)
* In domU, the ISA region is normal, usable memory, but we
* reserve ISA memory anyway because too many things poke
* about in there.
- *
- * In Dom0, the host E820 information can leave gaps in the
- * ISA range, which would cause us to release those pages. To
- * avoid this, we unconditionally reserve them here.
*/
e820_add_region(ISA_START_ADDRESS, ISA_END_ADDRESS - ISA_START_ADDRESS,
E820_RESERVED);
@@ -332,12 +305,6 @@ char * __init xen_memory_setup(void)
sanitize_e820_map(e820.map, ARRAY_SIZE(e820.map), &e820.nr_map);
- /*
- * Set P2M for all non-RAM pages and E820 gaps to be identity
- * type PFNs.
- */
- identity_pages = xen_set_identity(e820.map, e820.nr_map);
- printk(KERN_INFO "Set %ld page(s) to 1-1 mapping.\n", identity_pages);
return "Xen";
}
--
1.7.2.5
* [PATCH 5/5] xen: release all pages within 1-1 p2m mappings
2011-09-28 16:46 [PATCH 0/5] xen: memory initialization/balloon fixes (#4) David Vrabel
` (3 preceding siblings ...)
2011-09-28 16:46 ` [PATCH 4/5] xen: allow extra memory to be in multiple regions David Vrabel
@ 2011-09-28 16:46 ` David Vrabel
4 siblings, 0 replies; 15+ messages in thread
From: David Vrabel @ 2011-09-28 16:46 UTC (permalink / raw)
To: xen-devel; +Cc: David Vrabel, Konrad Rzeszutek Wilk
From: David Vrabel <david.vrabel@citrix.com>
In xen_memory_setup() all reserved regions and gaps are set to an
identity (1-1) p2m mapping. If an available page has a PFN within one
of these 1-1 mappings it will become inaccessible (as its MFN is
lost), so release such pages before setting up the mapping.
This can make an additional 256 MiB or more of RAM available
(depending on the size of the reserved regions in the memory map) if
the initial pages overlap with reserved regions.
The 1:1 p2m mappings are also extended to cover partial pages. This
fixes an issue with (for example) systems with a BIOS that puts the
DMI tables in a reserved region that begins on a non-page boundary.
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
arch/x86/xen/setup.c | 117 ++++++++++++++++++--------------------------------
1 files changed, 42 insertions(+), 75 deletions(-)
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 03eda3c..62a1334 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -83,25 +83,18 @@ static void __init xen_add_extra_mem(u64 start, u64 size)
__set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
}
-static unsigned long __init xen_release_chunk(phys_addr_t start_addr,
- phys_addr_t end_addr)
+static unsigned long __init xen_release_chunk(unsigned long start,
+ unsigned long end)
{
struct xen_memory_reservation reservation = {
.address_bits = 0,
.extent_order = 0,
.domid = DOMID_SELF
};
- unsigned long start, end;
unsigned long len = 0;
unsigned long pfn;
int ret;
- start = PFN_UP(start_addr);
- end = PFN_DOWN(end_addr);
-
- if (end <= start)
- return 0;
-
for(pfn = start; pfn < end; pfn++) {
unsigned long mfn = pfn_to_mfn(pfn);
@@ -126,72 +119,52 @@ static unsigned long __init xen_release_chunk(phys_addr_t start_addr,
return len;
}
-static unsigned long __init xen_return_unused_memory(
- unsigned long max_pfn, const struct e820entry *map, int nr_map)
+static unsigned long __init xen_set_identity_and_release(
+ const struct e820entry *list, size_t map_size, unsigned long nr_pages)
{
- phys_addr_t max_addr = PFN_PHYS(max_pfn);
- phys_addr_t last_end = ISA_END_ADDRESS;
+ phys_addr_t start = 0;
unsigned long released = 0;
- int i;
-
- /* Free any unused memory above the low 1Mbyte. */
- for (i = 0; i < nr_map && last_end < max_addr; i++) {
- phys_addr_t end = map[i].addr;
- end = min(max_addr, end);
-
- if (last_end < end)
- released += xen_release_chunk(last_end, end);
- last_end = max(last_end, map[i].addr + map[i].size);
- }
-
- if (last_end < max_addr)
- released += xen_release_chunk(last_end, max_addr);
-
- printk(KERN_INFO "released %lu pages of unused memory\n", released);
- return released;
-}
-
-static unsigned long __init xen_set_identity(const struct e820entry *list,
- ssize_t map_size)
-{
- phys_addr_t last = xen_initial_domain() ? 0 : ISA_END_ADDRESS;
- phys_addr_t start_pci = last;
- const struct e820entry *entry;
unsigned long identity = 0;
+ const struct e820entry *entry;
int i;
+ /*
+ * Combine non-RAM regions and gaps until a RAM region (or the
+ * end of the map) is reached, then set the 1:1 map and
+ * release the pages (if available) in those non-RAM regions.
+ *
+ * The combined non-RAM regions are rounded to a whole number
+ * of pages so any partial pages are accessible via the 1:1
+ * mapping. This is needed for some BIOSes that put (for
+ * example) the DMI tables in a reserved region that begins on
+ * a non-page boundary.
+ */
for (i = 0, entry = list; i < map_size; i++, entry++) {
- phys_addr_t start = entry->addr;
- phys_addr_t end = start + entry->size;
+ phys_addr_t end = entry->addr + entry->size;
- if (start < last)
- start = last;
+ if (entry->type == E820_RAM || i == map_size - 1) {
+ unsigned long start_pfn = PFN_DOWN(start);
+ unsigned long end_pfn = PFN_UP(end);
- if (end <= start)
- continue;
+ if (entry->type == E820_RAM)
+ end_pfn = PFN_UP(entry->addr);
- /* Skip over the 1MB region. */
- if (last > end)
- continue;
+ if (start_pfn < end_pfn) {
+ if (start_pfn < nr_pages)
+ released += xen_release_chunk(
+ start_pfn, min(end_pfn, nr_pages));
- if ((entry->type == E820_RAM) || (entry->type == E820_UNUSABLE)) {
- if (start > start_pci)
identity += set_phys_range_identity(
- PFN_UP(start_pci), PFN_DOWN(start));
-
- /* Without saving 'last' we would gooble RAM too
- * at the end of the loop. */
- last = end;
- start_pci = end;
- continue;
+ start_pfn, end_pfn);
+ }
+ start = end;
}
- start_pci = min(start, start_pci);
- last = end;
}
- if (last > start_pci)
- identity += set_phys_range_identity(
- PFN_UP(start_pci), PFN_DOWN(last));
- return identity;
+
+ printk(KERN_INFO "Released %lu pages of unused memory\n", released);
+ printk(KERN_INFO "Set %ld page(s) to 1-1 mapping\n", identity);
+
+ return released;
}
static unsigned long __init xen_get_max_pages(void)
@@ -232,7 +205,6 @@ char * __init xen_memory_setup(void)
struct xen_memory_map memmap;
unsigned long max_pages;
unsigned long extra_pages = 0;
- unsigned long identity_pages = 0;
int i;
int op;
@@ -265,8 +237,13 @@ char * __init xen_memory_setup(void)
if (max_pages > max_pfn)
extra_pages += max_pages - max_pfn;
- xen_released_pages = xen_return_unused_memory(max_pfn, map,
- memmap.nr_entries);
+ /*
+ * Set P2M for all non-RAM pages and E820 gaps to be identity
+ * type PFNs. Any RAM pages that would be made inaccesible by
+ * this are first released.
+ */
+ xen_released_pages = xen_set_identity_and_release(
+ map, memmap.nr_entries, max_pfn);
extra_pages += xen_released_pages;
/*
@@ -312,10 +289,6 @@ char * __init xen_memory_setup(void)
* In domU, the ISA region is normal, usable memory, but we
* reserve ISA memory anyway because too many things poke
* about in there.
- *
- * In Dom0, the host E820 information can leave gaps in the
- * ISA range, which would cause us to release those pages. To
- * avoid this, we unconditionally reserve them here.
*/
e820_add_region(ISA_START_ADDRESS, ISA_END_ADDRESS - ISA_START_ADDRESS,
E820_RESERVED);
@@ -332,12 +305,6 @@ char * __init xen_memory_setup(void)
sanitize_e820_map(e820.map, ARRAY_SIZE(e820.map), &e820.nr_map);
- /*
- * Set P2M for all non-RAM pages and E820 gaps to be identity
- * type PFNs.
- */
- identity_pages = xen_set_identity(e820.map, e820.nr_map);
- printk(KERN_INFO "Set %ld page(s) to 1-1 mapping.\n", identity_pages);
return "Xen";
}
--
1.7.2.5
* [PATCH 5/5] xen: release all pages within 1-1 p2m mappings
2011-08-19 14:57 xen: memory initialization/balloon fixes (#2) David Vrabel
@ 2011-08-19 14:57 ` David Vrabel
2011-08-19 15:05 ` David Vrabel
2011-09-06 21:20 ` Konrad Rzeszutek Wilk
0 siblings, 2 replies; 15+ messages in thread
From: David Vrabel @ 2011-08-19 14:57 UTC (permalink / raw)
To: xen-devel; +Cc: David Vrabel, David Vrabel, Konrad Rzeszutek Wilk
In xen_memory_setup() all reserved regions and gaps are set to an
identity (1-1) p2m mapping. If an available page has a PFN within one
of these 1-1 mappings it will become inaccessible (as its MFN is
lost), so release such pages before setting up the mapping.
This can make an additional 256 MiB or more of RAM available
(depending on the size of the reserved regions in the memory map).
Signed-off-by: David Vrabel <david.vrabel@csr.com>
---
arch/x86/xen/setup.c | 88 ++++++++++++--------------------------------------
1 files changed, 21 insertions(+), 67 deletions(-)
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 93e4542..0f1cd69 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -123,73 +123,33 @@ static unsigned long __init xen_release_chunk(phys_addr_t start_addr,
return len;
}
-static unsigned long __init xen_return_unused_memory(unsigned long max_pfn,
- const struct e820entry *map,
- int nr_map)
+static unsigned long __init xen_set_identity_and_release(const struct e820entry *list,
+ ssize_t map_size,
+ unsigned long nr_pages)
{
- phys_addr_t max_addr = PFN_PHYS(max_pfn);
- phys_addr_t last_end = ISA_END_ADDRESS;
+ phys_addr_t avail_end = PFN_PHYS(nr_pages);
+ phys_addr_t last_end = 0;
unsigned long released = 0;
- int i;
-
- /* Free any unused memory above the low 1Mbyte. */
- for (i = 0; i < nr_map && last_end < max_addr; i++) {
- phys_addr_t end = map[i].addr;
- end = min(max_addr, end);
-
- if (last_end < end)
- released += xen_release_chunk(last_end, end);
- last_end = max(last_end, map[i].addr + map[i].size);
- }
-
- if (last_end < max_addr)
- released += xen_release_chunk(last_end, max_addr);
-
- printk(KERN_INFO "released %lu pages of unused memory\n", released);
- return released;
-}
-
-static unsigned long __init xen_set_identity(const struct e820entry *list,
- ssize_t map_size)
-{
- phys_addr_t last = xen_initial_domain() ? 0 : ISA_END_ADDRESS;
- phys_addr_t start_pci = last;
const struct e820entry *entry;
- unsigned long identity = 0;
int i;
for (i = 0, entry = list; i < map_size; i++, entry++) {
- phys_addr_t start = entry->addr;
- phys_addr_t end = start + entry->size;
-
- if (start < last)
- start = last;
+ phys_addr_t begin = last_end;
+ phys_addr_t end = entry->addr + entry->size;
- if (end <= start)
- continue;
+ last_end = end;
- /* Skip over the 1MB region. */
- if (last > end)
- continue;
+ if (entry->type == E820_RAM || entry->type == E820_UNUSABLE)
+ end = entry->addr;
- if ((entry->type == E820_RAM) || (entry->type == E820_UNUSABLE)) {
- if (start > start_pci)
- identity += set_phys_range_identity(
- PFN_UP(start_pci), PFN_DOWN(start));
+ if (begin < end) {
+ if (begin < avail_end)
+ released += xen_release_chunk(begin, min(end, avail_end));
- /* Without saving 'last' we would gooble RAM too
- * at the end of the loop. */
- last = end;
- start_pci = end;
- continue;
+ set_phys_range_identity(PFN_UP(begin), PFN_DOWN(end));
}
- start_pci = min(start, start_pci);
- last = end;
}
- if (last > start_pci)
- identity += set_phys_range_identity(
- PFN_UP(start_pci), PFN_DOWN(last));
- return identity;
+ return released;
}
static unsigned long __init xen_get_max_pages(void)
@@ -217,7 +177,6 @@ char * __init xen_memory_setup(void)
struct xen_memory_map memmap;
unsigned long max_pages;
unsigned long extra_pages = 0;
- unsigned long identity_pages = 0;
int i;
int op;
@@ -250,7 +209,12 @@ char * __init xen_memory_setup(void)
if (max_pages > max_pfn)
extra_pages += max_pages - max_pfn;
- extra_pages += xen_return_unused_memory(max_pfn, map, memmap.nr_entries);
+ /*
+ * Set P2M for all non-RAM pages and E820 gaps to be identity
+ * type PFNs. Any RAM pages that would be made inaccesible by
+ * this are first released.
+ */
+ extra_pages += xen_set_identity_and_release(map, memmap.nr_entries, max_pfn);
/*
* Clamp the amount of extra memory to a EXTRA_MEM_RATIO
@@ -294,10 +258,6 @@ char * __init xen_memory_setup(void)
* In domU, the ISA region is normal, usable memory, but we
* reserve ISA memory anyway because too many things poke
* about in there.
- *
- * In Dom0, the host E820 information can leave gaps in the
- * ISA range, which would cause us to release those pages. To
- * avoid this, we unconditionally reserve them here.
*/
e820_add_region(ISA_START_ADDRESS, ISA_END_ADDRESS - ISA_START_ADDRESS,
E820_RESERVED);
@@ -314,12 +274,6 @@ char * __init xen_memory_setup(void)
sanitize_e820_map(e820.map, ARRAY_SIZE(e820.map), &e820.nr_map);
- /*
- * Set P2M for all non-RAM pages and E820 gaps to be identity
- * type PFNs.
- */
- identity_pages = xen_set_identity(e820.map, e820.nr_map);
- printk(KERN_INFO "Set %ld page(s) to 1-1 mapping.\n", identity_pages);
return "Xen";
}
--
1.7.2.5
^ permalink raw reply related [flat|nested] 15+ messages in thread* Re: [PATCH 5/5] xen: release all pages within 1-1 p2m mappings
2011-08-19 14:57 ` [PATCH 5/5] xen: release all pages within 1-1 p2m mappings David Vrabel
@ 2011-08-19 15:05 ` David Vrabel
2011-09-06 21:20 ` Konrad Rzeszutek Wilk
1 sibling, 0 replies; 15+ messages in thread
From: David Vrabel @ 2011-08-19 15:05 UTC (permalink / raw)
To: David Vrabel; +Cc: xen-devel@lists.xensource.com, Konrad Rzeszutek Wilk
On 19/08/11 15:57, David Vrabel wrote:
>
> Signed-off-by: David Vrabel <david.vrabel@csr.com>
Oops. Wrong email address. It should have been citrix.com. Let me know
if you want me to resend this patch with the correct signed-off-by.
Sorry.
David
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH 5/5] xen: release all pages within 1-1 p2m mappings
2011-08-19 14:57 ` [PATCH 5/5] xen: release all pages within 1-1 p2m mappings David Vrabel
2011-08-19 15:05 ` David Vrabel
@ 2011-09-06 21:20 ` Konrad Rzeszutek Wilk
2011-09-07 11:03 ` David Vrabel
1 sibling, 1 reply; 15+ messages in thread
From: Konrad Rzeszutek Wilk @ 2011-09-06 21:20 UTC (permalink / raw)
To: David Vrabel; +Cc: xen-devel, David Vrabel
On Fri, Aug 19, 2011 at 03:57:20PM +0100, David Vrabel wrote:
> In xen_memory_setup() all reserved regions and gaps are set to an
> identity (1-1) p2m mapping. If an available page has a PFN within one
> of these 1-1 mappings it will become inaccessible (as its MFN is lost) so
> release them before setting up the mapping.
>
> This can make an additional 256 MiB or more of RAM available
> (depending on the size of the reserved regions in the memory map).
.. if the xen_start_info->nr_pages overlaps the reserved region.
>
> Signed-off-by: David Vrabel <david.vrabel@csr.com>
> ---
> arch/x86/xen/setup.c | 88 ++++++++++++--------------------------------------
> 1 files changed, 21 insertions(+), 67 deletions(-)
>
> diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
> index 93e4542..0f1cd69 100644
> --- a/arch/x86/xen/setup.c
> +++ b/arch/x86/xen/setup.c
> @@ -123,73 +123,33 @@ static unsigned long __init xen_release_chunk(phys_addr_t start_addr,
> return len;
> }
>
> -static unsigned long __init xen_return_unused_memory(unsigned long max_pfn,
> - const struct e820entry *map,
> - int nr_map)
> +static unsigned long __init xen_set_identity_and_release(const struct e820entry *list,
> + ssize_t map_size,
> + unsigned long nr_pages)
> {
> - phys_addr_t max_addr = PFN_PHYS(max_pfn);
> - phys_addr_t last_end = ISA_END_ADDRESS;
> + phys_addr_t avail_end = PFN_PHYS(nr_pages);
> + phys_addr_t last_end = 0;
> unsigned long released = 0;
> - int i;
> -
> - /* Free any unused memory above the low 1Mbyte. */
> - for (i = 0; i < nr_map && last_end < max_addr; i++) {
> - phys_addr_t end = map[i].addr;
> - end = min(max_addr, end);
> -
> - if (last_end < end)
> - released += xen_release_chunk(last_end, end);
> - last_end = max(last_end, map[i].addr + map[i].size);
> - }
> -
> - if (last_end < max_addr)
> - released += xen_release_chunk(last_end, max_addr);
> -
> - printk(KERN_INFO "released %lu pages of unused memory\n", released);
> - return released;
> -}
> -
> -static unsigned long __init xen_set_identity(const struct e820entry *list,
> - ssize_t map_size)
> -{
> - phys_addr_t last = xen_initial_domain() ? 0 : ISA_END_ADDRESS;
> - phys_addr_t start_pci = last;
> const struct e820entry *entry;
> - unsigned long identity = 0;
> int i;
>
> for (i = 0, entry = list; i < map_size; i++, entry++) {
> - phys_addr_t start = entry->addr;
> - phys_addr_t end = start + entry->size;
> -
> - if (start < last)
> - start = last;
> + phys_addr_t begin = last_end;
The "begin" is a bit confusing. You are using the previous E820 entry's end - not
the beginning of this E820 entry. Doing a s/begin/last_end/ makes
the code a bit easier to understand.
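If you keep the name then at least a comment spelling it out would
help, e.g. (untested sketch, logic unchanged):

	phys_addr_t begin = last_end;	/* end of the previous entry,
					 * i.e. the start of the gap
					 * (if any) before this one. */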
> + phys_addr_t end = entry->addr + entry->size;
>
> - if (end <= start)
> - continue;
> + last_end = end;
Please include the comment:
/* This entry's end. */
>
> - /* Skip over the 1MB region. */
> - if (last > end)
> - continue;
> + if (entry->type == E820_RAM || entry->type == E820_UNUSABLE)
> + end = entry->addr;
And:
/* Should encapsulate the gap between prev_end and this E820
entry's starting address. */
>
> - if ((entry->type == E820_RAM) || (entry->type == E820_UNUSABLE)) {
> - if (start > start_pci)
> - identity += set_phys_range_identity(
> - PFN_UP(start_pci), PFN_DOWN(start));
> + if (begin < end) {
> + if (begin < avail_end)
> + released += xen_release_chunk(begin, min(end, avail_end));
>
> - /* Without saving 'last' we would gooble RAM too
> - * at the end of the loop. */
> - last = end;
> - start_pci = end;
> - continue;
> + set_phys_range_identity(PFN_UP(begin), PFN_DOWN(end));
identity += set_phys_range ..
> }
> - start_pci = min(start, start_pci);
> - last = end;
> }
> - if (last > start_pci)
> - identity += set_phys_range_identity(
> - PFN_UP(start_pci), PFN_DOWN(last));
> - return identity;
OK, but you have ripped out the nice printk's that existed before. So add them
back in:
printk(KERN_INFO "released %lu pages of unused memory\n", released);
printk(KERN_INFO "Set %ld page(s) to 1-1 mapping.\n", identity);
as they are quite useful in the field.
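Something along these lines (untested sketch, with the identity count
from above re-added):

	unsigned long released = 0;
	unsigned long identity = 0;
	...
	if (begin < end) {
		if (begin < avail_end)
			released += xen_release_chunk(begin,
						      min(end, avail_end));
		identity += set_phys_range_identity(PFN_UP(begin),
						    PFN_DOWN(end));
	}
	...
	printk(KERN_INFO "released %lu pages of unused memory\n", released);
	printk(KERN_INFO "Set %ld page(s) to 1-1 mapping.\n", identity);
	return released;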
> + return released;
> }
>
> static unsigned long __init xen_get_max_pages(void)
> @@ -217,7 +177,6 @@ char * __init xen_memory_setup(void)
> struct xen_memory_map memmap;
> unsigned long max_pages;
> unsigned long extra_pages = 0;
> - unsigned long identity_pages = 0;
> int i;
> int op;
>
> @@ -250,7 +209,12 @@ char * __init xen_memory_setup(void)
> if (max_pages > max_pfn)
> extra_pages += max_pages - max_pfn;
>
> - extra_pages += xen_return_unused_memory(max_pfn, map, memmap.nr_entries);
> + /*
> + * Set P2M for all non-RAM pages and E820 gaps to be identity
> + * type PFNs. Any RAM pages that would be made inaccessible by
> + * this are first released.
> + */
> + extra_pages += xen_set_identity_and_release(map, memmap.nr_entries, max_pfn);
>
> /*
> * Clamp the amount of extra memory to a EXTRA_MEM_RATIO
> @@ -294,10 +258,6 @@ char * __init xen_memory_setup(void)
> * In domU, the ISA region is normal, usable memory, but we
> * reserve ISA memory anyway because too many things poke
> * about in there.
> - *
> - * In Dom0, the host E820 information can leave gaps in the
> - * ISA range, which would cause us to release those pages. To
> - * avoid this, we unconditionally reserve them here.
> */
> e820_add_region(ISA_START_ADDRESS, ISA_END_ADDRESS - ISA_START_ADDRESS,
> E820_RESERVED);
> @@ -314,12 +274,6 @@ char * __init xen_memory_setup(void)
>
> sanitize_e820_map(e820.map, ARRAY_SIZE(e820.map), &e820.nr_map);
>
> - /*
> - * Set P2M for all non-RAM pages and E820 gaps to be identity
> - * type PFNs.
> - */
> - identity_pages = xen_set_identity(e820.map, e820.nr_map);
> - printk(KERN_INFO "Set %ld page(s) to 1-1 mapping.\n", identity_pages);
> return "Xen";
> }
>
> --
> 1.7.2.5
^ permalink raw reply [flat|nested] 15+ messages in thread* Re: [PATCH 5/5] xen: release all pages within 1-1 p2m mappings
2011-09-06 21:20 ` Konrad Rzeszutek Wilk
@ 2011-09-07 11:03 ` David Vrabel
2011-09-07 18:23 ` Konrad Rzeszutek Wilk
0 siblings, 1 reply; 15+ messages in thread
From: David Vrabel @ 2011-09-07 11:03 UTC (permalink / raw)
To: Konrad Rzeszutek Wilk; +Cc: xen-devel@lists.xensource.com, David Vrabel
On 06/09/11 22:20, Konrad Rzeszutek Wilk wrote:
> On Fri, Aug 19, 2011 at 03:57:20PM +0100, David Vrabel wrote:
>>
>> --- a/arch/x86/xen/setup.c
>> +++ b/arch/x86/xen/setup.c
>> @@ -123,73 +123,33 @@ static unsigned long __init xen_release_chunk(phys_addr_t start_addr,
>> return len;
>> }
>>
>> -static unsigned long __init xen_return_unused_memory(unsigned long max_pfn,
>> - const struct e820entry *map,
>> - int nr_map)
>> +static unsigned long __init xen_set_identity_and_release(const struct e820entry *list,
>> + ssize_t map_size,
>> + unsigned long nr_pages)
>> {
>> - phys_addr_t max_addr = PFN_PHYS(max_pfn);
>> - phys_addr_t last_end = ISA_END_ADDRESS;
>> + phys_addr_t avail_end = PFN_PHYS(nr_pages);
>> + phys_addr_t last_end = 0;
>> unsigned long released = 0;
>> - int i;
>> -
>> - /* Free any unused memory above the low 1Mbyte. */
>> - for (i = 0; i < nr_map && last_end < max_addr; i++) {
>> - phys_addr_t end = map[i].addr;
>> - end = min(max_addr, end);
>> -
>> - if (last_end < end)
>> - released += xen_release_chunk(last_end, end);
>> - last_end = max(last_end, map[i].addr + map[i].size);
>> - }
>> -
>> - if (last_end < max_addr)
>> - released += xen_release_chunk(last_end, max_addr);
>> -
>> - printk(KERN_INFO "released %lu pages of unused memory\n", released);
>> - return released;
>> -}
>> -
>> -static unsigned long __init xen_set_identity(const struct e820entry *list,
>> - ssize_t map_size)
>> -{
>> - phys_addr_t last = xen_initial_domain() ? 0 : ISA_END_ADDRESS;
>> - phys_addr_t start_pci = last;
>> const struct e820entry *entry;
>> - unsigned long identity = 0;
>> int i;
>>
>> for (i = 0, entry = list; i < map_size; i++, entry++) {
>> - phys_addr_t start = entry->addr;
>> - phys_addr_t end = start + entry->size;
>> -
>> - if (start < last)
>> - start = last;
>> + phys_addr_t begin = last_end;
>
> The "begin" is a bit confusing. You are using the previous E820 entry's end - not
> the beginning of this E820 entry. Doing a s/begin/last_end/ makes
> the code a bit easier to understand.
Really? It seems pretty clear to me that they're the beginning and end
of the memory range we're considering whether to release or map.
That loop went through a number of variations and what's there is what I
think is the most readable.
>> + phys_addr_t end = entry->addr + entry->size;
>>
>> - if (end <= start)
>> - continue;
>> + last_end = end;
>
> Please include the comment:
> /* This entry's end. */
Not really in favour of little comments like this. I'll put a comment
above the loop.
/*
* For each memory region consider whether to release and map the
* region and the preceding gap (if any).
*/
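So the top of the loop would read (untested, just to show the shape I
have in mind):

	/*
	 * For each memory region consider whether to release and map
	 * the region and the preceding gap (if any).
	 */
	for (i = 0, entry = list; i < map_size; i++, entry++) {
		phys_addr_t begin = last_end;
		phys_addr_t end = entry->addr + entry->size;
		...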
> OK, but you have ripped out the nice printk's that existed before. So add them
> back in:
>
>
> printk(KERN_INFO "released %lu pages of unused memory\n", released);
> printk(KERN_INFO "Set %ld page(s) to 1-1 mapping.\n", identity);
>
> as they are quite useful in the field.
What problem are these useful for diagnosing that the remaining messages
(and the e820 map) don't tell you already?
xen_release_chunk: looking at area pfn 9e-100: 98 pages freed
1-1 mapping on 9e->100
1-1 mapping on bf699->bf6af
1-1 mapping on bf6af->bf6ce
1-1 mapping on bf6ce->c0000
1-1 mapping on c0000->f0000
1-1 mapping on f0000->100000
The total count just doesn't seem useful.
David
^ permalink raw reply [flat|nested] 15+ messages in thread* Re: [PATCH 5/5] xen: release all pages within 1-1 p2m mappings
2011-09-07 11:03 ` David Vrabel
@ 2011-09-07 18:23 ` Konrad Rzeszutek Wilk
0 siblings, 0 replies; 15+ messages in thread
From: Konrad Rzeszutek Wilk @ 2011-09-07 18:23 UTC (permalink / raw)
To: David Vrabel; +Cc: xen-devel@lists.xensource.com, David Vrabel
On Wed, Sep 07, 2011 at 12:03:30PM +0100, David Vrabel wrote:
> On 06/09/11 22:20, Konrad Rzeszutek Wilk wrote:
> > On Fri, Aug 19, 2011 at 03:57:20PM +0100, David Vrabel wrote:
> >>
> >> --- a/arch/x86/xen/setup.c
> >> +++ b/arch/x86/xen/setup.c
> >> @@ -123,73 +123,33 @@ static unsigned long __init xen_release_chunk(phys_addr_t start_addr,
> >> return len;
> >> }
> >>
> >> -static unsigned long __init xen_return_unused_memory(unsigned long max_pfn,
> >> - const struct e820entry *map,
> >> - int nr_map)
> >> +static unsigned long __init xen_set_identity_and_release(const struct e820entry *list,
> >> + ssize_t map_size,
> >> + unsigned long nr_pages)
> >> {
> >> - phys_addr_t max_addr = PFN_PHYS(max_pfn);
> >> - phys_addr_t last_end = ISA_END_ADDRESS;
> >> + phys_addr_t avail_end = PFN_PHYS(nr_pages);
> >> + phys_addr_t last_end = 0;
> >> unsigned long released = 0;
> >> - int i;
> >> -
> >> - /* Free any unused memory above the low 1Mbyte. */
> >> - for (i = 0; i < nr_map && last_end < max_addr; i++) {
> >> - phys_addr_t end = map[i].addr;
> >> - end = min(max_addr, end);
> >> -
> >> - if (last_end < end)
> >> - released += xen_release_chunk(last_end, end);
> >> - last_end = max(last_end, map[i].addr + map[i].size);
> >> - }
> >> -
> >> - if (last_end < max_addr)
> >> - released += xen_release_chunk(last_end, max_addr);
> >> -
> >> - printk(KERN_INFO "released %lu pages of unused memory\n", released);
> >> - return released;
> >> -}
> >> -
> >> -static unsigned long __init xen_set_identity(const struct e820entry *list,
> >> - ssize_t map_size)
> >> -{
> >> - phys_addr_t last = xen_initial_domain() ? 0 : ISA_END_ADDRESS;
> >> - phys_addr_t start_pci = last;
> >> const struct e820entry *entry;
> >> - unsigned long identity = 0;
> >> int i;
> >>
> >> for (i = 0, entry = list; i < map_size; i++, entry++) {
> >> - phys_addr_t start = entry->addr;
> >> - phys_addr_t end = start + entry->size;
> >> -
> >> - if (start < last)
> >> - start = last;
> >> + phys_addr_t begin = last_end;
> >
> > The "begin" is a bit confusing. You are using the previous E820 entry's end - not
> > the beginning of this E820 entry. Doing a s/begin/last_end/ makes
> > the code a bit easier to understand.
>
> Really? It seems pretty clear to me that they're the beginning and end
> of the memory range we're considering to release or map.
>
> That loop went through a number of variations and what's there is what I
> think is the most readable.
Please add a comment describing that.
>
> >> + phys_addr_t end = entry->addr + entry->size;
> >>
> >> - if (end <= start)
> >> - continue;
> >> + last_end = end;
> >
> > Please include the comment:
> > /* This entry's end. */
>
> Not really in favour of little comments like this. I'll put a comment
> above the loop.
>
> /*
> * For each memory region consider whether to release and map the
> * region and the preceding gap (if any).
> */
OK, can you please expand it to mention that you are evaluating the
beginning and end of the memory range? Thanks!
>
> > OK, but you have ripped out the nice printk's that existed before. So add them
> > back in:
> >
> >
> > printk(KERN_INFO "released %lu pages of unused memory\n", released);
> > printk(KERN_INFO "Set %ld page(s) to 1-1 mapping.\n", identity);
> >
> > as they are quite useful in the field.
>
> What problem are these useful for diagnosing that the remaining messages
> (and the e820 map) don't tell you already?
You get an idea of which pages are released and which ones are identity pages.
The E820 won't tell you how many got released in total. Well, you can
figure it out if you look at the E820 and at the mem=X and do some decimal
to hex conversions.
Much easier to just look at the end result.
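(E.g. for the first range you quote below: 0x100 - 0x9e = 0x62 = 98
pages - and that is only one of the half a dozen ranges.)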
>
> xen_release_chunk: looking at area pfn 9e-100: 98 pages freed
> 1-1 mapping on 9e->100
> 1-1 mapping on bf699->bf6af
> 1-1 mapping on bf6af->bf6ce
> 1-1 mapping on bf6ce->c0000
> 1-1 mapping on c0000->f0000
> 1-1 mapping on f0000->100000
Ok, but those are the 'debug' versions. The totals are for the 'info' level.
Also, considering that the Red Hat guys posted patches to improve
the look and feel of those printk's, I don't want to rip them out.
^ permalink raw reply [flat|nested] 15+ messages in thread