From: "Roger Pau Monné" <roger.pau@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>,
	Sander Eikelenboom <linux@eikelenboom.it>
Cc: xen-devel@lists.xenproject.org,
	David Vrabel <david.vrabel@citrix.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: dom0 PVH: Over-allocation for domain 0: 393217 > 393216
Date: Wed, 4 Jun 2014 12:48:49 +0200
Message-ID: <538EF991.1080809@citrix.com>
In-Reply-To: <20140603164949.74f4eab4@mantra.us.oracle.com>

On 04/06/14 01:49, Mukesh Rathor wrote:
> On Tue, 3 Jun 2014 12:28:27 -0700
> Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> 
>> On Tue, 3 Jun 2014 14:11:17 +0200
>> Sander Eikelenboom <linux@eikelenboom.it> wrote:
>>
>>> Hi,
>>>
>>> I just tried booting with "dom0pvh" and found the following warnings
>>> (complete combined xl-dmesg/dmesg attached) which don't show up when
>>> booting without dom0pvh:
>>
>> Yeah, I see. I'm able to reproduce. It's something about
>> dom0_mem=1536M,max:1536M, i.e., specifying max in there. Without it
>> (dom0_mem=1536M alone), it's fine. You can use that to play around
>> more while I take a look.
> 
> OK, I know what's going on. Basically, we are trying to populate the
> pfns removed by the holes punched in the e820, but because max is
> specified (and equals the initial RAM), guest_physmap_add_page() can't
> add them.
> 
> This will be fixed by the e820 work that Roger and David Vrabel are 
> doing. Please keep an eye on the thread:
> 
> http://www.gossamer-threads.com/lists/xen/devel/332603

Hello,

I've been looking into this, and commented on David's patch:

http://lists.xenproject.org/archives/html/xen-devel/2014-06/msg00293.html

I'm also appending a modified version of patch 1, so you can apply it 
directly. It seems to work fine in all my test cases (no dom0_mem, 
dom0_mem=<mem> and dom0_mem=<mem>,max:<mem>).

---
commit f503c114f86d92036e1af4e4246550f15e85c4bb
Author: David Vrabel <david.vrabel@citrix.com>
Date:   Tue Jun 3 09:13:43 2014 +0200

    x86/xen: fix memory setup for PVH dom0
    
    Since af06d66ee32b (x86: fix setup of PVH Dom0 memory map) in Xen, PVH
    dom0 need only use the memory map provided by Xen, which already has
    all the correct holes set up.
    
    xen_memory_setup() then ends up being trivial for a PVH guest, so
    introduce a new function (xen_auto_xlated_memory_setup()).
    
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 201d09a..4e7d2b7 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1536,7 +1536,10 @@ asmlinkage void __init xen_start_kernel(void)
 	if (!xen_pvh_domain())
 		pv_cpu_ops = xen_cpu_ops;
 
-	x86_init.resources.memory_setup = xen_memory_setup;
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		x86_init.resources.memory_setup = xen_auto_xlated_memory_setup;
+	else
+		x86_init.resources.memory_setup = xen_memory_setup;
 	x86_init.oem.arch_setup = xen_arch_setup;
 	x86_init.oem.banner = xen_banner;
 
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 0982233..e6e9df8 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -509,6 +509,34 @@ char * __init xen_memory_setup(void)
 }
 
 /*
+ * Machine specific memory setup for auto-translated guests.
+ */
+char * __init xen_auto_xlated_memory_setup(void)
+{
+	static struct e820entry map[E820MAX] __initdata;
+
+	struct xen_memory_map memmap;
+	int i;
+	int rc;
+
+	memmap.nr_entries = E820MAX;
+	set_xen_guest_handle(memmap.buffer, map);
+
+	rc = HYPERVISOR_memory_op(XENMEM_memory_map, &memmap);
+	BUG_ON(rc);
+
+	sanitize_e820_map(map, ARRAY_SIZE(map), &memmap.nr_entries);
+
+	for (i = 0; i < memmap.nr_entries; i++)
+		e820_add_region(map[i].addr, map[i].size, map[i].type);
+
+	memblock_reserve(__pa(xen_start_info->mfn_list),
+		xen_start_info->pt_base - xen_start_info->mfn_list);
+
+	return "Xen";
+}
+
+/*
  * Set the bit indicating "nosegneg" library variants should be used.
  * We only need to bother in pure 32-bit mode; compat 32-bit processes
  * can have un-truncated segments, so wrapping around is allowed.
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 1cb6f4c..b371d18 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -34,6 +34,7 @@ extern unsigned long xen_max_p2m_pfn;
 void xen_set_pat(u64);
 
 char * __init xen_memory_setup(void);
+char * __init xen_auto_xlated_memory_setup(void);
 void __init xen_arch_setup(void);
 void xen_enable_sysenter(void);
 void xen_enable_syscall(void);
