From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com,
	jbeulich@suse.com, stefano.stabellini@eu.citrix.com
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: [PATCH 5/6] xen/mmu: Copy and revector the P2M tree.
Date: Tue, 31 Jul 2012 10:43:23 -0400	[thread overview]
Message-ID: <1343745804-28028-6-git-send-email-konrad.wilk@oracle.com> (raw)
In-Reply-To: <1343745804-28028-1-git-send-email-konrad.wilk@oracle.com>

Please first read the description in the "xen/p2m: Add logic to revector a
P2M tree to use __va leafs" patch.

The 'xen_revector_p2m_tree()' function allocates a new P2M tree,
copies the contents of the old one into it, and returns the new one.

At this stage, the __ka address space (which is what the old
P2M tree was using) is partially disassembled. cleanup_highmap()
has removed the PMD entries for 0-16MB and for anything past _brk_end
up to max_pfn_mapped (which is the end of the ramdisk).

We have revectored the P2M tree (and the one for save/restore as well)
to use shiny new __va addresses pointing to the new MFNs. The
xen_start_info has already been taken care of in
'xen_setup_kernel_pagetable()' and xen_start_info->shared_info in
'xen_setup_shared_info()', so we are free to roam and delete PMD
entries - which is exactly what we are going to do: we rip out the
__ka mapping for the old P2M array.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/mmu.c |   57 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 57 insertions(+), 0 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index de4b8fd..9358b75 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1183,9 +1183,64 @@ static __init void xen_mapping_pagetable_reserve(u64 start, u64 end)
 
 static void xen_post_allocator_init(void);
 
+#ifdef CONFIG_X86_64
+void __init xen_cleanhighmap(unsigned long vaddr, unsigned long vaddr_end)
+{
+	unsigned long kernel_end = roundup((unsigned long)_brk_end, PMD_SIZE) - 1;
+	pmd_t *pmd = level2_kernel_pgt + pmd_index(vaddr);
+
+	/* NOTE: The loop is more greedy than the cleanup_highmap variant.
+	 * We include the PMD passed in on _both_ boundaries. */
+	for (; vaddr <= vaddr_end && (pmd < (level2_kernel_pgt + PAGE_SIZE));
+			pmd++, vaddr += PMD_SIZE) {
+		if (pmd_none(*pmd))
+			continue;
+		if (vaddr < (unsigned long) _text || vaddr > kernel_end)
+			set_pmd(pmd, __pmd(0));
+	}
+	/* In case we did something silly, we should crash in this function
+	 * instead of somewhere later and be confusing. */
+	xen_mc_flush();
+}
+#endif
 static void __init xen_pagetable_setup_done(pgd_t *base)
 {
+#ifdef CONFIG_X86_64
+	unsigned long size;
+	unsigned long addr;
+#endif
+
 	xen_setup_shared_info();
+#ifdef CONFIG_X86_64
+	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+		unsigned long new_mfn_list;
+
+		size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
+
+		new_mfn_list = xen_revector_p2m_tree();
+
+		/* On 32-bit, we get zero so this never gets executed. */
+		if (new_mfn_list && new_mfn_list != xen_start_info->mfn_list) {
+			/* using __ka address! */
+			memset((void *)xen_start_info->mfn_list, 0, size);
+
+			/* We should be in __ka space. */
+			BUG_ON(xen_start_info->mfn_list < __START_KERNEL_map);
+			addr = xen_start_info->mfn_list;
+			size = PAGE_ALIGN(xen_start_info->nr_pages * sizeof(unsigned long));
+			/* We round up to the PMD, which means that if anybody at this stage is
+			 * using the __ka address of xen_start_info or xen_start_info->shared_info
+			 * they are going to crash. Fortunately we have already revectored
+			 * in xen_setup_kernel_pagetable and in xen_setup_shared_info. */
+			size = roundup(size, PMD_SIZE);
+			xen_cleanhighmap(addr, addr + size);
+
+			memblock_free(__pa(xen_start_info->mfn_list), size);
+			/* And revector! Bye bye old array */
+			xen_start_info->mfn_list = new_mfn_list;
+		}
+	}
+#endif
 	xen_post_allocator_init();
 }
 
@@ -1822,6 +1877,8 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 
 	/* Our (by three pages) smaller Xen pagetable that we are using */
 	memblock_reserve(PFN_PHYS(pt_base), (pt_end - pt_base) * PAGE_SIZE);
+	/* Revector the xen_start_info */
+	xen_start_info = (struct start_info *)__va(__pa(xen_start_info));
 }
 #else	/* !CONFIG_X86_64 */
 static RESERVE_BRK_ARRAY(pmd_t, initial_kernel_pmd, PTRS_PER_PMD);
-- 
1.7.7.6

