* [PATCH 0 of 8] x86/xen: updated physical mapping patches, and _PAGE_GLOBAL cleanup
@ 2008-07-01 23:46 Jeremy Fitzhardinge
2008-07-01 23:46 ` [PATCH 1 of 8] x86_64: create global mappings in head_64.S Jeremy Fitzhardinge
` (8 more replies)
0 siblings, 9 replies; 11+ messages in thread
From: Jeremy Fitzhardinge @ 2008-07-01 23:46 UTC (permalink / raw)
To: Ingo Molnar
Cc: LKML, x86, Stephen Tweedie, Eduardo Habkost, Mark McLoughlin
Hi Ingo,
Here's a revised series of the Xen-64 groundwork patches relating to
creating the physical memory mapping. The first few patches are the
necessary changes to make it work without triggering CPA warnings, and
the last couple are cleanups of _PAGE_GLOBAL in the _PAGE_KERNEL
flags, and could probably happily live in another topic branch
(they're not at all Xen-specific or required for Xen to work).
Breakdown:
1: x86_64: create global mappings in head_64.S
Create global mappings in head_64.S for consistency,
to avoid spurious CPA failures.
2-5: Map physical memory in a Xen-compatible way
These supersede "x86, 64-bit: adjust mapping of physical pagetables
to work with Xen".
6: Fold _PAGE_GLOBAL into _PAGE_KERNEL mappings
This reverts patch 1, by solving the problem in a more general way.
7: Remove __PAGE_KERNEL* from 32-bit
Patch 6 made the 32-bit kernel's __PAGE_KERNEL variables redundant,
so remove them.
8: Make CPA testing use some other pte bit
Use an unused bit to test CPA, rather than _PAGE_GLOBAL, so that
testing will still work when _PAGE_GLOBAL isn't usable.
Thanks,
J
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH 1 of 8] x86_64: create global mappings in head_64.S
2008-07-01 23:46 [PATCH 0 of 8] x86/xen: updated physical mapping patches, and _PAGE_GLOBAL cleanup Jeremy Fitzhardinge
@ 2008-07-01 23:46 ` Jeremy Fitzhardinge
2008-07-01 23:46 ` [PATCH 2 of 8] x86_64: unmap iomapping before populating Jeremy Fitzhardinge
` (7 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: Jeremy Fitzhardinge @ 2008-07-01 23:46 UTC (permalink / raw)
To: Ingo Molnar
Cc: LKML, x86, Stephen Tweedie, Eduardo Habkost, Mark McLoughlin
When creating the initial set of pagetables in head_64.S, create them
global. This matches the mappings created later in the kernel boot;
this means we can reuse these pagetables without causing an
inconsistency.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
arch/x86/kernel/head_64.S | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -374,7 +374,7 @@
/* Since I easily can, map the first 1G.
* Don't set NX because code runs from these pages.
*/
- PMDS(0, __PAGE_KERNEL_LARGE_EXEC, PTRS_PER_PMD)
+ PMDS(0, __PAGE_KERNEL_LARGE_EXEC | _PAGE_GLOBAL, PTRS_PER_PMD)
NEXT_PAGE(level2_kernel_pgt)
/*
* [PATCH 2 of 8] x86_64: unmap iomapping before populating
2008-07-01 23:46 [PATCH 0 of 8] x86/xen: updated physical mapping patches, and _PAGE_GLOBAL cleanup Jeremy Fitzhardinge
2008-07-01 23:46 ` [PATCH 1 of 8] x86_64: create global mappings in head_64.S Jeremy Fitzhardinge
@ 2008-07-01 23:46 ` Jeremy Fitzhardinge
2008-07-01 23:46 ` [PATCH 3 of 8] x86_64/setup: preserve existing PUD mappings Jeremy Fitzhardinge
` (6 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: Jeremy Fitzhardinge @ 2008-07-01 23:46 UTC (permalink / raw)
To: Ingo Molnar
Cc: LKML, x86, Stephen Tweedie, Eduardo Habkost, Mark McLoughlin
Xen doesn't like stray writable mappings of pagetable pages, so make
sure the ioremap mapping is removed before attaching the new page to
the pagetable.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
arch/x86/mm/init_64.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -333,11 +333,11 @@
pmd = alloc_low_page(&pmd_phys);
spin_lock(&init_mm.page_table_lock);
+ last_map_addr = phys_pmd_init(pmd, addr, end);
+ unmap_low_page(pmd);
pud_populate(&init_mm, pud, __va(pmd_phys));
- last_map_addr = phys_pmd_init(pmd, addr, end);
spin_unlock(&init_mm.page_table_lock);
- unmap_low_page(pmd);
}
__flush_tlb_all();
update_page_count(PG_LEVEL_1G, pages);
@@ -534,10 +534,10 @@
if (next > end)
next = end;
last_map_addr = phys_pud_init(pud, __pa(start), __pa(next));
+ unmap_low_page(pud);
if (!after_bootmem)
pgd_populate(&init_mm, pgd_offset_k(start),
__va(pud_phys));
- unmap_low_page(pud);
}
if (!after_bootmem)
* [PATCH 3 of 8] x86_64/setup: preserve existing PUD mappings
2008-07-01 23:46 [PATCH 0 of 8] x86/xen: updated physical mapping patches, and _PAGE_GLOBAL cleanup Jeremy Fitzhardinge
2008-07-01 23:46 ` [PATCH 1 of 8] x86_64: create global mappings in head_64.S Jeremy Fitzhardinge
2008-07-01 23:46 ` [PATCH 2 of 8] x86_64: unmap iomapping before populating Jeremy Fitzhardinge
@ 2008-07-01 23:46 ` Jeremy Fitzhardinge
2008-07-01 23:46 ` [PATCH 4 of 8] x86_64/setup: unconditionally populate the pgd Jeremy Fitzhardinge
` (5 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: Jeremy Fitzhardinge @ 2008-07-01 23:46 UTC (permalink / raw)
To: Ingo Molnar
Cc: LKML, x86, Stephen Tweedie, Eduardo Habkost, Mark McLoughlin
When constructing the physical mapping, reuse any existing PUD pages
rather than starting afresh. This preserves any special mappings the
earlier boot code may have created.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
arch/x86/mm/init_64.c | 20 +++++++++++++++++---
1 file changed, 17 insertions(+), 3 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -493,6 +493,14 @@
}
#endif
+static unsigned long __meminit
+phys_pud_update(pgd_t *pgd, unsigned long addr, unsigned long end)
+{
+ pud_t *pud = (pud_t *)pgd_page_vaddr(*pgd);
+
+ return phys_pud_init(pud, addr, end);
+}
+
/*
* Setup the direct mapping of the physical memory at PAGE_OFFSET.
* This runs before bootmem is initialized and gets pages directly from
@@ -525,14 +533,20 @@
unsigned long pud_phys;
pud_t *pud;
+ next = start + PGDIR_SIZE;
+ if (next > end)
+ next = end;
+
+ if (pgd_val(*pgd)) {
+ last_map_addr = phys_pud_update(pgd, __pa(start), __pa(end));
+ continue;
+ }
+
if (after_bootmem)
pud = pud_offset(pgd, start & PGDIR_MASK);
else
pud = alloc_low_page(&pud_phys);
- next = start + PGDIR_SIZE;
- if (next > end)
- next = end;
last_map_addr = phys_pud_init(pud, __pa(start), __pa(next));
unmap_low_page(pud);
if (!after_bootmem)
* [PATCH 4 of 8] x86_64/setup: unconditionally populate the pgd
2008-07-01 23:46 [PATCH 0 of 8] x86/xen: updated physical mapping patches, and _PAGE_GLOBAL cleanup Jeremy Fitzhardinge
` (2 preceding siblings ...)
2008-07-01 23:46 ` [PATCH 3 of 8] x86_64/setup: preserve existing PUD mappings Jeremy Fitzhardinge
@ 2008-07-01 23:46 ` Jeremy Fitzhardinge
2008-07-01 23:46 ` [PATCH 5 of 8] x86_64/setup: create 4k mappings if the cpu doesn't support PSE Jeremy Fitzhardinge
` (4 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: Jeremy Fitzhardinge @ 2008-07-01 23:46 UTC (permalink / raw)
To: Ingo Molnar
Cc: LKML, x86, Stephen Tweedie, Eduardo Habkost, Mark McLoughlin
When allocating a new pud, unconditionally populate the pgd (why did
we bother to create a new pud if we weren't going to populate it?).
This will only happen if the pgd slot was empty, since any existing
pud will be reused.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
arch/x86/mm/init_64.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -549,9 +549,8 @@
last_map_addr = phys_pud_init(pud, __pa(start), __pa(next));
unmap_low_page(pud);
- if (!after_bootmem)
- pgd_populate(&init_mm, pgd_offset_k(start),
- __va(pud_phys));
+ pgd_populate(&init_mm, pgd_offset_k(start),
+ __va(pud_phys));
}
if (!after_bootmem)
* [PATCH 5 of 8] x86_64/setup: create 4k mappings if the cpu doesn't support PSE
2008-07-01 23:46 [PATCH 0 of 8] x86/xen: updated physical mapping patches, and _PAGE_GLOBAL cleanup Jeremy Fitzhardinge
` (3 preceding siblings ...)
2008-07-01 23:46 ` [PATCH 4 of 8] x86_64/setup: unconditionally populate the pgd Jeremy Fitzhardinge
@ 2008-07-01 23:46 ` Jeremy Fitzhardinge
2008-07-01 23:46 ` [PATCH 6 of 8] x86: always set _PAGE_GLOBAL in _PAGE_KERNEL* flags Jeremy Fitzhardinge
` (3 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: Jeremy Fitzhardinge @ 2008-07-01 23:46 UTC (permalink / raw)
To: Ingo Molnar
Cc: LKML, x86, Stephen Tweedie, Eduardo Habkost, Mark McLoughlin
If the CPU (or environment) doesn't support PSE, then create 4k mappings.
This:
1. allocates enough memory for the ptes
2. reuses existing ptes, or
3. allocates and initializes new pte pages
In other words, it's identical to the code which deals with puds and pmds.
If the processor does support PSE, the behaviour is unchanged.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
arch/x86/mm/init_64.c | 63 +++++++++++++++++++++++++++++++++++++++++++++----
1 file changed, 59 insertions(+), 4 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -253,6 +253,40 @@
early_iounmap(adr, PAGE_SIZE);
}
+static void __meminit
+phys_pte_init(pte_t *pte_page, unsigned long addr, unsigned long end)
+{
+ unsigned pages = 0;
+ int i;
+ pte_t *pte = pte_page + pte_index(addr);
+
+ for(i = pte_index(addr); i < PTRS_PER_PTE; i++, addr += PAGE_SIZE, pte++) {
+
+ if (addr >= end) {
+ if (!after_bootmem) {
+ for(; i < PTRS_PER_PTE; i++, pte++)
+ set_pte(pte, __pte(0));
+ }
+ break;
+ }
+
+ if (pte_val(*pte))
+ continue;
+
+ set_pte(pte, pfn_pte(addr >> PAGE_SHIFT, PAGE_KERNEL));
+ pages++;
+ }
+ update_page_count(PG_LEVEL_4K, pages);
+}
+
+static void __meminit
+phys_pte_update(pmd_t *pmd, unsigned long address, unsigned long end)
+{
+ pte_t *pte = (pte_t *)pmd_page_vaddr(*pmd);
+
+ phys_pte_init(pte, address, end);
+}
+
static unsigned long __meminit
phys_pmd_init(pmd_t *pmd_page, unsigned long address, unsigned long end)
{
@@ -261,7 +295,9 @@
int i = pmd_index(address);
for (; i < PTRS_PER_PMD; i++, address += PMD_SIZE) {
+ unsigned long pte_phys;
pmd_t *pmd = pmd_page + pmd_index(address);
+ pte_t *pte;
if (address >= end) {
if (!after_bootmem) {
@@ -271,12 +307,27 @@
break;
}
- if (pmd_val(*pmd))
+ if (pmd_val(*pmd)) {
+ WARN_ON(!pmd_present(*pmd));
+ if (!pmd_large(*pmd)) {
+ WARN_ON(cpu_has_pse);
+ phys_pte_update(pmd, address, end);
+ }
continue;
+ }
- pages++;
- set_pte((pte_t *)pmd,
- pfn_pte(address >> PAGE_SHIFT, PAGE_KERNEL_LARGE));
+ if (cpu_has_pse) {
+ pages++;
+ set_pte((pte_t *)pmd,
+ pfn_pte(address >> PAGE_SHIFT, PAGE_KERNEL_LARGE));
+ continue;
+ }
+
+ pte = alloc_low_page(&pte_phys);
+ phys_pte_init(pte, address, end);
+ unmap_low_page(pte);
+
+ pmd_populate_kernel(&init_mm, pmd, __va(pte_phys));
}
update_page_count(PG_LEVEL_2M, pages);
return address;
@@ -354,6 +405,10 @@
if (!direct_gbpages) {
pmds = (end + PMD_SIZE - 1) >> PMD_SHIFT;
tables += round_up(pmds * sizeof(pmd_t), PAGE_SIZE);
+ }
+ if (!cpu_has_pse) {
+ unsigned long ptes = (end + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ tables += round_up(ptes * sizeof(pte_t), PAGE_SIZE);
}
/*
* [PATCH 6 of 8] x86: always set _PAGE_GLOBAL in _PAGE_KERNEL* flags
2008-07-01 23:46 [PATCH 0 of 8] x86/xen: updated physical mapping patches, and _PAGE_GLOBAL cleanup Jeremy Fitzhardinge
` (4 preceding siblings ...)
2008-07-01 23:46 ` [PATCH 5 of 8] x86_64/setup: create 4k mappings if the cpu doesn't support PSE Jeremy Fitzhardinge
@ 2008-07-01 23:46 ` Jeremy Fitzhardinge
2008-07-01 23:46 ` [PATCH 7 of 8] x86_32: remove __PAGE_KERNEL(_EXEC) Jeremy Fitzhardinge
` (2 subsequent siblings)
8 siblings, 0 replies; 11+ messages in thread
From: Jeremy Fitzhardinge @ 2008-07-01 23:46 UTC (permalink / raw)
To: Ingo Molnar
Cc: LKML, x86, Stephen Tweedie, Eduardo Habkost, Mark McLoughlin
Consistently set _PAGE_GLOBAL in the _PAGE_KERNEL* flags. This makes
32- and 64-bit code consistent, and removes some special cases where
__PAGE_KERNEL* did not have _PAGE_GLOBAL set, which had caused
confusion.
This patch only affects x86-64, which always supports global pages
(PGE). The x86-32 patch is next.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
arch/x86/kernel/head_64.S | 4 ++--
include/asm-x86/pgtable.h | 32 +++++++++++++-------------------
2 files changed, 15 insertions(+), 21 deletions(-)
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -374,7 +374,7 @@
/* Since I easily can, map the first 1G.
* Don't set NX because code runs from these pages.
*/
- PMDS(0, __PAGE_KERNEL_LARGE_EXEC | _PAGE_GLOBAL, PTRS_PER_PMD)
+ PMDS(0, __PAGE_KERNEL_LARGE_EXEC, PTRS_PER_PMD)
NEXT_PAGE(level2_kernel_pgt)
/*
@@ -387,7 +387,7 @@
* If you want to increase this then increase MODULES_VADDR
* too.)
*/
- PMDS(0, __PAGE_KERNEL_LARGE_EXEC|_PAGE_GLOBAL,
+ PMDS(0, __PAGE_KERNEL_LARGE_EXEC,
KERNEL_IMAGE_SIZE/PMD_SIZE)
NEXT_PAGE(level2_spare_pgt)
diff --git a/include/asm-x86/pgtable.h b/include/asm-x86/pgtable.h
--- a/include/asm-x86/pgtable.h
+++ b/include/asm-x86/pgtable.h
@@ -88,7 +88,7 @@
#endif /* __ASSEMBLY__ */
#else
#define __PAGE_KERNEL_EXEC \
- (_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | _PAGE_ACCESSED)
+ (_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_GLOBAL)
#define __PAGE_KERNEL (__PAGE_KERNEL_EXEC | _PAGE_NX)
#endif
@@ -103,24 +103,18 @@
#define __PAGE_KERNEL_LARGE (__PAGE_KERNEL | _PAGE_PSE)
#define __PAGE_KERNEL_LARGE_EXEC (__PAGE_KERNEL_EXEC | _PAGE_PSE)
-#ifdef CONFIG_X86_32
-# define MAKE_GLOBAL(x) __pgprot((x))
-#else
-# define MAKE_GLOBAL(x) __pgprot((x) | _PAGE_GLOBAL)
-#endif
-
-#define PAGE_KERNEL MAKE_GLOBAL(__PAGE_KERNEL)
-#define PAGE_KERNEL_RO MAKE_GLOBAL(__PAGE_KERNEL_RO)
-#define PAGE_KERNEL_EXEC MAKE_GLOBAL(__PAGE_KERNEL_EXEC)
-#define PAGE_KERNEL_RX MAKE_GLOBAL(__PAGE_KERNEL_RX)
-#define PAGE_KERNEL_WC MAKE_GLOBAL(__PAGE_KERNEL_WC)
-#define PAGE_KERNEL_NOCACHE MAKE_GLOBAL(__PAGE_KERNEL_NOCACHE)
-#define PAGE_KERNEL_UC_MINUS MAKE_GLOBAL(__PAGE_KERNEL_UC_MINUS)
-#define PAGE_KERNEL_EXEC_NOCACHE MAKE_GLOBAL(__PAGE_KERNEL_EXEC_NOCACHE)
-#define PAGE_KERNEL_LARGE MAKE_GLOBAL(__PAGE_KERNEL_LARGE)
-#define PAGE_KERNEL_LARGE_EXEC MAKE_GLOBAL(__PAGE_KERNEL_LARGE_EXEC)
-#define PAGE_KERNEL_VSYSCALL MAKE_GLOBAL(__PAGE_KERNEL_VSYSCALL)
-#define PAGE_KERNEL_VSYSCALL_NOCACHE MAKE_GLOBAL(__PAGE_KERNEL_VSYSCALL_NOCACHE)
+#define PAGE_KERNEL __pgprot(__PAGE_KERNEL)
+#define PAGE_KERNEL_RO __pgprot(__PAGE_KERNEL_RO)
+#define PAGE_KERNEL_EXEC __pgprot(__PAGE_KERNEL_EXEC)
+#define PAGE_KERNEL_RX __pgprot(__PAGE_KERNEL_RX)
+#define PAGE_KERNEL_WC __pgprot(__PAGE_KERNEL_WC)
+#define PAGE_KERNEL_NOCACHE __pgprot(__PAGE_KERNEL_NOCACHE)
+#define PAGE_KERNEL_UC_MINUS __pgprot(__PAGE_KERNEL_UC_MINUS)
+#define PAGE_KERNEL_EXEC_NOCACHE __pgprot(__PAGE_KERNEL_EXEC_NOCACHE)
+#define PAGE_KERNEL_LARGE __pgprot(__PAGE_KERNEL_LARGE)
+#define PAGE_KERNEL_LARGE_EXEC __pgprot(__PAGE_KERNEL_LARGE_EXEC)
+#define PAGE_KERNEL_VSYSCALL __pgprot(__PAGE_KERNEL_VSYSCALL)
+#define PAGE_KERNEL_VSYSCALL_NOCACHE __pgprot(__PAGE_KERNEL_VSYSCALL_NOCACHE)
/* xwr */
#define __P000 PAGE_NONE
* [PATCH 7 of 8] x86_32: remove __PAGE_KERNEL(_EXEC)
2008-07-01 23:46 [PATCH 0 of 8] x86/xen: updated physical mapping patches, and _PAGE_GLOBAL cleanup Jeremy Fitzhardinge
` (5 preceding siblings ...)
2008-07-01 23:46 ` [PATCH 6 of 8] x86: always set _PAGE_GLOBAL in _PAGE_KERNEL* flags Jeremy Fitzhardinge
@ 2008-07-01 23:46 ` Jeremy Fitzhardinge
2008-07-01 23:46 ` [PATCH 8 of 8] x86/cpa: use an undefined PTE bit for testing CPA Jeremy Fitzhardinge
2008-07-04 10:33 ` [PATCH 0 of 8] x86/xen: updated physical mapping patches, and _PAGE_GLOBAL cleanup Ingo Molnar
8 siblings, 0 replies; 11+ messages in thread
From: Jeremy Fitzhardinge @ 2008-07-01 23:46 UTC (permalink / raw)
To: Ingo Molnar
Cc: LKML, x86, Stephen Tweedie, Eduardo Habkost, Mark McLoughlin
Older x86-32 processors do not support global mappings (PGE), so the
global bit must only be used if the processor supports it.
The _PAGE_KERNEL* flags always have _PAGE_GLOBAL set, since logically
we always want it set.
This is OK even on processors which do not support PGE, since all
_PAGE flags are masked with __supported_pte_mask before being turned
into a real in-pagetable pte. On 32-bit systems, __supported_pte_mask
is initialized to not contain _PAGE_GLOBAL, and it is then added if
the CPU is found to support it.
The x86-32 code used to use __PAGE_KERNEL/__PAGE_KERNEL_EXEC for this
purpose, but they're now redundant and can be removed.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
arch/x86/mm/init_32.c | 10 ++--------
include/asm-x86/pgtable.h | 10 ----------
2 files changed, 2 insertions(+), 18 deletions(-)
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -383,11 +383,6 @@
# define set_highmem_pages_init() do { } while (0)
#endif /* CONFIG_HIGHMEM */
-pteval_t __PAGE_KERNEL = _PAGE_KERNEL;
-EXPORT_SYMBOL(__PAGE_KERNEL);
-
-pteval_t __PAGE_KERNEL_EXEC = _PAGE_KERNEL_EXEC;
-
void __init native_pagetable_setup_start(pgd_t *base)
{
unsigned long pfn, va;
@@ -509,7 +504,7 @@
int nx_enabled;
-pteval_t __supported_pte_mask __read_mostly = ~_PAGE_NX;
+pteval_t __supported_pte_mask __read_mostly = ~(_PAGE_NX | _PAGE_GLOBAL);
EXPORT_SYMBOL_GPL(__supported_pte_mask);
#ifdef CONFIG_X86_PAE
@@ -803,8 +798,7 @@
/* Enable PGE if available */
if (cpu_has_pge) {
set_in_cr4(X86_CR4_PGE);
- __PAGE_KERNEL |= _PAGE_GLOBAL;
- __PAGE_KERNEL_EXEC |= _PAGE_GLOBAL;
+ __supported_pte_mask |= _PAGE_GLOBAL;
}
/*
diff --git a/include/asm-x86/pgtable.h b/include/asm-x86/pgtable.h
--- a/include/asm-x86/pgtable.h
+++ b/include/asm-x86/pgtable.h
@@ -78,19 +78,9 @@
#define PAGE_READONLY_EXEC __pgprot(_PAGE_PRESENT | _PAGE_USER | \
_PAGE_ACCESSED)
-#ifdef CONFIG_X86_32
-#define _PAGE_KERNEL_EXEC \
- (_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | _PAGE_ACCESSED)
-#define _PAGE_KERNEL (_PAGE_KERNEL_EXEC | _PAGE_NX)
-
-#ifndef __ASSEMBLY__
-extern pteval_t __PAGE_KERNEL, __PAGE_KERNEL_EXEC;
-#endif /* __ASSEMBLY__ */
-#else
#define __PAGE_KERNEL_EXEC \
(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_GLOBAL)
#define __PAGE_KERNEL (__PAGE_KERNEL_EXEC | _PAGE_NX)
-#endif
#define __PAGE_KERNEL_RO (__PAGE_KERNEL & ~_PAGE_RW)
#define __PAGE_KERNEL_RX (__PAGE_KERNEL_EXEC & ~_PAGE_RW)
* [PATCH 8 of 8] x86/cpa: use an undefined PTE bit for testing CPA
2008-07-01 23:46 [PATCH 0 of 8] x86/xen: updated physical mapping patches, and _PAGE_GLOBAL cleanup Jeremy Fitzhardinge
` (6 preceding siblings ...)
2008-07-01 23:46 ` [PATCH 7 of 8] x86_32: remove __PAGE_KERNEL(_EXEC) Jeremy Fitzhardinge
@ 2008-07-01 23:46 ` Jeremy Fitzhardinge
2008-07-04 10:33 ` [PATCH 0 of 8] x86/xen: updated physical mapping patches, and _PAGE_GLOBAL cleanup Ingo Molnar
8 siblings, 0 replies; 11+ messages in thread
From: Jeremy Fitzhardinge @ 2008-07-01 23:46 UTC (permalink / raw)
To: Ingo Molnar
Cc: LKML, x86, Stephen Tweedie, Eduardo Habkost, Mark McLoughlin
Rather than using _PAGE_GLOBAL - which not all CPUs support - to test
CPA, use one of the reserved-for-software-use PTE flags instead. This
allows CPA testing to work on CPUs which don't support PGE.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
arch/x86/mm/pageattr-test.c | 21 +++++++++++++--------
1 file changed, 13 insertions(+), 8 deletions(-)
diff --git a/arch/x86/mm/pageattr-test.c b/arch/x86/mm/pageattr-test.c
--- a/arch/x86/mm/pageattr-test.c
+++ b/arch/x86/mm/pageattr-test.c
@@ -1,8 +1,8 @@
/*
* self test for change_page_attr.
*
- * Clears the global bit on random pages in the direct mapping, then reverts
- * and compares page tables forwards and afterwards.
+ * Clears a test pte bit on random pages in the direct mapping,
+ * then reverts and compares page tables forwards and afterwards.
*/
#include <linux/bootmem.h>
#include <linux/kthread.h>
@@ -31,6 +31,13 @@
#endif
GPS = (1<<30)
};
+
+#define PAGE_TESTBIT __pgprot(_PAGE_UNUSED1)
+
+static int pte_testbit(pte_t pte)
+{
+ return pte_flags(pte) & _PAGE_UNUSED1;
+}
struct split_state {
long lpg, gpg, spg, exec;
@@ -165,15 +172,14 @@
continue;
}
- err = change_page_attr_clear(addr[i], len[i],
- __pgprot(_PAGE_GLOBAL));
+ err = change_page_attr_set(addr[i], len[i], PAGE_TESTBIT);
if (err < 0) {
printk(KERN_ERR "CPA %d failed %d\n", i, err);
failed++;
}
pte = lookup_address(addr[i], &level);
- if (!pte || pte_global(*pte) || pte_huge(*pte)) {
+ if (!pte || !pte_testbit(*pte) || pte_huge(*pte)) {
printk(KERN_ERR "CPA %lx: bad pte %Lx\n", addr[i],
pte ? (u64)pte_val(*pte) : 0ULL);
failed++;
@@ -198,14 +204,13 @@
failed++;
continue;
}
- err = change_page_attr_set(addr[i], len[i],
- __pgprot(_PAGE_GLOBAL));
+ err = change_page_attr_clear(addr[i], len[i], PAGE_TESTBIT);
if (err < 0) {
printk(KERN_ERR "CPA reverting failed: %d\n", err);
failed++;
}
pte = lookup_address(addr[i], &level);
- if (!pte || !pte_global(*pte)) {
+ if (!pte || pte_testbit(*pte)) {
printk(KERN_ERR "CPA %lx: bad pte after revert %Lx\n",
addr[i], pte ? (u64)pte_val(*pte) : 0ULL);
failed++;
* Re: [PATCH 0 of 8] x86/xen: updated physical mapping patches, and _PAGE_GLOBAL cleanup
2008-07-01 23:46 [PATCH 0 of 8] x86/xen: updated physical mapping patches, and _PAGE_GLOBAL cleanup Jeremy Fitzhardinge
` (7 preceding siblings ...)
2008-07-01 23:46 ` [PATCH 8 of 8] x86/cpa: use an undefined PTE bit for testing CPA Jeremy Fitzhardinge
@ 2008-07-04 10:33 ` Ingo Molnar
2008-07-04 15:56 ` Jeremy Fitzhardinge
8 siblings, 1 reply; 11+ messages in thread
From: Ingo Molnar @ 2008-07-04 10:33 UTC (permalink / raw)
To: Jeremy Fitzhardinge
Cc: LKML, x86, Stephen Tweedie, Eduardo Habkost, Mark McLoughlin
* Jeremy Fitzhardinge <jeremy@goop.org> wrote:
> Hi Ingo,
>
> Here's a revised series of the Xen-64 groundwork patches relating to
> creating the physical memory mapping. The first few patches are the
> necessary changes to make it work without triggering CPA warnings, and
> the last couple are cleanups of _PAGE_GLOBAL in the _PAGE_KERNEL
> flags, and could probably happily live in another topic branch
> (they're not at all Xen-specific or required for Xen to work).
well there are context dependencies so i've put them into x86/xen-64bit.
i picked up these four and merged them into tip/master:
Jeremy Fitzhardinge (4):
x86_64/setup: unconditionally populate the pgd
x86: always set _PAGE_GLOBAL in _PAGE_KERNEL* flags
x86_32: remove __PAGE_KERNEL(_EXEC)
x86/cpa: use an undefined PTE bit for testing CPA
the others were either already applied or didnt apply.
i'm still testing tip/master but i've pushed out these updates to
x86/xen-64bit - you should be able to get the tree i'm testing by doing:
git-checkout tip/master
git-merge tip/x86/xen-64bit
Ingo
* Re: [PATCH 0 of 8] x86/xen: updated physical mapping patches, and _PAGE_GLOBAL cleanup
2008-07-04 10:33 ` [PATCH 0 of 8] x86/xen: updated physical mapping patches, and _PAGE_GLOBAL cleanup Ingo Molnar
@ 2008-07-04 15:56 ` Jeremy Fitzhardinge
0 siblings, 0 replies; 11+ messages in thread
From: Jeremy Fitzhardinge @ 2008-07-04 15:56 UTC (permalink / raw)
To: Ingo Molnar; +Cc: LKML, x86, Stephen Tweedie, Eduardo Habkost, Mark McLoughlin
Ingo Molnar wrote:
> well there are context dependencies so i've put them into x86/xen-64bit.
>
> i picked up these four and merged them into tip/master:
>
> Jeremy Fitzhardinge (4):
> x86_64/setup: unconditionally populate the pgd
> x86: always set _PAGE_GLOBAL in _PAGE_KERNEL* flags
> x86_32: remove __PAGE_KERNEL(_EXEC)
> x86/cpa: use an undefined PTE bit for testing CPA
>
> the others were either already applied or didnt apply.
>
Yes, that's fine. The other patches were a more elegant recasting of an
existing patch into a more finely bisectable series, which I only needed
to do to debug the _PAGE_GLOBAL issue. These four are the really
interesting parts of the series.
> i'm still testing tip/master but i've pushed out these updates to
> x86/xen-64bit - you should be able to get the tree i'm testing by doing:
>
> git-checkout tip/master
> git-merge tip/x86/xen-64bit
>
Thanks,
J
end of thread, other threads:[~2008-07-04 15:56 UTC | newest]
Thread overview: 11+ messages
-- links below jump to the message on this page --
2008-07-01 23:46 [PATCH 0 of 8] x86/xen: updated physical mapping patches, and _PAGE_GLOBAL cleanup Jeremy Fitzhardinge
2008-07-01 23:46 ` [PATCH 1 of 8] x86_64: create global mappings in head_64.S Jeremy Fitzhardinge
2008-07-01 23:46 ` [PATCH 2 of 8] x86_64: unmap iomapping before populating Jeremy Fitzhardinge
2008-07-01 23:46 ` [PATCH 3 of 8] x86_64/setup: preserve existing PUD mappings Jeremy Fitzhardinge
2008-07-01 23:46 ` [PATCH 4 of 8] x86_64/setup: unconditionally populate the pgd Jeremy Fitzhardinge
2008-07-01 23:46 ` [PATCH 5 of 8] x86_64/setup: create 4k mappings if the cpu doesn't support PSE Jeremy Fitzhardinge
2008-07-01 23:46 ` [PATCH 6 of 8] x86: always set _PAGE_GLOBAL in _PAGE_KERNEL* flags Jeremy Fitzhardinge
2008-07-01 23:46 ` [PATCH 7 of 8] x86_32: remove __PAGE_KERNEL(_EXEC) Jeremy Fitzhardinge
2008-07-01 23:46 ` [PATCH 8 of 8] x86/cpa: use an undefined PTE bit for testing CPA Jeremy Fitzhardinge
2008-07-04 10:33 ` [PATCH 0 of 8] x86/xen: updated physical mapping patches, and _PAGE_GLOBAL cleanup Ingo Molnar
2008-07-04 15:56 ` Jeremy Fitzhardinge