public inbox for linux-kernel@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCH 0 of 8] x86: use PTE_MASK consistently
@ 2008-05-09 11:02 Jeremy Fitzhardinge
  2008-05-09 11:02 ` [PATCH 1 of 8] x86: define PTE_MASK in a universally useful way Jeremy Fitzhardinge
                   ` (8 more replies)
  0 siblings, 9 replies; 13+ messages in thread
From: Jeremy Fitzhardinge @ 2008-05-09 11:02 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: LKML, Thomas Gleixner, Hugh Dickins

Hi all,

Here's a series to rationalize the use of PTE_MASK and remove some
amount of ad-hocery.

The gist of the series is:
 1. Fix the definition of PTE_MASK so that it's equally applicable in
    all pagetable modes
 2. Use it consistently

I haven't tried to address the *_bad() stuff, other than to convert
pmd_bad_* to use PTE_MASK.

I've compile tested it a bit and run it on 32-bit PAE (native and
Xen), but I haven't tested it with >4G memory, non-PAE or 64-bit.  In
other words, it needs some time in Ingo's torture machine.

      J


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH 1 of 8] x86: define PTE_MASK in a universally useful way
  2008-05-09 11:02 [PATCH 0 of 8] x86: use PTE_MASK consistently Jeremy Fitzhardinge
@ 2008-05-09 11:02 ` Jeremy Fitzhardinge
  2008-05-09 11:02 ` [PATCH 2 of 8] x86: fix warning on 32-bit non-PAE Jeremy Fitzhardinge
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Jeremy Fitzhardinge @ 2008-05-09 11:02 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: LKML, Thomas Gleixner, Hugh Dickins

Define PTE_MASK so that it contains a meaningful value for all x86
pagetable configurations.  Previously it was defined as a "long", which
meant it was too short to cover a 32-bit PAE pte entry.

It is now defined as a pteval_t, which is an integer type long enough
to contain a full pte (or pmd, pud, pgd).

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 include/asm-x86/page.h |   13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/include/asm-x86/page.h b/include/asm-x86/page.h
--- a/include/asm-x86/page.h
+++ b/include/asm-x86/page.h
@@ -10,8 +10,13 @@
 
 #ifdef __KERNEL__
 
-#define PHYSICAL_PAGE_MASK	(PAGE_MASK & __PHYSICAL_MASK)
-#define PTE_MASK		(_AT(long, PHYSICAL_PAGE_MASK))
+/* Cast PAGE_MASK to a signed type so that it is sign-extended if
+   virtual addresses are 32-bits but physical addresses are larger
+   (ie, 32-bit PAE). */
+#define PHYSICAL_PAGE_MASK	(((signed long)PAGE_MASK) & __PHYSICAL_MASK)
+
+/* PTE_MASK extracts the PFN from a (pte|pmd|pud|pgd)val_t */
+#define PTE_MASK		((pteval_t)PHYSICAL_PAGE_MASK)
 
 #define PMD_PAGE_SIZE		(_AC(1, UL) << PMD_SHIFT)
 #define PMD_PAGE_MASK		(~(PMD_PAGE_SIZE-1))
@@ -24,8 +29,8 @@
 /* to align the pointer to the (next) page boundary */
 #define PAGE_ALIGN(addr)	(((addr)+PAGE_SIZE-1)&PAGE_MASK)
 
-#define __PHYSICAL_MASK		_AT(phys_addr_t, (_AC(1,ULL) << __PHYSICAL_MASK_SHIFT) - 1)
-#define __VIRTUAL_MASK		((_AC(1,UL) << __VIRTUAL_MASK_SHIFT) - 1)
+#define __PHYSICAL_MASK		((((phys_addr_t)1) << __PHYSICAL_MASK_SHIFT) - 1)
+#define __VIRTUAL_MASK		((1UL << __VIRTUAL_MASK_SHIFT) - 1)
 
 #ifndef __ASSEMBLY__
 #include <linux/types.h>




* [PATCH 2 of 8] x86: fix warning on 32-bit non-PAE
  2008-05-09 11:02 [PATCH 0 of 8] x86: use PTE_MASK consistently Jeremy Fitzhardinge
  2008-05-09 11:02 ` [PATCH 1 of 8] x86: define PTE_MASK in a universally useful way Jeremy Fitzhardinge
@ 2008-05-09 11:02 ` Jeremy Fitzhardinge
  2008-05-09 11:02 ` [PATCH 3 of 8] x86: rearrange __(VIRTUAL|PHYSICAL)_MASK Jeremy Fitzhardinge
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Jeremy Fitzhardinge @ 2008-05-09 11:02 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: LKML, Thomas Gleixner, Hugh Dickins

Fix the warning:

include2/asm/pgtable.h: In function `pte_modify':
include2/asm/pgtable.h:290: warning: left shift count >= width of type

On 32-bit non-PAE, the virtual and physical addresses are both 32 bits,
so the expression ends up evaluating 1<<32.  Do the shift as a 64-bit
shift, then cast to the appropriate size.  This all happens at compile
time, and so has no effect on generated code.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 include/asm-x86/page.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/asm-x86/page.h b/include/asm-x86/page.h
--- a/include/asm-x86/page.h
+++ b/include/asm-x86/page.h
@@ -29,7 +29,7 @@
 /* to align the pointer to the (next) page boundary */
 #define PAGE_ALIGN(addr)	(((addr)+PAGE_SIZE-1)&PAGE_MASK)
 
-#define __PHYSICAL_MASK		((((phys_addr_t)1) << __PHYSICAL_MASK_SHIFT) - 1)
+#define __PHYSICAL_MASK		((phys_addr_t)(1ULL << __PHYSICAL_MASK_SHIFT) - 1)
 #define __VIRTUAL_MASK		((1UL << __VIRTUAL_MASK_SHIFT) - 1)
 
 #ifndef __ASSEMBLY__




* [PATCH 3 of 8] x86: rearrange __(VIRTUAL|PHYSICAL)_MASK
  2008-05-09 11:02 [PATCH 0 of 8] x86: use PTE_MASK consistently Jeremy Fitzhardinge
  2008-05-09 11:02 ` [PATCH 1 of 8] x86: define PTE_MASK in a universally useful way Jeremy Fitzhardinge
  2008-05-09 11:02 ` [PATCH 2 of 8] x86: fix warning on 32-bit non-PAE Jeremy Fitzhardinge
@ 2008-05-09 11:02 ` Jeremy Fitzhardinge
  2008-05-09 11:02 ` [PATCH 4 of 8] x86: use PTE_MASK in 32-bit PAE Jeremy Fitzhardinge
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Jeremy Fitzhardinge @ 2008-05-09 11:02 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: LKML, Thomas Gleixner, Hugh Dickins

Put the definitions of __(VIRTUAL|PHYSICAL)_MASK before their uses.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 include/asm-x86/page.h |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/asm-x86/page.h b/include/asm-x86/page.h
--- a/include/asm-x86/page.h
+++ b/include/asm-x86/page.h
@@ -9,6 +9,9 @@
 #define PAGE_MASK	(~(PAGE_SIZE-1))
 
 #ifdef __KERNEL__
+
+#define __PHYSICAL_MASK		((phys_addr_t)(1ULL << __PHYSICAL_MASK_SHIFT) - 1)
+#define __VIRTUAL_MASK		((1UL << __VIRTUAL_MASK_SHIFT) - 1)
 
 /* Cast PAGE_MASK to a signed type so that it is sign-extended if
    virtual addresses are 32-bits but physical addresses are larger
@@ -28,9 +31,6 @@
 
 /* to align the pointer to the (next) page boundary */
 #define PAGE_ALIGN(addr)	(((addr)+PAGE_SIZE-1)&PAGE_MASK)
-
-#define __PHYSICAL_MASK		((phys_addr_t)(1ULL << __PHYSICAL_MASK_SHIFT) - 1)
-#define __VIRTUAL_MASK		((1UL << __VIRTUAL_MASK_SHIFT) - 1)
 
 #ifndef __ASSEMBLY__
 #include <linux/types.h>




* [PATCH 4 of 8] x86: use PTE_MASK in 32-bit PAE
  2008-05-09 11:02 [PATCH 0 of 8] x86: use PTE_MASK consistently Jeremy Fitzhardinge
                   ` (2 preceding siblings ...)
  2008-05-09 11:02 ` [PATCH 3 of 8] x86: rearrange __(VIRTUAL|PHYSICAL)_MASK Jeremy Fitzhardinge
@ 2008-05-09 11:02 ` Jeremy Fitzhardinge
  2008-05-09 11:02 ` [PATCH 5 of 8] x86: use PTE_MASK in pgtable_32.h Jeremy Fitzhardinge
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Jeremy Fitzhardinge @ 2008-05-09 11:02 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: LKML, Thomas Gleixner, Hugh Dickins

Use PTE_MASK in 3-level pagetables (ie, 32-bit PAE).

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 include/asm-x86/pgtable-3level.h |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/asm-x86/pgtable-3level.h b/include/asm-x86/pgtable-3level.h
--- a/include/asm-x86/pgtable-3level.h
+++ b/include/asm-x86/pgtable-3level.h
@@ -120,9 +120,9 @@
 		write_cr3(pgd);
 }
 
-#define pud_page(pud) ((struct page *) __va(pud_val(pud) & PAGE_MASK))
+#define pud_page(pud) ((struct page *) __va(pud_val(pud) & PTE_MASK))
 
-#define pud_page_vaddr(pud) ((unsigned long) __va(pud_val(pud) & PAGE_MASK))
+#define pud_page_vaddr(pud) ((unsigned long) __va(pud_val(pud) & PTE_MASK))
 
 
 /* Find an entry in the second-level page table.. */
@@ -160,7 +160,7 @@
 
 static inline unsigned long pte_pfn(pte_t pte)
 {
-	return (pte_val(pte) & ~_PAGE_NX) >> PAGE_SHIFT;
+	return (pte_val(pte) & PTE_MASK) >> PAGE_SHIFT;
 }
 
 /*




* [PATCH 5 of 8] x86: use PTE_MASK in pgtable_32.h
  2008-05-09 11:02 [PATCH 0 of 8] x86: use PTE_MASK consistently Jeremy Fitzhardinge
                   ` (3 preceding siblings ...)
  2008-05-09 11:02 ` [PATCH 4 of 8] x86: use PTE_MASK in 32-bit PAE Jeremy Fitzhardinge
@ 2008-05-09 11:02 ` Jeremy Fitzhardinge
  2008-05-09 15:35   ` Thomas Gleixner
  2008-05-09 11:02 ` [PATCH 6 of 8] x86: clarify use of _PAGE_CHG_MASK Jeremy Fitzhardinge
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 13+ messages in thread
From: Jeremy Fitzhardinge @ 2008-05-09 11:02 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: LKML, Thomas Gleixner, Hugh Dickins

---
 include/asm-x86/pgtable_32.h |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/asm-x86/pgtable_32.h b/include/asm-x86/pgtable_32.h
--- a/include/asm-x86/pgtable_32.h
+++ b/include/asm-x86/pgtable_32.h
@@ -98,9 +98,9 @@
 extern int pmd_bad(pmd_t pmd);
 
 #define pmd_bad_v1(x)							\
-	(_KERNPG_TABLE != (pmd_val((x)) & ~(PAGE_MASK | _PAGE_USER)))
+	(_KERNPG_TABLE != (pmd_val((x)) & ~(PTE_MASK | _PAGE_USER)))
 #define	pmd_bad_v2(x)							\
-	(_KERNPG_TABLE != (pmd_val((x)) & ~(PAGE_MASK | _PAGE_USER |	\
+	(_KERNPG_TABLE != (pmd_val((x)) & ~(PTE_MASK | _PAGE_USER |	\
 					    _PAGE_PSE | _PAGE_NX)))
 
 #define pages_to_mb(x) ((x) >> (20-PAGE_SHIFT))
@@ -172,7 +172,7 @@
 #define pmd_page(pmd) (pfn_to_page(pmd_val((pmd)) >> PAGE_SHIFT))
 
 #define pmd_page_vaddr(pmd)					\
-	((unsigned long)__va(pmd_val((pmd)) & PAGE_MASK))
+	((unsigned long)__va(pmd_val((pmd)) & PTE_MASK))
 
 #if defined(CONFIG_HIGHPTE)
 #define pte_offset_map(dir, address)					\




* [PATCH 6 of 8] x86: clarify use of _PAGE_CHG_MASK
  2008-05-09 11:02 [PATCH 0 of 8] x86: use PTE_MASK consistently Jeremy Fitzhardinge
                   ` (4 preceding siblings ...)
  2008-05-09 11:02 ` [PATCH 5 of 8] x86: use PTE_MASK in pgtable_32.h Jeremy Fitzhardinge
@ 2008-05-09 11:02 ` Jeremy Fitzhardinge
  2008-05-09 11:02 ` [PATCH 7 of 8] x86: use PTE_MASK rather than ad-hoc mask Jeremy Fitzhardinge
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Jeremy Fitzhardinge @ 2008-05-09 11:02 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: LKML, Thomas Gleixner, Hugh Dickins

_PAGE_CHG_MASK is defined as the set of bits not updated by
pte_modify(); specifically, the pfn itself, and the Accessed and Dirty
bits (which are updated by hardware).

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 include/asm-x86/pgtable.h |    8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/include/asm-x86/pgtable.h b/include/asm-x86/pgtable.h
--- a/include/asm-x86/pgtable.h
+++ b/include/asm-x86/pgtable.h
@@ -58,6 +58,7 @@
 #define _KERNPG_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED |	\
 			 _PAGE_DIRTY)
 
+/* Set of bits not changed in pte_modify */
 #define _PAGE_CHG_MASK	(PTE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY)
 
 #define _PAGE_CACHE_MASK	(_PAGE_PCD | _PAGE_PWT)
@@ -285,11 +286,8 @@
 {
 	pteval_t val = pte_val(pte);
 
-	/*
-	 * Chop off the NX bit (if present), and add the NX portion of
-	 * the newprot (if present):
-	 */
-	val &= _PAGE_CHG_MASK & ~_PAGE_NX;
+	/* Extract unchanged bits from pte */
+	val &= _PAGE_CHG_MASK;
 	val |= pgprot_val(newprot) & __supported_pte_mask;
 
 	return __pte(val);




* [PATCH 7 of 8] x86: use PTE_MASK rather than ad-hoc mask
  2008-05-09 11:02 [PATCH 0 of 8] x86: use PTE_MASK consistently Jeremy Fitzhardinge
                   ` (5 preceding siblings ...)
  2008-05-09 11:02 ` [PATCH 6 of 8] x86: clarify use of _PAGE_CHG_MASK Jeremy Fitzhardinge
@ 2008-05-09 11:02 ` Jeremy Fitzhardinge
  2008-05-09 11:02 ` [PATCH 8 of 8] xen: use PTE_MASK in pte_mfn() Jeremy Fitzhardinge
  2008-05-13 10:11 ` [PATCH 0 of 8] x86: use PTE_MASK consistently Ingo Molnar
  8 siblings, 0 replies; 13+ messages in thread
From: Jeremy Fitzhardinge @ 2008-05-09 11:02 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: LKML, Thomas Gleixner, Hugh Dickins

Use ~PTE_MASK to extract the non-pfn parts of the pte (ie, the pte
flags), rather than constructing an ad-hoc mask.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 include/asm-x86/pgtable.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/asm-x86/pgtable.h b/include/asm-x86/pgtable.h
--- a/include/asm-x86/pgtable.h
+++ b/include/asm-x86/pgtable.h
@@ -293,7 +293,7 @@
 	return __pte(val);
 }
 
-#define pte_pgprot(x) __pgprot(pte_val(x) & (0xfff | _PAGE_NX))
+#define pte_pgprot(x) __pgprot(pte_val(x) & ~PTE_MASK)
 
 #define canon_pgprot(p) __pgprot(pgprot_val(p) & __supported_pte_mask)
 




* [PATCH 8 of 8] xen: use PTE_MASK in pte_mfn()
  2008-05-09 11:02 [PATCH 0 of 8] x86: use PTE_MASK consistently Jeremy Fitzhardinge
                   ` (6 preceding siblings ...)
  2008-05-09 11:02 ` [PATCH 7 of 8] x86: use PTE_MASK rather than ad-hoc mask Jeremy Fitzhardinge
@ 2008-05-09 11:02 ` Jeremy Fitzhardinge
  2008-05-13 10:11 ` [PATCH 0 of 8] x86: use PTE_MASK consistently Ingo Molnar
  8 siblings, 0 replies; 13+ messages in thread
From: Jeremy Fitzhardinge @ 2008-05-09 11:02 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: LKML, Thomas Gleixner, Hugh Dickins

Use PTE_MASK to extract mfn from pte.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 include/asm-x86/xen/page.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/asm-x86/xen/page.h b/include/asm-x86/xen/page.h
--- a/include/asm-x86/xen/page.h
+++ b/include/asm-x86/xen/page.h
@@ -127,7 +127,7 @@
 
 static inline unsigned long pte_mfn(pte_t pte)
 {
-	return (pte.pte & ~_PAGE_NX) >> PAGE_SHIFT;
+	return (pte.pte & PTE_MASK) >> PAGE_SHIFT;
 }
 
 static inline pte_t mfn_pte(unsigned long page_nr, pgprot_t pgprot)




* Re: [PATCH 5 of 8] x86: use PTE_MASK in pgtable_32.h
  2008-05-09 11:02 ` [PATCH 5 of 8] x86: use PTE_MASK in pgtable_32.h Jeremy Fitzhardinge
@ 2008-05-09 15:35   ` Thomas Gleixner
  2008-05-09 18:36     ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 13+ messages in thread
From: Thomas Gleixner @ 2008-05-09 15:35 UTC (permalink / raw)
  To: Jeremy Fitzhardinge; +Cc: Ingo Molnar, LKML, Hugh Dickins

On Fri, 9 May 2008, Jeremy Fitzhardinge wrote:

> ---
>  include/asm-x86/pgtable_32.h |    6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/include/asm-x86/pgtable_32.h b/include/asm-x86/pgtable_32.h
> --- a/include/asm-x86/pgtable_32.h
> +++ b/include/asm-x86/pgtable_32.h
> @@ -98,9 +98,9 @@
>  extern int pmd_bad(pmd_t pmd);
>  
>  #define pmd_bad_v1(x)							\
> -	(_KERNPG_TABLE != (pmd_val((x)) & ~(PAGE_MASK | _PAGE_USER)))
> +	(_KERNPG_TABLE != (pmd_val((x)) & ~(PTE_MASK | _PAGE_USER)))
>  #define	pmd_bad_v2(x)							\
> -	(_KERNPG_TABLE != (pmd_val((x)) & ~(PAGE_MASK | _PAGE_USER |	\
> +	(_KERNPG_TABLE != (pmd_val((x)) & ~(PTE_MASK | _PAGE_USER |	\

that's gone from mainline already. Hugh's patch restored the old pmd_bad check.

Thanks,
	tglx


* Re: [PATCH 5 of 8] x86: use PTE_MASK in pgtable_32.h
  2008-05-09 15:35   ` Thomas Gleixner
@ 2008-05-09 18:36     ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 13+ messages in thread
From: Jeremy Fitzhardinge @ 2008-05-09 18:36 UTC (permalink / raw)
  To: Thomas Gleixner; +Cc: Ingo Molnar, LKML, Hugh Dickins

Thomas Gleixner wrote:
> that's gone from mainline already. Hugh's patch restored the old pmd_bad check.
>   

Here's the rebased patch.

---
 include/asm-x86/pgtable_32.h |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

===================================================================
--- a/include/asm-x86/pgtable_32.h
+++ b/include/asm-x86/pgtable_32.h
@@ -94,7 +94,7 @@
 /* To avoid harmful races, pmd_none(x) should check only the lower when PAE */
 #define pmd_none(x)	(!(unsigned long)pmd_val((x)))
 #define pmd_present(x)	(pmd_val((x)) & _PAGE_PRESENT)
-#define pmd_bad(x) ((pmd_val(x) & (~PAGE_MASK & ~_PAGE_USER)) != _KERNPG_TABLE)
+#define pmd_bad(x) ((pmd_val(x) & (~PTE_MASK & ~_PAGE_USER)) != _KERNPG_TABLE)
 
 #define pages_to_mb(x) ((x) >> (20-PAGE_SHIFT))
 
@@ -165,7 +165,7 @@
 #define pmd_page(pmd) (pfn_to_page(pmd_val((pmd)) >> PAGE_SHIFT))
 
 #define pmd_page_vaddr(pmd)					\
-	((unsigned long)__va(pmd_val((pmd)) & PAGE_MASK))
+	((unsigned long)__va(pmd_val((pmd)) & PTE_MASK))
 
 #if defined(CONFIG_HIGHPTE)
 #define pte_offset_map(dir, address)					\




* Re: [PATCH 0 of 8] x86: use PTE_MASK consistently
  2008-05-09 11:02 [PATCH 0 of 8] x86: use PTE_MASK consistently Jeremy Fitzhardinge
                   ` (7 preceding siblings ...)
  2008-05-09 11:02 ` [PATCH 8 of 8] xen: use PTE_MASK in pte_mfn() Jeremy Fitzhardinge
@ 2008-05-13 10:11 ` Ingo Molnar
  8 siblings, 0 replies; 13+ messages in thread
From: Ingo Molnar @ 2008-05-13 10:11 UTC (permalink / raw)
  To: Jeremy Fitzhardinge; +Cc: LKML, Thomas Gleixner, Hugh Dickins


* Jeremy Fitzhardinge <jeremy@goop.org> wrote:

> Here's a series to rationalize the use of PTE_MASK and remove some 
> amount of ad-hocery.
> 
> The gist of the series is:
>  1. Fix the definition of PTE_MASK so that it's equally applicable in
>     all pagetable modes
>  2. Use it consistently
> 
> I haven't tried to address the *_bad() stuff, other than to convert 
> pmd_bad_* to use PTE_MASK.
> 
> I've compile tested it a bit and run it on 32-bit PAE (native and 
> Xen), but I haven't tested it with >4G memory, non-PAE or 64-bit.  In 
> other words, it needs some time in Ingo's torture machine.

applied, thanks. This patchset has held up fine so far in overnight 
testing, nice work.

	Ingo


* [PATCH 3 of 8] x86: rearrange __(VIRTUAL|PHYSICAL)_MASK
  2008-05-20  7:26 Jeremy Fitzhardinge
@ 2008-05-20  7:26 ` Jeremy Fitzhardinge
  0 siblings, 0 replies; 13+ messages in thread
From: Jeremy Fitzhardinge @ 2008-05-20  7:26 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: Andrew Morton, Ingo Molnar, LKML, Thomas Gleixner, Hugh Dickins,
	Theodore Tso, Gabriel C, Keith Packard, Pallipadi, Venkatesh,
	Eric Anholt, Siddha, Suresh B, airlied, Barnes, Jesse,
	Rafael J. Wysocki

Put the definitions of __(VIRTUAL|PHYSICAL)_MASK before their uses.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
---
 include/asm-x86/page.h |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/asm-x86/page.h b/include/asm-x86/page.h
--- a/include/asm-x86/page.h
+++ b/include/asm-x86/page.h
@@ -9,6 +9,9 @@
 #define PAGE_MASK	(~(PAGE_SIZE-1))
 
 #ifdef __KERNEL__
+
+#define __PHYSICAL_MASK		((phys_addr_t)(1ULL << __PHYSICAL_MASK_SHIFT) - 1)
+#define __VIRTUAL_MASK		((1UL << __VIRTUAL_MASK_SHIFT) - 1)
 
 /* Cast PAGE_MASK to a signed type so that it is sign-extended if
    virtual addresses are 32-bits but physical addresses are larger
@@ -28,9 +31,6 @@
 
 /* to align the pointer to the (next) page boundary */
 #define PAGE_ALIGN(addr)	(((addr)+PAGE_SIZE-1)&PAGE_MASK)
-
-#define __PHYSICAL_MASK		((phys_addr_t)(1ULL << __PHYSICAL_MASK_SHIFT) - 1)
-#define __VIRTUAL_MASK		((1UL << __VIRTUAL_MASK_SHIFT) - 1)
 
 #ifndef __ASSEMBLY__
 #include <linux/types.h>




end of thread, other threads:[~2008-05-20  9:08 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2008-05-09 11:02 [PATCH 0 of 8] x86: use PTE_MASK consistently Jeremy Fitzhardinge
2008-05-09 11:02 ` [PATCH 1 of 8] x86: define PTE_MASK in a universally useful way Jeremy Fitzhardinge
2008-05-09 11:02 ` [PATCH 2 of 8] x86: fix warning on 32-bit non-PAE Jeremy Fitzhardinge
2008-05-09 11:02 ` [PATCH 3 of 8] x86: rearrange __(VIRTUAL|PHYSICAL)_MASK Jeremy Fitzhardinge
2008-05-09 11:02 ` [PATCH 4 of 8] x86: use PTE_MASK in 32-bit PAE Jeremy Fitzhardinge
2008-05-09 11:02 ` [PATCH 5 of 8] x86: use PTE_MASK in pgtable_32.h Jeremy Fitzhardinge
2008-05-09 15:35   ` Thomas Gleixner
2008-05-09 18:36     ` Jeremy Fitzhardinge
2008-05-09 11:02 ` [PATCH 6 of 8] x86: clarify use of _PAGE_CHG_MASK Jeremy Fitzhardinge
2008-05-09 11:02 ` [PATCH 7 of 8] x86: use PTE_MASK rather than ad-hoc mask Jeremy Fitzhardinge
2008-05-09 11:02 ` [PATCH 8 of 8] xen: use PTE_MASK in pte_mfn() Jeremy Fitzhardinge
2008-05-13 10:11 ` [PATCH 0 of 8] x86: use PTE_MASK consistently Ingo Molnar
  -- strict thread matches above, loose matches on Subject: below --
2008-05-20  7:26 Jeremy Fitzhardinge
2008-05-20  7:26 ` [PATCH 3 of 8] x86: rearrange __(VIRTUAL|PHYSICAL)_MASK Jeremy Fitzhardinge

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox