linuxppc-dev.lists.ozlabs.org archive mirror
* [patch 0/5] Strong Access Ordering
@ 2008-07-07 14:28 Dave Kleikamp
  2008-07-07 14:28 ` [patch 1/5] mm: Allow architectures to define additional protection bits Dave Kleikamp
                   ` (4 more replies)
  0 siblings, 5 replies; 8+ messages in thread
From: Dave Kleikamp @ 2008-07-07 14:28 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: linuxppc-dev list

Ben,
Please include these patches in powerpc-next.

Changelog since posting to linuxppc-dev on June 18th:
- rebased on top of powerpc-next branch
- Added arch_validate_prot define as suggested by Andrew
- Added #ifdef CONFIG_PPC64 in include/asm-powerpc/mman.h to fix the 32-bit build

Thanks,
Shaggy
-- 
Dave Kleikamp
IBM Linux Technology Center


* [patch 1/5] mm: Allow architectures to define additional protection bits
  2008-07-07 14:28 [patch 0/5] Strong Access Ordering Dave Kleikamp
@ 2008-07-07 14:28 ` Dave Kleikamp
  2008-07-07 14:28 ` [patch 2/5] powerpc: Define flags for Strong Access Ordering Dave Kleikamp
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Dave Kleikamp @ 2008-07-07 14:28 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: linuxppc-dev list, Andrew Morton, Hugh Dickins

This patch allows architectures to define functions to deal with
additional protection bits for mmap() and mprotect().

arch_calc_vm_prot_bits() maps additional protection bits to vm_flags
arch_vm_get_page_prot() maps additional vm_flags to the vma's vm_page_prot
arch_validate_prot() checks for valid values of the protection bits

Note: vm_get_page_prot() is now pretty ugly, but the generated code
should be identical for architectures that don't define additional
protection bits: their arch_vm_get_page_prot() defaults to __pgprot(0),
so the compiler can fold the extra OR away.
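
To make the composition concrete, here is a small userspace model of the
resulting pipeline.  All constants are illustrative stand-ins, not the
kernel's values, and PROT_ARCH/VM_ARCH are hypothetical names used only
for this sketch:

#include <stdio.h>

/* Illustrative stand-ins -- not the kernel's actual values */
#define PROT_READ	0x1
#define PROT_WRITE	0x2
#define PROT_ARCH	0x10	/* hypothetical arch-specific prot bit */
#define VM_READ		0x1
#define VM_WRITE	0x2
#define VM_ARCH		0x100	/* hypothetical vm_flags bit it maps to */

/* The architecture's arch_calc_vm_prot_bits(): prot bit -> vm_flags bit */
static unsigned long arch_calc_vm_prot_bits(unsigned long prot)
{
	return (prot & PROT_ARCH) ? VM_ARCH : 0;
}

/* The architecture's arch_validate_prot(): accept only known bits */
static int arch_validate_prot(unsigned long prot)
{
	return (prot & ~(PROT_READ | PROT_WRITE | PROT_ARCH)) == 0;
}

int main(void)
{
	unsigned long prot = PROT_READ | PROT_WRITE | PROT_ARCH;
	unsigned long vm_flags;

	if (!arch_validate_prot(prot))	/* as in sys_mprotect() */
		return 1;

	/* As in calc_vm_prot_bits(): generic bits, then the arch hook */
	vm_flags = ((prot & PROT_READ)  ? VM_READ  : 0) |
		   ((prot & PROT_WRITE) ? VM_WRITE : 0) |
		   arch_calc_vm_prot_bits(prot);

	printf("vm_flags = 0x%lx\n", vm_flags);	/* prints 0x103 */
	return 0;
}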

Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Hugh Dickins <hugh@veritas.com>
---

 include/linux/mman.h |   29 ++++++++++++++++++++++++++++-
 mm/mmap.c            |    5 +++--
 mm/mprotect.c        |    2 +-
 3 files changed, 32 insertions(+), 4 deletions(-)

Index: b/include/linux/mman.h
===================================================================
--- a/include/linux/mman.h
+++ b/include/linux/mman.h
@@ -34,6 +34,32 @@
 }
 
 /*
+ * Allow architectures to handle additional protection bits
+ */
+
+#ifndef arch_calc_vm_prot_bits
+#define arch_calc_vm_prot_bits(prot) 0
+#endif
+
+#ifndef arch_vm_get_page_prot
+#define arch_vm_get_page_prot(vm_flags) __pgprot(0)
+#endif
+
+#ifndef arch_validate_prot
+/*
+ * This is called from mprotect().  PROT_GROWSDOWN and PROT_GROWSUP have
+ * already been masked out.
+ *
+ * Returns true if the prot flags are valid
+ */
+static inline int arch_validate_prot(unsigned long prot)
+{
+	return (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM)) == 0;
+}
+#define arch_validate_prot arch_validate_prot
+#endif
+
+/*
  * Optimisation macro.  It is equivalent to:
  *      (x & bit1) ? bit2 : 0
  * but this version is faster.
@@ -51,7 +77,8 @@
 {
 	return _calc_vm_trans(prot, PROT_READ,  VM_READ ) |
 	       _calc_vm_trans(prot, PROT_WRITE, VM_WRITE) |
-	       _calc_vm_trans(prot, PROT_EXEC,  VM_EXEC );
+	       _calc_vm_trans(prot, PROT_EXEC,  VM_EXEC) |
+	       arch_calc_vm_prot_bits(prot);
 }
 
 /*
Index: b/mm/mmap.c
===================================================================
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -72,8 +72,9 @@
 
 pgprot_t vm_get_page_prot(unsigned long vm_flags)
 {
-	return protection_map[vm_flags &
-				(VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)];
+	return __pgprot(pgprot_val(protection_map[vm_flags &
+				(VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]) |
+			pgprot_val(arch_vm_get_page_prot(vm_flags)));
 }
 EXPORT_SYMBOL(vm_get_page_prot);
 
Index: b/mm/mprotect.c
===================================================================
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -239,7 +239,7 @@
 	end = start + len;
 	if (end <= start)
 		return -ENOMEM;
-	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM))
+	if (!arch_validate_prot(prot))
 		return -EINVAL;
 
 	reqprot = prot;

-- 
Dave Kleikamp
IBM Linux Technology Center


* [patch 2/5] powerpc: Define flags for Strong Access Ordering
  2008-07-07 14:28 [patch 0/5] Strong Access Ordering Dave Kleikamp
  2008-07-07 14:28 ` [patch 1/5] mm: Allow architectures to define additional protection bits Dave Kleikamp
@ 2008-07-07 14:28 ` Dave Kleikamp
  2008-07-07 14:28 ` [patch 3/5] powerpc: Add SAO Feature bit to the cputable Dave Kleikamp
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Dave Kleikamp @ 2008-07-07 14:28 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: linuxppc-dev list

This patch defines:

- PROT_SAO, which is passed into mmap() and mprotect() in the prot field
- VM_SAO in vma->vm_flags, and
- _PAGE_SAO, the combination of WIMG bits in the pte that enables strong
  access ordering for the page.

NOTE: There doesn't seem to be a precedent for architecture-dependent vm_flags.
It may be better to define VM_SAO somewhere in include/asm-powerpc/.  Since
vm_flags is a long, defining it in the high-order word would help prevent a
collision with any newly added values in architecture-independent code.
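
For reference, the WIMG encoding behind _PAGE_SAO can be sanity-checked
from userspace.  The flag values below are assumed from the
pgtable-ppc64.h of this era; verify them against your tree:

#include <assert.h>
#include <stdio.h>

/* Assumed values from include/asm-powerpc/pgtable-ppc64.h */
#define _PAGE_GUARDED	0x0008	/* G */
#define _PAGE_COHERENT	0x0010	/* M */
#define _PAGE_NO_CACHE	0x0020	/* I */
#define _PAGE_WRITETHRU	0x0040	/* W */
#define _PAGE_SAO	(_PAGE_WRITETHRU | _PAGE_NO_CACHE | _PAGE_COHERENT)

int main(void)
{
	/* The WIMG field occupies mask 0x78: W is the high bit, G the low */
	unsigned int wimg = (_PAGE_SAO >> 3) & 0xf;

	printf("WIMG = 0b%u%u%u%u\n", (wimg >> 3) & 1, (wimg >> 2) & 1,
	       (wimg >> 1) & 1, wimg & 1);
	assert(wimg == 0xe);	/* 0b1110: W=1, I=1, M=1, G=0 */
	return 0;
}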

Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
---

 include/asm-powerpc/mman.h          |    2 ++
 include/asm-powerpc/pgtable-ppc64.h |    3 +++
 include/linux/mm.h                  |    1 +
 3 files changed, 6 insertions(+)

Index: b/include/asm-powerpc/mman.h
===================================================================
--- a/include/asm-powerpc/mman.h
+++ b/include/asm-powerpc/mman.h
@@ -10,6 +10,8 @@
  * 2 of the License, or (at your option) any later version.
  */
 
+#define PROT_SAO	0x10		/* Strong Access Ordering */
+
 #define MAP_RENAME      MAP_ANONYMOUS   /* In SunOS terminology */
 #define MAP_NORESERVE   0x40            /* don't reserve swap pages */
 #define MAP_LOCKED	0x80
Index: b/include/asm-powerpc/pgtable-ppc64.h
===================================================================
--- a/include/asm-powerpc/pgtable-ppc64.h
+++ b/include/asm-powerpc/pgtable-ppc64.h
@@ -93,6 +93,9 @@
 #define _PAGE_RW	0x0200 /* software: user write access allowed */
 #define _PAGE_BUSY	0x0800 /* software: PTE & hash are busy */
 
+/* Strong Access Ordering */
+#define _PAGE_SAO	(_PAGE_WRITETHRU | _PAGE_NO_CACHE | _PAGE_COHERENT)
+
 #define _PAGE_BASE	(_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_COHERENT)
 
 #define _PAGE_WRENABLE	(_PAGE_RW | _PAGE_DIRTY)
Index: b/include/linux/mm.h
===================================================================
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -108,6 +108,7 @@
 
 #define VM_CAN_NONLINEAR 0x08000000	/* Has ->fault & does nonlinear pages */
 #define VM_MIXEDMAP	0x10000000	/* Can contain "struct page" and pure PFN pages */
+#define VM_SAO		0x20000000	/* Strong Access Ordering (powerpc) */
 
 #ifndef VM_STACK_DEFAULT_FLAGS		/* arch can override this */
 #define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS

-- 
Dave Kleikamp
IBM Linux Technology Center


* [patch 3/5] powerpc: Add SAO Feature bit to the cputable
  2008-07-07 14:28 [patch 0/5] Strong Access Ordering Dave Kleikamp
  2008-07-07 14:28 ` [patch 1/5] mm: Allow architectures to define additional protection bits Dave Kleikamp
  2008-07-07 14:28 ` [patch 2/5] powerpc: Define flags for Strong Access Ordering Dave Kleikamp
@ 2008-07-07 14:28 ` Dave Kleikamp
  2008-07-07 14:28 ` [patch 4/5] powerpc: Add Strong Access Ordering Dave Kleikamp
  2008-07-07 14:28 ` [patch 5/5] powerpc: Don't clear _PAGE_COHERENT when _PAGE_SAO is set Dave Kleikamp
  4 siblings, 0 replies; 8+ messages in thread
From: Dave Kleikamp @ 2008-07-07 14:28 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: linuxppc-dev list

Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>

---
 include/asm-powerpc/cputable.h |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Index: b/include/asm-powerpc/cputable.h
===================================================================
--- a/include/asm-powerpc/cputable.h
+++ b/include/asm-powerpc/cputable.h
@@ -185,6 +185,7 @@
 #define CPU_FTR_1T_SEGMENT		LONG_ASM_CONST(0x0004000000000000)
 #define CPU_FTR_NO_SLBIE_B		LONG_ASM_CONST(0x0008000000000000)
 #define CPU_FTR_VSX			LONG_ASM_CONST(0x0010000000000000)
+#define CPU_FTR_SAO			LONG_ASM_CONST(0x0020000000000000)
 
 #ifndef __ASSEMBLY__
 
@@ -400,7 +401,7 @@
 	    CPU_FTR_MMCRA | CPU_FTR_SMT | \
 	    CPU_FTR_COHERENT_ICACHE | CPU_FTR_LOCKLESS_TLBIE | \
 	    CPU_FTR_PURR | CPU_FTR_SPURR | CPU_FTR_REAL_LE | \
-	    CPU_FTR_DSCR)
+	    CPU_FTR_DSCR | CPU_FTR_SAO)
 #define CPU_FTRS_CELL	(CPU_FTR_USE_TB | \
 	    CPU_FTR_HPTE_TABLE | CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \
 	    CPU_FTR_ALTIVEC_COMP | CPU_FTR_MMCRA | CPU_FTR_SMT | \

-- 
Dave Kleikamp
IBM Linux Technology Center


* [patch 4/5] powerpc: Add Strong Access Ordering
  2008-07-07 14:28 [patch 0/5] Strong Access Ordering Dave Kleikamp
                   ` (2 preceding siblings ...)
  2008-07-07 14:28 ` [patch 3/5] powerpc: Add SAO Feature bit to the cputable Dave Kleikamp
@ 2008-07-07 14:28 ` Dave Kleikamp
  2008-07-07 14:28 ` [patch 5/5] powerpc: Don't clear _PAGE_COHERENT when _PAGE_SAO is set Dave Kleikamp
  4 siblings, 0 replies; 8+ messages in thread
From: Dave Kleikamp @ 2008-07-07 14:28 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: linuxppc-dev list

Allow an application to enable Strong Access Ordering on specific pages of
memory on Power 7 hardware. Currently, Power has a weaker memory model than
x86. Implementing a stronger memory model allows an emulator to more
efficiently translate x86 code into Power code, resulting in faster code
execution.

On Power 7 hardware, storing 0b1110 in the WIMG bits of the hpte enables
strong access ordering mode for the memory page.  This patchset allows a
user to specify which pages are thus enabled by passing a new protection
bit through mmap() and mprotect().  I have tentatively defined this bit,
PROT_SAO, as 0x10.
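
For illustration, a minimal application-side sketch.  It assumes this
series is applied; with the series, requesting PROT_SAO on hardware
without the feature fails with EINVAL, while older kernels may simply
ignore unknown mmap() prot bits:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef PROT_SAO
#define PROT_SAO 0x10	/* value proposed in patch 2 of this series */
#endif

int main(void)
{
	size_t len = 4096;
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_SAO,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap(PROT_SAO)");	/* e.g. EINVAL without SAO */
		return 1;
	}
	memset(p, 0, len);	/* accesses here are strongly ordered */
	munmap(p, len);
	return 0;
}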

Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
---

 arch/powerpc/kernel/syscalls.c |    3 +++
 include/asm-powerpc/mman.h     |   30 ++++++++++++++++++++++++++++++
 2 files changed, 33 insertions(+)

Index: b/arch/powerpc/kernel/syscalls.c
===================================================================
--- a/arch/powerpc/kernel/syscalls.c
+++ b/arch/powerpc/kernel/syscalls.c
@@ -143,6 +143,9 @@
 	struct file * file = NULL;
 	unsigned long ret = -EINVAL;
 
+	if (!arch_validate_prot(prot))
+		goto out;
+
 	if (shift) {
 		if (off & ((1 << shift) - 1))
 			goto out;
Index: b/include/asm-powerpc/mman.h
===================================================================
--- a/include/asm-powerpc/mman.h
+++ b/include/asm-powerpc/mman.h
@@ -1,7 +1,9 @@
 #ifndef _ASM_POWERPC_MMAN_H
 #define _ASM_POWERPC_MMAN_H
 
+#include <asm/cputable.h>
 #include <asm-generic/mman.h>
+#include <linux/mm.h>
 
 /*
  * This program is free software; you can redistribute it and/or
@@ -26,4 +28,32 @@
 #define MAP_POPULATE	0x8000		/* populate (prefault) pagetables */
 #define MAP_NONBLOCK	0x10000		/* do not block on IO */
 
+#ifdef CONFIG_PPC64
+/*
+ * This file is included by linux/mman.h, so we can't use calc_vm_prot_bits()
+ * here.  How important is the optimization?
+ */
+static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot)
+{
+	return (prot & PROT_SAO) ? VM_SAO : 0;
+}
+#define arch_calc_vm_prot_bits(prot) arch_calc_vm_prot_bits(prot)
+
+static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags)
+{
+	return (vm_flags & VM_SAO) ? __pgprot(_PAGE_SAO) : __pgprot(0);
+}
+#define arch_vm_get_page_prot(vm_flags) arch_vm_get_page_prot(vm_flags)
+
+static inline int arch_validate_prot(unsigned long prot)
+{
+	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_SAO))
+		return 0;
+	if ((prot & PROT_SAO) && !cpu_has_feature(CPU_FTR_SAO))
+		return 0;
+	return 1;
+}
+#define arch_validate_prot(prot) arch_validate_prot(prot)
+
+#endif /* CONFIG_PPC64 */
 #endif	/* _ASM_POWERPC_MMAN_H */

-- 
Dave Kleikamp
IBM Linux Technology Center


* [patch 5/5] powerpc: Don't clear _PAGE_COHERENT when _PAGE_SAO is set
  2008-07-07 14:28 [patch 0/5] Strong Access Ordering Dave Kleikamp
                   ` (3 preceding siblings ...)
  2008-07-07 14:28 ` [patch 4/5] powerpc: Add Strong Access Ordering Dave Kleikamp
@ 2008-07-07 14:28 ` Dave Kleikamp
  2008-07-09  3:46   ` Benjamin Herrenschmidt
  4 siblings, 1 reply; 8+ messages in thread
From: Dave Kleikamp @ 2008-07-07 14:28 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: linuxppc-dev list

Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---

 arch/powerpc/platforms/pseries/lpar.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

Index: b/arch/powerpc/platforms/pseries/lpar.c
===================================================================
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -305,7 +305,8 @@
 	flags = 0;
 
 	/* Make pHyp happy */
-	if (rflags & (_PAGE_GUARDED|_PAGE_NO_CACHE))
+	if ((rflags & _PAGE_GUARDED) ||
+	    ((rflags & _PAGE_NO_CACHE) && !(rflags & _PAGE_WRITETHRU)))
 		hpte_r &= ~_PAGE_COHERENT;
 
 	lpar_rc = plpar_pte_enter(flags, hpte_group, hpte_v, hpte_r, &slot);

-- 
Dave Kleikamp
IBM Linux Technology Center


* Re: [patch 5/5] powerpc: Don't clear _PAGE_COHERENT when _PAGE_SAO is set
  2008-07-07 14:28 ` [patch 5/5] powerpc: Don't clear _PAGE_COHERENT when _PAGE_SAO is set Dave Kleikamp
@ 2008-07-09  3:46   ` Benjamin Herrenschmidt
  2008-07-09 15:28     ` Dave Kleikamp
  0 siblings, 1 reply; 8+ messages in thread
From: Benjamin Herrenschmidt @ 2008-07-09  3:46 UTC (permalink / raw)
  To: Dave Kleikamp; +Cc: linuxppc-dev list

On Mon, 2008-07-07 at 09:28 -0500, Dave Kleikamp wrote:
> plain text document attachment (dont_clobber_M.patch)
> Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> ---

The old code looks bogus... why clear M when G is set? Only
I should have mattered.

I'll apply it anyway, as you aren't changing the existing behaviour here, but
maybe you can shoot me a fixup patch that removes the _PAGE_GUARDED
condition completely?

It's legal to have G=1 M=1 pages and can even be useful under some
circumstances.

Cheers,
Ben.

>  arch/powerpc/platforms/pseries/lpar.c |    3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> Index: b/arch/powerpc/platforms/pseries/lpar.c
> ===================================================================
> --- a/arch/powerpc/platforms/pseries/lpar.c
> +++ b/arch/powerpc/platforms/pseries/lpar.c
> @@ -305,7 +305,8 @@
>  	flags = 0;
>  
>  	/* Make pHyp happy */
> -	if (rflags & (_PAGE_GUARDED|_PAGE_NO_CACHE))
> +	if ((rflags & _PAGE_GUARDED) ||
> +	    ((rflags & _PAGE_NO_CACHE) && !(rflags & _PAGE_WRITETHRU)))
>  		hpte_r &= ~_PAGE_COHERENT;
>  
>  	lpar_rc = plpar_pte_enter(flags, hpte_group, hpte_v, hpte_r, &slot);
> 


* Re: [patch 5/5] powerpc: Don't clear _PAGE_COHERENT when _PAGE_SAO is set
  2008-07-09  3:46   ` Benjamin Herrenschmidt
@ 2008-07-09 15:28     ` Dave Kleikamp
  0 siblings, 0 replies; 8+ messages in thread
From: Dave Kleikamp @ 2008-07-09 15:28 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: linuxppc-dev list

On Wed, 2008-07-09 at 13:46 +1000, Benjamin Herrenschmidt wrote:

> The old code looks bogus... why clear M when G is set? Only
> I should have mattered.

I can't find any place where G gets set without also setting I, so the
test seems redundant as well.

> I'll apply anyway as you aren't changing the existing behaviour here but
> maybe you can shoot me a fixup patch that removes the _PAGE_GUARDED
> condition completely here ?

No problem.  There is code in cell/beat_htab.c doing the same thing.
I've gone ahead and fixed it there too.

> It's legal to have G=1 M=1 pages and can even be useful under some
> circumstances.

It doesn't look like anyone is trying to take advantage of that
currently.

Here's your patch:

powerpc: Remove unnecessary condition when sanity-checking WIMG bits

It is okay for both _PAGE_GUARDED and _PAGE_COHERENT (G and M) to be set
in the same pte.  In fact, even if that were not the case, there doesn't
seem to be any place where G is set without also setting I (_PAGE_NO_CACHE),
so the test for I is sufficient.
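
As a cross-check on that reasoning, a few lines of userspace C can
compare the old and new predicates across all sixteen WIMG combinations
(flag values again assumed from pgtable-ppc64.h).  Every case where they
disagree has G=1, i.e. exactly the condition being dropped, and in each
such case the old test cleared M where the new one keeps it:

#include <stdio.h>

/* Assumed values from include/asm-powerpc/pgtable-ppc64.h */
#define _PAGE_GUARDED	0x0008	/* G */
#define _PAGE_COHERENT	0x0010	/* M (the bit being cleared or kept) */
#define _PAGE_NO_CACHE	0x0020	/* I */
#define _PAGE_WRITETHRU	0x0040	/* W */

int main(void)
{
	unsigned long rflags;

	/* Walk all combinations of the WIMG bits (mask 0x78) */
	for (rflags = 0; rflags <= 0x78; rflags += 0x08) {
		int old_test = (rflags & _PAGE_GUARDED) ||
			       ((rflags & _PAGE_NO_CACHE) &&
				!(rflags & _PAGE_WRITETHRU));
		int new_test = (rflags & _PAGE_NO_CACHE) &&
			       !(rflags & _PAGE_WRITETHRU);

		if (old_test != new_test)
			printf("rflags=0x%02lx: old cleared M, new keeps it\n",
			       rflags);
	}
	return 0;
}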

Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>

diff --git a/arch/powerpc/platforms/cell/beat_htab.c b/arch/powerpc/platforms/cell/beat_htab.c
index 81467ff..2e67bd8 100644
--- a/arch/powerpc/platforms/cell/beat_htab.c
+++ b/arch/powerpc/platforms/cell/beat_htab.c
@@ -112,7 +112,7 @@ static long beat_lpar_hpte_insert(unsigned long hpte_group,
 	if (!(vflags & HPTE_V_BOLTED))
 		DBG_LOW(" hpte_v=%016lx, hpte_r=%016lx\n", hpte_v, hpte_r);
 
-	if (rflags & (_PAGE_GUARDED|_PAGE_NO_CACHE))
+	if (rflags & _PAGE_NO_CACHE)
 		hpte_r &= ~_PAGE_COHERENT;
 
 	spin_lock(&beat_htab_lock);
@@ -334,7 +334,7 @@ static long beat_lpar_hpte_insert_v3(unsigned long hpte_group,
 	if (!(vflags & HPTE_V_BOLTED))
 		DBG_LOW(" hpte_v=%016lx, hpte_r=%016lx\n", hpte_v, hpte_r);
 
-	if (rflags & (_PAGE_GUARDED|_PAGE_NO_CACHE))
+	if (rflags & _PAGE_NO_CACHE)
 		hpte_r &= ~_PAGE_COHERENT;
 
 	/* insert into not-volted entry */
diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
index 38b5927..52a80e5 100644
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -305,8 +305,7 @@ static long pSeries_lpar_hpte_insert(unsigned long hpte_group,
 	flags = 0;
 
 	/* Make pHyp happy */
-	if ((rflags & _PAGE_GUARDED) ||
-	    ((rflags & _PAGE_NO_CACHE) && !(rflags & _PAGE_WRITETHRU)))
+	if ((rflags & _PAGE_NO_CACHE) && !(rflags & _PAGE_WRITETHRU))
 		hpte_r &= ~_PAGE_COHERENT;
 
 	lpar_rc = plpar_pte_enter(flags, hpte_group, hpte_v, hpte_r, &slot);

-- 
David Kleikamp
IBM Linux Technology Center



Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2008-07-07 14:28 [patch 0/5] Strong Access Ordering Dave Kleikamp
2008-07-07 14:28 ` [patch 1/5] mm: Allow architectures to define additional protection bits Dave Kleikamp
2008-07-07 14:28 ` [patch 2/5] powerpc: Define flags for Strong Access Ordering Dave Kleikamp
2008-07-07 14:28 ` [patch 3/5] powerpc: Add SAO Feature bit to the cputable Dave Kleikamp
2008-07-07 14:28 ` [patch 4/5] powerpc: Add Strong Access Ordering Dave Kleikamp
2008-07-07 14:28 ` [patch 5/5] powerpc: Don't clear _PAGE_COHERENT when _PAGE_SAO is set Dave Kleikamp
2008-07-09  3:46   ` Benjamin Herrenschmidt
2008-07-09 15:28     ` Dave Kleikamp
