* [RESEND PATCH 0/4] Fix PROT_NONE page permissions when !CPU_USE_DOMAINS
@ 2012-10-17 15:35 Will Deacon
From: Will Deacon @ 2012-10-17 15:35 UTC
To: linux-arm-kernel
Hello,
This is a respin of the patches originally posted here:
http://lists.infradead.org/pipermail/linux-arm-kernel/2012-September/121661.html
the only difference being that these are based on top of -rc1. I intended
to change the definition of pte_present_user to avoid the additional check,
but it turns out that GCC is generating terrible code regardless of what I
try:
#define pte_present_user(pte) (pte_present(pte) && (pte_val(pte) & L_PTE_USER))
c0010990: e3a02043 mov r2, #67 ; 0x43
c0010994: e3a03000 mov r3, #0
c0010998: e0000002 and r0, r0, r2
c001099c: e0011003 and r1, r1, r3
c00109a0: e3510000 cmp r1, #0
c00109a4: 03500040 cmpeq r0, #64 ; 0x40
c00109a8: 93a00000 movls r0, #0
c00109ac: 83a00001 movhi r0, #1
c00109b0: e12fff1e bx lr
#define pte_present_user(pte) \
((pte_val(pte) & (L_PTE_PRESENT | L_PTE_USER)) > L_PTE_USER)
c0010990: e3a02003 mov r2, #3
c0010994: e3a03000 mov r3, #0
c0010998: e0022000 and r2, r2, r0
c001099c: e0033001 and r3, r3, r1
c00109a0: e192c003 orrs ip, r2, r3
c00109a4: 17e00350 ubfxne r0, r0, #6, #1
c00109a8: 03a00000 moveq r0, #0
c00109ac: e12fff1e bx lr
After some investigation, it looks like this is related to having 64-bit
ptes (LPAE) [I've reported this to the GCC guys], so I reverted to
classic MMU and we get the same number of instructions there for either
case:
c0010950: e3003101 movw r3, #257 ; 0x101
c0010954: e0000003 and r0, r0, r3
c0010958: e0503003 subs r3, r0, r3
c001095c: e2730000 rsbs r0, r3, #0
c0010960: e0b00003 adcs r0, r0, r3
c0010964: e12fff1e bx lr
vs
c0010950: e3003101 movw r3, #257 ; 0x101
c0010954: e0003003 and r3, r0, r3
c0010958: e3530c01 cmp r3, #256 ; 0x100
c001095c: 93a00000 movls r0, #0
c0010960: 83a00001 movhi r0, #1
c0010964: e12fff1e bx lr
so I've opted to leave it as currently implemented.
Comments welcome,
Will
Will Deacon (4):
ARM: mm: use pteval_t to represent page protection values
ARM: mm: don't use the access flag permissions mechanism for classic
MMU
ARM: mm: introduce L_PTE_VALID for page table entries
ARM: mm: introduce present, faulting entries for PAGE_NONE
arch/arm/include/asm/pgtable-2level.h | 2 ++
arch/arm/include/asm/pgtable-3level.h | 4 +++-
arch/arm/include/asm/pgtable.h | 10 ++++------
arch/arm/mm/mmu.c | 2 +-
arch/arm/mm/proc-macros.S | 4 ++++
arch/arm/mm/proc-v7-2level.S | 10 +++++++---
arch/arm/mm/proc-v7-3level.S | 5 ++++-
7 files changed, 25 insertions(+), 12 deletions(-)
--
1.7.4.1
* [RESEND PATCH 1/4] ARM: mm: use pteval_t to represent page protection values
From: Will Deacon @ 2012-10-17 15:35 UTC
To: linux-arm-kernel
When updating the page protection map after calculating the user_pgprot
value, the base protection map is temporarily stored in an unsigned long
type, causing truncation of the protection bits when LPAE is enabled.
This effectively means that calls to mprotect() will corrupt the upper
page attributes, clearing the XN bit unconditionally.
This patch uses pteval_t to store the intermediate protection values,
preserving the upper bits for 64-bit descriptors.
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
arch/arm/mm/mmu.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 941dfb9..99b47b9 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -488,7 +488,7 @@ static void __init build_mem_type_table(void)
#endif
for (i = 0; i < 16; i++) {
- unsigned long v = pgprot_val(protection_map[i]);
+ pteval_t v = pgprot_val(protection_map[i]);
protection_map[i] = __pgprot(v | user_pgprot);
}
--
1.7.4.1
* [RESEND PATCH 2/4] ARM: mm: don't use the access flag permissions mechanism for classic MMU
From: Will Deacon @ 2012-10-17 15:35 UTC
To: linux-arm-kernel
The simplified access permissions model is not used for the classic MMU
translation regime, so ensure that it is turned off in the sctlr prior
to turning on address translation for ARMv7.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
arch/arm/mm/proc-v7-2level.S | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/arm/mm/proc-v7-2level.S b/arch/arm/mm/proc-v7-2level.S
index fd045e7..e37600b 100644
--- a/arch/arm/mm/proc-v7-2level.S
+++ b/arch/arm/mm/proc-v7-2level.S
@@ -161,11 +161,11 @@ ENDPROC(cpu_v7_set_pte_ext)
* TFR EV X F I D LR S
* .EEE ..EE PUI. .T.T 4RVI ZWRS BLDP WCAM
* rxxx rrxx xxx0 0101 xxxx xxxx x111 xxxx < forced
- * 1 0 110 0011 1100 .111 1101 < we want
+ * 01 0 110 0011 1100 .111 1101 < we want
*/
.align 2
.type v7_crval, #object
v7_crval:
- crval clear=0x0120c302, mmuset=0x10c03c7d, ucset=0x00c01c7c
+ crval clear=0x2120c302, mmuset=0x10c03c7d, ucset=0x00c01c7c
.previous
--
1.7.4.1
* [RESEND PATCH 3/4] ARM: mm: introduce L_PTE_VALID for page table entries
From: Will Deacon @ 2012-10-17 15:35 UTC
To: linux-arm-kernel
For long-descriptor translation table formats, the ARMv7 architecture
defines the last two bits of the second- and third-level descriptors to
be:
x0b - Invalid
01b - Block (second-level), Reserved (third-level)
11b - Table (second-level), Page (third-level)
This allows us to define L_PTE_PRESENT as (3 << 0) and use this value to
create ptes directly. However, when determining whether a given pte
value is present in the low-level page table accessors, we only need to
check the least significant bit of the descriptor, allowing us to write
faulting, present entries which are required for PROT_NONE mappings.
This patch introduces L_PTE_VALID, which can be used to test whether a
pte should fault, and updates the low-level page table accessors
accordingly.
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
arch/arm/include/asm/pgtable-2level.h | 1 +
arch/arm/include/asm/pgtable-3level.h | 3 ++-
arch/arm/include/asm/pgtable.h | 4 +---
arch/arm/mm/proc-v7-2level.S | 2 +-
arch/arm/mm/proc-v7-3level.S | 2 +-
5 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/arm/include/asm/pgtable-2level.h b/arch/arm/include/asm/pgtable-2level.h
index 2317a71..c44a1ec 100644
--- a/arch/arm/include/asm/pgtable-2level.h
+++ b/arch/arm/include/asm/pgtable-2level.h
@@ -115,6 +115,7 @@
* The PTE table pointer refers to the hardware entries; the "Linux"
* entries are stored 1024 bytes below.
*/
+#define L_PTE_VALID (_AT(pteval_t, 1) << 0) /* Valid */
#define L_PTE_PRESENT (_AT(pteval_t, 1) << 0)
#define L_PTE_YOUNG (_AT(pteval_t, 1) << 1)
#define L_PTE_FILE (_AT(pteval_t, 1) << 2) /* only when !PRESENT */
diff --git a/arch/arm/include/asm/pgtable-3level.h b/arch/arm/include/asm/pgtable-3level.h
index b249035..e32311a 100644
--- a/arch/arm/include/asm/pgtable-3level.h
+++ b/arch/arm/include/asm/pgtable-3level.h
@@ -67,7 +67,8 @@
* These bits overlap with the hardware bits but the naming is preserved for
* consistency with the classic page table format.
*/
-#define L_PTE_PRESENT (_AT(pteval_t, 3) << 0) /* Valid */
+#define L_PTE_VALID (_AT(pteval_t, 1) << 0) /* Valid */
+#define L_PTE_PRESENT (_AT(pteval_t, 3) << 0) /* Present */
#define L_PTE_FILE (_AT(pteval_t, 1) << 2) /* only when !PRESENT */
#define L_PTE_USER (_AT(pteval_t, 1) << 6) /* AP[1] */
#define L_PTE_RDONLY (_AT(pteval_t, 1) << 7) /* AP[2] */
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index 08c1231..ccf34b6 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -203,9 +203,7 @@ static inline pte_t *pmd_page_vaddr(pmd_t pmd)
#define pte_exec(pte) (!(pte_val(pte) & L_PTE_XN))
#define pte_special(pte) (0)
-#define pte_present_user(pte) \
- ((pte_val(pte) & (L_PTE_PRESENT | L_PTE_USER)) == \
- (L_PTE_PRESENT | L_PTE_USER))
+#define pte_present_user(pte) (pte_present(pte) && (pte_val(pte) & L_PTE_USER))
#if __LINUX_ARM_ARCH__ < 6
static inline void __sync_icache_dcache(pte_t pteval)
diff --git a/arch/arm/mm/proc-v7-2level.S b/arch/arm/mm/proc-v7-2level.S
index e37600b..e755e9f 100644
--- a/arch/arm/mm/proc-v7-2level.S
+++ b/arch/arm/mm/proc-v7-2level.S
@@ -100,7 +100,7 @@ ENTRY(cpu_v7_set_pte_ext)
orrne r3, r3, #PTE_EXT_XN
tst r1, #L_PTE_YOUNG
- tstne r1, #L_PTE_PRESENT
+ tstne r1, #L_PTE_VALID
moveq r3, #0
ARM( str r3, [r0, #2048]! )
diff --git a/arch/arm/mm/proc-v7-3level.S b/arch/arm/mm/proc-v7-3level.S
index 8de0f1d..d23d067 100644
--- a/arch/arm/mm/proc-v7-3level.S
+++ b/arch/arm/mm/proc-v7-3level.S
@@ -65,7 +65,7 @@ ENDPROC(cpu_v7_switch_mm)
*/
ENTRY(cpu_v7_set_pte_ext)
#ifdef CONFIG_MMU
- tst r2, #L_PTE_PRESENT
+ tst r2, #L_PTE_VALID
beq 1f
tst r3, #1 << (55 - 32) @ L_PTE_DIRTY
orreq r2, #L_PTE_RDONLY
--
1.7.4.1
* [RESEND PATCH 4/4] ARM: mm: introduce present, faulting entries for PAGE_NONE
From: Will Deacon @ 2012-10-17 15:35 UTC
To: linux-arm-kernel
PROT_NONE mappings apply the page protection attributes defined by _P000
which translate to PAGE_NONE for ARM. These attributes specify an XN,
RDONLY pte that is inaccessible to userspace. However, on kernels
configured without support for domains, such a pte *is* accessible to
the kernel and can be read via get_user, allowing tasks to read
PROT_NONE pages via syscalls such as read/write over a pipe.
This patch introduces a new software pte flag, L_PTE_NONE, that is set
to identify faulting, present entries.
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
arch/arm/include/asm/pgtable-2level.h | 1 +
arch/arm/include/asm/pgtable-3level.h | 1 +
arch/arm/include/asm/pgtable.h | 6 +++---
arch/arm/mm/proc-macros.S | 4 ++++
arch/arm/mm/proc-v7-2level.S | 4 ++++
arch/arm/mm/proc-v7-3level.S | 3 +++
6 files changed, 16 insertions(+), 3 deletions(-)
diff --git a/arch/arm/include/asm/pgtable-2level.h b/arch/arm/include/asm/pgtable-2level.h
index c44a1ec..f97ee02 100644
--- a/arch/arm/include/asm/pgtable-2level.h
+++ b/arch/arm/include/asm/pgtable-2level.h
@@ -124,6 +124,7 @@
#define L_PTE_USER (_AT(pteval_t, 1) << 8)
#define L_PTE_XN (_AT(pteval_t, 1) << 9)
#define L_PTE_SHARED (_AT(pteval_t, 1) << 10) /* shared(v6), coherent(xsc3) */
+#define L_PTE_NONE (_AT(pteval_t, 1) << 11)
/*
* These are the memory types, defined to be compatible with
diff --git a/arch/arm/include/asm/pgtable-3level.h b/arch/arm/include/asm/pgtable-3level.h
index e32311a..a3f3792 100644
--- a/arch/arm/include/asm/pgtable-3level.h
+++ b/arch/arm/include/asm/pgtable-3level.h
@@ -77,6 +77,7 @@
#define L_PTE_XN (_AT(pteval_t, 1) << 54) /* XN */
#define L_PTE_DIRTY (_AT(pteval_t, 1) << 55) /* unused */
#define L_PTE_SPECIAL (_AT(pteval_t, 1) << 56) /* unused */
+#define L_PTE_NONE (_AT(pteval_t, 1) << 57) /* PROT_NONE */
/*
* To be used in assembly code with the upper page attributes.
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index ccf34b6..9c82f98 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -73,7 +73,7 @@ extern pgprot_t pgprot_kernel;
#define _MOD_PROT(p, b) __pgprot(pgprot_val(p) | (b))
-#define PAGE_NONE _MOD_PROT(pgprot_user, L_PTE_XN | L_PTE_RDONLY)
+#define PAGE_NONE _MOD_PROT(pgprot_user, L_PTE_XN | L_PTE_RDONLY | L_PTE_NONE)
#define PAGE_SHARED _MOD_PROT(pgprot_user, L_PTE_USER | L_PTE_XN)
#define PAGE_SHARED_EXEC _MOD_PROT(pgprot_user, L_PTE_USER)
#define PAGE_COPY _MOD_PROT(pgprot_user, L_PTE_USER | L_PTE_RDONLY | L_PTE_XN)
@@ -83,7 +83,7 @@ extern pgprot_t pgprot_kernel;
#define PAGE_KERNEL _MOD_PROT(pgprot_kernel, L_PTE_XN)
#define PAGE_KERNEL_EXEC pgprot_kernel
-#define __PAGE_NONE __pgprot(_L_PTE_DEFAULT | L_PTE_RDONLY | L_PTE_XN)
+#define __PAGE_NONE __pgprot(_L_PTE_DEFAULT | L_PTE_RDONLY | L_PTE_XN | L_PTE_NONE)
#define __PAGE_SHARED __pgprot(_L_PTE_DEFAULT | L_PTE_USER | L_PTE_XN)
#define __PAGE_SHARED_EXEC __pgprot(_L_PTE_DEFAULT | L_PTE_USER)
#define __PAGE_COPY __pgprot(_L_PTE_DEFAULT | L_PTE_USER | L_PTE_RDONLY | L_PTE_XN)
@@ -240,7 +240,7 @@ static inline pte_t pte_mkspecial(pte_t pte) { return pte; }
static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{
- const pteval_t mask = L_PTE_XN | L_PTE_RDONLY | L_PTE_USER;
+ const pteval_t mask = L_PTE_XN | L_PTE_RDONLY | L_PTE_USER | L_PTE_NONE;
pte_val(pte) = (pte_val(pte) & ~mask) | (pgprot_val(newprot) & mask);
return pte;
}
diff --git a/arch/arm/mm/proc-macros.S b/arch/arm/mm/proc-macros.S
index b29a226..eb6aa73 100644
--- a/arch/arm/mm/proc-macros.S
+++ b/arch/arm/mm/proc-macros.S
@@ -167,6 +167,10 @@
tst r1, #L_PTE_YOUNG
tstne r1, #L_PTE_PRESENT
moveq r3, #0
+#ifndef CONFIG_CPU_USE_DOMAINS
+ tstne r1, #L_PTE_NONE
+ movne r3, #0
+#endif
str r3, [r0]
mcr p15, 0, r0, c7, c10, 1 @ flush_pte
diff --git a/arch/arm/mm/proc-v7-2level.S b/arch/arm/mm/proc-v7-2level.S
index e755e9f..6d98c13 100644
--- a/arch/arm/mm/proc-v7-2level.S
+++ b/arch/arm/mm/proc-v7-2level.S
@@ -101,6 +101,10 @@ ENTRY(cpu_v7_set_pte_ext)
tst r1, #L_PTE_YOUNG
tstne r1, #L_PTE_VALID
+#ifndef CONFIG_CPU_USE_DOMAINS
+ eorne r1, r1, #L_PTE_NONE
+ tstne r1, #L_PTE_NONE
+#endif
moveq r3, #0
ARM( str r3, [r0, #2048]! )
diff --git a/arch/arm/mm/proc-v7-3level.S b/arch/arm/mm/proc-v7-3level.S
index d23d067..7b56386 100644
--- a/arch/arm/mm/proc-v7-3level.S
+++ b/arch/arm/mm/proc-v7-3level.S
@@ -67,6 +67,9 @@ ENTRY(cpu_v7_set_pte_ext)
#ifdef CONFIG_MMU
tst r2, #L_PTE_VALID
beq 1f
+ tst r3, #1 << (57 - 32) @ L_PTE_NONE
+ bicne r2, #L_PTE_VALID
+ bne 1f
tst r3, #1 << (55 - 32) @ L_PTE_DIRTY
orreq r2, #L_PTE_RDONLY
1: strd r2, r3, [r0]
--
1.7.4.1