* [PATCH 0/4] arm: Privileged no-access for LPAE
@ 2015-09-23 14:24 Catalin Marinas
From: Catalin Marinas @ 2015-09-23 14:24 UTC (permalink / raw)
To: linux-arm-kernel
Hi,
This is the first attempt to add support for privileged no-access on
LPAE-enabled kernels by disabling TTBR0 page table walks. The first
three patches are pretty much refactoring/clean-up without any
functional change. The last patch implements the actual PAN using TTBR0
disabling. Its description also contains the details of how this works.
The patches can be found here:
git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64 arm32-pan
Tested in different configurations (with and without LPAE, all
VMSPLIT_*, loadable modules) but only under KVM on Juno (ARMv8).
Thanks.
Catalin Marinas (4):
arm: kvm: Move TTBCR_* definitions from kvm_arm.h into
pgtable-3level-hwdef.h
arm: Move asm statements accessing TTBCR into dedicated functions
arm: Reduce the number of #ifdef CONFIG_CPU_SW_DOMAIN_PAN
arm: Implement privileged no-access using TTBR0 page table walks
disabling
arch/arm/Kconfig | 22 ++++++++--
arch/arm/include/asm/assembler.h | 68 +++++++++++++++++++++++++----
arch/arm/include/asm/kvm_arm.h | 17 +-------
arch/arm/include/asm/pgtable-3level-hwdef.h | 26 +++++++++++
arch/arm/include/asm/proc-fns.h | 12 +++++
arch/arm/include/asm/uaccess.h | 53 +++++++++++++++++++---
arch/arm/kvm/init.S | 2 +-
arch/arm/lib/csumpartialcopyuser.S | 20 ++++++++-
arch/arm/mm/fault.c | 10 +++++
arch/arm/mm/mmu.c | 7 ++-
10 files changed, 199 insertions(+), 38 deletions(-)
^ permalink raw reply [flat|nested] 10+ messages in thread* [PATCH 1/4] arm: kvm: Move TTBCR_* definitions from kvm_arm.h into pgtable-3level-hwdef.h 2015-09-23 14:24 [PATCH 0/4] arm: Privileged no-access for LPAE Catalin Marinas @ 2015-09-23 14:24 ` Catalin Marinas 2015-09-23 14:24 ` [PATCH 2/4] arm: Move asm statements accessing TTBCR into dedicated functions Catalin Marinas ` (3 subsequent siblings) 4 siblings, 0 replies; 10+ messages in thread From: Catalin Marinas @ 2015-09-23 14:24 UTC (permalink / raw) To: linux-arm-kernel These macros will be reused in a subsequent patch, so share the definitions between core arm code and KVM. The patch also renames some of the macros by appending the more appropriate _MASK suffix. Note that these macros are only relevant to LPAE kernel builds, therefore they are added to pgtable-3level-hwdef.h Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> --- arch/arm/include/asm/kvm_arm.h | 17 ++--------------- arch/arm/include/asm/pgtable-3level-hwdef.h | 17 +++++++++++++++++ arch/arm/kvm/init.S | 2 +- 3 files changed, 20 insertions(+), 16 deletions(-) diff --git a/arch/arm/include/asm/kvm_arm.h b/arch/arm/include/asm/kvm_arm.h index d995821f1698..0ece3bb82b97 100644 --- a/arch/arm/include/asm/kvm_arm.h +++ b/arch/arm/include/asm/kvm_arm.h @@ -88,21 +88,8 @@ #define HSCTLR_MASK (HSCTLR_M | HSCTLR_A | HSCTLR_C | HSCTLR_I | \ HSCTLR_WXN | HSCTLR_FI | HSCTLR_EE | HSCTLR_TE) -/* TTBCR and HTCR Registers bits */ -#define TTBCR_EAE (1 << 31) -#define TTBCR_IMP (1 << 30) -#define TTBCR_SH1 (3 << 28) -#define TTBCR_ORGN1 (3 << 26) -#define TTBCR_IRGN1 (3 << 24) -#define TTBCR_EPD1 (1 << 23) -#define TTBCR_A1 (1 << 22) -#define TTBCR_T1SZ (7 << 16) -#define TTBCR_SH0 (3 << 12) -#define TTBCR_ORGN0 (3 << 10) -#define TTBCR_IRGN0 (3 << 8) -#define TTBCR_EPD0 (1 << 7) -#define TTBCR_T0SZ (7 << 0) -#define HTCR_MASK (TTBCR_T0SZ | TTBCR_IRGN0 | TTBCR_ORGN0 | TTBCR_SH0) +/* HTCR Register bits */ +#define HTCR_MASK (TTBCR_T0SZ_MASK | TTBCR_IRGN0_MASK | TTBCR_ORGN0_MASK | TTBCR_SH0_MASK) /* Hyp System Trap Register */ #define HSTR_T(x) (1 << x) diff --git a/arch/arm/include/asm/pgtable-3level-hwdef.h b/arch/arm/include/asm/pgtable-3level-hwdef.h index f8f1cff62065..3ed7965106e3 100644 --- a/arch/arm/include/asm/pgtable-3level-hwdef.h +++ b/arch/arm/include/asm/pgtable-3level-hwdef.h @@ -105,4 +105,21 @@ #define TTBR1_SIZE (((PAGE_OFFSET >> 30) - 1) << 16) +/* + * TTBCR register bits. + */ +#define TTBCR_EAE (1 << 31) +#define TTBCR_IMP (1 << 30) +#define TTBCR_SH1_MASK (3 << 28) +#define TTBCR_ORGN1_MASK (3 << 26) +#define TTBCR_IRGN1_MASK (3 << 24) +#define TTBCR_EPD1 (1 << 23) +#define TTBCR_A1 (1 << 22) +#define TTBCR_T1SZ_MASK (7 << 16) +#define TTBCR_SH0_MASK (3 << 12) +#define TTBCR_ORGN0_MASK (3 << 10) +#define TTBCR_IRGN0_MASK (3 << 8) +#define TTBCR_EPD0 (1 << 7) +#define TTBCR_T0SZ_MASK (7 << 0) + #endif diff --git a/arch/arm/kvm/init.S b/arch/arm/kvm/init.S index 3988e72d16ff..fdceab289d03 100644 --- a/arch/arm/kvm/init.S +++ b/arch/arm/kvm/init.S @@ -80,7 +80,7 @@ __do_hyp_init: ldr r2, =HTCR_MASK bic r0, r0, r2 mrc p15, 0, r1, c2, c0, 2 @ TTBCR - and r1, r1, #(HTCR_MASK & ~TTBCR_T0SZ) + and r1, r1, #(HTCR_MASK & ~TTBCR_T0SZ_MASK) orr r0, r0, r1 mcr p15, 4, r0, c2, c0, 2 @ HTCR ^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH 2/4] arm: Move asm statements accessing TTBCR into dedicated functions 2015-09-23 14:24 [PATCH 0/4] arm: Privileged no-access for LPAE Catalin Marinas 2015-09-23 14:24 ` [PATCH 1/4] arm: kvm: Move TTBCR_* definitions from kvm_arm.h into pgtable-3level-hwdef.h Catalin Marinas @ 2015-09-23 14:24 ` Catalin Marinas 2015-09-23 14:24 ` [PATCH 3/4] arm: Reduce the number of #ifdef CONFIG_CPU_SW_DOMAIN_PAN Catalin Marinas ` (2 subsequent siblings) 4 siblings, 0 replies; 10+ messages in thread From: Catalin Marinas @ 2015-09-23 14:24 UTC (permalink / raw) To: linux-arm-kernel This patch implements cpu_get_ttbcr() and cpu_set_ttbcr() and replaces the corresponding asm statements. These functions will be reused in subsequent patches. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> --- arch/arm/include/asm/proc-fns.h | 12 ++++++++++++ arch/arm/mm/mmu.c | 7 +++---- 2 files changed, 15 insertions(+), 4 deletions(-) diff --git a/arch/arm/include/asm/proc-fns.h b/arch/arm/include/asm/proc-fns.h index 8877ad5ffe10..db695e612ab4 100644 --- a/arch/arm/include/asm/proc-fns.h +++ b/arch/arm/include/asm/proc-fns.h @@ -142,6 +142,18 @@ extern void cpu_resume(void); }) #endif +static inline unsigned int cpu_get_ttbcr(void) +{ + unsigned int ttbcr; + asm("mrc p15, 0, %0, c2, c0, 2" : "=r" (ttbcr)); + return ttbcr; +} + +static inline void cpu_set_ttbcr(unsigned int ttbcr) +{ + asm volatile("mcr p15, 0, %0, c2, c0, 2" : : "r" (ttbcr)); +} + #else /*!CONFIG_MMU */ #define cpu_switch_mm(pgd,mm) { } diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c index 7cd15143a507..f6c5744bb5ef 100644 --- a/arch/arm/mm/mmu.c +++ b/arch/arm/mm/mmu.c @@ -1494,9 +1494,8 @@ void __init early_paging_init(const struct machine_desc *mdesc) */ cr = get_cr(); set_cr(cr & ~(CR_I | CR_C)); - asm("mrc p15, 0, %0, c2, c0, 2" : "=r" (ttbcr)); - asm volatile("mcr p15, 0, %0, c2, c0, 2" - : : "r" (ttbcr & ~(3 << 8 | 3 << 10))); + ttbcr = cpu_get_ttbcr(); + cpu_set_ttbcr(ttbcr & ~(3 << 8 | 3 << 10)); flush_cache_all(); /* @@ -1508,7 +1507,7 @@ void __init early_paging_init(const struct machine_desc *mdesc) lpae_pgtables_remap(offset, pa_pgd, boot_data); /* Re-enable the caches and cacheable TLB walks */ - asm volatile("mcr p15, 0, %0, c2, c0, 2" : : "r" (ttbcr)); + cpu_set_ttbcr(ttbcr); set_cr(cr); } ^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH 3/4] arm: Reduce the number of #ifdef CONFIG_CPU_SW_DOMAIN_PAN 2015-09-23 14:24 [PATCH 0/4] arm: Privileged no-access for LPAE Catalin Marinas 2015-09-23 14:24 ` [PATCH 1/4] arm: kvm: Move TTBCR_* definitions from kvm_arm.h into pgtable-3level-hwdef.h Catalin Marinas 2015-09-23 14:24 ` [PATCH 2/4] arm: Move asm statements accessing TTBCR into dedicated functions Catalin Marinas @ 2015-09-23 14:24 ` Catalin Marinas 2015-09-23 14:24 ` [PATCH 4/4] arm: Implement privileged no-access using TTBR0 page table walks disabling Catalin Marinas 2015-12-10 19:40 ` [PATCH 0/4] arm: Privileged no-access for LPAE Kees Cook 4 siblings, 0 replies; 10+ messages in thread From: Catalin Marinas @ 2015-09-23 14:24 UTC (permalink / raw) To: linux-arm-kernel This is a clean-up patch aimed at reducing the number of checks on CONFIG_CPU_SW_DOMAIN_PAN, together with some empty lines for better clarity once the CONFIG_CPU_TTBR0_PAN is introduced. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> --- arch/arm/include/asm/assembler.h | 26 ++++++++++++++++++-------- arch/arm/include/asm/uaccess.h | 27 +++++++++++++++++++++------ arch/arm/lib/csumpartialcopyuser.S | 6 +++++- 3 files changed, 44 insertions(+), 15 deletions(-) diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h index b2bc8e11471d..26b4c697c857 100644 --- a/arch/arm/include/asm/assembler.h +++ b/arch/arm/include/asm/assembler.h @@ -449,8 +449,9 @@ THUMB( orr \reg , \reg , #PSR_T_BIT ) #endif .endm +#if defined(CONFIG_CPU_SW_DOMAIN_PAN) + .macro uaccess_disable, tmp, isb=1 -#ifdef CONFIG_CPU_SW_DOMAIN_PAN /* * Whenever we re-enter userspace, the domains should always be * set appropriately. @@ -460,11 +461,9 @@ THUMB( orr \reg , \reg , #PSR_T_BIT ) .if \isb instr_sync .endif -#endif .endm .macro uaccess_enable, tmp, isb=1 -#ifdef CONFIG_CPU_SW_DOMAIN_PAN /* * Whenever we re-enter userspace, the domains should always be * set appropriately. @@ -474,23 +473,34 @@ THUMB( orr \reg , \reg , #PSR_T_BIT ) .if \isb instr_sync .endif -#endif .endm .macro uaccess_save, tmp -#ifdef CONFIG_CPU_SW_DOMAIN_PAN mrc p15, 0, \tmp, c3, c0, 0 str \tmp, [sp, #S_FRAME_SIZE] -#endif .endm .macro uaccess_restore -#ifdef CONFIG_CPU_SW_DOMAIN_PAN ldr r0, [sp, #S_FRAME_SIZE] mcr p15, 0, r0, c3, c0, 0 -#endif .endm +#else + + .macro uaccess_disable, tmp, isb=1 + .endm + + .macro uaccess_enable, tmp, isb=1 + .endm + + .macro uaccess_save, tmp + .endm + + .macro uaccess_restore + .endm + +#endif + .irp c,,eq,ne,cs,cc,mi,pl,vs,vc,hi,ls,ge,lt,gt,le,hs,lo .macro ret\c, reg #if __LINUX_ARM_ARCH__ < 6 diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h index 8cc85a4ebec2..711c9877787b 100644 --- a/arch/arm/include/asm/uaccess.h +++ b/arch/arm/include/asm/uaccess.h @@ -55,9 +55,10 @@ extern int fixup_exception(struct pt_regs *regs); * perform such accesses (eg, via list poison values) which could then * be exploited for priviledge escalation. 
*/ +#if defined(CONFIG_CPU_SW_DOMAIN_PAN) + static inline unsigned int uaccess_save_and_enable(void) { -#ifdef CONFIG_CPU_SW_DOMAIN_PAN unsigned int old_domain = get_domain(); /* Set the current domain access to permit user accesses */ @@ -65,19 +66,33 @@ static inline unsigned int uaccess_save_and_enable(void) domain_val(DOMAIN_USER, DOMAIN_CLIENT)); return old_domain; -#else - return 0; -#endif } static inline void uaccess_restore(unsigned int flags) { -#ifdef CONFIG_CPU_SW_DOMAIN_PAN /* Restore the user access mask */ set_domain(flags); -#endif } + +#else + +static inline unsigned int uaccess_save_and_enable(void) +{ + return 0; +} + +static inline void uaccess_restore(unsigned int flags) +{ +} + +static inline bool uaccess_disabled(struct pt_regs *regs) +{ + return false; +} + +#endif + /* * These two are intentionally not defined anywhere - if the kernel * code generates any references to them, that's a bug. diff --git a/arch/arm/lib/csumpartialcopyuser.S b/arch/arm/lib/csumpartialcopyuser.S index 1712f132b80d..d50fe3c07615 100644 --- a/arch/arm/lib/csumpartialcopyuser.S +++ b/arch/arm/lib/csumpartialcopyuser.S @@ -17,7 +17,8 @@ .text -#ifdef CONFIG_CPU_SW_DOMAIN_PAN +#if defined(CONFIG_CPU_SW_DOMAIN_PAN) + .macro save_regs mrc p15, 0, ip, c3, c0, 0 stmfd sp!, {r1, r2, r4 - r8, ip, lr} @@ -29,7 +30,9 @@ mcr p15, 0, ip, c3, c0, 0 ret lr .endm + #else + .macro save_regs stmfd sp!, {r1, r2, r4 - r8, lr} .endm @@ -37,6 +40,7 @@ .macro load_regs ldmfd sp!, {r1, r2, r4 - r8, pc} .endm + #endif .macro load1b, reg1 ^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH 4/4] arm: Implement privileged no-access using TTBR0 page table walks disabling 2015-09-23 14:24 [PATCH 0/4] arm: Privileged no-access for LPAE Catalin Marinas ` (2 preceding siblings ...) 2015-09-23 14:24 ` [PATCH 3/4] arm: Reduce the number of #ifdef CONFIG_CPU_SW_DOMAIN_PAN Catalin Marinas @ 2015-09-23 14:24 ` Catalin Marinas 2015-12-10 19:40 ` [PATCH 0/4] arm: Privileged no-access for LPAE Kees Cook 4 siblings, 0 replies; 10+ messages in thread From: Catalin Marinas @ 2015-09-23 14:24 UTC (permalink / raw) To: linux-arm-kernel With LPAE enabled, privileged no-access cannot be enforced using CPU domains as such feature is not available. This patch implements PAN by disabling TTBR0 page table walks while in kernel mode. The ARM architecture allows page table walks to be split between TTBR0 and TTBR1. With LPAE enabled, the split is defined by a combination of TTBCR T0SZ and T1SZ bits. Currently, an LPAE-enabled kernel uses TTBR0 for user addresses and TTBR1 for kernel addresses with the VMSPLIT_2G and VMSPLIT_3G configurations. The main advantage for the 3:1 split is that TTBR1 is reduced to 2 levels, so potentially faster TLB refill (though usually the first level entries are already cached in the TLB). The PAN support on LPAE-enabled kernels uses TTBR0 when running in user space or in kernel space during user access routines (TTBCR T0SZ and T1SZ are both 0). When running user accesses are disabled in kernel mode, TTBR0 page table walks are disabled by setting TTBCR.EPD0. TTBR1 is used for kernel accesses (including loadable modules; anything covered by swapper_pg_dir) by reducing the TTBCR.T0SZ to the minimum (2^(32-7) = 32MB). To avoid user accesses potentially hitting stale TLB entries, the ASID is switched to 0 (reserved) by setting TTBCR.A1 and using the ASID value in TTBR1. The difference from a non-PAN kernel is that with the 3:1 memory split, TTBR1 always uses 3 levels of page tables. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> --- arch/arm/Kconfig | 22 ++++++++++++--- arch/arm/include/asm/assembler.h | 42 +++++++++++++++++++++++++++++ arch/arm/include/asm/pgtable-3level-hwdef.h | 9 +++++++ arch/arm/include/asm/uaccess.h | 34 ++++++++++++++++++++--- arch/arm/lib/csumpartialcopyuser.S | 14 ++++++++++ arch/arm/mm/fault.c | 10 +++++++ 6 files changed, 124 insertions(+), 7 deletions(-) diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 72ad724c67ae..bcfe80c1036a 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -1704,9 +1704,9 @@ config HIGHPTE consumed by page tables. Setting this option will allow user-space 2nd level page tables to reside in high memory. -config CPU_SW_DOMAIN_PAN - bool "Enable use of CPU domains to implement privileged no-access" - depends on MMU && !ARM_LPAE +config ARM_PAN + bool "Enable privileged no-access" + depends on MMU default y help Increase kernel security by ensuring that normal kernel accesses @@ -1715,10 +1715,26 @@ config CPU_SW_DOMAIN_PAN by ensuring that magic values (such as LIST_POISON) will always fault when dereferenced. + The implementation uses CPU domains when !CONFIG_ARM_LPAE and + disabling of TTBR0 page table walks with CONFIG_ARM_LPAE. + +config CPU_SW_DOMAIN_PAN + def_bool y + depends on ARM_PAN && !ARM_LPAE + help + Enable use of CPU domains to implement privileged no-access. + CPUs with low-vector mappings use a best-efforts implementation. Their lower 1MB needs to remain accessible for the vectors, but the remainder of userspace will become appropriately inaccessible. 
+config CPU_TTBR0_PAN + def_bool y + depends on ARM_PAN && ARM_LPAE + help + Enable privileged no-access by disabling TTBR0 page table walks when + running in kernel mode. + config HW_PERF_EVENTS def_bool y depends on ARM_PMU diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h index 26b4c697c857..8dccd8916172 100644 --- a/arch/arm/include/asm/assembler.h +++ b/arch/arm/include/asm/assembler.h @@ -25,6 +25,7 @@ #include <asm/opcodes-virt.h> #include <asm/asm-offsets.h> #include <asm/page.h> +#include <asm/pgtable.h> #include <asm/thread_info.h> #define IOMEM(x) (x) @@ -485,6 +486,47 @@ THUMB( orr \reg , \reg , #PSR_T_BIT ) mcr p15, 0, r0, c3, c0, 0 .endm +#elif defined(CONFIG_CPU_TTBR0_PAN) + + .macro uaccess_disable, tmp, isb=1 + /* + * Disable TTBR0 page table walks (EDP0 = 1), use the reserved ASID + * from TTBR1 (A1 = 1) and enable TTBR1 page table walks for kernel + * addresses by reducing TTBR0 range to 32MB (T0SZ = 7). + */ + mrc p15, 0, \tmp, c2, c0, 2 + orr \tmp, \tmp, #TTBCR_EPD0 | TTBCR_T0SZ_MASK + orr \tmp, \tmp, #TTBCR_A1 + mcr p15, 0, \tmp, c2, c0, 2 + .if \isb + instr_sync + .endif + .endm + + .macro uaccess_enable, tmp, isb=1 + /* + * Enable TTBR0 page table walks (T0SZ = 0, EDP0 = 0) and ASID from + * TTBR0 (A1 = 0). + */ + mrc p15, 0, \tmp, c2, c0, 2 + bic \tmp, \tmp, #TTBCR_EPD0 | TTBCR_T0SZ_MASK + bic \tmp, \tmp, #TTBCR_A1 + mcr p15, 0, \tmp, c2, c0, 2 + .if \isb + instr_sync + .endif + .endm + + .macro uaccess_save, tmp + mrc p15, 0, \tmp, c2, c0, 2 + str \tmp, [sp, #S_FRAME_SIZE] + .endm + + .macro uaccess_restore + ldr r0, [sp, #S_FRAME_SIZE] + mcr p15, 0, r0, c2, c0, 2 + .endm + #else .macro uaccess_disable, tmp, isb=1 diff --git a/arch/arm/include/asm/pgtable-3level-hwdef.h b/arch/arm/include/asm/pgtable-3level-hwdef.h index 3ed7965106e3..92fee5f79e0f 100644 --- a/arch/arm/include/asm/pgtable-3level-hwdef.h +++ b/arch/arm/include/asm/pgtable-3level-hwdef.h @@ -85,6 +85,7 @@ #define PHYS_MASK_SHIFT (40) #define PHYS_MASK ((1ULL << PHYS_MASK_SHIFT) - 1) +#ifndef CONFIG_CPU_TTBR0_PAN /* * TTBR0/TTBR1 split (PAGE_OFFSET): * 0x40000000: T0SZ = 2, T1SZ = 0 (not used) @@ -104,6 +105,14 @@ #endif #define TTBR1_SIZE (((PAGE_OFFSET >> 30) - 1) << 16) +#else +/* + * With CONFIG_CPU_TTBR0_PAN enabled, TTBR1 is only used during uaccess + * disabled regions when TTBR0 is disabled. + */ +#define TTBR1_OFFSET 0 /* pointing to swapper_pg_dir */ +#define TTBR1_SIZE 0 /* TTBR1 size controlled via TTBCR.T0SZ */ +#endif /* * TTBCR register bits. diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h index 711c9877787b..bbc4e97c1951 100644 --- a/arch/arm/include/asm/uaccess.h +++ b/arch/arm/include/asm/uaccess.h @@ -16,6 +16,8 @@ #include <asm/errno.h> #include <asm/memory.h> #include <asm/domain.h> +#include <asm/pgtable.h> +#include <asm/proc-fns.h> #include <asm/unified.h> #include <asm/compiler.h> @@ -74,21 +76,45 @@ static inline void uaccess_restore(unsigned int flags) set_domain(flags); } - -#else +#elif defined(CONFIG_CPU_TTBR0_PAN) static inline unsigned int uaccess_save_and_enable(void) { - return 0; + unsigned int old_ttbcr = cpu_get_ttbcr(); + + /* + * Enable TTBR0 page table walks (T0SZ = 0, EDP0 = 0) and ASID from + * TTBR0 (A1 = 0). 
+ */ + cpu_set_ttbcr(old_ttbcr & ~(TTBCR_A1 | TTBCR_EPD0 | TTBCR_T0SZ_MASK)); + isb(); + + return old_ttbcr; } static inline void uaccess_restore(unsigned int flags) { + cpu_set_ttbcr(flags); + isb(); } static inline bool uaccess_disabled(struct pt_regs *regs) { - return false; + /* uaccess state saved above pt_regs on SVC exception entry */ + unsigned int ttbcr = *(unsigned int *)(regs + 1); + + return ttbcr & TTBCR_EPD0; +} + +#else + +static inline unsigned int uaccess_save_and_enable(void) +{ + return 0; +} + +static inline void uaccess_restore(unsigned int flags) +{ } #endif diff --git a/arch/arm/lib/csumpartialcopyuser.S b/arch/arm/lib/csumpartialcopyuser.S index d50fe3c07615..4ef2515f051a 100644 --- a/arch/arm/lib/csumpartialcopyuser.S +++ b/arch/arm/lib/csumpartialcopyuser.S @@ -31,6 +31,20 @@ ret lr .endm +#elif defined(CONFIG_CPU_TTBR0_PAN) + + .macro save_regs + mrc p15, 0, ip, c2, c0, 2 + stmfd sp!, {r1, r2, r4 - r8, ip, lr} + uaccess_enable ip + .endm + + .macro load_regs + ldmfd sp!, {r1, r2, r4 - r8, ip, lr} + mcr p15, 0, ip, c2, c0, 2 + ret lr + .endm + #else .macro save_regs diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c index 0d629b8f973f..a16de0635de2 100644 --- a/arch/arm/mm/fault.c +++ b/arch/arm/mm/fault.c @@ -284,6 +284,16 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs) if (fsr & FSR_WRITE) flags |= FAULT_FLAG_WRITE; +#ifdef CONFIG_CPU_TTBR0_PAN + /* + * Privileged access aborts with CONFIG_CPU_TTBR0_PAN enabled are + * routed via the translation fault mechanism. Check whether uaccess + * is disabled while in kernel mode. + */ + if (!user_mode(regs) && uaccess_disabled(regs)) + goto no_context; +#endif + /* * As per x86, we may deadlock here. However, since the kernel only * validly references user space from well defined areas of the code, ^ permalink raw reply related [flat|nested] 10+ messages in thread
* [PATCH 0/4] arm: Privileged no-access for LPAE
From: Kees Cook @ 2015-12-10 19:40 UTC (permalink / raw)
To: linux-arm-kernel

[thread necromancy]

This series looks good to me. I'd love to see it accepted. At the very
least the cleanups look like no-brainers. :)

Please consider the series:

Reviewed-by: Kees Cook <keescook@chromium.org>

Thanks for working on it!

-Kees

On Wed, Sep 23, 2015 at 7:24 AM, Catalin Marinas
<catalin.marinas@arm.com> wrote:
> Hi,
>
> This is the first attempt to add support for privileged no-access on
> LPAE-enabled kernels by disabling TTBR0 page table walks. The first
> three patches are pretty much refactoring/clean-up without any
> functional change. The last patch implements the actual PAN using TTBR0
> disabling. Its description also contains the details of how this works.
>
> The patches can be found here:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64 arm32-pan
>
> Tested in different configurations (with and without LPAE, all
> VMSPLIT_*, loadable modules) but only under KVM on Juno (ARMv8).
>
> Thanks.
>
> Catalin Marinas (4):
>   arm: kvm: Move TTBCR_* definitions from kvm_arm.h into
>     pgtable-3level-hwdef.h
>   arm: Move asm statements accessing TTBCR into dedicated functions
>   arm: Reduce the number of #ifdef CONFIG_CPU_SW_DOMAIN_PAN
>   arm: Implement privileged no-access using TTBR0 page table walks
>     disabling
>
> arch/arm/Kconfig | 22 ++++++++--
> arch/arm/include/asm/assembler.h | 68 +++++++++++++++++++++++++----
> arch/arm/include/asm/kvm_arm.h | 17 +-------
> arch/arm/include/asm/pgtable-3level-hwdef.h | 26 +++++++++++
> arch/arm/include/asm/proc-fns.h | 12 +++++
> arch/arm/include/asm/uaccess.h | 53 +++++++++++++++++++---
> arch/arm/kvm/init.S | 2 +-
> arch/arm/lib/csumpartialcopyuser.S | 20 ++++++++-
> arch/arm/mm/fault.c | 10 +++++
> arch/arm/mm/mmu.c | 7 ++-
> 10 files changed, 199 insertions(+), 38 deletions(-)

--
Kees Cook
Chrome OS & Brillo Security
* [PATCH 0/4] arm: Privileged no-access for LPAE
From: Catalin Marinas @ 2015-12-11 17:21 UTC (permalink / raw)
To: linux-arm-kernel

On Thu, Dec 10, 2015 at 11:40:44AM -0800, Kees Cook wrote:
> [thread necromancy]
>
> This series looks good to me. I'd love to see it accepted. At the very
> least the cleanups look like no-brainers. :)
>
> Please consider the series:
>
> Reviewed-by: Kees Cook <keescook@chromium.org>
>
> Thanks for working on it!

Thanks for the review. After some more (internal) discussions around
these patches, I need to get clarification on the architecture whether
changing the TTBCR.A1 bit is enough to guarantee an ASID change (I do
this trick to change to the reserved ASID and avoid TLB invalidation as
normally required by changes to translation control registers). If
that's not allowed by the architecture, I would have to change the
patches to switch to a reserved TTBR0 rather than disabling TTBR0 walks
at the TTBCR level.

--
Catalin
* Re: [PATCH 0/4] arm: Privileged no-access for LPAE
From: Orson Zhai @ 2020-09-28 13:09 UTC (permalink / raw)
To: Catalin Marinas; +Cc: linux-arm-kernel

Hi Catalin,

On Fri, Dec 11, 2015 at 05:21:40PM +0000, Catalin Marinas wrote:
> On Thu, Dec 10, 2015 at 11:40:44AM -0800, Kees Cook wrote:
> > [thread necromancy]
> >
> > This series looks good to me. I'd love to see it accepted. At the very
> > least the cleanups look like no-brainers. :)
> >
> > Please consider the series:
> >
> > Reviewed-by: Kees Cook <keescook@chromium.org>
> >
> > Thanks for working on it!
>
> Thanks for the review. After some more (internal) discussions around
> these patches, I need to get clarification on the architecture whether
> changing the TTBCR.A1 bit is enough to guarantee an ASID change (I do

Did you check it after then? Now I have a real requirement for implementing
LPAE and PAN at the same time. So I'd like to know if this patch could work.
I had some talk with Will about it at other place. He thought this patch is
not in correct state.

May I have your latest opinions?

Thanks.
-Orson

> this trick to change to the reserved ASID and avoid TLB invalidation as
> normally required by changes to translation control registers). If
> that's not allowed by the architecture, I would have to change the
> patches to switch to a reserved TTBR0 rather than disabling TTBR0 walks
> at the TTBCR level.
>
> --
> Catalin
* Re: [PATCH 0/4] arm: Privileged no-access for LPAE
From: Catalin Marinas @ 2020-09-28 16:29 UTC (permalink / raw)
To: Orson Zhai; +Cc: linux-arm-kernel

On Mon, Sep 28, 2020 at 09:09:07PM +0800, Orson Zhai wrote:
> On Fri, Dec 11, 2015 at 05:21:40PM +0000, Catalin Marinas wrote:
> > On Thu, Dec 10, 2015 at 11:40:44AM -0800, Kees Cook wrote:
> > > [thread necromancy]
> > >
> > > This series looks good to me. I'd love to see it accepted. At the very
> > > least the cleanups look like no-brainers. :)
> > >
> > > Please consider the series:
> > >
> > > Reviewed-by: Kees Cook <keescook@chromium.org>
> > >
> > > Thanks for working on it!
> >
> > Thanks for the review. After some more (internal) discussions around
> > these patches, I need to get clarification on the architecture whether
> > changing the TTBCR.A1 bit is enough to guarantee an ASID change (I do
>
> Did you check it after then? Now I have a real requirement for implementing
> LPAE and PAN at the same time. So I'd like to know if this patch could work.
> I had some talk with Will about it at other place. He thought this patch is
> not in correct state.
>
> May I have your latest opinions?

It may work on specific 32-bit CPU implementations but it's not
guaranteed since the TTBCR.A1 bit is allowed to be cached in the TLB.
If you have a CPU implementation in mind, you could check with the
microarchitects whether A1 is cached in the TLB. But since that's not
universally applicable, the patchset cannot be merged into mainline.

I haven't touched these patches for the past 5 years, so I can't tell
whether they still apply.

--
Catalin
* [PATCH 0/4] PAN for ARM32 using LPAE
@ 2024-01-23 21:16 Linus Walleij
From: Linus Walleij @ 2024-01-23 21:16 UTC (permalink / raw)
To: Russell King, Ard Biesheuvel, Arnd Bergmann, Stefan Wahren,
Kees Cook, Geert Uytterhoeven
Cc: linux-arm-kernel, Linus Walleij, Catalin Marinas
This is a patch set from Catalin that ended up on the back burner.
Since LPAE systems, i.e. ARM32 systems with a lot of physical memory,
will be with us for a while longer, this is a pretty straightforward
hardening measure that we should support.
The last patch explains the mechanism: since PAN based on CPU domains
isn't available when the LPAE MMU tables are used, we exploit the split
between the two translation table base registers instead: TTBR0 covers
userspace mappings and TTBR1 covers kernelspace mappings. While
executing in kernelspace, we protect userspace by simply disabling
TTBR0 page table walks.
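
As a rough illustration only (this uses the TTBCR bit names and the
cpu_get_ttbcr()/cpu_set_ttbcr() helpers that the patches below
introduce; the function names here are made up for the sketch and it is
not the literal kernel code), the uaccess window around a user accessor
mirrors what uaccess_save_and_enable()/uaccess_restore() do in patch 4/4:

  /* Open a uaccess window: allow TTBR0 walks again for user accesses */
  static inline unsigned int ttbr0_uaccess_enable(void)
  {
          unsigned int old_ttbcr = cpu_get_ttbcr();

          /* Clear EPD0/T0SZ/A1: TTBR0 walks on, ASID taken from TTBR0 */
          cpu_set_ttbcr(old_ttbcr & ~(TTBCR_EPD0 | TTBCR_T0SZ_MASK | TTBCR_A1));
          isb();
          return old_ttbcr;
  }

  /* Close the window: restore the previous state (normally EPD0=1,
   * T0SZ=7, A1=1, i.e. TTBR0 walks off and PAN enforced again) */
  static inline void ttbr0_uaccess_disable(unsigned int old_ttbcr)
  {
          cpu_set_ttbcr(old_ttbcr);
          isb();
  }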
This was tested by a simple hack in the ELF loader:
create_elf_tables()
+	unsigned char *test;
(...)
	if (copy_to_user(u_rand_bytes, k_rand_bytes, sizeof(k_rand_bytes)))
		return -EFAULT;
+	/* Cause a kernelspace access to userspace memory */
+	test = (char *)u_rand_bytes;
+	pr_info("Some byte: %02x\n", *test);
This tries to read a byte from userspace memory right after the
first unconditional copy_to_user(), a function that carefully
switches access permissions if we're using PAN.
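
For reference, the C-level user copy helpers bracket the access with
exactly this kind of window; roughly (a simplified sketch of the
pattern in arch/arm/include/asm/uaccess.h, with a made-up function
name, not a verbatim copy):

  static inline unsigned long
  copy_to_user_sketch(void __user *to, const void *from, unsigned long n)
  {
          unsigned int ua_flags = uaccess_save_and_enable();

          n = arm_copy_to_user(to, from, n);      /* the actual copy */
          uaccess_restore(ua_flags);
          return n;
  }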
Without LPAE PAN this will just happily print the byte read from
userspace, but with LPAE PAN it will cause a predictable
crash:
Run /init as init process
Some byte: ac
8<--- cut here ---
Unable to handle kernel paging request at virtual address 7ec59f6b when read
[7ec59f6b] *pgd=82c3b003, *pmd=82863003, *pte=e00000882f6f5f
Internal error: Oops: 206 [#1] SMP ARM
CPU: 0 PID: 47 Comm: rc.init Not tainted 6.7.0-rc1+ #25
Hardware name: ARM-Versatile Express
PC is at create_elf_tables+0x13c/0x608
Thus we can show that LPAE PAN does its job.
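
The oops comes from the check added to do_page_fault() in patch 4/4:
with TTBR0 walks disabled, a kernel-mode fault on a user address is not
handled as a normal page fault but sent straight to the kernel fault
path (shown here slightly abbreviated):

  if (IS_ENABLED(CONFIG_CPU_TTBR0_PAN) &&
      !user_mode(regs) && uaccess_disabled(regs))
          goto no_context;  /* -> "Unable to handle kernel paging request" */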
Changes from Catalin's initial patch set:
- Use IS_ENABLED() to avoid some ifdefs
- Create a uaccess_disabled() for classic CPU domains
and create a stub uaccess_disabled() for !PAN so we can
always check this.
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
---
Catalin Marinas (4):
ARM: Add TTBCR_* definitions to pgtable-3level-hwdef.h
ARM: Move asm statements accessing TTBCR into C functions
ARM: Reduce the number of #ifdef CONFIG_CPU_SW_DOMAIN_PAN
ARM: Implement privileged no-access using TTBR0 page table walks disabling
arch/arm/Kconfig | 22 ++++++++--
arch/arm/include/asm/assembler.h | 1 +
arch/arm/include/asm/pgtable-3level-hwdef.h | 26 +++++++++++
arch/arm/include/asm/proc-fns.h | 12 +++++
arch/arm/include/asm/uaccess-asm.h | 58 ++++++++++++++++++++++--
arch/arm/include/asm/uaccess.h | 68 ++++++++++++++++++++++++++---
arch/arm/kernel/suspend.c | 8 ++++
arch/arm/lib/csumpartialcopyuser.S | 20 ++++++++-
arch/arm/mm/fault.c | 8 ++++
arch/arm/mm/mmu.c | 7 ++-
10 files changed, 212 insertions(+), 18 deletions(-)
---
base-commit: 8615ebf1370a798c403b4495f39de48270ad48f9
change-id: 20231216-arm32-lpae-pan-56125ab63d63
Best regards,
--
Linus Walleij <linus.walleij@linaro.org>
^ permalink raw reply [flat|nested] 10+ messages in thread* [PATCH 4/4] ARM: Implement privileged no-access using TTBR0 page table walks disabling 2024-01-23 21:16 [PATCH 0/4] PAN for ARM32 using LPAE Linus Walleij @ 2024-01-23 21:16 ` Linus Walleij 0 siblings, 0 replies; 10+ messages in thread From: Linus Walleij @ 2024-01-23 21:16 UTC (permalink / raw) To: Russell King, Ard Biesheuvel, Arnd Bergmann, Stefan Wahren, Kees Cook, Geert Uytterhoeven Cc: linux-arm-kernel, Linus Walleij, Catalin Marinas From: Catalin Marinas <catalin.marinas@arm.com> With LPAE enabled, privileged no-access cannot be enforced using CPU domains as such feature is not available. This patch implements PAN by disabling TTBR0 page table walks while in kernel mode. The ARM architecture allows page table walks to be split between TTBR0 and TTBR1. With LPAE enabled, the split is defined by a combination of TTBCR T0SZ and T1SZ bits. Currently, an LPAE-enabled kernel uses TTBR0 for user addresses and TTBR1 for kernel addresses with the VMSPLIT_2G and VMSPLIT_3G configurations. The main advantage for the 3:1 split is that TTBR1 is reduced to 2 levels, so potentially faster TLB refill (though usually the first level entries are already cached in the TLB). The PAN support on LPAE-enabled kernels uses TTBR0 when running in user space or in kernel space during user access routines (TTBCR T0SZ and T1SZ are both 0). When running user accesses are disabled in kernel mode, TTBR0 page table walks are disabled by setting TTBCR.EPD0. TTBR1 is used for kernel accesses (including loadable modules; anything covered by swapper_pg_dir) by reducing the TTBCR.T0SZ to the minimum (2^(32-7) = 32MB). To avoid user accesses potentially hitting stale TLB entries, the ASID is switched to 0 (reserved) by setting TTBCR.A1 and using the ASID value in TTBR1. The difference from a non-PAN kernel is that with the 3:1 memory split, TTBR1 always uses 3 levels of page tables. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Kees Cook <keescook@chromium.org> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> --- arch/arm/Kconfig | 22 ++++++++++++-- arch/arm/include/asm/assembler.h | 1 + arch/arm/include/asm/pgtable-3level-hwdef.h | 9 ++++++ arch/arm/include/asm/uaccess-asm.h | 42 ++++++++++++++++++++++++++ arch/arm/include/asm/uaccess.h | 47 +++++++++++++++++++++++++++++ arch/arm/kernel/suspend.c | 8 +++++ arch/arm/lib/csumpartialcopyuser.S | 14 +++++++++ arch/arm/mm/fault.c | 8 +++++ 8 files changed, 148 insertions(+), 3 deletions(-) diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 0af6709570d1..3d97a15a3e2d 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -1231,9 +1231,9 @@ config HIGHPTE consumed by page tables. Setting this option will allow user-space 2nd level page tables to reside in high memory. -config CPU_SW_DOMAIN_PAN - bool "Enable use of CPU domains to implement privileged no-access" - depends on MMU && !ARM_LPAE +config ARM_PAN + bool "Enable privileged no-access" + depends on MMU default y help Increase kernel security by ensuring that normal kernel accesses @@ -1242,10 +1242,26 @@ config CPU_SW_DOMAIN_PAN by ensuring that magic values (such as LIST_POISON) will always fault when dereferenced. + The implementation uses CPU domains when !CONFIG_ARM_LPAE and + disabling of TTBR0 page table walks with CONFIG_ARM_LPAE. + +config CPU_SW_DOMAIN_PAN + def_bool y + depends on ARM_PAN && !ARM_LPAE + help + Enable use of CPU domains to implement privileged no-access. 
+ CPUs with low-vector mappings use a best-efforts implementation. Their lower 1MB needs to remain accessible for the vectors, but the remainder of userspace will become appropriately inaccessible. +config CPU_TTBR0_PAN + def_bool y + depends on ARM_PAN && ARM_LPAE + help + Enable privileged no-access by disabling TTBR0 page table walks when + running in kernel mode. + config HW_PERF_EVENTS def_bool y depends on ARM_PMU diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h index aebe2c8f6a68..d33c1e24e00b 100644 --- a/arch/arm/include/asm/assembler.h +++ b/arch/arm/include/asm/assembler.h @@ -21,6 +21,7 @@ #include <asm/opcodes-virt.h> #include <asm/asm-offsets.h> #include <asm/page.h> +#include <asm/pgtable.h> #include <asm/thread_info.h> #include <asm/uaccess-asm.h> diff --git a/arch/arm/include/asm/pgtable-3level-hwdef.h b/arch/arm/include/asm/pgtable-3level-hwdef.h index 19da7753a0b8..323ad811732e 100644 --- a/arch/arm/include/asm/pgtable-3level-hwdef.h +++ b/arch/arm/include/asm/pgtable-3level-hwdef.h @@ -74,6 +74,7 @@ #define PHYS_MASK_SHIFT (40) #define PHYS_MASK ((1ULL << PHYS_MASK_SHIFT) - 1) +#ifndef CONFIG_CPU_TTBR0_PAN /* * TTBR0/TTBR1 split (PAGE_OFFSET): * 0x40000000: T0SZ = 2, T1SZ = 0 (not used) @@ -93,6 +94,14 @@ #endif #define TTBR1_SIZE (((PAGE_OFFSET >> 30) - 1) << 16) +#else +/* + * With CONFIG_CPU_TTBR0_PAN enabled, TTBR1 is only used during uaccess + * disabled regions when TTBR0 is disabled. + */ +#define TTBR1_OFFSET 0 /* pointing to swapper_pg_dir */ +#define TTBR1_SIZE 0 /* TTBR1 size controlled via TTBCR.T0SZ */ +#endif /* * TTBCR register bits. diff --git a/arch/arm/include/asm/uaccess-asm.h b/arch/arm/include/asm/uaccess-asm.h index ea42ba25920f..f7acf4cabbdc 100644 --- a/arch/arm/include/asm/uaccess-asm.h +++ b/arch/arm/include/asm/uaccess-asm.h @@ -65,6 +65,37 @@ .endif .endm +#elif defined(CONFIG_CPU_TTBR0_PAN) + + .macro uaccess_disable, tmp, isb=1 + /* + * Disable TTBR0 page table walks (EDP0 = 1), use the reserved ASID + * from TTBR1 (A1 = 1) and enable TTBR1 page table walks for kernel + * addresses by reducing TTBR0 range to 32MB (T0SZ = 7). + */ + mrc p15, 0, \tmp, c2, c0, 2 @ read TTBCR + orr \tmp, \tmp, #TTBCR_EPD0 | TTBCR_T0SZ_MASK + orr \tmp, \tmp, #TTBCR_A1 + mcr p15, 0, \tmp, c2, c0, 2 @ write TTBCR + .if \isb + instr_sync + .endif + .endm + + .macro uaccess_enable, tmp, isb=1 + /* + * Enable TTBR0 page table walks (T0SZ = 0, EDP0 = 0) and ASID from + * TTBR0 (A1 = 0). + */ + mrc p15, 0, \tmp, c2, c0, 2 @ read TTBCR + bic \tmp, \tmp, #TTBCR_EPD0 | TTBCR_T0SZ_MASK + bic \tmp, \tmp, #TTBCR_A1 + mcr p15, 0, \tmp, c2, c0, 2 @ write TTBCR + .if \isb + instr_sync + .endif + .endm + #else .macro uaccess_disable, tmp, isb=1 @@ -79,6 +110,12 @@ #define DACR(x...) x #else #define DACR(x...) +#endif + +#ifdef CONFIG_CPU_TTBR0_PAN +#define PAN(x...) x +#else +#define PAN(x...) 
#endif /* @@ -94,6 +131,8 @@ .macro uaccess_entry, tsk, tmp0, tmp1, tmp2, disable DACR( mrc p15, 0, \tmp0, c3, c0, 0) DACR( str \tmp0, [sp, #SVC_DACR]) + PAN( mrc p15, 0, \tmp0, c2, c0, 2) + PAN( str \tmp0, [sp, #SVC_DACR]) .if \disable && IS_ENABLED(CONFIG_CPU_SW_DOMAIN_PAN) /* kernel=client, user=no access */ mov \tmp2, #DACR_UACCESS_DISABLE @@ -112,8 +151,11 @@ .macro uaccess_exit, tsk, tmp0, tmp1 DACR( ldr \tmp0, [sp, #SVC_DACR]) DACR( mcr p15, 0, \tmp0, c3, c0, 0) + PAN( ldr \tmp0, [sp, #SVC_DACR]) + PAN( mcr p15, 0, \tmp0, c2, c0, 2) .endm #undef DACR +#undef PAN #endif /* __ASM_UACCESS_ASM_H__ */ diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h index 9b9234d1bb6a..5b542eab009f 100644 --- a/arch/arm/include/asm/uaccess.h +++ b/arch/arm/include/asm/uaccess.h @@ -14,6 +14,8 @@ #include <asm/domain.h> #include <asm/unaligned.h> #include <asm/unified.h> +#include <asm/pgtable.h> +#include <asm/proc-fns.h> #include <asm/compiler.h> #include <asm/extable.h> @@ -43,6 +45,45 @@ static __always_inline void uaccess_restore(unsigned int flags) set_domain(flags); } +static inline bool uaccess_disabled(struct pt_regs *regs) +{ + /* + * This is handled by hardware domain checks but included for + * completeness. + */ + return !(get_domain() & domain_mask(DOMAIN_USER)); +} + +#elif defined(CONFIG_CPU_TTBR0_PAN) + +static inline unsigned int uaccess_save_and_enable(void) +{ + unsigned int old_ttbcr = cpu_get_ttbcr(); + + /* + * Enable TTBR0 page table walks (T0SZ = 0, EDP0 = 0) and ASID from + * TTBR0 (A1 = 0). + */ + cpu_set_ttbcr(old_ttbcr & ~(TTBCR_A1 | TTBCR_EPD0 | TTBCR_T0SZ_MASK)); + isb(); + + return old_ttbcr; +} + +static inline void uaccess_restore(unsigned int flags) +{ + cpu_set_ttbcr(flags); + isb(); +} + +static inline bool uaccess_disabled(struct pt_regs *regs) +{ + /* uaccess state saved above pt_regs on SVC exception entry */ + unsigned int ttbcr = *(unsigned int *)(regs + 1); + + return ttbcr & TTBCR_EPD0; +} + #else static inline unsigned int uaccess_save_and_enable(void) @@ -54,6 +95,12 @@ static inline void uaccess_restore(unsigned int flags) { } +static inline bool uaccess_disabled(struct pt_regs *regs) +{ + /* Without PAN userspace is always available */ + return false; +} + #endif /* diff --git a/arch/arm/kernel/suspend.c b/arch/arm/kernel/suspend.c index c3ec3861dd07..58a6441b58c4 100644 --- a/arch/arm/kernel/suspend.c +++ b/arch/arm/kernel/suspend.c @@ -12,6 +12,7 @@ #include <asm/smp_plat.h> #include <asm/suspend.h> #include <asm/tlbflush.h> +#include <asm/uaccess.h> extern int __cpu_suspend(unsigned long, int (*)(unsigned long), u32 cpuid); extern void cpu_resume_mmu(void); @@ -26,6 +27,13 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long)) if (!idmap_pgd) return -EINVAL; + /* + * Needed for the MMU disabling/enabing code to be able to run from + * TTBR0 addresses. 
+ */ + if (IS_ENABLED(CONFIG_CPU_TTBR0_PAN)) + uaccess_save_and_enable(); + /* * Function graph tracer state gets incosistent when the kernel * calls functions that never return (aka suspend finishers) hence diff --git a/arch/arm/lib/csumpartialcopyuser.S b/arch/arm/lib/csumpartialcopyuser.S index 04d8d9d741c7..c289bde04743 100644 --- a/arch/arm/lib/csumpartialcopyuser.S +++ b/arch/arm/lib/csumpartialcopyuser.S @@ -27,6 +27,20 @@ ret lr .endm +#elif defined(CONFIG_CPU_TTBR0_PAN) + + .macro save_regs + mrc p15, 0, ip, c2, c0, 2 @ read TTBCR + stmfd sp!, {r1, r2, r4 - r8, ip, lr} + uaccess_enable ip + .endm + + .macro load_regs + ldmfd sp!, {r1, r2, r4 - r8, ip, lr} + mcr p15, 0, ip, c2, c0, 2 @ restore TTBCR + ret lr + .endm + #else .macro save_regs diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c index e96fb40b9cc3..de4abf9dfd6a 100644 --- a/arch/arm/mm/fault.c +++ b/arch/arm/mm/fault.c @@ -278,6 +278,14 @@ do_page_fault(unsigned long addr, unsigned int fsr, struct pt_regs *regs) perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, addr); + /* + * Privileged access aborts with CONFIG_CPU_TTBR0_PAN enabled are + * routed via the translation fault mechanism. Check whether uaccess + * is disabled while in kernel mode. + */ + if (IS_ENABLED(CONFIG_CPU_TTBR0_PAN) && !user_mode(regs) && uaccess_disabled(regs)) + goto no_context; + if (!(flags & FAULT_FLAG_USER)) goto lock_mmap; -- 2.34.1 _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel ^ permalink raw reply related [flat|nested] 10+ messages in thread