linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH v3 0/6] arm64: kernel: Add support for Privileged Access Never
@ 2015-07-21 12:23 James Morse
  2015-07-21 12:23 ` [PATCH v3 1/6] arm64: kernel: Add cpuid_feature_extract_field() for 4bit sign extension James Morse
                   ` (5 more replies)
  0 siblings, 6 replies; 19+ messages in thread
From: James Morse @ 2015-07-21 12:23 UTC (permalink / raw)
  To: linux-arm-kernel

This series adds support for Privileged Access Never (PAN; part of the ARMv8.1
Extensions). When enabled, this feature causes a permission fault if the kernel
attempts to access memory that is also accessible by userspace; instead, the
PAN bit must be cleared when accessing userspace memory (or the unprivileged
ldt*/stt* instructions must be used).

This series detects and enables this feature, and uses alternatives to change
{get,put}_user() et al to clear the PAN bit while they do their work.

Changes since v2:
* Added missing PAN-swivel around swp emulation. (Thanks to Vladimir Murzin for
  spotting this!)
* Use bit shifts in cpuid_feature_extract_field(), to produce better asm.
* Changed the enable() patch field names, and switched to ints.
* Removed PSTATE_PAN define and use PSR_PAN_BIT instead.

Changes since v1:
* Copied cpuid_feature_extract_field() from arch/arm as a new patch, suggested
  by Russell King [1].
* Changed feature-detection patch to use cpuid_feature_extract_field() for sign
  extension, and '>='.
* Moved SCTLR_EL1_* from asm/cputype.h to asm/sysreg.h
* Added PSR_PAN_BIT in uapi/asm/ptrace.h
* Removed the setting of PSTATE_PAN in kernel/process.c

[1] http://www.spinics.net/lists/arm-kernel/msg432518.html


James Morse (6):
  arm64: kernel: Add cpuid_feature_extract_field() for 4bit sign
    extension
  arm64: kernel: preparatory: Move config_sctlr_el1
  arm64: kernel: Add cpufeature 'enable' callback
  arm64: kernel: Add min_field_value and use '>=' for feature detection
  arm64: kernel: Add optional CONFIG_ parameter to ALTERNATIVE()
  arm64: kernel: Add support for Privileged Access Never

 arch/arm64/Kconfig                   | 14 +++++++++++++
 arch/arm64/include/asm/alternative.h | 28 ++++++++++++++++++++++---
 arch/arm64/include/asm/cpufeature.h  | 15 +++++++++++---
 arch/arm64/include/asm/cputype.h     |  3 ---
 arch/arm64/include/asm/futex.h       |  8 ++++++++
 arch/arm64/include/asm/processor.h   |  2 ++
 arch/arm64/include/asm/sysreg.h      | 20 ++++++++++++++++++
 arch/arm64/include/asm/uaccess.h     | 11 ++++++++++
 arch/arm64/include/uapi/asm/ptrace.h |  1 +
 arch/arm64/kernel/armv8_deprecated.c | 19 ++++++++---------
 arch/arm64/kernel/cpufeature.c       | 40 +++++++++++++++++++++++++++++++++---
 arch/arm64/lib/clear_user.S          |  8 ++++++++
 arch/arm64/lib/copy_from_user.S      |  8 ++++++++
 arch/arm64/lib/copy_in_user.S        |  8 ++++++++
 arch/arm64/lib/copy_to_user.S        |  8 ++++++++
 arch/arm64/mm/fault.c                | 23 +++++++++++++++++++++
 16 files changed, 193 insertions(+), 23 deletions(-)

-- 
2.1.4

^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH v3 1/6] arm64: kernel: Add cpuid_feature_extract_field() for 4bit sign extension
  2015-07-21 12:23 [PATCH v3 0/6] arm64: kernel: Add support for Privileged Access Never James Morse
@ 2015-07-21 12:23 ` James Morse
  2015-07-21 12:32   ` Catalin Marinas
  2015-07-21 12:23 ` [PATCH v3 2/6] arm64: kernel: preparatory: Move config_sctlr_el1 James Morse
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 19+ messages in thread
From: James Morse @ 2015-07-21 12:23 UTC (permalink / raw)
  To: linux-arm-kernel

Based on arch/arm/include/asm/cputype.h, this function does the
shifting and sign extension necessary when accessing cpu feature fields.

Signed-off-by: James Morse <james.morse@arm.com>
Suggested-by: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/cpufeature.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index c1044218a63a..9fafa7537997 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -70,6 +70,13 @@ static inline void cpus_set_cap(unsigned int num)
 		__set_bit(num, cpu_hwcaps);
 }
 
+static inline int __attribute_const__ cpuid_feature_extract_field(u64 features,
+								  int field)
+{
+	return (s64)(features << (64 - 4 - field)) >> (64 - 4);
+}
+
+
 void check_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
 			    const char *info);
 void check_local_cpu_errata(void);
-- 
2.1.4


* [PATCH v3 2/6] arm64: kernel: preparatory: Move config_sctlr_el1
  2015-07-21 12:23 [PATCH v3 0/6] arm64: kernel: Add support for Privileged Access Never James Morse
  2015-07-21 12:23 ` [PATCH v3 1/6] arm64: kernel: Add cpuid_feature_extract_field() for 4bit sign extension James Morse
@ 2015-07-21 12:23 ` James Morse
  2015-07-21 12:23 ` [PATCH v3 3/6] arm64: kernel: Add cpufeature 'enable' callback James Morse
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 19+ messages in thread
From: James Morse @ 2015-07-21 12:23 UTC (permalink / raw)
  To: linux-arm-kernel

Later patches need config_sctlr_el1 to set/clear bits in the sctlr_el1
register.

This patch moves this function into a header file.

Signed-off-by: James Morse <james.morse@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm64/include/asm/cputype.h     |  3 ---
 arch/arm64/include/asm/sysreg.h      | 12 ++++++++++++
 arch/arm64/kernel/armv8_deprecated.c | 11 +----------
 3 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
index a84ec605bed8..ee6403df9fe4 100644
--- a/arch/arm64/include/asm/cputype.h
+++ b/arch/arm64/include/asm/cputype.h
@@ -81,9 +81,6 @@
 #define ID_AA64MMFR0_BIGEND(mmfr0)	\
 	(((mmfr0) & ID_AA64MMFR0_BIGEND_MASK) >> ID_AA64MMFR0_BIGEND_SHIFT)
 
-#define SCTLR_EL1_CP15BEN	(0x1 << 5)
-#define SCTLR_EL1_SED		(0x1 << 8)
-
 #ifndef __ASSEMBLY__
 
 /*
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 5c89df0acbcb..56391fbae1e1 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -20,6 +20,9 @@
 #ifndef __ASM_SYSREG_H
 #define __ASM_SYSREG_H
 
+#define SCTLR_EL1_CP15BEN	(0x1 << 5)
+#define SCTLR_EL1_SED		(0x1 << 8)
+
 #define sys_reg(op0, op1, crn, crm, op2) \
 	((((op0)-2)<<19)|((op1)<<16)|((crn)<<12)|((crm)<<8)|((op2)<<5))
 
@@ -55,6 +58,15 @@ asm(
 "	.endm\n"
 );
 
+static inline void config_sctlr_el1(u32 clear, u32 set)
+{
+	u32 val;
+
+	asm volatile("mrs %0, sctlr_el1" : "=r" (val));
+	val &= ~clear;
+	val |= set;
+	asm volatile("msr sctlr_el1, %0" : : "r" (val));
+}
 #endif
 
 #endif	/* __ASM_SYSREG_H */
diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
index 7922c2e710ca..78d56bff91fd 100644
--- a/arch/arm64/kernel/armv8_deprecated.c
+++ b/arch/arm64/kernel/armv8_deprecated.c
@@ -16,6 +16,7 @@
 
 #include <asm/insn.h>
 #include <asm/opcodes.h>
+#include <asm/sysreg.h>
 #include <asm/system_misc.h>
 #include <asm/traps.h>
 #include <asm/uaccess.h>
@@ -504,16 +505,6 @@ ret:
 	return 0;
 }
 
-static inline void config_sctlr_el1(u32 clear, u32 set)
-{
-	u32 val;
-
-	asm volatile("mrs %0, sctlr_el1" : "=r" (val));
-	val &= ~clear;
-	val |= set;
-	asm volatile("msr sctlr_el1, %0" : : "r" (val));
-}
-
 static int cp15_barrier_set_hw_mode(bool enable)
 {
 	if (enable)
-- 
2.1.4


* [PATCH v3 3/6] arm64: kernel: Add cpufeature 'enable' callback
  2015-07-21 12:23 [PATCH v3 0/6] arm64: kernel: Add support for Privileged Access Never James Morse
  2015-07-21 12:23 ` [PATCH v3 1/6] arm64: kernel: Add cpuid_feature_extract_field() for 4bit sign extension James Morse
  2015-07-21 12:23 ` [PATCH v3 2/6] arm64: kernel: preparatory: Move config_sctlr_el1 James Morse
@ 2015-07-21 12:23 ` James Morse
  2015-07-21 12:23 ` [PATCH v3 4/6] arm64: kernel: Add min_field_value and use '>=' for feature detection James Morse
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 19+ messages in thread
From: James Morse @ 2015-07-21 12:23 UTC (permalink / raw)
  To: linux-arm-kernel

This patch adds an 'enable()' callback to cpu capability/feature
detection, allowing features that require some setup or configuration
to get this opportunity once the feature has been detected.

Signed-off-by: James Morse <james.morse@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm64/include/asm/cpufeature.h | 1 +
 arch/arm64/kernel/cpufeature.c      | 6 ++++++
 2 files changed, 7 insertions(+)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 9fafa7537997..484fa9425314 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -34,6 +34,7 @@ struct arm64_cpu_capabilities {
 	const char *desc;
 	u16 capability;
 	bool (*matches)(const struct arm64_cpu_capabilities *);
+	void (*enable)(void);
 	union {
 		struct {	/* To be used for erratum handling only */
 			u32 midr_model;
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 5ad86ceac010..650ffc28bedc 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -55,6 +55,12 @@ void check_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
 			pr_info("%s %s\n", info, caps[i].desc);
 		cpus_set_cap(caps[i].capability);
 	}
+
+	/* second pass allows enable() to consider interacting capabilities */
+	for (i = 0; caps[i].desc; i++) {
+		if (cpus_have_cap(caps[i].capability) && caps[i].enable)
+			caps[i].enable();
+	}
 }
 
 void check_local_cpu_features(void)
-- 
2.1.4


* [PATCH v3 4/6] arm64: kernel: Add min_field_value and use '>=' for feature detection
  2015-07-21 12:23 [PATCH v3 0/6] arm64: kernel: Add support for Privileged Access Never James Morse
                   ` (2 preceding siblings ...)
  2015-07-21 12:23 ` [PATCH v3 3/6] arm64: kernel: Add cpufeature 'enable' callback James Morse
@ 2015-07-21 12:23 ` James Morse
  2015-07-21 12:33   ` Catalin Marinas
  2015-07-21 12:23 ` [PATCH v3 5/6] arm64: kernel: Add optional CONFIG_ parameter to ALTERNATIVE() James Morse
  2015-07-21 12:23 ` [PATCH v3 6/6] arm64: kernel: Add support for Privileged Access Never James Morse
  5 siblings, 1 reply; 19+ messages in thread
From: James Morse @ 2015-07-21 12:23 UTC (permalink / raw)
  To: linux-arm-kernel

When a new cpu feature is available, the cpu feature bits will have some
initial value, which is incremented when the feature is updated.
This patch renames 'register_value' to 'min_field_value', and checks that
the feature bits value (interpreted as a signed int) is at least this
minimum.

Signed-off-by: James Morse <james.morse@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/cpufeature.h |  4 ++--
 arch/arm64/kernel/cpufeature.c      | 14 +++++++++++---
 2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 484fa9425314..f595f7ddd43b 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -42,8 +42,8 @@ struct arm64_cpu_capabilities {
 		};
 
 		struct {	/* Feature register checking */
-			u64 register_mask;
-			u64 register_value;
+			int field_pos;
+			int min_field_value;
 		};
 	};
 };
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 650ffc28bedc..74fd0f74b065 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -23,12 +23,20 @@
 #include <asm/cpufeature.h>
 
 static bool
+feature_matches(u64 reg, const struct arm64_cpu_capabilities *entry)
+{
+	int val = cpuid_feature_extract_field(reg, entry->field_pos);
+
+	return val >= entry->min_field_value;
+}
+
+static bool
 has_id_aa64pfr0_feature(const struct arm64_cpu_capabilities *entry)
 {
 	u64 val;
 
 	val = read_cpuid(id_aa64pfr0_el1);
-	return (val & entry->register_mask) == entry->register_value;
+	return feature_matches(val, entry);
 }
 
 static const struct arm64_cpu_capabilities arm64_features[] = {
@@ -36,8 +44,8 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.desc = "GIC system register CPU interface",
 		.capability = ARM64_HAS_SYSREG_GIC_CPUIF,
 		.matches = has_id_aa64pfr0_feature,
-		.register_mask = (0xf << 24),
-		.register_value = (1 << 24),
+		.field_pos = 24,
+		.min_field_value = 1,
 	},
 	{},
 };
-- 
2.1.4


* [PATCH v3 5/6] arm64: kernel: Add optional CONFIG_ parameter to ALTERNATIVE()
  2015-07-21 12:23 [PATCH v3 0/6] arm64: kernel: Add support for Privileged Access Never James Morse
                   ` (3 preceding siblings ...)
  2015-07-21 12:23 ` [PATCH v3 4/6] arm64: kernel: Add min_field_value and use '>=' for feature detection James Morse
@ 2015-07-21 12:23 ` James Morse
  2015-07-21 12:23 ` [PATCH v3 6/6] arm64: kernel: Add support for Privileged Access Never James Morse
  5 siblings, 0 replies; 19+ messages in thread
From: James Morse @ 2015-07-21 12:23 UTC (permalink / raw)
  To: linux-arm-kernel

Some uses of ALTERNATIVE() may depend on a feature that is disabled at
compile time by a Kconfig option. In this case the unused alternative
instructions waste space, and if the original instruction is a nop, it
wastes time and space.

This patch adds an optional 'config' option to ALTERNATIVE() and
alternative_insn that allows the compiler to remove both the original
and alternative instructions if the config option is not defined.

Signed-off-by: James Morse <james.morse@arm.com>
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm64/include/asm/alternative.h | 28 +++++++++++++++++++++++++---
 1 file changed, 25 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h
index c385a0c4057f..5598182dea28 100644
--- a/arch/arm64/include/asm/alternative.h
+++ b/arch/arm64/include/asm/alternative.h
@@ -3,6 +3,7 @@
 
 #ifndef __ASSEMBLY__
 
+#include <linux/kconfig.h>
 #include <linux/types.h>
 #include <linux/stddef.h>
 #include <linux/stringify.h>
@@ -40,7 +41,8 @@ void free_alternatives_memory(void);
  * be fixed in a binutils release posterior to 2.25.51.0.2 (anything
  * containing commit 4e4d08cf7399b606 or c1baaddf8861).
  */
-#define ALTERNATIVE(oldinstr, newinstr, feature)			\
+#define __ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg_enabled)	\
+	".if "__stringify(cfg_enabled)" == 1\n"				\
 	"661:\n\t"							\
 	oldinstr "\n"							\
 	"662:\n"							\
@@ -53,7 +55,11 @@ void free_alternatives_memory(void);
 	"664:\n\t"							\
 	".popsection\n\t"						\
 	".org	. - (664b-663b) + (662b-661b)\n\t"			\
-	".org	. - (662b-661b) + (664b-663b)\n"
+	".org	. - (662b-661b) + (664b-663b)\n"			\
+	".endif\n"
+
+#define _ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg, ...)	\
+	__ALTERNATIVE_CFG(oldinstr, newinstr, feature, IS_ENABLED(cfg))
 
 #else
 
@@ -65,7 +71,8 @@ void free_alternatives_memory(void);
 	.byte \alt_len
 .endm
 
-.macro alternative_insn insn1 insn2 cap
+.macro alternative_insn insn1, insn2, cap, enable = 1
+	.if \enable
 661:	\insn1
 662:	.pushsection .altinstructions, "a"
 	altinstruction_entry 661b, 663f, \cap, 662b-661b, 664f-663f
@@ -75,8 +82,23 @@ void free_alternatives_memory(void);
 664:	.popsection
 	.org	. - (664b-663b) + (662b-661b)
 	.org	. - (662b-661b) + (664b-663b)
+	.endif
 .endm
 
+#define _ALTERNATIVE_CFG(insn1, insn2, cap, cfg, ...)	\
+	alternative_insn insn1, insn2, cap, IS_ENABLED(cfg)
+
+
 #endif  /*  __ASSEMBLY__  */
 
+/*
+ * Usage: asm(ALTERNATIVE(oldinstr, newinstr, feature));
+ *
+ * Usage: asm(ALTERNATIVE(oldinstr, newinstr, feature, CONFIG_FOO));
+ * N.B. If CONFIG_FOO is specified, but not selected, the whole block
+ *      will be omitted, including oldinstr.
+ */
+#define ALTERNATIVE(oldinstr, newinstr, ...)   \
+	_ALTERNATIVE_CFG(oldinstr, newinstr, __VA_ARGS__, 1)
+
 #endif /* __ASM_ALTERNATIVE_H */
-- 
2.1.4


* [PATCH v3 6/6] arm64: kernel: Add support for Privileged Access Never
  2015-07-21 12:23 [PATCH v3 0/6] arm64: kernel: Add support for Privileged Access Never James Morse
                   ` (4 preceding siblings ...)
  2015-07-21 12:23 ` [PATCH v3 5/6] arm64: kernel: Add optional CONFIG_ parameter to ALTERNATIVE() James Morse
@ 2015-07-21 12:23 ` James Morse
  2015-07-21 12:38   ` Catalin Marinas
  5 siblings, 1 reply; 19+ messages in thread
From: James Morse @ 2015-07-21 12:23 UTC (permalink / raw)
  To: linux-arm-kernel

'Privileged Access Never' is a new ARMv8.1 feature which prevents
privileged code from accessing any virtual address where read or write
access is also permitted at EL0.

This patch enables the PAN feature on all CPUs, and modifies the
{get,put}_user helpers to temporarily permit access.

This will catch kernel bugs where user memory is accessed directly.
'Unprivileged loads and stores' using ldtrb et al are unaffected by PAN.

Signed-off-by: James Morse <james.morse@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/Kconfig                   | 14 ++++++++++++++
 arch/arm64/include/asm/cpufeature.h  |  3 ++-
 arch/arm64/include/asm/futex.h       |  8 ++++++++
 arch/arm64/include/asm/processor.h   |  2 ++
 arch/arm64/include/asm/sysreg.h      |  8 ++++++++
 arch/arm64/include/asm/uaccess.h     | 11 +++++++++++
 arch/arm64/include/uapi/asm/ptrace.h |  1 +
 arch/arm64/kernel/armv8_deprecated.c |  8 +++++++-
 arch/arm64/kernel/cpufeature.c       | 20 ++++++++++++++++++++
 arch/arm64/lib/clear_user.S          |  8 ++++++++
 arch/arm64/lib/copy_from_user.S      |  8 ++++++++
 arch/arm64/lib/copy_in_user.S        |  8 ++++++++
 arch/arm64/lib/copy_to_user.S        |  8 ++++++++
 arch/arm64/mm/fault.c                | 23 +++++++++++++++++++++++
 14 files changed, 128 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 318175f62c24..c53a4b1d5968 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -597,6 +597,20 @@ config FORCE_MAX_ZONEORDER
 	default "14" if (ARM64_64K_PAGES && TRANSPARENT_HUGEPAGE)
 	default "11"
 
+config ARM64_PAN
+	bool "Enable support for Privileged Access Never (PAN)"
+	default y
+	help
+	 Privileged Access Never (PAN; part of the ARMv8.1 Extensions)
+	 prevents the kernel or hypervisor from accessing user-space (EL0)
+	 memory directly.
+
+	 Choosing this option will cause any unprotected (not using
+	 copy_to_user et al) memory access to fail with a permission fault.
+
+	 The feature is detected at runtime, and will remain as a 'nop'
+	 instruction if the cpu does not implement the feature.
+
 menuconfig ARMV8_DEPRECATED
 	bool "Emulate deprecated/obsolete ARMv8 instructions"
 	depends on COMPAT
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index f595f7ddd43b..d71140b76773 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -25,8 +25,9 @@
 #define ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE	1
 #define ARM64_WORKAROUND_845719			2
 #define ARM64_HAS_SYSREG_GIC_CPUIF		3
+#define ARM64_HAS_PAN				4
 
-#define ARM64_NCAPS				4
+#define ARM64_NCAPS				5
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
index 74069b3bd919..775e85b9d1f2 100644
--- a/arch/arm64/include/asm/futex.h
+++ b/arch/arm64/include/asm/futex.h
@@ -20,10 +20,16 @@
 
 #include <linux/futex.h>
 #include <linux/uaccess.h>
+
+#include <asm/alternative.h>
+#include <asm/cpufeature.h>
 #include <asm/errno.h>
+#include <asm/sysreg.h>
 
 #define __futex_atomic_op(insn, ret, oldval, uaddr, tmp, oparg)		\
 	asm volatile(							\
+	ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,		\
+		    CONFIG_ARM64_PAN)					\
 "1:	ldxr	%w1, %2\n"						\
 	insn "\n"							\
 "2:	stlxr	%w3, %w0, %2\n"						\
@@ -39,6 +45,8 @@
 "	.align	3\n"							\
 "	.quad	1b, 4b, 2b, 4b\n"					\
 "	.popsection\n"							\
+	ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,		\
+		    CONFIG_ARM64_PAN)					\
 	: "=&r" (ret), "=&r" (oldval), "+Q" (*uaddr), "=&r" (tmp)	\
 	: "r" (oparg), "Ir" (-EFAULT)					\
 	: "memory")
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index e4c893e54f01..98f32355dc97 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -186,4 +186,6 @@ static inline void spin_lock_prefetch(const void *x)
 
 #endif
 
+void cpu_enable_pan(void);
+
 #endif /* __ASM_PROCESSOR_H */
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 56391fbae1e1..4df5012cfae4 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -20,12 +20,20 @@
 #ifndef __ASM_SYSREG_H
 #define __ASM_SYSREG_H
 
+#include <asm/opcodes.h>
+
 #define SCTLR_EL1_CP15BEN	(0x1 << 5)
 #define SCTLR_EL1_SED		(0x1 << 8)
 
 #define sys_reg(op0, op1, crn, crm, op2) \
 	((((op0)-2)<<19)|((op1)<<16)|((crn)<<12)|((crm)<<8)|((op2)<<5))
 
+#define REG_PSTATE_PAN_IMM                     sys_reg(2, 0, 4, 0, 4)
+#define SCTLR_EL1_SPAN                         (1 << 23)
+
+#define SET_PSTATE_PAN(x) __inst_arm(0xd5000000 | REG_PSTATE_PAN_IMM |\
+				     (!!x)<<8 | 0x1f)
+
 #ifdef __ASSEMBLY__
 
 	.irp	num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 07e1ba449bf1..b2ede967fe7d 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -24,7 +24,10 @@
 #include <linux/string.h>
 #include <linux/thread_info.h>
 
+#include <asm/alternative.h>
+#include <asm/cpufeature.h>
 #include <asm/ptrace.h>
+#include <asm/sysreg.h>
 #include <asm/errno.h>
 #include <asm/memory.h>
 #include <asm/compiler.h>
@@ -131,6 +134,8 @@ static inline void set_fs(mm_segment_t fs)
 do {									\
 	unsigned long __gu_val;						\
 	__chk_user_ptr(ptr);						\
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,	\
+			CONFIG_ARM64_PAN));				\
 	switch (sizeof(*(ptr))) {					\
 	case 1:								\
 		__get_user_asm("ldrb", "%w", __gu_val, (ptr), (err));	\
@@ -148,6 +153,8 @@ do {									\
 		BUILD_BUG();						\
 	}								\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,	\
+			CONFIG_ARM64_PAN));				\
 } while (0)
 
 #define __get_user(x, ptr)						\
@@ -194,6 +201,8 @@ do {									\
 do {									\
 	__typeof__(*(ptr)) __pu_val = (x);				\
 	__chk_user_ptr(ptr);						\
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,	\
+			CONFIG_ARM64_PAN));				\
 	switch (sizeof(*(ptr))) {					\
 	case 1:								\
 		__put_user_asm("strb", "%w", __pu_val, (ptr), (err));	\
@@ -210,6 +219,8 @@ do {									\
 	default:							\
 		BUILD_BUG();						\
 	}								\
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,	\
+			CONFIG_ARM64_PAN));				\
 } while (0)
 
 #define __put_user(x, ptr)						\
diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h
index 6913643bbe54..208db3df135a 100644
--- a/arch/arm64/include/uapi/asm/ptrace.h
+++ b/arch/arm64/include/uapi/asm/ptrace.h
@@ -44,6 +44,7 @@
 #define PSR_I_BIT	0x00000080
 #define PSR_A_BIT	0x00000100
 #define PSR_D_BIT	0x00000200
+#define PSR_PAN_BIT	0x00400000
 #define PSR_Q_BIT	0x08000000
 #define PSR_V_BIT	0x10000000
 #define PSR_C_BIT	0x20000000
diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
index 78d56bff91fd..bcee7abac68e 100644
--- a/arch/arm64/kernel/armv8_deprecated.c
+++ b/arch/arm64/kernel/armv8_deprecated.c
@@ -14,6 +14,8 @@
 #include <linux/slab.h>
 #include <linux/sysctl.h>
 
+#include <asm/alternative.h>
+#include <asm/cpufeature.h>
 #include <asm/insn.h>
 #include <asm/opcodes.h>
 #include <asm/sysreg.h>
@@ -280,6 +282,8 @@ static void register_insn_emulation_sysctl(struct ctl_table *table)
  */
 #define __user_swpX_asm(data, addr, res, temp, B)		\
 	__asm__ __volatile__(					\
+	ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,	\
+		    CONFIG_ARM64_PAN)				\
 	"	mov		%w2, %w1\n"			\
 	"0:	ldxr"B"		%w1, [%3]\n"			\
 	"1:	stxr"B"		%w0, %w2, [%3]\n"		\
@@ -295,7 +299,9 @@ static void register_insn_emulation_sysctl(struct ctl_table *table)
 	"	.align		3\n"				\
 	"	.quad		0b, 3b\n"			\
 	"	.quad		1b, 3b\n"			\
-	"	.popsection"					\
+	"	.popsection\n"					\
+	ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,	\
+		CONFIG_ARM64_PAN)				\
 	: "=&r" (res), "+r" (data), "=&r" (temp)		\
 	: "r" (addr), "i" (-EAGAIN), "i" (-EFAULT)		\
 	: "memory")
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 74fd0f74b065..978fa169d3c3 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -21,6 +21,7 @@
 #include <linux/types.h>
 #include <asm/cpu.h>
 #include <asm/cpufeature.h>
+#include <asm/processor.h>
 
 static bool
 feature_matches(u64 reg, const struct arm64_cpu_capabilities *entry)
@@ -39,6 +40,15 @@ has_id_aa64pfr0_feature(const struct arm64_cpu_capabilities *entry)
 	return feature_matches(val, entry);
 }
 
+static bool __maybe_unused
+has_id_aa64mmfr1_feature(const struct arm64_cpu_capabilities *entry)
+{
+	u64 val;
+
+	val = read_cpuid(id_aa64mmfr1_el1);
+	return feature_matches(val, entry);
+}
+
 static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "GIC system register CPU interface",
@@ -47,6 +57,16 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = 24,
 		.min_field_value = 1,
 	},
+#ifdef CONFIG_ARM64_PAN
+	{
+		.desc = "Privileged Access Never",
+		.capability = ARM64_HAS_PAN,
+		.matches = has_id_aa64mmfr1_feature,
+		.field_pos = 20,
+		.min_field_value = 1,
+		.enable = cpu_enable_pan,
+	},
+#endif /* CONFIG_ARM64_PAN */
 	{},
 };
 
diff --git a/arch/arm64/lib/clear_user.S b/arch/arm64/lib/clear_user.S
index c17967fdf5f6..a9723c71c52b 100644
--- a/arch/arm64/lib/clear_user.S
+++ b/arch/arm64/lib/clear_user.S
@@ -16,7 +16,11 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 #include <linux/linkage.h>
+
+#include <asm/alternative.h>
 #include <asm/assembler.h>
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
 
 	.text
 
@@ -29,6 +33,8 @@
  * Alignment fixed up by hardware.
  */
 ENTRY(__clear_user)
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	mov	x2, x1			// save the size for fixup return
 	subs	x1, x1, #8
 	b.mi	2f
@@ -48,6 +54,8 @@ USER(9f, strh	wzr, [x0], #2	)
 	b.mi	5f
 USER(9f, strb	wzr, [x0]	)
 5:	mov	x0, #0
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	ret
 ENDPROC(__clear_user)
 
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 5e27add9d362..882c1544a73e 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -15,7 +15,11 @@
  */
 
 #include <linux/linkage.h>
+
+#include <asm/alternative.h>
 #include <asm/assembler.h>
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
 
 /*
  * Copy from user space to a kernel buffer (alignment handled by the hardware)
@@ -28,6 +32,8 @@
  *	x0 - bytes not copied
  */
 ENTRY(__copy_from_user)
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	add	x4, x1, x2			// upper user buffer boundary
 	subs	x2, x2, #8
 	b.mi	2f
@@ -51,6 +57,8 @@ USER(9f, ldrh	w3, [x1], #2	)
 USER(9f, ldrb	w3, [x1]	)
 	strb	w3, [x0]
 5:	mov	x0, #0
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	ret
 ENDPROC(__copy_from_user)
 
diff --git a/arch/arm64/lib/copy_in_user.S b/arch/arm64/lib/copy_in_user.S
index 84b6c9bb9b93..97063c4cba75 100644
--- a/arch/arm64/lib/copy_in_user.S
+++ b/arch/arm64/lib/copy_in_user.S
@@ -17,7 +17,11 @@
  */
 
 #include <linux/linkage.h>
+
+#include <asm/alternative.h>
 #include <asm/assembler.h>
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
 
 /*
  * Copy from user space to user space (alignment handled by the hardware)
@@ -30,6 +34,8 @@
  *	x0 - bytes not copied
  */
 ENTRY(__copy_in_user)
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	add	x4, x0, x2			// upper user buffer boundary
 	subs	x2, x2, #8
 	b.mi	2f
@@ -53,6 +59,8 @@ USER(9f, strh	w3, [x0], #2	)
 USER(9f, ldrb	w3, [x1]	)
 USER(9f, strb	w3, [x0]	)
 5:	mov	x0, #0
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	ret
 ENDPROC(__copy_in_user)
 
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index a0aeeb9b7a28..c782aaf5494d 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -15,7 +15,11 @@
  */
 
 #include <linux/linkage.h>
+
+#include <asm/alternative.h>
 #include <asm/assembler.h>
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
 
 /*
  * Copy to user space from a kernel buffer (alignment handled by the hardware)
@@ -28,6 +32,8 @@
  *	x0 - bytes not copied
  */
 ENTRY(__copy_to_user)
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	add	x4, x0, x2			// upper user buffer boundary
 	subs	x2, x2, #8
 	b.mi	2f
@@ -51,6 +57,8 @@ USER(9f, strh	w3, [x0], #2	)
 	ldrb	w3, [x1]
 USER(9f, strb	w3, [x0]	)
 5:	mov	x0, #0
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	ret
 ENDPROC(__copy_to_user)
 
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 94d98cd1aad8..149a36ea9673 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -30,9 +30,11 @@
 #include <linux/highmem.h>
 #include <linux/perf_event.h>
 
+#include <asm/cpufeature.h>
 #include <asm/exception.h>
 #include <asm/debug-monitors.h>
 #include <asm/esr.h>
+#include <asm/sysreg.h>
 #include <asm/system_misc.h>
 #include <asm/pgtable.h>
 #include <asm/tlbflush.h>
@@ -147,6 +149,13 @@ static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *re
 		__do_kernel_fault(mm, addr, esr, regs);
 }
 
+static bool pan_enabled(struct pt_regs *regs)
+{
+	if (IS_ENABLED(CONFIG_ARM64_PAN))
+		return ((regs->pstate & PSR_PAN_BIT) != 0);
+	return false;
+}
+
 #define VM_FAULT_BADMAP		0x010000
 #define VM_FAULT_BADACCESS	0x020000
 
@@ -224,6 +233,13 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 	}
 
 	/*
+	 * PAN bit set implies the fault happened in kernel space, but not
+	 * in the arch's user access functions.
+	 */
+	if (pan_enabled(regs))
+		goto no_context;
+
+	/*
 	 * As per x86, we may deadlock here. However, since the kernel only
 	 * validly references user space from well defined areas of the code,
 	 * we can bug out early if this is from code which shouldn't.
@@ -536,3 +552,10 @@ asmlinkage int __exception do_debug_exception(unsigned long addr,
 
 	return 0;
 }
+
+#ifdef CONFIG_ARM64_PAN
+void cpu_enable_pan(void)
+{
+	config_sctlr_el1(SCTLR_EL1_SPAN, 0);
+}
+#endif /* CONFIG_ARM64_PAN */
-- 
2.1.4


* [PATCH v3 1/6] arm64: kernel: Add cpuid_feature_extract_field() for 4bit sign extension
  2015-07-21 12:23 ` [PATCH v3 1/6] arm64: kernel: Add cpuid_feature_extract_field() for 4bit sign extension James Morse
@ 2015-07-21 12:32   ` Catalin Marinas
  0 siblings, 0 replies; 19+ messages in thread
From: Catalin Marinas @ 2015-07-21 12:32 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Jul 21, 2015 at 01:23:26PM +0100, James Morse wrote:
> Based on arch/arm/include/asm/cputype.h, this function does the
> shifting and sign extension necessary when accessing cpu feature fields.
> 
> Signed-off-by: James Morse <james.morse@arm.com>
> Suggested-by: Russell King <linux@arm.linux.org.uk>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>

Acked-by: Catalin Marinas <catalin.marinas@arm.com>


* [PATCH v3 4/6] arm64: kernel: Add min_field_value and use '>=' for feature detection
  2015-07-21 12:23 ` [PATCH v3 4/6] arm64: kernel: Add min_field_value and use '>=' for feature detection James Morse
@ 2015-07-21 12:33   ` Catalin Marinas
  0 siblings, 0 replies; 19+ messages in thread
From: Catalin Marinas @ 2015-07-21 12:33 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Jul 21, 2015 at 01:23:29PM +0100, James Morse wrote:
> When a new cpu feature is available, the cpu feature bits will have some
> initial value, which is incremented when the feature is updated.
> This patch changes 'register_value' to be 'min_field_value', and checks
> the feature bits value (interpreted as a signed int) is greater than this
> minimum.
> 
> Signed-off-by: James Morse <james.morse@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>

Acked-by: Catalin Marinas <catalin.marinas@arm.com>


* [PATCH v3 6/6] arm64: kernel: Add support for Privileged Access Never
  2015-07-21 12:23 ` [PATCH v3 6/6] arm64: kernel: Add support for Privileged Access Never James Morse
@ 2015-07-21 12:38   ` Catalin Marinas
  2015-07-22 17:01     ` Will Deacon
  2015-07-23 12:00     ` [PATCH v3 6/6] " Will Deacon
  0 siblings, 2 replies; 19+ messages in thread
From: Catalin Marinas @ 2015-07-21 12:38 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Jul 21, 2015 at 01:23:31PM +0100, James Morse wrote:
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 94d98cd1aad8..149a36ea9673 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -30,9 +30,11 @@
>  #include <linux/highmem.h>
>  #include <linux/perf_event.h>
>  
> +#include <asm/cpufeature.h>
>  #include <asm/exception.h>
>  #include <asm/debug-monitors.h>
>  #include <asm/esr.h>
> +#include <asm/sysreg.h>
>  #include <asm/system_misc.h>
>  #include <asm/pgtable.h>
>  #include <asm/tlbflush.h>
> @@ -147,6 +149,13 @@ static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *re
>  		__do_kernel_fault(mm, addr, esr, regs);
>  }
>  
> +static bool pan_enabled(struct pt_regs *regs)
> +{
> +	if (IS_ENABLED(CONFIG_ARM64_PAN))
> +		return ((regs->pstate & PSR_PAN_BIT) != 0);

Nitpick: no brackets needed for return.

Otherwise the patch looks fine to me:

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


* [PATCH v3 6/6] arm64: kernel: Add support for Privileged Access Never
  2015-07-21 12:38   ` Catalin Marinas
@ 2015-07-22 17:01     ` Will Deacon
  2015-07-22 18:04       ` James Morse
  2015-07-22 18:05       ` [PATCH v4] " James Morse
  2015-07-23 12:00     ` [PATCH v3 6/6] " Will Deacon
  1 sibling, 2 replies; 19+ messages in thread
From: Will Deacon @ 2015-07-22 17:01 UTC (permalink / raw)
  To: linux-arm-kernel

Hi James,

On Tue, Jul 21, 2015 at 01:38:31PM +0100, Catalin Marinas wrote:
> On Tue, Jul 21, 2015 at 01:23:31PM +0100, James Morse wrote:
> > diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> > index 94d98cd1aad8..149a36ea9673 100644
> > --- a/arch/arm64/mm/fault.c
> > +++ b/arch/arm64/mm/fault.c
> > @@ -30,9 +30,11 @@
> >  #include <linux/highmem.h>
> >  #include <linux/perf_event.h>
> >  
> > +#include <asm/cpufeature.h>
> >  #include <asm/exception.h>
> >  #include <asm/debug-monitors.h>
> >  #include <asm/esr.h>
> > +#include <asm/sysreg.h>
> >  #include <asm/system_misc.h>
> >  #include <asm/pgtable.h>
> >  #include <asm/tlbflush.h>
> > @@ -147,6 +149,13 @@ static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *re
> >  		__do_kernel_fault(mm, addr, esr, regs);
> >  }
> >  
> > +static bool pan_enabled(struct pt_regs *regs)
> > +{
> > +	if (IS_ENABLED(CONFIG_ARM64_PAN))
> > +		return ((regs->pstate & PSR_PAN_BIT) != 0);
> 
> Nitpick: no brackets needed for return.
> 
> Otherwise the patch looks fine to me:
> 
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

I've applied your series with the exception of this last one, as it
conflicts with some other patches I have queued for 4.3. Please can you
rebase this against the arm64 "devel" branch? (usually it would be
for-next/core, but I'm holding off stabilising until -rc4 since allmodconfig
build is broken atm).

Thanks,

Will


* [PATCH v3 6/6] arm64: kernel: Add support for Privileged Access Never
  2015-07-22 17:01     ` Will Deacon
@ 2015-07-22 18:04       ` James Morse
  2015-07-22 18:14         ` Will Deacon
  2015-07-22 18:05       ` [PATCH v4] " James Morse
  1 sibling, 1 reply; 19+ messages in thread
From: James Morse @ 2015-07-22 18:04 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Will,

On 22/07/15 18:01, Will Deacon wrote:
> I've applied your series with the exception of this last one, as it
> conflicts with some other patches I have queued for 4.3. Please can you
> rebase this against the arm64 "devel" branch? (usually it would be
> for-next/core, but I'm holding off stabilising until -rc4 since allmodconfig
> build is broken atm).

The version of patch 5 "arm64: kernel: Add optional CONFIG_ parameter to
ALTERNATIVE()" in your tree has:

> [will: removed unused asm macro changes for now to avoid conflicts]

Those were used in arch/arm64/lib/clear_user.S and friends.
I shall remove the 'CONFIG_ARM64_PAN' from those four asm files - it can be
tidied up later.


Thanks,

James


* [PATCH v4] arm64: kernel: Add support for Privileged Access Never
  2015-07-22 17:01     ` Will Deacon
  2015-07-22 18:04       ` James Morse
@ 2015-07-22 18:05       ` James Morse
  2015-07-23 13:07         ` Will Deacon
  1 sibling, 1 reply; 19+ messages in thread
From: James Morse @ 2015-07-22 18:05 UTC (permalink / raw)
  To: linux-arm-kernel

'Privileged Access Never' is a new ARMv8.1 feature which prevents
privileged code from accessing any virtual address where read or write
access is also permitted at EL0.

This patch enables the PAN feature on all CPUs, and modifies {get,put}_user
helpers temporarily to permit access.

This will catch kernel bugs where user memory is accessed directly.
'Unprivileged loads and stores' using ldtrb et al are unaffected by PAN.

Signed-off-by: James Morse <james.morse@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
---
This version is rebased against the arm64 'devel' branch, somewhere
after Suzuki's "arm64: Generalise msr_s/mrs_s operations" patch.

 arch/arm64/Kconfig                   | 14 ++++++++++++++
 arch/arm64/include/asm/cpufeature.h  |  3 ++-
 arch/arm64/include/asm/futex.h       |  8 ++++++++
 arch/arm64/include/asm/processor.h   |  2 ++
 arch/arm64/include/asm/sysreg.h      |  8 ++++++++
 arch/arm64/include/asm/uaccess.h     | 11 +++++++++++
 arch/arm64/include/uapi/asm/ptrace.h |  1 +
 arch/arm64/kernel/armv8_deprecated.c |  8 +++++++-
 arch/arm64/kernel/cpufeature.c       | 20 ++++++++++++++++++++
 arch/arm64/lib/clear_user.S          |  6 ++++++
 arch/arm64/lib/copy_from_user.S      |  6 ++++++
 arch/arm64/lib/copy_in_user.S        |  6 ++++++
 arch/arm64/lib/copy_to_user.S        |  6 ++++++
 arch/arm64/mm/fault.c                | 23 +++++++++++++++++++++++
 14 files changed, 120 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index de8dee60fd82..c2bd79a02a6c 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -596,6 +596,20 @@ config FORCE_MAX_ZONEORDER
 	default "14" if (ARM64_64K_PAGES && TRANSPARENT_HUGEPAGE)
 	default "11"
 
+config ARM64_PAN
+	bool "Enable support for Privileged Access Never (PAN)"
+	default y
+	help
+	 Privileged Access Never (PAN; part of the ARMv8.1 Extensions)
+	 prevents the kernel or hypervisor from accessing user-space (EL0)
+	 memory directly.
+
+	 Choosing this option will cause any unprotected (not using
+	 copy_to_user et al) memory access to fail with a permission fault.
+
+	 The feature is detected at runtime, and will remain as a 'nop'
+	 instruction if the cpu does not implement the feature.
+
 menuconfig ARMV8_DEPRECATED
 	bool "Emulate deprecated/obsolete ARMv8 instructions"
 	depends on COMPAT
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index f595f7ddd43b..d71140b76773 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -25,8 +25,9 @@
 #define ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE	1
 #define ARM64_WORKAROUND_845719			2
 #define ARM64_HAS_SYSREG_GIC_CPUIF		3
+#define ARM64_HAS_PAN				4
 
-#define ARM64_NCAPS				4
+#define ARM64_NCAPS				5
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
index 74069b3bd919..775e85b9d1f2 100644
--- a/arch/arm64/include/asm/futex.h
+++ b/arch/arm64/include/asm/futex.h
@@ -20,10 +20,16 @@
 
 #include <linux/futex.h>
 #include <linux/uaccess.h>
+
+#include <asm/alternative.h>
+#include <asm/cpufeature.h>
 #include <asm/errno.h>
+#include <asm/sysreg.h>
 
 #define __futex_atomic_op(insn, ret, oldval, uaddr, tmp, oparg)		\
 	asm volatile(							\
+	ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,		\
+		    CONFIG_ARM64_PAN)					\
 "1:	ldxr	%w1, %2\n"						\
 	insn "\n"							\
 "2:	stlxr	%w3, %w0, %2\n"						\
@@ -39,6 +45,8 @@
 "	.align	3\n"							\
 "	.quad	1b, 4b, 2b, 4b\n"					\
 "	.popsection\n"							\
+	ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,		\
+		    CONFIG_ARM64_PAN)					\
 	: "=&r" (ret), "=&r" (oldval), "+Q" (*uaddr), "=&r" (tmp)	\
 	: "r" (oparg), "Ir" (-EFAULT)					\
 	: "memory")
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index e4c893e54f01..98f32355dc97 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -186,4 +186,6 @@ static inline void spin_lock_prefetch(const void *x)
 
 #endif
 
+void cpu_enable_pan(void);
+
 #endif /* __ASM_PROCESSOR_H */
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 5295bcbcb374..a7f3d4b2514d 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -20,6 +20,8 @@
 #ifndef __ASM_SYSREG_H
 #define __ASM_SYSREG_H
 
+#include <asm/opcodes.h>
+
 #define SCTLR_EL1_CP15BEN	(0x1 << 5)
 #define SCTLR_EL1_SED		(0x1 << 8)
 
@@ -36,6 +38,12 @@
 #define sys_reg(op0, op1, crn, crm, op2) \
 	((((op0)&3)<<19)|((op1)<<16)|((crn)<<12)|((crm)<<8)|((op2)<<5))
 
+#define REG_PSTATE_PAN_IMM                     sys_reg(0, 0, 4, 0, 4)
+#define SCTLR_EL1_SPAN                         (1 << 23)
+
+#define SET_PSTATE_PAN(x) __inst_arm(0xd5000000 | REG_PSTATE_PAN_IMM |\
+				     (!!x)<<8 | 0x1f)
+
 #ifdef __ASSEMBLY__
 
 	.irp	num,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 07e1ba449bf1..b2ede967fe7d 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -24,7 +24,10 @@
 #include <linux/string.h>
 #include <linux/thread_info.h>
 
+#include <asm/alternative.h>
+#include <asm/cpufeature.h>
 #include <asm/ptrace.h>
+#include <asm/sysreg.h>
 #include <asm/errno.h>
 #include <asm/memory.h>
 #include <asm/compiler.h>
@@ -131,6 +134,8 @@ static inline void set_fs(mm_segment_t fs)
 do {									\
 	unsigned long __gu_val;						\
 	__chk_user_ptr(ptr);						\
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,	\
+			CONFIG_ARM64_PAN));				\
 	switch (sizeof(*(ptr))) {					\
 	case 1:								\
 		__get_user_asm("ldrb", "%w", __gu_val, (ptr), (err));	\
@@ -148,6 +153,8 @@ do {									\
 		BUILD_BUG();						\
 	}								\
 	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,	\
+			CONFIG_ARM64_PAN));				\
 } while (0)
 
 #define __get_user(x, ptr)						\
@@ -194,6 +201,8 @@ do {									\
 do {									\
 	__typeof__(*(ptr)) __pu_val = (x);				\
 	__chk_user_ptr(ptr);						\
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,	\
+			CONFIG_ARM64_PAN));				\
 	switch (sizeof(*(ptr))) {					\
 	case 1:								\
 		__put_user_asm("strb", "%w", __pu_val, (ptr), (err));	\
@@ -210,6 +219,8 @@ do {									\
 	default:							\
 		BUILD_BUG();						\
 	}								\
+	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,	\
+			CONFIG_ARM64_PAN));				\
 } while (0)
 
 #define __put_user(x, ptr)						\
diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h
index 6913643bbe54..208db3df135a 100644
--- a/arch/arm64/include/uapi/asm/ptrace.h
+++ b/arch/arm64/include/uapi/asm/ptrace.h
@@ -44,6 +44,7 @@
 #define PSR_I_BIT	0x00000080
 #define PSR_A_BIT	0x00000100
 #define PSR_D_BIT	0x00000200
+#define PSR_PAN_BIT	0x00400000
 #define PSR_Q_BIT	0x08000000
 #define PSR_V_BIT	0x10000000
 #define PSR_C_BIT	0x20000000
diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
index 78d56bff91fd..bcee7abac68e 100644
--- a/arch/arm64/kernel/armv8_deprecated.c
+++ b/arch/arm64/kernel/armv8_deprecated.c
@@ -14,6 +14,8 @@
 #include <linux/slab.h>
 #include <linux/sysctl.h>
 
+#include <asm/alternative.h>
+#include <asm/cpufeature.h>
 #include <asm/insn.h>
 #include <asm/opcodes.h>
 #include <asm/sysreg.h>
@@ -280,6 +282,8 @@ static void register_insn_emulation_sysctl(struct ctl_table *table)
  */
 #define __user_swpX_asm(data, addr, res, temp, B)		\
 	__asm__ __volatile__(					\
+	ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,	\
+		    CONFIG_ARM64_PAN)				\
 	"	mov		%w2, %w1\n"			\
 	"0:	ldxr"B"		%w1, [%3]\n"			\
 	"1:	stxr"B"		%w0, %w2, [%3]\n"		\
@@ -295,7 +299,9 @@ static void register_insn_emulation_sysctl(struct ctl_table *table)
 	"	.align		3\n"				\
 	"	.quad		0b, 3b\n"			\
 	"	.quad		1b, 3b\n"			\
-	"	.popsection"					\
+	"	.popsection\n"					\
+	ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,	\
+		CONFIG_ARM64_PAN)				\
 	: "=&r" (res), "+r" (data), "=&r" (temp)		\
 	: "r" (addr), "i" (-EAGAIN), "i" (-EFAULT)		\
 	: "memory")
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 74fd0f74b065..978fa169d3c3 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -21,6 +21,7 @@
 #include <linux/types.h>
 #include <asm/cpu.h>
 #include <asm/cpufeature.h>
+#include <asm/processor.h>
 
 static bool
 feature_matches(u64 reg, const struct arm64_cpu_capabilities *entry)
@@ -39,6 +40,15 @@ has_id_aa64pfr0_feature(const struct arm64_cpu_capabilities *entry)
 	return feature_matches(val, entry);
 }
 
+static bool __maybe_unused
+has_id_aa64mmfr1_feature(const struct arm64_cpu_capabilities *entry)
+{
+	u64 val;
+
+	val = read_cpuid(id_aa64mmfr1_el1);
+	return feature_matches(val, entry);
+}
+
 static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.desc = "GIC system register CPU interface",
@@ -47,6 +57,16 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.field_pos = 24,
 		.min_field_value = 1,
 	},
+#ifdef CONFIG_ARM64_PAN
+	{
+		.desc = "Privileged Access Never",
+		.capability = ARM64_HAS_PAN,
+		.matches = has_id_aa64mmfr1_feature,
+		.field_pos = 20,
+		.min_field_value = 1,
+		.enable = cpu_enable_pan,
+	},
+#endif /* CONFIG_ARM64_PAN */
 	{},
 };
 
diff --git a/arch/arm64/lib/clear_user.S b/arch/arm64/lib/clear_user.S
index c17967fdf5f6..96ed5cfecb7f 100644
--- a/arch/arm64/lib/clear_user.S
+++ b/arch/arm64/lib/clear_user.S
@@ -16,7 +16,11 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 #include <linux/linkage.h>
+
+#include <asm/alternative.h>
 #include <asm/assembler.h>
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
 
 	.text
 
@@ -29,6 +33,7 @@
  * Alignment fixed up by hardware.
  */
 ENTRY(__clear_user)
+alternative_insn "nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN
 	mov	x2, x1			// save the size for fixup return
 	subs	x1, x1, #8
 	b.mi	2f
@@ -48,6 +53,7 @@ USER(9f, strh	wzr, [x0], #2	)
 	b.mi	5f
 USER(9f, strb	wzr, [x0]	)
 5:	mov	x0, #0
+alternative_insn "nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN
 	ret
 ENDPROC(__clear_user)
 
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 47c3fa5ae4ae..e73819dd47d2 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -15,7 +15,11 @@
  */
 
 #include <linux/linkage.h>
+
+#include <asm/alternative.h>
 #include <asm/assembler.h>
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
 
 /*
  * Copy from user space to a kernel buffer (alignment handled by the hardware)
@@ -28,6 +32,7 @@
  *	x0 - bytes not copied
  */
 ENTRY(__copy_from_user)
+alternative_insn "nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN
 	add	x5, x1, x2			// upper user buffer boundary
 	subs	x2, x2, #16
 	b.mi	1f
@@ -56,6 +61,7 @@ USER(9f, ldrh	w3, [x1], #2	)
 USER(9f, ldrb	w3, [x1]	)
 	strb	w3, [x0]
 5:	mov	x0, #0
+alternative_insn "nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN
 	ret
 ENDPROC(__copy_from_user)
 
diff --git a/arch/arm64/lib/copy_in_user.S b/arch/arm64/lib/copy_in_user.S
index 436bcc5d77b5..9e6376a3e247 100644
--- a/arch/arm64/lib/copy_in_user.S
+++ b/arch/arm64/lib/copy_in_user.S
@@ -17,7 +17,11 @@
  */
 
 #include <linux/linkage.h>
+
+#include <asm/alternative.h>
 #include <asm/assembler.h>
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
 
 /*
  * Copy from user space to user space (alignment handled by the hardware)
@@ -30,6 +34,7 @@
  *	x0 - bytes not copied
  */
 ENTRY(__copy_in_user)
+alternative_insn "nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN
 	add	x5, x0, x2			// upper user buffer boundary
 	subs	x2, x2, #16
 	b.mi	1f
@@ -58,6 +63,7 @@ USER(9f, strh	w3, [x0], #2	)
 USER(9f, ldrb	w3, [x1]	)
 USER(9f, strb	w3, [x0]	)
 5:	mov	x0, #0
+alternative_insn "nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN
 	ret
 ENDPROC(__copy_in_user)
 
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index f5e1f526f408..936199faba3f 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -15,7 +15,11 @@
  */
 
 #include <linux/linkage.h>
+
+#include <asm/alternative.h>
 #include <asm/assembler.h>
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
 
 /*
  * Copy to user space from a kernel buffer (alignment handled by the hardware)
@@ -28,6 +32,7 @@
  *	x0 - bytes not copied
  */
 ENTRY(__copy_to_user)
+alternative_insn "nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN
 	add	x5, x0, x2			// upper user buffer boundary
 	subs	x2, x2, #16
 	b.mi	1f
@@ -56,6 +61,7 @@ USER(9f, strh	w3, [x0], #2	)
 	ldrb	w3, [x1]
 USER(9f, strb	w3, [x0]	)
 5:	mov	x0, #0
+alternative_insn "nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN
 	ret
 ENDPROC(__copy_to_user)
 
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 94d98cd1aad8..5fe96ef31e0e 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -30,9 +30,11 @@
 #include <linux/highmem.h>
 #include <linux/perf_event.h>
 
+#include <asm/cpufeature.h>
 #include <asm/exception.h>
 #include <asm/debug-monitors.h>
 #include <asm/esr.h>
+#include <asm/sysreg.h>
 #include <asm/system_misc.h>
 #include <asm/pgtable.h>
 #include <asm/tlbflush.h>
@@ -147,6 +149,13 @@ static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *re
 		__do_kernel_fault(mm, addr, esr, regs);
 }
 
+static bool pan_enabled(struct pt_regs *regs)
+{
+	if (IS_ENABLED(CONFIG_ARM64_PAN))
+		return (regs->pstate & PSR_PAN_BIT) != 0;
+	return false;
+}
+
 #define VM_FAULT_BADMAP		0x010000
 #define VM_FAULT_BADACCESS	0x020000
 
@@ -224,6 +233,13 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 	}
 
 	/*
+	 * PAN bit set implies the fault happened in kernel space, but not
+	 * in the arch's user access functions.
+	 */
+	if (pan_enabled(regs))
+		goto no_context;
+
+	/*
 	 * As per x86, we may deadlock here. However, since the kernel only
 	 * validly references user space from well defined areas of the code,
 	 * we can bug out early if this is from code which shouldn't.
@@ -536,3 +552,10 @@ asmlinkage int __exception do_debug_exception(unsigned long addr,
 
 	return 0;
 }
+
+#ifdef CONFIG_ARM64_PAN
+void cpu_enable_pan(void)
+{
+	config_sctlr_el1(SCTLR_EL1_SPAN, 0);
+}
+#endif /* CONFIG_ARM64_PAN */
-- 
2.1.4


* [PATCH v3 6/6] arm64: kernel: Add support for Privileged Access Never
  2015-07-22 18:04       ` James Morse
@ 2015-07-22 18:14         ` Will Deacon
  2015-07-23  7:58           ` James Morse
  0 siblings, 1 reply; 19+ messages in thread
From: Will Deacon @ 2015-07-22 18:14 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jul 22, 2015 at 07:04:54PM +0100, James Morse wrote:
> On 22/07/15 18:01, Will Deacon wrote:
> > I've applied your series with the exception of this last one, as it
> > conflicts with some other patches I have queued for 4.3. Please can you
> > rebase this against the arm64 "devel" branch? (usually it would be
> > for-next/core, but I'm holding off stabilising until -rc4 since allmodconfig
> > build is broken atm).
> 
> The version of patch 5 "arm64: kernel: Add optional CONFIG_ parameter to
> ALTERNATIVE()" in your tree has:
> 
> > [will: removed unused asm macro changes for now to avoid conflicts]
> 
> Those were used in arch/arm64/lib/clear_user.S and friends.
> I shall remove the 'CONFIG_ARM64_PAN' from those four asm files - it can be
> tidied up later.

Ah, damn, I didn't realise you'd made the ALTERNATIVE macro work for both
C and asm. The reason I changed it is because I don't know what's best to
do with the new alternative_if_not macros -- having an enabled argument
for the _else and _endif variants is really odd.

I think the options are:

  (1) Just spit out a NOP (your current approach)
  (2) Use #ifdefs at the caller
  (3) Only have the option for alternative_insn
  (4) Add the option to all the alternative_ macros

What do you reckon?

Will


* [PATCH v3 6/6] arm64: kernel: Add support for Privileged Access Never
  2015-07-22 18:14         ` Will Deacon
@ 2015-07-23  7:58           ` James Morse
  0 siblings, 0 replies; 19+ messages in thread
From: James Morse @ 2015-07-23  7:58 UTC (permalink / raw)
  To: linux-arm-kernel

On 22/07/15 19:14, Will Deacon wrote:
> On Wed, Jul 22, 2015 at 07:04:54PM +0100, James Morse wrote:
>> On 22/07/15 18:01, Will Deacon wrote:
>>> I've applied your series with the exception of this last one, as it
>>> conflicts with some other patches I have queued for 4.3. Please can you
>>> rebase this against the arm64 "devel" branch? (usually it would be
>>> for-next/core, but I'm holding off stabilising until -rc4 since allmodconfig
>>> build is broken atm).
>>
>> The version of patch 5 "arm64: kernel: Add optional CONFIG_ parameter to
>> ALTERNATIVE()" in your tree has:
>>
>>> [will: removed unused asm macro changes for now to avoid conflicts]
>>
>> Those were used in arch/arm64/lib/clear_user.S and friends.
>> I shall remove the 'CONFIG_ARM64_PAN' from those four asm files - it can be
>> tidied up later.
> 
> Ah, damn, I didn't realise you'd made the ALTERNATIVE macro work for both
> C and asm. The reason I changed it is because I don't know what's best to
> do with the new alternative_if_not macros -- having an enabled argument
> for the _else and _endif variants is really odd.
> 
> I think the options are:
> 
>   (1) Just spit out a NOP (your current approach)
>   (2) Use #ifdefs at the caller
>   (3) Only have the option for alternative_insn
>   (4) Add the option to all the alternative_ macros
> 
> What do you reckon?

I would go with (1) for now; it only affects four functions, not the
uaccess.h macros, where it would be inlined all over the place.

(4) can be a future optimisation.


James


* [PATCH v3 6/6] arm64: kernel: Add support for Privileged Access Never
  2015-07-21 12:38   ` Catalin Marinas
  2015-07-22 17:01     ` Will Deacon
@ 2015-07-23 12:00     ` Will Deacon
  1 sibling, 0 replies; 19+ messages in thread
From: Will Deacon @ 2015-07-23 12:00 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Jul 21, 2015 at 01:38:31PM +0100, Catalin Marinas wrote:
> On Tue, Jul 21, 2015 at 01:23:31PM +0100, James Morse wrote:
> > diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> > index 94d98cd1aad8..149a36ea9673 100644
> > --- a/arch/arm64/mm/fault.c
> > +++ b/arch/arm64/mm/fault.c
> > @@ -30,9 +30,11 @@
> >  #include <linux/highmem.h>
> >  #include <linux/perf_event.h>
> >  
> > +#include <asm/cpufeature.h>
> >  #include <asm/exception.h>
> >  #include <asm/debug-monitors.h>
> >  #include <asm/esr.h>
> > +#include <asm/sysreg.h>
> >  #include <asm/system_misc.h>
> >  #include <asm/pgtable.h>
> >  #include <asm/tlbflush.h>
> > @@ -147,6 +149,13 @@ static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *re
> >  		__do_kernel_fault(mm, addr, esr, regs);
> >  }
> >  
> > +static bool pan_enabled(struct pt_regs *regs)
> > +{
> > +	if (IS_ENABLED(CONFIG_ARM64_PAN))
> > +		return ((regs->pstate & PSR_PAN_BIT) != 0);
> 
> Nitpick: no brackets needed for return.

Couldn't we just write this function as:

  return IS_ENABLED(CONFIG_ARM64_PAN) && (regs->pstate & PSR_PAN_BIT);

?

Will


* [PATCH v4] arm64: kernel: Add support for Privileged Access Never
  2015-07-22 18:05       ` [PATCH v4] " James Morse
@ 2015-07-23 13:07         ` Will Deacon
  2015-07-24 15:14           ` James Morse
  0 siblings, 1 reply; 19+ messages in thread
From: Will Deacon @ 2015-07-23 13:07 UTC (permalink / raw)
  To: linux-arm-kernel

Hi James,

First off, thanks for rebasing this patch.

On Wed, Jul 22, 2015 at 07:05:54PM +0100, James Morse wrote:
> 'Privileged Access Never' is a new ARMv8.1 feature which prevents
> privileged code from accessing any virtual address where read or write
> access is also permitted at EL0.
> 
> This patch enables the PAN feature on all CPUs, and modifies {get,put}_user
> helpers temporarily to permit access.
> 
> This will catch kernel bugs where user memory is accessed directly.
> 'Unprivileged loads and stores' using ldtrb et al are unaffected by PAN.
> 
> Signed-off-by: James Morse <james.morse@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
> This version is rebased against the arm64 'devel' branch, somewhere
> after Suzuki's "arm64: Generalise msr_s/mrs_s operations" patch.

Now, having spoken with Catalin, we reckon that it's probably best to
bite the bullet and add the enable parameter to the conditional alternative
asm macros anyway; it's still fairly early days for 4.3 so we've got time
to get this right.

In that light, I've got the following diff against this patch (see below)
and then another patch on top of that adding the extra parameters.

Could you take a look please? Sorry for messing you about.

Will

--->8

diff --git a/arch/arm64/lib/clear_user.S b/arch/arm64/lib/clear_user.S
index 96ed5cfecb7f..a9723c71c52b 100644
--- a/arch/arm64/lib/clear_user.S
+++ b/arch/arm64/lib/clear_user.S
@@ -33,7 +33,8 @@
  * Alignment fixed up by hardware.
  */
 ENTRY(__clear_user)
-alternative_insn "nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	mov	x2, x1			// save the size for fixup return
 	subs	x1, x1, #8
 	b.mi	2f
@@ -53,7 +54,8 @@ USER(9f, strh	wzr, [x0], #2	)
 	b.mi	5f
 USER(9f, strb	wzr, [x0]	)
 5:	mov	x0, #0
-alternative_insn "nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	ret
 ENDPROC(__clear_user)
 
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index e73819dd47d2..1be9ef27be97 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -32,7 +32,8 @@
  *	x0 - bytes not copied
  */
 ENTRY(__copy_from_user)
-alternative_insn "nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	add	x5, x1, x2			// upper user buffer boundary
 	subs	x2, x2, #16
 	b.mi	1f
@@ -61,7 +62,8 @@ USER(9f, ldrh	w3, [x1], #2	)
 USER(9f, ldrb	w3, [x1]	)
 	strb	w3, [x0]
 5:	mov	x0, #0
-alternative_insn "nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	ret
 ENDPROC(__copy_from_user)
 
diff --git a/arch/arm64/lib/copy_in_user.S b/arch/arm64/lib/copy_in_user.S
index 9e6376a3e247..1b94661e22b3 100644
--- a/arch/arm64/lib/copy_in_user.S
+++ b/arch/arm64/lib/copy_in_user.S
@@ -34,7 +34,8 @@
  *	x0 - bytes not copied
  */
 ENTRY(__copy_in_user)
-alternative_insn "nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	add	x5, x0, x2			// upper user buffer boundary
 	subs	x2, x2, #16
 	b.mi	1f
@@ -63,7 +64,8 @@ USER(9f, strh	w3, [x0], #2	)
 USER(9f, ldrb	w3, [x1]	)
 USER(9f, strb	w3, [x0]	)
 5:	mov	x0, #0
-alternative_insn "nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	ret
 ENDPROC(__copy_in_user)
 
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 936199faba3f..a257b47e2dc4 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -32,7 +32,8 @@
  *	x0 - bytes not copied
  */
 ENTRY(__copy_to_user)
-alternative_insn "nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	add	x5, x0, x2			// upper user buffer boundary
 	subs	x2, x2, #16
 	b.mi	1f
@@ -61,7 +62,8 @@ USER(9f, strh	w3, [x0], #2	)
 	ldrb	w3, [x1]
 USER(9f, strb	w3, [x0]	)
 5:	mov	x0, #0
-alternative_insn "nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN
+ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_HAS_PAN, \
+	    CONFIG_ARM64_PAN)
 	ret
 ENDPROC(__copy_to_user)
 
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 5fe96ef31e0e..ce591211434e 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -149,13 +149,6 @@ static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *re
 		__do_kernel_fault(mm, addr, esr, regs);
 }
 
-static bool pan_enabled(struct pt_regs *regs)
-{
-	if (IS_ENABLED(CONFIG_ARM64_PAN))
-		return (regs->pstate & PSR_PAN_BIT) != 0;
-	return false;
-}
-
 #define VM_FAULT_BADMAP		0x010000
 #define VM_FAULT_BADACCESS	0x020000
 
@@ -236,7 +229,7 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 	 * PAN bit set implies the fault happened in kernel space, but not
 	 * in the arch's user access functions.
 	 */
-	if (pan_enabled(regs))
+	if (IS_ENABLED(CONFIG_ARM64_PAN) && (regs->pstate & PSR_PAN_BIT))
 		goto no_context;
 
 	/*

^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v4] arm64: kernel: Add support for Privileged Access Never
  2015-07-23 13:07         ` Will Deacon
@ 2015-07-24 15:14           ` James Morse
  2015-07-24 16:56             ` Will Deacon
  0 siblings, 1 reply; 19+ messages in thread
From: James Morse @ 2015-07-24 15:14 UTC (permalink / raw)
  To: linux-arm-kernel

On 23/07/15 14:07, Will Deacon wrote:
> Hi James,
> 
> First off, thanks for rebasing this patch.
> 
> On Wed, Jul 22, 2015 at 07:05:54PM +0100, James Morse wrote:
>> 'Privileged Access Never' is a new ARMv8.1 feature which prevents
>> privileged code from accessing any virtual address where read or write
>> access is also permitted at EL0.
>>
>> This patch enables the PAN feature on all CPUs, and modifies {get,put}_user
>> helpers temporarily to permit access.
>>
>> This will catch kernel bugs where user memory is accessed directly.
>> 'Unprivileged loads and stores' using ldtrb et al are unaffected by PAN.
>>
>> Signed-off-by: James Morse <james.morse@arm.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> ---
>> This version is rebased against the arm64 'devel' branch, somewhere
>> after Suzuki's "arm64: Generalise msr_s/mrs_s operations" patch.
> 
> Now, having spoken with Catalin, we reckon that it's probably best to
> bite the bullet and add the enable parameter to the conditional alternative
> asm macros anyway; it's still fairly early days for 4.3 so we've got time
> to get this right.
> 
> In that light, I've got the following diff against this patch (see below)
> and then another patch on top of that adding the extra parameters.
> 
> Could you take a look please? Sorry for messing you about.

Fine by me ...

If you're able to merge it all together, please do. Otherwise I will try to
find time to send a v5.



James


* [PATCH v4] arm64: kernel: Add support for Privileged Access Never
  2015-07-24 15:14           ` James Morse
@ 2015-07-24 16:56             ` Will Deacon
  0 siblings, 0 replies; 19+ messages in thread
From: Will Deacon @ 2015-07-24 16:56 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Jul 24, 2015 at 04:14:54PM +0100, James Morse wrote:
> On 23/07/15 14:07, Will Deacon wrote:
> > On Wed, Jul 22, 2015 at 07:05:54PM +0100, James Morse wrote:
> >> 'Privileged Access Never' is a new ARMv8.1 feature which prevents
> >> privileged code from accessing any virtual address where read or write
> >> access is also permitted at EL0.
> >>
> >> This patch enables the PAN feature on all CPUs, and modifies {get,put}_user
> >> helpers temporarily to permit access.
> >>
> >> This will catch kernel bugs where user memory is accessed directly.
> >> 'Unprivileged loads and stores' using ldtrb et al are unaffected by PAN.
> >>
> >> Signed-off-by: James Morse <james.morse@arm.com>
> >> Cc: Catalin Marinas <catalin.marinas@arm.com>
> >> Cc: Will Deacon <will.deacon@arm.com>
> >> ---
> >> This version is rebased against the arm64 'devel' branch, somewhere
> >> after Suzuki's "arm64: Generalise msr_s/mrs_s operations" patch.
> > 
> > Now, having spoken with Catalin, we reckon that it's probably best to
> > bite the bullet and add the enable parameter to the conditional alternative
> > asm macros anyway; it's still fairly early days for 4.3 so we've got time
> > to get this right.
> > 
> > In that light, I've got the following diff against this patch (see below)
> > and then another patch on top of that adding the extra parameters.
> > 
> > Could you take a look please? Sorry for messing you about.
> 
> Fine by me ...

Thanks, I'll merge it in.

Will


end of thread, other threads:[~2015-07-24 16:56 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-07-21 12:23 [PATCH v3 0/6] arm64: kernel: Add support for Privileged Access Never James Morse
2015-07-21 12:23 ` [PATCH v3 1/6] arm64: kernel: Add cpuid_feature_extract_field() for 4bit sign extension James Morse
2015-07-21 12:32   ` Catalin Marinas
2015-07-21 12:23 ` [PATCH v3 2/6] arm64: kernel: preparatory: Move config_sctlr_el1 James Morse
2015-07-21 12:23 ` [PATCH v3 3/6] arm64: kernel: Add cpufeature 'enable' callback James Morse
2015-07-21 12:23 ` [PATCH v3 4/6] arm64: kernel: Add min_field_value and use '>=' for feature detection James Morse
2015-07-21 12:33   ` Catalin Marinas
2015-07-21 12:23 ` [PATCH v3 5/6] arm64: kernel: Add optional CONFIG_ parameter to ALTERNATIVE() James Morse
2015-07-21 12:23 ` [PATCH v3 6/6] arm64: kernel: Add support for Privileged Access Never James Morse
2015-07-21 12:38   ` Catalin Marinas
2015-07-22 17:01     ` Will Deacon
2015-07-22 18:04       ` James Morse
2015-07-22 18:14         ` Will Deacon
2015-07-23  7:58           ` James Morse
2015-07-22 18:05       ` [PATCH v4] " James Morse
2015-07-23 13:07         ` Will Deacon
2015-07-24 15:14           ` James Morse
2015-07-24 16:56             ` Will Deacon
2015-07-23 12:00     ` [PATCH v3 6/6] " Will Deacon
