* [PATCH 1/2] arm64: Relax constraints on ID feature bits
@ 2018-02-01 10:38 Suzuki K Poulose
2018-02-01 10:38 ` [PATCH 2/2] arm64: Expose Arm v8.4 features Suzuki K Poulose
2018-02-07 10:40 ` [PATCH 1/2] arm64: Relax constraints on ID feature bits Dave Martin
0 siblings, 2 replies; 9+ messages in thread
From: Suzuki K Poulose @ 2018-02-01 10:38 UTC (permalink / raw)
To: linux-arm-kernel
We treat most of the feature bits in the ID registers as STRICT,
implying that all CPUs must match the boot CPU state. However,
for most of the features we can handle a mismatch by using the
safe value, e.g. for HWCAPs and other features used by the kernel.
Relax the constraint on the feature bits whose mismatch can be
handled by the kernel.
For VHE, a mismatch doesn't matter if the kernel is not using it;
if the kernel is indeed running in EL2 mode, a mismatch results
in a panic. Similarly, for the ASID bits we already take care of
conflicts.
For other features like PAN and UAO, we enable them only if all
CPUs have them. For IESB, we set the SCTLR bit unconditionally
anyway.
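For reference (illustration only, not part of this patch, and a
simplification of the logic in arm64_ftr_safe_value()), a LOWER_SAFE
field resolves a mismatch roughly as below, using the ftr_type values
already defined in arch/arm64/include/asm/cpufeature.h:

/*
 * Simplified sketch, for illustration only: how a 4-bit ID register
 * field is folded into the system wide safe value. FTR_LOWER_SAFE
 * picks the lowest value reported by any CPU, so a mismatch merely
 * degrades the feature; FTR_EXACT falls back to the field's safe_val
 * on any mismatch.
 */
static s64 example_safe_value(enum ftr_type type, s64 safe_val,
                              s64 cur, s64 new)
{
        switch (type) {
        case FTR_LOWER_SAFE:
                return min(cur, new);
        case FTR_HIGHER_SAFE:
                return max(cur, new);
        default:        /* FTR_EXACT */
                return new == cur ? new : safe_val;
        }
}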
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
arch/arm64/kernel/cpufeature.c | 66 ++++++++++++++++++++++--------------------
1 file changed, 35 insertions(+), 31 deletions(-)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 4b6f9051cf0c..1c39a635a71c 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -123,36 +123,36 @@ cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused)
* sync with the documentation of the CPU feature register ABI.
*/
static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_FHM_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_DP_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM4_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM3_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA3_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_RDM_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_ATOMICS_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_CRC32_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA2_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA1_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_AES_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_FHM_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_DP_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM4_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM3_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA3_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_RDM_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_ATOMICS_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_CRC32_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA2_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA1_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_AES_SHIFT, 4, 0),
ARM64_FTR_END,
};
static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_LRCPC_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_LRCPC_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
ARM64_FTR_END,
};
static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_RAS_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_RAS_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
- S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
- S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
+ S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
+ S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
/* Linux doesn't care about the EL3 */
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL3_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL2_SHIFT, 4, 0),
@@ -169,7 +169,8 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
/* Linux shouldn't care about secure memory */
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_SNSMEM_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_BIGENDEL_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ASID_SHIFT, 4, 0),
+ /* We handle differing ASID widths by explicit checks to make sure the system is safe */
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ASID_SHIFT, 4, 0),
/*
* Differing PARange is fine as long as all peripherals and memory are mapped
* within the minimum PARange of all CPUs
@@ -179,20 +180,23 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
};
static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
- ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_PAN_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_PAN_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_LOR_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HPD_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VHE_SHIFT, 4, 0),
+ /* When CONFIG_ARM64_VHE is enabled, we ensure that there is no conflict */
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VHE_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VMIDBITS_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HADBS_SHIFT, 4, 0),
+ /* We can run a mix of CPUs with and without the support for HW management of AF/DBM bits */
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HADBS_SHIFT, 4, 0),
ARM64_FTR_END,
};
static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IESB_SHIFT, 4, 0),
+ /* While IESB is good to have, it is not fatal if we miss this on some CPUs */
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IESB_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LSM_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_UAO_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_UAO_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_CNP_SHIFT, 4, 0),
ARM64_FTR_END,
};
@@ -259,12 +263,12 @@ static const struct arm64_ftr_bits ftr_dczid[] = {
static const struct arm64_ftr_bits ftr_id_isar5[] = {
- ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_RDM_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_CRC32_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA2_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA1_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_AES_SHIFT, 4, 0),
- ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SEVL_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_ISAR5_RDM_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_ISAR5_CRC32_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA2_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA1_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_ISAR5_AES_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_ISAR5_SEVL_SHIFT, 4, 0),
ARM64_FTR_END,
};
--
2.14.3
* [PATCH 2/2] arm64: Expose Arm v8.4 features
2018-02-01 10:38 [PATCH 1/2] arm64: Relax constraints on ID feature bits Suzuki K Poulose
@ 2018-02-01 10:38 ` Suzuki K Poulose
2018-02-07 10:43 ` Dave Martin
2018-02-07 10:40 ` [PATCH 1/2] arm64: Relax constraints on ID feature bits Dave Martin
1 sibling, 1 reply; 9+ messages in thread
From: Suzuki K Poulose @ 2018-02-01 10:38 UTC (permalink / raw)
To: linux-arm-kernel
Expose the new features introduced by the Arm v8.4 extensions to
the Arm v8-A profile.
These include:
1) Data independent timing of instructions (DIT, exposed as HWCAP_DIT)
2) Unaligned atomic instructions and single-copy atomicity of loads
and stores (AT, exposed as HWCAP_USCAT)
3) LDAPR and STLR instructions with immediate offsets (extension to
LRCPC, exposed as HWCAP_ILRCPC)
4) Flag manipulation instructions (TS, exposed as HWCAP_FLAGM).
While at it, get rid of the RES0 entries in the cpu-feature-registers.txt
documentation.
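For illustration only (not part of this patch): userspace would
typically test the new bits via the auxiliary vector, assuming the
updated uapi asm/hwcap.h from this series is what the compiler sees:

#include <stdio.h>
#include <sys/auxv.h>
#include <asm/hwcap.h>

int main(void)
{
        unsigned long hwcap = getauxval(AT_HWCAP);

        if (hwcap & HWCAP_DIT)
                printf("DIT supported\n");
        if (hwcap & HWCAP_USCAT)
                printf("unaligned single-copy atomics supported\n");
        if (hwcap & HWCAP_ILRCPC)
                printf("LDAPR/STLR with immediate offsets supported\n");
        if (hwcap & HWCAP_FLAGM)
                printf("flag manipulation instructions supported\n");
        return 0;
}

The same information also shows up in the /proc/cpuinfo Features line
via the cpuinfo.c hunk below.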
Cc: Dave Martin <dave.martin@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
Documentation/arm64/cpu-feature-registers.txt | 18 ++++++++++--------
Documentation/arm64/elf_hwcaps.txt | 16 ++++++++++++++++
arch/arm64/include/asm/sysreg.h | 3 +++
arch/arm64/include/uapi/asm/hwcap.h | 4 ++++
arch/arm64/kernel/cpufeature.c | 7 +++++++
arch/arm64/kernel/cpuinfo.c | 4 ++++
6 files changed, 44 insertions(+), 8 deletions(-)
diff --git a/Documentation/arm64/cpu-feature-registers.txt b/Documentation/arm64/cpu-feature-registers.txt
index a70090b28b07..7964f03846b1 100644
--- a/Documentation/arm64/cpu-feature-registers.txt
+++ b/Documentation/arm64/cpu-feature-registers.txt
@@ -110,7 +110,7 @@ infrastructure:
x--------------------------------------------------x
| Name | bits | visible |
|--------------------------------------------------|
- | RES0 | [63-52] | n |
+ | TS | [55-52] | y |
|--------------------------------------------------|
| FHM | [51-48] | y |
|--------------------------------------------------|
@@ -124,8 +124,6 @@ infrastructure:
|--------------------------------------------------|
| RDM | [31-28] | y |
|--------------------------------------------------|
- | RES0 | [27-24] | n |
- |--------------------------------------------------|
| ATOMICS | [23-20] | y |
|--------------------------------------------------|
| CRC32 | [19-16] | y |
@@ -135,8 +133,6 @@ infrastructure:
| SHA1 | [11-8] | y |
|--------------------------------------------------|
| AES | [7-4] | y |
- |--------------------------------------------------|
- | RES0 | [3-0] | n |
x--------------------------------------------------x
@@ -144,12 +140,10 @@ infrastructure:
x--------------------------------------------------x
| Name | bits | visible |
|--------------------------------------------------|
- | RES0 | [63-36] | n |
+ | DIT | [51-48] | y |
|--------------------------------------------------|
| SVE | [35-32] | y |
|--------------------------------------------------|
- | RES0 | [31-28] | n |
- |--------------------------------------------------|
| GIC | [27-24] | n |
|--------------------------------------------------|
| AdvSIMD | [23-20] | y |
@@ -199,6 +193,14 @@ infrastructure:
| DPB | [3-0] | y |
x--------------------------------------------------x
+ 5) ID_AA64MMFR2_EL1 - Memory model feature register 2
+
+ x--------------------------------------------------x
+ | Name | bits | visible |
+ |--------------------------------------------------|
+ | AT | [35-32] | y |
+ x--------------------------------------------------x
+
Appendix I: Example
---------------------------
diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
index 57324ee55ecc..d6aff2c5e9e2 100644
--- a/Documentation/arm64/elf_hwcaps.txt
+++ b/Documentation/arm64/elf_hwcaps.txt
@@ -162,3 +162,19 @@ HWCAP_SVE
HWCAP_ASIMDFHM
Functionality implied by ID_AA64ISAR0_EL1.FHM == 0b0001.
+
+HWCAP_DIT
+
+ Functionality implied by ID_AA64PFR0_EL1.DIT == 0b0001.
+
+HWCAP_USCAT
+
+ Functionality implied by ID_AA64MMFR2_EL1.AT == 0b0001.
+
+HWCAP_ILRCPC
+
+ Functionality implied by ID_AA64ISAR1_EL1.LRCPC == 0b0010.
+
+HWCAP_FLAGM
+
+ Functionality implied by ID_AA64ISAR0_EL1.TS == 0b0001.
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 0e1960c59197..e7b9f154e476 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -490,6 +490,7 @@
#define SCTLR_EL1_BUILD_BUG_ON_MISSING_BITS BUILD_BUG_ON((SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != ~0)
/* id_aa64isar0 */
+#define ID_AA64ISAR0_TS_SHIFT 52
#define ID_AA64ISAR0_FHM_SHIFT 48
#define ID_AA64ISAR0_DP_SHIFT 44
#define ID_AA64ISAR0_SM4_SHIFT 40
@@ -511,6 +512,7 @@
/* id_aa64pfr0 */
#define ID_AA64PFR0_CSV3_SHIFT 60
#define ID_AA64PFR0_CSV2_SHIFT 56
+#define ID_AA64PFR0_DIT_SHIFT 48
#define ID_AA64PFR0_SVE_SHIFT 32
#define ID_AA64PFR0_RAS_SHIFT 28
#define ID_AA64PFR0_GIC_SHIFT 24
@@ -568,6 +570,7 @@
#define ID_AA64MMFR1_VMIDBITS_16 2
/* id_aa64mmfr2 */
+#define ID_AA64MMFR2_AT_SHIFT 32
#define ID_AA64MMFR2_LVA_SHIFT 16
#define ID_AA64MMFR2_IESB_SHIFT 12
#define ID_AA64MMFR2_LSM_SHIFT 8
diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
index f018c3deea3b..17c65c8f33cb 100644
--- a/arch/arm64/include/uapi/asm/hwcap.h
+++ b/arch/arm64/include/uapi/asm/hwcap.h
@@ -44,5 +44,9 @@
#define HWCAP_SHA512 (1 << 21)
#define HWCAP_SVE (1 << 22)
#define HWCAP_ASIMDFHM (1 << 23)
+#define HWCAP_DIT (1 << 24)
+#define HWCAP_USCAT (1 << 25)
+#define HWCAP_ILRCPC (1 << 26)
+#define HWCAP_FLAGM (1 << 27)
#endif /* _UAPI__ASM_HWCAP_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 1c39a635a71c..796e83dde9fc 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -123,6 +123,7 @@ cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused)
* sync with the documentation of the CPU feature register ABI.
*/
static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_TS_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_FHM_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_DP_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM4_SHIFT, 4, 0),
@@ -148,6 +149,7 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_DIT_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_RAS_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
@@ -192,6 +194,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
};
static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_AT_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
/* While IESB is good to have, it is not fatal if we miss this on some CPUs */
ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IESB_SHIFT, 4, 0),
@@ -1083,14 +1086,18 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SM4_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SM4),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_DP_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_ASIMDDP),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_FHM_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_ASIMDFHM),
+ HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_TS_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_FLAGM),
HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, HWCAP_FP),
HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_FPHP),
HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, HWCAP_ASIMD),
HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_ASIMDHP),
+ HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_DIT_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_DIT),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_DPB_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_DCPOP),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_JSCVT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_JSCVT),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_FCMA_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_FCMA),
HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_LRCPC),
+ HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, HWCAP_ILRCPC),
+ HWCAP_CAP(SYS_ID_AA64MMFR2_EL1, ID_AA64MMFR2_AT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_USCAT),
#ifdef CONFIG_ARM64_SVE
HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_SVE_SHIFT, FTR_UNSIGNED, ID_AA64PFR0_SVE, CAP_HWCAP, HWCAP_SVE),
#endif
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 7f94623df8a5..e9ab7b3ed317 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -77,6 +77,10 @@ static const char *const hwcap_str[] = {
"sha512",
"sve",
"asimdfhm",
+ "dit",
+ "uscat",
+ "ilrcpc",
+ "flagm",
NULL
};
--
2.14.3
* [PATCH 1/2] arm64: Relax constraints on ID feature bits
2018-02-01 10:38 [PATCH 1/2] arm64: Relax constraints on ID feature bits Suzuki K Poulose
2018-02-01 10:38 ` [PATCH 2/2] arm64: Expose Arm v8.4 features Suzuki K Poulose
@ 2018-02-07 10:40 ` Dave Martin
2018-02-07 11:41 ` Suzuki K Poulose
1 sibling, 1 reply; 9+ messages in thread
From: Dave Martin @ 2018-02-07 10:40 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Feb 01, 2018 at 10:38:37AM +0000, Suzuki K Poulose wrote:
> We treat most of the feature bits in the ID registers as STRICT,
> implying that all CPUs must match the boot CPU state. However,
> for most of the features we can handle a mismatch by using the
> safe value, e.g. for HWCAPs and other features used by the
> kernel. Relax the constraint on the feature bits whose mismatch
> can be handled by the kernel.
>
> For VHE, a mismatch doesn't matter if the kernel is not using
> it; if the kernel is indeed running in EL2 mode, a mismatch
> results in a panic. Similarly, for the ASID bits we already take
> care of conflicts.
>
> For other features like PAN and UAO, we enable them only if all
> CPUs have them. For IESB, we set the SCTLR bit unconditionally
> anyway.
Do the remaining STRICT+LOWER_SAFE / NONSTRICT+EXACT cases still
make sense?
I've wondered in the past whether there is redundancy between the strict
and type fields, but when adding entries I just copy-pasted similar ones
rather than fully understanding what was going on...
Cheers
---Dave
>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Dave Martin <dave.martin@arm.com>
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> ---
> arch/arm64/kernel/cpufeature.c | 66 ++++++++++++++++++++++--------------------
> 1 file changed, 35 insertions(+), 31 deletions(-)
>
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 4b6f9051cf0c..1c39a635a71c 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -123,36 +123,36 @@ cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused)
> * sync with the documentation of the CPU feature register ABI.
> */
> static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_FHM_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_DP_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM4_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM3_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA3_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_RDM_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_ATOMICS_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_CRC32_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA2_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA1_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_AES_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_FHM_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_DP_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM4_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM3_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA3_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_RDM_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_ATOMICS_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_CRC32_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA2_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA1_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_AES_SHIFT, 4, 0),
> ARM64_FTR_END,
> };
>
> static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_LRCPC_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_LRCPC_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
> ARM64_FTR_END,
> };
>
> static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
> ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
> ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_RAS_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_RAS_SHIFT, 4, 0),
> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
> - S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
> - S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
> + S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
> + S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
> /* Linux doesn't care about the EL3 */
> ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL3_SHIFT, 4, 0),
> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL2_SHIFT, 4, 0),
> @@ -169,7 +169,8 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
> /* Linux shouldn't care about secure memory */
> ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_SNSMEM_SHIFT, 4, 0),
> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_BIGENDEL_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ASID_SHIFT, 4, 0),
> + /* We handle differing ASID widths by explicit checks to make sure the system is safe */
> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ASID_SHIFT, 4, 0),
> /*
> * Differing PARange is fine as long as all peripherals and memory are mapped
> * within the minimum PARange of all CPUs
> @@ -179,20 +180,23 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
> };
>
> static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_PAN_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_PAN_SHIFT, 4, 0),
> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_LOR_SHIFT, 4, 0),
> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HPD_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VHE_SHIFT, 4, 0),
> + /* When CONFIG_ARM64_VHE is enabled, we ensure that there is no conflict */
> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VHE_SHIFT, 4, 0),
> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VMIDBITS_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HADBS_SHIFT, 4, 0),
> + /* We can run a mix of CPUs with and without the support for HW management of AF/DBM bits */
> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HADBS_SHIFT, 4, 0),
> ARM64_FTR_END,
> };
>
> static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IESB_SHIFT, 4, 0),
> + /* While IESB is good to have, it is not fatal if we miss this on some CPUs */
> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IESB_SHIFT, 4, 0),
> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LSM_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_UAO_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_UAO_SHIFT, 4, 0),
> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_CNP_SHIFT, 4, 0),
> ARM64_FTR_END,
> };
> @@ -259,12 +263,12 @@ static const struct arm64_ftr_bits ftr_dczid[] = {
>
>
> static const struct arm64_ftr_bits ftr_id_isar5[] = {
> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_RDM_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_CRC32_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA2_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA1_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_AES_SHIFT, 4, 0),
> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SEVL_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_ISAR5_RDM_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_ISAR5_CRC32_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA2_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA1_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_ISAR5_AES_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_ISAR5_SEVL_SHIFT, 4, 0),
> ARM64_FTR_END,
> };
>
> --
> 2.14.3
>
>
* [PATCH 2/2] arm64: Expose Arm v8.4 features
2018-02-01 10:38 ` [PATCH 2/2] arm64: Expose Arm v8.4 features Suzuki K Poulose
@ 2018-02-07 10:43 ` Dave Martin
2018-02-07 10:51 ` Suzuki K Poulose
0 siblings, 1 reply; 9+ messages in thread
From: Dave Martin @ 2018-02-07 10:43 UTC (permalink / raw)
To: linux-arm-kernel
On Thu, Feb 01, 2018 at 10:38:38AM +0000, Suzuki K Poulose wrote:
> Expose the new features introduced by the Arm v8.4 extensions to
> the Arm v8-A profile.
>
> These include:
>
> 1) Data independent timing of instructions (DIT, exposed as HWCAP_DIT)
> 2) Unaligned atomic instructions and single-copy atomicity of loads
> and stores (AT, exposed as HWCAP_USCAT)
> 3) LDAPR and STLR instructions with immediate offsets (extension to
> LRCPC, exposed as HWCAP_ILRCPC)
> 4) Flag manipulation instructions (TS, exposed as HWCAP_FLAGM).
>
> While at it, get rid of the RES0 entries in the cpu-feature-registers.txt
> documentation.
Should we write somewhere that fields that are not described here are
implicitly RES0, or would that be too strong a statement?
Cheers
---Dave
> Cc: Dave Martin <dave.martin@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> ---
> Documentation/arm64/cpu-feature-registers.txt | 18 ++++++++++--------
> Documentation/arm64/elf_hwcaps.txt | 16 ++++++++++++++++
> arch/arm64/include/asm/sysreg.h | 3 +++
> arch/arm64/include/uapi/asm/hwcap.h | 4 ++++
> arch/arm64/kernel/cpufeature.c | 7 +++++++
> arch/arm64/kernel/cpuinfo.c | 4 ++++
> 6 files changed, 44 insertions(+), 8 deletions(-)
>
> diff --git a/Documentation/arm64/cpu-feature-registers.txt b/Documentation/arm64/cpu-feature-registers.txt
> index a70090b28b07..7964f03846b1 100644
> --- a/Documentation/arm64/cpu-feature-registers.txt
> +++ b/Documentation/arm64/cpu-feature-registers.txt
> @@ -110,7 +110,7 @@ infrastructure:
> x--------------------------------------------------x
> | Name | bits | visible |
> |--------------------------------------------------|
> - | RES0 | [63-52] | n |
> + | TS | [55-52] | y |
> |--------------------------------------------------|
> | FHM | [51-48] | y |
> |--------------------------------------------------|
> @@ -124,8 +124,6 @@ infrastructure:
> |--------------------------------------------------|
> | RDM | [31-28] | y |
> |--------------------------------------------------|
> - | RES0 | [27-24] | n |
> - |--------------------------------------------------|
> | ATOMICS | [23-20] | y |
> |--------------------------------------------------|
> | CRC32 | [19-16] | y |
> @@ -135,8 +133,6 @@ infrastructure:
> | SHA1 | [11-8] | y |
> |--------------------------------------------------|
> | AES | [7-4] | y |
> - |--------------------------------------------------|
> - | RES0 | [3-0] | n |
> x--------------------------------------------------x
>
>
> @@ -144,12 +140,10 @@ infrastructure:
> x--------------------------------------------------x
> | Name | bits | visible |
> |--------------------------------------------------|
> - | RES0 | [63-36] | n |
> + | DIT | [51-48] | y |
> |--------------------------------------------------|
> | SVE | [35-32] | y |
> |--------------------------------------------------|
> - | RES0 | [31-28] | n |
> - |--------------------------------------------------|
> | GIC | [27-24] | n |
> |--------------------------------------------------|
> | AdvSIMD | [23-20] | y |
> @@ -199,6 +193,14 @@ infrastructure:
> | DPB | [3-0] | y |
> x--------------------------------------------------x
>
> + 5) ID_AA64MMFR2_EL1 - Memory model feature register 2
> +
> + x--------------------------------------------------x
> + | Name | bits | visible |
> + |--------------------------------------------------|
> + | AT | [35-32] | y |
> + x--------------------------------------------------x
> +
> Appendix I: Example
> ---------------------------
>
> diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
> index 57324ee55ecc..d6aff2c5e9e2 100644
> --- a/Documentation/arm64/elf_hwcaps.txt
> +++ b/Documentation/arm64/elf_hwcaps.txt
> @@ -162,3 +162,19 @@ HWCAP_SVE
> HWCAP_ASIMDFHM
>
> Functionality implied by ID_AA64ISAR0_EL1.FHM == 0b0001.
> +
> +HWCAP_DIT
> +
> + Functionality implied by ID_AA64PFR0_EL1.DIT == 0b0001.
> +
> +HWCAP_USCAT
> +
> + Functionality implied by ID_AA64MMFR2_EL1.AT == 0b0001.
> +
> +HWCAP_ILRCPC
> +
> + Functionality implied by ID_AA64ISAR1_EL1.LRCPC == 0b0010.
> +
> +HWCAP_FLAGM
> +
> + Functionality implied by ID_AA64ISAR0_EL1.TS == 0b0001.
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 0e1960c59197..e7b9f154e476 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -490,6 +490,7 @@
> #define SCTLR_EL1_BUILD_BUG_ON_MISSING_BITS BUILD_BUG_ON((SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != ~0)
>
> /* id_aa64isar0 */
> +#define ID_AA64ISAR0_TS_SHIFT 52
> #define ID_AA64ISAR0_FHM_SHIFT 48
> #define ID_AA64ISAR0_DP_SHIFT 44
> #define ID_AA64ISAR0_SM4_SHIFT 40
> @@ -511,6 +512,7 @@
> /* id_aa64pfr0 */
> #define ID_AA64PFR0_CSV3_SHIFT 60
> #define ID_AA64PFR0_CSV2_SHIFT 56
> +#define ID_AA64PFR0_DIT_SHIFT 48
> #define ID_AA64PFR0_SVE_SHIFT 32
> #define ID_AA64PFR0_RAS_SHIFT 28
> #define ID_AA64PFR0_GIC_SHIFT 24
> @@ -568,6 +570,7 @@
> #define ID_AA64MMFR1_VMIDBITS_16 2
>
> /* id_aa64mmfr2 */
> +#define ID_AA64MMFR2_AT_SHIFT 32
> #define ID_AA64MMFR2_LVA_SHIFT 16
> #define ID_AA64MMFR2_IESB_SHIFT 12
> #define ID_AA64MMFR2_LSM_SHIFT 8
> diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
> index f018c3deea3b..17c65c8f33cb 100644
> --- a/arch/arm64/include/uapi/asm/hwcap.h
> +++ b/arch/arm64/include/uapi/asm/hwcap.h
> @@ -44,5 +44,9 @@
> #define HWCAP_SHA512 (1 << 21)
> #define HWCAP_SVE (1 << 22)
> #define HWCAP_ASIMDFHM (1 << 23)
> +#define HWCAP_DIT (1 << 24)
> +#define HWCAP_USCAT (1 << 25)
> +#define HWCAP_ILRCPC (1 << 26)
> +#define HWCAP_FLAGM (1 << 27)
>
> #endif /* _UAPI__ASM_HWCAP_H */
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index 1c39a635a71c..796e83dde9fc 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -123,6 +123,7 @@ cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused)
> * sync with the documentation of the CPU feature register ABI.
> */
> static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_TS_SHIFT, 4, 0),
> ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_FHM_SHIFT, 4, 0),
> ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_DP_SHIFT, 4, 0),
> ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM4_SHIFT, 4, 0),
> @@ -148,6 +149,7 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
> static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
> ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
> ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_DIT_SHIFT, 4, 0),
> ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),
> ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_RAS_SHIFT, 4, 0),
> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
> @@ -192,6 +194,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
> };
>
> static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_AT_SHIFT, 4, 0),
> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
> /* While IESB is good to have, it is not fatal if we miss this on some CPUs */
> ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IESB_SHIFT, 4, 0),
> @@ -1083,14 +1086,18 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
> HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SM4_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SM4),
> HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_DP_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_ASIMDDP),
> HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_FHM_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_ASIMDFHM),
> + HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_TS_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_FLAGM),
> HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, HWCAP_FP),
> HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_FPHP),
> HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, HWCAP_ASIMD),
> HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_ASIMDHP),
> + HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_DIT_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_DIT),
> HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_DPB_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_DCPOP),
> HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_JSCVT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_JSCVT),
> HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_FCMA_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_FCMA),
> HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_LRCPC),
> + HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, HWCAP_ILRCPC),
> + HWCAP_CAP(SYS_ID_AA64MMFR2_EL1, ID_AA64MMFR2_AT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_USCAT),
> #ifdef CONFIG_ARM64_SVE
> HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_SVE_SHIFT, FTR_UNSIGNED, ID_AA64PFR0_SVE, CAP_HWCAP, HWCAP_SVE),
> #endif
> diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
> index 7f94623df8a5..e9ab7b3ed317 100644
> --- a/arch/arm64/kernel/cpuinfo.c
> +++ b/arch/arm64/kernel/cpuinfo.c
> @@ -77,6 +77,10 @@ static const char *const hwcap_str[] = {
> "sha512",
> "sve",
> "asimdfhm",
> + "dit",
> + "uscat",
> + "ilrcpc",
> + "flagm",
> NULL
> };
>
> --
> 2.14.3
>
>
* [PATCH 2/2] arm64: Expose Arm v8.4 features
2018-02-07 10:43 ` Dave Martin
@ 2018-02-07 10:51 ` Suzuki K Poulose
2018-02-07 11:55 ` Dave Martin
0 siblings, 1 reply; 9+ messages in thread
From: Suzuki K Poulose @ 2018-02-07 10:51 UTC (permalink / raw)
To: linux-arm-kernel
On 07/02/18 10:43, Dave Martin wrote:
> On Thu, Feb 01, 2018 at 10:38:38AM +0000, Suzuki K Poulose wrote:
>> Expose the new features introduced by the Arm v8.4 extensions to
>> the Arm v8-A profile.
>>
>> These include:
>>
>> 1) Data independent timing of instructions (DIT, exposed as HWCAP_DIT)
>> 2) Unaligned atomic instructions and single-copy atomicity of loads
>> and stores (AT, exposed as HWCAP_USCAT)
>> 3) LDAPR and STLR instructions with immediate offsets (extension to
>> LRCPC, exposed as HWCAP_ILRCPC)
>> 4) Flag manipulation instructions (TS, exposed as HWCAP_FLAGM).
>>
>> While at it, get rid of the RES0 entries in the cpu-feature-registers.txt
>> documentation.
>
> Should we write somewhere that fields that are not described here are
> implicitly RES0, or would that be too strong a statement?
Dave,
We already have the following description in the file:
"The following rules are applied to the value returned by the
infrastructure:
a) The value of an 'IMPLEMENTATION DEFINED' field is set to 0.
b) The value of a reserved field is populated with the reserved
value as defined by the architecture.
c) The value of a 'visible' field holds the system wide safe value
for the particular feature (except for MIDR_EL1, see section 4).
d) All other fields (i.e, invisible fields) are set to indicate
the feature is missing (as defined by the architecture).
"
So we don't have to bother whether the fields are RES0 or gain
definitions in the future, as long as we don't expose them.
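For completeness, here is a rough userspace sketch (not part of this
series) of consuming the sanitised value through the emulated MRS
interface. It assumes a toolchain that accepts the symbolic register
name, and checks HWCAP_CPUID first so that the emulation is actually
advertised:

#include <stdio.h>
#include <sys/auxv.h>
#include <asm/hwcap.h>

int main(void)
{
        unsigned long isar0, ts;

        if (!(getauxval(AT_HWCAP) & HWCAP_CPUID))
                return 1;       /* MRS emulation not advertised */

        /* Read the sanitised, system wide view of ID_AA64ISAR0_EL1 */
        asm("mrs %0, ID_AA64ISAR0_EL1" : "=r" (isar0));
        ts = (isar0 >> 52) & 0xf;       /* TS, bits [55:52] */
        printf("ID_AA64ISAR0_EL1.TS = %lu\n", ts);
        return 0;
}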
Cheers
Suzuki
>
> Cheers
> ---Dave
>
>> Cc: Dave Martin <dave.martin@arm.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>> ---
>> Documentation/arm64/cpu-feature-registers.txt | 18 ++++++++++--------
>> Documentation/arm64/elf_hwcaps.txt | 16 ++++++++++++++++
>> arch/arm64/include/asm/sysreg.h | 3 +++
>> arch/arm64/include/uapi/asm/hwcap.h | 4 ++++
>> arch/arm64/kernel/cpufeature.c | 7 +++++++
>> arch/arm64/kernel/cpuinfo.c | 4 ++++
>> 6 files changed, 44 insertions(+), 8 deletions(-)
>>
>> diff --git a/Documentation/arm64/cpu-feature-registers.txt b/Documentation/arm64/cpu-feature-registers.txt
>> index a70090b28b07..7964f03846b1 100644
>> --- a/Documentation/arm64/cpu-feature-registers.txt
>> +++ b/Documentation/arm64/cpu-feature-registers.txt
>> @@ -110,7 +110,7 @@ infrastructure:
>> x--------------------------------------------------x
>> | Name | bits | visible |
>> |--------------------------------------------------|
>> - | RES0 | [63-52] | n |
>> + | TS | [55-52] | y |
>> |--------------------------------------------------|
>> | FHM | [51-48] | y |
>> |--------------------------------------------------|
>> @@ -124,8 +124,6 @@ infrastructure:
>> |--------------------------------------------------|
>> | RDM | [31-28] | y |
>> |--------------------------------------------------|
>> - | RES0 | [27-24] | n |
>> - |--------------------------------------------------|
>> | ATOMICS | [23-20] | y |
>> |--------------------------------------------------|
>> | CRC32 | [19-16] | y |
>> @@ -135,8 +133,6 @@ infrastructure:
>> | SHA1 | [11-8] | y |
>> |--------------------------------------------------|
>> | AES | [7-4] | y |
>> - |--------------------------------------------------|
>> - | RES0 | [3-0] | n |
>> x--------------------------------------------------x
>>
>>
>> @@ -144,12 +140,10 @@ infrastructure:
>> x--------------------------------------------------x
>> | Name | bits | visible |
>> |--------------------------------------------------|
>> - | RES0 | [63-36] | n |
>> + | DIT | [51-48] | y |
>> |--------------------------------------------------|
>> | SVE | [35-32] | y |
>> |--------------------------------------------------|
>> - | RES0 | [31-28] | n |
>> - |--------------------------------------------------|
>> | GIC | [27-24] | n |
>> |--------------------------------------------------|
>> | AdvSIMD | [23-20] | y |
>> @@ -199,6 +193,14 @@ infrastructure:
>> | DPB | [3-0] | y |
>> x--------------------------------------------------x
>>
>> + 5) ID_AA64MMFR2_EL1 - Memory model feature register 2
>> +
>> + x--------------------------------------------------x
>> + | Name | bits | visible |
>> + |--------------------------------------------------|
>> + | AT | [35-32] | y |
>> + x--------------------------------------------------x
>> +
>> Appendix I: Example
>> ---------------------------
>>
>> diff --git a/Documentation/arm64/elf_hwcaps.txt b/Documentation/arm64/elf_hwcaps.txt
>> index 57324ee55ecc..d6aff2c5e9e2 100644
>> --- a/Documentation/arm64/elf_hwcaps.txt
>> +++ b/Documentation/arm64/elf_hwcaps.txt
>> @@ -162,3 +162,19 @@ HWCAP_SVE
>> HWCAP_ASIMDFHM
>>
>> Functionality implied by ID_AA64ISAR0_EL1.FHM == 0b0001.
>> +
>> +HWCAP_DIT
>> +
>> + Functionality implied by ID_AA64PFR0_EL1.DIT == 0b0001.
>> +
>> +HWCAP_USCAT
>> +
>> + Functionality implied by ID_AA64MMFR2_EL1.AT == 0b0001.
>> +
>> +HWCAP_ILRCPC
>> +
>> + Functionality implied by ID_AA64ISAR1_EL1.LRCPC == 0b0010.
>> +
>> +HWCAP_FLAGM
>> +
>> + Functionality implied by ID_AA64ISAR0_EL1.TS == 0b0001.
>> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
>> index 0e1960c59197..e7b9f154e476 100644
>> --- a/arch/arm64/include/asm/sysreg.h
>> +++ b/arch/arm64/include/asm/sysreg.h
>> @@ -490,6 +490,7 @@
>> #define SCTLR_EL1_BUILD_BUG_ON_MISSING_BITS BUILD_BUG_ON((SCTLR_EL1_SET ^ SCTLR_EL1_CLEAR) != ~0)
>>
>> /* id_aa64isar0 */
>> +#define ID_AA64ISAR0_TS_SHIFT 52
>> #define ID_AA64ISAR0_FHM_SHIFT 48
>> #define ID_AA64ISAR0_DP_SHIFT 44
>> #define ID_AA64ISAR0_SM4_SHIFT 40
>> @@ -511,6 +512,7 @@
>> /* id_aa64pfr0 */
>> #define ID_AA64PFR0_CSV3_SHIFT 60
>> #define ID_AA64PFR0_CSV2_SHIFT 56
>> +#define ID_AA64PFR0_DIT_SHIFT 48
>> #define ID_AA64PFR0_SVE_SHIFT 32
>> #define ID_AA64PFR0_RAS_SHIFT 28
>> #define ID_AA64PFR0_GIC_SHIFT 24
>> @@ -568,6 +570,7 @@
>> #define ID_AA64MMFR1_VMIDBITS_16 2
>>
>> /* id_aa64mmfr2 */
>> +#define ID_AA64MMFR2_AT_SHIFT 32
>> #define ID_AA64MMFR2_LVA_SHIFT 16
>> #define ID_AA64MMFR2_IESB_SHIFT 12
>> #define ID_AA64MMFR2_LSM_SHIFT 8
>> diff --git a/arch/arm64/include/uapi/asm/hwcap.h b/arch/arm64/include/uapi/asm/hwcap.h
>> index f018c3deea3b..17c65c8f33cb 100644
>> --- a/arch/arm64/include/uapi/asm/hwcap.h
>> +++ b/arch/arm64/include/uapi/asm/hwcap.h
>> @@ -44,5 +44,9 @@
>> #define HWCAP_SHA512 (1 << 21)
>> #define HWCAP_SVE (1 << 22)
>> #define HWCAP_ASIMDFHM (1 << 23)
>> +#define HWCAP_DIT (1 << 24)
>> +#define HWCAP_USCAT (1 << 25)
>> +#define HWCAP_ILRCPC (1 << 26)
>> +#define HWCAP_FLAGM (1 << 27)
>>
>> #endif /* _UAPI__ASM_HWCAP_H */
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index 1c39a635a71c..796e83dde9fc 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -123,6 +123,7 @@ cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused)
>> * sync with the documentation of the CPU feature register ABI.
>> */
>> static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
>> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_TS_SHIFT, 4, 0),
>> ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_FHM_SHIFT, 4, 0),
>> ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_DP_SHIFT, 4, 0),
>> ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM4_SHIFT, 4, 0),
>> @@ -148,6 +149,7 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
>> static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
>> ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
>> ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_DIT_SHIFT, 4, 0),
>> ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),
>> ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_RAS_SHIFT, 4, 0),
>> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
>> @@ -192,6 +194,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
>> };
>>
>> static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
>> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_AT_SHIFT, 4, 0),
>> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
>> /* While IESB is good to have, it is not fatal if we miss this on some CPUs */
>> ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IESB_SHIFT, 4, 0),
>> @@ -1083,14 +1086,18 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
>> HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SM4_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SM4),
>> HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_DP_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_ASIMDDP),
>> HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_FHM_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_ASIMDFHM),
>> + HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_TS_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_FLAGM),
>> HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, HWCAP_FP),
>> HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_FPHP),
>> HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, HWCAP_ASIMD),
>> HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_ASIMDHP),
>> + HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_DIT_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_DIT),
>> HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_DPB_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_DCPOP),
>> HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_JSCVT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_JSCVT),
>> HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_FCMA_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_FCMA),
>> HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_LRCPC),
>> + HWCAP_CAP(SYS_ID_AA64ISAR1_EL1, ID_AA64ISAR1_LRCPC_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, HWCAP_ILRCPC),
>> + HWCAP_CAP(SYS_ID_AA64MMFR2_EL1, ID_AA64MMFR2_AT_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_USCAT),
>> #ifdef CONFIG_ARM64_SVE
>> HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_SVE_SHIFT, FTR_UNSIGNED, ID_AA64PFR0_SVE, CAP_HWCAP, HWCAP_SVE),
>> #endif
>> diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
>> index 7f94623df8a5..e9ab7b3ed317 100644
>> --- a/arch/arm64/kernel/cpuinfo.c
>> +++ b/arch/arm64/kernel/cpuinfo.c
>> @@ -77,6 +77,10 @@ static const char *const hwcap_str[] = {
>> "sha512",
>> "sve",
>> "asimdfhm",
>> + "dit",
>> + "uscat",
>> + "ilrcpc",
>> + "flagm",
>> NULL
>> };
>>
>> --
>> 2.14.3
>>
>>
* [PATCH 1/2] arm64: Relax constraints on ID feature bits
2018-02-07 10:40 ` [PATCH 1/2] arm64: Relax constraints on ID feature bits Dave Martin
@ 2018-02-07 11:41 ` Suzuki K Poulose
2018-02-07 12:34 ` Dave Martin
0 siblings, 1 reply; 9+ messages in thread
From: Suzuki K Poulose @ 2018-02-07 11:41 UTC (permalink / raw)
To: linux-arm-kernel
On 07/02/18 10:40, Dave Martin wrote:
> On Thu, Feb 01, 2018 at 10:38:37AM +0000, Suzuki K Poulose wrote:
>> We treat most of the feature bits in the ID registers as STRICT,
>> implying that all CPUs must match the boot CPU state. However,
>> for most of the features we can handle a mismatch by using the
>> safe value, e.g. for HWCAPs and other features used by the
>> kernel. Relax the constraint on the feature bits whose mismatch
>> can be handled by the kernel.
>>
>> For VHE, a mismatch doesn't matter if the kernel is not using
>> it; if the kernel is indeed running in EL2 mode, a mismatch
>> results in a panic. Similarly, for the ASID bits we already take
>> care of conflicts.
>>
>> For other features like PAN and UAO, we enable them only if all
>> CPUs have them. For IESB, we set the SCTLR bit unconditionally
>> anyway.
>
> Do the remaining STRICT+LOWER_SAFE / NONSTRICT+EXACT cases still
> make sense?
That's a good point. I did take a look at them. Most of them are not
really used by the kernel, and some of them need additional checks to
make sure the "STRICT" requirement is enforced by other means
(e.g. ID_AA64MMFR1:VMID).
Here is the remaining list:
ID_AA64PFR0_EL1
- GIC - System register GIC interface. I think this can be made non-strict,
since we can now enforce it via capabilities (with the GIC_CPUIF
being a boot CPU feature). So we need to wait until that series
is merged.
- EL2 - This is a bit complex. This is STRICT only if all the other CPUs were
booted in EL2 and KVM is enabled. But then we need to add an extra
check for hotplugged CPUs.
ID_AA64MMFR0_EL1
- BIGENDEL - Again, the system uses the sanitised value, so LOWER_SAFE makes sense.
But there are no checks for hotplugged CPUs if the SETEND emulation
is enabled.
ID_AA64MMFR1_EL1
- LOR - Limited ordering regions. Not supported by the kernel, so we can
make this non-strict.
- HPD - Hierarchical permission disables. Currently unused by the kernel,
so it can be switched to non-strict.
- VMID - VMID bits width. This is currently STRICT+LOWER_SAFE. KVM uses the
sanitised value of the feature, so this can be NONSTRICT+LOWER_SAFE.
However, we need to ensure a hotplugged CPU complies with the sanitised
width (which may have been used by KVM).
To summarise, I can add the LOR/HPD changes, but the others require a bit
more work and can be done as a separate series.
> I've wondered in the past whether there is redundancy between the strict
> and type fields, but when adding entries I just copy-pasted similar ones
> rather than fully understanding what was going on...
I agree. These were defined before we started using the system wide safe
values and enforcing the capabilities on late/secondary CPUs. Now that
we have an infrastructure which makes sure that conflicts are handled,
we could relax the definitions a bit.
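As a rough illustration of what that infrastructure gives us (this is
not the actual kernel code, just the shape of the check), a late CPU is
only accepted if it still satisfies whatever the system has already
committed to:

/*
 * Illustrative sketch only: a late CPU is acceptable if every
 * capability the system already relies on is still met by that
 * CPU's own (LOWER_SAFE) field value.
 */
static bool example_late_cpu_ok(u64 cpu_field, u64 system_field,
                                bool capability_in_use)
{
        if (!capability_in_use)
                return true;    /* nothing depends on the feature yet */

        return cpu_field >= system_field;
}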
Cheers
Suzuki
>
> Cheers
> ---Dave
>
>>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Cc: Will Deacon <will.deacon@arm.com>
>> Cc: James Morse <james.morse@arm.com>
>> Cc: Dave Martin <dave.martin@arm.com>
>> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>> ---
>> arch/arm64/kernel/cpufeature.c | 66 ++++++++++++++++++++++--------------------
>> 1 file changed, 35 insertions(+), 31 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>> index 4b6f9051cf0c..1c39a635a71c 100644
>> --- a/arch/arm64/kernel/cpufeature.c
>> +++ b/arch/arm64/kernel/cpufeature.c
>> @@ -123,36 +123,36 @@ cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused)
>> * sync with the documentation of the CPU feature register ABI.
>> */
>> static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
>> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_FHM_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_DP_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM4_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM3_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA3_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_RDM_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_ATOMICS_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_CRC32_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA2_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA1_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_AES_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_FHM_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_DP_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM4_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SM3_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA3_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_RDM_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_ATOMICS_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_CRC32_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA2_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_SHA1_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_AES_SHIFT, 4, 0),
>> ARM64_FTR_END,
>> };
>>
>> static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
>> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_LRCPC_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_LRCPC_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_FCMA_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_JSCVT_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_DPB_SHIFT, 4, 0),
>> ARM64_FTR_END,
>> };
>>
>> static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
>> ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
>> ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_RAS_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_RAS_SHIFT, 4, 0),
>> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
>> - S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
>> - S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
>> + S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_ASIMD_SHIFT, 4, ID_AA64PFR0_ASIMD_NI),
>> + S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_FP_SHIFT, 4, ID_AA64PFR0_FP_NI),
>> /* Linux doesn't care about the EL3 */
>> ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL3_SHIFT, 4, 0),
>> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_EL2_SHIFT, 4, 0),
>> @@ -169,7 +169,8 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
>> /* Linux shouldn't care about secure memory */
>> ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_SNSMEM_SHIFT, 4, 0),
>> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_BIGENDEL_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ASID_SHIFT, 4, 0),
>> + /* We handle differing ASID widths by explicit checks to make sure the system is safe */
>> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ASID_SHIFT, 4, 0),
>> /*
>> * Differing PARange is fine as long as all peripherals and memory are mapped
>> * within the minimum PARange of all CPUs
>> @@ -179,20 +180,23 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
>> };
>>
>> static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
>> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_PAN_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_PAN_SHIFT, 4, 0),
>> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_LOR_SHIFT, 4, 0),
>> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HPD_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VHE_SHIFT, 4, 0),
>> + /* When CONFIG_ARM64_VHE is enabled, we ensure that there is no conflict */
>> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VHE_SHIFT, 4, 0),
>> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VMIDBITS_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HADBS_SHIFT, 4, 0),
>> + /* We can run a mix of CPUs with and without the support for HW management of AF/DBM bits */
>> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HADBS_SHIFT, 4, 0),
>> ARM64_FTR_END,
>> };
>>
>> static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
>> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IESB_SHIFT, 4, 0),
>> + /* While IESB is good to have, it is not fatal if we miss this on some CPUs */
>> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IESB_SHIFT, 4, 0),
>> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LSM_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_UAO_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_UAO_SHIFT, 4, 0),
>> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_CNP_SHIFT, 4, 0),
>> ARM64_FTR_END,
>> };
>> @@ -259,12 +263,12 @@ static const struct arm64_ftr_bits ftr_dczid[] = {
>>
>>
>> static const struct arm64_ftr_bits ftr_id_isar5[] = {
>> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_RDM_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_CRC32_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA2_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA1_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_AES_SHIFT, 4, 0),
>> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_ISAR5_SEVL_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_ISAR5_RDM_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_ISAR5_CRC32_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA2_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_ISAR5_SHA1_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_ISAR5_AES_SHIFT, 4, 0),
>> + ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_ISAR5_SEVL_SHIFT, 4, 0),
>> ARM64_FTR_END,
>> };
>>
>> --
>> 2.14.3
>>
>>
>> _______________________________________________
>> linux-arm-kernel mailing list
>> linux-arm-kernel at lists.infradead.org
>> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
* [PATCH 2/2] arm64: Expose Arm v8.4 features
2018-02-07 10:51 ` Suzuki K Poulose
@ 2018-02-07 11:55 ` Dave Martin
0 siblings, 0 replies; 9+ messages in thread
From: Dave Martin @ 2018-02-07 11:55 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Feb 07, 2018 at 10:51:55AM +0000, Suzuki K Poulose wrote:
> On 07/02/18 10:43, Dave Martin wrote:
> >On Thu, Feb 01, 2018 at 10:38:38AM +0000, Suzuki K Poulose wrote:
> >>Expose the new features introduced by Arm v8.4 extensions to
> >>Arm v8-A profile.
> >>
> >>These include :
> >>
> >> 1) Data independent timing of instructions. (DIT, exposed as HWCAP_DIT)
> >> 2) Unaligned atomic instructions and Single-copy atomicity of loads
> >> and stores. (AT, exposed as HWCAP_USCAT)
> >> 3) LDAPR and STLR instructions with immediate offsets (extension to
> >> LRCPC, exposed as HWCAP_ILRCPC)
> >> 4) Flag manipulation instructions (TS, exposed as HWCAP_FLAGM).
> >>
> >>While at it get rid of the RES0 entries in the cpu-feature-registers.txt
> >>documentation.
> >
> >Should we write somewhere than fields that are not described here are
> >implicitly RES0, or would that be too strong a statement?
>
> Dave,
>
> We already have the following description in the file :
>
> "The following rules are applied to the value returned by the
> infrastructure:
>
> a) The value of an 'IMPLEMENTATION DEFINED' field is set to 0.
> b) The value of a reserved field is populated with the reserved
> value as defined by the architecture.
> c) The value of a 'visible' field holds the system wide safe value
> for the particular feature (except for MIDR_EL1, see section 4).
> d) All other fields (i.e, invisible fields) are set to indicate
> the feature is missing (as defined by the architecture).
> "
>
> So we don't have to bother if the fields are RES0 or they get
> some definitions in the future, as long as we don't expose them.
Agreed, that does seem sufficient:
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
[...]
Cheers
---Dave
* [PATCH 1/2] arm64: Relax constraints on ID feature bits
2018-02-07 11:41 ` Suzuki K Poulose
@ 2018-02-07 12:34 ` Dave Martin
2018-02-07 13:24 ` Suzuki K Poulose
0 siblings, 1 reply; 9+ messages in thread
From: Dave Martin @ 2018-02-07 12:34 UTC (permalink / raw)
To: linux-arm-kernel
On Wed, Feb 07, 2018 at 11:41:17AM +0000, Suzuki K Poulose wrote:
> On 07/02/18 10:40, Dave Martin wrote:
> >On Thu, Feb 01, 2018 at 10:38:37AM +0000, Suzuki K Poulose wrote:
> >>We treat most of the feature bits in the ID registers as STRICT,
> >>implying that all CPUs should match it the boot CPU state. However,
> >>for most of the features, we can handle if there are any mismatches
> >>by using the safe value. e.g, HWCAPs and other features used by the
> >>kernel. Relax the constraint on the feature bits whose mismatch can
> >>be handled by the kernel.
> >>
> >>For VHE, if there is a mismatch we don't care if the kernel is
> >>not using it. If the kernel is indeed running in EL2 mode, then
> >>the mismatches results in a panic. Similarly for ASID bits we
> >>take care of conflicts.
> >>
> >>For other features like, PAN, UAO we only enable it only if we
> >>have it on all the CPUs. For IESB, we set the SCTLR bit unconditionally
> >>anyways.
> >
> >Do the remaining STRICT+LOWER_SAFE / NONSTRICT+EXACT cases still
> >make sense?
>
> Thats a good point. I did take a look at them. Most of them were not
> really used by the kernel and some of them needed some additional checks
> to make sure the "STRICT" is enforced via other checks. (e.g IDAA64MMFR1:VMID).
>
> Here is the remaining list :
>
> IDAA64PFR0_EL1
> - GIC - System register GIC itnerface. I think this can be made non-strict,
> since we can now enforce it via capabilities (with the GIC_CPUIF
> being a Boot CPU feature). So, we need to wait until that series
> is merged.
>
> - EL2 - This is a bit complex. This is STRICT only if all the other CPUs were
> booted in EL2 and KVM is enabled. But then we need to add an extra
> check for hotplugged CPUs.
>
> IDAA64MMFR0_EL1
> - BIGENDEL - Again, the system uses sanitised value, so LOWER_SAFE makes sense.
> But there are no checks for hotplugged CPUs, if the SETEND emulation
> is enabled.
>
> ID_AA64MMFR1_EL
> - LOR - Limited Ordering regions, Not supported by the kernel. So we can
> make this non-strict.
> - HPD - Hierarchical permission disables. Currently unused by the kernel.
> So, can be switched to non-strict
> - VMID - VMIDbits width. This is currently STRICT+LOWER_SAFE. The KVM uses
> sanitised value of the feature, so this can be NONSTRICT+LOWER_SAFE.
> However, we need to ensure a hotplugged CPU complies to the sanitised
> width (which may have been used by KVM)
>
>
> To summarise, I can add LOR/HPD changes. But the others requires a bit more
> work and can be done as a separate series.
>
> >I've wondered in the past whether there is redundancy between the strict
> >and type fields, but when adding entries I just copy-pasted similar ones
> >rather than fully understanding what was going on...
>
> I agree. These were defined before we started using the system wide safe
> values and enforcing the capabilities on late/secondary CPUs. Now that
> we have an infrastructure which makes sure that conflicts are handled,
> we could relax the definitions a bit.
OK, this sounds reasonable, and I think it all falls under "potential
future cleanups".
A few nits below.
[...]
> >>diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
[...]
> >>- ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ASID_SHIFT, 4, 0),
> >>+ /* We handle differing ASID widths by explicit checks to make sure the system is safe */
Where is this checked? Because of the risk of breaking this
relationship during maintenance, perhaps we should have a comment in
both places.
> >>+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ASID_SHIFT, 4, 0),
> >> /*
> >> * Differing PARange is fine as long as all peripherals and memory are mapped
> >> * within the minimum PARange of all CPUs
> >>@@ -179,20 +180,23 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
> >> };
> >> static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
> >>- ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_PAN_SHIFT, 4, 0),
> >>+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_PAN_SHIFT, 4, 0),
> >> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_LOR_SHIFT, 4, 0),
> >> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HPD_SHIFT, 4, 0),
> >>- ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VHE_SHIFT, 4, 0),
> >>+ /* When CONFIG_ARM64_VHE is enabled, we ensure that there is no conflict */
Similarly to _ASID, where/how?
> >>+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VHE_SHIFT, 4, 0),
> >> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VMIDBITS_SHIFT, 4, 0),
> >>- ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HADBS_SHIFT, 4, 0),
> >>+ /* We can run a mix of CPUs with and without the support for HW management of AF/DBM bits */
> >>+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_HADBS_SHIFT, 4, 0),
> >> ARM64_FTR_END,
> >> };
> >> static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
> >> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
> >>- ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IESB_SHIFT, 4, 0),
> >>+ /* While IESB is good to have, it is not fatal if we miss this on some CPUs */
Maybe this deserves slightly more explanation. We could say that
lacking implicit IESB at the exception boundary on a subset of CPUs is no
worse than lacking it on all of them.
Cheers
---Dave
* [PATCH 1/2] arm64: Relax constraints on ID feature bits
2018-02-07 12:34 ` Dave Martin
@ 2018-02-07 13:24 ` Suzuki K Poulose
0 siblings, 0 replies; 9+ messages in thread
From: Suzuki K Poulose @ 2018-02-07 13:24 UTC (permalink / raw)
To: linux-arm-kernel
On 07/02/18 12:34, Dave Martin wrote:
> On Wed, Feb 07, 2018 at 11:41:17AM +0000, Suzuki K Poulose wrote:
>> On 07/02/18 10:40, Dave Martin wrote:
...
>> To summarise, I can add LOR/HPD changes. But the others requires a bit more
>> work and can be done as a separate series.
>>
>>> I've wondered in the past whether there is redundancy between the strict
>>> and type fields, but when adding entries I just copy-pasted similar ones
>>> rather than fully understanding what was going on...
>>
>> I agree. These were defined before we started using the system wide safe
>> values and enforcing the capabilities on late/secondary CPUs. Now that
>> we have an infrastructure which makes sure that conflicts are handled,
>> we could relax the definitions a bit.
>
> OK, I this sounds reasonable and I think it all falls under "potential
> future cleanups".
>
> A few nits below.
>
> [...]
>
>>>> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
>
> [...]
>
>>>> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ASID_SHIFT, 4, 0),
>>>> + /* We handle differing ASID widths by explicit checks to make sure the system is safe */
>
> Where is this checked? Because of the risk of breaking this
> relationship during maintenance, perhaps we should have a comment in
> both places.
This is checked by verify_cpu_asid_bits(), called from check_early_cpu_features()
on all secondary CPUs. I will mention it in the comment.
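Something like this, perhaps (exact wording to be settled when I respin):

	/*
	 * Differing ASID widths are handled by verify_cpu_asid_bits(),
	 * called from check_early_cpu_features() on each secondary CPU,
	 * so we don't need to be strict here.
	 */
	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_ASID_SHIFT, 4, 0),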
>>>> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VHE_SHIFT, 4, 0),
>>>> + /* When CONFIG_ARM64_VHE is enabled, we ensure that there is no conflict */
>
> Similarly to _ASID, where/how?
>
This is done via verify_cpu_run_el() from check_early_cpu_features(). But with the
rewrite of the capabilities framework, this check moves into the capabilities
infrastructure as a BOOT_CPU feature.
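So the comment could read roughly as below (again, subject to the capabilities
rework landing first):

	/*
	 * A conflicting VHE configuration is caught by verify_cpu_run_el()
	 * from check_early_cpu_features() (moving to a BOOT CPU capability
	 * check with the capabilities rework), so this need not be strict.
	 */
	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR1_VHE_SHIFT, 4, 0),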
>>>> static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
>>>> ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
>>>> - ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IESB_SHIFT, 4, 0),
>>>> + /* While IESB is good to have, it is not fatal if we miss this on some CPUs */
>
> Maybe this deserves slightly more explanation. We could say that
> lacking implicit IESB on exception boundary on a subset of CPUs is no
> worse than lacking it on all of them.
OK.
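Will expand the comment along the lines you suggest, something like:

	/*
	 * Lacking the implicit IESB at the exception boundary on a subset
	 * of the CPUs is no worse than lacking it on all of them, so a
	 * mismatch is not fatal.
	 */
	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IESB_SHIFT, 4, 0),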
Cheers
Suzuki