* [PATCH v2 1/4] arm64: head.S: Initialise MPAM EL2 registers and disable traps
2023-12-07 15:08 [PATCH v2 0/4] KVM: arm64: Hide unsupported MPAM from the guest James Morse
@ 2023-12-07 15:08 ` James Morse
2023-12-07 15:08 ` [PATCH v2 2/4] arm64: cpufeature: discover CPU support for MPAM James Morse
` (3 subsequent siblings)
4 siblings, 0 replies; 10+ messages in thread
From: James Morse @ 2023-12-07 15:08 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: Marc Zyngier, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
James Morse
Add code to head.S's el2_setup to detect MPAM and disable any EL2 traps.
This register resets to an unknown value; setting it to the default
partitions/pmg before we enable the MMU is the best thing to do.
Kexec/kdump will depend on this if the previous kernel left the CPU
configured with a restrictive configuration.
If Linux is booted at the highest implemented exception level, el2_setup
will clear the enable bit, disabling MPAM.
This code can't be enabled until a subsequent patch adds the Kconfig
and cpufeature boilerplate.
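As a plain-C illustration (not part of the patch), the field extraction the new macro performs with ubfx is a shift-and-mask; the shift used here assumes the architectural bit position of the MPAM field in ID_AA64PFR0_EL1, bit 40:

```c
#include <stdint.h>

/* Illustrative only: bit position of the MPAM field, as used by the
 * ubfx in __init_el2_mpam (an assumption for this sketch). */
#define ID_AA64PFR0_EL1_MPAM_SHIFT 40

/* C equivalent of: ubfx x0, x1, #ID_AA64PFR0_EL1_MPAM_SHIFT, #4 */
static uint64_t extract_mpam_field(uint64_t id_aa64pfr0)
{
	return (id_aa64pfr0 >> ID_AA64PFR0_EL1_MPAM_SHIFT) & 0xf;
}
```

If the extracted value is zero, the macro skips the MPAM setup entirely.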
Signed-off-by: James Morse <james.morse@arm.com>
---
arch/arm64/include/asm/el2_setup.h | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
index b7afaa026842..fb4ea2a94e10 100644
--- a/arch/arm64/include/asm/el2_setup.h
+++ b/arch/arm64/include/asm/el2_setup.h
@@ -208,6 +208,21 @@
msr spsr_el2, x0
.endm
+.macro __init_el2_mpam
+#ifdef CONFIG_ARM64_MPAM
+ /* Memory Partitioning And Monitoring: disable EL2 traps */
+ mrs x1, id_aa64pfr0_el1
+ ubfx x0, x1, #ID_AA64PFR0_EL1_MPAM_SHIFT, #4
+ cbz x0, .Lskip_mpam_\@ // skip if no MPAM
+ msr_s SYS_MPAM2_EL2, xzr // use the default partition
+ // and disable lower traps
+ mrs_s x0, SYS_MPAMIDR_EL1
+ tbz x0, #17, .Lskip_mpam_\@ // skip if no MPAMHCR reg
+ msr_s SYS_MPAMHCR_EL2, xzr // clear TRAP_MPAMIDR_EL1 -> EL2
+.Lskip_mpam_\@:
+#endif /* CONFIG_ARM64_MPAM */
+.endm
+
/**
* Initialize EL2 registers to sane values. This should be called early on all
* cores that were booted in EL2. Note that everything gets initialised as
@@ -225,6 +240,7 @@
__init_el2_stage2
__init_el2_gicv3
__init_el2_hstr
+ __init_el2_mpam
__init_el2_nvhe_idregs
__init_el2_cptr
__init_el2_fgt
--
2.39.2
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
* [PATCH v2 2/4] arm64: cpufeature: discover CPU support for MPAM
2023-12-07 15:08 [PATCH v2 0/4] KVM: arm64: Hide unsupported MPAM from the guest James Morse
2023-12-07 15:08 ` [PATCH v2 1/4] arm64: head.S: Initialise MPAM EL2 registers and disable traps James Morse
@ 2023-12-07 15:08 ` James Morse
2023-12-07 15:08 ` [PATCH v2 3/4] KVM: arm64: Fix missing traps of guest accesses to the MPAM registers James Morse
` (2 subsequent siblings)
4 siblings, 0 replies; 10+ messages in thread
From: James Morse @ 2023-12-07 15:08 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: Marc Zyngier, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
James Morse
ARMv8.4 adds support for 'Memory Partitioning And Monitoring' (MPAM)
which describes an interface to cache and bandwidth controls wherever
they appear in the system.
Add support to detect MPAM. Like SVE, MPAM has an extra id register that
describes the virtualisation support, which is optional. Detect this
separately so we can detect mismatched/insane systems, but still use
MPAM on the host even if the virtualisation support is missing.
MPAM needs enabling at the highest implemented exception level; otherwise
the register accesses trap. The 'enabled' flag is accessible to lower
exception levels, but it's in a register that traps when MPAM isn't enabled.
The cpufeature 'matches' hook is extended to test this on one of the
CPUs, so that firmware can emulate MPAM as disabled if it is reserved
for use by the secure world.
(If you have a boot failure that bisects here, it's likely your CPUs
advertise MPAM in the id registers, but firmware failed to either enable
MPAM, or emulate the trap as if it were disabled.)
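For illustration only (a standalone C sketch, not kernel code), the firmware-enabled check that the 'matches' hook adds boils down to testing the MPAMEN bit, bit 63 of MPAM1_EL1:

```c
#include <stdint.h>

/* Sketch of the test_has_mpam() check: MPAM1_EL1.MPAMEN is bit 63.
 * If firmware didn't enable MPAM (or emulates it as disabled), this
 * reads as zero and the capability is not detected. */
#define MPAM_SYSREG_EN (1ULL << 63)

static int mpam_enabled_by_firmware(uint64_t mpam1_el1)
{
	return (mpam1_el1 & MPAM_SYSREG_EN) != 0;
}
```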
Signed-off-by: James Morse <james.morse@arm.com>
---
.../arch/arm64/cpu-feature-registers.rst | 2 +
arch/arm64/Kconfig | 19 ++++-
arch/arm64/include/asm/cpu.h | 1 +
arch/arm64/include/asm/cpufeature.h | 13 ++++
arch/arm64/include/asm/mpam.h | 75 +++++++++++++++++++
arch/arm64/include/asm/sysreg.h | 8 ++
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/cpufeature.c | 67 +++++++++++++++++
arch/arm64/kernel/cpuinfo.c | 4 +
arch/arm64/kernel/mpam.c | 8 ++
arch/arm64/tools/cpucaps | 1 +
arch/arm64/tools/sysreg | 32 ++++++++
12 files changed, 230 insertions(+), 1 deletion(-)
create mode 100644 arch/arm64/include/asm/mpam.h
create mode 100644 arch/arm64/kernel/mpam.c
diff --git a/Documentation/arch/arm64/cpu-feature-registers.rst b/Documentation/arch/arm64/cpu-feature-registers.rst
index 44f9bd78539d..253e9743de2f 100644
--- a/Documentation/arch/arm64/cpu-feature-registers.rst
+++ b/Documentation/arch/arm64/cpu-feature-registers.rst
@@ -152,6 +152,8 @@ infrastructure:
+------------------------------+---------+---------+
| DIT | [51-48] | y |
+------------------------------+---------+---------+
+ | MPAM | [43-40] | n |
+ +------------------------------+---------+---------+
| SVE | [35-32] | y |
+------------------------------+---------+---------+
| GIC | [27-24] | n |
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 7b071a00425d..022af9712f90 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1959,7 +1959,24 @@ config ARM64_TLB_RANGE
The feature introduces new assembly instructions, and they were
support when binutils >= 2.30.
-endmenu # "ARMv8.4 architectural features"
+config ARM64_MPAM
+ bool "Enable support for MPAM"
+ help
+ Memory Partitioning and Monitoring is an optional extension
+ that allows the CPUs to mark load and store transactions with
+ labels for partition-id and performance-monitoring-group.
+ System components, such as the caches, can use the partition-id
+ to apply a performance policy. MPAM monitors can use the
+ partition-id and performance-monitoring-group to measure the
+ cache occupancy or data throughput.
+
+ Use of this extension requires CPU support, support in the
+ memory system components (MSC), and a description from firmware
+ of where the MSC are in the address space.
+
+ MPAM is exposed to user-space via the resctrl pseudo filesystem.
+
+endmenu
menu "ARMv8.5 architectural features"
diff --git a/arch/arm64/include/asm/cpu.h b/arch/arm64/include/asm/cpu.h
index f3034099fd95..9fe9d487f124 100644
--- a/arch/arm64/include/asm/cpu.h
+++ b/arch/arm64/include/asm/cpu.h
@@ -47,6 +47,7 @@ struct cpuinfo_arm64 {
u64 reg_revidr;
u64 reg_gmid;
u64 reg_smidr;
+ u64 reg_mpamidr;
u64 reg_id_aa64dfr0;
u64 reg_id_aa64dfr1;
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index f6d416fe49b0..9d324b48612f 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -610,6 +610,13 @@ static inline bool id_aa64pfr1_sme(u64 pfr1)
return val > 0;
}
+static inline bool id_aa64pfr0_mpam(u64 pfr0)
+{
+ u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL1_MPAM_SHIFT);
+
+ return val > 0;
+}
+
static inline bool id_aa64pfr1_mte(u64 pfr1)
{
u32 val = cpuid_feature_extract_unsigned_field(pfr1, ID_AA64PFR1_EL1_MTE_SHIFT);
@@ -819,6 +826,12 @@ static inline bool system_supports_tlb_range(void)
return alternative_has_cap_unlikely(ARM64_HAS_TLB_RANGE);
}
+static inline bool cpus_support_mpam(void)
+{
+ return IS_ENABLED(CONFIG_ARM64_MPAM) &&
+ cpus_have_final_cap(ARM64_MPAM);
+}
+
int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
bool try_emulate_mrs(struct pt_regs *regs, u32 isn);
diff --git a/arch/arm64/include/asm/mpam.h b/arch/arm64/include/asm/mpam.h
new file mode 100644
index 000000000000..82d4f6008aeb
--- /dev/null
+++ b/arch/arm64/include/asm/mpam.h
@@ -0,0 +1,75 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021 Arm Ltd. */
+
+#ifndef __ASM__MPAM_H
+#define __ASM__MPAM_H
+
+#include <linux/bitops.h>
+#include <linux/init.h>
+#include <linux/jump_label.h>
+
+#include <asm/cpucaps.h>
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
+
+/* CPU Registers */
+#define MPAM_SYSREG_EN BIT_ULL(63)
+#define MPAM_SYSREG_TRAP_IDR BIT_ULL(58)
+#define MPAM_SYSREG_TRAP_MPAM0_EL1 BIT_ULL(49)
+#define MPAM_SYSREG_TRAP_MPAM1_EL1 BIT_ULL(48)
+#define MPAM_SYSREG_PMG_D GENMASK(47, 40)
+#define MPAM_SYSREG_PMG_I GENMASK(39, 32)
+#define MPAM_SYSREG_PARTID_D GENMASK(31, 16)
+#define MPAM_SYSREG_PARTID_I GENMASK(15, 0)
+
+#define MPAMIDR_PMG_MAX GENMASK(40, 32)
+#define MPAMIDR_PMG_MAX_SHIFT 32
+#define MPAMIDR_PMG_MAX_LEN 8
+#define MPAMIDR_VPMR_MAX GENMASK(20, 18)
+#define MPAMIDR_VPMR_MAX_SHIFT 18
+#define MPAMIDR_VPMR_MAX_LEN 3
+#define MPAMIDR_HAS_HCR BIT(17)
+#define MPAMIDR_HAS_HCR_SHIFT 17
+#define MPAMIDR_PARTID_MAX GENMASK(15, 0)
+#define MPAMIDR_PARTID_MAX_SHIFT 0
+#define MPAMIDR_PARTID_MAX_LEN 16
+
+#define MPAMHCR_EL0_VPMEN BIT_ULL(0)
+#define MPAMHCR_EL1_VPMEN BIT_ULL(1)
+#define MPAMHCR_GSTAPP_PLK BIT_ULL(8)
+#define MPAMHCR_TRAP_MPAMIDR BIT_ULL(31)
+
+/* Properties of the VPM registers */
+#define MPAM_VPM_NUM_REGS 8
+#define MPAM_VPM_PARTID_LEN 16
+#define MPAM_VPM_PARTID_MASK 0xffff
+#define MPAM_VPM_REG_LEN 64
+#define MPAM_VPM_PARTIDS_PER_REG (MPAM_VPM_REG_LEN / MPAM_VPM_PARTID_LEN)
+#define MPAM_VPM_MAX_PARTID (MPAM_VPM_NUM_REGS * MPAM_VPM_PARTIDS_PER_REG)
+
+DECLARE_STATIC_KEY_FALSE(arm64_mpam_has_hcr);
+
+/* check whether all CPUs have MPAM support */
+static inline bool mpam_cpus_have_feature(void)
+{
+ if (IS_ENABLED(CONFIG_ARM64_MPAM))
+ return cpus_have_final_cap(ARM64_MPAM);
+ return false;
+}
+
+/* check whether all CPUs have MPAM virtualisation support */
+static inline bool mpam_cpus_have_mpam_hcr(void)
+{
+ if (IS_ENABLED(CONFIG_ARM64_MPAM))
+ return static_branch_unlikely(&arm64_mpam_has_hcr);
+ return false;
+}
+
+/* enable MPAM virtualisation support */
+static inline void __init __enable_mpam_hcr(void)
+{
+ if (IS_ENABLED(CONFIG_ARM64_MPAM))
+ static_branch_enable(&arm64_mpam_has_hcr);
+}
+
+#endif /* __ASM__MPAM_H */
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 5e65f51c10d2..8bf4c359e19f 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -535,6 +535,13 @@
#define SYS_MPAMVPM6_EL2 __SYS__MPAMVPMx_EL2(6)
#define SYS_MPAMVPM7_EL2 __SYS__MPAMVPMx_EL2(7)
+#define SYS_MPAMHCR_EL2 sys_reg(3, 4, 10, 4, 0)
+#define SYS_MPAMVPMV_EL2 sys_reg(3, 4, 10, 4, 1)
+#define SYS_MPAM2_EL2 sys_reg(3, 4, 10, 5, 0)
+
+#define __VPMn_op2(n) ((n) & 0x7)
+#define SYS_MPAM_VPMn_EL2(n) sys_reg(3, 4, 10, 6, __VPMn_op2(n))
+
#define SYS_VBAR_EL2 sys_reg(3, 4, 12, 0, 0)
#define SYS_RVBAR_EL2 sys_reg(3, 4, 12, 0, 1)
#define SYS_RMR_EL2 sys_reg(3, 4, 12, 0, 2)
@@ -622,6 +629,7 @@
#define SYS_PMSCR_EL12 sys_reg(3, 5, 9, 9, 0)
#define SYS_MAIR_EL12 sys_reg(3, 5, 10, 2, 0)
#define SYS_AMAIR_EL12 sys_reg(3, 5, 10, 3, 0)
+#define SYS_MPAM1_EL12 sys_reg(3, 5, 10, 5, 0)
#define SYS_VBAR_EL12 sys_reg(3, 5, 12, 0, 0)
#define SYS_CONTEXTIDR_EL12 sys_reg(3, 5, 13, 0, 1)
#define SYS_SCXTNUM_EL12 sys_reg(3, 5, 13, 0, 7)
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index d95b3d6b471a..685e0a58a4c6 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -69,6 +69,7 @@ obj-$(CONFIG_CRASH_DUMP) += crash_dump.o
obj-$(CONFIG_CRASH_CORE) += crash_core.o
obj-$(CONFIG_ARM_SDE_INTERFACE) += sdei.o
obj-$(CONFIG_ARM64_PTR_AUTH) += pointer_auth.o
+obj-$(CONFIG_ARM64_MPAM) += mpam.o
obj-$(CONFIG_ARM64_MTE) += mte.o
obj-y += vdso-wrap.o
obj-$(CONFIG_COMPAT_VDSO) += vdso32-wrap.o
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 91d2d6714969..609165eb89c6 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -84,6 +84,7 @@
#include <asm/insn.h>
#include <asm/kvm_host.h>
#include <asm/mmu_context.h>
+#include <asm/mpam.h>
#include <asm/mte.h>
#include <asm/processor.h>
#include <asm/smp.h>
@@ -613,6 +614,14 @@ static const struct arm64_ftr_bits ftr_id_dfr1[] = {
ARM64_FTR_END,
};
+static const struct arm64_ftr_bits ftr_mpamidr[] = {
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, MPAMIDR_PMG_MAX_SHIFT, MPAMIDR_PMG_MAX_LEN, 0), /* PMG_MAX */
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, MPAMIDR_VPMR_MAX_SHIFT, MPAMIDR_VPMR_MAX_LEN, 0), /* VPMR_MAX */
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MPAMIDR_HAS_HCR_SHIFT, 1, 0), /* HAS_HCR */
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, MPAMIDR_PARTID_MAX_SHIFT, MPAMIDR_PARTID_MAX_LEN, 0), /* PARTID_MAX */
+ ARM64_FTR_END,
+};
+
/*
* Common ftr bits for a 32bit register with all hidden, strict
* attributes, with 4bit feature fields and a default safe value of
@@ -725,6 +734,9 @@ static const struct __ftr_reg_entry {
ARM64_FTR_REG(SYS_ID_AA64MMFR2_EL1, ftr_id_aa64mmfr2),
ARM64_FTR_REG(SYS_ID_AA64MMFR3_EL1, ftr_id_aa64mmfr3),
+ /* Op1 = 0, CRn = 10, CRm = 4 */
+ ARM64_FTR_REG(SYS_MPAMIDR_EL1, ftr_mpamidr),
+
/* Op1 = 1, CRn = 0, CRm = 0 */
ARM64_FTR_REG(SYS_GMID_EL1, ftr_gmid),
@@ -1079,6 +1091,9 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
cpacr_restore(cpacr);
}
+ if (id_aa64pfr0_mpam(info->reg_id_aa64pfr0))
+ init_cpu_ftr_reg(SYS_MPAMIDR_EL1, info->reg_mpamidr);
+
if (id_aa64pfr1_mte(info->reg_id_aa64pfr1))
init_cpu_ftr_reg(SYS_GMID_EL1, info->reg_gmid);
@@ -1347,6 +1362,11 @@ void update_cpu_features(int cpu,
cpacr_restore(cpacr);
}
+ if (id_aa64pfr0_mpam(info->reg_id_aa64pfr0)) {
+ taint |= check_update_ftr_reg(SYS_MPAMIDR_EL1, cpu,
+ info->reg_mpamidr, boot->reg_mpamidr);
+ }
+
/*
* The kernel uses the LDGM/STGM instructions and the number of tags
* they read/write depends on the GMID_EL1.BS field. Check that the
@@ -2265,6 +2285,42 @@ cpucap_panic_on_conflict(const struct arm64_cpu_capabilities *cap)
return !!(cap->type & ARM64_CPUCAP_PANIC_ON_CONFLICT);
}
+static bool __maybe_unused
+test_has_mpam(const struct arm64_cpu_capabilities *entry, int scope)
+{
+ if (!has_cpuid_feature(entry, scope))
+ return false;
+
+ /* Check firmware actually enabled MPAM on this cpu. */
+ return (read_sysreg_s(SYS_MPAM1_EL1) & MPAM_SYSREG_EN);
+}
+
+static void __maybe_unused
+cpu_enable_mpam(const struct arm64_cpu_capabilities *entry)
+{
+ /*
+ * Access by the kernel (at EL1) should use the reserved PARTID
+ * which is configured unrestricted. This avoids priority-inversion
+ * where latency sensitive tasks have to wait for a task that has
+ * been throttled to release the lock.
+ */
+ write_sysreg_s(0, SYS_MPAM1_EL1);
+
+ /* The EL0 system register is not yet per-task, zero that too. */
+ write_sysreg_s(0, SYS_MPAM0_EL1);
+}
+
+static void mpam_extra_caps(void)
+{
+ u64 idr = read_sanitised_ftr_reg(SYS_MPAMIDR_EL1);
+
+ if (!IS_ENABLED(CONFIG_ARM64_MPAM))
+ return;
+
+ if (idr & MPAMIDR_HAS_HCR)
+ __enable_mpam_hcr();
+}
+
static const struct arm64_cpu_capabilities arm64_features[] = {
{
.capability = ARM64_ALWAYS_BOOT,
@@ -2735,6 +2791,16 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = has_cpuid_feature,
ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, EVT, IMP)
},
+#ifdef CONFIG_ARM64_MPAM
+ {
+ .desc = "Memory Partitioning And Monitoring",
+ .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+ .capability = ARM64_MPAM,
+ .matches = test_has_mpam,
+ .cpu_enable = cpu_enable_mpam,
+ ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, MPAM, 1)
+ },
+#endif
{},
};
@@ -3390,6 +3456,7 @@ void __init setup_user_features(void)
}
minsigstksz_setup();
+ mpam_extra_caps();
}
static int enable_mismatched_32bit_el0(unsigned int cpu)
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index a257da7b56fe..f117faa82ce5 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -463,6 +463,10 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0))
__cpuinfo_store_cpu_32bit(&info->aarch32);
+ if (IS_ENABLED(CONFIG_ARM64_MPAM) &&
+ id_aa64pfr0_mpam(info->reg_id_aa64pfr0))
+ info->reg_mpamidr = read_cpuid(MPAMIDR_EL1);
+
cpuinfo_detect_icache_policy(info);
}
diff --git a/arch/arm64/kernel/mpam.c b/arch/arm64/kernel/mpam.c
new file mode 100644
index 000000000000..a29dc58c2da5
--- /dev/null
+++ b/arch/arm64/kernel/mpam.c
@@ -0,0 +1,8 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021 Arm Ltd. */
+
+#include <asm/mpam.h>
+
+#include <linux/jump_label.h>
+
+DEFINE_STATIC_KEY_FALSE(arm64_mpam_has_hcr);
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index b98c38288a9d..e6f425633d8d 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -56,6 +56,7 @@ HW_DBM
KVM_HVHE
KVM_PROTECTED_MODE
MISMATCHED_CACHE_TYPE
+MPAM
MTE
MTE_ASYMM
SME
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 96cbeeab4eec..6a120f987851 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -2536,6 +2536,22 @@ Res0 1
Field 0 EN
EndSysreg
+Sysreg MPAMIDR_EL1 3 0 10 4 4
+Res0 63:62
+Field 61 HAS_SDEFLT
+Field 60 HAS_FORCE_NS
+Field 59 SP4
+Field 58 HAS_TIDR
+Field 57 HAS_ALTSP
+Res0 56:40
+Field 39:32 PMG_MAX
+Res0 31:21
+Field 20:18 VPMR_MAX
+Field 17 HAS_HCR
+Res0 16
+Field 15:0 PARTID_MAX
+EndSysreg
+
Sysreg LORID_EL1 3 0 10 4 7
Res0 63:24
Field 23:16 LD
@@ -2543,6 +2559,22 @@ Res0 15:8
Field 7:0 LR
EndSysreg
+Sysreg MPAM1_EL1 3 0 10 5 0
+Res0 63:48
+Field 47:40 PMG_D
+Field 39:32 PMG_I
+Field 31:16 PARTID_D
+Field 15:0 PARTID_I
+EndSysreg
+
+Sysreg MPAM0_EL1 3 0 10 5 1
+Res0 63:48
+Field 47:40 PMG_D
+Field 39:32 PMG_I
+Field 31:16 PARTID_D
+Field 15:0 PARTID_I
+EndSysreg
+
Sysreg ISR_EL1 3 0 12 1 0
Res0 63:11
Field 10 IS
--
2.39.2
* [PATCH v2 3/4] KVM: arm64: Fix missing traps of guest accesses to the MPAM registers
2023-12-07 15:08 [PATCH v2 0/4] KVM: arm64: Hide unsupported MPAM from the guest James Morse
2023-12-07 15:08 ` [PATCH v2 1/4] arm64: head.S: Initialise MPAM EL2 registers and disable traps James Morse
2023-12-07 15:08 ` [PATCH v2 2/4] arm64: cpufeature: discover CPU support for MPAM James Morse
@ 2023-12-07 15:08 ` James Morse
2023-12-07 15:08 ` [PATCH v2 4/4] KVM: arm64: Disable MPAM visibility by default, and handle traps James Morse
2023-12-13 20:24 ` [PATCH v2 0/4] KVM: arm64: Hide unsupported MPAM from the guest Oliver Upton
4 siblings, 0 replies; 10+ messages in thread
From: James Morse @ 2023-12-07 15:08 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: Marc Zyngier, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
James Morse, Anshuman Khandual
commit 011e5f5bf529f ("arm64/cpufeature: Add remaining feature bits in
ID_AA64PFR0 register") exposed the MPAM field of AA64PFR0_EL1 to guests,
but didn't add trap handling.
If you are unlucky, this results in an MPAM-aware guest being delivered
an undef during boot. The host prints:
| kvm [97]: Unsupported guest sys_reg access at: ffff800080024c64 [00000005]
| { Op0( 3), Op1( 0), CRn(10), CRm( 5), Op2( 0), func_read },
Which results in:
| Internal error: Oops - Undefined instruction: 0000000002000000 [#1] PREEMPT SMP
| Modules linked in:
| CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.6.0-rc7-00559-gd89c186d50b2 #14616
| Hardware name: linux,dummy-virt (DT)
| pstate: 00000005 (nzcv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
| pc : test_has_mpam+0x18/0x30
| lr : test_has_mpam+0x10/0x30
| sp : ffff80008000bd90
...
| Call trace:
| test_has_mpam+0x18/0x30
| update_cpu_capabilities+0x7c/0x11c
| setup_cpu_features+0x14/0xd8
| smp_cpus_done+0x24/0xb8
| smp_init+0x7c/0x8c
| kernel_init_freeable+0xf8/0x280
| kernel_init+0x24/0x1e0
| ret_from_fork+0x10/0x20
| Code: 910003fd 97ffffde 72001c00 54000080 (d538a500)
| ---[ end trace 0000000000000000 ]---
| Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
| ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b ]---
Add support to enable the traps, and handle the three guest-accessible
registers as RAZ/WI. This allows guests to keep the invariant id-register
value, while advertising that MPAM isn't really supported.
With MPAM v1.0 we can trap the MPAMIDR_EL1 register only if
ARM64_HAS_MPAM_HCR; with v1.1 an additional MPAM2_EL2.TIDR bit traps
MPAMIDR_EL1 on platforms that don't have MPAMHCR_EL2. Enable one of
these if either is supported. If neither is supported, the guest can
discover that the CPU has MPAM support, and how many PARTIDs etc. the
host has, but it can't influence anything, so it's harmless.
Full support for the feature would only expose MPAM to the guest
if a pseudo-device has been created to describe the virt->phys partid
mapping the VMM expects. This will depend on ARM64_HAS_MPAM_HCR.
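A hedged sketch of what RAZ/WI emulation means here, using hypothetical names rather than KVM's real sys_reg_params and handler signature:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for KVM's trap parameters. */
struct fake_params {
	bool is_write;
	uint64_t regval;
};

/* RAZ/WI: reads return zero, writes are silently ignored, and the
 * access always completes from the guest's point of view (no undef). */
static bool emulate_raz_wi(struct fake_params *p)
{
	if (!p->is_write)
		p->regval = 0;	/* Read-As-Zero */
	/* Write-Ignored: nothing to do for writes */
	return true;
}
```

Because MPAM1_EL1 reads as zero, the guest sees MPAMEN clear and concludes MPAM isn't usable on this CPU.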
Fixes: 011e5f5bf529 ("arm64/cpufeature: Add remaining feature bits in ID_AA64PFR0 register")
CC: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/linux-arm-kernel/20200925160102.118858-1-james.morse@arm.com/
Signed-off-by: James Morse <james.morse@arm.com>
---
arch/arm64/include/asm/kvm_arm.h | 1 +
arch/arm64/include/asm/mpam.h | 4 ++--
arch/arm64/kernel/image-vars.h | 5 ++++
arch/arm64/kvm/hyp/include/hyp/switch.h | 32 +++++++++++++++++++++++++
arch/arm64/kvm/sys_regs.c | 20 ++++++++++++++++
5 files changed, 60 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index b85f46a73e21..87aa442c2622 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -106,6 +106,7 @@
(HCRX_EL2_SMPME | HCRX_EL2_TCR2En | \
(cpus_have_final_cap(ARM64_HAS_MOPS) ? (HCRX_EL2_MSCEn | HCRX_EL2_MCE2) : 0))
#define HCRX_HOST_FLAGS (HCRX_EL2_MSCEn | HCRX_EL2_TCR2En)
+#define MPAMHCR_HOST_FLAGS 0
/* TCR_EL2 Registers bits */
#define TCR_EL2_RES1 ((1U << 31) | (1 << 23))
diff --git a/arch/arm64/include/asm/mpam.h b/arch/arm64/include/asm/mpam.h
index 82d4f6008aeb..dd11e97d8317 100644
--- a/arch/arm64/include/asm/mpam.h
+++ b/arch/arm64/include/asm/mpam.h
@@ -50,7 +50,7 @@
DECLARE_STATIC_KEY_FALSE(arm64_mpam_has_hcr);
/* check whether all CPUs have MPAM support */
-static inline bool mpam_cpus_have_feature(void)
+static __always_inline bool mpam_cpus_have_feature(void)
{
if (IS_ENABLED(CONFIG_ARM64_MPAM))
return cpus_have_final_cap(ARM64_MPAM);
@@ -58,7 +58,7 @@ static inline bool mpam_cpus_have_feature(void)
}
/* check whether all CPUs have MPAM virtualisation support */
-static inline bool mpam_cpus_have_mpam_hcr(void)
+static __always_inline bool mpam_cpus_have_mpam_hcr(void)
{
if (IS_ENABLED(CONFIG_ARM64_MPAM))
return static_branch_unlikely(&arm64_mpam_has_hcr);
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 5e4dc72ab1bd..b30046aaa171 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -66,6 +66,11 @@ KVM_NVHE_ALIAS(nvhe_hyp_panic_handler);
/* Vectors installed by hyp-init on reset HVC. */
KVM_NVHE_ALIAS(__hyp_stub_vectors);
+/* Additional static keys for cpufeatures */
+#ifdef CONFIG_ARM64_MPAM
+KVM_NVHE_ALIAS(arm64_mpam_has_hcr);
+#endif
+
/* Static keys which are set if a vGIC trap should be handled in hyp. */
KVM_NVHE_ALIAS(vgic_v2_cpuif_trap);
KVM_NVHE_ALIAS(vgic_v3_cpuif_trap);
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index f99d8af0b9af..277373871b44 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -27,6 +27,7 @@
#include <asm/kvm_hyp.h>
#include <asm/kvm_mmu.h>
#include <asm/kvm_nested.h>
+#include <asm/mpam.h>
#include <asm/fpsimd.h>
#include <asm/debug-monitors.h>
#include <asm/processor.h>
@@ -173,6 +174,35 @@ static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu *vcpu)
write_sysreg_s(ctxt_sys_reg(hctxt, HDFGWTR_EL2), SYS_HDFGWTR_EL2);
}
+static inline void __activate_traps_mpam(struct kvm_vcpu *vcpu)
+{
+ u64 r = MPAM_SYSREG_TRAP_MPAM0_EL1 | MPAM_SYSREG_TRAP_MPAM1_EL1;
+
+ if (!mpam_cpus_have_feature())
+ return;
+
+ /* trap guest access to MPAMIDR_EL1 */
+ if (mpam_cpus_have_mpam_hcr()) {
+ write_sysreg_s(MPAMHCR_TRAP_MPAMIDR, SYS_MPAMHCR_EL2);
+ } else {
+ /* From v1.1 TIDR can trap MPAMIDR, set it unconditionally */
+ r |= MPAM_SYSREG_TRAP_IDR;
+ }
+
+ write_sysreg_s(r, SYS_MPAM2_EL2);
+}
+
+static inline void __deactivate_traps_mpam(void)
+{
+ if (!mpam_cpus_have_feature())
+ return;
+
+ write_sysreg_s(0, SYS_MPAM2_EL2);
+
+ if (mpam_cpus_have_mpam_hcr())
+ write_sysreg_s(MPAMHCR_HOST_FLAGS, SYS_MPAMHCR_EL2);
+}
+
static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
{
/* Trap on AArch32 cp15 c15 (impdef sysregs) accesses (EL1 or EL0) */
@@ -213,6 +243,7 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
}
__activate_traps_hfgxtr(vcpu);
+ __activate_traps_mpam(vcpu);
}
static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
@@ -232,6 +263,7 @@ static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
write_sysreg_s(HCRX_HOST_FLAGS, SYS_HCRX_EL2);
__deactivate_traps_hfgxtr(vcpu);
+ __deactivate_traps_mpam();
}
static inline void ___activate_traps(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 4735e1b37fb3..15fb9f54e308 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -411,6 +411,23 @@ static bool trap_oslar_el1(struct kvm_vcpu *vcpu,
return true;
}
+static bool workaround_bad_mpam_abi(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ /*
+ * The ID register can't be removed without breaking migration,
+ * but MPAMIDR_EL1 can advertise all-zeroes, indicating there are zero
+ * PARTID/PMG supported by the CPU, allowing the other two trapped
+ * registers (MPAM1_EL1 and MPAM0_EL1) to be treated as RAZ/WI.
+ * Emulating MPAM1_EL1 as RAZ/WI means the guest sees the MPAMEN bit
+ * as clear, and realises MPAM isn't usable on this CPU.
+ */
+ p->regval = 0;
+
+ return true;
+}
+
static bool trap_oslsr_el1(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
@@ -2275,8 +2292,11 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_LOREA_EL1), trap_loregion },
{ SYS_DESC(SYS_LORN_EL1), trap_loregion },
{ SYS_DESC(SYS_LORC_EL1), trap_loregion },
+ { SYS_DESC(SYS_MPAMIDR_EL1), workaround_bad_mpam_abi },
{ SYS_DESC(SYS_LORID_EL1), trap_loregion },
+ { SYS_DESC(SYS_MPAM1_EL1), workaround_bad_mpam_abi },
+ { SYS_DESC(SYS_MPAM0_EL1), workaround_bad_mpam_abi },
{ SYS_DESC(SYS_VBAR_EL1), access_rw, reset_val, VBAR_EL1, 0 },
{ SYS_DESC(SYS_DISR_EL1), NULL, reset_val, DISR_EL1, 0 },
--
2.39.2
* [PATCH v2 4/4] KVM: arm64: Disable MPAM visibility by default, and handle traps
2023-12-07 15:08 [PATCH v2 0/4] KVM: arm64: Hide unsupported MPAM from the guest James Morse
` (2 preceding siblings ...)
2023-12-07 15:08 ` [PATCH v2 3/4] KVM: arm64: Fix missing traps of guest accesses to the MPAM registers James Morse
@ 2023-12-07 15:08 ` James Morse
2023-12-13 20:24 ` [PATCH v2 0/4] KVM: arm64: Hide unsupported MPAM from the guest Oliver Upton
4 siblings, 0 replies; 10+ messages in thread
From: James Morse @ 2023-12-07 15:08 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: Marc Zyngier, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
James Morse
Currently KVM only allows certain writeable ID registers to be
downgraded from their reset value.
commit 011e5f5bf529f ("arm64/cpufeature: Add remaining feature bits in
ID_AA64PFR0 register") exposed the MPAM field of AA64PFR0_EL1 to guests,
but didn't add trap handling. A previous patch supplied the missing trap
handling.
Existing VMs that have the MPAM field of AA64PFR0_EL1 set need to be
migratable, but there is little point enabling the MPAM CPU interface
on new VMs until there is something a guest can do with it.
Clear the MPAM field from the guest's AA64PFR0_EL1 by default, but
allow user-space to set it again if the host supports MPAM. Add a
helper to return the maximum permitted value for an ID register.
For most this is the reset value. To allow the MPAM field to be
written as supported, check if the host sanitised value is '1'
and allow an upgrade from the reset value.
Finally, change the trap handling to inject an undef if MPAM was
not advertised to the guest.
Full support will depend on a pseudo-device being created that
describes the virt->phys PARTID mapping the VMM expects. Migration
would be expected to fail if this pseudo-device can't be created
on the remote end. This ID bit isn't needed to block migration.
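The "allow an upgrade from the reset value" logic can be sketched in plain C with a hypothetical mask name (the real code uses FIELD_GET()/FIELD_PREP() with ID_AA64PFR0_EL1_MPAM_MASK, and the field sits at bit 40):

```c
#include <stdint.h>

/* Illustrative names: 4-bit MPAM field at bit 40 of ID_AA64PFR0_EL1. */
#define MPAM_SHIFT 40
#define MPAM_MASK  (0xfULL << MPAM_SHIFT)

/* Sketch of kvm_arm64_ftr_max(): the reset value clears MPAM, but if
 * the host's sanitised value is exactly 1, widen the writable limit so
 * user-space can set MPAM=1 again (e.g. for migration of old guests). */
static uint64_t ftr_max_pfr0(uint64_t reset_val, uint64_t host_pfr0)
{
	uint64_t field = (host_pfr0 & MPAM_MASK) >> MPAM_SHIFT;

	if (field == 1) {
		reset_val &= ~MPAM_MASK;
		reset_val |= 1ULL << MPAM_SHIFT;
	}
	return reset_val;
}
```

Note this only ever raises the limit for the one known-safe value; any other host field value leaves the reset value untouched.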
Signed-off-by: James Morse <james.morse@arm.com>
---
arch/arm64/kvm/sys_regs.c | 75 +++++++++++++++++++++++++++++++--------
1 file changed, 60 insertions(+), 15 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 15fb9f54e308..055a72643aed 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -411,21 +411,29 @@ static bool trap_oslar_el1(struct kvm_vcpu *vcpu,
return true;
}
-static bool workaround_bad_mpam_abi(struct kvm_vcpu *vcpu,
- struct sys_reg_params *p,
- const struct sys_reg_desc *r)
+static bool trap_mpam(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
{
+ u64 aa64pfr0_el1 = IDREG(vcpu->kvm, SYS_ID_AA64PFR0_EL1);
+
/*
- * The ID register can't be removed without breaking migration,
- * but MPAMIDR_EL1 can advertise all-zeroes, indicating there are zero
- * PARTID/PMG supported by the CPU, allowing the other two trapped
- * registers (MPAM1_EL1 and MPAM0_EL1) to be treated as RAZ/WI.
+ * What did we expose to the guest?
+ * Earlier guests may have seen the ID bits, which can't be removed
+ * without breaking migration, but MPAMIDR_EL1 can advertise all-zeroes,
+ * indicating there are zero PARTID/PMG supported by the CPU, allowing
+ * the other two trapped registers (MPAM1_EL1 and MPAM0_EL1) to be
+ * treated as RAZ/WI.
* Emulating MPAM1_EL1 as RAZ/WI means the guest sees the MPAMEN bit
* as clear, and realises MPAM isn't usable on this CPU.
*/
- p->regval = 0;
+ if (FIELD_GET(ID_AA64PFR0_EL1_MPAM_MASK, aa64pfr0_el1)) {
+ p->regval = 0;
+ return true;
+ }
- return true;
+ kvm_inject_undefined(vcpu);
+ return false;
}
static bool trap_oslsr_el1(struct kvm_vcpu *vcpu,
@@ -1326,6 +1334,36 @@ static s64 kvm_arm64_ftr_safe_value(u32 id, const struct arm64_ftr_bits *ftrp,
return arm64_ftr_safe_value(&kvm_ftr, new, cur);
}
+static u64 kvm_arm64_ftr_max(struct kvm_vcpu *vcpu,
+ const struct sys_reg_desc *rd)
+{
+ u64 pfr0, val = rd->reset(vcpu, rd);
+ u32 field, id = reg_to_encoding(rd);
+
+ /*
+ * Some values may reset to a lower value than can be supported,
+ * get the maximum feature value.
+ */
+ switch (id) {
+ case SYS_ID_AA64PFR0_EL1:
+ pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
+
+ /*
+ * MPAM resets to 0, but migration of MPAM=1 guests is needed.
+ * See trap_mpam() for more.
+ */
+ field = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL1_MPAM_SHIFT);
+ if (field == ID_AA64PFR0_EL1_MPAM_1) {
+ val &= ~ID_AA64PFR0_EL1_MPAM_MASK;
+ val |= FIELD_PREP(ID_AA64PFR0_EL1_MPAM_MASK, ID_AA64PFR0_EL1_MPAM_1);
+ }
+
+ break;
+ }
+
+ return val;
+}
+
/*
* arm64_check_features() - Check if a feature register value constitutes
* a subset of features indicated by the idreg's KVM sanitised limit.
@@ -1346,8 +1384,7 @@ static int arm64_check_features(struct kvm_vcpu *vcpu,
const struct arm64_ftr_bits *ftrp = NULL;
u32 id = reg_to_encoding(rd);
u64 writable_mask = rd->val;
- u64 limit = rd->reset(vcpu, rd);
- u64 mask = 0;
+ u64 limit, mask = 0;
/*
* Hidden and unallocated ID registers may not have a corresponding
@@ -1361,6 +1398,7 @@ static int arm64_check_features(struct kvm_vcpu *vcpu,
if (!ftr_reg)
return -EINVAL;
+ limit = kvm_arm64_ftr_max(vcpu, rd);
ftrp = ftr_reg->ftr_bits;
for (; ftrp && ftrp->width; ftrp++) {
@@ -1570,6 +1608,14 @@ static u64 read_sanitised_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
val &= ~ID_AA64PFR0_EL1_AMU_MASK;
+ /*
+ * MPAM is disabled by default as KVM also needs a set of PARTID to
+ * program the MPAMVPMx_EL2 PARTID remapping registers with. But some
+ * older kernels let the guest see the ID bit. Turning it on causes
+ * the registers to be emulated as RAZ/WI. See trap_mpam() for more.
+ */
+ val &= ~ID_AA64PFR0_EL1_MPAM_MASK;
+
return val;
}
@@ -2149,7 +2195,6 @@ static const struct sys_reg_desc sys_reg_descs[] = {
.set_user = set_id_reg,
.reset = read_sanitised_id_aa64pfr0_el1,
.val = ~(ID_AA64PFR0_EL1_AMU |
- ID_AA64PFR0_EL1_MPAM |
ID_AA64PFR0_EL1_SVE |
ID_AA64PFR0_EL1_RAS |
ID_AA64PFR0_EL1_GIC |
@@ -2292,11 +2337,11 @@ static const struct sys_reg_desc sys_reg_descs[] = {
{ SYS_DESC(SYS_LOREA_EL1), trap_loregion },
{ SYS_DESC(SYS_LORN_EL1), trap_loregion },
{ SYS_DESC(SYS_LORC_EL1), trap_loregion },
- { SYS_DESC(SYS_MPAMIDR_EL1), workaround_bad_mpam_abi },
+ { SYS_DESC(SYS_MPAMIDR_EL1), trap_mpam },
{ SYS_DESC(SYS_LORID_EL1), trap_loregion },
- { SYS_DESC(SYS_MPAM1_EL1), workaround_bad_mpam_abi },
- { SYS_DESC(SYS_MPAM0_EL1), workaround_bad_mpam_abi },
+ { SYS_DESC(SYS_MPAM1_EL1), trap_mpam },
+ { SYS_DESC(SYS_MPAM0_EL1), trap_mpam },
{ SYS_DESC(SYS_VBAR_EL1), access_rw, reset_val, VBAR_EL1, 0 },
{ SYS_DESC(SYS_DISR_EL1), NULL, reset_val, DISR_EL1, 0 },
--
2.39.2
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
* Re: [PATCH v2 0/4] KVM: arm64: Hide unsupported MPAM from the guest
2023-12-07 15:08 [PATCH v2 0/4] KVM: arm64: Hide unsupported MPAM from the guest James Morse
` (3 preceding siblings ...)
2023-12-07 15:08 ` [PATCH v2 4/4] KVM: arm64: Disable MPAM visibility by default, and handle traps James Morse
@ 2023-12-13 20:24 ` Oliver Upton
2024-01-19 17:15 ` James Morse
4 siblings, 1 reply; 10+ messages in thread
From: Oliver Upton @ 2023-12-13 20:24 UTC (permalink / raw)
To: James Morse
Cc: kvmarm, linux-arm-kernel, Marc Zyngier, Suzuki K Poulose,
Zenghui Yu
Hi James,
Thank you very much for posting these fixes.
On Thu, Dec 07, 2023 at 03:08:00PM +0000, James Morse wrote:
> 'lo
>
> This series fixes up a long standing bug where MPAM was accidentally exposed
> to a guest, but the feature was not otherwise trapped or context switched.
> This could result in KVM warning about unexpected traps, and injecting an
> undef into the guest contradicting the ID registers.
> This would prevent an MPAM aware kernel from booting - fortunately, there
> aren't any of those.
>
> Ideally, we'd take the MPAM feature away from the ID registers, but that
> would leave existing guests unable to migrate to a newer kernel. Instead,
> use the writable ID registers to allow MPAM to be re-enabled - but emulate
> it as RAZ/WI for the system registers that are trapped.
This is certainly a reasonable approach, but TBH I'm not too terribly
concerned about the completeness of the workaround plumbing that we need
here. Undoubtedly what you propose is the most complete solution to the
problem, but it is somewhat involved.
So long as we don't break migration at the userspace ioctl level I'm not
too worried. Maybe something like:
- Ensure the reset value of ID_AA64PFR0_EL1.MPAM is 0
- Allow userspace to write ID_AA64PFR0_EL1.MPAM as 1, *but*
- KVM reinterprets this as '0' behind the scenes for the final register
value
- Add the MPAM registers to the sysreg table with trap_raz_wi() as the
handler to avoid the unsupported sysreg printk, since it is possible
that older VMs may've already seen the MPAM feature.
We've already done something similar to hide our mistakes with IMP DEF
PMU versions in commit f90f9360c3d7 ("KVM: arm64: Rewrite IMPDEF PMU
version as NI"), and I think MPAM may be a good candidate for something
similar.
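The policy sketched in the bullets above can be modelled in a few lines of plain C. This is a hypothetical user-space sketch, not KVM code: the names `mpam_reset_value()` and `mpam_write_from_userspace()` are invented for illustration, and only the field position (ID_AA64PFR0_EL1.MPAM lives in bits 43:40) is taken from the architecture.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MPAM_SHIFT 40ULL
#define MPAM_MASK  (0xFULL << MPAM_SHIFT)

/* Guest view of ID_AA64PFR0_EL1 resets with the MPAM field cleared. */
static uint64_t mpam_reset_value(uint64_t host_pfr0)
{
	return host_pfr0 & ~MPAM_MASK;
}

/*
 * Accept an MPAM=1 write from userspace for migration compatibility,
 * but store the value with MPAM=0 behind the scenes; reject anything
 * larger. Returns true if the write is accepted.
 */
static bool mpam_write_from_userspace(uint64_t new, uint64_t *stored)
{
	uint64_t mpam = (new & MPAM_MASK) >> MPAM_SHIFT;

	if (mpam > 1)
		return false;           /* only 0 or the historic 1 is tolerated */
	*stored = new & ~MPAM_MASK;     /* stored value always reads MPAM=0 */
	return true;
}
```

The net effect is that a VM saved on an older, buggy kernel (which reported MPAM=1) restores without an ioctl error, yet the guest-visible register always reads MPAM=0.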
--
Thanks,
Oliver
* Re: [PATCH v2 0/4] KVM: arm64: Hide unsupported MPAM from the guest
2023-12-13 20:24 ` [PATCH v2 0/4] KVM: arm64: Hide unsupported MPAM from the guest Oliver Upton
@ 2024-01-19 17:15 ` James Morse
2024-01-23 19:00 ` Oliver Upton
0 siblings, 1 reply; 10+ messages in thread
From: James Morse @ 2024-01-19 17:15 UTC (permalink / raw)
To: Oliver Upton
Cc: kvmarm, linux-arm-kernel, Marc Zyngier, Suzuki K Poulose,
Zenghui Yu
Hi Oliver,
On 13/12/2023 20:24, Oliver Upton wrote:
> On Thu, Dec 07, 2023 at 03:08:00PM +0000, James Morse wrote:
>> This series fixes up a long standing bug where MPAM was accidentally exposed
>> to a guest, but the feature was not otherwise trapped or context switched.
>> This could result in KVM warning about unexpected traps, and injecting an
>> undef into the guest contradicting the ID registers.
>> This would prevent an MPAM aware kernel from booting - fortunately, there
>> aren't any of those.
>>
>> Ideally, we'd take the MPAM feature away from the ID registers, but that
>> would leave existing guests unable to migrate to a newer kernel. Instead,
>> use the writable ID registers to allow MPAM to be re-enabled - but emulate
>> it as RAZ/WI for the system registers that are trapped.
> This is certainly a reasonable approach, but TBH I'm not too terribly
> concerned about the completeness of the workaround plumbing that we need
> here. Undoubtedly what you propose is the most complete solution to the
> problem, but it is somewhat involved.
Yup, I figured that an upper limit other than the default reset value for the ID registers
would be needed at some point, but maybe not today.
> So long as we don't break migration at the userspace ioctl level I'm not
> too worried. Maybe something like:
>
> - Ensure the reset value of ID_AA64PFR0_EL1.MPAM is 0
>
> - Allow userspace to write ID_AA64PFR0_EL1.MPAM as 1, *but*
> - KVM reinterprets this as '0' behind the scenes for the final register
> value
> - Add the MPAM registers to the sysreg table with trap_raz_wi() as the
> handler to avoid the unsupported sysreg printk, since it is possible
> that older VMs may've already seen the MPAM feature.
> We've already done something similar to hide our mistakes with IMP DEF
> PMU versions in commit f90f9360c3d7 ("KVM: arm64: Rewrite IMPDEF PMU
> version as NI"), and I think MPAM may be a good candidate for something
> similar.
As there is precedent, I feel less dirty doing that!
This also solves the problem of the VMM re-writing the broken value after the vCPU has
started running and getting a surprise error. A weird side effect of doing this is that
you can write MPAM=1 on an A53 and KVM will ignore it; I don't want user-space to start
relying on that! I'll add a final-cap check so the write can only be ignored on hardware
that actually has MPAM, and so could have been exposed to the bug.
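The final-cap gating described above amounts to one extra predicate. Again a hypothetical sketch, not the actual patch: the function name is invented, and `host_has_mpam` stands in for whatever the eventual capability check (e.g. a final cpucap) ends up being.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch of the final-cap check: the bogus MPAM=1 ID-register write is
 * only silently tolerated on hardware that really implements MPAM,
 * i.e. where the old KVM bug could actually have exposed the field to
 * a guest. On MPAM-less CPUs (such as an A53) the write is rejected.
 */
static bool mpam_id_write_allowed(bool host_has_mpam, uint64_t requested_mpam)
{
	if (requested_mpam == 0)
		return true;            /* hiding MPAM is always fine */
	/* MPAM=1 is tolerated for migration, but only on MPAM hardware */
	return requested_mpam == 1 && host_has_mpam;
}
```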
Thanks,
James
* Re: [PATCH v2 0/4] KVM: arm64: Hide unsupported MPAM from the guest
2024-01-19 17:15 ` James Morse
@ 2024-01-23 19:00 ` Oliver Upton
2024-03-08 17:48 ` Jing Zhang
0 siblings, 1 reply; 10+ messages in thread
From: Oliver Upton @ 2024-01-23 19:00 UTC (permalink / raw)
To: James Morse
Cc: kvmarm, linux-arm-kernel, Marc Zyngier, Suzuki K Poulose,
Zenghui Yu
On Fri, Jan 19, 2024 at 05:15:59PM +0000, James Morse wrote:
[...]
> > We've already done something similar to hide our mistakes with IMP DEF
> > PMU versions in commit f90f9360c3d7 ("KVM: arm64: Rewrite IMPDEF PMU
> > version as NI"), and I think MPAM may be a good candidate for something
> > similar.
>
> As there is precedent, I feel less dirty doing that!
>
> This also solves the problem of the VMM re-writing the broken value after the vCPU has
> started running, and getting a surprise error. A weird side effect of doing this would be
> you can write MPAM=1 on A53 and KVM will ignore it, I don't want user-space to start
> relying on that! I'll add a final-cap check so this can only be ignored on hardware that
> actually has MPAM, and could have been exposed to the bug.
Ah, good idea. I probably should've done something similar in the PMU
case.
--
Thanks,
Oliver
* Re: [PATCH v2 0/4] KVM: arm64: Hide unsupported MPAM from the guest
2024-01-23 19:00 ` Oliver Upton
@ 2024-03-08 17:48 ` Jing Zhang
2024-03-21 15:37 ` James Morse
0 siblings, 1 reply; 10+ messages in thread
From: Jing Zhang @ 2024-03-08 17:48 UTC (permalink / raw)
To: Oliver Upton
Cc: James Morse, kvmarm, linux-arm-kernel, Marc Zyngier,
Suzuki K Poulose, Zenghui Yu
Hi James,
Will you have a new version for this patch series?
Jing
On Tue, Jan 23, 2024 at 11:06 AM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> On Fri, Jan 19, 2024 at 05:15:59PM +0000, James Morse wrote:
>
> [...]
>
> > > We've already done something similar to hide our mistakes with IMP DEF
> > > PMU versions in commit f90f9360c3d7 ("KVM: arm64: Rewrite IMPDEF PMU
> > > version as NI"), and I think MPAM may be a good candidate for something
> > > similar.
> >
> > As there is precedent, I feel less dirty doing that!
> >
> > This also solves the problem of the VMM re-writing the broken value after the vCPU has
> > started running, and getting a surprise error. A weird side effect of doing this would be
> > you can write MPAM=1 on A53 and KVM will ignore it, I don't want user-space to start
> > relying on that! I'll add a final-cap check so this can only be ignored on hardware that
> > actually has MPAM, and could have been exposed to the bug.
>
> Ah, good idea. I probably should've done something similar in the PMU
> case.
>
> --
> Thanks,
> Oliver
>
* Re: [PATCH v2 0/4] KVM: arm64: Hide unsupported MPAM from the guest
2024-03-08 17:48 ` Jing Zhang
@ 2024-03-21 15:37 ` James Morse
0 siblings, 0 replies; 10+ messages in thread
From: James Morse @ 2024-03-21 15:37 UTC (permalink / raw)
To: Jing Zhang, Oliver Upton
Cc: kvmarm, linux-arm-kernel, Marc Zyngier, Suzuki K Poulose,
Zenghui Yu
Hi Jing,
On 08/03/2024 17:48, Jing Zhang wrote:
> Will you have a new version for this patch series?
Yes - sorry, I got bogged down in the x86 end of the MPAM tree, so this wasn't ready to
post until ~rc5 ... I figured the arm64 folk wouldn't be too pleased with head.S changes
that late.
I'm trying to get this series posted today, as I'm away for a few weeks.
Thanks for the nudge!
James