linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH v5 0/7] KVM: arm64: Hide unsupported MPAM from the guest
@ 2024-10-15 13:39 Joey Gouly
  2024-10-15 13:39 ` [PATCH v5 1/7] arm64/sysreg: Convert existing MPAM sysregs and add the remaining entries Joey Gouly
                   ` (7 more replies)
  0 siblings, 8 replies; 25+ messages in thread
From: Joey Gouly @ 2024-10-15 13:39 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm
  Cc: anshuman.khandual, james.morse, joey.gouly, Marc Zyngier,
	Oliver Upton, Suzuki K Poulose, Zenghui Yu, Jing Zhang,
	Shameerali Kolothum Thodi, Catalin Marinas, Will Deacon

Hi,

Thanks Gavin and Marc for the feedback; hopefully it's all incorporated here.

changes since v4 [1]:
	- Swap order of patches, to be able to use MPAMIDR_EL1_HAS_HCR_SHIFT from
	  el2_setup.h
	- Remove trap_mpam() and use undef_access() directly
	- Stopped using ID_FILTERED() for the 32-bit ID register
	- Fix ID_AA64PFR1_EL1 sanitisation
	- Fix MPAM_EL1 encoding
	- Fix bug that set everything apart from MPAM_frac as writable in
	  ID_AA64PFR1_EL1. Match the current behaviour, so nothing is writable.

James wrote:
	This series fixes up a long-standing bug where MPAM was accidentally exposed
	to a guest, but the feature was not otherwise trapped or context switched.
	This could result in KVM warning about unexpected traps, and injecting an
	undef into the guest, contradicting the ID registers.
	This would prevent an MPAM-aware kernel from booting - fortunately, there
	aren't any of those.

	Ideally, we'd take the MPAM feature away from the ID registers, but that
	would leave existing guests unable to migrate to a newer kernel. Instead,
	just ignore that field when it matches the hardware. KVM wasn't going to
	expose MPAM anyway. The guest will not see MPAM in the ID registers.

	This series includes the head.S and KVM changes to enable/disable traps. If
	MPAM is neither enabled nor emulated by EL3 firmware, these system register
	accesses will trap to EL3.
	If your kernel doesn't boot and the problem bisects here, please update
	your firmware. MPAM has been supported by Trusted Firmware-A since v1.6 in
	2018 (also noted on patch 3).

Thanks,
Joey

[1] https://lore.kernel.org/linux-arm-kernel/20241004110714.2051604-1-joey.gouly@arm.com/

James Morse (7):
  arm64/sysreg: Convert existing MPAM sysregs and add the remaining
    entries
  arm64: head.S: Initialise MPAM EL2 registers and disable traps
  arm64: cpufeature: discover CPU support for MPAM
  KVM: arm64: Fix missing traps of guest accesses to the MPAM registers
  KVM: arm64: Add a macro for creating filtered sys_reg_descs entries
  KVM: arm64: Disable MPAM visibility by default and ignore VMM writes
  KVM: arm64: selftests: Test ID_AA64PFR0.MPAM isn't completely ignored

 .../arch/arm64/cpu-feature-registers.rst      |   2 +
 arch/arm64/Kconfig                            |  20 +++
 arch/arm64/include/asm/cpu.h                  |   1 +
 arch/arm64/include/asm/cpucaps.h              |   2 +
 arch/arm64/include/asm/cpufeature.h           |  12 ++
 arch/arm64/include/asm/el2_setup.h            |  16 ++
 arch/arm64/include/asm/kvm_arm.h              |   1 +
 arch/arm64/include/asm/mpam.h                 |  32 ++++
 arch/arm64/include/asm/sysreg.h               |  12 --
 arch/arm64/kernel/Makefile                    |   2 +
 arch/arm64/kernel/cpufeature.c                |  97 +++++++++++
 arch/arm64/kernel/cpuinfo.c                   |   4 +
 arch/arm64/kernel/image-vars.h                |   5 +
 arch/arm64/kernel/mpam.c                      |   6 +
 arch/arm64/kvm/hyp/include/hyp/switch.h       |  32 ++++
 arch/arm64/kvm/sys_regs.c                     | 105 +++++++++---
 arch/arm64/tools/cpucaps                      |   1 +
 arch/arm64/tools/sysreg                       | 161 ++++++++++++++++++
 .../selftests/kvm/aarch64/set_id_regs.c       | 100 ++++++++++-
 19 files changed, 571 insertions(+), 40 deletions(-)
 create mode 100644 arch/arm64/include/asm/mpam.h
 create mode 100644 arch/arm64/kernel/mpam.c

-- 
2.25.1




* [PATCH v5 1/7] arm64/sysreg: Convert existing MPAM sysregs and add the remaining entries
  2024-10-15 13:39 [PATCH v5 0/7] KVM: arm64: Hide unsupported MPAM from the guest Joey Gouly
@ 2024-10-15 13:39 ` Joey Gouly
  2024-10-16 23:41   ` Gavin Shan
  2024-10-15 13:39 ` [PATCH v5 2/7] arm64: head.S: Initialise MPAM EL2 registers and disable traps Joey Gouly
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 25+ messages in thread
From: Joey Gouly @ 2024-10-15 13:39 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm
  Cc: anshuman.khandual, james.morse, joey.gouly, Marc Zyngier,
	Oliver Upton, Suzuki K Poulose, Zenghui Yu, Jing Zhang,
	Shameerali Kolothum Thodi, Catalin Marinas, Will Deacon

From: James Morse <james.morse@arm.com>

Move the existing MPAM system register defines from sysreg.h to
tools/sysreg and add the remaining system registers.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Joey Gouly <joey.gouly@arm.com>
---
 arch/arm64/include/asm/sysreg.h |  12 ---
 arch/arm64/tools/sysreg         | 161 ++++++++++++++++++++++++++++++++
 2 files changed, 161 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 9ea97dddefc4..345e81e0d2b3 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -542,18 +542,6 @@
 
 #define SYS_MAIR_EL2			sys_reg(3, 4, 10, 2, 0)
 #define SYS_AMAIR_EL2			sys_reg(3, 4, 10, 3, 0)
-#define SYS_MPAMHCR_EL2			sys_reg(3, 4, 10, 4, 0)
-#define SYS_MPAMVPMV_EL2		sys_reg(3, 4, 10, 4, 1)
-#define SYS_MPAM2_EL2			sys_reg(3, 4, 10, 5, 0)
-#define __SYS__MPAMVPMx_EL2(x)		sys_reg(3, 4, 10, 6, x)
-#define SYS_MPAMVPM0_EL2		__SYS__MPAMVPMx_EL2(0)
-#define SYS_MPAMVPM1_EL2		__SYS__MPAMVPMx_EL2(1)
-#define SYS_MPAMVPM2_EL2		__SYS__MPAMVPMx_EL2(2)
-#define SYS_MPAMVPM3_EL2		__SYS__MPAMVPMx_EL2(3)
-#define SYS_MPAMVPM4_EL2		__SYS__MPAMVPMx_EL2(4)
-#define SYS_MPAMVPM5_EL2		__SYS__MPAMVPMx_EL2(5)
-#define SYS_MPAMVPM6_EL2		__SYS__MPAMVPMx_EL2(6)
-#define SYS_MPAMVPM7_EL2		__SYS__MPAMVPMx_EL2(7)
 
 #define SYS_VBAR_EL2			sys_reg(3, 4, 12, 0, 0)
 #define SYS_RVBAR_EL2			sys_reg(3, 4, 12, 0, 1)
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 8d637ac4b7c6..05d04a45db65 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -2737,6 +2737,126 @@ Field	1	E2SPE
 Field	0	E0HSPE
 EndSysreg
 
+Sysreg	MPAMHCR_EL2	3	4	10	4	0
+Res0	63:32
+Field	31	TRAP_MPAMIDR_EL1
+Res0	30:9
+Field	8	GSTAPP_PLK
+Res0	7:2
+Field	1	EL1_VPMEN
+Field	0	EL0_VPMEN
+EndSysreg
+
+Sysreg	MPAMVPMV_EL2	3	4	10	4	1
+Res0	63:32
+Field	31	VPM_V31
+Field	30	VPM_V30
+Field	29	VPM_V29
+Field	28	VPM_V28
+Field	27	VPM_V27
+Field	26	VPM_V26
+Field	25	VPM_V25
+Field	24	VPM_V24
+Field	23	VPM_V23
+Field	22	VPM_V22
+Field	21	VPM_V21
+Field	20	VPM_V20
+Field	19	VPM_V19
+Field	18	VPM_V18
+Field	17	VPM_V17
+Field	16	VPM_V16
+Field	15	VPM_V15
+Field	14	VPM_V14
+Field	13	VPM_V13
+Field	12	VPM_V12
+Field	11	VPM_V11
+Field	10	VPM_V10
+Field	9	VPM_V9
+Field	8	VPM_V8
+Field	7	VPM_V7
+Field	6	VPM_V6
+Field	5	VPM_V5
+Field	4	VPM_V4
+Field	3	VPM_V3
+Field	2	VPM_V2
+Field	1	VPM_V1
+Field	0	VPM_V0
+EndSysreg
+
+Sysreg	MPAM2_EL2	3	4	10	5	0
+Field	63	MPAMEN
+Res0	62:59
+Field	58	TIDR
+Res0	57
+Field	56	ALTSP_HFC
+Field	55	ALTSP_EL2
+Field	54	ALTSP_FRCD
+Res0	53:51
+Field	50	EnMPAMSM
+Field	49	TRAPMPAM0EL1
+Field	48	TRAPMPAM1EL1
+Field	47:40	PMG_D
+Field	39:32	PMG_I
+Field	31:16	PARTID_D
+Field	15:0	PARTID_I
+EndSysreg
+
+Sysreg	MPAMVPM0_EL2	3	4	10	6	0
+Field	63:48	PhyPARTID3
+Field	47:32	PhyPARTID2
+Field	31:16	PhyPARTID1
+Field	15:0	PhyPARTID0
+EndSysreg
+
+Sysreg	MPAMVPM1_EL2	3	4	10	6	1
+Field	63:48	PhyPARTID7
+Field	47:32	PhyPARTID6
+Field	31:16	PhyPARTID5
+Field	15:0	PhyPARTID4
+EndSysreg
+
+Sysreg	MPAMVPM2_EL2	3	4	10	6	2
+Field	63:48	PhyPARTID11
+Field	47:32	PhyPARTID10
+Field	31:16	PhyPARTID9
+Field	15:0	PhyPARTID8
+EndSysreg
+
+Sysreg	MPAMVPM3_EL2	3	4	10	6	3
+Field	63:48	PhyPARTID15
+Field	47:32	PhyPARTID14
+Field	31:16	PhyPARTID13
+Field	15:0	PhyPARTID12
+EndSysreg
+
+Sysreg	MPAMVPM4_EL2	3	4	10	6	4
+Field	63:48	PhyPARTID19
+Field	47:32	PhyPARTID18
+Field	31:16	PhyPARTID17
+Field	15:0	PhyPARTID16
+EndSysreg
+
+Sysreg	MPAMVPM5_EL2	3	4	10	6	5
+Field	63:48	PhyPARTID23
+Field	47:32	PhyPARTID22
+Field	31:16	PhyPARTID21
+Field	15:0	PhyPARTID20
+EndSysreg
+
+Sysreg	MPAMVPM6_EL2	3	4	10	6	6
+Field	63:48	PhyPARTID27
+Field	47:32	PhyPARTID26
+Field	31:16	PhyPARTID25
+Field	15:0	PhyPARTID24
+EndSysreg
+
+Sysreg	MPAMVPM7_EL2	3	4	10	6	7
+Field	63:48	PhyPARTID31
+Field	47:32	PhyPARTID30
+Field	31:16	PhyPARTID29
+Field	15:0	PhyPARTID28
+EndSysreg
+
 Sysreg	CONTEXTIDR_EL2	3	4	13	0	1
 Fields	CONTEXTIDR_ELx
 EndSysreg
@@ -2769,6 +2889,10 @@ Sysreg	FAR_EL12	3	5	6	0	0
 Field	63:0	ADDR
 EndSysreg
 
+Sysreg	MPAM1_EL12	3	5	10	5	0
+Fields	MPAM1_ELx
+EndSysreg
+
 Sysreg	CONTEXTIDR_EL12	3	5	13	0	1
 Fields	CONTEXTIDR_ELx
 EndSysreg
@@ -2941,6 +3065,22 @@ Res0	1
 Field	0	EN
 EndSysreg
 
+Sysreg	MPAMIDR_EL1	3	0	10	4	4
+Res0	63:62
+Field	61	HAS_SDEFLT
+Field	60	HAS_FORCE_NS
+Field	59	SP4
+Field	58	HAS_TIDR
+Field	57	HAS_ALTSP
+Res0	56:40
+Field	39:32	PMG_MAX
+Res0	31:21
+Field	20:18	VPMR_MAX
+Field	17	HAS_HCR
+Res0	16
+Field	15:0	PARTID_MAX
+EndSysreg
+
 Sysreg	LORID_EL1	3	0	10	4	7
 Res0	63:24
 Field	23:16	LD
@@ -2948,6 +3088,27 @@ Res0	15:8
 Field	7:0	LR
 EndSysreg
 
+Sysreg	MPAM1_EL1	3	0	10	5	0
+Field	63	MPAMEN
+Res0	62:61
+Field	60	FORCED_NS
+Res0	59:55
+Field	54	ALTSP_FRCD
+Res0	53:48
+Field	47:40	PMG_D
+Field	39:32	PMG_I
+Field	31:16	PARTID_D
+Field	15:0	PARTID_I
+EndSysreg
+
+Sysreg	MPAM0_EL1	3	0	10	5	1
+Res0	63:48
+Field	47:40	PMG_D
+Field	39:32	PMG_I
+Field	31:16	PARTID_D
+Field	15:0	PARTID_I
+EndSysreg
+
 Sysreg	ISR_EL1	3	0	12	1	0
 Res0	63:11
 Field	10	IS
-- 
2.25.1




* [PATCH v5 2/7] arm64: head.S: Initialise MPAM EL2 registers and disable traps
  2024-10-15 13:39 [PATCH v5 0/7] KVM: arm64: Hide unsupported MPAM from the guest Joey Gouly
  2024-10-15 13:39 ` [PATCH v5 1/7] arm64/sysreg: Convert existing MPAM sysregs and add the remaining entries Joey Gouly
@ 2024-10-15 13:39 ` Joey Gouly
  2024-10-16 23:41   ` Gavin Shan
  2024-10-15 13:39 ` [PATCH v5 3/7] arm64: cpufeature: discover CPU support for MPAM Joey Gouly
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 25+ messages in thread
From: Joey Gouly @ 2024-10-15 13:39 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm
  Cc: anshuman.khandual, james.morse, joey.gouly, Marc Zyngier,
	Oliver Upton, Suzuki K Poulose, Zenghui Yu, Jing Zhang,
	Shameerali Kolothum Thodi, Catalin Marinas, Will Deacon

From: James Morse <james.morse@arm.com>

Add code to head.S's el2_setup to detect MPAM and disable any EL2 traps.
This register resets to an unknown value; setting it to the default
partition/PMG before we enable the MMU is the best thing to do.

Kexec/kdump will depend on this if the previous kernel left the CPU
configured with a restrictive configuration.

If Linux is booted at the highest implemented exception level, el2_setup
will clear the enable bit, disabling MPAM.

This code can't be enabled until a subsequent patch adds the Kconfig
and cpufeature boilerplate.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Joey Gouly <joey.gouly@arm.com>
---
 arch/arm64/include/asm/el2_setup.h | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
index e0ffdf13a18b..1d7f377c61db 100644
--- a/arch/arm64/include/asm/el2_setup.h
+++ b/arch/arm64/include/asm/el2_setup.h
@@ -220,6 +220,21 @@
 	msr	spsr_el2, x0
 .endm
 
+.macro __init_el2_mpam
+#ifdef CONFIG_ARM64_MPAM
+	/* Memory Partitioning And Monitoring: disable EL2 traps */
+	mrs	x1, id_aa64pfr0_el1
+	ubfx	x0, x1, #ID_AA64PFR0_EL1_MPAM_SHIFT, #4
+	cbz	x0, .Lskip_mpam_\@		// skip if no MPAM
+	msr_s	SYS_MPAM2_EL2, xzr		// use the default partition
+						// and disable lower traps
+	mrs_s	x0, SYS_MPAMIDR_EL1
+	tbz	x0, #MPAMIDR_EL1_HAS_HCR_SHIFT, .Lskip_mpam_\@	// skip if no MPAMHCR reg
+	msr_s	SYS_MPAMHCR_EL2, xzr		// clear TRAP_MPAMIDR_EL1 -> EL2
+.Lskip_mpam_\@:
+#endif /* CONFIG_ARM64_MPAM */
+.endm
+
 /**
  * Initialize EL2 registers to sane values. This should be called early on all
  * cores that were booted in EL2. Note that everything gets initialised as
@@ -237,6 +252,7 @@
 	__init_el2_stage2
 	__init_el2_gicv3
 	__init_el2_hstr
+	__init_el2_mpam
 	__init_el2_nvhe_idregs
 	__init_el2_cptr
 	__init_el2_fgt
-- 
2.25.1




* [PATCH v5 3/7] arm64: cpufeature: discover CPU support for MPAM
  2024-10-15 13:39 [PATCH v5 0/7] KVM: arm64: Hide unsupported MPAM from the guest Joey Gouly
  2024-10-15 13:39 ` [PATCH v5 1/7] arm64/sysreg: Convert existing MPAM sysregs and add the remaining entries Joey Gouly
  2024-10-15 13:39 ` [PATCH v5 2/7] arm64: head.S: Initialise MPAM EL2 registers and disable traps Joey Gouly
@ 2024-10-15 13:39 ` Joey Gouly
  2024-10-17  2:36   ` Gavin Shan
  2024-10-15 13:39 ` [PATCH v5 4/7] KVM: arm64: Fix missing traps of guest accesses to the MPAM registers Joey Gouly
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 25+ messages in thread
From: Joey Gouly @ 2024-10-15 13:39 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm
  Cc: anshuman.khandual, james.morse, joey.gouly, Marc Zyngier,
	Oliver Upton, Suzuki K Poulose, Zenghui Yu, Jing Zhang,
	Shameerali Kolothum Thodi, Catalin Marinas, Will Deacon

From: James Morse <james.morse@arm.com>

ARMv8.4 adds support for 'Memory Partitioning And Monitoring' (MPAM)
which describes an interface to cache and bandwidth controls wherever
they appear in the system.

Add support to detect MPAM. Like SVE, MPAM has an extra ID register that
describes some more properties, including the virtualisation support,
which is optional. Detect this separately so we can detect
mismatched/insane systems, but still use MPAM on the host even if the
virtualisation support is missing.

MPAM needs enabling at the highest implemented exception level, otherwise
the register accesses trap. The 'enabled' flag is accessible to lower
exception levels, but it's in a register that traps when MPAM isn't enabled.
The cpufeature 'matches' hook is extended to test this on one of the
CPUs, so that firmware can emulate MPAM as disabled if it is reserved
for use by the secure world.

Secondary CPUs that appear late could trip cpufeature's 'lower safe'
behaviour after the MPAM properties have been advertised to user-space.
Add a verify call to ensure late secondaries match the existing CPUs.

(If you have a boot failure that bisects here, it's likely your CPUs
advertise MPAM in the ID registers, but firmware failed to either enable
MPAM or emulate the trap as if it were disabled.)

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Joey Gouly <joey.gouly@arm.com>
---
 .../arch/arm64/cpu-feature-registers.rst      |  2 +
 arch/arm64/Kconfig                            | 20 ++++
 arch/arm64/include/asm/cpu.h                  |  1 +
 arch/arm64/include/asm/cpucaps.h              |  2 +
 arch/arm64/include/asm/cpufeature.h           | 12 +++
 arch/arm64/include/asm/mpam.h                 | 32 ++++++
 arch/arm64/kernel/Makefile                    |  2 +
 arch/arm64/kernel/cpufeature.c                | 97 +++++++++++++++++++
 arch/arm64/kernel/cpuinfo.c                   |  4 +
 arch/arm64/kernel/mpam.c                      |  6 ++
 arch/arm64/tools/cpucaps                      |  1 +
 11 files changed, 179 insertions(+)
 create mode 100644 arch/arm64/include/asm/mpam.h
 create mode 100644 arch/arm64/kernel/mpam.c

diff --git a/Documentation/arch/arm64/cpu-feature-registers.rst b/Documentation/arch/arm64/cpu-feature-registers.rst
index 44f9bd78539d..253e9743de2f 100644
--- a/Documentation/arch/arm64/cpu-feature-registers.rst
+++ b/Documentation/arch/arm64/cpu-feature-registers.rst
@@ -152,6 +152,8 @@ infrastructure:
      +------------------------------+---------+---------+
      | DIT                          | [51-48] |    y    |
      +------------------------------+---------+---------+
+     | MPAM                         | [43-40] |    n    |
+     +------------------------------+---------+---------+
      | SVE                          | [35-32] |    y    |
      +------------------------------+---------+---------+
      | GIC                          | [27-24] |    n    |
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 3e29b44d2d7b..df09537a3b2a 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -2037,6 +2037,26 @@ config ARM64_TLB_RANGE
 	  The feature introduces new assembly instructions, and they were
 	  support when binutils >= 2.30.
 
+config ARM64_MPAM
+	bool "Enable support for MPAM"
+	help
+	  Memory Partitioning and Monitoring is an optional extension
+	  that allows the CPUs to mark load and store transactions with
+	  labels for partition-id and performance-monitoring-group.
+	  System components, such as the caches, can use the partition-id
+	  to apply a performance policy. MPAM monitors can use the
+	  partition-id and performance-monitoring-group to measure the
+	  cache occupancy or data throughput.
+
+	  Use of this extension requires CPU support, support in the
+	  memory system components (MSC), and a description from firmware
+	  of where the MSC are in the address space.
+
+	  MPAM is exposed to user-space via the resctrl pseudo filesystem.
+
+	  NOTE: This config does not currently expose MPAM to user-space via
+	  resctrl.
+
 endmenu # "ARMv8.4 architectural features"
 
 menu "ARMv8.5 architectural features"
diff --git a/arch/arm64/include/asm/cpu.h b/arch/arm64/include/asm/cpu.h
index 9b73fd0cd721..81e4157f92b7 100644
--- a/arch/arm64/include/asm/cpu.h
+++ b/arch/arm64/include/asm/cpu.h
@@ -46,6 +46,7 @@ struct cpuinfo_arm64 {
 	u64		reg_revidr;
 	u64		reg_gmid;
 	u64		reg_smidr;
+	u64		reg_mpamidr;
 
 	u64		reg_id_aa64dfr0;
 	u64		reg_id_aa64dfr1;
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index a6e5b07b64fd..0b51dfcf8a94 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -38,6 +38,8 @@ cpucap_is_possible(const unsigned int cap)
 		return IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI);
 	case ARM64_MTE:
 		return IS_ENABLED(CONFIG_ARM64_MTE);
+	case ARM64_MPAM:
+		return IS_ENABLED(CONFIG_ARM64_MPAM);
 	case ARM64_BTI:
 		return IS_ENABLED(CONFIG_ARM64_BTI);
 	case ARM64_HAS_TLB_RANGE:
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 3d261cc123c1..1abe55e62e63 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -612,6 +612,13 @@ static inline bool id_aa64pfr1_sme(u64 pfr1)
 	return val > 0;
 }
 
+static inline bool id_aa64pfr0_mpam(u64 pfr0)
+{
+	u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL1_MPAM_SHIFT);
+
+	return val > 0;
+}
+
 static inline bool id_aa64pfr1_mte(u64 pfr1)
 {
 	u32 val = cpuid_feature_extract_unsigned_field(pfr1, ID_AA64PFR1_EL1_MTE_SHIFT);
@@ -838,6 +845,11 @@ static inline bool system_supports_poe(void)
 		alternative_has_cap_unlikely(ARM64_HAS_S1POE);
 }
 
+static inline bool cpus_support_mpam(void)
+{
+	return alternative_has_cap_unlikely(ARM64_MPAM);
+}
+
 int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt);
 bool try_emulate_mrs(struct pt_regs *regs, u32 isn);
 
diff --git a/arch/arm64/include/asm/mpam.h b/arch/arm64/include/asm/mpam.h
new file mode 100644
index 000000000000..d3ef0b5cd456
--- /dev/null
+++ b/arch/arm64/include/asm/mpam.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2021 Arm Ltd. */
+
+#ifndef __ASM__MPAM_H
+#define __ASM__MPAM_H
+
+#include <linux/bitops.h>
+#include <linux/init.h>
+#include <linux/jump_label.h>
+
+#include <asm/cpucaps.h>
+#include <asm/cpufeature.h>
+#include <asm/sysreg.h>
+
+DECLARE_STATIC_KEY_FALSE(arm64_mpam_has_hcr);
+
+/* check whether all CPUs have MPAM virtualisation support */
+static inline bool mpam_cpus_have_mpam_hcr(void)
+{
+	if (IS_ENABLED(CONFIG_ARM64_MPAM))
+		return static_branch_unlikely(&arm64_mpam_has_hcr);
+	return false;
+}
+
+/* enable MPAM virtualisation support */
+static inline void __init __enable_mpam_hcr(void)
+{
+	if (IS_ENABLED(CONFIG_ARM64_MPAM))
+		static_branch_enable(&arm64_mpam_has_hcr);
+}
+
+#endif /* __ASM__MPAM_H */
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 2b112f3b7510..2f60dd957e9c 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -63,10 +63,12 @@ obj-$(CONFIG_KEXEC_CORE)		+= machine_kexec.o relocate_kernel.o	\
 obj-$(CONFIG_KEXEC_FILE)		+= machine_kexec_file.o kexec_image.o
 obj-$(CONFIG_ARM64_RELOC_TEST)		+= arm64-reloc-test.o
 arm64-reloc-test-y := reloc_test_core.o reloc_test_syms.o
+
 obj-$(CONFIG_CRASH_DUMP)		+= crash_dump.o
 obj-$(CONFIG_VMCORE_INFO)		+= vmcore_info.o
 obj-$(CONFIG_ARM_SDE_INTERFACE)		+= sdei.o
 obj-$(CONFIG_ARM64_PTR_AUTH)		+= pointer_auth.o
+obj-$(CONFIG_ARM64_MPAM)		+= mpam.o
 obj-$(CONFIG_ARM64_MTE)			+= mte.o
 obj-y					+= vdso-wrap.o
 obj-$(CONFIG_COMPAT_VDSO)		+= vdso32-wrap.o
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 718728a85430..f7fd7e3259e4 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -84,6 +84,7 @@
 #include <asm/insn.h>
 #include <asm/kvm_host.h>
 #include <asm/mmu_context.h>
+#include <asm/mpam.h>
 #include <asm/mte.h>
 #include <asm/processor.h>
 #include <asm/smp.h>
@@ -684,6 +685,14 @@ static const struct arm64_ftr_bits ftr_id_dfr1[] = {
 	ARM64_FTR_END,
 };
 
+static const struct arm64_ftr_bits ftr_mpamidr[] = {
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, MPAMIDR_EL1_PMG_MAX_SHIFT, MPAMIDR_EL1_PMG_MAX_WIDTH, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, MPAMIDR_EL1_VPMR_MAX_SHIFT, MPAMIDR_EL1_VPMR_MAX_WIDTH, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, MPAMIDR_EL1_HAS_HCR_SHIFT, 1, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, MPAMIDR_EL1_PARTID_MAX_SHIFT, MPAMIDR_EL1_PARTID_MAX_WIDTH, 0),
+	ARM64_FTR_END,
+};
+
 /*
  * Common ftr bits for a 32bit register with all hidden, strict
  * attributes, with 4bit feature fields and a default safe value of
@@ -804,6 +813,9 @@ static const struct __ftr_reg_entry {
 	ARM64_FTR_REG(SYS_ID_AA64MMFR3_EL1, ftr_id_aa64mmfr3),
 	ARM64_FTR_REG(SYS_ID_AA64MMFR4_EL1, ftr_id_aa64mmfr4),
 
+	/* Op1 = 0, CRn = 10, CRm = 4 */
+	ARM64_FTR_REG(SYS_MPAMIDR_EL1, ftr_mpamidr),
+
 	/* Op1 = 1, CRn = 0, CRm = 0 */
 	ARM64_FTR_REG(SYS_GMID_EL1, ftr_gmid),
 
@@ -1163,6 +1175,9 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
 		cpacr_restore(cpacr);
 	}
 
+	if (id_aa64pfr0_mpam(info->reg_id_aa64pfr0))
+		init_cpu_ftr_reg(SYS_MPAMIDR_EL1, info->reg_mpamidr);
+
 	if (id_aa64pfr1_mte(info->reg_id_aa64pfr1))
 		init_cpu_ftr_reg(SYS_GMID_EL1, info->reg_gmid);
 }
@@ -1419,6 +1434,11 @@ void update_cpu_features(int cpu,
 		cpacr_restore(cpacr);
 	}
 
+	if (id_aa64pfr0_mpam(info->reg_id_aa64pfr0)) {
+		taint |= check_update_ftr_reg(SYS_MPAMIDR_EL1, cpu,
+					info->reg_mpamidr, boot->reg_mpamidr);
+	}
+
 	/*
 	 * The kernel uses the LDGM/STGM instructions and the number of tags
 	 * they read/write depends on the GMID_EL1.BS field. Check that the
@@ -2377,6 +2397,39 @@ cpucap_panic_on_conflict(const struct arm64_cpu_capabilities *cap)
 	return !!(cap->type & ARM64_CPUCAP_PANIC_ON_CONFLICT);
 }
 
+static bool __maybe_unused
+test_has_mpam(const struct arm64_cpu_capabilities *entry, int scope)
+{
+	if (!has_cpuid_feature(entry, scope))
+		return false;
+
+	/* Check firmware actually enabled MPAM on this cpu. */
+	return (read_sysreg_s(SYS_MPAM1_EL1) & MPAM1_EL1_MPAMEN);
+}
+
+static void __maybe_unused
+cpu_enable_mpam(const struct arm64_cpu_capabilities *entry)
+{
+	/*
+	 * Access by the kernel (at EL1) should use the reserved PARTID
+	 * which is configured unrestricted. This avoids priority-inversion
+	 * where latency sensitive tasks have to wait for a task that has
+	 * been throttled to release the lock.
+	 */
+	write_sysreg_s(0, SYS_MPAM1_EL1);
+}
+
+static void mpam_extra_caps(void)
+{
+	u64 idr = read_sanitised_ftr_reg(SYS_MPAMIDR_EL1);
+
+	if (!IS_ENABLED(CONFIG_ARM64_MPAM))
+		return;
+
+	if (idr & MPAMIDR_EL1_HAS_HCR)
+		__enable_mpam_hcr();
+}
+
 static const struct arm64_cpu_capabilities arm64_features[] = {
 	{
 		.capability = ARM64_ALWAYS_BOOT,
@@ -2872,6 +2925,16 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 #endif
 #endif
 	},
+#endif
+#ifdef CONFIG_ARM64_MPAM
+	{
+		.desc = "Memory Partitioning And Monitoring",
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.capability = ARM64_MPAM,
+		.matches = test_has_mpam,
+		.cpu_enable = cpu_enable_mpam,
+		ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, MPAM, 1)
+	},
 #endif
 	{
 		.desc = "NV1",
@@ -3396,6 +3459,36 @@ static void verify_hyp_capabilities(void)
 	}
 }
 
+static void verify_mpam_capabilities(void)
+{
+	u64 cpu_idr = read_cpuid(ID_AA64PFR0_EL1);
+	u64 sys_idr = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
+	u16 cpu_partid_max, cpu_pmg_max, sys_partid_max, sys_pmg_max;
+
+	if (FIELD_GET(ID_AA64PFR0_EL1_MPAM_MASK, cpu_idr) !=
+	    FIELD_GET(ID_AA64PFR0_EL1_MPAM_MASK, sys_idr)) {
+		pr_crit("CPU%d: MPAM version mismatch\n", smp_processor_id());
+		cpu_die_early();
+	}
+
+	cpu_idr = read_cpuid(MPAMIDR_EL1);
+	sys_idr = read_sanitised_ftr_reg(SYS_MPAMIDR_EL1);
+	if (FIELD_GET(MPAMIDR_EL1_HAS_HCR, cpu_idr) !=
+	    FIELD_GET(MPAMIDR_EL1_HAS_HCR, sys_idr)) {
+		pr_crit("CPU%d: Missing MPAM HCR\n", smp_processor_id());
+		cpu_die_early();
+	}
+
+	cpu_partid_max = FIELD_GET(MPAMIDR_EL1_PARTID_MAX, cpu_idr);
+	cpu_pmg_max = FIELD_GET(MPAMIDR_EL1_PMG_MAX, cpu_idr);
+	sys_partid_max = FIELD_GET(MPAMIDR_EL1_PARTID_MAX, sys_idr);
+	sys_pmg_max = FIELD_GET(MPAMIDR_EL1_PMG_MAX, sys_idr);
+	if (cpu_partid_max < sys_partid_max || cpu_pmg_max < sys_pmg_max) {
+		pr_crit("CPU%d: MPAM PARTID/PMG max values are mismatched\n", smp_processor_id());
+		cpu_die_early();
+	}
+}
+
 /*
  * Run through the enabled system capabilities and enable() it on this CPU.
  * The capabilities were decided based on the available CPUs at the boot time.
@@ -3422,6 +3515,9 @@ static void verify_local_cpu_capabilities(void)
 
 	if (is_hyp_mode_available())
 		verify_hyp_capabilities();
+
+	if (cpus_support_mpam())
+		verify_mpam_capabilities();
 }
 
 void check_local_cpu_capabilities(void)
@@ -3589,6 +3685,7 @@ void __init setup_user_features(void)
 	}
 
 	minsigstksz_setup();
+	mpam_extra_caps();
 }
 
 static int enable_mismatched_32bit_el0(unsigned int cpu)
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 44718d0482b3..7d134255e7cb 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -478,6 +478,10 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
 	if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0))
 		__cpuinfo_store_cpu_32bit(&info->aarch32);
 
+	if (IS_ENABLED(CONFIG_ARM64_MPAM) &&
+	    id_aa64pfr0_mpam(info->reg_id_aa64pfr0))
+		info->reg_mpamidr = read_cpuid(MPAMIDR_EL1);
+
 	cpuinfo_detect_icache_policy(info);
 }
 
diff --git a/arch/arm64/kernel/mpam.c b/arch/arm64/kernel/mpam.c
new file mode 100644
index 000000000000..b346f0de4caa
--- /dev/null
+++ b/arch/arm64/kernel/mpam.c
@@ -0,0 +1,6 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021 Arm Ltd. */
+
+#include <asm/mpam.h>
+
+DEFINE_STATIC_KEY_FALSE(arm64_mpam_has_hcr);
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index eedb5acc21ed..16feb1ac27b7 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -60,6 +60,7 @@ HW_DBM
 KVM_HVHE
 KVM_PROTECTED_MODE
 MISMATCHED_CACHE_TYPE
+MPAM
 MTE
 MTE_ASYMM
 SME
-- 
2.25.1




* [PATCH v5 4/7] KVM: arm64: Fix missing traps of guest accesses to the MPAM registers
  2024-10-15 13:39 [PATCH v5 0/7] KVM: arm64: Hide unsupported MPAM from the guest Joey Gouly
                   ` (2 preceding siblings ...)
  2024-10-15 13:39 ` [PATCH v5 3/7] arm64: cpufeature: discover CPU support for MPAM Joey Gouly
@ 2024-10-15 13:39 ` Joey Gouly
  2024-10-17  0:10   ` Oliver Upton
                     ` (3 more replies)
  2024-10-15 13:39 ` [PATCH v5 5/7] KVM: arm64: Add a macro for creating filtered sys_reg_descs entries Joey Gouly
                   ` (3 subsequent siblings)
  7 siblings, 4 replies; 25+ messages in thread
From: Joey Gouly @ 2024-10-15 13:39 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm
  Cc: anshuman.khandual, james.morse, joey.gouly, Marc Zyngier,
	Oliver Upton, Suzuki K Poulose, Zenghui Yu, Jing Zhang,
	Shameerali Kolothum Thodi, Catalin Marinas, Will Deacon

From: James Morse <james.morse@arm.com>

commit 011e5f5bf529f ("arm64/cpufeature: Add remaining feature bits in
ID_AA64PFR0 register") exposed the MPAM field of AA64PFR0_EL1 to guests,
but didn't add trap handling.

If you are unlucky, this results in an MPAM-aware guest being delivered
an undef during boot. The host prints:
| kvm [97]: Unsupported guest sys_reg access at: ffff800080024c64 [00000005]
| { Op0( 3), Op1( 0), CRn(10), CRm( 5), Op2( 0), func_read },

Which results in:
| Internal error: Oops - Undefined instruction: 0000000002000000 [#1] PREEMPT SMP
| Modules linked in:
| CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.6.0-rc7-00559-gd89c186d50b2 #14616
| Hardware name: linux,dummy-virt (DT)
| pstate: 00000005 (nzcv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
| pc : test_has_mpam+0x18/0x30
| lr : test_has_mpam+0x10/0x30
| sp : ffff80008000bd90
...
| Call trace:
|  test_has_mpam+0x18/0x30
|  update_cpu_capabilities+0x7c/0x11c
|  setup_cpu_features+0x14/0xd8
|  smp_cpus_done+0x24/0xb8
|  smp_init+0x7c/0x8c
|  kernel_init_freeable+0xf8/0x280
|  kernel_init+0x24/0x1e0
|  ret_from_fork+0x10/0x20
| Code: 910003fd 97ffffde 72001c00 54000080 (d538a500)
| ---[ end trace 0000000000000000 ]---
| Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
| ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b ]---

Add the support to enable the traps, and handle the three guest-accessible
registers by injecting an UNDEF. This stops KVM from spamming the host
log, but doesn't yet hide the feature from the ID registers.

With MPAM v1.0 we can trap the MPAMIDR_EL1 register only if
ARM64_HAS_MPAM_HCR; with v1.1, an additional MPAM2_EL2.TIDR bit traps
MPAMIDR_EL1 on platforms that don't have MPAMHCR_EL2. Enable one of
these if either is supported. If neither is supported, the guest can
discover that the CPU has MPAM support, and how many PARTIDs etc. the
host has ... but it can't influence anything, so it's harmless.

Fixes: 011e5f5bf529f ("arm64/cpufeature: Add remaining feature bits in ID_AA64PFR0 register")
CC: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/linux-arm-kernel/20200925160102.118858-1-james.morse@arm.com/
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Joey Gouly <joey.gouly@arm.com>
---
 arch/arm64/include/asm/cpufeature.h     |  2 +-
 arch/arm64/include/asm/kvm_arm.h        |  1 +
 arch/arm64/include/asm/mpam.h           |  2 +-
 arch/arm64/kernel/image-vars.h          |  5 ++++
 arch/arm64/kvm/hyp/include/hyp/switch.h | 32 +++++++++++++++++++++++++
 arch/arm64/kvm/sys_regs.c               |  3 +++
 6 files changed, 43 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 1abe55e62e63..985d966787b1 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -845,7 +845,7 @@ static inline bool system_supports_poe(void)
 		alternative_has_cap_unlikely(ARM64_HAS_S1POE);
 }
 
-static inline bool cpus_support_mpam(void)
+static __always_inline bool cpus_support_mpam(void)
 {
 	return alternative_has_cap_unlikely(ARM64_MPAM);
 }
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 109a85ee6910..16afb7a79b15 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -103,6 +103,7 @@
 #define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
 
 #define HCRX_HOST_FLAGS (HCRX_EL2_MSCEn | HCRX_EL2_TCR2En | HCRX_EL2_EnFPM)
+#define MPAMHCR_HOST_FLAGS	0
 
 /* TCR_EL2 Registers bits */
 #define TCR_EL2_DS		(1UL << 32)
diff --git a/arch/arm64/include/asm/mpam.h b/arch/arm64/include/asm/mpam.h
index d3ef0b5cd456..3ffe0f34ffff 100644
--- a/arch/arm64/include/asm/mpam.h
+++ b/arch/arm64/include/asm/mpam.h
@@ -15,7 +15,7 @@
 DECLARE_STATIC_KEY_FALSE(arm64_mpam_has_hcr);
 
 /* check whether all CPUs have MPAM virtualisation support */
-static inline bool mpam_cpus_have_mpam_hcr(void)
+static __always_inline bool mpam_cpus_have_mpam_hcr(void)
 {
 	if (IS_ENABLED(CONFIG_ARM64_MPAM))
 		return static_branch_unlikely(&arm64_mpam_has_hcr);
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
index 8f5422ed1b75..9934e4cc76e9 100644
--- a/arch/arm64/kernel/image-vars.h
+++ b/arch/arm64/kernel/image-vars.h
@@ -101,6 +101,11 @@ KVM_NVHE_ALIAS(nvhe_hyp_panic_handler);
 /* Vectors installed by hyp-init on reset HVC. */
 KVM_NVHE_ALIAS(__hyp_stub_vectors);
 
+/* Additional static keys for cpufeatures */
+#ifdef CONFIG_ARM64_MPAM
+KVM_NVHE_ALIAS(arm64_mpam_has_hcr);
+#endif
+
 /* Static keys which are set if a vGIC trap should be handled in hyp. */
 KVM_NVHE_ALIAS(vgic_v2_cpuif_trap);
 KVM_NVHE_ALIAS(vgic_v3_cpuif_trap);
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 46d52e8a3df3..b886d82ade5f 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -29,6 +29,7 @@
 #include <asm/kvm_nested.h>
 #include <asm/fpsimd.h>
 #include <asm/debug-monitors.h>
+#include <asm/mpam.h>
 #include <asm/processor.h>
 #include <asm/traps.h>
 
@@ -204,6 +205,35 @@ static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu *vcpu)
 		__deactivate_fgt(hctxt, vcpu, kvm, HAFGRTR_EL2);
 }
 
+static inline void  __activate_traps_mpam(struct kvm_vcpu *vcpu)
+{
+	u64 r = MPAM2_EL2_TRAPMPAM0EL1 | MPAM2_EL2_TRAPMPAM1EL1;
+
+	if (!cpus_support_mpam())
+		return;
+
+	/* trap guest access to MPAMIDR_EL1 */
+	if (mpam_cpus_have_mpam_hcr()) {
+		write_sysreg_s(MPAMHCR_EL2_TRAP_MPAMIDR_EL1, SYS_MPAMHCR_EL2);
+	} else {
+		/* From v1.1 TIDR can trap MPAMIDR, set it unconditionally */
+		r |= MPAM2_EL2_TIDR;
+	}
+
+	write_sysreg_s(r, SYS_MPAM2_EL2);
+}
+
+static inline void __deactivate_traps_mpam(void)
+{
+	if (!cpus_support_mpam())
+		return;
+
+	write_sysreg_s(0, SYS_MPAM2_EL2);
+
+	if (mpam_cpus_have_mpam_hcr())
+		write_sysreg_s(MPAMHCR_HOST_FLAGS, SYS_MPAMHCR_EL2);
+}
+
 static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
 {
 	/* Trap on AArch32 cp15 c15 (impdef sysregs) accesses (EL1 or EL0) */
@@ -244,6 +274,7 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
 	}
 
 	__activate_traps_hfgxtr(vcpu);
+	__activate_traps_mpam(vcpu);
 }
 
 static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
@@ -263,6 +294,7 @@ static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
 		write_sysreg_s(HCRX_HOST_FLAGS, SYS_HCRX_EL2);
 
 	__deactivate_traps_hfgxtr(vcpu);
+	__deactivate_traps_mpam();
 }
 
 static inline void ___activate_traps(struct kvm_vcpu *vcpu, u64 hcr)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index dad88e31f953..c21840042785 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2517,8 +2517,11 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_LOREA_EL1), trap_loregion },
 	{ SYS_DESC(SYS_LORN_EL1), trap_loregion },
 	{ SYS_DESC(SYS_LORC_EL1), trap_loregion },
+	{ SYS_DESC(SYS_MPAMIDR_EL1), undef_access },
 	{ SYS_DESC(SYS_LORID_EL1), trap_loregion },
 
+	{ SYS_DESC(SYS_MPAM1_EL1), undef_access },
+	{ SYS_DESC(SYS_MPAM0_EL1), undef_access },
 	{ SYS_DESC(SYS_VBAR_EL1), access_rw, reset_val, VBAR_EL1, 0 },
 	{ SYS_DESC(SYS_DISR_EL1), NULL, reset_val, DISR_EL1, 0 },
 
-- 
2.25.1



^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v5 5/7] KVM: arm64: Add a macro for creating filtered sys_reg_descs entries
  2024-10-15 13:39 [PATCH v5 0/7] KVM: arm64: Hide unsupported MPAM from the guest Joey Gouly
                   ` (3 preceding siblings ...)
  2024-10-15 13:39 ` [PATCH v5 4/7] KVM: arm64: Fix missing traps of guest accesses to the MPAM registers Joey Gouly
@ 2024-10-15 13:39 ` Joey Gouly
  2024-10-17  2:37   ` Gavin Shan
  2024-10-15 13:39 ` [PATCH v5 6/7] KVM: arm64: Disable MPAM visibility by default and ignore VMM writes Joey Gouly
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 25+ messages in thread
From: Joey Gouly @ 2024-10-15 13:39 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm
  Cc: anshuman.khandual, james.morse, joey.gouly, Marc Zyngier,
	Oliver Upton, Suzuki K Poulose, Zenghui Yu, Jing Zhang,
	Shameerali Kolothum Thodi, Catalin Marinas, Will Deacon

From: James Morse <james.morse@arm.com>

The sys_reg_descs array holds function pointers and reset values for
managing the user-space and guest view of system registers. These
entries are mostly created by a set of macros, as only some combinations
of behaviour are needed.

If a register needs special treatment, its sys_reg_descs entry is
open-coded. This is true of some id registers where the value provided
by user-space is validated by some helpers.

Before adding another one of these, add a helper that covers the
existing special cases. 'ID_FILTERED' expects helpers to set the
user-space value, and retrieve the modified reset value.

Like ID_WRITABLE() this uses id_visibility(), which should have no functional
change for the registers converted to use ID_FILTERED().

read_sanitised_id_aa64dfr0_el1() and read_sanitised_id_aa64pfr0_el1() have been
refactored so that they are called from kvm_read_sanitised_id_reg(), to be
consistent with ID_WRITABLE().

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Joey Gouly <joey.gouly@arm.com>
---
 arch/arm64/kvm/sys_regs.c | 62 +++++++++++++++++++++++----------------
 1 file changed, 36 insertions(+), 26 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c21840042785..2e859f566b63 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1509,6 +1509,9 @@ static u8 pmuver_to_perfmon(u8 pmuver)
 	}
 }
 
+static u64 sanitise_id_aa64pfr0_el1(const struct kvm_vcpu *vcpu, u64 val);
+static u64 sanitise_id_aa64dfr0_el1(const struct kvm_vcpu *vcpu, u64 val);
+
 /* Read a sanitised cpufeature ID register by sys_reg_desc */
 static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
 				       const struct sys_reg_desc *r)
@@ -1522,6 +1525,12 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
 	val = read_sanitised_ftr_reg(id);
 
 	switch (id) {
+	case SYS_ID_AA64DFR0_EL1:
+		val = sanitise_id_aa64dfr0_el1(vcpu, val);
+		break;
+	case SYS_ID_AA64PFR0_EL1:
+		val = sanitise_id_aa64pfr0_el1(vcpu, val);
+		break;
 	case SYS_ID_AA64PFR1_EL1:
 		if (!kvm_has_mte(vcpu->kvm))
 			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE);
@@ -1683,11 +1692,8 @@ static unsigned int fp8_visibility(const struct kvm_vcpu *vcpu,
 	return REG_HIDDEN;
 }
 
-static u64 read_sanitised_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
-					  const struct sys_reg_desc *rd)
+static u64 sanitise_id_aa64pfr0_el1(const struct kvm_vcpu *vcpu, u64 val)
 {
-	u64 val = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
-
 	if (!vcpu_has_sve(vcpu))
 		val &= ~ID_AA64PFR0_EL1_SVE_MASK;
 
@@ -1728,11 +1734,8 @@ static u64 read_sanitised_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
 	(val);								       \
 })
 
-static u64 read_sanitised_id_aa64dfr0_el1(struct kvm_vcpu *vcpu,
-					  const struct sys_reg_desc *rd)
+static u64 sanitise_id_aa64dfr0_el1(const struct kvm_vcpu *vcpu, u64 val)
 {
-	u64 val = read_sanitised_ftr_reg(SYS_ID_AA64DFR0_EL1);
-
 	val = ID_REG_LIMIT_FIELD_ENUM(val, ID_AA64DFR0_EL1, DebugVer, V8P8);
 
 	/*
@@ -1825,6 +1828,12 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
 	return set_id_reg(vcpu, rd, val);
 }
 
+static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
+			       const struct sys_reg_desc *rd, u64 val)
+{
+	return set_id_reg(vcpu, rd, val);
+}
+
 /*
  * cpufeature ID register user accessors
  *
@@ -2141,6 +2150,15 @@ static bool bad_redir_trap(struct kvm_vcpu *vcpu,
 	.val = mask,				\
 }
 
+/* sys_reg_desc initialiser for cpufeature ID registers that need filtering */
+#define ID_FILTERED(sysreg, name, mask) {	\
+	ID_DESC(sysreg),				\
+	.set_user = set_##name,				\
+	.visibility = id_visibility,			\
+	.reset = kvm_read_sanitised_id_reg,		\
+	.val = (mask),					\
+}
+
 /*
  * sys_reg_desc initialiser for architecturally unallocated cpufeature ID
  * register with encoding Op0=3, Op1=0, CRn=0, CRm=crm, Op2=op2
@@ -2365,17 +2383,13 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 
 	/* AArch64 ID registers */
 	/* CRm=4 */
-	{ SYS_DESC(SYS_ID_AA64PFR0_EL1),
-	  .access = access_id_reg,
-	  .get_user = get_id_reg,
-	  .set_user = set_id_reg,
-	  .reset = read_sanitised_id_aa64pfr0_el1,
-	  .val = ~(ID_AA64PFR0_EL1_AMU |
-		   ID_AA64PFR0_EL1_MPAM |
-		   ID_AA64PFR0_EL1_SVE |
-		   ID_AA64PFR0_EL1_RAS |
-		   ID_AA64PFR0_EL1_AdvSIMD |
-		   ID_AA64PFR0_EL1_FP), },
+	ID_FILTERED(ID_AA64PFR0_EL1, id_aa64pfr0_el1,
+		    ~(ID_AA64PFR0_EL1_AMU |
+		      ID_AA64PFR0_EL1_MPAM |
+		      ID_AA64PFR0_EL1_SVE |
+		      ID_AA64PFR0_EL1_RAS |
+		      ID_AA64PFR0_EL1_AdvSIMD |
+		      ID_AA64PFR0_EL1_FP)),
 	ID_SANITISED(ID_AA64PFR1_EL1),
 	ID_WRITABLE(ID_AA64PFR2_EL1, ID_AA64PFR2_EL1_FPMR),
 	ID_UNALLOCATED(4,3),
@@ -2385,13 +2399,9 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	ID_WRITABLE(ID_AA64FPFR0_EL1, ~ID_AA64FPFR0_EL1_RES0),
 
 	/* CRm=5 */
-	{ SYS_DESC(SYS_ID_AA64DFR0_EL1),
-	  .access = access_id_reg,
-	  .get_user = get_id_reg,
-	  .set_user = set_id_aa64dfr0_el1,
-	  .reset = read_sanitised_id_aa64dfr0_el1,
-	  .val = ID_AA64DFR0_EL1_PMUVer_MASK |
-		 ID_AA64DFR0_EL1_DebugVer_MASK, },
+	ID_FILTERED(ID_AA64DFR0_EL1, id_aa64dfr0_el1,
+		    ID_AA64DFR0_EL1_PMUVer_MASK |
+		    ID_AA64DFR0_EL1_DebugVer_MASK),
 	ID_SANITISED(ID_AA64DFR1_EL1),
 	ID_UNALLOCATED(5,2),
 	ID_UNALLOCATED(5,3),
-- 
2.25.1




* [PATCH v5 6/7] KVM: arm64: Disable MPAM visibility by default and ignore VMM writes
  2024-10-15 13:39 [PATCH v5 0/7] KVM: arm64: Hide unsupported MPAM from the guest Joey Gouly
                   ` (4 preceding siblings ...)
  2024-10-15 13:39 ` [PATCH v5 5/7] KVM: arm64: Add a macro for creating filtered sys_reg_descs entries Joey Gouly
@ 2024-10-15 13:39 ` Joey Gouly
  2024-10-17  2:37   ` Gavin Shan
  2024-10-15 13:39 ` [PATCH v5 7/7] KVM: arm64: selftests: Test ID_AA64PFR0.MPAM isn't completely ignored Joey Gouly
  2024-10-17  8:34 ` [PATCH v5 0/7] KVM: arm64: Hide unsupported MPAM from the guest Shameerali Kolothum Thodi
  7 siblings, 1 reply; 25+ messages in thread
From: Joey Gouly @ 2024-10-15 13:39 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm
  Cc: anshuman.khandual, james.morse, joey.gouly, Marc Zyngier,
	Oliver Upton, Suzuki K Poulose, Zenghui Yu, Jing Zhang,
	Shameerali Kolothum Thodi, Catalin Marinas, Will Deacon

From: James Morse <james.morse@arm.com>

commit 011e5f5bf529f ("arm64/cpufeature: Add remaining feature bits in
ID_AA64PFR0 register") exposed the MPAM field of AA64PFR0_EL1 to guests,
but didn't add trap handling. A previous patch supplied the missing trap
handling.

Existing VMs that have the MPAM field of ID_AA64PFR0_EL1 set need to
be migratable, but there is little point enabling the MPAM CPU
interface on new VMs until there is something a guest can do with it.

Clear the MPAM field from the guest's ID_AA64PFR0_EL1 and, on hardware
that supports MPAM, politely ignore the VMM's attempts to set this field.

Guests exposed to this bug have the sanitised value of the MPAM field,
so only the correct value needs to be ignored. This means the field
can continue to be used to block migration to incompatible hardware
(between MPAM=1 and MPAM=5), and the VMM can't rely on the field
being ignored.
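The accept-then-ignore rule can be modelled as a pure function (the mask
is hardcoded here for illustration; the real check is set_id_aa64pfr0_el1()
in the diff below):

```c
#include <assert.h>
#include <stdint.h>

/* ID_AA64PFR0_EL1.MPAM occupies bits [43:40]. */
#define MPAM_MASK	(0xfull << 40)

/*
 * If user-space writes back exactly the MPAM value an older KVM exposed
 * (the sanitised hardware value), clear the field so the write is
 * accepted but the guest still sees 0. Any other non-zero value is left
 * in place, to be rejected by the usual writable-mask check.
 */
static uint64_t filter_mpam(uint64_t hw_val, uint64_t user_val)
{
	if ((hw_val & MPAM_MASK) == (user_val & MPAM_MASK))
		user_val &= ~MPAM_MASK;

	return user_val;
}
```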

Signed-off-by: James Morse <james.morse@arm.com>
Co-developed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Joey Gouly <joey.gouly@arm.com>
---
 arch/arm64/kvm/sys_regs.c | 44 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 41 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 2e859f566b63..d97ccf1c1558 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1536,6 +1536,7 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
 			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE);
 
 		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME);
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MPAM_frac);
 		break;
 	case SYS_ID_AA64PFR2_EL1:
 		/* We only expose FPMR */
@@ -1721,6 +1722,13 @@ static u64 sanitise_id_aa64pfr0_el1(const struct kvm_vcpu *vcpu, u64 val)
 
 	val &= ~ID_AA64PFR0_EL1_AMU_MASK;
 
+	/*
+	 * MPAM is disabled by default as KVM also needs a set of PARTID to
+	 * program the MPAMVPMx_EL2 PARTID remapping registers with. But some
+	 * older kernels let the guest see the ID bit.
+	 */
+	val &= ~ID_AA64PFR0_EL1_MPAM_MASK;
+
 	return val;
 }
 
@@ -1829,9 +1837,39 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
 }
 
 static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
-			       const struct sys_reg_desc *rd, u64 val)
+			       const struct sys_reg_desc *rd, u64 user_val)
 {
-	return set_id_reg(vcpu, rd, val);
+	u64 hw_val = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
+	u64 mpam_mask = ID_AA64PFR0_EL1_MPAM_MASK;
+
+	/*
+	 * Commit 011e5f5bf529f ("arm64/cpufeature: Add remaining feature bits
+	 * in ID_AA64PFR0 register") exposed the MPAM field of AA64PFR0_EL1 to
+	 * guests, but didn't add trap handling. KVM doesn't support MPAM and
+	 * always returns an UNDEF for these registers. The guest must see 0
+	 * for this field.
+	 *
+	 * But KVM must also accept values from user-space that were provided
+	 * by KVM. On CPUs that support MPAM, permit user-space to write
+	 * the sanitizied value to ID_AA64PFR0_EL1.MPAM, but ignore this field.
+	 */
+	if ((hw_val & mpam_mask) == (user_val & mpam_mask))
+		user_val &= ~ID_AA64PFR0_EL1_MPAM_MASK;
+
+	return set_id_reg(vcpu, rd, user_val);
+}
+
+static int set_id_aa64pfr1_el1(struct kvm_vcpu *vcpu,
+			       const struct sys_reg_desc *rd, u64 user_val)
+{
+	u64 hw_val = read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1);
+	u64 mpam_mask = ID_AA64PFR1_EL1_MPAM_frac_MASK;
+
+	/* See set_id_aa64pfr0_el1 for comment about MPAM */
+	if ((hw_val & mpam_mask) == (user_val & mpam_mask))
+		user_val &= ~ID_AA64PFR1_EL1_MPAM_frac_MASK;
+
+	return set_id_reg(vcpu, rd, user_val);
 }
 
 /*
@@ -2390,7 +2428,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 		      ID_AA64PFR0_EL1_RAS |
 		      ID_AA64PFR0_EL1_AdvSIMD |
 		      ID_AA64PFR0_EL1_FP)),
-	ID_SANITISED(ID_AA64PFR1_EL1),
+	ID_FILTERED(ID_AA64PFR1_EL1, id_aa64pfr1_el1, 0),
 	ID_WRITABLE(ID_AA64PFR2_EL1, ID_AA64PFR2_EL1_FPMR),
 	ID_UNALLOCATED(4,3),
 	ID_WRITABLE(ID_AA64ZFR0_EL1, ~ID_AA64ZFR0_EL1_RES0),
-- 
2.25.1




* [PATCH v5 7/7] KVM: arm64: selftests: Test ID_AA64PFR0.MPAM isn't completely ignored
  2024-10-15 13:39 [PATCH v5 0/7] KVM: arm64: Hide unsupported MPAM from the guest Joey Gouly
                   ` (5 preceding siblings ...)
  2024-10-15 13:39 ` [PATCH v5 6/7] KVM: arm64: Disable MPAM visibility by default and ignore VMM writes Joey Gouly
@ 2024-10-15 13:39 ` Joey Gouly
  2024-10-17  0:41   ` Gavin Shan
  2024-10-17  8:34 ` [PATCH v5 0/7] KVM: arm64: Hide unsupported MPAM from the guest Shameerali Kolothum Thodi
  7 siblings, 1 reply; 25+ messages in thread
From: Joey Gouly @ 2024-10-15 13:39 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm
  Cc: anshuman.khandual, james.morse, joey.gouly, Marc Zyngier,
	Oliver Upton, Suzuki K Poulose, Zenghui Yu, Jing Zhang,
	Shameerali Kolothum Thodi, Catalin Marinas, Will Deacon

From: James Morse <james.morse@arm.com>

The ID_AA64PFR0.MPAM bit was previously accidentally exposed to guests,
and is ignored by KVM. KVM will always present the guest with 0 here,
and trap the MPAM system registers to inject an undef.

But, this value is still needed to prevent migration when the value
is incompatible with the target hardware. Add a kvm unit test to try
and write multiple values to ID_AA64PFR0.MPAM. Only the hardware value
previously exposed should be ignored, all other values should be
rejected.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Joey Gouly <joey.gouly@arm.com>
---
 .../selftests/kvm/aarch64/set_id_regs.c       | 100 +++++++++++++++++-
 1 file changed, 99 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/aarch64/set_id_regs.c b/tools/testing/selftests/kvm/aarch64/set_id_regs.c
index 2a3fe7914b72..d985ead2cc45 100644
--- a/tools/testing/selftests/kvm/aarch64/set_id_regs.c
+++ b/tools/testing/selftests/kvm/aarch64/set_id_regs.c
@@ -433,6 +433,103 @@ static void test_vm_ftr_id_regs(struct kvm_vcpu *vcpu, bool aarch64_only)
 	}
 }
 
+#define MPAM_IDREG_TEST	6
+static void test_user_set_mpam_reg(struct kvm_vcpu *vcpu)
+{
+	uint64_t masks[KVM_ARM_FEATURE_ID_RANGE_SIZE];
+	struct reg_mask_range range = {
+		.addr = (__u64)masks,
+	};
+	uint64_t val, ftr_mask;
+	int idx, err;
+
+	/*
+	 * If ID_AA64PFR0.MPAM is _not_ officially modifiable and is zero,
+	 * check that if it can be set to 1, (i.e. it is supported by the
+	 * hardware), that it can't be set to other values.
+	 */
+
+	/* Get writable masks for feature ID registers */
+	memset(range.reserved, 0, sizeof(range.reserved));
+	vm_ioctl(vcpu->vm, KVM_ARM_GET_REG_WRITABLE_MASKS, &range);
+
+	/* Writeable? Nothing to test! */
+	idx = encoding_to_range_idx(SYS_ID_AA64PFR0_EL1);
+	ftr_mask = ID_AA64PFR0_EL1_MPAM_MASK;
+	if ((masks[idx] & ftr_mask) == ftr_mask) {
+		ksft_test_result_skip("ID_AA64PFR0_EL1.MPAM is officially writable, nothing to test\n");
+		return;
+	}
+
+	/* Get the id register value */
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1), &val);
+
+	/* Try to set MPAM=0. This should always be possible. */
+	val &= ~GENMASK_ULL(44, 40);
+	val |= FIELD_PREP(GENMASK_ULL(44, 40), 0);
+	err = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1), val);
+	if (err)
+		ksft_test_result_fail("ID_AA64PFR0_EL1.MPAM=0 was not accepted\n");
+	else
+		ksft_test_result_pass("ID_AA64PFR0_EL1.MPAM=0 worked\n");
+
+	/* Try to set MPAM=1 */
+	val &= ~GENMASK_ULL(44, 40);
+	val |= FIELD_PREP(GENMASK_ULL(44, 40), 1);
+	err = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1), val);
+	if (err)
+		ksft_test_result_skip("ID_AA64PFR0_EL1.MPAM is not writable, nothing to test\n");
+	else
+		ksft_test_result_pass("ID_AA64PFR0_EL1.MPAM=1 was writable\n");
+
+	/* Try to set MPAM=2 */
+	val &= ~GENMASK_ULL(43, 40);
+	val |= FIELD_PREP(GENMASK_ULL(43, 40), 2);
+	err = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1), val);
+	if (err)
+		ksft_test_result_pass("ID_AA64PFR0_EL1.MPAM not arbitrarily modifiable\n");
+	else
+		ksft_test_result_fail("ID_AA64PFR0_EL1.MPAM value should not be ignored\n");
+
+	/* And again for ID_AA64PFR1_EL1.MPAM_frac */
+	idx = encoding_to_range_idx(SYS_ID_AA64PFR1_EL1);
+	ftr_mask = ID_AA64PFR1_EL1_MPAM_frac_MASK;
+	if ((masks[idx] & ftr_mask) == ftr_mask) {
+		ksft_test_result_skip("ID_AA64PFR1_EL1.MPAM_frac is officially writable, nothing to test\n");
+		return;
+	}
+
+	/* Get the id register value */
+	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR1_EL1), &val);
+
+	/* Try to set MPAM_frac=0. This should always be possible. */
+	val &= ~GENMASK_ULL(19, 16);
+	val |= FIELD_PREP(GENMASK_ULL(19, 16), 0);
+	err = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR1_EL1), val);
+	if (err)
+		ksft_test_result_fail("ID_AA64PFR0_EL1.MPAM_frac=0 was not accepted\n");
+	else
+		ksft_test_result_pass("ID_AA64PFR0_EL1.MPAM_frac=0 worked\n");
+
+	/* Try to set MPAM_frac=1 */
+	val &= ~GENMASK_ULL(19, 16);
+	val |= FIELD_PREP(GENMASK_ULL(19, 16), 1);
+	err = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR1_EL1), val);
+	if (err)
+		ksft_test_result_skip("ID_AA64PFR1_EL1.MPAM_frac is not writable, nothing to test\n");
+	else
+		ksft_test_result_pass("ID_AA64PFR0_EL1.MPAM_frac=1 was writable\n");
+
+	/* Try to set MPAM_frac=2 */
+	val &= ~GENMASK_ULL(19, 16);
+	val |= FIELD_PREP(GENMASK_ULL(19, 16), 2);
+	err = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR1_EL1), val);
+	if (err)
+		ksft_test_result_pass("ID_AA64PFR1_EL1.MPAM_frac not arbitrarily modifiable\n");
+	else
+		ksft_test_result_fail("ID_AA64PFR1_EL1.MPAM_frac value should not be ignored\n");
+}
+
 static void test_guest_reg_read(struct kvm_vcpu *vcpu)
 {
 	bool done = false;
@@ -571,12 +668,13 @@ int main(void)
 		   ARRAY_SIZE(ftr_id_aa64isar2_el1) + ARRAY_SIZE(ftr_id_aa64pfr0_el1) +
 		   ARRAY_SIZE(ftr_id_aa64mmfr0_el1) + ARRAY_SIZE(ftr_id_aa64mmfr1_el1) +
 		   ARRAY_SIZE(ftr_id_aa64mmfr2_el1) + ARRAY_SIZE(ftr_id_aa64zfr0_el1) -
-		   ARRAY_SIZE(test_regs) + 2;
+		   ARRAY_SIZE(test_regs) + 2 + MPAM_IDREG_TEST;
 
 	ksft_set_plan(test_cnt);
 
 	test_vm_ftr_id_regs(vcpu, aarch64_only);
 	test_vcpu_ftr_id_regs(vcpu);
+	test_user_set_mpam_reg(vcpu);
 
 	test_guest_reg_read(vcpu);
 
-- 
2.25.1




* Re: [PATCH v5 1/7] arm64/sysreg: Convert existing MPAM sysregs and add the remaining entries
  2024-10-15 13:39 ` [PATCH v5 1/7] arm64/sysreg: Convert existing MPAM sysregs and add the remaining entries Joey Gouly
@ 2024-10-16 23:41   ` Gavin Shan
  0 siblings, 0 replies; 25+ messages in thread
From: Gavin Shan @ 2024-10-16 23:41 UTC (permalink / raw)
  To: Joey Gouly, linux-arm-kernel, kvmarm
  Cc: anshuman.khandual, james.morse, Marc Zyngier, Oliver Upton,
	Suzuki K Poulose, Zenghui Yu, Jing Zhang,
	Shameerali Kolothum Thodi, Catalin Marinas, Will Deacon

On 10/15/24 11:39 PM, Joey Gouly wrote:
> From: James Morse <james.morse@arm.com>
> 
> Move the existing MPAM system register defines from sysreg.h to
> tools/sysreg and add the remaining system registers.
> 
> Signed-off-by: James Morse <james.morse@arm.com>
> Signed-off-by: Joey Gouly <joey.gouly@arm.com>
> ---
>   arch/arm64/include/asm/sysreg.h |  12 ---
>   arch/arm64/tools/sysreg         | 161 ++++++++++++++++++++++++++++++++
>   2 files changed, 161 insertions(+), 12 deletions(-)
> 

Reviewed-by: Gavin Shan <gshan@redhat.com>




* Re: [PATCH v5 2/7] arm64: head.S: Initialise MPAM EL2 registers and disable traps
  2024-10-15 13:39 ` [PATCH v5 2/7] arm64: head.S: Initialise MPAM EL2 registers and disable traps Joey Gouly
@ 2024-10-16 23:41   ` Gavin Shan
  0 siblings, 0 replies; 25+ messages in thread
From: Gavin Shan @ 2024-10-16 23:41 UTC (permalink / raw)
  To: Joey Gouly, linux-arm-kernel, kvmarm
  Cc: anshuman.khandual, james.morse, Marc Zyngier, Oliver Upton,
	Suzuki K Poulose, Zenghui Yu, Jing Zhang,
	Shameerali Kolothum Thodi, Catalin Marinas, Will Deacon

On 10/15/24 11:39 PM, Joey Gouly wrote:
> From: James Morse <james.morse@arm.com>
> 
> Add code to head.S's el2_setup to detect MPAM and disable any EL2 traps.
> This register resets to an unknown value, setting it to the default
> parititons/pmg before we enable the MMU is the best thing to do.
> 
> Kexec/kdump will depend on this if the previous kernel left the CPU
> configured with a restrictive configuration.
> 
> If linux is booted at the highest implemented exception level el2_setup
> will clear the enable bit, disabling MPAM.
> 
> This code can't be enabled until a subsequent patch adds the Kconfig
> and cpufeature boiler plate.
> 
> Signed-off-by: James Morse <james.morse@arm.com>
> Signed-off-by: Joey Gouly <joey.gouly@arm.com>
> ---
>   arch/arm64/include/asm/el2_setup.h | 16 ++++++++++++++++
>   1 file changed, 16 insertions(+)
> 

Reviewed-by: Gavin Shan <gshan@redhat.com>




* Re: [PATCH v5 4/7] KVM: arm64: Fix missing traps of guest accesses to the MPAM registers
  2024-10-15 13:39 ` [PATCH v5 4/7] KVM: arm64: Fix missing traps of guest accesses to the MPAM registers Joey Gouly
@ 2024-10-17  0:10   ` Oliver Upton
  2024-10-17 10:58     ` Joey Gouly
  2024-10-17  2:37   ` Gavin Shan
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 25+ messages in thread
From: Oliver Upton @ 2024-10-17  0:10 UTC (permalink / raw)
  To: Joey Gouly
  Cc: linux-arm-kernel, kvmarm, anshuman.khandual, james.morse,
	Marc Zyngier, Suzuki K Poulose, Zenghui Yu, Jing Zhang,
	Shameerali Kolothum Thodi, Catalin Marinas, Will Deacon

Hi Joey,

On Tue, Oct 15, 2024 at 02:39:20PM +0100, Joey Gouly wrote:
> +static inline void  __activate_traps_mpam(struct kvm_vcpu *vcpu)
> +{
> +	u64 r = MPAM2_EL2_TRAPMPAM0EL1 | MPAM2_EL2_TRAPMPAM1EL1;
> +
> +	if (!cpus_support_mpam())
> +		return;
> +
> +	/* trap guest access to MPAMIDR_EL1 */
> +	if (mpam_cpus_have_mpam_hcr()) {
> +		write_sysreg_s(MPAMHCR_EL2_TRAP_MPAMIDR_EL1, SYS_MPAMHCR_EL2);
> +	} else {
> +		/* From v1.1 TIDR can trap MPAMIDR, set it unconditionally */
> +		r |= MPAM2_EL2_TIDR;
> +	}
> +
> +	write_sysreg_s(r, SYS_MPAM2_EL2);
> +}
> +
> +static inline void __deactivate_traps_mpam(void)
> +{
> +	if (!cpus_support_mpam())
> +		return;
> +
> +	write_sysreg_s(0, SYS_MPAM2_EL2);
> +
> +	if (mpam_cpus_have_mpam_hcr())
> +		write_sysreg_s(MPAMHCR_HOST_FLAGS, SYS_MPAMHCR_EL2);
> +}

TBH, I think our trap configuration should *not* be conditioned on
CONFIG_ARM64_MPAM. Otherwise we're silently allowing the guest to change
things under the nose of KVM/host kernel, assuming an unkind firmware
that left the EL2 trap configuration in a permissive state.

WDYT about detecting the feature && enforcing traps regardless of the Kconfig?
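Something like the following, maybe (illustrative only, names made up --
not code from this series):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical: whether the CPU implements MPAM, discovered at runtime
 * regardless of whether the kernel was built with CONFIG_ARM64_MPAM. */
static bool cpu_has_mpam;

/*
 * Enforce the traps whenever the hardware has the feature, so a guest
 * can never reconfigure MPAM behind the host's back -- even on kernels
 * built without MPAM support, assuming permissive EL3 firmware.
 */
static bool should_trap_mpam(bool config_arm64_mpam)
{
	(void)config_arm64_mpam;	/* deliberately not consulted */
	return cpu_has_mpam;
}
```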

-- 
Thanks,
Oliver



* Re: [PATCH v5 7/7] KVM: arm64: selftests: Test ID_AA64PFR0.MPAM isn't completely ignored
  2024-10-15 13:39 ` [PATCH v5 7/7] KVM: arm64: selftests: Test ID_AA64PFR0.MPAM isn't completely ignored Joey Gouly
@ 2024-10-17  0:41   ` Gavin Shan
  2024-10-17 11:03     ` Joey Gouly
  0 siblings, 1 reply; 25+ messages in thread
From: Gavin Shan @ 2024-10-17  0:41 UTC (permalink / raw)
  To: Joey Gouly, linux-arm-kernel, kvmarm
  Cc: anshuman.khandual, james.morse, Marc Zyngier, Oliver Upton,
	Suzuki K Poulose, Zenghui Yu, Jing Zhang,
	Shameerali Kolothum Thodi, Catalin Marinas, Will Deacon

On 10/15/24 11:39 PM, Joey Gouly wrote:
> From: James Morse <james.morse@arm.com>
> 
> The ID_AA64PFR0.MPAM bit was previously accidentally exposed to guests,
> and is ignored by KVM. KVM will always present the guest with 0 here,
> and trap the MPAM system registers to inject an undef.
> 
> But, this value is still needed to prevent migration when the value
> is incompatible with the target hardware. Add a kvm unit test to try
> and write multiple values to ID_AA64PFR0.MPAM. Only the hardware value
> previously exposed should be ignored, all other values should be
> rejected.
> 
> Signed-off-by: James Morse <james.morse@arm.com>
> Signed-off-by: Joey Gouly <joey.gouly@arm.com>
> ---
>   .../selftests/kvm/aarch64/set_id_regs.c       | 100 +++++++++++++++++-
>   1 file changed, 99 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/testing/selftests/kvm/aarch64/set_id_regs.c b/tools/testing/selftests/kvm/aarch64/set_id_regs.c
> index 2a3fe7914b72..d985ead2cc45 100644
> --- a/tools/testing/selftests/kvm/aarch64/set_id_regs.c
> +++ b/tools/testing/selftests/kvm/aarch64/set_id_regs.c
> @@ -433,6 +433,103 @@ static void test_vm_ftr_id_regs(struct kvm_vcpu *vcpu, bool aarch64_only)
>   	}
>   }
>   
> +#define MPAM_IDREG_TEST	6
> +static void test_user_set_mpam_reg(struct kvm_vcpu *vcpu)
> +{
> +	uint64_t masks[KVM_ARM_FEATURE_ID_RANGE_SIZE];
> +	struct reg_mask_range range = {
> +		.addr = (__u64)masks,
> +	};
> +	uint64_t val, ftr_mask;
> +	int idx, err;
> +
> +	/*
> +	 * If ID_AA64PFR0.MPAM is _not_ officially modifiable and is zero,
> +	 * check that if it can be set to 1, (i.e. it is supported by the
> +	 * hardware), that it can't be set to other values.
> +	 */
> +
> +	/* Get writable masks for feature ID registers */
> +	memset(range.reserved, 0, sizeof(range.reserved));
> +	vm_ioctl(vcpu->vm, KVM_ARM_GET_REG_WRITABLE_MASKS, &range);
> +
> +	/* Writeable? Nothing to test! */
> +	idx = encoding_to_range_idx(SYS_ID_AA64PFR0_EL1);
> +	ftr_mask = ID_AA64PFR0_EL1_MPAM_MASK;
> +	if ((masks[idx] & ftr_mask) == ftr_mask) {
> +		ksft_test_result_skip("ID_AA64PFR0_EL1.MPAM is officially writable, nothing to test\n");
> +		return;
> +	}
> +

The question is: shall we proceed to test ID_AA64PFR1_EL1.MPAM_frac instead of
bailing out when ID_AA64PFR0_EL1.MPAM accepts arbitrary values? To me,
ID_AA64PFR0_EL1.MPAM accepting arbitrary values doesn't mean
ID_AA64PFR1_EL1.MPAM_frac can accept them too.

@ftr_mask can be dropped since it's initialized to a fixed mask and used
only once.

     if ((masks[idx] & ID_AA64PFR0_EL1_MPAM_MASK) == ID_AA64PFR0_EL1_MPAM_MASK) {
         :
     }

> +	/* Get the id register value */
> +	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1), &val);
> +
> +	/* Try to set MPAM=0. This should always be possible. */
> +	val &= ~GENMASK_ULL(44, 40);
> +	val |= FIELD_PREP(GENMASK_ULL(44, 40), 0);
> +	err = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1), val);
> +	if (err)
> +		ksft_test_result_fail("ID_AA64PFR0_EL1.MPAM=0 was not accepted\n");
> +	else
> +		ksft_test_result_pass("ID_AA64PFR0_EL1.MPAM=0 worked\n");
> +
> +	/* Try to set MPAM=1 */
> +	val &= ~GENMASK_ULL(44, 40);
> +	val |= FIELD_PREP(GENMASK_ULL(44, 40), 1);
> +	err = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1), val);
> +	if (err)
> +		ksft_test_result_skip("ID_AA64PFR0_EL1.MPAM is not writable, nothing to test\n");
> +	else
> +		ksft_test_result_pass("ID_AA64PFR0_EL1.MPAM=1 was writable\n");
> +
> +	/* Try to set MPAM=2 */
> +	val &= ~GENMASK_ULL(43, 40);
> +	val |= FIELD_PREP(GENMASK_ULL(43, 40), 2);
> +	err = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1), val);
> +	if (err)
> +		ksft_test_result_pass("ID_AA64PFR0_EL1.MPAM not arbitrarily modifiable\n");
> +	else
> +		ksft_test_result_fail("ID_AA64PFR0_EL1.MPAM value should not be ignored\n");
> +

GENMASK_ULL(44, 40) looks wrong to me. According to the spec, GENMASK_ULL(43, 40)
is the correct mask. Besides, I think it would be nice to use ID_AA64PFR0_EL1_MPAM_MASK
here, for example:

     val &= ~ID_AA64PFR0_EL1_MPAM_MASK;
     val |= FIELD_PREP(ID_AA64PFR0_EL1_MPAM_MASK, 2);

> +	/* And again for ID_AA64PFR1_EL1.MPAM_frac */
> +	idx = encoding_to_range_idx(SYS_ID_AA64PFR1_EL1);
> +	ftr_mask = ID_AA64PFR1_EL1_MPAM_frac_MASK;
> +	if ((masks[idx] & ftr_mask) == ftr_mask) {
> +		ksft_test_result_skip("ID_AA64PFR1_EL1.MPAM_frac is officially writable, nothing to test\n");
> +		return;
> +	}
> +
> +	/* Get the id register value */
> +	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR1_EL1), &val);
> +
> +	/* Try to set MPAM_frac=0. This should always be possible. */
> +	val &= ~GENMASK_ULL(19, 16);
> +	val |= FIELD_PREP(GENMASK_ULL(19, 16), 0);
> +	err = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR1_EL1), val);
> +	if (err)
> +		ksft_test_result_fail("ID_AA64PFR0_EL1.MPAM_frac=0 was not accepted\n");
> +	else
> +		ksft_test_result_pass("ID_AA64PFR0_EL1.MPAM_frac=0 worked\n");
> +
> +	/* Try to set MPAM_frac=1 */
> +	val &= ~GENMASK_ULL(19, 16);
> +	val |= FIELD_PREP(GENMASK_ULL(19, 16), 1);
> +	err = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR1_EL1), val);
> +	if (err)
> +		ksft_test_result_skip("ID_AA64PFR1_EL1.MPAM_frac is not writable, nothing to test\n");
> +	else
> +		ksft_test_result_pass("ID_AA64PFR0_EL1.MPAM_frac=1 was writable\n");
> +
> +	/* Try to set MPAM_frac=2 */
> +	val &= ~GENMASK_ULL(19, 16);
> +	val |= FIELD_PREP(GENMASK_ULL(19, 16), 2);
> +	err = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR1_EL1), val);
> +	if (err)
> +		ksft_test_result_pass("ID_AA64PFR1_EL1.MPAM_frac not arbitrarily modifiable\n");
> +	else
> +		ksft_test_result_fail("ID_AA64PFR1_EL1.MPAM_frac value should not be ignored\n");
> +}
> +

Similarly, ID_AA64PFR1_EL1_MPAM_frac_MASK can be used?

>   static void test_guest_reg_read(struct kvm_vcpu *vcpu)
>   {
>   	bool done = false;
> @@ -571,12 +668,13 @@ int main(void)
>   		   ARRAY_SIZE(ftr_id_aa64isar2_el1) + ARRAY_SIZE(ftr_id_aa64pfr0_el1) +
>   		   ARRAY_SIZE(ftr_id_aa64mmfr0_el1) + ARRAY_SIZE(ftr_id_aa64mmfr1_el1) +
>   		   ARRAY_SIZE(ftr_id_aa64mmfr2_el1) + ARRAY_SIZE(ftr_id_aa64zfr0_el1) -
> -		   ARRAY_SIZE(test_regs) + 2;
> +		   ARRAY_SIZE(test_regs) + 2 + MPAM_IDREG_TEST;
>   
>   	ksft_set_plan(test_cnt);
>   
>   	test_vm_ftr_id_regs(vcpu, aarch64_only);
>   	test_vcpu_ftr_id_regs(vcpu);
> +	test_user_set_mpam_reg(vcpu);
>   
>   	test_guest_reg_read(vcpu);
>   

Thanks,
Gavin



^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v5 3/7] arm64: cpufeature: discover CPU support for MPAM
  2024-10-15 13:39 ` [PATCH v5 3/7] arm64: cpufeature: discover CPU support for MPAM Joey Gouly
@ 2024-10-17  2:36   ` Gavin Shan
  0 siblings, 0 replies; 25+ messages in thread
From: Gavin Shan @ 2024-10-17  2:36 UTC (permalink / raw)
  To: Joey Gouly, linux-arm-kernel, kvmarm
  Cc: anshuman.khandual, james.morse, Marc Zyngier, Oliver Upton,
	Suzuki K Poulose, Zenghui Yu, Jing Zhang,
	Shameerali Kolothum Thodi, Catalin Marinas, Will Deacon

On 10/15/24 11:39 PM, Joey Gouly wrote:
> From: James Morse <james.morse@arm.com>
> 
> ARMv8.4 adds support for 'Memory Partitioning And Monitoring' (MPAM)
> which describes an interface to cache and bandwidth controls wherever
> they appear in the system.
> 
> Add support to detect MPAM. Like SVE, MPAM has an extra id register that
> describes some more properties, including the virtualisation support,
> which is optional. Detect this separately so we can detect
> mismatched/insane systems, but still use MPAM on the host even if the
> virtualisation support is missing.
> 
> MPAM needs enabling at the highest implemented exception level, otherwise
> the register accesses trap. The 'enabled' flag is accessible to lower
> exception levels, but it's in a register that traps when MPAM isn't enabled.
> The cpufeature 'matches' hook is extended to test this on one of the
> CPUs, so that firmware can emulate MPAM as disabled if it is reserved
> for use by secure world.
> 
> Secondary CPUs that appear late could trip cpufeature's 'lower safe'
> behaviour after the MPAM properties have been advertised to user-space.
> Add a verify call to ensure late secondaries match the existing CPUs.
> 
> (If you have a boot failure that bisects here, it's likely your CPUs
> advertise MPAM in the id registers, but firmware failed to either enable
> MPAM, or emulate the trap as if it were disabled.)
> 
> Signed-off-by: James Morse <james.morse@arm.com>
> Signed-off-by: Joey Gouly <joey.gouly@arm.com>
> ---
>   .../arch/arm64/cpu-feature-registers.rst      |  2 +
>   arch/arm64/Kconfig                            | 20 ++++
>   arch/arm64/include/asm/cpu.h                  |  1 +
>   arch/arm64/include/asm/cpucaps.h              |  2 +
>   arch/arm64/include/asm/cpufeature.h           | 12 +++
>   arch/arm64/include/asm/mpam.h                 | 32 ++++++
>   arch/arm64/kernel/Makefile                    |  2 +
>   arch/arm64/kernel/cpufeature.c                | 97 +++++++++++++++++++
>   arch/arm64/kernel/cpuinfo.c                   |  4 +
>   arch/arm64/kernel/mpam.c                      |  6 ++
>   arch/arm64/tools/cpucaps                      |  1 +
>   11 files changed, 179 insertions(+)
>   create mode 100644 arch/arm64/include/asm/mpam.h
>   create mode 100644 arch/arm64/kernel/mpam.c
> 

Reviewed-by: Gavin Shan <gshan@redhat.com>




* Re: [PATCH v5 4/7] KVM: arm64: Fix missing traps of guest accesses to the MPAM registers
  2024-10-15 13:39 ` [PATCH v5 4/7] KVM: arm64: Fix missing traps of guest accesses to the MPAM registers Joey Gouly
  2024-10-17  0:10   ` Oliver Upton
@ 2024-10-17  2:37   ` Gavin Shan
  2024-10-17 11:54   ` Marc Zyngier
  2024-10-17 13:43   ` Marc Zyngier
  3 siblings, 0 replies; 25+ messages in thread
From: Gavin Shan @ 2024-10-17  2:37 UTC (permalink / raw)
  To: Joey Gouly, linux-arm-kernel, kvmarm
  Cc: anshuman.khandual, james.morse, Marc Zyngier, Oliver Upton,
	Suzuki K Poulose, Zenghui Yu, Jing Zhang,
	Shameerali Kolothum Thodi, Catalin Marinas, Will Deacon

On 10/15/24 11:39 PM, Joey Gouly wrote:
> From: James Morse <james.morse@arm.com>
> 
> commit 011e5f5bf529f ("arm64/cpufeature: Add remaining feature bits in
> ID_AA64PFR0 register") exposed the MPAM field of AA64PFR0_EL1 to guests,
> but didn't add trap handling.
> 
> If you are unlucky, this results in an MPAM aware guest being delivered
> an undef during boot. The host prints:
> | kvm [97]: Unsupported guest sys_reg access at: ffff800080024c64 [00000005]
> | { Op0( 3), Op1( 0), CRn(10), CRm( 5), Op2( 0), func_read },
> 
> Which results in:
> | Internal error: Oops - Undefined instruction: 0000000002000000 [#1] PREEMPT SMP
> | Modules linked in:
> | CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.6.0-rc7-00559-gd89c186d50b2 #14616
> | Hardware name: linux,dummy-virt (DT)
> | pstate: 00000005 (nzcv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> | pc : test_has_mpam+0x18/0x30
> | lr : test_has_mpam+0x10/0x30
> | sp : ffff80008000bd90
> ...
> | Call trace:
> |  test_has_mpam+0x18/0x30
> |  update_cpu_capabilities+0x7c/0x11c
> |  setup_cpu_features+0x14/0xd8
> |  smp_cpus_done+0x24/0xb8
> |  smp_init+0x7c/0x8c
> |  kernel_init_freeable+0xf8/0x280
> |  kernel_init+0x24/0x1e0
> |  ret_from_fork+0x10/0x20
> | Code: 910003fd 97ffffde 72001c00 54000080 (d538a500)
> | ---[ end trace 0000000000000000 ]---
> | Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
> | ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b ]---
> 
> Add the support to enable the traps, and handle the three guest accessible
> registers by injecting an UNDEF. This stops KVM from spamming the host
> log, but doesn't yet hide the feature from the id registers.
> 
> With MPAM v1.0 we can trap the MPAMIDR_EL1 register only if
> ARM64_HAS_MPAM_HCR, with v1.1 an additional MPAM2_EL2.TIDR bit traps
> MPAMIDR_EL1 on platforms that don't have MPAMHCR_EL2. Enable one of
> these if either is supported. If neither is supported, the guest can
> discover that the CPU has MPAM support, and how many PARTID etc the
> host has ... but it can't influence anything, so it's harmless.
> 
> Fixes: 011e5f5bf529f ("arm64/cpufeature: Add remaining feature bits in ID_AA64PFR0 register")
> CC: Anshuman Khandual <anshuman.khandual@arm.com>
> Link: https://lore.kernel.org/linux-arm-kernel/20200925160102.118858-1-james.morse@arm.com/
> Signed-off-by: James Morse <james.morse@arm.com>
> Signed-off-by: Joey Gouly <joey.gouly@arm.com>
> ---
>   arch/arm64/include/asm/cpufeature.h     |  2 +-
>   arch/arm64/include/asm/kvm_arm.h        |  1 +
>   arch/arm64/include/asm/mpam.h           |  2 +-
>   arch/arm64/kernel/image-vars.h          |  5 ++++
>   arch/arm64/kvm/hyp/include/hyp/switch.h | 32 +++++++++++++++++++++++++
>   arch/arm64/kvm/sys_regs.c               |  3 +++
>   6 files changed, 43 insertions(+), 2 deletions(-)
> 

Reviewed-by: Gavin Shan <gshan@redhat.com>




* Re: [PATCH v5 5/7] KVM: arm64: Add a macro for creating filtered sys_reg_descs entries
  2024-10-15 13:39 ` [PATCH v5 5/7] KVM: arm64: Add a macro for creating filtered sys_reg_descs entries Joey Gouly
@ 2024-10-17  2:37   ` Gavin Shan
  0 siblings, 0 replies; 25+ messages in thread
From: Gavin Shan @ 2024-10-17  2:37 UTC (permalink / raw)
  To: Joey Gouly, linux-arm-kernel, kvmarm
  Cc: anshuman.khandual, james.morse, Marc Zyngier, Oliver Upton,
	Suzuki K Poulose, Zenghui Yu, Jing Zhang,
	Shameerali Kolothum Thodi, Catalin Marinas, Will Deacon

On 10/15/24 11:39 PM, Joey Gouly wrote:
> From: James Morse <james.morse@arm.com>
> 
> The sys_reg_descs array holds function pointers and reset value for
> managing the user-space and guest view of system registers. These
> are mostly created by a set of macros, as only some combinations
> of behaviour are needed.
> 
> If a register needs special treatment, its sys_reg_descs entry is
> open-coded. This is true of some id registers where the value provided
> by user-space is validated by some helpers.
> 
> Before adding another one of these, add a helper that covers the
> existing special cases. 'ID_FILTERED' expects helpers to set the
> user-space value, and retrieve the modified reset value.
> 
> Like ID_WRITABLE() this uses id_visibility(), which should have no functional
> change for the registers converted to use ID_FILTERED().
> 
> read_sanitised_id_aa64dfr0_el1() and read_sanitised_id_aa64pfr0_el1() have been
> refactored to be called kvm_read_sanitised_id_reg(), to try to be consistent with
> ID_WRITABLE().
> 
> Signed-off-by: James Morse <james.morse@arm.com>
> Signed-off-by: Joey Gouly <joey.gouly@arm.com>
> ---
>   arch/arm64/kvm/sys_regs.c | 62 +++++++++++++++++++++++----------------
>   1 file changed, 36 insertions(+), 26 deletions(-)
> 

Reviewed-by: Gavin Shan <gshan@redhat.com>




* Re: [PATCH v5 6/7] KVM: arm64: Disable MPAM visibility by default and ignore VMM writes
  2024-10-15 13:39 ` [PATCH v5 6/7] KVM: arm64: Disable MPAM visibility by default and ignore VMM writes Joey Gouly
@ 2024-10-17  2:37   ` Gavin Shan
  0 siblings, 0 replies; 25+ messages in thread
From: Gavin Shan @ 2024-10-17  2:37 UTC (permalink / raw)
  To: Joey Gouly, linux-arm-kernel, kvmarm
  Cc: anshuman.khandual, james.morse, Marc Zyngier, Oliver Upton,
	Suzuki K Poulose, Zenghui Yu, Jing Zhang,
	Shameerali Kolothum Thodi, Catalin Marinas, Will Deacon

On 10/15/24 11:39 PM, Joey Gouly wrote:
> From: James Morse <james.morse@arm.com>
> 
> commit 011e5f5bf529f ("arm64/cpufeature: Add remaining feature bits in
> ID_AA64PFR0 register") exposed the MPAM field of AA64PFR0_EL1 to guests,
> but didn't add trap handling. A previous patch supplied the missing trap
> handling.
> 
> Existing VMs that have the MPAM field of ID_AA64PFR0_EL1 set need to
> be migratable, but there is little point enabling the MPAM CPU
> interface on new VMs until there is something a guest can do with it.
> 
> Clear the MPAM field from the guest's ID_AA64PFR0_EL1 and on hardware
> that supports MPAM, politely ignore the VMM's attempts to set this bit.
> 
> Guests exposed to this bug have the sanitised value of the MPAM field,
> so only the correct value needs to be ignored. This means the field
> can continue to be used to block migration to incompatible hardware
> (between MPAM=1 and MPAM=5), and the VMM can't rely on the field
> being ignored.
> 
> Signed-off-by: James Morse <james.morse@arm.com>
> Co-developed-by: Joey Gouly <joey.gouly@arm.com>
> Signed-off-by: Joey Gouly <joey.gouly@arm.com>
> ---
>   arch/arm64/kvm/sys_regs.c | 44 ++++++++++++++++++++++++++++++++++++---
>   1 file changed, 41 insertions(+), 3 deletions(-)
> 

Reviewed-by: Gavin Shan <gshan@redhat.com>




* RE: [PATCH v5 0/7] KVM: arm64: Hide unsupported MPAM from the guest
  2024-10-15 13:39 [PATCH v5 0/7] KVM: arm64: Hide unsupported MPAM from the guest Joey Gouly
                   ` (6 preceding siblings ...)
  2024-10-15 13:39 ` [PATCH v5 7/7] KVM: arm64: selftests: Test ID_AA64PFR0.MPAM isn't completely ignored Joey Gouly
@ 2024-10-17  8:34 ` Shameerali Kolothum Thodi
  7 siblings, 0 replies; 25+ messages in thread
From: Shameerali Kolothum Thodi @ 2024-10-17  8:34 UTC (permalink / raw)
  To: Joey Gouly, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev
  Cc: anshuman.khandual@arm.com, james.morse@arm.com, Marc Zyngier,
	Oliver Upton, Suzuki K Poulose, yuzenghui, Jing Zhang,
	Catalin Marinas, Will Deacon



> -----Original Message-----
> From: Joey Gouly <joey.gouly@arm.com>
> Sent: Tuesday, October 15, 2024 2:39 PM
> To: linux-arm-kernel@lists.infradead.org; kvmarm@lists.linux.dev
> Cc: anshuman.khandual@arm.com; james.morse@arm.com;
> joey.gouly@arm.com; Marc Zyngier <maz@kernel.org>; Oliver Upton
> <oliver.upton@linux.dev>; Suzuki K Poulose <suzuki.poulose@arm.com>;
> yuzenghui <yuzenghui@huawei.com>; Jing Zhang
> <jingzhangos@google.com>; Shameerali Kolothum Thodi
> <shameerali.kolothum.thodi@huawei.com>; Catalin Marinas
> <catalin.marinas@arm.com>; Will Deacon <will@kernel.org>
> Subject: [PATCH v5 0/7] KVM: arm64: Hide unsupported MPAM from the
> guest

I did basic sanity tests with this series on a HiSilicon platform that has MPAM support and
it looks fine.

FWIW,
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>

Thanks,
Shameer



* Re: [PATCH v5 4/7] KVM: arm64: Fix missing traps of guest accesses to the MPAM registers
  2024-10-17  0:10   ` Oliver Upton
@ 2024-10-17 10:58     ` Joey Gouly
  2024-10-17 16:07       ` Oliver Upton
  0 siblings, 1 reply; 25+ messages in thread
From: Joey Gouly @ 2024-10-17 10:58 UTC (permalink / raw)
  To: Oliver Upton
  Cc: linux-arm-kernel, kvmarm, anshuman.khandual, james.morse,
	Marc Zyngier, Suzuki K Poulose, Zenghui Yu, Jing Zhang,
	Shameerali Kolothum Thodi, Catalin Marinas, Will Deacon

On Wed, Oct 16, 2024 at 05:10:17PM -0700, Oliver Upton wrote:
> Hi Joey,
> 
> On Tue, Oct 15, 2024 at 02:39:20PM +0100, Joey Gouly wrote:
> > +static inline void  __activate_traps_mpam(struct kvm_vcpu *vcpu)
> > +{
> > +	u64 r = MPAM2_EL2_TRAPMPAM0EL1 | MPAM2_EL2_TRAPMPAM1EL1;
> > +
> > +	if (!cpus_support_mpam())
> > +		return;
> > +
> > +	/* trap guest access to MPAMIDR_EL1 */
> > +	if (mpam_cpus_have_mpam_hcr()) {
> > +		write_sysreg_s(MPAMHCR_EL2_TRAP_MPAMIDR_EL1, SYS_MPAMHCR_EL2);
> > +	} else {
> > +		/* From v1.1 TIDR can trap MPAMIDR, set it unconditionally */
> > +		r |= MPAM2_EL2_TIDR;
> > +	}
> > +
> > +	write_sysreg_s(r, SYS_MPAM2_EL2);
> > +}
> > +
> > +static inline void __deactivate_traps_mpam(void)
> > +{
> > +	if (!cpus_support_mpam())
> > +		return;
> > +
> > +	write_sysreg_s(0, SYS_MPAM2_EL2);
> > +
> > +	if (mpam_cpus_have_mpam_hcr())
> > +		write_sysreg_s(MPAMHCR_HOST_FLAGS, SYS_MPAMHCR_EL2);
> > +}
> 
> TBH, I think our trap configuration should *not* be conditioned on
> CONFIG_ARM64_MPAM. Otherwise we're silently allowing the guest to change
> things under the nose of KVM/host kernel, assuming an unkind firmware
> that left the EL2 trap configuration in a permissive state.
> 
> WDYT about detecting the feature && enforcing traps regardless of the Kconfig?

I had actually thought about the same thing. I spoke with James and he agrees,
so I will look into removing that.

I will probably end up removing the Kconfig entirely, it can be added back in
later, when actual support for MPAM is added.

Thanks,
Joey

> 
> -- 
> Thanks,
> Oliver



* Re: [PATCH v5 7/7] KVM: arm64: selftests: Test ID_AA64PFR0.MPAM isn't completely ignored
  2024-10-17  0:41   ` Gavin Shan
@ 2024-10-17 11:03     ` Joey Gouly
  0 siblings, 0 replies; 25+ messages in thread
From: Joey Gouly @ 2024-10-17 11:03 UTC (permalink / raw)
  To: Gavin Shan
  Cc: linux-arm-kernel, kvmarm, anshuman.khandual, james.morse,
	Marc Zyngier, Oliver Upton, Suzuki K Poulose, Zenghui Yu,
	Jing Zhang, Shameerali Kolothum Thodi, Catalin Marinas,
	Will Deacon

On Thu, Oct 17, 2024 at 10:41:14AM +1000, Gavin Shan wrote:
> On 10/15/24 11:39 PM, Joey Gouly wrote:
> > From: James Morse <james.morse@arm.com>
> > 
> > The ID_AA64PFR0.MPAM bit was previously accidentally exposed to guests,
> > and is ignored by KVM. KVM will always present the guest with 0 here,
> > and trap the MPAM system registers to inject an undef.
> > 
> > But, this value is still needed to prevent migration when the value
> > is incompatible with the target hardware. Add a kvm unit test to try
> > and write multiple values to ID_AA64PFR0.MPAM. Only the hardware value
> > previously exposed should be ignored, all other values should be
> > rejected.
> > 
> > Signed-off-by: James Morse <james.morse@arm.com>
> > Signed-off-by: Joey Gouly <joey.gouly@arm.com>
> > ---
> >   .../selftests/kvm/aarch64/set_id_regs.c       | 100 +++++++++++++++++-
> >   1 file changed, 99 insertions(+), 1 deletion(-)
> > 
> > diff --git a/tools/testing/selftests/kvm/aarch64/set_id_regs.c b/tools/testing/selftests/kvm/aarch64/set_id_regs.c
> > index 2a3fe7914b72..d985ead2cc45 100644
> > --- a/tools/testing/selftests/kvm/aarch64/set_id_regs.c
> > +++ b/tools/testing/selftests/kvm/aarch64/set_id_regs.c
> > @@ -433,6 +433,103 @@ static void test_vm_ftr_id_regs(struct kvm_vcpu *vcpu, bool aarch64_only)
> >   	}
> >   }
> > +#define MPAM_IDREG_TEST	6
> > +static void test_user_set_mpam_reg(struct kvm_vcpu *vcpu)
> > +{
> > +	uint64_t masks[KVM_ARM_FEATURE_ID_RANGE_SIZE];
> > +	struct reg_mask_range range = {
> > +		.addr = (__u64)masks,
> > +	};
> > +	uint64_t val, ftr_mask;
> > +	int idx, err;
> > +
> > +	/*
> > +	 * If ID_AA64PFR0.MPAM is _not_ officially modifiable and is zero,
> > +	 * check that if it can be set to 1, (i.e. it is supported by the
> > +	 * hardware), that it can't be set to other values.
> > +	 */
> > +
> > +	/* Get writable masks for feature ID registers */
> > +	memset(range.reserved, 0, sizeof(range.reserved));
> > +	vm_ioctl(vcpu->vm, KVM_ARM_GET_REG_WRITABLE_MASKS, &range);
> > +
> > +	/* Writeable? Nothing to test! */
> > +	idx = encoding_to_range_idx(SYS_ID_AA64PFR0_EL1);
> > +	ftr_mask = ID_AA64PFR0_EL1_MPAM_MASK;
> > +	if ((masks[idx] & ftr_mask) == ftr_mask) {
> > +		ksft_test_result_skip("ID_AA64PFR0_EL1.MPAM is officially writable, nothing to test\n");
> > +		return;
> > +	}
> > +
> 
> The question is shall we proceed to test ID_AA64PFR1_EL1.MPAM_frac instead of
> bailing when ID_AA64PFR0_EL1.MPAM accepts arbitrary values? To me, it doesn't
> mean ID_AA64PFR1_EL1.MPAM_frac can accept arbitrary values when ID_AA64PFR0_EL1.MPAM
> does.

I think it's fine to assume that if MPAM is made writable, MPAM_frac will
be made writable in the same commit/series. This test is about arbitrary
values, not bits that the KVM API explicitly marks as "writable".

> 
> @ftr_mask can be dropped since it's initialized to a fixed mask and used for
> only for once.
> 
>     if ((masks[idx] & ID_AA64PFR0_EL1_MPAM_MASK) == ID_AA64PFR0_EL1_MPAM_MASK) {
>         :
>     }
> 
> > +	/* Get the id register value */
> > +	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1), &val);
> > +
> > +	/* Try to set MPAM=0. This should always be possible. */
> > +	val &= ~GENMASK_ULL(44, 40);
> > +	val |= FIELD_PREP(GENMASK_ULL(44, 40), 0);
> > +	err = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1), val);
> > +	if (err)
> > +		ksft_test_result_fail("ID_AA64PFR0_EL1.MPAM=0 was not accepted\n");
> > +	else
> > +		ksft_test_result_pass("ID_AA64PFR0_EL1.MPAM=0 worked\n");
> > +
> > +	/* Try to set MPAM=1 */
> > +	val &= ~GENMASK_ULL(44, 40);
> > +	val |= FIELD_PREP(GENMASK_ULL(44, 40), 1);
> > +	err = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1), val);
> > +	if (err)
> > +		ksft_test_result_skip("ID_AA64PFR0_EL1.MPAM is not writable, nothing to test\n");
> > +	else
> > +		ksft_test_result_pass("ID_AA64PFR0_EL1.MPAM=1 was writable\n");
> > +
> > +	/* Try to set MPAM=2 */
> > +	val &= ~GENMASK_ULL(43, 40);
> > +	val |= FIELD_PREP(GENMASK_ULL(43, 40), 2);
> > +	err = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1), val);
> > +	if (err)
> > +		ksft_test_result_pass("ID_AA64PFR0_EL1.MPAM not arbitrarily modifiable\n");
> > +	else
> > +		ksft_test_result_fail("ID_AA64PFR0_EL1.MPAM value should not be ignored\n");
> > +
> 
> GENMASK_ULL(44, 40) looks wrong to me. According to the spec, GENMASK_ULL(43, 40)
> is the correct mask. Besides, I think it would be nice to use ID_AA64PFR0_EL1_MPAM_MASK
> here, for example:
> 
>     val &= ~ID_AA64PFR0_EL1_MPAM_MASK;
>     val |= FIELD_PREP(ID_AA64PFR0_EL1_MPAM_MASK, 2);
> 
> > +	/* And again for ID_AA64PFR1_EL1.MPAM_frac */
> > +	idx = encoding_to_range_idx(SYS_ID_AA64PFR1_EL1);
> > +	ftr_mask = ID_AA64PFR1_EL1_MPAM_frac_MASK;
> > +	if ((masks[idx] & ftr_mask) == ftr_mask) {
> > +		ksft_test_result_skip("ID_AA64PFR1_EL1.MPAM_frac is officially writable, nothing to test\n");
> > +		return;
> > +	}
> > +
> > +	/* Get the id register value */
> > +	vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR1_EL1), &val);
> > +
> > +	/* Try to set MPAM_frac=0. This should always be possible. */
> > +	val &= ~GENMASK_ULL(19, 16);
> > +	val |= FIELD_PREP(GENMASK_ULL(19, 16), 0);
> > +	err = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR1_EL1), val);
> > +	if (err)
> > +		ksft_test_result_fail("ID_AA64PFR0_EL1.MPAM_frac=0 was not accepted\n");
> > +	else
> > +		ksft_test_result_pass("ID_AA64PFR0_EL1.MPAM_frac=0 worked\n");
> > +
> > +	/* Try to set MPAM_frac=1 */
> > +	val &= ~GENMASK_ULL(19, 16);
> > +	val |= FIELD_PREP(GENMASK_ULL(19, 16), 1);
> > +	err = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR1_EL1), val);
> > +	if (err)
> > +		ksft_test_result_skip("ID_AA64PFR1_EL1.MPAM_frac is not writable, nothing to test\n");
> > +	else
> > +		ksft_test_result_pass("ID_AA64PFR0_EL1.MPAM_frac=1 was writable\n");
> > +
> > +	/* Try to set MPAM_frac=2 */
> > +	val &= ~GENMASK_ULL(19, 16);
> > +	val |= FIELD_PREP(GENMASK_ULL(19, 16), 2);
> > +	err = __vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR1_EL1), val);
> > +	if (err)
> > +		ksft_test_result_pass("ID_AA64PFR1_EL1.MPAM_frac not arbitrarily modifiable\n");
> > +	else
> > +		ksft_test_result_fail("ID_AA64PFR1_EL1.MPAM_frac value should not be ignored\n");
> > +}
> > +
> 
> Similarly, ID_AA64PFR1_EL1_MPAM_frac_MASK can be used?
> 
> >   static void test_guest_reg_read(struct kvm_vcpu *vcpu)
> >   {
> >   	bool done = false;
> > @@ -571,12 +668,13 @@ int main(void)
> >   		   ARRAY_SIZE(ftr_id_aa64isar2_el1) + ARRAY_SIZE(ftr_id_aa64pfr0_el1) +
> >   		   ARRAY_SIZE(ftr_id_aa64mmfr0_el1) + ARRAY_SIZE(ftr_id_aa64mmfr1_el1) +
> >   		   ARRAY_SIZE(ftr_id_aa64mmfr2_el1) + ARRAY_SIZE(ftr_id_aa64zfr0_el1) -
> > -		   ARRAY_SIZE(test_regs) + 2;
> > +		   ARRAY_SIZE(test_regs) + 2 + MPAM_IDREG_TEST;
> >   	ksft_set_plan(test_cnt);
> >   	test_vm_ftr_id_regs(vcpu, aarch64_only);
> >   	test_vcpu_ftr_id_regs(vcpu);
> > +	test_user_set_mpam_reg(vcpu);
> >   	test_guest_reg_read(vcpu);
> 

I will clean up the test with the MASK defines!

Thanks,
Joey



* Re: [PATCH v5 4/7] KVM: arm64: Fix missing traps of guest accesses to the MPAM registers
  2024-10-15 13:39 ` [PATCH v5 4/7] KVM: arm64: Fix missing traps of guest accesses to the MPAM registers Joey Gouly
  2024-10-17  0:10   ` Oliver Upton
  2024-10-17  2:37   ` Gavin Shan
@ 2024-10-17 11:54   ` Marc Zyngier
  2024-10-17 13:06     ` Joey Gouly
  2024-10-17 13:43   ` Marc Zyngier
  3 siblings, 1 reply; 25+ messages in thread
From: Marc Zyngier @ 2024-10-17 11:54 UTC (permalink / raw)
  To: Joey Gouly
  Cc: linux-arm-kernel, kvmarm, anshuman.khandual, james.morse,
	Oliver Upton, Suzuki K Poulose, Zenghui Yu, Jing Zhang,
	Shameerali Kolothum Thodi, Catalin Marinas, Will Deacon

On Tue, 15 Oct 2024 14:39:20 +0100,
Joey Gouly <joey.gouly@arm.com> wrote:
> 
> @@ -204,6 +205,35 @@ static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu *vcpu)
>  		__deactivate_fgt(hctxt, vcpu, kvm, HAFGRTR_EL2);
>  }
>  
> +static inline void  __activate_traps_mpam(struct kvm_vcpu *vcpu)
> +{
> +	u64 r = MPAM2_EL2_TRAPMPAM0EL1 | MPAM2_EL2_TRAPMPAM1EL1;
> +
> +	if (!cpus_support_mpam())
> +		return;
> +
> +	/* trap guest access to MPAMIDR_EL1 */
> +	if (mpam_cpus_have_mpam_hcr()) {
> +		write_sysreg_s(MPAMHCR_EL2_TRAP_MPAMIDR_EL1, SYS_MPAMHCR_EL2);
> +	} else {
> +		/* From v1.1 TIDR can trap MPAMIDR, set it unconditionally */
> +		r |= MPAM2_EL2_TIDR;
> +	}
> +
> +	write_sysreg_s(r, SYS_MPAM2_EL2);

Please use the write_sysreg_el2() accessor, so that VHE under NV has
an easier time, should we ever get there.

> +}
> +
> +static inline void __deactivate_traps_mpam(void)
> +{
> +	if (!cpus_support_mpam())
> +		return;
> +
> +	write_sysreg_s(0, SYS_MPAM2_EL2);

Same thing.

> +
> +	if (mpam_cpus_have_mpam_hcr())
> +		write_sysreg_s(MPAMHCR_HOST_FLAGS, SYS_MPAMHCR_EL2);
> +}
> +

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.



* Re: [PATCH v5 4/7] KVM: arm64: Fix missing traps of guest accesses to the MPAM registers
  2024-10-17 11:54   ` Marc Zyngier
@ 2024-10-17 13:06     ` Joey Gouly
  2024-10-17 13:38       ` Marc Zyngier
  0 siblings, 1 reply; 25+ messages in thread
From: Joey Gouly @ 2024-10-17 13:06 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, kvmarm, anshuman.khandual, james.morse,
	Oliver Upton, Suzuki K Poulose, Zenghui Yu, Jing Zhang,
	Shameerali Kolothum Thodi, Catalin Marinas, Will Deacon

Hi Marc,

Thanks for looking!

On Thu, Oct 17, 2024 at 12:54:09PM +0100, Marc Zyngier wrote:
> On Tue, 15 Oct 2024 14:39:20 +0100,
> Joey Gouly <joey.gouly@arm.com> wrote:
> > 
> > @@ -204,6 +205,35 @@ static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu *vcpu)
> >  		__deactivate_fgt(hctxt, vcpu, kvm, HAFGRTR_EL2);
> >  }
> >  
> > +static inline void  __activate_traps_mpam(struct kvm_vcpu *vcpu)
> > +{
> > +	u64 r = MPAM2_EL2_TRAPMPAM0EL1 | MPAM2_EL2_TRAPMPAM1EL1;
> > +
> > +	if (!cpus_support_mpam())
> > +		return;
> > +
> > +	/* trap guest access to MPAMIDR_EL1 */
> > +	if (mpam_cpus_have_mpam_hcr()) {
> > +		write_sysreg_s(MPAMHCR_EL2_TRAP_MPAMIDR_EL1, SYS_MPAMHCR_EL2);
> > +	} else {
> > +		/* From v1.1 TIDR can trap MPAMIDR, set it unconditionally */
> > +		r |= MPAM2_EL2_TIDR;
> > +	}
> > +
> > +	write_sysreg_s(r, SYS_MPAM2_EL2);
> 
> Please use the write_sysreg_el2() accessor, so that VHE under NV has
> an easier time, should we ever get there.

Are you suggesting this, to avoid a trap (and instead get the register-memory
redirection)?

The two reasons I could see to keep it as-is:
	- The name changes from MPAM2_EL2 to MPAM1_EL1 (1 and 2 suffixes after
	  MPAM)
	- It's setting trap behaviour, which is an EL2 concept (and those bits
	  are RES0 in the MPAM1_EL1 reg). Other trap registers such as HCRX_EL2
	  use that register directly, however they don't have an _EL1
	  counterpart so have to.

I'm fine to change it, just want to clarify before I do.

Thanks,
Joey

> 
> > +}
> > +
> > +static inline void __deactivate_traps_mpam(void)
> > +{
> > +	if (!cpus_support_mpam())
> > +		return;
> > +
> > +	write_sysreg_s(0, SYS_MPAM2_EL2);
> 
> Same thing.
> 
> > +
> > +	if (mpam_cpus_have_mpam_hcr())
> > +		write_sysreg_s(MPAMHCR_HOST_FLAGS, SYS_MPAMHCR_EL2);
> > +}
> > +
> 
> Thanks,
> 
> 	M.
> 
> -- 
> Without deviation from the norm, progress is not possible.



* Re: [PATCH v5 4/7] KVM: arm64: Fix missing traps of guest accesses to the MPAM registers
  2024-10-17 13:06     ` Joey Gouly
@ 2024-10-17 13:38       ` Marc Zyngier
  0 siblings, 0 replies; 25+ messages in thread
From: Marc Zyngier @ 2024-10-17 13:38 UTC (permalink / raw)
  To: Joey Gouly
  Cc: linux-arm-kernel, kvmarm, anshuman.khandual, james.morse,
	Oliver Upton, Suzuki K Poulose, Zenghui Yu, Jing Zhang,
	Shameerali Kolothum Thodi, Catalin Marinas, Will Deacon

On Thu, 17 Oct 2024 14:06:38 +0100,
Joey Gouly <joey.gouly@arm.com> wrote:
> 
> Hi Marc,
> 
> Thanks for looking!
> 
> On Thu, Oct 17, 2024 at 12:54:09PM +0100, Marc Zyngier wrote:
> > On Tue, 15 Oct 2024 14:39:20 +0100,
> > Joey Gouly <joey.gouly@arm.com> wrote:
> > > 
> > > @@ -204,6 +205,35 @@ static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu *vcpu)
> > >  		__deactivate_fgt(hctxt, vcpu, kvm, HAFGRTR_EL2);
> > >  }
> > >  
> > > +static inline void  __activate_traps_mpam(struct kvm_vcpu *vcpu)
> > > +{
> > > +	u64 r = MPAM2_EL2_TRAPMPAM0EL1 | MPAM2_EL2_TRAPMPAM1EL1;
> > > +
> > > +	if (!cpus_support_mpam())
> > > +		return;
> > > +
> > > +	/* trap guest access to MPAMIDR_EL1 */
> > > +	if (mpam_cpus_have_mpam_hcr()) {
> > > +		write_sysreg_s(MPAMHCR_EL2_TRAP_MPAMIDR_EL1, SYS_MPAMHCR_EL2);
> > > +	} else {
> > > +		/* From v1.1 TIDR can trap MPAMIDR, set it unconditionally */
> > > +		r |= MPAM2_EL2_TIDR;
> > > +	}
> > > +
> > > +	write_sysreg_s(r, SYS_MPAM2_EL2);
> > 
> > Please use the write_sysreg_el2() accessor, so that VHE under NV has
> > an easier time, should we ever get there.
> 
> Are you suggesting this, to avoid a trap (and instead get the register-memory
> redirection)?
> 
> The two reasons I could see to keep it as-is:
> 	- The name changes from MPAM2_EL2 to MPAM1_EL1 (1 and 2 suffixes after
> 	  MPAM)

Arghhh.

> 	- It's setting trap behaviour, which is an EL2 concept (and are RES0
> 	  bits in the MPAM1_EL1 reg). Other trap registers such as HCRX_EL2 use
> 	  that register directly, however they don't have an _EL1 counterpart so have to.

Arghhh again.

So this is yet another register that doesn't match FEAT_NV2p1, by
having RES0 fields in its EL2-equivalent-EL1 register, and we can't
even use MPAM1_EL1 for NV.

> I'm fine to change it, just want to clarify before I do.

Nah, don't bother. This thing can't be rescued.

Thanks for the gory details!

	M.

-- 
Without deviation from the norm, progress is not possible.



* Re: [PATCH v5 4/7] KVM: arm64: Fix missing traps of guest accesses to the MPAM registers
  2024-10-15 13:39 ` [PATCH v5 4/7] KVM: arm64: Fix missing traps of guest accesses to the MPAM registers Joey Gouly
                     ` (2 preceding siblings ...)
  2024-10-17 11:54   ` Marc Zyngier
@ 2024-10-17 13:43   ` Marc Zyngier
  3 siblings, 0 replies; 25+ messages in thread
From: Marc Zyngier @ 2024-10-17 13:43 UTC (permalink / raw)
  To: Joey Gouly
  Cc: linux-arm-kernel, kvmarm, anshuman.khandual, james.morse,
	Oliver Upton, Suzuki K Poulose, Zenghui Yu, Jing Zhang,
	Shameerali Kolothum Thodi, Catalin Marinas, Will Deacon

On Tue, 15 Oct 2024 14:39:20 +0100,
Joey Gouly <joey.gouly@arm.com> wrote:
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index dad88e31f953..c21840042785 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -2517,8 +2517,11 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	{ SYS_DESC(SYS_LOREA_EL1), trap_loregion },
>  	{ SYS_DESC(SYS_LORN_EL1), trap_loregion },
>  	{ SYS_DESC(SYS_LORC_EL1), trap_loregion },
> +	{ SYS_DESC(SYS_MPAMIDR_EL1), undef_access },
>  	{ SYS_DESC(SYS_LORID_EL1), trap_loregion },
>  
> +	{ SYS_DESC(SYS_MPAM1_EL1), undef_access },
> +	{ SYS_DESC(SYS_MPAM0_EL1), undef_access },
>  	{ SYS_DESC(SYS_VBAR_EL1), access_rw, reset_val, VBAR_EL1, 0 },
>  	{ SYS_DESC(SYS_DISR_EL1), NULL, reset_val, DISR_EL1, 0 },

For completeness, can you please add the EL2 MPAM registers to that
list, making them equally UNDEF?

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.



* Re: [PATCH v5 4/7] KVM: arm64: Fix missing traps of guest accesses to the MPAM registers
  2024-10-17 10:58     ` Joey Gouly
@ 2024-10-17 16:07       ` Oliver Upton
  2024-10-22 14:31         ` Joey Gouly
  0 siblings, 1 reply; 25+ messages in thread
From: Oliver Upton @ 2024-10-17 16:07 UTC (permalink / raw)
  To: Joey Gouly
  Cc: linux-arm-kernel, kvmarm, anshuman.khandual, james.morse,
	Marc Zyngier, Suzuki K Poulose, Zenghui Yu, Jing Zhang,
	Shameerali Kolothum Thodi, Catalin Marinas, Will Deacon

On Thu, Oct 17, 2024 at 11:58:49AM +0100, Joey Gouly wrote:
> On Wed, Oct 16, 2024 at 05:10:17PM -0700, Oliver Upton wrote:
> > Hi Joey,
> > 
> > On Tue, Oct 15, 2024 at 02:39:20PM +0100, Joey Gouly wrote:
> > > +static inline void  __activate_traps_mpam(struct kvm_vcpu *vcpu)
> > > +{
> > > +	u64 r = MPAM2_EL2_TRAPMPAM0EL1 | MPAM2_EL2_TRAPMPAM1EL1;
> > > +
> > > +	if (!cpus_support_mpam())
> > > +		return;
> > > +
> > > +	/* trap guest access to MPAMIDR_EL1 */
> > > +	if (mpam_cpus_have_mpam_hcr()) {
> > > +		write_sysreg_s(MPAMHCR_EL2_TRAP_MPAMIDR_EL1, SYS_MPAMHCR_EL2);
> > > +	} else {
> > > +		/* From v1.1 TIDR can trap MPAMIDR, set it unconditionally */
> > > +		r |= MPAM2_EL2_TIDR;
> > > +	}
> > > +
> > > +	write_sysreg_s(r, SYS_MPAM2_EL2);
> > > +}
> > > +
> > > +static inline void __deactivate_traps_mpam(void)
> > > +{
> > > +	if (!cpus_support_mpam())
> > > +		return;
> > > +
> > > +	write_sysreg_s(0, SYS_MPAM2_EL2);
> > > +
> > > +	if (mpam_cpus_have_mpam_hcr())
> > > +		write_sysreg_s(MPAMHCR_HOST_FLAGS, SYS_MPAMHCR_EL2);
> > > +}
> > 
> > TBH, I think our trap configuration should *not* be conditioned on
> > CONFIG_ARM64_MPAM. Otherwise we're silently allowing the guest to change
> > things under the nose of KVM/host kernel, assuming an unkind firmware
> > that left the EL2 trap configuration in a permissive state.
> > 
> > WDYT about detecting the feature && enforcing traps regardless of the Kconfig?
> 
> I had actually thought about the same thing. I spoke with James and he agrees,
> so I will look into removing that.
> 
> I will probably end up removing the Kconfig entirely, it can be added back in
> later, when actual support for MPAM is added.

Sounds good, thanks Joey!

If we go down this route, I'm guessing we can also skip the boot
time EL2 setup portion of it (for now). That'd constrain the fossilized
EL3 issue to *just* failures to run KVM VMs as opposed to kernels not
booting at all.

-- 
Thanks,
Oliver



* Re: [PATCH v5 4/7] KVM: arm64: Fix missing traps of guest accesses to the MPAM registers
  2024-10-17 16:07       ` Oliver Upton
@ 2024-10-22 14:31         ` Joey Gouly
  0 siblings, 0 replies; 25+ messages in thread
From: Joey Gouly @ 2024-10-22 14:31 UTC (permalink / raw)
  To: Oliver Upton
  Cc: linux-arm-kernel, kvmarm, anshuman.khandual, james.morse,
	Marc Zyngier, Suzuki K Poulose, Zenghui Yu, Jing Zhang,
	Shameerali Kolothum Thodi, Catalin Marinas, Will Deacon

On Thu, Oct 17, 2024 at 09:07:17AM -0700, Oliver Upton wrote:
> On Thu, Oct 17, 2024 at 11:58:49AM +0100, Joey Gouly wrote:
> > On Wed, Oct 16, 2024 at 05:10:17PM -0700, Oliver Upton wrote:
> > > Hi Joey,
> > > 
> > > On Tue, Oct 15, 2024 at 02:39:20PM +0100, Joey Gouly wrote:
> > > > +static inline void  __activate_traps_mpam(struct kvm_vcpu *vcpu)
> > > > +{
> > > > +	u64 r = MPAM2_EL2_TRAPMPAM0EL1 | MPAM2_EL2_TRAPMPAM1EL1;
> > > > +
> > > > +	if (!cpus_support_mpam())
> > > > +		return;
> > > > +
> > > > +	/* trap guest access to MPAMIDR_EL1 */
> > > > +	if (mpam_cpus_have_mpam_hcr()) {
> > > > +		write_sysreg_s(MPAMHCR_EL2_TRAP_MPAMIDR_EL1, SYS_MPAMHCR_EL2);
> > > > +	} else {
> > > > +		/* From v1.1 TIDR can trap MPAMIDR, set it unconditionally */
> > > > +		r |= MPAM2_EL2_TIDR;
> > > > +	}
> > > > +
> > > > +	write_sysreg_s(r, SYS_MPAM2_EL2);
> > > > +}
> > > > +
> > > > +static inline void __deactivate_traps_mpam(void)
> > > > +{
> > > > +	if (!cpus_support_mpam())
> > > > +		return;
> > > > +
> > > > +	write_sysreg_s(0, SYS_MPAM2_EL2);
> > > > +
> > > > +	if (mpam_cpus_have_mpam_hcr())
> > > > +		write_sysreg_s(MPAMHCR_HOST_FLAGS, SYS_MPAMHCR_EL2);
> > > > +}
> > > 
> > > TBH, I think our trap configuration should *not* be conditioned on
> > > CONFIG_ARM64_MPAM. Otherwise we're silently allowing the guest to change
> > > things under the nose of KVM/host kernel, assuming an unkind firmware
> > > that left the EL2 trap configuration in a permissive state.
> > > 
> > > WDYT about detecting the feature && enforcing traps regardless of the Kconfig?
> > 
> > I had actually thought about the same thing. I spoke with James and he agrees,
> > so I will look into removing that.
> > 
> > I will probably end up removing the Kconfig entirely, it can be added back in
> > later, when actual support for MPAM is added.
> 
> Sounds good, thanks Joey!
> 
> If we go down this route, I'm guessing we can also skip the boot
> time EL2 setup portion of it (for now). That'd constrain the fossilized
> EL3 issue to *just* failures to run KVM VMs as opposed to kernels not
> booting at all.
> 

If you run nVHE with firmware that has enabled the traps for some reason (or
just by default, since TRAP_MPAMIDR_EL1 resets to UNKNOWN), you will hang at
boot without that change. The cpufeature code, which runs at EL1 in that case,
reads MPAMIDR_EL1 to check for HAS_HCR.

Thanks,
Joey



end of thread, other threads:[~2024-10-22 14:33 UTC | newest]

Thread overview: 25+ messages
2024-10-15 13:39 [PATCH v5 0/7] KVM: arm64: Hide unsupported MPAM from the guest Joey Gouly
2024-10-15 13:39 ` [PATCH v5 1/7] arm64/sysreg: Convert existing MPAM sysregs and add the remaining entries Joey Gouly
2024-10-16 23:41   ` Gavin Shan
2024-10-15 13:39 ` [PATCH v5 2/7] arm64: head.S: Initialise MPAM EL2 registers and disable traps Joey Gouly
2024-10-16 23:41   ` Gavin Shan
2024-10-15 13:39 ` [PATCH v5 3/7] arm64: cpufeature: discover CPU support for MPAM Joey Gouly
2024-10-17  2:36   ` Gavin Shan
2024-10-15 13:39 ` [PATCH v5 4/7] KVM: arm64: Fix missing traps of guest accesses to the MPAM registers Joey Gouly
2024-10-17  0:10   ` Oliver Upton
2024-10-17 10:58     ` Joey Gouly
2024-10-17 16:07       ` Oliver Upton
2024-10-22 14:31         ` Joey Gouly
2024-10-17  2:37   ` Gavin Shan
2024-10-17 11:54   ` Marc Zyngier
2024-10-17 13:06     ` Joey Gouly
2024-10-17 13:38       ` Marc Zyngier
2024-10-17 13:43   ` Marc Zyngier
2024-10-15 13:39 ` [PATCH v5 5/7] KVM: arm64: Add a macro for creating filtered sys_reg_descs entries Joey Gouly
2024-10-17  2:37   ` Gavin Shan
2024-10-15 13:39 ` [PATCH v5 6/7] KVM: arm64: Disable MPAM visibility by default and ignore VMM writes Joey Gouly
2024-10-17  2:37   ` Gavin Shan
2024-10-15 13:39 ` [PATCH v5 7/7] KVM: arm64: selftests: Test ID_AA64PFR0.MPAM isn't completely ignored Joey Gouly
2024-10-17  0:41   ` Gavin Shan
2024-10-17 11:03     ` Joey Gouly
2024-10-17  8:34 ` [PATCH v5 0/7] KVM: arm64: Hide unsupported MPAM from the guest Shameerali Kolothum Thodi
