* [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling
@ 2025-04-26 12:27 Marc Zyngier
2025-04-26 12:27 ` [PATCH v3 01/42] arm64: sysreg: Add ID_AA64ISAR1_EL1.LS64 encoding for FEAT_LS64WB Marc Zyngier
` (42 more replies)
0 siblings, 43 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:27 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
This is yet another version of the series last posted at [1].
The eagle-eyed reviewer will have noticed that since v2, the series
has more or less doubled in size by any reasonable metric (number of
patches, number of lines added or deleted). It is therefore pretty
urgent that this gets either merged or forgotten! ;-)
See the change log below for the details -- most of it is related to
FGT2 (and its rather large dependencies) being added.
* From v2:
- Added comprehensive support for FEAT_FGT2, as the host kernel is
now making use of these registers, without any form of context
switch in KVM. What could possibly go wrong?
- Reworked some of the FGT description and handling primitives,
reducing the boilerplate code and tables that get added over time.
- Rebased on 6.15-rc3.
[1]: https://lore.kernel.org/r/20250310122505.2857610-1-maz@kernel.org
Marc Zyngier (41):
arm64: sysreg: Add ID_AA64ISAR1_EL1.LS64 encoding for FEAT_LS64WB
arm64: sysreg: Update ID_AA64MMFR4_EL1 description
arm64: sysreg: Add layout for HCR_EL2
arm64: sysreg: Replace HFGxTR_EL2 with HFG{R,W}TR_EL2
arm64: sysreg: Update ID_AA64PFR0_EL1 description
arm64: sysreg: Update PMSIDR_EL1 description
arm64: sysreg: Update TRBIDR_EL1 description
arm64: sysreg: Add registers trapped by HFG{R,W}TR2_EL2
arm64: sysreg: Add registers trapped by HDFG{R,W}TR2_EL2
arm64: sysreg: Add system instructions trapped by HFGITR2_EL2
arm64: Remove duplicated sysreg encodings
arm64: tools: Resync sysreg.h
arm64: Add syndrome information for trapped LD64B/ST64B{,V,V0}
arm64: Add FEAT_FGT2 capability
KVM: arm64: Tighten handling of unknown FGT groups
KVM: arm64: Simplify handling of negative FGT bits
KVM: arm64: Handle trapping of FEAT_LS64* instructions
KVM: arm64: Restrict ACCDATA_EL1 undef to FEAT_ST64_ACCDATA being
disabled
KVM: arm64: Don't treat HCRX_EL2 as a FGT register
KVM: arm64: Plug FEAT_GCS handling
KVM: arm64: Compute FGT masks from KVM's own FGT tables
KVM: arm64: Add description of FGT bits leading to EC!=0x18
KVM: arm64: Use computed masks as sanitisers for FGT registers
KVM: arm64: Propagate FGT masks to the nVHE hypervisor
KVM: arm64: Use computed FGT masks to setup FGT registers
KVM: arm64: Remove hand-crafted masks for FGT registers
KVM: arm64: Use KVM-specific HCRX_EL2 RES0 mask
KVM: arm64: Handle PSB CSYNC traps
KVM: arm64: Switch to table-driven FGU configuration
KVM: arm64: Validate FGT register descriptions against RES0 masks
KVM: arm64: Use FGT feature maps to drive RES0 bits
KVM: arm64: Allow kvm_has_feat() to take variable arguments
KVM: arm64: Use HCRX_EL2 feature map to drive fixed-value bits
KVM: arm64: Use HCR_EL2 feature map to drive fixed-value bits
KVM: arm64: Add FEAT_FGT2 registers to the VNCR page
KVM: arm64: Add sanitisation for FEAT_FGT2 registers
KVM: arm64: Add trap routing for FEAT_FGT2 registers
KVM: arm64: Add context-switch for FEAT_FGT2 registers
KVM: arm64: Allow sysreg ranges for FGT descriptors
KVM: arm64: Add FGT descriptors for FEAT_FGT2
KVM: arm64: Handle TSB CSYNC traps
Mark Rutland (1):
KVM: arm64: Unconditionally configure fine-grain traps
arch/arm64/include/asm/el2_setup.h | 14 +-
arch/arm64/include/asm/esr.h | 10 +-
arch/arm64/include/asm/kvm_arm.h | 186 ++--
arch/arm64/include/asm/kvm_host.h | 56 +-
arch/arm64/include/asm/sysreg.h | 26 +-
arch/arm64/include/asm/vncr_mapping.h | 5 +
arch/arm64/kernel/cpufeature.c | 7 +
arch/arm64/kvm/Makefile | 2 +-
arch/arm64/kvm/arm.c | 13 +
arch/arm64/kvm/config.c | 1085 +++++++++++++++++++++++
arch/arm64/kvm/emulate-nested.c | 580 ++++++++----
arch/arm64/kvm/handle_exit.c | 77 ++
arch/arm64/kvm/hyp/include/hyp/switch.h | 158 ++--
arch/arm64/kvm/hyp/nvhe/switch.c | 12 +
arch/arm64/kvm/hyp/vgic-v3-sr.c | 8 +-
arch/arm64/kvm/nested.c | 223 +----
arch/arm64/kvm/sys_regs.c | 68 +-
arch/arm64/tools/cpucaps | 1 +
arch/arm64/tools/sysreg | 1002 ++++++++++++++++++++-
tools/arch/arm64/include/asm/sysreg.h | 65 +-
20 files changed, 2888 insertions(+), 710 deletions(-)
create mode 100644 arch/arm64/kvm/config.c
--
2.39.2
^ permalink raw reply [flat|nested] 71+ messages in thread
* [PATCH v3 01/42] arm64: sysreg: Add ID_AA64ISAR1_EL1.LS64 encoding for FEAT_LS64WB
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
@ 2025-04-26 12:27 ` Marc Zyngier
2025-04-29 13:34 ` Joey Gouly
2025-04-26 12:27 ` [PATCH v3 02/42] arm64: sysreg: Update ID_AA64MMFR4_EL1 description Marc Zyngier
` (41 subsequent siblings)
42 siblings, 1 reply; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:27 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
The 2024 extensions add yet another variant of LS64 (aptly named
FEAT_LS64WB), supporting LS64 accesses to write-back memory, as well
as 32-byte single-copy atomic accesses using pairs of FP registers.
Add the relevant encoding to ID_AA64ISAR1_EL1.LS64.
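For illustration only (not part of this patch), a minimal sketch of
how the new level could be detected once the constant is generated.
read_sysreg() and SYS_FIELD_GET() are existing kernel helpers; the
ID_AA64ISAR1_EL1_LS64_LS64WB name assumes the generator's usual
<reg>_<field>_<value> output:

    u64 isar1 = read_sysreg(id_aa64isar1_el1);

    /* LS64 is an unsigned ID field: higher values imply the lower ones */
    if (SYS_FIELD_GET(ID_AA64ISAR1_EL1, LS64, isar1) >=
        ID_AA64ISAR1_EL1_LS64_LS64WB) {
            /* FEAT_LS64WB: LD64B/ST64B{,V,V0} may target write-back memory */
    }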
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/tools/sysreg | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index bdf044c5d11b6..e5da8848b66b5 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -1466,6 +1466,7 @@ UnsignedEnum 63:60 LS64
0b0001 LS64
0b0010 LS64_V
0b0011 LS64_ACCDATA
+ 0b0100 LS64WB
EndEnum
UnsignedEnum 59:56 XS
0b0000 NI
--
2.39.2
^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH v3 02/42] arm64: sysreg: Update ID_AA64MMFR4_EL1 description
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
2025-04-26 12:27 ` [PATCH v3 01/42] arm64: sysreg: Add ID_AA64ISAR1_EL1.LS64 encoding for FEAT_LS64WB Marc Zyngier
@ 2025-04-26 12:27 ` Marc Zyngier
2025-04-29 13:38 ` Joey Gouly
2025-04-26 12:27 ` [PATCH v3 03/42] arm64: sysreg: Add layout for HCR_EL2 Marc Zyngier
` (40 subsequent siblings)
42 siblings, 1 reply; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:27 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Resync the ID_AA64MMFR4_EL1 description with the architecture.
This results in:
- the new PoPS field
- the new NV2P1 value for the NV_frac field
- the new RMEGDI field
- the new SRMASK field
These fields have been generated from the reference JSON file.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/tools/sysreg | 19 ++++++++++++++++---
1 file changed, 16 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index e5da8848b66b5..fce8328c7c00b 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -1946,12 +1946,21 @@ EndEnum
EndSysreg
Sysreg ID_AA64MMFR4_EL1 3 0 0 7 4
-Res0 63:40
+Res0 63:48
+UnsignedEnum 47:44 SRMASK
+ 0b0000 NI
+ 0b0001 IMP
+EndEnum
+Res0 43:40
UnsignedEnum 39:36 E3DSE
0b0000 NI
0b0001 IMP
EndEnum
-Res0 35:28
+Res0 35:32
+UnsignedEnum 31:28 RMEGDI
+ 0b0000 NI
+ 0b0001 IMP
+EndEnum
SignedEnum 27:24 E2H0
0b0000 IMP
0b1110 NI_NV1
@@ -1960,6 +1969,7 @@ EndEnum
UnsignedEnum 23:20 NV_frac
0b0000 NV_NV2
0b0001 NV2_ONLY
+ 0b0010 NV2P1
EndEnum
UnsignedEnum 19:16 FGWTE3
0b0000 NI
@@ -1979,7 +1989,10 @@ SignedEnum 7:4 EIESB
0b0010 ToELx
0b1111 ANY
EndEnum
-Res0 3:0
+UnsignedEnum 3:0 PoPS
+ 0b0000 NI
+ 0b0001 IMP
+EndEnum
EndSysreg
Sysreg SCTLR_EL1 3 0 1 0 0
--
2.39.2
^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH v3 03/42] arm64: sysreg: Add layout for HCR_EL2
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
2025-04-26 12:27 ` [PATCH v3 01/42] arm64: sysreg: Add ID_AA64ISAR1_EL1.LS64 encoding for FEAT_LS64WB Marc Zyngier
2025-04-26 12:27 ` [PATCH v3 02/42] arm64: sysreg: Update ID_AA64MMFR4_EL1 description Marc Zyngier
@ 2025-04-26 12:27 ` Marc Zyngier
2025-04-29 14:02 ` Joey Gouly
2025-04-26 12:27 ` [PATCH v3 04/42] arm64: sysreg: Replace HFGxTR_EL2 with HFG{R,W}TR_EL2 Marc Zyngier
` (39 subsequent siblings)
42 siblings, 1 reply; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:27 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Add HCR_EL2 to the sysreg file, more or less directly generated
from the JSON file.
Since the generated names significantly differ from the existing
naming, express the old names in terms of the new ones. One day, we'll
fix this mess, but I'm not in any hurry.
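Purely as an illustration of the aliasing (not part of the patch),
and assuming the generator emits HCR_EL2_TGE as the usual single-bit
mask, the old and new spellings can be mixed freely in any EL2
context:

    u64 hcr = read_sysreg(hcr_el2);

    if (hcr & HCR_TGE)          /* legacy name, expands to __HCR(TGE)    */
            hcr &= ~HCR_EL2_TGE;        /* new name clears the very same bit */

    write_sysreg(hcr, hcr_el2);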
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_arm.h | 125 ++++++++++++++++---------------
arch/arm64/tools/sysreg | 68 +++++++++++++++++
2 files changed, 132 insertions(+), 61 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 974d72b5905b8..f36d067967c33 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -12,67 +12,70 @@
#include <asm/sysreg.h>
#include <asm/types.h>
-/* Hyp Configuration Register (HCR) bits */
-
-#define HCR_TID5 (UL(1) << 58)
-#define HCR_DCT (UL(1) << 57)
-#define HCR_ATA_SHIFT 56
-#define HCR_ATA (UL(1) << HCR_ATA_SHIFT)
-#define HCR_TTLBOS (UL(1) << 55)
-#define HCR_TTLBIS (UL(1) << 54)
-#define HCR_ENSCXT (UL(1) << 53)
-#define HCR_TOCU (UL(1) << 52)
-#define HCR_AMVOFFEN (UL(1) << 51)
-#define HCR_TICAB (UL(1) << 50)
-#define HCR_TID4 (UL(1) << 49)
-#define HCR_FIEN (UL(1) << 47)
-#define HCR_FWB (UL(1) << 46)
-#define HCR_NV2 (UL(1) << 45)
-#define HCR_AT (UL(1) << 44)
-#define HCR_NV1 (UL(1) << 43)
-#define HCR_NV (UL(1) << 42)
-#define HCR_API (UL(1) << 41)
-#define HCR_APK (UL(1) << 40)
-#define HCR_TEA (UL(1) << 37)
-#define HCR_TERR (UL(1) << 36)
-#define HCR_TLOR (UL(1) << 35)
-#define HCR_E2H (UL(1) << 34)
-#define HCR_ID (UL(1) << 33)
-#define HCR_CD (UL(1) << 32)
-#define HCR_RW_SHIFT 31
-#define HCR_RW (UL(1) << HCR_RW_SHIFT)
-#define HCR_TRVM (UL(1) << 30)
-#define HCR_HCD (UL(1) << 29)
-#define HCR_TDZ (UL(1) << 28)
-#define HCR_TGE (UL(1) << 27)
-#define HCR_TVM (UL(1) << 26)
-#define HCR_TTLB (UL(1) << 25)
-#define HCR_TPU (UL(1) << 24)
-#define HCR_TPC (UL(1) << 23) /* HCR_TPCP if FEAT_DPB */
-#define HCR_TSW (UL(1) << 22)
-#define HCR_TACR (UL(1) << 21)
-#define HCR_TIDCP (UL(1) << 20)
-#define HCR_TSC (UL(1) << 19)
-#define HCR_TID3 (UL(1) << 18)
-#define HCR_TID2 (UL(1) << 17)
-#define HCR_TID1 (UL(1) << 16)
-#define HCR_TID0 (UL(1) << 15)
-#define HCR_TWE (UL(1) << 14)
-#define HCR_TWI (UL(1) << 13)
-#define HCR_DC (UL(1) << 12)
-#define HCR_BSU (3 << 10)
-#define HCR_BSU_IS (UL(1) << 10)
-#define HCR_FB (UL(1) << 9)
-#define HCR_VSE (UL(1) << 8)
-#define HCR_VI (UL(1) << 7)
-#define HCR_VF (UL(1) << 6)
-#define HCR_AMO (UL(1) << 5)
-#define HCR_IMO (UL(1) << 4)
-#define HCR_FMO (UL(1) << 3)
-#define HCR_PTW (UL(1) << 2)
-#define HCR_SWIO (UL(1) << 1)
-#define HCR_VM (UL(1) << 0)
-#define HCR_RES0 ((UL(1) << 48) | (UL(1) << 39))
+/*
+ * Because I'm terribly lazy and repainting the whole of the KVM code
+ * with the proper names is a pain, use a helper to map the names
+ * inherited from AArch32 onto the new fancy nomenclature. One day...
+ */
+#define __HCR(x) HCR_EL2_##x
+
+#define HCR_TID5 __HCR(TID5)
+#define HCR_DCT __HCR(DCT)
+#define HCR_ATA_SHIFT __HCR(ATA_SHIFT)
+#define HCR_ATA __HCR(ATA)
+#define HCR_TTLBOS __HCR(TTLBOS)
+#define HCR_TTLBIS __HCR(TTLBIS)
+#define HCR_ENSCXT __HCR(EnSCXT)
+#define HCR_TOCU __HCR(TOCU)
+#define HCR_AMVOFFEN __HCR(AMVOFFEN)
+#define HCR_TICAB __HCR(TICAB)
+#define HCR_TID4 __HCR(TID4)
+#define HCR_FIEN __HCR(FIEN)
+#define HCR_FWB __HCR(FWB)
+#define HCR_NV2 __HCR(NV2)
+#define HCR_AT __HCR(AT)
+#define HCR_NV1 __HCR(NV1)
+#define HCR_NV __HCR(NV)
+#define HCR_API __HCR(API)
+#define HCR_APK __HCR(APK)
+#define HCR_TEA __HCR(TEA)
+#define HCR_TERR __HCR(TERR)
+#define HCR_TLOR __HCR(TLOR)
+#define HCR_E2H __HCR(E2H)
+#define HCR_ID __HCR(ID)
+#define HCR_CD __HCR(CD)
+#define HCR_RW __HCR(RW)
+#define HCR_TRVM __HCR(TRVM)
+#define HCR_HCD __HCR(HCD)
+#define HCR_TDZ __HCR(TDZ)
+#define HCR_TGE __HCR(TGE)
+#define HCR_TVM __HCR(TVM)
+#define HCR_TTLB __HCR(TTLB)
+#define HCR_TPU __HCR(TPU)
+#define HCR_TPC __HCR(TPCP)
+#define HCR_TSW __HCR(TSW)
+#define HCR_TACR __HCR(TACR)
+#define HCR_TIDCP __HCR(TIDCP)
+#define HCR_TSC __HCR(TSC)
+#define HCR_TID3 __HCR(TID3)
+#define HCR_TID2 __HCR(TID2)
+#define HCR_TID1 __HCR(TID1)
+#define HCR_TID0 __HCR(TID0)
+#define HCR_TWE __HCR(TWE)
+#define HCR_TWI __HCR(TWI)
+#define HCR_DC __HCR(DC)
+#define HCR_BSU __HCR(BSU)
+#define HCR_BSU_IS __HCR(BSU_IS)
+#define HCR_FB __HCR(FB)
+#define HCR_VSE __HCR(VSE)
+#define HCR_VI __HCR(VI)
+#define HCR_VF __HCR(VF)
+#define HCR_AMO __HCR(AMO)
+#define HCR_IMO __HCR(IMO)
+#define HCR_FMO __HCR(FMO)
+#define HCR_PTW __HCR(PTW)
+#define HCR_SWIO __HCR(SWIO)
+#define HCR_VM __HCR(VM)
/*
* The bits we set in HCR:
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index fce8328c7c00b..7f39c8f7f036d 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -2531,6 +2531,74 @@ Field 1 AFSR1_EL1
Field 0 AFSR0_EL1
EndSysregFields
+Sysreg HCR_EL2 3 4 1 1 0
+Field 63:60 TWEDEL
+Field 59 TWEDEn
+Field 58 TID5
+Field 57 DCT
+Field 56 ATA
+Field 55 TTLBOS
+Field 54 TTLBIS
+Field 53 EnSCXT
+Field 52 TOCU
+Field 51 AMVOFFEN
+Field 50 TICAB
+Field 49 TID4
+Field 48 GPF
+Field 47 FIEN
+Field 46 FWB
+Field 45 NV2
+Field 44 AT
+Field 43 NV1
+Field 42 NV
+Field 41 API
+Field 40 APK
+Field 39 TME
+Field 38 MIOCNCE
+Field 37 TEA
+Field 36 TERR
+Field 35 TLOR
+Field 34 E2H
+Field 33 ID
+Field 32 CD
+Field 31 RW
+Field 30 TRVM
+Field 29 HCD
+Field 28 TDZ
+Field 27 TGE
+Field 26 TVM
+Field 25 TTLB
+Field 24 TPU
+Field 23 TPCP
+Field 22 TSW
+Field 21 TACR
+Field 20 TIDCP
+Field 19 TSC
+Field 18 TID3
+Field 17 TID2
+Field 16 TID1
+Field 15 TID0
+Field 14 TWE
+Field 13 TWI
+Field 12 DC
+UnsignedEnum 11:10 BSU
+ 0b00 NONE
+ 0b01 IS
+ 0b10 OS
+ 0b11 FS
+EndEnum
+Field 9 FB
+Field 8 VSE
+Field 7 VI
+Field 6 VF
+Field 5 AMO
+Field 4 IMO
+Field 3 FMO
+Field 2 PTW
+Field 1 SWIO
+Field 0 VM
+EndSysreg
+
Sysreg MDCR_EL2 3 4 1 1 1
Res0 63:51
Field 50 EnSTEPOP
--
2.39.2
^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH v3 04/42] arm64: sysreg: Replace HFGxTR_EL2 with HFG{R,W}TR_EL2
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (2 preceding siblings ...)
2025-04-26 12:27 ` [PATCH v3 03/42] arm64: sysreg: Add layout for HCR_EL2 Marc Zyngier
@ 2025-04-26 12:27 ` Marc Zyngier
2025-04-29 13:07 ` Ben Horgan
2025-04-29 14:26 ` Joey Gouly
2025-04-26 12:27 ` [PATCH v3 05/42] arm64: sysreg: Update ID_AA64PFR0_EL1 description Marc Zyngier
` (38 subsequent siblings)
42 siblings, 2 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:27 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Treating HFGRTR_EL2 and HFGWTR_EL2 identically was a mistake.
It makes things hard to reason about, has the potential to
introduce bugs by giving a meaning to bits that are really reserved,
and is in general a bad description of the architecture.
Given that #defines are cheap, let's describe both registers as
intended by the architecture, and repaint all the existing uses.
Yes, this is painful.
The registers themselves are generated from the JSON file in
an automated way.
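As a sketch of what the split buys us (illustration only; the
check_fgt_split_example() wrapper is hypothetical, and the
HFGRTR_EL2_ERXPFGF_EL1 / HFGWTR_EL2_RES0 names assume the generator's
usual output): a bit that only exists on the read side no longer
acquires a bogus meaning on the write side, and the RES0 masks now
differ between the two registers.

    static inline void check_fgt_split_example(void)
    {
            /*
             * ERXPFGF_EL1 is a valid trap bit in HFGRTR_EL2 (bit 46),
             * but that same bit is RES0 in HFGWTR_EL2, since there is
             * no write-trap for it. The generated masks reflect this.
             */
            BUILD_BUG_ON(!(HFGWTR_EL2_RES0 & HFGRTR_EL2_ERXPFGF_EL1));
    }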
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/el2_setup.h | 14 +-
arch/arm64/include/asm/kvm_arm.h | 4 +-
arch/arm64/include/asm/kvm_host.h | 3 +-
arch/arm64/kvm/emulate-nested.c | 154 +++++++++----------
arch/arm64/kvm/hyp/include/hyp/switch.h | 4 +-
arch/arm64/kvm/hyp/vgic-v3-sr.c | 8 +-
arch/arm64/kvm/nested.c | 42 ++---
arch/arm64/kvm/sys_regs.c | 20 +--
arch/arm64/tools/sysreg | 194 +++++++++++++++---------
9 files changed, 250 insertions(+), 193 deletions(-)
diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
index ebceaae3c749b..055e69a4184ce 100644
--- a/arch/arm64/include/asm/el2_setup.h
+++ b/arch/arm64/include/asm/el2_setup.h
@@ -213,8 +213,8 @@
cbz x1, .Lskip_debug_fgt_\@
/* Disable nVHE traps of TPIDR2 and SMPRI */
- orr x0, x0, #HFGxTR_EL2_nSMPRI_EL1_MASK
- orr x0, x0, #HFGxTR_EL2_nTPIDR2_EL0_MASK
+ orr x0, x0, #HFGRTR_EL2_nSMPRI_EL1_MASK
+ orr x0, x0, #HFGRTR_EL2_nTPIDR2_EL0_MASK
.Lskip_debug_fgt_\@:
mrs_s x1, SYS_ID_AA64MMFR3_EL1
@@ -222,8 +222,8 @@
cbz x1, .Lskip_pie_fgt_\@
/* Disable trapping of PIR_EL1 / PIRE0_EL1 */
- orr x0, x0, #HFGxTR_EL2_nPIR_EL1
- orr x0, x0, #HFGxTR_EL2_nPIRE0_EL1
+ orr x0, x0, #HFGRTR_EL2_nPIR_EL1
+ orr x0, x0, #HFGRTR_EL2_nPIRE0_EL1
.Lskip_pie_fgt_\@:
mrs_s x1, SYS_ID_AA64MMFR3_EL1
@@ -231,7 +231,7 @@
cbz x1, .Lskip_poe_fgt_\@
/* Disable trapping of POR_EL0 */
- orr x0, x0, #HFGxTR_EL2_nPOR_EL0
+ orr x0, x0, #HFGRTR_EL2_nPOR_EL0
.Lskip_poe_fgt_\@:
/* GCS depends on PIE so we don't check it if PIE is absent */
@@ -240,8 +240,8 @@
cbz x1, .Lset_fgt_\@
/* Disable traps of access to GCS registers at EL0 and EL1 */
- orr x0, x0, #HFGxTR_EL2_nGCS_EL1_MASK
- orr x0, x0, #HFGxTR_EL2_nGCS_EL0_MASK
+ orr x0, x0, #HFGRTR_EL2_nGCS_EL1_MASK
+ orr x0, x0, #HFGRTR_EL2_nGCS_EL0_MASK
.Lset_fgt_\@:
msr_s SYS_HFGRTR_EL2, x0
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index f36d067967c33..43a630b940bfb 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -325,7 +325,7 @@
* Once we get to a point where the two describe the same thing, we'll
* merge the definitions. One day.
*/
-#define __HFGRTR_EL2_RES0 HFGxTR_EL2_RES0
+#define __HFGRTR_EL2_RES0 HFGRTR_EL2_RES0
#define __HFGRTR_EL2_MASK GENMASK(49, 0)
#define __HFGRTR_EL2_nMASK ~(__HFGRTR_EL2_RES0 | __HFGRTR_EL2_MASK)
@@ -336,7 +336,7 @@
#define __HFGRTR_ONLY_MASK (BIT(46) | BIT(42) | BIT(40) | BIT(28) | \
GENMASK(26, 25) | BIT(21) | BIT(18) | \
GENMASK(15, 14) | GENMASK(10, 9) | BIT(2))
-#define __HFGWTR_EL2_RES0 (__HFGRTR_EL2_RES0 | __HFGRTR_ONLY_MASK)
+#define __HFGWTR_EL2_RES0 HFGWTR_EL2_RES0
#define __HFGWTR_EL2_MASK (__HFGRTR_EL2_MASK & ~__HFGRTR_ONLY_MASK)
#define __HFGWTR_EL2_nMASK ~(__HFGWTR_EL2_RES0 | __HFGWTR_EL2_MASK)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e98cfe7855a62..7a1ef5be7efb2 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -273,7 +273,8 @@ struct kvm_sysreg_masks;
enum fgt_group_id {
__NO_FGT_GROUP__,
- HFGxTR_GROUP,
+ HFGRTR_GROUP,
+ HFGWTR_GROUP = HFGRTR_GROUP,
HDFGRTR_GROUP,
HDFGWTR_GROUP = HDFGRTR_GROUP,
HFGITR_GROUP,
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 0fcfcc0478f94..efe1eb3f1bd07 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -1296,81 +1296,81 @@ enum fg_filter_id {
static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
/* HFGRTR_EL2, HFGWTR_EL2 */
- SR_FGT(SYS_AMAIR2_EL1, HFGxTR, nAMAIR2_EL1, 0),
- SR_FGT(SYS_MAIR2_EL1, HFGxTR, nMAIR2_EL1, 0),
- SR_FGT(SYS_S2POR_EL1, HFGxTR, nS2POR_EL1, 0),
- SR_FGT(SYS_POR_EL1, HFGxTR, nPOR_EL1, 0),
- SR_FGT(SYS_POR_EL0, HFGxTR, nPOR_EL0, 0),
- SR_FGT(SYS_PIR_EL1, HFGxTR, nPIR_EL1, 0),
- SR_FGT(SYS_PIRE0_EL1, HFGxTR, nPIRE0_EL1, 0),
- SR_FGT(SYS_RCWMASK_EL1, HFGxTR, nRCWMASK_EL1, 0),
- SR_FGT(SYS_TPIDR2_EL0, HFGxTR, nTPIDR2_EL0, 0),
- SR_FGT(SYS_SMPRI_EL1, HFGxTR, nSMPRI_EL1, 0),
- SR_FGT(SYS_GCSCR_EL1, HFGxTR, nGCS_EL1, 0),
- SR_FGT(SYS_GCSPR_EL1, HFGxTR, nGCS_EL1, 0),
- SR_FGT(SYS_GCSCRE0_EL1, HFGxTR, nGCS_EL0, 0),
- SR_FGT(SYS_GCSPR_EL0, HFGxTR, nGCS_EL0, 0),
- SR_FGT(SYS_ACCDATA_EL1, HFGxTR, nACCDATA_EL1, 0),
- SR_FGT(SYS_ERXADDR_EL1, HFGxTR, ERXADDR_EL1, 1),
- SR_FGT(SYS_ERXPFGCDN_EL1, HFGxTR, ERXPFGCDN_EL1, 1),
- SR_FGT(SYS_ERXPFGCTL_EL1, HFGxTR, ERXPFGCTL_EL1, 1),
- SR_FGT(SYS_ERXPFGF_EL1, HFGxTR, ERXPFGF_EL1, 1),
- SR_FGT(SYS_ERXMISC0_EL1, HFGxTR, ERXMISCn_EL1, 1),
- SR_FGT(SYS_ERXMISC1_EL1, HFGxTR, ERXMISCn_EL1, 1),
- SR_FGT(SYS_ERXMISC2_EL1, HFGxTR, ERXMISCn_EL1, 1),
- SR_FGT(SYS_ERXMISC3_EL1, HFGxTR, ERXMISCn_EL1, 1),
- SR_FGT(SYS_ERXSTATUS_EL1, HFGxTR, ERXSTATUS_EL1, 1),
- SR_FGT(SYS_ERXCTLR_EL1, HFGxTR, ERXCTLR_EL1, 1),
- SR_FGT(SYS_ERXFR_EL1, HFGxTR, ERXFR_EL1, 1),
- SR_FGT(SYS_ERRSELR_EL1, HFGxTR, ERRSELR_EL1, 1),
- SR_FGT(SYS_ERRIDR_EL1, HFGxTR, ERRIDR_EL1, 1),
- SR_FGT(SYS_ICC_IGRPEN0_EL1, HFGxTR, ICC_IGRPENn_EL1, 1),
- SR_FGT(SYS_ICC_IGRPEN1_EL1, HFGxTR, ICC_IGRPENn_EL1, 1),
- SR_FGT(SYS_VBAR_EL1, HFGxTR, VBAR_EL1, 1),
- SR_FGT(SYS_TTBR1_EL1, HFGxTR, TTBR1_EL1, 1),
- SR_FGT(SYS_TTBR0_EL1, HFGxTR, TTBR0_EL1, 1),
- SR_FGT(SYS_TPIDR_EL0, HFGxTR, TPIDR_EL0, 1),
- SR_FGT(SYS_TPIDRRO_EL0, HFGxTR, TPIDRRO_EL0, 1),
- SR_FGT(SYS_TPIDR_EL1, HFGxTR, TPIDR_EL1, 1),
- SR_FGT(SYS_TCR_EL1, HFGxTR, TCR_EL1, 1),
- SR_FGT(SYS_TCR2_EL1, HFGxTR, TCR_EL1, 1),
- SR_FGT(SYS_SCXTNUM_EL0, HFGxTR, SCXTNUM_EL0, 1),
- SR_FGT(SYS_SCXTNUM_EL1, HFGxTR, SCXTNUM_EL1, 1),
- SR_FGT(SYS_SCTLR_EL1, HFGxTR, SCTLR_EL1, 1),
- SR_FGT(SYS_REVIDR_EL1, HFGxTR, REVIDR_EL1, 1),
- SR_FGT(SYS_PAR_EL1, HFGxTR, PAR_EL1, 1),
- SR_FGT(SYS_MPIDR_EL1, HFGxTR, MPIDR_EL1, 1),
- SR_FGT(SYS_MIDR_EL1, HFGxTR, MIDR_EL1, 1),
- SR_FGT(SYS_MAIR_EL1, HFGxTR, MAIR_EL1, 1),
- SR_FGT(SYS_LORSA_EL1, HFGxTR, LORSA_EL1, 1),
- SR_FGT(SYS_LORN_EL1, HFGxTR, LORN_EL1, 1),
- SR_FGT(SYS_LORID_EL1, HFGxTR, LORID_EL1, 1),
- SR_FGT(SYS_LOREA_EL1, HFGxTR, LOREA_EL1, 1),
- SR_FGT(SYS_LORC_EL1, HFGxTR, LORC_EL1, 1),
- SR_FGT(SYS_ISR_EL1, HFGxTR, ISR_EL1, 1),
- SR_FGT(SYS_FAR_EL1, HFGxTR, FAR_EL1, 1),
- SR_FGT(SYS_ESR_EL1, HFGxTR, ESR_EL1, 1),
- SR_FGT(SYS_DCZID_EL0, HFGxTR, DCZID_EL0, 1),
- SR_FGT(SYS_CTR_EL0, HFGxTR, CTR_EL0, 1),
- SR_FGT(SYS_CSSELR_EL1, HFGxTR, CSSELR_EL1, 1),
- SR_FGT(SYS_CPACR_EL1, HFGxTR, CPACR_EL1, 1),
- SR_FGT(SYS_CONTEXTIDR_EL1, HFGxTR, CONTEXTIDR_EL1, 1),
- SR_FGT(SYS_CLIDR_EL1, HFGxTR, CLIDR_EL1, 1),
- SR_FGT(SYS_CCSIDR_EL1, HFGxTR, CCSIDR_EL1, 1),
- SR_FGT(SYS_APIBKEYLO_EL1, HFGxTR, APIBKey, 1),
- SR_FGT(SYS_APIBKEYHI_EL1, HFGxTR, APIBKey, 1),
- SR_FGT(SYS_APIAKEYLO_EL1, HFGxTR, APIAKey, 1),
- SR_FGT(SYS_APIAKEYHI_EL1, HFGxTR, APIAKey, 1),
- SR_FGT(SYS_APGAKEYLO_EL1, HFGxTR, APGAKey, 1),
- SR_FGT(SYS_APGAKEYHI_EL1, HFGxTR, APGAKey, 1),
- SR_FGT(SYS_APDBKEYLO_EL1, HFGxTR, APDBKey, 1),
- SR_FGT(SYS_APDBKEYHI_EL1, HFGxTR, APDBKey, 1),
- SR_FGT(SYS_APDAKEYLO_EL1, HFGxTR, APDAKey, 1),
- SR_FGT(SYS_APDAKEYHI_EL1, HFGxTR, APDAKey, 1),
- SR_FGT(SYS_AMAIR_EL1, HFGxTR, AMAIR_EL1, 1),
- SR_FGT(SYS_AIDR_EL1, HFGxTR, AIDR_EL1, 1),
- SR_FGT(SYS_AFSR1_EL1, HFGxTR, AFSR1_EL1, 1),
- SR_FGT(SYS_AFSR0_EL1, HFGxTR, AFSR0_EL1, 1),
+ SR_FGT(SYS_AMAIR2_EL1, HFGRTR, nAMAIR2_EL1, 0),
+ SR_FGT(SYS_MAIR2_EL1, HFGRTR, nMAIR2_EL1, 0),
+ SR_FGT(SYS_S2POR_EL1, HFGRTR, nS2POR_EL1, 0),
+ SR_FGT(SYS_POR_EL1, HFGRTR, nPOR_EL1, 0),
+ SR_FGT(SYS_POR_EL0, HFGRTR, nPOR_EL0, 0),
+ SR_FGT(SYS_PIR_EL1, HFGRTR, nPIR_EL1, 0),
+ SR_FGT(SYS_PIRE0_EL1, HFGRTR, nPIRE0_EL1, 0),
+ SR_FGT(SYS_RCWMASK_EL1, HFGRTR, nRCWMASK_EL1, 0),
+ SR_FGT(SYS_TPIDR2_EL0, HFGRTR, nTPIDR2_EL0, 0),
+ SR_FGT(SYS_SMPRI_EL1, HFGRTR, nSMPRI_EL1, 0),
+ SR_FGT(SYS_GCSCR_EL1, HFGRTR, nGCS_EL1, 0),
+ SR_FGT(SYS_GCSPR_EL1, HFGRTR, nGCS_EL1, 0),
+ SR_FGT(SYS_GCSCRE0_EL1, HFGRTR, nGCS_EL0, 0),
+ SR_FGT(SYS_GCSPR_EL0, HFGRTR, nGCS_EL0, 0),
+ SR_FGT(SYS_ACCDATA_EL1, HFGRTR, nACCDATA_EL1, 0),
+ SR_FGT(SYS_ERXADDR_EL1, HFGRTR, ERXADDR_EL1, 1),
+ SR_FGT(SYS_ERXPFGCDN_EL1, HFGRTR, ERXPFGCDN_EL1, 1),
+ SR_FGT(SYS_ERXPFGCTL_EL1, HFGRTR, ERXPFGCTL_EL1, 1),
+ SR_FGT(SYS_ERXPFGF_EL1, HFGRTR, ERXPFGF_EL1, 1),
+ SR_FGT(SYS_ERXMISC0_EL1, HFGRTR, ERXMISCn_EL1, 1),
+ SR_FGT(SYS_ERXMISC1_EL1, HFGRTR, ERXMISCn_EL1, 1),
+ SR_FGT(SYS_ERXMISC2_EL1, HFGRTR, ERXMISCn_EL1, 1),
+ SR_FGT(SYS_ERXMISC3_EL1, HFGRTR, ERXMISCn_EL1, 1),
+ SR_FGT(SYS_ERXSTATUS_EL1, HFGRTR, ERXSTATUS_EL1, 1),
+ SR_FGT(SYS_ERXCTLR_EL1, HFGRTR, ERXCTLR_EL1, 1),
+ SR_FGT(SYS_ERXFR_EL1, HFGRTR, ERXFR_EL1, 1),
+ SR_FGT(SYS_ERRSELR_EL1, HFGRTR, ERRSELR_EL1, 1),
+ SR_FGT(SYS_ERRIDR_EL1, HFGRTR, ERRIDR_EL1, 1),
+ SR_FGT(SYS_ICC_IGRPEN0_EL1, HFGRTR, ICC_IGRPENn_EL1, 1),
+ SR_FGT(SYS_ICC_IGRPEN1_EL1, HFGRTR, ICC_IGRPENn_EL1, 1),
+ SR_FGT(SYS_VBAR_EL1, HFGRTR, VBAR_EL1, 1),
+ SR_FGT(SYS_TTBR1_EL1, HFGRTR, TTBR1_EL1, 1),
+ SR_FGT(SYS_TTBR0_EL1, HFGRTR, TTBR0_EL1, 1),
+ SR_FGT(SYS_TPIDR_EL0, HFGRTR, TPIDR_EL0, 1),
+ SR_FGT(SYS_TPIDRRO_EL0, HFGRTR, TPIDRRO_EL0, 1),
+ SR_FGT(SYS_TPIDR_EL1, HFGRTR, TPIDR_EL1, 1),
+ SR_FGT(SYS_TCR_EL1, HFGRTR, TCR_EL1, 1),
+ SR_FGT(SYS_TCR2_EL1, HFGRTR, TCR_EL1, 1),
+ SR_FGT(SYS_SCXTNUM_EL0, HFGRTR, SCXTNUM_EL0, 1),
+ SR_FGT(SYS_SCXTNUM_EL1, HFGRTR, SCXTNUM_EL1, 1),
+ SR_FGT(SYS_SCTLR_EL1, HFGRTR, SCTLR_EL1, 1),
+ SR_FGT(SYS_REVIDR_EL1, HFGRTR, REVIDR_EL1, 1),
+ SR_FGT(SYS_PAR_EL1, HFGRTR, PAR_EL1, 1),
+ SR_FGT(SYS_MPIDR_EL1, HFGRTR, MPIDR_EL1, 1),
+ SR_FGT(SYS_MIDR_EL1, HFGRTR, MIDR_EL1, 1),
+ SR_FGT(SYS_MAIR_EL1, HFGRTR, MAIR_EL1, 1),
+ SR_FGT(SYS_LORSA_EL1, HFGRTR, LORSA_EL1, 1),
+ SR_FGT(SYS_LORN_EL1, HFGRTR, LORN_EL1, 1),
+ SR_FGT(SYS_LORID_EL1, HFGRTR, LORID_EL1, 1),
+ SR_FGT(SYS_LOREA_EL1, HFGRTR, LOREA_EL1, 1),
+ SR_FGT(SYS_LORC_EL1, HFGRTR, LORC_EL1, 1),
+ SR_FGT(SYS_ISR_EL1, HFGRTR, ISR_EL1, 1),
+ SR_FGT(SYS_FAR_EL1, HFGRTR, FAR_EL1, 1),
+ SR_FGT(SYS_ESR_EL1, HFGRTR, ESR_EL1, 1),
+ SR_FGT(SYS_DCZID_EL0, HFGRTR, DCZID_EL0, 1),
+ SR_FGT(SYS_CTR_EL0, HFGRTR, CTR_EL0, 1),
+ SR_FGT(SYS_CSSELR_EL1, HFGRTR, CSSELR_EL1, 1),
+ SR_FGT(SYS_CPACR_EL1, HFGRTR, CPACR_EL1, 1),
+ SR_FGT(SYS_CONTEXTIDR_EL1, HFGRTR, CONTEXTIDR_EL1, 1),
+ SR_FGT(SYS_CLIDR_EL1, HFGRTR, CLIDR_EL1, 1),
+ SR_FGT(SYS_CCSIDR_EL1, HFGRTR, CCSIDR_EL1, 1),
+ SR_FGT(SYS_APIBKEYLO_EL1, HFGRTR, APIBKey, 1),
+ SR_FGT(SYS_APIBKEYHI_EL1, HFGRTR, APIBKey, 1),
+ SR_FGT(SYS_APIAKEYLO_EL1, HFGRTR, APIAKey, 1),
+ SR_FGT(SYS_APIAKEYHI_EL1, HFGRTR, APIAKey, 1),
+ SR_FGT(SYS_APGAKEYLO_EL1, HFGRTR, APGAKey, 1),
+ SR_FGT(SYS_APGAKEYHI_EL1, HFGRTR, APGAKey, 1),
+ SR_FGT(SYS_APDBKEYLO_EL1, HFGRTR, APDBKey, 1),
+ SR_FGT(SYS_APDBKEYHI_EL1, HFGRTR, APDBKey, 1),
+ SR_FGT(SYS_APDAKEYLO_EL1, HFGRTR, APDAKey, 1),
+ SR_FGT(SYS_APDAKEYHI_EL1, HFGRTR, APDAKey, 1),
+ SR_FGT(SYS_AMAIR_EL1, HFGRTR, AMAIR_EL1, 1),
+ SR_FGT(SYS_AIDR_EL1, HFGRTR, AIDR_EL1, 1),
+ SR_FGT(SYS_AFSR1_EL1, HFGRTR, AFSR1_EL1, 1),
+ SR_FGT(SYS_AFSR0_EL1, HFGRTR, AFSR0_EL1, 1),
/* HFGITR_EL2 */
SR_FGT(OP_AT_S1E1A, HFGITR, ATS1E1A, 1),
SR_FGT(OP_COSP_RCTX, HFGITR, COSPRCTX, 1),
@@ -2243,7 +2243,7 @@ static bool check_fgt_bit(struct kvm_vcpu *vcpu, bool is_read,
return false;
switch ((enum fgt_group_id)tc.fgt) {
- case HFGxTR_GROUP:
+ case HFGRTR_GROUP:
sr = is_read ? HFGRTR_EL2 : HFGWTR_EL2;
break;
@@ -2319,7 +2319,7 @@ bool triage_sysreg_trap(struct kvm_vcpu *vcpu, int *sr_index)
case __NO_FGT_GROUP__:
break;
- case HFGxTR_GROUP:
+ case HFGRTR_GROUP:
if (is_read)
val = __vcpu_sys_reg(vcpu, HFGRTR_EL2);
else
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index b741ea6aefa58..3150e42d79341 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -79,7 +79,7 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
switch(reg) { \
case HFGRTR_EL2: \
case HFGWTR_EL2: \
- id = HFGxTR_GROUP; \
+ id = HFGRTR_GROUP; \
break; \
case HFGITR_EL2: \
id = HFGITR_GROUP; \
@@ -166,7 +166,7 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
update_fgt_traps(hctxt, vcpu, kvm, HFGRTR_EL2);
update_fgt_traps_cs(hctxt, vcpu, kvm, HFGWTR_EL2, 0,
cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38) ?
- HFGxTR_EL2_TCR_EL1_MASK : 0);
+ HFGWTR_EL2_TCR_EL1_MASK : 0);
update_fgt_traps(hctxt, vcpu, kvm, HFGITR_EL2);
update_fgt_traps(hctxt, vcpu, kvm, HDFGRTR_EL2);
update_fgt_traps(hctxt, vcpu, kvm, HDFGWTR_EL2);
diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c
index ed363aa3027e5..f38565e28a23a 100644
--- a/arch/arm64/kvm/hyp/vgic-v3-sr.c
+++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c
@@ -1052,11 +1052,11 @@ static bool __vgic_v3_check_trap_forwarding(struct kvm_vcpu *vcpu,
switch (sysreg) {
case SYS_ICC_IGRPEN0_EL1:
if (is_read &&
- (__vcpu_sys_reg(vcpu, HFGRTR_EL2) & HFGxTR_EL2_ICC_IGRPENn_EL1))
+ (__vcpu_sys_reg(vcpu, HFGRTR_EL2) & HFGRTR_EL2_ICC_IGRPENn_EL1))
return true;
if (!is_read &&
- (__vcpu_sys_reg(vcpu, HFGWTR_EL2) & HFGxTR_EL2_ICC_IGRPENn_EL1))
+ (__vcpu_sys_reg(vcpu, HFGWTR_EL2) & HFGWTR_EL2_ICC_IGRPENn_EL1))
return true;
fallthrough;
@@ -1073,11 +1073,11 @@ static bool __vgic_v3_check_trap_forwarding(struct kvm_vcpu *vcpu,
case SYS_ICC_IGRPEN1_EL1:
if (is_read &&
- (__vcpu_sys_reg(vcpu, HFGRTR_EL2) & HFGxTR_EL2_ICC_IGRPENn_EL1))
+ (__vcpu_sys_reg(vcpu, HFGRTR_EL2) & HFGRTR_EL2_ICC_IGRPENn_EL1))
return true;
if (!is_read &&
- (__vcpu_sys_reg(vcpu, HFGWTR_EL2) & HFGxTR_EL2_ICC_IGRPENn_EL1))
+ (__vcpu_sys_reg(vcpu, HFGWTR_EL2) & HFGWTR_EL2_ICC_IGRPENn_EL1))
return true;
fallthrough;
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 4a3fc11f7ecf3..16f6129c70b59 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -1103,40 +1103,40 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
res0 = res1 = 0;
if (!(kvm_vcpu_has_feature(kvm, KVM_ARM_VCPU_PTRAUTH_ADDRESS) &&
kvm_vcpu_has_feature(kvm, KVM_ARM_VCPU_PTRAUTH_GENERIC)))
- res0 |= (HFGxTR_EL2_APDAKey | HFGxTR_EL2_APDBKey |
- HFGxTR_EL2_APGAKey | HFGxTR_EL2_APIAKey |
- HFGxTR_EL2_APIBKey);
+ res0 |= (HFGRTR_EL2_APDAKey | HFGRTR_EL2_APDBKey |
+ HFGRTR_EL2_APGAKey | HFGRTR_EL2_APIAKey |
+ HFGRTR_EL2_APIBKey);
if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, LO, IMP))
- res0 |= (HFGxTR_EL2_LORC_EL1 | HFGxTR_EL2_LOREA_EL1 |
- HFGxTR_EL2_LORID_EL1 | HFGxTR_EL2_LORN_EL1 |
- HFGxTR_EL2_LORSA_EL1);
+ res0 |= (HFGRTR_EL2_LORC_EL1 | HFGRTR_EL2_LOREA_EL1 |
+ HFGRTR_EL2_LORID_EL1 | HFGRTR_EL2_LORN_EL1 |
+ HFGRTR_EL2_LORSA_EL1);
if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, CSV2, CSV2_2) &&
!kvm_has_feat(kvm, ID_AA64PFR1_EL1, CSV2_frac, CSV2_1p2))
- res0 |= (HFGxTR_EL2_SCXTNUM_EL1 | HFGxTR_EL2_SCXTNUM_EL0);
+ res0 |= (HFGRTR_EL2_SCXTNUM_EL1 | HFGRTR_EL2_SCXTNUM_EL0);
if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, GIC, IMP))
- res0 |= HFGxTR_EL2_ICC_IGRPENn_EL1;
+ res0 |= HFGRTR_EL2_ICC_IGRPENn_EL1;
if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, IMP))
- res0 |= (HFGxTR_EL2_ERRIDR_EL1 | HFGxTR_EL2_ERRSELR_EL1 |
- HFGxTR_EL2_ERXFR_EL1 | HFGxTR_EL2_ERXCTLR_EL1 |
- HFGxTR_EL2_ERXSTATUS_EL1 | HFGxTR_EL2_ERXMISCn_EL1 |
- HFGxTR_EL2_ERXPFGF_EL1 | HFGxTR_EL2_ERXPFGCTL_EL1 |
- HFGxTR_EL2_ERXPFGCDN_EL1 | HFGxTR_EL2_ERXADDR_EL1);
+ res0 |= (HFGRTR_EL2_ERRIDR_EL1 | HFGRTR_EL2_ERRSELR_EL1 |
+ HFGRTR_EL2_ERXFR_EL1 | HFGRTR_EL2_ERXCTLR_EL1 |
+ HFGRTR_EL2_ERXSTATUS_EL1 | HFGRTR_EL2_ERXMISCn_EL1 |
+ HFGRTR_EL2_ERXPFGF_EL1 | HFGRTR_EL2_ERXPFGCTL_EL1 |
+ HFGRTR_EL2_ERXPFGCDN_EL1 | HFGRTR_EL2_ERXADDR_EL1);
if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_ACCDATA))
- res0 |= HFGxTR_EL2_nACCDATA_EL1;
+ res0 |= HFGRTR_EL2_nACCDATA_EL1;
if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, GCS, IMP))
- res0 |= (HFGxTR_EL2_nGCS_EL0 | HFGxTR_EL2_nGCS_EL1);
+ res0 |= (HFGRTR_EL2_nGCS_EL0 | HFGRTR_EL2_nGCS_EL1);
if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, SME, IMP))
- res0 |= (HFGxTR_EL2_nSMPRI_EL1 | HFGxTR_EL2_nTPIDR2_EL0);
+ res0 |= (HFGRTR_EL2_nSMPRI_EL1 | HFGRTR_EL2_nTPIDR2_EL0);
if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, THE, IMP))
- res0 |= HFGxTR_EL2_nRCWMASK_EL1;
+ res0 |= HFGRTR_EL2_nRCWMASK_EL1;
if (!kvm_has_s1pie(kvm))
- res0 |= (HFGxTR_EL2_nPIRE0_EL1 | HFGxTR_EL2_nPIR_EL1);
+ res0 |= (HFGRTR_EL2_nPIRE0_EL1 | HFGRTR_EL2_nPIR_EL1);
if (!kvm_has_s1poe(kvm))
- res0 |= (HFGxTR_EL2_nPOR_EL0 | HFGxTR_EL2_nPOR_EL1);
+ res0 |= (HFGRTR_EL2_nPOR_EL0 | HFGRTR_EL2_nPOR_EL1);
if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, S2POE, IMP))
- res0 |= HFGxTR_EL2_nS2POR_EL1;
+ res0 |= HFGRTR_EL2_nS2POR_EL1;
if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, AIE, IMP))
- res0 |= (HFGxTR_EL2_nMAIR2_EL1 | HFGxTR_EL2_nAMAIR2_EL1);
+ res0 |= (HFGRTR_EL2_nMAIR2_EL1 | HFGRTR_EL2_nAMAIR2_EL1);
set_sysreg_masks(kvm, HFGRTR_EL2, res0 | __HFGRTR_EL2_RES0, res1);
set_sysreg_masks(kvm, HFGWTR_EL2, res0 | __HFGWTR_EL2_RES0, res1);
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 005ad28f73068..6e01b06bedcae 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -5147,12 +5147,12 @@ void kvm_calculate_traps(struct kvm_vcpu *vcpu)
if (test_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags))
goto out;
- kvm->arch.fgu[HFGxTR_GROUP] = (HFGxTR_EL2_nAMAIR2_EL1 |
- HFGxTR_EL2_nMAIR2_EL1 |
- HFGxTR_EL2_nS2POR_EL1 |
- HFGxTR_EL2_nACCDATA_EL1 |
- HFGxTR_EL2_nSMPRI_EL1_MASK |
- HFGxTR_EL2_nTPIDR2_EL0_MASK);
+ kvm->arch.fgu[HFGRTR_GROUP] = (HFGRTR_EL2_nAMAIR2_EL1 |
+ HFGRTR_EL2_nMAIR2_EL1 |
+ HFGRTR_EL2_nS2POR_EL1 |
+ HFGRTR_EL2_nACCDATA_EL1 |
+ HFGRTR_EL2_nSMPRI_EL1_MASK |
+ HFGRTR_EL2_nTPIDR2_EL0_MASK);
if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
kvm->arch.fgu[HFGITR_GROUP] |= (HFGITR_EL2_TLBIRVAALE1OS|
@@ -5188,12 +5188,12 @@ void kvm_calculate_traps(struct kvm_vcpu *vcpu)
HFGITR_EL2_ATS1E1WP);
if (!kvm_has_s1pie(kvm))
- kvm->arch.fgu[HFGxTR_GROUP] |= (HFGxTR_EL2_nPIRE0_EL1 |
- HFGxTR_EL2_nPIR_EL1);
+ kvm->arch.fgu[HFGRTR_GROUP] |= (HFGRTR_EL2_nPIRE0_EL1 |
+ HFGRTR_EL2_nPIR_EL1);
if (!kvm_has_s1poe(kvm))
- kvm->arch.fgu[HFGxTR_GROUP] |= (HFGxTR_EL2_nPOR_EL1 |
- HFGxTR_EL2_nPOR_EL0);
+ kvm->arch.fgu[HFGRTR_GROUP] |= (HFGRTR_EL2_nPOR_EL1 |
+ HFGRTR_EL2_nPOR_EL0);
if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, IMP))
kvm->arch.fgu[HAFGRTR_GROUP] |= ~(HAFGRTR_EL2_RES0 |
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 7f39c8f7f036d..e21e881314a33 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -2464,73 +2464,6 @@ UnsignedEnum 2:0 F8S1
EndEnum
EndSysreg
-SysregFields HFGxTR_EL2
-Field 63 nAMAIR2_EL1
-Field 62 nMAIR2_EL1
-Field 61 nS2POR_EL1
-Field 60 nPOR_EL1
-Field 59 nPOR_EL0
-Field 58 nPIR_EL1
-Field 57 nPIRE0_EL1
-Field 56 nRCWMASK_EL1
-Field 55 nTPIDR2_EL0
-Field 54 nSMPRI_EL1
-Field 53 nGCS_EL1
-Field 52 nGCS_EL0
-Res0 51
-Field 50 nACCDATA_EL1
-Field 49 ERXADDR_EL1
-Field 48 ERXPFGCDN_EL1
-Field 47 ERXPFGCTL_EL1
-Field 46 ERXPFGF_EL1
-Field 45 ERXMISCn_EL1
-Field 44 ERXSTATUS_EL1
-Field 43 ERXCTLR_EL1
-Field 42 ERXFR_EL1
-Field 41 ERRSELR_EL1
-Field 40 ERRIDR_EL1
-Field 39 ICC_IGRPENn_EL1
-Field 38 VBAR_EL1
-Field 37 TTBR1_EL1
-Field 36 TTBR0_EL1
-Field 35 TPIDR_EL0
-Field 34 TPIDRRO_EL0
-Field 33 TPIDR_EL1
-Field 32 TCR_EL1
-Field 31 SCXTNUM_EL0
-Field 30 SCXTNUM_EL1
-Field 29 SCTLR_EL1
-Field 28 REVIDR_EL1
-Field 27 PAR_EL1
-Field 26 MPIDR_EL1
-Field 25 MIDR_EL1
-Field 24 MAIR_EL1
-Field 23 LORSA_EL1
-Field 22 LORN_EL1
-Field 21 LORID_EL1
-Field 20 LOREA_EL1
-Field 19 LORC_EL1
-Field 18 ISR_EL1
-Field 17 FAR_EL1
-Field 16 ESR_EL1
-Field 15 DCZID_EL0
-Field 14 CTR_EL0
-Field 13 CSSELR_EL1
-Field 12 CPACR_EL1
-Field 11 CONTEXTIDR_EL1
-Field 10 CLIDR_EL1
-Field 9 CCSIDR_EL1
-Field 8 APIBKey
-Field 7 APIAKey
-Field 6 APGAKey
-Field 5 APDBKey
-Field 4 APDAKey
-Field 3 AMAIR_EL1
-Field 2 AIDR_EL1
-Field 1 AFSR1_EL1
-Field 0 AFSR0_EL1
-EndSysregFields
-
Sysreg HCR_EL2 3 4 1 1 0
Field 63:60 TWEDEL
Field 59 TWEDEn
@@ -2635,11 +2568,134 @@ Field 4:0 HPMN
EndSysreg
Sysreg HFGRTR_EL2 3 4 1 1 4
-Fields HFGxTR_EL2
+Field 63 nAMAIR2_EL1
+Field 62 nMAIR2_EL1
+Field 61 nS2POR_EL1
+Field 60 nPOR_EL1
+Field 59 nPOR_EL0
+Field 58 nPIR_EL1
+Field 57 nPIRE0_EL1
+Field 56 nRCWMASK_EL1
+Field 55 nTPIDR2_EL0
+Field 54 nSMPRI_EL1
+Field 53 nGCS_EL1
+Field 52 nGCS_EL0
+Res0 51
+Field 50 nACCDATA_EL1
+Field 49 ERXADDR_EL1
+Field 48 ERXPFGCDN_EL1
+Field 47 ERXPFGCTL_EL1
+Field 46 ERXPFGF_EL1
+Field 45 ERXMISCn_EL1
+Field 44 ERXSTATUS_EL1
+Field 43 ERXCTLR_EL1
+Field 42 ERXFR_EL1
+Field 41 ERRSELR_EL1
+Field 40 ERRIDR_EL1
+Field 39 ICC_IGRPENn_EL1
+Field 38 VBAR_EL1
+Field 37 TTBR1_EL1
+Field 36 TTBR0_EL1
+Field 35 TPIDR_EL0
+Field 34 TPIDRRO_EL0
+Field 33 TPIDR_EL1
+Field 32 TCR_EL1
+Field 31 SCXTNUM_EL0
+Field 30 SCXTNUM_EL1
+Field 29 SCTLR_EL1
+Field 28 REVIDR_EL1
+Field 27 PAR_EL1
+Field 26 MPIDR_EL1
+Field 25 MIDR_EL1
+Field 24 MAIR_EL1
+Field 23 LORSA_EL1
+Field 22 LORN_EL1
+Field 21 LORID_EL1
+Field 20 LOREA_EL1
+Field 19 LORC_EL1
+Field 18 ISR_EL1
+Field 17 FAR_EL1
+Field 16 ESR_EL1
+Field 15 DCZID_EL0
+Field 14 CTR_EL0
+Field 13 CSSELR_EL1
+Field 12 CPACR_EL1
+Field 11 CONTEXTIDR_EL1
+Field 10 CLIDR_EL1
+Field 9 CCSIDR_EL1
+Field 8 APIBKey
+Field 7 APIAKey
+Field 6 APGAKey
+Field 5 APDBKey
+Field 4 APDAKey
+Field 3 AMAIR_EL1
+Field 2 AIDR_EL1
+Field 1 AFSR1_EL1
+Field 0 AFSR0_EL1
EndSysreg
Sysreg HFGWTR_EL2 3 4 1 1 5
-Fields HFGxTR_EL2
+Field 63 nAMAIR2_EL1
+Field 62 nMAIR2_EL1
+Field 61 nS2POR_EL1
+Field 60 nPOR_EL1
+Field 59 nPOR_EL0
+Field 58 nPIR_EL1
+Field 57 nPIRE0_EL1
+Field 56 nRCWMASK_EL1
+Field 55 nTPIDR2_EL0
+Field 54 nSMPRI_EL1
+Field 53 nGCS_EL1
+Field 52 nGCS_EL0
+Res0 51
+Field 50 nACCDATA_EL1
+Field 49 ERXADDR_EL1
+Field 48 ERXPFGCDN_EL1
+Field 47 ERXPFGCTL_EL1
+Res0 46
+Field 45 ERXMISCn_EL1
+Field 44 ERXSTATUS_EL1
+Field 43 ERXCTLR_EL1
+Res0 42
+Field 41 ERRSELR_EL1
+Res0 40
+Field 39 ICC_IGRPENn_EL1
+Field 38 VBAR_EL1
+Field 37 TTBR1_EL1
+Field 36 TTBR0_EL1
+Field 35 TPIDR_EL0
+Field 34 TPIDRRO_EL0
+Field 33 TPIDR_EL1
+Field 32 TCR_EL1
+Field 31 SCXTNUM_EL0
+Field 30 SCXTNUM_EL1
+Field 29 SCTLR_EL1
+Res0 28
+Field 27 PAR_EL1
+Res0 26:25
+Field 24 MAIR_EL1
+Field 23 LORSA_EL1
+Field 22 LORN_EL1
+Res0 21
+Field 20 LOREA_EL1
+Field 19 LORC_EL1
+Res0 18
+Field 17 FAR_EL1
+Field 16 ESR_EL1
+Res0 15:14
+Field 13 CSSELR_EL1
+Field 12 CPACR_EL1
+Field 11 CONTEXTIDR_EL1
+Res0 10:9
+Field 8 APIBKey
+Field 7 APIAKey
+Field 6 APGAKey
+Field 5 APDBKey
+Field 4 APDAKey
+Field 3 AMAIR_EL1
+Res0 2
+Field 1 AFSR1_EL1
+Field 0 AFSR0_EL1
EndSysreg
Sysreg HFGITR_EL2 3 4 1 1 6
--
2.39.2
^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH v3 05/42] arm64: sysreg: Update ID_AA64PFR0_EL1 description
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (3 preceding siblings ...)
2025-04-26 12:27 ` [PATCH v3 04/42] arm64: sysreg: Replace HFGxTR_EL2 with HFG{R,W}TR_EL2 Marc Zyngier
@ 2025-04-26 12:27 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 06/42] arm64: sysreg: Update PMSIDR_EL1 description Marc Zyngier
` (37 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:27 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Add the missing RASv2 description.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/tools/sysreg | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index e21e881314a33..bdfc02bd1eb10 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -907,6 +907,7 @@ UnsignedEnum 31:28 RAS
0b0000 NI
0b0001 IMP
0b0010 V1P1
+ 0b0011 V2
EndEnum
UnsignedEnum 27:24 GIC
0b0000 NI
--
2.39.2
^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH v3 06/42] arm64: sysreg: Update PMSIDR_EL1 description
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (4 preceding siblings ...)
2025-04-26 12:27 ` [PATCH v3 05/42] arm64: sysreg: Update ID_AA64PFR0_EL1 description Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 07/42] arm64: sysreg: Update TRBIDR_EL1 description Marc Zyngier
` (36 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Add the missing SME, ALTCLK, FPF, EFT, CRR and FDS fields.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/tools/sysreg | 28 ++++++++++++++++++++++++++--
1 file changed, 26 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index bdfc02bd1eb10..668a6f397362c 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -2241,7 +2241,28 @@ Field 15:0 MINLAT
EndSysreg
Sysreg PMSIDR_EL1 3 0 9 9 7
-Res0 63:25
+Res0 63:33
+UnsignedEnum 32 SME
+ 0b0 NI
+ 0b1 IMP
+EndEnum
+UnsignedEnum 31:28 ALTCLK
+ 0b0000 NI
+ 0b0001 IMP
+ 0b1111 IMPDEF
+EndEnum
+UnsignedEnum 27 FPF
+ 0b0 NI
+ 0b1 IMP
+EndEnum
+UnsignedEnum 26 EFT
+ 0b0 NI
+ 0b1 IMP
+EndEnum
+UnsignedEnum 25 CRR
+ 0b0 NI
+ 0b1 IMP
+EndEnum
Field 24 PBT
Field 23:20 FORMAT
Enum 19:16 COUNTSIZE
@@ -2259,7 +2280,10 @@ Enum 11:8 INTERVAL
0b0111 3072
0b1000 4096
EndEnum
-Res0 7
+UnsignedEnum 7 FDS
+ 0b0 NI
+ 0b1 IMP
+EndEnum
Field 6 FnE
Field 5 ERND
Field 4 LDS
--
2.39.2
^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH v3 07/42] arm64: sysreg: Update TRBIDR_EL1 description
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (5 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 06/42] arm64: sysreg: Update PMSIDR_EL1 description Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 08/42] arm64: sysreg: Add registers trapped by HFG{R,W}TR2_EL2 Marc Zyngier
` (35 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Add the missing MPAM field.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/tools/sysreg | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 668a6f397362c..6433a3ebcef49 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -3688,7 +3688,12 @@ Field 31:0 TRG
EndSysreg
Sysreg TRBIDR_EL1 3 0 9 11 7
-Res0 63:12
+Res0 63:16
+UnsignedEnum 15:12 MPAM
+ 0b0000 NI
+ 0b0001 DEFAULT
+ 0b0010 IMP
+EndEnum
Enum 11:8 EA
0b0000 NON_DESC
0b0001 IGNORE
--
2.39.2
^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH v3 08/42] arm64: sysreg: Add registers trapped by HFG{R,W}TR2_EL2
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (6 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 07/42] arm64: sysreg: Update TRBIDR_EL1 description Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-05-01 10:11 ` Joey Gouly
2025-04-26 12:28 ` [PATCH v3 09/42] arm64: sysreg: Add registers trapped by HDFG{R,W}TR2_EL2 Marc Zyngier
` (34 subsequent siblings)
42 siblings, 1 reply; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Bulk addition of all the system registers trapped by HFG{R,W}TR2_EL2.
The descriptions are extracted from the BSD-licensed JSON file that
is part of the 2025-03 drop from ARM.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/tools/sysreg | 395 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 395 insertions(+)
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 6433a3ebcef49..7969e632492bb 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -2068,6 +2068,26 @@ Field 1 A
Field 0 M
EndSysreg
+Sysreg SCTLR_EL12 3 5 1 0 0
+Mapping SCTLR_EL1
+EndSysreg
+
+Sysreg SCTLRALIAS_EL1 3 0 1 4 6
+Mapping SCTLR_EL1
+EndSysreg
+
+Sysreg ACTLR_EL1 3 0 1 0 1
+Field 63:0 IMPDEF
+EndSysreg
+
+Sysreg ACTLR_EL12 3 5 1 0 1
+Mapping ACTLR_EL1
+EndSysreg
+
+Sysreg ACTLRALIAS_EL1 3 0 1 4 5
+Mapping ACTLR_EL1
+EndSysreg
+
Sysreg CPACR_EL1 3 0 1 0 2
Res0 63:30
Field 29 E0POE
@@ -2081,6 +2101,323 @@ Field 17:16 ZEN
Res0 15:0
EndSysreg
+Sysreg CPACR_EL12 3 5 1 0 2
+Mapping CPACR_EL1
+EndSysreg
+
+Sysreg CPACRALIAS_EL1 3 0 1 4 4
+Mapping CPACR_EL1
+EndSysreg
+
+Sysreg ACTLRMASK_EL1 3 0 1 4 1
+Field 63:0 IMPDEF
+EndSysreg
+
+Sysreg ACTLRMASK_EL12 3 5 1 4 1
+Mapping ACTLRMASK_EL1
+EndSysreg
+
+Sysreg CPACRMASK_EL1 3 0 1 4 2
+Res0 63:32
+Field 31 TCPAC
+Field 30 TAM
+Field 29 E0POE
+Field 28 TTA
+Res0 27:25
+Field 24 SMEN
+Res0 23:21
+Field 20 FPEN
+Res0 19:17
+Field 16 ZEN
+Res0 15:0
+EndSysreg
+
+Sysreg CPACRMASK_EL12 3 5 1 4 2
+Mapping CPACRMASK_EL1
+EndSysreg
+
+Sysreg PFAR_EL1 3 0 6 0 5
+Field 63 NS
+Field 62 NSE
+Res0 61:56
+Field 55:52 PA_55_52
+Field 51:48 PA_51_48
+Field 47:0 PA
+EndSysreg
+
+Sysreg PFAR_EL12 3 5 6 0 5
+Mapping PFAR_EL1
+EndSysreg
+
+Sysreg RCWSMASK_EL1 3 0 13 0 3
+Field 63:0 RCWSMASK
+EndSysreg
+
+Sysreg SCTLR2_EL1 3 0 1 0 3
+Res0 63:13
+Field 12 CPTM0
+Field 11 CPTM
+Field 10 CPTA0
+Field 9 CPTA
+Field 8 EnPACM0
+Field 7 EnPACM
+Field 6 EnIDCP128
+Field 5 EASE
+Field 4 EnANERR
+Field 3 EnADERR
+Field 2 NMEA
+Res0 1:0
+EndSysreg
+
+Sysreg SCTLR2_EL12 3 5 1 0 3
+Mapping SCTLR2_EL1
+EndSysreg
+
+Sysreg SCTLR2ALIAS_EL1 3 0 1 4 7
+Mapping SCTLR2_EL1
+EndSysreg
+
+Sysreg SCTLR2MASK_EL1 3 0 1 4 3
+Res0 63:13
+Field 12 CPTM0
+Field 11 CPTM
+Field 10 CPTA0
+Field 9 CPTA
+Field 8 EnPACM0
+Field 7 EnPACM
+Field 6 EnIDCP128
+Field 5 EASE
+Field 4 EnANERR
+Field 3 EnADERR
+Field 2 NMEA
+Res0 1:0
+EndSysreg
+
+Sysreg SCTLR2MASK_EL12 3 5 1 4 3
+Mapping SCTLR2MASK_EL1
+EndSysreg
+
+Sysreg SCTLRMASK_EL1 3 0 1 4 0
+Field 63 TIDCP
+Field 62 SPINTMASK
+Field 61 NMI
+Field 60 EnTP2
+Field 59 TCSO
+Field 58 TCSO0
+Field 57 EPAN
+Field 56 EnALS
+Field 55 EnAS0
+Field 54 EnASR
+Field 53 TME
+Field 52 TME0
+Field 51 TMT
+Field 50 TMT0
+Res0 49:47
+Field 46 TWEDEL
+Field 45 TWEDEn
+Field 44 DSSBS
+Field 43 ATA
+Field 42 ATA0
+Res0 41
+Field 40 TCF
+Res0 39
+Field 38 TCF0
+Field 37 ITFSB
+Field 36 BT1
+Field 35 BT0
+Field 34 EnFPM
+Field 33 MSCEn
+Field 32 CMOW
+Field 31 EnIA
+Field 30 EnIB
+Field 29 LSMAOE
+Field 28 nTLSMD
+Field 27 EnDA
+Field 26 UCI
+Field 25 EE
+Field 24 E0E
+Field 23 SPAN
+Field 22 EIS
+Field 21 IESB
+Field 20 TSCXT
+Field 19 WXN
+Field 18 nTWE
+Res0 17
+Field 16 nTWI
+Field 15 UCT
+Field 14 DZE
+Field 13 EnDB
+Field 12 I
+Field 11 EOS
+Field 10 EnRCTX
+Field 9 UMA
+Field 8 SED
+Field 7 ITD
+Field 6 nAA
+Field 5 CP15BEN
+Field 4 SA0
+Field 3 SA
+Field 2 C
+Field 1 A
+Field 0 M
+EndSysreg
+
+Sysreg SCTLRMASK_EL12 3 5 1 4 0
+Mapping SCTLRMASK_EL1
+EndSysreg
+
+Sysreg TCR2MASK_EL1 3 0 2 7 3
+Res0 63:22
+Field 21 FNGNA1
+Field 20 FNGNA0
+Res0 19
+Field 18 FNG1
+Field 17 FNG0
+Field 16 A2
+Field 15 DisCH1
+Field 14 DisCH0
+Res0 13:12
+Field 11 HAFT
+Field 10 PTTWI
+Res0 9:6
+Field 5 D128
+Field 4 AIE
+Field 3 POE
+Field 2 E0POE
+Field 1 PIE
+Field 0 PnCH
+EndSysreg
+
+Sysreg TCR2MASK_EL12 3 5 2 7 3
+Mapping TCR2MASK_EL1
+EndSysreg
+
+Sysreg TCRMASK_EL1 3 0 2 7 2
+Res0 63:62
+Field 61 MTX1
+Field 60 MTX0
+Field 59 DS
+Field 58 TCMA1
+Field 57 TCMA0
+Field 56 E0PD1
+Field 55 E0PD0
+Field 54 NFD1
+Field 53 NFD0
+Field 52 TBID1
+Field 51 TBID0
+Field 50 HWU162
+Field 49 HWU161
+Field 48 HWU160
+Field 47 HWU159
+Field 46 HWU062
+Field 45 HWU061
+Field 44 HWU060
+Field 43 HWU059
+Field 42 HPD1
+Field 41 HPD0
+Field 40 HD
+Field 39 HA
+Field 38 TBI1
+Field 37 TBI0
+Field 36 AS
+Res0 35:33
+Field 32 IPS
+Res0 31
+Field 30 TG1
+Res0 29
+Field 28 SH1
+Res0 27
+Field 26 ORGN1
+Res0 25
+Field 24 IRGN1
+Field 23 EPD1
+Field 22 A1
+Res0 21:17
+Field 16 T1SZ
+Res0 15
+Field 14 TG0
+Res0 13
+Field 12 SH0
+Res0 11
+Field 10 ORGN0
+Res0 9
+Field 8 IRGN0
+Field 7 EPD0
+Res0 6:1
+Field 0 T0SZ
+EndSysreg
+
+Sysreg TCRMASK_EL12 3 5 2 7 2
+Mapping TCRMASK_EL1
+EndSysreg
+
+Sysreg ERXGSR_EL1 3 0 5 3 2
+Field 63 S63
+Field 62 S62
+Field 61 S61
+Field 60 S60
+Field 59 S59
+Field 58 S58
+Field 57 S57
+Field 56 S56
+Field 55 S55
+Field 54 S54
+Field 53 S53
+Field 52 S52
+Field 51 S51
+Field 50 S50
+Field 49 S49
+Field 48 S48
+Field 47 S47
+Field 46 S46
+Field 45 S45
+Field 44 S44
+Field 43 S43
+Field 42 S42
+Field 41 S41
+Field 40 S40
+Field 39 S39
+Field 38 S38
+Field 37 S37
+Field 36 S36
+Field 35 S35
+Field 34 S34
+Field 33 S33
+Field 32 S32
+Field 31 S31
+Field 30 S30
+Field 29 S29
+Field 28 S28
+Field 27 S27
+Field 26 S26
+Field 25 S25
+Field 24 S24
+Field 23 S23
+Field 22 S22
+Field 21 S21
+Field 20 S20
+Field 19 S19
+Field 18 S18
+Field 17 S17
+Field 16 S16
+Field 15 S15
+Field 14 S14
+Field 13 S13
+Field 12 S12
+Field 11 S11
+Field 10 S10
+Field 9 S9
+Field 8 S8
+Field 7 S7
+Field 6 S6
+Field 5 S5
+Field 4 S4
+Field 3 S3
+Field 2 S2
+Field 1 S1
+Field 0 S0
+EndSysreg
+
Sysreg TRFCR_EL1 3 0 1 2 1
Res0 63:7
UnsignedEnum 6:5 TS
@@ -3407,6 +3744,60 @@ Sysreg TTBR1_EL1 3 0 2 0 1
Fields TTBRx_EL1
EndSysreg
+Sysreg TCR_EL1 3 0 2 0 2
+Res0 63:62
+Field 61 MTX1
+Field 60 MTX0
+Field 59 DS
+Field 58 TCMA1
+Field 57 TCMA0
+Field 56 E0PD1
+Field 55 E0PD0
+Field 54 NFD1
+Field 53 NFD0
+Field 52 TBID1
+Field 51 TBID0
+Field 50 HWU162
+Field 49 HWU161
+Field 48 HWU160
+Field 47 HWU159
+Field 46 HWU062
+Field 45 HWU061
+Field 44 HWU060
+Field 43 HWU059
+Field 42 HPD1
+Field 41 HPD0
+Field 40 HD
+Field 39 HA
+Field 38 TBI1
+Field 37 TBI0
+Field 36 AS
+Res0 35
+Field 34:32 IPS
+Field 31:30 TG1
+Field 29:28 SH1
+Field 27:26 ORGN1
+Field 25:24 IRGN1
+Field 23 EPD1
+Field 22 A1
+Field 21:16 T1SZ
+Field 15:14 TG0
+Field 13:12 SH0
+Field 11:10 ORGN0
+Field 9:8 IRGN0
+Field 7 EPD0
+Res0 6
+Field 5:0 T0SZ
+EndSysreg
+
+Sysreg TCR_EL12 3 5 2 0 2
+Mapping TCR_EL1
+EndSysreg
+
+Sysreg TCRALIAS_EL1 3 0 2 7 6
+Mapping TCR_EL1
+EndSysreg
+
Sysreg TCR2_EL1 3 0 2 0 3
Res0 63:16
Field 15 DisCH1
@@ -3427,6 +3818,10 @@ Sysreg TCR2_EL12 3 5 2 0 3
Mapping TCR2_EL1
EndSysreg
+Sysreg TCR2ALIAS_EL1 3 0 2 7 7
+Mapping TCR2_EL1
+EndSysreg
+
Sysreg TCR2_EL2 3 4 2 0 3
Res0 63:16
Field 15 DisCH1
--
2.39.2
^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH v3 09/42] arm64: sysreg: Add registers trapped by HDFG{R,W}TR2_EL2
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (7 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 08/42] arm64: sysreg: Add registers trapped by HFG{R,W}TR2_EL2 Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-29 13:07 ` Ben Horgan
2025-04-26 12:28 ` [PATCH v3 10/42] arm64: sysreg: Add system instructions trapped by HFGITR2_EL2 Marc Zyngier
` (33 subsequent siblings)
42 siblings, 1 reply; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Bulk addition of all the system registers trapped by HDFG{R,W}TR2_EL2.
The descriptions are extracted from the BSD-licensed JSON file that
is part of the 2025-03 drop from ARM.
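For reference, a worked expansion of the SYS_SPMEV*n_EL0() helpers
added to sysreg.h below (illustration only, numbers derived from the
macros themselves):

    /*
     * __SPMEV_crm() packs the register-group prefix into CRm[3:1]
     * and n[3] into CRm[0], while n[2:0] lands in op2. For n = 10:
     *
     *    __SPMEV_crm(0b001, 10) = (0b001 << 1) | ((10 >> 3) & 1) = 0b0011
     *    __SPMEV_op2(10)        = 10 & 0x7                       = 0b010
     *
     * so SYS_SPMEVTYPERn_EL0(10) expands to sys_reg(2, 3, 14, 3, 2),
     * i.e. the S2_3_C14_C3_2 encoding of SPMEVTYPER10_EL0.
     */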
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/sysreg.h | 10 +
arch/arm64/tools/sysreg | 343 ++++++++++++++++++++++++++++++++
2 files changed, 353 insertions(+)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 2639d3633073d..a943eac446938 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -497,12 +497,22 @@
#define __PMEV_op2(n) ((n) & 0x7)
#define __CNTR_CRm(n) (0x8 | (((n) >> 3) & 0x3))
+#define SYS_PMEVCNTSVRn_EL1(n) sys_reg(2, 0, 14, __CNTR_CRm(n), __PMEV_op2(n))
#define SYS_PMEVCNTRn_EL0(n) sys_reg(3, 3, 14, __CNTR_CRm(n), __PMEV_op2(n))
#define __TYPER_CRm(n) (0xc | (((n) >> 3) & 0x3))
#define SYS_PMEVTYPERn_EL0(n) sys_reg(3, 3, 14, __TYPER_CRm(n), __PMEV_op2(n))
#define SYS_PMCCFILTR_EL0 sys_reg(3, 3, 14, 15, 7)
+#define SYS_SPMCGCRn_EL1(n) sys_reg(2, 0, 9, 13, ((n) & 1))
+
+#define __SPMEV_op2(n) ((n) & 0x7)
+#define __SPMEV_crm(p, n) ((((p) & 7) << 1) | (((n) >> 3) & 1))
+#define SYS_SPMEVCNTRn_EL0(n) sys_reg(2, 3, 14, __SPMEV_crm(0b000, n), __SPMEV_op2(n))
+#define SYS_SPMEVFILT2Rn_EL0(n) sys_reg(2, 3, 14, __SPMEV_crm(0b011, n), __SPMEV_op2(n))
+#define SYS_SPMEVFILTRn_EL0(n) sys_reg(2, 3, 14, __SPMEV_crm(0b010, n), __SPMEV_op2(n))
+#define SYS_SPMEVTYPERn_EL0(n) sys_reg(2, 3, 14, __SPMEV_crm(0b001, n), __SPMEV_op2(n))
+
#define SYS_VPIDR_EL2 sys_reg(3, 4, 0, 0, 0)
#define SYS_VMPIDR_EL2 sys_reg(3, 4, 0, 0, 5)
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 7969e632492bb..5695b12b8b4b2 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -101,6 +101,17 @@ Res0 63:32
Field 31:0 DTRTX
EndSysreg
+Sysreg MDSELR_EL1 2 0 0 4 2
+Res0 63:6
+Field 5:4 BANK
+Res0 3:0
+EndSysreg
+
+Sysreg MDSTEPOP_EL1 2 0 0 5 2
+Res0 63:32
+Field 31:0 OPCODE
+EndSysreg
+
Sysreg OSECCR_EL1 2 0 0 6 2
Res0 63:32
Field 31:0 EDECCR
@@ -111,6 +122,285 @@ Res0 63:1
Field 0 OSLK
EndSysreg
+Sysreg SPMACCESSR_EL1 2 0 9 13 3
+UnsignedEnum 63:62 P31
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 61:60 P30
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 59:58 P29
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 57:56 P28
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 55:54 P27
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 53:52 P26
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 51:50 P25
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 49:48 P24
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 47:46 P23
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 45:44 P22
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 43:42 P21
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 41:40 P20
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 39:38 P19
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 37:36 P18
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 35:34 P17
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 33:32 P16
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 31:30 P15
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 29:28 P14
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 27:26 P13
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 25:24 P12
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 23:22 P11
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 21:20 P10
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 19:18 P9
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 17:16 P8
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 15:14 P7
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 13:12 P6
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 11:10 P5
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 9:8 P4
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 7:6 P3
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 5:4 P2
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 3:2 P1
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+UnsignedEnum 1:0 P0
+ 0b00 TRAP_RW
+ 0b01 TRAP_W
+ 0b11 NOTRAP
+EndEnum
+EndSysreg
+
+Sysreg SPMACCESSR_EL12 2 5 9 13 3
+Mapping SPMACCESSR_EL1
+EndSysreg
+
+Sysreg SPMIIDR_EL1 2 0 9 13 4
+Res0 63:32
+Field 31:20 ProductID
+Field 19:16 Variant
+Field 15:12 Revision
+Field 11:0 Implementer
+EndSysreg
+
+Sysreg SPMDEVARCH_EL1 2 0 9 13 5
+Res0 63:32
+Field 31:21 ARCHITECT
+Field 20 PRESENT
+Field 19:16 REVISION
+Field 15:12 ARCHVER
+Field 11:0 ARCHPART
+EndSysreg
+
+Sysreg SPMDEVAFF_EL1 2 0 9 13 6
+Res0 63:40
+Field 39:32 Aff3
+Field 31 F0V
+Field 30 U
+Res0 29:25
+Field 24 MT
+Field 23:16 Aff2
+Field 15:8 Aff1
+Field 7:0 Aff0
+EndSysreg
+
+Sysreg SPMCFGR_EL1 2 0 9 13 7
+Res0 63:32
+Field 31:28 NCG
+Res0 27:25
+Field 24 HDBG
+Field 23 TRO
+Field 22 SS
+Field 21 FZO
+Field 20 MSI
+Field 19 RAO
+Res0 18
+Field 17 NA
+Field 16 EX
+Field 15:14 RAZ
+Field 13:8 SIZE
+Field 7:0 N
+EndSysreg
+
+Sysreg SPMINTENSET_EL1 2 0 9 14 1
+Field 63:0 P
+EndSysreg
+
+Sysreg SPMINTENCLR_EL1 2 0 9 14 2
+Field 63:0 P
+EndSysreg
+
+Sysreg PMCCNTSVR_EL1 2 0 14 11 7
+Field 63:0 CCNT
+EndSysreg
+
+Sysreg PMICNTSVR_EL1 2 0 14 12 0
+Field 63:0 ICNT
+EndSysreg
+
+Sysreg SPMCR_EL0 2 3 9 12 0
+Res0 63:12
+Field 11 TRO
+Field 10 HDBG
+Field 9 FZO
+Field 8 NA
+Res0 7:5
+Field 4 EX
+Res0 3:2
+Field 1 P
+Field 0 E
+EndSysreg
+
+Sysreg SPMCNTENSET_EL0 2 3 9 12 1
+Field 63:0 P
+EndSysreg
+
+Sysreg SPMCNTENCLR_EL0 2 3 9 12 2
+Field 63:0 P
+EndSysreg
+
+Sysreg SPMOVSCLR_EL0 2 3 9 12 3
+Field 63:0 P
+EndSysreg
+
+Sysreg SPMZR_EL0 2 3 9 12 4
+Field 63:0 P
+EndSysreg
+
+Sysreg SPMSELR_EL0 2 3 9 12 5
+Res0 63:10
+Field 9:4 SYSPMUSEL
+Res0 3:2
+Field 1:0 BANK
+EndSysreg
+
+Sysreg SPMOVSSET_EL0 2 3 9 14 3
+Field 63:0 P
+EndSysreg
+
+Sysreg SPMSCR_EL1 2 7 9 14 7
+Field 63:32 IMPDEF
+Field 31 RAO
+Res0 30:5
+Field 4 NAO
+Res0 3:1
+Field 0 SO
+EndSysreg
+
Sysreg ID_PFR0_EL1 3 0 0 1 0
Res0 63:32
UnsignedEnum 31:28 RAS
@@ -2430,6 +2720,16 @@ Field 1 ExTRE
Field 0 E0TRE
EndSysreg
+Sysreg TRCITECR_EL1 3 0 1 2 3
+Res0 63:2
+Field 1 E1E
+Field 0 E0E
+EndSysreg
+
+Sysreg TRCITECR_EL12 3 5 1 2 3
+Mapping TRCITECR_EL1
+EndSysreg
+
Sysreg SMPRI_EL1 3 0 1 2 4
Res0 63:4
Field 3:0 PRIORITY
@@ -2663,6 +2963,16 @@ Field 16 COLL
Field 15:0 MSS
EndSysreg
+Sysreg PMSDSFR_EL1 3 0 9 10 4
+Field 63:0 S
+EndSysreg
+
+Sysreg PMBMAR_EL1 3 0 9 10 5
+Res0 63:10
+Field 9:8 SH
+Field 7:0 Attr
+EndSysreg
+
Sysreg PMBIDR_EL1 3 0 9 10 7
Res0 63:12
Enum 11:8 EA
@@ -2676,6 +2986,21 @@ Field 4 P
Field 3:0 ALIGN
EndSysreg
+Sysreg TRBMPAM_EL1 3 0 9 11 5
+Res0 63:27
+Field 26 EN
+Field 25:24 MPAM_SP
+Field 23:16 PMG
+Field 15:0 PARTID
+EndSysreg
+
+Sysreg PMSSCR_EL1 3 0 9 13 3
+Res0 63:33
+Field 32 NC
+Res0 31:1
+Field 0 SS
+EndSysreg
+
Sysreg PMUACR_EL1 3 0 9 14 4
Res0 63:33
Field 32 F0
@@ -2683,11 +3008,29 @@ Field 31 C
Field 30:0 P
EndSysreg
+Sysreg PMECR_EL1 3 0 9 14 5
+Res0 63:5
+Field 4:3 SSE
+Field 2 KPME
+Field 1:0 PMEE
+EndSysreg
+
+Sysreg PMIAR_EL1 3 0 9 14 7
+Field 63:0 ADDRESS
+EndSysreg
+
Sysreg PMSELR_EL0 3 3 9 12 5
Res0 63:5
Field 4:0 SEL
EndSysreg
+Sysreg PMZR_EL0 3 3 9 13 4
+Res0 63:33
+Field 32 F0
+Field 31 C
+Field 30:0 P
+EndSysreg
+
SysregFields CONTEXTIDR_ELx
Res0 63:32
Field 31:0 PROCID
--
2.39.2
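The Pn accessor fields in the SPMACCESSR_EL1 description above are entirely
regular, and the sysreg generator turns each of them into the usual
shift/mask/enum macros. As a rough sketch (names follow the generator's
<reg>_<field>_* convention and are assumed, not copied from the generated
header), P0 would expand to something like:

	#define SPMACCESSR_EL1_P0_SHIFT		0
	#define SPMACCESSR_EL1_P0_MASK		GENMASK(1, 0)
	#define SPMACCESSR_EL1_P0_TRAP_RW	UL(0b00)
	#define SPMACCESSR_EL1_P0_TRAP_W	UL(0b01)
	#define SPMACCESSR_EL1_P0_NOTRAP	UL(0b11)

with P1..P27 following the same pattern at their respective offsets.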
* [PATCH v3 10/42] arm64: sysreg: Add system instructions trapped by HFGITR2_EL2
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (8 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 09/42] arm64: sysreg: Add registers trapped by HDFG{R,W}TR2_EL2 Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 11/42] arm64: Remove duplicated sysreg encodings Marc Zyngier
` (32 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Add the new CMOs trapped by HFGITR2_EL2.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/sysreg.h | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index a943eac446938..8908eec48f313 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -117,6 +117,7 @@
#define SB_BARRIER_INSN __SYS_BARRIER_INSN(0, 7, 31)
+/* Data cache zero operations */
#define SYS_DC_ISW sys_insn(1, 0, 7, 6, 2)
#define SYS_DC_IGSW sys_insn(1, 0, 7, 6, 4)
#define SYS_DC_IGDSW sys_insn(1, 0, 7, 6, 6)
@@ -153,11 +154,13 @@
#define SYS_DC_CIGVAC sys_insn(1, 3, 7, 14, 3)
#define SYS_DC_CIGDVAC sys_insn(1, 3, 7, 14, 5)
-/* Data cache zero operations */
#define SYS_DC_ZVA sys_insn(1, 3, 7, 4, 1)
#define SYS_DC_GVA sys_insn(1, 3, 7, 4, 3)
#define SYS_DC_GZVA sys_insn(1, 3, 7, 4, 4)
+#define SYS_DC_CIVAPS sys_insn(1, 0, 7, 15, 1)
+#define SYS_DC_CIGDVAPS sys_insn(1, 0, 7, 15, 5)
+
/*
* Automatically generated definitions for system registers, the
* manual encodings below are in the process of being converted to
--
2.39.2
* [PATCH v3 11/42] arm64: Remove duplicated sysreg encodings
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (9 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 10/42] arm64: sysreg: Add system instructions trapped by HFGITR2_EL2 Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 12/42] arm64: tools: Resync sysreg.h Marc Zyngier
` (31 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
A bunch of sysregs are now generated from the sysreg file, so no
need to carry separate definitions.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/sysreg.h | 11 -----------
1 file changed, 11 deletions(-)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 8908eec48f313..690b6ebd118f4 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -535,7 +535,6 @@
#define SYS_VTCR_EL2 sys_reg(3, 4, 2, 1, 2)
#define SYS_VNCR_EL2 sys_reg(3, 4, 2, 2, 0)
-#define SYS_HAFGRTR_EL2 sys_reg(3, 4, 3, 1, 6)
#define SYS_SPSR_EL2 sys_reg(3, 4, 4, 0, 0)
#define SYS_ELR_EL2 sys_reg(3, 4, 4, 0, 1)
#define SYS_SP_EL1 sys_reg(3, 4, 4, 1, 0)
@@ -621,28 +620,18 @@
/* VHE encodings for architectural EL0/1 system registers */
#define SYS_BRBCR_EL12 sys_reg(2, 5, 9, 0, 0)
-#define SYS_SCTLR_EL12 sys_reg(3, 5, 1, 0, 0)
-#define SYS_CPACR_EL12 sys_reg(3, 5, 1, 0, 2)
-#define SYS_SCTLR2_EL12 sys_reg(3, 5, 1, 0, 3)
-#define SYS_ZCR_EL12 sys_reg(3, 5, 1, 2, 0)
-#define SYS_TRFCR_EL12 sys_reg(3, 5, 1, 2, 1)
-#define SYS_SMCR_EL12 sys_reg(3, 5, 1, 2, 6)
#define SYS_TTBR0_EL12 sys_reg(3, 5, 2, 0, 0)
#define SYS_TTBR1_EL12 sys_reg(3, 5, 2, 0, 1)
-#define SYS_TCR_EL12 sys_reg(3, 5, 2, 0, 2)
-#define SYS_TCR2_EL12 sys_reg(3, 5, 2, 0, 3)
#define SYS_SPSR_EL12 sys_reg(3, 5, 4, 0, 0)
#define SYS_ELR_EL12 sys_reg(3, 5, 4, 0, 1)
#define SYS_AFSR0_EL12 sys_reg(3, 5, 5, 1, 0)
#define SYS_AFSR1_EL12 sys_reg(3, 5, 5, 1, 1)
#define SYS_ESR_EL12 sys_reg(3, 5, 5, 2, 0)
#define SYS_TFSR_EL12 sys_reg(3, 5, 5, 6, 0)
-#define SYS_FAR_EL12 sys_reg(3, 5, 6, 0, 0)
#define SYS_PMSCR_EL12 sys_reg(3, 5, 9, 9, 0)
#define SYS_MAIR_EL12 sys_reg(3, 5, 10, 2, 0)
#define SYS_AMAIR_EL12 sys_reg(3, 5, 10, 3, 0)
#define SYS_VBAR_EL12 sys_reg(3, 5, 12, 0, 0)
-#define SYS_CONTEXTIDR_EL12 sys_reg(3, 5, 13, 0, 1)
#define SYS_SCXTNUM_EL12 sys_reg(3, 5, 13, 0, 7)
#define SYS_CNTKCTL_EL12 sys_reg(3, 5, 14, 1, 0)
#define SYS_CNTP_TVAL_EL02 sys_reg(3, 5, 14, 2, 0)
--
2.39.2
* [PATCH v3 12/42] arm64: tools: Resync sysreg.h
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (10 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 11/42] arm64: Remove duplicated sysreg encodings Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 13/42] arm64: Add syndrome information for trapped LD64B/ST64B{,V,V0} Marc Zyngier
` (30 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Perform a bulk resync of tools/arch/arm64/include/asm/sysreg.h.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
tools/arch/arm64/include/asm/sysreg.h | 65 ++++++++++++++++++++-------
1 file changed, 48 insertions(+), 17 deletions(-)
diff --git a/tools/arch/arm64/include/asm/sysreg.h b/tools/arch/arm64/include/asm/sysreg.h
index b6c5ece4fdee7..690b6ebd118f4 100644
--- a/tools/arch/arm64/include/asm/sysreg.h
+++ b/tools/arch/arm64/include/asm/sysreg.h
@@ -117,6 +117,7 @@
#define SB_BARRIER_INSN __SYS_BARRIER_INSN(0, 7, 31)
+/* Data cache zero operations */
#define SYS_DC_ISW sys_insn(1, 0, 7, 6, 2)
#define SYS_DC_IGSW sys_insn(1, 0, 7, 6, 4)
#define SYS_DC_IGDSW sys_insn(1, 0, 7, 6, 6)
@@ -153,11 +154,13 @@
#define SYS_DC_CIGVAC sys_insn(1, 3, 7, 14, 3)
#define SYS_DC_CIGDVAC sys_insn(1, 3, 7, 14, 5)
-/* Data cache zero operations */
#define SYS_DC_ZVA sys_insn(1, 3, 7, 4, 1)
#define SYS_DC_GVA sys_insn(1, 3, 7, 4, 3)
#define SYS_DC_GZVA sys_insn(1, 3, 7, 4, 4)
+#define SYS_DC_CIVAPS sys_insn(1, 0, 7, 15, 1)
+#define SYS_DC_CIGDVAPS sys_insn(1, 0, 7, 15, 5)
+
/*
* Automatically generated definitions for system registers, the
* manual encodings below are in the process of being converted to
@@ -475,6 +478,7 @@
#define SYS_CNTFRQ_EL0 sys_reg(3, 3, 14, 0, 0)
#define SYS_CNTPCT_EL0 sys_reg(3, 3, 14, 0, 1)
+#define SYS_CNTVCT_EL0 sys_reg(3, 3, 14, 0, 2)
#define SYS_CNTPCTSS_EL0 sys_reg(3, 3, 14, 0, 5)
#define SYS_CNTVCTSS_EL0 sys_reg(3, 3, 14, 0, 6)
@@ -482,23 +486,36 @@
#define SYS_CNTP_CTL_EL0 sys_reg(3, 3, 14, 2, 1)
#define SYS_CNTP_CVAL_EL0 sys_reg(3, 3, 14, 2, 2)
+#define SYS_CNTV_TVAL_EL0 sys_reg(3, 3, 14, 3, 0)
#define SYS_CNTV_CTL_EL0 sys_reg(3, 3, 14, 3, 1)
#define SYS_CNTV_CVAL_EL0 sys_reg(3, 3, 14, 3, 2)
#define SYS_AARCH32_CNTP_TVAL sys_reg(0, 0, 14, 2, 0)
#define SYS_AARCH32_CNTP_CTL sys_reg(0, 0, 14, 2, 1)
#define SYS_AARCH32_CNTPCT sys_reg(0, 0, 0, 14, 0)
+#define SYS_AARCH32_CNTVCT sys_reg(0, 1, 0, 14, 0)
#define SYS_AARCH32_CNTP_CVAL sys_reg(0, 2, 0, 14, 0)
#define SYS_AARCH32_CNTPCTSS sys_reg(0, 8, 0, 14, 0)
+#define SYS_AARCH32_CNTVCTSS sys_reg(0, 9, 0, 14, 0)
#define __PMEV_op2(n) ((n) & 0x7)
#define __CNTR_CRm(n) (0x8 | (((n) >> 3) & 0x3))
+#define SYS_PMEVCNTSVRn_EL1(n) sys_reg(2, 0, 14, __CNTR_CRm(n), __PMEV_op2(n))
#define SYS_PMEVCNTRn_EL0(n) sys_reg(3, 3, 14, __CNTR_CRm(n), __PMEV_op2(n))
#define __TYPER_CRm(n) (0xc | (((n) >> 3) & 0x3))
#define SYS_PMEVTYPERn_EL0(n) sys_reg(3, 3, 14, __TYPER_CRm(n), __PMEV_op2(n))
#define SYS_PMCCFILTR_EL0 sys_reg(3, 3, 14, 15, 7)
+#define SYS_SPMCGCRn_EL1(n) sys_reg(2, 0, 9, 13, ((n) & 1))
+
+#define __SPMEV_op2(n) ((n) & 0x7)
+#define __SPMEV_crm(p, n) ((((p) & 7) << 1) | (((n) >> 3) & 1))
+#define SYS_SPMEVCNTRn_EL0(n) sys_reg(2, 3, 14, __SPMEV_crm(0b000, n), __SPMEV_op2(n))
+#define SYS_SPMEVFILT2Rn_EL0(n) sys_reg(2, 3, 14, __SPMEV_crm(0b011, n), __SPMEV_op2(n))
+#define SYS_SPMEVFILTRn_EL0(n) sys_reg(2, 3, 14, __SPMEV_crm(0b010, n), __SPMEV_op2(n))
+#define SYS_SPMEVTYPERn_EL0(n) sys_reg(2, 3, 14, __SPMEV_crm(0b001, n), __SPMEV_op2(n))
+
#define SYS_VPIDR_EL2 sys_reg(3, 4, 0, 0, 0)
#define SYS_VMPIDR_EL2 sys_reg(3, 4, 0, 0, 5)
@@ -518,7 +535,6 @@
#define SYS_VTCR_EL2 sys_reg(3, 4, 2, 1, 2)
#define SYS_VNCR_EL2 sys_reg(3, 4, 2, 2, 0)
-#define SYS_HAFGRTR_EL2 sys_reg(3, 4, 3, 1, 6)
#define SYS_SPSR_EL2 sys_reg(3, 4, 4, 0, 0)
#define SYS_ELR_EL2 sys_reg(3, 4, 4, 0, 1)
#define SYS_SP_EL1 sys_reg(3, 4, 4, 1, 0)
@@ -604,28 +620,18 @@
/* VHE encodings for architectural EL0/1 system registers */
#define SYS_BRBCR_EL12 sys_reg(2, 5, 9, 0, 0)
-#define SYS_SCTLR_EL12 sys_reg(3, 5, 1, 0, 0)
-#define SYS_CPACR_EL12 sys_reg(3, 5, 1, 0, 2)
-#define SYS_SCTLR2_EL12 sys_reg(3, 5, 1, 0, 3)
-#define SYS_ZCR_EL12 sys_reg(3, 5, 1, 2, 0)
-#define SYS_TRFCR_EL12 sys_reg(3, 5, 1, 2, 1)
-#define SYS_SMCR_EL12 sys_reg(3, 5, 1, 2, 6)
#define SYS_TTBR0_EL12 sys_reg(3, 5, 2, 0, 0)
#define SYS_TTBR1_EL12 sys_reg(3, 5, 2, 0, 1)
-#define SYS_TCR_EL12 sys_reg(3, 5, 2, 0, 2)
-#define SYS_TCR2_EL12 sys_reg(3, 5, 2, 0, 3)
#define SYS_SPSR_EL12 sys_reg(3, 5, 4, 0, 0)
#define SYS_ELR_EL12 sys_reg(3, 5, 4, 0, 1)
#define SYS_AFSR0_EL12 sys_reg(3, 5, 5, 1, 0)
#define SYS_AFSR1_EL12 sys_reg(3, 5, 5, 1, 1)
#define SYS_ESR_EL12 sys_reg(3, 5, 5, 2, 0)
#define SYS_TFSR_EL12 sys_reg(3, 5, 5, 6, 0)
-#define SYS_FAR_EL12 sys_reg(3, 5, 6, 0, 0)
#define SYS_PMSCR_EL12 sys_reg(3, 5, 9, 9, 0)
#define SYS_MAIR_EL12 sys_reg(3, 5, 10, 2, 0)
#define SYS_AMAIR_EL12 sys_reg(3, 5, 10, 3, 0)
#define SYS_VBAR_EL12 sys_reg(3, 5, 12, 0, 0)
-#define SYS_CONTEXTIDR_EL12 sys_reg(3, 5, 13, 0, 1)
#define SYS_SCXTNUM_EL12 sys_reg(3, 5, 13, 0, 7)
#define SYS_CNTKCTL_EL12 sys_reg(3, 5, 14, 1, 0)
#define SYS_CNTP_TVAL_EL02 sys_reg(3, 5, 14, 2, 0)
@@ -1028,8 +1034,11 @@
#define PIE_RX UL(0xa)
#define PIE_RW UL(0xc)
#define PIE_RWX UL(0xe)
+#define PIE_MASK UL(0xf)
-#define PIRx_ELx_PERM(idx, perm) ((perm) << ((idx) * 4))
+#define PIRx_ELx_BITS_PER_IDX 4
+#define PIRx_ELx_PERM_SHIFT(idx) ((idx) * PIRx_ELx_BITS_PER_IDX)
+#define PIRx_ELx_PERM_PREP(idx, perm) (((perm) & PIE_MASK) << PIRx_ELx_PERM_SHIFT(idx))
/*
* Permission Overlay Extension (POE) permission encodings.
@@ -1040,12 +1049,34 @@
#define POE_RX UL(0x3)
#define POE_W UL(0x4)
#define POE_RW UL(0x5)
-#define POE_XW UL(0x6)
-#define POE_RXW UL(0x7)
+#define POE_WX UL(0x6)
+#define POE_RWX UL(0x7)
#define POE_MASK UL(0xf)
-/* Initial value for Permission Overlay Extension for EL0 */
-#define POR_EL0_INIT POE_RXW
+#define POR_ELx_BITS_PER_IDX 4
+#define POR_ELx_PERM_SHIFT(idx) ((idx) * POR_ELx_BITS_PER_IDX)
+#define POR_ELx_PERM_GET(idx, reg) (((reg) >> POR_ELx_PERM_SHIFT(idx)) & POE_MASK)
+#define POR_ELx_PERM_PREP(idx, perm) (((perm) & POE_MASK) << POR_ELx_PERM_SHIFT(idx))
+
+/*
+ * Definitions for Guarded Control Stack
+ */
+
+#define GCS_CAP_ADDR_MASK GENMASK(63, 12)
+#define GCS_CAP_ADDR_SHIFT 12
+#define GCS_CAP_ADDR_WIDTH 52
+#define GCS_CAP_ADDR(x) FIELD_GET(GCS_CAP_ADDR_MASK, x)
+
+#define GCS_CAP_TOKEN_MASK GENMASK(11, 0)
+#define GCS_CAP_TOKEN_SHIFT 0
+#define GCS_CAP_TOKEN_WIDTH 12
+#define GCS_CAP_TOKEN(x) FIELD_GET(GCS_CAP_TOKEN_MASK, x)
+
+#define GCS_CAP_VALID_TOKEN 0x1
+#define GCS_CAP_IN_PROGRESS_TOKEN 0x5
+
+#define GCS_CAP(x) ((((unsigned long)x) & GCS_CAP_ADDR_MASK) | \
+ GCS_CAP_VALID_TOKEN)
#define ARM64_FEATURE_FIELD_BITS 4
--
2.39.2
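The banked SPMU accessors added in this resync split the event counter index
across CRm and op2. Purely as a worked example of the macro arithmetic above
(illustrative, nothing beyond what the defines state):

	/*
	 * SYS_SPMEVCNTRn_EL0(10), worked by hand:
	 *   __SPMEV_op2(10)        = 10 & 0x7                  = 2
	 *   __SPMEV_crm(0b000, 10) = (0 << 1) | ((10 >> 3) & 1) = 1
	 */
	static_assert(SYS_SPMEVCNTRn_EL0(10) == sys_reg(2, 3, 14, 1, 2));

so counter 10 lands at op0=2, op1=3, CRn=14, CRm=1, op2=2, and the other
SYS_SPMEV*Rn_EL0() groups only differ by the prefix passed to __SPMEV_crm().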
* [PATCH v3 13/42] arm64: Add syndrome information for trapped LD64B/ST64B{,V,V0}
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (11 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 12/42] arm64: tools: Resync sysreg.h Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-05-01 10:17 ` Joey Gouly
2025-04-26 12:28 ` [PATCH v3 14/42] arm64: Add FEAT_FGT2 capability Marc Zyngier
` (29 subsequent siblings)
42 siblings, 1 reply; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Provide the architected EC and ISS values for all the FEAT_LS64*
instructions.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/esr.h | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index e4f77757937e6..a0ae66dd65da9 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -20,7 +20,8 @@
#define ESR_ELx_EC_FP_ASIMD UL(0x07)
#define ESR_ELx_EC_CP10_ID UL(0x08) /* EL2 only */
#define ESR_ELx_EC_PAC UL(0x09) /* EL2 and above */
-/* Unallocated EC: 0x0A - 0x0B */
+#define ESR_ELx_EC_OTHER UL(0x0A)
+/* Unallocated EC: 0x0B */
#define ESR_ELx_EC_CP14_64 UL(0x0C)
#define ESR_ELx_EC_BTI UL(0x0D)
#define ESR_ELx_EC_ILL UL(0x0E)
@@ -181,6 +182,11 @@
#define ESR_ELx_WFx_ISS_WFE (UL(1) << 0)
#define ESR_ELx_xVC_IMM_MASK ((UL(1) << 16) - 1)
+/* ISS definitions for LD64B/ST64B instructions */
+#define ESR_ELx_ISS_OTHER_ST64BV (0)
+#define ESR_ELx_ISS_OTHER_ST64BV0 (1)
+#define ESR_ELx_ISS_OTHER_LDST64B (2)
+
#define DISR_EL1_IDS (UL(1) << 24)
/*
* DISR_EL1 and ESR_ELx share the bottom 13 bits, but the RES0 bits may mean
--
2.39.2
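KVM only grows a consumer for these definitions later in the series (see the
handle_other() handler added in patch 17). As a stand-alone sketch
(illustrative, not code from the series), triaging such a trap boils down to
checking the EC and then switching on the ISS:

	/* Sketch: does this ESR describe a trapped LS64 instruction? */
	static bool is_ls64_trap(u64 esr)
	{
		if (ESR_ELx_EC(esr) != ESR_ELx_EC_OTHER)
			return false;

		switch (ESR_ELx_ISS(esr)) {
		case ESR_ELx_ISS_OTHER_ST64BV:
		case ESR_ELx_ISS_OTHER_ST64BV0:
		case ESR_ELx_ISS_OTHER_LDST64B:
			return true;
		default:
			return false;
		}
	}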
* [PATCH v3 14/42] arm64: Add FEAT_FGT2 capability
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (12 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 13/42] arm64: Add syndrome information for trapped LD64B/ST64B{,V,V0} Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 15/42] KVM: arm64: Tighten handling of unknown FGT groups Marc Zyngier
` (28 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
As we will eventually have to context-switch the FEAT_FGT2 registers
in KVM (something that has been completely ignored so far), add
a new cap that we will be able to check for.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kernel/cpufeature.c | 7 +++++++
arch/arm64/tools/cpucaps | 1 +
2 files changed, 8 insertions(+)
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9c4d6d552b25c..bb6058c7d144c 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2876,6 +2876,13 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.matches = has_cpuid_feature,
ARM64_CPUID_FIELDS(ID_AA64MMFR0_EL1, FGT, IMP)
},
+ {
+ .desc = "Fine Grained Traps 2",
+ .type = ARM64_CPUCAP_SYSTEM_FEATURE,
+ .capability = ARM64_HAS_FGT2,
+ .matches = has_cpuid_feature,
+ ARM64_CPUID_FIELDS(ID_AA64MMFR0_EL1, FGT, FGT2)
+ },
#ifdef CONFIG_ARM64_SME
{
.desc = "Scalable Matrix Extension",
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index 772c1b008e437..39b154d2198fb 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -28,6 +28,7 @@ HAS_EPAN
HAS_EVT
HAS_FPMR
HAS_FGT
+HAS_FGT2
HAS_FPSIMD
HAS_GCS
HAS_GENERIC_AUTH
--
2.39.2
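Consumers are expected to gate any FEAT_FGT2 register access on the new
capability, mirroring the existing ARM64_HAS_FGT checks visible in the switch
code later in the series. A minimal sketch (not taken from the series itself):

	static inline bool cpu_has_fgt2(void)
	{
		/* Only touch HFG{R,W}TR2_EL2 and friends when this is true */
		return cpus_have_final_cap(ARM64_HAS_FGT2);
	}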
* [PATCH v3 15/42] KVM: arm64: Tighten handling of unknown FGT groups
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (13 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 14/42] arm64: Add FEAT_FGT2 capability Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 16/42] KVM: arm64: Simplify handling of negative FGT bits Marc Zyngier
` (27 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
triage_sysreg_trap() assumes that it knows all the possible values
for FGT groups, which won't be the case as we start adding more
FGT registers (unless we add everything in one go, which is obviously
undesirable).
At the same time, it doesn't offer much in terms of debugging info
when things go wrong.
Turn the "__NR_FGT_GROUP_IDS__" case into a default, covering any
unhandled value, and give the kernel hacker a bit of a clue about
what's wrong (system register and full trap descriptor).
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/emulate-nested.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index efe1eb3f1bd07..1bcbddc88a9b7 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -2352,9 +2352,10 @@ bool triage_sysreg_trap(struct kvm_vcpu *vcpu, int *sr_index)
}
break;
- case __NR_FGT_GROUP_IDS__:
+ default:
/* Something is really wrong, bail out */
- WARN_ONCE(1, "__NR_FGT_GROUP_IDS__");
+ WARN_ONCE(1, "Bad FGT group (encoding %08x, config %016llx)\n",
+ sysreg, tc.val);
goto local;
}
--
2.39.2
* [PATCH v3 16/42] KVM: arm64: Simplify handling of negative FGT bits
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (14 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 15/42] KVM: arm64: Tighten handling of unknown FGT groups Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-05-01 10:43 ` Joey Gouly
2025-04-26 12:28 ` [PATCH v3 17/42] KVM: arm64: Handle trapping of FEAT_LS64* instructions Marc Zyngier
` (26 subsequent siblings)
42 siblings, 1 reply; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
check_fgt_bit() and triage_sysreg_trap() implement the same thing
twice for no good reason. We have to look up the FGT register twice,
as we don't communicate it. Similarly, we extract the register value
at the wrong spot.
Reorganise the code in a more logical way so that things are done
at the correct location, removing a lot of duplication.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/emulate-nested.c | 49 ++++++++-------------------------
1 file changed, 12 insertions(+), 37 deletions(-)
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 1bcbddc88a9b7..52a2d63a667c9 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -2215,11 +2215,11 @@ static u64 kvm_get_sysreg_res0(struct kvm *kvm, enum vcpu_sysreg sr)
return masks->mask[sr - __VNCR_START__].res0;
}
-static bool check_fgt_bit(struct kvm_vcpu *vcpu, bool is_read,
- u64 val, const union trap_config tc)
+static bool check_fgt_bit(struct kvm_vcpu *vcpu, enum vcpu_sysreg sr,
+ const union trap_config tc)
{
struct kvm *kvm = vcpu->kvm;
- enum vcpu_sysreg sr;
+ u64 val;
/*
* KVM doesn't know about any FGTs that apply to the host, and hopefully
@@ -2228,6 +2228,8 @@ static bool check_fgt_bit(struct kvm_vcpu *vcpu, bool is_read,
if (is_hyp_ctxt(vcpu))
return false;
+ val = __vcpu_sys_reg(vcpu, sr);
+
if (tc.pol)
return (val & BIT(tc.bit));
@@ -2242,38 +2244,17 @@ static bool check_fgt_bit(struct kvm_vcpu *vcpu, bool is_read,
if (val & BIT(tc.bit))
return false;
- switch ((enum fgt_group_id)tc.fgt) {
- case HFGRTR_GROUP:
- sr = is_read ? HFGRTR_EL2 : HFGWTR_EL2;
- break;
-
- case HDFGRTR_GROUP:
- sr = is_read ? HDFGRTR_EL2 : HDFGWTR_EL2;
- break;
-
- case HAFGRTR_GROUP:
- sr = HAFGRTR_EL2;
- break;
-
- case HFGITR_GROUP:
- sr = HFGITR_EL2;
- break;
-
- default:
- WARN_ONCE(1, "Unhandled FGT group");
- return false;
- }
-
return !(kvm_get_sysreg_res0(kvm, sr) & BIT(tc.bit));
}
bool triage_sysreg_trap(struct kvm_vcpu *vcpu, int *sr_index)
{
+ enum vcpu_sysreg fgtreg;
union trap_config tc;
enum trap_behaviour b;
bool is_read;
u32 sysreg;
- u64 esr, val;
+ u64 esr;
esr = kvm_vcpu_get_esr(vcpu);
sysreg = esr_sys64_to_sysreg(esr);
@@ -2320,25 +2301,19 @@ bool triage_sysreg_trap(struct kvm_vcpu *vcpu, int *sr_index)
break;
case HFGRTR_GROUP:
- if (is_read)
- val = __vcpu_sys_reg(vcpu, HFGRTR_EL2);
- else
- val = __vcpu_sys_reg(vcpu, HFGWTR_EL2);
+ fgtreg = is_read ? HFGRTR_EL2 : HFGWTR_EL2;
break;
case HDFGRTR_GROUP:
- if (is_read)
- val = __vcpu_sys_reg(vcpu, HDFGRTR_EL2);
- else
- val = __vcpu_sys_reg(vcpu, HDFGWTR_EL2);
+ fgtreg = is_read ? HDFGRTR_EL2 : HDFGWTR_EL2;
break;
case HAFGRTR_GROUP:
- val = __vcpu_sys_reg(vcpu, HAFGRTR_EL2);
+ fgtreg = HAFGRTR_EL2;
break;
case HFGITR_GROUP:
- val = __vcpu_sys_reg(vcpu, HFGITR_EL2);
+ fgtreg = HFGITR_EL2;
switch (tc.fgf) {
u64 tmp;
@@ -2359,7 +2334,7 @@ bool triage_sysreg_trap(struct kvm_vcpu *vcpu, int *sr_index)
goto local;
}
- if (tc.fgt != __NO_FGT_GROUP__ && check_fgt_bit(vcpu, is_read, val, tc))
+ if (tc.fgt != __NO_FGT_GROUP__ && check_fgt_bit(vcpu, fgtreg, tc))
goto inject;
b = compute_trap_behaviour(vcpu, tc);
--
2.39.2
* [PATCH v3 17/42] KVM: arm64: Handle trapping of FEAT_LS64* instructions
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (15 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 16/42] KVM: arm64: Simplify handling of negative FGT bits Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-29 13:08 ` Ben Horgan
2025-05-01 11:01 ` Joey Gouly
2025-04-26 12:28 ` [PATCH v3 18/42] KVM: arm64: Restrict ACCDATA_EL1 undef to FEAT_ST64_ACCDATA being disabled Marc Zyngier
` (25 subsequent siblings)
42 siblings, 2 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
We generally don't expect FEAT_LS64* instructions to trap, unless
they are trapped by a guest hypervisor.
Otherwise, this is just the guest playing tricks on us by using
an instruction that isn't advertised, which we handle with a well
deserved UNDEF.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/handle_exit.c | 56 ++++++++++++++++++++++++++++++++++++
1 file changed, 56 insertions(+)
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index b73dc26bc44b4..636c14ed2bb82 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -298,6 +298,61 @@ static int handle_svc(struct kvm_vcpu *vcpu)
return 1;
}
+static int handle_other(struct kvm_vcpu *vcpu)
+{
+ bool is_l2 = vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu);
+ u64 hcrx = __vcpu_sys_reg(vcpu, HCRX_EL2);
+ u64 esr = kvm_vcpu_get_esr(vcpu);
+ u64 iss = ESR_ELx_ISS(esr);
+ struct kvm *kvm = vcpu->kvm;
+ bool allowed, fwd = false;
+
+ /*
+ * We only trap for two reasons:
+ *
+ * - the feature is disabled, and the only outcome is to
+ * generate an UNDEF.
+ *
+ * - the feature is enabled, but a NV guest wants to trap the
+ * feature used by its L2 guest. We forward the exception in
+ * this case.
+ *
+ * What we don't expect is to end up here if the guest is
+ * expected to be able to directly use the feature, hence the
+ * WARN_ON below.
+ */
+ switch (iss) {
+ case ESR_ELx_ISS_OTHER_ST64BV:
+ allowed = kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_V);
+ if (is_l2)
+ fwd = !(hcrx & HCRX_EL2_EnASR);
+ break;
+ case ESR_ELx_ISS_OTHER_ST64BV0:
+ allowed = kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_ACCDATA);
+ if (is_l2)
+ fwd = !(hcrx & HCRX_EL2_EnAS0);
+ break;
+ case ESR_ELx_ISS_OTHER_LDST64B:
+ allowed = kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64);
+ if (is_l2)
+ fwd = !(hcrx & HCRX_EL2_EnALS);
+ break;
+ default:
+ /* Clearly, we're missing something. */
+ WARN_ON_ONCE(1);
+ allowed = false;
+ }
+
+ WARN_ON_ONCE(allowed && !fwd);
+
+ if (allowed && fwd)
+ kvm_inject_nested_sync(vcpu, esr);
+ else
+ kvm_inject_undefined(vcpu);
+
+ return 1;
+}
+
static exit_handle_fn arm_exit_handlers[] = {
[0 ... ESR_ELx_EC_MAX] = kvm_handle_unknown_ec,
[ESR_ELx_EC_WFx] = kvm_handle_wfx,
@@ -307,6 +362,7 @@ static exit_handle_fn arm_exit_handlers[] = {
[ESR_ELx_EC_CP14_LS] = kvm_handle_cp14_load_store,
[ESR_ELx_EC_CP10_ID] = kvm_handle_cp10_id,
[ESR_ELx_EC_CP14_64] = kvm_handle_cp14_64,
+ [ESR_ELx_EC_OTHER] = handle_other,
[ESR_ELx_EC_HVC32] = handle_hvc,
[ESR_ELx_EC_SMC32] = handle_smc,
[ESR_ELx_EC_HVC64] = handle_hvc,
--
2.39.2
* [PATCH v3 18/42] KVM: arm64: Restrict ACCDATA_EL1 undef to FEAT_ST64_ACCDATA being disabled
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (16 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 17/42] KVM: arm64: Handle trapping of FEAT_LS64* instructions Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-29 13:08 ` Ben Horgan
2025-04-26 12:28 ` [PATCH v3 19/42] KVM: arm64: Don't treat HCRX_EL2 as a FGT register Marc Zyngier
` (24 subsequent siblings)
42 siblings, 1 reply; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
We currently unconditionally make ACCDATA_EL1 accesses UNDEF.
As we are about to support it, restrict the UNDEF behaviour to cases
where FEAT_ST64_ACCDATA is not exposed to the guest.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/sys_regs.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 6e01b06bedcae..ce347ddb6fae0 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -5150,10 +5150,12 @@ void kvm_calculate_traps(struct kvm_vcpu *vcpu)
kvm->arch.fgu[HFGRTR_GROUP] = (HFGRTR_EL2_nAMAIR2_EL1 |
HFGRTR_EL2_nMAIR2_EL1 |
HFGRTR_EL2_nS2POR_EL1 |
- HFGRTR_EL2_nACCDATA_EL1 |
HFGRTR_EL2_nSMPRI_EL1_MASK |
HFGRTR_EL2_nTPIDR2_EL0_MASK);
+ if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_ACCDATA))
+ kvm->arch.fgu[HFGRTR_GROUP] |= HFGRTR_EL2_nACCDATA_EL1;
+
if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
kvm->arch.fgu[HFGITR_GROUP] |= (HFGITR_EL2_TLBIRVAALE1OS|
HFGITR_EL2_TLBIRVALE1OS |
--
2.39.2
* [PATCH v3 19/42] KVM: arm64: Don't treat HCRX_EL2 as a FGT register
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (17 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 18/42] KVM: arm64: Restrict ACCDATA_EL1 undef to FEAT_ST64_ACCDATA being disabled Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 20/42] KVM: arm64: Plug FEAT_GCS handling Marc Zyngier
` (23 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Treating HCRX_EL2 as yet another FGT register seems excessive, and
gets in the way of further improvements. It is actually simpler to
just be explicit about the masking, so just do that.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/hyp/include/hyp/switch.h | 9 +++------
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 3150e42d79341..027d05f308f75 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -261,12 +261,9 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
if (cpus_have_final_cap(ARM64_HAS_HCX)) {
u64 hcrx = vcpu->arch.hcrx_el2;
if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
- u64 clr = 0, set = 0;
-
- compute_clr_set(vcpu, HCRX_EL2, clr, set);
-
- hcrx |= set;
- hcrx &= ~clr;
+ u64 val = __vcpu_sys_reg(vcpu, HCRX_EL2);
+ hcrx |= val & __HCRX_EL2_MASK;
+ hcrx &= ~(~val & __HCRX_EL2_nMASK);
}
write_sysreg_s(hcrx, SYS_HCRX_EL2);
--
2.39.2
* [PATCH v3 20/42] KVM: arm64: Plug FEAT_GCS handling
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (18 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 19/42] KVM: arm64: Don't treat HCRX_EL2 as a FGT register Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 21/42] KVM: arm64: Compute FGT masks from KVM's own FGT tables Marc Zyngier
` (22 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
We don't seem to be handling the GCS-specific exception class.
Handle it by delivering an UNDEF to the guest, and populate the
relevant trap bits.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/handle_exit.c | 11 +++++++++++
arch/arm64/kvm/sys_regs.c | 8 ++++++++
2 files changed, 19 insertions(+)
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 636c14ed2bb82..eafbd2a243afd 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -298,6 +298,16 @@ static int handle_svc(struct kvm_vcpu *vcpu)
return 1;
}
+static int kvm_handle_gcs(struct kvm_vcpu *vcpu)
+{
+ /* We don't expect GCS, so treat it with contempt */
+ if (kvm_has_feat(vcpu->kvm, ID_AA64PFR1_EL1, GCS, IMP))
+ WARN_ON_ONCE(1);
+
+ kvm_inject_undefined(vcpu);
+ return 1;
+}
+
static int handle_other(struct kvm_vcpu *vcpu)
{
bool is_l2 = vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu);
@@ -380,6 +390,7 @@ static exit_handle_fn arm_exit_handlers[] = {
[ESR_ELx_EC_BRK64] = kvm_handle_guest_debug,
[ESR_ELx_EC_FP_ASIMD] = kvm_handle_fpasimd,
[ESR_ELx_EC_PAC] = kvm_handle_ptrauth,
+ [ESR_ELx_EC_GCS] = kvm_handle_gcs,
};
static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index ce347ddb6fae0..a9ecca4b2fa74 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -5209,6 +5209,14 @@ void kvm_calculate_traps(struct kvm_vcpu *vcpu)
HFGITR_EL2_nBRBIALL);
}
+ if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, GCS, IMP)) {
+ kvm->arch.fgu[HFGRTR_GROUP] |= (HFGRTR_EL2_nGCS_EL0 |
+ HFGRTR_EL2_nGCS_EL1);
+ kvm->arch.fgu[HFGITR_GROUP] |= (HFGITR_EL2_nGCSPUSHM_EL1 |
+ HFGITR_EL2_nGCSSTR_EL1 |
+ HFGITR_EL2_nGCSEPP);
+ }
+
set_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags);
out:
mutex_unlock(&kvm->arch.config_lock);
--
2.39.2
* [PATCH v3 21/42] KVM: arm64: Compute FGT masks from KVM's own FGT tables
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (19 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 20/42] KVM: arm64: Plug FEAT_GCS handling Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-05-01 11:32 ` Joey Gouly
2025-04-26 12:28 ` [PATCH v3 22/42] KVM: arm64: Add description of FGT bits leading to EC!=0x18 Marc Zyngier
` (21 subsequent siblings)
42 siblings, 1 reply; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
In the process of decoupling KVM's view of the FGT bits from the
wider architectural state, use KVM's own FGT tables to build
a synthetic view of what is actually known.
This allows for some checking along the way.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 14 ++++
arch/arm64/kvm/emulate-nested.c | 106 ++++++++++++++++++++++++++++++
2 files changed, 120 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 7a1ef5be7efb2..95fedd27f4bb8 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -607,6 +607,20 @@ struct kvm_sysreg_masks {
} mask[NR_SYS_REGS - __SANITISED_REG_START__];
};
+struct fgt_masks {
+ const char *str;
+ u64 mask;
+ u64 nmask;
+ u64 res0;
+};
+
+extern struct fgt_masks hfgrtr_masks;
+extern struct fgt_masks hfgwtr_masks;
+extern struct fgt_masks hfgitr_masks;
+extern struct fgt_masks hdfgrtr_masks;
+extern struct fgt_masks hdfgwtr_masks;
+extern struct fgt_masks hafgrtr_masks;
+
struct kvm_cpu_context {
struct user_pt_regs regs; /* sp = sp_el0 */
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 52a2d63a667c9..528b33fcfcfd6 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -2033,6 +2033,105 @@ static u32 encoding_next(u32 encoding)
return sys_reg(op0 + 1, 0, 0, 0, 0);
}
+#define FGT_MASKS(__n, __m) \
+ struct fgt_masks __n = { .str = #__m, .res0 = __m, }
+
+FGT_MASKS(hfgrtr_masks, HFGRTR_EL2_RES0);
+FGT_MASKS(hfgwtr_masks, HFGWTR_EL2_RES0);
+FGT_MASKS(hfgitr_masks, HFGITR_EL2_RES0);
+FGT_MASKS(hdfgrtr_masks, HDFGRTR_EL2_RES0);
+FGT_MASKS(hdfgwtr_masks, HDFGWTR_EL2_RES0);
+FGT_MASKS(hafgrtr_masks, HAFGRTR_EL2_RES0);
+
+static __init bool aggregate_fgt(union trap_config tc)
+{
+ struct fgt_masks *rmasks, *wmasks;
+
+ switch (tc.fgt) {
+ case HFGRTR_GROUP:
+ rmasks = &hfgrtr_masks;
+ wmasks = &hfgwtr_masks;
+ break;
+ case HDFGRTR_GROUP:
+ rmasks = &hdfgrtr_masks;
+ wmasks = &hdfgwtr_masks;
+ break;
+ case HAFGRTR_GROUP:
+ rmasks = &hafgrtr_masks;
+ wmasks = NULL;
+ break;
+ case HFGITR_GROUP:
+ rmasks = &hfgitr_masks;
+ wmasks = NULL;
+ break;
+ }
+
+ /*
+ * A bit can be reserved in either the R or W register, but
+ * not both.
+ */
+ if ((BIT(tc.bit) & rmasks->res0) &&
+ (!wmasks || (BIT(tc.bit) & wmasks->res0)))
+ return false;
+
+ if (tc.pol)
+ rmasks->mask |= BIT(tc.bit) & ~rmasks->res0;
+ else
+ rmasks->nmask |= BIT(tc.bit) & ~rmasks->res0;
+
+ if (wmasks) {
+ if (tc.pol)
+ wmasks->mask |= BIT(tc.bit) & ~wmasks->res0;
+ else
+ wmasks->nmask |= BIT(tc.bit) & ~wmasks->res0;
+ }
+
+ return true;
+}
+
+static __init int check_fgt_masks(struct fgt_masks *masks)
+{
+ unsigned long duplicate = masks->mask & masks->nmask;
+ u64 res0 = masks->res0;
+ int ret = 0;
+
+ if (duplicate) {
+ int i;
+
+ for_each_set_bit(i, &duplicate, 64) {
+ kvm_err("%s[%d] bit has both polarities\n",
+ masks->str, i);
+ }
+
+ ret = -EINVAL;
+ }
+
+ masks->res0 = ~(masks->mask | masks->nmask);
+ if (masks->res0 != res0)
+ kvm_info("Implicit %s = %016llx, expecting %016llx\n",
+ masks->str, masks->res0, res0);
+
+ return ret;
+}
+
+static __init int check_all_fgt_masks(int ret)
+{
+ static struct fgt_masks * const masks[] __initconst = {
+ &hfgrtr_masks,
+ &hfgwtr_masks,
+ &hfgitr_masks,
+ &hdfgrtr_masks,
+ &hdfgwtr_masks,
+ &hafgrtr_masks,
+ };
+ int err = 0;
+
+ for (int i = 0; i < ARRAY_SIZE(masks); i++)
+ err |= check_fgt_masks(masks[i]);
+
+ return ret ?: err;
+}
+
int __init populate_nv_trap_config(void)
{
int ret = 0;
@@ -2097,8 +2196,15 @@ int __init populate_nv_trap_config(void)
ret = xa_err(prev);
print_nv_trap_error(fgt, "Failed FGT insertion", ret);
}
+
+ if (!aggregate_fgt(tc)) {
+ ret = -EINVAL;
+ print_nv_trap_error(fgt, "FGT bit is reserved", ret);
+ }
}
+ ret = check_all_fgt_masks(ret);
+
kvm_info("nv: %ld fine grained trap handlers\n",
ARRAY_SIZE(encoding_to_fgt));
--
2.39.2
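The net effect is that, for each FGT register, the computed fields partition
the 64 bits: a bit is either positive-polarity, negative-polarity, or treated
as RES0. A restatement of the invariant that check_fgt_masks() above enforces
(illustrative only, not code from the series):

	static bool fgt_masks_consistent(const struct fgt_masks *m)
	{
		/* No bit may be registered with both polarities... */
		if (m->mask & m->nmask)
			return false;

		/* ...and anything with no known polarity ends up as RES0 */
		return m->res0 == ~(m->mask | m->nmask);
	}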
* [PATCH v3 22/42] KVM: arm64: Add description of FGT bits leading to EC!=0x18
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (20 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 21/42] KVM: arm64: Compute FGT masks from KVM's own FGT tables Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 23/42] KVM: arm64: Use computed masks as sanitisers for FGT registers Marc Zyngier
` (20 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
The current FGT tables are only concerned with the bits generating
ESR_ELx.EC==0x18. However, we want an exhaustive view of what KVM
really knows about.
So let's add another small table that provides that extra information.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/emulate-nested.c | 36 +++++++++++++++++++++++++++------
1 file changed, 30 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 528b33fcfcfd6..c30d970bf81cb 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -1279,16 +1279,21 @@ enum fg_filter_id {
__NR_FG_FILTER_IDS__
};
-#define SR_FGF(sr, g, b, p, f) \
- { \
- .encoding = sr, \
- .end = sr, \
- .tc = { \
+#define __FGT(g, b, p, f) \
+ { \
.fgt = g ## _GROUP, \
.bit = g ## _EL2_ ## b ## _SHIFT, \
.pol = p, \
.fgf = f, \
- }, \
+ }
+
+#define FGT(g, b, p) __FGT(g, b, p, __NO_FGF__)
+
+#define SR_FGF(sr, g, b, p, f) \
+ { \
+ .encoding = sr, \
+ .end = sr, \
+ .tc = __FGT(g, b, p, f), \
.line = __LINE__, \
}
@@ -1989,6 +1994,18 @@ static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
SR_FGT(SYS_AMEVCNTR0_EL0(0), HAFGRTR, AMEVCNTR00_EL0, 1),
};
+/*
+ * Additional FGTs that do not fire with ESR_EL2.EC==0x18. This table
+ * isn't used for exception routing, but only as a promise that the
+ * trap is handled somewhere else.
+ */
+static const union trap_config non_0x18_fgt[] __initconst = {
+ FGT(HFGITR, nGCSSTR_EL1, 0),
+ FGT(HFGITR, SVC_EL1, 1),
+ FGT(HFGITR, SVC_EL0, 1),
+ FGT(HFGITR, ERET, 1),
+};
+
static union trap_config get_trap_config(u32 sysreg)
{
return (union trap_config) {
@@ -2203,6 +2220,13 @@ int __init populate_nv_trap_config(void)
}
}
+ for (int i = 0; i < ARRAY_SIZE(non_0x18_fgt); i++) {
+ if (!aggregate_fgt(non_0x18_fgt[i])) {
+ ret = -EINVAL;
+ kvm_err("non_0x18_fgt[%d] is reserved\n", i);
+ }
+ }
+
ret = check_all_fgt_masks(ret);
kvm_info("nv: %ld fine grained trap handlers\n",
--
2.39.2
* [PATCH v3 23/42] KVM: arm64: Use computed masks as sanitisers for FGT registers
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (21 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 22/42] KVM: arm64: Add description of FGT bits leading to EC!=0x18 Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 24/42] KVM: arm64: Unconditionally configure fine-grain traps Marc Zyngier
` (19 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Now that we have computed RES0 bits, use them to sanitise the
guest view of FGT registers.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/nested.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 16f6129c70b59..479ffd25eea63 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -1137,8 +1137,8 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
res0 |= HFGRTR_EL2_nS2POR_EL1;
if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, AIE, IMP))
res0 |= (HFGRTR_EL2_nMAIR2_EL1 | HFGRTR_EL2_nAMAIR2_EL1);
- set_sysreg_masks(kvm, HFGRTR_EL2, res0 | __HFGRTR_EL2_RES0, res1);
- set_sysreg_masks(kvm, HFGWTR_EL2, res0 | __HFGWTR_EL2_RES0, res1);
+ set_sysreg_masks(kvm, HFGRTR_EL2, res0 | hfgrtr_masks.res0, res1);
+ set_sysreg_masks(kvm, HFGWTR_EL2, res0 | hfgwtr_masks.res0, res1);
/* HDFG[RW]TR_EL2 */
res0 = res1 = 0;
@@ -1176,7 +1176,7 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
HDFGRTR_EL2_nBRBDATA);
if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMSVer, V1P2))
res0 |= HDFGRTR_EL2_nPMSNEVFR_EL1;
- set_sysreg_masks(kvm, HDFGRTR_EL2, res0 | HDFGRTR_EL2_RES0, res1);
+ set_sysreg_masks(kvm, HDFGRTR_EL2, res0 | hdfgrtr_masks.res0, res1);
/* Reuse the bits from the read-side and add the write-specific stuff */
if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMUVer, IMP))
@@ -1185,10 +1185,10 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
res0 |= HDFGWTR_EL2_TRCOSLAR;
if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, TraceFilt, IMP))
res0 |= HDFGWTR_EL2_TRFCR_EL1;
- set_sysreg_masks(kvm, HFGWTR_EL2, res0 | HDFGWTR_EL2_RES0, res1);
+ set_sysreg_masks(kvm, HFGWTR_EL2, res0 | hdfgwtr_masks.res0, res1);
/* HFGITR_EL2 */
- res0 = HFGITR_EL2_RES0;
+ res0 = hfgitr_masks.res0;
res1 = HFGITR_EL2_RES1;
if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, DPB, DPB2))
res0 |= HFGITR_EL2_DCCVADP;
@@ -1222,7 +1222,7 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
set_sysreg_masks(kvm, HFGITR_EL2, res0, res1);
/* HAFGRTR_EL2 - not a lot to see here */
- res0 = HAFGRTR_EL2_RES0;
+ res0 = hafgrtr_masks.res0;
res1 = HAFGRTR_EL2_RES1;
if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, V1P1))
res0 |= ~(res0 | res1);
--
2.39.2
* [PATCH v3 24/42] KVM: arm64: Unconditionally configure fine-grain traps
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (22 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 23/42] KVM: arm64: Use computed masks as sanitisers for FGT registers Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-29 13:08 ` Ben Horgan
2025-04-26 12:28 ` [PATCH v3 25/42] KVM: arm64: Propagate FGT masks to the nVHE hypervisor Marc Zyngier
` (18 subsequent siblings)
42 siblings, 1 reply; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
From: Mark Rutland <mark.rutland@arm.com>
... otherwise we can inherit the host configuration if this differs from
the KVM configuration.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[maz: simplified a couple of things]
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/hyp/include/hyp/switch.h | 39 ++++++++++---------------
1 file changed, 15 insertions(+), 24 deletions(-)
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 027d05f308f75..925a3288bd5be 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -107,7 +107,8 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
#define update_fgt_traps_cs(hctxt, vcpu, kvm, reg, clr, set) \
do { \
- u64 c = 0, s = 0; \
+ u64 c = clr, s = set; \
+ u64 val; \
\
ctxt_sys_reg(hctxt, reg) = read_sysreg_s(SYS_ ## reg); \
if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) \
@@ -115,14 +116,10 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
\
compute_undef_clr_set(vcpu, kvm, reg, c, s); \
\
- s |= set; \
- c |= clr; \
- if (c || s) { \
- u64 val = __ ## reg ## _nMASK; \
- val |= s; \
- val &= ~c; \
- write_sysreg_s(val, SYS_ ## reg); \
- } \
+ val = __ ## reg ## _nMASK; \
+ val |= s; \
+ val &= ~c; \
+ write_sysreg_s(val, SYS_ ## reg); \
} while(0)
#define update_fgt_traps(hctxt, vcpu, kvm, reg) \
@@ -175,33 +172,27 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
update_fgt_traps(hctxt, vcpu, kvm, HAFGRTR_EL2);
}
-#define __deactivate_fgt(htcxt, vcpu, kvm, reg) \
+#define __deactivate_fgt(htcxt, vcpu, reg) \
do { \
- if ((vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) || \
- kvm->arch.fgu[reg_to_fgt_group_id(reg)]) \
- write_sysreg_s(ctxt_sys_reg(hctxt, reg), \
- SYS_ ## reg); \
+ write_sysreg_s(ctxt_sys_reg(hctxt, reg), \
+ SYS_ ## reg); \
} while(0)
static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu *vcpu)
{
struct kvm_cpu_context *hctxt = host_data_ptr(host_ctxt);
- struct kvm *kvm = kern_hyp_va(vcpu->kvm);
if (!cpus_have_final_cap(ARM64_HAS_FGT))
return;
- __deactivate_fgt(hctxt, vcpu, kvm, HFGRTR_EL2);
- if (cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38))
- write_sysreg_s(ctxt_sys_reg(hctxt, HFGWTR_EL2), SYS_HFGWTR_EL2);
- else
- __deactivate_fgt(hctxt, vcpu, kvm, HFGWTR_EL2);
- __deactivate_fgt(hctxt, vcpu, kvm, HFGITR_EL2);
- __deactivate_fgt(hctxt, vcpu, kvm, HDFGRTR_EL2);
- __deactivate_fgt(hctxt, vcpu, kvm, HDFGWTR_EL2);
+ __deactivate_fgt(hctxt, vcpu, HFGRTR_EL2);
+ __deactivate_fgt(hctxt, vcpu, HFGWTR_EL2);
+ __deactivate_fgt(hctxt, vcpu, HFGITR_EL2);
+ __deactivate_fgt(hctxt, vcpu, HDFGRTR_EL2);
+ __deactivate_fgt(hctxt, vcpu, HDFGWTR_EL2);
if (cpu_has_amu())
- __deactivate_fgt(hctxt, vcpu, kvm, HAFGRTR_EL2);
+ __deactivate_fgt(hctxt, vcpu, HAFGRTR_EL2);
}
static inline void __activate_traps_mpam(struct kvm_vcpu *vcpu)
--
2.39.2
* [PATCH v3 25/42] KVM: arm64: Propagate FGT masks to the nVHE hypervisor
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (23 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 24/42] KVM: arm64: Unconditionally configure fine-grain traps Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 26/42] KVM: arm64: Use computed FGT masks to setup FGT registers Marc Zyngier
` (17 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
The nVHE hypervisor needs to have access to its own view of the FGT
masks, which unfortunately results in a bit of data duplication.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 7 +++++++
arch/arm64/kvm/arm.c | 8 ++++++++
arch/arm64/kvm/hyp/nvhe/switch.c | 7 +++++++
3 files changed, 22 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 95fedd27f4bb8..9e5164fad0dbc 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -621,6 +621,13 @@ extern struct fgt_masks hdfgrtr_masks;
extern struct fgt_masks hdfgwtr_masks;
extern struct fgt_masks hafgrtr_masks;
+extern struct fgt_masks kvm_nvhe_sym(hfgrtr_masks);
+extern struct fgt_masks kvm_nvhe_sym(hfgwtr_masks);
+extern struct fgt_masks kvm_nvhe_sym(hfgitr_masks);
+extern struct fgt_masks kvm_nvhe_sym(hdfgrtr_masks);
+extern struct fgt_masks kvm_nvhe_sym(hdfgwtr_masks);
+extern struct fgt_masks kvm_nvhe_sym(hafgrtr_masks);
+
struct kvm_cpu_context {
struct user_pt_regs regs; /* sp = sp_el0 */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 68fec8c95feef..8951e8693ca7b 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -2450,6 +2450,14 @@ static void kvm_hyp_init_symbols(void)
kvm_nvhe_sym(__icache_flags) = __icache_flags;
kvm_nvhe_sym(kvm_arm_vmid_bits) = kvm_arm_vmid_bits;
+ /* Propagate the FGT state to the nVHE side */
+ kvm_nvhe_sym(hfgrtr_masks) = hfgrtr_masks;
+ kvm_nvhe_sym(hfgwtr_masks) = hfgwtr_masks;
+ kvm_nvhe_sym(hfgitr_masks) = hfgitr_masks;
+ kvm_nvhe_sym(hdfgrtr_masks) = hdfgrtr_masks;
+ kvm_nvhe_sym(hdfgwtr_masks) = hdfgwtr_masks;
+ kvm_nvhe_sym(hafgrtr_masks) = hafgrtr_masks;
+
/*
* Flush entire BSS since part of its data containing init symbols is read
* while the MMU is off.
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 7d2ba6ef02618..ae55d6d87e3d2 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -33,6 +33,13 @@ DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);
DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
+struct fgt_masks hfgrtr_masks;
+struct fgt_masks hfgwtr_masks;
+struct fgt_masks hfgitr_masks;
+struct fgt_masks hdfgrtr_masks;
+struct fgt_masks hdfgwtr_masks;
+struct fgt_masks hafgrtr_masks;
+
extern void kvm_nvhe_prepare_backtrace(unsigned long fp, unsigned long pc);
static void __activate_cptr_traps(struct kvm_vcpu *vcpu)
--
2.39.2
* [PATCH v3 26/42] KVM: arm64: Use computed FGT masks to setup FGT registers
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (24 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 25/42] KVM: arm64: Propagate FGT masks to the nVHE hypervisor Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 27/42] KVM: arm64: Remove hand-crafted masks for " Marc Zyngier
` (16 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Flip the hypervisor FGT configuration over to the computed FGT
masks.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/hyp/include/hyp/switch.h | 45 +++++++++++++++++++++----
1 file changed, 38 insertions(+), 7 deletions(-)
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 925a3288bd5be..e8645375499df 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -65,12 +65,41 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
}
}
+#define reg_to_fgt_masks(reg) \
+ ({ \
+ struct fgt_masks *m; \
+ switch(reg) { \
+ case HFGRTR_EL2: \
+ m = &hfgrtr_masks; \
+ break; \
+ case HFGWTR_EL2: \
+ m = &hfgwtr_masks; \
+ break; \
+ case HFGITR_EL2: \
+ m = &hfgitr_masks; \
+ break; \
+ case HDFGRTR_EL2: \
+ m = &hdfgrtr_masks; \
+ break; \
+ case HDFGWTR_EL2: \
+ m = &hdfgwtr_masks; \
+ break; \
+ case HAFGRTR_EL2: \
+ m = &hafgrtr_masks; \
+ break; \
+ default: \
+ BUILD_BUG_ON(1); \
+ } \
+ \
+ m; \
+ })
+
#define compute_clr_set(vcpu, reg, clr, set) \
do { \
- u64 hfg; \
- hfg = __vcpu_sys_reg(vcpu, reg) & ~__ ## reg ## _RES0; \
- set |= hfg & __ ## reg ## _MASK; \
- clr |= ~hfg & __ ## reg ## _nMASK; \
+ u64 hfg = __vcpu_sys_reg(vcpu, reg); \
+ struct fgt_masks *m = reg_to_fgt_masks(reg); \
+ set |= hfg & m->mask; \
+ clr |= ~hfg & m->nmask; \
} while(0)
#define reg_to_fgt_group_id(reg) \
@@ -101,12 +130,14 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
#define compute_undef_clr_set(vcpu, kvm, reg, clr, set) \
do { \
u64 hfg = kvm->arch.fgu[reg_to_fgt_group_id(reg)]; \
- set |= hfg & __ ## reg ## _MASK; \
- clr |= hfg & __ ## reg ## _nMASK; \
+ struct fgt_masks *m = reg_to_fgt_masks(reg); \
+ set |= hfg & m->mask; \
+ clr |= hfg & m->nmask; \
} while(0)
#define update_fgt_traps_cs(hctxt, vcpu, kvm, reg, clr, set) \
do { \
+ struct fgt_masks *m = reg_to_fgt_masks(reg); \
u64 c = clr, s = set; \
u64 val; \
\
@@ -116,7 +147,7 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
\
compute_undef_clr_set(vcpu, kvm, reg, c, s); \
\
- val = __ ## reg ## _nMASK; \
+ val = m->nmask; \
val |= s; \
val &= ~c; \
write_sysreg_s(val, SYS_ ## reg); \
--
2.39.2
* [PATCH v3 27/42] KVM: arm64: Remove hand-crafted masks for FGT registers
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (25 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 26/42] KVM: arm64: Use computed FGT masks to setup FGT registers Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 28/42] KVM: arm64: Use KVM-specific HCRX_EL2 RES0 mask Marc Zyngier
` (15 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
These masks are now useless, and can be removed.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_arm.h | 49 +------------------------
arch/arm64/kvm/hyp/include/hyp/switch.h | 19 ----------
2 files changed, 1 insertion(+), 67 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 43a630b940bfb..e7c73d16cd451 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -315,54 +315,7 @@
GENMASK(19, 18) | \
GENMASK(15, 0))
-/*
- * FGT register definitions
- *
- * RES0 and polarity masks as of DDI0487J.a, to be updated as needed.
- * We're not using the generated masks as they are usually ahead of
- * the published ARM ARM, which we use as a reference.
- *
- * Once we get to a point where the two describe the same thing, we'll
- * merge the definitions. One day.
- */
-#define __HFGRTR_EL2_RES0 HFGRTR_EL2_RES0
-#define __HFGRTR_EL2_MASK GENMASK(49, 0)
-#define __HFGRTR_EL2_nMASK ~(__HFGRTR_EL2_RES0 | __HFGRTR_EL2_MASK)
-
-/*
- * The HFGWTR bits are a subset of HFGRTR bits. To ensure we don't miss any
- * future additions, define __HFGWTR* macros relative to __HFGRTR* ones.
- */
-#define __HFGRTR_ONLY_MASK (BIT(46) | BIT(42) | BIT(40) | BIT(28) | \
- GENMASK(26, 25) | BIT(21) | BIT(18) | \
- GENMASK(15, 14) | GENMASK(10, 9) | BIT(2))
-#define __HFGWTR_EL2_RES0 HFGWTR_EL2_RES0
-#define __HFGWTR_EL2_MASK (__HFGRTR_EL2_MASK & ~__HFGRTR_ONLY_MASK)
-#define __HFGWTR_EL2_nMASK ~(__HFGWTR_EL2_RES0 | __HFGWTR_EL2_MASK)
-
-#define __HFGITR_EL2_RES0 HFGITR_EL2_RES0
-#define __HFGITR_EL2_MASK (BIT(62) | BIT(60) | GENMASK(54, 0))
-#define __HFGITR_EL2_nMASK ~(__HFGITR_EL2_RES0 | __HFGITR_EL2_MASK)
-
-#define __HDFGRTR_EL2_RES0 HDFGRTR_EL2_RES0
-#define __HDFGRTR_EL2_MASK (BIT(63) | GENMASK(58, 50) | GENMASK(48, 43) | \
- GENMASK(41, 40) | GENMASK(37, 22) | \
- GENMASK(19, 9) | GENMASK(7, 0))
-#define __HDFGRTR_EL2_nMASK ~(__HDFGRTR_EL2_RES0 | __HDFGRTR_EL2_MASK)
-
-#define __HDFGWTR_EL2_RES0 HDFGWTR_EL2_RES0
-#define __HDFGWTR_EL2_MASK (GENMASK(57, 52) | GENMASK(50, 48) | \
- GENMASK(46, 44) | GENMASK(42, 41) | \
- GENMASK(37, 35) | GENMASK(33, 31) | \
- GENMASK(29, 23) | GENMASK(21, 10) | \
- GENMASK(8, 7) | GENMASK(5, 0))
-#define __HDFGWTR_EL2_nMASK ~(__HDFGWTR_EL2_RES0 | __HDFGWTR_EL2_MASK)
-
-#define __HAFGRTR_EL2_RES0 HAFGRTR_EL2_RES0
-#define __HAFGRTR_EL2_MASK (GENMASK(49, 17) | GENMASK(4, 0))
-#define __HAFGRTR_EL2_nMASK ~(__HAFGRTR_EL2_RES0 | __HAFGRTR_EL2_MASK)
-
-/* Similar definitions for HCRX_EL2 */
+/* Polarity masks for HCRX_EL2 */
#define __HCRX_EL2_RES0 HCRX_EL2_RES0
#define __HCRX_EL2_MASK (BIT(6))
#define __HCRX_EL2_nMASK ~(__HCRX_EL2_RES0 | __HCRX_EL2_MASK)
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index e8645375499df..0d61ec3e907d4 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -156,17 +156,6 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
#define update_fgt_traps(hctxt, vcpu, kvm, reg) \
update_fgt_traps_cs(hctxt, vcpu, kvm, reg, 0, 0)
-/*
- * Validate the fine grain trap masks.
- * Check that the masks do not overlap and that all bits are accounted for.
- */
-#define CHECK_FGT_MASKS(reg) \
- do { \
- BUILD_BUG_ON((__ ## reg ## _MASK) & (__ ## reg ## _nMASK)); \
- BUILD_BUG_ON(~((__ ## reg ## _RES0) ^ (__ ## reg ## _MASK) ^ \
- (__ ## reg ## _nMASK))); \
- } while(0)
-
static inline bool cpu_has_amu(void)
{
u64 pfr0 = read_sysreg_s(SYS_ID_AA64PFR0_EL1);
@@ -180,14 +169,6 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
struct kvm_cpu_context *hctxt = host_data_ptr(host_ctxt);
struct kvm *kvm = kern_hyp_va(vcpu->kvm);
- CHECK_FGT_MASKS(HFGRTR_EL2);
- CHECK_FGT_MASKS(HFGWTR_EL2);
- CHECK_FGT_MASKS(HFGITR_EL2);
- CHECK_FGT_MASKS(HDFGRTR_EL2);
- CHECK_FGT_MASKS(HDFGWTR_EL2);
- CHECK_FGT_MASKS(HAFGRTR_EL2);
- CHECK_FGT_MASKS(HCRX_EL2);
-
if (!cpus_have_final_cap(ARM64_HAS_FGT))
return;
--
2.39.2
* [PATCH v3 28/42] KVM: arm64: Use KVM-specific HCRX_EL2 RES0 mask
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (26 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 27/42] KVM: arm64: Remove hand-crafted masks for " Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-05-01 13:33 ` Joey Gouly
2025-04-26 12:28 ` [PATCH v3 29/42] KVM: arm64: Handle PSB CSYNC traps Marc Zyngier
` (14 subsequent siblings)
42 siblings, 1 reply; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
We do not have a computed table for HCRX_EL2, so statically define
the bits we know about. A warning will fire if the architecture
grows bits that are not handled yet.
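To make the detection mechanism concrete, here is a minimal sketch (illustrative values and names only, not the real HCRX_EL2 layout; assumes <linux/bits.h>) of how a bit newly allocated by the architecture ends up being reported:
	/* bits KVM knows about, one polarity or the other */
	u64 known          = GENMASK_ULL(24, 0);
	u64 local_res0     = ~known;
	/* generated header after the architecture adds bit 25 */
	u64 generated_res0 = ~GENMASK_ULL(25, 0);

	if (local_res0 != generated_res0)	/* bit 25 is flagged */
		pr_info("unexpected bits %016llx\n",
			local_res0 ^ generated_res0);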
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_arm.h | 18 ++++++++++++++----
arch/arm64/kvm/emulate-nested.c | 5 +++++
arch/arm64/kvm/nested.c | 4 ++--
3 files changed, 21 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index e7c73d16cd451..52b3aeb19efc6 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -315,10 +315,20 @@
GENMASK(19, 18) | \
GENMASK(15, 0))
-/* Polarity masks for HCRX_EL2 */
-#define __HCRX_EL2_RES0 HCRX_EL2_RES0
-#define __HCRX_EL2_MASK (BIT(6))
-#define __HCRX_EL2_nMASK ~(__HCRX_EL2_RES0 | __HCRX_EL2_MASK)
+/*
+ * Polarity masks for HCRX_EL2, limited to the bits that we know about
+ * at this point in time. It doesn't mean that we actually *handle*
+ * them, but that at least those that are not advertised to a guest
+ * will be RES0 for that guest.
+ */
+#define __HCRX_EL2_MASK (BIT_ULL(6))
+#define __HCRX_EL2_nMASK (GENMASK_ULL(24, 14) | \
+ GENMASK_ULL(11, 7) | \
+ GENMASK_ULL(5, 0))
+#define __HCRX_EL2_RES0 ~(__HCRX_EL2_nMASK | __HCRX_EL2_MASK)
+#define __HCRX_EL2_RES1 ~(__HCRX_EL2_nMASK | \
+ __HCRX_EL2_MASK | \
+ __HCRX_EL2_RES0)
/* Hyp Prefetch Fault Address Register (HPFAR/HDFAR) */
#define HPFAR_MASK (~UL(0xf))
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index c30d970bf81cb..c581cf29bc59e 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -2157,6 +2157,7 @@ int __init populate_nv_trap_config(void)
BUILD_BUG_ON(__NR_CGT_GROUP_IDS__ > BIT(TC_CGT_BITS));
BUILD_BUG_ON(__NR_FGT_GROUP_IDS__ > BIT(TC_FGT_BITS));
BUILD_BUG_ON(__NR_FG_FILTER_IDS__ > BIT(TC_FGF_BITS));
+ BUILD_BUG_ON(__HCRX_EL2_MASK & __HCRX_EL2_nMASK);
for (int i = 0; i < ARRAY_SIZE(encoding_to_cgt); i++) {
const struct encoding_to_trap_config *cgt = &encoding_to_cgt[i];
@@ -2182,6 +2183,10 @@ int __init populate_nv_trap_config(void)
}
}
+ if (__HCRX_EL2_RES0 != HCRX_EL2_RES0)
+ kvm_info("Sanitised HCR_EL2_RES0 = %016llx, expecting %016llx\n",
+ __HCRX_EL2_RES0, HCRX_EL2_RES0);
+
kvm_info("nv: %ld coarse grained trap handlers\n",
ARRAY_SIZE(encoding_to_cgt));
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 479ffd25eea63..666df85230c9b 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -1058,8 +1058,8 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
set_sysreg_masks(kvm, HCR_EL2, res0, res1);
/* HCRX_EL2 */
- res0 = HCRX_EL2_RES0;
- res1 = HCRX_EL2_RES1;
+ res0 = __HCRX_EL2_RES0;
+ res1 = __HCRX_EL2_RES1;
if (!kvm_has_feat(kvm, ID_AA64ISAR3_EL1, PACM, TRIVIAL_IMP))
res0 |= HCRX_EL2_PACMEn;
if (!kvm_has_feat(kvm, ID_AA64PFR2_EL1, FPMR, IMP))
--
2.39.2
* [PATCH v3 29/42] KVM: arm64: Handle PSB CSYNC traps
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (27 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 28/42] KVM: arm64: Use KVM-specific HCRX_EL2 RES0 mask Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 30/42] KVM: arm64: Switch to table-driven FGU configuration Marc Zyngier
` (13 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
The architecture introduces a trap for PSB CSYNC that fits in
the same EC as LS64. Let's deal with it in a similar way to
LS64.
It's not that we expect this to be useful any time soon anyway.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/esr.h | 3 ++-
arch/arm64/kvm/emulate-nested.c | 1 +
arch/arm64/kvm/handle_exit.c | 5 +++++
arch/arm64/tools/sysreg | 2 +-
4 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index a0ae66dd65da9..ef5a14276ce15 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -182,10 +182,11 @@
#define ESR_ELx_WFx_ISS_WFE (UL(1) << 0)
#define ESR_ELx_xVC_IMM_MASK ((UL(1) << 16) - 1)
-/* ISS definitions for LD64B/ST64B instructions */
+/* ISS definitions for LD64B/ST64B/PSBCSYNC instructions */
#define ESR_ELx_ISS_OTHER_ST64BV (0)
#define ESR_ELx_ISS_OTHER_ST64BV0 (1)
#define ESR_ELx_ISS_OTHER_LDST64B (2)
+#define ESR_ELx_ISS_OTHER_PSBCSYNC (4)
#define DISR_EL1_IDS (UL(1) << 24)
/*
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index c581cf29bc59e..0b033d3a3d7a4 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -2000,6 +2000,7 @@ static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
* trap is handled somewhere else.
*/
static const union trap_config non_0x18_fgt[] __initconst = {
+ FGT(HFGITR, PSBCSYNC, 1),
FGT(HFGITR, nGCSSTR_EL1, 0),
FGT(HFGITR, SVC_EL1, 1),
FGT(HFGITR, SVC_EL0, 1),
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index eafbd2a243afd..2c07754c11a45 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -347,6 +347,11 @@ static int handle_other(struct kvm_vcpu *vcpu)
if (is_l2)
fwd = !(hcrx & HCRX_EL2_EnALS);
break;
+ case ESR_ELx_ISS_OTHER_PSBCSYNC:
+ allowed = kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMSVer, V1P5);
+ if (is_l2)
+ fwd = (__vcpu_sys_reg(vcpu, HFGITR_EL2) & HFGITR_EL2_PSBCSYNC);
+ break;
default:
/* Clearly, we're missing something. */
WARN_ON_ONCE(1);
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 5695b12b8b4b2..f1fdd31409df4 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -3404,7 +3404,7 @@ Field 0 AFSR0_EL1
EndSysreg
Sysreg HFGITR_EL2 3 4 1 1 6
-Res0 63
+Field 63 PSBCSYNC
Field 62 ATS1E1A
Res0 61
Field 60 COSPRCTX
--
2.39.2
* [PATCH v3 30/42] KVM: arm64: Switch to table-driven FGU configuration
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (28 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 29/42] KVM: arm64: Handle PSB CSYNC traps Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 31/42] KVM: arm64: Validate FGT register descriptions against RES0 masks Marc Zyngier
` (12 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Defining the FGU behaviour is extremely tedious. It relies on matching
each set of bits from FGT registers with an architectural feature, and
adding them to the FGU list if the corresponding feature isn't advertised
to the guest.
It is however relatively easy to dump most of that information from
the architecture JSON description, and use that to control the FGU bits.
Let's introduce a new set of tables describing the mapping between
FGT bits and features. Most of the time, this is only a lookup in
an idreg field, with a few more complex exceptions.
While this is obviously many more lines in a new file, this is
mostly generated, and is pretty easy to maintain.
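For readers new to the macro plumbing below, here is a rough illustration, based only on the macros added in this patch, of what a table entry expands to: a FEAT_* macro carrying an ID register/field/limit triplet selects the three-argument form and becomes an ID-register lookup, while passing a function selects the callback form.
	/* NEEDS_FEAT(HFGRTR_EL2_nS2POR_EL1, FEAT_S2POE) is roughly: */
	{
		.bits   = HFGRTR_EL2_nS2POR_EL1,
		.flags  = 0,
		.regidx = IDREG_IDX(SYS_ID_AA64MMFR3_EL1),
		.shift  = ID_AA64MMFR3_EL1_S2POE_SHIFT,
		.width  = ID_AA64MMFR3_EL1_S2POE_WIDTH,
		.sign   = ID_AA64MMFR3_EL1_S2POE_SIGNED,
		.lo_lim = ID_AA64MMFR3_EL1_S2POE_IMP,
	},

	/* while NEEDS_FEAT(..., feat_rasv1p1) sets CALL_FUNC and .match */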
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 2 +
arch/arm64/kvm/Makefile | 2 +-
arch/arm64/kvm/config.c | 589 ++++++++++++++++++++++++++++++
arch/arm64/kvm/sys_regs.c | 73 +---
4 files changed, 596 insertions(+), 70 deletions(-)
create mode 100644 arch/arm64/kvm/config.c
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9e5164fad0dbc..9386f15cdc252 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1610,4 +1610,6 @@ void kvm_set_vm_id_reg(struct kvm *kvm, u32 reg, u64 val);
#define kvm_has_s1poe(k) \
(kvm_has_feat((k), ID_AA64MMFR3_EL1, S1POE, IMP))
+void compute_fgu(struct kvm *kvm, enum fgt_group_id fgt);
+
#endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 209bc76263f10..7c329e01c557a 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -14,7 +14,7 @@ CFLAGS_sys_regs.o += -Wno-override-init
CFLAGS_handle_exit.o += -Wno-override-init
kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
- inject_fault.o va_layout.o handle_exit.o \
+ inject_fault.o va_layout.o handle_exit.o config.o \
guest.o debug.o reset.o sys_regs.o stacktrace.o \
vgic-sys-reg-v3.o fpsimd.o pkvm.o \
arch_timer.o trng.o vmid.o emulate-nested.o nested.o at.o \
diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
new file mode 100644
index 0000000000000..8813ecbb45763
--- /dev/null
+++ b/arch/arm64/kvm/config.c
@@ -0,0 +1,589 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2025 Google LLC
+ * Author: Marc Zyngier <maz@kernel.org>
+ */
+
+#include <linux/kvm_host.h>
+#include <asm/sysreg.h>
+
+struct reg_bits_to_feat_map {
+ u64 bits;
+
+#define NEVER_FGU BIT(0) /* Can trap, but never UNDEF */
+#define CALL_FUNC BIT(1) /* Needs to evaluate tons of crap */
+#define FIXED_VALUE BIT(2) /* RAZ/WI or RAO/WI in KVM */
+ unsigned long flags;
+
+ union {
+ struct {
+ u8 regidx;
+ u8 shift;
+ u8 width;
+ bool sign;
+ s8 lo_lim;
+ };
+ bool (*match)(struct kvm *);
+ bool (*fval)(struct kvm *, u64 *);
+ };
+};
+
+#define __NEEDS_FEAT_3(m, f, id, fld, lim) \
+ { \
+ .bits = (m), \
+ .flags = (f), \
+ .regidx = IDREG_IDX(SYS_ ## id), \
+ .shift = id ##_## fld ## _SHIFT, \
+ .width = id ##_## fld ## _WIDTH, \
+ .sign = id ##_## fld ## _SIGNED, \
+ .lo_lim = id ##_## fld ##_## lim \
+ }
+
+#define __NEEDS_FEAT_2(m, f, fun, dummy) \
+ { \
+ .bits = (m), \
+ .flags = (f) | CALL_FUNC, \
+ .fval = (fun), \
+ }
+
+#define __NEEDS_FEAT_1(m, f, fun) \
+ { \
+ .bits = (m), \
+ .flags = (f) | CALL_FUNC, \
+ .match = (fun), \
+ }
+
+#define NEEDS_FEAT_FLAG(m, f, ...) \
+ CONCATENATE(__NEEDS_FEAT_, COUNT_ARGS(__VA_ARGS__))(m, f, __VA_ARGS__)
+
+#define NEEDS_FEAT_FIXED(m, ...) \
+ NEEDS_FEAT_FLAG(m, FIXED_VALUE, __VA_ARGS__, 0)
+
+#define NEEDS_FEAT(m, ...) NEEDS_FEAT_FLAG(m, 0, __VA_ARGS__)
+
+#define FEAT_SPE ID_AA64DFR0_EL1, PMSVer, IMP
+#define FEAT_SPE_FnE ID_AA64DFR0_EL1, PMSVer, V1P2
+#define FEAT_BRBE ID_AA64DFR0_EL1, BRBE, IMP
+#define FEAT_TRC_SR ID_AA64DFR0_EL1, TraceVer, IMP
+#define FEAT_PMUv3 ID_AA64DFR0_EL1, PMUVer, IMP
+#define FEAT_TRBE ID_AA64DFR0_EL1, TraceBuffer, IMP
+#define FEAT_DoubleLock ID_AA64DFR0_EL1, DoubleLock, IMP
+#define FEAT_TRF ID_AA64DFR0_EL1, TraceFilt, IMP
+#define FEAT_AA64EL1 ID_AA64PFR0_EL1, EL1, IMP
+#define FEAT_AIE ID_AA64MMFR3_EL1, AIE, IMP
+#define FEAT_S2POE ID_AA64MMFR3_EL1, S2POE, IMP
+#define FEAT_S1POE ID_AA64MMFR3_EL1, S1POE, IMP
+#define FEAT_S1PIE ID_AA64MMFR3_EL1, S1PIE, IMP
+#define FEAT_THE ID_AA64PFR1_EL1, THE, IMP
+#define FEAT_SME ID_AA64PFR1_EL1, SME, IMP
+#define FEAT_GCS ID_AA64PFR1_EL1, GCS, IMP
+#define FEAT_LS64_ACCDATA ID_AA64ISAR1_EL1, LS64, LS64_ACCDATA
+#define FEAT_RAS ID_AA64PFR0_EL1, RAS, IMP
+#define FEAT_GICv3 ID_AA64PFR0_EL1, GIC, IMP
+#define FEAT_LOR ID_AA64MMFR1_EL1, LO, IMP
+#define FEAT_SPEv1p5 ID_AA64DFR0_EL1, PMSVer, V1P5
+#define FEAT_ATS1A ID_AA64ISAR2_EL1, ATS1A, IMP
+#define FEAT_SPECRES2 ID_AA64ISAR1_EL1, SPECRES, COSP_RCTX
+#define FEAT_SPECRES ID_AA64ISAR1_EL1, SPECRES, IMP
+#define FEAT_TLBIRANGE ID_AA64ISAR0_EL1, TLB, RANGE
+#define FEAT_TLBIOS ID_AA64ISAR0_EL1, TLB, OS
+#define FEAT_PAN2 ID_AA64MMFR1_EL1, PAN, PAN2
+#define FEAT_DPB2 ID_AA64ISAR1_EL1, DPB, DPB2
+#define FEAT_AMUv1 ID_AA64PFR0_EL1, AMU, IMP
+
+static bool feat_rasv1p1(struct kvm *kvm)
+{
+ return (kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, V1P1) ||
+ (kvm_has_feat_enum(kvm, ID_AA64PFR0_EL1, RAS, IMP) &&
+ kvm_has_feat(kvm, ID_AA64PFR1_EL1, RAS_frac, RASv1p1)));
+}
+
+static bool feat_csv2_2_csv2_1p2(struct kvm *kvm)
+{
+ return (kvm_has_feat(kvm, ID_AA64PFR0_EL1, CSV2, CSV2_2) ||
+ (kvm_has_feat(kvm, ID_AA64PFR1_EL1, CSV2_frac, CSV2_1p2) &&
+ kvm_has_feat_enum(kvm, ID_AA64PFR0_EL1, CSV2, IMP)));
+}
+
+static bool feat_pauth(struct kvm *kvm)
+{
+ return kvm_has_pauth(kvm, PAuth);
+}
+
+static const struct reg_bits_to_feat_map hfgrtr_feat_map[] = {
+ NEEDS_FEAT(HFGRTR_EL2_nAMAIR2_EL1 |
+ HFGRTR_EL2_nMAIR2_EL1,
+ FEAT_AIE),
+ NEEDS_FEAT(HFGRTR_EL2_nS2POR_EL1, FEAT_S2POE),
+ NEEDS_FEAT(HFGRTR_EL2_nPOR_EL1 |
+ HFGRTR_EL2_nPOR_EL0,
+ FEAT_S1POE),
+ NEEDS_FEAT(HFGRTR_EL2_nPIR_EL1 |
+ HFGRTR_EL2_nPIRE0_EL1,
+ FEAT_S1PIE),
+ NEEDS_FEAT(HFGRTR_EL2_nRCWMASK_EL1, FEAT_THE),
+ NEEDS_FEAT(HFGRTR_EL2_nTPIDR2_EL0 |
+ HFGRTR_EL2_nSMPRI_EL1,
+ FEAT_SME),
+ NEEDS_FEAT(HFGRTR_EL2_nGCS_EL1 |
+ HFGRTR_EL2_nGCS_EL0,
+ FEAT_GCS),
+ NEEDS_FEAT(HFGRTR_EL2_nACCDATA_EL1, FEAT_LS64_ACCDATA),
+ NEEDS_FEAT(HFGRTR_EL2_ERXADDR_EL1 |
+ HFGRTR_EL2_ERXMISCn_EL1 |
+ HFGRTR_EL2_ERXSTATUS_EL1 |
+ HFGRTR_EL2_ERXCTLR_EL1 |
+ HFGRTR_EL2_ERXFR_EL1 |
+ HFGRTR_EL2_ERRSELR_EL1 |
+ HFGRTR_EL2_ERRIDR_EL1,
+ FEAT_RAS),
+ NEEDS_FEAT(HFGRTR_EL2_ERXPFGCDN_EL1|
+ HFGRTR_EL2_ERXPFGCTL_EL1|
+ HFGRTR_EL2_ERXPFGF_EL1,
+ feat_rasv1p1),
+ NEEDS_FEAT(HFGRTR_EL2_ICC_IGRPENn_EL1, FEAT_GICv3),
+ NEEDS_FEAT(HFGRTR_EL2_SCXTNUM_EL0 |
+ HFGRTR_EL2_SCXTNUM_EL1,
+ feat_csv2_2_csv2_1p2),
+ NEEDS_FEAT(HFGRTR_EL2_LORSA_EL1 |
+ HFGRTR_EL2_LORN_EL1 |
+ HFGRTR_EL2_LORID_EL1 |
+ HFGRTR_EL2_LOREA_EL1 |
+ HFGRTR_EL2_LORC_EL1,
+ FEAT_LOR),
+ NEEDS_FEAT(HFGRTR_EL2_APIBKey |
+ HFGRTR_EL2_APIAKey |
+ HFGRTR_EL2_APGAKey |
+ HFGRTR_EL2_APDBKey |
+ HFGRTR_EL2_APDAKey,
+ feat_pauth),
+ NEEDS_FEAT(HFGRTR_EL2_VBAR_EL1 |
+ HFGRTR_EL2_TTBR1_EL1 |
+ HFGRTR_EL2_TTBR0_EL1 |
+ HFGRTR_EL2_TPIDR_EL0 |
+ HFGRTR_EL2_TPIDRRO_EL0 |
+ HFGRTR_EL2_TPIDR_EL1 |
+ HFGRTR_EL2_TCR_EL1 |
+ HFGRTR_EL2_SCTLR_EL1 |
+ HFGRTR_EL2_REVIDR_EL1 |
+ HFGRTR_EL2_PAR_EL1 |
+ HFGRTR_EL2_MPIDR_EL1 |
+ HFGRTR_EL2_MIDR_EL1 |
+ HFGRTR_EL2_MAIR_EL1 |
+ HFGRTR_EL2_ISR_EL1 |
+ HFGRTR_EL2_FAR_EL1 |
+ HFGRTR_EL2_ESR_EL1 |
+ HFGRTR_EL2_DCZID_EL0 |
+ HFGRTR_EL2_CTR_EL0 |
+ HFGRTR_EL2_CSSELR_EL1 |
+ HFGRTR_EL2_CPACR_EL1 |
+ HFGRTR_EL2_CONTEXTIDR_EL1 |
+ HFGRTR_EL2_CLIDR_EL1 |
+ HFGRTR_EL2_CCSIDR_EL1 |
+ HFGRTR_EL2_AMAIR_EL1 |
+ HFGRTR_EL2_AIDR_EL1 |
+ HFGRTR_EL2_AFSR1_EL1 |
+ HFGRTR_EL2_AFSR0_EL1,
+ FEAT_AA64EL1),
+};
+
+static const struct reg_bits_to_feat_map hfgwtr_feat_map[] = {
+ NEEDS_FEAT(HFGWTR_EL2_nAMAIR2_EL1 |
+ HFGWTR_EL2_nMAIR2_EL1,
+ FEAT_AIE),
+ NEEDS_FEAT(HFGWTR_EL2_nS2POR_EL1, FEAT_S2POE),
+ NEEDS_FEAT(HFGWTR_EL2_nPOR_EL1 |
+ HFGWTR_EL2_nPOR_EL0,
+ FEAT_S1POE),
+ NEEDS_FEAT(HFGWTR_EL2_nPIR_EL1 |
+ HFGWTR_EL2_nPIRE0_EL1,
+ FEAT_S1PIE),
+ NEEDS_FEAT(HFGWTR_EL2_nRCWMASK_EL1, FEAT_THE),
+ NEEDS_FEAT(HFGWTR_EL2_nTPIDR2_EL0 |
+ HFGWTR_EL2_nSMPRI_EL1,
+ FEAT_SME),
+ NEEDS_FEAT(HFGWTR_EL2_nGCS_EL1 |
+ HFGWTR_EL2_nGCS_EL0,
+ FEAT_GCS),
+ NEEDS_FEAT(HFGWTR_EL2_nACCDATA_EL1, FEAT_LS64_ACCDATA),
+ NEEDS_FEAT(HFGWTR_EL2_ERXADDR_EL1 |
+ HFGWTR_EL2_ERXMISCn_EL1 |
+ HFGWTR_EL2_ERXSTATUS_EL1 |
+ HFGWTR_EL2_ERXCTLR_EL1 |
+ HFGWTR_EL2_ERRSELR_EL1,
+ FEAT_RAS),
+ NEEDS_FEAT(HFGWTR_EL2_ERXPFGCDN_EL1 |
+ HFGWTR_EL2_ERXPFGCTL_EL1,
+ feat_rasv1p1),
+ NEEDS_FEAT(HFGWTR_EL2_ICC_IGRPENn_EL1, FEAT_GICv3),
+ NEEDS_FEAT(HFGWTR_EL2_SCXTNUM_EL0 |
+ HFGWTR_EL2_SCXTNUM_EL1,
+ feat_csv2_2_csv2_1p2),
+ NEEDS_FEAT(HFGWTR_EL2_LORSA_EL1 |
+ HFGWTR_EL2_LORN_EL1 |
+ HFGWTR_EL2_LOREA_EL1 |
+ HFGWTR_EL2_LORC_EL1,
+ FEAT_LOR),
+ NEEDS_FEAT(HFGWTR_EL2_APIBKey |
+ HFGWTR_EL2_APIAKey |
+ HFGWTR_EL2_APGAKey |
+ HFGWTR_EL2_APDBKey |
+ HFGWTR_EL2_APDAKey,
+ feat_pauth),
+ NEEDS_FEAT(HFGWTR_EL2_VBAR_EL1 |
+ HFGWTR_EL2_TTBR1_EL1 |
+ HFGWTR_EL2_TTBR0_EL1 |
+ HFGWTR_EL2_TPIDR_EL0 |
+ HFGWTR_EL2_TPIDRRO_EL0 |
+ HFGWTR_EL2_TPIDR_EL1 |
+ HFGWTR_EL2_TCR_EL1 |
+ HFGWTR_EL2_SCTLR_EL1 |
+ HFGWTR_EL2_PAR_EL1 |
+ HFGWTR_EL2_MAIR_EL1 |
+ HFGWTR_EL2_FAR_EL1 |
+ HFGWTR_EL2_ESR_EL1 |
+ HFGWTR_EL2_CSSELR_EL1 |
+ HFGWTR_EL2_CPACR_EL1 |
+ HFGWTR_EL2_CONTEXTIDR_EL1 |
+ HFGWTR_EL2_AMAIR_EL1 |
+ HFGWTR_EL2_AFSR1_EL1 |
+ HFGWTR_EL2_AFSR0_EL1,
+ FEAT_AA64EL1),
+};
+
+static const struct reg_bits_to_feat_map hdfgrtr_feat_map[] = {
+ NEEDS_FEAT(HDFGRTR_EL2_PMBIDR_EL1 |
+ HDFGRTR_EL2_PMSLATFR_EL1 |
+ HDFGRTR_EL2_PMSIRR_EL1 |
+ HDFGRTR_EL2_PMSIDR_EL1 |
+ HDFGRTR_EL2_PMSICR_EL1 |
+ HDFGRTR_EL2_PMSFCR_EL1 |
+ HDFGRTR_EL2_PMSEVFR_EL1 |
+ HDFGRTR_EL2_PMSCR_EL1 |
+ HDFGRTR_EL2_PMBSR_EL1 |
+ HDFGRTR_EL2_PMBPTR_EL1 |
+ HDFGRTR_EL2_PMBLIMITR_EL1,
+ FEAT_SPE),
+ NEEDS_FEAT(HDFGRTR_EL2_nPMSNEVFR_EL1, FEAT_SPE_FnE),
+ NEEDS_FEAT(HDFGRTR_EL2_nBRBDATA |
+ HDFGRTR_EL2_nBRBCTL |
+ HDFGRTR_EL2_nBRBIDR,
+ FEAT_BRBE),
+ NEEDS_FEAT(HDFGRTR_EL2_TRCVICTLR |
+ HDFGRTR_EL2_TRCSTATR |
+ HDFGRTR_EL2_TRCSSCSRn |
+ HDFGRTR_EL2_TRCSEQSTR |
+ HDFGRTR_EL2_TRCPRGCTLR |
+ HDFGRTR_EL2_TRCOSLSR |
+ HDFGRTR_EL2_TRCIMSPECn |
+ HDFGRTR_EL2_TRCID |
+ HDFGRTR_EL2_TRCCNTVRn |
+ HDFGRTR_EL2_TRCCLAIM |
+ HDFGRTR_EL2_TRCAUXCTLR |
+ HDFGRTR_EL2_TRCAUTHSTATUS |
+ HDFGRTR_EL2_TRC,
+ FEAT_TRC_SR),
+ NEEDS_FEAT(HDFGRTR_EL2_PMCEIDn_EL0 |
+ HDFGRTR_EL2_PMUSERENR_EL0 |
+ HDFGRTR_EL2_PMMIR_EL1 |
+ HDFGRTR_EL2_PMSELR_EL0 |
+ HDFGRTR_EL2_PMOVS |
+ HDFGRTR_EL2_PMINTEN |
+ HDFGRTR_EL2_PMCNTEN |
+ HDFGRTR_EL2_PMCCNTR_EL0 |
+ HDFGRTR_EL2_PMCCFILTR_EL0 |
+ HDFGRTR_EL2_PMEVTYPERn_EL0 |
+ HDFGRTR_EL2_PMEVCNTRn_EL0,
+ FEAT_PMUv3),
+ NEEDS_FEAT(HDFGRTR_EL2_TRBTRG_EL1 |
+ HDFGRTR_EL2_TRBSR_EL1 |
+ HDFGRTR_EL2_TRBPTR_EL1 |
+ HDFGRTR_EL2_TRBMAR_EL1 |
+ HDFGRTR_EL2_TRBLIMITR_EL1 |
+ HDFGRTR_EL2_TRBIDR_EL1 |
+ HDFGRTR_EL2_TRBBASER_EL1,
+ FEAT_TRBE),
+ NEEDS_FEAT_FLAG(HDFGRTR_EL2_OSDLR_EL1, NEVER_FGU,
+ FEAT_DoubleLock),
+ NEEDS_FEAT(HDFGRTR_EL2_OSECCR_EL1 |
+ HDFGRTR_EL2_OSLSR_EL1 |
+ HDFGRTR_EL2_DBGPRCR_EL1 |
+ HDFGRTR_EL2_DBGAUTHSTATUS_EL1|
+ HDFGRTR_EL2_DBGCLAIM |
+ HDFGRTR_EL2_MDSCR_EL1 |
+ HDFGRTR_EL2_DBGWVRn_EL1 |
+ HDFGRTR_EL2_DBGWCRn_EL1 |
+ HDFGRTR_EL2_DBGBVRn_EL1 |
+ HDFGRTR_EL2_DBGBCRn_EL1,
+ FEAT_AA64EL1)
+};
+
+static const struct reg_bits_to_feat_map hdfgwtr_feat_map[] = {
+ NEEDS_FEAT(HDFGWTR_EL2_PMSLATFR_EL1 |
+ HDFGWTR_EL2_PMSIRR_EL1 |
+ HDFGWTR_EL2_PMSICR_EL1 |
+ HDFGWTR_EL2_PMSFCR_EL1 |
+ HDFGWTR_EL2_PMSEVFR_EL1 |
+ HDFGWTR_EL2_PMSCR_EL1 |
+ HDFGWTR_EL2_PMBSR_EL1 |
+ HDFGWTR_EL2_PMBPTR_EL1 |
+ HDFGWTR_EL2_PMBLIMITR_EL1,
+ FEAT_SPE),
+ NEEDS_FEAT(HDFGWTR_EL2_nPMSNEVFR_EL1, FEAT_SPE_FnE),
+ NEEDS_FEAT(HDFGWTR_EL2_nBRBDATA |
+ HDFGWTR_EL2_nBRBCTL,
+ FEAT_BRBE),
+ NEEDS_FEAT(HDFGWTR_EL2_TRCVICTLR |
+ HDFGWTR_EL2_TRCSSCSRn |
+ HDFGWTR_EL2_TRCSEQSTR |
+ HDFGWTR_EL2_TRCPRGCTLR |
+ HDFGWTR_EL2_TRCOSLAR |
+ HDFGWTR_EL2_TRCIMSPECn |
+ HDFGWTR_EL2_TRCCNTVRn |
+ HDFGWTR_EL2_TRCCLAIM |
+ HDFGWTR_EL2_TRCAUXCTLR |
+ HDFGWTR_EL2_TRC,
+ FEAT_TRC_SR),
+ NEEDS_FEAT(HDFGWTR_EL2_PMUSERENR_EL0 |
+ HDFGWTR_EL2_PMCR_EL0 |
+ HDFGWTR_EL2_PMSWINC_EL0 |
+ HDFGWTR_EL2_PMSELR_EL0 |
+ HDFGWTR_EL2_PMOVS |
+ HDFGWTR_EL2_PMINTEN |
+ HDFGWTR_EL2_PMCNTEN |
+ HDFGWTR_EL2_PMCCNTR_EL0 |
+ HDFGWTR_EL2_PMCCFILTR_EL0 |
+ HDFGWTR_EL2_PMEVTYPERn_EL0 |
+ HDFGWTR_EL2_PMEVCNTRn_EL0,
+ FEAT_PMUv3),
+ NEEDS_FEAT(HDFGWTR_EL2_TRBTRG_EL1 |
+ HDFGWTR_EL2_TRBSR_EL1 |
+ HDFGWTR_EL2_TRBPTR_EL1 |
+ HDFGWTR_EL2_TRBMAR_EL1 |
+ HDFGWTR_EL2_TRBLIMITR_EL1 |
+ HDFGWTR_EL2_TRBBASER_EL1,
+ FEAT_TRBE),
+ NEEDS_FEAT_FLAG(HDFGWTR_EL2_OSDLR_EL1,
+ NEVER_FGU, FEAT_DoubleLock),
+ NEEDS_FEAT(HDFGWTR_EL2_OSECCR_EL1 |
+ HDFGWTR_EL2_OSLAR_EL1 |
+ HDFGWTR_EL2_DBGPRCR_EL1 |
+ HDFGWTR_EL2_DBGCLAIM |
+ HDFGWTR_EL2_MDSCR_EL1 |
+ HDFGWTR_EL2_DBGWVRn_EL1 |
+ HDFGWTR_EL2_DBGWCRn_EL1 |
+ HDFGWTR_EL2_DBGBVRn_EL1 |
+ HDFGWTR_EL2_DBGBCRn_EL1,
+ FEAT_AA64EL1),
+ NEEDS_FEAT(HDFGWTR_EL2_TRFCR_EL1, FEAT_TRF),
+};
+
+
+static const struct reg_bits_to_feat_map hfgitr_feat_map[] = {
+ NEEDS_FEAT(HFGITR_EL2_PSBCSYNC, FEAT_SPEv1p5),
+ NEEDS_FEAT(HFGITR_EL2_ATS1E1A, FEAT_ATS1A),
+ NEEDS_FEAT(HFGITR_EL2_COSPRCTX, FEAT_SPECRES2),
+ NEEDS_FEAT(HFGITR_EL2_nGCSEPP |
+ HFGITR_EL2_nGCSSTR_EL1 |
+ HFGITR_EL2_nGCSPUSHM_EL1,
+ FEAT_GCS),
+ NEEDS_FEAT(HFGITR_EL2_nBRBIALL |
+ HFGITR_EL2_nBRBINJ,
+ FEAT_BRBE),
+ NEEDS_FEAT(HFGITR_EL2_CPPRCTX |
+ HFGITR_EL2_DVPRCTX |
+ HFGITR_EL2_CFPRCTX,
+ FEAT_SPECRES),
+ NEEDS_FEAT(HFGITR_EL2_TLBIRVAALE1 |
+ HFGITR_EL2_TLBIRVALE1 |
+ HFGITR_EL2_TLBIRVAAE1 |
+ HFGITR_EL2_TLBIRVAE1 |
+ HFGITR_EL2_TLBIRVAALE1IS |
+ HFGITR_EL2_TLBIRVALE1IS |
+ HFGITR_EL2_TLBIRVAAE1IS |
+ HFGITR_EL2_TLBIRVAE1IS |
+ HFGITR_EL2_TLBIRVAALE1OS |
+ HFGITR_EL2_TLBIRVALE1OS |
+ HFGITR_EL2_TLBIRVAAE1OS |
+ HFGITR_EL2_TLBIRVAE1OS,
+ FEAT_TLBIRANGE),
+ NEEDS_FEAT(HFGITR_EL2_TLBIVAALE1OS |
+ HFGITR_EL2_TLBIVALE1OS |
+ HFGITR_EL2_TLBIVAAE1OS |
+ HFGITR_EL2_TLBIASIDE1OS |
+ HFGITR_EL2_TLBIVAE1OS |
+ HFGITR_EL2_TLBIVMALLE1OS,
+ FEAT_TLBIOS),
+ NEEDS_FEAT(HFGITR_EL2_ATS1E1WP |
+ HFGITR_EL2_ATS1E1RP,
+ FEAT_PAN2),
+ NEEDS_FEAT(HFGITR_EL2_DCCVADP, FEAT_DPB2),
+ NEEDS_FEAT(HFGITR_EL2_DCCVAC |
+ HFGITR_EL2_SVC_EL1 |
+ HFGITR_EL2_SVC_EL0 |
+ HFGITR_EL2_ERET |
+ HFGITR_EL2_TLBIVAALE1 |
+ HFGITR_EL2_TLBIVALE1 |
+ HFGITR_EL2_TLBIVAAE1 |
+ HFGITR_EL2_TLBIASIDE1 |
+ HFGITR_EL2_TLBIVAE1 |
+ HFGITR_EL2_TLBIVMALLE1 |
+ HFGITR_EL2_TLBIVAALE1IS |
+ HFGITR_EL2_TLBIVALE1IS |
+ HFGITR_EL2_TLBIVAAE1IS |
+ HFGITR_EL2_TLBIASIDE1IS |
+ HFGITR_EL2_TLBIVAE1IS |
+ HFGITR_EL2_TLBIVMALLE1IS |
+ HFGITR_EL2_ATS1E0W |
+ HFGITR_EL2_ATS1E0R |
+ HFGITR_EL2_ATS1E1W |
+ HFGITR_EL2_ATS1E1R |
+ HFGITR_EL2_DCZVA |
+ HFGITR_EL2_DCCIVAC |
+ HFGITR_EL2_DCCVAP |
+ HFGITR_EL2_DCCVAU |
+ HFGITR_EL2_DCCISW |
+ HFGITR_EL2_DCCSW |
+ HFGITR_EL2_DCISW |
+ HFGITR_EL2_DCIVAC |
+ HFGITR_EL2_ICIVAU |
+ HFGITR_EL2_ICIALLU |
+ HFGITR_EL2_ICIALLUIS,
+ FEAT_AA64EL1),
+};
+
+static const struct reg_bits_to_feat_map hafgrtr_feat_map[] = {
+ NEEDS_FEAT(HAFGRTR_EL2_AMEVTYPER115_EL0 |
+ HAFGRTR_EL2_AMEVTYPER114_EL0 |
+ HAFGRTR_EL2_AMEVTYPER113_EL0 |
+ HAFGRTR_EL2_AMEVTYPER112_EL0 |
+ HAFGRTR_EL2_AMEVTYPER111_EL0 |
+ HAFGRTR_EL2_AMEVTYPER110_EL0 |
+ HAFGRTR_EL2_AMEVTYPER19_EL0 |
+ HAFGRTR_EL2_AMEVTYPER18_EL0 |
+ HAFGRTR_EL2_AMEVTYPER17_EL0 |
+ HAFGRTR_EL2_AMEVTYPER16_EL0 |
+ HAFGRTR_EL2_AMEVTYPER15_EL0 |
+ HAFGRTR_EL2_AMEVTYPER14_EL0 |
+ HAFGRTR_EL2_AMEVTYPER13_EL0 |
+ HAFGRTR_EL2_AMEVTYPER12_EL0 |
+ HAFGRTR_EL2_AMEVTYPER11_EL0 |
+ HAFGRTR_EL2_AMEVTYPER10_EL0 |
+ HAFGRTR_EL2_AMEVCNTR115_EL0 |
+ HAFGRTR_EL2_AMEVCNTR114_EL0 |
+ HAFGRTR_EL2_AMEVCNTR113_EL0 |
+ HAFGRTR_EL2_AMEVCNTR112_EL0 |
+ HAFGRTR_EL2_AMEVCNTR111_EL0 |
+ HAFGRTR_EL2_AMEVCNTR110_EL0 |
+ HAFGRTR_EL2_AMEVCNTR19_EL0 |
+ HAFGRTR_EL2_AMEVCNTR18_EL0 |
+ HAFGRTR_EL2_AMEVCNTR17_EL0 |
+ HAFGRTR_EL2_AMEVCNTR16_EL0 |
+ HAFGRTR_EL2_AMEVCNTR15_EL0 |
+ HAFGRTR_EL2_AMEVCNTR14_EL0 |
+ HAFGRTR_EL2_AMEVCNTR13_EL0 |
+ HAFGRTR_EL2_AMEVCNTR12_EL0 |
+ HAFGRTR_EL2_AMEVCNTR11_EL0 |
+ HAFGRTR_EL2_AMEVCNTR10_EL0 |
+ HAFGRTR_EL2_AMCNTEN1 |
+ HAFGRTR_EL2_AMCNTEN0 |
+ HAFGRTR_EL2_AMEVCNTR03_EL0 |
+ HAFGRTR_EL2_AMEVCNTR02_EL0 |
+ HAFGRTR_EL2_AMEVCNTR01_EL0 |
+ HAFGRTR_EL2_AMEVCNTR00_EL0,
+ FEAT_AMUv1),
+};
+
+static bool idreg_feat_match(struct kvm *kvm, const struct reg_bits_to_feat_map *map)
+{
+ u64 regval = kvm->arch.id_regs[map->regidx];
+ u64 regfld = (regval >> map->shift) & GENMASK(map->width - 1, 0);
+
+ if (map->sign) {
+ s64 sfld = sign_extend64(regfld, map->width - 1);
+ s64 slim = sign_extend64(map->lo_lim, map->width - 1);
+ return sfld >= slim;
+ } else {
+ return regfld >= map->lo_lim;
+ }
+}
+
+static u64 __compute_fixed_bits(struct kvm *kvm,
+ const struct reg_bits_to_feat_map *map,
+ int map_size,
+ u64 *fixed_bits,
+ unsigned long require,
+ unsigned long exclude)
+{
+ u64 val = 0;
+
+ for (int i = 0; i < map_size; i++) {
+ bool match;
+
+ if ((map[i].flags & require) != require)
+ continue;
+
+ if (map[i].flags & exclude)
+ continue;
+
+ if (map[i].flags & CALL_FUNC)
+ match = (map[i].flags & FIXED_VALUE) ?
+ map[i].fval(kvm, fixed_bits) :
+ map[i].match(kvm);
+ else
+ match = idreg_feat_match(kvm, &map[i]);
+
+ if (!match || (map[i].flags & FIXED_VALUE))
+ val |= map[i].bits;
+ }
+
+ return val;
+}
+
+static u64 compute_res0_bits(struct kvm *kvm,
+ const struct reg_bits_to_feat_map *map,
+ int map_size,
+ unsigned long require,
+ unsigned long exclude)
+{
+ return __compute_fixed_bits(kvm, map, map_size, NULL,
+ require, exclude | FIXED_VALUE);
+}
+
+void compute_fgu(struct kvm *kvm, enum fgt_group_id fgt)
+{
+ u64 val = 0;
+
+ switch (fgt) {
+ case HFGRTR_GROUP:
+ val |= compute_res0_bits(kvm, hfgrtr_feat_map,
+ ARRAY_SIZE(hfgrtr_feat_map),
+ 0, NEVER_FGU);
+ val |= compute_res0_bits(kvm, hfgwtr_feat_map,
+ ARRAY_SIZE(hfgwtr_feat_map),
+ 0, NEVER_FGU);
+ break;
+ case HFGITR_GROUP:
+ val |= compute_res0_bits(kvm, hfgitr_feat_map,
+ ARRAY_SIZE(hfgitr_feat_map),
+ 0, NEVER_FGU);
+ break;
+ case HDFGRTR_GROUP:
+ val |= compute_res0_bits(kvm, hdfgrtr_feat_map,
+ ARRAY_SIZE(hdfgrtr_feat_map),
+ 0, NEVER_FGU);
+ val |= compute_res0_bits(kvm, hdfgwtr_feat_map,
+ ARRAY_SIZE(hdfgwtr_feat_map),
+ 0, NEVER_FGU);
+ break;
+ case HAFGRTR_GROUP:
+ val |= compute_res0_bits(kvm, hafgrtr_feat_map,
+ ARRAY_SIZE(hafgrtr_feat_map),
+ 0, NEVER_FGU);
+ break;
+ default:
+ BUG();
+ }
+
+ kvm->arch.fgu[fgt] = val;
+}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index a9ecca4b2fa74..b3e53a899c1fe 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -5147,75 +5147,10 @@ void kvm_calculate_traps(struct kvm_vcpu *vcpu)
if (test_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags))
goto out;
- kvm->arch.fgu[HFGRTR_GROUP] = (HFGRTR_EL2_nAMAIR2_EL1 |
- HFGRTR_EL2_nMAIR2_EL1 |
- HFGRTR_EL2_nS2POR_EL1 |
- HFGRTR_EL2_nSMPRI_EL1_MASK |
- HFGRTR_EL2_nTPIDR2_EL0_MASK);
-
- if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_ACCDATA))
- kvm->arch.fgu[HFGRTR_GROUP] |= HFGRTR_EL2_nACCDATA_EL1;
-
- if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
- kvm->arch.fgu[HFGITR_GROUP] |= (HFGITR_EL2_TLBIRVAALE1OS|
- HFGITR_EL2_TLBIRVALE1OS |
- HFGITR_EL2_TLBIRVAAE1OS |
- HFGITR_EL2_TLBIRVAE1OS |
- HFGITR_EL2_TLBIVAALE1OS |
- HFGITR_EL2_TLBIVALE1OS |
- HFGITR_EL2_TLBIVAAE1OS |
- HFGITR_EL2_TLBIASIDE1OS |
- HFGITR_EL2_TLBIVAE1OS |
- HFGITR_EL2_TLBIVMALLE1OS);
-
- if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, RANGE))
- kvm->arch.fgu[HFGITR_GROUP] |= (HFGITR_EL2_TLBIRVAALE1 |
- HFGITR_EL2_TLBIRVALE1 |
- HFGITR_EL2_TLBIRVAAE1 |
- HFGITR_EL2_TLBIRVAE1 |
- HFGITR_EL2_TLBIRVAALE1IS|
- HFGITR_EL2_TLBIRVALE1IS |
- HFGITR_EL2_TLBIRVAAE1IS |
- HFGITR_EL2_TLBIRVAE1IS |
- HFGITR_EL2_TLBIRVAALE1OS|
- HFGITR_EL2_TLBIRVALE1OS |
- HFGITR_EL2_TLBIRVAAE1OS |
- HFGITR_EL2_TLBIRVAE1OS);
-
- if (!kvm_has_feat(kvm, ID_AA64ISAR2_EL1, ATS1A, IMP))
- kvm->arch.fgu[HFGITR_GROUP] |= HFGITR_EL2_ATS1E1A;
-
- if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, PAN, PAN2))
- kvm->arch.fgu[HFGITR_GROUP] |= (HFGITR_EL2_ATS1E1RP |
- HFGITR_EL2_ATS1E1WP);
-
- if (!kvm_has_s1pie(kvm))
- kvm->arch.fgu[HFGRTR_GROUP] |= (HFGRTR_EL2_nPIRE0_EL1 |
- HFGRTR_EL2_nPIR_EL1);
-
- if (!kvm_has_s1poe(kvm))
- kvm->arch.fgu[HFGRTR_GROUP] |= (HFGRTR_EL2_nPOR_EL1 |
- HFGRTR_EL2_nPOR_EL0);
-
- if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, IMP))
- kvm->arch.fgu[HAFGRTR_GROUP] |= ~(HAFGRTR_EL2_RES0 |
- HAFGRTR_EL2_RES1);
-
- if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, BRBE, IMP)) {
- kvm->arch.fgu[HDFGRTR_GROUP] |= (HDFGRTR_EL2_nBRBDATA |
- HDFGRTR_EL2_nBRBCTL |
- HDFGRTR_EL2_nBRBIDR);
- kvm->arch.fgu[HFGITR_GROUP] |= (HFGITR_EL2_nBRBINJ |
- HFGITR_EL2_nBRBIALL);
- }
-
- if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, GCS, IMP)) {
- kvm->arch.fgu[HFGRTR_GROUP] |= (HFGRTR_EL2_nGCS_EL0 |
- HFGRTR_EL2_nGCS_EL1);
- kvm->arch.fgu[HFGITR_GROUP] |= (HFGITR_EL2_nGCSPUSHM_EL1 |
- HFGITR_EL2_nGCSSTR_EL1 |
- HFGITR_EL2_nGCSEPP);
- }
+ compute_fgu(kvm, HFGRTR_GROUP);
+ compute_fgu(kvm, HFGITR_GROUP);
+ compute_fgu(kvm, HDFGRTR_GROUP);
+ compute_fgu(kvm, HAFGRTR_GROUP);
set_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags);
out:
--
2.39.2
* [PATCH v3 31/42] KVM: arm64: Validate FGT register descriptions against RES0 masks
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (29 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 30/42] KVM: arm64: Switch to table-driven FGU configuration Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 32/42] KVM: arm64: Use FGT feature maps to drive RES0 bits Marc Zyngier
` (11 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
In order to point out to the unsuspecting KVM hacker that they
are missing something somewhere, validate that the known FGT bits
do not intersect with the corresponding RES0 mask, as computed at
boot time.
This check is also performed at boot time, ensuring that there is
no runtime overhead.
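Put differently, the invariant is that the union of all bits described in a register's feature map must equal the complement of its computed RES0 mask. A hypothetical example of the failure mode (illustrative values, assuming <linux/bits.h>):
	u64 res0    = ~GENMASK_ULL(5, 0);	/* only bits 5:0 are defined */
	u64 covered = GENMASK_ULL(5, 1);	/* OR of all map[i].bits     */

	/*
	 * covered != ~res0, and covered ^ ~res0 == BIT(0): bit 0 exists in
	 * the register layout but was never mapped to a feature, and is
	 * what the boot-time message points at.
	 */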
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 1 +
arch/arm64/kvm/config.c | 29 +++++++++++++++++++++++++++++
arch/arm64/kvm/sys_regs.c | 2 ++
3 files changed, 32 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9386f15cdc252..59bfb049ce987 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1611,5 +1611,6 @@ void kvm_set_vm_id_reg(struct kvm *kvm, u32 reg, u64 val);
(kvm_has_feat((k), ID_AA64MMFR3_EL1, S1POE, IMP))
void compute_fgu(struct kvm *kvm, enum fgt_group_id fgt);
+void check_feature_map(void);
#endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index 8813ecbb45763..8d567db03e8a5 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -494,6 +494,35 @@ static const struct reg_bits_to_feat_map hafgrtr_feat_map[] = {
FEAT_AMUv1),
};
+static void __init check_feat_map(const struct reg_bits_to_feat_map *map,
+ int map_size, u64 res0, const char *str)
+{
+ u64 mask = 0;
+
+ for (int i = 0; i < map_size; i++)
+ mask |= map[i].bits;
+
+ if (mask != ~res0)
+ kvm_err("Undefined %s behaviour, bits %016llx\n",
+ str, mask ^ ~res0);
+}
+
+void __init check_feature_map(void)
+{
+ check_feat_map(hfgrtr_feat_map, ARRAY_SIZE(hfgrtr_feat_map),
+ hfgrtr_masks.res0, hfgrtr_masks.str);
+ check_feat_map(hfgwtr_feat_map, ARRAY_SIZE(hfgwtr_feat_map),
+ hfgwtr_masks.res0, hfgwtr_masks.str);
+ check_feat_map(hfgitr_feat_map, ARRAY_SIZE(hfgitr_feat_map),
+ hfgitr_masks.res0, hfgitr_masks.str);
+ check_feat_map(hdfgrtr_feat_map, ARRAY_SIZE(hdfgrtr_feat_map),
+ hdfgrtr_masks.res0, hdfgrtr_masks.str);
+ check_feat_map(hdfgwtr_feat_map, ARRAY_SIZE(hdfgwtr_feat_map),
+ hdfgwtr_masks.res0, hdfgwtr_masks.str);
+ check_feat_map(hafgrtr_feat_map, ARRAY_SIZE(hafgrtr_feat_map),
+ hafgrtr_masks.res0, hafgrtr_masks.str);
+}
+
static bool idreg_feat_match(struct kvm *kvm, const struct reg_bits_to_feat_map *map)
{
u64 regval = kvm->arch.id_regs[map->regidx];
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index b3e53a899c1fe..f24d1a7d9a8f4 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -5208,6 +5208,8 @@ int __init kvm_sys_reg_table_init(void)
ret = populate_nv_trap_config();
+ check_feature_map();
+
for (i = 0; !ret && i < ARRAY_SIZE(sys_reg_descs); i++)
ret = populate_sysreg_config(sys_reg_descs + i, i);
--
2.39.2
* [PATCH v3 32/42] KVM: arm64: Use FGT feature maps to drive RES0 bits
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (30 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 31/42] KVM: arm64: Validate FGT register descriptions against RES0 masks Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 33/42] KVM: arm64: Allow kvm_has_feat() to take variable arguments Marc Zyngier
` (10 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Another benefit of mapping bits to features is that it becomes trivial
to define which bits should be handled as RES0.
Let's apply this principle to the guest's view of the FGT registers.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 1 +
arch/arm64/kvm/config.c | 46 +++++++++++
arch/arm64/kvm/nested.c | 129 +++---------------------------
3 files changed, 57 insertions(+), 119 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 59bfb049ce987..0eff513167868 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1611,6 +1611,7 @@ void kvm_set_vm_id_reg(struct kvm *kvm, u32 reg, u64 val);
(kvm_has_feat((k), ID_AA64MMFR3_EL1, S1POE, IMP))
void compute_fgu(struct kvm *kvm, enum fgt_group_id fgt);
+void get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg, u64 *res0, u64*res1);
void check_feature_map(void);
#endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index 8d567db03e8a5..a1451aacb14ac 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -616,3 +616,49 @@ void compute_fgu(struct kvm *kvm, enum fgt_group_id fgt)
kvm->arch.fgu[fgt] = val;
}
+
+void get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg, u64 *res0, u64 *res1)
+{
+ switch (reg) {
+ case HFGRTR_EL2:
+ *res0 = compute_res0_bits(kvm, hfgrtr_feat_map,
+ ARRAY_SIZE(hfgrtr_feat_map), 0, 0);
+ *res0 |= hfgrtr_masks.res0;
+ *res1 = HFGRTR_EL2_RES1;
+ break;
+ case HFGWTR_EL2:
+ *res0 = compute_res0_bits(kvm, hfgwtr_feat_map,
+ ARRAY_SIZE(hfgwtr_feat_map), 0, 0);
+ *res0 |= hfgwtr_masks.res0;
+ *res1 = HFGWTR_EL2_RES1;
+ break;
+ case HFGITR_EL2:
+ *res0 = compute_res0_bits(kvm, hfgitr_feat_map,
+ ARRAY_SIZE(hfgitr_feat_map), 0, 0);
+ *res0 |= hfgitr_masks.res0;
+ *res1 = HFGITR_EL2_RES1;
+ break;
+ case HDFGRTR_EL2:
+ *res0 = compute_res0_bits(kvm, hdfgrtr_feat_map,
+ ARRAY_SIZE(hdfgrtr_feat_map), 0, 0);
+ *res0 |= hdfgrtr_masks.res0;
+ *res1 = HDFGRTR_EL2_RES1;
+ break;
+ case HDFGWTR_EL2:
+ *res0 = compute_res0_bits(kvm, hdfgwtr_feat_map,
+ ARRAY_SIZE(hdfgwtr_feat_map), 0, 0);
+ *res0 |= hdfgwtr_masks.res0;
+ *res1 = HDFGWTR_EL2_RES1;
+ break;
+ case HAFGRTR_EL2:
+ *res0 = compute_res0_bits(kvm, hafgrtr_feat_map,
+ ARRAY_SIZE(hafgrtr_feat_map), 0, 0);
+ *res0 |= hafgrtr_masks.res0;
+ *res1 = HAFGRTR_EL2_RES1;
+ break;
+ default:
+ WARN_ON_ONCE(1);
+ *res0 = *res1 = 0;
+ break;
+ }
+}
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 666df85230c9b..3d91a0233652b 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -1100,132 +1100,23 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
set_sysreg_masks(kvm, HCRX_EL2, res0, res1);
/* HFG[RW]TR_EL2 */
- res0 = res1 = 0;
- if (!(kvm_vcpu_has_feature(kvm, KVM_ARM_VCPU_PTRAUTH_ADDRESS) &&
- kvm_vcpu_has_feature(kvm, KVM_ARM_VCPU_PTRAUTH_GENERIC)))
- res0 |= (HFGRTR_EL2_APDAKey | HFGRTR_EL2_APDBKey |
- HFGRTR_EL2_APGAKey | HFGRTR_EL2_APIAKey |
- HFGRTR_EL2_APIBKey);
- if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, LO, IMP))
- res0 |= (HFGRTR_EL2_LORC_EL1 | HFGRTR_EL2_LOREA_EL1 |
- HFGRTR_EL2_LORID_EL1 | HFGRTR_EL2_LORN_EL1 |
- HFGRTR_EL2_LORSA_EL1);
- if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, CSV2, CSV2_2) &&
- !kvm_has_feat(kvm, ID_AA64PFR1_EL1, CSV2_frac, CSV2_1p2))
- res0 |= (HFGRTR_EL2_SCXTNUM_EL1 | HFGRTR_EL2_SCXTNUM_EL0);
- if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, GIC, IMP))
- res0 |= HFGRTR_EL2_ICC_IGRPENn_EL1;
- if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, IMP))
- res0 |= (HFGRTR_EL2_ERRIDR_EL1 | HFGRTR_EL2_ERRSELR_EL1 |
- HFGRTR_EL2_ERXFR_EL1 | HFGRTR_EL2_ERXCTLR_EL1 |
- HFGRTR_EL2_ERXSTATUS_EL1 | HFGRTR_EL2_ERXMISCn_EL1 |
- HFGRTR_EL2_ERXPFGF_EL1 | HFGRTR_EL2_ERXPFGCTL_EL1 |
- HFGRTR_EL2_ERXPFGCDN_EL1 | HFGRTR_EL2_ERXADDR_EL1);
- if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_ACCDATA))
- res0 |= HFGRTR_EL2_nACCDATA_EL1;
- if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, GCS, IMP))
- res0 |= (HFGRTR_EL2_nGCS_EL0 | HFGRTR_EL2_nGCS_EL1);
- if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, SME, IMP))
- res0 |= (HFGRTR_EL2_nSMPRI_EL1 | HFGRTR_EL2_nTPIDR2_EL0);
- if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, THE, IMP))
- res0 |= HFGRTR_EL2_nRCWMASK_EL1;
- if (!kvm_has_s1pie(kvm))
- res0 |= (HFGRTR_EL2_nPIRE0_EL1 | HFGRTR_EL2_nPIR_EL1);
- if (!kvm_has_s1poe(kvm))
- res0 |= (HFGRTR_EL2_nPOR_EL0 | HFGRTR_EL2_nPOR_EL1);
- if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, S2POE, IMP))
- res0 |= HFGRTR_EL2_nS2POR_EL1;
- if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, AIE, IMP))
- res0 |= (HFGRTR_EL2_nMAIR2_EL1 | HFGRTR_EL2_nAMAIR2_EL1);
- set_sysreg_masks(kvm, HFGRTR_EL2, res0 | hfgrtr_masks.res0, res1);
- set_sysreg_masks(kvm, HFGWTR_EL2, res0 | hfgwtr_masks.res0, res1);
+ get_reg_fixed_bits(kvm, HFGRTR_EL2, &res0, &res1);
+ set_sysreg_masks(kvm, HFGRTR_EL2, res0, res1);
+ get_reg_fixed_bits(kvm, HFGWTR_EL2, &res0, &res1);
+ set_sysreg_masks(kvm, HFGWTR_EL2, res0, res1);
/* HDFG[RW]TR_EL2 */
- res0 = res1 = 0;
- if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, DoubleLock, IMP))
- res0 |= HDFGRTR_EL2_OSDLR_EL1;
- if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMUVer, IMP))
- res0 |= (HDFGRTR_EL2_PMEVCNTRn_EL0 | HDFGRTR_EL2_PMEVTYPERn_EL0 |
- HDFGRTR_EL2_PMCCFILTR_EL0 | HDFGRTR_EL2_PMCCNTR_EL0 |
- HDFGRTR_EL2_PMCNTEN | HDFGRTR_EL2_PMINTEN |
- HDFGRTR_EL2_PMOVS | HDFGRTR_EL2_PMSELR_EL0 |
- HDFGRTR_EL2_PMMIR_EL1 | HDFGRTR_EL2_PMUSERENR_EL0 |
- HDFGRTR_EL2_PMCEIDn_EL0);
- if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMSVer, IMP))
- res0 |= (HDFGRTR_EL2_PMBLIMITR_EL1 | HDFGRTR_EL2_PMBPTR_EL1 |
- HDFGRTR_EL2_PMBSR_EL1 | HDFGRTR_EL2_PMSCR_EL1 |
- HDFGRTR_EL2_PMSEVFR_EL1 | HDFGRTR_EL2_PMSFCR_EL1 |
- HDFGRTR_EL2_PMSICR_EL1 | HDFGRTR_EL2_PMSIDR_EL1 |
- HDFGRTR_EL2_PMSIRR_EL1 | HDFGRTR_EL2_PMSLATFR_EL1 |
- HDFGRTR_EL2_PMBIDR_EL1);
- if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, TraceVer, IMP))
- res0 |= (HDFGRTR_EL2_TRC | HDFGRTR_EL2_TRCAUTHSTATUS |
- HDFGRTR_EL2_TRCAUXCTLR | HDFGRTR_EL2_TRCCLAIM |
- HDFGRTR_EL2_TRCCNTVRn | HDFGRTR_EL2_TRCID |
- HDFGRTR_EL2_TRCIMSPECn | HDFGRTR_EL2_TRCOSLSR |
- HDFGRTR_EL2_TRCPRGCTLR | HDFGRTR_EL2_TRCSEQSTR |
- HDFGRTR_EL2_TRCSSCSRn | HDFGRTR_EL2_TRCSTATR |
- HDFGRTR_EL2_TRCVICTLR);
- if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, TraceBuffer, IMP))
- res0 |= (HDFGRTR_EL2_TRBBASER_EL1 | HDFGRTR_EL2_TRBIDR_EL1 |
- HDFGRTR_EL2_TRBLIMITR_EL1 | HDFGRTR_EL2_TRBMAR_EL1 |
- HDFGRTR_EL2_TRBPTR_EL1 | HDFGRTR_EL2_TRBSR_EL1 |
- HDFGRTR_EL2_TRBTRG_EL1);
- if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, BRBE, IMP))
- res0 |= (HDFGRTR_EL2_nBRBIDR | HDFGRTR_EL2_nBRBCTL |
- HDFGRTR_EL2_nBRBDATA);
- if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMSVer, V1P2))
- res0 |= HDFGRTR_EL2_nPMSNEVFR_EL1;
- set_sysreg_masks(kvm, HDFGRTR_EL2, res0 | hdfgrtr_masks.res0, res1);
-
- /* Reuse the bits from the read-side and add the write-specific stuff */
- if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMUVer, IMP))
- res0 |= (HDFGWTR_EL2_PMCR_EL0 | HDFGWTR_EL2_PMSWINC_EL0);
- if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, TraceVer, IMP))
- res0 |= HDFGWTR_EL2_TRCOSLAR;
- if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, TraceFilt, IMP))
- res0 |= HDFGWTR_EL2_TRFCR_EL1;
- set_sysreg_masks(kvm, HFGWTR_EL2, res0 | hdfgwtr_masks.res0, res1);
+ get_reg_fixed_bits(kvm, HDFGRTR_EL2, &res0, &res1);
+ set_sysreg_masks(kvm, HDFGRTR_EL2, res0, res1);
+ get_reg_fixed_bits(kvm, HDFGWTR_EL2, &res0, &res1);
+ set_sysreg_masks(kvm, HDFGWTR_EL2, res0, res1);
/* HFGITR_EL2 */
- res0 = hfgitr_masks.res0;
- res1 = HFGITR_EL2_RES1;
- if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, DPB, DPB2))
- res0 |= HFGITR_EL2_DCCVADP;
- if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, PAN, PAN2))
- res0 |= (HFGITR_EL2_ATS1E1RP | HFGITR_EL2_ATS1E1WP);
- if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
- res0 |= (HFGITR_EL2_TLBIRVAALE1OS | HFGITR_EL2_TLBIRVALE1OS |
- HFGITR_EL2_TLBIRVAAE1OS | HFGITR_EL2_TLBIRVAE1OS |
- HFGITR_EL2_TLBIVAALE1OS | HFGITR_EL2_TLBIVALE1OS |
- HFGITR_EL2_TLBIVAAE1OS | HFGITR_EL2_TLBIASIDE1OS |
- HFGITR_EL2_TLBIVAE1OS | HFGITR_EL2_TLBIVMALLE1OS);
- if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, RANGE))
- res0 |= (HFGITR_EL2_TLBIRVAALE1 | HFGITR_EL2_TLBIRVALE1 |
- HFGITR_EL2_TLBIRVAAE1 | HFGITR_EL2_TLBIRVAE1 |
- HFGITR_EL2_TLBIRVAALE1IS | HFGITR_EL2_TLBIRVALE1IS |
- HFGITR_EL2_TLBIRVAAE1IS | HFGITR_EL2_TLBIRVAE1IS |
- HFGITR_EL2_TLBIRVAALE1OS | HFGITR_EL2_TLBIRVALE1OS |
- HFGITR_EL2_TLBIRVAAE1OS | HFGITR_EL2_TLBIRVAE1OS);
- if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, SPECRES, IMP))
- res0 |= (HFGITR_EL2_CFPRCTX | HFGITR_EL2_DVPRCTX |
- HFGITR_EL2_CPPRCTX);
- if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, BRBE, IMP))
- res0 |= (HFGITR_EL2_nBRBINJ | HFGITR_EL2_nBRBIALL);
- if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, GCS, IMP))
- res0 |= (HFGITR_EL2_nGCSPUSHM_EL1 | HFGITR_EL2_nGCSSTR_EL1 |
- HFGITR_EL2_nGCSEPP);
- if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, SPECRES, COSP_RCTX))
- res0 |= HFGITR_EL2_COSPRCTX;
- if (!kvm_has_feat(kvm, ID_AA64ISAR2_EL1, ATS1A, IMP))
- res0 |= HFGITR_EL2_ATS1E1A;
+ get_reg_fixed_bits(kvm, HFGITR_EL2, &res0, &res1);
set_sysreg_masks(kvm, HFGITR_EL2, res0, res1);
/* HAFGRTR_EL2 - not a lot to see here */
- res0 = hafgrtr_masks.res0;
- res1 = HAFGRTR_EL2_RES1;
- if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, V1P1))
- res0 |= ~(res0 | res1);
+ get_reg_fixed_bits(kvm, HAFGRTR_EL2, &res0, &res1);
set_sysreg_masks(kvm, HAFGRTR_EL2, res0, res1);
/* TCR2_EL2 */
--
2.39.2
* [PATCH v3 33/42] KVM: arm64: Allow kvm_has_feat() to take variable arguments
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (31 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 32/42] KVM: arm64: Use FGT feature maps to drive RES0 bits Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 34/42] KVM: arm64: Use HCRX_EL2 feature map to drive fixed-value bits Marc Zyngier
` (9 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
In order to be able to write more compact (and easier to read) code,
let kvm_has_feat() and co take variable arguments. This enables
constructs such as:
#define FEAT_SME ID_AA64PFR1_EL1, SME, IMP
if (kvm_has_feat(kvm, FEAT_SME))
[...]
which is admittedly more readable.
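The extra level of indirection is what makes this work: macro arguments are split on commas before they are expanded, so the old fixed-arity definition would reject kvm_has_feat(kvm, FEAT_SME) as having too few arguments. A standalone sketch with hypothetical names (not KVM code):
	#define FEAT_X			REG_A, FLD, IMP	/* three arguments once expanded */

	#define __has_feat(kvm, id, fld, limit)	check_field(kvm, id, fld, limit)
	#define has_feat(kvm, ...)		__has_feat(kvm, __VA_ARGS__)

	/*
	 * has_feat(kvm, FEAT_X)
	 *   -> __has_feat(kvm, REG_A, FLD, IMP)   (FEAT_X expanded as __VA_ARGS__)
	 *   -> check_field(kvm, REG_A, FLD, IMP)
	 *
	 * A fixed-arity has_feat(kvm, id, fld, limit) would instead error out
	 * with "requires 4 arguments, but only 2 given".
	 */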
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 0eff513167868..3b5fc64c4085c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1572,12 +1572,16 @@ void kvm_set_vm_id_reg(struct kvm *kvm, u32 reg, u64 val);
kvm_cmp_feat_signed(kvm, id, fld, op, limit) : \
kvm_cmp_feat_unsigned(kvm, id, fld, op, limit))
-#define kvm_has_feat(kvm, id, fld, limit) \
+#define __kvm_has_feat(kvm, id, fld, limit) \
kvm_cmp_feat(kvm, id, fld, >=, limit)
-#define kvm_has_feat_enum(kvm, id, fld, val) \
+#define kvm_has_feat(kvm, ...) __kvm_has_feat(kvm, __VA_ARGS__)
+
+#define __kvm_has_feat_enum(kvm, id, fld, val) \
kvm_cmp_feat_unsigned(kvm, id, fld, ==, val)
+#define kvm_has_feat_enum(kvm, ...) __kvm_has_feat_enum(kvm, __VA_ARGS__)
+
#define kvm_has_feat_range(kvm, id, fld, min, max) \
(kvm_cmp_feat(kvm, id, fld, >=, min) && \
kvm_cmp_feat(kvm, id, fld, <=, max))
--
2.39.2
* [PATCH v3 34/42] KVM: arm64: Use HCRX_EL2 feature map to drive fixed-value bits
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (32 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 33/42] KVM: arm64: Allow kvm_has_feat() to take variable arguments Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 35/42] KVM: arm64: Use HCR_EL2 " Marc Zyngier
` (8 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Similarly to other registers, describe which HCRX_EL2 bit depends
on which feature, and use this to compute the RES0 status of these
bits.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/config.c | 78 +++++++++++++++++++++++++++++++++++++++++
arch/arm64/kvm/nested.c | 40 +--------------------
2 files changed, 79 insertions(+), 39 deletions(-)
diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index a1451aacb14ac..e904b2cce5f64 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -77,6 +77,8 @@ struct reg_bits_to_feat_map {
#define FEAT_THE ID_AA64PFR1_EL1, THE, IMP
#define FEAT_SME ID_AA64PFR1_EL1, SME, IMP
#define FEAT_GCS ID_AA64PFR1_EL1, GCS, IMP
+#define FEAT_LS64 ID_AA64ISAR1_EL1, LS64, LS64
+#define FEAT_LS64_V ID_AA64ISAR1_EL1, LS64, LS64_V
#define FEAT_LS64_ACCDATA ID_AA64ISAR1_EL1, LS64, LS64_ACCDATA
#define FEAT_RAS ID_AA64PFR0_EL1, RAS, IMP
#define FEAT_GICv3 ID_AA64PFR0_EL1, GIC, IMP
@@ -90,6 +92,16 @@ struct reg_bits_to_feat_map {
#define FEAT_PAN2 ID_AA64MMFR1_EL1, PAN, PAN2
#define FEAT_DPB2 ID_AA64ISAR1_EL1, DPB, DPB2
#define FEAT_AMUv1 ID_AA64PFR0_EL1, AMU, IMP
+#define FEAT_CMOW ID_AA64MMFR1_EL1, CMOW, IMP
+#define FEAT_D128 ID_AA64MMFR3_EL1, D128, IMP
+#define FEAT_DoubleFault2 ID_AA64PFR1_EL1, DF2, IMP
+#define FEAT_FPMR ID_AA64PFR2_EL1, FPMR, IMP
+#define FEAT_MOPS ID_AA64ISAR2_EL1, MOPS, IMP
+#define FEAT_NMI ID_AA64PFR1_EL1, NMI, IMP
+#define FEAT_SCTLR2 ID_AA64MMFR3_EL1, SCTLRX, IMP
+#define FEAT_SYSREG128 ID_AA64ISAR2_EL1, SYSREG_128, IMP
+#define FEAT_TCR2 ID_AA64MMFR3_EL1, TCRX, IMP
+#define FEAT_XS ID_AA64ISAR1_EL1, XS, IMP
static bool feat_rasv1p1(struct kvm *kvm)
{
@@ -110,6 +122,35 @@ static bool feat_pauth(struct kvm *kvm)
return kvm_has_pauth(kvm, PAuth);
}
+static bool feat_pauth_lr(struct kvm *kvm)
+{
+ return kvm_has_pauth(kvm, PAuth_LR);
+}
+
+static bool feat_aderr(struct kvm *kvm)
+{
+ return (kvm_has_feat(kvm, ID_AA64MMFR3_EL1, ADERR, FEAT_ADERR) &&
+ kvm_has_feat(kvm, ID_AA64MMFR3_EL1, SDERR, FEAT_ADERR));
+}
+
+static bool feat_anerr(struct kvm *kvm)
+{
+ return (kvm_has_feat(kvm, ID_AA64MMFR3_EL1, ANERR, FEAT_ANERR) &&
+ kvm_has_feat(kvm, ID_AA64MMFR3_EL1, SNERR, FEAT_ANERR));
+}
+
+static bool feat_sme_smps(struct kvm *kvm)
+{
+ /*
+ * Revisit this if KVM ever supports SME -- this really should
+ * look at the guest's view of SMIDR_EL1. Funnily enough, this
+ * is not captured in the JSON file, but only as a note in the
+ * ARM ARM.
+ */
+ return (kvm_has_feat(kvm, FEAT_SME) &&
+ (read_sysreg_s(SYS_SMIDR_EL1) & SMIDR_EL1_SMPS));
+}
+
static const struct reg_bits_to_feat_map hfgrtr_feat_map[] = {
NEEDS_FEAT(HFGRTR_EL2_nAMAIR2_EL1 |
HFGRTR_EL2_nMAIR2_EL1,
@@ -494,6 +535,35 @@ static const struct reg_bits_to_feat_map hafgrtr_feat_map[] = {
FEAT_AMUv1),
};
+static const struct reg_bits_to_feat_map hcrx_feat_map[] = {
+ NEEDS_FEAT(HCRX_EL2_PACMEn, feat_pauth_lr),
+ NEEDS_FEAT(HCRX_EL2_EnFPM, FEAT_FPMR),
+ NEEDS_FEAT(HCRX_EL2_GCSEn, FEAT_GCS),
+ NEEDS_FEAT(HCRX_EL2_EnIDCP128, FEAT_SYSREG128),
+ NEEDS_FEAT(HCRX_EL2_EnSDERR, feat_aderr),
+ NEEDS_FEAT(HCRX_EL2_TMEA, FEAT_DoubleFault2),
+ NEEDS_FEAT(HCRX_EL2_EnSNERR, feat_anerr),
+ NEEDS_FEAT(HCRX_EL2_D128En, FEAT_D128),
+ NEEDS_FEAT(HCRX_EL2_PTTWI, FEAT_THE),
+ NEEDS_FEAT(HCRX_EL2_SCTLR2En, FEAT_SCTLR2),
+ NEEDS_FEAT(HCRX_EL2_TCR2En, FEAT_TCR2),
+ NEEDS_FEAT(HCRX_EL2_MSCEn |
+ HCRX_EL2_MCE2,
+ FEAT_MOPS),
+ NEEDS_FEAT(HCRX_EL2_CMOW, FEAT_CMOW),
+ NEEDS_FEAT(HCRX_EL2_VFNMI |
+ HCRX_EL2_VINMI |
+ HCRX_EL2_TALLINT,
+ FEAT_NMI),
+ NEEDS_FEAT(HCRX_EL2_SMPME, feat_sme_smps),
+ NEEDS_FEAT(HCRX_EL2_FGTnXS |
+ HCRX_EL2_FnXS,
+ FEAT_XS),
+ NEEDS_FEAT(HCRX_EL2_EnASR, FEAT_LS64_V),
+ NEEDS_FEAT(HCRX_EL2_EnALS, FEAT_LS64),
+ NEEDS_FEAT(HCRX_EL2_EnAS0, FEAT_LS64_ACCDATA),
+};
+
static void __init check_feat_map(const struct reg_bits_to_feat_map *map,
int map_size, u64 res0, const char *str)
{
@@ -521,6 +591,8 @@ void __init check_feature_map(void)
hdfgwtr_masks.res0, hdfgwtr_masks.str);
check_feat_map(hafgrtr_feat_map, ARRAY_SIZE(hafgrtr_feat_map),
hafgrtr_masks.res0, hafgrtr_masks.str);
+ check_feat_map(hcrx_feat_map, ARRAY_SIZE(hcrx_feat_map),
+ __HCRX_EL2_RES0, "HCRX_EL2");
}
static bool idreg_feat_match(struct kvm *kvm, const struct reg_bits_to_feat_map *map)
@@ -656,6 +728,12 @@ void get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg, u64 *res0, u64 *r
*res0 |= hafgrtr_masks.res0;
*res1 = HAFGRTR_EL2_RES1;
break;
+ case HCRX_EL2:
+ *res0 = compute_res0_bits(kvm, hcrx_feat_map,
+ ARRAY_SIZE(hcrx_feat_map), 0, 0);
+ *res0 |= __HCRX_EL2_RES0;
+ *res1 = __HCRX_EL2_RES1;
+ break;
default:
WARN_ON_ONCE(1);
*res0 = *res1 = 0;
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 3d91a0233652b..20c79f1eaebab 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -1058,45 +1058,7 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
set_sysreg_masks(kvm, HCR_EL2, res0, res1);
/* HCRX_EL2 */
- res0 = __HCRX_EL2_RES0;
- res1 = __HCRX_EL2_RES1;
- if (!kvm_has_feat(kvm, ID_AA64ISAR3_EL1, PACM, TRIVIAL_IMP))
- res0 |= HCRX_EL2_PACMEn;
- if (!kvm_has_feat(kvm, ID_AA64PFR2_EL1, FPMR, IMP))
- res0 |= HCRX_EL2_EnFPM;
- if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, GCS, IMP))
- res0 |= HCRX_EL2_GCSEn;
- if (!kvm_has_feat(kvm, ID_AA64ISAR2_EL1, SYSREG_128, IMP))
- res0 |= HCRX_EL2_EnIDCP128;
- if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, ADERR, DEV_ASYNC))
- res0 |= (HCRX_EL2_EnSDERR | HCRX_EL2_EnSNERR);
- if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, DF2, IMP))
- res0 |= HCRX_EL2_TMEA;
- if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, D128, IMP))
- res0 |= HCRX_EL2_D128En;
- if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, THE, IMP))
- res0 |= HCRX_EL2_PTTWI;
- if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, SCTLRX, IMP))
- res0 |= HCRX_EL2_SCTLR2En;
- if (!kvm_has_tcr2(kvm))
- res0 |= HCRX_EL2_TCR2En;
- if (!kvm_has_feat(kvm, ID_AA64ISAR2_EL1, MOPS, IMP))
- res0 |= (HCRX_EL2_MSCEn | HCRX_EL2_MCE2);
- if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, CMOW, IMP))
- res0 |= HCRX_EL2_CMOW;
- if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, NMI, IMP))
- res0 |= (HCRX_EL2_VFNMI | HCRX_EL2_VINMI | HCRX_EL2_TALLINT);
- if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, SME, IMP) ||
- !(read_sysreg_s(SYS_SMIDR_EL1) & SMIDR_EL1_SMPS))
- res0 |= HCRX_EL2_SMPME;
- if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, XS, IMP))
- res0 |= (HCRX_EL2_FGTnXS | HCRX_EL2_FnXS);
- if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_V))
- res0 |= HCRX_EL2_EnASR;
- if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64))
- res0 |= HCRX_EL2_EnALS;
- if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_ACCDATA))
- res0 |= HCRX_EL2_EnAS0;
+ get_reg_fixed_bits(kvm, HCRX_EL2, &res0, &res1);
set_sysreg_masks(kvm, HCRX_EL2, res0, res1);
/* HFG[RW]TR_EL2 */
--
2.39.2
* [PATCH v3 35/42] KVM: arm64: Use HCR_EL2 feature map to drive fixed-value bits
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (33 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 34/42] KVM: arm64: Use HCRX_EL2 feature map to drive fixed-value bits Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 36/42] KVM: arm64: Add FEAT_FGT2 registers to the VNCR page Marc Zyngier
` (7 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Similarly to other registers, describe which HCR_EL2 bit depends
on which feature, and use this to compute the RES0 status of these
bits.
An additional complexity stems from the status of some bits such
as E2H and RW, which do not have a RESx status, but still take
a fixed value due to implementation choices in KVM.
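For the fixed-value bits, NEEDS_FEAT_FIXED() appends a dummy argument so that the two-argument (function pointer) form is selected and FIXED_VALUE is OR-ed into the flags. Roughly, as an illustration of the macros already in config.c rather than new code:
	/* NEEDS_FEAT_FIXED(HCR_EL2_RW, compute_hcr_rw) is roughly: */
	{
		.bits  = HCR_EL2_RW,
		.flags = FIXED_VALUE | CALL_FUNC,
		.fval  = compute_hcr_rw,
	},

	/*
	 * compute_res0_bits() skips FIXED_VALUE entries, while
	 * compute_fixed_bits() calls .fval(kvm, &fixed) to learn both which
	 * bits are fixed and which value they must take.
	 */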
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/config.c | 149 ++++++++++++++++++++++++++++++++++++++++
arch/arm64/kvm/nested.c | 38 +---------
2 files changed, 150 insertions(+), 37 deletions(-)
diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index e904b2cce5f64..0357d06feaac6 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -69,7 +69,10 @@ struct reg_bits_to_feat_map {
#define FEAT_TRBE ID_AA64DFR0_EL1, TraceBuffer, IMP
#define FEAT_DoubleLock ID_AA64DFR0_EL1, DoubleLock, IMP
#define FEAT_TRF ID_AA64DFR0_EL1, TraceFilt, IMP
+#define FEAT_AA32EL0 ID_AA64PFR0_EL1, EL0, AARCH32
+#define FEAT_AA32EL1 ID_AA64PFR0_EL1, EL1, AARCH32
#define FEAT_AA64EL1 ID_AA64PFR0_EL1, EL1, IMP
+#define FEAT_AA64EL3 ID_AA64PFR0_EL1, EL3, IMP
#define FEAT_AIE ID_AA64MMFR3_EL1, AIE, IMP
#define FEAT_S2POE ID_AA64MMFR3_EL1, S2POE, IMP
#define FEAT_S1POE ID_AA64MMFR3_EL1, S1POE, IMP
@@ -92,6 +95,7 @@ struct reg_bits_to_feat_map {
#define FEAT_PAN2 ID_AA64MMFR1_EL1, PAN, PAN2
#define FEAT_DPB2 ID_AA64ISAR1_EL1, DPB, DPB2
#define FEAT_AMUv1 ID_AA64PFR0_EL1, AMU, IMP
+#define FEAT_AMUv1p1 ID_AA64PFR0_EL1, AMU, V1P1
#define FEAT_CMOW ID_AA64MMFR1_EL1, CMOW, IMP
#define FEAT_D128 ID_AA64MMFR3_EL1, D128, IMP
#define FEAT_DoubleFault2 ID_AA64PFR1_EL1, DF2, IMP
@@ -102,6 +106,31 @@ struct reg_bits_to_feat_map {
#define FEAT_SYSREG128 ID_AA64ISAR2_EL1, SYSREG_128, IMP
#define FEAT_TCR2 ID_AA64MMFR3_EL1, TCRX, IMP
#define FEAT_XS ID_AA64ISAR1_EL1, XS, IMP
+#define FEAT_EVT ID_AA64MMFR2_EL1, EVT, IMP
+#define FEAT_EVT_TTLBxS ID_AA64MMFR2_EL1, EVT, TTLBxS
+#define FEAT_MTE2 ID_AA64PFR1_EL1, MTE, MTE2
+#define FEAT_RME ID_AA64PFR0_EL1, RME, IMP
+#define FEAT_S2FWB ID_AA64MMFR2_EL1, FWB, IMP
+#define FEAT_TME ID_AA64ISAR0_EL1, TME, IMP
+#define FEAT_TWED ID_AA64MMFR1_EL1, TWED, IMP
+#define FEAT_E2H0 ID_AA64MMFR4_EL1, E2H0, IMP
+
+static bool not_feat_aa64el3(struct kvm *kvm)
+{
+ return !kvm_has_feat(kvm, FEAT_AA64EL3);
+}
+
+static bool feat_nv2(struct kvm *kvm)
+{
+ return ((kvm_has_feat(kvm, ID_AA64MMFR4_EL1, NV_frac, NV2_ONLY) &&
+ kvm_has_feat_enum(kvm, ID_AA64MMFR2_EL1, NV, NI)) ||
+ kvm_has_feat(kvm, ID_AA64MMFR2_EL1, NV, NV2));
+}
+
+static bool feat_nv2_e2h0_ni(struct kvm *kvm)
+{
+ return feat_nv2(kvm) && !kvm_has_feat(kvm, FEAT_E2H0);
+}
static bool feat_rasv1p1(struct kvm *kvm)
{
@@ -151,6 +180,31 @@ static bool feat_sme_smps(struct kvm *kvm)
(read_sysreg_s(SYS_SMIDR_EL1) & SMIDR_EL1_SMPS));
}
+static bool compute_hcr_rw(struct kvm *kvm, u64 *bits)
+{
+ /* This is purely academic: AArch32 and NV are mutually exclusive */
+ if (bits) {
+ if (kvm_has_feat(kvm, FEAT_AA32EL1))
+ *bits &= ~HCR_EL2_RW;
+ else
+ *bits |= HCR_EL2_RW;
+ }
+
+ return true;
+}
+
+static bool compute_hcr_e2h(struct kvm *kvm, u64 *bits)
+{
+ if (bits) {
+ if (kvm_has_feat(kvm, FEAT_E2H0))
+ *bits &= ~HCR_EL2_E2H;
+ else
+ *bits |= HCR_EL2_E2H;
+ }
+
+ return true;
+}
+
static const struct reg_bits_to_feat_map hfgrtr_feat_map[] = {
NEEDS_FEAT(HFGRTR_EL2_nAMAIR2_EL1 |
HFGRTR_EL2_nMAIR2_EL1,
@@ -564,6 +618,77 @@ static const struct reg_bits_to_feat_map hcrx_feat_map[] = {
NEEDS_FEAT(HCRX_EL2_EnAS0, FEAT_LS64_ACCDATA),
};
+static const struct reg_bits_to_feat_map hcr_feat_map[] = {
+ NEEDS_FEAT(HCR_EL2_TID0, FEAT_AA32EL0),
+ NEEDS_FEAT_FIXED(HCR_EL2_RW, compute_hcr_rw),
+ NEEDS_FEAT(HCR_EL2_HCD, not_feat_aa64el3),
+ NEEDS_FEAT(HCR_EL2_AMO |
+ HCR_EL2_BSU |
+ HCR_EL2_CD |
+ HCR_EL2_DC |
+ HCR_EL2_FB |
+ HCR_EL2_FMO |
+ HCR_EL2_ID |
+ HCR_EL2_IMO |
+ HCR_EL2_MIOCNCE |
+ HCR_EL2_PTW |
+ HCR_EL2_SWIO |
+ HCR_EL2_TACR |
+ HCR_EL2_TDZ |
+ HCR_EL2_TGE |
+ HCR_EL2_TID1 |
+ HCR_EL2_TID2 |
+ HCR_EL2_TID3 |
+ HCR_EL2_TIDCP |
+ HCR_EL2_TPCP |
+ HCR_EL2_TPU |
+ HCR_EL2_TRVM |
+ HCR_EL2_TSC |
+ HCR_EL2_TSW |
+ HCR_EL2_TTLB |
+ HCR_EL2_TVM |
+ HCR_EL2_TWE |
+ HCR_EL2_TWI |
+ HCR_EL2_VF |
+ HCR_EL2_VI |
+ HCR_EL2_VM |
+ HCR_EL2_VSE,
+ FEAT_AA64EL1),
+ NEEDS_FEAT(HCR_EL2_AMVOFFEN, FEAT_AMUv1p1),
+ NEEDS_FEAT(HCR_EL2_EnSCXT, feat_csv2_2_csv2_1p2),
+ NEEDS_FEAT(HCR_EL2_TICAB |
+ HCR_EL2_TID4 |
+ HCR_EL2_TOCU,
+ FEAT_EVT),
+ NEEDS_FEAT(HCR_EL2_TTLBIS |
+ HCR_EL2_TTLBOS,
+ FEAT_EVT_TTLBxS),
+ NEEDS_FEAT(HCR_EL2_TLOR, FEAT_LOR),
+ NEEDS_FEAT(HCR_EL2_ATA |
+ HCR_EL2_DCT |
+ HCR_EL2_TID5,
+ FEAT_MTE2),
+ NEEDS_FEAT(HCR_EL2_AT | /* Ignore the original FEAT_NV */
+ HCR_EL2_NV2 |
+ HCR_EL2_NV,
+ feat_nv2),
+ NEEDS_FEAT(HCR_EL2_NV1, feat_nv2_e2h0_ni), /* Missing from JSON */
+ NEEDS_FEAT(HCR_EL2_API |
+ HCR_EL2_APK,
+ feat_pauth),
+ NEEDS_FEAT(HCR_EL2_TEA |
+ HCR_EL2_TERR,
+ FEAT_RAS),
+ NEEDS_FEAT(HCR_EL2_FIEN, feat_rasv1p1),
+ NEEDS_FEAT(HCR_EL2_GPF, FEAT_RME),
+ NEEDS_FEAT(HCR_EL2_FWB, FEAT_S2FWB),
+ NEEDS_FEAT(HCR_EL2_TME, FEAT_TME),
+ NEEDS_FEAT(HCR_EL2_TWEDEL |
+ HCR_EL2_TWEDEn,
+ FEAT_TWED),
+ NEEDS_FEAT_FIXED(HCR_EL2_E2H, compute_hcr_e2h),
+};
+
static void __init check_feat_map(const struct reg_bits_to_feat_map *map,
int map_size, u64 res0, const char *str)
{
@@ -593,6 +718,8 @@ void __init check_feature_map(void)
hafgrtr_masks.res0, hafgrtr_masks.str);
check_feat_map(hcrx_feat_map, ARRAY_SIZE(hcrx_feat_map),
__HCRX_EL2_RES0, "HCRX_EL2");
+ check_feat_map(hcr_feat_map, ARRAY_SIZE(hcr_feat_map),
+ HCR_EL2_RES0, "HCR_EL2");
}
static bool idreg_feat_match(struct kvm *kvm, const struct reg_bits_to_feat_map *map)
@@ -651,6 +778,17 @@ static u64 compute_res0_bits(struct kvm *kvm,
require, exclude | FIXED_VALUE);
}
+static u64 compute_fixed_bits(struct kvm *kvm,
+ const struct reg_bits_to_feat_map *map,
+ int map_size,
+ u64 *fixed_bits,
+ unsigned long require,
+ unsigned long exclude)
+{
+ return __compute_fixed_bits(kvm, map, map_size, fixed_bits,
+ require | FIXED_VALUE, exclude);
+}
+
void compute_fgu(struct kvm *kvm, enum fgt_group_id fgt)
{
u64 val = 0;
@@ -691,6 +829,8 @@ void compute_fgu(struct kvm *kvm, enum fgt_group_id fgt)
void get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg, u64 *res0, u64 *res1)
{
+ u64 fixed = 0, mask;
+
switch (reg) {
case HFGRTR_EL2:
*res0 = compute_res0_bits(kvm, hfgrtr_feat_map,
@@ -734,6 +874,15 @@ void get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg, u64 *res0, u64 *r
*res0 |= __HCRX_EL2_RES0;
*res1 = __HCRX_EL2_RES1;
break;
+ case HCR_EL2:
+ mask = compute_fixed_bits(kvm, hcr_feat_map,
+ ARRAY_SIZE(hcr_feat_map), &fixed,
+ 0, 0);
+ *res0 = compute_res0_bits(kvm, hcr_feat_map,
+ ARRAY_SIZE(hcr_feat_map), 0, 0);
+ *res0 |= HCR_EL2_RES0 | (mask & ~fixed);
+ *res1 = HCR_EL2_RES1 | (mask & fixed);
+ break;
default:
WARN_ON_ONCE(1);
*res0 = *res1 = 0;
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 20c79f1eaebab..b633666be6df4 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -1018,43 +1018,7 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
set_sysreg_masks(kvm, VMPIDR_EL2, res0, res1);
/* HCR_EL2 */
- res0 = BIT(48);
- res1 = HCR_RW;
- if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, TWED, IMP))
- res0 |= GENMASK(63, 59);
- if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, MTE, MTE2))
- res0 |= (HCR_TID5 | HCR_DCT | HCR_ATA);
- if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, EVT, TTLBxS))
- res0 |= (HCR_TTLBIS | HCR_TTLBOS);
- if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, CSV2, CSV2_2) &&
- !kvm_has_feat(kvm, ID_AA64PFR1_EL1, CSV2_frac, CSV2_1p2))
- res0 |= HCR_ENSCXT;
- if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, EVT, IMP))
- res0 |= (HCR_TOCU | HCR_TICAB | HCR_TID4);
- if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, V1P1))
- res0 |= HCR_AMVOFFEN;
- if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, V1P1))
- res0 |= HCR_FIEN;
- if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, FWB, IMP))
- res0 |= HCR_FWB;
- /* Implementation choice: NV2 is the only supported config */
- if (!kvm_has_feat(kvm, ID_AA64MMFR4_EL1, NV_frac, NV2_ONLY))
- res0 |= (HCR_NV2 | HCR_NV | HCR_AT);
- if (!kvm_has_feat(kvm, ID_AA64MMFR4_EL1, E2H0, NI))
- res0 |= HCR_NV1;
- if (!(kvm_vcpu_has_feature(kvm, KVM_ARM_VCPU_PTRAUTH_ADDRESS) &&
- kvm_vcpu_has_feature(kvm, KVM_ARM_VCPU_PTRAUTH_GENERIC)))
- res0 |= (HCR_API | HCR_APK);
- if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TME, IMP))
- res0 |= BIT(39);
- if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, IMP))
- res0 |= (HCR_TEA | HCR_TERR);
- if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, LO, IMP))
- res0 |= HCR_TLOR;
- if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, VH, IMP))
- res0 |= HCR_E2H;
- if (!kvm_has_feat(kvm, ID_AA64MMFR4_EL1, E2H0, IMP))
- res1 |= HCR_E2H;
+ get_reg_fixed_bits(kvm, HCR_EL2, &res0, &res1);
set_sysreg_masks(kvm, HCR_EL2, res0, res1);
/* HCRX_EL2 */
--
2.39.2
^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH v3 36/42] KVM: arm64: Add FEAT_FGT2 registers to the VNCR page
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (34 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 35/42] KVM: arm64: Use HCR_EL2 " Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 37/42] KVM: arm64: Add sanitisation for FEAT_FGT2 registers Marc Zyngier
` (6 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
The FEAT_FGT2 registers are part of the VNCR page. Describe the
corresponding offsets and add them to the vcpu sysreg enumeration.
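As a reminder of what these constants represent (an illustration, not part
of the patch): each VNCR_* value is the architected byte offset of the
register within the per-vCPU VNCR page, so the backing storage of a
VNCR-mapped register is simply base-plus-offset. The helper name below is
made up.

#include <linux/types.h>

/*
 * Illustrative only: locate the storage of a VNCR-mapped register,
 * e.g. vncr_reg_ptr(page, VNCR_HFGITR2_EL2) for the 64-bit word at
 * offset 0x310 of the VNCR page.
 */
static inline u64 *vncr_reg_ptr(void *vncr_page, unsigned int offset)
{
	return (u64 *)((char *)vncr_page + offset);
}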
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 5 +++++
arch/arm64/include/asm/vncr_mapping.h | 5 +++++
2 files changed, 10 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 3b5fc64c4085c..abe45f97266c5 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -562,6 +562,11 @@ enum vcpu_sysreg {
VNCR(HDFGRTR_EL2),
VNCR(HDFGWTR_EL2),
VNCR(HAFGRTR_EL2),
+ VNCR(HFGRTR2_EL2),
+ VNCR(HFGWTR2_EL2),
+ VNCR(HFGITR2_EL2),
+ VNCR(HDFGRTR2_EL2),
+ VNCR(HDFGWTR2_EL2),
VNCR(CNTVOFF_EL2),
VNCR(CNTV_CVAL_EL0),
diff --git a/arch/arm64/include/asm/vncr_mapping.h b/arch/arm64/include/asm/vncr_mapping.h
index 4f9bbd4d6c267..6f556e9936443 100644
--- a/arch/arm64/include/asm/vncr_mapping.h
+++ b/arch/arm64/include/asm/vncr_mapping.h
@@ -35,6 +35,8 @@
#define VNCR_CNTP_CTL_EL0 0x180
#define VNCR_SCXTNUM_EL1 0x188
#define VNCR_TFSR_EL1 0x190
+#define VNCR_HDFGRTR2_EL2 0x1A0
+#define VNCR_HDFGWTR2_EL2 0x1B0
#define VNCR_HFGRTR_EL2 0x1B8
#define VNCR_HFGWTR_EL2 0x1C0
#define VNCR_HFGITR_EL2 0x1C8
@@ -52,6 +54,9 @@
#define VNCR_PIRE0_EL1 0x290
#define VNCR_PIR_EL1 0x2A0
#define VNCR_POR_EL1 0x2A8
+#define VNCR_HFGRTR2_EL2 0x2C0
+#define VNCR_HFGWTR2_EL2 0x2C8
+#define VNCR_HFGITR2_EL2 0x310
#define VNCR_ICH_LR0_EL2 0x400
#define VNCR_ICH_LR1_EL2 0x408
#define VNCR_ICH_LR2_EL2 0x410
--
2.39.2
^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH v3 37/42] KVM: arm64: Add sanitisation for FEAT_FGT2 registers
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (35 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 36/42] KVM: arm64: Add FEAT_FGT2 registers to the VNCR page Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 38/42] KVM: arm64: Add trap routing " Marc Zyngier
` (5 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Just like the FEAT_FGT registers, treat the FGT2 variants the same
way. This is a large update, but a fairly mechanical one.
The config dependencies are extracted from the 2025-03 JSON drop.
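For readers who have not followed the earlier FGT rework, the principle
behind these tables is unchanged: walk the bits-to-feature map and
accumulate, as RES0, every trap bit whose controlling feature is not
exposed to the guest. A simplified sketch of that idea follows (the
structure and helper names are illustrative, not the actual kvm/config.c
code, which also handles FGU filtering and fixed-value bits):

#include <linux/types.h>

struct kvm;

/* Illustrative, simplified version of the bits-to-feature mapping */
struct bits_to_feat {
	u64 bits;				/* trap bits in the register */
	bool (*guest_has)(struct kvm *kvm);	/* feature exposed to guest? */
};

static u64 sketch_compute_res0(struct kvm *kvm,
			       const struct bits_to_feat *map, int nr)
{
	u64 res0 = 0;

	for (int i = 0; i < nr; i++) {
		if (!map[i].guest_has(kvm))
			res0 |= map[i].bits;
	}

	return res0;
}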
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 15 +++
arch/arm64/kvm/arm.c | 5 +
arch/arm64/kvm/config.c | 194 ++++++++++++++++++++++++++++++
arch/arm64/kvm/emulate-nested.c | 22 ++++
arch/arm64/kvm/hyp/nvhe/switch.c | 5 +
arch/arm64/kvm/nested.c | 16 +++
arch/arm64/kvm/sys_regs.c | 3 +
7 files changed, 260 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index abe45f97266c5..4e191c81f9aa1 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -279,6 +279,11 @@ enum fgt_group_id {
HDFGWTR_GROUP = HDFGRTR_GROUP,
HFGITR_GROUP,
HAFGRTR_GROUP,
+ HFGRTR2_GROUP,
+ HFGWTR2_GROUP = HFGRTR2_GROUP,
+ HDFGRTR2_GROUP,
+ HDFGWTR2_GROUP = HDFGRTR2_GROUP,
+ HFGITR2_GROUP,
/* Must be last */
__NR_FGT_GROUP_IDS__
@@ -625,6 +630,11 @@ extern struct fgt_masks hfgitr_masks;
extern struct fgt_masks hdfgrtr_masks;
extern struct fgt_masks hdfgwtr_masks;
extern struct fgt_masks hafgrtr_masks;
+extern struct fgt_masks hfgrtr2_masks;
+extern struct fgt_masks hfgwtr2_masks;
+extern struct fgt_masks hfgitr2_masks;
+extern struct fgt_masks hdfgrtr2_masks;
+extern struct fgt_masks hdfgwtr2_masks;
extern struct fgt_masks kvm_nvhe_sym(hfgrtr_masks);
extern struct fgt_masks kvm_nvhe_sym(hfgwtr_masks);
@@ -632,6 +642,11 @@ extern struct fgt_masks kvm_nvhe_sym(hfgitr_masks);
extern struct fgt_masks kvm_nvhe_sym(hdfgrtr_masks);
extern struct fgt_masks kvm_nvhe_sym(hdfgwtr_masks);
extern struct fgt_masks kvm_nvhe_sym(hafgrtr_masks);
+extern struct fgt_masks kvm_nvhe_sym(hfgrtr2_masks);
+extern struct fgt_masks kvm_nvhe_sym(hfgwtr2_masks);
+extern struct fgt_masks kvm_nvhe_sym(hfgitr2_masks);
+extern struct fgt_masks kvm_nvhe_sym(hdfgrtr2_masks);
+extern struct fgt_masks kvm_nvhe_sym(hdfgwtr2_masks);
struct kvm_cpu_context {
struct user_pt_regs regs; /* sp = sp_el0 */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 8951e8693ca7b..ff1c0cf97ee53 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -2457,6 +2457,11 @@ static void kvm_hyp_init_symbols(void)
kvm_nvhe_sym(hdfgrtr_masks) = hdfgrtr_masks;
kvm_nvhe_sym(hdfgwtr_masks) = hdfgwtr_masks;
kvm_nvhe_sym(hafgrtr_masks) = hafgrtr_masks;
+ kvm_nvhe_sym(hfgrtr2_masks) = hfgrtr2_masks;
+ kvm_nvhe_sym(hfgwtr2_masks) = hfgwtr2_masks;
+ kvm_nvhe_sym(hfgitr2_masks) = hfgitr2_masks;
+ kvm_nvhe_sym(hdfgrtr2_masks) = hdfgrtr2_masks;
+ kvm_nvhe_sym(hdfgwtr2_masks) = hdfgwtr2_masks;
/*
* Flush entire BSS since part of its data containing init symbols is read
diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index 0357d06feaac6..d4e1218b004dd 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -66,7 +66,9 @@ struct reg_bits_to_feat_map {
#define FEAT_BRBE ID_AA64DFR0_EL1, BRBE, IMP
#define FEAT_TRC_SR ID_AA64DFR0_EL1, TraceVer, IMP
#define FEAT_PMUv3 ID_AA64DFR0_EL1, PMUVer, IMP
+#define FEAT_PMUv3p9 ID_AA64DFR0_EL1, PMUVer, V3P9
#define FEAT_TRBE ID_AA64DFR0_EL1, TraceBuffer, IMP
+#define FEAT_TRBEv1p1 ID_AA64DFR0_EL1, TraceBuffer, TRBE_V1P1
#define FEAT_DoubleLock ID_AA64DFR0_EL1, DoubleLock, IMP
#define FEAT_TRF ID_AA64DFR0_EL1, TraceFilt, IMP
#define FEAT_AA32EL0 ID_AA64PFR0_EL1, EL0, AARCH32
@@ -84,8 +86,10 @@ struct reg_bits_to_feat_map {
#define FEAT_LS64_V ID_AA64ISAR1_EL1, LS64, LS64_V
#define FEAT_LS64_ACCDATA ID_AA64ISAR1_EL1, LS64, LS64_ACCDATA
#define FEAT_RAS ID_AA64PFR0_EL1, RAS, IMP
+#define FEAT_RASv2 ID_AA64PFR0_EL1, RAS, V2
#define FEAT_GICv3 ID_AA64PFR0_EL1, GIC, IMP
#define FEAT_LOR ID_AA64MMFR1_EL1, LO, IMP
+#define FEAT_SPEv1p4 ID_AA64DFR0_EL1, PMSVer, V1P4
#define FEAT_SPEv1p5 ID_AA64DFR0_EL1, PMSVer, V1P5
#define FEAT_ATS1A ID_AA64ISAR2_EL1, ATS1A, IMP
#define FEAT_SPECRES2 ID_AA64ISAR1_EL1, SPECRES, COSP_RCTX
@@ -110,10 +114,23 @@ struct reg_bits_to_feat_map {
#define FEAT_EVT_TTLBxS ID_AA64MMFR2_EL1, EVT, TTLBxS
#define FEAT_MTE2 ID_AA64PFR1_EL1, MTE, MTE2
#define FEAT_RME ID_AA64PFR0_EL1, RME, IMP
+#define FEAT_MPAM ID_AA64PFR0_EL1, MPAM, 1
#define FEAT_S2FWB ID_AA64MMFR2_EL1, FWB, IMP
#define FEAT_TME ID_AA64ISAR0_EL1, TME, IMP
#define FEAT_TWED ID_AA64MMFR1_EL1, TWED, IMP
#define FEAT_E2H0 ID_AA64MMFR4_EL1, E2H0, IMP
+#define FEAT_SRMASK ID_AA64MMFR4_EL1, SRMASK, IMP
+#define FEAT_PoPS ID_AA64MMFR4_EL1, PoPS, IMP
+#define FEAT_PFAR ID_AA64PFR1_EL1, PFAR, IMP
+#define FEAT_Debugv8p9 ID_AA64DFR0_EL1, PMUVer, V3P9
+#define FEAT_PMUv3_SS ID_AA64DFR0_EL1, PMSS, IMP
+#define FEAT_SEBEP ID_AA64DFR0_EL1, SEBEP, IMP
+#define FEAT_EBEP ID_AA64DFR1_EL1, EBEP, IMP
+#define FEAT_ITE ID_AA64DFR1_EL1, ITE, IMP
+#define FEAT_PMUv3_ICNTR ID_AA64DFR1_EL1, PMICNTR, IMP
+#define FEAT_SPMU ID_AA64DFR1_EL1, SPMU, IMP
+#define FEAT_SPE_nVM ID_AA64DFR2_EL1, SPE_nVM, IMP
+#define FEAT_STEP2 ID_AA64DFR2_EL1, STEP, IMP
static bool not_feat_aa64el3(struct kvm *kvm)
{
@@ -180,6 +197,32 @@ static bool feat_sme_smps(struct kvm *kvm)
(read_sysreg_s(SYS_SMIDR_EL1) & SMIDR_EL1_SMPS));
}
+static bool feat_spe_fds(struct kvm *kvm)
+{
+ /*
+ * Revisit this if KVM ever supports SPE -- this really should
+ * look at the guest's view of PMSIDR_EL1.
+ */
+ return (kvm_has_feat(kvm, FEAT_SPEv1p4) &&
+ (read_sysreg_s(SYS_PMSIDR_EL1) & PMSIDR_EL1_FDS));
+}
+
+static bool feat_trbe_mpam(struct kvm *kvm)
+{
+ /*
+ * Revisit this if KVM ever supports both MPAM and TRBE --
+ * this really should look at the guest's view of TRBIDR_EL1.
+ */
+ return (kvm_has_feat(kvm, FEAT_TRBE) &&
+ kvm_has_feat(kvm, FEAT_MPAM) &&
+ (read_sysreg_s(SYS_TRBIDR_EL1) & TRBIDR_EL1_MPAM));
+}
+
+static bool feat_ebep_pmuv3_ss(struct kvm *kvm)
+{
+ return kvm_has_feat(kvm, FEAT_EBEP) || kvm_has_feat(kvm, FEAT_PMUv3_SS);
+}
+
static bool compute_hcr_rw(struct kvm *kvm, u64 *bits)
{
/* This is purely academic: AArch32 and NV are mutually exclusive */
@@ -589,6 +632,106 @@ static const struct reg_bits_to_feat_map hafgrtr_feat_map[] = {
FEAT_AMUv1),
};
+static const struct reg_bits_to_feat_map hfgitr2_feat_map[] = {
+ NEEDS_FEAT(HFGITR2_EL2_nDCCIVAPS, FEAT_PoPS),
+ NEEDS_FEAT(HFGITR2_EL2_TSBCSYNC, FEAT_TRBEv1p1)
+};
+
+static const struct reg_bits_to_feat_map hfgrtr2_feat_map[] = {
+ NEEDS_FEAT(HFGRTR2_EL2_nPFAR_EL1, FEAT_PFAR),
+ NEEDS_FEAT(HFGRTR2_EL2_nERXGSR_EL1, FEAT_RASv2),
+ NEEDS_FEAT(HFGRTR2_EL2_nACTLRALIAS_EL1 |
+ HFGRTR2_EL2_nACTLRMASK_EL1 |
+ HFGRTR2_EL2_nCPACRALIAS_EL1 |
+ HFGRTR2_EL2_nCPACRMASK_EL1 |
+ HFGRTR2_EL2_nSCTLR2MASK_EL1 |
+ HFGRTR2_EL2_nSCTLRALIAS2_EL1 |
+ HFGRTR2_EL2_nSCTLRALIAS_EL1 |
+ HFGRTR2_EL2_nSCTLRMASK_EL1 |
+ HFGRTR2_EL2_nTCR2ALIAS_EL1 |
+ HFGRTR2_EL2_nTCR2MASK_EL1 |
+ HFGRTR2_EL2_nTCRALIAS_EL1 |
+ HFGRTR2_EL2_nTCRMASK_EL1,
+ FEAT_SRMASK),
+ NEEDS_FEAT(HFGRTR2_EL2_nRCWSMASK_EL1, FEAT_THE),
+};
+
+static const struct reg_bits_to_feat_map hfgwtr2_feat_map[] = {
+ NEEDS_FEAT(HFGWTR2_EL2_nPFAR_EL1, FEAT_PFAR),
+ NEEDS_FEAT(HFGWTR2_EL2_nACTLRALIAS_EL1 |
+ HFGWTR2_EL2_nACTLRMASK_EL1 |
+ HFGWTR2_EL2_nCPACRALIAS_EL1 |
+ HFGWTR2_EL2_nCPACRMASK_EL1 |
+ HFGWTR2_EL2_nSCTLR2MASK_EL1 |
+ HFGWTR2_EL2_nSCTLRALIAS2_EL1 |
+ HFGWTR2_EL2_nSCTLRALIAS_EL1 |
+ HFGWTR2_EL2_nSCTLRMASK_EL1 |
+ HFGWTR2_EL2_nTCR2ALIAS_EL1 |
+ HFGWTR2_EL2_nTCR2MASK_EL1 |
+ HFGWTR2_EL2_nTCRALIAS_EL1 |
+ HFGWTR2_EL2_nTCRMASK_EL1,
+ FEAT_SRMASK),
+ NEEDS_FEAT(HFGWTR2_EL2_nRCWSMASK_EL1, FEAT_THE),
+};
+
+static const struct reg_bits_to_feat_map hdfgrtr2_feat_map[] = {
+ NEEDS_FEAT(HDFGRTR2_EL2_nMDSELR_EL1, FEAT_Debugv8p9),
+ NEEDS_FEAT(HDFGRTR2_EL2_nPMECR_EL1, feat_ebep_pmuv3_ss),
+ NEEDS_FEAT(HDFGRTR2_EL2_nTRCITECR_EL1, FEAT_ITE),
+ NEEDS_FEAT(HDFGRTR2_EL2_nPMICFILTR_EL0 |
+ HDFGRTR2_EL2_nPMICNTR_EL0,
+ FEAT_PMUv3_ICNTR),
+ NEEDS_FEAT(HDFGRTR2_EL2_nPMUACR_EL1, FEAT_PMUv3p9),
+ NEEDS_FEAT(HDFGRTR2_EL2_nPMSSCR_EL1 |
+ HDFGRTR2_EL2_nPMSSDATA,
+ FEAT_PMUv3_SS),
+ NEEDS_FEAT(HDFGRTR2_EL2_nPMIAR_EL1, FEAT_SEBEP),
+ NEEDS_FEAT(HDFGRTR2_EL2_nPMSDSFR_EL1, feat_spe_fds),
+ NEEDS_FEAT(HDFGRTR2_EL2_nPMBMAR_EL1, FEAT_SPE_nVM),
+ NEEDS_FEAT(HDFGRTR2_EL2_nSPMACCESSR_EL1 |
+ HDFGRTR2_EL2_nSPMCNTEN |
+ HDFGRTR2_EL2_nSPMCR_EL0 |
+ HDFGRTR2_EL2_nSPMDEVAFF_EL1 |
+ HDFGRTR2_EL2_nSPMEVCNTRn_EL0 |
+ HDFGRTR2_EL2_nSPMEVTYPERn_EL0|
+ HDFGRTR2_EL2_nSPMID |
+ HDFGRTR2_EL2_nSPMINTEN |
+ HDFGRTR2_EL2_nSPMOVS |
+ HDFGRTR2_EL2_nSPMSCR_EL1 |
+ HDFGRTR2_EL2_nSPMSELR_EL0,
+ FEAT_SPMU),
+ NEEDS_FEAT(HDFGRTR2_EL2_nMDSTEPOP_EL1, FEAT_STEP2),
+ NEEDS_FEAT(HDFGRTR2_EL2_nTRBMPAM_EL1, feat_trbe_mpam),
+};
+
+static const struct reg_bits_to_feat_map hdfgwtr2_feat_map[] = {
+ NEEDS_FEAT(HDFGWTR2_EL2_nMDSELR_EL1, FEAT_Debugv8p9),
+ NEEDS_FEAT(HDFGWTR2_EL2_nPMECR_EL1, feat_ebep_pmuv3_ss),
+ NEEDS_FEAT(HDFGWTR2_EL2_nTRCITECR_EL1, FEAT_ITE),
+ NEEDS_FEAT(HDFGWTR2_EL2_nPMICFILTR_EL0 |
+ HDFGWTR2_EL2_nPMICNTR_EL0,
+ FEAT_PMUv3_ICNTR),
+ NEEDS_FEAT(HDFGWTR2_EL2_nPMUACR_EL1 |
+ HDFGWTR2_EL2_nPMZR_EL0,
+ FEAT_PMUv3p9),
+ NEEDS_FEAT(HDFGWTR2_EL2_nPMSSCR_EL1, FEAT_PMUv3_SS),
+ NEEDS_FEAT(HDFGWTR2_EL2_nPMIAR_EL1, FEAT_SEBEP),
+ NEEDS_FEAT(HDFGWTR2_EL2_nPMSDSFR_EL1, feat_spe_fds),
+ NEEDS_FEAT(HDFGWTR2_EL2_nPMBMAR_EL1, FEAT_SPE_nVM),
+ NEEDS_FEAT(HDFGWTR2_EL2_nSPMACCESSR_EL1 |
+ HDFGWTR2_EL2_nSPMCNTEN |
+ HDFGWTR2_EL2_nSPMCR_EL0 |
+ HDFGWTR2_EL2_nSPMEVCNTRn_EL0 |
+ HDFGWTR2_EL2_nSPMEVTYPERn_EL0|
+ HDFGWTR2_EL2_nSPMINTEN |
+ HDFGWTR2_EL2_nSPMOVS |
+ HDFGWTR2_EL2_nSPMSCR_EL1 |
+ HDFGWTR2_EL2_nSPMSELR_EL0,
+ FEAT_SPMU),
+ NEEDS_FEAT(HDFGWTR2_EL2_nMDSTEPOP_EL1, FEAT_STEP2),
+ NEEDS_FEAT(HDFGWTR2_EL2_nTRBMPAM_EL1, feat_trbe_mpam),
+};
+
static const struct reg_bits_to_feat_map hcrx_feat_map[] = {
NEEDS_FEAT(HCRX_EL2_PACMEn, feat_pauth_lr),
NEEDS_FEAT(HCRX_EL2_EnFPM, FEAT_FPMR),
@@ -820,6 +963,27 @@ void compute_fgu(struct kvm *kvm, enum fgt_group_id fgt)
ARRAY_SIZE(hafgrtr_feat_map),
0, NEVER_FGU);
break;
+ case HFGRTR2_GROUP:
+ val |= compute_res0_bits(kvm, hfgrtr2_feat_map,
+ ARRAY_SIZE(hfgrtr2_feat_map),
+ 0, NEVER_FGU);
+ val |= compute_res0_bits(kvm, hfgwtr2_feat_map,
+ ARRAY_SIZE(hfgwtr2_feat_map),
+ 0, NEVER_FGU);
+ break;
+ case HFGITR2_GROUP:
+ val |= compute_res0_bits(kvm, hfgitr2_feat_map,
+ ARRAY_SIZE(hfgitr2_feat_map),
+ 0, NEVER_FGU);
+ break;
+ case HDFGRTR2_GROUP:
+ val |= compute_res0_bits(kvm, hdfgrtr2_feat_map,
+ ARRAY_SIZE(hdfgrtr2_feat_map),
+ 0, NEVER_FGU);
+ val |= compute_res0_bits(kvm, hdfgwtr2_feat_map,
+ ARRAY_SIZE(hdfgwtr2_feat_map),
+ 0, NEVER_FGU);
+ break;
default:
BUG();
}
@@ -868,6 +1032,36 @@ void get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg, u64 *res0, u64 *r
*res0 |= hafgrtr_masks.res0;
*res1 = HAFGRTR_EL2_RES1;
break;
+ case HFGRTR2_EL2:
+ *res0 = compute_res0_bits(kvm, hfgrtr2_feat_map,
+ ARRAY_SIZE(hfgrtr2_feat_map), 0, 0);
+ *res0 |= hfgrtr2_masks.res0;
+ *res1 = HFGRTR2_EL2_RES1;
+ break;
+ case HFGWTR2_EL2:
+ *res0 = compute_res0_bits(kvm, hfgwtr2_feat_map,
+ ARRAY_SIZE(hfgwtr2_feat_map), 0, 0);
+ *res0 |= hfgwtr2_masks.res0;
+ *res1 = HFGWTR2_EL2_RES1;
+ break;
+ case HFGITR2_EL2:
+ *res0 = compute_res0_bits(kvm, hfgitr2_feat_map,
+ ARRAY_SIZE(hfgitr2_feat_map), 0, 0);
+ *res0 |= hfgitr2_masks.res0;
+ *res1 = HFGITR2_EL2_RES1;
+ break;
+ case HDFGRTR2_EL2:
+ *res0 = compute_res0_bits(kvm, hdfgrtr2_feat_map,
+ ARRAY_SIZE(hdfgrtr2_feat_map), 0, 0);
+ *res0 |= hdfgrtr2_masks.res0;
+ *res1 = HDFGRTR2_EL2_RES1;
+ break;
+ case HDFGWTR2_EL2:
+ *res0 = compute_res0_bits(kvm, hdfgwtr2_feat_map,
+ ARRAY_SIZE(hdfgwtr2_feat_map), 0, 0);
+ *res0 |= hdfgwtr2_masks.res0;
+ *res1 = HDFGWTR2_EL2_RES1;
+ break;
case HCRX_EL2:
*res0 = compute_res0_bits(kvm, hcrx_feat_map,
ARRAY_SIZE(hcrx_feat_map), 0, 0);
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 0b033d3a3d7a4..3312aefa095e0 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -2060,6 +2060,11 @@ FGT_MASKS(hfgitr_masks, HFGITR_EL2_RES0);
FGT_MASKS(hdfgrtr_masks, HDFGRTR_EL2_RES0);
FGT_MASKS(hdfgwtr_masks, HDFGWTR_EL2_RES0);
FGT_MASKS(hafgrtr_masks, HAFGRTR_EL2_RES0);
+FGT_MASKS(hfgrtr2_masks, HFGRTR2_EL2_RES0);
+FGT_MASKS(hfgwtr2_masks, HFGWTR2_EL2_RES0);
+FGT_MASKS(hfgitr2_masks, HFGITR2_EL2_RES0);
+FGT_MASKS(hdfgrtr2_masks, HDFGRTR2_EL2_RES0);
+FGT_MASKS(hdfgwtr2_masks, HDFGWTR2_EL2_RES0);
static __init bool aggregate_fgt(union trap_config tc)
{
@@ -2082,6 +2087,18 @@ static __init bool aggregate_fgt(union trap_config tc)
rmasks = &hfgitr_masks;
wmasks = NULL;
break;
+ case HFGRTR2_GROUP:
+ rmasks = &hfgrtr2_masks;
+ wmasks = &hfgwtr2_masks;
+ break;
+ case HDFGRTR2_GROUP:
+ rmasks = &hdfgrtr2_masks;
+ wmasks = &hdfgwtr2_masks;
+ break;
+ case HFGITR2_GROUP:
+ rmasks = &hfgitr2_masks;
+ wmasks = NULL;
+ break;
}
/*
@@ -2141,6 +2158,11 @@ static __init int check_all_fgt_masks(int ret)
&hdfgrtr_masks,
&hdfgwtr_masks,
&hafgrtr_masks,
+ &hfgrtr2_masks,
+ &hfgwtr2_masks,
+ &hfgitr2_masks,
+ &hdfgrtr2_masks,
+ &hdfgwtr2_masks,
};
int err = 0;
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index ae55d6d87e3d2..6947aaf117f63 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -39,6 +39,11 @@ struct fgt_masks hfgitr_masks;
struct fgt_masks hdfgrtr_masks;
struct fgt_masks hdfgwtr_masks;
struct fgt_masks hafgrtr_masks;
+struct fgt_masks hfgrtr2_masks;
+struct fgt_masks hfgwtr2_masks;
+struct fgt_masks hfgitr2_masks;
+struct fgt_masks hdfgrtr2_masks;
+struct fgt_masks hdfgwtr2_masks;
extern void kvm_nvhe_prepare_backtrace(unsigned long fp, unsigned long pc);
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index b633666be6df4..f6a5736ba7ef7 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -1045,6 +1045,22 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
get_reg_fixed_bits(kvm, HAFGRTR_EL2, &res0, &res1);
set_sysreg_masks(kvm, HAFGRTR_EL2, res0, res1);
+ /* HFG[RW]TR2_EL2 */
+ get_reg_fixed_bits(kvm, HFGRTR2_EL2, &res0, &res1);
+ set_sysreg_masks(kvm, HFGRTR2_EL2, res0, res1);
+ get_reg_fixed_bits(kvm, HFGWTR2_EL2, &res0, &res1);
+ set_sysreg_masks(kvm, HFGWTR2_EL2, res0, res1);
+
+ /* HDFG[RW]TR2_EL2 */
+ get_reg_fixed_bits(kvm, HDFGRTR2_EL2, &res0, &res1);
+ set_sysreg_masks(kvm, HDFGRTR2_EL2, res0, res1);
+ get_reg_fixed_bits(kvm, HDFGWTR2_EL2, &res0, &res1);
+ set_sysreg_masks(kvm, HDFGWTR2_EL2, res0, res1);
+
+ /* HFGITR2_EL2 */
+ get_reg_fixed_bits(kvm, HFGITR2_EL2, &res0, &res1);
+ set_sysreg_masks(kvm, HFGITR2_EL2, res0, res1);
+
/* TCR2_EL2 */
res0 = TCR2_EL2_RES0;
res1 = TCR2_EL2_RES1;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f24d1a7d9a8f4..8b994690bf1ad 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -5151,6 +5151,9 @@ void kvm_calculate_traps(struct kvm_vcpu *vcpu)
compute_fgu(kvm, HFGITR_GROUP);
compute_fgu(kvm, HDFGRTR_GROUP);
compute_fgu(kvm, HAFGRTR_GROUP);
+ compute_fgu(kvm, HFGRTR2_GROUP);
+ compute_fgu(kvm, HFGITR2_GROUP);
+ compute_fgu(kvm, HDFGRTR2_GROUP);
set_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags);
out:
--
2.39.2
^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH v3 38/42] KVM: arm64: Add trap routing for FEAT_FGT2 registers
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (36 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 37/42] KVM: arm64: Add sanitisation for FEAT_FGT2 registers Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 39/42] KVM: arm64: Add context-switch " Marc Zyngier
` (4 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Similarly to the FEAT_FGT registers, pick the correct FEAT_FGT2
register when a sysreg trap indicates that one of them could be
responsible for the exception.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/emulate-nested.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 3312aefa095e0..e2a843675da96 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -2485,6 +2485,18 @@ bool triage_sysreg_trap(struct kvm_vcpu *vcpu, int *sr_index)
}
break;
+ case HFGRTR2_GROUP:
+ fgtreg = is_read ? HFGRTR2_EL2 : HFGWTR2_EL2;
+ break;
+
+ case HDFGRTR2_GROUP:
+ fgtreg = is_read ? HDFGRTR2_EL2 : HDFGWTR2_EL2;
+ break;
+
+ case HFGITR2_GROUP:
+ fgtreg = HFGITR2_EL2;
+ break;
+
default:
/* Something is really wrong, bail out */
WARN_ONCE(1, "Bad FGT group (encoding %08x, config %016llx)\n",
--
2.39.2
^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH v3 39/42] KVM: arm64: Add context-switch for FEAT_FGT2 registers
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (37 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 38/42] KVM: arm64: Add trap routing " Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-26 12:28 ` [PATCH v3 40/42] KVM: arm64: Allow sysreg ranges for FGT descriptors Marc Zyngier
` (3 subsequent siblings)
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Just like the rest of the FGT registers, perform a context-switch of
their FGT2 equivalents. This avoids the host configuration leaking into
the guest...
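The shape of the switch is the same as for the existing FGT registers:
save the host value on guest entry, install the guest-facing value, and
write the host value back on exit. A hand-written sketch for a single
register follows; it is simplified, as the real code goes through the
update_fgt_traps()/__deactivate_fgt() macros shown below, which also fold
in KVM's own trap requirements, and it assumes SYS_HFGITR2_EL2 is the
encoding generated by the earlier sysreg patches.

/*
 * Simplified sketch of the per-register switch. The real macros also
 * merge the guest's value with the bits KVM itself needs to trap.
 */
static inline void sketch_activate_hfgitr2(struct kvm_cpu_context *hctxt,
					   struct kvm_vcpu *vcpu)
{
	/* stash the host configuration... */
	ctxt_sys_reg(hctxt, HFGITR2_EL2) = read_sysreg_s(SYS_HFGITR2_EL2);
	/* ...and install the value computed for the guest */
	write_sysreg_s(__vcpu_sys_reg(vcpu, HFGITR2_EL2), SYS_HFGITR2_EL2);
}

static inline void sketch_deactivate_hfgitr2(struct kvm_cpu_context *hctxt)
{
	/* restore the host configuration on exit */
	write_sysreg_s(ctxt_sys_reg(hctxt, HFGITR2_EL2), SYS_HFGITR2_EL2);
}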
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/hyp/include/hyp/switch.h | 44 +++++++++++++++++++++++++
1 file changed, 44 insertions(+)
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 0d61ec3e907d4..f131bca36bd3d 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -87,6 +87,21 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
case HAFGRTR_EL2: \
m = &hafgrtr_masks; \
break; \
+ case HFGRTR2_EL2: \
+ m = &hfgrtr2_masks; \
+ break; \
+ case HFGWTR2_EL2: \
+ m = &hfgwtr2_masks; \
+ break; \
+ case HFGITR2_EL2: \
+ m = &hfgitr2_masks; \
+ break; \
+ case HDFGRTR2_EL2: \
+ m = &hdfgrtr2_masks; \
+ break; \
+ case HDFGWTR2_EL2: \
+ m = &hdfgwtr2_masks; \
+ break; \
default: \
BUILD_BUG_ON(1); \
} \
@@ -120,6 +135,17 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
case HAFGRTR_EL2: \
id = HAFGRTR_GROUP; \
break; \
+ case HFGRTR2_EL2: \
+ case HFGWTR2_EL2: \
+ id = HFGRTR2_GROUP; \
+ break; \
+ case HFGITR2_EL2: \
+ id = HFGITR2_GROUP; \
+ break; \
+ case HDFGRTR2_EL2: \
+ case HDFGWTR2_EL2: \
+ id = HDFGRTR2_GROUP; \
+ break; \
default: \
BUILD_BUG_ON(1); \
} \
@@ -182,6 +208,15 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
if (cpu_has_amu())
update_fgt_traps(hctxt, vcpu, kvm, HAFGRTR_EL2);
+
+ if (!cpus_have_final_cap(ARM64_HAS_FGT2))
+ return;
+
+ update_fgt_traps(hctxt, vcpu, kvm, HFGRTR2_EL2);
+ update_fgt_traps(hctxt, vcpu, kvm, HFGWTR2_EL2);
+ update_fgt_traps(hctxt, vcpu, kvm, HFGITR2_EL2);
+ update_fgt_traps(hctxt, vcpu, kvm, HDFGRTR2_EL2);
+ update_fgt_traps(hctxt, vcpu, kvm, HDFGWTR2_EL2);
}
#define __deactivate_fgt(htcxt, vcpu, reg) \
@@ -205,6 +240,15 @@ static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu *vcpu)
if (cpu_has_amu())
__deactivate_fgt(hctxt, vcpu, HAFGRTR_EL2);
+
+ if (!cpus_have_final_cap(ARM64_HAS_FGT2))
+ return;
+
+ __deactivate_fgt(hctxt, vcpu, HFGRTR2_EL2);
+ __deactivate_fgt(hctxt, vcpu, HFGWTR2_EL2);
+ __deactivate_fgt(hctxt, vcpu, HFGITR2_EL2);
+ __deactivate_fgt(hctxt, vcpu, HDFGRTR2_EL2);
+ __deactivate_fgt(hctxt, vcpu, HDFGWTR2_EL2);
}
static inline void __activate_traps_mpam(struct kvm_vcpu *vcpu)
--
2.39.2
^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH v3 40/42] KVM: arm64: Allow sysreg ranges for FGT descriptors
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (38 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 39/42] KVM: arm64: Add context-switch " Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-29 13:08 ` Ben Horgan
2025-04-26 12:28 ` [PATCH v3 41/42] KVM: arm64: Add FGT descriptors for FEAT_FGT2 Marc Zyngier
` (2 subsequent siblings)
42 siblings, 1 reply; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Just like we allow sysreg ranges for Coarse Grained Trap descriptors,
allow them for Fine Grained Traps as well.
This comes with a warning: not all architectural register ranges are
contiguous in the encoding space, so not all of them are suitable for
this particular definition of a range.
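To make the warning concrete: sysreg encodings are packed as
(Op0,Op1,CRn,CRm,Op2), and a range descriptor simply enumerates every
encoding between its start and end in that packed order. This only
describes the right set of registers when the bank is laid out
back-to-back in encoding space, as PMEVCNTR<n>_EL0 happens to be. A small
illustration, assuming the usual sys_reg() packing macro from
asm/sysreg.h:

/*
 * PMEVCNTR<n>_EL0 maps n linearly onto (CRm, Op2): CRm = 8 + n/8,
 * Op2 = n % 8. Walking the encodings from n = 0 to n = 30 therefore
 * visits exactly these 31 counters and nothing else, which is what
 * makes a range descriptor safe here. Banks without such a layout
 * must keep individual per-register entries.
 */
static u32 pmevcntr_encoding(unsigned int n)	/* n = 0..30 */
{
	return sys_reg(3, 3, 14, 8 + (n / 8), n % 8);
}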
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/emulate-nested.c | 120 +++++++++++---------------------
1 file changed, 39 insertions(+), 81 deletions(-)
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index e2a843675da96..9c7ecfccbd6e9 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -622,6 +622,11 @@ struct encoding_to_trap_config {
const unsigned int line;
};
+/*
+ * WARNING: using ranges is a treacherous endeavour, as sysregs that
+ * are part of an architectural range are not necessarily contiguous
+ * in the [Op0,Op1,CRn,CRm,Ops] space. Tread carefully.
+ */
#define SR_RANGE_TRAP(sr_start, sr_end, trap_id) \
{ \
.encoding = sr_start, \
@@ -1289,15 +1294,19 @@ enum fg_filter_id {
#define FGT(g, b, p) __FGT(g, b, p, __NO_FGF__)
-#define SR_FGF(sr, g, b, p, f) \
+/* Same warning applies: use carefully */
+#define SR_FGF_RANGE(sr, e, g, b, p, f) \
{ \
.encoding = sr, \
- .end = sr, \
+ .end = e, \
.tc = __FGT(g, b, p, f), \
.line = __LINE__, \
}
-#define SR_FGT(sr, g, b, p) SR_FGF(sr, g, b, p, __NO_FGF__)
+#define SR_FGF(sr, g, b, p, f) SR_FGF_RANGE(sr, sr, g, b, p, f)
+#define SR_FGT(sr, g, b, p) SR_FGF_RANGE(sr, sr, g, b, p, __NO_FGF__)
+#define SR_FGT_RANGE(sr, end, g, b, p) \
+ SR_FGF_RANGE(sr, end, g, b, p, __NO_FGF__)
static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
/* HFGRTR_EL2, HFGWTR_EL2 */
@@ -1794,68 +1803,12 @@ static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
SR_FGT(SYS_PMCNTENSET_EL0, HDFGRTR, PMCNTEN, 1),
SR_FGT(SYS_PMCCNTR_EL0, HDFGRTR, PMCCNTR_EL0, 1),
SR_FGT(SYS_PMCCFILTR_EL0, HDFGRTR, PMCCFILTR_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(0), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(1), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(2), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(3), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(4), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(5), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(6), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(7), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(8), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(9), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(10), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(11), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(12), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(13), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(14), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(15), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(16), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(17), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(18), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(19), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(20), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(21), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(22), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(23), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(24), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(25), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(26), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(27), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(28), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(29), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVTYPERn_EL0(30), HDFGRTR, PMEVTYPERn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(0), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(1), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(2), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(3), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(4), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(5), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(6), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(7), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(8), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(9), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(10), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(11), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(12), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(13), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(14), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(15), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(16), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(17), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(18), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(19), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(20), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(21), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(22), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(23), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(24), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(25), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(26), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(27), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(28), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(29), HDFGRTR, PMEVCNTRn_EL0, 1),
- SR_FGT(SYS_PMEVCNTRn_EL0(30), HDFGRTR, PMEVCNTRn_EL0, 1),
+ SR_FGT_RANGE(SYS_PMEVTYPERn_EL0(0),
+ SYS_PMEVTYPERn_EL0(30),
+ HDFGRTR, PMEVTYPERn_EL0, 1),
+ SR_FGT_RANGE(SYS_PMEVCNTRn_EL0(0),
+ SYS_PMEVCNTRn_EL0(30),
+ HDFGRTR, PMEVCNTRn_EL0, 1),
SR_FGT(SYS_OSDLR_EL1, HDFGRTR, OSDLR_EL1, 1),
SR_FGT(SYS_OSECCR_EL1, HDFGRTR, OSECCR_EL1, 1),
SR_FGT(SYS_OSLSR_EL1, HDFGRTR, OSLSR_EL1, 1),
@@ -2172,6 +2125,9 @@ static __init int check_all_fgt_masks(int ret)
return ret ?: err;
}
+#define for_each_encoding_in(__x, __s, __e) \
+ for (u32 __x = __s; __x <= __e; __x = encoding_next(__x))
+
int __init populate_nv_trap_config(void)
{
int ret = 0;
@@ -2191,7 +2147,7 @@ int __init populate_nv_trap_config(void)
ret = -EINVAL;
}
- for (u32 enc = cgt->encoding; enc <= cgt->end; enc = encoding_next(enc)) {
+ for_each_encoding_in(enc, cgt->encoding, cgt->end) {
prev = xa_store(&sr_forward_xa, enc,
xa_mk_value(cgt->tc.val), GFP_KERNEL);
if (prev && !xa_is_err(prev)) {
@@ -2226,25 +2182,27 @@ int __init populate_nv_trap_config(void)
print_nv_trap_error(fgt, "Invalid FGT", ret);
}
- tc = get_trap_config(fgt->encoding);
+ for_each_encoding_in(enc, fgt->encoding, fgt->end) {
+ tc = get_trap_config(enc);
- if (tc.fgt) {
- ret = -EINVAL;
- print_nv_trap_error(fgt, "Duplicate FGT", ret);
- }
+ if (tc.fgt) {
+ ret = -EINVAL;
+ print_nv_trap_error(fgt, "Duplicate FGT", ret);
+ }
- tc.val |= fgt->tc.val;
- prev = xa_store(&sr_forward_xa, fgt->encoding,
- xa_mk_value(tc.val), GFP_KERNEL);
+ tc.val |= fgt->tc.val;
+ prev = xa_store(&sr_forward_xa, enc,
+ xa_mk_value(tc.val), GFP_KERNEL);
- if (xa_is_err(prev)) {
- ret = xa_err(prev);
- print_nv_trap_error(fgt, "Failed FGT insertion", ret);
- }
+ if (xa_is_err(prev)) {
+ ret = xa_err(prev);
+ print_nv_trap_error(fgt, "Failed FGT insertion", ret);
+ }
- if (!aggregate_fgt(tc)) {
- ret = -EINVAL;
- print_nv_trap_error(fgt, "FGT bit is reserved", ret);
+ if (!aggregate_fgt(tc)) {
+ ret = -EINVAL;
+ print_nv_trap_error(fgt, "FGT bit is reserved", ret);
+ }
}
}
--
2.39.2
^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH v3 41/42] KVM: arm64: Add FGT descriptors for FEAT_FGT2
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (39 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 40/42] KVM: arm64: Allow sysreg ranges for FGT descriptors Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-29 13:09 ` Ben Horgan
2025-04-26 12:28 ` [PATCH v3 42/42] KVM: arm64: Handle TSB CSYNC traps Marc Zyngier
2025-04-28 18:33 ` [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Ganapatrao Kulkarni
42 siblings, 1 reply; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Bulk addition of all the FGT2 traps reported with EC == 0x18,
as described in the 2025-03 JSON drop.
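As context for how these descriptors are consumed (a rough sketch only;
the real decision is made in triage_sysreg_trap() using the tables built
at init time): an EC==0x18 trap is looked up by sysreg encoding, and the
(group, bit, polarity) triplet from the descriptor decides whether the
access must be forwarded to a nested hypervisor based on its FGT2
register. The type and helper below are illustrative, not KVM's.

#include <linux/types.h>
#include <linux/bits.h>

/* Illustrative descriptor: which FGT2 bit governs a given sysreg */
struct fgt2_desc {
	unsigned int bit;	/* bit position in the FGT2 register */
	bool positive;		/* trap when set (1) or when clear (nXXX, 0) */
};

static bool sketch_must_forward(u64 guest_fgt2, const struct fgt2_desc *d)
{
	bool set = guest_fgt2 & BIT(d->bit);

	/* nXXX bits trap the access when 0, positive bits when 1 */
	return d->positive ? set : !set;
}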
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/emulate-nested.c | 83 +++++++++++++++++++++++++++++++++
1 file changed, 83 insertions(+)
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 9c7ecfccbd6e9..f7678af272bbb 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -1385,6 +1385,24 @@ static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
SR_FGT(SYS_AIDR_EL1, HFGRTR, AIDR_EL1, 1),
SR_FGT(SYS_AFSR1_EL1, HFGRTR, AFSR1_EL1, 1),
SR_FGT(SYS_AFSR0_EL1, HFGRTR, AFSR0_EL1, 1),
+
+ /* HFGRTR2_EL2, HFGWTR2_EL2 */
+ SR_FGT(SYS_ACTLRALIAS_EL1, HFGRTR2, nACTLRALIAS_EL1, 0),
+ SR_FGT(SYS_ACTLRMASK_EL1, HFGRTR2, nACTLRMASK_EL1, 0),
+ SR_FGT(SYS_CPACRALIAS_EL1, HFGRTR2, nCPACRALIAS_EL1, 0),
+ SR_FGT(SYS_CPACRMASK_EL1, HFGRTR2, nCPACRMASK_EL1, 0),
+ SR_FGT(SYS_PFAR_EL1, HFGRTR2, nPFAR_EL1, 0),
+ SR_FGT(SYS_RCWSMASK_EL1, HFGRTR2, nRCWSMASK_EL1, 0),
+ SR_FGT(SYS_SCTLR2ALIAS_EL1, HFGRTR2, nSCTLRALIAS2_EL1, 0),
+ SR_FGT(SYS_SCTLR2MASK_EL1, HFGRTR2, nSCTLR2MASK_EL1, 0),
+ SR_FGT(SYS_SCTLRALIAS_EL1, HFGRTR2, nSCTLRALIAS_EL1, 0),
+ SR_FGT(SYS_SCTLRMASK_EL1, HFGRTR2, nSCTLRMASK_EL1, 0),
+ SR_FGT(SYS_TCR2ALIAS_EL1, HFGRTR2, nTCR2ALIAS_EL1, 0),
+ SR_FGT(SYS_TCR2MASK_EL1, HFGRTR2, nTCR2MASK_EL1, 0),
+ SR_FGT(SYS_TCRALIAS_EL1, HFGRTR2, nTCRALIAS_EL1, 0),
+ SR_FGT(SYS_TCRMASK_EL1, HFGRTR2, nTCRMASK_EL1, 0),
+ SR_FGT(SYS_ERXGSR_EL1, HFGRTR2, nERXGSR_EL1, 0),
+
/* HFGITR_EL2 */
SR_FGT(OP_AT_S1E1A, HFGITR, ATS1E1A, 1),
SR_FGT(OP_COSP_RCTX, HFGITR, COSPRCTX, 1),
@@ -1494,6 +1512,11 @@ static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
SR_FGT(SYS_IC_IVAU, HFGITR, ICIVAU, 1),
SR_FGT(SYS_IC_IALLU, HFGITR, ICIALLU, 1),
SR_FGT(SYS_IC_IALLUIS, HFGITR, ICIALLUIS, 1),
+
+ /* HFGITR2_EL2 */
+ SR_FGT(SYS_DC_CIGDVAPS, HFGITR2, nDCCIVAPS, 0),
+ SR_FGT(SYS_DC_CIVAPS, HFGITR2, nDCCIVAPS, 0),
+
/* HDFGRTR_EL2 */
SR_FGT(SYS_PMBIDR_EL1, HDFGRTR, PMBIDR_EL1, 1),
SR_FGT(SYS_PMSNEVFR_EL1, HDFGRTR, nPMSNEVFR_EL1, 0),
@@ -1886,6 +1909,59 @@ static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
SR_FGT(SYS_DBGBCRn_EL1(13), HDFGRTR, DBGBCRn_EL1, 1),
SR_FGT(SYS_DBGBCRn_EL1(14), HDFGRTR, DBGBCRn_EL1, 1),
SR_FGT(SYS_DBGBCRn_EL1(15), HDFGRTR, DBGBCRn_EL1, 1),
+
+ /* HDFGRTR2_EL2 */
+ SR_FGT(SYS_MDSELR_EL1, HDFGRTR2, nMDSELR_EL1, 0),
+ SR_FGT(SYS_MDSTEPOP_EL1, HDFGRTR2, nMDSTEPOP_EL1, 0),
+ SR_FGT(SYS_PMCCNTSVR_EL1, HDFGRTR2, nPMSSDATA, 0),
+ SR_FGT_RANGE(SYS_PMEVCNTSVRn_EL1(0),
+ SYS_PMEVCNTSVRn_EL1(30),
+ HDFGRTR2, nPMSSDATA, 0),
+ SR_FGT(SYS_PMICNTSVR_EL1, HDFGRTR2, nPMSSDATA, 0),
+ SR_FGT(SYS_PMECR_EL1, HDFGRTR2, nPMECR_EL1, 0),
+ SR_FGT(SYS_PMIAR_EL1, HDFGRTR2, nPMIAR_EL1, 0),
+ SR_FGT(SYS_PMICFILTR_EL0, HDFGRTR2, nPMICFILTR_EL0, 0),
+ SR_FGT(SYS_PMICNTR_EL0, HDFGRTR2, nPMICNTR_EL0, 0),
+ SR_FGT(SYS_PMSSCR_EL1, HDFGRTR2, nPMSSCR_EL1, 0),
+ SR_FGT(SYS_PMUACR_EL1, HDFGRTR2, nPMUACR_EL1, 0),
+ SR_FGT(SYS_SPMACCESSR_EL1, HDFGRTR2, nSPMACCESSR_EL1, 0),
+ SR_FGT(SYS_SPMCFGR_EL1, HDFGRTR2, nSPMID, 0),
+ SR_FGT(SYS_SPMDEVARCH_EL1, HDFGRTR2, nSPMID, 0),
+ SR_FGT(SYS_SPMCGCRn_EL1(0), HDFGRTR2, nSPMID, 0),
+ SR_FGT(SYS_SPMCGCRn_EL1(1), HDFGRTR2, nSPMID, 0),
+ SR_FGT(SYS_SPMIIDR_EL1, HDFGRTR2, nSPMID, 0),
+ SR_FGT(SYS_SPMCNTENCLR_EL0, HDFGRTR2, nSPMCNTEN, 0),
+ SR_FGT(SYS_SPMCNTENSET_EL0, HDFGRTR2, nSPMCNTEN, 0),
+ SR_FGT(SYS_SPMCR_EL0, HDFGRTR2, nSPMCR_EL0, 0),
+ SR_FGT(SYS_SPMDEVAFF_EL1, HDFGRTR2, nSPMDEVAFF_EL1, 0),
+ /*
+ * We have up to 64 of these registers in ranges of 16, banked via
+ * SPMSELR_EL0.BANK. We're only concerned with the accessors here,
+ * not the architectural registers.
+ */
+ SR_FGT_RANGE(SYS_SPMEVCNTRn_EL0(0),
+ SYS_SPMEVCNTRn_EL0(15),
+ HDFGRTR2, nSPMEVCNTRn_EL0, 0),
+ SR_FGT_RANGE(SYS_SPMEVFILT2Rn_EL0(0),
+ SYS_SPMEVFILT2Rn_EL0(15),
+ HDFGRTR2, nSPMEVTYPERn_EL0, 0),
+ SR_FGT_RANGE(SYS_SPMEVFILTRn_EL0(0),
+ SYS_SPMEVFILTRn_EL0(15),
+ HDFGRTR2, nSPMEVTYPERn_EL0, 0),
+ SR_FGT_RANGE(SYS_SPMEVTYPERn_EL0(0),
+ SYS_SPMEVTYPERn_EL0(15),
+ HDFGRTR2, nSPMEVTYPERn_EL0, 0),
+ SR_FGT(SYS_SPMINTENCLR_EL1, HDFGRTR2, nSPMINTEN, 0),
+ SR_FGT(SYS_SPMINTENSET_EL1, HDFGRTR2, nSPMINTEN, 0),
+ SR_FGT(SYS_SPMOVSCLR_EL0, HDFGRTR2, nSPMOVS, 0),
+ SR_FGT(SYS_SPMOVSSET_EL0, HDFGRTR2, nSPMOVS, 0),
+ SR_FGT(SYS_SPMSCR_EL1, HDFGRTR2, nSPMSCR_EL1, 0),
+ SR_FGT(SYS_SPMSELR_EL0, HDFGRTR2, nSPMSELR_EL0, 0),
+ SR_FGT(SYS_TRCITECR_EL1, HDFGRTR2, nTRCITECR_EL1, 0),
+ SR_FGT(SYS_PMBMAR_EL1, HDFGRTR2, nPMBMAR_EL1, 0),
+ SR_FGT(SYS_PMSDSFR_EL1, HDFGRTR2, nPMSDSFR_EL1, 0),
+ SR_FGT(SYS_TRBMPAM_EL1, HDFGRTR2, nTRBMPAM_EL1, 0),
+
/*
* HDFGWTR_EL2
*
@@ -1896,12 +1972,19 @@ static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
* read-side mappings, and only the write-side mappings that
* differ from the read side, and the trap handler will pick
* the correct shadow register based on the access type.
+ *
+ * Same model applies to the FEAT_FGT2 registers.
*/
SR_FGT(SYS_TRFCR_EL1, HDFGWTR, TRFCR_EL1, 1),
SR_FGT(SYS_TRCOSLAR, HDFGWTR, TRCOSLAR, 1),
SR_FGT(SYS_PMCR_EL0, HDFGWTR, PMCR_EL0, 1),
SR_FGT(SYS_PMSWINC_EL0, HDFGWTR, PMSWINC_EL0, 1),
SR_FGT(SYS_OSLAR_EL1, HDFGWTR, OSLAR_EL1, 1),
+
+ /* HDFGWTR2_EL2 */
+ SR_FGT(SYS_PMZR_EL0, HDFGWTR2, nPMZR_EL0, 0),
+ SR_FGT(SYS_SPMZR_EL0, HDFGWTR2, nSPMEVCNTRn_EL0, 0),
+
/*
* HAFGRTR_EL2
*/
--
2.39.2
^ permalink raw reply related [flat|nested] 71+ messages in thread
* [PATCH v3 42/42] KVM: arm64: Handle TSB CSYNC traps
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (40 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 41/42] KVM: arm64: Add FGT descriptors for FEAT_FGT2 Marc Zyngier
@ 2025-04-26 12:28 ` Marc Zyngier
2025-04-28 18:33 ` [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Ganapatrao Kulkarni
42 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-26 12:28 UTC (permalink / raw)
To: kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
The architecture introduces a trap for TSB CSYNC that fits in
the same EC as LS64 and PSB CSYNC. Let's deal with it in a similar
way.
It's not that we expect this to be useful any time soon anyway.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/esr.h | 3 ++-
arch/arm64/kvm/emulate-nested.c | 1 +
arch/arm64/kvm/handle_exit.c | 5 +++++
3 files changed, 8 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index ef5a14276ce15..6079e23608a23 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -182,10 +182,11 @@
#define ESR_ELx_WFx_ISS_WFE (UL(1) << 0)
#define ESR_ELx_xVC_IMM_MASK ((UL(1) << 16) - 1)
-/* ISS definitions for LD64B/ST64B/PSBCSYNC instructions */
+/* ISS definitions for LD64B/ST64B/{T,P}SBCSYNC instructions */
#define ESR_ELx_ISS_OTHER_ST64BV (0)
#define ESR_ELx_ISS_OTHER_ST64BV0 (1)
#define ESR_ELx_ISS_OTHER_LDST64B (2)
+#define ESR_ELx_ISS_OTHER_TSBCSYNC (3)
#define ESR_ELx_ISS_OTHER_PSBCSYNC (4)
#define DISR_EL1_IDS (UL(1) << 24)
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index f7678af272bbb..96fcf78df7a79 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -2041,6 +2041,7 @@ static const union trap_config non_0x18_fgt[] __initconst = {
FGT(HFGITR, SVC_EL1, 1),
FGT(HFGITR, SVC_EL0, 1),
FGT(HFGITR, ERET, 1),
+ FGT(HFGITR2, TSBCSYNC, 1),
};
static union trap_config get_trap_config(u32 sysreg)
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 2c07754c11a45..fc21665bc380d 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -347,6 +347,11 @@ static int handle_other(struct kvm_vcpu *vcpu)
if (is_l2)
fwd = !(hcrx & HCRX_EL2_EnALS);
break;
+ case ESR_ELx_ISS_OTHER_TSBCSYNC:
+ allowed = kvm_has_feat(kvm, ID_AA64DFR0_EL1, TraceBuffer, TRBE_V1P1);
+ if (is_l2)
+ fwd = (__vcpu_sys_reg(vcpu, HFGITR2_EL2) & HFGITR2_EL2_TSBCSYNC);
+ break;
case ESR_ELx_ISS_OTHER_PSBCSYNC:
allowed = kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMSVer, V1P5);
if (is_l2)
--
2.39.2
^ permalink raw reply related [flat|nested] 71+ messages in thread
* Re: [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling
2025-04-26 12:27 [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Marc Zyngier
` (41 preceding siblings ...)
2025-04-26 12:28 ` [PATCH v3 42/42] KVM: arm64: Handle TSB CSYNC traps Marc Zyngier
@ 2025-04-28 18:33 ` Ganapatrao Kulkarni
2025-04-28 21:42 ` Marc Zyngier
2025-04-29 7:34 ` Marc Zyngier
42 siblings, 2 replies; 71+ messages in thread
From: Ganapatrao Kulkarni @ 2025-04-28 18:33 UTC (permalink / raw)
To: Marc Zyngier, kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Hi Marc,
On 26-04-2025 17:57, Marc Zyngier wrote:
> [...]
>
I am trying the nv-next branch, and I believe these FGT-related changes
are now merged. With this, the arm64/set_id_regs selftest is failing.
From initial debugging, it seems that the reads of SYS_CTR_EL0,
SYS_MIDR_EL1, SYS_REVIDR_EL1 and SYS_AIDR_EL1 from guest_code trap to
EL2 (HCR_ID1,ID2 are set) and get forwarded back to EL1. Since no EL1
sync handler is installed in the test code, this results in a hang (an
endless guest_exit/entry loop).
This is because the function "triage_sysreg_trap" returns true.
When guest_code runs at EL1 (the default case), it is due to the return
in the if statement below.
if (tc.fgt != __NO_FGT_GROUP__ &&
(vcpu->kvm->arch.fgu[tc.fgt] & BIT(tc.bit))) {
kvm_inject_undefined(vcpu);
return true;
}
IMO, the host should return the value of these sysreg reads instead of
forwarding the trap to the guest -- or does something more need to be
added to the test code?
--
Thanks,
Gk
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling
2025-04-28 18:33 ` [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Ganapatrao Kulkarni
@ 2025-04-28 21:42 ` Marc Zyngier
2025-04-29 7:34 ` Marc Zyngier
1 sibling, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-28 21:42 UTC (permalink / raw)
To: Ganapatrao Kulkarni
Cc: kvmarm, kvm, linux-arm-kernel, Joey Gouly, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Mark Rutland, Fuad Tabba, Will Deacon,
Catalin Marinas
On Mon, 28 Apr 2025 19:33:10 +0100,
Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>
> Hi Marc,
>
> I am trying the nv-next branch, and I believe these FGT-related changes
> are now merged. With this, the arm64/set_id_regs selftest is failing.
> From initial debugging, it seems that the reads of SYS_CTR_EL0,
> SYS_MIDR_EL1, SYS_REVIDR_EL1 and SYS_AIDR_EL1 from guest_code trap to
> EL2 (HCR_ID1,ID2 are set) and get forwarded back to EL1. Since no EL1
> sync handler is installed in the test code, this results in a hang (an
> endless guest_exit/entry loop).
I don't see this problem here. At the very least, an EL1 Linux guest
runs just fine accessing these registers.
>
> This is because the function "triage_sysreg_trap" returns true.
>
> When guest_code runs at EL1 (the default case), it is due to the return
> in the if statement below.
>
> if (tc.fgt != __NO_FGT_GROUP__ &&
> (vcpu->kvm->arch.fgu[tc.fgt] & BIT(tc.bit))) {
> kvm_inject_undefined(vcpu);
> return true;
> }
>
> IMO, the host should return the value of these sysreg reads instead of
> forwarding the trap to the guest -- or does something more need to be
> added to the test code?
This is not a forward. It is an UNDEF. But none of these system
registers are ever supposed to UNDEF.
So something is setting the FGU bit mapped to HFGRTR_EL2.MIDR_EL1 to
1, forcing the register to UNDEF, assuming your analysis is
correct. I'm afraid you'll have to dig a bit deeper.
M.
--
Without deviation from the norm, progress is not possible.
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling
2025-04-28 18:33 ` [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling Ganapatrao Kulkarni
2025-04-28 21:42 ` Marc Zyngier
@ 2025-04-29 7:34 ` Marc Zyngier
2025-04-29 14:30 ` Ganapatrao Kulkarni
1 sibling, 1 reply; 71+ messages in thread
From: Marc Zyngier @ 2025-04-29 7:34 UTC (permalink / raw)
To: Ganapatrao Kulkarni
Cc: kvmarm, kvm, linux-arm-kernel, Joey Gouly, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Mark Rutland, Fuad Tabba, Will Deacon,
Catalin Marinas
On Mon, 28 Apr 2025 19:33:10 +0100,
Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com> wrote:
>
> Hi Marc,
>
> On 26-04-2025 17:57, Marc Zyngier wrote:
> > [...]
> >
>
> I am trying the nv-next branch and I believe these FGT-related changes
> are merged. With this, the arm64/set_id_regs selftest is failing. From
> initial debugging, it seems the accesses to SYS_CTR_EL0, SYS_MIDR_EL1,
> SYS_REVIDR_EL1 and SYS_AIDR_EL1 from guest_code result in a trap to
> EL2 (HCR_ID1,ID2 are set) and get forwarded back to EL1. Since no EL1
> sync handler is installed in the test code, this results in a
> hang (endless guest_exit/entry).
Let's start by calling bullshit on the test itself:
root@semi-fraudulent:/home/maz# grep AA64PFR0 /sys/kernel/debug/kvm/2008-4/idregs
SYS_ID_AA64PFR0_EL1: 0000000020110000
It basically disables anything 64bit at EL{0,1,2,3}. Frankly, all these
tests are pure garbage. I'm baffled that anyone expects this crap to
give any meaningful result.
> It is due to the function "triage_sysreg_trap" returning true.
>
> When guest_code is in EL1 (the default case), it is due to the return
> in the if statement below.
>
> if (tc.fgt != __NO_FGT_GROUP__ &&
> (vcpu->kvm->arch.fgu[tc.fgt] & BIT(tc.bit))) {
> kvm_inject_undefined(vcpu);
> return true;
> }
That explains why we end up here. The 64bit ISA is "disabled", a bunch
of trap bits are advertised as depending on it, so the corresponding
FGU bits are set to "emulate" the requested behaviour.
Works as intended, and this proves once more that what we call testing
is just horseshit.
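If it helps to see the whole chain in one place, here is a tiny
stand-alone model of that flow. None of the names below are the real
KVM helpers or bit definitions (compute_fgu(), triage() and the bit
value are all made up for illustration); it only mirrors the logic:
ID register says "no 64bit EL1", FGU bit gets set, the access UNDEFs.

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define BIT(n)			(1ULL << (n))
#define TOY_HFGRTR_MIDR_EL1	BIT(25)	/* illustrative trap bit for MIDR_EL1 */

static uint64_t fgu_hfgrtr;		/* stand-in for kvm->arch.fgu[HFGRTR_GROUP] */
static unsigned int pfr0_el1_field;	/* stand-in for ID_AA64PFR0_EL1.EL1 */

/*
 * Bits whose sysregs depend on FEAT_AA64EL1 become Fine-Grained UNDEF
 * when the ID registers pretend the 64bit EL1 ISA is absent.
 */
static void compute_fgu(void)
{
	if (pfr0_el1_field == 0)
		fgu_hfgrtr |= TOY_HFGRTR_MIDR_EL1;
}

/* Mirrors the check quoted above: FGU bit set -> inject an UNDEF. */
static bool triage(uint64_t trap_bit)
{
	if (fgu_hfgrtr & trap_bit) {
		puts("guest read of MIDR_EL1 -> UNDEF injected");
		return true;
	}
	return false;
}

int main(void)
{
	pfr0_el1_field = 0;		/* what the selftest effectively wrote */
	compute_fgu();
	triage(TOY_HFGRTR_MIDR_EL1);
	return 0;
}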
In retrospect, we should do a few things:
- Prevent writes to ID_AA64PFR0_EL1 disabling the 64bit ISA, breaking
this stupid test for good.
- Flag all the FGT bits depending on FEAT_AA64EL1 as NEVER_FGU,
because that shouldn't happen, by construction (there is no
architecture revision where these sysregs are UNDEFined).
- Mark all these tests as unmaintained and deprecated, recognising that
they are utterly pointless (optional).
Full patch below.
M.
diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index d4e1218b004dd..666070d4ccd7f 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -295,34 +295,34 @@ static const struct reg_bits_to_feat_map hfgrtr_feat_map[] = {
HFGRTR_EL2_APDBKey |
HFGRTR_EL2_APDAKey,
feat_pauth),
- NEEDS_FEAT(HFGRTR_EL2_VBAR_EL1 |
- HFGRTR_EL2_TTBR1_EL1 |
- HFGRTR_EL2_TTBR0_EL1 |
- HFGRTR_EL2_TPIDR_EL0 |
- HFGRTR_EL2_TPIDRRO_EL0 |
- HFGRTR_EL2_TPIDR_EL1 |
- HFGRTR_EL2_TCR_EL1 |
- HFGRTR_EL2_SCTLR_EL1 |
- HFGRTR_EL2_REVIDR_EL1 |
- HFGRTR_EL2_PAR_EL1 |
- HFGRTR_EL2_MPIDR_EL1 |
- HFGRTR_EL2_MIDR_EL1 |
- HFGRTR_EL2_MAIR_EL1 |
- HFGRTR_EL2_ISR_EL1 |
- HFGRTR_EL2_FAR_EL1 |
- HFGRTR_EL2_ESR_EL1 |
- HFGRTR_EL2_DCZID_EL0 |
- HFGRTR_EL2_CTR_EL0 |
- HFGRTR_EL2_CSSELR_EL1 |
- HFGRTR_EL2_CPACR_EL1 |
- HFGRTR_EL2_CONTEXTIDR_EL1 |
- HFGRTR_EL2_CLIDR_EL1 |
- HFGRTR_EL2_CCSIDR_EL1 |
- HFGRTR_EL2_AMAIR_EL1 |
- HFGRTR_EL2_AIDR_EL1 |
- HFGRTR_EL2_AFSR1_EL1 |
- HFGRTR_EL2_AFSR0_EL1,
- FEAT_AA64EL1),
+ NEEDS_FEAT_FLAG(HFGRTR_EL2_VBAR_EL1 |
+ HFGRTR_EL2_TTBR1_EL1 |
+ HFGRTR_EL2_TTBR0_EL1 |
+ HFGRTR_EL2_TPIDR_EL0 |
+ HFGRTR_EL2_TPIDRRO_EL0 |
+ HFGRTR_EL2_TPIDR_EL1 |
+ HFGRTR_EL2_TCR_EL1 |
+ HFGRTR_EL2_SCTLR_EL1 |
+ HFGRTR_EL2_REVIDR_EL1 |
+ HFGRTR_EL2_PAR_EL1 |
+ HFGRTR_EL2_MPIDR_EL1 |
+ HFGRTR_EL2_MIDR_EL1 |
+ HFGRTR_EL2_MAIR_EL1 |
+ HFGRTR_EL2_ISR_EL1 |
+ HFGRTR_EL2_FAR_EL1 |
+ HFGRTR_EL2_ESR_EL1 |
+ HFGRTR_EL2_DCZID_EL0 |
+ HFGRTR_EL2_CTR_EL0 |
+ HFGRTR_EL2_CSSELR_EL1 |
+ HFGRTR_EL2_CPACR_EL1 |
+ HFGRTR_EL2_CONTEXTIDR_EL1|
+ HFGRTR_EL2_CLIDR_EL1 |
+ HFGRTR_EL2_CCSIDR_EL1 |
+ HFGRTR_EL2_AMAIR_EL1 |
+ HFGRTR_EL2_AIDR_EL1 |
+ HFGRTR_EL2_AFSR1_EL1 |
+ HFGRTR_EL2_AFSR0_EL1,
+ NEVER_FGU, FEAT_AA64EL1),
};
static const struct reg_bits_to_feat_map hfgwtr_feat_map[] = {
@@ -368,25 +368,25 @@ static const struct reg_bits_to_feat_map hfgwtr_feat_map[] = {
HFGWTR_EL2_APDBKey |
HFGWTR_EL2_APDAKey,
feat_pauth),
- NEEDS_FEAT(HFGWTR_EL2_VBAR_EL1 |
- HFGWTR_EL2_TTBR1_EL1 |
- HFGWTR_EL2_TTBR0_EL1 |
- HFGWTR_EL2_TPIDR_EL0 |
- HFGWTR_EL2_TPIDRRO_EL0 |
- HFGWTR_EL2_TPIDR_EL1 |
- HFGWTR_EL2_TCR_EL1 |
- HFGWTR_EL2_SCTLR_EL1 |
- HFGWTR_EL2_PAR_EL1 |
- HFGWTR_EL2_MAIR_EL1 |
- HFGWTR_EL2_FAR_EL1 |
- HFGWTR_EL2_ESR_EL1 |
- HFGWTR_EL2_CSSELR_EL1 |
- HFGWTR_EL2_CPACR_EL1 |
- HFGWTR_EL2_CONTEXTIDR_EL1 |
- HFGWTR_EL2_AMAIR_EL1 |
- HFGWTR_EL2_AFSR1_EL1 |
- HFGWTR_EL2_AFSR0_EL1,
- FEAT_AA64EL1),
+ NEEDS_FEAT_FLAG(HFGWTR_EL2_VBAR_EL1 |
+ HFGWTR_EL2_TTBR1_EL1 |
+ HFGWTR_EL2_TTBR0_EL1 |
+ HFGWTR_EL2_TPIDR_EL0 |
+ HFGWTR_EL2_TPIDRRO_EL0 |
+ HFGWTR_EL2_TPIDR_EL1 |
+ HFGWTR_EL2_TCR_EL1 |
+ HFGWTR_EL2_SCTLR_EL1 |
+ HFGWTR_EL2_PAR_EL1 |
+ HFGWTR_EL2_MAIR_EL1 |
+ HFGWTR_EL2_FAR_EL1 |
+ HFGWTR_EL2_ESR_EL1 |
+ HFGWTR_EL2_CSSELR_EL1 |
+ HFGWTR_EL2_CPACR_EL1 |
+ HFGWTR_EL2_CONTEXTIDR_EL1|
+ HFGWTR_EL2_AMAIR_EL1 |
+ HFGWTR_EL2_AFSR1_EL1 |
+ HFGWTR_EL2_AFSR0_EL1,
+ NEVER_FGU, FEAT_AA64EL1),
};
static const struct reg_bits_to_feat_map hdfgrtr_feat_map[] = {
@@ -443,17 +443,17 @@ static const struct reg_bits_to_feat_map hdfgrtr_feat_map[] = {
FEAT_TRBE),
NEEDS_FEAT_FLAG(HDFGRTR_EL2_OSDLR_EL1, NEVER_FGU,
FEAT_DoubleLock),
- NEEDS_FEAT(HDFGRTR_EL2_OSECCR_EL1 |
- HDFGRTR_EL2_OSLSR_EL1 |
- HDFGRTR_EL2_DBGPRCR_EL1 |
- HDFGRTR_EL2_DBGAUTHSTATUS_EL1|
- HDFGRTR_EL2_DBGCLAIM |
- HDFGRTR_EL2_MDSCR_EL1 |
- HDFGRTR_EL2_DBGWVRn_EL1 |
- HDFGRTR_EL2_DBGWCRn_EL1 |
- HDFGRTR_EL2_DBGBVRn_EL1 |
- HDFGRTR_EL2_DBGBCRn_EL1,
- FEAT_AA64EL1)
+ NEEDS_FEAT_FLAG(HDFGRTR_EL2_OSECCR_EL1 |
+ HDFGRTR_EL2_OSLSR_EL1 |
+ HDFGRTR_EL2_DBGPRCR_EL1 |
+ HDFGRTR_EL2_DBGAUTHSTATUS_EL1|
+ HDFGRTR_EL2_DBGCLAIM |
+ HDFGRTR_EL2_MDSCR_EL1 |
+ HDFGRTR_EL2_DBGWVRn_EL1 |
+ HDFGRTR_EL2_DBGWCRn_EL1 |
+ HDFGRTR_EL2_DBGBVRn_EL1 |
+ HDFGRTR_EL2_DBGBCRn_EL1,
+ NEVER_FGU, FEAT_AA64EL1)
};
static const struct reg_bits_to_feat_map hdfgwtr_feat_map[] = {
@@ -503,16 +503,16 @@ static const struct reg_bits_to_feat_map hdfgwtr_feat_map[] = {
FEAT_TRBE),
NEEDS_FEAT_FLAG(HDFGWTR_EL2_OSDLR_EL1,
NEVER_FGU, FEAT_DoubleLock),
- NEEDS_FEAT(HDFGWTR_EL2_OSECCR_EL1 |
- HDFGWTR_EL2_OSLAR_EL1 |
- HDFGWTR_EL2_DBGPRCR_EL1 |
- HDFGWTR_EL2_DBGCLAIM |
- HDFGWTR_EL2_MDSCR_EL1 |
- HDFGWTR_EL2_DBGWVRn_EL1 |
- HDFGWTR_EL2_DBGWCRn_EL1 |
- HDFGWTR_EL2_DBGBVRn_EL1 |
- HDFGWTR_EL2_DBGBCRn_EL1,
- FEAT_AA64EL1),
+ NEEDS_FEAT_FLAG(HDFGWTR_EL2_OSECCR_EL1 |
+ HDFGWTR_EL2_OSLAR_EL1 |
+ HDFGWTR_EL2_DBGPRCR_EL1 |
+ HDFGWTR_EL2_DBGCLAIM |
+ HDFGWTR_EL2_MDSCR_EL1 |
+ HDFGWTR_EL2_DBGWVRn_EL1 |
+ HDFGWTR_EL2_DBGWCRn_EL1 |
+ HDFGWTR_EL2_DBGBVRn_EL1 |
+ HDFGWTR_EL2_DBGBCRn_EL1,
+ NEVER_FGU, FEAT_AA64EL1),
NEEDS_FEAT(HDFGWTR_EL2_TRFCR_EL1, FEAT_TRF),
};
@@ -556,38 +556,38 @@ static const struct reg_bits_to_feat_map hfgitr_feat_map[] = {
HFGITR_EL2_ATS1E1RP,
FEAT_PAN2),
NEEDS_FEAT(HFGITR_EL2_DCCVADP, FEAT_DPB2),
- NEEDS_FEAT(HFGITR_EL2_DCCVAC |
- HFGITR_EL2_SVC_EL1 |
- HFGITR_EL2_SVC_EL0 |
- HFGITR_EL2_ERET |
- HFGITR_EL2_TLBIVAALE1 |
- HFGITR_EL2_TLBIVALE1 |
- HFGITR_EL2_TLBIVAAE1 |
- HFGITR_EL2_TLBIASIDE1 |
- HFGITR_EL2_TLBIVAE1 |
- HFGITR_EL2_TLBIVMALLE1 |
- HFGITR_EL2_TLBIVAALE1IS |
- HFGITR_EL2_TLBIVALE1IS |
- HFGITR_EL2_TLBIVAAE1IS |
- HFGITR_EL2_TLBIASIDE1IS |
- HFGITR_EL2_TLBIVAE1IS |
- HFGITR_EL2_TLBIVMALLE1IS |
- HFGITR_EL2_ATS1E0W |
- HFGITR_EL2_ATS1E0R |
- HFGITR_EL2_ATS1E1W |
- HFGITR_EL2_ATS1E1R |
- HFGITR_EL2_DCZVA |
- HFGITR_EL2_DCCIVAC |
- HFGITR_EL2_DCCVAP |
- HFGITR_EL2_DCCVAU |
- HFGITR_EL2_DCCISW |
- HFGITR_EL2_DCCSW |
- HFGITR_EL2_DCISW |
- HFGITR_EL2_DCIVAC |
- HFGITR_EL2_ICIVAU |
- HFGITR_EL2_ICIALLU |
- HFGITR_EL2_ICIALLUIS,
- FEAT_AA64EL1),
+ NEEDS_FEAT_FLAG(HFGITR_EL2_DCCVAC |
+ HFGITR_EL2_SVC_EL1 |
+ HFGITR_EL2_SVC_EL0 |
+ HFGITR_EL2_ERET |
+ HFGITR_EL2_TLBIVAALE1 |
+ HFGITR_EL2_TLBIVALE1 |
+ HFGITR_EL2_TLBIVAAE1 |
+ HFGITR_EL2_TLBIASIDE1 |
+ HFGITR_EL2_TLBIVAE1 |
+ HFGITR_EL2_TLBIVMALLE1 |
+ HFGITR_EL2_TLBIVAALE1IS |
+ HFGITR_EL2_TLBIVALE1IS |
+ HFGITR_EL2_TLBIVAAE1IS |
+ HFGITR_EL2_TLBIASIDE1IS |
+ HFGITR_EL2_TLBIVAE1IS |
+ HFGITR_EL2_TLBIVMALLE1IS|
+ HFGITR_EL2_ATS1E0W |
+ HFGITR_EL2_ATS1E0R |
+ HFGITR_EL2_ATS1E1W |
+ HFGITR_EL2_ATS1E1R |
+ HFGITR_EL2_DCZVA |
+ HFGITR_EL2_DCCIVAC |
+ HFGITR_EL2_DCCVAP |
+ HFGITR_EL2_DCCVAU |
+ HFGITR_EL2_DCCISW |
+ HFGITR_EL2_DCCSW |
+ HFGITR_EL2_DCISW |
+ HFGITR_EL2_DCIVAC |
+ HFGITR_EL2_ICIVAU |
+ HFGITR_EL2_ICIALLU |
+ HFGITR_EL2_ICIALLUIS,
+ NEVER_FGU, FEAT_AA64EL1),
};
static const struct reg_bits_to_feat_map hafgrtr_feat_map[] = {
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 157de0ace6e7e..28dc778d0d9bb 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1946,6 +1946,12 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
if ((hw_val & mpam_mask) == (user_val & mpam_mask))
user_val &= ~ID_AA64PFR0_EL1_MPAM_MASK;
+ /* Fail the guest's request to disable the AA64 ISA at EL{0,1,2} */
+ if (!FIELD_GET(ID_AA64PFR0_EL1_EL0, user_val) ||
+ !FIELD_GET(ID_AA64PFR0_EL1_EL1, user_val) ||
+ (vcpu_has_nv(vcpu) && !FIELD_GET(ID_AA64PFR0_EL1_EL2, user_val)))
+ return -EINVAL;
+
return set_id_reg(vcpu, rd, user_val);
}
diff --git a/tools/testing/selftests/kvm/arm64/set_id_regs.c b/tools/testing/selftests/kvm/arm64/set_id_regs.c
index 322b9d3b01255..57708de2075df 100644
--- a/tools/testing/selftests/kvm/arm64/set_id_regs.c
+++ b/tools/testing/selftests/kvm/arm64/set_id_regs.c
@@ -129,10 +129,10 @@ static const struct reg_ftr_bits ftr_id_aa64pfr0_el1[] = {
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, DIT, 0),
REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, SEL2, 0),
REG_FTR_BITS(FTR_EXACT, ID_AA64PFR0_EL1, GIC, 0),
- REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL3, 0),
- REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL2, 0),
- REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL1, 0),
- REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL0, 0),
+ REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL3, 1),
+ REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL2, 1),
+ REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL1, 1),
+ REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL0, 1),
REG_FTR_END,
};
--
Jazz isn't dead. It just smells funny.
^ permalink raw reply related [flat|nested] 71+ messages in thread
* Re: [PATCH v3 04/42] arm64: sysreg: Replace HGFxTR_EL2 with HFG{R,W}TR_EL2
2025-04-26 12:27 ` [PATCH v3 04/42] arm64: sysreg: Replace HGFxTR_EL2 with HFG{R,W}TR_EL2 Marc Zyngier
@ 2025-04-29 13:07 ` Ben Horgan
2025-04-29 14:26 ` Joey Gouly
1 sibling, 0 replies; 71+ messages in thread
From: Ben Horgan @ 2025-04-29 13:07 UTC (permalink / raw)
To: Marc Zyngier, kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Hi Marc
On 4/26/25 13:27, Marc Zyngier wrote:
> Treating HFGRTR_EL2 and HFGWTR_EL2 identically was a mistake.
> It makes things hard to reason about, has the potential to
> introduce bugs by giving a meaning to bits that are really reserved,
> and is in general a bad description of the architecture.
There is a typo in the subject line: HGFxTR_EL2 should be HFGxTR_EL2.
Thanks,
Ben
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH v3 09/42] arm64: sysreg: Add registers trapped by HDFG{R,W}TR2_EL2
2025-04-26 12:28 ` [PATCH v3 09/42] arm64: sysreg: Add registers trapped by HDFG{R,W}TR2_EL2 Marc Zyngier
@ 2025-04-29 13:07 ` Ben Horgan
2025-04-29 14:10 ` Marc Zyngier
0 siblings, 1 reply; 71+ messages in thread
From: Ben Horgan @ 2025-04-29 13:07 UTC (permalink / raw)
To: Marc Zyngier, kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Hi Marc,
On 4/26/25 13:28, Marc Zyngier wrote:
> +Sysreg SPMCR_EL0 2 3 9 12 0
> +Res0 63:12
> +Field 11 TRO
> +Field 10 HDBG
> +Field 9 FZO
> +Field 8 NA
> +Res0 7:5
Nit: Trailing whitespace. There are a few other places on Res0 lines.
Maybe your generation script could be tweaked.
Thanks,
Ben
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH v3 17/42] KVM: arm64: Handle trapping of FEAT_LS64* instructions
2025-04-26 12:28 ` [PATCH v3 17/42] KVM: arm64: Handle trapping of FEAT_LS64* instructions Marc Zyngier
@ 2025-04-29 13:08 ` Ben Horgan
2025-05-01 11:01 ` Joey Gouly
1 sibling, 0 replies; 71+ messages in thread
From: Ben Horgan @ 2025-04-29 13:08 UTC (permalink / raw)
To: Marc Zyngier, kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Hi Marc,
On 4/26/25 13:28, Marc Zyngier wrote:
> + /*
> + * We only trap for two reasons:
> + *
> + * - the feature is disabled, and the only outcome is to
> + * generate an UNDEF.
> + *
> + * - the feature is enabled, but a NV guest wants to trap the
> + * feature used my its L2 guest. We forward the exception in
> + * this case.
Nit: my -> by
> + *
> + * What we don't expect is to end-up here if the guest is
> + * expected be be able to directly use the feature, hence the
> + * WARN_ON below.
> + */
Thanks,
Ben
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH v3 18/42] KVM: arm64: Restrict ACCDATA_EL1 undef to FEAT_ST64_ACCDATA being disabled
2025-04-26 12:28 ` [PATCH v3 18/42] KVM: arm64: Restrict ACCDATA_EL1 undef to FEAT_ST64_ACCDATA being disabled Marc Zyngier
@ 2025-04-29 13:08 ` Ben Horgan
0 siblings, 0 replies; 71+ messages in thread
From: Ben Horgan @ 2025-04-29 13:08 UTC (permalink / raw)
To: Marc Zyngier, kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Hi Marc,
On 4/26/25 13:28, Marc Zyngier wrote:
> We currently unconditionally make ACCDATA_EL1 accesses UNDEF.
>
> As we are about to support it, restrict the UNDEF behaviour to cases
> where FEAT_ST64_ACCDATA is not exposed to the guest.
Isn't the feature called FEAT_LS64_ACCDATA rather than FEAT_ST64_ACCDATA?
Thanks,
Ben
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH v3 24/42] KVM: arm64: Unconditionally configure fine-grain traps
2025-04-26 12:28 ` [PATCH v3 24/42] KVM: arm64: Unconditionally configure fine-grain traps Marc Zyngier
@ 2025-04-29 13:08 ` Ben Horgan
2025-04-29 13:49 ` Marc Zyngier
0 siblings, 1 reply; 71+ messages in thread
From: Ben Horgan @ 2025-04-29 13:08 UTC (permalink / raw)
To: Marc Zyngier, kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Hi Marc,
On 4/26/25 13:28, Marc Zyngier wrote:
> From: Mark Rutland <mark.rutland@arm.com>
>
> ... otherwise we can inherit the host configuration if this differs from
> the KVM configuration.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> [maz: simplified a couple of things]
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/kvm/hyp/include/hyp/switch.h | 39 ++++++++++---------------
> 1 file changed, 15 insertions(+), 24 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index 027d05f308f75..925a3288bd5be 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -107,7 +107,8 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
>
> [...]
>
> static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu *vcpu)
> {
> struct kvm_cpu_context *hctxt = host_data_ptr(host_ctxt);
> - struct kvm *kvm = kern_hyp_va(vcpu->kvm);
>
> if (!cpus_have_final_cap(ARM64_HAS_FGT))
> return;
>
> - __deactivate_fgt(hctxt, vcpu, kvm, HFGRTR_EL2);
> - if (cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38))
Don't we need to continue considering the ampere errata here? Or, at
least worth a mention in the commit message.
> - write_sysreg_s(ctxt_sys_reg(hctxt, HFGWTR_EL2), SYS_HFGWTR_EL2);
> - else
> - __deactivate_fgt(hctxt, vcpu, kvm, HFGWTR_EL2);
> - __deactivate_fgt(hctxt, vcpu, kvm, HFGITR_EL2);
> - __deactivate_fgt(hctxt, vcpu, kvm, HDFGRTR_EL2);
> - __deactivate_fgt(hctxt, vcpu, kvm, HDFGWTR_EL2);
> + __deactivate_fgt(hctxt, vcpu, HFGRTR_EL2);
> + __deactivate_fgt(hctxt, vcpu, HFGWTR_EL2);
> + __deactivate_fgt(hctxt, vcpu, HFGITR_EL2);
> + __deactivate_fgt(hctxt, vcpu, HDFGRTR_EL2);
> + __deactivate_fgt(hctxt, vcpu, HDFGWTR_EL2);
>
> if (cpu_has_amu())
> - __deactivate_fgt(hctxt, vcpu, kvm, HAFGRTR_EL2);
> + __deactivate_fgt(hctxt, vcpu, HAFGRTR_EL2);
> }
>
> static inline void __activate_traps_mpam(struct kvm_vcpu *vcpu)
Thanks,
Ben
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH v3 40/42] KVM: arm64: Allow sysreg ranges for FGT descriptors
2025-04-26 12:28 ` [PATCH v3 40/42] KVM: arm64: Allow sysreg ranges for FGT descriptors Marc Zyngier
@ 2025-04-29 13:08 ` Ben Horgan
0 siblings, 0 replies; 71+ messages in thread
From: Ben Horgan @ 2025-04-29 13:08 UTC (permalink / raw)
To: Marc Zyngier, kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Hi Marc,
On 4/26/25 13:28, Marc Zyngier wrote:
> Just like we allow sysreg ranges for Coarse Grained Trap descriptors,
> allow them for Fine Grain Traps as well.
>
> This comes with a warning that not all ranges are suitable for this
> particular definition of ranges.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/kvm/emulate-nested.c | 120 +++++++++++---------------------
> 1 file changed, 39 insertions(+), 81 deletions(-)
>
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index e2a843675da96..9c7ecfccbd6e9 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -622,6 +622,11 @@ struct encoding_to_trap_config {
> const unsigned int line;
> };
>
> +/*
> + * WARNING: using ranges is a treacherous endeavour, as sysregs that
> + * are part of an architectural range are not necessarily contiguous
> + * in the [Op0,Op1,CRn,CRm,Ops] space. Tread carefully.
> + */
> #define SR_RANGE_TRAP(sr_start, sr_end, trap_id) \
> { \
> .encoding = sr_start, \
> @@ -1289,15 +1294,19 @@ enum fg_filter_id {
>
> #define FGT(g, b, p) __FGT(g, b, p, __NO_FGF__)
>
> -#define SR_FGF(sr, g, b, p, f) \
> +/* Same warning applies: use carefully */
Nit: The other warning is a few hundred lines away. Consider identifying
it more precisely.
> +#define SR_FGF_RANGE(sr, e, g, b, p, f) \
> { \
> .encoding = sr, \
> - .end = sr, \
> + .end = e, \
> .tc = __FGT(g, b, p, f), \
> .line = __LINE__, \
> }
Thanks,
Ben
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH v3 41/42] KVM: arm64: Add FGT descriptors for FEAT_FGT2
2025-04-26 12:28 ` [PATCH v3 41/42] KVM: arm64: Add FGT descriptors for FEAT_FGT2 Marc Zyngier
@ 2025-04-29 13:09 ` Ben Horgan
2025-04-29 14:30 ` Marc Zyngier
0 siblings, 1 reply; 71+ messages in thread
From: Ben Horgan @ 2025-04-29 13:09 UTC (permalink / raw)
To: Marc Zyngier, kvmarm, kvm, linux-arm-kernel
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Mark Rutland, Fuad Tabba, Will Deacon, Catalin Marinas
Hi Marc,
On 4/26/25 13:28, Marc Zyngier wrote:
> Bulk addition of all the FGT2 traps reported with EC == 0x18,
> as described in the 2025-03 JSON drop.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/kvm/emulate-nested.c | 83 +++++++++++++++++++++++++++++++++
> 1 file changed, 83 insertions(+)
>
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index 9c7ecfccbd6e9..f7678af272bbb 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
[...]
> /*
> * HDFGWTR_EL2
> *
> @@ -1896,12 +1972,19 @@ static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
> * read-side mappings, and only the write-side mappings that
> * differ from the read side, and the trap handler will pick
> * the correct shadow register based on the access type.
> + *
> + * Same model applies to the FEAT_FGT2 registers.
> */
> SR_FGT(SYS_TRFCR_EL1, HDFGWTR, TRFCR_EL1, 1),
> SR_FGT(SYS_TRCOSLAR, HDFGWTR, TRCOSLAR, 1),
> SR_FGT(SYS_PMCR_EL0, HDFGWTR, PMCR_EL0, 1),
> SR_FGT(SYS_PMSWINC_EL0, HDFGWTR, PMSWINC_EL0, 1),
> SR_FGT(SYS_OSLAR_EL1, HDFGWTR, OSLAR_EL1, 1),
> +
> + /* HDFGWTR_EL2 */
A missing 2. HDFGWTR_EL2 should be HDFGWTR2_EL2.
> + SR_FGT(SYS_PMZR_EL0, HDFGWTR2, nPMZR_EL0, 0),
> + SR_FGT(SYS_SPMZR_EL0, HDFGWTR2, nSPMEVCNTRn_EL0, 0),
> +
> /*
> * HAFGRTR_EL2
> */
Thanks,
Ben
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH v3 01/42] arm64: sysreg: Add ID_AA64ISAR1_EL1.LS64 encoding for FEAT_LS64WB
2025-04-26 12:27 ` [PATCH v3 01/42] arm64: sysreg: Add ID_AA64ISAR1_EL1.LS64 encoding for FEAT_LS64WB Marc Zyngier
@ 2025-04-29 13:34 ` Joey Gouly
0 siblings, 0 replies; 71+ messages in thread
From: Joey Gouly @ 2025-04-29 13:34 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, kvm, linux-arm-kernel, Suzuki K Poulose, Oliver Upton,
Zenghui Yu, Mark Rutland, Fuad Tabba, Will Deacon,
Catalin Marinas
On Sat, Apr 26, 2025 at 01:27:55PM +0100, Marc Zyngier wrote:
> The 2024 extensions are adding yet another variant of LS64
> (aptly named FEAT_LS64WB) supporting LS64 accesses to write-back
> memory, as well as 32 byte single-copy atomic accesses using pairs
> of FP registers.
>
> Add the relevant encoding to ID_AA64ISAR1_EL1.LS64.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/tools/sysreg | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
> index bdf044c5d11b6..e5da8848b66b5 100644
> --- a/arch/arm64/tools/sysreg
> +++ b/arch/arm64/tools/sysreg
> @@ -1466,6 +1466,7 @@ UnsignedEnum 63:60 LS64
> 0b0001 LS64
> 0b0010 LS64_V
> 0b0011 LS64_ACCDATA
> + 0b0100 LS64WB
> EndEnum
> UnsignedEnum 59:56 XS
> 0b0000 NI
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH v3 02/42] arm64: sysreg: Update ID_AA64MMFR4_EL1 description
2025-04-26 12:27 ` [PATCH v3 02/42] arm64: sysreg: Update ID_AA64MMFR4_EL1 description Marc Zyngier
@ 2025-04-29 13:38 ` Joey Gouly
0 siblings, 0 replies; 71+ messages in thread
From: Joey Gouly @ 2025-04-29 13:38 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, kvm, linux-arm-kernel, Suzuki K Poulose, Oliver Upton,
Zenghui Yu, Mark Rutland, Fuad Tabba, Will Deacon,
Catalin Marinas
On Sat, Apr 26, 2025 at 01:27:56PM +0100, Marc Zyngier wrote:
> Resync the ID_AA64MMFR4_EL1 with the architecture description.
>
> This results in:
>
> - the new PoPS field
> - the new NV2P1 value for the NV_frac field
> - the new RMEGDI field
> - the new SRMASK field
>
> These fields have been generated from the reference JSON file.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/tools/sysreg | 19 ++++++++++++++++---
> 1 file changed, 16 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
> index e5da8848b66b5..fce8328c7c00b 100644
> --- a/arch/arm64/tools/sysreg
> +++ b/arch/arm64/tools/sysreg
> @@ -1946,12 +1946,21 @@ EndEnum
> EndSysreg
>
> Sysreg ID_AA64MMFR4_EL1 3 0 0 7 4
> -Res0 63:40
> +Res0 63:48
> +UnsignedEnum 47:44 SRMASK
> + 0b0000 NI
> + 0b0001 IMP
> +EndEnum
> +Res0 43:40
> UnsignedEnum 39:36 E3DSE
> 0b0000 NI
> 0b0001 IMP
> EndEnum
> -Res0 35:28
> +Res0 35:32
> +UnsignedEnum 31:28 RMEGDI
> + 0b0000 NI
> + 0b0001 IMP
> +EndEnum
> SignedEnum 27:24 E2H0
> 0b0000 IMP
> 0b1110 NI_NV1
> @@ -1960,6 +1969,7 @@ EndEnum
> UnsignedEnum 23:20 NV_frac
> 0b0000 NV_NV2
> 0b0001 NV2_ONLY
> + 0b0010 NV2P1
> EndEnum
> UnsignedEnum 19:16 FGWTE3
> 0b0000 NI
> @@ -1979,7 +1989,10 @@ SignedEnum 7:4 EIESB
> 0b0010 ToELx
> 0b1111 ANY
> EndEnum
> -Res0 3:0
> +UnsignedEnum 3:0 PoPS
> + 0b0000 NI
> + 0b0001 IMP
> +EndEnum
> EndSysreg
>
> Sysreg SCTLR_EL1 3 0 1 0 0
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH v3 24/42] KVM: arm64: Unconditionally configure fine-grain traps
2025-04-29 13:08 ` Ben Horgan
@ 2025-04-29 13:49 ` Marc Zyngier
2025-04-29 14:09 ` Ben Horgan
0 siblings, 1 reply; 71+ messages in thread
From: Marc Zyngier @ 2025-04-29 13:49 UTC (permalink / raw)
To: Ben Horgan
Cc: kvmarm, kvm, linux-arm-kernel, Joey Gouly, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Mark Rutland, Fuad Tabba, Will Deacon,
Catalin Marinas
On Tue, 29 Apr 2025 14:08:27 +0100,
Ben Horgan <ben.horgan@arm.com> wrote:
>
> Hi Marc,
>
> On 4/26/25 13:28, Marc Zyngier wrote:
> > From: Mark Rutland <mark.rutland@arm.com>
> >
> > ... otherwise we can inherit the host configuration if this differs from
> > the KVM configuration.
> >
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > [maz: simplified a couple of things]
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> > arch/arm64/kvm/hyp/include/hyp/switch.h | 39 ++++++++++---------------
> > 1 file changed, 15 insertions(+), 24 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > index 027d05f308f75..925a3288bd5be 100644
> > --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> > +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > @@ -107,7 +107,8 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
> > [...]
> > static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu
> > *vcpu)
> > {
> > struct kvm_cpu_context *hctxt = host_data_ptr(host_ctxt);
> > - struct kvm *kvm = kern_hyp_va(vcpu->kvm);
> > if (!cpus_have_final_cap(ARM64_HAS_FGT))
> > return;
> > - __deactivate_fgt(hctxt, vcpu, kvm, HFGRTR_EL2);
> > - if (cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38))
> Don't we need to continue considering the ampere errata here? Or, at
> least worth a mention in the commit message.
The FGT registers are always context switched, so whatever was saved
*before* the workaround was applied in __activate_traps_hfgxtr() is
blindly restored...
> > - write_sysreg_s(ctxt_sys_reg(hctxt, HFGWTR_EL2), SYS_HFGWTR_EL2);
... and this write always happens.
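For the record, the ordering can be modelled with a trivial user-space
toy; nothing below is the actual hyp code (the names and the bit
number are illustrative), it only demonstrates why the blind restore
is already correct:

#include <stdio.h>
#include <stdint.h>

static uint64_t hw_hfgwtr;	/* pretend HFGWTR_EL2 */
static uint64_t hctxt_hfgwtr;	/* pretend host context copy */

static void activate(uint64_t guest_val, int ampere_wa)
{
	hctxt_hfgwtr = hw_hfgwtr;		/* host value saved first... */
	if (ampere_wa)
		guest_val |= 1ULL << 32;	/* ...then the workaround kicks in */
	hw_hfgwtr = guest_val;
}

static void deactivate(void)
{
	/* blind restore of what was saved before the workaround */
	hw_hfgwtr = hctxt_hfgwtr;
}

int main(void)
{
	hw_hfgwtr = 0xaa;			/* host configuration */
	activate(0x55, 1);
	deactivate();
	printf("restored: %#llx\n", (unsigned long long)hw_hfgwtr);
	return 0;
}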
M.
--
Without deviation from the norm, progress is not possible.
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH v3 03/42] arm64: sysreg: Add layout for HCR_EL2
2025-04-26 12:27 ` [PATCH v3 03/42] arm64: sysreg: Add layout for HCR_EL2 Marc Zyngier
@ 2025-04-29 14:02 ` Joey Gouly
0 siblings, 0 replies; 71+ messages in thread
From: Joey Gouly @ 2025-04-29 14:02 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, kvm, linux-arm-kernel, Suzuki K Poulose, Oliver Upton,
Zenghui Yu, Mark Rutland, Fuad Tabba, Will Deacon,
Catalin Marinas
On Sat, Apr 26, 2025 at 01:27:57PM +0100, Marc Zyngier wrote:
> Add HCR_EL2 to the sysreg file, more or less directly generated
> from the JSON file.
>
> Since the generated names significantly differ from the existing
> naming, express the old names in terms of the new one. One day, we'll
> fix this mess, but I'm not in any hurry.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/include/asm/kvm_arm.h | 125 ++++++++++++++++---------------
> arch/arm64/tools/sysreg | 68 +++++++++++++++++
> 2 files changed, 132 insertions(+), 61 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index 974d72b5905b8..f36d067967c33 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -12,67 +12,70 @@
> #include <asm/sysreg.h>
> #include <asm/types.h>
>
> -/* Hyp Configuration Register (HCR) bits */
> -
> -#define HCR_TID5 (UL(1) << 58)
> -#define HCR_DCT (UL(1) << 57)
> -#define HCR_ATA_SHIFT 56
> -#define HCR_ATA (UL(1) << HCR_ATA_SHIFT)
> -#define HCR_TTLBOS (UL(1) << 55)
> -#define HCR_TTLBIS (UL(1) << 54)
> -#define HCR_ENSCXT (UL(1) << 53)
> -#define HCR_TOCU (UL(1) << 52)
> -#define HCR_AMVOFFEN (UL(1) << 51)
> -#define HCR_TICAB (UL(1) << 50)
> -#define HCR_TID4 (UL(1) << 49)
> -#define HCR_FIEN (UL(1) << 47)
> -#define HCR_FWB (UL(1) << 46)
> -#define HCR_NV2 (UL(1) << 45)
> -#define HCR_AT (UL(1) << 44)
> -#define HCR_NV1 (UL(1) << 43)
> -#define HCR_NV (UL(1) << 42)
> -#define HCR_API (UL(1) << 41)
> -#define HCR_APK (UL(1) << 40)
> -#define HCR_TEA (UL(1) << 37)
> -#define HCR_TERR (UL(1) << 36)
> -#define HCR_TLOR (UL(1) << 35)
> -#define HCR_E2H (UL(1) << 34)
> -#define HCR_ID (UL(1) << 33)
> -#define HCR_CD (UL(1) << 32)
> -#define HCR_RW_SHIFT 31
> -#define HCR_RW (UL(1) << HCR_RW_SHIFT)
> -#define HCR_TRVM (UL(1) << 30)
> -#define HCR_HCD (UL(1) << 29)
> -#define HCR_TDZ (UL(1) << 28)
> -#define HCR_TGE (UL(1) << 27)
> -#define HCR_TVM (UL(1) << 26)
> -#define HCR_TTLB (UL(1) << 25)
> -#define HCR_TPU (UL(1) << 24)
> -#define HCR_TPC (UL(1) << 23) /* HCR_TPCP if FEAT_DPB */
> -#define HCR_TSW (UL(1) << 22)
> -#define HCR_TACR (UL(1) << 21)
> -#define HCR_TIDCP (UL(1) << 20)
> -#define HCR_TSC (UL(1) << 19)
> -#define HCR_TID3 (UL(1) << 18)
> -#define HCR_TID2 (UL(1) << 17)
> -#define HCR_TID1 (UL(1) << 16)
> -#define HCR_TID0 (UL(1) << 15)
> -#define HCR_TWE (UL(1) << 14)
> -#define HCR_TWI (UL(1) << 13)
> -#define HCR_DC (UL(1) << 12)
> -#define HCR_BSU (3 << 10)
> -#define HCR_BSU_IS (UL(1) << 10)
> -#define HCR_FB (UL(1) << 9)
> -#define HCR_VSE (UL(1) << 8)
> -#define HCR_VI (UL(1) << 7)
> -#define HCR_VF (UL(1) << 6)
> -#define HCR_AMO (UL(1) << 5)
> -#define HCR_IMO (UL(1) << 4)
> -#define HCR_FMO (UL(1) << 3)
> -#define HCR_PTW (UL(1) << 2)
> -#define HCR_SWIO (UL(1) << 1)
> -#define HCR_VM (UL(1) << 0)
> -#define HCR_RES0 ((UL(1) << 48) | (UL(1) << 39))
> +/*
> + * Because I'm terribly lazy and that repainting the whole of the KVM
> + * code with the proper names is a pain, use a helper to map the names
> + * inherited from AArch32 with the new fancy nomenclature. One day...
> + */
> +#define __HCR(x) HCR_EL2_##x
> +
> +#define HCR_TID5 __HCR(TID5)
> +#define HCR_DCT __HCR(DCT)
> +#define HCR_ATA_SHIFT __HCR(ATA_SHIFT)
> +#define HCR_ATA __HCR(ATA)
> +#define HCR_TTLBOS __HCR(TTLBOS)
> +#define HCR_TTLBIS __HCR(TTLBIS)
> +#define HCR_ENSCXT __HCR(EnSCXT)
> +#define HCR_TOCU __HCR(TOCU)
> +#define HCR_AMVOFFEN __HCR(AMVOFFEN)
> +#define HCR_TICAB __HCR(TICAB)
> +#define HCR_TID4 __HCR(TID4)
> +#define HCR_FIEN __HCR(FIEN)
> +#define HCR_FWB __HCR(FWB)
> +#define HCR_NV2 __HCR(NV2)
> +#define HCR_AT __HCR(AT)
> +#define HCR_NV1 __HCR(NV1)
> +#define HCR_NV __HCR(NV)
> +#define HCR_API __HCR(API)
> +#define HCR_APK __HCR(APK)
> +#define HCR_TEA __HCR(TEA)
> +#define HCR_TERR __HCR(TERR)
> +#define HCR_TLOR __HCR(TLOR)
> +#define HCR_E2H __HCR(E2H)
> +#define HCR_ID __HCR(ID)
> +#define HCR_CD __HCR(CD)
> +#define HCR_RW __HCR(RW)
> +#define HCR_TRVM __HCR(TRVM)
> +#define HCR_HCD __HCR(HCD)
> +#define HCR_TDZ __HCR(TDZ)
> +#define HCR_TGE __HCR(TGE)
> +#define HCR_TVM __HCR(TVM)
> +#define HCR_TTLB __HCR(TTLB)
> +#define HCR_TPU __HCR(TPU)
> +#define HCR_TPC __HCR(TPCP)
> +#define HCR_TSW __HCR(TSW)
> +#define HCR_TACR __HCR(TACR)
> +#define HCR_TIDCP __HCR(TIDCP)
> +#define HCR_TSC __HCR(TSC)
> +#define HCR_TID3 __HCR(TID3)
> +#define HCR_TID2 __HCR(TID2)
> +#define HCR_TID1 __HCR(TID1)
> +#define HCR_TID0 __HCR(TID0)
> +#define HCR_TWE __HCR(TWE)
> +#define HCR_TWI __HCR(TWI)
> +#define HCR_DC __HCR(DC)
> +#define HCR_BSU __HCR(BSU)
> +#define HCR_BSU_IS __HCR(BSU_IS)
> +#define HCR_FB __HCR(FB)
> +#define HCR_VSE __HCR(VSE)
> +#define HCR_VI __HCR(VI)
> +#define HCR_VF __HCR(VF)
> +#define HCR_AMO __HCR(AMO)
> +#define HCR_IMO __HCR(IMO)
> +#define HCR_FMO __HCR(FMO)
> +#define HCR_PTW __HCR(PTW)
> +#define HCR_SWIO __HCR(SWIO)
> +#define HCR_VM __HCR(VM)
>
> /*
> * The bits we set in HCR:
> diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
> index fce8328c7c00b..7f39c8f7f036d 100644
> --- a/arch/arm64/tools/sysreg
> +++ b/arch/arm64/tools/sysreg
> @@ -2531,6 +2531,74 @@ Field 1 AFSR1_EL1
> Field 0 AFSR0_EL1
> EndSysregFields
>
> +Sysreg HCR_EL2 3 4 1 1 0
> +Field 63:60 TWEDEL
> +Field 59 TWEDEn
> +Field 58 TID5
> +Field 57 DCT
> +Field 56 ATA
> +Field 55 TTLBOS
> +Field 54 TTLBIS
> +Field 53 EnSCXT
> +Field 52 TOCU
> +Field 51 AMVOFFEN
> +Field 50 TICAB
> +Field 49 TID4
> +Field 48 GPF
> +Field 47 FIEN
> +Field 46 FWB
> +Field 45 NV2
> +Field 44 AT
> +Field 43 NV1
> +Field 42 NV
> +Field 41 API
> +Field 40 APK
> +Field 39 TME
> +Field 38 MIOCNCE
> +Field 37 TEA
> +Field 36 TERR
> +Field 35 TLOR
> +Field 34 E2H
> +Field 33 ID
> +Field 32 CD
> +Field 31 RW
> +Field 30 TRVM
> +Field 29 HCD
> +Field 28 TDZ
> +Field 27 TGE
> +Field 26 TVM
> +Field 25 TTLB
> +Field 24 TPU
> +Field 23 TPCP
> +Field 22 TSW
> +Field 21 TACR
> +Field 20 TIDCP
> +Field 19 TSC
> +Field 18 TID3
> +Field 17 TID2
> +Field 16 TID1
> +Field 15 TID0
> +Field 14 TWE
> +Field 13 TWI
> +Field 12 DC
> +UnsignedEnum 11:10 BSU
> + 0b00 NONE
> + 0b01 IS
> + 0b10 OS
> + 0b11 FS
> +EndEnum
> +Field 9 FB
> +Field 8 VSE
> +Field 7 VI
> +Field 6 VF
> +Field 5 AMO
> +Field 4 IMO
> +Field 3 FMO
> +Field 2 PTW
> +Field 1 SWIO
> +Field 0 VM
> +EndSysreg
> +
> Sysreg MDCR_EL2 3 4 1 1 1
> Res0 63:51
> Field 50 EnSTEPOP
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH v3 24/42] KVM: arm64: Unconditionally configure fine-grain traps
2025-04-29 13:49 ` Marc Zyngier
@ 2025-04-29 14:09 ` Ben Horgan
0 siblings, 0 replies; 71+ messages in thread
From: Ben Horgan @ 2025-04-29 14:09 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, kvm, linux-arm-kernel, Joey Gouly, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Mark Rutland, Fuad Tabba, Will Deacon,
Catalin Marinas
On 4/29/25 14:49, Marc Zyngier wrote:
> On Tue, 29 Apr 2025 14:08:27 +0100,
> Ben Horgan <ben.horgan@arm.com> wrote:
>>> - __deactivate_fgt(hctxt, vcpu, kvm, HFGRTR_EL2);
>>> - if (cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38))
>> Don't we need to continue considering the ampere errata here? Or, at
>> least worth a mention in the commit message.
>
> The FGT registers are always context switched, so whatever was saved
> *before* the workaround was applied in __activate_traps_hfgxtr() is
> blindly restored...
>
>>> - write_sysreg_s(ctxt_sys_reg(hctxt, HFGWTR_EL2), SYS_HFGWTR_EL2);
>
> ... and this write always happens.
Thanks for the explanation. I now agree this code is correct.
>
> M.
>
Thanks,
Ben
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH v3 09/42] arm64: sysreg: Add registers trapped by HDFG{R,W}TR2_EL2
2025-04-29 13:07 ` Ben Horgan
@ 2025-04-29 14:10 ` Marc Zyngier
0 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-29 14:10 UTC (permalink / raw)
To: Ben Horgan
Cc: kvmarm, kvm, linux-arm-kernel, Joey Gouly, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Mark Rutland, Fuad Tabba, Will Deacon,
Catalin Marinas
On Tue, 29 Apr 2025 14:07:39 +0100,
Ben Horgan <ben.horgan@arm.com> wrote:
>
> Hi Marc,
>
> On 4/26/25 13:28, Marc Zyngier wrote:
> > +Sysreg SPMCR_EL0 2 3 9 12 0
> > +Res0 63:12
> > +Field 11 TRO
> > +Field 10 HDBG
> > +Field 9 FZO
> > +Field 8 NA
> > +Res0 7:5
> Nit: Trailing whitespace. There are a few other places on Res0
> lines. Maybe your generation script could be tweaked.
Yeah, this is clearly not great. But this is buried really deep in a
jq script, which is a write-only language, hence hard to fix! ;-)
"git rebase --whitespace=fix" does the trick for now.
Thanks for the heads up,
M.
--
Without deviation from the norm, progress is not possible.
^ permalink raw reply [flat|nested] 71+ messages in thread
* Re: [PATCH v3 04/42] arm64: sysreg: Replace HGFxTR_EL2 with HFG{R,W}TR_EL2
2025-04-26 12:27 ` [PATCH v3 04/42] arm64: sysreg: Replace HGFxTR_EL2 with HFG{R,W}TR_EL2 Marc Zyngier
2025-04-29 13:07 ` Ben Horgan
@ 2025-04-29 14:26 ` Joey Gouly
2025-05-01 13:20 ` Marc Zyngier
1 sibling, 1 reply; 71+ messages in thread
From: Joey Gouly @ 2025-04-29 14:26 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, kvm, linux-arm-kernel, Suzuki K Poulose, Oliver Upton,
Zenghui Yu, Mark Rutland, Fuad Tabba, Will Deacon,
Catalin Marinas
On Sat, Apr 26, 2025 at 01:27:58PM +0100, Marc Zyngier wrote:
> Treating HFGRTR_EL2 and HFGWTR_EL2 identically was a mistake.
> It makes things hard to reason about, has the potential to
> introduce bugs by giving a meaning to bits that are really reserved,
> and is in general a bad description of the architecture.
>
> Given that #defines are cheap, let's describe both registers as
> intended by the architecture, and repaint all the existing uses.
>
> Yes, this is painful.
>
> The registers themselves are generated from the JSON file in
> an automated way.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/include/asm/el2_setup.h | 14 +-
> arch/arm64/include/asm/kvm_arm.h | 4 +-
> arch/arm64/include/asm/kvm_host.h | 3 +-
> arch/arm64/kvm/emulate-nested.c | 154 +++++++++----------
> arch/arm64/kvm/hyp/include/hyp/switch.h | 4 +-
> arch/arm64/kvm/hyp/vgic-v3-sr.c | 8 +-
> arch/arm64/kvm/nested.c | 42 ++---
> arch/arm64/kvm/sys_regs.c | 20 +--
> arch/arm64/tools/sysreg | 194 +++++++++++++++---------
> 9 files changed, 250 insertions(+), 193 deletions(-)
>
> diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
> index ebceaae3c749b..055e69a4184ce 100644
> --- a/arch/arm64/include/asm/el2_setup.h
> +++ b/arch/arm64/include/asm/el2_setup.h
> @@ -213,8 +213,8 @@
> cbz x1, .Lskip_debug_fgt_\@
>
> /* Disable nVHE traps of TPIDR2 and SMPRI */
> - orr x0, x0, #HFGxTR_EL2_nSMPRI_EL1_MASK
> - orr x0, x0, #HFGxTR_EL2_nTPIDR2_EL0_MASK
> + orr x0, x0, #HFGRTR_EL2_nSMPRI_EL1_MASK
> + orr x0, x0, #HFGRTR_EL2_nTPIDR2_EL0_MASK
>
> .Lskip_debug_fgt_\@:
> mrs_s x1, SYS_ID_AA64MMFR3_EL1
> @@ -222,8 +222,8 @@
> cbz x1, .Lskip_pie_fgt_\@
>
> /* Disable trapping of PIR_EL1 / PIRE0_EL1 */
> - orr x0, x0, #HFGxTR_EL2_nPIR_EL1
> - orr x0, x0, #HFGxTR_EL2_nPIRE0_EL1
> + orr x0, x0, #HFGRTR_EL2_nPIR_EL1
> + orr x0, x0, #HFGRTR_EL2_nPIRE0_EL1
>
> .Lskip_pie_fgt_\@:
> mrs_s x1, SYS_ID_AA64MMFR3_EL1
> @@ -231,7 +231,7 @@
> cbz x1, .Lskip_poe_fgt_\@
>
> /* Disable trapping of POR_EL0 */
> - orr x0, x0, #HFGxTR_EL2_nPOR_EL0
> + orr x0, x0, #HFGRTR_EL2_nPOR_EL0
>
> .Lskip_poe_fgt_\@:
> /* GCS depends on PIE so we don't check it if PIE is absent */
> @@ -240,8 +240,8 @@
> cbz x1, .Lset_fgt_\@
>
> /* Disable traps of access to GCS registers at EL0 and EL1 */
> - orr x0, x0, #HFGxTR_EL2_nGCS_EL1_MASK
> - orr x0, x0, #HFGxTR_EL2_nGCS_EL0_MASK
> + orr x0, x0, #HFGRTR_EL2_nGCS_EL1_MASK
> + orr x0, x0, #HFGRTR_EL2_nGCS_EL0_MASK
>
> .Lset_fgt_\@:
> msr_s SYS_HFGRTR_EL2, x0
We still treat them as the same here, funny that the diff cut off the next line:
msr_s SYS_HFGWTR_EL2, x0
Not saying you should do anything about it, I think it's fine.
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index f36d067967c33..43a630b940bfb 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -325,7 +325,7 @@
> * Once we get to a point where the two describe the same thing, we'll
> * merge the definitions. One day.
> */
> -#define __HFGRTR_EL2_RES0 HFGxTR_EL2_RES0
> +#define __HFGRTR_EL2_RES0 HFGRTR_EL2_RES0
> #define __HFGRTR_EL2_MASK GENMASK(49, 0)
> #define __HFGRTR_EL2_nMASK ~(__HFGRTR_EL2_RES0 | __HFGRTR_EL2_MASK)
>
> @@ -336,7 +336,7 @@
> #define __HFGRTR_ONLY_MASK (BIT(46) | BIT(42) | BIT(40) | BIT(28) | \
> GENMASK(26, 25) | BIT(21) | BIT(18) | \
> GENMASK(15, 14) | GENMASK(10, 9) | BIT(2))
> -#define __HFGWTR_EL2_RES0 (__HFGRTR_EL2_RES0 | __HFGRTR_ONLY_MASK)
> +#define __HFGWTR_EL2_RES0 HFGWTR_EL2_RES0
> #define __HFGWTR_EL2_MASK (__HFGRTR_EL2_MASK & ~__HFGRTR_ONLY_MASK)
> #define __HFGWTR_EL2_nMASK ~(__HFGWTR_EL2_RES0 | __HFGWTR_EL2_MASK)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index e98cfe7855a62..7a1ef5be7efb2 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -273,7 +273,8 @@ struct kvm_sysreg_masks;
>
> enum fgt_group_id {
> __NO_FGT_GROUP__,
> - HFGxTR_GROUP,
> + HFGRTR_GROUP,
> + HFGWTR_GROUP = HFGRTR_GROUP,
I think this change makes most of the diffs using this enum more confusing, but
it also seems to align the code more closely with HDFGRTR_EL2 and HDFGWTR_EL2.
> HDFGRTR_GROUP,
> HDFGWTR_GROUP = HDFGRTR_GROUP,
> HFGITR_GROUP,
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index 0fcfcc0478f94..efe1eb3f1bd07 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -1296,81 +1296,81 @@ enum fg_filter_id {
>
> static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
> /* HFGRTR_EL2, HFGWTR_EL2 */
> - SR_FGT(SYS_AMAIR2_EL1, HFGxTR, nAMAIR2_EL1, 0),
> - SR_FGT(SYS_MAIR2_EL1, HFGxTR, nMAIR2_EL1, 0),
> - SR_FGT(SYS_S2POR_EL1, HFGxTR, nS2POR_EL1, 0),
> - SR_FGT(SYS_POR_EL1, HFGxTR, nPOR_EL1, 0),
> - SR_FGT(SYS_POR_EL0, HFGxTR, nPOR_EL0, 0),
> - SR_FGT(SYS_PIR_EL1, HFGxTR, nPIR_EL1, 0),
> - SR_FGT(SYS_PIRE0_EL1, HFGxTR, nPIRE0_EL1, 0),
> - SR_FGT(SYS_RCWMASK_EL1, HFGxTR, nRCWMASK_EL1, 0),
> - SR_FGT(SYS_TPIDR2_EL0, HFGxTR, nTPIDR2_EL0, 0),
> - SR_FGT(SYS_SMPRI_EL1, HFGxTR, nSMPRI_EL1, 0),
> - SR_FGT(SYS_GCSCR_EL1, HFGxTR, nGCS_EL1, 0),
> - SR_FGT(SYS_GCSPR_EL1, HFGxTR, nGCS_EL1, 0),
> - SR_FGT(SYS_GCSCRE0_EL1, HFGxTR, nGCS_EL0, 0),
> - SR_FGT(SYS_GCSPR_EL0, HFGxTR, nGCS_EL0, 0),
> - SR_FGT(SYS_ACCDATA_EL1, HFGxTR, nACCDATA_EL1, 0),
> - SR_FGT(SYS_ERXADDR_EL1, HFGxTR, ERXADDR_EL1, 1),
> - SR_FGT(SYS_ERXPFGCDN_EL1, HFGxTR, ERXPFGCDN_EL1, 1),
> - SR_FGT(SYS_ERXPFGCTL_EL1, HFGxTR, ERXPFGCTL_EL1, 1),
> - SR_FGT(SYS_ERXPFGF_EL1, HFGxTR, ERXPFGF_EL1, 1),
> - SR_FGT(SYS_ERXMISC0_EL1, HFGxTR, ERXMISCn_EL1, 1),
> - SR_FGT(SYS_ERXMISC1_EL1, HFGxTR, ERXMISCn_EL1, 1),
> - SR_FGT(SYS_ERXMISC2_EL1, HFGxTR, ERXMISCn_EL1, 1),
> - SR_FGT(SYS_ERXMISC3_EL1, HFGxTR, ERXMISCn_EL1, 1),
> - SR_FGT(SYS_ERXSTATUS_EL1, HFGxTR, ERXSTATUS_EL1, 1),
> - SR_FGT(SYS_ERXCTLR_EL1, HFGxTR, ERXCTLR_EL1, 1),
> - SR_FGT(SYS_ERXFR_EL1, HFGxTR, ERXFR_EL1, 1),
> - SR_FGT(SYS_ERRSELR_EL1, HFGxTR, ERRSELR_EL1, 1),
> - SR_FGT(SYS_ERRIDR_EL1, HFGxTR, ERRIDR_EL1, 1),
> - SR_FGT(SYS_ICC_IGRPEN0_EL1, HFGxTR, ICC_IGRPENn_EL1, 1),
> - SR_FGT(SYS_ICC_IGRPEN1_EL1, HFGxTR, ICC_IGRPENn_EL1, 1),
> - SR_FGT(SYS_VBAR_EL1, HFGxTR, VBAR_EL1, 1),
> - SR_FGT(SYS_TTBR1_EL1, HFGxTR, TTBR1_EL1, 1),
> - SR_FGT(SYS_TTBR0_EL1, HFGxTR, TTBR0_EL1, 1),
> - SR_FGT(SYS_TPIDR_EL0, HFGxTR, TPIDR_EL0, 1),
> - SR_FGT(SYS_TPIDRRO_EL0, HFGxTR, TPIDRRO_EL0, 1),
> - SR_FGT(SYS_TPIDR_EL1, HFGxTR, TPIDR_EL1, 1),
> - SR_FGT(SYS_TCR_EL1, HFGxTR, TCR_EL1, 1),
> - SR_FGT(SYS_TCR2_EL1, HFGxTR, TCR_EL1, 1),
> - SR_FGT(SYS_SCXTNUM_EL0, HFGxTR, SCXTNUM_EL0, 1),
> - SR_FGT(SYS_SCXTNUM_EL1, HFGxTR, SCXTNUM_EL1, 1),
> - SR_FGT(SYS_SCTLR_EL1, HFGxTR, SCTLR_EL1, 1),
> - SR_FGT(SYS_REVIDR_EL1, HFGxTR, REVIDR_EL1, 1),
> - SR_FGT(SYS_PAR_EL1, HFGxTR, PAR_EL1, 1),
> - SR_FGT(SYS_MPIDR_EL1, HFGxTR, MPIDR_EL1, 1),
> - SR_FGT(SYS_MIDR_EL1, HFGxTR, MIDR_EL1, 1),
> - SR_FGT(SYS_MAIR_EL1, HFGxTR, MAIR_EL1, 1),
> - SR_FGT(SYS_LORSA_EL1, HFGxTR, LORSA_EL1, 1),
> - SR_FGT(SYS_LORN_EL1, HFGxTR, LORN_EL1, 1),
> - SR_FGT(SYS_LORID_EL1, HFGxTR, LORID_EL1, 1),
> - SR_FGT(SYS_LOREA_EL1, HFGxTR, LOREA_EL1, 1),
> - SR_FGT(SYS_LORC_EL1, HFGxTR, LORC_EL1, 1),
> - SR_FGT(SYS_ISR_EL1, HFGxTR, ISR_EL1, 1),
> - SR_FGT(SYS_FAR_EL1, HFGxTR, FAR_EL1, 1),
> - SR_FGT(SYS_ESR_EL1, HFGxTR, ESR_EL1, 1),
> - SR_FGT(SYS_DCZID_EL0, HFGxTR, DCZID_EL0, 1),
> - SR_FGT(SYS_CTR_EL0, HFGxTR, CTR_EL0, 1),
> - SR_FGT(SYS_CSSELR_EL1, HFGxTR, CSSELR_EL1, 1),
> - SR_FGT(SYS_CPACR_EL1, HFGxTR, CPACR_EL1, 1),
> - SR_FGT(SYS_CONTEXTIDR_EL1, HFGxTR, CONTEXTIDR_EL1, 1),
> - SR_FGT(SYS_CLIDR_EL1, HFGxTR, CLIDR_EL1, 1),
> - SR_FGT(SYS_CCSIDR_EL1, HFGxTR, CCSIDR_EL1, 1),
> - SR_FGT(SYS_APIBKEYLO_EL1, HFGxTR, APIBKey, 1),
> - SR_FGT(SYS_APIBKEYHI_EL1, HFGxTR, APIBKey, 1),
> - SR_FGT(SYS_APIAKEYLO_EL1, HFGxTR, APIAKey, 1),
> - SR_FGT(SYS_APIAKEYHI_EL1, HFGxTR, APIAKey, 1),
> - SR_FGT(SYS_APGAKEYLO_EL1, HFGxTR, APGAKey, 1),
> - SR_FGT(SYS_APGAKEYHI_EL1, HFGxTR, APGAKey, 1),
> - SR_FGT(SYS_APDBKEYLO_EL1, HFGxTR, APDBKey, 1),
> - SR_FGT(SYS_APDBKEYHI_EL1, HFGxTR, APDBKey, 1),
> - SR_FGT(SYS_APDAKEYLO_EL1, HFGxTR, APDAKey, 1),
> - SR_FGT(SYS_APDAKEYHI_EL1, HFGxTR, APDAKey, 1),
> - SR_FGT(SYS_AMAIR_EL1, HFGxTR, AMAIR_EL1, 1),
> - SR_FGT(SYS_AIDR_EL1, HFGxTR, AIDR_EL1, 1),
> - SR_FGT(SYS_AFSR1_EL1, HFGxTR, AFSR1_EL1, 1),
> - SR_FGT(SYS_AFSR0_EL1, HFGxTR, AFSR0_EL1, 1),
> + SR_FGT(SYS_AMAIR2_EL1, HFGRTR, nAMAIR2_EL1, 0),
> + SR_FGT(SYS_MAIR2_EL1, HFGRTR, nMAIR2_EL1, 0),
> + SR_FGT(SYS_S2POR_EL1, HFGRTR, nS2POR_EL1, 0),
> + SR_FGT(SYS_POR_EL1, HFGRTR, nPOR_EL1, 0),
> + SR_FGT(SYS_POR_EL0, HFGRTR, nPOR_EL0, 0),
> + SR_FGT(SYS_PIR_EL1, HFGRTR, nPIR_EL1, 0),
> + SR_FGT(SYS_PIRE0_EL1, HFGRTR, nPIRE0_EL1, 0),
> + SR_FGT(SYS_RCWMASK_EL1, HFGRTR, nRCWMASK_EL1, 0),
> + SR_FGT(SYS_TPIDR2_EL0, HFGRTR, nTPIDR2_EL0, 0),
> + SR_FGT(SYS_SMPRI_EL1, HFGRTR, nSMPRI_EL1, 0),
> + SR_FGT(SYS_GCSCR_EL1, HFGRTR, nGCS_EL1, 0),
> + SR_FGT(SYS_GCSPR_EL1, HFGRTR, nGCS_EL1, 0),
> + SR_FGT(SYS_GCSCRE0_EL1, HFGRTR, nGCS_EL0, 0),
> + SR_FGT(SYS_GCSPR_EL0, HFGRTR, nGCS_EL0, 0),
> + SR_FGT(SYS_ACCDATA_EL1, HFGRTR, nACCDATA_EL1, 0),
> + SR_FGT(SYS_ERXADDR_EL1, HFGRTR, ERXADDR_EL1, 1),
> + SR_FGT(SYS_ERXPFGCDN_EL1, HFGRTR, ERXPFGCDN_EL1, 1),
> + SR_FGT(SYS_ERXPFGCTL_EL1, HFGRTR, ERXPFGCTL_EL1, 1),
> + SR_FGT(SYS_ERXPFGF_EL1, HFGRTR, ERXPFGF_EL1, 1),
> + SR_FGT(SYS_ERXMISC0_EL1, HFGRTR, ERXMISCn_EL1, 1),
> + SR_FGT(SYS_ERXMISC1_EL1, HFGRTR, ERXMISCn_EL1, 1),
> + SR_FGT(SYS_ERXMISC2_EL1, HFGRTR, ERXMISCn_EL1, 1),
> + SR_FGT(SYS_ERXMISC3_EL1, HFGRTR, ERXMISCn_EL1, 1),
> + SR_FGT(SYS_ERXSTATUS_EL1, HFGRTR, ERXSTATUS_EL1, 1),
> + SR_FGT(SYS_ERXCTLR_EL1, HFGRTR, ERXCTLR_EL1, 1),
> + SR_FGT(SYS_ERXFR_EL1, HFGRTR, ERXFR_EL1, 1),
> + SR_FGT(SYS_ERRSELR_EL1, HFGRTR, ERRSELR_EL1, 1),
> + SR_FGT(SYS_ERRIDR_EL1, HFGRTR, ERRIDR_EL1, 1),
> + SR_FGT(SYS_ICC_IGRPEN0_EL1, HFGRTR, ICC_IGRPENn_EL1, 1),
> + SR_FGT(SYS_ICC_IGRPEN1_EL1, HFGRTR, ICC_IGRPENn_EL1, 1),
> + SR_FGT(SYS_VBAR_EL1, HFGRTR, VBAR_EL1, 1),
> + SR_FGT(SYS_TTBR1_EL1, HFGRTR, TTBR1_EL1, 1),
> + SR_FGT(SYS_TTBR0_EL1, HFGRTR, TTBR0_EL1, 1),
> + SR_FGT(SYS_TPIDR_EL0, HFGRTR, TPIDR_EL0, 1),
> + SR_FGT(SYS_TPIDRRO_EL0, HFGRTR, TPIDRRO_EL0, 1),
> + SR_FGT(SYS_TPIDR_EL1, HFGRTR, TPIDR_EL1, 1),
> + SR_FGT(SYS_TCR_EL1, HFGRTR, TCR_EL1, 1),
> + SR_FGT(SYS_TCR2_EL1, HFGRTR, TCR_EL1, 1),
> + SR_FGT(SYS_SCXTNUM_EL0, HFGRTR, SCXTNUM_EL0, 1),
> + SR_FGT(SYS_SCXTNUM_EL1, HFGRTR, SCXTNUM_EL1, 1),
> + SR_FGT(SYS_SCTLR_EL1, HFGRTR, SCTLR_EL1, 1),
> + SR_FGT(SYS_REVIDR_EL1, HFGRTR, REVIDR_EL1, 1),
> + SR_FGT(SYS_PAR_EL1, HFGRTR, PAR_EL1, 1),
> + SR_FGT(SYS_MPIDR_EL1, HFGRTR, MPIDR_EL1, 1),
> + SR_FGT(SYS_MIDR_EL1, HFGRTR, MIDR_EL1, 1),
> + SR_FGT(SYS_MAIR_EL1, HFGRTR, MAIR_EL1, 1),
> + SR_FGT(SYS_LORSA_EL1, HFGRTR, LORSA_EL1, 1),
> + SR_FGT(SYS_LORN_EL1, HFGRTR, LORN_EL1, 1),
> + SR_FGT(SYS_LORID_EL1, HFGRTR, LORID_EL1, 1),
> + SR_FGT(SYS_LOREA_EL1, HFGRTR, LOREA_EL1, 1),
> + SR_FGT(SYS_LORC_EL1, HFGRTR, LORC_EL1, 1),
> + SR_FGT(SYS_ISR_EL1, HFGRTR, ISR_EL1, 1),
> + SR_FGT(SYS_FAR_EL1, HFGRTR, FAR_EL1, 1),
> + SR_FGT(SYS_ESR_EL1, HFGRTR, ESR_EL1, 1),
> + SR_FGT(SYS_DCZID_EL0, HFGRTR, DCZID_EL0, 1),
> + SR_FGT(SYS_CTR_EL0, HFGRTR, CTR_EL0, 1),
> + SR_FGT(SYS_CSSELR_EL1, HFGRTR, CSSELR_EL1, 1),
> + SR_FGT(SYS_CPACR_EL1, HFGRTR, CPACR_EL1, 1),
> + SR_FGT(SYS_CONTEXTIDR_EL1, HFGRTR, CONTEXTIDR_EL1, 1),
> + SR_FGT(SYS_CLIDR_EL1, HFGRTR, CLIDR_EL1, 1),
> + SR_FGT(SYS_CCSIDR_EL1, HFGRTR, CCSIDR_EL1, 1),
> + SR_FGT(SYS_APIBKEYLO_EL1, HFGRTR, APIBKey, 1),
> + SR_FGT(SYS_APIBKEYHI_EL1, HFGRTR, APIBKey, 1),
> + SR_FGT(SYS_APIAKEYLO_EL1, HFGRTR, APIAKey, 1),
> + SR_FGT(SYS_APIAKEYHI_EL1, HFGRTR, APIAKey, 1),
> + SR_FGT(SYS_APGAKEYLO_EL1, HFGRTR, APGAKey, 1),
> + SR_FGT(SYS_APGAKEYHI_EL1, HFGRTR, APGAKey, 1),
> + SR_FGT(SYS_APDBKEYLO_EL1, HFGRTR, APDBKey, 1),
> + SR_FGT(SYS_APDBKEYHI_EL1, HFGRTR, APDBKey, 1),
> + SR_FGT(SYS_APDAKEYLO_EL1, HFGRTR, APDAKey, 1),
> + SR_FGT(SYS_APDAKEYHI_EL1, HFGRTR, APDAKey, 1),
> + SR_FGT(SYS_AMAIR_EL1, HFGRTR, AMAIR_EL1, 1),
> + SR_FGT(SYS_AIDR_EL1, HFGRTR, AIDR_EL1, 1),
> + SR_FGT(SYS_AFSR1_EL1, HFGRTR, AFSR1_EL1, 1),
> + SR_FGT(SYS_AFSR0_EL1, HFGRTR, AFSR0_EL1, 1),
> /* HFGITR_EL2 */
> SR_FGT(OP_AT_S1E1A, HFGITR, ATS1E1A, 1),
> SR_FGT(OP_COSP_RCTX, HFGITR, COSPRCTX, 1),
> @@ -2243,7 +2243,7 @@ static bool check_fgt_bit(struct kvm_vcpu *vcpu, bool is_read,
> return false;
>
> switch ((enum fgt_group_id)tc.fgt) {
> - case HFGxTR_GROUP:
> + case HFGRTR_GROUP:
> sr = is_read ? HFGRTR_EL2 : HFGWTR_EL2;
> break;
>
> @@ -2319,7 +2319,7 @@ bool triage_sysreg_trap(struct kvm_vcpu *vcpu, int *sr_index)
> case __NO_FGT_GROUP__:
> break;
>
> - case HFGxTR_GROUP:
> + case HFGRTR_GROUP:
> if (is_read)
> val = __vcpu_sys_reg(vcpu, HFGRTR_EL2);
> else
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index b741ea6aefa58..3150e42d79341 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -79,7 +79,7 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
> switch(reg) { \
> case HFGRTR_EL2: \
> case HFGWTR_EL2: \
> - id = HFGxTR_GROUP; \
> + id = HFGRTR_GROUP; \
> break; \
> case HFGITR_EL2: \
> id = HFGITR_GROUP; \
> @@ -166,7 +166,7 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
> update_fgt_traps(hctxt, vcpu, kvm, HFGRTR_EL2);
> update_fgt_traps_cs(hctxt, vcpu, kvm, HFGWTR_EL2, 0,
> cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38) ?
> - HFGxTR_EL2_TCR_EL1_MASK : 0);
> + HFGWTR_EL2_TCR_EL1_MASK : 0);
> update_fgt_traps(hctxt, vcpu, kvm, HFGITR_EL2);
> update_fgt_traps(hctxt, vcpu, kvm, HDFGRTR_EL2);
> update_fgt_traps(hctxt, vcpu, kvm, HDFGWTR_EL2);
> diff --git a/arch/arm64/kvm/hyp/vgic-v3-sr.c b/arch/arm64/kvm/hyp/vgic-v3-sr.c
> index ed363aa3027e5..f38565e28a23a 100644
> --- a/arch/arm64/kvm/hyp/vgic-v3-sr.c
> +++ b/arch/arm64/kvm/hyp/vgic-v3-sr.c
> @@ -1052,11 +1052,11 @@ static bool __vgic_v3_check_trap_forwarding(struct kvm_vcpu *vcpu,
> switch (sysreg) {
> case SYS_ICC_IGRPEN0_EL1:
> if (is_read &&
> - (__vcpu_sys_reg(vcpu, HFGRTR_EL2) & HFGxTR_EL2_ICC_IGRPENn_EL1))
> + (__vcpu_sys_reg(vcpu, HFGRTR_EL2) & HFGRTR_EL2_ICC_IGRPENn_EL1))
> return true;
>
> if (!is_read &&
> - (__vcpu_sys_reg(vcpu, HFGWTR_EL2) & HFGxTR_EL2_ICC_IGRPENn_EL1))
> + (__vcpu_sys_reg(vcpu, HFGWTR_EL2) & HFGWTR_EL2_ICC_IGRPENn_EL1))
> return true;
>
> fallthrough;
> @@ -1073,11 +1073,11 @@ static bool __vgic_v3_check_trap_forwarding(struct kvm_vcpu *vcpu,
>
> case SYS_ICC_IGRPEN1_EL1:
> if (is_read &&
> - (__vcpu_sys_reg(vcpu, HFGRTR_EL2) & HFGxTR_EL2_ICC_IGRPENn_EL1))
> + (__vcpu_sys_reg(vcpu, HFGRTR_EL2) & HFGRTR_EL2_ICC_IGRPENn_EL1))
> return true;
>
> if (!is_read &&
> - (__vcpu_sys_reg(vcpu, HFGWTR_EL2) & HFGxTR_EL2_ICC_IGRPENn_EL1))
> + (__vcpu_sys_reg(vcpu, HFGWTR_EL2) & HFGWTR_EL2_ICC_IGRPENn_EL1))
> return true;
>
> fallthrough;
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index 4a3fc11f7ecf3..16f6129c70b59 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -1103,40 +1103,40 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
> res0 = res1 = 0;
> if (!(kvm_vcpu_has_feature(kvm, KVM_ARM_VCPU_PTRAUTH_ADDRESS) &&
> kvm_vcpu_has_feature(kvm, KVM_ARM_VCPU_PTRAUTH_GENERIC)))
> - res0 |= (HFGxTR_EL2_APDAKey | HFGxTR_EL2_APDBKey |
> - HFGxTR_EL2_APGAKey | HFGxTR_EL2_APIAKey |
> - HFGxTR_EL2_APIBKey);
> + res0 |= (HFGRTR_EL2_APDAKey | HFGRTR_EL2_APDBKey |
> + HFGRTR_EL2_APGAKey | HFGRTR_EL2_APIAKey |
> + HFGRTR_EL2_APIBKey);
> if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, LO, IMP))
> - res0 |= (HFGxTR_EL2_LORC_EL1 | HFGxTR_EL2_LOREA_EL1 |
> - HFGxTR_EL2_LORID_EL1 | HFGxTR_EL2_LORN_EL1 |
> - HFGxTR_EL2_LORSA_EL1);
> + res0 |= (HFGRTR_EL2_LORC_EL1 | HFGRTR_EL2_LOREA_EL1 |
> + HFGRTR_EL2_LORID_EL1 | HFGRTR_EL2_LORN_EL1 |
> + HFGRTR_EL2_LORSA_EL1);
> if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, CSV2, CSV2_2) &&
> !kvm_has_feat(kvm, ID_AA64PFR1_EL1, CSV2_frac, CSV2_1p2))
> - res0 |= (HFGxTR_EL2_SCXTNUM_EL1 | HFGxTR_EL2_SCXTNUM_EL0);
> + res0 |= (HFGRTR_EL2_SCXTNUM_EL1 | HFGRTR_EL2_SCXTNUM_EL0);
> if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, GIC, IMP))
> - res0 |= HFGxTR_EL2_ICC_IGRPENn_EL1;
> + res0 |= HFGRTR_EL2_ICC_IGRPENn_EL1;
> if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, IMP))
> - res0 |= (HFGxTR_EL2_ERRIDR_EL1 | HFGxTR_EL2_ERRSELR_EL1 |
> - HFGxTR_EL2_ERXFR_EL1 | HFGxTR_EL2_ERXCTLR_EL1 |
> - HFGxTR_EL2_ERXSTATUS_EL1 | HFGxTR_EL2_ERXMISCn_EL1 |
> - HFGxTR_EL2_ERXPFGF_EL1 | HFGxTR_EL2_ERXPFGCTL_EL1 |
> - HFGxTR_EL2_ERXPFGCDN_EL1 | HFGxTR_EL2_ERXADDR_EL1);
> + res0 |= (HFGRTR_EL2_ERRIDR_EL1 | HFGRTR_EL2_ERRSELR_EL1 |
> + HFGRTR_EL2_ERXFR_EL1 | HFGRTR_EL2_ERXCTLR_EL1 |
> + HFGRTR_EL2_ERXSTATUS_EL1 | HFGRTR_EL2_ERXMISCn_EL1 |
> + HFGRTR_EL2_ERXPFGF_EL1 | HFGRTR_EL2_ERXPFGCTL_EL1 |
> + HFGRTR_EL2_ERXPFGCDN_EL1 | HFGRTR_EL2_ERXADDR_EL1);
> if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_ACCDATA))
> - res0 |= HFGxTR_EL2_nACCDATA_EL1;
> + res0 |= HFGRTR_EL2_nACCDATA_EL1;
> if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, GCS, IMP))
> - res0 |= (HFGxTR_EL2_nGCS_EL0 | HFGxTR_EL2_nGCS_EL1);
> + res0 |= (HFGRTR_EL2_nGCS_EL0 | HFGRTR_EL2_nGCS_EL1);
> if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, SME, IMP))
> - res0 |= (HFGxTR_EL2_nSMPRI_EL1 | HFGxTR_EL2_nTPIDR2_EL0);
> + res0 |= (HFGRTR_EL2_nSMPRI_EL1 | HFGRTR_EL2_nTPIDR2_EL0);
> if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, THE, IMP))
> - res0 |= HFGxTR_EL2_nRCWMASK_EL1;
> + res0 |= HFGRTR_EL2_nRCWMASK_EL1;
> if (!kvm_has_s1pie(kvm))
> - res0 |= (HFGxTR_EL2_nPIRE0_EL1 | HFGxTR_EL2_nPIR_EL1);
> + res0 |= (HFGRTR_EL2_nPIRE0_EL1 | HFGRTR_EL2_nPIR_EL1);
> if (!kvm_has_s1poe(kvm))
> - res0 |= (HFGxTR_EL2_nPOR_EL0 | HFGxTR_EL2_nPOR_EL1);
> + res0 |= (HFGRTR_EL2_nPOR_EL0 | HFGRTR_EL2_nPOR_EL1);
> if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, S2POE, IMP))
> - res0 |= HFGxTR_EL2_nS2POR_EL1;
> + res0 |= HFGRTR_EL2_nS2POR_EL1;
> if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, AIE, IMP))
> - res0 |= (HFGxTR_EL2_nMAIR2_EL1 | HFGxTR_EL2_nAMAIR2_EL1);
> + res0 |= (HFGRTR_EL2_nMAIR2_EL1 | HFGRTR_EL2_nAMAIR2_EL1);
> set_sysreg_masks(kvm, HFGRTR_EL2, res0 | __HFGRTR_EL2_RES0, res1);
> set_sysreg_masks(kvm, HFGWTR_EL2, res0 | __HFGWTR_EL2_RES0, res1);
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 005ad28f73068..6e01b06bedcae 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -5147,12 +5147,12 @@ void kvm_calculate_traps(struct kvm_vcpu *vcpu)
> if (test_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags))
> goto out;
>
> - kvm->arch.fgu[HFGxTR_GROUP] = (HFGxTR_EL2_nAMAIR2_EL1 |
> - HFGxTR_EL2_nMAIR2_EL1 |
> - HFGxTR_EL2_nS2POR_EL1 |
> - HFGxTR_EL2_nACCDATA_EL1 |
> - HFGxTR_EL2_nSMPRI_EL1_MASK |
> - HFGxTR_EL2_nTPIDR2_EL0_MASK);
> + kvm->arch.fgu[HFGRTR_GROUP] = (HFGRTR_EL2_nAMAIR2_EL1 |
> + HFGRTR_EL2_nMAIR2_EL1 |
> + HFGRTR_EL2_nS2POR_EL1 |
> + HFGRTR_EL2_nACCDATA_EL1 |
> + HFGRTR_EL2_nSMPRI_EL1_MASK |
> + HFGRTR_EL2_nTPIDR2_EL0_MASK);
For example, here you see HFGRTR_GROUP, but it actually also applies to HFGWTR_GROUP.
>
> if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
> kvm->arch.fgu[HFGITR_GROUP] |= (HFGITR_EL2_TLBIRVAALE1OS|
> @@ -5188,12 +5188,12 @@ void kvm_calculate_traps(struct kvm_vcpu *vcpu)
> HFGITR_EL2_ATS1E1WP);
>
> if (!kvm_has_s1pie(kvm))
> - kvm->arch.fgu[HFGxTR_GROUP] |= (HFGxTR_EL2_nPIRE0_EL1 |
> - HFGxTR_EL2_nPIR_EL1);
> + kvm->arch.fgu[HFGRTR_GROUP] |= (HFGRTR_EL2_nPIRE0_EL1 |
> + HFGRTR_EL2_nPIR_EL1);
>
> if (!kvm_has_s1poe(kvm))
> - kvm->arch.fgu[HFGxTR_GROUP] |= (HFGxTR_EL2_nPOR_EL1 |
> - HFGxTR_EL2_nPOR_EL0);
> + kvm->arch.fgu[HFGRTR_GROUP] |= (HFGRTR_EL2_nPOR_EL1 |
> + HFGRTR_EL2_nPOR_EL0);
>
> if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, IMP))
> kvm->arch.fgu[HAFGRTR_GROUP] |= ~(HAFGRTR_EL2_RES0 |
> diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
> index 7f39c8f7f036d..e21e881314a33 100644
> --- a/arch/arm64/tools/sysreg
> +++ b/arch/arm64/tools/sysreg
> @@ -2464,73 +2464,6 @@ UnsignedEnum 2:0 F8S1
> EndEnum
> EndSysreg
>
> -SysregFields HFGxTR_EL2
> -Field 63 nAMAIR2_EL1
> -Field 62 nMAIR2_EL1
> -Field 61 nS2POR_EL1
> -Field 60 nPOR_EL1
> -Field 59 nPOR_EL0
> -Field 58 nPIR_EL1
> -Field 57 nPIRE0_EL1
> -Field 56 nRCWMASK_EL1
> -Field 55 nTPIDR2_EL0
> -Field 54 nSMPRI_EL1
> -Field 53 nGCS_EL1
> -Field 52 nGCS_EL0
> -Res0 51
> -Field 50 nACCDATA_EL1
> -Field 49 ERXADDR_EL1
> -Field 48 ERXPFGCDN_EL1
> -Field 47 ERXPFGCTL_EL1
> -Field 46 ERXPFGF_EL1
> -Field 45 ERXMISCn_EL1
> -Field 44 ERXSTATUS_EL1
> -Field 43 ERXCTLR_EL1
> -Field 42 ERXFR_EL1
> -Field 41 ERRSELR_EL1
> -Field 40 ERRIDR_EL1
> -Field 39 ICC_IGRPENn_EL1
> -Field 38 VBAR_EL1
> -Field 37 TTBR1_EL1
> -Field 36 TTBR0_EL1
> -Field 35 TPIDR_EL0
> -Field 34 TPIDRRO_EL0
> -Field 33 TPIDR_EL1
> -Field 32 TCR_EL1
> -Field 31 SCXTNUM_EL0
> -Field 30 SCXTNUM_EL1
> -Field 29 SCTLR_EL1
> -Field 28 REVIDR_EL1
> -Field 27 PAR_EL1
> -Field 26 MPIDR_EL1
> -Field 25 MIDR_EL1
> -Field 24 MAIR_EL1
> -Field 23 LORSA_EL1
> -Field 22 LORN_EL1
> -Field 21 LORID_EL1
> -Field 20 LOREA_EL1
> -Field 19 LORC_EL1
> -Field 18 ISR_EL1
> -Field 17 FAR_EL1
> -Field 16 ESR_EL1
> -Field 15 DCZID_EL0
> -Field 14 CTR_EL0
> -Field 13 CSSELR_EL1
> -Field 12 CPACR_EL1
> -Field 11 CONTEXTIDR_EL1
> -Field 10 CLIDR_EL1
> -Field 9 CCSIDR_EL1
> -Field 8 APIBKey
> -Field 7 APIAKey
> -Field 6 APGAKey
> -Field 5 APDBKey
> -Field 4 APDAKey
> -Field 3 AMAIR_EL1
> -Field 2 AIDR_EL1
> -Field 1 AFSR1_EL1
> -Field 0 AFSR0_EL1
> -EndSysregFields
> -
> Sysreg HCR_EL2 3 4 1 1 0
> Field 63:60 TWEDEL
> Field 59 TWEDEn
> @@ -2635,11 +2568,134 @@ Field 4:0 HPMN
> EndSysreg
>
> Sysreg HFGRTR_EL2 3 4 1 1 4
> -Fields HFGxTR_EL2
> +Field 63 nAMAIR2_EL1
> +Field 62 nMAIR2_EL1
> +Field 61 nS2POR_EL1
> +Field 60 nPOR_EL1
> +Field 59 nPOR_EL0
> +Field 58 nPIR_EL1
> +Field 57 nPIRE0_EL1
> +Field 56 nRCWMASK_EL1
> +Field 55 nTPIDR2_EL0
> +Field 54 nSMPRI_EL1
> +Field 53 nGCS_EL1
> +Field 52 nGCS_EL0
> +Res0 51
> +Field 50 nACCDATA_EL1
> +Field 49 ERXADDR_EL1
> +Field 48 ERXPFGCDN_EL1
> +Field 47 ERXPFGCTL_EL1
> +Field 46 ERXPFGF_EL1
> +Field 45 ERXMISCn_EL1
> +Field 44 ERXSTATUS_EL1
> +Field 43 ERXCTLR_EL1
> +Field 42 ERXFR_EL1
> +Field 41 ERRSELR_EL1
> +Field 40 ERRIDR_EL1
> +Field 39 ICC_IGRPENn_EL1
> +Field 38 VBAR_EL1
> +Field 37 TTBR1_EL1
> +Field 36 TTBR0_EL1
> +Field 35 TPIDR_EL0
> +Field 34 TPIDRRO_EL0
> +Field 33 TPIDR_EL1
> +Field 32 TCR_EL1
> +Field 31 SCXTNUM_EL0
> +Field 30 SCXTNUM_EL1
> +Field 29 SCTLR_EL1
> +Field 28 REVIDR_EL1
> +Field 27 PAR_EL1
> +Field 26 MPIDR_EL1
> +Field 25 MIDR_EL1
> +Field 24 MAIR_EL1
> +Field 23 LORSA_EL1
> +Field 22 LORN_EL1
> +Field 21 LORID_EL1
> +Field 20 LOREA_EL1
> +Field 19 LORC_EL1
> +Field 18 ISR_EL1
> +Field 17 FAR_EL1
> +Field 16 ESR_EL1
> +Field 15 DCZID_EL0
> +Field 14 CTR_EL0
> +Field 13 CSSELR_EL1
> +Field 12 CPACR_EL1
> +Field 11 CONTEXTIDR_EL1
> +Field 10 CLIDR_EL1
> +Field 9 CCSIDR_EL1
> +Field 8 APIBKey
> +Field 7 APIAKey
> +Field 6 APGAKey
> +Field 5 APDBKey
> +Field 4 APDAKey
> +Field 3 AMAIR_EL1
> +Field 2 AIDR_EL1
> +Field 1 AFSR1_EL1
> +Field 0 AFSR0_EL1
> EndSysreg
>
> Sysreg HFGWTR_EL2 3 4 1 1 5
> -Fields HFGxTR_EL2
> +Field 63 nAMAIR2_EL1
> +Field 62 nMAIR2_EL1
> +Field 61 nS2POR_EL1
> +Field 60 nPOR_EL1
> +Field 59 nPOR_EL0
> +Field 58 nPIR_EL1
> +Field 57 nPIRE0_EL1
> +Field 56 nRCWMASK_EL1
> +Field 55 nTPIDR2_EL0
> +Field 54 nSMPRI_EL1
> +Field 53 nGCS_EL1
> +Field 52 nGCS_EL0
> +Res0 51
> +Field 50 nACCDATA_EL1
> +Field 49 ERXADDR_EL1
> +Field 48 ERXPFGCDN_EL1
> +Field 47 ERXPFGCTL_EL1
> +Res0 46
> +Field 45 ERXMISCn_EL1
> +Field 44 ERXSTATUS_EL1
> +Field 43 ERXCTLR_EL1
> +Res0 42
> +Field 41 ERRSELR_EL1
> +Res0 40
> +Field 39 ICC_IGRPENn_EL1
> +Field 38 VBAR_EL1
> +Field 37 TTBR1_EL1
> +Field 36 TTBR0_EL1
> +Field 35 TPIDR_EL0
> +Field 34 TPIDRRO_EL0
> +Field 33 TPIDR_EL1
> +Field 32 TCR_EL1
> +Field 31 SCXTNUM_EL0
> +Field 30 SCXTNUM_EL1
> +Field 29 SCTLR_EL1
> +Res0 28
> +Field 27 PAR_EL1
> +Res0 26:25
> +Field 24 MAIR_EL1
> +Field 23 LORSA_EL1
> +Field 22 LORN_EL1
> +Res0 21
> +Field 20 LOREA_EL1
> +Field 19 LORC_EL1
> +Res0 18
> +Field 17 FAR_EL1
> +Field 16 ESR_EL1
> +Res0 15:14
> +Field 13 CSSELR_EL1
> +Field 12 CPACR_EL1
> +Field 11 CONTEXTIDR_EL1
> +Res0 10:9
> +Field 8 APIBKey
> +Field 7 APIAKey
> +Field 6 APGAKey
> +Field 5 APDBKey
> +Field 4 APDAKey
> +Field 3 AMAIR_EL1
> +Res0 2
> +Field 1 AFSR1_EL1
> +Field 0 AFSR0_EL1
> EndSysreg
>
> Sysreg HFGITR_EL2 3 4 1 1 6
> --
> 2.39.2
>
* Re: [PATCH v3 00/42] KVM: arm64: Revamp Fine Grained Trap handling
2025-04-29 7:34 ` Marc Zyngier
@ 2025-04-29 14:30 ` Ganapatrao Kulkarni
0 siblings, 0 replies; 71+ messages in thread
From: Ganapatrao Kulkarni @ 2025-04-29 14:30 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, kvm, linux-arm-kernel, Joey Gouly, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Mark Rutland, Fuad Tabba, Will Deacon,
Catalin Marinas
On 4/29/2025 1:04 PM, Marc Zyngier wrote:
>>
>> I am trying the nv-next branch and I believe these FGT-related changes
>> are merged. With this, the selftest arm64/set_id_regs is failing. From
>> initial debugging it seems the register accesses of SYS_CTR_EL0,
>> SYS_MIDR_EL1, SYS_REVIDR_EL1 and SYS_AIDR_EL1 from guest_code result in
>> a trap to EL2 (HCR_ID1,ID2 are set) that gets forwarded back to EL1.
>> Since no EL1 sync handler is installed in the test code, this results
>> in a hang (endless guest_exit/entry).
>
> Let's start by calling bullshit on the test itself:
>
> root@semi-fraudulent:/home/maz# grep AA64PFR0 /sys/kernel/debug/kvm/2008-4/idregs
> SYS_ID_AA64PFR0_EL1: 0000000020110000
>
> It basically disables anything 64bit at EL{0,1,2,3}. Frankly, all these
> tests are pure garbage. I'm baffled that anyone expects this crap to
> give any meaningful result.
>
>> It is due to the function "triage_sysreg_trap" returning true.
>>
>> When guest_code is in EL1 (the default case), it is due to the return in the if statement below.
>>
>> if (tc.fgt != __NO_FGT_GROUP__ &&
>> (vcpu->kvm->arch.fgu[tc.fgt] & BIT(tc.bit))) {
>> kvm_inject_undefined(vcpu);
>> return true;
>> }
>
> That explains why we end up here. The 64bit ISA is "disabled", a bunch
> of trap bits are advertised as depending on it, so the corresponding
> FGU bits are set to "emulate" the requested behaviour.
>
OK, I was comparing the fgu masks for this test case and a normal VM boot.
For this test, all HFGRTR bits were set in fgu.
Thanks, I had not noticed that AArch64 was disabled for the guest.
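
For reference, the endless exits only happen because the test never
installs a guest sync handler. A rough, untested sketch along these
lines (helper names assumed from the existing arm64 selftests, not
verified against this test) would at least turn the hang into a
visible failure:

/* Untested sketch; helper names assumed from existing arm64 selftests. */
static void guest_undef_handler(struct ex_regs *regs)
{
	GUEST_FAIL("Unexpected exception at PC 0x%lx", (unsigned long)regs->pc);
}

static void install_undef_handler(struct kvm_vm *vm, struct kvm_vcpu *vcpu)
{
	vm_init_descriptor_tables(vm);
	vcpu_init_descriptor_tables(vcpu);
	vm_install_exception_handler(vm, VECTOR_SYNC_CURRENT, guest_undef_handler);
}

That said, the real problem is the test configuration rather than the
missing handler.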
> Works as intended, and this proves once more that what we call testing
> is just horseshit.
>
> In retrospect, we should do a few things:
>
> - Prevent writes to ID_AA64PFR0_EL1 disabling the 64bit ISA, breaking
> this stupid test for good.
>
> - Flag all the FGT bits depending on FEAT_AA64EL1 as NEVER_FGU,
> because that shouldn't happen, by construction (there is no
> architecture revision where these sysregs are UNDEFined).
>
Yes we should.
> - Mark all these tests as unmaintained and deprecated, recognising that
> they are utterly pointless (optional).
>
Just wondering, should I continue to modify this test to run in vEL2?
> Full patch below.
>
> M.
>
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index d4e1218b004dd..666070d4ccd7f 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -295,34 +295,34 @@ static const struct reg_bits_to_feat_map hfgrtr_feat_map[] = {
> HFGRTR_EL2_APDBKey |
> HFGRTR_EL2_APDAKey,
> feat_pauth),
> - NEEDS_FEAT(HFGRTR_EL2_VBAR_EL1 |
> - HFGRTR_EL2_TTBR1_EL1 |
> - HFGRTR_EL2_TTBR0_EL1 |
> - HFGRTR_EL2_TPIDR_EL0 |
> - HFGRTR_EL2_TPIDRRO_EL0 |
> - HFGRTR_EL2_TPIDR_EL1 |
> - HFGRTR_EL2_TCR_EL1 |
> - HFGRTR_EL2_SCTLR_EL1 |
> - HFGRTR_EL2_REVIDR_EL1 |
> - HFGRTR_EL2_PAR_EL1 |
> - HFGRTR_EL2_MPIDR_EL1 |
> - HFGRTR_EL2_MIDR_EL1 |
> - HFGRTR_EL2_MAIR_EL1 |
> - HFGRTR_EL2_ISR_EL1 |
> - HFGRTR_EL2_FAR_EL1 |
> - HFGRTR_EL2_ESR_EL1 |
> - HFGRTR_EL2_DCZID_EL0 |
> - HFGRTR_EL2_CTR_EL0 |
> - HFGRTR_EL2_CSSELR_EL1 |
> - HFGRTR_EL2_CPACR_EL1 |
> - HFGRTR_EL2_CONTEXTIDR_EL1 |
> - HFGRTR_EL2_CLIDR_EL1 |
> - HFGRTR_EL2_CCSIDR_EL1 |
> - HFGRTR_EL2_AMAIR_EL1 |
> - HFGRTR_EL2_AIDR_EL1 |
> - HFGRTR_EL2_AFSR1_EL1 |
> - HFGRTR_EL2_AFSR0_EL1,
> - FEAT_AA64EL1),
> + NEEDS_FEAT_FLAG(HFGRTR_EL2_VBAR_EL1 |
> + HFGRTR_EL2_TTBR1_EL1 |
> + HFGRTR_EL2_TTBR0_EL1 |
> + HFGRTR_EL2_TPIDR_EL0 |
> + HFGRTR_EL2_TPIDRRO_EL0 |
> + HFGRTR_EL2_TPIDR_EL1 |
> + HFGRTR_EL2_TCR_EL1 |
> + HFGRTR_EL2_SCTLR_EL1 |
> + HFGRTR_EL2_REVIDR_EL1 |
> + HFGRTR_EL2_PAR_EL1 |
> + HFGRTR_EL2_MPIDR_EL1 |
> + HFGRTR_EL2_MIDR_EL1 |
> + HFGRTR_EL2_MAIR_EL1 |
> + HFGRTR_EL2_ISR_EL1 |
> + HFGRTR_EL2_FAR_EL1 |
> + HFGRTR_EL2_ESR_EL1 |
> + HFGRTR_EL2_DCZID_EL0 |
> + HFGRTR_EL2_CTR_EL0 |
> + HFGRTR_EL2_CSSELR_EL1 |
> + HFGRTR_EL2_CPACR_EL1 |
> + HFGRTR_EL2_CONTEXTIDR_EL1|
> + HFGRTR_EL2_CLIDR_EL1 |
> + HFGRTR_EL2_CCSIDR_EL1 |
> + HFGRTR_EL2_AMAIR_EL1 |
> + HFGRTR_EL2_AIDR_EL1 |
> + HFGRTR_EL2_AFSR1_EL1 |
> + HFGRTR_EL2_AFSR0_EL1,
> + NEVER_FGU, FEAT_AA64EL1),
> };
>
> static const struct reg_bits_to_feat_map hfgwtr_feat_map[] = {
> @@ -368,25 +368,25 @@ static const struct reg_bits_to_feat_map hfgwtr_feat_map[] = {
> HFGWTR_EL2_APDBKey |
> HFGWTR_EL2_APDAKey,
> feat_pauth),
> - NEEDS_FEAT(HFGWTR_EL2_VBAR_EL1 |
> - HFGWTR_EL2_TTBR1_EL1 |
> - HFGWTR_EL2_TTBR0_EL1 |
> - HFGWTR_EL2_TPIDR_EL0 |
> - HFGWTR_EL2_TPIDRRO_EL0 |
> - HFGWTR_EL2_TPIDR_EL1 |
> - HFGWTR_EL2_TCR_EL1 |
> - HFGWTR_EL2_SCTLR_EL1 |
> - HFGWTR_EL2_PAR_EL1 |
> - HFGWTR_EL2_MAIR_EL1 |
> - HFGWTR_EL2_FAR_EL1 |
> - HFGWTR_EL2_ESR_EL1 |
> - HFGWTR_EL2_CSSELR_EL1 |
> - HFGWTR_EL2_CPACR_EL1 |
> - HFGWTR_EL2_CONTEXTIDR_EL1 |
> - HFGWTR_EL2_AMAIR_EL1 |
> - HFGWTR_EL2_AFSR1_EL1 |
> - HFGWTR_EL2_AFSR0_EL1,
> - FEAT_AA64EL1),
> + NEEDS_FEAT_FLAG(HFGWTR_EL2_VBAR_EL1 |
> + HFGWTR_EL2_TTBR1_EL1 |
> + HFGWTR_EL2_TTBR0_EL1 |
> + HFGWTR_EL2_TPIDR_EL0 |
> + HFGWTR_EL2_TPIDRRO_EL0 |
> + HFGWTR_EL2_TPIDR_EL1 |
> + HFGWTR_EL2_TCR_EL1 |
> + HFGWTR_EL2_SCTLR_EL1 |
> + HFGWTR_EL2_PAR_EL1 |
> + HFGWTR_EL2_MAIR_EL1 |
> + HFGWTR_EL2_FAR_EL1 |
> + HFGWTR_EL2_ESR_EL1 |
> + HFGWTR_EL2_CSSELR_EL1 |
> + HFGWTR_EL2_CPACR_EL1 |
> + HFGWTR_EL2_CONTEXTIDR_EL1|
> + HFGWTR_EL2_AMAIR_EL1 |
> + HFGWTR_EL2_AFSR1_EL1 |
> + HFGWTR_EL2_AFSR0_EL1,
> + NEVER_FGU, FEAT_AA64EL1),
> };
>
> static const struct reg_bits_to_feat_map hdfgrtr_feat_map[] = {
> @@ -443,17 +443,17 @@ static const struct reg_bits_to_feat_map hdfgrtr_feat_map[] = {
> FEAT_TRBE),
> NEEDS_FEAT_FLAG(HDFGRTR_EL2_OSDLR_EL1, NEVER_FGU,
> FEAT_DoubleLock),
> - NEEDS_FEAT(HDFGRTR_EL2_OSECCR_EL1 |
> - HDFGRTR_EL2_OSLSR_EL1 |
> - HDFGRTR_EL2_DBGPRCR_EL1 |
> - HDFGRTR_EL2_DBGAUTHSTATUS_EL1|
> - HDFGRTR_EL2_DBGCLAIM |
> - HDFGRTR_EL2_MDSCR_EL1 |
> - HDFGRTR_EL2_DBGWVRn_EL1 |
> - HDFGRTR_EL2_DBGWCRn_EL1 |
> - HDFGRTR_EL2_DBGBVRn_EL1 |
> - HDFGRTR_EL2_DBGBCRn_EL1,
> - FEAT_AA64EL1)
> + NEEDS_FEAT_FLAG(HDFGRTR_EL2_OSECCR_EL1 |
> + HDFGRTR_EL2_OSLSR_EL1 |
> + HDFGRTR_EL2_DBGPRCR_EL1 |
> + HDFGRTR_EL2_DBGAUTHSTATUS_EL1|
> + HDFGRTR_EL2_DBGCLAIM |
> + HDFGRTR_EL2_MDSCR_EL1 |
> + HDFGRTR_EL2_DBGWVRn_EL1 |
> + HDFGRTR_EL2_DBGWCRn_EL1 |
> + HDFGRTR_EL2_DBGBVRn_EL1 |
> + HDFGRTR_EL2_DBGBCRn_EL1,
> + NEVER_FGU, FEAT_AA64EL1)
> };
>
> static const struct reg_bits_to_feat_map hdfgwtr_feat_map[] = {
> @@ -503,16 +503,16 @@ static const struct reg_bits_to_feat_map hdfgwtr_feat_map[] = {
> FEAT_TRBE),
> NEEDS_FEAT_FLAG(HDFGWTR_EL2_OSDLR_EL1,
> NEVER_FGU, FEAT_DoubleLock),
> - NEEDS_FEAT(HDFGWTR_EL2_OSECCR_EL1 |
> - HDFGWTR_EL2_OSLAR_EL1 |
> - HDFGWTR_EL2_DBGPRCR_EL1 |
> - HDFGWTR_EL2_DBGCLAIM |
> - HDFGWTR_EL2_MDSCR_EL1 |
> - HDFGWTR_EL2_DBGWVRn_EL1 |
> - HDFGWTR_EL2_DBGWCRn_EL1 |
> - HDFGWTR_EL2_DBGBVRn_EL1 |
> - HDFGWTR_EL2_DBGBCRn_EL1,
> - FEAT_AA64EL1),
> + NEEDS_FEAT_FLAG(HDFGWTR_EL2_OSECCR_EL1 |
> + HDFGWTR_EL2_OSLAR_EL1 |
> + HDFGWTR_EL2_DBGPRCR_EL1 |
> + HDFGWTR_EL2_DBGCLAIM |
> + HDFGWTR_EL2_MDSCR_EL1 |
> + HDFGWTR_EL2_DBGWVRn_EL1 |
> + HDFGWTR_EL2_DBGWCRn_EL1 |
> + HDFGWTR_EL2_DBGBVRn_EL1 |
> + HDFGWTR_EL2_DBGBCRn_EL1,
> + NEVER_FGU, FEAT_AA64EL1),
> NEEDS_FEAT(HDFGWTR_EL2_TRFCR_EL1, FEAT_TRF),
> };
>
> @@ -556,38 +556,38 @@ static const struct reg_bits_to_feat_map hfgitr_feat_map[] = {
> HFGITR_EL2_ATS1E1RP,
> FEAT_PAN2),
> NEEDS_FEAT(HFGITR_EL2_DCCVADP, FEAT_DPB2),
> - NEEDS_FEAT(HFGITR_EL2_DCCVAC |
> - HFGITR_EL2_SVC_EL1 |
> - HFGITR_EL2_SVC_EL0 |
> - HFGITR_EL2_ERET |
> - HFGITR_EL2_TLBIVAALE1 |
> - HFGITR_EL2_TLBIVALE1 |
> - HFGITR_EL2_TLBIVAAE1 |
> - HFGITR_EL2_TLBIASIDE1 |
> - HFGITR_EL2_TLBIVAE1 |
> - HFGITR_EL2_TLBIVMALLE1 |
> - HFGITR_EL2_TLBIVAALE1IS |
> - HFGITR_EL2_TLBIVALE1IS |
> - HFGITR_EL2_TLBIVAAE1IS |
> - HFGITR_EL2_TLBIASIDE1IS |
> - HFGITR_EL2_TLBIVAE1IS |
> - HFGITR_EL2_TLBIVMALLE1IS |
> - HFGITR_EL2_ATS1E0W |
> - HFGITR_EL2_ATS1E0R |
> - HFGITR_EL2_ATS1E1W |
> - HFGITR_EL2_ATS1E1R |
> - HFGITR_EL2_DCZVA |
> - HFGITR_EL2_DCCIVAC |
> - HFGITR_EL2_DCCVAP |
> - HFGITR_EL2_DCCVAU |
> - HFGITR_EL2_DCCISW |
> - HFGITR_EL2_DCCSW |
> - HFGITR_EL2_DCISW |
> - HFGITR_EL2_DCIVAC |
> - HFGITR_EL2_ICIVAU |
> - HFGITR_EL2_ICIALLU |
> - HFGITR_EL2_ICIALLUIS,
> - FEAT_AA64EL1),
> + NEEDS_FEAT_FLAG(HFGITR_EL2_DCCVAC |
> + HFGITR_EL2_SVC_EL1 |
> + HFGITR_EL2_SVC_EL0 |
> + HFGITR_EL2_ERET |
> + HFGITR_EL2_TLBIVAALE1 |
> + HFGITR_EL2_TLBIVALE1 |
> + HFGITR_EL2_TLBIVAAE1 |
> + HFGITR_EL2_TLBIASIDE1 |
> + HFGITR_EL2_TLBIVAE1 |
> + HFGITR_EL2_TLBIVMALLE1 |
> + HFGITR_EL2_TLBIVAALE1IS |
> + HFGITR_EL2_TLBIVALE1IS |
> + HFGITR_EL2_TLBIVAAE1IS |
> + HFGITR_EL2_TLBIASIDE1IS |
> + HFGITR_EL2_TLBIVAE1IS |
> + HFGITR_EL2_TLBIVMALLE1IS|
> + HFGITR_EL2_ATS1E0W |
> + HFGITR_EL2_ATS1E0R |
> + HFGITR_EL2_ATS1E1W |
> + HFGITR_EL2_ATS1E1R |
> + HFGITR_EL2_DCZVA |
> + HFGITR_EL2_DCCIVAC |
> + HFGITR_EL2_DCCVAP |
> + HFGITR_EL2_DCCVAU |
> + HFGITR_EL2_DCCISW |
> + HFGITR_EL2_DCCSW |
> + HFGITR_EL2_DCISW |
> + HFGITR_EL2_DCIVAC |
> + HFGITR_EL2_ICIVAU |
> + HFGITR_EL2_ICIALLU |
> + HFGITR_EL2_ICIALLUIS,
> + NEVER_FGU, FEAT_AA64EL1),
> };
>
> static const struct reg_bits_to_feat_map hafgrtr_feat_map[] = {
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 157de0ace6e7e..28dc778d0d9bb 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1946,6 +1946,12 @@ static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
> if ((hw_val & mpam_mask) == (user_val & mpam_mask))
> user_val &= ~ID_AA64PFR0_EL1_MPAM_MASK;
>
> + /* Fail the guest's request to disable the AA64 ISA at EL{0,1,2} */
> + if (!FIELD_GET(ID_AA64PFR0_EL1_EL0, user_val) ||
> + !FIELD_GET(ID_AA64PFR0_EL1_EL1, user_val) ||
> + (vcpu_has_nv(vcpu) && !FIELD_GET(ID_AA64PFR0_EL1_EL2, user_val)))
> + return -EINVAL;
> +
> return set_id_reg(vcpu, rd, user_val);
> }
>
> diff --git a/tools/testing/selftests/kvm/arm64/set_id_regs.c b/tools/testing/selftests/kvm/arm64/set_id_regs.c
> index 322b9d3b01255..57708de2075df 100644
> --- a/tools/testing/selftests/kvm/arm64/set_id_regs.c
> +++ b/tools/testing/selftests/kvm/arm64/set_id_regs.c
> @@ -129,10 +129,10 @@ static const struct reg_ftr_bits ftr_id_aa64pfr0_el1[] = {
> REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, DIT, 0),
> REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, SEL2, 0),
> REG_FTR_BITS(FTR_EXACT, ID_AA64PFR0_EL1, GIC, 0),
> - REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL3, 0),
> - REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL2, 0),
> - REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL1, 0),
> - REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL0, 0),
> + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL3, 1),
> + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL2, 1),
> + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL1, 1),
> + REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64PFR0_EL1, EL0, 1),
> REG_FTR_END,
> };
>
>
This diff fixes the hang seen while running this test (the test ran gracefully).
I tried to run this test in vEL2; it is passing for the majority of the registers and failing for a few. I am looking into it.
--
Thanks,
Gk
* Re: [PATCH v3 41/42] KVM: arm64: Add FGT descriptors for FEAT_FGT2
2025-04-29 13:09 ` Ben Horgan
@ 2025-04-29 14:30 ` Marc Zyngier
0 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-04-29 14:30 UTC (permalink / raw)
To: Ben Horgan
Cc: kvmarm, kvm, linux-arm-kernel, Joey Gouly, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Mark Rutland, Fuad Tabba, Will Deacon,
Catalin Marinas
On Tue, 29 Apr 2025 14:09:13 +0100,
Ben Horgan <ben.horgan@arm.com> wrote:
>
> Hi Marc,
>
> On 4/26/25 13:28, Marc Zyngier wrote:
> > Bulk addition of all the FGT2 traps reported with EC == 0x18,
> > as described in the 2025-03 JSON drop.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> > arch/arm64/kvm/emulate-nested.c | 83 +++++++++++++++++++++++++++++++++
> > 1 file changed, 83 insertions(+)
> >
> > diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> > index 9c7ecfccbd6e9..f7678af272bbb 100644
> > --- a/arch/arm64/kvm/emulate-nested.c
> > +++ b/arch/arm64/kvm/emulate-nested.c
> [...]
> > /*
> > * HDFGWTR_EL2
> > *
> > @@ -1896,12 +1972,19 @@ static const struct encoding_to_trap_config encoding_to_fgt[] __initconst = {
> > * read-side mappings, and only the write-side mappings that
> > * differ from the read side, and the trap handler will pick
> > * the correct shadow register based on the access type.
> > + *
> > + * Same model applies to the FEAT_FGT2 registers.
> > */
> > SR_FGT(SYS_TRFCR_EL1, HDFGWTR, TRFCR_EL1, 1),
> > SR_FGT(SYS_TRCOSLAR, HDFGWTR, TRCOSLAR, 1),
> > SR_FGT(SYS_PMCR_EL0, HDFGWTR, PMCR_EL0, 1),
> > SR_FGT(SYS_PMSWINC_EL0, HDFGWTR, PMSWINC_EL0, 1),
> > SR_FGT(SYS_OSLAR_EL1, HDFGWTR, OSLAR_EL1, 1),
> > +
> > + /* HDFGWTR_EL2 */
> A missing 2. HDFGWTR_EL2 should be HDFGWTR2_EL2.
All reported typos fixed, thank you!
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v3 08/42] arm64: sysreg: Add registers trapped by HFG{R,W}TR2_EL2
2025-04-26 12:28 ` [PATCH v3 08/42] arm64: sysreg: Add registers trapped by HFG{R,W}TR2_EL2 Marc Zyngier
@ 2025-05-01 10:11 ` Joey Gouly
2025-05-01 13:46 ` Marc Zyngier
0 siblings, 1 reply; 71+ messages in thread
From: Joey Gouly @ 2025-05-01 10:11 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, kvm, linux-arm-kernel, Suzuki K Poulose, Oliver Upton,
Zenghui Yu, Mark Rutland, Fuad Tabba, Will Deacon,
Catalin Marinas
On Sat, Apr 26, 2025 at 01:28:02PM +0100, Marc Zyngier wrote:
> Bulk addition of all the system registers trapped by HFG{R,W}TR2_EL2.
>
> The descriptions are extracted from the BSD-licenced JSON file part
> of the 2025-03 drop from ARM.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/tools/sysreg | 395 ++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 395 insertions(+)
>
> diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
> index 6433a3ebcef49..7969e632492bb 100644
> --- a/arch/arm64/tools/sysreg
> +++ b/arch/arm64/tools/sysreg
> @@ -2068,6 +2068,26 @@ Field 1 A
> Field 0 M
> EndSysreg
>
> +Sysreg SCTLR_EL12 3 5 1 0 0
> +Mapping SCTLR_EL1
> +EndSysreg
> +
> +Sysreg SCTLRALIAS_EL1 3 0 1 4 6
> +Mapping SCTLR_EL1
> +EndSysreg
> +
> +Sysreg ACTLR_EL1 3 0 1 0 1
> +Field 63:0 IMPDEF
> +EndSysreg
> +
> +Sysreg ACTLR_EL12 3 5 1 0 1
> +Mapping ACTLR_EL1
> +EndSysreg
> +
> +Sysreg ACTLRALIAS_EL1 3 0 1 4 5
> +Mapping ACTLR_EL1
> +EndSysreg
> +
Do you want to update CPACR_EL1 while you're at it, so that it matches
CPACRMASK_EL1?
> Sysreg CPACR_EL1 3 0 1 0 2
> Res0 63:30
> Field 29 E0POE
> @@ -2081,6 +2101,323 @@ Field 17:16 ZEN
> Res0 15:0
> EndSysreg
>
> +Sysreg CPACR_EL12 3 5 1 0 2
> +Mapping CPACR_EL1
> +EndSysreg
> +
> +Sysreg CPACRALIAS_EL1 3 0 1 4 4
> +Mapping CPACR_EL1
> +EndSysreg
> +
> +Sysreg ACTLRMASK_EL1 3 0 1 4 1
> +Field 63:0 IMPDEF
> +EndSysreg
> +
> +Sysreg ACTLRMASK_EL12 3 5 1 4 1
> +Mapping ACTLRMASK_EL1
> +EndSysreg
> +
> +Sysreg CPACRMASK_EL1 3 0 1 4 2
> +Res0 63:32
> +Field 31 TCPAC
> +Field 30 TAM
> +Field 29 E0POE
> +Field 28 TTA
> +Res0 27:25
> +Field 24 SMEN
> +Res0 23:21
> +Field 20 FPEN
> +Res0 19:17
> +Field 16 ZEN
> +Res0 15:0
> +EndSysreg
> +
> +Sysreg CPACRMASK_EL12 3 5 1 4 2
> +Mapping CPACRMASK_EL1
> +EndSysreg
> +
[..]
Thanks,
Joey
* Re: [PATCH v3 13/42] arm64: Add syndrome information for trapped LD64B/ST64B{,V,V0}
2025-04-26 12:28 ` [PATCH v3 13/42] arm64: Add syndrome information for trapped LD64B/ST64B{,V,V0} Marc Zyngier
@ 2025-05-01 10:17 ` Joey Gouly
0 siblings, 0 replies; 71+ messages in thread
From: Joey Gouly @ 2025-05-01 10:17 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, kvm, linux-arm-kernel, Suzuki K Poulose, Oliver Upton,
Zenghui Yu, Mark Rutland, Fuad Tabba, Will Deacon,
Catalin Marinas
On Sat, Apr 26, 2025 at 01:28:07PM +0100, Marc Zyngier wrote:
> Provide the architected EC and ISS values for all the FEAT_LS64*
> instructions.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/include/asm/esr.h | 8 +++++++-
> 1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
> index e4f77757937e6..a0ae66dd65da9 100644
> --- a/arch/arm64/include/asm/esr.h
> +++ b/arch/arm64/include/asm/esr.h
> @@ -20,7 +20,8 @@
> #define ESR_ELx_EC_FP_ASIMD UL(0x07)
> #define ESR_ELx_EC_CP10_ID UL(0x08) /* EL2 only */
> #define ESR_ELx_EC_PAC UL(0x09) /* EL2 and above */
> -/* Unallocated EC: 0x0A - 0x0B */
> +#define ESR_ELx_EC_OTHER UL(0x0A)
> +/* Unallocated EC: 0x0B */
> #define ESR_ELx_EC_CP14_64 UL(0x0C)
> #define ESR_ELx_EC_BTI UL(0x0D)
> #define ESR_ELx_EC_ILL UL(0x0E)
> @@ -181,6 +182,11 @@
> #define ESR_ELx_WFx_ISS_WFE (UL(1) << 0)
> #define ESR_ELx_xVC_IMM_MASK ((UL(1) << 16) - 1)
>
> +/* ISS definitions for LD64B/ST64B instructions */
> +#define ESR_ELx_ISS_OTHER_ST64BV (0)
> +#define ESR_ELx_ISS_OTHER_ST64BV0 (1)
> +#define ESR_ELx_ISS_OTHER_LDST64B (2)
> +
> #define DISR_EL1_IDS (UL(1) << 24)
> /*
> * DISR_EL1 and ESR_ELx share the bottom 13 bits, but the RES0 bits may mean
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
* Re: [PATCH v3 16/42] KVM: arm64: Simplify handling of negative FGT bits
2025-04-26 12:28 ` [PATCH v3 16/42] KVM: arm64: Simplify handling of negative FGT bits Marc Zyngier
@ 2025-05-01 10:43 ` Joey Gouly
0 siblings, 0 replies; 71+ messages in thread
From: Joey Gouly @ 2025-05-01 10:43 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, kvm, linux-arm-kernel, Suzuki K Poulose, Oliver Upton,
Zenghui Yu, Mark Rutland, Fuad Tabba, Will Deacon,
Catalin Marinas
On Sat, Apr 26, 2025 at 01:28:10PM +0100, Marc Zyngier wrote:
> check_fgt_bit() and triage_sysreg_trap() implement the same thing
> twice for no good reason. We have to lookup the FGT register twice,
> as we don't communucate it. Similarly, we extract the register value
> at the wrong spot.
>
> Reorganise the code in a more logical way so that things are done
> at the correct location, removing a lot of duplication.
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/kvm/emulate-nested.c | 49 ++++++++-------------------------
> 1 file changed, 12 insertions(+), 37 deletions(-)
>
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index 1bcbddc88a9b7..52a2d63a667c9 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -2215,11 +2215,11 @@ static u64 kvm_get_sysreg_res0(struct kvm *kvm, enum vcpu_sysreg sr)
> return masks->mask[sr - __VNCR_START__].res0;
> }
>
> -static bool check_fgt_bit(struct kvm_vcpu *vcpu, bool is_read,
> - u64 val, const union trap_config tc)
> +static bool check_fgt_bit(struct kvm_vcpu *vcpu, enum vcpu_sysreg sr,
> + const union trap_config tc)
> {
> struct kvm *kvm = vcpu->kvm;
> - enum vcpu_sysreg sr;
> + u64 val;
>
> /*
> * KVM doesn't know about any FGTs that apply to the host, and hopefully
> @@ -2228,6 +2228,8 @@ static bool check_fgt_bit(struct kvm_vcpu *vcpu, bool is_read,
> if (is_hyp_ctxt(vcpu))
> return false;
>
> + val = __vcpu_sys_reg(vcpu, sr);
> +
> if (tc.pol)
> return (val & BIT(tc.bit));
>
> @@ -2242,38 +2244,17 @@ static bool check_fgt_bit(struct kvm_vcpu *vcpu, bool is_read,
> if (val & BIT(tc.bit))
> return false;
>
> - switch ((enum fgt_group_id)tc.fgt) {
> - case HFGRTR_GROUP:
> - sr = is_read ? HFGRTR_EL2 : HFGWTR_EL2;
> - break;
> -
> - case HDFGRTR_GROUP:
> - sr = is_read ? HDFGRTR_EL2 : HDFGWTR_EL2;
> - break;
> -
> - case HAFGRTR_GROUP:
> - sr = HAFGRTR_EL2;
> - break;
> -
> - case HFGITR_GROUP:
> - sr = HFGITR_EL2;
> - break;
> -
> - default:
> - WARN_ONCE(1, "Unhandled FGT group");
> - return false;
> - }
> -
> return !(kvm_get_sysreg_res0(kvm, sr) & BIT(tc.bit));
> }
>
> bool triage_sysreg_trap(struct kvm_vcpu *vcpu, int *sr_index)
> {
> + enum vcpu_sysreg fgtreg;
> union trap_config tc;
> enum trap_behaviour b;
> bool is_read;
> u32 sysreg;
> - u64 esr, val;
> + u64 esr;
>
> esr = kvm_vcpu_get_esr(vcpu);
> sysreg = esr_sys64_to_sysreg(esr);
> @@ -2320,25 +2301,19 @@ bool triage_sysreg_trap(struct kvm_vcpu *vcpu, int *sr_index)
> break;
>
> case HFGRTR_GROUP:
> - if (is_read)
> - val = __vcpu_sys_reg(vcpu, HFGRTR_EL2);
> - else
> - val = __vcpu_sys_reg(vcpu, HFGWTR_EL2);
> + fgtreg = is_read ? HFGRTR_EL2 : HFGWTR_EL2;
> break;
>
> case HDFGRTR_GROUP:
> - if (is_read)
> - val = __vcpu_sys_reg(vcpu, HDFGRTR_EL2);
> - else
> - val = __vcpu_sys_reg(vcpu, HDFGWTR_EL2);
> + fgtreg = is_read ? HDFGRTR_EL2 : HDFGWTR_EL2;
> break;
>
> case HAFGRTR_GROUP:
> - val = __vcpu_sys_reg(vcpu, HAFGRTR_EL2);
> + fgtreg = HAFGRTR_EL2;
> break;
>
> case HFGITR_GROUP:
> - val = __vcpu_sys_reg(vcpu, HFGITR_EL2);
> + fgtreg = HFGITR_EL2;
> switch (tc.fgf) {
> u64 tmp;
>
> @@ -2359,7 +2334,7 @@ bool triage_sysreg_trap(struct kvm_vcpu *vcpu, int *sr_index)
> goto local;
> }
>
> - if (tc.fgt != __NO_FGT_GROUP__ && check_fgt_bit(vcpu, is_read, val, tc))
> + if (tc.fgt != __NO_FGT_GROUP__ && check_fgt_bit(vcpu, fgtreg, tc))
> goto inject;
>
> b = compute_trap_behaviour(vcpu, tc);
> --
> 2.39.2
>
* Re: [PATCH v3 17/42] KVM: arm64: Handle trapping of FEAT_LS64* instructions
2025-04-26 12:28 ` [PATCH v3 17/42] KVM: arm64: Handle trapping of FEAT_LS64* instructions Marc Zyngier
2025-04-29 13:08 ` Ben Horgan
@ 2025-05-01 11:01 ` Joey Gouly
1 sibling, 0 replies; 71+ messages in thread
From: Joey Gouly @ 2025-05-01 11:01 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, kvm, linux-arm-kernel, Suzuki K Poulose, Oliver Upton,
Zenghui Yu, Mark Rutland, Fuad Tabba, Will Deacon,
Catalin Marinas
On Sat, Apr 26, 2025 at 01:28:11PM +0100, Marc Zyngier wrote:
> We generally don't expect FEAT_LS64* instructions to trap, unless
> they are trapped by a guest hypervisor.
>
> Otherwise, this is just the guest playing tricks on us by using
> an instruction that isn't advertised, which we handle with a well
> deserved UNDEF.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
> ---
> arch/arm64/kvm/handle_exit.c | 56 ++++++++++++++++++++++++++++++++++++
> 1 file changed, 56 insertions(+)
>
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index b73dc26bc44b4..636c14ed2bb82 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -298,6 +298,61 @@ static int handle_svc(struct kvm_vcpu *vcpu)
> return 1;
> }
>
> +static int handle_other(struct kvm_vcpu *vcpu)
> +{
> + bool is_l2 = vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu);
> + u64 hcrx = __vcpu_sys_reg(vcpu, HCRX_EL2);
> + u64 esr = kvm_vcpu_get_esr(vcpu);
> + u64 iss = ESR_ELx_ISS(esr);
> + struct kvm *kvm = vcpu->kvm;
> + bool allowed, fwd = false;
> +
> + /*
> + * We only trap for two reasons:
> + *
> + * - the feature is disabled, and the only outcome is to
> + * generate an UNDEF.
> + *
> + * - the feature is enabled, but a NV guest wants to trap the
> + * feature used by its L2 guest. We forward the exception in
> + * this case.
> + *
> + * What we don't expect is to end-up here if the guest is
> + * expected to be able to directly use the feature, hence the
> + * WARN_ON below.
> + */
> + switch (iss) {
> + case ESR_ELx_ISS_OTHER_ST64BV:
> + allowed = kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_V);
> + if (is_l2)
> + fwd = !(hcrx & HCRX_EL2_EnASR);
> + break;
> + case ESR_ELx_ISS_OTHER_ST64BV0:
> + allowed = kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_ACCDATA);
> + if (is_l2)
> + fwd = !(hcrx & HCRX_EL2_EnAS0);
> + break;
> + case ESR_ELx_ISS_OTHER_LDST64B:
> + allowed = kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64);
> + if (is_l2)
> + fwd = !(hcrx & HCRX_EL2_EnALS);
> + break;
> + default:
> + /* Clearly, we're missing something. */
> + WARN_ON_ONCE(1);
> + allowed = false;
> + }
> +
> + WARN_ON_ONCE(allowed && !fwd);
> +
> + if (allowed && fwd)
> + kvm_inject_nested_sync(vcpu, esr);
> + else
> + kvm_inject_undefined(vcpu);
> +
> + return 1;
> +}
> +
> static exit_handle_fn arm_exit_handlers[] = {
> [0 ... ESR_ELx_EC_MAX] = kvm_handle_unknown_ec,
> [ESR_ELx_EC_WFx] = kvm_handle_wfx,
> @@ -307,6 +362,7 @@ static exit_handle_fn arm_exit_handlers[] = {
> [ESR_ELx_EC_CP14_LS] = kvm_handle_cp14_load_store,
> [ESR_ELx_EC_CP10_ID] = kvm_handle_cp10_id,
> [ESR_ELx_EC_CP14_64] = kvm_handle_cp14_64,
> + [ESR_ELx_EC_OTHER] = handle_other,
> [ESR_ELx_EC_HVC32] = handle_hvc,
> [ESR_ELx_EC_SMC32] = handle_smc,
> [ESR_ELx_EC_HVC64] = handle_hvc,
> --
> 2.39.2
>
* Re: [PATCH v3 21/42] KVM: arm64: Compute FGT masks from KVM's own FGT tables
2025-04-26 12:28 ` [PATCH v3 21/42] KVM: arm64: Compute FGT masks from KVM's own FGT tables Marc Zyngier
@ 2025-05-01 11:32 ` Joey Gouly
0 siblings, 0 replies; 71+ messages in thread
From: Joey Gouly @ 2025-05-01 11:32 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, kvm, linux-arm-kernel, Suzuki K Poulose, Oliver Upton,
Zenghui Yu, Mark Rutland, Fuad Tabba, Will Deacon,
Catalin Marinas
On Sat, Apr 26, 2025 at 01:28:15PM +0100, Marc Zyngier wrote:
> In the process of decoupling KVM's view of the FGT bits from the
> wider architectural state, use KVM's own FGT tables to build
> a synthetic view of what is actually known.
>
> This allows for some checking along the way.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
> ---
> arch/arm64/include/asm/kvm_host.h | 14 ++++
> arch/arm64/kvm/emulate-nested.c | 106 ++++++++++++++++++++++++++++++
> 2 files changed, 120 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 7a1ef5be7efb2..95fedd27f4bb8 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -607,6 +607,20 @@ struct kvm_sysreg_masks {
> } mask[NR_SYS_REGS - __SANITISED_REG_START__];
> };
>
> +struct fgt_masks {
> + const char *str;
> + u64 mask;
> + u64 nmask;
> + u64 res0;
> +};
> +
> +extern struct fgt_masks hfgrtr_masks;
> +extern struct fgt_masks hfgwtr_masks;
> +extern struct fgt_masks hfgitr_masks;
> +extern struct fgt_masks hdfgrtr_masks;
> +extern struct fgt_masks hdfgwtr_masks;
> +extern struct fgt_masks hafgrtr_masks;
> +
> struct kvm_cpu_context {
> struct user_pt_regs regs; /* sp = sp_el0 */
>
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index 52a2d63a667c9..528b33fcfcfd6 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -2033,6 +2033,105 @@ static u32 encoding_next(u32 encoding)
> return sys_reg(op0 + 1, 0, 0, 0, 0);
> }
>
> +#define FGT_MASKS(__n, __m) \
> + struct fgt_masks __n = { .str = #__m, .res0 = __m, }
> +
> +FGT_MASKS(hfgrtr_masks, HFGRTR_EL2_RES0);
> +FGT_MASKS(hfgwtr_masks, HFGWTR_EL2_RES0);
> +FGT_MASKS(hfgitr_masks, HFGITR_EL2_RES0);
> +FGT_MASKS(hdfgrtr_masks, HDFGRTR_EL2_RES0);
> +FGT_MASKS(hdfgwtr_masks, HDFGWTR_EL2_RES0);
> +FGT_MASKS(hafgrtr_masks, HAFGRTR_EL2_RES0);
> +
> +static __init bool aggregate_fgt(union trap_config tc)
> +{
> + struct fgt_masks *rmasks, *wmasks;
> +
> + switch (tc.fgt) {
> + case HFGRTR_GROUP:
> + rmasks = &hfgrtr_masks;
> + wmasks = &hfgwtr_masks;
> + break;
> + case HDFGRTR_GROUP:
> + rmasks = &hdfgrtr_masks;
> + wmasks = &hdfgwtr_masks;
> + break;
> + case HAFGRTR_GROUP:
> + rmasks = &hafgrtr_masks;
> + wmasks = NULL;
> + break;
> + case HFGITR_GROUP:
> + rmasks = &hfgitr_masks;
> + wmasks = NULL;
> + break;
> + }
> +
> + /*
> + * A bit can be reserved in either the R or W register, but
> + * not both.
> + */
> + if ((BIT(tc.bit) & rmasks->res0) &&
> + (!wmasks || (BIT(tc.bit) & wmasks->res0)))
> + return false;
> +
> + if (tc.pol)
> + rmasks->mask |= BIT(tc.bit) & ~rmasks->res0;
> + else
> + rmasks->nmask |= BIT(tc.bit) & ~rmasks->res0;
> +
> + if (wmasks) {
> + if (tc.pol)
> + wmasks->mask |= BIT(tc.bit) & ~wmasks->res0;
> + else
> + wmasks->nmask |= BIT(tc.bit) & ~wmasks->res0;
> + }
> +
> + return true;
> +}
> +
> +static __init int check_fgt_masks(struct fgt_masks *masks)
> +{
> + unsigned long duplicate = masks->mask & masks->nmask;
> + u64 res0 = masks->res0;
> + int ret = 0;
> +
> + if (duplicate) {
> + int i;
> +
> + for_each_set_bit(i, &duplicate, 64) {
> + kvm_err("%s[%d] bit has both polarities\n",
> + masks->str, i);
> + }
> +
> + ret = -EINVAL;
> + }
> +
> + masks->res0 = ~(masks->mask | masks->nmask);
> + if (masks->res0 != res0)
> + kvm_info("Implicit %s = %016llx, expecting %016llx\n",
> + masks->str, masks->res0, res0);
> +
> + return ret;
> +}
> +
> +static __init int check_all_fgt_masks(int ret)
> +{
> + static struct fgt_masks * const masks[] __initconst = {
> + &hfgrtr_masks,
> + &hfgwtr_masks,
> + &hfgitr_masks,
> + &hdfgrtr_masks,
> + &hdfgwtr_masks,
> + &hafgrtr_masks,
> + };
> + int err = 0;
> +
> + for (int i = 0; i < ARRAY_SIZE(masks); i++)
> + err |= check_fgt_masks(masks[i]);
> +
> + return ret ?: err;
> +}
> +
> int __init populate_nv_trap_config(void)
> {
> int ret = 0;
> @@ -2097,8 +2196,15 @@ int __init populate_nv_trap_config(void)
> ret = xa_err(prev);
> print_nv_trap_error(fgt, "Failed FGT insertion", ret);
> }
> +
> + if (!aggregate_fgt(tc)) {
> + ret = -EINVAL;
> + print_nv_trap_error(fgt, "FGT bit is reserved", ret);
> + }
> }
>
> + ret = check_all_fgt_masks(ret);
> +
> kvm_info("nv: %ld fine grained trap handlers\n",
> ARRAY_SIZE(encoding_to_fgt));
>
> --
> 2.39.2
>
* Re: [PATCH v3 04/42] arm64: sysreg: Replace HGFxTR_EL2 with HFG{R,W}TR_EL2
2025-04-29 14:26 ` Joey Gouly
@ 2025-05-01 13:20 ` Marc Zyngier
0 siblings, 0 replies; 71+ messages in thread
From: Marc Zyngier @ 2025-05-01 13:20 UTC (permalink / raw)
To: Joey Gouly
Cc: kvmarm, kvm, linux-arm-kernel, Suzuki K Poulose, Oliver Upton,
Zenghui Yu, Mark Rutland, Fuad Tabba, Will Deacon,
Catalin Marinas
On Tue, 29 Apr 2025 15:26:56 +0100,
Joey Gouly <joey.gouly@arm.com> wrote:
>
> On Sat, Apr 26, 2025 at 01:27:58PM +0100, Marc Zyngier wrote:
> > @@ -240,8 +240,8 @@
> > cbz x1, .Lset_fgt_\@
> >
> > /* Disable traps of access to GCS registers at EL0 and EL1 */
> > - orr x0, x0, #HFGxTR_EL2_nGCS_EL1_MASK
> > - orr x0, x0, #HFGxTR_EL2_nGCS_EL0_MASK
> > + orr x0, x0, #HFGRTR_EL2_nGCS_EL1_MASK
> > + orr x0, x0, #HFGRTR_EL2_nGCS_EL0_MASK
> >
> > .Lset_fgt_\@:
> > msr_s SYS_HFGRTR_EL2, x0
>
> We still treat them as the same here; funny that the diff cut off the next line:
>
> msr_s SYS_HFGWTR_EL2, x0
>
> Not saying you should do anything about it; I think it's fine.
Yeah, I had spotted these, but pointlessly duplicating them for R/W
did seem over the top.
Overall, what I am trying to achieve is to prevent someone from
accidentally applying something such as HFGxTR_EL2.AIDR_EL1 to
HFGWTR_EL2. I want to be able to catch those early (at compile time)
when they are used in macros that compose register and bit names.
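
Schematically, the kind of mistake I want the build to reject looks
like this (illustration only -- FGT_BIT_MASK is a made-up macro, not
something from the tree; the generated sysreg headers provide one
<REG>_<bit>_MASK definition per field):

#define FGT_BIT_MASK(reg, bit)	reg ## _ ## bit ## _MASK

u64 ok  = FGT_BIT_MASK(HFGWTR_EL2, nGCS_EL1);	/* HFGWTR_EL2_nGCS_EL1_MASK exists */
u64 bad = FGT_BIT_MASK(HFGWTR_EL2, AIDR_EL1);	/* AIDR_EL1 is read-only, so this
						 * doesn't expand to anything that
						 * exists: build error */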
>
> > diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> > index f36d067967c33..43a630b940bfb 100644
> > --- a/arch/arm64/include/asm/kvm_arm.h
> > +++ b/arch/arm64/include/asm/kvm_arm.h
> > @@ -325,7 +325,7 @@
> > * Once we get to a point where the two describe the same thing, we'll
> > * merge the definitions. One day.
> > */
> > -#define __HFGRTR_EL2_RES0 HFGxTR_EL2_RES0
> > +#define __HFGRTR_EL2_RES0 HFGRTR_EL2_RES0
> > #define __HFGRTR_EL2_MASK GENMASK(49, 0)
> > #define __HFGRTR_EL2_nMASK ~(__HFGRTR_EL2_RES0 | __HFGRTR_EL2_MASK)
> >
> > @@ -336,7 +336,7 @@
> > #define __HFGRTR_ONLY_MASK (BIT(46) | BIT(42) | BIT(40) | BIT(28) | \
> > GENMASK(26, 25) | BIT(21) | BIT(18) | \
> > GENMASK(15, 14) | GENMASK(10, 9) | BIT(2))
> > -#define __HFGWTR_EL2_RES0 (__HFGRTR_EL2_RES0 | __HFGRTR_ONLY_MASK)
> > +#define __HFGWTR_EL2_RES0 HFGWTR_EL2_RES0
> > #define __HFGWTR_EL2_MASK (__HFGRTR_EL2_MASK & ~__HFGRTR_ONLY_MASK)
> > #define __HFGWTR_EL2_nMASK ~(__HFGWTR_EL2_RES0 | __HFGWTR_EL2_MASK)
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index e98cfe7855a62..7a1ef5be7efb2 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -273,7 +273,8 @@ struct kvm_sysreg_masks;
> >
> > enum fgt_group_id {
> > __NO_FGT_GROUP__,
> > - HFGxTR_GROUP,
> > + HFGRTR_GROUP,
> > + HFGWTR_GROUP = HFGRTR_GROUP,
>
> I think this change makes most of the diffs using this enum more confusing, but
> it also seems to align the code more closely with HDFGRTR_EL2 and HDFGWTR_EL2.
Indeed. And once you add FEAT_FGT2 to the mix, HFGxTR becomes really
out of place. As for the confusing aspect, I agree that the notion of
group is a bit jarring, and maybe some documentation would help. The
idea is actually simple:
A sysreg trap always tells us whether this is for read or write. The
data stored for each sysreg tells us which FGT register is controlling
that trap. But since we can have one FGT register for read, and
another for write, we would have to store both. Trouble is, we only
have 63 bits in that descriptor. To save some space, we encode only
the group (covering both read and write), and use the WnR bit to pick
the correct guy.
This means we can encode 11 possible registers in 3 bits, with
restrictions. We still have plenty of bits left, but I'm pretty sure
the architecture will force us to eat into it pretty quickly.
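
In pseudo-code, the lookup amounts to something like the sketch below
(illustration only; the real thing is what triage_sysreg_trap() and
check_fgt_bit() end up doing):

/*
 * Sketch: the trap descriptor stores only the group, and the
 * direction of the access picks the actual FGT register.
 */
static enum vcpu_sysreg fgt_group_to_reg(enum fgt_group_id group, bool is_read)
{
	switch (group) {
	case HFGRTR_GROUP:	/* one group covers HFGRTR_EL2 and HFGWTR_EL2 */
		return is_read ? HFGRTR_EL2 : HFGWTR_EL2;
	case HDFGRTR_GROUP:	/* same trick for the debug pair */
		return is_read ? HDFGRTR_EL2 : HDFGWTR_EL2;
	case HAFGRTR_GROUP:	/* single-register groups ignore the direction */
		return HAFGRTR_EL2;
	case HFGITR_GROUP:
		return HFGITR_EL2;
	default:
		BUG();		/* __NO_FGT_GROUP__ is filtered out earlier */
	}
}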
[...]
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 005ad28f73068..6e01b06bedcae 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -5147,12 +5147,12 @@ void kvm_calculate_traps(struct kvm_vcpu *vcpu)
> > if (test_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags))
> > goto out;
> >
> > - kvm->arch.fgu[HFGxTR_GROUP] = (HFGxTR_EL2_nAMAIR2_EL1 |
> > - HFGxTR_EL2_nMAIR2_EL1 |
> > - HFGxTR_EL2_nS2POR_EL1 |
> > - HFGxTR_EL2_nACCDATA_EL1 |
> > - HFGxTR_EL2_nSMPRI_EL1_MASK |
> > - HFGxTR_EL2_nTPIDR2_EL0_MASK);
> > + kvm->arch.fgu[HFGRTR_GROUP] = (HFGRTR_EL2_nAMAIR2_EL1 |
> > + HFGRTR_EL2_nMAIR2_EL1 |
> > + HFGRTR_EL2_nS2POR_EL1 |
> > + HFGRTR_EL2_nACCDATA_EL1 |
> > + HFGRTR_EL2_nSMPRI_EL1_MASK |
> > + HFGRTR_EL2_nTPIDR2_EL0_MASK);
>
> For example, here you see HFGRTR_GROUP, but it actually also applies to HFGWTR_GROUP.
Because we use the same encoding trick. I don't see a clean way to
express that, unfortunately. If you have an idea, I'm all ears!
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v3 28/42] KVM: arm64: Use KVM-specific HCRX_EL2 RES0 mask
2025-04-26 12:28 ` [PATCH v3 28/42] KVM: arm64: Use KVM-specific HCRX_EL2 RES0 mask Marc Zyngier
@ 2025-05-01 13:33 ` Joey Gouly
0 siblings, 0 replies; 71+ messages in thread
From: Joey Gouly @ 2025-05-01 13:33 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, kvm, linux-arm-kernel, Suzuki K Poulose, Oliver Upton,
Zenghui Yu, Mark Rutland, Fuad Tabba, Will Deacon,
Catalin Marinas
On Sat, Apr 26, 2025 at 01:28:22PM +0100, Marc Zyngier wrote:
> We do not have a computed table for HCRX_EL2, so statically define
> the bits we know about. A warning will fire if the architecture
> grows bits that are not handled yet.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
> ---
> arch/arm64/include/asm/kvm_arm.h | 18 ++++++++++++++----
> arch/arm64/kvm/emulate-nested.c | 5 +++++
> arch/arm64/kvm/nested.c | 4 ++--
> 3 files changed, 21 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index e7c73d16cd451..52b3aeb19efc6 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -315,10 +315,20 @@
> GENMASK(19, 18) | \
> GENMASK(15, 0))
>
> -/* Polarity masks for HCRX_EL2 */
> -#define __HCRX_EL2_RES0 HCRX_EL2_RES0
> -#define __HCRX_EL2_MASK (BIT(6))
> -#define __HCRX_EL2_nMASK ~(__HCRX_EL2_RES0 | __HCRX_EL2_MASK)
> +/*
> + * Polarity masks for HCRX_EL2, limited to the bits that we know about
> + * at this point in time. It doesn't mean that we actually *handle*
> + * them, but that at least those that are not advertised to a guest
> + * will be RES0 for that guest.
> + */
> +#define __HCRX_EL2_MASK (BIT_ULL(6))
> +#define __HCRX_EL2_nMASK (GENMASK_ULL(24, 14) | \
> + GENMASK_ULL(11, 7) | \
> + GENMASK_ULL(5, 0))
> +#define __HCRX_EL2_RES0 ~(__HCRX_EL2_nMASK | __HCRX_EL2_MASK)
> +#define __HCRX_EL2_RES1 ~(__HCRX_EL2_nMASK | \
> + __HCRX_EL2_MASK | \
> + __HCRX_EL2_RES0)
Convoluted way of writing 0, but it makes sense!
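(Spelling it out: __HCRX_EL2_RES0 = ~(__HCRX_EL2_nMASK | __HCRX_EL2_MASK),
so MASK, nMASK and RES0 together cover all 64 bits, which makes
__HCRX_EL2_RES1 = ~(MASK | nMASK | RES0) = ~(all ones) = 0.)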
>
> /* Hyp Prefetch Fault Address Register (HPFAR/HDFAR) */
> #define HPFAR_MASK (~UL(0xf))
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index c30d970bf81cb..c581cf29bc59e 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -2157,6 +2157,7 @@ int __init populate_nv_trap_config(void)
> BUILD_BUG_ON(__NR_CGT_GROUP_IDS__ > BIT(TC_CGT_BITS));
> BUILD_BUG_ON(__NR_FGT_GROUP_IDS__ > BIT(TC_FGT_BITS));
> BUILD_BUG_ON(__NR_FG_FILTER_IDS__ > BIT(TC_FGF_BITS));
> + BUILD_BUG_ON(__HCRX_EL2_MASK & __HCRX_EL2_nMASK);
>
> for (int i = 0; i < ARRAY_SIZE(encoding_to_cgt); i++) {
> const struct encoding_to_trap_config *cgt = &encoding_to_cgt[i];
> @@ -2182,6 +2183,10 @@ int __init populate_nv_trap_config(void)
> }
> }
>
> + if (__HCRX_EL2_RES0 != HCRX_EL2_RES0)
> + kvm_info("Sanitised HCR_EL2_RES0 = %016llx, expecting %016llx\n",
> + __HCRX_EL2_RES0, HCRX_EL2_RES0);
> +
> kvm_info("nv: %ld coarse grained trap handlers\n",
> ARRAY_SIZE(encoding_to_cgt));
>
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index 479ffd25eea63..666df85230c9b 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -1058,8 +1058,8 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
> set_sysreg_masks(kvm, HCR_EL2, res0, res1);
>
> /* HCRX_EL2 */
> - res0 = HCRX_EL2_RES0;
> - res1 = HCRX_EL2_RES1;
> + res0 = __HCRX_EL2_RES0;
> + res1 = __HCRX_EL2_RES1;
> if (!kvm_has_feat(kvm, ID_AA64ISAR3_EL1, PACM, TRIVIAL_IMP))
> res0 |= HCRX_EL2_PACMEn;
> if (!kvm_has_feat(kvm, ID_AA64PFR2_EL1, FPMR, IMP))
> --
> 2.39.2
>
* Re: [PATCH v3 08/42] arm64: sysreg: Add registers trapped by HFG{R,W}TR2_EL2
2025-05-01 10:11 ` Joey Gouly
@ 2025-05-01 13:46 ` Marc Zyngier
2025-05-01 13:52 ` Joey Gouly
0 siblings, 1 reply; 71+ messages in thread
From: Marc Zyngier @ 2025-05-01 13:46 UTC (permalink / raw)
To: Joey Gouly
Cc: kvmarm, kvm, linux-arm-kernel, Suzuki K Poulose, Oliver Upton,
Zenghui Yu, Mark Rutland, Fuad Tabba, Will Deacon,
Catalin Marinas
On Thu, 01 May 2025 11:11:36 +0100,
Joey Gouly <joey.gouly@arm.com> wrote:
>
> On Sat, Apr 26, 2025 at 01:28:02PM +0100, Marc Zyngier wrote:
> > Bulk addition of all the system registers trapped by HFG{R,W}TR2_EL2.
> >
> > The descriptions are extracted from the BSD-licenced JSON file part
> > of the 2025-03 drop from ARM.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> > arch/arm64/tools/sysreg | 395 ++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 395 insertions(+)
> >
> > diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
> > index 6433a3ebcef49..7969e632492bb 100644
> > --- a/arch/arm64/tools/sysreg
> > +++ b/arch/arm64/tools/sysreg
> > @@ -2068,6 +2068,26 @@ Field 1 A
> > Field 0 M
> > EndSysreg
> >
> > +Sysreg SCTLR_EL12 3 5 1 0 0
> > +Mapping SCTLR_EL1
> > +EndSysreg
> > +
> > +Sysreg SCTLRALIAS_EL1 3 0 1 4 6
> > +Mapping SCTLR_EL1
> > +EndSysreg
> > +
> > +Sysreg ACTLR_EL1 3 0 1 0 1
> > +Field 63:0 IMPDEF
> > +EndSysreg
> > +
> > +Sysreg ACTLR_EL12 3 5 1 0 1
> > +Mapping ACTLR_EL1
> > +EndSysreg
> > +
> > +Sysreg ACTLRALIAS_EL1 3 0 1 4 5
> > +Mapping ACTLR_EL1
> > +EndSysreg
> > +
>
> Do you want to update CPACR_EL1 while you're at it, so that it matches
> CPACRMASK_EL1?
Do you mean adding the TAM and TCPAC bits added by FEAT_NV2p1? Sure,
no problem. I'll probably add that as a separate patch though, as I
want this one to only be concerned with the FEAT_FGT2-controlled
accessors.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v3 08/42] arm64: sysreg: Add registers trapped by HFG{R,W}TR2_EL2
2025-05-01 13:46 ` Marc Zyngier
@ 2025-05-01 13:52 ` Joey Gouly
0 siblings, 0 replies; 71+ messages in thread
From: Joey Gouly @ 2025-05-01 13:52 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, kvm, linux-arm-kernel, Suzuki K Poulose, Oliver Upton,
Zenghui Yu, Mark Rutland, Fuad Tabba, Will Deacon,
Catalin Marinas
On Thu, May 01, 2025 at 02:46:06PM +0100, Marc Zyngier wrote:
> On Thu, 01 May 2025 11:11:36 +0100,
> Joey Gouly <joey.gouly@arm.com> wrote:
> >
> > On Sat, Apr 26, 2025 at 01:28:02PM +0100, Marc Zyngier wrote:
> > > Bulk addition of all the system registers trapped by HFG{R,W}TR2_EL2.
> > >
> > > The descriptions are extracted from the BSD-licenced JSON file part
> > > of the 2025-03 drop from ARM.
> > >
> > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > > ---
> > > arch/arm64/tools/sysreg | 395 ++++++++++++++++++++++++++++++++++++++++
> > > 1 file changed, 395 insertions(+)
> > >
> > > diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
> > > index 6433a3ebcef49..7969e632492bb 100644
> > > --- a/arch/arm64/tools/sysreg
> > > +++ b/arch/arm64/tools/sysreg
> > > @@ -2068,6 +2068,26 @@ Field 1 A
> > > Field 0 M
> > > EndSysreg
> > >
> > > +Sysreg SCTLR_EL12 3 5 1 0 0
> > > +Mapping SCTLR_EL1
> > > +EndSysreg
> > > +
> > > +Sysreg SCTLRALIAS_EL1 3 0 1 4 6
> > > +Mapping SCTLR_EL1
> > > +EndSysreg
> > > +
> > > +Sysreg ACTLR_EL1 3 0 1 0 1
> > > +Field 63:0 IMPDEF
> > > +EndSysreg
> > > +
> > > +Sysreg ACTLR_EL12 3 5 1 0 1
> > > +Mapping ACTLR_EL1
> > > +EndSysreg
> > > +
> > > +Sysreg ACTLRALIAS_EL1 3 0 1 4 5
> > > +Mapping ACTLR_EL1
> > > +EndSysreg
> > > +
> >
> > Do you want to update CPACR_EL1 while you're at it, so that it matches
> > CPACRMASK_EL1?
>
> Do you mean adding the TAM and TCPAC bits added by FEAT_NV2p1? Sure,
> no problem. I'll probably add that as a separate patch though, as I
> want this one to only be concerned with the FEAT_FGT2-controlled
> accessors.
Yep, sounds good.
>
> Thanks,
>
> M.
>
> --
> Without deviation from the norm, progress is not possible.