* [PATCH 0/4] KVM: arm64: VTCR_EL2 conversion to feature dependency framework
@ 2025-11-29 14:45 Marc Zyngier
2025-11-29 14:45 ` [PATCH 1/4] arm64: Convert ID_AA64MMFR0_EL1.TGRAN{4,16,64}_2 to UnsignedEnum Marc Zyngier
` (3 more replies)
0 siblings, 4 replies; 13+ messages in thread
From: Marc Zyngier @ 2025-11-29 14:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel, kvm
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Alexandru Elisei
Alexandru recently pointed out [0] that the RES0 handling of VTCR_EL2
was broken now that we have some support for stage-2 AF.
Instead of addressing this piecemeal as Alexandru was suggesting in
his series, I've taken the approach of doing a full-blown conversion,
including moving VTCR_EL2 to the sysreg file (the latter resulting in
a bit of churn in the page table code, both canonical and nested).
The result is, as usual, on the larger side of things, but it is
mostly made of generated stuff, though the definition for FEAT_LPA2
is horrific and required some adjustments to the way we define TGRAN*_2.
Lightly tested on M2.
[0] https://lore.kernel.org/r/20251128100946.74210-1-alexandru.elisei@arm.com
Marc Zyngier (4):
arm64: Convert ID_AA64MMFR0_EL1.TGRAN{4,16,64}_2 to UnsignedEnum
arm64: Convert VTCR_EL2 to sysreg infrastructure
KVM: arm64: Account for RES1 bits in DECLARE_FEAT_MAP()
KVM: arm64: Convert VTCR_EL2 to config-driven sanitisation
arch/arm64/include/asm/kvm_arm.h | 52 ++++++-----------------
arch/arm64/include/asm/sysreg.h | 1 -
arch/arm64/kvm/config.c | 72 +++++++++++++++++++++++++++++++-
arch/arm64/kvm/hyp/pgtable.c | 8 ++--
arch/arm64/kvm/nested.c | 11 +++--
arch/arm64/tools/sysreg | 63 ++++++++++++++++++++++++++--
6 files changed, 151 insertions(+), 56 deletions(-)
--
2.47.3
* [PATCH 1/4] arm64: Convert ID_AA64MMFR0_EL1.TGRAN{4,16,64}_2 to UnsignedEnum
2025-11-29 14:45 [PATCH 0/4] KVM: arm64: VTCR_EL2 conversion to feature dependency framework Marc Zyngier
@ 2025-11-29 14:45 ` Marc Zyngier
2025-11-29 14:45 ` [PATCH 2/4] arm64: Convert VTCR_EL2 to sysreg infrastructure Marc Zyngier
` (2 subsequent siblings)
3 siblings, 0 replies; 13+ messages in thread
From: Marc Zyngier @ 2025-11-29 14:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel, kvm
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Alexandru Elisei
ID_AA64MMFR0_EL1.TGRAN{4,16,64}_2 are currently represented as unordered
enumerations. However, the architecture treats them as Unsigned,
as hinted at by the MRS data:
FEAT_S2TGran4K <=> (((UInt(ID_AA64MMFR0_EL1.TGran4_2) == 0) &&
                     FEAT_TGran4K) ||
                    (UInt(ID_AA64MMFR0_EL1.TGran4_2) >= 2))
and similar descriptions exist for 16 and 64k.
This is also confirmed by D24.1.3.3 ("Alternative ID scheme used for
ID_AA64MMFR0_EL1 stage 2 granule sizes") in the L.b revision of
the ARM ARM.
Turn these fields into UnsignedEnum so that we can use the above
description more or less literally.
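For illustration (not part of the patch): with an UnsignedEnum, the
generated field description lets kvm_has_feat() do an unsigned ">="
comparison on the field, while kvm_has_feat_enum() keeps exact-match
semantics. A sketch, given the TGRAN4_2 encodings below:
	/* true for IMP or 52_BIT, i.e. UInt(TGran4_2) >= 2 */
	kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4_2, IMP);
	/* true for IMP only, i.e. UInt(TGran4_2) == 2 */
	kvm_has_feat_enum(kvm, ID_AA64MMFR0_EL1, TGRAN4_2, IMP);
A plain Enum only supports the exact-match form.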
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/tools/sysreg | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 1c6cdf9d54bba..9d388f87d9a13 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -2098,18 +2098,18 @@ UnsignedEnum 47:44 EXS
0b0000 NI
0b0001 IMP
EndEnum
-Enum 43:40 TGRAN4_2
+UnsignedEnum 43:40 TGRAN4_2
0b0000 TGRAN4
0b0001 NI
0b0010 IMP
0b0011 52_BIT
EndEnum
-Enum 39:36 TGRAN64_2
+UnsignedEnum 39:36 TGRAN64_2
0b0000 TGRAN64
0b0001 NI
0b0010 IMP
EndEnum
-Enum 35:32 TGRAN16_2
+UnsignedEnum 35:32 TGRAN16_2
0b0000 TGRAN16
0b0001 NI
0b0010 IMP
--
2.47.3
* [PATCH 2/4] arm64: Convert VTCR_EL2 to sysreg infrastructure
2025-11-29 14:45 [PATCH 0/4] KVM: arm64: VTCR_EL2 conversion to feature dependency framework Marc Zyngier
2025-11-29 14:45 ` [PATCH 1/4] arm64: Convert ID_AA64MMFR0_EL1.TGRAN{4,16,64}_2 to UnsignedEnum Marc Zyngier
@ 2025-11-29 14:45 ` Marc Zyngier
2025-12-03 11:43 ` Alexandru Elisei
2025-11-29 14:45 ` [PATCH 3/4] KVM: arm64: Account for RES1 bits in DECLARE_FEAT_MAP() Marc Zyngier
2025-11-29 14:45 ` [PATCH 4/4] KVM: arm64: Convert VTCR_EL2 to config-driven sanitisation Marc Zyngier
3 siblings, 1 reply; 13+ messages in thread
From: Marc Zyngier @ 2025-11-29 14:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel, kvm
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Alexandru Elisei
Our definition of VTCR_EL2 is both partial (tons of fields are
missing) and totally inconsistent (some constants are shifted,
some are not). They are also expressed in terms of TCR, which is
rather inconvenient.
Replace the ad-hoc definitions with the generated version.
This results in a bunch of additional changes to make the code cope
with the unshifted nature of the generated enumerations.
The register data was extracted from the BSD-licensed AARCHMRS
(AARCHMRS_OPENSOURCE_A_profile_FAT-2025-09_ASL0).
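As a rough sketch of what the conversion buys us (not from this patch,
names as per the generated header):
	u64 t0sz = FIELD_GET(VTCR_EL2_T0SZ, vtcr);	/* extract a field */
	vtcr |= FIELD_PREP(VTCR_EL2_T0SZ, 24);		/* insert a value */
	vtcr |= SYS_FIELD_PREP_ENUM(VTCR_EL2, TG0, 4K);	/* insert an enum */
The enum values (e.g. VTCR_EL2_TG0_4K) are now unshifted, hence the
FIELD_GET() churn in the switch statements below.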
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_arm.h | 52 +++++++----------------------
arch/arm64/include/asm/sysreg.h | 1 -
arch/arm64/kvm/hyp/pgtable.c | 8 ++---
arch/arm64/kvm/nested.c | 8 ++---
arch/arm64/tools/sysreg | 57 ++++++++++++++++++++++++++++++++
5 files changed, 76 insertions(+), 50 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 1da290aeedce7..cd2dc378baee6 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -123,37 +123,7 @@
#define TCR_EL2_MASK (TCR_EL2_TG0_MASK | TCR_EL2_SH0_MASK | \
TCR_EL2_ORGN0_MASK | TCR_EL2_IRGN0_MASK)
-/* VTCR_EL2 Registers bits */
-#define VTCR_EL2_DS TCR_EL2_DS
-#define VTCR_EL2_RES1 (1U << 31)
-#define VTCR_EL2_HD (1 << 22)
-#define VTCR_EL2_HA (1 << 21)
-#define VTCR_EL2_PS_SHIFT TCR_EL2_PS_SHIFT
-#define VTCR_EL2_PS_MASK TCR_EL2_PS_MASK
-#define VTCR_EL2_TG0_MASK TCR_TG0_MASK
-#define VTCR_EL2_TG0_4K TCR_TG0_4K
-#define VTCR_EL2_TG0_16K TCR_TG0_16K
-#define VTCR_EL2_TG0_64K TCR_TG0_64K
-#define VTCR_EL2_SH0_MASK TCR_SH0_MASK
-#define VTCR_EL2_SH0_INNER TCR_SH0_INNER
-#define VTCR_EL2_ORGN0_MASK TCR_ORGN0_MASK
-#define VTCR_EL2_ORGN0_WBWA TCR_ORGN0_WBWA
-#define VTCR_EL2_IRGN0_MASK TCR_IRGN0_MASK
-#define VTCR_EL2_IRGN0_WBWA TCR_IRGN0_WBWA
-#define VTCR_EL2_SL0_SHIFT 6
-#define VTCR_EL2_SL0_MASK (3 << VTCR_EL2_SL0_SHIFT)
-#define VTCR_EL2_T0SZ_MASK 0x3f
-#define VTCR_EL2_VS_SHIFT 19
-#define VTCR_EL2_VS_8BIT (0 << VTCR_EL2_VS_SHIFT)
-#define VTCR_EL2_VS_16BIT (1 << VTCR_EL2_VS_SHIFT)
-
-#define VTCR_EL2_T0SZ(x) TCR_T0SZ(x)
-
/*
- * We configure the Stage-2 page tables to always restrict the IPA space to be
- * 40 bits wide (T0SZ = 24). Systems with a PARange smaller than 40 bits are
- * not known to exist and will break with this configuration.
- *
* The VTCR_EL2 is configured per VM and is initialised in kvm_init_stage2_mmu.
*
* Note that when using 4K pages, we concatenate two first level page tables
@@ -161,9 +131,6 @@
*
*/
-#define VTCR_EL2_COMMON_BITS (VTCR_EL2_SH0_INNER | VTCR_EL2_ORGN0_WBWA | \
- VTCR_EL2_IRGN0_WBWA | VTCR_EL2_RES1)
-
/*
* VTCR_EL2:SL0 indicates the entry level for Stage2 translation.
* Interestingly, it depends on the page size.
@@ -195,30 +162,35 @@
*/
#ifdef CONFIG_ARM64_64K_PAGES
-#define VTCR_EL2_TGRAN VTCR_EL2_TG0_64K
+#define VTCR_EL2_TGRAN 64K
#define VTCR_EL2_TGRAN_SL0_BASE 3UL
#elif defined(CONFIG_ARM64_16K_PAGES)
-#define VTCR_EL2_TGRAN VTCR_EL2_TG0_16K
+#define VTCR_EL2_TGRAN 16K
#define VTCR_EL2_TGRAN_SL0_BASE 3UL
#else /* 4K */
-#define VTCR_EL2_TGRAN VTCR_EL2_TG0_4K
+#define VTCR_EL2_TGRAN 4K
#define VTCR_EL2_TGRAN_SL0_BASE 2UL
#endif
#define VTCR_EL2_LVLS_TO_SL0(levels) \
- ((VTCR_EL2_TGRAN_SL0_BASE - (4 - (levels))) << VTCR_EL2_SL0_SHIFT)
+ FIELD_PREP(VTCR_EL2_SL0, (VTCR_EL2_TGRAN_SL0_BASE - (4 - (levels))))
#define VTCR_EL2_SL0_TO_LVLS(sl0) \
((sl0) + 4 - VTCR_EL2_TGRAN_SL0_BASE)
#define VTCR_EL2_LVLS(vtcr) \
- VTCR_EL2_SL0_TO_LVLS(((vtcr) & VTCR_EL2_SL0_MASK) >> VTCR_EL2_SL0_SHIFT)
+ VTCR_EL2_SL0_TO_LVLS(FIELD_GET(VTCR_EL2_SL0, (vtcr)))
+
+#define VTCR_EL2_FLAGS (SYS_FIELD_PREP_ENUM(VTCR_EL2, SH0, INNER) | \
+ SYS_FIELD_PREP_ENUM(VTCR_EL2, ORGN0, WBWA) | \
+ SYS_FIELD_PREP_ENUM(VTCR_EL2, IRGN0, WBWA) | \
+ SYS_FIELD_PREP_ENUM(VTCR_EL2, TG0, VTCR_EL2_TGRAN) | \
+ VTCR_EL2_RES1)
-#define VTCR_EL2_FLAGS (VTCR_EL2_COMMON_BITS | VTCR_EL2_TGRAN)
-#define VTCR_EL2_IPA(vtcr) (64 - ((vtcr) & VTCR_EL2_T0SZ_MASK))
+#define VTCR_EL2_IPA(vtcr) (64 - FIELD_GET(VTCR_EL2_T0SZ, (vtcr)))
/*
* ARM VMSAv8-64 defines an algorithm for finding the translation table
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index c231d2a3e5159..acad7a7621b9e 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -516,7 +516,6 @@
#define SYS_TTBR1_EL2 sys_reg(3, 4, 2, 0, 1)
#define SYS_TCR_EL2 sys_reg(3, 4, 2, 0, 2)
#define SYS_VTTBR_EL2 sys_reg(3, 4, 2, 1, 0)
-#define SYS_VTCR_EL2 sys_reg(3, 4, 2, 1, 2)
#define SYS_HAFGRTR_EL2 sys_reg(3, 4, 3, 1, 6)
#define SYS_SPSR_EL2 sys_reg(3, 4, 4, 0, 0)
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 947ac1a951a5b..e0bd6a0172729 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -583,8 +583,8 @@ u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift)
u64 vtcr = VTCR_EL2_FLAGS;
s8 lvls;
- vtcr |= kvm_get_parange(mmfr0) << VTCR_EL2_PS_SHIFT;
- vtcr |= VTCR_EL2_T0SZ(phys_shift);
+ vtcr |= FIELD_PREP(VTCR_EL2_PS, kvm_get_parange(mmfr0));
+ vtcr |= FIELD_PREP(VTCR_EL2_T0SZ, (UL(64) - phys_shift));
/*
* Use a minimum 2 level page table to prevent splitting
* host PMD huge pages at stage2.
@@ -624,9 +624,7 @@ u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift)
vtcr |= VTCR_EL2_DS;
/* Set the vmid bits */
- vtcr |= (get_vmid_bits(mmfr1) == 16) ?
- VTCR_EL2_VS_16BIT :
- VTCR_EL2_VS_8BIT;
+ vtcr |= (get_vmid_bits(mmfr1) == 16) ? VTCR_EL2_VS : 0;
return vtcr;
}
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 911fc99ed99d9..e1ef8930c97b3 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -377,7 +377,7 @@ static void vtcr_to_walk_info(u64 vtcr, struct s2_walk_info *wi)
{
wi->t0sz = vtcr & TCR_EL2_T0SZ_MASK;
- switch (vtcr & VTCR_EL2_TG0_MASK) {
+ switch (FIELD_GET(VTCR_EL2_TG0_MASK, vtcr)) {
case VTCR_EL2_TG0_4K:
wi->pgshift = 12; break;
case VTCR_EL2_TG0_16K:
@@ -513,7 +513,7 @@ static u8 get_guest_mapping_ttl(struct kvm_s2_mmu *mmu, u64 addr)
lockdep_assert_held_write(&kvm_s2_mmu_to_kvm(mmu)->mmu_lock);
- switch (vtcr & VTCR_EL2_TG0_MASK) {
+ switch (FIELD_GET(VTCR_EL2_TG0_MASK, vtcr)) {
case VTCR_EL2_TG0_4K:
ttl = (TLBI_TTL_TG_4K << 2);
break;
@@ -530,7 +530,7 @@ static u8 get_guest_mapping_ttl(struct kvm_s2_mmu *mmu, u64 addr)
again:
/* Iteratively compute the block sizes for a particular granule size */
- switch (vtcr & VTCR_EL2_TG0_MASK) {
+ switch (FIELD_GET(VTCR_EL2_TG0_MASK, vtcr)) {
case VTCR_EL2_TG0_4K:
if (sz < SZ_4K) sz = SZ_4K;
else if (sz < SZ_2M) sz = SZ_2M;
@@ -593,7 +593,7 @@ unsigned long compute_tlb_inval_range(struct kvm_s2_mmu *mmu, u64 val)
if (!max_size) {
/* Compute the maximum extent of the invalidation */
- switch (mmu->tlb_vtcr & VTCR_EL2_TG0_MASK) {
+ switch (FIELD_GET(VTCR_EL2_TG0_MASK, mmu->tlb_vtcr)) {
case VTCR_EL2_TG0_4K:
max_size = SZ_1G;
break;
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 9d388f87d9a13..6f43b2ae5993b 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -4400,6 +4400,63 @@ Field 56:12 BADDR
Res0 11:0
EndSysreg
+Sysreg VTCR_EL2 3 4 2 1 2
+Res0 63:46
+Field 45 HDBSS
+Field 44 HAFT
+Res0 43:42
+Field 41 TL0
+Field 40 GCSH
+Res0 39
+Field 38 D128
+Field 37 S2POE
+Field 36 S2PIE
+Field 35 TL1
+Field 34 AssuredOnly
+Field 33 SL2
+Field 32 DS
+Res1 31
+Field 30 NSA
+Field 29 NSW
+Field 28 HWU62
+Field 27 HWU61
+Field 26 HWU60
+Field 25 HWU59
+Res0 24:23
+Field 22 HD
+Field 21 HA
+Res0 20
+Enum 19 VS
+ 0b0 8BIT
+ 0b1 16BIT
+EndEnum
+Field 18:16 PS
+Enum 15:14 TG0
+ 0b00 4K
+ 0b01 64K
+ 0b10 16K
+EndEnum
+Enum 13:12 SH0
+ 0b00 NONE
+ 0b01 OUTER
+ 0b11 INNER
+EndEnum
+Enum 11:10 ORGN0
+ 0b00 NC
+ 0b01 WBWA
+ 0b10 WT
+ 0b11 WBnWA
+EndEnum
+Enum 9:8 IRGN0
+ 0b00 NC
+ 0b01 WBWA
+ 0b10 WT
+ 0b11 WBnWA
+EndEnum
+Field 7:6 SL0
+Field 5:0 T0SZ
+EndSysreg
+
Sysreg GCSCR_EL2 3 4 2 5 0
Fields GCSCR_ELx
EndSysreg
--
2.47.3
* [PATCH 3/4] KVM: arm64: Account for RES1 bits in DECLARE_FEAT_MAP()
2025-11-29 14:45 [PATCH 0/4] KVM: arm64: VTCR_EL2 conversion to feature dependency framework Marc Zyngier
2025-11-29 14:45 ` [PATCH 1/4] arm64: Convert ID_AA64MMFR0_EL1.TGRAN{4,16,64}_2 to UnsignedEnum Marc Zyngier
2025-11-29 14:45 ` [PATCH 2/4] arm64: Convert VTCR_EL2 to sysreg infrastructure Marc Zyngier
@ 2025-11-29 14:45 ` Marc Zyngier
2025-11-29 14:45 ` [PATCH 4/4] KVM: arm64: Convert VTCR_EL2 to config-driven sanitisation Marc Zyngier
3 siblings, 0 replies; 13+ messages in thread
From: Marc Zyngier @ 2025-11-29 14:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel, kvm
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Alexandru Elisei
None of the registers we manage in the feature dependency infrastructure
so far has any RES1 bits. This is about to change, as bit 31 of
VTCR_EL2 is RES1.
In order not to fail the consistency checks by leaving a bit
undescribed, add the RES1 bits to the set of immutable bits.
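As a sketch of the invariant this preserves (not from the patch itself):
	/* check_feat_map() wants every non-immutable bit described */
	u64 immutable = VTCR_EL2_RES0 | VTCR_EL2_RES1;	/* bit 31 now included */
	u64 must_be_described = ~immutable;	/* covered by the bit_feat_map */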
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/config.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index 24bb3f36e9d59..a02c28d6a61c9 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -109,7 +109,8 @@ struct reg_feat_map_desc {
#define DECLARE_FEAT_MAP(n, r, m, f) \
struct reg_feat_map_desc n = { \
.name = #r, \
- .feat_map = NEEDS_FEAT(~r##_RES0, f), \
+ .feat_map = NEEDS_FEAT(~(r##_RES0 | \
+ r##_RES1), f), \
.bit_feat_map = m, \
.bit_feat_map_sz = ARRAY_SIZE(m), \
}
--
2.47.3
* [PATCH 4/4] KVM: arm64: Convert VTCR_EL2 to config-driven sanitisation
2025-11-29 14:45 [PATCH 0/4] KVM: arm64: VTCR_EL2 conversion to feature dependency framework Marc Zyngier
` (2 preceding siblings ...)
2025-11-29 14:45 ` [PATCH 3/4] KVM: arm64: Account for RES1 bits in DECLARE_FEAT_MAP() Marc Zyngier
@ 2025-11-29 14:45 ` Marc Zyngier
2025-12-03 11:44 ` Alexandru Elisei
2025-12-03 16:17 ` Joey Gouly
3 siblings, 2 replies; 13+ messages in thread
From: Marc Zyngier @ 2025-11-29 14:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel, kvm
Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Alexandru Elisei
Describe all the VTCR_EL2 fields and their respective configurations,
making sure that we correctly ignore the bits that are not defined
for a given guest configuration.
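The net effect, as a sketch of the intended usage (see the nested.c hunk
below):
	u64 res0, res1;
	/* bits whose controlling feature is absent become RES0 for the guest */
	get_reg_fixed_bits(kvm, VTCR_EL2, &res0, &res1);
	/* e.g. without FEAT_HAFT, VTCR_EL2_HAFT lands in res0; res1 is bit 31 */
	set_sysreg_masks(kvm, VTCR_EL2, res0, res1);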
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/config.c | 69 +++++++++++++++++++++++++++++++++++++++++
arch/arm64/kvm/nested.c | 3 +-
2 files changed, 70 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index a02c28d6a61c9..c36e133c51912 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -141,6 +141,7 @@ struct reg_feat_map_desc {
#define FEAT_AA64EL1 ID_AA64PFR0_EL1, EL1, IMP
#define FEAT_AA64EL2 ID_AA64PFR0_EL1, EL2, IMP
#define FEAT_AA64EL3 ID_AA64PFR0_EL1, EL3, IMP
+#define FEAT_SEL2 ID_AA64PFR0_EL1, SEL2, IMP
#define FEAT_AIE ID_AA64MMFR3_EL1, AIE, IMP
#define FEAT_S2POE ID_AA64MMFR3_EL1, S2POE, IMP
#define FEAT_S1POE ID_AA64MMFR3_EL1, S1POE, IMP
@@ -202,6 +203,8 @@ struct reg_feat_map_desc {
#define FEAT_ASID2 ID_AA64MMFR4_EL1, ASID2, IMP
#define FEAT_MEC ID_AA64MMFR3_EL1, MEC, IMP
#define FEAT_HAFT ID_AA64MMFR1_EL1, HAFDBS, HAFT
+#define FEAT_HDBSS ID_AA64MMFR1_EL1, HAFDBS, HDBSS
+#define FEAT_HPDS2 ID_AA64MMFR1_EL1, HPDS, HPDS2
#define FEAT_BTI ID_AA64PFR1_EL1, BT, IMP
#define FEAT_ExS ID_AA64MMFR0_EL1, EXS, IMP
#define FEAT_IESB ID_AA64MMFR2_EL1, IESB, IMP
@@ -219,6 +222,7 @@ struct reg_feat_map_desc {
#define FEAT_FGT2 ID_AA64MMFR0_EL1, FGT, FGT2
#define FEAT_MTPMU ID_AA64DFR0_EL1, MTPMU, IMP
#define FEAT_HCX ID_AA64MMFR1_EL1, HCX, IMP
+#define FEAT_S2PIE ID_AA64MMFR3_EL1, S2PIE, IMP
static bool not_feat_aa64el3(struct kvm *kvm)
{
@@ -362,6 +366,28 @@ static bool feat_pmuv3p9(struct kvm *kvm)
return check_pmu_revision(kvm, V3P9);
}
+#define has_feat_s2tgran(k, s) \
+ ((kvm_has_feat_enum(kvm, ID_AA64MMFR0_EL1, TGRAN##s##_2, TGRAN##s) && \
+ !kvm_has_feat_enum(kvm, ID_AA64MMFR0_EL1, TGRAN##s, NI)) || \
+ kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN##s##_2, IMP))
+
+static bool feat_lpa2(struct kvm *kvm)
+{
+ return ((kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4, 52_BIT) ||
+ !kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4, IMP)) &&
+ (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16, 52_BIT) ||
+ !kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16, IMP)) &&
+ (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4_2, 52_BIT) ||
+ !has_feat_s2tgran(kvm, 4)) &&
+ (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16_2, 52_BIT) ||
+ !has_feat_s2tgran(kvm, 16)));
+}
+
+static bool feat_vmid16(struct kvm *kvm)
+{
+ return kvm_has_feat_enum(kvm, ID_AA64MMFR1_EL1, VMIDBits, 16);
+}
+
static bool compute_hcr_rw(struct kvm *kvm, u64 *bits)
{
/* This is purely academic: AArch32 and NV are mutually exclusive */
@@ -1168,6 +1194,44 @@ static const struct reg_bits_to_feat_map mdcr_el2_feat_map[] = {
static const DECLARE_FEAT_MAP(mdcr_el2_desc, MDCR_EL2,
mdcr_el2_feat_map, FEAT_AA64EL2);
+static const struct reg_bits_to_feat_map vtcr_el2_feat_map[] = {
+ NEEDS_FEAT(VTCR_EL2_HDBSS, FEAT_HDBSS),
+ NEEDS_FEAT(VTCR_EL2_HAFT, FEAT_HAFT),
+ NEEDS_FEAT(VTCR_EL2_TL0 |
+ VTCR_EL2_TL1 |
+ VTCR_EL2_AssuredOnly |
+ VTCR_EL2_GCSH,
+ FEAT_THE),
+ NEEDS_FEAT(VTCR_EL2_D128, FEAT_D128),
+ NEEDS_FEAT(VTCR_EL2_S2POE, FEAT_S2POE),
+ NEEDS_FEAT(VTCR_EL2_S2PIE, FEAT_S2PIE),
+ NEEDS_FEAT(VTCR_EL2_SL2 |
+ VTCR_EL2_DS,
+ feat_lpa2),
+ NEEDS_FEAT(VTCR_EL2_NSA |
+ VTCR_EL2_NSW,
+ FEAT_SEL2),
+ NEEDS_FEAT(VTCR_EL2_HWU62 |
+ VTCR_EL2_HWU61 |
+ VTCR_EL2_HWU60 |
+ VTCR_EL2_HWU59,
+ FEAT_HPDS2),
+ NEEDS_FEAT(VTCR_EL2_HD, ID_AA64MMFR1_EL1, HAFDBS, DBM),
+ NEEDS_FEAT(VTCR_EL2_HA, ID_AA64MMFR1_EL1, HAFDBS, AF),
+ NEEDS_FEAT(VTCR_EL2_VS, feat_vmid16),
+ NEEDS_FEAT(VTCR_EL2_PS |
+ VTCR_EL2_TG0 |
+ VTCR_EL2_SH0 |
+ VTCR_EL2_ORGN0 |
+ VTCR_EL2_IRGN0 |
+ VTCR_EL2_SL0 |
+ VTCR_EL2_T0SZ,
+ FEAT_AA64EL1),
+};
+
+static const DECLARE_FEAT_MAP(vtcr_el2_desc, VTCR_EL2,
+ vtcr_el2_feat_map, FEAT_AA64EL2);
+
static void __init check_feat_map(const struct reg_bits_to_feat_map *map,
int map_size, u64 res0, const char *str)
{
@@ -1211,6 +1275,7 @@ void __init check_feature_map(void)
check_reg_desc(&tcr2_el2_desc);
check_reg_desc(&sctlr_el1_desc);
check_reg_desc(&mdcr_el2_desc);
+ check_reg_desc(&vtcr_el2_desc);
}
static bool idreg_feat_match(struct kvm *kvm, const struct reg_bits_to_feat_map *map)
@@ -1425,6 +1490,10 @@ void get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg, u64 *res0, u64 *r
*res0 = compute_reg_res0_bits(kvm, &mdcr_el2_desc, 0, 0);
*res1 = MDCR_EL2_RES1;
break;
+ case VTCR_EL2:
+ *res0 = compute_reg_res0_bits(kvm, &vtcr_el2_desc, 0, 0);
+ *res1 = VTCR_EL2_RES1;
+ break;
default:
WARN_ON_ONCE(1);
*res0 = *res1 = 0;
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index e1ef8930c97b3..606cebcaa7c09 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -1719,8 +1719,7 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
set_sysreg_masks(kvm, VTTBR_EL2, res0, res1);
/* VTCR_EL2 */
- res0 = GENMASK(63, 32) | GENMASK(30, 20);
- res1 = BIT(31);
+ get_reg_fixed_bits(kvm, VTCR_EL2, &res0, &res1);
set_sysreg_masks(kvm, VTCR_EL2, res0, res1);
/* VMPIDR_EL2 */
--
2.47.3
* Re: [PATCH 2/4] arm64: Convert VTCR_EL2 to sysreg infrastructure
2025-11-29 14:45 ` [PATCH 2/4] arm64: Convert VTCR_EL2 to sysreg infrastructure Marc Zyngier
@ 2025-12-03 11:43 ` Alexandru Elisei
0 siblings, 0 replies; 13+ messages in thread
From: Alexandru Elisei @ 2025-12-03 11:43 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
Hi Marc,
On Sat, Nov 29, 2025 at 02:45:23PM +0000, Marc Zyngier wrote:
> Our definition of VTCR_EL2 is both partial (tons of fields are
> missing) and totally inconsistent (some constants are shifted,
> some are not). They are also expressed in terms of TCR, which is
> rather inconvenient.
>
> Replace the ad-hoc definitions with the generated version.
> This results in a bunch of additional changes to make the code cope
> with the unshifted nature of the generated enumerations.
>
> The register data was extracted from the BSD-licensed AARCHMRS
> (AARCHMRS_OPENSOURCE_A_profile_FAT-2025-09_ASL0).
Looks correct to me:
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Thanks,
Alex
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/include/asm/kvm_arm.h | 52 +++++++----------------------
> arch/arm64/include/asm/sysreg.h | 1 -
> arch/arm64/kvm/hyp/pgtable.c | 8 ++---
> arch/arm64/kvm/nested.c | 8 ++---
> arch/arm64/tools/sysreg | 57 ++++++++++++++++++++++++++++++++
> 5 files changed, 76 insertions(+), 50 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index 1da290aeedce7..cd2dc378baee6 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -123,37 +123,7 @@
> #define TCR_EL2_MASK (TCR_EL2_TG0_MASK | TCR_EL2_SH0_MASK | \
> TCR_EL2_ORGN0_MASK | TCR_EL2_IRGN0_MASK)
>
> -/* VTCR_EL2 Registers bits */
> -#define VTCR_EL2_DS TCR_EL2_DS
> -#define VTCR_EL2_RES1 (1U << 31)
> -#define VTCR_EL2_HD (1 << 22)
> -#define VTCR_EL2_HA (1 << 21)
> -#define VTCR_EL2_PS_SHIFT TCR_EL2_PS_SHIFT
> -#define VTCR_EL2_PS_MASK TCR_EL2_PS_MASK
> -#define VTCR_EL2_TG0_MASK TCR_TG0_MASK
> -#define VTCR_EL2_TG0_4K TCR_TG0_4K
> -#define VTCR_EL2_TG0_16K TCR_TG0_16K
> -#define VTCR_EL2_TG0_64K TCR_TG0_64K
> -#define VTCR_EL2_SH0_MASK TCR_SH0_MASK
> -#define VTCR_EL2_SH0_INNER TCR_SH0_INNER
> -#define VTCR_EL2_ORGN0_MASK TCR_ORGN0_MASK
> -#define VTCR_EL2_ORGN0_WBWA TCR_ORGN0_WBWA
> -#define VTCR_EL2_IRGN0_MASK TCR_IRGN0_MASK
> -#define VTCR_EL2_IRGN0_WBWA TCR_IRGN0_WBWA
> -#define VTCR_EL2_SL0_SHIFT 6
> -#define VTCR_EL2_SL0_MASK (3 << VTCR_EL2_SL0_SHIFT)
> -#define VTCR_EL2_T0SZ_MASK 0x3f
> -#define VTCR_EL2_VS_SHIFT 19
> -#define VTCR_EL2_VS_8BIT (0 << VTCR_EL2_VS_SHIFT)
> -#define VTCR_EL2_VS_16BIT (1 << VTCR_EL2_VS_SHIFT)
> -
> -#define VTCR_EL2_T0SZ(x) TCR_T0SZ(x)
> -
> /*
> - * We configure the Stage-2 page tables to always restrict the IPA space to be
> - * 40 bits wide (T0SZ = 24). Systems with a PARange smaller than 40 bits are
> - * not known to exist and will break with this configuration.
> - *
> * The VTCR_EL2 is configured per VM and is initialised in kvm_init_stage2_mmu.
> *
> * Note that when using 4K pages, we concatenate two first level page tables
> @@ -161,9 +131,6 @@
> *
> */
>
> -#define VTCR_EL2_COMMON_BITS (VTCR_EL2_SH0_INNER | VTCR_EL2_ORGN0_WBWA | \
> - VTCR_EL2_IRGN0_WBWA | VTCR_EL2_RES1)
> -
> /*
> * VTCR_EL2:SL0 indicates the entry level for Stage2 translation.
> * Interestingly, it depends on the page size.
> @@ -195,30 +162,35 @@
> */
> #ifdef CONFIG_ARM64_64K_PAGES
>
> -#define VTCR_EL2_TGRAN VTCR_EL2_TG0_64K
> +#define VTCR_EL2_TGRAN 64K
> #define VTCR_EL2_TGRAN_SL0_BASE 3UL
>
> #elif defined(CONFIG_ARM64_16K_PAGES)
>
> -#define VTCR_EL2_TGRAN VTCR_EL2_TG0_16K
> +#define VTCR_EL2_TGRAN 16K
> #define VTCR_EL2_TGRAN_SL0_BASE 3UL
>
> #else /* 4K */
>
> -#define VTCR_EL2_TGRAN VTCR_EL2_TG0_4K
> +#define VTCR_EL2_TGRAN 4K
> #define VTCR_EL2_TGRAN_SL0_BASE 2UL
>
> #endif
>
> #define VTCR_EL2_LVLS_TO_SL0(levels) \
> - ((VTCR_EL2_TGRAN_SL0_BASE - (4 - (levels))) << VTCR_EL2_SL0_SHIFT)
> + FIELD_PREP(VTCR_EL2_SL0, (VTCR_EL2_TGRAN_SL0_BASE - (4 - (levels))))
> #define VTCR_EL2_SL0_TO_LVLS(sl0) \
> ((sl0) + 4 - VTCR_EL2_TGRAN_SL0_BASE)
> #define VTCR_EL2_LVLS(vtcr) \
> - VTCR_EL2_SL0_TO_LVLS(((vtcr) & VTCR_EL2_SL0_MASK) >> VTCR_EL2_SL0_SHIFT)
> + VTCR_EL2_SL0_TO_LVLS(FIELD_GET(VTCR_EL2_SL0, (vtcr)))
> +
> +#define VTCR_EL2_FLAGS (SYS_FIELD_PREP_ENUM(VTCR_EL2, SH0, INNER) | \
> + SYS_FIELD_PREP_ENUM(VTCR_EL2, ORGN0, WBWA) | \
> + SYS_FIELD_PREP_ENUM(VTCR_EL2, IRGN0, WBWA) | \
> + SYS_FIELD_PREP_ENUM(VTCR_EL2, TG0, VTCR_EL2_TGRAN) | \
> + VTCR_EL2_RES1)
>
> -#define VTCR_EL2_FLAGS (VTCR_EL2_COMMON_BITS | VTCR_EL2_TGRAN)
> -#define VTCR_EL2_IPA(vtcr) (64 - ((vtcr) & VTCR_EL2_T0SZ_MASK))
> +#define VTCR_EL2_IPA(vtcr) (64 - FIELD_GET(VTCR_EL2_T0SZ, (vtcr)))
>
> /*
> * ARM VMSAv8-64 defines an algorithm for finding the translation table
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index c231d2a3e5159..acad7a7621b9e 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -516,7 +516,6 @@
> #define SYS_TTBR1_EL2 sys_reg(3, 4, 2, 0, 1)
> #define SYS_TCR_EL2 sys_reg(3, 4, 2, 0, 2)
> #define SYS_VTTBR_EL2 sys_reg(3, 4, 2, 1, 0)
> -#define SYS_VTCR_EL2 sys_reg(3, 4, 2, 1, 2)
>
> #define SYS_HAFGRTR_EL2 sys_reg(3, 4, 3, 1, 6)
> #define SYS_SPSR_EL2 sys_reg(3, 4, 4, 0, 0)
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index 947ac1a951a5b..e0bd6a0172729 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -583,8 +583,8 @@ u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift)
> u64 vtcr = VTCR_EL2_FLAGS;
> s8 lvls;
>
> - vtcr |= kvm_get_parange(mmfr0) << VTCR_EL2_PS_SHIFT;
> - vtcr |= VTCR_EL2_T0SZ(phys_shift);
> + vtcr |= FIELD_PREP(VTCR_EL2_PS, kvm_get_parange(mmfr0));
> + vtcr |= FIELD_PREP(VTCR_EL2_T0SZ, (UL(64) - phys_shift));
> /*
> * Use a minimum 2 level page table to prevent splitting
> * host PMD huge pages at stage2.
> @@ -624,9 +624,7 @@ u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift)
> vtcr |= VTCR_EL2_DS;
>
> /* Set the vmid bits */
> - vtcr |= (get_vmid_bits(mmfr1) == 16) ?
> - VTCR_EL2_VS_16BIT :
> - VTCR_EL2_VS_8BIT;
> + vtcr |= (get_vmid_bits(mmfr1) == 16) ? VTCR_EL2_VS : 0;
>
> return vtcr;
> }
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index 911fc99ed99d9..e1ef8930c97b3 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -377,7 +377,7 @@ static void vtcr_to_walk_info(u64 vtcr, struct s2_walk_info *wi)
> {
> wi->t0sz = vtcr & TCR_EL2_T0SZ_MASK;
>
> - switch (vtcr & VTCR_EL2_TG0_MASK) {
> + switch (FIELD_GET(VTCR_EL2_TG0_MASK, vtcr)) {
> case VTCR_EL2_TG0_4K:
> wi->pgshift = 12; break;
> case VTCR_EL2_TG0_16K:
> @@ -513,7 +513,7 @@ static u8 get_guest_mapping_ttl(struct kvm_s2_mmu *mmu, u64 addr)
>
> lockdep_assert_held_write(&kvm_s2_mmu_to_kvm(mmu)->mmu_lock);
>
> - switch (vtcr & VTCR_EL2_TG0_MASK) {
> + switch (FIELD_GET(VTCR_EL2_TG0_MASK, vtcr)) {
> case VTCR_EL2_TG0_4K:
> ttl = (TLBI_TTL_TG_4K << 2);
> break;
> @@ -530,7 +530,7 @@ static u8 get_guest_mapping_ttl(struct kvm_s2_mmu *mmu, u64 addr)
>
> again:
> /* Iteratively compute the block sizes for a particular granule size */
> - switch (vtcr & VTCR_EL2_TG0_MASK) {
> + switch (FIELD_GET(VTCR_EL2_TG0_MASK, vtcr)) {
> case VTCR_EL2_TG0_4K:
> if (sz < SZ_4K) sz = SZ_4K;
> else if (sz < SZ_2M) sz = SZ_2M;
> @@ -593,7 +593,7 @@ unsigned long compute_tlb_inval_range(struct kvm_s2_mmu *mmu, u64 val)
>
> if (!max_size) {
> /* Compute the maximum extent of the invalidation */
> - switch (mmu->tlb_vtcr & VTCR_EL2_TG0_MASK) {
> + switch (FIELD_GET(VTCR_EL2_TG0_MASK, mmu->tlb_vtcr)) {
> case VTCR_EL2_TG0_4K:
> max_size = SZ_1G;
> break;
> diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
> index 9d388f87d9a13..6f43b2ae5993b 100644
> --- a/arch/arm64/tools/sysreg
> +++ b/arch/arm64/tools/sysreg
> @@ -4400,6 +4400,63 @@ Field 56:12 BADDR
> Res0 11:0
> EndSysreg
>
> +Sysreg VTCR_EL2 3 4 2 1 2
> +Res0 63:46
> +Field 45 HDBSS
> +Field 44 HAFT
> +Res0 43:42
> +Field 41 TL0
> +Field 40 GCSH
> +Res0 39
> +Field 38 D128
> +Field 37 S2POE
> +Field 36 S2PIE
> +Field 35 TL1
> +Field 34 AssuredOnly
> +Field 33 SL2
> +Field 32 DS
> +Res1 31
> +Field 30 NSA
> +Field 29 NSW
> +Field 28 HWU62
> +Field 27 HWU61
> +Field 26 HWU60
> +Field 25 HWU59
> +Res0 24:23
> +Field 22 HD
> +Field 21 HA
> +Res0 20
> +Enum 19 VS
> + 0b0 8BIT
> + 0b1 16BIT
> +EndEnum
> +Field 18:16 PS
> +Enum 15:14 TG0
> + 0b00 4K
> + 0b01 64K
> + 0b10 16K
> +EndEnum
> +Enum 13:12 SH0
> + 0b00 NONE
> + 0b01 OUTER
> + 0b11 INNER
> +EndEnum
> +Enum 11:10 ORGN0
> + 0b00 NC
> + 0b01 WBWA
> + 0b10 WT
> + 0b11 WBnWA
> +EndEnum
> +Enum 9:8 IRGN0
> + 0b00 NC
> + 0b01 WBWA
> + 0b10 WT
> + 0b11 WBnWA
> +EndEnum
> +Field 7:6 SL0
> +Field 5:0 T0SZ
> +EndSysreg
> +
> Sysreg GCSCR_EL2 3 4 2 5 0
> Fields GCSCR_ELx
> EndSysreg
> --
> 2.47.3
>
* Re: [PATCH 4/4] KVM: arm64: Convert VTCR_EL2 to config-driven sanitisation
2025-11-29 14:45 ` [PATCH 4/4] KVM: arm64: Convert VTCR_EL2 to config-driven sanitisation Marc Zyngier
@ 2025-12-03 11:44 ` Alexandru Elisei
2025-12-03 13:00 ` Marc Zyngier
2025-12-03 16:17 ` Joey Gouly
1 sibling, 1 reply; 13+ messages in thread
From: Alexandru Elisei @ 2025-12-03 11:44 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
Hi Marc,
On Sat, Nov 29, 2025 at 02:45:25PM +0000, Marc Zyngier wrote:
> Describe all the VTCR_EL2 fields and their respective configurations,
> making sure that we correctly ignore the bits that are not defined
> for a given guest configuration.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/kvm/config.c | 69 +++++++++++++++++++++++++++++++++++++++++
> arch/arm64/kvm/nested.c | 3 +-
> 2 files changed, 70 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index a02c28d6a61c9..c36e133c51912 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -141,6 +141,7 @@ struct reg_feat_map_desc {
> #define FEAT_AA64EL1 ID_AA64PFR0_EL1, EL1, IMP
> #define FEAT_AA64EL2 ID_AA64PFR0_EL1, EL2, IMP
> #define FEAT_AA64EL3 ID_AA64PFR0_EL1, EL3, IMP
> +#define FEAT_SEL2 ID_AA64PFR0_EL1, SEL2, IMP
> #define FEAT_AIE ID_AA64MMFR3_EL1, AIE, IMP
> #define FEAT_S2POE ID_AA64MMFR3_EL1, S2POE, IMP
> #define FEAT_S1POE ID_AA64MMFR3_EL1, S1POE, IMP
> @@ -202,6 +203,8 @@ struct reg_feat_map_desc {
> #define FEAT_ASID2 ID_AA64MMFR4_EL1, ASID2, IMP
> #define FEAT_MEC ID_AA64MMFR3_EL1, MEC, IMP
> #define FEAT_HAFT ID_AA64MMFR1_EL1, HAFDBS, HAFT
> +#define FEAT_HDBSS ID_AA64MMFR1_EL1, HAFDBS, HDBSS
> +#define FEAT_HPDS2 ID_AA64MMFR1_EL1, HPDS, HPDS2
> #define FEAT_BTI ID_AA64PFR1_EL1, BT, IMP
> #define FEAT_ExS ID_AA64MMFR0_EL1, EXS, IMP
> #define FEAT_IESB ID_AA64MMFR2_EL1, IESB, IMP
> @@ -219,6 +222,7 @@ struct reg_feat_map_desc {
> #define FEAT_FGT2 ID_AA64MMFR0_EL1, FGT, FGT2
> #define FEAT_MTPMU ID_AA64DFR0_EL1, MTPMU, IMP
> #define FEAT_HCX ID_AA64MMFR1_EL1, HCX, IMP
> +#define FEAT_S2PIE ID_AA64MMFR3_EL1, S2PIE, IMP
>
> static bool not_feat_aa64el3(struct kvm *kvm)
> {
> @@ -362,6 +366,28 @@ static bool feat_pmuv3p9(struct kvm *kvm)
> return check_pmu_revision(kvm, V3P9);
> }
>
> +#define has_feat_s2tgran(k, s) \
> + ((kvm_has_feat_enum(kvm, ID_AA64MMFR0_EL1, TGRAN##s##_2, TGRAN##s) && \
> + !kvm_has_feat_enum(kvm, ID_AA64MMFR0_EL1, TGRAN##s, NI)) || \
Wouldn't that read better as kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN##s, IMP)?
I think that would also be correct.
> + kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN##s##_2, IMP))
A bit unexpected to treat the same field first as an enum, then as an integer,
but it saves one line.
> +
> +static bool feat_lpa2(struct kvm *kvm)
> +{
> + return ((kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4, 52_BIT) ||
> + !kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4, IMP)) &&
> + (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16, 52_BIT) ||
> + !kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16, IMP)) &&
> + (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4_2, 52_BIT) ||
> + !has_feat_s2tgran(kvm, 4)) &&
> + (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16_2, 52_BIT) ||
> + !has_feat_s2tgran(kvm, 16)));
> +}
That was a doozy, but looks correct to me if the intention was to have the check
as relaxed as possible - i.e., a VM can advertise 52-bit support for one granule,
but not the other (same for stage 1 and stage 2).
As far as I can tell, everything looks good to me:
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Thanks,
Alex
> +
> +static bool feat_vmid16(struct kvm *kvm)
> +{
> + return kvm_has_feat_enum(kvm, ID_AA64MMFR1_EL1, VMIDBits, 16);
> +}
> +
> static bool compute_hcr_rw(struct kvm *kvm, u64 *bits)
> {
> /* This is purely academic: AArch32 and NV are mutually exclusive */
> @@ -1168,6 +1194,44 @@ static const struct reg_bits_to_feat_map mdcr_el2_feat_map[] = {
> static const DECLARE_FEAT_MAP(mdcr_el2_desc, MDCR_EL2,
> mdcr_el2_feat_map, FEAT_AA64EL2);
>
> +static const struct reg_bits_to_feat_map vtcr_el2_feat_map[] = {
> + NEEDS_FEAT(VTCR_EL2_HDBSS, FEAT_HDBSS),
> + NEEDS_FEAT(VTCR_EL2_HAFT, FEAT_HAFT),
> + NEEDS_FEAT(VTCR_EL2_TL0 |
> + VTCR_EL2_TL1 |
> + VTCR_EL2_AssuredOnly |
> + VTCR_EL2_GCSH,
> + FEAT_THE),
> + NEEDS_FEAT(VTCR_EL2_D128, FEAT_D128),
> + NEEDS_FEAT(VTCR_EL2_S2POE, FEAT_S2POE),
> + NEEDS_FEAT(VTCR_EL2_S2PIE, FEAT_S2PIE),
> + NEEDS_FEAT(VTCR_EL2_SL2 |
> + VTCR_EL2_DS,
> + feat_lpa2),
> + NEEDS_FEAT(VTCR_EL2_NSA |
> + VTCR_EL2_NSW,
> + FEAT_SEL2),
> + NEEDS_FEAT(VTCR_EL2_HWU62 |
> + VTCR_EL2_HWU61 |
> + VTCR_EL2_HWU60 |
> + VTCR_EL2_HWU59,
> + FEAT_HPDS2),
> + NEEDS_FEAT(VTCR_EL2_HD, ID_AA64MMFR1_EL1, HAFDBS, DBM),
> + NEEDS_FEAT(VTCR_EL2_HA, ID_AA64MMFR1_EL1, HAFDBS, AF),
> + NEEDS_FEAT(VTCR_EL2_VS, feat_vmid16),
> + NEEDS_FEAT(VTCR_EL2_PS |
> + VTCR_EL2_TG0 |
> + VTCR_EL2_SH0 |
> + VTCR_EL2_ORGN0 |
> + VTCR_EL2_IRGN0 |
> + VTCR_EL2_SL0 |
> + VTCR_EL2_T0SZ,
> + FEAT_AA64EL1),
> +};
> +
> +static const DECLARE_FEAT_MAP(vtcr_el2_desc, VTCR_EL2,
> + vtcr_el2_feat_map, FEAT_AA64EL2);
> +
> static void __init check_feat_map(const struct reg_bits_to_feat_map *map,
> int map_size, u64 res0, const char *str)
> {
> @@ -1211,6 +1275,7 @@ void __init check_feature_map(void)
> check_reg_desc(&tcr2_el2_desc);
> check_reg_desc(&sctlr_el1_desc);
> check_reg_desc(&mdcr_el2_desc);
> + check_reg_desc(&vtcr_el2_desc);
> }
>
> static bool idreg_feat_match(struct kvm *kvm, const struct reg_bits_to_feat_map *map)
> @@ -1425,6 +1490,10 @@ void get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg, u64 *res0, u64 *r
> *res0 = compute_reg_res0_bits(kvm, &mdcr_el2_desc, 0, 0);
> *res1 = MDCR_EL2_RES1;
> break;
> + case VTCR_EL2:
> + *res0 = compute_reg_res0_bits(kvm, &vtcr_el2_desc, 0, 0);
> + *res1 = VTCR_EL2_RES1;
> + break;
> default:
> WARN_ON_ONCE(1);
> *res0 = *res1 = 0;
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index e1ef8930c97b3..606cebcaa7c09 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -1719,8 +1719,7 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
> set_sysreg_masks(kvm, VTTBR_EL2, res0, res1);
>
> /* VTCR_EL2 */
> - res0 = GENMASK(63, 32) | GENMASK(30, 20);
> - res1 = BIT(31);
> + get_reg_fixed_bits(kvm, VTCR_EL2, &res0, &res1);
> set_sysreg_masks(kvm, VTCR_EL2, res0, res1);
>
> /* VMPIDR_EL2 */
> --
> 2.47.3
>
* Re: [PATCH 4/4] KVM: arm64: Convert VTCR_EL2 to config-driven sanitisation
2025-12-03 11:44 ` Alexandru Elisei
@ 2025-12-03 13:00 ` Marc Zyngier
2025-12-03 14:03 ` Alexandru Elisei
0 siblings, 1 reply; 13+ messages in thread
From: Marc Zyngier @ 2025-12-03 13:00 UTC (permalink / raw)
To: Alexandru Elisei
Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
On Wed, 03 Dec 2025 11:44:14 +0000,
Alexandru Elisei <alexandru.elisei@arm.com> wrote:
>
> Hi Marc,
>
> On Sat, Nov 29, 2025 at 02:45:25PM +0000, Marc Zyngier wrote:
> > Describe all the VTCR_EL2 fields and their respective configurations,
> > making sure that we correctly ignore the bits that are not defined
> > for a given guest configuration.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> > arch/arm64/kvm/config.c | 69 +++++++++++++++++++++++++++++++++++++++++
> > arch/arm64/kvm/nested.c | 3 +-
> > 2 files changed, 70 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> > index a02c28d6a61c9..c36e133c51912 100644
> > --- a/arch/arm64/kvm/config.c
> > +++ b/arch/arm64/kvm/config.c
> > @@ -141,6 +141,7 @@ struct reg_feat_map_desc {
> > #define FEAT_AA64EL1 ID_AA64PFR0_EL1, EL1, IMP
> > #define FEAT_AA64EL2 ID_AA64PFR0_EL1, EL2, IMP
> > #define FEAT_AA64EL3 ID_AA64PFR0_EL1, EL3, IMP
> > +#define FEAT_SEL2 ID_AA64PFR0_EL1, SEL2, IMP
> > #define FEAT_AIE ID_AA64MMFR3_EL1, AIE, IMP
> > #define FEAT_S2POE ID_AA64MMFR3_EL1, S2POE, IMP
> > #define FEAT_S1POE ID_AA64MMFR3_EL1, S1POE, IMP
> > @@ -202,6 +203,8 @@ struct reg_feat_map_desc {
> > #define FEAT_ASID2 ID_AA64MMFR4_EL1, ASID2, IMP
> > #define FEAT_MEC ID_AA64MMFR3_EL1, MEC, IMP
> > #define FEAT_HAFT ID_AA64MMFR1_EL1, HAFDBS, HAFT
> > +#define FEAT_HDBSS ID_AA64MMFR1_EL1, HAFDBS, HDBSS
> > +#define FEAT_HPDS2 ID_AA64MMFR1_EL1, HPDS, HPDS2
> > #define FEAT_BTI ID_AA64PFR1_EL1, BT, IMP
> > #define FEAT_ExS ID_AA64MMFR0_EL1, EXS, IMP
> > #define FEAT_IESB ID_AA64MMFR2_EL1, IESB, IMP
> > @@ -219,6 +222,7 @@ struct reg_feat_map_desc {
> > #define FEAT_FGT2 ID_AA64MMFR0_EL1, FGT, FGT2
> > #define FEAT_MTPMU ID_AA64DFR0_EL1, MTPMU, IMP
> > #define FEAT_HCX ID_AA64MMFR1_EL1, HCX, IMP
> > +#define FEAT_S2PIE ID_AA64MMFR3_EL1, S2PIE, IMP
> >
> > static bool not_feat_aa64el3(struct kvm *kvm)
> > {
> > @@ -362,6 +366,28 @@ static bool feat_pmuv3p9(struct kvm *kvm)
> > return check_pmu_revision(kvm, V3P9);
> > }
> >
> > +#define has_feat_s2tgran(k, s) \
> > + ((kvm_has_feat_enum(kvm, ID_AA64MMFR0_EL1, TGRAN##s##_2, TGRAN##s) && \
> > + !kvm_has_feat_enum(kvm, ID_AA64MMFR0_EL1, TGRAN##s, NI)) || \
>
> Wouldn't that read better as kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN##s, IMP)?
> I think that would also be correct.
Sure, I don't mind either way,
>
> > + kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN##s##_2, IMP))
>
> A bit unexpected to treat the same field first as an enum, then as an integer,
> but it saves one line.
It potentially saves more if the encoding grows over time. I don't
think it matters.
>
> > +
> > +static bool feat_lpa2(struct kvm *kvm)
> > +{
> > + return ((kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4, 52_BIT) ||
> > + !kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4, IMP)) &&
> > + (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16, 52_BIT) ||
> > + !kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16, IMP)) &&
> > + (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4_2, 52_BIT) ||
> > + !has_feat_s2tgran(kvm, 4)) &&
> > + (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16_2, 52_BIT) ||
> > + !has_feat_s2tgran(kvm, 16)));
> > +}
>
> That was a doozy, but looks correct to me if the intention was to have the check
> as relaxed as possible - i.e., a VM can advertise 52-bit support for one granule,
> but not the other (same for stage 1 and stage 2).
Not quite. The intent is that, for all the possible granules, at all
the possible stages, either the granule size isn't implemented at all,
or it supports 52 bits. I think this covers it, but as you said, this
is a bit of a brain fsck.
This is essentially a transliteration of the MRS:
(FEAT_LPA2 && FEAT_S2TGran4K) <=> (UInt(ID_AA64MMFR0_EL1.TGran4_2) >= 3))
(FEAT_LPA2 && FEAT_S2TGran16K) <=> (UInt(ID_AA64MMFR0_EL1.TGran16_2) >= 3))
(FEAT_LPA2 && FEAT_TGran4K) <=> (SInt(ID_AA64MMFR0_EL1.TGran4) >= 1))
(FEAT_LPA2 && FEAT_TGran16K) <=> (UInt(ID_AA64MMFR0_EL1.TGran16) >= 2))
FEAT_S2TGran4K <=> (((UInt(ID_AA64MMFR0_EL1.TGran4_2) == 0) && FEAT_TGran4K) || (UInt(ID_AA64MMFR0_EL1.TGran4_2) >= 2))
FEAT_S2TGran16K <=> (((UInt(ID_AA64MMFR0_EL1.TGran16_2) == 0) && FEAT_TGran16K) || (UInt(ID_AA64MMFR0_EL1.TGran16_2) >= 2))
FEAT_TGran4K <=> (SInt(ID_AA64MMFR0_EL1.TGran4) >= 0)
FEAT_TGran16K <=> (UInt(ID_AA64MMFR0_EL1.TGran16) >= 1)
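Roughly, those thresholds map onto the enum names that feat_lpa2() uses
(my shorthand, given the sysreg encodings):
	SInt(TGran4)    >= 1  ->  kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4, 52_BIT)
	UInt(TGran16)   >= 2  ->  kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16, 52_BIT)
	UInt(TGran4_2)  >= 3  ->  kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4_2, 52_BIT)
	UInt(TGran16_2) >= 3  ->  kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16_2, 52_BIT)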
> As far as I can tell, everything looks good to me:
>
> Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH 4/4] KVM: arm64: Convert VTCR_EL2 to config-driven sanitisation
2025-12-03 13:00 ` Marc Zyngier
@ 2025-12-03 14:03 ` Alexandru Elisei
2025-12-03 14:58 ` Marc Zyngier
0 siblings, 1 reply; 13+ messages in thread
From: Alexandru Elisei @ 2025-12-03 14:03 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
Hi Marc,
On Wed, Dec 03, 2025 at 01:00:57PM +0000, Marc Zyngier wrote:
> On Wed, 03 Dec 2025 11:44:14 +0000,
> Alexandru Elisei <alexandru.elisei@arm.com> wrote:
> >
> > Hi Marc,
> >
> > On Sat, Nov 29, 2025 at 02:45:25PM +0000, Marc Zyngier wrote:
> > > Describe all the VTCR_EL2 fields and their respective configurations,
> > > making sure that we correctly ignore the bits that are not defined
> > > for a given guest configuration.
> > >
> > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > > ---
> > > arch/arm64/kvm/config.c | 69 +++++++++++++++++++++++++++++++++++++++++
> > > arch/arm64/kvm/nested.c | 3 +-
> > > 2 files changed, 70 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> > > index a02c28d6a61c9..c36e133c51912 100644
> > > --- a/arch/arm64/kvm/config.c
> > > +++ b/arch/arm64/kvm/config.c
> > > @@ -141,6 +141,7 @@ struct reg_feat_map_desc {
> > > #define FEAT_AA64EL1 ID_AA64PFR0_EL1, EL1, IMP
> > > #define FEAT_AA64EL2 ID_AA64PFR0_EL1, EL2, IMP
> > > #define FEAT_AA64EL3 ID_AA64PFR0_EL1, EL3, IMP
> > > +#define FEAT_SEL2 ID_AA64PFR0_EL1, SEL2, IMP
> > > #define FEAT_AIE ID_AA64MMFR3_EL1, AIE, IMP
> > > #define FEAT_S2POE ID_AA64MMFR3_EL1, S2POE, IMP
> > > #define FEAT_S1POE ID_AA64MMFR3_EL1, S1POE, IMP
> > > @@ -202,6 +203,8 @@ struct reg_feat_map_desc {
> > > #define FEAT_ASID2 ID_AA64MMFR4_EL1, ASID2, IMP
> > > #define FEAT_MEC ID_AA64MMFR3_EL1, MEC, IMP
> > > #define FEAT_HAFT ID_AA64MMFR1_EL1, HAFDBS, HAFT
> > > +#define FEAT_HDBSS ID_AA64MMFR1_EL1, HAFDBS, HDBSS
> > > +#define FEAT_HPDS2 ID_AA64MMFR1_EL1, HPDS, HPDS2
> > > #define FEAT_BTI ID_AA64PFR1_EL1, BT, IMP
> > > #define FEAT_ExS ID_AA64MMFR0_EL1, EXS, IMP
> > > #define FEAT_IESB ID_AA64MMFR2_EL1, IESB, IMP
> > > @@ -219,6 +222,7 @@ struct reg_feat_map_desc {
> > > #define FEAT_FGT2 ID_AA64MMFR0_EL1, FGT, FGT2
> > > #define FEAT_MTPMU ID_AA64DFR0_EL1, MTPMU, IMP
> > > #define FEAT_HCX ID_AA64MMFR1_EL1, HCX, IMP
> > > +#define FEAT_S2PIE ID_AA64MMFR3_EL1, S2PIE, IMP
> > >
> > > static bool not_feat_aa64el3(struct kvm *kvm)
> > > {
> > > @@ -362,6 +366,28 @@ static bool feat_pmuv3p9(struct kvm *kvm)
> > > return check_pmu_revision(kvm, V3P9);
> > > }
> > >
> > > +#define has_feat_s2tgran(k, s) \
> > > + ((kvm_has_feat_enum(kvm, ID_AA64MMFR0_EL1, TGRAN##s##_2, TGRAN##s) && \
> > > + !kvm_has_feat_enum(kvm, ID_AA64MMFR0_EL1, TGRAN##s, NI)) || \
> >
> > Wouldn't that read better as kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN##s, IMP)?
> > I think that would also be correct.
>
> Sure, I don't mind either way,
>
> >
> > > + kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN##s##_2, IMP))
> >
> > A bit unexpected to treat the same field first as an enum, then as an integer,
> > but it saves one line.
>
> It potentially saves more if the encoding grows over time. I don't
> think it matters.
Doesn't, was just aesthetics and saves someone having to check the values to
make sure it wasn't an error.
>
> >
> > > +
> > > +static bool feat_lpa2(struct kvm *kvm)
> > > +{
> > > + return ((kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4, 52_BIT) ||
> > > + !kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4, IMP)) &&
> > > + (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16, 52_BIT) ||
> > > + !kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16, IMP)) &&
> > > + (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4_2, 52_BIT) ||
> > > + !has_feat_s2tgran(kvm, 4)) &&
> > > + (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16_2, 52_BIT) ||
> > > + !has_feat_s2tgran(kvm, 16)));
> > > +}
> >
> > That was a doozy, but looks correct to me if the intention was to have the check
> > as relaxed as possible - i.e, a VM can advertise 52 bit support for one granule,
> > but not the other (same for stage 1 and stage 2).
>
> Not quite. The intent is that, for all the possible granules, at all
> the possible stages, either the granule size isn't implemented at all,
> or it supports 52 bits. I think this covers it, but as you said, this
> is a bit of a bran fsck.
Hm... this sounds like something that should be sanitised in
set_id_aa64mmfr0_el1(). Sorry, but I just can't tell if TGran{4,16,64} are
writable by userspace.
>
> This is essentially a transliteration of the MRS:
>
> (FEAT_LPA2 && FEAT_S2TGran4K) <=> (UInt(ID_AA64MMFR0_EL1.TGran4_2) >= 3))
> (FEAT_LPA2 && FEAT_S2TGran16K) <=> (UInt(ID_AA64MMFR0_EL1.TGran16_2) >= 3))
> (FEAT_LPA2 && FEAT_TGran4K) <=> (SInt(ID_AA64MMFR0_EL1.TGran4) >= 1))
> (FEAT_LPA2 && FEAT_TGran16K) <=> (UInt(ID_AA64MMFR0_EL1.TGran16) >= 2))
> FEAT_S2TGran4K <=> (((UInt(ID_AA64MMFR0_EL1.TGran4_2) == 0) && FEAT_TGran4K) || (UInt(ID_AA64MMFR0_EL1.TGran4_2) >= 2))
> FEAT_S2TGran16K <=> (((UInt(ID_AA64MMFR0_EL1.TGran16_2) == 0) && FEAT_TGran16K) || (UInt(ID_AA64MMFR0_EL1.TGran16_2) >= 2))
> FEAT_TGran4K <=> (SInt(ID_AA64MMFR0_EL1.TGran4) >= 0)
> FEAT_TGran16K <=> (UInt(ID_AA64MMFR0_EL1.TGran16) >= 1)
How about (untested):
static bool feat_lpas2(struct kvm *kvm)
{
if (kvm_has_feat_exact(kvm, ID_AA64MMFR0_EL1, TGRAN4, IMP) ||
kvm_has_feat_exact(kvm, ID_AA64MMFR0_EL1, TGRAN16, IMP) ||
kvm_has_feat_exact(kvm, ID_AA64MMFR0_EL1, TGRAN4_2, IMP) ||
kvm_has_feat_exact(kvm, ID_AA64MMFR0_EL1, TGRAN16_2, IMP))
return false;
return true;
}
where, in case there's not something similar already and I just don't know about
it:
#define kvm_has_feat_exact(kvm, id, fld, val) \
kvm_cmp_feat(kvm, id, fld, ==, val)
The idea being that if one of the granules does not support 52 bit, then it's
not supported by any of the other granules.
Thanks,
Alex
* Re: [PATCH 4/4] KVM: arm64: Convert VTCR_EL2 to config-driven sanitisation
2025-12-03 14:03 ` Alexandru Elisei
@ 2025-12-03 14:58 ` Marc Zyngier
2025-12-03 15:20 ` Alexandru Elisei
0 siblings, 1 reply; 13+ messages in thread
From: Marc Zyngier @ 2025-12-03 14:58 UTC (permalink / raw)
To: Alexandru Elisei
Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
On Wed, 03 Dec 2025 14:03:51 +0000,
Alexandru Elisei <alexandru.elisei@arm.com> wrote:
>
> Hi Marc,
>
> On Wed, Dec 03, 2025 at 01:00:57PM +0000, Marc Zyngier wrote:
> > On Wed, 03 Dec 2025 11:44:14 +0000,
> > Alexandru Elisei <alexandru.elisei@arm.com> wrote:
> > >
> > > Hi Marc,
> > >
> > > On Sat, Nov 29, 2025 at 02:45:25PM +0000, Marc Zyngier wrote:
> > > > Describe all the VTCR_EL2 fields and their respective configurations,
> > > > making sure that we correctly ignore the bits that are not defined
> > > > for a given guest configuration.
> > > >
> > > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > > > ---
> > > > arch/arm64/kvm/config.c | 69 +++++++++++++++++++++++++++++++++++++++++
> > > > arch/arm64/kvm/nested.c | 3 +-
> > > > 2 files changed, 70 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> > > > index a02c28d6a61c9..c36e133c51912 100644
> > > > --- a/arch/arm64/kvm/config.c
> > > > +++ b/arch/arm64/kvm/config.c
> > > > @@ -141,6 +141,7 @@ struct reg_feat_map_desc {
> > > > #define FEAT_AA64EL1 ID_AA64PFR0_EL1, EL1, IMP
> > > > #define FEAT_AA64EL2 ID_AA64PFR0_EL1, EL2, IMP
> > > > #define FEAT_AA64EL3 ID_AA64PFR0_EL1, EL3, IMP
> > > > +#define FEAT_SEL2 ID_AA64PFR0_EL1, SEL2, IMP
> > > > #define FEAT_AIE ID_AA64MMFR3_EL1, AIE, IMP
> > > > #define FEAT_S2POE ID_AA64MMFR3_EL1, S2POE, IMP
> > > > #define FEAT_S1POE ID_AA64MMFR3_EL1, S1POE, IMP
> > > > @@ -202,6 +203,8 @@ struct reg_feat_map_desc {
> > > > #define FEAT_ASID2 ID_AA64MMFR4_EL1, ASID2, IMP
> > > > #define FEAT_MEC ID_AA64MMFR3_EL1, MEC, IMP
> > > > #define FEAT_HAFT ID_AA64MMFR1_EL1, HAFDBS, HAFT
> > > > +#define FEAT_HDBSS ID_AA64MMFR1_EL1, HAFDBS, HDBSS
> > > > +#define FEAT_HPDS2 ID_AA64MMFR1_EL1, HPDS, HPDS2
> > > > #define FEAT_BTI ID_AA64PFR1_EL1, BT, IMP
> > > > #define FEAT_ExS ID_AA64MMFR0_EL1, EXS, IMP
> > > > #define FEAT_IESB ID_AA64MMFR2_EL1, IESB, IMP
> > > > @@ -219,6 +222,7 @@ struct reg_feat_map_desc {
> > > > #define FEAT_FGT2 ID_AA64MMFR0_EL1, FGT, FGT2
> > > > #define FEAT_MTPMU ID_AA64DFR0_EL1, MTPMU, IMP
> > > > #define FEAT_HCX ID_AA64MMFR1_EL1, HCX, IMP
> > > > +#define FEAT_S2PIE ID_AA64MMFR3_EL1, S2PIE, IMP
> > > >
> > > > static bool not_feat_aa64el3(struct kvm *kvm)
> > > > {
> > > > @@ -362,6 +366,28 @@ static bool feat_pmuv3p9(struct kvm *kvm)
> > > > return check_pmu_revision(kvm, V3P9);
> > > > }
> > > >
> > > > +#define has_feat_s2tgran(k, s) \
> > > > + ((kvm_has_feat_enum(kvm, ID_AA64MMFR0_EL1, TGRAN##s##_2, TGRAN##s) && \
> > > > + !kvm_has_feat_enum(kvm, ID_AA64MMFR0_EL1, TGRAN##s, NI)) || \
> > >
> > > Wouldn't that read better as kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN##s, IMP)?
> > > I think that would also be correct.
> >
> > Sure, I don't mind either way,
> >
> > >
> > > > + kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN##s##_2, IMP))
> > >
> > > A bit unexpected to treat the same field first as an enum, then as an integer,
> > > but it saves one line.
> >
> > It potentially saves more if the encoding grows over time. I don't
> > think it matters.
>
> Doesn't, was just aesthetics and saves someone having to check the values to
> make sure it wasn't an error.
>
> >
> > >
> > > > +
> > > > +static bool feat_lpa2(struct kvm *kvm)
> > > > +{
> > > > + return ((kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4, 52_BIT) ||
> > > > + !kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4, IMP)) &&
> > > > + (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16, 52_BIT) ||
> > > > + !kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16, IMP)) &&
> > > > + (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4_2, 52_BIT) ||
> > > > + !has_feat_s2tgran(kvm, 4)) &&
> > > > + (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16_2, 52_BIT) ||
> > > > + !has_feat_s2tgran(kvm, 16)));
> > > > +}
> > >
> > > That was a doozy, but looks correct to me if the intention was to have the check
> > > as relaxed as possible - i.e., a VM can advertise 52-bit support for one granule,
> > > but not the other (same for stage 1 and stage 2).
> >
> > Not quite. The intent is that, for all the possible granules, at all
> > the possible stages, either the granule size isn't implemented at all,
> > or it supports 52 bits. I think this covers it, but as you said, this
> > is a bit of a brain fsck.
>
> Hm... this sounds like something that should be sanitised in
> set_id_aa64mmfr0_el1(). Sorry, but I just can't tell if TGran{4,16,64} are
> writable by userspace.
Everything in ID_AA64MMFR0_EL1 is writable, except for ASIDBITS.
>
> >
> > This is essentially a transliteration of the MRS:
> >
> > (FEAT_LPA2 && FEAT_S2TGran4K) <=> (UInt(ID_AA64MMFR0_EL1.TGran4_2) >= 3))
> > (FEAT_LPA2 && FEAT_S2TGran16K) <=> (UInt(ID_AA64MMFR0_EL1.TGran16_2) >= 3))
> > (FEAT_LPA2 && FEAT_TGran4K) <=> (SInt(ID_AA64MMFR0_EL1.TGran4) >= 1))
> > (FEAT_LPA2 && FEAT_TGran16K) <=> (UInt(ID_AA64MMFR0_EL1.TGran16) >= 2))
> > FEAT_S2TGran4K <=> (((UInt(ID_AA64MMFR0_EL1.TGran4_2) == 0) && FEAT_TGran4K) || (UInt(ID_AA64MMFR0_EL1.TGran4_2) >= 2))
> > FEAT_S2TGran16K <=> (((UInt(ID_AA64MMFR0_EL1.TGran16_2) == 0) && FEAT_TGran16K) || (UInt(ID_AA64MMFR0_EL1.TGran16_2) >= 2))
> > FEAT_TGran4K <=> (SInt(ID_AA64MMFR0_EL1.TGran4) >= 0)
> > FEAT_TGran16K <=> (UInt(ID_AA64MMFR0_EL1.TGran16) >= 1)
>
> How about (untested):
>
> static bool feat_lpas2(struct kvm *kvm)
> {
> if (kvm_has_feat_exact(kvm, ID_AA64MMFR0_EL1, TGRAN4, IMP) ||
> kvm_has_feat_exact(kvm, ID_AA64MMFR0_EL1, TGRAN16, IMP) ||
> kvm_has_feat_exact(kvm, ID_AA64MMFR0_EL1, TGRAN4_2, IMP) ||
> kvm_has_feat_exact(kvm, ID_AA64MMFR0_EL1, TGRAN16_2, IMP))
> return false;
>
> return true;
> }
The combination (TGRAN4=NI, TGRAN4_2=TGRAN4, TGRAN16=52_BIT,
TGRAN16_2=52_BIT) is a valid LPA2 configuration, which the test above
rejects.
> where, in case there's not something similar already and I just don't know about
> it:
>
> #define kvm_has_feat_exact(kvm, id, fld, val) \
> kvm_cmp_feat(kvm, id, fld, ==, val)
#define __kvm_has_feat_enum(kvm, id, fld, val) \
kvm_cmp_feat_unsigned(kvm, id, fld, ==, val)
#define kvm_has_feat_enum(kvm, ...) __kvm_has_feat_enum(kvm, __VA_ARGS__)
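and, for contrast, kvm_has_feat() is the ">=" variant (as I read the
same header):
#define __kvm_has_feat(kvm, id, fld, limit) \
	kvm_cmp_feat(kvm, id, fld, >=, limit)
#define kvm_has_feat(kvm, ...) __kvm_has_feat(kvm, __VA_ARGS__)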
> The idea being that if one of the granules does not support 52 bit, then it's
> not supported by any of the other granules.
See above why I think your proposal doesn't work.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH 4/4] KVM: arm64: Convert VTCR_EL2 to config-driven sanitisation
2025-12-03 14:58 ` Marc Zyngier
@ 2025-12-03 15:20 ` Alexandru Elisei
0 siblings, 0 replies; 13+ messages in thread
From: Alexandru Elisei @ 2025-12-03 15:20 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
Oliver Upton, Zenghui Yu
Hi Marc,
On Wed, Dec 03, 2025 at 02:58:13PM +0000, Marc Zyngier wrote:
> On Wed, 03 Dec 2025 14:03:51 +0000,
> Alexandru Elisei <alexandru.elisei@arm.com> wrote:
> >
> > Hi Marc,
> >
> > On Wed, Dec 03, 2025 at 01:00:57PM +0000, Marc Zyngier wrote:
> > > On Wed, 03 Dec 2025 11:44:14 +0000,
> > > Alexandru Elisei <alexandru.elisei@arm.com> wrote:
> > > >
> > > > Hi Marc,
> > > >
> > > > On Sat, Nov 29, 2025 at 02:45:25PM +0000, Marc Zyngier wrote:
> > > > > Describe all the VTCR_EL2 fields and their respective configurations,
> > > > > making sure that we correctly ignore the bits that are not defined
> > > > > for a given guest configuration.
> > > > >
> > > > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > > > > ---
> > > > > arch/arm64/kvm/config.c | 69 +++++++++++++++++++++++++++++++++++++++++
> > > > > arch/arm64/kvm/nested.c | 3 +-
> > > > > 2 files changed, 70 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> > > > > index a02c28d6a61c9..c36e133c51912 100644
> > > > > --- a/arch/arm64/kvm/config.c
> > > > > +++ b/arch/arm64/kvm/config.c
> > > > > @@ -141,6 +141,7 @@ struct reg_feat_map_desc {
> > > > > #define FEAT_AA64EL1 ID_AA64PFR0_EL1, EL1, IMP
> > > > > #define FEAT_AA64EL2 ID_AA64PFR0_EL1, EL2, IMP
> > > > > #define FEAT_AA64EL3 ID_AA64PFR0_EL1, EL3, IMP
> > > > > +#define FEAT_SEL2 ID_AA64PFR0_EL1, SEL2, IMP
> > > > > #define FEAT_AIE ID_AA64MMFR3_EL1, AIE, IMP
> > > > > #define FEAT_S2POE ID_AA64MMFR3_EL1, S2POE, IMP
> > > > > #define FEAT_S1POE ID_AA64MMFR3_EL1, S1POE, IMP
> > > > > @@ -202,6 +203,8 @@ struct reg_feat_map_desc {
> > > > > #define FEAT_ASID2 ID_AA64MMFR4_EL1, ASID2, IMP
> > > > > #define FEAT_MEC ID_AA64MMFR3_EL1, MEC, IMP
> > > > > #define FEAT_HAFT ID_AA64MMFR1_EL1, HAFDBS, HAFT
> > > > > +#define FEAT_HDBSS ID_AA64MMFR1_EL1, HAFDBS, HDBSS
> > > > > +#define FEAT_HPDS2 ID_AA64MMFR1_EL1, HPDS, HPDS2
> > > > > #define FEAT_BTI ID_AA64PFR1_EL1, BT, IMP
> > > > > #define FEAT_ExS ID_AA64MMFR0_EL1, EXS, IMP
> > > > > #define FEAT_IESB ID_AA64MMFR2_EL1, IESB, IMP
> > > > > @@ -219,6 +222,7 @@ struct reg_feat_map_desc {
> > > > > #define FEAT_FGT2 ID_AA64MMFR0_EL1, FGT, FGT2
> > > > > #define FEAT_MTPMU ID_AA64DFR0_EL1, MTPMU, IMP
> > > > > #define FEAT_HCX ID_AA64MMFR1_EL1, HCX, IMP
> > > > > +#define FEAT_S2PIE ID_AA64MMFR3_EL1, S2PIE, IMP
> > > > >
> > > > > static bool not_feat_aa64el3(struct kvm *kvm)
> > > > > {
> > > > > @@ -362,6 +366,28 @@ static bool feat_pmuv3p9(struct kvm *kvm)
> > > > > return check_pmu_revision(kvm, V3P9);
> > > > > }
> > > > >
> > > > > +#define has_feat_s2tgran(k, s) \
> > > > > + ((kvm_has_feat_enum(kvm, ID_AA64MMFR0_EL1, TGRAN##s##_2, TGRAN##s) && \
> > > > > + !kvm_has_feat_enum(kvm, ID_AA64MMFR0_EL1, TGRAN##s, NI)) || \
> > > >
> > > > Wouldn't that read better as kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN##s, IMP)?
> > > > I think that would also be correct.
> > >
> > > Sure, I don't mind either way,
> > >
> > > >
> > > > > + kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN##s##_2, IMP))
> > > >
> > > > A bit unexpected to treat the same field first as an enum, then as an integer,
> > > > but it saves one line.
> > >
> > > It potentially saves more if the encoding grows over time. I don't
> > > think it matters.
> >
> > Doesn't, was just aesthetics and saves someone having to check the values to
> > make sure it wasn't an error.
> >
> > >
> > > >
> > > > > +
> > > > > +static bool feat_lpa2(struct kvm *kvm)
> > > > > +{
> > > > > + return ((kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4, 52_BIT) ||
> > > > > + !kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4, IMP)) &&
> > > > > + (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16, 52_BIT) ||
> > > > > + !kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16, IMP)) &&
> > > > > + (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4_2, 52_BIT) ||
> > > > > + !has_feat_s2tgran(kvm, 4)) &&
> > > > > + (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16_2, 52_BIT) ||
> > > > > + !has_feat_s2tgran(kvm, 16)));
> > > > > +}
> > > >
> > > > That was a doozy, but looks correct to me if the intention was to have the check
> > > > as relaxed as possible - i.e., a VM can advertise 52-bit support for one granule,
> > > > but not the other (same for stage 1 and stage 2).
> > >
> > > Not quite. The intent is that, for all the possible granules, at all
> > > the possible stages, either the granule size isn't implemented at all,
> > > or it supports 52 bits. I think this covers it, but as you said, this
> > > is a bit of a bran fsck.
> >
> > Hm... this sounds like something that should be sanitised in
> > set_id_aa64mmfr0_el1(). Sorry, but I just can't tell if TGran{4,16,64} are
> > writable by userspace.
>
> Everything in ID_AA64MMFR0_EL1 is writable, except for ASIDBITS.
Aaah, OK, so a bit set in the 'mask' argument to ID_FILTERED means the bit is
writable (the comment for ID_FILTERED doesn't explain that).
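For reference, a minimal sketch of that convention (illustrative helper,
not the actual sys_regs.c code; only the mask semantics come from the
exchange above):

static int check_id_reg_write(u64 old, u64 val, u64 writable_mask)
{
	/* Reject any attempt to flip a bit outside the writable mask */
	if ((old ^ val) & ~writable_mask)
		return -EINVAL;

	return 0;
}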
>
> >
> > >
> > > This is essentially a transliteration of the MRS:
> > >
> > > (FEAT_LPA2 && FEAT_S2TGran4K) <=> (UInt(ID_AA64MMFR0_EL1.TGran4_2) >= 3))
> > > (FEAT_LPA2 && FEAT_S2TGran16K) <=> (UInt(ID_AA64MMFR0_EL1.TGran16_2) >= 3))
> > > (FEAT_LPA2 && FEAT_TGran4K) <=> (SInt(ID_AA64MMFR0_EL1.TGran4) >= 1))
> > > (FEAT_LPA2 && FEAT_TGran16K) <=> (UInt(ID_AA64MMFR0_EL1.TGran16) >= 2))
> > > FEAT_S2TGran4K <=> (((UInt(ID_AA64MMFR0_EL1.TGran4_2) == 0) && FEAT_TGran4K) || (UInt(ID_AA64MMFR0_EL1.TGran4_2) >= 2))
> > > FEAT_S2TGran16K <=> (((UInt(ID_AA64MMFR0_EL1.TGran16_2) == 0) && FEAT_TGran16K) || (UInt(ID_AA64MMFR0_EL1.TGran16_2) >= 2))
> > > FEAT_TGran4K <=> (SInt(ID_AA64MMFR0_EL1.TGran4) >= 0)
> > > FEAT_TGran16K <=> (UInt(ID_AA64MMFR0_EL1.TGran16) >= 1)
> >
> > How about (untested):
> >
> > static bool feat_lpa2(struct kvm *kvm)
> > {
> > if (kvm_has_feat_exact(kvm, ID_AA64MMFR0_EL1, TGRAN4, IMP) ||
> > kvm_has_feat_exact(kvm, ID_AA64MMFR0_EL1, TGRAN16, IMP) ||
> > kvm_has_feat_exact(kvm, ID_AA64MMFR0_EL1, TGRAN4_2, IMP) ||
> > kvm_has_feat_exact(kvm, ID_AA64MMFR0_EL1, TGRAN16_2, IMP))
> > return false;
> >
> > return true;
> > }
>
> The combination (TGRAN4=NI, TGRAN4_2=TGRAN4, TGRAN16=52_BIT,
> TGRAN16_2=52_BIT) is a valid LPA2 configuration, which the test above
> rejects.
Thank you for taking the time to entertain my comments. It's not clear to me why
the combination is rejected since there are no IMP values in your example, but I
trust your judgement and I don't want to waste your time on nitpicking.
My Reviewed-by stands, by the way, in case that wasn't clear.
Thanks,
Alex
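For readers following along: expanding the has_feat_s2tgran() macro from
the patch for the 4K case gives the C equivalent of the FEAT_S2TGran4K
rule quoted earlier in the thread (note that the macro body uses the
caller's kvm directly, so its k parameter is unused):

	/*
	 * Stage-2 4K is present if TGRAN4_2 says "follow TGRAN4" and
	 * TGRAN4 is not NI, or if TGRAN4_2 reports IMP or better (the
	 * field being unsigned after patch 1).
	 */
	bool s2tgran4 =
		(kvm_has_feat_enum(kvm, ID_AA64MMFR0_EL1, TGRAN4_2, TGRAN4) &&
		 !kvm_has_feat_enum(kvm, ID_AA64MMFR0_EL1, TGRAN4, NI)) ||
		kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4_2, IMP);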
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 4/4] KVM: arm64: Convert VTCR_EL2 to config-driven sanitisation
2025-11-29 14:45 ` [PATCH 4/4] KVM: arm64: Convert VTCR_EL2 to config-driven sanitisation Marc Zyngier
2025-12-03 11:44 ` Alexandru Elisei
@ 2025-12-03 16:17 ` Joey Gouly
2025-12-03 16:43 ` Marc Zyngier
1 sibling, 1 reply; 13+ messages in thread
From: Joey Gouly @ 2025-12-03 16:17 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, linux-arm-kernel, kvm, Suzuki K Poulose, Oliver Upton,
Zenghui Yu, Alexandru Elisei
Hi!
On Sat, Nov 29, 2025 at 02:45:25PM +0000, Marc Zyngier wrote:
> Describe all the VTCR_EL2 fields and their respective configurations,
> making sure that we correctly ignore the bits that are not defined
> for a given guest configuration.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/kvm/config.c | 69 +++++++++++++++++++++++++++++++++++++++++
> arch/arm64/kvm/nested.c | 3 +-
> 2 files changed, 70 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index a02c28d6a61c9..c36e133c51912 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -141,6 +141,7 @@ struct reg_feat_map_desc {
> #define FEAT_AA64EL1 ID_AA64PFR0_EL1, EL1, IMP
> #define FEAT_AA64EL2 ID_AA64PFR0_EL1, EL2, IMP
> #define FEAT_AA64EL3 ID_AA64PFR0_EL1, EL3, IMP
> +#define FEAT_SEL2 ID_AA64PFR0_EL1, SEL2, IMP
> #define FEAT_AIE ID_AA64MMFR3_EL1, AIE, IMP
> #define FEAT_S2POE ID_AA64MMFR3_EL1, S2POE, IMP
> #define FEAT_S1POE ID_AA64MMFR3_EL1, S1POE, IMP
> @@ -202,6 +203,8 @@ struct reg_feat_map_desc {
> #define FEAT_ASID2 ID_AA64MMFR4_EL1, ASID2, IMP
> #define FEAT_MEC ID_AA64MMFR3_EL1, MEC, IMP
> #define FEAT_HAFT ID_AA64MMFR1_EL1, HAFDBS, HAFT
> +#define FEAT_HDBSS ID_AA64MMFR1_EL1, HAFDBS, HDBSS
> +#define FEAT_HPDS2 ID_AA64MMFR1_EL1, HPDS, HPDS2
> #define FEAT_BTI ID_AA64PFR1_EL1, BT, IMP
> #define FEAT_ExS ID_AA64MMFR0_EL1, EXS, IMP
> #define FEAT_IESB ID_AA64MMFR2_EL1, IESB, IMP
> @@ -219,6 +222,7 @@ struct reg_feat_map_desc {
> #define FEAT_FGT2 ID_AA64MMFR0_EL1, FGT, FGT2
> #define FEAT_MTPMU ID_AA64DFR0_EL1, MTPMU, IMP
> #define FEAT_HCX ID_AA64MMFR1_EL1, HCX, IMP
> +#define FEAT_S2PIE ID_AA64MMFR3_EL1, S2PIE, IMP
>
> static bool not_feat_aa64el3(struct kvm *kvm)
> {
> @@ -362,6 +366,28 @@ static bool feat_pmuv3p9(struct kvm *kvm)
> return check_pmu_revision(kvm, V3P9);
> }
>
> +#define has_feat_s2tgran(k, s) \
> + ((kvm_has_feat_enum(kvm, ID_AA64MMFR0_EL1, TGRAN##s##_2, TGRAN##s) && \
> + !kvm_has_feat_enum(kvm, ID_AA64MMFR0_EL1, TGRAN##s, NI)) || \
> + kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN##s##_2, IMP))
> +
> +static bool feat_lpa2(struct kvm *kvm)
> +{
> + return ((kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4, 52_BIT) ||
> + !kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4, IMP)) &&
> + (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16, 52_BIT) ||
> + !kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16, IMP)) &&
> + (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4_2, 52_BIT) ||
> + !has_feat_s2tgran(kvm, 4)) &&
> + (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16_2, 52_BIT) ||
> + !has_feat_s2tgran(kvm, 16)));
> +}
> +
> +static bool feat_vmid16(struct kvm *kvm)
> +{
> + return kvm_has_feat_enum(kvm, ID_AA64MMFR1_EL1, VMIDBits, 16);
> +}
> +
> static bool compute_hcr_rw(struct kvm *kvm, u64 *bits)
> {
> /* This is purely academic: AArch32 and NV are mutually exclusive */
> @@ -1168,6 +1194,44 @@ static const struct reg_bits_to_feat_map mdcr_el2_feat_map[] = {
> static const DECLARE_FEAT_MAP(mdcr_el2_desc, MDCR_EL2,
> mdcr_el2_feat_map, FEAT_AA64EL2);
>
> +static const struct reg_bits_to_feat_map vtcr_el2_feat_map[] = {
> + NEEDS_FEAT(VTCR_EL2_HDBSS, FEAT_HDBSS),
> + NEEDS_FEAT(VTCR_EL2_HAFT, FEAT_HAFT),
> + NEEDS_FEAT(VTCR_EL2_TL0 |
> + VTCR_EL2_TL1 |
> + VTCR_EL2_AssuredOnly |
> + VTCR_EL2_GCSH,
> + FEAT_THE),
The text for VTCR_EL2.AssuredOnly says:
This field is RES0 when VTCR_EL2.D128 is 1.
> + NEEDS_FEAT(VTCR_EL2_D128, FEAT_D128),
> + NEEDS_FEAT(VTCR_EL2_S2POE, FEAT_S2POE),
> + NEEDS_FEAT(VTCR_EL2_S2PIE, FEAT_S2PIE),
The text for VTCR_EL2.S2PIE says:
This field is RES1 when VTCR_EL2.D128 is set.
Are these cases that need to be handled here somehow?
Thanks,
Joey
> + NEEDS_FEAT(VTCR_EL2_SL2 |
> + VTCR_EL2_DS,
> + feat_lpa2),
> + NEEDS_FEAT(VTCR_EL2_NSA |
> + VTCR_EL2_NSW,
> + FEAT_SEL2),
> + NEEDS_FEAT(VTCR_EL2_HWU62 |
> + VTCR_EL2_HWU61 |
> + VTCR_EL2_HWU60 |
> + VTCR_EL2_HWU59,
> + FEAT_HPDS2),
> + NEEDS_FEAT(VTCR_EL2_HD, ID_AA64MMFR1_EL1, HAFDBS, DBM),
> + NEEDS_FEAT(VTCR_EL2_HA, ID_AA64MMFR1_EL1, HAFDBS, AF),
> + NEEDS_FEAT(VTCR_EL2_VS, feat_vmid16),
> + NEEDS_FEAT(VTCR_EL2_PS |
> + VTCR_EL2_TG0 |
> + VTCR_EL2_SH0 |
> + VTCR_EL2_ORGN0 |
> + VTCR_EL2_IRGN0 |
> + VTCR_EL2_SL0 |
> + VTCR_EL2_T0SZ,
> + FEAT_AA64EL1),
> +};
> +
> +static const DECLARE_FEAT_MAP(vtcr_el2_desc, VTCR_EL2,
> + vtcr_el2_feat_map, FEAT_AA64EL2);
> +
> static void __init check_feat_map(const struct reg_bits_to_feat_map *map,
> int map_size, u64 res0, const char *str)
> {
> @@ -1211,6 +1275,7 @@ void __init check_feature_map(void)
> check_reg_desc(&tcr2_el2_desc);
> check_reg_desc(&sctlr_el1_desc);
> check_reg_desc(&mdcr_el2_desc);
> + check_reg_desc(&vtcr_el2_desc);
> }
>
> static bool idreg_feat_match(struct kvm *kvm, const struct reg_bits_to_feat_map *map)
> @@ -1425,6 +1490,10 @@ void get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg, u64 *res0, u64 *r
> *res0 = compute_reg_res0_bits(kvm, &mdcr_el2_desc, 0, 0);
> *res1 = MDCR_EL2_RES1;
> break;
> + case VTCR_EL2:
> + *res0 = compute_reg_res0_bits(kvm, &vtcr_el2_desc, 0, 0);
> + *res1 = VTCR_EL2_RES1;
> + break;
> default:
> WARN_ON_ONCE(1);
> *res0 = *res1 = 0;
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index e1ef8930c97b3..606cebcaa7c09 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -1719,8 +1719,7 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
> set_sysreg_masks(kvm, VTTBR_EL2, res0, res1);
>
> /* VTCR_EL2 */
> - res0 = GENMASK(63, 32) | GENMASK(30, 20);
> - res1 = BIT(31);
> + get_reg_fixed_bits(kvm, VTCR_EL2, &res0, &res1);
> set_sysreg_masks(kvm, VTCR_EL2, res0, res1);
>
> /* VMPIDR_EL2 */
> --
> 2.47.3
>
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 4/4] KVM: arm64: Convert VTCR_EL2 to config-driven sanitisation
2025-12-03 16:17 ` Joey Gouly
@ 2025-12-03 16:43 ` Marc Zyngier
0 siblings, 0 replies; 13+ messages in thread
From: Marc Zyngier @ 2025-12-03 16:43 UTC (permalink / raw)
To: Joey Gouly
Cc: kvmarm, linux-arm-kernel, kvm, Suzuki K Poulose, Oliver Upton,
Zenghui Yu, Alexandru Elisei
On Wed, 03 Dec 2025 16:17:15 +0000,
Joey Gouly <joey.gouly@arm.com> wrote:
>
> Hi!
>
> On Sat, Nov 29, 2025 at 02:45:25PM +0000, Marc Zyngier wrote:
> > Describe all the VTCR_EL2 fields and their respective configurations,
> > making sure that we correctly ignore the bits that are not defined
> > for a given guest configuration.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> > arch/arm64/kvm/config.c | 69 +++++++++++++++++++++++++++++++++++++++++
> > arch/arm64/kvm/nested.c | 3 +-
> > 2 files changed, 70 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> > index a02c28d6a61c9..c36e133c51912 100644
> > --- a/arch/arm64/kvm/config.c
> > +++ b/arch/arm64/kvm/config.c
> > @@ -141,6 +141,7 @@ struct reg_feat_map_desc {
> > #define FEAT_AA64EL1 ID_AA64PFR0_EL1, EL1, IMP
> > #define FEAT_AA64EL2 ID_AA64PFR0_EL1, EL2, IMP
> > #define FEAT_AA64EL3 ID_AA64PFR0_EL1, EL3, IMP
> > +#define FEAT_SEL2 ID_AA64PFR0_EL1, SEL2, IMP
> > #define FEAT_AIE ID_AA64MMFR3_EL1, AIE, IMP
> > #define FEAT_S2POE ID_AA64MMFR3_EL1, S2POE, IMP
> > #define FEAT_S1POE ID_AA64MMFR3_EL1, S1POE, IMP
> > @@ -202,6 +203,8 @@ struct reg_feat_map_desc {
> > #define FEAT_ASID2 ID_AA64MMFR4_EL1, ASID2, IMP
> > #define FEAT_MEC ID_AA64MMFR3_EL1, MEC, IMP
> > #define FEAT_HAFT ID_AA64MMFR1_EL1, HAFDBS, HAFT
> > +#define FEAT_HDBSS ID_AA64MMFR1_EL1, HAFDBS, HDBSS
> > +#define FEAT_HPDS2 ID_AA64MMFR1_EL1, HPDS, HPDS2
> > #define FEAT_BTI ID_AA64PFR1_EL1, BT, IMP
> > #define FEAT_ExS ID_AA64MMFR0_EL1, EXS, IMP
> > #define FEAT_IESB ID_AA64MMFR2_EL1, IESB, IMP
> > @@ -219,6 +222,7 @@ struct reg_feat_map_desc {
> > #define FEAT_FGT2 ID_AA64MMFR0_EL1, FGT, FGT2
> > #define FEAT_MTPMU ID_AA64DFR0_EL1, MTPMU, IMP
> > #define FEAT_HCX ID_AA64MMFR1_EL1, HCX, IMP
> > +#define FEAT_S2PIE ID_AA64MMFR3_EL1, S2PIE, IMP
> >
> > static bool not_feat_aa64el3(struct kvm *kvm)
> > {
> > @@ -362,6 +366,28 @@ static bool feat_pmuv3p9(struct kvm *kvm)
> > return check_pmu_revision(kvm, V3P9);
> > }
> >
> > +#define has_feat_s2tgran(k, s) \
> > + ((kvm_has_feat_enum(kvm, ID_AA64MMFR0_EL1, TGRAN##s##_2, TGRAN##s) && \
> > + !kvm_has_feat_enum(kvm, ID_AA64MMFR0_EL1, TGRAN##s, NI)) || \
> > + kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN##s##_2, IMP))
> > +
> > +static bool feat_lpa2(struct kvm *kvm)
> > +{
> > + return ((kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4, 52_BIT) ||
> > + !kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4, IMP)) &&
> > + (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16, 52_BIT) ||
> > + !kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16, IMP)) &&
> > + (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4_2, 52_BIT) ||
> > + !has_feat_s2tgran(kvm, 4)) &&
> > + (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16_2, 52_BIT) ||
> > + !has_feat_s2tgran(kvm, 16)));
> > +}
> > +
> > +static bool feat_vmid16(struct kvm *kvm)
> > +{
> > + return kvm_has_feat_enum(kvm, ID_AA64MMFR1_EL1, VMIDBits, 16);
> > +}
> > +
> > static bool compute_hcr_rw(struct kvm *kvm, u64 *bits)
> > {
> > /* This is purely academic: AArch32 and NV are mutually exclusive */
> > @@ -1168,6 +1194,44 @@ static const struct reg_bits_to_feat_map mdcr_el2_feat_map[] = {
> > static const DECLARE_FEAT_MAP(mdcr_el2_desc, MDCR_EL2,
> > mdcr_el2_feat_map, FEAT_AA64EL2);
> >
> > +static const struct reg_bits_to_feat_map vtcr_el2_feat_map[] = {
> > + NEEDS_FEAT(VTCR_EL2_HDBSS, FEAT_HDBSS),
> > + NEEDS_FEAT(VTCR_EL2_HAFT, FEAT_HAFT),
> > + NEEDS_FEAT(VTCR_EL2_TL0 |
> > + VTCR_EL2_TL1 |
> > + VTCR_EL2_AssuredOnly |
> > + VTCR_EL2_GCSH,
> > + FEAT_THE),
>
> The text for VTCR_EL2.AssuredOnly says:
>
> This field is RES0 when VTCR_EL2.D128 is 1.
>
> > + NEEDS_FEAT(VTCR_EL2_D128, FEAT_D128),
> > + NEEDS_FEAT(VTCR_EL2_S2POE, FEAT_S2POE),
> > + NEEDS_FEAT(VTCR_EL2_S2PIE, FEAT_S2PIE),
>
> The text for VTCR_EL2.S2PIE says:
>
> This field is RES1 when VTCR_EL2.D128 is set.
>
>
> Are these cases that need to be handled here somehow?
These are not static configurations. They are dynamic behaviours
depending on other control bits.
D128 code, if it ever exists, will have to *interpret* these bits as
RES0 (resp. RES1) when evaluating the page tables.
If you want a similar example in existing code, look at the way we
handle TCR_EL1.HPDn in the S1 PTW. They are treated as RES1 if
TCR2_EL1.PIE is set, as per R_JHSVW.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
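Schematically, the dynamic interpretation described above could look like
the following sketch (hypothetical helper, not existing KVM code; the
RES0/RES1 rules are the ones quoted from the VTCR_EL2 field descriptions
earlier in the thread):

static u64 effective_vtcr_el2(u64 vtcr)
{
	if (vtcr & VTCR_EL2_D128) {
		/* AssuredOnly is RES0 when D128 is 1 */
		vtcr &= ~VTCR_EL2_AssuredOnly;
		/* S2PIE is RES1 when D128 is 1 */
		vtcr |= VTCR_EL2_S2PIE;
	}

	return vtcr;
}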
^ permalink raw reply [flat|nested] 13+ messages in thread
end of thread, other threads: [~2025-12-03 16:43 UTC | newest]
Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-11-29 14:45 [PATCH 0/4] KVM: arm64: VTCR_EL2 conversion to feature dependency framework Marc Zyngier
2025-11-29 14:45 ` [PATCH 1/4] arm64: Convert ID_AA64MMFR0_EL1.TGRAN{4,16,64}_2 to UnsignedEnum Marc Zyngier
2025-11-29 14:45 ` [PATCH 2/4] arm64: Convert VTCR_EL2 to sysreg infratructure Marc Zyngier
2025-12-03 11:43 ` Alexandru Elisei
2025-11-29 14:45 ` [PATCH 3/4] KVM: arm64: Account for RES1 bits in DECLARE_FEAT_MAP() Marc Zyngier
2025-11-29 14:45 ` [PATCH 4/4] KVM: arm64: Convert VTCR_EL2 to config-driven sanitisation Marc Zyngier
2025-12-03 11:44 ` Alexandru Elisei
2025-12-03 13:00 ` Marc Zyngier
2025-12-03 14:03 ` Alexandru Elisei
2025-12-03 14:58 ` Marc Zyngier
2025-12-03 15:20 ` Alexandru Elisei
2025-12-03 16:17 ` Joey Gouly
2025-12-03 16:43 ` Marc Zyngier