linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH v2 0/6] KVM: arm64: VTCR_EL2 conversion to feature dependency framework
@ 2025-12-10 17:30 Marc Zyngier
  2025-12-10 17:30 ` [PATCH v2 1/6] KVM: arm64: Fix EL2 S1 XN handling for hVHE setups Marc Zyngier
                   ` (6 more replies)
  0 siblings, 7 replies; 21+ messages in thread
From: Marc Zyngier @ 2025-12-10 17:30 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Alexandru Elisei, Sascha Bischoff, Quentin Perret, Fuad Tabba,
	Sebastian Ene

This is a follow-up on my VTCR_EL2 sanitisation series, with extra
goodies, mostly as a consequence of Alexandru's patches and review.

* From [1]:

  - Added two patches fixing some FEAT_XNX issues: one dating back
    to the hVHE introduction, and the other related to the newer
    stuff in 6.19.

  - Expanded the scope of the RES1 handling in DECLARE_FEAT_MAP() to
    deal with FGTs, as we're about to get quality stuff thanks to
    GICv5.

  - Simplified the S2TGRANx detection slightly.

  - Collected RBs, with thanks

[1] https://lore.kernel.org/r/20251129144525.2609207-1-maz@kernel.org

Marc Zyngier (6):
  KVM: arm64: Fix EL2 S1 XN handling for hVHE setups
  arm64: Convert ID_AA64MMFR0_EL1.TGRAN{4,16,64}_2 to UnsignedEnum
  arm64: Convert VTCR_EL2 to sysreg infratructure
  KVM: arm64: Account for RES1 bits in DECLARE_FEAT_MAP() and co
  KVM: arm64: Convert VTCR_EL2 to config-driven sanitisation
  KVM: arm64: Honor UX/PX attributes for EL2 S1 mappings

 arch/arm64/include/asm/kvm_arm.h     | 52 ++++-----------
 arch/arm64/include/asm/kvm_host.h    |  1 +
 arch/arm64/include/asm/kvm_pgtable.h |  2 +
 arch/arm64/include/asm/sysreg.h      |  1 -
 arch/arm64/kvm/config.c              | 94 ++++++++++++++++++++++++----
 arch/arm64/kvm/emulate-nested.c      | 55 +++++++++-------
 arch/arm64/kvm/hyp/pgtable.c         | 32 +++++++---
 arch/arm64/kvm/nested.c              | 11 ++--
 arch/arm64/tools/sysreg              | 63 ++++++++++++++++++-
 9 files changed, 217 insertions(+), 94 deletions(-)

-- 
2.47.3



^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v2 1/6] KVM: arm64: Fix EL2 S1 XN handling for hVHE setups
  2025-12-10 17:30 [PATCH v2 0/6] KVM: arm64: VTCR_EL2 conversion to feature dependency framework Marc Zyngier
@ 2025-12-10 17:30 ` Marc Zyngier
  2025-12-11 13:37   ` Fuad Tabba
  2025-12-19 13:38   ` Leonardo Bras
  2025-12-10 17:30 ` [PATCH v2 2/6] arm64: Convert ID_AA64MMFR0_EL1.TGRAN{4,16,64}_2 to UnsignedEnum Marc Zyngier
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 21+ messages in thread
From: Marc Zyngier @ 2025-12-10 17:30 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Alexandru Elisei, Sascha Bischoff, Quentin Perret, Fuad Tabba,
	Sebastian Ene

The current XN implementation is tied to the EL2 translation regime,
and falls flat on its face with the EL2&0 one that is used for hVHE,
as the permission bit for privileged execution is a different one.
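
To illustrate (sketch only, using the definitions this patch adds):
fully revoking execution on an EL2&0 mapping requires both bits,
which is what the updated macro now evaluates to at runtime:

	/* hVHE: EL2&0 regime, two distinct permission bits */
	attr |= __KVM_PTE_LEAF_ATTR_HI_S1_UXN |	/* bit 54 */
		__KVM_PTE_LEAF_ATTR_HI_S1_PXN;	/* bit 53 */
	/* non-hVHE: EL2 regime, single XN bit */
	attr |= __KVM_PTE_LEAF_ATTR_HI_S1_XN;	/* bit 54 */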

Fixes: 6537565fd9b7f ("KVM: arm64: Adjust EL2 stage-1 leaf AP bits when ARM64_KVM_HVHE is set")
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_pgtable.h | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index fc02de43c68dd..be68b89692065 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -87,7 +87,15 @@ typedef u64 kvm_pte_t;
 
 #define KVM_PTE_LEAF_ATTR_HI_SW		GENMASK(58, 55)
 
-#define KVM_PTE_LEAF_ATTR_HI_S1_XN	BIT(54)
+#define __KVM_PTE_LEAF_ATTR_HI_S1_XN	BIT(54)
+#define __KVM_PTE_LEAF_ATTR_HI_S1_UXN	BIT(54)
+#define __KVM_PTE_LEAF_ATTR_HI_S1_PXN	BIT(53)
+
+#define KVM_PTE_LEAF_ATTR_HI_S1_XN					\
+	({ cpus_have_final_cap(ARM64_KVM_HVHE) ?			\
+			(__KVM_PTE_LEAF_ATTR_HI_S1_UXN |		\
+			 __KVM_PTE_LEAF_ATTR_HI_S1_PXN) :		\
+			__KVM_PTE_LEAF_ATTR_HI_S1_XN; })
 
 #define KVM_PTE_LEAF_ATTR_HI_S2_XN	GENMASK(54, 53)
 
-- 
2.47.3



^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v2 2/6] arm64: Convert ID_AA64MMFR0_EL1.TGRAN{4,16,64}_2 to UnsignedEnum
  2025-12-10 17:30 [PATCH v2 0/6] KVM: arm64: VTCR_EL2 conversion to feature dependency framework Marc Zyngier
  2025-12-10 17:30 ` [PATCH v2 1/6] KVM: arm64: Fix EL2 S1 XN handling for hVHE setups Marc Zyngier
@ 2025-12-10 17:30 ` Marc Zyngier
  2025-12-11 13:38   ` Fuad Tabba
  2025-12-10 17:30 ` [PATCH v2 3/6] arm64: Convert VTCR_EL2 to sysreg infratructure Marc Zyngier
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 21+ messages in thread
From: Marc Zyngier @ 2025-12-10 17:30 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Alexandru Elisei, Sascha Bischoff, Quentin Perret, Fuad Tabba,
	Sebastian Ene

ID_AA64MMFR0_EL1.TGRAN{4,16,64}_2 are currently represented as unordered
enumerations. However, the architecture treats them as Unsigned,
as hinted at by the MRS data:

(FEAT_S2TGran4K <=> (((UInt(ID_AA64MMFR0_EL1.TGran4_2) == 0) &&
		       FEAT_TGran4K) ||
		     (UInt(ID_AA64MMFR0_EL1.TGran4_2) >= 2)))

and similar descriptions exist for 16 and 64k.

This is also confirmed by D24.1.3.3 ("Alternative ID scheme used for
ID_AA64MMFR0_EL1 stage 2 granule sizes") in the L.b revision of
the ARM ARM.

Turn these fields into UnsignedEnum so that we can use the above
description more or less literally.
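
As a sketch of what this enables (expressed with the existing
kvm_has_feat() helpers; a later patch in this series encodes exactly
this as has_feat_s2tgran()), the FEAT_S2TGran4K check becomes:

	return (kvm_has_feat_enum(kvm, ID_AA64MMFR0_EL1, TGRAN4_2, TGRAN4) &&
		kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4, IMP)) ||
	       kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4_2, IMP);

where the last term relies on the >= semantics of unsigned fields.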

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/tools/sysreg | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 1c6cdf9d54bba..9d388f87d9a13 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -2098,18 +2098,18 @@ UnsignedEnum	47:44	EXS
 	0b0000	NI
 	0b0001	IMP
 EndEnum
-Enum	43:40	TGRAN4_2
+UnsignedEnum	43:40	TGRAN4_2
 	0b0000	TGRAN4
 	0b0001	NI
 	0b0010	IMP
 	0b0011	52_BIT
 EndEnum
-Enum	39:36	TGRAN64_2
+UnsignedEnum	39:36	TGRAN64_2
 	0b0000	TGRAN64
 	0b0001	NI
 	0b0010	IMP
 EndEnum
-Enum	35:32	TGRAN16_2
+UnsignedEnum	35:32	TGRAN16_2
 	0b0000	TGRAN16
 	0b0001	NI
 	0b0010	IMP
-- 
2.47.3



^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v2 3/6] arm64: Convert VTCR_EL2 to sysreg infratructure
  2025-12-10 17:30 [PATCH v2 0/6] KVM: arm64: VTCR_EL2 conversion to feature dependency framework Marc Zyngier
  2025-12-10 17:30 ` [PATCH v2 1/6] KVM: arm64: Fix EL2 S1 XN handling for hVHE setups Marc Zyngier
  2025-12-10 17:30 ` [PATCH v2 2/6] arm64: Convert ID_AA64MMFR0_EL1.TGRAN{4,16,64}_2 to UnsignedEnum Marc Zyngier
@ 2025-12-10 17:30 ` Marc Zyngier
  2025-12-11 14:13   ` Fuad Tabba
  2025-12-10 17:30 ` [PATCH v2 4/6] KVM: arm64: Account for RES1 bits in DECLARE_FEAT_MAP() and co Marc Zyngier
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 21+ messages in thread
From: Marc Zyngier @ 2025-12-10 17:30 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Alexandru Elisei, Sascha Bischoff, Quentin Perret, Fuad Tabba,
	Sebastian Ene

Our definition of VTCR_EL2 is both partial (tons of fields are
missing) and totally inconsistent (some constants are shifted,
some are not). They are also expressed in terms of TCR, which is
rather inconvenient.

Replace the ad-hoc definitions with the the generated version.
This results in a bunch of additional changes to make the code cope
with the unshifted nature of generated enumerations.
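
For instance (condensed from the pgtable.c hunk below), an open-coded
shift against a pre-shifted constant becomes a FIELD_PREP() on the
generated, unshifted field mask:

	/* before */
	vtcr |= kvm_get_parange(mmfr0) << VTCR_EL2_PS_SHIFT;
	/* after */
	vtcr |= FIELD_PREP(VTCR_EL2_PS, kvm_get_parange(mmfr0));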

The register data was extracted from the BSD licenced AARCHMRS
(AARCHMRS_OPENSOURCE_A_profile_FAT-2025-09_ASL0).

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_arm.h | 52 +++++++----------------------
 arch/arm64/include/asm/sysreg.h  |  1 -
 arch/arm64/kvm/hyp/pgtable.c     |  8 ++---
 arch/arm64/kvm/nested.c          |  8 ++---
 arch/arm64/tools/sysreg          | 57 ++++++++++++++++++++++++++++++++
 5 files changed, 76 insertions(+), 50 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index e500600e4b9b8..dfdbd2b65db9e 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -124,37 +124,7 @@
 #define TCR_EL2_MASK	(TCR_EL2_TG0_MASK | TCR_EL2_SH0_MASK | \
 			 TCR_EL2_ORGN0_MASK | TCR_EL2_IRGN0_MASK)
 
-/* VTCR_EL2 Registers bits */
-#define VTCR_EL2_DS		TCR_EL2_DS
-#define VTCR_EL2_RES1		(1U << 31)
-#define VTCR_EL2_HD		(1 << 22)
-#define VTCR_EL2_HA		(1 << 21)
-#define VTCR_EL2_PS_SHIFT	TCR_EL2_PS_SHIFT
-#define VTCR_EL2_PS_MASK	TCR_EL2_PS_MASK
-#define VTCR_EL2_TG0_MASK	TCR_TG0_MASK
-#define VTCR_EL2_TG0_4K		TCR_TG0_4K
-#define VTCR_EL2_TG0_16K	TCR_TG0_16K
-#define VTCR_EL2_TG0_64K	TCR_TG0_64K
-#define VTCR_EL2_SH0_MASK	TCR_SH0_MASK
-#define VTCR_EL2_SH0_INNER	TCR_SH0_INNER
-#define VTCR_EL2_ORGN0_MASK	TCR_ORGN0_MASK
-#define VTCR_EL2_ORGN0_WBWA	TCR_ORGN0_WBWA
-#define VTCR_EL2_IRGN0_MASK	TCR_IRGN0_MASK
-#define VTCR_EL2_IRGN0_WBWA	TCR_IRGN0_WBWA
-#define VTCR_EL2_SL0_SHIFT	6
-#define VTCR_EL2_SL0_MASK	(3 << VTCR_EL2_SL0_SHIFT)
-#define VTCR_EL2_T0SZ_MASK	0x3f
-#define VTCR_EL2_VS_SHIFT	19
-#define VTCR_EL2_VS_8BIT	(0 << VTCR_EL2_VS_SHIFT)
-#define VTCR_EL2_VS_16BIT	(1 << VTCR_EL2_VS_SHIFT)
-
-#define VTCR_EL2_T0SZ(x)	TCR_T0SZ(x)
-
 /*
- * We configure the Stage-2 page tables to always restrict the IPA space to be
- * 40 bits wide (T0SZ = 24).  Systems with a PARange smaller than 40 bits are
- * not known to exist and will break with this configuration.
- *
  * The VTCR_EL2 is configured per VM and is initialised in kvm_init_stage2_mmu.
  *
  * Note that when using 4K pages, we concatenate two first level page tables
@@ -162,9 +132,6 @@
  *
  */
 
-#define VTCR_EL2_COMMON_BITS	(VTCR_EL2_SH0_INNER | VTCR_EL2_ORGN0_WBWA | \
-				 VTCR_EL2_IRGN0_WBWA | VTCR_EL2_RES1)
-
 /*
  * VTCR_EL2:SL0 indicates the entry level for Stage2 translation.
  * Interestingly, it depends on the page size.
@@ -196,30 +163,35 @@
  */
 #ifdef CONFIG_ARM64_64K_PAGES
 
-#define VTCR_EL2_TGRAN			VTCR_EL2_TG0_64K
+#define VTCR_EL2_TGRAN			64K
 #define VTCR_EL2_TGRAN_SL0_BASE		3UL
 
 #elif defined(CONFIG_ARM64_16K_PAGES)
 
-#define VTCR_EL2_TGRAN			VTCR_EL2_TG0_16K
+#define VTCR_EL2_TGRAN			16K
 #define VTCR_EL2_TGRAN_SL0_BASE		3UL
 
 #else	/* 4K */
 
-#define VTCR_EL2_TGRAN			VTCR_EL2_TG0_4K
+#define VTCR_EL2_TGRAN			4K
 #define VTCR_EL2_TGRAN_SL0_BASE		2UL
 
 #endif
 
 #define VTCR_EL2_LVLS_TO_SL0(levels)	\
-	((VTCR_EL2_TGRAN_SL0_BASE - (4 - (levels))) << VTCR_EL2_SL0_SHIFT)
+	FIELD_PREP(VTCR_EL2_SL0, (VTCR_EL2_TGRAN_SL0_BASE - (4 - (levels))))
 #define VTCR_EL2_SL0_TO_LVLS(sl0)	\
 	((sl0) + 4 - VTCR_EL2_TGRAN_SL0_BASE)
 #define VTCR_EL2_LVLS(vtcr)		\
-	VTCR_EL2_SL0_TO_LVLS(((vtcr) & VTCR_EL2_SL0_MASK) >> VTCR_EL2_SL0_SHIFT)
+	VTCR_EL2_SL0_TO_LVLS(FIELD_GET(VTCR_EL2_SL0, (vtcr)))
+
+#define VTCR_EL2_FLAGS	(SYS_FIELD_PREP_ENUM(VTCR_EL2, SH0, INNER)	    | \
+			 SYS_FIELD_PREP_ENUM(VTCR_EL2, ORGN0, WBWA)	    | \
+			 SYS_FIELD_PREP_ENUM(VTCR_EL2, IRGN0, WBWA)	    | \
+			 SYS_FIELD_PREP_ENUM(VTCR_EL2, TG0, VTCR_EL2_TGRAN) | \
+			 VTCR_EL2_RES1)
 
-#define VTCR_EL2_FLAGS			(VTCR_EL2_COMMON_BITS | VTCR_EL2_TGRAN)
-#define VTCR_EL2_IPA(vtcr)		(64 - ((vtcr) & VTCR_EL2_T0SZ_MASK))
+#define VTCR_EL2_IPA(vtcr)		(64 - FIELD_GET(VTCR_EL2_T0SZ, (vtcr)))
 
 /*
  * ARM VMSAv8-64 defines an algorithm for finding the translation table
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index c231d2a3e5159..acad7a7621b9e 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -516,7 +516,6 @@
 #define SYS_TTBR1_EL2			sys_reg(3, 4, 2, 0, 1)
 #define SYS_TCR_EL2			sys_reg(3, 4, 2, 0, 2)
 #define SYS_VTTBR_EL2			sys_reg(3, 4, 2, 1, 0)
-#define SYS_VTCR_EL2			sys_reg(3, 4, 2, 1, 2)
 
 #define SYS_HAFGRTR_EL2			sys_reg(3, 4, 3, 1, 6)
 #define SYS_SPSR_EL2			sys_reg(3, 4, 4, 0, 0)
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 947ac1a951a5b..e0bd6a0172729 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -583,8 +583,8 @@ u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift)
 	u64 vtcr = VTCR_EL2_FLAGS;
 	s8 lvls;
 
-	vtcr |= kvm_get_parange(mmfr0) << VTCR_EL2_PS_SHIFT;
-	vtcr |= VTCR_EL2_T0SZ(phys_shift);
+	vtcr |= FIELD_PREP(VTCR_EL2_PS, kvm_get_parange(mmfr0));
+	vtcr |= FIELD_PREP(VTCR_EL2_T0SZ, (UL(64) - phys_shift));
 	/*
 	 * Use a minimum 2 level page table to prevent splitting
 	 * host PMD huge pages at stage2.
@@ -624,9 +624,7 @@ u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift)
 		vtcr |= VTCR_EL2_DS;
 
 	/* Set the vmid bits */
-	vtcr |= (get_vmid_bits(mmfr1) == 16) ?
-		VTCR_EL2_VS_16BIT :
-		VTCR_EL2_VS_8BIT;
+	vtcr |= (get_vmid_bits(mmfr1) == 16) ? VTCR_EL2_VS : 0;
 
 	return vtcr;
 }
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 911fc99ed99d9..e1ef8930c97b3 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -377,7 +377,7 @@ static void vtcr_to_walk_info(u64 vtcr, struct s2_walk_info *wi)
 {
 	wi->t0sz = vtcr & TCR_EL2_T0SZ_MASK;
 
-	switch (vtcr & VTCR_EL2_TG0_MASK) {
+	switch (FIELD_GET(VTCR_EL2_TG0_MASK, vtcr)) {
 	case VTCR_EL2_TG0_4K:
 		wi->pgshift = 12;	 break;
 	case VTCR_EL2_TG0_16K:
@@ -513,7 +513,7 @@ static u8 get_guest_mapping_ttl(struct kvm_s2_mmu *mmu, u64 addr)
 
 	lockdep_assert_held_write(&kvm_s2_mmu_to_kvm(mmu)->mmu_lock);
 
-	switch (vtcr & VTCR_EL2_TG0_MASK) {
+	switch (FIELD_GET(VTCR_EL2_TG0_MASK, vtcr)) {
 	case VTCR_EL2_TG0_4K:
 		ttl = (TLBI_TTL_TG_4K << 2);
 		break;
@@ -530,7 +530,7 @@ static u8 get_guest_mapping_ttl(struct kvm_s2_mmu *mmu, u64 addr)
 
 again:
 	/* Iteratively compute the block sizes for a particular granule size */
-	switch (vtcr & VTCR_EL2_TG0_MASK) {
+	switch (FIELD_GET(VTCR_EL2_TG0_MASK, vtcr)) {
 	case VTCR_EL2_TG0_4K:
 		if	(sz < SZ_4K)	sz = SZ_4K;
 		else if (sz < SZ_2M)	sz = SZ_2M;
@@ -593,7 +593,7 @@ unsigned long compute_tlb_inval_range(struct kvm_s2_mmu *mmu, u64 val)
 
 	if (!max_size) {
 		/* Compute the maximum extent of the invalidation */
-		switch (mmu->tlb_vtcr & VTCR_EL2_TG0_MASK) {
+		switch (FIELD_GET(VTCR_EL2_TG0_MASK, mmu->tlb_vtcr)) {
 		case VTCR_EL2_TG0_4K:
 			max_size = SZ_1G;
 			break;
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 9d388f87d9a13..6f43b2ae5993b 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -4400,6 +4400,63 @@ Field	56:12	BADDR
 Res0	11:0
 EndSysreg
 
+Sysreg	VTCR_EL2	3	4	2	1	2
+Res0	63:46	
+Field	45	HDBSS
+Field	44	HAFT
+Res0	43:42	
+Field	41	TL0
+Field	40	GCSH
+Res0	39
+Field	38	D128
+Field	37	S2POE
+Field	36	S2PIE
+Field	35	TL1
+Field	34	AssuredOnly
+Field	33	SL2
+Field	32	DS
+Res1	31
+Field	30	NSA
+Field	29	NSW
+Field	28	HWU62
+Field	27	HWU61
+Field	26	HWU60
+Field	25	HWU59
+Res0	24:23
+Field	22	HD
+Field	21	HA
+Res0	20
+Enum	19	VS
+	0b0	8BIT
+	0b1	16BIT
+EndEnum
+Field	18:16	PS
+Enum	15:14	TG0
+	0b00	4K
+	0b01	64K
+	0b10	16K
+EndEnum
+Enum	13:12	SH0
+	0b00	NONE
+	0b01	OUTER
+	0b11	INNER
+EndEnum
+Enum	11:10	ORGN0
+	0b00	NC
+	0b01	WBWA
+	0b10	WT
+	0b11	WBnWA
+EndEnum
+Enum	9:8	IRGN0
+	0b00	NC
+	0b01	WBWA
+	0b10	WT
+	0b11	WBnWA
+EndEnum
+Field	7:6	SL0
+Field	5:0	T0SZ
+EndSysreg
+
 Sysreg	GCSCR_EL2	3	4	2	5	0
 Fields	GCSCR_ELx
 EndSysreg
-- 
2.47.3



^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v2 4/6] KVM: arm64: Account for RES1 bits in DECLARE_FEAT_MAP() and co
  2025-12-10 17:30 [PATCH v2 0/6] KVM: arm64: VTCR_EL2 conversion to feature dependency framework Marc Zyngier
                   ` (2 preceding siblings ...)
  2025-12-10 17:30 ` [PATCH v2 3/6] arm64: Convert VTCR_EL2 to sysreg infratructure Marc Zyngier
@ 2025-12-10 17:30 ` Marc Zyngier
  2025-12-11 14:30   ` Fuad Tabba
  2025-12-11 17:23   ` Sascha Bischoff
  2025-12-10 17:30 ` [PATCH v2 5/6] KVM: arm64: Convert VTCR_EL2 to config-driven sanitisation Marc Zyngier
                   ` (2 subsequent siblings)
  6 siblings, 2 replies; 21+ messages in thread
From: Marc Zyngier @ 2025-12-10 17:30 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Alexandru Elisei, Sascha Bischoff, Quentin Perret, Fuad Tabba,
	Sebastian Ene

None of the registers we manage in the feature dependency infrastructure
so far has any RES1 bit. This is about to change, as bit 31 of VTCR_EL2
is RES1.

In order not to fail the consistency checks by leaving a bit
undescribed, add RES1 bits to the set of immutable bits. This requires
some extra surgery for the FGT handling, as we now need to track RES1
bits there as well.

There are no RES1 FGT bits *yet*. Watch this space.
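
For reference, the tightened check in check_fgt_masks() now enforces
that the four masks partition the register. As an invariant (the code
itself only logs an inconsistency and recomputes res0):

	/* every bit lives in exactly one of res0/res1/mask/nmask */
	(res0 | res1 | mask | nmask) == GENMASK(63, 0) &&
	!(res0 & res1) && !(res0 & mask)  && !(res0 & nmask) &&
	!(res1 & mask) && !(res1 & nmask) && !(mask  & nmask)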

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h |  1 +
 arch/arm64/kvm/config.c           | 25 +++++++-------
 arch/arm64/kvm/emulate-nested.c   | 55 +++++++++++++++++--------------
 3 files changed, 45 insertions(+), 36 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index ac7f970c78830..b552a1e03848c 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -638,6 +638,7 @@ struct fgt_masks {
 	u64		mask;
 	u64		nmask;
 	u64		res0;
+	u64		res1;
 };
 
 extern struct fgt_masks hfgrtr_masks;
diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index 24bb3f36e9d59..3845b188551b6 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -16,14 +16,14 @@
  */
 struct reg_bits_to_feat_map {
 	union {
-		u64	bits;
-		u64	*res0p;
+		u64		 bits;
+		struct fgt_masks *masks;
 	};
 
 #define	NEVER_FGU	BIT(0)	/* Can trap, but never UNDEF */
 #define	CALL_FUNC	BIT(1)	/* Needs to evaluate tons of crap */
 #define	FIXED_VALUE	BIT(2)	/* RAZ/WI or RAO/WI in KVM */
-#define	RES0_POINTER	BIT(3)	/* Pointer to RES0 value instead of bits */
+#define	MASKS_POINTER	BIT(3)	/* Pointer to fgt_masks struct instead of bits */
 
 	unsigned long	flags;
 
@@ -92,8 +92,8 @@ struct reg_feat_map_desc {
 #define NEEDS_FEAT_FIXED(m, ...)			\
 	__NEEDS_FEAT_FLAG(m, FIXED_VALUE, bits, __VA_ARGS__, 0)
 
-#define NEEDS_FEAT_RES0(p, ...)				\
-	__NEEDS_FEAT_FLAG(p, RES0_POINTER, res0p, __VA_ARGS__)
+#define NEEDS_FEAT_MASKS(p, ...)				\
+	__NEEDS_FEAT_FLAG(p, MASKS_POINTER, masks, __VA_ARGS__)
 
 /*
  * Declare the dependency between a set of bits and a set of features,
@@ -109,19 +109,20 @@ struct reg_feat_map_desc {
 #define DECLARE_FEAT_MAP(n, r, m, f)					\
 	struct reg_feat_map_desc n = {					\
 		.name			= #r,				\
-		.feat_map		= NEEDS_FEAT(~r##_RES0, f), 	\
+		.feat_map		= NEEDS_FEAT(~(r##_RES0 |	\
+						       r##_RES1), f),	\
 		.bit_feat_map		= m,				\
 		.bit_feat_map_sz	= ARRAY_SIZE(m),		\
 	}
 
 /*
  * Specialised version of the above for FGT registers that have their
- * RES0 masks described as struct fgt_masks.
+ * RESx masks described as struct fgt_masks.
  */
 #define DECLARE_FEAT_MAP_FGT(n, msk, m, f)				\
 	struct reg_feat_map_desc n = {					\
 		.name			= #msk,				\
-		.feat_map		= NEEDS_FEAT_RES0(&msk.res0, f),\
+		.feat_map		= NEEDS_FEAT_MASKS(&msk, f),	\
 		.bit_feat_map		= m,				\
 		.bit_feat_map_sz	= ARRAY_SIZE(m),		\
 	}
@@ -1168,21 +1169,21 @@ static const DECLARE_FEAT_MAP(mdcr_el2_desc, MDCR_EL2,
 			      mdcr_el2_feat_map, FEAT_AA64EL2);
 
 static void __init check_feat_map(const struct reg_bits_to_feat_map *map,
-				  int map_size, u64 res0, const char *str)
+				  int map_size, u64 resx, const char *str)
 {
 	u64 mask = 0;
 
 	for (int i = 0; i < map_size; i++)
 		mask |= map[i].bits;
 
-	if (mask != ~res0)
+	if (mask != ~resx)
 		kvm_err("Undefined %s behaviour, bits %016llx\n",
-			str, mask ^ ~res0);
+			str, mask ^ ~resx);
 }
 
 static u64 reg_feat_map_bits(const struct reg_bits_to_feat_map *map)
 {
-	return map->flags & RES0_POINTER ? ~(*map->res0p) : map->bits;
+	return map->flags & MASKS_POINTER ? (map->masks->mask | map->masks->nmask) : map->bits;
 }
 
 static void __init check_reg_desc(const struct reg_feat_map_desc *r)
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 834f13fb1fb7d..75d49f83342a5 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -2105,23 +2105,24 @@ static u32 encoding_next(u32 encoding)
 }
 
 #define FGT_MASKS(__n, __m)						\
-	struct fgt_masks __n = { .str = #__m, .res0 = __m, }
-
-FGT_MASKS(hfgrtr_masks, HFGRTR_EL2_RES0);
-FGT_MASKS(hfgwtr_masks, HFGWTR_EL2_RES0);
-FGT_MASKS(hfgitr_masks, HFGITR_EL2_RES0);
-FGT_MASKS(hdfgrtr_masks, HDFGRTR_EL2_RES0);
-FGT_MASKS(hdfgwtr_masks, HDFGWTR_EL2_RES0);
-FGT_MASKS(hafgrtr_masks, HAFGRTR_EL2_RES0);
-FGT_MASKS(hfgrtr2_masks, HFGRTR2_EL2_RES0);
-FGT_MASKS(hfgwtr2_masks, HFGWTR2_EL2_RES0);
-FGT_MASKS(hfgitr2_masks, HFGITR2_EL2_RES0);
-FGT_MASKS(hdfgrtr2_masks, HDFGRTR2_EL2_RES0);
-FGT_MASKS(hdfgwtr2_masks, HDFGWTR2_EL2_RES0);
+	struct fgt_masks __n = { .str = #__m, .res0 = __m ## _RES0, .res1 = __m ## _RES1 }
+
+FGT_MASKS(hfgrtr_masks, HFGRTR_EL2);
+FGT_MASKS(hfgwtr_masks, HFGWTR_EL2);
+FGT_MASKS(hfgitr_masks, HFGITR_EL2);
+FGT_MASKS(hdfgrtr_masks, HDFGRTR_EL2);
+FGT_MASKS(hdfgwtr_masks, HDFGWTR_EL2);
+FGT_MASKS(hafgrtr_masks, HAFGRTR_EL2);
+FGT_MASKS(hfgrtr2_masks, HFGRTR2_EL2);
+FGT_MASKS(hfgwtr2_masks, HFGWTR2_EL2);
+FGT_MASKS(hfgitr2_masks, HFGITR2_EL2);
+FGT_MASKS(hdfgrtr2_masks, HDFGRTR2_EL2);
+FGT_MASKS(hdfgwtr2_masks, HDFGWTR2_EL2);
 
 static __init bool aggregate_fgt(union trap_config tc)
 {
 	struct fgt_masks *rmasks, *wmasks;
+	u64 rresx, wresx;
 
 	switch (tc.fgt) {
 	case HFGRTR_GROUP:
@@ -2154,24 +2155,27 @@ static __init bool aggregate_fgt(union trap_config tc)
 		break;
 	}
 
+	rresx = rmasks->res0 | rmasks->res1;
+	if (wmasks)
+		wresx = wmasks->res0 | wmasks->res1;
+
 	/*
 	 * A bit can be reserved in either the R or W register, but
 	 * not both.
 	 */
-	if ((BIT(tc.bit) & rmasks->res0) &&
-	    (!wmasks || (BIT(tc.bit) & wmasks->res0)))
+	if ((BIT(tc.bit) & rresx) && (!wmasks || (BIT(tc.bit) & wresx)))
 		return false;
 
 	if (tc.pol)
-		rmasks->mask |= BIT(tc.bit) & ~rmasks->res0;
+		rmasks->mask |= BIT(tc.bit) & ~rresx;
 	else
-		rmasks->nmask |= BIT(tc.bit) & ~rmasks->res0;
+		rmasks->nmask |= BIT(tc.bit) & ~rresx;
 
 	if (wmasks) {
 		if (tc.pol)
-			wmasks->mask |= BIT(tc.bit) & ~wmasks->res0;
+			wmasks->mask |= BIT(tc.bit) & ~wresx;
 		else
-			wmasks->nmask |= BIT(tc.bit) & ~wmasks->res0;
+			wmasks->nmask |= BIT(tc.bit) & ~wresx;
 	}
 
 	return true;
@@ -2180,7 +2184,6 @@ static __init bool aggregate_fgt(union trap_config tc)
 static __init int check_fgt_masks(struct fgt_masks *masks)
 {
 	unsigned long duplicate = masks->mask & masks->nmask;
-	u64 res0 = masks->res0;
 	int ret = 0;
 
 	if (duplicate) {
@@ -2194,10 +2197,14 @@ static __init int check_fgt_masks(struct fgt_masks *masks)
 		ret = -EINVAL;
 	}
 
-	masks->res0 = ~(masks->mask | masks->nmask);
-	if (masks->res0 != res0)
-		kvm_info("Implicit %s = %016llx, expecting %016llx\n",
-			 masks->str, masks->res0, res0);
+	if ((masks->res0 | masks->res1 | masks->mask | masks->nmask) != GENMASK(63, 0) ||
+	    (masks->res0 & masks->res1)  || (masks->res0 & masks->mask) ||
+	    (masks->res0 & masks->nmask) || (masks->res1 & masks->mask)  ||
+	    (masks->res1 & masks->nmask) || (masks->mask & masks->nmask)) {
+		kvm_info("Inconsistent masks for %s (%016llx, %016llx, %016llx, %016llx)\n",
+			 masks->str, masks->res0, masks->res1, masks->mask, masks->nmask);
+		masks->res0 = ~(masks->res1 | masks->mask | masks->nmask);
+	}
 
 	return ret;
 }
-- 
2.47.3



^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v2 5/6] KVM: arm64: Convert VTCR_EL2 to config-driven sanitisation
  2025-12-10 17:30 [PATCH v2 0/6] KVM: arm64: VTCR_EL2 conversion to feature dependency framework Marc Zyngier
                   ` (3 preceding siblings ...)
  2025-12-10 17:30 ` [PATCH v2 4/6] KVM: arm64: Account for RES1 bits in DECLARE_FEAT_MAP() and co Marc Zyngier
@ 2025-12-10 17:30 ` Marc Zyngier
  2025-12-11 14:44   ` Fuad Tabba
  2025-12-10 17:30 ` [PATCH v2 6/6] KVM: arm64: Honor UX/PX attributes for EL2 S1 mappings Marc Zyngier
  2025-12-11 14:55 ` [PATCH v2 0/6] KVM: arm64: VTCR_EL2 conversion to feature dependency framework Fuad Tabba
  6 siblings, 1 reply; 21+ messages in thread
From: Marc Zyngier @ 2025-12-10 17:30 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Alexandru Elisei, Sascha Bischoff, Quentin Perret, Fuad Tabba,
	Sebastian Ene

Describe all the VTCR_EL2 fields and their respective configurations,
making sure that we correctly ignore the bits that are not defined
for a given guest configuration.
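
The payoff is visible in the nested.c hunk below, where the hardcoded
masks

	res0 = GENMASK(63, 32) | GENMASK(30, 20);
	res1 = BIT(31);

are replaced with a per-VM computation driven by the feature map:

	get_reg_fixed_bits(kvm, VTCR_EL2, &res0, &res1);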

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/config.c | 69 +++++++++++++++++++++++++++++++++++++++++
 arch/arm64/kvm/nested.c |  3 +-
 2 files changed, 70 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index 3845b188551b6..9c04f895d3769 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -141,6 +141,7 @@ struct reg_feat_map_desc {
 #define FEAT_AA64EL1		ID_AA64PFR0_EL1, EL1, IMP
 #define FEAT_AA64EL2		ID_AA64PFR0_EL1, EL2, IMP
 #define FEAT_AA64EL3		ID_AA64PFR0_EL1, EL3, IMP
+#define FEAT_SEL2		ID_AA64PFR0_EL1, SEL2, IMP
 #define FEAT_AIE		ID_AA64MMFR3_EL1, AIE, IMP
 #define FEAT_S2POE		ID_AA64MMFR3_EL1, S2POE, IMP
 #define FEAT_S1POE		ID_AA64MMFR3_EL1, S1POE, IMP
@@ -202,6 +203,8 @@ struct reg_feat_map_desc {
 #define FEAT_ASID2		ID_AA64MMFR4_EL1, ASID2, IMP
 #define FEAT_MEC		ID_AA64MMFR3_EL1, MEC, IMP
 #define FEAT_HAFT		ID_AA64MMFR1_EL1, HAFDBS, HAFT
+#define FEAT_HDBSS		ID_AA64MMFR1_EL1, HAFDBS, HDBSS
+#define FEAT_HPDS2		ID_AA64MMFR1_EL1, HPDS, HPDS2
 #define FEAT_BTI		ID_AA64PFR1_EL1, BT, IMP
 #define FEAT_ExS		ID_AA64MMFR0_EL1, EXS, IMP
 #define FEAT_IESB		ID_AA64MMFR2_EL1, IESB, IMP
@@ -219,6 +222,7 @@ struct reg_feat_map_desc {
 #define FEAT_FGT2		ID_AA64MMFR0_EL1, FGT, FGT2
 #define FEAT_MTPMU		ID_AA64DFR0_EL1, MTPMU, IMP
 #define FEAT_HCX		ID_AA64MMFR1_EL1, HCX, IMP
+#define FEAT_S2PIE		ID_AA64MMFR3_EL1, S2PIE, IMP
 
 static bool not_feat_aa64el3(struct kvm *kvm)
 {
@@ -362,6 +366,28 @@ static bool feat_pmuv3p9(struct kvm *kvm)
 	return check_pmu_revision(kvm, V3P9);
 }
 
+#define has_feat_s2tgran(kvm, s)					\
+  ((kvm_has_feat_enum(kvm, ID_AA64MMFR0_EL1, TGRAN##s##_2, TGRAN##s) && \
+    kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN##s, IMP))		     ||	\
+   kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN##s##_2, IMP))
+
+static bool feat_lpa2(struct kvm *kvm)
+{
+	return ((kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4, 52_BIT)    ||
+		 !kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4, IMP))	&&
+		(kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16, 52_BIT)   ||
+		 !kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16, IMP))	&&
+		(kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4_2, 52_BIT)  ||
+		 !has_feat_s2tgran(kvm, 4))				&&
+		(kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16_2, 52_BIT) ||
+		 !has_feat_s2tgran(kvm, 16)));
+}
+
+static bool feat_vmid16(struct kvm *kvm)
+{
+	return kvm_has_feat_enum(kvm, ID_AA64MMFR1_EL1, VMIDBits, 16);
+}
+
 static bool compute_hcr_rw(struct kvm *kvm, u64 *bits)
 {
 	/* This is purely academic: AArch32 and NV are mutually exclusive */
@@ -1168,6 +1194,44 @@ static const struct reg_bits_to_feat_map mdcr_el2_feat_map[] = {
 static const DECLARE_FEAT_MAP(mdcr_el2_desc, MDCR_EL2,
 			      mdcr_el2_feat_map, FEAT_AA64EL2);
 
+static const struct reg_bits_to_feat_map vtcr_el2_feat_map[] = {
+	NEEDS_FEAT(VTCR_EL2_HDBSS, FEAT_HDBSS),
+	NEEDS_FEAT(VTCR_EL2_HAFT, FEAT_HAFT),
+	NEEDS_FEAT(VTCR_EL2_TL0		|
+		   VTCR_EL2_TL1		|
+		   VTCR_EL2_AssuredOnly	|
+		   VTCR_EL2_GCSH,
+		   FEAT_THE),
+	NEEDS_FEAT(VTCR_EL2_D128, FEAT_D128),
+	NEEDS_FEAT(VTCR_EL2_S2POE, FEAT_S2POE),
+	NEEDS_FEAT(VTCR_EL2_S2PIE, FEAT_S2PIE),
+	NEEDS_FEAT(VTCR_EL2_SL2		|
+		   VTCR_EL2_DS,
+		   feat_lpa2),
+	NEEDS_FEAT(VTCR_EL2_NSA		|
+		   VTCR_EL2_NSW,
+		   FEAT_SEL2),
+	NEEDS_FEAT(VTCR_EL2_HWU62	|
+		   VTCR_EL2_HWU61	|
+		   VTCR_EL2_HWU60	|
+		   VTCR_EL2_HWU59,
+		   FEAT_HPDS2),
+	NEEDS_FEAT(VTCR_EL2_HD, ID_AA64MMFR1_EL1, HAFDBS, DBM),
+	NEEDS_FEAT(VTCR_EL2_HA, ID_AA64MMFR1_EL1, HAFDBS, AF),
+	NEEDS_FEAT(VTCR_EL2_VS, feat_vmid16),
+	NEEDS_FEAT(VTCR_EL2_PS		|
+		   VTCR_EL2_TG0		|
+		   VTCR_EL2_SH0		|
+		   VTCR_EL2_ORGN0	|
+		   VTCR_EL2_IRGN0	|
+		   VTCR_EL2_SL0		|
+		   VTCR_EL2_T0SZ,
+		   FEAT_AA64EL1),
+};
+
+static const DECLARE_FEAT_MAP(vtcr_el2_desc, VTCR_EL2,
+			      vtcr_el2_feat_map, FEAT_AA64EL2);
+
 static void __init check_feat_map(const struct reg_bits_to_feat_map *map,
 				  int map_size, u64 resx, const char *str)
 {
@@ -1211,6 +1275,7 @@ void __init check_feature_map(void)
 	check_reg_desc(&tcr2_el2_desc);
 	check_reg_desc(&sctlr_el1_desc);
 	check_reg_desc(&mdcr_el2_desc);
+	check_reg_desc(&vtcr_el2_desc);
 }
 
 static bool idreg_feat_match(struct kvm *kvm, const struct reg_bits_to_feat_map *map)
@@ -1425,6 +1490,10 @@ void get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg, u64 *res0, u64 *r
 		*res0 = compute_reg_res0_bits(kvm, &mdcr_el2_desc, 0, 0);
 		*res1 = MDCR_EL2_RES1;
 		break;
+	case VTCR_EL2:
+		*res0 = compute_reg_res0_bits(kvm, &vtcr_el2_desc, 0, 0);
+		*res1 = VTCR_EL2_RES1;
+		break;
 	default:
 		WARN_ON_ONCE(1);
 		*res0 = *res1 = 0;
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index e1ef8930c97b3..606cebcaa7c09 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -1719,8 +1719,7 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
 	set_sysreg_masks(kvm, VTTBR_EL2, res0, res1);
 
 	/* VTCR_EL2 */
-	res0 = GENMASK(63, 32) | GENMASK(30, 20);
-	res1 = BIT(31);
+	get_reg_fixed_bits(kvm, VTCR_EL2, &res0, &res1);
 	set_sysreg_masks(kvm, VTCR_EL2, res0, res1);
 
 	/* VMPIDR_EL2 */
-- 
2.47.3



^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v2 6/6] KVM: arm64: Honor UX/PX attributes for EL2 S1 mappings
  2025-12-10 17:30 [PATCH v2 0/6] KVM: arm64: VTCR_EL2 conversion to feature dependency framework Marc Zyngier
                   ` (4 preceding siblings ...)
  2025-12-10 17:30 ` [PATCH v2 5/6] KVM: arm64: Convert VTCR_EL2 to config-driven sanitisation Marc Zyngier
@ 2025-12-10 17:30 ` Marc Zyngier
  2025-12-11 14:51   ` Fuad Tabba
  2025-12-11 15:18   ` Joey Gouly
  2025-12-11 14:55 ` [PATCH v2 0/6] KVM: arm64: VTCR_EL2 conversion to feature dependency framework Fuad Tabba
  6 siblings, 2 replies; 21+ messages in thread
From: Marc Zyngier @ 2025-12-10 17:30 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Alexandru Elisei, Sascha Bischoff, Quentin Perret, Fuad Tabba,
	Sebastian Ene

Now that we potentially have two bits to deal with when setting
execution permissions, make sure we correctly handle them, both
when building the page tables and when reading back from them.
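
Condensed from the hunks below (illustration only): under hVHE the two
permission bits are driven independently, while the non-hVHE case keeps
a single XN bit controlled by the privileged permission:

	if (cpus_have_final_cap(ARM64_KVM_HVHE)) {
		if (!(prot & KVM_PGTABLE_PROT_PX))
			attr |= KVM_PTE_LEAF_ATTR_HI_S1_PXN;
		if (!(prot & KVM_PGTABLE_PROT_UX))
			attr |= KVM_PTE_LEAF_ATTR_HI_S1_UXN;
	} else if (!(prot & KVM_PGTABLE_PROT_PX)) {
		attr |= KVM_PTE_LEAF_ATTR_HI_S1_XN;
	}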

Reported-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_pgtable.h | 12 +++---------
 arch/arm64/kvm/hyp/pgtable.c         | 24 +++++++++++++++++++++---
 2 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index be68b89692065..095e6b73740a6 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -87,15 +87,9 @@ typedef u64 kvm_pte_t;
 
 #define KVM_PTE_LEAF_ATTR_HI_SW		GENMASK(58, 55)
 
-#define __KVM_PTE_LEAF_ATTR_HI_S1_XN	BIT(54)
-#define __KVM_PTE_LEAF_ATTR_HI_S1_UXN	BIT(54)
-#define __KVM_PTE_LEAF_ATTR_HI_S1_PXN	BIT(53)
-
-#define KVM_PTE_LEAF_ATTR_HI_S1_XN					\
-	({ cpus_have_final_cap(ARM64_KVM_HVHE) ?			\
-			(__KVM_PTE_LEAF_ATTR_HI_S1_UXN |		\
-			 __KVM_PTE_LEAF_ATTR_HI_S1_PXN) :		\
-			__KVM_PTE_LEAF_ATTR_HI_S1_XN; })
+#define KVM_PTE_LEAF_ATTR_HI_S1_XN	BIT(54)
+#define KVM_PTE_LEAF_ATTR_HI_S1_UXN	BIT(54)
+#define KVM_PTE_LEAF_ATTR_HI_S1_PXN	BIT(53)
 
 #define KVM_PTE_LEAF_ATTR_HI_S2_XN	GENMASK(54, 53)
 
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index e0bd6a0172729..97c0835d25590 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -342,6 +342,9 @@ static int hyp_set_prot_attr(enum kvm_pgtable_prot prot, kvm_pte_t *ptep)
 	if (!(prot & KVM_PGTABLE_PROT_R))
 		return -EINVAL;
 
+	if (!cpus_have_final_cap(ARM64_KVM_HVHE))
+		prot &= ~KVM_PGTABLE_PROT_UX;
+
 	if (prot & KVM_PGTABLE_PROT_X) {
 		if (prot & KVM_PGTABLE_PROT_W)
 			return -EINVAL;
@@ -351,8 +354,16 @@ static int hyp_set_prot_attr(enum kvm_pgtable_prot prot, kvm_pte_t *ptep)
 
 		if (system_supports_bti_kernel())
 			attr |= KVM_PTE_LEAF_ATTR_HI_S1_GP;
+	}
+
+	if (cpus_have_final_cap(ARM64_KVM_HVHE)) {
+		if (!(prot & KVM_PGTABLE_PROT_PX))
+			attr |= KVM_PTE_LEAF_ATTR_HI_S1_PXN;
+		if (!(prot & KVM_PGTABLE_PROT_UX))
+			attr |= KVM_PTE_LEAF_ATTR_HI_S1_UXN;
 	} else {
-		attr |= KVM_PTE_LEAF_ATTR_HI_S1_XN;
+		if (!(prot & KVM_PGTABLE_PROT_PX))
+			attr |= KVM_PTE_LEAF_ATTR_HI_S1_XN;
 	}
 
 	attr |= FIELD_PREP(KVM_PTE_LEAF_ATTR_LO_S1_AP, ap);
@@ -373,8 +384,15 @@ enum kvm_pgtable_prot kvm_pgtable_hyp_pte_prot(kvm_pte_t pte)
 	if (!kvm_pte_valid(pte))
 		return prot;
 
-	if (!(pte & KVM_PTE_LEAF_ATTR_HI_S1_XN))
-		prot |= KVM_PGTABLE_PROT_X;
+	if (cpus_have_final_cap(ARM64_KVM_HVHE)) {
+		if (!(pte & KVM_PTE_LEAF_ATTR_HI_S1_PXN))
+			prot |= KVM_PGTABLE_PROT_PX;
+		if (!(pte & KVM_PTE_LEAF_ATTR_HI_S1_UXN))
+			prot |= KVM_PGTABLE_PROT_UX;
+	} else {
+		if (!(pte & KVM_PTE_LEAF_ATTR_HI_S1_XN))
+			prot |= KVM_PGTABLE_PROT_PX;
+	}
 
 	ap = FIELD_GET(KVM_PTE_LEAF_ATTR_LO_S1_AP, pte);
 	if (ap == KVM_PTE_LEAF_ATTR_LO_S1_AP_RO)
-- 
2.47.3



^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 1/6] KVM: arm64: Fix EL2 S1 XN handling for hVHE setups
  2025-12-10 17:30 ` [PATCH v2 1/6] KVM: arm64: Fix EL2 S1 XN handling for hVHE setups Marc Zyngier
@ 2025-12-11 13:37   ` Fuad Tabba
  2025-12-11 14:30     ` Marc Zyngier
  2025-12-19 13:38   ` Leonardo Bras
  1 sibling, 1 reply; 21+ messages in thread
From: Fuad Tabba @ 2025-12-11 13:37 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Alexandru Elisei, Sascha Bischoff,
	Quentin Perret, Sebastian Ene

Hi Marc,


On Wed, 10 Dec 2025 at 17:30, Marc Zyngier <maz@kernel.org> wrote:
>
> The current XN implementation is tied to the EL2 translation regime,
> and falls flat on its face with the EL2&0 one that is used for hVHE,
> as the permission bit for privileged execution is a different one.
>
> Fixes: 6537565fd9b7f ("KVM: arm64: Adjust EL2 stage-1 leaf AP bits when ARM64_KVM_HVHE is set")
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_pgtable.h | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> index fc02de43c68dd..be68b89692065 100644
> --- a/arch/arm64/include/asm/kvm_pgtable.h
> +++ b/arch/arm64/include/asm/kvm_pgtable.h
> @@ -87,7 +87,15 @@ typedef u64 kvm_pte_t;
>
>  #define KVM_PTE_LEAF_ATTR_HI_SW                GENMASK(58, 55)
>
> -#define KVM_PTE_LEAF_ATTR_HI_S1_XN     BIT(54)
> +#define __KVM_PTE_LEAF_ATTR_HI_S1_XN   BIT(54)
> +#define __KVM_PTE_LEAF_ATTR_HI_S1_UXN  BIT(54)
> +#define __KVM_PTE_LEAF_ATTR_HI_S1_PXN  BIT(53)
> +
> +#define KVM_PTE_LEAF_ATTR_HI_S1_XN                                     \
> +       ({ cpus_have_final_cap(ARM64_KVM_HVHE) ?                        \
> +                       (__KVM_PTE_LEAF_ATTR_HI_S1_UXN |                \
> +                        __KVM_PTE_LEAF_ATTR_HI_S1_PXN) :               \
> +                       __KVM_PTE_LEAF_ATTR_HI_S1_XN; })
>
>  #define KVM_PTE_LEAF_ATTR_HI_S2_XN     GENMASK(54, 53)

I was just wondering, is this patch really necessary, considering
patch 6/6 redos the whole thing and fixes the bug?

That said:
Reviewed-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad

> --
> 2.47.3
>


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 2/6] arm64: Convert ID_AA64MMFR0_EL1.TGRAN{4,16,64}_2 to UnsignedEnum
  2025-12-10 17:30 ` [PATCH v2 2/6] arm64: Convert ID_AA64MMFR0_EL1.TGRAN{4,16,64}_2 to UnsignedEnum Marc Zyngier
@ 2025-12-11 13:38   ` Fuad Tabba
  0 siblings, 0 replies; 21+ messages in thread
From: Fuad Tabba @ 2025-12-11 13:38 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Alexandru Elisei, Sascha Bischoff,
	Quentin Perret, Sebastian Ene

Hi Marc,

On Wed, 10 Dec 2025 at 17:30, Marc Zyngier <maz@kernel.org> wrote:
>
> ID_AA64MMFR0_EL1.TGRAN{4,16,64}_2 are currently represented as unordered
> enumerations. However, the architecture treats them as Unsigned,
> as hinted at by the MRS data:
>
> (FEAT_S2TGran4K <=> (((UInt(ID_AA64MMFR0_EL1.TGran4_2) == 0) &&
>                        FEAT_TGran4K) ||
>                      (UInt(ID_AA64MMFR0_EL1.TGran4_2) >= 2)))
>
> and similar descriptions exist for 16 and 64k.
>
> This is also confirmed by D24.1.3.3 ("Alternative ID scheme used for
> ID_AA64MMFR0_EL1 stage 2 granule sizes") in the L.b revision of
> the ARM ARM.
>
> Turn these fields into UnsignedEnum so that we can use the above
> description more or less literally.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---

Reviewed-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad


>  arch/arm64/tools/sysreg | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
> index 1c6cdf9d54bba..9d388f87d9a13 100644
> --- a/arch/arm64/tools/sysreg
> +++ b/arch/arm64/tools/sysreg
> @@ -2098,18 +2098,18 @@ UnsignedEnum    47:44   EXS
>         0b0000  NI
>         0b0001  IMP
>  EndEnum
> -Enum   43:40   TGRAN4_2
> +UnsignedEnum   43:40   TGRAN4_2
>         0b0000  TGRAN4
>         0b0001  NI
>         0b0010  IMP
>         0b0011  52_BIT
>  EndEnum
> -Enum   39:36   TGRAN64_2
> +UnsignedEnum   39:36   TGRAN64_2
>         0b0000  TGRAN64
>         0b0001  NI
>         0b0010  IMP
>  EndEnum
> -Enum   35:32   TGRAN16_2
> +UnsignedEnum   35:32   TGRAN16_2
>         0b0000  TGRAN16
>         0b0001  NI
>         0b0010  IMP
> --
> 2.47.3
>


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 3/6] arm64: Convert VTCR_EL2 to sysreg infratructure
  2025-12-10 17:30 ` [PATCH v2 3/6] arm64: Convert VTCR_EL2 to sysreg infratructure Marc Zyngier
@ 2025-12-11 14:13   ` Fuad Tabba
  0 siblings, 0 replies; 21+ messages in thread
From: Fuad Tabba @ 2025-12-11 14:13 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Alexandru Elisei, Sascha Bischoff,
	Quentin Perret, Sebastian Ene

Hi Marc,

In subject: infratructure -> infrastructure


On Wed, 10 Dec 2025 at 17:30, Marc Zyngier <maz@kernel.org> wrote:
>
> Our definition of VTCR_EL2 is both partial (tons of fields are
> missing) and totally inconsistent (some constants are shifted,
> some are not). They are also expressed in terms of TCR, which is
> rather inconvenient.
>
> Replace the ad-hoc definitions with the the generated version.

Duplicate "the"

> This results in a bunch of additional changes to make the code cope
> with the unshifted nature of generated enumerations.
>
> The register data was extracted from the BSD licenced AARCHMRS
> (AARCHMRS_OPENSOURCE_A_profile_FAT-2025-09_ASL0).
>
> Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>

After checking the bits against the spec, and with the nits above:

Reviewed-by: Fuad Tabba <tabba@google.com>

/fuad

> ---
>  arch/arm64/include/asm/kvm_arm.h | 52 +++++++----------------------
>  arch/arm64/include/asm/sysreg.h  |  1 -
>  arch/arm64/kvm/hyp/pgtable.c     |  8 ++---
>  arch/arm64/kvm/nested.c          |  8 ++---
>  arch/arm64/tools/sysreg          | 57 ++++++++++++++++++++++++++++++++
>  5 files changed, 76 insertions(+), 50 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index e500600e4b9b8..dfdbd2b65db9e 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -124,37 +124,7 @@
>  #define TCR_EL2_MASK   (TCR_EL2_TG0_MASK | TCR_EL2_SH0_MASK | \
>                          TCR_EL2_ORGN0_MASK | TCR_EL2_IRGN0_MASK)
>
> -/* VTCR_EL2 Registers bits */
> -#define VTCR_EL2_DS            TCR_EL2_DS
> -#define VTCR_EL2_RES1          (1U << 31)
> -#define VTCR_EL2_HD            (1 << 22)
> -#define VTCR_EL2_HA            (1 << 21)
> -#define VTCR_EL2_PS_SHIFT      TCR_EL2_PS_SHIFT
> -#define VTCR_EL2_PS_MASK       TCR_EL2_PS_MASK
> -#define VTCR_EL2_TG0_MASK      TCR_TG0_MASK
> -#define VTCR_EL2_TG0_4K                TCR_TG0_4K
> -#define VTCR_EL2_TG0_16K       TCR_TG0_16K
> -#define VTCR_EL2_TG0_64K       TCR_TG0_64K
> -#define VTCR_EL2_SH0_MASK      TCR_SH0_MASK
> -#define VTCR_EL2_SH0_INNER     TCR_SH0_INNER
> -#define VTCR_EL2_ORGN0_MASK    TCR_ORGN0_MASK
> -#define VTCR_EL2_ORGN0_WBWA    TCR_ORGN0_WBWA
> -#define VTCR_EL2_IRGN0_MASK    TCR_IRGN0_MASK
> -#define VTCR_EL2_IRGN0_WBWA    TCR_IRGN0_WBWA
> -#define VTCR_EL2_SL0_SHIFT     6
> -#define VTCR_EL2_SL0_MASK      (3 << VTCR_EL2_SL0_SHIFT)
> -#define VTCR_EL2_T0SZ_MASK     0x3f
> -#define VTCR_EL2_VS_SHIFT      19
> -#define VTCR_EL2_VS_8BIT       (0 << VTCR_EL2_VS_SHIFT)
> -#define VTCR_EL2_VS_16BIT      (1 << VTCR_EL2_VS_SHIFT)
> -
> -#define VTCR_EL2_T0SZ(x)       TCR_T0SZ(x)
> -
>  /*
> - * We configure the Stage-2 page tables to always restrict the IPA space to be
> - * 40 bits wide (T0SZ = 24).  Systems with a PARange smaller than 40 bits are
> - * not known to exist and will break with this configuration.
> - *
>   * The VTCR_EL2 is configured per VM and is initialised in kvm_init_stage2_mmu.
>   *
>   * Note that when using 4K pages, we concatenate two first level page tables
> @@ -162,9 +132,6 @@
>   *
>   */
>
> -#define VTCR_EL2_COMMON_BITS   (VTCR_EL2_SH0_INNER | VTCR_EL2_ORGN0_WBWA | \
> -                                VTCR_EL2_IRGN0_WBWA | VTCR_EL2_RES1)
> -
>  /*
>   * VTCR_EL2:SL0 indicates the entry level for Stage2 translation.
>   * Interestingly, it depends on the page size.
> @@ -196,30 +163,35 @@
>   */
>  #ifdef CONFIG_ARM64_64K_PAGES
>
> -#define VTCR_EL2_TGRAN                 VTCR_EL2_TG0_64K
> +#define VTCR_EL2_TGRAN                 64K
>  #define VTCR_EL2_TGRAN_SL0_BASE                3UL
>
>  #elif defined(CONFIG_ARM64_16K_PAGES)
>
> -#define VTCR_EL2_TGRAN                 VTCR_EL2_TG0_16K
> +#define VTCR_EL2_TGRAN                 16K
>  #define VTCR_EL2_TGRAN_SL0_BASE                3UL
>
>  #else  /* 4K */
>
> -#define VTCR_EL2_TGRAN                 VTCR_EL2_TG0_4K
> +#define VTCR_EL2_TGRAN                 4K
>  #define VTCR_EL2_TGRAN_SL0_BASE                2UL
>
>  #endif
>
>  #define VTCR_EL2_LVLS_TO_SL0(levels)   \
> -       ((VTCR_EL2_TGRAN_SL0_BASE - (4 - (levels))) << VTCR_EL2_SL0_SHIFT)
> +       FIELD_PREP(VTCR_EL2_SL0, (VTCR_EL2_TGRAN_SL0_BASE - (4 - (levels))))
>  #define VTCR_EL2_SL0_TO_LVLS(sl0)      \
>         ((sl0) + 4 - VTCR_EL2_TGRAN_SL0_BASE)
>  #define VTCR_EL2_LVLS(vtcr)            \
> -       VTCR_EL2_SL0_TO_LVLS(((vtcr) & VTCR_EL2_SL0_MASK) >> VTCR_EL2_SL0_SHIFT)
> +       VTCR_EL2_SL0_TO_LVLS(FIELD_GET(VTCR_EL2_SL0, (vtcr)))
> +
> +#define VTCR_EL2_FLAGS (SYS_FIELD_PREP_ENUM(VTCR_EL2, SH0, INNER)          | \
> +                        SYS_FIELD_PREP_ENUM(VTCR_EL2, ORGN0, WBWA)         | \
> +                        SYS_FIELD_PREP_ENUM(VTCR_EL2, IRGN0, WBWA)         | \
> +                        SYS_FIELD_PREP_ENUM(VTCR_EL2, TG0, VTCR_EL2_TGRAN) | \
> +                        VTCR_EL2_RES1)
>
> -#define VTCR_EL2_FLAGS                 (VTCR_EL2_COMMON_BITS | VTCR_EL2_TGRAN)
> -#define VTCR_EL2_IPA(vtcr)             (64 - ((vtcr) & VTCR_EL2_T0SZ_MASK))
> +#define VTCR_EL2_IPA(vtcr)             (64 - FIELD_GET(VTCR_EL2_T0SZ, (vtcr)))
>
>  /*
>   * ARM VMSAv8-64 defines an algorithm for finding the translation table
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index c231d2a3e5159..acad7a7621b9e 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -516,7 +516,6 @@
>  #define SYS_TTBR1_EL2                  sys_reg(3, 4, 2, 0, 1)
>  #define SYS_TCR_EL2                    sys_reg(3, 4, 2, 0, 2)
>  #define SYS_VTTBR_EL2                  sys_reg(3, 4, 2, 1, 0)
> -#define SYS_VTCR_EL2                   sys_reg(3, 4, 2, 1, 2)
>
>  #define SYS_HAFGRTR_EL2                        sys_reg(3, 4, 3, 1, 6)
>  #define SYS_SPSR_EL2                   sys_reg(3, 4, 4, 0, 0)
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index 947ac1a951a5b..e0bd6a0172729 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -583,8 +583,8 @@ u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift)
>         u64 vtcr = VTCR_EL2_FLAGS;
>         s8 lvls;
>
> -       vtcr |= kvm_get_parange(mmfr0) << VTCR_EL2_PS_SHIFT;
> -       vtcr |= VTCR_EL2_T0SZ(phys_shift);
> +       vtcr |= FIELD_PREP(VTCR_EL2_PS, kvm_get_parange(mmfr0));
> +       vtcr |= FIELD_PREP(VTCR_EL2_T0SZ, (UL(64) - phys_shift));
>         /*
>          * Use a minimum 2 level page table to prevent splitting
>          * host PMD huge pages at stage2.
> @@ -624,9 +624,7 @@ u64 kvm_get_vtcr(u64 mmfr0, u64 mmfr1, u32 phys_shift)
>                 vtcr |= VTCR_EL2_DS;
>
>         /* Set the vmid bits */
> -       vtcr |= (get_vmid_bits(mmfr1) == 16) ?
> -               VTCR_EL2_VS_16BIT :
> -               VTCR_EL2_VS_8BIT;
> +       vtcr |= (get_vmid_bits(mmfr1) == 16) ? VTCR_EL2_VS : 0;
>
>         return vtcr;
>  }
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index 911fc99ed99d9..e1ef8930c97b3 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -377,7 +377,7 @@ static void vtcr_to_walk_info(u64 vtcr, struct s2_walk_info *wi)
>  {
>         wi->t0sz = vtcr & TCR_EL2_T0SZ_MASK;
>
> -       switch (vtcr & VTCR_EL2_TG0_MASK) {
> +       switch (FIELD_GET(VTCR_EL2_TG0_MASK, vtcr)) {
>         case VTCR_EL2_TG0_4K:
>                 wi->pgshift = 12;        break;
>         case VTCR_EL2_TG0_16K:
> @@ -513,7 +513,7 @@ static u8 get_guest_mapping_ttl(struct kvm_s2_mmu *mmu, u64 addr)
>
>         lockdep_assert_held_write(&kvm_s2_mmu_to_kvm(mmu)->mmu_lock);
>
> -       switch (vtcr & VTCR_EL2_TG0_MASK) {
> +       switch (FIELD_GET(VTCR_EL2_TG0_MASK, vtcr)) {
>         case VTCR_EL2_TG0_4K:
>                 ttl = (TLBI_TTL_TG_4K << 2);
>                 break;
> @@ -530,7 +530,7 @@ static u8 get_guest_mapping_ttl(struct kvm_s2_mmu *mmu, u64 addr)
>
>  again:
>         /* Iteratively compute the block sizes for a particular granule size */
> -       switch (vtcr & VTCR_EL2_TG0_MASK) {
> +       switch (FIELD_GET(VTCR_EL2_TG0_MASK, vtcr)) {
>         case VTCR_EL2_TG0_4K:
>                 if      (sz < SZ_4K)    sz = SZ_4K;
>                 else if (sz < SZ_2M)    sz = SZ_2M;
> @@ -593,7 +593,7 @@ unsigned long compute_tlb_inval_range(struct kvm_s2_mmu *mmu, u64 val)
>
>         if (!max_size) {
>                 /* Compute the maximum extent of the invalidation */
> -               switch (mmu->tlb_vtcr & VTCR_EL2_TG0_MASK) {
> +               switch (FIELD_GET(VTCR_EL2_TG0_MASK, mmu->tlb_vtcr)) {
>                 case VTCR_EL2_TG0_4K:
>                         max_size = SZ_1G;
>                         break;
> diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
> index 9d388f87d9a13..6f43b2ae5993b 100644
> --- a/arch/arm64/tools/sysreg
> +++ b/arch/arm64/tools/sysreg
> @@ -4400,6 +4400,63 @@ Field    56:12   BADDR
>  Res0   11:0
>  EndSysreg
>
> +Sysreg VTCR_EL2        3       4       2       1       2
> +Res0   63:46
> +Field  45      HDBSS
> +Field  44      HAFT
> +Res0   43:42
> +Field  41      TL0
> +Field  40      GCSH
> +Res0   39
> +Field  38      D128
> +Field  37      S2POE
> +Field  36      S2PIE
> +Field  35      TL1
> +Field  34      AssuredOnly
> +Field  33      SL2
> +Field  32      DS
> +Res1   31
> +Field  30      NSA
> +Field  29      NSW
> +Field  28      HWU62
> +Field  27      HWU61
> +Field  26      HWU60
> +Field  25      HWU59
> +Res0   24:23
> +Field  22      HD
> +Field  21      HA
> +Res0   20
> +Enum   19      VS
> +       0b0     8BIT
> +       0b1     16BIT
> +EndEnum
> +Field  18:16   PS
> +Enum   15:14   TG0
> +       0b00    4K
> +       0b01    64K
> +       0b10    16K
> +EndEnum
> +Enum   13:12   SH0
> +       0b00    NONE
> +       0b01    OUTER
> +       0b11    INNER
> +EndEnum
> +Enum   11:10   ORGN0
> +       0b00    NC
> +       0b01    WBWA
> +       0b10    WT
> +       0b11    WBnWA
> +EndEnum
> +Enum   9:8     IRGN0
> +       0b00    NC
> +       0b01    WBWA
> +       0b10    WT
> +       0b11    WBnWA
> +EndEnum
> +Field  7:6     SL0
> +Field  5:0     T0SZ
> +EndSysreg
> +
>  Sysreg GCSCR_EL2       3       4       2       5       0
>  Fields GCSCR_ELx
>  EndSysreg
> --
> 2.47.3
>


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 4/6] KVM: arm64: Account for RES1 bits in DECLARE_FEAT_MAP() and co
  2025-12-10 17:30 ` [PATCH v2 4/6] KVM: arm64: Account for RES1 bits in DECLARE_FEAT_MAP() and co Marc Zyngier
@ 2025-12-11 14:30   ` Fuad Tabba
  2025-12-11 17:23   ` Sascha Bischoff
  1 sibling, 0 replies; 21+ messages in thread
From: Fuad Tabba @ 2025-12-11 14:30 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Alexandru Elisei, Sascha Bischoff,
	Quentin Perret, Sebastian Ene

On Wed, 10 Dec 2025 at 17:30, Marc Zyngier <maz@kernel.org> wrote:
>
> None of the registers we manage in the feature dependency infrastructure
> so far has any RES1 bit. This is about to change, as bit 31 of VTCR_EL2
> is RES1.
>
> In order not to fail the consistency checks by leaving a bit
> undescribed, add RES1 bits to the set of immutable bits. This requires
> some extra surgery for the FGT handling, as we now need to track RES1
> bits there as well.
>
> There are no RES1 FGT bits *yet*. Watch this space.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>


Reviewed-by: Fuad Tabba <tabba@google.com>

/fuad

> ---
>  arch/arm64/include/asm/kvm_host.h |  1 +
>  arch/arm64/kvm/config.c           | 25 +++++++-------
>  arch/arm64/kvm/emulate-nested.c   | 55 +++++++++++++++++--------------
>  3 files changed, 45 insertions(+), 36 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index ac7f970c78830..b552a1e03848c 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -638,6 +638,7 @@ struct fgt_masks {
>         u64             mask;
>         u64             nmask;
>         u64             res0;
> +       u64             res1;
>  };
>
>  extern struct fgt_masks hfgrtr_masks;
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index 24bb3f36e9d59..3845b188551b6 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -16,14 +16,14 @@
>   */
>  struct reg_bits_to_feat_map {
>         union {
> -               u64     bits;
> -               u64     *res0p;
> +               u64              bits;
> +               struct fgt_masks *masks;
>         };
>
>  #define        NEVER_FGU       BIT(0)  /* Can trap, but never UNDEF */
>  #define        CALL_FUNC       BIT(1)  /* Needs to evaluate tons of crap */
>  #define        FIXED_VALUE     BIT(2)  /* RAZ/WI or RAO/WI in KVM */
> -#define        RES0_POINTER    BIT(3)  /* Pointer to RES0 value instead of bits */
> +#define        MASKS_POINTER   BIT(3)  /* Pointer to fgt_masks struct instead of bits */
>
>         unsigned long   flags;
>
> @@ -92,8 +92,8 @@ struct reg_feat_map_desc {
>  #define NEEDS_FEAT_FIXED(m, ...)                       \
>         __NEEDS_FEAT_FLAG(m, FIXED_VALUE, bits, __VA_ARGS__, 0)
>
> -#define NEEDS_FEAT_RES0(p, ...)                                \
> -       __NEEDS_FEAT_FLAG(p, RES0_POINTER, res0p, __VA_ARGS__)
> +#define NEEDS_FEAT_MASKS(p, ...)                               \
> +       __NEEDS_FEAT_FLAG(p, MASKS_POINTER, masks, __VA_ARGS__)
>
>  /*
>   * Declare the dependency between a set of bits and a set of features,
> @@ -109,19 +109,20 @@ struct reg_feat_map_desc {
>  #define DECLARE_FEAT_MAP(n, r, m, f)                                   \
>         struct reg_feat_map_desc n = {                                  \
>                 .name                   = #r,                           \
> -               .feat_map               = NEEDS_FEAT(~r##_RES0, f),     \
> +               .feat_map               = NEEDS_FEAT(~(r##_RES0 |       \
> +                                                      r##_RES1), f),   \
>                 .bit_feat_map           = m,                            \
>                 .bit_feat_map_sz        = ARRAY_SIZE(m),                \
>         }
>
>  /*
>   * Specialised version of the above for FGT registers that have their
> - * RES0 masks described as struct fgt_masks.
> + * RESx masks described as struct fgt_masks.
>   */
>  #define DECLARE_FEAT_MAP_FGT(n, msk, m, f)                             \
>         struct reg_feat_map_desc n = {                                  \
>                 .name                   = #msk,                         \
> -               .feat_map               = NEEDS_FEAT_RES0(&msk.res0, f),\
> +               .feat_map               = NEEDS_FEAT_MASKS(&msk, f),    \
>                 .bit_feat_map           = m,                            \
>                 .bit_feat_map_sz        = ARRAY_SIZE(m),                \
>         }
> @@ -1168,21 +1169,21 @@ static const DECLARE_FEAT_MAP(mdcr_el2_desc, MDCR_EL2,
>                               mdcr_el2_feat_map, FEAT_AA64EL2);
>
>  static void __init check_feat_map(const struct reg_bits_to_feat_map *map,
> -                                 int map_size, u64 res0, const char *str)
> +                                 int map_size, u64 resx, const char *str)
>  {
>         u64 mask = 0;
>
>         for (int i = 0; i < map_size; i++)
>                 mask |= map[i].bits;
>
> -       if (mask != ~res0)
> +       if (mask != ~resx)
>                 kvm_err("Undefined %s behaviour, bits %016llx\n",
> -                       str, mask ^ ~res0);
> +                       str, mask ^ ~resx);
>  }
>
>  static u64 reg_feat_map_bits(const struct reg_bits_to_feat_map *map)
>  {
> -       return map->flags & RES0_POINTER ? ~(*map->res0p) : map->bits;
> +       return map->flags & MASKS_POINTER ? (map->masks->mask | map->masks->nmask) : map->bits;
>  }
>
>  static void __init check_reg_desc(const struct reg_feat_map_desc *r)
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index 834f13fb1fb7d..75d49f83342a5 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -2105,23 +2105,24 @@ static u32 encoding_next(u32 encoding)
>  }
>
>  #define FGT_MASKS(__n, __m)                                            \
> -       struct fgt_masks __n = { .str = #__m, .res0 = __m, }
> -
> -FGT_MASKS(hfgrtr_masks, HFGRTR_EL2_RES0);
> -FGT_MASKS(hfgwtr_masks, HFGWTR_EL2_RES0);
> -FGT_MASKS(hfgitr_masks, HFGITR_EL2_RES0);
> -FGT_MASKS(hdfgrtr_masks, HDFGRTR_EL2_RES0);
> -FGT_MASKS(hdfgwtr_masks, HDFGWTR_EL2_RES0);
> -FGT_MASKS(hafgrtr_masks, HAFGRTR_EL2_RES0);
> -FGT_MASKS(hfgrtr2_masks, HFGRTR2_EL2_RES0);
> -FGT_MASKS(hfgwtr2_masks, HFGWTR2_EL2_RES0);
> -FGT_MASKS(hfgitr2_masks, HFGITR2_EL2_RES0);
> -FGT_MASKS(hdfgrtr2_masks, HDFGRTR2_EL2_RES0);
> -FGT_MASKS(hdfgwtr2_masks, HDFGWTR2_EL2_RES0);
> +       struct fgt_masks __n = { .str = #__m, .res0 = __m ## _RES0, .res1 = __m ## _RES1 }
> +
> +FGT_MASKS(hfgrtr_masks, HFGRTR_EL2);
> +FGT_MASKS(hfgwtr_masks, HFGWTR_EL2);
> +FGT_MASKS(hfgitr_masks, HFGITR_EL2);
> +FGT_MASKS(hdfgrtr_masks, HDFGRTR_EL2);
> +FGT_MASKS(hdfgwtr_masks, HDFGWTR_EL2);
> +FGT_MASKS(hafgrtr_masks, HAFGRTR_EL2);
> +FGT_MASKS(hfgrtr2_masks, HFGRTR2_EL2);
> +FGT_MASKS(hfgwtr2_masks, HFGWTR2_EL2);
> +FGT_MASKS(hfgitr2_masks, HFGITR2_EL2);
> +FGT_MASKS(hdfgrtr2_masks, HDFGRTR2_EL2);
> +FGT_MASKS(hdfgwtr2_masks, HDFGWTR2_EL2);
>
>  static __init bool aggregate_fgt(union trap_config tc)
>  {
>         struct fgt_masks *rmasks, *wmasks;
> +       u64 rresx, wresx;
>
>         switch (tc.fgt) {
>         case HFGRTR_GROUP:
> @@ -2154,24 +2155,27 @@ static __init bool aggregate_fgt(union trap_config tc)
>                 break;
>         }
>
> +       rresx = rmasks->res0 | rmasks->res1;
> +       if (wmasks)
> +               wresx = wmasks->res0 | wmasks->res1;
> +
>         /*
>          * A bit can be reserved in either the R or W register, but
>          * not both.
>          */
> -       if ((BIT(tc.bit) & rmasks->res0) &&
> -           (!wmasks || (BIT(tc.bit) & wmasks->res0)))
> +       if ((BIT(tc.bit) & rresx) && (!wmasks || (BIT(tc.bit) & wresx)))
>                 return false;
>
>         if (tc.pol)
> -               rmasks->mask |= BIT(tc.bit) & ~rmasks->res0;
> +               rmasks->mask |= BIT(tc.bit) & ~rresx;
>         else
> -               rmasks->nmask |= BIT(tc.bit) & ~rmasks->res0;
> +               rmasks->nmask |= BIT(tc.bit) & ~rresx;
>
>         if (wmasks) {
>                 if (tc.pol)
> -                       wmasks->mask |= BIT(tc.bit) & ~wmasks->res0;
> +                       wmasks->mask |= BIT(tc.bit) & ~wresx;
>                 else
> -                       wmasks->nmask |= BIT(tc.bit) & ~wmasks->res0;
> +                       wmasks->nmask |= BIT(tc.bit) & ~wresx;
>         }
>
>         return true;
> @@ -2180,7 +2184,6 @@ static __init bool aggregate_fgt(union trap_config tc)
>  static __init int check_fgt_masks(struct fgt_masks *masks)
>  {
>         unsigned long duplicate = masks->mask & masks->nmask;
> -       u64 res0 = masks->res0;
>         int ret = 0;
>
>         if (duplicate) {
> @@ -2194,10 +2197,14 @@ static __init int check_fgt_masks(struct fgt_masks *masks)
>                 ret = -EINVAL;
>         }
>
> -       masks->res0 = ~(masks->mask | masks->nmask);
> -       if (masks->res0 != res0)
> -               kvm_info("Implicit %s = %016llx, expecting %016llx\n",
> -                        masks->str, masks->res0, res0);
> +       if ((masks->res0 | masks->res1 | masks->mask | masks->nmask) != GENMASK(63, 0) ||
> +           (masks->res0 & masks->res1)  || (masks->res0 & masks->mask) ||
> +           (masks->res0 & masks->nmask) || (masks->res1 & masks->mask)  ||
> +           (masks->res1 & masks->nmask) || (masks->mask & masks->nmask)) {
> +               kvm_info("Inconsistent masks for %s (%016llx, %016llx, %016llx, %016llx)\n",
> +                        masks->str, masks->res0, masks->res1, masks->mask, masks->nmask);
> +               masks->res0 = ~(masks->res1 | masks->mask | masks->nmask);
> +       }
>
>         return ret;
>  }
> --
> 2.47.3
>


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 1/6] KVM: arm64: Fix EL2 S1 XN handling for hVHE setups
  2025-12-11 13:37   ` Fuad Tabba
@ 2025-12-11 14:30     ` Marc Zyngier
  0 siblings, 0 replies; 21+ messages in thread
From: Marc Zyngier @ 2025-12-11 14:30 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Alexandru Elisei, Sascha Bischoff,
	Quentin Perret, Sebastian Ene

On Thu, 11 Dec 2025 13:37:50 +0000,
Fuad Tabba <tabba@google.com> wrote:
> 
> Hi Marc,
> 
> 
> On Wed, 10 Dec 2025 at 17:30, Marc Zyngier <maz@kernel.org> wrote:
> >
> > The current XN implementation is tied to the EL2 translation regime,
> > and falls flat on its face with the EL2&0 one that is used for hVHE,
> > as the permission bit for privileged execution is a different one.
> >
> > Fixes: 6537565fd9b7f ("KVM: arm64: Adjust EL2 stage-1 leaf AP bits when ARM64_KVM_HVHE is set")
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_pgtable.h | 10 +++++++++-
> >  1 file changed, 9 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> > index fc02de43c68dd..be68b89692065 100644
> > --- a/arch/arm64/include/asm/kvm_pgtable.h
> > +++ b/arch/arm64/include/asm/kvm_pgtable.h
> > @@ -87,7 +87,15 @@ typedef u64 kvm_pte_t;
> >
> >  #define KVM_PTE_LEAF_ATTR_HI_SW                GENMASK(58, 55)
> >
> > -#define KVM_PTE_LEAF_ATTR_HI_S1_XN     BIT(54)
> > +#define __KVM_PTE_LEAF_ATTR_HI_S1_XN   BIT(54)
> > +#define __KVM_PTE_LEAF_ATTR_HI_S1_UXN  BIT(54)
> > +#define __KVM_PTE_LEAF_ATTR_HI_S1_PXN  BIT(53)
> > +
> > +#define KVM_PTE_LEAF_ATTR_HI_S1_XN                                     \
> > +       ({ cpus_have_final_cap(ARM64_KVM_HVHE) ?                        \
> > +                       (__KVM_PTE_LEAF_ATTR_HI_S1_UXN |                \
> > +                        __KVM_PTE_LEAF_ATTR_HI_S1_PXN) :               \
> > +                       __KVM_PTE_LEAF_ATTR_HI_S1_XN; })
> >
> >  #define KVM_PTE_LEAF_ATTR_HI_S2_XN     GENMASK(54, 53)
> 
> I was just wondering, is this patch really necessary, considering
> patch 6/6 redoes the whole thing and fixes the bug?

I want this one to be in a state where it can be backported, without
having to deal with the NV-induced FEAT_XNX support. This is why it is
a standalone patch.

> That said:
> Reviewed-by: Fuad Tabba <tabba@google.com>

Thanks!

	M.

-- 
Without deviation from the norm, progress is not possible.


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 5/6] KVM: arm64: Convert VTCR_EL2 to config-driven sanitisation
  2025-12-10 17:30 ` [PATCH v2 5/6] KVM: arm64: Convert VTCR_EL2 to config-driven sanitisation Marc Zyngier
@ 2025-12-11 14:44   ` Fuad Tabba
  0 siblings, 0 replies; 21+ messages in thread
From: Fuad Tabba @ 2025-12-11 14:44 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Alexandru Elisei, Sascha Bischoff,
	Quentin Perret, Sebastian Ene

On Wed, 10 Dec 2025 at 17:30, Marc Zyngier <maz@kernel.org> wrote:
>
> Describe all the VTCR_EL2 fields and their respective configurations,
> making sure that we correctly ignore the bits that are not defined
> for a given guest configuration.
>
> Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---

Reviewed-by: Fuad Tabba <tabba@google.com>

/fuad

>  arch/arm64/kvm/config.c | 69 +++++++++++++++++++++++++++++++++++++++++
>  arch/arm64/kvm/nested.c |  3 +-
>  2 files changed, 70 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index 3845b188551b6..9c04f895d3769 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -141,6 +141,7 @@ struct reg_feat_map_desc {
>  #define FEAT_AA64EL1           ID_AA64PFR0_EL1, EL1, IMP
>  #define FEAT_AA64EL2           ID_AA64PFR0_EL1, EL2, IMP
>  #define FEAT_AA64EL3           ID_AA64PFR0_EL1, EL3, IMP
> +#define FEAT_SEL2              ID_AA64PFR0_EL1, SEL2, IMP
>  #define FEAT_AIE               ID_AA64MMFR3_EL1, AIE, IMP
>  #define FEAT_S2POE             ID_AA64MMFR3_EL1, S2POE, IMP
>  #define FEAT_S1POE             ID_AA64MMFR3_EL1, S1POE, IMP
> @@ -202,6 +203,8 @@ struct reg_feat_map_desc {
>  #define FEAT_ASID2             ID_AA64MMFR4_EL1, ASID2, IMP
>  #define FEAT_MEC               ID_AA64MMFR3_EL1, MEC, IMP
>  #define FEAT_HAFT              ID_AA64MMFR1_EL1, HAFDBS, HAFT
> +#define FEAT_HDBSS             ID_AA64MMFR1_EL1, HAFDBS, HDBSS
> +#define FEAT_HPDS2             ID_AA64MMFR1_EL1, HPDS, HPDS2
>  #define FEAT_BTI               ID_AA64PFR1_EL1, BT, IMP
>  #define FEAT_ExS               ID_AA64MMFR0_EL1, EXS, IMP
>  #define FEAT_IESB              ID_AA64MMFR2_EL1, IESB, IMP
> @@ -219,6 +222,7 @@ struct reg_feat_map_desc {
>  #define FEAT_FGT2              ID_AA64MMFR0_EL1, FGT, FGT2
>  #define FEAT_MTPMU             ID_AA64DFR0_EL1, MTPMU, IMP
>  #define FEAT_HCX               ID_AA64MMFR1_EL1, HCX, IMP
> +#define FEAT_S2PIE             ID_AA64MMFR3_EL1, S2PIE, IMP
>
>  static bool not_feat_aa64el3(struct kvm *kvm)
>  {
> @@ -362,6 +366,28 @@ static bool feat_pmuv3p9(struct kvm *kvm)
>         return check_pmu_revision(kvm, V3P9);
>  }
>
> +#define has_feat_s2tgran(k, s)                                         \
> +  ((kvm_has_feat_enum(kvm, ID_AA64MMFR0_EL1, TGRAN##s##_2, TGRAN##s) && \
> +    kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN##s, IMP))                     || \
> +   kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN##s##_2, IMP))
> +
> +static bool feat_lpa2(struct kvm *kvm)
> +{
> +       return ((kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4, 52_BIT)    ||
> +                !kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4, IMP))     &&
> +               (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16, 52_BIT)   ||
> +                !kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16, IMP))    &&
> +               (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN4_2, 52_BIT)  ||
> +                !has_feat_s2tgran(kvm, 4))                             &&
> +               (kvm_has_feat(kvm, ID_AA64MMFR0_EL1, TGRAN16_2, 52_BIT) ||
> +                !has_feat_s2tgran(kvm, 16)));
> +}
> +
> +static bool feat_vmid16(struct kvm *kvm)
> +{
> +       return kvm_has_feat_enum(kvm, ID_AA64MMFR1_EL1, VMIDBits, 16);
> +}
> +
>  static bool compute_hcr_rw(struct kvm *kvm, u64 *bits)
>  {
>         /* This is purely academic: AArch32 and NV are mutually exclusive */
> @@ -1168,6 +1194,44 @@ static const struct reg_bits_to_feat_map mdcr_el2_feat_map[] = {
>  static const DECLARE_FEAT_MAP(mdcr_el2_desc, MDCR_EL2,
>                               mdcr_el2_feat_map, FEAT_AA64EL2);
>
> +static const struct reg_bits_to_feat_map vtcr_el2_feat_map[] = {
> +       NEEDS_FEAT(VTCR_EL2_HDBSS, FEAT_HDBSS),
> +       NEEDS_FEAT(VTCR_EL2_HAFT, FEAT_HAFT),
> +       NEEDS_FEAT(VTCR_EL2_TL0         |
> +                  VTCR_EL2_TL1         |
> +                  VTCR_EL2_AssuredOnly |
> +                  VTCR_EL2_GCSH,
> +                  FEAT_THE),
> +       NEEDS_FEAT(VTCR_EL2_D128, FEAT_D128),
> +       NEEDS_FEAT(VTCR_EL2_S2POE, FEAT_S2POE),
> +       NEEDS_FEAT(VTCR_EL2_S2PIE, FEAT_S2PIE),
> +       NEEDS_FEAT(VTCR_EL2_SL2         |
> +                  VTCR_EL2_DS,
> +                  feat_lpa2),
> +       NEEDS_FEAT(VTCR_EL2_NSA         |
> +                  VTCR_EL2_NSW,
> +                  FEAT_SEL2),
> +       NEEDS_FEAT(VTCR_EL2_HWU62       |
> +                  VTCR_EL2_HWU61       |
> +                  VTCR_EL2_HWU60       |
> +                  VTCR_EL2_HWU59,
> +                  FEAT_HPDS2),
> +       NEEDS_FEAT(VTCR_EL2_HD, ID_AA64MMFR1_EL1, HAFDBS, DBM),
> +       NEEDS_FEAT(VTCR_EL2_HA, ID_AA64MMFR1_EL1, HAFDBS, AF),
> +       NEEDS_FEAT(VTCR_EL2_VS, feat_vmid16),
> +       NEEDS_FEAT(VTCR_EL2_PS          |
> +                  VTCR_EL2_TG0         |
> +                  VTCR_EL2_SH0         |
> +                  VTCR_EL2_ORGN0       |
> +                  VTCR_EL2_IRGN0       |
> +                  VTCR_EL2_SL0         |
> +                  VTCR_EL2_T0SZ,
> +                  FEAT_AA64EL1),
> +};
> +
> +static const DECLARE_FEAT_MAP(vtcr_el2_desc, VTCR_EL2,
> +                             vtcr_el2_feat_map, FEAT_AA64EL2);
> +
>  static void __init check_feat_map(const struct reg_bits_to_feat_map *map,
>                                   int map_size, u64 resx, const char *str)
>  {
> @@ -1211,6 +1275,7 @@ void __init check_feature_map(void)
>         check_reg_desc(&tcr2_el2_desc);
>         check_reg_desc(&sctlr_el1_desc);
>         check_reg_desc(&mdcr_el2_desc);
> +       check_reg_desc(&vtcr_el2_desc);
>  }
>
>  static bool idreg_feat_match(struct kvm *kvm, const struct reg_bits_to_feat_map *map)
> @@ -1425,6 +1490,10 @@ void get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg, u64 *res0, u64 *r
>                 *res0 = compute_reg_res0_bits(kvm, &mdcr_el2_desc, 0, 0);
>                 *res1 = MDCR_EL2_RES1;
>                 break;
> +       case VTCR_EL2:
> +               *res0 = compute_reg_res0_bits(kvm, &vtcr_el2_desc, 0, 0);
> +               *res1 = VTCR_EL2_RES1;
> +               break;
>         default:
>                 WARN_ON_ONCE(1);
>                 *res0 = *res1 = 0;
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index e1ef8930c97b3..606cebcaa7c09 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -1719,8 +1719,7 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
>         set_sysreg_masks(kvm, VTTBR_EL2, res0, res1);
>
>         /* VTCR_EL2 */
> -       res0 = GENMASK(63, 32) | GENMASK(30, 20);
> -       res1 = BIT(31);
> +       get_reg_fixed_bits(kvm, VTCR_EL2, &res0, &res1);
>         set_sysreg_masks(kvm, VTCR_EL2, res0, res1);
>
>         /* VMPIDR_EL2 */
> --
> 2.47.3
>


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 6/6] KVM: arm64: Honor UX/PX attributes for EL2 S1 mappings
  2025-12-10 17:30 ` [PATCH v2 6/6] KVM: arm64: Honor UX/PX attributes for EL2 S1 mappings Marc Zyngier
@ 2025-12-11 14:51   ` Fuad Tabba
  2025-12-11 15:18   ` Joey Gouly
  1 sibling, 0 replies; 21+ messages in thread
From: Fuad Tabba @ 2025-12-11 14:51 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Alexandru Elisei, Sascha Bischoff,
	Quentin Perret, Sebastian Ene

On Wed, 10 Dec 2025 at 17:30, Marc Zyngier <maz@kernel.org> wrote:
>
> Now that we potentially have two bits to deal with when setting
> execution permissions, make sure we correctly handle them both when
> building the page tables and when reading back from them.
>
> Reported-by: Alexandru Elisei <alexandru.elisei@arm.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>

Reviewed-by: Fuad Tabba <tabba@google.com>

/fuad

> ---
>  arch/arm64/include/asm/kvm_pgtable.h | 12 +++---------
>  arch/arm64/kvm/hyp/pgtable.c         | 24 +++++++++++++++++++++---
>  2 files changed, 24 insertions(+), 12 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> index be68b89692065..095e6b73740a6 100644
> --- a/arch/arm64/include/asm/kvm_pgtable.h
> +++ b/arch/arm64/include/asm/kvm_pgtable.h
> @@ -87,15 +87,9 @@ typedef u64 kvm_pte_t;
>
>  #define KVM_PTE_LEAF_ATTR_HI_SW                GENMASK(58, 55)
>
> -#define __KVM_PTE_LEAF_ATTR_HI_S1_XN   BIT(54)
> -#define __KVM_PTE_LEAF_ATTR_HI_S1_UXN  BIT(54)
> -#define __KVM_PTE_LEAF_ATTR_HI_S1_PXN  BIT(53)
> -
> -#define KVM_PTE_LEAF_ATTR_HI_S1_XN                                     \
> -       ({ cpus_have_final_cap(ARM64_KVM_HVHE) ?                        \
> -                       (__KVM_PTE_LEAF_ATTR_HI_S1_UXN |                \
> -                        __KVM_PTE_LEAF_ATTR_HI_S1_PXN) :               \
> -                       __KVM_PTE_LEAF_ATTR_HI_S1_XN; })
> +#define KVM_PTE_LEAF_ATTR_HI_S1_XN     BIT(54)
> +#define KVM_PTE_LEAF_ATTR_HI_S1_UXN    BIT(54)
> +#define KVM_PTE_LEAF_ATTR_HI_S1_PXN    BIT(53)
>
>  #define KVM_PTE_LEAF_ATTR_HI_S2_XN     GENMASK(54, 53)
>
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index e0bd6a0172729..97c0835d25590 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -342,6 +342,9 @@ static int hyp_set_prot_attr(enum kvm_pgtable_prot prot, kvm_pte_t *ptep)
>         if (!(prot & KVM_PGTABLE_PROT_R))
>                 return -EINVAL;
>
> +       if (!cpus_have_final_cap(ARM64_KVM_HVHE))
> +               prot &= ~KVM_PGTABLE_PROT_UX;
> +
>         if (prot & KVM_PGTABLE_PROT_X) {
>                 if (prot & KVM_PGTABLE_PROT_W)
>                         return -EINVAL;
> @@ -351,8 +354,16 @@ static int hyp_set_prot_attr(enum kvm_pgtable_prot prot, kvm_pte_t *ptep)
>
>                 if (system_supports_bti_kernel())
>                         attr |= KVM_PTE_LEAF_ATTR_HI_S1_GP;
> +       }
> +
> +       if (cpus_have_final_cap(ARM64_KVM_HVHE)) {
> +               if (!(prot & KVM_PGTABLE_PROT_PX))
> +                       attr |= KVM_PTE_LEAF_ATTR_HI_S1_PXN;
> +               if (!(prot & KVM_PGTABLE_PROT_UX))
> +                       attr |= KVM_PTE_LEAF_ATTR_HI_S1_UXN;
>         } else {
> -               attr |= KVM_PTE_LEAF_ATTR_HI_S1_XN;
> +               if (!(prot & KVM_PGTABLE_PROT_PX))
> +                       attr |= KVM_PTE_LEAF_ATTR_HI_S1_XN;
>         }
>
>         attr |= FIELD_PREP(KVM_PTE_LEAF_ATTR_LO_S1_AP, ap);
> @@ -373,8 +384,15 @@ enum kvm_pgtable_prot kvm_pgtable_hyp_pte_prot(kvm_pte_t pte)
>         if (!kvm_pte_valid(pte))
>                 return prot;
>
> -       if (!(pte & KVM_PTE_LEAF_ATTR_HI_S1_XN))
> -               prot |= KVM_PGTABLE_PROT_X;
> +       if (cpus_have_final_cap(ARM64_KVM_HVHE)) {
> +               if (!(pte & KVM_PTE_LEAF_ATTR_HI_S1_PXN))
> +                       prot |= KVM_PGTABLE_PROT_PX;
> +               if (!(pte & KVM_PTE_LEAF_ATTR_HI_S1_UXN))
> +                       prot |= KVM_PGTABLE_PROT_UX;
> +       } else {
> +               if (!(pte & KVM_PTE_LEAF_ATTR_HI_S1_XN))
> +                       prot |= KVM_PGTABLE_PROT_PX;
> +       }
>
>         ap = FIELD_GET(KVM_PTE_LEAF_ATTR_LO_S1_AP, pte);
>         if (ap == KVM_PTE_LEAF_ATTR_LO_S1_AP_RO)
> --
> 2.47.3
>


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 0/6] KVM: arm64: VTCR_EL2 conversion to feature dependency framework
  2025-12-10 17:30 [PATCH v2 0/6] KVM: arm64: VTCR_EL2 conversion to feature dependency framework Marc Zyngier
                   ` (5 preceding siblings ...)
  2025-12-10 17:30 ` [PATCH v2 6/6] KVM: arm64: Honor UX/PX attributes for EL2 S1 mappings Marc Zyngier
@ 2025-12-11 14:55 ` Fuad Tabba
  6 siblings, 0 replies; 21+ messages in thread
From: Fuad Tabba @ 2025-12-11 14:55 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Alexandru Elisei, Sascha Bischoff,
	Quentin Perret, Sebastian Ene

Hi Marc,

On Wed, 10 Dec 2025 at 17:30, Marc Zyngier <maz@kernel.org> wrote:
>
> This is a follow-up on my VTCR_EL2 sanitisation series, with extra
> goodies, mostly as a consequence of Alexandru's patches and review.
>
> * From [1]:
>
>   - Added two patches fixing some FEAT_XNX issues: one dating back
> >     to the hVHE introduction, and the other related to the newer
>     stuff in 6.19.
>
>   - Expanded the scope of the RES1 handling in DECLARE_FEAT_MAP() to
>     deal with FGTs, as we're about to get quality stuff thanks to
>     GICv5.
>
>   - Simplified the S2TGRANx detection slightly.
>
>   - Collected RBs, with thanks
>
> [1] https://lore.kernel.org/r/20251129144525.2609207-1-maz@kernel.org
>
> Marc Zyngier (6):
>   KVM: arm64: Fix EL2 S1 XN handling for hVHE setups
>   arm64: Convert ID_AA64MMFR0_EL1.TGRAN{4,16,64}_2 to UnsignedEnum
>   arm64: Convert VTCR_EL2 to sysreg infrastructure
>   KVM: arm64: Account for RES1 bits in DECLARE_FEAT_MAP() and co
>   KVM: arm64: Convert VTCR_EL2 to config-driven sanitisation
>   KVM: arm64: Honor UX/PX attributes for EL2 S1 mappings

For the series, on qemu, hVHE and protected hVHE:
Tested-by: Fuad Tabba <tabba@google.com>


Cheers,
/fuad



>  arch/arm64/include/asm/kvm_arm.h     | 52 ++++-----------
>  arch/arm64/include/asm/kvm_host.h    |  1 +
>  arch/arm64/include/asm/kvm_pgtable.h |  2 +
>  arch/arm64/include/asm/sysreg.h      |  1 -
>  arch/arm64/kvm/config.c              | 94 ++++++++++++++++++++++++----
>  arch/arm64/kvm/emulate-nested.c      | 55 +++++++++-------
>  arch/arm64/kvm/hyp/pgtable.c         | 32 +++++++---
>  arch/arm64/kvm/nested.c              | 11 ++--
>  arch/arm64/tools/sysreg              | 63 ++++++++++++++++++-
>  9 files changed, 217 insertions(+), 94 deletions(-)
>
> --
> 2.47.3
>


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 6/6] KVM: arm64: Honor UX/PX attributes for EL2 S1 mappings
  2025-12-10 17:30 ` [PATCH v2 6/6] KVM: arm64: Honor UX/PX attributes for EL2 S1 mappings Marc Zyngier
  2025-12-11 14:51   ` Fuad Tabba
@ 2025-12-11 15:18   ` Joey Gouly
  2025-12-11 16:21     ` Marc Zyngier
  1 sibling, 1 reply; 21+ messages in thread
From: Joey Gouly @ 2025-12-11 15:18 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Suzuki K Poulose, Oliver Upton,
	Zenghui Yu, Alexandru Elisei, Sascha Bischoff, Quentin Perret,
	Fuad Tabba, Sebastian Ene

Question,

On Wed, Dec 10, 2025 at 05:30:24PM +0000, Marc Zyngier wrote:
> Now that we potentially have two bits to deal with when setting
> execution permissions, make sure we correctly handle them both when
> building the page tables and when reading back from them.
> 
> Reported-by: Alexandru Elisei <alexandru.elisei@arm.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_pgtable.h | 12 +++---------
>  arch/arm64/kvm/hyp/pgtable.c         | 24 +++++++++++++++++++++---
>  2 files changed, 24 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> index be68b89692065..095e6b73740a6 100644
> --- a/arch/arm64/include/asm/kvm_pgtable.h
> +++ b/arch/arm64/include/asm/kvm_pgtable.h
> @@ -87,15 +87,9 @@ typedef u64 kvm_pte_t;
>  
>  #define KVM_PTE_LEAF_ATTR_HI_SW		GENMASK(58, 55)
>  
> -#define __KVM_PTE_LEAF_ATTR_HI_S1_XN	BIT(54)
> -#define __KVM_PTE_LEAF_ATTR_HI_S1_UXN	BIT(54)
> -#define __KVM_PTE_LEAF_ATTR_HI_S1_PXN	BIT(53)
> -
> -#define KVM_PTE_LEAF_ATTR_HI_S1_XN					\
> -	({ cpus_have_final_cap(ARM64_KVM_HVHE) ?			\
> -			(__KVM_PTE_LEAF_ATTR_HI_S1_UXN |		\
> -			 __KVM_PTE_LEAF_ATTR_HI_S1_PXN) :		\
> -			__KVM_PTE_LEAF_ATTR_HI_S1_XN; })
> +#define KVM_PTE_LEAF_ATTR_HI_S1_XN	BIT(54)
> +#define KVM_PTE_LEAF_ATTR_HI_S1_UXN	BIT(54)
> +#define KVM_PTE_LEAF_ATTR_HI_S1_PXN	BIT(53)
>  
>  #define KVM_PTE_LEAF_ATTR_HI_S2_XN	GENMASK(54, 53)
>  
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index e0bd6a0172729..97c0835d25590 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -342,6 +342,9 @@ static int hyp_set_prot_attr(enum kvm_pgtable_prot prot, kvm_pte_t *ptep)
>  	if (!(prot & KVM_PGTABLE_PROT_R))
>  		return -EINVAL;
>  
> +	if (!cpus_have_final_cap(ARM64_KVM_HVHE))
> +		prot &= ~KVM_PGTABLE_PROT_UX;

Trying to understand this part. We don't consider KVM_PGTABLE_PROT_UX below
when !HVHE, and we don't set it in kvm_pgtable_hyp_pte_prot() when !HVHE
either, so can it ever actually be set?

Otherwise LGTM!

Thanks,
Joey

> +
>  	if (prot & KVM_PGTABLE_PROT_X) {
>  		if (prot & KVM_PGTABLE_PROT_W)
>  			return -EINVAL;
> @@ -351,8 +354,16 @@ static int hyp_set_prot_attr(enum kvm_pgtable_prot prot, kvm_pte_t *ptep)
>  
>  		if (system_supports_bti_kernel())
>  			attr |= KVM_PTE_LEAF_ATTR_HI_S1_GP;
> +	}
> +
> +	if (cpus_have_final_cap(ARM64_KVM_HVHE)) {
> +		if (!(prot & KVM_PGTABLE_PROT_PX))
> +			attr |= KVM_PTE_LEAF_ATTR_HI_S1_PXN;
> +		if (!(prot & KVM_PGTABLE_PROT_UX))
> +			attr |= KVM_PTE_LEAF_ATTR_HI_S1_UXN;
>  	} else {
> -		attr |= KVM_PTE_LEAF_ATTR_HI_S1_XN;
> +		if (!(prot & KVM_PGTABLE_PROT_PX))
> +			attr |= KVM_PTE_LEAF_ATTR_HI_S1_XN;
>  	}
>  
>  	attr |= FIELD_PREP(KVM_PTE_LEAF_ATTR_LO_S1_AP, ap);
> @@ -373,8 +384,15 @@ enum kvm_pgtable_prot kvm_pgtable_hyp_pte_prot(kvm_pte_t pte)
>  	if (!kvm_pte_valid(pte))
>  		return prot;
>  
> -	if (!(pte & KVM_PTE_LEAF_ATTR_HI_S1_XN))
> -		prot |= KVM_PGTABLE_PROT_X;
> +	if (cpus_have_final_cap(ARM64_KVM_HVHE)) {
> +		if (!(pte & KVM_PTE_LEAF_ATTR_HI_S1_PXN))
> +			prot |= KVM_PGTABLE_PROT_PX;
> +		if (!(pte & KVM_PTE_LEAF_ATTR_HI_S1_UXN))
> +			prot |= KVM_PGTABLE_PROT_UX;
> +	} else {
> +		if (!(pte & KVM_PTE_LEAF_ATTR_HI_S1_XN))
> +			prot |= KVM_PGTABLE_PROT_PX;
> +	}
>  
>  	ap = FIELD_GET(KVM_PTE_LEAF_ATTR_LO_S1_AP, pte);
>  	if (ap == KVM_PTE_LEAF_ATTR_LO_S1_AP_RO)
> -- 
> 2.47.3
> 


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 6/6] KVM: arm64: Honor UX/PX attributes for EL2 S1 mappings
  2025-12-11 15:18   ` Joey Gouly
@ 2025-12-11 16:21     ` Marc Zyngier
  2025-12-12 16:00       ` Joey Gouly
  0 siblings, 1 reply; 21+ messages in thread
From: Marc Zyngier @ 2025-12-11 16:21 UTC (permalink / raw)
  To: Joey Gouly
  Cc: kvmarm, linux-arm-kernel, kvm, Suzuki K Poulose, Oliver Upton,
	Zenghui Yu, Alexandru Elisei, Sascha Bischoff, Quentin Perret,
	Fuad Tabba, Sebastian Ene

On Thu, 11 Dec 2025 15:18:51 +0000,
Joey Gouly <joey.gouly@arm.com> wrote:
> 
> Question,
> 
> On Wed, Dec 10, 2025 at 05:30:24PM +0000, Marc Zyngier wrote:
> > Now that we potentially have two bits to deal with when setting
> > > execution permissions, make sure we correctly handle them both when
> > > building the page tables and when reading back from them.
> > 
> > Reported-by: Alexandru Elisei <alexandru.elisei@arm.com>
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_pgtable.h | 12 +++---------
> >  arch/arm64/kvm/hyp/pgtable.c         | 24 +++++++++++++++++++++---
> >  2 files changed, 24 insertions(+), 12 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> > index be68b89692065..095e6b73740a6 100644
> > --- a/arch/arm64/include/asm/kvm_pgtable.h
> > +++ b/arch/arm64/include/asm/kvm_pgtable.h
> > @@ -87,15 +87,9 @@ typedef u64 kvm_pte_t;
> >  
> >  #define KVM_PTE_LEAF_ATTR_HI_SW		GENMASK(58, 55)
> >  
> > -#define __KVM_PTE_LEAF_ATTR_HI_S1_XN	BIT(54)
> > -#define __KVM_PTE_LEAF_ATTR_HI_S1_UXN	BIT(54)
> > -#define __KVM_PTE_LEAF_ATTR_HI_S1_PXN	BIT(53)
> > -
> > -#define KVM_PTE_LEAF_ATTR_HI_S1_XN					\
> > -	({ cpus_have_final_cap(ARM64_KVM_HVHE) ?			\
> > -			(__KVM_PTE_LEAF_ATTR_HI_S1_UXN |		\
> > -			 __KVM_PTE_LEAF_ATTR_HI_S1_PXN) :		\
> > -			__KVM_PTE_LEAF_ATTR_HI_S1_XN; })
> > +#define KVM_PTE_LEAF_ATTR_HI_S1_XN	BIT(54)
> > +#define KVM_PTE_LEAF_ATTR_HI_S1_UXN	BIT(54)
> > +#define KVM_PTE_LEAF_ATTR_HI_S1_PXN	BIT(53)
> >  
> >  #define KVM_PTE_LEAF_ATTR_HI_S2_XN	GENMASK(54, 53)
> >  
> > diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> > index e0bd6a0172729..97c0835d25590 100644
> > --- a/arch/arm64/kvm/hyp/pgtable.c
> > +++ b/arch/arm64/kvm/hyp/pgtable.c
> > @@ -342,6 +342,9 @@ static int hyp_set_prot_attr(enum kvm_pgtable_prot prot, kvm_pte_t *ptep)
> >  	if (!(prot & KVM_PGTABLE_PROT_R))
> >  		return -EINVAL;
> >  
> > +	if (!cpus_have_final_cap(ARM64_KVM_HVHE))
> > +		prot &= ~KVM_PGTABLE_PROT_UX;
> 
> Trying to understand this part. We don't consider KVM_PGTABLE_PROT_UX below
> when !HVHE, and we don't set it in kvm_pgtable_hyp_pte_prot() when !HVHE
> either, so can it ever actually be set?

Because KVM_PGTABLE_PROT_X, which is directly passed by the high-level
> code, is defined as follows:

	KVM_PGTABLE_PROT_X			= KVM_PGTABLE_PROT_PX	|
						  KVM_PGTABLE_PROT_UX,

We *could* make that value dependent on HVHE, but since that's in an
enum, it is pretty ugly to do (not impossible though).

But it is in the following code that this becomes useful...

>
> Otherwise LGTM!
> 
> Thanks,
> Joey
> 
> > +
> >  	if (prot & KVM_PGTABLE_PROT_X) {
> >  		if (prot & KVM_PGTABLE_PROT_W)
> >  			return -EINVAL;

... here. If you were passed UX (and only that), and you're
!HVHE, you won't have execution at all, and can allow writes.
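
To make it concrete, here is a stand-alone toy model of the execute
permission handling (the names mirror the kernel's, but this is a
simplified sketch, not the actual KVM code):

	#include <stdio.h>
	#include <stdbool.h>
	#include <stdint.h>

	#define PROT_PX		(1u << 0)
	#define PROT_UX		(1u << 1)
	#define PROT_X		(PROT_PX | PROT_UX)	/* what callers pass */

	#define ATTR_S1_XN	(1ull << 54)	/* EL2 regime (nVHE) */
	#define ATTR_S1_UXN	(1ull << 54)	/* EL2&0 regime (hVHE) */
	#define ATTR_S1_PXN	(1ull << 53)	/* EL2&0 regime (hVHE) */

	static uint64_t xn_attr(unsigned int prot, bool hvhe)
	{
		uint64_t attr = 0;

		/* !HVHE: there is no EL0 view, so strip UX up front */
		if (!hvhe)
			prot &= ~PROT_UX;

		if (hvhe) {
			if (!(prot & PROT_PX))
				attr |= ATTR_S1_PXN;
			if (!(prot & PROT_UX))
				attr |= ATTR_S1_UXN;
		} else {
			if (!(prot & PROT_PX))
				attr |= ATTR_S1_XN;
		}

		return attr;
	}

	int main(void)
	{
		/* UX-only on !HVHE: nothing executable, XN gets set */
		printf("%#llx\n", (unsigned long long)xn_attr(PROT_UX, false));
		/* UX-only on HVHE: PXN set, UXN left clear */
		printf("%#llx\n", (unsigned long long)xn_attr(PROT_UX, true));
		return 0;
	}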

Does that make sense?

	M.

-- 
Without deviation from the norm, progress is not possible.


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 4/6] KVM: arm64: Account for RES1 bits in DECLARE_FEAT_MAP() and co
  2025-12-10 17:30 ` [PATCH v2 4/6] KVM: arm64: Account for RES1 bits in DECLARE_FEAT_MAP() and co Marc Zyngier
  2025-12-11 14:30   ` Fuad Tabba
@ 2025-12-11 17:23   ` Sascha Bischoff
  1 sibling, 0 replies; 21+ messages in thread
From: Sascha Bischoff @ 2025-12-11 17:23 UTC (permalink / raw)
  To: maz@kernel.org, kvmarm@lists.linux.dev, kvm@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org
  Cc: yuzenghui@huawei.com, qperret@google.com, Suzuki Poulose,
	sebastianene@google.com, nd, oupton@kernel.org, Alexandru Elisei,
	Joey Gouly, tabba@google.com

On Wed, 2025-12-10 at 17:30 +0000, Marc Zyngier wrote:
> None of the registers we manage in the feature dependency
> infrastructure so far has any RES1 bit. This is about to change, as
> VTCR_EL2 has its bit 31 being RES1.
> 
> In order to not fail the consistency checks by not describing a bit,
> add RES1 bits to the set of immutable bits. This requires some extra
> surgery for the FGT handling, as we now need to track RES1 bits there
> as well.
> 
> There are no RES1 FGT bits *yet*. Watch this space.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>

I've tested this change with my WIP vgic_v5 support, which indeed
introduces a RES1 FGT bit. I can confirm that the FGT support is now
working as I'd expect it to. Thanks for this change!
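
For the record, the consistency check below boils down to requiring
that res0/res1/mask/nmask form an exact partition of the 64 register
bits; a stand-alone sketch of that invariant (toy code, not the
kernel's):

	#include <stdint.h>
	#include <stdbool.h>

	struct masks {
		uint64_t res0, res1, mask, nmask;
	};

	/* every one of the 64 bits must be claimed exactly once */
	static bool masks_consistent(const struct masks *m)
	{
		uint64_t all = m->res0 | m->res1 | m->mask | m->nmask;

		return all == UINT64_MAX &&
		       !(m->res0 & m->res1)  && !(m->res0 & m->mask)  &&
		       !(m->res0 & m->nmask) && !(m->res1 & m->mask)  &&
		       !(m->res1 & m->nmask) && !(m->mask & m->nmask);
	}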

Tested-by: Sascha Bischoff <sascha.bischoff@arm.com>

Thanks,
Sascha

> ---
>  arch/arm64/include/asm/kvm_host.h |  1 +
>  arch/arm64/kvm/config.c           | 25 +++++++-------
>  arch/arm64/kvm/emulate-nested.c   | 55 +++++++++++++++++--------------
>  3 files changed, 45 insertions(+), 36 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index ac7f970c78830..b552a1e03848c 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -638,6 +638,7 @@ struct fgt_masks {
>  	u64		mask;
>  	u64		nmask;
>  	u64		res0;
> +	u64		res1;
>  };
>  
>  extern struct fgt_masks hfgrtr_masks;
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index 24bb3f36e9d59..3845b188551b6 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -16,14 +16,14 @@
>   */
>  struct reg_bits_to_feat_map {
>  	union {
> -		u64	bits;
> -		u64	*res0p;
> +		u64		 bits;
> +		struct fgt_masks *masks;
>  	};
>  
>  #define	NEVER_FGU	BIT(0)	/* Can trap, but never UNDEF */
>  #define	CALL_FUNC	BIT(1)	/* Needs to evaluate tons of crap */
>  #define	FIXED_VALUE	BIT(2)	/* RAZ/WI or RAO/WI in KVM */
> -#define	RES0_POINTER	BIT(3)	/* Pointer to RES0 value instead of bits */
> +#define	MASKS_POINTER	BIT(3)	/* Pointer to fgt_masks struct instead of bits */
>  
>  	unsigned long	flags;
>  
> @@ -92,8 +92,8 @@ struct reg_feat_map_desc {
>  #define NEEDS_FEAT_FIXED(m, ...)			\
>  	__NEEDS_FEAT_FLAG(m, FIXED_VALUE, bits, __VA_ARGS__, 0)
>  
> -#define NEEDS_FEAT_RES0(p, ...)				\
> -	__NEEDS_FEAT_FLAG(p, RES0_POINTER, res0p, __VA_ARGS__)
> +#define NEEDS_FEAT_MASKS(p, ...)				\
> +	__NEEDS_FEAT_FLAG(p, MASKS_POINTER, masks, __VA_ARGS__)
>  
>  /*
>   * Declare the dependency between a set of bits and a set of features,
> @@ -109,19 +109,20 @@ struct reg_feat_map_desc {
>  #define DECLARE_FEAT_MAP(n, r, m, f)					\
>  	struct reg_feat_map_desc n = {					\
>  		.name			= #r,				\
> -		.feat_map		= NEEDS_FEAT(~r##_RES0, f), 	\
> +		.feat_map		= NEEDS_FEAT(~(r##_RES0 |	\
> +						       r##_RES1), f),	\
>  		.bit_feat_map		= m,				\
>  		.bit_feat_map_sz	= ARRAY_SIZE(m),		\
>  	}
>  
>  /*
>   * Specialised version of the above for FGT registers that have their
> - * RES0 masks described as struct fgt_masks.
> + * RESx masks described as struct fgt_masks.
>   */
>  #define DECLARE_FEAT_MAP_FGT(n, msk, m, f)				\
>  	struct reg_feat_map_desc n = {					\
>  		.name			= #msk,				\
> -		.feat_map		= NEEDS_FEAT_RES0(&msk.res0, f),\
> +		.feat_map		= NEEDS_FEAT_MASKS(&msk, f),	\
>  		.bit_feat_map		= m,				\
>  		.bit_feat_map_sz	= ARRAY_SIZE(m),		\
>  	}
> @@ -1168,21 +1169,21 @@ static const DECLARE_FEAT_MAP(mdcr_el2_desc, MDCR_EL2,
>  			      mdcr_el2_feat_map, FEAT_AA64EL2);
>  
>  static void __init check_feat_map(const struct reg_bits_to_feat_map *map,
> -				  int map_size, u64 res0, const char *str)
> +				  int map_size, u64 resx, const char *str)
>  {
>  	u64 mask = 0;
>  
>  	for (int i = 0; i < map_size; i++)
>  		mask |= map[i].bits;
>  
> -	if (mask != ~res0)
> +	if (mask != ~resx)
>  		kvm_err("Undefined %s behaviour, bits %016llx\n",
> -			str, mask ^ ~res0);
> +			str, mask ^ ~resx);
>  }
>  
>  static u64 reg_feat_map_bits(const struct reg_bits_to_feat_map *map)
>  {
> -	return map->flags & RES0_POINTER ? ~(*map->res0p) : map->bits;
> +	return map->flags & MASKS_POINTER ? (map->masks->mask | map->masks->nmask) : map->bits;
>  }
>  
>  static void __init check_reg_desc(const struct reg_feat_map_desc *r)
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index 834f13fb1fb7d..75d49f83342a5 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -2105,23 +2105,24 @@ static u32 encoding_next(u32 encoding)
>  }
>  
>  #define FGT_MASKS(__n, __m)						\
> -	struct fgt_masks __n = { .str = #__m, .res0 = __m, }
> -
> -FGT_MASKS(hfgrtr_masks, HFGRTR_EL2_RES0);
> -FGT_MASKS(hfgwtr_masks, HFGWTR_EL2_RES0);
> -FGT_MASKS(hfgitr_masks, HFGITR_EL2_RES0);
> -FGT_MASKS(hdfgrtr_masks, HDFGRTR_EL2_RES0);
> -FGT_MASKS(hdfgwtr_masks, HDFGWTR_EL2_RES0);
> -FGT_MASKS(hafgrtr_masks, HAFGRTR_EL2_RES0);
> -FGT_MASKS(hfgrtr2_masks, HFGRTR2_EL2_RES0);
> -FGT_MASKS(hfgwtr2_masks, HFGWTR2_EL2_RES0);
> -FGT_MASKS(hfgitr2_masks, HFGITR2_EL2_RES0);
> -FGT_MASKS(hdfgrtr2_masks, HDFGRTR2_EL2_RES0);
> -FGT_MASKS(hdfgwtr2_masks, HDFGWTR2_EL2_RES0);
> +	struct fgt_masks __n = { .str = #__m, .res0 = __m ## _RES0, .res1 = __m ## _RES1 }
> +
> +FGT_MASKS(hfgrtr_masks, HFGRTR_EL2);
> +FGT_MASKS(hfgwtr_masks, HFGWTR_EL2);
> +FGT_MASKS(hfgitr_masks, HFGITR_EL2);
> +FGT_MASKS(hdfgrtr_masks, HDFGRTR_EL2);
> +FGT_MASKS(hdfgwtr_masks, HDFGWTR_EL2);
> +FGT_MASKS(hafgrtr_masks, HAFGRTR_EL2);
> +FGT_MASKS(hfgrtr2_masks, HFGRTR2_EL2);
> +FGT_MASKS(hfgwtr2_masks, HFGWTR2_EL2);
> +FGT_MASKS(hfgitr2_masks, HFGITR2_EL2);
> +FGT_MASKS(hdfgrtr2_masks, HDFGRTR2_EL2);
> +FGT_MASKS(hdfgwtr2_masks, HDFGWTR2_EL2);
>  
>  static __init bool aggregate_fgt(union trap_config tc)
>  {
>  	struct fgt_masks *rmasks, *wmasks;
> +	u64 rresx, wresx;
>  
>  	switch (tc.fgt) {
>  	case HFGRTR_GROUP:
> @@ -2154,24 +2155,27 @@ static __init bool aggregate_fgt(union trap_config tc)
>  		break;
>  	}
>  
> +	rresx = rmasks->res0 | rmasks->res1;
> +	if (wmasks)
> +		wresx = wmasks->res0 | wmasks->res1;
> +
>  	/*
>  	 * A bit can be reserved in either the R or W register, but
>  	 * not both.
>  	 */
> -	if ((BIT(tc.bit) & rmasks->res0) &&
> -	    (!wmasks || (BIT(tc.bit) & wmasks->res0)))
> +	if ((BIT(tc.bit) & rresx) && (!wmasks || (BIT(tc.bit) & wresx)))
>  		return false;
>  
>  	if (tc.pol)
> -		rmasks->mask |= BIT(tc.bit) & ~rmasks->res0;
> +		rmasks->mask |= BIT(tc.bit) & ~rresx;
>  	else
> -		rmasks->nmask |= BIT(tc.bit) & ~rmasks->res0;
> +		rmasks->nmask |= BIT(tc.bit) & ~rresx;
>  
>  	if (wmasks) {
>  		if (tc.pol)
> -			wmasks->mask |= BIT(tc.bit) & ~wmasks->res0;
> +			wmasks->mask |= BIT(tc.bit) & ~wresx;
>  		else
> -			wmasks->nmask |= BIT(tc.bit) & ~wmasks->res0;
> +			wmasks->nmask |= BIT(tc.bit) & ~wresx;
>  	}
>  
>  	return true;
> @@ -2180,7 +2184,6 @@ static __init bool aggregate_fgt(union trap_config tc)
>  static __init int check_fgt_masks(struct fgt_masks *masks)
>  {
>  	unsigned long duplicate = masks->mask & masks->nmask;
> -	u64 res0 = masks->res0;
>  	int ret = 0;
>  
>  	if (duplicate) {
> @@ -2194,10 +2197,14 @@ static __init int check_fgt_masks(struct fgt_masks *masks)
>  		ret = -EINVAL;
>  	}
>  
> -	masks->res0 = ~(masks->mask | masks->nmask);
> -	if (masks->res0 != res0)
> -		kvm_info("Implicit %s = %016llx, expecting %016llx\n",
> -			 masks->str, masks->res0, res0);
> +	if ((masks->res0 | masks->res1 | masks->mask | masks->nmask) != GENMASK(63, 0) ||
> +	    (masks->res0 & masks->res1)  || (masks->res0 & masks->mask) ||
> +	    (masks->res0 & masks->nmask) || (masks->res1 & masks->mask)  ||
> +	    (masks->res1 & masks->nmask) || (masks->mask & masks->nmask)) {
> +		kvm_info("Inconsistent masks for %s (%016llx, %016llx, %016llx, %016llx)\n",
> +			 masks->str, masks->res0, masks->res1, masks->mask, masks->nmask);
> +		masks->res0 = ~(masks->res1 | masks->mask | masks->nmask);
> +	}
>  
>  	return ret;
>  }


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 6/6] KVM: arm64: Honor UX/PX attributes for EL2 S1 mappings
  2025-12-11 16:21     ` Marc Zyngier
@ 2025-12-12 16:00       ` Joey Gouly
  0 siblings, 0 replies; 21+ messages in thread
From: Joey Gouly @ 2025-12-12 16:00 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Suzuki K Poulose, Oliver Upton,
	Zenghui Yu, Alexandru Elisei, Sascha Bischoff, Quentin Perret,
	Fuad Tabba, Sebastian Ene

On Thu, Dec 11, 2025 at 04:21:20PM +0000, Marc Zyngier wrote:
> On Thu, 11 Dec 2025 15:18:51 +0000,
> Joey Gouly <joey.gouly@arm.com> wrote:
> > 
> > Question,
> > 
> > On Wed, Dec 10, 2025 at 05:30:24PM +0000, Marc Zyngier wrote:
> > > Now that we potentially have two bits to deal with when setting
> > > > execution permissions, make sure we correctly handle them both when
> > > > building the page tables and when reading back from them.
> > > 
> > > Reported-by: Alexandru Elisei <alexandru.elisei@arm.com>
> > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > > ---
> > >  arch/arm64/include/asm/kvm_pgtable.h | 12 +++---------
> > >  arch/arm64/kvm/hyp/pgtable.c         | 24 +++++++++++++++++++++---
> > >  2 files changed, 24 insertions(+), 12 deletions(-)
> > > 
> > > diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> > > index be68b89692065..095e6b73740a6 100644
> > > --- a/arch/arm64/include/asm/kvm_pgtable.h
> > > +++ b/arch/arm64/include/asm/kvm_pgtable.h
> > > @@ -87,15 +87,9 @@ typedef u64 kvm_pte_t;
> > >  
> > >  #define KVM_PTE_LEAF_ATTR_HI_SW		GENMASK(58, 55)
> > >  
> > > -#define __KVM_PTE_LEAF_ATTR_HI_S1_XN	BIT(54)
> > > -#define __KVM_PTE_LEAF_ATTR_HI_S1_UXN	BIT(54)
> > > -#define __KVM_PTE_LEAF_ATTR_HI_S1_PXN	BIT(53)
> > > -
> > > -#define KVM_PTE_LEAF_ATTR_HI_S1_XN					\
> > > -	({ cpus_have_final_cap(ARM64_KVM_HVHE) ?			\
> > > -			(__KVM_PTE_LEAF_ATTR_HI_S1_UXN |		\
> > > -			 __KVM_PTE_LEAF_ATTR_HI_S1_PXN) :		\
> > > -			__KVM_PTE_LEAF_ATTR_HI_S1_XN; })
> > > +#define KVM_PTE_LEAF_ATTR_HI_S1_XN	BIT(54)
> > > +#define KVM_PTE_LEAF_ATTR_HI_S1_UXN	BIT(54)
> > > +#define KVM_PTE_LEAF_ATTR_HI_S1_PXN	BIT(53)
> > >  
> > >  #define KVM_PTE_LEAF_ATTR_HI_S2_XN	GENMASK(54, 53)
> > >  
> > > diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> > > index e0bd6a0172729..97c0835d25590 100644
> > > --- a/arch/arm64/kvm/hyp/pgtable.c
> > > +++ b/arch/arm64/kvm/hyp/pgtable.c
> > > @@ -342,6 +342,9 @@ static int hyp_set_prot_attr(enum kvm_pgtable_prot prot, kvm_pte_t *ptep)
> > >  	if (!(prot & KVM_PGTABLE_PROT_R))
> > >  		return -EINVAL;
> > >  
> > > +	if (!cpus_have_final_cap(ARM64_KVM_HVHE))
> > > +		prot &= ~KVM_PGTABLE_PROT_UX;
> > 
> > Trying to understand this part. We don't consider KVM_PGTABLE_PROT_UX below
> > when !HVHE, and we don't set it in kvm_pgtable_hyp_pte_prot() when !HVHE
> > either, so can it ever actually be set?
> 
> Because KVM_PGTABLE_PROT_X, which is directly passed by the high-level
> code, is defined as such:
> 
> 	KVM_PGTABLE_PROT_X			= KVM_PGTABLE_PROT_PX	|
> 						  KVM_PGTABLE_PROT_UX,
> 

This was the main missing part!

> We *could* make that value dependent on HVHE, but since that's in an
> enum, it is pretty ugly to do (not impossible though).
> 
> But it is in the following code that this becomes useful...
> 
> >
> > Otherwise LGTM!
> > 
> > Thanks,
> > Joey
> > 
> > > +
> > >  	if (prot & KVM_PGTABLE_PROT_X) {
> > >  		if (prot & KVM_PGTABLE_PROT_W)
> > >  			return -EINVAL;
> 
> ... here. If you were passed UX (and only that), and you're
> !HVHE, you won't have execution at all, and can allow writes.
> 
> Does that make sense?

Yes, thanks!

Reviewed-by: Joey Gouly <joey.gouly@arm.com>

> 
> 	M.
> 
> -- 
> Without deviation from the norm, progress is not possible.


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 1/6] KVM: arm64: Fix EL2 S1 XN handling for hVHE setups
  2025-12-10 17:30 ` [PATCH v2 1/6] KVM: arm64: Fix EL2 S1 XN handling for hVHE setups Marc Zyngier
  2025-12-11 13:37   ` Fuad Tabba
@ 2025-12-19 13:38   ` Leonardo Bras
  2025-12-19 14:13     ` Marc Zyngier
  1 sibling, 1 reply; 21+ messages in thread
From: Leonardo Bras @ 2025-12-19 13:38 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Leonardo Bras, kvmarm, linux-arm-kernel, kvm, Joey Gouly,
	Suzuki K Poulose, Oliver Upton, Zenghui Yu, Alexandru Elisei,
	Sascha Bischoff, Quentin Perret, Fuad Tabba, Sebastian Ene

On Wed, Dec 10, 2025 at 05:30:19PM +0000, Marc Zyngier wrote:
> The current XN implementation is tied to the EL2 translation regime,
> and falls flat on its face with the EL2&0 one that is used for hVHE,
> as the permission bit for privileged execution is a different one.
> 
> Fixes: 6537565fd9b7f ("KVM: arm64: Adjust EL2 stage-1 leaf AP bits when ARM64_KVM_HVHE is set")
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_pgtable.h | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> index fc02de43c68dd..be68b89692065 100644
> --- a/arch/arm64/include/asm/kvm_pgtable.h
> +++ b/arch/arm64/include/asm/kvm_pgtable.h
> @@ -87,7 +87,15 @@ typedef u64 kvm_pte_t;
>  
>  #define KVM_PTE_LEAF_ATTR_HI_SW		GENMASK(58, 55)
>  
> -#define KVM_PTE_LEAF_ATTR_HI_S1_XN	BIT(54)
> +#define __KVM_PTE_LEAF_ATTR_HI_S1_XN	BIT(54)
> +#define __KVM_PTE_LEAF_ATTR_HI_S1_UXN	BIT(54)
> +#define __KVM_PTE_LEAF_ATTR_HI_S1_PXN	BIT(53)
> +
> +#define KVM_PTE_LEAF_ATTR_HI_S1_XN					\
> +	({ cpus_have_final_cap(ARM64_KVM_HVHE) ?			\
> +			(__KVM_PTE_LEAF_ATTR_HI_S1_UXN |		\
> +			 __KVM_PTE_LEAF_ATTR_HI_S1_PXN) :		\
> +			__KVM_PTE_LEAF_ATTR_HI_S1_XN; })
>  
>  #define KVM_PTE_LEAF_ATTR_HI_S2_XN	GENMASK(54, 53)
>  
> -- 
> 2.47.3
> 

Cool,
Is this according to the following in Arm ARM?

Figure D8-16
Stage 1 attribute fields in VMSAv8-64 Block and Page descriptors

Thanks!
Leo


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v2 1/6] KVM: arm64: Fix EL2 S1 XN handling for hVHE setups
  2025-12-19 13:38   ` Leonardo Bras
@ 2025-12-19 14:13     ` Marc Zyngier
  0 siblings, 0 replies; 21+ messages in thread
From: Marc Zyngier @ 2025-12-19 14:13 UTC (permalink / raw)
  To: Leonardo Bras
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Alexandru Elisei, Sascha Bischoff,
	Quentin Perret, Fuad Tabba, Sebastian Ene

On Fri, 19 Dec 2025 13:38:50 +0000,
Leonardo Bras <leo.bras@arm.com> wrote:
> 
> On Wed, Dec 10, 2025 at 05:30:19PM +0000, Marc Zyngier wrote:
> > The current XN implementation is tied to the EL2 translation regime,
> > and falls flat on its face with the EL2&0 one that is used for hVHE,
> > as the permission bit for privileged execution is a different one.
> > 
> > Fixes: 6537565fd9b7f ("KVM: arm64: Adjust EL2 stage-1 leaf AP bits when ARM64_KVM_HVHE is set")
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_pgtable.h | 10 +++++++++-
> >  1 file changed, 9 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> > index fc02de43c68dd..be68b89692065 100644
> > --- a/arch/arm64/include/asm/kvm_pgtable.h
> > +++ b/arch/arm64/include/asm/kvm_pgtable.h
> > @@ -87,7 +87,15 @@ typedef u64 kvm_pte_t;
> >  
> >  #define KVM_PTE_LEAF_ATTR_HI_SW		GENMASK(58, 55)
> >  
> > -#define KVM_PTE_LEAF_ATTR_HI_S1_XN	BIT(54)
> > +#define __KVM_PTE_LEAF_ATTR_HI_S1_XN	BIT(54)
> > +#define __KVM_PTE_LEAF_ATTR_HI_S1_UXN	BIT(54)
> > +#define __KVM_PTE_LEAF_ATTR_HI_S1_PXN	BIT(53)
> > +
> > +#define KVM_PTE_LEAF_ATTR_HI_S1_XN					\
> > +	({ cpus_have_final_cap(ARM64_KVM_HVHE) ?			\
> > +			(__KVM_PTE_LEAF_ATTR_HI_S1_UXN |		\
> > +			 __KVM_PTE_LEAF_ATTR_HI_S1_PXN) :		\
> > +			__KVM_PTE_LEAF_ATTR_HI_S1_XN; })
> >  
> >  #define KVM_PTE_LEAF_ATTR_HI_S2_XN	GENMASK(54, 53)
> >  
> > -- 
> > 2.47.3
> > 
> 
> Cool,
> Is this according to the following in Arm ARM?
> 
> Figure D8-16
> Stage 1 attribute fields in VMSAv8-64 Block and Page descriptors

In M.a (or M.a.a, as it is now called), this is all part of
I_GLMLD. But R_JJNHR is a much more interesting source of information,
as it clearly outlines the conditions under which XN, UXN and PXN all
share the same two bits in funky ways...
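
For the record, my reading of the stage 1 leaf descriptor bits for
the two regimes at play here (leaving the funky overlap cases aside):

	regime			bit[54]		bit[53]
	EL2 (nVHE)		XN		RES0
	EL2&0 (hVHE, E2H=1)	UXN		PXN

which is why a single XN define couldn't cover the hVHE case.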

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread

Thread overview: 21+ messages
2025-12-10 17:30 [PATCH v2 0/6] KVM: arm64: VTCR_EL2 conversion to feature dependency framework Marc Zyngier
2025-12-10 17:30 ` [PATCH v2 1/6] KVM: arm64: Fix EL2 S1 XN handling for hVHE setups Marc Zyngier
2025-12-11 13:37   ` Fuad Tabba
2025-12-11 14:30     ` Marc Zyngier
2025-12-19 13:38   ` Leonardo Bras
2025-12-19 14:13     ` Marc Zyngier
2025-12-10 17:30 ` [PATCH v2 2/6] arm64: Convert ID_AA64MMFR0_EL1.TGRAN{4,16,64}_2 to UnsignedEnum Marc Zyngier
2025-12-11 13:38   ` Fuad Tabba
2025-12-10 17:30 ` [PATCH v2 3/6] arm64: Convert VTCR_EL2 to sysreg infrastructure Marc Zyngier
2025-12-11 14:13   ` Fuad Tabba
2025-12-10 17:30 ` [PATCH v2 4/6] KVM: arm64: Account for RES1 bits in DECLARE_FEAT_MAP() and co Marc Zyngier
2025-12-11 14:30   ` Fuad Tabba
2025-12-11 17:23   ` Sascha Bischoff
2025-12-10 17:30 ` [PATCH v2 5/6] KVM: arm64: Convert VTCR_EL2 to config-driven sanitisation Marc Zyngier
2025-12-11 14:44   ` Fuad Tabba
2025-12-10 17:30 ` [PATCH v2 6/6] KVM: arm64: Honor UX/PX attributes for EL2 S1 mappings Marc Zyngier
2025-12-11 14:51   ` Fuad Tabba
2025-12-11 15:18   ` Joey Gouly
2025-12-11 16:21     ` Marc Zyngier
2025-12-12 16:00       ` Joey Gouly
2025-12-11 14:55 ` [PATCH v2 0/6] KVM: arm64: VTCR_EL2 conversion to feature dependency framework Fuad Tabba
