public inbox for linux-arm-kernel@lists.infradead.org
* [PATCH 00/20] KVM: arm64: Generalise RESx handling
@ 2026-01-26 12:16 Marc Zyngier
  2026-01-26 12:16 ` [PATCH 01/20] arm64: Convert SCTLR_EL2 to sysreg infrastructure Marc Zyngier
                   ` (19 more replies)
  0 siblings, 20 replies; 53+ messages in thread
From: Marc Zyngier @ 2026-01-26 12:16 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Fuad Tabba, Will Deacon, Catalin Marinas

Having spent some time dealing with some dark corners of the
architecture, I have realised that our RESx handling is a bit patchy,
especially when it comes to RES1 bits, which are not clearly defined in
config.c and rely on band-aids such as FIXED_VALUE.

This series takes the excuse of adding SCTLR_EL2 sanitisation to bite
the bullet and pursue several goals:

- clearly define bits that are RES1 when a feature is absent

- have a unified data structure to manage both RES0 and RES1 bits

- deal with the annoying complexity of some features being
  conditioned on E2H==1

- allow a single bit to take different RESx values depending on the
  value of E2H

This allows quite a bit of cleanup, including the total removal of the
FIXED_VALUE horror, which was always a bizarre construct. We also get
a new debugfs file to introspect the RESx settings for a given guest.

Overall, this lowers the complexity of expressing the configuration
constraints at the cost of very little extra code (most of the
additional lines come from the debugfs support, and from SCTLR_EL2
being added to the sysreg file).

Patches are on top of my kvm-arm64/vtcr branch (which is currently
simmering in -next).

Marc Zyngier (20):
  arm64: Convert SCTLR_EL2 to sysreg infrastructure
  KVM: arm64: Remove duplicate configuration for SCTLR_EL1.{EE,E0E}
  KVM: arm64: Introduce standalone FGU computing primitive
  KVM: arm64: Introduce data structure tracking both RES0 and RES1 bits
  KVM: arm64: Extend unified RESx handling to runtime sanitisation
  KVM: arm64: Inherit RESx bits from FGT register descriptors
  KVM: arm64: Allow RES1 bits to be inferred from configuration
  KVM: arm64: Correctly handle SCTLR_EL1 RES1 bits for unsupported
    features
  KVM: arm64: Convert HCR_EL2.RW to AS_RES1
  KVM: arm64: Simplify FIXED_VALUE handling
  KVM: arm64: Add REQUIRES_E2H1 constraint as configuration flags
  KVM: arm64: Add RESx_WHEN_E2Hx constraints as configuration flags
  KVM: arm64: Move RESx into individual register descriptors
  KVM: arm64: Simplify handling of HCR_EL2.E2H RESx
  KVM: arm64: Get rid of FIXED_VALUE altogether
  KVM: arm64: Simplify handling of full register invalid constraint
  KVM: arm64: Remove all traces of FEAT_TME
  KVM: arm64: Remove all traces of HCR_EL2.MIOCNCE
  KVM: arm64: Add sanitisation to SCTLR_EL2
  KVM: arm64: Add debugfs file dumping computed RESx values

 arch/arm64/include/asm/kvm_host.h             |  33 +-
 arch/arm64/include/asm/sysreg.h               |   7 -
 arch/arm64/kvm/config.c                       | 430 ++++++++++--------
 arch/arm64/kvm/emulate-nested.c               |  10 +-
 arch/arm64/kvm/nested.c                       | 151 +++---
 arch/arm64/kvm/sys_regs.c                     |  98 ++++
 arch/arm64/tools/sysreg                       |  82 +++-
 tools/arch/arm64/include/asm/sysreg.h         |   6 -
 tools/perf/Documentation/perf-arm-spe.txt     |   1 -
 .../testing/selftests/kvm/arm64/set_id_regs.c |   1 -
 10 files changed, 510 insertions(+), 309 deletions(-)

-- 
2.47.3



^ permalink raw reply	[flat|nested] 53+ messages in thread

* [PATCH 01/20] arm64: Convert SCTLR_EL2 to sysreg infrastructure
  2026-01-26 12:16 [PATCH 00/20] KVM: arm64: Generalise RESx handling Marc Zyngier
@ 2026-01-26 12:16 ` Marc Zyngier
  2026-01-26 17:53   ` Fuad Tabba
  2026-01-26 12:16 ` [PATCH 02/20] KVM: arm64: Remove duplicate configuration for SCTLR_EL1.{EE,E0E} Marc Zyngier
                   ` (18 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Marc Zyngier @ 2026-01-26 12:16 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Fuad Tabba, Will Deacon, Catalin Marinas

Convert SCTLR_EL2 to the sysreg infrastructure, as per the 2025-12_rel
revision of the Registers.json file.

Note that we slightly deviate from the above, as we stick to the ARM
ARM M.a definition of SCTLR_EL2[9], which is RES0, in order to avoid
dragging in the POE2 definitions...

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/sysreg.h       |  7 ---
 arch/arm64/tools/sysreg               | 69 +++++++++++++++++++++++++++
 tools/arch/arm64/include/asm/sysreg.h |  6 ---
 3 files changed, 69 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 939f9c5bbae67..30f0409b1c802 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -504,7 +504,6 @@
 #define SYS_VPIDR_EL2			sys_reg(3, 4, 0, 0, 0)
 #define SYS_VMPIDR_EL2			sys_reg(3, 4, 0, 0, 5)
 
-#define SYS_SCTLR_EL2			sys_reg(3, 4, 1, 0, 0)
 #define SYS_ACTLR_EL2			sys_reg(3, 4, 1, 0, 1)
 #define SYS_SCTLR2_EL2			sys_reg(3, 4, 1, 0, 3)
 #define SYS_HCR_EL2			sys_reg(3, 4, 1, 1, 0)
@@ -837,12 +836,6 @@
 #define SCTLR_ELx_A	 (BIT(1))
 #define SCTLR_ELx_M	 (BIT(0))
 
-/* SCTLR_EL2 specific flags. */
-#define SCTLR_EL2_RES1	((BIT(4))  | (BIT(5))  | (BIT(11)) | (BIT(16)) | \
-			 (BIT(18)) | (BIT(22)) | (BIT(23)) | (BIT(28)) | \
-			 (BIT(29)))
-
-#define SCTLR_EL2_BT	(BIT(36))
 #ifdef CONFIG_CPU_BIG_ENDIAN
 #define ENDIAN_SET_EL2		SCTLR_ELx_EE
 #else
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index a0f6249bd4f98..969a75615d612 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -3749,6 +3749,75 @@ UnsignedEnum	2:0	F8S1
 EndEnum
 EndSysreg
 
+Sysreg	SCTLR_EL2	3	4	1	0	0
+Field	63	TIDCP
+Field	62	SPINTMASK
+Field	61	NMI
+Field	60	EnTP2
+Field	59	TCSO
+Field	58	TCSO0
+Field	57	EPAN
+Field	56	EnALS
+Field	55	EnAS0
+Field	54	EnASR
+Res0	53:50
+Field	49:46	TWEDEL
+Field	45	TWEDEn
+Field	44	DSSBS
+Field	43	ATA
+Field	42	ATA0
+Enum	41:40	TCF
+	0b00	NONE
+	0b01	SYNC
+	0b10	ASYNC
+	0b11	ASYMM
+EndEnum
+Enum	39:38	TCF0
+	0b00	NONE
+	0b01	SYNC
+	0b10	ASYNC
+	0b11	ASYMM
+EndEnum
+Field	37	ITFSB
+Field	36	BT
+Field	35	BT0
+Field	34	EnFPM
+Field	33	MSCEn
+Field	32	CMOW
+Field	31	EnIA
+Field	30	EnIB
+Field	29	LSMAOE
+Field	28	nTLSMD
+Field	27	EnDA
+Field	26	UCI
+Field	25	EE
+Field	24	E0E
+Field	23	SPAN
+Field	22	EIS
+Field	21	IESB
+Field	20	TSCXT
+Field	19	WXN
+Field	18	nTWE
+Res0	17
+Field	16	nTWI
+Field	15	UCT
+Field	14	DZE
+Field	13	EnDB
+Field	12	I
+Field	11	EOS
+Field	10	EnRCTX
+Res0	9
+Field	8	SED
+Field	7	ITD
+Field	6	nAA
+Field	5	CP15BEN
+Field	4	SA0
+Field	3	SA
+Field	2	C
+Field	1	A
+Field	0	M
+EndSysreg
+
 Sysreg	HCR_EL2		3	4	1	1	0
 Field	63:60	TWEDEL
 Field	59	TWEDEn
diff --git a/tools/arch/arm64/include/asm/sysreg.h b/tools/arch/arm64/include/asm/sysreg.h
index 178b7322bf049..f75efe98e9df3 100644
--- a/tools/arch/arm64/include/asm/sysreg.h
+++ b/tools/arch/arm64/include/asm/sysreg.h
@@ -847,12 +847,6 @@
 #define SCTLR_ELx_A	 (BIT(1))
 #define SCTLR_ELx_M	 (BIT(0))
 
-/* SCTLR_EL2 specific flags. */
-#define SCTLR_EL2_RES1	((BIT(4))  | (BIT(5))  | (BIT(11)) | (BIT(16)) | \
-			 (BIT(18)) | (BIT(22)) | (BIT(23)) | (BIT(28)) | \
-			 (BIT(29)))
-
-#define SCTLR_EL2_BT	(BIT(36))
 #ifdef CONFIG_CPU_BIG_ENDIAN
 #define ENDIAN_SET_EL2		SCTLR_ELx_EE
 #else
-- 
2.47.3



^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH 02/20] KVM: arm64: Remove duplicate configuration for SCTLR_EL1.{EE,E0E}
  2026-01-26 12:16 [PATCH 00/20] KVM: arm64: Generalise RESx handling Marc Zyngier
  2026-01-26 12:16 ` [PATCH 01/20] arm64: Convert SCTLR_EL2 to sysreg infrastructure Marc Zyngier
@ 2026-01-26 12:16 ` Marc Zyngier
  2026-01-26 18:04   ` Fuad Tabba
  2026-01-26 12:16 ` [PATCH 03/20] KVM: arm64: Introduce standalone FGU computing primitive Marc Zyngier
                   ` (17 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Marc Zyngier @ 2026-01-26 12:16 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Fuad Tabba, Will Deacon, Catalin Marinas

We already have specific constraints for SCTLR_EL1.{EE,E0E}, and
making them also depend on FEAT_AA64EL1 is just buggy.

Fixes: 6bd4a274b026e ("KVM: arm64: Convert SCTLR_EL1 to config-driven sanitisation")
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/config.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index 9c04f895d3769..0bcdb39885734 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -1140,8 +1140,6 @@ static const struct reg_bits_to_feat_map sctlr_el1_feat_map[] = {
 		   SCTLR_EL1_TWEDEn,
 		   FEAT_TWED),
 	NEEDS_FEAT(SCTLR_EL1_UCI	|
-		   SCTLR_EL1_EE		|
-		   SCTLR_EL1_E0E	|
 		   SCTLR_EL1_WXN	|
 		   SCTLR_EL1_nTWE	|
 		   SCTLR_EL1_nTWI	|
-- 
2.47.3



^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH 03/20] KVM: arm64: Introduce standalone FGU computing primitive
  2026-01-26 12:16 [PATCH 00/20] KVM: arm64: Generalise RESx handling Marc Zyngier
  2026-01-26 12:16 ` [PATCH 01/20] arm64: Convert SCTLR_EL2 to sysreg infrastructure Marc Zyngier
  2026-01-26 12:16 ` [PATCH 02/20] KVM: arm64: Remove duplicate configuration for SCTLR_EL1.{EE,E0E} Marc Zyngier
@ 2026-01-26 12:16 ` Marc Zyngier
  2026-01-26 18:35   ` Fuad Tabba
  2026-01-26 12:16 ` [PATCH 04/20] KVM: arm64: Introduce data structure tracking both RES0 and RES1 bits Marc Zyngier
                   ` (16 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Marc Zyngier @ 2026-01-26 12:16 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Fuad Tabba, Will Deacon, Catalin Marinas

Computing the FGU bits is oddly complicated, as we use the RES0
helper instead of a dedicated abstraction.

Introduce such an abstraction, which will make things significantly
simpler in the future.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/config.c | 57 ++++++++++++++++++-----------------------
 1 file changed, 25 insertions(+), 32 deletions(-)

diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index 0bcdb39885734..2122599f7cbbd 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -1335,26 +1335,30 @@ static u64 compute_res0_bits(struct kvm *kvm,
 static u64 compute_reg_res0_bits(struct kvm *kvm,
 				 const struct reg_feat_map_desc *r,
 				 unsigned long require, unsigned long exclude)
-
 {
 	u64 res0;
 
 	res0 = compute_res0_bits(kvm, r->bit_feat_map, r->bit_feat_map_sz,
 				 require, exclude);
 
-	/*
-	 * If computing FGUs, don't take RES0 or register existence
-	 * into account -- we're not computing bits for the register
-	 * itself.
-	 */
-	if (!(exclude & NEVER_FGU)) {
-		res0 |= compute_res0_bits(kvm, &r->feat_map, 1, require, exclude);
-		res0 |= ~reg_feat_map_bits(&r->feat_map);
-	}
+	res0 |= compute_res0_bits(kvm, &r->feat_map, 1, require, exclude);
+	res0 |= ~reg_feat_map_bits(&r->feat_map);
 
 	return res0;
 }
 
+static u64 compute_fgu_bits(struct kvm *kvm, const struct reg_feat_map_desc *r)
+{
+	/*
+	 * If computing FGUs, we collect the unsupported feature bits as
+	 * RES0 bits, but don't take the actual RES0 bits or register
+	 * existence into account -- we're not computing bits for the
+	 * register itself.
+	 */
+	return compute_res0_bits(kvm, r->bit_feat_map, r->bit_feat_map_sz,
+				 0, NEVER_FGU);
+}
+
 static u64 compute_reg_fixed_bits(struct kvm *kvm,
 				  const struct reg_feat_map_desc *r,
 				  u64 *fixed_bits, unsigned long require,
@@ -1370,40 +1374,29 @@ void compute_fgu(struct kvm *kvm, enum fgt_group_id fgt)
 
 	switch (fgt) {
 	case HFGRTR_GROUP:
-		val |= compute_reg_res0_bits(kvm, &hfgrtr_desc,
-					     0, NEVER_FGU);
-		val |= compute_reg_res0_bits(kvm, &hfgwtr_desc,
-					     0, NEVER_FGU);
+		val |= compute_fgu_bits(kvm, &hfgrtr_desc);
+		val |= compute_fgu_bits(kvm, &hfgwtr_desc);
 		break;
 	case HFGITR_GROUP:
-		val |= compute_reg_res0_bits(kvm, &hfgitr_desc,
-					     0, NEVER_FGU);
+		val |= compute_fgu_bits(kvm, &hfgitr_desc);
 		break;
 	case HDFGRTR_GROUP:
-		val |= compute_reg_res0_bits(kvm, &hdfgrtr_desc,
-					     0, NEVER_FGU);
-		val |= compute_reg_res0_bits(kvm, &hdfgwtr_desc,
-					     0, NEVER_FGU);
+		val |= compute_fgu_bits(kvm, &hdfgrtr_desc);
+		val |= compute_fgu_bits(kvm, &hdfgwtr_desc);
 		break;
 	case HAFGRTR_GROUP:
-		val |= compute_reg_res0_bits(kvm, &hafgrtr_desc,
-					     0, NEVER_FGU);
+		val |= compute_fgu_bits(kvm, &hafgrtr_desc);
 		break;
 	case HFGRTR2_GROUP:
-		val |= compute_reg_res0_bits(kvm, &hfgrtr2_desc,
-					     0, NEVER_FGU);
-		val |= compute_reg_res0_bits(kvm, &hfgwtr2_desc,
-					     0, NEVER_FGU);
+		val |= compute_fgu_bits(kvm, &hfgrtr2_desc);
+		val |= compute_fgu_bits(kvm, &hfgwtr2_desc);
 		break;
 	case HFGITR2_GROUP:
-		val |= compute_reg_res0_bits(kvm, &hfgitr2_desc,
-					     0, NEVER_FGU);
+		val |= compute_fgu_bits(kvm, &hfgitr2_desc);
 		break;
 	case HDFGRTR2_GROUP:
-		val |= compute_reg_res0_bits(kvm, &hdfgrtr2_desc,
-					     0, NEVER_FGU);
-		val |= compute_reg_res0_bits(kvm, &hdfgwtr2_desc,
-					     0, NEVER_FGU);
+		val |= compute_fgu_bits(kvm, &hdfgrtr2_desc);
+		val |= compute_fgu_bits(kvm, &hdfgwtr2_desc);
 		break;
 	default:
 		BUG();
-- 
2.47.3



^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH 04/20] KVM: arm64: Introduce data structure tracking both RES0 and RES1 bits
  2026-01-26 12:16 [PATCH 00/20] KVM: arm64: Generalise RESx handling Marc Zyngier
                   ` (2 preceding siblings ...)
  2026-01-26 12:16 ` [PATCH 03/20] KVM: arm64: Introduce standalone FGU computing primitive Marc Zyngier
@ 2026-01-26 12:16 ` Marc Zyngier
  2026-01-26 18:54   ` Fuad Tabba
  2026-01-26 12:16 ` [PATCH 05/20] KVM: arm64: Extend unified RESx handling to runtime sanitisation Marc Zyngier
                   ` (15 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Marc Zyngier @ 2026-01-26 12:16 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Fuad Tabba, Will Deacon, Catalin Marinas

We have so far mostly tracked RES0 bits, but only made a few attempts
at being just as strict for RES1 bits (probably because they are both
rarer and harder to handle).

Start scratching the surface by introducing a data structure tracking
RES0 and RES1 bits at the same time.

Note that, contrary to the usual idiom, this structure is mostly passed
around by value -- the ABI handles it efficiently, and the resulting
code is much nicer.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h |  17 ++--
 arch/arm64/kvm/config.c           | 122 +++++++++++++++-------------
 arch/arm64/kvm/nested.c           | 129 +++++++++++++++---------------
 3 files changed, 144 insertions(+), 124 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index b552a1e03848c..a7e4cd8ebf56f 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -626,13 +626,20 @@ enum vcpu_sysreg {
 	NR_SYS_REGS	/* Nothing after this line! */
 };
 
+struct resx {
+	u64	res0;
+	u64	res1;
+};
+
 struct kvm_sysreg_masks {
-	struct {
-		u64	res0;
-		u64	res1;
-	} mask[NR_SYS_REGS - __SANITISED_REG_START__];
+	struct resx mask[NR_SYS_REGS - __SANITISED_REG_START__];
 };
 
+#define kvm_set_sysreg_resx(k, sr, resx)		\
+	do {						\
+		(k)->arch.sysreg_masks->mask[(sr) - __SANITISED_REG_START__] = (resx); \
+	} while (0)
+
 struct fgt_masks {
 	const char	*str;
 	u64		mask;
@@ -1607,7 +1614,7 @@ static inline bool kvm_arch_has_irq_bypass(void)
 }
 
 void compute_fgu(struct kvm *kvm, enum fgt_group_id fgt);
-void get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg, u64 *res0, u64 *res1);
+struct resx get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg);
 void check_feature_map(void);
 void kvm_vcpu_load_fgt(struct kvm_vcpu *vcpu);
 
diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index 2122599f7cbbd..a907195bd44b6 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -1290,14 +1290,15 @@ static bool idreg_feat_match(struct kvm *kvm, const struct reg_bits_to_feat_map
 	}
 }
 
-static u64 __compute_fixed_bits(struct kvm *kvm,
+static
+struct resx __compute_fixed_bits(struct kvm *kvm,
 				const struct reg_bits_to_feat_map *map,
 				int map_size,
 				u64 *fixed_bits,
 				unsigned long require,
 				unsigned long exclude)
 {
-	u64 val = 0;
+	struct resx resx = {};
 
 	for (int i = 0; i < map_size; i++) {
 		bool match;
@@ -1316,13 +1317,14 @@ static u64 __compute_fixed_bits(struct kvm *kvm,
 			match = idreg_feat_match(kvm, &map[i]);
 
 		if (!match || (map[i].flags & FIXED_VALUE))
-			val |= reg_feat_map_bits(&map[i]);
+			resx.res0 |= reg_feat_map_bits(&map[i]);
 	}
 
-	return val;
+	return resx;
 }
 
-static u64 compute_res0_bits(struct kvm *kvm,
+static
+struct resx compute_resx_bits(struct kvm *kvm,
 			     const struct reg_bits_to_feat_map *map,
 			     int map_size,
 			     unsigned long require,
@@ -1332,34 +1334,43 @@ static u64 compute_res0_bits(struct kvm *kvm,
 				    require, exclude | FIXED_VALUE);
 }
 
-static u64 compute_reg_res0_bits(struct kvm *kvm,
+static
+struct resx compute_reg_resx_bits(struct kvm *kvm,
 				 const struct reg_feat_map_desc *r,
 				 unsigned long require, unsigned long exclude)
 {
-	u64 res0;
+	struct resx resx, tmp;
 
-	res0 = compute_res0_bits(kvm, r->bit_feat_map, r->bit_feat_map_sz,
+	resx = compute_resx_bits(kvm, r->bit_feat_map, r->bit_feat_map_sz,
 				 require, exclude);
 
-	res0 |= compute_res0_bits(kvm, &r->feat_map, 1, require, exclude);
-	res0 |= ~reg_feat_map_bits(&r->feat_map);
+	tmp = compute_resx_bits(kvm, &r->feat_map, 1, require, exclude);
+
+	resx.res0 |= tmp.res0;
+	resx.res0 |= ~reg_feat_map_bits(&r->feat_map);
+	resx.res1 |= tmp.res1;
 
-	return res0;
+	return resx;
 }
 
 static u64 compute_fgu_bits(struct kvm *kvm, const struct reg_feat_map_desc *r)
 {
+	struct resx resx;
+
 	/*
 	 * If computing FGUs, we collect the unsupported feature bits as
-	 * RES0 bits, but don't take the actual RES0 bits or register
+	 * RESx bits, but don't take the actual RESx bits or register
 	 * existence into account -- we're not computing bits for the
 	 * register itself.
 	 */
-	return compute_res0_bits(kvm, r->bit_feat_map, r->bit_feat_map_sz,
+	resx = compute_resx_bits(kvm, r->bit_feat_map, r->bit_feat_map_sz,
 				 0, NEVER_FGU);
+
+	return resx.res0 | resx.res1;
 }
 
-static u64 compute_reg_fixed_bits(struct kvm *kvm,
+static
+struct resx compute_reg_fixed_bits(struct kvm *kvm,
 				  const struct reg_feat_map_desc *r,
 				  u64 *fixed_bits, unsigned long require,
 				  unsigned long exclude)
@@ -1405,91 +1416,94 @@ void compute_fgu(struct kvm *kvm, enum fgt_group_id fgt)
 	kvm->arch.fgu[fgt] = val;
 }
 
-void get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg, u64 *res0, u64 *res1)
+struct resx get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg)
 {
 	u64 fixed = 0, mask;
+	struct resx resx;
 
 	switch (reg) {
 	case HFGRTR_EL2:
-		*res0 = compute_reg_res0_bits(kvm, &hfgrtr_desc, 0, 0);
-		*res1 = HFGRTR_EL2_RES1;
+		resx = compute_reg_resx_bits(kvm, &hfgrtr_desc, 0, 0);
+		resx.res1 |= HFGRTR_EL2_RES1;
 		break;
 	case HFGWTR_EL2:
-		*res0 = compute_reg_res0_bits(kvm, &hfgwtr_desc, 0, 0);
-		*res1 = HFGWTR_EL2_RES1;
+		resx = compute_reg_resx_bits(kvm, &hfgwtr_desc, 0, 0);
+		resx.res1 |= HFGWTR_EL2_RES1;
 		break;
 	case HFGITR_EL2:
-		*res0 = compute_reg_res0_bits(kvm, &hfgitr_desc, 0, 0);
-		*res1 = HFGITR_EL2_RES1;
+		resx = compute_reg_resx_bits(kvm, &hfgitr_desc, 0, 0);
+		resx.res1 |= HFGITR_EL2_RES1;
 		break;
 	case HDFGRTR_EL2:
-		*res0 = compute_reg_res0_bits(kvm, &hdfgrtr_desc, 0, 0);
-		*res1 = HDFGRTR_EL2_RES1;
+		resx = compute_reg_resx_bits(kvm, &hdfgrtr_desc, 0, 0);
+		resx.res1 |= HDFGRTR_EL2_RES1;
 		break;
 	case HDFGWTR_EL2:
-		*res0 = compute_reg_res0_bits(kvm, &hdfgwtr_desc, 0, 0);
-		*res1 = HDFGWTR_EL2_RES1;
+		resx = compute_reg_resx_bits(kvm, &hdfgwtr_desc, 0, 0);
+		resx.res1 |= HDFGWTR_EL2_RES1;
 		break;
 	case HAFGRTR_EL2:
-		*res0 = compute_reg_res0_bits(kvm, &hafgrtr_desc, 0, 0);
-		*res1 = HAFGRTR_EL2_RES1;
+		resx = compute_reg_resx_bits(kvm, &hafgrtr_desc, 0, 0);
+		resx.res1 |= HAFGRTR_EL2_RES1;
 		break;
 	case HFGRTR2_EL2:
-		*res0 = compute_reg_res0_bits(kvm, &hfgrtr2_desc, 0, 0);
-		*res1 = HFGRTR2_EL2_RES1;
+		resx = compute_reg_resx_bits(kvm, &hfgrtr2_desc, 0, 0);
+		resx.res1 |= HFGRTR2_EL2_RES1;
 		break;
 	case HFGWTR2_EL2:
-		*res0 = compute_reg_res0_bits(kvm, &hfgwtr2_desc, 0, 0);
-		*res1 = HFGWTR2_EL2_RES1;
+		resx = compute_reg_resx_bits(kvm, &hfgwtr2_desc, 0, 0);
+		resx.res1 |= HFGWTR2_EL2_RES1;
 		break;
 	case HFGITR2_EL2:
-		*res0 = compute_reg_res0_bits(kvm, &hfgitr2_desc, 0, 0);
-		*res1 = HFGITR2_EL2_RES1;
+		resx = compute_reg_resx_bits(kvm, &hfgitr2_desc, 0, 0);
+		resx.res1 |= HFGITR2_EL2_RES1;
 		break;
 	case HDFGRTR2_EL2:
-		*res0 = compute_reg_res0_bits(kvm, &hdfgrtr2_desc, 0, 0);
-		*res1 = HDFGRTR2_EL2_RES1;
+		resx = compute_reg_resx_bits(kvm, &hdfgrtr2_desc, 0, 0);
+		resx.res1 |= HDFGRTR2_EL2_RES1;
 		break;
 	case HDFGWTR2_EL2:
-		*res0 = compute_reg_res0_bits(kvm, &hdfgwtr2_desc, 0, 0);
-		*res1 = HDFGWTR2_EL2_RES1;
+		resx = compute_reg_resx_bits(kvm, &hdfgwtr2_desc, 0, 0);
+		resx.res1 |= HDFGWTR2_EL2_RES1;
 		break;
 	case HCRX_EL2:
-		*res0 = compute_reg_res0_bits(kvm, &hcrx_desc, 0, 0);
-		*res1 = __HCRX_EL2_RES1;
+		resx = compute_reg_resx_bits(kvm, &hcrx_desc, 0, 0);
+		resx.res1 |= __HCRX_EL2_RES1;
 		break;
 	case HCR_EL2:
-		mask = compute_reg_fixed_bits(kvm, &hcr_desc, &fixed, 0, 0);
-		*res0 = compute_reg_res0_bits(kvm, &hcr_desc, 0, 0);
-		*res0 |= (mask & ~fixed);
-		*res1 = HCR_EL2_RES1 | (mask & fixed);
+		mask = compute_reg_fixed_bits(kvm, &hcr_desc, &fixed, 0, 0).res0;
+		resx = compute_reg_resx_bits(kvm, &hcr_desc, 0, 0);
+		resx.res0 |= (mask & ~fixed);
+		resx.res1 |= HCR_EL2_RES1 | (mask & fixed);
 		break;
 	case SCTLR2_EL1:
 	case SCTLR2_EL2:
-		*res0 = compute_reg_res0_bits(kvm, &sctlr2_desc, 0, 0);
-		*res1 = SCTLR2_EL1_RES1;
+		resx = compute_reg_resx_bits(kvm, &sctlr2_desc, 0, 0);
+		resx.res1 |= SCTLR2_EL1_RES1;
 		break;
 	case TCR2_EL2:
-		*res0 = compute_reg_res0_bits(kvm, &tcr2_el2_desc, 0, 0);
-		*res1 = TCR2_EL2_RES1;
+		resx = compute_reg_resx_bits(kvm, &tcr2_el2_desc, 0, 0);
+		resx.res1 |= TCR2_EL2_RES1;
 		break;
 	case SCTLR_EL1:
-		*res0 = compute_reg_res0_bits(kvm, &sctlr_el1_desc, 0, 0);
-		*res1 = SCTLR_EL1_RES1;
+		resx = compute_reg_resx_bits(kvm, &sctlr_el1_desc, 0, 0);
+		resx.res1 |= SCTLR_EL1_RES1;
 		break;
 	case MDCR_EL2:
-		*res0 = compute_reg_res0_bits(kvm, &mdcr_el2_desc, 0, 0);
-		*res1 = MDCR_EL2_RES1;
+		resx = compute_reg_resx_bits(kvm, &mdcr_el2_desc, 0, 0);
+		resx.res1 |= MDCR_EL2_RES1;
 		break;
 	case VTCR_EL2:
-		*res0 = compute_reg_res0_bits(kvm, &vtcr_el2_desc, 0, 0);
-		*res1 = VTCR_EL2_RES1;
+		resx = compute_reg_resx_bits(kvm, &vtcr_el2_desc, 0, 0);
+		resx.res1 |= VTCR_EL2_RES1;
 		break;
 	default:
 		WARN_ON_ONCE(1);
-		*res0 = *res1 = 0;
+		resx = (typeof(resx)){};
 		break;
 	}
+
+	return resx;
 }
 
 static __always_inline struct fgt_masks *__fgt_reg_to_masks(enum vcpu_sysreg reg)
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 486eba72bb027..c5a45bc62153e 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -1683,22 +1683,19 @@ u64 kvm_vcpu_apply_reg_masks(const struct kvm_vcpu *vcpu,
 	return v;
 }
 
-static __always_inline void set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u64 res1)
+static __always_inline void set_sysreg_masks(struct kvm *kvm, int sr, struct resx resx)
 {
-	int i = sr - __SANITISED_REG_START__;
-
 	BUILD_BUG_ON(!__builtin_constant_p(sr));
 	BUILD_BUG_ON(sr < __SANITISED_REG_START__);
 	BUILD_BUG_ON(sr >= NR_SYS_REGS);
 
-	kvm->arch.sysreg_masks->mask[i].res0 = res0;
-	kvm->arch.sysreg_masks->mask[i].res1 = res1;
+	kvm_set_sysreg_resx(kvm, sr, resx);
 }
 
 int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
 {
 	struct kvm *kvm = vcpu->kvm;
-	u64 res0, res1;
+	struct resx resx;
 
 	lockdep_assert_held(&kvm->arch.config_lock);
 
@@ -1711,110 +1708,112 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
 		return -ENOMEM;
 
 	/* VTTBR_EL2 */
-	res0 = res1 = 0;
+	resx = (typeof(resx)){};
 	if (!kvm_has_feat_enum(kvm, ID_AA64MMFR1_EL1, VMIDBits, 16))
-		res0 |= GENMASK(63, 56);
+		resx.res0 |= GENMASK(63, 56);
 	if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, CnP, IMP))
-		res0 |= VTTBR_CNP_BIT;
-	set_sysreg_masks(kvm, VTTBR_EL2, res0, res1);
+		resx.res0 |= VTTBR_CNP_BIT;
+	set_sysreg_masks(kvm, VTTBR_EL2, resx);
 
 	/* VTCR_EL2 */
-	get_reg_fixed_bits(kvm, VTCR_EL2, &res0, &res1);
-	set_sysreg_masks(kvm, VTCR_EL2, res0, res1);
+	resx = get_reg_fixed_bits(kvm, VTCR_EL2);
+	set_sysreg_masks(kvm, VTCR_EL2, resx);
 
 	/* VMPIDR_EL2 */
-	res0 = GENMASK(63, 40) | GENMASK(30, 24);
-	res1 = BIT(31);
-	set_sysreg_masks(kvm, VMPIDR_EL2, res0, res1);
+	resx.res0 = GENMASK(63, 40) | GENMASK(30, 24);
+	resx.res1 = BIT(31);
+	set_sysreg_masks(kvm, VMPIDR_EL2, resx);
 
 	/* HCR_EL2 */
-	get_reg_fixed_bits(kvm, HCR_EL2, &res0, &res1);
-	set_sysreg_masks(kvm, HCR_EL2, res0, res1);
+	resx = get_reg_fixed_bits(kvm, HCR_EL2);
+	set_sysreg_masks(kvm, HCR_EL2, resx);
 
 	/* HCRX_EL2 */
-	get_reg_fixed_bits(kvm, HCRX_EL2, &res0, &res1);
-	set_sysreg_masks(kvm, HCRX_EL2, res0, res1);
+	resx = get_reg_fixed_bits(kvm, HCRX_EL2);
+	set_sysreg_masks(kvm, HCRX_EL2, resx);
 
 	/* HFG[RW]TR_EL2 */
-	get_reg_fixed_bits(kvm, HFGRTR_EL2, &res0, &res1);
-	set_sysreg_masks(kvm, HFGRTR_EL2, res0, res1);
-	get_reg_fixed_bits(kvm, HFGWTR_EL2, &res0, &res1);
-	set_sysreg_masks(kvm, HFGWTR_EL2, res0, res1);
+	resx = get_reg_fixed_bits(kvm, HFGRTR_EL2);
+	set_sysreg_masks(kvm, HFGRTR_EL2, resx);
+	resx = get_reg_fixed_bits(kvm, HFGWTR_EL2);
+	set_sysreg_masks(kvm, HFGWTR_EL2, resx);
 
 	/* HDFG[RW]TR_EL2 */
-	get_reg_fixed_bits(kvm, HDFGRTR_EL2, &res0, &res1);
-	set_sysreg_masks(kvm, HDFGRTR_EL2, res0, res1);
-	get_reg_fixed_bits(kvm, HDFGWTR_EL2, &res0, &res1);
-	set_sysreg_masks(kvm, HDFGWTR_EL2, res0, res1);
+	resx = get_reg_fixed_bits(kvm, HDFGRTR_EL2);
+	set_sysreg_masks(kvm, HDFGRTR_EL2, resx);
+	resx = get_reg_fixed_bits(kvm, HDFGWTR_EL2);
+	set_sysreg_masks(kvm, HDFGWTR_EL2, resx);
 
 	/* HFGITR_EL2 */
-	get_reg_fixed_bits(kvm, HFGITR_EL2, &res0, &res1);
-	set_sysreg_masks(kvm, HFGITR_EL2, res0, res1);
+	resx = get_reg_fixed_bits(kvm, HFGITR_EL2);
+	set_sysreg_masks(kvm, HFGITR_EL2, resx);
 
 	/* HAFGRTR_EL2 - not a lot to see here */
-	get_reg_fixed_bits(kvm, HAFGRTR_EL2, &res0, &res1);
-	set_sysreg_masks(kvm, HAFGRTR_EL2, res0, res1);
+	resx = get_reg_fixed_bits(kvm, HAFGRTR_EL2);
+	set_sysreg_masks(kvm, HAFGRTR_EL2, resx);
 
 	/* HFG[RW]TR2_EL2 */
-	get_reg_fixed_bits(kvm, HFGRTR2_EL2, &res0, &res1);
-	set_sysreg_masks(kvm, HFGRTR2_EL2, res0, res1);
-	get_reg_fixed_bits(kvm, HFGWTR2_EL2, &res0, &res1);
-	set_sysreg_masks(kvm, HFGWTR2_EL2, res0, res1);
+	resx = get_reg_fixed_bits(kvm, HFGRTR2_EL2);
+	set_sysreg_masks(kvm, HFGRTR2_EL2, resx);
+	resx = get_reg_fixed_bits(kvm, HFGWTR2_EL2);
+	set_sysreg_masks(kvm, HFGWTR2_EL2, resx);
 
 	/* HDFG[RW]TR2_EL2 */
-	get_reg_fixed_bits(kvm, HDFGRTR2_EL2, &res0, &res1);
-	set_sysreg_masks(kvm, HDFGRTR2_EL2, res0, res1);
-	get_reg_fixed_bits(kvm, HDFGWTR2_EL2, &res0, &res1);
-	set_sysreg_masks(kvm, HDFGWTR2_EL2, res0, res1);
+	resx = get_reg_fixed_bits(kvm, HDFGRTR2_EL2);
+	set_sysreg_masks(kvm, HDFGRTR2_EL2, resx);
+	resx = get_reg_fixed_bits(kvm, HDFGWTR2_EL2);
+	set_sysreg_masks(kvm, HDFGWTR2_EL2, resx);
 
 	/* HFGITR2_EL2 */
-	get_reg_fixed_bits(kvm, HFGITR2_EL2, &res0, &res1);
-	set_sysreg_masks(kvm, HFGITR2_EL2, res0, res1);
+	resx = get_reg_fixed_bits(kvm, HFGITR2_EL2);
+	set_sysreg_masks(kvm, HFGITR2_EL2, resx);
 
 	/* TCR2_EL2 */
-	get_reg_fixed_bits(kvm, TCR2_EL2, &res0, &res1);
-	set_sysreg_masks(kvm, TCR2_EL2, res0, res1);
+	resx = get_reg_fixed_bits(kvm, TCR2_EL2);
+	set_sysreg_masks(kvm, TCR2_EL2, resx);
 
 	/* SCTLR_EL1 */
-	get_reg_fixed_bits(kvm, SCTLR_EL1, &res0, &res1);
-	set_sysreg_masks(kvm, SCTLR_EL1, res0, res1);
+	resx = get_reg_fixed_bits(kvm, SCTLR_EL1);
+	set_sysreg_masks(kvm, SCTLR_EL1, resx);
 
 	/* SCTLR2_ELx */
-	get_reg_fixed_bits(kvm, SCTLR2_EL1, &res0, &res1);
-	set_sysreg_masks(kvm, SCTLR2_EL1, res0, res1);
-	get_reg_fixed_bits(kvm, SCTLR2_EL2, &res0, &res1);
-	set_sysreg_masks(kvm, SCTLR2_EL2, res0, res1);
+	resx = get_reg_fixed_bits(kvm, SCTLR2_EL1);
+	set_sysreg_masks(kvm, SCTLR2_EL1, resx);
+	resx = get_reg_fixed_bits(kvm, SCTLR2_EL2);
+	set_sysreg_masks(kvm, SCTLR2_EL2, resx);
 
 	/* MDCR_EL2 */
-	get_reg_fixed_bits(kvm, MDCR_EL2, &res0, &res1);
-	set_sysreg_masks(kvm, MDCR_EL2, res0, res1);
+	resx = get_reg_fixed_bits(kvm, MDCR_EL2);
+	set_sysreg_masks(kvm, MDCR_EL2, resx);
 
 	/* CNTHCTL_EL2 */
-	res0 = GENMASK(63, 20);
-	res1 = 0;
+	resx.res0 = GENMASK(63, 20);
+	resx.res1 = 0;
 	if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RME, IMP))
-		res0 |= CNTHCTL_CNTPMASK | CNTHCTL_CNTVMASK;
+		resx.res0 |= CNTHCTL_CNTPMASK | CNTHCTL_CNTVMASK;
 	if (!kvm_has_feat(kvm, ID_AA64MMFR0_EL1, ECV, CNTPOFF)) {
-		res0 |= CNTHCTL_ECV;
+		resx.res0 |= CNTHCTL_ECV;
 		if (!kvm_has_feat(kvm, ID_AA64MMFR0_EL1, ECV, IMP))
-			res0 |= (CNTHCTL_EL1TVT | CNTHCTL_EL1TVCT |
-				 CNTHCTL_EL1NVPCT | CNTHCTL_EL1NVVCT);
+			resx.res0 |= (CNTHCTL_EL1TVT | CNTHCTL_EL1TVCT |
+				      CNTHCTL_EL1NVPCT | CNTHCTL_EL1NVVCT);
 	}
 	if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, VH, IMP))
-		res0 |= GENMASK(11, 8);
-	set_sysreg_masks(kvm, CNTHCTL_EL2, res0, res1);
+		resx.res0 |= GENMASK(11, 8);
+	set_sysreg_masks(kvm, CNTHCTL_EL2, resx);
 
 	/* ICH_HCR_EL2 */
-	res0 = ICH_HCR_EL2_RES0;
-	res1 = ICH_HCR_EL2_RES1;
+	resx.res0 = ICH_HCR_EL2_RES0;
+	resx.res1 = ICH_HCR_EL2_RES1;
 	if (!(kvm_vgic_global_state.ich_vtr_el2 & ICH_VTR_EL2_TDS))
-		res0 |= ICH_HCR_EL2_TDIR;
+		resx.res0 |= ICH_HCR_EL2_TDIR;
 	/* No GICv4 is presented to the guest */
-	res0 |= ICH_HCR_EL2_DVIM | ICH_HCR_EL2_vSGIEOICount;
-	set_sysreg_masks(kvm, ICH_HCR_EL2, res0, res1);
+	resx.res0 |= ICH_HCR_EL2_DVIM | ICH_HCR_EL2_vSGIEOICount;
+	set_sysreg_masks(kvm, ICH_HCR_EL2, resx);
 
 	/* VNCR_EL2 */
-	set_sysreg_masks(kvm, VNCR_EL2, VNCR_EL2_RES0, VNCR_EL2_RES1);
+	resx.res0 = VNCR_EL2_RES0;
+	resx.res1 = VNCR_EL2_RES1;
+	set_sysreg_masks(kvm, VNCR_EL2, resx);
 
 out:
 	for (enum vcpu_sysreg sr = __SANITISED_REG_START__; sr < NR_SYS_REGS; sr++)
-- 
2.47.3



^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH 05/20] KVM: arm64: Extend unified RESx handling to runtime sanitisation
  2026-01-26 12:16 [PATCH 00/20] KVM: arm64: Generalise RESx handling Marc Zyngier
                   ` (3 preceding siblings ...)
  2026-01-26 12:16 ` [PATCH 04/20] KVM: arm64: Introduce data structure tracking both RES0 and RES1 bits Marc Zyngier
@ 2026-01-26 12:16 ` Marc Zyngier
  2026-01-26 19:15   ` Fuad Tabba
  2026-01-26 12:16 ` [PATCH 06/20] KVM: arm64: Inherit RESx bits from FGT register descriptors Marc Zyngier
                   ` (14 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Marc Zyngier @ 2026-01-26 12:16 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Fuad Tabba, Will Deacon, Catalin Marinas

Add a new helper to retrieve the RESx values for a given system
register, and use it for the runtime sanitisation.

This results in slightly better code generation for a fairly hot
path in the hypervisor.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 13 +++++++++++++
 arch/arm64/kvm/emulate-nested.c   | 10 +---------
 arch/arm64/kvm/nested.c           | 13 ++++---------
 3 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a7e4cd8ebf56f..9dca94e4361f0 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -635,6 +635,19 @@ struct kvm_sysreg_masks {
 	struct resx mask[NR_SYS_REGS - __SANITISED_REG_START__];
 };
 
+#define kvm_get_sysreg_resx(k, sr)					\
+	({                                                              \
+		struct kvm_sysreg_masks *__masks;			\
+		struct resx __resx = {};				\
+									\
+		__masks = (k)->arch.sysreg_masks;			\
+		if (likely(__masks &&					\
+			   sr >= __SANITISED_REG_START__ &&		\
+			   sr < NR_SYS_REGS))				\
+			__resx = __masks->mask[sr - __SANITISED_REG_START__]; \
+		__resx;							\
+	})
+
 #define kvm_set_sysreg_resx(k, sr, resx)		\
 	do {						\
 		(k)->arch.sysreg_masks->mask[sr - __SANITISED_REG_START__] = resx; \
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 774cfbf5b43ba..43334cd2db9e5 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -2427,15 +2427,7 @@ static enum trap_behaviour compute_trap_behaviour(struct kvm_vcpu *vcpu,
 
 static u64 kvm_get_sysreg_res0(struct kvm *kvm, enum vcpu_sysreg sr)
 {
-	struct kvm_sysreg_masks *masks;
-
-	/* Only handle the VNCR-backed regs for now */
-	if (sr < __VNCR_START__)
-		return 0;
-
-	masks = kvm->arch.sysreg_masks;
-
-	return masks->mask[sr - __SANITISED_REG_START__].res0;
+	return kvm_get_sysreg_resx(kvm, sr).res0;
 }
 
 static bool check_fgt_bit(struct kvm_vcpu *vcpu, enum vcpu_sysreg sr,
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index c5a45bc62153e..75a23f1c56d13 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -1669,16 +1669,11 @@ u64 limit_nv_id_reg(struct kvm *kvm, u32 reg, u64 val)
 u64 kvm_vcpu_apply_reg_masks(const struct kvm_vcpu *vcpu,
 			     enum vcpu_sysreg sr, u64 v)
 {
-	struct kvm_sysreg_masks *masks;
-
-	masks = vcpu->kvm->arch.sysreg_masks;
-
-	if (masks) {
-		sr -= __SANITISED_REG_START__;
+	struct resx resx;
 
-		v &= ~masks->mask[sr].res0;
-		v |= masks->mask[sr].res1;
-	}
+	resx = kvm_get_sysreg_resx(vcpu->kvm, sr);
+	v &= ~resx.res0;
+	v |= resx.res1;
 
 	return v;
 }
-- 
2.47.3



^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH 06/20] KVM: arm64: Inherit RESx bits from FGT register descriptors
  2026-01-26 12:16 [PATCH 00/20] KVM: arm64: Generalise RESx handling Marc Zyngier
                   ` (4 preceding siblings ...)
  2026-01-26 12:16 ` [PATCH 05/20] KVM: arm64: Extend unified RESx handling to runtime sanitisation Marc Zyngier
@ 2026-01-26 12:16 ` Marc Zyngier
  2026-01-27 15:21   ` Joey Gouly
  2026-01-27 17:58   ` Fuad Tabba
  2026-01-26 12:16 ` [PATCH 07/20] KVM: arm64: Allow RES1 bits to be inferred from configuration Marc Zyngier
                   ` (13 subsequent siblings)
  19 siblings, 2 replies; 53+ messages in thread
From: Marc Zyngier @ 2026-01-26 12:16 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Fuad Tabba, Will Deacon, Catalin Marinas

The FGT registers have their computed RESx bits stashed in specific
descriptors, which we can easily use when computing the masks used
for the guest.

This removes a bit of boilerplate code.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/config.c | 16 +++++-----------
 1 file changed, 5 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index a907195bd44b6..8d152605999ba 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -1344,6 +1344,11 @@ struct resx compute_reg_resx_bits(struct kvm *kvm,
 	resx = compute_resx_bits(kvm, r->bit_feat_map, r->bit_feat_map_sz,
 				 require, exclude);
 
+	if (r->feat_map.flags & MASKS_POINTER) {
+		resx.res0 |= r->feat_map.masks->res0;
+		resx.res1 |= r->feat_map.masks->res1;
+	}
+
 	tmp = compute_resx_bits(kvm, &r->feat_map, 1, require, exclude);
 
 	resx.res0 |= tmp.res0;
@@ -1424,47 +1429,36 @@ struct resx get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg)
 	switch (reg) {
 	case HFGRTR_EL2:
 		resx = compute_reg_resx_bits(kvm, &hfgrtr_desc, 0, 0);
-		resx.res1 |= HFGRTR_EL2_RES1;
 		break;
 	case HFGWTR_EL2:
 		resx = compute_reg_resx_bits(kvm, &hfgwtr_desc, 0, 0);
-		resx.res1 |= HFGWTR_EL2_RES1;
 		break;
 	case HFGITR_EL2:
 		resx = compute_reg_resx_bits(kvm, &hfgitr_desc, 0, 0);
-		resx.res1 |= HFGITR_EL2_RES1;
 		break;
 	case HDFGRTR_EL2:
 		resx = compute_reg_resx_bits(kvm, &hdfgrtr_desc, 0, 0);
-		resx.res1 |= HDFGRTR_EL2_RES1;
 		break;
 	case HDFGWTR_EL2:
 		resx = compute_reg_resx_bits(kvm, &hdfgwtr_desc, 0, 0);
-		resx.res1 |= HDFGWTR_EL2_RES1;
 		break;
 	case HAFGRTR_EL2:
 		resx = compute_reg_resx_bits(kvm, &hafgrtr_desc, 0, 0);
-		resx.res1 |= HAFGRTR_EL2_RES1;
 		break;
 	case HFGRTR2_EL2:
 		resx = compute_reg_resx_bits(kvm, &hfgrtr2_desc, 0, 0);
-		resx.res1 |= HFGRTR2_EL2_RES1;
 		break;
 	case HFGWTR2_EL2:
 		resx = compute_reg_resx_bits(kvm, &hfgwtr2_desc, 0, 0);
-		resx.res1 |= HFGWTR2_EL2_RES1;
 		break;
 	case HFGITR2_EL2:
 		resx = compute_reg_resx_bits(kvm, &hfgitr2_desc, 0, 0);
-		resx.res1 |= HFGITR2_EL2_RES1;
 		break;
 	case HDFGRTR2_EL2:
 		resx = compute_reg_resx_bits(kvm, &hdfgrtr2_desc, 0, 0);
-		resx.res1 |= HDFGRTR2_EL2_RES1;
 		break;
 	case HDFGWTR2_EL2:
 		resx = compute_reg_resx_bits(kvm, &hdfgwtr2_desc, 0, 0);
-		resx.res1 |= HDFGWTR2_EL2_RES1;
 		break;
 	case HCRX_EL2:
 		resx = compute_reg_resx_bits(kvm, &hcrx_desc, 0, 0);
-- 
2.47.3



^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH 07/20] KVM: arm64: Allow RES1 bits to be inferred from configuration
  2026-01-26 12:16 [PATCH 00/20] KVM: arm64: Generalise RESx handling Marc Zyngier
                   ` (5 preceding siblings ...)
  2026-01-26 12:16 ` [PATCH 06/20] KVM: arm64: Inherit RESx bits from FGT register descriptors Marc Zyngier
@ 2026-01-26 12:16 ` Marc Zyngier
  2026-01-27 15:26   ` Joey Gouly
  2026-01-27 17:58   ` Fuad Tabba
  2026-01-26 12:16 ` [PATCH 08/20] KVM: arm64: Correctly handle SCTLR_EL1 RES1 bits for unsupported features Marc Zyngier
                   ` (12 subsequent siblings)
  19 siblings, 2 replies; 53+ messages in thread
From: Marc Zyngier @ 2026-01-26 12:16 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Fuad Tabba, Will Deacon, Catalin Marinas

So far, when a bit field is tied to an unsupported feature, we set
it as RES0. This is almost correct, but there are a few exceptions
where the bits become RES1.

Add an AS_RES1 qualifier that instructs the RESx computing code to
simply do that.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/config.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index 8d152605999ba..6a4674fabf865 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -24,6 +24,7 @@ struct reg_bits_to_feat_map {
 #define	CALL_FUNC	BIT(1)	/* Needs to evaluate tons of crap */
 #define	FIXED_VALUE	BIT(2)	/* RAZ/WI or RAO/WI in KVM */
 #define	MASKS_POINTER	BIT(3)	/* Pointer to fgt_masks struct instead of bits */
+#define	AS_RES1		BIT(4)	/* RES1 when not supported */
 
 	unsigned long	flags;
 
@@ -1316,8 +1317,12 @@ struct resx __compute_fixed_bits(struct kvm *kvm,
 		else
 			match = idreg_feat_match(kvm, &map[i]);
 
-		if (!match || (map[i].flags & FIXED_VALUE))
-			resx.res0 |= reg_feat_map_bits(&map[i]);
+		if (!match || (map[i].flags & FIXED_VALUE)) {
+			if (map[i].flags & AS_RES1)
+ 				resx.res1 |= reg_feat_map_bits(&map[i]);
+			else
+				resx.res0 |= reg_feat_map_bits(&map[i]);
+		}
 	}
 
 	return resx;
-- 
2.47.3



^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH 08/20] KVM: arm64: Correctly handle SCTLR_EL1 RES1 bits for unsupported features
  2026-01-26 12:16 [PATCH 00/20] KVM: arm64: Generalise RESx handling Marc Zyngier
                   ` (6 preceding siblings ...)
  2026-01-26 12:16 ` [PATCH 07/20] KVM: arm64: Allow RES1 bits to be inferred from configuration Marc Zyngier
@ 2026-01-26 12:16 ` Marc Zyngier
  2026-01-27 18:06   ` Fuad Tabba
  2026-01-26 12:16 ` [PATCH 09/20] KVM: arm64: Convert HCR_EL2.RW to AS_RES1 Marc Zyngier
                   ` (11 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Marc Zyngier @ 2026-01-26 12:16 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Fuad Tabba, Will Deacon, Catalin Marinas

A bunch of SCTLR_EL1 bits must be set to RES1 when the controlling
feature is not present. Add the AS_RES1 qualifier where needed.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/config.c | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index 6a4674fabf865..68ed5af2b4d53 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -1085,27 +1085,28 @@ static const DECLARE_FEAT_MAP(tcr2_el2_desc, TCR2_EL2,
 			      tcr2_el2_feat_map, FEAT_TCR2);
 
 static const struct reg_bits_to_feat_map sctlr_el1_feat_map[] = {
-	NEEDS_FEAT(SCTLR_EL1_CP15BEN	|
-		   SCTLR_EL1_ITD	|
-		   SCTLR_EL1_SED,
-		   FEAT_AA32EL0),
+	NEEDS_FEAT(SCTLR_EL1_CP15BEN, FEAT_AA32EL0),
+	NEEDS_FEAT_FLAG(SCTLR_EL1_ITD	|
+			SCTLR_EL1_SED,
+			AS_RES1, FEAT_AA32EL0),
 	NEEDS_FEAT(SCTLR_EL1_BT0	|
 		   SCTLR_EL1_BT1,
 		   FEAT_BTI),
 	NEEDS_FEAT(SCTLR_EL1_CMOW, FEAT_CMOW),
-	NEEDS_FEAT(SCTLR_EL1_TSCXT, feat_csv2_2_csv2_1p2),
-	NEEDS_FEAT(SCTLR_EL1_EIS	|
-		   SCTLR_EL1_EOS,
-		   FEAT_ExS),
+	NEEDS_FEAT_FLAG(SCTLR_EL1_TSCXT,
+			AS_RES1, feat_csv2_2_csv2_1p2),
+	NEEDS_FEAT_FLAG(SCTLR_EL1_EIS	|
+			SCTLR_EL1_EOS,
+			AS_RES1, FEAT_ExS),
 	NEEDS_FEAT(SCTLR_EL1_EnFPM, FEAT_FPMR),
 	NEEDS_FEAT(SCTLR_EL1_IESB, FEAT_IESB),
 	NEEDS_FEAT(SCTLR_EL1_EnALS, FEAT_LS64),
 	NEEDS_FEAT(SCTLR_EL1_EnAS0, FEAT_LS64_ACCDATA),
 	NEEDS_FEAT(SCTLR_EL1_EnASR, FEAT_LS64_V),
 	NEEDS_FEAT(SCTLR_EL1_nAA, FEAT_LSE2),
-	NEEDS_FEAT(SCTLR_EL1_LSMAOE	|
-		   SCTLR_EL1_nTLSMD,
-		   FEAT_LSMAOC),
+	NEEDS_FEAT_FLAG(SCTLR_EL1_LSMAOE	|
+			SCTLR_EL1_nTLSMD,
+			AS_RES1, FEAT_LSMAOC),
 	NEEDS_FEAT(SCTLR_EL1_EE, FEAT_MixedEnd),
 	NEEDS_FEAT(SCTLR_EL1_E0E, feat_mixedendel0),
 	NEEDS_FEAT(SCTLR_EL1_MSCEn, FEAT_MOPS),
@@ -1121,7 +1122,8 @@ static const struct reg_bits_to_feat_map sctlr_el1_feat_map[] = {
 	NEEDS_FEAT(SCTLR_EL1_NMI	|
 		   SCTLR_EL1_SPINTMASK,
 		   FEAT_NMI),
-	NEEDS_FEAT(SCTLR_EL1_SPAN, FEAT_PAN),
+	NEEDS_FEAT_FLAG(SCTLR_EL1_SPAN,
+			AS_RES1, FEAT_PAN),
 	NEEDS_FEAT(SCTLR_EL1_EPAN, FEAT_PAN3),
 	NEEDS_FEAT(SCTLR_EL1_EnDA	|
 		   SCTLR_EL1_EnDB	|
-- 
2.47.3



^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH 09/20] KVM: arm64: Convert HCR_EL2.RW to AS_RES1
  2026-01-26 12:16 [PATCH 00/20] KVM: arm64: Generalise RESx handling Marc Zyngier
                   ` (7 preceding siblings ...)
  2026-01-26 12:16 ` [PATCH 08/20] KVM: arm64: Correctly handle SCTLR_EL1 RES1 bits for unsupported features Marc Zyngier
@ 2026-01-26 12:16 ` Marc Zyngier
  2026-01-27 18:09   ` Fuad Tabba
  2026-01-26 12:16 ` [PATCH 10/20] KVM: arm64: Simplify FIXED_VALUE handling Marc Zyngier
                   ` (10 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Marc Zyngier @ 2026-01-26 12:16 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Fuad Tabba, Will Deacon, Catalin Marinas

Now that we have the AS_RES1 constraint, it becomes trivial to express
the HCR_EL2.RW behaviour.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/config.c | 15 +--------------
 1 file changed, 1 insertion(+), 14 deletions(-)

diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index 68ed5af2b4d53..39487182057a3 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -389,19 +389,6 @@ static bool feat_vmid16(struct kvm *kvm)
 	return kvm_has_feat_enum(kvm, ID_AA64MMFR1_EL1, VMIDBits, 16);
 }
 
-static bool compute_hcr_rw(struct kvm *kvm, u64 *bits)
-{
-	/* This is purely academic: AArch32 and NV are mutually exclusive */
-	if (bits) {
-		if (kvm_has_feat(kvm, FEAT_AA32EL1))
-			*bits &= ~HCR_EL2_RW;
-		else
-			*bits |= HCR_EL2_RW;
-	}
-
-	return true;
-}
-
 static bool compute_hcr_e2h(struct kvm *kvm, u64 *bits)
 {
 	if (bits) {
@@ -967,7 +954,7 @@ static const DECLARE_FEAT_MAP(hcrx_desc, __HCRX_EL2,
 
 static const struct reg_bits_to_feat_map hcr_feat_map[] = {
 	NEEDS_FEAT(HCR_EL2_TID0, FEAT_AA32EL0),
-	NEEDS_FEAT_FIXED(HCR_EL2_RW, compute_hcr_rw),
+	NEEDS_FEAT_FLAG(HCR_EL2_RW, AS_RES1, FEAT_AA32EL1),
 	NEEDS_FEAT(HCR_EL2_HCD, not_feat_aa64el3),
 	NEEDS_FEAT(HCR_EL2_AMO		|
 		   HCR_EL2_BSU		|
-- 
2.47.3



^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH 10/20] KVM: arm64: Simplify FIXED_VALUE handling
  2026-01-26 12:16 [PATCH 00/20] KVM: arm64: Generalise RESx handling Marc Zyngier
                   ` (8 preceding siblings ...)
  2026-01-26 12:16 ` [PATCH 09/20] KVM: arm64: Convert HCR_EL2.RW to AS_RES1 Marc Zyngier
@ 2026-01-26 12:16 ` Marc Zyngier
  2026-01-27 18:20   ` Fuad Tabba
  2026-01-26 12:16 ` [PATCH 11/20] KVM: arm64: Add REQUIRES_E2H1 constraint as configuration flags Marc Zyngier
                   ` (9 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Marc Zyngier @ 2026-01-26 12:16 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Fuad Tabba, Will Deacon, Catalin Marinas

The FIXED_VALUE qualifier (mostly used for HCR_EL2) is pointlessly
complicated, as it tries to piggy-back on the previous RES0 handling
while being done in a different phase, on different data.

Instead, make it an integral part of the RESx computation, and allow
it to directly set RESx bits. This is much easier to understand.

It also paves the way for some additional changes that will allow
the full removal of the FIXED_VALUE handling.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/config.c | 67 ++++++++++++++---------------------------
 1 file changed, 22 insertions(+), 45 deletions(-)

diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index 39487182057a3..4fac04d3132c0 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -37,7 +37,7 @@ struct reg_bits_to_feat_map {
 			s8	lo_lim;
 		};
 		bool	(*match)(struct kvm *);
-		bool	(*fval)(struct kvm *, u64 *);
+		bool	(*fval)(struct kvm *, struct resx *);
 	};
 };
 
@@ -389,14 +389,12 @@ static bool feat_vmid16(struct kvm *kvm)
 	return kvm_has_feat_enum(kvm, ID_AA64MMFR1_EL1, VMIDBits, 16);
 }
 
-static bool compute_hcr_e2h(struct kvm *kvm, u64 *bits)
+static bool compute_hcr_e2h(struct kvm *kvm, struct resx *bits)
 {
-	if (bits) {
-		if (kvm_has_feat(kvm, FEAT_E2H0))
-			*bits &= ~HCR_EL2_E2H;
-		else
-			*bits |= HCR_EL2_E2H;
-	}
+	if (kvm_has_feat(kvm, FEAT_E2H0))
+		bits->res0 |= HCR_EL2_E2H;
+	else
+		bits->res1 |= HCR_EL2_E2H;
 
 	return true;
 }
@@ -1281,12 +1279,11 @@ static bool idreg_feat_match(struct kvm *kvm, const struct reg_bits_to_feat_map
 }
 
 static
-struct resx __compute_fixed_bits(struct kvm *kvm,
-				const struct reg_bits_to_feat_map *map,
-				int map_size,
-				u64 *fixed_bits,
-				unsigned long require,
-				unsigned long exclude)
+struct resx compute_resx_bits(struct kvm *kvm,
+			      const struct reg_bits_to_feat_map *map,
+			      int map_size,
+			      unsigned long require,
+			      unsigned long exclude)
 {
 	struct resx resx = {};
 
@@ -1299,14 +1296,18 @@ struct resx __compute_fixed_bits(struct kvm *kvm,
 		if (map[i].flags & exclude)
 			continue;
 
-		if (map[i].flags & CALL_FUNC)
-			match = (map[i].flags & FIXED_VALUE) ?
-				map[i].fval(kvm, fixed_bits) :
-				map[i].match(kvm);
-		else
+		switch (map[i].flags & (CALL_FUNC | FIXED_VALUE)) {
+		case CALL_FUNC | FIXED_VALUE:
+			map[i].fval(kvm, &resx);
+			continue;
+		case CALL_FUNC:
+			match = map[i].match(kvm);
+			break;
+		default:
 			match = idreg_feat_match(kvm, &map[i]);
+		}
 
-		if (!match || (map[i].flags & FIXED_VALUE)) {
+		if (!match) {
 			if (map[i].flags & AS_RES1)
  				resx.res1 |= reg_feat_map_bits(&map[i]);
 			else
@@ -1317,17 +1318,6 @@ struct resx __compute_fixed_bits(struct kvm *kvm,
 	return resx;
 }
 
-static
-struct resx compute_resx_bits(struct kvm *kvm,
-			     const struct reg_bits_to_feat_map *map,
-			     int map_size,
-			     unsigned long require,
-			     unsigned long exclude)
-{
-	return __compute_fixed_bits(kvm, map, map_size, NULL,
-				    require, exclude | FIXED_VALUE);
-}
-
 static
 struct resx compute_reg_resx_bits(struct kvm *kvm,
 				 const struct reg_feat_map_desc *r,
@@ -1368,16 +1358,6 @@ static u64 compute_fgu_bits(struct kvm *kvm, const struct reg_feat_map_desc *r)
 	return resx.res0 | resx.res1;
 }
 
-static
-struct resx compute_reg_fixed_bits(struct kvm *kvm,
-				  const struct reg_feat_map_desc *r,
-				  u64 *fixed_bits, unsigned long require,
-				  unsigned long exclude)
-{
-	return __compute_fixed_bits(kvm, r->bit_feat_map, r->bit_feat_map_sz,
-				    fixed_bits, require | FIXED_VALUE, exclude);
-}
-
 void compute_fgu(struct kvm *kvm, enum fgt_group_id fgt)
 {
 	u64 val = 0;
@@ -1417,7 +1397,6 @@ void compute_fgu(struct kvm *kvm, enum fgt_group_id fgt)
 
 struct resx get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg)
 {
-	u64 fixed = 0, mask;
 	struct resx resx;
 
 	switch (reg) {
@@ -1459,10 +1438,8 @@ struct resx get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg)
 		resx.res1 |= __HCRX_EL2_RES1;
 		break;
 	case HCR_EL2:
-		mask = compute_reg_fixed_bits(kvm, &hcr_desc, &fixed, 0, 0).res0;
 		resx = compute_reg_resx_bits(kvm, &hcr_desc, 0, 0);
-		resx.res0 |= (mask & ~fixed);
-		resx.res1 |= HCR_EL2_RES1 | (mask & fixed);
+		resx.res1 |= HCR_EL2_RES1;
 		break;
 	case SCTLR2_EL1:
 	case SCTLR2_EL2:
-- 
2.47.3



^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH 11/20] KVM: arm64: Add REQUIRES_E2H1 constraint as configuration flags
  2026-01-26 12:16 [PATCH 00/20] KVM: arm64: Generalise RESx handling Marc Zyngier
                   ` (9 preceding siblings ...)
  2026-01-26 12:16 ` [PATCH 10/20] KVM: arm64: Simplify FIXED_VALUE handling Marc Zyngier
@ 2026-01-26 12:16 ` Marc Zyngier
  2026-01-27 18:28   ` Fuad Tabba
  2026-01-26 12:16 ` [PATCH 12/20] KVM: arm64: Add RESx_WHEN_E2Hx constraints " Marc Zyngier
                   ` (8 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Marc Zyngier @ 2026-01-26 12:16 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Fuad Tabba, Will Deacon, Catalin Marinas

A bunch of EL2 configurations are very similar to their EL1
counterparts, with the added constraint of HCR_EL2.E2H being 1.

For us, this means HCR_EL2.E2H being RES1, which is something we can
statically evaluate.

Add a REQUIRES_E2H1 constraint, which allows us to express conditions
in a much simpler way (without extra code). Existing occurrences are
converted, before we add a lot more.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/config.c | 38 ++++++++++++++------------------------
 1 file changed, 14 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index 4fac04d3132c0..1990cebc77c66 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -25,6 +25,7 @@ struct reg_bits_to_feat_map {
 #define	FIXED_VALUE	BIT(2)	/* RAZ/WI or RAO/WI in KVM */
 #define	MASKS_POINTER	BIT(3)	/* Pointer to fgt_masks struct instead of bits */
 #define	AS_RES1		BIT(4)	/* RES1 when not supported */
+#define	REQUIRES_E2H1	BIT(5)	/* Add HCR_EL2.E2H RES1 as a pre-condition */
 
 	unsigned long	flags;
 
@@ -311,21 +312,6 @@ static bool feat_trbe_mpam(struct kvm *kvm)
 		(read_sysreg_s(SYS_TRBIDR_EL1) & TRBIDR_EL1_MPAM));
 }
 
-static bool feat_asid2_e2h1(struct kvm *kvm)
-{
-	return kvm_has_feat(kvm, FEAT_ASID2) && !kvm_has_feat(kvm, FEAT_E2H0);
-}
-
-static bool feat_d128_e2h1(struct kvm *kvm)
-{
-	return kvm_has_feat(kvm, FEAT_D128) && !kvm_has_feat(kvm, FEAT_E2H0);
-}
-
-static bool feat_mec_e2h1(struct kvm *kvm)
-{
-	return kvm_has_feat(kvm, FEAT_MEC) && !kvm_has_feat(kvm, FEAT_E2H0);
-}
-
 static bool feat_ebep_pmuv3_ss(struct kvm *kvm)
 {
 	return kvm_has_feat(kvm, FEAT_EBEP) || kvm_has_feat(kvm, FEAT_PMUv3_SS);
@@ -1045,15 +1031,15 @@ static const DECLARE_FEAT_MAP(sctlr2_desc, SCTLR2_EL1,
 			      sctlr2_feat_map, FEAT_SCTLR2);
 
 static const struct reg_bits_to_feat_map tcr2_el2_feat_map[] = {
-	NEEDS_FEAT(TCR2_EL2_FNG1	|
-		   TCR2_EL2_FNG0	|
-		   TCR2_EL2_A2,
-		   feat_asid2_e2h1),
-	NEEDS_FEAT(TCR2_EL2_DisCH1	|
-		   TCR2_EL2_DisCH0	|
-		   TCR2_EL2_D128,
-		   feat_d128_e2h1),
-	NEEDS_FEAT(TCR2_EL2_AMEC1, feat_mec_e2h1),
+	NEEDS_FEAT_FLAG(TCR2_EL2_FNG1	|
+			TCR2_EL2_FNG0	|
+			TCR2_EL2_A2,
+			REQUIRES_E2H1, FEAT_ASID2),
+	NEEDS_FEAT_FLAG(TCR2_EL2_DisCH1	|
+			TCR2_EL2_DisCH0	|
+			TCR2_EL2_D128,
+			REQUIRES_E2H1, FEAT_D128),
+	NEEDS_FEAT_FLAG(TCR2_EL2_AMEC1, REQUIRES_E2H1, FEAT_MEC),
 	NEEDS_FEAT(TCR2_EL2_AMEC0, FEAT_MEC),
 	NEEDS_FEAT(TCR2_EL2_HAFT, FEAT_HAFT),
 	NEEDS_FEAT(TCR2_EL2_PTTWI	|
@@ -1285,6 +1271,7 @@ struct resx compute_resx_bits(struct kvm *kvm,
 			      unsigned long require,
 			      unsigned long exclude)
 {
+	bool e2h0 = kvm_has_feat(kvm, FEAT_E2H0);
 	struct resx resx = {};
 
 	for (int i = 0; i < map_size; i++) {
@@ -1307,6 +1294,9 @@ struct resx compute_resx_bits(struct kvm *kvm,
 			match = idreg_feat_match(kvm, &map[i]);
 		}
 
+		if (map[i].flags & REQUIRES_E2H1)
+			match &= !e2h0;
+		
 		if (!match) {
 			if (map[i].flags & AS_RES1)
  				resx.res1 |= reg_feat_map_bits(&map[i]);
-- 
2.47.3



^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH 12/20] KVM: arm64: Add RESx_WHEN_E2Hx constraints as configuration flags
  2026-01-26 12:16 [PATCH 00/20] KVM: arm64: Generalise RESx handling Marc Zyngier
                   ` (10 preceding siblings ...)
  2026-01-26 12:16 ` [PATCH 11/20] KVM: arm64: Add REQUIRES_E2H1 constraint as configuration flags Marc Zyngier
@ 2026-01-26 12:16 ` Marc Zyngier
  2026-01-28 17:43   ` Fuad Tabba
  2026-01-26 12:16 ` [PATCH 13/20] KVM: arm64: Move RESx into individual register descriptors Marc Zyngier
                   ` (7 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Marc Zyngier @ 2026-01-26 12:16 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Fuad Tabba, Will Deacon, Catalin Marinas

"Thanks" to VHE, SCTLR_EL2 radically changes shape depending on the
value of HCR_EL2.E2H, as a lot of the bits that didn't have much
meaning with E2H=0 start impacting EL0 with E2H=1.

This has a direct impact on the RESx behaviour of these bits, and
we need a way to express them.

For this purpose, introduce a set of 4 new constraints that, when
the controlling feature is not present, force the RESx value to
be either 0 or 1 depending on the value of E2H.

This allows diverging RESx values depending on the value of E2H,
something that is required by a bunch of SCTLR_EL2 bits.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/config.c | 24 +++++++++++++++++++++---
 1 file changed, 21 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index 1990cebc77c66..7063fffc22799 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -26,6 +26,10 @@ struct reg_bits_to_feat_map {
 #define	MASKS_POINTER	BIT(3)	/* Pointer to fgt_masks struct instead of bits */
 #define	AS_RES1		BIT(4)	/* RES1 when not supported */
 #define	REQUIRES_E2H1	BIT(5)	/* Add HCR_EL2.E2H RES1 as a pre-condition */
+#define	RES0_WHEN_E2H0	BIT(6)	/* RES0 when E2H=0 and not supported */
+#define	RES0_WHEN_E2H1	BIT(7)	/* RES0 when E2H=1 and not supported */
+#define	RES1_WHEN_E2H0	BIT(8)	/* RES1 when E2H=0 and not supported */
+#define	RES1_WHEN_E2H1	BIT(9)	/* RES1 when E2H=1 and not supported */
 
 	unsigned long	flags;
 
@@ -1298,10 +1302,24 @@ struct resx compute_resx_bits(struct kvm *kvm,
 			match &= !e2h0;
 		
 		if (!match) {
+			u64 bits = reg_feat_map_bits(&map[i]);
+
+			if (e2h0) {
+				if      (map[i].flags & RES1_WHEN_E2H0)
+					resx.res1 |= bits;
+				else if (map[i].flags & RES0_WHEN_E2H0)
+					resx.res0 |= bits;
+			} else {
+				if      (map[i].flags & RES1_WHEN_E2H1)
+					resx.res1 |= bits;
+				else if (map[i].flags & RES0_WHEN_E2H1)
+					resx.res0 |= bits;
+			}
+
 			if (map[i].flags & AS_RES1)
- 				resx.res1 |= reg_feat_map_bits(&map[i]);
-			else
-				resx.res0 |= reg_feat_map_bits(&map[i]);
+				resx.res1 |= bits;
+			else if (!(resx.res1 & bits))
+				resx.res0 |= bits;
 		}
 	}
 
-- 
2.47.3



^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH 13/20] KVM: arm64: Move RESx into individual register descriptors
  2026-01-26 12:16 [PATCH 00/20] KVM: arm64: Generalise RESx handling Marc Zyngier
                   ` (11 preceding siblings ...)
  2026-01-26 12:16 ` [PATCH 12/20] KVM: arm64: Add RESx_WHEN_E2Hx constraints " Marc Zyngier
@ 2026-01-26 12:16 ` Marc Zyngier
  2026-01-29 16:29   ` Fuad Tabba
  2026-01-26 12:16 ` [PATCH 14/20] KVM: arm64: Simplify handling of HCR_EL2.E2H RESx Marc Zyngier
                   ` (6 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Marc Zyngier @ 2026-01-26 12:16 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Fuad Tabba, Will Deacon, Catalin Marinas

Instead of hacking the RES1 bits at runtime, move them into the
register descriptors. This makes it significantly nicer.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/config.c | 36 +++++++++++++++++++++++++++++-------
 1 file changed, 29 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index 7063fffc22799..d5871758f1fcc 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -30,6 +30,7 @@ struct reg_bits_to_feat_map {
 #define	RES0_WHEN_E2H1	BIT(7)	/* RES0 when E2H=1 and not supported */
 #define	RES1_WHEN_E2H0	BIT(8)	/* RES1 when E2H=0 and not supported */
 #define	RES1_WHEN_E2H1	BIT(9)	/* RES1 when E2H=1 and not supported */
+#define	FORCE_RESx	BIT(10)	/* Unconditional RESx */
 
 	unsigned long	flags;
 
@@ -107,6 +108,11 @@ struct reg_feat_map_desc {
  */
 #define NEEDS_FEAT(m, ...)	NEEDS_FEAT_FLAG(m, 0, __VA_ARGS__)
 
+/* Declare fixed RESx bits */
+#define FORCE_RES0(m)		NEEDS_FEAT_FLAG(m, FORCE_RESx, enforce_resx)
+#define FORCE_RES1(m)		NEEDS_FEAT_FLAG(m, FORCE_RESx | AS_RES1, \
+						enforce_resx)
+
 /*
  * Declare the dependency between a non-FGT register, a set of
  * feature, and the set of individual bits it contains. This generates
@@ -230,6 +236,15 @@ struct reg_feat_map_desc {
 #define FEAT_HCX		ID_AA64MMFR1_EL1, HCX, IMP
 #define FEAT_S2PIE		ID_AA64MMFR3_EL1, S2PIE, IMP
 
+static bool enforce_resx(struct kvm *kvm)
+{
+	/*
+	 * Returning false here means that the RESx bits will always be
+	 * added to the fixed bit set. Yes, this is counter-intuitive.
+	 */
+	return false;
+}
+
 static bool not_feat_aa64el3(struct kvm *kvm)
 {
 	return !kvm_has_feat(kvm, FEAT_AA64EL3);
@@ -1009,6 +1024,8 @@ static const struct reg_bits_to_feat_map hcr_feat_map[] = {
 		   HCR_EL2_TWEDEn,
 		   FEAT_TWED),
 	NEEDS_FEAT_FIXED(HCR_EL2_E2H, compute_hcr_e2h),
+	FORCE_RES0(HCR_EL2_RES0),
+	FORCE_RES1(HCR_EL2_RES1),
 };
 
 static const DECLARE_FEAT_MAP(hcr_desc, HCR_EL2,
@@ -1029,6 +1046,8 @@ static const struct reg_bits_to_feat_map sctlr2_feat_map[] = {
 		   SCTLR2_EL1_CPTM	|
 		   SCTLR2_EL1_CPTM0,
 		   FEAT_CPA2),
+	FORCE_RES0(SCTLR2_EL1_RES0),
+	FORCE_RES1(SCTLR2_EL1_RES1),
 };
 
 static const DECLARE_FEAT_MAP(sctlr2_desc, SCTLR2_EL1,
@@ -1054,6 +1073,8 @@ static const struct reg_bits_to_feat_map tcr2_el2_feat_map[] = {
 		   TCR2_EL2_E0POE,
 		   FEAT_S1POE),
 	NEEDS_FEAT(TCR2_EL2_PIE, FEAT_S1PIE),
+	FORCE_RES0(TCR2_EL2_RES0),
+	FORCE_RES1(TCR2_EL2_RES1),
 };
 
 static const DECLARE_FEAT_MAP(tcr2_el2_desc, TCR2_EL2,
@@ -1131,6 +1152,8 @@ static const struct reg_bits_to_feat_map sctlr_el1_feat_map[] = {
 		   SCTLR_EL1_A		|
 		   SCTLR_EL1_M,
 		   FEAT_AA64EL1),
+	FORCE_RES0(SCTLR_EL1_RES0),
+	FORCE_RES1(SCTLR_EL1_RES1),
 };
 
 static const DECLARE_FEAT_MAP(sctlr_el1_desc, SCTLR_EL1,
@@ -1165,6 +1188,8 @@ static const struct reg_bits_to_feat_map mdcr_el2_feat_map[] = {
 		   MDCR_EL2_TDE		|
 		   MDCR_EL2_TDRA,
 		   FEAT_AA64EL1),
+	FORCE_RES0(MDCR_EL2_RES0),
+	FORCE_RES1(MDCR_EL2_RES1),
 };
 
 static const DECLARE_FEAT_MAP(mdcr_el2_desc, MDCR_EL2,
@@ -1203,6 +1228,8 @@ static const struct reg_bits_to_feat_map vtcr_el2_feat_map[] = {
 		   VTCR_EL2_SL0		|
 		   VTCR_EL2_T0SZ,
 		   FEAT_AA64EL1),
+	FORCE_RES0(VTCR_EL2_RES0),
+	FORCE_RES1(VTCR_EL2_RES1),
 };
 
 static const DECLARE_FEAT_MAP(vtcr_el2_desc, VTCR_EL2,
@@ -1214,7 +1241,8 @@ static void __init check_feat_map(const struct reg_bits_to_feat_map *map,
 	u64 mask = 0;
 
 	for (int i = 0; i < map_size; i++)
-		mask |= map[i].bits;
+		if (!(map[i].flags & FORCE_RESx))
+			mask |= map[i].bits;
 
 	if (mask != ~resx)
 		kvm_err("Undefined %s behaviour, bits %016llx\n",
@@ -1447,28 +1475,22 @@ struct resx get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg)
 		break;
 	case HCR_EL2:
 		resx = compute_reg_resx_bits(kvm, &hcr_desc, 0, 0);
-		resx.res1 |= HCR_EL2_RES1;
 		break;
 	case SCTLR2_EL1:
 	case SCTLR2_EL2:
 		resx = compute_reg_resx_bits(kvm, &sctlr2_desc, 0, 0);
-		resx.res1 |= SCTLR2_EL1_RES1;
 		break;
 	case TCR2_EL2:
 		resx = compute_reg_resx_bits(kvm, &tcr2_el2_desc, 0, 0);
-		resx.res1 |= TCR2_EL2_RES1;
 		break;
 	case SCTLR_EL1:
 		resx = compute_reg_resx_bits(kvm, &sctlr_el1_desc, 0, 0);
-		resx.res1 |= SCTLR_EL1_RES1;
 		break;
 	case MDCR_EL2:
 		resx = compute_reg_resx_bits(kvm, &mdcr_el2_desc, 0, 0);
-		resx.res1 |= MDCR_EL2_RES1;
 		break;
 	case VTCR_EL2:
 		resx = compute_reg_resx_bits(kvm, &vtcr_el2_desc, 0, 0);
-		resx.res1 |= VTCR_EL2_RES1;
 		break;
 	default:
 		WARN_ON_ONCE(1);
-- 
2.47.3



^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH 14/20] KVM: arm64: Simplify handling of HCR_EL2.E2H RESx
  2026-01-26 12:16 [PATCH 00/20] KVM: arm64: Generalise RESx handling Marc Zyngier
                   ` (12 preceding siblings ...)
  2026-01-26 12:16 ` [PATCH 13/20] KVM: arm64: Move RESx into individual register descriptors Marc Zyngier
@ 2026-01-26 12:16 ` Marc Zyngier
  2026-01-29 16:41   ` Fuad Tabba
  2026-01-26 12:16 ` [PATCH 15/20] KVM: arm64: Get rid of FIXED_VALUE altogether Marc Zyngier
                   ` (5 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Marc Zyngier @ 2026-01-26 12:16 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Fuad Tabba, Will Deacon, Catalin Marinas

Now that we can link the RESx behaviour with the value of HCR_EL2.E2H,
we can trivially express the tautological constraint that makes E2H
a reserved value at all times.

Fun, isn't it?

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/config.c | 13 ++-----------
 1 file changed, 2 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index d5871758f1fcc..187d047a9cf4a 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -394,16 +394,6 @@ static bool feat_vmid16(struct kvm *kvm)
 	return kvm_has_feat_enum(kvm, ID_AA64MMFR1_EL1, VMIDBits, 16);
 }
 
-static bool compute_hcr_e2h(struct kvm *kvm, struct resx *bits)
-{
-	if (kvm_has_feat(kvm, FEAT_E2H0))
-		bits->res0 |= HCR_EL2_E2H;
-	else
-		bits->res1 |= HCR_EL2_E2H;
-
-	return true;
-}
-
 static const struct reg_bits_to_feat_map hfgrtr_feat_map[] = {
 	NEEDS_FEAT(HFGRTR_EL2_nAMAIR2_EL1	|
 		   HFGRTR_EL2_nMAIR2_EL1,
@@ -1023,7 +1013,8 @@ static const struct reg_bits_to_feat_map hcr_feat_map[] = {
 	NEEDS_FEAT(HCR_EL2_TWEDEL	|
 		   HCR_EL2_TWEDEn,
 		   FEAT_TWED),
-	NEEDS_FEAT_FIXED(HCR_EL2_E2H, compute_hcr_e2h),
+	NEEDS_FEAT_FLAG(HCR_EL2_E2H, RES0_WHEN_E2H0 | RES1_WHEN_E2H1,
+			enforce_resx),
 	FORCE_RES0(HCR_EL2_RES0),
 	FORCE_RES1(HCR_EL2_RES1),
 };
-- 
2.47.3




* [PATCH 15/20] KVM: arm64: Get rid of FIXED_VALUE altogether
  2026-01-26 12:16 [PATCH 00/20] KVM: arm64: Generalise RESx handling Marc Zyngier
                   ` (13 preceding siblings ...)
  2026-01-26 12:16 ` [PATCH 14/20] KVM: arm64: Simplify handling of HCR_EL2.E2H RESx Marc Zyngier
@ 2026-01-26 12:16 ` Marc Zyngier
  2026-01-29 16:54   ` Fuad Tabba
  2026-01-26 12:16 ` [PATCH 16/20] KVM: arm64: Simplify handling of full register invalid constraint Marc Zyngier
                   ` (4 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Marc Zyngier @ 2026-01-26 12:16 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Fuad Tabba, Will Deacon, Catalin Marinas

We have now killed every occurrence of FIXED_VALUE, and we can therefore
drop the whole infrastructure. Good riddance.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/config.c | 24 +++---------------------
 1 file changed, 3 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index 187d047a9cf4a..28e534f2850ea 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -22,7 +22,7 @@ struct reg_bits_to_feat_map {
 
 #define	NEVER_FGU	BIT(0)	/* Can trap, but never UNDEF */
 #define	CALL_FUNC	BIT(1)	/* Needs to evaluate tons of crap */
-#define	FIXED_VALUE	BIT(2)	/* RAZ/WI or RAO/WI in KVM */
+#define	FORCE_RESx	BIT(2)	/* Unconditional RESx */
 #define	MASKS_POINTER	BIT(3)	/* Pointer to fgt_masks struct instead of bits */
 #define	AS_RES1		BIT(4)	/* RES1 when not supported */
 #define	REQUIRES_E2H1	BIT(5)	/* Add HCR_EL2.E2H RES1 as a pre-condition */
@@ -30,7 +30,6 @@ struct reg_bits_to_feat_map {
 #define	RES0_WHEN_E2H1	BIT(7)	/* RES0 when E2H=1 and not supported */
 #define	RES1_WHEN_E2H0	BIT(8)	/* RES1 when E2H=0 and not supported */
 #define	RES1_WHEN_E2H1	BIT(9)	/* RES1 when E2H=1 and not supported */
-#define	FORCE_RESx	BIT(10)	/* Unconditional RESx */
 
 	unsigned long	flags;
 
@@ -43,7 +42,6 @@ struct reg_bits_to_feat_map {
 			s8	lo_lim;
 		};
 		bool	(*match)(struct kvm *);
-		bool	(*fval)(struct kvm *, struct resx *);
 	};
 };
 
@@ -76,13 +74,6 @@ struct reg_feat_map_desc {
 		.lo_lim	= id ##_## fld ##_## lim	\
 	}
 
-#define __NEEDS_FEAT_2(m, f, w, fun, dummy)		\
-	{						\
-		.w	= (m),				\
-		.flags = (f) | CALL_FUNC,		\
-		.fval = (fun),				\
-	}
-
 #define __NEEDS_FEAT_1(m, f, w, fun)			\
 	{						\
 		.w	= (m),				\
@@ -96,9 +87,6 @@ struct reg_feat_map_desc {
 #define NEEDS_FEAT_FLAG(m, f, ...)			\
 	__NEEDS_FEAT_FLAG(m, f, bits, __VA_ARGS__)
 
-#define NEEDS_FEAT_FIXED(m, ...)			\
-	__NEEDS_FEAT_FLAG(m, FIXED_VALUE, bits, __VA_ARGS__, 0)
-
 #define NEEDS_FEAT_MASKS(p, ...)				\
 	__NEEDS_FEAT_FLAG(p, MASKS_POINTER, masks, __VA_ARGS__)
 
@@ -1306,16 +1294,10 @@ struct resx compute_resx_bits(struct kvm *kvm,
 		if (map[i].flags & exclude)
 			continue;
 
-		switch (map[i].flags & (CALL_FUNC | FIXED_VALUE)) {
-		case CALL_FUNC | FIXED_VALUE:
-			map[i].fval(kvm, &resx);
-			continue;
-		case CALL_FUNC:
+		if (map[i].flags & CALL_FUNC)
 			match = map[i].match(kvm);
-			break;
-		default:
+		else
 			match = idreg_feat_match(kvm, &map[i]);
-		}
 
 		if (map[i].flags & REQUIRES_E2H1)
 			match &= !e2h0;
-- 
2.47.3




* [PATCH 16/20] KVM: arm64: Simplify handling of full register invalid constraint
  2026-01-26 12:16 [PATCH 00/20] KVM: arm64: Generalise RESx handling Marc Zyngier
                   ` (14 preceding siblings ...)
  2026-01-26 12:16 ` [PATCH 15/20] KVM: arm64: Get rid of FIXED_VALUE altogether Marc Zyngier
@ 2026-01-26 12:16 ` Marc Zyngier
  2026-01-29 17:34   ` Fuad Tabba
  2026-01-26 12:16 ` [PATCH 17/20] KVM: arm64: Remove all traces of FEAT_TME Marc Zyngier
                   ` (3 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Marc Zyngier @ 2026-01-26 12:16 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Fuad Tabba, Will Deacon, Catalin Marinas

Now that we embed the RESx bits in the register description, it becomes
easier to deal with registers that are simply not valid, because the
configuration does not satisfy the feature they depend on (SCTLR2_ELx without
FEAT_SCTLR2, for example). Such registers essentially become RES0 for
any bit that wasn't already advertised as RESx.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/config.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index 28e534f2850ea..0c037742215ac 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -1332,7 +1332,7 @@ struct resx compute_reg_resx_bits(struct kvm *kvm,
 				 const struct reg_feat_map_desc *r,
 				 unsigned long require, unsigned long exclude)
 {
-	struct resx resx, tmp;
+	struct resx resx;
 
 	resx = compute_resx_bits(kvm, r->bit_feat_map, r->bit_feat_map_sz,
 				 require, exclude);
@@ -1342,11 +1342,14 @@ struct resx compute_reg_resx_bits(struct kvm *kvm,
 		resx.res1 |= r->feat_map.masks->res1;
 	}
 
-	tmp = compute_resx_bits(kvm, &r->feat_map, 1, require, exclude);
-
-	resx.res0 |= tmp.res0;
-	resx.res0 |= ~reg_feat_map_bits(&r->feat_map);
-	resx.res1 |= tmp.res1;
+	/*
+	 * If the register itself was not valid, all the non-RESx bits are
+	 * now considered RES0 (this matches the behaviour of registers such
+	 * as SCTLR2 and TCR2). Weed out any potential (though unlikely)
+	 * overlap with RES1 bits coming from the previous computation.
+	 */
+	resx.res0 |= compute_resx_bits(kvm, &r->feat_map, 1, require, exclude).res0;
+	resx.res1 &= ~resx.res0;
 
 	return resx;
 }
-- 
2.47.3




* [PATCH 17/20] KVM: arm64: Remove all traces of FEAT_TME
  2026-01-26 12:16 [PATCH 00/20] KVM: arm64: Generalise RESx handling Marc Zyngier
                   ` (15 preceding siblings ...)
  2026-01-26 12:16 ` [PATCH 16/20] KVM: arm64: Simplify handling of full register invalid constraint Marc Zyngier
@ 2026-01-26 12:16 ` Marc Zyngier
  2026-01-29 17:43   ` Fuad Tabba
  2026-01-26 12:16 ` [PATCH 18/20] KVM: arm64: Remove all traces of HCR_EL2.MIOCNCE Marc Zyngier
                   ` (2 subsequent siblings)
  19 siblings, 1 reply; 53+ messages in thread
From: Marc Zyngier @ 2026-01-26 12:16 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Fuad Tabba, Will Deacon, Catalin Marinas

FEAT_TME has been dropped from the architecture. Retrospectively.
I'm sure someone is crying somewhere, but most of us won't.

Clean-up time.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/config.c                         |  7 -------
 arch/arm64/kvm/nested.c                         |  5 -----
 arch/arm64/tools/sysreg                         | 12 +++---------
 tools/perf/Documentation/perf-arm-spe.txt       |  1 -
 tools/testing/selftests/kvm/arm64/set_id_regs.c |  1 -
 5 files changed, 3 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index 0c037742215ac..f892098b70c0b 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -184,7 +184,6 @@ struct reg_feat_map_desc {
 #define FEAT_RME		ID_AA64PFR0_EL1, RME, IMP
 #define FEAT_MPAM		ID_AA64PFR0_EL1, MPAM, 1
 #define FEAT_S2FWB		ID_AA64MMFR2_EL1, FWB, IMP
-#define FEAT_TME		ID_AA64ISAR0_EL1, TME, IMP
 #define FEAT_TWED		ID_AA64MMFR1_EL1, TWED, IMP
 #define FEAT_E2H0		ID_AA64MMFR4_EL1, E2H0, IMP
 #define FEAT_SRMASK		ID_AA64MMFR4_EL1, SRMASK, IMP
@@ -997,7 +996,6 @@ static const struct reg_bits_to_feat_map hcr_feat_map[] = {
 	NEEDS_FEAT(HCR_EL2_FIEN, feat_rasv1p1),
 	NEEDS_FEAT(HCR_EL2_GPF, FEAT_RME),
 	NEEDS_FEAT(HCR_EL2_FWB, FEAT_S2FWB),
-	NEEDS_FEAT(HCR_EL2_TME, FEAT_TME),
 	NEEDS_FEAT(HCR_EL2_TWEDEL	|
 		   HCR_EL2_TWEDEn,
 		   FEAT_TWED),
@@ -1109,11 +1107,6 @@ static const struct reg_bits_to_feat_map sctlr_el1_feat_map[] = {
 	NEEDS_FEAT(SCTLR_EL1_EnRCTX, FEAT_SPECRES),
 	NEEDS_FEAT(SCTLR_EL1_DSSBS, FEAT_SSBS),
 	NEEDS_FEAT(SCTLR_EL1_TIDCP, FEAT_TIDCP1),
-	NEEDS_FEAT(SCTLR_EL1_TME0	|
-		   SCTLR_EL1_TME	|
-		   SCTLR_EL1_TMT0	|
-		   SCTLR_EL1_TMT,
-		   FEAT_TME),
 	NEEDS_FEAT(SCTLR_EL1_TWEDEL	|
 		   SCTLR_EL1_TWEDEn,
 		   FEAT_TWED),
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 75a23f1c56d13..96e899dbd9192 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -1505,11 +1505,6 @@ u64 limit_nv_id_reg(struct kvm *kvm, u32 reg, u64 val)
 	u64 orig_val = val;
 
 	switch (reg) {
-	case SYS_ID_AA64ISAR0_EL1:
-		/* Support everything but TME */
-		val &= ~ID_AA64ISAR0_EL1_TME;
-		break;
-
 	case SYS_ID_AA64ISAR1_EL1:
 		/* Support everything but LS64 and Spec Invalidation */
 		val &= ~(ID_AA64ISAR1_EL1_LS64	|
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 969a75615d612..650d7d477087e 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -1856,10 +1856,7 @@ UnsignedEnum	31:28	RDM
 	0b0000	NI
 	0b0001	IMP
 EndEnum
-UnsignedEnum	27:24	TME
-	0b0000	NI
-	0b0001	IMP
-EndEnum
+Res0	27:24
 UnsignedEnum	23:20	ATOMIC
 	0b0000	NI
 	0b0010	IMP
@@ -2432,10 +2429,7 @@ Field	57	EPAN
 Field	56	EnALS
 Field	55	EnAS0
 Field	54	EnASR
-Field	53	TME
-Field	52	TME0
-Field	51	TMT
-Field	50	TMT0
+Res0	53:50
 Field	49:46	TWEDEL
 Field	45	TWEDEn
 Field	44	DSSBS
@@ -3840,7 +3834,7 @@ Field	43	NV1
 Field	42	NV
 Field	41	API
 Field	40	APK
-Field	39	TME
+Res0	39
 Field	38	MIOCNCE
 Field	37	TEA
 Field	36	TERR
diff --git a/tools/perf/Documentation/perf-arm-spe.txt b/tools/perf/Documentation/perf-arm-spe.txt
index 8b02e5b983fa9..201a82bec0de4 100644
--- a/tools/perf/Documentation/perf-arm-spe.txt
+++ b/tools/perf/Documentation/perf-arm-spe.txt
@@ -176,7 +176,6 @@ and inv_event_filter are:
   bit 10    - Remote access (FEAT_SPEv1p4)
   bit 11    - Misaligned access (FEAT_SPEv1p1)
   bit 12-15 - IMPLEMENTATION DEFINED events (when implemented)
-  bit 16    - Transaction (FEAT_TME)
   bit 17    - Partial or empty SME or SVE predicate (FEAT_SPEv1p1)
   bit 18    - Empty SME or SVE predicate (FEAT_SPEv1p1)
   bit 19    - L2D access (FEAT_SPEv1p4)
diff --git a/tools/testing/selftests/kvm/arm64/set_id_regs.c b/tools/testing/selftests/kvm/arm64/set_id_regs.c
index c4815d3658167..73de5be58bab0 100644
--- a/tools/testing/selftests/kvm/arm64/set_id_regs.c
+++ b/tools/testing/selftests/kvm/arm64/set_id_regs.c
@@ -91,7 +91,6 @@ static const struct reg_ftr_bits ftr_id_aa64isar0_el1[] = {
 	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, SM3, 0),
 	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, SHA3, 0),
 	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, RDM, 0),
-	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, TME, 0),
 	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, ATOMIC, 0),
 	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, CRC32, 0),
 	REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, SHA2, 0),
-- 
2.47.3




* [PATCH 18/20] KVM: arm64: Remove all traces of HCR_EL2.MIOCNCE
  2026-01-26 12:16 [PATCH 00/20] KVM: arm64: Generalise RESx handling Marc Zyngier
                   ` (16 preceding siblings ...)
  2026-01-26 12:16 ` [PATCH 17/20] KVM: arm64: Remove all traces of FEAT_TME Marc Zyngier
@ 2026-01-26 12:16 ` Marc Zyngier
  2026-01-29 17:51   ` Fuad Tabba
  2026-01-26 12:16 ` [PATCH 19/20] KVM: arm64: Add sanitisation to SCTLR_EL2 Marc Zyngier
  2026-01-26 12:16 ` [PATCH 20/20] KVM: arm64: Add debugfs file dumping computed RESx values Marc Zyngier
  19 siblings, 1 reply; 53+ messages in thread
From: Marc Zyngier @ 2026-01-26 12:16 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Fuad Tabba, Will Deacon, Catalin Marinas

MIOCNCE had the potential to eat your data, and also was never
implemented by anyone. It's been retrospectively removed from
the architecture, and we're happy to follow that lead.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/config.c | 1 -
 arch/arm64/tools/sysreg | 3 +--
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index f892098b70c0b..eebafb90bcf62 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -944,7 +944,6 @@ static const struct reg_bits_to_feat_map hcr_feat_map[] = {
 		   HCR_EL2_FMO		|
 		   HCR_EL2_ID		|
 		   HCR_EL2_IMO		|
-		   HCR_EL2_MIOCNCE	|
 		   HCR_EL2_PTW		|
 		   HCR_EL2_SWIO		|
 		   HCR_EL2_TACR		|
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index 650d7d477087e..724e6ad966c20 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -3834,8 +3834,7 @@ Field	43	NV1
 Field	42	NV
 Field	41	API
 Field	40	APK
-Res0	39
-Field	38	MIOCNCE
+Res0	39:38
 Field	37	TEA
 Field	36	TERR
 Field	35	TLOR
-- 
2.47.3




* [PATCH 19/20] KVM: arm64: Add sanitisation to SCTLR_EL2
  2026-01-26 12:16 [PATCH 00/20] KVM: arm64: Generalise RESx handling Marc Zyngier
                   ` (17 preceding siblings ...)
  2026-01-26 12:16 ` [PATCH 18/20] KVM: arm64: Remove all traces of HCR_EL2.MIOCNCE Marc Zyngier
@ 2026-01-26 12:16 ` Marc Zyngier
  2026-01-29 18:11   ` Fuad Tabba
  2026-01-26 12:16 ` [PATCH 20/20] KVM: arm64: Add debugfs file dumping computed RESx values Marc Zyngier
  19 siblings, 1 reply; 53+ messages in thread
From: Marc Zyngier @ 2026-01-26 12:16 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Fuad Tabba, Will Deacon, Catalin Marinas

Sanitise SCTLR_EL2 the usual way. The most important aspect of
this is that we benefit from SCTLR_EL2.SPAN being RES1 when
HCR_EL2.E2H==0.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h |  2 +-
 arch/arm64/kvm/config.c           | 82 +++++++++++++++++++++++++++++++
 arch/arm64/kvm/nested.c           |  4 ++
 3 files changed, 87 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9dca94e4361f0..c82b071ade2a5 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -495,7 +495,6 @@ enum vcpu_sysreg {
 	DBGVCR32_EL2,	/* Debug Vector Catch Register */
 
 	/* EL2 registers */
-	SCTLR_EL2,	/* System Control Register (EL2) */
 	ACTLR_EL2,	/* Auxiliary Control Register (EL2) */
 	CPTR_EL2,	/* Architectural Feature Trap Register (EL2) */
 	HACR_EL2,	/* Hypervisor Auxiliary Control Register */
@@ -526,6 +525,7 @@ enum vcpu_sysreg {
 
 	/* Anything from this can be RES0/RES1 sanitised */
 	MARKER(__SANITISED_REG_START__),
+	SCTLR_EL2,	/* System Control Register (EL2) */
 	TCR2_EL2,	/* Extended Translation Control Register (EL2) */
 	SCTLR2_EL2,	/* System Control Register 2 (EL2) */
 	MDCR_EL2,	/* Monitor Debug Configuration Register (EL2) */
diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index eebafb90bcf62..562513a4683e2 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -1130,6 +1130,84 @@ static const struct reg_bits_to_feat_map sctlr_el1_feat_map[] = {
 static const DECLARE_FEAT_MAP(sctlr_el1_desc, SCTLR_EL1,
 			      sctlr_el1_feat_map, FEAT_AA64EL1);
 
+static const struct reg_bits_to_feat_map sctlr_el2_feat_map[] = {
+	NEEDS_FEAT_FLAG(SCTLR_EL2_CP15BEN,
+			RES0_WHEN_E2H1 | RES1_WHEN_E2H0 | REQUIRES_E2H1,
+			FEAT_AA32EL0),
+	NEEDS_FEAT_FLAG(SCTLR_EL2_ITD	|
+			SCTLR_EL2_SED,
+			RES1_WHEN_E2H1 | RES0_WHEN_E2H0 | REQUIRES_E2H1,
+			FEAT_AA32EL0),
+	NEEDS_FEAT_FLAG(SCTLR_EL2_BT0, REQUIRES_E2H1, FEAT_BTI),
+	NEEDS_FEAT(SCTLR_EL2_BT, FEAT_BTI),
+	NEEDS_FEAT_FLAG(SCTLR_EL2_CMOW, REQUIRES_E2H1, FEAT_CMOW),
+	NEEDS_FEAT_FLAG(SCTLR_EL2_TSCXT,
+			RES0_WHEN_E2H0 | RES1_WHEN_E2H1 | REQUIRES_E2H1,
+			feat_csv2_2_csv2_1p2),
+	NEEDS_FEAT_FLAG(SCTLR_EL2_EIS	|
+			SCTLR_EL2_EOS,
+			AS_RES1, FEAT_ExS),
+	NEEDS_FEAT(SCTLR_EL2_EnFPM, FEAT_FPMR),
+	NEEDS_FEAT(SCTLR_EL2_IESB, FEAT_IESB),
+	NEEDS_FEAT_FLAG(SCTLR_EL2_EnALS, REQUIRES_E2H1, FEAT_LS64),
+	NEEDS_FEAT_FLAG(SCTLR_EL2_EnAS0, REQUIRES_E2H1, FEAT_LS64_ACCDATA),
+	NEEDS_FEAT_FLAG(SCTLR_EL2_EnASR, REQUIRES_E2H1, FEAT_LS64_V),
+	NEEDS_FEAT(SCTLR_EL2_nAA, FEAT_LSE2),
+	NEEDS_FEAT_FLAG(SCTLR_EL2_LSMAOE	|
+			SCTLR_EL2_nTLSMD,
+			AS_RES1 | REQUIRES_E2H1, FEAT_LSMAOC),
+	NEEDS_FEAT(SCTLR_EL2_EE, FEAT_MixedEnd),
+	NEEDS_FEAT_FLAG(SCTLR_EL2_E0E, REQUIRES_E2H1, feat_mixedendel0),
+	NEEDS_FEAT_FLAG(SCTLR_EL2_MSCEn, REQUIRES_E2H1, FEAT_MOPS),
+	NEEDS_FEAT_FLAG(SCTLR_EL2_ATA0	|
+			SCTLR_EL2_TCF0,
+			REQUIRES_E2H1, FEAT_MTE2),
+	NEEDS_FEAT(SCTLR_EL2_ATA	|
+		   SCTLR_EL2_TCF,
+		   FEAT_MTE2),
+	NEEDS_FEAT(SCTLR_EL2_ITFSB, feat_mte_async),
+	NEEDS_FEAT_FLAG(SCTLR_EL2_TCSO0, REQUIRES_E2H1, FEAT_MTE_STORE_ONLY),
+	NEEDS_FEAT(SCTLR_EL2_TCSO,
+		   FEAT_MTE_STORE_ONLY),
+	NEEDS_FEAT(SCTLR_EL2_NMI	|
+		   SCTLR_EL2_SPINTMASK,
+		   FEAT_NMI),
+	NEEDS_FEAT_FLAG(SCTLR_EL2_SPAN,	AS_RES1 | REQUIRES_E2H1, FEAT_PAN),
+	NEEDS_FEAT_FLAG(SCTLR_EL2_EPAN, REQUIRES_E2H1, FEAT_PAN3),
+	NEEDS_FEAT(SCTLR_EL2_EnDA	|
+		   SCTLR_EL2_EnDB	|
+		   SCTLR_EL2_EnIA	|
+		   SCTLR_EL2_EnIB,
+		   feat_pauth),
+	NEEDS_FEAT_FLAG(SCTLR_EL2_EnTP2, REQUIRES_E2H1, FEAT_SME),
+	NEEDS_FEAT(SCTLR_EL2_EnRCTX, FEAT_SPECRES),
+	NEEDS_FEAT(SCTLR_EL2_DSSBS, FEAT_SSBS),
+	NEEDS_FEAT_FLAG(SCTLR_EL2_TIDCP, REQUIRES_E2H1, FEAT_TIDCP1),
+	NEEDS_FEAT_FLAG(SCTLR_EL2_TWEDEL	|
+			SCTLR_EL2_TWEDEn,
+			REQUIRES_E2H1, FEAT_TWED),
+	NEEDS_FEAT_FLAG(SCTLR_EL2_nTWE	|
+			SCTLR_EL2_nTWI,
+			AS_RES1 | REQUIRES_E2H1, FEAT_AA64EL2),
+	NEEDS_FEAT_FLAG(SCTLR_EL2_UCI	|
+			SCTLR_EL2_UCT	|
+			SCTLR_EL2_DZE	|
+			SCTLR_EL2_SA0,
+			REQUIRES_E2H1, FEAT_AA64EL2),
+	NEEDS_FEAT(SCTLR_EL2_WXN	|
+		   SCTLR_EL2_I		|
+		   SCTLR_EL2_SA		|
+		   SCTLR_EL2_C		|
+		   SCTLR_EL2_A		|
+		   SCTLR_EL2_M,
+		   FEAT_AA64EL2),
+	FORCE_RES0(SCTLR_EL2_RES0),
+	FORCE_RES1(SCTLR_EL2_RES1),
+};
+
+static const DECLARE_FEAT_MAP(sctlr_el2_desc, SCTLR_EL2,
+			      sctlr_el2_feat_map, FEAT_AA64EL2);
+
 static const struct reg_bits_to_feat_map mdcr_el2_feat_map[] = {
 	NEEDS_FEAT(MDCR_EL2_EBWE, FEAT_Debugv8p9),
 	NEEDS_FEAT(MDCR_EL2_TDOSA, FEAT_DoubleLock),
@@ -1249,6 +1327,7 @@ void __init check_feature_map(void)
 	check_reg_desc(&sctlr2_desc);
 	check_reg_desc(&tcr2_el2_desc);
 	check_reg_desc(&sctlr_el1_desc);
+	check_reg_desc(&sctlr_el2_desc);
 	check_reg_desc(&mdcr_el2_desc);
 	check_reg_desc(&vtcr_el2_desc);
 }
@@ -1454,6 +1533,9 @@ struct resx get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg)
 	case SCTLR_EL1:
 		resx = compute_reg_resx_bits(kvm, &sctlr_el1_desc, 0, 0);
 		break;
+	case SCTLR_EL2:
+		resx = compute_reg_resx_bits(kvm, &sctlr_el2_desc, 0, 0);
+		break;
 	case MDCR_EL2:
 		resx = compute_reg_resx_bits(kvm, &mdcr_el2_desc, 0, 0);
 		break;
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 96e899dbd9192..ed710228484f3 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -1766,6 +1766,10 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
 	resx = get_reg_fixed_bits(kvm, SCTLR_EL1);
 	set_sysreg_masks(kvm, SCTLR_EL1, resx);
 
+	/* SCTLR_EL2 */
+	resx = get_reg_fixed_bits(kvm, SCTLR_EL2);
+	set_sysreg_masks(kvm, SCTLR_EL2, resx);
+
 	/* SCTLR2_ELx */
 	resx = get_reg_fixed_bits(kvm, SCTLR2_EL1);
 	set_sysreg_masks(kvm, SCTLR2_EL1, resx);
-- 
2.47.3




* [PATCH 20/20] KVM: arm64: Add debugfs file dumping computed RESx values
  2026-01-26 12:16 [PATCH 00/20] KVM: arm64: Generalise RESx handling Marc Zyngier
                   ` (18 preceding siblings ...)
  2026-01-26 12:16 ` [PATCH 19/20] KVM: arm64: Add sanitisation to SCTLR_EL2 Marc Zyngier
@ 2026-01-26 12:16 ` Marc Zyngier
  2026-02-02  8:59   ` Fuad Tabba
  19 siblings, 1 reply; 53+ messages in thread
From: Marc Zyngier @ 2026-01-26 12:16 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Fuad Tabba, Will Deacon, Catalin Marinas

Computing RESx values is hard. Verifying that they are correct is
harder. Add a debugfs file called "resx" that will dump all the RESx
values for a given VM.

I found it useful, maybe you will too.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h |  1 +
 arch/arm64/kvm/sys_regs.c         | 98 +++++++++++++++++++++++++++++++
 2 files changed, 99 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index c82b071ade2a5..54072f6ec9d4b 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -375,6 +375,7 @@ struct kvm_arch {
 
 	/* Iterator for idreg debugfs */
 	u8	idreg_debugfs_iter;
+	u16	sr_resx_iter;
 
 	/* Hypercall features firmware registers' descriptor */
 	struct kvm_smccc_features smccc_feat;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 88a57ca36d96c..f3f92b489b588 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -5090,12 +5090,110 @@ static const struct seq_operations idregs_debug_sops = {
 
 DEFINE_SEQ_ATTRIBUTE(idregs_debug);
 
+static const struct sys_reg_desc *sr_resx_find(struct kvm *kvm, u16 pos)
+{
+	unsigned long i, sr_idx = 0;
+
+	for (i = 0; i < ARRAY_SIZE(sys_reg_descs); i++) {
+		const struct sys_reg_desc *r = &sys_reg_descs[i];
+
+		if (r->reg < __SANITISED_REG_START__)
+			continue;
+
+		if (sr_idx == pos)
+			return r;
+
+		sr_idx++;
+	}
+
+	return NULL;
+}
+
+static void *sr_resx_start(struct seq_file *s, loff_t *pos)
+{
+	struct kvm *kvm = s->private;
+	u16 *iter;
+
+	guard(mutex)(&kvm->arch.config_lock);
+
+	if (!kvm->arch.sysreg_masks)
+		return NULL;
+
+	iter = &kvm->arch.sr_resx_iter;
+	if (*iter != (u16)~0)
+		return ERR_PTR(-EBUSY);
+
+	*iter = *pos;
+	if (!sr_resx_find(kvm, *iter))
+		iter = NULL;
+
+	return iter;
+}
+
+static void *sr_resx_next(struct seq_file *s, void *v, loff_t *pos)
+{
+	struct kvm *kvm = s->private;
+
+	(*pos)++;
+
+	if (sr_resx_find(kvm, kvm->arch.sr_resx_iter + 1)) {
+		kvm->arch.sr_resx_iter++;
+
+		return &kvm->arch.sr_resx_iter;
+	}
+
+	return NULL;
+}
+
+static void sr_resx_stop(struct seq_file *s, void *v)
+{
+	struct kvm *kvm = s->private;
+
+	if (IS_ERR(v))
+		return;
+
+	guard(mutex)(&kvm->arch.config_lock);
+
+	kvm->arch.sr_resx_iter = ~0;
+}
+
+static int sr_resx_show(struct seq_file *s, void *v)
+{
+	const struct sys_reg_desc *desc;
+	struct kvm *kvm = s->private;
+	struct resx resx;
+
+	desc = sr_resx_find(kvm, kvm->arch.sr_resx_iter);
+
+	if (!desc->name)
+		return 0;
+
+	resx = kvm_get_sysreg_resx(kvm, desc->reg);
+
+	seq_printf(s, "%20s:\tRES0:%016llx\tRES1:%016llx\n",
+		   desc->name, resx.res0, resx.res1);
+
+	return 0;
+}
+
+static const struct seq_operations sr_resx_sops = {
+	.start	= sr_resx_start,
+	.next	= sr_resx_next,
+	.stop	= sr_resx_stop,
+	.show	= sr_resx_show,
+};
+
+DEFINE_SEQ_ATTRIBUTE(sr_resx);
+
 void kvm_sys_regs_create_debugfs(struct kvm *kvm)
 {
 	kvm->arch.idreg_debugfs_iter = ~0;
+	kvm->arch.sr_resx_iter = ~0;
 
 	debugfs_create_file("idregs", 0444, kvm->debugfs_dentry, kvm,
 			    &idregs_debug_fops);
+	debugfs_create_file("resx", 0444, kvm->debugfs_dentry, kvm,
+			    &sr_resx_fops);
 }
 
 static void reset_vm_ftr_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *reg)
-- 
2.47.3




* Re: [PATCH 01/20] arm64: Convert SCTLR_EL2 to sysreg infrastructure
  2026-01-26 12:16 ` [PATCH 01/20] arm64: Convert SCTLR_EL2 to sysreg infrastructure Marc Zyngier
@ 2026-01-26 17:53   ` Fuad Tabba
  0 siblings, 0 replies; 53+ messages in thread
From: Fuad Tabba @ 2026-01-26 17:53 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
>
> Convert SCTLR_EL2 to the sysreg infrastructure, as per the 2025-12_rel
> revision of the Registers.json file.
>
> Note that we slightly deviate from the above, as we stick to the ARM
> ARM M.a definition of SCTLR_EL2[9], which is RES0, in order to avoid
> dragging the POE2 definitions...
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>

Other than the deliberate deviation for bit 9, it matches the spec.

Of course, this changes the semantics of SCTLR_EL2_RES1, since now
it's 0. But I see you handle the consequences of this change later on.

Reviewed-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad



> ---
>  arch/arm64/include/asm/sysreg.h       |  7 ---
>  arch/arm64/tools/sysreg               | 69 +++++++++++++++++++++++++++
>  tools/arch/arm64/include/asm/sysreg.h |  6 ---
>  3 files changed, 69 insertions(+), 13 deletions(-)
>
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index 939f9c5bbae67..30f0409b1c802 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -504,7 +504,6 @@
>  #define SYS_VPIDR_EL2                  sys_reg(3, 4, 0, 0, 0)
>  #define SYS_VMPIDR_EL2                 sys_reg(3, 4, 0, 0, 5)
>
> -#define SYS_SCTLR_EL2                  sys_reg(3, 4, 1, 0, 0)
>  #define SYS_ACTLR_EL2                  sys_reg(3, 4, 1, 0, 1)
>  #define SYS_SCTLR2_EL2                 sys_reg(3, 4, 1, 0, 3)
>  #define SYS_HCR_EL2                    sys_reg(3, 4, 1, 1, 0)
> @@ -837,12 +836,6 @@
>  #define SCTLR_ELx_A     (BIT(1))
>  #define SCTLR_ELx_M     (BIT(0))
>
> -/* SCTLR_EL2 specific flags. */
> -#define SCTLR_EL2_RES1 ((BIT(4))  | (BIT(5))  | (BIT(11)) | (BIT(16)) | \
> -                        (BIT(18)) | (BIT(22)) | (BIT(23)) | (BIT(28)) | \
> -                        (BIT(29)))
> -
> -#define SCTLR_EL2_BT   (BIT(36))
>  #ifdef CONFIG_CPU_BIG_ENDIAN
>  #define ENDIAN_SET_EL2         SCTLR_ELx_EE
>  #else
> diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
> index a0f6249bd4f98..969a75615d612 100644
> --- a/arch/arm64/tools/sysreg
> +++ b/arch/arm64/tools/sysreg
> @@ -3749,6 +3749,75 @@ UnsignedEnum     2:0     F8S1
>  EndEnum
>  EndSysreg
>
> +Sysreg SCTLR_EL2       3       4       1       0       0
> +Field  63      TIDCP
> +Field  62      SPINTMASK
> +Field  61      NMI
> +Field  60      EnTP2
> +Field  59      TCSO
> +Field  58      TCSO0
> +Field  57      EPAN
> +Field  56      EnALS
> +Field  55      EnAS0
> +Field  54      EnASR
> +Res0   53:50
> +Field  49:46   TWEDEL
> +Field  45      TWEDEn
> +Field  44      DSSBS
> +Field  43      ATA
> +Field  42      ATA0
> +Enum   41:40   TCF
> +       0b00    NONE
> +       0b01    SYNC
> +       0b10    ASYNC
> +       0b11    ASYMM
> +EndEnum
> +Enum   39:38   TCF0
> +       0b00    NONE
> +       0b01    SYNC
> +       0b10    ASYNC
> +       0b11    ASYMM
> +EndEnum
> +Field  37      ITFSB
> +Field  36      BT
> +Field  35      BT0
> +Field  34      EnFPM
> +Field  33      MSCEn
> +Field  32      CMOW
> +Field  31      EnIA
> +Field  30      EnIB
> +Field  29      LSMAOE
> +Field  28      nTLSMD
> +Field  27      EnDA
> +Field  26      UCI
> +Field  25      EE
> +Field  24      E0E
> +Field  23      SPAN
> +Field  22      EIS
> +Field  21      IESB
> +Field  20      TSCXT
> +Field  19      WXN
> +Field  18      nTWE
> +Res0   17
> +Field  16      nTWI
> +Field  15      UCT
> +Field  14      DZE
> +Field  13      EnDB
> +Field  12      I
> +Field  11      EOS
> +Field  10      EnRCTX
> +Res0   9
> +Field  8       SED
> +Field  7       ITD
> +Field  6       nAA
> +Field  5       CP15BEN
> +Field  4       SA0
> +Field  3       SA
> +Field  2       C
> +Field  1       A
> +Field  0       M
> +EndSysreg
> +
>  Sysreg HCR_EL2         3       4       1       1       0
>  Field  63:60   TWEDEL
>  Field  59      TWEDEn
> diff --git a/tools/arch/arm64/include/asm/sysreg.h b/tools/arch/arm64/include/asm/sysreg.h
> index 178b7322bf049..f75efe98e9df3 100644
> --- a/tools/arch/arm64/include/asm/sysreg.h
> +++ b/tools/arch/arm64/include/asm/sysreg.h
> @@ -847,12 +847,6 @@
>  #define SCTLR_ELx_A     (BIT(1))
>  #define SCTLR_ELx_M     (BIT(0))
>
> -/* SCTLR_EL2 specific flags. */
> -#define SCTLR_EL2_RES1 ((BIT(4))  | (BIT(5))  | (BIT(11)) | (BIT(16)) | \
> -                        (BIT(18)) | (BIT(22)) | (BIT(23)) | (BIT(28)) | \
> -                        (BIT(29)))
> -
> -#define SCTLR_EL2_BT   (BIT(36))
>  #ifdef CONFIG_CPU_BIG_ENDIAN
>  #define ENDIAN_SET_EL2         SCTLR_ELx_EE
>  #else
> --
> 2.47.3
>



* Re: [PATCH 02/20] KVM: arm64: Remove duplicate configuration for SCTLR_EL1.{EE,E0E}
  2026-01-26 12:16 ` [PATCH 02/20] KVM: arm64: Remove duplicate configuration for SCTLR_EL1.{EE,E0E} Marc Zyngier
@ 2026-01-26 18:04   ` Fuad Tabba
  0 siblings, 0 replies; 53+ messages in thread
From: Fuad Tabba @ 2026-01-26 18:04 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
>
> We already have specific constraints for SCTLR_EL1.{EE,E0E}, and
> making them depend on FEAT_AA64EL1 is just buggy.

Looking at the spec, I see that they depend on FEAT_MixedEnd and
FEAT_MixedEndEL0, not on FEAT_AA64EL1. They are already in the right
place in config.c.

Reviewed-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad






>
> Fixes: 6bd4a274b026e ("KVM: arm64: Convert SCTLR_EL1 to config-driven sanitisation")
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/config.c | 2 --
>  1 file changed, 2 deletions(-)
>
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index 9c04f895d3769..0bcdb39885734 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -1140,8 +1140,6 @@ static const struct reg_bits_to_feat_map sctlr_el1_feat_map[] = {
>                    SCTLR_EL1_TWEDEn,
>                    FEAT_TWED),
>         NEEDS_FEAT(SCTLR_EL1_UCI        |
> -                  SCTLR_EL1_EE         |
> -                  SCTLR_EL1_E0E        |
>                    SCTLR_EL1_WXN        |
>                    SCTLR_EL1_nTWE       |
>                    SCTLR_EL1_nTWI       |
> --
> 2.47.3
>



* Re: [PATCH 03/20] KVM: arm64: Introduce standalone FGU computing primitive
  2026-01-26 12:16 ` [PATCH 03/20] KVM: arm64: Introduce standalone FGU computing primitive Marc Zyngier
@ 2026-01-26 18:35   ` Fuad Tabba
  0 siblings, 0 replies; 53+ messages in thread
From: Fuad Tabba @ 2026-01-26 18:35 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
>
> Computing the FGU bits is made oddly complicated, as we use the RES0
> helper instead of using a specific abstraction.
>
> Introduce such an abstraction, which is going to make things significantly
> simpler in the future.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>

The old way mixed "bits that don't exist" with "bits we need to
trap". With the new helper the distinction is clear.
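For illustration, here is a toy model of that distinction (hypothetical
types and names, much simplified from the kernel's feature maps):

```c
#include <assert.h>
#include <stdint.h>

/* Toy model only -- not the kernel's types. Two feature-gated bits
 * in a register whose architecturally defined bits are REG_MASK. */
#define BIT_A    (1u << 0)	/* gated by feature A */
#define BIT_B    (1u << 1)	/* gated by feature B, marked NEVER_FGU */
#define REG_MASK (BIT_A | BIT_B)

struct map { uint32_t bits; int feat_present; int never_fgu; };

/* RES0 for the register itself: bits of absent features, plus
 * everything outside the register's defined mask. */
static uint32_t reg_res0(const struct map *m, int n)
{
	uint32_t res0 = ~REG_MASK;

	for (int i = 0; i < n; i++)
		if (!m[i].feat_present)
			res0 |= m[i].bits;
	return res0;
}

/* FGU: bits of absent features that should be trapped; register-level
 * RES0 and NEVER_FGU entries are deliberately ignored. */
static uint32_t fgu_bits(const struct map *m, int n)
{
	uint32_t fgu = 0;

	for (int i = 0; i < n; i++)
		if (!m[i].feat_present && !m[i].never_fgu)
			fgu |= m[i].bits;
	return fgu;
}
```

With both features absent, reg_res0() reserves both bits (plus the
undefined ones), while fgu_bits() only picks up BIT_A.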

Reviewed-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad

> ---
>  arch/arm64/kvm/config.c | 57 ++++++++++++++++++-----------------------
>  1 file changed, 25 insertions(+), 32 deletions(-)
>
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index 0bcdb39885734..2122599f7cbbd 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -1335,26 +1335,30 @@ static u64 compute_res0_bits(struct kvm *kvm,
>  static u64 compute_reg_res0_bits(struct kvm *kvm,
>                                  const struct reg_feat_map_desc *r,
>                                  unsigned long require, unsigned long exclude)
> -
>  {
>         u64 res0;
>
>         res0 = compute_res0_bits(kvm, r->bit_feat_map, r->bit_feat_map_sz,
>                                  require, exclude);
>
> -       /*
> -        * If computing FGUs, don't take RES0 or register existence
> -        * into account -- we're not computing bits for the register
> -        * itself.
> -        */
> -       if (!(exclude & NEVER_FGU)) {
> -               res0 |= compute_res0_bits(kvm, &r->feat_map, 1, require, exclude);
> -               res0 |= ~reg_feat_map_bits(&r->feat_map);
> -       }
> +       res0 |= compute_res0_bits(kvm, &r->feat_map, 1, require, exclude);
> +       res0 |= ~reg_feat_map_bits(&r->feat_map);
>
>         return res0;
>  }
>
> +static u64 compute_fgu_bits(struct kvm *kvm, const struct reg_feat_map_desc *r)
> +{
> +       /*
> +        * If computing FGUs, we collect the unsupported feature bits as
> +        * RES0 bits, but don't take the actual RES0 bits or register
> +        * existence into account -- we're not computing bits for the
> +        * register itself.
> +        */
> +       return compute_res0_bits(kvm, r->bit_feat_map, r->bit_feat_map_sz,
> +                                0, NEVER_FGU);
> +}
> +
>  static u64 compute_reg_fixed_bits(struct kvm *kvm,
>                                   const struct reg_feat_map_desc *r,
>                                   u64 *fixed_bits, unsigned long require,
> @@ -1370,40 +1374,29 @@ void compute_fgu(struct kvm *kvm, enum fgt_group_id fgt)
>
>         switch (fgt) {
>         case HFGRTR_GROUP:
> -               val |= compute_reg_res0_bits(kvm, &hfgrtr_desc,
> -                                            0, NEVER_FGU);
> -               val |= compute_reg_res0_bits(kvm, &hfgwtr_desc,
> -                                            0, NEVER_FGU);
> +               val |= compute_fgu_bits(kvm, &hfgrtr_desc);
> +               val |= compute_fgu_bits(kvm, &hfgwtr_desc);
>                 break;
>         case HFGITR_GROUP:
> -               val |= compute_reg_res0_bits(kvm, &hfgitr_desc,
> -                                            0, NEVER_FGU);
> +               val |= compute_fgu_bits(kvm, &hfgitr_desc);
>                 break;
>         case HDFGRTR_GROUP:
> -               val |= compute_reg_res0_bits(kvm, &hdfgrtr_desc,
> -                                            0, NEVER_FGU);
> -               val |= compute_reg_res0_bits(kvm, &hdfgwtr_desc,
> -                                            0, NEVER_FGU);
> +               val |= compute_fgu_bits(kvm, &hdfgrtr_desc);
> +               val |= compute_fgu_bits(kvm, &hdfgwtr_desc);
>                 break;
>         case HAFGRTR_GROUP:
> -               val |= compute_reg_res0_bits(kvm, &hafgrtr_desc,
> -                                            0, NEVER_FGU);
> +               val |= compute_fgu_bits(kvm, &hafgrtr_desc);
>                 break;
>         case HFGRTR2_GROUP:
> -               val |= compute_reg_res0_bits(kvm, &hfgrtr2_desc,
> -                                            0, NEVER_FGU);
> -               val |= compute_reg_res0_bits(kvm, &hfgwtr2_desc,
> -                                            0, NEVER_FGU);
> +               val |= compute_fgu_bits(kvm, &hfgrtr2_desc);
> +               val |= compute_fgu_bits(kvm, &hfgwtr2_desc);
>                 break;
>         case HFGITR2_GROUP:
> -               val |= compute_reg_res0_bits(kvm, &hfgitr2_desc,
> -                                            0, NEVER_FGU);
> +               val |= compute_fgu_bits(kvm, &hfgitr2_desc);
>                 break;
>         case HDFGRTR2_GROUP:
> -               val |= compute_reg_res0_bits(kvm, &hdfgrtr2_desc,
> -                                            0, NEVER_FGU);
> -               val |= compute_reg_res0_bits(kvm, &hdfgwtr2_desc,
> -                                            0, NEVER_FGU);
> +               val |= compute_fgu_bits(kvm, &hdfgrtr2_desc);
> +               val |= compute_fgu_bits(kvm, &hdfgwtr2_desc);
>                 break;
>         default:
>                 BUG();
> --
> 2.47.3
>


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH 04/20] KVM: arm64: Introduce data structure tracking both RES0 and RES1 bits
  2026-01-26 12:16 ` [PATCH 04/20] KVM: arm64: Introduce data structure tracking both RES0 and RES1 bits Marc Zyngier
@ 2026-01-26 18:54   ` Fuad Tabba
  0 siblings, 0 replies; 53+ messages in thread
From: Fuad Tabba @ 2026-01-26 18:54 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

Hi Marc,

On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
>
> We have so far mostly tracked RES0 bits, but only made a few attempts
> at being just as strict for RES1 bits (probably because they are both
> rarer and harder to handle).
>
> Start scratching the surface by introducing a data structure tracking
> RES0 and RES1 bits at the same time.
>
> Note that contrary to the usual idiom, this structure is mostly passed
> around by value -- the ABI handles it nicely, and the resulting code is
> much nicer.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_host.h |  17 ++--
>  arch/arm64/kvm/config.c           | 122 +++++++++++++++-------------
>  arch/arm64/kvm/nested.c           | 129 +++++++++++++++---------------
>  3 files changed, 144 insertions(+), 124 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index b552a1e03848c..a7e4cd8ebf56f 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -626,13 +626,20 @@ enum vcpu_sysreg {
>         NR_SYS_REGS     /* Nothing after this line! */
>  };
>
> +struct resx {
> +       u64     res0;
> +       u64     res1;
> +};
> +
>  struct kvm_sysreg_masks {
> -       struct {
> -               u64     res0;
> -               u64     res1;
> -       } mask[NR_SYS_REGS - __SANITISED_REG_START__];
> +       struct resx mask[NR_SYS_REGS - __SANITISED_REG_START__];
>  };
>
> +#define kvm_set_sysreg_resx(k, sr, resx)               \
> +       do {                                            \
> +               (k)->arch.sysreg_masks->mask[sr - __SANITISED_REG_START__] = resx; \

nit: shouldn't sr be parenthesised, i.e. (sr)? (checkpatch flags it,
though I think it happens to be valid as used here)
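For reference, a toy example (not from this series) of what the
parentheses guard against -- an argument containing a lower-precedence
operator gets re-associated on expansion:

```c
#include <assert.h>

#define START 4

/* 'sr' substituted verbatim vs. parenthesised: with a ternary
 * argument and cond == 1, IDX_BAD(cond ? 8 : 6) expands to
 * (cond ? 8 : 6 - 4), so the subtraction is swallowed by the
 * else branch and the result is 8 rather than the intended 4. */
#define IDX_BAD(sr)	(sr - START)
#define IDX_GOOD(sr)	((sr) - START)
```

Here the macro body is a simple subtraction, so the unparenthesised
form happens to survive most call sites -- but not all of them.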

> +       } while(0)

checkpatch nit: space after while

> +
>  struct fgt_masks {
>         const char      *str;
>         u64             mask;
> @@ -1607,7 +1614,7 @@ static inline bool kvm_arch_has_irq_bypass(void)
>  }
>
>  void compute_fgu(struct kvm *kvm, enum fgt_group_id fgt);
> -void get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg, u64 *res0, u64 *res1);
> +struct resx get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg);
>  void check_feature_map(void);
>  void kvm_vcpu_load_fgt(struct kvm_vcpu *vcpu);
>
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index 2122599f7cbbd..a907195bd44b6 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -1290,14 +1290,15 @@ static bool idreg_feat_match(struct kvm *kvm, const struct reg_bits_to_feat_map
>         }
>  }
>

nit: all the functions below that have multiline parameter lists are
misaligned with respect to the opening parenthesis (checkpatch, but
also visible in an editor).

> -static u64 __compute_fixed_bits(struct kvm *kvm,
> +static

nit: why the newline? (and same for the remaining ones below)

Nits aside, this preserves the logic and the resulting code is already
easier to read and understand.
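On the by-value point, a minimal sketch (hypothetical names, nothing to
do with the kernel's actual helpers) of why returning the pair beats the
old two-out-pointer idiom -- a 16-byte struct comes back in registers on
AArch64, and the call sites compose naturally:

```c
#include <assert.h>
#include <stdint.h>

struct resx { uint64_t res0, res1; };

/* Hypothetical: compute the RESx pair for one register. */
static struct resx get_fixed_bits(int has_feat)
{
	struct resx r = {};

	if (!has_feat)
		r.res0 |= UINT64_C(1) << 3;	/* RES0 when feature absent */
	r.res1 |= UINT64_C(1) << 0;		/* architecturally RES1 */
	return r;
}

/* Sanitise a value against the pair: clear RES0, set RES1. */
static uint64_t apply_masks(uint64_t v, struct resx r)
{
	v &= ~r.res0;
	v |= r.res1;
	return v;
}
```

A caller can write apply_masks(v, get_fixed_bits(f)) in one expression,
where the pointer version needed two locals and an out-parameter call.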

Reviewed-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad

> +struct resx __compute_fixed_bits(struct kvm *kvm,
>                                 const struct reg_bits_to_feat_map *map,
>                                 int map_size,
>                                 u64 *fixed_bits,
>                                 unsigned long require,
>                                 unsigned long exclude)
>  {
> -       u64 val = 0;
> +       struct resx resx = {};
>
>         for (int i = 0; i < map_size; i++) {
>                 bool match;
> @@ -1316,13 +1317,14 @@ static u64 __compute_fixed_bits(struct kvm *kvm,
>                         match = idreg_feat_match(kvm, &map[i]);
>
>                 if (!match || (map[i].flags & FIXED_VALUE))
> -                       val |= reg_feat_map_bits(&map[i]);
> +                       resx.res0 |= reg_feat_map_bits(&map[i]);
>         }
>
> -       return val;
> +       return resx;
>  }
>
> -static u64 compute_res0_bits(struct kvm *kvm,
> +static
> +struct resx compute_resx_bits(struct kvm *kvm,
>                              const struct reg_bits_to_feat_map *map,
>                              int map_size,
>                              unsigned long require,
> @@ -1332,34 +1334,43 @@ static u64 compute_res0_bits(struct kvm *kvm,
>                                     require, exclude | FIXED_VALUE);
>  }
>
> -static u64 compute_reg_res0_bits(struct kvm *kvm,
> +static
> +struct resx compute_reg_resx_bits(struct kvm *kvm,
>                                  const struct reg_feat_map_desc *r,
>                                  unsigned long require, unsigned long exclude)
>  {
> -       u64 res0;
> +       struct resx resx, tmp;
>
> -       res0 = compute_res0_bits(kvm, r->bit_feat_map, r->bit_feat_map_sz,
> +       resx = compute_resx_bits(kvm, r->bit_feat_map, r->bit_feat_map_sz,
>                                  require, exclude);
>
> -       res0 |= compute_res0_bits(kvm, &r->feat_map, 1, require, exclude);
> -       res0 |= ~reg_feat_map_bits(&r->feat_map);
> +       tmp = compute_resx_bits(kvm, &r->feat_map, 1, require, exclude);
> +
> +       resx.res0 |= tmp.res0;
> +       resx.res0 |= ~reg_feat_map_bits(&r->feat_map);
> +       resx.res1 |= tmp.res1;
>
> -       return res0;
> +       return resx;
>  }
>
>  static u64 compute_fgu_bits(struct kvm *kvm, const struct reg_feat_map_desc *r)
>  {
> +       struct resx resx;
> +
>         /*
>          * If computing FGUs, we collect the unsupported feature bits as
> -        * RES0 bits, but don't take the actual RES0 bits or register
> +        * RESx bits, but don't take the actual RESx bits or register
>          * existence into account -- we're not computing bits for the
>          * register itself.
>          */
> -       return compute_res0_bits(kvm, r->bit_feat_map, r->bit_feat_map_sz,
> +       resx = compute_resx_bits(kvm, r->bit_feat_map, r->bit_feat_map_sz,
>                                  0, NEVER_FGU);
> +
> +       return resx.res0 | resx.res1;
>  }
>
> -static u64 compute_reg_fixed_bits(struct kvm *kvm,
> +static
> +struct resx compute_reg_fixed_bits(struct kvm *kvm,
>                                   const struct reg_feat_map_desc *r,
>                                   u64 *fixed_bits, unsigned long require,
>                                   unsigned long exclude)
> @@ -1405,91 +1416,94 @@ void compute_fgu(struct kvm *kvm, enum fgt_group_id fgt)
>         kvm->arch.fgu[fgt] = val;
>  }
>
> -void get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg, u64 *res0, u64 *res1)
> +struct resx get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg)
>  {
>         u64 fixed = 0, mask;
> +       struct resx resx;
>
>         switch (reg) {
>         case HFGRTR_EL2:
> -               *res0 = compute_reg_res0_bits(kvm, &hfgrtr_desc, 0, 0);
> -               *res1 = HFGRTR_EL2_RES1;
> +               resx = compute_reg_resx_bits(kvm, &hfgrtr_desc, 0, 0);
> +               resx.res1 |= HFGRTR_EL2_RES1;
>                 break;
>         case HFGWTR_EL2:
> -               *res0 = compute_reg_res0_bits(kvm, &hfgwtr_desc, 0, 0);
> -               *res1 = HFGWTR_EL2_RES1;
> +               resx = compute_reg_resx_bits(kvm, &hfgwtr_desc, 0, 0);
> +               resx.res1 |= HFGWTR_EL2_RES1;
>                 break;
>         case HFGITR_EL2:
> -               *res0 = compute_reg_res0_bits(kvm, &hfgitr_desc, 0, 0);
> -               *res1 = HFGITR_EL2_RES1;
> +               resx = compute_reg_resx_bits(kvm, &hfgitr_desc, 0, 0);
> +               resx.res1 |= HFGITR_EL2_RES1;
>                 break;
>         case HDFGRTR_EL2:
> -               *res0 = compute_reg_res0_bits(kvm, &hdfgrtr_desc, 0, 0);
> -               *res1 = HDFGRTR_EL2_RES1;
> +               resx = compute_reg_resx_bits(kvm, &hdfgrtr_desc, 0, 0);
> +               resx.res1 |= HDFGRTR_EL2_RES1;
>                 break;
>         case HDFGWTR_EL2:
> -               *res0 = compute_reg_res0_bits(kvm, &hdfgwtr_desc, 0, 0);
> -               *res1 = HDFGWTR_EL2_RES1;
> +               resx = compute_reg_resx_bits(kvm, &hdfgwtr_desc, 0, 0);
> +               resx.res1 |= HDFGWTR_EL2_RES1;
>                 break;
>         case HAFGRTR_EL2:
> -               *res0 = compute_reg_res0_bits(kvm, &hafgrtr_desc, 0, 0);
> -               *res1 = HAFGRTR_EL2_RES1;
> +               resx = compute_reg_resx_bits(kvm, &hafgrtr_desc, 0, 0);
> +               resx.res1 |= HAFGRTR_EL2_RES1;
>                 break;
>         case HFGRTR2_EL2:
> -               *res0 = compute_reg_res0_bits(kvm, &hfgrtr2_desc, 0, 0);
> -               *res1 = HFGRTR2_EL2_RES1;
> +               resx = compute_reg_resx_bits(kvm, &hfgrtr2_desc, 0, 0);
> +               resx.res1 |= HFGRTR2_EL2_RES1;
>                 break;
>         case HFGWTR2_EL2:
> -               *res0 = compute_reg_res0_bits(kvm, &hfgwtr2_desc, 0, 0);
> -               *res1 = HFGWTR2_EL2_RES1;
> +               resx = compute_reg_resx_bits(kvm, &hfgwtr2_desc, 0, 0);
> +               resx.res1 |= HFGWTR2_EL2_RES1;
>                 break;
>         case HFGITR2_EL2:
> -               *res0 = compute_reg_res0_bits(kvm, &hfgitr2_desc, 0, 0);
> -               *res1 = HFGITR2_EL2_RES1;
> +               resx = compute_reg_resx_bits(kvm, &hfgitr2_desc, 0, 0);
> +               resx.res1 |= HFGITR2_EL2_RES1;
>                 break;
>         case HDFGRTR2_EL2:
> -               *res0 = compute_reg_res0_bits(kvm, &hdfgrtr2_desc, 0, 0);
> -               *res1 = HDFGRTR2_EL2_RES1;
> +               resx = compute_reg_resx_bits(kvm, &hdfgrtr2_desc, 0, 0);
> +               resx.res1 |= HDFGRTR2_EL2_RES1;
>                 break;
>         case HDFGWTR2_EL2:
> -               *res0 = compute_reg_res0_bits(kvm, &hdfgwtr2_desc, 0, 0);
> -               *res1 = HDFGWTR2_EL2_RES1;
> +               resx = compute_reg_resx_bits(kvm, &hdfgwtr2_desc, 0, 0);
> +               resx.res1 |= HDFGWTR2_EL2_RES1;
>                 break;
>         case HCRX_EL2:
> -               *res0 = compute_reg_res0_bits(kvm, &hcrx_desc, 0, 0);
> -               *res1 = __HCRX_EL2_RES1;
> +               resx = compute_reg_resx_bits(kvm, &hcrx_desc, 0, 0);
> +               resx.res1 |= __HCRX_EL2_RES1;
>                 break;
>         case HCR_EL2:
> -               mask = compute_reg_fixed_bits(kvm, &hcr_desc, &fixed, 0, 0);
> -               *res0 = compute_reg_res0_bits(kvm, &hcr_desc, 0, 0);
> -               *res0 |= (mask & ~fixed);
> -               *res1 = HCR_EL2_RES1 | (mask & fixed);
> +               mask = compute_reg_fixed_bits(kvm, &hcr_desc, &fixed, 0, 0).res0;
> +               resx = compute_reg_resx_bits(kvm, &hcr_desc, 0, 0);
> +               resx.res0 |= (mask & ~fixed);
> +               resx.res1 |= HCR_EL2_RES1 | (mask & fixed);
>                 break;
>         case SCTLR2_EL1:
>         case SCTLR2_EL2:
> -               *res0 = compute_reg_res0_bits(kvm, &sctlr2_desc, 0, 0);
> -               *res1 = SCTLR2_EL1_RES1;
> +               resx = compute_reg_resx_bits(kvm, &sctlr2_desc, 0, 0);
> +               resx.res1 |= SCTLR2_EL1_RES1;
>                 break;
>         case TCR2_EL2:
> -               *res0 = compute_reg_res0_bits(kvm, &tcr2_el2_desc, 0, 0);
> -               *res1 = TCR2_EL2_RES1;
> +               resx = compute_reg_resx_bits(kvm, &tcr2_el2_desc, 0, 0);
> +               resx.res1 |= TCR2_EL2_RES1;
>                 break;
>         case SCTLR_EL1:
> -               *res0 = compute_reg_res0_bits(kvm, &sctlr_el1_desc, 0, 0);
> -               *res1 = SCTLR_EL1_RES1;
> +               resx = compute_reg_resx_bits(kvm, &sctlr_el1_desc, 0, 0);
> +               resx.res1 |= SCTLR_EL1_RES1;
>                 break;
>         case MDCR_EL2:
> -               *res0 = compute_reg_res0_bits(kvm, &mdcr_el2_desc, 0, 0);
> -               *res1 = MDCR_EL2_RES1;
> +               resx = compute_reg_resx_bits(kvm, &mdcr_el2_desc, 0, 0);
> +               resx.res1 |= MDCR_EL2_RES1;
>                 break;
>         case VTCR_EL2:
> -               *res0 = compute_reg_res0_bits(kvm, &vtcr_el2_desc, 0, 0);
> -               *res1 = VTCR_EL2_RES1;
> +               resx = compute_reg_resx_bits(kvm, &vtcr_el2_desc, 0, 0);
> +               resx.res1 |= VTCR_EL2_RES1;
>                 break;
>         default:
>                 WARN_ON_ONCE(1);
> -               *res0 = *res1 = 0;
> +               resx = (typeof(resx)){};
>                 break;
>         }
> +
> +       return resx;
>  }
>
>  static __always_inline struct fgt_masks *__fgt_reg_to_masks(enum vcpu_sysreg reg)
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index 486eba72bb027..c5a45bc62153e 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -1683,22 +1683,19 @@ u64 kvm_vcpu_apply_reg_masks(const struct kvm_vcpu *vcpu,
>         return v;
>  }
>
> -static __always_inline void set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u64 res1)
> +static __always_inline void set_sysreg_masks(struct kvm *kvm, int sr, struct resx resx)
>  {
> -       int i = sr - __SANITISED_REG_START__;
> -
>         BUILD_BUG_ON(!__builtin_constant_p(sr));
>         BUILD_BUG_ON(sr < __SANITISED_REG_START__);
>         BUILD_BUG_ON(sr >= NR_SYS_REGS);
>
> -       kvm->arch.sysreg_masks->mask[i].res0 = res0;
> -       kvm->arch.sysreg_masks->mask[i].res1 = res1;
> +       kvm_set_sysreg_resx(kvm, sr, resx);
>  }
>
>  int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
>  {
>         struct kvm *kvm = vcpu->kvm;
> -       u64 res0, res1;
> +       struct resx resx;
>
>         lockdep_assert_held(&kvm->arch.config_lock);
>
> @@ -1711,110 +1708,112 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
>                 return -ENOMEM;
>
>         /* VTTBR_EL2 */
> -       res0 = res1 = 0;
> +       resx = (typeof(resx)){};
>         if (!kvm_has_feat_enum(kvm, ID_AA64MMFR1_EL1, VMIDBits, 16))
> -               res0 |= GENMASK(63, 56);
> +               resx.res0 |= GENMASK(63, 56);
>         if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, CnP, IMP))
> -               res0 |= VTTBR_CNP_BIT;
> -       set_sysreg_masks(kvm, VTTBR_EL2, res0, res1);
> +               resx.res0 |= VTTBR_CNP_BIT;
> +       set_sysreg_masks(kvm, VTTBR_EL2, resx);
>
>         /* VTCR_EL2 */
> -       get_reg_fixed_bits(kvm, VTCR_EL2, &res0, &res1);
> -       set_sysreg_masks(kvm, VTCR_EL2, res0, res1);
> +       resx = get_reg_fixed_bits(kvm, VTCR_EL2);
> +       set_sysreg_masks(kvm, VTCR_EL2, resx);
>
>         /* VMPIDR_EL2 */
> -       res0 = GENMASK(63, 40) | GENMASK(30, 24);
> -       res1 = BIT(31);
> -       set_sysreg_masks(kvm, VMPIDR_EL2, res0, res1);
> +       resx.res0 = GENMASK(63, 40) | GENMASK(30, 24);
> +       resx.res1 = BIT(31);
> +       set_sysreg_masks(kvm, VMPIDR_EL2, resx);
>
>         /* HCR_EL2 */
> -       get_reg_fixed_bits(kvm, HCR_EL2, &res0, &res1);
> -       set_sysreg_masks(kvm, HCR_EL2, res0, res1);
> +       resx = get_reg_fixed_bits(kvm, HCR_EL2);
> +       set_sysreg_masks(kvm, HCR_EL2, resx);
>
>         /* HCRX_EL2 */
> -       get_reg_fixed_bits(kvm, HCRX_EL2, &res0, &res1);
> -       set_sysreg_masks(kvm, HCRX_EL2, res0, res1);
> +       resx = get_reg_fixed_bits(kvm, HCRX_EL2);
> +       set_sysreg_masks(kvm, HCRX_EL2, resx);
>
>         /* HFG[RW]TR_EL2 */
> -       get_reg_fixed_bits(kvm, HFGRTR_EL2, &res0, &res1);
> -       set_sysreg_masks(kvm, HFGRTR_EL2, res0, res1);
> -       get_reg_fixed_bits(kvm, HFGWTR_EL2, &res0, &res1);
> -       set_sysreg_masks(kvm, HFGWTR_EL2, res0, res1);
> +       resx = get_reg_fixed_bits(kvm, HFGRTR_EL2);
> +       set_sysreg_masks(kvm, HFGRTR_EL2, resx);
> +       resx = get_reg_fixed_bits(kvm, HFGWTR_EL2);
> +       set_sysreg_masks(kvm, HFGWTR_EL2, resx);
>
>         /* HDFG[RW]TR_EL2 */
> -       get_reg_fixed_bits(kvm, HDFGRTR_EL2, &res0, &res1);
> -       set_sysreg_masks(kvm, HDFGRTR_EL2, res0, res1);
> -       get_reg_fixed_bits(kvm, HDFGWTR_EL2, &res0, &res1);
> -       set_sysreg_masks(kvm, HDFGWTR_EL2, res0, res1);
> +       resx = get_reg_fixed_bits(kvm, HDFGRTR_EL2);
> +       set_sysreg_masks(kvm, HDFGRTR_EL2, resx);
> +       resx = get_reg_fixed_bits(kvm, HDFGWTR_EL2);
> +       set_sysreg_masks(kvm, HDFGWTR_EL2, resx);
>
>         /* HFGITR_EL2 */
> -       get_reg_fixed_bits(kvm, HFGITR_EL2, &res0, &res1);
> -       set_sysreg_masks(kvm, HFGITR_EL2, res0, res1);
> +       resx = get_reg_fixed_bits(kvm, HFGITR_EL2);
> +       set_sysreg_masks(kvm, HFGITR_EL2, resx);
>
>         /* HAFGRTR_EL2 - not a lot to see here */
> -       get_reg_fixed_bits(kvm, HAFGRTR_EL2, &res0, &res1);
> -       set_sysreg_masks(kvm, HAFGRTR_EL2, res0, res1);
> +       resx = get_reg_fixed_bits(kvm, HAFGRTR_EL2);
> +       set_sysreg_masks(kvm, HAFGRTR_EL2, resx);
>
>         /* HFG[RW]TR2_EL2 */
> -       get_reg_fixed_bits(kvm, HFGRTR2_EL2, &res0, &res1);
> -       set_sysreg_masks(kvm, HFGRTR2_EL2, res0, res1);
> -       get_reg_fixed_bits(kvm, HFGWTR2_EL2, &res0, &res1);
> -       set_sysreg_masks(kvm, HFGWTR2_EL2, res0, res1);
> +       resx = get_reg_fixed_bits(kvm, HFGRTR2_EL2);
> +       set_sysreg_masks(kvm, HFGRTR2_EL2, resx);
> +       resx = get_reg_fixed_bits(kvm, HFGWTR2_EL2);
> +       set_sysreg_masks(kvm, HFGWTR2_EL2, resx);
>
>         /* HDFG[RW]TR2_EL2 */
> -       get_reg_fixed_bits(kvm, HDFGRTR2_EL2, &res0, &res1);
> -       set_sysreg_masks(kvm, HDFGRTR2_EL2, res0, res1);
> -       get_reg_fixed_bits(kvm, HDFGWTR2_EL2, &res0, &res1);
> -       set_sysreg_masks(kvm, HDFGWTR2_EL2, res0, res1);
> +       resx = get_reg_fixed_bits(kvm, HDFGRTR2_EL2);
> +       set_sysreg_masks(kvm, HDFGRTR2_EL2, resx);
> +       resx = get_reg_fixed_bits(kvm, HDFGWTR2_EL2);
> +       set_sysreg_masks(kvm, HDFGWTR2_EL2, resx);
>
>         /* HFGITR2_EL2 */
> -       get_reg_fixed_bits(kvm, HFGITR2_EL2, &res0, &res1);
> -       set_sysreg_masks(kvm, HFGITR2_EL2, res0, res1);
> +       resx = get_reg_fixed_bits(kvm, HFGITR2_EL2);
> +       set_sysreg_masks(kvm, HFGITR2_EL2, resx);
>
>         /* TCR2_EL2 */
> -       get_reg_fixed_bits(kvm, TCR2_EL2, &res0, &res1);
> -       set_sysreg_masks(kvm, TCR2_EL2, res0, res1);
> +       resx = get_reg_fixed_bits(kvm, TCR2_EL2);
> +       set_sysreg_masks(kvm, TCR2_EL2, resx);
>
>         /* SCTLR_EL1 */
> -       get_reg_fixed_bits(kvm, SCTLR_EL1, &res0, &res1);
> -       set_sysreg_masks(kvm, SCTLR_EL1, res0, res1);
> +       resx = get_reg_fixed_bits(kvm, SCTLR_EL1);
> +       set_sysreg_masks(kvm, SCTLR_EL1, resx);
>
>         /* SCTLR2_ELx */
> -       get_reg_fixed_bits(kvm, SCTLR2_EL1, &res0, &res1);
> -       set_sysreg_masks(kvm, SCTLR2_EL1, res0, res1);
> -       get_reg_fixed_bits(kvm, SCTLR2_EL2, &res0, &res1);
> -       set_sysreg_masks(kvm, SCTLR2_EL2, res0, res1);
> +       resx = get_reg_fixed_bits(kvm, SCTLR2_EL1);
> +       set_sysreg_masks(kvm, SCTLR2_EL1, resx);
> +       resx = get_reg_fixed_bits(kvm, SCTLR2_EL2);
> +       set_sysreg_masks(kvm, SCTLR2_EL2, resx);
>
>         /* MDCR_EL2 */
> -       get_reg_fixed_bits(kvm, MDCR_EL2, &res0, &res1);
> -       set_sysreg_masks(kvm, MDCR_EL2, res0, res1);
> +       resx = get_reg_fixed_bits(kvm, MDCR_EL2);
> +       set_sysreg_masks(kvm, MDCR_EL2, resx);
>
>         /* CNTHCTL_EL2 */
> -       res0 = GENMASK(63, 20);
> -       res1 = 0;
> +       resx.res0 = GENMASK(63, 20);
> +       resx.res1 = 0;
>         if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RME, IMP))
> -               res0 |= CNTHCTL_CNTPMASK | CNTHCTL_CNTVMASK;
> +               resx.res0 |= CNTHCTL_CNTPMASK | CNTHCTL_CNTVMASK;
>         if (!kvm_has_feat(kvm, ID_AA64MMFR0_EL1, ECV, CNTPOFF)) {
> -               res0 |= CNTHCTL_ECV;
> +               resx.res0 |= CNTHCTL_ECV;
>                 if (!kvm_has_feat(kvm, ID_AA64MMFR0_EL1, ECV, IMP))
> -                       res0 |= (CNTHCTL_EL1TVT | CNTHCTL_EL1TVCT |
> -                                CNTHCTL_EL1NVPCT | CNTHCTL_EL1NVVCT);
> +                       resx.res0 |= (CNTHCTL_EL1TVT | CNTHCTL_EL1TVCT |
> +                                     CNTHCTL_EL1NVPCT | CNTHCTL_EL1NVVCT);
>         }
>         if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, VH, IMP))
> -               res0 |= GENMASK(11, 8);
> -       set_sysreg_masks(kvm, CNTHCTL_EL2, res0, res1);
> +               resx.res0 |= GENMASK(11, 8);
> +       set_sysreg_masks(kvm, CNTHCTL_EL2, resx);
>
>         /* ICH_HCR_EL2 */
> -       res0 = ICH_HCR_EL2_RES0;
> -       res1 = ICH_HCR_EL2_RES1;
> +       resx.res0 = ICH_HCR_EL2_RES0;
> +       resx.res1 = ICH_HCR_EL2_RES1;
>         if (!(kvm_vgic_global_state.ich_vtr_el2 & ICH_VTR_EL2_TDS))
> -               res0 |= ICH_HCR_EL2_TDIR;
> +               resx.res0 |= ICH_HCR_EL2_TDIR;
>         /* No GICv4 is presented to the guest */
> -       res0 |= ICH_HCR_EL2_DVIM | ICH_HCR_EL2_vSGIEOICount;
> -       set_sysreg_masks(kvm, ICH_HCR_EL2, res0, res1);
> +       resx.res0 |= ICH_HCR_EL2_DVIM | ICH_HCR_EL2_vSGIEOICount;
> +       set_sysreg_masks(kvm, ICH_HCR_EL2, resx);
>
>         /* VNCR_EL2 */
> -       set_sysreg_masks(kvm, VNCR_EL2, VNCR_EL2_RES0, VNCR_EL2_RES1);
> +       resx.res0 = VNCR_EL2_RES0;
> +       resx.res1 = VNCR_EL2_RES1;
> +       set_sysreg_masks(kvm, VNCR_EL2, resx);
>
>  out:
>         for (enum vcpu_sysreg sr = __SANITISED_REG_START__; sr < NR_SYS_REGS; sr++)
> --
> 2.47.3
>


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH 05/20] KVM: arm64: Extend unified RESx handling to runtime sanitisation
  2026-01-26 12:16 ` [PATCH 05/20] KVM: arm64: Extend unified RESx handling to runtime sanitisation Marc Zyngier
@ 2026-01-26 19:15   ` Fuad Tabba
  2026-01-27 10:52     ` Marc Zyngier
  0 siblings, 1 reply; 53+ messages in thread
From: Fuad Tabba @ 2026-01-26 19:15 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

Hi Marc,

On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
>
> Add a new helper to retrieve the RESx values for a given system
> register, and use it for the runtime sanitisation.
>
> This results in slightly better code generation for a fairly hot
> path in the hypervisor.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_host.h | 13 +++++++++++++
>  arch/arm64/kvm/emulate-nested.c   | 10 +---------
>  arch/arm64/kvm/nested.c           | 13 ++++---------
>  3 files changed, 18 insertions(+), 18 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index a7e4cd8ebf56f..9dca94e4361f0 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -635,6 +635,19 @@ struct kvm_sysreg_masks {
>         struct resx mask[NR_SYS_REGS - __SANITISED_REG_START__];
>  };
>
> +#define kvm_get_sysreg_resx(k, sr)                                     \
> +       ({                                                              \
> +               struct kvm_sysreg_masks *__masks;                       \
> +               struct resx __resx = {};                                \
> +                                                                       \
> +               __masks = (k)->arch.sysreg_masks;                       \
> +               if (likely(__masks &&                                   \
> +                          sr >= __SANITISED_REG_START__ &&             \
> +                          sr < NR_SYS_REGS))                           \
> +                       __resx = __masks->mask[sr - __SANITISED_REG_START__]; \
> +               __resx;                                                 \
> +       })
> +

This now covers all the registers that need to be sanitised, not just
the VNCR-backed ones.

nit: wouldn't it be better to capture sr in a local variable rather
than reuse it? It is an enum, but it would make checkpatch feel
slightly better :)
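On capturing sr: beyond checkpatch aesthetics, a local copy would also
guard against the argument being evaluated more than once. A toy
demonstration (not the kernel macro):

```c
#include <assert.h>

/* 'n' appears three times in the body, so a side-effecting
 * argument is evaluated up to three times per use. */
#define USES_ARG_THRICE(n)	((n) >= 0 && (n) < 10 ? (n) : -1)

static int calls;

static int next(void)
{
	return calls++;
}
```

Probably moot for an enum constant like sr, but the statement-expression
form makes the single-evaluation version nearly free.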

Reviewed-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad

>  #define kvm_set_sysreg_resx(k, sr, resx)               \
>         do {                                            \
>                 (k)->arch.sysreg_masks->mask[sr - __SANITISED_REG_START__] = resx; \
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index 774cfbf5b43ba..43334cd2db9e5 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -2427,15 +2427,7 @@ static enum trap_behaviour compute_trap_behaviour(struct kvm_vcpu *vcpu,
>
>  static u64 kvm_get_sysreg_res0(struct kvm *kvm, enum vcpu_sysreg sr)
>  {
> -       struct kvm_sysreg_masks *masks;
> -
> -       /* Only handle the VNCR-backed regs for now */
> -       if (sr < __VNCR_START__)
> -               return 0;
> -
> -       masks = kvm->arch.sysreg_masks;
> -
> -       return masks->mask[sr - __SANITISED_REG_START__].res0;
> +       return kvm_get_sysreg_resx(kvm, sr).res0;
>  }
>
>  static bool check_fgt_bit(struct kvm_vcpu *vcpu, enum vcpu_sysreg sr,
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index c5a45bc62153e..75a23f1c56d13 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -1669,16 +1669,11 @@ u64 limit_nv_id_reg(struct kvm *kvm, u32 reg, u64 val)
>  u64 kvm_vcpu_apply_reg_masks(const struct kvm_vcpu *vcpu,
>                              enum vcpu_sysreg sr, u64 v)
>  {
> -       struct kvm_sysreg_masks *masks;
> -
> -       masks = vcpu->kvm->arch.sysreg_masks;
> -
> -       if (masks) {
> -               sr -= __SANITISED_REG_START__;
> +       struct resx resx;
>
> -               v &= ~masks->mask[sr].res0;
> -               v |= masks->mask[sr].res1;
> -       }
> +       resx = kvm_get_sysreg_resx(vcpu->kvm, sr);
> +       v &= ~resx.res0;
> +       v |= resx.res1;
>
>         return v;
>  }
> --
> 2.47.3
>


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH 05/20] KVM: arm64: Extend unified RESx handling to runtime sanitisation
  2026-01-26 19:15   ` Fuad Tabba
@ 2026-01-27 10:52     ` Marc Zyngier
  0 siblings, 0 replies; 53+ messages in thread
From: Marc Zyngier @ 2026-01-27 10:52 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Mon, 26 Jan 2026 19:15:00 +0000,
Fuad Tabba <tabba@google.com> wrote:
> 
> Hi Marc,
> 
> On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
> >
> > Add a new helper to retrieve the RESx values for a given system
> > register, and use it for the runtime sanitisation.
> >
> > This results in slightly better code generation for a fairly hot
> > path in the hypervisor.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_host.h | 13 +++++++++++++
> >  arch/arm64/kvm/emulate-nested.c   | 10 +---------
> >  arch/arm64/kvm/nested.c           | 13 ++++---------
> >  3 files changed, 18 insertions(+), 18 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index a7e4cd8ebf56f..9dca94e4361f0 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -635,6 +635,19 @@ struct kvm_sysreg_masks {
> >         struct resx mask[NR_SYS_REGS - __SANITISED_REG_START__];
> >  };
> >
> > +#define kvm_get_sysreg_resx(k, sr)                                     \
> > +       ({                                                              \
> > +               struct kvm_sysreg_masks *__masks;                       \
> > +               struct resx __resx = {};                                \
> > +                                                                       \
> > +               __masks = (k)->arch.sysreg_masks;                       \
> > +               if (likely(__masks &&                                   \
> > +                          sr >= __SANITISED_REG_START__ &&             \
> > +                          sr < NR_SYS_REGS))                           \
> > +                       __resx = __masks->mask[sr - __SANITISED_REG_START__]; \
> > +               __resx;                                                 \
> > +       })
> > +
> 
> This now covers all registers that need to be sanitized, not just
> the VNCR-backed ones.

Only kvm_get_sysreg_res0() was previously limited to VNCR-backed registers,
and that was a bug found by Zenghui. What I'm trying to do here is to
concentrate the decision about accessing the masks in a single place
that is safe to use from any context.

> 
> nit: wouldn't it be better to capture sr in a local variable rather
> than reuse it? It is an enum, but it would make checkpatch feel
> slightly better :)

Indeed, this macro is pretty horrible, and needs some tidying up. I'll
have a look at pimping it up ;-)

>
> Reviewed-by: Fuad Tabba <tabba@google.com>

Thanks!

	M.

-- 
Without deviation from the norm, progress is not possible.



* Re: [PATCH 06/20] KVM: arm64: Inherit RESx bits from FGT register descriptors
  2026-01-26 12:16 ` [PATCH 06/20] KVM: arm64: Inherit RESx bits from FGT register descriptors Marc Zyngier
@ 2026-01-27 15:21   ` Joey Gouly
  2026-01-27 17:58   ` Fuad Tabba
  1 sibling, 0 replies; 53+ messages in thread
From: Joey Gouly @ 2026-01-27 15:21 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Suzuki K Poulose, Oliver Upton,
	Zenghui Yu, Fuad Tabba, Will Deacon, Catalin Marinas

On Mon, Jan 26, 2026 at 12:16:40PM +0000, Marc Zyngier wrote:
> The FGT registers have their computed RESx bits stashed in specific
> descriptors, which we can easily use when computing the masks used
> for the guest.
> 
> This removes a bit of boilerplate code.

Reviewed-by: Joey Gouly <joey.gouly@arm.com>

> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/config.c | 16 +++++-----------
>  1 file changed, 5 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index a907195bd44b6..8d152605999ba 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -1344,6 +1344,11 @@ struct resx compute_reg_resx_bits(struct kvm *kvm,
>  	resx = compute_resx_bits(kvm, r->bit_feat_map, r->bit_feat_map_sz,
>  				 require, exclude);
>  
> +	if (r->feat_map.flags & MASKS_POINTER) {
> +		resx.res0 |= r->feat_map.masks->res0;
> +		resx.res1 |= r->feat_map.masks->res1;
> +	}
> +
>  	tmp = compute_resx_bits(kvm, &r->feat_map, 1, require, exclude);
>  
>  	resx.res0 |= tmp.res0;
> @@ -1424,47 +1429,36 @@ struct resx get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg)
>  	switch (reg) {
>  	case HFGRTR_EL2:
>  		resx = compute_reg_resx_bits(kvm, &hfgrtr_desc, 0, 0);
> -		resx.res1 |= HFGRTR_EL2_RES1;
>  		break;
>  	case HFGWTR_EL2:
>  		resx = compute_reg_resx_bits(kvm, &hfgwtr_desc, 0, 0);
> -		resx.res1 |= HFGWTR_EL2_RES1;
>  		break;
>  	case HFGITR_EL2:
>  		resx = compute_reg_resx_bits(kvm, &hfgitr_desc, 0, 0);
> -		resx.res1 |= HFGITR_EL2_RES1;
>  		break;
>  	case HDFGRTR_EL2:
>  		resx = compute_reg_resx_bits(kvm, &hdfgrtr_desc, 0, 0);
> -		resx.res1 |= HDFGRTR_EL2_RES1;
>  		break;
>  	case HDFGWTR_EL2:
>  		resx = compute_reg_resx_bits(kvm, &hdfgwtr_desc, 0, 0);
> -		resx.res1 |= HDFGWTR_EL2_RES1;
>  		break;
>  	case HAFGRTR_EL2:
>  		resx = compute_reg_resx_bits(kvm, &hafgrtr_desc, 0, 0);
> -		resx.res1 |= HAFGRTR_EL2_RES1;
>  		break;
>  	case HFGRTR2_EL2:
>  		resx = compute_reg_resx_bits(kvm, &hfgrtr2_desc, 0, 0);
> -		resx.res1 |= HFGRTR2_EL2_RES1;
>  		break;
>  	case HFGWTR2_EL2:
>  		resx = compute_reg_resx_bits(kvm, &hfgwtr2_desc, 0, 0);
> -		resx.res1 |= HFGWTR2_EL2_RES1;
>  		break;
>  	case HFGITR2_EL2:
>  		resx = compute_reg_resx_bits(kvm, &hfgitr2_desc, 0, 0);
> -		resx.res1 |= HFGITR2_EL2_RES1;
>  		break;
>  	case HDFGRTR2_EL2:
>  		resx = compute_reg_resx_bits(kvm, &hdfgrtr2_desc, 0, 0);
> -		resx.res1 |= HDFGRTR2_EL2_RES1;
>  		break;
>  	case HDFGWTR2_EL2:
>  		resx = compute_reg_resx_bits(kvm, &hdfgwtr2_desc, 0, 0);
> -		resx.res1 |= HDFGWTR2_EL2_RES1;
>  		break;
>  	case HCRX_EL2:
>  		resx = compute_reg_resx_bits(kvm, &hcrx_desc, 0, 0);
> -- 
> 2.47.3
> 



* Re: [PATCH 07/20] KVM: arm64: Allow RES1 bits to be inferred from configuration
  2026-01-26 12:16 ` [PATCH 07/20] KVM: arm64: Allow RES1 bits to be inferred from configuration Marc Zyngier
@ 2026-01-27 15:26   ` Joey Gouly
  2026-01-27 17:58   ` Fuad Tabba
  1 sibling, 0 replies; 53+ messages in thread
From: Joey Gouly @ 2026-01-27 15:26 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Suzuki K Poulose, Oliver Upton,
	Zenghui Yu, Fuad Tabba, Will Deacon, Catalin Marinas

On Mon, Jan 26, 2026 at 12:16:41PM +0000, Marc Zyngier wrote:
> So far, when a bit field is tied to an unsupported feature, we set
> it as RES0. This is almost forrect, but there are a few exceptions

forrect is almost correct too!

> where the bits become RES1.
> 
> Add an AS_RES1 qualifier that instructs the RESx computing code to
> simply do that.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>

Reviewed-by: Joey Gouly <joey.gouly@arm.com>

Thanks,
Joey

> ---
>  arch/arm64/kvm/config.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index 8d152605999ba..6a4674fabf865 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -24,6 +24,7 @@ struct reg_bits_to_feat_map {
>  #define	CALL_FUNC	BIT(1)	/* Needs to evaluate tons of crap */
>  #define	FIXED_VALUE	BIT(2)	/* RAZ/WI or RAO/WI in KVM */
>  #define	MASKS_POINTER	BIT(3)	/* Pointer to fgt_masks struct instead of bits */
> +#define	AS_RES1		BIT(4)	/* RES1 when not supported */
>  
>  	unsigned long	flags;
>  
> @@ -1316,8 +1317,12 @@ struct resx __compute_fixed_bits(struct kvm *kvm,
>  		else
>  			match = idreg_feat_match(kvm, &map[i]);
>  
> -		if (!match || (map[i].flags & FIXED_VALUE))
> -			resx.res0 |= reg_feat_map_bits(&map[i]);
> +		if (!match || (map[i].flags & FIXED_VALUE)) {
> +			if (map[i].flags & AS_RES1)
> + 				resx.res1 |= reg_feat_map_bits(&map[i]);
> +			else
> +				resx.res0 |= reg_feat_map_bits(&map[i]);
> +		}
>  	}
>  
>  	return resx;
> -- 
> 2.47.3
> 



* Re: [PATCH 06/20] KVM: arm64: Inherit RESx bits from FGT register descriptors
  2026-01-26 12:16 ` [PATCH 06/20] KVM: arm64: Inherit RESx bits from FGT register descriptors Marc Zyngier
  2026-01-27 15:21   ` Joey Gouly
@ 2026-01-27 17:58   ` Fuad Tabba
  1 sibling, 0 replies; 53+ messages in thread
From: Fuad Tabba @ 2026-01-27 17:58 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
>
> The FGT registers have their computed RESx bits stashed in specific
> descriptors, which we can easily use when computing the masks used
> for the guest.
>
> This removes a bit of boilerplate code.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>

Reviewed-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad


> ---
>  arch/arm64/kvm/config.c | 16 +++++-----------
>  1 file changed, 5 insertions(+), 11 deletions(-)
>
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index a907195bd44b6..8d152605999ba 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -1344,6 +1344,11 @@ struct resx compute_reg_resx_bits(struct kvm *kvm,
>         resx = compute_resx_bits(kvm, r->bit_feat_map, r->bit_feat_map_sz,
>                                  require, exclude);
>
> +       if (r->feat_map.flags & MASKS_POINTER) {
> +               resx.res0 |= r->feat_map.masks->res0;
> +               resx.res1 |= r->feat_map.masks->res1;
> +       }
> +
>         tmp = compute_resx_bits(kvm, &r->feat_map, 1, require, exclude);
>
>         resx.res0 |= tmp.res0;
> @@ -1424,47 +1429,36 @@ struct resx get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg)
>         switch (reg) {
>         case HFGRTR_EL2:
>                 resx = compute_reg_resx_bits(kvm, &hfgrtr_desc, 0, 0);
> -               resx.res1 |= HFGRTR_EL2_RES1;
>                 break;
>         case HFGWTR_EL2:
>                 resx = compute_reg_resx_bits(kvm, &hfgwtr_desc, 0, 0);
> -               resx.res1 |= HFGWTR_EL2_RES1;
>                 break;
>         case HFGITR_EL2:
>                 resx = compute_reg_resx_bits(kvm, &hfgitr_desc, 0, 0);
> -               resx.res1 |= HFGITR_EL2_RES1;
>                 break;
>         case HDFGRTR_EL2:
>                 resx = compute_reg_resx_bits(kvm, &hdfgrtr_desc, 0, 0);
> -               resx.res1 |= HDFGRTR_EL2_RES1;
>                 break;
>         case HDFGWTR_EL2:
>                 resx = compute_reg_resx_bits(kvm, &hdfgwtr_desc, 0, 0);
> -               resx.res1 |= HDFGWTR_EL2_RES1;
>                 break;
>         case HAFGRTR_EL2:
>                 resx = compute_reg_resx_bits(kvm, &hafgrtr_desc, 0, 0);
> -               resx.res1 |= HAFGRTR_EL2_RES1;
>                 break;
>         case HFGRTR2_EL2:
>                 resx = compute_reg_resx_bits(kvm, &hfgrtr2_desc, 0, 0);
> -               resx.res1 |= HFGRTR2_EL2_RES1;
>                 break;
>         case HFGWTR2_EL2:
>                 resx = compute_reg_resx_bits(kvm, &hfgwtr2_desc, 0, 0);
> -               resx.res1 |= HFGWTR2_EL2_RES1;
>                 break;
>         case HFGITR2_EL2:
>                 resx = compute_reg_resx_bits(kvm, &hfgitr2_desc, 0, 0);
> -               resx.res1 |= HFGITR2_EL2_RES1;
>                 break;
>         case HDFGRTR2_EL2:
>                 resx = compute_reg_resx_bits(kvm, &hdfgrtr2_desc, 0, 0);
> -               resx.res1 |= HDFGRTR2_EL2_RES1;
>                 break;
>         case HDFGWTR2_EL2:
>                 resx = compute_reg_resx_bits(kvm, &hdfgwtr2_desc, 0, 0);
> -               resx.res1 |= HDFGWTR2_EL2_RES1;
>                 break;
>         case HCRX_EL2:
>                 resx = compute_reg_resx_bits(kvm, &hcrx_desc, 0, 0);
> --
> 2.47.3
>



* Re: [PATCH 07/20] KVM: arm64: Allow RES1 bits to be inferred from configuration
  2026-01-26 12:16 ` [PATCH 07/20] KVM: arm64: Allow RES1 bits to be inferred from configuration Marc Zyngier
  2026-01-27 15:26   ` Joey Gouly
@ 2026-01-27 17:58   ` Fuad Tabba
  1 sibling, 0 replies; 53+ messages in thread
From: Fuad Tabba @ 2026-01-27 17:58 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
>
> So far, when a bit field is tied to an unsupported feature, we set
> it as RES0. This is almost forrect, but there are a few exceptions
> where the bits become RES1.

You need to correct forrect :)

> Add an AS_RES1 qualifier that instructs the RESx computing code to
> simply do that.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/config.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index 8d152605999ba..6a4674fabf865 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -24,6 +24,7 @@ struct reg_bits_to_feat_map {
>  #define        CALL_FUNC       BIT(1)  /* Needs to evaluate tons of crap */
>  #define        FIXED_VALUE     BIT(2)  /* RAZ/WI or RAO/WI in KVM */
>  #define        MASKS_POINTER   BIT(3)  /* Pointer to fgt_masks struct instead of bits */
> +#define        AS_RES1         BIT(4)  /* RES1 when not supported */
>
>         unsigned long   flags;
>
> @@ -1316,8 +1317,12 @@ struct resx __compute_fixed_bits(struct kvm *kvm,
>                 else
>                         match = idreg_feat_match(kvm, &map[i]);
>
> -               if (!match || (map[i].flags & FIXED_VALUE))
> -                       resx.res0 |= reg_feat_map_bits(&map[i]);
> +               if (!match || (map[i].flags & FIXED_VALUE)) {
> +                       if (map[i].flags & AS_RES1)
> +                               resx.res1 |= reg_feat_map_bits(&map[i]);
> +                       else
> +                               resx.res0 |= reg_feat_map_bits(&map[i]);
> +               }

checkpatch is complaining about whitespaces here. I can't blame it.

With those fixed, looks good to me.

Reviewed-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad

>         }
>
>         return resx;



> --
> 2.47.3
>



* Re: [PATCH 08/20] KVM: arm64: Correctly handle SCTLR_EL1 RES1 bits for unsupported features
  2026-01-26 12:16 ` [PATCH 08/20] KVM: arm64: Correctly handle SCTLR_EL1 RES1 bits for unsupported features Marc Zyngier
@ 2026-01-27 18:06   ` Fuad Tabba
  0 siblings, 0 replies; 53+ messages in thread
From: Fuad Tabba @ 2026-01-27 18:06 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

Hi Marc,

On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
>
> A bunch of SCTLR_EL1 bits must be set to RES1 when the controlling

nit: controlling

> feature is not present. Add the AS_RES1 qualifier where needed.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>

Otherwise, it matches the Arm Arm.

Reviewed-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad



> ---
>  arch/arm64/kvm/config.c | 26 ++++++++++++++------------
>  1 file changed, 14 insertions(+), 12 deletions(-)
>
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index 6a4674fabf865..68ed5af2b4d53 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -1085,27 +1085,28 @@ static const DECLARE_FEAT_MAP(tcr2_el2_desc, TCR2_EL2,
>                               tcr2_el2_feat_map, FEAT_TCR2);
>
>  static const struct reg_bits_to_feat_map sctlr_el1_feat_map[] = {
> -       NEEDS_FEAT(SCTLR_EL1_CP15BEN    |
> -                  SCTLR_EL1_ITD        |
> -                  SCTLR_EL1_SED,
> -                  FEAT_AA32EL0),
> +       NEEDS_FEAT(SCTLR_EL1_CP15BEN, FEAT_AA32EL0),
> +       NEEDS_FEAT_FLAG(SCTLR_EL1_ITD   |
> +                       SCTLR_EL1_SED,
> +                       AS_RES1, FEAT_AA32EL0),
>         NEEDS_FEAT(SCTLR_EL1_BT0        |
>                    SCTLR_EL1_BT1,
>                    FEAT_BTI),
>         NEEDS_FEAT(SCTLR_EL1_CMOW, FEAT_CMOW),
> -       NEEDS_FEAT(SCTLR_EL1_TSCXT, feat_csv2_2_csv2_1p2),
> -       NEEDS_FEAT(SCTLR_EL1_EIS        |
> -                  SCTLR_EL1_EOS,
> -                  FEAT_ExS),
> +       NEEDS_FEAT_FLAG(SCTLR_EL1_TSCXT,
> +                       AS_RES1, feat_csv2_2_csv2_1p2),
> +       NEEDS_FEAT_FLAG(SCTLR_EL1_EIS   |
> +                       SCTLR_EL1_EOS,
> +                       AS_RES1, FEAT_ExS),
>         NEEDS_FEAT(SCTLR_EL1_EnFPM, FEAT_FPMR),
>         NEEDS_FEAT(SCTLR_EL1_IESB, FEAT_IESB),
>         NEEDS_FEAT(SCTLR_EL1_EnALS, FEAT_LS64),
>         NEEDS_FEAT(SCTLR_EL1_EnAS0, FEAT_LS64_ACCDATA),
>         NEEDS_FEAT(SCTLR_EL1_EnASR, FEAT_LS64_V),
>         NEEDS_FEAT(SCTLR_EL1_nAA, FEAT_LSE2),
> -       NEEDS_FEAT(SCTLR_EL1_LSMAOE     |
> -                  SCTLR_EL1_nTLSMD,
> -                  FEAT_LSMAOC),
> +       NEEDS_FEAT_FLAG(SCTLR_EL1_LSMAOE        |
> +                       SCTLR_EL1_nTLSMD,
> +                       AS_RES1, FEAT_LSMAOC),
>         NEEDS_FEAT(SCTLR_EL1_EE, FEAT_MixedEnd),
>         NEEDS_FEAT(SCTLR_EL1_E0E, feat_mixedendel0),
>         NEEDS_FEAT(SCTLR_EL1_MSCEn, FEAT_MOPS),
> @@ -1121,7 +1122,8 @@ static const struct reg_bits_to_feat_map sctlr_el1_feat_map[] = {
>         NEEDS_FEAT(SCTLR_EL1_NMI        |
>                    SCTLR_EL1_SPINTMASK,
>                    FEAT_NMI),
> -       NEEDS_FEAT(SCTLR_EL1_SPAN, FEAT_PAN),
> +       NEEDS_FEAT_FLAG(SCTLR_EL1_SPAN,
> +                       AS_RES1, FEAT_PAN),
>         NEEDS_FEAT(SCTLR_EL1_EPAN, FEAT_PAN3),
>         NEEDS_FEAT(SCTLR_EL1_EnDA       |
>                    SCTLR_EL1_EnDB       |
> --
> 2.47.3
>



* Re: [PATCH 09/20] KVM: arm64: Convert HCR_EL2.RW to AS_RES1
  2026-01-26 12:16 ` [PATCH 09/20] KVM: arm64: Convert HCR_EL2.RW to AS_RES1 Marc Zyngier
@ 2026-01-27 18:09   ` Fuad Tabba
  0 siblings, 0 replies; 53+ messages in thread
From: Fuad Tabba @ 2026-01-27 18:09 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
>
> Now that we have the AS_RES1 constraint, it becomes trivial to express
> the HCR_EL2.RW behaviour.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>

Reviewed-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad

> ---
>  arch/arm64/kvm/config.c | 15 +--------------
>  1 file changed, 1 insertion(+), 14 deletions(-)
>
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index 68ed5af2b4d53..39487182057a3 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -389,19 +389,6 @@ static bool feat_vmid16(struct kvm *kvm)
>         return kvm_has_feat_enum(kvm, ID_AA64MMFR1_EL1, VMIDBits, 16);
>  }
>
> -static bool compute_hcr_rw(struct kvm *kvm, u64 *bits)
> -{
> -       /* This is purely academic: AArch32 and NV are mutually exclusive */
> -       if (bits) {
> -               if (kvm_has_feat(kvm, FEAT_AA32EL1))
> -                       *bits &= ~HCR_EL2_RW;
> -               else
> -                       *bits |= HCR_EL2_RW;
> -       }
> -
> -       return true;
> -}
> -
>  static bool compute_hcr_e2h(struct kvm *kvm, u64 *bits)
>  {
>         if (bits) {
> @@ -967,7 +954,7 @@ static const DECLARE_FEAT_MAP(hcrx_desc, __HCRX_EL2,
>
>  static const struct reg_bits_to_feat_map hcr_feat_map[] = {
>         NEEDS_FEAT(HCR_EL2_TID0, FEAT_AA32EL0),
> -       NEEDS_FEAT_FIXED(HCR_EL2_RW, compute_hcr_rw),
> +       NEEDS_FEAT_FLAG(HCR_EL2_RW, AS_RES1, FEAT_AA32EL1),
>         NEEDS_FEAT(HCR_EL2_HCD, not_feat_aa64el3),
>         NEEDS_FEAT(HCR_EL2_AMO          |
>                    HCR_EL2_BSU          |
> --
> 2.47.3
>



* Re: [PATCH 10/20] KVM: arm64: Simplify FIXED_VALUE handling
  2026-01-26 12:16 ` [PATCH 10/20] KVM: arm64: Simplify FIXED_VALUE handling Marc Zyngier
@ 2026-01-27 18:20   ` Fuad Tabba
  0 siblings, 0 replies; 53+ messages in thread
From: Fuad Tabba @ 2026-01-27 18:20 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
>
> The FIXED_VALUE qualifier (mostly used for HCR_EL2) is pointlessly
> complicated, as it tries to piggy-back on the previous RES0 handling
> while being done in a different phase, on different data.
>
> Instead, make it an integral part of the RESx computation, and allow
> it to directly set RESx bits. This is much easier to understand.
>
> It also paves the way for some additional changes that will allow
> the full removal of the FIXED_VALUE handling.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>

The new code preserves the logic, and is easier to understand.

Reviewed-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad


> ---
>  arch/arm64/kvm/config.c | 67 ++++++++++++++---------------------------
>  1 file changed, 22 insertions(+), 45 deletions(-)
>
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index 39487182057a3..4fac04d3132c0 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -37,7 +37,7 @@ struct reg_bits_to_feat_map {
>                         s8      lo_lim;
>                 };
>                 bool    (*match)(struct kvm *);
> -               bool    (*fval)(struct kvm *, u64 *);
> +               bool    (*fval)(struct kvm *, struct resx *);
>         };
>  };
>
> @@ -389,14 +389,12 @@ static bool feat_vmid16(struct kvm *kvm)
>         return kvm_has_feat_enum(kvm, ID_AA64MMFR1_EL1, VMIDBits, 16);
>  }
>
> -static bool compute_hcr_e2h(struct kvm *kvm, u64 *bits)
> +static bool compute_hcr_e2h(struct kvm *kvm, struct resx *bits)
>  {
> -       if (bits) {
> -               if (kvm_has_feat(kvm, FEAT_E2H0))
> -                       *bits &= ~HCR_EL2_E2H;
> -               else
> -                       *bits |= HCR_EL2_E2H;
> -       }
> +       if (kvm_has_feat(kvm, FEAT_E2H0))
> +               bits->res0 |= HCR_EL2_E2H;
> +       else
> +               bits->res1 |= HCR_EL2_E2H;
>
>         return true;
>  }
> @@ -1281,12 +1279,11 @@ static bool idreg_feat_match(struct kvm *kvm, const struct reg_bits_to_feat_map
>  }
>
>  static
> -struct resx __compute_fixed_bits(struct kvm *kvm,
> -                               const struct reg_bits_to_feat_map *map,
> -                               int map_size,
> -                               u64 *fixed_bits,
> -                               unsigned long require,
> -                               unsigned long exclude)
> +struct resx compute_resx_bits(struct kvm *kvm,
> +                             const struct reg_bits_to_feat_map *map,
> +                             int map_size,
> +                             unsigned long require,
> +                             unsigned long exclude)
>  {
>         struct resx resx = {};
>
> @@ -1299,14 +1296,18 @@ struct resx __compute_fixed_bits(struct kvm *kvm,
>                 if (map[i].flags & exclude)
>                         continue;
>
> -               if (map[i].flags & CALL_FUNC)
> -                       match = (map[i].flags & FIXED_VALUE) ?
> -                               map[i].fval(kvm, fixed_bits) :
> -                               map[i].match(kvm);
> -               else
> +               switch (map[i].flags & (CALL_FUNC | FIXED_VALUE)) {
> +               case CALL_FUNC | FIXED_VALUE:
> +                       map[i].fval(kvm, &resx);
> +                       continue;
> +               case CALL_FUNC:
> +                       match = map[i].match(kvm);
> +                       break;
> +               default:
>                         match = idreg_feat_match(kvm, &map[i]);
> +               }
>
> -               if (!match || (map[i].flags & FIXED_VALUE)) {
> +               if (!match) {
>                         if (map[i].flags & AS_RES1)
>                                 resx.res1 |= reg_feat_map_bits(&map[i]);
>                         else
> @@ -1317,17 +1318,6 @@ struct resx __compute_fixed_bits(struct kvm *kvm,
>         return resx;
>  }
>
> -static
> -struct resx compute_resx_bits(struct kvm *kvm,
> -                            const struct reg_bits_to_feat_map *map,
> -                            int map_size,
> -                            unsigned long require,
> -                            unsigned long exclude)
> -{
> -       return __compute_fixed_bits(kvm, map, map_size, NULL,
> -                                   require, exclude | FIXED_VALUE);
> -}
> -
>  static
>  struct resx compute_reg_resx_bits(struct kvm *kvm,
>                                  const struct reg_feat_map_desc *r,
> @@ -1368,16 +1358,6 @@ static u64 compute_fgu_bits(struct kvm *kvm, const struct reg_feat_map_desc *r)
>         return resx.res0 | resx.res1;
>  }
>
> -static
> -struct resx compute_reg_fixed_bits(struct kvm *kvm,
> -                                 const struct reg_feat_map_desc *r,
> -                                 u64 *fixed_bits, unsigned long require,
> -                                 unsigned long exclude)
> -{
> -       return __compute_fixed_bits(kvm, r->bit_feat_map, r->bit_feat_map_sz,
> -                                   fixed_bits, require | FIXED_VALUE, exclude);
> -}
> -
>  void compute_fgu(struct kvm *kvm, enum fgt_group_id fgt)
>  {
>         u64 val = 0;
> @@ -1417,7 +1397,6 @@ void compute_fgu(struct kvm *kvm, enum fgt_group_id fgt)
>
>  struct resx get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg)
>  {
> -       u64 fixed = 0, mask;
>         struct resx resx;
>
>         switch (reg) {
> @@ -1459,10 +1438,8 @@ struct resx get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg)
>                 resx.res1 |= __HCRX_EL2_RES1;
>                 break;
>         case HCR_EL2:
> -               mask = compute_reg_fixed_bits(kvm, &hcr_desc, &fixed, 0, 0).res0;
>                 resx = compute_reg_resx_bits(kvm, &hcr_desc, 0, 0);
> -               resx.res0 |= (mask & ~fixed);
> -               resx.res1 |= HCR_EL2_RES1 | (mask & fixed);
> +               resx.res1 |= HCR_EL2_RES1;
>                 break;
>         case SCTLR2_EL1:
>         case SCTLR2_EL2:
> --
> 2.47.3
>



* Re: [PATCH 11/20] KVM: arm64: Add REQUIRES_E2H1 constraint as configuration flags
  2026-01-26 12:16 ` [PATCH 11/20] KVM: arm64: Add REQUIRES_E2H1 constraint as configuration flags Marc Zyngier
@ 2026-01-27 18:28   ` Fuad Tabba
  0 siblings, 0 replies; 53+ messages in thread
From: Fuad Tabba @ 2026-01-27 18:28 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
>
> A bunch of EL2 configurations are very similar to their EL1 counterparts,
> with the added constraint of HCR_EL2.E2H being 1.
>
> For us, this means HCR_EL2.E2H being RES1, which is something we can
> statically evaluate.
>
> Add a REQUIRES_E2H1 constraint, which allows us to express conditions
> in a much simpler way (without extra code). Existing occurrences are
> converted, before we add a lot more.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/config.c | 38 ++++++++++++++------------------------
>  1 file changed, 14 insertions(+), 24 deletions(-)
>
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index 4fac04d3132c0..1990cebc77c66 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -25,6 +25,7 @@ struct reg_bits_to_feat_map {
>  #define        FIXED_VALUE     BIT(2)  /* RAZ/WI or RAO/WI in KVM */
>  #define        MASKS_POINTER   BIT(3)  /* Pointer to fgt_masks struct instead of bits */
>  #define        AS_RES1         BIT(4)  /* RES1 when not supported */
> +#define        REQUIRES_E2H1   BIT(5)  /* Add HCR_EL2.E2H RES1 as a pre-condition */
>
>         unsigned long   flags;
>
> @@ -311,21 +312,6 @@ static bool feat_trbe_mpam(struct kvm *kvm)
>                 (read_sysreg_s(SYS_TRBIDR_EL1) & TRBIDR_EL1_MPAM));
>  }
>
> -static bool feat_asid2_e2h1(struct kvm *kvm)
> -{
> -       return kvm_has_feat(kvm, FEAT_ASID2) && !kvm_has_feat(kvm, FEAT_E2H0);
> -}
> -
> -static bool feat_d128_e2h1(struct kvm *kvm)
> -{
> -       return kvm_has_feat(kvm, FEAT_D128) && !kvm_has_feat(kvm, FEAT_E2H0);
> -}
> -
> -static bool feat_mec_e2h1(struct kvm *kvm)
> -{
> -       return kvm_has_feat(kvm, FEAT_MEC) && !kvm_has_feat(kvm, FEAT_E2H0);
> -}
> -
>  static bool feat_ebep_pmuv3_ss(struct kvm *kvm)
>  {
>         return kvm_has_feat(kvm, FEAT_EBEP) || kvm_has_feat(kvm, FEAT_PMUv3_SS);
> @@ -1045,15 +1031,15 @@ static const DECLARE_FEAT_MAP(sctlr2_desc, SCTLR2_EL1,
>                               sctlr2_feat_map, FEAT_SCTLR2);
>
>  static const struct reg_bits_to_feat_map tcr2_el2_feat_map[] = {
> -       NEEDS_FEAT(TCR2_EL2_FNG1        |
> -                  TCR2_EL2_FNG0        |
> -                  TCR2_EL2_A2,
> -                  feat_asid2_e2h1),
> -       NEEDS_FEAT(TCR2_EL2_DisCH1      |
> -                  TCR2_EL2_DisCH0      |
> -                  TCR2_EL2_D128,
> -                  feat_d128_e2h1),
> -       NEEDS_FEAT(TCR2_EL2_AMEC1, feat_mec_e2h1),
> +       NEEDS_FEAT_FLAG(TCR2_EL2_FNG1   |
> +                       TCR2_EL2_FNG0   |
> +                       TCR2_EL2_A2,
> +                       REQUIRES_E2H1, FEAT_ASID2),
> +       NEEDS_FEAT_FLAG(TCR2_EL2_DisCH1 |
> +                       TCR2_EL2_DisCH0 |
> +                       TCR2_EL2_D128,
> +                       REQUIRES_E2H1, FEAT_D128),
> +       NEEDS_FEAT_FLAG(TCR2_EL2_AMEC1, REQUIRES_E2H1, FEAT_MEC),
>         NEEDS_FEAT(TCR2_EL2_AMEC0, FEAT_MEC),
>         NEEDS_FEAT(TCR2_EL2_HAFT, FEAT_HAFT),
>         NEEDS_FEAT(TCR2_EL2_PTTWI       |
> @@ -1285,6 +1271,7 @@ struct resx compute_resx_bits(struct kvm *kvm,
>                               unsigned long require,
>                               unsigned long exclude)
>  {
> +       bool e2h0 = kvm_has_feat(kvm, FEAT_E2H0);
>         struct resx resx = {};
>
>         for (int i = 0; i < map_size; i++) {
> @@ -1307,6 +1294,9 @@ struct resx compute_resx_bits(struct kvm *kvm,
>                         match = idreg_feat_match(kvm, &map[i]);
>                 }
>
> +               if (map[i].flags & REQUIRES_E2H1)
> +                       match &= !e2h0;
> +

nit: white space in the newline

Reviewed-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad




>                 if (!match) {
>                         if (map[i].flags & AS_RES1)
>                                 resx.res1 |= reg_feat_map_bits(&map[i]);
> --
> 2.47.3
>



* Re: [PATCH 12/20] KVM: arm64: Add RESx_WHEN_E2Hx constraints as configuration flags
  2026-01-26 12:16 ` [PATCH 12/20] KVM: arm64: Add RESx_WHEN_E2Hx constraints " Marc Zyngier
@ 2026-01-28 17:43   ` Fuad Tabba
  2026-01-29 10:14     ` Marc Zyngier
  0 siblings, 1 reply; 53+ messages in thread
From: Fuad Tabba @ 2026-01-28 17:43 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

Hi Marc,

On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
>
> "Thanks" to VHE, SCTLR_EL2 radically changes shape depending on the
> value of HCR_EL2.E2H, as a lot of the bits that didn't have much
> meaning with E2H=0 start impacting EL0 with E2H=1.
>
> This has a direct impact on the RESx behaviour of these bits, and
> we need a way to express them.
>
> For this purpose, introduce a set of 4 new constraints that, when
> the controlling feature is not present, force the RESx value to
> be either 0 or 1 depending on the value of E2H.
>
> This allows diverging RESx values depending on the value of E2H,
> something that is required by a bunch of SCTLR_EL2 bits.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/config.c | 24 +++++++++++++++++++++---
>  1 file changed, 21 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index 1990cebc77c66..7063fffc22799 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -26,6 +26,10 @@ struct reg_bits_to_feat_map {
>  #define        MASKS_POINTER   BIT(3)  /* Pointer to fgt_masks struct instead of bits */
>  #define        AS_RES1         BIT(4)  /* RES1 when not supported */
>  #define        REQUIRES_E2H1   BIT(5)  /* Add HCR_EL2.E2H RES1 as a pre-condition */
> +#define        RES0_WHEN_E2H0  BIT(6)  /* RES0 when E2H=0 and not supported */
> +#define        RES0_WHEN_E2H1  BIT(7)  /* RES0 when E2H=1 and not supported */
> +#define        RES1_WHEN_E2H0  BIT(8)  /* RES1 when E2H=0 and not supported */
> +#define        RES1_WHEN_E2H1  BIT(9)  /* RES1 when E2H=1 and not supported */
>
>         unsigned long   flags;
>
> @@ -1298,10 +1302,24 @@ struct resx compute_resx_bits(struct kvm *kvm,
>                         match &= !e2h0;
>
>                 if (!match) {
> +                       u64 bits = reg_feat_map_bits(&map[i]);
> +
> +                       if (e2h0) {
> +                               if      (map[i].flags & RES1_WHEN_E2H0)
> +                                       resx.res1 |= bits;
> +                               else if (map[i].flags & RES0_WHEN_E2H0)
> +                                       resx.res0 |= bits;
> +                       } else {
> +                               if      (map[i].flags & RES1_WHEN_E2H1)
> +                                       resx.res1 |= bits;
> +                               else if (map[i].flags & RES0_WHEN_E2H1)
> +                                       resx.res0 |= bits;
> +                       }
> +
>                         if (map[i].flags & AS_RES1)
> -                               resx.res1 |= reg_feat_map_bits(&map[i]);
> -                       else
> -                               resx.res0 |= reg_feat_map_bits(&map[i]);
> +                               resx.res1 |= bits;
> +                       else if (!(resx.res1 & bits))
> +                               resx.res0 |= bits;

The logic here feels a bit more complex than necessary, specifically
regarding the interaction between the E2H checks and the fallthrough
to AS_RES1.

Although AS_RES1 and RES0_WHEN_E2H0 are mutually exclusive in
practice, the current structure technically permits a scenario where
both res0 and res1 get set if the flags are mixed (the e2h0 block sets
res0, and the AS_RES1 block falls through and sets res1). This cannot
be ruled out by looking at this function alone.

It might be cleaner (and safer) to determine the res1 first, and
then apply the masks. Something like:

+                       bool is_res1 = false;
+
+                       if (map[i].flags & AS_RES1)
+                               is_res1 = true;
+                       else if (e2h0)
+                               is_res1 = (map[i].flags & RES1_WHEN_E2H0);
+                       else
+                               is_res1 = (map[i].flags & RES1_WHEN_E2H1);
...

This also brings up a side point: given the visual similarity of these
flags, it is quite easy to make a typo and accidentally combine
incompatible flags (e.g., AS_RES1 | RESx_WHEN_E2Hx, or RES0_WHEN_E2H0
| RES1_WHEN_E2H0), would it be worth adding a check to warn on
obviously invalid combinations?
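
To make that side point concrete, such a check might look like the
following standalone sketch (editor's illustration, not part of the
patch: the flag values are copied from the patch, resx_flags_valid()
is a hypothetical helper name, and the kernel's WARN machinery is
replaced by a plain return value so it compiles on its own):

```c
#include <stdbool.h>

#define BIT(n)          (1UL << (n))
#define AS_RES1         BIT(4)  /* RES1 when not supported */
#define RES0_WHEN_E2H0  BIT(6)  /* RES0 when E2H=0 and not supported */
#define RES0_WHEN_E2H1  BIT(7)  /* RES0 when E2H=1 and not supported */
#define RES1_WHEN_E2H0  BIT(8)  /* RES1 when E2H=0 and not supported */
#define RES1_WHEN_E2H1  BIT(9)  /* RES1 when E2H=1 and not supported */

/* Hypothetical helper: reject the "obviously invalid" combinations,
 * i.e. AS_RES1 mixed with any conditional RESx flag, or RES0 and
 * RES1 both requested for the same E2H value. */
static bool resx_flags_valid(unsigned long flags)
{
	unsigned long e2h_flags = RES0_WHEN_E2H0 | RES0_WHEN_E2H1 |
				  RES1_WHEN_E2H0 | RES1_WHEN_E2H1;

	if ((flags & AS_RES1) && (flags & e2h_flags))
		return false;
	if ((flags & RES0_WHEN_E2H0) && (flags & RES1_WHEN_E2H0))
		return false;
	if ((flags & RES0_WHEN_E2H1) && (flags & RES1_WHEN_E2H1))
		return false;
	return true;
}
```

A boot-time walker such as check_feat_map() could then warn on any
map entry for which this predicate returns false.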

Or maybe even redefining AS_RES1 to be
(RES1_WHEN_E2H1|RES1_WHEN_E2H0), which is what it is conceptually.
That could simplify this code even further:

+                       if (e2h0)
+                               is_res1 = (map[i].flags & RES1_WHEN_E2H0);
+                       else
+                               is_res1 = (map[i].flags & RES1_WHEN_E2H1);

What do you think?

Cheers,
/fuad




>                 }
>         }
>
> --
> 2.47.3
>


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH 12/20] KVM: arm64: Add RESx_WHEN_E2Hx constraints as configuration flags
  2026-01-28 17:43   ` Fuad Tabba
@ 2026-01-29 10:14     ` Marc Zyngier
  2026-01-29 10:30       ` Fuad Tabba
  0 siblings, 1 reply; 53+ messages in thread
From: Marc Zyngier @ 2026-01-29 10:14 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

Hey Fuad,

On Wed, 28 Jan 2026 17:43:40 +0000,
Fuad Tabba <tabba@google.com> wrote:
> 
> Hi Marc,
> 
> On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
> >
> > "Thanks" to VHE, SCTLR_EL2 radically changes shape depending on the
> > value of HCR_EL2.E2H, as a lot of the bits that didn't have much
> > meaning with E2H=0 start impacting EL0 with E2H=1.
> >
> > This has a direct impact on the RESx behaviour of these bits, and
> > we need a way to express them.
> >
> > For this purpose, introduce a set of 4 new constraints that, when
> > the controlling feature is not present, force the RESx value to
> > be either 0 or 1 depending on the value of E2H.
> >
> > This allows diverging RESx values depending on the value of E2H,
> > something that is required by a bunch of SCTLR_EL2 bits.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/kvm/config.c | 24 +++++++++++++++++++++---
> >  1 file changed, 21 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> > index 1990cebc77c66..7063fffc22799 100644
> > --- a/arch/arm64/kvm/config.c
> > +++ b/arch/arm64/kvm/config.c
> > @@ -26,6 +26,10 @@ struct reg_bits_to_feat_map {
> >  #define        MASKS_POINTER   BIT(3)  /* Pointer to fgt_masks struct instead of bits */
> >  #define        AS_RES1         BIT(4)  /* RES1 when not supported */
> >  #define        REQUIRES_E2H1   BIT(5)  /* Add HCR_EL2.E2H RES1 as a pre-condition */
> > +#define        RES0_WHEN_E2H0  BIT(6)  /* RES0 when E2H=0 and not supported */
> > +#define        RES0_WHEN_E2H1  BIT(7)  /* RES0 when E2H=1 and not supported */
> > +#define        RES1_WHEN_E2H0  BIT(8)  /* RES1 when E2H=0 and not supported */
> > +#define        RES1_WHEN_E2H1  BIT(9)  /* RES1 when E2H=1 and not supported */
> >
> >         unsigned long   flags;
> >
> > @@ -1298,10 +1302,24 @@ struct resx compute_resx_bits(struct kvm *kvm,
> >                         match &= !e2h0;
> >
> >                 if (!match) {
> > +                       u64 bits = reg_feat_map_bits(&map[i]);
> > +
> > +                       if (e2h0) {
> > +                               if      (map[i].flags & RES1_WHEN_E2H0)
> > +                                       resx.res1 |= bits;
> > +                               else if (map[i].flags & RES0_WHEN_E2H0)
> > +                                       resx.res0 |= bits;
> > +                       } else {
> > +                               if      (map[i].flags & RES1_WHEN_E2H1)
> > +                                       resx.res1 |= bits;
> > +                               else if (map[i].flags & RES0_WHEN_E2H1)
> > +                                       resx.res0 |= bits;
> > +                       }
> > +
> >                         if (map[i].flags & AS_RES1)
> > -                               resx.res1 |= reg_feat_map_bits(&map[i]);
> > -                       else
> > -                               resx.res0 |= reg_feat_map_bits(&map[i]);
> > +                               resx.res1 |= bits;
> > +                       else if (!(resx.res1 & bits))
> > +                               resx.res0 |= bits;
> 
> The logic here feels a bit more complex than necessary, specifically
> regarding the interaction between the E2H checks and the fallthrough
> to AS_RES1.
> 
> Although AS_RES1 and RES0_WHEN_E2H0 are mutually exclusive in
> practice, the current structure technically permits a scenario where
> both res0 and res1 get set if the flags are mixed (the e2h0 block sets
> res0, and the AS_RES1 block falls through and sets res1). This cannot
> be ruled out by looking at this function alone.
> 
>   It might be cleaner (and safer) to determine the res1 first, and
> then apply the masks. Something like:
> 
> +                       bool is_res1 = false;
> +
> +                       if (map[i].flags & AS_RES1)
> +                               is_res1 = true;
> +                       else if (e2h0)
> +                               is_res1 = (map[i].flags & RES1_WHEN_E2H0);
> +                       else
> +                               is_res1 = (map[i].flags & RES1_WHEN_E2H1);
> ...

I think you have just put your finger on something that escaped me so
far. You are totally right that the code as written today is ugly, and
the trick to work out that we need to account the bits as RES0 is
awful.

But it additionally outlines something else: since RES0 is an implicit
property (we don't specify a flag for it), RES0_WHEN_E2Hx could also
be implicit properties. I couldn't find an example where anything
would break. This would also avoid the combination with AS_RES1 by
construction.

> 
> This also brings up a side point: given the visual similarity of these
> flags, it is quite easy to make a typo and accidentally combine
> incompatible flags (e.g., AS_RES1 | RESx_WHEN_E2Hx, or RES0_WHEN_E2H0
> | RES1_WHEN_E2H0), would it be worth adding a check to warn on
> obviously invalid combinations?
> 
> Or maybe even redefining AS_RES1 to be
> (RES1_WHEN_E2H1|RES1_WHEN_E2H0), which is what it is conceptually.
> That could simplify this code even further:
> 
> +                       if (e2h0)
> +                               is_res1 = (map[i].flags & RES1_WHEN_E2H0);
> +                       else
> +                               is_res1 = (map[i].flags & RES1_WHEN_E2H1);

While that would work, I think this is a step too far. Eventually, we
should be able to sanitise things outside of NV, and RES1 should not
depend on E2H at all in this case.

I ended up with the following hack, completely untested (needs
renumbering, and the rest of SCTLR_EL2 repainted). Let me know what
you think.

Thanks,

	M.

diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index 562513a4683e2..204e5aeda4d24 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -26,8 +26,6 @@ struct reg_bits_to_feat_map {
 #define	MASKS_POINTER	BIT(3)	/* Pointer to fgt_masks struct instead of bits */
 #define	AS_RES1		BIT(4)	/* RES1 when not supported */
 #define	REQUIRES_E2H1	BIT(5)	/* Add HCR_EL2.E2H RES1 as a pre-condition */
-#define	RES0_WHEN_E2H0	BIT(6)	/* RES0 when E2H=0 and not supported */
-#define	RES0_WHEN_E2H1	BIT(7)	/* RES0 when E2H=1 and not supported */
 #define	RES1_WHEN_E2H0	BIT(8)	/* RES1 when E2H=0 and not supported */
 #define	RES1_WHEN_E2H1	BIT(9)	/* RES1 when E2H=1 and not supported */
 
@@ -1375,22 +1373,15 @@ struct resx compute_resx_bits(struct kvm *kvm,
 		
 		if (!match) {
 			u64 bits = reg_feat_map_bits(&map[i]);
+			bool res1;
 
-			if (e2h0) {
-				if      (map[i].flags & RES1_WHEN_E2H0)
-					resx.res1 |= bits;
-				else if (map[i].flags & RES0_WHEN_E2H0)
-					resx.res0 |= bits;
-			} else {
-				if      (map[i].flags & RES1_WHEN_E2H1)
-					resx.res1 |= bits;
-				else if (map[i].flags & RES0_WHEN_E2H1)
-					resx.res0 |= bits;
-			}
-
-			if (map[i].flags & AS_RES1)
+			res1  = (map[i].flags & AS_RES1);
+			res1 |= e2h0 && (map[i].flags & RES1_WHEN_E2H0);
+			res1 |= !e2h0 && (map[i].flags & RES1_WHEN_E2H1);
+
+			if (res1)
 				resx.res1 |= bits;
-			else if (!(resx.res1 & bits))
+			else
 				resx.res0 |= bits;
 		}
 	}

-- 
Without deviation from the norm, progress is not possible.


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* Re: [PATCH 12/20] KVM: arm64: Add RESx_WHEN_E2Hx constraints as configuration flags
  2026-01-29 10:14     ` Marc Zyngier
@ 2026-01-29 10:30       ` Fuad Tabba
  0 siblings, 0 replies; 53+ messages in thread
From: Fuad Tabba @ 2026-01-29 10:30 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

Hi Marc,

On Thu, 29 Jan 2026 at 10:14, Marc Zyngier <maz@kernel.org> wrote:
>
> Hey Fuad,
>
> On Wed, 28 Jan 2026 17:43:40 +0000,
> Fuad Tabba <tabba@google.com> wrote:
> >
> > Hi Marc,
> >
> > On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
> > >
> > > "Thanks" to VHE, SCTLR_EL2 radically changes shape depending on the
> > > value of HCR_EL2.E2H, as a lot of the bits that didn't have much
> > > meaning with E2H=0 start impacting EL0 with E2H=1.
> > >
> > > This has a direct impact on the RESx behaviour of these bits, and
> > > we need a way to express them.
> > >
> > > For this purpose, introduce a set of 4 new constraints that, when
> > > the controlling feature is not present, force the RESx value to
> > > be either 0 or 1 depending on the value of E2H.
> > >
> > > This allows diverging RESx values depending on the value of E2H,
> > > something that is required by a bunch of SCTLR_EL2 bits.
> > >
> > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > > ---
> > >  arch/arm64/kvm/config.c | 24 +++++++++++++++++++++---
> > >  1 file changed, 21 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> > > index 1990cebc77c66..7063fffc22799 100644
> > > --- a/arch/arm64/kvm/config.c
> > > +++ b/arch/arm64/kvm/config.c
> > > @@ -26,6 +26,10 @@ struct reg_bits_to_feat_map {
> > >  #define        MASKS_POINTER   BIT(3)  /* Pointer to fgt_masks struct instead of bits */
> > >  #define        AS_RES1         BIT(4)  /* RES1 when not supported */
> > >  #define        REQUIRES_E2H1   BIT(5)  /* Add HCR_EL2.E2H RES1 as a pre-condition */
> > > +#define        RES0_WHEN_E2H0  BIT(6)  /* RES0 when E2H=0 and not supported */
> > > +#define        RES0_WHEN_E2H1  BIT(7)  /* RES0 when E2H=1 and not supported */
> > > +#define        RES1_WHEN_E2H0  BIT(8)  /* RES1 when E2H=0 and not supported */
> > > +#define        RES1_WHEN_E2H1  BIT(9)  /* RES1 when E2H=1 and not supported */
> > >
> > >         unsigned long   flags;
> > >
> > > @@ -1298,10 +1302,24 @@ struct resx compute_resx_bits(struct kvm *kvm,
> > >                         match &= !e2h0;
> > >
> > >                 if (!match) {
> > > +                       u64 bits = reg_feat_map_bits(&map[i]);
> > > +
> > > +                       if (e2h0) {
> > > +                               if      (map[i].flags & RES1_WHEN_E2H0)
> > > +                                       resx.res1 |= bits;
> > > +                               else if (map[i].flags & RES0_WHEN_E2H0)
> > > +                                       resx.res0 |= bits;
> > > +                       } else {
> > > +                               if      (map[i].flags & RES1_WHEN_E2H1)
> > > +                                       resx.res1 |= bits;
> > > +                               else if (map[i].flags & RES0_WHEN_E2H1)
> > > +                                       resx.res0 |= bits;
> > > +                       }
> > > +
> > >                         if (map[i].flags & AS_RES1)
> > > -                               resx.res1 |= reg_feat_map_bits(&map[i]);
> > > -                       else
> > > -                               resx.res0 |= reg_feat_map_bits(&map[i]);
> > > +                               resx.res1 |= bits;
> > > +                       else if (!(resx.res1 & bits))
> > > +                               resx.res0 |= bits;
> >
> > The logic here feels a bit more complex than necessary, specifically
> > regarding the interaction between the E2H checks and the fallthrough
> > to AS_RES1.
> >
> > Although AS_RES1 and RES0_WHEN_E2H0 are mutually exclusive in
> > practice, the current structure technically permits a scenario where
> > both res0 and res1 get set if the flags are mixed (the e2h0 block sets
> > res0, and the AS_RES1 block falls through and sets res1). This cannot
> > be ruled out by looking at this function alone.
> >
> >   It might be cleaner (and safer) to determine the res1 first, and
> > then apply the masks. Something like:
> >
> > +                       bool is_res1 = false;
> > +
> > +                       if (map[i].flags & AS_RES1)
> > +                               is_res1 = true;
> > +                       else if (e2h0)
> > +                               is_res1 = (map[i].flags & RES1_WHEN_E2H0);
> > +                       else
> > +                               is_res1 = (map[i].flags & RES1_WHEN_E2H1);
> > ...
>
> I think you have just put your finger on something that escaped me so
> far. You are totally right that the code as written today is ugly, and
> the trick to work out that we need to account the bits as RES0 is
> awful.
>
> But it additionally outlines something else: since RES0 is an implicit
> property (we don't specify a flag for it), RES0_WHEN_E2Hx could also
> be implicit properties. I couldn't find an example where anything
> would break. This would also avoid the combination with AS_RES1 by
> construction.
>
> >
> > This also brings up a side point: given the visual similarity of these
> > flags, it is quite easy to make a typo and accidentally combine
> > incompatible flags (e.g., AS_RES1 | RESx_WHEN_E2Hx, or RES0_WHEN_E2H0
> > | RES1_WHEN_E2H0), would it be worth adding a check to warn on
> > obviously invalid combinations?
> >
> > Or maybe even redefining AS_RES1 to be
> > (RES1_WHEN_E2H1|RES1_WHEN_E2H0), which is what it is conceptually.
> > That could simplify this code even further:
> >
> > +                       if (e2h0)
> > +                               is_res1 = (map[i].flags & RES1_WHEN_E2H0);
> > +                       else
> > +                               is_res1 = (map[i].flags & RES1_WHEN_E2H1);
>
> While that would work, I think this is a step too far. Eventually, we
> should be able to sanitise things outside of NV, and RES1 should not
> depend on E2H at all in this case.
>
> I ended up with the following hack, completely untested (needs
> renumbering, and the rest of SCTLR_EL2 repainted). Let me know what
> you think.
>
> Thanks,
>
>         M.
>
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index 562513a4683e2..204e5aeda4d24 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -26,8 +26,6 @@ struct reg_bits_to_feat_map {
>  #define        MASKS_POINTER   BIT(3)  /* Pointer to fgt_masks struct instead of bits */
>  #define        AS_RES1         BIT(4)  /* RES1 when not supported */
>  #define        REQUIRES_E2H1   BIT(5)  /* Add HCR_EL2.E2H RES1 as a pre-condition */
> -#define        RES0_WHEN_E2H0  BIT(6)  /* RES0 when E2H=0 and not supported */
> -#define        RES0_WHEN_E2H1  BIT(7)  /* RES0 when E2H=1 and not supported */
>  #define        RES1_WHEN_E2H0  BIT(8)  /* RES1 when E2H=0 and not supported */
>  #define        RES1_WHEN_E2H1  BIT(9)  /* RES1 when E2H=1 and not supported */
>
> @@ -1375,22 +1373,15 @@ struct resx compute_resx_bits(struct kvm *kvm,
>
>                 if (!match) {
>                         u64 bits = reg_feat_map_bits(&map[i]);
> +                       bool res1;
>
> -                       if (e2h0) {
> -                               if      (map[i].flags & RES1_WHEN_E2H0)
> -                                       resx.res1 |= bits;
> -                               else if (map[i].flags & RES0_WHEN_E2H0)
> -                                       resx.res0 |= bits;
> -                       } else {
> -                               if      (map[i].flags & RES1_WHEN_E2H1)
> -                                       resx.res1 |= bits;
> -                               else if (map[i].flags & RES0_WHEN_E2H1)
> -                                       resx.res0 |= bits;
> -                       }
> -
> -                       if (map[i].flags & AS_RES1)
> +                       res1  = (map[i].flags & AS_RES1);
> +                       res1 |= e2h0 && (map[i].flags & RES1_WHEN_E2H0);
> +                       res1 |= !e2h0 && (map[i].flags & RES1_WHEN_E2H1);
> +
> +                       if (res1)
>                                 resx.res1 |= bits;
> -                       else if (!(resx.res1 & bits))
> +                       else
>                                 resx.res0 |= bits;
>                 }
>         }

LGTM. Treating RES0 as the implicit default simplifies the logic and
makes invalid combinations impossible by construction, which is what
we want. It is also easier to read.

Thanks,
/fuad

>
> --
> Without deviation from the norm, progress is not possible.


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH 13/20] KVM: arm64: Move RESx into individual register descriptors
  2026-01-26 12:16 ` [PATCH 13/20] KVM: arm64: Move RESx into individual register descriptors Marc Zyngier
@ 2026-01-29 16:29   ` Fuad Tabba
  2026-01-29 17:19     ` Marc Zyngier
  0 siblings, 1 reply; 53+ messages in thread
From: Fuad Tabba @ 2026-01-29 16:29 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

Hi Marc,

On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
>
> Instead of hacking the RES1 bits at runtime, move them into the
> register descriptors. This makes it significantly nicer.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/config.c | 36 +++++++++++++++++++++++++++++-------
>  1 file changed, 29 insertions(+), 7 deletions(-)
>
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index 7063fffc22799..d5871758f1fcc 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -30,6 +30,7 @@ struct reg_bits_to_feat_map {
>  #define        RES0_WHEN_E2H1  BIT(7)  /* RES0 when E2H=1 and not supported */
>  #define        RES1_WHEN_E2H0  BIT(8)  /* RES1 when E2H=0 and not supported */
>  #define        RES1_WHEN_E2H1  BIT(9)  /* RES1 when E2H=1 and not supported */
> +#define        FORCE_RESx      BIT(10) /* Unconditional RESx */
>
>         unsigned long   flags;
>
> @@ -107,6 +108,11 @@ struct reg_feat_map_desc {
>   */
>  #define NEEDS_FEAT(m, ...)     NEEDS_FEAT_FLAG(m, 0, __VA_ARGS__)
>
> +/* Declare fixed RESx bits */
> +#define FORCE_RES0(m)          NEEDS_FEAT_FLAG(m, FORCE_RESx, enforce_resx)
> +#define FORCE_RES1(m)          NEEDS_FEAT_FLAG(m, FORCE_RESx | AS_RES1, \
> +                                               enforce_resx)
> +
>  /*
>   * Declare the dependency between a non-FGT register, a set of
>   * feature, and the set of individual bits it contains. This generates

nit: features

> @@ -230,6 +236,15 @@ struct reg_feat_map_desc {
>  #define FEAT_HCX               ID_AA64MMFR1_EL1, HCX, IMP
>  #define FEAT_S2PIE             ID_AA64MMFR3_EL1, S2PIE, IMP
>
> +static bool enforce_resx(struct kvm *kvm)
> +{
> +       /*
> +        * Returning false here means that the RESx bits will be always
> +        * addded to the fixed set bit. Yes, this is counter-intuitive.

nit: added

> +        */
> +       return false;
> +}

I see what you're doing here, but it took me a while to get it and
convince myself that there aren't any bugs (my past self couldn't
find any, but I wouldn't trust him that much). You already introduce
a new flag, FORCE_RESx. Why not just check it directly in the
compute_resx_bits() loop, before the check for CALL_FUNC?

+ if (map[i].flags & FORCE_RESx)
+     match = false;
+ else if (map[i].flags & CALL_FUNC)
...

The way it is now, to understand FORCE_RES0, you must trace a flag, a
macro expansion, and a function pointer, just to set a boolean to
false.

Cheers,
/fuad


> +
>  static bool not_feat_aa64el3(struct kvm *kvm)
>  {
>         return !kvm_has_feat(kvm, FEAT_AA64EL3);
> @@ -1009,6 +1024,8 @@ static const struct reg_bits_to_feat_map hcr_feat_map[] = {
>                    HCR_EL2_TWEDEn,
>                    FEAT_TWED),
>         NEEDS_FEAT_FIXED(HCR_EL2_E2H, compute_hcr_e2h),
> +       FORCE_RES0(HCR_EL2_RES0),
> +       FORCE_RES1(HCR_EL2_RES1),
>  };
>
>  static const DECLARE_FEAT_MAP(hcr_desc, HCR_EL2,
> @@ -1029,6 +1046,8 @@ static const struct reg_bits_to_feat_map sctlr2_feat_map[] = {
>                    SCTLR2_EL1_CPTM      |
>                    SCTLR2_EL1_CPTM0,
>                    FEAT_CPA2),
> +       FORCE_RES0(SCTLR2_EL1_RES0),
> +       FORCE_RES1(SCTLR2_EL1_RES1),
>  };
>
>  static const DECLARE_FEAT_MAP(sctlr2_desc, SCTLR2_EL1,
> @@ -1054,6 +1073,8 @@ static const struct reg_bits_to_feat_map tcr2_el2_feat_map[] = {
>                    TCR2_EL2_E0POE,
>                    FEAT_S1POE),
>         NEEDS_FEAT(TCR2_EL2_PIE, FEAT_S1PIE),
> +       FORCE_RES0(TCR2_EL2_RES0),
> +       FORCE_RES1(TCR2_EL2_RES1),
>  };
>
>  static const DECLARE_FEAT_MAP(tcr2_el2_desc, TCR2_EL2,
> @@ -1131,6 +1152,8 @@ static const struct reg_bits_to_feat_map sctlr_el1_feat_map[] = {
>                    SCTLR_EL1_A          |
>                    SCTLR_EL1_M,
>                    FEAT_AA64EL1),
> +       FORCE_RES0(SCTLR_EL1_RES0),
> +       FORCE_RES1(SCTLR_EL1_RES1),
>  };
>
>  static const DECLARE_FEAT_MAP(sctlr_el1_desc, SCTLR_EL1,
> @@ -1165,6 +1188,8 @@ static const struct reg_bits_to_feat_map mdcr_el2_feat_map[] = {
>                    MDCR_EL2_TDE         |
>                    MDCR_EL2_TDRA,
>                    FEAT_AA64EL1),
> +       FORCE_RES0(MDCR_EL2_RES0),
> +       FORCE_RES1(MDCR_EL2_RES1),
>  };
>
>  static const DECLARE_FEAT_MAP(mdcr_el2_desc, MDCR_EL2,
> @@ -1203,6 +1228,8 @@ static const struct reg_bits_to_feat_map vtcr_el2_feat_map[] = {
>                    VTCR_EL2_SL0         |
>                    VTCR_EL2_T0SZ,
>                    FEAT_AA64EL1),
> +       FORCE_RES0(VTCR_EL2_RES0),
> +       FORCE_RES1(VTCR_EL2_RES1),
>  };
>
>  static const DECLARE_FEAT_MAP(vtcr_el2_desc, VTCR_EL2,
> @@ -1214,7 +1241,8 @@ static void __init check_feat_map(const struct reg_bits_to_feat_map *map,
>         u64 mask = 0;
>
>         for (int i = 0; i < map_size; i++)
> -               mask |= map[i].bits;
> +               if (!(map[i].flags & FORCE_RESx))
> +                       mask |= map[i].bits;
>
>         if (mask != ~resx)
>                 kvm_err("Undefined %s behaviour, bits %016llx\n",
> @@ -1447,28 +1475,22 @@ struct resx get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg)
>                 break;
>         case HCR_EL2:
>                 resx = compute_reg_resx_bits(kvm, &hcr_desc, 0, 0);
> -               resx.res1 |= HCR_EL2_RES1;
>                 break;
>         case SCTLR2_EL1:
>         case SCTLR2_EL2:
>                 resx = compute_reg_resx_bits(kvm, &sctlr2_desc, 0, 0);
> -               resx.res1 |= SCTLR2_EL1_RES1;
>                 break;
>         case TCR2_EL2:
>                 resx = compute_reg_resx_bits(kvm, &tcr2_el2_desc, 0, 0);
> -               resx.res1 |= TCR2_EL2_RES1;
>                 break;
>         case SCTLR_EL1:
>                 resx = compute_reg_resx_bits(kvm, &sctlr_el1_desc, 0, 0);
> -               resx.res1 |= SCTLR_EL1_RES1;
>                 break;
>         case MDCR_EL2:
>                 resx = compute_reg_resx_bits(kvm, &mdcr_el2_desc, 0, 0);
> -               resx.res1 |= MDCR_EL2_RES1;
>                 break;
>         case VTCR_EL2:
>                 resx = compute_reg_resx_bits(kvm, &vtcr_el2_desc, 0, 0);
> -               resx.res1 |= VTCR_EL2_RES1;
>                 break;
>         default:
>                 WARN_ON_ONCE(1);
> --
> 2.47.3
>


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH 14/20] KVM: arm64: Simplify handling of HCR_EL2.E2H RESx
  2026-01-26 12:16 ` [PATCH 14/20] KVM: arm64: Simplify handling of HCR_EL2.E2H RESx Marc Zyngier
@ 2026-01-29 16:41   ` Fuad Tabba
  0 siblings, 0 replies; 53+ messages in thread
From: Fuad Tabba @ 2026-01-29 16:41 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

Hi Marc,

On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
>
> Now that we can link the RESx behaviour with the value of HCR_EL2.E2H,
> we can trivially express the tautological constraint that makes E2H
> a reserved value at all times.
>
> Fun, isn't it?
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/config.c | 13 ++-----------
>  1 file changed, 2 insertions(+), 11 deletions(-)
>
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index d5871758f1fcc..187d047a9cf4a 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -394,16 +394,6 @@ static bool feat_vmid16(struct kvm *kvm)
>         return kvm_has_feat_enum(kvm, ID_AA64MMFR1_EL1, VMIDBits, 16);
>  }
>
> -static bool compute_hcr_e2h(struct kvm *kvm, struct resx *bits)
> -{
> -       if (kvm_has_feat(kvm, FEAT_E2H0))
> -               bits->res0 |= HCR_EL2_E2H;
> -       else
> -               bits->res1 |= HCR_EL2_E2H;
> -
> -       return true;
> -}
> -
>  static const struct reg_bits_to_feat_map hfgrtr_feat_map[] = {
>         NEEDS_FEAT(HFGRTR_EL2_nAMAIR2_EL1       |
>                    HFGRTR_EL2_nMAIR2_EL1,
> @@ -1023,7 +1013,8 @@ static const struct reg_bits_to_feat_map hcr_feat_map[] = {
>         NEEDS_FEAT(HCR_EL2_TWEDEL       |
>                    HCR_EL2_TWEDEn,
>                    FEAT_TWED),
> -       NEEDS_FEAT_FIXED(HCR_EL2_E2H, compute_hcr_e2h),
> +       NEEDS_FEAT_FLAG(HCR_EL2_E2H, RES0_WHEN_E2H0 | RES1_WHEN_E2H1,
> +                       enforce_resx),

If you were to take my suggestion for the previous patch, I think that
you could express this as follows:

    FORCE_RESx | RES0_WHEN_E2H0 | RES1_WHEN_E2H1

Or if you use the modified patch 12:

    FORCE_RESx | RES1_WHEN_E2H1

Either way:
Reviewed-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad

>         FORCE_RES0(HCR_EL2_RES0),
>         FORCE_RES1(HCR_EL2_RES1),
>  };


> --
> 2.47.3
>


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH 15/20] KVM: arm64: Get rid of FIXED_VALUE altogether
  2026-01-26 12:16 ` [PATCH 15/20] KVM: arm64: Get rid of FIXED_VALUE altogether Marc Zyngier
@ 2026-01-29 16:54   ` Fuad Tabba
  0 siblings, 0 replies; 53+ messages in thread
From: Fuad Tabba @ 2026-01-29 16:54 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
>
> We have now killed every occurrence of FIXED_VALUE, and we can therefore
> drop the whole infrastructure. Good riddance.

Indeed.

> Signed-off-by: Marc Zyngier <maz@kernel.org>

Reviewed-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad
> ---
>  arch/arm64/kvm/config.c | 24 +++---------------------
>  1 file changed, 3 insertions(+), 21 deletions(-)
>
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index 187d047a9cf4a..28e534f2850ea 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -22,7 +22,7 @@ struct reg_bits_to_feat_map {
>
>  #define        NEVER_FGU       BIT(0)  /* Can trap, but never UNDEF */
>  #define        CALL_FUNC       BIT(1)  /* Needs to evaluate tons of crap */
> -#define        FIXED_VALUE     BIT(2)  /* RAZ/WI or RAO/WI in KVM */
> +#define        FORCE_RESx      BIT(2)  /* Unconditional RESx */
>  #define        MASKS_POINTER   BIT(3)  /* Pointer to fgt_masks struct instead of bits */
>  #define        AS_RES1         BIT(4)  /* RES1 when not supported */
>  #define        REQUIRES_E2H1   BIT(5)  /* Add HCR_EL2.E2H RES1 as a pre-condition */
> @@ -30,7 +30,6 @@ struct reg_bits_to_feat_map {
>  #define        RES0_WHEN_E2H1  BIT(7)  /* RES0 when E2H=1 and not supported */
>  #define        RES1_WHEN_E2H0  BIT(8)  /* RES1 when E2H=0 and not supported */
>  #define        RES1_WHEN_E2H1  BIT(9)  /* RES1 when E2H=1 and not supported */
> -#define        FORCE_RESx      BIT(10) /* Unconditional RESx */
>
>         unsigned long   flags;
>
> @@ -43,7 +42,6 @@ struct reg_bits_to_feat_map {
>                         s8      lo_lim;
>                 };
>                 bool    (*match)(struct kvm *);
> -               bool    (*fval)(struct kvm *, struct resx *);
>         };
>  };
>
> @@ -76,13 +74,6 @@ struct reg_feat_map_desc {
>                 .lo_lim = id ##_## fld ##_## lim        \
>         }
>
> -#define __NEEDS_FEAT_2(m, f, w, fun, dummy)            \
> -       {                                               \
> -               .w      = (m),                          \
> -               .flags = (f) | CALL_FUNC,               \
> -               .fval = (fun),                          \
> -       }
> -
>  #define __NEEDS_FEAT_1(m, f, w, fun)                   \
>         {                                               \
>                 .w      = (m),                          \
> @@ -96,9 +87,6 @@ struct reg_feat_map_desc {
>  #define NEEDS_FEAT_FLAG(m, f, ...)                     \
>         __NEEDS_FEAT_FLAG(m, f, bits, __VA_ARGS__)
>
> -#define NEEDS_FEAT_FIXED(m, ...)                       \
> -       __NEEDS_FEAT_FLAG(m, FIXED_VALUE, bits, __VA_ARGS__, 0)
> -
>  #define NEEDS_FEAT_MASKS(p, ...)                               \
>         __NEEDS_FEAT_FLAG(p, MASKS_POINTER, masks, __VA_ARGS__)
>
> @@ -1306,16 +1294,10 @@ struct resx compute_resx_bits(struct kvm *kvm,
>                 if (map[i].flags & exclude)
>                         continue;
>
> -               switch (map[i].flags & (CALL_FUNC | FIXED_VALUE)) {
> -               case CALL_FUNC | FIXED_VALUE:
> -                       map[i].fval(kvm, &resx);
> -                       continue;
> -               case CALL_FUNC:
> +               if (map[i].flags & CALL_FUNC)
>                         match = map[i].match(kvm);
> -                       break;
> -               default:
> +               else
>                         match = idreg_feat_match(kvm, &map[i]);
> -               }
>
>                 if (map[i].flags & REQUIRES_E2H1)
>                         match &= !e2h0;
> --
> 2.47.3
>


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH 13/20] KVM: arm64: Move RESx into individual register descriptors
  2026-01-29 16:29   ` Fuad Tabba
@ 2026-01-29 17:19     ` Marc Zyngier
  2026-01-29 17:39       ` Fuad Tabba
  2026-01-29 18:07       ` Marc Zyngier
  0 siblings, 2 replies; 53+ messages in thread
From: Marc Zyngier @ 2026-01-29 17:19 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Thu, 29 Jan 2026 16:29:39 +0000,
Fuad Tabba <tabba@google.com> wrote:
> 
> Hi Marc,
> 
> On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
> >
> > Instead of hacking the RES1 bits at runtime, move them into the
> > register descriptors. This makes it significantly nicer.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/kvm/config.c | 36 +++++++++++++++++++++++++++++-------
> >  1 file changed, 29 insertions(+), 7 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> > index 7063fffc22799..d5871758f1fcc 100644
> > --- a/arch/arm64/kvm/config.c
> > +++ b/arch/arm64/kvm/config.c
> > @@ -30,6 +30,7 @@ struct reg_bits_to_feat_map {
> >  #define        RES0_WHEN_E2H1  BIT(7)  /* RES0 when E2H=1 and not supported */
> >  #define        RES1_WHEN_E2H0  BIT(8)  /* RES1 when E2H=0 and not supported */
> >  #define        RES1_WHEN_E2H1  BIT(9)  /* RES1 when E2H=1 and not supported */
> > +#define        FORCE_RESx      BIT(10) /* Unconditional RESx */
> >
> >         unsigned long   flags;
> >
> > @@ -107,6 +108,11 @@ struct reg_feat_map_desc {
> >   */
> >  #define NEEDS_FEAT(m, ...)     NEEDS_FEAT_FLAG(m, 0, __VA_ARGS__)
> >
> > +/* Declare fixed RESx bits */
> > +#define FORCE_RES0(m)          NEEDS_FEAT_FLAG(m, FORCE_RESx, enforce_resx)
> > +#define FORCE_RES1(m)          NEEDS_FEAT_FLAG(m, FORCE_RESx | AS_RES1, \
> > +                                               enforce_resx)
> > +
> >  /*
> >   * Declare the dependency between a non-FGT register, a set of
> >   * feature, and the set of individual bits it contains. This generates
> 
> nit: features
> 
> > @@ -230,6 +236,15 @@ struct reg_feat_map_desc {
> >  #define FEAT_HCX               ID_AA64MMFR1_EL1, HCX, IMP
> >  #define FEAT_S2PIE             ID_AA64MMFR3_EL1, S2PIE, IMP
> >
> > +static bool enforce_resx(struct kvm *kvm)
> > +{
> > +       /*
> > +        * Returning false here means that the RESx bits will be always
> > +        * addded to the fixed set bit. Yes, this is counter-intuitive.
> 
> nit: added
> 
> > +        */
> > +       return false;
> > +}
> 
> I see what you're doing here, but it took me a while to get it and
> convince myself that there aren't any bugs (my self couldn't find any
> bugs, but I wouldn't trust him that much). You already introduce a new
> flag, FORCE_RESx. Why not just check that directly in the
> compute_resx_bits() loop, before the check for CALL_FUNC?
> 
> + if (map[i].flags & FORCE_RESx)
> +     match = false;
> + else if (map[i].flags & CALL_FUNC)
> ...
> 
> The way it is now, to understand FORCE_RES0, you must trace a flag, a
> macro expansion, and a function pointer, just to set a boolean to
> false.

With that scheme, you'd write something like:

+#define FORCE_RES0(m)          NEEDS_FEAT_FLAG(m, FORCE_RESx)

This construct would need a new __NEEDS_FEAT_0() macro that doesn't
take any argument other than flags. Something like below (untested).

	M.

diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index 9485e1f2dc0b7..364bdd1e5be51 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -79,6 +79,12 @@ struct reg_feat_map_desc {
 		.match = (fun),				\
 	}
 
+#define __NEEDS_FEAT_0(m, f, w, ...)			\
+	{						\
+		.w	= (m),				\
+		.flags = (f),				\
+	}
+
 #define __NEEDS_FEAT_FLAG(m, f, w, ...)			\
 	CONCATENATE(__NEEDS_FEAT_, COUNT_ARGS(__VA_ARGS__))(m, f, w, __VA_ARGS__)
 
@@ -95,9 +101,8 @@ struct reg_feat_map_desc {
 #define NEEDS_FEAT(m, ...)	NEEDS_FEAT_FLAG(m, 0, __VA_ARGS__)
 
 /* Declare fixed RESx bits */
-#define FORCE_RES0(m)		NEEDS_FEAT_FLAG(m, FORCE_RESx, enforce_resx)
-#define FORCE_RES1(m)		NEEDS_FEAT_FLAG(m, FORCE_RESx | AS_RES1, \
-						enforce_resx)
+#define FORCE_RES0(m)		NEEDS_FEAT_FLAG(m, FORCE_RESx)
+#define FORCE_RES1(m)		NEEDS_FEAT_FLAG(m, FORCE_RESx | AS_RES1)
 
 /*
  * Declare the dependency between a non-FGT register, a set of
@@ -221,15 +226,6 @@ struct reg_feat_map_desc {
 #define FEAT_HCX		ID_AA64MMFR1_EL1, HCX, IMP
 #define FEAT_S2PIE		ID_AA64MMFR3_EL1, S2PIE, IMP
 
-static bool enforce_resx(struct kvm *kvm)
-{
-	/*
-	 * Returning false here means that the RESx bits will be always
-	 * addded to the fixed set bit. Yes, this is counter-intuitive.
-	 */
-	return false;
-}
-
 static bool not_feat_aa64el3(struct kvm *kvm)
 {
 	return !kvm_has_feat(kvm, FEAT_AA64EL3);
@@ -996,7 +992,7 @@ static const struct reg_bits_to_feat_map hcr_feat_map[] = {
 	NEEDS_FEAT(HCR_EL2_TWEDEL	|
 		   HCR_EL2_TWEDEn,
 		   FEAT_TWED),
-	NEEDS_FEAT_FLAG(HCR_EL2_E2H, RES1_WHEN_E2H1, enforce_resx),
+	NEEDS_FEAT_FLAG(HCR_EL2_E2H, RES1_WHEN_E2H1 | FORCE_RESx),
 	FORCE_RES0(HCR_EL2_RES0),
 	FORCE_RES1(HCR_EL2_RES1),
 };
@@ -1362,7 +1358,9 @@ struct resx compute_resx_bits(struct kvm *kvm,
 		if (map[i].flags & exclude)
 			continue;
 
-		if (map[i].flags & CALL_FUNC)
+		if (map[i].flags & FORCE_RESx)
+			match = false;
+		else if (map[i].flags & CALL_FUNC)
 			match = map[i].match(kvm);
 		else
 			match = idreg_feat_match(kvm, &map[i]);

-- 
Without deviation from the norm, progress is not possible.


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* Re: [PATCH 16/20] KVM: arm64: Simplify handling of full register invalid constraint
  2026-01-26 12:16 ` [PATCH 16/20] KVM: arm64: Simplify handling of full register invalid constraint Marc Zyngier
@ 2026-01-29 17:34   ` Fuad Tabba
  0 siblings, 0 replies; 53+ messages in thread
From: Fuad Tabba @ 2026-01-29 17:34 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
>
> Now that we embed the RESx bits in the register description, it becomes
> easier to deal with registers that are simply not valid, as their
> existence is not satisfied by the configuration (SCTLR2_ELx without
> FEAT_SCTLR2, for example). Such registers essentially become RES0 for
> any bit that wasn't already advertised as RESx.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>

Reviewed-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad


> ---
>  arch/arm64/kvm/config.c | 15 +++++++++------
>  1 file changed, 9 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index 28e534f2850ea..0c037742215ac 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -1332,7 +1332,7 @@ struct resx compute_reg_resx_bits(struct kvm *kvm,
>                                  const struct reg_feat_map_desc *r,
>                                  unsigned long require, unsigned long exclude)
>  {
> -       struct resx resx, tmp;
> +       struct resx resx;
>
>         resx = compute_resx_bits(kvm, r->bit_feat_map, r->bit_feat_map_sz,
>                                  require, exclude);
> @@ -1342,11 +1342,14 @@ struct resx compute_reg_resx_bits(struct kvm *kvm,
>                 resx.res1 |= r->feat_map.masks->res1;
>         }
>
> -       tmp = compute_resx_bits(kvm, &r->feat_map, 1, require, exclude);
> -
> -       resx.res0 |= tmp.res0;
> -       resx.res0 |= ~reg_feat_map_bits(&r->feat_map);
> -       resx.res1 |= tmp.res1;
> +       /*
> +        * If the register itself was not valid, all the non-RESx bits are
> +        * now considered RES0 (this matches the behaviour of registers such
> +        * as SCTLR2 and TCR2). Weed out any potential (though unlikely)
> +        * overlap with RES1 bits coming from the previous computation.
> +        */
> +       resx.res0 |= compute_resx_bits(kvm, &r->feat_map, 1, require, exclude).res0;
> +       resx.res1 &= ~resx.res0;
>
>         return resx;
>  }
> --
> 2.47.3
>
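
[Editor's note: the res0/res1 accumulation that compute_resx_bits() performs in the hunks above can be sketched in isolation. Types and names below are illustrative only, not the kernel's actual structures; the E2H conditioning and FGU handling are deliberately omitted.]

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Simplified model: a bit whose controlling feature is absent becomes
 * RES0, unless the entry is flagged AS_RES1, in which case it becomes
 * RES1. Bits whose feature is present stay writable.
 */
#define AS_RES1	(1UL << 4)

struct resx { uint64_t res0, res1; };

struct map_entry {
	uint64_t	bits;
	unsigned long	flags;
	bool		present;	/* stand-in for the feature match */
};

static struct resx compute_resx(const struct map_entry *map, int n)
{
	struct resx r = { 0, 0 };

	for (int i = 0; i < n; i++) {
		if (map[i].present)	/* feature present: bits are valid */
			continue;
		if (map[i].flags & AS_RES1)
			r.res1 |= map[i].bits;
		else
			r.res0 |= map[i].bits;
	}
	return r;
}
```

A FORCE_RES0/FORCE_RES1 entry is then just one whose match is unconditionally false, which is what the discussion above simplifies into a direct flag test.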


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH 13/20] KVM: arm64: Move RESx into individual register descriptors
  2026-01-29 17:19     ` Marc Zyngier
@ 2026-01-29 17:39       ` Fuad Tabba
  2026-01-29 18:07       ` Marc Zyngier
  1 sibling, 0 replies; 53+ messages in thread
From: Fuad Tabba @ 2026-01-29 17:39 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

...
> > I see what you're doing here, but it took me a while to get it and
> > convince myself that there aren't any bugs (my self couldn't find any
> > bugs, but I wouldn't trust him that much). You already introduce a new
> > flag, FORCE_RESx. Why not just check that directly in the
> > compute_resx_bits() loop, before the check for CALL_FUNC?
> >
> > + if (map[i].flags & FORCE_RESx)
> > +     match = false;
> > + else if (map[i].flags & CALL_FUNC)
> > ...
> >
> > The way it is now, to understand FORCE_RES0, you must trace a flag, a
> > macro expansion, and a function pointer, just to set a boolean to
> > false.
>
> With that scheme, you'd write something like:
>
> +#define FORCE_RES0(m)          NEEDS_FEAT_FLAG(m, FORCE_RESx)
>
> This construct would need a new __NEEDS_FEAT_0() macro that doesn't
> take any argument other than flags. Something like below (untested).
>
>         M.

LGTM. Not tested either. I plan to test the series once I'm done reviewing it.

Thanks,
/fuad

>
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index 9485e1f2dc0b7..364bdd1e5be51 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -79,6 +79,12 @@ struct reg_feat_map_desc {
>                 .match = (fun),                         \
>         }
>
> +#define __NEEDS_FEAT_0(m, f, w, ...)                   \
> +       {                                               \
> +               .w      = (m),                          \
> +               .flags = (f),                           \
> +       }
> +
>  #define __NEEDS_FEAT_FLAG(m, f, w, ...)                        \
>         CONCATENATE(__NEEDS_FEAT_, COUNT_ARGS(__VA_ARGS__))(m, f, w, __VA_ARGS__)
>
> @@ -95,9 +101,8 @@ struct reg_feat_map_desc {
>  #define NEEDS_FEAT(m, ...)     NEEDS_FEAT_FLAG(m, 0, __VA_ARGS__)
>
>  /* Declare fixed RESx bits */
> -#define FORCE_RES0(m)          NEEDS_FEAT_FLAG(m, FORCE_RESx, enforce_resx)
> -#define FORCE_RES1(m)          NEEDS_FEAT_FLAG(m, FORCE_RESx | AS_RES1, \
> -                                               enforce_resx)
> +#define FORCE_RES0(m)          NEEDS_FEAT_FLAG(m, FORCE_RESx)
> +#define FORCE_RES1(m)          NEEDS_FEAT_FLAG(m, FORCE_RESx | AS_RES1)
>
>  /*
>   * Declare the dependency between a non-FGT register, a set of
> @@ -221,15 +226,6 @@ struct reg_feat_map_desc {
>  #define FEAT_HCX               ID_AA64MMFR1_EL1, HCX, IMP
>  #define FEAT_S2PIE             ID_AA64MMFR3_EL1, S2PIE, IMP
>
> -static bool enforce_resx(struct kvm *kvm)
> -{
> -       /*
> -        * Returning false here means that the RESx bits will be always
> -        * addded to the fixed set bit. Yes, this is counter-intuitive.
> -        */
> -       return false;
> -}
> -
>  static bool not_feat_aa64el3(struct kvm *kvm)
>  {
>         return !kvm_has_feat(kvm, FEAT_AA64EL3);
> @@ -996,7 +992,7 @@ static const struct reg_bits_to_feat_map hcr_feat_map[] = {
>         NEEDS_FEAT(HCR_EL2_TWEDEL       |
>                    HCR_EL2_TWEDEn,
>                    FEAT_TWED),
> -       NEEDS_FEAT_FLAG(HCR_EL2_E2H, RES1_WHEN_E2H1, enforce_resx),
> +       NEEDS_FEAT_FLAG(HCR_EL2_E2H, RES1_WHEN_E2H1 | FORCE_RESx),
>         FORCE_RES0(HCR_EL2_RES0),
>         FORCE_RES1(HCR_EL2_RES1),
>  };
> @@ -1362,7 +1358,9 @@ struct resx compute_resx_bits(struct kvm *kvm,
>                 if (map[i].flags & exclude)
>                         continue;
>
> -               if (map[i].flags & CALL_FUNC)
> +               if (map[i].flags & FORCE_RESx)
> +                       match = false;
> +               else if (map[i].flags & CALL_FUNC)
>                         match = map[i].match(kvm);
>                 else
>                         match = idreg_feat_match(kvm, &map[i]);
>
> --
> Without deviation from the norm, progress is not possible.


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH 17/20] KVM: arm64: Remove all traces of FEAT_TME
  2026-01-26 12:16 ` [PATCH 17/20] KVM: arm64: Remove all traces of FEAT_TME Marc Zyngier
@ 2026-01-29 17:43   ` Fuad Tabba
  0 siblings, 0 replies; 53+ messages in thread
From: Fuad Tabba @ 2026-01-29 17:43 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
>
> FEAT_TME has been dropped from the architecture. Retrospectively.
> I'm sure someone is crying somewhere, but most of us won't.

:'-(

Please don't do a web search for my name and "Transactional Memory".

> Clean-up time.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>

I checked and it is indeed withdrawn. So with a heavy heart:

Reviewed-by: Fuad Tabba <tabba@google.com>

RIP,
/fuad







> ---
>  arch/arm64/kvm/config.c                         |  7 -------
>  arch/arm64/kvm/nested.c                         |  5 -----
>  arch/arm64/tools/sysreg                         | 12 +++---------
>  tools/perf/Documentation/perf-arm-spe.txt       |  1 -
>  tools/testing/selftests/kvm/arm64/set_id_regs.c |  1 -
>  5 files changed, 3 insertions(+), 23 deletions(-)
>
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index 0c037742215ac..f892098b70c0b 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -184,7 +184,6 @@ struct reg_feat_map_desc {
>  #define FEAT_RME               ID_AA64PFR0_EL1, RME, IMP
>  #define FEAT_MPAM              ID_AA64PFR0_EL1, MPAM, 1
>  #define FEAT_S2FWB             ID_AA64MMFR2_EL1, FWB, IMP
> -#define FEAT_TME               ID_AA64ISAR0_EL1, TME, IMP
>  #define FEAT_TWED              ID_AA64MMFR1_EL1, TWED, IMP
>  #define FEAT_E2H0              ID_AA64MMFR4_EL1, E2H0, IMP
>  #define FEAT_SRMASK            ID_AA64MMFR4_EL1, SRMASK, IMP
> @@ -997,7 +996,6 @@ static const struct reg_bits_to_feat_map hcr_feat_map[] = {
>         NEEDS_FEAT(HCR_EL2_FIEN, feat_rasv1p1),
>         NEEDS_FEAT(HCR_EL2_GPF, FEAT_RME),
>         NEEDS_FEAT(HCR_EL2_FWB, FEAT_S2FWB),
> -       NEEDS_FEAT(HCR_EL2_TME, FEAT_TME),
>         NEEDS_FEAT(HCR_EL2_TWEDEL       |
>                    HCR_EL2_TWEDEn,
>                    FEAT_TWED),
> @@ -1109,11 +1107,6 @@ static const struct reg_bits_to_feat_map sctlr_el1_feat_map[] = {
>         NEEDS_FEAT(SCTLR_EL1_EnRCTX, FEAT_SPECRES),
>         NEEDS_FEAT(SCTLR_EL1_DSSBS, FEAT_SSBS),
>         NEEDS_FEAT(SCTLR_EL1_TIDCP, FEAT_TIDCP1),
> -       NEEDS_FEAT(SCTLR_EL1_TME0       |
> -                  SCTLR_EL1_TME        |
> -                  SCTLR_EL1_TMT0       |
> -                  SCTLR_EL1_TMT,
> -                  FEAT_TME),
>         NEEDS_FEAT(SCTLR_EL1_TWEDEL     |
>                    SCTLR_EL1_TWEDEn,
>                    FEAT_TWED),
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index 75a23f1c56d13..96e899dbd9192 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -1505,11 +1505,6 @@ u64 limit_nv_id_reg(struct kvm *kvm, u32 reg, u64 val)
>         u64 orig_val = val;
>
>         switch (reg) {
> -       case SYS_ID_AA64ISAR0_EL1:
> -               /* Support everything but TME */
> -               val &= ~ID_AA64ISAR0_EL1_TME;
> -               break;
> -
>         case SYS_ID_AA64ISAR1_EL1:
>                 /* Support everything but LS64 and Spec Invalidation */
>                 val &= ~(ID_AA64ISAR1_EL1_LS64  |
> diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
> index 969a75615d612..650d7d477087e 100644
> --- a/arch/arm64/tools/sysreg
> +++ b/arch/arm64/tools/sysreg
> @@ -1856,10 +1856,7 @@ UnsignedEnum     31:28   RDM
>         0b0000  NI
>         0b0001  IMP
>  EndEnum
> -UnsignedEnum   27:24   TME
> -       0b0000  NI
> -       0b0001  IMP
> -EndEnum
> +Res0   27:24
>  UnsignedEnum   23:20   ATOMIC
>         0b0000  NI
>         0b0010  IMP
> @@ -2432,10 +2429,7 @@ Field    57      EPAN
>  Field  56      EnALS
>  Field  55      EnAS0
>  Field  54      EnASR
> -Field  53      TME
> -Field  52      TME0
> -Field  51      TMT
> -Field  50      TMT0
> +Res0   53:50
>  Field  49:46   TWEDEL
>  Field  45      TWEDEn
>  Field  44      DSSBS
> @@ -3840,7 +3834,7 @@ Field     43      NV1
>  Field  42      NV
>  Field  41      API
>  Field  40      APK
> -Field  39      TME
> +Res0   39
>  Field  38      MIOCNCE
>  Field  37      TEA
>  Field  36      TERR
> diff --git a/tools/perf/Documentation/perf-arm-spe.txt b/tools/perf/Documentation/perf-arm-spe.txt
> index 8b02e5b983fa9..201a82bec0de4 100644
> --- a/tools/perf/Documentation/perf-arm-spe.txt
> +++ b/tools/perf/Documentation/perf-arm-spe.txt
> @@ -176,7 +176,6 @@ and inv_event_filter are:
>    bit 10    - Remote access (FEAT_SPEv1p4)
>    bit 11    - Misaligned access (FEAT_SPEv1p1)
>    bit 12-15 - IMPLEMENTATION DEFINED events (when implemented)
> -  bit 16    - Transaction (FEAT_TME)
>    bit 17    - Partial or empty SME or SVE predicate (FEAT_SPEv1p1)
>    bit 18    - Empty SME or SVE predicate (FEAT_SPEv1p1)
>    bit 19    - L2D access (FEAT_SPEv1p4)
> diff --git a/tools/testing/selftests/kvm/arm64/set_id_regs.c b/tools/testing/selftests/kvm/arm64/set_id_regs.c
> index c4815d3658167..73de5be58bab0 100644
> --- a/tools/testing/selftests/kvm/arm64/set_id_regs.c
> +++ b/tools/testing/selftests/kvm/arm64/set_id_regs.c
> @@ -91,7 +91,6 @@ static const struct reg_ftr_bits ftr_id_aa64isar0_el1[] = {
>         REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, SM3, 0),
>         REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, SHA3, 0),
>         REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, RDM, 0),
> -       REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, TME, 0),
>         REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, ATOMIC, 0),
>         REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, CRC32, 0),
>         REG_FTR_BITS(FTR_LOWER_SAFE, ID_AA64ISAR0_EL1, SHA2, 0),
> --
> 2.47.3
>


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH 18/20] KVM: arm64: Remove all traces of HCR_EL2.MIOCNCE
  2026-01-26 12:16 ` [PATCH 18/20] KVM: arm64: Remove all traces of HCR_EL2.MIOCNCE Marc Zyngier
@ 2026-01-29 17:51   ` Fuad Tabba
  0 siblings, 0 replies; 53+ messages in thread
From: Fuad Tabba @ 2026-01-29 17:51 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
>
> MIOCNCE had the potential to eat your data, and also was never
> implemented by anyone. It's been retrospectively removed from
> the architecture, and we're happy to follow that lead.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>

The field HCR_EL2.MIOCNCE is deprecated and made RES0.

Reviewed-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad






> ---
>  arch/arm64/kvm/config.c | 1 -
>  arch/arm64/tools/sysreg | 3 +--
>  2 files changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index f892098b70c0b..eebafb90bcf62 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -944,7 +944,6 @@ static const struct reg_bits_to_feat_map hcr_feat_map[] = {
>                    HCR_EL2_FMO          |
>                    HCR_EL2_ID           |
>                    HCR_EL2_IMO          |
> -                  HCR_EL2_MIOCNCE      |
>                    HCR_EL2_PTW          |
>                    HCR_EL2_SWIO         |
>                    HCR_EL2_TACR         |
> diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
> index 650d7d477087e..724e6ad966c20 100644
> --- a/arch/arm64/tools/sysreg
> +++ b/arch/arm64/tools/sysreg
> @@ -3834,8 +3834,7 @@ Field     43      NV1
>  Field  42      NV
>  Field  41      API
>  Field  40      APK
> -Res0   39
> -Field  38      MIOCNCE
> +Res0   39:38
>  Field  37      TEA
>  Field  36      TERR
>  Field  35      TLOR
> --
> 2.47.3
>


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH 13/20] KVM: arm64: Move RESx into individual register descriptors
  2026-01-29 17:19     ` Marc Zyngier
  2026-01-29 17:39       ` Fuad Tabba
@ 2026-01-29 18:07       ` Marc Zyngier
  2026-01-29 18:13         ` Fuad Tabba
  1 sibling, 1 reply; 53+ messages in thread
From: Marc Zyngier @ 2026-01-29 18:07 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Thu, 29 Jan 2026 17:19:55 +0000,
Marc Zyngier <maz@kernel.org> wrote:
> 
> On Thu, 29 Jan 2026 16:29:39 +0000,
> Fuad Tabba <tabba@google.com> wrote:
> > 
> > Hi Marc,
> > 
> > On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
> > >
> > > Instead of hacking the RES1 bits at runtime, move them into the
> > > register descriptors. This makes it significantly nicer.
> > >
> > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > > ---
> > >  arch/arm64/kvm/config.c | 36 +++++++++++++++++++++++++++++-------
> > >  1 file changed, 29 insertions(+), 7 deletions(-)
> > >
> > > diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> > > index 7063fffc22799..d5871758f1fcc 100644
> > > --- a/arch/arm64/kvm/config.c
> > > +++ b/arch/arm64/kvm/config.c
> > > @@ -30,6 +30,7 @@ struct reg_bits_to_feat_map {
> > >  #define        RES0_WHEN_E2H1  BIT(7)  /* RES0 when E2H=1 and not supported */
> > >  #define        RES1_WHEN_E2H0  BIT(8)  /* RES1 when E2H=0 and not supported */
> > >  #define        RES1_WHEN_E2H1  BIT(9)  /* RES1 when E2H=1 and not supported */
> > > +#define        FORCE_RESx      BIT(10) /* Unconditional RESx */
> > >
> > >         unsigned long   flags;
> > >
> > > @@ -107,6 +108,11 @@ struct reg_feat_map_desc {
> > >   */
> > >  #define NEEDS_FEAT(m, ...)     NEEDS_FEAT_FLAG(m, 0, __VA_ARGS__)
> > >
> > > +/* Declare fixed RESx bits */
> > > +#define FORCE_RES0(m)          NEEDS_FEAT_FLAG(m, FORCE_RESx, enforce_resx)
> > > +#define FORCE_RES1(m)          NEEDS_FEAT_FLAG(m, FORCE_RESx | AS_RES1, \
> > > +                                               enforce_resx)
> > > +
> > >  /*
> > >   * Declare the dependency between a non-FGT register, a set of
> > >   * feature, and the set of individual bits it contains. This generates
> > 
> > nit: features
> > 
> > > @@ -230,6 +236,15 @@ struct reg_feat_map_desc {
> > >  #define FEAT_HCX               ID_AA64MMFR1_EL1, HCX, IMP
> > >  #define FEAT_S2PIE             ID_AA64MMFR3_EL1, S2PIE, IMP
> > >
> > > +static bool enforce_resx(struct kvm *kvm)
> > > +{
> > > +       /*
> > > +        * Returning false here means that the RESx bits will be always
> > > +        * addded to the fixed set bit. Yes, this is counter-intuitive.
> > 
> > nit: added
> > 
> > > +        */
> > > +       return false;
> > > +}
> > 
> > I see what you're doing here, but it took me a while to get it and
> > convince myself that there aren't any bugs (my self couldn't find any
> > bugs, but I wouldn't trust him that much). You already introduce a new
> > flag, FORCE_RESx. Why not just check that directly in the
> > compute_resx_bits() loop, before the check for CALL_FUNC?
> > 
> > + if (map[i].flags & FORCE_RESx)
> > +     match = false;
> > + else if (map[i].flags & CALL_FUNC)
> > ...
> > 
> > The way it is now, to understand FORCE_RES0, you must trace a flag, a
> > macro expansion, and a function pointer, just to set a boolean to
> > false.
> 
> With that scheme, you'd write something like:
> 
> +#define FORCE_RES0(m)          NEEDS_FEAT_FLAG(m, FORCE_RESx)
> 
> This construct would need a new __NEEDS_FEAT_0() macro that doesn't
> take any argument other than flags. Something like below (untested).
> 
> 	M.
> 
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index 9485e1f2dc0b7..364bdd1e5be51 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -79,6 +79,12 @@ struct reg_feat_map_desc {
>  		.match = (fun),				\
>  	}
>  
> +#define __NEEDS_FEAT_0(m, f, w, ...)			\
> +	{						\
> +		.w	= (m),				\
> +		.flags = (f),				\
> +	}
> +
>  #define __NEEDS_FEAT_FLAG(m, f, w, ...)			\
>  	CONCATENATE(__NEEDS_FEAT_, COUNT_ARGS(__VA_ARGS__))(m, f, w, __VA_ARGS__)
>  
> @@ -95,9 +101,8 @@ struct reg_feat_map_desc {
>  #define NEEDS_FEAT(m, ...)	NEEDS_FEAT_FLAG(m, 0, __VA_ARGS__)
>  
>  /* Declare fixed RESx bits */
> -#define FORCE_RES0(m)		NEEDS_FEAT_FLAG(m, FORCE_RESx, enforce_resx)
> -#define FORCE_RES1(m)		NEEDS_FEAT_FLAG(m, FORCE_RESx | AS_RES1, \
> -						enforce_resx)
> +#define FORCE_RES0(m)		NEEDS_FEAT_FLAG(m, FORCE_RESx)
> +#define FORCE_RES1(m)		NEEDS_FEAT_FLAG(m, FORCE_RESx | AS_RES1)
>  
>  /*
>   * Declare the dependency between a non-FGT register, a set of
> @@ -221,15 +226,6 @@ struct reg_feat_map_desc {
>  #define FEAT_HCX		ID_AA64MMFR1_EL1, HCX, IMP
>  #define FEAT_S2PIE		ID_AA64MMFR3_EL1, S2PIE, IMP
>  
> -static bool enforce_resx(struct kvm *kvm)
> -{
> -	/*
> -	 * Returning false here means that the RESx bits will be always
> -	 * addded to the fixed set bit. Yes, this is counter-intuitive.
> -	 */
> -	return false;
> -}
> -
>  static bool not_feat_aa64el3(struct kvm *kvm)
>  {
>  	return !kvm_has_feat(kvm, FEAT_AA64EL3);
> @@ -996,7 +992,7 @@ static const struct reg_bits_to_feat_map hcr_feat_map[] = {
>  	NEEDS_FEAT(HCR_EL2_TWEDEL	|
>  		   HCR_EL2_TWEDEn,
>  		   FEAT_TWED),
> -	NEEDS_FEAT_FLAG(HCR_EL2_E2H, RES1_WHEN_E2H1, enforce_resx),
> +	NEEDS_FEAT_FLAG(HCR_EL2_E2H, RES1_WHEN_E2H1 | FORCE_RESx),

Actually, this interacts badly with check_feat_map(), which tries to
find whether we have fully populated the registers, excluding the RESx
bits. But since we consider E2H to be a reserved bit, we end up with:

[    0.141317] kvm [1]: Undefined HCR_EL2 behaviour, bits 0000000400000000

With my approach, it was possible to distinguish the architecturally
RESx bits (defined as RES0 or RES1), as they were the only ones with
the FORCE_RESx attribute.

I can work around it with

diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
index 364bdd1e5be51..398458f4a6b7b 100644
--- a/arch/arm64/kvm/config.c
+++ b/arch/arm64/kvm/config.c
@@ -1283,7 +1283,7 @@ static void __init check_feat_map(const struct reg_bits_to_feat_map *map,
 	u64 mask = 0;
 
 	for (int i = 0; i < map_size; i++)
-		if (!(map[i].flags & FORCE_RESx))
+		if (!(map[i].flags & FORCE_RESx) || !(map[i].bits & resx))
 			mask |= map[i].bits;
 
 	if (mask != ~resx)

but it becomes a bit awkward...

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
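
[Editor's note: the invariant that check_feat_map() enforces, and that the "Undefined HCR_EL2 behaviour" warning above trips over, can be stated as a small standalone check. Names and types are illustrative, not the kernel's.]

```c
#include <stdint.h>

/*
 * Every bit of a register must be either described by the feature map
 * or architecturally RESx; anything left over has undefined behaviour.
 * In the trace above, HCR_EL2.E2H (bit 34) ends up counted as
 * RESx-only by the map while not being architecturally RESx, so it
 * shows up in this leftover mask.
 */
static uint64_t undefined_bits(uint64_t described, uint64_t resx)
{
	return ~(described | resx);
}
```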


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* Re: [PATCH 19/20] KVM: arm64: Add sanitisation to SCTLR_EL2
  2026-01-26 12:16 ` [PATCH 19/20] KVM: arm64: Add sanitisation to SCTLR_EL2 Marc Zyngier
@ 2026-01-29 18:11   ` Fuad Tabba
  0 siblings, 0 replies; 53+ messages in thread
From: Fuad Tabba @ 2026-01-29 18:11 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

Hi Marc,

On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
>
> Sanitise SCTLR_EL2 the usual way. The most important aspect of
> this is that we benefit from SCTLR_EL2.SPAN being RES1 when
> HCR_EL2.E2H==0.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>

This would change slightly when you change patch 12, but it matches
the spec and those changes would be trivial to apply here.

Reviewed-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad






> ---
>  arch/arm64/include/asm/kvm_host.h |  2 +-
>  arch/arm64/kvm/config.c           | 82 +++++++++++++++++++++++++++++++
>  arch/arm64/kvm/nested.c           |  4 ++
>  3 files changed, 87 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 9dca94e4361f0..c82b071ade2a5 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -495,7 +495,6 @@ enum vcpu_sysreg {
>         DBGVCR32_EL2,   /* Debug Vector Catch Register */
>
>         /* EL2 registers */
> -       SCTLR_EL2,      /* System Control Register (EL2) */
>         ACTLR_EL2,      /* Auxiliary Control Register (EL2) */
>         CPTR_EL2,       /* Architectural Feature Trap Register (EL2) */
>         HACR_EL2,       /* Hypervisor Auxiliary Control Register */
> @@ -526,6 +525,7 @@ enum vcpu_sysreg {
>
>         /* Anything from this can be RES0/RES1 sanitised */
>         MARKER(__SANITISED_REG_START__),
> +       SCTLR_EL2,      /* System Control Register (EL2) */
>         TCR2_EL2,       /* Extended Translation Control Register (EL2) */
>         SCTLR2_EL2,     /* System Control Register 2 (EL2) */
>         MDCR_EL2,       /* Monitor Debug Configuration Register (EL2) */
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index eebafb90bcf62..562513a4683e2 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -1130,6 +1130,84 @@ static const struct reg_bits_to_feat_map sctlr_el1_feat_map[] = {
>  static const DECLARE_FEAT_MAP(sctlr_el1_desc, SCTLR_EL1,
>                               sctlr_el1_feat_map, FEAT_AA64EL1);
>
> +static const struct reg_bits_to_feat_map sctlr_el2_feat_map[] = {
> +       NEEDS_FEAT_FLAG(SCTLR_EL2_CP15BEN,
> +                       RES0_WHEN_E2H1 | RES1_WHEN_E2H0 | REQUIRES_E2H1,
> +                       FEAT_AA32EL0),
> +       NEEDS_FEAT_FLAG(SCTLR_EL2_ITD   |
> +                       SCTLR_EL2_SED,
> +                       RES1_WHEN_E2H1 | RES0_WHEN_E2H0 | REQUIRES_E2H1,
> +                       FEAT_AA32EL0),
> +       NEEDS_FEAT_FLAG(SCTLR_EL2_BT0, REQUIRES_E2H1, FEAT_BTI),
> +       NEEDS_FEAT(SCTLR_EL2_BT, FEAT_BTI),
> +       NEEDS_FEAT_FLAG(SCTLR_EL2_CMOW, REQUIRES_E2H1, FEAT_CMOW),
> +       NEEDS_FEAT_FLAG(SCTLR_EL2_TSCXT,
> +                       RES0_WHEN_E2H0 | RES1_WHEN_E2H1 | REQUIRES_E2H1,
> +                       feat_csv2_2_csv2_1p2),
> +       NEEDS_FEAT_FLAG(SCTLR_EL2_EIS   |
> +                       SCTLR_EL2_EOS,
> +                       AS_RES1, FEAT_ExS),
> +       NEEDS_FEAT(SCTLR_EL2_EnFPM, FEAT_FPMR),
> +       NEEDS_FEAT(SCTLR_EL2_IESB, FEAT_IESB),
> +       NEEDS_FEAT_FLAG(SCTLR_EL2_EnALS, REQUIRES_E2H1, FEAT_LS64),
> +       NEEDS_FEAT_FLAG(SCTLR_EL2_EnAS0, REQUIRES_E2H1, FEAT_LS64_ACCDATA),
> +       NEEDS_FEAT_FLAG(SCTLR_EL2_EnASR, REQUIRES_E2H1, FEAT_LS64_V),
> +       NEEDS_FEAT(SCTLR_EL2_nAA, FEAT_LSE2),
> +       NEEDS_FEAT_FLAG(SCTLR_EL2_LSMAOE        |
> +                       SCTLR_EL2_nTLSMD,
> +                       AS_RES1 | REQUIRES_E2H1, FEAT_LSMAOC),
> +       NEEDS_FEAT(SCTLR_EL2_EE, FEAT_MixedEnd),
> +       NEEDS_FEAT_FLAG(SCTLR_EL2_E0E, REQUIRES_E2H1, feat_mixedendel0),
> +       NEEDS_FEAT_FLAG(SCTLR_EL2_MSCEn, REQUIRES_E2H1, FEAT_MOPS),
> +       NEEDS_FEAT_FLAG(SCTLR_EL2_ATA0  |
> +                       SCTLR_EL2_TCF0,
> +                       REQUIRES_E2H1, FEAT_MTE2),
> +       NEEDS_FEAT(SCTLR_EL2_ATA        |
> +                  SCTLR_EL2_TCF,
> +                  FEAT_MTE2),
> +       NEEDS_FEAT(SCTLR_EL2_ITFSB, feat_mte_async),
> +       NEEDS_FEAT_FLAG(SCTLR_EL2_TCSO0, REQUIRES_E2H1, FEAT_MTE_STORE_ONLY),
> +       NEEDS_FEAT(SCTLR_EL2_TCSO,
> +                  FEAT_MTE_STORE_ONLY),
> +       NEEDS_FEAT(SCTLR_EL2_NMI        |
> +                  SCTLR_EL2_SPINTMASK,
> +                  FEAT_NMI),
> +       NEEDS_FEAT_FLAG(SCTLR_EL2_SPAN, AS_RES1 | REQUIRES_E2H1, FEAT_PAN),
> +       NEEDS_FEAT_FLAG(SCTLR_EL2_EPAN, REQUIRES_E2H1, FEAT_PAN3),
> +       NEEDS_FEAT(SCTLR_EL2_EnDA       |
> +                  SCTLR_EL2_EnDB       |
> +                  SCTLR_EL2_EnIA       |
> +                  SCTLR_EL2_EnIB,
> +                  feat_pauth),
> +       NEEDS_FEAT_FLAG(SCTLR_EL2_EnTP2, REQUIRES_E2H1, FEAT_SME),
> +       NEEDS_FEAT(SCTLR_EL2_EnRCTX, FEAT_SPECRES),
> +       NEEDS_FEAT(SCTLR_EL2_DSSBS, FEAT_SSBS),
> +       NEEDS_FEAT_FLAG(SCTLR_EL2_TIDCP, REQUIRES_E2H1, FEAT_TIDCP1),
> +       NEEDS_FEAT_FLAG(SCTLR_EL2_TWEDEL        |
> +                       SCTLR_EL2_TWEDEn,
> +                       REQUIRES_E2H1, FEAT_TWED),
> +       NEEDS_FEAT_FLAG(SCTLR_EL2_nTWE  |
> +                       SCTLR_EL2_nTWI,
> +                       AS_RES1 | REQUIRES_E2H1, FEAT_AA64EL2),
> +       NEEDS_FEAT_FLAG(SCTLR_EL2_UCI   |
> +                       SCTLR_EL2_UCT   |
> +                       SCTLR_EL2_DZE   |
> +                       SCTLR_EL2_SA0,
> +                       REQUIRES_E2H1, FEAT_AA64EL2),
> +       NEEDS_FEAT(SCTLR_EL2_WXN        |
> +                  SCTLR_EL2_I          |
> +                  SCTLR_EL2_SA         |
> +                  SCTLR_EL2_C          |
> +                  SCTLR_EL2_A          |
> +                  SCTLR_EL2_M,
> +                  FEAT_AA64EL2),
> +       FORCE_RES0(SCTLR_EL2_RES0),
> +       FORCE_RES1(SCTLR_EL2_RES1),
> +};
> +
> +static const DECLARE_FEAT_MAP(sctlr_el2_desc, SCTLR_EL2,
> +                             sctlr_el2_feat_map, FEAT_AA64EL2);
> +
>  static const struct reg_bits_to_feat_map mdcr_el2_feat_map[] = {
>         NEEDS_FEAT(MDCR_EL2_EBWE, FEAT_Debugv8p9),
>         NEEDS_FEAT(MDCR_EL2_TDOSA, FEAT_DoubleLock),
> @@ -1249,6 +1327,7 @@ void __init check_feature_map(void)
>         check_reg_desc(&sctlr2_desc);
>         check_reg_desc(&tcr2_el2_desc);
>         check_reg_desc(&sctlr_el1_desc);
> +       check_reg_desc(&sctlr_el2_desc);
>         check_reg_desc(&mdcr_el2_desc);
>         check_reg_desc(&vtcr_el2_desc);
>  }
> @@ -1454,6 +1533,9 @@ struct resx get_reg_fixed_bits(struct kvm *kvm, enum vcpu_sysreg reg)
>         case SCTLR_EL1:
>                 resx = compute_reg_resx_bits(kvm, &sctlr_el1_desc, 0, 0);
>                 break;
> +       case SCTLR_EL2:
> +               resx = compute_reg_resx_bits(kvm, &sctlr_el2_desc, 0, 0);
> +               break;
>         case MDCR_EL2:
>                 resx = compute_reg_resx_bits(kvm, &mdcr_el2_desc, 0, 0);
>                 break;
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index 96e899dbd9192..ed710228484f3 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -1766,6 +1766,10 @@ int kvm_init_nv_sysregs(struct kvm_vcpu *vcpu)
>         resx = get_reg_fixed_bits(kvm, SCTLR_EL1);
>         set_sysreg_masks(kvm, SCTLR_EL1, resx);
>
> +       /* SCTLR_EL2 */
> +       resx = get_reg_fixed_bits(kvm, SCTLR_EL2);
> +       set_sysreg_masks(kvm, SCTLR_EL2, resx);
> +
>         /* SCTLR2_ELx */
>         resx = get_reg_fixed_bits(kvm, SCTLR2_EL1);
>         set_sysreg_masks(kvm, SCTLR2_EL1, resx);
> --
> 2.47.3
>


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH 13/20] KVM: arm64: Move RESx into individual register descriptors
  2026-01-29 18:07       ` Marc Zyngier
@ 2026-01-29 18:13         ` Fuad Tabba
  2026-01-30  9:06           ` Marc Zyngier
  0 siblings, 1 reply; 53+ messages in thread
From: Fuad Tabba @ 2026-01-29 18:13 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

>
> Actually, this interacts badly with check_feat_map(), which tries to
> find whether we have fully populated the registers, excluding the RESx
> bits. But since we consider E2H to be a reserved bit, we end up with:
>
> [    0.141317] kvm [1]: Undefined HCR_EL2 behaviour, bits 0000000400000000
>
> With my approach, it was possible to distinguish the architecturally
> RESx bits (defined as RES0 or RES1), as they were the only ones with
> the FORCE_RESx attribute.
>
> I can work around it with
>
> diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> index 364bdd1e5be51..398458f4a6b7b 100644
> --- a/arch/arm64/kvm/config.c
> +++ b/arch/arm64/kvm/config.c
> @@ -1283,7 +1283,7 @@ static void __init check_feat_map(const struct reg_bits_to_feat_map *map,
>         u64 mask = 0;
>
>         for (int i = 0; i < map_size; i++)
> -               if (!(map[i].flags & FORCE_RESx))
> +               if (!(map[i].flags & FORCE_RESx) || !(map[i].bits & resx))
>                         mask |= map[i].bits;
>
>         if (mask != ~resx)
>
> but it becomes a bit awkward...

If it becomes more complicated than the original, then what's the
point? Up to you whether you want to try to pursue this or not. For
my part:

Reviewed-by: Fuad Tabba <tabba@google.com>

Thanks,
/fuad

>
> Thanks,
>
>         M.
>
> --
> Without deviation from the norm, progress is not possible.
>


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH 13/20] KVM: arm64: Move RESx into individual register descriptors
  2026-01-29 18:13         ` Fuad Tabba
@ 2026-01-30  9:06           ` Marc Zyngier
  0 siblings, 0 replies; 53+ messages in thread
From: Marc Zyngier @ 2026-01-30  9:06 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Thu, 29 Jan 2026 18:13:18 +0000,
Fuad Tabba <tabba@google.com> wrote:
> 
> >
> > Actually, this interacts badly with check_feat_map(), which tries to
> > find whether we have fully populated the registers, excluding the RESx
> > bits. But since we consider E2H to be a reserved bit, we end up with:
> >
> > [    0.141317] kvm [1]: Undefined HCR_EL2 behaviour, bits 0000000400000000
> >
> > With my approach, it was possible to distinguish the architecturally
> > RESx bits (defined as RES0 or RES1), as they were the only ones with
> > the FORCE_RESx attribute.
> >
> > I can work around it with
> >
> > diff --git a/arch/arm64/kvm/config.c b/arch/arm64/kvm/config.c
> > index 364bdd1e5be51..398458f4a6b7b 100644
> > --- a/arch/arm64/kvm/config.c
> > +++ b/arch/arm64/kvm/config.c
> > @@ -1283,7 +1283,7 @@ static void __init check_feat_map(const struct reg_bits_to_feat_map *map,
> >         u64 mask = 0;
> >
> >         for (int i = 0; i < map_size; i++)
> > -               if (!(map[i].flags & FORCE_RESx))
> > +               if (!(map[i].flags & FORCE_RESx) || !(map[i].bits & resx))
> >                         mask |= map[i].bits;
> >
> >         if (mask != ~resx)
> >
> > but it becomes a bit awkward...
> 
> If it becomes more complicated than the original, then what's the
> point? Up to you whether you want to try to pursue this or not.

Not more complicated, just moving the complexity somewhere else. I'll
add a comment explaining the logic at this point. Overall, this is a
net cleanup, I think.

> From my part:
> 
> Reviewed-by: Fuad Tabba <tabba@google.com>

Thank you!

	M.

-- 
Without deviation from the norm, progress is not possible.


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH 20/20] KVM: arm64: Add debugfs file dumping computed RESx values
  2026-01-26 12:16 ` [PATCH 20/20] KVM: arm64: Add debugfs file dumping computed RESx values Marc Zyngier
@ 2026-02-02  8:59   ` Fuad Tabba
  2026-02-02  9:14     ` Marc Zyngier
  0 siblings, 1 reply; 53+ messages in thread
From: Fuad Tabba @ 2026-02-02  8:59 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

Hi Marc,

On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
>
> Computing RESx values is hard. Verifying that they are correct is
> harder. Add a debugfs file called "resx" that will dump all the RESx
> values for a given VM.
>
> I found it useful, maybe you will too.

I'm sure I will :)

> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_host.h |  1 +
>  arch/arm64/kvm/sys_regs.c         | 98 +++++++++++++++++++++++++++++++
>  2 files changed, 99 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index c82b071ade2a5..54072f6ec9d4b 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -375,6 +375,7 @@ struct kvm_arch {
>
>         /* Iterator for idreg debugfs */
>         u8      idreg_debugfs_iter;
> +       u16     sr_resx_iter;

Storing `sr_resx_iter` in `struct kvm_arch` effectively makes this
debugfs file exclusive (returning -EBUSY on contention). Standard
`seq_file` implementations should be stateless, using the `loff_t
*pos` argument to track the index. This allows multiple users to read
the file simultaneously without blocking each other.

>
>         /* Hypercall features firmware registers' descriptor */
>         struct kvm_smccc_features smccc_feat;
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 88a57ca36d96c..f3f92b489b588 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -5090,12 +5090,110 @@ static const struct seq_operations idregs_debug_sops = {
>
>  DEFINE_SEQ_ATTRIBUTE(idregs_debug);
>
> +static const struct sys_reg_desc *sr_resx_find(struct kvm *kvm, u16 pos)
> +{
> +       unsigned long i, sr_idx = 0;
> +
> +       for (i = 0; i < ARRAY_SIZE(sys_reg_descs); i++) {
> +               const struct sys_reg_desc *r = &sys_reg_descs[i];
> +
> +               if (r->reg < __SANITISED_REG_START__)
> +                       continue;
> +
> +               if (sr_idx == pos)
> +                       return r;
> +
> +               sr_idx++;
> +       }
> +
> +       return NULL;
> +}
> +
> +static void *sr_resx_start(struct seq_file *s, loff_t *pos)
> +{
> +       struct kvm *kvm = s->private;
> +       u16 *iter;
> +
> +       guard(mutex)(&kvm->arch.config_lock);

My understanding of `guard()` is that it releases the lock as soon as
the current scope ends (i.e., when `sr_resx_start()` returns). If the
intention was to protect the iteration, it seems like `sr_resx_next()`
and `sr_resx_show()` would end up running unprotected. That said,
converting this to a standard `seq_file` implementation should remove
the need for locking altogether.

I guess you based your code on the existing code for the idregs
debugfs. I had a look at that, and at vgic-debug, and I think they
both can be simplified and made more robust [1]. I also have a diff
that converts this to use `seq_file`. It's pretty similar to what I
have for idregs in the series I sent out [2]. Let me know if you'd
like me to share it.

Cheers,
/fuad

[1] https://lore.kernel.org/all/20260202085721.3954942-1-tabba@google.com/
[2] https://lore.kernel.org/all/20260202085721.3954942-2-tabba@google.com/

> +
> +       if (!kvm->arch.sysreg_masks)
> +               return NULL;
> +
> +       iter = &kvm->arch.sr_resx_iter;
> +       if (*iter != (u16)~0)
> +               return ERR_PTR(-EBUSY);
> +
> +       *iter = *pos;
> +       if (!sr_resx_find(kvm, *iter))
> +               iter = NULL;
> +
> +       return iter;
> +}
> +
> +static void *sr_resx_next(struct seq_file *s, void *v, loff_t *pos)
> +{
> +       struct kvm *kvm = s->private;
> +
> +       (*pos)++;
> +
> +       if (sr_resx_find(kvm, kvm->arch.sr_resx_iter + 1)) {
> +               kvm->arch.sr_resx_iter++;
> +
> +               return &kvm->arch.sr_resx_iter;
> +       }
> +
> +       return NULL;
> +}
> +
> +static void sr_resx_stop(struct seq_file *s, void *v)
> +{
> +       struct kvm *kvm = s->private;
> +
> +       if (IS_ERR(v))
> +               return;
> +
> +       guard(mutex)(&kvm->arch.config_lock);
> +
> +       kvm->arch.sr_resx_iter = ~0;
> +}
> +
> +static int sr_resx_show(struct seq_file *s, void *v)
> +{
> +       const struct sys_reg_desc *desc;
> +       struct kvm *kvm = s->private;
> +       struct resx resx;
> +
> +       desc = sr_resx_find(kvm, kvm->arch.sr_resx_iter);
> +
> +       if (!desc->name)
> +               return 0;
> +
> +       resx = kvm_get_sysreg_resx(kvm, desc->reg);
> +
> +       seq_printf(s, "%20s:\tRES0:%016llx\tRES1:%016llx\n",
> +                  desc->name, resx.res0, resx.res1);
> +
> +       return 0;
> +}
> +
> +static const struct seq_operations sr_resx_sops = {
> +       .start  = sr_resx_start,
> +       .next   = sr_resx_next,
> +       .stop   = sr_resx_stop,
> +       .show   = sr_resx_show,
> +};
> +
> +DEFINE_SEQ_ATTRIBUTE(sr_resx);
> +
>  void kvm_sys_regs_create_debugfs(struct kvm *kvm)
>  {
>         kvm->arch.idreg_debugfs_iter = ~0;
> +       kvm->arch.sr_resx_iter = ~0;
>
>         debugfs_create_file("idregs", 0444, kvm->debugfs_dentry, kvm,
>                             &idregs_debug_fops);
> +       debugfs_create_file("resx", 0444, kvm->debugfs_dentry, kvm,
> +                           &sr_resx_fops);
>  }
>
>  static void reset_vm_ftr_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *reg)
> --
> 2.47.3
>


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH 20/20] KVM: arm64: Add debugfs file dumping computed RESx values
  2026-02-02  8:59   ` Fuad Tabba
@ 2026-02-02  9:14     ` Marc Zyngier
  2026-02-02  9:57       ` Fuad Tabba
  0 siblings, 1 reply; 53+ messages in thread
From: Marc Zyngier @ 2026-02-02  9:14 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Mon, 02 Feb 2026 08:59:45 +0000,
Fuad Tabba <tabba@google.com> wrote:
> 
> Hi Marc,
> 
> On Mon, 26 Jan 2026 at 12:17, Marc Zyngier <maz@kernel.org> wrote:
> >
> > Computing RESx values is hard. Verifying that they are correct is
> > harder. Add a debugfs file called "resx" that will dump all the RESx
> > values for a given VM.
> >
> > I found it useful, maybe you will too.
> 
> I'm sure I will :)
> 
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_host.h |  1 +
> >  arch/arm64/kvm/sys_regs.c         | 98 +++++++++++++++++++++++++++++++
> >  2 files changed, 99 insertions(+)
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index c82b071ade2a5..54072f6ec9d4b 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -375,6 +375,7 @@ struct kvm_arch {
> >
> >         /* Iterator for idreg debugfs */
> >         u8      idreg_debugfs_iter;
> > +       u16     sr_resx_iter;
> 
> Storing `sr_resx_iter` in `struct kvm_arch` effectively makes this
> debugfs file exclusive (returning -EBUSY on contention). Standard
> `seq_file` implementations should be stateless, using the `loff_t
> *pos` argument to track the index. This allows multiple users to read
> the file simultaneously without blocking each other.

Yup, that's a good point. I guess I've lazily reimplemented a square
wheel...

> 
> >
> >         /* Hypercall features firmware registers' descriptor */
> >         struct kvm_smccc_features smccc_feat;
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 88a57ca36d96c..f3f92b489b588 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -5090,12 +5090,110 @@ static const struct seq_operations idregs_debug_sops = {
> >
> >  DEFINE_SEQ_ATTRIBUTE(idregs_debug);
> >
> > +static const struct sys_reg_desc *sr_resx_find(struct kvm *kvm, u16 pos)
> > +{
> > +       unsigned long i, sr_idx = 0;
> > +
> > +       for (i = 0; i < ARRAY_SIZE(sys_reg_descs); i++) {
> > +               const struct sys_reg_desc *r = &sys_reg_descs[i];
> > +
> > +               if (r->reg < __SANITISED_REG_START__)
> > +                       continue;
> > +
> > +               if (sr_idx == pos)
> > +                       return r;
> > +
> > +               sr_idx++;
> > +       }
> > +
> > +       return NULL;
> > +}
> > +
> > +static void *sr_resx_start(struct seq_file *s, loff_t *pos)
> > +{
> > +       struct kvm *kvm = s->private;
> > +       u16 *iter;
> > +
> > +       guard(mutex)(&kvm->arch.config_lock);
> 
> My understanding of `guard()` is that it releases the lock as soon as
> the current scope ends (i.e., when `sr_resx_start() `returns). If the
> intention was to protect the iteration, it seems like `sr_resx_next()`
> and `sr_resx_show()` would end up running unprotected. That said,
> converting this to a standard `seq_file` implementation should remove
> the need for locking altogether.
> 
> I guess you based your code on the existing code for the idregs
> debugfs. I had a look at that, and at vgic-debug, and I think they
> both can be simplified and made more robust [1]. I also have a diff
> that converts this to use `seq_file`. It's pretty similar to what I
> have for idregs in the series I sent out [2]. Let me know if you'd
> like me to share it.

Yes please. We might as well do the right thing, and I can fold that
into my current series with you as a co-author.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH 20/20] KVM: arm64: Add debugfs file dumping computed RESx values
  2026-02-02  9:14     ` Marc Zyngier
@ 2026-02-02  9:57       ` Fuad Tabba
  0 siblings, 0 replies; 53+ messages in thread
From: Fuad Tabba @ 2026-02-02  9:57 UTC (permalink / raw)
  To: maz
  Cc: kvmarm, linux-arm-kernel, kvm, joey.gouly, suzuki.poulose, oupton,
	yuzenghui, will, catalin.marinas, tabba

...

>
> Yes please. We might as well do the right thing, and I can fold that
> into my current series with you as a co-author.
>
> Thanks,

Here you go.

Cheers,
/fuad

---
 arch/arm64/include/asm/kvm_host.h |  1 -
 arch/arm64/kvm/sys_regs.c         | 42 +++++--------------------------
 2 files changed, 6 insertions(+), 37 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 54072f6ec9d4..c82b071ade2a 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -375,7 +375,6 @@ struct kvm_arch {
 
 	/* Iterator for idreg debugfs */
 	u8	idreg_debugfs_iter;
-	u16	sr_resx_iter;
 
 	/* Hypercall features firmware registers' descriptor */
 	struct kvm_smccc_features smccc_feat;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f3f92b489b58..d33c39ea8fad 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -5090,7 +5090,7 @@ static const struct seq_operations idregs_debug_sops = {
 
 DEFINE_SEQ_ATTRIBUTE(idregs_debug);
 
-static const struct sys_reg_desc *sr_resx_find(struct kvm *kvm, u16 pos)
+static const struct sys_reg_desc *sr_resx_find(struct kvm *kvm, loff_t pos)
 {
 	unsigned long i, sr_idx = 0;
 
@@ -5100,10 +5100,8 @@ static const struct sys_reg_desc *sr_resx_find(struct kvm *kvm, u16 pos)
 		if (r->reg < __SANITISED_REG_START__)
 			continue;
 
-		if (sr_idx == pos)
+		if (sr_idx++ == pos)
 			return r;
-
-		sr_idx++;
 	}
 
 	return NULL;
@@ -5112,22 +5110,11 @@ static const struct sys_reg_desc *sr_resx_find(struct kvm *kvm, u16 pos)
 static void *sr_resx_start(struct seq_file *s, loff_t *pos)
 {
 	struct kvm *kvm = s->private;
-	u16 *iter;
-
-	guard(mutex)(&kvm->arch.config_lock);
 
 	if (!kvm->arch.sysreg_masks)
 		return NULL;
 
-	iter = &kvm->arch.sr_resx_iter;
-	if (*iter != (u16)~0)
-		return ERR_PTR(-EBUSY);
-
-	*iter = *pos;
-	if (!sr_resx_find(kvm, *iter))
-		iter = NULL;
-
-	return iter;
+	return (void *)sr_resx_find(kvm, *pos);
 }
 
 static void *sr_resx_next(struct seq_file *s, void *v, loff_t *pos)
@@ -5136,36 +5123,20 @@ static void *sr_resx_next(struct seq_file *s, void *v, loff_t *pos)
 
 	(*pos)++;
 
-	if (sr_resx_find(kvm, kvm->arch.sr_resx_iter + 1)) {
-		kvm->arch.sr_resx_iter++;
-
-		return &kvm->arch.sr_resx_iter;
-	}
-
-	return NULL;
+	return (void *)sr_resx_find(kvm, *pos);
 }
 
 static void sr_resx_stop(struct seq_file *s, void *v)
 {
-	struct kvm *kvm = s->private;
-
-	if (IS_ERR(v))
-		return;
-
-	guard(mutex)(&kvm->arch.config_lock);
-
-	kvm->arch.sr_resx_iter = ~0;
 }
 
 static int sr_resx_show(struct seq_file *s, void *v)
 {
-	const struct sys_reg_desc *desc;
+	const struct sys_reg_desc *desc = v;
 	struct kvm *kvm = s->private;
 	struct resx resx;
 
-	desc = sr_resx_find(kvm, kvm->arch.sr_resx_iter);
-
-	if (!desc->name)
+	if (!desc)
 		return 0;
 
 	resx = kvm_get_sysreg_resx(kvm, desc->reg);
@@ -5188,7 +5159,6 @@ DEFINE_SEQ_ATTRIBUTE(sr_resx);
 void kvm_sys_regs_create_debugfs(struct kvm *kvm)
 {
 	kvm->arch.idreg_debugfs_iter = ~0;
-	kvm->arch.sr_resx_iter = ~0;
 
 	debugfs_create_file("idregs", 0444, kvm->debugfs_dentry, kvm,
 			    &idregs_debug_fops);
-- 
2.53.0.rc1.225.gd81095ad13-goog



^ permalink raw reply related	[flat|nested] 53+ messages in thread

end of thread, other threads:[~2026-02-02  9:57 UTC | newest]

Thread overview: 53+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-01-26 12:16 [PATCH 00/20] KVM: arm64: Generalise RESx handling Marc Zyngier
2026-01-26 12:16 ` [PATCH 01/20] arm64: Convert SCTLR_EL2 to sysreg infrastructure Marc Zyngier
2026-01-26 17:53   ` Fuad Tabba
2026-01-26 12:16 ` [PATCH 02/20] KVM: arm64: Remove duplicate configuration for SCTLR_EL1.{EE,E0E} Marc Zyngier
2026-01-26 18:04   ` Fuad Tabba
2026-01-26 12:16 ` [PATCH 03/20] KVM: arm64: Introduce standalone FGU computing primitive Marc Zyngier
2026-01-26 18:35   ` Fuad Tabba
2026-01-26 12:16 ` [PATCH 04/20] KVM: arm64: Introduce data structure tracking both RES0 and RES1 bits Marc Zyngier
2026-01-26 18:54   ` Fuad Tabba
2026-01-26 12:16 ` [PATCH 05/20] KVM: arm64: Extend unified RESx handling to runtime sanitisation Marc Zyngier
2026-01-26 19:15   ` Fuad Tabba
2026-01-27 10:52     ` Marc Zyngier
2026-01-26 12:16 ` [PATCH 06/20] KVM: arm64: Inherit RESx bits from FGT register descriptors Marc Zyngier
2026-01-27 15:21   ` Joey Gouly
2026-01-27 17:58   ` Fuad Tabba
2026-01-26 12:16 ` [PATCH 07/20] KVM: arm64: Allow RES1 bits to be inferred from configuration Marc Zyngier
2026-01-27 15:26   ` Joey Gouly
2026-01-27 17:58   ` Fuad Tabba
2026-01-26 12:16 ` [PATCH 08/20] KVM: arm64: Correctly handle SCTLR_EL1 RES1 bits for unsupported features Marc Zyngier
2026-01-27 18:06   ` Fuad Tabba
2026-01-26 12:16 ` [PATCH 09/20] KVM: arm64: Convert HCR_EL2.RW to AS_RES1 Marc Zyngier
2026-01-27 18:09   ` Fuad Tabba
2026-01-26 12:16 ` [PATCH 10/20] KVM: arm64: Simplify FIXED_VALUE handling Marc Zyngier
2026-01-27 18:20   ` Fuad Tabba
2026-01-26 12:16 ` [PATCH 11/20] KVM: arm64: Add REQUIRES_E2H1 constraint as configuration flags Marc Zyngier
2026-01-27 18:28   ` Fuad Tabba
2026-01-26 12:16 ` [PATCH 12/20] KVM: arm64: Add RESx_WHEN_E2Hx constraints " Marc Zyngier
2026-01-28 17:43   ` Fuad Tabba
2026-01-29 10:14     ` Marc Zyngier
2026-01-29 10:30       ` Fuad Tabba
2026-01-26 12:16 ` [PATCH 13/20] KVM: arm64: Move RESx into individual register descriptors Marc Zyngier
2026-01-29 16:29   ` Fuad Tabba
2026-01-29 17:19     ` Marc Zyngier
2026-01-29 17:39       ` Fuad Tabba
2026-01-29 18:07       ` Marc Zyngier
2026-01-29 18:13         ` Fuad Tabba
2026-01-30  9:06           ` Marc Zyngier
2026-01-26 12:16 ` [PATCH 14/20] KVM: arm64: Simplify handling of HCR_EL2.E2H RESx Marc Zyngier
2026-01-29 16:41   ` Fuad Tabba
2026-01-26 12:16 ` [PATCH 15/20] KVM: arm64: Get rid of FIXED_VALUE altogether Marc Zyngier
2026-01-29 16:54   ` Fuad Tabba
2026-01-26 12:16 ` [PATCH 16/20] KVM: arm64: Simplify handling of full register invalid constraint Marc Zyngier
2026-01-29 17:34   ` Fuad Tabba
2026-01-26 12:16 ` [PATCH 17/20] KVM: arm64: Remove all traces of FEAT_TME Marc Zyngier
2026-01-29 17:43   ` Fuad Tabba
2026-01-26 12:16 ` [PATCH 18/20] KVM: arm64: Remove all traces of HCR_EL2.MIOCNCE Marc Zyngier
2026-01-29 17:51   ` Fuad Tabba
2026-01-26 12:16 ` [PATCH 19/20] KVM: arm64: Add sanitisation to SCTLR_EL2 Marc Zyngier
2026-01-29 18:11   ` Fuad Tabba
2026-01-26 12:16 ` [PATCH 20/20] KVM: arm64: Add debugfs file dumping computed RESx values Marc Zyngier
2026-02-02  8:59   ` Fuad Tabba
2026-02-02  9:14     ` Marc Zyngier
2026-02-02  9:57       ` Fuad Tabba

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox