* [PATCH v3 0/6] KVM: arm64: FEAT_RASv1p1 support and RAS selection
@ 2025-08-17 20:21 Marc Zyngier
  2025-08-17 20:21 ` [PATCH v3 1/6] arm64: Add capability denoting FEAT_RASv1p1 Marc Zyngier
                   ` (6 more replies)
  0 siblings, 7 replies; 18+ messages in thread
From: Marc Zyngier @ 2025-08-17 20:21 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Will Deacon, Catalin Marinas, Cornelia Huck

This is the next iteration of this series trying to plug some of our
RAS holes (no pun intended...). See [1] for the original series.

The difference from the previous drop is that we no longer try to
expose a canonical encoding of RASv1p1, which means you must migrate
between similar implementations for now.
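
As an illustration, the following two host configurations both
implement RASv1p1, but expose different ID register values, so a VM
cannot (for now) be migrated from one to the other:

  ID_AA64PFR0_EL1.RAS = 0b0010, ID_AA64PFR1_EL1.RAS_frac = 0b0000
  ID_AA64PFR0_EL1.RAS = 0b0001, ID_AA64PFR1_EL1.RAS_frac = 0b0001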

I've also added a cleanup patch at the end, which can be dropped.

Patches on top of v6.17-rc1.

* From v2 [2]:

  - Drop the canonical RASv1p1 advertisement

  - Expose ID_AA64PFR1_EL1.RAS_frac as a writable field

  - Added an extra patch dropping ARM64_FEATURE_MASK(), as it is both
    useless and annoying

  - Pick RB from Joey (thanks!)

* From v1 [1]:

  - Bunch of patches picked by Oliver (thanks!)

  - Added the missing SYS_ERXMISC{2,3}_EL1 registers to the list of
    handled RAS registers

  - Added some rationale about the advertising of RASv1p1 (Cornelia)

  - Picked AB from Catalin (thanks!)

[1] https://lore.kernel.org/kvmarm/20250721101955.535159-1-maz@kernel.org
[2] https://lore.kernel.org/kvmarm/20250806165615.1513164-1-maz@kernel.org

Marc Zyngier (6):
  arm64: Add capability denoting FEAT_RASv1p1
  KVM: arm64: Handle RASv1p1 registers
  KVM: arm64: Ignore HCR_EL2.FIEN set by L1 guest's EL2
  KVM: arm64: Make ID_AA64PFR0_EL1.RAS writable
  KVM: arm64: Make ID_AA64PFR1_EL1.RAS_frac writable
  KVM: arm64: Get rid of ARM64_FEATURE_MASK()

 arch/arm64/include/asm/sysreg.h               |  3 -
 arch/arm64/kernel/cpufeature.c                | 24 ++++++
 arch/arm64/kvm/arm.c                          |  8 +-
 arch/arm64/kvm/hyp/vhe/switch.c               |  5 +-
 arch/arm64/kvm/nested.c                       |  3 +-
 arch/arm64/kvm/sys_regs.c                     | 75 +++++++++++++------
 arch/arm64/tools/cpucaps                      |  1 +
 tools/arch/arm64/include/asm/sysreg.h         |  3 -
 .../selftests/kvm/arm64/aarch32_id_regs.c     |  2 +-
 .../selftests/kvm/arm64/debug-exceptions.c    | 12 +--
 .../testing/selftests/kvm/arm64/no-vgic-v3.c  |  4 +-
 .../selftests/kvm/arm64/page_fault_test.c     |  6 +-
 .../testing/selftests/kvm/arm64/set_id_regs.c |  8 +-
 .../selftests/kvm/arm64/vpmu_counter_access.c |  2 +-
 .../selftests/kvm/lib/arm64/processor.c       |  6 +-
 15 files changed, 107 insertions(+), 55 deletions(-)

-- 
2.39.2



* [PATCH v3 1/6] arm64: Add capability denoting FEAT_RASv1p1
  2025-08-17 20:21 [PATCH v3 0/6] KVM: arm64: FEAT_RASv1p1 support and RAS selection Marc Zyngier
@ 2025-08-17 20:21 ` Marc Zyngier
  2025-08-18 12:32   ` Cornelia Huck
  2025-08-17 20:21 ` [PATCH v3 2/6] KVM: arm64: Handle RASv1p1 registers Marc Zyngier
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 18+ messages in thread
From: Marc Zyngier @ 2025-08-17 20:21 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Will Deacon, Catalin Marinas, Cornelia Huck

Detecting FEAT_RASv1p1 is rather complicated, as there are two
ways for the architecture to advertise the same thing (always a
delight...).

Add a capability that will advertise this in a synthetic way to
the rest of the kernel.
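
For illustration only (this is not part of the patch itself), code
elsewhere in the kernel can then test for RASv1p1 with the usual
capability helper, without having to care which of the two ID
register encodings the CPUs use:

	if (cpus_have_final_cap(ARM64_HAS_RASV1P1_EXTN)) {
		/* FEAT_RASv1p1 is present on all CPUs */
	}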

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kernel/cpufeature.c | 24 ++++++++++++++++++++++++
 arch/arm64/tools/cpucaps       |  1 +
 2 files changed, 25 insertions(+)

diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9ad065f15f1d6..0d45c5e9b4da5 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2269,6 +2269,24 @@ static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
 	/* Firmware may have left a deferred SError in this register. */
 	write_sysreg_s(0, SYS_DISR_EL1);
 }
+static bool has_rasv1p1(const struct arm64_cpu_capabilities *__unused, int scope)
+{
+	const struct arm64_cpu_capabilities rasv1p1_caps[] = {
+		{
+			ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, RAS, V1P1)
+		},
+		{
+			ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, RAS, IMP)
+		},
+		{
+			ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, RAS_frac, RASv1p1)
+		},
+	};
+
+	return (has_cpuid_feature(&rasv1p1_caps[0], scope) ||
+		(has_cpuid_feature(&rasv1p1_caps[1], scope) &&
+		 has_cpuid_feature(&rasv1p1_caps[2], scope)));
+}
 #endif /* CONFIG_ARM64_RAS_EXTN */
 
 #ifdef CONFIG_ARM64_PTR_AUTH
@@ -2687,6 +2705,12 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.cpu_enable = cpu_clear_disr,
 		ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, RAS, IMP)
 	},
+	{
+		.desc = "RASv1p1 Extension Support",
+		.capability = ARM64_HAS_RASV1P1_EXTN,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_rasv1p1,
+	},
 #endif /* CONFIG_ARM64_RAS_EXTN */
 #ifdef CONFIG_ARM64_AMU_EXTN
 	{
diff --git a/arch/arm64/tools/cpucaps b/arch/arm64/tools/cpucaps
index ef0b7946f5a48..9ff5cdbd27597 100644
--- a/arch/arm64/tools/cpucaps
+++ b/arch/arm64/tools/cpucaps
@@ -53,6 +53,7 @@ HAS_S1PIE
 HAS_S1POE
 HAS_SCTLR2
 HAS_RAS_EXTN
+HAS_RASV1P1_EXTN
 HAS_RNG
 HAS_SB
 HAS_STAGE2_FWB
-- 
2.39.2



* [PATCH v3 2/6] KVM: arm64: Handle RASv1p1 registers
  2025-08-17 20:21 [PATCH v3 0/6] KVM: arm64: FEAT_RASv1p1 support and RAS selection Marc Zyngier
  2025-08-17 20:21 ` [PATCH v3 1/6] arm64: Add capability denoting FEAT_RASv1p1 Marc Zyngier
@ 2025-08-17 20:21 ` Marc Zyngier
  2025-08-18 12:34   ` Cornelia Huck
  2025-08-21 13:13   ` Ben Horgan
  2025-08-17 20:21 ` [PATCH v3 3/6] KVM: arm64: Ignore HCR_EL2.FIEN set by L1 guest's EL2 Marc Zyngier
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 18+ messages in thread
From: Marc Zyngier @ 2025-08-17 20:21 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Will Deacon, Catalin Marinas, Cornelia Huck

FEAT_RASv1p1 system registeres are not handled at all so far.
KVM will give an embarassed warning on the console and inject
an UNDEF, despite RASv1p1 being exposed to the guest on suitable HW.

Handle these registers similarly to FEAT_RAS, with the added fun
that there are *two* ways to indicate the presence of FEAT_RASv1p1.

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/sys_regs.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 82ffb3b3b3cf7..feb1a7a708e25 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2697,6 +2697,18 @@ static bool access_ras(struct kvm_vcpu *vcpu,
 	struct kvm *kvm = vcpu->kvm;
 
 	switch(reg_to_encoding(r)) {
+	case SYS_ERXPFGCDN_EL1:
+	case SYS_ERXPFGCTL_EL1:
+	case SYS_ERXPFGF_EL1:
+	case SYS_ERXMISC2_EL1:
+	case SYS_ERXMISC3_EL1:
+		if (!(kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, V1P1) ||
+		      (kvm_has_feat_enum(kvm, ID_AA64PFR0_EL1, RAS, IMP) &&
+		       kvm_has_feat(kvm, ID_AA64PFR1_EL1, RAS_frac, RASv1p1)))) {
+			kvm_inject_undefined(vcpu);
+			return false;
+		}
+		break;
 	default:
 		if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, IMP)) {
 			kvm_inject_undefined(vcpu);
@@ -3063,8 +3075,13 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ SYS_DESC(SYS_ERXCTLR_EL1), access_ras },
 	{ SYS_DESC(SYS_ERXSTATUS_EL1), access_ras },
 	{ SYS_DESC(SYS_ERXADDR_EL1), access_ras },
+	{ SYS_DESC(SYS_ERXPFGF_EL1), access_ras },
+	{ SYS_DESC(SYS_ERXPFGCTL_EL1), access_ras },
+	{ SYS_DESC(SYS_ERXPFGCDN_EL1), access_ras },
 	{ SYS_DESC(SYS_ERXMISC0_EL1), access_ras },
 	{ SYS_DESC(SYS_ERXMISC1_EL1), access_ras },
+	{ SYS_DESC(SYS_ERXMISC2_EL1), access_ras },
+	{ SYS_DESC(SYS_ERXMISC3_EL1), access_ras },
 
 	MTE_REG(TFSR_EL1),
 	MTE_REG(TFSRE0_EL1),
-- 
2.39.2



* [PATCH v3 3/6] KVM: arm64: Ignore HCR_EL2.FIEN set by L1 guest's EL2
  2025-08-17 20:21 [PATCH v3 0/6] KVM: arm64: FEAT_RASv1p1 support and RAS selection Marc Zyngier
  2025-08-17 20:21 ` [PATCH v3 1/6] arm64: Add capability denoting FEAT_RASv1p1 Marc Zyngier
  2025-08-17 20:21 ` [PATCH v3 2/6] KVM: arm64: Handle RASv1p1 registers Marc Zyngier
@ 2025-08-17 20:21 ` Marc Zyngier
  2025-08-19 10:24   ` Joey Gouly
  2025-08-17 20:21 ` [PATCH v3 4/6] KVM: arm64: Make ID_AA64PFR0_EL1.RAS writable Marc Zyngier
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 18+ messages in thread
From: Marc Zyngier @ 2025-08-17 20:21 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Will Deacon, Catalin Marinas, Cornelia Huck

An EL2 guest can set HCR_EL2.FIEN, which gives access to the RASv1p1
fault injection mechanism. This would allow an EL1 guest to inject
error records into the system, which does sound like a terrible idea.

Prevent this situation by adding FIEN to the list of bits we silently
exclude from being inserted into the host configuration.
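
For reference, a much simplified sketch of the effect (the real
__compute_hcr() does a lot more massaging than this): the bits in
NV_HCR_GUEST_EXCLUDE are simply masked out of the guest's HCR_EL2
view before it gets folded into the host configuration, so a FIEN set
by the L1 guest's EL2 never makes it into the HW register:

	/* sketch only, not the actual code */
	hcr |= __vcpu_sys_reg(vcpu, HCR_EL2) & ~NV_HCR_GUEST_EXCLUDE;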

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/hyp/vhe/switch.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index e482181c66322..0998ad4a25524 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -43,8 +43,11 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
  *
  * - API/APK: they are already accounted for by vcpu_load(), and can
  *   only take effect across a load/put cycle (such as ERET)
+ *
+ * - FIEN: no way we let a guest have access to the RAS "Common Fault
+ *   Injection" thing, whatever that does
  */
-#define NV_HCR_GUEST_EXCLUDE	(HCR_TGE | HCR_API | HCR_APK)
+#define NV_HCR_GUEST_EXCLUDE	(HCR_TGE | HCR_API | HCR_APK | HCR_FIEN)
 
 static u64 __compute_hcr(struct kvm_vcpu *vcpu)
 {
-- 
2.39.2



* [PATCH v3 4/6] KVM: arm64: Make ID_AA64PFR0_EL1.RAS writable
  2025-08-17 20:21 [PATCH v3 0/6] KVM: arm64: FEAT_RASv1p1 support and RAS selection Marc Zyngier
                   ` (2 preceding siblings ...)
  2025-08-17 20:21 ` [PATCH v3 3/6] KVM: arm64: Ignore HCR_EL2.FIEN set by L1 guest's EL2 Marc Zyngier
@ 2025-08-17 20:21 ` Marc Zyngier
  2025-08-18 12:37   ` Cornelia Huck
  2025-08-17 20:21 ` [PATCH v3 5/6] KVM: arm64: Make ID_AA64PFR1_EL1.RAS_frac writable Marc Zyngier
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 18+ messages in thread
From: Marc Zyngier @ 2025-08-17 20:21 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Will Deacon, Catalin Marinas, Cornelia Huck

Make ID_AA64PFR0_EL1.RAS writable so that we can restore a VM from
a system without RAS to a RAS-equipped machine (or disable RAS
in the guest).
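
As an illustration (in the style of the existing selftests, assuming
the usual vcpu_get_reg()/vcpu_set_reg() helpers), userspace hiding
RAS from its guest would then look roughly like this:

	val = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1));
	val &= ~ID_AA64PFR0_EL1_RAS;	/* pretend RAS isn't there */
	vcpu_set_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1), val);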

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/sys_regs.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index feb1a7a708e25..3306fef432cbb 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2941,7 +2941,6 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 		    ~(ID_AA64PFR0_EL1_AMU |
 		      ID_AA64PFR0_EL1_MPAM |
 		      ID_AA64PFR0_EL1_SVE |
-		      ID_AA64PFR0_EL1_RAS |
 		      ID_AA64PFR0_EL1_AdvSIMD |
 		      ID_AA64PFR0_EL1_FP)),
 	ID_FILTERED(ID_AA64PFR1_EL1, id_aa64pfr1_el1,
-- 
2.39.2



* [PATCH v3 5/6] KVM: arm64: Make ID_AA64PFR1_EL1.RAS_frac writable
  2025-08-17 20:21 [PATCH v3 0/6] KVM: arm64: FEAT_RASv1p1 support and RAS selection Marc Zyngier
                   ` (3 preceding siblings ...)
  2025-08-17 20:21 ` [PATCH v3 4/6] KVM: arm64: Make ID_AA64PFR0_EL1.RAS writable Marc Zyngier
@ 2025-08-17 20:21 ` Marc Zyngier
  2025-08-18 12:43   ` Cornelia Huck
  2025-08-17 20:21 ` [PATCH v3 6/6] KVM: arm64: Get rid of ARM64_FEATURE_MASK() Marc Zyngier
  2025-08-22  0:01 ` [PATCH v3 0/6] KVM: arm64: FEAT_RASv1p1 support and RAS selection Oliver Upton
  6 siblings, 1 reply; 18+ messages in thread
From: Marc Zyngier @ 2025-08-17 20:21 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Will Deacon, Catalin Marinas, Cornelia Huck

Allow userspace to write to RAS_frac, under the condition that
the host supports RASv1p1 with RAS_frac==1. Other configurations
will result in RAS_frac being exposed as 0, and therefore implicitly
not writable.
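
Spelling out the resulting behaviour (illustrative summary):

  host RAS == IMP, RAS_frac == RASv1p1  ->  RAS_frac exposed, writable
  host RAS == V1P1 (RAS_frac == 0)      ->  RAS_frac exposed as 0
  host RAS == NI                        ->  RAS_frac exposed as 0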

To avoid the clutter, the ID_AA64PFR1_EL1 sanitisation is moved to
its own function.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/nested.c   |  3 ++-
 arch/arm64/kvm/sys_regs.c | 41 ++++++++++++++++++++++++++-------------
 2 files changed, 29 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 153b3e11b115d..1b0aedacc3f59 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -1458,9 +1458,10 @@ u64 limit_nv_id_reg(struct kvm *kvm, u32 reg, u64 val)
 		break;
 
 	case SYS_ID_AA64PFR1_EL1:
-		/* Only support BTI, SSBS, CSV2_frac */
+		/* Only support BTI, SSBS, RAS_frac, CSV2_frac */
 		val &= (ID_AA64PFR1_EL1_BT	|
 			ID_AA64PFR1_EL1_SSBS	|
+			ID_AA64PFR1_EL1_RAS_frac|
 			ID_AA64PFR1_EL1_CSV2_frac);
 		break;
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 3306fef432cbb..e149786f8bde0 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1584,6 +1584,7 @@ static u8 pmuver_to_perfmon(u8 pmuver)
 }
 
 static u64 sanitise_id_aa64pfr0_el1(const struct kvm_vcpu *vcpu, u64 val);
+static u64 sanitise_id_aa64pfr1_el1(const struct kvm_vcpu *vcpu, u64 val);
 static u64 sanitise_id_aa64dfr0_el1(const struct kvm_vcpu *vcpu, u64 val);
 
 /* Read a sanitised cpufeature ID register by sys_reg_desc */
@@ -1606,19 +1607,7 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
 		val = sanitise_id_aa64pfr0_el1(vcpu, val);
 		break;
 	case SYS_ID_AA64PFR1_EL1:
-		if (!kvm_has_mte(vcpu->kvm)) {
-			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE);
-			val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE_frac);
-		}
-
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME);
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_RNDR_trap);
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_NMI);
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_GCS);
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_THE);
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTEX);
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_PFAR);
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MPAM_frac);
+		val = sanitise_id_aa64pfr1_el1(vcpu, val);
 		break;
 	case SYS_ID_AA64PFR2_EL1:
 		val &= ID_AA64PFR2_EL1_FPMR |
@@ -1836,6 +1825,31 @@ static u64 sanitise_id_aa64pfr0_el1(const struct kvm_vcpu *vcpu, u64 val)
 	return val;
 }
 
+static u64 sanitise_id_aa64pfr1_el1(const struct kvm_vcpu *vcpu, u64 val)
+{
+	u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
+
+	if (!kvm_has_mte(vcpu->kvm)) {
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE);
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE_frac);
+	}
+
+	if (!(cpus_have_final_cap(ARM64_HAS_RASV1P1_EXTN) &&
+	      SYS_FIELD_GET(ID_AA64PFR0_EL1, RAS, pfr0) == ID_AA64PFR0_EL1_RAS_IMP))
+		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_RAS_frac);
+
+	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME);
+	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_RNDR_trap);
+	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_NMI);
+	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_GCS);
+	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_THE);
+	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTEX);
+	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_PFAR);
+	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MPAM_frac);
+
+	return val;
+}
+
 static u64 sanitise_id_aa64dfr0_el1(const struct kvm_vcpu *vcpu, u64 val)
 {
 	val = ID_REG_LIMIT_FIELD_ENUM(val, ID_AA64DFR0_EL1, DebugVer, V8P8);
@@ -2954,7 +2968,6 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 				       ID_AA64PFR1_EL1_SME |
 				       ID_AA64PFR1_EL1_RES0 |
 				       ID_AA64PFR1_EL1_MPAM_frac |
-				       ID_AA64PFR1_EL1_RAS_frac |
 				       ID_AA64PFR1_EL1_MTE)),
 	ID_WRITABLE(ID_AA64PFR2_EL1,
 		    ID_AA64PFR2_EL1_FPMR |
-- 
2.39.2



* [PATCH v3 6/6] KVM: arm64: Get rid of ARM64_FEATURE_MASK()
  2025-08-17 20:21 [PATCH v3 0/6] KVM: arm64: FEAT_RASv1p1 support and RAS selection Marc Zyngier
                   ` (4 preceding siblings ...)
  2025-08-17 20:21 ` [PATCH v3 5/6] KVM: arm64: Make ID_AA64PFR1_EL1.RAS_frac writable Marc Zyngier
@ 2025-08-17 20:21 ` Marc Zyngier
  2025-08-21 11:29   ` Ben Horgan
  2025-08-22  0:01 ` [PATCH v3 0/6] KVM: arm64: FEAT_RASv1p1 support and RAS selection Oliver Upton
  6 siblings, 1 reply; 18+ messages in thread
From: Marc Zyngier @ 2025-08-17 20:21 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Will Deacon, Catalin Marinas, Cornelia Huck

The ARM64_FEATURE_MASK() macro was a hack introduced whilst the
automatic generation of sysreg encodings was brought in, and was
too unreliable to be entirely trusted.

We are in a better place now, and we could really do without this
macro. Get rid of it altogether.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/sysreg.h               |  3 --
 arch/arm64/kvm/arm.c                          |  8 ++--
 arch/arm64/kvm/sys_regs.c                     | 40 +++++++++----------
 tools/arch/arm64/include/asm/sysreg.h         |  3 --
 .../selftests/kvm/arm64/aarch32_id_regs.c     |  2 +-
 .../selftests/kvm/arm64/debug-exceptions.c    | 12 +++---
 .../testing/selftests/kvm/arm64/no-vgic-v3.c  |  4 +-
 .../selftests/kvm/arm64/page_fault_test.c     |  6 +--
 .../testing/selftests/kvm/arm64/set_id_regs.c |  8 ++--
 .../selftests/kvm/arm64/vpmu_counter_access.c |  2 +-
 .../selftests/kvm/lib/arm64/processor.c       |  6 +--
 11 files changed, 44 insertions(+), 50 deletions(-)

diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index d5b5f2ae1afaa..6604fd6f33f45 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -1142,9 +1142,6 @@
 
 #define ARM64_FEATURE_FIELD_BITS	4
 
-/* Defined for compatibility only, do not add new users. */
-#define ARM64_FEATURE_MASK(x)	(x##_MASK)
-
 #ifdef __ASSEMBLY__
 
 	.macro	mrs_s, rt, sreg
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 888f7c7abf547..5bf101c869c9a 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -2408,12 +2408,12 @@ static u64 get_hyp_id_aa64pfr0_el1(void)
 	 */
 	u64 val = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
 
-	val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) |
-		 ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3));
+	val &= ~(ID_AA64PFR0_EL1_CSV2 |
+		 ID_AA64PFR0_EL1_CSV3);
 
-	val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2),
+	val |= FIELD_PREP(ID_AA64PFR0_EL1_CSV2,
 			  arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED);
-	val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3),
+	val |= FIELD_PREP(ID_AA64PFR0_EL1_CSV3,
 			  arm64_get_meltdown_state() == SPECTRE_UNAFFECTED);
 
 	return val;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index e149786f8bde0..00a485180c4eb 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1617,18 +1617,18 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
 		break;
 	case SYS_ID_AA64ISAR1_EL1:
 		if (!vcpu_has_ptrauth(vcpu))
-			val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA) |
-				 ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API) |
-				 ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPA) |
-				 ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPI));
+			val &= ~(ID_AA64ISAR1_EL1_APA |
+				 ID_AA64ISAR1_EL1_API |
+				 ID_AA64ISAR1_EL1_GPA |
+				 ID_AA64ISAR1_EL1_GPI);
 		break;
 	case SYS_ID_AA64ISAR2_EL1:
 		if (!vcpu_has_ptrauth(vcpu))
-			val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3) |
-				 ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3));
+			val &= ~(ID_AA64ISAR2_EL1_APA3 |
+				 ID_AA64ISAR2_EL1_GPA3);
 		if (!cpus_have_final_cap(ARM64_HAS_WFXT) ||
 		    has_broken_cntvoff())
-			val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT);
+			val &= ~ID_AA64ISAR2_EL1_WFxT;
 		break;
 	case SYS_ID_AA64ISAR3_EL1:
 		val &= ID_AA64ISAR3_EL1_FPRCVT | ID_AA64ISAR3_EL1_FAMINMAX;
@@ -1644,7 +1644,7 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
 		       ID_AA64MMFR3_EL1_S1PIE;
 		break;
 	case SYS_ID_MMFR4_EL1:
-		val &= ~ARM64_FEATURE_MASK(ID_MMFR4_EL1_CCIDX);
+		val &= ~ID_MMFR4_EL1_CCIDX;
 		break;
 	}
 
@@ -1830,22 +1830,22 @@ static u64 sanitise_id_aa64pfr1_el1(const struct kvm_vcpu *vcpu, u64 val)
 	u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
 
 	if (!kvm_has_mte(vcpu->kvm)) {
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE);
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE_frac);
+		val &= ~ID_AA64PFR1_EL1_MTE;
+		val &= ~ID_AA64PFR1_EL1_MTE_frac;
 	}
 
 	if (!(cpus_have_final_cap(ARM64_HAS_RASV1P1_EXTN) &&
 	      SYS_FIELD_GET(ID_AA64PFR0_EL1, RAS, pfr0) == ID_AA64PFR0_EL1_RAS_IMP))
-		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_RAS_frac);
-
-	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME);
-	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_RNDR_trap);
-	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_NMI);
-	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_GCS);
-	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_THE);
-	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTEX);
-	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_PFAR);
-	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MPAM_frac);
+		val &= ~ID_AA64PFR1_EL1_RAS_frac;
+
+	val &= ~ID_AA64PFR1_EL1_SME;
+	val &= ~ID_AA64PFR1_EL1_RNDR_trap;
+	val &= ~ID_AA64PFR1_EL1_NMI;
+	val &= ~ID_AA64PFR1_EL1_GCS;
+	val &= ~ID_AA64PFR1_EL1_THE;
+	val &= ~ID_AA64PFR1_EL1_MTEX;
+	val &= ~ID_AA64PFR1_EL1_PFAR;
+	val &= ~ID_AA64PFR1_EL1_MPAM_frac;
 
 	return val;
 }
diff --git a/tools/arch/arm64/include/asm/sysreg.h b/tools/arch/arm64/include/asm/sysreg.h
index 690b6ebd118f4..65f2759ea27a3 100644
--- a/tools/arch/arm64/include/asm/sysreg.h
+++ b/tools/arch/arm64/include/asm/sysreg.h
@@ -1080,9 +1080,6 @@
 
 #define ARM64_FEATURE_FIELD_BITS	4
 
-/* Defined for compatibility only, do not add new users. */
-#define ARM64_FEATURE_MASK(x)	(x##_MASK)
-
 #ifdef __ASSEMBLY__
 
 	.macro	mrs_s, rt, sreg
diff --git a/tools/testing/selftests/kvm/arm64/aarch32_id_regs.c b/tools/testing/selftests/kvm/arm64/aarch32_id_regs.c
index cef8f7323ceb8..713005b6f508e 100644
--- a/tools/testing/selftests/kvm/arm64/aarch32_id_regs.c
+++ b/tools/testing/selftests/kvm/arm64/aarch32_id_regs.c
@@ -146,7 +146,7 @@ static bool vcpu_aarch64_only(struct kvm_vcpu *vcpu)
 
 	val = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1));
 
-	el0 = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0), val);
+	el0 = FIELD_GET(ID_AA64PFR0_EL1_EL0, val);
 	return el0 == ID_AA64PFR0_EL1_EL0_IMP;
 }
 
diff --git a/tools/testing/selftests/kvm/arm64/debug-exceptions.c b/tools/testing/selftests/kvm/arm64/debug-exceptions.c
index e34963956fbc9..1d431de8729c5 100644
--- a/tools/testing/selftests/kvm/arm64/debug-exceptions.c
+++ b/tools/testing/selftests/kvm/arm64/debug-exceptions.c
@@ -116,12 +116,12 @@ static void reset_debug_state(void)
 
 	/* Reset all bcr/bvr/wcr/wvr registers */
 	dfr0 = read_sysreg(id_aa64dfr0_el1);
-	brps = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_BRPs), dfr0);
+	brps = FIELD_GET(ID_AA64DFR0_EL1_BRPs, dfr0);
 	for (i = 0; i <= brps; i++) {
 		write_dbgbcr(i, 0);
 		write_dbgbvr(i, 0);
 	}
-	wrps = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_WRPs), dfr0);
+	wrps = FIELD_GET(ID_AA64DFR0_EL1_WRPs, dfr0);
 	for (i = 0; i <= wrps; i++) {
 		write_dbgwcr(i, 0);
 		write_dbgwvr(i, 0);
@@ -418,7 +418,7 @@ static void guest_code_ss(int test_cnt)
 
 static int debug_version(uint64_t id_aa64dfr0)
 {
-	return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), id_aa64dfr0);
+	return FIELD_GET(ID_AA64DFR0_EL1_DebugVer, id_aa64dfr0);
 }
 
 static void test_guest_debug_exceptions(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
@@ -539,14 +539,14 @@ void test_guest_debug_exceptions_all(uint64_t aa64dfr0)
 	int b, w, c;
 
 	/* Number of breakpoints */
-	brp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_BRPs), aa64dfr0) + 1;
+	brp_num = FIELD_GET(ID_AA64DFR0_EL1_BRPs, aa64dfr0) + 1;
 	__TEST_REQUIRE(brp_num >= 2, "At least two breakpoints are required");
 
 	/* Number of watchpoints */
-	wrp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_WRPs), aa64dfr0) + 1;
+	wrp_num = FIELD_GET(ID_AA64DFR0_EL1_WRPs, aa64dfr0) + 1;
 
 	/* Number of context aware breakpoints */
-	ctx_brp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_CTX_CMPs), aa64dfr0) + 1;
+	ctx_brp_num = FIELD_GET(ID_AA64DFR0_EL1_CTX_CMPs, aa64dfr0) + 1;
 
 	pr_debug("%s brp_num:%d, wrp_num:%d, ctx_brp_num:%d\n", __func__,
 		 brp_num, wrp_num, ctx_brp_num);
diff --git a/tools/testing/selftests/kvm/arm64/no-vgic-v3.c b/tools/testing/selftests/kvm/arm64/no-vgic-v3.c
index ebd70430c89de..f222538e60841 100644
--- a/tools/testing/selftests/kvm/arm64/no-vgic-v3.c
+++ b/tools/testing/selftests/kvm/arm64/no-vgic-v3.c
@@ -54,7 +54,7 @@ static void guest_code(void)
 	 * Check that we advertise that ID_AA64PFR0_EL1.GIC == 0, having
 	 * hidden the feature at runtime without any other userspace action.
 	 */
-	__GUEST_ASSERT(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC),
+	__GUEST_ASSERT(FIELD_GET(ID_AA64PFR0_EL1_GIC,
 				 read_sysreg(id_aa64pfr0_el1)) == 0,
 		       "GICv3 wrongly advertised");
 
@@ -165,7 +165,7 @@ int main(int argc, char *argv[])
 
 	vm = vm_create_with_one_vcpu(&vcpu, NULL);
 	pfr0 = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1));
-	__TEST_REQUIRE(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), pfr0),
+	__TEST_REQUIRE(FIELD_GET(ID_AA64PFR0_EL1_GIC, pfr0),
 		       "GICv3 not supported.");
 	kvm_vm_free(vm);
 
diff --git a/tools/testing/selftests/kvm/arm64/page_fault_test.c b/tools/testing/selftests/kvm/arm64/page_fault_test.c
index dc6559dad9d86..4ccbd389d1336 100644
--- a/tools/testing/selftests/kvm/arm64/page_fault_test.c
+++ b/tools/testing/selftests/kvm/arm64/page_fault_test.c
@@ -95,14 +95,14 @@ static bool guest_check_lse(void)
 	uint64_t isar0 = read_sysreg(id_aa64isar0_el1);
 	uint64_t atomic;
 
-	atomic = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_ATOMIC), isar0);
+	atomic = FIELD_GET(ID_AA64ISAR0_EL1_ATOMIC, isar0);
 	return atomic >= 2;
 }
 
 static bool guest_check_dc_zva(void)
 {
 	uint64_t dczid = read_sysreg(dczid_el0);
-	uint64_t dzp = FIELD_GET(ARM64_FEATURE_MASK(DCZID_EL0_DZP), dczid);
+	uint64_t dzp = FIELD_GET(DCZID_EL0_DZP, dczid);
 
 	return dzp == 0;
 }
@@ -195,7 +195,7 @@ static bool guest_set_ha(void)
 	uint64_t hadbs, tcr;
 
 	/* Skip if HA is not supported. */
-	hadbs = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_HAFDBS), mmfr1);
+	hadbs = FIELD_GET(ID_AA64MMFR1_EL1_HAFDBS, mmfr1);
 	if (hadbs == 0)
 		return false;
 
diff --git a/tools/testing/selftests/kvm/arm64/set_id_regs.c b/tools/testing/selftests/kvm/arm64/set_id_regs.c
index d3bf9204409c3..36d40c267b994 100644
--- a/tools/testing/selftests/kvm/arm64/set_id_regs.c
+++ b/tools/testing/selftests/kvm/arm64/set_id_regs.c
@@ -594,8 +594,8 @@ static void test_user_set_mte_reg(struct kvm_vcpu *vcpu)
 	 */
 	val = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR1_EL1));
 
-	mte = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE), val);
-	mte_frac = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE_frac), val);
+	mte = FIELD_GET(ID_AA64PFR1_EL1_MTE, val);
+	mte_frac = FIELD_GET(ID_AA64PFR1_EL1_MTE_frac, val);
 	if (mte != ID_AA64PFR1_EL1_MTE_MTE2 ||
 	    mte_frac != ID_AA64PFR1_EL1_MTE_frac_NI) {
 		ksft_test_result_skip("MTE_ASYNC or MTE_ASYMM are supported, nothing to test\n");
@@ -612,7 +612,7 @@ static void test_user_set_mte_reg(struct kvm_vcpu *vcpu)
 	}
 
 	val = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR1_EL1));
-	mte_frac = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE_frac), val);
+	mte_frac = FIELD_GET(ID_AA64PFR1_EL1_MTE_frac, val);
 	if (mte_frac == ID_AA64PFR1_EL1_MTE_frac_NI)
 		ksft_test_result_pass("ID_AA64PFR1_EL1.MTE_frac=0 accepted and still 0xF\n");
 	else
@@ -774,7 +774,7 @@ int main(void)
 
 	/* Check for AARCH64 only system */
 	val = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1));
-	el0 = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0), val);
+	el0 = FIELD_GET(ID_AA64PFR0_EL1_EL0, val);
 	aarch64_only = (el0 == ID_AA64PFR0_EL1_EL0_IMP);
 
 	ksft_print_header();
diff --git a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
index f16b3b27e32ed..a0c4ab8391559 100644
--- a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
+++ b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
@@ -441,7 +441,7 @@ static void create_vpmu_vm(void *guest_code)
 
 	/* Make sure that PMUv3 support is indicated in the ID register */
 	dfr0 = vcpu_get_reg(vpmu_vm.vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1));
-	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), dfr0);
+	pmuver = FIELD_GET(ID_AA64DFR0_EL1_PMUVer, dfr0);
 	TEST_ASSERT(pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF &&
 		    pmuver >= ID_AA64DFR0_EL1_PMUVer_IMP,
 		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
diff --git a/tools/testing/selftests/kvm/lib/arm64/processor.c b/tools/testing/selftests/kvm/lib/arm64/processor.c
index 9d69904cb6084..eb115123d7411 100644
--- a/tools/testing/selftests/kvm/lib/arm64/processor.c
+++ b/tools/testing/selftests/kvm/lib/arm64/processor.c
@@ -573,15 +573,15 @@ void aarch64_get_supported_page_sizes(uint32_t ipa, uint32_t *ipa4k,
 	err = ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
 	TEST_ASSERT(err == 0, KVM_IOCTL_ERROR(KVM_GET_ONE_REG, vcpu_fd));
 
-	gran = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_TGRAN4), val);
+	gran = FIELD_GET(ID_AA64MMFR0_EL1_TGRAN4, val);
 	*ipa4k = max_ipa_for_page_size(ipa, gran, ID_AA64MMFR0_EL1_TGRAN4_NI,
 					ID_AA64MMFR0_EL1_TGRAN4_52_BIT);
 
-	gran = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_TGRAN64), val);
+	gran = FIELD_GET(ID_AA64MMFR0_EL1_TGRAN64, val);
 	*ipa64k = max_ipa_for_page_size(ipa, gran, ID_AA64MMFR0_EL1_TGRAN64_NI,
 					ID_AA64MMFR0_EL1_TGRAN64_IMP);
 
-	gran = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_TGRAN16), val);
+	gran = FIELD_GET(ID_AA64MMFR0_EL1_TGRAN16, val);
 	*ipa16k = max_ipa_for_page_size(ipa, gran, ID_AA64MMFR0_EL1_TGRAN16_NI,
 					ID_AA64MMFR0_EL1_TGRAN16_52_BIT);
 
-- 
2.39.2



* Re: [PATCH v3 1/6] arm64: Add capability denoting FEAT_RASv1p1
  2025-08-17 20:21 ` [PATCH v3 1/6] arm64: Add capability denoting FEAT_RASv1p1 Marc Zyngier
@ 2025-08-18 12:32   ` Cornelia Huck
  0 siblings, 0 replies; 18+ messages in thread
From: Cornelia Huck @ 2025-08-18 12:32 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Will Deacon, Catalin Marinas

On Sun, Aug 17 2025, Marc Zyngier <maz@kernel.org> wrote:

> Detecting FEAT_RASv1p1 is rather complicated, as there are two
> ways for the architecture to advertise the same thing (always a
> delight...).
>
> Add a capability that will advertise this in a synthetic way to
> the rest of the kernel.
>
> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kernel/cpufeature.c | 24 ++++++++++++++++++++++++
>  arch/arm64/tools/cpucaps       |  1 +
>  2 files changed, 25 insertions(+)

Reviewed-by: Cornelia Huck <cohuck@redhat.com>



* Re: [PATCH v3 2/6] KVM: arm64: Handle RASv1p1 registers
  2025-08-17 20:21 ` [PATCH v3 2/6] KVM: arm64: Handle RASv1p1 registers Marc Zyngier
@ 2025-08-18 12:34   ` Cornelia Huck
  2025-08-21 13:13   ` Ben Horgan
  1 sibling, 0 replies; 18+ messages in thread
From: Cornelia Huck @ 2025-08-18 12:34 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Will Deacon, Catalin Marinas

On Sun, Aug 17 2025, Marc Zyngier <maz@kernel.org> wrote:

> FEAT_RASv1p1 system registeres are not handled at all so far.

s/registeres/registers/

> KVM will give an embarassed warning on the console and inject
> an UNDEF, despite RASv1p1 being exposed to the guest on suitable HW.
>
> Handle these registers similarly to FEAT_RAS, with the added fun
> that there are *two* ways to indicate the presence of FEAT_RASv1p1.
>
> Reviewed-by: Joey Gouly <joey.gouly@arm.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 17 +++++++++++++++++
>  1 file changed, 17 insertions(+)

Reviewed-by: Cornelia Huck <cohuck@redhat.com>



* Re: [PATCH v3 4/6] KVM: arm64: Make ID_AA64PFR0_EL1.RAS writable
  2025-08-17 20:21 ` [PATCH v3 4/6] KVM: arm64: Make ID_AA64PFR0_EL1.RAS writable Marc Zyngier
@ 2025-08-18 12:37   ` Cornelia Huck
  0 siblings, 0 replies; 18+ messages in thread
From: Cornelia Huck @ 2025-08-18 12:37 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Will Deacon, Catalin Marinas

On Sun, Aug 17 2025, Marc Zyngier <maz@kernel.org> wrote:

> Make ID_AA64PFR0_EL1.RAS writable so that we can restore a VM from
> a system without RAS to a RAS-equipped machine (or disable RAS
> in the guest).
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 1 -
>  1 file changed, 1 deletion(-)

Reviewed-by: Cornelia Huck <cohuck@redhat.com>



* Re: [PATCH v3 5/6] KVM: arm64: Make ID_AA64PFR1_EL1.RAS_frac writable
  2025-08-17 20:21 ` [PATCH v3 5/6] KVM: arm64: Make ID_AA64PFR1_EL1.RAS_frac writable Marc Zyngier
@ 2025-08-18 12:43   ` Cornelia Huck
  0 siblings, 0 replies; 18+ messages in thread
From: Cornelia Huck @ 2025-08-18 12:43 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Will Deacon, Catalin Marinas

On Sun, Aug 17 2025, Marc Zyngier <maz@kernel.org> wrote:

> Allow userspace to write to RAS_frac, under the condition that
> the host supports RASv1p1 with RAS_frac==1. Other configurations
> will result in RAS_frac being exposed as 0, and therefore implicitly
> not writable.
>
> To avoid the clutter, the ID_AA64PFR1_EL1 sanitisation is moved to
> its own function.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/nested.c   |  3 ++-
>  arch/arm64/kvm/sys_regs.c | 41 ++++++++++++++++++++++++++-------------
>  2 files changed, 29 insertions(+), 15 deletions(-)

Reviewed-by: Cornelia Huck <cohuck@redhat.com>



* Re: [PATCH v3 3/6] KVM: arm64: Ignore HCR_EL2.FIEN set by L1 guest's EL2
  2025-08-17 20:21 ` [PATCH v3 3/6] KVM: arm64: Ignore HCR_EL2.FIEN set by L1 guest's EL2 Marc Zyngier
@ 2025-08-19 10:24   ` Joey Gouly
  0 siblings, 0 replies; 18+ messages in thread
From: Joey Gouly @ 2025-08-19 10:24 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Suzuki K Poulose, Oliver Upton,
	Zenghui Yu, Will Deacon, Catalin Marinas, Cornelia Huck

On Sun, Aug 17, 2025 at 09:21:55PM +0100, Marc Zyngier wrote:
> An EL2 guest can set HCR_EL2.FIEN, which gives access to the RASv1p1
> fault injection mechanism. This would allow an EL1 guest to inject
> error records into the system, which does sound like a terrible idea.
> 
> Prevent this situation by adding FIEN to the list of bits we silently
> exclude from being inserted into the host configuration.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>

Reviewed-by: Joey Gouly <joey.gouly@arm.com>

> ---
>  arch/arm64/kvm/hyp/vhe/switch.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
> index e482181c66322..0998ad4a25524 100644
> --- a/arch/arm64/kvm/hyp/vhe/switch.c
> +++ b/arch/arm64/kvm/hyp/vhe/switch.c
> @@ -43,8 +43,11 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
>   *
>   * - API/APK: they are already accounted for by vcpu_load(), and can
>   *   only take effect across a load/put cycle (such as ERET)
> + *
> + * - FIEN: no way we let a guest have access to the RAS "Common Fault
> + *   Injection" thing, whatever that does
>   */
> -#define NV_HCR_GUEST_EXCLUDE	(HCR_TGE | HCR_API | HCR_APK)
> +#define NV_HCR_GUEST_EXCLUDE	(HCR_TGE | HCR_API | HCR_APK | HCR_FIEN)
>  
>  static u64 __compute_hcr(struct kvm_vcpu *vcpu)
>  {
> -- 
> 2.39.2
> 


* Re: [PATCH v3 6/6] KVM: arm64: Get rid of ARM64_FEATURE_MASK()
  2025-08-17 20:21 ` [PATCH v3 6/6] KVM: arm64: Get rid of ARM64_FEATURE_MASK() Marc Zyngier
@ 2025-08-21 11:29   ` Ben Horgan
  2025-08-21 13:43     ` Marc Zyngier
  0 siblings, 1 reply; 18+ messages in thread
From: Ben Horgan @ 2025-08-21 11:29 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Will Deacon, Catalin Marinas, Cornelia Huck

Hi Marc,

On 8/17/25 21:21, Marc Zyngier wrote:
> The ARM64_FEATURE_MASK() macro was a hack introduced whilst the
> automatic generation of sysreg encodings was brought in, and was
> too unreliable to be entirely trusted.
> 
> We are in a better place now, and we could really do without this
> macro. Get rid of it altogether.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>   arch/arm64/include/asm/sysreg.h               |  3 --
>   arch/arm64/kvm/arm.c                          |  8 ++--
>   arch/arm64/kvm/sys_regs.c                     | 40 +++++++++----------
>   tools/arch/arm64/include/asm/sysreg.h         |  3 --
>   .../selftests/kvm/arm64/aarch32_id_regs.c     |  2 +-
>   .../selftests/kvm/arm64/debug-exceptions.c    | 12 +++---
>   .../testing/selftests/kvm/arm64/no-vgic-v3.c  |  4 +-
>   .../selftests/kvm/arm64/page_fault_test.c     |  6 +--
>   .../testing/selftests/kvm/arm64/set_id_regs.c |  8 ++--
>   .../selftests/kvm/arm64/vpmu_counter_access.c |  2 +-
>   .../selftests/kvm/lib/arm64/processor.c       |  6 +--
>   11 files changed, 44 insertions(+), 50 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index d5b5f2ae1afaa..6604fd6f33f45 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -1142,9 +1142,6 @@
>   
>   #define ARM64_FEATURE_FIELD_BITS	4
While you're at it, consider getting rid of ARM64_FEATURE_FIELD_BITS 
too. This is only used in the set_id_regs.c selftest.
>   
> -/* Defined for compatibility only, do not add new users. */
> -#define ARM64_FEATURE_MASK(x)	(x##_MASK)
> -
>   #ifdef __ASSEMBLY__
>   
>   	.macro	mrs_s, rt, sreg
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 888f7c7abf547..5bf101c869c9a 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -2408,12 +2408,12 @@ static u64 get_hyp_id_aa64pfr0_el1(void)
>   	 */
>   	u64 val = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
>   
> -	val &= ~(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2) |
> -		 ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3));
> +	val &= ~(ID_AA64PFR0_EL1_CSV2 |
> +		 ID_AA64PFR0_EL1_CSV3);
>   
> -	val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV2),
> +	val |= FIELD_PREP(ID_AA64PFR0_EL1_CSV2,
>   			  arm64_get_spectre_v2_state() == SPECTRE_UNAFFECTED);
> -	val |= FIELD_PREP(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_CSV3),
> +	val |= FIELD_PREP(ID_AA64PFR0_EL1_CSV3,
>   			  arm64_get_meltdown_state() == SPECTRE_UNAFFECTED);
>   
>   	return val;
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index e149786f8bde0..00a485180c4eb 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1617,18 +1617,18 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
>   		break;
>   	case SYS_ID_AA64ISAR1_EL1:
>   		if (!vcpu_has_ptrauth(vcpu))
> -			val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_APA) |
> -				 ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_API) |
> -				 ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPA) |
> -				 ARM64_FEATURE_MASK(ID_AA64ISAR1_EL1_GPI));
> +			val &= ~(ID_AA64ISAR1_EL1_APA |
> +				 ID_AA64ISAR1_EL1_API |
> +				 ID_AA64ISAR1_EL1_GPA |
> +				 ID_AA64ISAR1_EL1_GPI);
>   		break;
>   	case SYS_ID_AA64ISAR2_EL1:
>   		if (!vcpu_has_ptrauth(vcpu))
> -			val &= ~(ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_APA3) |
> -				 ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_GPA3));
> +			val &= ~(ID_AA64ISAR2_EL1_APA3 |
> +				 ID_AA64ISAR2_EL1_GPA3);
>   		if (!cpus_have_final_cap(ARM64_HAS_WFXT) ||
>   		    has_broken_cntvoff())
> -			val &= ~ARM64_FEATURE_MASK(ID_AA64ISAR2_EL1_WFxT);
> +			val &= ~ID_AA64ISAR2_EL1_WFxT;
>   		break;
>   	case SYS_ID_AA64ISAR3_EL1:
>   		val &= ID_AA64ISAR3_EL1_FPRCVT | ID_AA64ISAR3_EL1_FAMINMAX;
> @@ -1644,7 +1644,7 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
>   		       ID_AA64MMFR3_EL1_S1PIE;
>   		break;
>   	case SYS_ID_MMFR4_EL1:
> -		val &= ~ARM64_FEATURE_MASK(ID_MMFR4_EL1_CCIDX);
> +		val &= ~ID_MMFR4_EL1_CCIDX;
>   		break;
>   	}
>   
> @@ -1830,22 +1830,22 @@ static u64 sanitise_id_aa64pfr1_el1(const struct kvm_vcpu *vcpu, u64 val)
>   	u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
>   
>   	if (!kvm_has_mte(vcpu->kvm)) {
> -		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE);
> -		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE_frac);
> +		val &= ~ID_AA64PFR1_EL1_MTE;
> +		val &= ~ID_AA64PFR1_EL1_MTE_frac;
>   	}
>   
>   	if (!(cpus_have_final_cap(ARM64_HAS_RASV1P1_EXTN) &&
>   	      SYS_FIELD_GET(ID_AA64PFR0_EL1, RAS, pfr0) == ID_AA64PFR0_EL1_RAS_IMP))
> -		val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_RAS_frac);
> -
> -	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_SME);
> -	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_RNDR_trap);
> -	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_NMI);
> -	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_GCS);
> -	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_THE);
> -	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTEX);
> -	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_PFAR);
> -	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MPAM_frac);
> +		val &= ~ID_AA64PFR1_EL1_RAS_frac;
> +
> +	val &= ~ID_AA64PFR1_EL1_SME;
> +	val &= ~ID_AA64PFR1_EL1_RNDR_trap;
> +	val &= ~ID_AA64PFR1_EL1_NMI;
> +	val &= ~ID_AA64PFR1_EL1_GCS;
> +	val &= ~ID_AA64PFR1_EL1_THE;
> +	val &= ~ID_AA64PFR1_EL1_MTEX;
> +	val &= ~ID_AA64PFR1_EL1_PFAR;
> +	val &= ~ID_AA64PFR1_EL1_MPAM_frac;
>   
>   	return val;
>   }
> diff --git a/tools/arch/arm64/include/asm/sysreg.h b/tools/arch/arm64/include/asm/sysreg.h
> index 690b6ebd118f4..65f2759ea27a3 100644
> --- a/tools/arch/arm64/include/asm/sysreg.h
> +++ b/tools/arch/arm64/include/asm/sysreg.h
> @@ -1080,9 +1080,6 @@
>   
>   #define ARM64_FEATURE_FIELD_BITS	4
>   
> -/* Defined for compatibility only, do not add new users. */
> -#define ARM64_FEATURE_MASK(x)	(x##_MASK)
> -
>   #ifdef __ASSEMBLY__
>   
>   	.macro	mrs_s, rt, sreg
> diff --git a/tools/testing/selftests/kvm/arm64/aarch32_id_regs.c b/tools/testing/selftests/kvm/arm64/aarch32_id_regs.c
> index cef8f7323ceb8..713005b6f508e 100644
> --- a/tools/testing/selftests/kvm/arm64/aarch32_id_regs.c
> +++ b/tools/testing/selftests/kvm/arm64/aarch32_id_regs.c
> @@ -146,7 +146,7 @@ static bool vcpu_aarch64_only(struct kvm_vcpu *vcpu)
>   
>   	val = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1));
>   
> -	el0 = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0), val);
> +	el0 = FIELD_GET(ID_AA64PFR0_EL1_EL0, val);
>   	return el0 == ID_AA64PFR0_EL1_EL0_IMP;
>   }
>   
> diff --git a/tools/testing/selftests/kvm/arm64/debug-exceptions.c b/tools/testing/selftests/kvm/arm64/debug-exceptions.c
> index e34963956fbc9..1d431de8729c5 100644
> --- a/tools/testing/selftests/kvm/arm64/debug-exceptions.c
> +++ b/tools/testing/selftests/kvm/arm64/debug-exceptions.c
> @@ -116,12 +116,12 @@ static void reset_debug_state(void)
>   
>   	/* Reset all bcr/bvr/wcr/wvr registers */
>   	dfr0 = read_sysreg(id_aa64dfr0_el1);
> -	brps = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_BRPs), dfr0);
> +	brps = FIELD_GET(ID_AA64DFR0_EL1_BRPs, dfr0);
>   	for (i = 0; i <= brps; i++) {
>   		write_dbgbcr(i, 0);
>   		write_dbgbvr(i, 0);
>   	}
> -	wrps = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_WRPs), dfr0);
> +	wrps = FIELD_GET(ID_AA64DFR0_EL1_WRPs, dfr0);
>   	for (i = 0; i <= wrps; i++) {
>   		write_dbgwcr(i, 0);
>   		write_dbgwvr(i, 0);
> @@ -418,7 +418,7 @@ static void guest_code_ss(int test_cnt)
>   
>   static int debug_version(uint64_t id_aa64dfr0)
>   {
> -	return FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_DebugVer), id_aa64dfr0);
> +	return FIELD_GET(ID_AA64DFR0_EL1_DebugVer, id_aa64dfr0);
>   }
>   
>   static void test_guest_debug_exceptions(uint8_t bpn, uint8_t wpn, uint8_t ctx_bpn)
> @@ -539,14 +539,14 @@ void test_guest_debug_exceptions_all(uint64_t aa64dfr0)
>   	int b, w, c;
>   
>   	/* Number of breakpoints */
> -	brp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_BRPs), aa64dfr0) + 1;
> +	brp_num = FIELD_GET(ID_AA64DFR0_EL1_BRPs, aa64dfr0) + 1;
>   	__TEST_REQUIRE(brp_num >= 2, "At least two breakpoints are required");
>   
>   	/* Number of watchpoints */
> -	wrp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_WRPs), aa64dfr0) + 1;
> +	wrp_num = FIELD_GET(ID_AA64DFR0_EL1_WRPs, aa64dfr0) + 1;
>   
>   	/* Number of context aware breakpoints */
> -	ctx_brp_num = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_CTX_CMPs), aa64dfr0) + 1;
> +	ctx_brp_num = FIELD_GET(ID_AA64DFR0_EL1_CTX_CMPs, aa64dfr0) + 1;
>   
>   	pr_debug("%s brp_num:%d, wrp_num:%d, ctx_brp_num:%d\n", __func__,
>   		 brp_num, wrp_num, ctx_brp_num);
> diff --git a/tools/testing/selftests/kvm/arm64/no-vgic-v3.c b/tools/testing/selftests/kvm/arm64/no-vgic-v3.c
> index ebd70430c89de..f222538e60841 100644
> --- a/tools/testing/selftests/kvm/arm64/no-vgic-v3.c
> +++ b/tools/testing/selftests/kvm/arm64/no-vgic-v3.c
> @@ -54,7 +54,7 @@ static void guest_code(void)
>   	 * Check that we advertise that ID_AA64PFR0_EL1.GIC == 0, having
>   	 * hidden the feature at runtime without any other userspace action.
>   	 */
> -	__GUEST_ASSERT(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC),
> +	__GUEST_ASSERT(FIELD_GET(ID_AA64PFR0_EL1_GIC,
>   				 read_sysreg(id_aa64pfr0_el1)) == 0,
>   		       "GICv3 wrongly advertised");
>   
> @@ -165,7 +165,7 @@ int main(int argc, char *argv[])
>   
>   	vm = vm_create_with_one_vcpu(&vcpu, NULL);
>   	pfr0 = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1));
> -	__TEST_REQUIRE(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_GIC), pfr0),
> +	__TEST_REQUIRE(FIELD_GET(ID_AA64PFR0_EL1_GIC, pfr0),
>   		       "GICv3 not supported.");
>   	kvm_vm_free(vm);
>   
> diff --git a/tools/testing/selftests/kvm/arm64/page_fault_test.c b/tools/testing/selftests/kvm/arm64/page_fault_test.c
> index dc6559dad9d86..4ccbd389d1336 100644
> --- a/tools/testing/selftests/kvm/arm64/page_fault_test.c
> +++ b/tools/testing/selftests/kvm/arm64/page_fault_test.c
> @@ -95,14 +95,14 @@ static bool guest_check_lse(void)
>   	uint64_t isar0 = read_sysreg(id_aa64isar0_el1);
>   	uint64_t atomic;
>   
> -	atomic = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64ISAR0_EL1_ATOMIC), isar0);
> +	atomic = FIELD_GET(ID_AA64ISAR0_EL1_ATOMIC, isar0);
>   	return atomic >= 2;
>   }
>   
>   static bool guest_check_dc_zva(void)
>   {
>   	uint64_t dczid = read_sysreg(dczid_el0);
> -	uint64_t dzp = FIELD_GET(ARM64_FEATURE_MASK(DCZID_EL0_DZP), dczid);
> +	uint64_t dzp = FIELD_GET(DCZID_EL0_DZP, dczid);
>   
>   	return dzp == 0;
>   }
> @@ -195,7 +195,7 @@ static bool guest_set_ha(void)
>   	uint64_t hadbs, tcr;
>   
>   	/* Skip if HA is not supported. */
> -	hadbs = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR1_EL1_HAFDBS), mmfr1);
> +	hadbs = FIELD_GET(ID_AA64MMFR1_EL1_HAFDBS, mmfr1);
>   	if (hadbs == 0)
>   		return false;
>   
> diff --git a/tools/testing/selftests/kvm/arm64/set_id_regs.c b/tools/testing/selftests/kvm/arm64/set_id_regs.c
> index d3bf9204409c3..36d40c267b994 100644
> --- a/tools/testing/selftests/kvm/arm64/set_id_regs.c
> +++ b/tools/testing/selftests/kvm/arm64/set_id_regs.c
> @@ -594,8 +594,8 @@ static void test_user_set_mte_reg(struct kvm_vcpu *vcpu)
>   	 */
>   	val = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR1_EL1));
>   
> -	mte = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE), val);
> -	mte_frac = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE_frac), val);
> +	mte = FIELD_GET(ID_AA64PFR1_EL1_MTE, val);
> +	mte_frac = FIELD_GET(ID_AA64PFR1_EL1_MTE_frac, val);
>   	if (mte != ID_AA64PFR1_EL1_MTE_MTE2 ||
>   	    mte_frac != ID_AA64PFR1_EL1_MTE_frac_NI) {
>   		ksft_test_result_skip("MTE_ASYNC or MTE_ASYMM are supported, nothing to test\n");
> @@ -612,7 +612,7 @@ static void test_user_set_mte_reg(struct kvm_vcpu *vcpu)
>   	}
>   
>   	val = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR1_EL1));
> -	mte_frac = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTE_frac), val);
> +	mte_frac = FIELD_GET(ID_AA64PFR1_EL1_MTE_frac, val);
>   	if (mte_frac == ID_AA64PFR1_EL1_MTE_frac_NI)
>   		ksft_test_result_pass("ID_AA64PFR1_EL1.MTE_frac=0 accepted and still 0xF\n");
>   	else
> @@ -774,7 +774,7 @@ int main(void)
>   
>   	/* Check for AARCH64 only system */
>   	val = vcpu_get_reg(vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64PFR0_EL1));
> -	el0 = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0), val);
> +	el0 = FIELD_GET(ID_AA64PFR0_EL1_EL0, val);
>   	aarch64_only = (el0 == ID_AA64PFR0_EL1_EL0_IMP);
>   
>   	ksft_print_header();
> diff --git a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
> index f16b3b27e32ed..a0c4ab8391559 100644
> --- a/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
> +++ b/tools/testing/selftests/kvm/arm64/vpmu_counter_access.c
> @@ -441,7 +441,7 @@ static void create_vpmu_vm(void *guest_code)
>   
>   	/* Make sure that PMUv3 support is indicated in the ID register */
>   	dfr0 = vcpu_get_reg(vpmu_vm.vcpu, KVM_ARM64_SYS_REG(SYS_ID_AA64DFR0_EL1));
> -	pmuver = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), dfr0);
> +	pmuver = FIELD_GET(ID_AA64DFR0_EL1_PMUVer, dfr0);
>   	TEST_ASSERT(pmuver != ID_AA64DFR0_EL1_PMUVer_IMP_DEF &&
>   		    pmuver >= ID_AA64DFR0_EL1_PMUVer_IMP,
>   		    "Unexpected PMUVER (0x%x) on the vCPU with PMUv3", pmuver);
> diff --git a/tools/testing/selftests/kvm/lib/arm64/processor.c b/tools/testing/selftests/kvm/lib/arm64/processor.c
> index 9d69904cb6084..eb115123d7411 100644
> --- a/tools/testing/selftests/kvm/lib/arm64/processor.c
> +++ b/tools/testing/selftests/kvm/lib/arm64/processor.c
> @@ -573,15 +573,15 @@ void aarch64_get_supported_page_sizes(uint32_t ipa, uint32_t *ipa4k,
>   	err = ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
>   	TEST_ASSERT(err == 0, KVM_IOCTL_ERROR(KVM_GET_ONE_REG, vcpu_fd));
>   
> -	gran = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_TGRAN4), val);
> +	gran = FIELD_GET(ID_AA64MMFR0_EL1_TGRAN4, val);
>   	*ipa4k = max_ipa_for_page_size(ipa, gran, ID_AA64MMFR0_EL1_TGRAN4_NI,
>   					ID_AA64MMFR0_EL1_TGRAN4_52_BIT);
>   
> -	gran = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_TGRAN64), val);
> +	gran = FIELD_GET(ID_AA64MMFR0_EL1_TGRAN64, val);
>   	*ipa64k = max_ipa_for_page_size(ipa, gran, ID_AA64MMFR0_EL1_TGRAN64_NI,
>   					ID_AA64MMFR0_EL1_TGRAN64_IMP);
>   
> -	gran = FIELD_GET(ARM64_FEATURE_MASK(ID_AA64MMFR0_EL1_TGRAN16), val);
> +	gran = FIELD_GET(ID_AA64MMFR0_EL1_TGRAN16, val);
>   	*ipa16k = max_ipa_for_page_size(ipa, gran, ID_AA64MMFR0_EL1_TGRAN16_NI,
>   					ID_AA64MMFR0_EL1_TGRAN16_52_BIT);
>   

-- 
Thanks,

Ben



* Re: [PATCH v3 2/6] KVM: arm64: Handle RASv1p1 registers
  2025-08-17 20:21 ` [PATCH v3 2/6] KVM: arm64: Handle RASv1p1 registers Marc Zyngier
  2025-08-18 12:34   ` Cornelia Huck
@ 2025-08-21 13:13   ` Ben Horgan
  2025-08-21 13:37     ` Marc Zyngier
  1 sibling, 1 reply; 18+ messages in thread
From: Ben Horgan @ 2025-08-21 13:13 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm, linux-arm-kernel, kvm
  Cc: Joey Gouly, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Will Deacon, Catalin Marinas, Cornelia Huck

Hi Marc,

On 8/17/25 21:21, Marc Zyngier wrote:
> FEAT_RASv1p1 system registeres are not handled at all so far.
> KVM will give an embarassed warning on the console and inject
s/embarassed/embarrassed/

> an UNDEF, despite RASv1p1 being exposed to the guest on suitable HW.
> 
> Handle these registers similarly to FEAT_RAS, with the added fun
> that there are *two* ways to indicate the presence of FEAT_RASv1p1.
> 
> Reviewed-by: Joey Gouly <joey.gouly@arm.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>   arch/arm64/kvm/sys_regs.c | 17 +++++++++++++++++
>   1 file changed, 17 insertions(+)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 82ffb3b3b3cf7..feb1a7a708e25 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -2697,6 +2697,18 @@ static bool access_ras(struct kvm_vcpu *vcpu,
>   	struct kvm *kvm = vcpu->kvm;
>   
>   	switch(reg_to_encoding(r)) {
> +	case SYS_ERXPFGCDN_EL1:
> +	case SYS_ERXPFGCTL_EL1:
> +	case SYS_ERXPFGF_EL1:
> +	case SYS_ERXMISC2_EL1:
> +	case SYS_ERXMISC3_EL1:
> +		if (!(kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, V1P1) ||
> +		      (kvm_has_feat_enum(kvm, ID_AA64PFR0_EL1, RAS, IMP) &&
> +		       kvm_has_feat(kvm, ID_AA64PFR1_EL1, RAS_frac, RASv1p1)))) {
> +			kvm_inject_undefined(vcpu);
> +			return false;
> +		}
> +		break;
>   	default:
>   		if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, IMP)) {
>   			kvm_inject_undefined(vcpu);
The default condition needs updating for the case when 
ID_AA64PFR0_EL1.RAS = b10, otherwise access to the non-v1 specific RAS
registers will result in an UNDEF being injected.

> @@ -3063,8 +3075,13 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>   	{ SYS_DESC(SYS_ERXCTLR_EL1), access_ras },
>   	{ SYS_DESC(SYS_ERXSTATUS_EL1), access_ras },
>   	{ SYS_DESC(SYS_ERXADDR_EL1), access_ras },
> +	{ SYS_DESC(SYS_ERXPFGF_EL1), access_ras },
> +	{ SYS_DESC(SYS_ERXPFGCTL_EL1), access_ras },
> +	{ SYS_DESC(SYS_ERXPFGCDN_EL1), access_ras },
>   	{ SYS_DESC(SYS_ERXMISC0_EL1), access_ras },
>   	{ SYS_DESC(SYS_ERXMISC1_EL1), access_ras },
> +	{ SYS_DESC(SYS_ERXMISC2_EL1), access_ras },
> +	{ SYS_DESC(SYS_ERXMISC3_EL1), access_ras },
>   
>   	MTE_REG(TFSR_EL1),
>   	MTE_REG(TFSRE0_EL1),

-- 
Thanks,

Ben



* Re: [PATCH v3 2/6] KVM: arm64: Handle RASv1p1 registers
  2025-08-21 13:13   ` Ben Horgan
@ 2025-08-21 13:37     ` Marc Zyngier
  2025-08-21 13:44       ` Ben Horgan
  0 siblings, 1 reply; 18+ messages in thread
From: Marc Zyngier @ 2025-08-21 13:37 UTC (permalink / raw)
  To: Ben Horgan
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas,
	Cornelia Huck

On Thu, 21 Aug 2025 14:13:52 +0100,
Ben Horgan <ben.horgan@arm.com> wrote:
> 
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 82ffb3b3b3cf7..feb1a7a708e25 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -2697,6 +2697,18 @@ static bool access_ras(struct kvm_vcpu *vcpu,
> >   	struct kvm *kvm = vcpu->kvm;
> >     	switch(reg_to_encoding(r)) {
> > +	case SYS_ERXPFGCDN_EL1:
> > +	case SYS_ERXPFGCTL_EL1:
> > +	case SYS_ERXPFGF_EL1:
> > +	case SYS_ERXMISC2_EL1:
> > +	case SYS_ERXMISC3_EL1:
> > +		if (!(kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, V1P1) ||
> > +		      (kvm_has_feat_enum(kvm, ID_AA64PFR0_EL1, RAS, IMP) &&
> > +		       kvm_has_feat(kvm, ID_AA64PFR1_EL1, RAS_frac, RASv1p1)))) {
> > +			kvm_inject_undefined(vcpu);
> > +			return false;
> > +		}
> > +		break;
> >   	default:
> >   		if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, IMP)) {
> >   			kvm_inject_undefined(vcpu);
> The default condition needs updating for the case when
> ID_AA64PFR0_EL1.RAS = b10, otherwise access to the non-v1 specific RAS
> registers will result in an UNDEF being injected.

I don't think so. The RAS field is described as such:

	UnsignedEnum    31:28   RAS
	        0b0000  NI
	        0b0001  IMP
	        0b0010  V1P1
	        0b0011  V2
	EndEnum

Since this is an unsigned enum, this checks for a value < IMP. Only
when RAS is not implemented is this condition satisfied and an UNDEF
injected.
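
To make that concrete, here is a minimal sketch of the semantics I'm
relying on (plain C for illustration, not the actual kernel macro
bodies; the helper names below are made up):

	#include <stdbool.h>
	#include <stdint.h>

	/* 'field' is the 4-bit RAS value the guest sees in ID_AA64PFR0_EL1 */
	static bool feat_at_least(uint8_t field, uint8_t min)
	{
		return field >= min;	/* ~kvm_has_feat() on an unsigned enum */
	}

	static bool feat_exactly(uint8_t field, uint8_t val)
	{
		return field == val;	/* ~kvm_has_feat_enum() */
	}

	/*
	 * With RAS = 0b0010 (V1P1), feat_at_least(RAS, IMP) is still
	 * true, so the default case only injects an UNDEF when
	 * RAS = 0b0000 (NI).
	 */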

Or am I missing something obvious here (I wouldn't be surprised...)?

	M.

-- 
Jazz isn't dead. It just smells funny.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 6/6] KVM: arm64: Get rid of ARM64_FEATURE_MASK()
  2025-08-21 11:29   ` Ben Horgan
@ 2025-08-21 13:43     ` Marc Zyngier
  0 siblings, 0 replies; 18+ messages in thread
From: Marc Zyngier @ 2025-08-21 13:43 UTC (permalink / raw)
  To: Ben Horgan
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas,
	Cornelia Huck

On Thu, 21 Aug 2025 12:29:43 +0100,
Ben Horgan <ben.horgan@arm.com> wrote:
> 
> Hi Marc,
> 
> On 8/17/25 21:21, Marc Zyngier wrote:
> > The ARM64_FEATURE_MASK() macro was a hack introduced whilst the
> > automatic generation of sysreg encodings was being added, and was
> > too unreliable to be entirely trusted.
> > 
> > We are in a better place now, and we could really do without this
> > macro. Get rid of it altogether.
> > 
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >   arch/arm64/include/asm/sysreg.h               |  3 --
> >   arch/arm64/kvm/arm.c                          |  8 ++--
> >   arch/arm64/kvm/sys_regs.c                     | 40 +++++++++----------
> >   tools/arch/arm64/include/asm/sysreg.h         |  3 --
> >   .../selftests/kvm/arm64/aarch32_id_regs.c     |  2 +-
> >   .../selftests/kvm/arm64/debug-exceptions.c    | 12 +++---
> >   .../testing/selftests/kvm/arm64/no-vgic-v3.c  |  4 +-
> >   .../selftests/kvm/arm64/page_fault_test.c     |  6 +--
> >   .../testing/selftests/kvm/arm64/set_id_regs.c |  8 ++--
> >   .../selftests/kvm/arm64/vpmu_counter_access.c |  2 +-
> >   .../selftests/kvm/lib/arm64/processor.c       |  6 +--
> >   11 files changed, 44 insertions(+), 50 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> > index d5b5f2ae1afaa..6604fd6f33f45 100644
> > --- a/arch/arm64/include/asm/sysreg.h
> > +++ b/arch/arm64/include/asm/sysreg.h
> > @@ -1142,9 +1142,6 @@
> >     #define ARM64_FEATURE_FIELD_BITS	4
> While you're at it, consider getting rid of ARM64_FEATURE_FIELD_BITS
> too. This is only used in the set_id_regs.c selftest.

I don't really understand what this test (like most tests) is doing,
so I'm not going to touch it. If you figure it out, feel free to send
a patch.
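
FWIW, the ARM64_FEATURE_MASK() conversion itself is mostly mechanical;
the general shape is something like the below (illustrative only, with
ID_AA64PFR0_EL1.RAS picked as an arbitrary example rather than an
exact hunk from the series):

	/* before: go through the wrapper macro */
-	val &= ~ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_RAS);
	/* after: use the *_MASK definition generated from sysreg */
+	val &= ~ID_AA64PFR0_EL1_RAS_MASK;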

Thanks,

	M.

-- 
Jazz isn't dead. It just smells funny.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 2/6] KVM: arm64: Handle RASv1p1 registers
  2025-08-21 13:37     ` Marc Zyngier
@ 2025-08-21 13:44       ` Ben Horgan
  0 siblings, 0 replies; 18+ messages in thread
From: Ben Horgan @ 2025-08-21 13:44 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, kvm, Joey Gouly, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas,
	Cornelia Huck

Hi Marc,

On 8/21/25 14:37, Marc Zyngier wrote:
> On Thu, 21 Aug 2025 14:13:52 +0100,
> Ben Horgan <ben.horgan@arm.com> wrote:
>>
>>> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
>>> index 82ffb3b3b3cf7..feb1a7a708e25 100644
>>> --- a/arch/arm64/kvm/sys_regs.c
>>> +++ b/arch/arm64/kvm/sys_regs.c
>>> @@ -2697,6 +2697,18 @@ static bool access_ras(struct kvm_vcpu *vcpu,
>>>   	struct kvm *kvm = vcpu->kvm;
>>>     	switch(reg_to_encoding(r)) {
>>> +	case SYS_ERXPFGCDN_EL1:
>>> +	case SYS_ERXPFGCTL_EL1:
>>> +	case SYS_ERXPFGF_EL1:
>>> +	case SYS_ERXMISC2_EL1:
>>> +	case SYS_ERXMISC3_EL1:
>>> +		if (!(kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, V1P1) ||
>>> +		      (kvm_has_feat_enum(kvm, ID_AA64PFR0_EL1, RAS, IMP) &&
>>> +		       kvm_has_feat(kvm, ID_AA64PFR1_EL1, RAS_frac, RASv1p1)))) {
>>> +			kvm_inject_undefined(vcpu);
>>> +			return false;
>>> +		}
>>> +		break;
>>>   	default:
>>>   		if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, IMP)) {
>>>   			kvm_inject_undefined(vcpu);
>> The default condition needs updating for the case when
>> ID_AA64PFR0_EL1.RAS = 0b0010; otherwise, accesses to the non-v1-specific
>> RAS registers will result in an UNDEF being injected.
> 
> I don't think so. The RAS field is described as such:
> 
> 	UnsignedEnum    31:28   RAS
> 	        0b0000  NI
>         	0b0001  IMP
> 	        0b0010  V1P1
> 	        0b0011  V2
> 	EndEnum
> 
> Since this is an unsigned enum, this checks for a value < IMP. Only
> when RAS is not implemented is this condition satisfied and an UNDEF
> injected.
> 
> Or am I missing something obvious here (I wouldn't be surprised...)?

No, you are indeed correct. I missed the difference between
kvm_has_feat_enum() and kvm_has_feat(). Sorry for the noise.

> 
> 	M.
> 

-- 
Thanks,

Ben


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v3 0/6] KVM: arm64: FEAT_RASv1p1 support and RAS selection
  2025-08-17 20:21 [PATCH v3 0/6] KVM: arm64: FEAT_RASv1p1 support and RAS selection Marc Zyngier
                   ` (5 preceding siblings ...)
  2025-08-17 20:21 ` [PATCH v3 6/6] KVM: arm64: Get rid of ARM64_FEATURE_MASK() Marc Zyngier
@ 2025-08-22  0:01 ` Oliver Upton
  6 siblings, 0 replies; 18+ messages in thread
From: Oliver Upton @ 2025-08-22  0:01 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, kvm, Marc Zyngier
  Cc: Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Will Deacon, Catalin Marinas, Cornelia Huck

On Sun, 17 Aug 2025 21:21:52 +0100, Marc Zyngier wrote:
> This is the next iteration of this series trying to plug some of our
> RAS holes (no pun intended...). See [1] for the original series.
> 
> The difference with the previous drop is that we don't try to expose a
> canonical encoding RASv1p1. Which means you must migrate between
> similar implementations for now.
> 
> [...]

Applied to fixes, thanks!

[1/6] arm64: Add capability denoting FEAT_RASv1p1
      https://git.kernel.org/kvmarm/kvmarm/c/8049164653c6
[2/6] KVM: arm64: Handle RASv1p1 registers
      https://git.kernel.org/kvmarm/kvmarm/c/d7b3e23f945b
[3/6] KVM: arm64: Ignore HCR_EL2.FIEN set by L1 guest's EL2
      https://git.kernel.org/kvmarm/kvmarm/c/9049fb1227a2
[4/6] KVM: arm64: Make ID_AA64PFR0_EL1.RAS writable
      https://git.kernel.org/kvmarm/kvmarm/c/1fab657cb2a0
[5/6] KVM: arm64: Make ID_AA64PFR1_EL1.RAS_frac writable
      https://git.kernel.org/kvmarm/kvmarm/c/7a765aa88e34
[6/6] KVM: arm64: Get rid of ARM64_FEATURE_MASK()
      https://git.kernel.org/kvmarm/kvmarm/c/0843e0ced338

--
Best,
Oliver

^ permalink raw reply	[flat|nested] 18+ messages in thread

