* [PATCH v2 00/25] KVM/arm64: VM configuration enforcement
@ 2024-01-30 20:45 Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 01/25] arm64: sysreg: Add missing ID_AA64ISAR[13]_EL1 fields and variants Marc Zyngier
` (24 more replies)
0 siblings, 25 replies; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
This is the second version of this configuration enforcement series
after some really awesome reviewing from Joey.
I think I have taken most of the feedback into account, but please
shout if I have ignored something.
* From v1: [1]
- Fix embarrassing crash with FEAT_MOPS
- Better error handling in the FGT code
- Added/Fixed comments
- Simplified the __vcpu_sys_reg() macro
- Fixed FEAT_PIR handling
- Folded in Oliver's PMU rework
[1] https://lore.kernel.org/all/20240122201852.262057-1-maz@kernel.org
Marc Zyngier (25):
arm64: sysreg: Add missing ID_AA64ISAR[13]_EL1 fields and variants
KVM: arm64: Add feature checking helpers
KVM: arm64: nv: Add sanitising to VNCR-backed sysregs
KVM: arm64: nv: Add sanitising to EL2 configuration registers
KVM: arm64: nv: Add sanitising to VNCR-backed FGT sysregs
KVM: arm64: nv: Add sanitising to VNCR-backed HCRX_EL2
KVM: arm64: nv: Drop sanitised_sys_reg() helper
KVM: arm64: Unify HDFG[WR]TR_GROUP FGT identifiers
KVM: arm64: nv: Correctly handle negative polarity FGTs
KVM: arm64: nv: Turn encoding ranges into discrete XArray stores
KVM: arm64: Drop the requirement for XARRAY_MULTI
KVM: arm64: nv: Move system instructions to their own sys_reg_desc
array
KVM: arm64: Always populate the trap configuration xarray
KVM: arm64: Register AArch64 system register entries with the sysreg
xarray
KVM: arm64: Use the xarray as the primary sysreg/sysinsn walker
KVM: arm64: Rename __check_nv_sr_forward() to triage_sysreg_trap()
KVM: arm64: Add Fine-Grained UNDEF tracking information
KVM: arm64: Propagate and handle Fine-Grained UNDEF bits
KVM: arm64: Move existing feature disabling over to FGU infrastructure
KVM: arm64: Streamline save/restore of HFG[RW]TR_EL2
KVM: arm64: Make TLBI OS/Range UNDEF if not advertised to the guest
KVM: arm64: Make PIR{,E0}_EL1 UNDEF if S1PIE is not advertised to the
guest
KVM: arm64: Make AMU sysreg UNDEF if FEAT_AMU is not advertised to the
guest
KVM: arm64: Make FEAT_MOPS UNDEF if not advertised to the guest
KVM: arm64: Add debugfs file for guest's ID registers
arch/arm64/include/asm/kvm_arm.h | 4 +-
arch/arm64/include/asm/kvm_host.h | 107 ++++++++-
arch/arm64/include/asm/kvm_nested.h | 1 -
arch/arm64/kvm/Kconfig | 1 -
arch/arm64/kvm/arm.c | 7 +
arch/arm64/kvm/emulate-nested.c | 231 +++++++++++++-----
arch/arm64/kvm/hyp/include/hyp/switch.h | 130 +++++-----
arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 24 +-
arch/arm64/kvm/nested.c | 265 ++++++++++++++++++++-
arch/arm64/kvm/pmu-emul.c | 11 +-
arch/arm64/kvm/sys_regs.c | 235 +++++++++++++++---
arch/arm64/kvm/sys_regs.h | 2 +
arch/arm64/tools/sysreg | 8 +-
include/kvm/arm_pmu.h | 11 -
14 files changed, 856 insertions(+), 181 deletions(-)
--
2.39.2
* [PATCH v2 01/25] arm64: sysreg: Add missing ID_AA64ISAR[13]_EL1 fields and variants
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 02/25] KVM: arm64: Add feature checking helpers Marc Zyngier
` (23 subsequent siblings)
24 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
Despite having the control bits for FEAT_SPECRES and FEAT_PACM,
the ID register fields are either incomplete or missing.
Fix it.
Reviewed-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/tools/sysreg | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index fa3fe0856880..53daaaef46cb 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -1366,6 +1366,7 @@ EndEnum
UnsignedEnum 43:40 SPECRES
0b0000 NI
0b0001 IMP
+ 0b0010 COSP_RCTX
EndEnum
UnsignedEnum 39:36 SB
0b0000 NI
@@ -1492,7 +1493,12 @@ EndEnum
EndSysreg
Sysreg ID_AA64ISAR3_EL1 3 0 0 6 3
-Res0 63:12
+Res0 63:16
+UnsignedEnum 15:12 PACM
+ 0b0000 NI
+ 0b0001 TRIVIAL_IMP
+ 0b0010 FULL_IMP
+EndEnum
UnsignedEnum 11:8 TLBIW
0b0000 NI
0b0001 IMP
--
2.39.2
* [PATCH v2 02/25] KVM: arm64: Add feature checking helpers
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 01/25] arm64: sysreg: Add missing ID_AA64ISAR[13]_EL1 fields and variants Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
2024-02-02 17:13 ` Suzuki K Poulose
2024-01-30 20:45 ` [PATCH v2 03/25] KVM: arm64: nv: Add sanitising to VNCR-backed sysregs Marc Zyngier
` (22 subsequent siblings)
24 siblings, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
In order to make it easier to check whether a particular feature
is exposed to a guest, add a new set of helpers, with kvm_has_feat()
being the most useful.
Let's start making use of them in the PMU code (courtesy of Oliver).
Follow-up work will introduce additional use patterns.
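As an aside for readers (illustrative, not part of the patch): the signed
variants exist because some ID register fields, PMUVer among them, are
signed, so 0xf (IMP_DEF) must compare below any implemented version. A
standalone sketch of the underlying sign-extension arithmetic:

    /* Plain C illustration, not kernel code */
    #include <stdint.h>
    #include <stdio.h>

    /* Sign-extend a 'width'-bit field sitting at bit 0 */
    static int64_t sign_extend_field(uint64_t val, unsigned int width)
    {
            int64_t v = (int64_t)(val << (64 - width));
            return v >> (64 - width); /* arithmetic shift copies the sign bit */
    }

    int main(void)
    {
            /* 0xf in a 4-bit signed field (PMUVer == IMP_DEF) reads as -1,
             * so it compares below IMP (1), as intended. */
            printf("%lld %lld\n",
                   (long long)sign_extend_field(0xf, 4),  /* -1 */
                   (long long)sign_extend_field(0x1, 4)); /* 1 */
            return 0;
    }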
Co-developed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 53 +++++++++++++++++++++++++++++++
arch/arm64/kvm/pmu-emul.c | 11 ++++---
arch/arm64/kvm/sys_regs.c | 6 ++--
include/kvm/arm_pmu.h | 11 -------
4 files changed, 61 insertions(+), 20 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 21c57b812569..c0cf9c5f5e8d 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1233,4 +1233,57 @@ static inline void kvm_hyp_reserve(void) { }
void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu);
+#define __expand_field_sign_unsigned(id, fld, val) \
+ ((u64)(id##_##fld##_##val))
+
+#define __expand_field_sign_signed(id, fld, val) \
+ ({ \
+ s64 __val = id##_##fld##_##val; \
+ __val <<= 64 - id##_##fld##_WIDTH; \
+ __val >>= 64 - id##_##fld##_SHIFT - id##_##fld##_WIDTH; \
+ \
+ __val; \
+ })
+
+#define expand_field_sign(id, fld, val) \
+ (id##_##fld##_SIGNED ? \
+ __expand_field_sign_signed(id, fld, val) : \
+ __expand_field_sign_unsigned(id, fld, val))
+
+#define get_idreg_field_unsigned(kvm, id, fld) \
+ ({ \
+ u64 __val = IDREG(kvm, SYS_##id); \
+ __val &= id##_##fld##_MASK; \
+ __val >>= id##_##fld##_SHIFT; \
+ \
+ __val; \
+ })
+
+#define get_idreg_field_signed(kvm, id, fld) \
+ ({ \
+ s64 __val = IDREG(kvm, SYS_##id); \
+ __val <<= 64 - id##_##fld##_SHIFT - id##_##fld##_WIDTH; \
+ __val >>= 64 - id##_##fld##_SHIFT - id##_##fld##_WIDTH; \
+ \
+ __val; \
+ })
+
+#define get_idreg_field_enum(kvm, id, fld) \
+ get_idreg_field_unsigned(kvm, id, fld)
+
+#define get_idreg_field(kvm, id, fld) \
+ (id##_##fld##_SIGNED ? \
+ get_idreg_field_signed(kvm, id, fld) : \
+ get_idreg_field_unsigned(kvm, id, fld))
+
+#define kvm_has_feat(kvm, id, fld, limit) \
+ (get_idreg_field((kvm), id, fld) >= expand_field_sign(id, fld, limit))
+
+#define kvm_has_feat_enum(kvm, id, fld, limit) \
+ (get_idreg_field_unsigned((kvm), id, fld) == id##_##fld##_##limit)
+
+#define kvm_has_feat_range(kvm, id, fld, min, max) \
+ (get_idreg_field((kvm), id, fld) >= expand_field_sign(id, fld, min) && \
+ get_idreg_field((kvm), id, fld) <= expand_field_sign(id, fld, max))
+
#endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
index 3d9467ff73bc..925522470b2b 100644
--- a/arch/arm64/kvm/pmu-emul.c
+++ b/arch/arm64/kvm/pmu-emul.c
@@ -64,12 +64,11 @@ u64 kvm_pmu_evtyper_mask(struct kvm *kvm)
{
u64 mask = ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0 |
kvm_pmu_event_mask(kvm);
- u64 pfr0 = IDREG(kvm, SYS_ID_AA64PFR0_EL1);
- if (SYS_FIELD_GET(ID_AA64PFR0_EL1, EL2, pfr0))
+ if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL2, IMP))
mask |= ARMV8_PMU_INCLUDE_EL2;
- if (SYS_FIELD_GET(ID_AA64PFR0_EL1, EL3, pfr0))
+ if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL3, IMP))
mask |= ARMV8_PMU_EXCLUDE_NS_EL0 |
ARMV8_PMU_EXCLUDE_NS_EL1 |
ARMV8_PMU_EXCLUDE_EL3;
@@ -83,8 +82,10 @@ u64 kvm_pmu_evtyper_mask(struct kvm *kvm)
*/
static bool kvm_pmc_is_64bit(struct kvm_pmc *pmc)
{
+ struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
+
return (pmc->idx == ARMV8_PMU_CYCLE_IDX ||
- kvm_pmu_is_3p5(kvm_pmc_to_vcpu(pmc)));
+ kvm_has_feat(vcpu->kvm, ID_AA64DFR0_EL1, PMUVer, V3P5));
}
static bool kvm_pmc_has_64bit_overflow(struct kvm_pmc *pmc)
@@ -556,7 +557,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
return;
/* Fixup PMCR_EL0 to reconcile the PMU version and the LP bit */
- if (!kvm_pmu_is_3p5(vcpu))
+ if (!kvm_has_feat(vcpu->kvm, ID_AA64DFR0_EL1, PMUVer, V3P5))
val &= ~ARMV8_PMU_PMCR_LP;
/* The reset bits don't indicate any state, and shouldn't be saved. */
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 041b11825578..3c31f8cb9eef 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -505,10 +505,9 @@ static bool trap_loregion(struct kvm_vcpu *vcpu,
struct sys_reg_params *p,
const struct sys_reg_desc *r)
{
- u64 val = IDREG(vcpu->kvm, SYS_ID_AA64MMFR1_EL1);
u32 sr = reg_to_encoding(r);
- if (!(val & (0xfUL << ID_AA64MMFR1_EL1_LO_SHIFT))) {
+ if (!kvm_has_feat(vcpu->kvm, ID_AA64MMFR1_EL1, LO, IMP)) {
kvm_inject_undefined(vcpu);
return false;
}
@@ -2748,8 +2747,7 @@ static bool trap_dbgdidr(struct kvm_vcpu *vcpu,
return ignore_write(vcpu, p);
} else {
u64 dfr = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);
- u64 pfr = IDREG(vcpu->kvm, SYS_ID_AA64PFR0_EL1);
- u32 el3 = !!SYS_FIELD_GET(ID_AA64PFR0_EL1, EL3, pfr);
+ u32 el3 = kvm_has_feat(vcpu->kvm, ID_AA64PFR0_EL1, EL3, IMP);
p->regval = ((SYS_FIELD_GET(ID_AA64DFR0_EL1, WRPs, dfr) << 28) |
(SYS_FIELD_GET(ID_AA64DFR0_EL1, BRPs, dfr) << 24) |
diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
index 4b9d8fb393a8..eb4c369a79eb 100644
--- a/include/kvm/arm_pmu.h
+++ b/include/kvm/arm_pmu.h
@@ -90,16 +90,6 @@ void kvm_vcpu_pmu_resync_el0(void);
vcpu->arch.pmu.events = *kvm_get_pmu_events(); \
} while (0)
-/*
- * Evaluates as true when emulating PMUv3p5, and false otherwise.
- */
-#define kvm_pmu_is_3p5(vcpu) ({ \
- u64 val = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1); \
- u8 pmuver = SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, val); \
- \
- pmuver >= ID_AA64DFR0_EL1_PMUVer_V3P5; \
-})
-
u8 kvm_arm_pmu_get_pmuver_limit(void);
u64 kvm_pmu_evtyper_mask(struct kvm *kvm);
int kvm_arm_set_default_pmu(struct kvm *kvm);
@@ -168,7 +158,6 @@ static inline u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
}
#define kvm_vcpu_has_pmu(vcpu) ({ false; })
-#define kvm_pmu_is_3p5(vcpu) ({ false; })
static inline void kvm_pmu_update_vcpu_events(struct kvm_vcpu *vcpu) {}
static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {}
static inline void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu) {}
--
2.39.2
* [PATCH v2 03/25] KVM: arm64: nv: Add sanitising to VNCR-backed sysregs
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 01/25] arm64: sysreg: Add missing ID_AA64ISAR[13]_EL1 fields and variants Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 02/25] KVM: arm64: Add feature checking helpers Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
2024-01-31 14:52 ` Joey Gouly
2024-01-30 20:45 ` [PATCH v2 04/25] KVM: arm64: nv: Add sanitising to EL2 configuration registers Marc Zyngier
` (21 subsequent siblings)
24 siblings, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
VNCR-backed "registers" are actually only memory. Which means that
there is zero control over what the guest can write, and that it
is the hypervisor's job to actually sanitise the content of the
backing store. Yeah, this is fun.
In order to preserve some form of sanity, add a repainting mechanism
that makes use of a per-VM set of RES0/RES1 masks, one pair per VNCR
register. These masks get applied on access to the backing store via
__vcpu_sys_reg(), ensuring that the state that is consumed by KVM is
correct.
So far, nothing populates these masks, but stay tuned.
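For illustration (invented values, not taken from this patch): with
res0 = GENMASK(63, 56) and res1 = BIT(31), a guest-written value is
repainted on access as follows:

    u64 v = 0xffffffff00000000;  /* value sitting in the backing store */
    v &= ~GENMASK(63, 56);       /* clear RES0 bits -> 0x00ffffff00000000 */
    v |= BIT(31);                /* force RES1 bits -> 0x00ffffff80000000 */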
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 22 ++++++++++++++++-
arch/arm64/kvm/arm.c | 1 +
arch/arm64/kvm/nested.c | 41 ++++++++++++++++++++++++++++++-
3 files changed, 62 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index c0cf9c5f5e8d..816288e2d735 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -238,6 +238,8 @@ static inline u16 kvm_mpidr_index(struct kvm_mpidr_data *data, u64 mpidr)
return index;
}
+struct kvm_sysreg_masks;
+
struct kvm_arch {
struct kvm_s2_mmu mmu;
@@ -312,6 +314,9 @@ struct kvm_arch {
#define KVM_ARM_ID_REG_NUM (IDREG_IDX(sys_reg(3, 0, 0, 7, 7)) + 1)
u64 id_regs[KVM_ARM_ID_REG_NUM];
+ /* Masks for VNCR-backed sysregs */
+ struct kvm_sysreg_masks *sysreg_masks;
+
/*
* For an untrusted host VM, 'pkvm.handle' is used to lookup
* the associated pKVM instance in the hypervisor.
@@ -474,6 +479,13 @@ enum vcpu_sysreg {
NR_SYS_REGS /* Nothing after this line! */
};
+struct kvm_sysreg_masks {
+ struct {
+ u64 res0;
+ u64 res1;
+ } mask[NR_SYS_REGS - __VNCR_START__];
+};
+
struct kvm_cpu_context {
struct user_pt_regs regs; /* sp = sp_el0 */
@@ -868,7 +880,15 @@ static inline u64 *__ctxt_sys_reg(const struct kvm_cpu_context *ctxt, int r)
#define ctxt_sys_reg(c,r) (*__ctxt_sys_reg(c,r))
-#define __vcpu_sys_reg(v,r) (ctxt_sys_reg(&(v)->arch.ctxt, (r)))
+u64 kvm_vcpu_sanitise_vncr_reg(const struct kvm_vcpu *, enum vcpu_sysreg);
+#define __vcpu_sys_reg(v,r) \
+ (*({ \
+ const struct kvm_cpu_context *ctxt = &(v)->arch.ctxt; \
+ u64 *__r = __ctxt_sys_reg(ctxt, (r)); \
+ if (vcpu_has_nv((v)) && (r) >= __VNCR_START__) \
+ *__r = kvm_vcpu_sanitise_vncr_reg((v), (r)); \
+ __r; \
+ }))
u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg);
void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a25265aca432..c063e84fc72c 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -206,6 +206,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
pkvm_destroy_hyp_vm(kvm);
kfree(kvm->arch.mpidr_data);
+ kfree(kvm->arch.sysreg_masks);
kvm_destroy_vcpus(kvm);
kvm_unshare_hyp(kvm, kvm + 1);
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index d55e809e26cb..c976cd4b8379 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -163,15 +163,54 @@ static u64 limit_nv_id_reg(u32 id, u64 val)
return val;
}
+
+u64 kvm_vcpu_sanitise_vncr_reg(const struct kvm_vcpu *vcpu, enum vcpu_sysreg sr)
+{
+ u64 v = ctxt_sys_reg(&vcpu->arch.ctxt, sr);
+ struct kvm_sysreg_masks *masks;
+
+ masks = vcpu->kvm->arch.sysreg_masks;
+
+ if (masks) {
+ sr -= __VNCR_START__;
+
+ v &= ~masks->mask[sr].res0;
+ v |= masks->mask[sr].res1;
+ }
+
+ return v;
+}
+
+static void __maybe_unused set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u64 res1)
+{
+ int i = sr - __VNCR_START__;
+
+ kvm->arch.sysreg_masks->mask[i].res0 = res0;
+ kvm->arch.sysreg_masks->mask[i].res1 = res1;
+}
+
int kvm_init_nv_sysregs(struct kvm *kvm)
{
+ int ret = 0;
+
mutex_lock(&kvm->arch.config_lock);
+ if (kvm->arch.sysreg_masks)
+ goto out;
+
+ kvm->arch.sysreg_masks = kzalloc(sizeof(*(kvm->arch.sysreg_masks)),
+ GFP_KERNEL);
+ if (!kvm->arch.sysreg_masks) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
for (int i = 0; i < KVM_ARM_ID_REG_NUM; i++)
kvm->arch.id_regs[i] = limit_nv_id_reg(IDX_IDREG(i),
kvm->arch.id_regs[i]);
+out:
mutex_unlock(&kvm->arch.config_lock);
- return 0;
+ return ret;
}
--
2.39.2
* [PATCH v2 04/25] KVM: arm64: nv: Add sanitising to EL2 configuration registers
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
` (2 preceding siblings ...)
2024-01-30 20:45 ` [PATCH v2 03/25] KVM: arm64: nv: Add sanitising to VNCR-backed sysregs Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
2024-01-31 17:16 ` Joey Gouly
2024-02-01 14:56 ` Joey Gouly
2024-01-30 20:45 ` [PATCH v2 05/25] KVM: arm64: nv: Add sanitising to VNCR-backed FGT sysregs Marc Zyngier
` (20 subsequent siblings)
24 siblings, 2 replies; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
We can now start making use of our sanitising masks by setting them
to values that depend on the guest's configuration.
First up are VTTBR_EL2, VTCR_EL2, VMPIDR_EL2 and HCR_EL2.
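Taking a single bit as an example (illustrative, based on the hunk below):
a VM without FEAT_TWED gets HCR_EL2[63:59] folded into res0, so whatever
TWE delay the guest programs is repainted away on access:

    /* Illustration only: FEAT_TWED absent for this VM */
    u64 hcr = __vcpu_sys_reg(vcpu, HCR_EL2);
    WARN_ON(hcr & GENMASK(63, 59)); /* never fires: bits read as RES0 */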
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/nested.c | 56 ++++++++++++++++++++++++++++++++++++++++-
1 file changed, 55 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index c976cd4b8379..ee461e630527 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -181,7 +181,7 @@ u64 kvm_vcpu_sanitise_vncr_reg(const struct kvm_vcpu *vcpu, enum vcpu_sysreg sr)
return v;
}
-static void __maybe_unused set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u64 res1)
+static void set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u64 res1)
{
int i = sr - __VNCR_START__;
@@ -191,6 +191,7 @@ static void __maybe_unused set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u
int kvm_init_nv_sysregs(struct kvm *kvm)
{
+ u64 res0, res1;
int ret = 0;
mutex_lock(&kvm->arch.config_lock);
@@ -209,6 +210,59 @@ int kvm_init_nv_sysregs(struct kvm *kvm)
kvm->arch.id_regs[i] = limit_nv_id_reg(IDX_IDREG(i),
kvm->arch.id_regs[i]);
+ /* VTTBR_EL2 */
+ res0 = res1 = 0;
+ if (!kvm_has_feat_enum(kvm, ID_AA64MMFR1_EL1, VMIDBits, 16))
+ res0 |= GENMASK(63, 56);
+ set_sysreg_masks(kvm, VTTBR_EL2, res0, res1);
+
+ /* VTCR_EL2 */
+ res0 = GENMASK(63, 32) | GENMASK(30, 20);
+ res1 = BIT(31);
+ set_sysreg_masks(kvm, VTCR_EL2, res0, res1);
+
+ /* VMPIDR_EL2 */
+ res0 = GENMASK(63, 40) | GENMASK(30, 24);
+ res1 = BIT(31);
+ set_sysreg_masks(kvm, VMPIDR_EL2, res0, res1);
+
+ /* HCR_EL2 */
+ res0 = BIT(48);
+ res1 = HCR_RW;
+ if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, TWED, IMP))
+ res0 |= GENMASK(63, 59);
+ if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, MTE, MTE2))
+ res0 |= (HCR_TID5 | HCR_DCT | HCR_ATA);
+ if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, EVT, TTLBxS))
+ res0 |= (HCR_TTLBIS | HCR_TTLBOS);
+ if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, CSV2, CSV2_2) &&
+ !kvm_has_feat(kvm, ID_AA64PFR1_EL1, CSV2_frac, CSV2_1p2))
+ res0 |= HCR_ENSCXT;
+ if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, EVT, IMP))
+ res0 |= (HCR_TID4 | HCR_TICAB | HCR_TOCU);
+ if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, V1P1))
+ res0 |= HCR_AMVOFFEN;
+ if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, V1P1))
+ res0 |= HCR_FIEN;
+ if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, FWB, IMP))
+ res0 |= HCR_FWB;
+ if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, NV, NV2))
+ res0 |= HCR_NV2;
+ if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, NV, IMP))
+ res0 |= (HCR_AT | HCR_NV1 | HCR_NV);
+ if (!(__vcpu_has_feature(&kvm->arch, KVM_ARM_VCPU_PTRAUTH_ADDRESS) &&
+ __vcpu_has_feature(&kvm->arch, KVM_ARM_VCPU_PTRAUTH_GENERIC)))
+ res0 |= (HCR_API | HCR_APK);
+ if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TME, IMP))
+ res0 |= BIT(39);
+ if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, IMP))
+ res0 |= (HCR_TERR | HCR_TEA);
+ if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, LO, IMP))
+ res0 |= HCR_TLOR;
+ if (!kvm_has_feat(kvm, ID_AA64MMFR4_EL1, E2H0, IMP))
+ res1 |= HCR_E2H;
+ set_sysreg_masks(kvm, HCR_EL2, res0, res1);
+
out:
mutex_unlock(&kvm->arch.config_lock);
--
2.39.2
* [PATCH v2 05/25] KVM: arm64: nv: Add sanitising to VNCR-backed FGT sysregs
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
` (3 preceding siblings ...)
2024-01-30 20:45 ` [PATCH v2 04/25] KVM: arm64: nv: Add sanitising to EL2 configuration registers Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
2024-02-01 14:38 ` Joey Gouly
2024-01-30 20:45 ` [PATCH v2 06/25] KVM: arm64: nv: Add sanitising to VNCR-backed HCRX_EL2 Marc Zyngier
` (19 subsequent siblings)
24 siblings, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
Fine Grained Traps are controlled by a whole bunch of features.
Each one of them must be checked and the corresponding masks
computed so that we don't let the guest apply traps it shouldn't
be using.
This takes care of HFGxTR_EL2, HDFGxTR_EL2, HFGITR_EL2, and HAFGRTR_EL2.
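The net effect, sketched with one concrete bit (illustrative): if the VM
lacks FEAT_LOR, HFGxTR_EL2_LORC_EL1 lands in res0, and the guest cannot
get KVM to act on that trap bit:

    /* Illustration: the guest sets LORC_EL1 in its shadow register... */
    __vcpu_sys_reg(vcpu, HFGRTR_EL2) |= HFGxTR_EL2_LORC_EL1;
    /* ...but with FEAT_LOR absent the bit is in res0 and reads back 0 */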
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/nested.c | 128 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 128 insertions(+)
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index ee461e630527..cdeef3259193 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -263,6 +263,134 @@ int kvm_init_nv_sysregs(struct kvm *kvm)
res1 |= HCR_E2H;
set_sysreg_masks(kvm, HCR_EL2, res0, res1);
+ /* HFG[RW]TR_EL2 */
+ res0 = res1 = 0;
+ if (!(__vcpu_has_feature(&kvm->arch, KVM_ARM_VCPU_PTRAUTH_ADDRESS) &&
+ __vcpu_has_feature(&kvm->arch, KVM_ARM_VCPU_PTRAUTH_GENERIC)))
+ res0 |= (HFGxTR_EL2_APDAKey | HFGxTR_EL2_APDBKey |
+ HFGxTR_EL2_APGAKey | HFGxTR_EL2_APIAKey |
+ HFGxTR_EL2_APIBKey);
+ if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, LO, IMP))
+ res0 |= (HFGxTR_EL2_LORC_EL1 | HFGxTR_EL2_LOREA_EL1 |
+ HFGxTR_EL2_LORID_EL1 | HFGxTR_EL2_LORN_EL1 |
+ HFGxTR_EL2_LORSA_EL1);
+ if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, CSV2, CSV2_2) &&
+ !kvm_has_feat(kvm, ID_AA64PFR1_EL1, CSV2_frac, CSV2_1p2))
+ res0 |= (HFGxTR_EL2_SCXTNUM_EL1 | HFGxTR_EL2_SCXTNUM_EL0);
+ if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, GIC, IMP))
+ res0 |= HFGxTR_EL2_ICC_IGRPENn_EL1;
+ if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, IMP))
+ res0 |= (HFGxTR_EL2_ERRIDR_EL1 | HFGxTR_EL2_ERRSELR_EL1 |
+ HFGxTR_EL2_ERXFR_EL1 | HFGxTR_EL2_ERXCTLR_EL1 |
+ HFGxTR_EL2_ERXSTATUS_EL1 | HFGxTR_EL2_ERXMISCn_EL1 |
+ HFGxTR_EL2_ERXPFGF_EL1 | HFGxTR_EL2_ERXPFGCTL_EL1 |
+ HFGxTR_EL2_ERXPFGCDN_EL1 | HFGxTR_EL2_ERXADDR_EL1);
+ if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_ACCDATA))
+ res0 |= HFGxTR_EL2_nACCDATA_EL1;
+ if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, GCS, IMP))
+ res0 |= (HFGxTR_EL2_nGCS_EL0 | HFGxTR_EL2_nGCS_EL1);
+ if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, SME, IMP))
+ res0 |= (HFGxTR_EL2_nSMPRI_EL1 | HFGxTR_EL2_nTPIDR2_EL0);
+ if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, THE, IMP))
+ res0 |= HFGxTR_EL2_nRCWMASK_EL1;
+ if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, S1PIE, IMP))
+ res0 |= (HFGxTR_EL2_nPIRE0_EL1 | HFGxTR_EL2_nPIR_EL1);
+ if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, S1POE, IMP))
+ res0 |= (HFGxTR_EL2_nPOR_EL0 | HFGxTR_EL2_nPOR_EL1);
+ if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, S2POE, IMP))
+ res0 |= HFGxTR_EL2_nS2POR_EL1;
+ if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, AIE, IMP))
+ res0 |= (HFGxTR_EL2_nMAIR2_EL1 | HFGxTR_EL2_nAMAIR2_EL1);
+ set_sysreg_masks(kvm, HFGRTR_EL2, res0 | __HFGRTR_EL2_RES0, res1);
+ set_sysreg_masks(kvm, HFGWTR_EL2, res0 | __HFGWTR_EL2_RES0, res1);
+
+ /* HDFG[RW]TR_EL2 */
+ res0 = res1 = 0;
+ if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, DoubleLock, IMP))
+ res0 |= HDFGRTR_EL2_OSDLR_EL1;
+ if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMUVer, IMP))
+ res0 |= (HDFGRTR_EL2_PMEVCNTRn_EL0 | HDFGRTR_EL2_PMEVTYPERn_EL0 |
+ HDFGRTR_EL2_PMCCFILTR_EL0 | HDFGRTR_EL2_PMCCNTR_EL0 |
+ HDFGRTR_EL2_PMCNTEN | HDFGRTR_EL2_PMINTEN |
+ HDFGRTR_EL2_PMOVS | HDFGRTR_EL2_PMSELR_EL0 |
+ HDFGRTR_EL2_PMMIR_EL1 | HDFGRTR_EL2_PMUSERENR_EL0 |
+ HDFGRTR_EL2_PMCEIDn_EL0);
+ if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMSVer, IMP))
+ res0 |= (HDFGRTR_EL2_PMBLIMITR_EL1 | HDFGRTR_EL2_PMBPTR_EL1 |
+ HDFGRTR_EL2_PMBSR_EL1 | HDFGRTR_EL2_PMSCR_EL1 |
+ HDFGRTR_EL2_PMSEVFR_EL1 | HDFGRTR_EL2_PMSFCR_EL1 |
+ HDFGRTR_EL2_PMSICR_EL1 | HDFGRTR_EL2_PMSIDR_EL1 |
+ HDFGRTR_EL2_PMSIRR_EL1 | HDFGRTR_EL2_PMSLATFR_EL1 |
+ HDFGRTR_EL2_PMBIDR_EL1);
+ if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, TraceVer, IMP))
+ res0 |= (HDFGRTR_EL2_TRC | HDFGRTR_EL2_TRCAUTHSTATUS |
+ HDFGRTR_EL2_TRCAUXCTLR | HDFGRTR_EL2_TRCCLAIM |
+ HDFGRTR_EL2_TRCCNTVRn | HDFGRTR_EL2_TRCID |
+ HDFGRTR_EL2_TRCIMSPECn | HDFGRTR_EL2_TRCOSLSR |
+ HDFGRTR_EL2_TRCPRGCTLR | HDFGRTR_EL2_TRCSEQSTR |
+ HDFGRTR_EL2_TRCSSCSRn | HDFGRTR_EL2_TRCSTATR |
+ HDFGRTR_EL2_TRCVICTLR);
+ if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, TraceBuffer, IMP))
+ res0 |= (HDFGRTR_EL2_TRBBASER_EL1 | HDFGRTR_EL2_TRBIDR_EL1 |
+ HDFGRTR_EL2_TRBLIMITR_EL1 | HDFGRTR_EL2_TRBMAR_EL1 |
+ HDFGRTR_EL2_TRBPTR_EL1 | HDFGRTR_EL2_TRBSR_EL1 |
+ HDFGRTR_EL2_TRBTRG_EL1);
+ if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, BRBE, IMP))
+ res0 |= (HDFGRTR_EL2_nBRBIDR | HDFGRTR_EL2_nBRBCTL |
+ HDFGRTR_EL2_nBRBDATA);
+ if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMSVer, V1P2))
+ res0 |= HDFGRTR_EL2_nPMSNEVFR_EL1;
+ set_sysreg_masks(kvm, HDFGRTR_EL2, res0 | HDFGRTR_EL2_RES0, res1);
+
+ /* Reuse the bits from the read-side and add the write-specific stuff */
+ if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMUVer, IMP))
+ res0 |= (HDFGWTR_EL2_PMCR_EL0 | HDFGWTR_EL2_PMSWINC_EL0);
+ if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, TraceVer, IMP))
+ res0 |= HDFGWTR_EL2_TRCOSLAR;
+ if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, TraceFilt, IMP))
+ res0 |= HDFGWTR_EL2_TRFCR_EL1;
+ set_sysreg_masks(kvm, HDFGWTR_EL2, res0 | HDFGWTR_EL2_RES0, res1);
+
+ /* HFGITR_EL2 */
+ res0 = HFGITR_EL2_RES0;
+ res1 = HFGITR_EL2_RES1;
+ if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, DPB, DPB2))
+ res0 |= HFGITR_EL2_DCCVADP;
+ if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, PAN, PAN2))
+ res0 |= (HFGITR_EL2_ATS1E1RP | HFGITR_EL2_ATS1E1WP);
+ if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
+ res0 |= (HFGITR_EL2_TLBIRVAALE1OS | HFGITR_EL2_TLBIRVALE1OS |
+ HFGITR_EL2_TLBIRVAAE1OS | HFGITR_EL2_TLBIRVAE1OS |
+ HFGITR_EL2_TLBIVAALE1OS | HFGITR_EL2_TLBIVALE1OS |
+ HFGITR_EL2_TLBIVAAE1OS | HFGITR_EL2_TLBIASIDE1OS |
+ HFGITR_EL2_TLBIVAE1OS | HFGITR_EL2_TLBIVMALLE1OS);
+ if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, RANGE))
+ res0 |= (HFGITR_EL2_TLBIRVAALE1 | HFGITR_EL2_TLBIRVALE1 |
+ HFGITR_EL2_TLBIRVAAE1 | HFGITR_EL2_TLBIRVAE1 |
+ HFGITR_EL2_TLBIRVAALE1IS | HFGITR_EL2_TLBIRVALE1IS |
+ HFGITR_EL2_TLBIRVAAE1IS | HFGITR_EL2_TLBIRVAE1IS |
+ HFGITR_EL2_TLBIRVAALE1OS | HFGITR_EL2_TLBIRVALE1OS |
+ HFGITR_EL2_TLBIRVAAE1OS | HFGITR_EL2_TLBIRVAE1OS);
+ if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, SPECRES, IMP))
+ res0 |= (HFGITR_EL2_CFPRCTX | HFGITR_EL2_DVPRCTX |
+ HFGITR_EL2_CPPRCTX);
+ if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, BRBE, IMP))
+ res0 |= (HFGITR_EL2_nBRBINJ | HFGITR_EL2_nBRBIALL);
+ if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, GCS, IMP))
+ res0 |= (HFGITR_EL2_nGCSPUSHM_EL1 | HFGITR_EL2_nGCSSTR_EL1 |
+ HFGITR_EL2_nGCSEPP);
+ if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, SPECRES, COSP_RCTX))
+ res0 |= HFGITR_EL2_COSPRCTX;
+ if (!kvm_has_feat(kvm, ID_AA64ISAR2_EL1, ATS1A, IMP))
+ res0 |= HFGITR_EL2_ATS1E1A;
+ set_sysreg_masks(kvm, HFGITR_EL2, res0, res1);
+
+ /* HAFGRTR_EL2 - not a lot to see here */
+ res0 = HAFGRTR_EL2_RES0;
+ res1 = HAFGRTR_EL2_RES1;
+ if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, V1P1))
+ res0 |= ~(res0 | res1);
+ set_sysreg_masks(kvm, HAFGRTR_EL2, res0, res1);
out:
mutex_unlock(&kvm->arch.config_lock);
--
2.39.2
* [PATCH v2 06/25] KVM: arm64: nv: Add sanitising to VNCR-backed HCRX_EL2
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
` (4 preceding siblings ...)
2024-01-30 20:45 ` [PATCH v2 05/25] KVM: arm64: nv: Add sanitising to VNCR-backed FGT sysregs Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
2024-02-02 8:52 ` Oliver Upton
2024-01-30 20:45 ` [PATCH v2 07/25] KVM: arm64: nv: Drop sanitised_sys_reg() helper Marc Zyngier
` (18 subsequent siblings)
24 siblings, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
Just like its little friends, HCRX_EL2 gets the feature set treatment
when backed by VNCR.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/nested.c | 42 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 42 insertions(+)
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index cdeef3259193..72db632b115a 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -263,6 +263,48 @@ int kvm_init_nv_sysregs(struct kvm *kvm)
res1 |= HCR_E2H;
set_sysreg_masks(kvm, HCR_EL2, res0, res1);
+ /* HCRX_EL2 */
+ res0 = HCRX_EL2_RES0;
+ res1 = HCRX_EL2_RES1;
+ if (!kvm_has_feat(kvm, ID_AA64ISAR3_EL1, PACM, TRIVIAL_IMP))
+ res0 |= HCRX_EL2_PACMEn;
+ if (!kvm_has_feat(kvm, ID_AA64PFR2_EL1, FPMR, IMP))
+ res0 |= HCRX_EL2_EnFPM;
+ if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, GCS, IMP))
+ res0 |= HCRX_EL2_GCSEn;
+ if (!kvm_has_feat(kvm, ID_AA64ISAR2_EL1, SYSREG_128, IMP))
+ res0 |= HCRX_EL2_EnIDCP128;
+ if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, ADERR, DEV_ASYNC))
+ res0 |= (HCRX_EL2_EnSDERR | HCRX_EL2_EnSNERR);
+ if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, DF2, IMP))
+ res0 |= HCRX_EL2_TMEA;
+ if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, D128, IMP))
+ res0 |= HCRX_EL2_D128En;
+ if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, THE, IMP))
+ res0 |= HCRX_EL2_PTTWI;
+ if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, SCTLRX, IMP))
+ res0 |= HCRX_EL2_SCTLR2En;
+ if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, TCRX, IMP))
+ res0 |= HCRX_EL2_TCR2En;
+ if (!kvm_has_feat(kvm, ID_AA64ISAR2_EL1, MOPS, IMP))
+ res0 |= (HCRX_EL2_MSCEn | HCRX_EL2_MCE2);
+ if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, CMOW, IMP))
+ res0 |= HCRX_EL2_CMOW;
+ if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, NMI, IMP))
+ res0 |= (HCRX_EL2_VFNMI | HCRX_EL2_VINMI | HCRX_EL2_TALLINT);
+ if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, SME, IMP) ||
+ !(read_sysreg_s(SYS_SMIDR_EL1) & SMIDR_EL1_SMPS))
+ res0 |= HCRX_EL2_SMPME;
+ if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, XS, IMP))
+ res0 |= (HCRX_EL2_FGTnXS | HCRX_EL2_FnXS);
+ if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_V))
+ res0 |= HCRX_EL2_EnASR;
+ if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64))
+ res0 |= HCRX_EL2_EnALS;
+ if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_ACCDATA))
+ res0 |= HCRX_EL2_EnAS0;
+ set_sysreg_masks(kvm, HCRX_EL2, res0, res1);
+
/* HFG[RW]TR_EL2 */
res0 = res1 = 0;
if (!(__vcpu_has_feature(&kvm->arch, KVM_ARM_VCPU_PTRAUTH_ADDRESS) &&
--
2.39.2
* [PATCH v2 07/25] KVM: arm64: nv: Drop sanitised_sys_reg() helper
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
` (5 preceding siblings ...)
2024-01-30 20:45 ` [PATCH v2 06/25] KVM: arm64: nv: Add sanitising to VNCR-backed HCRX_EL2 Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 08/25] KVM: arm64: Unify HDFG[WR]TR_GROUP FGT identifiers Marc Zyngier
` (17 subsequent siblings)
24 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
Now that we have the infrastructure to enforce a sanitised register
value depending on the VM configuration, drop the helper that only
used the architectural RES0 value.
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/emulate-nested.c | 22 +++++++---------------
1 file changed, 7 insertions(+), 15 deletions(-)
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 431fd429932d..7a4a886adb9d 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -1897,14 +1897,6 @@ static bool check_fgt_bit(u64 val, const union trap_config tc)
return ((val >> tc.bit) & 1) == tc.pol;
}
-#define sanitised_sys_reg(vcpu, reg) \
- ({ \
- u64 __val; \
- __val = __vcpu_sys_reg(vcpu, reg); \
- __val &= ~__ ## reg ## _RES0; \
- (__val); \
- })
-
bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
{
union trap_config tc;
@@ -1940,25 +1932,25 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
case HFGxTR_GROUP:
if (is_read)
- val = sanitised_sys_reg(vcpu, HFGRTR_EL2);
+ val = __vcpu_sys_reg(vcpu, HFGRTR_EL2);
else
- val = sanitised_sys_reg(vcpu, HFGWTR_EL2);
+ val = __vcpu_sys_reg(vcpu, HFGWTR_EL2);
break;
case HDFGRTR_GROUP:
case HDFGWTR_GROUP:
if (is_read)
- val = sanitised_sys_reg(vcpu, HDFGRTR_EL2);
+ val = __vcpu_sys_reg(vcpu, HDFGRTR_EL2);
else
- val = sanitised_sys_reg(vcpu, HDFGWTR_EL2);
+ val = __vcpu_sys_reg(vcpu, HDFGWTR_EL2);
break;
case HAFGRTR_GROUP:
- val = sanitised_sys_reg(vcpu, HAFGRTR_EL2);
+ val = __vcpu_sys_reg(vcpu, HAFGRTR_EL2);
break;
case HFGITR_GROUP:
- val = sanitised_sys_reg(vcpu, HFGITR_EL2);
+ val = __vcpu_sys_reg(vcpu, HFGITR_EL2);
switch (tc.fgf) {
u64 tmp;
@@ -1966,7 +1958,7 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
break;
case HCRX_FGTnXS:
- tmp = sanitised_sys_reg(vcpu, HCRX_EL2);
+ tmp = __vcpu_sys_reg(vcpu, HCRX_EL2);
if (tmp & HCRX_EL2_FGTnXS)
tc.fgt = __NO_FGT_GROUP__;
}
--
2.39.2
* [PATCH v2 08/25] KVM: arm64: Unify HDFG[WR]TR_GROUP FGT identifiers
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
` (6 preceding siblings ...)
2024-01-30 20:45 ` [PATCH v2 07/25] KVM: arm64: nv: Drop sanitised_sys_reg() helper Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 09/25] KVM: arm64: nv: Correctly handle negative polarity FGTs Marc Zyngier
` (16 subsequent siblings)
24 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
There is no reason to have separate FGT group identifiers for
debug fine-grained trapping. The sole requirement is to provide
the *names* so that the SR_FGF() macro can do its magic of picking
the correct bit definition.
So let's alias HDFGWTR_GROUP and HDFGRTR_GROUP.
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/emulate-nested.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 7a4a886adb9d..8a1cfcf553a2 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -1010,7 +1010,7 @@ enum fgt_group_id {
__NO_FGT_GROUP__,
HFGxTR_GROUP,
HDFGRTR_GROUP,
- HDFGWTR_GROUP,
+ HDFGWTR_GROUP = HDFGRTR_GROUP,
HFGITR_GROUP,
HAFGRTR_GROUP,
@@ -1938,7 +1938,6 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
break;
case HDFGRTR_GROUP:
- case HDFGWTR_GROUP:
if (is_read)
val = __vcpu_sys_reg(vcpu, HDFGRTR_EL2);
else
--
2.39.2
* [PATCH v2 09/25] KVM: arm64: nv: Correctly handle negative polarity FGTs
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
` (7 preceding siblings ...)
2024-01-30 20:45 ` [PATCH v2 08/25] KVM: arm64: Unify HDFG[WR]TR_GROUP FGT identifiers Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 10/25] KVM: arm64: nv: Turn encoding ranges into discrete XArray stores Marc Zyngier
` (15 subsequent siblings)
24 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
Negative trap bits are a massive pain. They are, on the surface,
indistinguishable from RES0 bits. Do you trap? Or do you ignore?
Thankfully, we now have the right infrastructure to check for RES0
bits as long as the register is backed by VNCR, which is the case
for the FGT registers.
Use that information as a discriminant when handling a trap that
is potentially caused by an FGT.
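Summarising the resulting decision (this table paraphrases the
check_fgt_bit() logic below):

    FGT bit | polarity | bit RES0 for this VM? | trap?
    --------+----------+-----------------------+-----------------------
       1    | positive |         (any)         | yes
       0    | positive |         (any)         | no
       1    | negative |         (any)         | no
       0    | negative |          no           | yes
       0    | negative |          yes          | no (feature is absent)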
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/emulate-nested.c | 59 +++++++++++++++++++++++++++++++--
1 file changed, 56 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 8a1cfcf553a2..ef46c2e45307 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -1892,9 +1892,61 @@ static enum trap_behaviour compute_trap_behaviour(struct kvm_vcpu *vcpu,
return __compute_trap_behaviour(vcpu, tc.cgt, b);
}
-static bool check_fgt_bit(u64 val, const union trap_config tc)
+static u64 kvm_get_sysreg_res0(struct kvm *kvm, enum vcpu_sysreg sr)
{
- return ((val >> tc.bit) & 1) == tc.pol;
+ struct kvm_sysreg_masks *masks;
+
+ /* Only handle the VNCR-backed regs for now */
+ if (sr < __VNCR_START__)
+ return 0;
+
+ masks = kvm->arch.sysreg_masks;
+
+ return masks->mask[sr - __VNCR_START__].res0;
+}
+
+static bool check_fgt_bit(struct kvm *kvm, bool is_read,
+ u64 val, const union trap_config tc)
+{
+ enum vcpu_sysreg sr;
+
+ if (tc.pol)
+ return (val & BIT(tc.bit));
+
+ /*
+ * FGTs with negative polarities are an absolute nightmare, as
+ * we need to evaluate the bit in the light of the feature
+ * that defines it. WTF were they thinking?
+ *
+ * So let's check if the bit has been earmarked as RES0, as
+ * this indicates an unimplemented feature.
+ */
+ if (val & BIT(tc.bit))
+ return false;
+
+ switch ((enum fgt_group_id)tc.fgt) {
+ case HFGxTR_GROUP:
+ sr = is_read ? HFGRTR_EL2 : HFGWTR_EL2;
+ break;
+
+ case HDFGRTR_GROUP:
+ sr = is_read ? HDFGRTR_EL2 : HDFGWTR_EL2;
+ break;
+
+ case HAFGRTR_GROUP:
+ sr = HAFGRTR_EL2;
+ break;
+
+ case HFGITR_GROUP:
+ sr = HFGITR_EL2;
+ break;
+
+ default:
+ WARN_ONCE(1, "Unhandled FGT group");
+ return false;
+ }
+
+ return !(kvm_get_sysreg_res0(kvm, sr) & BIT(tc.bit));
}
bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
@@ -1969,7 +2021,8 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
return false;
}
- if (tc.fgt != __NO_FGT_GROUP__ && check_fgt_bit(val, tc))
+ if (tc.fgt != __NO_FGT_GROUP__ && check_fgt_bit(vcpu->kvm, is_read,
+ val, tc))
goto inject;
b = compute_trap_behaviour(vcpu, tc);
--
2.39.2
* [PATCH v2 10/25] KVM: arm64: nv: Turn encoding ranges into discrete XArray stores
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
` (8 preceding siblings ...)
2024-01-30 20:45 ` [PATCH v2 09/25] KVM: arm64: nv: Correctly handle negative polarity FGTs Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
2024-01-31 14:57 ` Joey Gouly
2024-01-30 20:45 ` [PATCH v2 11/25] KVM: arm64: Drop the requirement for XARRAY_MULTI Marc Zyngier
` (14 subsequent siblings)
24 siblings, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
In order to be able to store different values for each member of an
encoding range, replace xa_store_range() calls with discrete
xa_store() calls and an encoding iterator.
We end up using a bit more memory, but we gain some flexibility
that we will make use of shortly.
Take this opportunity to tidy up the error handling path.
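To make the iterator's carry behaviour concrete (invented example): Op2
saturates at 7 and then carries into CRm, so

    encoding_next(sys_reg(3, 0, 0, 1, 7)) == sys_reg(3, 0, 0, 2, 0)

and the new loop simply steps from cgt->encoding up to cgt->end, one
discrete encoding at a time.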
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/emulate-nested.c | 49 ++++++++++++++++++++++++---------
1 file changed, 36 insertions(+), 13 deletions(-)
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index ef46c2e45307..42f5b4483c80 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -1757,6 +1757,28 @@ static __init void print_nv_trap_error(const struct encoding_to_trap_config *tc,
err);
}
+static u32 encoding_next(u32 encoding)
+{
+ u8 op0, op1, crn, crm, op2;
+
+ op0 = sys_reg_Op0(encoding);
+ op1 = sys_reg_Op1(encoding);
+ crn = sys_reg_CRn(encoding);
+ crm = sys_reg_CRm(encoding);
+ op2 = sys_reg_Op2(encoding);
+
+ if (op2 < Op2_mask)
+ return sys_reg(op0, op1, crn, crm, op2 + 1);
+ if (crm < CRm_mask)
+ return sys_reg(op0, op1, crn, crm + 1, 0);
+ if (crn < CRn_mask)
+ return sys_reg(op0, op1, crn + 1, 0, 0);
+ if (op1 < Op1_mask)
+ return sys_reg(op0, op1 + 1, 0, 0, 0);
+
+ return sys_reg(op0 + 1, 0, 0, 0, 0);
+}
+
int __init populate_nv_trap_config(void)
{
int ret = 0;
@@ -1775,23 +1797,18 @@ int __init populate_nv_trap_config(void)
ret = -EINVAL;
}
- if (cgt->encoding != cgt->end) {
- prev = xa_store_range(&sr_forward_xa,
- cgt->encoding, cgt->end,
- xa_mk_value(cgt->tc.val),
- GFP_KERNEL);
- } else {
- prev = xa_store(&sr_forward_xa, cgt->encoding,
+ for (u32 enc = cgt->encoding; enc <= cgt->end; enc = encoding_next(enc)) {
+ prev = xa_store(&sr_forward_xa, enc,
xa_mk_value(cgt->tc.val), GFP_KERNEL);
if (prev && !xa_is_err(prev)) {
ret = -EINVAL;
print_nv_trap_error(cgt, "Duplicate CGT", ret);
}
- }
- if (xa_is_err(prev)) {
- ret = xa_err(prev);
- print_nv_trap_error(cgt, "Failed CGT insertion", ret);
+ if (xa_is_err(prev)) {
+ ret = xa_err(prev);
+ print_nv_trap_error(cgt, "Failed CGT insertion", ret);
+ }
}
}
@@ -1804,6 +1821,7 @@ int __init populate_nv_trap_config(void)
for (int i = 0; i < ARRAY_SIZE(encoding_to_fgt); i++) {
const struct encoding_to_trap_config *fgt = &encoding_to_fgt[i];
union trap_config tc;
+ void *prev;
if (fgt->tc.fgt >= __NR_FGT_GROUP_IDS__) {
ret = -EINVAL;
@@ -1818,8 +1836,13 @@ int __init populate_nv_trap_config(void)
}
tc.val |= fgt->tc.val;
- xa_store(&sr_forward_xa, fgt->encoding,
- xa_mk_value(tc.val), GFP_KERNEL);
+ prev = xa_store(&sr_forward_xa, fgt->encoding,
+ xa_mk_value(tc.val), GFP_KERNEL);
+
+ if (xa_is_err(prev)) {
+ ret = xa_err(prev);
+ print_nv_trap_error(fgt, "Failed FGT insertion", ret);
+ }
}
kvm_info("nv: %ld fine grained trap handlers\n",
--
2.39.2
* [PATCH v2 11/25] KVM: arm64: Drop the requirement for XARRAY_MULTI
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
` (9 preceding siblings ...)
2024-01-30 20:45 ` [PATCH v2 10/25] KVM: arm64: nv: Turn encoding ranges into discrete XArray stores Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 12/25] KVM: arm64: nv: Move system instructions to their own sys_reg_desc array Marc Zyngier
` (13 subsequent siblings)
24 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
Now that we don't use xa_store_range() anymore, drop the added
complexity of XARRAY_MULTI for KVM. It is likely still pulled
in by other bits of the kernel though.
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/Kconfig | 1 -
1 file changed, 1 deletion(-)
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 6c3c8ca73e7f..5c2a672c06a8 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -39,7 +39,6 @@ menuconfig KVM
select HAVE_KVM_VCPU_RUN_PID_CHANGE
select SCHED_INFO
select GUEST_PERF_EVENTS if PERF_EVENTS
- select XARRAY_MULTI
help
Support hosting virtualized guest machines.
--
2.39.2
* [PATCH v2 12/25] KVM: arm64: nv: Move system instructions to their own sys_reg_desc array
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
` (10 preceding siblings ...)
2024-01-30 20:45 ` [PATCH v2 11/25] KVM: arm64: Drop the requirement for XARRAY_MULTI Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 13/25] KVM: arm64: Always populate the trap configuration xarray Marc Zyngier
` (12 subsequent siblings)
24 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
As NV results in a bunch of system instructions being trapped, it makes
sense to pull the system instructions into their own little array, where
they will eventually be joined by AT, TLBI and a bunch of other CMOs.
Based on an initial patch by Jintack Lim.
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/sys_regs.c | 59 +++++++++++++++++++++++++++++----------
1 file changed, 44 insertions(+), 15 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 3c31f8cb9eef..70043bd78cd4 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -2196,16 +2196,6 @@ static u64 reset_hcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
* guest...
*/
static const struct sys_reg_desc sys_reg_descs[] = {
- { SYS_DESC(SYS_DC_ISW), access_dcsw },
- { SYS_DESC(SYS_DC_IGSW), access_dcgsw },
- { SYS_DESC(SYS_DC_IGDSW), access_dcgsw },
- { SYS_DESC(SYS_DC_CSW), access_dcsw },
- { SYS_DESC(SYS_DC_CGSW), access_dcgsw },
- { SYS_DESC(SYS_DC_CGDSW), access_dcgsw },
- { SYS_DESC(SYS_DC_CISW), access_dcsw },
- { SYS_DESC(SYS_DC_CIGSW), access_dcgsw },
- { SYS_DESC(SYS_DC_CIGDSW), access_dcgsw },
-
DBG_BCR_BVR_WCR_WVR_EL1(0),
DBG_BCR_BVR_WCR_WVR_EL1(1),
{ SYS_DESC(SYS_MDCCINT_EL1), trap_debug_regs, reset_val, MDCCINT_EL1, 0 },
@@ -2737,6 +2727,18 @@ static const struct sys_reg_desc sys_reg_descs[] = {
EL2_REG(SP_EL2, NULL, reset_unknown, 0),
};
+static struct sys_reg_desc sys_insn_descs[] = {
+ { SYS_DESC(SYS_DC_ISW), access_dcsw },
+ { SYS_DESC(SYS_DC_IGSW), access_dcgsw },
+ { SYS_DESC(SYS_DC_IGDSW), access_dcgsw },
+ { SYS_DESC(SYS_DC_CSW), access_dcsw },
+ { SYS_DESC(SYS_DC_CGSW), access_dcgsw },
+ { SYS_DESC(SYS_DC_CGDSW), access_dcgsw },
+ { SYS_DESC(SYS_DC_CISW), access_dcsw },
+ { SYS_DESC(SYS_DC_CIGSW), access_dcgsw },
+ { SYS_DESC(SYS_DC_CIGDSW), access_dcgsw },
+};
+
static const struct sys_reg_desc *first_idreg;
static bool trap_dbgdidr(struct kvm_vcpu *vcpu,
@@ -3429,6 +3431,24 @@ static bool emulate_sys_reg(struct kvm_vcpu *vcpu,
return false;
}
+static int emulate_sys_instr(struct kvm_vcpu *vcpu, struct sys_reg_params *p)
+{
+ const struct sys_reg_desc *r;
+
+ /* Search from the system instruction table. */
+ r = find_reg(p, sys_insn_descs, ARRAY_SIZE(sys_insn_descs));
+
+ if (likely(r)) {
+ perform_access(vcpu, p, r);
+ } else {
+ kvm_err("Unsupported guest sys instruction at: %lx\n",
+ *vcpu_pc(vcpu));
+ print_sys_reg_instr(p);
+ kvm_inject_undefined(vcpu);
+ }
+ return 1;
+}
+
static void kvm_reset_id_regs(struct kvm_vcpu *vcpu)
{
const struct sys_reg_desc *idreg = first_idreg;
@@ -3476,7 +3496,8 @@ void kvm_reset_sys_regs(struct kvm_vcpu *vcpu)
}
/**
- * kvm_handle_sys_reg -- handles a mrs/msr trap on a guest sys_reg access
+ * kvm_handle_sys_reg -- handles a system instruction or mrs/msr instruction
+ * trap on a guest execution
* @vcpu: The VCPU pointer
*/
int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
@@ -3493,12 +3514,19 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
params = esr_sys64_to_params(esr);
params.regval = vcpu_get_reg(vcpu, Rt);
- if (!emulate_sys_reg(vcpu, &params))
+ /* System registers have Op0=={2,3}, as per DDI0487J.a C5.1.2 */
+ if (params.Op0 == 2 || params.Op0 == 3) {
+ if (!emulate_sys_reg(vcpu, &params))
+ return 1;
+
+ if (!params.is_write)
+ vcpu_set_reg(vcpu, Rt, params.regval);
+
return 1;
+ }
- if (!params.is_write)
- vcpu_set_reg(vcpu, Rt, params.regval);
- return 1;
+ /* Hints, PSTATE (Op0 == 0) and System instructions (Op0 == 1) */
+ return emulate_sys_instr(vcpu, &params);
}
/******************************************************************************
@@ -3952,6 +3980,7 @@ int __init kvm_sys_reg_table_init(void)
valid &= check_sysreg_table(cp15_regs, ARRAY_SIZE(cp15_regs), true);
valid &= check_sysreg_table(cp15_64_regs, ARRAY_SIZE(cp15_64_regs), true);
valid &= check_sysreg_table(invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs), false);
+ valid &= check_sysreg_table(sys_insn_descs, ARRAY_SIZE(sys_insn_descs), false);
if (!valid)
return -EINVAL;
--
2.39.2
* [PATCH v2 13/25] KVM: arm64: Always populate the trap configuration xarray
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
` (11 preceding siblings ...)
2024-01-30 20:45 ` [PATCH v2 12/25] KVM: arm64: nv: Move system instructions to their own sys_reg_desc array Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 14/25] KVM: arm64: Register AArch64 system register entries with the sysreg xarray Marc Zyngier
` (11 subsequent siblings)
24 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
As we are going to rely more and more on the global xarray that
contains the trap configuration, always populate it, even in the
non-NV case.
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/sys_regs.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 70043bd78cd4..57f3d0c53fc3 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3995,8 +3995,5 @@ int __init kvm_sys_reg_table_init(void)
if (!first_idreg)
return -EINVAL;
- if (kvm_get_mode() == KVM_MODE_NV)
- return populate_nv_trap_config();
-
- return 0;
+ return populate_nv_trap_config();
}
--
2.39.2
* [PATCH v2 14/25] KVM: arm64: Register AArch64 system register entries with the sysreg xarray
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
` (12 preceding siblings ...)
2024-01-30 20:45 ` [PATCH v2 13/25] KVM: arm64: Always populate the trap configuration xarray Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 15/25] KVM: arm64: Use the xarray as the primary sysreg/sysinsn walker Marc Zyngier
` (10 subsequent siblings)
24 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
In order to reduce the number of lookups that we have to perform
when handling a sysreg, register each AArch64 sysreg descriptor
with the global xarray. The index of the descriptor is stored
as a 10 bit field in the data word.
Subsequent patches will retrieve and use the stored index.
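Since zero must keep meaning "no index stored", the value is biased by
one; a consumer (a sketch of what the follow-up patches do) undoes the
bias before indexing into the descriptor array:

    /* Sketch only: look up the descriptor registered for an encoding */
    tc = get_trap_config(encoding);
    if (tc.sri)
        r = &sys_reg_descs[tc.sri - 1]; /* undo the +1 bias */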
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 3 +++
arch/arm64/kvm/emulate-nested.c | 39 +++++++++++++++++++++++++++++--
arch/arm64/kvm/sys_regs.c | 11 ++++++++-
3 files changed, 50 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 816288e2d735..b1a2b4cbc721 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1078,6 +1078,9 @@ int kvm_handle_cp10_id(struct kvm_vcpu *vcpu);
void kvm_reset_sys_regs(struct kvm_vcpu *vcpu);
int __init kvm_sys_reg_table_init(void);
+struct sys_reg_desc;
+int __init populate_sysreg_config(const struct sys_reg_desc *sr,
+ unsigned int idx);
int __init populate_nv_trap_config(void);
bool lock_all_vcpus(struct kvm *kvm);
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 42f5b4483c80..ddf760b4dda5 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -427,12 +427,14 @@ static const complex_condition_check ccc[] = {
* [19:14] bit number in the FGT register (6 bits)
* [20] trap polarity (1 bit)
* [25:21] FG filter (5 bits)
- * [62:26] Unused (37 bits)
+ * [35:26] Main SysReg table index (10 bits)
+ * [62:36] Unused (27 bits)
* [63] RES0 - Must be zero, as lost on insertion in the xarray
*/
#define TC_CGT_BITS 10
#define TC_FGT_BITS 4
#define TC_FGF_BITS 5
+#define TC_SRI_BITS 10
union trap_config {
u64 val;
@@ -442,7 +444,8 @@ union trap_config {
unsigned long bit:6; /* Bit number */
unsigned long pol:1; /* Polarity */
unsigned long fgf:TC_FGF_BITS; /* Fine Grained Filter */
- unsigned long unused:37; /* Unused, should be zero */
+ unsigned long sri:TC_SRI_BITS; /* SysReg Index */
+ unsigned long unused:27; /* Unused, should be zero */
unsigned long mbz:1; /* Must Be Zero */
};
};
@@ -1868,6 +1871,38 @@ int __init populate_nv_trap_config(void)
return ret;
}
+int __init populate_sysreg_config(const struct sys_reg_desc *sr,
+ unsigned int idx)
+{
+ union trap_config tc;
+ u32 encoding;
+ void *ret;
+
+ /*
+ * 0 is a valid value for the index, but not for the storage.
+ * We'll store (idx+1), so check against an offset'd limit.
+ */
+ if (idx >= (BIT(TC_SRI_BITS) - 1)) {
+ kvm_err("sysreg %s (%d) out of range\n", sr->name, idx);
+ return -EINVAL;
+ }
+
+ encoding = sys_reg(sr->Op0, sr->Op1, sr->CRn, sr->CRm, sr->Op2);
+ tc = get_trap_config(encoding);
+
+ if (tc.sri) {
+ kvm_err("sysreg %s (%d) duplicate entry (%d)\n",
+ sr->name, idx - 1, tc.sri);
+ return -EINVAL;
+ }
+
+ tc.sri = idx + 1;
+ ret = xa_store(&sr_forward_xa, encoding,
+ xa_mk_value(tc.val), GFP_KERNEL);
+
+ return xa_err(ret);
+}
+
static enum trap_behaviour get_behaviour(struct kvm_vcpu *vcpu,
const struct trap_bits *tb)
{
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 57f3d0c53fc3..a410e99f827e 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3972,6 +3972,7 @@ int __init kvm_sys_reg_table_init(void)
struct sys_reg_params params;
bool valid = true;
unsigned int i;
+ int ret = 0;
/* Make sure tables are unique and in order. */
valid &= check_sysreg_table(sys_reg_descs, ARRAY_SIZE(sys_reg_descs), false);
@@ -3995,5 +3996,13 @@ int __init kvm_sys_reg_table_init(void)
if (!first_idreg)
return -EINVAL;
- return populate_nv_trap_config();
+ ret = populate_nv_trap_config();
+
+ for (i = 0; !ret && i < ARRAY_SIZE(sys_reg_descs); i++)
+ ret = populate_sysreg_config(sys_reg_descs + i, i);
+
+ for (i = 0; !ret && i < ARRAY_SIZE(sys_insn_descs); i++)
+ ret = populate_sysreg_config(sys_insn_descs + i, i);
+
+ return ret;
}
--
2.39.2
* [PATCH v2 15/25] KVM: arm64: Use the xarray as the primary sysreg/sysinsn walker
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
` (13 preceding siblings ...)
2024-01-30 20:45 ` [PATCH v2 14/25] KVM: arm64: Register AArch64 system register entries with the sysreg xarray Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 16/25] KVM: arm64: Rename __check_nv_sr_forward() to triage_sysreg_trap() Marc Zyngier
` (9 subsequent siblings)
24 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
Since we always start sysreg/sysinsn handling by searching the
xarray, use it as the source of the index into the correct sys_reg_desc
array.
This allows some cleanup, such as moving the handling of unknown
sysregs into a single location.
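For illustration, here is a toy model of the resulting control flow: a
single map lookup yields either "trap consumed" or an index straight into
the descriptor table (stand-in names only, not the kernel's):

#include <stdbool.h>
#include <stdio.h>

struct desc {
    const char *name;
    void (*handle)(void);
};

static void handle_foo(void) { puts("emulating FOO_EL1"); }

/* Stand-in for sys_reg_descs[]. */
static const struct desc table[] = {
    { "FOO_EL1", handle_foo },
};

/*
 * Stand-in for the triage helper: true means the trap was consumed
 * (forwarded or UNDEF injected); false hands back an index into table[].
 */
static bool triage(unsigned int encoding, int *idx)
{
    if (encoding != 0x1234)
        return true;    /* unknown encoding: UNDEF, nothing more to do */
    *idx = 0;           /* index recovered from the single lookup */
    return false;
}

int main(void)
{
    int idx;

    if (!triage(0x1234, &idx))
        table[idx].handle();    /* no second search needed */
    return 0;
}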
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_nested.h | 2 +-
arch/arm64/kvm/emulate-nested.c | 40 +++++++++++++-----
arch/arm64/kvm/sys_regs.c | 63 +++++++++--------------------
3 files changed, 50 insertions(+), 55 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 4882905357f4..68465f87d308 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -60,7 +60,7 @@ static inline u64 translate_ttbr0_el2_to_ttbr0_el1(u64 ttbr0)
return ttbr0 & ~GENMASK_ULL(63, 48);
}
-extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu);
+extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu, int *sr_idx);
int kvm_init_nv_sysregs(struct kvm *kvm);
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index ddf760b4dda5..5aeaa688755f 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -2007,7 +2007,7 @@ static bool check_fgt_bit(struct kvm *kvm, bool is_read,
return !(kvm_get_sysreg_res0(kvm, sr) & BIT(tc.bit));
}
-bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
+bool __check_nv_sr_forward(struct kvm_vcpu *vcpu, int *sr_index)
{
union trap_config tc;
enum trap_behaviour b;
@@ -2015,9 +2015,6 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
u32 sysreg;
u64 esr, val;
- if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
- return false;
-
esr = kvm_vcpu_get_esr(vcpu);
sysreg = esr_sys64_to_sysreg(esr);
is_read = (esr & ESR_ELx_SYS64_ISS_DIR_MASK) == ESR_ELx_SYS64_ISS_DIR_READ;
@@ -2028,13 +2025,16 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
* A value of 0 for the whole entry means that we know nothing
* for this sysreg, and that it cannot be re-injected into the
* nested hypervisor. In this situation, let's cut it short.
- *
- * Note that ultimately, we could also make use of the xarray
- * to store the index of the sysreg in the local descriptor
- * array, avoiding another search... Hint, hint...
*/
if (!tc.val)
- return false;
+ goto local;
+
+ /*
+ * If we're not nesting, immediately return to the caller, with the
+ * sysreg index, should we have it.
+ */
+ if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
+ goto local;
switch ((enum fgt_group_id)tc.fgt) {
case __NO_FGT_GROUP__:
@@ -2076,7 +2076,7 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
case __NR_FGT_GROUP_IDS__:
/* Something is really wrong, bail out */
WARN_ONCE(1, "__NR_FGT_GROUP_IDS__");
- return false;
+ goto local;
}
if (tc.fgt != __NO_FGT_GROUP__ && check_fgt_bit(vcpu->kvm, is_read,
@@ -2089,6 +2089,26 @@ bool __check_nv_sr_forward(struct kvm_vcpu *vcpu)
((b & BEHAVE_FORWARD_WRITE) && !is_read))
goto inject;
+local:
+ if (!tc.sri) {
+ struct sys_reg_params params;
+
+ params = esr_sys64_to_params(esr);
+
+ /*
+ * Check for the IMPDEF range, as per DDI0487 J.a,
+ * D18.3.2 Reserved encodings for IMPLEMENTATION
+ * DEFINED registers.
+ */
+ if (!(params.Op0 == 3 && (params.CRn & 0b1011) == 0b1011))
+ print_sys_reg_msg(¶ms,
+ "Unsupported guest access at: %lx\n",
+ *vcpu_pc(vcpu));
+ kvm_inject_undefined(vcpu);
+ return true;
+ }
+
+ *sr_index = tc.sri - 1;
return false;
inject:
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index a410e99f827e..bc076ef8440b 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3395,12 +3395,6 @@ int kvm_handle_cp14_32(struct kvm_vcpu *vcpu)
return kvm_handle_cp_32(vcpu, ¶ms, cp14_regs, ARRAY_SIZE(cp14_regs));
}
-static bool is_imp_def_sys_reg(struct sys_reg_params *params)
-{
- // See ARM DDI 0487E.a, section D12.3.2
- return params->Op0 == 3 && (params->CRn & 0b1011) == 0b1011;
-}
-
/**
* emulate_sys_reg - Emulate a guest access to an AArch64 system register
* @vcpu: The VCPU pointer
@@ -3409,44 +3403,22 @@ static bool is_imp_def_sys_reg(struct sys_reg_params *params)
* Return: true if the system register access was successful, false otherwise.
*/
static bool emulate_sys_reg(struct kvm_vcpu *vcpu,
- struct sys_reg_params *params)
+ struct sys_reg_params *params)
{
const struct sys_reg_desc *r;
r = find_reg(params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
-
if (likely(r)) {
perform_access(vcpu, params, r);
return true;
}
- if (is_imp_def_sys_reg(params)) {
- kvm_inject_undefined(vcpu);
- } else {
- print_sys_reg_msg(params,
- "Unsupported guest sys_reg access at: %lx [%08lx]\n",
- *vcpu_pc(vcpu), *vcpu_cpsr(vcpu));
- kvm_inject_undefined(vcpu);
- }
- return false;
-}
-
-static int emulate_sys_instr(struct kvm_vcpu *vcpu, struct sys_reg_params *p)
-{
- const struct sys_reg_desc *r;
-
- /* Search from the system instruction table. */
- r = find_reg(p, sys_insn_descs, ARRAY_SIZE(sys_insn_descs));
+ print_sys_reg_msg(params,
+ "Unsupported guest sys_reg access at: %lx [%08lx]\n",
+ *vcpu_pc(vcpu), *vcpu_cpsr(vcpu));
+ kvm_inject_undefined(vcpu);
- if (likely(r)) {
- perform_access(vcpu, p, r);
- } else {
- kvm_err("Unsupported guest sys instruction at: %lx\n",
- *vcpu_pc(vcpu));
- print_sys_reg_instr(p);
- kvm_inject_undefined(vcpu);
- }
- return 1;
+ return false;
}
static void kvm_reset_id_regs(struct kvm_vcpu *vcpu)
@@ -3502,31 +3474,34 @@ void kvm_reset_sys_regs(struct kvm_vcpu *vcpu)
*/
int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
{
+ const struct sys_reg_desc *desc = NULL;
struct sys_reg_params params;
unsigned long esr = kvm_vcpu_get_esr(vcpu);
int Rt = kvm_vcpu_sys_get_rt(vcpu);
+ int sr_idx;
trace_kvm_handle_sys_reg(esr);
- if (__check_nv_sr_forward(vcpu))
+ if (__check_nv_sr_forward(vcpu, &sr_idx))
return 1;
params = esr_sys64_to_params(esr);
params.regval = vcpu_get_reg(vcpu, Rt);
/* System registers have Op0=={2,3}, as per DDI487 J.a C5.1.2 */
- if (params.Op0 == 2 || params.Op0 == 3) {
- if (!emulate_sys_reg(vcpu, ¶ms))
- return 1;
+ if (params.Op0 == 2 || params.Op0 == 3)
+ desc = &sys_reg_descs[sr_idx];
+ else
+ desc = &sys_insn_descs[sr_idx];
- if (!params.is_write)
- vcpu_set_reg(vcpu, Rt, params.regval);
+ perform_access(vcpu, ¶ms, desc);
- return 1;
- }
+ /* Read from system register? */
+ if (!params.is_write &&
+ (params.Op0 == 2 || params.Op0 == 3))
+ vcpu_set_reg(vcpu, Rt, params.regval);
- /* Hints, PSTATE (Op0 == 0) and System instructions (Op0 == 1) */
- return emulate_sys_instr(vcpu, ¶ms);
+ return 1;
}
/******************************************************************************
--
2.39.2
* [PATCH v2 16/25] KVM: arm64: Rename __check_nv_sr_forward() to triage_sysreg_trap()
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
` (14 preceding siblings ...)
2024-01-30 20:45 ` [PATCH v2 15/25] KVM: arm64: Use the xarray as the primary sysreg/sysinsn walker Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 17/25] KVM: arm64: Add Fine-Grained UNDEF tracking information Marc Zyngier
` (8 subsequent siblings)
24 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
__check_nv_sr_forward() is not specific to NV anymore, and does
a lot more. Rename it to triage_sysreg_trap(), making it plain
that its role is to decide where an exception is to be handled.
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_nested.h | 1 -
arch/arm64/kvm/emulate-nested.c | 2 +-
arch/arm64/kvm/sys_regs.c | 2 +-
arch/arm64/kvm/sys_regs.h | 2 ++
4 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 68465f87d308..c77d795556e1 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -60,7 +60,6 @@ static inline u64 translate_ttbr0_el2_to_ttbr0_el1(u64 ttbr0)
return ttbr0 & ~GENMASK_ULL(63, 48);
}
-extern bool __check_nv_sr_forward(struct kvm_vcpu *vcpu, int *sr_idx);
int kvm_init_nv_sysregs(struct kvm *kvm);
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 5aeaa688755f..4f2a32a8f247 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -2007,7 +2007,7 @@ static bool check_fgt_bit(struct kvm *kvm, bool is_read,
return !(kvm_get_sysreg_res0(kvm, sr) & BIT(tc.bit));
}
-bool __check_nv_sr_forward(struct kvm_vcpu *vcpu, int *sr_index)
+bool triage_sysreg_trap(struct kvm_vcpu *vcpu, int *sr_index)
{
union trap_config tc;
enum trap_behaviour b;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index bc076ef8440b..938df5dc3684 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3482,7 +3482,7 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu)
trace_kvm_handle_sys_reg(esr);
- if (__check_nv_sr_forward(vcpu, &sr_idx))
+ if (triage_sysreg_trap(vcpu, &sr_idx))
return 1;
params = esr_sys64_to_params(esr);
diff --git a/arch/arm64/kvm/sys_regs.h b/arch/arm64/kvm/sys_regs.h
index c65c129b3500..997eea21ba2a 100644
--- a/arch/arm64/kvm/sys_regs.h
+++ b/arch/arm64/kvm/sys_regs.h
@@ -233,6 +233,8 @@ int kvm_sys_reg_get_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg,
int kvm_sys_reg_set_user(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg,
const struct sys_reg_desc table[], unsigned int num);
+bool triage_sysreg_trap(struct kvm_vcpu *vcpu, int *sr_index);
+
#define AA32(_x) .aarch32_map = AA32_##_x
#define Op0(_x) .Op0 = _x
#define Op1(_x) .Op1 = _x
--
2.39.2
* [PATCH v2 17/25] KVM: arm64: Add Fine-Grained UNDEF tracking information
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
` (15 preceding siblings ...)
2024-01-30 20:45 ` [PATCH v2 16/25] KVM: arm64: Rename __check_nv_sr_forward() to triage_sysreg_trap() Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 18/25] KVM: arm64: Propagate and handle Fine-Grained UNDEF bits Marc Zyngier
` (7 subsequent siblings)
24 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
In order to efficiently handle a system register access that has been
disabled, and that must result in an UNDEF exception being injected,
we introduce the (slightly dubious) concept of Fine-Grained UNDEF,
modeled after the architectural Fine-Grained Traps.
For each FGT group, we keep a 64-bit word that has the exact same
bit assignment as the corresponding FGT register, where a 1 indicates
that trapping this register should result in an UNDEF exception being
injected.
So far, nothing populates this information, nor sets the corresponding
trap bits.
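A hedged sketch of the data structure and of the check it enables
(simplified types; the real enum and masks are in the patch below):

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

enum fgt_group { NO_GROUP, HFGxTR, HFGITR, NR_GROUPS };

struct vm_arch {
    /* one word per FGT group, same bit layout as the FGT register */
    uint64_t fgu[NR_GROUPS];
};

static bool trap_is_undef(const struct vm_arch *a, enum fgt_group g,
                          unsigned int bit)
{
    return g != NO_GROUP && (a->fgu[g] & (UINT64_C(1) << bit));
}

int main(void)
{
    struct vm_arch a = { .fgu[HFGxTR] = UINT64_C(1) << 42 };

    assert(trap_is_undef(&a, HFGxTR, 42));    /* must become an UNDEF */
    assert(!trap_is_undef(&a, HFGITR, 42));
    return 0;
}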
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 21 +++++++++++++++++++++
arch/arm64/kvm/emulate-nested.c | 12 ------------
2 files changed, 21 insertions(+), 12 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index b1a2b4cbc721..a7c3b9d7ec73 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -240,9 +240,30 @@ static inline u16 kvm_mpidr_index(struct kvm_mpidr_data *data, u64 mpidr)
struct kvm_sysreg_masks;
+enum fgt_group_id {
+ __NO_FGT_GROUP__,
+ HFGxTR_GROUP,
+ HDFGRTR_GROUP,
+ HDFGWTR_GROUP = HDFGRTR_GROUP,
+ HFGITR_GROUP,
+ HAFGRTR_GROUP,
+
+ /* Must be last */
+ __NR_FGT_GROUP_IDS__
+};
+
struct kvm_arch {
struct kvm_s2_mmu mmu;
+ /*
+ * Fine-Grained UNDEF, mimicking the FGT layout defined by the
+ * architecture. We track them globally, as we present the
+ * same feature-set to all vcpus.
+ *
+ * Index 0 is currently spare.
+ */
+ u64 fgu[__NR_FGT_GROUP_IDS__];
+
/* Interrupt controller */
struct vgic_dist vgic;
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 4f2a32a8f247..b67078b8271b 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -1009,18 +1009,6 @@ static const struct encoding_to_trap_config encoding_to_cgt[] __initconst = {
static DEFINE_XARRAY(sr_forward_xa);
-enum fgt_group_id {
- __NO_FGT_GROUP__,
- HFGxTR_GROUP,
- HDFGRTR_GROUP,
- HDFGWTR_GROUP = HDFGRTR_GROUP,
- HFGITR_GROUP,
- HAFGRTR_GROUP,
-
- /* Must be last */
- __NR_FGT_GROUP_IDS__
-};
-
enum fg_filter_id {
__NO_FGF__,
HCRX_FGTnXS,
--
2.39.2
* [PATCH v2 18/25] KVM: arm64: Propagate and handle Fine-Grained UNDEF bits
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
` (16 preceding siblings ...)
2024-01-30 20:45 ` [PATCH v2 17/25] KVM: arm64: Add Fine-Grained UNDEF tracking information Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 19/25] KVM: arm64: Move existing feature disabling over to FGU infrastructure Marc Zyngier
` (6 subsequent siblings)
24 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
In order to correctly honor our FGU bits, they must be converted
into a set of FGT bits. They get merged as part of the existing
FGT setting.
Similarly, the UNDEF injection phase takes place when handling
the trap.
This results in a bit of rework in the FGT macros in order to
help with the code generation, as burying per-CPU accesses in
macros results in a lot of expansion, not to mention the vcpu->kvm
access on nvhe (kern_hyp_va() is not optimisation-friendly).
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/emulate-nested.c | 11 ++++
arch/arm64/kvm/hyp/include/hyp/switch.h | 81 +++++++++++++++++++------
2 files changed, 72 insertions(+), 20 deletions(-)
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index b67078b8271b..4697ba41b3a9 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -2017,6 +2017,17 @@ bool triage_sysreg_trap(struct kvm_vcpu *vcpu, int *sr_index)
if (!tc.val)
goto local;
+ /*
+ * If a sysreg can be trapped using a FGT, first check whether we
+ * trap for the purpose of forbidding the feature. In that case,
+ * inject an UNDEF.
+ */
+ if (tc.fgt != __NO_FGT_GROUP__ &&
+ (vcpu->kvm->arch.fgu[tc.fgt] & BIT(tc.bit))) {
+ kvm_inject_undefined(vcpu);
+ return true;
+ }
+
/*
* If we're not nesting, immediately return to the caller, with the
* sysreg index, should we have it.
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index a038320cdb08..a09149fd91ed 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -79,14 +79,48 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
clr |= ~hfg & __ ## reg ## _nMASK; \
} while(0)
-#define update_fgt_traps_cs(vcpu, reg, clr, set) \
+#define reg_to_fgt_group_id(reg) \
+ ({ \
+ enum fgt_group_id id; \
+ switch(reg) { \
+ case HFGRTR_EL2: \
+ case HFGWTR_EL2: \
+ id = HFGxTR_GROUP; \
+ break; \
+ case HFGITR_EL2: \
+ id = HFGITR_GROUP; \
+ break; \
+ case HDFGRTR_EL2: \
+ case HDFGWTR_EL2: \
+ id = HDFGRTR_GROUP; \
+ break; \
+ case HAFGRTR_EL2: \
+ id = HAFGRTR_GROUP; \
+ break; \
+ default: \
+ BUILD_BUG_ON(1); \
+ } \
+ \
+ id; \
+ })
+
+#define compute_undef_clr_set(vcpu, kvm, reg, clr, set) \
+ do { \
+ u64 hfg = kvm->arch.fgu[reg_to_fgt_group_id(reg)]; \
+ set |= hfg & __ ## reg ## _MASK; \
+ clr |= hfg & __ ## reg ## _nMASK; \
+ } while(0)
+
+#define update_fgt_traps_cs(hctxt, vcpu, kvm, reg, clr, set) \
do { \
- struct kvm_cpu_context *hctxt = \
- &this_cpu_ptr(&kvm_host_data)->host_ctxt; \
u64 c = 0, s = 0; \
\
ctxt_sys_reg(hctxt, reg) = read_sysreg_s(SYS_ ## reg); \
- compute_clr_set(vcpu, reg, c, s); \
+ if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) \
+ compute_clr_set(vcpu, reg, c, s); \
+ \
+ compute_undef_clr_set(vcpu, kvm, reg, c, s); \
+ \
s |= set; \
c |= clr; \
if (c || s) { \
@@ -97,8 +131,8 @@ static inline void __activate_traps_fpsimd32(struct kvm_vcpu *vcpu)
} \
} while(0)
-#define update_fgt_traps(vcpu, reg) \
- update_fgt_traps_cs(vcpu, reg, 0, 0)
+#define update_fgt_traps(hctxt, vcpu, kvm, reg) \
+ update_fgt_traps_cs(hctxt, vcpu, kvm, reg, 0, 0)
/*
* Validate the fine grain trap masks.
@@ -122,6 +156,7 @@ static inline bool cpu_has_amu(void)
static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
{
struct kvm_cpu_context *hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
+ struct kvm *kvm = kern_hyp_va(vcpu->kvm);
u64 r_clr = 0, w_clr = 0, r_set = 0, w_set = 0, tmp;
u64 r_val, w_val;
@@ -157,6 +192,9 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
compute_clr_set(vcpu, HFGWTR_EL2, w_clr, w_set);
}
+ compute_undef_clr_set(vcpu, kvm, HFGRTR_EL2, r_clr, r_set);
+ compute_undef_clr_set(vcpu, kvm, HFGWTR_EL2, w_clr, w_set);
+
/* The default to trap everything not handled or supported in KVM. */
tmp = HFGxTR_EL2_nAMAIR2_EL1 | HFGxTR_EL2_nMAIR2_EL1 | HFGxTR_EL2_nS2POR_EL1 |
HFGxTR_EL2_nPOR_EL1 | HFGxTR_EL2_nPOR_EL0 | HFGxTR_EL2_nACCDATA_EL1;
@@ -172,20 +210,26 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
write_sysreg_s(r_val, SYS_HFGRTR_EL2);
write_sysreg_s(w_val, SYS_HFGWTR_EL2);
- if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
- return;
-
- update_fgt_traps(vcpu, HFGITR_EL2);
- update_fgt_traps(vcpu, HDFGRTR_EL2);
- update_fgt_traps(vcpu, HDFGWTR_EL2);
+ update_fgt_traps(hctxt, vcpu, kvm, HFGITR_EL2);
+ update_fgt_traps(hctxt, vcpu, kvm, HDFGRTR_EL2);
+ update_fgt_traps(hctxt, vcpu, kvm, HDFGWTR_EL2);
if (cpu_has_amu())
- update_fgt_traps(vcpu, HAFGRTR_EL2);
+ update_fgt_traps(hctxt, vcpu, kvm, HAFGRTR_EL2);
}
+#define __deactivate_fgt(htcxt, vcpu, kvm, reg) \
+ do { \
+ if ((vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) || \
+ kvm->arch.fgu[reg_to_fgt_group_id(reg)]) \
+ write_sysreg_s(ctxt_sys_reg(hctxt, reg), \
+ SYS_ ## reg); \
+ } while(0)
+
static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu *vcpu)
{
struct kvm_cpu_context *hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
+ struct kvm *kvm = kern_hyp_va(vcpu->kvm);
if (!cpus_have_final_cap(ARM64_HAS_FGT))
return;
@@ -193,15 +237,12 @@ static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu *vcpu)
write_sysreg_s(ctxt_sys_reg(hctxt, HFGRTR_EL2), SYS_HFGRTR_EL2);
write_sysreg_s(ctxt_sys_reg(hctxt, HFGWTR_EL2), SYS_HFGWTR_EL2);
- if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
- return;
-
- write_sysreg_s(ctxt_sys_reg(hctxt, HFGITR_EL2), SYS_HFGITR_EL2);
- write_sysreg_s(ctxt_sys_reg(hctxt, HDFGRTR_EL2), SYS_HDFGRTR_EL2);
- write_sysreg_s(ctxt_sys_reg(hctxt, HDFGWTR_EL2), SYS_HDFGWTR_EL2);
+ __deactivate_fgt(hctxt, vcpu, kvm, HFGITR_EL2);
+ __deactivate_fgt(hctxt, vcpu, kvm, HDFGRTR_EL2);
+ __deactivate_fgt(hctxt, vcpu, kvm, HDFGWTR_EL2);
if (cpu_has_amu())
- write_sysreg_s(ctxt_sys_reg(hctxt, HAFGRTR_EL2), SYS_HAFGRTR_EL2);
+ __deactivate_fgt(hctxt, vcpu, kvm, HAFGRTR_EL2);
}
static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
--
2.39.2
* [PATCH v2 19/25] KVM: arm64: Move existing feature disabling over to FGU infrastructure
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
` (17 preceding siblings ...)
2024-01-30 20:45 ` [PATCH v2 18/25] KVM: arm64: Propagate and handle Fine-Grained UNDEF bits Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 20/25] KVM: arm64: Streamline save/restore of HFG[RW]TR_EL2 Marc Zyngier
` (5 subsequent siblings)
24 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
We already trap a bunch of existing features for the purpose of
disabling them (MAIR2, POR, ACCDATA, SME...).
Let's move them over to our brand new FGU infrastructure.
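The shape of the new infrastructure is a once-per-VM initialisation
guarded by the config lock, along these lines (a userspace sketch; names
are illustrative):

#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct vm {
    pthread_mutex_t config_lock;
    bool fgu_initialized;
    uint64_t fgu;
};

/* Populate the VM-wide FGU bits exactly once, on first vcpu run. */
static void init_fgu(struct vm *vm, uint64_t undef_bits)
{
    pthread_mutex_lock(&vm->config_lock);
    if (!vm->fgu_initialized) {
        vm->fgu |= undef_bits;    /* e.g. the SME/PIE/ACCDATA bits */
        vm->fgu_initialized = true;
    }
    pthread_mutex_unlock(&vm->config_lock);
}

int main(void)
{
    struct vm vm = { .config_lock = PTHREAD_MUTEX_INITIALIZER };

    init_fgu(&vm, 0xff);
    init_fgu(&vm, 0x01);    /* second call is a no-op */
    printf("fgu=%#llx\n", (unsigned long long)vm.fgu);
    return 0;
}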
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 4 ++++
arch/arm64/kvm/arm.c | 6 ++++++
arch/arm64/kvm/hyp/include/hyp/switch.h | 17 +++--------------
arch/arm64/kvm/sys_regs.c | 23 +++++++++++++++++++++++
4 files changed, 36 insertions(+), 14 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a7c3b9d7ec73..eff894d59590 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -297,6 +297,8 @@ struct kvm_arch {
#define KVM_ARCH_FLAG_TIMER_PPIS_IMMUTABLE 6
/* Initial ID reg values loaded */
#define KVM_ARCH_FLAG_ID_REGS_INITIALIZED 7
+ /* Fine-Grained UNDEF initialised */
+#define KVM_ARCH_FLAG_FGU_INITIALIZED 8
unsigned long flags;
/* VM-wide vCPU feature set */
@@ -1107,6 +1109,8 @@ int __init populate_nv_trap_config(void);
bool lock_all_vcpus(struct kvm *kvm);
void unlock_all_vcpus(struct kvm *kvm);
+void kvm_init_sysreg(struct kvm_vcpu *);
+
/* MMIO helpers */
void kvm_mmio_write_buf(void *buf, unsigned int len, unsigned long data);
unsigned long kvm_mmio_read_buf(const void *buf, unsigned int len);
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index c063e84fc72c..9f806c9b7d5d 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -675,6 +675,12 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
return ret;
}
+ /*
+ * This needs to happen after NV has imposed its own restrictions on
+ * the feature set
+ */
+ kvm_init_sysreg(vcpu);
+
ret = kvm_timer_enable(vcpu);
if (ret)
return ret;
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index a09149fd91ed..245f9c1ca666 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -157,7 +157,7 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
{
struct kvm_cpu_context *hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
struct kvm *kvm = kern_hyp_va(vcpu->kvm);
- u64 r_clr = 0, w_clr = 0, r_set = 0, w_set = 0, tmp;
+ u64 r_clr = 0, w_clr = 0, r_set = 0, w_set = 0;
u64 r_val, w_val;
CHECK_FGT_MASKS(HFGRTR_EL2);
@@ -174,13 +174,6 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
ctxt_sys_reg(hctxt, HFGRTR_EL2) = read_sysreg_s(SYS_HFGRTR_EL2);
ctxt_sys_reg(hctxt, HFGWTR_EL2) = read_sysreg_s(SYS_HFGWTR_EL2);
- if (cpus_have_final_cap(ARM64_SME)) {
- tmp = HFGxTR_EL2_nSMPRI_EL1_MASK | HFGxTR_EL2_nTPIDR2_EL0_MASK;
-
- r_clr |= tmp;
- w_clr |= tmp;
- }
-
/*
* Trap guest writes to TCR_EL1 to prevent it from enabling HA or HD.
*/
@@ -195,15 +188,11 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
compute_undef_clr_set(vcpu, kvm, HFGRTR_EL2, r_clr, r_set);
compute_undef_clr_set(vcpu, kvm, HFGWTR_EL2, w_clr, w_set);
- /* The default to trap everything not handled or supported in KVM. */
- tmp = HFGxTR_EL2_nAMAIR2_EL1 | HFGxTR_EL2_nMAIR2_EL1 | HFGxTR_EL2_nS2POR_EL1 |
- HFGxTR_EL2_nPOR_EL1 | HFGxTR_EL2_nPOR_EL0 | HFGxTR_EL2_nACCDATA_EL1;
-
- r_val = __HFGRTR_EL2_nMASK & ~tmp;
+ r_val = __HFGRTR_EL2_nMASK;
r_val |= r_set;
r_val &= ~r_clr;
- w_val = __HFGWTR_EL2_nMASK & ~tmp;
+ w_val = __HFGWTR_EL2_nMASK;
w_val |= w_set;
w_val &= ~w_clr;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 938df5dc3684..39e7c7f74717 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3942,6 +3942,29 @@ int kvm_vm_ioctl_get_reg_writable_masks(struct kvm *kvm, struct reg_mask_range *
return 0;
}
+void kvm_init_sysreg(struct kvm_vcpu *vcpu)
+{
+ struct kvm *kvm = vcpu->kvm;
+
+ mutex_lock(&kvm->arch.config_lock);
+
+ if (test_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags))
+ goto out;
+
+ kvm->arch.fgu[HFGxTR_GROUP] = (HFGxTR_EL2_nAMAIR2_EL1 |
+ HFGxTR_EL2_nMAIR2_EL1 |
+ HFGxTR_EL2_nS2POR_EL1 |
+ HFGxTR_EL2_nPOR_EL1 |
+ HFGxTR_EL2_nPOR_EL0 |
+ HFGxTR_EL2_nACCDATA_EL1 |
+ HFGxTR_EL2_nSMPRI_EL1_MASK |
+ HFGxTR_EL2_nTPIDR2_EL0_MASK);
+
+ set_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags);
+out:
+ mutex_unlock(&kvm->arch.config_lock);
+}
+
int __init kvm_sys_reg_table_init(void)
{
struct sys_reg_params params;
--
2.39.2
* [PATCH v2 20/25] KVM: arm64: Streamline save/restore of HFG[RW]TR_EL2
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
` (18 preceding siblings ...)
2024-01-30 20:45 ` [PATCH v2 19/25] KVM: arm64: Move existing feature disabling over to FGU infrastructure Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 21/25] KVM: arm64: Make TLBI OS/Range UNDEF if not advertised to the guest Marc Zyngier
` (4 subsequent siblings)
24 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
The way we save/restore HFG[RW]TR_EL2 can now be simplified, and
the Ampere erratum hack is the only thing that still stands out.
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/hyp/include/hyp/switch.h | 42 ++++++-------------------
1 file changed, 9 insertions(+), 33 deletions(-)
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 245f9c1ca666..2d5891518006 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -157,8 +157,6 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
{
struct kvm_cpu_context *hctxt = &this_cpu_ptr(&kvm_host_data)->host_ctxt;
struct kvm *kvm = kern_hyp_va(vcpu->kvm);
- u64 r_clr = 0, w_clr = 0, r_set = 0, w_set = 0;
- u64 r_val, w_val;
CHECK_FGT_MASKS(HFGRTR_EL2);
CHECK_FGT_MASKS(HFGWTR_EL2);
@@ -171,34 +169,10 @@ static inline void __activate_traps_hfgxtr(struct kvm_vcpu *vcpu)
if (!cpus_have_final_cap(ARM64_HAS_FGT))
return;
- ctxt_sys_reg(hctxt, HFGRTR_EL2) = read_sysreg_s(SYS_HFGRTR_EL2);
- ctxt_sys_reg(hctxt, HFGWTR_EL2) = read_sysreg_s(SYS_HFGWTR_EL2);
-
- /*
- * Trap guest writes to TCR_EL1 to prevent it from enabling HA or HD.
- */
- if (cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38))
- w_set |= HFGxTR_EL2_TCR_EL1_MASK;
-
- if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
- compute_clr_set(vcpu, HFGRTR_EL2, r_clr, r_set);
- compute_clr_set(vcpu, HFGWTR_EL2, w_clr, w_set);
- }
-
- compute_undef_clr_set(vcpu, kvm, HFGRTR_EL2, r_clr, r_set);
- compute_undef_clr_set(vcpu, kvm, HFGWTR_EL2, w_clr, w_set);
-
- r_val = __HFGRTR_EL2_nMASK;
- r_val |= r_set;
- r_val &= ~r_clr;
-
- w_val = __HFGWTR_EL2_nMASK;
- w_val |= w_set;
- w_val &= ~w_clr;
-
- write_sysreg_s(r_val, SYS_HFGRTR_EL2);
- write_sysreg_s(w_val, SYS_HFGWTR_EL2);
-
+ update_fgt_traps(hctxt, vcpu, kvm, HFGRTR_EL2);
+ update_fgt_traps_cs(hctxt, vcpu, kvm, HFGWTR_EL2, 0,
+ cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38) ?
+ HFGxTR_EL2_TCR_EL1_MASK : 0);
update_fgt_traps(hctxt, vcpu, kvm, HFGITR_EL2);
update_fgt_traps(hctxt, vcpu, kvm, HDFGRTR_EL2);
update_fgt_traps(hctxt, vcpu, kvm, HDFGWTR_EL2);
@@ -223,9 +197,11 @@ static inline void __deactivate_traps_hfgxtr(struct kvm_vcpu *vcpu)
if (!cpus_have_final_cap(ARM64_HAS_FGT))
return;
- write_sysreg_s(ctxt_sys_reg(hctxt, HFGRTR_EL2), SYS_HFGRTR_EL2);
- write_sysreg_s(ctxt_sys_reg(hctxt, HFGWTR_EL2), SYS_HFGWTR_EL2);
-
+ __deactivate_fgt(hctxt, vcpu, kvm, HFGRTR_EL2);
+ if (cpus_have_final_cap(ARM64_WORKAROUND_AMPERE_AC03_CPU_38))
+ write_sysreg_s(ctxt_sys_reg(hctxt, HFGWTR_EL2), SYS_HFGWTR_EL2);
+ else
+ __deactivate_fgt(hctxt, vcpu, kvm, HFGWTR_EL2);
__deactivate_fgt(hctxt, vcpu, kvm, HFGITR_EL2);
__deactivate_fgt(hctxt, vcpu, kvm, HDFGRTR_EL2);
__deactivate_fgt(hctxt, vcpu, kvm, HDFGWTR_EL2);
--
2.39.2
* [PATCH v2 21/25] KVM: arm64: Make TLBI OS/Range UNDEF if not advertised to the guest
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
` (19 preceding siblings ...)
2024-01-30 20:45 ` [PATCH v2 20/25] KVM: arm64: Streamline save/restore of HFG[RW]TR_EL2 Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
2024-01-31 15:05 ` Joey Gouly
2024-01-30 20:45 ` [PATCH v2 22/25] KVM: arm64: Make PIR{,E0}_EL1 UNDEF if S1PIE is " Marc Zyngier
` (3 subsequent siblings)
24 siblings, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
Outer Shareable and Range TLBI instructions shouldn't be made available
to the guest if they are not advertised. Use FGU to disable those,
and set HCR_EL2.TTLBOS in the case where the host doesn't have FGT. Note
that in the latter case, we cannot efficiently disable TLBI Range
instructions, as this would require trapping all TLBIs.
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/sys_regs.c | 34 ++++++++++++++++++++++++++++++++++
1 file changed, 34 insertions(+)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 39e7c7f74717..f07ee7c89822 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3948,6 +3948,14 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
mutex_lock(&kvm->arch.config_lock);
+ /*
+ * In the absence of FGT, we cannot independently trap TLBI
+ * Range instructions. This isn't great, but trapping all
+ * TLBIs would be far worse. Live with it...
+ */
+ if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
+ vcpu->arch.hcr_el2 |= HCR_TTLBOS;
+
if (test_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags))
goto out;
@@ -3960,6 +3968,32 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
HFGxTR_EL2_nSMPRI_EL1_MASK |
HFGxTR_EL2_nTPIDR2_EL0_MASK);
+ if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
+ kvm->arch.fgu[HFGITR_GROUP] |= (HFGITR_EL2_TLBIRVAALE1OS|
+ HFGITR_EL2_TLBIRVALE1OS |
+ HFGITR_EL2_TLBIRVAAE1OS |
+ HFGITR_EL2_TLBIRVAE1OS |
+ HFGITR_EL2_TLBIVAALE1OS |
+ HFGITR_EL2_TLBIVALE1OS |
+ HFGITR_EL2_TLBIVAAE1OS |
+ HFGITR_EL2_TLBIASIDE1OS |
+ HFGITR_EL2_TLBIVAE1OS |
+ HFGITR_EL2_TLBIVMALLE1OS);
+
+ if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, RANGE))
+ kvm->arch.fgu[HFGITR_GROUP] |= (HFGITR_EL2_TLBIRVAALE1 |
+ HFGITR_EL2_TLBIRVALE1 |
+ HFGITR_EL2_TLBIRVAAE1 |
+ HFGITR_EL2_TLBIRVAE1 |
+ HFGITR_EL2_TLBIRVAALE1IS|
+ HFGITR_EL2_TLBIRVALE1IS |
+ HFGITR_EL2_TLBIRVAAE1IS |
+ HFGITR_EL2_TLBIRVAE1IS |
+ HFGITR_EL2_TLBIRVAALE1OS|
+ HFGITR_EL2_TLBIRVALE1OS |
+ HFGITR_EL2_TLBIRVAAE1OS |
+ HFGITR_EL2_TLBIRVAE1OS);
+
set_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags);
out:
mutex_unlock(&kvm->arch.config_lock);
--
2.39.2
* [PATCH v2 22/25] KVM: arm64: Make PIR{,E0}_EL1 UNDEF if S1PIE is not advertised to the guest
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
` (20 preceding siblings ...)
2024-01-30 20:45 ` [PATCH v2 21/25] KVM: arm64: Make TLBI OS/Range UNDEF if not advertised to the guest Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
2024-01-31 14:46 ` Joey Gouly
2024-01-30 20:45 ` [PATCH v2 23/25] KVM: arm64: Make AMU sysreg UNDEF if FEAT_AMU " Marc Zyngier
` (2 subsequent siblings)
24 siblings, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
As part of the ongoing effort to honor the guest configuration,
add the necessary checks to make PIR_EL1 and PIRE0_EL1 UNDEF if not
advertised to the guest, and avoid context switching them.
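The context-switch side follows a simple pattern: gate each optional
register group on both the host capability and the guest's view of the ID
registers. A minimal sketch of that double gate (stub predicates, not the
kernel helpers):

#include <assert.h>
#include <stdbool.h>

struct ctxt {
    bool host_has_s1pie;     /* cpus_have_final_cap() stand-in */
    bool guest_has_s1pie;    /* kvm_has_feat() stand-in */
    bool pir_saved;
};

static bool ctxt_has_s1pie(const struct ctxt *c)
{
    if (!c->host_has_s1pie)
        return false;    /* cheap host-wide check first */
    return c->guest_has_s1pie;
}

static void save_el1_state(struct ctxt *c)
{
    /* ... unconditional registers elided ... */
    if (ctxt_has_s1pie(c))
        c->pir_saved = true;    /* would read PIR_EL1/PIRE0_EL1 here */
}

int main(void)
{
    struct ctxt c = { .host_has_s1pie = true, .guest_has_s1pie = false };

    save_el1_state(&c);
    assert(!c.pir_saved);    /* hidden from the guest: don't touch it */
    return 0;
}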
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 24 +++++++++++++++++++---
arch/arm64/kvm/sys_regs.c | 4 ++++
2 files changed, 25 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
index bb6b571ec627..4be6a7fa0070 100644
--- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
+++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
@@ -27,16 +27,34 @@ static inline void __sysreg_save_user_state(struct kvm_cpu_context *ctxt)
ctxt_sys_reg(ctxt, TPIDRRO_EL0) = read_sysreg(tpidrro_el0);
}
-static inline bool ctxt_has_mte(struct kvm_cpu_context *ctxt)
+static inline struct kvm_vcpu *ctxt_to_vcpu(struct kvm_cpu_context *ctxt)
{
struct kvm_vcpu *vcpu = ctxt->__hyp_running_vcpu;
if (!vcpu)
vcpu = container_of(ctxt, struct kvm_vcpu, arch.ctxt);
+ return vcpu;
+}
+
+static inline bool ctxt_has_mte(struct kvm_cpu_context *ctxt)
+{
+ struct kvm_vcpu *vcpu = ctxt_to_vcpu(ctxt);
+
return kvm_has_mte(kern_hyp_va(vcpu->kvm));
}
+static inline bool ctxt_has_s1pie(struct kvm_cpu_context *ctxt)
+{
+ struct kvm_vcpu *vcpu;
+
+ if (!cpus_have_final_cap(ARM64_HAS_S1PIE))
+ return false;
+
+ vcpu = ctxt_to_vcpu(ctxt);
+ return kvm_has_feat(kern_hyp_va(vcpu->kvm), ID_AA64MMFR3_EL1, S1PIE, IMP);
+}
+
static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt)
{
ctxt_sys_reg(ctxt, SCTLR_EL1) = read_sysreg_el1(SYS_SCTLR);
@@ -55,7 +73,7 @@ static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt)
ctxt_sys_reg(ctxt, CONTEXTIDR_EL1) = read_sysreg_el1(SYS_CONTEXTIDR);
ctxt_sys_reg(ctxt, AMAIR_EL1) = read_sysreg_el1(SYS_AMAIR);
ctxt_sys_reg(ctxt, CNTKCTL_EL1) = read_sysreg_el1(SYS_CNTKCTL);
- if (cpus_have_final_cap(ARM64_HAS_S1PIE)) {
+ if (ctxt_has_s1pie(ctxt)) {
ctxt_sys_reg(ctxt, PIR_EL1) = read_sysreg_el1(SYS_PIR);
ctxt_sys_reg(ctxt, PIRE0_EL1) = read_sysreg_el1(SYS_PIRE0);
}
@@ -131,7 +149,7 @@ static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
write_sysreg_el1(ctxt_sys_reg(ctxt, CONTEXTIDR_EL1), SYS_CONTEXTIDR);
write_sysreg_el1(ctxt_sys_reg(ctxt, AMAIR_EL1), SYS_AMAIR);
write_sysreg_el1(ctxt_sys_reg(ctxt, CNTKCTL_EL1), SYS_CNTKCTL);
- if (cpus_have_final_cap(ARM64_HAS_S1PIE)) {
+ if (ctxt_has_s1pie(ctxt)) {
write_sysreg_el1(ctxt_sys_reg(ctxt, PIR_EL1), SYS_PIR);
write_sysreg_el1(ctxt_sys_reg(ctxt, PIRE0_EL1), SYS_PIRE0);
}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f07ee7c89822..da9db99c77e7 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3994,6 +3994,10 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
HFGITR_EL2_TLBIRVAAE1OS |
HFGITR_EL2_TLBIRVAE1OS);
+ if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, S1PIE, IMP))
+ kvm->arch.fgu[HFGxTR_GROUP] |= (HFGxTR_EL2_nPIRE0_EL1 |
+ HFGxTR_EL2_nPIR_EL1);
+
set_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags);
out:
mutex_unlock(&kvm->arch.config_lock);
--
2.39.2
* [PATCH v2 23/25] KVM: arm64: Make AMU sysreg UNDEF if FEAT_AMU is not advertised to the guest
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
` (21 preceding siblings ...)
2024-01-30 20:45 ` [PATCH v2 22/25] KVM: arm64: Make PIR{,E0}_EL1 UNDEF if S1PIE is " Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 24/25] KVM: arm64: Make FEAT_MOPS UNDEF if " Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 25/25] KVM: arm64: Add debugfs file for guest's ID registers Marc Zyngier
24 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
No AMU? No AMU! If we see an AMU-related trap, let's turn it into
an UNDEF!
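The ~(RES0 | RES1) expression below is worth spelling out: every bit of
HAFGRTR_EL2 that isn't reserved is a trap bit, so the complement of the
reserved masks traps (and now UNDEFs) the whole AMU register set in one
go. A toy demonstration with placeholder masks:

#include <assert.h>
#include <stdint.h>

/* Placeholder reserved-bit masks for a trap register. */
#define RES0    UINT64_C(0xffff000000000000)
#define RES1    UINT64_C(0x0000000000000001)

int main(void)
{
    /* Everything that isn't reserved is a trap bit: set them all. */
    uint64_t fgu = ~(RES0 | RES1);

    assert(!(fgu & (RES0 | RES1)));
    return 0;
}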
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/kvm/sys_regs.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index da9db99c77e7..38ed47bd29db 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3998,6 +3998,10 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
kvm->arch.fgu[HFGxTR_GROUP] |= (HFGxTR_EL2_nPIRE0_EL1 |
HFGxTR_EL2_nPIR_EL1);
+ if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, IMP))
+ kvm->arch.fgu[HAFGRTR_GROUP] |= ~(HAFGRTR_EL2_RES0 |
+ HAFGRTR_EL2_RES1);
+
set_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags);
out:
mutex_unlock(&kvm->arch.config_lock);
--
2.39.2
* [PATCH v2 24/25] KVM: arm64: Make FEAT_MOPS UNDEF if not advertised to the guest
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
` (22 preceding siblings ...)
2024-01-30 20:45 ` [PATCH v2 23/25] KVM: arm64: Make AMU sysreg UNDEF if FEAT_AMU " Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
2024-01-31 15:02 ` Joey Gouly
2024-01-30 20:45 ` [PATCH v2 25/25] KVM: arm64: Add debugfs file for guest's ID registers Marc Zyngier
24 siblings, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
We unconditionally enable FEAT_MOPS, which is obviously wrong.
Let's only do that when it is advertised to the guest, which means
we need to rely on a per-vcpu HCRX_EL2 shadow register.
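The shadow is computed once, from the guest's ID registers, instead of
being a build-time constant. Roughly (bit positions are placeholders, not
the architected ones):

#include <stdint.h>
#include <stdio.h>

#define HCRX_SMPME     (UINT64_C(1) << 5)     /* placeholder positions */
#define HCRX_TCR2EN    (UINT64_C(1) << 14)
#define HCRX_MSCEN     (UINT64_C(1) << 7)
#define HCRX_MCE2      (UINT64_C(1) << 10)

struct vcpu { uint64_t hcrx_el2; };

static void init_hcrx(struct vcpu *v, int guest_has_mops)
{
    v->hcrx_el2 = HCRX_SMPME | HCRX_TCR2EN;
    if (guest_has_mops)    /* only expose MOPS when advertised */
        v->hcrx_el2 |= HCRX_MSCEN | HCRX_MCE2;
}

int main(void)
{
    struct vcpu v;

    init_hcrx(&v, 0);
    printf("hcrx=%#llx\n", (unsigned long long)v.hcrx_el2);
    return 0;
}

The world-switch code then loads the shadow verbatim, as the switch.h
hunk below shows.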
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_arm.h | 4 +---
arch/arm64/include/asm/kvm_host.h | 1 +
arch/arm64/kvm/hyp/include/hyp/switch.h | 2 +-
arch/arm64/kvm/sys_regs.c | 7 +++++++
4 files changed, 10 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 3c6f8ba1e479..a1769e415d72 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -102,9 +102,7 @@
#define HCR_HOST_NVHE_PROTECTED_FLAGS (HCR_HOST_NVHE_FLAGS | HCR_TSC)
#define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
-#define HCRX_GUEST_FLAGS \
- (HCRX_EL2_SMPME | HCRX_EL2_TCR2En | \
- (cpus_have_final_cap(ARM64_HAS_MOPS) ? (HCRX_EL2_MSCEn | HCRX_EL2_MCE2) : 0))
+#define HCRX_GUEST_FLAGS (HCRX_EL2_SMPME | HCRX_EL2_TCR2En)
#define HCRX_HOST_FLAGS (HCRX_EL2_MSCEn | HCRX_EL2_TCR2En)
/* TCR_EL2 Registers bits */
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index eff894d59590..50a122d42587 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -584,6 +584,7 @@ struct kvm_vcpu_arch {
/* Values of trap registers for the guest. */
u64 hcr_el2;
+ u64 hcrx_el2;
u64 mdcr_el2;
u64 cptr_el2;
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 2d5891518006..e3fcf8c4d5b4 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -236,7 +236,7 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
if (cpus_have_final_cap(ARM64_HAS_HCX)) {
- u64 hcrx = HCRX_GUEST_FLAGS;
+ u64 hcrx = vcpu->arch.hcrx_el2;
if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
u64 clr = 0, set = 0;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 38ed47bd29db..2cb69efac1dc 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -3956,6 +3956,13 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
vcpu->arch.hcr_el2 |= HCR_TTLBOS;
+ if (cpus_have_final_cap(ARM64_HAS_HCX)) {
+ vcpu->arch.hcrx_el2 = HCRX_GUEST_FLAGS;
+
+ if (kvm_has_feat(kvm, ID_AA64ISAR2_EL1, MOPS, IMP))
+ vcpu->arch.hcrx_el2 |= (HCRX_EL2_MSCEn | HCRX_EL2_MCE2);
+ }
+
if (test_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags))
goto out;
--
2.39.2
* [PATCH v2 25/25] KVM: arm64: Add debugfs file for guest's ID registers
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
` (23 preceding siblings ...)
2024-01-30 20:45 ` [PATCH v2 24/25] KVM: arm64: Make FEAT_MOPS UNDEF if " Marc Zyngier
@ 2024-01-30 20:45 ` Marc Zyngier
24 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2024-01-30 20:45 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
Debugging ID register setup can be a complicated affair. Give the
kernel hacker a way to dump that state in an easy-to-parse format.
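For reference, the file lands in the VM's debugfs directory (typically
/sys/kernel/debug/kvm/<pid>-<fd>/idregs, assuming the usual debugfs mount
point), and a plain read prints one name/value line per ID register,
using the "%20s:\t%016llx" format visible in idregs_debug_show() below.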
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm64/include/asm/kvm_host.h | 3 ++
arch/arm64/kvm/sys_regs.c | 81 +++++++++++++++++++++++++++++++
2 files changed, 84 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 50a122d42587..ea8acc1914aa 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -319,6 +319,9 @@ struct kvm_arch {
/* PMCR_EL0.N value for the guest */
u8 pmcr_n;
+ /* Iterator for idreg debugfs */
+ u8 idreg_debugfs_iter;
+
/* Hypercall features firmware registers' descriptor */
struct kvm_smccc_features smccc_feat;
struct maple_tree smccc_filter;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 2cb69efac1dc..d08d67e2d441 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -12,6 +12,7 @@
#include <linux/bitfield.h>
#include <linux/bsearch.h>
#include <linux/cacheinfo.h>
+#include <linux/debugfs.h>
#include <linux/kvm_host.h>
#include <linux/mm.h>
#include <linux/printk.h>
@@ -3421,6 +3422,81 @@ static bool emulate_sys_reg(struct kvm_vcpu *vcpu,
return false;
}
+static void *idregs_debug_start(struct seq_file *s, loff_t *pos)
+{
+ struct kvm *kvm = s->private;
+ u8 *iter;
+
+ mutex_lock(&kvm->arch.config_lock);
+
+ iter = &kvm->arch.idreg_debugfs_iter;
+ if (*iter == (u8)~0) {
+ *iter = *pos;
+ if (*iter >= KVM_ARM_ID_REG_NUM)
+ iter = NULL;
+ } else {
+ iter = ERR_PTR(-EBUSY);
+ }
+
+ mutex_unlock(&kvm->arch.config_lock);
+
+ return iter;
+}
+
+static void *idregs_debug_next(struct seq_file *s, void *v, loff_t *pos)
+{
+ struct kvm *kvm = s->private;
+
+ (*pos)++;
+
+ if ((kvm->arch.idreg_debugfs_iter + 1) < KVM_ARM_ID_REG_NUM) {
+ kvm->arch.idreg_debugfs_iter++;
+
+ return &kvm->arch.idreg_debugfs_iter;
+ }
+
+ return NULL;
+}
+
+static void idregs_debug_stop(struct seq_file *s, void *v)
+{
+ struct kvm *kvm = s->private;
+
+ if (IS_ERR(v))
+ return;
+
+ mutex_lock(&kvm->arch.config_lock);
+
+ kvm->arch.idreg_debugfs_iter = ~0;
+
+ mutex_unlock(&kvm->arch.config_lock);
+}
+
+static int idregs_debug_show(struct seq_file *s, void *v)
+{
+ struct kvm *kvm = s->private;
+ const struct sys_reg_desc *desc;
+
+ desc = first_idreg + kvm->arch.idreg_debugfs_iter;
+
+ if (!desc->name)
+ return 0;
+
+ seq_printf(s, "%20s:\t%016llx\n",
+ desc->name, IDREG(kvm, IDX_IDREG(kvm->arch.idreg_debugfs_iter)));
+
+ return 0;
+}
+
+static const struct seq_operations idregs_debug_sops = {
+ .start = idregs_debug_start,
+ .next = idregs_debug_next,
+ .stop = idregs_debug_stop,
+ .show = idregs_debug_show,
+};
+
+DEFINE_SEQ_ATTRIBUTE(idregs_debug);
+
static void kvm_reset_id_regs(struct kvm_vcpu *vcpu)
{
const struct sys_reg_desc *idreg = first_idreg;
@@ -3440,6 +3516,11 @@ static void kvm_reset_id_regs(struct kvm_vcpu *vcpu)
id = reg_to_encoding(idreg);
}
+ kvm->arch.idreg_debugfs_iter = ~0;
+
+ debugfs_create_file("idregs", 0444, kvm->debugfs_dentry, kvm,
+ &idregs_debug_fops);
+
set_bit(KVM_ARCH_FLAG_ID_REGS_INITIALIZED, &kvm->arch.flags);
}
--
2.39.2
* Re: [PATCH v2 22/25] KVM: arm64: Make PIR{,E0}_EL1 UNDEF if S1PIE is not advertised to the guest
2024-01-30 20:45 ` [PATCH v2 22/25] KVM: arm64: Make PIR{,E0}_EL1 UNDEF if S1PIE is " Marc Zyngier
@ 2024-01-31 14:46 ` Joey Gouly
0 siblings, 0 replies; 48+ messages in thread
From: Joey Gouly @ 2024-01-31 14:46 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
Mark Brown
Afternoon,
On Tue, Jan 30, 2024 at 08:45:29PM +0000, Marc Zyngier wrote:
> As part of the ongoing effort to honor the guest configuration,
> add the necessary checks to make PIR_EL1 and co UNDEF if not
> advertised to the guest, and avoid context switching them.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h | 24 +++++++++++++++++++---
> arch/arm64/kvm/sys_regs.c | 4 ++++
> 2 files changed, 25 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
> index bb6b571ec627..4be6a7fa0070 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/sysreg-sr.h
> @@ -27,16 +27,34 @@ static inline void __sysreg_save_user_state(struct kvm_cpu_context *ctxt)
> ctxt_sys_reg(ctxt, TPIDRRO_EL0) = read_sysreg(tpidrro_el0);
> }
>
> -static inline bool ctxt_has_mte(struct kvm_cpu_context *ctxt)
> +static inline struct kvm_vcpu *ctxt_to_vcpu(struct kvm_cpu_context *ctxt)
> {
> struct kvm_vcpu *vcpu = ctxt->__hyp_running_vcpu;
>
> if (!vcpu)
> vcpu = container_of(ctxt, struct kvm_vcpu, arch.ctxt);
>
> + return vcpu;
> +}
> +
> +static inline bool ctxt_has_mte(struct kvm_cpu_context *ctxt)
> +{
> + struct kvm_vcpu *vcpu = ctxt_to_vcpu(ctxt);
> +
> return kvm_has_mte(kern_hyp_va(vcpu->kvm));
> }
>
> +static inline bool ctxt_has_s1pie(struct kvm_cpu_context *ctxt)
> +{
> + struct kvm_vcpu *vcpu;
> +
> + if (!cpus_have_final_cap(ARM64_HAS_S1PIE))
> + return false;
> +
> + vcpu = ctxt_to_vcpu(ctxt);
> + return kvm_has_feat(kern_hyp_va(vcpu->kvm), ID_AA64MMFR3_EL1, S1PIE, IMP);
> +}
> +
> static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt)
> {
> ctxt_sys_reg(ctxt, SCTLR_EL1) = read_sysreg_el1(SYS_SCTLR);
> @@ -55,7 +73,7 @@ static inline void __sysreg_save_el1_state(struct kvm_cpu_context *ctxt)
> ctxt_sys_reg(ctxt, CONTEXTIDR_EL1) = read_sysreg_el1(SYS_CONTEXTIDR);
> ctxt_sys_reg(ctxt, AMAIR_EL1) = read_sysreg_el1(SYS_AMAIR);
> ctxt_sys_reg(ctxt, CNTKCTL_EL1) = read_sysreg_el1(SYS_CNTKCTL);
> - if (cpus_have_final_cap(ARM64_HAS_S1PIE)) {
> + if (ctxt_has_s1pie(ctxt)) {
> ctxt_sys_reg(ctxt, PIR_EL1) = read_sysreg_el1(SYS_PIR);
> ctxt_sys_reg(ctxt, PIRE0_EL1) = read_sysreg_el1(SYS_PIRE0);
> }
> @@ -131,7 +149,7 @@ static inline void __sysreg_restore_el1_state(struct kvm_cpu_context *ctxt)
> write_sysreg_el1(ctxt_sys_reg(ctxt, CONTEXTIDR_EL1), SYS_CONTEXTIDR);
> write_sysreg_el1(ctxt_sys_reg(ctxt, AMAIR_EL1), SYS_AMAIR);
> write_sysreg_el1(ctxt_sys_reg(ctxt, CNTKCTL_EL1), SYS_CNTKCTL);
> - if (cpus_have_final_cap(ARM64_HAS_S1PIE)) {
> + if (ctxt_has_s1pie(ctxt)) {
> write_sysreg_el1(ctxt_sys_reg(ctxt, PIR_EL1), SYS_PIR);
> write_sysreg_el1(ctxt_sys_reg(ctxt, PIRE0_EL1), SYS_PIRE0);
> }
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index f07ee7c89822..da9db99c77e7 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -3994,6 +3994,10 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
> HFGITR_EL2_TLBIRVAAE1OS |
> HFGITR_EL2_TLBIRVAE1OS);
>
> + if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, S1PIE, IMP))
> + kvm->arch.fgu[HFGxTR_GROUP] |= (HFGxTR_EL2_nPIRE0_EL1 |
> + HFGxTR_EL2_nPIR_EL1);
> +
> set_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags);
> out:
> mutex_unlock(&kvm->arch.config_lock);
Fixed the lopsidedness between save/restore from v1, and added a helper for getting the vcpu.
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Thanks,
Joey
* Re: [PATCH v2 03/25] KVM: arm64: nv: Add sanitising to VNCR-backed sysregs
2024-01-30 20:45 ` [PATCH v2 03/25] KVM: arm64: nv: Add sanitising to VNCR-backed sysregs Marc Zyngier
@ 2024-01-31 14:52 ` Joey Gouly
0 siblings, 0 replies; 48+ messages in thread
From: Joey Gouly @ 2024-01-31 14:52 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
Mark Brown
Hello,
On Tue, Jan 30, 2024 at 08:45:10PM +0000, Marc Zyngier wrote:
> VNCR-backed "registers" are actually only memory, which means that
> there is zero control over what the guest can write, and that it
> is the hypervisor's job to actually sanitise the content of the
> backing store. Yeah, this is fun.
>
> In order to preserve some form of sanity, add a repainting mechanism
> that makes use of a per-VM set of RES0/RES1 masks, one pair per VNCR
> register. These masks get applied on access to the backing store via
> __vcpu_sys_reg(), ensuring that the state that is consumed by KVM is
> correct.
>
> So far, nothing populates these masks, but stay tuned.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/include/asm/kvm_host.h | 22 ++++++++++++++++-
> arch/arm64/kvm/arm.c | 1 +
> arch/arm64/kvm/nested.c | 41 ++++++++++++++++++++++++++++++-
> 3 files changed, 62 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index c0cf9c5f5e8d..816288e2d735 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -238,6 +238,8 @@ static inline u16 kvm_mpidr_index(struct kvm_mpidr_data *data, u64 mpidr)
> return index;
> }
>
> +struct kvm_sysreg_masks;
> +
> struct kvm_arch {
> struct kvm_s2_mmu mmu;
>
> @@ -312,6 +314,9 @@ struct kvm_arch {
> #define KVM_ARM_ID_REG_NUM (IDREG_IDX(sys_reg(3, 0, 0, 7, 7)) + 1)
> u64 id_regs[KVM_ARM_ID_REG_NUM];
>
> +	/* Masks for VNCR-backed sysregs */
> + struct kvm_sysreg_masks *sysreg_masks;
> +
> /*
> * For an untrusted host VM, 'pkvm.handle' is used to lookup
> * the associated pKVM instance in the hypervisor.
> @@ -474,6 +479,13 @@ enum vcpu_sysreg {
> NR_SYS_REGS /* Nothing after this line! */
> };
>
> +struct kvm_sysreg_masks {
> + struct {
> + u64 res0;
> + u64 res1;
> + } mask[NR_SYS_REGS - __VNCR_START__];
> +};
> +
> struct kvm_cpu_context {
> struct user_pt_regs regs; /* sp = sp_el0 */
>
> @@ -868,7 +880,15 @@ static inline u64 *__ctxt_sys_reg(const struct kvm_cpu_context *ctxt, int r)
>
> #define ctxt_sys_reg(c,r) (*__ctxt_sys_reg(c,r))
>
> -#define __vcpu_sys_reg(v,r) (ctxt_sys_reg(&(v)->arch.ctxt, (r)))
> +u64 kvm_vcpu_sanitise_vncr_reg(const struct kvm_vcpu *, enum vcpu_sysreg);
> +#define __vcpu_sys_reg(v,r) \
> + (*({ \
> + const struct kvm_cpu_context *ctxt = &(v)->arch.ctxt; \
> + u64 *__r = __ctxt_sys_reg(ctxt, (r)); \
> + if (vcpu_has_nv((v)) && (r) >= __VNCR_START__) \
> + *__r = kvm_vcpu_sanitise_vncr_reg((v), (r)); \
> + __r; \
> + }))
>
> u64 vcpu_read_sys_reg(const struct kvm_vcpu *vcpu, int reg);
> void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg);
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index a25265aca432..c063e84fc72c 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -206,6 +206,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
> pkvm_destroy_hyp_vm(kvm);
>
> kfree(kvm->arch.mpidr_data);
> + kfree(kvm->arch.sysreg_masks);
> kvm_destroy_vcpus(kvm);
>
> kvm_unshare_hyp(kvm, kvm + 1);
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index d55e809e26cb..c976cd4b8379 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -163,15 +163,54 @@ static u64 limit_nv_id_reg(u32 id, u64 val)
>
> return val;
> }
> +
> +u64 kvm_vcpu_sanitise_vncr_reg(const struct kvm_vcpu *vcpu, enum vcpu_sysreg sr)
> +{
> + u64 v = ctxt_sys_reg(&vcpu->arch.ctxt, sr);
> + struct kvm_sysreg_masks *masks;
> +
> + masks = vcpu->kvm->arch.sysreg_masks;
> +
> + if (masks) {
> + sr -= __VNCR_START__;
> +
> + v &= ~masks->mask[sr].res0;
> + v |= masks->mask[sr].res1;
> + }
> +
> + return v;
> +}
> +
> +static void __maybe_unused set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u64 res1)
> +{
> + int i = sr - __VNCR_START__;
> +
> + kvm->arch.sysreg_masks->mask[i].res0 = res0;
> + kvm->arch.sysreg_masks->mask[i].res1 = res1;
> +}
> +
> int kvm_init_nv_sysregs(struct kvm *kvm)
> {
> + int ret = 0;
> +
> mutex_lock(&kvm->arch.config_lock);
>
> + if (kvm->arch.sysreg_masks)
> + goto out;
> +
> + kvm->arch.sysreg_masks = kzalloc(sizeof(*(kvm->arch.sysreg_masks)),
> + GFP_KERNEL);
> + if (!kvm->arch.sysreg_masks) {
> + ret = -ENOMEM;
> + goto out;
> + }
> +
> for (int i = 0; i < KVM_ARM_ID_REG_NUM; i++)
> kvm->arch.id_regs[i] = limit_nv_id_reg(IDX_IDREG(i),
> kvm->arch.id_regs[i]);
>
> +out:
> mutex_unlock(&kvm->arch.config_lock);
>
> - return 0;
> + return ret;
> }
Used the suggestion from v1 to use vcpu_has_nv() and collapse the macros.
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Thanks,
Joey
* Re: [PATCH v2 10/25] KVM: arm64: nv: Turn encoding ranges into discrete XArray stores
2024-01-30 20:45 ` [PATCH v2 10/25] KVM: arm64: nv: Turn encoding ranges into discrete XArray stores Marc Zyngier
@ 2024-01-31 14:57 ` Joey Gouly
0 siblings, 0 replies; 48+ messages in thread
From: Joey Gouly @ 2024-01-31 14:57 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
Mark Brown
On Tue, Jan 30, 2024 at 08:45:17PM +0000, Marc Zyngier wrote:
> In order to be able to store different values for each member of an
> encoding range, replace xa_store_range() calls with discrete
> xa_store() calls and an encoding iterator.
>
> We end up using a bit more memory, but we gain some flexibility
> that we will make use of shortly.
>
> Take this opportunity to tidy up the error handling path.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/kvm/emulate-nested.c | 49 ++++++++++++++++++++++++---------
> 1 file changed, 36 insertions(+), 13 deletions(-)
>
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index ef46c2e45307..42f5b4483c80 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -1757,6 +1757,28 @@ static __init void print_nv_trap_error(const struct encoding_to_trap_config *tc,
> err);
> }
>
> +static u32 encoding_next(u32 encoding)
> +{
> + u8 op0, op1, crn, crm, op2;
> +
> + op0 = sys_reg_Op0(encoding);
> + op1 = sys_reg_Op1(encoding);
> + crn = sys_reg_CRn(encoding);
> + crm = sys_reg_CRm(encoding);
> + op2 = sys_reg_Op2(encoding);
> +
> + if (op2 < Op2_mask)
> + return sys_reg(op0, op1, crn, crm, op2 + 1);
> + if (crm < CRm_mask)
> + return sys_reg(op0, op1, crn, crm + 1, 0);
> + if (crn < CRn_mask)
> + return sys_reg(op0, op1, crn + 1, 0, 0);
> + if (op1 < Op1_mask)
> + return sys_reg(op0, op1 + 1, 0, 0, 0);
> +
> + return sys_reg(op0 + 1, 0, 0, 0, 0);
> +}
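For readers following along: encoding_next() behaves like an odometer
over the (Op0, Op1, CRn, CRm, Op2) tuple, so the loop in the patch
visits every discrete encoding in a range. A rough standalone model;
the field packing here is simplified and is not the kernel's actual
bit layout:

#include <stdint.h>
#include <stdio.h>

/* Simplified packing: op0 | op1 | crn | crm | op2 */
#define ENC(op0, op1, crn, crm, op2) \
	(((uint32_t)(op0) << 14) | ((op1) << 11) | ((crn) << 7) | ((crm) << 3) | (op2))

static uint32_t enc_next(uint32_t e)
{
	uint8_t op0 = e >> 14, op1 = (e >> 11) & 7, crn = (e >> 7) & 15;
	uint8_t crm = (e >> 3) & 15, op2 = e & 7;

	if (op2 < 7)  return ENC(op0, op1, crn, crm, op2 + 1);
	if (crm < 15) return ENC(op0, op1, crn, crm + 1, 0);
	if (crn < 15) return ENC(op0, op1, crn + 1, 0, 0);
	if (op1 < 7)  return ENC(op0, op1 + 1, 0, 0, 0);
	return ENC(op0 + 1, 0, 0, 0, 0);
}

int main(void)
{
	/* Walk a small range the way populate_nv_trap_config() now does */
	for (uint32_t e = ENC(3, 0, 0, 7, 6); e <= ENC(3, 0, 1, 0, 1); e = enc_next(e))
		printf("%#x\n", (unsigned)e);
	return 0;
}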
> +
> int __init populate_nv_trap_config(void)
> {
> int ret = 0;
> @@ -1775,23 +1797,18 @@ int __init populate_nv_trap_config(void)
> ret = -EINVAL;
> }
>
> - if (cgt->encoding != cgt->end) {
> - prev = xa_store_range(&sr_forward_xa,
> - cgt->encoding, cgt->end,
> - xa_mk_value(cgt->tc.val),
> - GFP_KERNEL);
> - } else {
> - prev = xa_store(&sr_forward_xa, cgt->encoding,
> + for (u32 enc = cgt->encoding; enc <= cgt->end; enc = encoding_next(enc)) {
> + prev = xa_store(&sr_forward_xa, enc,
> xa_mk_value(cgt->tc.val), GFP_KERNEL);
> if (prev && !xa_is_err(prev)) {
> ret = -EINVAL;
> print_nv_trap_error(cgt, "Duplicate CGT", ret);
> }
> - }
>
> - if (xa_is_err(prev)) {
> - ret = xa_err(prev);
> - print_nv_trap_error(cgt, "Failed CGT insertion", ret);
> + if (xa_is_err(prev)) {
> + ret = xa_err(prev);
> + print_nv_trap_error(cgt, "Failed CGT insertion", ret);
> + }
> }
> }
>
> @@ -1804,6 +1821,7 @@ int __init populate_nv_trap_config(void)
> for (int i = 0; i < ARRAY_SIZE(encoding_to_fgt); i++) {
> const struct encoding_to_trap_config *fgt = &encoding_to_fgt[i];
> union trap_config tc;
> + void *prev;
>
> if (fgt->tc.fgt >= __NR_FGT_GROUP_IDS__) {
> ret = -EINVAL;
> @@ -1818,8 +1836,13 @@ int __init populate_nv_trap_config(void)
> }
>
> tc.val |= fgt->tc.val;
> - xa_store(&sr_forward_xa, fgt->encoding,
> - xa_mk_value(tc.val), GFP_KERNEL);
> + prev = xa_store(&sr_forward_xa, fgt->encoding,
> + xa_mk_value(tc.val), GFP_KERNEL);
> +
> + if (xa_is_err(prev)) {
> + ret = xa_err(prev);
> + print_nv_trap_error(fgt, "Failed FGT insertion", ret);
> + }
> }
>
> kvm_info("nv: %ld fine grained trap handlers\n",
Tidied up error handling from v1.
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Thanks,
Joey
* Re: [PATCH v2 24/25] KVM: arm64: Make FEAT_MOPS UNDEF if not advertised to the guest
2024-01-30 20:45 ` [PATCH v2 24/25] KVM: arm64: Make FEAT_MOPS UNDEF if " Marc Zyngier
@ 2024-01-31 15:02 ` Joey Gouly
0 siblings, 0 replies; 48+ messages in thread
From: Joey Gouly @ 2024-01-31 15:02 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
Mark Brown
On Tue, Jan 30, 2024 at 08:45:31PM +0000, Marc Zyngier wrote:
> We unconditionally enable FEAT_MOPS, which is obviously wrong.
>
> So let's only do that when it is advertised to the guest, which means
> we need to rely on a per-vcpu HCRX_EL2 shadow register.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/include/asm/kvm_arm.h | 4 +---
> arch/arm64/include/asm/kvm_host.h | 1 +
> arch/arm64/kvm/hyp/include/hyp/switch.h | 2 +-
> arch/arm64/kvm/sys_regs.c | 7 +++++++
> 4 files changed, 10 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index 3c6f8ba1e479..a1769e415d72 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -102,9 +102,7 @@
> #define HCR_HOST_NVHE_PROTECTED_FLAGS (HCR_HOST_NVHE_FLAGS | HCR_TSC)
> #define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
>
> -#define HCRX_GUEST_FLAGS \
> - (HCRX_EL2_SMPME | HCRX_EL2_TCR2En | \
> - (cpus_have_final_cap(ARM64_HAS_MOPS) ? (HCRX_EL2_MSCEn | HCRX_EL2_MCE2) : 0))
> +#define HCRX_GUEST_FLAGS (HCRX_EL2_SMPME | HCRX_EL2_TCR2En)
> #define HCRX_HOST_FLAGS (HCRX_EL2_MSCEn | HCRX_EL2_TCR2En)
>
> /* TCR_EL2 Registers bits */
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index eff894d59590..50a122d42587 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -584,6 +584,7 @@ struct kvm_vcpu_arch {
>
> /* Values of trap registers for the guest. */
> u64 hcr_el2;
> + u64 hcrx_el2;
> u64 mdcr_el2;
> u64 cptr_el2;
>
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index 2d5891518006..e3fcf8c4d5b4 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -236,7 +236,7 @@ static inline void __activate_traps_common(struct kvm_vcpu *vcpu)
> write_sysreg(vcpu->arch.mdcr_el2, mdcr_el2);
>
> if (cpus_have_final_cap(ARM64_HAS_HCX)) {
> - u64 hcrx = HCRX_GUEST_FLAGS;
> + u64 hcrx = vcpu->arch.hcrx_el2;
> if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
> u64 clr = 0, set = 0;
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 38ed47bd29db..2cb69efac1dc 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -3956,6 +3956,13 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
> if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
> vcpu->arch.hcr_el2 |= HCR_TTLBOS;
>
> + if (cpus_have_final_cap(ARM64_HAS_HCX)) {
> + vcpu->arch.hcrx_el2 = HCRX_GUEST_FLAGS;
> +
> + if (kvm_has_feat(kvm, ID_AA64ISAR2_EL1, MOPS, IMP))
> + vcpu->arch.hcrx_el2 |= (HCRX_EL2_MSCEn | HCRX_EL2_MCE2);
> + }
> +
> if (test_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags))
> goto out;
>
Removed incorrect kern_hyp_va().
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Thanks,
Joey
* Re: [PATCH v2 21/25] KVM: arm64: Make TLBI OS/Range UNDEF if not advertised to the guest
2024-01-30 20:45 ` [PATCH v2 21/25] KVM: arm64: Make TLBI OS/Range UNDEF if not advertised to the guest Marc Zyngier
@ 2024-01-31 15:05 ` Joey Gouly
0 siblings, 0 replies; 48+ messages in thread
From: Joey Gouly @ 2024-01-31 15:05 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
Mark Brown
On Tue, Jan 30, 2024 at 08:45:28PM +0000, Marc Zyngier wrote:
> Outer Shareable and Range TLBI instructions shouldn't be made available
> to the guest if they are not advertised. Use FGU to disable those,
> and set HCR_EL2.TTLBOS in case the host doesn't have FGT. Note
> that in the latter case, we cannot efficiently disable TLBI Range
> instructions, as this would require trapping all TLBIs.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/kvm/sys_regs.c | 34 ++++++++++++++++++++++++++++++++++
> 1 file changed, 34 insertions(+)
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 39e7c7f74717..f07ee7c89822 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -3948,6 +3948,14 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
>
> mutex_lock(&kvm->arch.config_lock);
>
> + /*
> + * In the absence of FGT, we cannot independently trap TLBI
> + * Range instructions. This isn't great, but trapping all
> + * TLBIs would be far worse. Live with it...
> + */
> + if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
> + vcpu->arch.hcr_el2 |= HCR_TTLBOS;
> +
> if (test_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags))
> goto out;
>
> @@ -3960,6 +3968,32 @@ void kvm_init_sysreg(struct kvm_vcpu *vcpu)
> HFGxTR_EL2_nSMPRI_EL1_MASK |
> HFGxTR_EL2_nTPIDR2_EL0_MASK);
>
> + if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
> + kvm->arch.fgu[HFGITR_GROUP] |= (HFGITR_EL2_TLBIRVAALE1OS|
> + HFGITR_EL2_TLBIRVALE1OS |
> + HFGITR_EL2_TLBIRVAAE1OS |
> + HFGITR_EL2_TLBIRVAE1OS |
> + HFGITR_EL2_TLBIVAALE1OS |
> + HFGITR_EL2_TLBIVALE1OS |
> + HFGITR_EL2_TLBIVAAE1OS |
> + HFGITR_EL2_TLBIASIDE1OS |
> + HFGITR_EL2_TLBIVAE1OS |
> + HFGITR_EL2_TLBIVMALLE1OS);
> +
> + if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, RANGE))
> + kvm->arch.fgu[HFGITR_GROUP] |= (HFGITR_EL2_TLBIRVAALE1 |
> + HFGITR_EL2_TLBIRVALE1 |
> + HFGITR_EL2_TLBIRVAAE1 |
> + HFGITR_EL2_TLBIRVAE1 |
> + HFGITR_EL2_TLBIRVAALE1IS|
> + HFGITR_EL2_TLBIRVALE1IS |
> + HFGITR_EL2_TLBIRVAAE1IS |
> + HFGITR_EL2_TLBIRVAE1IS |
> + HFGITR_EL2_TLBIRVAALE1OS|
> + HFGITR_EL2_TLBIRVALE1OS |
> + HFGITR_EL2_TLBIRVAAE1OS |
> + HFGITR_EL2_TLBIRVAE1OS);
> +
> set_bit(KVM_ARCH_FLAG_FGU_INITIALIZED, &kvm->arch.flags);
> out:
> mutex_unlock(&kvm->arch.config_lock);
Commit message update *and* a comment, generous!
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Thanks,
Joey
* Re: [PATCH v2 04/25] KVM: arm64: nv: Add sanitising to EL2 configuration registers
2024-01-30 20:45 ` [PATCH v2 04/25] KVM: arm64: nv: Add sanitising to EL2 configuration registers Marc Zyngier
@ 2024-01-31 17:16 ` Joey Gouly
2024-02-02 15:05 ` Marc Zyngier
2024-02-01 14:56 ` Joey Gouly
1 sibling, 1 reply; 48+ messages in thread
From: Joey Gouly @ 2024-01-31 17:16 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
Mark Brown
On Tue, Jan 30, 2024 at 08:45:11PM +0000, Marc Zyngier wrote:
> We can now start making use of our sanitising masks by setting them
> to values that depend on the guest's configuration.
>
> First up are VTTBR_EL2, VTCR_EL2, VMPIDR_EL2 and HCR_EL2.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/kvm/nested.c | 56 ++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 55 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index c976cd4b8379..ee461e630527 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -181,7 +181,7 @@ u64 kvm_vcpu_sanitise_vncr_reg(const struct kvm_vcpu *vcpu, enum vcpu_sysreg sr)
> return v;
> }
>
> -static void __maybe_unused set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u64 res1)
> +static void set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u64 res1)
> {
> int i = sr - __VNCR_START__;
>
> @@ -191,6 +191,7 @@ static void __maybe_unused set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u
>
> int kvm_init_nv_sysregs(struct kvm *kvm)
> {
> + u64 res0, res1;
> int ret = 0;
>
> mutex_lock(&kvm->arch.config_lock);
> @@ -209,6 +210,59 @@ int kvm_init_nv_sysregs(struct kvm *kvm)
> kvm->arch.id_regs[i] = limit_nv_id_reg(IDX_IDREG(i),
> kvm->arch.id_regs[i]);
>
> + /* VTTBR_EL2 */
> + res0 = res1 = 0;
> + if (!kvm_has_feat_enum(kvm, ID_AA64MMFR1_EL1, VMIDBits, 16))
> + res0 |= GENMASK(63, 56);
> + set_sysreg_masks(kvm, VTTBR_EL2, res0, res1);
CnP?
> +
> + /* VTCR_EL2 */
> + res0 = GENMASK(63, 32) | GENMASK(30, 20);
> + res1 = BIT(31);
> + set_sysreg_masks(kvm, VTCR_EL2, res0, res1);
> +
> + /* VMPIDR_EL2 */
> + res0 = GENMASK(63, 40) | GENMASK(30, 24);
> + res1 = BIT(31);
> + set_sysreg_masks(kvm, VMPIDR_EL2, res0, res1);
> +
> + /* HCR_EL2 */
> + res0 = BIT(48);
> + res1 = HCR_RW;
> + if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, TWED, IMP))
> + res0 |= GENMASK(63, 59);
> + if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, MTE, MTE2))
> + res0 |= (HCR_TID5 | HCR_DCT | HCR_ATA);
> + if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, EVT, TTLBxS))
> + res0 |= (HCR_TTLBIS | HCR_TTLBOS);
> + if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, CSV2, CSV2_2) &&
> + !kvm_has_feat(kvm, ID_AA64PFR1_EL1, CSV2_frac, CSV2_1p2))
> + res1 = HCR_ENSCXT;
I'm confused here. If the VM has neither CSV2_2 nor CSV2_1p2, HCR_ENSCXT is res1, which means we wouldn't trap?
The Arm ARM says:
EnSCXT, bit [53]
When FEAT_CSV2_2 is implemented or FEAT_CSV2_1p2 is implemented:
[..]
Otherwise:
RES0
And if you actually meant res1, then you need: res1 |= HCR_ENSCXT, otherwise you override the HCR_RW res1 bit!
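In other words, presumably the intent was something like this (a sketch
of the corrected hunk, to be confirmed):

	if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, CSV2, CSV2_2) &&
	    !kvm_has_feat(kvm, ID_AA64PFR1_EL1, CSV2_frac, CSV2_1p2))
		res0 |= HCR_ENSCXT;	/* RES0 when neither feature is present */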
> + if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, EVT, IMP))
> + res0 |= (HCR_TID4 | HCR_TICAB | HCR_TOCU);
minor nitpick: can you reverse these TOCU | TICAB | TID4 so it matches the relative positions in the register..
> + if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, V1P1))
> + res0 |= HCR_AMVOFFEN;
We currently mask out ID_AA64PFR0_EL1.AMU (read_sanitised_id_aa64pfr0_el1 and
part of the .val in the sys_regs_desc[]), but it can't hurt to be future-proof.
> + if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, V1P1))
> + res0 |= HCR_FIEN;
> + if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, FWB, IMP))
> + res0 |= HCR_FWB;
> + if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, NV, NV2))
> + res0 |= HCR_NV2;
> + if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, NV, IMP))
> + res0 |= (HCR_AT | HCR_NV1 | HCR_NV);
> + if (!(__vcpu_has_feature(&kvm->arch, KVM_ARM_VCPU_PTRAUTH_ADDRESS) &&
> + __vcpu_has_feature(&kvm->arch, KVM_ARM_VCPU_PTRAUTH_ADDRESS)))
> + res0 |= (HCR_API | HCR_APK);
> + if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TME, IMP))
> + res0 |= BIT(39);
> + if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, IMP))
> + res0 |= (HCR_TERR | HCR_TEA);
Same annoying nitpick, flip the values order to match the positions.
> + if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, LO, IMP))
> + res0 |= HCR_TLOR;
> + if (!kvm_has_feat(kvm, ID_AA64MMFR4_EL1, E2H0, IMP))
> + res1 |= HCR_E2H;
> + set_sysreg_masks(kvm, HCR_EL2, res0, res1);
> +
> out:
> mutex_unlock(&kvm->arch.config_lock);
>
I need a lie down now.
Thanks,
Joey
* Re: [PATCH v2 05/25] KVM: arm64: nv: Add sanitising to VNCR-backed FGT sysregs
2024-01-30 20:45 ` [PATCH v2 05/25] KVM: arm64: nv: Add sanitising to VNCR-backed FGT sysregs Marc Zyngier
@ 2024-02-01 14:38 ` Joey Gouly
2024-02-02 14:58 ` Marc Zyngier
0 siblings, 1 reply; 48+ messages in thread
From: Joey Gouly @ 2024-02-01 14:38 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
Mark Brown
On Tue, Jan 30, 2024 at 08:45:12PM +0000, Marc Zyngier wrote:
> Fine Grained Traps are controlled by a whole bunch of features.
> Each one of them must be checked and the corresponding masks
> computed so that we don't let the guest apply traps it shouldn't
> be using.
>
> This takes care of HFGxTR_EL2, HDFGxTR_EL2, and HAFGRTR_EL2.
^^ and HFGITR_EL2
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/kvm/nested.c | 128 ++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 128 insertions(+)
>
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index ee461e630527..cdeef3259193 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -263,6 +263,134 @@ int kvm_init_nv_sysregs(struct kvm *kvm)
> res1 |= HCR_E2H;
> set_sysreg_masks(kvm, HCR_EL2, res0, res1);
>
> + /* HFG[RW]TR_EL2 */
> + res0 = res1 = 0;
> + if (!(__vcpu_has_feature(&kvm->arch, KVM_ARM_VCPU_PTRAUTH_ADDRESS) &&
> + __vcpu_has_feature(&kvm->arch, KVM_ARM_VCPU_PTRAUTH_ADDRESS)))
> + res0 |= (HFGxTR_EL2_APDAKey | HFGxTR_EL2_APDBKey |
> + HFGxTR_EL2_APGAKey | HFGxTR_EL2_APIAKey |
> + HFGxTR_EL2_APIBKey);
> + if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, LO, IMP))
> + res0 |= (HFGxTR_EL2_LORC_EL1 | HFGxTR_EL2_LOREA_EL1 |
> + HFGxTR_EL2_LORID_EL1 | HFGxTR_EL2_LORN_EL1 |
> + HFGxTR_EL2_LORSA_EL1);
> + if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, CSV2, CSV2_2) &&
> + !kvm_has_feat(kvm, ID_AA64PFR1_EL1, CSV2_frac, CSV2_1p2))
> + res0 |= (HFGxTR_EL2_SCXTNUM_EL1 | HFGxTR_EL2_SCXTNUM_EL0);
> + if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, GIC, IMP))
> + res0 |= HFGxTR_EL2_ICC_IGRPENn_EL1;
> + if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, IMP))
> + res0 |= (HFGxTR_EL2_ERRIDR_EL1 | HFGxTR_EL2_ERRSELR_EL1 |
> + HFGxTR_EL2_ERXFR_EL1 | HFGxTR_EL2_ERXCTLR_EL1 |
> + HFGxTR_EL2_ERXSTATUS_EL1 | HFGxTR_EL2_ERXMISCn_EL1 |
> + HFGxTR_EL2_ERXPFGF_EL1 | HFGxTR_EL2_ERXPFGCTL_EL1 |
> + HFGxTR_EL2_ERXPFGCDN_EL1 | HFGxTR_EL2_ERXADDR_EL1);
> + if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_ACCDATA))
> + res0 |= HFGxTR_EL2_nACCDATA_EL1;
> + if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, GCS, IMP))
> + res0 |= (HFGxTR_EL2_nGCS_EL0 | HFGxTR_EL2_nGCS_EL1);
> + if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, SME, IMP))
> + res0 |= (HFGxTR_EL2_nSMPRI_EL1 | HFGxTR_EL2_nTPIDR2_EL0);
> + if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, THE, IMP))
> + res0 |= HFGxTR_EL2_nRCWMASK_EL1;
> + if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, S1PIE, IMP))
> + res0 |= (HFGxTR_EL2_nPIRE0_EL1 | HFGxTR_EL2_nPIR_EL1);
> + if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, S1POE, IMP))
> + res0 |= (HFGxTR_EL2_nPOR_EL0 | HFGxTR_EL2_nPOR_EL1);
> + if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, S2POE, IMP))
> + res0 |= HFGxTR_EL2_nS2POR_EL1;
> + if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, AIE, IMP))
> + res0 |= (HFGxTR_EL2_nMAIR2_EL1 | HFGxTR_EL2_nAMAIR2_EL1);
> + set_sysreg_masks(kvm, HFGRTR_EL2, res0 | __HFGRTR_EL2_RES0, res1);
> + set_sysreg_masks(kvm, HFGWTR_EL2, res0 | __HFGWTR_EL2_RES0, res1);
> +
> + /* HDFG[RW]TR_EL2 */
> + res0 = res1 = 0;
> + if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, DoubleLock, IMP))
> + res0 |= HDFGRTR_EL2_OSDLR_EL1;
> + if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMUVer, IMP))
> + res0 |= (HDFGRTR_EL2_PMEVCNTRn_EL0 | HDFGRTR_EL2_PMEVTYPERn_EL0 |
> + HDFGRTR_EL2_PMCCFILTR_EL0 | HDFGRTR_EL2_PMCCNTR_EL0 |
> + HDFGRTR_EL2_PMCNTEN | HDFGRTR_EL2_PMINTEN |
> + HDFGRTR_EL2_PMOVS | HDFGRTR_EL2_PMSELR_EL0 |
> + HDFGRTR_EL2_PMMIR_EL1 | HDFGRTR_EL2_PMUSERENR_EL0 |
> + HDFGRTR_EL2_PMCEIDn_EL0);
> + if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMSVer, IMP))
> + res0 |= (HDFGRTR_EL2_PMBLIMITR_EL1 | HDFGRTR_EL2_PMBPTR_EL1 |
> + HDFGRTR_EL2_PMBSR_EL1 | HDFGRTR_EL2_PMSCR_EL1 |
> + HDFGRTR_EL2_PMSEVFR_EL1 | HDFGRTR_EL2_PMSFCR_EL1 |
> + HDFGRTR_EL2_PMSICR_EL1 | HDFGRTR_EL2_PMSIDR_EL1 |
> + HDFGRTR_EL2_PMSIRR_EL1 | HDFGRTR_EL2_PMSLATFR_EL1 |
> + HDFGRTR_EL2_PMBIDR_EL1);
> + if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, TraceVer, IMP))
> + res0 |= (HDFGRTR_EL2_TRC | HDFGRTR_EL2_TRCAUTHSTATUS |
> + HDFGRTR_EL2_TRCAUXCTLR | HDFGRTR_EL2_TRCCLAIM |
> + HDFGRTR_EL2_TRCCNTVRn | HDFGRTR_EL2_TRCID |
> + HDFGRTR_EL2_TRCIMSPECn | HDFGRTR_EL2_TRCOSLSR |
> + HDFGRTR_EL2_TRCPRGCTLR | HDFGRTR_EL2_TRCSEQSTR |
> + HDFGRTR_EL2_TRCSSCSRn | HDFGRTR_EL2_TRCSTATR |
> + HDFGRTR_EL2_TRCVICTLR);
> + if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, TraceBuffer, IMP))
> + res0 |= (HDFGRTR_EL2_TRBBASER_EL1 | HDFGRTR_EL2_TRBIDR_EL1 |
> + HDFGRTR_EL2_TRBLIMITR_EL1 | HDFGRTR_EL2_TRBMAR_EL1 |
> + HDFGRTR_EL2_TRBPTR_EL1 | HDFGRTR_EL2_TRBSR_EL1 |
> + HDFGRTR_EL2_TRBTRG_EL1);
> + if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, BRBE, IMP))
> + res0 |= (HDFGRTR_EL2_nBRBIDR | HDFGRTR_EL2_nBRBCTL |
> + HDFGRTR_EL2_nBRBDATA);
> + if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMSVer, V1P2))
> + res0 |= HDFGRTR_EL2_nPMSNEVFR_EL1;
> + set_sysreg_masks(kvm, HDFGRTR_EL2, res0 | HDFGRTR_EL2_RES0, res1);
> +
> + /* Reuse the bits from the read-side and add the write-specific stuff */
> + if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, PMUVer, IMP))
> + res0 |= (HDFGWTR_EL2_PMCR_EL0 | HDFGWTR_EL2_PMSWINC_EL0);
> + if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, TraceVer, IMP))
> + res0 |= HDFGWTR_EL2_TRCOSLAR;
> + if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, TraceFilt, IMP))
> + res0 |= HDFGWTR_EL2_TRFCR_EL1;
> + set_sysreg_masks(kvm, HFGWTR_EL2, res0 | HDFGWTR_EL2_RES0, res1);
> +
> + /* HFGITR_EL2 */
> + res0 = HFGITR_EL2_RES0;
> + res1 = HFGITR_EL2_RES1;
> + if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, DPB, DPB2))
> + res0 |= HFGITR_EL2_DCCVADP;
> + if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, PAN, PAN2))
> + res0 |= (HFGITR_EL2_ATS1E1RP | HFGITR_EL2_ATS1E1WP);
> + if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, OS))
> + res0 |= (HFGITR_EL2_TLBIRVAALE1OS | HFGITR_EL2_TLBIRVALE1OS |
> + HFGITR_EL2_TLBIRVAAE1OS | HFGITR_EL2_TLBIRVAE1OS |
> + HFGITR_EL2_TLBIVAALE1OS | HFGITR_EL2_TLBIVALE1OS |
> + HFGITR_EL2_TLBIVAAE1OS | HFGITR_EL2_TLBIASIDE1OS |
> + HFGITR_EL2_TLBIVAE1OS | HFGITR_EL2_TLBIVMALLE1OS);
> + if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TLB, RANGE))
> + res0 |= (HFGITR_EL2_TLBIRVAALE1 | HFGITR_EL2_TLBIRVALE1 |
> + HFGITR_EL2_TLBIRVAAE1 | HFGITR_EL2_TLBIRVAE1 |
> + HFGITR_EL2_TLBIRVAALE1IS | HFGITR_EL2_TLBIRVALE1IS |
> + HFGITR_EL2_TLBIRVAAE1IS | HFGITR_EL2_TLBIRVAE1IS |
> + HFGITR_EL2_TLBIRVAALE1OS | HFGITR_EL2_TLBIRVALE1OS |
> + HFGITR_EL2_TLBIRVAAE1OS | HFGITR_EL2_TLBIRVAE1OS);
> + if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, SPECRES, IMP))
> + res0 |= (HFGITR_EL2_CFPRCTX | HFGITR_EL2_DVPRCTX |
> + HFGITR_EL2_CPPRCTX);
> + if (!kvm_has_feat(kvm, ID_AA64DFR0_EL1, BRBE, IMP))
> + res0 |= (HFGITR_EL2_nBRBINJ | HFGITR_EL2_nBRBIALL);
> + if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, GCS, IMP))
> + res0 |= (HFGITR_EL2_nGCSPUSHM_EL1 | HFGITR_EL2_nGCSSTR_EL1 |
> + HFGITR_EL2_nGCSEPP);
> + if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, SPECRES, COSP_RCTX))
> + res0 |= HFGITR_EL2_COSPRCTX;
> + if (!kvm_has_feat(kvm, ID_AA64ISAR2_EL1, ATS1A, IMP))
> + res0 |= HFGITR_EL2_ATS1E1A;
> + set_sysreg_masks(kvm, HFGITR_EL2, res0, res1);
> +
> + /* HAFGRTR_EL2 - not a lot to see here*/
*very* minor nitpick: add a space before the */
> + res0 = HAFGRTR_EL2_RES0;
> + res1 = HAFGRTR_EL2_RES1;
> + if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, V1P1))
> + res0 |= ~(res0 | res1);
> + set_sysreg_masks(kvm, HAFGRTR_EL2, res0, res1);
> out:
> mutex_unlock(&kvm->arch.config_lock);
>
Looked through every trap bit; that was tedious! And I just realised HCRX_EL2
is still left to do...
Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Thanks,
Joey
* Re: [PATCH v2 04/25] KVM: arm64: nv: Add sanitising to EL2 configuration registers
2024-01-30 20:45 ` [PATCH v2 04/25] KVM: arm64: nv: Add sanitising to EL2 configuration registers Marc Zyngier
2024-01-31 17:16 ` Joey Gouly
@ 2024-02-01 14:56 ` Joey Gouly
2024-02-02 15:10 ` Marc Zyngier
1 sibling, 1 reply; 48+ messages in thread
From: Joey Gouly @ 2024-02-01 14:56 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
Mark Brown
On Tue, Jan 30, 2024 at 08:45:11PM +0000, Marc Zyngier wrote:
> We can now start making use of our sanitising masks by setting them
> to values that depend on the guest's configuration.
>
> First up are VTTBR_EL2, VTCR_EL2, VMPIDR_EL2 and HCR_EL2.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/kvm/nested.c | 56 ++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 55 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index c976cd4b8379..ee461e630527 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -181,7 +181,7 @@ u64 kvm_vcpu_sanitise_vncr_reg(const struct kvm_vcpu *vcpu, enum vcpu_sysreg sr)
> return v;
> }
>
> -static void __maybe_unused set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u64 res1)
> +static void set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u64 res1)
> {
> int i = sr - __VNCR_START__;
>
> @@ -191,6 +191,7 @@ static void __maybe_unused set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u
>
> int kvm_init_nv_sysregs(struct kvm *kvm)
> {
> + u64 res0, res1;
> int ret = 0;
>
> mutex_lock(&kvm->arch.config_lock);
> @@ -209,6 +210,59 @@ int kvm_init_nv_sysregs(struct kvm *kvm)
> kvm->arch.id_regs[i] = limit_nv_id_reg(IDX_IDREG(i),
> kvm->arch.id_regs[i]);
>
> + /* VTTBR_EL2 */
> + res0 = res1 = 0;
> + if (!kvm_has_feat_enum(kvm, ID_AA64MMFR1_EL1, VMIDBits, 16))
> + res0 |= GENMASK(63, 56);
> + set_sysreg_masks(kvm, VTTBR_EL2, res0, res1);
> +
> + /* VTCR_EL2 */
> + res0 = GENMASK(63, 32) | GENMASK(30, 20);
> + res1 = BIT(31);
> + set_sysreg_masks(kvm, VTCR_EL2, res0, res1);
> +
> + /* VMPIDR_EL2 */
> + res0 = GENMASK(63, 40) | GENMASK(30, 24);
> + res1 = BIT(31);
> + set_sysreg_masks(kvm, VMPIDR_EL2, res0, res1);
> +
> + /* HCR_EL2 */
> + res0 = BIT(48);
> + res1 = HCR_RW;
Just want to clarify this bit: is this restricting the (first level
only?) nested EL1 to running as AArch64?
Should we be sanitising ID_AA64PFR0_EL1.EL1=0b0001?
> + if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, TWED, IMP))
> + res0 |= GENMASK(63, 59);
> + if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, MTE, MTE2))
> + res0 |= (HCR_TID5 | HCR_DCT | HCR_ATA);
> + if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, EVT, TTLBxS))
> + res0 |= (HCR_TTLBIS | HCR_TTLBOS);
> + if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, CSV2, CSV2_2) &&
> + !kvm_has_feat(kvm, ID_AA64PFR1_EL1, CSV2_frac, CSV2_1p2))
> + res1 = HCR_ENSCXT;
> + if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, EVT, IMP))
> + res0 |= (HCR_TID4 | HCR_TICAB | HCR_TOCU);
> + if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, V1P1))
> + res0 |= HCR_AMVOFFEN;
> + if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, V1P1))
> + res0 |= HCR_FIEN;
> + if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, FWB, IMP))
> + res0 |= HCR_FWB;
> + if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, NV, NV2))
> + res0 |= HCR_NV2;
> + if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, NV, IMP))
> + res0 |= (HCR_AT | HCR_NV1 | HCR_NV);
> + if (!(__vcpu_has_feature(&kvm->arch, KVM_ARM_VCPU_PTRAUTH_ADDRESS) &&
> + __vcpu_has_feature(&kvm->arch, KVM_ARM_VCPU_PTRAUTH_ADDRESS)))
> + res0 |= (HCR_API | HCR_APK);
> + if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TME, IMP))
> + res0 |= BIT(39);
> + if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, IMP))
> + res0 |= (HCR_TERR | HCR_TEA);
> + if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, LO, IMP))
> + res0 |= HCR_TLOR;
> + if (!kvm_has_feat(kvm, ID_AA64MMFR4_EL1, E2H0, IMP))
> + res1 |= HCR_E2H;
> + set_sysreg_masks(kvm, HCR_EL2, res0, res1);
> +
> out:
> mutex_unlock(&kvm->arch.config_lock);
>
Thanks,
Joey
* Re: [PATCH v2 06/25] KVM: arm64: nv: Add sanitising to VNCR-backed HCRX_EL2
2024-01-30 20:45 ` [PATCH v2 06/25] KVM: arm64: nv: Add sanitising to VNCR-backed HCRX_EL2 Marc Zyngier
@ 2024-02-02 8:52 ` Oliver Upton
2024-02-02 13:30 ` Mark Brown
2024-02-02 14:56 ` Marc Zyngier
0 siblings, 2 replies; 48+ messages in thread
From: Oliver Upton @ 2024-02-02 8:52 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
On Tue, Jan 30, 2024 at 08:45:13PM +0000, Marc Zyngier wrote:
> Just like its little friends, HCRX_EL2 gets the feature set treatment
> when backed by VNCR.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/kvm/nested.c | 42 +++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 42 insertions(+)
>
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index cdeef3259193..72db632b115a 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -263,6 +263,48 @@ int kvm_init_nv_sysregs(struct kvm *kvm)
> res1 |= HCR_E2H;
> set_sysreg_masks(kvm, HCR_EL2, res0, res1);
>
> + /* HCRX_EL2 */
> + res0 = HCRX_EL2_RES0;
> + res1 = HCRX_EL2_RES1;
I'm a bit worried that we're depending on the meaning of these generated
RES0/RES1 bitmasks not changing behind our backs.
Not like people read anything, but do you think it'd make sense to add a
warning comment to the sysreg file that adding new encodings can have a
functional change on KVM?
> + if (!kvm_has_feat(kvm, ID_AA64ISAR3_EL1, PACM, TRIVIAL_IMP))
> + res0 |= HCRX_EL2_PACMEn;
> + if (!kvm_has_feat(kvm, ID_AA64PFR2_EL1, FPMR, IMP))
> + res0 |= HCRX_EL2_EnFPM;
> + if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, GCS, IMP))
> + res0 |= HCRX_EL2_GCSEn;
> + if (!kvm_has_feat(kvm, ID_AA64ISAR2_EL1, SYSREG_128, IMP))
> + res0 |= HCRX_EL2_EnIDCP128;
> + if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, ADERR, DEV_ASYNC))
> + res0 |= (HCRX_EL2_EnSDERR | HCRX_EL2_EnSNERR);
> + if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, DF2, IMP))
> + res0 |= HCRX_EL2_TMEA;
> + if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, D128, IMP))
> + res0 |= HCRX_EL2_D128En;
> + if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, THE, IMP))
> + res0 |= HCRX_EL2_PTTWI;
Ok, not fair. The latest public version of the ARM ARM doesn't have any
of this. Where are you getting it from?
> + if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, SCTLRX, IMP))
> + res0 |= HCRX_EL2_SCTLR2En;
> + if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, TCRX, IMP))
> + res0 |= HCRX_EL2_TCR2En;
> + if (!kvm_has_feat(kvm, ID_AA64ISAR2_EL1, MOPS, IMP))
> + res0 |= (HCRX_EL2_MSCEn | HCRX_EL2_MCE2);
> + if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, CMOW, IMP))
> + res0 |= HCRX_EL2_CMOW;
> + if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, NMI, IMP))
> + res0 |= (HCRX_EL2_VFNMI | HCRX_EL2_VINMI | HCRX_EL2_TALLINT);
> + if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, SME, IMP) ||
> + !(read_sysreg_s(SYS_SMIDR_EL1) & SMIDR_EL1_SMPS))
> + res0 |= HCRX_EL2_SMPME;
> + if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, XS, IMP))
> + res0 |= (HCRX_EL2_FGTnXS | HCRX_EL2_FnXS);
> + if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_V))
> + res0 |= HCRX_EL2_EnASR;
> + if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64))
> + res0 |= HCRX_EL2_EnALS;
> + if (!kvm_has_feat(kvm, ID_AA64ISAR1_EL1, LS64, LS64_ACCDATA))
> + res0 |= HCRX_EL2_EnAS0;
> + set_sysreg_masks(kvm, HCRX_EL2, res0, res1);
> +
> /* HFG[RW]TR_EL2 */
> res0 = res1 = 0;
> if (!(__vcpu_has_feature(&kvm->arch, KVM_ARM_VCPU_PTRAUTH_ADDRESS) &&
> --
> 2.39.2
>
--
Thanks,
Oliver
* Re: [PATCH v2 06/25] KVM: arm64: nv: Add sanitising to VNCR-backed HCRX_EL2
2024-02-02 8:52 ` Oliver Upton
@ 2024-02-02 13:30 ` Mark Brown
2024-02-02 14:56 ` Marc Zyngier
1 sibling, 0 replies; 48+ messages in thread
From: Mark Brown @ 2024-02-02 13:30 UTC (permalink / raw)
To: Oliver Upton
Cc: Marc Zyngier, kvmarm, linux-arm-kernel, James Morse,
Suzuki K Poulose, Zenghui Yu, Catalin Marinas, Will Deacon,
Joey Gouly
On Fri, Feb 02, 2024 at 08:52:32AM +0000, Oliver Upton wrote:
> On Tue, Jan 30, 2024 at 08:45:13PM +0000, Marc Zyngier wrote:
> > + if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, D128, IMP))
> > + res0 |= HCRX_EL2_D128En;
> > + if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, THE, IMP))
> > + res0 |= HCRX_EL2_PTTWI;
> Ok, not fair. The latest public version of the ARM ARM doesn't have any
> of this. Where are you getting it from?
We publish updates of the system register and instruction encodings much
more frequently than we publish updates to the full ARM; these are
released as DDI0601. You can currently find them at:
https://developer.arm.com/documentation/ddi0601/
https://developer.arm.com/downloads/-/exploration-tools
but those URLs might change at some point (the second one has downloads,
the first is a directly viewable web page). There's also DDI0602 with
SVE instructions. These are generally the first place that gets updated
when new things are added.
* Re: [PATCH v2 06/25] KVM: arm64: nv: Add sanitising to VNCR-backed HCRX_EL2
2024-02-02 8:52 ` Oliver Upton
2024-02-02 13:30 ` Mark Brown
@ 2024-02-02 14:56 ` Marc Zyngier
2024-02-02 17:15 ` Oliver Upton
1 sibling, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2024-02-02 14:56 UTC (permalink / raw)
To: Oliver Upton
Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
On Fri, 02 Feb 2024 08:52:32 +0000,
Oliver Upton <oliver.upton@linux.dev> wrote:
>
> On Tue, Jan 30, 2024 at 08:45:13PM +0000, Marc Zyngier wrote:
> > Just like its little friends, HCRX_EL2 gets the feature set treatment
> > when backed by VNCR.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> > arch/arm64/kvm/nested.c | 42 +++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 42 insertions(+)
> >
> > diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> > index cdeef3259193..72db632b115a 100644
> > --- a/arch/arm64/kvm/nested.c
> > +++ b/arch/arm64/kvm/nested.c
> > @@ -263,6 +263,48 @@ int kvm_init_nv_sysregs(struct kvm *kvm)
> > res1 |= HCR_E2H;
> > set_sysreg_masks(kvm, HCR_EL2, res0, res1);
> >
> > + /* HCRX_EL2 */
> > + res0 = HCRX_EL2_RES0;
> > + res1 = HCRX_EL2_RES1;
>
> I'm a bit worried that we're depending on the meaning of these generated
> RES0/RES1 bitmasks not changing behind our backs.
>
> Not like people read anything, but do you think it'd make sense to add a
> warning comment to the sysreg file that adding new encodings can have a
> functional change on KVM?
No amount of warnings will do, because people don't give a damn.
I'm actually in favour of something far more radical: we snapshot the
raw value of all the used RES0/1, and put BUILD_BUG_ON()s that fire if
any value has changed. They will have to touch the KVM code to fix
that, and we catch them red-handed.
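Something along these lines, presumably (a sketch only; the macro name
and the example value are made up):

/*
 * Pin the generated RES0/RES1 masks so that regenerating the sysreg
 * file with different values breaks the build.
 */
#define CHECK_RES_BITS(reg, r0, r1)				\
	do {							\
		BUILD_BUG_ON((reg ## _RES0) != (r0));		\
		BUILD_BUG_ON((reg ## _RES1) != (r1));		\
	} while (0)

static void __init check_res_bits(void)
{
	/* placeholder value, not the real mask */
	CHECK_RES_BITS(HCRX_EL2, 0xfffffffffffff800ULL, 0x0ULL);
}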
>
> > + if (!kvm_has_feat(kvm, ID_AA64ISAR3_EL1, PACM, TRIVIAL_IMP))
> > + res0 |= HCRX_EL2_PACMEn;
> > + if (!kvm_has_feat(kvm, ID_AA64PFR2_EL1, FPMR, IMP))
> > + res0 |= HCRX_EL2_EnFPM;
> > + if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, GCS, IMP))
> > + res0 |= HCRX_EL2_GCSEn;
> > + if (!kvm_has_feat(kvm, ID_AA64ISAR2_EL1, SYSREG_128, IMP))
> > + res0 |= HCRX_EL2_EnIDCP128;
> > + if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, ADERR, DEV_ASYNC))
> > + res0 |= (HCRX_EL2_EnSDERR | HCRX_EL2_EnSNERR);
> > + if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, DF2, IMP))
> > + res0 |= HCRX_EL2_TMEA;
> > + if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, D128, IMP))
> > + res0 |= HCRX_EL2_D128En;
> > + if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, THE, IMP))
> > + res0 |= HCRX_EL2_PTTWI;
>
> Ok, not fair. The latest public version of the ARM ARM doesn't have any
> of this. Where are you getting it from?
The bloody XML, from which all of this should, in theory, be extracted
without any human intervention. Sadly, it seems that we are not
allowed to do so, from what I have been told.
The ARM ARM is, unfortunately, being quickly rendered obsolete by the
lack of updates (FEAT_THE is part of the v9.4 architecture, released
in 2022).
I'll make sure to add a link to the XML in the cover letter when I
repost the series.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v2 05/25] KVM: arm64: nv: Add sanitising to VNCR-backed FGT sysregs
2024-02-01 14:38 ` Joey Gouly
@ 2024-02-02 14:58 ` Marc Zyngier
0 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2024-02-02 14:58 UTC (permalink / raw)
To: Joey Gouly
Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
Mark Brown
On Thu, 01 Feb 2024 14:38:06 +0000,
Joey Gouly <joey.gouly@arm.com> wrote:
>
> On Tue, Jan 30, 2024 at 08:45:12PM +0000, Marc Zyngier wrote:
> > Fine Grained Traps are controlled by a whole bunch of features.
> > Each one of them must be checked and the corresponding masks
> > computed so that we don't let the guest apply traps it shouldn't
> > be using.
> >
> > This takes care of HFGxTR_EL2, HDFGxTR_EL2, and HAFGRTR_EL2.
>
> ^^ and HFGITR_EL2
The intention was that it was covered by HFGxTR, but I can now see how
that can be confusing.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v2 04/25] KVM: arm64: nv: Add sanitising to EL2 configuration registers
2024-01-31 17:16 ` Joey Gouly
@ 2024-02-02 15:05 ` Marc Zyngier
0 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2024-02-02 15:05 UTC (permalink / raw)
To: Joey Gouly
Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
Mark Brown
On Wed, 31 Jan 2024 17:16:02 +0000,
Joey Gouly <joey.gouly@arm.com> wrote:
>
> On Tue, Jan 30, 2024 at 08:45:11PM +0000, Marc Zyngier wrote:
> > We can now start making use of our sanitising masks by setting them
> > to values that depend on the guest's configuration.
> >
> > First up are VTTBR_EL2, VTCR_EL2, VMPIDR_EL2 and HCR_EL2.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> > arch/arm64/kvm/nested.c | 56 ++++++++++++++++++++++++++++++++++++++++-
> > 1 file changed, 55 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> > index c976cd4b8379..ee461e630527 100644
> > --- a/arch/arm64/kvm/nested.c
> > +++ b/arch/arm64/kvm/nested.c
> > @@ -181,7 +181,7 @@ u64 kvm_vcpu_sanitise_vncr_reg(const struct kvm_vcpu *vcpu, enum vcpu_sysreg sr)
> > return v;
> > }
> >
> > -static void __maybe_unused set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u64 res1)
> > +static void set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u64 res1)
> > {
> > int i = sr - __VNCR_START__;
> >
> > @@ -191,6 +191,7 @@ static void __maybe_unused set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u
> >
> > int kvm_init_nv_sysregs(struct kvm *kvm)
> > {
> > + u64 res0, res1;
> > int ret = 0;
> >
> > mutex_lock(&kvm->arch.config_lock);
> > @@ -209,6 +210,59 @@ int kvm_init_nv_sysregs(struct kvm *kvm)
> > kvm->arch.id_regs[i] = limit_nv_id_reg(IDX_IDREG(i),
> > kvm->arch.id_regs[i]);
> >
> > + /* VTTBR_EL2 */
> > + res0 = res1 = 0;
> > + if (!kvm_has_feat_enum(kvm, ID_AA64MMFR1_EL1, VMIDBits, 16))
> > + res0 |= GENMASK(63, 56);
> > + set_sysreg_masks(kvm, VTTBR_EL2, res0, res1);
>
> CnP?
Missing indeed. I'll add it.
>
> > +
> > + /* VTCR_EL2 */
> > + res0 = GENMASK(63, 32) | GENMASK(30, 20);
> > + res1 = BIT(31);
> > + set_sysreg_masks(kvm, VTCR_EL2, res0, res1);
> > +
> > + /* VMPIDR_EL2 */
> > + res0 = GENMASK(63, 40) | GENMASK(30, 24);
> > + res1 = BIT(31);
> > + set_sysreg_masks(kvm, VMPIDR_EL2, res0, res1);
> > +
> > + /* HCR_EL2 */
> > + res0 = BIT(48);
> > + res1 = HCR_RW;
> > + if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, TWED, IMP))
> > + res0 |= GENMASK(63, 59);
> > + if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, MTE, MTE2))
> > + res0 |= (HCR_TID5 | HCR_DCT | HCR_ATA);
> > + if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, EVT, TTLBxS))
> > + res0 |= (HCR_TTLBIS | HCR_TTLBOS);
> > + if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, CSV2, CSV2_2) &&
> > + !kvm_has_feat(kvm, ID_AA64PFR1_EL1, CSV2_frac, CSV2_1p2))
> > + res1 = HCR_ENSCXT;
>
> I'm confused here. If the VM doesn't have either CSV2_2 or CSV2_1p2.. HCR_ENSCXT is res1, that means we wouldn't trap?
>
> The Arm ARM says:
>
> EnSCXT, bit [53]
> When FEAT_CSV2_2 is implemented or FEAT_CSV2_1p2 is implemented:
>
> [..]
>
> Otherwise:
> RES0
>
> And if you actually meant res1, then you need: res1 |= HCR_ENSCXT, otherwise you override the HCR_RW res1 bit!
You're right on all counts. This is a total thinko, coupled with a
pretty bad bug.
>
> > + if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, EVT, IMP))
> > + res0 |= (HCR_TID4 | HCR_TICAB | HCR_TOCU);
>
> minor nitpick: can you reverse these TOCU | TICAB | TID4 so it matches the relative positions in the register..
Sure can.
>
> > + if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, AMU, V1P1))
> > + res0 |= HCR_AMVOFFEN;
>
> We currently mask out ID_AA64PFR0_EL1.AMU (read_sanitised_id_aa64pfr0_el1 and
> part of the .val in the sys_regs_desc[]), can't hurt to be feature
> proof.
That's my idea. We shouldn't have to worry about any of that in the
core code: treat all features as potentially implemented, and actively
check for them being disabled.
>
> > + if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, V1P1))
> > + res0 |= HCR_FIEN;
> > + if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, FWB, IMP))
> > + res0 |= HCR_FWB;
> > + if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, NV, NV2))
> > + res0 |= HCR_NV2;
> > + if (!kvm_has_feat(kvm, ID_AA64MMFR2_EL1, NV, IMP))
> > + res0 |= (HCR_AT | HCR_NV1 | HCR_NV);
> > + if (!(__vcpu_has_feature(&kvm->arch, KVM_ARM_VCPU_PTRAUTH_ADDRESS) &&
> > + __vcpu_has_feature(&kvm->arch, KVM_ARM_VCPU_PTRAUTH_ADDRESS)))
> > + res0 |= (HCR_API | HCR_APK);
> > + if (!kvm_has_feat(kvm, ID_AA64ISAR0_EL1, TME, IMP))
> > + res0 |= BIT(39);
> > + if (!kvm_has_feat(kvm, ID_AA64PFR0_EL1, RAS, IMP))
> > + res0 |= (HCR_TERR | HCR_TEA);
>
> Same annoying nitpick, flip the values order to match the positions.
Will do.
>
> > + if (!kvm_has_feat(kvm, ID_AA64MMFR1_EL1, LO, IMP))
> > + res0 |= HCR_TLOR;
> > + if (!kvm_has_feat(kvm, ID_AA64MMFR4_EL1, E2H0, IMP))
> > + res1 |= HCR_E2H;
> > + set_sysreg_masks(kvm, HCR_EL2, res0, res1);
> > +
> > out:
> > mutex_unlock(&kvm->arch.config_lock);
> >
>
> I need a lie down now.
I know the feeling. This sort of crap ruins your day. Again, your
effort is much appreciated!
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v2 04/25] KVM: arm64: nv: Add sanitising to EL2 configuration registers
2024-02-01 14:56 ` Joey Gouly
@ 2024-02-02 15:10 ` Marc Zyngier
2024-02-02 16:26 ` Joey Gouly
0 siblings, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2024-02-02 15:10 UTC (permalink / raw)
To: Joey Gouly
Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
Mark Brown
On Thu, 01 Feb 2024 14:56:07 +0000,
Joey Gouly <joey.gouly@arm.com> wrote:
>
> On Tue, Jan 30, 2024 at 08:45:11PM +0000, Marc Zyngier wrote:
> > We can now start making use of our sanitising masks by setting them
> > to values that depend on the guest's configuration.
> >
> > First up are VTTBR_EL2, VTCR_EL2, VMPIDR_EL2 and HCR_EL2.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> > arch/arm64/kvm/nested.c | 56 ++++++++++++++++++++++++++++++++++++++++-
> > 1 file changed, 55 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> > index c976cd4b8379..ee461e630527 100644
> > --- a/arch/arm64/kvm/nested.c
> > +++ b/arch/arm64/kvm/nested.c
> > @@ -181,7 +181,7 @@ u64 kvm_vcpu_sanitise_vncr_reg(const struct kvm_vcpu *vcpu, enum vcpu_sysreg sr)
> > return v;
> > }
> >
> > -static void __maybe_unused set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u64 res1)
> > +static void set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u64 res1)
> > {
> > int i = sr - __VNCR_START__;
> >
> > @@ -191,6 +191,7 @@ static void __maybe_unused set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u
> >
> > int kvm_init_nv_sysregs(struct kvm *kvm)
> > {
> > + u64 res0, res1;
> > int ret = 0;
> >
> > mutex_lock(&kvm->arch.config_lock);
> > @@ -209,6 +210,59 @@ int kvm_init_nv_sysregs(struct kvm *kvm)
> > kvm->arch.id_regs[i] = limit_nv_id_reg(IDX_IDREG(i),
> > kvm->arch.id_regs[i]);
> >
> > + /* VTTBR_EL2 */
> > + res0 = res1 = 0;
> > + if (!kvm_has_feat_enum(kvm, ID_AA64MMFR1_EL1, VMIDBits, 16))
> > + res0 |= GENMASK(63, 56);
> > + set_sysreg_masks(kvm, VTTBR_EL2, res0, res1);
> > +
> > + /* VTCR_EL2 */
> > + res0 = GENMASK(63, 32) | GENMASK(30, 20);
> > + res1 = BIT(31);
> > + set_sysreg_masks(kvm, VTCR_EL2, res0, res1);
> > +
> > + /* VMPIDR_EL2 */
> > + res0 = GENMASK(63, 40) | GENMASK(30, 24);
> > + res1 = BIT(31);
> > + set_sysreg_masks(kvm, VMPIDR_EL2, res0, res1);
> > +
> > + /* HCR_EL2 */
> > + res0 = BIT(48);
> > + res1 = HCR_RW;
>
> Just want to clarify this bit actually, this is restricting the (first level
> only?) nested EL1 to run as AArch64?
This is restricting it at all levels. The guest hypervisor will
eventually write its own view of HCR_EL2.RW in the VNCR page, and if
it has promised AArch32 to its own guest (at any level), it is in for
a treat, because we will forcefully reset this bit to 1.
> Should we be sanitising ID_AA64PFR0_EL1.EL1=0b0001?
We already do. See limit_nv_id_reg() and how it constrains
ID_AA64PFR0_EL1.ELx to 0b0001.
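i.e. something to this effect (a sketch of the relevant part, not the
verbatim code):

	/* Constrain EL1/EL2 to AArch64-only */
	val &= ~(ID_AA64PFR0_EL1_EL1_MASK | ID_AA64PFR0_EL1_EL2_MASK);
	val |= SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, EL1, IMP) |
	       SYS_FIELD_PREP_ENUM(ID_AA64PFR0_EL1, EL2, IMP);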
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v2 04/25] KVM: arm64: nv: Add sanitising to EL2 configuration registers
2024-02-02 15:10 ` Marc Zyngier
@ 2024-02-02 16:26 ` Joey Gouly
0 siblings, 0 replies; 48+ messages in thread
From: Joey Gouly @ 2024-02-02 16:26 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
Oliver Upton, Zenghui Yu, Catalin Marinas, Will Deacon,
Mark Brown
On Fri, Feb 02, 2024 at 03:10:26PM +0000, Marc Zyngier wrote:
> On Thu, 01 Feb 2024 14:56:07 +0000,
> Joey Gouly <joey.gouly@arm.com> wrote:
> >
> > On Tue, Jan 30, 2024 at 08:45:11PM +0000, Marc Zyngier wrote:
> > > We can now start making use of our sanitising masks by setting them
> > > to values that depend on the guest's configuration.
> > >
> > > First up are VTTBR_EL2, VTCR_EL2, VMPIDR_EL2 and HCR_EL2.
> > >
> > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > > ---
> > > arch/arm64/kvm/nested.c | 56 ++++++++++++++++++++++++++++++++++++++++-
> > > 1 file changed, 55 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> > > index c976cd4b8379..ee461e630527 100644
> > > --- a/arch/arm64/kvm/nested.c
> > > +++ b/arch/arm64/kvm/nested.c
> > > @@ -181,7 +181,7 @@ u64 kvm_vcpu_sanitise_vncr_reg(const struct kvm_vcpu *vcpu, enum vcpu_sysreg sr)
> > > return v;
> > > }
> > >
> > > -static void __maybe_unused set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u64 res1)
> > > +static void set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u64 res1)
> > > {
> > > int i = sr - __VNCR_START__;
> > >
> > > @@ -191,6 +191,7 @@ static void __maybe_unused set_sysreg_masks(struct kvm *kvm, int sr, u64 res0, u
> > >
> > > int kvm_init_nv_sysregs(struct kvm *kvm)
> > > {
> > > + u64 res0, res1;
> > > int ret = 0;
> > >
> > > mutex_lock(&kvm->arch.config_lock);
> > > @@ -209,6 +210,59 @@ int kvm_init_nv_sysregs(struct kvm *kvm)
> > > kvm->arch.id_regs[i] = limit_nv_id_reg(IDX_IDREG(i),
> > > kvm->arch.id_regs[i]);
> > >
> > > + /* VTTBR_EL2 */
> > > + res0 = res1 = 0;
> > > + if (!kvm_has_feat_enum(kvm, ID_AA64MMFR1_EL1, VMIDBits, 16))
> > > + res0 |= GENMASK(63, 56);
> > > + set_sysreg_masks(kvm, VTTBR_EL2, res0, res1);
> > > +
> > > + /* VTCR_EL2 */
> > > + res0 = GENMASK(63, 32) | GENMASK(30, 20);
> > > + res1 = BIT(31);
> > > + set_sysreg_masks(kvm, VTCR_EL2, res0, res1);
> > > +
> > > + /* VMPIDR_EL2 */
> > > + res0 = GENMASK(63, 40) | GENMASK(30, 24);
> > > + res1 = BIT(31);
> > > + set_sysreg_masks(kvm, VMPIDR_EL2, res0, res1);
> > > +
> > > + /* HCR_EL2 */
> > > + res0 = BIT(48);
> > > + res1 = HCR_RW;
> >
> > Just want to clarify this bit actually, this is restricting the (first level
> > only?) nested EL1 to run as AArch64?
>
> This is restricting it at all levels. The guest hypervisor will
> eventually write its own view of HCR_EL2.RW in the VNCR page, and if
> it has promised AArch32 to its own guest (at any level), it is in for
> a treat, because we will forcefully reset this bit to 1.
Alright, thanks, I'll have to work through that some more.
>
> > Should we be sanitising ID_AA64PFR0_EL1.EL1=0b0001?
>
> We already do. See limit_nv_id_reg() and how it constraints
> ID_AA64PFR0_EL1.ELx to 0b0001.
>
I forgot about that; I was looking at read_sanitised_id_aa64pfr0_el1() in the
non-NV parts.
Thanks,
Joey
* Re: [PATCH v2 02/25] KVM: arm64: Add feature checking helpers
2024-01-30 20:45 ` [PATCH v2 02/25] KVM: arm64: Add feature checking helpers Marc Zyngier
@ 2024-02-02 17:13 ` Suzuki K Poulose
2024-02-04 11:08 ` Marc Zyngier
0 siblings, 1 reply; 48+ messages in thread
From: Suzuki K Poulose @ 2024-02-02 17:13 UTC (permalink / raw)
To: Marc Zyngier, kvmarm, linux-arm-kernel
Cc: James Morse, Oliver Upton, Zenghui Yu, Catalin Marinas,
Will Deacon, Joey Gouly, Mark Brown
Hi Marc,
On 30/01/2024 20:45, Marc Zyngier wrote:
> In order to make it easier to check whether a particular feature
> is exposed to a guest, add a new set of helpers, with kvm_has_feat()
> being the most useful.
>
> Let's start making use of them in the PMU code (courtesy of Oliver).
> Follow-up work will introduce additional use patterns.
I think there is a bit of inconsistency between the macros for signed
and unsigned fields. The unsigned fields are extracted (i.e., as if they
were shifted down to bit 0), but the signed fields are not shifted all
the way down to bit 0 (in fact, the two signed macros shift to different
positions), so we end up comparing the wrong things.
Below, I use ID_AA64PFR0_EL1, fld=EL2, val=IMP as the unsigned example,
and ID_AA64PFR0_EL1, AdvSIMD, NI as the signed one.
>
> Co-developed-by: Oliver Upton <oliver.upton@linux.dev>
> Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm64/include/asm/kvm_host.h | 53 +++++++++++++++++++++++++++++++
> arch/arm64/kvm/pmu-emul.c | 11 ++++---
> arch/arm64/kvm/sys_regs.c | 6 ++--
> include/kvm/arm_pmu.h | 11 -------
> 4 files changed, 61 insertions(+), 20 deletions(-)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 21c57b812569..c0cf9c5f5e8d 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -1233,4 +1233,57 @@ static inline void kvm_hyp_reserve(void) { }
> void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
> bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu);
>
> +#define __expand_field_sign_unsigned(id, fld, val) \
> + ((u64)(id##_##fld##_##val))
For unsigned features we get the actual "field" value, not the value in
position, e.g. ID_AA64PFR0_EL1_EL2_IMP = (0x1).
> +
> +#define __expand_field_sign_signed(id, fld, val) \
> + ({ \
> + s64 __val = id##_##fld##_##val; \
> + __val <<= 64 - id##_##fld##_WIDTH; \
> + __val >>= 64 - id##_##fld##_SHIFT - id##_##fld##_WIDTH; \
But for signed fields, we shift them back into their position in the
ID reg. E.g., for ID_AA64PFR0_EL1, AdvSIMD, NI we get:
__val = ID_AA64PFR0_EL1_AdvSIMD_NI; /* = 0xf */
__val <<= 64 - 4; /* 0xf0_00_00_00_00_00_00_00 */
__val >>= 64 - 20 - 4; /* 0xff_ff_ff_ff_ff_f0_00_00 */
I think the last line instead should be:
__val >>= 64 - id##_##fld##_WIDTH;
> + \
> + __val; \
> + })
> +
> +#define expand_field_sign(id, fld, val) \
> + (id##_##fld##_SIGNED ? \
> + __expand_field_sign_signed(id, fld, val) : \
> + __expand_field_sign_unsigned(id, fld, val))
> +
> +#define get_idreg_field_unsigned(kvm, id, fld) \
> + ({ \
> + u64 __val = IDREG(kvm, SYS_##id); \
> + __val &= id##_##fld##_MASK; \
> + __val >>= id##_##fld##_SHIFT; \
> + \
We extract the field value for unsigned fields, i.e., shifted to bit 0,
and that matches expand_field_sign().
> + __val; \
> + })
> +
> +#define get_idreg_field_signed(kvm, id, fld) \
> + ({ \
> + s64 __val = IDREG(kvm, SYS_##id); \
> + __val <<= 64 - id##_##fld##_SHIFT - id##_##fld##_WIDTH; \
> + __val >>= id##_##fld##_SHIFT; \
However, here we get (assuming ID_AA64PFR0_EL1.AdvSIMD = 0xf, and
all other fields are 0 for clarity):
__val = IDREG(kvm, SYS_ID_AA64PFR0_EL1); /* = 0xf0_00_00 (0xf << 20) */
__val <<= 64 - 20 - 4; /* = 0xf0_00_00_00_00_00_00_00 */
__val >>= 20; /* = 0xff_ff_ff_00_00_00_00_00 */
Thus they don't match. Instead the last line should be:
__val >>= 64 - id##_##fld##_WIDTH;
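A quick userspace demo of the difference (standalone; AdvSIMD sits at
shift 20, width 4, and this relies on arithmetic right shifts for
signed types, as the kernel does):

#include <stdint.h>
#include <stdio.h>

#define SHIFT	20
#define WIDTH	4

int main(void)
{
	uint64_t reg = 0xfULL << SHIFT;		/* AdvSIMD == 0xf (NI) */

	/* as in the patch: sign-extended, but left at bit SHIFT */
	int64_t cur = (int64_t)(reg << (64 - SHIFT - WIDTH)) >> SHIFT;
	/* as suggested: sign-extended field value at bit 0, i.e. -1 */
	int64_t fix = (int64_t)(reg << (64 - SHIFT - WIDTH)) >> (64 - WIDTH);

	printf("cur=%lld fix=%lld\n", (long long)cur, (long long)fix);
	return 0;
}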
Suzuki
> + \
> + __val; \
> + })
> +
> +#define get_idreg_field_enum(kvm, id, fld) \
> + get_idreg_field_unsigned(kvm, id, fld)
> +
> +#define get_idreg_field(kvm, id, fld) \
> + (id##_##fld##_SIGNED ? \
> + get_idreg_field_signed(kvm, id, fld) : \
> + get_idreg_field_unsigned(kvm, id, fld))
> +
> +#define kvm_has_feat(kvm, id, fld, limit) \
> + (get_idreg_field((kvm), id, fld) >= expand_field_sign(id, fld, limit))
> +
> +#define kvm_has_feat_enum(kvm, id, fld, limit) \
> + (get_idreg_field_unsigned((kvm), id, fld) == id##_##fld##_##limit)
> +
> +#define kvm_has_feat_range(kvm, id, fld, min, max) \
> + (get_idreg_field((kvm), id, fld) >= expand_field_sign(id, fld, min) && \
> + get_idreg_field((kvm), id, fld) <= expand_field_sign(id, fld, max))
> +
> #endif /* __ARM64_KVM_HOST_H__ */
> diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
> index 3d9467ff73bc..925522470b2b 100644
> --- a/arch/arm64/kvm/pmu-emul.c
> +++ b/arch/arm64/kvm/pmu-emul.c
> @@ -64,12 +64,11 @@ u64 kvm_pmu_evtyper_mask(struct kvm *kvm)
> {
> u64 mask = ARMV8_PMU_EXCLUDE_EL1 | ARMV8_PMU_EXCLUDE_EL0 |
> kvm_pmu_event_mask(kvm);
> - u64 pfr0 = IDREG(kvm, SYS_ID_AA64PFR0_EL1);
>
> - if (SYS_FIELD_GET(ID_AA64PFR0_EL1, EL2, pfr0))
> + if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL2, IMP))
> mask |= ARMV8_PMU_INCLUDE_EL2;
>
> - if (SYS_FIELD_GET(ID_AA64PFR0_EL1, EL3, pfr0))
> + if (kvm_has_feat(kvm, ID_AA64PFR0_EL1, EL3, IMP))
> mask |= ARMV8_PMU_EXCLUDE_NS_EL0 |
> ARMV8_PMU_EXCLUDE_NS_EL1 |
> ARMV8_PMU_EXCLUDE_EL3;
> @@ -83,8 +82,10 @@ u64 kvm_pmu_evtyper_mask(struct kvm *kvm)
> */
> static bool kvm_pmc_is_64bit(struct kvm_pmc *pmc)
> {
> + struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
> +
> return (pmc->idx == ARMV8_PMU_CYCLE_IDX ||
> - kvm_pmu_is_3p5(kvm_pmc_to_vcpu(pmc)));
> + kvm_has_feat(vcpu->kvm, ID_AA64DFR0_EL1, PMUVer, V3P5));
> }
>
> static bool kvm_pmc_has_64bit_overflow(struct kvm_pmc *pmc)
> @@ -556,7 +557,7 @@ void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
> return;
>
> /* Fixup PMCR_EL0 to reconcile the PMU version and the LP bit */
> - if (!kvm_pmu_is_3p5(vcpu))
> + if (!kvm_has_feat(vcpu->kvm, ID_AA64DFR0_EL1, PMUVer, V3P5))
> val &= ~ARMV8_PMU_PMCR_LP;
>
> /* The reset bits don't indicate any state, and shouldn't be saved. */
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 041b11825578..3c31f8cb9eef 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -505,10 +505,9 @@ static bool trap_loregion(struct kvm_vcpu *vcpu,
> struct sys_reg_params *p,
> const struct sys_reg_desc *r)
> {
> - u64 val = IDREG(vcpu->kvm, SYS_ID_AA64MMFR1_EL1);
> u32 sr = reg_to_encoding(r);
>
> - if (!(val & (0xfUL << ID_AA64MMFR1_EL1_LO_SHIFT))) {
> + if (!kvm_has_feat(vcpu->kvm, ID_AA64MMFR1_EL1, LO, IMP)) {
> kvm_inject_undefined(vcpu);
> return false;
> }
> @@ -2748,8 +2747,7 @@ static bool trap_dbgdidr(struct kvm_vcpu *vcpu,
> return ignore_write(vcpu, p);
> } else {
> u64 dfr = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1);
> - u64 pfr = IDREG(vcpu->kvm, SYS_ID_AA64PFR0_EL1);
> - u32 el3 = !!SYS_FIELD_GET(ID_AA64PFR0_EL1, EL3, pfr);
> + u32 el3 = kvm_has_feat(vcpu->kvm, ID_AA64PFR0_EL1, EL3, IMP);
>
> p->regval = ((SYS_FIELD_GET(ID_AA64DFR0_EL1, WRPs, dfr) << 28) |
> (SYS_FIELD_GET(ID_AA64DFR0_EL1, BRPs, dfr) << 24) |
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index 4b9d8fb393a8..eb4c369a79eb 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -90,16 +90,6 @@ void kvm_vcpu_pmu_resync_el0(void);
> vcpu->arch.pmu.events = *kvm_get_pmu_events(); \
> } while (0)
>
> -/*
> - * Evaluates as true when emulating PMUv3p5, and false otherwise.
> - */
> -#define kvm_pmu_is_3p5(vcpu) ({ \
> - u64 val = IDREG(vcpu->kvm, SYS_ID_AA64DFR0_EL1); \
> - u8 pmuver = SYS_FIELD_GET(ID_AA64DFR0_EL1, PMUVer, val); \
> - \
> - pmuver >= ID_AA64DFR0_EL1_PMUVer_V3P5; \
> -})
> -
> u8 kvm_arm_pmu_get_pmuver_limit(void);
> u64 kvm_pmu_evtyper_mask(struct kvm *kvm);
> int kvm_arm_set_default_pmu(struct kvm *kvm);
> @@ -168,7 +158,6 @@ static inline u64 kvm_pmu_get_pmceid(struct kvm_vcpu *vcpu, bool pmceid1)
> }
>
> #define kvm_vcpu_has_pmu(vcpu) ({ false; })
> -#define kvm_pmu_is_3p5(vcpu) ({ false; })
> static inline void kvm_pmu_update_vcpu_events(struct kvm_vcpu *vcpu) {}
> static inline void kvm_vcpu_pmu_restore_guest(struct kvm_vcpu *vcpu) {}
> static inline void kvm_vcpu_pmu_restore_host(struct kvm_vcpu *vcpu) {}
* Re: [PATCH v2 06/25] KVM: arm64: nv: Add sanitising to VNCR-backed HCRX_EL2
2024-02-02 14:56 ` Marc Zyngier
@ 2024-02-02 17:15 ` Oliver Upton
2024-02-02 19:17 ` Oliver Upton
0 siblings, 1 reply; 48+ messages in thread
From: Oliver Upton @ 2024-02-02 17:15 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
On Fri, Feb 02, 2024 at 02:56:50PM +0000, Marc Zyngier wrote:
> No amount of warnings will do, because people don't give a damn.
>
> I'm actually in favour of something far more radical: we snapshot the
> raw value of all the used RES0/1, and put BUILD_BUG_ON()s that fire if
> any value has changed. They will have to touch the KVM code to fix
> that, and we catch them red-handed.
Oh I like that a lot more.
> >
> > > + if (!kvm_has_feat(kvm, ID_AA64ISAR3_EL1, PACM, TRIVIAL_IMP))
> > > + res0 |= HCRX_EL2_PACMEn;
> > > + if (!kvm_has_feat(kvm, ID_AA64PFR2_EL1, FPMR, IMP))
> > > + res0 |= HCRX_EL2_EnFPM;
> > > + if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, GCS, IMP))
> > > + res0 |= HCRX_EL2_GCSEn;
> > > + if (!kvm_has_feat(kvm, ID_AA64ISAR2_EL1, SYSREG_128, IMP))
> > > + res0 |= HCRX_EL2_EnIDCP128;
> > > + if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, ADERR, DEV_ASYNC))
> > > + res0 |= (HCRX_EL2_EnSDERR | HCRX_EL2_EnSNERR);
> > > + if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, DF2, IMP))
> > > + res0 |= HCRX_EL2_TMEA;
> > > + if (!kvm_has_feat(kvm, ID_AA64MMFR3_EL1, D128, IMP))
> > > + res0 |= HCRX_EL2_D128En;
> > > + if (!kvm_has_feat(kvm, ID_AA64PFR1_EL1, THE, IMP))
> > > + res0 |= HCRX_EL2_PTTWI;
> >
> > Ok, not fair. The latest public version of the ARM ARM doesn't have any
> > of this. Where are you getting it from?
>
> The bloody XML, from which all of this should, in theory, be extracted
> without any human intervention. Sadly, it seems that we are not
> allowed to do so, from what I have been told.
>
> The ARM ARM is, unfortunately, being quickly rendered obsolete by the
> lack of updates (FEAT_THE is part of the v9.4 architecture, released
> in 2022).
Ah, got it. Thanks both for the reference. That really sucks though!
--
Thanks,
Oliver
* Re: [PATCH v2 06/25] KVM: arm64: nv: Add sanitising to VNCR-backed HCRX_EL2
2024-02-02 17:15 ` Oliver Upton
@ 2024-02-02 19:17 ` Oliver Upton
2024-02-02 20:14 ` Marc Zyngier
0 siblings, 1 reply; 48+ messages in thread
From: Oliver Upton @ 2024-02-02 19:17 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
On Fri, Feb 02, 2024 at 05:15:51PM +0000, Oliver Upton wrote:
> On Fri, Feb 02, 2024 at 02:56:50PM +0000, Marc Zyngier wrote:
> > No amount of warnings will do, because people don't give a damn.
> >
> > I'm actually in favour of something far more radical: we snapshot the
> > raw value of all the used RES0/1, and put BUILD_BUG_ON()s that fire if
> > any value has changed. They will have to touch the KVM code to fix
> > that, and we catch them red-handed.
>
> Oh I like that a lot more.
Unsure which registers you planned to give this treatment, but all of
the ID registers we know about should probably go in this bucket too.
--
Thanks,
Oliver
* Re: [PATCH v2 06/25] KVM: arm64: nv: Add sanitising to VNCR-backed HCRX_EL2
2024-02-02 19:17 ` Oliver Upton
@ 2024-02-02 20:14 ` Marc Zyngier
0 siblings, 0 replies; 48+ messages in thread
From: Marc Zyngier @ 2024-02-02 20:14 UTC (permalink / raw)
To: Oliver Upton
Cc: kvmarm, linux-arm-kernel, James Morse, Suzuki K Poulose,
Zenghui Yu, Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
On Fri, 02 Feb 2024 19:17:37 +0000,
Oliver Upton <oliver.upton@linux.dev> wrote:
>
> On Fri, Feb 02, 2024 at 05:15:51PM +0000, Oliver Upton wrote:
> > On Fri, Feb 02, 2024 at 02:56:50PM +0000, Marc Zyngier wrote:
> > > No amount of warnings will do, because people don't give a damn.
> > >
> > > I'm actually in favour of something far more radical: we snapshot the
> > > raw value of all the used RES0/1, and put BUILD_BUG_ON()s that fire if
> > > any value has changed. They will have to touch the KVM code to fix
> > > that, and we catch them red-handed.
> >
> > Oh I like that a lot more.
>
> Unsure which registers you planned to give this treatment, but all of
> the ID registers we know about should probably go in this bucket too.
Yup. My current plan is to simply extract all RES0/RES1 values from
arch/arm64/include/generated/asm/sysreg-defs.h (currently 232
definitions), and assert on that. We can relax it for registers that
don't have any RES0 or RES1 bits, but that's about it.
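
A rough sketch of what such an assertion could look like (an editorial
illustration only: the snapshot value and the function name below are made
up, and only the HCRX_EL2_RES0 mask is meant to come from the generated
header):

#include <linux/build_bug.h>	/* BUILD_BUG_ON() */
#include <asm/sysreg.h>		/* pulls in the generated sysreg-defs.h */

/* Hypothetical snapshot of HCRX_EL2's RES0 bits as last audited by KVM */
#define KVM_HCRX_EL2_RES0_SNAPSHOT	UL(0xfffffffa00000000)

static inline void kvm_check_res_bits(void)
{
	/*
	 * If a sysreg update turns a RES0 bit into a live field, the
	 * generated mask changes and the build fails, forcing whoever
	 * changed it to audit the KVM side as well.
	 */
	BUILD_BUG_ON(HCRX_EL2_RES0 != KVM_HCRX_EL2_RES0_SNAPSHOT);
}
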
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v2 02/25] KVM: arm64: Add feature checking helpers
2024-02-02 17:13 ` Suzuki K Poulose
@ 2024-02-04 11:08 ` Marc Zyngier
2024-02-04 11:44 ` Marc Zyngier
0 siblings, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2024-02-04 11:08 UTC (permalink / raw)
To: Suzuki K Poulose
Cc: kvmarm, linux-arm-kernel, James Morse, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
On Fri, 02 Feb 2024 17:13:07 +0000,
Suzuki K Poulose <suzuki.poulose@arm.com> wrote:
>
> Hi Marc,
>
> On 30/01/2024 20:45, Marc Zyngier wrote:
> > In order to make it easier to check whether a particular feature
> > is exposed to a guest, add a new set of helpers, with kvm_has_feat()
> > being the most useful.
> >
> > Let's start making use of them in the PMU code (courtesy of Oliver).
> > Follow-up work will introduce additional use patterns.
>
> I think there is a bit of inconsistency in the macros for signed
> and unsigned. The unsigned fields are extracted (i.e., as if they
> were shifted to bit 0). But the signed fields are not shifted
> completely to bit '0' (in fact to different positions) and eventually
> we compare the wrong things.
>
> Using ID_AA64PFR0_EL1, fld=EL2, val=IMP for unsigned and
> ID_AA64PFR0_EL1, AdvSIMD, NI for signed.
>
> >
> > Co-developed-by: Oliver Upton <oliver.upton@linux.dev>
> > Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> > arch/arm64/include/asm/kvm_host.h | 53 +++++++++++++++++++++++++++++++
> > arch/arm64/kvm/pmu-emul.c | 11 ++++---
> > arch/arm64/kvm/sys_regs.c | 6 ++--
> > include/kvm/arm_pmu.h | 11 -------
> > 4 files changed, 61 insertions(+), 20 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 21c57b812569..c0cf9c5f5e8d 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -1233,4 +1233,57 @@ static inline void kvm_hyp_reserve(void) { }
> > void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
> > bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu);
> > +#define __expand_field_sign_unsigned(id, fld, val) \
> > + ((u64)(id##_##fld##_##val))
>
> For unsigned features we get the actual "field" value, not the value
> > in position, e.g. ID_AA64PFR0_EL1_EL2_IMP = (0x1)
>
> > +
> > +#define __expand_field_sign_signed(id, fld, val) \
> > + ({ \
> > + s64 __val = id##_##fld##_##val; \
> > + __val <<= 64 - id##_##fld##_WIDTH; \
> > + __val >>= 64 - id##_##fld##_SHIFT - id##_##fld##_WIDTH; \
>
> But for signed fields, we shift them back into the "position" as in
> the ID_REG. e.g.,
>
> ID_AA64PFR0_EL1, AdvSIMD, NI we get:
>
> __val = ID_AA64PFR0_EL1_AdvSIMD_NI; /* = 0xf */
> __val <<= 64 - 4; /* 0xf0_00_00_00_00_00_00_00 */
> __val >>= 64 - 20 - 4; /* 0xff_ff_ff_ff_ff_f0_00_00 */
>
> I think the last line instead should be:
> __val >>= 64 - id##_##fld##_WIDTH;
Huh, you're absolutely right.
>
> > + \
> > + __val; \
> > + })
> > +
> > +#define expand_field_sign(id, fld, val) \
> > + (id##_##fld##_SIGNED ? \
> > + __expand_field_sign_signed(id, fld, val) : \
> > + __expand_field_sign_unsigned(id, fld, val))
> > +
> > +#define get_idreg_field_unsigned(kvm, id, fld) \
> > + ({ \
> > + u64 __val = IDREG(kvm, SYS_##id); \
> > + __val &= id##_##fld##_MASK; \
> > + __val >>= id##_##fld##_SHIFT; \
> > + \
>
> We extract the field value for an unsigned field, i.e., shifted to bit 0,
> and that matches expand_field_sign().
>
> > + __val; \
> > + })
> > +
> > +#define get_idreg_field_signed(kvm, id, fld) \
> > + ({ \
> > + s64 __val = IDREG(kvm, SYS_##id); \
> > + __val <<= 64 - id##_##fld##_SHIFT - id##_##fld##_WIDTH; \
> > + __val >>= id##_##fld##_SHIFT; \
>
> However, here we get (assuming value ID_AA64PFR0_EL1_AdvSIMD = 0xf, and
> all other fields are 0 for clarity)
>
> __val = IDREG(kvm, SYS_ID_AA64PFR0_EL1); = 0xf0_00_00; /* 0xf << 20 */
> __val <<= 64 - 20 - 4; /* = 0xf0_00_00_00_00_00_00_00 */
> __val >>= 20; /* = 0xff_ff_ff_00_00_00_00_00 */
Gah... again!
>
> Thus they don't match. Instead the last line should be:
>
> __val >>= id##_##fld##_WIDTH;
Shouldn't this be (64 - WIDTH) instead, since we want the value to be
shifted to bit 0? Otherwise, you get 0xff_00_00_00_00_00_00_00 (as per
your example).
Thanks a lot for spotting those, much appreciated.
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v2 02/25] KVM: arm64: Add feature checking helpers
2024-02-04 11:08 ` Marc Zyngier
@ 2024-02-04 11:44 ` Marc Zyngier
2024-02-05 10:10 ` Suzuki K Poulose
0 siblings, 1 reply; 48+ messages in thread
From: Marc Zyngier @ 2024-02-04 11:44 UTC (permalink / raw)
To: Suzuki K Poulose
Cc: kvmarm, linux-arm-kernel, James Morse, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
On Sun, 04 Feb 2024 11:08:45 +0000,
Marc Zyngier <maz@kernel.org> wrote:
>
> On Fri, 02 Feb 2024 17:13:07 +0000,
> Suzuki K Poulose <suzuki.poulose@arm.com> wrote:
> >
> > Hi Marc,
> >
> > On 30/01/2024 20:45, Marc Zyngier wrote:
> > > In order to make it easier to check whether a particular feature
> > > is exposed to a guest, add a new set of helpers, with kvm_has_feat()
> > > being the most useful.
> > >
> > > Let's start making use of them in the PMU code (courtesy of Oliver).
> > > Follow-up work will introduce additional use patterns.
> >
> > I think there is a bit of inconsistency in the macros for signed
> > and unsigned. The unsigned fields are extracted (i.e., as if they
> > were shifted to bit 0). But the signed fields are not shifted
> > completely to bit '0' (in fact to different positions) and eventually
> > we compare the wrong things.
> >
> > Using ID_AA64PFR0_EL1, fld=EL2, val=IMP for unsigned and
> > ID_AA64PFR0_EL1, AdvSIMD, NI for signed.
> >
> > >
> > > Co-developed-by: Oliver Upton <oliver.upton@linux.dev>
> > > Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > > ---
> > > arch/arm64/include/asm/kvm_host.h | 53 +++++++++++++++++++++++++++++++
> > > arch/arm64/kvm/pmu-emul.c | 11 ++++---
> > > arch/arm64/kvm/sys_regs.c | 6 ++--
> > > include/kvm/arm_pmu.h | 11 -------
> > > 4 files changed, 61 insertions(+), 20 deletions(-)
> > >
> > > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > > index 21c57b812569..c0cf9c5f5e8d 100644
> > > --- a/arch/arm64/include/asm/kvm_host.h
> > > +++ b/arch/arm64/include/asm/kvm_host.h
> > > @@ -1233,4 +1233,57 @@ static inline void kvm_hyp_reserve(void) { }
> > > void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
> > > bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu);
> > > +#define __expand_field_sign_unsigned(id, fld, val) \
> > > + ((u64)(id##_##fld##_##val))
> >
> > For unsigned features we get the actual "field" value, not the value
> > in position, e.g. ID_AA64PFR0_EL1_EL2_IMP = (0x1)
> >
> > > +
> > > +#define __expand_field_sign_signed(id, fld, val) \
> > > + ({ \
> > > + s64 __val = id##_##fld##_##val; \
> > > + __val <<= 64 - id##_##fld##_WIDTH; \
> > > + __val >>= 64 - id##_##fld##_SHIFT - id##_##fld##_WIDTH; \
> >
> > But for signed fields, we shift them back into the "position" as in
> > the ID_REG. e.g.,
> >
> > ID_AA64PFR0_EL1, AdvSIMD, NI we get:
> >
> > __val = ID_AA64PFR0_EL1_AdvSIMD_NI; /* = 0xf */
> > __val <<= 64 - 4; /* 0xf0_00_00_00_00_00_00_00 */
> > __val >>= 64 - 20 - 4; /* 0xff_ff_ff_ff_ff_f0_00_00 */
> >
> > I think the last line instead should be:
> > __val >>= 64 - id##_##fld##_WIDTH;
>
> Huh, you're absolutely right.
>
> >
> > > + \
> > > + __val; \
> > > + })
> > > +
> > > +#define expand_field_sign(id, fld, val) \
> > > + (id##_##fld##_SIGNED ? \
> > > + __expand_field_sign_signed(id, fld, val) : \
> > > + __expand_field_sign_unsigned(id, fld, val))
> > > +
> > > +#define get_idreg_field_unsigned(kvm, id, fld) \
> > > + ({ \
> > > + u64 __val = IDREG(kvm, SYS_##id); \
> > > + __val &= id##_##fld##_MASK; \
> > > + __val >>= id##_##fld##_SHIFT; \
> > > + \
> >
> > We extract the field value for an unsigned field, i.e., shifted to bit 0,
> > and that matches expand_field_sign().
> >
> > > + __val; \
> > > + })
> > > +
> > > +#define get_idreg_field_signed(kvm, id, fld) \
> > > + ({ \
> > > + s64 __val = IDREG(kvm, SYS_##id); \
> > > + __val <<= 64 - id##_##fld##_SHIFT - id##_##fld##_WIDTH; \
> > > + __val >>= id##_##fld##_SHIFT; \
> >
> > However, here we get (assuming value ID_AA64PFR0_EL1_AdvSIMD = 0xf, and
> > all other fields are 0 for clarity)
> >
> > __val = IDREG(kvm, SYS_ID_AA64PFR0_EL1); = 0xf0_00_00; /* 0xf << 20 */
> > __val <<= 64 - 20 - 4; /* = 0xf0_00_00_00_00_00_00_00 */
> > __val >>= 20; /* = 0xff_ff_ff_00_00_00_00_00 */
>
> Gah... again!
>
> >
> > Thus they don't match. Instead the last line should be:
> >
> > __val >>= id##_##fld##_WIDTH;
>
> Shouldn't this be (64 - WIDTH) instead, since we want the value to be
> shifted to bit 0? Otherwise, you get 0xff_00_00_00_00_00_00_00 (as per
> your example).
>
> Thanks a lot for spotting those, much appreciated.
And since it is now obvious to anyone that I can't shift properly,
let's just not do that. As it turns out, the kernel already has the
required macros, and I can stop being clever with these:
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index ea8acc1914aa..3457fa313bad 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1290,11 +1290,8 @@ bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu);
#define __expand_field_sign_signed(id, fld, val) \
({ \
- s64 __val = id##_##fld##_##val; \
- __val <<= 64 - id##_##fld##_WIDTH; \
- __val >>= 64 - id##_##fld##_SHIFT - id##_##fld##_WIDTH; \
- \
- __val; \
+ u64 __val = id##_##fld##_##val; \
+ sign_extend64(__val, id##_##fld##_WIDTH - 1); \
})
#define expand_field_sign(id, fld, val) \
@@ -1313,11 +1310,9 @@ bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu);
#define get_idreg_field_signed(kvm, id, fld) \
({ \
- s64 __val = IDREG(kvm, SYS_##id); \
- __val <<= 64 - id##_##fld##_SHIFT - id##_##fld##_WIDTH; \
- __val >>= id##_##fld##_SHIFT; \
- \
- __val; \
+ u64 __val = IDREG(kvm, SYS_##id); \
+ __val = FIELD_GET(id##_##fld##_MASK, __val); \
+ sign_extend64(__val, id##_##fld##_WIDTH - 1); \
})
#define get_idreg_field_enum(kvm, id, fld) \
Thoughts?
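
For reference, a quick sanity check of the replacement helpers against the
AdvSIMD example from earlier in the thread (again only a sketch; the
function name is made up, while FIELD_GET() and sign_extend64() are the
existing helpers from <linux/bitfield.h> and <linux/bitops.h>):

#include <linux/types.h>	/* u64, bool */
#include <linux/bitfield.h>	/* FIELD_GET() */
#include <linux/bitops.h>	/* sign_extend64(), GENMASK_ULL() */

/* AdvSIMD lives in ID_AA64PFR0_EL1[23:20]; NI is the signed value 0xf */
static bool advsimd_absent(u64 id_aa64pfr0)
{
	u64 fld = FIELD_GET(GENMASK_ULL(23, 20), id_aa64pfr0);

	/* sign_extend64(0xf, 3) == -1, so NI now compares below IMP (0) */
	return sign_extend64(fld, 4 - 1) < 0;
}
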
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v2 02/25] KVM: arm64: Add feature checking helpers
2024-02-04 11:44 ` Marc Zyngier
@ 2024-02-05 10:10 ` Suzuki K Poulose
0 siblings, 0 replies; 48+ messages in thread
From: Suzuki K Poulose @ 2024-02-05 10:10 UTC (permalink / raw)
To: Marc Zyngier
Cc: kvmarm, linux-arm-kernel, James Morse, Oliver Upton, Zenghui Yu,
Catalin Marinas, Will Deacon, Joey Gouly, Mark Brown
On 04/02/2024 11:44, Marc Zyngier wrote:
> On Sun, 04 Feb 2024 11:08:45 +0000,
> Marc Zyngier <maz@kernel.org> wrote:
>>
>> On Fri, 02 Feb 2024 17:13:07 +0000,
>> Suzuki K Poulose <suzuki.poulose@arm.com> wrote:
>>>
>>> Hi Marc,
>>>
>>> On 30/01/2024 20:45, Marc Zyngier wrote:
>>>> In order to make it easier to check whether a particular feature
>>>> is exposed to a guest, add a new set of helpers, with kvm_has_feat()
>>>> being the most useful.
>>>>
>>>> Let's start making use of them in the PMU code (courtesy of Oliver).
>>>> Follow-up work will introduce additional use patterns.
>>>
>>> I think there is a bit of inconsistency in the macros for signed
>>> and unsigned. The unsigned fields are extracted (i.e., as if they
>>> were shifted to bit 0). But the signed fields are not shifted
>>> completely to bit '0' (in fact to different positions) and eventually
>>> we compare the wrong things.
>>>
>>> Using ID_AA64PFR0_EL1, fld=EL2, val=IMP for unsigned and
>>> ID_AA64PFR0_EL1, AdvSIMD, NI for signed.
>>>
>>>>
>>>> Co-developed-by: Oliver Upton <oliver.upton@linux.dev>
>>>> Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
>>>> Signed-off-by: Marc Zyngier <maz@kernel.org>
>>>> ---
>>>> arch/arm64/include/asm/kvm_host.h | 53 +++++++++++++++++++++++++++++++
>>>> arch/arm64/kvm/pmu-emul.c | 11 ++++---
>>>> arch/arm64/kvm/sys_regs.c | 6 ++--
>>>> include/kvm/arm_pmu.h | 11 -------
>>>> 4 files changed, 61 insertions(+), 20 deletions(-)
>>>>
>>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>>>> index 21c57b812569..c0cf9c5f5e8d 100644
>>>> --- a/arch/arm64/include/asm/kvm_host.h
>>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>>> @@ -1233,4 +1233,57 @@ static inline void kvm_hyp_reserve(void) { }
>>>> void kvm_arm_vcpu_power_off(struct kvm_vcpu *vcpu);
>>>> bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu);
>>>> +#define __expand_field_sign_unsigned(id, fld, val) \
>>>> + ((u64)(id##_##fld##_##val))
>>>
>>> For unsigned features we get the actual "field" value, not the value
>>> in position, e.g. ID_AA64PFR0_EL1_EL2_IMP = (0x1)
>>>
>>>> +
>>>> +#define __expand_field_sign_signed(id, fld, val) \
>>>> + ({ \
>>>> + s64 __val = id##_##fld##_##val; \
>>>> + __val <<= 64 - id##_##fld##_WIDTH; \
>>>> + __val >>= 64 - id##_##fld##_SHIFT - id##_##fld##_WIDTH; \
>>>
>>> But for signed fields, we shift them back into the "position" as in
>>> the ID_REG. e.g.,
>>>
>>> ID_AA64PFR0_EL1, AdvSIMD, NI we get:
>>>
>>> __val = ID_AA64PFR0_EL1_AdvSIMD_NI; /* = 0xf */
>>> __val <<= 64 - 4; /* 0xf0_00_00_00_00_00_00_00 */
>>> __val >>= 64 - 20 - 4; /* 0xff_ff_ff_ff_ff_f0_00_00 */
>>>
>>> I think the last line instead should be:
>>> __val >>= 64 - id##_##fld##_WIDTH;
>>
>> Huh, you're absolutely right.
>>
>>>
>>>> + \
>>>> + __val; \
>>>> + })
>>>> +
>>>> +#define expand_field_sign(id, fld, val) \
>>>> + (id##_##fld##_SIGNED ? \
>>>> + __expand_field_sign_signed(id, fld, val) : \
>>>> + __expand_field_sign_unsigned(id, fld, val))
>>>> +
>>>> +#define get_idreg_field_unsigned(kvm, id, fld) \
>>>> + ({ \
>>>> + u64 __val = IDREG(kvm, SYS_##id); \
>>>> + __val &= id##_##fld##_MASK; \
>>>> + __val >>= id##_##fld##_SHIFT; \
>>>> + \
>>>
>>> We extract the field value for an unsigned field, i.e., shifted to bit 0,
>>> and that matches expand_field_sign().
>>>
>>>> + __val; \
>>>> + })
>>>> +
>>>> +#define get_idreg_field_signed(kvm, id, fld) \
>>>> + ({ \
>>>> + s64 __val = IDREG(kvm, SYS_##id); \
>>>> + __val <<= 64 - id##_##fld##_SHIFT - id##_##fld##_WIDTH; \
>>>> + __val >>= id##_##fld##_SHIFT; \
>>>
>>> However, here we get (assuming value ID_AA64PFR0_EL1_AdvSIMD = 0xf, and
>>> all other fields are 0 for clarity)
>>>
>>> __val = IDREG(kvm, SYS_ID_AA64PFR0_EL1); = 0xf0_00_00; /* 0xf << 20 */
>>> __val <<= 64 - 20 - 4; /* = 0xf0_00_00_00_00_00_00_00 */
>>> __val >>= 20; /* = 0xff_ff_ff_00_00_00_00_00 */
>>
>> Gah... again!
>>
>>>
> > > Thus they don't match. Instead the last line should be:
>>>
>>> __val >>= id##_##fld##_WIDTH;
>>
>> Shouldn't this be (64 - WIDTH) instead, since we want the value to be
>> shifted to bit 0? Otherwise, you get 0xff_00_00_00_00_00_00_00 (as per
>> your example).
Bah, you're right. See, it's so easy to mess it up ;-)
>>
>> Thanks a lot for spotting those, much appreciated.
>
> And since it is now obvious to anyone that I can't shift properly,
> let's just not do that. As it turns out, the kernel already has the
> required macros, and I can stop being clever with these:
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index ea8acc1914aa..3457fa313bad 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -1290,11 +1290,8 @@ bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu);
>
> #define __expand_field_sign_signed(id, fld, val) \
> ({ \
> - s64 __val = id##_##fld##_##val; \
> - __val <<= 64 - id##_##fld##_WIDTH; \
> - __val >>= 64 - id##_##fld##_SHIFT - id##_##fld##_WIDTH; \
> - \
> - __val; \
> + u64 __val = id##_##fld##_##val; \
> + sign_extend64(__val, id##_##fld##_WIDTH - 1); \
> })
>
> #define expand_field_sign(id, fld, val) \
> @@ -1313,11 +1310,9 @@ bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu);
>
> #define get_idreg_field_signed(kvm, id, fld) \
> ({ \
> - s64 __val = IDREG(kvm, SYS_##id); \
> - __val <<= 64 - id##_##fld##_SHIFT - id##_##fld##_WIDTH; \
> - __val >>= id##_##fld##_SHIFT; \
> - \
> - __val; \
> + u64 __val = IDREG(kvm, SYS_##id); \
> + __val = FIELD_GET(id##_##fld##_MASK, __val); \
> + sign_extend64(__val, id##_##fld##_WIDTH - 1); \
> })
>
> #define get_idreg_field_enum(kvm, id, fld) \
>
> Thoughts?
That's much, much better and saves you some staring at the screen. Didn't
know that existed. Thanks for that.
Suzuki
>
> M.
>
Thread overview: 48+ messages
2024-01-30 20:45 [PATCH v2 00/25] KVM/arm64: VM configuration enforcement Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 01/25] arm64: sysreg: Add missing ID_AA64ISAR[13]_EL1 fields and variants Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 02/25] KVM: arm64: Add feature checking helpers Marc Zyngier
2024-02-02 17:13 ` Suzuki K Poulose
2024-02-04 11:08 ` Marc Zyngier
2024-02-04 11:44 ` Marc Zyngier
2024-02-05 10:10 ` Suzuki K Poulose
2024-01-30 20:45 ` [PATCH v2 03/25] KVM: arm64: nv: Add sanitising to VNCR-backed sysregs Marc Zyngier
2024-01-31 14:52 ` Joey Gouly
2024-01-30 20:45 ` [PATCH v2 04/25] KVM: arm64: nv: Add sanitising to EL2 configuration registers Marc Zyngier
2024-01-31 17:16 ` Joey Gouly
2024-02-02 15:05 ` Marc Zyngier
2024-02-01 14:56 ` Joey Gouly
2024-02-02 15:10 ` Marc Zyngier
2024-02-02 16:26 ` Joey Gouly
2024-01-30 20:45 ` [PATCH v2 05/25] KVM: arm64: nv: Add sanitising to VNCR-backed FGT sysregs Marc Zyngier
2024-02-01 14:38 ` Joey Gouly
2024-02-02 14:58 ` Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 06/25] KVM: arm64: nv: Add sanitising to VNCR-backed HCRX_EL2 Marc Zyngier
2024-02-02 8:52 ` Oliver Upton
2024-02-02 13:30 ` Mark Brown
2024-02-02 14:56 ` Marc Zyngier
2024-02-02 17:15 ` Oliver Upton
2024-02-02 19:17 ` Oliver Upton
2024-02-02 20:14 ` Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 07/25] KVM: arm64: nv: Drop sanitised_sys_reg() helper Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 08/25] KVM: arm64: Unify HDFG[WR]TR_GROUP FGT identifiers Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 09/25] KVM: arm64: nv: Correctly handle negative polarity FGTs Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 10/25] KVM: arm64: nv: Turn encoding ranges into discrete XArray stores Marc Zyngier
2024-01-31 14:57 ` Joey Gouly
2024-01-30 20:45 ` [PATCH v2 11/25] KVM: arm64: Drop the requirement for XARRAY_MULTI Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 12/25] KVM: arm64: nv: Move system instructions to their own sys_reg_desc array Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 13/25] KVM: arm64: Always populate the trap configuration xarray Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 14/25] KVM: arm64: Register AArch64 system register entries with the sysreg xarray Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 15/25] KVM: arm64: Use the xarray as the primary sysreg/sysinsn walker Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 16/25] KVM: arm64: Rename __check_nv_sr_forward() to triage_sysreg_trap() Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 17/25] KVM: arm64: Add Fine-Grained UNDEF tracking information Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 18/25] KVM: arm64: Propagate and handle Fine-Grained UNDEF bits Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 19/25] KVM: arm64: Move existing feature disabling over to FGU infrastructure Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 20/25] KVM: arm64: Streamline save/restore of HFG[RW]TR_EL2 Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 21/25] KVM: arm64: Make TLBI OS/Range UNDEF if not advertised to the guest Marc Zyngier
2024-01-31 15:05 ` Joey Gouly
2024-01-30 20:45 ` [PATCH v2 22/25] KVM: arm64: Make PIR{,E0}_EL1 UNDEF if S1PIE is " Marc Zyngier
2024-01-31 14:46 ` Joey Gouly
2024-01-30 20:45 ` [PATCH v2 23/25] KVM: arm64: Make AMU sysreg UNDEF if FEAT_AMU " Marc Zyngier
2024-01-30 20:45 ` [PATCH v2 24/25] KVM: arm64: Make FEAT_MOPS UNDEF if " Marc Zyngier
2024-01-31 15:02 ` Joey Gouly
2024-01-30 20:45 ` [PATCH v2 25/25] KVM: arm64: Add debugfs file for guest's ID registers Marc Zyngier