* [PATCH 00/11] KVM: arm64: nv: FPSIMD/SVE support
@ 2024-05-31 23:13 Oliver Upton
2024-05-31 23:13 ` [PATCH 01/11] KVM: arm64: nv: Forward FP/ASIMD traps to guest hypervisor Oliver Upton
` (11 more replies)
0 siblings, 12 replies; 21+ messages in thread
From: Oliver Upton @ 2024-05-31 23:13 UTC (permalink / raw)
To: kvmarm
Cc: Marc Zyngier, James Morse, Suzuki K Poulose, Zenghui Yu, kvm,
Oliver Upton
Hey!
I've decided to start messing around with nested and have SVE support
working for a nested guest. For the sake of landing a semi-complete
feature upstream, I've also picked up the FPSIMD patches from the NV
series Marc is carrying.
The most annoying part about this series (IMO) is that ZCR_EL2 traps
behave differently from what needs to be virtualized for the guest when
HCR_EL2.NV = 1: the access takes a sysreg trap (EC = 0x18) instead of an
SVE trap (EC = 0x19). So, we need to synthesize the ESR value when
reflecting the trap back into the guest hypervisor.
Otherwise, some care is required to slap the guest hypervisor's ZCR_EL2
into the right place depending on whether or not the vCPU is in a hyp
context, since it affects the hyp's usage of SVE in addition to the VM.
There's more work to be done for honoring the L1's CPTR traps, as this
series only focuses on getting SVE and FPSIMD traps right. We'll get
there one day.
I tested this using a mix of the fpsimd-test and sve-test selftests
running at L0, L1, and L2 concurrently on Neoverse V2.
Jintack Lim (1):
KVM: arm64: nv: Forward FP/ASIMD traps to guest hypervisor
Oliver Upton (10):
KVM: arm64: nv: Forward SVE traps to guest hypervisor
KVM: arm64: nv: Load guest FP state for ZCR_EL2 trap
KVM: arm64: nv: Load guest hyp's ZCR into EL1 state
KVM: arm64: nv: Handle ZCR_EL2 traps
KVM: arm64: nv: Save guest's ZCR_EL2 when in hyp context
KVM: arm64: nv: Use guest hypervisor's max VL when running nested
guest
KVM: arm64: nv: Ensure correct VL is loaded before saving SVE state
KVM: arm64: Spin off helper for programming CPTR traps
KVM: arm64: nv: Honor guest hypervisor's FP/SVE traps in CPTR_EL2
KVM: arm64: Allow the use of SVE+NV
arch/arm64/include/asm/kvm_emulate.h | 47 +++++++++++++++++++
arch/arm64/include/asm/kvm_host.h | 7 +++
arch/arm64/include/asm/kvm_nested.h | 1 -
arch/arm64/kvm/arm.c | 5 --
arch/arm64/kvm/fpsimd.c | 22 +++++++--
arch/arm64/kvm/handle_exit.c | 19 ++++++--
arch/arm64/kvm/hyp/include/hyp/switch.h | 43 ++++++++++++++++-
arch/arm64/kvm/hyp/vhe/switch.c | 62 +++++++++++++++----------
arch/arm64/kvm/nested.c | 3 +-
arch/arm64/kvm/sys_regs.c | 40 ++++++++++++++++
10 files changed, 206 insertions(+), 43 deletions(-)
base-commit: 1613e604df0cd359cf2a7fbd9be7a0bcfacfabd0
--
2.45.1.288.g0e0cd299f1-goog
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH 01/11] KVM: arm64: nv: Forward FP/ASIMD traps to guest hypervisor
2024-05-31 23:13 [PATCH 00/11] KVM: arm64: nv: FPSIMD/SVE support Oliver Upton
@ 2024-05-31 23:13 ` Oliver Upton
2024-05-31 23:13 ` [PATCH 02/11] KVM: arm64: nv: Forward SVE " Oliver Upton
` (10 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Oliver Upton @ 2024-05-31 23:13 UTC (permalink / raw)
To: kvmarm
Cc: Marc Zyngier, James Morse, Suzuki K Poulose, Zenghui Yu, kvm,
Jintack Lim, Christoffer Dall, Oliver Upton
From: Jintack Lim <jintack.lim@linaro.org>
Give precedence to the guest hypervisor's trap configuration when
routing an FP/ASIMD trap taken to EL2. Take advantage of the
infrastructure for translating CPTR_EL2 into the VHE (i.e. EL1) format
and base the trap decision solely on the VHE view of the register.
Bury all of this behind a macro keyed off of the CPTR bitfield in
anticipation of supporting other traps (e.g. SVE).
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
[maz: account for HCR_EL2.E2H when testing for TFP/FPEN, with
all the hard work actually being done by Chase Conklin]
Signed-off-by: Marc Zyngier <maz@kernel.org>
[ oliver: translate nVHE->VHE format for testing traps; macro for reuse
in other CPTR_EL2.xEN fields ]
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
arch/arm64/include/asm/kvm_emulate.h | 43 +++++++++++++++++++++++++
arch/arm64/include/asm/kvm_nested.h | 1 -
arch/arm64/kvm/handle_exit.c | 16 ++++++---
arch/arm64/kvm/hyp/include/hyp/switch.h | 3 ++
4 files changed, 58 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 501e3e019c93..3dd2d80d0cfb 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -11,6 +11,7 @@
#ifndef __ARM64_KVM_EMULATE_H__
#define __ARM64_KVM_EMULATE_H__
+#include <linux/bitfield.h>
#include <linux/kvm_host.h>
#include <asm/debug-monitors.h>
@@ -599,4 +600,46 @@ static __always_inline void kvm_reset_cptr_el2(struct kvm_vcpu *vcpu)
kvm_write_cptr_el2(val);
}
+
+/*
+ * Returns a 'sanitised' view of CPTR_EL2, translating from nVHE to the VHE
+ * format if E2H isn't set.
+ */
+static inline u64 vcpu_sanitised_cptr_el2(const struct kvm_vcpu *vcpu)
+{
+ u64 cptr = vcpu_read_sys_reg(vcpu, CPTR_EL2);
+
+ if (!vcpu_el2_e2h_is_set(vcpu))
+ cptr = translate_cptr_el2_to_cpacr_el1(cptr);
+
+ return cptr;
+}
+
+static inline bool ____cptr_xen_trap_enabled(const struct kvm_vcpu *vcpu,
+ unsigned int xen)
+{
+ switch (xen) {
+ case 0b00:
+ case 0b10:
+ return true;
+ case 0b01:
+ return vcpu_el2_tge_is_set(vcpu) && !vcpu_is_el2(vcpu);
+ case 0b11:
+ default:
+ return false;
+ }
+}
+
+#define __guest_hyp_cptr_xen_trap_enabled(vcpu, xen) \
+ (!vcpu_has_nv(vcpu) ? false : \
+ ____cptr_xen_trap_enabled(vcpu, \
+ SYS_FIELD_GET(CPACR_ELx, xen, \
+ vcpu_sanitised_cptr_el2(vcpu))))
+
+static inline bool guest_hyp_fpsimd_traps_enabled(const struct kvm_vcpu *vcpu)
+{
+ return __guest_hyp_cptr_xen_trap_enabled(vcpu, FPEN);
+}
+
+
#endif /* __ARM64_KVM_EMULATE_H__ */
diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index 5e0ab0596246..5d55f76254c3 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -75,5 +75,4 @@ static inline bool kvm_auth_eretax(struct kvm_vcpu *vcpu, u64 *elr)
return false;
}
#endif
-
#endif /* __ARM64_KVM_NESTED_H */
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index b037f0a0e27e..59fe9b10a87a 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -94,11 +94,19 @@ static int handle_smc(struct kvm_vcpu *vcpu)
}
/*
- * Guest access to FP/ASIMD registers are routed to this handler only
- * when the system doesn't support FP/ASIMD.
+ * This handles the cases where the system does not support FP/ASIMD or when
+ * we are running nested virtualization and the guest hypervisor is trapping
+ * FP/ASIMD accesses by its own guest.
+ *
+ * All other handling of guest vs. host FP/ASIMD register state is handled in
+ * fixup_guest_exit().
*/
-static int handle_no_fpsimd(struct kvm_vcpu *vcpu)
+static int kvm_handle_fpasimd(struct kvm_vcpu *vcpu)
{
+ if (guest_hyp_fpsimd_traps_enabled(vcpu))
+ return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
+
+ /* This is the case when the system doesn't support FP/ASIMD. */
kvm_inject_undefined(vcpu);
return 1;
}
@@ -304,7 +312,7 @@ static exit_handle_fn arm_exit_handlers[] = {
[ESR_ELx_EC_BREAKPT_LOW]= kvm_handle_guest_debug,
[ESR_ELx_EC_BKPT32] = kvm_handle_guest_debug,
[ESR_ELx_EC_BRK64] = kvm_handle_guest_debug,
- [ESR_ELx_EC_FP_ASIMD] = handle_no_fpsimd,
+ [ESR_ELx_EC_FP_ASIMD] = kvm_handle_fpasimd,
[ESR_ELx_EC_PAC] = kvm_handle_ptrauth,
};
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index a92566f36022..b302d32f8326 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -341,6 +341,9 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
/* Only handle traps the vCPU can support here: */
switch (esr_ec) {
case ESR_ELx_EC_FP_ASIMD:
+ /* Forward traps to the guest hypervisor as required */
+ if (guest_hyp_fpsimd_traps_enabled(vcpu))
+ return false;
break;
case ESR_ELx_EC_SVE:
if (!sve_guest)
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH 02/11] KVM: arm64: nv: Forward SVE traps to guest hypervisor
2024-05-31 23:13 [PATCH 00/11] KVM: arm64: nv: FPSIMD/SVE support Oliver Upton
2024-05-31 23:13 ` [PATCH 01/11] KVM: arm64: nv: Forward FP/ASIMD traps to guest hypervisor Oliver Upton
@ 2024-05-31 23:13 ` Oliver Upton
2024-05-31 23:13 ` [PATCH 03/11] KVM: arm64: nv: Load guest FP state for ZCR_EL2 trap Oliver Upton
` (9 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Oliver Upton @ 2024-05-31 23:13 UTC (permalink / raw)
To: kvmarm
Cc: Marc Zyngier, James Morse, Suzuki K Poulose, Zenghui Yu, kvm,
Oliver Upton
Similar to FPSIMD traps, don't load SVE state if the guest hypervisor
has SVE traps enabled and forward the trap instead. Note that ZCR_EL2
will require some special handling, as it takes a sysreg trap to EL2
when HCR_EL2.NV = 1.
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
arch/arm64/include/asm/kvm_emulate.h | 4 ++++
arch/arm64/kvm/handle_exit.c | 3 +++
arch/arm64/kvm/hyp/include/hyp/switch.h | 2 ++
3 files changed, 9 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 3dd2d80d0cfb..e86de04ba1c4 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -641,5 +641,9 @@ static inline bool guest_hyp_fpsimd_traps_enabled(const struct kvm_vcpu *vcpu)
return __guest_hyp_cptr_xen_trap_enabled(vcpu, FPEN);
}
+static inline bool guest_hyp_sve_traps_enabled(const struct kvm_vcpu *vcpu)
+{
+ return __guest_hyp_cptr_xen_trap_enabled(vcpu, ZEN);
+}
#endif /* __ARM64_KVM_EMULATE_H__ */
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 59fe9b10a87a..e4f74699f360 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -217,6 +217,9 @@ static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu)
*/
static int handle_sve(struct kvm_vcpu *vcpu)
{
+ if (guest_hyp_sve_traps_enabled(vcpu))
+ return kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
+
kvm_inject_undefined(vcpu);
return 1;
}
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index b302d32f8326..428ee15dd6ae 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -348,6 +348,8 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
case ESR_ELx_EC_SVE:
if (!sve_guest)
return false;
+ if (guest_hyp_sve_traps_enabled(vcpu))
+ return false;
break;
default:
return false;
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH 03/11] KVM: arm64: nv: Load guest FP state for ZCR_EL2 trap
2024-05-31 23:13 [PATCH 00/11] KVM: arm64: nv: FPSIMD/SVE support Oliver Upton
2024-05-31 23:13 ` [PATCH 01/11] KVM: arm64: nv: Forward FP/ASIMD traps to guest hypervisor Oliver Upton
2024-05-31 23:13 ` [PATCH 02/11] KVM: arm64: nv: Forward SVE " Oliver Upton
@ 2024-05-31 23:13 ` Oliver Upton
2024-06-01 9:47 ` Marc Zyngier
2024-05-31 23:13 ` [PATCH 04/11] KVM: arm64: nv: Load guest hyp's ZCR into EL1 state Oliver Upton
` (8 subsequent siblings)
11 siblings, 1 reply; 21+ messages in thread
From: Oliver Upton @ 2024-05-31 23:13 UTC (permalink / raw)
To: kvmarm
Cc: Marc Zyngier, James Morse, Suzuki K Poulose, Zenghui Yu, kvm,
Oliver Upton
Round out the ZCR_EL2 gymnastics by loading SVE state in the fast path
when the guest hypervisor tries to access SVE state.
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
arch/arm64/kvm/hyp/include/hyp/switch.h | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 428ee15dd6ae..5872eaafc7f0 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -345,6 +345,10 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
if (guest_hyp_fpsimd_traps_enabled(vcpu))
return false;
break;
+ case ESR_ELx_EC_SYS64:
+ if (WARN_ON_ONCE(!is_hyp_ctxt(vcpu)))
+ return false;
+ fallthrough;
case ESR_ELx_EC_SVE:
if (!sve_guest)
return false;
@@ -520,6 +524,22 @@ static bool handle_ampere1_tcr(struct kvm_vcpu *vcpu)
return true;
}
+static bool kvm_hyp_handle_zcr(struct kvm_vcpu *vcpu, u64 *exit_code)
+{
+ u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu));
+
+ if (!vcpu_has_nv(vcpu))
+ return false;
+
+ if (sysreg != SYS_ZCR_EL2)
+ return false;
+
+ if (guest_owns_fp_regs())
+ return false;
+
+ return kvm_hyp_handle_fpsimd(vcpu, exit_code);
+}
+
static bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code)
{
if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) &&
@@ -537,6 +557,9 @@ static bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code)
if (kvm_hyp_handle_cntpct(vcpu))
return true;
+ if (kvm_hyp_handle_zcr(vcpu, exit_code))
+ return true;
+
return false;
}
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH 04/11] KVM: arm64: nv: Load guest hyp's ZCR into EL1 state
2024-05-31 23:13 [PATCH 00/11] KVM: arm64: nv: FPSIMD/SVE support Oliver Upton
` (2 preceding siblings ...)
2024-05-31 23:13 ` [PATCH 03/11] KVM: arm64: nv: Load guest FP state for ZCR_EL2 trap Oliver Upton
@ 2024-05-31 23:13 ` Oliver Upton
2024-05-31 23:13 ` [PATCH 05/11] KVM: arm64: nv: Handle ZCR_EL2 traps Oliver Upton
` (7 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Oliver Upton @ 2024-05-31 23:13 UTC (permalink / raw)
To: kvmarm
Cc: Marc Zyngier, James Morse, Suzuki K Poulose, Zenghui Yu, kvm,
Oliver Upton
Load the guest hypervisor's ZCR_EL2 into the corresponding EL1 register
when restoring SVE state, as ZCR_EL2 affects the VL in the hypervisor
context.
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
arch/arm64/include/asm/kvm_host.h | 4 ++++
arch/arm64/kvm/hyp/include/hyp/switch.h | 3 ++-
2 files changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 8170c04fde91..e01e6de414f1 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -844,6 +844,10 @@ struct kvm_vcpu_arch {
#define vcpu_sve_max_vq(vcpu) sve_vq_from_vl((vcpu)->arch.sve_max_vl)
+#define vcpu_sve_zcr_el1(vcpu) \
+ (unlikely(is_hyp_ctxt(vcpu)) ? __vcpu_sys_reg(vcpu, ZCR_EL2) : \
+ __vcpu_sys_reg(vcpu, ZCR_EL1))
+
#define vcpu_sve_state_size(vcpu) ({ \
size_t __size_ret; \
unsigned int __vcpu_vq; \
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 5872eaafc7f0..e1e888340739 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -317,7 +317,8 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
__sve_restore_state(vcpu_sve_pffr(vcpu),
&vcpu->arch.ctxt.fp_regs.fpsr);
- write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR);
+
+ write_sysreg_el1(vcpu_sve_zcr_el1(vcpu), SYS_ZCR);
}
/*
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH 05/11] KVM: arm64: nv: Handle ZCR_EL2 traps
2024-05-31 23:13 [PATCH 00/11] KVM: arm64: nv: FPSIMD/SVE support Oliver Upton
` (3 preceding siblings ...)
2024-05-31 23:13 ` [PATCH 04/11] KVM: arm64: nv: Load guest hyp's ZCR into EL1 state Oliver Upton
@ 2024-05-31 23:13 ` Oliver Upton
2024-05-31 23:13 ` [PATCH 06/11] KVM: arm64: nv: Save guest's ZCR_EL2 when in hyp context Oliver Upton
` (6 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Oliver Upton @ 2024-05-31 23:13 UTC (permalink / raw)
To: kvmarm
Cc: Marc Zyngier, James Morse, Suzuki K Poulose, Zenghui Yu, kvm,
Oliver Upton
Unlike other SVE-related registers, ZCR_EL2 takes a sysreg trap to EL2
when HCR_EL2.NV = 1. KVM still needs to honor the guest hypervisor's
trap configuration, which expects an SVE trap (i.e. ESR_EL2.EC = 0x19)
when CPTR traps are enabled for the vCPU's current context.
Otherwise, if the guest hypervisor has traps disabled, emulate the
access by mapping the requested VL into ZCR_EL1.
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
Notes, because I'm too lazy to respin before sending on a Friday:
- I'll want to add a helper for synthesizing the SVE trap, open-coding
it in the sysreg handler is gross.
- The sysreg handler needs to check CPACR_ELx_FPEN in addition to _ZEN,
unlike what I have now.
arch/arm64/include/asm/kvm_host.h | 3 +++
arch/arm64/kvm/sys_regs.c | 40 +++++++++++++++++++++++++++++++
2 files changed, 43 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e01e6de414f1..aeb1c567dfad 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -422,6 +422,7 @@ enum vcpu_sysreg {
MDCR_EL2, /* Monitor Debug Configuration Register (EL2) */
CPTR_EL2, /* Architectural Feature Trap Register (EL2) */
HACR_EL2, /* Hypervisor Auxiliary Control Register */
+ ZCR_EL2, /* SVE Control Register (EL2) */
TTBR0_EL2, /* Translation Table Base Register 0 (EL2) */
TTBR1_EL2, /* Translation Table Base Register 1 (EL2) */
TCR_EL2, /* Translation Control Register (EL2) */
@@ -972,6 +973,7 @@ static inline bool __vcpu_read_sys_reg_from_cpu(int reg, u64 *val)
case DACR32_EL2: *val = read_sysreg_s(SYS_DACR32_EL2); break;
case IFSR32_EL2: *val = read_sysreg_s(SYS_IFSR32_EL2); break;
case DBGVCR32_EL2: *val = read_sysreg_s(SYS_DBGVCR32_EL2); break;
+ case ZCR_EL1: *val = read_sysreg_s(SYS_ZCR_EL12); break;
default: return false;
}
@@ -1017,6 +1019,7 @@ static inline bool __vcpu_write_sys_reg_to_cpu(u64 val, int reg)
case DACR32_EL2: write_sysreg_s(val, SYS_DACR32_EL2); break;
case IFSR32_EL2: write_sysreg_s(val, SYS_IFSR32_EL2); break;
case DBGVCR32_EL2: write_sysreg_s(val, SYS_DBGVCR32_EL2); break;
+ case ZCR_EL1: write_sysreg_s(val, SYS_ZCR_EL12); break;
default: return false;
}
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 22b45a15d068..a662e9d2d917 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -121,6 +121,7 @@ static bool get_el2_to_el1_mapping(unsigned int reg,
MAPPED_EL2_SYSREG(AMAIR_EL2, AMAIR_EL1, NULL );
MAPPED_EL2_SYSREG(ELR_EL2, ELR_EL1, NULL );
MAPPED_EL2_SYSREG(SPSR_EL2, SPSR_EL1, NULL );
+ MAPPED_EL2_SYSREG(ZCR_EL2, ZCR_EL1, NULL );
default:
return false;
}
@@ -2199,6 +2200,42 @@ static u64 reset_hcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
return __vcpu_sys_reg(vcpu, r->reg) = val;
}
+static unsigned int sve_el2_visibility(const struct kvm_vcpu *vcpu,
+ const struct sys_reg_desc *rd)
+{
+ unsigned int r;
+
+ r = el2_visibility(vcpu, rd);
+ if (r)
+ return r;
+
+ return sve_visibility(vcpu, rd);
+}
+
+static bool access_zcr_el2(struct kvm_vcpu *vcpu,
+ struct sys_reg_params *p,
+ const struct sys_reg_desc *r)
+{
+ u64 esr = FIELD_PREP(ESR_ELx_EC_MASK, ESR_ELx_EC_SVE) |
+ ESR_ELx_IL;
+ unsigned int vq;
+
+ if (guest_hyp_sve_traps_enabled(vcpu)) {
+ kvm_inject_nested_sync(vcpu, esr);
+ return true;
+ }
+
+ if (!p->is_write) {
+ p->regval = vcpu_read_sys_reg(vcpu, ZCR_EL2);
+ return true;
+ }
+
+ vq = SYS_FIELD_GET(ZCR_ELx, LEN, p->regval) + 1;
+ vq = min(vq, vcpu_sve_max_vq(vcpu));
+ vcpu_write_sys_reg(vcpu, vq - 1, ZCR_EL2);
+ return true;
+}
+
/*
* Architected system registers.
* Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
@@ -2688,6 +2725,9 @@ static const struct sys_reg_desc sys_reg_descs[] = {
EL2_REG_VNCR(HFGITR_EL2, reset_val, 0),
EL2_REG_VNCR(HACR_EL2, reset_val, 0),
+ { SYS_DESC(SYS_ZCR_EL2), .access = access_zcr_el2, .reset = reset_val,
+ .visibility = sve_el2_visibility, .reg = ZCR_EL2 },
+
EL2_REG_VNCR(HCRX_EL2, reset_val, 0),
EL2_REG(TTBR0_EL2, access_rw, reset_val, 0),
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH 06/11] KVM: arm64: nv: Save guest's ZCR_EL2 when in hyp context
2024-05-31 23:13 [PATCH 00/11] KVM: arm64: nv: FPSIMD/SVE support Oliver Upton
` (4 preceding siblings ...)
2024-05-31 23:13 ` [PATCH 05/11] KVM: arm64: nv: Handle ZCR_EL2 traps Oliver Upton
@ 2024-05-31 23:13 ` Oliver Upton
2024-05-31 23:13 ` [PATCH 07/11] KVM: arm64: nv: Use guest hypervisor's max VL when running nested guest Oliver Upton
` (5 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Oliver Upton @ 2024-05-31 23:13 UTC (permalink / raw)
To: kvmarm
Cc: Marc Zyngier, James Morse, Suzuki K Poulose, Zenghui Yu, kvm,
Oliver Upton
When running a guest hypervisor, ZCR_EL2 is an alias for the counterpart
EL1 state.
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
arch/arm64/kvm/fpsimd.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 1807d3a79a8a..53168bbea8a7 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -173,7 +173,16 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
if (guest_owns_fp_regs()) {
if (vcpu_has_sve(vcpu)) {
- __vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR);
+ u64 zcr = read_sysreg_el1(SYS_ZCR);
+
+ /*
+ * If the vCPU is in the hyp context then ZCR_EL1 is
+ * loaded with its vEL2 counterpart.
+ */
+ if (is_hyp_ctxt(vcpu))
+ __vcpu_sys_reg(vcpu, ZCR_EL2) = zcr;
+ else
+ __vcpu_sys_reg(vcpu, ZCR_EL1) = zcr;
/*
* Restore the VL that was saved when bound to the CPU,
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH 07/11] KVM: arm64: nv: Use guest hypervisor's max VL when running nested guest
2024-05-31 23:13 [PATCH 00/11] KVM: arm64: nv: FPSIMD/SVE support Oliver Upton
` (5 preceding siblings ...)
2024-05-31 23:13 ` [PATCH 06/11] KVM: arm64: nv: Save guest's ZCR_EL2 when in hyp context Oliver Upton
@ 2024-05-31 23:13 ` Oliver Upton
2024-05-31 23:13 ` [PATCH 08/11] KVM: arm64: nv: Ensure correct VL is loaded before saving SVE state Oliver Upton
` (4 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Oliver Upton @ 2024-05-31 23:13 UTC (permalink / raw)
To: kvmarm
Cc: Marc Zyngier, James Morse, Suzuki K Poulose, Zenghui Yu, kvm,
Oliver Upton
The max VL for nested guests is additionally constrained by the max VL
selected by the guest hypervisor. Use that instead of KVM's max VL when
running a nested guest.
Note that the guest hypervisor's ZCR_EL2 is sanitised against the VM's
max VL at the time of access, so there's no additional handling required
at the time of use.
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
arch/arm64/kvm/hyp/include/hyp/switch.h | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index e1e888340739..d806a0c1d556 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -314,10 +314,22 @@ static bool kvm_hyp_handle_mops(struct kvm_vcpu *vcpu, u64 *exit_code)
static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
{
+ /*
+ * The vCPU's saved SVE state layout always matches the max VL of the
+ * vCPU. Start off with the max VL so we can load the SVE state.
+ */
sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
__sve_restore_state(vcpu_sve_pffr(vcpu),
&vcpu->arch.ctxt.fp_regs.fpsr);
+ /*
+ * The effective VL for a VM could differ from the max VL when running a
+ * nested guest, as the guest hypervisor could select a smaller VL. Slap
+ * that into hardware before wrapping up.
+ */
+ if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu))
+ sve_cond_update_zcr_vq(__vcpu_sys_reg(vcpu, ZCR_EL2), SYS_ZCR_EL2);
+
write_sysreg_el1(vcpu_sve_zcr_el1(vcpu), SYS_ZCR);
}
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH 08/11] KVM: arm64: nv: Ensure correct VL is loaded before saving SVE state
2024-05-31 23:13 [PATCH 00/11] KVM: arm64: nv: FPSIMD/SVE support Oliver Upton
` (6 preceding siblings ...)
2024-05-31 23:13 ` [PATCH 07/11] KVM: arm64: nv: Use guest hypervisor's max VL when running nested guest Oliver Upton
@ 2024-05-31 23:13 ` Oliver Upton
2024-05-31 23:13 ` [PATCH 09/11] KVM: arm64: Spin off helper for programming CPTR traps Oliver Upton
` (3 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Oliver Upton @ 2024-05-31 23:13 UTC (permalink / raw)
To: kvmarm
Cc: Marc Zyngier, James Morse, Suzuki K Poulose, Zenghui Yu, kvm,
Oliver Upton
It is possible that the guest hypervisor has selected a smaller VL than
the maximum for its nested guest. As such, ZCR_EL2 may be configured for
a different VL when exiting a nested guest.
Set ZCR_EL2 (via the EL1 alias) to the maximum VL for the VM before
saving SVE state as the SVE save area is dimensioned by the max VL.
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
arch/arm64/kvm/fpsimd.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 53168bbea8a7..bb2ef3166c63 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -193,11 +193,14 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
* Note that this means that at guest exit ZCR_EL1 is
* not necessarily the same as on guest entry.
*
- * Restoring the VL isn't needed in VHE mode since
- * ZCR_EL2 (accessed via ZCR_EL1) would fulfill the same
- * role when doing the save from EL2.
+ * ZCR_EL2 holds the guest hypervisor's VL when running
+ * a nested guest, which could be smaller than the
+ * max for the vCPU. Similar to above, we first need to
+ * switch to a VL consistent with the layout of the
+ * vCPU's SVE state. KVM support for NV implies VHE, so
+ * using the ZCR_EL1 alias is safe.
*/
- if (!has_vhe())
+ if (!has_vhe() || (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)))
sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1,
SYS_ZCR_EL1);
}
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH 09/11] KVM: arm64: Spin off helper for programming CPTR traps
2024-05-31 23:13 [PATCH 00/11] KVM: arm64: nv: FPSIMD/SVE support Oliver Upton
` (7 preceding siblings ...)
2024-05-31 23:13 ` [PATCH 08/11] KVM: arm64: nv: Ensure correct VL is loaded before saving SVE state Oliver Upton
@ 2024-05-31 23:13 ` Oliver Upton
2024-05-31 23:13 ` [PATCH 10/11] KVM: arm64: nv: Honor guest hypervisor's FP/SVE traps in CPTR_EL2 Oliver Upton
` (2 subsequent siblings)
11 siblings, 0 replies; 21+ messages in thread
From: Oliver Upton @ 2024-05-31 23:13 UTC (permalink / raw)
To: kvmarm
Cc: Marc Zyngier, James Morse, Suzuki K Poulose, Zenghui Yu, kvm,
Oliver Upton
A subsequent change to KVM will add preliminary support for merging a
guest hypervisor's CPTR traps with that of KVM. Prepare by spinning off
a new helper for managing CPTR traps.
Avoid reading CPACR_EL1 for the baseline trap config, and start off with
the most restrictive set of traps that is subsequently relaxed.
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
arch/arm64/kvm/hyp/vhe/switch.c | 49 ++++++++++++++++-----------------
1 file changed, 24 insertions(+), 25 deletions(-)
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index d7af5f46f22a..697253673d7b 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -65,6 +65,29 @@ static u64 __compute_hcr(struct kvm_vcpu *vcpu)
return hcr | (__vcpu_sys_reg(vcpu, HCR_EL2) & ~NV_HCR_GUEST_EXCLUDE);
}
+static void __activate_cptr_traps(struct kvm_vcpu *vcpu)
+{
+ /*
+ * With VHE (HCR.E2H == 1), accesses to CPACR_EL1 are routed to
+ * CPTR_EL2. In general, CPACR_EL1 has the same layout as CPTR_EL2,
+ * except for some missing controls, such as TAM.
+ * In this case, CPTR_EL2.TAM has the same position with or without
+ * VHE (HCR.E2H == 1) which allows us to use here the CPTR_EL2.TAM
+ * shift value for trapping the AMU accesses.
+ */
+ u64 val = CPACR_ELx_TTA | CPTR_EL2_TAM;
+
+ if (guest_owns_fp_regs()) {
+ val |= CPACR_ELx_FPEN;
+ if (vcpu_has_sve(vcpu))
+ val |= CPACR_ELx_ZEN;
+ } else {
+ __activate_traps_fpsimd32(vcpu);
+ }
+
+ write_sysreg(val, cpacr_el1);
+}
+
static void __activate_traps(struct kvm_vcpu *vcpu)
{
u64 val;
@@ -91,31 +114,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
}
}
- val = read_sysreg(cpacr_el1);
- val |= CPACR_ELx_TTA;
- val &= ~(CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN |
- CPACR_EL1_SMEN_EL0EN | CPACR_EL1_SMEN_EL1EN);
-
- /*
- * With VHE (HCR.E2H == 1), accesses to CPACR_EL1 are routed to
- * CPTR_EL2. In general, CPACR_EL1 has the same layout as CPTR_EL2,
- * except for some missing controls, such as TAM.
- * In this case, CPTR_EL2.TAM has the same position with or without
- * VHE (HCR.E2H == 1) which allows us to use here the CPTR_EL2.TAM
- * shift value for trapping the AMU accesses.
- */
-
- val |= CPTR_EL2_TAM;
-
- if (guest_owns_fp_regs()) {
- if (vcpu_has_sve(vcpu))
- val |= CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN;
- } else {
- val &= ~(CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN);
- __activate_traps_fpsimd32(vcpu);
- }
-
- write_sysreg(val, cpacr_el1);
+ __activate_cptr_traps(vcpu);
write_sysreg(__this_cpu_read(kvm_hyp_vector), vbar_el1);
}
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH 10/11] KVM: arm64: nv: Honor guest hypervisor's FP/SVE traps in CPTR_EL2
2024-05-31 23:13 [PATCH 00/11] KVM: arm64: nv: FPSIMD/SVE support Oliver Upton
` (8 preceding siblings ...)
2024-05-31 23:13 ` [PATCH 09/11] KVM: arm64: Spin off helper for programming CPTR traps Oliver Upton
@ 2024-05-31 23:13 ` Oliver Upton
2024-06-03 12:36 ` Marc Zyngier
2024-05-31 23:13 ` [PATCH 11/11] KVM: arm64: Allow the use of SVE+NV Oliver Upton
2024-06-01 10:24 ` [PATCH 00/11] KVM: arm64: nv: FPSIMD/SVE support Marc Zyngier
11 siblings, 1 reply; 21+ messages in thread
From: Oliver Upton @ 2024-05-31 23:13 UTC (permalink / raw)
To: kvmarm
Cc: Marc Zyngier, James Morse, Suzuki K Poulose, Zenghui Yu, kvm,
Oliver Upton
Start folding the guest hypervisor's FP/SVE traps into the value
programmed in hardware. Note that as of writing this is dead code, since
KVM does a full put() / load() for every nested exception boundary which
saves + flushes the FP/SVE state.
However, this will become useful when we can keep the guest's FP/SVE
state alive across a nested exception boundary and the host no longer
needs to conservatively program traps.
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
arch/arm64/kvm/hyp/vhe/switch.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 697253673d7b..d07b4f4be5e5 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -85,6 +85,19 @@ static void __activate_cptr_traps(struct kvm_vcpu *vcpu)
__activate_traps_fpsimd32(vcpu);
}
+ /*
+ * Layer the guest hypervisor's trap configuration on top of our own if
+ * we're in a nested context.
+ */
+ if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
+ goto write;
+
+ if (guest_hyp_fpsimd_traps_enabled(vcpu))
+ val &= ~CPACR_ELx_FPEN;
+ if (guest_hyp_sve_traps_enabled(vcpu))
+ val &= ~CPACR_ELx_ZEN;
+
+write:
write_sysreg(val, cpacr_el1);
}
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH 11/11] KVM: arm64: Allow the use of SVE+NV
2024-05-31 23:13 [PATCH 00/11] KVM: arm64: nv: FPSIMD/SVE support Oliver Upton
` (9 preceding siblings ...)
2024-05-31 23:13 ` [PATCH 10/11] KVM: arm64: nv: Honor guest hypervisor's FP/SVE traps in CPTR_EL2 Oliver Upton
@ 2024-05-31 23:13 ` Oliver Upton
2024-06-01 10:24 ` [PATCH 00/11] KVM: arm64: nv: FPSIMD/SVE support Marc Zyngier
11 siblings, 0 replies; 21+ messages in thread
From: Oliver Upton @ 2024-05-31 23:13 UTC (permalink / raw)
To: kvmarm
Cc: Marc Zyngier, James Morse, Suzuki K Poulose, Zenghui Yu, kvm,
Oliver Upton
Allow SVE and NV to mix now that everything is in place to handle it
correctly.
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
arch/arm64/kvm/arm.c | 5 -----
arch/arm64/kvm/nested.c | 3 +--
2 files changed, 1 insertion(+), 7 deletions(-)
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 9996a989b52e..e2c934728f73 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1419,11 +1419,6 @@ static int kvm_vcpu_init_check_features(struct kvm_vcpu *vcpu,
test_bit(KVM_ARM_VCPU_PTRAUTH_GENERIC, &features))
return -EINVAL;
- /* Disallow NV+SVE for the time being */
- if (test_bit(KVM_ARM_VCPU_HAS_EL2, &features) &&
- test_bit(KVM_ARM_VCPU_SVE, &features))
- return -EINVAL;
-
if (!test_bit(KVM_ARM_VCPU_EL1_32BIT, &features))
return 0;
diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index 6813c7c7f00a..0aefc3e1b9a7 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -41,13 +41,12 @@ static u64 limit_nv_id_reg(u32 id, u64 val)
break;
case SYS_ID_AA64PFR0_EL1:
- /* No AMU, MPAM, S-EL2, RAS or SVE */
+ /* No AMU, MPAM, S-EL2, or RAS */
val &= ~(GENMASK_ULL(55, 52) |
NV_FTR(PFR0, AMU) |
NV_FTR(PFR0, MPAM) |
NV_FTR(PFR0, SEL2) |
NV_FTR(PFR0, RAS) |
- NV_FTR(PFR0, SVE) |
NV_FTR(PFR0, EL3) |
NV_FTR(PFR0, EL2) |
NV_FTR(PFR0, EL1));
--
2.45.1.288.g0e0cd299f1-goog
* Re: [PATCH 03/11] KVM: arm64: nv: Load guest FP state for ZCR_EL2 trap
2024-05-31 23:13 ` [PATCH 03/11] KVM: arm64: nv: Load guest FP state for ZCR_EL2 trap Oliver Upton
@ 2024-06-01 9:47 ` Marc Zyngier
2024-06-01 16:47 ` Oliver Upton
0 siblings, 1 reply; 21+ messages in thread
From: Marc Zyngier @ 2024-06-01 9:47 UTC (permalink / raw)
To: Oliver Upton; +Cc: kvmarm, James Morse, Suzuki K Poulose, Zenghui Yu, kvm
On Sat, 01 Jun 2024 00:13:50 +0100,
Oliver Upton <oliver.upton@linux.dev> wrote:
>
> Round out the ZCR_EL2 gymnastics by loading SVE state in the fast path
> when the guest hypervisor tries to access SVE state.
>
> Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> ---
> arch/arm64/kvm/hyp/include/hyp/switch.h | 23 +++++++++++++++++++++++
> 1 file changed, 23 insertions(+)
>
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index 428ee15dd6ae..5872eaafc7f0 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -345,6 +345,10 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
> if (guest_hyp_fpsimd_traps_enabled(vcpu))
> return false;
> break;
> + case ESR_ELx_EC_SYS64:
> + if (WARN_ON_ONCE(!is_hyp_ctxt(vcpu)))
> + return false;
> + fallthrough;
> case ESR_ELx_EC_SVE:
> if (!sve_guest)
> return false;
> @@ -520,6 +524,22 @@ static bool handle_ampere1_tcr(struct kvm_vcpu *vcpu)
> return true;
> }
>
> +static bool kvm_hyp_handle_zcr(struct kvm_vcpu *vcpu, u64 *exit_code)
> +{
> + u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu));
> +
> + if (!vcpu_has_nv(vcpu))
> + return false;
> +
> + if (sysreg != SYS_ZCR_EL2)
> + return false;
> +
> + if (guest_owns_fp_regs())
> + return false;
> +
> + return kvm_hyp_handle_fpsimd(vcpu, exit_code);
For my own understanding of the flow: let's say the L1 guest accesses
ZCR_EL2 while the host owns the FP regs:
- ZCR_EL2 traps
- we restore the guest's state, enable SVE
- ZCR_EL2 traps again
- emulate the access on the slow path
In contrast, the same thing using ZCR_EL1 in L1 results in:
- ZCR_EL1 traps
- we restore the guest's state, enable SVE
and we're done.
Is that correct? If so, a comment would help... ;-)
> +}
> +
> static bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code)
> {
> if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) &&
> @@ -537,6 +557,9 @@ static bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code)
> if (kvm_hyp_handle_cntpct(vcpu))
> return true;
>
> + if (kvm_hyp_handle_zcr(vcpu, exit_code))
> + return true;
> +
> return false;
> }
>
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH 00/11] KVM: arm64: nv: FPSIMD/SVE support
2024-05-31 23:13 [PATCH 00/11] KVM: arm64: nv: FPSIMD/SVE support Oliver Upton
` (10 preceding siblings ...)
2024-05-31 23:13 ` [PATCH 11/11] KVM: arm64: Allow the use of SVE+NV Oliver Upton
@ 2024-06-01 10:24 ` Marc Zyngier
2024-06-01 16:57 ` Oliver Upton
11 siblings, 1 reply; 21+ messages in thread
From: Marc Zyngier @ 2024-06-01 10:24 UTC (permalink / raw)
To: Oliver Upton; +Cc: kvmarm, James Morse, Suzuki K Poulose, Zenghui Yu, kvm
On Sat, 01 Jun 2024 00:13:47 +0100,
Oliver Upton <oliver.upton@linux.dev> wrote:
>
> Hey!
>
> I've decided to start messing around with nested and have SVE support
> working for a nested guest. For the sake of landing a semi-complete
> feature upstream, I've also picked up the FPSIMD patches from the NV
> series Marc is carrying.
>
> The most annoying part about this series (IMO) is that ZCR_EL2 traps
> behave differently from what needs to be virtualized for the guest when
> HCR_EL2.NV = 1, as it takes a sysreg trap (EC = 0x18) instead of an SVE
> trap (EC = 0x19). So, we need to synthesize the ESR value when
> reflecting back into the guest hypervisor.
That's unfortunately not a unique case. The ERETAx emulation already
requires us to synthesise the ESR on PAC check failure, and I'm afraid
ZCR_EL2 might not be the last case.
In general, we'll see this problem for any instruction or sysreg that
can generate multiple exception classes.
>
> Otherwise, some care is required to slap the guest hypervisor's ZCR_EL2
> into the right place depending on whether or not the vCPU is in a hyp
> context, since it affects the hyp's usage of SVE in addition to the VM.
>
> There's more work to be done for honoring the L1's CPTR traps, as this
> series only focuses on getting SVE and FPSIMD traps right. We'll get
> there one day.
I have patches for that in my NV series, which would take the place of
patches 9 and 10 in your series (or supplement them, depending on how
we want to slice this).
>
> I tested this using a mix of the fpsimd-test and sve-test selftests
> running at L0, L1, and L2 concurrently on Neoverse V2.
Thanks a lot for tackling this. It'd be good to put together a series
that has the EL2 sysreg save/restore patches as a prefix of this, plus
the CPTR_EL2 changes. That way, we'd have something that can be merged
as a consistent set.
I'll try to take this into my branch and see what explodes!
Cheers,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH 03/11] KVM: arm64: nv: Load guest FP state for ZCR_EL2 trap
2024-06-01 9:47 ` Marc Zyngier
@ 2024-06-01 16:47 ` Oliver Upton
0 siblings, 0 replies; 21+ messages in thread
From: Oliver Upton @ 2024-06-01 16:47 UTC (permalink / raw)
To: Marc Zyngier; +Cc: kvmarm, James Morse, Suzuki K Poulose, Zenghui Yu, kvm
On Sat, Jun 01, 2024 at 10:47:47AM +0100, Marc Zyngier wrote:
> On Sat, 01 Jun 2024 00:13:50 +0100, Oliver Upton <oliver.upton@linux.dev> wrote:
> > +static bool kvm_hyp_handle_zcr(struct kvm_vcpu *vcpu, u64 *exit_code)
> > +{
> > + u32 sysreg = esr_sys64_to_sysreg(kvm_vcpu_get_esr(vcpu));
> > +
> > + if (!vcpu_has_nv(vcpu))
> > + return false;
> > +
> > + if (sysreg != SYS_ZCR_EL2)
> > + return false;
> > +
> > + if (guest_owns_fp_regs())
> > + return false;
> > +
> > + return kvm_hyp_handle_fpsimd(vcpu, exit_code);
>
> For my own understanding of the flow: let's say the L1 guest accesses
> ZCR_EL2 while the host owns the FP regs:
>
> - ZCR_EL2 traps
> - we restore the guest's state, enable SVE
> - ZCR_EL2 traps again
> - emulate the access on the slow path
>
> In contrast, the same thing using ZCR_EL1 in L1 results in:
>
> - ZCR_EL1 traps
> - we restore the guest's state, enable SVE
>
> and we're done.
>
> Is that correct? If so, a comment would help... ;-)
Yeah, and I agree having a comment for this would be a good idea. Now
that I'm looking at this code again, I had wanted to avoid the second
trap on ZCR_EL2, so I'll probably fold in a change to bounce out to the
slow path after loading SVE state.
--
Thanks,
Oliver
* Re: [PATCH 00/11] KVM: arm64: nv: FPSIMD/SVE support
2024-06-01 10:24 ` [PATCH 00/11] KVM: arm64: nv: FPSIMD/SVE support Marc Zyngier
@ 2024-06-01 16:57 ` Oliver Upton
2024-06-02 14:28 ` Marc Zyngier
0 siblings, 1 reply; 21+ messages in thread
From: Oliver Upton @ 2024-06-01 16:57 UTC (permalink / raw)
To: Marc Zyngier; +Cc: kvmarm, James Morse, Suzuki K Poulose, Zenghui Yu, kvm
On Sat, Jun 01, 2024 at 11:24:49AM +0100, Marc Zyngier wrote:
> On Sat, 01 Jun 2024 00:13:47 +0100,
> Oliver Upton <oliver.upton@linux.dev> wrote:
> >
> > Hey!
> >
> > I've decided to start messing around with nested and have SVE support
> > working for a nested guest. For the sake of landing a semi-complete
> > feature upstream, I've also picked up the FPSIMD patches from the NV
> > series Marc is carrying.
> >
> > The most annoying part about this series (IMO) is that ZCR_EL2 traps
> > behave differently from what needs to be virtualized for the guest when
> > HCR_EL2.NV = 1, as it takes a sysreg trap (EC = 0x18) instead of an SVE
> > trap (EC = 0x19). So, we need to synthesize the ESR value when
> > reflecting back into the guest hypervisor.
>
> That's unfortunately not a unique case. The ERETAx emulation already
> requires us to synthesise the ESR on PAC check failure, and I'm afraid
> ZCR_EL2 might not be the last case.
>
> In general, we'll see this problem for any instruction or sysreg that
> can generate multiple exception classes.
Right, I didn't have a good feel yet for whether or not we could add
some generalized infrastructure for 'remapping' ESR values for the guest
hypervisor. Of course, not needed for this, but cooking up an ISS is
likely to require a bit of manual intervention.
> > Otherwise, some care is required to slap the guest hypervisor's ZCR_EL2
> > into the right place depending on whether or not the vCPU is in a hyp
> > context, since it affects the hyp's usage of SVE in addition to the VM.
> >
> > There's more work to be done for honoring the L1's CPTR traps, as this
> > series only focuses on getting SVE and FPSIMD traps right. We'll get
> > there one day.
>
> I have patches for that in my NV series, which would take the place of
> patches 9 and 10 in your series (or supplement them, depending on how
> we want to slice this).
That'd be great, I just wanted to post something focused on FP/SVE to
start but...
> >
> > I tested this using a mix of the fpsimd-test and sve-test selftests
> > running at L0, L1, and L2 concurrently on Neoverse V2.
>
> Thanks a lot for tackling this. It'd be good to put together a series
> that has the EL2 sysreg save/restore patches as a prefix of this, plus
> the CPTR_EL2 changes. That way, we'd have something that can be merged
> as a consistent set.
I'd be happy to stitch together something like this to round out the
feature. I deliberately left out the handling of vEL2 registers because
of the CPACR_EL1 v. CPTR_EL2 mess, but we may as well sort that out.
Did you want to post your CPTR bits when you have a chance?
--
Thanks,
Oliver
* Re: [PATCH 00/11] KVM: arm64: nv: FPSIMD/SVE support
2024-06-01 16:57 ` Oliver Upton
@ 2024-06-02 14:28 ` Marc Zyngier
0 siblings, 0 replies; 21+ messages in thread
From: Marc Zyngier @ 2024-06-02 14:28 UTC (permalink / raw)
To: Oliver Upton; +Cc: kvmarm, James Morse, Suzuki K Poulose, Zenghui Yu, kvm
On Sat, 01 Jun 2024 17:57:31 +0100,
Oliver Upton <oliver.upton@linux.dev> wrote:
>
> On Sat, Jun 01, 2024 at 11:24:49AM +0100, Marc Zyngier wrote:
> > On Sat, 01 Jun 2024 00:13:47 +0100,
> > Oliver Upton <oliver.upton@linux.dev> wrote:
> > >
> > > Hey!
> > >
> > > I've decided to start messing around with nested and have SVE support
> > > working for a nested guest. For the sake of landing a semi-complete
> > > feature upstream, I've also picked up the FPSIMD patches from the NV
> > > series Marc is carrying.
> > >
> > > The most annoying part about this series (IMO) is that ZCR_EL2 traps
> > > behave differently from what needs to be virtualized for the guest when
> > > HCR_EL2.NV = 1, as it takes a sysreg trap (EC = 0x18) instead of an SVE
> > > trap (EC = 0x19). So, we need to synthesize the ESR value when
> > > reflecting back into the guest hypervisor.
> >
> > That's unfortunately not a unique case. The ERETAx emulation already
> > requires us to synthesise the ESR on PAC check failure, and I'm afraid
> > ZCR_EL2 might not be the last case.
> >
> > In general, we'll see this problem for any instruction or sysreg that
> > can generate multiple exception classes.
>
> Right, I didn't have a good feel yet for whether or not we could add
> some generalized infrastructure for 'remapping' ESR values for the guest
> hypervisor. Of course, not needed for this, but cooking up an ISS is
> likely to require a bit of manual intervention.
So far, it is pretty limited, only takes a couple of lines of code,
and is likely to always be coupled with some more complicated handling
(I don't see this being *only* a quick ESR remapping).
> > > Otherwise, some care is required to slap the guest hypervisor's ZCR_EL2
> > > into the right place depending on whether or not the vCPU is in a hyp
> > > context, since it affects the hyp's usage of SVE in addition to the VM.
> > >
> > > There's more work to be done for honoring the L1's CPTR traps, as this
> > > series only focuses on getting SVE and FPSIMD traps right. We'll get
> > > there one day.
> >
> > I have patches for that in my NV series, which would take the place of
> > patches 9 and 10 in your series (or supplement them, depending on how
> > we want to slice this).
>
> That'd be great, I just wanted to post something focused on FP/SVE to
> start but...
>
> > >
> > > I tested this using a mix of the fpsimd-test and sve-test selftests
> > > running at L0, L1, and L2 concurrently on Neoverse V2.
> >
> > Thanks a lot for tackling this. It'd be good to put together a series
> > that has the EL2 sysreg save/restore patches as a prefix of this, plus
> > the CPTR_EL2 changes. That way, we'd have something that can be merged
> > as a consistent set.
>
> I'd be happy to stitch together something like this to round out the
> feature. I deliberately left out the handling of vEL2 registers because
> of the CPACR_EL1 v. CPTR_EL2 mess, but we may as well sort that out.
>
> Did you want to post your CPTR bits when you have a chance?
Yup, I'll rework that on top of your series and we'll take it from
there.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH 10/11] KVM: arm64: nv: Honor guest hypervisor's FP/SVE traps in CPTR_EL2
2024-05-31 23:13 ` [PATCH 10/11] KVM: arm64: nv: Honor guest hypervisor's FP/SVE traps in CPTR_EL2 Oliver Upton
@ 2024-06-03 12:36 ` Marc Zyngier
2024-06-03 17:28 ` Oliver Upton
0 siblings, 1 reply; 21+ messages in thread
From: Marc Zyngier @ 2024-06-03 12:36 UTC (permalink / raw)
To: Oliver Upton; +Cc: kvmarm, James Morse, Suzuki K Poulose, Zenghui Yu, kvm
On Sat, 01 Jun 2024 00:13:57 +0100,
Oliver Upton <oliver.upton@linux.dev> wrote:
>
> Start folding the guest hypervisor's FP/SVE traps into the value
> programmed in hardware. Note that as of writing this is dead code, since
> KVM does a full put() / load() for every nested exception boundary which
> saves + flushes the FP/SVE state.
>
> However, this will become useful when we can keep the guest's FP/SVE
> state alive across a nested exception boundary and the host no longer
> needs to conservatively program traps.
>
> Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> ---
> arch/arm64/kvm/hyp/vhe/switch.c | 13 +++++++++++++
> 1 file changed, 13 insertions(+)
>
> diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
> index 697253673d7b..d07b4f4be5e5 100644
> --- a/arch/arm64/kvm/hyp/vhe/switch.c
> +++ b/arch/arm64/kvm/hyp/vhe/switch.c
> @@ -85,6 +85,19 @@ static void __activate_cptr_traps(struct kvm_vcpu *vcpu)
> __activate_traps_fpsimd32(vcpu);
> }
>
> + /*
> + * Layer the guest hypervisor's trap configuration on top of our own if
> + * we're in a nested context.
> + */
> + if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
> + goto write;
> +
> + if (guest_hyp_fpsimd_traps_enabled(vcpu))
> + val &= ~CPACR_ELx_FPEN;
> + if (guest_hyp_sve_traps_enabled(vcpu))
> + val &= ~CPACR_ELx_ZEN;
I'm afraid this isn't quite right. You are clearing both FPEN (resp
ZEN) bits based on any of the two bits being clear, while what we want
is to actually propagate the 0 bits (and only those).
What I have in my tree is something along the lines of:
cptr = vcpu_sanitised_cptr_el2(vcpu);
tmp = cptr & (CPACR_ELx_ZEN_MASK | CPACR_ELx_FPEN_MASK);
val &= ~(tmp ^ (CPACR_ELx_ZEN_MASK | CPACR_ELx_FPEN_MASK));
which makes sure that we only clear the relevant bits.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH 10/11] KVM: arm64: nv: Honor guest hypervisor's FP/SVE traps in CPTR_EL2
2024-06-03 12:36 ` Marc Zyngier
@ 2024-06-03 17:28 ` Oliver Upton
2024-06-04 11:14 ` Marc Zyngier
0 siblings, 1 reply; 21+ messages in thread
From: Oliver Upton @ 2024-06-03 17:28 UTC (permalink / raw)
To: Marc Zyngier; +Cc: kvmarm, James Morse, Suzuki K Poulose, Zenghui Yu, kvm
Hey,
On Mon, Jun 03, 2024 at 01:36:54PM +0100, Marc Zyngier wrote:
[...]
> > + /*
> > + * Layer the guest hypervisor's trap configuration on top of our own if
> > + * we're in a nested context.
> > + */
> > + if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
> > + goto write;
> > +
> > + if (guest_hyp_fpsimd_traps_enabled(vcpu))
> > + val &= ~CPACR_ELx_FPEN;
> > + if (guest_hyp_sve_traps_enabled(vcpu))
> > + val &= ~CPACR_ELx_ZEN;
>
> I'm afraid this isn't quite right. You are clearing both FPEN (resp
> ZEN) bits based on any of the two bits being clear, while what we want
> is to actually propagate the 0 bits (and only those).
An earlier version of the series I had was effectively doing this,
applying the L0 trap configuration on top of L1's CPTR_EL2. Unless I'm
missing something terribly obvious, I think this is still correct, as:
- If we're in a hyp context, vEL2's CPTR_EL2 is loaded into CPACR_EL1.
The independent EL0/EL1 enable bits are handled by hardware. All this
junk gets skipped and we go directly to writing CPTR_EL2.
- If we are not in a hyp context, vEL2's CPTR_EL2 gets folded into the
hardware value for CPTR_EL2. TGE must be 0 in this case, so there is
no conditional trap based on what EL the vCPU is in. There's only two
functional trap states at this point, hence the all-or-nothing
approach.
> What I have in my tree is something along the lines of:
>
> cptr = vcpu_sanitised_cptr_el2(vcpu);
> tmp = cptr & (CPACR_ELx_ZEN_MASK | CPACR_ELx_FPEN_MASK);
> val &= ~(tmp ^ (CPACR_ELx_ZEN_MASK | CPACR_ELx_FPEN_MASK));
My hesitation with this is it gives the impression that both trap bits
are significant, but in reality only the LSB is useful. Unless my
understanding is disastrously wrong, of course :)
Anyway, my _slight_ preference is towards keeping what I have if
possible, with a giant comment explaining the reasoning behind it. But I
can take your approach instead too.
--
Thanks,
Oliver
* Re: [PATCH 10/11] KVM: arm64: nv: Honor guest hypervisor's FP/SVE traps in CPTR_EL2
2024-06-03 17:28 ` Oliver Upton
@ 2024-06-04 11:14 ` Marc Zyngier
2024-06-04 17:44 ` Oliver Upton
0 siblings, 1 reply; 21+ messages in thread
From: Marc Zyngier @ 2024-06-04 11:14 UTC (permalink / raw)
To: Oliver Upton; +Cc: kvmarm, James Morse, Suzuki K Poulose, Zenghui Yu, kvm
On Mon, 03 Jun 2024 18:28:56 +0100,
Oliver Upton <oliver.upton@linux.dev> wrote:
>
> Hey,
>
> On Mon, Jun 03, 2024 at 01:36:54PM +0100, Marc Zyngier wrote:
>
> [...]
>
> > > + /*
> > > + * Layer the guest hypervisor's trap configuration on top of our own if
> > > + * we're in a nested context.
> > > + */
> > > + if (!vcpu_has_nv(vcpu) || is_hyp_ctxt(vcpu))
> > > + goto write;
> > > +
> > > + if (guest_hyp_fpsimd_traps_enabled(vcpu))
> > > + val &= ~CPACR_ELx_FPEN;
> > > + if (guest_hyp_sve_traps_enabled(vcpu))
> > > + val &= ~CPACR_ELx_ZEN;
> >
> > I'm afraid this isn't quite right. You are clearing both FPEN (resp
> > ZEN) bits based on any of the two bits being clear, while what we want
> > is to actually propagate the 0 bits (and only those).
>
> An earlier version of the series I had was effectively doing this,
> applying the L0 trap configuration on top of L1's CPTR_EL2. Unless I'm
> missing something terribly obvious, I think this is still correct, as:
>
> - If we're in a hyp context, vEL2's CPTR_EL2 is loaded into CPACR_EL1.
> The independent EL0/EL1 enable bits are handled by hardware. All this
> junk gets skipped and we go directly to writing CPTR_EL2.
Yup.
>
> - If we are not in a hyp context, vEL2's CPTR_EL2 gets folded into the
> hardware value for CPTR_EL2. TGE must be 0 in this case, so there is
> no conditional trap based on what EL the vCPU is in. There's only two
> functional trap states at this point, hence the all-or-nothing
> approach.
Ah, I see it now. Only bit[0] of each 2-bit field matters in that
case. This thing is giving me a headache.
>
> > What I have in my tree is something along the lines of:
> >
> > cptr = vcpu_sanitised_cptr_el2(vcpu);
> > tmp = cptr & (CPACR_ELx_ZEN_MASK | CPACR_ELx_FPEN_MASK);
> > val &= ~(tmp ^ (CPACR_ELx_ZEN_MASK | CPACR_ELx_FPEN_MASK));
>
> My hesitation with this is it gives the impression that both trap bits
> are significant, but in reality only the LSB is useful. Unless my
> understanding is disastrously wrong, of course :)
No, you are absolutely right. Although you *are* clearing both bits
anyway ;-).
>
> Anyway, my _slight_ preference is towards keeping what I have if
> possible, with a giant comment explaining the reasoning behind it. But I
> can take your approach instead too.
I think the only arguments for my own solution are:
- slightly better codegen (no function call or inlining), and a
smaller .text section in switch.o, because the helpers are not
cheap:
LLVM:
0 .text 00003ef8 (guest_hyp_*_traps_enabled)
0 .text 00003d48 (bit ops)
GCC:
0 .text 00002624 (guest_hyp_*_traps_enabled)
0 .text 000024b4 (bit ops)
Yes, LLVM is an absolute pig because of BTI...
- tracking the guest's bits more precisely may make it easier to debug
but these are pretty weak arguments, and I don't really care either
way at this precise moment.
Thanks,
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH 10/11] KVM: arm64: nv: Honor guest hypervisor's FP/SVE traps in CPTR_EL2
2024-06-04 11:14 ` Marc Zyngier
@ 2024-06-04 17:44 ` Oliver Upton
0 siblings, 0 replies; 21+ messages in thread
From: Oliver Upton @ 2024-06-04 17:44 UTC (permalink / raw)
To: Marc Zyngier; +Cc: kvmarm, James Morse, Suzuki K Poulose, Zenghui Yu, kvm
On Tue, Jun 04, 2024 at 12:14:42PM +0100, Marc Zyngier wrote:
> On Mon, 03 Jun 2024 18:28:56 +0100, Oliver Upton <oliver.upton@linux.dev> wrote:
> > Anyway, my _slight_ preference is towards keeping what I have if
> > possible, with a giant comment explaining the reasoning behind it. But I
> > can take your approach instead too.
>
> I think the only arguments for my own solution are:
>
> - slightly better codegen (no function call or inlining), and a
> smaller .text section in switch.o, because the helpers are not
> cheap:
>
> LLVM:
>
> 0 .text 00003ef8 (guest_hyp_*_traps_enabled)
> 0 .text 00003d48 (bit ops)
>
> GCC:
> 0 .text 00002624 (guest_hyp_*_traps_enabled)
> 0 .text 000024b4 (bit ops)
>
Oh, that's spectacular :-)
> Yes, LLVM is an absolute pig because of BTI...
>
> - tracking the guest's bits more precisely may make it easier to debug
>
> but these are pretty weak arguments, and I don't really care either
> way at this precise moment.
Yeah, so I think the right direction here is to combine our approaches,
and do direct bit manipulation, but only on bit[0]. That way we still
have an opportunity to document the very intentional simplification of
trap state too.
--
Thanks,
Oliver
end of thread, other threads: [~2024-06-04 17:44 UTC | newest]
Thread overview: 21+ messages
2024-05-31 23:13 [PATCH 00/11] KVM: arm64: nv: FPSIMD/SVE support Oliver Upton
2024-05-31 23:13 ` [PATCH 01/11] KVM: arm64: nv: Forward FP/ASIMD traps to guest hypervisor Oliver Upton
2024-05-31 23:13 ` [PATCH 02/11] KVM: arm64: nv: Forward SVE " Oliver Upton
2024-05-31 23:13 ` [PATCH 03/11] KVM: arm64: nv: Load guest FP state for ZCR_EL2 trap Oliver Upton
2024-06-01 9:47 ` Marc Zyngier
2024-06-01 16:47 ` Oliver Upton
2024-05-31 23:13 ` [PATCH 04/11] KVM: arm64: nv: Load guest hyp's ZCR into EL1 state Oliver Upton
2024-05-31 23:13 ` [PATCH 05/11] KVM: arm64: nv: Handle ZCR_EL2 traps Oliver Upton
2024-05-31 23:13 ` [PATCH 06/11] KVM: arm64: nv: Save guest's ZCR_EL2 when in hyp context Oliver Upton
2024-05-31 23:13 ` [PATCH 07/11] KVM: arm64: nv: Use guest hypervisor's max VL when running nested guest Oliver Upton
2024-05-31 23:13 ` [PATCH 08/11] KVM: arm64: nv: Ensure correct VL is loaded before saving SVE state Oliver Upton
2024-05-31 23:13 ` [PATCH 09/11] KVM: arm64: Spin off helper for programming CPTR traps Oliver Upton
2024-05-31 23:13 ` [PATCH 10/11] KVM: arm64: nv: Honor guest hypervisor's FP/SVE traps in CPTR_EL2 Oliver Upton
2024-06-03 12:36 ` Marc Zyngier
2024-06-03 17:28 ` Oliver Upton
2024-06-04 11:14 ` Marc Zyngier
2024-06-04 17:44 ` Oliver Upton
2024-05-31 23:13 ` [PATCH 11/11] KVM: arm64: Allow the use of SVE+NV Oliver Upton
2024-06-01 10:24 ` [PATCH 00/11] KVM: arm64: nv: FPSIMD/SVE support Marc Zyngier
2024-06-01 16:57 ` Oliver Upton
2024-06-02 14:28 ` Marc Zyngier