* [PATCH v3 00/11] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode
From: Fuad Tabba @ 2024-05-28 12:59 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
catalin.marinas, philmd, james.morse, suzuki.poulose,
oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
Changes since v2 [1]
- Rebased on Linux 6.10-rc1 (1613e604df0c)
- Apply fixes and suggestions made for v2 (Marc)
- Add an isb() to __hyp_sve_restore_guest()
- Squash the patch that introduces kvm_host_sve_max_vl into the
following patch, since that is where it is used
- Some refactoring and tidying up
- Introduce and use sve_cond_update_zcr_vq_isb(), which only does
an isb() if ZCR is updated (RFC, next to last patch)
- Remove sve_cond_update_zcr_vq_*, since it's not likely to help
much (RFC, last patch)
With the KVM host data rework [2], handling of fpsimd and sve
state in protected mode is done at hyp. For protected VMs, we
don't want to leak any guest state to the host, including whether
a guest has used fpsimd/sve.
To complete the work started with the host data rework with
regard to protected mode, ensure that the host's fpsimd context
and its sve context are restored on guest exit, since the rework
has hidden the fpsimd/sve state from the host.
This patch series eagerly restores the host fpsimd/sve state on
guest exit when running in protected mode, and only if the guest
has actually used fpsimd/sve. The saving of the host state thus
remains lazy, similar to the behavior of KVM in other modes, but
the restoration of the host state is eager.
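As a rough sketch of the resulting flow at hyp (simplified and
illustrative only; the function names follow the patches below):

	/* Guest fpsimd/sve trap: save the host state lazily. */
	if (host_owns_fp_regs())
		kvm_hyp_save_fpsimd_host(vcpu);	/* pKVM: fpsimd + sve */

	/*
	 * Sync back to the host after running a protected guest:
	 * eagerly restore the host state if the guest took ownership.
	 */
	if (guest_owns_fp_regs())
		fpsimd_sve_sync(&hyp_vcpu->vcpu);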
The last two patches are not essential to this patch series, and
the last one undoes the next-to-last. Please consider only one
(or neither) of these two patches for inclusion.
This series is based on Linux 6.10-rc1 (1613e604df0c).
Tested on qemu, with the kernel sve stress tests.
Cheers,
/fuad
[1] https://lore.kernel.org/all/20240521163720.3812851-1-tabba@google.com/
[2] https://lore.kernel.org/all/20240322170945.3292593-1-maz@kernel.org/
Fuad Tabba (11):
KVM: arm64: Reintroduce __sve_save_state
KVM: arm64: Abstract set/clear of CPTR_EL2 bits behind helper
KVM: arm64: Specialize handling of host fpsimd state on trap
KVM: arm64: Allocate memory mapped at hyp for host sve state in pKVM
KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM
KVM: arm64: Consolidate initializing the host data's fpsimd_state/sve
in pKVM
KVM: arm64: Refactor CPACR trap bit setting/clearing to use ELx format
KVM: arm64: Add an isb before restoring guest sve state
KVM: arm64: Do not use sve_cond_update_zcr updating with
ZCR_ELx_LEN_MASK
KVM: arm64: Do not perform an isb() if ZCR_EL2 isn't updated
KVM: arm64: Drop sve_cond_update_zcr_vq_*
arch/arm64/include/asm/el2_setup.h | 6 +-
arch/arm64/include/asm/fpsimd.h | 11 ----
arch/arm64/include/asm/kvm_arm.h | 6 ++
arch/arm64/include/asm/kvm_emulate.h | 71 +++++++++++++++++++++--
arch/arm64/include/asm/kvm_host.h | 25 +++++++-
arch/arm64/include/asm/kvm_hyp.h | 2 +
arch/arm64/include/asm/kvm_pkvm.h | 9 +++
arch/arm64/kvm/arm.c | 76 +++++++++++++++++++++++++
arch/arm64/kvm/fpsimd.c | 8 +--
arch/arm64/kvm/hyp/fpsimd.S | 6 ++
arch/arm64/kvm/hyp/include/hyp/switch.h | 36 ++++++------
arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 1 -
arch/arm64/kvm/hyp/nvhe/hyp-main.c | 75 +++++++++++++++++++++---
arch/arm64/kvm/hyp/nvhe/pkvm.c | 17 ++----
arch/arm64/kvm/hyp/nvhe/setup.c | 25 +++++++-
arch/arm64/kvm/hyp/nvhe/switch.c | 24 +++++++-
arch/arm64/kvm/hyp/vhe/switch.c | 12 ++--
arch/arm64/kvm/reset.c | 3 +
18 files changed, 342 insertions(+), 71 deletions(-)
base-commit: 1613e604df0cd359cf2a7fbd9be7a0bcfacfabd0
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH v3 01/11] KVM: arm64: Reintroduce __sve_save_state
From: Fuad Tabba @ 2024-05-28 12:59 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
catalin.marinas, philmd, james.morse, suzuki.poulose,
oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
Now that the hypervisor is handling the host sve state in
protected mode, it needs to be able to save it.
This reverts commit e66425fc9ba3 ("KVM: arm64: Remove unused
__sve_save_state").
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/include/asm/kvm_hyp.h | 1 +
arch/arm64/kvm/hyp/fpsimd.S | 6 ++++++
2 files changed, 7 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 3e80464f8953..2ab23589339a 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -111,6 +111,7 @@ void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu);
void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
+void __sve_save_state(void *sve_pffr, u32 *fpsr);
void __sve_restore_state(void *sve_pffr, u32 *fpsr);
u64 __guest_enter(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/hyp/fpsimd.S b/arch/arm64/kvm/hyp/fpsimd.S
index 61e6f3ba7b7d..e950875e31ce 100644
--- a/arch/arm64/kvm/hyp/fpsimd.S
+++ b/arch/arm64/kvm/hyp/fpsimd.S
@@ -25,3 +25,9 @@ SYM_FUNC_START(__sve_restore_state)
sve_load 0, x1, x2, 3
ret
SYM_FUNC_END(__sve_restore_state)
+
+SYM_FUNC_START(__sve_save_state)
+ mov x2, #1
+ sve_save 0, x1, x2, 3
+ ret
+SYM_FUNC_END(__sve_save_state)
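For illustration (mirroring the caller added later in this series), the
first argument points at the FFR offset within the register buffer, and
the second at fpsr, with fpcr expected to immediately follow it; the FFR
is always saved, since the flag is hardcoded to 1 above:

	__sve_save_state(sve_state->sve_regs +
			 sve_ffr_offset(kvm_host_sve_max_vl),
			 &sve_state->fpsr);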
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH v3 02/11] KVM: arm64: Abstract set/clear of CPTR_EL2 bits behind helper
From: Fuad Tabba @ 2024-05-28 12:59 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
catalin.marinas, philmd, james.morse, suzuki.poulose,
oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
The same traps controlled by CPTR_EL2 or CPACR_EL1 need to be
toggled in different parts of the code, but the exact bits and
their polarity differ between these two formats and the mode
(vhe/nvhe/hvhe).
To reduce the amount of duplicated code and the chance of getting
the wrong bit/polarity or missing a field, abstract the set/clear
of CPTR_EL2 bits behind a helper.
Since (h)VHE is the way of the future, use the CPACR_EL1 format,
which is a subset of the VHE CPTR_EL2, as a reference.
No functional change intended.
Suggested-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/include/asm/kvm_arm.h | 6 +++
arch/arm64/include/asm/kvm_emulate.h | 62 +++++++++++++++++++++++++
arch/arm64/kvm/hyp/include/hyp/switch.h | 18 ++-----
arch/arm64/kvm/hyp/nvhe/hyp-main.c | 6 +--
4 files changed, 73 insertions(+), 19 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index e01bb5ca13b7..b2adc2c6c82a 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -305,6 +305,12 @@
GENMASK(19, 14) | \
BIT(11))
+#define CPTR_VHE_EL2_RES0 (GENMASK(63, 32) | \
+ GENMASK(27, 26) | \
+ GENMASK(23, 22) | \
+ GENMASK(19, 18) | \
+ GENMASK(15, 0))
+
/* Hyp Debug Configuration Register bits */
#define MDCR_EL2_E2TB_MASK (UL(0x3))
#define MDCR_EL2_E2TB_SHIFT (UL(24))
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 501e3e019c93..2d7a0bdf9d03 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -557,6 +557,68 @@ static __always_inline void kvm_incr_pc(struct kvm_vcpu *vcpu)
vcpu_set_flag((v), e); \
} while (0)
+#define __build_check_all_or_none(r, bits) \
+ BUILD_BUG_ON(((r) & (bits)) && ((r) & (bits)) != (bits))
+
+#define __cpacr_to_cptr_clr(clr, set) \
+ ({ \
+ u64 cptr = 0; \
+ \
+ if ((set) & CPACR_ELx_FPEN) \
+ cptr |= CPTR_EL2_TFP; \
+ if ((set) & CPACR_ELx_ZEN) \
+ cptr |= CPTR_EL2_TZ; \
+ if ((set) & CPACR_ELx_SMEN) \
+ cptr |= CPTR_EL2_TSM; \
+ if ((clr) & CPACR_ELx_TTA) \
+ cptr |= CPTR_EL2_TTA; \
+ if ((clr) & CPTR_EL2_TAM) \
+ cptr |= CPTR_EL2_TAM; \
+ if ((clr) & CPTR_EL2_TCPAC) \
+ cptr |= CPTR_EL2_TCPAC; \
+ \
+ cptr; \
+ })
+
+#define __cpacr_to_cptr_set(clr, set) \
+ ({ \
+ u64 cptr = 0; \
+ \
+ if ((clr) & CPACR_ELx_FPEN) \
+ cptr |= CPTR_EL2_TFP; \
+ if ((clr) & CPACR_ELx_ZEN) \
+ cptr |= CPTR_EL2_TZ; \
+ if ((clr) & CPACR_ELx_SMEN) \
+ cptr |= CPTR_EL2_TSM; \
+ if ((set) & CPACR_ELx_TTA) \
+ cptr |= CPTR_EL2_TTA; \
+ if ((set) & CPTR_EL2_TAM) \
+ cptr |= CPTR_EL2_TAM; \
+ if ((set) & CPTR_EL2_TCPAC) \
+ cptr |= CPTR_EL2_TCPAC; \
+ \
+ cptr; \
+ })
+
+#define cpacr_clear_set(clr, set) \
+ do { \
+ BUILD_BUG_ON((set) & CPTR_VHE_EL2_RES0); \
+ BUILD_BUG_ON((clr) & CPACR_ELx_E0POE); \
+ __build_check_all_or_none((clr), CPACR_ELx_FPEN); \
+ __build_check_all_or_none((set), CPACR_ELx_FPEN); \
+ __build_check_all_or_none((clr), CPACR_ELx_ZEN); \
+ __build_check_all_or_none((set), CPACR_ELx_ZEN); \
+ __build_check_all_or_none((clr), CPACR_ELx_SMEN); \
+ __build_check_all_or_none((set), CPACR_ELx_SMEN); \
+ \
+ if (has_vhe() || has_hvhe()) \
+ sysreg_clear_set(cpacr_el1, clr, set); \
+ else \
+ sysreg_clear_set(cptr_el2, \
+ __cpacr_to_cptr_clr(clr, set), \
+ __cpacr_to_cptr_set(clr, set));\
+ } while (0)
+
static __always_inline void kvm_write_cptr_el2(u64 val)
{
if (has_vhe() || has_hvhe())
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index a92566f36022..2cfbfedadea6 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -330,7 +330,6 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
{
bool sve_guest;
u8 esr_ec;
- u64 reg;
if (!system_supports_fpsimd())
return false;
@@ -353,19 +352,10 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
/* Valid trap. Switch the context: */
/* First disable enough traps to allow us to update the registers */
- if (has_vhe() || has_hvhe()) {
- reg = CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN;
- if (sve_guest)
- reg |= CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN;
-
- sysreg_clear_set(cpacr_el1, 0, reg);
- } else {
- reg = CPTR_EL2_TFP;
- if (sve_guest)
- reg |= CPTR_EL2_TZ;
-
- sysreg_clear_set(cptr_el2, reg, 0);
- }
+ if (sve_guest)
+ cpacr_clear_set(0, CPACR_ELx_FPEN | CPACR_ELx_ZEN);
+ else
+ cpacr_clear_set(0, CPACR_ELx_FPEN);
isb();
/* Write out the host state if it's in the registers */
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index d5c48dc98f67..f71394d0e32a 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -405,11 +405,7 @@ void handle_trap(struct kvm_cpu_context *host_ctxt)
handle_host_smc(host_ctxt);
break;
case ESR_ELx_EC_SVE:
- if (has_hvhe())
- sysreg_clear_set(cpacr_el1, 0, (CPACR_EL1_ZEN_EL1EN |
- CPACR_EL1_ZEN_EL0EN));
- else
- sysreg_clear_set(cptr_el2, CPTR_EL2_TZ, 0);
+ cpacr_clear_set(0, CPACR_ELx_ZEN);
isb();
sve_cond_update_zcr_vq(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
break;
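As an illustrative expansion (not part of the patch), trapping guest
FP/SIMD accesses now reads the same at every call site, with the helper
translating the format and the inverted polarity:

	cpacr_clear_set(CPACR_ELx_FPEN, 0);
	/* VHE/hVHE: cpacr_el1 &= ~CPACR_ELx_FPEN;  (clear the enables) */
	/* nVHE:     cptr_el2  |=  CPTR_EL2_TFP;    (set the trap bit)  */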
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH v3 03/11] KVM: arm64: Specialize handling of host fpsimd state on trap
From: Fuad Tabba @ 2024-05-28 12:59 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
catalin.marinas, philmd, james.morse, suzuki.poulose,
oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
In subsequent patches, the nVHE and VHE paths will diverge in how
they save the host fpsimd/sve state when taking a guest fpsimd/sve
trap. Add a specialized helper to handle it.
No functional change intended.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/kvm/hyp/include/hyp/switch.h | 4 +++-
arch/arm64/kvm/hyp/nvhe/switch.c | 5 +++++
arch/arm64/kvm/hyp/vhe/switch.c | 5 +++++
3 files changed, 13 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 2cfbfedadea6..d3a3f1cee668 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -320,6 +320,8 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR);
}
+static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu);
+
/*
* We trap the first access to the FP/SIMD to save the host context and
* restore the guest context lazily.
@@ -360,7 +362,7 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
/* Write out the host state if it's in the registers */
if (host_owns_fp_regs())
- __fpsimd_save_state(*host_data_ptr(fpsimd_state));
+ kvm_hyp_save_fpsimd_host(vcpu);
/* Restore the guest state */
if (sve_guest)
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 6758cd905570..019f863922fa 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -182,6 +182,11 @@ static bool kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu, u64 *exit_code)
kvm_handle_pvm_sysreg(vcpu, exit_code));
}
+static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
+{
+ __fpsimd_save_state(*host_data_ptr(fpsimd_state));
+}
+
static const exit_handler_fn hyp_exit_handlers[] = {
[0 ... ESR_ELx_EC_MAX] = NULL,
[ESR_ELx_EC_CP15_32] = kvm_hyp_handle_cp15_32,
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index d7af5f46f22a..20073579e9f5 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -262,6 +262,11 @@ static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code)
return true;
}
+static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
+{
+ __fpsimd_save_state(*host_data_ptr(fpsimd_state));
+}
+
static const exit_handler_fn hyp_exit_handlers[] = {
[0 ... ESR_ELx_EC_MAX] = NULL,
[ESR_ELx_EC_CP15_32] = kvm_hyp_handle_cp15_32,
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH v3 04/11] KVM: arm64: Allocate memory mapped at hyp for host sve state in pKVM
From: Fuad Tabba @ 2024-05-28 12:59 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
catalin.marinas, philmd, james.morse, suzuki.poulose,
oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
Protected mode needs to maintain (save/restore) the host's sve
state, rather than relying on the host kernel to do that. This is
to avoid leaking information to the host about guests and the
type of operations they are performing.
As a first step towards that, allocate memory mapped at hyp, per
cpu, for the host sve state. The following patch will use this
memory to save/restore the host state.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
Note that the last patch in this series will consolidate the
setup of the host's fpsimd and sve states, which currently take
place in two different locations. Moreover, that last patch will
also place the host fpsimd and sve_state pointers in a union.
---
arch/arm64/include/asm/kvm_host.h | 17 ++++++++
arch/arm64/include/asm/kvm_hyp.h | 1 +
arch/arm64/include/asm/kvm_pkvm.h | 9 ++++
arch/arm64/kvm/arm.c | 68 +++++++++++++++++++++++++++++++
arch/arm64/kvm/hyp/nvhe/pkvm.c | 2 +
arch/arm64/kvm/hyp/nvhe/setup.c | 24 +++++++++++
arch/arm64/kvm/reset.c | 3 ++
7 files changed, 124 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 8170c04fde91..90df7ccec5f4 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -76,6 +76,7 @@ static inline enum kvm_mode kvm_get_mode(void) { return KVM_MODE_NONE; };
DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
extern unsigned int __ro_after_init kvm_sve_max_vl;
+extern unsigned int __ro_after_init kvm_host_sve_max_vl;
int __init kvm_arm_init_sve(void);
u32 __attribute_const__ kvm_target_cpu(void);
@@ -521,6 +522,20 @@ struct kvm_cpu_context {
u64 *vncr_array;
};
+struct cpu_sve_state {
+ __u64 zcr_el1;
+
+ /*
+ * Ordering is important since __sve_save_state/__sve_restore_state
+ * relies on it.
+ */
+ __u32 fpsr;
+ __u32 fpcr;
+
+ /* Must be SVE_VQ_BYTES (128 bit) aligned. */
+ __u8 sve_regs[];
+};
+
/*
* This structure is instantiated on a per-CPU basis, and contains
* data that is:
@@ -534,7 +549,9 @@ struct kvm_cpu_context {
*/
struct kvm_host_data {
struct kvm_cpu_context host_ctxt;
+
struct user_fpsimd_state *fpsimd_state; /* hyp VA */
+ struct cpu_sve_state *sve_state; /* hyp VA */
/* Ownership of the FP regs */
enum {
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 2ab23589339a..d313adf53bef 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -143,5 +143,6 @@ extern u64 kvm_nvhe_sym(id_aa64smfr0_el1_sys_val);
extern unsigned long kvm_nvhe_sym(__icache_flags);
extern unsigned int kvm_nvhe_sym(kvm_arm_vmid_bits);
+extern unsigned int kvm_nvhe_sym(kvm_host_sve_max_vl);
#endif /* __ARM64_KVM_HYP_H__ */
diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index ad9cfb5c1ff4..cd56acd9a842 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -128,4 +128,13 @@ static inline unsigned long hyp_ffa_proxy_pages(void)
return (2 * KVM_FFA_MBOX_NR_PAGES) + DIV_ROUND_UP(desc_max, PAGE_SIZE);
}
+static inline size_t pkvm_host_sve_state_size(void)
+{
+ if (!system_supports_sve())
+ return 0;
+
+ return size_add(sizeof(struct cpu_sve_state),
+ SVE_SIG_REGS_SIZE(sve_vq_from_vl(kvm_host_sve_max_vl)));
+}
+
#endif /* __ARM64_KVM_PKVM_H__ */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 9996a989b52e..1acf7415e831 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1931,6 +1931,11 @@ static unsigned long nvhe_percpu_order(void)
return size ? get_order(size) : 0;
}
+static size_t pkvm_host_sve_state_order(void)
+{
+ return get_order(pkvm_host_sve_state_size());
+}
+
/* A lookup table holding the hypervisor VA for each vector slot */
static void *hyp_spectre_vector_selector[BP_HARDEN_EL2_SLOTS];
@@ -2310,12 +2315,20 @@ static void __init teardown_subsystems(void)
static void __init teardown_hyp_mode(void)
{
+ bool free_sve = system_supports_sve() && is_protected_kvm_enabled();
int cpu;
free_hyp_pgds();
for_each_possible_cpu(cpu) {
free_page(per_cpu(kvm_arm_hyp_stack_page, cpu));
free_pages(kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu], nvhe_percpu_order());
+
+ if (free_sve) {
+ struct cpu_sve_state *sve_state;
+
+ sve_state = per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state;
+ free_pages((unsigned long) sve_state, pkvm_host_sve_state_order());
+ }
}
}
@@ -2398,6 +2411,50 @@ static int __init kvm_hyp_init_protection(u32 hyp_va_bits)
return 0;
}
+static int init_pkvm_host_sve_state(void)
+{
+ int cpu;
+
+ if (!system_supports_sve())
+ return 0;
+
+ /* Allocate pages for host sve state in protected mode. */
+ for_each_possible_cpu(cpu) {
+ struct page *page = alloc_pages(GFP_KERNEL, pkvm_host_sve_state_order());
+
+ if (!page)
+ return -ENOMEM;
+
+ per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state = page_address(page);
+ }
+
+ /*
+ * Don't map the pages in hyp since these are only used in protected
+ * mode, which will (re)create its own mapping when initialized.
+ */
+
+ return 0;
+}
+
+/*
+ * Finalizes the initialization of hyp mode, once everything else is initialized
+ * and the initialization process cannot fail.
+ */
+static void finalize_init_hyp_mode(void)
+{
+ int cpu;
+
+ if (!is_protected_kvm_enabled() || !system_supports_sve())
+ return;
+
+ for_each_possible_cpu(cpu) {
+ struct cpu_sve_state *sve_state;
+
+ sve_state = per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state;
+ per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state = kern_hyp_va(sve_state);
+ }
+}
+
static void pkvm_hyp_init_ptrauth(void)
{
struct kvm_cpu_context *hyp_ctxt;
@@ -2566,6 +2623,10 @@ static int __init init_hyp_mode(void)
goto out_err;
}
+ err = init_pkvm_host_sve_state();
+ if (err)
+ goto out_err;
+
err = kvm_hyp_init_protection(hyp_va_bits);
if (err) {
kvm_err("Failed to init hyp memory protection\n");
@@ -2730,6 +2791,13 @@ static __init int kvm_arm_init(void)
if (err)
goto out_subs;
+ /*
+ * This should be called after initialization is done and failure isn't
+ * possible anymore.
+ */
+ if (!in_hyp_mode)
+ finalize_init_hyp_mode();
+
kvm_arm_initialised = true;
return 0;
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 16aa4875ddb8..25e9a94f6d76 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -18,6 +18,8 @@ unsigned long __icache_flags;
/* Used by kvm_get_vttbr(). */
unsigned int kvm_arm_vmid_bits;
+unsigned int kvm_host_sve_max_vl;
+
/*
* Set trap register values based on features in ID_AA64PFR0.
*/
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 859f22f754d3..3fae42479598 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -67,6 +67,28 @@ static int divide_memory_pool(void *virt, unsigned long size)
return 0;
}
+static int pkvm_create_host_sve_mappings(void)
+{
+ void *start, *end;
+ int ret, i;
+
+ if (!system_supports_sve())
+ return 0;
+
+ for (i = 0; i < hyp_nr_cpus; i++) {
+ struct kvm_host_data *host_data = per_cpu_ptr(&kvm_host_data, i);
+ struct cpu_sve_state *sve_state = host_data->sve_state;
+
+ start = kern_hyp_va(sve_state);
+ end = start + PAGE_ALIGN(pkvm_host_sve_state_size());
+ ret = pkvm_create_mappings(start, end, PAGE_HYP);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
unsigned long *per_cpu_base,
u32 hyp_va_bits)
@@ -125,6 +147,8 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
return ret;
}
+ pkvm_create_host_sve_mappings();
+
/*
* Map the host sections RO in the hypervisor, but transfer the
* ownership from the host to the hypervisor itself to make sure they
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 1b7b58cb121f..3fc8ca164dbe 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -32,6 +32,7 @@
/* Maximum phys_shift supported for any VM on this host */
static u32 __ro_after_init kvm_ipa_limit;
+unsigned int __ro_after_init kvm_host_sve_max_vl;
/*
* ARMv8 Reset Values
@@ -51,6 +52,8 @@ int __init kvm_arm_init_sve(void)
{
if (system_supports_sve()) {
kvm_sve_max_vl = sve_max_virtualisable_vl();
+ kvm_host_sve_max_vl = sve_max_vl();
+ kvm_nvhe_sym(kvm_host_sve_max_vl) = kvm_host_sve_max_vl;
/*
* The get_sve_reg()/set_sve_reg() ioctl interface will need
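As a worked example (illustrative only), the per-cpu allocation is the
cpu_sve_state header plus the signal-frame layout of the Z/P/FFR
registers at the host's maximum VL, rounded up to whole pages:

	size_t sz = sizeof(struct cpu_sve_state) +
		    SVE_SIG_REGS_SIZE(sve_vq_from_vl(kvm_host_sve_max_vl));
	unsigned int order = get_order(sz);	/* pages allocated: 1 << order */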
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH v3 05/11] KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM
From: Fuad Tabba @ 2024-05-28 12:59 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
catalin.marinas, philmd, james.morse, suzuki.poulose,
oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
When running in protected mode, we don't want to leak protected
guest state to the host, including whether a guest has used
fpsimd/sve. Therefore, eagerly restore the host state on guest
exit, which only happens if the guest has actually used
fpsimd/sve.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/kvm/hyp/include/hyp/switch.h | 13 ++++-
arch/arm64/kvm/hyp/nvhe/hyp-main.c | 67 +++++++++++++++++++++++--
arch/arm64/kvm/hyp/nvhe/pkvm.c | 2 +
arch/arm64/kvm/hyp/nvhe/switch.c | 16 +++++-
4 files changed, 93 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index d3a3f1cee668..89c52b59d2a9 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -320,6 +320,17 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR);
}
+static inline void __hyp_sve_save_host(void)
+{
+ struct cpu_sve_state *sve_state = *host_data_ptr(sve_state);
+
+ sve_state->zcr_el1 = read_sysreg_el1(SYS_ZCR);
+ write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
+ isb();
+ __sve_save_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_sve_max_vl),
+ &sve_state->fpsr);
+}
+
static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu);
/*
@@ -354,7 +365,7 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
/* Valid trap. Switch the context: */
/* First disable enough traps to allow us to update the registers */
- if (sve_guest)
+ if (sve_guest || (is_protected_kvm_enabled() && system_supports_sve()))
cpacr_clear_set(0, CPACR_ELx_FPEN | CPACR_ELx_ZEN);
else
cpacr_clear_set(0, CPACR_ELx_FPEN);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index f71394d0e32a..1088b0bd3cc5 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -23,20 +23,80 @@ DEFINE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
void __kvm_hyp_host_forward_smc(struct kvm_cpu_context *host_ctxt);
+static void __hyp_sve_save_guest(struct kvm_vcpu *vcpu)
+{
+ __vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR);
+ /*
+ * On saving/restoring guest sve state, always use the maximum VL for
+ * the guest. The layout of the data when saving the sve state depends
+ * on the VL, so use a consistent (i.e., the maximum) guest VL.
+ */
+ sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
+ isb();
+ __sve_save_state(vcpu_sve_pffr(vcpu), &vcpu->arch.ctxt.fp_regs.fpsr);
+ write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
+}
+
+static void __hyp_sve_restore_host(void)
+{
+ struct cpu_sve_state *sve_state = *host_data_ptr(sve_state);
+
+ /*
+ * On saving/restoring host sve state, always use the maximum VL for
+ * the host. The layout of the data when saving the sve state depends
+ * on the VL, so use a consistent (i.e., the maximum) host VL.
+ *
+ * Setting ZCR_EL2 to ZCR_ELx_LEN_MASK sets the effective length
+ * supported by the system (or limited at EL3).
+ */
+ write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
+ isb();
+ __sve_restore_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_sve_max_vl),
+ &sve_state->fpsr);
+ write_sysreg_el1(sve_state->zcr_el1, SYS_ZCR);
+}
+
+static void fpsimd_sve_flush(void)
+{
+ *host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED;
+}
+
+static void fpsimd_sve_sync(struct kvm_vcpu *vcpu)
+{
+ if (!guest_owns_fp_regs())
+ return;
+
+ cpacr_clear_set(0, CPACR_ELx_FPEN | CPACR_ELx_ZEN);
+ isb();
+
+ if (vcpu_has_sve(vcpu))
+ __hyp_sve_save_guest(vcpu);
+ else
+ __fpsimd_save_state(&vcpu->arch.ctxt.fp_regs);
+
+ if (system_supports_sve())
+ __hyp_sve_restore_host();
+ else
+ __fpsimd_restore_state(*host_data_ptr(fpsimd_state));
+
+ *host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED;
+}
+
static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
{
struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
+ fpsimd_sve_flush();
+
hyp_vcpu->vcpu.arch.ctxt = host_vcpu->arch.ctxt;
hyp_vcpu->vcpu.arch.sve_state = kern_hyp_va(host_vcpu->arch.sve_state);
- hyp_vcpu->vcpu.arch.sve_max_vl = host_vcpu->arch.sve_max_vl;
+ hyp_vcpu->vcpu.arch.sve_max_vl = min(host_vcpu->arch.sve_max_vl, kvm_host_sve_max_vl);
hyp_vcpu->vcpu.arch.hw_mmu = host_vcpu->arch.hw_mmu;
hyp_vcpu->vcpu.arch.hcr_el2 = host_vcpu->arch.hcr_el2;
hyp_vcpu->vcpu.arch.mdcr_el2 = host_vcpu->arch.mdcr_el2;
- hyp_vcpu->vcpu.arch.cptr_el2 = host_vcpu->arch.cptr_el2;
hyp_vcpu->vcpu.arch.iflags = host_vcpu->arch.iflags;
@@ -54,10 +114,11 @@ static void sync_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
struct vgic_v3_cpu_if *host_cpu_if = &host_vcpu->arch.vgic_cpu.vgic_v3;
unsigned int i;
+ fpsimd_sve_sync(&hyp_vcpu->vcpu);
+
host_vcpu->arch.ctxt = hyp_vcpu->vcpu.arch.ctxt;
host_vcpu->arch.hcr_el2 = hyp_vcpu->vcpu.arch.hcr_el2;
- host_vcpu->arch.cptr_el2 = hyp_vcpu->vcpu.arch.cptr_el2;
host_vcpu->arch.fault = hyp_vcpu->vcpu.arch.fault;
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 25e9a94f6d76..feb27b4ce459 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -588,6 +588,8 @@ int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
if (ret)
unmap_donated_memory(hyp_vcpu, sizeof(*hyp_vcpu));
+ hyp_vcpu->vcpu.arch.cptr_el2 = kvm_get_reset_cptr_el2(&hyp_vcpu->vcpu);
+
return ret;
}
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 019f863922fa..bef74de7065b 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -184,7 +184,21 @@ static bool kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu, u64 *exit_code)
static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
{
- __fpsimd_save_state(*host_data_ptr(fpsimd_state));
+ /*
+ * Non-protected kvm relies on the host restoring its sve state.
+ * Protected kvm restores the host's sve state itself, so as not to
+ * reveal that fpsimd was used by a guest or leak the upper sve bits.
+ */
+ if (unlikely(is_protected_kvm_enabled() && system_supports_sve())) {
+ __hyp_sve_save_host();
+
+ /* Re-enable SVE traps if not supported for the guest vcpu. */
+ if (!vcpu_has_sve(vcpu))
+ cpacr_clear_set(CPACR_ELx_ZEN, 0);
+
+ } else {
+ __fpsimd_save_state(*host_data_ptr(fpsimd_state));
+ }
}
static const exit_handler_fn hyp_exit_handlers[] = {
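In summary (a simplified recap of fpsimd_sve_sync() above), the sync
back to the host becomes:

	if (vcpu_has_sve(vcpu))
		__hyp_sve_save_guest(vcpu);	/* at the guest's max VL */
	else
		__fpsimd_save_state(&vcpu->arch.ctxt.fp_regs);

	if (system_supports_sve())
		__hyp_sve_restore_host();	/* at the host's max VL */
	else
		__fpsimd_restore_state(*host_data_ptr(fpsimd_state));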
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH v3 06/11] KVM: arm64: Consolidate initializing the host data's fpsimd_state/sve in pKVM
From: Fuad Tabba @ 2024-05-28 12:59 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
catalin.marinas, philmd, james.morse, suzuki.poulose,
oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
Now that we have introduced finalize_init_hyp_mode(), let's
consolidate the initialization of the host_data fpsimd_state and
sve_state.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/include/asm/kvm_host.h | 10 ++++++++--
arch/arm64/kvm/arm.c | 20 ++++++++++++++------
arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 1 -
arch/arm64/kvm/hyp/nvhe/pkvm.c | 11 -----------
arch/arm64/kvm/hyp/nvhe/setup.c | 1 -
5 files changed, 22 insertions(+), 21 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 90df7ccec5f4..36b8e97bf49e 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -550,8 +550,14 @@ struct cpu_sve_state {
struct kvm_host_data {
struct kvm_cpu_context host_ctxt;
- struct user_fpsimd_state *fpsimd_state; /* hyp VA */
- struct cpu_sve_state *sve_state; /* hyp VA */
+ /*
+ * All pointers in this union are hyp VA.
+ * sve_state is only used in pKVM and if system_supports_sve().
+ */
+ union {
+ struct user_fpsimd_state *fpsimd_state;
+ struct cpu_sve_state *sve_state;
+ };
/* Ownership of the FP regs */
enum {
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 1acf7415e831..59716789fe0f 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -2444,14 +2444,22 @@ static void finalize_init_hyp_mode(void)
{
int cpu;
- if (!is_protected_kvm_enabled() || !system_supports_sve())
- return;
+ if (system_supports_sve() && is_protected_kvm_enabled()) {
+ for_each_possible_cpu(cpu) {
+ struct cpu_sve_state *sve_state;
- for_each_possible_cpu(cpu) {
- struct cpu_sve_state *sve_state;
+ sve_state = per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state;
+ per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state =
+ kern_hyp_va(sve_state);
+ }
+ } else {
+ for_each_possible_cpu(cpu) {
+ struct user_fpsimd_state *fpsimd_state;
- sve_state = per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state;
- per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state = kern_hyp_va(sve_state);
+ fpsimd_state = &per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->host_ctxt.fp_regs;
+ per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->fpsimd_state =
+ kern_hyp_va(fpsimd_state);
+ }
}
}
diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index 22f374e9f532..24a9a8330d19 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -59,7 +59,6 @@ static inline bool pkvm_hyp_vcpu_is_protected(struct pkvm_hyp_vcpu *hyp_vcpu)
}
void pkvm_hyp_vm_table_init(void *tbl);
-void pkvm_host_fpsimd_state_init(void);
int __pkvm_init_vm(struct kvm *host_kvm, unsigned long vm_hva,
unsigned long pgd_hva);
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index feb27b4ce459..ea67fcbf8376 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -249,17 +249,6 @@ void pkvm_hyp_vm_table_init(void *tbl)
vm_table = tbl;
}
-void pkvm_host_fpsimd_state_init(void)
-{
- unsigned long i;
-
- for (i = 0; i < hyp_nr_cpus; i++) {
- struct kvm_host_data *host_data = per_cpu_ptr(&kvm_host_data, i);
-
- host_data->fpsimd_state = &host_data->host_ctxt.fp_regs;
- }
-}
-
/*
* Return the hyp vm structure corresponding to the handle.
*/
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 3fae42479598..f4350ba07b0b 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -324,7 +324,6 @@ void __noreturn __pkvm_init_finalise(void)
goto out;
pkvm_hyp_vm_table_init(vm_table_base);
- pkvm_host_fpsimd_state_init();
out:
/*
* We tail-called to here from handle___pkvm_init() and will not return,
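With fpsimd_state and sve_state now sharing storage, hyp code is
expected to pick the member matching the mode, along these lines
(save_host_state() is a hypothetical helper, not from this patch):

	if (system_supports_sve() && is_protected_kvm_enabled())
		save_host_state(host_data->sve_state);	/* pKVM + SVE */
	else
		save_host_state(host_data->fpsimd_state);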
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH v3 07/11] KVM: arm64: Refactor CPACR trap bit setting/clearing to use ELx format
From: Fuad Tabba @ 2024-05-28 12:59 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
catalin.marinas, philmd, james.morse, suzuki.poulose,
oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
When setting/clearing CPACR bits for EL0 and EL1, use the ELx
format of the bits, which covers both. This makes the code
clearer, and reduces the chances of accidentally missing a bit.
No functional change intended.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/include/asm/el2_setup.h | 6 +++---
arch/arm64/include/asm/kvm_emulate.h | 9 ++++-----
arch/arm64/kvm/fpsimd.c | 4 +---
arch/arm64/kvm/hyp/nvhe/pkvm.c | 2 +-
arch/arm64/kvm/hyp/nvhe/switch.c | 5 ++---
arch/arm64/kvm/hyp/vhe/switch.c | 7 +++----
6 files changed, 14 insertions(+), 19 deletions(-)
diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
index e4546b29dd0c..fd87c4b8f984 100644
--- a/arch/arm64/include/asm/el2_setup.h
+++ b/arch/arm64/include/asm/el2_setup.h
@@ -146,7 +146,7 @@
/* Coprocessor traps */
.macro __init_el2_cptr
__check_hvhe .LnVHE_\@, x1
- mov x0, #(CPACR_EL1_FPEN_EL1EN | CPACR_EL1_FPEN_EL0EN)
+ mov x0, #CPACR_ELx_FPEN
msr cpacr_el1, x0
b .Lskip_set_cptr_\@
.LnVHE_\@:
@@ -277,7 +277,7 @@
// (h)VHE case
mrs x0, cpacr_el1 // Disable SVE traps
- orr x0, x0, #(CPACR_EL1_ZEN_EL1EN | CPACR_EL1_ZEN_EL0EN)
+ orr x0, x0, #CPACR_ELx_ZEN
msr cpacr_el1, x0
b .Lskip_set_cptr_\@
@@ -298,7 +298,7 @@
// (h)VHE case
mrs x0, cpacr_el1 // Disable SME traps
- orr x0, x0, #(CPACR_EL1_SMEN_EL0EN | CPACR_EL1_SMEN_EL1EN)
+ orr x0, x0, #CPACR_ELx_SMEN
msr cpacr_el1, x0
b .Lskip_set_cptr_sme_\@
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 2d7a0bdf9d03..21650e7924d4 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -632,17 +632,16 @@ static __always_inline u64 kvm_get_reset_cptr_el2(struct kvm_vcpu *vcpu)
u64 val;
if (has_vhe()) {
- val = (CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN |
- CPACR_EL1_ZEN_EL1EN);
+ val = (CPACR_ELx_FPEN | CPACR_EL1_ZEN_EL1EN);
if (cpus_have_final_cap(ARM64_SME))
val |= CPACR_EL1_SMEN_EL1EN;
} else if (has_hvhe()) {
- val = (CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN);
+ val = CPACR_ELx_FPEN;
if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs())
- val |= CPACR_EL1_ZEN_EL1EN | CPACR_EL1_ZEN_EL0EN;
+ val |= CPACR_ELx_ZEN;
if (cpus_have_final_cap(ARM64_SME))
- val |= CPACR_EL1_SMEN_EL1EN | CPACR_EL1_SMEN_EL0EN;
+ val |= CPACR_ELx_SMEN;
} else {
val = CPTR_NVHE_EL2_RES1;
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 1807d3a79a8a..eb21f29d91fc 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -161,9 +161,7 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
if (has_vhe() && system_supports_sme()) {
/* Also restore EL0 state seen on entry */
if (vcpu_get_flag(vcpu, HOST_SME_ENABLED))
- sysreg_clear_set(CPACR_EL1, 0,
- CPACR_EL1_SMEN_EL0EN |
- CPACR_EL1_SMEN_EL1EN);
+ sysreg_clear_set(CPACR_EL1, 0, CPACR_ELx_SMEN);
else
sysreg_clear_set(CPACR_EL1,
CPACR_EL1_SMEN_EL0EN,
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index ea67fcbf8376..95cf18574251 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -65,7 +65,7 @@ static void pvm_init_traps_aa64pfr0(struct kvm_vcpu *vcpu)
/* Trap SVE */
if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE), feature_ids)) {
if (has_hvhe())
- cptr_clear |= CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN;
+ cptr_clear |= CPACR_ELx_ZEN;
else
cptr_set |= CPTR_EL2_TZ;
}
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index bef74de7065b..b264a1e3bb6e 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -48,15 +48,14 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
val |= has_hvhe() ? CPACR_EL1_TTA : CPTR_EL2_TTA;
if (cpus_have_final_cap(ARM64_SME)) {
if (has_hvhe())
- val &= ~(CPACR_EL1_SMEN_EL1EN | CPACR_EL1_SMEN_EL0EN);
+ val &= ~CPACR_ELx_SMEN;
else
val |= CPTR_EL2_TSM;
}
if (!guest_owns_fp_regs()) {
if (has_hvhe())
- val &= ~(CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN |
- CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN);
+ val &= ~(CPACR_ELx_FPEN | CPACR_ELx_ZEN);
else
val |= CPTR_EL2_TFP | CPTR_EL2_TZ;
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 20073579e9f5..1723090f3c4f 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -93,8 +93,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
val = read_sysreg(cpacr_el1);
val |= CPACR_ELx_TTA;
- val &= ~(CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN |
- CPACR_EL1_SMEN_EL0EN | CPACR_EL1_SMEN_EL1EN);
+ val &= ~(CPACR_ELx_ZEN | CPACR_ELx_SMEN);
/*
* With VHE (HCR.E2H == 1), accesses to CPACR_EL1 are routed to
@@ -109,9 +108,9 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
if (guest_owns_fp_regs()) {
if (vcpu_has_sve(vcpu))
- val |= CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN;
+ val |= CPACR_ELx_ZEN;
} else {
- val &= ~(CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN);
+ val &= ~CPACR_ELx_FPEN;
__activate_traps_fpsimd32(vcpu);
}
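For reference, the ELx constants are simply the union of the per-EL
enable bits, e.g.:

	CPACR_ELx_FPEN == CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN

so a single constant covers both exception levels.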
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH v3 08/11] KVM: arm64: Add an isb before restoring guest sve state
From: Fuad Tabba @ 2024-05-28 12:59 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
catalin.marinas, philmd, james.morse, suzuki.poulose,
oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
Since sve_cond_update_zcr_vq() does not include a barrier, add an
instruction synchronization barrier after updating ZCR and before
restoring the guest sve state, to ensure the new vector length is
in effect before the registers are loaded.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/kvm/hyp/include/hyp/switch.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 89c52b59d2a9..24b43f1f3d51 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -315,6 +315,7 @@ static bool kvm_hyp_handle_mops(struct kvm_vcpu *vcpu, u64 *exit_code)
static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
{
sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
+ isb();
__sve_restore_state(vcpu_sve_pffr(vcpu),
&vcpu->arch.ctxt.fp_regs.fpsr);
write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR);
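The ordering being enforced is (illustrative):

	sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
	isb();			/* make the new vector length visible */
	__sve_restore_state(vcpu_sve_pffr(vcpu),
			    &vcpu->arch.ctxt.fp_regs.fpsr);

Without the isb(), the loads in __sve_restore_state() could execute
using the stale vector length.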
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH v3 09/11] KVM: arm64: Do not use sve_cond_update_zcr updating with ZCR_ELx_LEN_MASK
From: Fuad Tabba @ 2024-05-28 12:59 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
catalin.marinas, philmd, james.morse, suzuki.poulose,
oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
A conditional update of ZCR with ZCR_ELx_LEN_MASK is unlikely to
be beneficial, since at present no implementation supports 2k
(2048-bit) vectors.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/kvm/hyp/nvhe/hyp-main.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 1088b0bd3cc5..b28d7d8cdc30 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -468,7 +468,7 @@ void handle_trap(struct kvm_cpu_context *host_ctxt)
case ESR_ELx_EC_SVE:
cpacr_clear_set(0, CPACR_ELx_ZEN);
isb();
- sve_cond_update_zcr_vq(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
+ write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
break;
case ESR_ELx_EC_IABT_LOW:
case ESR_ELx_EC_DABT_LOW:
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH v3 10/11] KVM: arm64: Do not perform an isb() if ZCR_EL2 isn't updated
From: Fuad Tabba @ 2024-05-28 12:59 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
catalin.marinas, philmd, james.morse, suzuki.poulose,
oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
When conditionally updating ZCR, there is no need to perform an
isb() if the value isn't actually updated. Introduce and use
sve_cond_update_zcr_vq_isb(), which only issues the barrier when
the value of ZCR is updated.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
This patch is undone by the following patch. Please consider one
of these two patches, or feel free to drop both.
---
arch/arm64/include/asm/fpsimd.h | 14 ++++++++++++--
arch/arm64/kvm/hyp/include/hyp/switch.h | 3 +--
arch/arm64/kvm/hyp/nvhe/hyp-main.c | 3 +--
3 files changed, 14 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index bc69ac368d73..531f805e4643 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -219,15 +219,24 @@ static inline void sve_user_enable(void)
sysreg_clear_set(cpacr_el1, 0, CPACR_EL1_ZEN_EL0EN);
}
-#define sve_cond_update_zcr_vq(val, reg) \
+#define __sve_cond_update_zcr_vq(val, reg, sync) \
do { \
u64 __zcr = read_sysreg_s((reg)); \
u64 __new = __zcr & ~ZCR_ELx_LEN_MASK; \
__new |= (val) & ZCR_ELx_LEN_MASK; \
- if (__zcr != __new) \
+ if (__zcr != __new) { \
write_sysreg_s(__new, (reg)); \
+ if (sync) \
+ isb(); \
+ } \
} while (0)
+#define sve_cond_update_zcr_vq(val, reg) \
+ __sve_cond_update_zcr_vq(val, reg, false)
+
+#define sve_cond_update_zcr_vq_isb(val, reg) \
+ __sve_cond_update_zcr_vq(val, reg, true)
+
/*
* Probing and setup functions.
* Calls to these functions must be serialised with one another.
@@ -330,6 +339,7 @@ static inline void sve_user_disable(void) { BUILD_BUG(); }
static inline void sve_user_enable(void) { BUILD_BUG(); }
#define sve_cond_update_zcr_vq(val, reg) do { } while (0)
+#define sve_cond_update_zcr_vq_isb(val, reg) do { } while (0)
static inline void vec_init_vq_map(enum vec_type t) { }
static inline void vec_update_vq_map(enum vec_type t) { }
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 24b43f1f3d51..162a60bcc27d 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -314,8 +314,7 @@ static bool kvm_hyp_handle_mops(struct kvm_vcpu *vcpu, u64 *exit_code)
static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
{
- sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
- isb();
+ sve_cond_update_zcr_vq_isb(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
__sve_restore_state(vcpu_sve_pffr(vcpu),
&vcpu->arch.ctxt.fp_regs.fpsr);
write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index b28d7d8cdc30..cef51fe80aa8 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -31,8 +31,7 @@ static void __hyp_sve_save_guest(struct kvm_vcpu *vcpu)
* the guest. The layout of the data when saving the sve state depends
* on the VL, so use a consistent (i.e., the maximum) guest VL.
*/
- sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
- isb();
+ sve_cond_update_zcr_vq_isb(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
__sve_save_state(vcpu_sve_pffr(vcpu), &vcpu->arch.ctxt.fp_regs.fpsr);
write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
}
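For reference, sve_cond_update_zcr_vq_isb(val, reg) expands to roughly:

	u64 zcr = read_sysreg_s(reg);
	u64 new = (zcr & ~ZCR_ELx_LEN_MASK) | (val & ZCR_ELx_LEN_MASK);

	if (zcr != new) {
		write_sysreg_s(new, reg);
		isb();	/* only pay for the barrier on an actual change */
	}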
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH v3 11/11] KVM: arm64: Drop sve_cond_update_zcr_vq_*
From: Fuad Tabba @ 2024-05-28 12:59 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
catalin.marinas, philmd, james.morse, suzuki.poulose,
oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
The conditional update likely doesn't help performance, but makes
the code a bit more difficult to read. Remove these macros and
just write directly to ZCR.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
This patch is meant as an RFC, and undoes the previous patch.
Please feel free to accept/drop whichever patches you think make
sense.
---
arch/arm64/include/asm/fpsimd.h | 21 ---------------------
arch/arm64/kvm/fpsimd.c | 4 ++--
arch/arm64/kvm/hyp/include/hyp/switch.h | 3 ++-
arch/arm64/kvm/hyp/nvhe/hyp-main.c | 3 ++-
4 files changed, 6 insertions(+), 25 deletions(-)
diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index 531f805e4643..d5ef1cf34e10 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -219,24 +219,6 @@ static inline void sve_user_enable(void)
sysreg_clear_set(cpacr_el1, 0, CPACR_EL1_ZEN_EL0EN);
}
-#define __sve_cond_update_zcr_vq(val, reg, sync) \
- do { \
- u64 __zcr = read_sysreg_s((reg)); \
- u64 __new = __zcr & ~ZCR_ELx_LEN_MASK; \
- __new |= (val) & ZCR_ELx_LEN_MASK; \
- if (__zcr != __new) { \
- write_sysreg_s(__new, (reg)); \
- if (sync) \
- isb(); \
- } \
- } while (0)
-
-#define sve_cond_update_zcr_vq(val, reg) \
- __sve_cond_update_zcr_vq(val, reg, false)
-
-#define sve_cond_update_zcr_vq_isb(val, reg) \
- __sve_cond_update_zcr_vq(val, reg, true)
-
/*
* Probing and setup functions.
* Calls to these functions must be serialised with one another.
@@ -338,9 +320,6 @@ static inline bool sve_vq_available(unsigned int vq) { return false; }
static inline void sve_user_disable(void) { BUILD_BUG(); }
static inline void sve_user_enable(void) { BUILD_BUG(); }
-#define sve_cond_update_zcr_vq(val, reg) do { } while (0)
-#define sve_cond_update_zcr_vq_isb(val, reg) do { } while (0)
-
static inline void vec_init_vq_map(enum vec_type t) { }
static inline void vec_update_vq_map(enum vec_type t) { }
static inline int vec_verify_vq_map(enum vec_type t) { return 0; }
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index eb21f29d91fc..bf8bf9975951 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -187,8 +187,8 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
* role when doing the save from EL2.
*/
if (!has_vhe())
- sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1,
- SYS_ZCR_EL1);
+ write_sysreg_s(vcpu_sve_max_vq(vcpu) - 1,
+ SYS_ZCR_EL1);
}
/*
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 162a60bcc27d..800dd4e1bcbe 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -314,7 +314,8 @@ static bool kvm_hyp_handle_mops(struct kvm_vcpu *vcpu, u64 *exit_code)
static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
{
- sve_cond_update_zcr_vq_isb(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
+ write_sysreg_s(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
+ isb();
__sve_restore_state(vcpu_sve_pffr(vcpu),
&vcpu->arch.ctxt.fp_regs.fpsr);
write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index cef51fe80aa8..0feacd13b4c2 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -31,7 +31,8 @@ static void __hyp_sve_save_guest(struct kvm_vcpu *vcpu)
* the guest. The layout of the data when saving the sve state depends
* on the VL, so use a consistent (i.e., the maximum) guest VL.
*/
- sve_cond_update_zcr_vq_isb(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
+ write_sysreg_s(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
+ isb();
__sve_save_state(vcpu_sve_pffr(vcpu), &vcpu->arch.ctxt.fp_regs.fpsr);
write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
}
--
2.45.1.288.g0e0cd299f1-goog
* Re: [PATCH v3 00/11] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode
From: Fuad Tabba @ 2024-05-28 13:13 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, will, qperret, seanjc, alexandru.elisei, catalin.marinas,
philmd, james.morse, suzuki.poulose, oliver.upton, mark.rutland,
broonie, joey.gouly, rananta, yuzenghui
Hi,
On Tue, May 28, 2024 at 1:59 PM Fuad Tabba <tabba@google.com> wrote:
>
> Changes since v2 [1]
> - Rebased on Linux 6.10-rc1 (1613e604df0c)
> - Apply suggestions/fixes suggested for V2 (Marc)
> - Add an isb() to __hyp_sve_restore_guest()
> - Squash patch that introduces kvm_host_sve_max_vl with following
> patch, since it's used there
> - Some refactoring and tidying up
> - Introduce and use sve_cond_update_zcr_vq_isb(), which only does
> an isb() if ZCR is updated (RFC, next to last patch)
Just realized that
"An indirect read of ZCR_EL1.LEN appears to occur in program order
relative to a direct write of the same register, without the need for
explicit synchronization."
https://developer.arm.com/documentation/ddi0595/2021-03/AArch64-Registers/ZCR-EL2--SVE-Control-Register--EL2-
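If the same wording applies to ZCR_EL2, the isb() after the write may
well be unnecessary, and the sequence could presumably collapse to a
plain write before the save/restore. A minimal sketch, assuming the EL2
register behaves like the quoted EL1 wording (both calls are the ones
this series already uses):

        /* sketch: rely on the self-synchronizing behaviour, no isb() */
        write_sysreg_s(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
        __sve_save_state(vcpu_sve_pffr(vcpu), &vcpu->arch.ctxt.fp_regs.fpsr);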
I'll wait until I get comments on this series as it is before
respinning. Apologies for the spam.
Cheers,
/fuad
* Re: [PATCH v3 11/11] KVM: arm64: Drop sve_cond_update_zcr_vq_*
2024-05-28 12:59 ` [PATCH v3 11/11] KVM: arm64: Drop sve_cond_update_zcr_vq_* Fuad Tabba
@ 2024-05-30 18:22 ` Oliver Upton
2024-05-30 20:14 ` Oliver Upton
2024-05-31 6:40 ` Fuad Tabba
0 siblings, 2 replies; 26+ messages in thread
From: Oliver Upton @ 2024-05-30 18:22 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
Hi,
On Tue, May 28, 2024 at 01:59:14PM +0100, Fuad Tabba wrote:
> The conditional update likely doesn't help performance, but makes
> the code a bit more difficult to read. Remove these macros and
> just write directly to ZCR.
>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> This patch is meant as an RFC, and undoes the previous patch.
> Please feel free to accept/drop whichever patches you think make
> sense.
The motivation behind this helper is to avoid unnecessary synchronizing
behavior, since as you note the architecture does not require
explicit synchronization for the value to become visible to subsequent
instructions.
Now, that doesn't _necessarily_ imply full-blown context synchronization,
but it isn't too far-fetched to think a conservative implementation does
exactly that upon a ZCR write.
--
Thanks,
Oliver
* Re: [PATCH v3 00/11] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode
2024-05-28 12:59 [PATCH v3 00/11] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode Fuad Tabba
` (11 preceding siblings ...)
2024-05-28 13:13 ` [PATCH v3 00/11] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode Fuad Tabba
@ 2024-05-30 18:29 ` Oliver Upton
12 siblings, 0 replies; 26+ messages in thread
From: Oliver Upton @ 2024-05-30 18:29 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
On Tue, May 28, 2024 at 01:59:03PM +0100, Fuad Tabba wrote:
> The last two patches are not essential to this patch series, and
> the last one undoes the next-to-last. Please consider only one
> (or neither) of these two patches for inclusion.
For patches 1-7 (with the unnecessary isb()'s addressed):
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
I think we can do without the rest of the series for 6.10.
I also tested this on Neoverse V2.
--
Thanks,
Oliver
* Re: [PATCH v3 11/11] KVM: arm64: Drop sve_cond_update_zcr_vq_*
2024-05-30 18:22 ` Oliver Upton
@ 2024-05-30 20:14 ` Oliver Upton
2024-05-31 6:40 ` Fuad Tabba
1 sibling, 0 replies; 26+ messages in thread
From: Oliver Upton @ 2024-05-30 20:14 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
On Thu, May 30, 2024 at 11:22:42AM -0700, Oliver Upton wrote:
> Hi,
>
> On Tue, May 28, 2024 at 01:59:14PM +0100, Fuad Tabba wrote:
> > The conditional update likely doesn't help performance, but makes
> > the code a bit more difficult to read. Remove these macros and
> > just write directly to ZCR.
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> > This patch is meant as an RFC, and undoes the previous patch.
> > Please feel free to accept/drop whichever patches you think make
> > sense.
>
> The motivation behind this helper is to avoid unnecessary synchronizing
> behavior, since as you note the architecture does not require
> explicit synchronization for the value to become visible to subsequent
> instructions.
*indirect reads from subsequent instructions :)
--
Thanks,
Oliver
* Re: [PATCH v3 11/11] KVM: arm64: Drop sve_cond_update_zcr_vq_*
2024-05-30 18:22 ` Oliver Upton
2024-05-30 20:14 ` Oliver Upton
@ 2024-05-31 6:40 ` Fuad Tabba
1 sibling, 0 replies; 26+ messages in thread
From: Fuad Tabba @ 2024-05-31 6:40 UTC (permalink / raw)
To: Oliver Upton
Cc: kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
Hi Oliver,
On Thu, May 30, 2024 at 7:22 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> Hi,
>
> On Tue, May 28, 2024 at 01:59:14PM +0100, Fuad Tabba wrote:
> > The conditional update likely doesn't help performance, but makes
> > the code a bit more difficult to read. Remove these macros and
> > just write directly to ZCR.
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> > This patch is meant as an RFC, and undoes the previous patch.
> > Please feel free to accept/drop whichever patches you think make
> > sense.
>
> The motivation behind this helper is to avoid unnecessary synchronizing
> behavior, since as you note the architecture does not require
> explicit synchronization for the value to become visible to subsequent
> instructions.
>
> Now, that doesn't _necessarily_ imply full blown context synchronization,
> but it isn't too far fetched to think a conservative implementation does
> exactly that upon ZCR write.
Thanks for the explanation and for the reviews!
/fuad
> --
> Thanks,
> Oliver
* Re: [PATCH v3 01/11] KVM: arm64: Reintroduce __sve_save_state
2024-05-28 12:59 ` [PATCH v3 01/11] KVM: arm64: Reintroduce __sve_save_state Fuad Tabba
@ 2024-05-31 12:26 ` Mark Brown
2024-06-03 8:28 ` Fuad Tabba
0 siblings, 1 reply; 26+ messages in thread
From: Mark Brown @ 2024-05-31 12:26 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, oliver.upton, mark.rutland, joey.gouly, rananta,
yuzenghui
On Tue, May 28, 2024 at 01:59:04PM +0100, Fuad Tabba wrote:
> This reverts commit e66425fc9ba3 ("KVM: arm64: Remove unused
> __sve_save_state").
> +void __sve_save_state(void *sve_pffr, u32 *fpsr);
The prototype says this takes 2 arguments...
> +SYM_FUNC_START(__sve_save_state)
> + mov x2, #1
> + sve_save 0, x1, x2, 3
...but the macro takes, and the function supplies, 3 arguments. This was
a bug in the code at the time you did the revert (the prototype had been
missed when adding save_ffr), but we should still fix it.
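Presumably the fix is to add the missing FFR argument to the prototype,
something like this sketch (not yet in the tree):

        /* assumed fix: expose the save_ffr flag the sve_save macro consumes */
        void __sve_save_state(void *sve_pffr, u32 *fpsr, int save_ffr);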
* Re: [PATCH v3 03/11] KVM: arm64: Specialize handling of host fpsimd state on trap
2024-05-28 12:59 ` [PATCH v3 03/11] KVM: arm64: Specialize handling of host fpsimd state on trap Fuad Tabba
@ 2024-05-31 13:35 ` Mark Brown
0 siblings, 0 replies; 26+ messages in thread
From: Mark Brown @ 2024-05-31 13:35 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, oliver.upton, mark.rutland, joey.gouly, rananta,
yuzenghui
On Tue, May 28, 2024 at 01:59:06PM +0100, Fuad Tabba wrote:
> In subsequent patches, n/vhe will diverge on saving the host
> fpsimd/sve state when taking a guest fpsimd/sve trap. Add a
> specialized helper to handle it.
Reviewed-by: Mark Brown <broonie@kernel.org>
* Re: [PATCH v3 05/11] KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM
2024-05-28 12:59 ` [PATCH v3 05/11] KVM: arm64: Eagerly restore host fpsimd/sve " Fuad Tabba
@ 2024-05-31 14:09 ` Mark Brown
2024-06-03 8:37 ` Fuad Tabba
0 siblings, 1 reply; 26+ messages in thread
From: Mark Brown @ 2024-05-31 14:09 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, oliver.upton, mark.rutland, joey.gouly, rananta,
yuzenghui
On Tue, May 28, 2024 at 01:59:08PM +0100, Fuad Tabba wrote:
> +static inline void __hyp_sve_save_host(void)
> +{
> + struct cpu_sve_state *sve_state = *host_data_ptr(sve_state);
> +
> + sve_state->zcr_el1 = read_sysreg_el1(SYS_ZCR);
> + write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
As well as the sync issue Oliver mentioned on the removal of
_cond_update(), just doing these updates as a blind write creates a
surprise if we ever get more control bits in ZCR_EL2.
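Concretely, I mean something like this sketch, which mirrors the
_cond_update() macro this series removes, minus the conditional
(illustrative only; vq_minus_1 stands in for the length being
programmed):

        /*
         * Hypothetical field-preserving update: only the LEN field is
         * modified, so any future ZCR_EL2 control bits would survive.
         */
        u64 zcr = read_sysreg_s(SYS_ZCR_EL2);

        zcr &= ~ZCR_ELx_LEN_MASK;
        zcr |= vq_minus_1 & ZCR_ELx_LEN_MASK;
        write_sysreg_s(zcr, SYS_ZCR_EL2);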
> +static void __hyp_sve_restore_host(void)
> +{
> + struct cpu_sve_state *sve_state = *host_data_ptr(sve_state);
> +
> + /*
> + * On saving/restoring host sve state, always use the maximum VL for
> + * the host. The layout of the data when saving the sve state depends
> + * on the VL, so use a consistent (i.e., the maximum) host VL.
> + *
> + * Setting ZCR_EL2 to ZCR_ELx_LEN_MASK sets the effective length
> + * supported by the system (or limited at EL3).
> + */
> + write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
Setting ZCR_ELx_LEN_MASK sets the VL to the maximum supported/configured
for the current PE, not the system. This will hopefully be the same,
since we really hope implementors continue to build symmetric systems,
but there is handling for the asymmetric case in the kernel just in case.
Given that we record the host's maximum VL, should we use it?
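That is, something along these lines (a sketch; kvm_host_sve_max_vl is
the value this series already records, and sve_vq_from_vl() the existing
conversion helper):

        /* sketch: program the recorded host maximum VL, not the PE maximum */
        write_sysreg_s(sve_vq_from_vl(kvm_host_sve_max_vl) - 1, SYS_ZCR_EL2);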
> +static void fpsimd_sve_flush(void)
> +{
> + *host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED;
> +}
> +
That's not what I'd have expected something called fpsimd_sve_flush() to
do; I'd have expected it to save the current state to memory and then
mark it as free (that's what fpsimd_flush_cpu_state() does in the host
kernel). Perhaps just inline this into the one user?
> +static void fpsimd_sve_sync(struct kvm_vcpu *vcpu)
> +{
> + if (!guest_owns_fp_regs())
> + return;
> +
> + cpacr_clear_set(0, CPACR_ELx_FPEN|CPACR_ELx_ZEN);
> + isb();
Spaces around |.
> static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
> {
> struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
>
> + fpsimd_sve_flush();
> +
> hyp_vcpu->vcpu.arch.ctxt = host_vcpu->arch.ctxt;
>
> hyp_vcpu->vcpu.arch.sve_state = kern_hyp_va(host_vcpu->arch.sve_state);
> - hyp_vcpu->vcpu.arch.sve_max_vl = host_vcpu->arch.sve_max_vl;
> + hyp_vcpu->vcpu.arch.sve_max_vl = min(host_vcpu->arch.sve_max_vl, kvm_host_sve_max_vl);
This needs a comment I think.
* Re: [PATCH v3 01/11] KVM: arm64: Reintroduce __sve_save_state
2024-05-31 12:26 ` Mark Brown
@ 2024-06-03 8:28 ` Fuad Tabba
0 siblings, 0 replies; 26+ messages in thread
From: Fuad Tabba @ 2024-06-03 8:28 UTC (permalink / raw)
To: Mark Brown
Cc: kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, oliver.upton, mark.rutland, joey.gouly, rananta,
yuzenghui
Hi Mark,
On Fri, May 31, 2024 at 1:26 PM Mark Brown <broonie@kernel.org> wrote:
>
> On Tue, May 28, 2024 at 01:59:04PM +0100, Fuad Tabba wrote:
>
> > This reverts commit e66425fc9ba3 ("KVM: arm64: Remove unused
> > __sve_save_state").
>
> > +void __sve_save_state(void *sve_pffr, u32 *fpsr);
>
> The prototype says this takes 2 arguments...
>
> > +SYM_FUNC_START(__sve_save_state)
> > + mov x2, #1
> > + sve_save 0, x1, x2, 3
>
> ...but the macro takes and the function supplies 3 arguments. This was
> a bug in the code at the time you did the revert (the prototype had been
> missed when adding save_ffr) but we should still fix it.
Thanks for spotting this! I will fix it on the respin.
Cheers,
/fuad
* Re: [PATCH v3 05/11] KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM
2024-05-31 14:09 ` Mark Brown
@ 2024-06-03 8:37 ` Fuad Tabba
2024-06-03 13:27 ` Mark Brown
0 siblings, 1 reply; 26+ messages in thread
From: Fuad Tabba @ 2024-06-03 8:37 UTC (permalink / raw)
To: Mark Brown
Cc: kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, oliver.upton, mark.rutland, joey.gouly, rananta,
yuzenghui
Hi Mark,
On Fri, May 31, 2024 at 3:09 PM Mark Brown <broonie@kernel.org> wrote:
>
> On Tue, May 28, 2024 at 01:59:08PM +0100, Fuad Tabba wrote:
>
> > +static inline void __hyp_sve_save_host(void)
> > +{
> > + struct cpu_sve_state *sve_state = *host_data_ptr(sve_state);
> > +
> > + sve_state->zcr_el1 = read_sysreg_el1(SYS_ZCR);
> > + write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
>
> As well as the sync issue Oliver mentioned on the removal of
> _cond_update() just doing these updates as a blind write creates a
> surprise if we ever get more control bits in ZCR_EL2.
I'm not sure it does. The other bits are RES0, and this is always
setting the length. So even if new control bits are added, this
shouldn't matter.
Also, one of the performance concerns, now that nested-virt support is
being added, is the overhead of doing the conditional update when we
know that it's unlikely that anyone is implementing vectors as big as
the max.
>
> > +static void __hyp_sve_restore_host(void)
> > +{
> > + struct cpu_sve_state *sve_state = *host_data_ptr(sve_state);
> > +
> > + /*
> > + * On saving/restoring host sve state, always use the maximum VL for
> > + * the host. The layout of the data when saving the sve state depends
> > + * on the VL, so use a consistent (i.e., the maximum) host VL.
> > + *
> > + * Setting ZCR_EL2 to ZCR_ELx_LEN_MASK sets the effective length
> > + * supported by the system (or limited at EL3).
> > + */
> > + write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
>
> Setting ZCR_ELx_LEN_MASK sets the VL to the maximum supported/configured
> for the current PE, not the system. This will hopefully be the same
> since we really hope implementors continue to build symmetric systems
> but there is handling for that case in the kernel just in case. Given
> that we record the host's maximum VL should we use it?
You're right, but even if the current PE had a different vector
length, ZCR_ELx_LEN_MASK is the default value for ZCR_EL2 when the
host is running (this is the existing behavior before this patch
series). It is also the value this patch series uses when saving the
host SVE state. So, since we are consistent, I think this is correct.
> > +static void fpsimd_sve_flush(void)
> > +{
> > + *host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED;
> > +}
> > +
>
> That's not what I'd have expected something called fpsimd_sve_flush() to
> do, I'd have expected it to save the current state to memory and then
> mark it as free (that's what fpsimd_flush_cpu_state() does in the host
> kernel). Perhaps just inline this into the one user?
>
> > +static void fpsimd_sve_sync(struct kvm_vcpu *vcpu)
> > +{
> > + if (!guest_owns_fp_regs())
> > + return;
> > +
> > + cpacr_clear_set(0, CPACR_ELx_FPEN|CPACR_ELx_ZEN);
> > + isb();
>
> Spaces around |.
Will do.
> > static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
> > {
> > struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
> >
> > + fpsimd_sve_flush();
> > +
> > hyp_vcpu->vcpu.arch.ctxt = host_vcpu->arch.ctxt;
> >
> > hyp_vcpu->vcpu.arch.sve_state = kern_hyp_va(host_vcpu->arch.sve_state);
> > - hyp_vcpu->vcpu.arch.sve_max_vl = host_vcpu->arch.sve_max_vl;
> > + hyp_vcpu->vcpu.arch.sve_max_vl = min(host_vcpu->arch.sve_max_vl, kvm_host_sve_max_vl);
>
> This needs a comment I think.
Will do.
Thanks!
/fuad
* Re: [PATCH v3 05/11] KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM
2024-06-03 8:37 ` Fuad Tabba
@ 2024-06-03 13:27 ` Mark Brown
2024-06-03 13:48 ` Marc Zyngier
0 siblings, 1 reply; 26+ messages in thread
From: Mark Brown @ 2024-06-03 13:27 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, oliver.upton, mark.rutland, joey.gouly, rananta,
yuzenghui
On Mon, Jun 03, 2024 at 09:37:16AM +0100, Fuad Tabba wrote:
> On Fri, May 31, 2024 at 3:09 PM Mark Brown <broonie@kernel.org> wrote:
> > On Tue, May 28, 2024 at 01:59:08PM +0100, Fuad Tabba wrote:
> > > + write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
> > As well as the sync issue Oliver mentioned on the removal of
> > _cond_update() just doing these updates as a blind write creates a
> > surprise if we ever get more control bits in ZCR_EL2.
> I'm not sure it does. The other bits are RES0, and this is always
> setting the length. So even if new control bits are added, this
> shouldn't matter.
The surprise would be that if new control bits were added this would
result in clearing them on restore.
> Also, one of the concerns in terms of performance is now with
> nested-virt support being added, and the overhead of doing the
> conditional update when we know that it's unlikely that anyone is
> implementing vectors as big as the max.
I guess there's the option of doing a restore of a value fixed during
initialisation instead?
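Something like this sketch is what I have in mind (names assumed, not
code from the series):

        /* hypothetical: precompute the value used to restore the host's ZCR_EL2 */
        static u64 kvm_host_zcr_el2;

        void __init kvm_host_zcr_init(void)
        {
                kvm_host_zcr_el2 = sve_vq_from_vl(kvm_host_sve_max_vl) - 1;
        }

        /* ...then on each host restore, simply: */
        write_sysreg_s(kvm_host_zcr_el2, SYS_ZCR_EL2);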
> > > + /*
> > > + * On saving/restoring host sve state, always use the maximum VL for
> > > + * the host. The layout of the data when saving the sve state depends
> > > + * on the VL, so use a consistent (i.e., the maximum) host VL.
> > > + *
> > > + * Setting ZCR_EL2 to ZCR_ELx_LEN_MASK sets the effective length
> > > + * supported by the system (or limited at EL3).
> > > + */
> > > + write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
> > Setting ZCR_ELx_LEN_MASK sets the VL to the maximum supported/configured
> > for the current PE, not the system. This will hopefully be the same
> > since we really hope implementors continue to build symmetric systems
> > but there is handling for that case in the kernel just in case. Given
> > that we record the host's maximum VL should we use it?
> You're right, but even if the current PE had a different vector
> length, ZCR_ELx_LEN_MASK is the default value for ZCR_EL2 when the
> host is running (this is the existing behavior before this patch
> series). It is also the value this patch series uses when saving the
> host SVE state. So since we are consistent I think this is correct.
The reason we just set all bits in ZCR_EL2.LEN is that we don't
currently use SVE at EL2 so we're just passing everything through to EL1
and letting it worry about things. As we start adding more SVE code at
EL2 we need to care more and I think we should start explicitly
programming what we think we're using to use to avoid surprises. For
example in this series we allocate the buffer used to store the host SVE
state based on the probed maximum usable VL for the system but here we
use whatever the PE has as the maximum VL. This means that in the
(hopefully unlikely) case where the probed value is lower than the PE
value we'll overflow the buffer.
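To make the failure mode concrete, a sketch of the mismatch (purely
illustrative; SVE_SIG_REGS_SIZE() is the existing signal-frame layout
macro, borrowed here just to show the sizing):

        /* buffer sized from the probed system maximum VL */
        unsigned int sys_vq = sve_vq_from_vl(kvm_host_sve_max_vl);
        size_t buf_size = SVE_SIG_REGS_SIZE(sys_vq);

        /* ...but a blind write selects the PE maximum instead */
        write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
        /* a subsequent __sve_save_state() may now write past buf_size */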
* Re: [PATCH v3 05/11] KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM
2024-06-03 13:27 ` Mark Brown
@ 2024-06-03 13:48 ` Marc Zyngier
2024-06-03 14:15 ` Mark Brown
0 siblings, 1 reply; 26+ messages in thread
From: Marc Zyngier @ 2024-06-03 13:48 UTC (permalink / raw)
To: Mark Brown
Cc: Fuad Tabba, kvmarm, linux-arm-kernel, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, oliver.upton, mark.rutland, joey.gouly, rananta,
yuzenghui
On Mon, 03 Jun 2024 14:27:07 +0100,
Mark Brown <broonie@kernel.org> wrote:
>
> On Mon, Jun 03, 2024 at 09:37:16AM +0100, Fuad Tabba wrote:
> > On Fri, May 31, 2024 at 3:09 PM Mark Brown <broonie@kernel.org> wrote:
> > > On Tue, May 28, 2024 at 01:59:08PM +0100, Fuad Tabba wrote:
>
> > > > + write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
>
> > > As well as the sync issue Oliver mentioned on the removal of
> > > _cond_update() just doing these updates as a blind write creates a
> > > surprise if we ever get more control bits in ZCR_EL2.
>
> > I'm not sure it does. The other bits are RES0, and this is always
> > setting the length. So even if new control bits are added, this
> > shouldn't matter.
>
> The surprise would be that if new control bits were added this would
> result in clearing them on restore.
And as far as I can tell, there is no in-flight architectural change
that touches this class of registers. And should this eventually
happen, we will have to audit *all* the spots where ZCR_ELx is touched
and turn them all into RMW accesses. Having an extra spot here isn't
going to change this in a material way.
>
> > Also, one of the concerns in terms of performance is now with
> > nested-virt support being added, and the overhead of doing the
> > conditional update when we know that it's unlikely that anyone is
> > implementing vectors as big as the max.
>
> I guess there's the option of doing a restore of a value fixed during
> initialisation instead?
And what do we gain from that?
>
> > > > + /*
> > > > + * On saving/restoring host sve state, always use the maximum VL for
> > > > + * the host. The layout of the data when saving the sve state depends
> > > > + * on the VL, so use a consistent (i.e., the maximum) host VL.
> > > > + *
> > > > + * Setting ZCR_EL2 to ZCR_ELx_LEN_MASK sets the effective length
> > > > + * supported by the system (or limited at EL3).
> > > > + */
> > > > + write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
>
> > > Setting ZCR_ELx_LEN_MASK sets the VL to the maximum supported/configured
> > > for the current PE, not the system. This will hopefully be the same
> > > since we really hope implementors continue to build symmetric systems
> > > but there is handling for that case in the kernel just in case. Given
> > > that we record the host's maximum VL should we use it?
>
> > You're right, but even if the current PE had a different vector
> > length, ZCR_ELx_LEN_MASK is the default value for ZCR_EL2 when the
> > host is running (this is the existing behavior before this patch
> > series). It is also the value this patch series uses when saving the
> > host SVE state. So since we are consistent I think this is correct.
>
> The reason we just set all bits in ZCR_EL2.LEN is that we don't
> currently use SVE at EL2 so we're just passing everything through to EL1
> and letting it worry about things. As we start adding more SVE code at
> EL2 we need to care more and I think we should start explicitly
> programming what we think we're using to use to avoid surprises. For
> example in this series we allocate the buffer used to store the host SVE
> state based on the probed maximum usable VL for the system but here we
> use whatever the PE has as the maximum VL. This means that in the
> (hopefully unlikely) case where the probed value is lower than the PE
> value we'll overflow the buffer.
In that case, we need the *real* maximum across all CPUs, not the
maximum usable.
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v3 05/11] KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM
2024-06-03 13:48 ` Marc Zyngier
@ 2024-06-03 14:15 ` Mark Brown
2024-06-03 14:31 ` Marc Zyngier
0 siblings, 1 reply; 26+ messages in thread
From: Mark Brown @ 2024-06-03 14:15 UTC (permalink / raw)
To: Marc Zyngier
Cc: Fuad Tabba, kvmarm, linux-arm-kernel, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, oliver.upton, mark.rutland, joey.gouly, rananta,
yuzenghui
On Mon, Jun 03, 2024 at 02:48:39PM +0100, Marc Zyngier wrote:
> Mark Brown <broonie@kernel.org> wrote:
> > On Mon, Jun 03, 2024 at 09:37:16AM +0100, Fuad Tabba wrote:
> > > Also, one of the concerns in terms of performance is now with
> > > nested-virt support being added, and the overhead of doing the
> > > conditional update when we know that it's unlikely that anyone is
> > > implementing vectors as big as the max.
> > I guess there's the option of doing a restore of a value fixed during
> > initialisation instead?
> And what do we gain from that?
Reducing the number of places that need updating.
> > programming what we think we're using to use to avoid surprises. For
> > example in this series we allocate the buffer used to store the host SVE
> > state based on the probed maximum usable VL for the system but here we
> > use whatever the PE has as the maximum VL. This means that in the
> > (hopefully unlikely) case where the probed value is lower than the PE
> > value we'll overflow the buffer.
> In that case, we need the *real* maximum across all CPUs, not the
> maximum usable.
If we stick with setting all the bits then yes, we'd need that. It'd
also be more robust than trusting that the host won't set a higher
length.
* Re: [PATCH v3 05/11] KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM
2024-06-03 14:15 ` Mark Brown
@ 2024-06-03 14:31 ` Marc Zyngier
0 siblings, 0 replies; 26+ messages in thread
From: Marc Zyngier @ 2024-06-03 14:31 UTC (permalink / raw)
To: Mark Brown
Cc: Fuad Tabba, kvmarm, linux-arm-kernel, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, oliver.upton, mark.rutland, joey.gouly, rananta,
yuzenghui
On Mon, 03 Jun 2024 15:15:37 +0100,
Mark Brown <broonie@kernel.org> wrote:
>
> On Mon, Jun 03, 2024 at 02:48:39PM +0100, Marc Zyngier wrote:
> > Mark Brown <broonie@kernel.org> wrote:
> > > On Mon, Jun 03, 2024 at 09:37:16AM +0100, Fuad Tabba wrote:
>
> > > > Also, one of the concerns in terms of performance is now with
> > > > nested-virt support being added, and the overhead of doing the
> > > > conditional update when we know that it's unlikely that anyone is
> > > > implementing vectors as big as the max.
>
> > > I guess there's the option of doing a restore of a value fixed during
> > > initialisation instead?
>
> > And what do we gain from that?
>
> Reducing the number of places that need updating.
That'd be assuming that the host-side value never requires any change
and will be OK with a fixed value. Given that this highly hypothetical
change would likely be a hierarchical control, much like ZCR_ELx
is today, the fixed value is likely to change on a regular basis.
In any case, this is something we can (re)evaluate when we get to it.
If ever.
M.
--
Without deviation from the norm, progress is not possible.
end of thread, other threads: [~2024-06-03 14:31 UTC | newest]
Thread overview: 26+ messages
2024-05-28 12:59 [PATCH v3 00/11] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode Fuad Tabba
2024-05-28 12:59 ` [PATCH v3 01/11] KVM: arm64: Reintroduce __sve_save_state Fuad Tabba
2024-05-31 12:26 ` Mark Brown
2024-06-03 8:28 ` Fuad Tabba
2024-05-28 12:59 ` [PATCH v3 02/11] KVM: arm64: Abstract set/clear of CPTR_EL2 bits behind helper Fuad Tabba
2024-05-28 12:59 ` [PATCH v3 03/11] KVM: arm64: Specialize handling of host fpsimd state on trap Fuad Tabba
2024-05-31 13:35 ` Mark Brown
2024-05-28 12:59 ` [PATCH v3 04/11] KVM: arm64: Allocate memory mapped at hyp for host sve state in pKVM Fuad Tabba
2024-05-28 12:59 ` [PATCH v3 05/11] KVM: arm64: Eagerly restore host fpsimd/sve " Fuad Tabba
2024-05-31 14:09 ` Mark Brown
2024-06-03 8:37 ` Fuad Tabba
2024-06-03 13:27 ` Mark Brown
2024-06-03 13:48 ` Marc Zyngier
2024-06-03 14:15 ` Mark Brown
2024-06-03 14:31 ` Marc Zyngier
2024-05-28 12:59 ` [PATCH v3 06/11] KVM: arm64: Consolidate initializing the host data's fpsimd_state/sve " Fuad Tabba
2024-05-28 12:59 ` [PATCH v3 07/11] KVM: arm64: Refactor CPACR trap bit setting/clearing to use ELx format Fuad Tabba
2024-05-28 12:59 ` [PATCH v3 08/11] KVM: arm64: Add an isb before restoring guest sve state Fuad Tabba
2024-05-28 12:59 ` [PATCH v3 09/11] KVM: arm64: Do not use sve_cond_update_zcr updating with ZCR_ELx_LEN_MASK Fuad Tabba
2024-05-28 12:59 ` [PATCH v3 10/11] KVM: arm64: Do not perform an isb() if ZCR_EL2 isn't updated Fuad Tabba
2024-05-28 12:59 ` [PATCH v3 11/11] KVM: arm64: Drop sve_cond_update_zcr_vq_* Fuad Tabba
2024-05-30 18:22 ` Oliver Upton
2024-05-30 20:14 ` Oliver Upton
2024-05-31 6:40 ` Fuad Tabba
2024-05-28 13:13 ` [PATCH v3 00/11] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode Fuad Tabba
2024-05-30 18:29 ` Oliver Upton