* [PATCH v4 0/9] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode
@ 2024-06-03 12:28 Fuad Tabba
2024-06-03 12:28 ` [PATCH v4 1/9] KVM: arm64: Reintroduce __sve_save_state Fuad Tabba
` (9 more replies)
0 siblings, 10 replies; 24+ messages in thread
From: Fuad Tabba @ 2024-06-03 12:28 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
catalin.marinas, philmd, james.morse, suzuki.poulose,
oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
Changes since v3 [1]:
- Rebased on Linux 6.10-rc2 (c3f38fa61af7)
- Dropped v3 patches 8--11 (Oliver)
- Removed unnecessary isb()s (Oliver)
- Formatting/comments (Mark)
- Fix __sve_save_state()/__sve_restore_state() prototypes (Mark)
- Save/restore ffr with the sve state
- Added a patch that checks at hyp that SME features aren't
enabled on guest entry, to ensure it's not in streaming mode
With the KVM host data rework [2], handling of fpsimd and sve
state in protected mode is done at hyp. For protected VMs, we
don't want to leak any guest state to the host, including whether
a guest has used fpsimd/sve.
To complete the work started with the host data rework with
regard to protected mode, ensure that the host's fpsimd context
and its sve context are restored on guest exit, since the rework
has hidden the fpsimd/sve state from the host.
This patch series eagerly restores the host fpsimd/sve state on
guest exit when running in protected mode; the restore is needed
only if the guest has used fpsimd/sve. This means that saving
the state remains lazy, similar to the behavior of KVM in other
modes, but the restoration of the host state is eager.
This series is based on Linux 6.10-rc2 (c3f38fa61af7).
Tested on qemu, with the kernel sve stress tests.
Cheers,
/fuad
[1] https://lore.kernel.org/all/20240528125914.277057-1-tabba@google.com/
[2] https://lore.kernel.org/all/20240322170945.3292593-1-maz@kernel.org/
Fuad Tabba (9):
KVM: arm64: Reintroduce __sve_save_state
KVM: arm64: Fix prototype for __sve_save_state/__sve_restore_state
KVM: arm64: Abstract set/clear of CPTR_EL2 bits behind helper
KVM: arm64: Specialize handling of host fpsimd state on trap
KVM: arm64: Allocate memory mapped at hyp for host sve state in pKVM
KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM
KVM: arm64: Consolidate initializing the host data's fpsimd_state/sve
in pKVM
KVM: arm64: Refactor CPACR trap bit setting/clearing to use ELx format
KVM: arm64: Ensure that SME controls are disabled in protected mode
arch/arm64/include/asm/el2_setup.h | 6 +-
arch/arm64/include/asm/kvm_arm.h | 6 ++
arch/arm64/include/asm/kvm_emulate.h | 71 +++++++++++++++++++--
arch/arm64/include/asm/kvm_host.h | 25 +++++++-
arch/arm64/include/asm/kvm_hyp.h | 4 +-
arch/arm64/include/asm/kvm_pkvm.h | 9 +++
arch/arm64/kvm/arm.c | 76 ++++++++++++++++++++++
arch/arm64/kvm/fpsimd.c | 11 +++-
arch/arm64/kvm/hyp/fpsimd.S | 6 ++
arch/arm64/kvm/hyp/include/hyp/switch.h | 36 ++++++-----
arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 1 -
arch/arm64/kvm/hyp/nvhe/hyp-main.c | 84 ++++++++++++++++++++++---
arch/arm64/kvm/hyp/nvhe/pkvm.c | 17 ++---
arch/arm64/kvm/hyp/nvhe/setup.c | 25 +++++++-
arch/arm64/kvm/hyp/nvhe/switch.c | 24 ++++++-
arch/arm64/kvm/hyp/vhe/switch.c | 12 ++--
arch/arm64/kvm/reset.c | 3 +
17 files changed, 358 insertions(+), 58 deletions(-)
base-commit: c3f38fa61af77b49866b006939479069cd451173
--
2.45.1.288.g0e0cd299f1-goog
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
* [PATCH v4 1/9] KVM: arm64: Reintroduce __sve_save_state
2024-06-03 12:28 [PATCH v4 0/9] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode Fuad Tabba
@ 2024-06-03 12:28 ` Fuad Tabba
2024-06-03 13:55 ` Mark Brown
2024-06-03 12:28 ` [PATCH v4 2/9] KVM: arm64: Fix prototype for __sve_save_state/__sve_restore_state Fuad Tabba
` (8 subsequent siblings)
9 siblings, 1 reply; 24+ messages in thread
From: Fuad Tabba @ 2024-06-03 12:28 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
catalin.marinas, philmd, james.morse, suzuki.poulose,
oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
Now that the hypervisor is handling the host sve state in
protected mode, it needs to be able to save it.
This reverts commit e66425fc9ba3 ("KVM: arm64: Remove unused
__sve_save_state").
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/include/asm/kvm_hyp.h | 1 +
arch/arm64/kvm/hyp/fpsimd.S | 6 ++++++
2 files changed, 7 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 3e80464f8953..2ab23589339a 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -111,6 +111,7 @@ void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu);
void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
+void __sve_save_state(void *sve_pffr, u32 *fpsr);
void __sve_restore_state(void *sve_pffr, u32 *fpsr);
u64 __guest_enter(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/hyp/fpsimd.S b/arch/arm64/kvm/hyp/fpsimd.S
index 61e6f3ba7b7d..e950875e31ce 100644
--- a/arch/arm64/kvm/hyp/fpsimd.S
+++ b/arch/arm64/kvm/hyp/fpsimd.S
@@ -25,3 +25,9 @@ SYM_FUNC_START(__sve_restore_state)
sve_load 0, x1, x2, 3
ret
SYM_FUNC_END(__sve_restore_state)
+
+SYM_FUNC_START(__sve_save_state)
+ mov x2, #1
+ sve_save 0, x1, x2, 3
+ ret
+SYM_FUNC_END(__sve_save_state)
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH v4 2/9] KVM: arm64: Fix prototype for __sve_save_state/__sve_restore_state
2024-06-03 12:28 [PATCH v4 0/9] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode Fuad Tabba
2024-06-03 12:28 ` [PATCH v4 1/9] KVM: arm64: Reintroduce __sve_save_state Fuad Tabba
@ 2024-06-03 12:28 ` Fuad Tabba
2024-06-03 14:19 ` Mark Brown
2024-06-03 12:28 ` [PATCH v4 3/9] KVM: arm64: Abstract set/clear of CPTR_EL2 bits behind helper Fuad Tabba
` (7 subsequent siblings)
9 siblings, 1 reply; 24+ messages in thread
From: Fuad Tabba @ 2024-06-03 12:28 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
catalin.marinas, philmd, james.morse, suzuki.poulose,
oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
Since the prototypes for __sve_save_state/__sve_restore_state at
hyp were added, the underlying macro has acquired a third
parameter for saving/restoring ffr.
Fix the prototypes to account for the third parameter, and
restore the ffr for the guest since it is saved.
Suggested-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/include/asm/kvm_hyp.h | 4 ++--
arch/arm64/kvm/hyp/include/hyp/switch.h | 3 ++-
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 2ab23589339a..686cce7e4e96 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -111,8 +111,8 @@ void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu);
void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
-void __sve_save_state(void *sve_pffr, u32 *fpsr);
-void __sve_restore_state(void *sve_pffr, u32 *fpsr);
+void __sve_save_state(void *sve_pffr, u32 *fpsr, int save_ffr);
+void __sve_restore_state(void *sve_pffr, u32 *fpsr, int restore_ffr);
u64 __guest_enter(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index a92566f36022..d58933ae8fd5 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -316,7 +316,8 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
{
sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
__sve_restore_state(vcpu_sve_pffr(vcpu),
- &vcpu->arch.ctxt.fp_regs.fpsr);
+ &vcpu->arch.ctxt.fp_regs.fpsr,
+ true);
write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR);
}
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH v4 3/9] KVM: arm64: Abstract set/clear of CPTR_EL2 bits behind helper
2024-06-03 12:28 [PATCH v4 0/9] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode Fuad Tabba
2024-06-03 12:28 ` [PATCH v4 1/9] KVM: arm64: Reintroduce __sve_save_state Fuad Tabba
2024-06-03 12:28 ` [PATCH v4 2/9] KVM: arm64: Fix prototype for __sve_save_state/__sve_restore_state Fuad Tabba
@ 2024-06-03 12:28 ` Fuad Tabba
2024-06-03 12:28 ` [PATCH v4 4/9] KVM: arm64: Specialize handling of host fpsimd state on trap Fuad Tabba
` (6 subsequent siblings)
9 siblings, 0 replies; 24+ messages in thread
From: Fuad Tabba @ 2024-06-03 12:28 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
catalin.marinas, philmd, james.morse, suzuki.poulose,
oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
The same traps controlled by CPTR_EL2 or CPACR_EL1 need to be
toggled in different parts of the code, but the exact bits and
their polarity differ between these two formats and the mode
(vhe/nvhe/hvhe).
To reduce the amount of duplicated code and the chance of getting
the wrong bit/polarity or missing a field, abstract the set/clear
of CPTR_EL2 bits behind a helper.
Since (h)VHE is the way of the future, use the CPACR_EL1 format,
which is a subset of the VHE CPTR_EL2, as a reference.
No functional change intended.
Suggested-by: Oliver Upton <oliver.upton@linux.dev>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/include/asm/kvm_arm.h | 6 +++
arch/arm64/include/asm/kvm_emulate.h | 62 +++++++++++++++++++++++++
arch/arm64/kvm/hyp/include/hyp/switch.h | 18 ++-----
arch/arm64/kvm/hyp/nvhe/hyp-main.c | 6 +--
4 files changed, 73 insertions(+), 19 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index e01bb5ca13b7..b2adc2c6c82a 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -305,6 +305,12 @@
GENMASK(19, 14) | \
BIT(11))
+#define CPTR_VHE_EL2_RES0 (GENMASK(63, 32) | \
+ GENMASK(27, 26) | \
+ GENMASK(23, 22) | \
+ GENMASK(19, 18) | \
+ GENMASK(15, 0))
+
/* Hyp Debug Configuration Register bits */
#define MDCR_EL2_E2TB_MASK (UL(0x3))
#define MDCR_EL2_E2TB_SHIFT (UL(24))
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 501e3e019c93..2d7a0bdf9d03 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -557,6 +557,68 @@ static __always_inline void kvm_incr_pc(struct kvm_vcpu *vcpu)
vcpu_set_flag((v), e); \
} while (0)
+#define __build_check_all_or_none(r, bits) \
+ BUILD_BUG_ON(((r) & (bits)) && ((r) & (bits)) != (bits))
+
+#define __cpacr_to_cptr_clr(clr, set) \
+ ({ \
+ u64 cptr = 0; \
+ \
+ if ((set) & CPACR_ELx_FPEN) \
+ cptr |= CPTR_EL2_TFP; \
+ if ((set) & CPACR_ELx_ZEN) \
+ cptr |= CPTR_EL2_TZ; \
+ if ((set) & CPACR_ELx_SMEN) \
+ cptr |= CPTR_EL2_TSM; \
+ if ((clr) & CPACR_ELx_TTA) \
+ cptr |= CPTR_EL2_TTA; \
+ if ((clr) & CPTR_EL2_TAM) \
+ cptr |= CPTR_EL2_TAM; \
+ if ((clr) & CPTR_EL2_TCPAC) \
+ cptr |= CPTR_EL2_TCPAC; \
+ \
+ cptr; \
+ })
+
+#define __cpacr_to_cptr_set(clr, set) \
+ ({ \
+ u64 cptr = 0; \
+ \
+ if ((clr) & CPACR_ELx_FPEN) \
+ cptr |= CPTR_EL2_TFP; \
+ if ((clr) & CPACR_ELx_ZEN) \
+ cptr |= CPTR_EL2_TZ; \
+ if ((clr) & CPACR_ELx_SMEN) \
+ cptr |= CPTR_EL2_TSM; \
+ if ((set) & CPACR_ELx_TTA) \
+ cptr |= CPTR_EL2_TTA; \
+ if ((set) & CPTR_EL2_TAM) \
+ cptr |= CPTR_EL2_TAM; \
+ if ((set) & CPTR_EL2_TCPAC) \
+ cptr |= CPTR_EL2_TCPAC; \
+ \
+ cptr; \
+ })
+
+#define cpacr_clear_set(clr, set) \
+ do { \
+ BUILD_BUG_ON((set) & CPTR_VHE_EL2_RES0); \
+ BUILD_BUG_ON((clr) & CPACR_ELx_E0POE); \
+ __build_check_all_or_none((clr), CPACR_ELx_FPEN); \
+ __build_check_all_or_none((set), CPACR_ELx_FPEN); \
+ __build_check_all_or_none((clr), CPACR_ELx_ZEN); \
+ __build_check_all_or_none((set), CPACR_ELx_ZEN); \
+ __build_check_all_or_none((clr), CPACR_ELx_SMEN); \
+ __build_check_all_or_none((set), CPACR_ELx_SMEN); \
+ \
+ if (has_vhe() || has_hvhe()) \
+ sysreg_clear_set(cpacr_el1, clr, set); \
+ else \
+ sysreg_clear_set(cptr_el2, \
+ __cpacr_to_cptr_clr(clr, set), \
+ __cpacr_to_cptr_set(clr, set));\
+ } while (0)
+
static __always_inline void kvm_write_cptr_el2(u64 val)
{
if (has_vhe() || has_hvhe())
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index d58933ae8fd5..055d2ca7264e 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -331,7 +331,6 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
{
bool sve_guest;
u8 esr_ec;
- u64 reg;
if (!system_supports_fpsimd())
return false;
@@ -354,19 +353,10 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
/* Valid trap. Switch the context: */
/* First disable enough traps to allow us to update the registers */
- if (has_vhe() || has_hvhe()) {
- reg = CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN;
- if (sve_guest)
- reg |= CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN;
-
- sysreg_clear_set(cpacr_el1, 0, reg);
- } else {
- reg = CPTR_EL2_TFP;
- if (sve_guest)
- reg |= CPTR_EL2_TZ;
-
- sysreg_clear_set(cptr_el2, reg, 0);
- }
+ if (sve_guest)
+ cpacr_clear_set(0, CPACR_ELx_FPEN | CPACR_ELx_ZEN);
+ else
+ cpacr_clear_set(0, CPACR_ELx_FPEN);
isb();
/* Write out the host state if it's in the registers */
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index d5c48dc98f67..f71394d0e32a 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -405,11 +405,7 @@ void handle_trap(struct kvm_cpu_context *host_ctxt)
handle_host_smc(host_ctxt);
break;
case ESR_ELx_EC_SVE:
- if (has_hvhe())
- sysreg_clear_set(cpacr_el1, 0, (CPACR_EL1_ZEN_EL1EN |
- CPACR_EL1_ZEN_EL0EN));
- else
- sysreg_clear_set(cptr_el2, CPTR_EL2_TZ, 0);
+ cpacr_clear_set(0, CPACR_ELx_ZEN);
isb();
sve_cond_update_zcr_vq(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
break;
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH v4 4/9] KVM: arm64: Specialize handling of host fpsimd state on trap
2024-06-03 12:28 [PATCH v4 0/9] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode Fuad Tabba
` (2 preceding siblings ...)
2024-06-03 12:28 ` [PATCH v4 3/9] KVM: arm64: Abstract set/clear of CPTR_EL2 bits behind helper Fuad Tabba
@ 2024-06-03 12:28 ` Fuad Tabba
2024-06-03 12:28 ` [PATCH v4 5/9] KVM: arm64: Allocate memory mapped at hyp for host sve state in pKVM Fuad Tabba
` (5 subsequent siblings)
9 siblings, 0 replies; 24+ messages in thread
From: Fuad Tabba @ 2024-06-03 12:28 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
catalin.marinas, philmd, james.morse, suzuki.poulose,
oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
In subsequent patches, n/vhe will diverge on saving the host
fpsimd/sve state when taking a guest fpsimd/sve trap. Add a
specialized helper to handle it.
No functional change intended.
Reviewed-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/kvm/hyp/include/hyp/switch.h | 4 +++-
arch/arm64/kvm/hyp/nvhe/switch.c | 5 +++++
arch/arm64/kvm/hyp/vhe/switch.c | 5 +++++
3 files changed, 13 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 055d2ca7264e..9b904c858df0 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -321,6 +321,8 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR);
}
+static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu);
+
/*
* We trap the first access to the FP/SIMD to save the host context and
* restore the guest context lazily.
@@ -361,7 +363,7 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
/* Write out the host state if it's in the registers */
if (host_owns_fp_regs())
- __fpsimd_save_state(*host_data_ptr(fpsimd_state));
+ kvm_hyp_save_fpsimd_host(vcpu);
/* Restore the guest state */
if (sve_guest)
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 6758cd905570..019f863922fa 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -182,6 +182,11 @@ static bool kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu, u64 *exit_code)
kvm_handle_pvm_sysreg(vcpu, exit_code));
}
+static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
+{
+ __fpsimd_save_state(*host_data_ptr(fpsimd_state));
+}
+
static const exit_handler_fn hyp_exit_handlers[] = {
[0 ... ESR_ELx_EC_MAX] = NULL,
[ESR_ELx_EC_CP15_32] = kvm_hyp_handle_cp15_32,
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index d7af5f46f22a..20073579e9f5 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -262,6 +262,11 @@ static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code)
return true;
}
+static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
+{
+ __fpsimd_save_state(*host_data_ptr(fpsimd_state));
+}
+
static const exit_handler_fn hyp_exit_handlers[] = {
[0 ... ESR_ELx_EC_MAX] = NULL,
[ESR_ELx_EC_CP15_32] = kvm_hyp_handle_cp15_32,
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH v4 5/9] KVM: arm64: Allocate memory mapped at hyp for host sve state in pKVM
2024-06-03 12:28 [PATCH v4 0/9] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode Fuad Tabba
` (3 preceding siblings ...)
2024-06-03 12:28 ` [PATCH v4 4/9] KVM: arm64: Specialize handling of host fpsimd state on trap Fuad Tabba
@ 2024-06-03 12:28 ` Fuad Tabba
2024-06-03 14:50 ` Mark Brown
2024-06-03 12:28 ` [PATCH v4 6/9] KVM: arm64: Eagerly restore host fpsimd/sve " Fuad Tabba
` (4 subsequent siblings)
9 siblings, 1 reply; 24+ messages in thread
From: Fuad Tabba @ 2024-06-03 12:28 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
catalin.marinas, philmd, james.morse, suzuki.poulose,
oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
Protected mode needs to maintain (save/restore) the host's sve
state, rather than relying on the host kernel to do that. This is
to avoid leaking information to the host about guests and the
type of operations they are performing.
As a first step towards that, allocate memory mapped at hyp, per
cpu, for the host sve state. The following patch will use this
memory to save/restore the host state.
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/include/asm/kvm_host.h | 17 ++++++++
arch/arm64/include/asm/kvm_hyp.h | 1 +
arch/arm64/include/asm/kvm_pkvm.h | 9 ++++
arch/arm64/kvm/arm.c | 68 +++++++++++++++++++++++++++++++
arch/arm64/kvm/hyp/nvhe/pkvm.c | 2 +
arch/arm64/kvm/hyp/nvhe/setup.c | 24 +++++++++++
arch/arm64/kvm/reset.c | 3 ++
7 files changed, 124 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 8170c04fde91..90df7ccec5f4 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -76,6 +76,7 @@ static inline enum kvm_mode kvm_get_mode(void) { return KVM_MODE_NONE; };
DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
extern unsigned int __ro_after_init kvm_sve_max_vl;
+extern unsigned int __ro_after_init kvm_host_sve_max_vl;
int __init kvm_arm_init_sve(void);
u32 __attribute_const__ kvm_target_cpu(void);
@@ -521,6 +522,20 @@ struct kvm_cpu_context {
u64 *vncr_array;
};
+struct cpu_sve_state {
+ __u64 zcr_el1;
+
+ /*
+ * Ordering is important since __sve_save_state/__sve_restore_state
+ * relies on it.
+ */
+ __u32 fpsr;
+ __u32 fpcr;
+
+ /* Must be SVE_VQ_BYTES (128 bit) aligned. */
+ __u8 sve_regs[];
+};
+
/*
* This structure is instantiated on a per-CPU basis, and contains
* data that is:
@@ -534,7 +549,9 @@ struct kvm_cpu_context {
*/
struct kvm_host_data {
struct kvm_cpu_context host_ctxt;
+
struct user_fpsimd_state *fpsimd_state; /* hyp VA */
+ struct cpu_sve_state *sve_state; /* hyp VA */
/* Ownership of the FP regs */
enum {
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 686cce7e4e96..b05bceca3385 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -143,5 +143,6 @@ extern u64 kvm_nvhe_sym(id_aa64smfr0_el1_sys_val);
extern unsigned long kvm_nvhe_sym(__icache_flags);
extern unsigned int kvm_nvhe_sym(kvm_arm_vmid_bits);
+extern unsigned int kvm_nvhe_sym(kvm_host_sve_max_vl);
#endif /* __ARM64_KVM_HYP_H__ */
diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index ad9cfb5c1ff4..cd56acd9a842 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -128,4 +128,13 @@ static inline unsigned long hyp_ffa_proxy_pages(void)
return (2 * KVM_FFA_MBOX_NR_PAGES) + DIV_ROUND_UP(desc_max, PAGE_SIZE);
}
+static inline size_t pkvm_host_sve_state_size(void)
+{
+ if (!system_supports_sve())
+ return 0;
+
+ return size_add(sizeof(struct cpu_sve_state),
+ SVE_SIG_REGS_SIZE(sve_vq_from_vl(kvm_host_sve_max_vl)));
+}
+
#endif /* __ARM64_KVM_PKVM_H__ */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 9996a989b52e..1acf7415e831 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1931,6 +1931,11 @@ static unsigned long nvhe_percpu_order(void)
return size ? get_order(size) : 0;
}
+static size_t pkvm_host_sve_state_order(void)
+{
+ return get_order(pkvm_host_sve_state_size());
+}
+
/* A lookup table holding the hypervisor VA for each vector slot */
static void *hyp_spectre_vector_selector[BP_HARDEN_EL2_SLOTS];
@@ -2310,12 +2315,20 @@ static void __init teardown_subsystems(void)
static void __init teardown_hyp_mode(void)
{
+ bool free_sve = system_supports_sve() && is_protected_kvm_enabled();
int cpu;
free_hyp_pgds();
for_each_possible_cpu(cpu) {
free_page(per_cpu(kvm_arm_hyp_stack_page, cpu));
free_pages(kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu], nvhe_percpu_order());
+
+ if (free_sve) {
+ struct cpu_sve_state *sve_state;
+
+ sve_state = per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state;
+ free_pages((unsigned long) sve_state, pkvm_host_sve_state_order());
+ }
}
}
@@ -2398,6 +2411,50 @@ static int __init kvm_hyp_init_protection(u32 hyp_va_bits)
return 0;
}
+static int init_pkvm_host_sve_state(void)
+{
+ int cpu;
+
+ if (!system_supports_sve())
+ return 0;
+
+ /* Allocate pages for host sve state in protected mode. */
+ for_each_possible_cpu(cpu) {
+ struct page *page = alloc_pages(GFP_KERNEL, pkvm_host_sve_state_order());
+
+ if (!page)
+ return -ENOMEM;
+
+ per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state = page_address(page);
+ }
+
+ /*
+ * Don't map the pages in hyp since these are only used in protected
+ * mode, which will (re)create its own mapping when initialized.
+ */
+
+ return 0;
+}
+
+/*
+ * Finalizes the initialization of hyp mode, once everything else is initialized
+ * and the initialization process cannot fail.
+ */
+static void finalize_init_hyp_mode(void)
+{
+ int cpu;
+
+ if (!is_protected_kvm_enabled() || !system_supports_sve())
+ return;
+
+ for_each_possible_cpu(cpu) {
+ struct cpu_sve_state *sve_state;
+
+ sve_state = per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state;
+ per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state = kern_hyp_va(sve_state);
+ }
+}
+
static void pkvm_hyp_init_ptrauth(void)
{
struct kvm_cpu_context *hyp_ctxt;
@@ -2566,6 +2623,10 @@ static int __init init_hyp_mode(void)
goto out_err;
}
+ err = init_pkvm_host_sve_state();
+ if (err)
+ goto out_err;
+
err = kvm_hyp_init_protection(hyp_va_bits);
if (err) {
kvm_err("Failed to init hyp memory protection\n");
@@ -2730,6 +2791,13 @@ static __init int kvm_arm_init(void)
if (err)
goto out_subs;
+ /*
+ * This should be called after initialization is done and failure isn't
+ * possible anymore.
+ */
+ if (!in_hyp_mode)
+ finalize_init_hyp_mode();
+
kvm_arm_initialised = true;
return 0;
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 16aa4875ddb8..25e9a94f6d76 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -18,6 +18,8 @@ unsigned long __icache_flags;
/* Used by kvm_get_vttbr(). */
unsigned int kvm_arm_vmid_bits;
+unsigned int kvm_host_sve_max_vl;
+
/*
* Set trap register values based on features in ID_AA64PFR0.
*/
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 859f22f754d3..3fae42479598 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -67,6 +67,28 @@ static int divide_memory_pool(void *virt, unsigned long size)
return 0;
}
+static int pkvm_create_host_sve_mappings(void)
+{
+ void *start, *end;
+ int ret, i;
+
+ if (!system_supports_sve())
+ return 0;
+
+ for (i = 0; i < hyp_nr_cpus; i++) {
+ struct kvm_host_data *host_data = per_cpu_ptr(&kvm_host_data, i);
+ struct cpu_sve_state *sve_state = host_data->sve_state;
+
+ start = kern_hyp_va(sve_state);
+ end = start + PAGE_ALIGN(pkvm_host_sve_state_size());
+ ret = pkvm_create_mappings(start, end, PAGE_HYP);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
unsigned long *per_cpu_base,
u32 hyp_va_bits)
@@ -125,6 +147,8 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
return ret;
}
+ pkvm_create_host_sve_mappings();
+
/*
* Map the host sections RO in the hypervisor, but transfer the
* ownership from the host to the hypervisor itself to make sure they
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 1b7b58cb121f..3fc8ca164dbe 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -32,6 +32,7 @@
/* Maximum phys_shift supported for any VM on this host */
static u32 __ro_after_init kvm_ipa_limit;
+unsigned int __ro_after_init kvm_host_sve_max_vl;
/*
* ARMv8 Reset Values
@@ -51,6 +52,8 @@ int __init kvm_arm_init_sve(void)
{
if (system_supports_sve()) {
kvm_sve_max_vl = sve_max_virtualisable_vl();
+ kvm_host_sve_max_vl = sve_max_vl();
+ kvm_nvhe_sym(kvm_host_sve_max_vl) = kvm_host_sve_max_vl;
/*
* The get_sve_reg()/set_sve_reg() ioctl interface will need
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH v4 6/9] KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM
2024-06-03 12:28 [PATCH v4 0/9] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode Fuad Tabba
` (4 preceding siblings ...)
2024-06-03 12:28 ` [PATCH v4 5/9] KVM: arm64: Allocate memory mapped at hyp for host sve state in pKVM Fuad Tabba
@ 2024-06-03 12:28 ` Fuad Tabba
2024-06-03 15:52 ` Mark Brown
2024-06-03 12:28 ` [PATCH v4 7/9] KVM: arm64: Consolidate initializing the host data's fpsimd_state/sve " Fuad Tabba
` (3 subsequent siblings)
9 siblings, 1 reply; 24+ messages in thread
From: Fuad Tabba @ 2024-06-03 12:28 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
catalin.marinas, philmd, james.morse, suzuki.poulose,
oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
When running in protected mode we don't want to leak protected
guest state to the host, including whether a guest has used
fpsimd/sve. Therefore, eagerly restore the host state on guest
exit when running in protected mode; the restore is needed only
if the guest has used fpsimd/sve.
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/kvm/hyp/include/hyp/switch.h | 13 ++++-
arch/arm64/kvm/hyp/nvhe/hyp-main.c | 67 +++++++++++++++++++++++--
arch/arm64/kvm/hyp/nvhe/pkvm.c | 2 +
arch/arm64/kvm/hyp/nvhe/switch.c | 16 +++++-
4 files changed, 93 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 9b904c858df0..0c4de44534b7 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -321,6 +321,17 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR);
}
+static inline void __hyp_sve_save_host(void)
+{
+ struct cpu_sve_state *sve_state = *host_data_ptr(sve_state);
+
+ sve_state->zcr_el1 = read_sysreg_el1(SYS_ZCR);
+ write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
+ __sve_save_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_sve_max_vl),
+ &sve_state->fpsr,
+ true);
+}
+
static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu);
/*
@@ -355,7 +366,7 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
/* Valid trap. Switch the context: */
/* First disable enough traps to allow us to update the registers */
- if (sve_guest)
+ if (sve_guest || (is_protected_kvm_enabled() && system_supports_sve()))
cpacr_clear_set(0, CPACR_ELx_FPEN | CPACR_ELx_ZEN);
else
cpacr_clear_set(0, CPACR_ELx_FPEN);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index f71394d0e32a..bd93b8a9e172 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -23,20 +23,80 @@ DEFINE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
void __kvm_hyp_host_forward_smc(struct kvm_cpu_context *host_ctxt);
+static void __hyp_sve_save_guest(struct kvm_vcpu *vcpu)
+{
+ __vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR);
+ /*
+ * On saving/restoring guest sve state, always use the maximum VL for
+ * the guest. The layout of the data when saving the sve state depends
+ * on the VL, so use a consistent (i.e., the maximum) guest VL.
+ */
+ sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
+ __sve_save_state(vcpu_sve_pffr(vcpu), &vcpu->arch.ctxt.fp_regs.fpsr, true);
+ write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
+}
+
+static void __hyp_sve_restore_host(void)
+{
+ struct cpu_sve_state *sve_state = *host_data_ptr(sve_state);
+
+ /*
+ * On saving/restoring host sve state, always use the maximum VL for
+ * the host. The layout of the data when saving the sve state depends
+ * on the VL, so use a consistent (i.e., the maximum) host VL.
+ *
+ * Setting ZCR_EL2 to ZCR_ELx_LEN_MASK sets the effective length
+ * supported by the system (or limited at EL3).
+ */
+ write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
+ __sve_restore_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_sve_max_vl),
+ &sve_state->fpsr,
+ true);
+ write_sysreg_el1(sve_state->zcr_el1, SYS_ZCR);
+}
+
+static void fpsimd_sve_flush(void)
+{
+ *host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED;
+}
+
+static void fpsimd_sve_sync(struct kvm_vcpu *vcpu)
+{
+ if (!guest_owns_fp_regs())
+ return;
+
+ cpacr_clear_set(0, CPACR_ELx_FPEN | CPACR_ELx_ZEN);
+ isb();
+
+ if (vcpu_has_sve(vcpu))
+ __hyp_sve_save_guest(vcpu);
+ else
+ __fpsimd_save_state(&vcpu->arch.ctxt.fp_regs);
+
+ if (system_supports_sve())
+ __hyp_sve_restore_host();
+ else
+ __fpsimd_restore_state(*host_data_ptr(fpsimd_state));
+
+ *host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED;
+}
+
static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
{
struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
+ fpsimd_sve_flush();
+
hyp_vcpu->vcpu.arch.ctxt = host_vcpu->arch.ctxt;
hyp_vcpu->vcpu.arch.sve_state = kern_hyp_va(host_vcpu->arch.sve_state);
- hyp_vcpu->vcpu.arch.sve_max_vl = host_vcpu->arch.sve_max_vl;
+ /* Limit guest vector length to the maximum supported by the host. */
+ hyp_vcpu->vcpu.arch.sve_max_vl = min(host_vcpu->arch.sve_max_vl, kvm_host_sve_max_vl);
hyp_vcpu->vcpu.arch.hw_mmu = host_vcpu->arch.hw_mmu;
hyp_vcpu->vcpu.arch.hcr_el2 = host_vcpu->arch.hcr_el2;
hyp_vcpu->vcpu.arch.mdcr_el2 = host_vcpu->arch.mdcr_el2;
- hyp_vcpu->vcpu.arch.cptr_el2 = host_vcpu->arch.cptr_el2;
hyp_vcpu->vcpu.arch.iflags = host_vcpu->arch.iflags;
@@ -54,10 +114,11 @@ static void sync_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
struct vgic_v3_cpu_if *host_cpu_if = &host_vcpu->arch.vgic_cpu.vgic_v3;
unsigned int i;
+ fpsimd_sve_sync(&hyp_vcpu->vcpu);
+
host_vcpu->arch.ctxt = hyp_vcpu->vcpu.arch.ctxt;
host_vcpu->arch.hcr_el2 = hyp_vcpu->vcpu.arch.hcr_el2;
- host_vcpu->arch.cptr_el2 = hyp_vcpu->vcpu.arch.cptr_el2;
host_vcpu->arch.fault = hyp_vcpu->vcpu.arch.fault;
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 25e9a94f6d76..feb27b4ce459 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -588,6 +588,8 @@ int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
if (ret)
unmap_donated_memory(hyp_vcpu, sizeof(*hyp_vcpu));
+ hyp_vcpu->vcpu.arch.cptr_el2 = kvm_get_reset_cptr_el2(&hyp_vcpu->vcpu);
+
return ret;
}
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 019f863922fa..bef74de7065b 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -184,7 +184,21 @@ static bool kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu, u64 *exit_code)
static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
{
- __fpsimd_save_state(*host_data_ptr(fpsimd_state));
+ /*
+ * Non-protected kvm relies on the host restoring its sve state.
+ * Protected kvm restores the host's sve state so as not to reveal that
+ * fpsimd was used by a guest, nor to leak the upper sve bits.
+ */
+ if (unlikely(is_protected_kvm_enabled() && system_supports_sve())) {
+ __hyp_sve_save_host();
+
+ /* Re-enable SVE traps if not supported for the guest vcpu. */
+ if (!vcpu_has_sve(vcpu))
+ cpacr_clear_set(CPACR_ELx_ZEN, 0);
+
+ } else {
+ __fpsimd_save_state(*host_data_ptr(fpsimd_state));
+ }
}
static const exit_handler_fn hyp_exit_handlers[] = {
--
2.45.1.288.g0e0cd299f1-goog
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH v4 7/9] KVM: arm64: Consolidate initializing the host data's fpsimd_state/sve in pKVM
2024-06-03 12:28 [PATCH v4 0/9] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode Fuad Tabba
` (5 preceding siblings ...)
2024-06-03 12:28 ` [PATCH v4 6/9] KVM: arm64: Eagerly restore host fpsimd/sve " Fuad Tabba
@ 2024-06-03 12:28 ` Fuad Tabba
2024-06-03 15:43 ` Mark Brown
2024-06-03 12:28 ` [PATCH v4 8/9] KVM: arm64: Refactor CPACR trap bit setting/clearing to use ELx format Fuad Tabba
` (2 subsequent siblings)
9 siblings, 1 reply; 24+ messages in thread
From: Fuad Tabba @ 2024-06-03 12:28 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
catalin.marinas, philmd, james.morse, suzuki.poulose,
oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
Now that we have introduced finalize_init_hyp_mode(), let's
consolidate the initialization of the host_data fpsimd_state and
sve state.
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/include/asm/kvm_host.h | 10 ++++++++--
arch/arm64/kvm/arm.c | 20 ++++++++++++++------
arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 1 -
arch/arm64/kvm/hyp/nvhe/pkvm.c | 11 -----------
arch/arm64/kvm/hyp/nvhe/setup.c | 1 -
5 files changed, 22 insertions(+), 21 deletions(-)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 90df7ccec5f4..36b8e97bf49e 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -550,8 +550,14 @@ struct cpu_sve_state {
struct kvm_host_data {
struct kvm_cpu_context host_ctxt;
- struct user_fpsimd_state *fpsimd_state; /* hyp VA */
- struct cpu_sve_state *sve_state; /* hyp VA */
+ /*
+ * All pointers in this union are hyp VA.
+ * sve_state is only used in pKVM and if system_supports_sve().
+ */
+ union {
+ struct user_fpsimd_state *fpsimd_state;
+ struct cpu_sve_state *sve_state;
+ };
/* Ownership of the FP regs */
enum {
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 1acf7415e831..59716789fe0f 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -2444,14 +2444,22 @@ static void finalize_init_hyp_mode(void)
{
int cpu;
- if (!is_protected_kvm_enabled() || !system_supports_sve())
- return;
+ if (system_supports_sve() && is_protected_kvm_enabled()) {
+ for_each_possible_cpu(cpu) {
+ struct cpu_sve_state *sve_state;
- for_each_possible_cpu(cpu) {
- struct cpu_sve_state *sve_state;
+ sve_state = per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state;
+ per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state =
+ kern_hyp_va(sve_state);
+ }
+ } else {
+ for_each_possible_cpu(cpu) {
+ struct user_fpsimd_state *fpsimd_state;
- sve_state = per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state;
- per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state = kern_hyp_va(sve_state);
+ fpsimd_state = &per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->host_ctxt.fp_regs;
+ per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->fpsimd_state =
+ kern_hyp_va(fpsimd_state);
+ }
}
}
diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index 22f374e9f532..24a9a8330d19 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -59,7 +59,6 @@ static inline bool pkvm_hyp_vcpu_is_protected(struct pkvm_hyp_vcpu *hyp_vcpu)
}
void pkvm_hyp_vm_table_init(void *tbl);
-void pkvm_host_fpsimd_state_init(void);
int __pkvm_init_vm(struct kvm *host_kvm, unsigned long vm_hva,
unsigned long pgd_hva);
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index feb27b4ce459..ea67fcbf8376 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -249,17 +249,6 @@ void pkvm_hyp_vm_table_init(void *tbl)
vm_table = tbl;
}
-void pkvm_host_fpsimd_state_init(void)
-{
- unsigned long i;
-
- for (i = 0; i < hyp_nr_cpus; i++) {
- struct kvm_host_data *host_data = per_cpu_ptr(&kvm_host_data, i);
-
- host_data->fpsimd_state = &host_data->host_ctxt.fp_regs;
- }
-}
-
/*
* Return the hyp vm structure corresponding to the handle.
*/
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 3fae42479598..f4350ba07b0b 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -324,7 +324,6 @@ void __noreturn __pkvm_init_finalise(void)
goto out;
pkvm_hyp_vm_table_init(vm_table_base);
- pkvm_host_fpsimd_state_init();
out:
/*
* We tail-called to here from handle___pkvm_init() and will not return,
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH v4 8/9] KVM: arm64: Refactor CPACR trap bit setting/clearing to use ELx format
2024-06-03 12:28 [PATCH v4 0/9] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode Fuad Tabba
` (6 preceding siblings ...)
2024-06-03 12:28 ` [PATCH v4 7/9] KVM: arm64: Consolidate initializing the host data's fpsimd_state/sve " Fuad Tabba
@ 2024-06-03 12:28 ` Fuad Tabba
2024-06-03 12:28 ` [PATCH v4 9/9] KVM: arm64: Ensure that SME controls are disabled in protected mode Fuad Tabba
2024-06-04 14:30 ` [PATCH v4 0/9] KVM: arm64: Fix handling of host fpsimd/sve state " Marc Zyngier
9 siblings, 0 replies; 24+ messages in thread
From: Fuad Tabba @ 2024-06-03 12:28 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
catalin.marinas, philmd, james.morse, suzuki.poulose,
oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
When setting/clearing CPACR bits for EL0 and EL1, use the ELx
format of the bits, which covers both. This makes the code
clearer, and reduces the chances of accidentally missing a bit.
No functional change intended.
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/include/asm/el2_setup.h | 6 +++---
arch/arm64/include/asm/kvm_emulate.h | 9 ++++-----
arch/arm64/kvm/fpsimd.c | 4 +---
arch/arm64/kvm/hyp/nvhe/pkvm.c | 2 +-
arch/arm64/kvm/hyp/nvhe/switch.c | 5 ++---
arch/arm64/kvm/hyp/vhe/switch.c | 7 +++----
6 files changed, 14 insertions(+), 19 deletions(-)
diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
index e4546b29dd0c..fd87c4b8f984 100644
--- a/arch/arm64/include/asm/el2_setup.h
+++ b/arch/arm64/include/asm/el2_setup.h
@@ -146,7 +146,7 @@
/* Coprocessor traps */
.macro __init_el2_cptr
__check_hvhe .LnVHE_\@, x1
- mov x0, #(CPACR_EL1_FPEN_EL1EN | CPACR_EL1_FPEN_EL0EN)
+ mov x0, #CPACR_ELx_FPEN
msr cpacr_el1, x0
b .Lskip_set_cptr_\@
.LnVHE_\@:
@@ -277,7 +277,7 @@
// (h)VHE case
mrs x0, cpacr_el1 // Disable SVE traps
- orr x0, x0, #(CPACR_EL1_ZEN_EL1EN | CPACR_EL1_ZEN_EL0EN)
+ orr x0, x0, #CPACR_ELx_ZEN
msr cpacr_el1, x0
b .Lskip_set_cptr_\@
@@ -298,7 +298,7 @@
// (h)VHE case
mrs x0, cpacr_el1 // Disable SME traps
- orr x0, x0, #(CPACR_EL1_SMEN_EL0EN | CPACR_EL1_SMEN_EL1EN)
+ orr x0, x0, #CPACR_ELx_SMEN
msr cpacr_el1, x0
b .Lskip_set_cptr_sme_\@
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 2d7a0bdf9d03..21650e7924d4 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -632,17 +632,16 @@ static __always_inline u64 kvm_get_reset_cptr_el2(struct kvm_vcpu *vcpu)
u64 val;
if (has_vhe()) {
- val = (CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN |
- CPACR_EL1_ZEN_EL1EN);
+ val = (CPACR_ELx_FPEN | CPACR_EL1_ZEN_EL1EN);
if (cpus_have_final_cap(ARM64_SME))
val |= CPACR_EL1_SMEN_EL1EN;
} else if (has_hvhe()) {
- val = (CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN);
+ val = CPACR_ELx_FPEN;
if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs())
- val |= CPACR_EL1_ZEN_EL1EN | CPACR_EL1_ZEN_EL0EN;
+ val |= CPACR_ELx_ZEN;
if (cpus_have_final_cap(ARM64_SME))
- val |= CPACR_EL1_SMEN_EL1EN | CPACR_EL1_SMEN_EL0EN;
+ val |= CPACR_ELx_SMEN;
} else {
val = CPTR_NVHE_EL2_RES1;
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index 1807d3a79a8a..eb21f29d91fc 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -161,9 +161,7 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
if (has_vhe() && system_supports_sme()) {
/* Also restore EL0 state seen on entry */
if (vcpu_get_flag(vcpu, HOST_SME_ENABLED))
- sysreg_clear_set(CPACR_EL1, 0,
- CPACR_EL1_SMEN_EL0EN |
- CPACR_EL1_SMEN_EL1EN);
+ sysreg_clear_set(CPACR_EL1, 0, CPACR_ELx_SMEN);
else
sysreg_clear_set(CPACR_EL1,
CPACR_EL1_SMEN_EL0EN,
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index ea67fcbf8376..95cf18574251 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -65,7 +65,7 @@ static void pvm_init_traps_aa64pfr0(struct kvm_vcpu *vcpu)
/* Trap SVE */
if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE), feature_ids)) {
if (has_hvhe())
- cptr_clear |= CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN;
+ cptr_clear |= CPACR_ELx_ZEN;
else
cptr_set |= CPTR_EL2_TZ;
}
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index bef74de7065b..6af179c6356d 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -48,15 +48,14 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
val |= has_hvhe() ? CPACR_EL1_TTA : CPTR_EL2_TTA;
if (cpus_have_final_cap(ARM64_SME)) {
if (has_hvhe())
- val &= ~(CPACR_EL1_SMEN_EL1EN | CPACR_EL1_SMEN_EL0EN);
+ val &= ~CPACR_ELx_SMEN;
else
val |= CPTR_EL2_TSM;
}
if (!guest_owns_fp_regs()) {
if (has_hvhe())
- val &= ~(CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN |
- CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN);
+ val &= ~(CPACR_ELx_FPEN | CPACR_ELx_ZEN);
else
val |= CPTR_EL2_TFP | CPTR_EL2_TZ;
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 20073579e9f5..8fbb6a2e0559 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -93,8 +93,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
val = read_sysreg(cpacr_el1);
val |= CPACR_ELx_TTA;
- val &= ~(CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN |
- CPACR_EL1_SMEN_EL0EN | CPACR_EL1_SMEN_EL1EN);
+ val &= ~(CPACR_ELx_ZEN | CPACR_ELx_SMEN);
/*
* With VHE (HCR.E2H == 1), accesses to CPACR_EL1 are routed to
@@ -109,9 +108,9 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
if (guest_owns_fp_regs()) {
if (vcpu_has_sve(vcpu))
- val |= CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN;
+ val |= CPACR_ELx_ZEN;
} else {
- val &= ~(CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN);
+ val &= ~CPACR_ELx_FPEN;
__activate_traps_fpsimd32(vcpu);
}
--
2.45.1.288.g0e0cd299f1-goog
* [PATCH v4 9/9] KVM: arm64: Ensure that SME controls are disabled in protected mode
2024-06-03 12:28 [PATCH v4 0/9] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode Fuad Tabba
` (7 preceding siblings ...)
2024-06-03 12:28 ` [PATCH v4 8/9] KVM: arm64: Refactor CPACR trap bit setting/clearing to use ELx format Fuad Tabba
@ 2024-06-03 12:28 ` Fuad Tabba
2024-06-03 14:43 ` Mark Brown
2024-06-04 14:30 ` [PATCH v4 0/9] KVM: arm64: Fix handling of host fpsimd/sve state " Marc Zyngier
9 siblings, 1 reply; 24+ messages in thread
From: Fuad Tabba @ 2024-06-03 12:28 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel
Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
catalin.marinas, philmd, james.morse, suzuki.poulose,
oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
yuzenghui
KVM (and pKVM) do not support SME guests. Therefore KVM ensures
that the host's SME state is flushed and that SME controls for
enabling access to ZA storage and for streaming are disabled.
pKVM needs to protect against a buggy or malicious host. Ensure
that, when protected mode is enabled, a guest is not run if any
of the SME controls are enabled.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/kvm/fpsimd.c | 7 +++++++
arch/arm64/kvm/hyp/nvhe/hyp-main.c | 11 +++++++++++
2 files changed, 18 insertions(+)
diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
index eb21f29d91fc..521b32868d0d 100644
--- a/arch/arm64/kvm/fpsimd.c
+++ b/arch/arm64/kvm/fpsimd.c
@@ -90,6 +90,13 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
fpsimd_save_and_flush_cpu_state();
}
}
+
+ /*
+ * If normal guests gain SME support, maintain this behavior for pKVM
+ * guests, which don't support SME.
+ */
+ WARN_ON(is_protected_kvm_enabled() && system_supports_sme() &&
+ read_sysreg_s(SYS_SVCR));
}
/*
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index bd93b8a9e172..f43d845f3c4e 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -140,6 +140,17 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
struct pkvm_hyp_vcpu *hyp_vcpu;
struct kvm *host_kvm;
+ /*
+ * KVM (and pKVM) doesn't support SME guests for now, and
+ * ensures that SME features aren't enabled in pstate when
+ * loading a vcpu. Therefore, if SME features are enabled, the
+ * host is misbehaving.
+ */
+ if (unlikely(system_supports_sme() && read_sysreg_s(SYS_SVCR))) {
+ ret = -EINVAL;
+ goto out;
+ }
+
host_kvm = kern_hyp_va(host_vcpu->kvm);
hyp_vcpu = pkvm_load_hyp_vcpu(host_kvm->arch.pkvm.handle,
host_vcpu->vcpu_idx);
--
2.45.1.288.g0e0cd299f1-goog
* Re: [PATCH v4 1/9] KVM: arm64: Reintroduce __sve_save_state
2024-06-03 12:28 ` [PATCH v4 1/9] KVM: arm64: Reintroduce __sve_save_state Fuad Tabba
@ 2024-06-03 13:55 ` Mark Brown
2024-06-03 14:11 ` Fuad Tabba
0 siblings, 1 reply; 24+ messages in thread
From: Mark Brown @ 2024-06-03 13:55 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, oliver.upton, mark.rutland, joey.gouly, rananta,
yuzenghui
On Mon, Jun 03, 2024 at 01:28:43PM +0100, Fuad Tabba wrote:
> +void __sve_save_state(void *sve_pffr, u32 *fpsr);
> +SYM_FUNC_START(__sve_save_state)
> + mov x2, #1
> + sve_save 0, x1, x2, 3
> + ret
> +SYM_FUNC_END(__sve_save_state)
This is still trying to use x2 as an argument here?
* Re: [PATCH v4 1/9] KVM: arm64: Reintroduce __sve_save_state
2024-06-03 13:55 ` Mark Brown
@ 2024-06-03 14:11 ` Fuad Tabba
2024-06-03 14:16 ` Mark Brown
0 siblings, 1 reply; 24+ messages in thread
From: Fuad Tabba @ 2024-06-03 14:11 UTC (permalink / raw)
To: Mark Brown
Cc: kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, oliver.upton, mark.rutland, joey.gouly, rananta,
yuzenghui
Hi Mark,
On Mon, Jun 3, 2024 at 2:55 PM Mark Brown <broonie@kernel.org> wrote:
>
> On Mon, Jun 03, 2024 at 01:28:43PM +0100, Fuad Tabba wrote:
>
> > +void __sve_save_state(void *sve_pffr, u32 *fpsr);
>
> > +SYM_FUNC_START(__sve_save_state)
> > + mov x2, #1
> > + sve_save 0, x1, x2, 3
> > + ret
> > +SYM_FUNC_END(__sve_save_state)
>
> This is still trying to use x2 as an argument here?
Since __sve_restore_state also has the same problem, I fixed both in
the following patch:
KVM: arm64: Fix prototype for __sve_save_state/__sve_restore_state
https://lore.kernel.org/all/20240603122852.3923848-3-tabba@google.com/
Cheers,
/fuad
* Re: [PATCH v4 1/9] KVM: arm64: Reintroduce __sve_save_state
2024-06-03 14:11 ` Fuad Tabba
@ 2024-06-03 14:16 ` Mark Brown
0 siblings, 0 replies; 24+ messages in thread
From: Mark Brown @ 2024-06-03 14:16 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, oliver.upton, mark.rutland, joey.gouly, rananta,
yuzenghui
On Mon, Jun 03, 2024 at 03:11:38PM +0100, Fuad Tabba wrote:
> On Mon, Jun 3, 2024 at 2:55 PM Mark Brown <broonie@kernel.org> wrote:
> > On Mon, Jun 03, 2024 at 01:28:43PM +0100, Fuad Tabba wrote:
> > This is still trying to use x2 as an argument here?
> Since __sve_restore_state also has the same problem, I fixed both in
> the following patch:
> KVM: arm64: Fix prototype for __sve_save_state/__sve_restore_state
> https://lore.kernel.org/all/20240603122852.3923848-3-tabba@google.com/
Ah, I see - it'd have been good to note that in the changelog.
* Re: [PATCH v4 2/9] KVM: arm64: Fix prototype for __sve_save_state/__sve_restore_state
2024-06-03 12:28 ` [PATCH v4 2/9] KVM: arm64: Fix prototype for __sve_save_state/__sve_restore_state Fuad Tabba
@ 2024-06-03 14:19 ` Mark Brown
0 siblings, 0 replies; 24+ messages in thread
From: Mark Brown @ 2024-06-03 14:19 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, oliver.upton, mark.rutland, joey.gouly, rananta,
yuzenghui
On Mon, Jun 03, 2024 at 01:28:44PM +0100, Fuad Tabba wrote:
> Since the prototypes for __sve_save_state/__sve_restore_state at
> hyp were added, the underlying macro has acquired a third
> parameter for saving/restoring ffr.
>
> Fix the prototypes to account for the third parameter, and
> restore the ffr for the guest since it is saved.
Reviewed-by: Mark Brown <broonie@kernel.org>
The change to fix restore should be sent as a bug fix.
* Re: [PATCH v4 9/9] KVM: arm64: Ensure that SME controls are disabled in protected mode
2024-06-03 12:28 ` [PATCH v4 9/9] KVM: arm64: Ensure that SME controls are disabled in protected mode Fuad Tabba
@ 2024-06-03 14:43 ` Mark Brown
0 siblings, 0 replies; 24+ messages in thread
From: Mark Brown @ 2024-06-03 14:43 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, oliver.upton, mark.rutland, joey.gouly, rananta,
yuzenghui
On Mon, Jun 03, 2024 at 01:28:51PM +0100, Fuad Tabba wrote:
> +++ b/arch/arm64/kvm/fpsimd.c
> @@ -90,6 +90,13 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
> fpsimd_save_and_flush_cpu_state();
> }
> }
> +
> + /*
> + * If normal guests gain SME support, maintain this behavior for pKVM
> + * guests, which don't support SME.
> + */
> + WARN_ON(is_protected_kvm_enabled() && system_supports_sme() &&
> + read_sysreg_s(SYS_SVCR));
> }
The comment doesn't line up clearly with the check here: we're checking
whether the hypervisor supports protected guests, not whether the
current guest is protected. At this point we're dealing with the host
state rather than the guest state, so whether the guest is protected
doesn't matter when we check SVCR. I'm not sure this assert really makes
sense; perhaps it should be hoisted into the system_supports_sme()
section above?
* Re: [PATCH v4 5/9] KVM: arm64: Allocate memory mapped at hyp for host sve state in pKVM
2024-06-03 12:28 ` [PATCH v4 5/9] KVM: arm64: Allocate memory mapped at hyp for host sve state in pKVM Fuad Tabba
@ 2024-06-03 14:50 ` Mark Brown
2024-06-04 8:24 ` Fuad Tabba
0 siblings, 1 reply; 24+ messages in thread
From: Mark Brown @ 2024-06-03 14:50 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, oliver.upton, mark.rutland, joey.gouly, rananta,
yuzenghui
On Mon, Jun 03, 2024 at 01:28:47PM +0100, Fuad Tabba wrote:
> {
> if (system_supports_sve()) {
> kvm_sve_max_vl = sve_max_virtualisable_vl();
> + kvm_host_sve_max_vl = sve_max_vl();
> + kvm_nvhe_sym(kvm_host_sve_max_vl) = kvm_host_sve_max_vl;
As discussed on the prior version this either needs to be the maximum
available physical VL on any individual PE or the hypervisor needs to
use this VL explicitly when saving/restoring host state.
* Re: [PATCH v4 7/9] KVM: arm64: Consolidate initializing the host data's fpsimd_state/sve in pKVM
2024-06-03 12:28 ` [PATCH v4 7/9] KVM: arm64: Consolidate initializing the host data's fpsimd_state/sve " Fuad Tabba
@ 2024-06-03 15:43 ` Mark Brown
0 siblings, 0 replies; 24+ messages in thread
From: Mark Brown @ 2024-06-03 15:43 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, oliver.upton, mark.rutland, joey.gouly, rananta,
yuzenghui
On Mon, Jun 03, 2024 at 01:28:49PM +0100, Fuad Tabba wrote:
> Now that we have introduced finalize_init_hyp_mode(), lets
> consolidate the initializing of the host_data fpsimd_state and
> sve state.
Reviewed-by: Mark Brown <broonie@kernel.org>
* Re: [PATCH v4 6/9] KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM
2024-06-03 12:28 ` [PATCH v4 6/9] KVM: arm64: Eagerly restore host fpsimd/sve " Fuad Tabba
@ 2024-06-03 15:52 ` Mark Brown
2024-06-04 12:03 ` Fuad Tabba
0 siblings, 1 reply; 24+ messages in thread
From: Mark Brown @ 2024-06-03 15:52 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, oliver.upton, mark.rutland, joey.gouly, rananta,
yuzenghui
On Mon, Jun 03, 2024 at 01:28:48PM +0100, Fuad Tabba wrote:
> +static void fpsimd_sve_flush(void)
> +{
> + *host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED;
> +}
My previous comments about this being confusing still stand.
* Re: [PATCH v4 5/9] KVM: arm64: Allocate memory mapped at hyp for host sve state in pKVM
2024-06-03 14:50 ` Mark Brown
@ 2024-06-04 8:24 ` Fuad Tabba
0 siblings, 0 replies; 24+ messages in thread
From: Fuad Tabba @ 2024-06-04 8:24 UTC (permalink / raw)
To: Mark Brown
Cc: kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, oliver.upton, mark.rutland, joey.gouly, rananta,
yuzenghui
Hi Mark,
On Mon, Jun 3, 2024 at 3:50 PM Mark Brown <broonie@kernel.org> wrote:
>
> On Mon, Jun 03, 2024 at 01:28:47PM +0100, Fuad Tabba wrote:
>
> > {
> > if (system_supports_sve()) {
> > kvm_sve_max_vl = sve_max_virtualisable_vl();
> > + kvm_host_sve_max_vl = sve_max_vl();
> > + kvm_nvhe_sym(kvm_host_sve_max_vl) = kvm_host_sve_max_vl;
>
> As discussed on the prior version this either needs to be the maximum
> available physical VL on any individual PE or the hypervisor needs to
> use this VL explicitly when saving/restoring host state.
Sorry it took me a while, but I understand what the issue is now. I'll
send a patch to fix this.
Thanks for being patient with me,
/fuad
* Re: [PATCH v4 6/9] KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM
2024-06-03 15:52 ` Mark Brown
@ 2024-06-04 12:03 ` Fuad Tabba
2024-06-04 13:13 ` Mark Brown
0 siblings, 1 reply; 24+ messages in thread
From: Fuad Tabba @ 2024-06-04 12:03 UTC (permalink / raw)
To: Mark Brown
Cc: kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, oliver.upton, mark.rutland, joey.gouly, rananta,
yuzenghui
Hi Mark,
On Mon, Jun 3, 2024 at 4:52 PM Mark Brown <broonie@kernel.org> wrote:
>
> On Mon, Jun 03, 2024 at 01:28:48PM +0100, Fuad Tabba wrote:
>
> > +static void fpsimd_sve_flush(void)
> > +{
> > + *host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED;
> > +}
>
> My previous comments about this being confusing still stand.
Sorry, I missed this in my reply to v3.
This follows the convention for save/flush in hyp-main.c. Since saving
the fpsimd/sve state is lazy, i.e., it only takes place if the guest
has used fpsimd/sve, the only thing the flush needs to do is mark the
state as owned by the host.
You suggested inlining this, but since the function is static, I think
the compiler will do that anyway. Even though it's only one line, it
maintains symmetry with fpsimd_sve_sync().
Cheers,
/fuad
* Re: [PATCH v4 6/9] KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM
2024-06-04 12:03 ` Fuad Tabba
@ 2024-06-04 13:13 ` Mark Brown
2024-06-04 13:52 ` Marc Zyngier
0 siblings, 1 reply; 24+ messages in thread
From: Mark Brown @ 2024-06-04 13:13 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, oliver.upton, mark.rutland, joey.gouly, rananta,
yuzenghui
On Tue, Jun 04, 2024 at 01:03:07PM +0100, Fuad Tabba wrote:
> On Mon, Jun 3, 2024 at 4:52 PM Mark Brown <broonie@kernel.org> wrote:
> > On Mon, Jun 03, 2024 at 01:28:48PM +0100, Fuad Tabba wrote:
> > > +static void fpsimd_sve_flush(void)
> > > +{
> > > + *host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED;
> > > +}
> > My previous comments about this being confusing still stand.
> Sorry, I missed this in my reply to v3.
> This follows the convention for save/flush in hyp-main.c. Since the
> act of flushing the fpsimd/sve state is lazy, i.e., only takes place
> if the guest were to use fpsimd/sve, then the only thing that we need
> to do to flush is to mark the state as owned by the host.
I think this needs a comment mentioning what's going on here.
> You suggested inlining this, but since this is static, I think the
> compiler would do that. Even though it's only one line, it maintains
> symmetry with fpsimd_sve_sync().
The reason I was suggesting inlining was that it removes the need to
name the function.
* Re: [PATCH v4 6/9] KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM
2024-06-04 13:13 ` Mark Brown
@ 2024-06-04 13:52 ` Marc Zyngier
2024-06-04 14:07 ` Mark Brown
0 siblings, 1 reply; 24+ messages in thread
From: Marc Zyngier @ 2024-06-04 13:52 UTC (permalink / raw)
To: Mark Brown
Cc: Fuad Tabba, kvmarm, linux-arm-kernel, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, oliver.upton, mark.rutland, joey.gouly, rananta,
yuzenghui
On Tue, 04 Jun 2024 14:13:16 +0100,
Mark Brown <broonie@kernel.org> wrote:
>
> On Tue, Jun 04, 2024 at 01:03:07PM +0100, Fuad Tabba wrote:
> > On Mon, Jun 3, 2024 at 4:52 PM Mark Brown <broonie@kernel.org> wrote:
> > > On Mon, Jun 03, 2024 at 01:28:48PM +0100, Fuad Tabba wrote:
>
> > > > +static void fpsimd_sve_flush(void)
> > > > +{
> > > > + *host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED;
> > > > +}
>
> > > My previous comments about this being confusing still stand.
>
> > Sorry, I missed this in my reply to v3.
>
> > This follows the convention for save/flush in hyp-main.c. Since the
> > act of flushing the fpsimd/sve state is lazy, i.e., only takes place
> > if the guest were to use fpsimd/sve, then the only thing that we need
> > to do to flush is to mark the state as owned by the host.
>
> I think this needs a comment mentioning what's going on here.
>
> > You suggested inlining this, but since this is static, I think the
> > compiler would do that. Even though it's only one line, it maintains
> > symmetry with fpsimd_sve_sync().
>
> The reason I was suggesting inlining was that it removes the need to
> name the function.
You're missing the point.
The name is important, and the current name is correct. We use *flush
(resp. *sync) for operations that need to happen before (resp. after)
the entry into the guest. This is consistent with the rest of the code
base for most of the subsystems KVM/arm64 deals with.
So the current form of this function, as a standalone function with
its current name, stays.
M.
--
Without deviation from the norm, progress is not possible.
* Re: [PATCH v4 6/9] KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM
2024-06-04 13:52 ` Marc Zyngier
@ 2024-06-04 14:07 ` Mark Brown
0 siblings, 0 replies; 24+ messages in thread
From: Mark Brown @ 2024-06-04 14:07 UTC (permalink / raw)
To: Marc Zyngier
Cc: Fuad Tabba, kvmarm, linux-arm-kernel, will, qperret, seanjc,
alexandru.elisei, catalin.marinas, philmd, james.morse,
suzuki.poulose, oliver.upton, mark.rutland, joey.gouly, rananta,
yuzenghui
On Tue, Jun 04, 2024 at 02:52:52PM +0100, Marc Zyngier wrote:
> Mark Brown <broonie@kernel.org> wrote:
> > On Tue, Jun 04, 2024 at 01:03:07PM +0100, Fuad Tabba wrote:
> > > You suggested inlining this, but since this is static, I think the
> > > compiler would do that. Even though it's only one line, it maintains
> > > symmetry with fpsimd_sve_sync().
> > The reason I was suggesting inlining was that it removes the need to
> > name the function.
> You're missing the point.
I think not.
> The name is important, and the current name is correct. We use *flush
> (resp. *sync) for operations that need to happen before (resp. after)
> the entry into the guest. This is consistent with the rest of the code
> base for most of the subsystems KVM/arm64 deals with.
Hence my suggestion to add documentation to the function to avoid the
risk of surprises; the above text explains the motivation for the
suggestion that Fuad is responding to. Either approach would work.
* Re: [PATCH v4 0/9] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode
2024-06-03 12:28 [PATCH v4 0/9] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode Fuad Tabba
` (8 preceding siblings ...)
2024-06-03 12:28 ` [PATCH v4 9/9] KVM: arm64: Ensure that SME controls are disabled in protected mode Fuad Tabba
@ 2024-06-04 14:30 ` Marc Zyngier
9 siblings, 0 replies; 24+ messages in thread
From: Marc Zyngier @ 2024-06-04 14:30 UTC (permalink / raw)
To: kvmarm, linux-arm-kernel, Fuad Tabba
Cc: will, qperret, seanjc, alexandru.elisei, catalin.marinas, philmd,
james.morse, suzuki.poulose, oliver.upton, mark.rutland, broonie,
joey.gouly, rananta, yuzenghui
On Mon, 03 Jun 2024 13:28:42 +0100, Fuad Tabba wrote:
> Changes since v3 [1]:
> - Rebased on Linux 6.10-rc2 (c3f38fa61af7)
> - Dropped v3 patches 8--11 (Oliver)
> - Removed unnecessary isb()s (Oliver)
> - Formatting/comments (Mark)
> - Fix __sve_save_state()/__sve_restore_state() prototypes (Mark)
> - Save/restore ffr with the sve state
> - Added a patch that checks at hyp that SME features aren't
> enabled on guest entry, to ensure it's not in streaming mode
>
> [...]
Applied to fixes, thanks!
[1/9] KVM: arm64: Reintroduce __sve_save_state
commit: 87bb39ed40bdf1596b8820e800226e24eb642677
[2/9] KVM: arm64: Fix prototype for __sve_save_state/__sve_restore_state
commit: 45f4ea9bcfe909b3461059990b1e232e55dde809
[3/9] KVM: arm64: Abstract set/clear of CPTR_EL2 bits behind helper
commit: 6d8fb3cbf7e06431a607c30c1bc4cd53a62c220a
[4/9] KVM: arm64: Specialize handling of host fpsimd state on trap
commit: e511e08a9f496948b13aac50610f2d17335f56c3
[5/9] KVM: arm64: Allocate memory mapped at hyp for host sve state in pKVM
commit: 66d5b53e20a6e00b7ce3b652a3e2db967f7b33d0
[6/9] KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM
commit: b5b9955617bc0b41546f2fa7c3dbcc048b43dc82
[7/9] KVM: arm64: Consolidate initializing the host data's fpsimd_state/sve in pKVM
commit: 1696fc2174dbab12228ea9ec4c213d6aeea348f8
[8/9] KVM: arm64: Refactor CPACR trap bit setting/clearing to use ELx format
commit: a69283ae1db8dd416870d931caa9e2d3d2c1cd8b
[9/9] KVM: arm64: Ensure that SME controls are disabled in protected mode
commit: afb91f5f8ad7af172d993a34fde1947892408f53
Cheers,
M.
--
Without deviation from the norm, progress is not possible.
Thread overview: 24+ messages
2024-06-03 12:28 [PATCH v4 0/9] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode Fuad Tabba
2024-06-03 12:28 ` [PATCH v4 1/9] KVM: arm64: Reintroduce __sve_save_state Fuad Tabba
2024-06-03 13:55 ` Mark Brown
2024-06-03 14:11 ` Fuad Tabba
2024-06-03 14:16 ` Mark Brown
2024-06-03 12:28 ` [PATCH v4 2/9] KVM: arm64: Fix prototype for __sve_save_state/__sve_restore_state Fuad Tabba
2024-06-03 14:19 ` Mark Brown
2024-06-03 12:28 ` [PATCH v4 3/9] KVM: arm64: Abstract set/clear of CPTR_EL2 bits behind helper Fuad Tabba
2024-06-03 12:28 ` [PATCH v4 4/9] KVM: arm64: Specialize handling of host fpsimd state on trap Fuad Tabba
2024-06-03 12:28 ` [PATCH v4 5/9] KVM: arm64: Allocate memory mapped at hyp for host sve state in pKVM Fuad Tabba
2024-06-03 14:50 ` Mark Brown
2024-06-04 8:24 ` Fuad Tabba
2024-06-03 12:28 ` [PATCH v4 6/9] KVM: arm64: Eagerly restore host fpsimd/sve " Fuad Tabba
2024-06-03 15:52 ` Mark Brown
2024-06-04 12:03 ` Fuad Tabba
2024-06-04 13:13 ` Mark Brown
2024-06-04 13:52 ` Marc Zyngier
2024-06-04 14:07 ` Mark Brown
2024-06-03 12:28 ` [PATCH v4 7/9] KVM: arm64: Consolidate initializing the host data's fpsimd_state/sve " Fuad Tabba
2024-06-03 15:43 ` Mark Brown
2024-06-03 12:28 ` [PATCH v4 8/9] KVM: arm64: Refactor CPACR trap bit setting/clearing to use ELx format Fuad Tabba
2024-06-03 12:28 ` [PATCH v4 9/9] KVM: arm64: Ensure that SME controls are disabled in protected mode Fuad Tabba
2024-06-03 14:43 ` Mark Brown
2024-06-04 14:30 ` [PATCH v4 0/9] KVM: arm64: Fix handling of host fpsimd/sve state " Marc Zyngier