linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH v1 0/7] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode
@ 2024-05-17 13:18 Fuad Tabba
From: Fuad Tabba @ 2024-05-17 13:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
	catalin.marinas, philmd, james.morse, suzuki.poulose,
	oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
	yuzenghui

With the KVM host data rework [1], handling of fpsimd and sve
state in protected mode is done at hyp. For protected VMs, we
don't want to leak any guest state to the host, including whether
a guest has used fpsimd/sve.

To complete the work started with the host data rework with
regard to protected mode, ensure that the host's fpsimd and sve
contexts are restored on guest exit, since the rework has hidden
the fpsimd/sve state from the host.

This patch series eagerly restores the host fpsimd/sve state on
guest exit when running in protected mode; the restore takes
place only if the guest has actually used fpsimd/sve. Saving of
the state thus remains lazy, similar to the behavior of KVM in
other modes, but the restoration of the host state is eager.
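The lazy-save / eager-restore policy described above can be sketched as a
small state machine; the type, field, and function names below are
illustrative stand-ins, not the kernel's:

```c
/* Toy model of the pKVM fp ownership policy: the host state is saved
 * lazily (on the guest's first fpsimd/sve trap) and restored eagerly
 * on guest exit, but only if the guest actually took the trap. */
#include <assert.h>
#include <stdbool.h>

enum fp_owner { FP_HOST_OWNED, FP_GUEST_OWNED };

struct cpu_fp {
	enum fp_owner owner;
	bool host_state_saved;
};

/* Guest traps on first fpsimd/sve access: save the host state lazily. */
static void handle_guest_fp_trap(struct cpu_fp *cpu)
{
	if (cpu->owner == FP_HOST_OWNED) {
		cpu->host_state_saved = true;	/* analogue of __fpsimd_save_state() */
		cpu->owner = FP_GUEST_OWNED;
	}
}

/* On guest exit in protected mode, restore the host state eagerly,
 * which only has an effect if the guest used fpsimd/sve. */
static void handle_guest_exit_pkvm(struct cpu_fp *cpu)
{
	if (cpu->owner == FP_GUEST_OWNED) {
		cpu->host_state_saved = false;	/* restore host registers */
		cpu->owner = FP_HOST_OWNED;
	}
}
```

Running the trap and exit handlers in sequence shows that an exit without
a preceding trap leaves the host state untouched.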

This series is based on kvmarm-6.10-1 (kvmarm/next). It should
not have any effect on modes other than protected mode.

Tested on QEMU with the kernel sve stress tests.

Cheers,
/fuad

[1] https://lore.kernel.org/all/20240322170945.3292593-1-maz@kernel.org/

Fuad Tabba (7):
  KVM: arm64: Reintroduce __sve_save_state
  KVM: arm64: Specialize deactivate fpsimd/sve traps on guest trap
  KVM: arm64: Specialize handling of host fpsimd state on trap
  KVM: arm64: Store the maximum sve vector length at hyp
  KVM: arm64: Allocate memory at hyp for host sve state in pKVM
  KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM
  KVM: arm64: Consolidate initializing the host data's fpsimd_state/sve
    in pKVM

 arch/arm64/include/asm/kvm_host.h       | 11 +++-
 arch/arm64/include/asm/kvm_hyp.h        |  2 +
 arch/arm64/include/asm/kvm_pkvm.h       |  9 +++
 arch/arm64/include/uapi/asm/ptrace.h    | 14 +++++
 arch/arm64/kvm/arm.c                    | 75 +++++++++++++++++++++++++
 arch/arm64/kvm/hyp/fpsimd.S             |  6 ++
 arch/arm64/kvm/hyp/include/hyp/switch.h | 31 +++++-----
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h  |  1 -
 arch/arm64/kvm/hyp/nvhe/hyp-main.c      | 57 ++++++++++++++++++-
 arch/arm64/kvm/hyp/nvhe/pkvm.c          | 15 ++---
 arch/arm64/kvm/hyp/nvhe/setup.c         | 25 ++++++++-
 arch/arm64/kvm/hyp/nvhe/switch.c        | 40 +++++++++++++
 arch/arm64/kvm/hyp/vhe/switch.c         | 16 ++++++
 arch/arm64/kvm/reset.c                  |  2 +
 14 files changed, 271 insertions(+), 33 deletions(-)


base-commit: eaa46a28d59655aa89a8fb885affa6fc0de44376
-- 
2.45.0.rc1.225.g2a3ae87e7f-goog


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


* [PATCH v1 1/7] KVM: arm64: Reintroduce __sve_save_state
From: Fuad Tabba @ 2024-05-17 13:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
	catalin.marinas, philmd, james.morse, suzuki.poulose,
	oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
	yuzenghui

Now that the hypervisor is handling the host sve state in
protected mode, it needs to be able to save it.

This reverts commit e66425fc9ba3 ("KVM: arm64: Remove unused
__sve_save_state").

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_hyp.h | 1 +
 arch/arm64/kvm/hyp/fpsimd.S      | 6 ++++++
 2 files changed, 7 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 3e80464f8953..2ab23589339a 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -111,6 +111,7 @@ void __debug_restore_host_buffers_nvhe(struct kvm_vcpu *vcpu);
 
 void __fpsimd_save_state(struct user_fpsimd_state *fp_regs);
 void __fpsimd_restore_state(struct user_fpsimd_state *fp_regs);
+void __sve_save_state(void *sve_pffr, u32 *fpsr);
 void __sve_restore_state(void *sve_pffr, u32 *fpsr);
 
 u64 __guest_enter(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/hyp/fpsimd.S b/arch/arm64/kvm/hyp/fpsimd.S
index 61e6f3ba7b7d..e950875e31ce 100644
--- a/arch/arm64/kvm/hyp/fpsimd.S
+++ b/arch/arm64/kvm/hyp/fpsimd.S
@@ -25,3 +25,9 @@ SYM_FUNC_START(__sve_restore_state)
 	sve_load 0, x1, x2, 3
 	ret
 SYM_FUNC_END(__sve_restore_state)
+
+SYM_FUNC_START(__sve_save_state)
+	mov	x2, #1
+	sve_save 0, x1, x2, 3
+	ret
+SYM_FUNC_END(__sve_save_state)
-- 
2.45.0.rc1.225.g2a3ae87e7f-goog



* [PATCH v1 2/7] KVM: arm64: Specialize deactivate fpsimd/sve traps on guest trap
From: Fuad Tabba @ 2024-05-17 13:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
	catalin.marinas, philmd, james.morse, suzuki.poulose,
	oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
	yuzenghui

The code that deactivates traps in order to update the
fpsimd/sve registers is the only code in switch.h that is
nVHE/VHE specific, i.e., it behaves differently depending on
whether it is running in VHE or nVHE mode. Move it to
specialized functions in switch.c, like other mode-specific
code.

This is needed for subsequent patches, since the logic for
deciding which traps to enable/disable will diverge between
nVHE and VHE.

No functional change intended.
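The trap-deactivation logic being moved can be sketched in isolation as
follows; the bit constants are simplified stand-ins for the CPACR_EL1 and
CPTR_EL2 field encodings, and the helper names are hypothetical:

```c
/* Sketch of the two mode-specific ways of deactivating fpsimd/sve
 * traps: VHE/hVHE disables traps by SETTING enable bits in CPACR_EL1,
 * while nVHE disables them by CLEARING trap bits in CPTR_EL2. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define CPACR_FPEN	(3u << 20)	/* stand-in for FPEN_EL0EN|FPEN_EL1EN */
#define CPACR_ZEN	(3u << 16)	/* stand-in for ZEN_EL0EN|ZEN_EL1EN */
#define CPTR_TFP	(1u << 10)	/* stand-in for CPTR_EL2_TFP */
#define CPTR_TZ		(1u << 8)	/* stand-in for CPTR_EL2_TZ */

/* VHE/hVHE: bits to SET in CPACR_EL1 to stop trapping. */
static uint32_t cpacr_set_bits(bool sve_guest)
{
	uint32_t reg = CPACR_FPEN;

	if (sve_guest)
		reg |= CPACR_ZEN;
	return reg;
}

/* nVHE: bits to CLEAR in CPTR_EL2 to stop trapping. */
static uint32_t cptr_clear_bits(bool sve_guest)
{
	uint32_t reg = CPTR_TFP;

	if (sve_guest)
		reg |= CPTR_TZ;
	return reg;
}
```

The two helpers compute the same policy (fpsimd always, sve only for SVE
guests) but with opposite set/clear polarity, which is why the patch
splits them into per-mode functions.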

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/include/hyp/switch.h | 18 +++---------------
 arch/arm64/kvm/hyp/nvhe/switch.c        | 21 +++++++++++++++++++++
 arch/arm64/kvm/hyp/vhe/switch.c         | 11 +++++++++++
 3 files changed, 35 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index a92566f36022..890388c17c3e 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -320,6 +320,8 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
 	write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR);
 }
 
+static void __deactivate_fpsimd_sve_traps(struct kvm_vcpu *vcpu);
+
 /*
  * We trap the first access to the FP/SIMD to save the host context and
  * restore the guest context lazily.
@@ -330,7 +332,6 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
 	bool sve_guest;
 	u8 esr_ec;
-	u64 reg;
 
 	if (!system_supports_fpsimd())
 		return false;
@@ -353,20 +354,7 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
 	/* Valid trap.  Switch the context: */
 
 	/* First disable enough traps to allow us to update the registers */
-	if (has_vhe() || has_hvhe()) {
-		reg = CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN;
-		if (sve_guest)
-			reg |= CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN;
-
-		sysreg_clear_set(cpacr_el1, 0, reg);
-	} else {
-		reg = CPTR_EL2_TFP;
-		if (sve_guest)
-			reg |= CPTR_EL2_TZ;
-
-		sysreg_clear_set(cptr_el2, reg, 0);
-	}
-	isb();
+	__deactivate_fpsimd_sve_traps(vcpu);
 
 	/* Write out the host state if it's in the registers */
 	if (host_owns_fp_regs())
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 6758cd905570..2b5af10cdf1d 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -112,6 +112,27 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
 	write_sysreg(__kvm_hyp_host_vector, vbar_el2);
 }
 
+static void __deactivate_fpsimd_sve_traps(struct kvm_vcpu *vcpu)
+{
+	bool clear_sve_traps = vcpu_has_sve(vcpu);
+	u64 reg;
+
+	if (has_hvhe()) {
+		reg = CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN;
+		if (clear_sve_traps)
+			reg |= CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN;
+
+		sysreg_clear_set(cpacr_el1, 0, reg);
+	} else {
+		reg = CPTR_EL2_TFP;
+		if (clear_sve_traps)
+			reg |= CPTR_EL2_TZ;
+
+		sysreg_clear_set(cptr_el2, reg, 0);
+	}
+	isb();
+}
+
 /* Save VGICv3 state on non-VHE systems */
 static void __hyp_vgic_save_state(struct kvm_vcpu *vcpu)
 {
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index d7af5f46f22a..740360065d7d 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -209,6 +209,17 @@ void kvm_vcpu_put_vhe(struct kvm_vcpu *vcpu)
 	host_data_ptr(host_ctxt)->__hyp_running_vcpu = NULL;
 }
 
+static void __deactivate_fpsimd_sve_traps(struct kvm_vcpu *vcpu)
+{
+	u64 reg = CPACR_EL1_FPEN_EL0EN | CPACR_EL1_FPEN_EL1EN;
+
+	if (vcpu_has_sve(vcpu))
+		reg |= CPACR_EL1_ZEN_EL0EN | CPACR_EL1_ZEN_EL1EN;
+
+	sysreg_clear_set(cpacr_el1, 0, reg);
+	isb();
+}
+
 static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
 	u64 esr = kvm_vcpu_get_esr(vcpu);
-- 
2.45.0.rc1.225.g2a3ae87e7f-goog



* [PATCH v1 3/7] KVM: arm64: Specialize handling of host fpsimd state on trap
From: Fuad Tabba @ 2024-05-17 13:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
	catalin.marinas, philmd, james.morse, suzuki.poulose,
	oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
	yuzenghui

In subsequent patches, nVHE and VHE will diverge in how they
save the host fpsimd/sve state when taking a guest fpsimd/sve
trap. Add a specialized helper to handle it.

No functional change intended.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/include/hyp/switch.h | 3 ++-
 arch/arm64/kvm/hyp/nvhe/switch.c        | 5 +++++
 arch/arm64/kvm/hyp/vhe/switch.c         | 5 +++++
 3 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index 890388c17c3e..d15272445db2 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -321,6 +321,7 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
 }
 
 static void __deactivate_fpsimd_sve_traps(struct kvm_vcpu *vcpu);
+static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu);
 
 /*
  * We trap the first access to the FP/SIMD to save the host context and
@@ -358,7 +359,7 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
 
 	/* Write out the host state if it's in the registers */
 	if (host_owns_fp_regs())
-		__fpsimd_save_state(*host_data_ptr(fpsimd_state));
+		kvm_hyp_save_fpsimd_host(vcpu);
 
 	/* Restore the guest state */
 	if (sve_guest)
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 2b5af10cdf1d..935f3db245e9 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -203,6 +203,11 @@ static bool kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu, u64 *exit_code)
 		kvm_handle_pvm_sysreg(vcpu, exit_code));
 }
 
+static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
+{
+	__fpsimd_save_state(*host_data_ptr(fpsimd_state));
+}
+
 static const exit_handler_fn hyp_exit_handlers[] = {
 	[0 ... ESR_ELx_EC_MAX]		= NULL,
 	[ESR_ELx_EC_CP15_32]		= kvm_hyp_handle_cp15_32,
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 740360065d7d..6a6d69c3c58a 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -273,6 +273,11 @@ static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code)
 	return true;
 }
 
+static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
+{
+	__fpsimd_save_state(*host_data_ptr(fpsimd_state));
+}
+
 static const exit_handler_fn hyp_exit_handlers[] = {
 	[0 ... ESR_ELx_EC_MAX]		= NULL,
 	[ESR_ELx_EC_CP15_32]		= kvm_hyp_handle_cp15_32,
-- 
2.45.0.rc1.225.g2a3ae87e7f-goog



* [PATCH v1 4/7] KVM: arm64: Store the maximum sve vector length at hyp
From: Fuad Tabba @ 2024-05-17 13:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
	catalin.marinas, philmd, james.morse, suzuki.poulose,
	oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
	yuzenghui

In subsequent patches, hyp needs to know the maximum sve vector
length for the host, without needing to trust the host for that
value. This is used when allocating memory for the host sve
state in the following patch, as well as for allocating and
restricting guest sve state in a future patch series.
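The idea of latching the value at init and never trusting the host
afterwards can be sketched as below; the names are hypothetical analogues
of kvm_host_sve_max_vl and the clamping done later in the series:

```c
/* Sketch: hyp latches the sanitised maximum vector length once at
 * init; any vector length the host supplies later is clamped to that
 * latched value rather than trusted. */
#include <assert.h>

static unsigned int hyp_host_sve_max_vl;	/* __ro_after_init analogue */

static void hyp_init_sve(unsigned int sanitised_max_vl)
{
	hyp_host_sve_max_vl = sanitised_max_vl;
}

/* A host-supplied vl is never allowed to exceed the latched maximum. */
static unsigned int hyp_clamp_vl(unsigned int host_supplied_vl)
{
	return host_supplied_vl < hyp_host_sve_max_vl ?
	       host_supplied_vl : hyp_host_sve_max_vl;
}
```

A malicious or buggy host asking for an oversized vector length simply
gets the latched maximum back.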

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_host.h | 1 +
 arch/arm64/include/asm/kvm_hyp.h  | 1 +
 arch/arm64/kvm/arm.c              | 1 +
 arch/arm64/kvm/hyp/nvhe/pkvm.c    | 2 ++
 arch/arm64/kvm/reset.c            | 2 ++
 5 files changed, 7 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 8170c04fde91..0a5fceb20f3a 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -76,6 +76,7 @@ static inline enum kvm_mode kvm_get_mode(void) { return KVM_MODE_NONE; };
 DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
 
 extern unsigned int __ro_after_init kvm_sve_max_vl;
+extern unsigned int __ro_after_init kvm_host_sve_max_vl;
 int __init kvm_arm_init_sve(void);
 
 u32 __attribute_const__ kvm_target_cpu(void);
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 2ab23589339a..d313adf53bef 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -143,5 +143,6 @@ extern u64 kvm_nvhe_sym(id_aa64smfr0_el1_sys_val);
 
 extern unsigned long kvm_nvhe_sym(__icache_flags);
 extern unsigned int kvm_nvhe_sym(kvm_arm_vmid_bits);
+extern unsigned int kvm_nvhe_sym(kvm_host_sve_max_vl);
 
 #endif /* __ARM64_KVM_HYP_H__ */
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 9996a989b52e..9e565ea3d645 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -2378,6 +2378,7 @@ static void kvm_hyp_init_symbols(void)
 	kvm_nvhe_sym(id_aa64smfr0_el1_sys_val) = read_sanitised_ftr_reg(SYS_ID_AA64SMFR0_EL1);
 	kvm_nvhe_sym(__icache_flags) = __icache_flags;
 	kvm_nvhe_sym(kvm_arm_vmid_bits) = kvm_arm_vmid_bits;
+	kvm_nvhe_sym(kvm_host_sve_max_vl) = kvm_host_sve_max_vl;
 }
 
 static int __init kvm_hyp_init_protection(u32 hyp_va_bits)
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 16aa4875ddb8..25e9a94f6d76 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -18,6 +18,8 @@ unsigned long __icache_flags;
 /* Used by kvm_get_vttbr(). */
 unsigned int kvm_arm_vmid_bits;
 
+unsigned int kvm_host_sve_max_vl;
+
 /*
  * Set trap register values based on features in ID_AA64PFR0.
  */
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 1b7b58cb121f..e818727900ec 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -46,11 +46,13 @@ static u32 __ro_after_init kvm_ipa_limit;
 				 PSR_AA32_I_BIT | PSR_AA32_F_BIT)
 
 unsigned int __ro_after_init kvm_sve_max_vl;
+unsigned int __ro_after_init kvm_host_sve_max_vl;
 
 int __init kvm_arm_init_sve(void)
 {
 	if (system_supports_sve()) {
 		kvm_sve_max_vl = sve_max_virtualisable_vl();
+		kvm_host_sve_max_vl = sve_max_vl();
 
 		/*
 		 * The get_sve_reg()/set_sve_reg() ioctl interface will need
-- 
2.45.0.rc1.225.g2a3ae87e7f-goog



* [PATCH v1 5/7] KVM: arm64: Allocate memory at hyp for host sve state in pKVM
From: Fuad Tabba @ 2024-05-17 13:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
	catalin.marinas, philmd, james.morse, suzuki.poulose,
	oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
	yuzenghui

Protected mode needs to maintain (save/restore) the host's sve
state, rather than relying on the host kernel to do that. This is
to avoid leaking information to the host about guests and the
type of operations they are performing.

As a first step towards that, allocate memory at hyp, per cpu, to
hold the host sve data. The following patch will use this memory
to save/restore the host state.
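The size of that per-cpu allocation follows from the maximum vector
length, mirroring pkvm_host_sve_state_size() in the diff below; the
macros here are simplified stand-ins for the kernel's SVE_SIG_REGS_SIZE()
and the struct is a rough analogue of user_sve_state:

```c
/* Sketch of sizing the host SVE buffer from the maximum vector length:
 * a fixed header plus 32 Z registers of vl bytes each, 16 P registers
 * and FFR of vl/8 bytes each (one "vq" = 16 bytes of Z register). */
#include <assert.h>
#include <stddef.h>

#define SVE_VQ_BYTES		16	/* one 128-bit quadword granule */
#define sve_vq_from_vl(vl)	((vl) / SVE_VQ_BYTES)
#define SVE_REGS_SIZE(vq) \
	(32 * (vq) * SVE_VQ_BYTES + 17 * (vq) * (SVE_VQ_BYTES / 8))

struct sve_state_hdr {
	unsigned long long zcr_el1;
	unsigned int fpsr, fpcr;
};

static size_t host_sve_state_size(unsigned int vl, int have_sve)
{
	if (!have_sve)
		return 0;

	return sizeof(struct sve_state_hdr) +
	       SVE_REGS_SIZE(sve_vq_from_vl(vl));
}
```

For vl = 16 (the architectural minimum, vq = 1) the register payload is
32*16 + 17*2 = 546 bytes, and it scales linearly with vq, which is why
the allocation order must be derived from the maximum host vl.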

Signed-off-by: Fuad Tabba <tabba@google.com>
---
Note that the last patch in this series will consolidate the
setup of the host's fpsimd and sve states, which currently
takes place in two different locations. Moreover, that last
patch will also place the host fpsimd and sve_state pointers in
a union.
---
 arch/arm64/include/asm/kvm_host.h    |  2 +
 arch/arm64/include/asm/kvm_pkvm.h    |  9 ++++
 arch/arm64/include/uapi/asm/ptrace.h | 14 ++++++
 arch/arm64/kvm/arm.c                 | 68 ++++++++++++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/setup.c      | 24 ++++++++++
 5 files changed, 117 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 0a5fceb20f3a..7b3745ef1d73 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -535,7 +535,9 @@ struct kvm_cpu_context {
  */
 struct kvm_host_data {
 	struct kvm_cpu_context host_ctxt;
+
 	struct user_fpsimd_state *fpsimd_state;	/* hyp VA */
+	struct user_sve_state *sve_state;	/* hyp VA */
 
 	/* Ownership of the FP regs */
 	enum {
diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index ad9cfb5c1ff4..b9d12e123efb 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -128,4 +128,13 @@ static inline unsigned long hyp_ffa_proxy_pages(void)
 	return (2 * KVM_FFA_MBOX_NR_PAGES) + DIV_ROUND_UP(desc_max, PAGE_SIZE);
 }
 
+static inline size_t pkvm_host_sve_state_size(void)
+{
+	if (!system_supports_sve())
+		return 0;
+
+	return size_add(sizeof(struct user_sve_state),
+			SVE_SIG_REGS_SIZE(sve_vq_from_vl(kvm_host_sve_max_vl)));
+}
+
 #endif	/* __ARM64_KVM_PKVM_H__ */
diff --git a/arch/arm64/include/uapi/asm/ptrace.h b/arch/arm64/include/uapi/asm/ptrace.h
index 7fa2f7036aa7..77aabf964071 100644
--- a/arch/arm64/include/uapi/asm/ptrace.h
+++ b/arch/arm64/include/uapi/asm/ptrace.h
@@ -120,6 +120,20 @@ struct user_sve_header {
 	__u16 __reserved;
 };
 
+struct user_sve_state {
+	__u64 zcr_el1;
+
+	/*
+	 * Ordering is important since __sve_save_state/__sve_restore_state
+	 * relies on it.
+	 */
+	__u32 fpsr;
+	__u32 fpcr;
+
+	/* Must be SVE_VQ_BYTES (128 bit) aligned. */
+	__u8 sve_regs[];
+};
+
 /* Definitions for user_sve_header.flags: */
 #define SVE_PT_REGS_MASK		(1 << 0)
 
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 9e565ea3d645..a9b1b0e9c319 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -1931,6 +1931,11 @@ static unsigned long nvhe_percpu_order(void)
 	return size ? get_order(size) : 0;
 }
 
+static size_t pkvm_host_sve_state_order(void)
+{
+	return get_order(pkvm_host_sve_state_size());
+}
+
 /* A lookup table holding the hypervisor VA for each vector slot */
 static void *hyp_spectre_vector_selector[BP_HARDEN_EL2_SLOTS];
 
@@ -2316,7 +2321,15 @@ static void __init teardown_hyp_mode(void)
 	for_each_possible_cpu(cpu) {
 		free_page(per_cpu(kvm_arm_hyp_stack_page, cpu));
 		free_pages(kvm_nvhe_sym(kvm_arm_hyp_percpu_base)[cpu], nvhe_percpu_order());
+
+		if (system_supports_sve() && is_protected_kvm_enabled()) {
+			struct user_sve_state *sve_state;
+
+			sve_state = per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state;
+			free_pages((unsigned long) sve_state, pkvm_host_sve_state_order());
+		}
 	}
+
 }
 
 static int __init do_pkvm_init(u32 hyp_va_bits)
@@ -2399,6 +2412,50 @@ static int __init kvm_hyp_init_protection(u32 hyp_va_bits)
 	return 0;
 }
 
+static int init_pkvm_host_sve_state(void)
+{
+	int cpu;
+
+	if (!system_supports_sve())
+		return 0;
+
+	/* Allocate pages for host sve state in protected mode. */
+	for_each_possible_cpu(cpu) {
+		struct page *page = alloc_pages(GFP_KERNEL, pkvm_host_sve_state_order());
+
+		if (!page)
+			return -ENOMEM;
+
+		per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state = page_address(page);
+	}
+
+	/*
+	 * Don't map the pages in hyp since these are only used in protected
+	 * mode, which will (re)create its own mapping when initialized.
+	 */
+
+	return 0;
+}
+
+/*
+ * Finalizes the initialization of hyp mode, once everything else is initialized
+ * and the initialization process cannot fail.
+ */
+static void finalize_init_hyp_mode(void)
+{
+	int cpu;
+
+	if (!is_protected_kvm_enabled() || !system_supports_sve())
+		return;
+
+	for_each_possible_cpu(cpu) {
+		struct user_sve_state *sve_state;
+
+		sve_state = per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state;
+		per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state = kern_hyp_va(sve_state);
+	}
+}
+
 static void pkvm_hyp_init_ptrauth(void)
 {
 	struct kvm_cpu_context *hyp_ctxt;
@@ -2567,6 +2624,10 @@ static int __init init_hyp_mode(void)
 			goto out_err;
 		}
 
+		err = init_pkvm_host_sve_state();
+		if (err)
+			goto out_err;
+
 		err = kvm_hyp_init_protection(hyp_va_bits);
 		if (err) {
 			kvm_err("Failed to init hyp memory protection\n");
@@ -2731,6 +2792,13 @@ static __init int kvm_arm_init(void)
 	if (err)
 		goto out_subs;
 
+	/*
+	 * This should be called after initialization is done and failure isn't
+	 * possible anymore.
+	 */
+	if (!in_hyp_mode)
+		finalize_init_hyp_mode();
+
 	kvm_arm_initialised = true;
 
 	return 0;
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 859f22f754d3..5c8cd806efb9 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -67,6 +67,28 @@ static int divide_memory_pool(void *virt, unsigned long size)
 	return 0;
 }
 
+static int pkvm_create_host_sve_mappings(void)
+{
+	void *start, *end;
+	int ret, i;
+
+	if (!system_supports_sve())
+		return 0;
+
+	for (i = 0; i < hyp_nr_cpus; i++) {
+		struct kvm_host_data *host_data = per_cpu_ptr(&kvm_host_data, i);
+		struct user_sve_state *sve_state = host_data->sve_state;
+
+		start = kern_hyp_va(sve_state);
+		end = start + PAGE_ALIGN(pkvm_host_sve_state_size());
+		ret = pkvm_create_mappings(start, end, PAGE_HYP);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
 static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
 				 unsigned long *per_cpu_base,
 				 u32 hyp_va_bits)
@@ -125,6 +147,8 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
 			return ret;
 	}
 
+	pkvm_create_host_sve_mappings();
+
 	/*
 	 * Map the host sections RO in the hypervisor, but transfer the
 	 * ownership from the host to the hypervisor itself to make sure they
-- 
2.45.0.rc1.225.g2a3ae87e7f-goog



* [PATCH v1 6/7] KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM
From: Fuad Tabba @ 2024-05-17 13:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
	catalin.marinas, philmd, james.morse, suzuki.poulose,
	oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
	yuzenghui

When running in protected mode we don't want to leak protected
guest state to the host, including whether a guest has used
fpsimd/sve. Therefore, eagerly restore the host state on guest
exit when running in protected mode; this restore is needed
only if the guest has actually used fpsimd/sve.

As a future optimisation, we could go back to lazily restoring
the state at the host after exiting non-protected guests.
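The save-path decision this patch adds to kvm_hyp_save_fpsimd_host() can
be sketched as below; the enum and helper names are illustrative, not the
kernel's:

```c
/* Sketch of the nVHE save-path split: protected mode on an SVE-capable
 * CPU saves the full SVE state at hyp (so the host cannot observe
 * guest fpsimd/sve usage or the upper SVE bits), while all other
 * configurations save only the fpsimd registers. */
#include <assert.h>
#include <stdbool.h>

enum saved_state { SAVED_FPSIMD, SAVED_FULL_SVE };

static enum saved_state save_host_fp_state(bool protected_mode,
					   bool cpu_has_sve)
{
	if (protected_mode && cpu_has_sve)
		return SAVED_FULL_SVE;	/* analogue of __hyp_sve_save_host() */
	return SAVED_FPSIMD;		/* analogue of __fpsimd_save_state() */
}
```

Only the protected-mode, SVE-capable combination takes the new path; the
other three combinations keep the existing fpsimd-only save.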

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/hyp/include/hyp/switch.h | 10 +++++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c      | 57 +++++++++++++++++++++++--
 arch/arm64/kvm/hyp/nvhe/pkvm.c          |  2 +
 arch/arm64/kvm/hyp/nvhe/switch.c        | 18 +++++++-
 4 files changed, 82 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index d15272445db2..4dc620499e5d 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -320,6 +320,16 @@ static inline void __hyp_sve_restore_guest(struct kvm_vcpu *vcpu)
 	write_sysreg_el1(__vcpu_sys_reg(vcpu, ZCR_EL1), SYS_ZCR);
 }
 
+static inline void __hyp_sve_save_host(void)
+{
+	struct user_sve_state *sve_state = *host_data_ptr(sve_state);
+
+	sve_state->zcr_el1 = read_sysreg_el1(SYS_ZCR);
+	sve_cond_update_zcr_vq(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
+	__sve_save_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_sve_max_vl),
+			 &sve_state->fpsr);
+}
+
 static void __deactivate_fpsimd_sve_traps(struct kvm_vcpu *vcpu);
 static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu);
 
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index d5c48dc98f67..8b9556f5dee3 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -23,20 +23,70 @@ DEFINE_PER_CPU(struct kvm_nvhe_init_params, kvm_init_params);
 
 void __kvm_hyp_host_forward_smc(struct kvm_cpu_context *host_ctxt);
 
+static void __hyp_sve_save_guest(struct kvm_vcpu *vcpu)
+{
+	__vcpu_sys_reg(vcpu, ZCR_EL1) = read_sysreg_el1(SYS_ZCR);
+	sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1, SYS_ZCR_EL2);
+	__sve_save_state(vcpu_sve_pffr(vcpu), &vcpu->arch.ctxt.fp_regs.fpsr);
+	sve_cond_update_zcr_vq(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
+}
+
+static void __hyp_sve_restore_host(void)
+{
+	struct user_sve_state *sve_state = *host_data_ptr(sve_state);
+
+	sve_cond_update_zcr_vq(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
+	__sve_restore_state(sve_state->sve_regs + sve_ffr_offset(kvm_host_sve_max_vl),
+			    &sve_state->fpsr);
+	write_sysreg_el1(sve_state->zcr_el1, SYS_ZCR);
+}
+
+static void fpsimd_sve_flush(void)
+{
+	*host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED;
+}
+
+static void fpsimd_sve_sync(struct kvm_vcpu *vcpu)
+{
+	if (!guest_owns_fp_regs())
+		return;
+
+	if (has_hvhe())
+		sysreg_clear_set(cpacr_el1, 0,
+				 (CPACR_EL1_ZEN_EL1EN | CPACR_EL1_ZEN_EL0EN |
+				  CPACR_EL1_FPEN_EL1EN | CPACR_EL1_FPEN_EL0EN));
+	else
+		sysreg_clear_set(cptr_el2, CPTR_EL2_TZ | CPTR_EL2_TFP, 0);
+	isb();
+
+	if (vcpu_has_sve(vcpu))
+		__hyp_sve_save_guest(vcpu);
+	else
+		__fpsimd_save_state(&vcpu->arch.ctxt.fp_regs);
+
+	if (system_supports_sve())
+		__hyp_sve_restore_host();
+	else
+		__fpsimd_restore_state(*host_data_ptr(fpsimd_state));
+
+	*host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED;
+}
+
 static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 {
 	struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
 
+	fpsimd_sve_flush();
+
 	hyp_vcpu->vcpu.arch.ctxt	= host_vcpu->arch.ctxt;
 
 	hyp_vcpu->vcpu.arch.sve_state	= kern_hyp_va(host_vcpu->arch.sve_state);
-	hyp_vcpu->vcpu.arch.sve_max_vl	= host_vcpu->arch.sve_max_vl;
+	hyp_vcpu->vcpu.arch.sve_max_vl	= min(host_vcpu->arch.sve_max_vl, kvm_host_sve_max_vl);
 
 	hyp_vcpu->vcpu.arch.hw_mmu	= host_vcpu->arch.hw_mmu;
 
 	hyp_vcpu->vcpu.arch.hcr_el2	= host_vcpu->arch.hcr_el2;
 	hyp_vcpu->vcpu.arch.mdcr_el2	= host_vcpu->arch.mdcr_el2;
-	hyp_vcpu->vcpu.arch.cptr_el2	= host_vcpu->arch.cptr_el2;
 
 	hyp_vcpu->vcpu.arch.iflags	= host_vcpu->arch.iflags;
 
@@ -54,10 +104,11 @@ static void sync_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 	struct vgic_v3_cpu_if *host_cpu_if = &host_vcpu->arch.vgic_cpu.vgic_v3;
 	unsigned int i;
 
+	fpsimd_sve_sync(&hyp_vcpu->vcpu);
+
 	host_vcpu->arch.ctxt		= hyp_vcpu->vcpu.arch.ctxt;
 
 	host_vcpu->arch.hcr_el2		= hyp_vcpu->vcpu.arch.hcr_el2;
-	host_vcpu->arch.cptr_el2	= hyp_vcpu->vcpu.arch.cptr_el2;
 
 	host_vcpu->arch.fault		= hyp_vcpu->vcpu.arch.fault;
 
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 25e9a94f6d76..feb27b4ce459 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -588,6 +588,8 @@ int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
 	if (ret)
 		unmap_donated_memory(hyp_vcpu, sizeof(*hyp_vcpu));
 
+	hyp_vcpu->vcpu.arch.cptr_el2 = kvm_get_reset_cptr_el2(&hyp_vcpu->vcpu);
+
 	return ret;
 }
 
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index 935f3db245e9..a2e1419f0f0f 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -114,7 +114,8 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
 
 static void __deactivate_fpsimd_sve_traps(struct kvm_vcpu *vcpu)
 {
-	bool clear_sve_traps = vcpu_has_sve(vcpu);
+	bool clear_sve_traps = vcpu_has_sve(vcpu) ||
+			       (is_protected_kvm_enabled() && system_supports_sve());
 	u64 reg;
 
 	if (has_hvhe()) {
@@ -205,7 +206,20 @@ static bool kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu, u64 *exit_code)
 
 static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
 {
-	__fpsimd_save_state(*host_data_ptr(fpsimd_state));
+	/*
+	 * Non-protected kvm relies on the host restoring its sve state.
+	 * Protected kvm restores the host's sve state so as not to reveal that
+	 * fpsimd was used by a guest or leak upper sve bits.
+	 */
+	if (unlikely(is_protected_kvm_enabled() && system_supports_sve())) {
+		__hyp_sve_save_host();
+
+		/* Re-enable SVE traps for guests that do not support it. */
+		if (!vcpu_has_sve(vcpu))
+			sysreg_clear_set(cptr_el2, 0, CPTR_EL2_TZ);
+	} else {
+		__fpsimd_save_state(*host_data_ptr(fpsimd_state));
+	}
 }
 
 static const exit_handler_fn hyp_exit_handlers[] = {
-- 
2.45.0.rc1.225.g2a3ae87e7f-goog


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v1 7/7] KVM: arm64: Consolidate initializing the host data's fpsimd_state/sve in pKVM
  2024-05-17 13:18 [PATCH v1 0/7] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode Fuad Tabba
                   ` (5 preceding siblings ...)
  2024-05-17 13:18 ` [PATCH v1 6/7] KVM: arm64: Eagerly restore host fpsimd/sve " Fuad Tabba
@ 2024-05-17 13:18 ` Fuad Tabba
  2024-05-17 17:30 ` [PATCH v1 0/7] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode Oliver Upton
  7 siblings, 0 replies; 23+ messages in thread
From: Fuad Tabba @ 2024-05-17 13:18 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: maz, will, qperret, tabba, seanjc, alexandru.elisei,
	catalin.marinas, philmd, james.morse, suzuki.poulose,
	oliver.upton, mark.rutland, broonie, joey.gouly, rananta,
	yuzenghui

Now that we have introduced finalize_init_hyp_mode(), let's
consolidate the initialization of the host_data fpsimd_state and
sve state.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_host.h      | 10 ++++++++--
 arch/arm64/kvm/arm.c                   | 18 ++++++++++++------
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h |  1 -
 arch/arm64/kvm/hyp/nvhe/pkvm.c         | 11 -----------
 arch/arm64/kvm/hyp/nvhe/setup.c        |  1 -
 5 files changed, 20 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 7b3745ef1d73..8a170f314498 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -536,8 +536,14 @@ struct kvm_cpu_context {
 struct kvm_host_data {
 	struct kvm_cpu_context host_ctxt;
 
-	struct user_fpsimd_state *fpsimd_state;	/* hyp VA */
-	struct user_sve_state *sve_state;	/* hyp VA */
+	/*
+	 * All pointers in this union are hyp VA.
+	 * sve_state is only used in pKVM and if system_supports_sve().
+	 */
+	union {
+		struct user_fpsimd_state *fpsimd_state;
+		struct user_sve_state *sve_state;
+	};
 
 	/* Ownership of the FP regs */
 	enum {
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index a9b1b0e9c319..a1c7e0ad6951 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -2445,14 +2445,20 @@ static void finalize_init_hyp_mode(void)
 {
 	int cpu;
 
-	if (!is_protected_kvm_enabled() || !system_supports_sve())
-		return;
-
 	for_each_possible_cpu(cpu) {
-		struct user_sve_state *sve_state;
+		if (system_supports_sve() && is_protected_kvm_enabled()) {
+			struct user_sve_state *sve_state;
 
-		sve_state = per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state;
-		per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state = kern_hyp_va(sve_state);
+			sve_state = per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state;
+			per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state =
+				kern_hyp_va(sve_state);
+		} else {
+			struct user_fpsimd_state *fpsimd_state;
+
+			fpsimd_state = &per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->host_ctxt.fp_regs;
+			per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->fpsimd_state =
+				kern_hyp_va(fpsimd_state);
+		}
 	}
 }
 
diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index 22f374e9f532..24a9a8330d19 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -59,7 +59,6 @@ static inline bool pkvm_hyp_vcpu_is_protected(struct pkvm_hyp_vcpu *hyp_vcpu)
 }
 
 void pkvm_hyp_vm_table_init(void *tbl);
-void pkvm_host_fpsimd_state_init(void);
 
 int __pkvm_init_vm(struct kvm *host_kvm, unsigned long vm_hva,
 		   unsigned long pgd_hva);
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index feb27b4ce459..ea67fcbf8376 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -249,17 +249,6 @@ void pkvm_hyp_vm_table_init(void *tbl)
 	vm_table = tbl;
 }
 
-void pkvm_host_fpsimd_state_init(void)
-{
-	unsigned long i;
-
-	for (i = 0; i < hyp_nr_cpus; i++) {
-		struct kvm_host_data *host_data = per_cpu_ptr(&kvm_host_data, i);
-
-		host_data->fpsimd_state = &host_data->host_ctxt.fp_regs;
-	}
-}
-
 /*
  * Return the hyp vm structure corresponding to the handle.
  */
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 5c8cd806efb9..84f766ab1810 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -324,7 +324,6 @@ void __noreturn __pkvm_init_finalise(void)
 		goto out;
 
 	pkvm_hyp_vm_table_init(vm_table_base);
-	pkvm_host_fpsimd_state_init();
 out:
 	/*
 	 * We tail-called to here from handle___pkvm_init() and will not return,
-- 
2.45.0.rc1.225.g2a3ae87e7f-goog



* Re: [PATCH v1 6/7] KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM
  2024-05-17 13:18 ` [PATCH v1 6/7] KVM: arm64: Eagerly restore host fpsimd/sve " Fuad Tabba
@ 2024-05-17 17:09   ` Oliver Upton
  2024-05-20  7:37     ` Fuad Tabba
  0 siblings, 1 reply; 23+ messages in thread
From: Oliver Upton @ 2024-05-17 17:09 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
	alexandru.elisei, catalin.marinas, philmd, james.morse,
	suzuki.poulose, mark.rutland, broonie, joey.gouly, rananta,
	yuzenghui

Hi Fuad,

On Fri, May 17, 2024 at 02:18:13PM +0100, Fuad Tabba wrote:
>  static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
>  {
> -	__fpsimd_save_state(*host_data_ptr(fpsimd_state));
> +	/*
> +	 * Non-protected kvm relies on the host restoring its sve state.
> +	 * Protected kvm restores the host's sve state as not to reveal that
> +	 * fpsimd was used by a guest nor leak upper sve bits.
> +	 */
> +	if (unlikely(is_protected_kvm_enabled() && system_supports_sve())) {
> +		__hyp_sve_save_host();
> +
> +		/* Re-enable SVE traps for guests that do not support it. */
> +		if (!vcpu_has_sve(vcpu))
> +			sysreg_clear_set(cptr_el2, 0, CPTR_EL2_TZ);

This doesn't account for hVHE. I wonder if we'd be better off abstracting
CPTR_EL2 behind a helper wherever it gets used in nVHE and translating it
into the VHE format behind the scenes:

static inline void __cptr_clear_set_hvhe(u64 cptr_clr, u64 cptr_set)
{
	u64 clr = 0, set = 0;

	if (cptr_clr & CPTR_EL2_TFP)
		set |= CPACR_ELx_FPEN;
	if (cptr_clr & CPTR_EL2_TZ)
		set |= CPACR_ELx_ZEN;
	if (cptr_clr & CPTR_EL2_TSM)
		set |= CPACR_ELx_SMEN;

	if (cptr_set & CPTR_EL2_TFP)
		clr |= CPACR_ELx_FPEN;
	if (cptr_set & CPTR_EL2_TZ)
		clr |= CPACR_ELx_ZEN;
	if (cptr_set & CPTR_EL2_TSM)
		clr |= CPACR_ELx_SMEN;

	sysreg_clear_set(cpacr_el1, clr, set);
}

static inline void cptr_clear_set(u64 clr, u64 set)
{
	if (has_hvhe())
		__cptr_clear_set_hvhe(clr, set);
	else
		sysreg_clear_set(cptr_el2, clr, set);
}

-- 
Thanks,
Oliver


* Re: [PATCH v1 0/7] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode
  2024-05-17 13:18 [PATCH v1 0/7] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode Fuad Tabba
                   ` (6 preceding siblings ...)
  2024-05-17 13:18 ` [PATCH v1 7/7] KVM: arm64: Consolidate initializing the host data's fpsimd_state/sve " Fuad Tabba
@ 2024-05-17 17:30 ` Oliver Upton
  2024-05-17 18:19   ` Mark Brown
  7 siblings, 1 reply; 23+ messages in thread
From: Oliver Upton @ 2024-05-17 17:30 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
	alexandru.elisei, catalin.marinas, philmd, james.morse,
	suzuki.poulose, mark.rutland, broonie, joey.gouly, rananta,
	yuzenghui

Hi Fuad,

On Fri, May 17, 2024 at 02:18:07PM +0100, Fuad Tabba wrote:
> With the KVM host data rework [1], handling of fpsimd and sve
> state in protected mode is done at hyp. For protected VMs, we
> don't want to leak any guest state to the host, including whether
> a guest has used fpsimd/sve.
> 
> To complete the work started with the host data rework, in
> regards to protected mode, ensure that the host's fpsimd context
> and its sve context are restored on guest exit, since the rework
> has hidden the fpsimd/sve state from the host.
> 
> This patch series eagerly restores the host fpsimd/sve state on
> guest exit when running in protected mode, which happens only if
> the guest has used fpsimd/sve. This means that the saving of the
> state is lazy, similar to the behavior of KVM in other modes, but
> the restoration of the host state is eager.

Hmm... Is there any reason why we need to be concerned about preserving
host SVE state?

The syscall ABI has it that only the first 128 bits of the vector
registers are preserved by the kernel, and I see no reason why we
couldn't apply a similar restriction to KVM_RUN HVCs into EL2. We'd need
to eagerly flush the vector registers on entry to avoid disclosing guest
usage of SVE.

What you have is certainly correct, I just wonder if we're going out of
our way to save/restore 0's for larger VLs.

-- 
Thanks,
Oliver


* Re: [PATCH v1 0/7] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode
  2024-05-17 17:30 ` [PATCH v1 0/7] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode Oliver Upton
@ 2024-05-17 18:19   ` Mark Brown
  2024-05-20  7:35     ` Fuad Tabba
  0 siblings, 1 reply; 23+ messages in thread
From: Mark Brown @ 2024-05-17 18:19 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Fuad Tabba, kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
	alexandru.elisei, catalin.marinas, philmd, james.morse,
	suzuki.poulose, mark.rutland, joey.gouly, rananta, yuzenghui


On Fri, May 17, 2024 at 05:30:54PM +0000, Oliver Upton wrote:

> Hmm... Is there any reason why we need to be concerned about preserving
> host SVE state?

> The syscall ABI has it that only the first 128 bits of the vector
> registers are preserved by the kernel, and I see no reason why we
> couldn't apply a similar restriction to KVM_RUN HVCs into EL2. We'd need
> to eagerly flush the vector registers on entry to avoid disclosing guest
> usage of SVE.

> What you have is certainly correct, I just wonder if we're going out of
> our way to save/restore 0's for larger VLs.

Not just larger VLs, there are also the P registers even for 128-bit SVE.

I think it'd be sensible to discard.  A big part of why the host ABI is
like that is that the AAPCS makes the SVE specific state caller
preserved on function calls, with syscalls mirroring that.  This means
that even if the kernel is using FP the HVC would need to be inline in a
function using SVE in order to get any state that needs to be preserved
in there, or there'd need to be some other non-AAPCS thing going on.  We
already ensure that any EL0 state is saved prior to trying to run a VM,
I've not checked the interaction with pKVM here but if there's any
issues I'd hope it's not too difficult to close them.


* Re: [PATCH v1 0/7] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode
  2024-05-17 18:19   ` Mark Brown
@ 2024-05-20  7:35     ` Fuad Tabba
  2024-05-20  8:11       ` Marc Zyngier
  0 siblings, 1 reply; 23+ messages in thread
From: Fuad Tabba @ 2024-05-20  7:35 UTC (permalink / raw)
  To: Mark Brown
  Cc: Oliver Upton, kvmarm, linux-arm-kernel, maz, will, qperret,
	seanjc, alexandru.elisei, catalin.marinas, philmd, james.morse,
	suzuki.poulose, mark.rutland, joey.gouly, rananta, yuzenghui

Hi Oliver and Mark,

On Fri, May 17, 2024 at 7:19 PM Mark Brown <broonie@kernel.org> wrote:
>
> On Fri, May 17, 2024 at 05:30:54PM +0000, Oliver Upton wrote:
>
> > Hmm... Is there any reason why we need to be concerned about preserving
> > host SVE state?
>
> > The syscall ABI has it that only the first 128 bits of the vector
> > registers are preserved by the kernel, and I see no reason why we
> > couldn't apply a similar restriction to KVM_RUN HVCs into EL2. We'd need
> > to eagerly flush the vector registers on entry to avoid disclosing guest
> > usage of SVE.
>
> > What you have is certainly correct, I just wonder if we're going out of
> > our way to save/restore 0's for larger VLs.
>
> Not just larger VLs, there's also the P registers even for 128 bit SVE.
>
> I think it'd be sensible to discard.  A big part of why the host ABI is
> like that is that the AAPCS makes the SVE specific state caller
> preserved on function calls, with syscalls mirroring that.  This means
> that even if the kernel is using FP the HVC would need to be inline in a
> function using SVE in order to get any state that needs to be preserved
> in there, or there'd need to be some other non-AAPCS thing going on.  We
> already ensure that any EL0 state is saved prior to trying to run a VM,
> I've not checked the interaction with pKVM here but if there's any
> issues I'd hope it's not too difficult to close them.

The reason for that is that in pKVM we want to avoid leaking any
information about protected VM activity to the host, including whether
the VM might have performed fpsimd/sve operations. Therefore, we need
to ensure that the host SVE state looks the same after a protected
guest has run as it did before a protected guest has run.

It would be correct to only save/restore the host's fpsimd state
(i.e., first 128 bits of the vector registers), which is what KVM does
in other modes. However, unless we always zero out the rest of the
state, regardless of whether the protected guest has used fpsimd/sve,
the host would be able to find out that the guest has in fact
performed fpsimd/sve operations.

This isn't necessary for non-protected VMs, but Marc thought that for
now it would be better to simplify things and have pKVM behave the
same way for both protected and non-protected VMs. As a future
optimization for non-protected VMs, we could have them behave as VMs
in other modes.

Thanks,
/fuad


* Re: [PATCH v1 6/7] KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM
  2024-05-17 17:09   ` Oliver Upton
@ 2024-05-20  7:37     ` Fuad Tabba
  2024-05-20  8:05       ` Marc Zyngier
  0 siblings, 1 reply; 23+ messages in thread
From: Fuad Tabba @ 2024-05-20  7:37 UTC (permalink / raw)
  To: Oliver Upton
  Cc: kvmarm, linux-arm-kernel, maz, will, qperret, seanjc,
	alexandru.elisei, catalin.marinas, philmd, james.morse,
	suzuki.poulose, mark.rutland, broonie, joey.gouly, rananta,
	yuzenghui

Hi Oliver,

On Fri, May 17, 2024 at 6:09 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> Hi Fuad,
>
> On Fri, May 17, 2024 at 02:18:13PM +0100, Fuad Tabba wrote:
> >  static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
> >  {
> > -     __fpsimd_save_state(*host_data_ptr(fpsimd_state));
> > +     /*
> > +      * Non-protected kvm relies on the host restoring its sve state.
> > +      * Protected kvm restores the host's sve state as not to reveal that
> > +      * fpsimd was used by a guest nor leak upper sve bits.
> > +      */
> > +     if (unlikely(is_protected_kvm_enabled() && system_supports_sve())) {
> > +             __hyp_sve_save_host();
> > +
> > +             /* Re-enable SVE traps for guests that do not support it. */
> > +             if (!vcpu_has_sve(vcpu))
> > +                     sysreg_clear_set(cptr_el2, 0, CPTR_EL2_TZ);
>
> This doesn't account for hVHE. I wonder we'd be better off abstracting
> CPTR_EL2 behind a helper wherever it gets used in nVHE and translate
> into the VHE-format behind the scenes:

Right! Too many modes to keep track of :)

Abstracting cptr_el2 would make things clearer and less error-prone.
I'll do that on the respin.

Cheers,
/fuad

>
> static inline void __cptr_clear_set_hvhe(u64 cptr_clr, u64 cptr_set)
> {
>         u64 clr = 0, set = 0;
>
>         if (cptr_clr & CPTR_EL2_TFP)
>                 set |= CPACR_ELx_FPEN;
>         if (cptr_clr & CPTR_EL2_TZ)
>                 set |= CPACR_ELx_ZEN;
>         if (cptr_clr & CPTR_EL2_TSM)
>                 set |= CPACR_ELx_SMEN;
>
>         if (cptr_set & CPTR_EL2_TFP)
>                 clr |= CPACR_ELx_FPEN;
>         if (cptr_set & CPTR_EL2_TZ)
>                 clr |= CPACR_ELx_ZEN;
>         if (cptr_set & CPTR_EL2_TSM)
>                 clr |= CPACR_ELx_SMEN;
>
>         sysreg_clear_set(cpacr_el1, clr, set);
> }
>
> static inline void cptr_clear_set(u64 clr, u64 set)
> {
>         if (has_hvhe())
>                 __cptr_clear_set_hvhe(clr, set);
>         else
>                 sysreg_clear_set(cptr_el2, clr, set);
> }
>
> --
> Thanks,
> Oliver


* Re: [PATCH v1 6/7] KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM
  2024-05-20  7:37     ` Fuad Tabba
@ 2024-05-20  8:05       ` Marc Zyngier
  2024-05-20  8:53         ` Fuad Tabba
  2024-05-20 17:08         ` Oliver Upton
  0 siblings, 2 replies; 23+ messages in thread
From: Marc Zyngier @ 2024-05-20  8:05 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: Oliver Upton, kvmarm, linux-arm-kernel, will, qperret, seanjc,
	alexandru.elisei, catalin.marinas, philmd, james.morse,
	suzuki.poulose, mark.rutland, broonie, joey.gouly, rananta,
	yuzenghui

On Mon, 20 May 2024 08:37:22 +0100,
Fuad Tabba <tabba@google.com> wrote:
> 
> Hi Oliver,
> 
> On Fri, May 17, 2024 at 6:09 PM Oliver Upton <oliver.upton@linux.dev> wrote:
> >
> > Hi Fuad,
> >
> > On Fri, May 17, 2024 at 02:18:13PM +0100, Fuad Tabba wrote:
> > >  static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
> > >  {
> > > -     __fpsimd_save_state(*host_data_ptr(fpsimd_state));
> > > +     /*
> > > +      * Non-protected kvm relies on the host restoring its sve state.
> > > +      * Protected kvm restores the host's sve state as not to reveal that
> > > +      * fpsimd was used by a guest nor leak upper sve bits.
> > > +      */
> > > +     if (unlikely(is_protected_kvm_enabled() && system_supports_sve())) {
> > > +             __hyp_sve_save_host();
> > > +
> > > +             /* Re-enable SVE traps for guests that do not support it. */
> > > +             if (!vcpu_has_sve(vcpu))
> > > +                     sysreg_clear_set(cptr_el2, 0, CPTR_EL2_TZ);
> >
> > This doesn't account for hVHE. I wonder we'd be better off abstracting
> > CPTR_EL2 behind a helper wherever it gets used in nVHE and translate
> > into the VHE-format behind the scenes:
> 
> Right! Too many modes to keep track of :)
> 
> Abstracting cptr_el2 would make things clearer and less error-prone.
> I'll do that on the respin.

If we're going with the conversion game, then I'd suggest you use the
VHE format as the reference, and convert it to nVHE on the fly.
That's for a few reasons:

- like it or not, nVHE is going the way of the dodo. I love my v8.0
  hardware to bits, but it sucks, and nVHE is now optional anyway.

- Keeping everything in the VHE format helps drawing a parallel with
  what is happening in the kernel (you grep for the same symbols).

- One day, I hope to be able to rip any form of SVE/SME support out of
  nVHE and only keep it for hVHE, because there are no ARMv8.0
  implementations with these extensions (apart from SW models).  One
  day...

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


* Re: [PATCH v1 0/7] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode
  2024-05-20  7:35     ` Fuad Tabba
@ 2024-05-20  8:11       ` Marc Zyngier
  2024-05-20 17:37         ` Oliver Upton
  0 siblings, 1 reply; 23+ messages in thread
From: Marc Zyngier @ 2024-05-20  8:11 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: Mark Brown, Oliver Upton, kvmarm, linux-arm-kernel, will, qperret,
	seanjc, alexandru.elisei, catalin.marinas, philmd, james.morse,
	suzuki.poulose, mark.rutland, joey.gouly, rananta, yuzenghui

On Mon, 20 May 2024 08:35:47 +0100,
Fuad Tabba <tabba@google.com> wrote:
> 
> Hi Oliver and Mark,
> 
> On Fri, May 17, 2024 at 7:19 PM Mark Brown <broonie@kernel.org> wrote:
> >
> > On Fri, May 17, 2024 at 05:30:54PM +0000, Oliver Upton wrote:
> >
> > > Hmm... Is there any reason why we need to be concerned about preserving
> > > host SVE state?
> >
> > > The syscall ABI has it that only the first 128 bits of the vector
> > > registers are preserved by the kernel, and I see no reason why we
> > > couldn't apply a similar restriction to KVM_RUN HVCs into EL2. We'd need
> > > to eagerly flush the vector registers on entry to avoid disclosing guest
> > > usage of SVE.
> >
> > > What you have is certainly correct, I just wonder if we're going out of
> > > our way to save/restore 0's for larger VLs.
> >
> > Not just larger VLs, there's also the P registers even for 128 bit SVE.
> >
> > I think it'd be sensible to discard.  A big part of why the host ABI is
> > like that is that the AAPCS makes the SVE specific state caller
> > preserved on function calls, with syscalls mirroring that.  This means
> > that even if the kernel is using FP the HVC would need to be inline in a
> > function using SVE in order to get any state that needs to be preserved
> > in there, or there'd need to be some other non-AAPCS thing going on.  We
> > already ensure that any EL0 state is saved prior to trying to run a VM,
> > I've not checked the interaction with pKVM here but if there's any
> > issues I'd hope it's not too difficult to close them.
> 
> The reason for that is that in pKVM we want to avoid leaking any
> information about protected VM activity to the host, including whether
> the VM might have performed fpsimd/sve operations. Therefore, we need
> to ensure that the host SVE state looks the same after a protected
> guest has run as it did before a protected guest has run.
> 
> It would be correct to only save/restore the host's fpsimd state
> (i.e., first 128 bits of the vector registers), which is what KVM does
> in other modes. However, unless we always zero out the rest of the
> state, regardless whether the protected guest has used fpsimd/sve,
> then the host would be able to find out that the guest has in fact
> performed fpsimd/sve operations.
> 
> This isn't necessary for non-protected VMs, but Marc thought that for
> now it would be better to simplify things and have pKVM behave the
> same way for both protected and non-protected VMs. As a future
> optimization for non-protected VMs, we could have them behave as VMs
> in other modes.

And I stand by what I said. Having a hybrid mode is a maintenance
burden, and it will absolutely lead to some sort of horrible bugs (just
take a look at the mailing list to see that we have no shortage of bugs
related to lazy FP/SVE handling).

If someone is desperate for lazy handling, then the lazy handling
should apply to both protected and non-protected VMs so that we can
actually reason about it.

	M.

-- 
Without deviation from the norm, progress is not possible.


* Re: [PATCH v1 6/7] KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM
  2024-05-20  8:05       ` Marc Zyngier
@ 2024-05-20  8:53         ` Fuad Tabba
  2024-05-20 17:08         ` Oliver Upton
  1 sibling, 0 replies; 23+ messages in thread
From: Fuad Tabba @ 2024-05-20  8:53 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Oliver Upton, kvmarm, linux-arm-kernel, will, qperret, seanjc,
	alexandru.elisei, catalin.marinas, philmd, james.morse,
	suzuki.poulose, mark.rutland, broonie, joey.gouly, rananta,
	yuzenghui

Hi Marc,

On Mon, May 20, 2024 at 9:05 AM Marc Zyngier <maz@kernel.org> wrote:
>
> On Mon, 20 May 2024 08:37:22 +0100,
> Fuad Tabba <tabba@google.com> wrote:
> >
> > Hi Oliver,
> >
> > On Fri, May 17, 2024 at 6:09 PM Oliver Upton <oliver.upton@linux.dev> wrote:
> > >
> > > Hi Fuad,
> > >
> > > On Fri, May 17, 2024 at 02:18:13PM +0100, Fuad Tabba wrote:
> > > >  static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
> > > >  {
> > > > -     __fpsimd_save_state(*host_data_ptr(fpsimd_state));
> > > > +     /*
> > > > +      * Non-protected kvm relies on the host restoring its sve state.
> > > > +      * Protected kvm restores the host's sve state as not to reveal that
> > > > +      * fpsimd was used by a guest nor leak upper sve bits.
> > > > +      */
> > > > +     if (unlikely(is_protected_kvm_enabled() && system_supports_sve())) {
> > > > +             __hyp_sve_save_host();
> > > > +
> > > > +             /* Re-enable SVE traps for guests that do not support it. */
> > > > +             if (!vcpu_has_sve(vcpu))
> > > > +                     sysreg_clear_set(cptr_el2, 0, CPTR_EL2_TZ);
> > >
> > > This doesn't account for hVHE. I wonder we'd be better off abstracting
> > > CPTR_EL2 behind a helper wherever it gets used in nVHE and translate
> > > into the VHE-format behind the scenes:
> >
> > Right! Too many modes to keep track of :)
> >
> > Abstracting cptr_el2 would make things clearer and less error-prone.
> > I'll do that on the respin.
>
> If we're going with the conversion game, then I'd suggest you use the
> VHE format as the reference, and convert it to nVHE on the flight.
> That's for a few reasons:

Will do.

Cheers,
/fuad


>
> - like it or not, nVHE is going the way of the dodo. I love my v8.0
>   hardware to bits, but it sucks, and nVHE is now optional anyway.
>
> - Keeping everything in the VHE format helps drawing a parallel with
>   what is happening in the kernel (you grep for the same symbols).
>
> - One day, I hope to be able to rip any form of SVE/SME support out of
>   nVHE and only keep it for hVHE, because there is no ARMv8.0
>   implementations with these extensions (apart from SW models).  One
>   day...
>
> Thanks,
>
>         M.
>
> --
> Without deviation from the norm, progress is not possible.


* Re: [PATCH v1 6/7] KVM: arm64: Eagerly restore host fpsimd/sve state in pKVM
  2024-05-20  8:05       ` Marc Zyngier
  2024-05-20  8:53         ` Fuad Tabba
@ 2024-05-20 17:08         ` Oliver Upton
  1 sibling, 0 replies; 23+ messages in thread
From: Oliver Upton @ 2024-05-20 17:08 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Fuad Tabba, kvmarm, linux-arm-kernel, will, qperret, seanjc,
	alexandru.elisei, catalin.marinas, philmd, james.morse,
	suzuki.poulose, mark.rutland, broonie, joey.gouly, rananta,
	yuzenghui

On Mon, May 20, 2024 at 09:05:34AM +0100, Marc Zyngier wrote:
> On Mon, 20 May 2024 08:37:22 +0100,
> Fuad Tabba <tabba@google.com> wrote:
> > 
> > Hi Oliver,
> > 
> > On Fri, May 17, 2024 at 6:09 PM Oliver Upton <oliver.upton@linux.dev> wrote:
> > >
> > > Hi Fuad,
> > >
> > > On Fri, May 17, 2024 at 02:18:13PM +0100, Fuad Tabba wrote:
> > > >  static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
> > > >  {
> > > > -     __fpsimd_save_state(*host_data_ptr(fpsimd_state));
> > > > +     /*
> > > > +      * Non-protected kvm relies on the host restoring its sve state.
> > > > +      * Protected kvm restores the host's sve state as not to reveal that
> > > > +      * fpsimd was used by a guest nor leak upper sve bits.
> > > > +      */
> > > > +     if (unlikely(is_protected_kvm_enabled() && system_supports_sve())) {
> > > > +             __hyp_sve_save_host();
> > > > +
> > > > +             /* Re-enable SVE traps for guests that do not support it. */
> > > > +             if (!vcpu_has_sve(vcpu))
> > > > +                     sysreg_clear_set(cptr_el2, 0, CPTR_EL2_TZ);
> > >
> > > This doesn't account for hVHE. I wonder we'd be better off abstracting
> > > CPTR_EL2 behind a helper wherever it gets used in nVHE and translate
> > > into the VHE-format behind the scenes:
> > 
> > Right! Too many modes to keep track of :)
> > 
> > Abstracting cptr_el2 would make things clearer and less error-prone.
> > I'll do that on the respin.
> 
> If we're going with the conversion game, then I'd suggest you use the
> VHE format as the reference, and convert it to nVHE on the flight.
> That's for a few reasons:

Agreed, I was thinking of hVHE as this 'thing on the side', but really
it is the direction of the architecture :) So yeah, prefer your
suggestion.

-- 
Thanks,
Oliver


* Re: [PATCH v1 0/7] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode
  2024-05-20  8:11       ` Marc Zyngier
@ 2024-05-20 17:37         ` Oliver Upton
  2024-05-20 17:53           ` Mark Brown
  2024-05-20 17:57           ` Fuad Tabba
  0 siblings, 2 replies; 23+ messages in thread
From: Oliver Upton @ 2024-05-20 17:37 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: Fuad Tabba, Mark Brown, kvmarm, linux-arm-kernel, will, qperret,
	seanjc, alexandru.elisei, catalin.marinas, philmd, james.morse,
	suzuki.poulose, mark.rutland, joey.gouly, rananta, yuzenghui

On Mon, May 20, 2024 at 09:11:13AM +0100, Marc Zyngier wrote:
> On Mon, 20 May 2024 08:35:47 +0100, Fuad Tabba <tabba@google.com> wrote:
> > The reason for that is that in pKVM we want to avoid leaking any
> > information about protected VM activity to the host, including whether
> > the VM might have performed fpsimd/sve operations. Therefore, we need
> > to ensure that the host SVE state looks the same after a protected
> > guest has run as it did before a protected guest has run.

Wouldn't it be equally valid to just zero the state that will not be
preserved regardless of whether or not the guest used fpsimd/sve?

> > It would be correct to only save/restore the host's fpsimd state
> > (i.e., first 128 bits of the vector registers), which is what KVM does
> > in other modes. However, unless we always zero out the rest of the
> > state, regardless of whether the protected guest has used fpsimd/sve,
> > then the host would be able to find out that the guest has in fact
> > performed fpsimd/sve operations.
> > 
> > This isn't necessary for non-protected VMs, but Marc thought that for
> > now it would be better to simplify things and have pKVM behave the
> > same way for both protected and non-protected VMs. As a future
> > optimization for non-protected VMs, we could have them behave as VMs
> > in other modes.
> 
> And I stand by what I said. Having a hybrid mode is a maintenance
> burden, and it will absolutely lead to some sort of horrible bugs (it
> just take a look at the mailing list to see that we have no shortage
> of bugs related to lazy FP/SVE handling).

Agree, but I don't think the suggestion is in any way incompatible with
eager save/restore of FP/SVE state.

From the looks of it, we're *still* adding protected-mode specialization
to save/restore the host's SVE state, even though we decided in commit
8383741ab2e7 ("KVM: arm64: Get rid of host SVE tracking/saving") that
this was completely unnecessary in non-protected configurations.

What I'm instead suggesting is that we make it part of the __kvm_vcpu_run() API
that the non-overlapping SVE state gets discarded by the callee, which
would align with an expectation that the host kernel has already done
this upon syscall entry.

Then all of the FPSIMD/SVE save/restore logic we have in the hyp 'just
works' so long as we 0 the SVE registers before loading in the host's
FPSIMD state.

-- 
Thanks,
Oliver

* Re: [PATCH v1 0/7] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode
  2024-05-20 17:37         ` Oliver Upton
@ 2024-05-20 17:53           ` Mark Brown
  2024-05-20 17:59             ` Fuad Tabba
  2024-05-20 17:57           ` Fuad Tabba
  1 sibling, 1 reply; 23+ messages in thread
From: Mark Brown @ 2024-05-20 17:53 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Marc Zyngier, Fuad Tabba, kvmarm, linux-arm-kernel, will, qperret,
	seanjc, alexandru.elisei, catalin.marinas, philmd, james.morse,
	suzuki.poulose, mark.rutland, joey.gouly, rananta, yuzenghui

On Mon, May 20, 2024 at 05:37:36PM +0000, Oliver Upton wrote:

> Wouldn't it be equally valid to just zero the state that will not be
> preserved regardless of whether or not the guest used fpsimd/sve?

FWIW I've had this benchmarked for some implementations as causing a low
single digits percentage overhead on syscalls (as you'd expect there's
additional overhead when VLs over 128 are supported).

* Re: [PATCH v1 0/7] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode
  2024-05-20 17:37         ` Oliver Upton
  2024-05-20 17:53           ` Mark Brown
@ 2024-05-20 17:57           ` Fuad Tabba
  2024-05-20 20:53             ` Oliver Upton
  1 sibling, 1 reply; 23+ messages in thread
From: Fuad Tabba @ 2024-05-20 17:57 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Marc Zyngier, Mark Brown, kvmarm, linux-arm-kernel, will, qperret,
	seanjc, alexandru.elisei, catalin.marinas, philmd, james.morse,
	suzuki.poulose, mark.rutland, joey.gouly, rananta, yuzenghui

Hi Oliver,

On Mon, May 20, 2024 at 6:37 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> On Mon, May 20, 2024 at 09:11:13AM +0100, Marc Zyngier wrote:
> > On Mon, 20 May 2024 08:35:47 +0100, Fuad Tabba <tabba@google.com> wrote:
> > > The reason for that is that in pKVM we want to avoid leaking any
> > > information about protected VM activity to the host, including whether
> > > the VM might have performed fpsimd/sve operations. Therefore, we need
> > > to ensure that the host SVE state looks the same after a protected
> > > guest has run as it did before a protected guest has run.
>
> Wouldn't it be equally valid to just zero the state that will not be
> preserved regardless of whether or not the guest used fpsimd/sve?

Yes it would. I think I did mention that as an option. However, that
would need to be done at every protected guest exit, whereas restoring
the host SVE state only needs to be done if the guest has used
fpsimd/sve.

I think the code for the former (i.e., zeroing out) would be simpler.
I'm happy to do it that way if you and the others think it's better.

> > > It would be correct to only save/restore the host's fpsimd state
> > > (i.e., first 128 bits of the vector registers), which is what KVM does
> > > in other modes. However, unless we always zero out the rest of the
> > > state, regardless of whether the protected guest has used fpsimd/sve,
> > > then the host would be able to find out that the guest has in fact
> > > performed fpsimd/sve operations.
> > >
> > > This isn't necessary for non-protected VMs, but Marc thought that for
> > > now it would be better to simplify things and have pKVM behave the
> > > same way for both protected and non-protected VMs. As a future
> > > optimization for non-protected VMs, we could have them behave as VMs
> > > in other modes.
> >
> > And I stand by what I said. Having a hybrid mode is a maintenance
> > burden, and it will absolutely lead to some sort of horrible bugs
> > (just take a look at the mailing list to see that we have no shortage
> > of bugs related to lazy FP/SVE handling).
>
> Agree, but I don't think the suggestion is in any way incompatible with
> eager save/restore of FP/SVE state.
>
> From the looks of it, we're *still* adding protected-mode specialization
> to save/restore the host's SVE state, even though we decided in commit
> 8383741ab2e7 ("KVM: arm64: Get rid of host SVE tracking/saving") that
> this was completely unnecessary in non-protected configurations.
>
> What I'm instead suggesting is that we make it part of the __kvm_vcpu_run() API
> that the non-overlapping SVE state gets discarded by the callee, which
> would align with an expectation that the host kernel has already done
> this upon syscall entry.
>
> Then all of the FPSIMD/SVE save/restore logic we have in the hyp 'just
> works' so long as we 0 the SVE registers before loading in the host's
> FPSIMD state.

If Marc is happy with this approach, I could do it that way. Either
way, I'll hack on it and present it as an alternative in my respin.

Cheers,
/fuad

> --
> Thanks,
> Oliver

* Re: [PATCH v1 0/7] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode
  2024-05-20 17:53           ` Mark Brown
@ 2024-05-20 17:59             ` Fuad Tabba
  0 siblings, 0 replies; 23+ messages in thread
From: Fuad Tabba @ 2024-05-20 17:59 UTC (permalink / raw)
  To: Mark Brown
  Cc: Oliver Upton, Marc Zyngier, kvmarm, linux-arm-kernel, will,
	qperret, seanjc, alexandru.elisei, catalin.marinas, philmd,
	james.morse, suzuki.poulose, mark.rutland, joey.gouly, rananta,
	yuzenghui

On Mon, May 20, 2024 at 6:53 PM Mark Brown <broonie@kernel.org> wrote:
>
> On Mon, May 20, 2024 at 05:37:36PM +0000, Oliver Upton wrote:
>
> > Wouldn't it be equally valid to just zero the state that will not be
> > preserved regardless of whether or not the guest used fpsimd/sve?
>
> FWIW I've had this benchmarked for some implementations as causing a low
> single digits percentage overhead on syscalls (as you'd expect there's
> additional overhead when VLs over 128 are supported).

Interesting. Is this an argument in favor of zeroing (i.e., the
overhead is low and acceptable), or against (it is still an overhead)?

Thanks,
/fuad

* Re: [PATCH v1 0/7] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode
  2024-05-20 17:57           ` Fuad Tabba
@ 2024-05-20 20:53             ` Oliver Upton
  2024-05-21 12:27               ` Fuad Tabba
  0 siblings, 1 reply; 23+ messages in thread
From: Oliver Upton @ 2024-05-20 20:53 UTC (permalink / raw)
  To: Fuad Tabba
  Cc: Marc Zyngier, Mark Brown, kvmarm, linux-arm-kernel, will, qperret,
	seanjc, alexandru.elisei, catalin.marinas, philmd, james.morse,
	suzuki.poulose, mark.rutland, joey.gouly, rananta, yuzenghui

Hey Fuad,

On Mon, May 20, 2024 at 06:57:36PM +0100, Fuad Tabba wrote:
> Hi Oliver,
> 
> On Mon, May 20, 2024 at 6:37 PM Oliver Upton <oliver.upton@linux.dev> wrote:
> >
> > On Mon, May 20, 2024 at 09:11:13AM +0100, Marc Zyngier wrote:
> > > On Mon, 20 May 2024 08:35:47 +0100, Fuad Tabba <tabba@google.com> wrote:
> > > > The reason for that is that in pKVM we want to avoid leaking any
> > > > information about protected VM activity to the host, including whether
> > > > the VM might have performed fpsimd/sve operations. Therefore, we need
> > > > to ensure that the host SVE state looks the same after a protected
> > > > guest has run as it did before a protected guest has run.
> >
> > Wouldn't it be equally valid to just zero the state that will not be
> > preserved regardless of whether or not the guest used fpsimd/sve?
> 
> Yes it would. I think I did mention that as an option.

Apologies, I probably missed it earlier on then.

> However, that would need to be done at every protected guest exit, whereas
> restoring the host SVE state only needs to be done if the guest has used
> fpsimd/sve.

Indeed, what I was _hoping_ is that implementations do a decent job of
handling a zeroing idiom for SVE and avoid needing to fetch a bunch of
state out of memory.

> I think the code for the latter (i.e., zeroing out), would be simpler.
> I'm happy to do it that way if you and the others think it's better.

Right, I have no fundamental objections to fully managing the host SVE
state in EL2. Strong preference for something simple + correct in the
interim. Anyway, thanks for suffering through my whining and hopefully
we can land a fix soon :)

-- 
Best,
Oliver

* Re: [PATCH v1 0/7] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode
  2024-05-20 20:53             ` Oliver Upton
@ 2024-05-21 12:27               ` Fuad Tabba
  0 siblings, 0 replies; 23+ messages in thread
From: Fuad Tabba @ 2024-05-21 12:27 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Marc Zyngier, Mark Brown, kvmarm, linux-arm-kernel, will, qperret,
	seanjc, alexandru.elisei, catalin.marinas, philmd, james.morse,
	suzuki.poulose, mark.rutland, joey.gouly, rananta, yuzenghui

Hi Oliver,


On Mon, May 20, 2024 at 9:53 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> Hey Fuad,
>
> On Mon, May 20, 2024 at 06:57:36PM +0100, Fuad Tabba wrote:
> > Hi Oliver,
> >
> > On Mon, May 20, 2024 at 6:37 PM Oliver Upton <oliver.upton@linux.dev> wrote:
> > >
> > > On Mon, May 20, 2024 at 09:11:13AM +0100, Marc Zyngier wrote:
> > > > On Mon, 20 May 2024 08:35:47 +0100, Fuad Tabba <tabba@google.com> wrote:
> > > > > The reason for that is that in pKVM we want to avoid leaking any
> > > > > information about protected VM activity to the host, including whether
> > > > > the VM might have performed fpsimd/sve operations. Therefore, we need
> > > > > to ensure that the host SVE state looks the same after a protected
> > > > > guest has run as it did before a protected guest has run.
> > >
> > > Wouldn't it be equally valid to just zero the state that will not be
> > > preserved regardless of whether or not the guest used fpsimd/sve?
> >
> > Yes it would. I think I did mention that as an option.
>
> Apologies, I probably missed it earlier on then.
>
> > However, that would need to be done at every protected guest exit, whereas
> > restoring the host SVE state only needs to be done if the guest has used
> > fpsimd/sve.
>
> Indeed, what I was _hoping_ is that implementations do a decent job of
> handling a zeroing idiom for SVE and avoid needing to fetch a bunch of
> state out of memory.
>
> > I think the code for the latter (i.e., zeroing out), would be simpler.
> > I'm happy to do it that way if you and the others think it's better.
>
> Right, I have no fundamental objections to fully managing the host SVE
> state in EL2. Strong preference for something simple + correct in the
> interim. Anyway, thanks for suffering through my whining and hopefully
> we can land a fix soon :)

Thanks for your review and comments, which are very helpful as always.
I'll respin this within the next couple of days.

Cheers,
/fuad

> --
> Best,
> Oliver

end of thread, other threads:[~2024-05-21 12:28 UTC | newest]

Thread overview: 23+ messages
2024-05-17 13:18 [PATCH v1 0/7] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode Fuad Tabba
2024-05-17 13:18 ` [PATCH v1 1/7] KVM: arm64: Reintroduce __sve_save_state Fuad Tabba
2024-05-17 13:18 ` [PATCH v1 2/7] KVM: arm64: Specialize deactivate fpsimd/sve traps on guest trap Fuad Tabba
2024-05-17 13:18 ` [PATCH v1 3/7] KVM: arm64: Specialize handling of host fpsimd state on trap Fuad Tabba
2024-05-17 13:18 ` [PATCH v1 4/7] KVM: arm64: Store the maximum sve vector length at hyp Fuad Tabba
2024-05-17 13:18 ` [PATCH v1 5/7] KVM: arm64: Allocate memory at hyp for host sve state in pKVM Fuad Tabba
2024-05-17 13:18 ` [PATCH v1 6/7] KVM: arm64: Eagerly restore host fpsimd/sve " Fuad Tabba
2024-05-17 17:09   ` Oliver Upton
2024-05-20  7:37     ` Fuad Tabba
2024-05-20  8:05       ` Marc Zyngier
2024-05-20  8:53         ` Fuad Tabba
2024-05-20 17:08         ` Oliver Upton
2024-05-17 13:18 ` [PATCH v1 7/7] KVM: arm64: Consolidate initializing the host data's fpsimd_state/sve " Fuad Tabba
2024-05-17 17:30 ` [PATCH v1 0/7] KVM: arm64: Fix handling of host fpsimd/sve state in protected mode Oliver Upton
2024-05-17 18:19   ` Mark Brown
2024-05-20  7:35     ` Fuad Tabba
2024-05-20  8:11       ` Marc Zyngier
2024-05-20 17:37         ` Oliver Upton
2024-05-20 17:53           ` Mark Brown
2024-05-20 17:59             ` Fuad Tabba
2024-05-20 17:57           ` Fuad Tabba
2024-05-20 20:53             ` Oliver Upton
2024-05-21 12:27               ` Fuad Tabba
