* [PATCH v2 00/13] KVM/arm64: Add NV support for ERET and PAuth
@ 2024-02-26 10:05 Marc Zyngier
  2024-02-26 10:05 ` [PATCH v2 01/13] KVM: arm64: Harden __ctxt_sys_reg() against out-of-range values Marc Zyngier
                   ` (12 more replies)
  0 siblings, 13 replies; 28+ messages in thread
From: Marc Zyngier @ 2024-02-26 10:05 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Joey Gouly, Will Deacon, Catalin Marinas

This is the second version of this series introducing ERET and PAuth
support for NV guests; it now forms the base of the NV support series.

Thanks to Joey for reviewing the first half of the series and
providing valuable suggestions, much appreciated!

* From v1 [1]:

  - Don't repaint the ISS_ERET* definitions, but provide reasonable
    helpers instead
  - Dropped superfluous VNCR_EL2 definition
  - Amended comments and creative spelling

[1] https://lore.kernel.org/r/20240219092014.783809-1-maz@kernel.org

Marc Zyngier (13):
  KVM: arm64: Harden __ctxt_sys_reg() against out-of-range values
  KVM: arm64: Add helpers for ESR_ELx_ERET_ISS_ERET*
  KVM: arm64: nv: Drop VCPU_HYP_CONTEXT flag
  KVM: arm64: nv: Configure HCR_EL2 for FEAT_NV2
  KVM: arm64: nv: Add trap forwarding for ERET and SMC
  KVM: arm64: nv: Fast-track 'InHost' exception returns
  KVM: arm64: nv: Honor HFGITR_EL2.ERET being set
  KVM: arm64: nv: Handle HCR_EL2.{API,APK} independently
  KVM: arm64: nv: Reinject PAC exceptions caused by HCR_EL2.API==0
  KVM: arm64: nv: Add kvm_has_pauth() helper
  KVM: arm64: nv: Add emulation for ERETAx instructions
  KVM: arm64: nv: Handle ERETA[AB] instructions
  KVM: arm64: nv: Advertise support for PAuth

 arch/arm64/include/asm/esr.h            |  12 ++
 arch/arm64/include/asm/kvm_emulate.h    |   5 -
 arch/arm64/include/asm/kvm_host.h       |  26 +++-
 arch/arm64/include/asm/kvm_nested.h     |  13 ++
 arch/arm64/include/asm/pgtable-hwdef.h  |   1 +
 arch/arm64/kvm/Makefile                 |   1 +
 arch/arm64/kvm/emulate-nested.c         |  66 +++++---
 arch/arm64/kvm/handle_exit.c            |  38 ++++-
 arch/arm64/kvm/hyp/include/hyp/switch.h |  36 ++++-
 arch/arm64/kvm/hyp/nvhe/switch.c        |   2 +-
 arch/arm64/kvm/hyp/vhe/switch.c         |  96 +++++++++++-
 arch/arm64/kvm/nested.c                 |   8 +-
 arch/arm64/kvm/pauth.c                  | 196 ++++++++++++++++++++++++
 13 files changed, 444 insertions(+), 56 deletions(-)
 create mode 100644 arch/arm64/kvm/pauth.c

-- 
2.39.2



* [PATCH v2 01/13] KVM: arm64: Harden __ctxt_sys_reg() against out-of-range values
  2024-02-26 10:05 [PATCH v2 00/13] KVM/arm64: Add NV support for ERET and PAuth Marc Zyngier
@ 2024-02-26 10:05 ` Marc Zyngier
  2024-02-26 10:05 ` [PATCH v2 02/13] KVM: arm64: Add helpers for ESR_ELx_ERET_ISS_ERET* Marc Zyngier
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2024-02-26 10:05 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Joey Gouly, Will Deacon, Catalin Marinas

The unsuspecting kernel tinkerer can be easily confused into
writing something that looks like this:

	ikey.lo = __vcpu_sys_reg(vcpu, SYS_APIAKEYLO_EL1);

which seems vaguely sensible, until you realise that the second
parameter is the encoding of a sysreg, and not the index into
the vcpu sysreg file... Debugging what happens in this case is
an interesting exercise in head<->wall interactions.

As they often say: "Any resemblance to actual persons, living
or dead, or actual events is purely coincidental".

In order to save people's time, add some compile-time hardening
that will at least weed out the "stupidly out of range" values.
This will *not* catch anything that isn't a compile-time constant.
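
To illustrate (this snippet is not part of the patch, and assumes a
'vcpu' and an 'ikey' in scope), the hardened macro turns the confused
call above into a build failure, while a genuine index still compiles:

	ikey.lo = __vcpu_sys_reg(vcpu, SYS_APIAKEYLO_EL1); /* sysreg encoding, way over NR_SYS_REGS: BUILD_BUG_ON() fires */
	ikey.lo = __vcpu_sys_reg(vcpu, APIAKEYLO_EL1);	   /* enum vcpu_sysreg index: builds fine */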

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 181fef12e8e8..a5ec4c7d3966 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -895,7 +895,7 @@ struct kvm_vcpu_arch {
  * Don't bother with VNCR-based accesses in the nVHE code, it has no
  * business dealing with NV.
  */
-static inline u64 *__ctxt_sys_reg(const struct kvm_cpu_context *ctxt, int r)
+static inline u64 *___ctxt_sys_reg(const struct kvm_cpu_context *ctxt, int r)
 {
 #if !defined (__KVM_NVHE_HYPERVISOR__)
 	if (unlikely(cpus_have_final_cap(ARM64_HAS_NESTED_VIRT) &&
@@ -905,6 +905,13 @@ static inline u64 *__ctxt_sys_reg(const struct kvm_cpu_context *ctxt, int r)
 	return (u64 *)&ctxt->sys_regs[r];
 }
 
+#define __ctxt_sys_reg(c,r)						\
+	({								\
+	    	BUILD_BUG_ON(__builtin_constant_p(r) &&			\
+			     (r) >= NR_SYS_REGS);			\
+		___ctxt_sys_reg(c, r);					\
+	})
+
 #define ctxt_sys_reg(c,r)	(*__ctxt_sys_reg(c,r))
 
 u64 kvm_vcpu_sanitise_vncr_reg(const struct kvm_vcpu *, enum vcpu_sysreg);
-- 
2.39.2



* [PATCH v2 02/13] KVM: arm64: Add helpers for ESR_ELx_ERET_ISS_ERET*
  2024-02-26 10:05 [PATCH v2 00/13] KVM/arm64: Add NV support for ERET and PAuth Marc Zyngier
  2024-02-26 10:05 ` [PATCH v2 01/13] KVM: arm64: Harden __ctxt_sys_reg() against out-of-range values Marc Zyngier
@ 2024-02-26 10:05 ` Marc Zyngier
  2024-02-26 10:05 ` [PATCH v2 03/13] KVM: arm64: nv: Drop VCPU_HYP_CONTEXT flag Marc Zyngier
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2024-02-26 10:05 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Joey Gouly, Will Deacon, Catalin Marinas

The ESR_ELx_ERET_ISS_ERET* macros are a bit confusing:

- ESR_ELx_ERET_ISS_ERET really indicates that we have trapped an
  ERETA* instruction, as opposed to an ERET

- ESR_ELx_ERET_ISS_ERETA really indicates that we have trapped
  an ERETAB instruction, as opposed to an ERETAA.

We could repaint those to make more sense, but these are the
names that are present in the ARM ARM, and we are sentimentally
attached to those.

Instead, add two new helpers:

- esr_iss_is_eretax() being true tells you that you need to
  authenticate the ERET

- esr_iss_is_eretab() tells you that you need to use the B key
  instead of the A key

Following patches will make use of these primitives.
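
As a minimal usage sketch (assuming 'esr' holds a trapped EC==0x1A
syndrome), the two helpers compose as follows:

	if (!esr_iss_is_eretax(esr)) {
		/* plain ERET: no pointer authentication involved */
	} else if (esr_iss_is_eretab(esr)) {
		/* ERETAB: authenticate with the B key */
	} else {
		/* ERETAA: authenticate with the A key */
	}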

Suggested-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/esr.h | 12 ++++++++++++
 arch/arm64/kvm/handle_exit.c |  2 +-
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 353fe08546cf..98008c16025e 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -407,6 +407,18 @@ static inline bool esr_fsc_is_access_flag_fault(unsigned long esr)
 	return (esr & ESR_ELx_FSC_TYPE) == ESR_ELx_FSC_ACCESS;
 }
 
+/* Indicate whether ESR.EC==0x1A is for an ERETAx instruction */
+static inline bool esr_iss_is_eretax(unsigned long esr)
+{
+	return esr & ESR_ELx_ERET_ISS_ERET;
+}
+
+/* Indicate which key is used for ERETAx (false: A-Key, true: B-Key) */
+static inline bool esr_iss_is_eretab(unsigned long esr)
+{
+	return esr & ESR_ELx_ERET_ISS_ERETA;
+}
+
 const char *esr_get_class_string(unsigned long esr);
 #endif /* __ASSEMBLY */
 
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 617ae6dea5d5..15221e481ccd 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -219,7 +219,7 @@ static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu)
 
 static int kvm_handle_eret(struct kvm_vcpu *vcpu)
 {
-	if (kvm_vcpu_get_esr(vcpu) & ESR_ELx_ERET_ISS_ERET)
+	if (esr_iss_is_eretax(kvm_vcpu_get_esr(vcpu)))
 		return kvm_handle_ptrauth(vcpu);
 
 	/*
-- 
2.39.2



* [PATCH v2 03/13] KVM: arm64: nv: Drop VCPU_HYP_CONTEXT flag
  2024-02-26 10:05 [PATCH v2 00/13] KVM/arm64: Add NV support for ERET and PAuth Marc Zyngier
  2024-02-26 10:05 ` [PATCH v2 01/13] KVM: arm64: Harden __ctxt_sys_reg() against out-of-range values Marc Zyngier
  2024-02-26 10:05 ` [PATCH v2 02/13] KVM: arm64: Add helpers for ESR_ELx_ERET_ISS_ERET* Marc Zyngier
@ 2024-02-26 10:05 ` Marc Zyngier
  2024-02-26 10:05 ` [PATCH v2 04/13] KVM: arm64: nv: Configure HCR_EL2 for FEAT_NV2 Marc Zyngier
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2024-02-26 10:05 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Joey Gouly, Will Deacon, Catalin Marinas

It has become obvious that HCR_EL2.NV serves the exact same purpose
as VCPU_HYP_CONTEXT, only in an architectural way. So just drop
the flag for good.

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 2 --
 arch/arm64/kvm/hyp/vhe/switch.c   | 7 +------
 2 files changed, 1 insertion(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index a5ec4c7d3966..75eb8e170515 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -816,8 +816,6 @@ struct kvm_vcpu_arch {
 #define DEBUG_STATE_SAVE_SPE	__vcpu_single_flag(iflags, BIT(5))
 /* Save TRBE context if active  */
 #define DEBUG_STATE_SAVE_TRBE	__vcpu_single_flag(iflags, BIT(6))
-/* vcpu running in HYP context */
-#define VCPU_HYP_CONTEXT	__vcpu_single_flag(iflags, BIT(7))
 
 /* SVE enabled for host EL0 */
 #define HOST_SVE_ENABLED	__vcpu_single_flag(sflags, BIT(0))
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 1581df6aec87..58415783fd53 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -197,7 +197,7 @@ static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code)
 	 * If we were in HYP context on entry, adjust the PSTATE view
 	 * so that the usual helpers work correctly.
 	 */
-	if (unlikely(vcpu_get_flag(vcpu, VCPU_HYP_CONTEXT))) {
+	if (unlikely(read_sysreg(hcr_el2) & HCR_NV)) {
 		u64 mode = *vcpu_cpsr(vcpu) & (PSR_MODE_MASK | PSR_MODE32_BIT);
 
 		switch (mode) {
@@ -240,11 +240,6 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
 	sysreg_restore_guest_state_vhe(guest_ctxt);
 	__debug_switch_to_guest(vcpu);
 
-	if (is_hyp_ctxt(vcpu))
-		vcpu_set_flag(vcpu, VCPU_HYP_CONTEXT);
-	else
-		vcpu_clear_flag(vcpu, VCPU_HYP_CONTEXT);
-
 	do {
 		/* Jump in the fire! */
 		exit_code = __guest_enter(vcpu);
-- 
2.39.2



* [PATCH v2 04/13] KVM: arm64: nv: Configure HCR_EL2 for FEAT_NV2
  2024-02-26 10:05 [PATCH v2 00/13] KVM/arm64: Add NV support for ERET and PAuth Marc Zyngier
                   ` (2 preceding siblings ...)
  2024-02-26 10:05 ` [PATCH v2 03/13] KVM: arm64: nv: Drop VCPU_HYP_CONTEXT flag Marc Zyngier
@ 2024-02-26 10:05 ` Marc Zyngier
  2024-02-26 10:05 ` [PATCH v2 05/13] KVM: arm64: nv: Add trap forwarding for ERET and SMC Marc Zyngier
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2024-02-26 10:05 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Joey Gouly, Will Deacon, Catalin Marinas

Add the HCR_EL2 configuration for FEAT_NV2: set the bits required
for running a guest hypervisor, and merge in the allowed bits
provided by the guest.

This heavily relies on unavailable features being sanitised when
the HCR_EL2 shadow register is accessed, so that only a couple
of bits must be explicitly disabled.

Non-NV guests are completely unaffected by any of this.
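
In short, for a vcpu sitting at vEL2, the merge implemented by
__compute_hcr() below boils down to (sketch):

	hcr  = vcpu->arch.hcr_el2;			/* KVM's own trap configuration */
	hcr |= HCR_NV | HCR_NV2 | HCR_AT | HCR_TTLB;	/* bits required to run an L1 hypervisor */
	hcr |= __vcpu_sys_reg(vcpu, HCR_EL2) & ~NV_HCR_GUEST_EXCLUDE; /* guest bits, minus TGE/API/APK */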

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/hyp/include/hyp/switch.h |  4 +--
 arch/arm64/kvm/hyp/nvhe/switch.c        |  2 +-
 arch/arm64/kvm/hyp/vhe/switch.c         | 35 ++++++++++++++++++++++++-
 3 files changed, 36 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index e3fcf8c4d5b4..f5f701f309a9 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -271,10 +271,8 @@ static inline void __deactivate_traps_common(struct kvm_vcpu *vcpu)
 	__deactivate_traps_hfgxtr(vcpu);
 }
 
-static inline void ___activate_traps(struct kvm_vcpu *vcpu)
+static inline void ___activate_traps(struct kvm_vcpu *vcpu, u64 hcr)
 {
-	u64 hcr = vcpu->arch.hcr_el2;
-
 	if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM))
 		hcr |= HCR_TVM;
 
diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
index c50f8459e4fc..4103625e46c5 100644
--- a/arch/arm64/kvm/hyp/nvhe/switch.c
+++ b/arch/arm64/kvm/hyp/nvhe/switch.c
@@ -40,7 +40,7 @@ static void __activate_traps(struct kvm_vcpu *vcpu)
 {
 	u64 val;
 
-	___activate_traps(vcpu);
+	___activate_traps(vcpu, vcpu->arch.hcr_el2);
 	__activate_traps_common(vcpu);
 
 	val = vcpu->arch.cptr_el2;
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 58415783fd53..d5fdcea2b366 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -33,11 +33,44 @@ DEFINE_PER_CPU(struct kvm_host_data, kvm_host_data);
 DEFINE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
 DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
 
+/*
+ * HCR_EL2 bits that the NV guest can freely change (no RES0/RES1
+ * semantics, irrespective of the configuration), but that cannot be
+ * applied to the actual HW as things would otherwise break badly.
+ *
+ * - TGE: we want the guest to use EL1, which is incompatible with
+ *   this bit being set
+ *
+ * - API/APK: for hysterical raisins, we enable PAuth lazily, which
+ *   means that the guest's bits cannot be directly applied (we really
+ *   want to see the traps). Revisit this at some point.
+ */
+#define NV_HCR_GUEST_EXCLUDE	(HCR_TGE | HCR_API | HCR_APK)
+
+static u64 __compute_hcr(struct kvm_vcpu *vcpu)
+{
+	u64 hcr = vcpu->arch.hcr_el2;
+
+	if (!vcpu_has_nv(vcpu))
+		return hcr;
+
+	if (is_hyp_ctxt(vcpu)) {
+		hcr |= HCR_NV | HCR_NV2 | HCR_AT | HCR_TTLB;
+
+		if (!vcpu_el2_e2h_is_set(vcpu))
+			hcr |= HCR_NV1;
+
+		write_sysreg_s(vcpu->arch.ctxt.vncr_array, SYS_VNCR_EL2);
+	}
+
+	return hcr | (__vcpu_sys_reg(vcpu, HCR_EL2) & ~NV_HCR_GUEST_EXCLUDE);
+}
+
 static void __activate_traps(struct kvm_vcpu *vcpu)
 {
 	u64 val;
 
-	___activate_traps(vcpu);
+	___activate_traps(vcpu, __compute_hcr(vcpu));
 
 	if (has_cntpoff()) {
 		struct timer_map map;
-- 
2.39.2



* [PATCH v2 05/13] KVM: arm64: nv: Add trap forwarding for ERET and SMC
  2024-02-26 10:05 [PATCH v2 00/13] KVM/arm64: Add NV support for ERET and PAuth Marc Zyngier
                   ` (3 preceding siblings ...)
  2024-02-26 10:05 ` [PATCH v2 04/13] KVM: arm64: nv: Configure HCR_EL2 for FEAT_NV2 Marc Zyngier
@ 2024-02-26 10:05 ` Marc Zyngier
  2024-02-26 10:05 ` [PATCH v2 06/13] KVM: arm64: nv: Fast-track 'InHost' exception returns Marc Zyngier
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2024-02-26 10:05 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Joey Gouly, Will Deacon, Catalin Marinas, Jintack Lim

Honor the trap forwarding bits for both ERET and SMC, using a new
helper that checks for common conditions.
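
Those common conditions boil down to the following (a sketch of the
forward_traps() helper added below):

	/* Forward iff we have NV, we are not at vEL2, and L1 asked for the trap */
	if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu) &&
	    (__vcpu_sys_reg(vcpu, HCR_EL2) & control_bit)) {
		kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
		return true;
	}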

Reviewed-by: Joey Gouly <joey.gouly@arm.com>
Co-developed-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_nested.h |  1 +
 arch/arm64/kvm/emulate-nested.c     | 27 +++++++++++++++++++++++++++
 arch/arm64/kvm/handle_exit.c        |  7 +++++++
 3 files changed, 35 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index c77d795556e1..dbc4e3a67356 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -60,6 +60,7 @@ static inline u64 translate_ttbr0_el2_to_ttbr0_el1(u64 ttbr0)
 	return ttbr0 & ~GENMASK_ULL(63, 48);
 }
 
+extern bool forward_smc_trap(struct kvm_vcpu *vcpu);
 
 int kvm_init_nv_sysregs(struct kvm *kvm);
 
diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 4697ba41b3a9..2d80e81ae650 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -2117,6 +2117,26 @@ bool triage_sysreg_trap(struct kvm_vcpu *vcpu, int *sr_index)
 	return true;
 }
 
+static bool forward_traps(struct kvm_vcpu *vcpu, u64 control_bit)
+{
+	bool control_bit_set;
+
+	if (!vcpu_has_nv(vcpu))
+		return false;
+
+	control_bit_set = __vcpu_sys_reg(vcpu, HCR_EL2) & control_bit;
+	if (!is_hyp_ctxt(vcpu) && control_bit_set) {
+		kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
+		return true;
+	}
+	return false;
+}
+
+bool forward_smc_trap(struct kvm_vcpu *vcpu)
+{
+	return forward_traps(vcpu, HCR_TSC);
+}
+
 static u64 kvm_check_illegal_exception_return(struct kvm_vcpu *vcpu, u64 spsr)
 {
 	u64 mode = spsr & PSR_MODE_MASK;
@@ -2155,6 +2175,13 @@ void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu)
 	u64 spsr, elr, mode;
 	bool direct_eret;
 
+	/*
+	 * Forward this trap to the virtual EL2 if the virtual
+	 * HCR_EL2.NV bit is set and this is coming from !EL2.
+	 */
+	if (forward_traps(vcpu, HCR_NV))
+		return;
+
 	/*
 	 * Going through the whole put/load motions is a waste of time
 	 * if this is a VHE guest hypervisor returning to its own
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 15221e481ccd..6a88ec024e2f 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -55,6 +55,13 @@ static int handle_hvc(struct kvm_vcpu *vcpu)
 
 static int handle_smc(struct kvm_vcpu *vcpu)
 {
+	/*
+	 * Forward this trapped smc instruction to the virtual EL2 if
+	 * the guest has asked for it.
+	 */
+	if (forward_smc_trap(vcpu))
+		return 1;
+
 	/*
 	 * "If an SMC instruction executed at Non-secure EL1 is
 	 * trapped to EL2 because HCR_EL2.TSC is 1, the exception is a
-- 
2.39.2



* [PATCH v2 06/13] KVM: arm64: nv: Fast-track 'InHost' exception returns
  2024-02-26 10:05 [PATCH v2 00/13] KVM/arm64: Add NV support for ERET and PAuth Marc Zyngier
                   ` (4 preceding siblings ...)
  2024-02-26 10:05 ` [PATCH v2 05/13] KVM: arm64: nv: Add trap forwarding for ERET and SMC Marc Zyngier
@ 2024-02-26 10:05 ` Marc Zyngier
  2024-02-28 16:08   ` Joey Gouly
  2024-02-26 10:05 ` [PATCH v2 07/13] KVM: arm64: nv: Honor HFGITR_EL2.ERET being set Marc Zyngier
                   ` (6 subsequent siblings)
  12 siblings, 1 reply; 28+ messages in thread
From: Marc Zyngier @ 2024-02-26 10:05 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Joey Gouly, Will Deacon, Catalin Marinas

A significant part of the FEAT_NV extension is to trap ERET
instructions so that the hypervisor gets a chance to switch
from a vEL2 L1 guest to an EL1 L2 guest.

But this also has the unfortunate consequence of trapping ERET
in unsuspecting circumstances, such as staying at vEL2 (interrupt
handling while being in the guest hypervisor), or returning to host
userspace in the case of a VHE guest.

Although we already make some effort to handle these ERETs more quickly
by not doing the put/load dance, that handling is still way too far
down the line to be efficient enough.

For these cases, it would be ideal to ERET directly, no questions asked.
Of course, we can't do that. But the next best thing is to do it as
early as possible, in fixup_guest_exit(), much as we would handle
FPSIMD exceptions.
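
The fast-path criteria implemented below amount to (sketch, assuming no
HCR_EL2.NV forwarding is pending):

	switch (spsr & (PSR_MODE_MASK | PSR_MODE32_BIT)) {
	case PSR_MODE_EL0t:	/* OK only if E2H and TGE are both set (VHE host EL0) */
	case PSR_MODE_EL2t:	/* local return, mapped to EL1t on the HW */
	case PSR_MODE_EL2h:	/* local return, mapped to EL1h on the HW */
		break;		/* fast-track: fix up SPSR/ELR and ERET directly */
	default:
		return false;	/* anything else takes the slow put/load path */
	}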

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/emulate-nested.c | 29 +++-------------------
 arch/arm64/kvm/hyp/vhe/switch.c | 44 +++++++++++++++++++++++++++++++++
 2 files changed, 47 insertions(+), 26 deletions(-)

diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 2d80e81ae650..63a74c0330f1 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -2172,8 +2172,7 @@ static u64 kvm_check_illegal_exception_return(struct kvm_vcpu *vcpu, u64 spsr)
 
 void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu)
 {
-	u64 spsr, elr, mode;
-	bool direct_eret;
+	u64 spsr, elr;
 
 	/*
 	 * Forward this trap to the virtual EL2 if the virtual
@@ -2182,33 +2181,11 @@ void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu)
 	if (forward_traps(vcpu, HCR_NV))
 		return;
 
-	/*
-	 * Going through the whole put/load motions is a waste of time
-	 * if this is a VHE guest hypervisor returning to its own
-	 * userspace, or the hypervisor performing a local exception
-	 * return. No need to save/restore registers, no need to
-	 * switch S2 MMU. Just do the canonical ERET.
-	 */
-	spsr = vcpu_read_sys_reg(vcpu, SPSR_EL2);
-	spsr = kvm_check_illegal_exception_return(vcpu, spsr);
-
-	mode = spsr & (PSR_MODE_MASK | PSR_MODE32_BIT);
-
-	direct_eret  = (mode == PSR_MODE_EL0t &&
-			vcpu_el2_e2h_is_set(vcpu) &&
-			vcpu_el2_tge_is_set(vcpu));
-	direct_eret |= (mode == PSR_MODE_EL2h || mode == PSR_MODE_EL2t);
-
-	if (direct_eret) {
-		*vcpu_pc(vcpu) = vcpu_read_sys_reg(vcpu, ELR_EL2);
-		*vcpu_cpsr(vcpu) = spsr;
-		trace_kvm_nested_eret(vcpu, *vcpu_pc(vcpu), spsr);
-		return;
-	}
-
 	preempt_disable();
 	kvm_arch_vcpu_put(vcpu);
 
+	spsr = __vcpu_sys_reg(vcpu, SPSR_EL2);
+	spsr = kvm_check_illegal_exception_return(vcpu, spsr);
 	elr = __vcpu_sys_reg(vcpu, ELR_EL2);
 
 	trace_kvm_nested_eret(vcpu, elr, spsr);
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index d5fdcea2b366..eaf242b8e0cf 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -206,6 +206,49 @@ void kvm_vcpu_put_vhe(struct kvm_vcpu *vcpu)
 	__vcpu_put_switch_sysregs(vcpu);
 }
 
+static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code)
+{
+	u64 spsr, mode;
+
+	/*
+	 * Going through the whole put/load motions is a waste of time
+	 * if this is a VHE guest hypervisor returning to its own
+	 * userspace, or the hypervisor performing a local exception
+	 * return. No need to save/restore registers, no need to
+	 * switch S2 MMU. Just do the canonical ERET.
+	 *
+	 * Unless the trap has to be forwarded further down the line,
+	 * of course...
+	 */
+	if (__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_NV)
+		return false;
+
+	spsr = read_sysreg_el1(SYS_SPSR);
+	mode = spsr & (PSR_MODE_MASK | PSR_MODE32_BIT);
+
+	switch (mode) {
+	case PSR_MODE_EL0t:
+		if (!(vcpu_el2_e2h_is_set(vcpu) && vcpu_el2_tge_is_set(vcpu)))
+			return false;
+		break;
+	case PSR_MODE_EL2t:
+		mode = PSR_MODE_EL1t;
+		break;
+	case PSR_MODE_EL2h:
+		mode = PSR_MODE_EL1h;
+		break;
+	default:
+		return false;
+	}
+
+	spsr = (spsr & ~(PSR_MODE_MASK | PSR_MODE32_BIT)) | mode;
+
+	write_sysreg_el2(spsr, SYS_SPSR);
+	write_sysreg_el2(read_sysreg_el1(SYS_ELR), SYS_ELR);
+
+	return true;
+}
+
 static const exit_handler_fn hyp_exit_handlers[] = {
 	[0 ... ESR_ELx_EC_MAX]		= NULL,
 	[ESR_ELx_EC_CP15_32]		= kvm_hyp_handle_cp15_32,
@@ -216,6 +259,7 @@ static const exit_handler_fn hyp_exit_handlers[] = {
 	[ESR_ELx_EC_DABT_LOW]		= kvm_hyp_handle_dabt_low,
 	[ESR_ELx_EC_WATCHPT_LOW]	= kvm_hyp_handle_watchpt_low,
 	[ESR_ELx_EC_PAC]		= kvm_hyp_handle_ptrauth,
+	[ESR_ELx_EC_ERET]		= kvm_hyp_handle_eret,
 	[ESR_ELx_EC_MOPS]		= kvm_hyp_handle_mops,
 };
 
-- 
2.39.2



* [PATCH v2 07/13] KVM: arm64: nv: Honor HFGITR_EL2.ERET being set
  2024-02-26 10:05 [PATCH v2 00/13] KVM/arm64: Add NV support for ERET and PAuth Marc Zyngier
                   ` (5 preceding siblings ...)
  2024-02-26 10:05 ` [PATCH v2 06/13] KVM: arm64: nv: Fast-track 'InHost' exception returns Marc Zyngier
@ 2024-02-26 10:05 ` Marc Zyngier
  2024-03-01 18:07   ` Joey Gouly
  2024-02-26 10:05 ` [PATCH v2 08/13] KVM: arm64: nv: Handle HCR_EL2.{API,APK} independently Marc Zyngier
                   ` (5 subsequent siblings)
  12 siblings, 1 reply; 28+ messages in thread
From: Marc Zyngier @ 2024-02-26 10:05 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Joey Gouly, Will Deacon, Catalin Marinas

If the L1 hypervisor decides to trap ERETs while running L2,
make sure we don't try to emulate them, just like we wouldn't
if it had its NV bit set.

The exception will be reinjected from the core handler.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/hyp/vhe/switch.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index eaf242b8e0cf..3ea9bdf6b555 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -220,7 +220,8 @@ static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code)
 	 * Unless the trap has to be forwarded further down the line,
 	 * of course...
 	 */
-	if (__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_NV)
+	if ((__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_NV) ||
+	    (__vcpu_sys_reg(vcpu, HFGITR_EL2) & HFGITR_EL2_ERET))
 		return false;
 
 	spsr = read_sysreg_el1(SYS_SPSR);
-- 
2.39.2



* [PATCH v2 08/13] KVM: arm64: nv: Handle HCR_EL2.{API,APK} independently
  2024-02-26 10:05 [PATCH v2 00/13] KVM/arm64: Add NV support for ERET and PAuth Marc Zyngier
                   ` (6 preceding siblings ...)
  2024-02-26 10:05 ` [PATCH v2 07/13] KVM: arm64: nv: Honor HFGITR_EL2.ERET being set Marc Zyngier
@ 2024-02-26 10:05 ` Marc Zyngier
  2024-03-07 15:14   ` Joey Gouly
  2024-02-26 10:05 ` [PATCH v2 09/13] KVM: arm64: nv: Reinject PAC exceptions caused by HCR_EL2.API==0 Marc Zyngier
                   ` (4 subsequent siblings)
  12 siblings, 1 reply; 28+ messages in thread
From: Marc Zyngier @ 2024-02-26 10:05 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Joey Gouly, Will Deacon, Catalin Marinas

Although KVM couples API and APK for simplicity, the architecture
makes no such requirement, and the two can be independently set or
cleared.

Check for which of the two possible reasons we have trapped here,
and if the corresponding L1 control bit isn't set, delegate the
handling for forwarding.

Otherwise, set this exact bit in HCR_EL2 and resume the guest.
Of course, in the non-NV case, we keep setting both bits and
are done with it. Note that the entry code already saves/restores
the keys should either of the two control bits be set.

This results in a bit of rework, and the removal of the (trivial)
vcpu_ptrauth_enable() helper.
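
The EC of the trap tells us which of the two gates applies (sketch of
the logic added to kvm_hyp_handle_ptrauth() below):

	switch (ESR_ELx_EC(kvm_vcpu_get_esr(vcpu))) {
	case ESR_ELx_EC_PAC:	/* PAuth instruction: gated by L1's HCR_EL2.API */
		enable = HCR_API;
		break;
	case ESR_ELx_EC_SYS64:	/* key register access: gated by L1's HCR_EL2.APK */
		enable = HCR_APK;
		break;
	}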

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_emulate.h    |  5 ----
 arch/arm64/kvm/hyp/include/hyp/switch.h | 32 +++++++++++++++++++++----
 2 files changed, 27 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index debc3753d2ef..d2177bc77844 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -125,11 +125,6 @@ static inline void vcpu_set_wfx_traps(struct kvm_vcpu *vcpu)
 	vcpu->arch.hcr_el2 |= HCR_TWI;
 }
 
-static inline void vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
-{
-	vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
-}
-
 static inline void vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
 {
 	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
index f5f701f309a9..a0908d7a8f56 100644
--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
+++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -480,11 +480,35 @@ DECLARE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
 static bool kvm_hyp_handle_ptrauth(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
 	struct kvm_cpu_context *ctxt;
-	u64 val;
+	u64 enable = 0;
 
 	if (!vcpu_has_ptrauth(vcpu))
 		return false;
 
+	/*
+	 * NV requires us to handle API and APK independently, just in
+	 * case the hypervisor is totally nuts. Please barf >here<.
+	 */
+	if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
+		switch (ESR_ELx_EC(kvm_vcpu_get_esr(vcpu))) {
+		case ESR_ELx_EC_PAC:
+			if (!(__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_API))
+				return false;
+
+			enable |= HCR_API;
+			break;
+
+		case ESR_ELx_EC_SYS64:
+			if (!(__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_APK))
+				return false;
+
+			enable |= HCR_APK;
+			break;
+		}
+	} else {
+		enable = HCR_API | HCR_APK;
+	}
+
 	ctxt = this_cpu_ptr(&kvm_hyp_ctxt);
 	__ptrauth_save_key(ctxt, APIA);
 	__ptrauth_save_key(ctxt, APIB);
@@ -492,11 +516,9 @@ static bool kvm_hyp_handle_ptrauth(struct kvm_vcpu *vcpu, u64 *exit_code)
 	__ptrauth_save_key(ctxt, APDB);
 	__ptrauth_save_key(ctxt, APGA);
 
-	vcpu_ptrauth_enable(vcpu);
 
-	val = read_sysreg(hcr_el2);
-	val |= (HCR_API | HCR_APK);
-	write_sysreg(val, hcr_el2);
+	vcpu->arch.hcr_el2 |= enable;
+	sysreg_clear_set(hcr_el2, 0, enable);
 
 	return true;
 }
-- 
2.39.2



* [PATCH v2 09/13] KVM: arm64: nv: Reinject PAC exceptions caused by HCR_EL2.API==0
  2024-02-26 10:05 [PATCH v2 00/13] KVM/arm64: Add NV support for ERET and PAuth Marc Zyngier
                   ` (7 preceding siblings ...)
  2024-02-26 10:05 ` [PATCH v2 08/13] KVM: arm64: nv: Handle HCR_EL2.{API,APK} independently Marc Zyngier
@ 2024-02-26 10:05 ` Marc Zyngier
  2024-02-26 10:05 ` [PATCH v2 10/13] KVM: arm64: nv: Add kvm_has_pauth() helper Marc Zyngier
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2024-02-26 10:05 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Joey Gouly, Will Deacon, Catalin Marinas

In order for an L1 hypervisor to correctly handle PAuth instructions,
it must observe the traps caused by an L2 PAuth instruction when
HCR_EL2.API==0. Since we already handle the case for API==1 as
a fixup, only the exception injection case needs to be handled.

Rework the kvm_handle_ptrauth() callback to reinject the trap
in this case. Note that APK==0 is already handled by the existing
triage_sysreg_trap() helper.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/handle_exit.c | 28 +++++++++++++++++++++++++---
 1 file changed, 25 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 6a88ec024e2f..1ba2f788b2c3 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -214,12 +214,34 @@ static int handle_sve(struct kvm_vcpu *vcpu)
 }
 
 /*
- * Guest usage of a ptrauth instruction (which the guest EL1 did not turn into
- * a NOP). If we get here, it is that we didn't fixup ptrauth on exit, and all
- * that we can do is give the guest an UNDEF.
+ * Two possibilities to handle a trapping ptrauth instruction:
+ *
+ * - Guest usage of a ptrauth instruction (which the guest EL1 did not
+ *   turn into a NOP). If we get here, it is that we didn't fixup
+ *   ptrauth on exit, and all that we can do is give the guest an
+ *   UNDEF (as the guest isn't supposed to use ptrauth without being
+ *   told it could).
+ *
+ * - Running an L2 NV guest while L1 has left HCR_EL2.API==0, and for
+ *   which we reinject the exception into L1. API==1 is handled as a
+ *   fixup so the only way to get here is when API==0.
+ *
+ * Anything else is an emulation bug (hence the WARN_ON + UNDEF).
  */
 static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu)
 {
+	if (!vcpu_has_ptrauth(vcpu)) {
+		kvm_inject_undefined(vcpu);
+		return 1;
+	}
+
+	if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
+		kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
+		return 1;
+	}
+
+	/* Really shouldn't be here! */
+	WARN_ON_ONCE(1);
 	kvm_inject_undefined(vcpu);
 	return 1;
 }
-- 
2.39.2



* [PATCH v2 10/13] KVM: arm64: nv: Add kvm_has_pauth() helper
  2024-02-26 10:05 [PATCH v2 00/13] KVM/arm64: Add NV support for ERET and PAuth Marc Zyngier
                   ` (8 preceding siblings ...)
  2024-02-26 10:05 ` [PATCH v2 09/13] KVM: arm64: nv: Reinject PAC exceptions caused by HCR_EL2.API==0 Marc Zyngier
@ 2024-02-26 10:05 ` Marc Zyngier
  2024-02-26 10:05 ` [PATCH v2 11/13] KVM: arm64: nv: Add emulation for ERETAx instructions Marc Zyngier
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2024-02-26 10:05 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Joey Gouly, Will Deacon, Catalin Marinas

Pointer Authentication comes in many flavours, and a faithful emulation
relies on correctly handling the flavour implemented by the HW.

For this, provide a new kvm_has_pauth() helper that checks whether we
expose to the guest a particular level of support. This checks
across all 3 possible authentication algorithms (QARMA5, QARMA3 and
IMPDEF).
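
Usage sketch (this is how the ERETAx emulation added later in the
series uses it), checking for PAuth2 semantics whatever the algorithm:

	if (kvm_has_pauth(vcpu->kvm, PAuth2))
		/* failed authentication XORs the PAC bits instead of
		 * corrupting the canonical address */;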

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 75eb8e170515..a97b092b7064 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1334,4 +1334,19 @@ bool kvm_arm_vcpu_stopped(struct kvm_vcpu *vcpu);
 	(get_idreg_field((kvm), id, fld) >= expand_field_sign(id, fld, min) && \
 	 get_idreg_field((kvm), id, fld) <= expand_field_sign(id, fld, max))
 
+/* Check for a given level of PAuth support */
+#define kvm_has_pauth(k, l)						\
+	({								\
+		bool pa, pi, pa3;					\
+									\
+		pa  = kvm_has_feat((k), ID_AA64ISAR1_EL1, APA, l);	\
+		pa &= kvm_has_feat((k), ID_AA64ISAR1_EL1, GPA, IMP);	\
+		pi  = kvm_has_feat((k), ID_AA64ISAR1_EL1, API, l);	\
+		pi &= kvm_has_feat((k), ID_AA64ISAR1_EL1, GPI, IMP);	\
+		pa3  = kvm_has_feat((k), ID_AA64ISAR2_EL1, APA3, l);	\
+		pa3 &= kvm_has_feat((k), ID_AA64ISAR2_EL1, GPA3, IMP);	\
+									\
+		(pa + pi + pa3) == 1;					\
+	})
+
 #endif /* __ARM64_KVM_HOST_H__ */
-- 
2.39.2



* [PATCH v2 11/13] KVM: arm64: nv: Add emulation for ERETAx instructions
  2024-02-26 10:05 [PATCH v2 00/13] KVM/arm64: Add NV support for ERET and PAuth Marc Zyngier
                   ` (9 preceding siblings ...)
  2024-02-26 10:05 ` [PATCH v2 10/13] KVM: arm64: nv: Add kvm_has_pauth() helper Marc Zyngier
@ 2024-02-26 10:05 ` Marc Zyngier
  2024-03-07 13:39   ` Joey Gouly
  2024-03-08 17:20   ` Joey Gouly
  2024-02-26 10:06 ` [PATCH v2 12/13] KVM: arm64: nv: Handle ERETA[AB] instructions Marc Zyngier
  2024-02-26 10:06 ` [PATCH v2 13/13] KVM: arm64: nv: Advertise support for PAuth Marc Zyngier
  12 siblings, 2 replies; 28+ messages in thread
From: Marc Zyngier @ 2024-02-26 10:05 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Joey Gouly, Will Deacon, Catalin Marinas

FEAT_NV has the interesting property of relying on ERET being
trapped. An added complexity is that it also traps ERETAA and
ERETAB, meaning that the Pointer Authentication aspect of these
instructions must be emulated.

Add an emulation of Pointer Authentication, limited to ERETAx
(always using SP_EL2 as the modifier and ELR_EL2 as the pointer),
using the Generic Authentication instructions.

The emulation, however small, is placed in its own compilation
unit so that it can be avoided if the configuration doesn't
include it (or the toolchain is not up to the task).
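
As a worked example of the PAC mask computation implemented here:
assuming E2H==1, VA[55]==1, TCR_EL2.T1SZ==16 and TBI1 clear,

	bottom_pac = 64 - 16;				/* = 48 */
	mask = GENMASK(54, 48) | GENMASK(63, 56);	/* PAC bits, VA[55] excluded */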

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_nested.h    |  12 ++
 arch/arm64/include/asm/pgtable-hwdef.h |   1 +
 arch/arm64/kvm/Makefile                |   1 +
 arch/arm64/kvm/pauth.c                 | 196 +++++++++++++++++++++++++
 4 files changed, 210 insertions(+)
 create mode 100644 arch/arm64/kvm/pauth.c

diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
index dbc4e3a67356..5e0ab0596246 100644
--- a/arch/arm64/include/asm/kvm_nested.h
+++ b/arch/arm64/include/asm/kvm_nested.h
@@ -64,4 +64,16 @@ extern bool forward_smc_trap(struct kvm_vcpu *vcpu);
 
 int kvm_init_nv_sysregs(struct kvm *kvm);
 
+#ifdef CONFIG_ARM64_PTR_AUTH
+bool kvm_auth_eretax(struct kvm_vcpu *vcpu, u64 *elr);
+#else
+static inline bool kvm_auth_eretax(struct kvm_vcpu *vcpu, u64 *elr)
+{
+	/* We really should never execute this... */
+	WARN_ON_ONCE(1);
+	*elr = 0xbad9acc0debadbad;
+	return false;
+}
+#endif
+
 #endif /* __ARM64_KVM_NESTED_H */
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index e4944d517c99..bb88e9ef6296 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -277,6 +277,7 @@
 #define TCR_TBI1		(UL(1) << 38)
 #define TCR_HA			(UL(1) << 39)
 #define TCR_HD			(UL(1) << 40)
+#define TCR_TBID0		(UL(1) << 51)
 #define TCR_TBID1		(UL(1) << 52)
 #define TCR_NFD0		(UL(1) << 53)
 #define TCR_NFD1		(UL(1) << 54)
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index c0c050e53157..04882b577575 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -23,6 +23,7 @@ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
 	 vgic/vgic-its.o vgic/vgic-debug.o
 
 kvm-$(CONFIG_HW_PERF_EVENTS)  += pmu-emul.o pmu.o
+kvm-$(CONFIG_ARM64_PTR_AUTH)  += pauth.o
 
 always-y := hyp_constants.h hyp-constants.s
 
diff --git a/arch/arm64/kvm/pauth.c b/arch/arm64/kvm/pauth.c
new file mode 100644
index 000000000000..a3a5c404375b
--- /dev/null
+++ b/arch/arm64/kvm/pauth.c
@@ -0,0 +1,196 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2024 - Google LLC
+ * Author: Marc Zyngier <maz@kernel.org>
+ *
+ * Primitive PAuth emulation for ERETAA/ERETAB.
+ *
+ * This code assumes that it is run from EL2, and that it is part of
+ * the emulation of ERETAx for a guest hypervisor. That's a lot of
+ * baked-in assumptions and shortcuts.
+ *
+ * Do not reuse for anything else!
+ */
+
+#include <linux/kvm_host.h>
+
+#include <asm/kvm_emulate.h>
+#include <asm/pointer_auth.h>
+
+static u64 compute_pac(struct kvm_vcpu *vcpu, u64 ptr,
+		       struct ptrauth_key ikey)
+{
+	struct ptrauth_key gkey;
+	u64 mod, pac = 0;
+
+	preempt_disable();
+
+	if (!vcpu_get_flag(vcpu, SYSREGS_ON_CPU))
+		mod = __vcpu_sys_reg(vcpu, SP_EL2);
+	else
+		mod = read_sysreg(sp_el1);
+
+	gkey.lo = read_sysreg_s(SYS_APGAKEYLO_EL1);
+	gkey.hi = read_sysreg_s(SYS_APGAKEYHI_EL1);
+
+	__ptrauth_key_install_nosync(APGA, ikey);
+	isb();
+
+	asm volatile(ARM64_ASM_PREAMBLE ".arch_extension pauth\n"
+		     "pacga %0, %1, %2" : "=r" (pac) : "r" (ptr), "r" (mod));
+	isb();
+
+	__ptrauth_key_install_nosync(APGA, gkey);
+
+	preempt_enable();
+
+	/* PAC in the top 32bits */
+	return pac;
+}
+
+static bool effective_tbi(struct kvm_vcpu *vcpu, bool bit55)
+{
+	u64 tcr = vcpu_read_sys_reg(vcpu, TCR_EL2);
+	bool tbi, tbid;
+
+	/*
+	 * Since we are authenticating an instruction address, we have
+	 * to take TBID into account. If E2H==0, ignore VA[55], as
+	 * TCR_EL2 only has a single TBI/TBID. If VA[55] was set in
+	 * this case, this is likely a guest bug...
+	 */
+	if (!vcpu_el2_e2h_is_set(vcpu)) {
+		tbi = tcr & BIT(20);
+		tbid = tcr & BIT(29);
+	} else if (bit55) {
+		tbi = tcr & TCR_TBI1;
+		tbid = tcr & TCR_TBID1;
+	} else {
+		tbi = tcr & TCR_TBI0;
+		tbid = tcr & TCR_TBID0;
+	}
+
+	return tbi && !tbid;
+}
+
+static int compute_bottom_pac(struct kvm_vcpu *vcpu, bool bit55)
+{
+	static const int maxtxsz = 39; // Revisit these two values once
+	static const int mintxsz = 16; // (if) we support TTST/LVA/LVA2
+	u64 tcr = vcpu_read_sys_reg(vcpu, TCR_EL2);
+	int txsz;
+
+	if (!vcpu_el2_e2h_is_set(vcpu) || !bit55)
+		txsz = FIELD_GET(TCR_T0SZ_MASK, tcr);
+	else
+		txsz = FIELD_GET(TCR_T1SZ_MASK, tcr);
+
+	return 64 - clamp(txsz, mintxsz, maxtxsz);
+}
+
+static u64 compute_pac_mask(struct kvm_vcpu *vcpu, bool bit55)
+{
+	int bottom_pac;
+	u64 mask;
+
+	bottom_pac = compute_bottom_pac(vcpu, bit55);
+
+	mask = GENMASK(54, bottom_pac);
+	if (!effective_tbi(vcpu, bit55))
+		mask |= GENMASK(63, 56);
+
+	return mask;
+}
+
+static u64 to_canonical_addr(struct kvm_vcpu *vcpu, u64 ptr, u64 mask)
+{
+	bool bit55 = !!(ptr & BIT(55));
+
+	if (bit55)
+		return ptr | mask;
+
+	return ptr & ~mask;
+}
+
+static u64 corrupt_addr(struct kvm_vcpu *vcpu, u64 ptr)
+{
+	bool bit55 = !!(ptr & BIT(55));
+	u64 mask, error_code;
+	int shift;
+
+	if (effective_tbi(vcpu, bit55)) {
+		mask = GENMASK(54, 53);
+		shift = 53;
+	} else {
+		mask = GENMASK(62, 61);
+		shift = 61;
+	}
+
+	if (esr_iss_is_eretab(kvm_vcpu_get_esr(vcpu)))
+		error_code = 2 << shift;
+	else
+		error_code = 1 << shift;
+
+	ptr &= ~mask;
+	ptr |= error_code;
+
+	return ptr;
+}
+
+/*
+ * Authenticate an ERETAA/ERETAB instruction, returning true if the
+ * authentication succeeded and false otherwise. In all cases, *elr
+ * contains the VA to ERET to. Potential exception injection is left
+ * to the caller.
+ */
+bool kvm_auth_eretax(struct kvm_vcpu *vcpu, u64 *elr)
+{
+	u64 sctlr = vcpu_read_sys_reg(vcpu, SCTLR_EL2);
+	u64 esr = kvm_vcpu_get_esr(vcpu);
+	u64 ptr, cptr, pac, mask;
+	struct ptrauth_key ikey;
+
+	*elr = ptr = vcpu_read_sys_reg(vcpu, ELR_EL2);
+
+	/* We assume we're already in the context of an ERETAx */
+	if (esr_iss_is_eretab(esr)) {
+		if (!(sctlr & SCTLR_EL1_EnIB))
+			return true;
+
+		ikey.lo = __vcpu_sys_reg(vcpu, APIBKEYLO_EL1);
+		ikey.hi = __vcpu_sys_reg(vcpu, APIBKEYHI_EL1);
+	} else {
+		if (!(sctlr & SCTLR_EL1_EnIA))
+			return true;
+
+		ikey.lo = __vcpu_sys_reg(vcpu, APIAKEYLO_EL1);
+		ikey.hi = __vcpu_sys_reg(vcpu, APIAKEYHI_EL1);
+	}
+
+	mask = compute_pac_mask(vcpu, !!(ptr & BIT(55)));
+	cptr = to_canonical_addr(vcpu, ptr, mask);
+
+	pac = compute_pac(vcpu, cptr, ikey);
+
+	/*
+	 * Slightly deviate from the pseudocode: if we have a PAC
+	 * match with the signed pointer, then it must be good.
+	 * Anything after this point is pure error handling.
+	 */
+	if ((pac & mask) == (ptr & mask)) {
+		*elr = cptr;
+		return true;
+	}
+
+	/*
+	 * Authentication failed, corrupt the canonical address if
+	 * PAuth2 isn't implemented, or some XORing if it is.
+	 */
+	if (!kvm_has_pauth(vcpu->kvm, PAuth2))
+		cptr = corrupt_addr(vcpu, cptr);
+	else
+		cptr = ptr ^ (pac & mask);
+
+	*elr = cptr;
+	return false;
+}
-- 
2.39.2



* [PATCH v2 12/13] KVM: arm64: nv: Handle ERETA[AB] instructions
  2024-02-26 10:05 [PATCH v2 00/13] KVM/arm64: Add NV support for ERET and PAuth Marc Zyngier
                   ` (10 preceding siblings ...)
  2024-02-26 10:05 ` [PATCH v2 11/13] KVM: arm64: nv: Add emulation for ERETAx instructions Marc Zyngier
@ 2024-02-26 10:06 ` Marc Zyngier
  2024-03-12 11:17   ` Joey Gouly
  2024-02-26 10:06 ` [PATCH v2 13/13] KVM: arm64: nv: Advertise support for PAuth Marc Zyngier
  12 siblings, 1 reply; 28+ messages in thread
From: Marc Zyngier @ 2024-02-26 10:06 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Joey Gouly, Will Deacon, Catalin Marinas

Now that we have some emulation in place for ERETA[AB], we can
plug it into the exception handling machinery.

As for a bare ERET, an "easy" ERETAx instruction is processed as
a fixup, while something that requires a translation regime
transition or an exception delivery is left to the slow path.
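
On an authentication failure, the behaviour depends on the PAuth
flavour exposed to the guest (sketch of the slow path below):

	if (kvm_has_pauth(vcpu->kvm, FPACCOMBINE)) {
		/* deliver an FPAC exception to the L1 hypervisor right away */
	} else {
		/* let the ERET proceed with the mangled ELR; the guest
		 * ends up at a non-canonical address and faults there */
	}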

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/emulate-nested.c | 22 ++++++++++++++++++++--
 arch/arm64/kvm/handle_exit.c    |  3 ++-
 arch/arm64/kvm/hyp/vhe/switch.c | 13 +++++++++++--
 3 files changed, 33 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
index 63a74c0330f1..72d733c74a38 100644
--- a/arch/arm64/kvm/emulate-nested.c
+++ b/arch/arm64/kvm/emulate-nested.c
@@ -2172,7 +2172,7 @@ static u64 kvm_check_illegal_exception_return(struct kvm_vcpu *vcpu, u64 spsr)
 
 void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu)
 {
-	u64 spsr, elr;
+	u64 spsr, elr, esr;
 
 	/*
 	 * Forward this trap to the virtual EL2 if the virtual
@@ -2181,12 +2181,30 @@ void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu)
 	if (forward_traps(vcpu, HCR_NV))
 		return;
 
+	/* Check for an ERETAx */
+	esr = kvm_vcpu_get_esr(vcpu);
+	if (esr_iss_is_eretax(esr) && !kvm_auth_eretax(vcpu, &elr)) {
+		/*
+		 * Oh no, ERETAx failed to authenticate.  If we have
+		 * FPACCOMBINE, deliver an exception right away.  If we
+		 * don't, then let the mangled ELR value trickle down the
+		 * ERET handling, and the guest will have a little surprise.
+		 */
+		if (kvm_has_pauth(vcpu->kvm, FPACCOMBINE)) {
+			esr &= ESR_ELx_ERET_ISS_ERETA;
+			esr |= FIELD_PREP(ESR_ELx_EC_MASK, ESR_ELx_EC_FPAC);
+			kvm_inject_nested_sync(vcpu, esr);
+			return;
+		}
+	}
+
 	preempt_disable();
 	kvm_arch_vcpu_put(vcpu);
 
 	spsr = __vcpu_sys_reg(vcpu, SPSR_EL2);
 	spsr = kvm_check_illegal_exception_return(vcpu, spsr);
-	elr = __vcpu_sys_reg(vcpu, ELR_EL2);
+	if (!esr_iss_is_eretax(esr))
+		elr = __vcpu_sys_reg(vcpu, ELR_EL2);
 
 	trace_kvm_nested_eret(vcpu, elr, spsr);
 
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 1ba2f788b2c3..407bdfbb572b 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -248,7 +248,8 @@ static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu)
 
 static int kvm_handle_eret(struct kvm_vcpu *vcpu)
 {
-	if (esr_iss_is_eretax(kvm_vcpu_get_esr(vcpu)))
+	if (esr_iss_is_eretax(kvm_vcpu_get_esr(vcpu)) &&
+	    !vcpu_has_ptrauth(vcpu))
 		return kvm_handle_ptrauth(vcpu);
 
 	/*
diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
index 3ea9bdf6b555..49d36666040e 100644
--- a/arch/arm64/kvm/hyp/vhe/switch.c
+++ b/arch/arm64/kvm/hyp/vhe/switch.c
@@ -208,7 +208,8 @@ void kvm_vcpu_put_vhe(struct kvm_vcpu *vcpu)
 
 static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
-	u64 spsr, mode;
+	u64 esr = kvm_vcpu_get_esr(vcpu);
+	u64 spsr, elr, mode;
 
 	/*
 	 * Going through the whole put/load motions is a waste of time
@@ -242,10 +243,18 @@ static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code)
 		return false;
 	}
 
+	/* If ERETAx fails, take the slow path */
+	if (esr_iss_is_eretax(esr)) {
+		if (!(vcpu_has_ptrauth(vcpu) && kvm_auth_eretax(vcpu, &elr)))
+			return false;
+	} else {
+		elr = read_sysreg_el1(SYS_ELR);
+	}
+
 	spsr = (spsr & ~(PSR_MODE_MASK | PSR_MODE32_BIT)) | mode;
 
 	write_sysreg_el2(spsr, SYS_SPSR);
-	write_sysreg_el2(read_sysreg_el1(SYS_ELR), SYS_ELR);
+	write_sysreg_el2(elr, SYS_ELR);
 
 	return true;
 }
-- 
2.39.2



* [PATCH v2 13/13] KVM: arm64: nv: Advertise support for PAuth
  2024-02-26 10:05 [PATCH v2 00/13] KVM/arm64: Add NV support for ERET and PAuth Marc Zyngier
                   ` (11 preceding siblings ...)
  2024-02-26 10:06 ` [PATCH v2 12/13] KVM: arm64: nv: Handle ERETA[AB] instructions Marc Zyngier
@ 2024-02-26 10:06 ` Marc Zyngier
  2024-03-12 11:21   ` Joey Gouly
  12 siblings, 1 reply; 28+ messages in thread
From: Marc Zyngier @ 2024-02-26 10:06 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Suzuki K Poulose, Oliver Upton, Zenghui Yu,
	Joey Gouly, Will Deacon, Catalin Marinas

Now that we (hopefully) correctly handle ERETAx, drop the masking
of the PAuth feature (something that was not even complete, as
APA3 and GPA3 were still exposed).

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/nested.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
index ced30c90521a..6813c7c7f00a 100644
--- a/arch/arm64/kvm/nested.c
+++ b/arch/arm64/kvm/nested.c
@@ -35,13 +35,9 @@ static u64 limit_nv_id_reg(u32 id, u64 val)
 		break;
 
 	case SYS_ID_AA64ISAR1_EL1:
-		/* Support everything but PtrAuth and Spec Invalidation */
+		/* Support everything but Spec Invalidation */
 		val &= ~(GENMASK_ULL(63, 56)	|
-			 NV_FTR(ISAR1, SPECRES)	|
-			 NV_FTR(ISAR1, GPI)	|
-			 NV_FTR(ISAR1, GPA)	|
-			 NV_FTR(ISAR1, API)	|
-			 NV_FTR(ISAR1, APA));
+			 NV_FTR(ISAR1, SPECRES));
 		break;
 
 	case SYS_ID_AA64PFR0_EL1:
-- 
2.39.2



* Re: [PATCH v2 06/13] KVM: arm64: nv: Fast-track 'InHost' exception returns
  2024-02-26 10:05 ` [PATCH v2 06/13] KVM: arm64: nv: Fast-track 'InHost' exception returns Marc Zyngier
@ 2024-02-28 16:08   ` Joey Gouly
  2024-02-29 13:44     ` Marc Zyngier
  0 siblings, 1 reply; 28+ messages in thread
From: Joey Gouly @ 2024-02-28 16:08 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Mon, Feb 26, 2024 at 10:05:54AM +0000, Marc Zyngier wrote:
> A significant part of the FEAT_NV extension is to trap ERET
> instructions so that the hypervisor gets a chance to switch
> from a vEL2 L1 guest to an EL1 L2 guest.
> 
> But this also has the unfortunate consequence of trapping ERET
> in unsuspecting circumstances, such as staying at vEL2 (interrupt
> handling while being in the guest hypervisor), or returning to host
> userspace in the case of a VHE guest.
> 
> Although we already make some effort to handle these ERETs more quickly
> by not doing the put/load dance, that handling is still way too far
> down the line to be efficient enough.
> 
> For these cases, it would be ideal to ERET directly, no questions asked.
> Of course, we can't do that. But the next best thing is to do it as
> early as possible, in fixup_guest_exit(), much as we would handle
> FPSIMD exceptions.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/emulate-nested.c | 29 +++-------------------
>  arch/arm64/kvm/hyp/vhe/switch.c | 44 +++++++++++++++++++++++++++++++++
>  2 files changed, 47 insertions(+), 26 deletions(-)
> 
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index 2d80e81ae650..63a74c0330f1 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -2172,8 +2172,7 @@ static u64 kvm_check_illegal_exception_return(struct kvm_vcpu *vcpu, u64 spsr)
>  
>  void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu)
>  {
> -	u64 spsr, elr, mode;
> -	bool direct_eret;
> +	u64 spsr, elr;
>  
>  	/*
>  	 * Forward this trap to the virtual EL2 if the virtual
> @@ -2182,33 +2181,11 @@ void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu)
>  	if (forward_traps(vcpu, HCR_NV))
>  		return;
>  
> -	/*
> -	 * Going through the whole put/load motions is a waste of time
> -	 * if this is a VHE guest hypervisor returning to its own
> -	 * userspace, or the hypervisor performing a local exception
> -	 * return. No need to save/restore registers, no need to
> -	 * switch S2 MMU. Just do the canonical ERET.
> -	 */
> -	spsr = vcpu_read_sys_reg(vcpu, SPSR_EL2);
> -	spsr = kvm_check_illegal_exception_return(vcpu, spsr);
> -
> -	mode = spsr & (PSR_MODE_MASK | PSR_MODE32_BIT);
> -
> -	direct_eret  = (mode == PSR_MODE_EL0t &&
> -			vcpu_el2_e2h_is_set(vcpu) &&
> -			vcpu_el2_tge_is_set(vcpu));
> -	direct_eret |= (mode == PSR_MODE_EL2h || mode == PSR_MODE_EL2t);
> -
> -	if (direct_eret) {
> -		*vcpu_pc(vcpu) = vcpu_read_sys_reg(vcpu, ELR_EL2);
> -		*vcpu_cpsr(vcpu) = spsr;
> -		trace_kvm_nested_eret(vcpu, *vcpu_pc(vcpu), spsr);
> -		return;
> -	}
> -
>  	preempt_disable();
>  	kvm_arch_vcpu_put(vcpu);
>  
> +	spsr = __vcpu_sys_reg(vcpu, SPSR_EL2);
> +	spsr = kvm_check_illegal_exception_return(vcpu, spsr);
>  	elr = __vcpu_sys_reg(vcpu, ELR_EL2);
>  
>  	trace_kvm_nested_eret(vcpu, elr, spsr);
> diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
> index d5fdcea2b366..eaf242b8e0cf 100644
> --- a/arch/arm64/kvm/hyp/vhe/switch.c
> +++ b/arch/arm64/kvm/hyp/vhe/switch.c
> @@ -206,6 +206,49 @@ void kvm_vcpu_put_vhe(struct kvm_vcpu *vcpu)
>  	__vcpu_put_switch_sysregs(vcpu);
>  }
>  
> +static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code)
> +{
> +	u64 spsr, mode;
> +
> +	/*
> +	 * Going through the whole put/load motions is a waste of time
> +	 * if this is a VHE guest hypervisor returning to its own
> +	 * userspace, or the hypervisor performing a local exception
> +	 * return. No need to save/restore registers, no need to
> +	 * switch S2 MMU. Just do the canonical ERET.
> +	 *
> +	 * Unless the trap has to be forwarded further down the line,
> +	 * of course...
> +	 */
> +	if (__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_NV)
> +		return false;
> +
> +	spsr = read_sysreg_el1(SYS_SPSR);
> +	mode = spsr & (PSR_MODE_MASK | PSR_MODE32_BIT);
> +
> +	switch (mode) {
> +	case PSR_MODE_EL0t:
> +		if (!(vcpu_el2_e2h_is_set(vcpu) && vcpu_el2_tge_is_set(vcpu)))
> +			return false;
> +		break;
> +	case PSR_MODE_EL2t:
> +		mode = PSR_MODE_EL1t;
> +		break;
> +	case PSR_MODE_EL2h:
> +		mode = PSR_MODE_EL1h;
> +		break;
> +	default:
> +		return false;
> +	}

Thanks for pointing out to_hw_pstate() (off-list). I spent far too long trying
to understand how the original code converted PSTATE.M from (v)EL2 to EL1, and
missed that while browsing.

Seems hard to re-use to_hw_pstate() here, since we want the early returns.

> +
> +	spsr = (spsr & ~(PSR_MODE_MASK | PSR_MODE32_BIT)) | mode;

I don't think we need to mask out PSR_MODE32_BIT here again, since if it was
set in `mode`, it wouldn't have matched in the switch statement. It's possibly
out of 'defensiveness' though. And I'm being nitpicky.

> +
> +	write_sysreg_el2(spsr, SYS_SPSR);
> +	write_sysreg_el2(read_sysreg_el1(SYS_ELR), SYS_ELR);
> +
> +	return true;
> +}
> +
>  static const exit_handler_fn hyp_exit_handlers[] = {
>  	[0 ... ESR_ELx_EC_MAX]		= NULL,
>  	[ESR_ELx_EC_CP15_32]		= kvm_hyp_handle_cp15_32,
> @@ -216,6 +259,7 @@ static const exit_handler_fn hyp_exit_handlers[] = {
>  	[ESR_ELx_EC_DABT_LOW]		= kvm_hyp_handle_dabt_low,
>  	[ESR_ELx_EC_WATCHPT_LOW]	= kvm_hyp_handle_watchpt_low,
>  	[ESR_ELx_EC_PAC]		= kvm_hyp_handle_ptrauth,
> +	[ESR_ELx_EC_ERET]		= kvm_hyp_handle_eret,
>  	[ESR_ELx_EC_MOPS]		= kvm_hyp_handle_mops,
>  };
>  

Otherwise,

Reviewed-by: Joey Gouly <joey.gouly@arm.com>

Thanks,
Joey


* Re: [PATCH v2 06/13] KVM: arm64: nv: Fast-track 'InHost' exception returns
  2024-02-28 16:08   ` Joey Gouly
@ 2024-02-29 13:44     ` Marc Zyngier
  0 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2024-02-29 13:44 UTC (permalink / raw)
  To: Joey Gouly
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Wed, 28 Feb 2024 16:08:00 +0000,
Joey Gouly <joey.gouly@arm.com> wrote:
> 
> On Mon, Feb 26, 2024 at 10:05:54AM +0000, Marc Zyngier wrote:
> > A significant part of the FEAT_NV extension is to trap ERET
> > instructions so that the hypervisor gets a chance to switch
> > from a vEL2 L1 guest to an EL1 L2 guest.
> > 
> > But this also has the unfortunate consequence of trapping ERET
> > in unsuspecting circumstances, such as staying at vEL2 (interrupt
> > handling while being in the guest hypervisor), or returning to host
> > userspace in the case of a VHE guest.
> > 
> > Although we already make some effort to handle these ERETs quicker
> > by not doing the put/load dance, it is still way too far down the
> > line for it to be efficient enough.
> > 
> > For these cases, it would be ideal to ERET directly, no questions asked.
> > Of course, we can't do that. But the next best thing is to do it as
> > early as possible, in fixup_guest_exit(), much as we would handle
> > FPSIMD exceptions.
> > 
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/kvm/emulate-nested.c | 29 +++-------------------
> >  arch/arm64/kvm/hyp/vhe/switch.c | 44 +++++++++++++++++++++++++++++++++
> >  2 files changed, 47 insertions(+), 26 deletions(-)
> > 
> > diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> > index 2d80e81ae650..63a74c0330f1 100644
> > --- a/arch/arm64/kvm/emulate-nested.c
> > +++ b/arch/arm64/kvm/emulate-nested.c
> > @@ -2172,8 +2172,7 @@ static u64 kvm_check_illegal_exception_return(struct kvm_vcpu *vcpu, u64 spsr)
> >  
> >  void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu)
> >  {
> > -	u64 spsr, elr, mode;
> > -	bool direct_eret;
> > +	u64 spsr, elr;
> >  
> >  	/*
> >  	 * Forward this trap to the virtual EL2 if the virtual
> > @@ -2182,33 +2181,11 @@ void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu)
> >  	if (forward_traps(vcpu, HCR_NV))
> >  		return;
> >  
> > -	/*
> > -	 * Going through the whole put/load motions is a waste of time
> > -	 * if this is a VHE guest hypervisor returning to its own
> > -	 * userspace, or the hypervisor performing a local exception
> > -	 * return. No need to save/restore registers, no need to
> > -	 * switch S2 MMU. Just do the canonical ERET.
> > -	 */
> > -	spsr = vcpu_read_sys_reg(vcpu, SPSR_EL2);
> > -	spsr = kvm_check_illegal_exception_return(vcpu, spsr);
> > -
> > -	mode = spsr & (PSR_MODE_MASK | PSR_MODE32_BIT);
> > -
> > -	direct_eret  = (mode == PSR_MODE_EL0t &&
> > -			vcpu_el2_e2h_is_set(vcpu) &&
> > -			vcpu_el2_tge_is_set(vcpu));
> > -	direct_eret |= (mode == PSR_MODE_EL2h || mode == PSR_MODE_EL2t);
> > -
> > -	if (direct_eret) {
> > -		*vcpu_pc(vcpu) = vcpu_read_sys_reg(vcpu, ELR_EL2);
> > -		*vcpu_cpsr(vcpu) = spsr;
> > -		trace_kvm_nested_eret(vcpu, *vcpu_pc(vcpu), spsr);
> > -		return;
> > -	}
> > -
> >  	preempt_disable();
> >  	kvm_arch_vcpu_put(vcpu);
> >  
> > +	spsr = __vcpu_sys_reg(vcpu, SPSR_EL2);
> > +	spsr = kvm_check_illegal_exception_return(vcpu, spsr);
> >  	elr = __vcpu_sys_reg(vcpu, ELR_EL2);
> >  
> >  	trace_kvm_nested_eret(vcpu, elr, spsr);
> > diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
> > index d5fdcea2b366..eaf242b8e0cf 100644
> > --- a/arch/arm64/kvm/hyp/vhe/switch.c
> > +++ b/arch/arm64/kvm/hyp/vhe/switch.c
> > @@ -206,6 +206,49 @@ void kvm_vcpu_put_vhe(struct kvm_vcpu *vcpu)
> >  	__vcpu_put_switch_sysregs(vcpu);
> >  }
> >  
> > +static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code)
> > +{
> > +	u64 spsr, mode;
> > +
> > +	/*
> > +	 * Going through the whole put/load motions is a waste of time
> > +	 * if this is a VHE guest hypervisor returning to its own
> > +	 * userspace, or the hypervisor performing a local exception
> > +	 * return. No need to save/restore registers, no need to
> > +	 * switch S2 MMU. Just do the canonical ERET.
> > +	 *
> > +	 * Unless the trap has to be forwarded further down the line,
> > +	 * of course...
> > +	 */
> > +	if (__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_NV)
> > +		return false;
> > +
> > +	spsr = read_sysreg_el1(SYS_SPSR);
> > +	mode = spsr & (PSR_MODE_MASK | PSR_MODE32_BIT);
> > +
> > +	switch (mode) {
> > +	case PSR_MODE_EL0t:
> > +		if (!(vcpu_el2_e2h_is_set(vcpu) && vcpu_el2_tge_is_set(vcpu)))
> > +			return false;
> > +		break;
> > +	case PSR_MODE_EL2t:
> > +		mode = PSR_MODE_EL1t;
> > +		break;
> > +	case PSR_MODE_EL2h:
> > +		mode = PSR_MODE_EL1h;
> > +		break;
> > +	default:
> > +		return false;
> > +	}
> 
> Thanks for pointing out to_hw_pstate() (off-list), I spent far too long trying
> to understand how the original code converted PSTATE.M from (v)EL2 to EL1, and
> missed that while browsing.
> 
> Seems hard to re-use to_hw_pstate() here, since we want the early
> returns.

Indeed. I tried to fit it in, but ended up checking for things twice,
which isn't great either.

> 
> > +
> > +	spsr = (spsr & ~(PSR_MODE_MASK | PSR_MODE32_BIT)) | mode;
> 
> I don't think we need to mask out PSR_MODE32_BIT here again, since if it was
> set in `mode`, it wouldn't have matched in the switch statement. It's possibly
> out of 'defensiveness' though. And I'm being nitpicky.

It's a sanity thing. We want to make sure all of M[4:0] are cleared
before or'ing the new mode. I agree that we wouldn't be there if
PSR_MODE32_BIT was set, but this matches the usage in most other
places in the code.
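
For the record, the idiom being

	/* PSR_MODE_MASK covers M[3:0], PSR_MODE32_BIT is M[4] */
	spsr = (spsr & ~(PSR_MODE_MASK | PSR_MODE32_BIT)) | mode;

i.e. all of M[4:0] goes, even when M[4] is already known to be clear
at this point.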

> 
> > +
> > +	write_sysreg_el2(spsr, SYS_SPSR);
> > +	write_sysreg_el2(read_sysreg_el1(SYS_ELR), SYS_ELR);
> > +
> > +	return true;
> > +}
> > +
> >  static const exit_handler_fn hyp_exit_handlers[] = {
> >  	[0 ... ESR_ELx_EC_MAX]		= NULL,
> >  	[ESR_ELx_EC_CP15_32]		= kvm_hyp_handle_cp15_32,
> > @@ -216,6 +259,7 @@ static const exit_handler_fn hyp_exit_handlers[] = {
> >  	[ESR_ELx_EC_DABT_LOW]		= kvm_hyp_handle_dabt_low,
> >  	[ESR_ELx_EC_WATCHPT_LOW]	= kvm_hyp_handle_watchpt_low,
> >  	[ESR_ELx_EC_PAC]		= kvm_hyp_handle_ptrauth,
> > +	[ESR_ELx_EC_ERET]		= kvm_hyp_handle_eret,
> >  	[ESR_ELx_EC_MOPS]		= kvm_hyp_handle_mops,
> >  };
> >  
> 
> Otherwise,
> 
> Reviewed-by: Joey Gouly <joey.gouly@arm.com>

Thanks!

	M.

-- 
Without deviation from the norm, progress is not possible.


* Re: [PATCH v2 07/13] KVM: arm64: nv: Honor HFGITR_EL2.ERET being set
  2024-02-26 10:05 ` [PATCH v2 07/13] KVM: arm64: nv: Honor HFGITR_EL2.ERET being set Marc Zyngier
@ 2024-03-01 18:07   ` Joey Gouly
  2024-03-01 19:14     ` Marc Zyngier
  0 siblings, 1 reply; 28+ messages in thread
From: Joey Gouly @ 2024-03-01 18:07 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

Got a question about this one,

On Mon, Feb 26, 2024 at 10:05:55AM +0000, Marc Zyngier wrote:
> If the L1 hypervisor decides to trap ERETs while running L2,
> make sure we don't try to emulate it, just like we wouldn't
> if it had its NV bit set.
> 
> The exception will be reinjected from the core handler.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/hyp/vhe/switch.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
> index eaf242b8e0cf..3ea9bdf6b555 100644
> --- a/arch/arm64/kvm/hyp/vhe/switch.c
> +++ b/arch/arm64/kvm/hyp/vhe/switch.c
> @@ -220,7 +220,8 @@ static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code)
>  	 * Unless the trap has to be forwarded further down the line,
>  	 * of course...
>  	 */
> -	if (__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_NV)
> +	if ((__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_NV) ||
> +	    (__vcpu_sys_reg(vcpu, HFGITR_EL2) & HFGITR_EL2_ERET))
>  		return false;
>  
>  	spsr = read_sysreg_el1(SYS_SPSR);

Are we missing a forward_traps() call in kvm_emulate_nested_eret() for the
HFGITR case?

Trying to follow the code path here, and I'm unsure of where else the
HFGITR_EL2_ERET trap would be forwarded:

kvm_arm_vcpu_enter_exit ->
	ERET executes in guest
	fixup_guest_exit ->
		kvm_hyp_handle_eret (returns false)

handle_exit ->
	kvm_handle_eret ->
		kvm_emulate_nested_eret
			if HCR_NV
				forward traps
			else
				emulate ERET


And if the answer is that it is being reinjected somewhere, putting that
function name in the commit instead of 'core handler' would help with
understanding!

I need to find the time to get an NV setup set up, so I can do some experiments
myself.

Thanks,
Joey


* Re: [PATCH v2 07/13] KVM: arm64: nv: Honor HFGITR_EL2.ERET being set
  2024-03-01 18:07   ` Joey Gouly
@ 2024-03-01 19:14     ` Marc Zyngier
  2024-03-01 20:15       ` Joey Gouly
  0 siblings, 1 reply; 28+ messages in thread
From: Marc Zyngier @ 2024-03-01 19:14 UTC (permalink / raw)
  To: Joey Gouly
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

Hi Joey,

On Fri, 01 Mar 2024 18:07:34 +0000,
Joey Gouly <joey.gouly@arm.com> wrote:
> 
> Got a question about this one,
> 
> On Mon, Feb 26, 2024 at 10:05:55AM +0000, Marc Zyngier wrote:
> > If the L1 hypervisor decides to trap ERETs while running L2,
> > make sure we don't try to emulate it, just like we wouldn't
> > if it had its NV bit set.
> > 
> > The exception will be reinjected from the core handler.
> > 
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/kvm/hyp/vhe/switch.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
> > index eaf242b8e0cf..3ea9bdf6b555 100644
> > --- a/arch/arm64/kvm/hyp/vhe/switch.c
> > +++ b/arch/arm64/kvm/hyp/vhe/switch.c
> > @@ -220,7 +220,8 @@ static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code)
> >  	 * Unless the trap has to be forwarded further down the line,
> >  	 * of course...
> >  	 */
> > -	if (__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_NV)
> > +	if ((__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_NV) ||
> > +	    (__vcpu_sys_reg(vcpu, HFGITR_EL2) & HFGITR_EL2_ERET))
> >  		return false;
> >  
> >  	spsr = read_sysreg_el1(SYS_SPSR);
> 
> Are we missing a forward_traps() call in kvm_emulate_nested_eret() for the
> HFGITR case?
> 
> Trying to follow the code path here, and I'm unsure of where else the
> HFGITR_EL2_ERET trap would be forwarded:
> 
> kvm_arm_vcpu_enter_exit ->
> 	ERET executes in guest
> 	fixup_guest_exit ->
> 		kvm_hyp_handle_eret (returns false)
> 
> handle_exit ->
> 	kvm_handle_eret ->
> 		kvm_emulate_nested_eret
> 			if HCR_NV
> 				forward traps
> 			else
> 				emulate ERET

There's a bit more happening in kvm_handle_eret().

> 
> 
> And if the answer is that it is being reinjected somewhere, putting that
> function name in the commit instead of 'core handler' would help with
> understanding!

Let's look at the code:

	static int kvm_handle_eret(struct kvm_vcpu *vcpu)
	{
		[...]
	
		if (is_hyp_ctxt(vcpu))
			kvm_emulate_nested_eret(vcpu);

If we're doing an ERET from guest EL2, then we just emulate it,
because there is nothing else to do. Crucially, HFGITR_EL2.ERET
doesn't apply to EL2.

		else
			kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));

In any other case, we simply reinject the trap into the guest EL2,
because that's the only possible outcome. And that's what you were
missing.

		return 1;
	}
	

> I need to find the time to get an NV setup set up, so I can do some experiments
> myself.

The FVP should be a good enough environment if you can bear the
glacial speed. Other than that, I hear that QEMU has grown some NV
support lately, but I haven't tried it yet. HW-wise, M2 is the only
machine that can be bought by a human being (everything else is
vapourware, or they would have already taken my money).

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


* Re: [PATCH v2 07/13] KVM: arm64: nv: Honor HFGITR_EL2.ERET being set
  2024-03-01 19:14     ` Marc Zyngier
@ 2024-03-01 20:15       ` Joey Gouly
  0 siblings, 0 replies; 28+ messages in thread
From: Joey Gouly @ 2024-03-01 20:15 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Fri, Mar 01, 2024 at 07:14:00PM +0000, Marc Zyngier wrote:
> Hi Joey,
> 
> On Fri, 01 Mar 2024 18:07:34 +0000,
> Joey Gouly <joey.gouly@arm.com> wrote:
> > 
> > Got a question about this one,
> > 
> > On Mon, Feb 26, 2024 at 10:05:55AM +0000, Marc Zyngier wrote:
> > > If the L1 hypervisor decides to trap ERETs while running L2,
> > > make sure we don't try to emulate it, just like we wouldn't
> > > if it had its NV bit set.
> > > 
> > > The exception will be reinjected from the core handler.
> > > 
> > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > > ---
> > >  arch/arm64/kvm/hyp/vhe/switch.c | 3 ++-
> > >  1 file changed, 2 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
> > > index eaf242b8e0cf..3ea9bdf6b555 100644
> > > --- a/arch/arm64/kvm/hyp/vhe/switch.c
> > > +++ b/arch/arm64/kvm/hyp/vhe/switch.c
> > > @@ -220,7 +220,8 @@ static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code)
> > >  	 * Unless the trap has to be forwarded further down the line,
> > >  	 * of course...
> > >  	 */
> > > -	if (__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_NV)
> > > +	if ((__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_NV) ||
> > > +	    (__vcpu_sys_reg(vcpu, HFGITR_EL2) & HFGITR_EL2_ERET))
> > >  		return false;
> > >  
> > >  	spsr = read_sysreg_el1(SYS_SPSR);
> > 
> > Are we missing a forward_traps() call in kvm_emulate_nested_eret() for the
> > HFGITR case?
> > 
> > Trying to follow the code path here, and I'm unsure of where else the
> > HFGITR_EL2_ERET trap would be forwarded:
> > 
> > kvm_arm_vcpu_enter_exit ->
> > 	ERET executes in guest
> > 	fixup_guest_exit ->
> > 		kvm_hyp_handle_eret (returns false)
> > 
> > handle_exit ->
> > 	kvm_handle_eret ->
> > 		kvm_emulate_nested_eret
> > 			if HCR_NV
> > 				forward traps
> > 			else
> > 				emulate ERET
> 
> There's a bit more happening in kvm_handle_eret().
> 
> > 
> > 
> > And if the answer is that it is being reinjected somewhere, putting that
> > function name in the commit instead of 'core handler' would help with
> > understanding!
> 
> Let's look at the code:
> 
> 	static int kvm_handle_eret(struct kvm_vcpu *vcpu)
> 	{
> 		[...]
> 	
> 		if (is_hyp_ctxt(vcpu))
> 			kvm_emulate_nested_eret(vcpu);
> 
> If we're doing an ERET from guest EL2, then we just emulate it,
> because there is nothing else to do. Crucially, HFGITR_EL2.ERET
> doesn't apply to EL2.
> 
> 		else
> 			kvm_inject_nested_sync(vcpu, kvm_vcpu_get_esr(vcpu));
> 
> In any other case, we simply reinject the trap into the guest EL2,
> because that's the only possible outcome. And that's what you were
> missing.
> 
> 		return 1;
> 	}
> 	

Thanks, that makes sense now! I was forgetting about the crucial fact that
HFGITR_EL2.ERET applies to EL1, which is !is_hyp_ctxt(), so we take the other
branch.

With that cleared up:

Reviewed-by: Joey Gouly <joey.gouly@arm.com>

Thanks,
Joey

> 
> > I need to find the time to get an NV setup set up, so I can do some experiments
> > myself.
> 
> The FVP should be a good enough environment if you can bear the
> glacial speed. Other than that, I hear that QEMU has grown some NV
> support lately, but I haven't tried it yet. HW-wise, M2 is the only
> machine that can be bought by a human being (everything else is
> vapourware, or they would have already taken my money).
> 
> Thanks,
> 
> 	M.
> 
> -- 
> Without deviation from the norm, progress is not possible.


* Re: [PATCH v2 11/13] KVM: arm64: nv: Add emulation for ERETAx instructions
  2024-02-26 10:05 ` [PATCH v2 11/13] KVM: arm64: nv: Add emulation for ERETAx instructions Marc Zyngier
@ 2024-03-07 13:39   ` Joey Gouly
  2024-03-07 14:24     ` Marc Zyngier
  2024-03-08 17:20   ` Joey Gouly
  1 sibling, 1 reply; 28+ messages in thread
From: Joey Gouly @ 2024-03-07 13:39 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

Note that this is my first time looking at PAuth.

On Mon, Feb 26, 2024 at 10:05:59AM +0000, Marc Zyngier wrote:
> FEAT_NV has the interesting property of relying on ERET being
> trapped. An added complexity is that it also traps ERETAA and
> ERETAB, meaning that the Pointer Authentication aspect of these
> instructions must be emulated.
> 
> Add an emulation of Pointer Authentication, limited to ERETAx
> (always using SP_EL2 as the modifier and ELR_EL2 as the pointer),
> using the Generic Authentication instructions.
> 
> The emulation, however small, is placed in its own compilation
> unit so that it can be avoided if the configuration doesn't
> include it (or the toolchain is not up to the task).
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_nested.h    |  12 ++
>  arch/arm64/include/asm/pgtable-hwdef.h |   1 +
>  arch/arm64/kvm/Makefile                |   1 +
>  arch/arm64/kvm/pauth.c                 | 196 +++++++++++++++++++++++++
>  4 files changed, 210 insertions(+)
>  create mode 100644 arch/arm64/kvm/pauth.c
> 
> diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
> index dbc4e3a67356..5e0ab0596246 100644
> --- a/arch/arm64/include/asm/kvm_nested.h
> +++ b/arch/arm64/include/asm/kvm_nested.h
> @@ -64,4 +64,16 @@ extern bool forward_smc_trap(struct kvm_vcpu *vcpu);
>  
>  int kvm_init_nv_sysregs(struct kvm *kvm);
>  
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +bool kvm_auth_eretax(struct kvm_vcpu *vcpu, u64 *elr);
> +#else
> +static inline bool kvm_auth_eretax(struct kvm_vcpu *vcpu, u64 *elr)
> +{
> +	/* We really should never execute this... */
> +	WARN_ON_ONCE(1);
> +	*elr = 0xbad9acc0debadbad;
> +	return false;
> +}
> +#endif
> +
>  #endif /* __ARM64_KVM_NESTED_H */
> diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
> index e4944d517c99..bb88e9ef6296 100644
> --- a/arch/arm64/include/asm/pgtable-hwdef.h
> +++ b/arch/arm64/include/asm/pgtable-hwdef.h
> @@ -277,6 +277,7 @@
>  #define TCR_TBI1		(UL(1) << 38)
>  #define TCR_HA			(UL(1) << 39)
>  #define TCR_HD			(UL(1) << 40)
> +#define TCR_TBID0		(UL(1) << 51)
>  #define TCR_TBID1		(UL(1) << 52)
>  #define TCR_NFD0		(UL(1) << 53)
>  #define TCR_NFD1		(UL(1) << 54)
> diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
> index c0c050e53157..04882b577575 100644
> --- a/arch/arm64/kvm/Makefile
> +++ b/arch/arm64/kvm/Makefile
> @@ -23,6 +23,7 @@ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
>  	 vgic/vgic-its.o vgic/vgic-debug.o
>  
>  kvm-$(CONFIG_HW_PERF_EVENTS)  += pmu-emul.o pmu.o
> +kvm-$(CONFIG_ARM64_PTR_AUTH)  += pauth.o
>  
>  always-y := hyp_constants.h hyp-constants.s
>  
> diff --git a/arch/arm64/kvm/pauth.c b/arch/arm64/kvm/pauth.c
> new file mode 100644
> index 000000000000..a3a5c404375b
> --- /dev/null
> +++ b/arch/arm64/kvm/pauth.c
> @@ -0,0 +1,196 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright (C) 2024 - Google LLC
> + * Author: Marc Zyngier <maz@kernel.org>
> + *
> + * Primitive PAuth emulation for ERETAA/ERETAB.
> + *
> + * This code assumes that it is run from EL2, and that it is part of
> + * the emulation of ERETAx for a guest hypervisor. That's a lot of
> + * baked-in assumptions and shortcuts.
> + *
> + * Do not reuse for anything else!
> + */
> +
> +#include <linux/kvm_host.h>
> +
> +#include <asm/kvm_emulate.h>
> +#include <asm/pointer_auth.h>
> +
> +static u64 compute_pac(struct kvm_vcpu *vcpu, u64 ptr,
> +		       struct ptrauth_key ikey)
> +{
> +	struct ptrauth_key gkey;
> +	u64 mod, pac = 0;
> +
> +	preempt_disable();
> +
> +	if (!vcpu_get_flag(vcpu, SYSREGS_ON_CPU))
> +		mod = __vcpu_sys_reg(vcpu, SP_EL2);
> +	else
> +		mod = read_sysreg(sp_el1);
> +
> +	gkey.lo = read_sysreg_s(SYS_APGAKEYLO_EL1);
> +	gkey.hi = read_sysreg_s(SYS_APGAKEYHI_EL1);
> +
> +	__ptrauth_key_install_nosync(APGA, ikey);
> +	isb();
> +
> +	asm volatile(ARM64_ASM_PREAMBLE ".arch_extension pauth\n"
> +		     "pacga %0, %1, %2" : "=r" (pac) : "r" (ptr), "r" (mod));

To use `pacga`, we require that the Address authentication and Generic
authentication use the same algorithm, right?

There doesn't seem to be a check for that up front. There is kinda a check for
that if the PAC doesn't match (by kvm_has_pauth()).

I'm just pointing out that this looks a little odd to me, to get your input,
not sure if there's actually anything wrong here.

> +	isb();
> +
> +	__ptrauth_key_install_nosync(APGA, gkey);
> +
> +	preempt_enable();
> +
> +	/* PAC in the top 32bits */
> +	return pac;
> +}
> +
> +static bool effective_tbi(struct kvm_vcpu *vcpu, bool bit55)
> +{
> +	u64 tcr = vcpu_read_sys_reg(vcpu, TCR_EL2);
> +	bool tbi, tbid;
> +
> +	/*
> +	 * Since we are authenticating an instruction address, we have
> +	 * to take TBID into account. If E2H==0, ignore VA[55], as
> +	 * TCR_EL2 only has a single TBI/TBID. If VA[55] was set in
> +	 * this case, this is likely a guest bug...
> +	 */
> +	if (!vcpu_el2_e2h_is_set(vcpu)) {
> +		tbi = tcr & BIT(20);
> +		tbid = tcr & BIT(29);
> +	} else if (bit55) {
> +		tbi = tcr & TCR_TBI1;
> +		tbid = tcr & TCR_TBID1;
> +	} else {
> +		tbi = tcr & TCR_TBI0;
> +		tbid = tcr & TCR_TBID0;
> +	}
> +
> +	return tbi && !tbid;
> +}
> +
> +static int compute_bottom_pac(struct kvm_vcpu *vcpu, bool bit55)
> +{
> +	static const int maxtxsz = 39; // Revisit these two values once
> +	static const int mintxsz = 16; // (if) we support TTST/LVA/LVA2
> +	u64 tcr = vcpu_read_sys_reg(vcpu, TCR_EL2);
> +	int txsz;
> +
> +	if (!vcpu_el2_e2h_is_set(vcpu) || !bit55)
> +		txsz = FIELD_GET(TCR_T0SZ_MASK, tcr);
> +	else
> +		txsz = FIELD_GET(TCR_T1SZ_MASK, tcr);
> +
> +	return 64 - clamp(txsz, mintxsz, maxtxsz);
> +}
> +
> +static u64 compute_pac_mask(struct kvm_vcpu *vcpu, bool bit55)
> +{
> +	int bottom_pac;
> +	u64 mask;
> +
> +	bottom_pac = compute_bottom_pac(vcpu, bit55);
> +
> +	mask = GENMASK(54, bottom_pac);
> +	if (!effective_tbi(vcpu, bit55))
> +		mask |= GENMASK(63, 56);
> +
> +	return mask;
> +}
> +
> +static u64 to_canonical_addr(struct kvm_vcpu *vcpu, u64 ptr, u64 mask)
> +{
> +	bool bit55 = !!(ptr & BIT(55));
> +
> +	if (bit55)
> +		return ptr | mask;
> +
> +	return ptr & ~mask;
> +}
> +
> +static u64 corrupt_addr(struct kvm_vcpu *vcpu, u64 ptr)
> +{
> +	bool bit55 = !!(ptr & BIT(55));
> +	u64 mask, error_code;
> +	int shift;
> +
> +	if (effective_tbi(vcpu, bit55)) {
> +		mask = GENMASK(54, 53);
> +		shift = 53;
> +	} else {
> +		mask = GENMASK(62, 61);
> +		shift = 61;
> +	}
> +
> +	if (esr_iss_is_eretab(kvm_vcpu_get_esr(vcpu)))
> +		error_code = 2 << shift;
> +	else
> +		error_code = 1 << shift;
> +
> +	ptr &= ~mask;
> +	ptr |= error_code;
> +
> +	return ptr;
> +}
> +
> +/*
> + * Authenticate an ERETAA/ERETAB instruction, returning true if the
> + * authentication succeeded and false otherwise. In all cases, *elr
> + * contains the VA to ERET to. Potential exception injection is left
> + * to the caller.
> + */
> +bool kvm_auth_eretax(struct kvm_vcpu *vcpu, u64 *elr)
> +{
> +	u64 sctlr = vcpu_read_sys_reg(vcpu, SCTLR_EL2);
> +	u64 esr = kvm_vcpu_get_esr(vcpu);
> +	u64 ptr, cptr, pac, mask;
> +	struct ptrauth_key ikey;
> +
> +	*elr = ptr = vcpu_read_sys_reg(vcpu, ELR_EL2);
> +
> +	/* We assume we're already in the context of an ERETAx */
> +	if (esr_iss_is_eretab(esr)) {
> +		if (!(sctlr & SCTLR_EL1_EnIB))
> +			return true;
> +
> +		ikey.lo = __vcpu_sys_reg(vcpu, APIBKEYLO_EL1);
> +		ikey.hi = __vcpu_sys_reg(vcpu, APIBKEYHI_EL1);
> +	} else {
> +		if (!(sctlr & SCTLR_EL1_EnIA))
> +			return true;
> +
> +		ikey.lo = __vcpu_sys_reg(vcpu, APIAKEYLO_EL1);
> +		ikey.hi = __vcpu_sys_reg(vcpu, APIAKEYHI_EL1);
> +	}
> +
> +	mask = compute_pac_mask(vcpu, !!(ptr & BIT(55)));
> +	cptr = to_canonical_addr(vcpu, ptr, mask);
> +
> +	pac = compute_pac(vcpu, cptr, ikey);
> +
> +	/*
> +	 * Slightly deviate from the pseudocode: if we have a PAC
> +	 * match with the signed pointer, then it must be good.
> +	 * Anything after this point is pure error handling.
> +	 */
> +	if ((pac & mask) == (ptr & mask)) {
> +		*elr = cptr;
> +		return true;
> +	}
> +
> +	/*
> +	 * Authentication failed, corrupt the canonical address if
> +	 * PAuth2 isn't implemented, or some XORing if it is.
> +	 */
> +	if (!kvm_has_pauth(vcpu->kvm, PAuth2))
> +		cptr = corrupt_addr(vcpu, cptr);
> +	else
> +		cptr = ptr ^ (pac & mask);
> +
> +	*elr = cptr;
> +	return false;
> +}

Thanks,
Joey


* Re: [PATCH v2 11/13] KVM: arm64: nv: Add emulation for ERETAx instructions
  2024-03-07 13:39   ` Joey Gouly
@ 2024-03-07 14:24     ` Marc Zyngier
  0 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2024-03-07 14:24 UTC (permalink / raw)
  To: Joey Gouly
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Thu, 07 Mar 2024 13:39:12 +0000,
Joey Gouly <joey.gouly@arm.com> wrote:
> 
> Note that this is my first time looking at PAuth.

I'm sorry! ;-)

> 
> On Mon, Feb 26, 2024 at 10:05:59AM +0000, Marc Zyngier wrote:
> > FEAT_NV has the interesting property of relying on ERET being
> > trapped. An added complexity is that it also traps ERETAA and
> > ERETAB, meaning that the Pointer Authentication aspect of these
> > instructions must be emulated.
> > 
> > Add an emulation of Pointer Authentication, limited to ERETAx
> > (always using SP_EL2 as the modifier and ELR_EL2 as the pointer),
> > using the Generic Authentication instructions.
> > 
> > The emulation, however small, is placed in its own compilation
> > unit so that it can be avoided if the configuration doesn't
> > include it (or the toolchain is not up to the task).
> > 
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_nested.h    |  12 ++
> >  arch/arm64/include/asm/pgtable-hwdef.h |   1 +
> >  arch/arm64/kvm/Makefile                |   1 +
> >  arch/arm64/kvm/pauth.c                 | 196 +++++++++++++++++++++++++
> >  4 files changed, 210 insertions(+)
> >  create mode 100644 arch/arm64/kvm/pauth.c
> > 
> > diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
> > index dbc4e3a67356..5e0ab0596246 100644
> > --- a/arch/arm64/include/asm/kvm_nested.h
> > +++ b/arch/arm64/include/asm/kvm_nested.h
> > @@ -64,4 +64,16 @@ extern bool forward_smc_trap(struct kvm_vcpu *vcpu);
> >  
> >  int kvm_init_nv_sysregs(struct kvm *kvm);
> >  
> > +#ifdef CONFIG_ARM64_PTR_AUTH
> > +bool kvm_auth_eretax(struct kvm_vcpu *vcpu, u64 *elr);
> > +#else
> > +static inline bool kvm_auth_eretax(struct kvm_vcpu *vcpu, u64 *elr)
> > +{
> > +	/* We really should never execute this... */
> > +	WARN_ON_ONCE(1);
> > +	*elr = 0xbad9acc0debadbad;
> > +	return false;
> > +}
> > +#endif
> > +
> >  #endif /* __ARM64_KVM_NESTED_H */
> > diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
> > index e4944d517c99..bb88e9ef6296 100644
> > --- a/arch/arm64/include/asm/pgtable-hwdef.h
> > +++ b/arch/arm64/include/asm/pgtable-hwdef.h
> > @@ -277,6 +277,7 @@
> >  #define TCR_TBI1		(UL(1) << 38)
> >  #define TCR_HA			(UL(1) << 39)
> >  #define TCR_HD			(UL(1) << 40)
> > +#define TCR_TBID0		(UL(1) << 51)
> >  #define TCR_TBID1		(UL(1) << 52)
> >  #define TCR_NFD0		(UL(1) << 53)
> >  #define TCR_NFD1		(UL(1) << 54)
> > diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
> > index c0c050e53157..04882b577575 100644
> > --- a/arch/arm64/kvm/Makefile
> > +++ b/arch/arm64/kvm/Makefile
> > @@ -23,6 +23,7 @@ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
> >  	 vgic/vgic-its.o vgic/vgic-debug.o
> >  
> >  kvm-$(CONFIG_HW_PERF_EVENTS)  += pmu-emul.o pmu.o
> > +kvm-$(CONFIG_ARM64_PTR_AUTH)  += pauth.o
> >  
> >  always-y := hyp_constants.h hyp-constants.s
> >  
> > diff --git a/arch/arm64/kvm/pauth.c b/arch/arm64/kvm/pauth.c
> > new file mode 100644
> > index 000000000000..a3a5c404375b
> > --- /dev/null
> > +++ b/arch/arm64/kvm/pauth.c
> > @@ -0,0 +1,196 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +/*
> > + * Copyright (C) 2024 - Google LLC
> > + * Author: Marc Zyngier <maz@kernel.org>
> > + *
> > + * Primitive PAuth emulation for ERETAA/ERETAB.
> > + *
> > + * This code assumes that it is run from EL2, and that it is part of
> > + * the emulation of ERETAx for a guest hypervisor. That's a lot of
> > + * baked-in assumptions and shortcuts.
> > + *
> > + * Do not reuse for anything else!
> > + */
> > +
> > +#include <linux/kvm_host.h>
> > +
> > +#include <asm/kvm_emulate.h>
> > +#include <asm/pointer_auth.h>
> > +
> > +static u64 compute_pac(struct kvm_vcpu *vcpu, u64 ptr,
> > +		       struct ptrauth_key ikey)
> > +{
> > +	struct ptrauth_key gkey;
> > +	u64 mod, pac = 0;
> > +
> > +	preempt_disable();
> > +
> > +	if (!vcpu_get_flag(vcpu, SYSREGS_ON_CPU))
> > +		mod = __vcpu_sys_reg(vcpu, SP_EL2);
> > +	else
> > +		mod = read_sysreg(sp_el1);
> > +
> > +	gkey.lo = read_sysreg_s(SYS_APGAKEYLO_EL1);
> > +	gkey.hi = read_sysreg_s(SYS_APGAKEYHI_EL1);
> > +
> > +	__ptrauth_key_install_nosync(APGA, ikey);
> > +	isb();
> > +
> > +	asm volatile(ARM64_ASM_PREAMBLE ".arch_extension pauth\n"
> > +		     "pacga %0, %1, %2" : "=r" (pac) : "r" (ptr), "r" (mod));
> 
> To use `pacga`, we require that the Address authentication and Generic
> authentication use the same algorithm, right?

Indeed. It is a strong requirement, and if we don't have that, nothing
works.

Or rather, emulating ERETAx becomes so bloody complicated it isn't
funny: you need to somehow to replay the auth in the context of a
guest so that all of TCR_EL2, SP_EL2, SCTLR_EL2, HCR_EL2, ELR_EL2 are
correctly set, get the value back, and compare it to the one you are
trying to authenticate.

Not happening! Mark and I talked about it ages ago and concluded it
was absolutely insane. PACGA allows us to sidestep the whole thing and
reconstruct the PAC like the pseudocode does using the ComputePAC()
function.
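
To spell the mapping out (paraphrasing the ARM ARM pseudocode from
memory, so double-check against the real thing): authenticating
ELR_EL2 for ERETAA boils down to

	// AuthIA(ELR_EL2, SP_EL2), in ComputePAC() terms:
	pac = ComputePAC(canonical(ELR_EL2), SP_EL2, APIAKeyHi, APIAKeyLo);
	ok  = (pac & pac_mask) == (ELR_EL2 & pac_mask);

and PACGA is the one instruction that hands ComputePAC() straight
back to software (PAC in the top 32 bits of the destination), which
is why compute_pac() temporarily installs the IA/IB key into APGAKey
and runs PACGA with SP_EL2 as the modifier.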

> There doesn't seem to be a check for that up front. There is kinda a check for
> that if the PAC doesn't match (by kvm_has_pauth()).

Indeed. The main issue is that it is really hard to prevent that
unless we forbid it for all KVM guests, not just NV guests. Also, I
don't know of any HW that would implement two different auth
methods (which would be really bizarre from an implementation
perspective).

I'm happy to harden system_has_full_ptr_auth() in that case, which
would do the trick.
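
Something along these lines, maybe (untested sketch, with the ID
register field positions written out by hand, so don't take the
masks as gospel):

	/* Address and generic authentication must share an algorithm */
	static bool system_has_matching_auth_algo(void)
	{
		u64 isar1 = read_sysreg_s(SYS_ID_AA64ISAR1_EL1);
		u64 isar2 = read_sysreg_s(SYS_ID_AA64ISAR2_EL1);

		/* both QARMA5? */
		if (FIELD_GET(GENMASK(7, 4), isar1) &&		/* APA */
		    FIELD_GET(GENMASK(27, 24), isar1))		/* GPA */
			return true;

		/* both IMPLEMENTATION DEFINED? */
		if (FIELD_GET(GENMASK(11, 8), isar1) &&		/* API */
		    FIELD_GET(GENMASK(31, 28), isar1))		/* GPI */
			return true;

		/* both QARMA3? */
		if (FIELD_GET(GENMASK(15, 12), isar2) &&	/* APA3 */
		    FIELD_GET(GENMASK(11, 8), isar2))		/* GPA3 */
			return true;

		return false;
	}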

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


* Re: [PATCH v2 08/13] KVM: arm64: nv: Handle HCR_EL2.{API,APK} independently
  2024-02-26 10:05 ` [PATCH v2 08/13] KVM: arm64: nv: Handle HCR_EL2.{API,APK} independently Marc Zyngier
@ 2024-03-07 15:14   ` Joey Gouly
  2024-03-07 15:58     ` Marc Zyngier
  0 siblings, 1 reply; 28+ messages in thread
From: Joey Gouly @ 2024-03-07 15:14 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Mon, Feb 26, 2024 at 10:05:56AM +0000, Marc Zyngier wrote:
> Although KVM couples API and APK for simplicity, the architecture
> makes no such requirement, and the two can be independently set or
> cleared.
> 
> Check for which of the two possible reasons we have trapped here,
> and if the corresponding L1 control bit isn't set, delegate the
> handling for forwarding.
> 
> Otherwise, set this exact bit in HCR_EL2 and resume the guest.
> Of course, in the non-NV case, we keep setting both bits and
> be done with it. Note that the entry core already saves/restores
> the keys should any of the two control bits be set.
> 
> This results in a bit of rework, and the removal of the (trivial)
> vcpu_ptrauth_enable() helper.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_emulate.h    |  5 ----
>  arch/arm64/kvm/hyp/include/hyp/switch.h | 32 +++++++++++++++++++++----
>  2 files changed, 27 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index debc3753d2ef..d2177bc77844 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -125,11 +125,6 @@ static inline void vcpu_set_wfx_traps(struct kvm_vcpu *vcpu)
>  	vcpu->arch.hcr_el2 |= HCR_TWI;
>  }
>  
> -static inline void vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
> -{
> -	vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
> -}
> -
>  static inline void vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
>  {
>  	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
> diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> index f5f701f309a9..a0908d7a8f56 100644
> --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> @@ -480,11 +480,35 @@ DECLARE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
>  static bool kvm_hyp_handle_ptrauth(struct kvm_vcpu *vcpu, u64 *exit_code)
>  {
>  	struct kvm_cpu_context *ctxt;
> -	u64 val;
> +	u64 enable = 0;
>  
>  	if (!vcpu_has_ptrauth(vcpu))
>  		return false;
>  
> +	/*
> +	 * NV requires us to handle API and APK independently, just in
> +	 * case the hypervisor is totally nuts. Please barf >here<.
> +	 */
> +	if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
> +		switch (ESR_ELx_EC(kvm_vcpu_get_esr(vcpu))) {
> +		case ESR_ELx_EC_PAC:
> +			if (!(__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_API))
> +				return false;
> +
> +			enable |= HCR_API;
> +			break;
> +
> +		case ESR_ELx_EC_SYS64:
> +			if (!(__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_APK))
> +				return false;
> +
> +			enable |= HCR_APK;
> +			break;
> +		}
> +	} else {
> +		enable = HCR_API | HCR_APK;
> +	}
> +
>  	ctxt = this_cpu_ptr(&kvm_hyp_ctxt);
>  	__ptrauth_save_key(ctxt, APIA);
>  	__ptrauth_save_key(ctxt, APIB);
> @@ -492,11 +516,9 @@ static bool kvm_hyp_handle_ptrauth(struct kvm_vcpu *vcpu, u64 *exit_code)
>  	__ptrauth_save_key(ctxt, APDB);
>  	__ptrauth_save_key(ctxt, APGA);
>  
> -	vcpu_ptrauth_enable(vcpu);
>  
> -	val = read_sysreg(hcr_el2);
> -	val |= (HCR_API | HCR_APK);
> -	write_sysreg(val, hcr_el2);
> +	vcpu->arch.hcr_el2 |= enable;
> +	sysreg_clear_set(hcr_el2, 0, enable);
>  
>  	return true;
>  }

A bit of sleuthing tells me you plan to delete kvm_hyp_handle_ptrauth() anyway,
so presumably it makes some sense to put that patch before this to avoid
modifying the code just to delete it!

Thanks,
Joey


* Re: [PATCH v2 08/13] KVM: arm64: nv: Handle HCR_EL2.{API,APK} independently
  2024-03-07 15:14   ` Joey Gouly
@ 2024-03-07 15:58     ` Marc Zyngier
  0 siblings, 0 replies; 28+ messages in thread
From: Marc Zyngier @ 2024-03-07 15:58 UTC (permalink / raw)
  To: Joey Gouly
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Thu, 07 Mar 2024 15:14:54 +0000,
Joey Gouly <joey.gouly@arm.com> wrote:
> 
> On Mon, Feb 26, 2024 at 10:05:56AM +0000, Marc Zyngier wrote:
> > Although KVM couples API and APK for simplicity, the architecture
> > makes no such requirement, and the two can be independently set or
> > cleared.
> > 
> > Check for which of the two possible reasons we have trapped here,
> > and if the corresponding L1 control bit isn't set, delegate the
> > handling for forwarding.
> > 
> > Otherwise, set this exact bit in HCR_EL2 and resume the guest.
> > Of course, in the non-NV case, we keep setting both bits and
> > be done with it. Note that the entry core already saves/restores
> > the keys should any of the two control bits be set.
> > 
> > This results in a bit of rework, and the removal of the (trivial)
> > vcpu_ptrauth_enable() helper.
> > 
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_emulate.h    |  5 ----
> >  arch/arm64/kvm/hyp/include/hyp/switch.h | 32 +++++++++++++++++++++----
> >  2 files changed, 27 insertions(+), 10 deletions(-)
> > 
> > diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> > index debc3753d2ef..d2177bc77844 100644
> > --- a/arch/arm64/include/asm/kvm_emulate.h
> > +++ b/arch/arm64/include/asm/kvm_emulate.h
> > @@ -125,11 +125,6 @@ static inline void vcpu_set_wfx_traps(struct kvm_vcpu *vcpu)
> >  	vcpu->arch.hcr_el2 |= HCR_TWI;
> >  }
> >  
> > -static inline void vcpu_ptrauth_enable(struct kvm_vcpu *vcpu)
> > -{
> > -	vcpu->arch.hcr_el2 |= (HCR_API | HCR_APK);
> > -}
> > -
> >  static inline void vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
> >  {
> >  	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
> > diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > index f5f701f309a9..a0908d7a8f56 100644
> > --- a/arch/arm64/kvm/hyp/include/hyp/switch.h
> > +++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
> > @@ -480,11 +480,35 @@ DECLARE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
> >  static bool kvm_hyp_handle_ptrauth(struct kvm_vcpu *vcpu, u64 *exit_code)
> >  {
> >  	struct kvm_cpu_context *ctxt;
> > -	u64 val;
> > +	u64 enable = 0;
> >  
> >  	if (!vcpu_has_ptrauth(vcpu))
> >  		return false;
> >  
> > +	/*
> > +	 * NV requires us to handle API and APK independently, just in
> > +	 * case the hypervisor is totally nuts. Please barf >here<.
> > +	 */
> > +	if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)) {
> > +		switch (ESR_ELx_EC(kvm_vcpu_get_esr(vcpu))) {
> > +		case ESR_ELx_EC_PAC:
> > +			if (!(__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_API))
> > +				return false;
> > +
> > +			enable |= HCR_API;
> > +			break;
> > +
> > +		case ESR_ELx_EC_SYS64:
> > +			if (!(__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_APK))
> > +				return false;
> > +
> > +			enable |= HCR_APK;
> > +			break;
> > +		}
> > +	} else {
> > +		enable = HCR_API | HCR_APK;
> > +	}
> > +
> >  	ctxt = this_cpu_ptr(&kvm_hyp_ctxt);
> >  	__ptrauth_save_key(ctxt, APIA);
> >  	__ptrauth_save_key(ctxt, APIB);
> > @@ -492,11 +516,9 @@ static bool kvm_hyp_handle_ptrauth(struct kvm_vcpu *vcpu, u64 *exit_code)
> >  	__ptrauth_save_key(ctxt, APDB);
> >  	__ptrauth_save_key(ctxt, APGA);
> >  
> > -	vcpu_ptrauth_enable(vcpu);
> >  
> > -	val = read_sysreg(hcr_el2);
> > -	val |= (HCR_API | HCR_APK);
> > -	write_sysreg(val, hcr_el2);
> > +	vcpu->arch.hcr_el2 |= enable;
> > +	sysreg_clear_set(hcr_el2, 0, enable);
> >  
> >  	return true;
> >  }
> 
> A bit of sleuthing tells me you plan to delete kvm_hyp_handle_ptrauth() anyway,
> so presumably it makes some sense to put that patch before this to avoid
> modifying the code just to delete it!

Well, I haven't posted that patch yet (soon!), but it is also
important to show how these things interact overall. *if* we agree
that there is no point in the current approach, then I'll squash the
two.

But there is a lot to be said about:

- discussion on the list first
- minimal changes to track regressions

So I think there is still value in reviewing this patch on its own!

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


* Re: [PATCH v2 11/13] KVM: arm64: nv: Add emulation for ERETAx instructions
  2024-02-26 10:05 ` [PATCH v2 11/13] KVM: arm64: nv: Add emulation for ERETAx instructions Marc Zyngier
  2024-03-07 13:39   ` Joey Gouly
@ 2024-03-08 17:20   ` Joey Gouly
  2024-03-08 17:54     ` Marc Zyngier
  1 sibling, 1 reply; 28+ messages in thread
From: Joey Gouly @ 2024-03-08 17:20 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

Phew..

On Mon, Feb 26, 2024 at 10:05:59AM +0000, Marc Zyngier wrote:
> FEAT_NV has the interesting property of relying on ERET being
> trapped. An added complexity is that it also traps ERETAA and
> ERETAB, meaning that the Pointer Authentication aspect of these
> instructions must be emulated.
> 
> Add an emulation of Pointer Authentication, limited to ERETAx
> (always using SP_EL2 as the modifier and ELR_EL2 as the pointer),
> using the Generic Authentication instructions.
> 
> The emulation, however small, is placed in its own compilation
> unit so that it can be avoided if the configuration doesn't
> include it (or the toolchain is not up to the task).
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_nested.h    |  12 ++
>  arch/arm64/include/asm/pgtable-hwdef.h |   1 +
>  arch/arm64/kvm/Makefile                |   1 +
>  arch/arm64/kvm/pauth.c                 | 196 +++++++++++++++++++++++++
>  4 files changed, 210 insertions(+)
>  create mode 100644 arch/arm64/kvm/pauth.c
> 
> diff --git a/arch/arm64/include/asm/kvm_nested.h b/arch/arm64/include/asm/kvm_nested.h
> index dbc4e3a67356..5e0ab0596246 100644
> --- a/arch/arm64/include/asm/kvm_nested.h
> +++ b/arch/arm64/include/asm/kvm_nested.h
> @@ -64,4 +64,16 @@ extern bool forward_smc_trap(struct kvm_vcpu *vcpu);
>  
>  int kvm_init_nv_sysregs(struct kvm *kvm);
>  
> +#ifdef CONFIG_ARM64_PTR_AUTH
> +bool kvm_auth_eretax(struct kvm_vcpu *vcpu, u64 *elr);
> +#else
> +static inline bool kvm_auth_eretax(struct kvm_vcpu *vcpu, u64 *elr)
> +{
> +	/* We really should never execute this... */
> +	WARN_ON_ONCE(1);
> +	*elr = 0xbad9acc0debadbad;
> +	return false;
> +}
> +#endif
> +
>  #endif /* __ARM64_KVM_NESTED_H */
> diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
> index e4944d517c99..bb88e9ef6296 100644
> --- a/arch/arm64/include/asm/pgtable-hwdef.h
> +++ b/arch/arm64/include/asm/pgtable-hwdef.h
> @@ -277,6 +277,7 @@
>  #define TCR_TBI1		(UL(1) << 38)
>  #define TCR_HA			(UL(1) << 39)
>  #define TCR_HD			(UL(1) << 40)
> +#define TCR_TBID0		(UL(1) << 51)
>  #define TCR_TBID1		(UL(1) << 52)
>  #define TCR_NFD0		(UL(1) << 53)
>  #define TCR_NFD1		(UL(1) << 54)
> diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
> index c0c050e53157..04882b577575 100644
> --- a/arch/arm64/kvm/Makefile
> +++ b/arch/arm64/kvm/Makefile
> @@ -23,6 +23,7 @@ kvm-y += arm.o mmu.o mmio.o psci.o hypercalls.o pvtime.o \
>  	 vgic/vgic-its.o vgic/vgic-debug.o
>  
>  kvm-$(CONFIG_HW_PERF_EVENTS)  += pmu-emul.o pmu.o
> +kvm-$(CONFIG_ARM64_PTR_AUTH)  += pauth.o
>  
>  always-y := hyp_constants.h hyp-constants.s
>  
> diff --git a/arch/arm64/kvm/pauth.c b/arch/arm64/kvm/pauth.c
> new file mode 100644
> index 000000000000..a3a5c404375b
> --- /dev/null
> +++ b/arch/arm64/kvm/pauth.c
> @@ -0,0 +1,196 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Copyright (C) 2024 - Google LLC
> + * Author: Marc Zyngier <maz@kernel.org>
> + *
> + * Primitive PAuth emulation for ERETAA/ERETAB.
> + *
> + * This code assumes that it is run from EL2, and that it is part of
> + * the emulation of ERETAx for a guest hypervisor. That's a lot of
> + * baked-in assumptions and shortcuts.
> + *
> + * Do not reuse for anything else!
> + */
> +
> +#include <linux/kvm_host.h>
> +
> +#include <asm/kvm_emulate.h>
> +#include <asm/pointer_auth.h>
> +
> +static u64 compute_pac(struct kvm_vcpu *vcpu, u64 ptr,
> +		       struct ptrauth_key ikey)
> +{
> +	struct ptrauth_key gkey;
> +	u64 mod, pac = 0;
> +
> +	preempt_disable();
> +
> +	if (!vcpu_get_flag(vcpu, SYSREGS_ON_CPU))
> +		mod = __vcpu_sys_reg(vcpu, SP_EL2);
> +	else
> +		mod = read_sysreg(sp_el1);
> +
> +	gkey.lo = read_sysreg_s(SYS_APGAKEYLO_EL1);
> +	gkey.hi = read_sysreg_s(SYS_APGAKEYHI_EL1);
> +
> +	__ptrauth_key_install_nosync(APGA, ikey);
> +	isb();
> +
> +	asm volatile(ARM64_ASM_PREAMBLE ".arch_extension pauth\n"
> +		     "pacga %0, %1, %2" : "=r" (pac) : "r" (ptr), "r" (mod));
> +	isb();
> +
> +	__ptrauth_key_install_nosync(APGA, gkey);
> +
> +	preempt_enable();
> +
> +	/* PAC in the top 32bits */
> +	return pac;
> +}
> +
> +static bool effective_tbi(struct kvm_vcpu *vcpu, bool bit55)
> +{
> +	u64 tcr = vcpu_read_sys_reg(vcpu, TCR_EL2);
> +	bool tbi, tbid;
> +
> +	/*
> +	 * Since we are authenticating an instruction address, we have
> +	 * to take TBID into account. If E2H==0, ignore VA[55], as
> +	 * TCR_EL2 only has a single TBI/TBID. If VA[55] was set in
> +	 * this case, this is likely a guest bug...
> +	 */
> +	if (!vcpu_el2_e2h_is_set(vcpu)) {
> +		tbi = tcr & BIT(20);
> +		tbid = tcr & BIT(29);
> +	} else if (bit55) {
> +		tbi = tcr & TCR_TBI1;
> +		tbid = tcr & TCR_TBID1;
> +	} else {
> +		tbi = tcr & TCR_TBI0;
> +		tbid = tcr & TCR_TBID0;
> +	}
> +
> +	return tbi && !tbid;
> +}
> +
> +static int compute_bottom_pac(struct kvm_vcpu *vcpu, bool bit55)
> +{
> +	static const int maxtxsz = 39; // Revisit these two values once
> +	static const int mintxsz = 16; // (if) we support TTST/LVA/LVA2
> +	u64 tcr = vcpu_read_sys_reg(vcpu, TCR_EL2);
> +	int txsz;
> +
> +	if (!vcpu_el2_e2h_is_set(vcpu) || !bit55)
> +		txsz = FIELD_GET(TCR_T0SZ_MASK, tcr);
> +	else
> +		txsz = FIELD_GET(TCR_T1SZ_MASK, tcr);
> +
> +	return 64 - clamp(txsz, mintxsz, maxtxsz);
> +}
> +
> +static u64 compute_pac_mask(struct kvm_vcpu *vcpu, bool bit55)
> +{
> +	int bottom_pac;
> +	u64 mask;
> +
> +	bottom_pac = compute_bottom_pac(vcpu, bit55);
> +
> +	mask = GENMASK(54, bottom_pac);
> +	if (!effective_tbi(vcpu, bit55))
> +		mask |= GENMASK(63, 56);
> +
> +	return mask;
> +}
> +
> +static u64 to_canonical_addr(struct kvm_vcpu *vcpu, u64 ptr, u64 mask)
> +{
> +	bool bit55 = !!(ptr & BIT(55));
> +
> +	if (bit55)
> +		return ptr | mask;
> +
> +	return ptr & ~mask;
> +}
> +
> +static u64 corrupt_addr(struct kvm_vcpu *vcpu, u64 ptr)
> +{
> +	bool bit55 = !!(ptr & BIT(55));
> +	u64 mask, error_code;
> +	int shift;
> +
> +	if (effective_tbi(vcpu, bit55)) {
> +		mask = GENMASK(54, 53);
> +		shift = 53;
> +	} else {
> +		mask = GENMASK(62, 61);
> +		shift = 61;
> +	}
> +
> +	if (esr_iss_is_eretab(kvm_vcpu_get_esr(vcpu)))
> +		error_code = 2 << shift;
> +	else
> +		error_code = 1 << shift;
> +
> +	ptr &= ~mask;
> +	ptr |= error_code;
> +
> +	return ptr;
> +}
> +
> +/*
> + * Authenticate an ERETAA/ERETAB instruction, returning true if the
> + * authentication succeeded and false otherwise. In all cases, *elr
> + * contains the VA to ERET to. Potential exception injection is left
> + * to the caller.
> + */
> +bool kvm_auth_eretax(struct kvm_vcpu *vcpu, u64 *elr)
> +{
> +	u64 sctlr = vcpu_read_sys_reg(vcpu, SCTLR_EL2);
> +	u64 esr = kvm_vcpu_get_esr(vcpu);
> +	u64 ptr, cptr, pac, mask;
> +	struct ptrauth_key ikey;
> +
> +	*elr = ptr = vcpu_read_sys_reg(vcpu, ELR_EL2);
> +
> +	/* We assume we're already in the context of an ERETAx */
> +	if (esr_iss_is_eretab(esr)) {
> +		if (!(sctlr & SCTLR_EL1_EnIB))
> +			return true;
> +
> +		ikey.lo = __vcpu_sys_reg(vcpu, APIBKEYLO_EL1);
> +		ikey.hi = __vcpu_sys_reg(vcpu, APIBKEYHI_EL1);
> +	} else {
> +		if (!(sctlr & SCTLR_EL1_EnIA))
> +			return true;
> +
> +		ikey.lo = __vcpu_sys_reg(vcpu, APIAKEYLO_EL1);
> +		ikey.hi = __vcpu_sys_reg(vcpu, APIAKEYHI_EL1);
> +	}
> +
> +	mask = compute_pac_mask(vcpu, !!(ptr & BIT(55)));
> +	cptr = to_canonical_addr(vcpu, ptr, mask);
> +
> +	pac = compute_pac(vcpu, cptr, ikey);
> +
> +	/*
> +	 * Slightly deviate from the pseudocode: if we have a PAC
> +	 * match with the signed pointer, then it must be good.
> +	 * Anything after this point is pure error handling.
> +	 */
> +	if ((pac & mask) == (ptr & mask)) {
> +		*elr = cptr;
> +		return true;
> +	}
> +
> +	/*
> +	 * Authentication failed, corrupt the canonical address if
> +	 * PAuth2 isn't implemented, or some XORing if it is.
> +	 */
> +	if (!kvm_has_pauth(vcpu->kvm, PAuth2))
> +		cptr = corrupt_addr(vcpu, cptr);
> +	else
> +		cptr = ptr ^ (pac & mask);
> +
> +	*elr = cptr;
> +	return false;
> +}

Each function in this file is quite small, but there's certainly a lot of
complexity and background knowledge required to understand them!

I spent quite some time on each part to see if it matches what I understood
from the Arm ARM.
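
For my own sanity, I worked one case through by hand (my numbers, not
from the patch, so shout if something looks off): a VHE guest
hypervisor with TCR_EL2.T1SZ = 25 (39-bit upper range), TBI1 = 0,
authenticating a pointer with bit55 == 1:

	bottom_pac = 64 - clamp(25, 16, 39);	/* = 39 */
	mask  = GENMASK(54, 39);		/* PAC body */
	mask |= GENMASK(63, 56);		/* no effective TBI */

so the PAC sits in bits [63:56] and [54:39], and to_canonical_addr()
ORs all of those bits back to 1 (bit55 being set) before the PAC is
recomputed, which matches what I'd expect from AddPACIA()/AuthIA().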

Reviewed-by: Joey Gouly <joey.gouly@arm.com>


A side note / thing I considered. KVM doesn't currently handle ERET exceptions
from EL1.

1. If an ERETA{A,B} were executed from a nested EL1 guest, that would be
trapped up to Host KVM at EL2.

2. kvm_hyp_handle_eret() returns false since it's not from vEL2.  Inside
kvm_handle_eret(), is_hyp_ctxt() is false so the exception is injected into
vEL2 (via kvm_inject_nested_sync()).

3. vEL2 gets the exception, kvm_hyp_handle_eret() returns false as before.
Inside kvm_handle_eret(), is_hyp_ctxt() is also false, so
kvm_inject_nested_sync() is called but now errors out since vcpu_has_nv() is
false.

Is that flow right? Am I missing something?

Thanks,
Joey


* Re: [PATCH v2 11/13] KVM: arm64: nv: Add emulation for ERETAx instructions
  2024-03-08 17:20   ` Joey Gouly
@ 2024-03-08 17:54     ` Marc Zyngier
  2024-03-12 10:46       ` Joey Gouly
  0 siblings, 1 reply; 28+ messages in thread
From: Marc Zyngier @ 2024-03-08 17:54 UTC (permalink / raw)
  To: Joey Gouly
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Fri, 08 Mar 2024 17:20:59 +0000,
Joey Gouly <joey.gouly@arm.com> wrote:
> 
> Phew..

[...]

> Each function in this file is quite small, but there's certainly a lot of
> complexity and background knowledge required to understand them!
> 
> I spent quite some time on each part to see if it matches what I understood
> from the Arm ARM.
> 
> Reviewed-by: Joey Gouly <joey.gouly@arm.com>

Thanks a lot for putting up with it, much appreciated.

> A side note / thing I considered. KVM doesn't currently handle ERET exceptions
> from EL1.

EL1 is ambiguous here. Is that EL1 from the PoV of the guest?

>
> 1. If an ERETA{A,B} were executed from a nested EL1 guest, that would be
> trapped up to Host KVM at EL2.

There are two possibilities for that (assuming EL1 from the PoV of an
L1 guest):

(1) this EL1 guest is itself a guest hypervisor (i.e. we are running
    an L1 guest which itself is using NV and running an L2 which
    itself is a hypervisor). In that case, ERET* would have to be
    trapped to EL2 and re-injected. Note that we do not support NV
    under NV. Yet...

(2) the L2 guest is not a hypervisor (no recursive NV), but the L1
    hypervisor has set HFGITR_EL2.ERET==1. We'd have to re-inject the
    exception into L1, just like in the preceding case.

If neither HCR_EL2.NV nor HFGITR_EL2.ERET is set, then no ERET* gets
trapped at all. Crucially, when running an L2 guest that isn't
itself a hypervisor (no nested NV), we do not trap ERET* at all.
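
Or, restating the condition from patch 7 in code:

	/* An L2 ERET* only traps if the L1 hypervisor asked for it */
	bool trapped = (__vcpu_sys_reg(vcpu, HCR_EL2) & HCR_NV) ||
		       (__vcpu_sys_reg(vcpu, HFGITR_EL2) & HFGITR_EL2_ERET);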

In a way, the NV overhead is mostly when running L1. Once you run L2,
the overhead "vanishes", to some extent (as long as you don't exit,
because that's where the cost is).

> 2. kvm_hyp_handle_eret() returns false since it's not from vEL2.  Inside
> kvm_handle_eret(), is_hyp_ctxt() is false so the exception is injected into
> vEL2 (via kvm_inject_nested_sync()).
> 
> 3. vEL2 gets the exception, kvm_hyp_handle_eret() returns false as before.
> Inside kvm_handle_eret(), is_hyp_ctxt() is also false, so
> kvm_inject_nested_sync() is called but now errors out since vcpu_has_nv() is
> false.
> 
> Is that flow right? Am I missing something?

I'm not sure. The cases where ERET gets trapped are really limited to
the above two cases.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.


* Re: [PATCH v2 11/13] KVM: arm64: nv: Add emulation for ERETAx instructions
  2024-03-08 17:54     ` Marc Zyngier
@ 2024-03-12 10:46       ` Joey Gouly
  0 siblings, 0 replies; 28+ messages in thread
From: Joey Gouly @ 2024-03-12 10:46 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Fri, Mar 08, 2024 at 05:54:20PM +0000, Marc Zyngier wrote:
> On Fri, 08 Mar 2024 17:20:59 +0000,
> Joey Gouly <joey.gouly@arm.com> wrote:
> > 
> > Phew..
> 
> [...]
> 
> > Each function in this file is quite small, but there's certainly a lot of
> > complexity and background knowledge required to understand them!
> > 
> > I spent quite some time on each part to see if it matches what I understood
> > from the Arm ARM.
> > 
> > Reviewed-by: Joey Gouly <joey.gouly@arm.com>
> 
> Thanks a lot for putting up with it, much appreciated.
> 
> > A side note / thing I considered. KVM doesn't currently handle ERET exceptions
> > from EL1.
> 
> EL1 is ambiguous here. Is that EL1 from the PoV of the guest?

Yes, I meant an EL1 guest (not vEL2).

> 
> >
> > 1. If an ERETA{A,B} were executed from a nested EL1 guest, that would be
> > trapped up to Host KVM at EL2.
> 
> There are two possibilities for that (assuming EL1 from the PoV of an
> L1 guest):
> 
> (1) this EL1 guest is itself a guest hypervisor (i.e. we are running
>     an L1 guest which itself is using NV and running an L2 which
>     itself is a hypervisor). In that case, ERET* would have to be
>     trapped to EL2 and re-injected. Note that we do not support NV
>     under NV. Yet...
> 
> (2) the L2 guest is not a hypervisor (no recursive NV), but the L1
>     hypervisor has set HFGITR_EL2.ERET==1. We'd have to re-inject the
>     exception into L1, just like in the preceding case.
> 
> If neither HCR_EL2.NV nor HFGITR_EL2.ERET is set, then no ERET* gets
> trapped at all. Crucially, when running an L2 guest that isn't
> itself a hypervisor (no nested NV), we do not trap ERET* at all.

That was the missing part. __compute_hcr() only adds HCR_EL2.NV when
is_hyp_ctxt() is true. When I conjured up this scenario, I had HCR_EL2.NV set
(in my head) for the L2 EL1 guest, which is not the case.
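
Something like this, in sketch form (not the literal body of
__compute_hcr(), just the part I had wrong in my head; the function
name below is made up):

	static u64 compute_hcr_sketch(struct kvm_vcpu *vcpu)
	{
		u64 hcr = vcpu->arch.hcr_el2;

		if (vcpu_has_nv(vcpu) && is_hyp_ctxt(vcpu))
			hcr |= HCR_NV;	/* plus the other NV trap bits */

		return hcr;
	}

So an L2 EL1 guest never runs with NV=1, and a plain ERET from it
doesn't trap unless L1 has also set HFGITR_EL2.ERET.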

> 
> In a way, the NV overhead is mostly incurred when running L1. Once you
> run L2, the overhead "vanishes", to some extent (as long as you don't
> exit, because that's where the cost is).
> 
> > 2. kvm_hyp_handle_eret() returns false since it's not from vEL2.  Inside
> > kvm_handle_eret(), is_hyp_ctxt() is false so the exception is injected into
> > vEL2 (via kvm_inject_nested_sync()).
> > 
> > 3. vEL2 gets the exception, kvm_hyp_handle_eret() returns false as before.
> > Inside kvm_handle_eret(), is_hyp_ctxt() is also false, so
> > kvm_inject_nested_sync() is called but now errors out since vcpu_has_nv() is
> > false.
> > 
> > Is that flow right? Am I missing something?
> 
> I'm not sure. The cases where ERET gets trapped are really limited to
> the two above.
> 

Thanks for the explanation,

Joey

* Re: [PATCH v2 12/13] KVM: arm64: nv: Handle ERETA[AB] instructions
  2024-02-26 10:06 ` [PATCH v2 12/13] KVM: arm64: nv: Handle ERETA[AB] instructions Marc Zyngier
@ 2024-03-12 11:17   ` Joey Gouly
  0 siblings, 0 replies; 28+ messages in thread
From: Joey Gouly @ 2024-03-12 11:17 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Mon, Feb 26, 2024 at 10:06:00AM +0000, Marc Zyngier wrote:
> Now that we have some emulation in place for ERETA[AB], we can
> plug it into the exception handling machinery.
> 
> As with a bare ERET, an "easy" ERETAx instruction is processed as
> a fixup, while something that requires a translation regime
> transition or an exception delivery is left to the slow path.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/emulate-nested.c | 22 ++++++++++++++++++++--
>  arch/arm64/kvm/handle_exit.c    |  3 ++-
>  arch/arm64/kvm/hyp/vhe/switch.c | 13 +++++++++++--
>  3 files changed, 33 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm64/kvm/emulate-nested.c b/arch/arm64/kvm/emulate-nested.c
> index 63a74c0330f1..72d733c74a38 100644
> --- a/arch/arm64/kvm/emulate-nested.c
> +++ b/arch/arm64/kvm/emulate-nested.c
> @@ -2172,7 +2172,7 @@ static u64 kvm_check_illegal_exception_return(struct kvm_vcpu *vcpu, u64 spsr)
>  
>  void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu)
>  {
> -	u64 spsr, elr;
> +	u64 spsr, elr, esr;
>  
>  	/*
>  	 * Forward this trap to the virtual EL2 if the virtual
> @@ -2181,12 +2181,30 @@ void kvm_emulate_nested_eret(struct kvm_vcpu *vcpu)
>  	if (forward_traps(vcpu, HCR_NV))
>  		return;
>  
> +	/* Check for an ERETAx */
> +	esr = kvm_vcpu_get_esr(vcpu);
> +	if (esr_iss_is_eretax(esr) && !kvm_auth_eretax(vcpu, &elr)) {
> +		/*
> +		 * Oh no, ERETAx failed to authenticate.  If we have
> +		 * FPACCOMBINE, deliver an exception right away.  If we
> +		 * don't, then let the mangled ELR value trickle down the
> +		 * ERET handling, and the guest will have a little surprise.
> +		 */
> +		if (kvm_has_pauth(vcpu->kvm, FPACCOMBINE)) {
> +			esr &= ESR_ELx_ERET_ISS_ERETA;
> +			esr |= FIELD_PREP(ESR_ELx_EC_MASK, ESR_ELx_EC_FPAC);
> +			kvm_inject_nested_sync(vcpu, esr);
> +			return;
> +		}
> +	}
> +
>  	preempt_disable();
>  	kvm_arch_vcpu_put(vcpu);
>  
>  	spsr = __vcpu_sys_reg(vcpu, SPSR_EL2);
>  	spsr = kvm_check_illegal_exception_return(vcpu, spsr);
> -	elr = __vcpu_sys_reg(vcpu, ELR_EL2);
> +	if (!esr_iss_is_eretax(esr))
> +		elr = __vcpu_sys_reg(vcpu, ELR_EL2);
>  
>  	trace_kvm_nested_eret(vcpu, elr, spsr);
>  
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index 1ba2f788b2c3..407bdfbb572b 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -248,7 +248,8 @@ static int kvm_handle_ptrauth(struct kvm_vcpu *vcpu)
>  
>  static int kvm_handle_eret(struct kvm_vcpu *vcpu)
>  {
> -	if (esr_iss_is_eretax(kvm_vcpu_get_esr(vcpu)))
> +	if (esr_iss_is_eretax(kvm_vcpu_get_esr(vcpu)) &&
> +	    !vcpu_has_ptrauth(vcpu))
>  		return kvm_handle_ptrauth(vcpu);
>  
>  	/*
> diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
> index 3ea9bdf6b555..49d36666040e 100644
> --- a/arch/arm64/kvm/hyp/vhe/switch.c
> +++ b/arch/arm64/kvm/hyp/vhe/switch.c
> @@ -208,7 +208,8 @@ void kvm_vcpu_put_vhe(struct kvm_vcpu *vcpu)
>  
>  static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code)
>  {
> -	u64 spsr, mode;
> +	u64 esr = kvm_vcpu_get_esr(vcpu);
> +	u64 spsr, elr, mode;
>  
>  	/*
>  	 * Going through the whole put/load motions is a waste of time
> @@ -242,10 +243,18 @@ static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code)
>  		return false;
>  	}
>  
> +	/* If ERETAx fails, take the slow path */
> +	if (esr_iss_is_eretax(esr)) {
> +		if (!(vcpu_has_ptrauth(vcpu) && kvm_auth_eretax(vcpu, &elr)))
> +			return false;
> +	} else {
> +		elr = read_sysreg_el1(SYS_ELR);
> +	}
> +
>  	spsr = (spsr & ~(PSR_MODE_MASK | PSR_MODE32_BIT)) | mode;
>  
>  	write_sysreg_el2(spsr, SYS_SPSR);
> -	write_sysreg_el2(read_sysreg_el1(SYS_ELR), SYS_ELR);
> +	write_sysreg_el2(elr, SYS_ELR);
>  
>  	return true;
>  }
> 
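
For my own notes, the two resulting ERETAx paths condensed into one
place -- a sketch of my reading of the patch, not the literal code:

	/* fast path (vhe/switch.c): only a successful auth is
	 * handled inline */
	if (esr_iss_is_eretax(esr) &&
	    !(vcpu_has_ptrauth(vcpu) && kvm_auth_eretax(vcpu, &elr)))
		return false;	/* punt to the slow path */

	/* slow path (emulate-nested.c): a failed auth with
	 * FPACCOMBINE turns into an FPAC exception in vEL2;
	 * without it, the mangled ELR is used as-is */
	if (esr_iss_is_eretax(esr) && !kvm_auth_eretax(vcpu, &elr) &&
	    kvm_has_pauth(vcpu->kvm, FPACCOMBINE)) {
		esr &= ESR_ELx_ERET_ISS_ERETA;
		esr |= FIELD_PREP(ESR_ELx_EC_MASK, ESR_ELx_EC_FPAC);
		kvm_inject_nested_sync(vcpu, esr);
		return;
	}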

Reviewed-by: Joey Gouly <joey.gouly@arm.com>

Thanks,
Joey

* Re: [PATCH v2 13/13] KVM: arm64: nv: Advertise support for PAuth
  2024-02-26 10:06 ` [PATCH v2 13/13] KVM: arm64: nv: Advertise support for PAuth Marc Zyngier
@ 2024-03-12 11:21   ` Joey Gouly
  0 siblings, 0 replies; 28+ messages in thread
From: Joey Gouly @ 2024-03-12 11:21 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, kvm, linux-arm-kernel, James Morse, Suzuki K Poulose,
	Oliver Upton, Zenghui Yu, Will Deacon, Catalin Marinas

On Mon, Feb 26, 2024 at 10:06:01AM +0000, Marc Zyngier wrote:
> Now that we (hopefully) correctly handle ERETAx, drop the masking
> of the PAuth feature (something that was not even complete, as
> APA3 and GPA3 were still exposed).
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/nested.c | 8 ++------
>  1 file changed, 2 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm64/kvm/nested.c b/arch/arm64/kvm/nested.c
> index ced30c90521a..6813c7c7f00a 100644
> --- a/arch/arm64/kvm/nested.c
> +++ b/arch/arm64/kvm/nested.c
> @@ -35,13 +35,9 @@ static u64 limit_nv_id_reg(u32 id, u64 val)
>  		break;
>  
>  	case SYS_ID_AA64ISAR1_EL1:
> -		/* Support everything but PtrAuth and Spec Invalidation */
> +		/* Support everything but Spec Invalidation */
>  		val &= ~(GENMASK_ULL(63, 56)	|
> -			 NV_FTR(ISAR1, SPECRES)	|
> -			 NV_FTR(ISAR1, GPI)	|
> -			 NV_FTR(ISAR1, GPA)	|
> -			 NV_FTR(ISAR1, API)	|
> -			 NV_FTR(ISAR1, APA));
> +			 NV_FTR(ISAR1, SPECRES));
>  		break;
>  
>  	case SYS_ID_AA64PFR0_EL1:

Reviewed-by: Joey Gouly <joey.gouly@arm.com>

Thanks,
Joey

Thread overview: 28 messages
2024-02-26 10:05 [PATCH v2 00/13] KVM/arm64: Add NV support for ERET and PAuth Marc Zyngier
2024-02-26 10:05 ` [PATCH v2 01/13] KVM: arm64: Harden __ctxt_sys_reg() against out-of-range values Marc Zyngier
2024-02-26 10:05 ` [PATCH v2 02/13] KVM: arm64: Add helpers for ESR_ELx_ERET_ISS_ERET* Marc Zyngier
2024-02-26 10:05 ` [PATCH v2 03/13] KVM: arm64: nv: Drop VCPU_HYP_CONTEXT flag Marc Zyngier
2024-02-26 10:05 ` [PATCH v2 04/13] KVM: arm64: nv: Configure HCR_EL2 for FEAT_NV2 Marc Zyngier
2024-02-26 10:05 ` [PATCH v2 05/13] KVM: arm64: nv: Add trap forwarding for ERET and SMC Marc Zyngier
2024-02-26 10:05 ` [PATCH v2 06/13] KVM: arm64: nv: Fast-track 'InHost' exception returns Marc Zyngier
2024-02-28 16:08   ` Joey Gouly
2024-02-29 13:44     ` Marc Zyngier
2024-02-26 10:05 ` [PATCH v2 07/13] KVM: arm64: nv: Honor HFGITR_EL2.ERET being set Marc Zyngier
2024-03-01 18:07   ` Joey Gouly
2024-03-01 19:14     ` Marc Zyngier
2024-03-01 20:15       ` Joey Gouly
2024-02-26 10:05 ` [PATCH v2 08/13] KVM: arm64: nv: Handle HCR_EL2.{API,APK} independently Marc Zyngier
2024-03-07 15:14   ` Joey Gouly
2024-03-07 15:58     ` Marc Zyngier
2024-02-26 10:05 ` [PATCH v2 09/13] KVM: arm64: nv: Reinject PAC exceptions caused by HCR_EL2.API==0 Marc Zyngier
2024-02-26 10:05 ` [PATCH v2 10/13] KVM: arm64: nv: Add kvm_has_pauth() helper Marc Zyngier
2024-02-26 10:05 ` [PATCH v2 11/13] KVM: arm64: nv: Add emulation for ERETAx instructions Marc Zyngier
2024-03-07 13:39   ` Joey Gouly
2024-03-07 14:24     ` Marc Zyngier
2024-03-08 17:20   ` Joey Gouly
2024-03-08 17:54     ` Marc Zyngier
2024-03-12 10:46       ` Joey Gouly
2024-02-26 10:06 ` [PATCH v2 12/13] KVM: arm64: nv: Handle ERETA[AB] instructions Marc Zyngier
2024-03-12 11:17   ` Joey Gouly
2024-02-26 10:06 ` [PATCH v2 13/13] KVM: arm64: nv: Advertise support for PAuth Marc Zyngier
2024-03-12 11:21   ` Joey Gouly
