linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH v1] arm64: Add TLB Conflict Abort Exception handler to KVM
@ 2025-01-10 17:24 Mikołaj Lenczewski
  2025-01-10 18:49 ` Oliver Upton
  2025-01-15 15:13 ` Marc Zyngier
  0 siblings, 2 replies; 5+ messages in thread
From: Mikołaj Lenczewski @ 2025-01-10 17:24 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel, linux-kernel, ryan.roberts,
	catalin.marinas, mark.rutland, james.morse, will, maz,
	oliver.upton, joey.gouly, suzuki.poulose, yuzenghui
  Cc: Mikołaj Lenczewski

Currently, KVM does not handle the case of a stage 2 TLB conflict abort
exception. This can legitimately occur when the guest is eliding full
BBM semantics as permitted by BBM level 2. In this case it is possible
for a conflict abort to be delivered to EL2. We handle that by
invalidating the full TLB.

The Arm ARM specifies that the worst-case invalidation is either a
`tlbi vmalls12e1` or a `tlbi alle1` (as per DDI0487K section D8.16.3).
We implement `tlbi alle1` by extending the existing
__kvm_flush_vm_context() helper to allow for differentiating between
inner-shareable and cpu-local invalidations.
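
For example, with this change a caller selects the invalidation scope
explicitly:

	__kvm_flush_vm_context(false);	/* tlbi alle1is: broadcast, e.g. VMID rollover */
	__kvm_flush_vm_context(true);	/* tlbi alle1:   cpu-local, e.g. conflict abort */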

This commit applies on top of v6.13-rc2 (fac04efc5c79).

Signed-off-by: Mikołaj Lenczewski <miko.lenczewski@arm.com>
---
 arch/arm64/include/asm/esr.h       |  8 ++++++++
 arch/arm64/include/asm/kvm_asm.h   |  2 +-
 arch/arm64/kvm/hyp/nvhe/hyp-main.c |  2 +-
 arch/arm64/kvm/hyp/nvhe/tlb.c      |  9 +++++++--
 arch/arm64/kvm/hyp/vhe/tlb.c       |  9 +++++++--
 arch/arm64/kvm/mmu.c               | 13 +++++++++++++
 arch/arm64/kvm/vmid.c              |  2 +-
 7 files changed, 38 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index d1b1a33f9a8b..8a66f81ca291 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -121,6 +121,7 @@
 #define ESR_ELx_FSC_SEA_TTW(n)	(0x14 + (n))
 #define ESR_ELx_FSC_SECC	(0x18)
 #define ESR_ELx_FSC_SECC_TTW(n)	(0x1c + (n))
+#define ESR_ELx_FSC_TLBABT	(0x30)
 
 /* Status codes for individual page table levels */
 #define ESR_ELx_FSC_ACCESS_L(n)	(ESR_ELx_FSC_ACCESS + (n))
@@ -464,6 +465,13 @@ static inline bool esr_fsc_is_access_flag_fault(unsigned long esr)
 	       (esr == ESR_ELx_FSC_ACCESS_L(0));
 }
 
+static inline bool esr_fsc_is_tlb_conflict_abort(unsigned long esr)
+{
+	esr = esr & ESR_ELx_FSC;
+
+	return esr == ESR_ELx_FSC_TLBABT;
+}
+
 /* Indicate whether ESR.EC==0x1A is for an ERETAx instruction */
 static inline bool esr_iss_is_eretax(unsigned long esr)
 {
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index ca2590344313..095872af764a 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -222,7 +222,7 @@ DECLARE_KVM_NVHE_SYM(__per_cpu_end);
 DECLARE_KVM_HYP_SYM(__bp_harden_hyp_vecs);
 #define __bp_harden_hyp_vecs	CHOOSE_HYP_SYM(__bp_harden_hyp_vecs)
 
-extern void __kvm_flush_vm_context(void);
+extern void __kvm_flush_vm_context(bool cpu_local);
 extern void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm_s2_mmu *mmu, phys_addr_t ipa,
 				     int level);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 6aa0b13d86e5..f44a7550f4a7 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -195,7 +195,7 @@ static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 
 static void handle___kvm_flush_vm_context(struct kvm_cpu_context *host_ctxt)
 {
-	__kvm_flush_vm_context();
+	__kvm_flush_vm_context(false);
 }
 
 static void handle___kvm_tlb_flush_vmid_ipa(struct kvm_cpu_context *host_ctxt)
diff --git a/arch/arm64/kvm/hyp/nvhe/tlb.c b/arch/arm64/kvm/hyp/nvhe/tlb.c
index 48da9ca9763f..97f749ad63cc 100644
--- a/arch/arm64/kvm/hyp/nvhe/tlb.c
+++ b/arch/arm64/kvm/hyp/nvhe/tlb.c
@@ -261,10 +261,15 @@ void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu)
 	exit_vmid_context(&cxt);
 }
 
-void __kvm_flush_vm_context(void)
+void __kvm_flush_vm_context(bool cpu_local)
 {
 	/* Same remark as in enter_vmid_context() */
 	dsb(ish);
-	__tlbi(alle1is);
+
+	if (cpu_local)
+		__tlbi(alle1);
+	else
+		__tlbi(alle1is);
+
 	dsb(ish);
 }
diff --git a/arch/arm64/kvm/hyp/vhe/tlb.c b/arch/arm64/kvm/hyp/vhe/tlb.c
index 3d50a1bd2bdb..564602fa4d62 100644
--- a/arch/arm64/kvm/hyp/vhe/tlb.c
+++ b/arch/arm64/kvm/hyp/vhe/tlb.c
@@ -213,10 +213,15 @@ void __kvm_flush_cpu_context(struct kvm_s2_mmu *mmu)
 	exit_vmid_context(&cxt);
 }
 
-void __kvm_flush_vm_context(void)
+void __kvm_flush_vm_context(bool cpu_local)
 {
 	dsb(ishst);
-	__tlbi(alle1is);
+
+	if (cpu_local)
+		__tlbi(alle1);
+	else
+		__tlbi(alle1is);
+
 	dsb(ish);
 }
 
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c9d46ad57e52..7c0d97449d23 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1756,6 +1756,19 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 	ipa = fault_ipa = kvm_vcpu_get_fault_ipa(vcpu);
 	is_iabt = kvm_vcpu_trap_is_iabt(vcpu);
 
+	if (esr_fsc_is_tlb_conflict_abort(esr)) {
+
+		/* Architecturally, on a stage 2 TLB conflict abort, we must
+		 * perform either a `tlbi vmalls12e1` or a `tlbi alle1`. Due
+		 * to nesting of VMs, we would have to iterate over all
+		 * flattened VMIDs to clean out a single guest, so we perform
+		 * a `tlbi alle1` instead to save time.
+		 */
+		__kvm_flush_vm_context(true);
+
+		return 1;
+	}
+
 	if (esr_fsc_is_translation_fault(esr)) {
 		/* Beyond sanitised PARange (which is the IPA limit) */
 		if (fault_ipa >= BIT_ULL(get_kvm_ipa_limit())) {
diff --git a/arch/arm64/kvm/vmid.c b/arch/arm64/kvm/vmid.c
index 806223b7022a..d558428fcfed 100644
--- a/arch/arm64/kvm/vmid.c
+++ b/arch/arm64/kvm/vmid.c
@@ -66,7 +66,7 @@ static void flush_context(void)
 	 * the next context-switch, we broadcast TLB flush + I-cache
 	 * invalidation over the inner shareable domain on rollover.
 	 */
-	kvm_call_hyp(__kvm_flush_vm_context);
+	kvm_call_hyp(__kvm_flush_vm_context, false);
 }
 
 static bool check_update_reserved_vmid(u64 vmid, u64 newvmid)
-- 
2.45.2

* Re: [PATCH v1] arm64: Add TLB Conflict Abort Exception handler to KVM
  2025-01-10 17:24 [PATCH v1] arm64: Add TLB Conflict Abort Exception handler to KVM Mikołaj Lenczewski
@ 2025-01-10 18:49 ` Oliver Upton
  2025-01-16 10:41   ` Mikołaj Lenczewski
  2025-01-15 15:13 ` Marc Zyngier
  1 sibling, 1 reply; 5+ messages in thread
From: Oliver Upton @ 2025-01-10 18:49 UTC (permalink / raw)
  To: Mikołaj Lenczewski
  Cc: kvmarm, linux-arm-kernel, linux-kernel, ryan.roberts,
	catalin.marinas, mark.rutland, james.morse, will, maz, joey.gouly,
	suzuki.poulose, yuzenghui

On Fri, Jan 10, 2025 at 05:24:07PM +0000, Mikołaj Lenczewski wrote:
> Currently, KVM does not handle the case of a stage 2 TLB conflict abort
> exception. This can legitimately occurs when the guest is eliding full
> BBM semantics as permitted by BBM level 2. In this case it is possible
> for a confclit abort to be delivered to EL2. We handle that by
> invalidating the full TLB.

typo: conflict

Also, a bit of a nitpick, but mentioning that TLB conflict abort routing
is implementation defined when S2 is enabled is valuable information.

> @@ -1756,6 +1756,19 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
>  	ipa = fault_ipa = kvm_vcpu_get_fault_ipa(vcpu);
>  	is_iabt = kvm_vcpu_trap_is_iabt(vcpu);
>  
> +	if (esr_fsc_is_tlb_conflict_abort(esr)) {
> +
> +		/* Architecturely, at this stage 2 tlb conflict abort, we must
> +		 * either perform a `tlbi vmalls12e1`, or a `tlbi alle1`. Due
> +		 * to nesting of VMs, we would have to iterate all flattened
> +		 * VMIDs to clean out a single guest, so we perform a `tlbi alle1`
> +		 * instead to save time.
> +		 */

I'm not sure I follow this.

At this point we've taken a TLB conflict abort out of a specific
hardware MMU context, and it's unclear to me why a conflict abort in one
stage-2 MMU has any bearing on the other stage-2 MMUs that could be
associated with this guest.

Even in NV, KVM is always responsible for the maintenance of hardware
stage-2 MMUs. So stage-2 TLBI elision in the guest hypervisor should
not lead to a stage-2 TLB conflict abort.

TLBI ALLE1 is a larger hammer than what's actually necessary here. Could
you perhaps introduce a new invalidation routine,
__kvm_tlb_flush_vmid_nsh(), that does a TLBI VMALLS12E1 behind the
scenes?
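
Something along these lines, perhaps (rough, untested sketch modelled on
the existing stage-2 TLBI helpers in arch/arm64/kvm/hyp/*/tlb.c; the exact
enter_vmid_context()/exit_vmid_context() signatures are assumed here):

	void __kvm_tlb_flush_vmid_nsh(struct kvm_s2_mmu *mmu)
	{
		struct tlb_inv_context cxt;

		dsb(nshst);

		/* Switch to the VMID that took the conflict abort */
		enter_vmid_context(mmu, &cxt, true);

		/* Local (non-broadcast) invalidation of stage 1+2 entries */
		__tlbi(vmalls12e1);
		dsb(nsh);
		isb();

		exit_vmid_context(&cxt);
	}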

If an NV guest is playing games at stage-1 across VMIDs then it gets to
suffer the consequences (additional TLB conflict aborts).

-- 
Thanks,
Oliver

* Re: [PATCH v1] arm64: Add TLB Conflict Abort Exception handler to KVM
  2025-01-10 17:24 [PATCH v1] arm64: Add TLB Conflict Abort Exception handler to KVM Mikołaj Lenczewski
  2025-01-10 18:49 ` Oliver Upton
@ 2025-01-15 15:13 ` Marc Zyngier
  2025-01-16 10:42   ` Mikołaj Lenczewski
  1 sibling, 1 reply; 5+ messages in thread
From: Marc Zyngier @ 2025-01-15 15:13 UTC (permalink / raw)
  To: Mikołaj Lenczewski
  Cc: kvmarm, linux-arm-kernel, linux-kernel, ryan.roberts,
	catalin.marinas, mark.rutland, james.morse, will, oliver.upton,
	joey.gouly, suzuki.poulose, yuzenghui

On Fri, 10 Jan 2025 17:24:07 +0000,
Mikołaj Lenczewski <miko.lenczewski@arm.com> wrote:
> 
> Currently, KVM does not handle the case of a stage 2 TLB conflict abort
> exception. This can legitimately occurs when the guest is eliding full
> BBM semantics as permitted by BBM level 2. In this case it is possible
> for a confclit abort to be delivered to EL2. We handle that by
> invalidating the full TLB.
> 
> The Arm ARM specifies that the worst-case invalidation is either a
> `tlbi vmalls12e1` or a `tlbi alle1` (as per DDI0487K section D8.16.3).
> We implement `tlbi alle1` by extending the existing
> __kvm_flush_vm_context() helper to allow for differentiating between
> inner-shareable and cpu-local invalidations.
> 
> This commit applies on top of v6.13-rc2 (fac04efc5c79).
> 
> Signed-off-by: Mikołaj Lenczewski <miko.lenczewski@arm.com>
> ---
>  arch/arm64/include/asm/esr.h       |  8 ++++++++
>  arch/arm64/include/asm/kvm_asm.h   |  2 +-
>  arch/arm64/kvm/hyp/nvhe/hyp-main.c |  2 +-
>  arch/arm64/kvm/hyp/nvhe/tlb.c      |  9 +++++++--
>  arch/arm64/kvm/hyp/vhe/tlb.c       |  9 +++++++--
>  arch/arm64/kvm/mmu.c               | 13 +++++++++++++
>  arch/arm64/kvm/vmid.c              |  2 +-
>  7 files changed, 38 insertions(+), 7 deletions(-)
>

[...]

> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index c9d46ad57e52..7c0d97449d23 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1756,6 +1756,19 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
>  	ipa = fault_ipa = kvm_vcpu_get_fault_ipa(vcpu);
>  	is_iabt = kvm_vcpu_trap_is_iabt(vcpu);
>  
> +	if (esr_fsc_is_tlb_conflict_abort(esr)) {
> +
> +		/* Architecturely, at this stage 2 tlb conflict abort, we must
> +		 * either perform a `tlbi vmalls12e1`, or a `tlbi alle1`. Due
> +		 * to nesting of VMs, we would have to iterate all flattened
> +		 * VMIDs to clean out a single guest, so we perform a `tlbi alle1`
> +		 * instead to save time.
> +		 */
> +		__kvm_flush_vm_context(true);
> +
> +		return 1;
> +	}
> +

This is broken. At this stage, you are preemptible, so whatever
invalidation you are performing might be happening on the wrong CPU
(and I really don't want to see a broadcast invalidation).

I really don't see why this can't be handled as a fixup in the inner
run loop, which would save *a lot* of cycles and do the right thing.
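
Roughly the same shape as the other hyp exit fixups, e.g. (untested
sketch; the hook point and handler name are hypothetical):

	/* Run from the non-preemptible hyp exit path, guest context loaded */
	static bool kvm_hyp_handle_tlb_conflict(struct kvm_vcpu *vcpu, u64 *exit_code)
	{
		if (!esr_fsc_is_tlb_conflict_abort(kvm_vcpu_get_esr(vcpu)))
			return false;

		/* Local (non-broadcast) invalidation on the faulting CPU only */
		__tlbi(vmalls12e1);
		dsb(nsh);
		isb();

		/* Resume the guest so it retries the faulting access */
		return true;
	}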

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

* Re: [PATCH v1] arm64: Add TLB Conflict Abort Exception handler to KVM
  2025-01-10 18:49 ` Oliver Upton
@ 2025-01-16 10:41   ` Mikołaj Lenczewski
  0 siblings, 0 replies; 5+ messages in thread
From: Mikołaj Lenczewski @ 2025-01-16 10:41 UTC (permalink / raw)
  To: Oliver Upton
  Cc: kvmarm, linux-arm-kernel, linux-kernel, ryan.roberts,
	catalin.marinas, mark.rutland, james.morse, will, maz, joey.gouly,
	suzuki.poulose, yuzenghui

On Fri, Jan 10, 2025 at 10:49:56AM -0800, Oliver Upton wrote:
> On Fri, Jan 10, 2025 at 05:24:07PM +0000, Mikołaj Lenczewski wrote:
> > Currently, KVM does not handle the case of a stage 2 TLB conflict abort
> > exception. This can legitimately occurs when the guest is eliding full
> > BBM semantics as permitted by BBM level 2. In this case it is possible
> > for a confclit abort to be delivered to EL2. We handle that by
> > invalidating the full TLB.
> 
> typo: conflict
> 
> Also, a bit of a nitpick, but mentioning that TLB conflict abort routing
> is implementation defined when S2 is enabled is valuable information.
> 
> > @@ -1756,6 +1756,19 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
> >  	ipa = fault_ipa = kvm_vcpu_get_fault_ipa(vcpu);
> >  	is_iabt = kvm_vcpu_trap_is_iabt(vcpu);
> >  
> > +	if (esr_fsc_is_tlb_conflict_abort(esr)) {
> > +
> > +		/* Architecturely, at this stage 2 tlb conflict abort, we must
> > +		 * either perform a `tlbi vmalls12e1`, or a `tlbi alle1`. Due
> > +		 * to nesting of VMs, we would have to iterate all flattened
> > +		 * VMIDs to clean out a single guest, so we perform a `tlbi alle1`
> > +		 * instead to save time.
> > +		 */
> 
> I'm not sure I follow this.
> 
> At this point we've taken a TLB conflict abort out of a specific
> hardware MMU context, and it's unclear to me why a conflict abort in one
> stage-2 MMU has any bearing on the other stage-2 MMUs that could be
> associated with this guest.
> 
> Even in NV, KVM is always responsible for the maintenance of hardware
> stage-2 MMUs. So stage-2 TLBI elision in the guest hypervisor should
> not lead to a stage-2 TLB conflict abort.
> 
> TLBI ALLE1 is a larger hammer than what's actually necessary here. Could
> you perhaps introduce a new invalidation routine,
> __kvm_tlb_flush_vmid_nsh(), that does a TLBI VMALLS12E1 behind the
> scenes?
> 
> If an NV guest is playing games at stage-1 across VMIDs then it gets to
> suffer the consequences (additional TLB conflict aborts).
> 
> -- 
> Thanks,
> Oliver

Apologies for the very late reply. Thank you for taking the time to
review my patch. Will fix the typo, and work on fixing the logic there
to use a TLBI VMALLS12E1.

-- 
Kind regards,
Mikołaj Lenczewski

* Re: [PATCH v1] arm64: Add TLB Conflict Abort Exception handler to KVM
  2025-01-15 15:13 ` Marc Zyngier
@ 2025-01-16 10:42   ` Mikołaj Lenczewski
  0 siblings, 0 replies; 5+ messages in thread
From: Mikołaj Lenczewski @ 2025-01-16 10:42 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: kvmarm, linux-arm-kernel, linux-kernel, ryan.roberts,
	catalin.marinas, mark.rutland, james.morse, will, oliver.upton,
	joey.gouly, suzuki.poulose, yuzenghui

On Wed, Jan 15, 2025 at 03:13:54PM +0000, Marc Zyngier wrote:
> On Fri, 10 Jan 2025 17:24:07 +0000,
> Mikołaj Lenczewski <miko.lenczewski@arm.com> wrote:
> > 
> > Currently, KVM does not handle the case of a stage 2 TLB conflict abort
> > exception. This can legitimately occurs when the guest is eliding full
> > BBM semantics as permitted by BBM level 2. In this case it is possible
> > for a confclit abort to be delivered to EL2. We handle that by
> > invalidating the full TLB.
> > 
> > The Arm ARM specifies that the worst-case invalidation is either a
> > `tlbi vmalls12e1` or a `tlbi alle1` (as per DDI0487K section D8.16.3).
> > We implement `tlbi alle1` by extending the existing
> > __kvm_flush_vm_context() helper to allow for differentiating between
> > inner-shareable and cpu-local invalidations.
> > 
> > This commit applies on top of v6.13-rc2 (fac04efc5c79).
> > 
> > Signed-off-by: Mikołaj Lenczewski <miko.lenczewski@arm.com>
> > ---
> >  arch/arm64/include/asm/esr.h       |  8 ++++++++
> >  arch/arm64/include/asm/kvm_asm.h   |  2 +-
> >  arch/arm64/kvm/hyp/nvhe/hyp-main.c |  2 +-
> >  arch/arm64/kvm/hyp/nvhe/tlb.c      |  9 +++++++--
> >  arch/arm64/kvm/hyp/vhe/tlb.c       |  9 +++++++--
> >  arch/arm64/kvm/mmu.c               | 13 +++++++++++++
> >  arch/arm64/kvm/vmid.c              |  2 +-
> >  7 files changed, 38 insertions(+), 7 deletions(-)
> >
> 
> [...]
> 
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index c9d46ad57e52..7c0d97449d23 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -1756,6 +1756,19 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
> >  	ipa = fault_ipa = kvm_vcpu_get_fault_ipa(vcpu);
> >  	is_iabt = kvm_vcpu_trap_is_iabt(vcpu);
> >  
> > +	if (esr_fsc_is_tlb_conflict_abort(esr)) {
> > +
> > +		/* Architecturely, at this stage 2 tlb conflict abort, we must
> > +		 * either perform a `tlbi vmalls12e1`, or a `tlbi alle1`. Due
> > +		 * to nesting of VMs, we would have to iterate all flattened
> > +		 * VMIDs to clean out a single guest, so we perform a `tlbi alle1`
> > +		 * instead to save time.
> > +		 */
> > +		__kvm_flush_vm_context(true);
> > +
> > +		return 1;
> > +	}
> > +
> 
> This is broken. At this stage, you are preemptible, so whatever
> invalidation you are performing might be happening on the wrong CPU
> (and I really don't want to see a broadcast invalidation).
> 
> I really don't see why this can't be handled as a fixup in the inner
> run loop, which would save *a lot* of cycles and do the right thing.
> 
> Thanks,
> 
> 	M.
> 
> -- 
> Without deviation from the norm, progress is not possible.

Apologies for my very late reply. Thank you for taking the time to
review my patch. Will work on this.

-- 
Kind regards,
Mikołaj Lenczewski