linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH 0/2] KVM: arm64: Address soft lockups due to I-cache CMOs
@ 2023-09-20  8:01 Oliver Upton
  2023-09-20  8:01 ` [PATCH 1/2] arm64: tlbflush: Rename MAX_TLBI_OPS Oliver Upton
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Oliver Upton @ 2023-09-20  8:01 UTC (permalink / raw)
  To: kvmarm
  Cc: kvm, Marc Zyngier, James Morse, Suzuki K Poulose, Zenghui Yu,
	Will Deacon, Catalin Marinas, linux-arm-kernel, Gavin Shan,
	Oliver Upton

Small series to address the soft lockups that Gavin hits when running
KVM guests w/ hugepages on an Ampere Altra Max machine. While I
absolutely loathe "fixing" the issue of slow I-cache CMOs in this way,
I can't really think of an alternative.

Oliver Upton (2):
  arm64: tlbflush: Rename MAX_TLBI_OPS
  KVM: arm64: Avoid soft lockups due to I-cache maintenance

 arch/arm64/include/asm/kvm_mmu.h  | 37 ++++++++++++++++++++++++++-----
 arch/arm64/include/asm/tlbflush.h |  8 +++----
 2 files changed, 35 insertions(+), 10 deletions(-)


base-commit: ce9ecca0238b140b88f43859b211c9fdfd8e5b70
-- 
2.42.0.459.ge4e396fd5e-goog



* [PATCH 1/2] arm64: tlbflush: Rename MAX_TLBI_OPS
  2023-09-20  8:01 [PATCH 0/2] KVM: arm64: Address soft lockups due to I-cache CMOs Oliver Upton
@ 2023-09-20  8:01 ` Oliver Upton
  2023-09-21  3:27   ` Gavin Shan
  2023-09-22 10:25   ` Will Deacon
  2023-09-20  8:01 ` [PATCH 2/2] KVM: arm64: Avoid soft lockups due to I-cache maintenance Oliver Upton
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 8+ messages in thread
From: Oliver Upton @ 2023-09-20  8:01 UTC (permalink / raw)
  To: kvmarm
  Cc: kvm, Marc Zyngier, James Morse, Suzuki K Poulose, Zenghui Yu,
	Will Deacon, Catalin Marinas, linux-arm-kernel, Gavin Shan,
	Oliver Upton

Perhaps unsurprisingly, I-cache invalidations suffer from performance
issues similar to TLB invalidations on certain systems. TLB and I-cache
maintenance both result in DVM (Distributed Virtual Memory) messages on
the mesh interconnect, which is where the real bottleneck lies.

Rename the heuristic to point the finger at DVM, such that it may be
reused for limiting I-cache invalidations.
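
For context, the heuristic in question caps the number of broadcast
invalidations issued for a single range and falls back to a full flush
beyond the cap, trading precision for a bounded number of DVM
operations. A minimal sketch of the pattern (not kernel code; the
flush_one()/flush_all() helpers are hypothetical stand-ins for the
real TLBI instructions):

	#define MAX_DVM_OPS	512	/* PTRS_PER_PTE on a 4KiB-page kernel */

	static void flush_range(unsigned long start, unsigned long end,
				unsigned long stride)
	{
		unsigned long addr;

		/* Past the cap, one global flush beats hundreds of DVM messages. */
		if ((end - start) >= MAX_DVM_OPS * stride) {
			flush_all();	/* hypothetical: single global invalidation */
			return;
		}

		for (addr = start; addr < end; addr += stride)
			flush_one(addr);	/* hypothetical: one broadcast op each */
	}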

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
 arch/arm64/include/asm/tlbflush.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index b149cf9f91bc..3431d37e5054 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -333,7 +333,7 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
  * This is meant to avoid soft lock-ups on large TLB flushing ranges and not
  * necessarily a performance improvement.
  */
-#define MAX_TLBI_OPS	PTRS_PER_PTE
+#define MAX_DVM_OPS	PTRS_PER_PTE
 
 /*
  * __flush_tlb_range_op - Perform TLBI operation upon a range
@@ -413,12 +413,12 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 
 	/*
 	 * When not using TLB range ops, we can handle up to
-	 * (MAX_TLBI_OPS - 1) pages;
+	 * (MAX_DVM_OPS - 1) pages;
 	 * When using TLB range ops, we can handle up to
 	 * (MAX_TLBI_RANGE_PAGES - 1) pages.
 	 */
 	if ((!system_supports_tlb_range() &&
-	     (end - start) >= (MAX_TLBI_OPS * stride)) ||
+	     (end - start) >= (MAX_DVM_OPS * stride)) ||
 	    pages >= MAX_TLBI_RANGE_PAGES) {
 		flush_tlb_mm(vma->vm_mm);
 		return;
@@ -451,7 +451,7 @@ static inline void flush_tlb_kernel_range(unsigned long start, unsigned long end
 {
 	unsigned long addr;
 
-	if ((end - start) > (MAX_TLBI_OPS * PAGE_SIZE)) {
+	if ((end - start) > (MAX_DVM_OPS * PAGE_SIZE)) {
 		flush_tlb_all();
 		return;
 	}
-- 
2.42.0.459.ge4e396fd5e-goog



* [PATCH 2/2] KVM: arm64: Avoid soft lockups due to I-cache maintenance
  2023-09-20  8:01 [PATCH 0/2] KVM: arm64: Address soft lockups due to I-cache CMOs Oliver Upton
  2023-09-20  8:01 ` [PATCH 1/2] arm64: tlbflush: Rename MAX_TLBI_OPS Oliver Upton
@ 2023-09-20  8:01 ` Oliver Upton
  2023-09-21  3:28   ` Gavin Shan
  2023-09-21  7:39 ` [PATCH 0/2] KVM: arm64: Address soft lockups due to I-cache CMOs Marc Zyngier
  2023-09-22 17:56 ` Oliver Upton
  3 siblings, 1 reply; 8+ messages in thread
From: Oliver Upton @ 2023-09-20  8:01 UTC (permalink / raw)
  To: kvmarm
  Cc: kvm, Marc Zyngier, James Morse, Suzuki K Poulose, Zenghui Yu,
	Will Deacon, Catalin Marinas, linux-arm-kernel, Gavin Shan,
	Oliver Upton

Gavin reports soft lockups on his Ampere Altra Max machine when
backing KVM guests with hugetlb pages. Upon further investigation, it
was found that the system is unable to keep up with parallel I-cache
invalidations done by KVM's stage-2 fault handler.

This is ultimately an implementation problem. I-cache maintenance
instructions are available at EL0, so nothing stops a malicious
userspace from hammering a system with CMOs and causing it to fall over.
"Fixing" this problem in KVM is nothing more than slapping a bandage
over a much deeper problem.

Anyway, the kernel already has a heuristic for limiting TLB
invalidations to avoid soft lockups. Reuse that logic to limit the
I-cache CMOs KVM issues when mapping executable pages on systems
without FEAT_DIC. While at it, restructure
__invalidate_icache_guest_page() to improve readability and squeeze the
new condition into the existing branching structure.
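
As a back-of-the-envelope example (not part of the patch): on a
4KiB-page kernel PTRS_PER_PTE is 512, and a typical CTR_EL0.IminLine of
4 denotes 16-word (64-byte) I-cache lines, so the cap works out to:

	iminline  = IminLine + 2 = 6		/* log2(line size in bytes) */
	max range = MAX_DVM_OPS << iminline
		  = 512 << 6 = 32768 bytes (32KiB)

Invalidation ranges larger than that take the icache_inval_all_pou()
path instead of being invalidated line by line.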

Link: https://lore.kernel.org/kvmarm/20230904072826.1468907-1-gshan@redhat.com/
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
---
 arch/arm64/include/asm/kvm_mmu.h | 37 ++++++++++++++++++++++++++------
 1 file changed, 31 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 96a80e8f6226..a425ecdd7be0 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -224,16 +224,41 @@ static inline void __clean_dcache_guest_page(void *va, size_t size)
 	kvm_flush_dcache_to_poc(va, size);
 }
 
+static inline size_t __invalidate_icache_max_range(void)
+{
+	u8 iminline;
+	u64 ctr;
+
+	asm volatile(ALTERNATIVE_CB("movz %0, #0\n"
+				    "movk %0, #0, lsl #16\n"
+				    "movk %0, #0, lsl #32\n"
+				    "movk %0, #0, lsl #48\n",
+				    ARM64_ALWAYS_SYSTEM,
+				    kvm_compute_final_ctr_el0)
+		     : "=r" (ctr));
+
+	iminline = SYS_FIELD_GET(CTR_EL0, IminLine, ctr) + 2;
+	return MAX_DVM_OPS << iminline;
+}
+
 static inline void __invalidate_icache_guest_page(void *va, size_t size)
 {
-	if (icache_is_aliasing()) {
-		/* any kind of VIPT cache */
+	/*
+	 * VPIPT I-cache maintenance must be done from EL2. See comment in the
+	 * nVHE flavor of __kvm_tlb_flush_vmid_ipa().
+	 */
+	if (icache_is_vpipt() && read_sysreg(CurrentEL) != CurrentEL_EL2)
+		return;
+
+	/*
+	 * Blow the whole I-cache if it is aliasing (i.e. VIPT) or the
+	 * invalidation range exceeds our arbitrary limit on invalidations by
+	 * cache line.
+	 */
+	if (icache_is_aliasing() || size > __invalidate_icache_max_range())
 		icache_inval_all_pou();
-	} else if (read_sysreg(CurrentEL) != CurrentEL_EL1 ||
-		   !icache_is_vpipt()) {
-		/* PIPT or VPIPT at EL2 (see comment in __kvm_tlb_flush_vmid_ipa) */
+	else
 		icache_inval_pou((unsigned long)va, (unsigned long)va + size);
-	}
 }
 
 void kvm_set_way_flush(struct kvm_vcpu *vcpu);
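
Note on the ALTERNATIVE_CB block above: the movz/movk sequence is a
placeholder that kvm_compute_final_ctr_el0 rewrites at boot with the
system-wide sanitised value of CTR_EL0, so the helper never reads the
register on the local CPU and still works from hyp context, where
normal kernel data is unreachable. Functionally it behaves like the
following sketch (using the kernel's read_sanitised_ftr_reg() helper,
which hyp code cannot actually call -- hence the boot-time patching):

	static inline size_t __invalidate_icache_max_range(void)
	{
		/* System-wide safe CTR_EL0, as sanitised by the cpufeature code */
		u64 ctr = read_sanitised_ftr_reg(SYS_CTR_EL0);
		u8 iminline = SYS_FIELD_GET(CTR_EL0, IminLine, ctr) + 2;

		return MAX_DVM_OPS << iminline;
	}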
-- 
2.42.0.459.ge4e396fd5e-goog



* Re: [PATCH 1/2] arm64: tlbflush: Rename MAX_TLBI_OPS
  2023-09-20  8:01 ` [PATCH 1/2] arm64: tlbflush: Rename MAX_TLBI_OPS Oliver Upton
@ 2023-09-21  3:27   ` Gavin Shan
  2023-09-22 10:25   ` Will Deacon
  1 sibling, 0 replies; 8+ messages in thread
From: Gavin Shan @ 2023-09-21  3:27 UTC (permalink / raw)
  To: Oliver Upton, kvmarm
  Cc: kvm, Marc Zyngier, James Morse, Suzuki K Poulose, Zenghui Yu,
	Will Deacon, Catalin Marinas, linux-arm-kernel

On 9/20/23 18:01, Oliver Upton wrote:
> Perhaps unsurprisingly, I-cache invalidations suffer from performance
> issues similar to TLB invalidations on certain systems. TLB and I-cache
> maintenance both result in DVM (Distributed Virtual Memory) messages on
> the mesh interconnect, which is where the real bottleneck lies.
> 
> Rename the heuristic to point the finger at DVM, such that it may be
> reused for limiting I-cache invalidations.
> 
> [...]

Reviewed-by: Gavin Shan <gshan@redhat.com>
Tested-by: Gavin Shan <gshan@redhat.com>


* Re: [PATCH 2/2] KVM: arm64: Avoid soft lockups due to I-cache maintenance
  2023-09-20  8:01 ` [PATCH 2/2] KVM: arm64: Avoid soft lockups due to I-cache maintenance Oliver Upton
@ 2023-09-21  3:28   ` Gavin Shan
  0 siblings, 0 replies; 8+ messages in thread
From: Gavin Shan @ 2023-09-21  3:28 UTC (permalink / raw)
  To: Oliver Upton, kvmarm
  Cc: kvm, Marc Zyngier, James Morse, Suzuki K Poulose, Zenghui Yu,
	Will Deacon, Catalin Marinas, linux-arm-kernel

On 9/20/23 18:01, Oliver Upton wrote:
> Gavin reports soft lockups on his Ampere Altra Max machine when
> backing KVM guests with hugetlb pages. Upon further investigation, it
> was found that the system is unable to keep up with parallel I-cache
> invalidations done by KVM's stage-2 fault handler.
> 
> [...]

Reviewed-by: Gavin Shan <gshan@redhat.com>
Tested-by: Gavin Shan <gshan@redhat.com>


* Re: [PATCH 0/2] KVM: arm64: Address soft lockups due to I-cache CMOs
  2023-09-20  8:01 [PATCH 0/2] KVM: arm64: Address soft lockups due to I-cache CMOs Oliver Upton
  2023-09-20  8:01 ` [PATCH 1/2] arm64: tlbflush: Rename MAX_TLBI_OPS Oliver Upton
  2023-09-20  8:01 ` [PATCH 2/2] KVM: arm64: Avoid soft lockups due to I-cache maintenance Oliver Upton
@ 2023-09-21  7:39 ` Marc Zyngier
  2023-09-22 17:56 ` Oliver Upton
  3 siblings, 0 replies; 8+ messages in thread
From: Marc Zyngier @ 2023-09-21  7:39 UTC (permalink / raw)
  To: Oliver Upton
  Cc: kvmarm, kvm, James Morse, Suzuki K Poulose, Zenghui Yu,
	Will Deacon, Catalin Marinas, linux-arm-kernel, Gavin Shan

On Wed, 20 Sep 2023 09:01:31 +0100,
Oliver Upton <oliver.upton@linux.dev> wrote:
> 
> Small series to address the soft lockups that Gavin hits when running
> KVM guests w/ hugepages on an Ampere Altra Max machine. While I
> absolutely loathe "fixing" the issue of slow I-cache CMOs in this way,
> I can't really think of an alternative.

I don't think there is any, unfortunately. I don't think this change
is inherently bad (we should have added something like this a long
while ago), but it scares me that these systems can apparently be
DoS'd from userspace or a guest...

Anyway:

Reviewed-by: Marc Zyngier <maz@kernel.org>

	M.

-- 
Without deviation from the norm, progress is not possible.


* Re: [PATCH 1/2] arm64: tlbflush: Rename MAX_TLBI_OPS
  2023-09-20  8:01 ` [PATCH 1/2] arm64: tlbflush: Rename MAX_TLBI_OPS Oliver Upton
  2023-09-21  3:27   ` Gavin Shan
@ 2023-09-22 10:25   ` Will Deacon
  1 sibling, 0 replies; 8+ messages in thread
From: Will Deacon @ 2023-09-22 10:25 UTC (permalink / raw)
  To: Oliver Upton
  Cc: kvmarm, kvm, Marc Zyngier, James Morse, Suzuki K Poulose,
	Zenghui Yu, Catalin Marinas, linux-arm-kernel, Gavin Shan

On Wed, Sep 20, 2023 at 08:01:32AM +0000, Oliver Upton wrote:
> Perhaps unsurprisingly, I-cache invalidations suffer from performance
> issues similar to TLB invalidations on certain systems. TLB and I-cache
> maintenance both result in DVM (Distributed Virtual Memory) messages on
> the mesh interconnect, which is where the real bottleneck lies.
> 
> Rename the heuristic to point the finger at DVM, such that it may be
> reused for limiting I-cache invalidations.
> 
> Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
> 
> [...]

Acked-by: Will Deacon <will@kernel.org>

Will


* Re: [PATCH 0/2] KVM: arm64: Address soft lockups due to I-cache CMOs
  2023-09-20  8:01 [PATCH 0/2] KVM: arm64: Address soft lockups due to I-cache CMOs Oliver Upton
                   ` (2 preceding siblings ...)
  2023-09-21  7:39 ` [PATCH 0/2] KVM: arm64: Address soft lockups due to I-cache CMOs Marc Zyngier
@ 2023-09-22 17:56 ` Oliver Upton
  3 siblings, 0 replies; 8+ messages in thread
From: Oliver Upton @ 2023-09-22 17:56 UTC (permalink / raw)
  To: kvmarm, Oliver Upton
  Cc: Marc Zyngier, Suzuki K Poulose, kvm, Will Deacon, Zenghui Yu,
	linux-arm-kernel, Catalin Marinas, James Morse, Gavin Shan

On Wed, 20 Sep 2023 08:01:31 +0000, Oliver Upton wrote:
> Small series to address the soft lockups that Gavin hits when running
> KVM guests w/ hugepages on an Ampere Altra Max machine. While I
> absolutely loathe "fixing" the issue of slow I-cache CMOs in this way,
> I can't really think of an alternative.
> 
> Oliver Upton (2):
>   arm64: tlbflush: Rename MAX_TLBI_OPS
>   KVM: arm64: Avoid soft lockups due to I-cache maintenance
> 
> [...]

Applied to kvmarm/next, thanks!

[1/2] arm64: tlbflush: Rename MAX_TLBI_OPS
      https://git.kernel.org/kvmarm/kvmarm/c/ec1c3b9ff160
[2/2] KVM: arm64: Avoid soft lockups due to I-cache maintenance
      https://git.kernel.org/kvmarm/kvmarm/c/909b583f81b5

--
Best,
Oliver

