* [PATCH v1] mm: Remove arch_flush_tlb_batched_pending() arch helper
@ 2025-06-09 10:31 Ryan Roberts
  2025-06-09 10:45 ` Lorenzo Stoakes
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Ryan Roberts @ 2025-06-09 10:31 UTC (permalink / raw)
  To: Andrew Morton, Catalin Marinas, Will Deacon, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, H. Peter Anvin,
	David Hildenbrand, Lorenzo Stoakes, Rik van Riel, Liam R. Howlett,
	Vlastimil Babka, Harry Yoo
  Cc: Ryan Roberts, linux-arm-kernel, linux-kernel, linux-riscv,
	linux-mm

Since commit 4b634918384c ("arm64/mm: Close theoretical race where stale
TLB entry remains valid"), all arches that use tlbbatch for reclaim
(arm64, riscv, x86) implement arch_flush_tlb_batched_pending() with a
flush_tlb_mm().

So let's simplify by removing the unnecessary abstraction and doing the
flush_tlb_mm() directly in flush_tlb_batched_pending(). This effectively
reverts commit db6c1f6f236d ("mm/tlbbatch: introduce
arch_flush_tlb_batched_pending()").

Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/include/asm/tlbflush.h | 11 -----------
 arch/riscv/include/asm/tlbflush.h |  1 -
 arch/riscv/mm/tlbflush.c          |  5 -----
 arch/x86/include/asm/tlbflush.h   |  5 -----
 mm/rmap.c                         |  2 +-
 5 files changed, 1 insertion(+), 23 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index aa9efee17277..18a5dc0c9a54 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -322,17 +322,6 @@ static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
 	return true;
 }

-/*
- * If mprotect/munmap/etc occurs during TLB batched flushing, we need to ensure
- * all the previously issued TLBIs targeting mm have completed. But since we
- * can be executing on a remote CPU, a DSB cannot guarantee this like it can
- * for arch_tlbbatch_flush(). Our only option is to flush the entire mm.
- */
-static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
-{
-	flush_tlb_mm(mm);
-}
-
 /*
  * To support TLB batched flush for multiple pages unmapping, we only send
  * the TLBI for each page in arch_tlbbatch_add_pending() and wait for the
diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
index 1a20dd746a49..eed0abc40514 100644
--- a/arch/riscv/include/asm/tlbflush.h
+++ b/arch/riscv/include/asm/tlbflush.h
@@ -63,7 +63,6 @@ void flush_pud_tlb_range(struct vm_area_struct *vma, unsigned long start,
 bool arch_tlbbatch_should_defer(struct mm_struct *mm);
 void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
 		struct mm_struct *mm, unsigned long start, unsigned long end);
-void arch_flush_tlb_batched_pending(struct mm_struct *mm);
 void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);

 extern unsigned long tlb_flush_all_threshold;
diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
index e737ba7949b1..8404530ec00f 100644
--- a/arch/riscv/mm/tlbflush.c
+++ b/arch/riscv/mm/tlbflush.c
@@ -234,11 +234,6 @@ void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
 	mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end);
 }

-void arch_flush_tlb_batched_pending(struct mm_struct *mm)
-{
-	flush_tlb_mm(mm);
-}
-
 void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 {
 	__flush_tlb_range(NULL, &batch->cpumask,
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index e9b81876ebe4..00daedfefc1b 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -356,11 +356,6 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
 	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
 }

-static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
-{
-	flush_tlb_mm(mm);
-}
-
 extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);

 static inline bool pte_flags_need_flush(unsigned long oldflags,
diff --git a/mm/rmap.c b/mm/rmap.c
index fb63d9256f09..fd160ddaa980 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -746,7 +746,7 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
 	int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;

 	if (pending != flushed) {
-		arch_flush_tlb_batched_pending(mm);
+		flush_tlb_mm(mm);
 		/*
 		 * If the new TLB flushing is pending during flushing, leave
 		 * mm->tlb_flush_batched as is, to avoid losing flushing.
--
2.43.0



* Re: [PATCH v1] mm: Remove arch_flush_tlb_batched_pending() arch helper
  2025-06-09 10:31 [PATCH v1] mm: Remove arch_flush_tlb_batched_pending() arch helper Ryan Roberts
@ 2025-06-09 10:45 ` Lorenzo Stoakes
  2025-06-09 11:01   ` Ryan Roberts
  2025-06-10  6:22 ` Anshuman Khandual
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 7+ messages in thread
From: Lorenzo Stoakes @ 2025-06-09 10:45 UTC (permalink / raw)
  To: Ryan Roberts
  Cc: Andrew Morton, Catalin Marinas, Will Deacon, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, H. Peter Anvin,
	David Hildenbrand, Rik van Riel, Liam R. Howlett, Vlastimil Babka,
	Harry Yoo, linux-arm-kernel, linux-kernel, linux-riscv, linux-mm

On Mon, Jun 09, 2025 at 11:31:30AM +0100, Ryan Roberts wrote:
> Since commit 4b634918384c ("arm64/mm: Close theoretical race where stale
> TLB entry remains valid"), all arches that use tlbbatch for reclaim
> (arm64, riscv, x86) implement arch_flush_tlb_batched_pending() with a
> flush_tlb_mm().
>
> So let's simplify by removing the unnecessary abstraction and doing the
> flush_tlb_mm() directly in flush_tlb_batched_pending(). This effectively
> reverts commit db6c1f6f236d ("mm/tlbbatch: introduce
> arch_flush_tlb_batched_pending()").
>
> Suggested-by: Will Deacon <will@kernel.org>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>

Thanks, love to see an arch_*() helper go :)

Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

Couple points below.

> ---
>  arch/arm64/include/asm/tlbflush.h | 11 -----------
>  arch/riscv/include/asm/tlbflush.h |  1 -
>  arch/riscv/mm/tlbflush.c          |  5 -----
>  arch/x86/include/asm/tlbflush.h   |  5 -----
>  mm/rmap.c                         |  2 +-
>  5 files changed, 1 insertion(+), 23 deletions(-)
>
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index aa9efee17277..18a5dc0c9a54 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -322,17 +322,6 @@ static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
>  	return true;
>  }
>
> -/*
> - * If mprotect/munmap/etc occurs during TLB batched flushing, we need to ensure
> - * all the previously issued TLBIs targeting mm have completed. But since we
> - * can be executing on a remote CPU, a DSB cannot guarantee this like it can
> - * for arch_tlbbatch_flush(). Our only option is to flush the entire mm.
> - */

Hm, are we losing information here? I guess it's hard to know where to put
this, though.

> -static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
> -{
> -	flush_tlb_mm(mm);
> -}
> -
>  /*
>   * To support TLB batched flush for multiple pages unmapping, we only send
>   * the TLBI for each page in arch_tlbbatch_add_pending() and wait for the
> diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
> index 1a20dd746a49..eed0abc40514 100644
> --- a/arch/riscv/include/asm/tlbflush.h
> +++ b/arch/riscv/include/asm/tlbflush.h
> @@ -63,7 +63,6 @@ void flush_pud_tlb_range(struct vm_area_struct *vma, unsigned long start,
>  bool arch_tlbbatch_should_defer(struct mm_struct *mm);
>  void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
>  		struct mm_struct *mm, unsigned long start, unsigned long end);
> -void arch_flush_tlb_batched_pending(struct mm_struct *mm);
>  void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
>
>  extern unsigned long tlb_flush_all_threshold;
> diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
> index e737ba7949b1..8404530ec00f 100644
> --- a/arch/riscv/mm/tlbflush.c
> +++ b/arch/riscv/mm/tlbflush.c
> @@ -234,11 +234,6 @@ void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
>  	mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end);
>  }
>
> -void arch_flush_tlb_batched_pending(struct mm_struct *mm)
> -{
> -	flush_tlb_mm(mm);
> -}
> -
>  void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
>  {
>  	__flush_tlb_range(NULL, &batch->cpumask,
> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> index e9b81876ebe4..00daedfefc1b 100644
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -356,11 +356,6 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
>  	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
>  }
>
> -static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
> -{
> -	flush_tlb_mm(mm);
> -}
> -
>  extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
>
>  static inline bool pte_flags_need_flush(unsigned long oldflags,
> diff --git a/mm/rmap.c b/mm/rmap.c
> index fb63d9256f09..fd160ddaa980 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -746,7 +746,7 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
>  	int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;
>
>  	if (pending != flushed) {
> -		arch_flush_tlb_batched_pending(mm);
> +		flush_tlb_mm(mm);

I see that CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH is only implemented in
riscv (if !nommu), x86, arm64, and therefore we are only going to invoke
this for those arches which previously did the same anyway, so this is
safe.

Kinda wish we could avoid this ugly #ifdef #else #endif pattern here in
mm/rmap.c, but it's probably necessary in this case.
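
For reference, the guard is roughly the usual stub pattern below -- a
simplified, from-memory sketch of how the batched-flush hooks are typically
declared, not verbatim kernel code -- so arches that don't select
CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH never reach the flush_tlb_mm()
call at all:

	#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
	void try_to_unmap_flush(void);
	void try_to_unmap_flush_dirty(void);
	void flush_tlb_batched_pending(struct mm_struct *mm);
	#else
	static inline void try_to_unmap_flush(void)
	{
	}
	static inline void try_to_unmap_flush_dirty(void)
	{
	}
	static inline void flush_tlb_batched_pending(struct mm_struct *mm)
	{
	}
	#endif /* CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH */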

>  		/*
>  		 * If the new TLB flushing is pending during flushing, leave
>  		 * mm->tlb_flush_batched as is, to avoid losing flushing.
> --
> 2.43.0
>


* Re: [PATCH v1] mm: Remove arch_flush_tlb_batched_pending() arch helper
  2025-06-09 10:45 ` Lorenzo Stoakes
@ 2025-06-09 11:01   ` Ryan Roberts
  2025-06-12 17:17     ` Lorenzo Stoakes
  0 siblings, 1 reply; 7+ messages in thread
From: Ryan Roberts @ 2025-06-09 11:01 UTC (permalink / raw)
  To: Lorenzo Stoakes
  Cc: Andrew Morton, Catalin Marinas, Will Deacon, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, H. Peter Anvin,
	David Hildenbrand, Rik van Riel, Liam R. Howlett, Vlastimil Babka,
	Harry Yoo, linux-arm-kernel, linux-kernel, linux-riscv, linux-mm

On 09/06/2025 11:45, Lorenzo Stoakes wrote:
> On Mon, Jun 09, 2025 at 11:31:30AM +0100, Ryan Roberts wrote:
>> Since commit 4b634918384c ("arm64/mm: Close theoretical race where stale
>> TLB entry remains valid"), all arches that use tlbbatch for reclaim
>> (arm64, riscv, x86) implement arch_flush_tlb_batched_pending() with a
>> flush_tlb_mm().
>>
>> So let's simplify by removing the unnecessary abstraction and doing the
>> flush_tlb_mm() directly in flush_tlb_batched_pending(). This effectively
>> reverts commit db6c1f6f236d ("mm/tlbbatch: introduce
>> arch_flush_tlb_batched_pending()").
>>
>> Suggested-by: Will Deacon <will@kernel.org>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> 
> Thanks, love to see an arch_*() helper go :)
> 
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

Thanks!

> 
> Couple points below.
> 
>> ---
>>  arch/arm64/include/asm/tlbflush.h | 11 -----------
>>  arch/riscv/include/asm/tlbflush.h |  1 -
>>  arch/riscv/mm/tlbflush.c          |  5 -----
>>  arch/x86/include/asm/tlbflush.h   |  5 -----
>>  mm/rmap.c                         |  2 +-
>>  5 files changed, 1 insertion(+), 23 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
>> index aa9efee17277..18a5dc0c9a54 100644
>> --- a/arch/arm64/include/asm/tlbflush.h
>> +++ b/arch/arm64/include/asm/tlbflush.h
>> @@ -322,17 +322,6 @@ static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
>>  	return true;
>>  }
>>
>> -/*
>> - * If mprotect/munmap/etc occurs during TLB batched flushing, we need to ensure
>> - * all the previously issued TLBIs targeting mm have completed. But since we
>> - * can be executing on a remote CPU, a DSB cannot guarantee this like it can
>> - * for arch_tlbbatch_flush(). Our only option is to flush the entire mm.
>> - */
> 
> > Hm, are we losing information here? I guess it's hard to know where to put
> > this, though.

The generic version of this comment exists above flush_tlb_batched_pending() in
rmap.c.

For the arm64-specific description of why we need to flush the whole mm, that's
captured in Commit 4b634918384c ("arm64/mm: Close theoretical race where stale
TLB entry remains valid"), although I accept that may not be the first place
someone looks.

I don't think we should be defining arch_ helpers just to provide a hook for
some arch-specific comments though.

> 
>> -static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
>> -{
>> -	flush_tlb_mm(mm);
>> -}
>> -
>>  /*
>>   * To support TLB batched flush for multiple pages unmapping, we only send
>>   * the TLBI for each page in arch_tlbbatch_add_pending() and wait for the
>> diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
>> index 1a20dd746a49..eed0abc40514 100644
>> --- a/arch/riscv/include/asm/tlbflush.h
>> +++ b/arch/riscv/include/asm/tlbflush.h
>> @@ -63,7 +63,6 @@ void flush_pud_tlb_range(struct vm_area_struct *vma, unsigned long start,
>>  bool arch_tlbbatch_should_defer(struct mm_struct *mm);
>>  void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
>>  		struct mm_struct *mm, unsigned long start, unsigned long end);
>> -void arch_flush_tlb_batched_pending(struct mm_struct *mm);
>>  void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
>>
>>  extern unsigned long tlb_flush_all_threshold;
>> diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
>> index e737ba7949b1..8404530ec00f 100644
>> --- a/arch/riscv/mm/tlbflush.c
>> +++ b/arch/riscv/mm/tlbflush.c
>> @@ -234,11 +234,6 @@ void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
>>  	mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end);
>>  }
>>
>> -void arch_flush_tlb_batched_pending(struct mm_struct *mm)
>> -{
>> -	flush_tlb_mm(mm);
>> -}
>> -
>>  void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
>>  {
>>  	__flush_tlb_range(NULL, &batch->cpumask,
>> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
>> index e9b81876ebe4..00daedfefc1b 100644
>> --- a/arch/x86/include/asm/tlbflush.h
>> +++ b/arch/x86/include/asm/tlbflush.h
>> @@ -356,11 +356,6 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
>>  	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
>>  }
>>
>> -static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
>> -{
>> -	flush_tlb_mm(mm);
>> -}
>> -
>>  extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
>>
>>  static inline bool pte_flags_need_flush(unsigned long oldflags,
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index fb63d9256f09..fd160ddaa980 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -746,7 +746,7 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
>>  	int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;
>>
>>  	if (pending != flushed) {
>> -		arch_flush_tlb_batched_pending(mm);
>> +		flush_tlb_mm(mm);
> 
> I see that CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH is only implemented in
> riscv (if !nommu), x86, arm64, and therefore we are only going to invoke
> this for those arches which previously did the same anyway, so this is
> safe.

It's also the way it used to be done before arm64 joined the party and thought
it could optimize by just issuing a DSB. I have since discovered that the DSB
approach is buggy, so arm64 has now fallen back to flush_tlb_mm() and the
reason for the original introduction of arch_flush_tlb_batched_pending() has
gone.
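
For the record, the removed arm64 "optimization" was roughly the below --
sketched from memory of the pre-4b634918384c code, so not verbatim:

	static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
	{
		/*
		 * A DSB only orders against TLBIs issued from this CPU; it
		 * cannot wait for TLBIs that another CPU has batched for this
		 * mm, which is the race closed by falling back to
		 * flush_tlb_mm().
		 */
		dsb(ish);
	}

With that gone, every implementation of the hook is identical, hence the
removal in this patch.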

Thanks,
Ryan

> 
> Kinda wish we could avoid this ugly #ifdef #else #endif pattern here in
> mm/rmap.c but probably necessary in this case.
> 
>>  		/*
>>  		 * If the new TLB flushing is pending during flushing, leave
>>  		 * mm->tlb_flush_batched as is, to avoid losing flushing.
>> --
>> 2.43.0
>>



* Re: [PATCH v1] mm: Remove arch_flush_tlb_batched_pending() arch helper
  2025-06-09 10:31 [PATCH v1] mm: Remove arch_flush_tlb_batched_pending() arch helper Ryan Roberts
  2025-06-09 10:45 ` Lorenzo Stoakes
@ 2025-06-10  6:22 ` Anshuman Khandual
  2025-06-10 10:37 ` David Hildenbrand
  2025-06-12 15:20 ` Will Deacon
  3 siblings, 0 replies; 7+ messages in thread
From: Anshuman Khandual @ 2025-06-10  6:22 UTC (permalink / raw)
  To: Ryan Roberts, Andrew Morton, Catalin Marinas, Will Deacon,
	Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	H. Peter Anvin, David Hildenbrand, Lorenzo Stoakes, Rik van Riel,
	Liam R. Howlett, Vlastimil Babka, Harry Yoo
  Cc: linux-arm-kernel, linux-kernel, linux-riscv, linux-mm



On 09/06/25 4:01 PM, Ryan Roberts wrote:
> Since commit 4b634918384c ("arm64/mm: Close theoretical race where stale
> TLB entry remains valid"), all arches that use tlbbatch for reclaim
> (arm64, riscv, x86) implement arch_flush_tlb_batched_pending() with a
> flush_tlb_mm().
> 
> So let's simplify by removing the unnecessary abstraction and doing the
> flush_tlb_mm() directly in flush_tlb_batched_pending(). This effectively
> reverts commit db6c1f6f236d ("mm/tlbbatch: introduce
> arch_flush_tlb_batched_pending()").
> 
> Suggested-by: Will Deacon <will@kernel.org>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>

Makes sense to replace the arch callback with common generic code
applicable to all platforms.

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>


> ---
>  arch/arm64/include/asm/tlbflush.h | 11 -----------
>  arch/riscv/include/asm/tlbflush.h |  1 -
>  arch/riscv/mm/tlbflush.c          |  5 -----
>  arch/x86/include/asm/tlbflush.h   |  5 -----
>  mm/rmap.c                         |  2 +-
>  5 files changed, 1 insertion(+), 23 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index aa9efee17277..18a5dc0c9a54 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -322,17 +322,6 @@ static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
>  	return true;
>  }
> 
> -/*
> - * If mprotect/munmap/etc occurs during TLB batched flushing, we need to ensure
> - * all the previously issued TLBIs targeting mm have completed. But since we
> - * can be executing on a remote CPU, a DSB cannot guarantee this like it can
> - * for arch_tlbbatch_flush(). Our only option is to flush the entire mm.
> - */
> -static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
> -{
> -	flush_tlb_mm(mm);
> -}
> -
>  /*
>   * To support TLB batched flush for multiple pages unmapping, we only send
>   * the TLBI for each page in arch_tlbbatch_add_pending() and wait for the
> diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
> index 1a20dd746a49..eed0abc40514 100644
> --- a/arch/riscv/include/asm/tlbflush.h
> +++ b/arch/riscv/include/asm/tlbflush.h
> @@ -63,7 +63,6 @@ void flush_pud_tlb_range(struct vm_area_struct *vma, unsigned long start,
>  bool arch_tlbbatch_should_defer(struct mm_struct *mm);
>  void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
>  		struct mm_struct *mm, unsigned long start, unsigned long end);
> -void arch_flush_tlb_batched_pending(struct mm_struct *mm);
>  void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
> 
>  extern unsigned long tlb_flush_all_threshold;
> diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
> index e737ba7949b1..8404530ec00f 100644
> --- a/arch/riscv/mm/tlbflush.c
> +++ b/arch/riscv/mm/tlbflush.c
> @@ -234,11 +234,6 @@ void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
>  	mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end);
>  }
> 
> -void arch_flush_tlb_batched_pending(struct mm_struct *mm)
> -{
> -	flush_tlb_mm(mm);
> -}
> -
>  void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
>  {
>  	__flush_tlb_range(NULL, &batch->cpumask,
> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> index e9b81876ebe4..00daedfefc1b 100644
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -356,11 +356,6 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
>  	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
>  }
> 
> -static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
> -{
> -	flush_tlb_mm(mm);
> -}
> -
>  extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
> 
>  static inline bool pte_flags_need_flush(unsigned long oldflags,
> diff --git a/mm/rmap.c b/mm/rmap.c
> index fb63d9256f09..fd160ddaa980 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -746,7 +746,7 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
>  	int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;
> 
>  	if (pending != flushed) {
> -		arch_flush_tlb_batched_pending(mm);
> +		flush_tlb_mm(mm);
>  		/*
>  		 * If the new TLB flushing is pending during flushing, leave
>  		 * mm->tlb_flush_batched as is, to avoid losing flushing.
> --
> 2.43.0
> 
> 



* Re: [PATCH v1] mm: Remove arch_flush_tlb_batched_pending() arch helper
  2025-06-09 10:31 [PATCH v1] mm: Remove arch_flush_tlb_batched_pending() arch helper Ryan Roberts
  2025-06-09 10:45 ` Lorenzo Stoakes
  2025-06-10  6:22 ` Anshuman Khandual
@ 2025-06-10 10:37 ` David Hildenbrand
  2025-06-12 15:20 ` Will Deacon
  3 siblings, 0 replies; 7+ messages in thread
From: David Hildenbrand @ 2025-06-10 10:37 UTC (permalink / raw)
  To: Ryan Roberts, Andrew Morton, Catalin Marinas, Will Deacon,
	Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexandre Ghiti,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	H. Peter Anvin, Lorenzo Stoakes, Rik van Riel, Liam R. Howlett,
	Vlastimil Babka, Harry Yoo
  Cc: linux-arm-kernel, linux-kernel, linux-riscv, linux-mm

On 09.06.25 12:31, Ryan Roberts wrote:
> Since commit 4b634918384c ("arm64/mm: Close theoretical race where stale
> TLB entry remains valid"), all arches that use tlbbatch for reclaim
> (arm64, riscv, x86) implement arch_flush_tlb_batched_pending() with a
> flush_tlb_mm().
> 
> So let's simplify by removing the unnecessary abstraction and doing the
> flush_tlb_mm() directly in flush_tlb_batched_pending(). This effectively
> reverts commit db6c1f6f236d ("mm/tlbbatch: introduce
> arch_flush_tlb_batched_pending()").
> 
> Suggested-by: Will Deacon <will@kernel.org>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb



* Re: [PATCH v1] mm: Remove arch_flush_tlb_batched_pending() arch helper
  2025-06-09 10:31 [PATCH v1] mm: Remove arch_flush_tlb_batched_pending() arch helper Ryan Roberts
                   ` (2 preceding siblings ...)
  2025-06-10 10:37 ` David Hildenbrand
@ 2025-06-12 15:20 ` Will Deacon
  3 siblings, 0 replies; 7+ messages in thread
From: Will Deacon @ 2025-06-12 15:20 UTC (permalink / raw)
  To: Ryan Roberts
  Cc: Andrew Morton, Catalin Marinas, Paul Walmsley, Palmer Dabbelt,
	Albert Ou, Alexandre Ghiti, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, H. Peter Anvin, David Hildenbrand,
	Lorenzo Stoakes, Rik van Riel, Liam R. Howlett, Vlastimil Babka,
	Harry Yoo, linux-arm-kernel, linux-kernel, linux-riscv, linux-mm

On Mon, Jun 09, 2025 at 11:31:30AM +0100, Ryan Roberts wrote:
> Since commit 4b634918384c ("arm64/mm: Close theoretical race where stale
> TLB entry remains valid"), all arches that use tlbbatch for reclaim
> (arm64, riscv, x86) implement arch_flush_tlb_batched_pending() with a
> flush_tlb_mm().
> 
> So let's simplify by removing the unnecessary abstraction and doing the
> flush_tlb_mm() directly in flush_tlb_batched_pending(). This effectively
> reverts commit db6c1f6f236d ("mm/tlbbatch: introduce
> arch_flush_tlb_batched_pending()").
> 
> Suggested-by: Will Deacon <will@kernel.org>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> ---
>  arch/arm64/include/asm/tlbflush.h | 11 -----------
>  arch/riscv/include/asm/tlbflush.h |  1 -
>  arch/riscv/mm/tlbflush.c          |  5 -----
>  arch/x86/include/asm/tlbflush.h   |  5 -----
>  mm/rmap.c                         |  2 +-
>  5 files changed, 1 insertion(+), 23 deletions(-)

Thank you, Ryan!

Acked-by: Will Deacon <will@kernel.org>

I assume this will go via akpm rather than the arch tree?

Will


* Re: [PATCH v1] mm: Remove arch_flush_tlb_batched_pending() arch helper
  2025-06-09 11:01   ` Ryan Roberts
@ 2025-06-12 17:17     ` Lorenzo Stoakes
  0 siblings, 0 replies; 7+ messages in thread
From: Lorenzo Stoakes @ 2025-06-12 17:17 UTC (permalink / raw)
  To: Ryan Roberts
  Cc: Andrew Morton, Catalin Marinas, Will Deacon, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, Alexandre Ghiti, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, H. Peter Anvin,
	David Hildenbrand, Rik van Riel, Liam R. Howlett, Vlastimil Babka,
	Harry Yoo, linux-arm-kernel, linux-kernel, linux-riscv, linux-mm

On Mon, Jun 09, 2025 at 12:01:28PM +0100, Ryan Roberts wrote:
> On 09/06/2025 11:45, Lorenzo Stoakes wrote:
> > On Mon, Jun 09, 2025 at 11:31:30AM +0100, Ryan Roberts wrote:
> >> Since commit 4b634918384c ("arm64/mm: Close theoretical race where stale
> >> TLB entry remains valid"), all arches that use tlbbatch for reclaim
> >> (arm64, riscv, x86) implement arch_flush_tlb_batched_pending() with a
> >> flush_tlb_mm().
> >>
> >> So let's simplify by removing the unnecessary abstraction and doing the
> >> flush_tlb_mm() directly in flush_tlb_batched_pending(). This effectively
> >> reverts commit db6c1f6f236d ("mm/tlbbatch: introduce
> >> arch_flush_tlb_batched_pending()").
> >>
> >> Suggested-by: Will Deacon <will@kernel.org>
> >> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
> >
> > Thanks, love to see an arch_*() helper go :)
> >
> > Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>
> Thanks!
>
> >
> > Couple points below.
> >
> >> ---
> >>  arch/arm64/include/asm/tlbflush.h | 11 -----------
> >>  arch/riscv/include/asm/tlbflush.h |  1 -
> >>  arch/riscv/mm/tlbflush.c          |  5 -----
> >>  arch/x86/include/asm/tlbflush.h   |  5 -----
> >>  mm/rmap.c                         |  2 +-
> >>  5 files changed, 1 insertion(+), 23 deletions(-)
> >>
> >> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> >> index aa9efee17277..18a5dc0c9a54 100644
> >> --- a/arch/arm64/include/asm/tlbflush.h
> >> +++ b/arch/arm64/include/asm/tlbflush.h
> >> @@ -322,17 +322,6 @@ static inline bool arch_tlbbatch_should_defer(struct mm_struct *mm)
> >>  	return true;
> >>  }
> >>
> >> -/*
> >> - * If mprotect/munmap/etc occurs during TLB batched flushing, we need to ensure
> >> - * all the previously issued TLBIs targeting mm have completed. But since we
> >> - * can be executing on a remote CPU, a DSB cannot guarantee this like it can
> >> - * for arch_tlbbatch_flush(). Our only option is to flush the entire mm.
> >> - */
> >
> > Hm are we losing information here? I guess it's hard to know whewre to put
> > this though.
>
> The generic version of this comment exists above flush_tlb_batched_pending() in
> rmap.c.
>
> For the arm64-specific description of why we need to flush the whole mm, that's
> captured in Commit 4b634918384c ("arm64/mm: Close theoretical race where stale
> TLB entry remains valid"), although I accept that may not be the first place
> someone looks.
>
> I don't think we should be defining arch_ helpers just to provide a hook for
> some arch-specific comments though.

Yeah completely agree.

>
> >
> >> -static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
> >> -{
> >> -	flush_tlb_mm(mm);
> >> -}
> >> -
> >>  /*
> >>   * To support TLB batched flush for multiple pages unmapping, we only send
> >>   * the TLBI for each page in arch_tlbbatch_add_pending() and wait for the
> >> diff --git a/arch/riscv/include/asm/tlbflush.h b/arch/riscv/include/asm/tlbflush.h
> >> index 1a20dd746a49..eed0abc40514 100644
> >> --- a/arch/riscv/include/asm/tlbflush.h
> >> +++ b/arch/riscv/include/asm/tlbflush.h
> >> @@ -63,7 +63,6 @@ void flush_pud_tlb_range(struct vm_area_struct *vma, unsigned long start,
> >>  bool arch_tlbbatch_should_defer(struct mm_struct *mm);
> >>  void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
> >>  		struct mm_struct *mm, unsigned long start, unsigned long end);
> >> -void arch_flush_tlb_batched_pending(struct mm_struct *mm);
> >>  void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
> >>
> >>  extern unsigned long tlb_flush_all_threshold;
> >> diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
> >> index e737ba7949b1..8404530ec00f 100644
> >> --- a/arch/riscv/mm/tlbflush.c
> >> +++ b/arch/riscv/mm/tlbflush.c
> >> @@ -234,11 +234,6 @@ void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
> >>  	mmu_notifier_arch_invalidate_secondary_tlbs(mm, start, end);
> >>  }
> >>
> >> -void arch_flush_tlb_batched_pending(struct mm_struct *mm)
> >> -{
> >> -	flush_tlb_mm(mm);
> >> -}
> >> -
> >>  void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
> >>  {
> >>  	__flush_tlb_range(NULL, &batch->cpumask,
> >> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> >> index e9b81876ebe4..00daedfefc1b 100644
> >> --- a/arch/x86/include/asm/tlbflush.h
> >> +++ b/arch/x86/include/asm/tlbflush.h
> >> @@ -356,11 +356,6 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
> >>  	mmu_notifier_arch_invalidate_secondary_tlbs(mm, 0, -1UL);
> >>  }
> >>
> >> -static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
> >> -{
> >> -	flush_tlb_mm(mm);
> >> -}
> >> -
> >>  extern void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch);
> >>
> >>  static inline bool pte_flags_need_flush(unsigned long oldflags,
> >> diff --git a/mm/rmap.c b/mm/rmap.c
> >> index fb63d9256f09..fd160ddaa980 100644
> >> --- a/mm/rmap.c
> >> +++ b/mm/rmap.c
> >> @@ -746,7 +746,7 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
> >>  	int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;
> >>
> >>  	if (pending != flushed) {
> >> -		arch_flush_tlb_batched_pending(mm);
> >> +		flush_tlb_mm(mm);
> >
> > I see that CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH is only implemented in
> > riscv (if !nommu), x86, arm64, and therefore we are only going to invoke
> > this for those arches which previously did the same anyway, so this is
> > safe.
>
> It's also the way it used to be done before arm64 joined the party and thought
> it could optimize by just issuing a DSB. I have since discovered that the DSB
> approach is buggy, so arm64 has now fallen back to flush_tlb_mm() and the
> reason for the original introduction of arch_flush_tlb_batched_pending() has
> gone.

Ack thanks for the background!

>
> Thanks,
> Ryan
>
> >
> > Kinda wish we could avoid this ugly #ifdef #else #endif pattern here in
> > mm/rmap.c but probably necessary in this case.
> >
> >>  		/*
> >>  		 * If the new TLB flushing is pending during flushing, leave
> >>  		 * mm->tlb_flush_batched as is, to avoid losing flushing.
> >> --
> >> 2.43.0
> >>
>

