linux-arch.vger.kernel.org archive mirror
* [PATCH v2 0/3] skip redundant TLB sync IPIs
@ 2025-12-29 14:52 Lance Yang
  2025-12-29 14:52 ` [PATCH v2 1/3] mm/tlb: allow architectures to " Lance Yang
                   ` (3 more replies)
  0 siblings, 4 replies; 12+ messages in thread
From: Lance Yang @ 2025-12-29 14:52 UTC (permalink / raw)
  To: akpm
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, david, lorenzo.stoakes, ziy, baolin.wang,
	Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, ioworker0,
	shy828301, riel, jannh, linux-arch, linux-mm, linux-kernel

From: Lance Yang <lance.yang@linux.dev>

Hi all,

When unsharing hugetlb PMD page tables or collapsing pages in khugepaged,
we send two IPIs: one for TLB invalidation, and another to synchronize
with concurrent GUP-fast walkers. However, if the TLB flush already
reaches all CPUs, the second IPI is redundant. GUP-fast runs with IRQs
disabled, so when the TLB flush IPI completes, any concurrent GUP-fast
must have finished.

This series introduces a way for architectures to indicate their TLB flush
already provides full synchronization, allowing the redundant IPI to be
skipped. For now, the optimization is implemented for x86 first and applied
to all page table operations that free or unshare tables.

David Hildenbrand did the initial implementation. I built on his work and
relied on off-list discussions to push it further - thanks a lot David!

v1 -> v2:
- Fix cover letter encoding to resolve send-email issues. Apologies for
  any email flood caused by the failed send attempts :(

RFC -> v1:
- Use a callback function in pv_mmu_ops instead of comparing function
  pointers (per David)
- Embed the check directly in tlb_remove_table_sync_one() instead of
  requiring every caller to check explicitly (per David)
- Move tlb_table_flush_implies_ipi_broadcast() outside of
  CONFIG_MMU_GATHER_RCU_TABLE_FREE to fix a build error on architectures
  that don't enable this config.
  https://lore.kernel.org/oe-kbuild-all/202512142156.cShiu6PU-lkp@intel.com/
- https://lore.kernel.org/linux-mm/20251213080038.10917-1-lance.yang@linux.dev/


Lance Yang (3):
  mm/tlb: allow architectures to skip redundant TLB sync IPIs
  x86/mm: implement redundant IPI elimination for page table operations
  mm: embed TLB flush IPI check in tlb_remove_table_sync_one()

 arch/x86/include/asm/paravirt_types.h |  6 ++++++
 arch/x86/include/asm/tlb.h            | 19 ++++++++++++++++++-
 arch/x86/kernel/paravirt.c            | 10 ++++++++++
 include/asm-generic/tlb.h             | 14 ++++++++++++++
 mm/mmu_gather.c                       |  4 ++++
 5 files changed, 52 insertions(+), 1 deletion(-)

-- 
2.49.0



* [PATCH v2 1/3] mm/tlb: allow architectures to skip redundant TLB sync IPIs
  2025-12-29 14:52 [PATCH v2 0/3] skip redundant TLB sync IPIs Lance Yang
@ 2025-12-29 14:52 ` Lance Yang
  2025-12-29 15:00   ` Lance Yang
  2025-12-30 20:31   ` [PATCH v2 1/3] mm/tlb: allow architectures to " David Hildenbrand (Red Hat)
  2025-12-29 14:52 ` [PATCH v2 2/3] x86/mm: implement redundant IPI elimination for page table operations Lance Yang
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 12+ messages in thread
From: Lance Yang @ 2025-12-29 14:52 UTC (permalink / raw)
  To: akpm
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, david, lorenzo.stoakes, ziy, baolin.wang,
	Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, ioworker0,
	shy828301, riel, jannh, linux-arch, linux-mm, linux-kernel,
	Lance Yang

From: Lance Yang <lance.yang@linux.dev>

When unsharing hugetlb PMD page tables, we currently send two IPIs: one
for TLB invalidation, and another to synchronize with concurrent GUP-fast
walkers.

However, if the TLB flush already reaches all CPUs, the second IPI is
redundant. GUP-fast runs with IRQs disabled, so when the TLB flush IPI
completes, any concurrent GUP-fast must have finished.

Add tlb_table_flush_implies_ipi_broadcast() to let architectures indicate
their TLB flush provides full synchronization, enabling the redundant IPI
to be skipped.
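
An architecture opts in by defining the macro and providing its own
implementation before including <asm-generic/tlb.h>. As a rough sketch
(the next patch wires this up for real on x86):

#define tlb_table_flush_implies_ipi_broadcast tlb_table_flush_implies_ipi_broadcast
static inline bool tlb_table_flush_implies_ipi_broadcast(void)
{
	/* Only return true if the flush always IPIs all relevant CPUs. */
	return true;
}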

Suggested-by: David Hildenbrand (Red Hat) <david@kernel.org>
Signed-off-by: Lance Yang <lance.yang@linux.dev>
---
 include/asm-generic/tlb.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 4d679d2a206b..e8d99b5e831f 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -261,6 +261,20 @@ static inline void tlb_remove_table_sync_one(void) { }
 
 #endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
 
+/*
+ * Architectures can override if their TLB flush already broadcasts IPIs to all
+ * CPUs when freeing or unsharing page tables.
+ *
+ * Return true only when the flush guarantees:
+ * - IPIs reach all CPUs with potentially stale paging-structure cache entries
+ * - Synchronization with IRQ-disabled code like GUP-fast
+ */
+#ifndef tlb_table_flush_implies_ipi_broadcast
+static inline bool tlb_table_flush_implies_ipi_broadcast(void)
+{
+	return false;
+}
+#endif
 
 #ifndef CONFIG_MMU_GATHER_NO_GATHER
 /*
-- 
2.49.0



* [PATCH v2 2/3] x86/mm: implement redundant IPI elimination for page table operations
  2025-12-29 14:52 [PATCH v2 0/3] skip redundant TLB sync IPIs Lance Yang
  2025-12-29 14:52 ` [PATCH v2 1/3] mm/tlb: allow architectures to " Lance Yang
@ 2025-12-29 14:52 ` Lance Yang
  2025-12-29 14:52 ` [PATCH v2 3/3] mm: embed TLB flush IPI check in tlb_remove_table_sync_one() Lance Yang
  2025-12-31  4:26 ` [PATCH v2 0/3] skip redundant TLB sync IPIs Dave Hansen
  3 siblings, 0 replies; 12+ messages in thread
From: Lance Yang @ 2025-12-29 14:52 UTC (permalink / raw)
  To: akpm
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, david, lorenzo.stoakes, ziy, baolin.wang,
	Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, ioworker0,
	shy828301, riel, jannh, linux-arch, linux-mm, linux-kernel,
	Lance Yang

From: Lance Yang <lance.yang@linux.dev>

Add a callback function flush_tlb_multi_implies_ipi_broadcast to pv_mmu_ops
to explicitly track whether flush_tlb_multi IPIs provide sufficient
synchronization for GUP-fast when freeing or unsharing page tables.

Pass both freed_tables and unshared_tables to flush_tlb_mm_range() to
ensure lazy-TLB CPUs receive IPIs and flush their paging-structure caches:
	flush_tlb_mm_range(..., freed_tables || unshared_tables);

Suggested-by: David Hildenbrand (Red Hat) <david@kernel.org>
Signed-off-by: Lance Yang <lance.yang@linux.dev>
---
 arch/x86/include/asm/paravirt_types.h |  6 ++++++
 arch/x86/include/asm/tlb.h            | 19 ++++++++++++++++++-
 arch/x86/kernel/paravirt.c            | 10 ++++++++++
 3 files changed, 34 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 3502939415ad..a5bd0983da1f 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -133,6 +133,12 @@ struct pv_mmu_ops {
 	void (*flush_tlb_multi)(const struct cpumask *cpus,
 				const struct flush_tlb_info *info);
 
+	/*
+	 * Indicates whether flush_tlb_multi IPIs provide sufficient
+	 * synchronization for GUP-fast when freeing or unsharing page tables.
+	 */
+	bool (*flush_tlb_multi_implies_ipi_broadcast)(void);
+
 	/* Hook for intercepting the destruction of an mm_struct. */
 	void (*exit_mmap)(struct mm_struct *mm);
 	void (*notify_page_enc_status_changed)(unsigned long pfn, int npages, bool enc);
diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h
index 866ea78ba156..3a7cdfdcea8e 100644
--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -5,10 +5,26 @@
 #define tlb_flush tlb_flush
 static inline void tlb_flush(struct mmu_gather *tlb);
 
+#define tlb_table_flush_implies_ipi_broadcast tlb_table_flush_implies_ipi_broadcast
+static inline bool tlb_table_flush_implies_ipi_broadcast(void);
+
 #include <asm-generic/tlb.h>
 #include <linux/kernel.h>
 #include <vdso/bits.h>
 #include <vdso/page.h>
+#include <asm/paravirt.h>
+
+static inline bool tlb_table_flush_implies_ipi_broadcast(void)
+{
+#ifdef CONFIG_PARAVIRT
+	if (pv_ops.mmu.flush_tlb_multi_implies_ipi_broadcast)
+		return pv_ops.mmu.flush_tlb_multi_implies_ipi_broadcast();
+
+	return false;
+#else
+	return !cpu_feature_enabled(X86_FEATURE_INVLPGB);
+#endif
+}
 
 static inline void tlb_flush(struct mmu_gather *tlb)
 {
@@ -20,7 +36,8 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 		end = tlb->end;
 	}
 
-	flush_tlb_mm_range(tlb->mm, start, end, stride_shift, tlb->freed_tables);
+	flush_tlb_mm_range(tlb->mm, start, end, stride_shift,
+			   tlb->freed_tables || tlb->unshared_tables);
 }
 
 static inline void invlpg(unsigned long addr)
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index ab3e172dcc69..4eaa44800b39 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -60,6 +60,15 @@ void __init native_pv_lock_init(void)
 		static_branch_enable(&virt_spin_lock_key);
 }
 
+static bool native_flush_tlb_multi_implies_ipi_broadcast(void)
+{
+	/* Paravirt may use hypercalls that don't send real IPIs. */
+	if (pv_ops.mmu.flush_tlb_multi != native_flush_tlb_multi)
+		return false;
+
+	return !cpu_feature_enabled(X86_FEATURE_INVLPGB);
+}
+
 struct static_key paravirt_steal_enabled;
 struct static_key paravirt_steal_rq_enabled;
 
@@ -173,6 +182,7 @@ struct paravirt_patch_template pv_ops = {
 	.mmu.flush_tlb_kernel	= native_flush_tlb_global,
 	.mmu.flush_tlb_one_user	= native_flush_tlb_one_user,
 	.mmu.flush_tlb_multi	= native_flush_tlb_multi,
+	.mmu.flush_tlb_multi_implies_ipi_broadcast = native_flush_tlb_multi_implies_ipi_broadcast,
 
 	.mmu.exit_mmap		= paravirt_nop,
 	.mmu.notify_page_enc_status_changed	= paravirt_nop,
-- 
2.49.0



* [PATCH v2 3/3] mm: embed TLB flush IPI check in tlb_remove_table_sync_one()
  2025-12-29 14:52 [PATCH v2 0/3] skip redundant TLB sync IPIs Lance Yang
  2025-12-29 14:52 ` [PATCH v2 1/3] mm/tlb: allow architectures to " Lance Yang
  2025-12-29 14:52 ` [PATCH v2 2/3] x86/mm: implement redundant IPI elimination for page table operations Lance Yang
@ 2025-12-29 14:52 ` Lance Yang
  2025-12-30 20:33   ` David Hildenbrand (Red Hat)
  2025-12-31  4:26 ` [PATCH v2 0/3] skip redundant TLB sync IPIs Dave Hansen
  3 siblings, 1 reply; 12+ messages in thread
From: Lance Yang @ 2025-12-29 14:52 UTC (permalink / raw)
  To: akpm
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, david, lorenzo.stoakes, ziy, baolin.wang,
	Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, ioworker0,
	shy828301, riel, jannh, linux-arch, linux-mm, linux-kernel,
	Lance Yang

From: Lance Yang <lance.yang@linux.dev>

Embed the tlb_table_flush_implies_ipi_broadcast() check directly inside
tlb_remove_table_sync_one() instead of requiring every caller to check
it explicitly. This relies on callers to do the right thing: flush with
freed_tables=true or unshared_tables=true beforehand (a sketch of the
pattern follows the caller list below).

All existing callers satisfy this requirement:

1. mm/khugepaged.c:1188 (collapse_huge_page):

   pmdp_collapse_flush(vma, address, pmd)
   -> flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE)
      -> flush_tlb_mm_range(mm, ..., freed_tables = true)
         -> flush_tlb_multi(mm_cpumask(mm), info)

   So freed_tables=true before calling tlb_remove_table_sync_one().

2. include/asm-generic/tlb.h:861 (tlb_flush_unshared_tables):

   tlb_flush_mmu_tlbonly(tlb)
   -> tlb_flush(tlb)
      -> flush_tlb_mm_range(mm, ..., unshared_tables = true)
         -> flush_tlb_multi(mm_cpumask(mm), info)

   unshared_tables=true (equivalent to freed_tables for sending IPIs).

3. mm/mmu_gather.c:341 (__tlb_remove_table_one):

   When we can't allocate a batch page in tlb_remove_table(), we do:

   tlb_table_invalidate(tlb)
   -> tlb_flush_mmu_tlbonly(tlb)
      -> flush_tlb_mm_range(mm, ..., freed_tables = true)
         -> flush_tlb_multi(mm_cpumask(mm), info)

   Then:
   tlb_remove_table_one(table)
   -> __tlb_remove_table_one(table) // if !CONFIG_PT_RECLAIM
      -> tlb_remove_table_sync_one()

   freed_tables=true, and this should work too.

   Why is tlb->freed_tables guaranteed? Because callers like
   pte_free_tlb() (via free_pte_range) set freed_tables=true before
   calling __pte_free_tlb(), which then calls tlb_remove_table().
   We cannot free page tables without freed_tables=true.

   Note that tlb_remove_table_sync_one() was a NOP on bare metal x86
   (CONFIG_MMU_GATHER_RCU_TABLE_FREE=n) before commit a37259732a7d
   ("x86/mm: Make MMU_GATHER_RCU_TABLE_FREE unconditional").

4-5. mm/khugepaged.c:1683,1819 (pmdp_get_lockless_sync macro):

   Same as #1. These also use pmdp_collapse_flush() beforehand.
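
Putting it together, a correct caller follows this pattern (simplified
sketch based on case #1 above, not new code):

	pmdp_collapse_flush(vma, address, pmd);	/* flushes with freed_tables=true */
	tlb_remove_table_sync_one();		/* may now skip the extra IPI */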

Suggested-by: David Hildenbrand (Red Hat) <david@kernel.org>
Signed-off-by: Lance Yang <lance.yang@linux.dev>
---
 mm/mmu_gather.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 7468ec388455..7b588643cbae 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -276,6 +276,10 @@ static void tlb_remove_table_smp_sync(void *arg)
 
 void tlb_remove_table_sync_one(void)
 {
+	/* Skip the IPI if the TLB flush already synchronized with other CPUs. */
+	if (tlb_table_flush_implies_ipi_broadcast())
+		return;
+
 	/*
 	 * This isn't an RCU grace period and hence the page-tables cannot be
 	 * assumed to be actually RCU-freed.
-- 
2.49.0



* Re: [PATCH v2 1/3] mm/tlb: allow architectures to skip redundant TLB sync IPIs
  2025-12-29 14:52 ` [PATCH v2 1/3] mm/tlb: allow architectures to " Lance Yang
@ 2025-12-29 15:00   ` Lance Yang
  2025-12-29 15:01     ` [PATCH v2 0/3] " Lance Yang
  2025-12-30 20:31   ` [PATCH v2 1/3] mm/tlb: allow architectures to " David Hildenbrand (Red Hat)
  1 sibling, 1 reply; 12+ messages in thread
From: Lance Yang @ 2025-12-29 15:00 UTC (permalink / raw)
  To: akpm
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, david, lorenzo.stoakes, ziy, baolin.wang,
	Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, ioworker0,
	shy828301, riel, jannh, linux-arch, linux-mm, linux-kernel

Confusingly, I'm still unable to send the cover letter, but there's no
error message. I'll post it here instead ...



* Re: [PATCH v2 1/3] mm/tlb: allow architectures to skip redundant TLB sync IPIs
  2025-12-29 14:52 ` [PATCH v2 1/3] mm/tlb: allow architectures to " Lance Yang
  2025-12-29 15:00   ` Lance Yang
@ 2025-12-30 20:31   ` David Hildenbrand (Red Hat)
  2025-12-31  2:29     ` Lance Yang
  1 sibling, 1 reply; 12+ messages in thread
From: David Hildenbrand (Red Hat) @ 2025-12-30 20:31 UTC (permalink / raw)
  To: Lance Yang, akpm
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, ioworker0, shy828301,
	riel, jannh, linux-arch, linux-mm, linux-kernel

On 12/29/25 15:52, Lance Yang wrote:
> From: Lance Yang <lance.yang@linux.dev>
> 
> When unsharing hugetlb PMD page tables, we currently send two IPIs: one
> for TLB invalidation, and another to synchronize with concurrent GUP-fast
> walkers.
> 
> However, if the TLB flush already reaches all CPUs, the second IPI is
> redundant. GUP-fast runs with IRQs disabled, so when the TLB flush IPI
> completes, any concurrent GUP-fast must have finished.
> 
> Add tlb_table_flush_implies_ipi_broadcast() to let architectures indicate
> their TLB flush provides full synchronization, enabling the redundant IPI
> to be skipped.
> 
> Suggested-by: David Hildenbrand (Red Hat) <david@kernel.org>
> Signed-off-by: Lance Yang <lance.yang@linux.dev>
> ---
>   include/asm-generic/tlb.h | 14 ++++++++++++++
>   1 file changed, 14 insertions(+)
> 
> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
> index 4d679d2a206b..e8d99b5e831f 100644
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -261,6 +261,20 @@ static inline void tlb_remove_table_sync_one(void) { }
>   
>   #endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
>   
> +/*
> + * Architectures can override if their TLB flush already broadcasts IPIs to all
> + * CPUs when freeing or unsharing page tables.
> + *
> + * Return true only when the flush guarantees:
> + * - IPIs reach all CPUs with potentially stale paging-structure cache entries
> + * - Synchronization with IRQ-disabled code like GUP-fast
> + */
> +#ifndef tlb_table_flush_implies_ipi_broadcast
> +static inline bool tlb_table_flush_implies_ipi_broadcast(void)
> +{
> +	return false;
> +}
> +#endif
>   
>   #ifndef CONFIG_MMU_GATHER_NO_GATHER
>   /*


This should likely get squashed into patch #3. Patch #1 itself does not
add a lot of value on its own.

So best to squash both and have them as #1, to then implement it in #2 
for x86.

-- 
Cheers

David


* Re: [PATCH v2 3/3] mm: embed TLB flush IPI check in tlb_remove_table_sync_one()
  2025-12-29 14:52 ` [PATCH v2 3/3] mm: embed TLB flush IPI check in tlb_remove_table_sync_one() Lance Yang
@ 2025-12-30 20:33   ` David Hildenbrand (Red Hat)
  2025-12-31  3:03     ` Lance Yang
  0 siblings, 1 reply; 12+ messages in thread
From: David Hildenbrand (Red Hat) @ 2025-12-30 20:33 UTC (permalink / raw)
  To: Lance Yang, akpm
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, ioworker0, shy828301,
	riel, jannh, linux-arch, linux-mm, linux-kernel

On 12/29/25 15:52, Lance Yang wrote:
> From: Lance Yang <lance.yang@linux.dev>
> 
> Embed the tlb_table_flush_implies_ipi_broadcast() check directly inside
> tlb_remove_table_sync_one() instead of requiring every caller to check
> it explicitly. This relies on callers to do the right thing: flush with
> freed_tables=true or unshared_tables=true beforehand.
> 
> All existing callers satisfy this requirement:
> 
> 1. mm/khugepaged.c:1188 (collapse_huge_page):
> 
>     pmdp_collapse_flush(vma, address, pmd)
>     -> flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE)
>        -> flush_tlb_mm_range(mm, ..., freed_tables = true)
>           -> flush_tlb_multi(mm_cpumask(mm), info)
> 
>     So freed_tables=true before calling tlb_remove_table_sync_one().
> 
> 2. include/asm-generic/tlb.h:861 (tlb_flush_unshared_tables):
> 
>     tlb_flush_mmu_tlbonly(tlb)
>     -> tlb_flush(tlb)
>        -> flush_tlb_mm_range(mm, ..., unshared_tables = true)
>           -> flush_tlb_multi(mm_cpumask(mm), info)
> 
>     unshared_tables=true (equivalent to freed_tables for sending IPIs).
> 
> 3. mm/mmu_gather.c:341 (__tlb_remove_table_one):
> 
>     When we can't allocate a batch page in tlb_remove_table(), we do:
> 
>     tlb_table_invalidate(tlb)
>     -> tlb_flush_mmu_tlbonly(tlb)
>        -> flush_tlb_mm_range(mm, ..., freed_tables = true)
>           -> flush_tlb_multi(mm_cpumask(mm), info)
> 
>     Then:
>     tlb_remove_table_one(table)
>     -> __tlb_remove_table_one(table) // if !CONFIG_PT_RECLAIM
>        -> tlb_remove_table_sync_one()
> 
>     freed_tables=true, and this should work too.
> 
>     Why is tlb->freed_tables guaranteed? Because callers like
>     pte_free_tlb() (via free_pte_range) set freed_tables=true before
>     calling __pte_free_tlb(), which then calls tlb_remove_table().
>     We cannot free page tables without freed_tables=true.
> 
>     Note that tlb_remove_table_sync_one() was a NOP on bare metal x86
>     (CONFIG_MMU_GATHER_RCU_TABLE_FREE=n) before commit a37259732a7d
>     ("x86/mm: Make MMU_GATHER_RCU_TABLE_FREE unconditional").
> 
> 4-5. mm/khugepaged.c:1683,1819 (pmdp_get_lockless_sync macro):
> 
>     Same as #1. These also use pmdp_collapse_flush() beforehand.
> 
> Suggested-by: David Hildenbrand (Red Hat) <david@kernel.org>
> Signed-off-by: Lance Yang <lance.yang@linux.dev>

LGTM. I think we should document that somewhere. Can we add some 
kerneldoc for tlb_remove_table_sync_one() where we document that it 
doesn't do any sync if a previous TLB flush when removing/unsharing page
tables would have already performed an IPI?

> ---
>   mm/mmu_gather.c | 4 ++++
>   1 file changed, 4 insertions(+)
> 
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index 7468ec388455..7b588643cbae 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -276,6 +276,10 @@ static void tlb_remove_table_smp_sync(void *arg)
>   
>   void tlb_remove_table_sync_one(void)
>   {
> +	/* Skip the IPI if the TLB flush already synchronized with other CPUs. */
> +	if (tlb_table_flush_implies_ipi_broadcast())
> +		return;
> +
>   	/*
>   	 * This isn't an RCU grace period and hence the page-tables cannot be
>   	 * assumed to be actually RCU-freed.


-- 
Cheers

David


* Re: [PATCH v2 1/3] mm/tlb: allow architectures to skip redundant TLB sync IPIs
  2025-12-30 20:31   ` [PATCH v2 1/3] mm/tlb: allow architectures to " David Hildenbrand (Red Hat)
@ 2025-12-31  2:29     ` Lance Yang
  0 siblings, 0 replies; 12+ messages in thread
From: Lance Yang @ 2025-12-31  2:29 UTC (permalink / raw)
  To: David Hildenbrand (Red Hat), akpm
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, ioworker0, shy828301,
	riel, jannh, linux-arch, linux-mm, linux-kernel



On 2025/12/31 04:31, David Hildenbrand (Red Hat) wrote:
> On 12/29/25 15:52, Lance Yang wrote:
>> From: Lance Yang <lance.yang@linux.dev>
>>
>> When unsharing hugetlb PMD page tables, we currently send two IPIs: one
>> for TLB invalidation, and another to synchronize with concurrent GUP-fast
>> walkers.
>>
>> However, if the TLB flush already reaches all CPUs, the second IPI is
>> redundant. GUP-fast runs with IRQs disabled, so when the TLB flush IPI
>> completes, any concurrent GUP-fast must have finished.
>>
>> Add tlb_table_flush_implies_ipi_broadcast() to let architectures indicate
>> their TLB flush provides full synchronization, enabling the redundant IPI
>> to be skipped.
>>
>> Suggested-by: David Hildenbrand (Red Hat) <david@kernel.org>
>> Signed-off-by: Lance Yang <lance.yang@linux.dev>
>> ---
>>   include/asm-generic/tlb.h | 14 ++++++++++++++
>>   1 file changed, 14 insertions(+)
>>
>> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
>> index 4d679d2a206b..e8d99b5e831f 100644
>> --- a/include/asm-generic/tlb.h
>> +++ b/include/asm-generic/tlb.h
>> @@ -261,6 +261,20 @@ static inline void tlb_remove_table_sync_one(void) { }
>>   #endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
>> +/*
>> + * Architectures can override if their TLB flush already broadcasts IPIs to all
>> + * CPUs when freeing or unsharing page tables.
>> + *
>> + * Return true only when the flush guarantees:
>> + * - IPIs reach all CPUs with potentially stale paging-structure cache entries
>> + * - Synchronization with IRQ-disabled code like GUP-fast
>> + */
>> +#ifndef tlb_table_flush_implies_ipi_broadcast
>> +static inline bool tlb_table_flush_implies_ipi_broadcast(void)
>> +{
>> +    return false;
>> +}
>> +#endif
>>   #ifndef CONFIG_MMU_GATHER_NO_GATHER
>>   /*
> 
> 
> This should likely get squashed into patch #3. Patch #1 itself does not
> add a lot of value on its own.
> 
> So best to squash both and have them as #1, to then implement it in #2 
> for x86.

Sounds good, will do! Squashing #1 and #3 together, keeping the x86
implementation as #2 ;)

Cheers,
Lance


* Re: [PATCH v2 3/3] mm: embed TLB flush IPI check in tlb_remove_table_sync_one()
  2025-12-30 20:33   ` David Hildenbrand (Red Hat)
@ 2025-12-31  3:03     ` Lance Yang
  0 siblings, 0 replies; 12+ messages in thread
From: Lance Yang @ 2025-12-31  3:03 UTC (permalink / raw)
  To: David Hildenbrand (Red Hat), akpm
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, ioworker0, shy828301,
	riel, jannh, linux-arch, linux-mm, linux-kernel



On 2025/12/31 04:33, David Hildenbrand (Red Hat) wrote:
> On 12/29/25 15:52, Lance Yang wrote:
>> From: Lance Yang <lance.yang@linux.dev>
>>
>> Embed the tlb_table_flush_implies_ipi_broadcast() check directly inside
>> tlb_remove_table_sync_one() instead of requiring every caller to check
>> it explicitly. This relies on callers to do the right thing: flush with
>> freed_tables=true or unshared_tables=true beforehand.
>>
>> All existing callers satisfy this requirement:
>>
>> 1. mm/khugepaged.c:1188 (collapse_huge_page):
>>
>>     pmdp_collapse_flush(vma, address, pmd)
>>     -> flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE)
>>        -> flush_tlb_mm_range(mm, ..., freed_tables = true)
>>           -> flush_tlb_multi(mm_cpumask(mm), info)
>>
>>     So freed_tables=true before calling tlb_remove_table_sync_one().
>>
>> 2. include/asm-generic/tlb.h:861 (tlb_flush_unshared_tables):
>>
>>     tlb_flush_mmu_tlbonly(tlb)
>>     -> tlb_flush(tlb)
>>        -> flush_tlb_mm_range(mm, ..., unshared_tables = true)
>>           -> flush_tlb_multi(mm_cpumask(mm), info)
>>
>>     unshared_tables=true (equivalent to freed_tables for sending IPIs).
>>
>> 3. mm/mmu_gather.c:341 (__tlb_remove_table_one):
>>
>>     When we can't allocate a batch page in tlb_remove_table(), we do:
>>
>>     tlb_table_invalidate(tlb)
>>     -> tlb_flush_mmu_tlbonly(tlb)
>>        -> flush_tlb_mm_range(mm, ..., freed_tables = true)
>>           -> flush_tlb_multi(mm_cpumask(mm), info)
>>
>>     Then:
>>     tlb_remove_table_one(table)
>>     -> __tlb_remove_table_one(table) // if !CONFIG_PT_RECLAIM
>>        -> tlb_remove_table_sync_one()
>>
>>     freed_tables=true, and this should work too.
>>
>>     Why is tlb->freed_tables guaranteed? Because callers like
>>     pte_free_tlb() (via free_pte_range) set freed_tables=true before
>>     calling __pte_free_tlb(), which then calls tlb_remove_table().
>>     We cannot free page tables without freed_tables=true.
>>
>>     Note that tlb_remove_table_sync_one() was a NOP on bare metal x86
>>     (CONFIG_MMU_GATHER_RCU_TABLE_FREE=n) before commit a37259732a7d
>>     ("x86/mm: Make MMU_GATHER_RCU_TABLE_FREE unconditional").
>>
>> 4-5. mm/khugepaged.c:1683,1819 (pmdp_get_lockless_sync macro):
>>
>>     Same as #1. These also use pmdp_collapse_flush() beforehand.
>>
>> Suggested-by: David Hildenbrand (Red Hat) <david@kernel.org>
>> Signed-off-by: Lance Yang <lance.yang@linux.dev>
> 
> LGTM. I think we should document that somewhere. Can we add some 

Thanks!

> kerneldoc for tlb_remove_table_sync_one() where we document that it 
> doesn't do any sync if a previous TLB flush when removing/unsharing page
> tables would have already performed an IPI?

Fair point. Would something like this work?

---8<---
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 7b588643cbae..9139f0a6b8bd 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -274,6 +274,20 @@ static void tlb_remove_table_smp_sync(void *arg)
  	/* Simply deliver the interrupt */
  }

+/**
+ * tlb_remove_table_sync_one - Send IPI to synchronize page table operations
+ *
+ * Sends an IPI to all CPUs to synchronize when freeing or unsharing page
+ * tables (e.g., to ensure concurrent GUP-fast walkers have completed).
+ *
+ * If a previous TLB flush (when removing/unsharing page tables) already
+ * broadcast IPIs to all CPUs, the redundant IPI is skipped. The optimization
+ * relies on architectures implementing tlb_table_flush_implies_ipi_broadcast()
+ * to indicate when their TLB flush provides sufficient synchronization.
+ *
+ * Note that callers must ensure that a TLB flush with freed_tables=true or
+ * unshared_tables=true has been performed before calling.
+ */
  void tlb_remove_table_sync_one(void)
  {
  	/* Skip the IPI if the TLB flush already synchronized with other CPUs. */
---

Cheers,
Lance

> 
>> ---
>>   mm/mmu_gather.c | 4 ++++
>>   1 file changed, 4 insertions(+)
>>
>> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
>> index 7468ec388455..7b588643cbae 100644
>> --- a/mm/mmu_gather.c
>> +++ b/mm/mmu_gather.c
>> @@ -276,6 +276,10 @@ static void tlb_remove_table_smp_sync(void *arg)
>>   void tlb_remove_table_sync_one(void)
>>   {
>> +    /* Skip the IPI if the TLB flush already synchronized with other CPUs. */
>> +    if (tlb_table_flush_implies_ipi_broadcast())
>> +        return;
>> +
>>       /*
>>        * This isn't an RCU grace period and hence the page-tables cannot be
>>        * assumed to be actually RCU-freed.
> 
> 



* Re: [PATCH v2 0/3] skip redundant TLB sync IPIs
  2025-12-29 14:52 [PATCH v2 0/3] skip redundant TLB sync IPIs Lance Yang
                   ` (2 preceding siblings ...)
  2025-12-29 14:52 ` [PATCH v2 3/3] mm: embed TLB flush IPI check in tlb_remove_table_sync_one() Lance Yang
@ 2025-12-31  4:26 ` Dave Hansen
  2025-12-31 12:33   ` David Hildenbrand (Red Hat)
  3 siblings, 1 reply; 12+ messages in thread
From: Dave Hansen @ 2025-12-31  4:26 UTC (permalink / raw)
  To: Lance Yang, akpm
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, david, lorenzo.stoakes, ziy, baolin.wang,
	Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, ioworker0,
	shy828301, riel, jannh, linux-arch, linux-mm, linux-kernel

On 12/29/25 06:52, Lance Yang wrote:
...
> This series introduces a way for architectures to indicate their TLB flush
> already provides full synchronization, allowing the redundant IPI to be
> skipped. For now, the optimization is implemented for x86 first and applied
> to all page table operations that free or unshare tables.

I really don't like all the complexity here. Even on x86, there are
three or more ways of deriving this. Having the pv_ops check the value
of another pv op is also a bit unsettling.

That said, complexity can be worth it with sufficient demonstrated
gains. But:

> When unsharing hugetlb PMD page tables or collapsing pages in khugepaged,
> we send two IPIs: one for TLB invalidation, and another to synchronize
> with concurrent GUP-fast walkers.

Those aren't exactly hot paths. khugepaged is fundamentally rate
limited. I don't think unsharing hugetlb PMD page tables is all that
common either.

What kind of end user benefit is there to justify the complexity?


* Re: [PATCH v2 0/3] skip redundant TLB sync IPIs
  2025-12-31  4:26 ` [PATCH v2 0/3] skip redundant TLB sync IPIs Dave Hansen
@ 2025-12-31 12:33   ` David Hildenbrand (Red Hat)
  0 siblings, 0 replies; 12+ messages in thread
From: David Hildenbrand (Red Hat) @ 2025-12-31 12:33 UTC (permalink / raw)
  To: Dave Hansen, Lance Yang, akpm
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, ioworker0, shy828301,
	riel, jannh, linux-arch, linux-mm, linux-kernel

On 12/31/25 05:26, Dave Hansen wrote:
> On 12/29/25 06:52, Lance Yang wrote:
> ...
>> This series introduces a way for architectures to indicate their TLB flush
>> already provides full synchronization, allowing the redundant IPI to be
>> skipped. For now, the optimization is implemented for x86 first and applied
>> to all page table operations that free or unshare tables.
> 
> I really don't like all the complexity here. Even on x86, there are
> three or more ways of deriving this. Having the pv_ops check the value
> of another pv op is also a bit unsettling.

Right. What I actually meant is that we simply have a property "bool
flush_tlb_multi_implies_ipi_broadcast" that we only set to true from the
initialization code.

Without comparing the pv_ops.

That should reduce the complexity quite a bit IMHO.
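
Something like the following (rough sketch only; tlb_broadcast_init() is
a made-up name, not an existing kernel symbol):

static bool flush_tlb_multi_implies_ipi_broadcast __ro_after_init;

/* Called once from native setup code, before any paravirt patching: */
static void __init tlb_broadcast_init(void)
{
	if (!cpu_feature_enabled(X86_FEATURE_INVLPGB))
		flush_tlb_multi_implies_ipi_broadcast = true;
}

static inline bool tlb_table_flush_implies_ipi_broadcast(void)
{
	return flush_tlb_multi_implies_ipi_broadcast;
}

No function pointer in pv_ops, and no comparing callbacks at runtime.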

But maybe you have an even better, very simple way to indicate support.

> 
> That said, complexity can be worth it with sufficient demonstrated
> gains. But:
> 
>> When unsharing hugetlb PMD page tables or collapsing pages in khugepaged,
>> we send two IPIs: one for TLB invalidation, and another to synchronize
>> with concurrent GUP-fast walkers.
> 
> Those aren't exactly hot paths. khugepaged is fundamentally rate
> limited. I don't think unsharing hugetlb PMD page tables just is all
> that common either.

Given that the added IPIs during unsharing broke Oracle DBs rather badly 
[1], I think this is actually a case worth optimizing.

I'd assume that the impact can be measured on a many-core/many-socket 
system with an adjusted reproducer of [1]. The impact will not be as big 
as what [1] fixed (we reduced the tlb_remove_table_sync_one() 
invocations quite drastically).

After all, tlb_remove_table_sync_one() sends an IPI to *all* CPUs in the 
system, not just the ones in the MM CPU mask, which is rather bad on 
systems with a lot of CPUs. Of course, this way we can only optimize on 
systems that actually send IPIs during TLB flushes.
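
To spell out the difference (simplified; the first call is what
mm/mmu_gather.c effectively does today):

	/* tlb_remove_table_sync_one(): IPI every other CPU in the system */
	smp_call_function(tlb_remove_table_smp_sync, NULL, 1);

	/* the TLB flush itself: IPI only the CPUs that ran this mm */
	flush_tlb_multi(mm_cpumask(mm), info);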

For other systems, it will be more tricky to avoid these broadcast IPIs.

(I have the faint recollection that the IPI broadcast through 
tlb_remove_table_sync_one() is a problem when called from 
__tlb_remove_table_one() on RT systems ...)

[1] https://lkml.kernel.org/r/20251223214037.580860-1-david@kernel.org

-- 
Cheers

David

