linux-mm.kvack.org archive mirror
* [PATCH v2 0/3] skip redundant TLB sync IPIs
@ 2025-12-29 14:52 Lance Yang
  2025-12-29 14:52 ` [PATCH v2 1/3] mm/tlb: allow architectures to " Lance Yang
                   ` (3 more replies)
  0 siblings, 4 replies; 15+ messages in thread
From: Lance Yang @ 2025-12-29 14:52 UTC (permalink / raw)
  To: akpm
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, david, lorenzo.stoakes, ziy, baolin.wang,
	Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, ioworker0,
	shy828301, riel, jannh, linux-arch, linux-mm, linux-kernel

From: Lance Yang <lance.yang@linux.dev>

Hi all,

When unsharing hugetlb PMD page tables or collapsing pages in khugepaged,
we send two IPIs: one for TLB invalidation, and another to synchronize
with concurrent GUP-fast walkers. However, if the TLB flush already
reaches all CPUs, the second IPI is redundant. GUP-fast runs with IRQs
disabled, so when the TLB flush IPI completes, any concurrent GUP-fast
must have finished.
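
For reference, the GUP-fast side looks roughly like this (heavily
simplified from mm/gup.c; exact function names vary across kernel
versions):

	unsigned long flags;
	int nr_pinned = 0;

	/*
	 * The whole lockless walk runs with IRQs disabled, so it can
	 * never straddle the completion of a broadcast IPI: once the
	 * flush IPI has run on every CPU, any walker that started
	 * before it must have left this section.
	 */
	local_irq_save(flags);
	gup_fast_pgd_range(start, end, gup_flags, pages, &nr_pinned);
	local_irq_restore(flags);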

This series introduces a way for architectures to indicate their TLB flush
already provides full synchronization, allowing the redundant IPI to be
skipped. For now, the optimization is implemented for x86 first and applied
to all page table operations that free or unshare tables.

David Hildenbrand did the initial implementation. I built on his work and
relied on off-list discussions to push it further - thanks a lot David!

v1 -> v2:
- Fix cover letter encoding to resolve send-email issues. Apologies for
  any email flood caused by the failed send attempts :(

RFC -> v1:
- Use a callback function in pv_mmu_ops instead of comparing function
  pointers (per David)
- Embed the check directly in tlb_remove_table_sync_one() instead of
  requiring every caller to check explicitly (per David)
- Move tlb_table_flush_implies_ipi_broadcast() outside of
  CONFIG_MMU_GATHER_RCU_TABLE_FREE to fix build error on architectures
  that don't enable this config.
  https://lore.kernel.org/oe-kbuild-all/202512142156.cShiu6PU-lkp@intel.com/
- https://lore.kernel.org/linux-mm/20251213080038.10917-1-lance.yang@linux.dev/


Lance Yang (3):
  mm/tlb: allow architectures to skip redundant TLB sync IPIs
  x86/mm: implement redundant IPI elimination for page table operations
  mm: embed TLB flush IPI check in tlb_remove_table_sync_one()

 arch/x86/include/asm/paravirt_types.h |  6 ++++++
 arch/x86/include/asm/tlb.h            | 19 ++++++++++++++++++-
 arch/x86/kernel/paravirt.c            | 10 ++++++++++
 include/asm-generic/tlb.h             | 14 ++++++++++++++
 mm/mmu_gather.c                       |  4 ++++
 5 files changed, 52 insertions(+), 1 deletion(-)

-- 
2.49.0




* [PATCH v2 1/3] mm/tlb: allow architectures to skip redundant TLB sync IPIs
  2025-12-29 14:52 [PATCH v2 0/3] skip redundant TLB sync IPIs Lance Yang
@ 2025-12-29 14:52 ` Lance Yang
  2025-12-29 15:00   ` Lance Yang
  2025-12-30 20:31   ` [PATCH v2 1/3] mm/tlb: allow architectures to " David Hildenbrand (Red Hat)
  2025-12-29 14:52 ` [PATCH v2 2/3] x86/mm: implement redundant IPI elimination for page table operations Lance Yang
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 15+ messages in thread
From: Lance Yang @ 2025-12-29 14:52 UTC (permalink / raw)
  To: akpm
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, david, lorenzo.stoakes, ziy, baolin.wang,
	Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, ioworker0,
	shy828301, riel, jannh, linux-arch, linux-mm, linux-kernel,
	Lance Yang

From: Lance Yang <lance.yang@linux.dev>

When unsharing hugetlb PMD page tables, we currently send two IPIs: one
for TLB invalidation, and another to synchronize with concurrent GUP-fast
walkers.

However, if the TLB flush already reaches all CPUs, the second IPI is
redundant. GUP-fast runs with IRQs disabled, so when the TLB flush IPI
completes, any concurrent GUP-fast must have finished.

Add tlb_table_flush_implies_ipi_broadcast() to let architectures indicate
their TLB flush provides full synchronization, enabling the redundant IPI
to be skipped.
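
An architecture opts in by providing its own definition before including
<asm-generic/tlb.h>, along these lines (sketch only; the helper on the
right-hand side is made up):

	/* arch/<arch>/include/asm/tlb.h */
	#define tlb_table_flush_implies_ipi_broadcast tlb_table_flush_implies_ipi_broadcast
	static inline bool tlb_table_flush_implies_ipi_broadcast(void)
	{
		/* true only if remote TLBs are flushed via real IPIs */
		return arch_tlb_flush_uses_ipis();
	}

The next patch wires this up for x86.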

Suggested-by: David Hildenbrand (Red Hat) <david@kernel.org>
Signed-off-by: Lance Yang <lance.yang@linux.dev>
---
 include/asm-generic/tlb.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 4d679d2a206b..e8d99b5e831f 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -261,6 +261,20 @@ static inline void tlb_remove_table_sync_one(void) { }
 
 #endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
 
+/*
+ * Architectures can override if their TLB flush already broadcasts IPIs to all
+ * CPUs when freeing or unsharing page tables.
+ *
+ * Return true only when the flush guarantees:
+ * - IPIs reach all CPUs with potentially stale paging-structure cache entries
+ * - Synchronization with IRQ-disabled code like GUP-fast
+ */
+#ifndef tlb_table_flush_implies_ipi_broadcast
+static inline bool tlb_table_flush_implies_ipi_broadcast(void)
+{
+	return false;
+}
+#endif
 
 #ifndef CONFIG_MMU_GATHER_NO_GATHER
 /*
-- 
2.49.0




* [PATCH v2 2/3] x86/mm: implement redundant IPI elimination for page table operations
  2025-12-29 14:52 [PATCH v2 0/3] skip redundant TLB sync IPIs Lance Yang
  2025-12-29 14:52 ` [PATCH v2 1/3] mm/tlb: allow architectures to " Lance Yang
@ 2025-12-29 14:52 ` Lance Yang
  2025-12-29 14:52 ` [PATCH v2 3/3] mm: embed TLB flush IPI check in tlb_remove_table_sync_one() Lance Yang
  2025-12-31  4:26 ` [PATCH v2 0/3] skip redundant TLB sync IPIs Dave Hansen
  3 siblings, 0 replies; 15+ messages in thread
From: Lance Yang @ 2025-12-29 14:52 UTC (permalink / raw)
  To: akpm
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, david, lorenzo.stoakes, ziy, baolin.wang,
	Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, ioworker0,
	shy828301, riel, jannh, linux-arch, linux-mm, linux-kernel,
	Lance Yang

From: Lance Yang <lance.yang@linux.dev>

Add a callback function flush_tlb_multi_implies_ipi_broadcast to pv_mmu_ops
to explicitly track whether flush_tlb_multi IPIs provide sufficient
synchronization for GUP-fast when freeing or unsharing page tables.

Pass both freed_tables and unshared_tables to flush_tlb_mm_range() to
ensure lazy-TLB CPUs receive IPIs and flush their paging-structure caches:
	flush_tlb_mm_range(..., freed_tables || unshared_tables);

Suggested-by: David Hildenbrand (Red Hat) <david@kernel.org>
Signed-off-by: Lance Yang <lance.yang@linux.dev>
---
 arch/x86/include/asm/paravirt_types.h |  6 ++++++
 arch/x86/include/asm/tlb.h            | 19 ++++++++++++++++++-
 arch/x86/kernel/paravirt.c            | 10 ++++++++++
 3 files changed, 34 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 3502939415ad..a5bd0983da1f 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -133,6 +133,12 @@ struct pv_mmu_ops {
 	void (*flush_tlb_multi)(const struct cpumask *cpus,
 				const struct flush_tlb_info *info);
 
+	/*
+	 * Indicates whether flush_tlb_multi IPIs provide sufficient
+	 * synchronization for GUP-fast when freeing or unsharing page tables.
+	 */
+	bool (*flush_tlb_multi_implies_ipi_broadcast)(void);
+
 	/* Hook for intercepting the destruction of an mm_struct. */
 	void (*exit_mmap)(struct mm_struct *mm);
 	void (*notify_page_enc_status_changed)(unsigned long pfn, int npages, bool enc);
diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h
index 866ea78ba156..3a7cdfdcea8e 100644
--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -5,10 +5,26 @@
 #define tlb_flush tlb_flush
 static inline void tlb_flush(struct mmu_gather *tlb);
 
+#define tlb_table_flush_implies_ipi_broadcast tlb_table_flush_implies_ipi_broadcast
+static inline bool tlb_table_flush_implies_ipi_broadcast(void);
+
 #include <asm-generic/tlb.h>
 #include <linux/kernel.h>
 #include <vdso/bits.h>
 #include <vdso/page.h>
+#include <asm/paravirt.h>
+
+static inline bool tlb_table_flush_implies_ipi_broadcast(void)
+{
+#ifdef CONFIG_PARAVIRT
+	if (pv_ops.mmu.flush_tlb_multi_implies_ipi_broadcast)
+		return pv_ops.mmu.flush_tlb_multi_implies_ipi_broadcast();
+
+	return false;
+#else
+	return !cpu_feature_enabled(X86_FEATURE_INVLPGB);
+#endif
+}
 
 static inline void tlb_flush(struct mmu_gather *tlb)
 {
@@ -20,7 +36,8 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 		end = tlb->end;
 	}
 
-	flush_tlb_mm_range(tlb->mm, start, end, stride_shift, tlb->freed_tables);
+	flush_tlb_mm_range(tlb->mm, start, end, stride_shift,
+			   tlb->freed_tables || tlb->unshared_tables);
 }
 
 static inline void invlpg(unsigned long addr)
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index ab3e172dcc69..4eaa44800b39 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -60,6 +60,15 @@ void __init native_pv_lock_init(void)
 		static_branch_enable(&virt_spin_lock_key);
 }
 
+static bool native_flush_tlb_multi_implies_ipi_broadcast(void)
+{
+	/* Paravirt may use hypercalls that don't send real IPIs. */
+	if (pv_ops.mmu.flush_tlb_multi != native_flush_tlb_multi)
+		return false;
+
+	return !cpu_feature_enabled(X86_FEATURE_INVLPGB);
+}
+
 struct static_key paravirt_steal_enabled;
 struct static_key paravirt_steal_rq_enabled;
 
@@ -173,6 +182,7 @@ struct paravirt_patch_template pv_ops = {
 	.mmu.flush_tlb_kernel	= native_flush_tlb_global,
 	.mmu.flush_tlb_one_user	= native_flush_tlb_one_user,
 	.mmu.flush_tlb_multi	= native_flush_tlb_multi,
+	.mmu.flush_tlb_multi_implies_ipi_broadcast = native_flush_tlb_multi_implies_ipi_broadcast,
 
 	.mmu.exit_mmap		= paravirt_nop,
 	.mmu.notify_page_enc_status_changed	= paravirt_nop,
-- 
2.49.0




* [PATCH v2 3/3] mm: embed TLB flush IPI check in tlb_remove_table_sync_one()
  2025-12-29 14:52 [PATCH v2 0/3] skip redundant TLB sync IPIs Lance Yang
  2025-12-29 14:52 ` [PATCH v2 1/3] mm/tlb: allow architectures to " Lance Yang
  2025-12-29 14:52 ` [PATCH v2 2/3] x86/mm: implement redundant IPI elimination for page table operations Lance Yang
@ 2025-12-29 14:52 ` Lance Yang
  2025-12-30 20:33   ` David Hildenbrand (Red Hat)
  2025-12-31  4:26 ` [PATCH v2 0/3] skip redundant TLB sync IPIs Dave Hansen
  3 siblings, 1 reply; 15+ messages in thread
From: Lance Yang @ 2025-12-29 14:52 UTC (permalink / raw)
  To: akpm
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, david, lorenzo.stoakes, ziy, baolin.wang,
	Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, ioworker0,
	shy828301, riel, jannh, linux-arch, linux-mm, linux-kernel,
	Lance Yang

From: Lance Yang <lance.yang@linux.dev>

Embed the tlb_table_flush_implies_ipi_broadcast() check directly inside
tlb_remove_table_sync_one() instead of requiring every caller to check
it explicitly. This relies on callers to do the right thing: flush with
freed_tables=true or unshared_tables=true beforehand.

All existing callers satisfy this requirement:

1. mm/khugepaged.c:1188 (collapse_huge_page):

   pmdp_collapse_flush(vma, address, pmd)
   -> flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE)
      -> flush_tlb_mm_range(mm, ..., freed_tables = true)
         -> flush_tlb_multi(mm_cpumask(mm), info)

   So freed_tables=true before calling tlb_remove_table_sync_one().

2. include/asm-generic/tlb.h:861 (tlb_flush_unshared_tables):

   tlb_flush_mmu_tlbonly(tlb)
   -> tlb_flush(tlb)
      -> flush_tlb_mm_range(mm, ..., unshared_tables = true)
         -> flush_tlb_multi(mm_cpumask(mm), info)

   unshared_tables=true (equivalent to freed_tables for sending IPIs).

3. mm/mmu_gather.c:341 (__tlb_remove_table_one):

   When we can't allocate a batch page in tlb_remove_table(), we do:

   tlb_table_invalidate(tlb)
   -> tlb_flush_mmu_tlbonly(tlb)
      -> flush_tlb_mm_range(mm, ..., freed_tables = true)
         -> flush_tlb_multi(mm_cpumask(mm), info)

   Then:
   tlb_remove_table_one(table)
   -> __tlb_remove_table_one(table) // if !CONFIG_PT_RECLAIM
      -> tlb_remove_table_sync_one()

   freed_tables=true, and this should work too.

   Why is tlb->freed_tables guaranteed? Because callers like
   pte_free_tlb() (via free_pte_range) set freed_tables=true before
   calling __pte_free_tlb(), which then calls tlb_remove_table().
   We cannot free page tables without freed_tables=true.

   Note that tlb_remove_table_sync_one() was a NOP on bare metal x86
   (CONFIG_MMU_GATHER_RCU_TABLE_FREE=n) before commit a37259732a7d
   ("x86/mm: Make MMU_GATHER_RCU_TABLE_FREE unconditional").

4-5. mm/khugepaged.c:1683,1819 (pmdp_get_lockless_sync macro):

   Same as #1. These also use pmdp_collapse_flush() beforehand.

Suggested-by: David Hildenbrand (Red Hat) <david@kernel.org>
Signed-off-by: Lance Yang <lance.yang@linux.dev>
---
 mm/mmu_gather.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 7468ec388455..7b588643cbae 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -276,6 +276,10 @@ static void tlb_remove_table_smp_sync(void *arg)
 
 void tlb_remove_table_sync_one(void)
 {
+	/* Skip the IPI if the TLB flush already synchronized with other CPUs. */
+	if (tlb_table_flush_implies_ipi_broadcast())
+		return;
+
 	/*
 	 * This isn't an RCU grace period and hence the page-tables cannot be
 	 * assumed to be actually RCU-freed.
-- 
2.49.0




* Re: [PATCH v2 1/3] mm/tlb: allow architectures to skip redundant TLB sync IPIs
  2025-12-29 14:52 ` [PATCH v2 1/3] mm/tlb: allow architectures to " Lance Yang
@ 2025-12-29 15:00   ` Lance Yang
  2025-12-29 15:01     ` [PATCH v2 0/3] " Lance Yang
  2025-12-30 20:31   ` [PATCH v2 1/3] mm/tlb: allow architectures to " David Hildenbrand (Red Hat)
  1 sibling, 1 reply; 15+ messages in thread
From: Lance Yang @ 2025-12-29 15:00 UTC (permalink / raw)
  To: akpm
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, david, lorenzo.stoakes, ziy, baolin.wang,
	Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, ioworker0,
	shy828301, riel, jannh, linux-arch, linux-mm, linux-kernel

Confusingly, I'm still unable to send the cover letter, but there's no
error message. I'll post it here instead ...



* [PATCH v2 0/3] skip redundant TLB sync IPIs
  2025-12-29 15:00   ` Lance Yang
@ 2025-12-29 15:01     ` Lance Yang
  0 siblings, 0 replies; 15+ messages in thread
From: Lance Yang @ 2025-12-29 15:01 UTC (permalink / raw)
  To: lance.yang
  Cc: Liam.Howlett, akpm, aneesh.kumar, arnd, baohua, baolin.wang, bp,
	dave.hansen, david, dev.jain, hpa, ioworker0, jannh, linux-arch,
	linux-kernel, linux-mm, lorenzo.stoakes, mingo, npache, npiggin,
	peterz, riel, ryan.roberts, shy828301, tglx, will, x86, ziy

From: Lance Yang <lance.yang@linux.dev>

Hi all,

When unsharing hugetlb PMD page tables or collapsing pages in khugepaged,
we send two IPIs: one for TLB invalidation, and another to synchronize
with concurrent GUP-fast walkers. However, if the TLB flush already
reaches all CPUs, the second IPI is redundant. GUP-fast runs with IRQs
disabled, so when the TLB flush IPI completes, any concurrent GUP-fast
must have finished.

This series introduces a way for architectures to indicate their TLB flush
already provides full synchronization, allowing the redundant IPI to be
skipped. For now, the optimization is implemented for x86 first and applied
to all page table operations that free or unshare tables.

David Hildenbrand did the initial implementation. I built on his work and
relied on off-list discussions to push it further - thanks a lot David!

v1 -> v2:
- Fix cover letter encoding to resolve send-email issues. Apologies for
  any email flood caused by the failed send attempts :(

RFC -> v1:
- Use a callback function in pv_mmu_ops instead of comparing function
  pointers (per David)
- Embed the check directly in tlb_remove_table_sync_one() instead of
  requiring every caller to check explicitly (per David)
- Move tlb_table_flush_implies_ipi_broadcast() outside of
  CONFIG_MMU_GATHER_RCU_TABLE_FREE to fix build error on architectures
  that don't enable this config.
  https://lore.kernel.org/oe-kbuild-all/202512142156.cShiu6PU-lkp@intel.com/
- https://lore.kernel.org/linux-mm/20251213080038.10917-1-lance.yang@linux.dev/


Lance Yang (3):
  mm/tlb: allow architectures to skip redundant TLB sync IPIs
  x86/mm: implement redundant IPI elimination for page table operations
  mm: embed TLB flush IPI check in tlb_remove_table_sync_one()

 arch/x86/include/asm/paravirt_types.h |  6 ++++++
 arch/x86/include/asm/tlb.h            | 19 ++++++++++++++++++-
 arch/x86/kernel/paravirt.c            | 10 ++++++++++
 include/asm-generic/tlb.h             | 14 ++++++++++++++
 mm/mmu_gather.c                       |  4 ++++
 5 files changed, 52 insertions(+), 1 deletion(-)

-- 
2.49.0




* Re: [PATCH v2 1/3] mm/tlb: allow architectures to skip redundant TLB sync IPIs
  2025-12-29 14:52 ` [PATCH v2 1/3] mm/tlb: allow architectures to " Lance Yang
  2025-12-29 15:00   ` Lance Yang
@ 2025-12-30 20:31   ` David Hildenbrand (Red Hat)
  2025-12-31  2:29     ` Lance Yang
  1 sibling, 1 reply; 15+ messages in thread
From: David Hildenbrand (Red Hat) @ 2025-12-30 20:31 UTC (permalink / raw)
  To: Lance Yang, akpm
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, ioworker0, shy828301,
	riel, jannh, linux-arch, linux-mm, linux-kernel

On 12/29/25 15:52, Lance Yang wrote:
> From: Lance Yang <lance.yang@linux.dev>
> 
> When unsharing hugetlb PMD page tables, we currently send two IPIs: one
> for TLB invalidation, and another to synchronize with concurrent GUP-fast
> walkers.
> 
> However, if the TLB flush already reaches all CPUs, the second IPI is
> redundant. GUP-fast runs with IRQs disabled, so when the TLB flush IPI
> completes, any concurrent GUP-fast must have finished.
> 
> Add tlb_table_flush_implies_ipi_broadcast() to let architectures indicate
> their TLB flush provides full synchronization, enabling the redundant IPI
> to be skipped.
> 
> Suggested-by: David Hildenbrand (Red Hat) <david@kernel.org>
> Signed-off-by: Lance Yang <lance.yang@linux.dev>
> ---
>   include/asm-generic/tlb.h | 14 ++++++++++++++
>   1 file changed, 14 insertions(+)
> 
> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
> index 4d679d2a206b..e8d99b5e831f 100644
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -261,6 +261,20 @@ static inline void tlb_remove_table_sync_one(void) { }
>   
>   #endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
>   
> +/*
> + * Architectures can override if their TLB flush already broadcasts IPIs to all
> + * CPUs when freeing or unsharing page tables.
> + *
> + * Return true only when the flush guarantees:
> + * - IPIs reach all CPUs with potentially stale paging-structure cache entries
> + * - Synchronization with IRQ-disabled code like GUP-fast
> + */
> +#ifndef tlb_table_flush_implies_ipi_broadcast
> +static inline bool tlb_table_flush_implies_ipi_broadcast(void)
> +{
> +	return false;
> +}
> +#endif
>   
>   #ifndef CONFIG_MMU_GATHER_NO_GATHER
>   /*


This should likely get squashed into patch #3. Patch #1 itself does not
add a lot of value on its own.

So best to squash both and have them as #1, to then implement it in #2 
for x86.

-- 
Cheers

David



* Re: [PATCH v2 3/3] mm: embed TLB flush IPI check in tlb_remove_table_sync_one()
  2025-12-29 14:52 ` [PATCH v2 3/3] mm: embed TLB flush IPI check in tlb_remove_table_sync_one() Lance Yang
@ 2025-12-30 20:33   ` David Hildenbrand (Red Hat)
  2025-12-31  3:03     ` Lance Yang
  0 siblings, 1 reply; 15+ messages in thread
From: David Hildenbrand (Red Hat) @ 2025-12-30 20:33 UTC (permalink / raw)
  To: Lance Yang, akpm
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, ioworker0, shy828301,
	riel, jannh, linux-arch, linux-mm, linux-kernel

On 12/29/25 15:52, Lance Yang wrote:
> From: Lance Yang <lance.yang@linux.dev>
> 
> Embed the tlb_table_flush_implies_ipi_broadcast() check directly inside
> tlb_remove_table_sync_one() instead of requiring every caller to check
> it explicitly. This relies on callers to do the right thing: flush with
> freed_tables=true or unshared_tables=true beforehand.
> 
> All existing callers satisfy this requirement:
> 
> 1. mm/khugepaged.c:1188 (collapse_huge_page):
> 
>     pmdp_collapse_flush(vma, address, pmd)
>     -> flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE)
>        -> flush_tlb_mm_range(mm, ..., freed_tables = true)
>           -> flush_tlb_multi(mm_cpumask(mm), info)
> 
>     So freed_tables=true before calling tlb_remove_table_sync_one().
> 
> 2. include/asm-generic/tlb.h:861 (tlb_flush_unshared_tables):
> 
>     tlb_flush_mmu_tlbonly(tlb)
>     -> tlb_flush(tlb)
>        -> flush_tlb_mm_range(mm, ..., unshared_tables = true)
>           -> flush_tlb_multi(mm_cpumask(mm), info)
> 
>     unshared_tables=true (equivalent to freed_tables for sending IPIs).
> 
> 3. mm/mmu_gather.c:341 (__tlb_remove_table_one):
> 
>     When we can't allocate a batch page in tlb_remove_table(), we do:
> 
>     tlb_table_invalidate(tlb)
>     -> tlb_flush_mmu_tlbonly(tlb)
>        -> flush_tlb_mm_range(mm, ..., freed_tables = true)
>           -> flush_tlb_multi(mm_cpumask(mm), info)
> 
>     Then:
>     tlb_remove_table_one(table)
>     -> __tlb_remove_table_one(table) // if !CONFIG_PT_RECLAIM
>        -> tlb_remove_table_sync_one()
> 
>     freed_tables=true, and this should work too.
> 
>     Why is tlb->freed_tables guaranteed? Because callers like
>     pte_free_tlb() (via free_pte_range) set freed_tables=true before
>     calling __pte_free_tlb(), which then calls tlb_remove_table().
>     We cannot free page tables without freed_tables=true.
> 
>     Note that tlb_remove_table_sync_one() was a NOP on bare metal x86
>     (CONFIG_MMU_GATHER_RCU_TABLE_FREE=n) before commit a37259732a7d
>     ("x86/mm: Make MMU_GATHER_RCU_TABLE_FREE unconditional").
> 
> 4-5. mm/khugepaged.c:1683,1819 (pmdp_get_lockless_sync macro):
> 
>     Same as #1. These also use pmdp_collapse_flush() beforehand.
> 
> Suggested-by: David Hildenbrand (Red Hat) <david@kernel.org>
> Signed-off-by: Lance Yang <lance.yang@linux.dev>

LGTM. I think we should document that somewhere. Can we add some 
kerneldoc for tlb_remove_table_sync_one() where we document that it 
doesn't do any sync if a previous TLB flush (when removing/unsharing page
tables) would have already performed an IPI?

> ---
>   mm/mmu_gather.c | 4 ++++
>   1 file changed, 4 insertions(+)
> 
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index 7468ec388455..7b588643cbae 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -276,6 +276,10 @@ static void tlb_remove_table_smp_sync(void *arg)
>   
>   void tlb_remove_table_sync_one(void)
>   {
> +	/* Skip the IPI if the TLB flush already synchronized with other CPUs. */
> +	if (tlb_table_flush_implies_ipi_broadcast())
> +		return;
> +
>   	/*
>   	 * This isn't an RCU grace period and hence the page-tables cannot be
>   	 * assumed to be actually RCU-freed.


-- 
Cheers

David



* Re: [PATCH v2 1/3] mm/tlb: allow architectures to skip redundant TLB sync IPIs
  2025-12-30 20:31   ` [PATCH v2 1/3] mm/tlb: allow architectures to " David Hildenbrand (Red Hat)
@ 2025-12-31  2:29     ` Lance Yang
  0 siblings, 0 replies; 15+ messages in thread
From: Lance Yang @ 2025-12-31  2:29 UTC (permalink / raw)
  To: David Hildenbrand (Red Hat), akpm
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, ioworker0, shy828301,
	riel, jannh, linux-arch, linux-mm, linux-kernel



On 2025/12/31 04:31, David Hildenbrand (Red Hat) wrote:
> On 12/29/25 15:52, Lance Yang wrote:
>> From: Lance Yang <lance.yang@linux.dev>
>>
>> When unsharing hugetlb PMD page tables, we currently send two IPIs: one
>> for TLB invalidation, and another to synchronize with concurrent GUP-fast
>> walkers.
>>
>> However, if the TLB flush already reaches all CPUs, the second IPI is
>> redundant. GUP-fast runs with IRQs disabled, so when the TLB flush IPI
>> completes, any concurrent GUP-fast must have finished.
>>
>> Add tlb_table_flush_implies_ipi_broadcast() to let architectures indicate
>> their TLB flush provides full synchronization, enabling the redundant IPI
>> to be skipped.
>>
>> Suggested-by: David Hildenbrand (Red Hat) <david@kernel.org>
>> Signed-off-by: Lance Yang <lance.yang@linux.dev>
>> ---
>>   include/asm-generic/tlb.h | 14 ++++++++++++++
>>   1 file changed, 14 insertions(+)
>>
>> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
>> index 4d679d2a206b..e8d99b5e831f 100644
>> --- a/include/asm-generic/tlb.h
>> +++ b/include/asm-generic/tlb.h
>> @@ -261,6 +261,20 @@ static inline void tlb_remove_table_sync_one(void) { }
>>   #endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
>> +/*
>> + * Architectures can override if their TLB flush already broadcasts IPIs to all
>> + * CPUs when freeing or unsharing page tables.
>> + *
>> + * Return true only when the flush guarantees:
>> + * - IPIs reach all CPUs with potentially stale paging-structure cache entries
>> + * - Synchronization with IRQ-disabled code like GUP-fast
>> + */
>> +#ifndef tlb_table_flush_implies_ipi_broadcast
>> +static inline bool tlb_table_flush_implies_ipi_broadcast(void)
>> +{
>> +    return false;
>> +}
>> +#endif
>>   #ifndef CONFIG_MMU_GATHER_NO_GATHER
>>   /*
> 
> 
> This should likely get squashed into patch #3. Patch #1 itself does not
> add a lot of value on its own.
> 
> So best to squash both and have them as #1, to then implement it in #2 
> for x86.

Sounds good, will do! Squashing #1 and #3 together, keeping the x86
implementation as #2 ;)

Cheers,
Lance



* Re: [PATCH v2 3/3] mm: embed TLB flush IPI check in tlb_remove_table_sync_one()
  2025-12-30 20:33   ` David Hildenbrand (Red Hat)
@ 2025-12-31  3:03     ` Lance Yang
  0 siblings, 0 replies; 15+ messages in thread
From: Lance Yang @ 2025-12-31  3:03 UTC (permalink / raw)
  To: David Hildenbrand (Red Hat), akpm
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, ioworker0, shy828301,
	riel, jannh, linux-arch, linux-mm, linux-kernel



On 2025/12/31 04:33, David Hildenbrand (Red Hat) wrote:
> On 12/29/25 15:52, Lance Yang wrote:
>> From: Lance Yang <lance.yang@linux.dev>
>>
>> Embed the tlb_table_flush_implies_ipi_broadcast() check directly inside
>> tlb_remove_table_sync_one() instead of requiring every caller to check
>> it explicitly. This relies on callers to do the right thing: flush with
>> freed_tables=true or unshared_tables=true beforehand.
>>
>> All existing callers satisfy this requirement:
>>
>> 1. mm/khugepaged.c:1188 (collapse_huge_page):
>>
>>     pmdp_collapse_flush(vma, address, pmd)
>>     -> flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE)
>>        -> flush_tlb_mm_range(mm, ..., freed_tables = true)
>>           -> flush_tlb_multi(mm_cpumask(mm), info)
>>
>>     So freed_tables=true before calling tlb_remove_table_sync_one().
>>
>> 2. include/asm-generic/tlb.h:861 (tlb_flush_unshared_tables):
>>
>>     tlb_flush_mmu_tlbonly(tlb)
>>     -> tlb_flush(tlb)
>>        -> flush_tlb_mm_range(mm, ..., unshared_tables = true)
>>           -> flush_tlb_multi(mm_cpumask(mm), info)
>>
>>     unshared_tables=true (equivalent to freed_tables for sending IPIs).
>>
>> 3. mm/mmu_gather.c:341 (__tlb_remove_table_one):
>>
>>     When we can't allocate a batch page in tlb_remove_table(), we do:
>>
>>     tlb_table_invalidate(tlb)
>>     -> tlb_flush_mmu_tlbonly(tlb)
>>        -> flush_tlb_mm_range(mm, ..., freed_tables = true)
>>           -> flush_tlb_multi(mm_cpumask(mm), info)
>>
>>     Then:
>>     tlb_remove_table_one(table)
>>     -> __tlb_remove_table_one(table) // if !CONFIG_PT_RECLAIM
>>        -> tlb_remove_table_sync_one()
>>
>>     freed_tables=true, and this should work too.
>>
>>     Why is tlb->freed_tables guaranteed? Because callers like
>>     pte_free_tlb() (via free_pte_range) set freed_tables=true before
>>     calling __pte_free_tlb(), which then calls tlb_remove_table().
>>     We cannot free page tables without freed_tables=true.
>>
>>     Note that tlb_remove_table_sync_one() was a NOP on bare metal x86
>>     (CONFIG_MMU_GATHER_RCU_TABLE_FREE=n) before commit a37259732a7d
>>     ("x86/mm: Make MMU_GATHER_RCU_TABLE_FREE unconditional").
>>
>> 4-5. mm/khugepaged.c:1683,1819 (pmdp_get_lockless_sync macro):
>>
>>     Same as #1. These also use pmdp_collapse_flush() beforehand.
>>
>> Suggested-by: David Hildenbrand (Red Hat) <david@kernel.org>
>> Signed-off-by: Lance Yang <lance.yang@linux.dev>
> 
> LGTM. I think we should document that somewhere. Can we add some 

Thanks!

> kerneldoc for tlb_remove_table_sync_one() where we document that it 
> doesn't do any sync if a previous TLB flush (when removing/unsharing page
> tables) would have already performed an IPI?

Fair point. Would something like this work?

---8<---
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 7b588643cbae..9139f0a6b8bd 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -274,6 +274,20 @@ static void tlb_remove_table_smp_sync(void *arg)
  	/* Simply deliver the interrupt */
  }

+/**
+ * tlb_remove_table_sync_one - Send IPI to synchronize page table operations
+ *
+ * Sends an IPI to all CPUs to synchronize when freeing or unsharing page
+ * tables (e.g., to ensure concurrent GUP-fast walkers have completed).
+ *
+ * If a previous TLB flush (when removing/unsharing page tables) already
+ * broadcast IPIs to all CPUs, the redundant IPI is skipped. The optimization
+ * relies on architectures implementing tlb_table_flush_implies_ipi_broadcast()
+ * to indicate when their TLB flush provides sufficient synchronization.
+ *
+ * Note that callers must ensure that a TLB flush with freed_tables=true or
+ * unshared_tables=true has been performed before calling.
+ */
  void tlb_remove_table_sync_one(void)
  {
  	/* Skip the IPI if the TLB flush already synchronized with other CPUs. */
---

Cheers,
Lance

> 
>> ---
>>   mm/mmu_gather.c | 4 ++++
>>   1 file changed, 4 insertions(+)
>>
>> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
>> index 7468ec388455..7b588643cbae 100644
>> --- a/mm/mmu_gather.c
>> +++ b/mm/mmu_gather.c
>> @@ -276,6 +276,10 @@ static void tlb_remove_table_smp_sync(void *arg)
>>   void tlb_remove_table_sync_one(void)
>>   {
>> +    /* Skip the IPI if the TLB flush already synchronized with other CPUs. */
>> +    if (tlb_table_flush_implies_ipi_broadcast())
>> +        return;
>> +
>>       /*
>>        * This isn't an RCU grace period and hence the page-tables cannot be
>>        * assumed to be actually RCU-freed.
> 
> 




* Re: [PATCH v2 0/3] skip redundant TLB sync IPIs
  2025-12-29 14:52 [PATCH v2 0/3] skip redundant TLB sync IPIs Lance Yang
                   ` (2 preceding siblings ...)
  2025-12-29 14:52 ` [PATCH v2 3/3] mm: embed TLB flush IPI check in tlb_remove_table_sync_one() Lance Yang
@ 2025-12-31  4:26 ` Dave Hansen
  2025-12-31 12:33   ` David Hildenbrand (Red Hat)
  3 siblings, 1 reply; 15+ messages in thread
From: Dave Hansen @ 2025-12-31  4:26 UTC (permalink / raw)
  To: Lance Yang, akpm
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, david, lorenzo.stoakes, ziy, baolin.wang,
	Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, ioworker0,
	shy828301, riel, jannh, linux-arch, linux-mm, linux-kernel

On 12/29/25 06:52, Lance Yang wrote:
...
> This series introduces a way for architectures to indicate their TLB flush
> already provides full synchronization, allowing the redundant IPI to be
> skipped. For now, the optimization is implemented for x86 first and applied
> to all page table operations that free or unshare tables.

I really don't like all the complexity here. Even on x86, there are
three or more ways of deriving this. Having the pv_ops check the value
of another pv op is also a bit unsettling.

That said, complexity can be worth it with sufficient demonstrated
gains. But:

> When unsharing hugetlb PMD page tables or collapsing pages in khugepaged,
> we send two IPIs: one for TLB invalidation, and another to synchronize
> with concurrent GUP-fast walkers.

Those aren't exactly hot paths. khugepaged is fundamentally rate
limited. I don't think unsharing hugetlb PMD page tables is all that
common either.

What kind of end user benefit is there to justify the complexity?



* Re: [PATCH v2 0/3] skip redundant TLB sync IPIs
  2025-12-31  4:26 ` [PATCH v2 0/3] skip redundant TLB sync IPIs Dave Hansen
@ 2025-12-31 12:33   ` David Hildenbrand (Red Hat)
  2026-01-02 16:41     ` Dave Hansen
  0 siblings, 1 reply; 15+ messages in thread
From: David Hildenbrand (Red Hat) @ 2025-12-31 12:33 UTC (permalink / raw)
  To: Dave Hansen, Lance Yang, akpm
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, ioworker0, shy828301,
	riel, jannh, linux-arch, linux-mm, linux-kernel

On 12/31/25 05:26, Dave Hansen wrote:
> On 12/29/25 06:52, Lance Yang wrote:
> ...
>> This series introduces a way for architectures to indicate their TLB flush
>> already provides full synchronization, allowing the redundant IPI to be
>> skipped. For now, the optimization is implemented for x86 first and applied
>> to all page table operations that free or unshare tables.
> 
> I really don't like all the complexity here. Even on x86, there are
> three or more ways of deriving this. Having the pv_ops check the value
> of another pv op is also a bit unsettling.

Right. What I actually meant is that we simply have a property "bool
flush_tlb_multi_implies_ipi_broadcast" that we only set to true from the
initialization code.

Without comparing the pv_ops.

That should reduce the complexity quite a bit IMHO.
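
IOW, something like this in the init code (completely untested, just to
illustrate; note there is no pv_ops comparison anywhere):

	/* native setup path only; hypervisors simply leave it at false */
	pv_ops.mmu.flush_tlb_multi_implies_ipi_broadcast =
		!cpu_feature_enabled(X86_FEATURE_INVLPGB);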

But maybe you have an even better way on how to indicate support, in a 
very simple way.

> 
> That said, complexity can be worth it with sufficient demonstrated
> gains. But:
> 
>> When unsharing hugetlb PMD page tables or collapsing pages in khugepaged,
>> we send two IPIs: one for TLB invalidation, and another to synchronize
>> with concurrent GUP-fast walkers.
> 
> Those aren't exactly hot paths. khugepaged is fundamentally rate
> limited. I don't think unsharing hugetlb PMD page tables is all that
> common either.

Given that the added IPIs during unsharing broke Oracle DBs rather badly 
[1], I think this is actually a case worth optimizing.

I'd assume that the impact can be measured on a many-core/many-socket 
system with an adjusted reproducer of [1]. The impact will not be as big 
as what [1] fixed (we reduced the tlb_remove_table_sync_one() 
invocations quite drastically).

After all, tlb_remove_table_sync_one() sends an IPI to *all* CPUs in the 
system, not just the ones in the MM CPU mask, which is rather bad on 
systems with a lot of CPUs. Of course, this way we can only optimize on 
systems that actually send IPIs during TLB flushes.
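
For context, tlb_remove_table_sync_one() today essentially boils down to

	/* mm/mmu_gather.c, roughly */
	smp_call_function(tlb_remove_table_smp_sync, NULL, 1);

i.e. one interrupt for every other CPU in the system, independent of
mm_cpumask().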

For other systems, it will be more tricky to avoid these broadcast IPIs.

(I have the faint recollection that the IPI broadcast through 
tlb_remove_table_sync_one() is a problem when called from 
__tlb_remove_table_one() on RT systems ...)

[1] https://lkml.kernel.org/r/20251223214037.580860-1-david@kernel.org

-- 
Cheers

David



* Re: [PATCH v2 0/3] skip redundant TLB sync IPIs
  2025-12-31 12:33   ` David Hildenbrand (Red Hat)
@ 2026-01-02 16:41     ` Dave Hansen
  2026-01-03  8:39       ` Lance Yang
  0 siblings, 1 reply; 15+ messages in thread
From: Dave Hansen @ 2026-01-02 16:41 UTC (permalink / raw)
  To: David Hildenbrand (Red Hat), Lance Yang, akpm
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, ioworker0, shy828301,
	riel, jannh, linux-arch, linux-mm, linux-kernel

On 12/31/25 04:33, David Hildenbrand (Red Hat) wrote:
> On 12/31/25 05:26, Dave Hansen wrote:
>> On 12/29/25 06:52, Lance Yang wrote:
>> ...
>>> This series introduces a way for architectures to indicate their TLB flush
>>> already provides full synchronization, allowing the redundant IPI to be
>>> skipped. For now, the optimization is implemented for x86 first and applied
>>> to all page table operations that free or unshare tables.
>>
>> I really don't like all the complexity here. Even on x86, there are
>> three or more ways of deriving this. Having the pv_ops check the value
>> of another pv op is also a bit unsettling.
> 
> Right. What I actually meant is that we simply have a property "bool
> flush_tlb_multi_implies_ipi_broadcast" that we only set to true from the
> initialization code.
> 
> Without comparing the pv_ops.
> 
> That should reduce the complexity quite a bit IMHO.

Yeah, that sounds promising.

> But maybe you have an even better way on how to indicate support, in a
> very simple way.

Rather than having some kind of explicit support enumeration, the other
idea I had would be to actually track the state about what needs to get
flushed somewhere. For instance, even CPUs with enabled INVLPGB support
still use IPIs sometimes. That makes the
tlb_table_flush_implies_ipi_broadcast() check a bit imperfect as is
because it will for the extra sync IPI even when INVLPGB isn't being
used for an mm.

First, we already have some semblance of support for doing different
flushes when freeing page tables in mmu_gather->freed_tables. But, the call
sites in question here are for a single flush and don't use mmu_gathers.

The other pretty straightforward thing to do would be to add something
to mm->context that indicates that page tables need to be freed but
there might still be wild gup walkers out there that need an IPI. It
would get set when the page tables are modified and cleared at all the
sites where an IPI is sent.
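
Something like this, maybe (hand-waving; the field and helper names are
made up for illustration):

	/* hypothetical per-mm state, e.g. in the arch mm_context_t */
	static inline void mm_set_gup_sync_pending(struct mm_struct *mm)
	{
		/* set when a page table is unlinked from the tree */
		WRITE_ONCE(mm->context.gup_sync_pending, true);
	}

	static inline void mm_clear_gup_sync_pending(struct mm_struct *mm)
	{
		/* cleared at every site that sends flush IPIs anyway */
		WRITE_ONCE(mm->context.gup_sync_pending, false);
	}

Then the sync-one path only needs the IPI while the flag is still set.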


>> That said, complexity can be worth it with sufficient demonstrated
>> gains. But:
>>
>>> When unsharing hugetlb PMD page tables or collapsing pages in khugepaged,
>>> we send two IPIs: one for TLB invalidation, and another to synchronize
>>> with concurrent GUP-fast walkers.
>>
>> Those aren't exactly hot paths. khugepaged is fundamentally rate
>> limited. I don't think unsharing hugetlb PMD page tables just is all
>> that common either.
> 
> Given that the added IPIs during unsharing broke Oracle DBs rather badly
> [1], I think this is actually a case worth optimizing.
...
> [1] https://lkml.kernel.org/r/20251223214037.580860-1-david@kernel.org

Gah, that's good context, thanks.

Are there any tests out there that might catch this case better?
It might be something good to have 0day watch for.



* Re: [PATCH v2 0/3] skip redundant TLB sync IPIs
  2026-01-02 16:41     ` Dave Hansen
@ 2026-01-03  8:39       ` Lance Yang
  2026-01-03 17:06         ` Dave Hansen
  0 siblings, 1 reply; 15+ messages in thread
From: Lance Yang @ 2026-01-03  8:39 UTC (permalink / raw)
  To: Dave Hansen, David Hildenbrand (Red Hat)
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, ioworker0, shy828301,
	riel, jannh, linux-arch, linux-mm, linux-kernel, akpm



On 2026/1/3 00:41, Dave Hansen wrote:
> On 12/31/25 04:33, David Hildenbrand (Red Hat) wrote:
>> On 12/31/25 05:26, Dave Hansen wrote:
>>> On 12/29/25 06:52, Lance Yang wrote:
>>> ...
>>>> This series introduces a way for architectures to indicate their TLB flush
>>>> already provides full synchronization, allowing the redundant IPI to be
>>>> skipped. For now, the optimization is implemented for x86 first and applied
>>>> to all page table operations that free or unshare tables.
>>>
>>> I really don't like all the complexity here. Even on x86, there are
>>> three or more ways of deriving this. Having the pv_ops check the value
>>> of another pv op is also a bit unsettling.
>>
>> Right. What I actually meant is that we simply have a property "bool
>> flush_tlb_multi_implies_ipi_broadcast" that we only set to true from the
>> initialization code.
>>
>> Without comparing the pv_ops.
>>
>> That should reduce the complexity quite a bit IMHO.
> 
> Yeah, that sounds promising.

Thanks a lot for taking the time to review!

Yeah, I simplified things to just a bool property set during init
(no pv_ops comparison at runtime) as follows:

---8<---
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 13f9cd31c8f8..a926d459e6f5 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -698,6 +698,7 @@ static __always_inline unsigned long arch_local_irq_save(void)

  extern void default_banner(void);
  void native_pv_lock_init(void) __init;
+void setup_pv_tlb_flush_ipi_broadcast(void) __init;

  #else  /* __ASSEMBLER__ */

@@ -727,6 +728,10 @@ void native_pv_lock_init(void) __init;
  static inline void native_pv_lock_init(void)
  {
  }
+
+static inline void setup_pv_tlb_flush_ipi_broadcast(void)
+{
+}
  #endif
  #endif /* !CONFIG_PARAVIRT */

diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 3502939415ad..7c010d8bee60 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -133,6 +133,12 @@ struct pv_mmu_ops {
  	void (*flush_tlb_multi)(const struct cpumask *cpus,
  				const struct flush_tlb_info *info);

+	/*
+	 * Indicates whether flush_tlb_multi IPIs provide sufficient
+	 * synchronization for GUP-fast when freeing or unsharing page tables.
+	 */
+	bool flush_tlb_multi_implies_ipi_broadcast;
+
  	/* Hook for intercepting the destruction of an mm_struct. */
  	void (*exit_mmap)(struct mm_struct *mm);
  	void (*notify_page_enc_status_changed)(unsigned long pfn, int npages, bool enc);
diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h
index 866ea78ba156..f570c7b2d03e 100644
--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -5,10 +5,23 @@
  #define tlb_flush tlb_flush
  static inline void tlb_flush(struct mmu_gather *tlb);

+#define tlb_table_flush_implies_ipi_broadcast tlb_table_flush_implies_ipi_broadcast
+static inline bool tlb_table_flush_implies_ipi_broadcast(void);
+
  #include <asm-generic/tlb.h>
  #include <linux/kernel.h>
  #include <vdso/bits.h>
  #include <vdso/page.h>
+#include <asm/paravirt.h>
+
+static inline bool tlb_table_flush_implies_ipi_broadcast(void)
+{
+#ifdef CONFIG_PARAVIRT
+	return pv_ops.mmu.flush_tlb_multi_implies_ipi_broadcast;
+#else
+	return !cpu_feature_enabled(X86_FEATURE_INVLPGB);
+#endif
+}

  static inline void tlb_flush(struct mmu_gather *tlb)
  {
@@ -20,7 +33,8 @@ static inline void tlb_flush(struct mmu_gather *tlb)
  		end = tlb->end;
  	}

-	flush_tlb_mm_range(tlb->mm, start, end, stride_shift, tlb->freed_tables);
+	flush_tlb_mm_range(tlb->mm, start, end, stride_shift,
+			   tlb->freed_tables || tlb->unshared_tables);
  }

  static inline void invlpg(unsigned long addr)
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index ab3e172dcc69..0a49c2d79693 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -60,6 +60,23 @@ void __init native_pv_lock_init(void)
  		static_branch_enable(&virt_spin_lock_key);
  }

+void __init setup_pv_tlb_flush_ipi_broadcast(void)
+{
+	/*
+	 * For native TLB flush, if we don't have INVLPGB, we use IPI-based
+	 * flushing which sends real IPIs to all CPUs. This provides sufficient
+	 * synchronization for GUP-fast.
+	 *
+	 * For paravirt (e.g., KVM, Xen, HyperV), hypercalls may not send real
+	 * IPIs, so we keep the default value of false. Only set to true when
+	 * using native flush_tlb_multi without INVLPGB.
+	 */
+	if (pv_ops.mmu.flush_tlb_multi == native_flush_tlb_multi &&
+	    !cpu_feature_enabled(X86_FEATURE_INVLPGB))
+		pv_ops.mmu.flush_tlb_multi_implies_ipi_broadcast = true;
+}
+
+
  struct static_key paravirt_steal_enabled;
  struct static_key paravirt_steal_rq_enabled;

@@ -173,6 +190,7 @@ struct paravirt_patch_template pv_ops = {
  	.mmu.flush_tlb_kernel	= native_flush_tlb_global,
  	.mmu.flush_tlb_one_user	= native_flush_tlb_one_user,
  	.mmu.flush_tlb_multi	= native_flush_tlb_multi,
+	.mmu.flush_tlb_multi_implies_ipi_broadcast = false,

  	.mmu.exit_mmap		= paravirt_nop,
  	.mmu.notify_page_enc_status_changed	= paravirt_nop,
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 74aa904be6dc..3f673e686b12 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -51,6 +51,7 @@
  #include <asm/olpc_ofw.h>
  #include <asm/pci-direct.h>
  #include <asm/prom.h>
+#include <asm/paravirt.h>
  #include <asm/proto.h>
  #include <asm/realmode.h>
  #include <asm/thermal.h>
@@ -1257,6 +1258,7 @@ void __init setup_arch(char **cmdline_p)
  	io_apic_init_mappings();

  	x86_init.hyper.guest_late_init();
+	setup_pv_tlb_flush_ipi_broadcast();

  	e820__reserve_resources();
  	e820__register_nosave_regions(max_pfn);
---

> 
>> But maybe you have an even better way on how to indicate support, in a
>> very simple way.
> 
> Rather than having some kind of explicit support enumeration, the other
> idea I had would be to actually track the state about what needs to get
> flushed somewhere. For instance, even CPUs with enabled INVLPGB support
> still use IPIs sometimes. That makes the
> tlb_table_flush_implies_ipi_broadcast() check a bit imperfect as is
> because it will for the extra sync IPI even when INVLPGB isn't being
> used for an mm.
> 
> First, we already have some semblance of support for doing different
> flushes when freeing page tables in mmu_gather->freed_tables. But, the call
> sites in question here are for a single flush and don't use mmu_gathers.
> 
> The other pretty straightforward thing to do would be to add something
> to mm->context that indicates that page tables need to be freed but
> there might still be wild gup walkers out there that need an IPI. It
> would get set when the page tables are modified and cleared at all the
> sites where an IPI is sent.

Thanks for the suggestion! The mm->context tracking idea makes a lot
of sense - it would handle those mixed INVLPGB/IPI cases much better :)

Maybe we could do that as a follow-up. I'd like to keep things simple
for now, so we just add a bool property to skip redundant TLB sync IPIs
on systems without INVLPGB support.

Then we could add the mm->context (or something similar) tracking later
to handle things more precisely.

Anyway, I'm open to going straight to the mm->context approach as well
and happy to do that instead :D

Thanks,
Lance

> 
> 
>>> That said, complexity can be worth it with sufficient demonstrated
>>> gains. But:
>>>
>>>> When unsharing hugetlb PMD page tables or collapsing pages in khugepaged,
>>>> we send two IPIs: one for TLB invalidation, and another to synchronize
>>>> with concurrent GUP-fast walkers.
>>>
>>> Those aren't exactly hot paths. khugepaged is fundamentally rate
>>> limited. I don't think unsharing hugetlb PMD page tables is all that
>>> common either.
>>
>> Given that the added IPIs during unsharing broke Oracle DBs rather badly
>> [1], I think this is actually a case worth optimizing.
> ...
>> [1] https://lkml.kernel.org/r/20251223214037.580860-1-david@kernel.org
> 
> Gah, that's good context, thanks.
> 
> Are there any tests out there that might catch this case better?
> It might be something good to have 0day watch for.




* Re: [PATCH v2 0/3] skip redundant TLB sync IPIs
  2026-01-03  8:39       ` Lance Yang
@ 2026-01-03 17:06         ` Dave Hansen
  0 siblings, 0 replies; 15+ messages in thread
From: Dave Hansen @ 2026-01-03 17:06 UTC (permalink / raw)
  To: Lance Yang, David Hildenbrand (Red Hat)
  Cc: will, aneesh.kumar, npiggin, peterz, tglx, mingo, bp, dave.hansen,
	x86, hpa, arnd, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
	npache, ryan.roberts, dev.jain, baohua, ioworker0, shy828301,
	riel, jannh, linux-arch, linux-mm, linux-kernel, akpm

On 1/3/26 00:39, Lance Yang wrote:
...
> Maybe we could do that as a follow-up. I'd like to keep things simple
> for now, so we just add a bool property to skip redundant TLB sync IPIs
> on systems without INVLPGB support.

It's not just INVLPGB support. Take a look at hyperv_flush_tlb_multi(),
for instance. It can eventually land back in native_flush_tlb_multi(),
but would also "fail" the pv_ops check in all cases.

It's not that Hyper-V performance is super important, it's just that the
semantics of the chosen approach here are rather complicated.
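
Condensed, the shape of arch/x86/hyperv/mmu.c is something like this (the
helper name is made up, the native fallback is real):

	static void hyperv_flush_tlb_multi(const struct cpumask *cpus,
					   const struct flush_tlb_info *info)
	{
		/* try the Hyper-V flush hypercalls first ... */
		if (!hv_flush_via_hypercall(cpus, info))
			/* ... and fall back to the native IPI path */
			native_flush_tlb_multi(cpus, info);
	}

So whether real IPIs get sent is decided flush by flush, not once at init.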

> Then we could add the mm->context (or something similar) tracking later
> to handle things more precisely.
> 
> Anyway, I'm open to going straight to the mm->context approach as well
> and happy to do that instead :D

I'd really like to see what an mm->context approach looks like before we
go forward with what is being proposed here.

Is there some kind of hurry to get this done immediately?



end of thread (newest: 2026-01-03 17:06 UTC)

Thread overview: 15+ messages
2025-12-29 14:52 [PATCH v2 0/3] skip redundant TLB sync IPIs Lance Yang
2025-12-29 14:52 ` [PATCH v2 1/3] mm/tlb: allow architectures to " Lance Yang
2025-12-29 15:00   ` Lance Yang
2025-12-29 15:01     ` [PATCH v2 0/3] " Lance Yang
2025-12-30 20:31   ` [PATCH v2 1/3] mm/tlb: allow architectures to " David Hildenbrand (Red Hat)
2025-12-31  2:29     ` Lance Yang
2025-12-29 14:52 ` [PATCH v2 2/3] x86/mm: implement redundant IPI elimination for page table operations Lance Yang
2025-12-29 14:52 ` [PATCH v2 3/3] mm: embed TLB flush IPI check in tlb_remove_table_sync_one() Lance Yang
2025-12-30 20:33   ` David Hildenbrand (Red Hat)
2025-12-31  3:03     ` Lance Yang
2025-12-31  4:26 ` [PATCH v2 0/3] skip redundant TLB sync IPIs Dave Hansen
2025-12-31 12:33   ` David Hildenbrand (Red Hat)
2026-01-02 16:41     ` Dave Hansen
2026-01-03  8:39       ` Lance Yang
2026-01-03 17:06         ` Dave Hansen
