From mboxrd@z Thu Jan 1 00:00:00 1970
From: Lance Yang
To: akpm@linux-foundation.org
Cc: peterz@infradead.org, david@kernel.org, dave.hansen@intel.com,
	dave.hansen@linux.intel.com, ypodemsk@redhat.com, hughd@google.com,
	will@kernel.org, aneesh.kumar@kernel.org, npiggin@gmail.com,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, x86@kernel.org,
	hpa@zytor.com, arnd@arndb.de, lorenzo.stoakes@oracle.com,
	ziy@nvidia.com, baolin.wang@linux.alibaba.com, Liam.Howlett@oracle.com,
	npache@redhat.com, ryan.roberts@arm.com, dev.jain@arm.com,
	baohua@kernel.org, shy828301@gmail.com, riel@surriel.com,
	jannh@google.com, jgross@suse.com, seanjc@google.com,
	pbonzini@redhat.com, boris.ostrovsky@oracle.com,
	virtualization@lists.linux.dev, kvm@vger.kernel.org,
	linux-arch@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, ioworker0@gmail.com, Lance Yang
Subject: [PATCH v7 2/2] x86/tlb: skip redundant sync IPIs for native TLB flush
Date: Mon, 9 Mar 2026 10:07:11 +0800
Message-ID: <20260309020711.20831-3-lance.yang@linux.dev>
In-Reply-To: <20260309020711.20831-1-lance.yang@linux.dev>
References: <20260309020711.20831-1-lance.yang@linux.dev>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
From: Lance Yang

Enable the optimization introduced in the previous patch for x86.

native_pv_tlb_init() checks whether native_flush_tlb_multi() is in use:
on CONFIG_PARAVIRT systems it inspects pv_ops, while on non-PARAVIRT
systems the native flush is always in use. It then decides once at boot
whether to enable the optimization: when the native TLB flush is in use
and INVLPGB is not supported, every remote flush is known to send IPIs,
so the redundant sync IPI can be skipped. The decision is fixed via a
static key, as Peter suggested[1].

PV backends (KVM, Xen, Hyper-V) typically have their own
implementations and don't call native_flush_tlb_multi() directly, so
they cannot be trusted to provide the IPI guarantees we need.

Two-step plan, as David suggested[2]:

Step 1 (this patch): skip the redundant sync when we are 100% certain
the TLB flush sent IPIs. INVLPGB is excluded because, when it is
supported, we cannot guarantee that IPIs were sent; this keeps things
clean and simple.

Step 2 (future work): send targeted IPIs only to the CPUs actually
doing software/lockless page table walks, benefiting all architectures.

Note that Step 2 only applies to setups where Step 1 does not, such as
x86 with INVLPGB, or arm64.

[1] https://lore.kernel.org/linux-mm/20260302145652.GH1395266@noisy.programming.kicks-ass.net/
[2] https://lore.kernel.org/linux-mm/bbfdf226-4660-4949-b17b-0d209ee4ef8c@kernel.org/

Suggested-by: Peter Zijlstra
Suggested-by: David Hildenbrand (Arm)
Signed-off-by: Lance Yang
---
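[ For reference: a rough sketch, not part of this patch, of how the
  generic side from patch 1/2 is expected to consume the new hook. The
  exact call site and surrounding code in mm/mmu_gather.c are assumed
  here and may differ:

	static void tlb_remove_table_one(void *table)
	{
		/*
		 * When the architecture guarantees that the preceding
		 * TLB flush already IPI'd every CPU in mm_cpumask, the
		 * extra broadcast that kicks lockless (GUP-fast) page
		 * table walkers off the freed table is redundant.
		 */
		if (!tlb_table_flush_implies_ipi_broadcast())
			tlb_remove_table_sync_one();
		__tlb_remove_table(table);
	}
]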
 arch/x86/include/asm/tlb.h      | 17 ++++++++++++++++-
 arch/x86/include/asm/tlbflush.h |  2 ++
 arch/x86/kernel/smpboot.c       |  1 +
 arch/x86/mm/tlb.c               | 15 +++++++++++++++
 4 files changed, 34 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h
index 866ea78ba156..99de622d3856 100644
--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -5,11 +5,21 @@
 #define tlb_flush tlb_flush
 static inline void tlb_flush(struct mmu_gather *tlb);
 
+#define tlb_table_flush_implies_ipi_broadcast tlb_table_flush_implies_ipi_broadcast
+static inline bool tlb_table_flush_implies_ipi_broadcast(void);
+
 #include <asm-generic/tlb.h>
 #include <linux/kernel.h>
 #include <vdso/bits.h>
 #include <vdso/page.h>
 
+DECLARE_STATIC_KEY_FALSE(tlb_ipi_broadcast_key);
+
+static inline bool tlb_table_flush_implies_ipi_broadcast(void)
+{
+	return static_branch_likely(&tlb_ipi_broadcast_key);
+}
+
 static inline void tlb_flush(struct mmu_gather *tlb)
 {
 	unsigned long start = 0UL, end = TLB_FLUSH_ALL;
@@ -20,7 +30,12 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 		end = tlb->end;
 	}
 
-	flush_tlb_mm_range(tlb->mm, start, end, stride_shift, tlb->freed_tables);
+	/*
+	 * Pass both freed_tables and unshared_tables so that lazy-TLB CPUs
+	 * also receive IPIs during unsharing page tables.
+	 */
+	flush_tlb_mm_range(tlb->mm, start, end, stride_shift,
+			   tlb->freed_tables || tlb->unshared_tables);
 }
 
 static inline void invlpg(unsigned long addr)
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 5a3cdc439e38..8ba853154b46 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -18,6 +18,8 @@
 
 DECLARE_PER_CPU(u64, tlbstate_untag_mask);
 
+void __init native_pv_tlb_init(void);
+
 void __flush_tlb_all(void);
 
 #define TLB_FLUSH_ALL	-1UL
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 5cd6950ab672..3cdb04162843 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -1167,6 +1167,7 @@ void __init native_smp_prepare_boot_cpu(void)
 
 	switch_gdt_and_percpu_base(me);
 	native_pv_lock_init();
+	native_pv_tlb_init();
 }
 
 void __init native_smp_cpus_done(unsigned int max_cpus)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 621e09d049cb..8f5585ebaf09 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -26,6 +26,8 @@
 
 #include "mm_internal.h"
 
+DEFINE_STATIC_KEY_FALSE(tlb_ipi_broadcast_key);
+
 #ifdef CONFIG_PARAVIRT
 # define STATIC_NOPV
 #else
@@ -1834,3 +1836,16 @@ static int __init create_tlb_single_page_flush_ceiling(void)
 	return 0;
 }
 late_initcall(create_tlb_single_page_flush_ceiling);
+
+void __init native_pv_tlb_init(void)
+{
+#ifdef CONFIG_PARAVIRT
+	if (pv_ops.mmu.flush_tlb_multi != native_flush_tlb_multi)
+		return;
+#endif
+
+	if (cpu_feature_enabled(X86_FEATURE_INVLPGB))
+		return;
+
+	static_branch_enable(&tlb_ipi_broadcast_key);
+}
-- 
2.49.0