From: Valentin Schneider
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, rcu@vger.kernel.org,
	x86@kernel.org, linux-arm-kernel@lists.infradead.org,
	loongarch@lists.linux.dev, linux-riscv@lists.infradead.org,
	linux-arch@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	"H. Peter Anvin", Andy Lutomirski, Peter Zijlstra,
	Arnaldo Carvalho de Melo, Josh Poimboeuf, Paolo Bonzini,
	Arnd Bergmann, Frederic Weisbecker, "Paul E. McKenney",
	Jason Baron, Steven Rostedt, Ard Biesheuvel, Sami Tolvanen,
	"David S. Miller", Neeraj Upadhyay, Joel Fernandes, Josh Triplett,
	Boqun Feng, Uladzislau Rezki, Mathieu Desnoyers, Mel Gorman,
	Andrew Morton, Masahiro Yamada, Han Shen, Rik van Riel, Jann Horn,
	Dan Carpenter, Oleg Nesterov, Juri Lelli, Clark Williams,
	Yair Podemsky, Marcelo Tosatti, Daniel Wagner, Petr Tesarik,
	Shrikanth Hegde
Subject: [RFC PATCH v7 30/31] x86/mm, mm/vmalloc: Defer kernel TLB flush IPIs under CONFIG_COALESCE_TLBI=y
Date: Fri, 14 Nov 2025 16:14:27 +0100
Message-ID: <20251114151428.1064524-10-vschneid@redhat.com>
In-Reply-To: <20251114150133.1056710-1-vschneid@redhat.com>
References: <20251114150133.1056710-1-vschneid@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Previous commits have added an unconditional TLB flush right after
switching to the kernel CR3 on NOHZ_FULL CPUs, and a software signal to
determine whether a CPU has its kernel CR3 loaded.

Using these two components, we can now safely defer kernel TLB flush IPIs
targeting NOHZ_FULL CPUs executing in userspace (i.e. with the user CR3
loaded).

Note that the COALESCE_TLBI config option is introduced in a later commit,
when the whole feature is implemented.
Signed-off-by: Valentin Schneider
---
 arch/x86/include/asm/tlbflush.h |  3 +++
 arch/x86/mm/tlb.c               | 34 ++++++++++++++++++++++++++-------
 mm/vmalloc.c                    | 34 ++++++++++++++++++++++++++++-----
 3 files changed, 59 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index e39ae95b85072..6d533afd70952 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -321,6 +321,9 @@ extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 				unsigned long end, unsigned int stride_shift,
 				bool freed_tables);
 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
+#ifdef CONFIG_COALESCE_TLBI
+extern void flush_tlb_kernel_range_deferrable(unsigned long start, unsigned long end);
+#endif
 
 static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
 {
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 5d221709353e0..1ce80f8775e7a 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -12,6 +12,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
@@ -1529,23 +1530,24 @@ static void do_kernel_range_flush(void *info)
 		flush_tlb_one_kernel(addr);
 }
 
-static void kernel_tlb_flush_all(struct flush_tlb_info *info)
+static void kernel_tlb_flush_all(smp_cond_func_t cond, struct flush_tlb_info *info)
 {
 	if (cpu_feature_enabled(X86_FEATURE_INVLPGB))
 		invlpgb_flush_all();
 	else
-		on_each_cpu(do_flush_tlb_all, NULL, 1);
+		on_each_cpu_cond(cond, do_flush_tlb_all, NULL, 1);
 }
 
-static void kernel_tlb_flush_range(struct flush_tlb_info *info)
+static void kernel_tlb_flush_range(smp_cond_func_t cond, struct flush_tlb_info *info)
 {
 	if (cpu_feature_enabled(X86_FEATURE_INVLPGB))
 		invlpgb_kernel_range_flush(info);
 	else
-		on_each_cpu(do_kernel_range_flush, info, 1);
+		on_each_cpu_cond(cond, do_kernel_range_flush, info, 1);
 }
 
-void flush_tlb_kernel_range(unsigned long start, unsigned long end)
+static inline void
+__flush_tlb_kernel_range(smp_cond_func_t cond,
+			 unsigned long start, unsigned long end)
 {
 	struct flush_tlb_info *info;
 
@@ -1555,13 +1557,31 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 				  TLB_GENERATION_INVALID);
 
 	if (info->end == TLB_FLUSH_ALL)
-		kernel_tlb_flush_all(info);
+		kernel_tlb_flush_all(cond, info);
 	else
-		kernel_tlb_flush_range(info);
+		kernel_tlb_flush_range(cond, info);
 
 	put_flush_tlb_info();
 }
 
+void flush_tlb_kernel_range(unsigned long start, unsigned long end)
+{
+	__flush_tlb_kernel_range(NULL, start, end);
+}
+
+#ifdef CONFIG_COALESCE_TLBI
+static bool flush_tlb_kernel_cond(int cpu, void *info)
+{
+	return housekeeping_cpu(cpu, HK_TYPE_KERNEL_NOISE) ||
+	       per_cpu(kernel_cr3_loaded, cpu);
+}
+
+void flush_tlb_kernel_range_deferrable(unsigned long start, unsigned long end)
+{
+	__flush_tlb_kernel_range(flush_tlb_kernel_cond, start, end);
+}
+#endif
+
 /*
  * This can be used from process context to figure out what the value of
  * CR3 is without needing to do a (slow) __read_cr3().
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 798b2ed21e460..76ec10d56623b 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -494,6 +494,30 @@ void vunmap_range_noflush(unsigned long start, unsigned long end)
 	__vunmap_range_noflush(start, end);
 }
 
+#ifdef CONFIG_COALESCE_TLBI
+/*
+ * !!! BIG FAT WARNING !!!
+ *
+ * The CPU is free to cache any part of the paging hierarchy it wants at any
+ * time. It's also free to set accessed and dirty bits at any time, even for
+ * instructions that may never execute architecturally.
+ *
+ * This means that deferring a TLB flush affecting freed page-table-pages
+ * (IOW, keeping them in a CPU's paging hierarchy cache) is a recipe for
+ * disaster.
+ *
+ * This isn't a problem for deferral of TLB flushes in vmalloc, because
+ * page-table-pages used for vmap() mappings are never freed - see how
+ * __vunmap_range_noflush() walks the whole mapping but only clears the leaf
+ * PTEs. If this ever changes, TLB flush deferral will cause misery.
+ */
+void __weak flush_tlb_kernel_range_deferrable(unsigned long start, unsigned long end)
+{
+	flush_tlb_kernel_range(start, end);
+}
+#else
+#define flush_tlb_kernel_range_deferrable(start, end) flush_tlb_kernel_range(start, end)
+#endif
+
 /**
  * vunmap_range - unmap kernel virtual addresses
  * @addr: start of the VM area to unmap
@@ -507,7 +531,7 @@ void vunmap_range(unsigned long addr, unsigned long end)
 {
 	flush_cache_vunmap(addr, end);
 	vunmap_range_noflush(addr, end);
-	flush_tlb_kernel_range(addr, end);
+	flush_tlb_kernel_range_deferrable(addr, end);
 }
 
 static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
@@ -2339,7 +2363,7 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end,
 
 	nr_purge_nodes = cpumask_weight(&purge_nodes);
 	if (nr_purge_nodes > 0) {
-		flush_tlb_kernel_range(start, end);
+		flush_tlb_kernel_range_deferrable(start, end);
 
 		/* One extra worker is per a lazy_max_pages() full set minus one. */
 		nr_purge_helpers = atomic_long_read(&vmap_lazy_nr) / lazy_max_pages();
@@ -2442,7 +2466,7 @@ static void free_unmap_vmap_area(struct vmap_area *va)
 	flush_cache_vunmap(va->va_start, va->va_end);
 	vunmap_range_noflush(va->va_start, va->va_end);
 	if (debug_pagealloc_enabled_static())
-		flush_tlb_kernel_range(va->va_start, va->va_end);
+		flush_tlb_kernel_range_deferrable(va->va_start, va->va_end);
 
 	free_vmap_area_noflush(va);
 }
@@ -2890,7 +2914,7 @@ static void vb_free(unsigned long addr, unsigned long size)
 	vunmap_range_noflush(addr, addr + size);
 
 	if (debug_pagealloc_enabled_static())
-		flush_tlb_kernel_range(addr, addr + size);
+		flush_tlb_kernel_range_deferrable(addr, addr + size);
 
 	spin_lock(&vb->lock);
 
@@ -2955,7 +2979,7 @@ static void _vm_unmap_aliases(unsigned long start, unsigned long end, int flush)
 	free_purged_blocks(&purge_list);
 
 	if (!__purge_vmap_area_lazy(start, end, false) && flush)
-		flush_tlb_kernel_range(start, end);
+		flush_tlb_kernel_range_deferrable(start, end);
 
 	mutex_unlock(&vmap_purge_lock);
 }
-- 
2.51.0