From: Lance Yang <lance.yang@linux.dev>
To: akpm@linux-foundation.org, peterz@infradead.org
Cc: david@kernel.org, dave.hansen@intel.com, will@kernel.org,
	aneesh.kumar@kernel.org, npiggin@gmail.com, linux-arch@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Lance Yang <lance.yang@linux.dev>
Subject: [PATCH v2 1/1] mm/mmu_gather: replace IPI with synchronize_rcu() when batch allocation fails
Date: Tue, 24 Feb 2026 11:07:00 +0800
Message-ID: <20260224030700.35857-1-lance.yang@linux.dev>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Lance Yang <lance.yang@linux.dev>

When freeing page tables, we try to batch them. If batch allocation
fails (GFP_NOWAIT), __tlb_remove_table_one() immediately frees the
table without batching. On !CONFIG_PT_RECLAIM, that fallback sends an
IPI to all CPUs via tlb_remove_table_sync_one(), disrupting every CPU
even when only a single process is unmapping memory. The IPI broadcast
was reported to hurt RT workloads[1].

tlb_remove_table_sync_one() synchronizes with lockless page-table
walkers (e.g. GUP-fast) that rely on IRQ disabling. These walkers use
local_irq_disable(), which is also an RCU read-side critical section.
This patch introduces tlb_remove_table_sync_rcu(), which waits for an
RCU grace period (synchronize_rcu()) instead of broadcasting an IPI.
This provides the same guarantee as the IPI but without disrupting all
CPUs.
Since batch allocation already failed, we are on a slow path anyway,
where sleeping is acceptable - we are in process context (unmap_region,
exit_mmap) with only mmap_lock held. might_sleep() will catch any
invalid context.

[1] https://lore.kernel.org/linux-mm/1b27a3fa-359a-43d0-bdeb-c31341749367@kernel.org/

Link: https://lore.kernel.org/linux-mm/20260202150957.GD1282955@noisy.programming.kicks-ass.net/
Link: https://lore.kernel.org/linux-mm/dfdfeac9-5cd5-46fc-a5c1-9ccf9bd3502a@intel.com/
Link: https://lore.kernel.org/linux-mm/bc489455-bb18-44dc-8518-ae75abda6bec@kernel.org/
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Suggested-by: Dave Hansen <dave.hansen@intel.com>
Suggested-by: David Hildenbrand (Arm) <david@kernel.org>
Signed-off-by: Lance Yang <lance.yang@linux.dev>
---
v1 -> v2:
- Wrap synchronize_rcu() in tlb_remove_table_sync_rcu() with proper
  kerneldoc (per David)
- Add might_sleep() to make sleeping constraint explicit (per Dave)
- Clarify this is for synchronization, not memory freeing (per Dave)
- https://lore.kernel.org/linux-mm/20260223033604.10198-1-lance.yang@linux.dev/

 include/asm-generic/tlb.h |  4 ++++
 mm/mmu_gather.c           | 22 +++++++++++++++++++++-
 2 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 4aeac0c3d3f0..bdcc2778ac64 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -251,6 +251,8 @@ static inline void tlb_remove_table(struct mmu_gather *tlb, void *table)
 
 void tlb_remove_table_sync_one(void);
 
+void tlb_remove_table_sync_rcu(void);
+
 #else
 
 #ifdef tlb_needs_table_invalidate
@@ -259,6 +261,8 @@ void tlb_remove_table_sync_one(void);
 
 static inline void tlb_remove_table_sync_one(void) { }
 
+static inline void tlb_remove_table_sync_rcu(void) { }
+
 #endif /* CONFIG_MMU_GATHER_RCU_TABLE_FREE */
 
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index fe5b6a031717..2c6fa8db55df 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -296,6 +296,26 @@ static void tlb_remove_table_free(struct mmu_table_batch *batch)
 	call_rcu(&batch->rcu, tlb_remove_table_rcu);
 }
 
+/**
+ * tlb_remove_table_sync_rcu() - synchronize with software page-table walkers
+ *
+ * Like tlb_remove_table_sync_one() but uses RCU grace period instead of IPI
+ * broadcast. Use in slow paths where sleeping is acceptable.
+ *
+ * Software/Lockless page-table walkers use local_irq_disable(), which is also
+ * an RCU read-side critical section. synchronize_rcu() waits for all such
+ * sections, providing the same guarantee as tlb_remove_table_sync_one() but
+ * without disrupting all CPUs with IPIs.
+ *
+ * Do not use for freeing memory. Use RCU callbacks instead to avoid latency
+ * spikes. Cannot be called from any atomic context.
+ */
+void tlb_remove_table_sync_rcu(void)
+{
+	might_sleep();
+	synchronize_rcu();
+}
+
 #else /* !CONFIG_MMU_GATHER_RCU_TABLE_FREE */
 
 static void tlb_remove_table_free(struct mmu_table_batch *batch)
@@ -339,7 +359,7 @@ static inline void __tlb_remove_table_one(void *table)
 #else
 static inline void __tlb_remove_table_one(void *table)
 {
-	tlb_remove_table_sync_one();
+	tlb_remove_table_sync_rcu();
 	__tlb_remove_table(table);
 }
 #endif /* CONFIG_PT_RECLAIM */
-- 
2.49.0