From: Minchan Kim
To: akpm@linux-foundation.org
Cc: david@kernel.org, mhocko@suse.com, brauner@kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	surenb@google.com, timmurray@google.com, Minchan Kim
Subject: [RFC 1/3] mm: process_mrelease: expedite clean file folio reclaim via mmu_gather
Date: Mon, 13 Apr 2026 15:39:46 -0700
Message-ID: <20260413223948.556351-2-minchan@kernel.org>
X-Mailer: git-send-email 2.54.0.rc0.605.g598a273b03-goog
In-Reply-To: <20260413223948.556351-1-minchan@kernel.org>
References: <20260413223948.556351-1-minchan@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently, process_mrelease() unmaps the pages but leaves clean file
folios on the LRU list, relying on standard memory reclaim to
eventually free them. This delays the immediate recovery of system
memory under OOM or container shutdown scenarios.

This patch implements an expedited eviction mechanism for clean file
folios by integrating directly into the low-level TLB batching
infrastructure (mmu_gather). Instead of repeatedly locking and
evicting folios one by one inside the unmap loop
(zap_present_folio_ptes), we pass the MMF_UNSTABLE flag status down
to free_pages_and_swap_cache().
Within this single unified loop, anonymous pages are released via
free_swap_cache(), and file-backed folios are symmetrically truncated
via mapping_evict_folio(). This avoids introducing unnecessary data
structures, preserves TLB flush safety, and removes duplicate tree
traversals, resulting in a lean and responsive process_mrelease()
implementation.

Signed-off-by: Minchan Kim
---
 arch/s390/include/asm/tlb.h |  2 +-
 include/linux/swap.h        |  9 ++++++---
 mm/mmu_gather.c             |  8 +++++---
 mm/swap_state.c             | 19 +++++++++++++++++--
 4 files changed, 29 insertions(+), 9 deletions(-)

diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h
index 619fd41e710e..554842345ccd 100644
--- a/arch/s390/include/asm/tlb.h
+++ b/arch/s390/include/asm/tlb.h
@@ -62,7 +62,7 @@ static inline bool __tlb_remove_folio_pages(struct mmu_gather *tlb,
 	VM_WARN_ON_ONCE(delay_rmap);
 	VM_WARN_ON_ONCE(page_folio(page) != page_folio(page + nr_pages - 1));
 
-	free_pages_and_swap_cache(encoded_pages, ARRAY_SIZE(encoded_pages));
+	free_pages_and_caches(encoded_pages, ARRAY_SIZE(encoded_pages), false);
 	return false;
 }
 
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 62fc7499b408..e7b929b062f8 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -433,7 +433,7 @@ static inline unsigned long total_swapcache_pages(void)
 void free_swap_cache(struct folio *folio);
 void free_folio_and_swap_cache(struct folio *folio);
-void free_pages_and_swap_cache(struct encoded_page **, int);
+void free_pages_and_caches(struct encoded_page **pages, int nr, bool free_unmapped_file);
 
 /* linux/mm/swapfile.c */
 extern atomic_long_t nr_swap_pages;
 extern long total_swap_pages;
@@ -510,8 +510,11 @@ static inline void put_swap_device(struct swap_info_struct *si)
 	do { (val)->freeswap = (val)->totalswap = 0; } while (0)
 
 #define free_folio_and_swap_cache(folio) \
 	folio_put(folio)
-#define free_pages_and_swap_cache(pages, nr) \
-	release_pages((pages), (nr));
+static inline void free_pages_and_caches(struct encoded_page **pages,
+		int nr, bool free_unmapped_file)
+{
+	release_pages(pages, nr);
+}
 
 static inline void free_swap_cache(struct folio *folio)
 {
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index fe5b6a031717..5ce5824db07f 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -100,7 +100,8 @@ void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
  */
 #define MAX_NR_FOLIOS_PER_FREE	512
 
-static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch)
+static void __tlb_batch_free_encoded_pages(struct mm_struct *mm,
+					   struct mmu_gather_batch *batch)
 {
 	struct encoded_page **pages = batch->encoded_pages;
 	unsigned int nr, nr_pages;
@@ -135,7 +136,8 @@ static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch)
 		}
 	}
 
-	free_pages_and_swap_cache(pages, nr);
+	free_pages_and_caches(pages, nr,
+			      mm_flags_test(MMF_UNSTABLE, mm));
 
 	pages += nr;
 	batch->nr -= nr;
@@ -148,7 +150,7 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb)
 	struct mmu_gather_batch *batch;
 
 	for (batch = &tlb->local; batch && batch->nr; batch = batch->next)
-		__tlb_batch_free_encoded_pages(batch);
+		__tlb_batch_free_encoded_pages(tlb->mm, batch);
 
 	tlb->active = &tlb->local;
 }
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 6d0eef7470be..e70a52ead6d3 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -400,11 +400,22 @@ void free_folio_and_swap_cache(struct folio *folio)
 	folio_put(folio);
 }
 
+static inline void free_file_cache(struct folio *folio)
+{
+	if (folio_trylock(folio)) {
+		mapping_evict_folio(folio_mapping(folio), folio);
+		folio_unlock(folio);
+	}
+}
+
 /*
  * Passed an array of pages, drop them all from swapcache and then release
  * them. They are removed from the LRU and freed if this is their last use.
+ *
+ * If @free_unmapped_file is true, this function will proactively evict clean
+ * file-backed folios if they are no longer mapped.
  */
-void free_pages_and_swap_cache(struct encoded_page **pages, int nr)
+void free_pages_and_caches(struct encoded_page **pages, int nr, bool free_unmapped_file)
 {
 	struct folio_batch folios;
 	unsigned int refs[PAGEVEC_SIZE];
@@ -413,7 +424,11 @@ void free_pages_and_swap_cache(struct encoded_page **pages, int nr)
 	for (int i = 0; i < nr; i++) {
 		struct folio *folio = page_folio(encoded_page_ptr(pages[i]));
 
-		free_swap_cache(folio);
+		if (folio_test_anon(folio))
+			free_swap_cache(folio);
+		else if (unlikely(free_unmapped_file))
+			free_file_cache(folio);
+
 		refs[folios.nr] = 1;
 		if (unlikely(encoded_page_flags(pages[i]) &
 				ENCODED_PAGE_BIT_NR_PAGES_NEXT))
-- 
2.54.0.rc0.605.g598a273b03-goog