From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 14 Apr 2026 13:22:53 -0700
From: Minchan Kim
To: "David Hildenbrand (Arm)"
Cc: akpm@linux-foundation.org, mhocko@suse.com, brauner@kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, surenb@google.com,
 timmurray@google.com
Subject: Re: [RFC 2/3] mm: process_mrelease: skip LRU movement for exclusive file folios
References: <20260413223948.556351-1-minchan@kernel.org>
 <20260413223948.556351-3-minchan@kernel.org>
 <0c22f74a-ff3d-405b-8d9e-a7222f5b9dfe@kernel.org>
In-Reply-To: <0c22f74a-ff3d-405b-8d9e-a7222f5b9dfe@kernel.org>

On Tue, Apr 14, 2026 at 09:20:25AM +0200, David Hildenbrand (Arm) wrote:
> On 4/14/26 00:39, Minchan Kim wrote:
> > For the process_mrelease reclaim, skip LRU handling for exclusive
> > file-backed folios: they will be freed shortly, so it is pointless
> > to move them around in the LRU.
> >
> > This avoids the costly LRU movement, which accounts for a significant
> > portion of the time spent in unmap_page_range:
> >
> > -   91.31%     0.00%  mmap_exit_test  [kernel.kallsyms]  [.] exit_mm
> >      exit_mm
> >      __mmput
> >      exit_mmap
> >      unmap_vmas
> >    - unmap_page_range
> >       - 55.75% folio_mark_accessed
> >          + 48.79% __folio_batch_add_and_move
> >            4.23% workingset_activation
> >       + 12.94% folio_remove_rmap_ptes
> >       + 9.86% page_table_check_clear
> >       + 3.34% tlb_flush_mmu
> >         1.06% __page_table_check_pte_clear
> >
> > Signed-off-by: Minchan Kim
> > ---
> >  mm/memory.c | 13 ++++++++++++-
> >  1 file changed, 12 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 2f815a34d924..25e17893c919 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -1640,6 +1640,8 @@ static __always_inline void zap_present_folio_ptes(struct mmu_gather *tlb,
> >  	bool delay_rmap = false;
> >
> >  	if (!folio_test_anon(folio)) {
> > +		bool skip_mark_accessed;
> > +
> >  		ptent = get_and_clear_full_ptes(mm, addr, pte, nr, tlb->fullmm);
> >  		if (pte_dirty(ptent)) {
> >  			folio_mark_dirty(folio);
> > @@ -1648,7 +1650,16 @@ static __always_inline void zap_present_folio_ptes(struct mmu_gather *tlb,
> >  				*force_flush = true;
> >  			}
> >  		}
> > -		if (pte_young(ptent) && likely(vma_has_recency(vma)))
> > +
> > +		/*
> > +		 * For the process_mrelease reclaim, skip LRU handling for
> > +		 * exclusive file-backed folios: they will be freed shortly,
> > +		 * so it is pointless to move them around in the LRU.
> > +		 */
> > +		skip_mark_accessed = mm_flags_test(MMF_UNSTABLE, mm) &&
> > +				folio_mapcount(folio) < 2;
>
> folio_mapcount() is most certainly the wrong thing to use if you want to
> handle large folios properly.
>
> Maybe !folio_likely_mapped_shared() is what you are looking for.

Maybe. I didn't know about that helper. I will use
folio_maybe_mapped_shared() in the next revision.

Thank you!
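
To make the plan concrete, the reworked check would look roughly like
this -- a sketch only, not the posted patch, assuming the
folio_maybe_mapped_shared() helper mentioned above and keeping the rest
of the hunk as in the RFC:

	/*
	 * Sketch: !folio_maybe_mapped_shared() replaces the
	 * folio_mapcount(folio) < 2 test so large folios mapped only by
	 * this process are still treated as exclusive.
	 */
	skip_mark_accessed = mm_flags_test(MMF_UNSTABLE, mm) &&
			!folio_maybe_mapped_shared(folio);
	if (pte_young(ptent) && likely(vma_has_recency(vma)) &&
	    !skip_mark_accessed)
		folio_mark_accessed(folio);

The intent is unchanged: under process_mrelease (MMF_UNSTABLE), folios
that no other process maps are about to be freed anyway, so the
folio_mark_accessed() LRU batching is skipped for them.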