* [PATCH v4 0/4] mm: batched unmap lazyfree large folios during reclamation
@ 2025-02-14  9:30 Barry Song
  2025-02-14  9:30 ` [PATCH v4 1/4] mm: Set folio swapbacked iff folios are dirty in try_to_unmap_one Barry Song
                   ` (4 more replies)
  0 siblings, 5 replies; 45+ messages in thread
From: Barry Song @ 2025-02-14  9:30 UTC (permalink / raw)
  To: akpm, linux-mm
  Cc: 21cnbao, baolin.wang, chrisl, david, ioworker0, kasong,
	linux-arm-kernel, linux-kernel, linux-riscv, lorenzo.stoakes,
	ryan.roberts, v-songbaohua, x86, ying.huang, zhengtangquan

From: Barry Song <v-songbaohua@oppo.com>

Commit 735ecdfaf4e8 ("mm/vmscan: avoid split lazyfree THP during
shrink_folio_list()") prevents the splitting of MADV_FREE'd THP in
madvise.c. However, those folios are still added to the deferred_split
list in try_to_unmap_one(), because we unmap PTEs and remove rmap
entries one by one.

Firstly, this makes the following counter misleading:
/sys/kernel/mm/transparent_hugepage/hugepages-size/stats/split_deferred
The split_deferred counter was originally designed to track operations
such as partial unmap or madvise of large folios. In practice, however,
most split_deferred events arise from memory reclamation of aligned
lazyfree mTHPs, as observed by Tangquan. This discrepancy has made the
split_deferred counter highly misleading.

Secondly, this approach is slow: it iterates over each PTE of a large
folio and removes the rmap one by one. Instead, all PTEs of a
PTE-mapped large folio should be unmapped at once, and the entire
folio removed from the rmap as a whole.

Thirdly, it increases the risk of a race condition in which lazyfree
folios are incorrectly set back to swapbacked, because a speculative
folio_get may occur in the shrinker's callback:
deferred_split_scan() can call folio_try_get(folio) once we have added
the folio to the deferred_split list while removing the rmap for the
1st subpage. While we are still scanning the 2nd through nr_pages PTEs
of this folio in try_to_unmap_one(), the incremented reference count
can make the "ref_count == 1 + map_count" check in try_to_unmap_one()
fail, transitioning the entire mTHP back to swap-backed.

   /*
    * The only page refs must be one from isolation
    * plus the rmap(s) (dropped by discard:).
    */
   if (ref_count == 1 + map_count &&
       (!folio_test_dirty(folio) ||
        ...
        (vma->vm_flags & VM_DROPPABLE))) {
           dec_mm_counter(mm, MM_ANONPAGES);
           goto discard;
   }
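The invariant can be illustrated with a small userspace model (not
kernel code; the struct, its fields, and can_discard() are simplified
stand-ins for the folio refcount/mapcount logic above):

```c
#include <stdbool.h>

/* Userspace model of the check above: a lazyfree folio may be
 * discarded only when its sole references are the isolation ref (1)
 * plus the rmap entries (map_count). A speculative folio_try_get()
 * from deferred_split_scan() bumps ref_count and defeats the test. */
struct folio_model {
	int ref_count;	/* total page references */
	int map_count;	/* rmap entries */
	bool dirty;	/* redirtied after MADV_FREE */
};

bool can_discard(const struct folio_model *f)
{
	return f->ref_count == 1 + f->map_count && !f->dirty;
}
```

With 16 rmap entries plus the isolation ref, can_discard() holds; a
single speculative get breaks it, which is exactly the window the race
exploits.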

This patchset resolves the issue by marking only genuinely dirty folios
as swap-backed, as suggested by David, and by switching to batched
unmapping of entire folios in try_to_unmap_one(). Consequently, the
deferred_split count drops to zero, and memory reclamation performance
improves significantly: reclaiming 64KiB lazyfree large folios is now
2.5x faster (the specific data is embedded in the changelog of patch
3/4).
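The gain comes from doing one rmap removal per folio instead of one
per PTE. A rough userspace sketch of the idea (illustrative names
only, not the kernel API):

```c
#include <stddef.h>

/* Per-PTE unmap: each cleared PTE is followed by its own single-page
 * rmap removal, so a folio with nr PTEs costs nr rmap operations. */
int unmap_one_by_one(int ptes[], size_t nr)
{
	int rmap_ops = 0;
	size_t i;

	for (i = 0; i < nr; i++) {
		ptes[i] = 0;	/* clear one PTE */
		rmap_ops++;	/* remove one rmap entry */
	}
	return rmap_ops;
}

/* Batched unmap: clear the whole contiguous PTE range, then remove
 * the folio from the rmap once as a whole. */
int unmap_batched(int ptes[], size_t nr)
{
	size_t i;

	for (i = 0; i < nr; i++)
		ptes[i] = 0;	/* clear the entire range */
	return 1;		/* one rmap removal for the folio */
}
```

For a 64KiB folio with 16 PTEs this turns 16 rmap operations into one,
and (just as importantly) the folio never transiently lands on the
deferred_split list mid-unmap.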

By the way, while the patchset is primarily aimed at PTE-mapped large
folios, Baolin and Lance also found that try_to_unmap_one() handles
lazyfree redirtied PMD-mapped large folios inefficiently: it splits
the PMD into PTEs and iterates over them. This patchset removes the
unnecessary splitting, enabling us to skip redirtied PMD-mapped large
folios 3.5x faster during memory reclamation (the specific data can
be found in the changelog of patch 4/4).
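The PMD-mapped case reduces to an early bail-out: a redirtied lazyfree
folio cannot be discarded anyway, so splitting the PMD into PTEs just
to discover that is wasted work. A hedged sketch of the decision (the
enum and function are illustrative, not the kernel's):

```c
#include <stdbool.h>

enum reclaim_action {
	SPLIT_PMD_AND_SCAN,	/* old path: split, then walk PTEs */
	SKIP_FOLIO,		/* new path: keep the PMD, move on */
	DISCARD_FOLIO,		/* clean lazyfree folio: reclaim it */
};

/* For a PMD-mapped lazyfree folio, check dirtiness up front: if it
 * was redirtied, skip it without ever splitting the PMD. */
enum reclaim_action reclaim_pmd_mapped_lazyfree(bool redirtied)
{
	if (redirtied)
		return SKIP_FOLIO;
	return DISCARD_FOLIO;
}
```

The old path paid for SPLIT_PMD_AND_SCAN in both outcomes; the new one
only touches page tables when the folio is actually reclaimable.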

-v4:
 * collect reviewed-by of Kefeng, Baolin, Lance, thanks!
 * rebase on top of David's "mm: fixes for device-exclusive entries
(hmm)" patchset v2:
 https://lore.kernel.org/all/20250210193801.781278-1-david@redhat.com/

-v3:
https://lore.kernel.org/all/20250115033808.40641-1-21cnbao@gmail.com/ 

 * collect reviewed-by and acked-by of Baolin, David, Lance and Will.
   thanks!
 * refine pmd-mapped THP lazyfree code per Baolin and Lance.
 * refine tlbbatch deferred flushing range support code per David.

-v2:
 https://lore.kernel.org/linux-mm/20250113033901.68951-1-21cnbao@gmail.com/

 * describe backgrounds and problems more clearly in cover-letter per
   Lorenzo Stoakes;
 * also handle redirtied pmd-mapped large folios per Baolin and Lance;
 * handle some corner cases such as HWPOSION, pte_unused;
 * fix riscv and x86 build issues.

-v1:
 https://lore.kernel.org/linux-mm/20250106031711.82855-1-21cnbao@gmail.com/

Barry Song (4):
  mm: Set folio swapbacked iff folios are dirty in try_to_unmap_one
  mm: Support tlbbatch flush for a range of PTEs
  mm: Support batched unmap for lazyfree large folios during reclamation
  mm: Avoid splitting pmd for lazyfree pmd-mapped THP in try_to_unmap

 arch/arm64/include/asm/tlbflush.h |  23 +++--
 arch/arm64/mm/contpte.c           |   2 +-
 arch/riscv/include/asm/tlbflush.h |   3 +-
 arch/riscv/mm/tlbflush.c          |   3 +-
 arch/x86/include/asm/tlbflush.h   |   3 +-
 mm/huge_memory.c                  |  24 ++++--
 mm/rmap.c                         | 136 ++++++++++++++++++------------
 7 files changed, 115 insertions(+), 79 deletions(-)

-- 
2.39.3 (Apple Git-146)


