From: Dev Jain <dev.jain@arm.com>
To: akpm@linux-foundation.org, david@kernel.org, hughd@google.com,
chrisl@kernel.org
Cc: ljs@kernel.org, Liam.Howlett@oracle.com, vbabka@kernel.org,
rppt@kernel.org, surenb@google.com, mhocko@suse.com,
kasong@tencent.com, qi.zheng@linux.dev, shakeel.butt@linux.dev,
baohua@kernel.org, axelrasmussen@google.com, yuanchu@google.com,
weixugc@google.com, riel@surriel.com, harry@kernel.org,
jannh@google.com, pfalcato@suse.de,
baolin.wang@linux.alibaba.com, shikemeng@huaweicloud.com,
nphamcs@gmail.com, bhe@redhat.com, youngjun.park@lge.com,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
ryan.roberts@arm.com, anshuman.khandual@arm.com,
Dev Jain <dev.jain@arm.com>
Subject: [PATCH v2 3/9] mm/rmap: refactor some code around lazyfree folio unmapping
Date: Fri, 10 Apr 2026 16:01:58 +0530
Message-ID: <20260410103204.120409-4-dev.jain@arm.com>
In-Reply-To: <20260410103204.120409-1-dev.jain@arm.com>

For lazyfree folio unmapping, after clearing the PTEs we must abort the
operation if the folio has been redirtied or has unexpected references.
Refactor this logic into a helper which returns whether we need to
abort. If we abort, restore the PTEs and bail out of try_to_unmap_one;
otherwise, adjust the RSS stats of the mm and jump to a label.
Also rename that label from "discard" to "finish_unmap": the former is
appropriate in the lazyfree context, but the code following the label is
also executed on other successful unmap paths, for which "discard" is a
misnomer.

Signed-off-by: Dev Jain <dev.jain@arm.com>
---
mm/rmap.c | 95 ++++++++++++++++++++++++++++++++-----------------------
1 file changed, 55 insertions(+), 40 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index a9c43e2f6e695..fa5d6599dedf0 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1978,6 +1978,56 @@ static inline unsigned int folio_unmap_pte_batch(struct folio *folio,
FPB_RESPECT_WRITE | FPB_RESPECT_SOFT_DIRTY);
}
+static inline bool can_unmap_lazyfree_folio_range(struct vm_area_struct *vma,
+ struct folio *folio, unsigned long address, pte_t *ptep,
+ pte_t pteval, unsigned long nr_pages)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ int ref_count, map_count;
+
+ /*
+ * Synchronize with gup_pte_range():
+ * - clear PTE; barrier; read refcount
+ * - inc refcount; barrier; read PTE
+ */
+ smp_mb();
+
+ ref_count = folio_ref_count(folio);
+ map_count = folio_mapcount(folio);
+
+ /*
+ * Order reads for page refcount and dirty flag
+ * (see comments in __remove_mapping()).
+ */
+ smp_rmb();
+
+ if (folio_test_dirty(folio) && !(vma->vm_flags & VM_DROPPABLE)) {
+ /*
+ * redirtied either using the page table or a previously
+ * obtained GUP reference.
+ */
+ set_ptes(mm, address, ptep, pteval, nr_pages);
+ folio_set_swapbacked(folio);
+ return false;
+ }
+
+ if (ref_count != 1 + map_count) {
+ /*
+ * Additional reference. Could be a GUP reference or any
+ * speculative reference. GUP users must mark the folio
+ * dirty if there was a modification. This folio cannot be
+ * reclaimed right now either way, so act just like nothing
+ * happened.
+ * We'll come back here later and detect if the folio was
+ * dirtied when the additional reference is gone.
+ */
+ set_ptes(mm, address, ptep, pteval, nr_pages);
+ return false;
+ }
+
+ return true;
+}
+
static inline bool unmap_hugetlb_folio(struct vm_area_struct *vma,
struct folio *folio, struct page_vma_mapped_walk *pvmw,
struct page *page, enum ttu_flags flags, pte_t *pteval,
@@ -2256,47 +2306,12 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
/* MADV_FREE page check */
if (!folio_test_swapbacked(folio)) {
- int ref_count, map_count;
-
- /*
- * Synchronize with gup_pte_range():
- * - clear PTE; barrier; read refcount
- * - inc refcount; barrier; read PTE
- */
- smp_mb();
-
- ref_count = folio_ref_count(folio);
- map_count = folio_mapcount(folio);
-
- /*
- * Order reads for page refcount and dirty flag
- * (see comments in __remove_mapping()).
- */
- smp_rmb();
-
- if (folio_test_dirty(folio) && !(vma->vm_flags & VM_DROPPABLE)) {
- /*
- * redirtied either using the page table or a previously
- * obtained GUP reference.
- */
- set_ptes(mm, address, pvmw.pte, pteval, nr_pages);
- folio_set_swapbacked(folio);
+ if (!can_unmap_lazyfree_folio_range(vma, folio, address,
+ pvmw.pte, pteval, nr_pages))
goto walk_abort;
- } else if (ref_count != 1 + map_count) {
- /*
- * Additional reference. Could be a GUP reference or any
- * speculative reference. GUP users must mark the folio
- * dirty if there was a modification. This folio cannot be
- * reclaimed right now either way, so act just like nothing
- * happened.
- * We'll come back here later and detect if the folio was
- * dirtied when the additional reference is gone.
- */
- set_ptes(mm, address, pvmw.pte, pteval, nr_pages);
- goto walk_abort;
- }
+
add_mm_counter(mm, MM_ANONPAGES, -nr_pages);
- goto discard;
+ goto finish_unmap;
}
if (folio_dup_swap(folio, subpage) < 0) {
@@ -2359,7 +2374,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
*/
add_mm_counter(mm, mm_counter_file(folio), -nr_pages);
}
-discard:
+finish_unmap:
if (unlikely(folio_test_hugetlb(folio))) {
hugetlb_remove_rmap(folio);
} else {
--
2.34.1
Thread overview: 11+ messages
2026-04-10 10:31 [PATCH v2 0/9] Optimize anonymous large folio unmapping Dev Jain
2026-04-10 10:31 ` [PATCH v2 1/9] mm/rmap: initialize nr_pages to 1 at loop start in try_to_unmap_one Dev Jain
2026-04-10 10:31 ` [PATCH v2 2/9] mm/rmap: refactor hugetlb pte clearing " Dev Jain
2026-04-10 10:31 ` Dev Jain [this message]
2026-04-10 10:31 ` [PATCH v2 4/9] mm/memory: Batch set uffd-wp markers during zapping Dev Jain
2026-04-10 10:32 ` [PATCH v2 5/9] mm/rmap: batch unmap folios belonging to uffd-wp VMAs Dev Jain
2026-04-10 10:32 ` [PATCH v2 6/9] mm/swapfile: Add batched version of folio_dup_swap Dev Jain
2026-04-10 10:32 ` [PATCH v2 7/9] mm/swapfile: Add batched version of folio_put_swap Dev Jain
2026-04-10 10:32 ` [PATCH v2 8/9] mm/rmap: Add batched version of folio_try_share_anon_rmap_pte Dev Jain
2026-04-10 10:32 ` [PATCH v2 9/9] mm/rmap: enable batch unmapping of anonymous folios Dev Jain
2026-04-10 13:53 ` [PATCH v2 0/9] Optimize anonymous large folio unmapping Lorenzo Stoakes