linux-mm.kvack.org archive mirror
From: Wei Yang <richard.weiyang@gmail.com>
To: akpm@linux-foundation.org, david@redhat.com,
	lorenzo.stoakes@oracle.com, ziy@nvidia.com,
	baolin.wang@linux.alibaba.com, Liam.Howlett@oracle.com,
	npache@redhat.com, ryan.roberts@arm.com, dev.jain@arm.com,
	baohua@kernel.org, lance.yang@linux.dev
Cc: linux-mm@kvack.org, Wei Yang <richard.weiyang@gmail.com>
Subject: [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic
Date: Thu, 16 Oct 2025 00:46:13 +0000
Message-ID: <20251016004613.514-3-richard.weiyang@gmail.com> (raw)
In-Reply-To: <20251016004613.514-1-richard.weiyang@gmail.com>

The existing __split_unmapped_folio() code splits the given folio and
updates stats, but the logic is complicated to follow.

After simplification, __split_unmapped_folio() directly calculates and
updates the folio statistics upon a successful split:

* All resulting folios are @split_order.

* The number of new folios is calculated directly from @old_order
  and @split_order.

* The folio for the next split is identified as the one containing
  @split_at.

* An xas_try_split() error is returned directly without worrying
  about stats updates.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>

---
v2:
  * merge patch 2-5
  * retain start_order
  * new_folios -> nr_new_folios
  * add a comment at the end of the loop
---
 mm/huge_memory.c | 66 ++++++++++++++----------------------------------
 1 file changed, 19 insertions(+), 47 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 4b2d5a7e5c8e..68e851f5fcb2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3528,15 +3528,9 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 		struct address_space *mapping, bool uniform_split)
 {
 	bool is_anon = folio_test_anon(folio);
-	int order = folio_order(folio);
-	int start_order = uniform_split ? new_order : order - 1;
-	bool stop_split = false;
-	struct folio *next;
+	int old_order = folio_order(folio);
+	int start_order = uniform_split ? new_order : old_order - 1;
 	int split_order;
-	int ret = 0;
-
-	if (is_anon)
-		mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
 
 	folio_clear_has_hwpoisoned(folio);
 
@@ -3545,17 +3539,13 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 	 * folio is split to new_order directly.
 	 */
 	for (split_order = start_order;
-	     split_order >= new_order && !stop_split;
+	     split_order >= new_order;
 	     split_order--) {
-		struct folio *end_folio = folio_next(folio);
-		int old_order = folio_order(folio);
-		struct folio *new_folio;
+		int nr_new_folios = 1UL << (old_order - split_order);
 
 		/* order-1 anonymous folio is not supported */
 		if (is_anon && split_order == 1)
 			continue;
-		if (uniform_split && split_order != new_order)
-			continue;
 
 		if (mapping) {
 			/*
@@ -3568,49 +3558,31 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
 			else {
 				xas_set_order(xas, folio->index, split_order);
 				xas_try_split(xas, folio, old_order);
-				if (xas_error(xas)) {
-					ret = xas_error(xas);
-					stop_split = true;
-				}
+				if (xas_error(xas))
+					return xas_error(xas);
 			}
 		}
 
-		if (!stop_split) {
-			folio_split_memcg_refs(folio, old_order, split_order);
-			split_page_owner(&folio->page, old_order, split_order);
-			pgalloc_tag_split(folio, old_order, split_order);
+		folio_split_memcg_refs(folio, old_order, split_order);
+		split_page_owner(&folio->page, old_order, split_order);
+		pgalloc_tag_split(folio, old_order, split_order);
+		__split_folio_to_order(folio, old_order, split_order);
 
-			__split_folio_to_order(folio, old_order, split_order);
+		if (is_anon) {
+			mod_mthp_stat(old_order, MTHP_STAT_NR_ANON, -1);
+			mod_mthp_stat(split_order, MTHP_STAT_NR_ANON, nr_new_folios);
 		}
 
 		/*
-		 * Iterate through after-split folios and update folio stats.
-		 * But in buddy allocator like split, the folio
-		 * containing the specified page is skipped until its order
-		 * is new_order, since the folio will be worked on in next
-		 * iteration.
+		 * For uniform split, we have finished the job.
+		 * For non-uniform split, we move folio to the one
+		 * containing @split_at and update @old_order to @split_order.
 		 */
-		for (new_folio = folio; new_folio != end_folio; new_folio = next) {
-			next = folio_next(new_folio);
-			/*
-			 * for buddy allocator like split, new_folio containing
-			 * @split_at page could be split again, thus do not
-			 * change stats yet. Wait until new_folio's order is
-			 * @new_order or stop_split is set to true by the above
-			 * xas_split() failure.
-			 */
-			if (new_folio == page_folio(split_at)) {
-				folio = new_folio;
-				if (split_order != new_order && !stop_split)
-					continue;
-			}
-			if (is_anon)
-				mod_mthp_stat(folio_order(new_folio),
-					      MTHP_STAT_NR_ANON, 1);
-		}
+		folio = page_folio(split_at);
+		old_order = split_order;
 	}
 
-	return ret;
+	return 0;
 }
 
 bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
-- 
2.34.1



