* [Patch v3 0/4] mm/huge_memory: cleanup __split_unmapped_folio()
@ 2025-10-21 21:21 Wei Yang
2025-10-21 21:21 ` [Patch v3 1/4] mm/huge_memory: avoid reinvoking folio_test_anon() Wei Yang
` (3 more replies)
0 siblings, 4 replies; 24+ messages in thread
From: Wei Yang @ 2025-10-21 21:21 UTC (permalink / raw)
To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang
Cc: linux-mm, Wei Yang
This short patch series cleans up and optimizes the internal logic of
the __split_unmapped_folio() function.
The goal is to improve clarity and efficiency by eliminating redundant
checks, caching stable attribute values, and simplifying the iteration
logic used for updating folio statistics.
These changes make the code easier to follow and maintain.
The split_huge_page_test selftest passes.
v3:
* only merge patches 4 & 5 from v1
* refine the comment
v2:
* merge patch 2-5
* http://lkml.kernel.org/r/20251016004613.514-1-richard.weiyang@gmail.com
v1:
* http://lkml.kernel.org/r/20251014134606.22543-1-richard.weiyang@gmail.com
Wei Yang (4):
mm/huge_memory: avoid reinvoking folio_test_anon()
mm/huge_memory: update folio stat after successful split
mm/huge_memory: optimize and simplify folio stat update after split
mm/huge_memory: optimize old_order derivation during folio splitting
mm/huge_memory.c | 70 +++++++++++++++---------------------------------
1 file changed, 21 insertions(+), 49 deletions(-)
--
2.34.1
* [Patch v3 1/4] mm/huge_memory: avoid reinvoking folio_test_anon()
2025-10-21 21:21 [Patch v3 0/4] mm/huge_memory: cleanup __split_unmapped_folio() Wei Yang
@ 2025-10-21 21:21 ` Wei Yang
2025-10-24 14:08 ` Lorenzo Stoakes
2025-10-21 21:21 ` [Patch v3 2/4] mm/huge_memory: update folio stat after successful split Wei Yang
` (2 subsequent siblings)
3 siblings, 1 reply; 24+ messages in thread
From: Wei Yang @ 2025-10-21 21:21 UTC (permalink / raw)
To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang
Cc: linux-mm, Wei Yang, Liam Howlett
During the execution of __split_unmapped_folio(), the folio's anon/!anon
attribute is invariant (not expected to change).
Therefore, it is safe and more efficient to retrieve this attribute once
at the start and reuse it throughout the function.
Link: https://lkml.kernel.org/r/20251016004613.514-1-richard.weiyang@gmail.com
Link: https://lkml.kernel.org/r/20251016004613.514-2-richard.weiyang@gmail.com
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: wang lian <lianux.mm@gmail.com>
Reviewed-by: Barry Song <baohua@kernel.org>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Lance Yang <lance.yang@linux.dev>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Mariano Pache <npache@redhat.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
---
v3:
* adjust subject
* add const to variable is_anon
---
mm/huge_memory.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 370ecfd6a182..6d82df4a88dc 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3595,6 +3595,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
struct page *split_at, struct xa_state *xas,
struct address_space *mapping, bool uniform_split)
{
+ const bool is_anon = folio_test_anon(folio);
int order = folio_order(folio);
int start_order = uniform_split ? new_order : order - 1;
bool stop_split = false;
@@ -3602,7 +3603,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
int split_order;
int ret = 0;
- if (folio_test_anon(folio))
+ if (is_anon)
mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
folio_clear_has_hwpoisoned(folio);
@@ -3619,7 +3620,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
struct folio *new_folio;
/* order-1 anonymous folio is not supported */
- if (folio_test_anon(folio) && split_order == 1)
+ if (is_anon && split_order == 1)
continue;
if (uniform_split && split_order != new_order)
continue;
@@ -3671,7 +3672,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
if (split_order != new_order && !stop_split)
continue;
}
- if (folio_test_anon(new_folio))
+ if (is_anon)
mod_mthp_stat(folio_order(new_folio),
MTHP_STAT_NR_ANON, 1);
}
--
2.34.1
* [Patch v3 2/4] mm/huge_memory: update folio stat after successful split
2025-10-21 21:21 [Patch v3 0/4] mm/huge_memory: cleanup __split_unmapped_folio() Wei Yang
2025-10-21 21:21 ` [Patch v3 1/4] mm/huge_memory: avoid reinvoking folio_test_anon() Wei Yang
@ 2025-10-21 21:21 ` Wei Yang
2025-10-22 20:26 ` David Hildenbrand
` (3 more replies)
2025-10-21 21:21 ` [Patch v3 3/4] mm/huge_memory: optimize and simplify folio stat update after split Wei Yang
2025-10-21 21:21 ` [Patch v3 4/4] mm/huge_memory: optimize old_order derivation during folio splitting Wei Yang
3 siblings, 4 replies; 24+ messages in thread
From: Wei Yang @ 2025-10-21 21:21 UTC (permalink / raw)
To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang
Cc: linux-mm, Wei Yang, wang lian
The current implementation complicates the post-split folio stat update:
* It iterates over the resulting new folios.
* It uses a flag (@stop_split) to conditionally skip updating the stat
for the folio at @split_at during the loop.
* It then attempts to update the skipped stat on a subsequent failure
path.
This logic is unnecessarily hard to follow.
This commit refactors the code to update the folio statistics only after
a successful split. This makes the logic much cleaner and sets the stage
for further simplification of the stat-handling code.
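The core of the change is replacing the deferred-error pattern with an
early return (excerpted and simplified from the diff below):

	/* Before: record the error and let @stop_split unwind the loop */
	if (xas_error(xas)) {
		ret = xas_error(xas);
		stop_split = true;
	}

	/* After: fail fast; no flag and no deferred stat fixup needed */
	if (xas_error(xas))
		return xas_error(xas);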
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: wang lian <lianux.mm@gmail.com>
---
mm/huge_memory.c | 44 +++++++++++---------------------------------
1 file changed, 11 insertions(+), 33 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 6d82df4a88dc..b9a38dba8eb8 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3598,13 +3598,8 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
const bool is_anon = folio_test_anon(folio);
int order = folio_order(folio);
int start_order = uniform_split ? new_order : order - 1;
- bool stop_split = false;
struct folio *next;
int split_order;
- int ret = 0;
-
- if (is_anon)
- mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
folio_clear_has_hwpoisoned(folio);
@@ -3613,7 +3608,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
* folio is split to new_order directly.
*/
for (split_order = start_order;
- split_order >= new_order && !stop_split;
+ split_order >= new_order;
split_order--) {
struct folio *end_folio = folio_next(folio);
int old_order = folio_order(folio);
@@ -3636,49 +3631,32 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
else {
xas_set_order(xas, folio->index, split_order);
xas_try_split(xas, folio, old_order);
- if (xas_error(xas)) {
- ret = xas_error(xas);
- stop_split = true;
- }
+ if (xas_error(xas))
+ return xas_error(xas);
}
}
- if (!stop_split) {
- folio_split_memcg_refs(folio, old_order, split_order);
- split_page_owner(&folio->page, old_order, split_order);
- pgalloc_tag_split(folio, old_order, split_order);
-
- __split_folio_to_order(folio, old_order, split_order);
- }
+ folio_split_memcg_refs(folio, old_order, split_order);
+ split_page_owner(&folio->page, old_order, split_order);
+ pgalloc_tag_split(folio, old_order, split_order);
+ __split_folio_to_order(folio, old_order, split_order);
+ if (is_anon)
+ mod_mthp_stat(old_order, MTHP_STAT_NR_ANON, -1);
/*
* Iterate through after-split folios and update folio stats.
- * But in buddy allocator like split, the folio
- * containing the specified page is skipped until its order
- * is new_order, since the folio will be worked on in next
- * iteration.
*/
for (new_folio = folio; new_folio != end_folio; new_folio = next) {
next = folio_next(new_folio);
- /*
- * for buddy allocator like split, new_folio containing
- * @split_at page could be split again, thus do not
- * change stats yet. Wait until new_folio's order is
- * @new_order or stop_split is set to true by the above
- * xas_split() failure.
- */
- if (new_folio == page_folio(split_at)) {
+ if (new_folio == page_folio(split_at))
folio = new_folio;
- if (split_order != new_order && !stop_split)
- continue;
- }
if (is_anon)
mod_mthp_stat(folio_order(new_folio),
MTHP_STAT_NR_ANON, 1);
}
}
- return ret;
+ return 0;
}
bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
--
2.34.1
* [Patch v3 3/4] mm/huge_memory: optimize and simplify folio stat update after split
2025-10-21 21:21 [Patch v3 0/4] mm/huge_memory: cleanup __split_unmapped_folio() Wei Yang
2025-10-21 21:21 ` [Patch v3 1/4] mm/huge_memory: avoid reinvoking folio_test_anon() Wei Yang
2025-10-21 21:21 ` [Patch v3 2/4] mm/huge_memory: update folio stat after successful split Wei Yang
@ 2025-10-21 21:21 ` Wei Yang
2025-10-22 20:28 ` David Hildenbrand
` (3 more replies)
2025-10-21 21:21 ` [Patch v3 4/4] mm/huge_memory: optimize old_order derivation during folio splitting Wei Yang
3 siblings, 4 replies; 24+ messages in thread
From: Wei Yang @ 2025-10-21 21:21 UTC (permalink / raw)
To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang
Cc: linux-mm, Wei Yang, wang lian
The loop executed after a successful folio split currently has two
combined responsibilities:
* updating statistics for the new folios
* determining the folio for the next split iteration.
This commit refactors the logic to directly calculate and update folio
statistics, eliminating the need for the iteration step.
We can do this because all necessary information is already available:
* All resulting new folios have the same order, which is @split_order.
* The exact number of new folios can be calculated directly using
@old_order and @split_order.
* The folio for the subsequent split is simply the one containing
@split_at.
By leveraging this knowledge, we can achieve the stat update more
cleanly and efficiently without the looping logic.
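As a worked example (orders chosen purely for illustration): a uniform
split of an order-9 anon folio to @new_order = 7 runs a single iteration
with nr_new_folios = 1 << (9 - 7) = 4, so the whole stat update collapses
to two calls:

	mod_mthp_stat(9, MTHP_STAT_NR_ANON, -1); /* one order-9 folio gone */
	mod_mthp_stat(7, MTHP_STAT_NR_ANON, 4);  /* four order-7 folios added */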
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: wang lian <lianux.mm@gmail.com>
---
mm/huge_memory.c | 18 ++++--------------
1 file changed, 4 insertions(+), 14 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b9a38dba8eb8..093b3ffb180f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3598,7 +3598,6 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
const bool is_anon = folio_test_anon(folio);
int order = folio_order(folio);
int start_order = uniform_split ? new_order : order - 1;
- struct folio *next;
int split_order;
folio_clear_has_hwpoisoned(folio);
@@ -3610,9 +3609,8 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
for (split_order = start_order;
split_order >= new_order;
split_order--) {
- struct folio *end_folio = folio_next(folio);
int old_order = folio_order(folio);
- struct folio *new_folio;
+ int nr_new_folios = 1UL << (old_order - split_order);
/* order-1 anonymous folio is not supported */
if (is_anon && split_order == 1)
@@ -3641,19 +3639,11 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
pgalloc_tag_split(folio, old_order, split_order);
__split_folio_to_order(folio, old_order, split_order);
- if (is_anon)
+ if (is_anon) {
mod_mthp_stat(old_order, MTHP_STAT_NR_ANON, -1);
- /*
- * Iterate through after-split folios and update folio stats.
- */
- for (new_folio = folio; new_folio != end_folio; new_folio = next) {
- next = folio_next(new_folio);
- if (new_folio == page_folio(split_at))
- folio = new_folio;
- if (is_anon)
- mod_mthp_stat(folio_order(new_folio),
- MTHP_STAT_NR_ANON, 1);
+ mod_mthp_stat(split_order, MTHP_STAT_NR_ANON, nr_new_folios);
}
+ folio = page_folio(split_at);
}
return 0;
--
2.34.1
* [Patch v3 4/4] mm/huge_memory: optimize old_order derivation during folio splitting
2025-10-21 21:21 [Patch v3 0/4] mm/huge_memory: cleanup __split_unmapped_folio() Wei Yang
` (2 preceding siblings ...)
2025-10-21 21:21 ` [Patch v3 3/4] mm/huge_memory: optimize and simplify folio stat update after split Wei Yang
@ 2025-10-21 21:21 ` Wei Yang
2025-10-22 20:29 ` David Hildenbrand
` (3 more replies)
3 siblings, 4 replies; 24+ messages in thread
From: Wei Yang @ 2025-10-21 21:21 UTC (permalink / raw)
To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang
Cc: linux-mm, Wei Yang, wang lian
Folio splitting requires both the folio's original order (@old_order)
and the new target order (@split_order).
In the current implementation, @old_order is repeatedly retrieved using
folio_order().
However, for every iteration after the first, the folio being split is
the result of the previous split, meaning its order is already known to
be equal to the previous iteration's @split_order.
This commit optimizes the logic:
* Instead of calling folio_order(), we now set @old_order directly to
the value of @split_order from the previous iteration.
This change avoids unnecessary function calls and simplifies the loop
setup.
It also removes a check for a case that cannot occur, since for uniform
splitting we only do the split when @split_order == @new_order.
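As an illustration (orders picked arbitrarily): a non-uniform split of an
order-4 file folio down to @new_order = 2 iterates split_order = 3, then 2.
Only the first iteration needs the order from the initializer; the second
inherits it from the previous step, with no folio_order() call:

	split_order:  3   2
	old_order:    4   3   (= previous iteration's split_order)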
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: wang lian <lianux.mm@gmail.com>
---
mm/huge_memory.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 093b3ffb180f..a4fa8b0e5b5a 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3596,8 +3596,8 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
struct address_space *mapping, bool uniform_split)
{
const bool is_anon = folio_test_anon(folio);
- int order = folio_order(folio);
- int start_order = uniform_split ? new_order : order - 1;
+ int old_order = folio_order(folio);
+ int start_order = uniform_split ? new_order : old_order - 1;
int split_order;
folio_clear_has_hwpoisoned(folio);
@@ -3609,14 +3609,11 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
for (split_order = start_order;
split_order >= new_order;
split_order--) {
- int old_order = folio_order(folio);
int nr_new_folios = 1UL << (old_order - split_order);
/* order-1 anonymous folio is not supported */
if (is_anon && split_order == 1)
continue;
- if (uniform_split && split_order != new_order)
- continue;
if (mapping) {
/*
@@ -3643,7 +3640,13 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
mod_mthp_stat(old_order, MTHP_STAT_NR_ANON, -1);
mod_mthp_stat(split_order, MTHP_STAT_NR_ANON, nr_new_folios);
}
+ /*
+ * If uniform split, the process is complete.
+ * If non-uniform, continue splitting the folio at @split_at
+ * as long as the next @split_order is >= @new_order.
+ */
folio = page_folio(split_at);
+ old_order = split_order;
}
return 0;
--
2.34.1
* Re: [Patch v3 2/4] mm/huge_memory: update folio stat after successful split
2025-10-21 21:21 ` [Patch v3 2/4] mm/huge_memory: update folio stat after successful split Wei Yang
@ 2025-10-22 20:26 ` David Hildenbrand
2025-10-22 20:31 ` Zi Yan
` (2 subsequent siblings)
3 siblings, 0 replies; 24+ messages in thread
From: David Hildenbrand @ 2025-10-22 20:26 UTC (permalink / raw)
To: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang
Cc: linux-mm, wang lian
On 21.10.25 23:21, Wei Yang wrote:
> The current implementation complicates this process:
>
> * It iterates over the resulting new folios.
> * It uses a flag (@stop_split) to conditionally skip updating the stat
> for the folio at @split_at during the loop.
> * It then attempts to update the skipped stat on a subsequent failure
> path.
>
> This logic is unnecessarily hard to follow.
>
> This commit refactors the code to update the folio statistics only after
> a successful split. This makes the logic much cleaner and sets the stage
> for further simplification of the stat-handling code.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: wang lian <lianux.mm@gmail.com>
> ---
This was easier to digest. I hope I didn't miss anything (it's late ...).
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers
David / dhildenb
* Re: [Patch v3 3/4] mm/huge_memory: optimize and simplify folio stat update after split
2025-10-21 21:21 ` [Patch v3 3/4] mm/huge_memory: optimize and simplify folio stat update after split Wei Yang
@ 2025-10-22 20:28 ` David Hildenbrand
2025-10-22 20:32 ` Zi Yan
` (2 subsequent siblings)
3 siblings, 0 replies; 24+ messages in thread
From: David Hildenbrand @ 2025-10-22 20:28 UTC (permalink / raw)
To: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang
Cc: linux-mm, wang lian
On 21.10.25 23:21, Wei Yang wrote:
> The loop executed after a successful folio split currently has two
> combined responsibilities:
>
> * updating statistics for the new folios
> * determining the folio for the next split iteration.
>
> This commit refactors the logic to directly calculate and update folio
> statistics, eliminating the need for the iteration step.
>
> We can do this because all necessary information is already available:
>
> * All resulting new folios have the same order, which is @split_order.
> * The exact number of new folios can be calculated directly using
> @old_order and @split_order.
> * The folio for the subsequent split is simply the one containing
> @split_at.
>
> By leveraging this knowledge, we can achieve the stat update more
> cleanly and efficiently without the looping logic.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: wang lian <lianux.mm@gmail.com>
> ---
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers
David / dhildenb
* Re: [Patch v3 4/4] mm/huge_memory: optimize old_order derivation during folio splitting
2025-10-21 21:21 ` [Patch v3 4/4] mm/huge_memory: optimize old_order derivation during folio splitting Wei Yang
@ 2025-10-22 20:29 ` David Hildenbrand
2025-10-22 20:33 ` Zi Yan
` (2 subsequent siblings)
3 siblings, 0 replies; 24+ messages in thread
From: David Hildenbrand @ 2025-10-22 20:29 UTC (permalink / raw)
To: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang
Cc: linux-mm, wang lian
On 21.10.25 23:21, Wei Yang wrote:
> Folio splitting requires both the folio's original order (@old_order)
> and the new target order (@split_order).
>
> In the current implementation, @old_order is repeatedly retrieved using
> folio_order().
>
> However, for every iteration after the first, the folio being split is
> the result of the previous split, meaning its order is already known to
> be equal to the previous iteration's @split_order.
>
> This commit optimizes the logic:
>
> * Instead of calling folio_order(), we now set @old_order directly to
> the value of @split_order from the previous iteration.
>
> This change avoids unnecessary function calls and simplifies the loop
> setup.
>
> Also it removes a check for non-existent case, since for uniform
> splitting we only do split when @split_order == @new_order.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: wang lian <lianux.mm@gmail.com>
> ---
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers
David / dhildenb
* Re: [Patch v3 2/4] mm/huge_memory: update folio stat after successful split
2025-10-21 21:21 ` [Patch v3 2/4] mm/huge_memory: update folio stat after successful split Wei Yang
2025-10-22 20:26 ` David Hildenbrand
@ 2025-10-22 20:31 ` Zi Yan
2025-10-23 1:26 ` wang lian
2025-10-24 14:32 ` Lorenzo Stoakes
3 siblings, 0 replies; 24+ messages in thread
From: Zi Yan @ 2025-10-22 20:31 UTC (permalink / raw)
To: Wei Yang
Cc: akpm, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache,
ryan.roberts, dev.jain, baohua, lance.yang, linux-mm, wang lian
On 21 Oct 2025, at 17:21, Wei Yang wrote:
> The current implementation complicates this process:
>
> * It iterates over the resulting new folios.
> * It uses a flag (@stop_split) to conditionally skip updating the stat
> for the folio at @split_at during the loop.
> * It then attempts to update the skipped stat on a subsequent failure
> path.
>
> This logic is unnecessarily hard to follow.
>
> This commit refactors the code to update the folio statistics only after
> a successful split. This makes the logic much cleaner and sets the stage
> for further simplification of the stat-handling code.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: wang lian <lianux.mm@gmail.com>
> ---
> mm/huge_memory.c | 44 +++++++++++---------------------------------
> 1 file changed, 11 insertions(+), 33 deletions(-)
>
LGTM. Reviewed-by: Zi Yan <ziy@nvidia.com>
--
Best Regards,
Yan, Zi
* Re: [Patch v3 3/4] mm/huge_memory: optimize and simplify folio stat update after split
2025-10-21 21:21 ` [Patch v3 3/4] mm/huge_memory: optimize and simplify folio stat update after split Wei Yang
2025-10-22 20:28 ` David Hildenbrand
@ 2025-10-22 20:32 ` Zi Yan
2025-10-23 1:29 ` wang lian
2025-10-24 14:35 ` Lorenzo Stoakes
3 siblings, 0 replies; 24+ messages in thread
From: Zi Yan @ 2025-10-22 20:32 UTC (permalink / raw)
To: Wei Yang
Cc: akpm, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache,
ryan.roberts, dev.jain, baohua, lance.yang, linux-mm, wang lian
On 21 Oct 2025, at 17:21, Wei Yang wrote:
> The loop executed after a successful folio split currently has two
> combined responsibilities:
>
> * updating statistics for the new folios
> * determining the folio for the next split iteration.
>
> This commit refactors the logic to directly calculate and update folio
> statistics, eliminating the need for the iteration step.
>
> We can do this because all necessary information is already available:
>
> * All resulting new folios have the same order, which is @split_order.
> * The exact number of new folios can be calculated directly using
> @old_order and @split_order.
> * The folio for the subsequent split is simply the one containing
> @split_at.
>
> By leveraging this knowledge, we can achieve the stat update more
> cleanly and efficiently without the looping logic.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: wang lian <lianux.mm@gmail.com>
> ---
> mm/huge_memory.c | 18 ++++--------------
> 1 file changed, 4 insertions(+), 14 deletions(-)
>
LGTM. Reviewed-by: Zi Yan <ziy@nvidia.com>
--
Best Regards,
Yan, Zi
* Re: [Patch v3 4/4] mm/huge_memory: optimize old_order derivation during folio splitting
2025-10-21 21:21 ` [Patch v3 4/4] mm/huge_memory: optimize old_order derivation during folio splitting Wei Yang
2025-10-22 20:29 ` David Hildenbrand
@ 2025-10-22 20:33 ` Zi Yan
2025-10-23 1:32 ` wang lian
2025-10-24 14:46 ` Lorenzo Stoakes
3 siblings, 0 replies; 24+ messages in thread
From: Zi Yan @ 2025-10-22 20:33 UTC (permalink / raw)
To: Wei Yang
Cc: akpm, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache,
ryan.roberts, dev.jain, baohua, lance.yang, linux-mm, wang lian
On 21 Oct 2025, at 17:21, Wei Yang wrote:
> Folio splitting requires both the folio's original order (@old_order)
> and the new target order (@split_order).
>
> In the current implementation, @old_order is repeatedly retrieved using
> folio_order().
>
> However, for every iteration after the first, the folio being split is
> the result of the previous split, meaning its order is already known to
> be equal to the previous iteration's @split_order.
>
> This commit optimizes the logic:
>
> * Instead of calling folio_order(), we now set @old_order directly to
> the value of @split_order from the previous iteration.
>
> This change avoids unnecessary function calls and simplifies the loop
> setup.
>
> Also it removes a check for non-existent case, since for uniform
> splitting we only do split when @split_order == @new_order.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: wang lian <lianux.mm@gmail.com>
> ---
> mm/huge_memory.c | 13 ++++++++-----
> 1 file changed, 8 insertions(+), 5 deletions(-)
>
LGTM. Thanks. Reviewed-by: Zi Yan <ziy@nvidia.com>
--
Best Regards,
Yan, Zi
* Re: [Patch v3 2/4] mm/huge_memory: update folio stat after successful split
2025-10-21 21:21 ` [Patch v3 2/4] mm/huge_memory: update folio stat after successful split Wei Yang
2025-10-22 20:26 ` David Hildenbrand
2025-10-22 20:31 ` Zi Yan
@ 2025-10-23 1:26 ` wang lian
2025-10-24 14:32 ` Lorenzo Stoakes
3 siblings, 0 replies; 24+ messages in thread
From: wang lian @ 2025-10-23 1:26 UTC (permalink / raw)
To: richard.weiyang
Cc: Liam.Howlett, akpm, baohua, baolin.wang, david, dev.jain,
lance.yang, lianux.mm, linux-mm, lorenzo.stoakes, npache,
ryan.roberts, ziy
> The current implementation complicates this process:
> * It iterates over the resulting new folios.
> * It uses a flag (@stop_split) to conditionally skip updating the stat
> for the folio at @split_at during the loop.
> * It then attempts to update the skipped stat on a subsequent failure
> path.
> This logic is unnecessarily hard to follow.
> This commit refactors the code to update the folio statistics only after
> a successful split. This makes the logic much cleaner and sets the stage
> for further simplification of the stat-handling code.
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: wang lian <lianux.mm@gmail.com>
---
LGTM.
Reviewed-by: wang lian <lianux.mm@gmail.com>
--
Best Regards,
wang lian
* Re: [Patch v3 3/4] mm/huge_memory: optimize and simplify folio stat update after split
2025-10-21 21:21 ` [Patch v3 3/4] mm/huge_memory: optimize and simplify folio stat update after split Wei Yang
2025-10-22 20:28 ` David Hildenbrand
2025-10-22 20:32 ` Zi Yan
@ 2025-10-23 1:29 ` wang lian
2025-10-24 14:35 ` Lorenzo Stoakes
3 siblings, 0 replies; 24+ messages in thread
From: wang lian @ 2025-10-23 1:29 UTC (permalink / raw)
To: richard.weiyang
Cc: Liam.Howlett, akpm, baohua, baolin.wang, david, dev.jain,
lance.yang, lianux.mm, linux-mm, lorenzo.stoakes, npache,
ryan.roberts, ziy
> The loop executed after a successful folio split currently has two
> combined responsibilities:
> * updating statistics for the new folios
> * determining the folio for the next split iteration.
> This commit refactors the logic to directly calculate and update folio
> statistics, eliminating the need for the iteration step.
> We can do this because all necessary information is already available:
> * All resulting new folios have the same order, which is @split_order.
> * The exact number of new folios can be calculated directly using
> @old_order and @split_order.
> * The folio for the subsequent split is simply the one containing
> @split_at.
> By leveraging this knowledge, we can achieve the stat update more
> cleanly and efficiently without the looping logic.
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: wang lian <lianux.mm@gmail.com>
---
LGTM.
Reviewed-by: wang lian <lianux.mm@gmail.com>
--
Best Regards,
wang lian
* Re: [Patch v3 4/4] mm/huge_memory: optimize old_order derivation during folio splitting
2025-10-21 21:21 ` [Patch v3 4/4] mm/huge_memory: optimize old_order derivation during folio splitting Wei Yang
2025-10-22 20:29 ` David Hildenbrand
2025-10-22 20:33 ` Zi Yan
@ 2025-10-23 1:32 ` wang lian
2025-10-24 14:46 ` Lorenzo Stoakes
3 siblings, 0 replies; 24+ messages in thread
From: wang lian @ 2025-10-23 1:32 UTC (permalink / raw)
To: richard.weiyang
Cc: Liam.Howlett, akpm, baohua, baolin.wang, david, dev.jain,
lance.yang, lianux.mm, linux-mm, lorenzo.stoakes, npache,
ryan.roberts, ziy
> Folio splitting requires both the folio's original order (@old_order)
> and the new target order (@split_order).
> In the current implementation, @old_order is repeatedly retrieved using
> folio_order().
> However, for every iteration after the first, the folio being split is
> the result of the previous split, meaning its order is already known to
> be equal to the previous iteration's @split_order.
> This commit optimizes the logic:
> * Instead of calling folio_order(), we now set @old_order directly to
> the value of @split_order from the previous iteration.
> This change avoids unnecessary function calls and simplifies the loop
> setup.
> Also it removes a check for non-existent case, since for uniform
> splitting we only do split when @split_order == @new_order.
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: wang lian <lianux.mm@gmail.com>
---
LGTM.
Reviewed-by: wang lian <lianux.mm@gmail.com>
--
Best Regards,
wang lian
* Re: [Patch v3 1/4] mm/huge_memory: avoid reinvoking folio_test_anon()
2025-10-21 21:21 ` [Patch v3 1/4] mm/huge_memory: avoid reinvoking folio_test_anon() Wei Yang
@ 2025-10-24 14:08 ` Lorenzo Stoakes
0 siblings, 0 replies; 24+ messages in thread
From: Lorenzo Stoakes @ 2025-10-24 14:08 UTC (permalink / raw)
To: Wei Yang
Cc: akpm, david, ziy, baolin.wang, Liam.Howlett, npache, ryan.roberts,
dev.jain, baohua, lance.yang, linux-mm
On Tue, Oct 21, 2025 at 09:21:39PM +0000, Wei Yang wrote:
> During the execution of __split_unmapped_folio(), the folio's anon/!anon
> attribute is invariant (not expected to change).
>
> Therefore, it is safe and more efficient to retrieve this attribute once
> at the start and reuse it throughout the function.
Thanks!
>
> Link: https://lkml.kernel.org/r/20251016004613.514-1-richard.weiyang@gmail.com
> Link: https://lkml.kernel.org/r/20251016004613.514-2-richard.weiyang@gmail.com
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
LGTM, so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: wang lian <lianux.mm@gmail.com>
> Reviewed-by: Barry Song <baohua@kernel.org>
> Acked-by: David Hildenbrand <david@redhat.com>
> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
> Cc: Dev Jain <dev.jain@arm.com>
> Cc: Lance Yang <lance.yang@linux.dev>
> Cc: Liam Howlett <liam.howlett@oracle.com>
> Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Cc: Mariano Pache <npache@redhat.com>
> Cc: Ryan Roberts <ryan.roberts@arm.com>
>
> ---
> v3:
> * adjust subject
> * add const to variable is_anon
> ---
> mm/huge_memory.c | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 370ecfd6a182..6d82df4a88dc 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3595,6 +3595,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> struct page *split_at, struct xa_state *xas,
> struct address_space *mapping, bool uniform_split)
> {
> + const bool is_anon = folio_test_anon(folio);
Thanks for making const!
> int order = folio_order(folio);
> int start_order = uniform_split ? new_order : order - 1;
> bool stop_split = false;
> @@ -3602,7 +3603,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> int split_order;
> int ret = 0;
>
> - if (folio_test_anon(folio))
> + if (is_anon)
> mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
>
> folio_clear_has_hwpoisoned(folio);
> @@ -3619,7 +3620,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> struct folio *new_folio;
>
> /* order-1 anonymous folio is not supported */
> - if (folio_test_anon(folio) && split_order == 1)
> + if (is_anon && split_order == 1)
> continue;
> if (uniform_split && split_order != new_order)
> continue;
> @@ -3671,7 +3672,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> if (split_order != new_order && !stop_split)
> continue;
> }
> - if (folio_test_anon(new_folio))
> + if (is_anon)
> mod_mthp_stat(folio_order(new_folio),
> MTHP_STAT_NR_ANON, 1);
> }
> --
> 2.34.1
>
>
* Re: [Patch v3 2/4] mm/huge_memory: update folio stat after successful split
2025-10-21 21:21 ` [Patch v3 2/4] mm/huge_memory: update folio stat after successful split Wei Yang
` (2 preceding siblings ...)
2025-10-23 1:26 ` wang lian
@ 2025-10-24 14:32 ` Lorenzo Stoakes
2025-10-31 0:46 ` Wei Yang
3 siblings, 1 reply; 24+ messages in thread
From: Lorenzo Stoakes @ 2025-10-24 14:32 UTC (permalink / raw)
To: Wei Yang
Cc: akpm, david, ziy, baolin.wang, Liam.Howlett, npache, ryan.roberts,
dev.jain, baohua, lance.yang, linux-mm, wang lian
On Tue, Oct 21, 2025 at 09:21:40PM +0000, Wei Yang wrote:
> The current implementation complicates this process:
>
> * It iterates over the resulting new folios.
> * It uses a flag (@stop_split) to conditionally skip updating the stat
> for the folio at @split_at during the loop.
> * It then attempts to update the skipped stat on a subsequent failure
> path.
>
> This logic is unnecessarily hard to follow.
>
> This commit refactors the code to update the folio statistics only after
> a successful split. This makes the logic much cleaner and sets the stage
> for further simplification of the stat-handling code.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
LGTM, thanks for separating these! So:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: wang lian <lianux.mm@gmail.com>
> ---
> mm/huge_memory.c | 44 +++++++++++---------------------------------
> 1 file changed, 11 insertions(+), 33 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 6d82df4a88dc..b9a38dba8eb8 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3598,13 +3598,8 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> const bool is_anon = folio_test_anon(folio);
> int order = folio_order(folio);
> int start_order = uniform_split ? new_order : order - 1;
> - bool stop_split = false;
> struct folio *next;
> int split_order;
> - int ret = 0;
> -
> - if (is_anon)
> - mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
>
> folio_clear_has_hwpoisoned(folio);
>
> @@ -3613,7 +3608,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> * folio is split to new_order directly.
> */
> for (split_order = start_order;
> - split_order >= new_order && !stop_split;
> + split_order >= new_order;
> split_order--) {
> struct folio *end_folio = folio_next(folio);
> int old_order = folio_order(folio);
> @@ -3636,49 +3631,32 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> else {
> xas_set_order(xas, folio->index, split_order);
> xas_try_split(xas, folio, old_order);
> - if (xas_error(xas)) {
> - ret = xas_error(xas);
> - stop_split = true;
> - }
> + if (xas_error(xas))
> + return xas_error(xas);
> }
> }
>
> - if (!stop_split) {
> - folio_split_memcg_refs(folio, old_order, split_order);
> - split_page_owner(&folio->page, old_order, split_order);
> - pgalloc_tag_split(folio, old_order, split_order);
> -
> - __split_folio_to_order(folio, old_order, split_order);
> - }
> + folio_split_memcg_refs(folio, old_order, split_order);
> + split_page_owner(&folio->page, old_order, split_order);
> + pgalloc_tag_split(folio, old_order, split_order);
> + __split_folio_to_order(folio, old_order, split_order);
>
> + if (is_anon)
> + mod_mthp_stat(old_order, MTHP_STAT_NR_ANON, -1);
> /*
> * Iterate through after-split folios and update folio stats.
> - * But in buddy allocator like split, the folio
> - * containing the specified page is skipped until its order
> - * is new_order, since the folio will be worked on in next
> - * iteration.
> */
> for (new_folio = folio; new_folio != end_folio; new_folio = next) {
> next = folio_next(new_folio);
> - /*
> - * for buddy allocator like split, new_folio containing
> - * @split_at page could be split again, thus do not
> - * change stats yet. Wait until new_folio's order is
> - * @new_order or stop_split is set to true by the above
> - * xas_split() failure.
> - */
> - if (new_folio == page_folio(split_at)) {
> + if (new_folio == page_folio(split_at))
> folio = new_folio;
> - if (split_order != new_order && !stop_split)
OK I guess we don't need this as in !uniform_split case we use
xas_set_order() to set the order, then try to split, and if an error arose
we already would have handled, so split_order == new_order is guaranteed at
this point, right?
> - continue;
> - }
> if (is_anon)
> mod_mthp_stat(folio_order(new_folio),
> MTHP_STAT_NR_ANON, 1);
> }
> }
>
> - return ret;
> + return 0;
> }
>
> bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
> --
> 2.34.1
>
>
* Re: [Patch v3 3/4] mm/huge_memory: optimize and simplify folio stat update after split
2025-10-21 21:21 ` [Patch v3 3/4] mm/huge_memory: optimize and simplify folio stat update after split Wei Yang
` (2 preceding siblings ...)
2025-10-23 1:29 ` wang lian
@ 2025-10-24 14:35 ` Lorenzo Stoakes
3 siblings, 0 replies; 24+ messages in thread
From: Lorenzo Stoakes @ 2025-10-24 14:35 UTC (permalink / raw)
To: Wei Yang
Cc: akpm, david, ziy, baolin.wang, Liam.Howlett, npache, ryan.roberts,
dev.jain, baohua, lance.yang, linux-mm, wang lian
On Tue, Oct 21, 2025 at 09:21:41PM +0000, Wei Yang wrote:
> The loop executed after a successful folio split currently has two
> combined responsibilities:
>
> * updating statistics for the new folios
> * determining the folio for the next split iteration.
>
> This commit refactors the logic to directly calculate and update folio
> statistics, eliminating the need for the iteration step.
>
> We can do this because all necessary information is already available:
>
> * All resulting new folios have the same order, which is @split_order.
> * The exact number of new folios can be calculated directly using
> @old_order and @split_order.
> * The folio for the subsequent split is simply the one containing
> @split_at.
>
> By leveraging this knowledge, we can achieve the stat update more
> cleanly and efficiently without the looping logic.
>
Thanks for this + previous commit's great commit messages, much appreciated!
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
This separation makes a really big difference, much appreciated.
LGTM, so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: wang lian <lianux.mm@gmail.com>
> ---
> mm/huge_memory.c | 18 ++++--------------
> 1 file changed, 4 insertions(+), 14 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index b9a38dba8eb8..093b3ffb180f 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3598,7 +3598,6 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> const bool is_anon = folio_test_anon(folio);
> int order = folio_order(folio);
> int start_order = uniform_split ? new_order : order - 1;
> - struct folio *next;
> int split_order;
>
> folio_clear_has_hwpoisoned(folio);
> @@ -3610,9 +3609,8 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> for (split_order = start_order;
> split_order >= new_order;
> split_order--) {
> - struct folio *end_folio = folio_next(folio);
> int old_order = folio_order(folio);
> - struct folio *new_folio;
> + int nr_new_folios = 1UL << (old_order - split_order);
>
> /* order-1 anonymous folio is not supported */
> if (is_anon && split_order == 1)
> @@ -3641,19 +3639,11 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> pgalloc_tag_split(folio, old_order, split_order);
> __split_folio_to_order(folio, old_order, split_order);
>
> - if (is_anon)
> + if (is_anon) {
> mod_mthp_stat(old_order, MTHP_STAT_NR_ANON, -1);
> - /*
> - * Iterate through after-split folios and update folio stats.
> - */
> - for (new_folio = folio; new_folio != end_folio; new_folio = next) {
> - next = folio_next(new_folio);
> - if (new_folio == page_folio(split_at))
> - folio = new_folio;
> - if (is_anon)
> - mod_mthp_stat(folio_order(new_folio),
> - MTHP_STAT_NR_ANON, 1);
> + mod_mthp_stat(split_order, MTHP_STAT_NR_ANON, nr_new_folios);
> }
> + folio = page_folio(split_at);
> }
>
> return 0;
> --
> 2.34.1
>
>
* Re: [Patch v3 4/4] mm/huge_memory: optimize old_order derivation during folio splitting
2025-10-21 21:21 ` [Patch v3 4/4] mm/huge_memory: optimize old_order derivation during folio splitting Wei Yang
` (2 preceding siblings ...)
2025-10-23 1:32 ` wang lian
@ 2025-10-24 14:46 ` Lorenzo Stoakes
2025-10-24 15:29 ` Zi Yan
3 siblings, 1 reply; 24+ messages in thread
From: Lorenzo Stoakes @ 2025-10-24 14:46 UTC (permalink / raw)
To: Wei Yang
Cc: akpm, david, ziy, baolin.wang, Liam.Howlett, npache, ryan.roberts,
dev.jain, baohua, lance.yang, linux-mm, wang lian
On Tue, Oct 21, 2025 at 09:21:42PM +0000, Wei Yang wrote:
> Folio splitting requires both the folio's original order (@old_order)
> and the new target order (@split_order).
>
> In the current implementation, @old_order is repeatedly retrieved using
> folio_order().
>
> However, for every iteration after the first, the folio being split is
> the result of the previous split, meaning its order is already known to
> be equal to the previous iteration's @split_order.
>
> This commit optimizes the logic:
>
> * Instead of calling folio_order(), we now set @old_order directly to
> the value of @split_order from the previous iteration.
>
> This change avoids unnecessary function calls and simplifies the loop
> setup.
>
> Also it removes a check for non-existent case, since for uniform
> splitting we only do split when @split_order == @new_order.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Thanks the separation makes a HUGE difference, much appreciated. So:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: wang lian <lianux.mm@gmail.com>
> ---
> mm/huge_memory.c | 13 ++++++++-----
> 1 file changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 093b3ffb180f..a4fa8b0e5b5a 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3596,8 +3596,8 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> struct address_space *mapping, bool uniform_split)
> {
> const bool is_anon = folio_test_anon(folio);
> - int order = folio_order(folio);
> - int start_order = uniform_split ? new_order : order - 1;
> + int old_order = folio_order(folio);
> + int start_order = uniform_split ? new_order : old_order - 1;
> int split_order;
>
> folio_clear_has_hwpoisoned(folio);
> @@ -3609,14 +3609,11 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> for (split_order = start_order;
> split_order >= new_order;
A thought for the future - now things are simplified, it might be nice to just
separate out the core of this loop and have the uniform split just call the
split out function directly, and the non-uniform one do the loop.
As it's a bit gross in the uniform case we just let split_order go to new_order
- 1 to exit the loop.
BUT - let's please save that for another patch :)
This all looks fine.
> split_order--) {
> - int old_order = folio_order(folio);
> int nr_new_folios = 1UL << (old_order - split_order);
>
> /* order-1 anonymous folio is not supported */
> if (is_anon && split_order == 1)
> continue;
> - if (uniform_split && split_order != new_order)
> - continue;
>
> if (mapping) {
> /*
> @@ -3643,7 +3640,13 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> mod_mthp_stat(old_order, MTHP_STAT_NR_ANON, -1);
> mod_mthp_stat(split_order, MTHP_STAT_NR_ANON, nr_new_folios);
> }
> + /*
> + * If uniform split, the process is complete.
> + * If non-uniform, continue splitting the folio at @split_at
> + * as long as the next @split_order is >= @new_order.
> + */
> folio = page_folio(split_at);
> + old_order = split_order;
> }
>
> return 0;
> --
> 2.34.1
>
>
* Re: [Patch v3 4/4] mm/huge_memory: optimize old_order derivation during folio splitting
2025-10-24 14:46 ` Lorenzo Stoakes
@ 2025-10-24 15:29 ` Zi Yan
2025-10-24 15:33 ` Lorenzo Stoakes
2025-10-31 1:50 ` Wei Yang
0 siblings, 2 replies; 24+ messages in thread
From: Zi Yan @ 2025-10-24 15:29 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Wei Yang, akpm, david, baolin.wang, Liam.Howlett, npache,
ryan.roberts, dev.jain, baohua, lance.yang, linux-mm, wang lian
On 24 Oct 2025, at 10:46, Lorenzo Stoakes wrote:
> On Tue, Oct 21, 2025 at 09:21:42PM +0000, Wei Yang wrote:
>> Folio splitting requires both the folio's original order (@old_order)
>> and the new target order (@split_order).
>>
>> In the current implementation, @old_order is repeatedly retrieved using
>> folio_order().
>>
>> However, for every iteration after the first, the folio being split is
>> the result of the previous split, meaning its order is already known to
>> be equal to the previous iteration's @split_order.
>>
>> This commit optimizes the logic:
>>
>> * Instead of calling folio_order(), we now set @old_order directly to
>> the value of @split_order from the previous iteration.
>>
>> This change avoids unnecessary function calls and simplifies the loop
>> setup.
>>
>> Also it removes a check for non-existent case, since for uniform
>> splitting we only do split when @split_order == @new_order.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>
> Thanks the separation makes a HUGE difference, much appreciated. So:
>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: wang lian <lianux.mm@gmail.com>
>> ---
>> mm/huge_memory.c | 13 ++++++++-----
>> 1 file changed, 8 insertions(+), 5 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 093b3ffb180f..a4fa8b0e5b5a 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3596,8 +3596,8 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>> struct address_space *mapping, bool uniform_split)
>> {
>> const bool is_anon = folio_test_anon(folio);
>> - int order = folio_order(folio);
>> - int start_order = uniform_split ? new_order : order - 1;
>> + int old_order = folio_order(folio);
>> + int start_order = uniform_split ? new_order : old_order - 1;
>> int split_order;
>>
>> folio_clear_has_hwpoisoned(folio);
>> @@ -3609,14 +3609,11 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>> for (split_order = start_order;
>> split_order >= new_order;
>
> A thought for the future - now things are simplified, it might be nice to just
> separate out the core of this loop and have the uniform split just call the
> split out function directly, and the non-uniform one do the loop.
>
> As it's a bit gross in the uniform case we just let split_order go to new_order
> - 1 to exit the loop.
Yeah, something like:
if (uniform_split) {
if (mapping)
xas_split(xas, folio, old_order);
split_folio_to_order(...);
return 0;
}
for () {
...
split_folio_to_order(...);
...
}
where split_folio_to_order(...) just
split memcg, split page owner, pgalloc_tag_split, __split_folio_to_order,
and stats update
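Fleshed out a bit, a rough sketch of such a helper (the name follows the
pseudocode above; the signature is hypothetical, and it uses only calls
this series already touches):

	static void split_folio_to_order(struct folio *folio, int old_order,
					 int split_order, bool is_anon)
	{
		folio_split_memcg_refs(folio, old_order, split_order);
		split_page_owner(&folio->page, old_order, split_order);
		pgalloc_tag_split(folio, old_order, split_order);
		__split_folio_to_order(folio, old_order, split_order);

		if (is_anon) {
			mod_mthp_stat(old_order, MTHP_STAT_NR_ANON, -1);
			mod_mthp_stat(split_order, MTHP_STAT_NR_ANON,
				      1 << (old_order - split_order));
		}
	}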
>
> BUT - let's please save that for another patch :)
I agree.
>
> This all looks fine.
>
>> split_order--) {
>> - int old_order = folio_order(folio);
>> int nr_new_folios = 1UL << (old_order - split_order);
>>
>> /* order-1 anonymous folio is not supported */
>> if (is_anon && split_order == 1)
>> continue;
>> - if (uniform_split && split_order != new_order)
>> - continue;
>>
>> if (mapping) {
>> /*
>> @@ -3643,7 +3640,13 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>> mod_mthp_stat(old_order, MTHP_STAT_NR_ANON, -1);
>> mod_mthp_stat(split_order, MTHP_STAT_NR_ANON, nr_new_folios);
>> }
>> + /*
>> + * If uniform split, the process is complete.
>> + * If non-uniform, continue splitting the folio at @split_at
>> + * as long as the next @split_order is >= @new_order.
>> + */
>> folio = page_folio(split_at);
>> + old_order = split_order;
>> }
>>
>> return 0;
>> --
>> 2.34.1
>>
>>
--
Best Regards,
Yan, Zi
* Re: [Patch v3 4/4] mm/huge_memory: optimize old_order derivation during folio splitting
2025-10-24 15:29 ` Zi Yan
@ 2025-10-24 15:33 ` Lorenzo Stoakes
2025-10-31 1:50 ` Wei Yang
1 sibling, 0 replies; 24+ messages in thread
From: Lorenzo Stoakes @ 2025-10-24 15:33 UTC (permalink / raw)
To: Zi Yan
Cc: Wei Yang, akpm, david, baolin.wang, Liam.Howlett, npache,
ryan.roberts, dev.jain, baohua, lance.yang, linux-mm, wang lian
On Fri, Oct 24, 2025 at 11:29:00AM -0400, Zi Yan wrote:
> >> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> >> index 093b3ffb180f..a4fa8b0e5b5a 100644
> >> --- a/mm/huge_memory.c
> >> +++ b/mm/huge_memory.c
> >> @@ -3596,8 +3596,8 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> >> struct address_space *mapping, bool uniform_split)
> >> {
> >> const bool is_anon = folio_test_anon(folio);
> >> - int order = folio_order(folio);
> >> - int start_order = uniform_split ? new_order : order - 1;
> >> + int old_order = folio_order(folio);
> >> + int start_order = uniform_split ? new_order : old_order - 1;
> >> int split_order;
> >>
> >> folio_clear_has_hwpoisoned(folio);
> >> @@ -3609,14 +3609,11 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> >> for (split_order = start_order;
> >> split_order >= new_order;
> >
> > A thought for the future - now things are simplified, it might be nice to just
> > separate out the core of this loop and have the uniform split just call the
> > split out function directly, and the non-uniform one do the loop.
> >
> > As it's a bit gross in the uniform case we just let split_order go to new_order
> > - 1 to exit the loop.
>
> Yeah, something like:
>
> if (uniform_split) {
> if (mapping)
> xas_split(xas, folio, old_order);
> split_folio_to_order(...);
> return 0;
> }
>
> for () {
> ...
> split_folio_to_order(...);
> ...
> }
>
> where split_folio_to_order(...) just
> split memcg, split page owner, pgalloc_tag_split, __split_folio_to_order,
> and stats update
Yeah exactly :)
>
> >
> > BUT - let's please save that for another patch :)
>
> I agree.
Yes, let's land this first and that can be a follow up!
Thanks again for your help on this series and sorry if I was a little too
grumpy before :) I cut down caffeine drastically recently and you
know... it's hard ;)
Cheers, Lorenzo
* Re: [Patch v3 2/4] mm/huge_memory: update folio stat after successful split
2025-10-24 14:32 ` Lorenzo Stoakes
@ 2025-10-31 0:46 ` Wei Yang
0 siblings, 0 replies; 24+ messages in thread
From: Wei Yang @ 2025-10-31 0:46 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Wei Yang, akpm, david, ziy, baolin.wang, Liam.Howlett, npache,
ryan.roberts, dev.jain, baohua, lance.yang, linux-mm, wang lian
On Fri, Oct 24, 2025 at 03:32:42PM +0100, Lorenzo Stoakes wrote:
[...]
>> @@ -3636,49 +3631,32 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>> else {
>> xas_set_order(xas, folio->index, split_order);
>> xas_try_split(xas, folio, old_order);
>> - if (xas_error(xas)) {
>> - ret = xas_error(xas);
>> - stop_split = true;
>> - }
>> + if (xas_error(xas))
>> + return xas_error(xas);
>> }
>> }
>>
>> - if (!stop_split) {
>> - folio_split_memcg_refs(folio, old_order, split_order);
>> - split_page_owner(&folio->page, old_order, split_order);
>> - pgalloc_tag_split(folio, old_order, split_order);
>> -
>> - __split_folio_to_order(folio, old_order, split_order);
>> - }
>> + folio_split_memcg_refs(folio, old_order, split_order);
>> + split_page_owner(&folio->page, old_order, split_order);
>> + pgalloc_tag_split(folio, old_order, split_order);
>> + __split_folio_to_order(folio, old_order, split_order);
>>
>> + if (is_anon)
>> + mod_mthp_stat(old_order, MTHP_STAT_NR_ANON, -1);
>> /*
>> * Iterate through after-split folios and update folio stats.
>> - * But in buddy allocator like split, the folio
>> - * containing the specified page is skipped until its order
>> - * is new_order, since the folio will be worked on in next
>> - * iteration.
>> */
>> for (new_folio = folio; new_folio != end_folio; new_folio = next) {
>> next = folio_next(new_folio);
>> - /*
>> - * for buddy allocator like split, new_folio containing
>> - * @split_at page could be split again, thus do not
>> - * change stats yet. Wait until new_folio's order is
>> - * @new_order or stop_split is set to true by the above
>> - * xas_split() failure.
>> - */
>> - if (new_folio == page_folio(split_at)) {
>> + if (new_folio == page_folio(split_at))
>> folio = new_folio;
>> - if (split_order != new_order && !stop_split)
>
>OK I guess we don't need this as in !uniform_split case we use
>xas_set_order() to set the order, then try to split, and if an error arose
>we already would have handled, so split_order == new_order is guaranteed at
>this point, right?
>
Sorry for the late reply, I was traveling.
In non-uniform splitting, @split_order represents an intermediate step, with
@new_order as the final target size.
The check for split_order != new_order is necessary for the final split
iteration.
If this check is absent, the logic that updates folio statistics for the newly
split folios can be incorrectly bypassed when the !stop_split condition is
met.
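As an illustrative trace: a non-uniform split of an order-3 file folio to
@new_order = 0 iterates split_order = 2, 1, 0. At split_order = 2 and 1
the folio containing @split_at will be split again in the next iteration,
so counting it then would double-account; only in the final iteration,
when split_order == new_order, is its stat recorded.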
Not sure I get your point correctly.
>> - continue;
>> - }
>> if (is_anon)
>> mod_mthp_stat(folio_order(new_folio),
>> MTHP_STAT_NR_ANON, 1);
>> }
>> }
>>
>> - return ret;
>> + return 0;
>> }
>>
>> bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>> --
>> 2.34.1
>>
>>
--
Wei Yang
Help you, Help me
* Re: [Patch v3 4/4] mm/huge_memory: optimize old_order derivation during folio splitting
2025-10-24 15:29 ` Zi Yan
2025-10-24 15:33 ` Lorenzo Stoakes
@ 2025-10-31 1:50 ` Wei Yang
2025-10-31 1:55 ` Zi Yan
1 sibling, 1 reply; 24+ messages in thread
From: Wei Yang @ 2025-10-31 1:50 UTC (permalink / raw)
To: Zi Yan
Cc: Lorenzo Stoakes, Wei Yang, akpm, david, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm,
wang lian
On Fri, Oct 24, 2025 at 11:29:00AM -0400, Zi Yan wrote:
>On 24 Oct 2025, at 10:46, Lorenzo Stoakes wrote:
[...]
>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>> index 093b3ffb180f..a4fa8b0e5b5a 100644
>>> --- a/mm/huge_memory.c
>>> +++ b/mm/huge_memory.c
>>> @@ -3596,8 +3596,8 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>>> struct address_space *mapping, bool uniform_split)
>>> {
>>> const bool is_anon = folio_test_anon(folio);
>>> - int order = folio_order(folio);
>>> - int start_order = uniform_split ? new_order : order - 1;
>>> + int old_order = folio_order(folio);
>>> + int start_order = uniform_split ? new_order : old_order - 1;
>>> int split_order;
>>>
>>> folio_clear_has_hwpoisoned(folio);
>>> @@ -3609,14 +3609,11 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>>> for (split_order = start_order;
>>> split_order >= new_order;
>>
>> A thought for the future - now things are simplified, it might be nice to just
>> separate out the core of this loop and have the uniform split just call the
>> split out function directly, and the non-uniform one do the loop.
>>
>> As it's a bit gross in the uniform case we just let split_order go to new_order
>> - 1 to exit the loop.
>
>Yeah, something like:
>
>if (uniform_split) {
> if (mapping)
> xas_split(xas, folio, old_order);
> split_folio_to_order(...);
> return 0;
>}
>
>for () {
>...
>split_folio_to_order(...);
>...
>}
>
>where split_folio_to_order(...) just
>split memcg, split page owner, pgalloc_tag_split, __split_folio_to_order,
>and stats update
>
This looks reasonable, though I found we already have split_folio_to_order().
--
Wei Yang
Help you, Help me
* Re: [Patch v3 4/4] mm/huge_memory: optimize old_order derivation during folio splitting
2025-10-31 1:50 ` Wei Yang
@ 2025-10-31 1:55 ` Zi Yan
2025-10-31 2:00 ` Wei Yang
0 siblings, 1 reply; 24+ messages in thread
From: Zi Yan @ 2025-10-31 1:55 UTC (permalink / raw)
To: Wei Yang
Cc: Lorenzo Stoakes, akpm, david, baolin.wang, Liam.Howlett, npache,
ryan.roberts, dev.jain, baohua, lance.yang, linux-mm, wang lian
On 30 Oct 2025, at 21:50, Wei Yang wrote:
> On Fri, Oct 24, 2025 at 11:29:00AM -0400, Zi Yan wrote:
>> On 24 Oct 2025, at 10:46, Lorenzo Stoakes wrote:
> [...]
> This looks reasonable, though I found we already have a split_folio_to_order().
Then, we just need a new name, like __split_folio_and_update_stats().
I am bad at naming. Feel free to come up with a better one. :)
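>
>Roughly like this (a sketch only, not compiled; the memcg, page owner and
>alloc tag splitting are left as a comment):
>
>    static void __split_folio_and_update_stats(struct folio *folio,
>                    int old_order, int split_order, bool is_anon)
>    {
>            int nr_new_folios = 1UL << (old_order - split_order);
>
>            /* split memcg refs, page owner, pgalloc_tag, ... */
>            __split_folio_to_order(folio, old_order, split_order);
>
>            if (is_anon) {
>                    mod_mthp_stat(old_order, MTHP_STAT_NR_ANON, -1);
>                    mod_mthp_stat(split_order, MTHP_STAT_NR_ANON,
>                                  nr_new_folios);
>            }
>    }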
--
Best Regards,
Yan, Zi
* Re: [Patch v3 4/4] mm/huge_memory: optimize old_order derivation during folio splitting
2025-10-31 1:55 ` Zi Yan
@ 2025-10-31 2:00 ` Wei Yang
0 siblings, 0 replies; 24+ messages in thread
From: Wei Yang @ 2025-10-31 2:00 UTC (permalink / raw)
To: Zi Yan
Cc: Wei Yang, Lorenzo Stoakes, akpm, david, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm,
wang lian
On Thu, Oct 30, 2025 at 09:55:18PM -0400, Zi Yan wrote:
>On 30 Oct 2025, at 21:50, Wei Yang wrote:
>
>> On Fri, Oct 24, 2025 at 11:29:00AM -0400, Zi Yan wrote:
>>> On 24 Oct 2025, at 10:46, Lorenzo Stoakes wrote:
>> [...]
>> This looks reasonable, though I found we already have a split_folio_to_order().
>
>Then, we just need a new name, like __split_folio_and_update_stats().
>I am bad at naming. Feel free to come up with a better one. :)
>
OK, let me have a try ... I'm not good at it either.
--
Wei Yang
Help you, Help me
Thread overview: 24+ messages
2025-10-21 21:21 [Patch v3 0/4] mm/huge_memory: cleanup __split_unmapped_folio() Wei Yang
2025-10-21 21:21 ` [Patch v3 1/4] mm/huge_memory: avoid reinvoking folio_test_anon() Wei Yang
2025-10-24 14:08 ` Lorenzo Stoakes
2025-10-21 21:21 ` [Patch v3 2/4] mm/huge_memory: update folio stat after successful split Wei Yang
2025-10-22 20:26 ` David Hildenbrand
2025-10-22 20:31 ` Zi Yan
2025-10-23 1:26 ` wang lian
2025-10-24 14:32 ` Lorenzo Stoakes
2025-10-31 0:46 ` Wei Yang
2025-10-21 21:21 ` [Patch v3 3/4] mm/huge_memory: optimize and simplify folio stat update after split Wei Yang
2025-10-22 20:28 ` David Hildenbrand
2025-10-22 20:32 ` Zi Yan
2025-10-23 1:29 ` wang lian
2025-10-24 14:35 ` Lorenzo Stoakes
2025-10-21 21:21 ` [Patch v3 4/4] mm/huge_memory: optimize old_order derivation during folio splitting Wei Yang
2025-10-22 20:29 ` David Hildenbrand
2025-10-22 20:33 ` Zi Yan
2025-10-23 1:32 ` wang lian
2025-10-24 14:46 ` Lorenzo Stoakes
2025-10-24 15:29 ` Zi Yan
2025-10-24 15:33 ` Lorenzo Stoakes
2025-10-31 1:50 ` Wei Yang
2025-10-31 1:55 ` Zi Yan
2025-10-31 2:00 ` Wei Yang