* [Patch v2 0/2] mm/huge_memory: cleanup __split_unmapped_folio()
@ 2025-10-16 0:46 Wei Yang
2025-10-16 0:46 ` [Patch v2 1/2] mm/huge_memory: cache folio attribute in __split_unmapped_folio() Wei Yang
2025-10-16 0:46 ` [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic Wei Yang
0 siblings, 2 replies; 26+ messages in thread
From: Wei Yang @ 2025-10-16 0:46 UTC (permalink / raw)
To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang
Cc: linux-mm, Wei Yang
This short patch series cleans up and optimizes the internal logic of
the __split_unmapped_folio() function.
The goal is to improve clarity and efficiency by eliminating redundant
checks, caching stable attribute values, and simplifying the iteration
logic used for updating folio statistics.
These changes make the code easier to follow and maintain.
The split_huge_page_test selftest passes.
v2:
* merge patch 2-5
v1:
http://lkml.kernel.org/r/20251014134606.22543-1-richard.weiyang@gmail.com
Wei Yang (2):
mm/huge_memory: cache folio attribute in __split_unmapped_folio()
mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic
mm/huge_memory.c | 69 +++++++++++++++---------------------------------
1 file changed, 21 insertions(+), 48 deletions(-)
--
2.34.1
* [Patch v2 1/2] mm/huge_memory: cache folio attribute in __split_unmapped_folio()
2025-10-16 0:46 [Patch v2 0/2] mm/huge_memory: cleanup __split_unmapped_folio() Wei Yang
@ 2025-10-16 0:46 ` Wei Yang
2025-10-16 1:34 ` Barry Song
` (2 more replies)
2025-10-16 0:46 ` [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic Wei Yang
1 sibling, 3 replies; 26+ messages in thread
From: Wei Yang @ 2025-10-16 0:46 UTC (permalink / raw)
To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang
Cc: linux-mm, Wei Yang
During the execution of __split_unmapped_folio(), the folio's anon/!anon
attribute is invariant (not expected to change).
Therefore, it is safe and more efficient to retrieve this attribute once
at the start and reuse it throughout the function.
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: wang lian <lianux.mm@gmail.com>
---
mm/huge_memory.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3c74227cc847..4b2d5a7e5c8e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3527,6 +3527,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
struct page *split_at, struct xa_state *xas,
struct address_space *mapping, bool uniform_split)
{
+ bool is_anon = folio_test_anon(folio);
int order = folio_order(folio);
int start_order = uniform_split ? new_order : order - 1;
bool stop_split = false;
@@ -3534,7 +3535,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
int split_order;
int ret = 0;
- if (folio_test_anon(folio))
+ if (is_anon)
mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
folio_clear_has_hwpoisoned(folio);
@@ -3551,7 +3552,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
struct folio *new_folio;
/* order-1 anonymous folio is not supported */
- if (folio_test_anon(folio) && split_order == 1)
+ if (is_anon && split_order == 1)
continue;
if (uniform_split && split_order != new_order)
continue;
@@ -3603,7 +3604,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
if (split_order != new_order && !stop_split)
continue;
}
- if (folio_test_anon(new_folio))
+ if (is_anon)
mod_mthp_stat(folio_order(new_folio),
MTHP_STAT_NR_ANON, 1);
}
--
2.34.1
* [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic
2025-10-16 0:46 [Patch v2 0/2] mm/huge_memory: cleanup __split_unmapped_folio() Wei Yang
2025-10-16 0:46 ` [Patch v2 1/2] mm/huge_memory: cache folio attribute in __split_unmapped_folio() Wei Yang
@ 2025-10-16 0:46 ` Wei Yang
2025-10-16 1:25 ` wang lian
` (3 more replies)
1 sibling, 4 replies; 26+ messages in thread
From: Wei Yang @ 2025-10-16 0:46 UTC (permalink / raw)
To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang
Cc: linux-mm, Wei Yang
Existing __split_unmapped_folio() code splits the given folio and update
stats, but it is complicated to understand.
After simplification, __split_unmapped_folio() directly calculate and
update the folio statistics upon a successful split:
* All resulting folios are @split_order.
* The number of new folios are calculated directly from @old_order
and @split_order.
* The folio for the next split is identified as the one containing
@split_at.
* An xas_try_split() error is returned directly without worrying
about stats updates.
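As a worked example (illustrative only, not part of the diff): an
anonymous order-3 folio split non-uniformly to new_order 0, with
@split_at in its first page, now walks the loop as:

  split_order = 2: nr_new_folios = 1UL << (3 - 2) = 2;
                   NR_ANON: order 3 -= 1, order 2 += 2;
                   folio = the order-2 folio containing @split_at
  split_order = 1: skipped (order-1 anonymous folio is not supported)
  split_order = 0: nr_new_folios = 1UL << (2 - 0) = 4;
                   NR_ANON: order 2 -= 1, order 0 += 4

Net: one order-2 plus four order-0 folios (4 + 4*1 = 8 pages, matching
the original order-3 folio), with stats touched only after each
successful split.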
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
---
v2:
* merge patch 2-5
* retain start_order
* new_folios -> nr_new_folios
* add a comment at the end of the loop
---
mm/huge_memory.c | 66 ++++++++++++++----------------------------------
1 file changed, 19 insertions(+), 47 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 4b2d5a7e5c8e..68e851f5fcb2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3528,15 +3528,9 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
struct address_space *mapping, bool uniform_split)
{
bool is_anon = folio_test_anon(folio);
- int order = folio_order(folio);
- int start_order = uniform_split ? new_order : order - 1;
- bool stop_split = false;
- struct folio *next;
+ int old_order = folio_order(folio);
+ int start_order = uniform_split ? new_order : old_order - 1;
int split_order;
- int ret = 0;
-
- if (is_anon)
- mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
folio_clear_has_hwpoisoned(folio);
@@ -3545,17 +3539,13 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
* folio is split to new_order directly.
*/
for (split_order = start_order;
- split_order >= new_order && !stop_split;
+ split_order >= new_order;
split_order--) {
- struct folio *end_folio = folio_next(folio);
- int old_order = folio_order(folio);
- struct folio *new_folio;
+ int nr_new_folios = 1UL << (old_order - split_order);
/* order-1 anonymous folio is not supported */
if (is_anon && split_order == 1)
continue;
- if (uniform_split && split_order != new_order)
- continue;
if (mapping) {
/*
@@ -3568,49 +3558,31 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
else {
xas_set_order(xas, folio->index, split_order);
xas_try_split(xas, folio, old_order);
- if (xas_error(xas)) {
- ret = xas_error(xas);
- stop_split = true;
- }
+ if (xas_error(xas))
+ return xas_error(xas);
}
}
- if (!stop_split) {
- folio_split_memcg_refs(folio, old_order, split_order);
- split_page_owner(&folio->page, old_order, split_order);
- pgalloc_tag_split(folio, old_order, split_order);
+ folio_split_memcg_refs(folio, old_order, split_order);
+ split_page_owner(&folio->page, old_order, split_order);
+ pgalloc_tag_split(folio, old_order, split_order);
+ __split_folio_to_order(folio, old_order, split_order);
- __split_folio_to_order(folio, old_order, split_order);
+ if (is_anon) {
+ mod_mthp_stat(old_order, MTHP_STAT_NR_ANON, -1);
+ mod_mthp_stat(split_order, MTHP_STAT_NR_ANON, nr_new_folios);
}
/*
- * Iterate through after-split folios and update folio stats.
- * But in buddy allocator like split, the folio
- * containing the specified page is skipped until its order
- * is new_order, since the folio will be worked on in next
- * iteration.
+ * For uniform split, we have finished the job.
+ * For non-uniform split, we assign folio to the one the one
+ * containing @split_at and assign @old_order to @split_order.
*/
- for (new_folio = folio; new_folio != end_folio; new_folio = next) {
- next = folio_next(new_folio);
- /*
- * for buddy allocator like split, new_folio containing
- * @split_at page could be split again, thus do not
- * change stats yet. Wait until new_folio's order is
- * @new_order or stop_split is set to true by the above
- * xas_split() failure.
- */
- if (new_folio == page_folio(split_at)) {
- folio = new_folio;
- if (split_order != new_order && !stop_split)
- continue;
- }
- if (is_anon)
- mod_mthp_stat(folio_order(new_folio),
- MTHP_STAT_NR_ANON, 1);
- }
+ folio = page_folio(split_at);
+ old_order = split_order;
}
- return ret;
+ return 0;
}
bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
--
2.34.1
* Re: [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic
2025-10-16 0:46 ` [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic Wei Yang
@ 2025-10-16 1:25 ` wang lian
2025-10-16 20:10 ` David Hildenbrand
` (2 subsequent siblings)
3 siblings, 0 replies; 26+ messages in thread
From: wang lian @ 2025-10-16 1:25 UTC (permalink / raw)
To: richard.weiyang
Cc: Liam.Howlett, akpm, baohua, baolin.wang, david, dev.jain,
lance.yang, linux-mm, lorenzo.stoakes, npache, ryan.roberts, ziy,
wang lian
> Existing __split_unmapped_folio() code splits the given folio and update
> stats, but it is complicated to understand.
> After simplification, __split_unmapped_folio() directly calculate and
> update the folio statistics upon a successful split:
> * All resulting folios are @split_order.
> * The number of new folios are calculated directly from @old_order
> and @split_order.
> * The folio for the next split is identified as the one containing
> @split_at.
> * An xas_try_split() error is returned directly without worrying
> about stats updates.
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
The simplification looks great and makes the logic much easier to follow.
The new comment you added is also very helpful for understanding the
difference between the two split modes.
LGTM.
Reviewed-by: wang lian <lianux.mm@gmail.com>
> + * For uniform split, we have finished the job.
> + * For non-uniform split, we assign folio to the one the one
> + * containing @split_at and assign @old_order to @split_order.
--
Best Regards,
wang lian
* Re: [Patch v2 1/2] mm/huge_memory: cache folio attribute in __split_unmapped_folio()
2025-10-16 0:46 ` [Patch v2 1/2] mm/huge_memory: cache folio attribute in __split_unmapped_folio() Wei Yang
@ 2025-10-16 1:34 ` Barry Song
2025-10-16 20:01 ` David Hildenbrand
2025-10-17 9:46 ` Lorenzo Stoakes
2 siblings, 0 replies; 26+ messages in thread
From: Barry Song @ 2025-10-16 1:34 UTC (permalink / raw)
To: Wei Yang
Cc: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, lance.yang, linux-mm
On Thu, Oct 16, 2025 at 2:24 PM Wei Yang <richard.weiyang@gmail.com> wrote:
>
> During the execution of __split_unmapped_folio(), the folio's anon/!anon
> attribute is invariant (not expected to change).
>
> Therefore, it is safe and more efficient to retrieve this attribute once
> at the start and reuse it throughout the function.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: wang lian <lianux.mm@gmail.com>
> ---
Reviewed-by: Barry Song <baohua@kernel.org>
* Re: [Patch v2 1/2] mm/huge_memory: cache folio attribute in __split_unmapped_folio()
2025-10-16 0:46 ` [Patch v2 1/2] mm/huge_memory: cache folio attribute in __split_unmapped_folio() Wei Yang
2025-10-16 1:34 ` Barry Song
@ 2025-10-16 20:01 ` David Hildenbrand
2025-10-17 9:46 ` Lorenzo Stoakes
2 siblings, 0 replies; 26+ messages in thread
From: David Hildenbrand @ 2025-10-16 20:01 UTC (permalink / raw)
To: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang
Cc: linux-mm
On 16.10.25 02:46, Wei Yang wrote:
> During the execution of __split_unmapped_folio(), the folio's anon/!anon
> attribute is invariant (not expected to change).
>
> Therefore, it is safe and more efficient to retrieve this attribute once
> at the start and reuse it throughout the function.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: wang lian <lianux.mm@gmail.com>
> ---
> mm/huge_memory.c | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 3c74227cc847..4b2d5a7e5c8e 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3527,6 +3527,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> struct page *split_at, struct xa_state *xas,
> struct address_space *mapping, bool uniform_split)
> {
> + bool is_anon = folio_test_anon(folio);
Can be const.
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers
David / dhildenb
* Re: [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic
2025-10-16 0:46 ` [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic Wei Yang
2025-10-16 1:25 ` wang lian
@ 2025-10-16 20:10 ` David Hildenbrand
2025-10-16 20:22 ` Zi Yan
2025-10-17 0:55 ` Wei Yang
2025-10-17 9:44 ` Lorenzo Stoakes
3 siblings, 1 reply; 26+ messages in thread
From: David Hildenbrand @ 2025-10-16 20:10 UTC (permalink / raw)
To: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang
Cc: linux-mm
On 16.10.25 02:46, Wei Yang wrote:
> Existing __split_unmapped_folio() code splits the given folio and update
> stats, but it is complicated to understand.
>
> After simplification, __split_unmapped_folio() directly calculate and
s/calculate/calculates/
> update the folio statistics upon a successful split:
s/update/updates/
>
> * All resulting folios are @split_order.
>
> * The number of new folios are calculated directly from @old_order
> and @split_order.
That makes sense.
>
> * The folio for the next split is identified as the one containing
> @split_at.
>
That as well.
> * An xas_try_split() error is returned directly without worrying
> about stats updates.
Why is that change ok?
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
>
> ---
> v2:
> * merge patch 2-5
> * retain start_order
> * new_folios -> nr_new_folios
> * add a comment at the end of the loop
> ---
> mm/huge_memory.c | 66 ++++++++++++++----------------------------------
> 1 file changed, 19 insertions(+), 47 deletions(-)
Skimmed over the rest and LGTM, but it's all a bit complicated to
understand.
Trusting on Zu Yan here :)
--
Cheers
David / dhildenb
* Re: [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic
2025-10-16 20:10 ` David Hildenbrand
@ 2025-10-16 20:22 ` Zi Yan
2025-10-16 20:55 ` David Hildenbrand
0 siblings, 1 reply; 26+ messages in thread
From: Zi Yan @ 2025-10-16 20:22 UTC (permalink / raw)
To: David Hildenbrand
Cc: Wei Yang, akpm, lorenzo.stoakes, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm
On 16 Oct 2025, at 16:10, David Hildenbrand wrote:
> On 16.10.25 02:46, Wei Yang wrote:
>> Existing __split_unmapped_folio() code splits the given folio and update
>> stats, but it is complicated to understand.
>>
>> After simplification, __split_unmapped_folio() directly calculate and
>
> s/calculate/calculates/
>
>> update the folio statistics upon a successful split:
>
> s/update/updates/
>
>>
>> * All resulting folios are @split_order.
>>
>> * The number of new folios are calculated directly from @old_order
>> and @split_order.
>
> That makes sense.
>
>>
>> * The folio for the next split is identified as the one containing
>> @split_at.
>>
>
> That as well.
>
>> * An xas_try_split() error is returned directly without worrying
>> about stats updates.
>
> Why is that change ok?
Before this, the code decreases 1 for the to-be-split folio before
the split actually happens, so for a failed xas_try_split() the stats
needs to be fixed up. Wei’s code updates stats after the split,
removing the stats fixup for a xas_try_split() failure.
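In sketch form (paraphrasing both versions, not the exact code):

  /* before: stats decremented up front, so an error path needs a fixup */
  mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
  ...
  if (xas_error(xas))     /* NR_ANON is already off by one here */

  /* after: stats are only touched once the split has succeeded */
  if (xas_error(xas))
          return xas_error(xas);  /* nothing to undo */
  ...
  mod_mthp_stat(old_order, MTHP_STAT_NR_ANON, -1);
  mod_mthp_stat(split_order, MTHP_STAT_NR_ANON, nr_new_folios);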
How about?
* Stats fixup is no longer needed for an xas_try_split() error,
since originally stats was updated before a split happens.
>
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>>
>> ---
>> v2:
>> * merge patch 2-5
>> * retain start_order
>> * new_folios -> nr_new_folios
>> * add a comment at the end of the loop
>> ---
>> mm/huge_memory.c | 66 ++++++++++++++----------------------------------
>> 1 file changed, 19 insertions(+), 47 deletions(-)
>
> Skimmed over the rest and LGTM, but it's all a bit complicated to understand.
>
> Trusting on Zu Yan here :)
;)
Best Regards,
Yan, Zi
* Re: [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic
2025-10-16 20:22 ` Zi Yan
@ 2025-10-16 20:55 ` David Hildenbrand
2025-10-16 20:56 ` Zi Yan
0 siblings, 1 reply; 26+ messages in thread
From: David Hildenbrand @ 2025-10-16 20:55 UTC (permalink / raw)
To: Zi Yan
Cc: Wei Yang, akpm, lorenzo.stoakes, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm
On 16.10.25 22:22, Zi Yan wrote:
> On 16 Oct 2025, at 16:10, David Hildenbrand wrote:
>
>> On 16.10.25 02:46, Wei Yang wrote:
>>> Existing __split_unmapped_folio() code splits the given folio and update
>>> stats, but it is complicated to understand.
>>>
>>> After simplification, __split_unmapped_folio() directly calculate and
>>
>> s/calculate/calculates/
>>
>>> update the folio statistics upon a successful split:
>>
>> s/update/updates/
>>
>>>
>>> * All resulting folios are @split_order.
>>>
>>> * The number of new folios are calculated directly from @old_order
>>> and @split_order.
>>
>> That makes sense.
>>
>>>
>>> * The folio for the next split is identified as the one containing
>>> @split_at.
>>>
>>
>> That as well.
>>
>>> * An xas_try_split() error is returned directly without worrying
>>> about stats updates.
>>
>> Why is that change ok?
>
> Before this, the code decreases 1 for the to-be-split folio before
> the split actually happens, so for a failed xas_try_split() the stats
> needs to be fixed up. Wei’s code updates stats after the split,
> removing the stats fixup for a xas_try_split() failure.
Ah, that was not immediately clear to me.
>
> How about?
>
> * Stats fixup is no longer needed for an xas_try_split() error,
> since originally stats was updated before a split happens.
... since we now update the stats only after a successful split.
?
>
>>
>>>
>>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>>> Cc: Zi Yan <ziy@nvidia.com>
>>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>>>
>>> ---
>>> v2:
>>> * merge patch 2-5
>>> * retain start_order
>>> * new_folios -> nr_new_folios
>>> * add a comment at the end of the loop
>>> ---
>>> mm/huge_memory.c | 66 ++++++++++++++----------------------------------
>>> 1 file changed, 19 insertions(+), 47 deletions(-)
>>
>> Skimmed over the rest and LGTM, but it's all a bit complicated to understand.
>>
>> Trusting on Zu Yan here :)
>
> ;)
"Zi" of course :)
--
Cheers
David / dhildenb
* Re: [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic
2025-10-16 20:55 ` David Hildenbrand
@ 2025-10-16 20:56 ` Zi Yan
0 siblings, 0 replies; 26+ messages in thread
From: Zi Yan @ 2025-10-16 20:56 UTC (permalink / raw)
To: David Hildenbrand
Cc: Wei Yang, akpm, lorenzo.stoakes, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm
On 16 Oct 2025, at 16:55, David Hildenbrand wrote:
> On 16.10.25 22:22, Zi Yan wrote:
>> On 16 Oct 2025, at 16:10, David Hildenbrand wrote:
>>
>>> On 16.10.25 02:46, Wei Yang wrote:
>>>> Existing __split_unmapped_folio() code splits the given folio and update
>>>> stats, but it is complicated to understand.
>>>>
>>>> After simplification, __split_unmapped_folio() directly calculate and
>>>
>>> s/calculate/calculates/
>>>
>>>> update the folio statistics upon a successful split:
>>>
>>> s/update/updates/
>>>
>>>>
>>>> * All resulting folios are @split_order.
>>>>
>>>> * The number of new folios are calculated directly from @old_order
>>>> and @split_order.
>>>
>>> That makes sense.
>>>
>>>>
>>>> * The folio for the next split is identified as the one containing
>>>> @split_at.
>>>>
>>>
>>> That as well.
>>>
>>>> * An xas_try_split() error is returned directly without worrying
>>>> about stats updates.
>>>
>>> Why is that change ok?
>>
>> Before this, the code decreases 1 for the to-be-split folio before
>> the split actually happens, so for a failed xas_try_split() the stats
>> needs to be fixed up. Wei’s code updates stats after the split,
>> removing the stats fixup for a xas_try_split() failure.
>
> Ah, that was not immediately clear to me.
>
>>
>> How about?
>>
>> * Stats fixup is no longer needed for an xas_try_split() error,
>> since originally stats was updated before a split happens.
>
> ... since we now update the stats only after a successful split.
Sounds good to me.
>
> ?
>
>>
>>>
>>>>
>>>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>>>> Cc: Zi Yan <ziy@nvidia.com>
>>>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>>>>
>>>> ---
>>>> v2:
>>>> * merge patch 2-5
>>>> * retain start_order
>>>> * new_folios -> nr_new_folios
>>>> * add a comment at the end of the loop
>>>> ---
>>>> mm/huge_memory.c | 66 ++++++++++++++----------------------------------
>>>> 1 file changed, 19 insertions(+), 47 deletions(-)
>>>
>>> Skimmed over the rest and LGTM, but it's all a bit complicated to understand.
>>>
>>> Trusting on Zu Yan here :)
>>
>> ;)
>
> "Zi" of course :)
>
> --
> Cheers
>
> David / dhildenb
Best Regards,
Yan, Zi
* Re: [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic
2025-10-16 0:46 ` [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic Wei Yang
2025-10-16 1:25 ` wang lian
2025-10-16 20:10 ` David Hildenbrand
@ 2025-10-17 0:55 ` Wei Yang
2025-10-17 9:44 ` Lorenzo Stoakes
3 siblings, 0 replies; 26+ messages in thread
From: Wei Yang @ 2025-10-17 0:55 UTC (permalink / raw)
To: Wei Yang
Cc: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm
On Thu, Oct 16, 2025 at 12:46:13AM +0000, Wei Yang wrote:
>Existing __split_unmapped_folio() code splits the given folio and update
>stats, but it is complicated to understand.
>
>After simplification, __split_unmapped_folio() directly calculate and
>update the folio statistics upon a successful split:
>
>* All resulting folios are @split_order.
>
>* The number of new folios are calculated directly from @old_order
> and @split_order.
>
>* The folio for the next split is identified as the one containing
> @split_at.
>
>* An xas_try_split() error is returned directly without worrying
> about stats updates.
@Andrew
Following David's and Zi's comments, there is a small change to the changelog.
Would you mind helping to update it? Below is the updated version.
---
Existing __split_unmapped_folio() code splits the given folio and
updates stats, but it is complicated to understand.
After simplification, __split_unmapped_folio() directly calculates and
updates the folio statistics upon a successful split:
* All resulting folios are @split_order.
* The number of new folios are calculated directly from @old_order and
@split_order.
* The folio for the next split is identified as the one containing
@split_at.
* Stats fixup is no longer needed for an xas_try_split() error,
since we now update the stats only after a successful split.
--
Wei Yang
Help you, Help me
* Re: [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic
2025-10-16 0:46 ` [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic Wei Yang
` (2 preceding siblings ...)
2025-10-17 0:55 ` Wei Yang
@ 2025-10-17 9:44 ` Lorenzo Stoakes
2025-10-17 14:26 ` Zi Yan
3 siblings, 1 reply; 26+ messages in thread
From: Lorenzo Stoakes @ 2025-10-17 9:44 UTC (permalink / raw)
To: Wei Yang
Cc: akpm, david, ziy, baolin.wang, Liam.Howlett, npache, ryan.roberts,
dev.jain, baohua, lance.yang, linux-mm
On Thu, Oct 16, 2025 at 12:46:13AM +0000, Wei Yang wrote:
> Existing __split_unmapped_folio() code splits the given folio and update
> stats, but it is complicated to understand.
>
> After simplification, __split_unmapped_folio() directly calculate and
> update the folio statistics upon a successful split:
>
> * All resulting folios are @split_order.
>
> * The number of new folios are calculated directly from @old_order
> and @split_order.
>
> * The folio for the next split is identified as the one containing
> @split_at.
>
> * An xas_try_split() error is returned directly without worrying
> about stats updates.
You seem to be doing two things at once, a big refactoring where you move stuff
about AND changing functionality.
Can we split this out please? It makes review so much harder.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
>
> ---
> v2:
> * merge patch 2-5
> * retain start_order
> * new_folios -> nr_new_folios
> * add a comment at the end of the loop
> ---
> mm/huge_memory.c | 66 ++++++++++++++----------------------------------
> 1 file changed, 19 insertions(+), 47 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 4b2d5a7e5c8e..68e851f5fcb2 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3528,15 +3528,9 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> struct address_space *mapping, bool uniform_split)
> {
> bool is_anon = folio_test_anon(folio);
> - int order = folio_order(folio);
> - int start_order = uniform_split ? new_order : order - 1;
> - bool stop_split = false;
> - struct folio *next;
> + int old_order = folio_order(folio);
> + int start_order = uniform_split ? new_order : old_order - 1;
> int split_order;
> - int ret = 0;
> -
> - if (is_anon)
> - mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
>
> folio_clear_has_hwpoisoned(folio);
>
> @@ -3545,17 +3539,13 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> * folio is split to new_order directly.
> */
> for (split_order = start_order;
> - split_order >= new_order && !stop_split;
> + split_order >= new_order;
> split_order--) {
> - struct folio *end_folio = folio_next(folio);
> - int old_order = folio_order(folio);
> - struct folio *new_folio;
> + int nr_new_folios = 1UL << (old_order - split_order);
>
> /* order-1 anonymous folio is not supported */
> if (is_anon && split_order == 1)
> continue;
> - if (uniform_split && split_order != new_order)
> - continue;
>
> if (mapping) {
> /*
> @@ -3568,49 +3558,31 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> else {
> xas_set_order(xas, folio->index, split_order);
> xas_try_split(xas, folio, old_order);
> - if (xas_error(xas)) {
> - ret = xas_error(xas);
> - stop_split = true;
> - }
> + if (xas_error(xas))
> + return xas_error(xas);
> }
> }
>
> - if (!stop_split) {
> - folio_split_memcg_refs(folio, old_order, split_order);
> - split_page_owner(&folio->page, old_order, split_order);
> - pgalloc_tag_split(folio, old_order, split_order);
> + folio_split_memcg_refs(folio, old_order, split_order);
> + split_page_owner(&folio->page, old_order, split_order);
> + pgalloc_tag_split(folio, old_order, split_order);
> + __split_folio_to_order(folio, old_order, split_order);
>
> - __split_folio_to_order(folio, old_order, split_order);
> + if (is_anon) {
> + mod_mthp_stat(old_order, MTHP_STAT_NR_ANON, -1);
> + mod_mthp_stat(split_order, MTHP_STAT_NR_ANON, nr_new_folios);
> }
>
> /*
> - * Iterate through after-split folios and update folio stats.
> - * But in buddy allocator like split, the folio
> - * containing the specified page is skipped until its order
> - * is new_order, since the folio will be worked on in next
> - * iteration.
> + * For uniform split, we have finished the job.
Finished what job? This is unclear.
> + * For non-uniform split, we assign folio to the one the one
'To the one the one' you're duplicating that, and I have no idea what 'the one'
means?
> + * containing @split_at and assign @old_order to @split_order.
Now you're just describing code, and why are you making it kdoc-like in a
non-kdoc comment?
I mean you're now unconditionally assigning folio to page_folio(split_at) and
reassigning split-order to old_order so you really need to be clearer about what
you mean here, given there is no e.g.:
if (is uniform split)
break;
Something simpler would probably work better here.
> */
> - for (new_folio = folio; new_folio != end_folio; new_folio = next) {
> - next = folio_next(new_folio);
> - /*
> - * for buddy allocator like split, new_folio containing
> - * @split_at page could be split again, thus do not
> - * change stats yet. Wait until new_folio's order is
> - * @new_order or stop_split is set to true by the above
> - * xas_split() failure.
> - */
> - if (new_folio == page_folio(split_at)) {
> - folio = new_folio;
> - if (split_order != new_order && !stop_split)
> - continue;
> - }
> - if (is_anon)
> - mod_mthp_stat(folio_order(new_folio),
> - MTHP_STAT_NR_ANON, 1);
> - }
> + folio = page_folio(split_at);
> + old_order = split_order;
> }
>
> - return ret;
> + return 0;
> }
>
> bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
> --
> 2.34.1
>
>
* Re: [Patch v2 1/2] mm/huge_memory: cache folio attribute in __split_unmapped_folio()
2025-10-16 0:46 ` [Patch v2 1/2] mm/huge_memory: cache folio attribute in __split_unmapped_folio() Wei Yang
2025-10-16 1:34 ` Barry Song
2025-10-16 20:01 ` David Hildenbrand
@ 2025-10-17 9:46 ` Lorenzo Stoakes
2025-10-19 7:51 ` Wei Yang
2 siblings, 1 reply; 26+ messages in thread
From: Lorenzo Stoakes @ 2025-10-17 9:46 UTC (permalink / raw)
To: Wei Yang
Cc: akpm, david, ziy, baolin.wang, Liam.Howlett, npache, ryan.roberts,
dev.jain, baohua, lance.yang, linux-mm
I don't like 'cache' here. You're not cache-ing anything, you're relying on an
attribute remaining constant.
Something like 'avoid reinvoking folio_test_anon()' would be simpler.
On Thu, Oct 16, 2025 at 12:46:12AM +0000, Wei Yang wrote:
> During the execution of __split_unmapped_folio(), the folio's anon/!anon
> attribute is invariant (not expected to change).
>
> Therefore, it is safe and more efficient to retrieve this attribute once
> at the start and reuse it throughout the function.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: wang lian <lianux.mm@gmail.com>
> ---
> mm/huge_memory.c | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 3c74227cc847..4b2d5a7e5c8e 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3527,6 +3527,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> struct page *split_at, struct xa_state *xas,
> struct address_space *mapping, bool uniform_split)
> {
> + bool is_anon = folio_test_anon(folio);
As David said, make this const please.
> int order = folio_order(folio);
> int start_order = uniform_split ? new_order : order - 1;
> bool stop_split = false;
> @@ -3534,7 +3535,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> int split_order;
> int ret = 0;
>
> - if (folio_test_anon(folio))
> + if (is_anon)
> mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
>
> folio_clear_has_hwpoisoned(folio);
> @@ -3551,7 +3552,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> struct folio *new_folio;
>
> /* order-1 anonymous folio is not supported */
> - if (folio_test_anon(folio) && split_order == 1)
> + if (is_anon && split_order == 1)
> continue;
> if (uniform_split && split_order != new_order)
> continue;
> @@ -3603,7 +3604,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> if (split_order != new_order && !stop_split)
> continue;
> }
> - if (folio_test_anon(new_folio))
> + if (is_anon)
> mod_mthp_stat(folio_order(new_folio),
> MTHP_STAT_NR_ANON, 1);
> }
> --
> 2.34.1
>
>
* Re: [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic
2025-10-17 9:44 ` Lorenzo Stoakes
@ 2025-10-17 14:26 ` Zi Yan
2025-10-17 14:29 ` Zi Yan
` (2 more replies)
0 siblings, 3 replies; 26+ messages in thread
From: Zi Yan @ 2025-10-17 14:26 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Wei Yang, akpm, david, baolin.wang, Liam.Howlett, npache,
ryan.roberts, dev.jain, baohua, lance.yang, linux-mm
On 17 Oct 2025, at 5:44, Lorenzo Stoakes wrote:
> On Thu, Oct 16, 2025 at 12:46:13AM +0000, Wei Yang wrote:
>> Existing __split_unmapped_folio() code splits the given folio and update
>> stats, but it is complicated to understand.
>>
>> After simplification, __split_unmapped_folio() directly calculate and
>> update the folio statistics upon a successful split:
>>
>> * All resulting folios are @split_order.
>>
>> * The number of new folios are calculated directly from @old_order
>> and @split_order.
>>
>> * The folio for the next split is identified as the one containing
>> @split_at.
>>
>> * An xas_try_split() error is returned directly without worrying
>> about stats updates.
>
> You seem to be doing two things at once, a big refactoring where you move stuff
> about AND changing functionality.
No function change is done in this patchset. The wording might be
confusing here, it should be read like:
After simplification, __split_unmapped_folio() directly calculate and
update the folio statistics upon a successful split, so An xas_try_split()
error is returned directly without worrying about stats updates.
David suggested a change[1] to make it clear:
Stats fixup is no longer needed for an xas_try_split() error,
since we now update the stats only after a successful split.
[1] https://lore.kernel.org/linux-mm/518dedb8-d379-47c3-a4c1-f4afc789f1b4@redhat.com/
>
> Can we split this out please? It makes review so much harder.
I asked Wei to use a single patch for this change, since the original
code was complicated due to the initial implementation. After my
recent change (first commit 6c7de9c83)[1], __split_unmapped_folio()
can be simplified like Wei did here.
[1] https://lore.kernel.org/all/20250718023000.4044406-7-ziy@nvidia.com/
>
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>>
>> ---
>> v2:
>> * merge patch 2-5
>> * retain start_order
>> * new_folios -> nr_new_folios
>> * add a comment at the end of the loop
>> ---
>> mm/huge_memory.c | 66 ++++++++++++++----------------------------------
>> 1 file changed, 19 insertions(+), 47 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 4b2d5a7e5c8e..68e851f5fcb2 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3528,15 +3528,9 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>> struct address_space *mapping, bool uniform_split)
>> {
>> bool is_anon = folio_test_anon(folio);
>> - int order = folio_order(folio);
>> - int start_order = uniform_split ? new_order : order - 1;
>> - bool stop_split = false;
>> - struct folio *next;
>> + int old_order = folio_order(folio);
>> + int start_order = uniform_split ? new_order : old_order - 1;
>> int split_order;
>> - int ret = 0;
>> -
>> - if (is_anon)
>> - mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
>>
>> folio_clear_has_hwpoisoned(folio);
>>
>> @@ -3545,17 +3539,13 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>> * folio is split to new_order directly.
>> */
>> for (split_order = start_order;
>> - split_order >= new_order && !stop_split;
>> + split_order >= new_order;
>> split_order--) {
>> - struct folio *end_folio = folio_next(folio);
>> - int old_order = folio_order(folio);
>> - struct folio *new_folio;
>> + int nr_new_folios = 1UL << (old_order - split_order);
>>
>> /* order-1 anonymous folio is not supported */
>> if (is_anon && split_order == 1)
>> continue;
>> - if (uniform_split && split_order != new_order)
>> - continue;
>>
>> if (mapping) {
>> /*
>> @@ -3568,49 +3558,31 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>> else {
>> xas_set_order(xas, folio->index, split_order);
>> xas_try_split(xas, folio, old_order);
>> - if (xas_error(xas)) {
>> - ret = xas_error(xas);
>> - stop_split = true;
>> - }
>> + if (xas_error(xas))
>> + return xas_error(xas);
>> }
>> }
>>
>> - if (!stop_split) {
>> - folio_split_memcg_refs(folio, old_order, split_order);
>> - split_page_owner(&folio->page, old_order, split_order);
>> - pgalloc_tag_split(folio, old_order, split_order);
>> + folio_split_memcg_refs(folio, old_order, split_order);
>> + split_page_owner(&folio->page, old_order, split_order);
>> + pgalloc_tag_split(folio, old_order, split_order);
>> + __split_folio_to_order(folio, old_order, split_order);
>>
>> - __split_folio_to_order(folio, old_order, split_order);
>> + if (is_anon) {
>> + mod_mthp_stat(old_order, MTHP_STAT_NR_ANON, -1);
>> + mod_mthp_stat(split_order, MTHP_STAT_NR_ANON, nr_new_folios);
>> }
>>
>> /*
>> - * Iterate through after-split folios and update folio stats.
>> - * But in buddy allocator like split, the folio
>> - * containing the specified page is skipped until its order
>> - * is new_order, since the folio will be worked on in next
>> - * iteration.
>> + * For uniform split, we have finished the job.
>
> Finished what job? This is unclear.
>
>> + * For non-uniform split, we assign folio to the one the one
>
> 'To the one the one' you're duplicating that, and I have no idea what 'the one'
> means?
>
>> + * containing @split_at and assign @old_order to @split_order.
>
> Now you're just describing code, and why are you making it kdoc-like in a
> non-kdoc comment?
>
> I mean you're now unconditionally assigning folio to page_folio(split_at) and
> reassigning split-order to old_order so you really need to be clearer about what
> you mean here, given there is no e.g.:
>
> if (is uniform split)
> break;
>
> Something simpler would probably work better here.
>
>> */
>> - for (new_folio = folio; new_folio != end_folio; new_folio = next) {
>> - next = folio_next(new_folio);
>> - /*
>> - * for buddy allocator like split, new_folio containing
>> - * @split_at page could be split again, thus do not
>> - * change stats yet. Wait until new_folio's order is
>> - * @new_order or stop_split is set to true by the above
>> - * xas_split() failure.
>> - */
>> - if (new_folio == page_folio(split_at)) {
>> - folio = new_folio;
>> - if (split_order != new_order && !stop_split)
>> - continue;
>> - }
>> - if (is_anon)
>> - mod_mthp_stat(folio_order(new_folio),
>> - MTHP_STAT_NR_ANON, 1);
>> - }
>> + folio = page_folio(split_at);
>> + old_order = split_order;
>> }
>>
>> - return ret;
>> + return 0;
>> }
>>
>> bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>> --
>> 2.34.1
>>
>>
--
Best Regards,
Yan, Zi
* Re: [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic
2025-10-17 14:26 ` Zi Yan
@ 2025-10-17 14:29 ` Zi Yan
2025-10-17 14:44 ` Lorenzo Stoakes
2025-10-17 14:45 ` David Hildenbrand
2 siblings, 0 replies; 26+ messages in thread
From: Zi Yan @ 2025-10-17 14:29 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Wei Yang, akpm, david, baolin.wang, Liam.Howlett, npache,
ryan.roberts, dev.jain, baohua, lance.yang, linux-mm
On 17 Oct 2025, at 10:26, Zi Yan wrote:
> On 17 Oct 2025, at 5:44, Lorenzo Stoakes wrote:
>
>> On Thu, Oct 16, 2025 at 12:46:13AM +0000, Wei Yang wrote:
>>> Existing __split_unmapped_folio() code splits the given folio and update
>>> stats, but it is complicated to understand.
>>>
>>> After simplification, __split_unmapped_folio() directly calculate and
>>> update the folio statistics upon a successful split:
>>>
>>> * All resulting folios are @split_order.
>>>
>>> * The number of new folios are calculated directly from @old_order
>>> and @split_order.
>>>
>>> * The folio for the next split is identified as the one containing
>>> @split_at.
>>>
>>> * An xas_try_split() error is returned directly without worrying
>>> about stats updates.
>>
>> You seem to be doing two things at once, a big refactoring where you move stuff
>> about AND changing functionality.
>
> No function change is done in this patchset. The wording might be
> confusing here, it should be read like:
>
> After simplification, __split_unmapped_folio() directly calculate and
> update the folio statistics upon a successful split, so An xas_try_split()
> error is returned directly without worrying about stats updates.
>
> David suggested a change[1] to make it clear:
> Stats fixup is no longer needed for an xas_try_split() error,
> since we now update the stats only after a successful split.
>
>
> [1] https://lore.kernel.org/linux-mm/518dedb8-d379-47c3-a4c1-f4afc789f1b4@redhat.com/
>
>>
>> Can we split this out please? It makes review so much harder.
>
> I asked Wei to use a single patch for this change, since the original
> code was complicated due to the initial implementation. After my
> recent change (first commit 6c7de9c83)[1], __split_unmapped_folio()
> can be simplified like Wei did here.
>
>
>
> [1] https://lore.kernel.org/all/20250718023000.4044406-7-ziy@nvidia.com/
>
>>
>>>
>>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>>> Cc: Zi Yan <ziy@nvidia.com>
>>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>>>
>>> ---
>>> v2:
>>> * merge patch 2-5
>>> * retain start_order
>>> * new_folios -> nr_new_folios
>>> * add a comment at the end of the loop
>>> ---
>>> mm/huge_memory.c | 66 ++++++++++++++----------------------------------
>>> 1 file changed, 19 insertions(+), 47 deletions(-)
>>>
>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>> index 4b2d5a7e5c8e..68e851f5fcb2 100644
>>> --- a/mm/huge_memory.c
>>> +++ b/mm/huge_memory.c
>>> @@ -3528,15 +3528,9 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>>> struct address_space *mapping, bool uniform_split)
>>> {
>>> bool is_anon = folio_test_anon(folio);
>>> - int order = folio_order(folio);
>>> - int start_order = uniform_split ? new_order : order - 1;
>>> - bool stop_split = false;
>>> - struct folio *next;
>>> + int old_order = folio_order(folio);
>>> + int start_order = uniform_split ? new_order : old_order - 1;
>>> int split_order;
>>> - int ret = 0;
>>> -
>>> - if (is_anon)
>>> - mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
>>>
>>> folio_clear_has_hwpoisoned(folio);
>>>
>>> @@ -3545,17 +3539,13 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>>> * folio is split to new_order directly.
>>> */
>>> for (split_order = start_order;
>>> - split_order >= new_order && !stop_split;
>>> + split_order >= new_order;
>>> split_order--) {
>>> - struct folio *end_folio = folio_next(folio);
>>> - int old_order = folio_order(folio);
>>> - struct folio *new_folio;
>>> + int nr_new_folios = 1UL << (old_order - split_order);
>>>
>>> /* order-1 anonymous folio is not supported */
>>> if (is_anon && split_order == 1)
>>> continue;
>>> - if (uniform_split && split_order != new_order)
>>> - continue;
>>>
>>> if (mapping) {
>>> /*
>>> @@ -3568,49 +3558,31 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>>> else {
>>> xas_set_order(xas, folio->index, split_order);
>>> xas_try_split(xas, folio, old_order);
>>> - if (xas_error(xas)) {
>>> - ret = xas_error(xas);
>>> - stop_split = true;
>>> - }
>>> + if (xas_error(xas))
>>> + return xas_error(xas);
>>> }
>>> }
>>>
>>> - if (!stop_split) {
>>> - folio_split_memcg_refs(folio, old_order, split_order);
>>> - split_page_owner(&folio->page, old_order, split_order);
>>> - pgalloc_tag_split(folio, old_order, split_order);
>>> + folio_split_memcg_refs(folio, old_order, split_order);
>>> + split_page_owner(&folio->page, old_order, split_order);
>>> + pgalloc_tag_split(folio, old_order, split_order);
>>> + __split_folio_to_order(folio, old_order, split_order);
>>>
>>> - __split_folio_to_order(folio, old_order, split_order);
>>> + if (is_anon) {
>>> + mod_mthp_stat(old_order, MTHP_STAT_NR_ANON, -1);
>>> + mod_mthp_stat(split_order, MTHP_STAT_NR_ANON, nr_new_folios);
>>> }
>>>
>>> /*
>>> - * Iterate through after-split folios and update folio stats.
>>> - * But in buddy allocator like split, the folio
>>> - * containing the specified page is skipped until its order
>>> - * is new_order, since the folio will be worked on in next
>>> - * iteration.
>>> + * For uniform split, we have finished the job.
>>
>> Finished what job? This is unclear.
>>
>>> + * For non-uniform split, we assign folio to the one the one
>>
>> 'To the one the one' you're duplicating that, and I have no idea what 'the one'
>> means?
>>
>>> + * containing @split_at and assign @old_order to @split_order.
>>
>> Now you're just describing code, and why are you making it kdoc-like in a
>> non-kdoc comment?
>>
>> I mean you're now unconditionally assigning folio to page_folio(split_at) and
>> reassigning split-order to old_order so you really need to be clearer about what
>> you mean here, given there is no e.g.:
>>
>> if (is uniform split)
>> break;
>>
>> Something simpler would probably work better here.
Maybe. When uniform_split is true, the for loop will only be done once,
since start_order == new_order. This break is almost a no-op.
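That is, the uniform case degenerates to a single pass (sketch):

  /* uniform_split: start_order == new_order, so exactly one iteration */
  for (split_order = start_order; split_order >= new_order; split_order--)
          ...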
>>
>>> */
>>> - for (new_folio = folio; new_folio != end_folio; new_folio = next) {
>>> - next = folio_next(new_folio);
>>> - /*
>>> - * for buddy allocator like split, new_folio containing
>>> - * @split_at page could be split again, thus do not
>>> - * change stats yet. Wait until new_folio's order is
>>> - * @new_order or stop_split is set to true by the above
>>> - * xas_split() failure.
>>> - */
>>> - if (new_folio == page_folio(split_at)) {
>>> - folio = new_folio;
>>> - if (split_order != new_order && !stop_split)
>>> - continue;
>>> - }
>>> - if (is_anon)
>>> - mod_mthp_stat(folio_order(new_folio),
>>> - MTHP_STAT_NR_ANON, 1);
>>> - }
>>> + folio = page_folio(split_at);
>>> + old_order = split_order;
>>> }
>>>
>>> - return ret;
>>> + return 0;
>>> }
>>>
>>> bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>>> --
>>> 2.34.1
>>>
>>>
>
>
> --
> Best Regards,
> Yan, Zi
--
Best Regards,
Yan, Zi
* Re: [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic
2025-10-17 14:26 ` Zi Yan
2025-10-17 14:29 ` Zi Yan
@ 2025-10-17 14:44 ` Lorenzo Stoakes
2025-10-17 14:45 ` David Hildenbrand
2 siblings, 0 replies; 26+ messages in thread
From: Lorenzo Stoakes @ 2025-10-17 14:44 UTC (permalink / raw)
To: Zi Yan
Cc: Wei Yang, akpm, david, baolin.wang, Liam.Howlett, npache,
ryan.roberts, dev.jain, baohua, lance.yang, linux-mm
On Fri, Oct 17, 2025 at 10:26:13AM -0400, Zi Yan wrote:
> On 17 Oct 2025, at 5:44, Lorenzo Stoakes wrote:
>
> > On Thu, Oct 16, 2025 at 12:46:13AM +0000, Wei Yang wrote:
> >> Existing __split_unmapped_folio() code splits the given folio and update
> >> stats, but it is complicated to understand.
> >>
> >> After simplification, __split_unmapped_folio() directly calculate and
> >> update the folio statistics upon a successful split:
> >>
> >> * All resulting folios are @split_order.
> >>
> >> * The number of new folios are calculated directly from @old_order
> >> and @split_order.
> >>
> >> * The folio for the next split is identified as the one containing
> >> @split_at.
> >>
> >> * An xas_try_split() error is returned directly without worrying
> >> about stats updates.
> >
> > You seem to be doing two things at once, a big refactoring where you move stuff
> > about AND changing functionality.
>
> No function change is done in this patchset. The wording might be
> confusing here, it should be read like:
I made this assessment based on David saying 'why are you making this change?'
repeatedly.
That doesn't sound like a refactoring, but if you're claiming these are all
invalid...
>
> After simplification, __split_unmapped_folio() directly calculate and
> update the folio statistics upon a successful split, so An xas_try_split()
> error is returned directly without worrying about stats updates.
>
> David suggested a change[1] to make it clear:
> Stats fixup is no longer needed for an xas_try_split() error,
> since we now update the stats only after a successful split.
Right, isn't that a substantive change?
>
>
> [1] https://lore.kernel.org/linux-mm/518dedb8-d379-47c3-a4c1-f4afc789f1b4@redhat.com/
>
> >
> > Can we split this out please? It makes review so much harder.
>
> I asked Wei to use a single patch for this change, since the original
> code was complicated due to the initial implementation. After my
> recent change (first commit 6c7de9c83)[1], __split_unmapped_folio()
> can be simplified like Wei did here.
>
>
>
> [1] https://lore.kernel.org/all/20250718023000.4044406-7-ziy@nvidia.com/
>
I mean I am not sure why it is so problematic to split up changes so we don't do
a bunch of complicated/not complicated things all at once, which is harder to
review, more bug prone, and makes bisecting harder.
I've done series which consist of a _series_ of 'no functional change' patches
because they logically fell out that way.
But if you're really confident this is a non-functional change only and is a
simple and sensible change, I can take another look.
Thanks, Lorenzo
* Re: [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic
2025-10-17 14:26 ` Zi Yan
2025-10-17 14:29 ` Zi Yan
2025-10-17 14:44 ` Lorenzo Stoakes
@ 2025-10-17 14:45 ` David Hildenbrand
2025-10-17 14:55 ` Lorenzo Stoakes
2 siblings, 1 reply; 26+ messages in thread
From: David Hildenbrand @ 2025-10-17 14:45 UTC (permalink / raw)
To: Zi Yan, Lorenzo Stoakes
Cc: Wei Yang, akpm, baolin.wang, Liam.Howlett, npache, ryan.roberts,
dev.jain, baohua, lance.yang, linux-mm
On 17.10.25 16:26, Zi Yan wrote:
> On 17 Oct 2025, at 5:44, Lorenzo Stoakes wrote:
>
>> On Thu, Oct 16, 2025 at 12:46:13AM +0000, Wei Yang wrote:
>>> Existing __split_unmapped_folio() code splits the given folio and update
>>> stats, but it is complicated to understand.
>>>
>>> After simplification, __split_unmapped_folio() directly calculate and
>>> update the folio statistics upon a successful split:
>>>
>>> * All resulting folios are @split_order.
>>>
>>> * The number of new folios are calculated directly from @old_order
>>> and @split_order.
>>>
>>> * The folio for the next split is identified as the one containing
>>> @split_at.
>>>
>>> * An xas_try_split() error is returned directly without worrying
>>> about stats updates.
>>
>> You seem to be doing two things at once, a big refactoring where you move stuff
>> about AND changing functionality.
>
> No function change is done in this patchset. The wording might be
> confusing here, it should be read like:
>
> After simplification, __split_unmapped_folio() directly calculate and
> update the folio statistics upon a successful split, so An xas_try_split()
> error is returned directly without worrying about stats updates.
>
> David suggested a change[1] to make it clear:
> Stats fixup is no longer needed for an xas_try_split() error,
> since we now update the stats only after a successful split.
>
>
> [1] https://lore.kernel.org/linux-mm/518dedb8-d379-47c3-a4c1-f4afc789f1b4@redhat.com/
>
>>
>> Can we split this out please? It makes review so much harder.
>
> I asked Wei to use a single patch for this change, since the original
> code was complicated due to the initial implementation. After my
> recent change (first commit 6c7de9c83)[1], __split_unmapped_folio()
> can be simplified like Wei did here.
I think it was split too much, agreed.
This patch was a bit hard for me to review, not sure if there would have
been a better way to split some stuff out more logically.
Most changes are just complicated either way.
Not sure if splitting out the stats update change or the calculation of
the number of folios could have been easily done.
--
Cheers
David / dhildenb
* Re: [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic
2025-10-17 14:45 ` David Hildenbrand
@ 2025-10-17 14:55 ` Lorenzo Stoakes
2025-10-17 17:24 ` Zi Yan
2025-10-19 8:00 ` Wei Yang
0 siblings, 2 replies; 26+ messages in thread
From: Lorenzo Stoakes @ 2025-10-17 14:55 UTC (permalink / raw)
To: David Hildenbrand
Cc: Zi Yan, Wei Yang, akpm, baolin.wang, Liam.Howlett, npache,
ryan.roberts, dev.jain, baohua, lance.yang, linux-mm
On Fri, Oct 17, 2025 at 04:45:17PM +0200, David Hildenbrand wrote:
> On 17.10.25 16:26, Zi Yan wrote:
> > On 17 Oct 2025, at 5:44, Lorenzo Stoakes wrote:
> >
> > > On Thu, Oct 16, 2025 at 12:46:13AM +0000, Wei Yang wrote:
> > > > Existing __split_unmapped_folio() code splits the given folio and update
> > > > stats, but it is complicated to understand.
> > > >
> > > > After simplification, __split_unmapped_folio() directly calculate and
> > > > update the folio statistics upon a successful split:
> > > >
> > > > * All resulting folios are @split_order.
> > > >
> > > > * The number of new folios are calculated directly from @old_order
> > > > and @split_order.
> > > >
> > > > * The folio for the next split is identified as the one containing
> > > > @split_at.
> > > >
> > > > * An xas_try_split() error is returned directly without worrying
> > > > about stats updates.
> > >
> > > You seem to be doing two things at once, a big refactoring where you move stuff
> > > about AND changing functionality.
> >
> > No function change is done in this patchset. The wording might be
> > confusing here, it should be read like:
> >
> > After simplification, __split_unmapped_folio() directly calculate and
> > update the folio statistics upon a successful split, so An xas_try_split()
> > error is returned directly without worrying about stats updates.
> >
> > David suggested a change[1] to make it clear:
> > Stats fixup is no longer needed for an xas_try_split() error,
> > since we now update the stats only after a successful split.
> >
> >
> > [1] https://lore.kernel.org/linux-mm/518dedb8-d379-47c3-a4c1-f4afc789f1b4@redhat.com/
> >
> > >
> > > Can we split this out please? It makes review so much harder.
> >
> > I asked Wei to use a single patch for this change, since the original
> > code was complicated due to the initial implementation. After my
> > recent change (first commit 6c7de9c83)[1], __split_unmapped_folio()
> > can be simplified like Wei did here.
>
> I think it was too much split, agreed.
>
> This patch was a bit hard for me to review, not sure if there would have
> been a better way to split some stuff out more logically.
>
> Most changes just complicated way.
>
> Not sure if splitting out the stats update change or the calculation of the
> number of folios could have been easily done.
I mean as I said to Wei/Zi, my objection is both moving code around and
making complicated changes that require you to really dig in to figure them
out _at the same time_.
It's perfectly feasible to send separate patches for non-functional
changes, I've done that myself, each building on the next to make it clear
what's going on.
I'm a little surprised at the opposition to that...
But if you're convinced there's no other way this could be done (bit
surprising as you're asking 'why this change' several times here) I guess
I'll sit down and try to decode it.
* Re: [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic
2025-10-17 14:55 ` Lorenzo Stoakes
@ 2025-10-17 17:24 ` Zi Yan
2025-10-20 14:03 ` Lorenzo Stoakes
2025-10-19 8:00 ` Wei Yang
1 sibling, 1 reply; 26+ messages in thread
From: Zi Yan @ 2025-10-17 17:24 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: David Hildenbrand, Wei Yang, akpm, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm
On 17 Oct 2025, at 10:55, Lorenzo Stoakes wrote:
> On Fri, Oct 17, 2025 at 04:45:17PM +0200, David Hildenbrand wrote:
>> On 17.10.25 16:26, Zi Yan wrote:
>>> On 17 Oct 2025, at 5:44, Lorenzo Stoakes wrote:
>>>
>>>> On Thu, Oct 16, 2025 at 12:46:13AM +0000, Wei Yang wrote:
>>>>> Existing __split_unmapped_folio() code splits the given folio and updates
>>>>> stats, but it is complicated to understand.
>>>>>
>>>>> After simplification, __split_unmapped_folio() directly calculates and
>>>>> updates the folio statistics upon a successful split:
>>>>>
>>>>> * All resulting folios are @split_order.
>>>>>
>>>>> * The number of new folios is calculated directly from @old_order
>>>>> and @split_order.
>>>>>
>>>>> * The folio for the next split is identified as the one containing
>>>>> @split_at.
>>>>>
>>>>> * An xas_try_split() error is returned directly without worrying
>>>>> about stats updates.
>>>>
>>>> You seem to be doing two things at once, a big refactoring where you move stuff
>>>> about AND changing functionality.
>>>
>>> No functional change is done in this patchset. The wording might be
>>> confusing here, it should be read like:
>>>
>>> After simplification, __split_unmapped_folio() directly calculates and
>>> updates the folio statistics upon a successful split, so an xas_try_split()
>>> error is returned directly without worrying about stats updates.
>>>
>>> David suggested a change[1] to make it clear:
>>> Stats fixup is no longer needed for an xas_try_split() error,
>>> since we now update the stats only after a successful split.
>>>
>>>
>>> [1] https://lore.kernel.org/linux-mm/518dedb8-d379-47c3-a4c1-f4afc789f1b4@redhat.com/
>>>
>>>>
>>>> Can we split this out please? It makes review so much harder.
>>>
>>> I asked Wei to use a single patch for this change, since the original
>>> code was complicated due to the initial implementation. After my
>>> recent change (first commit 6c7de9c83)[1], __split_unmapped_folio()
>>> can be simplified like Wei did here.
>>
>> I think it was split too much, agreed.
>>
>> This patch was a bit hard for me to review, not sure if there would have
>> been a better way to split some stuff out more logically.
>>
>> Most changes are just done in a complicated way.
>>
>> Not sure if splitting out the stats update change or the calculation of the
>> number of folios could have been easily done.
>
> I mean as I said to Wei/Zi, my objection is both moving code around and
> making complicated changes that require you to really dig in to figure them
> out _at the same time_.
>
> It's perfectly feasible to send separate patches for non-functional
> changes, I've done that myself, each building on the next to make it clear
> what's going on.
>
> I'm a little surprised at the opposition to that...
>
> But if you're convinced there's no other way this could be done (bit
> surprising as you're asking 'why this change' several times here) I guess
> I'll sit down and try to decode it.
Let me explain the changes in detail below, so that we might find a
good way of splitting it up for an easy review:
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 4b2d5a7e5c8e..68e851f5fcb2 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3528,15 +3528,9 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> struct address_space *mapping, bool uniform_split)
> {
> bool is_anon = folio_test_anon(folio);
> - int order = folio_order(folio);
> - int start_order = uniform_split ? new_order : order - 1;
> - bool stop_split = false;
> - struct folio *next;
> + int old_order = folio_order(folio);
> + int start_order = uniform_split ? new_order : old_order - 1;
> int split_order;
> - int ret = 0;
These remove unused variables and do an order->old_order rename.
> -
> - if (is_anon)
> - mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
In the original __split_unmapped_folio() implementation, I probably
moved the stats in split_huge_page_to_list_to_order() (now __folio_split())[1]
into __split_unmapped_folio() and needed to handle xas_try_split() failures,
so I left mod_mthp_stat(order, ..., -1) here. It should have been put
together with mod_mthp_stat(new_order, ..., 1 << (order - new_order)).
[1] https://elixir.bootlin.com/linux/v6.13/source/mm/huge_memory.c#L3622
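To make the pairing concrete, here is a hand-written sketch with example
numbers (not the exact upstream code): uniformly splitting an order-9 anon
folio to order 0 should adjust both counters in one place, i.e.

        mod_mthp_stat(9, MTHP_STAT_NR_ANON, -1);      /* one order-9 folio goes away */
        mod_mthp_stat(0, MTHP_STAT_NR_ANON, 1 << 9);  /* 512 order-0 folios appear */

rather than charging the -1 upfront and repairing it if the split fails.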
>
> folio_clear_has_hwpoisoned(folio);
>
> @@ -3545,17 +3539,13 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> * folio is split to new_order directly.
> */
> for (split_order = start_order;
> - split_order >= new_order && !stop_split;
> + split_order >= new_order;
stop_split was used to handle xas_try_split() failure: because of the stats
separation I had in the initial code, I needed to add the -1 stats back. Now
with Wei's change, subtracting the old folio and incrementing the new folios
are done together, so stop_split is no longer needed.
> split_order--) {
> - struct folio *end_folio = folio_next(folio);
> - int old_order = folio_order(folio);
> - struct folio *new_folio;
> + int nr_new_folios = 1UL << (old_order - split_order);
stats for counting new after-split folios.
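As a worked example (the numbers are made up): one non-uniform step that
halves an old_order = 5 folio to split_order = 4 gives
nr_new_folios = 1UL << (5 - 4) = 2, while a uniform split from old_order = 9
straight to split_order = 0 gives 1UL << 9 = 512 new folios in a single pass.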
>
> /* order-1 anonymous folio is not supported */
> if (is_anon && split_order == 1)
> continue;
> - if (uniform_split && split_order != new_order)
> - continue;
split_order is always new_order when uniform_split is true, so it is useless
code.
>
> if (mapping) {
> /*
> @@ -3568,49 +3558,31 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> else {
> xas_set_order(xas, folio->index, split_order);
> xas_try_split(xas, folio, old_order);
> - if (xas_error(xas)) {
> - ret = xas_error(xas);
> - stop_split = true;
> - }
> + if (xas_error(xas))
> + return xas_error(xas);
xas_error() was not returned immediately because the -1 for the old folio
needed to be fixed up later. Now, since the stats are not changed upfront, it
can simply return.
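So each loop iteration now follows this ordering (a simplified sketch of the
control flow, not a verbatim copy of the function):

        xas_set_order(xas, folio->index, split_order);
        xas_try_split(xas, folio, old_order);
        if (xas_error(xas))
                return xas_error(xas);          /* no stats to unwind */

        __split_folio_to_order(folio, old_order, split_order);
        if (is_anon) {
                /* stats change only after the split is guaranteed */
                mod_mthp_stat(old_order, MTHP_STAT_NR_ANON, -1);
                mod_mthp_stat(split_order, MTHP_STAT_NR_ANON, nr_new_folios);
        }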
> }
> }
>
> - if (!stop_split) {
> - folio_split_memcg_refs(folio, old_order, split_order);
> - split_page_owner(&folio->page, old_order, split_order);
> - pgalloc_tag_split(folio, old_order, split_order);
> + folio_split_memcg_refs(folio, old_order, split_order);
> + split_page_owner(&folio->page, old_order, split_order);
> + pgalloc_tag_split(folio, old_order, split_order);
> + __split_folio_to_order(folio, old_order, split_order);
>
> - __split_folio_to_order(folio, old_order, split_order);
No more stop_split, so all these are unconditional.
> + if (is_anon) {
> + mod_mthp_stat(old_order, MTHP_STAT_NR_ANON, -1);
> + mod_mthp_stat(split_order, MTHP_STAT_NR_ANON, nr_new_folios);
> }
Update the stats when the actual split is done; much better than
the original approach, which assumed the split would happen and recovered
when it did not.
>
> /*
> - * Iterate through after-split folios and update folio stats.
> - * But in buddy allocator like split, the folio
> - * containing the specified page is skipped until its order
> - * is new_order, since the folio will be worked on in next
> - * iteration.
> + * For uniform split, we have finished the job.
> + * For non-uniform split, we assign folio to the one
> + * containing @split_at and assign @old_order to @split_order.
> */
> - for (new_folio = folio; new_folio != end_folio; new_folio = next) {
> - next = folio_next(new_folio);
> - /*
> - * for buddy allocator like split, new_folio containing
> - * @split_at page could be split again, thus do not
> - * change stats yet. Wait until new_folio's order is
> - * @new_order or stop_split is set to true by the above
> - * xas_split() failure.
> - */
> - if (new_folio == page_folio(split_at)) {
> - folio = new_folio;
> - if (split_order != new_order && !stop_split)
> - continue;
> - }
> - if (is_anon)
> - mod_mthp_stat(folio_order(new_folio),
> - MTHP_STAT_NR_ANON, 1);
> - }
This loop was needed to update after-split folios in the xarray and unfreeze
folios, but since the last refactor, neither is done here. And the stats
update can be done in one shot instead of the original one-by-one method. So
this hunk can be removed completely.
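The equivalence is easy to check with a sketch (reusing the names from the
diff): after a successful split every resulting folio has order @split_order,
so

        /* old: one increment per after-split folio */
        for (new_folio = folio; new_folio != end_folio;
             new_folio = folio_next(new_folio))
                mod_mthp_stat(folio_order(new_folio), MTHP_STAT_NR_ANON, 1);

        /* new: a single bulk increment */
        mod_mthp_stat(split_order, MTHP_STAT_NR_ANON,
                      1 << (old_order - split_order));

add up to the same total; the @split_at folio counted by the bulk increment
is decremented again by the next iteration's -1 before it is split further.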
> + folio = page_folio(split_at);
> + old_order = split_order;
These two are for non-uniform split, where the code moves on to split the
next folio after it splits the current one in half. And old_order needs
to be updated, since the folio has been split.
For uniform split, the for loop is done only once, so these two effectively
do nothing, since the for loop ends.
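A concrete, hand-made trace may help: non-uniformly splitting an order-3
(non-anon) folio down to new_order = 0, with @split_at in its first page,
walks

        split_order = 2: order-3 folio -> 2 order-2 halves, folio = the half with @split_at, old_order = 2
        split_order = 1: order-2 folio -> 2 order-1 halves, folio = the half with @split_at, old_order = 1
        split_order = 0: order-1 folio -> 2 order-0 folios, loop ends

whereas for uniform split start_order == new_order, so the loop body runs
exactly once.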
> }
>
> - return ret;
> + return 0;
xas_try_split() failure is handled above; the code should always succeed at
this point.
> }
>
> bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
--
Best Regards,
Yan, Zi
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Patch v2 1/2] mm/huge_memory: cache folio attribute in __split_unmapped_folio()
2025-10-17 9:46 ` Lorenzo Stoakes
@ 2025-10-19 7:51 ` Wei Yang
0 siblings, 0 replies; 26+ messages in thread
From: Wei Yang @ 2025-10-19 7:51 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Wei Yang, akpm, david, ziy, baolin.wang, Liam.Howlett, npache,
ryan.roberts, dev.jain, baohua, lance.yang, linux-mm
On Fri, Oct 17, 2025 at 10:46:43AM +0100, Lorenzo Stoakes wrote:
>I don't like 'cache' here. You're not caching anything, you're relying on an
>attribute remaining constant.
>
>Something like 'avoid reinvoking folio_test_anon()' would be simpler.
>
Will adjust it.
>On Thu, Oct 16, 2025 at 12:46:12AM +0000, Wei Yang wrote:
>> During the execution of __split_unmapped_folio(), the folio's anon/!anon
>> attribute is invariant (not expected to change).
>>
>> Therefore, it is safe and more efficient to retrieve this attribute once
>> at the start and reuse it throughout the function.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>> Reviewed-by: wang lian <lianux.mm@gmail.com>
>> ---
>> mm/huge_memory.c | 7 ++++---
>> 1 file changed, 4 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 3c74227cc847..4b2d5a7e5c8e 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3527,6 +3527,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>> struct page *split_at, struct xa_state *xas,
>> struct address_space *mapping, bool uniform_split)
>> {
>> + bool is_anon = folio_test_anon(folio);
>
>As David said, make this const please.
>
Will do.
>> int order = folio_order(folio);
>> int start_order = uniform_split ? new_order : order - 1;
>> bool stop_split = false;
>> @@ -3534,7 +3535,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>> int split_order;
>> int ret = 0;
>>
>> - if (folio_test_anon(folio))
>> + if (is_anon)
>> mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
>>
>> folio_clear_has_hwpoisoned(folio);
>> @@ -3551,7 +3552,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>> struct folio *new_folio;
>>
>> /* order-1 anonymous folio is not supported */
>> - if (folio_test_anon(folio) && split_order == 1)
>> + if (is_anon && split_order == 1)
>> continue;
>> if (uniform_split && split_order != new_order)
>> continue;
>> @@ -3603,7 +3604,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>> if (split_order != new_order && !stop_split)
>> continue;
>> }
>> - if (folio_test_anon(new_folio))
>> + if (is_anon)
>> mod_mthp_stat(folio_order(new_folio),
>> MTHP_STAT_NR_ANON, 1);
>> }
>> --
>> 2.34.1
>>
>>
--
Wei Yang
Help you, Help me
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic
2025-10-17 14:55 ` Lorenzo Stoakes
2025-10-17 17:24 ` Zi Yan
@ 2025-10-19 8:00 ` Wei Yang
2025-10-20 11:55 ` Lorenzo Stoakes
1 sibling, 1 reply; 26+ messages in thread
From: Wei Yang @ 2025-10-19 8:00 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: David Hildenbrand, Zi Yan, Wei Yang, akpm, baolin.wang,
Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang,
linux-mm
On Fri, Oct 17, 2025 at 03:55:16PM +0100, Lorenzo Stoakes wrote:
>On Fri, Oct 17, 2025 at 04:45:17PM +0200, David Hildenbrand wrote:
>> On 17.10.25 16:26, Zi Yan wrote:
>> > On 17 Oct 2025, at 5:44, Lorenzo Stoakes wrote:
>> >
>> > > On Thu, Oct 16, 2025 at 12:46:13AM +0000, Wei Yang wrote:
>> > > > Existing __split_unmapped_folio() code splits the given folio and updates
>> > > > stats, but it is complicated to understand.
>> > > >
>> > > > After simplification, __split_unmapped_folio() directly calculates and
>> > > > updates the folio statistics upon a successful split:
>> > > >
>> > > > * All resulting folios are @split_order.
>> > > >
>> > > > * The number of new folios is calculated directly from @old_order
>> > > > and @split_order.
>> > > >
>> > > > * The folio for the next split is identified as the one containing
>> > > > @split_at.
>> > > >
>> > > > * An xas_try_split() error is returned directly without worrying
>> > > > about stats updates.
>> > >
>> > > You seem to be doing two things at once, a big refactoring where you move stuff
>> > > about AND changing functionality.
>> >
>> > No functional change is done in this patchset. The wording might be
>> > confusing here, it should be read like:
>> >
>> > After simplification, __split_unmapped_folio() directly calculates and
>> > updates the folio statistics upon a successful split, so an xas_try_split()
>> > error is returned directly without worrying about stats updates.
>> >
>> > David suggested a change[1] to make it clear:
>> > Stats fixup is no longer needed for an xas_try_split() error,
>> > since we now update the stats only after a successful split.
>> >
>> >
>> > [1] https://lore.kernel.org/linux-mm/518dedb8-d379-47c3-a4c1-f4afc789f1b4@redhat.com/
>> >
>> > >
>> > > Can we split this out please? It makes review so much harder.
>> >
>> > I asked Wei to use a single patch for this change, since the original
>> > code was complicated due to the initial implementation. After my
>> > recent change (first commit 6c7de9c83)[1], __split_unmapped_folio()
>> > can be simplified like Wei did here.
>>
>> I think it was split too much, agreed.
>>
>> This patch was a bit hard for me to review, not sure if there would have
>> been a better way to split some stuff out more logically.
>>
>> Most changes are just done in a complicated way.
>>
>> Not sure if splitting out the stats update change or the calculation of the
>> number of folios could have been easily done.
>
>I mean as I said to Wei/Zi, my objection is both moving code around and
>making complicated changes that require you to really dig in to figure them
>out _at the same time_.
Is this version better for review?
http://lkml.kernel.org/r/20251014134606.22543-1-richard.weiyang@gmail.com
>
>It's perfectly feasible to send separate patches for non-functional
>changes, I've done that myself, each building on the next to make it clear
>what's going on.
>
>I'm a little surprised at the opposition to that...
>
>But if you're convinced there's no other way this could be done (bit
>surprising as you're asking 'why this change' several times here) I guess
>I'll sit down and try to decode it.
--
Wei Yang
Help you, Help me
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic
2025-10-19 8:00 ` Wei Yang
@ 2025-10-20 11:55 ` Lorenzo Stoakes
0 siblings, 0 replies; 26+ messages in thread
From: Lorenzo Stoakes @ 2025-10-20 11:55 UTC (permalink / raw)
To: Wei Yang
Cc: David Hildenbrand, Zi Yan, akpm, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm
On Sun, Oct 19, 2025 at 08:00:17AM +0000, Wei Yang wrote:
>
> Is this version better for review?
>
> http://lkml.kernel.org/r/20251014134606.22543-1-richard.weiyang@gmail.com
>
No, this was _too_ divided :) I will look at Zi's reply and dig in when I get a
chance.
Sorry for holding this up, but I think it's important to try to get this right!
Cheers, Lorenzo
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic
2025-10-17 17:24 ` Zi Yan
@ 2025-10-20 14:03 ` Lorenzo Stoakes
2025-10-20 14:28 ` Zi Yan
2025-10-21 0:30 ` Wei Yang
0 siblings, 2 replies; 26+ messages in thread
From: Lorenzo Stoakes @ 2025-10-20 14:03 UTC (permalink / raw)
To: Zi Yan
Cc: David Hildenbrand, Wei Yang, akpm, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm
On Fri, Oct 17, 2025 at 01:24:37PM -0400, Zi Yan wrote:
> Let me explain the changes in detail below, so that we might find a
> good way of splitting it up for an easy review:
Thanks, much appreciated!
Looking through it, it seems to me there should be 3 patches:
- Getting rid of the trailing loop
- Stats fixups
- Remaining trivial cleanups
Do you agree?
Each can have a commit message that describes in detail what's happening
and why it's ok.
This is also useful for bisection purposes.
>
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 4b2d5a7e5c8e..68e851f5fcb2 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -3528,15 +3528,9 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> > struct address_space *mapping, bool uniform_split)
> > {
> > bool is_anon = folio_test_anon(folio);
> > - int order = folio_order(folio);
> > - int start_order = uniform_split ? new_order : order - 1;
> > - bool stop_split = false;
> > - struct folio *next;
> > + int old_order = folio_order(folio);
> > + int start_order = uniform_split ? new_order : old_order - 1;
> > int split_order;
> > - int ret = 0;
>
> These remove unused variables and do an order->old_order rename.
>
> > -
> > - if (is_anon)
> > - mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
>
> In the original __split_unmapped_folio() implementation, I probably
> moved the stats in split_huge_page_to_list_to_order() (now __folio_split())[1]
> into __split_unmapped_folio() and needed to handle xas_try_split() failures,
> so I left mod_mthp_stat(order, ..., -1) here. It should have been put
> together with mod_mthp_stat(new_order, ..., 1 << (order - new_order)).
>
> [1] https://elixir.bootlin.com/linux/v6.13/source/mm/huge_memory.c#L3622
>
> >
> > folio_clear_has_hwpoisoned(folio);
> >
> > @@ -3545,17 +3539,13 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> > * folio is split to new_order directly.
> > */
> > for (split_order = start_order;
> > - split_order >= new_order && !stop_split;
> > + split_order >= new_order;
>
> stop_split was used to handle xas_try_split() failure: because of the stats
> separation I had in the initial code, I needed to add the -1 stats back. Now
> with Wei's change, subtracting the old folio and incrementing the new folios
> are done together, so stop_split is no longer needed.
>
> > split_order--) {
> > - struct folio *end_folio = folio_next(folio);
> > - int old_order = folio_order(folio);
> > - struct folio *new_folio;
> > + int nr_new_folios = 1UL << (old_order - split_order);
>
> stats for counting new after-split folios.
>
> >
> > /* order-1 anonymous folio is not supported */
> > if (is_anon && split_order == 1)
> > continue;
> > - if (uniform_split && split_order != new_order)
> > - continue;
>
> split_order is always new_order when uniform_split is true, so it is useless
> code.
>
> >
> > if (mapping) {
> > /*
> > @@ -3568,49 +3558,31 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> > else {
> > xas_set_order(xas, folio->index, split_order);
> > xas_try_split(xas, folio, old_order);
> > - if (xas_error(xas)) {
> > - ret = xas_error(xas);
> > - stop_split = true;
> > - }
> > + if (xas_error(xas))
> > + return xas_error(xas);
>
> xas_error() was not returned immediately because the -1 for the old folio
> needed to be fixed up later. Now, since the stats are not changed upfront, it
> can simply return.
>
>
> > }
> > }
> >
> > - if (!stop_split) {
> > - folio_split_memcg_refs(folio, old_order, split_order);
> > - split_page_owner(&folio->page, old_order, split_order);
> > - pgalloc_tag_split(folio, old_order, split_order);
> > + folio_split_memcg_refs(folio, old_order, split_order);
> > + split_page_owner(&folio->page, old_order, split_order);
> > + pgalloc_tag_split(folio, old_order, split_order);
> > + __split_folio_to_order(folio, old_order, split_order);
> >
> > - __split_folio_to_order(folio, old_order, split_order);
>
> No more stop_split, so all these are unconditional.
>
> > + if (is_anon) {
> > + mod_mthp_stat(old_order, MTHP_STAT_NR_ANON, -1);
> > + mod_mthp_stat(split_order, MTHP_STAT_NR_ANON, nr_new_folios);
> > }
>
> Update the stats when the actual split is done; much better than
> the original approach, which assumed the split would happen and recovered
> when it did not.
>
> >
> > /*
> > - * Iterate through after-split folios and update folio stats.
> > - * But in buddy allocator like split, the folio
> > - * containing the specified page is skipped until its order
> > - * is new_order, since the folio will be worked on in next
> > - * iteration.
> > + * For uniform split, we have finished the job.
> > + * For non-uniform split, we assign folio to the one
> > + * containing @split_at and assign @old_order to @split_order.
> > */
> > - for (new_folio = folio; new_folio != end_folio; new_folio = next) {
> > - next = folio_next(new_folio);
> > - /*
> > - * for buddy allocator like split, new_folio containing
> > - * @split_at page could be split again, thus do not
> > - * change stats yet. Wait until new_folio's order is
> > - * @new_order or stop_split is set to true by the above
> > - * xas_split() failure.
> > - */
> > - if (new_folio == page_folio(split_at)) {
> > - folio = new_folio;
> > - if (split_order != new_order && !stop_split)
> > - continue;
> > - }
> > - if (is_anon)
> > - mod_mthp_stat(folio_order(new_folio),
> > - MTHP_STAT_NR_ANON, 1);
> > - }
>
> This loop was needed to update after-split folios in the xarray and unfreeze
> folios, but since the last refactor, neither is done here. And the stats
> update can be done in one shot instead of the original one-by-one method. So
> this hunk can be removed completely.
Which refactor in particular? This is the bit that really needs separating as
it's the most confusing and begs the question 'why can we avoid doing this'?
>
> > + folio = page_folio(split_at);
> > + old_order = split_order;
>
> These two are for non-uniform split, where the code moves on to split the
> next folio after it splits the current one in half. And old_order needs
> to be updated, since the folio has been split.
>
> For uniform split, the for loop is done only once, so these two effectively
> do nothing, since the for loop ends.
>
> > }
> >
> > - return ret;
> > + return 0;
>
> xas_try_split() failure is handled above; the code should always succeed at
> this point.
>
> > }
> >
> > bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>
> --
> Best Regards,
> Yan, Zi
>
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic
2025-10-20 14:03 ` Lorenzo Stoakes
@ 2025-10-20 14:28 ` Zi Yan
2025-10-21 0:30 ` Wei Yang
1 sibling, 0 replies; 26+ messages in thread
From: Zi Yan @ 2025-10-20 14:28 UTC (permalink / raw)
To: Lorenzo Stoakes, Wei Yang
Cc: David Hildenbrand, akpm, baolin.wang, Liam.Howlett, npache,
ryan.roberts, dev.jain, baohua, lance.yang, linux-mm
On 20 Oct 2025, at 10:03, Lorenzo Stoakes wrote:
> On Fri, Oct 17, 2025 at 01:24:37PM -0400, Zi Yan wrote:
>> Let me explain the changes in detail below, so that we might find a
>> good way of splitting it up for an easy review:
>
> Thanks, much appreciated!
>
> Looking through it, it seems to me there should be 3 patches:
>
> - Getting rid of the trailing loop
> - Stats fixups
> - Remaining trivial cleanups
>
> Do you agree?
Sounds good to me. Thanks for taking a look. Let's see how Wei feels about this.
>
> Each can have a commit message that describes in detail what's happening
> and why it's ok.
>
> This is also useful for bisection purposes.
>
>>
>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>> index 4b2d5a7e5c8e..68e851f5fcb2 100644
>>> --- a/mm/huge_memory.c
>>> +++ b/mm/huge_memory.c
>>> @@ -3528,15 +3528,9 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>>> struct address_space *mapping, bool uniform_split)
>>> {
>>> bool is_anon = folio_test_anon(folio);
>>> - int order = folio_order(folio);
>>> - int start_order = uniform_split ? new_order : order - 1;
>>> - bool stop_split = false;
>>> - struct folio *next;
>>> + int old_order = folio_order(folio);
>>> + int start_order = uniform_split ? new_order : old_order - 1;
>>> int split_order;
>>> - int ret = 0;
>>
>> These remove unused variables and do an order->old_order rename.
>>
>>> -
>>> - if (is_anon)
>>> - mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
>>
>> In the original __split_unmapped_folio() implementation, I probably
>> moved the stats in split_huge_page_to_list_to_order() (now __folio_split())[1]
>> into __split_unmapped_folio() and needed to handle xas_try_split() failures,
>> so I left mod_mthp_stat(order, ..., -1) here. It should have been put
>> together with mod_mthp_stat(new_order, ..., 1 << (order - new_order)).
>>
>> [1] https://elixir.bootlin.com/linux/v6.13/source/mm/huge_memory.c#L3622
>>
>>>
>>> folio_clear_has_hwpoisoned(folio);
>>>
>>> @@ -3545,17 +3539,13 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>>> * folio is split to new_order directly.
>>> */
>>> for (split_order = start_order;
>>> - split_order >= new_order && !stop_split;
>>> + split_order >= new_order;
>>
>> stop_split was used to handle xas_try_split() failure: because of the stats
>> separation I had in the initial code, I needed to add the -1 stats back. Now
>> with Wei's change, subtracting the old folio and incrementing the new folios
>> are done together, so stop_split is no longer needed.
>>
>>> split_order--) {
>>> - struct folio *end_folio = folio_next(folio);
>>> - int old_order = folio_order(folio);
>>> - struct folio *new_folio;
>>> + int nr_new_folios = 1UL << (old_order - split_order);
>>
>> stats for counting new after-split folios.
>>
>>>
>>> /* order-1 anonymous folio is not supported */
>>> if (is_anon && split_order == 1)
>>> continue;
>>> - if (uniform_split && split_order != new_order)
>>> - continue;
>>
>> split_order is always new_order when uniform_split is true, so it is useless
>> code.
>>
>>>
>>> if (mapping) {
>>> /*
>>> @@ -3568,49 +3558,31 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>>> else {
>>> xas_set_order(xas, folio->index, split_order);
>>> xas_try_split(xas, folio, old_order);
>>> - if (xas_error(xas)) {
>>> - ret = xas_error(xas);
>>> - stop_split = true;
>>> - }
>>> + if (xas_error(xas))
>>> + return xas_error(xas);
>>
>> xas_error() was not returned immediately because the -1 for the old folio
>> needed to be fixed up later. Now, since the stats are not changed upfront, it
>> can simply return.
>>
>>
>>> }
>>> }
>>>
>>> - if (!stop_split) {
>>> - folio_split_memcg_refs(folio, old_order, split_order);
>>> - split_page_owner(&folio->page, old_order, split_order);
>>> - pgalloc_tag_split(folio, old_order, split_order);
>>> + folio_split_memcg_refs(folio, old_order, split_order);
>>> + split_page_owner(&folio->page, old_order, split_order);
>>> + pgalloc_tag_split(folio, old_order, split_order);
>>> + __split_folio_to_order(folio, old_order, split_order);
>>>
>>> - __split_folio_to_order(folio, old_order, split_order);
>>
>> No more stop_split, so all these are unconditional.
>>
>>> + if (is_anon) {
>>> + mod_mthp_stat(old_order, MTHP_STAT_NR_ANON, -1);
>>> + mod_mthp_stat(split_order, MTHP_STAT_NR_ANON, nr_new_folios);
>>> }
>>
>> Update the stats when the actual split is done; much better than
>> the original approach, which assumed the split would happen and recovered
>> when it did not.
>>
>>>
>>> /*
>>> - * Iterate through after-split folios and update folio stats.
>>> - * But in buddy allocator like split, the folio
>>> - * containing the specified page is skipped until its order
>>> - * is new_order, since the folio will be worked on in next
>>> - * iteration.
>>> + * For uniform split, we have finished the job.
>>> + * For non-uniform split, we assign folio to the one
>>> + * containing @split_at and assign @old_order to @split_order.
>>> */
>>> - for (new_folio = folio; new_folio != end_folio; new_folio = next) {
>>> - next = folio_next(new_folio);
>>> - /*
>>> - * for buddy allocator like split, new_folio containing
>>> - * @split_at page could be split again, thus do not
>>> - * change stats yet. Wait until new_folio's order is
>>> - * @new_order or stop_split is set to true by the above
>>> - * xas_split() failure.
>>> - */
>>> - if (new_folio == page_folio(split_at)) {
>>> - folio = new_folio;
>>> - if (split_order != new_order && !stop_split)
>>> - continue;
>>> - }
>>> - if (is_anon)
>>> - mod_mthp_stat(folio_order(new_folio),
>>> - MTHP_STAT_NR_ANON, 1);
>>> - }
>>
>> This loop was needed to update after-split folios in the xarray and unfreeze
>> folios, but since the last refactor, neither is done here. And the stats
>> update can be done in one shot instead of the original one-by-one method. So
>> this hunk can be removed completely.
>
> Which refactor in particular? This is the bit that really needs separating as
> it's the most confusing and begs the question 'why can we avoid doing this'?
>
Patch 1 (https://lore.kernel.org/linux-mm/20250718183720.4054515-2-ziy@nvidia.com/) in the __folio_split() cleanup series.
>>
>>> + folio = page_folio(split_at);
>>> + old_order = split_order;
>>
>> These two are for non-uniform split, where the code moves on to split the
>> next folio after it splits the current one in half. And old_order needs
>> to be updated, since the folio has been split.
>>
>> For uniform split, the for loop is done only once, so these two effectively
>> do nothing, since the for loop ends.
>>
>>> }
>>>
>>> - return ret;
>>> + return 0;
>>
>> xas_try_split() failure is handled above; the code should always succeed at
>> this point.
>>
>>> }
>>>
>>> bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>>
>> --
>> Best Regards,
>> Yan, Zi
>>
--
Best Regards,
Yan, Zi
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic
2025-10-20 14:03 ` Lorenzo Stoakes
2025-10-20 14:28 ` Zi Yan
@ 2025-10-21 0:30 ` Wei Yang
2025-10-21 9:17 ` Lorenzo Stoakes
1 sibling, 1 reply; 26+ messages in thread
From: Wei Yang @ 2025-10-21 0:30 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Zi Yan, David Hildenbrand, Wei Yang, akpm, baolin.wang,
Liam.Howlett, npache, ryan.roberts, dev.jain, baohua, lance.yang,
linux-mm
On Mon, Oct 20, 2025 at 03:03:06PM +0100, Lorenzo Stoakes wrote:
>On Fri, Oct 17, 2025 at 01:24:37PM -0400, Zi Yan wrote:
>> Let me explain the changes in detail below, so that we might find a
>> good way of splitting it up for an easy review:
>
>Thanks, much appreciated!
>
>Looking through it, it seems to me there should be 3 patches:
>
>- Getting rid of the trailing loop
>- Stats fixups
Hmm... I don't see how to arrange the changes in this order.
Removing the trailing loop before the stats fixup means we still do
mod_mthp_stat() in two places? And we keep stop_split and fix up the stats on
stop_split?
If this is what you mean, it looks like it complicates the change.
>- Remaining trivial cleanups
My suggestion is to switch 1 and 2:
- Stats fixups
- Getting rid of the trailing loop
- Remaining trivial cleanups
We first update the stats only on a successful split, so that we can get rid
of stop_split. Then we can get rid of the trailing loop in a clean way.
And this is my v1 with patches 4 & 5 merged.
>Do you agree?
>
>Each can have a commit message that describes in detail what's happening
>and why it's ok.
>
>This is also useful for bisection purposes.
>
--
Wei Yang
Help you, Help me
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic
2025-10-21 0:30 ` Wei Yang
@ 2025-10-21 9:17 ` Lorenzo Stoakes
0 siblings, 0 replies; 26+ messages in thread
From: Lorenzo Stoakes @ 2025-10-21 9:17 UTC (permalink / raw)
To: Wei Yang
Cc: Zi Yan, David Hildenbrand, akpm, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm
On Tue, Oct 21, 2025 at 12:30:23AM +0000, Wei Yang wrote:
> On Mon, Oct 20, 2025 at 03:03:06PM +0100, Lorenzo Stoakes wrote:
> >On Fri, Oct 17, 2025 at 01:24:37PM -0400, Zi Yan wrote:
> >> Let me explain the changes in detail below, so that we might find a
> >> good way of splitting it up for an easy review:
> >
> >Thanks, much appreciated!
> >
> >Looking through it, it seems to me there should be 3 patches:
> >
> >- Getting rid of the trailing loop
> >- Stats fixups
>
> Hmm... I don't see how to arrange the changes in this order.
>
> Removing the trailing loop before the stats fixup means we still do
> mod_mthp_stat() in two places? And we keep stop_split and fix up the stats on
> stop_split?
>
> If this is what you mean, it looks like it complicates the change.
>
> >- Remaining trivial cleanups
>
> My suggestion is to switch 1 and 2:
>
> - Stats fixups
> - Getting rid of the trailing loop
> - Remaining trivial cleanups
Yup this works too!
>
> We first update the stats only on a successful split, so that we can get rid
> of stop_split. Then we can get rid of the trailing loop in a clean way.
>
> And this is my v1 with patches 4 & 5 merged.
>
> >Do you agree?
> >
> >Each can have a commit message that describes in detail what's happening
> >and why it's ok.
> >
> >This is also useful for bisection purposes.
> >
>
> --
> Wei Yang
> Help you, Help me
>
Thanks, Lorenzo
^ permalink raw reply [flat|nested] 26+ messages in thread
end of thread, other threads:[~2025-10-21 9:17 UTC | newest]
Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-10-16 0:46 [Patch v2 0/2] mm/huge_memory: cleanup __split_unmapped_folio() Wei Yang
2025-10-16 0:46 ` [Patch v2 1/2] mm/huge_memory: cache folio attribute in __split_unmapped_folio() Wei Yang
2025-10-16 1:34 ` Barry Song
2025-10-16 20:01 ` David Hildenbrand
2025-10-17 9:46 ` Lorenzo Stoakes
2025-10-19 7:51 ` Wei Yang
2025-10-16 0:46 ` [Patch v2 2/2] mm/huge_memory: Optimize and simplify __split_unmapped_folio() logic Wei Yang
2025-10-16 1:25 ` wang lian
2025-10-16 20:10 ` David Hildenbrand
2025-10-16 20:22 ` Zi Yan
2025-10-16 20:55 ` David Hildenbrand
2025-10-16 20:56 ` Zi Yan
2025-10-17 0:55 ` Wei Yang
2025-10-17 9:44 ` Lorenzo Stoakes
2025-10-17 14:26 ` Zi Yan
2025-10-17 14:29 ` Zi Yan
2025-10-17 14:44 ` Lorenzo Stoakes
2025-10-17 14:45 ` David Hildenbrand
2025-10-17 14:55 ` Lorenzo Stoakes
2025-10-17 17:24 ` Zi Yan
2025-10-20 14:03 ` Lorenzo Stoakes
2025-10-20 14:28 ` Zi Yan
2025-10-21 0:30 ` Wei Yang
2025-10-21 9:17 ` Lorenzo Stoakes
2025-10-19 8:00 ` Wei Yang
2025-10-20 11:55 ` Lorenzo Stoakes