* [Patch v2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
@ 2025-11-05 7:25 Wei Yang
2025-11-05 8:05 ` David Hildenbrand (Red Hat)
2025-11-05 16:41 ` Zi Yan
From: Wei Yang @ 2025-11-05 7:25 UTC (permalink / raw)
To: akpm, david, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang
Cc: linux-mm, Wei Yang, David Hildenbrand (Red Hat)
The functions uniform_split_supported() and
non_uniform_split_supported() share largely identical logic.

The only functional difference is that uniform_split_supported()
includes an additional check on the requested @new_order.

This check exists for two reasons:

* some file systems and the swap cache only support order-0 folios
* the behavioral difference between uniform and non-uniform split

The behavioral difference between uniform and non-uniform split is:

* uniform split splits the folio directly to @new_order
* non-uniform split creates after-split folios with orders from
  folio_order(folio) - 1 down to @new_order.

This means that for a non-uniform split, or a uniform split to a
non-zero @new_order, we must check whether the file system and the
swap cache support the resulting folios.
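
As an illustrative aside, the following is a minimal user-space sketch
(not kernel code) of the after-split folios each split type produces,
assuming an order-9 folio and new_order < folio_order(folio):

#include <stdio.h>

static void show_uniform(unsigned int old_order, unsigned int new_order)
{
	/* uniform split: every after-split folio has order new_order */
	printf("uniform: %u folios of order %u\n",
	       1u << (old_order - new_order), new_order);
}

static void show_non_uniform(unsigned int old_order, unsigned int new_order)
{
	unsigned int order;

	/*
	 * non-uniform split: one after-split folio of each order from
	 * old_order - 1 down to new_order, plus the new_order folio
	 * holding the page of interest.
	 */
	for (order = old_order - 1; ; order--) {
		printf("non-uniform: one folio of order %u\n", order);
		if (order == new_order)
			break;
	}
	printf("non-uniform: plus the target folio of order %u\n", new_order);
}

int main(void)
{
	show_uniform(9, 0);	/* 512 order-0 folios */
	show_non_uniform(9, 0);	/* one folio each of order 8..0, plus one order-0 */
	return 0;
}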

This commit unifies the logic and merges the two functions into a
single combined helper, removing redundant code and simplifying the
split support check.
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
---
v2:
* remove need_check
* update comment
* add more explanation in change log
* selftests/split_huge_page_test pass
---
include/linux/huge_mm.h | 8 ++---
mm/huge_memory.c | 70 ++++++++++++++++++-----------------------
2 files changed, 33 insertions(+), 45 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index cbb2243f8e56..79343809a7be 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -369,10 +369,8 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
unsigned int new_order, bool unmapped);
int min_order_for_split(struct folio *folio);
int split_folio_to_list(struct folio *folio, struct list_head *list);
-bool uniform_split_supported(struct folio *folio, unsigned int new_order,
- bool warns);
-bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
- bool warns);
+bool folio_split_supported(struct folio *folio, unsigned int new_order,
+ bool uniform_split, bool warns);
int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
struct list_head *list);
@@ -403,7 +401,7 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
static inline int try_folio_split_to_order(struct folio *folio,
struct page *page, unsigned int new_order)
{
- if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
+ if (!folio_split_supported(folio, new_order, /* uniform_split = */ false, /* warns= */ false))
return split_huge_page_to_order(&folio->page, new_order);
return folio_split(folio, new_order, page, NULL);
}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 381a49c5ac3f..db442e0e3a46 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3666,55 +3666,49 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
return 0;
}
-bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
- bool warns)
+bool folio_split_supported(struct folio *folio, unsigned int new_order,
+ bool uniform_split, bool warns)
{
if (folio_test_anon(folio)) {
/* order-1 is not supported for anonymous THP. */
VM_WARN_ONCE(warns && new_order == 1,
"Cannot split to order-1 folio");
return new_order != 1;
- } else if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
- !mapping_large_folio_support(folio->mapping)) {
- /*
- * No split if the file system does not support large folio.
- * Note that we might still have THPs in such mappings due to
- * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping
- * does not actually support large folios properly.
- */
- VM_WARN_ONCE(warns,
- "Cannot split file folio to non-0 order");
- return false;
- }
-
- /* Only swapping a whole PMD-mapped folio is supported */
- if (folio_test_swapcache(folio)) {
- VM_WARN_ONCE(warns,
- "Cannot split swapcache folio to non-0 order");
- return false;
- }
-
- return true;
-}
-
-/* See comments in non_uniform_split_supported() */
-bool uniform_split_supported(struct folio *folio, unsigned int new_order,
- bool warns)
-{
- if (folio_test_anon(folio)) {
- VM_WARN_ONCE(warns && new_order == 1,
- "Cannot split to order-1 folio");
- return new_order != 1;
- } else if (new_order) {
+ } else if (!uniform_split || new_order) {
if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
!mapping_large_folio_support(folio->mapping)) {
+ /*
+ * We can always split a folio down to a single page
+ * (new_order == 0) uniformly.
+ *
+ * For any other scenario
+ * a) uniform split targeting a large folio
+ * (new_order > 0)
+ * b) any non-uniform split
+ * we must confirm that the file system supports large
+ * folios.
+ *
+ * Note that we might still have THPs in such
+ * mappings, which is created from khugepaged when
+ * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
+ * case, the mapping does not actually support large
+ * folios properly.
+ */
VM_WARN_ONCE(warns,
"Cannot split file folio to non-0 order");
return false;
}
}
- if (new_order && folio_test_swapcache(folio)) {
+ /*
+ * swapcache folio could only be split to order 0
+ *
+ * non-uniform split creates after-split folios with orders from
+ * folio_order(folio) - 1 to new_order, making it not suitable for any
+ * swapcache folio split. Only uniform split to order-0 can be used
+ * here.
+ */
+ if ((!uniform_split || new_order) && folio_test_swapcache(folio)) {
VM_WARN_ONCE(warns,
"Cannot split swapcache folio to non-0 order");
return false;
@@ -3772,11 +3766,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
if (new_order >= old_order)
return -EINVAL;
- if (uniform_split && !uniform_split_supported(folio, new_order, true))
- return -EINVAL;
-
- if (!uniform_split &&
- !non_uniform_split_supported(folio, new_order, true))
+ if (!folio_split_supported(folio, new_order, uniform_split, /* warn = */ true))
return -EINVAL;
is_hzp = is_huge_zero_folio(folio);
--
2.34.1
* Re: [Patch v2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
2025-11-05 7:25 [Patch v2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() Wei Yang
@ 2025-11-05 8:05 ` David Hildenbrand (Red Hat)
2025-11-05 9:15 ` Wei Yang
2025-11-05 16:28 ` Zi Yan
2025-11-05 16:41 ` Zi Yan
From: David Hildenbrand (Red Hat) @ 2025-11-05 8:05 UTC (permalink / raw)
To: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang
Cc: linux-mm
On 05.11.25 08:25, Wei Yang wrote:
> The functions uniform_split_supported() and
> non_uniform_split_supported() share significantly similar logic.
>
> The only functional difference is that uniform_split_supported()
> includes an additional check on the requested @new_order.
>
> The reason for this check comes from the following two aspects:
>
> * some file system or swap cache just supports order-0 folio
> * the behavioral difference between uniform/non-uniform split
>
> The behavioral difference between uniform split and non-uniform:
>
> * uniform split splits folio directly to @new_order
> * non-uniform split creates after-split folios with orders from
> folio_order(folio) - 1 to new_order.
>
> This means for non-uniform split or !new_order split we should check the
> file system and swap cache respectively.
>
> This commit unifies the logic and merge the two functions into a single
> combined helper, removing redundant code and simplifying the split
> support checking mechanism.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
>
> ---
> v2:
> * remove need_check
> * update comment
> * add more explanation in change log
> * selftests/split_huge_page_test pass
> ---
> include/linux/huge_mm.h | 8 ++---
> mm/huge_memory.c | 70 ++++++++++++++++++-----------------------
> 2 files changed, 33 insertions(+), 45 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index cbb2243f8e56..79343809a7be 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -369,10 +369,8 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
> unsigned int new_order, bool unmapped);
> int min_order_for_split(struct folio *folio);
> int split_folio_to_list(struct folio *folio, struct list_head *list);
> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
> - bool warns);
> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
> - bool warns);
> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
> + bool uniform_split, bool warns);
> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
> struct list_head *list);
>
> @@ -403,7 +401,7 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
> static inline int try_folio_split_to_order(struct folio *folio,
> struct page *page, unsigned int new_order)
> {
> - if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
> + if (!folio_split_supported(folio, new_order, /* uniform_split = */ false, /* warns= */ false))
> return split_huge_page_to_order(&folio->page, new_order);
> return folio_split(folio, new_order, page, NULL);
> }
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 381a49c5ac3f..db442e0e3a46 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3666,55 +3666,49 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> return 0;
> }
>
> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
> - bool warns)
> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
> + bool uniform_split, bool warns)
> {
> if (folio_test_anon(folio)) {
> /* order-1 is not supported for anonymous THP. */
> VM_WARN_ONCE(warns && new_order == 1,
> "Cannot split to order-1 folio");
> return new_order != 1;
> - } else if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
> - !mapping_large_folio_support(folio->mapping)) {
> - /*
> - * No split if the file system does not support large folio.
> - * Note that we might still have THPs in such mappings due to
> - * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping
> - * does not actually support large folios properly.
> - */
> - VM_WARN_ONCE(warns,
> - "Cannot split file folio to non-0 order");
> - return false;
> - }
> -
> - /* Only swapping a whole PMD-mapped folio is supported */
> - if (folio_test_swapcache(folio)) {
> - VM_WARN_ONCE(warns,
> - "Cannot split swapcache folio to non-0 order");
> - return false;
> - }
> -
> - return true;
> -}
> -
> -/* See comments in non_uniform_split_supported() */
> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
> - bool warns)
> -{
> - if (folio_test_anon(folio)) {
> - VM_WARN_ONCE(warns && new_order == 1,
> - "Cannot split to order-1 folio");
> - return new_order != 1;
> - } else if (new_order) {
> + } else if (!uniform_split || new_order) {
> if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
> !mapping_large_folio_support(folio->mapping)) {
> + /*
> + * We can always split a folio down to a single page
> + * (new_order == 0) uniformly.
> + *
> + * For any other scenario
> + * a) uniform split targeting a large folio
> + * (new_order > 0)
> + * b) any non-uniform split
> + * we must confirm that the file system supports large
> + * folios.
> + *
> + * Note that we might still have THPs in such
> + * mappings, which is created from khugepaged when
> + * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
> + * case, the mapping does not actually support large
> + * folios properly.
> + */
> VM_WARN_ONCE(warns,
> "Cannot split file folio to non-0 order");
> return false;
> }
> }
>
> - if (new_order && folio_test_swapcache(folio)) {
> + /*
> + * swapcache folio could only be split to order 0
> + *
> + * non-uniform split creates after-split folios with orders from
> + * folio_order(folio) - 1 to new_order, making it not suitable for any
> + * swapcache folio split. Only uniform split to order-0 can be used
> + * here.
> + */
> + if ((!uniform_split || new_order) && folio_test_swapcache(folio)) {
Staring at the existing code, how would we reach the
folio_test_swapcache() test for anon folios?
At the beginning of the function we have
if (folio_test_anon()) {
...
return new_order != 1;
}
Aren't we missing a check for anon folios that are in the swapcache?
--
Cheers
David
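
A minimal sketch of the kind of restructuring this question points at:
let anon folios fall through to the swapcache check instead of
returning early. This is purely illustrative and not the actual fix,
which was posted separately.

bool folio_split_supported(struct folio *folio, unsigned int new_order,
			   bool uniform_split, bool warns)
{
	if (folio_test_anon(folio)) {
		/* order-1 is not supported for anonymous THP. */
		VM_WARN_ONCE(warns && new_order == 1,
			     "Cannot split to order-1 folio");
		if (new_order == 1)
			return false;
		/* fall through: anon folios may also sit in the swapcache */
	} else if (!uniform_split || new_order) {
		if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
		    !mapping_large_folio_support(folio->mapping)) {
			VM_WARN_ONCE(warns,
				     "Cannot split file folio to non-0 order");
			return false;
		}
	}

	/* now reached for anon folios in the swapcache as well */
	if ((!uniform_split || new_order) && folio_test_swapcache(folio)) {
		VM_WARN_ONCE(warns,
			     "Cannot split swapcache folio to non-0 order");
		return false;
	}

	return true;
}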
* Re: [Patch v2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
2025-11-05 8:05 ` David Hildenbrand (Red Hat)
@ 2025-11-05 9:15 ` Wei Yang
2025-11-05 16:28 ` Zi Yan
From: Wei Yang @ 2025-11-05 9:15 UTC (permalink / raw)
To: David Hildenbrand (Red Hat)
Cc: Wei Yang, akpm, lorenzo.stoakes, ziy, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm
On Wed, Nov 05, 2025 at 09:05:31AM +0100, David Hildenbrand (Red Hat) wrote:
>On 05.11.25 08:25, Wei Yang wrote:
>> The functions uniform_split_supported() and
>> non_uniform_split_supported() share significantly similar logic.
>>
>> The only functional difference is that uniform_split_supported()
>> includes an additional check on the requested @new_order.
>>
>> The reason for this check comes from the following two aspects:
>>
>> * some file system or swap cache just supports order-0 folio
>> * the behavioral difference between uniform/non-uniform split
>>
>> The behavioral difference between uniform split and non-uniform:
>>
>> * uniform split splits folio directly to @new_order
>> * non-uniform split creates after-split folios with orders from
>> folio_order(folio) - 1 to new_order.
>>
>> This means for non-uniform split or !new_order split we should check the
>> file system and swap cache respectively.
>>
>> This commit unifies the logic and merge the two functions into a single
>> combined helper, removing redundant code and simplifying the split
>> support checking mechanism.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
>>
>> ---
>> v2:
>> * remove need_check
>> * update comment
>> * add more explanation in change log
>> * selftests/split_huge_page_test pass
>> ---
>> include/linux/huge_mm.h | 8 ++---
>> mm/huge_memory.c | 70 ++++++++++++++++++-----------------------
>> 2 files changed, 33 insertions(+), 45 deletions(-)
>>
>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>> index cbb2243f8e56..79343809a7be 100644
>> --- a/include/linux/huge_mm.h
>> +++ b/include/linux/huge_mm.h
>> @@ -369,10 +369,8 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
>> unsigned int new_order, bool unmapped);
>> int min_order_for_split(struct folio *folio);
>> int split_folio_to_list(struct folio *folio, struct list_head *list);
>> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>> - bool warns);
>> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>> - bool warns);
>> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
>> + bool uniform_split, bool warns);
>> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
>> struct list_head *list);
>> @@ -403,7 +401,7 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
>> static inline int try_folio_split_to_order(struct folio *folio,
>> struct page *page, unsigned int new_order)
>> {
>> - if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
>> + if (!folio_split_supported(folio, new_order, /* uniform_split = */ false, /* warns= */ false))
>> return split_huge_page_to_order(&folio->page, new_order);
>> return folio_split(folio, new_order, page, NULL);
>> }
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 381a49c5ac3f..db442e0e3a46 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3666,55 +3666,49 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>> return 0;
>> }
>> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>> - bool warns)
>> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
>> + bool uniform_split, bool warns)
>> {
>> if (folio_test_anon(folio)) {
>> /* order-1 is not supported for anonymous THP. */
>> VM_WARN_ONCE(warns && new_order == 1,
>> "Cannot split to order-1 folio");
>> return new_order != 1;
>> - } else if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>> - !mapping_large_folio_support(folio->mapping)) {
>> - /*
>> - * No split if the file system does not support large folio.
>> - * Note that we might still have THPs in such mappings due to
>> - * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping
>> - * does not actually support large folios properly.
>> - */
>> - VM_WARN_ONCE(warns,
>> - "Cannot split file folio to non-0 order");
>> - return false;
>> - }
>> -
>> - /* Only swapping a whole PMD-mapped folio is supported */
>> - if (folio_test_swapcache(folio)) {
>> - VM_WARN_ONCE(warns,
>> - "Cannot split swapcache folio to non-0 order");
>> - return false;
>> - }
>> -
>> - return true;
>> -}
>> -
>> -/* See comments in non_uniform_split_supported() */
>> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>> - bool warns)
>> -{
>> - if (folio_test_anon(folio)) {
>> - VM_WARN_ONCE(warns && new_order == 1,
>> - "Cannot split to order-1 folio");
>> - return new_order != 1;
>> - } else if (new_order) {
>> + } else if (!uniform_split || new_order) {
>> if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>> !mapping_large_folio_support(folio->mapping)) {
>> + /*
>> + * We can always split a folio down to a single page
>> + * (new_order == 0) uniformly.
>> + *
>> + * For any other scenario
>> + * a) uniform split targeting a large folio
>> + * (new_order > 0)
>> + * b) any non-uniform split
>> + * we must confirm that the file system supports large
>> + * folios.
>> + *
>> + * Note that we might still have THPs in such
>> + * mappings, which is created from khugepaged when
>> + * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
>> + * case, the mapping does not actually support large
>> + * folios properly.
>> + */
>> VM_WARN_ONCE(warns,
>> "Cannot split file folio to non-0 order");
>> return false;
>> }
>> }
>> - if (new_order && folio_test_swapcache(folio)) {
>> + /*
>> + * swapcache folio could only be split to order 0
>> + *
>> + * non-uniform split creates after-split folios with orders from
>> + * folio_order(folio) - 1 to new_order, making it not suitable for any
>> + * swapcache folio split. Only uniform split to order-0 can be used
>> + * here.
>> + */
>> + if ((!uniform_split || new_order) && folio_test_swapcache(folio)) {
>
>Staring at the existing code, how would we reach the folio_test_swapcache()
>test for anon folios?
>
>At the beginning of the function we have
>
>if (folio_test_anon()) {
> ...
> return new_order != 1;
>}
>
>Aren't we missing a check for anon folios that are in the swapcache?
>
Hmm... I hadn't noticed this.
I need to do some homework on it.
>--
>Cheers
>
>David
--
Wei Yang
Help you, Help me
* Re: [Patch v2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
2025-11-05 8:05 ` David Hildenbrand (Red Hat)
2025-11-05 9:15 ` Wei Yang
@ 2025-11-05 16:28 ` Zi Yan
From: Zi Yan @ 2025-11-05 16:28 UTC (permalink / raw)
To: David Hildenbrand (Red Hat)
Cc: Wei Yang, akpm, lorenzo.stoakes, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm
On 5 Nov 2025, at 3:05, David Hildenbrand (Red Hat) wrote:
> On 05.11.25 08:25, Wei Yang wrote:
>> The functions uniform_split_supported() and
>> non_uniform_split_supported() share significantly similar logic.
>>
>> The only functional difference is that uniform_split_supported()
>> includes an additional check on the requested @new_order.
>>
>> The reason for this check comes from the following two aspects:
>>
>> * some file system or swap cache just supports order-0 folio
>> * the behavioral difference between uniform/non-uniform split
>>
>> The behavioral difference between uniform split and non-uniform:
>>
>> * uniform split splits folio directly to @new_order
>> * non-uniform split creates after-split folios with orders from
>> folio_order(folio) - 1 to new_order.
>>
>> This means for non-uniform split or !new_order split we should check the
>> file system and swap cache respectively.
>>
>> This commit unifies the logic and merge the two functions into a single
>> combined helper, removing redundant code and simplifying the split
>> support checking mechanism.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
>>
>> ---
>> v2:
>> * remove need_check
>> * update comment
>> * add more explanation in change log
>> * selftests/split_huge_page_test pass
>> ---
>> include/linux/huge_mm.h | 8 ++---
>> mm/huge_memory.c | 70 ++++++++++++++++++-----------------------
>> 2 files changed, 33 insertions(+), 45 deletions(-)
>>
>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>> index cbb2243f8e56..79343809a7be 100644
>> --- a/include/linux/huge_mm.h
>> +++ b/include/linux/huge_mm.h
>> @@ -369,10 +369,8 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
>> unsigned int new_order, bool unmapped);
>> int min_order_for_split(struct folio *folio);
>> int split_folio_to_list(struct folio *folio, struct list_head *list);
>> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>> - bool warns);
>> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>> - bool warns);
>> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
>> + bool uniform_split, bool warns);
>> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
>> struct list_head *list);
>> @@ -403,7 +401,7 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
>> static inline int try_folio_split_to_order(struct folio *folio,
>> struct page *page, unsigned int new_order)
>> {
>> - if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
>> + if (!folio_split_supported(folio, new_order, /* uniform_split = */ false, /* warns= */ false))
>> return split_huge_page_to_order(&folio->page, new_order);
>> return folio_split(folio, new_order, page, NULL);
>> }
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 381a49c5ac3f..db442e0e3a46 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3666,55 +3666,49 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>> return 0;
>> }
>> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>> - bool warns)
>> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
>> + bool uniform_split, bool warns)
>> {
>> if (folio_test_anon(folio)) {
>> /* order-1 is not supported for anonymous THP. */
>> VM_WARN_ONCE(warns && new_order == 1,
>> "Cannot split to order-1 folio");
>> return new_order != 1;
>> - } else if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>> - !mapping_large_folio_support(folio->mapping)) {
>> - /*
>> - * No split if the file system does not support large folio.
>> - * Note that we might still have THPs in such mappings due to
>> - * CONFIG_READ_ONLY_THP_FOR_FS. But in that case, the mapping
>> - * does not actually support large folios properly.
>> - */
>> - VM_WARN_ONCE(warns,
>> - "Cannot split file folio to non-0 order");
>> - return false;
>> - }
>> -
>> - /* Only swapping a whole PMD-mapped folio is supported */
>> - if (folio_test_swapcache(folio)) {
>> - VM_WARN_ONCE(warns,
>> - "Cannot split swapcache folio to non-0 order");
>> - return false;
>> - }
>> -
>> - return true;
>> -}
>> -
>> -/* See comments in non_uniform_split_supported() */
>> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>> - bool warns)
>> -{
>> - if (folio_test_anon(folio)) {
>> - VM_WARN_ONCE(warns && new_order == 1,
>> - "Cannot split to order-1 folio");
>> - return new_order != 1;
>> - } else if (new_order) {
>> + } else if (!uniform_split || new_order) {
>> if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>> !mapping_large_folio_support(folio->mapping)) {
>> + /*
>> + * We can always split a folio down to a single page
>> + * (new_order == 0) uniformly.
>> + *
>> + * For any other scenario
>> + * a) uniform split targeting a large folio
>> + * (new_order > 0)
>> + * b) any non-uniform split
>> + * we must confirm that the file system supports large
>> + * folios.
>> + *
>> + * Note that we might still have THPs in such
>> + * mappings, which is created from khugepaged when
>> + * CONFIG_READ_ONLY_THP_FOR_FS is enabled. But in that
>> + * case, the mapping does not actually support large
>> + * folios properly.
>> + */
>> VM_WARN_ONCE(warns,
>> "Cannot split file folio to non-0 order");
>> return false;
>> }
>> }
>> - if (new_order && folio_test_swapcache(folio)) {
>> + /*
>> + * swapcache folio could only be split to order 0
>> + *
>> + * non-uniform split creates after-split folios with orders from
>> + * folio_order(folio) - 1 to new_order, making it not suitable for any
>> + * swapcache folio split. Only uniform split to order-0 can be used
>> + * here.
>> + */
>> + if ((!uniform_split || new_order) && folio_test_swapcache(folio)) {
>
> Staring at the existing code, how would we reach the folio_test_swapcache() test for anon folios?
>
> At the beginning of the function we have
>
> if (folio_test_anon()) {
> ...
> return new_order != 1;
> }
>
> Aren't we missing a check for anon folios that are in the swapcache?
Yes, sending a patch to fix this.
Best Regards,
Yan, Zi
* Re: [Patch v2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
2025-11-05 7:25 [Patch v2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported() Wei Yang
2025-11-05 8:05 ` David Hildenbrand (Red Hat)
@ 2025-11-05 16:41 ` Zi Yan
2025-11-06 1:46 ` Wei Yang
From: Zi Yan @ 2025-11-05 16:41 UTC (permalink / raw)
To: Wei Yang
Cc: akpm, david, lorenzo.stoakes, baolin.wang, Liam.Howlett, npache,
ryan.roberts, dev.jain, baohua, lance.yang, linux-mm,
David Hildenbrand (Red Hat)
On 5 Nov 2025, at 2:25, Wei Yang wrote:
> The functions uniform_split_supported() and
> non_uniform_split_supported() share significantly similar logic.
>
> The only functional difference is that uniform_split_supported()
> includes an additional check on the requested @new_order.
>
> The reason for this check comes from the following two aspects:
>
> * some file system or swap cache just supports order-0 folio
> * the behavioral difference between uniform/non-uniform split
>
> The behavioral difference between uniform split and non-uniform:
>
> * uniform split splits folio directly to @new_order
> * non-uniform split creates after-split folios with orders from
> folio_order(folio) - 1 to new_order.
>
> This means for non-uniform split or !new_order split we should check the
> file system and swap cache respectively.
>
> This commit unifies the logic and merge the two functions into a single
> combined helper, removing redundant code and simplifying the split
> support checking mechanism.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
>
> ---
> v2:
> * remove need_check
> * update comment
> * add more explanation in change log
> * selftests/split_huge_page_test pass
> ---
> include/linux/huge_mm.h | 8 ++---
> mm/huge_memory.c | 70 ++++++++++++++++++-----------------------
> 2 files changed, 33 insertions(+), 45 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index cbb2243f8e56..79343809a7be 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -369,10 +369,8 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
> unsigned int new_order, bool unmapped);
> int min_order_for_split(struct folio *folio);
> int split_folio_to_list(struct folio *folio, struct list_head *list);
> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
> - bool warns);
> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
> - bool warns);
> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
> + bool uniform_split, bool warns);
> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
> struct list_head *list);
>
> @@ -403,7 +401,7 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
> static inline int try_folio_split_to_order(struct folio *folio,
> struct page *page, unsigned int new_order)
> {
> - if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
> + if (!folio_split_supported(folio, new_order, /* uniform_split = */ false, /* warns= */ false))
> return split_huge_page_to_order(&folio->page, new_order);
> return folio_split(folio, new_order, page, NULL);
> }
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 381a49c5ac3f..db442e0e3a46 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3666,55 +3666,49 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> return 0;
> }
>
> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
> - bool warns)
> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
> + bool uniform_split, bool warns)
For this one, David suggested using an enum
enum split_type {
SPLIT_TYPE_UNIFORM,
SPLIT_TYPE_NON_UNIFORM,
};
in a separate cleanup patch. It is better to send it along with this
one, so that:
1. it will be easy for reviewers to keep track of both changes,
2. it also helps resolve the dependency issue, where the cleanup patch
goes after this one.
Thanks.
Best Regards,
Yan, Zi
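
A rough sketch of how the helper and one caller might look with that
enum; the naming and placement are illustrative only, pending the
actual cleanup patch:

enum split_type {
	SPLIT_TYPE_UNIFORM,
	SPLIT_TYPE_NON_UNIFORM,
};

bool folio_split_supported(struct folio *folio, unsigned int new_order,
			   enum split_type split_type, bool warns);

static inline int try_folio_split_to_order(struct folio *folio,
		struct page *page, unsigned int new_order)
{
	if (!folio_split_supported(folio, new_order, SPLIT_TYPE_NON_UNIFORM,
				   /* warns = */ false))
		return split_huge_page_to_order(&folio->page, new_order);
	return folio_split(folio, new_order, page, NULL);
}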
* Re: [Patch v2] mm/huge_memory: merge uniform_split_supported() and non_uniform_split_supported()
2025-11-05 16:41 ` Zi Yan
@ 2025-11-06 1:46 ` Wei Yang
From: Wei Yang @ 2025-11-06 1:46 UTC (permalink / raw)
To: Zi Yan
Cc: Wei Yang, akpm, david, lorenzo.stoakes, baolin.wang, Liam.Howlett,
npache, ryan.roberts, dev.jain, baohua, lance.yang, linux-mm,
David Hildenbrand (Red Hat)
On Wed, Nov 05, 2025 at 11:41:35AM -0500, Zi Yan wrote:
>On 5 Nov 2025, at 2:25, Wei Yang wrote:
>
>> The functions uniform_split_supported() and
>> non_uniform_split_supported() share significantly similar logic.
>>
>> The only functional difference is that uniform_split_supported()
>> includes an additional check on the requested @new_order.
>>
>> The reason for this check comes from the following two aspects:
>>
>> * some file system or swap cache just supports order-0 folio
>> * the behavioral difference between uniform/non-uniform split
>>
>> The behavioral difference between uniform split and non-uniform:
>>
>> * uniform split splits folio directly to @new_order
>> * non-uniform split creates after-split folios with orders from
>> folio_order(folio) - 1 to new_order.
>>
>> This means for non-uniform split or !new_order split we should check the
>> file system and swap cache respectively.
>>
>> This commit unifies the logic and merge the two functions into a single
>> combined helper, removing redundant code and simplifying the split
>> support checking mechanism.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: "David Hildenbrand (Red Hat)" <david@kernel.org>
>>
>> ---
>> v2:
>> * remove need_check
>> * update comment
>> * add more explanation in change log
>> * selftests/split_huge_page_test pass
>> ---
>> include/linux/huge_mm.h | 8 ++---
>> mm/huge_memory.c | 70 ++++++++++++++++++-----------------------
>> 2 files changed, 33 insertions(+), 45 deletions(-)
>>
>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>> index cbb2243f8e56..79343809a7be 100644
>> --- a/include/linux/huge_mm.h
>> +++ b/include/linux/huge_mm.h
>> @@ -369,10 +369,8 @@ int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list
>> unsigned int new_order, bool unmapped);
>> int min_order_for_split(struct folio *folio);
>> int split_folio_to_list(struct folio *folio, struct list_head *list);
>> -bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>> - bool warns);
>> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>> - bool warns);
>> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
>> + bool uniform_split, bool warns);
>> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
>> struct list_head *list);
>>
>> @@ -403,7 +401,7 @@ static inline int split_huge_page_to_order(struct page *page, unsigned int new_o
>> static inline int try_folio_split_to_order(struct folio *folio,
>> struct page *page, unsigned int new_order)
>> {
>> - if (!non_uniform_split_supported(folio, new_order, /* warns= */ false))
>> + if (!folio_split_supported(folio, new_order, /* uniform_split = */ false, /* warns= */ false))
>> return split_huge_page_to_order(&folio->page, new_order);
>> return folio_split(folio, new_order, page, NULL);
>> }
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 381a49c5ac3f..db442e0e3a46 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3666,55 +3666,49 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>> return 0;
>> }
>>
>> -bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>> - bool warns)
>> +bool folio_split_supported(struct folio *folio, unsigned int new_order,
>> + bool uniform_split, bool warns)
>
>For this one, David suggested to use a enum
>
>enum split_type {
> SPLIT_TYPE_UNIFORM,
> SPLIT_TYPE_NON_UNIFORM,
>};
>
>in a separate cleanup patch. It is better to send it along with this
>one, so that:
>
>1. it will be easy for reviewers to keep track of both changes,
>2. it also helps resolve the dependency issue, where the cleanup patch
> goes after this one.
>
>Thanks.
Sure, will add it.
>
>
>Best Regards,
>Yan, Zi
--
Wei Yang
Help you, Help me