* Re: [PATCH] replace free hugepage folios after migration
2024-12-18 6:33 [PATCH] replace free hugepage folios after migration yangge1116
@ 2024-12-19 16:40 ` David Hildenbrand
2024-12-20 8:56 ` Ge Yang
2024-12-19 18:43 ` SeongJae Park
2024-12-21 14:35 ` David Hildenbrand
2 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand @ 2024-12-19 16:40 UTC (permalink / raw)
To: yangge1116, akpm
Cc: linux-mm, linux-kernel, stable, 21cnbao, baolin.wang, muchun.song,
liuzixing, Oscar Salvador
On 18.12.24 07:33, yangge1116@126.com wrote:
> From: yangge <yangge1116@126.com>
CCing Oscar, who worked on migrating these pages during memory offlining
and alloc_contig_range().
>
> My machine has 4 NUMA nodes, each equipped with 32GB of memory. I
> have configured each NUMA node with 16GB of CMA and 16GB of in-use
> hugetlb pages. The allocation of contiguous memory via the
> cma_alloc() function can fail probabilistically.
>
> The cma_alloc() function may fail if it sees an in-use hugetlb page
> within the allocation range, even if that page has already been
> migrated. When in-use hugetlb pages are migrated, they may simply
> be released back into the free hugepage pool instead of being
> returned to the buddy system. This can cause the
> test_pages_isolated() function check to fail, ultimately leading
> to the failure of the cma_alloc() function:
> cma_alloc()
> __alloc_contig_migrate_range() // migrate in-use hugepage
> test_pages_isolated()
> __test_page_isolated_in_pageblock()
> PageBuddy(page) // check if the page is in buddy
I thought this would be working as expected, at least we tested it with
alloc_contig_range / virtio-mem a while ago.
On the memory_offlining path, we migrate hugetlb folios, but also
dissolve any remaining free folios even if it means that we will go
below the requested number of hugetlb pages in our pool.
During alloc_contig_range(), we only migrate them, to then free them up
after migration.
Under which circumstances does it apply that "they may simply be
released back into the free hugepage pool instead of being returned to
the buddy system"?
>
> To address this issue, we will add a function named
> replace_free_hugepage_folios(). This function will replace the
> hugepage in the free hugepage pool with a new one and release the
> old one to the buddy system. After the migration of in-use hugetlb
> pages is completed, we will invoke the replace_free_hugepage_folios()
> function to ensure that these hugepages are properly released to
> the buddy system. Following this step, when the test_pages_isolated()
> function is executed for inspection, it will successfully pass.
>
> Signed-off-by: yangge <yangge1116@126.com>
> ---
> include/linux/hugetlb.h | 6 ++++++
> mm/hugetlb.c | 37 +++++++++++++++++++++++++++++++++++++
> mm/page_alloc.c | 13 ++++++++++++-
> 3 files changed, 55 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index ae4fe86..7d36ac8 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -681,6 +681,7 @@ struct huge_bootmem_page {
> };
>
> int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list);
> +int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn);
> struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
> unsigned long addr, int avoid_reserve);
> struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
> @@ -1059,6 +1060,11 @@ static inline int isolate_or_dissolve_huge_page(struct page *page,
> return -ENOMEM;
> }
>
> +int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn)
> +{
> + return 0;
> +}
> +
> static inline struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
> unsigned long addr,
> int avoid_reserve)
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 8e1db80..a099c54 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2975,6 +2975,43 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
> return ret;
> }
>
> +/*
> + * replace_free_hugepage_folios - Replace free hugepage folios in a given pfn
> + * range with new folios.
> + * @start_pfn: start pfn of the given pfn range
> + * @end_pfn: end pfn of the given pfn range
> + * Returns 0 on success, otherwise negated error.
> + */
> +int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn)
> +{
> + struct hstate *h;
> + struct folio *folio;
> + int ret = 0;
> +
> + LIST_HEAD(isolate_list);
> +
> + while (start_pfn < end_pfn) {
> + folio = pfn_folio(start_pfn);
> + if (folio_test_hugetlb(folio)) {
> + h = folio_hstate(folio);
> + } else {
> + start_pfn++;
> + continue;
> + }
> +
> + if (!folio_ref_count(folio)) {
> + ret = alloc_and_dissolve_hugetlb_folio(h, folio, &isolate_list);
> + if (ret)
> + break;
> +
> + putback_movable_pages(&isolate_list);
> + }
> + start_pfn++;
> + }
> +
> + return ret;
> +}
> +
> struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
> unsigned long addr, int avoid_reserve)
> {
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index dde19db..1dcea28 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -6504,7 +6504,18 @@ int alloc_contig_range_noprof(unsigned long start, unsigned long end,
> ret = __alloc_contig_migrate_range(&cc, start, end, migratetype);
> if (ret && ret != -EBUSY)
> goto done;
> - ret = 0;
> +
> + /*
> + * When in-use hugetlb pages are migrated, they may simply be
> + * released back into the free hugepage pool instead of being
> + * returned to the buddy system. After the migration of in-use
> + * huge pages is completed, we will invoke the
> + * replace_free_hugepage_folios() function to ensure that
> + * these hugepages are properly released to the buddy system.
> + */
> + ret = replace_free_hugepage_folios(start, end);
> + if (ret)
> + goto done;
>
> /*
> * Pages from [start, end) are within a pageblock_nr_pages
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH] replace free hugepage folios after migration
2024-12-19 16:40 ` David Hildenbrand
@ 2024-12-20 8:56 ` Ge Yang
2024-12-20 16:30 ` David Hildenbrand
0 siblings, 1 reply; 13+ messages in thread
From: Ge Yang @ 2024-12-20 8:56 UTC (permalink / raw)
To: David Hildenbrand, akpm
Cc: linux-mm, linux-kernel, stable, 21cnbao, baolin.wang, muchun.song,
liuzixing, Oscar Salvador
On 2024/12/20 0:40, David Hildenbrand wrote:
> On 18.12.24 07:33, yangge1116@126.com wrote:
>> From: yangge <yangge1116@126.com>
>
> CCing Oscar, who worked on migrating these pages during memory offlining
> and alloc_contig_range().
>
>>
>> My machine has 4 NUMA nodes, each equipped with 32GB of memory. I
>> have configured each NUMA node with 16GB of CMA and 16GB of in-use
>> hugetlb pages. The allocation of contiguous memory via the
>> cma_alloc() function can fail probabilistically.
>>
>> The cma_alloc() function may fail if it sees an in-use hugetlb page
>> within the allocation range, even if that page has already been
>> migrated. When in-use hugetlb pages are migrated, they may simply
>> be released back into the free hugepage pool instead of being
>> returned to the buddy system. This can cause the
>> test_pages_isolated() function check to fail, ultimately leading
>> to the failure of the cma_alloc() function:
>> cma_alloc()
>> __alloc_contig_migrate_range() // migrate in-use hugepage
>> test_pages_isolated()
>> __test_page_isolated_in_pageblock()
>> PageBuddy(page) // check if the page is in buddy
>
> I thought this would be working as expected, at least we tested it with
> alloc_contig_range / virtio-mem a while ago.
>
> On the memory_offlining path, we migrate hugetlb folios, but also
> dissolve any remaining free folios even if it means that we will go
> below the requested number of hugetlb pages in our pool.
>
> During alloc_contig_range(), we only migrate them, to then free them up
> after migration.
>
> Under which circumstances does it apply that "they may simply be
> released back into the free hugepage pool instead of being returned to
> the buddy system"?
>
After migration, in-use hugetlb pages are only released back to the
hugetlb pool and are not returned to the buddy system.
The specific steps for reproduction are as follows:
1. Reserve hugetlb pages. Some of these hugetlb pages are allocated
within the CMA area.
echo 10240 > /proc/sys/vm/nr_hugepages
2. To ensure that hugetlb pages are in an in-use state, we can use the
following command.
qemu-system-x86_64 \
-mem-prealloc \
-mem-path /dev/hugepage/ \
...
3. At this point, using cma_alloc() to allocate contiguous memory is
likely to fail.
>>
>> To address this issue, we will add a function named
>> replace_free_hugepage_folios(). This function will replace the
>> hugepage in the free hugepage pool with a new one and release the
>> old one to the buddy system. After the migration of in-use hugetlb
>> pages is completed, we will invoke the replace_free_hugepage_folios()
>> function to ensure that these hugepages are properly released to
>> the buddy system. Following this step, when the test_pages_isolated()
>> function is executed for inspection, it will successfully pass.
>>
>> Signed-off-by: yangge <yangge1116@126.com>
>> ---
>> include/linux/hugetlb.h | 6 ++++++
>> mm/hugetlb.c | 37 +++++++++++++++++++++++++++++++++++++
>> mm/page_alloc.c | 13 ++++++++++++-
>> 3 files changed, 55 insertions(+), 1 deletion(-)
>>
>> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
>> index ae4fe86..7d36ac8 100644
>> --- a/include/linux/hugetlb.h
>> +++ b/include/linux/hugetlb.h
>> @@ -681,6 +681,7 @@ struct huge_bootmem_page {
>> };
>> int isolate_or_dissolve_huge_page(struct page *page, struct
>> list_head *list);
>> +int replace_free_hugepage_folios(unsigned long start_pfn, unsigned
>> long end_pfn);
>> struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
>> unsigned long addr, int avoid_reserve);
>> struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int
>> preferred_nid,
>> @@ -1059,6 +1060,11 @@ static inline int
>> isolate_or_dissolve_huge_page(struct page *page,
>> return -ENOMEM;
>> }
>> +int replace_free_hugepage_folios(unsigned long start_pfn, unsigned
>> long end_pfn)
>> +{
>> + return 0;
>> +}
>> +
>> static inline struct folio *alloc_hugetlb_folio(struct
>> vm_area_struct *vma,
>> unsigned long addr,
>> int avoid_reserve)
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 8e1db80..a099c54 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -2975,6 +2975,43 @@ int isolate_or_dissolve_huge_page(struct page
>> *page, struct list_head *list)
>> return ret;
>> }
>> +/*
>> + * replace_free_hugepage_folios - Replace free hugepage folios in a
>> given pfn
>> + * range with new folios.
>> + * @start_pfn: start pfn of the given pfn range
>> + * @end_pfn: end pfn of the given pfn range
>> + * Returns 0 on success, otherwise negated error.
>> + */
>> +int replace_free_hugepage_folios(unsigned long start_pfn, unsigned
>> long end_pfn)
>> +{
>> + struct hstate *h;
>> + struct folio *folio;
>> + int ret = 0;
>> +
>> + LIST_HEAD(isolate_list);
>> +
>> + while (start_pfn < end_pfn) {
>> + folio = pfn_folio(start_pfn);
>> + if (folio_test_hugetlb(folio)) {
>> + h = folio_hstate(folio);
>> + } else {
>> + start_pfn++;
>> + continue;
>> + }
>> +
>> + if (!folio_ref_count(folio)) {
>> + ret = alloc_and_dissolve_hugetlb_folio(h, folio,
>> &isolate_list);
>> + if (ret)
>> + break;
>> +
>> + putback_movable_pages(&isolate_list);
>> + }
>> + start_pfn++;
>> + }
>> +
>> + return ret;
>> +}
>> +
>> struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
>> unsigned long addr, int avoid_reserve)
>> {
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index dde19db..1dcea28 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -6504,7 +6504,18 @@ int alloc_contig_range_noprof(unsigned long
>> start, unsigned long end,
>> ret = __alloc_contig_migrate_range(&cc, start, end, migratetype);
>> if (ret && ret != -EBUSY)
>> goto done;
>> - ret = 0;
>> +
>> + /*
>> + * When in-use hugetlb pages are migrated, they may simply be
>> + * released back into the free hugepage pool instead of being
>> + * returned to the buddy system. After the migration of in-use
>> + * huge pages is completed, we will invoke the
>> + * replace_free_hugepage_folios() function to ensure that
>> + * these hugepages are properly released to the buddy system.
>> + */
>> + ret = replace_free_hugepage_folios(start, end);
>> + if (ret)
>> + goto done;
>> /*
>> * Pages from [start, end) are within a pageblock_nr_pages
>
>
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH] replace free hugepage folios after migration
2024-12-20 8:56 ` Ge Yang
@ 2024-12-20 16:30 ` David Hildenbrand
2024-12-21 12:04 ` Ge Yang
0 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand @ 2024-12-20 16:30 UTC (permalink / raw)
To: Ge Yang, akpm
Cc: linux-mm, linux-kernel, stable, 21cnbao, baolin.wang, muchun.song,
liuzixing, Oscar Salvador, Michal Hocko
On 20.12.24 09:56, Ge Yang wrote:
>
>
> On 2024/12/20 0:40, David Hildenbrand wrote:
>> On 18.12.24 07:33, yangge1116@126.com wrote:
>>> From: yangge <yangge1116@126.com>
>>
>> CCing Oscar, who worked on migrating these pages during memory offlining
>> and alloc_contig_range().
>>
>>>
>>> My machine has 4 NUMA nodes, each equipped with 32GB of memory. I
>>> have configured each NUMA node with 16GB of CMA and 16GB of in-use
>>> hugetlb pages. The allocation of contiguous memory via the
>>> cma_alloc() function can fail probabilistically.
>>>
>>> The cma_alloc() function may fail if it sees an in-use hugetlb page
>>> within the allocation range, even if that page has already been
>>> migrated. When in-use hugetlb pages are migrated, they may simply
>>> be released back into the free hugepage pool instead of being
>>> returned to the buddy system. This can cause the
>>> test_pages_isolated() function check to fail, ultimately leading
>>> to the failure of the cma_alloc() function:
>>> cma_alloc()
>>> __alloc_contig_migrate_range() // migrate in-use hugepage
>>> test_pages_isolated()
>>> __test_page_isolated_in_pageblock()
>>> PageBuddy(page) // check if the page is in buddy
>>
>> I thought this would be working as expected, at least we tested it with
>> alloc_contig_range / virtio-mem a while ago.
>>
>> On the memory_offlining path, we migrate hugetlb folios, but also
>> dissolve any remaining free folios even if it means that we will go
>> below the requested number of hugetlb pages in our pool.
>>
>> During alloc_contig_range(), we only migrate them, to then free them up
>> after migration.
>>
>> Under which circumstances does it apply that "they may simply be
>> released back into the free hugepage pool instead of being returned to
>> the buddy system"?
>>
>
> After migration, in-use hugetlb pages are only released back to the
> hugetlb pool and are not returned to the buddy system.
We had
commit ae37c7ff79f1f030e28ec76c46ee032f8fd07607
Author: Oscar Salvador <osalvador@suse.de>
Date: Tue May 4 18:35:29 2021 -0700
mm: make alloc_contig_range handle in-use hugetlb pages
alloc_contig_range() will fail if it finds a HugeTLB page within the
range, without a chance to handle them. Since HugeTLB pages can be
migrated as any LRU or Movable page, it does not make sense to bail out
without trying. Enable the interface to recognize in-use HugeTLB pages so
we can migrate them, and have much better chances to succeed the call.
And I am trying to figure out if it never worked correctly, or if
something changed that broke it.
In start_isolate_page_range()->isolate_migratepages_block(), we do the
ret = isolate_or_dissolve_huge_page(page, &cc->migratepages);
to add these folios to the cc->migratepages list.
In __alloc_contig_migrate_range(), we migrate the pages using migrate_pages().
After that, the src hugetlb folios should still be isolated? But I'm getting
confused when these pages get un-isolated and put back to hugetlb/freed.
>
> The specific steps for reproduction are as follows:
> 1,Reserve hugetlb pages. Some of these hugetlb pages are allocated
> within the CMA area.
> echo 10240 > /proc/sys/vm/nr_hugepages
>
> 2,To ensure that hugetlb pages are in an in-use state, we can use the
> following command.
> qemu-system-x86_64 \
> -mem-prealloc \
> -mem-path /dev/hugepage/ \
> ...
>
> 3,At this point, using cma_alloc() to allocate contiguous memory may
> result in a probable failure.
>
Will these free hugetlb folios become surplus pages? I would have assumed
they get freed immediately to the buddy, or does your config maybe allow for
surplus pages?
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH] replace free hugepage folios after migration
2024-12-20 16:30 ` David Hildenbrand
@ 2024-12-21 12:04 ` Ge Yang
2024-12-21 14:32 ` David Hildenbrand
0 siblings, 1 reply; 13+ messages in thread
From: Ge Yang @ 2024-12-21 12:04 UTC (permalink / raw)
To: David Hildenbrand, akpm
Cc: linux-mm, linux-kernel, stable, 21cnbao, baolin.wang, muchun.song,
liuzixing, Oscar Salvador, Michal Hocko
On 2024/12/21 0:30, David Hildenbrand wrote:
> On 20.12.24 09:56, Ge Yang wrote:
>>
>>
>> On 2024/12/20 0:40, David Hildenbrand wrote:
>>> On 18.12.24 07:33, yangge1116@126.com wrote:
>>>> From: yangge <yangge1116@126.com>
>>>
>>> CCing Oscar, who worked on migrating these pages during memory offlining
>>> and alloc_contig_range().
>>>
>>>>
>>>> My machine has 4 NUMA nodes, each equipped with 32GB of memory. I
>>>> have configured each NUMA node with 16GB of CMA and 16GB of in-use
>>>> hugetlb pages. The allocation of contiguous memory via the
>>>> cma_alloc() function can fail probabilistically.
>>>>
>>>> The cma_alloc() function may fail if it sees an in-use hugetlb page
>>>> within the allocation range, even if that page has already been
>>>> migrated. When in-use hugetlb pages are migrated, they may simply
>>>> be released back into the free hugepage pool instead of being
>>>> returned to the buddy system. This can cause the
>>>> test_pages_isolated() function check to fail, ultimately leading
>>>> to the failure of the cma_alloc() function:
>>>> cma_alloc()
>>>> __alloc_contig_migrate_range() // migrate in-use hugepage
>>>> test_pages_isolated()
>>>> __test_page_isolated_in_pageblock()
>>>> PageBuddy(page) // check if the page is in buddy
>>>
>>> I thought this would be working as expected, at least we tested it with
>>> alloc_contig_range / virtio-mem a while ago.
>>>
>>> On the memory_offlining path, we migrate hugetlb folios, but also
>>> dissolve any remaining free folios even if it means that we will go
>>> below the requested number of hugetlb pages in our pool.
>>>
>>> During alloc_contig_range(), we only migrate them, to then free them up
>>> after migration.
>>>
>>> Under which circumstances does it apply that "they may simply be
>>> released back into the free hugepage pool instead of being returned to
>>> the buddy system"?
>>>
>>
>> After migration, in-use hugetlb pages are only released back to the
>> hugetlb pool and are not returned to the buddy system.
>
> We had
>
> commit ae37c7ff79f1f030e28ec76c46ee032f8fd07607
> Author: Oscar Salvador <osalvador@suse.de>
> Date: Tue May 4 18:35:29 2021 -0700
>
> mm: make alloc_contig_range handle in-use hugetlb pages
> alloc_contig_range() will fail if it finds a HugeTLB page within the
> range, without a chance to handle them. Since HugeTLB pages can be
> migrated as any LRU or Movable page, it does not make sense to bail
> out
> without trying. Enable the interface to recognize in-use HugeTLB
> pages so
> we can migrate them, and have much better chances to succeed the call.
>
>
> And I am trying to figure out if it never worked correctly, or if
> something changed that broke it.
>
>
> In start_isolate_page_range()->isolate_migratepages_block(), we do the
>
> ret = isolate_or_dissolve_huge_page(page, &cc->migratepages);
>
> to add these folios to the cc->migratepages list.
>
> In __alloc_contig_migrate_range(), we migrate the pages using
> migrate_pages().
>
>
> After that, the src hugetlb folios should still be isolated?
Yes.
But I'm
> getting
> confused when these pages get un-isolated and put back to hugetlb/freed.
>
If the migration is successful, call folio_putback_active_hugetlb to
release the src hugetlb folios back to the free hugetlb pool.
trace:
unmap_and_move_huge_page
folio_putback_active_hugetlb
folio_put
free_huge_folio
alloc_contig_range_noprof
__alloc_contig_migrate_range
if (test_pages_isolated()) //to determine if hugetlb pages in buddy
isolate_freepages_range //grab isolated pages from freelists.
else
undo_isolate_page_range //undo isolate
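
For reference, the check that then fails looks roughly like the
following (simplified sketch based on __test_page_isolated_in_pageblock()
in mm/page_isolation.c; exact flags and details differ between kernel
versions). A hugetlb folio that was freed back into the pool is not
PageBuddy, so the walk stops early and test_pages_isolated() reports
the range as not isolated:

static unsigned long
__test_page_isolated_in_pageblock(unsigned long pfn, unsigned long end_pfn,
				  int flags)
{
	struct page *page;

	while (pfn < end_pfn) {
		page = pfn_to_page(pfn);
		if (PageBuddy(page))
			/* free buddy pages are fine, skip the whole buddy block */
			pfn += 1 << buddy_order(page);
		else if ((flags & MEMORY_OFFLINE) && PageHWPoison(page))
			pfn++;
		else
			break;	/* e.g. a free hugetlb folio: not PageBuddy */
	}
	return pfn;
}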
>
>>
>> The specific steps for reproduction are as follows:
>> 1,Reserve hugetlb pages. Some of these hugetlb pages are allocated
>> within the CMA area.
>> echo 10240 > /proc/sys/vm/nr_hugepages
>>
>> 2,To ensure that hugetlb pages are in an in-use state, we can use the
>> following command.
>> qemu-system-x86_64 \
>> -mem-prealloc \
>> -mem-path /dev/hugepage/ \
>> ...
>>
>> 3,At this point, using cma_alloc() to allocate contiguous memory may
>> result in a probable failure.
>>
>
> Will these free hugetlb folios become surplus pages? I would have assumed
> they get freed immediately to the buddy, or does your config maybe allow for
> surplus pages?
>
These freed hugetlb folios will not become surplus pages. I have not
configured the system to allow for the existence of surplus pages.
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH] replace free hugepage folios after migration
2024-12-21 12:04 ` Ge Yang
@ 2024-12-21 14:32 ` David Hildenbrand
2024-12-22 11:50 ` Ge Yang
0 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand @ 2024-12-21 14:32 UTC (permalink / raw)
To: Ge Yang, akpm
Cc: linux-mm, linux-kernel, stable, 21cnbao, baolin.wang, muchun.song,
liuzixing, Oscar Salvador, Michal Hocko
On 21.12.24 13:04, Ge Yang wrote:
>
>
> On 2024/12/21 0:30, David Hildenbrand wrote:
>> On 20.12.24 09:56, Ge Yang wrote:
>>>
>>>
>>> On 2024/12/20 0:40, David Hildenbrand wrote:
>>>> On 18.12.24 07:33, yangge1116@126.com wrote:
>>>>> From: yangge <yangge1116@126.com>
>>>>
>>>> CCing Oscar, who worked on migrating these pages during memory offlining
>>>> and alloc_contig_range().
>>>>
>>>>>
>>>>> My machine has 4 NUMA nodes, each equipped with 32GB of memory. I
>>>>> have configured each NUMA node with 16GB of CMA and 16GB of in-use
>>>>> hugetlb pages. The allocation of contiguous memory via the
>>>>> cma_alloc() function can fail probabilistically.
>>>>>
>>>>> The cma_alloc() function may fail if it sees an in-use hugetlb page
>>>>> within the allocation range, even if that page has already been
>>>>> migrated. When in-use hugetlb pages are migrated, they may simply
>>>>> be released back into the free hugepage pool instead of being
>>>>> returned to the buddy system. This can cause the
>>>>> test_pages_isolated() function check to fail, ultimately leading
>>>>> to the failure of the cma_alloc() function:
>>>>> cma_alloc()
>>>>> __alloc_contig_migrate_range() // migrate in-use hugepage
>>>>> test_pages_isolated()
>>>>> __test_page_isolated_in_pageblock()
>>>>> PageBuddy(page) // check if the page is in buddy
>>>>
>>>> I thought this would be working as expected, at least we tested it with
>>>> alloc_contig_range / virtio-mem a while ago.
>>>>
>>>> On the memory_offlining path, we migrate hugetlb folios, but also
>>>> dissolve any remaining free folios even if it means that we will go
>>>> below the requested number of hugetlb pages in our pool.
>>>>
>>>> During alloc_contig_range(), we only migrate them, to then free them up
>>>> after migration.
>>>>
>>>> Under which circumstances does it apply that "they may simply be
>>>> released back into the free hugepage pool instead of being returned to
>>>> the buddy system"?
>>>>
>>>
>>> After migration, in-use hugetlb pages are only released back to the
>>> hugetlb pool and are not returned to the buddy system.
>>
>> We had
>>
>> commit ae37c7ff79f1f030e28ec76c46ee032f8fd07607
>> Author: Oscar Salvador <osalvador@suse.de>
>> Date: Tue May 4 18:35:29 2021 -0700
>>
>> mm: make alloc_contig_range handle in-use hugetlb pages
>> alloc_contig_range() will fail if it finds a HugeTLB page within the
>> range, without a chance to handle them. Since HugeTLB pages can be
>> migrated as any LRU or Movable page, it does not make sense to bail
>> out
>> without trying. Enable the interface to recognize in-use HugeTLB
>> pages so
>> we can migrate them, and have much better chances to succeed the call.
>>
>>
>> And I am trying to figure out if it never worked correctly, or if
>> something changed that broke it.
>>
>>
>> In start_isolate_page_range()->isolate_migratepages_block(), we do the
>>
>> ret = isolate_or_dissolve_huge_page(page, &cc->migratepages);
>>
>> to add these folios to the cc->migratepages list.
>>
>> In __alloc_contig_migrate_range(), we migrate the pages using
>> migrate_pages().
>>
>>
>> After that, the src hugetlb folios should still be isolated?
> Yes.
>
> But I'm
>> getting
>> confused when these pages get un-isolated and put back to hugetlb/freed.
>>
> If the migration is successful, call folio_putback_active_hugetlb to
> release the src hugetlb folios back to the free hugetlb pool.
>
> trace:
> unmap_and_move_huge_page
> folio_putback_active_hugetlb
> folio_put
> free_huge_folio
>
> alloc_contig_range_noprof
> __alloc_contig_migrate_range
> if (test_pages_isolated()) //to determine if hugetlb pages in buddy
> isolate_freepages_range //grab isolated pages from freelists.
> else
> undo_isolate_page_range //undo isolate
Ah, now I remember, thanks.
So when we free an ordinary page, we put it onto the buddy isolate list,
from where we can grab it later and nobody can allocate it in the meantime.
In case of hugetlb, we simply free it back to hugetlb, from where it can
likely even get allocated immediately again.
I think that can actually happen in your proposal: the now-free page
will get reallocated, for example for migrating the next folio. Or some
concurrent system activity can simply allocate the now-free folio. Or am
I missing something that prevents these now-free hugetlb folios from
getting re-allocated after migration succeeded?
Conceptually, I think we would want migration code in the case of
alloc_contig_range() to allocate a new folio from the buddy, and to free
the old one back to the buddy immediately, without ever allowing
re-allocation of it.
What needs to be handled is detecting that
(a) we want to allocate a fresh hugetlb folio as migration target
(b) if migration succeeds, we have to free the hugetlb folio back to the
buddy
(c) if migration fails, we have to free the allocated hugetlb folio back
to the buddy
We could provide a custom alloc_migration_target that we pass to
migrate_pages() to allocate a fresh hugetlb folio to handle (a). Using the
put_new_folio callback we could handle (c). (b) would need some thought.
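
A rough sketch of (a) and (c), assuming the new_folio_t / free_folio_t
callbacks that migrate_pages() takes today, plus a hypothetical helper
hugetlb_alloc_fresh_for_contig() that would always allocate the target
folio from the buddy instead of dequeueing it from the free pool (that
helper does not exist yet); (b) is left open, as said:

/* (a): always allocate a fresh hugetlb folio from the buddy as target */
static struct folio *contig_get_new_folio(struct folio *src,
					   unsigned long private)
{
	struct migration_target_control *mtc = (void *)private;

	if (folio_test_hugetlb(src)) {
		struct hstate *h = folio_hstate(src);

		/* hypothetical: fresh folio from the buddy, bypassing the pool */
		return hugetlb_alloc_fresh_for_contig(h, mtc->gfp_mask, mtc->nid);
	}
	return alloc_migration_target(src, private);
}

/* (c): if migration fails, drop the unused target again */
static void contig_put_new_folio(struct folio *dst, unsigned long private)
{
	/*
	 * Assumes the fresh folio was marked (e.g. "temporary") so that
	 * freeing it returns it to the buddy, not into the hugetlb pool.
	 */
	folio_put(dst);
}

/* in __alloc_contig_migrate_range(), instead of alloc_migration_target: */
ret = migrate_pages(&cc->migratepages, contig_get_new_folio,
		    contig_put_new_folio, (unsigned long)&mtc,
		    cc->mode, MR_CONTIG_RANGE, NULL);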
Maybe we can also just mark the source folio as we isolate it, and
enlighten migration+freeing code to handle it automatically?
Hoping to get some feedback from hugetlb maintainers.
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH] replace free hugepage folios after migration
2024-12-21 14:32 ` David Hildenbrand
@ 2024-12-22 11:50 ` Ge Yang
0 siblings, 0 replies; 13+ messages in thread
From: Ge Yang @ 2024-12-22 11:50 UTC (permalink / raw)
To: David Hildenbrand, akpm
Cc: linux-mm, linux-kernel, stable, 21cnbao, baolin.wang, muchun.song,
liuzixing, Oscar Salvador, Michal Hocko
On 2024/12/21 22:32, David Hildenbrand wrote:
> On 21.12.24 13:04, Ge Yang wrote:
>>
>>
> On 2024/12/21 0:30, David Hildenbrand wrote:
>>> On 20.12.24 09:56, Ge Yang wrote:
>>>>
>>>>
>>>> On 2024/12/20 0:40, David Hildenbrand wrote:
>>>>> On 18.12.24 07:33, yangge1116@126.com wrote:
>>>>>> From: yangge <yangge1116@126.com>
>>>>>
>>>>> CCing Oscar, who worked on migrating these pages during memory
>>>>> offlining
>>>>> and alloc_contig_range().
>>>>>
>>>>>>
>>>>>> My machine has 4 NUMA nodes, each equipped with 32GB of memory. I
>>>>>> have configured each NUMA node with 16GB of CMA and 16GB of in-use
>>>>>> hugetlb pages. The allocation of contiguous memory via the
>>>>>> cma_alloc() function can fail probabilistically.
>>>>>>
>>>>>> The cma_alloc() function may fail if it sees an in-use hugetlb page
>>>>>> within the allocation range, even if that page has already been
>>>>>> migrated. When in-use hugetlb pages are migrated, they may simply
>>>>>> be released back into the free hugepage pool instead of being
>>>>>> returned to the buddy system. This can cause the
>>>>>> test_pages_isolated() function check to fail, ultimately leading
>>>>>> to the failure of the cma_alloc() function:
>>>>>> cma_alloc()
>>>>>> __alloc_contig_migrate_range() // migrate in-use hugepage
>>>>>> test_pages_isolated()
>>>>>> __test_page_isolated_in_pageblock()
>>>>>> PageBuddy(page) // check if the page is in buddy
>>>>>
>>>>> I thought this would be working as expected, at least we tested it
>>>>> with
>>>>> alloc_contig_range / virtio-mem a while ago.
>>>>>
>>>>> On the memory_offlining path, we migrate hugetlb folios, but also
>>>>> dissolve any remaining free folios even if it means that we will go
>>>>> below the requested number of hugetlb pages in our pool.
>>>>>
>>>>> During alloc_contig_range(), we only migrate them, to then free
>>>>> them up
>>>>> after migration.
>>>>>
>>>>> Under which circumstances does it apply that "they may simply be
>>>>> released back into the free hugepage pool instead of being returned to
>>>>> the buddy system"?
>>>>>
>>>>
>>>> After migration, in-use hugetlb pages are only released back to the
>>>> hugetlb pool and are not returned to the buddy system.
>>>
>>> We had
>>>
>>> commit ae37c7ff79f1f030e28ec76c46ee032f8fd07607
>>> Author: Oscar Salvador <osalvador@suse.de>
>>> Date: Tue May 4 18:35:29 2021 -0700
>>>
>>> mm: make alloc_contig_range handle in-use hugetlb pages
>>> alloc_contig_range() will fail if it finds a HugeTLB page
>>> within the
>>> range, without a chance to handle them. Since HugeTLB pages
>>> can be
>>> migrated as any LRU or Movable page, it does not make sense to
>>> bail
>>> out
>>> without trying. Enable the interface to recognize in-use HugeTLB
>>> pages so
>>> we can migrate them, and have much better chances to succeed
>>> the call.
>>>
>>>
>>> And I am trying to figure out if it never worked correctly, or if
>>> something changed that broke it.
>>>
>>>
>>> In start_isolate_page_range()->isolate_migratepages_block(), we do the
>>>
>>> ret = isolate_or_dissolve_huge_page(page, &cc->migratepages);
>>>
>>> to add these folios to the cc->migratepages list.
>>>
>>> In __alloc_contig_migrate_range(), we migrate the pages using
>>> migrate_pages().
>>>
>>>
>>> After that, the src hugetlb folios should still be isolated?
>> Yes.
>>
>> But I'm
>>> getting
>>> confused when these pages get un-isolated and put back to hugetlb/freed.
>>>
>> If the migration is successful, call folio_putback_active_hugetlb to
>> release the src hugetlb folios back to the free hugetlb pool.
>>
>> trace:
>> unmap_and_move_huge_page
>> folio_putback_active_hugetlb
>> folio_put
>> free_huge_folio
>>
>> alloc_contig_range_noprof
>> __alloc_contig_migrate_range
>> if (test_pages_isolated()) //to determine if hugetlb pages in
>> buddy
>> isolate_freepages_range //grab isolated pages from freelists.
>> else
>> undo_isolate_page_range //undo isolate
>
> Ah, now I remember, thanks.
>
> So when we free an ordinary page, we put it onto the buddy isolate list,
> from where we can grab it later and nobody can allocate it in the meantime.
>
> In case of hugetlb, we simply free it back to hugetlb, from where it can
> likely even get allocated immediately again.
>
> I think that can actually happen in your proposal: the now-free page
> will get reallocated, for example for migrating the next folio. Or some
> concurrent system activity can simply allocate the now-free folio. Or am
> I missing something that prevents these now-free hugetlb folios from
> getting re-allocated after migration succeeded?
>
>
> Conceptually, I think we would want migration code in the case of
> alloc_contig_range() to allocate a new folio from the buddy, and to free
> the old one back to the buddy immediately, without ever allowing re-
> allocation of it.
>
> What needs to be handled is detecting that
>
> (a) we want to allocate a fresh hugetlb folio as migration target
> (b) if migration succeeds, we have to free the hugetlb folio back to the
> buddy
>> (c) if migration fails, we have to free the allocated hugetlb folio back
> to the buddy
>
>
> We could provide a custom alloc_migration_target that we pass to
>> migrate_pages() to allocate a fresh hugetlb folio to handle (a). Using the
> put_new_folio callback we could handle (c). (b) would need some thought.
It seems that if we allocate a fresh hugetlb folio as the migration
target, the source hugetlb folio will be automatically released back to
the buddy system.
>
> Maybe we can also just mark the source folio as we isolate it, and
> enlighten migration+freeing code to handle it automatically?
Can we determine whether a hugetlb page is isolated when allocating it
from the free hugetlb pool?
dequeue_hugetlb_folio_node_exact(struct hstate *h, int nid)
{
	list_for_each_entry(folio, &h->hugepage_freelists[nid], lru) {
		/* skip hugetlb folios whose pageblock is currently isolated */
		if (is_migrate_isolate_page(&folio->page))
			continue;
		...
	}
}
>
> Hoping to get some feedback from hugetlb maintainers.
>
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH] replace free hugepage folios after migration
2024-12-18 6:33 [PATCH] replace free hugepage folios after migration yangge1116
2024-12-19 16:40 ` David Hildenbrand
@ 2024-12-19 18:43 ` SeongJae Park
2024-12-20 9:03 ` Ge Yang
2024-12-21 14:35 ` David Hildenbrand
2 siblings, 1 reply; 13+ messages in thread
From: SeongJae Park @ 2024-12-19 18:43 UTC (permalink / raw)
To: yangge1116
Cc: SeongJae Park, akpm, linux-mm, linux-kernel, stable, 21cnbao,
david, baolin.wang, muchun.song, liuzixing
Hello,
On Wed, 18 Dec 2024 14:33:08 +0800 yangge1116@126.com wrote:
[...]
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index ae4fe86..7d36ac8 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -681,6 +681,7 @@ struct huge_bootmem_page {
> };
>
> int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list);
> +int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn);
> struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
> unsigned long addr, int avoid_reserve);
> struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
> @@ -1059,6 +1060,11 @@ static inline int isolate_or_dissolve_huge_page(struct page *page,
> return -ENOMEM;
> }
>
> +int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn)
> +{
> + return 0;
> +}
> +
I think this should be static inline. Otherwise, build fails when
CONFIG_HUGETLB_PAGE is unset. Since this is already merged into mm-unstable
and the problem and fix seem straightforward, I directly sent my fix:
https://lore.kernel.org/20241219183753.62922-1-sj@kernel.org
Thanks,
SJ
[...]
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH] replace free hugepage folios after migration
2024-12-19 18:43 ` SeongJae Park
@ 2024-12-20 9:03 ` Ge Yang
0 siblings, 0 replies; 13+ messages in thread
From: Ge Yang @ 2024-12-20 9:03 UTC (permalink / raw)
To: SeongJae Park
Cc: akpm, linux-mm, linux-kernel, stable, 21cnbao, david, baolin.wang,
muchun.song, liuzixing
On 2024/12/20 2:43, SeongJae Park wrote:
> Hello,
>
> On Wed, 18 Dec 2024 14:33:08 +0800 yangge1116@126.com wrote:
>
> [...]
>> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
>> index ae4fe86..7d36ac8 100644
>> --- a/include/linux/hugetlb.h
>> +++ b/include/linux/hugetlb.h
>> @@ -681,6 +681,7 @@ struct huge_bootmem_page {
>> };
>>
>> int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list);
>> +int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn);
>> struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
>> unsigned long addr, int avoid_reserve);
>> struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
>> @@ -1059,6 +1060,11 @@ static inline int isolate_or_dissolve_huge_page(struct page *page,
>> return -ENOMEM;
>> }
>>
>> +int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn)
>> +{
>> + return 0;
>> +}
>> +
>
> I think this should be static inline. Otherwise, build fails when
> CONFIG_HUGETLB_PAGE is unset. Since this is already merged into mm-unstable
> and the problem and fix seem straightforward, I directly sent my fix:
> https://lore.kernel.org/20241219183753.62922-1-sj@kernel.org
>
>
> Thanks,
> SJ
>
> [...]
Thanks.
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH] replace free hugepage folios after migration
2024-12-18 6:33 [PATCH] replace free hugepage folios after migration yangge1116
2024-12-19 16:40 ` David Hildenbrand
2024-12-19 18:43 ` SeongJae Park
@ 2024-12-21 14:35 ` David Hildenbrand
2024-12-22 8:13 ` Ge Yang
2 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand @ 2024-12-21 14:35 UTC (permalink / raw)
To: yangge1116, akpm
Cc: linux-mm, linux-kernel, stable, 21cnbao, baolin.wang, muchun.song,
liuzixing
On 18.12.24 07:33, yangge1116@126.com wrote:
> From: yangge <yangge1116@126.com>
>
> My machine has 4 NUMA nodes, each equipped with 32GB of memory. I
> have configured each NUMA node with 16GB of CMA and 16GB of in-use
> hugetlb pages. The allocation of contiguous memory via the
> cma_alloc() function can fail probabilistically.
>
> The cma_alloc() function may fail if it sees an in-use hugetlb page
> within the allocation range, even if that page has already been
> migrated. When in-use hugetlb pages are migrated, they may simply
> be released back into the free hugepage pool instead of being
> returned to the buddy system. This can cause the
> test_pages_isolated() function check to fail, ultimately leading
> to the failure of the cma_alloc() function:
> cma_alloc()
> __alloc_contig_migrate_range() // migrate in-use hugepage
> test_pages_isolated()
> __test_page_isolated_in_pageblock()
> PageBuddy(page) // check if the page is in buddy
>
> To address this issue, we will add a function named
> replace_free_hugepage_folios(). This function will replace the
> hugepage in the free hugepage pool with a new one and release the
> old one to the buddy system. After the migration of in-use hugetlb
> pages is completed, we will invoke the replace_free_hugepage_folios()
> function to ensure that these hugepages are properly released to
> the buddy system. Following this step, when the test_pages_isolated()
> function is executed for inspection, it will successfully pass.
>
> Signed-off-by: yangge <yangge1116@126.com>
> ---
> include/linux/hugetlb.h | 6 ++++++
> mm/hugetlb.c | 37 +++++++++++++++++++++++++++++++++++++
> mm/page_alloc.c | 13 ++++++++++++-
> 3 files changed, 55 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index ae4fe86..7d36ac8 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -681,6 +681,7 @@ struct huge_bootmem_page {
> };
>
> int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list);
> +int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn);
> struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
> unsigned long addr, int avoid_reserve);
> struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
> @@ -1059,6 +1060,11 @@ static inline int isolate_or_dissolve_huge_page(struct page *page,
> return -ENOMEM;
> }
>
> +int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn)
> +{
> + return 0;
> +}
> +
> static inline struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
> unsigned long addr,
> int avoid_reserve)
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 8e1db80..a099c54 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2975,6 +2975,43 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
> return ret;
> }
>
> +/*
> + * replace_free_hugepage_folios - Replace free hugepage folios in a given pfn
> + * range with new folios.
> + * @start_pfn: start pfn of the given pfn range
> + * @end_pfn: end pfn of the given pfn range
> + * Returns 0 on success, otherwise negated error.
> + */
> +int replace_free_hugepage_folios(unsigned long start_pfn, unsigned long end_pfn)
> +{
> + struct hstate *h;
> + struct folio *folio;
> + int ret = 0;
> +
> + LIST_HEAD(isolate_list);
> +
> + while (start_pfn < end_pfn) {
> + folio = pfn_folio(start_pfn);
> + if (folio_test_hugetlb(folio)) {
> + h = folio_hstate(folio);
> + } else {
> + start_pfn++;
> + continue;
> + }
> +
> + if (!folio_ref_count(folio)) {
> + ret = alloc_and_dissolve_hugetlb_folio(h, folio, &isolate_list);
> + if (ret)
> + break;
> +
> + putback_movable_pages(&isolate_list);
> + }
> + start_pfn++;
> + }
> +
> + return ret;
> +}
> +
> struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
> unsigned long addr, int avoid_reserve)
> {
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index dde19db..1dcea28 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -6504,7 +6504,18 @@ int alloc_contig_range_noprof(unsigned long start, unsigned long end,
> ret = __alloc_contig_migrate_range(&cc, start, end, migratetype);
> if (ret && ret != -EBUSY)
> goto done;
> - ret = 0;
> +
> + /*
> + * When in-use hugetlb pages are migrated, they may simply be
> + * released back into the free hugepage pool instead of being
> + * returned to the buddy system. After the migration of in-use
> + * huge pages is completed, we will invoke the
> + * replace_free_hugepage_folios() function to ensure that
> + * these hugepages are properly released to the buddy system.
> + */
As mentioned in my other mail, what I don't like about this is, IIUC,
the pages can get reallocated anytime after we successfully migrated
them, or is there anything that prevents that?
Did you ever try allocating a larger range with a single
alloc_contig_range() call, that possibly has to migrate multiple hugetlb
folios in one go (and maybe just allocates one of the just-freed hugetlb
folios as migration target)?
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH] replace free hugepage folios after migration
2024-12-21 14:35 ` David Hildenbrand
@ 2024-12-22 8:13 ` Ge Yang
2025-01-08 21:05 ` David Hildenbrand
0 siblings, 1 reply; 13+ messages in thread
From: Ge Yang @ 2024-12-22 8:13 UTC (permalink / raw)
To: David Hildenbrand, akpm
Cc: linux-mm, linux-kernel, stable, 21cnbao, baolin.wang, muchun.song,
liuzixing
On 2024/12/21 22:35, David Hildenbrand wrote:
> On 18.12.24 07:33, yangge1116@126.com wrote:
>> From: yangge <yangge1116@126.com>
>>
>> My machine has 4 NUMA nodes, each equipped with 32GB of memory. I
>> have configured each NUMA node with 16GB of CMA and 16GB of in-use
>> hugetlb pages. The allocation of contiguous memory via the
>> cma_alloc() function can fail probabilistically.
>>
>> The cma_alloc() function may fail if it sees an in-use hugetlb page
>> within the allocation range, even if that page has already been
>> migrated. When in-use hugetlb pages are migrated, they may simply
>> be released back into the free hugepage pool instead of being
>> returned to the buddy system. This can cause the
>> test_pages_isolated() function check to fail, ultimately leading
>> to the failure of the cma_alloc() function:
>> cma_alloc()
>> __alloc_contig_migrate_range() // migrate in-use hugepage
>> test_pages_isolated()
>> __test_page_isolated_in_pageblock()
>> PageBuddy(page) // check if the page is in buddy
>>
>> To address this issue, we will add a function named
>> replace_free_hugepage_folios(). This function will replace the
>> hugepage in the free hugepage pool with a new one and release the
>> old one to the buddy system. After the migration of in-use hugetlb
>> pages is completed, we will invoke the replace_free_hugepage_folios()
>> function to ensure that these hugepages are properly released to
>> the buddy system. Following this step, when the test_pages_isolated()
>> function is executed for inspection, it will successfully pass.
>>
>> Signed-off-by: yangge <yangge1116@126.com>
>> ---
>> include/linux/hugetlb.h | 6 ++++++
>> mm/hugetlb.c | 37 +++++++++++++++++++++++++++++++++++++
>> mm/page_alloc.c | 13 ++++++++++++-
>> 3 files changed, 55 insertions(+), 1 deletion(-)
>>
>> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
>> index ae4fe86..7d36ac8 100644
>> --- a/include/linux/hugetlb.h
>> +++ b/include/linux/hugetlb.h
>> @@ -681,6 +681,7 @@ struct huge_bootmem_page {
>> };
>> int isolate_or_dissolve_huge_page(struct page *page, struct
>> list_head *list);
>> +int replace_free_hugepage_folios(unsigned long start_pfn, unsigned
>> long end_pfn);
>> struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
>> unsigned long addr, int avoid_reserve);
>> struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int
>> preferred_nid,
>> @@ -1059,6 +1060,11 @@ static inline int
>> isolate_or_dissolve_huge_page(struct page *page,
>> return -ENOMEM;
>> }
>> +int replace_free_hugepage_folios(unsigned long start_pfn, unsigned
>> long end_pfn)
>> +{
>> + return 0;
>> +}
>> +
>> static inline struct folio *alloc_hugetlb_folio(struct
>> vm_area_struct *vma,
>> unsigned long addr,
>> int avoid_reserve)
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 8e1db80..a099c54 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -2975,6 +2975,43 @@ int isolate_or_dissolve_huge_page(struct page
>> *page, struct list_head *list)
>> return ret;
>> }
>> +/*
>> + * replace_free_hugepage_folios - Replace free hugepage folios in a
>> given pfn
>> + * range with new folios.
>> + * @start_pfn: start pfn of the given pfn range
>> + * @end_pfn: end pfn of the given pfn range
>> + * Returns 0 on success, otherwise negated error.
>> + */
>> +int replace_free_hugepage_folios(unsigned long start_pfn, unsigned
>> long end_pfn)
>> +{
>> + struct hstate *h;
>> + struct folio *folio;
>> + int ret = 0;
>> +
>> + LIST_HEAD(isolate_list);
>> +
>> + while (start_pfn < end_pfn) {
>> + folio = pfn_folio(start_pfn);
>> + if (folio_test_hugetlb(folio)) {
>> + h = folio_hstate(folio);
>> + } else {
>> + start_pfn++;
>> + continue;
>> + }
>> +
>> + if (!folio_ref_count(folio)) {
>> + ret = alloc_and_dissolve_hugetlb_folio(h, folio,
>> &isolate_list);
>> + if (ret)
>> + break;
>> +
>> + putback_movable_pages(&isolate_list);
>> + }
>> + start_pfn++;
>> + }
>> +
>> + return ret;
>> +}
>> +
>> struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
>> unsigned long addr, int avoid_reserve)
>> {
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index dde19db..1dcea28 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -6504,7 +6504,18 @@ int alloc_contig_range_noprof(unsigned long
>> start, unsigned long end,
>> ret = __alloc_contig_migrate_range(&cc, start, end, migratetype);
>> if (ret && ret != -EBUSY)
>> goto done;
>> - ret = 0;
>> +
>> + /*
>> + * When in-use hugetlb pages are migrated, they may simply be
>> + * released back into the free hugepage pool instead of being
>> + * returned to the buddy system. After the migration of in-use
>> + * huge pages is completed, we will invoke the
>> + * replace_free_hugepage_folios() function to ensure that
>> + * these hugepages are properly released to the buddy system.
>> + */
>
> As mentioned in my other mail, what I don't like about this is, IIUC,
> the pages can get reallocated anytime after we successfully migrated
> them, or is there anything that prevents that?
>
The pages can get reallocated anytime after we successfully migrated
them. Currently, I haven't thought of a good way to prevent it.
> Did you ever try allocating a larger range with a single
> alloc_contig_range() call, that possibly has to migrate multiple hugetlb
> folios in one go (and maybe just allocates one of the just-freed hugetlb
> folios as migration target)?
>
I have tried using a single alloc_contig_range() call to allocate a
larger contiguous range, and it works properly. This is because during
the period between __alloc_contig_migrate_range() and
isolate_freepages_range(), no one allocates a hugetlb folio from the
free hugetlb pool.
>
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH] replace free hugepage folios after migration
2024-12-22 8:13 ` Ge Yang
@ 2025-01-08 21:05 ` David Hildenbrand
2025-01-09 9:50 ` Ge Yang
0 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand @ 2025-01-08 21:05 UTC (permalink / raw)
To: Ge Yang, akpm
Cc: linux-mm, linux-kernel, stable, 21cnbao, baolin.wang, muchun.song,
liuzixing
Sorry for the late reply, holidays ...
>> Did you ever try allocating a larger range with a single
>> alloc_contig_range() call, that possibly has to migrate multiple hugetlb
>> folios in one go (and maybe just allocates one of the just-freed hugetlb
>> folios as migration target)?
>>
>
> I have tried using a single alloc_contig_range() call to allocate a
> larger contiguous range, and it works properly. This is because during
> the period between __alloc_contig_migrate_range() and
> isolate_freepages_range(), no one allocates a hugetlb folio from the
> free hugetlb pool.
Did you trigger the following as well?
alloc_contig_range() that covers multiple in-use hugetlb pages, like
[ huge 0 ] [ huge 1 ] [ huge 2 ] [ huge 3 ]
I assume the following happens:
To migrate huge 0, we have to allocate a fresh page from the buddy.
After migration, we return now-free huge 0 to the pool.
To migrate huge 1, we can just grab now-free huge 0 from the pool, and
not allocate a fresh one from the buddy.
At least that's my impression when reading
alloc_migration_target()->alloc_hugetlb_folio_nodemask().
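
Roughly (heavily simplified from mm/hugetlb.c; locking, reservation
handling and exact signatures differ between kernel versions), that
path prefers the free pool, which is why I'd expect the just-freed
folio to be handed out again:

struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
					   nodemask_t *nmask, gfp_t gfp_mask)
{
	if (available_huge_pages(h)) {
		struct folio *folio;

		/*
		 * Prefer the free pool -- possibly the folio we just freed
		 * after migrating the previous hugetlb page.
		 */
		folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask,
						       preferred_nid, nmask);
		if (folio)
			return folio;
	}

	/* only an empty pool makes us allocate fresh pages from the buddy */
	return alloc_migrate_hugetlb_folio(h, gfp_mask, preferred_nid, nmask);
}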
Or is available_huge_pages()==false for some reason, so that we always end up
in alloc_migrate_hugetlb_folio()->alloc_fresh_hugetlb_folio()?
Sorry for the stupid questions, the code is complicated, and I cannot
see how this would work.
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH] replace free hugepage folios after migration
2025-01-08 21:05 ` David Hildenbrand
@ 2025-01-09 9:50 ` Ge Yang
0 siblings, 0 replies; 13+ messages in thread
From: Ge Yang @ 2025-01-09 9:50 UTC (permalink / raw)
To: David Hildenbrand, akpm
Cc: linux-mm, linux-kernel, stable, 21cnbao, baolin.wang, muchun.song,
liuzixing
On 2025/1/9 5:05, David Hildenbrand wrote:
> Sorry for the late reply, holidays ...
>
>>> Did you ever try allocating a larger range with a single
>>> alloc_contig_range() call, that possibly has to migrate multiple hugetlb
>>> folios in one go (and maybe just allocates one of the just-freed hugetlb
>>> folios as migration target)?
>>>
>>
>> I have tried using a single alloc_contig_range() call to allocate a
>> larger contiguous range, and it works properly. This is because during
>> the period between __alloc_contig_migrate_range() and
>> isolate_freepages_range(), no one allocates a hugetlb folio from the
>> free hugetlb pool.
>
> Did you trigger the following as well?
>
> alloc_contig_range() that covers multiple in-use hugetlb pages, like
>
> [ huge 0 ] [ huge 1 ] [ huge 2 ] [ huge 3 ]
>
> I assume the following happens:
>
> To migrate huge 0, we have to allocate a fresh page from the buddy.
> After migration, we return now-free huge 0 to the pool.
>
> To migrate huge 1, we can just grab now-free huge 0 from the pool, and
> not allocate a fresh one from the buddy.
>
> At least that's my impression when reading
> alloc_migration_target()->alloc_hugetlb_folio_nodemask().
Thank you very much for your suggestions.
It needs to be discussed in two different scenarios:
1. When all free hugetlb pages in the pool are allocated,
available_huge_pages() returns false.
If available_huge_pages() returns false, indicating that no free huge
pages are available in the hugetlb pool, we will invoke
alloc_migrate_hugetlb_folio() to allocate a new folio. A temporary flag
will be set on this new folio. After the migration of the hugetlb folio
is completed, the temporary flag will be transferred from the new folio
to the old one. Any folio with the temporary flag, when freed, will be
directly released to the buddy allocator (see the free_huge_folio()
sketch at the end of this mail).
2. When some free hugetlb pages in the pool are still available,
available_huge_pages() returns true.
If available_huge_pages() returns true, indicating that there are still
free huge pages available in the hugetlb pool, we will call
dequeue_hugetlb_folio_node() to allocate a new folio. After the
migration of the hugetlb folio is completed, the old folio will be
released back to the free hugetlb pool. However, this scenario may pose
potential issues, as you mentioned earlier. It seems that the issue can
be resolved by the following approach:
dequeue_hugetlb_folio_node_exact(struct hstate *h, int nid)
{
	list_for_each_entry(folio, &h->hugepage_freelists[nid], lru) {
		/* skip hugetlb folios whose pageblock is currently isolated */
		if (is_migrate_isolate_page(&folio->page))
			continue;
		...
	}
}
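
For completeness, the distinction between the two scenarios above is
made in free_huge_folio(); very roughly (simplified sketch based on
mm/hugetlb.c; surplus handling, locking and exact helper names vary
across kernel versions):

void free_huge_folio(struct folio *folio)
{
	struct hstate *h = folio_hstate(folio);

	/* ... reservation and surplus accounting omitted ... */

	if (folio_test_hugetlb_temporary(folio)) {
		/* temporary (freshly allocated) folios go straight back to buddy */
		remove_hugetlb_folio(h, folio, false);
		update_and_free_hugetlb_folio(h, folio, true);
	} else {
		/* normal pool pages are re-enqueued on the hugetlb free list */
		enqueue_hugetlb_folio(h, folio);
	}
}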
>
> Or is available_huge_pages()==false for some reason, so that we always end up
> in alloc_migrate_hugetlb_folio()->alloc_fresh_hugetlb_folio()?
>
> Sorry for the stupid questions, the code is complicated, and I cannot
> see how this would work.
>
^ permalink raw reply [flat|nested] 13+ messages in thread