* [PATCH v3] mm/migrate_device: fix pgtable leak in migrate_vma_insert_huge_pmd_page
@ 2026-05-01 11:51 Sunny Patel
From: Sunny Patel @ 2026-05-01 11:51 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand
Cc: Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, linux-mm,
linux-kernel, Sunny Patel
When migrate_vma_insert_huge_pmd_page() jumps to unlock_abort due
to a PMD check failure, the pgtable allocated earlier via
pte_alloc_one() is never freed, causing a memory leak.
Add a free_abort label to release the pgtable in the error path.
Signed-off-by: Sunny Patel <nueralspacetech@gmail.com>
---
Changes in v3:
- Added a free_abort label, as suggested, to release the pgtable
in the error path.
- v2 had the pte_free() call in both error paths; this version
has a single goto target for both paths, avoiding code duplication.
mm/migrate_device.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index fbfe5715f635..0360b410067b 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -840,7 +840,7 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
} else {
if (folio_is_zone_device(folio) &&
!folio_is_device_coherent(folio)) {
- goto abort;
+ goto free_abort;
}
entry = folio_mk_pmd(folio, vma->vm_page_prot);
if (vma->vm_flags & VM_WRITE)
@@ -893,6 +893,8 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
unlock_abort:
spin_unlock(ptl);
+free_abort:
+ pte_free(vma->vm_mm, pgtable);
abort:
for (i = 0; i < HPAGE_PMD_NR; i++)
src[i] &= ~MIGRATE_PFN_MIGRATE;
--
2.43.0
^ permalink raw reply related [flat|nested] 8+ messages in thread
* Re: [PATCH v3] mm/migrate_device: fix pgtable leak in migrate_vma_insert_huge_pmd_page
From: Andrew Morton @ 2026-05-01 12:44 UTC (permalink / raw)
To: Sunny Patel
Cc: David Hildenbrand, Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim,
Byungchul Park, Gregory Price, Ying Huang, Alistair Popple,
linux-mm, linux-kernel, Balbir Singh
On Fri, 1 May 2026 17:21:16 +0530 Sunny Patel <nueralspacetech@gmail.com> wrote:
> When migrate_vma_insert_huge_pmd_page() jumps to unlock_abort due
> to a PMD check failure, the pgtable allocated earlier via
> pte_alloc_one() is never freed, causing a memory leak.
>
> Added free_abort label to release the pgtable in error path.
>
> ...
>
> --- a/mm/migrate_device.c
> +++ b/mm/migrate_device.c
> @@ -840,7 +840,7 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
> } else {
> if (folio_is_zone_device(folio) &&
> !folio_is_device_coherent(folio)) {
> - goto abort;
> + goto free_abort;
> }
> entry = folio_mk_pmd(folio, vma->vm_page_prot);
> if (vma->vm_flags & VM_WRITE)
> @@ -893,6 +893,8 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
>
> unlock_abort:
> spin_unlock(ptl);
> +free_abort:
> + pte_free(vma->vm_mm, pgtable);
> abort:
> for (i = 0; i < HPAGE_PMD_NR; i++)
> src[i] &= ~MIGRATE_PFN_MIGRATE;
Yikes, we leak that page on several error paths.
Thanks, I'll retain David's ack from the v2 patch.
Balbir, please review?
* Re: [PATCH v3] mm/migrate_device: fix pgtable leak in migrate_vma_insert_huge_pmd_page
From: David Hildenbrand (Arm) @ 2026-05-01 19:08 UTC (permalink / raw)
To: Andrew Morton, Sunny Patel
Cc: Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, linux-mm,
linux-kernel, Balbir Singh
On 5/1/26 14:44, Andrew Morton wrote:
> On Fri, 1 May 2026 17:21:16 +0530 Sunny Patel <nueralspacetech@gmail.com> wrote:
>
>> When migrate_vma_insert_huge_pmd_page() jumps to unlock_abort due
>> to a PMD check failure, the pgtable allocated earlier via
>> pte_alloc_one() is never freed, causing a memory leak.
>>
>> Added free_abort label to release the pgtable in error path.
>>
>> ...
>>
>> --- a/mm/migrate_device.c
>> +++ b/mm/migrate_device.c
>> @@ -840,7 +840,7 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
>> } else {
>> if (folio_is_zone_device(folio) &&
>> !folio_is_device_coherent(folio)) {
>> - goto abort;
>> + goto free_abort;
>> }
>> entry = folio_mk_pmd(folio, vma->vm_page_prot);
>> if (vma->vm_flags & VM_WRITE)
>> @@ -893,6 +893,8 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
>>
>> unlock_abort:
>> spin_unlock(ptl);
>> +free_abort:
>> + pte_free(vma->vm_mm, pgtable);
>> abort:
>> for (i = 0; i < HPAGE_PMD_NR; i++)
>> src[i] &= ~MIGRATE_PFN_MIGRATE;
>
> Yikes, we leak that page on several error paths.
>
> Thanks, I'll retain David's ack from the v2 patch.
Yes. If we want to avoid more labels, we could do something like:
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index ab49d4dcdb60..babb56c4d47f 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -795,8 +795,8 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
struct folio *folio = page_folio(page);
int ret;
vm_fault_t csa_ret;
- spinlock_t *ptl;
- pgtable_t pgtable;
+ spinlock_t *ptl = NULL;
+ pgtable_t pgtable = NULL;
pmd_t entry;
bool flush = false;
unsigned long i;
@@ -818,14 +818,14 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
count_vm_event(THP_FAULT_FALLBACK);
count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
ret = -ENOMEM;
- goto abort;
+ goto error;
}
__folio_mark_uptodate(folio);
pgtable = pte_alloc_one(vma->vm_mm);
if (unlikely(!pgtable))
- goto abort;
+ goto error;
if (folio_is_device_private(folio)) {
swp_entry_t swp_entry;
@@ -840,7 +840,7 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
} else {
if (folio_is_zone_device(folio) &&
!folio_is_device_coherent(folio)) {
- goto abort;
+ goto error;
}
entry = folio_mk_pmd(folio, vma->vm_page_prot);
if (vma->vm_flags & VM_WRITE)
@@ -850,21 +850,21 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
ptl = pmd_lock(vma->vm_mm, pmdp);
csa_ret = check_stable_address_space(vma->vm_mm);
if (csa_ret)
- goto unlock_abort;
+ goto error;
/*
* Check for userfaultfd but do not deliver the fault. Instead,
* just back off.
*/
if (userfaultfd_missing(vma))
- goto unlock_abort;
+ goto error;
if (!pmd_none(*pmdp)) {
if (!is_huge_zero_pmd(*pmdp))
- goto unlock_abort;
+ goto error;
flush = true;
} else if (!pmd_none(*pmdp))
- goto unlock_abort;
+ goto error;
add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
@@ -891,9 +891,11 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
return 0;
-unlock_abort:
- spin_unlock(ptl);
-abort:
+error:
+ if (ptl)
+ spin_unlock(ptl);
+ if (pgtable)
+ pte_free(vma->vm_mm, pgtable);
for (i = 0; i < HPAGE_PMD_NR; i++)
src[i] &= ~MIGRATE_PFN_MIGRATE;
return 0;
--
Cheers,
David
* Re: [PATCH v3] mm/migrate_device: fix pgtable leak in migrate_vma_insert_huge_pmd_page
From: Balbir Singh @ 2026-05-02 0:47 UTC (permalink / raw)
To: Andrew Morton, Sunny Patel
Cc: David Hildenbrand, Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim,
Byungchul Park, Gregory Price, Ying Huang, Alistair Popple,
linux-mm, linux-kernel
On 5/1/26 22:44, Andrew Morton wrote:
> On Fri, 1 May 2026 17:21:16 +0530 Sunny Patel <nueralspacetech@gmail.com> wrote:
>
>> When migrate_vma_insert_huge_pmd_page() jumps to unlock_abort due
>> to a PMD check failure, the pgtable allocated earlier via
>> pte_alloc_one() is never freed, causing a memory leak.
>>
>> Added free_abort label to release the pgtable in error path.
>>
>> ...
>
> Yikes, we leak that page on several error paths.
>
> Thanks, I'll retain David's ack from the v2 patch.
>
> Balbir, please review?
The issue can be reached if we try to migrate non-private/non-coherent folios (I assume DAX/P2PDMA pages). Even migrate_vma_insert_page() has a similar pattern. Creating the pud/pmd/pte structure there is unnecessary (but no allocations occur; it depends on pmd_none(), for example). Do we want to clean up all of it? (Need to think through.)
I'll re-review and send fixes as needed. Looks like David sent some fixes out, will review
those as well.
Balbir
* Re: [PATCH v3] mm/migrate_device: fix pgtable leak in migrate_vma_insert_huge_pmd_page
From: Balbir Singh @ 2026-05-02 0:59 UTC (permalink / raw)
To: Andrew Morton, Sunny Patel
Cc: David Hildenbrand, Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim,
Byungchul Park, Gregory Price, Ying Huang, Alistair Popple,
linux-mm, linux-kernel
On 5/2/26 10:47, Balbir Singh wrote:
> On 5/1/26 22:44, Andrew Morton wrote:
>> ...
>
>
> The issue can be reached if we try to migrate a non private/coherent folio (I assume DAX/P2PDMA)
> pages. Even migrate_vma_insert_page() has a similar pattern. Creating the pud/pmd/pte structure
> there is unnecessary (but not allocations occur, depends on pmd_none() for example), do we want
> to clean up all of it (need to think through)?
>
> I'll re-review and send fixes as needed. Looks like David sent some fixes out, will review
> those as well.
>
I checked: migrate_vma_insert_page() is not affected.
Balbir
* Re: [PATCH v3] mm/migrate_device: fix pgtable leak in migrate_vma_insert_huge_pmd_page
From: Balbir Singh @ 2026-05-02 1:02 UTC (permalink / raw)
To: David Hildenbrand (Arm), Andrew Morton, Sunny Patel
Cc: Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, linux-mm,
linux-kernel
On 5/2/26 05:08, David Hildenbrand (Arm) wrote:
> On 5/1/26 14:44, Andrew Morton wrote:
>> ...
>
> Yes. If we want to avoid more labels, we could do something like:
>
> ...
> -unlock_abort:
> - spin_unlock(ptl);
> -abort:
> +error:
> + if (ptl)
> + spin_unlock(ptl);
> + if (pgtable)
> + pte_free(vma->vm_mm, pgtable);
> for (i = 0; i < HPAGE_PMD_NR; i++)
> src[i] &= ~MIGRATE_PFN_MIGRATE;
> return 0;
>
>
The changes look good!
Reviewed-by: Balbir Singh <balbirs@nvidia.com>
* Re: [PATCH v3] mm/migrate_device: fix pgtable leak in migrate_vma_insert_huge_pmd_page
From: Huang, Ying @ 2026-05-07 9:38 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: Andrew Morton, Sunny Patel, Zi Yan, Matthew Brost, Joshua Hahn,
Rakie Kim, Byungchul Park, Gregory Price, Alistair Popple,
linux-mm, linux-kernel, Balbir Singh
"David Hildenbrand (Arm)" <david@kernel.org> writes:
> On 5/1/26 14:44, Andrew Morton wrote:
>> ...
>
> Yes. If we want to avoid more labels, we could do something like:
>
> ...
> -unlock_abort:
> - spin_unlock(ptl);
> -abort:
> +error:
> + if (ptl)
> + spin_unlock(ptl);
> + if (pgtable)
> + pte_free(vma->vm_mm, pgtable);
> for (i = 0; i < HPAGE_PMD_NR; i++)
> src[i] &= ~MIGRATE_PFN_MIGRATE;
> return 0;
Both look good to me; feel free to add my
Reviewed-by: Huang Ying <ying.huang@linux.alibaba.com>
in future versions.
---
Best Regards,
Huang, Ying
* Re: [PATCH v3] mm/migrate_device: fix pgtable leak in migrate_vma_insert_huge_pmd_page
From: David Hildenbrand (Arm) @ 2026-05-08 11:41 UTC (permalink / raw)
To: Balbir Singh, Andrew Morton, Sunny Patel
Cc: Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, linux-mm,
linux-kernel
>> -unlock_abort:
>> - spin_unlock(ptl);
>> -abort:
>> +error:
>> + if (ptl)
>> + spin_unlock(ptl);
>> + if (pgtable)
>> + pte_free(vma->vm_mm, pgtable);
>> for (i = 0; i < HPAGE_PMD_NR; i++)
>> src[i] &= ~MIGRATE_PFN_MIGRATE;
>> return 0;
>>
>>
>
> The changes look good!
>
> Reviewed-by: Balbir Singh <balbirs@nvidia.com>
Do we prefer that or what we have in v3?
In case we prefer what I posted, it would be great if Sunny could send a v4!
--
Cheers,
David