From: "David Hildenbrand (Arm)" <david@kernel.org>
To: Andrew Morton <akpm@linux-foundation.org>,
Sunny Patel <nueralspacetech@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>, Matthew Brost <matthew.brost@intel.com>,
Joshua Hahn <joshua.hahnjy@gmail.com>,
Rakie Kim <rakie.kim@sk.com>, Byungchul Park <byungchul@sk.com>,
Gregory Price <gourry@gourry.net>,
Ying Huang <ying.huang@linux.alibaba.com>,
Alistair Popple <apopple@nvidia.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Balbir Singh <balbirs@nvidia.com>
Subject: Re: [PATCH v3] mm/migrate_device: fix pgtable leak in migrate_vma_insert_huge_pmd_page
Date: Fri, 1 May 2026 21:08:25 +0200
Message-ID: <24ab5ddc-11a9-40ed-90b2-1a6c68010928@kernel.org>
In-Reply-To: <20260501054416.af0ed62d635c3eb01d425e61@linux-foundation.org>
On 5/1/26 14:44, Andrew Morton wrote:
> On Fri, 1 May 2026 17:21:16 +0530 Sunny Patel <nueralspacetech@gmail.com> wrote:
>
>> When migrate_vma_insert_huge_pmd_page() jumps to unlock_abort due
>> to a PMD check failure, the pgtable allocated earlier via
>> pte_alloc_one() is never freed, causing a memory leak.
>>
>> Add a free_abort label to release the pgtable on the error path.
>>
>> ...
>>
>> --- a/mm/migrate_device.c
>> +++ b/mm/migrate_device.c
>> @@ -840,7 +840,7 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
>> } else {
>> if (folio_is_zone_device(folio) &&
>> !folio_is_device_coherent(folio)) {
>> - goto abort;
>> + goto free_abort;
>> }
>> entry = folio_mk_pmd(folio, vma->vm_page_prot);
>> if (vma->vm_flags & VM_WRITE)
>> @@ -893,6 +893,8 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
>>
>> unlock_abort:
>> spin_unlock(ptl);
>> +free_abort:
>> + pte_free(vma->vm_mm, pgtable);
>> abort:
>> for (i = 0; i < HPAGE_PMD_NR; i++)
>> src[i] &= ~MIGRATE_PFN_MIGRATE;
>
> Yikes, we leak that page on several error paths.
>
> Thanks, I'll retain David's ack from the v2 patch.
Yes. If we want to avoid more labels, we could do something like:
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index ab49d4dcdb60..babb56c4d47f 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -795,8 +795,8 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
struct folio *folio = page_folio(page);
int ret;
vm_fault_t csa_ret;
- spinlock_t *ptl;
- pgtable_t pgtable;
+ spinlock_t *ptl = NULL;
+ pgtable_t pgtable = NULL;
pmd_t entry;
bool flush = false;
unsigned long i;
@@ -818,14 +818,14 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
count_vm_event(THP_FAULT_FALLBACK);
count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
ret = -ENOMEM;
- goto abort;
+ goto error;
}
__folio_mark_uptodate(folio);
pgtable = pte_alloc_one(vma->vm_mm);
if (unlikely(!pgtable))
- goto abort;
+ goto error;
if (folio_is_device_private(folio)) {
swp_entry_t swp_entry;
@@ -840,7 +840,7 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
} else {
if (folio_is_zone_device(folio) &&
!folio_is_device_coherent(folio)) {
- goto abort;
+ goto error;
}
entry = folio_mk_pmd(folio, vma->vm_page_prot);
if (vma->vm_flags & VM_WRITE)
@@ -850,21 +850,21 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
ptl = pmd_lock(vma->vm_mm, pmdp);
csa_ret = check_stable_address_space(vma->vm_mm);
if (csa_ret)
- goto unlock_abort;
+ goto error;
/*
* Check for userfaultfd but do not deliver the fault. Instead,
* just back off.
*/
if (userfaultfd_missing(vma))
- goto unlock_abort;
+ goto error;
 	if (!pmd_none(*pmdp)) {
 		if (!is_huge_zero_pmd(*pmdp))
-			goto unlock_abort;
+			goto error;
 		flush = true;
 	}
add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
@@ -891,9 +891,11 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
return 0;
-unlock_abort:
- spin_unlock(ptl);
-abort:
+error:
+ if (ptl)
+ spin_unlock(ptl);
+ if (pgtable)
+ pte_free(vma->vm_mm, pgtable);
for (i = 0; i < HPAGE_PMD_NR; i++)
src[i] &= ~MIGRATE_PFN_MIGRATE;
return 0;
--
Cheers,
David
2026-05-01 11:51 [PATCH v3] mm/migrate_device: fix pgtable leak in migrate_vma_insert_huge_pmd_page Sunny Patel
2026-05-01 12:44 ` Andrew Morton
2026-05-01 19:08 ` David Hildenbrand (Arm) [this message]
2026-05-02 1:02 ` Balbir Singh
2026-05-08 11:41 ` David Hildenbrand (Arm)
2026-05-07 9:38 ` Huang, Ying
2026-05-02 0:47 ` Balbir Singh
2026-05-02 0:59 ` Balbir Singh