From: "Huang, Ying" <ying.huang@linux.alibaba.com>
To: "David Hildenbrand (Arm)"
Cc: Andrew Morton, Sunny Patel, Zi Yan, Matthew Brost, Joshua Hahn,
	Rakie Kim, Byungchul Park, Gregory Price, Alistair Popple,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, Balbir Singh
Subject: Re: [PATCH v3] mm/migrate_device: fix pgtable leak in migrate_vma_insert_huge_pmd_page
In-Reply-To: <24ab5ddc-11a9-40ed-90b2-1a6c68010928@kernel.org> (David Hildenbrand's message of "Fri, 1 May 2026 21:08:25 +0200")
References: <20260501115122.23288-1-nueralspacetech@gmail.com>
	<20260501054416.af0ed62d635c3eb01d425e61@linux-foundation.org>
	<24ab5ddc-11a9-40ed-90b2-1a6c68010928@kernel.org>
Date: Thu, 07 May 2026 17:38:54 +0800
Message-ID: <87ik8z36fl.fsf@DESKTOP-5N7EMDA>
User-Agent: Gnus/5.13 (Gnus v5.13)
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset=ascii

"David Hildenbrand (Arm)" writes:

> On 5/1/26 14:44, Andrew Morton wrote:
>> On Fri, 1 May 2026 17:21:16 +0530 Sunny Patel wrote:
>>
>>> When migrate_vma_insert_huge_pmd_page() jumps to unlock_abort due
>>> to a PMD check failure, the pgtable allocated earlier via
>>> pte_alloc_one() is never freed, causing a memory leak.
>>>
>>> Added free_abort label to release the pgtable in error path.
>>>
>>> ...
>>>
>>> --- a/mm/migrate_device.c
>>> +++ b/mm/migrate_device.c
>>> @@ -840,7 +840,7 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
>>>  	} else {
>>>  		if (folio_is_zone_device(folio) &&
>>>  		    !folio_is_device_coherent(folio)) {
>>> -			goto abort;
>>> +			goto free_abort;
>>>  		}
>>>  		entry = folio_mk_pmd(folio, vma->vm_page_prot);
>>>  		if (vma->vm_flags & VM_WRITE)
>>> @@ -893,6 +893,8 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
>>>
>>>  unlock_abort:
>>>  	spin_unlock(ptl);
>>> +free_abort:
>>> +	pte_free(vma->vm_mm, pgtable);
>>>  abort:
>>>  	for (i = 0; i < HPAGE_PMD_NR; i++)
>>>  		src[i] &= ~MIGRATE_PFN_MIGRATE;
>>
>> Yikes, we leak that page on several error paths.
>>
>> Thanks, I'll retain David's ack from the v2 patch.
>
> Yes. If we want to avoid more labels, we could do something like:
>
> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
> index ab49d4dcdb60..babb56c4d47f 100644
> --- a/mm/migrate_device.c
> +++ b/mm/migrate_device.c
> @@ -795,8 +795,8 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
>  	struct folio *folio = page_folio(page);
>  	int ret;
>  	vm_fault_t csa_ret;
> -	spinlock_t *ptl;
> -	pgtable_t pgtable;
> +	spinlock_t *ptl = NULL;
> +	pgtable_t pgtable = NULL;
>  	pmd_t entry;
>  	bool flush = false;
>  	unsigned long i;
> @@ -818,14 +818,14 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
>  		count_vm_event(THP_FAULT_FALLBACK);
>  		count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
>  		ret = -ENOMEM;
> -		goto abort;
> +		goto error;
>  	}
>
>  	__folio_mark_uptodate(folio);
>
>  	pgtable = pte_alloc_one(vma->vm_mm);
>  	if (unlikely(!pgtable))
> -		goto abort;
> +		goto error;
>
>  	if (folio_is_device_private(folio)) {
>  		swp_entry_t swp_entry;
> @@ -840,7 +840,7 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
>  	} else {
>  		if (folio_is_zone_device(folio) &&
>  		    !folio_is_device_coherent(folio)) {
> -			goto abort;
> +			goto error;
>  		}
>  		entry = folio_mk_pmd(folio, vma->vm_page_prot);
>  		if (vma->vm_flags & VM_WRITE)
> @@ -850,21 +850,21 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
>  	ptl = pmd_lock(vma->vm_mm, pmdp);
>  	csa_ret = check_stable_address_space(vma->vm_mm);
>  	if (csa_ret)
> -		goto unlock_abort;
> +		goto error;
>
>  	/*
>  	 * Check for userfaultfd but do not deliver the fault. Instead,
>  	 * just back off.
>  	 */
>  	if (userfaultfd_missing(vma))
> -		goto unlock_abort;
> +		goto error;
>
>  	if (!pmd_none(*pmdp)) {
>  		if (!is_huge_zero_pmd(*pmdp))
> -			goto unlock_abort;
> +			goto error;
>  		flush = true;
>  	} else if (!pmd_none(*pmdp))
> -		goto unlock_abort;
> +		goto error;
>
>  	add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
>  	folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
> @@ -891,9 +891,11 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
>
>  	return 0;
>
> -unlock_abort:
> -	spin_unlock(ptl);
> -abort:
> +error:
> +	if (ptl)
> +		spin_unlock(ptl);
> +	if (pgtable)
> +		pte_free(vma->vm_mm, pgtable);
>  	for (i = 0; i < HPAGE_PMD_NR; i++)
>  		src[i] &= ~MIGRATE_PFN_MIGRATE;
>  	return 0;

Both look good to me. Feel free to add my

Reviewed-by: Huang Ying <ying.huang@linux.alibaba.com>

in future versions.

---

Best Regards,
Huang, Ying
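
[Editor's note: the single-error-label idiom in David's alternative above (initialize every resource to NULL, then let one `error:` label release only what was actually acquired) can be sketched as a small user-space C analogue. Everything here is a hypothetical stand-in, not kernel API: `do_insert()`, `frees`, and `unlocks` only model the acquire/release ordering shown in the diff.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

static int frees;    /* counts pte_free()-style releases */
static int unlocks;  /* counts spin_unlock()-style releases */

/* Hypothetical analogue of migrate_vma_insert_huge_pmd_page()'s
 * cleanup structure with David's consolidated error label. */
static int do_insert(bool fail_after_lock)
{
	void *pgtable = NULL;  /* mirrors: pgtable_t pgtable = NULL; */
	int lock_word = 0;
	int *ptl = NULL;       /* mirrors: spinlock_t *ptl = NULL;  */

	pgtable = malloc(64);  /* stands in for pte_alloc_one()     */
	if (!pgtable)
		goto error;

	ptl = &lock_word;      /* stands in for pmd_lock()          */

	if (fail_after_lock)   /* models any post-lock abort check  */
		goto error;

	/* success path: drop the lock, release the table */
	unlocks++;
	free(pgtable);
	frees++;
	return 0;

error:
	if (ptl)               /* unlock only if the lock was taken */
		unlocks++;
	if (pgtable) {         /* free only if allocation succeeded */
		free(pgtable);
		frees++;
	}
	return -1;
}
```

Because both resources start NULL and the label tests each one, every abort site can jump to the same `error:` regardless of how far acquisition got, which is what removes the need for the separate `unlock_abort`/`free_abort`/`abort` labels.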