From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sasha Levin
To: stable@vger.kernel.org
Cc: Alistair Popple, Zenghui Yu, Balbir Singh, David Hildenbrand,
	Jason Gunthorpe, Leon Romanovsky, Liam Howlett,
	"Lorenzo Stoakes (Oracle)", Michal Hocko, Mike Rapoport,
	Suren Baghdasaryan, Matthew Brost, Andrew Morton, Sasha Levin
Subject: [PATCH 6.12.y] lib: test_hmm: evict device pages on file close to avoid use-after-free
Date: Tue, 28 Apr 2026 11:23:54 -0400
Message-ID: <20260428152354.3033526-1-sashal@kernel.org>
X-Mailer: git-send-email 2.53.0
In-Reply-To: <2026042708-kilobyte-oblivious-2d9d@gregkh>
References: <2026042708-kilobyte-oblivious-2d9d@gregkh>
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Alistair Popple

[ Upstream commit 744dd97752ef1076a8d8672bb0d8aa2c7abc1144 ]

Patch series "Minor hmm_test fixes and cleanups".

Two bugfixes and a cleanup for the HMM kernel selftests. These were
mostly reported by Zenghui Yu, with special thanks to Lorenzo for
analysing and pointing out the problems.

This patch (of 3):

When dmirror_fops_release() is called it frees the dmirror struct but
doesn't migrate device private pages back to system memory first. This
leaves those pages with a dangling zone_device_data pointer to the freed
dmirror. If a subsequent fault occurs on those pages (e.g. during a
coredump) the dmirror_devmem_fault() callback dereferences the stale
pointer, causing a kernel panic.

This was reported [1] when running mm/ksft_hmm.sh on arm64, where a test
failure triggered SIGABRT and the resulting coredump walked the VMAs,
faulting in the stale device private pages.

Fix this by calling dmirror_device_evict_chunk() for each devmem chunk in
dmirror_fops_release() to migrate all device private pages back to system
memory before freeing the dmirror struct.
The function is moved earlier in the file to avoid a forward declaration.

Link: https://lore.kernel.org/20260331063445.3551404-1-apopple@nvidia.com
Link: https://lore.kernel.org/20260331063445.3551404-2-apopple@nvidia.com
Fixes: b2ef9f5a5cb3 ("mm/hmm/test: add selftest driver for HMM")
Signed-off-by: Alistair Popple
Reported-by: Zenghui Yu
Closes: https://lore.kernel.org/linux-mm/8bd0396a-8997-4d2e-a13f-5aac033083d7@linux.dev/
Reviewed-by: Balbir Singh
Tested-by: Zenghui Yu
Cc: David Hildenbrand
Cc: Jason Gunthorpe
Cc: Leon Romanovsky
Cc: Liam Howlett
Cc: Lorenzo Stoakes (Oracle)
Cc: Michal Hocko
Cc: Mike Rapoport
Cc: Suren Baghdasaryan
Cc: Zenghui Yu
Cc: Matthew Brost
Cc:
Signed-off-by: Andrew Morton
[ kept the existing simpler `dmirror_device_evict_chunk()` body instead
  of the upstream compound-folio version ]
Signed-off-by: Sasha Levin
---
 lib/test_hmm.c | 86 ++++++++++++++++++++++++++++----------------------
 1 file changed, 49 insertions(+), 37 deletions(-)

diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 056f2e411d7b4..850d23469ef78 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -183,11 +183,60 @@ static int dmirror_fops_open(struct inode *inode, struct file *filp)
 	return 0;
 }
 
+static void dmirror_device_evict_chunk(struct dmirror_chunk *chunk)
+{
+	unsigned long start_pfn = chunk->pagemap.range.start >> PAGE_SHIFT;
+	unsigned long end_pfn = chunk->pagemap.range.end >> PAGE_SHIFT;
+	unsigned long npages = end_pfn - start_pfn + 1;
+	unsigned long i;
+	unsigned long *src_pfns;
+	unsigned long *dst_pfns;
+
+	src_pfns = kvcalloc(npages, sizeof(*src_pfns), GFP_KERNEL | __GFP_NOFAIL);
+	dst_pfns = kvcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL | __GFP_NOFAIL);
+
+	migrate_device_range(src_pfns, start_pfn, npages);
+	for (i = 0; i < npages; i++) {
+		struct page *dpage, *spage;
+
+		spage = migrate_pfn_to_page(src_pfns[i]);
+		if (!spage || !(src_pfns[i] & MIGRATE_PFN_MIGRATE))
+			continue;
+
+		if (WARN_ON(!is_device_private_page(spage) &&
+			    !is_device_coherent_page(spage)))
+			continue;
+		spage = BACKING_PAGE(spage);
+		dpage = alloc_page(GFP_HIGHUSER_MOVABLE | __GFP_NOFAIL);
+		lock_page(dpage);
+		copy_highpage(dpage, spage);
+		dst_pfns[i] = migrate_pfn(page_to_pfn(dpage));
+		if (src_pfns[i] & MIGRATE_PFN_WRITE)
+			dst_pfns[i] |= MIGRATE_PFN_WRITE;
+	}
+	migrate_device_pages(src_pfns, dst_pfns, npages);
+	migrate_device_finalize(src_pfns, dst_pfns, npages);
+	kvfree(src_pfns);
+	kvfree(dst_pfns);
+}
+
 static int dmirror_fops_release(struct inode *inode, struct file *filp)
 {
 	struct dmirror *dmirror = filp->private_data;
+	struct dmirror_device *mdevice = dmirror->mdevice;
+	int i;
 
 	mmu_interval_notifier_remove(&dmirror->notifier);
+
+	if (mdevice->devmem_chunks) {
+		for (i = 0; i < mdevice->devmem_count; i++) {
+			struct dmirror_chunk *devmem =
+				mdevice->devmem_chunks[i];
+
+			dmirror_device_evict_chunk(devmem);
+		}
+	}
+
 	xa_destroy(&dmirror->pt);
 	kfree(dmirror);
 	return 0;
@@ -1214,43 +1263,6 @@ static int dmirror_snapshot(struct dmirror *dmirror,
 	return ret;
 }
 
-static void dmirror_device_evict_chunk(struct dmirror_chunk *chunk)
-{
-	unsigned long start_pfn = chunk->pagemap.range.start >> PAGE_SHIFT;
-	unsigned long end_pfn = chunk->pagemap.range.end >> PAGE_SHIFT;
-	unsigned long npages = end_pfn - start_pfn + 1;
-	unsigned long i;
-	unsigned long *src_pfns;
-	unsigned long *dst_pfns;
-
-	src_pfns = kvcalloc(npages, sizeof(*src_pfns), GFP_KERNEL | __GFP_NOFAIL);
-	dst_pfns = kvcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL | __GFP_NOFAIL);
-
-	migrate_device_range(src_pfns, start_pfn, npages);
-	for (i = 0; i < npages; i++) {
-		struct page *dpage, *spage;
-
-		spage = migrate_pfn_to_page(src_pfns[i]);
-		if (!spage || !(src_pfns[i] & MIGRATE_PFN_MIGRATE))
-			continue;
-
-		if (WARN_ON(!is_device_private_page(spage) &&
-			    !is_device_coherent_page(spage)))
-			continue;
-		spage = BACKING_PAGE(spage);
-		dpage = alloc_page(GFP_HIGHUSER_MOVABLE | __GFP_NOFAIL);
-		lock_page(dpage);
-		copy_highpage(dpage, spage);
-		dst_pfns[i] = migrate_pfn(page_to_pfn(dpage));
-		if (src_pfns[i] & MIGRATE_PFN_WRITE)
-			dst_pfns[i] |= MIGRATE_PFN_WRITE;
-	}
-	migrate_device_pages(src_pfns, dst_pfns, npages);
-	migrate_device_finalize(src_pfns, dst_pfns, npages);
-	kvfree(src_pfns);
-	kvfree(dst_pfns);
-}
-
 /* Removes free pages from the free list so they can't be re-allocated */
 static void dmirror_remove_free_pages(struct dmirror_chunk *devmem)
 {
-- 
2.53.0