From mboxrd@z Thu Jan  1 00:00:00 1970
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, Gavin Guo,
 David Hildenbrand, Hugh Dickins, Zi Yan, Gavin Shan, Florent Revest,
 "Matthew Wilcox (Oracle)", Miaohe Lin, Andrew Morton
Subject: [PATCH 5.10 346/355] mm/huge_memory: fix dereferencing invalid pmd migration entry
Date: Mon, 23 Jun 2025 15:09:07 +0200
Message-ID: <20250623130637.130239273@linuxfoundation.org>
X-Mailer: git-send-email 2.50.0
In-Reply-To: <20250623130626.716971725@linuxfoundation.org>
References: <20250623130626.716971725@linuxfoundation.org>
User-Agent: quilt/0.68
X-stable: review
X-Patchwork-Hint: ignore
Precedence: bulk
X-Mailing-List: patches@lists.linux.dev
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

5.10-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Gavin Guo

commit be6e843fc51a584672dfd9c4a6a24c8cb81d5fb7 upstream.

When migrating a THP, concurrent access to the PMD migration entry
during a deferred split scan can lead to an invalid address access, as
illustrated below.  To prevent this invalid access, it is necessary to
check the PMD migration entry and return early.  In this context, there
is no need to use pmd_to_swp_entry and pfn_swap_entry_to_page to verify
the equality of the target folio.  Since the PMD migration entry is
locked, it cannot serve as the target.
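[ Illustration, not part of the patch: a minimal sketch assuming only
  the in-tree helpers is_pmd_migration_entry() and pmd_page(); the
  wrapper name below is hypothetical.  It restates the check that the
  one-line fix in the diff adds, and why the migration-entry test must
  come first. ]

#include <linux/swapops.h>	/* is_pmd_migration_entry() */
#include <linux/mm.h>		/* pmd_page(), struct page */

/*
 * Hypothetical helper, for illustration only.  A PMD holding a
 * migration entry is not a present mapping: its bits encode a swap
 * entry, not a pfn.  Feeding those bits to pmd_page() fabricates a
 * struct page pointer into the vmemmap, which is how the oops below
 * ends up faulting on ffffea60001db008.
 */
static bool pmd_maps_expected_page(pmd_t pmdval, struct page *page)
{
	if (is_pmd_migration_entry(pmdval))
		return false;		/* bail out before pmd_page() */
	return pmd_page(pmdval) == page;
}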
Mailing list discussion and explanation from Hugh Dickins: "An anon_vma
lookup points to a location which may contain the folio of interest,
but might instead contain another folio: and weeding out those other
folios is precisely what the "folio != pmd_folio(*pmd)" check (and the
"risk of replacing the wrong folio" comment a few lines above it) is
for."

BUG: unable to handle page fault for address: ffffea60001db008
CPU: 0 UID: 0 PID: 2199114 Comm: tee Not tainted 6.14.0+ #4 NONE
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
RIP: 0010:split_huge_pmd_locked+0x3b5/0x2b60
Call Trace:
 try_to_migrate_one+0x28c/0x3730
 rmap_walk_anon+0x4f6/0x770
 unmap_folio+0x196/0x1f0
 split_huge_page_to_list_to_order+0x9f6/0x1560
 deferred_split_scan+0xac5/0x12a0
 shrinker_debugfs_scan_write+0x376/0x470
 full_proxy_write+0x15c/0x220
 vfs_write+0x2fc/0xcb0
 ksys_write+0x146/0x250
 do_syscall_64+0x6a/0x120
 entry_SYSCALL_64_after_hwframe+0x76/0x7e

The bug was found by syzkaller on an internal kernel, then confirmed on
upstream.

Link: https://lkml.kernel.org/r/20250421113536.3682201-1-gavinguo@igalia.com
Link: https://lore.kernel.org/all/20250414072737.1698513-1-gavinguo@igalia.com/
Link: https://lore.kernel.org/all/20250418085802.2973519-1-gavinguo@igalia.com/
Fixes: 84c3fc4e9c56 ("mm: thp: check pmd migration entry in common path")
Signed-off-by: Gavin Guo
Acked-by: David Hildenbrand
Acked-by: Hugh Dickins
Acked-by: Zi Yan
Reviewed-by: Gavin Shan
Cc: Florent Revest
Cc: Matthew Wilcox (Oracle)
Cc: Miaohe Lin
Cc:
Signed-off-by: Andrew Morton
[gavin: backport the migration checking logic to __split_huge_pmd]
Signed-off-by: Gavin Guo
Signed-off-by: Greg Kroah-Hartman
---
 mm/huge_memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2227,7 +2227,7 @@ void __split_huge_pmd(struct vm_area_str
 	VM_BUG_ON(freeze && !page);
 	if (page) {
 		VM_WARN_ON_ONCE(!PageLocked(page));
-		if (page != pmd_page(*pmd))
+		if (is_pmd_migration_entry(*pmd) || page != pmd_page(*pmd))
 			goto out;
 	}
 
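[ Note on the design choice: C's || evaluates left to right and
  short-circuits, so with is_pmd_migration_entry(*pmd) as the first
  operand, pmd_page(*pmd) is never evaluated while the PMD holds a
  migration entry; both the migration-entry case and the page-mismatch
  case then leave through the same existing "goto out" early-return
  path.  Upstream, the fix lands in split_huge_pmd_locked() (the
  function in the RIP line above); per the bracketed backport note,
  5.10 has no such function, so the equivalent check is applied in
  __split_huge_pmd() instead. ]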