From: Sasha Levin
To: stable@vger.kernel.org
Cc: Jinjiang Tu, "David Hildenbrand (Arm)", "Lorenzo Stoakes (Oracle)",
	Barry Song, Kefeng Wang, Liam Howlett, Michal Hocko, Mike Rapoport,
	Nanyong Sun, Ryan Roberts, Suren Baghdasaryan, Vlastimil Babka,
	Andrew Morton, Sasha Levin
Subject: [PATCH 6.12.y] mm/huge_memory: fix folio isn't locked in softleaf_to_folio()
Date: Tue, 31 Mar 2026 07:39:06 -0400
Message-ID: <20260331113906.2080339-1-sashal@kernel.org>
X-Mailer: git-send-email 2.53.0
In-Reply-To: <2026033036-upstate-manifesto-09c1@gregkh>
References: <2026033036-upstate-manifesto-09c1@gregkh>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Jinjiang Tu

[ Upstream commit 4c5e7f0fcd592801c9cc18f29f80fbee84eb8669 ]

On an arm64 server, we found that the folio obtained from a migration
entry is not locked in softleaf_to_folio(). The issue triggers when an
mTHP split races with zap_nonpresent_ptes(), and the root cause is a
missing memory barrier in softleaf_to_folio(). The race is as follows:

CPU0                                      CPU1
deferred_split_scan()                     zap_nonpresent_ptes()
  lock folio
  split_folio()
    unmap_folio()
      change ptes to migration entries
    __split_folio_to_order()                softleaf_to_folio()
      set flags (including PG_locked)
          for tail pages                      folio = pfn_folio(softleaf_to_pfn(entry))
      smp_wmb()                               VM_WARN_ON_ONCE(!folio_test_locked(folio))
      prep_compound_page() for tail pages

In __split_folio_to_order(), smp_wmb() guarantees that the page flags of
the tail pages are visible before a tail page becomes non-compound. That
smp_wmb() must be paired with an smp_rmb() in softleaf_to_folio(), which
is missing. As a result, if zap_nonpresent_ptes() accesses a migration
entry that stores a tail pfn, softleaf_to_folio() may observe the
updated compound_head of the tail page before the updated page->flags.
This issue triggers the VM_WARN_ON_ONCE() in pfn_swap_entry_folio()
because the race between folio split and zap_nonpresent_ptes() leads to
a folio incorrectly undergoing modification without the folio lock being
held. This was a BUG_ON() before commit 93976a20345b ("mm: eliminate
further swapops predicates"), which was merged in v6.19-rc1.

To fix it, add the missing smp_rmb() when the softleaf entry is a
migration entry in softleaf_to_folio() and softleaf_to_page().

[tujinjiang@huawei.com: update function name and comments]
Link: https://lkml.kernel.org/r/20260321075214.3305564-1-tujinjiang@huawei.com
Link: https://lkml.kernel.org/r/20260319012541.4158561-1-tujinjiang@huawei.com
Fixes: e9b61f19858a ("thp: reintroduce split_huge_page()")
Signed-off-by: Jinjiang Tu
Acked-by: David Hildenbrand (Arm)
Reviewed-by: Lorenzo Stoakes (Oracle)
Cc: Barry Song
Cc: Kefeng Wang
Cc: Liam Howlett
Cc: Michal Hocko
Cc: Mike Rapoport
Cc: Nanyong Sun
Cc: Ryan Roberts
Cc: Suren Baghdasaryan
Cc: Vlastimil Babka
Cc:
Signed-off-by: Andrew Morton
[ applied fix to swapops.h using old pfn_swap_entry/swp_entry_t naming ]
Signed-off-by: Sasha Levin
---
 include/linux/swapops.h | 27 +++++++++++++++++++--------
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index cb468e418ea11..d988ce9516510 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -484,15 +484,29 @@ static inline int pte_none_mostly(pte_t pte)
 	return pte_none(pte) || is_pte_marker(pte);
 }
 
-static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
+static inline void swap_entry_migration_sync(swp_entry_t entry,
+		struct folio *folio)
 {
-	struct page *p = pfn_to_page(swp_offset_pfn(entry));
+	/*
+	 * Ensure we do not race with split, which might alter tail pages into new
+	 * folios and thus result in observing an unlocked folio.
+	 * This matches the write barrier in __split_folio_to_order().
+	 */
+	smp_rmb();
 
 	/*
 	 * Any use of migration entries may only occur while the
 	 * corresponding page is locked
 	 */
-	BUG_ON(is_migration_entry(entry) && !PageLocked(p));
+	BUG_ON(!folio_test_locked(folio));
+}
+
+static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
+{
+	struct page *p = pfn_to_page(swp_offset_pfn(entry));
+
+	if (is_migration_entry(entry))
+		swap_entry_migration_sync(entry, page_folio(p));
 
 	return p;
 }
@@ -501,11 +515,8 @@ static inline struct folio *pfn_swap_entry_folio(swp_entry_t entry)
 {
 	struct folio *folio = pfn_folio(swp_offset_pfn(entry));
 
-	/*
-	 * Any use of migration entries may only occur while the
-	 * corresponding folio is locked
-	 */
-	BUG_ON(is_migration_entry(entry) && !folio_test_locked(folio));
+	if (is_migration_entry(entry))
+		swap_entry_migration_sync(entry, folio);
 
 	return folio;
 }
-- 
2.53.0