From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sasha Levin <sashal@kernel.org>
To: stable@vger.kernel.org
Cc: "David Hildenbrand (Arm)", "Mike Rapoport (Microsoft)",
	"Lorenzo Stoakes (Oracle)", Liam Howlett, Michal Hocko, Peter Xu,
	Suren Baghdasaryan, Vlastimil Babka, Andrew Morton, Sasha Levin
Subject: [PATCH 6.18.y 2/2] mm/memory: fix PMD/PUD checks in follow_pfnmap_start()
Date: Wed, 1 Apr 2026 12:45:26 -0400
Message-ID: <20260401164526.141913-2-sashal@kernel.org>
X-Mailer: git-send-email 2.53.0
In-Reply-To: <20260401164526.141913-1-sashal@kernel.org>
References: <2026033028-blitz-spill-525b@gregkh>
 <20260401164526.141913-1-sashal@kernel.org>
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: "David Hildenbrand (Arm)"

[ Upstream commit ffef67b93aa352b34e6aeba3d52c19a63885409a ]

follow_pfnmap_start() suffers from two problems:

(1) We are not re-fetching the pmd/pud after taking the PTL

    Consequently, we are not properly stabilizing what the lock actually
    protects. If there is concurrent zapping, we would indicate to the
    caller that we found an entry even though that entry might already
    have been invalidated, or might contain a different PFN, by the time
    the lock was taken.

    Properly use pmdp_get()/pudp_get() after taking the lock.

(2) pmd_leaf()/pud_leaf() are not well defined on non-present entries

    pmd_leaf()/pud_leaf() could wrongly trigger on non-present entries:
    there is no real guarantee that they return something reasonable in
    that case. Most architectures indeed either perform a present check
    or make it work through careful use of flags. loongarch, however,
    checks the _PAGE_HUGE flag in pmd_leaf() and always sets that flag
    in __swp_entry_to_pmd(): whereas pmd_trans_huge() explicitly checks
    pmd_present(), pmd_leaf() does not.
Let's check pmd_present()/pud_present() before assuming "this is a
present PMD/PUD leaf" when spotting pmd_leaf()/pud_leaf(), like other
page table handling code that traverses user page tables does.

Given that non-present PMD entries are likely rare in VM_IO|VM_PFNMAP
mappings, (1) is likely more relevant than (2). It is questionable how
often (1) would actually trigger, but let's CC stable to be sure.

This was found by code inspection.

Link: https://lkml.kernel.org/r/20260323-follow_pfnmap_fix-v1-1-5b0ec10872b3@kernel.org
Fixes: 6da8e9634bb7 ("mm: new follow_pfnmap API")
Signed-off-by: David Hildenbrand (Arm)
Acked-by: Mike Rapoport (Microsoft)
Reviewed-by: Lorenzo Stoakes (Oracle)
Cc: Liam Howlett
Cc: Michal Hocko
Cc: Peter Xu
Cc: Suren Baghdasaryan
Cc: Vlastimil Babka
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton
Signed-off-by: Sasha Levin
---
 mm/memory.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index a217d9bacc0cf..94bf107a47caf 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6697,11 +6697,16 @@ int follow_pfnmap_start(struct follow_pfnmap_args *args)
 	pudp = pud_offset(p4dp, address);
 	pud = pudp_get(pudp);
-	if (pud_none(pud))
+	if (!pud_present(pud))
 		goto out;
 	if (pud_leaf(pud)) {
 		lock = pud_lock(mm, pudp);
-		if (!unlikely(pud_leaf(pud))) {
+		pud = pudp_get(pudp);
+
+		if (unlikely(!pud_present(pud))) {
+			spin_unlock(lock);
+			goto out;
+		} else if (unlikely(!pud_leaf(pud))) {
 			spin_unlock(lock);
 			goto retry;
 		}
@@ -6713,9 +6718,16 @@ int follow_pfnmap_start(struct follow_pfnmap_args *args)
 	pmdp = pmd_offset(pudp, address);
 	pmd = pmdp_get_lockless(pmdp);
+	if (!pmd_present(pmd))
+		goto out;
 	if (pmd_leaf(pmd)) {
 		lock = pmd_lock(mm, pmdp);
-		if (!unlikely(pmd_leaf(pmd))) {
+		pmd = pmdp_get(pmdp);
+
+		if (unlikely(!pmd_present(pmd))) {
+			spin_unlock(lock);
+			goto out;
+		} else if (unlikely(!pmd_leaf(pmd))) {
 			spin_unlock(lock);
 			goto retry;
 		}
-- 
2.53.0