From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc:
	Greg Kroah-Hartman,
	patches@lists.linux.dev,
	"David Hildenbrand (Arm)",
	"Mike Rapoport (Microsoft)",
	"Lorenzo Stoakes (Oracle)",
	Liam Howlett,
	Michal Hocko,
	Peter Xu,
	Suren Baghdasaryan,
	Vlastimil Babka,
	Andrew Morton
Subject: [PATCH 6.19 277/342] mm/memory: fix PMD/PUD checks in follow_pfnmap_start()
Date: Tue, 31 Mar 2026 18:21:50 +0200
Message-ID: <20260331161809.131618589@linuxfoundation.org>
X-Mailer: git-send-email 2.53.0
In-Reply-To: <20260331161758.909578033@linuxfoundation.org>
References: <20260331161758.909578033@linuxfoundation.org>
User-Agent: quilt/0.69
X-stable: review
X-Patchwork-Hint: ignore
X-Mailing-List: stable@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

6.19-stable review patch.  If anyone has any objections, please let me know.

------------------

From: David Hildenbrand (Arm)

commit ffef67b93aa352b34e6aeba3d52c19a63885409a upstream.

follow_pfnmap_start() suffers from two problems:

(1) We are not re-fetching the pmd/pud after taking the PTL

Therefore, we are not properly stabilizing what the lock actually
protects. If there is concurrent zapping, we would indicate to the
caller that we found an entry even though that entry might already have
been invalidated, or might contain a different PFN, after taking the
lock.

Properly use pmdp_get() / pudp_get() after taking the lock.

(2) pmd_leaf() / pud_leaf() are not well defined on non-present entries

pmd_leaf()/pud_leaf() could wrongly trigger on non-present entries:
there is no real guarantee that they return something reasonable for
such entries. Most architectures indeed either perform a present check
or make it work through careful use of flags. However, loongarch, for
example, checks the _PAGE_HUGE flag in pmd_leaf() and always sets the
_PAGE_HUGE flag in __swp_entry_to_pmd(). While pmd_trans_huge()
explicitly checks pmd_present(), pmd_leaf() does not.
Let's check pmd_present()/pud_present() before assuming "there is a
present PMD leaf" when spotting pmd_leaf()/pud_leaf(), like other page
table handling code that traverses user page tables does.

Given that non-present PMD entries are likely rare in VM_IO|VM_PFNMAP
mappings, (1) is likely more relevant than (2). It is questionable how
often (1) would actually trigger, but let's CC stable to be sure.

This was found by code inspection.

Link: https://lkml.kernel.org/r/20260323-follow_pfnmap_fix-v1-1-5b0ec10872b3@kernel.org
Fixes: 6da8e9634bb7 ("mm: new follow_pfnmap API")
Signed-off-by: David Hildenbrand (Arm)
Acked-by: Mike Rapoport (Microsoft)
Reviewed-by: Lorenzo Stoakes (Oracle)
Cc: Liam Howlett
Cc: Michal Hocko
Cc: Peter Xu
Cc: Suren Baghdasaryan
Cc: Vlastimil Babka
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Greg Kroah-Hartman
---
 mm/memory.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6775,11 +6775,16 @@ retry:
 	pudp = pud_offset(p4dp, address);
 	pud = pudp_get(pudp);
-	if (pud_none(pud))
+	if (!pud_present(pud))
 		goto out;
 
 	if (pud_leaf(pud)) {
 		lock = pud_lock(mm, pudp);
-		if (!unlikely(pud_leaf(pud))) {
+		pud = pudp_get(pudp);
+
+		if (unlikely(!pud_present(pud))) {
+			spin_unlock(lock);
+			goto out;
+		} else if (unlikely(!pud_leaf(pud))) {
 			spin_unlock(lock);
 			goto retry;
 		}
@@ -6791,9 +6796,16 @@
 
 	pmdp = pmd_offset(pudp, address);
 	pmd = pmdp_get_lockless(pmdp);
+	if (!pmd_present(pmd))
+		goto out;
 	if (pmd_leaf(pmd)) {
 		lock = pmd_lock(mm, pmdp);
-		if (!unlikely(pmd_leaf(pmd))) {
+		pmd = pmdp_get(pmdp);
+
+		if (unlikely(!pmd_present(pmd))) {
+			spin_unlock(lock);
+			goto out;
+		} else if (unlikely(!pmd_leaf(pmd))) {
 			spin_unlock(lock);
 			goto retry;
 		}