Date: Thu, 30 Apr 2026 06:19:56 -0700
To: mm-commits@vger.kernel.org, vbabka@kernel.org, surenb@google.com,
 stable@vger.kernel.org, rppt@kernel.org, osalvador@suse.de,
 mhocko@suse.com, maobibo@loongson.cn, ljs@kernel.org, liam@infradead.org,
 lance.yang@linux.dev, hughd@google.com, david@kernel.org,
 akpm@linux-foundation.org
From: Andrew Morton
Subject: + mm-fix-__vm_normal_page-to-handle-missing-support-for-pmd_special-pud_special.patch added to mm-hotfixes-unstable branch
Message-Id: <20260430131957.94351C2BCB3@smtp.kernel.org>


The patch titled
     Subject: mm: fix __vm_normal_page() to handle missing support for pmd_special()/pud_special()
has been added to the -mm mm-hotfixes-unstable branch.  Its filename is
     mm-fix-__vm_normal_page-to-handle-missing-support-for-pmd_special-pud_special.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-fix-__vm_normal_page-to-handle-missing-support-for-pmd_special-pud_special.patch

This patch will later appear in the mm-hotfixes-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via various branches at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there most days

------------------------------------------------------
From: "David Hildenbrand (Arm)"
Subject: mm: fix __vm_normal_page() to handle missing support for pmd_special()/pud_special()
Date: Thu, 30 Apr 2026 13:31:22 +0200

On x86 32-bit with THP enabled, zap_huge_pmd() is seen to generate a
"WARNING: mm/memory.c:735 at __vm_normal_page+0x6a/0x7d", from the
	VM_WARN_ON_ONCE(is_zero_pfn(pfn) || is_huge_zero_pfn(pfn));
followed by "BUG: Bad rss-counter state"s, then later "BUG: Bad page
state"s when reclaim gets to call shrink_huge_zero_folio_scan().
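For illustration, a minimal userspace sketch of the kind of sequence that
installs and then zaps a huge zero pmd (this is not the reporters'
reproducer; it assumes an x86-32 kernel with CONFIG_TRANSPARENT_HUGEPAGE=y
and the huge zero folio enabled via transparent_hugepage/use_zero_page):

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <sys/mman.h>

	#define PMD_SIZE (2UL << 20)	/* 2 MiB: x86 PMD huge page size */

	int main(void)
	{
		/* Over-allocate so a PMD-aligned address exists inside. */
		char *map = mmap(NULL, 2 * PMD_SIZE, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (map == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		char *aligned = (char *)(((unsigned long)map + PMD_SIZE - 1) &
					 ~(PMD_SIZE - 1));

		/* Ask for THP on the aligned region. */
		madvise(aligned, PMD_SIZE, MADV_HUGEPAGE);

		/* A read-only fault can map the shared huge zero folio
		 * through a pmd marked with pmd_mkspecial(). */
		volatile char c = aligned[0];
		(void)c;

		/* munmap() zaps the pmd; on an affected kernel this path
		 * hits the VM_WARN_ON_ONCE() in __vm_normal_page(). */
		munmap(map, 2 * PMD_SIZE);
		return 0;
	}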
It's as if the _PAGE_SPECIAL bit never got set in the huge_zero pmd: and
indeed, whereas pte_special() and pte_mkspecial() are subject to a
dedicated CONFIG_ARCH_HAS_PTE_SPECIAL, pmd_special() and pmd_mkspecial()
are subject to CONFIG_ARCH_SUPPORTS_PMD_PFNMAP, which is never enabled on
any 32-bit architecture.
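Concretely, without CONFIG_ARCH_SUPPORTS_PMD_PFNMAP the generic fallbacks
reduce pmd_mkspecial() to a no-op and pmd_special() to constant false;
roughly (a simplified sketch, not the verbatim kernel definitions):

	/* Simplified sketch of the generic fallbacks: with the option
	 * unset, pmd_mkspecial() cannot set _PAGE_SPECIAL and
	 * pmd_special() can never observe it -- so the huge zero pmd is
	 * never "special". */
	#ifndef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
	static inline bool pmd_special(pmd_t pmd)
	{
		return false;
	}

	static inline pmd_t pmd_mkspecial(pmd_t pmd)
	{
		return pmd;
	}
	#endif /* CONFIG_ARCH_SUPPORTS_PMD_PFNMAP */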
While the problem was exposed through commit d80a9cb1a64a ("mm/huge_memory:
add and use normal_or_softleaf_folio_pmd()"), it was an oversight in commit
af38538801c6 ("mm/memory: factor out common code from vm_normal_page_*()")
and would result in other problems:

* huge zero folio accounted in smaps, pagemap (PAGE_IS_FILE) and numamaps
  as file-backed THP

* folio_walk_start() returning the folio even without FW_ZEROPAGE set.
  Callers seem to tolerate that, though.

... and triggering the VM_WARN_ON_ONCE(), although never reported so far.

To fix it, teach vm_normal_page_pmd()/vm_normal_page_pud() to consider
whether pmd_special()/pud_special() is actually implemented.

Link: https://lore.kernel.org/20260430-pmd_special-v1-1-dbcbcfd72c20@kernel.org
Fixes: af38538801c6 ("mm/memory: factor out common code from vm_normal_page_*()")
Signed-off-by: David Hildenbrand (Arm)
Reported-by: Hugh Dickins
Closes: https://lore.kernel.org/r/74a75b59-2e13-3985-ee99-d5521f39df2a@google.com
Reported-by: Bibo Mao
Closes: https://lore.kernel.org/r/20260430041121.2839350-1-maobibo@loongson.cn
Debugged-by: Hugh Dickins
Reviewed-by: Lance Yang
Tested-by: Bibo Mao
Cc: Liam R. Howlett
Cc: Lorenzo Stoakes
Cc: Michal Hocko
Cc: Mike Rapoport
Cc: Oscar Salvador
Cc: Suren Baghdasaryan
Cc: Vlastimil Babka
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton
---

 mm/memory.c |   22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

--- a/mm/memory.c~mm-fix-__vm_normal_page-to-handle-missing-support-for-pmd_special-pud_special
+++ a/mm/memory.c
@@ -612,6 +612,21 @@ static void print_bad_page_map(struct vm
 	dump_stack();
 	add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
 }
+
+static inline bool pgtable_level_has_pxx_special(enum pgtable_level level)
+{
+	switch (level) {
+	case PGTABLE_LEVEL_PTE:
+		return IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL);
+	case PGTABLE_LEVEL_PMD:
+		return IS_ENABLED(CONFIG_ARCH_SUPPORTS_PMD_PFNMAP);
+	case PGTABLE_LEVEL_PUD:
+		return IS_ENABLED(CONFIG_ARCH_SUPPORTS_PUD_PFNMAP);
+	default:
+		return false;
+	}
+}
+
 #define print_bad_pte(vma, addr, pte, page) \
 	print_bad_page_map(vma, addr, pte_val(pte), page, PGTABLE_LEVEL_PTE)
@@ -684,7 +699,7 @@ static inline struct page *__vm_normal_p
 		unsigned long addr, unsigned long pfn, bool special,
 		unsigned long long entry, enum pgtable_level level)
 {
-	if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL)) {
+	if (pgtable_level_has_pxx_special(level)) {
 		if (unlikely(special)) {
 #ifdef CONFIG_FIND_NORMAL_PAGE
 			if (vma->vm_ops && vma->vm_ops->find_normal_page)
@@ -699,8 +714,9 @@ static inline struct page *__vm_normal_p
 			return NULL;
 		}
 		/*
-		 * With CONFIG_ARCH_HAS_PTE_SPECIAL, any special page table
-		 * mappings (incl. shared zero folios) are marked accordingly.
+		 * With working pte_special()/pmd_special()..., any special page
+		 * table mappings (incl. shared zero folios) are marked
+		 * accordingly.
		 */
	} else {
		if (unlikely(vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))) {
_

Patches currently in -mm which might be from david@kernel.org are

mm-page_alloc-fix-initialization-of-tags-of-the-huge-zero-folio-with-init_on_free.patch
mm-fix-__vm_normal_page-to-handle-missing-support-for-pmd_special-pud_special.patch
sh-use-folio_mapped-instead-of-page_mapped-in-sh4_flush_cache_page.patch
bpf-arena-use-page_ref_count-instead-of-page_mapped-in-arena_free_pages.patch
mm-remove-page_mapped.patch