From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by smtp.subspace.kernel.org (Postfix) with ESMTPS id 7138D26159E;
	Sun, 26 Apr 2026 19:54:55 +0000 (UTC)
Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201
ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1777233295; cv=none;
	b=T1AkHWqhPqNJ9rZzY4WBdAwFrGmbnNk9XflKo/+U6eikqf4JdU1f8E/BMJ9Wnlp0AEcH+6o5zuP3o5TpX9Pt455h9/J1Gikb7EnDcsJqGvKMeyU5XSEm3I82m2o0/1TJZ4brOLEvtpVjRKL/mrQyoPjL1TKEYEx19oG0NHj04GM=
ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116;
	t=1777233295; c=relaxed/simple;
	bh=Aiuhl+Q8MuYw7BXDlccuWSaKSusUiXc0YyLKdzfJf18=;
	h=Date:To:From:Subject:Message-Id;
	b=KXlzkN5SBwSaJR3xEATXLKIfiFKPVl8obp+aA8tFmlZGg8k96GNBbyvtLkjJk5zyW0giUgiz04+Mv1aQu3tPzja1lDnGhZJg6TguUQJ8fwJ2nLQht9G79+EAc80360iUZaKBLHwNJhv7WJMpNcUk8eChTDPybqtibRsRzDrSJkg=
ARC-Authentication-Results: i=1; smtp.subspace.kernel.org;
	dkim=pass (1024-bit key) header.d=linux-foundation.org header.i=@linux-foundation.org header.b=HGAez/X/;
	arc=none smtp.client-ip=10.30.226.201
Authentication-Results: smtp.subspace.kernel.org;
	dkim=pass (1024-bit key) header.d=linux-foundation.org header.i=@linux-foundation.org header.b="HGAez/X/"
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 18C04C2BCAF;
	Sun, 26 Apr 2026 19:54:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org;
	s=korg; t=1777233295;
	bh=Aiuhl+Q8MuYw7BXDlccuWSaKSusUiXc0YyLKdzfJf18=;
	h=Date:To:From:Subject:From;
	b=HGAez/X/KZ9MNbwGs8x0kb57JZ3BVofBaYWaY4LDl40ehjLmtKYMZrI7/5Jf6jR9W
	 4UDxA6eU2LtWRXTWB45pjqwctK9bWt6ArfUa0xReem5FbLqxHcwg/kgAQ3gf5Sfl8m
	 nUN/HXDI798pNlRObN4hdyFDT64zeuS2n/STE8/A=
Date: Sun, 26 Apr 2026 12:54:54 -0700
To: 
	mm-commits@vger.kernel.org, surenb@google.com, stable@vger.kernel.org,
	rppt@kernel.org, osalvador@suse.de, npiggin@gmail.com,
	mpe@ellerman.id.au, mhocko@suse.com, maddy@linux.ibm.com,
	ljs@kernel.org, liam@infradead.org, joao.m.martins@oracle.com,
	david@kernel.org, aneesh.kumar@linux.ibm.com,
	songmuchun@bytedance.com, akpm@linux-foundation.org
From: Andrew Morton
Subject: + mm-sparse-vmemmap-fix-dax-vmemmap-accounting-with-optimization.patch added to mm-new branch
Message-Id: <20260426195455.18C04C2BCAF@smtp.kernel.org>
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org
List-Id: 
List-Subscribe: 
List-Unsubscribe: 

The patch titled
     Subject: mm/sparse-vmemmap: fix DAX vmemmap accounting with optimization
has been added to the -mm mm-new branch.  Its filename is
     mm-sparse-vmemmap-fix-dax-vmemmap-accounting-with-optimization.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-sparse-vmemmap-fix-dax-vmemmap-accounting-with-optimization.patch

This patch will later appear in the mm-new branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Note, mm-new is a provisional staging ground for work-in-progress
patches, and acceptance into mm-new is a notification for others to take
notice and to finish up reviews.  Please do not hesitate to respond to
review feedback and post updated versions to replace or incrementally
fixup patches in mm-new.
The mm-new branch of mm.git is not included in linux-next.

If a few days of testing in mm-new is successful, the patch will be
moved into mm.git's mm-unstable branch, which is included in linux-next.

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next via various branches at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there most days.

------------------------------------------------------
From: Muchun Song
Subject: mm/sparse-vmemmap: fix DAX vmemmap accounting with optimization
Date: Sun, 26 Apr 2026 17:26:38 +0800

When vmemmap optimization is enabled for DAX, the nr_memmap_pages
counter in /proc/vmstat is incorrect.  The current code always accounts
for the full, non-optimized vmemmap size, but vmemmap optimization
reduces the actual number of vmemmap pages by reusing tail pages.  This
causes the system to overcount vmemmap usage, leading to inaccurate page
statistics in /proc/vmstat.

Fix this by introducing section_nr_vmemmap_pages(), which returns the
exact vmemmap page count for a given pfn range based on whether
optimization is in effect.
Link: https://lore.kernel.org/20260426092640.375967-5-songmuchun@bytedance.com
Fixes: 15995a352474 ("mm: report per-page metadata information")
Signed-off-by: Muchun Song
Acked-by: Mike Rapoport (Microsoft)
Acked-by: Oscar Salvador
Cc: "Aneesh Kumar K.V"
Cc: David Hildenbrand (Arm)
Cc: Joao Martins
Cc: Liam Howlett
Cc: Lorenzo Stoakes
Cc: Madhavan Srinivasan
Cc: Michael Ellerman
Cc: Michal Hocko
Cc: Nicholas Piggin
Cc: Suren Baghdasaryan
Cc: 
Signed-off-by: Andrew Morton
---

 mm/sparse-vmemmap.c |   34 ++++++++++++++++++++++++++++++----
 1 file changed, 30 insertions(+), 4 deletions(-)

--- a/mm/sparse-vmemmap.c~mm-sparse-vmemmap-fix-dax-vmemmap-accounting-with-optimization
+++ a/mm/sparse-vmemmap.c
@@ -647,6 +647,31 @@ void offline_mem_sections(unsigned long
 	}
 }
 
+static int __meminit section_nr_vmemmap_pages(unsigned long pfn, unsigned long nr_pages,
+					      struct vmem_altmap *altmap,
+					      struct dev_pagemap *pgmap)
+{
+	const unsigned int order = pgmap ? pgmap->vmemmap_shift : 0;
+	const unsigned long pages_per_compound = 1UL << order;
+
+	VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, PAGES_PER_SUBSECTION));
+
+	if (!vmemmap_can_optimize(altmap, pgmap))
+		return DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE);
+
+	if (order < PFN_SECTION_SHIFT) {
+		VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, pages_per_compound));
+		return VMEMMAP_RESERVE_NR * nr_pages / pages_per_compound;
+	}
+
+	VM_WARN_ON_ONCE(!IS_ALIGNED(pfn | nr_pages, PAGES_PER_SECTION));
+	VM_WARN_ON_ONCE(nr_pages > PAGES_PER_SECTION);
+
+	if (IS_ALIGNED(pfn, pages_per_compound))
+		return VMEMMAP_RESERVE_NR;
+
+	return 0;
+}
+
 static struct page * __meminit populate_section_memmap(unsigned long pfn,
 		unsigned long nr_pages, int nid, struct vmem_altmap *altmap,
 		struct dev_pagemap *pgmap)
@@ -654,7 +679,7 @@ static struct page * __meminit populate_
 	struct page *page = __populate_section_memmap(pfn, nr_pages, nid,
 						      altmap, pgmap);
 
-	memmap_pages_add(DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE));
+	memmap_pages_add(section_nr_vmemmap_pages(pfn, nr_pages, altmap, pgmap));
 	return page;
 }
 
@@ -665,7 +690,7 @@ static void depopulate_section_memmap(un
 	unsigned long start = (unsigned long) pfn_to_page(pfn);
 	unsigned long end = start + nr_pages * sizeof(struct page);
 
-	memmap_pages_add(-1L * (DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE)));
+	memmap_pages_add(-section_nr_vmemmap_pages(pfn, nr_pages, altmap, pgmap));
 	vmemmap_free(start, end, altmap);
 }
 
@@ -673,9 +698,10 @@ static void free_map_bootmem(struct page
 {
 	unsigned long start = (unsigned long)memmap;
 	unsigned long end = (unsigned long)(memmap + PAGES_PER_SECTION);
+	unsigned long pfn = page_to_pfn(memmap);
 
-	memmap_boot_pages_add(-1L * (DIV_ROUND_UP(PAGES_PER_SECTION * sizeof(struct page),
-						  PAGE_SIZE)));
+	memmap_boot_pages_add(-section_nr_vmemmap_pages(pfn, PAGES_PER_SECTION,
+							NULL, NULL));
 	vmemmap_free(start, end, NULL);
 }
_

Patches currently in -mm which might be from songmuchun@bytedance.com are

mm-memory_hotplug-fix-memory-block-reference-leak-on-remove.patch
drivers-base-memory-fix-memory-block-reference-leak-in-poison-accounting.patch
mm-sparse-remove-sparse-buffer-pre-allocation-mechanism.patch
mm-sparse-vmemmap-fix-vmemmap-accounting-underflow.patch
mm-memory_hotplug-fix-incorrect-altmap-passing-in-error-path.patch
mm-sparse-vmemmap-pass-pgmap-argument-to-memory-deactivation-paths.patch
mm-sparse-vmemmap-fix-dax-vmemmap-accounting-with-optimization.patch
mm-mm_init-fix-pageblock-migratetype-for-zone_device-compound-pages.patch
mm-mm_init-fix-uninitialized-struct-pages-for-zone_device.patch