From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sun, 26 Apr 2026 12:54:44 -0700
To: mm-commits@vger.kernel.org,surenb@google.com,stable@vger.kernel.org,rppt@kernel.org,osalvador@suse.de,npiggin@gmail.com,mpe@ellerman.id.au,mhocko@suse.com,maddy@linux.ibm.com,ljs@kernel.org,liam@infradead.org,joao.m.martins@oracle.com,david@kernel.org,aneesh.kumar@linux.ibm.com,songmuchun@bytedance.com,akpm@linux-foundation.org
From: Andrew Morton
Subject: + mm-sparse-vmemmap-fix-vmemmap-accounting-underflow.patch added to mm-new branch
Message-Id: <20260426195444.8F7B5C2BCAF@smtp.kernel.org>
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org

The patch titled
     Subject: mm/sparse-vmemmap: fix vmemmap accounting underflow
has been added to the -mm mm-new branch.  Its filename is
     mm-sparse-vmemmap-fix-vmemmap-accounting-underflow.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-sparse-vmemmap-fix-vmemmap-accounting-underflow.patch

This patch will later appear in the mm-new branch at
     git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Note, mm-new is a provisional staging ground for work-in-progress patches,
and acceptance into mm-new is a notification for others to take notice and
to finish up reviews.  Please do not hesitate to respond to review feedback
and post updated versions to replace or incrementally fixup patches in
mm-new.
The mm-new branch of mm.git is not included in linux-next.

If a few days of testing in mm-new is successful, the patch will be moved
into mm.git's mm-unstable branch, which is included in linux-next.

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next via various branches at
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there most days

------------------------------------------------------
From: Muchun Song
Subject: mm/sparse-vmemmap: fix vmemmap accounting underflow
Date: Sun, 26 Apr 2026 17:26:35 +0800

Patch series "mm: Fix vmemmap optimization accounting and initialization", v7.

The series fixes several bugs in vmemmap optimization, mainly around
incorrect page accounting and memmap initialization in DAX and memory
hotplug paths.  It also fixes pageblock migratetype initialization and
struct page initialization for ZONE_DEVICE compound pages.

The first four patches fix vmemmap accounting issues.  The first patch
fixes an accounting underflow in the section activation failure path.  The
second patch fixes incorrect altmap passing in the error path.  The third
patch passes pgmap through memory deactivation paths so the teardown side
can determine whether vmemmap optimization was in effect.  The fourth
patch uses that information to account the optimized DAX vmemmap size
correctly.

The last two patches fix initialization issues in mm/mm_init.  One makes
sure all pageblocks in ZONE_DEVICE compound pages get their migratetype
initialized.  The other fixes a case where DAX memory hotplug reuses an
unoptimized early-section memmap while compound_nr_pages() still assumes
vmemmap optimization, leaving tail struct pages uninitialized.


This patch (of 6):

In section_activate(), if populate_section_memmap() fails, the error
handling path calls section_deactivate() to roll back the state.  This
causes a vmemmap accounting imbalance.

Since commit c3576889d87b ("mm: fix accounting of memmap pages"), memmap
pages are accounted for only after populate_section_memmap() succeeds.
However, the failure path unconditionally calls section_deactivate(),
which decreases the vmemmap count.  Consequently, a failure in
populate_section_memmap() leads to an accounting underflow, incorrectly
reducing the system's tracked vmemmap usage.

Fix this more thoroughly by moving all accounting calls into the lower
level functions that actually perform the vmemmap allocation and freeing:

 - populate_section_memmap() accounts for newly allocated vmemmap pages

 - depopulate_section_memmap() unaccounts when vmemmap is freed

This ensures proper accounting in all code paths, including error handling
and early section cases.
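As an illustration of the failure mode (not part of the patch), here is a
minimal userspace model of the pre-fix behaviour.  The model_* names, the
vmemmap_pages counter and the 512-page figure are assumptions made for the
example only (512 corresponds to 2 MiB of vmemmap for a 128 MiB section
with 64-byte struct pages, as on x86-64); they are not kernel identifiers.

/*
 * Minimal userspace model of the accounting imbalance described above.
 * Illustrative only: model_populate()/model_deactivate_buggy() and the
 * vmemmap_pages counter are hypothetical stand-ins, not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

static long vmemmap_pages;	/* stand-in for the tracked memmap page count */

/* Post c3576889d87b: the counter is bumped only after a successful populate. */
static bool model_populate(bool fail)
{
	if (fail)
		return false;	/* allocation failed, nothing was accounted */
	vmemmap_pages += 512;	/* ~2 MiB of vmemmap per 128 MiB section */
	return true;
}

/* Pre-fix rollback: subtracts unconditionally, even after a failed populate. */
static void model_deactivate_buggy(void)
{
	vmemmap_pages -= 512;
}

int main(void)
{
	if (!model_populate(true))	/* populate_section_memmap() fails */
		model_deactivate_buggy();	/* rollback still "unaccounts" */
	printf("tracked vmemmap pages: %ld (underflow)\n", vmemmap_pages);
	return 0;
}

The patch below instead pairs the addition with the actual allocation in
populate_section_memmap() and the subtraction with the actual freeing in
depopulate_section_memmap()/free_map_bootmem(), so the counter only moves
when vmemmap memory is really allocated or released.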
Link: https://lore.kernel.org/20260426092640.375967-1-songmuchun@bytedance.com
Link: https://lore.kernel.org/20260426092640.375967-2-songmuchun@bytedance.com
Fixes: c3576889d87b ("mm: fix accounting of memmap pages")
Signed-off-by: Muchun Song
Acked-by: Mike Rapoport (Microsoft)
Acked-by: Oscar Salvador
Acked-by: David Hildenbrand (Arm)
Cc: "Aneesh Kumar K.V"
Cc: Joao Martins
Cc: Liam Howlett
Cc: Lorenzo Stoakes
Cc: Madhavan Srinivasan
Cc: Michael Ellerman
Cc: Michal Hocko
Cc: Nicholas Piggin
Cc: Suren Baghdasaryan
Cc:
Signed-off-by: Andrew Morton
---

 mm/sparse-vmemmap.c |   20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

--- a/mm/sparse-vmemmap.c~mm-sparse-vmemmap-fix-vmemmap-accounting-underflow
+++ a/mm/sparse-vmemmap.c
@@ -651,7 +651,12 @@ static struct page * __meminit populate_
 		unsigned long nr_pages, int nid, struct vmem_altmap *altmap,
 		struct dev_pagemap *pgmap)
 {
-	return __populate_section_memmap(pfn, nr_pages, nid, altmap, pgmap);
+	struct page *page = __populate_section_memmap(pfn, nr_pages, nid, altmap,
+						      pgmap);
+
+	memmap_pages_add(DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE));
+
+	return page;
 }
 
 static void depopulate_section_memmap(unsigned long pfn, unsigned long nr_pages,
@@ -660,13 +665,17 @@ static void depopulate_section_memmap(un
 	unsigned long start = (unsigned long) pfn_to_page(pfn);
 	unsigned long end = start + nr_pages * sizeof(struct page);
 
+	memmap_pages_add(-1L * (DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE)));
 	vmemmap_free(start, end, altmap);
 }
+
 static void free_map_bootmem(struct page *memmap)
 {
 	unsigned long start = (unsigned long)memmap;
 	unsigned long end = (unsigned long)(memmap + PAGES_PER_SECTION);
 
+	memmap_boot_pages_add(-1L * (DIV_ROUND_UP(PAGES_PER_SECTION * sizeof(struct page),
+						  PAGE_SIZE)));
 	vmemmap_free(start, end, NULL);
 }
 
@@ -769,14 +778,10 @@ static void section_deactivate(unsigned
 	 * The memmap of early sections is always fully populated. See
 	 * section_activate() and pfn_valid() .
 	 */
-	if (!section_is_early) {
-		memmap_pages_add(-1L * (DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE)));
+	if (!section_is_early)
 		depopulate_section_memmap(pfn, nr_pages, altmap);
-	} else if (memmap) {
-		memmap_boot_pages_add(-1L * (DIV_ROUND_UP(nr_pages * sizeof(struct page),
-						    PAGE_SIZE)));
+	else if (memmap)
 		free_map_bootmem(memmap);
-	}
 
 	if (empty)
 		ms->section_mem_map = (unsigned long)NULL;
@@ -821,7 +826,6 @@ static struct page * __meminit section_a
 		section_deactivate(pfn, nr_pages, altmap);
 		return ERR_PTR(-ENOMEM);
 	}
-	memmap_pages_add(DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE));
 
 	return memmap;
 }
_

Patches currently in -mm which might be from songmuchun@bytedance.com are

mm-memory_hotplug-fix-memory-block-reference-leak-on-remove.patch
drivers-base-memory-fix-memory-block-reference-leak-in-poison-accounting.patch
mm-sparse-remove-sparse-buffer-pre-allocation-mechanism.patch
mm-sparse-vmemmap-fix-vmemmap-accounting-underflow.patch
mm-memory_hotplug-fix-incorrect-altmap-passing-in-error-path.patch
mm-sparse-vmemmap-pass-pgmap-argument-to-memory-deactivation-paths.patch
mm-sparse-vmemmap-fix-dax-vmemmap-accounting-with-optimization.patch
mm-mm_init-fix-pageblock-migratetype-for-zone_device-compound-pages.patch
mm-mm_init-fix-uninitialized-struct-pages-for-zone_device.patch